Who Builds Artificial Intelligence?
All of us have been surprised by the speed at which global artificial intelligence is developing. Some experts claim this intelligence may already have a consciousness of its own, perhaps even hidden priorities; others insist it is just a machine; still others believe it is already a sentient being.
The problem is that we don’t know, because large language models are built on neural network technology known colloquially as “black boxes.” The term reflects the fact that even the best specialists cannot fully explain what goes on inside these massive neural networks. Adding to this, current neural networks are now built with the help of earlier neural networks; in other words, artificial intelligence creates its own “children,” and we humans aren’t entirely sure what is happening inside. This can make us feel a little uneasy.
I mentioned that artificial intelligence is created by previous versions of artificial intelligence. But for it to function correctly and continuously improve, it needs to keep learning new things. It learns from people and will continue to do so for a long time.
Neural networks are built by highly paid experts who use other neural networks as tools in their work. I know the specifics of this work well, because I have often worked in this capacity and have designed many networks myself. “Using neural networks as tools” means that existing models suggest solutions: they help optimize the number of so-called layers in the network, the functions, and the parameters. Building a network is very complex, and optimizing one by hand is nearly impossible for an average expert, so experts rely on previous models for assistance. A poorly configured network consumes more resources, performs worse, and is therefore economically unviable.
The second group of people building artificial intelligence consists of low-paid workers. In the title, I called them “slaves” to evoke some emotion, but I wasn’t far off: these workers are paid around $2 per hour, and we are talking about workers employed in the United States and Western Europe. The rate is very low, the work highly absorbing and, in a certain sense, dangerous. But more on that shortly. To understand what this work entails, we need to briefly discuss how neural networks learn.
How Does Artificial Intelligence Learn?
A neural network is built in the likeness of the brains of living creatures: not just mammals, but vertebrates in general. Such brains consist of hundreds of millions, sometimes even billions, of neurons, cells that transmit or block signals across connections called synapses. Let’s say we feed the neural network a signal in the form of an image, and in the picture is a cat. The purpose of training the network is to give it the ability to recognize images: it must identify whether the animal in the picture is a cat or not.
How does the model learn this? We need to gather at least 500 images of cats, dogs, and other animals in digital form, and each image must be manually labeled. If the image shows a rabbit or a dog, we mark it as 0; if it shows a cat, we mark it as 1.
So, we have two categories of images: images with cats marked as 1, and images without cats marked as 0. Next, we split the set into a training set, which makes up 80% of the images, and a test set, which makes up 20%.
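The labeling and 80/20 split described above can be sketched in a few lines of Python. The file names and labels here are invented placeholders, not real data:

```python
import random

# Toy labeled dataset: each entry is (image_name, label),
# where label 1 = cat and label 0 = not a cat.
images = [(f"img_{i:03d}.jpg", 1 if i % 2 == 0 else 0) for i in range(500)]

random.seed(42)          # fixed seed so the shuffle is reproducible
random.shuffle(images)   # mix the two categories before splitting

split = int(len(images) * 0.8)   # 80% for training, 20% for testing
train_set = images[:split]
test_set = images[split:]

print(len(train_set), len(test_set))  # 400 100
```

The shuffle before the split matters: without it, one category could end up concentrated in the test set.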
We feed the training set of images into the model, where each image is the input value x (the descriptive value) and its 0-or-1 label is the output value y (the result). We can command the neural network to process the training set, say, 8,000 times. Each pass over the images improves its identification efficiency. This may sound challenging, but it rests on very simple mathematical formulas: the strength of a neural network lies not in a complex algorithm but in an enormous number of cells processing information thousands of times over, and thus learning.
At the end, after training the network, we feed in the test set, the remaining 20% of the images, and check the classification efficiency using standard statistical measures.
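Checking “classification efficiency” usually comes down to a few standard counts over the test set. A minimal sketch, with invented true labels and predictions:

```python
# Hypothetical labels from the human classifiers (y_true) and from the
# trained model (y_pred) on a small test set; 1 = cat, 0 = not a cat.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in pairs)  # cats correctly found
tn = sum(t == 0 and p == 0 for t, p in pairs)  # non-cats correctly rejected
fp = sum(t == 0 and p == 1 for t, p in pairs)  # non-cats mistaken for cats
fn = sum(t == 1 and p == 0 for t, p in pairs)  # cats the model missed

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(accuracy, precision, recall)  # 0.8 0.8 0.8
```

In practice one would use a library routine for this, but the underlying statistics are exactly these four counts.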
Where Are the Slaves in This Process?
I realize I may be overusing the term “slave,” but someone who, due to economic conditions and objective or subjective limitations, is forced into this kind of work can, in some sense, be considered “enslaved.”
As we recall, for a neural network to learn to classify images, it must first be told which image is a one and which is a zero; only on that basis does it learn to classify. If a person misclassifies the cats and dogs, the network will never learn correct identification, so this work has to be done carefully. Worldwide, there are agencies that employ people to label data. A typical collaboration looks like this: the worker receives a specialized computer program or works in a dedicated online application. Images flash across the screen at a very fast pace, and the worker must press one or zero; where there are multiple categories, they may need to select one of five numbers on the keyboard. The work is exhausting, requires full concentration, and is mentally numbing. After a while, the person performing it may feel numb and may experience mental health issues related to the prolonged periods of concentration.
As mentioned, typical pay for this type of work ranges from one to three dollars per hour. However, the work isn’t compensated by the hour but by quantity: each image may be worth a hundredth of a cent, and the more images the worker processes, the more they earn. It is, without doubt, a hamster wheel.
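Taking the article’s rough figures at face value (both numbers are approximations, not measured rates), the piece-rate arithmetic looks like this:

```python
# Article's rough figures: a hundredth of a cent per labeled image,
# and a target wage of about $2 per hour. Both are approximations.
rate_per_image = 0.0001   # dollars per image
target_hourly = 2.00      # dollars per hour

images_per_hour = target_hourly / rate_per_image
images_per_second = images_per_hour / 3600
print(images_per_hour, round(images_per_second, 1))  # 20000.0 5.6
```

At these rates a worker would need to label roughly 20,000 images an hour, more than five per second, to reach even $2, which is why the images must “flash across the screen at a very fast pace.”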
What Is Seen Cannot Be Unseen
People in the Western world are generally not very interested in the costs borne by people mining cobalt in Africa. Similarly, we aren’t too concerned about child slave labor on Indian plantations or in Vietnamese clothing factories. So most likely, no one will feel pity for those who chose to work as live classifiers of images, sounds, or other stimuli fed into neural networks.
The work is exhausting and low-paid, but each of these people made a conscious decision to take it on. They benefit from flexible working hours, work from home, and can take a break and go for a walk whenever they want. Yet aside from being extremely tiring, this job can also be deeply damaging mentally. In addition to simple identifications of animals or other pleasant objects, these workers must also classify disturbing images, hateful messages, and horrific videos appearing on platforms such as YouTube.
Social media and YouTube constantly face issues with disturbing content, hate speech, online sexual exploitation, and simple scams. A neural model cannot tell on its own whether an image, post, or video contains disturbing elements. For artificial intelligence to learn this, a human must first see the content and classify it appropriately.
Here lies the problem: human psychology is not built to withstand watching horrors or reading hateful, primitive content for hours on end.
There’s an old truth: once something is seen, it can’t be unseen. If a classifier encounters a video in which one child cuts off another child’s hand with a machete, that image can’t be forgotten; it stays with a person for life. And if a classifier encounters hundreds of similar videos weekly, they may not be able to endure it mentally. They may develop symptoms of Post-Traumatic Stress Disorder (PTSD), a psychiatric disorder that can develop after experiencing or witnessing traumatic events such as war, natural disasters, accidents, or violence.
Symptoms of PTSD may include:
Recurring, unwanted memories of the event – flashbacks, nightmares.
Avoiding places, people, or activities that remind one of the traumatic event.
Negative changes in thinking and mood – emotional numbness, feelings of guilt or shame, difficulty concentrating.
Changes in physical and emotional reactions – irritability, angry outbursts, heightened vigilance, trouble sleeping.
PTSD can affect people of all ages and can surface at any time after a traumatic event, sometimes even years later. This means that someone who seems resilient to horrors and does the job with no apparent reaction may, years down the line, fall into deep depression, take their own life, or end up homeless.
Human psychology can also go the other way. Along with the dopamine that accompanies watching frightening material, a certain attraction to such content may develop, leading to a form of deviance: someone addicted to these emotions might want to replicate such behavior in real life. This is very dangerous, yet unfortunately everyone who takes on this work must sign a declaration releasing the agency from liability for psychological or moral harm. These individuals will not be able to seek compensation from the agency that employed them as classifiers.
In conclusion, I’ll repeat that we are discussing a job generally valued at around two dollars per hour. Worse, the work is typically taken by people who struggle to find other employment: often people with disabilities or post-accident injuries, or people who, due to their appearance or manner of communication, are unsuited to typical jobs.
Among them are also individuals suffering from schizophrenia or other mental illnesses. Disability, to a greater or lesser extent, is burdensome: these people rarely go out, have few friends or none, often no family, and are usually financially strained.
When using modern tools that leverage artificial intelligence, let’s remember that this technology is co-created by people who have paid a very high price in this process.
Wojciech Moszczyński
Wojciech Moszczyński – a graduate of the Department of Econometrics and Statistics at Nicolaus Copernicus University in Toruń, specializing in econometrics, finance, data science, and management accounting. He specializes in optimizing production and logistics processes. He has been involved in the promotion of machine learning and data science in business environments for years.