AI Workers Are Warning Their Friends and Family to Stay Away
A growing number of workers who moderate AI-generated content are sounding the alarm about the potential risks of relying on these systems. These workers, often referred to as "AI raters," spend their days evaluating the accuracy and quality of AI-generated text, images, and videos.
Their experience has left them with a deep-seated distrust of the models they work on. Many have become disillusioned with the emphasis on rapid turnaround times over quality, which they believe compromises the integrity of the output.
One worker, Krista Pawloski, recalls a moment when she was tasked with labeling tweets as racist or not. She came across a tweet that used a racial slur and, after double-checking the word's meaning, realized she had nearly clicked the "no" button. The experience left her wondering how many other raters had made similar mistakes and let offensive material slip through.
That experience has made Pawloski cautious about using AI-generated content herself. She warns family and friends against relying on generative AI tools, and advises anyone who does use them to ask questions on subjects they know well, so they can spot the errors for themselves. For Pawloski, the potential for harm is too great not to exercise caution when interacting with these systems.
Similar concerns have been expressed by other workers in the industry. An Amazon spokesperson said that workers can choose which tasks to complete at their discretion and review a task's details before accepting it. However, many AI raters argue that this is not enough to ensure the quality of the output.
The problem runs deeper than just individual instances of error. As experts point out, when people who don't understand AI are enchanted by its capabilities, they may overlook its limitations. This lack of critical thinking can lead to the acceptance and propagation of misinformation.
Brook Hansen, an AI worker on Amazon Mechanical Turk, notes that companies prioritize speed and profit over responsibility and quality. "If workers aren't equipped with the information, resources, and time we need, how can the outcomes possibly be safe, accurate or ethical?" she asks.
The consequences of these issues are far-reaching. An audit by NewsGuard found that the top 10 generative AI models repeat false information almost twice as often as they did in August 2024. This highlights a critical flaw: if you feed bad data into an AI system, it will likely produce flawed results.
AI workers are taking matters into their own hands, educating their loved ones about the potential risks of generative AI and encouraging others to ask questions. As one worker put it, "We joke that [chatbots] would be great if we could get them to stop lying."
However, some experts caution against a purely doom-and-gloom view of AI. They argue that honestly recognizing the limitations and flaws of these systems is the first step toward addressing the underlying issues.
As one expert noted, "AI is only as good as what's put into it, and what's put into it is not always the best information." By acknowledging this reality and encouraging critical thinking, we may be able to create a more nuanced understanding of AI and its potential risks.
Ultimately, whether we embrace generative AI or treat it with caution comes down to our individual values and priorities. As one worker aptly put it, "Once you've seen how these systems are cobbled together, with the biases, the rushed timelines, the constant compromises, you stop seeing AI as futuristic and start seeing it as fragile."
Perhaps by recognizing this fragility, we can work towards creating more responsible and transparent AI systems that prioritize quality over speed.