As tech executives prepare to speak with lawmakers about potential AI regulations this week, they are also being asked about the working conditions of the workers who make their AI products possible. US lawmakers are probing nine tech companies—Microsoft, OpenAI, Anthropic, Meta, Alphabet, Amazon, Inflection AI, Scale AI, and IBM—on the working conditions of data labelers, the human workers tasked with labeling training data and rating chatbot responses to make sure that AI systems are safe and reliable.
“Despite the essential nature of this work, millions of data workers around the world perform these stressful tasks under constant surveillance, with low wages and no benefits,” wrote a group of lawmakers including Senators Edward Markey, Elizabeth Warren, and Bernard Sanders in a letter to the tech executives on Wednesday (Sept. 13). “These conditions not only harm the workers, they also risk the quality of the AI systems—potentially undermining accuracy, introducing bias, and jeopardizing data protection.”
The letter also brings attention to newer AI startups including Inflection AI, Scale AI, and Anthropic, highlighting a who’s who of the companies shaping AI systems today. Tech companies have a responsibility to ensure these workers have safe working conditions, are paid fairly, and are protected from unjust disciplinary proceedings, and they must be more transparent about the role these workers play in AI companies, the lawmakers wrote.
The data-labeling workforce behind ChatGPT and Bard
Tech companies tend to outsource data labeling to staffing firms that hire workers outside of the US, in countries including Kenya.
AI products are used to automate decision-making processes, and the algorithms behind the products must be taught how to “see” things. For instance, a self-driving car algorithm must be able to tell the difference between a pedestrian and a stop sign. The algorithm is trained by data labelers who watch hours of video content and identify the objects in each frame.
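To make that labeling step concrete, here is a minimal, hypothetical sketch in Python of what a single annotated video frame might look like; the field names and object classes are assumptions for illustration, not any company's actual schema.

```python
# Hypothetical per-frame annotation a data labeler might produce for a
# self-driving dataset. All field names and classes are illustrative.

frame_annotation = {
    "video_id": "drive_0042",
    "frame_index": 1280,  # which frame of the video this record describes
    "objects": [
        # each object the labeler drew a box around, with its class
        {"label": "pedestrian", "bbox": [412, 220, 468, 390]},  # x1, y1, x2, y2 in pixels
        {"label": "stop_sign",  "bbox": [880, 105, 940, 170]},
    ],
}

def count_labels(annotations):
    """Tally how many objects of each class were labeled across frames."""
    counts = {}
    for frame in annotations:
        for obj in frame["objects"]:
            counts[obj["label"]] = counts.get(obj["label"], 0) + 1
    return counts

print(count_labels([frame_annotation]))  # {'pedestrian': 1, 'stop_sign': 1}
```

Multiply records like this across every frame of every hour of video and the scale of the manual work behind a single training dataset becomes clear.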
An hour of video takes eight hours to annotate, according to the Financial Times. These workers often endure harsh conditions. To make ChatGPT safer, Kenyan laborers, who earn less than $2 per hour, must label content including sexual abuse, hate speech, and violence, as a Time investigation from earlier this year found.
The employees told Time that they were expected to read and label between 150 and 250 snippets of text, ranging from 100 words to over 1,000 words, per nine-hour shift. The employees reported that they were mentally scarred by the work, and though they were entitled to wellness sessions, they often found the sessions unhelpful. Data labelers are also not afforded the same benefits as the employees of the tech companies commissioning the work.
With generative AI products being released left and right, data labeling doesn’t appear to be going away. The global data annotation and labeling market hit $800 million last year and is expected to keep growing through 2027, according to Markets and Markets, a market research firm. Data-labeling companies say that workers are increasingly specializing in different types of data, such as driving or medical information.
From: quartz
URL: https://qz.com/tech-companies-ai-data-labelers-congress-1850834407