OpenAI Paid Workers as Little as $2.00 per Hour: Time Report

The use of human labor to label data for AI systems has come under scrutiny following a report by Time Magazine about the practices of Sama, a Kenyan company that worked with OpenAI to label data for its AI language model, ChatGPT. According to the report, Sama employees were exposed to disturbing text and imagery, including explicit content, while working on the project.

In response to the report, OpenAI confirmed that it had received 1,400 images from Sama, a batch that reportedly included, but was not limited to, C4, C3, C2, V3, V2, and V1 images. These labels belong to a categorization system used to classify explicit content. C4 is the most severe category, covering images that depict child sexual abuse and other illegal activities. C3 covers images depicting non-consensual sexual activity or sexual violence. C2 covers images depicting sexual acts or nudity without a clear non-consensual or violent context. V3, V2, and V1 refer to progressively less severe categories of violence. These categories are not universally standardized, and their precise definitions can vary by context or industry.
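
To make this taxonomy concrete, here is a minimal sketch of how such a severity scheme might be represented in a labeling pipeline. It is purely illustrative: the category descriptions echo the report, but the ContentCategory enum, the SEVERITY ranking, and the requires_escalation helper are hypothetical assumptions for demonstration, not Sama's or OpenAI's actual tooling.

```python
from enum import Enum

class ContentCategory(Enum):
    """Illustrative taxonomy based on the categories named in the report.
    The descriptions mirror the article; the structure itself is a
    hypothetical sketch, not Sama's or OpenAI's actual labeling schema."""
    C4 = "Child sexual abuse and other illegal content (most severe)"
    C3 = "Non-consensual sexual activity or sexual violence"
    C2 = "Sexual acts or nudity, without a clear non-consensual or violent context"
    V3 = "Most severe violence category"
    V2 = "Intermediate violence category"
    V1 = "Least severe violence category"

# Hypothetical severity ranks (higher = more severe), inferred from the
# article's ordering; real definitions vary by context and industry.
SEVERITY = {"C4": 4, "C3": 3, "C2": 2, "V3": 3, "V2": 2, "V1": 1}

def requires_escalation(label: str, threshold: int = 3) -> bool:
    """Example policy check: flag any label at or above a severity threshold."""
    return SEVERITY[label] >= threshold

for code in ("C4", "C2", "V1"):
    print(code, "-", ContentCategory[code].value,
          "| escalate:", requires_escalation(code))
```

In a real pipeline, a mapping like this might also drive routing decisions, such as which labels trigger extra reviewer safeguards; the article does not describe how Sama or OpenAI actually applied the categories.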

Sama’s decision to end its work with OpenAI spared its employees further exposure to disturbing text and imagery, but it also dealt a blow to their livelihoods. Sama workers say that in late February 2022 they were called into a meeting with members of the company’s human resources team, where they were told the news. “We were told that they [Sama] didn’t want to expose their employees to such [dangerous] content again,” one Sama employee on the text-labeling projects said. “We replied that for us, it was a way to provide for our families.” Most of the roughly three dozen workers were moved onto other, lower-paying workstreams without the $70-per-month explicit content bonus; others lost their jobs. Sama delivered its last batch of labeled data to OpenAI in March, eight months before the contract was due to end.

The report also revealed that Sama performed content moderation for Facebook, work that involved viewing images and videos of executions, rape, and child abuse for as little as $1.50 per hour. That investigation led to the cancellation of Sama’s $3.9 million content moderation contract with Facebook, resulting in the loss of some 200 jobs in Nairobi.

In light of these revelations, experts have raised concerns about the ethical implications of using human labor to label data for AI systems. Andrew Strait, an AI ethicist, recently wrote on Twitter, “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent.” He added, “These are serious, foundational problems that I do not see OpenAI addressing.”

In conclusion, the Time Magazine report highlights the ethical implications of using human labor to label data for AI systems. Sama’s practices and its involvement in content moderation for Facebook raise serious questions about the treatment of workers and the toll such work takes on their mental and emotional well-being. The findings also underscore the need for companies like OpenAI to take a more proactive approach to addressing these issues.

Summary:

  • Time Magazine reported on the practices of Sama, a Kenyan company that worked with OpenAI to label data for its AI language model, ChatGPT.
  • Sama employees were exposed to disturbing text and imagery, including explicit content, while working on the project.
  • OpenAI confirmed that it had received 1,400 images from Sama, but clarified that it did not open or view the content in question and could not confirm if it contained images in the C4 category.
  • Sama’s decision to end its work with OpenAI had a significant impact on the livelihoods of its employees.
  • Sama also performed content moderation for Facebook, viewing images and videos of executions, rape, and child abuse for as little as $1.50 per hour.
  • The investigation led to the cancellation of Sama’s $3.9 million content moderation contract with Facebook, resulting in the loss of some 200 jobs in Nairobi.
  • Experts have raised concerns about the ethical implications of using human labor to label data for AI systems.