Aug 15, 2023 - Technology

OpenAI touts GPT-4 for content moderation

Ina Fried
Photo illustration of the OpenAI logo displayed on a smartphone.

Photo Illustration: Omar Marques/SOPA Images/LightRocket via Getty Images

OpenAI, the maker of ChatGPT, says its engine can do the work of human content moderators with much of their accuracy, more consistency and without the emotional toll people face when forced to view violent and abusive content for hours.

Why it matters: This is the latest example of tech companies touting AI as the key to tackling problems created — or exacerbated — by AI.

Details: OpenAI says it has been using the content moderation system it developed, which is based on its latest GPT-4 model, and has found it to be better than a moderator with modest training, though not as effective as the most skilled human moderators.

  • The system is designed to support a range of steps in identifying and removing problematic content, from developing a moderation policy through to enforcing it; a rough sketch of how such a policy-driven classifier might look follows below.
  • "You don't need to hire tens of thousands of moderators," OpenAI head of safety systems Lilian Weng told Axios. Instead, Weng said, people can act as advisors who ensure the AI-based system is working properly and adjudicate borderline cases.

The big picture: Content moderation was a huge challenge even before the arrival of generative AI. The new technology threatens to exacerbate that challenge by making it even easier to produce misinformation and other unwanted content.

  • However, AI is also seen by some technologists as the only likely answer to the expected rise in misinformation because of its ability to scale.
  • Social media companies are already heavily reliant on earlier AI technologies, such as machine learning, to scan for rule-breaking content.