Chatbot's doomsday scenario for truth
The world's response to the oracular artificial intelligence program called ChatGPT started with chuckles but has quickly moved on to shivers.
What's happening: Trained on vast troves of online text, OpenAI's chatbot remixes those words into often-persuasive imitations of human expression and even style.
- In the days after OpenAI put ChatGPT out for free use, more than a million users began finding ways to have fun with it.
Yes, but: A growing chorus of experts believes it's too good at passing for human. Its capacity for generating endless quantities of authentic-seeming text, critics fear, will trigger a trust meltdown.
- A shakeup in the online information business, along with a likely flood of misinformation and spam, is just the start of the impact.
Why it matters: ChatGPT's ability to blur the line between human and machine authorship could wreak overnight havoc with norms across many disciplines, as people hand over the hard work of composing their thoughts to AI tools.
- High school and college instructors have long had to battle plagiarism and ghost-written term papers. But ChatGPT — and the likelihood that it will be followed by even more advanced AI — threatens to make this problem exponentially harder.
Education is where ChatGPT's disruption will land first, but any discipline or business built on foundations of text is in the blast radius.
- Think law, entertainment, science, history, media.
- The exact same set of concerns applies in the world of images, thanks to the parallel rise of image-generating AI programs.
What they're saying: "Shame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society," Paul Kedrosky, a venture investor and longtime internet analyst, wrote on Twitter earlier this month. "A virus has been released into the wild with no concern for the consequences."
The intrigue: AI companies, including OpenAI, are working on schemes that could watermark machine-generated texts.
- Venture capitalist Fred Wilson foresees the use of cryptographic signatures to verify a document's origins and history.
- For now, though, these remedies are just ideas, while ChatGPT is already up and running.
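The cryptographic-signature idea Wilson describes can be sketched in a few lines. This is an illustrative toy, not any company's actual scheme: real provenance proposals would use public-key signatures (e.g., Ed25519) so anyone can verify without holding the key; here Python's standard-library HMAC with a hypothetical shared secret stands in for brevity.

```python
# Toy sketch of document provenance: a publisher tags text with a
# keyed digest; any later edit to the text invalidates the tag.
# SECRET_KEY is hypothetical key material, not a real credential.
import hmac
import hashlib

SECRET_KEY = b"publisher-private-key"

def sign_document(text: str) -> str:
    """Return a hex tag binding the text to the key holder."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_document(text: str, tag: str) -> bool:
    """Check the text was tagged by the key holder and is unmodified."""
    return hmac.compare_digest(sign_document(text), tag)

doc = "This article was written by a human author."
tag = sign_document(doc)
assert verify_document(doc, tag)            # authentic copy verifies
assert not verify_document(doc + "!", tag)  # any edit breaks the tag
```

The point of the sketch is the asymmetry critics worry about: verification is easy once signing infrastructure exists, but until publishers and platforms adopt it, unsigned AI-generated text is indistinguishable from the rest.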
The big picture: The intense online debate over ChatGPT among technologists, investors and critics has surfaced a range of warnings about its failings.
Accuracy: ChatGPT's conversational fluency masks its inability to distinguish between fact and fiction.
- "It often looks like an undergraduate confidently answering a question for which it didn’t attend any lectures," as tech analyst Benedict Evans wrote.
Bias: OpenAI has tried to limit the potential for ChatGPT to say things that are blatantly offensive or discriminatory, but users have found many holes in its restraints. (That's likely what OpenAI wanted to happen in this public trial so it could improve the product.)
- Generative AIs like ChatGPT learn from the patterns of the texts they ingest, and the corpus of human expression is full of humanity's failings.
- It's on AI-makers' shoulders to scrub the data they feed their algorithms and limit potential harms to society, but much of the industry has chosen haste and risk over caution.
Control: Large-scale machine learning-based AI provides output without explanation: Programmers know what they fed the program, but not why it arrived at a particular answer.
- That leaves some critics fearful the programs could evolve in dangerous directions that their authors can't predict, understand or defend against.
- "Perhaps it is a bad thing that the world’s leading AI companies cannot control their AIs," essayist Scott Alexander wrote last week.
The other side: Historically, previous waves of automation — like the Industrial Revolution — triggered eras of instability but left society intact.
- With thoughtful deployment, ChatGPT-like tools could end up freeing us from drudgery without undermining genuine learning.
- AI could help students with writing the way calculators help them with math, as sociologist and New York Times columnist Zeynep Tufekci suggests.
Our thought bubble: Writing is hard! The more writing AI does for us, the fewer of us will practice the skill.
- That could set off a downward spiral in our collective capacity to expand knowledge, with a dwindling supply of new human creations available to train the next AI.
- Worst case: Humanity gets stuck in an AI-plowed rut.