What ChatGPT can't do
Impressive as ChatGPT is, its current version has some severe limitations, as even its creators acknowledge.
The big picture: The AI tool can put together answers to a lot of questions, but it doesn't actually "know" anything — which means it has no yardstick for assessing accuracy, and it stumbles over matters of common sense as well as paradoxes and ambiguities.
- OpenAI notes that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers ... is often excessively verbose ... [and] will sometimes respond to harmful instructions or exhibit biased behavior."
Details: ChatGPT can't distinguish fact from fiction. Humans have trouble with this too, of course, but at least they understand what those categories are.
- As a result, it confidently asserts obvious inaccuracies, like "it takes 9 women 1 month to make a baby."
- It "hallucinates" — that is, makes stuff up — at a rate one expert pegs at 15%–20%.
- It can't tell us where its information comes from, which makes its reliability hard to assess.
- Its information is outdated. Today ChatGPT's knowledge of the world ends sometime in 2021, though this is probably one of the easier problems to fix.
It tries not to provide biased, hateful or malicious responses, but users have been able to defeat its guardrails.
- According to Time, OpenAI used Kenyan workers to label violent or explicit content, including child sexual abuse material, so ChatGPT could learn not to repeat such content.
It can't intuit what users really want from it, so small differences in how a question is phrased can produce widely different answers.
What's next: Although OpenAI and other AI companies will keep pushing to improve accuracy, reduce bias and eliminate other problems, no one knows whether the technology's drawbacks can be overcome — or whether ChatGPT and its successors might ever become truly dependable.
Yes, but: It doesn't look like anyone in the industry will let that stop them from widely deploying the technology.