IBM's AI can beat humans in an argument
On Monday, a quarrelsome AI from IBM matched wits with a pair of human debaters in San Francisco in an impressive showcase of technology known as "computational argumentation."
Why it matters: By quickly synthesizing persuasive arguments from a trove of source material, IBM's remarkably conversant debater can "help broaden minds with unbiased debate," said Arvind Krishna, IBM's director of research. It could even be used to combat fake news by "asking critical questions of news," according to Noam Slonim, a technical staff member at IBM's Haifa Research Laboratory in Israel.
But, but, but: To construct its arguments, the computer dips into hundreds of millions of articles from newspapers and academic journals. It's not able to determine the veracity of what it reads, so it has to trust that its source material is accurate.
The details: IBM's Project Debater sparred with two world-class human debaters in front of an audience, which later ranked each debater's performance. In one matchup, the computer argued eloquently for government subsidies for space exploration, contending that it will "expand our collective sense of humanity's place in the universe." In the second, it offered statistics to argue for expanding telemedicine — and at one point stopped just short of calling its human opponent a liar.
The score: Based on voting, the first debate was a wash. But in the second, the computer changed the minds of nine undecided audience members, while its human opponent didn't change any. It even cracked some self-deprecating jokes about its artificial nature along the way.
- The good: Project Debater got consistently high marks from the audience for thoughtful arguments that were packed full of facts and quotations. It structured its points with clarity, and understood its opponent's speeches accurately enough to rebut them point by point.
- The bad: In a possibly callous slip-up, Project Debater deemed space exploration more important than better health care. It also displayed a clueless streak when it repeatedly urged its human listeners not to "be afraid" of new technology. Slonim remarked that for all its debating prowess, the system still has no tact.
The big question: Will this technology help AI explain its reasoning? Opaque algorithms that offer data-driven outputs without supporting evidence are increasingly under fire for being error-prone or even unethical. If AI programs ever get to the point where they can present evidence of how they reach their decisions, something like Project Debater could serve as their interpreter — and nudge the field toward much-needed transparency.