When bad AI happens to good people
The artificial intelligence revolution is fundamentally different from past big tech cycles, leading researchers say: unlike almost any other major invention in history, AI will allow ordinary malefactors to easily do extraordinarily bad things.
What they're saying: For a couple of years now, high-profile technologists and scientists have sounded alarms about the potential for superhuman AI to inflict harm. But this alert, raised in a new paper, is different: it warns about AI's evolution short of human-level intelligence, what the field calls "artificial general intelligence."
The bottom line: Developing AI is hard, says Jack Clark, one of the paper's 26 authors, but the field will create easy-to-use software that bad actors can exploit cheaply and without technical expertise. "It could create new threats and make existing threats more severe," he tells Axios.

Nuclear weapons are an example of a superlatively dangerous invention that requires serious expertise to develop and use. With off-the-shelf AI, no such ability will be needed. Among the possible threats, he says:
- A robotic platform like a cleaning robot that delivers explosives;
- Automated propaganda customized to harm someone in specific ways;
- Much, much worse cyber attacks.
One potential hazard: Trust in society, already at a low point, could fray further.

The paper, the result of a year of work led by Miles Brundage, an AI researcher at the Future of Humanity Institute at Oxford University, proposes how the AI community and policymakers might move forward. The findings are summarized in a blog post here.