Illustration: Lazaro Gamio/Axios
The advancement of AI-fueled technologies like robotics and self-driving cars is creating a confusing legal landscape that leaves manufacturers, programmers and even robots themselves open to liability, according to legal scholars who study AI.
Why it matters: As autonomous vehicles take to the road and get into collisions, drivers, insurers and manufacturers want to know who — or what — is liable when a harmful mistake occurs. The degree of liability comes down to whether AI is treated as a product, service or a human decision-maker.
Even without an explicit pledge from carmakers to accept blame, it's likely that manufacturers would end up paying if their autonomous cars caused harm. If the offending car were considered a defective product, its maker could be held liable under strict product-liability standards, potentially leading to class-action lawsuits and expensive product recalls — like those Takata faced for its dangerous airbags.
- But if a car's driving software were considered a service, it could be charged with behaving negligently, the way a reckless human driver might be.
- Treating human and AI drivers the same would "level the playing field" and prevent costly product-liability lawsuits, according to Nathan Greenblatt, an IP lawyer at Sidley Austin.
- Yes, but: Exactly who counts as the manufacturer might not be immediately obvious, says John Kingston, a professor who studies AI and law at the University of Brighton. If a self-driving car kills a pedestrian, the car company might be considered the manufacturer, he says — but it could also be a subcontractor who wrote the software, or even a hardware supplier that produced a faulty camera.
- Allianz, the insurance giant, predicted in a recent report that product liability insurance would someday become compulsory in order to protect drivers when they've put their cars in autonomous mode.
Another possibility: Going deeper into the system, the AI itself could be held responsible, according to Gabriel Hallevy, a law professor at Ono Academic College in Israel, who wrote a book about AI and criminal negligence. Even then, its programmer or manufacturer could also be found negligent, or even treated as an accomplice to a crime.
- But it's hard to punish AI if it's found guilty. The simplest sanction would be to decommission the offending robot or program. But there are more creative options: Hallevy suggested that AI found to have broken a law could be shut off for a period of time — the equivalent of a prison sentence — or even be required to perform community service, like cleaning the streets or helping out at the public library.
What's needed: New laws may be in order to deal with errant AI, says Kingston. Many laws, for example, hinge on whether a reasonable person would have acted a certain way. AI, clever as it may be in its own narrow field, doesn't yet have the background knowledge or common sense it would need to emulate a reasonable person's decision-making.
What to expect: The first big AI liability case will likely cause a temporary chill in AI development, says Kingston, as company lawyers scramble to protect their employers. But in the long term, he says, clearer guidelines would be beneficial for AI research and development.