A new way to classify powerful AI
Researchers released a new framework outlining possible harms from and benefits of AI, designed for regulators to decide how to move forward on regulation.
Why it matters: Regulators have failed to keep pace with AI advances, sometimes leading to biased or flawed outcomes with limited accountability, on issues ranging from medical misinformation to police surveillance.
The intrigue: The authors of the framework worried that the risk categories in the draft EU AI Act were too rigid and failed to account for AI's benefits — so they set about creating a more flexible framework that could be updated more easily than legislation.
- Microsoft's AI Types of Harm list served as a source of inspiration for the framework.
Details: AI impacts are best addressed sector by sector by existing regulators, according to the framework, which also proposes a registry of evaluated AI use cases and their classifications.
Proposed "harms categories" include physical, emotional or psychological injury; loss of privacy, opportunity, liberty or money; and environmental degradation.
- Practical examples of possible harms include medical misdiagnosis, technology-facilitated violence, loss of anonymity, and discrimination in accessing services such as housing and education.
Proposed "benefit categories" include physical health; emotional or psychological health; access to opportunity; protection of privacy or liberty; and positive environmental impact.
- Practical examples of possible benefits include using AI to minimize the health risks of repetitive movement or of exposure to dangerous working conditions, and to detect disease earlier or more accurately.
What they're saying: "We cannot, nor should we, regulate every AI use case," SCSP senior director Rama Elluru told Axios, adding, "We recognize that not reaping the benefits of AI-enabled technologies is actually a big harm in itself."
- "Our hope is the framework leads to a registry of use cases that can inform industry and be shared with the public," Stephanie Tolbert, Johns Hopkins study lead, said via email.