Apr 19, 2024 - Technology

Exclusive: Google finds AI agents pose fresh ethical challenges

Illustration: Allie Carl/Axios

Giving more autonomy to AI-powered assistants offers tantalizing benefits — along with fresh ethical dilemmas we're only beginning to explore, Google DeepMind researchers say in a new paper, shared first with Axios.

Why it matters: Advanced AI agents that act as assistants, advisers and companions could be the next iteration of AI that people encounter on a daily basis.

  • The tools currently being built — but generally not yet deployed — could book flights, manage calendars, provide information and perform other tasks.
  • These advanced AI agents may ultimately interact with each other, too.
  • They could "radically alter the nature of work, education and creative pursuits as well as how we communicate, coordinate and negotiate with one another, ultimately influencing who we want to be and to become," DeepMind researchers write in their paper.

How it works: The researchers define AI assistants as "artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user — across one or more domains — in line with the user's expectations."

  • Working on someone's behalf requires representing their values and interests, as well as adhering to broader societal norms and standards, the authors write.

Autonomous action also carries a greater risk of accidents and of spreading misinformation, and the DeepMind team argues these agents require limits.

  • As AI agents become more human-like and personalized, they become more helpful, but they also make people "vulnerable to inappropriate influence," the authors write. That introduces new issues around trust, privacy and anthropomorphizing AI.

Zoom in: Advice-giving AI agents would need to know a great deal about someone to dish out advice that's actually good.

  • One pitfall is that an agent could give someone advice they like rather than advice that's good, says Iason Gabriel, a research scientist in the ethics research team at DeepMind and co-author of the paper.
  • "That leads to the much deeper question, which is, 'How do you know what is good for a person?'" he says.

Between the lines: Technologists talk a lot about the importance of alignment, or how well an AI's goals and behavior match the preferences of the people who use it.

  • The DeepMind researchers propose an updated, four-way concept of alignment for AI agents that considers the AI assistant itself, the user, the developer and society.
  • An AI assistant is misaligned when it disproportionately favors one of these participants over another.
  • For example, an AI could be misaligned if it pursues its own goals at the expense of the user or society, or if it's designed to disproportionately benefit the company that makes it. (A toy sketch of this test follows below.)
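
To make that four-way test concrete, here is a minimal, hypothetical sketch in Python of how "disproportionate favoring" could be checked if each party's benefit from an action could be scored numerically. The paper proposes no such code; the stakeholder names, the function and the scoring scheme are illustrative assumptions, not DeepMind's method.

    # Hypothetical sketch of the four-way alignment framing described above.
    # Nothing here comes from the DeepMind paper; names and scores are invented.
    STAKEHOLDERS = ("assistant", "user", "developer", "society")

    def is_misaligned(benefits: dict[str, float], tolerance: float = 0.5) -> bool:
        """Flag an action as misaligned when it disproportionately favors
        one stakeholder over another. How such benefits would actually be
        measured is an open question the researchers raise, not answer."""
        scores = [benefits[s] for s in STAKEHOLDERS]
        return max(scores) - min(scores) > tolerance

    # An action designed mostly to benefit the company behind the assistant:
    action = {"assistant": 0.1, "user": 0.2, "developer": 0.9, "society": 0.1}
    print(is_misaligned(action))  # True: the developer is disproportionately favored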

The intrigue: If AI agents are widely used, they are going to encounter one another, raising questions about how they can cooperate and coordinate — as well as what happens when they conflict.

  • "If they just pursue their users' interests in a competitive or chaotic manner, clearly that could lead to coordination failures," Gabriel says.
  • On the plus side, the researchers say AI assistants could help make it easier to access public services or increase productivity.
  • But they could also deepen inequalities and determine "which people are able to do what, at what time and in what order."

The bottom line: "This is a research frontier and a kind of moral horizon that we need to investigate," Gabriel says.
