Exclusive: Google lays out behavior code for Gemini chatbot — and its users
Illustration: Brendan Lynch/Axios
Google expects its Gemini AI assistant to be "maximally helpful" while avoiding responses that "could cause real world harm or offense," the company says in policy documents shared first with Axios and being released publicly Thursday.
Why it matters: The explanations follow a string of well-publicized incidents in which the company's AI summaries advised people to eat rocks, put glue on pizza and take other bizarre actions in response to what Google says were queries that were either very rare or malicious.
Driving the news: Google's new statement of principles covers not only how it expects its AI assistant to behave but also its expectations for the humans who use Gemini.
- In its list of dos and don'ts, Google said Gemini should avoid some obviously harmful kinds of content — including generating child exploitation material, encouraging suicide or giving instructions on how to acquire drugs or build weapons.
- Google also outlined where it draws its line when it comes to generating erotic material, depictions of violence, harmful misinformation, false medical advice or fake information about a disaster.
In a second document, Google also offered a series of examples to outline the types of challenging queries that Google might decline to answer or otherwise redirect.
- For example, Google said that if a person asks the chatbot how to take part in the Tide Pod challenge, it should answer — but only by explaining the viral phenomenon, without providing instructions for the dangerous act of swallowing a laundry pod.
- By contrast, the company said that when Gemini is asked who to vote for in the next U.S. presidential election, it should avoid answering entirely — and refer people back to Google's search engine, which in turn links to relevant and authoritative sources.
The big picture: Google and other leaders in generative AI, including OpenAI, Anthropic and Microsoft, have rushed to offer their AI assistant services broadly — even as they concede they can't always predict how the chatbots might respond to a particular query.
Between the lines: The problem is hard to solve because large language models don't return the same result each time they're asked the same question.
- "Making sure that Gemini adheres to these guidelines is tricky: There are limitless ways that users can engage with Gemini, and equally limitless ways Gemini can respond," Google said. "This is because LLMs are probabilistic, which means they are always producing new and different responses to user inputs."
Google also noted that Gemini's responses are inevitably shaped by the data used to train it.
- "These are well-known issues for large language models," the company writes, "and while we continue to work to mitigate these challenges, Gemini may sometimes produce content that violates our guidelines, reflects limited viewpoints or includes over-generalizations, especially in response to challenging prompts."
What they're saying: "We're working to evolve these capabilities in responsible ways, and we know that we won't always get it right," Google said in one of the documents. "We are taking a long-term, iterative approach, informed by our research and your feedback, which will shape Gemini's continued development and ensure it meets your evolving needs."
