Axios AI+


February 21, 2024

Ina here, reporting from a rather soggy San Francisco Bay Area. Today's AI+ is 1,124 words, a 4-minute read.

1 big thing: UN official says tech needs to do no harm


Volker Türk, the UN's high commissioner for human rights, was in Silicon Valley last week to deliver a simple message to tech companies: Your products can do real harm and it's your job to make sure that they don't.

Why it matters: Technologies like artificial intelligence hold enormous potential for addressing a range of societal ills, but without effort and intent, these same technologies can act as powerful weapons of oppression, Türk tells Axios in an interview.

New regulations are often where the tech debate lands, but Türk says tech firms should already be ensuring their products comply with the existing UN Guiding Principles on Business and Human Rights.

  • "You have already existing obligations and you need to apply them," Türk says, addressing the companies.

That UN document, unanimously approved in 2011, states that "business enterprises have the responsibility to respect human rights wherever they operate and whatever their size or industry."

  • It further notes that responsibility "means companies must know their actual or potential impacts" and "prevent and mitigate abuses."

Yes, but: The guiding principles are non-binding.

  • A years-long process to turn them into enforceable international law has yet to come to fruition.
  • And even enforceable international law has proven difficult to make stick in a world where nation-states retain sovereign freedom of action.

What's happening: Türk met with OpenAI, Meta and Google and also spoke at events, including at Stanford and Berkeley, where representatives from companies like Microsoft, Apple, Cisco, Snap and Anthropic were in attendance.

Between the lines: Türk emphasized that Silicon Valley's lack of diversity and global perspectives hampers the industry. Something that seems positive to a small group of people in San Francisco, he said, may feel far different in other parts of the world.

  • "Very often it's done by people who may not know the consequences of what they're developing," Türk said.
  • "It's done by people with their own biases, with their own prejudices. It's done by people who have no global view of the world," he said.

The big picture: Türk likens the current AI moment to a different sort of artificial intelligence envisioned by Goethe in "Faust."

  • In that classic, Homunculus, an artificial being in a vial intended by its creator to represent the best of Enlightenment-era knowledge, instead ends up encompassing the full range of humanity's traits, including its faults.
  • "This is where we are at the moment," Türk says of today's AI systems. "It's not separate from who we are — our worst fears, our desires, our best aspirations."
  • What's more, AI systems won't necessarily reflect all of humanity, given they will likely be shaped by the values and world views of their creators.

Zoom in: Elections represent a specific threat, with 4 billion people around the world set to go to the polls this year.

  • While such concerns predate generative AI, Türk says the new technology allows for misinformation to spread at a "mind-boggling" pace and scale.
  • "I'm extremely concerned about the combination of social media platforms and generative AI, about the way they could whip up emotions," Türk says.

He warns that AI could be used to scapegoat already marginalized groups, including immigrants and members of the LGBTQ+ community.

  • "If you whip up fears...if you create images that actually make people very afraid, and where you essentially manipulate the realities, then yes, we have a toxic mix," he said.
  • As to whether the tech companies are taking things seriously, Türk says, "The issue of elections is on their mind. I don't have the data to know whether it's enough."

The other side: Türk said what gives him hope is what he calls the "silent majority": people who, he says, "deeply care" about human rights, values and dignity.

  • But that group, he says, needs to speak up. "I wish that the silent majority became a bit louder and were not silent, but actually, you know, overcome their fears, overcome the divisions and stand up for human rights."

2. Google's newest AI model is very small


Google today released Gemma, a range of "lightweight" open AI models designed for text generation and other language tasks, Ryan reports.

Why it matters: Google is betting on the substantial market of developers who don't need or can't afford to use the biggest AI models like Gemini.

Details: Google is releasing the models in two sizes, 2 billion and 7 billion parameters, and both can run on a laptop.

  • The models come with a "Responsible Generative AI Toolkit," which Google says will help developers build their own safety filters for Gemma models, and a "debugging tool" to help developers investigate Gemma's behavior and address potential issues.
  • Gemma is optimized for Nvidia GPUs, offering the ability to fine-tune the models locally.
  • Access to Gemma is free via Kaggle, Google's platform for data scientists, in an effort to encourage transparency about how the models are used and to offer "large scale community validation" of its safety efforts. (A minimal loading sketch follows this list.)
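For developers who want to kick the tires, here is a minimal sketch of running the smaller Gemma model locally. It assumes the Hugging Face transformers library and the "google/gemma-2b-it" checkpoint name, neither of which comes from Google's announcement, and downloading the weights requires accepting Google's terms of use.

# Minimal sketch (assumptions noted): load the 2B instruction-tuned Gemma
# checkpoint with Hugging Face transformers and generate a short reply.
# The checkpoint is gated, so accept Google's terms and authenticate
# (e.g. `huggingface-cli login`) before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed checkpoint name, not from the announcement

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence why smaller language models matter."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))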

The big picture: Instead of chasing an excitement factor or a consumer market, Google is seeding an enterprise market — one that may end up paying big dollars to use Google Cloud, as developers invent new consumer applications to run on Gemma.

What they're saying: "Things that previously would have been the remit of extremely large models are now possible with state-of-the-art smaller models and this unlocks completely new ways of developing AI applications," says Tris Warkentin, a director at Google DeepMind.

Yes, but: Microsoft has also invested in the market for smaller models, via its Phi range.

  • While Google emphasized Gemma's "strict terms" of responsible use, it's placing no limits on which organizations may use the models, creating the risk that malicious actors could repurpose them for unintended uses.

Go deeper: How competition between big and small AI will shape the tech's future

3. Training data

  • On tap: Nvidia reports earnings, while Intel is holding an event in San Jose to give an update on its effort to become a foundry for other chip firms.
  • Walmart is buying TV maker Vizio as it looks to boost its advertising business. (CNBC)
  • Anthropic raised $7.3 billion in the past year on $8 million in monthly revenue. (New York Times)
  • Ohio released a new AI toolkit to encourage schools to use ChatGPT and other generative AI, while stressing that the technology can't replace teachers. (Axios Columbus)
  • Will Smith shares a non-AI-generated video of himself eating spaghetti. (Ars Technica)
  • Encrypted messaging app Signal will now allow usernames so you don't have to share your phone number with contacts. (Wired)
  • Meanwhile, Apple says iMessage's new quantum-proof encryption is stronger than anyone else's, including Signal. (Axios)
  • Rob Joyce, the cybersecurity director of the National Security Agency, plans to step down in March. (The Record)

4. + This

Want to spend a year on Mars, but don't want to leave Earth? NASA has a job for you.

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.