Axios AI+

December 09, 2024
I'm glad to be back in SF, although D.C.'s cold streak finally broke yesterday, making my last day there a gorgeous one. Today's AI+ is 1,155 words, a 4.5-minute read.
Situational awareness: China opened an antitrust investigation of AI chip giant Nvidia, focusing on its 2020 acquisition of Israeli networking firm Mellanox, Bloomberg reports. The move is another sign of deepening U.S.-China trade tensions.
1 big thing: What Anthropic's AI knows about you
Anthropic, whose Claude models are a key rival to OpenAI, takes an "opt-in" approach when it comes to using customer data to train its models.
Why it matters: How AI companies do and don't make use of the information their users provide is taking on even greater importance as Anthropic and its rivals give their chatbots broader access to personal data.
Catch up quick: By and large, AI companies aren't required to disclose where they get the data used to train their models. However, thanks to a number of privacy laws, they do have to say how they use the data their customers provide.
- In this series, Axios is looking at how the key companies in generative AI make use of that data.
Zoom in: Anthropic — founded in 2021 by former OpenAI employees seeking to build AI with more stringent safety measures — says that, by default, it won't use customer information to train its models.
- That policy applies to both consumers and businesses, as well as services built on top of Anthropic's APIs — programming interfaces that allow third-party access to its models.
Yes, but: Anthropic does reserve the right to use prompts and outputs to train its models in certain cases where users grant permission, such as when someone clicks a "thumbs up" or "thumbs down."
- Anthropic notes this not only in its privacy policy but also flags it for users in a dialog box when they give feedback.
Between the lines: As is typical in the industry, Anthropic automatically scans users' prompts and responses to enforce its safety policies, though it does not use that data to train its models.
- The company says it can use those prompts and responses that are flagged to improve its abuse detection systems, though not the models themselves.
Go deeper: Read the other entries in the series.
2. OpenAI, Google veterans launch audio AI startup
One of the co-creators of ChatGPT's Advanced Voice Mode has struck out on his own with the launch of WaveForm, a startup creating an audio AI system capable of capturing more nuance than rival approaches.
Why it matters: While a growing number of chatbots can process voice input, many do so using speech-to-text systems that end up missing important nuance, such as intonation and emotion.
Driving the news: WaveForm, which has raised $40 million in a seed round led by Andreessen Horowitz, is headed by Alexis Conneau, formerly of OpenAI, along with Coralie Lemaitre, who previously worked in product strategy at Google.
- In an interview, Conneau said he is looking to solve the "Speech Turing Test" — that is, to create a system in which users can't tell whether they are talking to a computer or a human being.
- Doing that, he said, will require creating a system that has fuller emotional understanding than the current generation of AI voice technology.
Yes, but: WaveForm, which has just five employees at the moment, is still developing its models, Conneau told Axios.
- He also acknowledged that the type of AI system he is contemplating poses risks, including the potential for users to become overly attached to the AI characters they interact with.
- He said he hopes the industry has learned lessons from the social media era: "I want to believe that we are more prepared than we were, you know, a few years back."
What to watch: Conneau said it's too soon to talk about the specific products WaveForm has in mind, but there should be more details from the company next year.
- He also said WaveForm will seek to prove itself in the consumer space before launching a business-to-business play.
- Education is among the areas that could benefit from the company's technology, but Conneau said to expect a broad range of uses. "I think this kind of technology is inherently horizontal," he said.
3. Scoop: Advanced AI chips cleared for export to UAE
The U.S. government has approved the export of advanced AI chips to a Microsoft-operated facility in the UAE as part of the company's highly scrutinized partnership with Emirati AI firm G42, two sources familiar with the deal told Axios.
Why it matters: The agreement between the tech giants is part of a U.S. effort to elbow China out of the UAE's rapidly expanding tech industry and disperse U.S.-developed AI technology around the world to counter China's Digital Silk Road.
- Lawmakers have raised concerns over the partnership, saying there is a risk it could open up a back door for China to access the advanced technology.
- The House China Select Committee earlier this year found that G42 has "extensive ties" to several Chinese firms involved in surveillance and research for the military.
Catch up quick: Earlier this year, Microsoft said it was expanding its collaboration with G42 and investing $1.5 billion in the company.
- G42 is chaired by the UAE's national security adviser Sheikh Tahnoon bin Zayed Al Nahyan — a force in the country's push to become a global AI powerhouse.
- Microsoft and G42 announced a few months later that they would team up to launch two new AI institutes in Abu Dhabi.
- The agreement came after G42 tried to calm the U.S. government's nerves about its ties to Chinese companies. The company said it would remove hardware made by China's telecom giant Huawei from its systems.
- G42 also divested from Chinese companies, but Bloomberg reported those investments were taken over by a fund overseen by Sheikh Tahnoon, whose private investment firm is G42's parent company.
Friction point: U.S. government approval to export the advanced AI chips that are key to the Microsoft-G42 partnership was delayed over continued concerns about the technology ending up in China's hands. The UAE and China remain close economic partners in other sectors and maintain military ties.
Microsoft and the Commerce Department declined to comment. G42 did not respond to a request for comment.
The big picture: While G42 builds its AI infrastructure in the UAE, it is training foundation models, including a bilingual Arabic-English large language model called Jais, at data centers in the U.S. through a partnership with AI chipmaker — and Nvidia competitor — Cerebras.
- G42 announced in October its plans to build an "AI-optimized data center" that would be the largest yet in the UAE.
- Saudi Arabia, where Microsoft is also investing heavily, is planning an AI project to rival the UAE's effort.
4. Training data
- Meta on Friday announced version 3.3 of its open-source Llama model. With 70 billion parameters, it performs on par with the 405-billion-parameter Llama 3.1, per the company. (VentureBeat)
- Using AI review of mammograms to back up human radiologists can improve accuracy and increase the number of breast cancer cases detected, a study found. (Gizmodo)
5. + This
While I didn't get to see them, there were some absolutely stunning D.C. sunsets in recent days.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and Anjelica Tan for copy editing it.
Sign up for Axios AI+