Axios AI+

December 18, 2023
Hi, it's Ryan, with you for the full holiday countdown as Ina takes a break. Today's AI+ is 1,079 words, a 4-minute read.
Situational awareness: Double regulatory whammy from Europe this morning, as Adobe cancels its plans to acquire Figma, seeing "no clear path" to U.K. and EU approval, and the EU opens a broad investigation of Elon Musk's X for violations of its new Digital Services Act.
- Also breaking: Apple will pause sales of the newest models of the Apple Watch as it tries to resolve an intellectual property dispute involving its blood oxygen measurement feature.
1 big thing: AI's road to reality
Illustration: Aïda Amer/Axios
A middle road for AI adoption is taking shape, routing around the debate between those who fear humanity could lose control of AI and those who favor a full-speed-ahead plan to seize the technology's benefits.
Why it matters: The American people consistently tell pollsters they're more concerned about how AI will affect their jobs and day-to-day life than about its long-term risks and rewards.
The big picture: Those who think trust in AI will remain low without a cautious and practical approach to AI development now include America's biggest banks and philanthropists, the White House, labor unions and a new class of niche venture capitalists.
- The emergence of this loose coalition of private sector and civil society organizations could lower the pressure on Congress to deliver AI regulation.
Between the lines: Advocates who see a middle ground with AI are moving more pragmatically and methodically than those at the extremes of the AI debate.
- They're focused on building evidence for their vision and raising funds from outside the biggest tech companies and venture capital firms — which takes more time than writing an open letter or blog post, or tapping existing investors for another round of funding.
What's happening: A growing number of groups are working to spur innovation while protecting against AI's immediate harms.
Players on the middle ground are taking actions across the economy, starting with unions stitching together a workplace AI safety net.
- Governments have committed to pre-deployment testing of very powerful AI models and the United Nations is beginning to develop "AI for good" plans to address urgent global social and environmental challenges.
- The federal government's AI risk management framework from NIST is broadly supported and will be implemented by a suite of chief AI officers, while private AI companies have made a series of safety commitments to the White House.
- Philanthropists have committed more than $200 million toward public-interest AI development and those funders are starting to deliver cash grants.
- Open source AI models have flooded the market, providing options and counterweights to the big closed systems that dominated the start of the generative AI hype cycle.
- A new class of venture capitalists dedicated to pioneering AI safety investing is emerging, while 50 VCs have made voluntary commitments around how the startups they fund should develop AI responsibly.
- The U.S. and China remain each other's main AI scientific partner, even as geopolitical tensions soar.
Yes, but: The middle road remains a relatively narrow path, avoided by the biggest companies leading AI development.
- The field's giants, including the Microsoft/OpenAI alliance and Google, are flooring the pedal on deployment even as they make broad but vague commitments toward responsibility and caution.
Zoom in: Companies including Philips — a Dutch appliance-maker turned health tech company — are choosing narrow uses of AI, such as mobile lung cancer screening for firefighters, to demonstrate AI's practical value to American consumers, CEO Roy Jakobs tells Axios.
- Philips is also teaming with the Gates Foundation to deliver home-based ultrasounds for at-risk pregnant women.
- Companies in regulated industries are rapidly expanding specialist roles for managing "responsible AI" — large U.S. banks increased their responsible AI staff by 43% between May and September 2023, per the consultancy Evident.
- Omidyar Network is directing its cash into projects intended to boost industry competition, worker power, and "belonging," while Facebook co-founder Chris Hughes is focusing on keeping monopolies out of AI.
Between the lines: Those working to build middle ground AI options are not necessarily neutral or moderate — they're also pushing specific interests.
- For some, it's a rebranding of effective altruism or a way to limit the power of big tech companies, while others see AI safety as a cash cow.
2. EU's never-ending AI Act
Illustration: Natalie Peeples/Axios
Negotiations around the EU's AI Act are getting longer and more complicated.
Why it matters: Governments and tech companies around the world have been waiting for months for the final text of the world's first comprehensive and democratic AI regulation. Now they're going to have to wait a bit longer.
- While a Dec. 8 political deal confirmed the regulation will come into force, a final text will only be available in early February, and some elements of the regulation may not be in force until 2027.
What's happening: At a debriefing on the deal held among EU national governments on Friday, four of the largest countries — Germany, France, Italy and Poland — insisted they would not sign off on the deal until a final text is ready.
What they're saying: France's digital minister, Jean-Noël Barrot, has taken to referring to the Dec. 8 deal as merely a "step" in the negotiation process, warning that France will continue to negotiate in favor of innovators and national security interests on the final details.
- One of the main negotiators of the deal, European Parliament member Eva Maydell, tells Axios there's no need to panic, explaining details about the timeline for enforcement of the regulation.
- Enforcement delays may range from 6 to 36 months after the regulation is finally approved, she said.
- Prohibited practices face the shortest enforcement delay: 6 months.
- High-risk uses of AI — such as credit scoring and those involving education and employment decisions — are set to be subject to a 24-month delay.
- Rules for the use of AI in fields that are already highly regulated, such as medical technology and transport, won't kick in for 36 months.
Yes, but: The EU's processes are a 27-country negotiation — never easy.
- Reasons for delays in finalizing EU regulation range from incomplete deals announced for political reasons to the practical: It takes weeks to finalize legally precise translations into the EU's 24 official languages.
3. Training data
- OpenAI suspended the official account of TikTok's parent company ByteDance after a reporter found that ByteDance was using OpenAI's API to train its own model. (The Verge)
- The RAND Corporation, backed by $15 million in funding from Facebook co-founder Dustin Moskovitz's Open Philanthropy, played a key role in drafting the Biden administration's AI executive order. (Politico)
- A global review of AI governance methods and tools concludes that many of today's practices could create "novel, unintended problems or create a false sense of confidence" in how AI is managed. (World Privacy Forum)
- Deloitte says it's using AI to save jobs rather than eliminate them. The consultancy says it's using automated tools to reskill workers and to learn how to avoid overhiring in the future. (Bloomberg)
4. + This
What happens when John Oliver decides he needs to win a Bird of the Century vote in New Zealand.
Thanks to Scott Rosenberg and Meg Morrone for editing this newsletter.
Sign up for Axios AI+

Scoops on the AI revolution and transformative tech, from Ina Fried, Madison Mills, Ashley Gold and Maria Curi.

