Axios AI+

April 30, 2024
Hi, it's Ryan, coming to you from Stanford, where I started the week touring the university's AI, biotech and space labs. Today's AI+ is 1,164 words, a 4-minute read.
🍎 On June 5, we're bringing our AI+ Summit to NYC for Tech Week. Hear from leaders who are shaping the future of AI across New York's leading industries, from finance to media and health care. Request an invite to attend.
1 big thing: Meta's AI is everywhere all at once
Meta is pushing generative AI into every nook and cranny of its giant platforms — Facebook, Instagram and WhatsApp — frustrating some longtime users and threatening to worsen existing problems with spam and misinformation.
The big picture: Meta's fast-and-furious deployment of new AI features aims to make the technology's benefits accessible — but also risks degrading the experience for its billions of users.
Context: The company took a hit from investors last week after CEO Mark Zuckerberg admitted that he is doubling down on spending on AI infrastructure even though any bottom-line payoff is a long way off.
- Unlike rivals Google, Microsoft and OpenAI, Meta has no clear path to charging consumers for its expensive-to-run AI tools.
Driving the news: Meta is putting its AI assistant in many different places across the desktop, mobile and web versions of its apps — sometimes to users' delight, but just as often in ways they find frustrating.
- In Meta's various messaging products, people can choose to have a separate conversation with Meta AI or summon the assistant to help out in new and existing group chats, whether that's to share a funny image created from a text prompt or to ask for advice on where to eat out together.
Most controversially, Meta has put Meta AI front and center in the search bars for Facebook and Instagram, which have served as key tools to find content from particular friends or creators. For many, the main search bar in Instagram now says "Ask Meta AI anything."
- Zuckerberg's post on Threads announcing the changes prompted hundreds of user complaints, particularly because they said they couldn't turn the new search feature off.
- Protests multiplied in other forums like Reddit and X.
Meta is also testing a host of other AI features, so not everyone sees the same features and interfaces.
- Some of those AI features have also proven problematic, such as this well-publicized example from a parents' group conversation in which Meta AI claimed it had a gifted, disabled child. (I'm told this was part of a since-scrapped test, and Meta AI does not currently post on its own.)
The other side: Meta's AI products have plenty of attractive selling points.
- For example, in WhatsApp and on the Meta.ai site, the Imagine text-to-image creator can show you a preview while you type, so you can see how changing the prompt could change the result. By contrast, most other text-to-image tools require you to finish a prompt and then wait for a reply.
- Meta taps Bing and Google to help provide real-time results for search queries, while it says the latest Llama 3 model makes Meta AI more conversant across a broader range of topics.
- "We believe Meta AI is now the most intelligent AI assistant you can use for free and it's an experience we'll continue to iterate on and enhance moving forward," the company told Axios in a statement.
- Meta says it believes trying things out is the best way to figure out which features resonate: "Our generative AI-powered experiences are under development in varying phases, and we're testing a range of them publicly in a limited capacity."
What's next: The more fully Meta integrates generative AI into the day-to-day lives of its users, the higher the risk that AI tools will accelerate the platforms' longstanding problems with misinformation and spam.
- Meta's conflicting loyalties to both advertisers and users create incentives for it to develop AI technologies that keep people on its services longer, rather than helping them achieve a task and move on.
- Further, generative AI has the power to aggregate a huge amount of personal data and allow for custom targeting of individuals, potentially quite persuasively.
2. Biden's AI executive order hits deadlines
It's been six months since President Biden signed his AI executive order, and the White House says federal agencies are hitting all their deadlines.
Why it matters: The executive order is what's driving U.S. AI regulation as Congress moves at a much slower pace.
- Some of the deadlines are squarely in line with Hill priorities.
For the April 27 deadline, agencies created frameworks and guidelines to mitigate risks related to dangerous biological materials, critical infrastructure and software, according to the White House.
- The public is now able to comment on managing generative AI risks.
- A 22-member AI Safety and Security Board was launched to advise DHS and the private sector.
- The DOD made progress on a pilot for AI to address national security and military software vulnerabilities.
On civil rights and equity, HUD affirmed that discrimination prohibitions apply to AI use in tenant screening and in advertising housing opportunities.
- There's now guidance for how all levels of government should manage the risk of using AI in SNAP and other public benefit programs.
Efforts to hire tech talent also progressed, as agencies have hired more than 150 AI professionals and are on track to hire hundreds by this summer.
Behind the scenes: Agency workers are taking on the executive order in addition to the work they were already doing.
- Ben Buchanan, White House Office of Science and Technology Policy assistant director, tells Axios, "We didn't take things off their plate. It has to be that way because AI is changing those jobs, it's changing those fields."
What's next: By May 27, agencies must designate a chief artificial intelligence officer and establish AI governance boards.
A version of this story was published first on Axios Pro. Unlock more news like this by talking to our sales team.
3. Ohio uses AI to axe bureaucracy
If you want to take an axe to red tape, ask a bot, according to Ohio Lt. Gov. Jon Husted.
The big picture: Husted's office credits an AI-aided analysis of the state's administrative code with eliminating 2.2 million words' worth of unnecessary and outdated regulations.
- The work began in 2020, two years before ChatGPT's release brought AI to the masses.
What they're saying: Husted tells Axios he literally couldn't have done it without AI's help.
- "If you think about it, no human being could make sense of the administrative code," he said. "I think it's like 17.4 million words."
Husted's office used a tool called RegExplorer, created by Deloitte.
- His team uploaded the state's regulations and entered prompts asking it to identify outdated and duplicative sections.
What's next: Husted is asking the legislature for permission to keep going, estimating he could eventually cut the state's administrative code by a third.
4. Training data
- Bill Gates and his private memos are still treated as gospel among Microsoft's senior leadership. (Business Insider)
- Australian federal and state governments are betting $620 million on PsiQuantum, an American startup promising to build the world's first massive quantum computer. (Semafor)
- An "AI priest" called Father Justin has been demoted to a layperson by a Catholic advocacy group site after claiming to be a real person and telling one user they could baptize a baby in Gatorade. (Futurism)
5. + This
This one's for Ina: A 13-year-old found a rare octopus Lego piece from a 1997 shipwreck that spilled 5 million pieces into the sea.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+


