Axios AI+

June 13, 2024
You know that feeling when you've been sick for like two weeks and you finally feel better? Yeah, it's pretty great.
Anyway, today's AI+ is 1,093 words, a 4-minute read.
1 big thing: When AI code goes bad
The same generative AI tools that are supercharging the work of both skilled and novice coders can also produce flawed, potentially dangerous code.
Why it matters: Multiple studies have shown that more than half of programmers are using generative AI to write or edit the software that runs our world — and that number keeps rising.
Catch up quick: AI coding assistants can do everything from handling developers' drudge work to producing whole codebases from brief prompts.
- In 2022, GitHub found that developers who used its AI coding assistant worked 55% faster than those who didn't.
- Gartner forecast in April 2024 that 75% of software engineers will use generative AI code assistants by 2028, up from less than 10% of coders in early 2023.
- All the tech giants and leading AI providers offer code assistants. OpenAI's ChatGPT can code, and so can Meta's Llama 3. Microsoft offers GitHub Copilot, Google's tool is called Gemini Code Assist and Amazon has AWS' CodeWhisperer.
Yes, but: The productivity gains come with a price.
- One study from Stanford found that programmers who had access to AI assistants "wrote significantly less secure code than those without access to an assistant."
- Another study from researchers at Bilkent University in 2023 found that 30.5% of code generated by AI assistants was incorrect and 23.2% was partially incorrect, although these percentages varied among different code generators.
- Research from code-review tool GitClear found that the rise of AI coding assistants in 2022 and 2023 correlated with a rise in code that had to be fixed within two weeks of being written. If the trend continues in 2024, "more than 7% of all code changes will be reverted within two weeks."
- When ZDNet put general-purpose chatbots through a series of coding tests (like "write a WordPress plugin"), Microsoft Copilot, Google Gemini Advanced, Meta AI and Meta Code Llama failed all of them. (ChatGPT passed.)
Programmers sense there's trouble.
- CodeSignal, a platform for coding skills assessments and AI-powered learning tools, found that more than half of developers have concerns about the quality of AI-generated code.
Of course, human coders mess up, too.
- Alastair Paterson, CEO of Harmonic Security, tells Axios that many of these models have skills equivalent to a junior developer's, but they can also make different kinds of mistakes.
- "The large language model approach is fantastic at some tasks and less good at some other things that you'd think it would be really, really good at," Paterson said. "They make strange logical errors in numbers and loops."
- "The one thing that the large language models are very bad at is doing math," says CodeSignal CEO Tigran Sloyan.
- Paterson says that many projects require big, complex architectural decisions that "these systems are just not capable of thinking about at the moment."
- "A lot of the times the reason that they produce not very good code is that what was asked of them was not correct," Sloyan tells Axios.
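The "strange logical errors in numbers and loops" Paterson describes are often mundane. A hypothetical illustration in Python (the function names and scenario are invented for this sketch, not drawn from any study cited above) shows the classic off-by-one mistake an assistant might produce when asked to sum the numbers 1 through n:

```python
def sum_first_n_buggy(n: int) -> int:
    """Intended to sum 1 through n, but range(1, n) stops at n - 1."""
    total = 0
    for i in range(1, n):  # bug: excludes n itself
        total += i
    return total


def sum_first_n_fixed(n: int) -> int:
    """Corrected version: range(1, n + 1) includes n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


print(sum_first_n_buggy(10))  # 45 — looks plausible, but wrong
print(sum_first_n_fixed(10))  # 55 — the intended answer
```

Code like the buggy version runs without errors and often passes a casual glance, which is exactly why reviewers can miss it.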
AI code generators aren't yet able to generate programs from scratch without input from humans, but as these tools get better, the problems might get bigger.
- Right now, bad AI-generated code that's not caught by a human usually just makes for messy code libraries or minor problems rather than disasters.
- Lee Atchison, former Amazon technical program manager and author of the O'Reilly book "Architecting for Scale," wrote in March that "code complexity and the support costs associated with complex code have increased in recent years in large part due to the proliferation of AI-generated code use."
In other words, generative AI tools might save time and money upfront in code creation and then eat up those savings at the other end.
- That would make them less of a revolutionary breakthrough — instead, they'd just be the latest shortcut the software industry uses to deploy fast and clean up later.
The big picture: There haven't yet been any public disasters related to unchecked AI-generated code, but Sloyan says it's only a matter of time.
- Problems might arise when AI programs are directing other AI programs to write code.
The other side: "I think we're some way off from some sort of AI apocalypse," Paterson says. "These tools ultimately are still just tools, and we've got a pretty good understanding of their limitations."
2. The market loves Apple's AI


There's been an astonishing melt-up in Apple shares over the past two days.
Driving the news: Apple's market value has spiked by $312 billion over the past two trading sessions, causing it to reclaim its place as the most valuable company in the world.
Between the lines: That increase, in dollar terms, dwarfs the value of OpenAI ($80 billion), xAI ($24 billion), Anthropic ($18 billion) and all other AI startups combined.
The big picture: This week's announcement from Apple marks the first time the general public has gotten a glimpse of how AI will improve items we touch and use every day.
- The market's verdict: AI is going to make the iPhone and iPad more valuable franchises — and, by extension, Apple itself.
Follow the money: One of the reasons the stock market continues to hit new record highs is that investors are pricing in a broad-based corporate productivity boost due to AI adoption.
- The move in Apple shares, concentrated in just two sessions, can be seen as a sped-up version of what has been happening across many sectors over the past year or so, and perhaps as a harbinger of what awaits other companies as they start rolling out the fruits of their AI strategies.
The bottom line: Apple isn't a "picks and shovels" company: It isn't selling AI chips or AI consultants or large language models or even AI training data. It's selling phones, which Wall Street believes will be better thanks to AI.
- That marginal improvement, it turns out, can be worth hundreds of billions of dollars in market cap.
3. Training data
- OpenAI's chief technology officer Mira Murati defended the company against Elon Musk's accusation that ChatGPT will be "creepy spyware" on iPhones. (Fortune)
- Sources say Apple isn't paying OpenAI in their deal, but both companies could potentially make money when free users sign up for ChatGPT subscriptions and Apple takes a cut. (Bloomberg)
- Microsoft's Brad Smith will tell lawmakers on Capitol Hill today that Microsoft "accepts responsibility" for the faulty cybersecurity practices that led to last year's China hack. (Axios)
4. + This
A non-AI-generated photo of a real live flamingo won third place in the "AI" category of a photo contest, Gizmodo reports.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.