Axios AI+

February 15, 2024
Hi, it's Ryan. Today's AI+ is 1,237 words, a 4.5-minute read.
1 big thing: "Open" software needs an AI rethink
Illustration: Sarah Grillo/Axios
While nearly everyone in the AI world claims to be "open" in some way, the software industry's current method of making open products doesn't fit the way AI is actually built, Megan Morrone reports.
Why it matters: Open approaches could speed up innovation, as advocates believe, or magnify some risks, as critics fear — but the people and companies creating today's most advanced AI models don't even agree on what "open" AI means.
The big picture: Open source software generally means that anyone can read, copy and modify the code. It made today's internet possible, but its rules and licenses don't easily map to the AI world.
- The licenses that govern open source applications and operating systems require free access to the program code. That can save money, speed up research and future-proof systems in cases where vendors go out of business.
But today's AI is more than programming instructions: it takes the form of mathematical models trained on data. To understand, reproduce and change how a model works, you need access to all of those components, and that's where things get messy today.
- Despite its name, OpenAI's ChatGPT is not currently an open-source model, although OpenAI has published some open-source models — including GPT-2, Point-E, Whisper, Jukebox, and CLIP.
- Meta calls both of its Llama releases open source because they're freely available for commercial and non-commercial use. With Llama 2, the company released model weights, but did not release the training data.
- Earlier this month the Allen Institute announced an open source model and went a step further, releasing not only the model and its weights but also its full training data and pre-training code.
Flashback: The open source movement that began in the '90s — and the free software movement before it — sought to create legal alternatives to copyrighting program code and offer protections for those who wanted to share their work.
- But today's AI industry lacks any sort of consensus on what "open source" means in the new field.
- "There is definitely not a universally accepted definition," Nikolaos Vasiloglou, VP of research ML at RelationalAI, tells Axios.
What they're saying: Sharing an AI model or product is different from sharing a program.
- Moez Draief, managing director of Mozilla.ai, says that issues around privacy, bias and liability — among other factors — make opening an AI model incredibly complex.
- When it comes to AI, there's a spectrum of openness — it's not a binary choice between open or closed, Draief tells Axios.
- Hugging Face, a provider of tools for developing open source AI models, says it takes "a community-based approach to understanding how concepts like 'open source' and 'open' are defined and understood."
- "We tend to use terminology like 'open science' and 'open access,'" Yacine Jernite, machine learning and society lead for Hugging Face, tells Axios.
Between the lines: Some LLM creators start with a closed model and a plan to release the source code later.
- Jon Carvill, senior director of AI communications at Meta, tells Axios that the company's FAIR research team starts every AI research project with the intention to openly release it. "We assess our strategy on openness on a case-by-case basis for our AI research at FAIR," Carvill says.
- Zack Kass, who oversaw the launch of ChatGPT at OpenAI and has since left the company, says this "closed first, open later" approach can help with "alignment" — meaning how well the AI model reflects the values its creators sought to embed in it.
- Like children, Kass argues, AI models need to mature before they're open sourced. "There's a reason we don't tell kids to go out into the world until they're 18," Kass tells Axios.
Yes, but: Internet activist Cory Doctorow charges the industry with "openwashing" — embracing the language of openness without delivering the substance.
- Doctorow argues that the large AI companies call themselves "open" in order to evade both regulation and criticism "by casting themselves as forces of ethical capitalism, committed to the virtue of openness."
What's next: President Biden's AI executive order tasked the National Telecommunications and Information Administration with assessing the opportunities and risks of open source AI.
2. NYC sues social media companies for negligence
Photo: Mayor Eric Adams (Luiz C. Ribeiro/NY Daily News/Tribune News Service via Getty Images)
New York City filed a lawsuit against TikTok, Meta, Snap and Google's YouTube to hold the companies accountable "for fueling the nationwide youth mental health crisis," NYC Mayor Eric Adams announced Wednesday, Axios' Ashley Gold reports.
Driving the news: The lawsuit, filed in California Superior Court, alleges that the companies intentionally manipulate younger users and addict them to social media platforms.
- The lawsuit makes three complaints under New York state laws: negligence, gross negligence and public nuisance.
- Such conduct has imposed a financial burden and a crisis on the city, the lawsuit alleges, affecting schools, hospitals and other communities.
- The plaintiffs are seeking a jury trial, company behavior changes and financial damages.
- In a press conference, Adams described New York City teens who are constantly in despair, glued to their phones and doing poorly in school.
What they're saying: "Our city is built on innovation and technology, but many social media platforms end up endangering our children's mental health, promoting addiction, and encouraging unsafe behavior," Adams said in a release.
Context: In the absence of new federal laws protecting children online, lawsuits aiming to hold tech accountable are growing more prevalent across the country, filed by school districts and groups of parents who say their children were harmed by social media.
- Flashy Capitol Hill hearings where tech executives get grilled draw plenty of attention, but lawsuits pose a costlier, more tangible threat to companies' business models.
The other side: "We want teens to have safe, age-appropriate experiences online, and we have over 30 tools and features to support them and their parents," Meta spokesperson Andy Stone said.
- Google spokesperson José Castañeda said the allegations were untrue and that safe online experiences are "core" to the company's work, including "robust" parental controls.
- TikTok told Axios its safeguards include "age-restricted features, parental controls" and "an automatic 60-minute time limit for users under 18."
- A Snap spokesperson said in a statement that Snapchat differs from other platforms because it opens to a camera, not a "feed of content that encourages passive scrolling," and has no public likes or comments. The focus, the company says, is on helping users "communicate with their close friends."
3. AI skills trail soft skills
Illustration: Shoshana Gordon/Axios
Communication is the most in-demand job skill for the second year in a row, according to data gathered by LinkedIn, reports Axios' Eleanor Hawkins.
Why it matters: Communication skills trump AI skills — for now.
- Customer service and leadership ranked second and third, respectively, followed by project management, management and analytics.
What they're saying: "Human skills — like communication, leadership and teamwork — are particularly critical in this moment," says LinkedIn Learning global head of content strategy Dan Brodnitz.
- "With a rise in remote and hybrid work, and now AI, the need for human connection and people skills has become even more important."
4. Training data
- Google launched a new AI hub in France today, staffed by 300 researchers and engineers and housing Chrome and YouTube development teams. (Reuters)
- After moving ahead of Amazon earlier this week, NVIDIA briefly outranked Alphabet in market capitalization Wednesday, reaching $1.83 trillion. (Reuters)
- The winner in Indonesia's presidential election is Prabowo Subianto, who campaigned using a younger-looking AI version of himself and deepfake videos of long-deceased dictator Suharto "endorsing" him. (N.Y. Times, U.K. Channel 4)
5. + This
Mark Zuckerberg reviewed Meta's own Quest virtual reality headset and Apple's Vision Pro — and you'll never believe it: He loves Quest even more now!
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.