Axios AI+

November 12, 2024
I'm still thinking about all those who have risked their lives to preserve America's democracy. Today's AI+ is 922 words, a 3.5-minute read.
1 big thing: What Adobe's AI knows about you
Adobe says its stance on using customers' data to train its AI models is simple and absolute: It doesn't.
Why it matters: Adobe's software is widely used by many of the creative professionals who feel their livelihoods are threatened by generative AI.
In this series on What AI Knows About You, Axios is looking, company by company, at how the data-hungry AI industry uses its own customers' information to create and refine its products.
- These data practices will shape both the nature of AI itself and the extent to which the public ends up trusting or rejecting the new technology.
- If AI firms make aggressive use of customer data, creators of intellectual property fear their work will lose economic value — and individual users may find that AI chatbots and other products know, and even share, sensitive personal information.
Catch up quick: Adobe has released a number of generative models under its Firefly brand, including ones to produce photos, vector images, text and video.
- It has integrated generative AI features into a range of products including Photoshop, Illustrator, Express and Premiere Pro.
- Unlike other AI providers whose generative models are trained using a range of "publicly available" data scraped from the web, Adobe has said it will only use content to which it has rights, making its models "commercially safe" for businesses to use.
Between the lines: Despite Adobe's standing commitment not to use customer data for AI training, a change in its terms of service earlier this year left some observers concerned that it was shifting away from that stance.
- In response, Adobe clarified those terms and codified its pledge to keep customer info out of AI training datasets.
- As part of an effort to help flag which content was made with AI, Adobe also lets creators digitally sign their work and indicate whether they want it used to train AI systems.
Yes, but: Adobe does train its systems on content that users contribute for resale on Adobe Stock, the company's marketplace for stock images.
- Adobe is paying an annual "contributor bonus" to those whose photos, vectors, illustrations or videos are used to train the company's Firefly models.
What they're saying: Adobe chief strategy officer Scott Belsky told Axios the uproar over the terms of service change highlighted to him that "no company that is the steward of data in this modern age has the benefit of the doubt."
- "You have to be explicit not just about what you are going to do, but what you aren't going to do," he said.
The big picture: Even with its assurances, Adobe's embrace of generative AI has been controversial with some customers — many of whom see the technology as a threat to their livelihood as artists, designers and content producers.
- And while Adobe touts its own models as commercially safe and trained only on content to which it has legal rights, it also lets customers use other models within Photoshop and other tools, and those alternatives often have murkier data policies and practices.
2. Study: Growth of AI adoption slows among U.S. workers
The percentage of workers in the U.S. who say they are using AI at work has remained largely flat over the last three months, according to a new study commissioned by Slack.
Why it matters: If AI's rapid adoption curve slows or flattens, a lot of very rosy assumptions about the technology — and very high market valuations tied to them — could change.
Driving the news: Slack said its most recent survey found 33% of U.S. workers say they are using AI at work, an increase of just a single percentage point. That represents a significant flattening of the rapid growth noted in prior surveys.
- Globally, meanwhile, AI use at work rose from 32% to 36%.
Between the lines: Slack also found that globally, nearly half of workers (48%) said they were uncomfortable telling their managers they use AI at work.
- Among the top reasons cited were fears of being seen as lazy, as cheating or as incompetent.
What they're saying: "Too much of the burden has been put on workers to figure out how to use AI," Slack senior VP of research and analytics Christina Janzer said in a statement. "To ensure adoption of the technology, it's important that leaders not only train workers, but encourage employees to talk about it and experiment with AI out in the open."
- The survey queried 17,372 workers in Australia, Brazil, Canada, France, Germany, India, Italy, Japan, the Netherlands, Singapore, Spain, Sweden, Switzerland, the U.K. and the U.S., and took place between Aug. 2 and Aug. 30.
3. Training data
- Google released the code to its AI protein-prediction tool AlphaFold3 after researchers complained that the first release in May was not truly open source. (Science)
- Today's leading AI models perform so well on current benchmarks that AI makers are having to create new benchmarks internally to chart progress. (Financial Times)
- Scientists fear AI's ability to generate huge amounts of fake experimental data that looks real. (Nature)
- A portrait of Alan Turing created by an AI robot sold for more than $1 million at auction. (New York Times)
- OpenAI safety engineer Lilian Weng is leaving the company after seven years. (X)
- Meta will offer paid subscription plans, with no ads, to Facebook and Instagram for users in Europe. (Axios)
4. + This
It was an honor to be present Sunday as Stanford renamed the basketball court at Maples Pavilion for legendary women's hoops coach Tara VanDerveer, who retired after last season.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Anjelica Tan for copy editing it.