Axios AI+

September 06, 2024
Hi from Hollywood, where I am speaking later today on how AI is changing journalism as part of the convention of NLGJA: The Association of LGBTQ+ Journalists.
- Join Axios Pro for a series of virtual conversations and live Q&As unpacking the policy impacts of the 2024 election. RSVP today.
Today's AI+ is 1,067 words, a 4-minute read.
1 big thing: Bots devalue wisdom of the crowd
A rise in bots that simulate human engagement online is making it tougher to use social media chatter as a bellwether for public opinion.
Why it matters: Bad actors can use AI to shift narratives — especially by making outlier opinions seem more widespread.
Driving the news: Conservative activist Robby Starbuck has gained notoriety for his social media posts challenging the DEI policies of several big American brands — and companies like Tractor Supply, John Deere and Harley-Davidson have buckled in response.
- Molson Coors, Lowe's and Ford have also recently walked back their commitments to DEI.
Yes, but: Much of the engagement around these socially divisive posts has been generated by social media bots, according to experts from public relations and marketing firm Jackson Spalding.
By the numbers: The policy reversals of Tractor Supply, John Deere and Harley-Davidson saw 77% more media mentions and roughly 40% more social media interactions than Starbuck's initial anti-DEI campaigns, according to NewsWhip data shared with Axios.
- That's because bot activity is driven not by organic posts but rather by re-posts and likes, says Justin Williams, digital and analytics practice leader at Jackson Spalding.
- "If your first assumption is that tens of thousands of people know about this issue and have paid attention to it, then you would expect that when news comes out [about the policy change], it would actually cool things off — but the reverse occurred," he said.
- "There were fewer humans [who] were actually paying attention at that moment, but as soon as national media picked it up, it broadened the audience and diversified the number of people who were creating original content about it."
What they're saying: "Bots are used to manipulate public opinion by rigging the virality algorithm," says Guy Tytunovich, CEO of cybersecurity platform CHEQ.
- The bots are designed to trick the algorithm into "thinking" certain posts are reliable, relevant and interesting enough to show to more people, making them appear more organically viral, he says.
- This use of bots can cause major divisions and threaten democracy, adds Tytunovich.
State of play: Bot activity should be expected around any potentially polarizing issue, says Williams.
- "When bots are engaged, they're basically making a Super Bowl-like event out of things that are not," says Williams. "So it's important to understand that volume does not always equal depth."
Zoom in: The PR industry is partly to blame for the outsized value placed on social media volume, says Scott Sayres, head of reputation and issues management at Jackson Spalding.
- "In the early days of social media, the very first thing we looked at was 'how many mentions do you have' or 'what's your volume traffic?'" Sayres says. "Now we're having to break that cycle and re-educate everybody."
- Companies must pay attention to a conversation's sphere of influence — particularly whether it is actually reaching their target audiences or key stakeholder groups.
Reality check: The growing popularity of and access to generative AI will likely fuel even more of this bot activity.
- Meanwhile, tracking and understanding virality is becoming more difficult, as platforms withhold data to avoid scrutiny over how their algorithms work.
What we're watching: The motives for tricking algorithms can also be purely financial. Expect to see more scammers using AI to create content and then bots to amplify it, generating profit.
- In a recent case, federal prosecutors arrested a music producer for using AI to create and publish hundreds of thousands of songs, then deploying thousands of bots to "listen" to them.
Go deeper: Leading chatbots are spreading Russian propaganda
2. OpenAI's cash calculus
OpenAI is growing its revenue from business users and contemplating hefty price hikes for users who want access to its next-level services, per reports.
Why it matters: Generative AI is notoriously expensive to develop and run, and those costs rise exponentially with each new generational leap — like the one from OpenAI's GPT-4 to its long-awaited successor.
Driving the news: OpenAI said Thursday that it now has more than a million paying business users, up from 600,000 in April.
- By "business users," OpenAI means people who pay for ChatGPT Enterprise, Team and Edu, all of which launched in the last year.
- The company told Axios that its enterprise and education customers include Moderna, Morgan Stanley and Arizona State University.
Yes, but: OpenAI is spending tens of billions of dollars to train and deploy its next generation of models, and it keeps raising fortunes from investors to pay for work that its estimated $2 billion in annual revenue doesn't begin to cover.
- That "money incinerator" is forcing the company to look at big price increases for monthly subscribers when it rolls out the next versions of ChatGPT, according to The Information.
Stunning stat: In internal discussions, OpenAI raised the possibility of a price tag as high as $2,000 monthly, though that sounds like an extreme scenario.
- OpenAI declined to comment on potential price increases.
Between the lines: ChatGPT is available for free, but a $20 monthly subscription offers more features. ChatGPT Team costs $25 a month, and OpenAI says ChatGPT Enterprise subscriptions vary depending on the size and needs of a company.
- Rival chatbot companies Anthropic and Inflection also recently announced enterprise versions of their software.
- Pro, team and enterprise versions of Google's Gemini, Microsoft's Copilot and Anthropic's Claude are similarly priced to ChatGPT.
The bottom line: Investors continue to throw money at OpenAI and its rivals to build ever-bigger genAI models, but sooner or later they will want to see credible evidence of a long-term payoff.
- Pushes into B2B services and subscription increases are common signs of a maturing software market.
Go deeper: OpenAI says ChatGPT usage has doubled since last year
3. Training data
- YouTube is working on new tools that will detect when someone uploads content with AI-generated faces or voices of famous people. (TechCrunch)
- Google is launching its "Ask Photos" AI feature to some U.S. users. The tool will let you search for photos conversationally. (9to5Google)
- Telegram CEO Pavel Durov called his arrest by French authorities "misguided," but the company is also beginning to change some of its moderation rules to allow for more monitoring of private groups. (Axios, The Verge)
- Nick Pickles, head of global affairs at X — and one of the last leaders left from the pre-Elon Musk era — is leaving the company, making him an ex-X exec. (X)
4. + This
Oof. A car crash sent a fire hydrant through a Big Boy statue in Downey, California.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.