Axios Login

February 21, 2023
Welcome back after the long weekend! (Or, if you had to work yesterday, our condolences.) Ina's taking some time off this week so you've got me, Scott Rosenberg, at the wheel for the next few days.
Today's Login is 1,244 words, a 5-minute read.
1 big thing: Chatbots trigger next misinformation nightmare

Illustration: Sarah Grillo/Axios
New generative AI tools like OpenAI's ChatGPT, Microsoft's BingGPT and Google's Bard that have stoked a tech-industry frenzy are also capable of releasing a vast flood of online misinformation.
Why it matters: Regulators and technologists were slow to address the dangers of misinformation spread on social media and are still playing catch-up with imperfect and incomplete solutions.
- Now, experts are sounding the alarm faster as real-life examples of inaccurate or erratic responses from generative AI bots circulate.
- "It's getting worse and getting worse fast," Gary Marcus, a professor emeritus of psychology and neural science at New York University and AI skeptic, told Axios.
The big picture: Generative AI programs like ChatGPT don't have a clear sense of the boundary between fact and fiction. They're also prone to making things up as they try to satisfy human users' inquiries.
Be smart: For now, experts say the biggest generative AI misinformation threat is bad actors leveraging the tools to spread false narratives quickly and at scale.
- "I think the urgent issue is the very large number of malign actors, whether it's Russian disinformation agents or Chinese disinformation agents," Gordon Crovitz, co-founder of NewsGuard, a service that uses journalists to rate news and information sites, told Axios.
What we're watching: Misinformation can flow into AI models as well as from them. That means at least some generative AI will be subject to "injection attacks," where malicious users teach lies to the programs, which then spread them.
The misinformation threat posed by everyday users unintentionally spreading falsehoods by sharing the chatbots' inaccurate answers is also huge, but not as pressing.
- "The technology is impressive, but not perfect ... whatever comes out of the chatbot should be approached with the same kind of scrutiny you might have approaching a random news article," said Jared Holt, a senior research manager at the Institute for Strategic Dialogue.
Between the lines: Tech firms are trying to get ahead of the possible regulatory and industry concerns around AI-generated misinformation by attempting to detect falsehoods and using feedback to train the algorithms in real time. Some help has already arrived from researchers.
- NewsGuard last week introduced a new misinformation prevention tool for training generative artificial intelligence services.
- NewsGuard assembles data on the most authoritative sources of information and the most significant false narratives spreading online. Generative AI providers can then use the data to better train their algorithms to elevate quality news sources and avoid false narratives.
- Microsoft, a backer of NewsGuard, already licenses NewsGuard’s data and uses it for BingGPT.
How it works: At Microsoft, user feedback is considered a key component of making its ChatGPT-powered Bing work better.
- "The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing," the company posted on its blog on Feb. 15, a week after Bing with ChatGPT rolled out.
Yes, but: "The challenge for an end user is that they may not know which answer is correct, and which one is completely inaccurate," Chirag Shah, a professor at the Information School at the University of Washington, told Axios.
- Average users also need to look out for bias, Shah said, which is especially tough to discern in ChatGPT-generated answers because there is a less direct link to where the information in the box is coming from.
2. Supreme Court's internet smarts put to test

Illustration: Brendan Lynch/Axios
As the firestorm over Big Tech and content moderation comes to a head at the Supreme Court today, some experts fear the court simply isn't up to the job, Axios' Sam Baker and Ashley Gold report.
Why it matters: The court has historically not been great at grappling with new technology. As it dives into the political battle over social media algorithms, there's a real fear that the justices could end up creating more controversies than they solve.
Driving the news: The court is set to hear arguments this week in two cases involving Section 230, the federal law that says tech platforms aren’t liable for what their users post.
- Both lawsuits — one against Google, and one against Twitter — argue that while tech companies may not be liable for the content of users' posts, they should be liable for what their algorithms promote or suggest.
- The implications of such a decision may not be fully apparent for years, even to the engineers who work on those products.
"The court might think it's doing one thing and it's actually doing something very different," said Evelyn Douek, a law professor at Stanford who specializes in tech law. "It's ill-matched to the problem."
The concern within the tech industry isn't just that the court might rule against it; every party in a Supreme Court case has to worry about that. The deeper fear is that a Supreme Court ruling limiting Section 230, unlike a law doing the same, could cause unforeseen problems down the road that even the law's critics may not be happy about.
- Even if Google and Twitter win, there’s a realistic scenario in which "the court still says problematic things ... that end up weaponizing the legal system against content moderation," Berin Szóka, president of libertarian-leaning think tank TechFreedom, said during a roundtable with reporters last week.
Context: The Supreme Court is an inherently slow-moving institution that tries to solve problems mainly by searching for one broad principle that can last forever. And that's simply hard to square with complex, evolving technology.
3. New fees for user services at Meta, Twitter

Photo: Josh Edelson/AFP via Getty Images
Over the weekend, Meta announced it will test out a monthly subscription service that allows users to verify their accounts, Axios' Ivana Saric reports.
Why it matters: Revenue-hungry social media platforms are finding new ways to charge users.
- The move is aimed at "increasing authenticity and security across our services," Meta CEO Mark Zuckerberg wrote in a Facebook post announcing the news.
State of play: Called Meta Verified, the subscription service will be rolled out this week in Australia and New Zealand, with other countries soon to follow, Zuckerberg wrote.
- It will allow users to verify their accounts using a government ID. In return, users will gain a verified blue badge, direct access to customer support and "extra impersonation protection against accounts claiming to be you," according to Zuckerberg.
- Meta Verified will be priced at $11.99 a month for web users and $14.99 a month on iOS.
The big picture: Meta's rollout of a paid subscription service follows Twitter's decision last year to tie verification to its Twitter Blue subscription plan.
- On Friday, Twitter announced it would stop providing text-message-based two-factor authentication to users who were not paying for Twitter Blue.
- Owner Elon Musk tweeted that text message charges were costing the company $60 million annually for "fake" messages.
4. Take note
On Tap
- The Supreme Court will hear arguments today and tomorrow in major cases affecting the future of liability protections for publishers of online content.
Trading Places
- Todd Fisher will serve as chief investment officer for the Commerce Department's spending under the $52 billion CHIPS Act to fund U.S. semiconductor projects.
ICYMI
- Twitter faces at least nine lawsuits from landlords, consultants and vendors who claim the social media giant has not paid its bills. (Wall Street Journal)
- In a policy change, Amazon is now pushing corporate workers to return to the office at least three days a week. (GeekWire)
5. After you Login
A belated happy birthday to Cherry Coke, which turned 38 over the weekend.
Thanks to Peter Allen Clark for editing and Bryan McBournie for copy editing this newsletter.