Axios AI+

October 05, 2023
Hi, it's Ryan. Today's AI+ is 1,294 words, a 5-minute read.
1 big thing: Newsrooms enter treacherous AI waters
Illustration: Natalie Peeples/Axios
After months of experimenting with artificial intelligence to make their work more efficient, some newsrooms are now dipping their toes in more treacherous waters — trying to harness AI to detect bias or inaccuracies in their work, Axios' Scott Rosenberg and Sara Fischer report.
Why it matters: Confidence in the news media is at an all-time low, pressuring news leaders to look for new ways to win back trust. But today's AI, which has its own biases and makes up fake facts, is an unlikely savior.
Driving the news: The Messenger, a new digital media company, said Wednesday that it plans to partner with a company called Seekr to ensure its editorial content "consistently aligns with journalism standards" using AI.
- The Messenger's president, Richard Beckman, said in a statement announcing the partnership that "we believe Seekr's responsible AI technology will help hold our newsroom accountable to our core mission," which is "to deliver the news — not shape it."
- Meanwhile, the CEO of Politico and Insider parent company Axel Springer told CNN Tuesday that the firm will use AI for "fact-checking," without specifying how.
How it works: Seekr analyzes individual articles using factors like "title exaggeration," "subjectivity," "clickbait" and "personal attack" as well as purported political leaning.
- The promise is that a neutral AI will somehow arrive at purely objective ratings — but AI itself is trained on human data, and that data is full of its own biases.
Reality check: Taking humans out of the loop introduces other problems, and automating judgments by algorithm opens the door to many unpredictable failures.
- It took less than a minute to find, for instance, that Seekr gave a "very low" rating to a harmless Messenger story rounding up late-night comedy hosts' schticks about Kevin McCarthy's ouster.
- The story was a compilation of jokes from Stephen Colbert and Jimmy Kimmel, which the program must have viewed as "subjective" and "personal attacks."
The big picture: Several companies have launched in recent years with the goal of evaluating news accuracy and bias. Most rely on human judgment to assess whether a particular outlet or article is credible by analyzing factors like funding transparency and original sourcing.
- Critics argue that relying on human review opens these companies, such as Ad Fontes or NewsGuard, to their own biases. Some firms take measures to prevent bias, such as relying on politically balanced panels to evaluate the same material.
Between the lines: Experts see some value in using AI to fact-check very large datasets — for instance, to track the spread of a falsehood identified by a human across multiple stories and media outlets.
- Google, for example, says it uses AI to identify claims that fact-checkers have debunked and that have been repeated across a wide set of information sources.
Our thought bubble: Whatever systems publishers and editors impose, AI will probably enter newsroom workflows informally, as time-pressed journalists turn to tools like ChatGPT to answer questions fast — even if they're advised not to.
Go deeper: The right newsroom jobs for AI and the wrong ones
2. LinkedIn looks to dethrone 4-year degrees
Illustration: Allie Carl/Axios
AI is transforming job hunting and skill development — threatening to relegate four-year college degrees to the category of merely nice-to-have on your CV.
In AI-driven workplaces, employers will need to treat up-skilling investments as a "critical priority" rather than a perk, per the pitch LinkedIn executives made to 2,000 of the nation's top recruiters this week in New York City.
Why it matters: Fewer than 4 in 10 Americans hold a bachelor's degree — but this group dominates America's decision-making class.
- Recruiters depend on LinkedIn to do their jobs, and the company's wake-up call on degrees is based on data from workers at 63 million organizations.
Driving the news: LinkedIn released a slew of new AI product features this week, including:
- AI-assisted candidate discovery for recruiters, promising better natural language searches, less focus on university credentials and job titles, and prompts on how a role could be tailored around a candidate's strengths and constraints (such as location).
- AI-powered coaching in the subscriber-only LinkedIn Learning, a chatbot that coaches workers through tough moments and career development.
Be smart: Even if you're not changing jobs often, whatever job you're in will likely be changing around you, impacting the value of your degree.
- Those evolving jobs will come to be seen as a collection of skills and tasks, with more focus on "human and people-oriented skills" as drudge work and certain knowledge tasks get automated, LinkedIn CEO Ryan Roslansky told the Talent Connect Summit.
- 72% of American executives surveyed by LinkedIn said soft skills are more valuable to their organization than AI skills.
What they're saying: "AI's going to make it virtually impossible for a one-off moment of learning [like a degree] to last an entire career," Roslansky said.
- Campaigners against elitism in workplaces see opportunity in AI: "Over-credentialing a job that doesn't need a four-year degree is a mistake. You pay a degree premium and miss out on good candidates," said Gerald Chertavian, founder and CEO of the non-profit YearUp.
Yes, but: As of 2021, the earnings gap between workers with a four-year degree and those without was still growing, and the unemployment rate for college graduates remained lower than for Americans without degrees, per Pew Research.
3. Practical steps for companies to do AI right
Illustration: Sarah Grillo/Axios
EqualAI, a non-profit working with tech companies and the World Economic Forum to highlight and reduce AI harms, has shared exclusively with Axios a collection of the most effective techniques used by participants in its AI governance program, including executives from PepsiCo, Salesforce, Verizon and AWS.
Why it matters: "Responsible AI" has become a go-to slogan for organizations signaling that they're taking AI, and AI safety, seriously. But in the rush to look responsible, and in today's regulatory void, many are confused about what the concept means in practice.
- Uncertainty and delays around AI legislation and litigation have increased the urgency for interim guidance and action on the responsible use of AI.
- "It's not someone else's problem. It's for every company," Miriam Vogel, CEO of EqualAI, told Axios. "So many people feel uncomfortable or that they don't belong [in AI debates], but you absolutely must play," she urged.
Details: The most notable suggestions in the papers include:
- Designating "one senior executive who is ultimately responsible for AI governance" who is kept accountable by a committee.
- Involving non-tech employees in the design and implementation of AI features used by an organization — including by rewarding that involvement with bonuses — as part of ensuring "human input and oversight into all stages of AI decision-making."
- Seeking out external stakeholders who can provide feedback on how your organization is deploying AI.
- A simple definition of responsible AI as AI that is "safe, inclusive, and effective for all possible end users," mitigating the risks of "any unintended use case."
Yes, but: More than 40 other organizations, frameworks and policy papers already occupy this space. This paper could complement those approaches — or add to the clutter.
4. Training data
- Illinois has a plan to become a hub for the chip industry. (Axios)
- There's no such thing as reliable AI watermarking, say researchers who broke all the methods they tested. (Wired)
- A "rapid response cohort" of six AI specialists will be deployed to Congressional offices in 2024 under the auspices of the American Association for the Advancement of Science.
- Safeguards for Meta's new generative AI stickers are "incredibly rudimentary," says internet studies professor Tama Leaver, allowing users to enter prompts like "child with a grenade" and "elon musk mammaries." (Gizmodo)
- Bill Gates-backed startup Likewise today launches an AI-driven chatbot that will recommend entertainment and books. (Wall Street Journal)
- On tap: The FTC is hosting a roundtable on AI in creative fields at 3pm ET.
5. + This
This video compiles all the science experiments you weren't allowed (or didn't dare) to do.
Thanks to Megan Morrone and Scott Rosenberg for editing and Bryan McBournie for copy editing this newsletter.