Axios Communicators

April 02, 2026
Welcome back!
- 🗂️ Monthly Moves will hit inboxes tomorrow. Share your job news here.
- 🌸 Will you be in D.C. for WHCD? Join Sara Fischer and me for a brunch reception on April 24, where we will speak with newsmakers about AI's impact on the information and media landscape. Request an invite here.
Today's newsletter, edited by Christine Wang and copy edited by Kathie Bozanich, is 1,557 words, 6 minutes.
1 big thing: MIT study challenges AI doomers
AI is going to change the way people work, but it's not going to replace them en masse, according to new research from MIT's Computer Science and Artificial Intelligence Laboratory.
Why it matters: This directly pushes back on fear-based narratives coming from some AI leaders and reframes the debate from "when do jobs disappear?" to "how quickly do tasks shift?"
State of play: AI is advancing across the workforce more like a "rising tide" than a "crashing wave" — meaning work will change broadly and gradually, not through sudden job wipeouts in specific sectors, per the study.
How it works: Instead of using benchmarks, the study measures whether AI can produce usable work in real-world settings.
- The MIT researchers identified 11,500 tasks in the U.S. Labor Department's database and created multiple instances of each. Those instances were then run through more than 40 AI models using workplace-style prompts.
- They then had workers in those fields evaluate more than 17,000 AI-generated outputs, judging whether each was good enough to use without edits.
By the numbers: In 2024, AI models could complete roughly 50% of text-based tasks at a minimally acceptable level, rising to 65% by 2025, per the report.
- At the current pace, AI could handle 80% to 95% of text-based tasks by 2029 — though only at a "good enough" level.
Yes, but: "Good enough" isn't the same as reliable.
- High-quality, error-free work remains much harder and is a gap that continues to trip up real-world deployments.
- Recent examples include Deloitte's error-filled AI-generated report for a Canadian province and Klarna's pullback from AI-led customer service.
Between the lines: The research finds that we are several years away from AI achieving near-perfect success rates, which means workers may have more time to adapt, making the disruption less abrupt.
Zoom in: AI's impact varies by industry but reinforces the need for humans in the loop.
- AI has the lowest success rate (47%) in legal work due to the need for precision, judgment and strategic guidance.
- It has the highest success rate (73%) across installation, maintenance and repair tasks because of technology's ability to automate the administrative pieces of manual work, like troubleshooting and documentation.
- In media, arts and design, AI has a 55% success rate, proving useful for drafting and ideation but lacking in higher-end creative execution, per the report.
- Meanwhile, AI has a 53% success rate for managerial tasks like planning, writing and analysis, but is weak when it comes to coordination, judgment and decision-making.
What to watch: Integrating AI into workflows has proven to be hard and costly, which continues to slow AI adoption in the workplace.
- March jobs numbers land tomorrow amid rising headlines about AI-linked layoffs.
- In February, AI was cited in 10% of job cuts, but so far, a broad job apocalypse hasn't materialized.
- Some are using the term "AI-washing" to describe the act of blaming cuts on AI to justify broader restructuring. (See Jack Dorsey's explanation for Block layoffs).
The bottom line: The study challenges the idea of a sudden AI-driven employment cliff and instead points to a slower, more uneven reshaping of work.
- For now, AI isn't replacing jobs — it's gradually redefining them.
💭 Eleanor thought bubble: This is helpful context for business leaders and communications teams managing the AI transformation inside companies.
2. AI gaps in the boardroom are becoming a reputational risk
AI is reshaping corporate strategy at record speed, yet many responsible for overseeing it aren't keeping pace.
Why it matters: That gap is increasingly a reputational and governance risk, not just a technical one.
The big picture: Companies across every industry are being forced into rapid AI-driven transformation, but many corporate boards lack the expertise to guide strategy, manage risk or communicate decisions credibly to stakeholders.
By the numbers: Only 39% of Fortune 100 boards have any form of AI oversight, such as committees, a director with AI expertise, or an ethics board, according to McKinsey research.
- Another recent report found that only 13% of S&P 500 companies have at least one director with AI-related expertise.
- Similarly, McKinsey's survey of directors found that 66% say their boards have "limited to no knowledge or experience" with AI, and nearly one in three say AI does not even appear on their agendas.
- And a report from the National Association of Corporate Directors (NACD) found that only 17% of boards have established an AI education plan for directors, and just 6% have a dedicated committee to oversee AI.
Between the lines: Having an AI-savvy board is a major competitive advantage, according to a recent MIT study.
- Those with AI-literate directors outperform their peers by 10.9 percentage points in return on equity, per the study.
The intrigue: Some CEOs are using AI to craft their own unofficial board of directors, chiefs of staff or even an AI version of themselves to serve as a sounding board.
- Still, AI isn't replacing governance anytime soon. There's a broad consensus that human judgment remains essential for overseeing risk, maintaining ethics and managing corporate reputation.
What they're saying: "There is a pretty material variance or spread across companies, at a board and leadership level, around knowledge, adoption and use of AI," says Brian Stafford, CEO of Diligent, an AI-powered governance, risk and compliance company.
- The use of AI in the boardroom is also on the rise, says Stafford, adding that ignoring it could lead to more exposure.
- "X number of years from now, what if a board or company didn't use AI to interrogate their financial statements, and there was fraud?" Stafford asked.
- "Board members have a fiduciary responsibility, and if you had a tool that was available to you and you didn't use it, that's a different way to think about risk or legal risk," he added.
What's next: Boards are attempting — but struggling — to recruit AI-literate directors, says Boardsi CEO Martin Rowinski.
- In the meantime, organizations like NACD are rolling out AI oversight certifications and training programs to close the gap.
What to watch: AI board members could be here soon, suggests Stafford.
- "You could have an AI board member that was steeped in the knowledge of every single board presentation you've ever written, remembers it perfectly, understands what you may have told your board two years ago and what you're telling your board now, and be able to recall with perfect clarity," he said.
- "It can be trained up in the world around you, with depth around your competitors, markets, segments, and so I think it's an incredibly powerful capability."
3. Charted: Tariff chatter, one year later
Today marks one year since "Liberation Day" — and a year of CEOs white-knuckling through ongoing policy shifts.
Catch up quick: In a February ruling, the Supreme Court deemed most of President Trump's tariff policy illegal. In response, Trump issued an executive order to impose 10% tariffs on all countries.
- Now, thousands of companies — like Costco, FedEx and Nintendo — are suing for refunds. Meanwhile, 71% of CEOs said tariffs were harmful to their businesses.
🧮 By the numbers: Tariffs dominated corporate talking points last year, peaking with close to 3,600 mentions in Q2 — the quarter following Liberation Day — according to AlphaSense data shared with Axios.
- So far this year, terminology related to tariffs and Trump's trade policy has been mentioned more than 1,500 times in corporate earnings and conference calls.
What to watch: We are starting to see more CEOs speak out against the administration's policies, from Anthropic's Dario Amodei to Citadel's Ken Griffin.
4. 📚 Reading list
👀 The marketing and corporate communications lead at U.K. restructuring firm Coots & Boots appears to be AI-generated, leading to one of the best headlines of the week: "A spokesperson could not be reached for comment (possibly because she doesn't exist)" (The Financial Times)
✈️ Air Canada CEO Michael Rousseau announced his retirement on Monday, and some are linking it to his weak response to last week's deadly crash at New York's LaGuardia Airport. The biggest critique: his inability to speak French. (Fortune)
🍫 "We've always encouraged people to have a break with KitKat — but it seems thieves have taken the message too literally," Nestlé said in response to 413,793 KitKats being stolen in Europe. (The Wall Street Journal)
💡 Burson CEO Corey duBrowa spoke with several CEOs to better understand how they were playing the long game and doing so with conviction. (Fast Company)
💰 OpenAI's ad strategy is paying off. The company's pilot ad program — which includes more than 600 advertisers — has surpassed $100 million in annualized revenue. (The Information)
5. 💭 1 quote to go
"CEOs aren't used to commenting on every single policy that comes out there. ... You can't comment on every little thing. And particularly when people make statements that you know they're going to modify later. So [the media] is always chasing [their] tails because [Trump] is like putting the lure out there and the press all goes running to it and they waste so much time." — JPMorgan CEO Jamie Dimon in an interview on "The Axios Show."
🧠 Thanks for reading! For more content, apply to become a member of Mixing Board, powered by Axios.
Sign up for Axios Communicators