President Trump said Sunday the U.S. military destroyed nine Iranian warships and is in the process of destroying the rest of Iran's navy.
Why it matters: The U.S. strikes target Iran's ability to close the strategic Strait of Hormuz, the narrow waterway through which roughly a fifth of the world's oil supply flows.
OpenAI's new deal with the Pentagon does not explicitly prohibit the collection of Americans' publicly available information — a safeguard that rival Anthropic says is crucial to preventing domestic mass surveillance.
Why it matters: OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and the Pentagon's lead AI negotiator Emil Michael all say they care about civil liberties, but disagree on whether the law today offers enough protections for AI use.
Anthropic's Claude hit No. 1 in U.S. app downloads Saturday, overtaking ChatGPT, after the Pentagon blacklisted the company for refusing to loosen safeguards for military use of its AI model.
Why it matters: The long-term business impact for Anthropic remains unclear. But in the short term, the clash has fueled interest in Claude, as some social media users call for dumping ChatGPT over OpenAI's deal with the Pentagon.
The AI trade is getting investors used to a cycle of panic-driven selling, overly euphoric rallies, and then more panic.
The big picture: At first, AI made Wall Street's dreams come true, driving the S&P 500 higher by double digits three years in a row. Now, as AI threatens incumbent players and sectors (hi software), it's keeping traders up at night.
Sam Altman said late Friday night that his company reached an agreement with the Pentagon to use its AI models, after the Defense Department accepted safety red lines similar to rival Anthropic's.
Why it matters: The Pentagon has blasted Anthropic for days, contending its red lines for military AI use — no mass surveillance, no autonomous weapons — are philosophical and "woke."
President Trump said Friday the U.S. government would blacklist Anthropic, and the Pentagon declared the company a "supply chain risk," in the most consequential and controversial policy decision to date at the intersection of artificial intelligence and national security.
The big picture: Anthropic rebuffed the Pentagon's demand to lift all safeguards on the military's use of its model, Claude, citing concerns about the use of AI for mass domestic surveillance and the development of weapons that fire without human involvement.
Anthropic vowed to challenge the Pentagon in court over its blacklisting of the company for refusing to lift all safeguards on the military's use of its model, Claude — adding it's "deeply saddened" by the escalating dispute.
Why it matters: The frontier AI company is doing what few other companies have done since Trump's second term began — directly and publicly challenging the administration.
A former Trump AI official blasted the Pentagon's blacklisting of Anthropic as "attempted corporate murder" on Friday.
Why it matters: It's a recognition of the stakes in the administration's effort to cull what Trump calls a "woke" company — one that happens to have lately dominated the field.