Axios AI+

December 05, 2024
Hello from DC, where I am in town to moderate an AI keynote at Victory Fund's conference on Saturday. Today's AI+ is 1,040 words, a 4-minute read.
1 big thing: What Trump will do about AI
President-elect Trump's promise to nix the Biden administration's AI executive order could end up being more of a rebrand than a repeal.
Why it matters: The AI executive order was a cornerstone of the Biden administration's tech policy, with sweeping directives designed to try to ensure the federal government is adopting and deploying AI responsibly.
Between the lines: Much of the executive order will already be implemented when Trump takes office.
- The executive order largely did not aim to regulate the private sector.
- The Biden administration instead leaned on voluntary commitments from companies while Congress started educating itself on the technology.
- When Trump takes office, he'll be starting from a place with little AI regulation on the books.
Two places where Trump could potentially overhaul the executive order are in the reporting requirements and the procurement standards.
- Biden's executive order requires companies to tell the Commerce Department when a model was trained using computing power that exceeded a certain threshold.
- The EO also says the government must take certain factors like climate impact into consideration when procuring AI, which the incoming administration may not be amenable to.
Flashback: Trump in his first term signed an AI executive order that was then codified into law as part of the National Artificial Intelligence Initiative Act of 2020 and could continue to inform his approach.
- The EO called for plans to issue AI technical standards and to build the AI workforce.
- In 2020, the Trump White House also established the first national AI research institutes and issued another AI executive order focused on the federal government's use of AI.
What they're saying: "The Trump administration can build upon that past approach by cataloging how existing laws impact AI, and by examining other key AI priorities" such as research, technical frameworks and guidelines for government use of the tech, BSA senior vice president of U.S. government relations Craig Albright told Axios.
State of play: Directives agencies have already completed — more than 100 deadlines met to date — could be difficult to reverse.
- There are more to hit in the waning days of the Biden administration, including taking steps to mandate NIST guidelines and developing guidance for digital authentication this month.
- By January, agencies will have to address the most important potential data security risks related to chemical, biological, radiological and nuclear weapons.
- That same month, agencies will also have to submit a report to the president on how to advance AI global technical standards and how to mitigate cross-border risks to U.S. critical infrastructure.
Some directives could be at greater risk, as their deadlines will arrive after Trump takes office.
- These include the establishment of at least four new National AI Research Institutes by April and OMB guidance for labeling and authenticating government AI by June.
What we're watching: Trump is known for being unpredictable, and having a pro-AI-regulation adviser like Elon Musk in his ear could influence his approach.
- Since Biden's AI executive order, agencies have been required to name chief AI officers to manage the development and strategy of the tech.
- We'll be tracking whether these federal agency CAIOs get caught in the crosshairs of Musk's Department of Government Efficiency efforts.
2. OpenAI, Anduril partner on drone-defense plan
Defense contractor Anduril and ChatGPT maker OpenAI yesterday announced "a strategic partnership to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions" with an initial focus on anti-drone systems.
Why it matters: The U.S. now sees AI as a global race, chiefly with China, to dominate the new technology and use it to further national interests. At the same time, debates still rage about AI's reliability and long-term safety.
State of play: The companies said they will combine OpenAI's most advanced models and Anduril's military hardware and software to protect the U.S. from unmanned aircraft.
- The project will aim to "detect, assess and respond to potentially lethal aerial threats in real time" by training OpenAI models on Anduril's data about drone "threats and operations."
What they're saying: Anduril CEO and cofounder Brian Schimpf emphasized the effort's commitment to "responsible solutions" that help "military and intelligence operators to make faster, more accurate decisions in high-pressure situations."
- OpenAI CEO Sam Altman said the company "supports U.S.-led efforts to ensure the technology upholds democratic values" and aims to "help the national security community understand and responsibly use this technology to keep our citizens safe and free."
Between the lines: OpenAI started out as a nonprofit specifically intended to prioritize safeguards over speed in developing and deploying AI.
- More recently it has been in the vanguard of making advanced AI available to the general public, and has begun transforming itself into a more conventionally structured for-profit firm.
Flashback: Silicon Valley first emerged 50 years ago as a center of defense contracting, but working with the Pentagon has more recently become a source of controversy in parts of the industry, notably at Google.
- This summer, employees of Google's DeepMind signed a letter protesting the company's work for military organizations.
The bottom line: AI-defense partnerships are spreading. Last month OpenAI rival Anthropic partnered with Palantir to make Anthropic's Claude models available to U.S. intelligence and defense agencies.
Go deeper: In remote Texas, Anduril probes future of drone warfare
3. Training data
- OpenAI's ChatGPT now has over 300 million weekly active users, CEO Sam Altman said at a New York Times event. (The Information)
- Google DeepMind unveiled an AI weather model that largely beats other systems, especially at predicting unprecedented events that climate change has made more common and more severe. (Axios)
- DeepMind also debuted Genie 2, which can create navigable 3D worlds from a single image, akin to what Fei-Fei Li's World Labs announced earlier this week. (TechCrunch/Axios)
- Creators of the content analytics platform Parse.ly raised $64 million for their AI startup aimed at helping nontechnical employees create, use and share AI assistants. (Axios)
- Georgia lawmakers released a report on AI regulation that updates the state's current deepfake law to encompass election interference, transparency and labeling. (Axios)
- Elon Musk's xAI wants to rapidly expand its already massive Memphis data center to incorporate as many as 1 million GPUs. (Commercial Appeal)
4. + This
Not sure whether this was intentional or accidental, but both Disneyland and Disney World have hints as to their location embedded in their names.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Anjelica Tan for copy editing it.