Axios Login

May 30, 2023
Hi, it's Ryan.
Join Axios’ Sophia Cai and me tomorrow at 8:30am ET in Washington, D.C., for a News Shapers event featuring Cybersecurity and Infrastructure Security Agency director Jen Easterly and Sen. Tim Scott (R-S.C.) in his first in-person news event since announcing his candidacy for the Republican presidential nomination. Register here to attend in person.
Today's Login is 1,066 words, a 4-minute read.
1 big thing: Zero chance for global AI regulation
Illustration: Shoshana Gordon/Axios
It will likely take an AI-related catastrophe before any international rulebook or organization begins regulating AI technologies.
Why it matters: AI innovators and researchers worry about both the doomsday scenario of a runaway super-AI and the less science-fictional but more likely harms that could follow from hasty deployment of the technology, in the form of cyberattacks, scams, disinformation, surveillance, and bias.
Driving the news: Tech policymakers meet in Sweden today, at the edge of the Arctic Circle, for the twice-yearly Transatlantic Trade and Technology Council.
- They're mostly skirting the calls for regulation from leading CEOs working on AI, and are instead focused on what they can do to limit China's access to chips and critical minerals, alongside baby steps toward shared terminology around AI risks.
- Microsoft president Brad Smith told "Face the Nation" he expects U.S. regulation within a year.
- Meanwhile, a new open letter from AI leaders — signed by executives from OpenAI, Google's DeepMind and Anthropic, as well as leading AI scientists — warns that "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the New York Times reports. The letter is expected to be released today.
What’s happening: CEOs say they support global governance of the most serious risks associated with AI.
- The founders of OpenAI, the company behind ChatGPT, think the International Atomic Energy Agency — which exists to ensure nuclear tech is used for peaceful purposes — is a good model for reining in AI systems that approach "superintelligence."
- The Organization for Economic Cooperation and Development — an economic think tank for governments — called for global technical standards for trustworthy AI in principles published in 2019.
The big picture: There's no precedent for global regulation of a potentially dangerous field or specific technology without the cue of some catastrophic event.
- The United Nations was built from the ashes of World War II.
- It took the U.S.'s use of nuclear weapons against civilians and a nuclear arms race that threatened global devastation to eventually prompt the adoption of guardrails in that field.
Between the lines: The IAEA opened 12 years after nuclear bombs were dropped on Hiroshima and Nagasaki.
- It took another 13 years for the Nuclear Non-Proliferation Treaty to come into effect, and even then that didn't stop India, Pakistan and most notoriously North Korea from developing warheads.
What they're saying: Sam Altman and his OpenAI co-founders want to see “an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”
- Given that neither national nor international authorities can keep pace with AI innovation, the founders suggest companies "begin implementing elements of what such an agency might one day require," followed by national governments and, eventually, global coordination.
Microsoft's Smith is in lockstep with OpenAI (which Microsoft funds) in wanting "proper control over AI," including government licensing of powerful models and watermarking of AI-generated content.
- Smith supports specific regulations for three layers of the AI technology stack — applications, models and infrastructure — without getting into details about how this could work globally.
Sundar Pichai, Google's CEO, told "60 Minutes" he supports a global treaty system for managing AI.
BSA, a software trade association that includes Adobe, Cisco, IBM, Oracle and Salesforce as members, has been advocating for AI regulation since 2021.
Flashback: The speediest modern example of international action in the face of a technological threat was set by the negotiators of the Montreal Protocol in the 1980s, who took four years to ban around 100 chemicals that had created a dangerous hole in the Earth's ozone layer.
- Work began in 1985, the U.S. Senate unanimously ratified the deal in 1988, and it came into effect in 1989.
- Some argue COVAX, the global COVID vaccine delivery partnership, represents a more rapid global mobilization. But its results were mixed and the International Health Regulations that guide pandemic responses remain largely toothless.
Reality check: While CEOs have offered unusually strong support for regulation in theory, their actions are often inconsistent, recalling social media platforms' efforts to resist regulation in the 2010s.
- ChatGPT doesn't comply with the OECD AI principles, which call for explainable AI. Altman last week floated the idea of pulling out of EU markets because of "over-regulation," before backtracking on Friday.
- Google is declining to offer its Bard chatbot in the EU and Canada for unstated reasons — but it might have something to do with privacy investigations of ChatGPT underway in Italy, Germany, France, Spain and Canada.
2. VCs ride AI wave
Illustration: Aïda Amer/Axios
A growing number of companies’ venture arms are rolling out efforts focused on investing in the current AI startup boom, reports Axios' Kia Kokalitcheva.
The big picture: Generative AI and large language models are getting a lot of attention for their potential to automate many business tasks, big and small.
- Of the 44 startups Workday Ventures has backed, about 25% are already using tech developed by OpenAI, according to managing director Barbry McGann.
Zooming in: Some of the companies investing include:
- Amazon: The company recently announced an accelerator program for generative AI startups that includes Amazon Web Service credits for participants.
- Salesforce: In March, the company announced a new $250 million fund dedicated to generative AI.
- Workday: In February, it said it was adding $250 million to its existing fund specifically to invest in AI and machine learning startups, among other areas.
- OpenAI: The buzzy company behind the popular ChatGPT app has raised a $175 million fund to invest in AI startups.
What's next: Expect more corporate venture arms — and stand-alone VC firms — to focus on backing generative AI tech.
3. Twitter debate of the day
Screenshot: @cwhowell123 (Twitter)
When you try to incorporate ChatGPT into your classrooms and it creates truth issues 100% of the time.
4. Take note
ICYMI

- Nvidia is close to becoming the fifth tech company to be valued at over $1 trillion (after Apple, Google/Alphabet, Microsoft and Amazon), after the chipmaker added around 25% to its market value since beating earnings expectations on May 23. (CNBC)
- Amazon has rolled back a climate pledge to make 50% of its shipments net-zero emissions by 2030 and will now aim for net-zero across all operations by 2040. (Business Insider)
5. After you Login
Photo by Alexi Rosenfeld/Getty Images
Check out Manhattanhenge: The sun will set in perfect alignment with Manhattan's east-west street grid today.
Thanks to Scott Rosenberg and Peter Allen Clark for editing and Carolyn DiPaolo for copy editing this newsletter.