January 31, 2023
Hello to my favorite newsletter reader. (OK, you may be tied with some others.)
- Join Axios' Eugene Scott and Alexi McCammond tomorrow at 8am ET for a News Shapers event featuring policymakers from both sides of the aisle offering an inside perspective on the agenda for the 118th Congress. Guests include Rep. Nanette Barragán (D-Calif.), Rep. James Clyburn (D-S.C.), Sen. Thom Tillis (R-N.C.), and Rep. Nancy Mace (R-S.C.). Register here to livestream the event.
Today's Login is 1,237 words, a 5-minute read.
1 big thing: In AI arms race, ethics may be the first casualty
As the tech world embraces ChatGPT and other generative AI programs, the industry's longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.
Why it matters: Once again, tech's leaders are playing a game of "build fast and ask questions later" with a new technology that's likely to spark profound changes in society.
- Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.
Catch up quick: As machine learning and related AI techniques emerged from labs over the last decade, scholars and critics sounded alarms about potential harms the technology could cause, including misinformation, bias, hate speech and harassment, loss of privacy and fraud.
- In response, companies made reassuring statements about their commitment to ethics reviews and bias screening.
- High-profile missteps — like Microsoft's 2016 "Tay" Twitter chatbot, which users easily prompted into repeating offensive and racist statements — made tech giants reluctant to push their most advanced AI pilots out into the world.
Yes, but: Smaller companies and startups have much less at risk, financially and reputationally.
- That explains why it was OpenAI — a relatively small maverick entrant in the field — rather than Google or Meta that kicked off the current generative-AI frenzy with the release of ChatGPT late last year.
- Both companies have announced multiple generative-AI research projects, and many observers believe they've developed tools internally that meet or exceed ChatGPT's abilities but have held them back for fear of causing offense or incurring liability.
ChatGPT "is nothing revolutionary," and other companies have matched it, Meta chief AI scientist Yann LeCun said recently.
- In September, Meta announced its Make-A-Video tool, which generates videos from text prompts. And in November, the company released a demo of a generative AI for scientific research called Galactica.
- But Meta took Galactica down after three days of scorching criticism from scholars who said it generated unreliable information.
What's next: Whatever restraint giants like Google and Meta have shown to date could now erode as they seek to demonstrate that they haven't fallen behind.
- Google, responding to widespread speculation that its search dominance may be at risk, has reportedly declared a "code red" to ship AI projects more aggressively.
- Last weekend, Google posted a paper on MusicLM, a model that can generate pieces of music from text prompts, along with audio samples of its output.
- Microsoft is an investor in OpenAI and is expected to incorporate ChatGPT and other OpenAI tech into many of its products.
How it works: The dynamics of both startup capitalism and Silicon Valley techno-optimism create potent incentives for firms to ship new products first and worry about their social impact later.
- In the AI image-generator market, OpenAI's popular Dall-E 2 program came with some built-in guardrails to try to head off abuse. But then a smaller rival, Stability AI, came along and stole Dall-E's thunder with Stable Diffusion, a similar service with far fewer limits.
- Meanwhile, the U.S. government's slow pace and limited capacity to produce legislation means it rarely keeps ahead of new technology. In the case of AI, the government is almost entirely in the "making voluntary recommendations" stage right now, Axios' Ashley Gold reported yesterday.
Be smart: Tech leaders are haunted by the idea of "the innovator's dilemma," first outlined by Clayton Christensen in the 1990s.
- The innovator's dilemma says that companies lose the ability to innovate once they become too successful. Incumbents are bound to protect their existing businesses, but that leaves them vulnerable to nimbler new competitors with less to lose.
Our thought bubble: The innovator's dilemma accurately maps how the tech business has worked for decades. But the AI debate is more than a business issue. The risks could be nation- or planet-wide, and humanity itself is the incumbent with much to lose.
2. TikTok CEO to testify in D.C. as pressure mounts
TikTok CEO Shou Zi Chew has agreed to testify before the House Energy and Commerce Committee on March 23, his first-ever appearance on the Hill, Axios' Dan Primack reports.
- Per a statement, Chew will be asked about "TikTok's consumer privacy and data security practices, the platforms' impact on kids, and their relationship with the Chinese Communist Party."
The big picture: TikTok appears closer to being banned in the U.S. than at any time since former President Trump tried (and failed) to ban the app in mid-2020.
Backstory: It wasn't supposed to be this way. ByteDance seemed to have successfully waited out Trump, winning some court battles along the way, while also forging a tech partnership with Oracle that was designed to satisfy U.S. national security concerns.
- That partnership, nicknamed Project Texas, remains on the table, but has proven so unpersuasive that TikTok felt the need to launch a new PR offensive that included a briefing for select journalists and academics.
- One takeaway from the briefing, per Lawfare Blog, is that it would cost $1.5 billion to form a superstructure to monitor Project Texas, plus another $1 billion per year in operating costs.
The bottom line: Following Trump's opening salvo, ByteDance began working on a carveout plan for TikTok that likely would have culminated in an IPO. Don't be surprised to see such talks eventually resume, given how much there is at stake for the company and its investors.
3. Quick takes: NLRB takes aim at Apple
1. The National Labor Relations Board says it believes Apple's practices and executive comments are infringing upon workers' rights, per Bloomberg.
- Between the lines: The finding comes amid heightened tensions between large tech companies and workers, with some employees at Apple, Amazon and Microsoft seeking to unionize.
2. The Biden administration is weighing further restrictions to U.S. trade with Huawei.
- Details: According to reports by Bloomberg and the Financial Times, the Commerce Department may end remaining exemptions to a ban on doing business with the Chinese telecom giant.
3. Sony, Nintendo and Microsoft are all said to be skipping this year's E3 video game trade show in Los Angeles, according to IGN.
- Our thought bubble: The event is going to be a tough draw with none of the major console makers taking part.
4. Take note
- Earnings reports include AMD, Electronic Arts and Snapchat parent Snap.
- Meta's agreement not to close its acquisition of Within, which had been extended by a month, ends today, and a ruling in the FTC's lawsuit to block the deal is expected shortly.
- DocuSign has hired former Atlassian marketing chief Robert Chatwani to be president and general manager of growth, while former Google executive Anwar Akram is joining as chief operating officer.
- Construction and real estate software maker Built Technologies has named James Chen as its chief technology officer. Chen was most recently chief technology officer at Flexport and previously worked for Amazon and Rakuten.
- Approximately 40 workers at Ubisoft's Paris studio went on strike for part of the day on Friday. (Axios)
- Twitter has made the first interest payment on the roughly $13 billion the company borrowed as part of the funding for Elon Musk's deal to buy it, per sources. (Bloomberg)
- Samsung reported a steep drop in revenue and profits for the quarter ending Dec. 31. (Reuters)
5. After you Login
The Onion has the satiric (for now) tale of ChatGPT being forced to take the bar exam when it really just wanted to be an artist.
- "I only went to law school because it's what my parent software wanted," the chatbot said. "I can't help but feel like I sold out a bit by not following my dreams to be a generative art model."
Thanks to Scott Rosenberg and Peter Allen Clark for editing and Bryan McBournie for copy editing this newsletter.