December 05, 2022
I hope all your teams beat the other teams over the weekend. Today's Login is 1,236 words, a 5-minute read.
🧐 Situational awareness: Programmers' Q&A site Stack Overflow is temporarily banning code suggestions written by the ChatGPT bot, saying too often they're wrong but look right, per The Verge.
1 big thing: New AI chatbot is scary good
The newest AI wonder, ChatGPT — the latest in a line of rapidly evolving AI text generators — is causing jaws to drop and brows to furrow.
What's happening: Users are telling ChatGPT to rewrite literary classics in new styles or to produce performance reviews of their colleagues, and the results can be scarily good.
Why it matters: ChatGPT displays AI's power and fun. It could also make life difficult for everyone — as teachers and bosses try to figure out who did the work and all of society struggles even harder to discern truth from fiction.
Driving the news: Last week's public release of ChatGPT came from OpenAI, which had previously set benchmarks in this field with GPT-3 and its predecessors. (There's also an unofficial Twitter bot for those who don't want to bother with signing up for the service.)
- Early tech adopters went wild requesting and sharing stories, jokes and poetry, such as this Bible song about ducks and an amazing sonnet on string cheese.
Yes, but: The high quality of ChatGPT's responses adds to the fun, but also highlights the risks associated with AI.
- As we wrote just last week, a big pitfall for today's most advanced AI programs is their tendency to be "confidently wrong," presenting falsehoods authoritatively.
- That's certainly the case with ChatGPT, which can weave a convincing tale about a completely fictitious Ohio-Indiana war.
- Nightmare scenarios involve fears that text from AI engines could be used to inundate the public with authoritative-sounding information to support conspiracy theories and propaganda.
- OpenAI chief Sam Altman says that some of what people interpret as "censorship" — when ChatGPT says it won't tackle a user request — is actually an effort to keep the bot from spewing out false info as fact.
Between the lines: ChatGPT, like other text generators, also creates problems when it gets things right. Educators, who already often have to run essays through online tools to make sure they weren't plagiarized, worry that their difficult task could be made even harder.
Zoom in: I gave ChatGPT a few tasks on Sunday, with varying success.
- First I asked it to write an article on ChatGPT in Axios style, because that would have saved me a ton of time. It did fine summarizing its own capabilities, but knew nothing about Axios' style.
- I asked it to write a rap about me and I have to say, it's more flattering than some of the pictures I created of myself with the viral AI app Lensa.
Zoom out: Even in its present form, ChatGPT can serve up useful answers to plenty of questions — and that's without being trained on the latest news and information.
What they're saying: Though people are clearly fascinated by ChatGPT, opinion is decidedly mixed on its net impact.
- Box CEO Aaron Levie: "ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward."
- PyTorch co-creator Soumith Chintala: "ChatGPT seems to be **really** good for creative work and a solid starting point for mundane work (similar to CoPilot). It is unlikely i will trust it with automation, where you need predictability. I wish in the next iterations, they hook it up to verification systems."
2. Exclusive: Adobe to sell AI-made stock images
Adobe is opening its stock images service to creations made with the help of generative AI programs like Dall-E and Stable Diffusion, the company tells Axios.
Why it matters: While some see the emerging AI creation tools as a threat to jobs or a legal minefield (or both), Adobe is embracing them.
- At its Max conference in October, Adobe outlined a broad role it sees generative AI playing in the future of content generation, saying it sees AI as a complement to, not a replacement for, human artists.
The latest: Adobe says it is now accepting images submitted from artists who have made use of generative AI on the same terms as other works, but requires that they be labeled as such.
The big picture: Others are taking a more conservative approach. Getty Images, for example, said in September that it won't accept contributions that use generative AI, citing legal risks.
- Adobe, by contrast, seems comfortable with the risk. Although it is requiring creators to affirm they have proper rights to the works they submit, it will indemnify buyers of stock images should there be any legal challenges.
3. Report: Export control agency needs upgrade
Stronger limits on the export of U.S.-made technology are essential to containing threats from Russia and China, according to a new report shared first with Axios, Alison Snyder and I write.
Between the lines: Export limits can play a powerful role in ensuring national security, but the agency responsible for managing those rules needs a bigger budget and staffing to carry out that mission, according to the Center for Strategic & International Studies.
The big picture: Export controls are already playing a growing role in U.S. foreign policy, from limiting Russia's ability to access airline parts to slowing China's access to advanced chipmaking technology.
- "In the past few years, Trump and Biden administrations have both chosen to put tech competition at the heart of national security policy and have therefore similarly chosen to put tech export controls as one of the critical tools of U.S. national security policy," said Gregory Allen, one of the three authors of the report.
Yes, but: For those controls to be effective, Congress needs to beef up the budget for the Bureau of Industry and Security, the Commerce Department unit responsible for enforcing them.
- "The amount of work that this small operation in the [Commerce Department] has been asked to perform has increased massively in the past 3 to 5 years," Allen said.
It's not just staffing at the agency that's inadequate, but also the underlying technology infrastructure it uses.
- The report notes that staffers rely on internal tools that are limited and crash-prone, and still perform much of their work using a combination of Google searches and Microsoft Excel.
4. Exclusive: Facebook Dating tests age checks
Meta is testing age verification tools on Facebook Dating, a move the company says will make the product safer, per an announcement shared exclusively with Axios' Ashley Gold.
Why it matters: Meta and other tech platforms are getting ahead of a regulatory environment increasingly focused on the safety of children and teens online, with policy changes underway in the U.S. and abroad.
Driving the news: Meta is launching a test of age verification tools, which it says have been successful on Instagram, to limit Facebook Dating to adults.
- Meta will test two approaches: video selfies, screened by partner company Yoti's age-estimation software, or ID uploads.
What they're saying: Meta says these techniques have proven effective on Instagram, where testing since June showed the service was able to keep 96% of teens who tried to edit their birthdays from doing so.
- "We are considering, across our services, what are the places that we want to focus on understanding age in order to ensure people are in the right experience?" Erica Finkle, Meta director of data governance, told Axios.
5. Take note
6. After you Login
This installment of "cat vs. dog" does not go as expected.
Thanks to Scott Rosenberg and Peter Allen Clark for editing and Bryan McBournie for copy editing this newsletter.