Nov 8, 2023 - Technology

Behind the Curtain: What AI architects fear most (in 2024)


Brace yourself: You will soon need to wonder if what you see — not just what you read — is real across every social media platform.

Why it matters: OpenAI and other makers of artificial intelligence technology are close to releasing tools that make the creation of convincing fake videos easy, almost magical, and ubiquitous.

One leading AI architect told us that in private tests, they can no longer distinguish fake from real, something they never thought would be possible so soon.

  • This technology will be available to everyone, including bad international actors, as soon as early 2024.
  • Making matters worse, this will hit just as the biggest social platforms have cut the staff who police fake content. Most have also weakened the policies meant to curb misinformation.

The big picture: Just as the 2024 presidential race hits high gear, more people will have more tools to create more misinformation or fake content on more platforms — with less policing. It will make 2020, a hot mess of misinformation, seem like a safe space for sanity.

  • A former top national security official told us that Russia's Vladimir Putin sees these tools as an easy, low-cost, scalable way to help tear Americans apart.
  • U.S. intelligence shows Russia actively tried in 2020 to help re-elect former President Trump. Top U.S. and European officials fear Putin will push for a 2024 win by Trump, who wants to curtail U.S. aid to Ukraine.

How bad could it get? By 2025, 90%+ of online content could be generated by AI, according to some estimates.

Yes, the White House and some congressional leaders want regulations to distinguish real videos from fake ones. The top idea: mandating watermarks so it's clear which videos are AI-generated.

  • But researchers have tried that, and the tech doesn't work reliably yet: watermarks can be stripped out or faked.
  • And don't expect any new teeth from a divided Congress in the run-up to a presidential election.
  • In any case, deciding which content is "AI-generated" is rapidly becoming impossible, as the tech industry rolls AI into every product used to create and edit media.

"Of course, it's a worry," said Reid Hoffman, co-creator of LinkedIn and forceful defender of AI.

  • "It's one of the places where AI and amplification intelligence could [produce] a negative outcome," he added.
  • Hoffman argues that open-source models (free for anyone to use) are the biggest threat. He backs and works only on closed models, including OpenAI's ChatGPT, because they can self-police.

The White House and some experts have similarly expressed concerns that open-source models, such as Meta's LLaMA large language model, could be abused by bad actors.

  • Others, including Mozilla, argue that open-source models force accountability and transparency.

Sam Altman, co-founder and CEO of OpenAI, told us: "This is an important near-term risk for the industry to address. We need a combination of responsible model deployment and public awareness."

  • "We also need continued collaboration across the AI industry, including with distribution channels like social media," he added.

Reality check: The best self-policing in the world won't stop the faucet of fake. The sludge will flow. Fast. Furiously.

  • It could get so bad that some AI architects told us they're pushing to speed up the release of powerful new versions so the public can deal with the consequences — and adapt — long before the election.

A senior White House official told us officials' biggest concern is the use of this technology and other AI capabilities to dupe voters, scam consumers on a massive scale and carry out cyberattacks.

  • Another sick use: revenge porn. Most fake video content in the early waves of AI misuse has been porn.

There's little that government (or you) can do to stop the coming flood of fake. It's on us to protect ourselves:

  1. Be alert. Reading this column is a good start. But also be on high alert before you act on anything you see online.
  2. Spread the word. Make sure others, especially kids, realize a new problem is coming. Share this column or bring up the topic in conversation. Don't assume government and companies will protect you.
  3. Clean your feed. It's OK to delete social media apps or at least be cautious about who you follow. Unfollow people who share nonsense.
  4. Share with great care. Don't like, retweet, post or re-post things you aren't certain are real. If something seems dubious, it probably is.
  5. Engage. AI might freak you out. Sorry, it's coming. The more you know, the more you can make it a force for good in your life.

"Behind the Curtain" is a column by Axios CEO Jim VandeHei and co-founder Mike Allen, based on regular conversations with White House and congressional leaders, CEOs and top technologists.

Go deeper: "How AI will turbocharge misinformation — and what we can do about it," by Axios chief tech correspondent Ina Fried.
