Axios Codebook

October 25, 2024
😎 TGIF, everyone. Welcome back to Codebook.
- 📆 Join my colleagues at our next Future of Defense event in Washington, D.C., on Nov. 13. Palantir head of defense Mike Gallagher and Anduril co-founder and CEO Brian Schimpf are among the speakers. Request an invite here.
- 📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,331 words, a 5-minute read.
1 big thing: Election disinformation cycle isn't slowing
Nation-state election disinformation won't end on Nov. 5, and government officials are preparing to fend off a wave of lies about the outcome.
Why it matters: Conspiracy theories and partisan social media posts peddled by Russia, Iran and China now have a longer shelf life.
- Adversarial nations have an ambitious goal this time around: to incite another event like the Jan. 6, 2021, attack on the U.S. Capitol.
Driving the news: Russia, China and Iran are better prepared than they were four years ago to flood the internet with disinformation after the polls close on Election Day, intelligence officials said in an assessment this week.
- The intelligence community now believes that these countries will keep conducting information operations through Inauguration Day on Jan. 20.
- Microsoft said in its own report this week that Russia, Iran and China will continue their disinformation campaigns to cast doubt on the U.S. elections' outcomes.
The big picture: Fake news stories, partisan-leaning social media posts and website defacements will be key tactics for nation-state adversaries in the months leading up to the inauguration, according to officials.
- China, Russia and Iran are likely to amplify posts or spread lies that look to undermine confidence in the election and chip away at trust in the democratic process.
Zoom in: Russia and Iran are "willing to at least consider tactics that could foment or contribute to violent protests," according to the intelligence community assessment.
- China, Russia and Iran could deface and take down election websites to feed unfounded concerns that votes are being tampered with.
- Some actors may also use AI and other tools to publish fake election results or create deepfake audio and video that report unofficial results.
Between the lines: Concerns about disinformation spreading after Election Day are unique to this year's elections, Robert Johnston, CEO of Adlumin, told Axios.
- "That was not the narrative in 2020," said Johnston, who helped the Democratic National Committee investigate the 2016 Russia hack.
- The intelligence community feels "a sense of urgency to get something out before the election in hopes to let Americans know to be mindful of what you see," he added.
Flashback: In 2020, social media platforms had tougher content moderation policies in place aimed at stopping the spread of election lies.
- Facebook reduced the visibility of posts and comments that could incite violence, and it stopped suggesting groups for users to join that they might be interested in.
- But even then, Facebook groups remained a major recruitment tool for the "Stop the Steal" movement.
Yes, but: While social media sites are still investigating nation-state disinformation campaigns, they've also taken new stances on how to moderate political misinformation.
- Meta has since started allowing political ads that make incorrect statements about the outcome of the 2020 election and voter fraud.
- X owner Elon Musk has spread his fair share of conspiracy theories.
The bottom line: The intelligence community says getting ahead of conspiracy theories and encouraging proactive communication from local and state officials are the best ways to blunt the impact of nation-state disinformation.
- Voters should also trust reputable news outlets over random sources found on social media, Johnston said.
2. Fake IT worker schemes go global
A second cybersecurity company has detected a fake IT worker trying to infiltrate its ranks — but this time, the job applicant wasn't from North Korea.
Why it matters: Officials have been focused on the threat North Korea-based IT workers pose to U.S. companies.
- But the latest case study suggests bad actors are now taking up North Korea's tactics to conduct espionage or finance their own government programs.
The big picture: Since 2022, the U.S. government has been warning that North Korean IT workers are posing as Americans to evade sanctions and land coveted, high-paying remote jobs to help pay for the country's missile program.
- These job applicants often steal legitimate Americans' identities and use AI tools to obfuscate their voices or change their likenesses in video calls to go undetected.
Driving the news: HYPR, a passwordless authentication and identity security provider, said in a blog post yesterday that after conducting multiple live video interviews, it hired someone who was posing as an Eastern European software engineer.
- However, the company spotted several red flags while onboarding the person: He submitted documents from a location at least 300 miles from his reported home address. He declined to appear on video during calls. And he failed a separate facial recognition test.
- The employee ended up leaving the role before HYPR could finish onboarding him or even provide any login credentials for its systems, according to the blog post.
Catch up quick: KnowBe4, a popular cybersecurity training platform, fell victim to a North Korean IT worker scam in July.
- In that case, the employee received a corporate laptop and attempted to transfer suspicious files.
What we're watching: Insider threats have become a top concern across the cybersecurity, cryptocurrency and AI industries.
- Just how widespread these schemes are has yet to be determined.
3. Bots venture beyond the text box
Anthropic's announcement this week that it's giving its Claude model a new "computer use" capability has the AI world buzzing.
Why it matters: This doesn't mean that bots have busted free of the chat box to run loose on the desktop and in the browser — but that day looks much closer, and increasingly inevitable.
State of play: Anthropic's "computer use" lets developers and advanced users tell Claude to go off and do things that make use of other applications on a computer — like collecting data from the web and moving it into a spreadsheet, or building, deploying and debugging a new website from scratch.
- This is one version of what the AI industry means by "agents," and it's not hard to see how powerful it could be.
- "It feels like delegating a task rather than managing one," Wharton School professor and AI-use guru Ethan Mollick wrote about Claude's new abilities.
Experts and insiders both foresee a massive multiplier effect in knowledge work as AI keeps adding new abilities.
- In an impromptu onstage demo at the TEDAI conference in San Francisco on Tuesday, Mollick showed what that might look like.
- He spun up three separate "assignments" for chatbots (including both ChatGPT and Claude, without the new "computer use" mode) in quick succession — researching a business, building a financial dashboard, and "figure out what this is" for a random folder.
- Then he kept talking while the bots showed their work onscreen in separate windows. It was like a plate-spinning circus stunt, only the acrobats were bots.
Yes, but: Anthropic isn't letting Claude go crazy on your laptop or phone in the wild quite yet.
- The desktop Claude works on is a sandboxed virtual machine, a software-only computer running in the cloud within some constraints, as blogger-developer Simon Willison explains.
4. Catch up quick
@ D.C.
👀 A Democratic political operative has published the trove of emails an Iranian hacking group stole from the Trump campaign. (Reuters)
🇷🇺 Elon Musk has been in regular contact with Russian President Vladimir Putin since 2022. (Wall Street Journal)
🏛️ The White House has issued its highly anticipated national security memorandum on AI. (Axios Pro)
@ Industry
💰 Ireland's privacy regulator has fined LinkedIn $335 million for using users' data for advertising purposes without first getting their consent. (The Record)
🍎 Apple will pay security researchers up to $1 million if they can find an exploit that can run malicious code on its forthcoming private AI cloud. (TechCrunch)
📍 A mobile phone-tracking tool used by law enforcement and U.S. government agencies can track someone when they travel to an abortion clinic, according to new research. (NOTUS)
@ Hackers and hacks
⚠️ Fortinet disclosed a critical vulnerability in its FortiManager API that hackers had already exploited before it was discovered. (BleepingComputer)
👀 The Change Healthcare hack affected 100 million people, the U.S. health department said. (Reuters)
5. 1 fun thing
💪🏻 An online crypto sleuth has been using his powers for good to help recover funds stolen during scams.
- 🕵🏻‍♂️ Read about the anonymous vigilante in this Wired profile.
☀️ See y'all Tuesday!
Thanks to Megan Morrone for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Codebook, spread the word.
Sign up for Axios Codebook