Axios Codebook

February 21, 2025
Yes, it is Friday. Welcome back to Codebook.
- Sam is on vacation this week, but her editors have you covered.
- 📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,181 words, a 4.5-minute read.
1 big thing: Chaos continues in D.C.'s cybersecurity job market
The shock-and-awe firings of the new Trump administration's first month continue to unleash turmoil in Washington's cybersecurity workforce.
Why it matters: The Trump team's aggressive downsizing of federal cyber employees could encourage nation-state hackers who already target the U.S. and could leave American companies less protected from their attacks.
The big picture: Cuts at the Department of Homeland Security, the Cybersecurity and Infrastructure Security Agency, the National Institute of Standards and Technology, and the National Science Foundation could all have ripple effects for the nation's cybersecurity.
What they're saying: Mass layoffs of public-sector cybersecurity professionals are especially problematic since the broader industry is overworked and understaffed.
- Art Zeile, CEO of tech careers marketplace Dice and its parent company, DHI Group, says long hours and burnout afflict cybersecurity professionals much the same as they do air traffic controllers.
- Zeile told Axios there has been a deficit of cybersecurity professionals in government for the last 10 years. "There's no reason to shoot ourselves in the foot by incentivizing them to leave," he said.
- Federal cybersecurity work is also unique. "Government databases are extremely complicated and also old, in addition to being full of people's private information," Meredith Broussard, research director at the NYU Alliance for Public Interest Technology, tells Axios.
Between the lines: The continuing shortage of skilled cyber employees and high burnout rates for the employees who do have jobs add to the overwhelming air of uncertainty around who is currently defending U.S. networks.
- Team members of Elon Musk's Department of Government Efficiency have reportedly been hired at CISA.
- Many leadership positions in federal cybersecurity teams remain unfilled.
One of DHI's hiring platforms specifically targets employees with federal security clearances.
- In the first week of the second Trump administration, Zeile said, CISA told DHI to take all the open jobs off the platform "right now."
- "Then a week later they said, 'No, please reinstate all of the jobs immediately,'" Zeile said.
- Things seem to be a little more stable now, but "we're still in a very uncertain time," he said.
The Trump administration has already fired and then rehired critical employees from both the National Nuclear Security Administration and the Department of Agriculture, but the cybersecurity employee shortage means private companies may snap up key talent quickly.
What we're watching: Zeile says it's too soon to tell if all of those laid-off employees will jump ship to private companies.
- But Victor Hoskins, president and chief executive of the Fairfax County Economic Development Authority, recently told the Wall Street Journal, "If there is a labor supply that is talented and available, it will be picked up."
2. OpenAI finds Chinese influence campaigns
OpenAI spotted and disrupted two uses of its AI tools as part of broader Chinese influence campaigns, including one designed to spread Spanish-language anti-American disinformation, the company said.
Why it matters: AI's potential to supercharge disinformation and speed the work of nation-state-backed cyberattacks is steadily moving from scary theory to complex reality.
Driving the news: OpenAI published its latest threat report today, identifying several examples of efforts to misuse ChatGPT and its other tools.
- One campaign, which OpenAI labeled "sponsored discontent," used ChatGPT accounts to generate both English-language comments attacking Chinese dissident Cai Xia and Spanish-language news articles critical of the U.S.
- Some of the short comments were posted on X, while the articles found their way into a variety of Latin American news sites, in some cases as sponsored content.
What they're saying: "As far as we know this is the first time a Chinese influence operation has been found translating long-form articles into Spanish and publishing them in Latin America," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said in a briefing with reporters.
- "Without our view of their use of AI, we would not have been able to make the connection between the tweets and web articles."
Another campaign, which OpenAI dubbed "peer review," consisted of accounts using ChatGPT to "generate detailed descriptions, consistent with sales pitches" for a social media listening tool that its creators claimed had been used to send reports of protests to the Chinese security services.
- OpenAI banned the related accounts, saying they violated company policies that "prohibit the use of AI for communications surveillance, or unauthorized monitoring of individuals."
- Other campaigns called out in the latest report include several scams as well as influence campaigns tied to North Korea and Iran and an effort to influence an election in Ghana.
Between the lines: OpenAI, which started publishing threat reports last year, says it's doing so "to inform efforts to understand and prepare for how the P.R.C. or other authoritarian regimes may try to leverage AI against the U.S. and allied countries, as well as their own people."
- As the new report shows, AI tools can be used at various points in a disinformation campaign, sometimes revealing other aspects of a group's techniques, aims and weaknesses.
- "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our models," Nimmo said.
Yes, but: As open-source tools become more powerful — and are able to be run locally — threat actors may use them for more of their tasks, making it harder for such efforts to be detected.
- In the "peer review" case, for example, OpenAI noticed that while ChatGPT was used to edit and debug some code, there were also references to the use of open-source models, including DeepSeek and a version of Meta's Llama 3.1.
"This was a really interesting case where it looks like a threat actor at least mentions the use of a bunch of different models," Nimmo said, noting it's not clear what motivated the use of so many tools.
- "Maybe they wanted to break up their signal," he said. "There's a bunch of different reasons that some of this could be going on."
The bottom line: As AI continues to ratchet up attackers' capabilities, AI providers are having to put more effort into tracking and foiling them — often with the help of their own tools.
3. Catch up quick
@ D.C.
👁️ Federal employees detail the efforts by Elon Musk's DOGE to obtain full control of systems inside several federal agencies and access to many Americans' most sensitive personal information. (The Atlantic)
🤑 The SEC will replace its crypto fraud unit with a new team focused on "cyber-related" misconduct. (Recorded Future News)
@ Industry
🔓 Apple removed end-to-end encryption from several iPhone features in the U.K., following a request to create a backdoor in the software. (Bloomberg)
📁 The U.S. government isn't ready to help companies deal with the growing security risks of enterprise large language models. (Wall Street Journal)
@ Hackers and hacks
🚨 Encrypted messaging tool Signal offered an update this week after Google reported that Russia-linked groups were sending phishing messages that spoofed Signal invite QR codes. (Wired)
☎️ A U.S. Army soldier who was arrested and indicted last year pleaded guilty this week to hacking and stealing phone records from AT&T and Verizon. (TechCrunch)
4. 1 fun thing
Whoever said that some jobs just aren't compatible with work-from-home hasn't met @zerothesupercollie. (h/t WeRateDogs)
Thanks to Scott Rosenberg and Megan Morrone for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Codebook, spread the word.
Sign up for Axios Codebook