Axios AI+

January 07, 2025
Nvidia founder Jensen Huang officially kicked off CES last night, debuting new chips, new foundation models trained on real-world spaces, and a $3,000 personal AI supercomputer.
- Don't forget to check out our ongoing roundup of CES news, a good chunk of which has been AI-related. Today's AI+ is 1,139 words, a 4.5-minute read.
Situational awareness: Mark Zuckerberg announced Meta would drop its third-party fact-checking program to "restore free expression" and turn to a "community notes"-style approach like that of Elon Musk's X. Meta will also stop downgrading political content on Facebook and Instagram.
1 big thing: LinkedIn's hottest job — AI engineer
Technical roles — specifically those focused on artificial intelligence — are the fastest-growing jobs in the U.S., according to LinkedIn's Jobs on the Rise list, out today.
Why it matters: 60% of the jobs are new to the list this year, and roughly half of these professions didn't exist 25 years ago.
State of play: LinkedIn examined millions of jobs that platform users started between Jan. 1, 2022, and July 31, 2024, to calculate a growth rate for each job title.
Leading the list is artificial intelligence engineer.
- Artificial intelligence consultant is No. 2.
- Physical therapist is third, workforce development manager fourth and travel adviser fifth.
Zoom in: Tech and engineering titles are also in demand, with bridge engineer (No. 18), nuclear engineer (No. 21), and instrumentation and control (I&C) engineer (No. 24) all making the list.
The big picture: Despite mass layoffs within the tech industry, AI expertise remains a top skill as companies across sectors lean on AI to increase innovation and efficiency.
- As of November 2024, there had been 16,591 new AI job postings — a 59% increase since January 2024 — with California, Washington and Texas serving as hotspots, according to the University of Maryland and LinkUp's AI job tracker.
Yes, but: There is still demand for in-person connection.
- Travel and event roles also rank high on the list, signaling a shift toward more in-person gatherings in 2025.
The bottom line: While AI is not yet coming for your job, humans who understand how to prompt and engineer AI might.
More on Axios: America's AI job hotspots, mapped
2. Exclusive: Two years till AI supercharges cyber weapons
The world has about two years to prepare for AI-powered cyber weapons capable of evading current security tools, a NATO-backed security startup warned in a new report shared first with Axios.
Why it matters: Companies need to start budgeting for better cyber defenses right now.
- Governments also need to quickly coordinate policies for responsibly using AI and securing critical infrastructure like energy grids and railways.
Driving the news: Goldilock, a U.K. cybersecurity startup focused on critical infrastructure security, says agentic malware will become a reality in two years.
- This malware would have the ability to worm its way through computer networks just like the infamous Stuxnet malware — but this time it could automatically update and adapt to a computer system to evade detection, the report warned.
- Instead of targeting specific systems, the malware would theoretically identify new targets on its own and automatically compromise them, Goldilock warns.
The big picture: Heightened global threats and instability — including the threat of a Chinese invasion of Taiwan as soon as 2027 — are creating a ripe environment for adversaries to invest in AI-powered cyber weapons, Stephen Kines, co-founder and COO of Goldilock, told Axios.
Threat level: Goldilock predicts that energy grids, transportation networks, financial institutions and health care systems are the most at risk for agentic malware.
Between the lines: Kines is most concerned about the rate at which AI is being deployed and developed — with limited guardrails.
- As of now, few hurdles exist to stop adversarial countries or cyber criminal gangs from developing their own agentic malware.
- "If we don't harness some better cybersecurity, then we have problems," Kines said. "And the reality is that Big Tech has not kept up."
The intrigue: Depending on whom you ask, AI-powered security tools could go a long way toward stymieing AI-powered malware.
- But Kines says fighting AI with AI alone won't solve the problem.
- "Because AI has been democratized, and anybody can use it, learn it, take existing code and apply it," Kines said. "You're never going to win that code war."
The bottom line: Organizations need to invest in AI-enhanced threat intelligence, network segmentation tools and AI-based detection systems, the report advised.
3. OpenAI's o3 model freaks out comp sci majors
OpenAI's announcement of its new o3 reasoning model has triggered another wave of anxiety among some computer science majors who fear AI will edge them out of the job market.
Why it matters: The new OpenAI model, though not yet widely available, is likely to power ChatGPT and other services eventually, and its capacity to independently tackle larger-scale projects could disrupt many professions.
State of play: One user on X said, "CS grads might honestly be cooked." Another user said they "might need to pivot."
- "Now with o3 from OpenAI, what am I supposed to do as a CS freshman?" a user asked on the "r/singularity" subreddit.
- OpenAI says o3 scores higher on one math benchmark than "human expert" level. Its performance on a coding benchmark beat the company's own chief scientist's score. And the model will only improve from here.
Yes, but: For people who want to go into computer science, "I would tell them there are so many new things that need to be built and would not worry at all," Pascal Van Hentenryck, director of Georgia Tech's AI Hub, told Axios.
- "The job opportunities are going to increase," he said. "The easy and tedious tasks will become automated and people will be able to work at a higher, sophisticated level."
Computer science continues to gain popularity as an undergraduate major, with 5.9% of class of 2025 students pursuing CS degrees compared with 4.8% of the class of 2024, according to data provided by Handshake.
Between the lines: Reasoning models like o3 use enormous amounts of computing resources and energy to arrive at their answers.
- The cost per task for some versions of the o3 model to answer a complex question can run as high as $1,000.
OpenAI CFO Sarah Friar recently said the company is leaving the "door open" to a $2,000 monthly subscription for its AI products.
- That's a high price tag, but Friar suggested it could be worth it to get "literally a Ph.D.-level assistant for anything that I'm doing."
Zoom out: For every professional who fears generative AI will lower wages and kill jobs, there's another who dreams it will free humans from drudgery.
- Sam Altman wrote in September that "many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter."
The bottom line: Many white-collar jobs will likely shift in the coming years, relying more on generative AI technology for task automation, brainstorming, creative uses and agents that act on one's behalf.
4. Training data
- Meta added three people to its board of directors, including Dana White, the CEO of UFC and a close ally of incoming president Donald Trump. (Axios)
- Google is adding AI features — including Gemini — to its operating system for TVs. (TechCrunch)
- Instagram has begun testing a feature that puts AI-created images of users into their feeds. (404 Media)
5. + This
A Russian hockey player has gone viral after his cellphone fell out of his pocket and onto the ice during a game.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+