Axios AI+

October 10, 2024
Sending good thoughts to all those in areas affected by Hurricane Milton.
Today's AI+ is 1,246 words, a 4.5-minute read.
1 big thing: AI gets its Nobel moment
AI researchers notched two Nobel Prizes this week, elevating their work and field into the upper echelons of scientific achievement.
Why it matters: There's wide debate about whether, and how, AI will transform the world — but this week's recognition underscores the behind-the-scenes ways the technology is already changing science itself.
- It's solving intractable problems and analyzing vast troves of scientific data. At the same time, it's raising concerns about the ways it might put cutting-edge science in the hands of bad actors.
The big picture: The technical foundations of AI were laid over decades, but its advances have only received wide recognition more recently with the advent of chatbots and the popularization of generative AI.
Driving the news: Geoffrey Hinton and John Hopfield were awarded the Nobel Prize in Physics on Tuesday for their work on AI beginning in the 1980s.
- Hopfield and Hinton each drew on concepts in physics to invent artificial neural networks that sparked and influenced the development of AI. Hopfield is an emeritus professor at Princeton University, and Hinton is a professor at the University of Toronto.
The Nobel committee presented the prize in chemistry yesterday to Google DeepMind CEO Demis Hassabis, DeepMind director John Jumper and University of Washington professor David Baker for their work on proteins that are crucial to life.
- Hassabis and Jumper were recognized for the development of an AI system that cracked one of biology's toughest problems: predicting the structure of a protein.
Between the lines: The Nobel Prize is often awarded for research done decades ago, after its impact can be clearly assessed as having "the greatest benefit to humankind."
- In one of the quicker reaction times in the Nobel annals, the committee cited the AlphaFold2 system that was first demonstrated just four years ago and has been used by scientists around the world to tackle a range of scientific problems.
- AlphaFold2 has been used by "more than two million people from 190 countries," according to the Nobel committee, to explore antibiotic resistance, drug design, crop resilience and other scientific questions. The DeepMind team continues to expand its scope and increase its power.
- Baker worked on another AI-driven protein prediction tool called RoseTTAFold and also designed altogether new proteins.
Hassabis' "longstanding passion and motivation for doing AI" was to one day be able to "build learning systems that are able to help scientists accelerate scientific discovery," he told me last year.
Yes, but: "It's far too premature to talk about AI being involved in all prizes," Hassabis said in a press conference yesterday.
- "The human ingenuity comes in first — asking the question, developing the hypothesis — and AI systems can't do any of that. It just sort of analyses data right now," he said.
- "It's interesting the committee decided to make a statement by having the two AI-linked prizes together."
Zoom in: Three of the Nobel Prize winners have ties to Google — Hinton left the company last year, saying he wanted to speak freely about what he and others see as the dangers of AI.
- The winners' private-sector ties speak to the enormous resources needed for AI research today, which some researchers warn risks consolidating the technology and its development in the hands of profit-focused companies.
What to watch: AI critic Gary Marcus writes that Hinton (and others) have favored advancing AI through ever-expanding neural networks that learn from vast troves of data — the approach that fuels generative AI.
- But Hassabis and others are exploring what's known as neurosymbolic AI, a technique that combines neural networks and hard-wired, or symbolic, knowledge. In July, DeepMind announced the approach was used to build a math-savvy AI system that made Silicon Valley buzz.
- It's unclear which path will ultimately yield the "greatest benefit to humankind." And of course there's no guarantee either will prove a boon.
2. Dark web AI is helping crypto crooks
AI has unlocked a powerful tool that's being sold to money launderers to create phony accounts on cryptocurrency exchanges, according to new research from Cato Networks, a computer security firm.
Why it matters: Fraudsters need lots of accounts to cash out ill-gotten gains, as they play Whac-A-Mole with the trust and safety teams at the digital asset platforms.
Between the lines: Etay Maor of Cato CTRL, Cato Networks' threat intelligence lab, has released research that details how the attack works.
- First, AI swiftly generates fake documents, such as a passport. In the example Cato observed, the document was created for a person who doesn't actually exist.
- These accounts often require some sort of live proof of humanity, such as selfies or a video. And that's where the deepfake comes in:
- AI is able to generate either photos or a video that can match up with the document and fool an automated agent.
"These accounts are important because they are a vital point in the attack life cycle," Maor tells Axios.
Threat level: Capabilities like this allow fraudsters to scale the operational end of their money-moving schemes.
- "While in the past I've seen this done in a very professional manner with document forgers, now it's done in just a much more accessible manner," Maor said.
Zoom in: A ransomware, pig-butchering or identity fraudster needs to give their victim someplace to send the money so they can cash out. Obviously, they don't want to put their actual name on the account retrieving the ill-gotten gains.
- With services like these, criminals can change identity with every single payment. This decreases the friction at a key bottleneck for fraud.
In the weeds: In a video accompanying the blog post, Cato shows the process of taking an AI-generated photo and using it to create an identity for dozens of companies.
- Then it makes a fake video from that photo that matches the specifications of a specific cryptocurrency exchange.
- It also syncs up so the video seems to be coming from the device's camera.
What we're watching: Social engineering attacks of all kinds could get a boost from these AI tools.
- Fraudsters proficient at manipulating support employees at companies in real time are likely to find AI extremely useful for extending the reach of their cons.
Maor recommends that companies look for glitches in the artifacts sent their way and look to introduce some randomness in their approach from account to account.
- For example, if they verify with video, they can vary the specific instructions given from video to video.
- Humans can be brought in to double-check, but of course this also increases the onboarding friction for legitimate users.
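The randomized-verification idea Maor describes can be sketched in code. This is a minimal illustration, not Cato's actual recommendation or any exchange's real system: the prompt texts, function names and step count are all hypothetical. The key point is that the instruction sequence and a per-session nonce are chosen fresh each time, so a pre-rendered deepfake video can't anticipate them.

```python
import random
import secrets

# Hypothetical pool of liveness-check instructions; a real system
# would maintain and rotate a much larger, regularly updated set.
PROMPTS = [
    "Turn your head slowly to the left",
    "Hold today's date written on paper next to your face",
    "Blink twice, then smile",
    "Read this one-time phrase aloud: {nonce}",
]

def build_liveness_challenge(num_steps: int = 2) -> list[str]:
    """Pick a random subset and ordering of instructions, plus a
    fresh nonce, so each verification session is unpredictable."""
    steps = random.sample(PROMPTS, k=num_steps)
    nonce = secrets.token_hex(4)  # unique per session
    return [s.format(nonce=nonce) if "{nonce}" in s else s for s in steps]
```

A fraudster who has pre-generated a video matching one known script would then have to synthesize a new response in real time, raising the cost of each fake onboarding attempt.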
3. Training data
- Numbers from OpenAI documents suggest the company could take until 2029 to turn a profit, with annual losses reaching $14 billion by 2026 — nearly triple the estimate of this year's deficit. (The Information)
- Zoom announced a number of future AI features, including AI avatars that look and sound like you and that you can use to send short video messages. (The Verge)
- Matt Wood, VP of AI for Amazon Web Services, is leaving the company. (GeekWire)
- Wimbledon will use AI to determine faults and make out-of-bounds calls, organizers announced yesterday. (NPR)
- Facebook's parent company expanded the availability of Meta AI to six new countries, including the UK and Brazil, with more planned for the coming weeks. (TechCrunch)
- The Audacious Project — a funding initiative housed at TED — will give $38 million to RAND to fund the development of tools to evaluate frontier AI safety. (RAND)
4. + This
This Padres outfielder not only robbed the Dodgers of a home run, but did so in epic troll fashion.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+