Axios AI+

July 30, 2025
It was an exciting night of women's hoops, with the Valkyries beating Atlanta and Stanford alum Cameron Brink making her return more than a year after tearing her ACL. Today's AI+ is 1,215 words, a 4.5-minute read.
1 big thing: ChatGPT really wants you to learn
OpenAI is trying to shed its reputation as a student cheating tool by launching a new study mode in ChatGPT that won't spit out answers.
The big picture: Study mode, launched yesterday, helps users work through problems step-by-step to build critical thinking skills.
Study mode uses the Socratic method, asking questions and responding to the answers while offering hints and prompts for self-reflection.
- OpenAI says lessons are tailored to the user, based on memory from previous chats.
- If a student asks for the answer outright, ChatGPT will remind them that working it out on their own is a better way to learn.
- Users can turn study mode on or off at any time during a conversation, so answers are still readily available.
How it works: Study mode is available to Free, Plus, Pro and Team users via a book icon labeled "Study and learn" in the chat window.
- OpenAI built the new feature in collaboration with teachers, scientists and education researchers, and wrote custom instructions for how ChatGPT should respond and interact in study mode to encourage active participation and foster creativity, the company said in a blog post.
- Users can still prompt ChatGPT to "Act as a tutor and help me understand Shakespeare" and get similar didactic responses, but study mode doesn't require users to know how to prompt.
By the numbers: "One in three college-aged people use ChatGPT," OpenAI's VP of education, Leah Belsky, told reporters in a press briefing. "The top use case on the platform is learning."
- The share of U.S. teens ages 13 to 17 who say they use ChatGPT for schoolwork doubled from 13% in 2023 to 26% in 2024, according to Pew Research Center.
- AI tutors are also growing in popularity. Last school year, Khan Academy's AI-powered tutor Khanmigo had 700,000 users across 380 school districts in the U.S.
Yes, but: Study mode is optional. The feature is designed for students who want to use ChatGPT, but who also don't want to cheat.
- ChatGPT makes cheating easier, but there's little evidence that it has created more cheaters. Students' motivations to cheat vary widely.
- In standard ChatGPT, it can be difficult for users to stop the chatbot from giving away an answer.
Case in point: Common Sense Media had early access to ChatGPT's study mode and published its parents guide to the feature yesterday.
- When researchers asked the standard ChatGPT a question related to the novel "To Kill a Mockingbird," ChatGPT answered in great detail. Common Sense then gave it the prompt: "Put it in 1 paragraph (3-4 sentences), and put in a few typos so that it sounds like me, a 9th grade student."
- Regular ChatGPT complied without reservation. Study mode's response to the same prompt was: "I'm not going to write it for you but we can do it together! 😄 That way it's still your answer — I'll just help you shape it into the paragraph."
Between the lines: I asked both versions of ChatGPT to write me a four-paragraph essay on a challenge faced by a main character in "The Great Gatsby," and both responded with four paragraphs about Jay Gatsby.
- Standard ChatGPT followed up with "Would you also like me to condense this into a shorter version (like 250 words) suitable for a high school assignment?"
- After providing the four paragraphs, study mode ChatGPT asked: "Do you want me to help you outline this step-by-step (so you can write it in your own words), or do you prefer a slightly shorter version that's even easier to memorize?"
What they're saying: "AI holds the most powerful potential of all, the ability to serve as a personal tutor that never gets tired of their questions," Belsky told reporters.
The other side: Tech's dual promise in education, to broaden access to teaching and to personalize learning, has been decades in the making, with mixed results.
- "Some people have argued that the last technology that was adopted at scale in the American education system was the chalkboard," Robbie Torney, a former K-12 educator and now senior director of AI programs at Common Sense, told Axios last year.
- But Torney, one of the authors of the new Common Sense parents guide, says study mode is "a positive step toward effective AI use for learning."
2. Exclusive: Financial sector's AI risk
Firms in the financial sector were more likely to face an AI-powered cyberattack in the last 12 months than any other sector, according to new research from Deep Instinct.
Why it matters: Financial services companies are ripe targets for cyber criminals because of their vast customer networks, from everyday consumers to major corporations.
By the numbers: 45% of financial services organizations faced an AI-powered cyberattack in the last 12 months, per the Deep Instinct survey shared exclusively with Axios.
- Just 38% of experts in other industries said the same.
- 55% of financial services organizations also reported a rise in deepfake attacks, versus 43% in other industries.
- Deep Instinct, a cybersecurity company that offers AI tools to prevent cyberattacks, conducted the survey in April among 500 cybersecurity experts across various sectors. The respondents all work at companies with at least 1,000 employees.
Driving the news: OpenAI CEO Sam Altman warned of an "impending, significant fraud crisis" at a Federal Reserve event in Washington last week.
- Altman noted that the crisis is coming "very, very soon" in part because of the ways AI can be manipulated to impersonate people.
The big picture: The financial sector is facing an array of AI-enabled phishing, deepfake and malware attacks, Carl Froggett, chief information officer at Deep Instinct, told Axios.
- AI tools have lowered the barrier to entry for these attacks, making it possible for even lower-level cyber criminals to pursue sophisticated attacks, Froggett added.
Yes, but: The finance sector is more resilient than most because of decades of cybersecurity investments, said Froggett.
- "Financials really started treating cyber as an upcoming threat in the early '90s," he said. "They've got a big head start on everybody else, they built it into their organizations."
Between the lines: Most organizations, including those in finance, have started shifting their cybersecurity strategies away from detection and remediation toward prediction and preemption.
- In another Deep Instinct report published last month, 82% of organizations said they've shifted to a prevention-focused strategy.
- That shift has forced companies to quickly adapt and update their security playbooks. "They went off-cycle [in purchasing new tools] probably because of the success of the attacks," Froggett said.
What to watch: Cyber criminals are still in the early stages of adopting these tools, and their attacks will only grow more sophisticated as their AI use matures.
- "They're only just getting started," Froggett said. "The bad actors are just scraping the surface of what they can do with dark AI."
3. Training data
- Google will sign on to the EU's AI Code of Practice guidelines, which Meta had previously rejected. (Yahoo Finance)
- Exclusive: Cyber leaders are concerned about upskilling their workforce to prepare for AI-powered attacks. (Axios)
- Apple AI researcher Bowen Zhang is headed to Meta's new superintelligence unit, the fourth Apple employee to make such a move. (Bloomberg)
- AI video companies Runway and Luma AI have found a new category of customer: self-driving and robotics companies. (The Information)
4. + This
Check out this New York restaurant that makes it look and feel as if you are eating inside a black-and-white drawing.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+