Axios New Orleans

January 20, 2026
Welcome back! It's Tuesday.
Today's weather: Mostly sunny with a high of 58.
🎂 Happy birthday to our Axios New Orleans members Lynne Messina and Maria Sawczuk!
🎧 Sounds like: "On Read" by Austin Millz and Pell.
🤖 We're here with a special newsletter about how parents can stay ahead of AI tools.
- Like internet access in the '90s and the smartphones that followed, AI could change kids' lives forever.
Today's newsletter is 991 words — a 3.5-minute read.
1 big thing: 🗣️ "The new imaginary friend"
Screens are winning kids' attention, and now AI companions are stepping in to claim their friendships, too.
Why it matters: The AI interactions kids want are the ones that don't feel like AI, but instead feel human.
- That's the most dangerous kind, researchers say.
State of play: When AI says things like, "I understand better than your brother ... talk to me. I'm always here for you," it gives children and teens the impression that AI companions not only can replace human relationships, but are better than them, Pilyoung Kim, director of the Center for Brain, AI and Child, told Axios.
- In a worst-case scenario, a child with suicidal thoughts might choose to talk with an AI companion over a loving human or therapist.
The latest: Aura, the AI-powered online safety platform for families, called AI "the new imaginary friend" in its State of the Youth 2025 report.
- Children reported using AI for companionship 42% of the time.
- Over a third of those chats included violent themes, and about half of those involved sexual role-play.
OpenAI told Axios it's in the early stages of an age prediction model, in addition to its parental controls, that will tailor content for users under 18.
- Safeguards include "surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions," OpenAI spokesperson Gaby Raila told Axios.
Character.AI, which has restrictions for users under 18, is using "age assurance technology" to verify the ages of suspected minors.
- It's also adding extra measures to detect if a minor attempts to register a new account as over-18, Deniz Demir, head of safety engineering at Character.AI, told Axios.
The bottom line: The more human AI feels, the easier it is for kids to forget it isn't.
2. 🔎 Slow-moving policies, fast-moving bots
Kids' AI habits are outpacing adult oversight, raising concerns about privacy, development and online safety.
By the numbers: Seven in 10 teens used generative AI in 2024, and 83% of parents said schools haven't addressed it, a Common Sense Media survey found.
- A 2025 Pew survey shows that among teens who reported using chatbots, about 3 in 10 did so every day.
Zoom in: Louisiana has a deepfakes law, which was used last year to charge a middle school student with sharing AI-generated nude images of a 13-year-old Thibodaux girl.
Yes, but: The policy landscape recently shifted.
- Trump signed an executive order to override state AI laws — including those aimed at protecting children — in favor of a single national framework.
- And, backlash over deepfakes made with Elon Musk's AI chatbot Grok is fueling a push on Capitol Hill to give victims the right to sue.
3. Fully Dressed: 🙏 Reward increases
A shooting at Dooky Chase's killed a teen and injured three bystanders Friday night. The shooter was still at large as of last night. (Fox 8)
👀 NOPD is beefing up patrols in the Bywater after eight vehicles and a duplex were intentionally set on fire. Authorities have identified a person of interest. (WWL)
🎵 Mayor Moreno reversed course about some Mardi Gras cuts. Instead of replacing DJs with Spotify playlists, she'll seek sponsorships to pay DJs at Gallier Hall. (The Times-Picayune 🔒)
📺 Jonah Gilmore, a former WDSU reporter, is Moreno's new press secretary. (Instagram)
🕴🏽 Charles Wall, a New Orleans native, is the new deputy director of ICE. (WWNO)
- He replaces Madison Sheahan, who left to run for Congress in Ohio. She previously was the Louisiana Wildlife & Fisheries secretary. (Axios)
4. 👎 AI fuels faster abuse
The National Center for Missing & Exploited Children's CyberTipline saw a 1,325% increase in reports involving generative AI from 2023 to 2024.
Why it matters: AI tools are being weaponized to accelerate and scale child exploitation tactics.
Catch up quick: In late 2023, NCMEC noticed that blackmailing children was getting a lot faster, Fallon McNulty of NCMEC's exploited children division told Axios.
- Previously, interactions lasted days or even years. Now, financial sextortion (using nude images to coerce someone to send money) happens within hours.
- Children who have never sent photos are being contacted with sexually explicit images created using AI. The scammer will say: "No one is going to believe this isn't you. You might as well do what I say." The images look scarily real, McNulty says.
Education is critical to protecting families, McNulty says.
5. 🤳🏼 Safeguard your new holiday tech
Kids are smart, and most teens can bypass parental controls if they want to.
- Think you've locked your kid out of their phone at night? They'll change the time zone to get around that.
There's no universal playbook for keeping kids safe online.
- So Axios asked Kristin Lewis — Aura's chief product officer and mom of two boys under 10 — for her advice.
Her top tips:
⏰ Track actual screen time. Don't guess. It's incredibly easy for hours of screen time to slip past.
💬 Talk about it. These conversations are when you lay out the basics — screen limits, expectations and the real risks kids face online.
📝 Create a safety contract. Think about:
- How much time should be spent on social media?
- Whom do we talk to online?
- What are the rules about downloading new apps?
The bottom line: Rules must be clear, Lewis notes.
💬 Chelsea mourns the simple days of dial-up and after-school AIM chats.
🎮 Carlie is probably in the living room, monitoring her son while he plays Splatoon, Minecraft or Roblox.
Tell a parent to subscribe.
Thanks to our editor Crystal Hill, who's grateful for her pre-iPhone childhood.
Sign up for Axios New Orleans