February 14, 2024

Ina here, wishing a happy Valentine's Day to everyone (and especially you, AJ). Today's AI+ is 1,294 words, a 5-minute read.

1 big thing: AI bills flood state houses

Illustration: Aïda Amer/Axios

Nearly all of the state legislatures currently in session are considering AI-related bills and nearly half of those bills address deepfakes, according to an analysis by software industry group BSA, shared exclusively with Axios.

Why it matters: Rapid AI innovation and a federal regulatory vacuum have spurred state legislatures to draft six times as many AI bills as they had a year ago, Ryan reports.

What's happening: As of Feb. 7, there were 407 total AI-related bills across more than 40 states, up from 67 bills a year ago.

  • States introduced 211 AI bills last month.
  • 41 of 43 state legislatures in session have AI bills before them.

Catch up quick: The targets of the bills range from bias and discrimination to facial recognition technology and deepfakes.

  • Legislators in 33 states have put forward election-related AI bills.
  • January saw a huge spike in new bills. They're now being produced at a rate of 50 bills per week — with half of them pertaining to deepfakes.

By the numbers: The states with the most bills under consideration are New York (65), California (29), Tennessee (28), Illinois (27) and New Jersey (25).

  • Alabama and Wyoming are the only states currently in session without AI legislation under consideration.
  • Connecticut now requires ongoing assessments to ensure AI doesn't cause discrimination or disparate impact.

The intrigue: The states with the biggest AI industries — California and New York — are also generating the most draft bills.

  • Tennessee's AI legislation explosion is driven by the copyright concerns of the local music industry, led by the Ensuring Likeness Voice and Image Security (ELVIS) Act introduced in January.

Flashback: State legislators began building their AI momentum in summer 2023, introducing 191 bills across 31 states by September — but only 14 became law.

What they're saying: "Penalties for deepfakes is the hot topic," Craig Albright, BSA's senior vice president for U.S. government relations, tells Axios.

  • "A lot of the deepfake language is similar across states, we're seeing a lot of coordination," says Matt Lenz, BSA's senior director for state advocacy.
  • Some advocacy groups worry that strict AI regulation will end up protecting early AI leaders, because they'll have the most resources to manage the burden.
  • "Wrapping up new AI models in red tape effectively cements the biggest tech players as winners of the AI race," says Chamber of Progress' tech policy director Todd O'Boyle via email.

Yes, but: Governors so far haven't made AI a priority in their 2024 state of the state addresses.

What's next: State governors have a chance to build up coordination on AI legislation and executive action as they gather in Washington, D.C. this week.

2. Slack AI will summarize your chatty co-workers

Image: Slack

Slack announced on Wednesday a number of new AI-powered features, including the ability to get summaries of threads and recaps of what's happened in channels.

Why it matters: Slack is pitching the AI features as a great way for new workers to get up to speed and for overwhelmed employees to keep tabs on myriad threads and channels without having to read each message.

Details: Slack AI, as the company is calling the new features, is a paid add-on for enterprise plans and will show up in a variety of ways.

  • In search, Slack will use AI to answer natural language queries, drawing on the messages and channels that each individual employee can access.
  • Channel recaps allow workers to catch up on unread messages by summarizing what's happened over the last seven days, or some other custom date range. Slack is pitching this as a good option for those who have been on vacation or parental leave, for example.
  • AI-generated thread summaries, similar to channel recaps, offer a way to quickly understand a long discussion without having to read all the back and forth.

Of note: The company isn't saying how much it will charge, noting that pricing will vary based on customer size, but says the price will be competitive with what other enterprise companies charge for their generative AI services.

  • Notion has adopted a similar approach, offering its AI features for $10 per person per month, but requiring companies to purchase it for all employees with access to Notion.

The big picture: Slack's announcement is part of a trend among enterprise software and services companies to add generative AI features to their products.

  • They're offering customers an easy, if sometimes pricey, way to experiment with the power of generative AI without having to revamp their data storage or business processes.
  • But some early adopters of the paid versions of enterprise AI (particularly from Microsoft) don't think today's features justify their price tag.

Be smart: Powerful AI tools also add privacy risks. Slack AI won't surface information from channels that workers don't have access to, but the feature could cause headaches for businesses that have been lax with channel permissions.

Between the lines: Though Slack was originally designed as a better way for business workers to communicate, it is now replete with data on business practices, customer information and other material that generative AI can surface.

  • Slack says the new features are saving workers an average of 95 minutes per week based on reports from early users.
  • "We can unlock, frankly, years and years of institutional knowledge in all kinds of ways," Slack product chief Noah Weiss told Axios.

3. ChatGPT gets a personal digital "memory"

OpenAI's new memory feature allows people to choose which memories are saved by ChatGPT. Image: OpenAI

OpenAI said Tuesday it is adding a feature that will allow ChatGPT to remember both information about individual users and how they want the chatbot to respond to different types of queries.

Why it matters: It's another step in allowing the chatbot to customize itself to the person using it.

Details: The new memory feature is similar to giving custom instructions to ChatGPT and allows that information to be stored for future queries.

  • The feature is rolling out to a small number of free users and paid ChatGPT Plus subscribers.
  • OpenAI says the memory feature will be made available to business customers once the company is ready to broadly release the feature.

How it works: Users can explicitly ask ChatGPT to remember something.

  • They will be able to see what ChatGPT is storing as memories, delete individual items from memory or delete the entire collection of memories.
  • People will also be able to opt in or out of the memory feature and choose whether any feedback is used to train OpenAI's models.
  • An incognito-like mode will be available allowing people to conduct queries without drawing on memories.

Between the lines: OpenAI says it recognizes that the memory feature also raises additional safety and privacy concerns.

  • It says it has "taken steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details — unless you explicitly ask it to."

Meanwhile, Axios cybersecurity reporter Sam Sabin reports that hackers connected to governments in China, Iran, North Korea and Russia are already using AI chatbots to write phishing emails and study potential targets, according to new research from Microsoft and OpenAI.

4. Training data

  • OpenAI researcher and co-founder Andrej Karpathy is leaving the company. An OpenAI spokesperson tells Axios that Karpathy's responsibilities have been turned over to another senior researcher with whom he worked closely. (The Information)
  • The nonprofit AI research lab at Cohere released an open-source LLM that speaks more than 100 languages. (Axios)
  • OpenAI CEO Sam Altman cautioned Tuesday that "very subtle societal misalignments" around artificial intelligence could have dangerous consequences. (Axios)
  • A report by GLAAD finds that 17% of gamers identify as LGBTQ, while less than 2% of console games have LGBTQ characters. (N.Y. Times)
  • The University of Pennsylvania has launched what it says is the first Ivy League undergraduate degree in artificial intelligence: a Bachelor of Science in Engineering in AI.

5. + This

First Elmo had to hear about everyone's personal trauma, now Grover is getting an earful on the state of journalism.

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.