Schools turn to AI to monitor students' mental health

Sixth graders at Flint Hill School in Oakton, Va. Photo: Katherine Frey/Washington Post/Getty

Jolted by rising rates of suicide and self-injury among young Americans, schools are using software that monitors students' browsing histories and activity logs for signs of distress, hoping to curb the problem.

What's going on: Kids and teens increasingly rely on the internet to answer their mental-health questions, creating browsing patterns that schools hope could identify potential harm before it happens. But that requires sweeping online surveillance that critics say could leave a lasting mark on students.

The context: Between 2006 and 2016, the suicide rate for white children rose 70%, to more than five per 100,000 children, according to the CDC. The rate for black children increased 77%, to 2.62 suicides per 100,000 children.

The details: Many monitoring tools are on the market, including one by GoGuardian, a Los Angeles-based education-tech company.

  • GoGuardian's early filtering products were aimed at keeping students away from online porn, Tyler Shaddix, the company's chief product officer, tells Axios.
  • After a student at a school testing the software died by suicide, an administrator asked whether better monitoring could have prevented the death.
  • That led to a new system that evaluated students’ online activity for signs they may harm themselves.

At first, GoGuardian used keyword matching, a standard approach among similar companies. That generated thousands of extraneous alerts, so its latest version, due out in October, adds artificial intelligence techniques that the company says improve the software's accuracy.
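To see why plain keyword matching produces so much noise, consider a minimal sketch (illustrative only, not GoGuardian's actual code or watch list): a bare substring check cannot distinguish a worrying search from ordinary schoolwork.

```python
# Minimal sketch of keyword-based flagging. The watch list and function
# are hypothetical; a real system would be far more elaborate.

KEYWORDS = {"suicide", "self-harm", "overdose"}

def keyword_flag(text: str) -> bool:
    """Flag any text containing a watch-listed term, regardless of context."""
    lowered = text.lower()
    return any(word in lowered for word in KEYWORDS)

# A genuinely concerning query is flagged...
assert keyword_flag("painless suicide methods")
# ...but so is an innocuous research topic, which is the false-alert problem.
assert keyword_flag("essay on suicide rates in 19th-century literature")
```

Both inputs trip the filter, which is how a district ends up with thousands of extraneous alerts; a trained classifier tries to use surrounding context to separate the two.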

How it works: GoGuardian installs an extension on school-issued computers running Chrome OS or Windows. When the system detects a potential problem in chat conversations, emails, Google Docs, or browsing, it sends an alert to whomever the school has designated to receive them.

  • With the help of mental-health experts, the company trained a machine-learning algorithm so it flags content most closely associated with potential harm, like searches for suicide methods.
  • The system now sends a much more manageable 2,000 alerts a week across 5 million students, within seconds of being triggered.
  • A human makes the final call, reading over the alert — including five minutes of browsing history before and after the incident — to decide whether or not to act.
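The pipeline the bullets describe (score activity, alert above a threshold, attach surrounding history for a human reviewer) could be sketched roughly as follows; all names, thresholds, and the toy classifier here are assumptions for illustration, not GoGuardian's API.

```python
# Illustrative triage pipeline: score events, queue high-scoring ones
# with surrounding context for human review. Hypothetical throughout.
from dataclasses import dataclass, field

@dataclass
class Alert:
    text: str
    score: float                                  # model risk score in [0, 1]
    context: list = field(default_factory=list)   # nearby activity for the reviewer

def triage(events, model_score, threshold=0.5, context_window=5):
    """Score each event; queue those above threshold, with context, for a human."""
    queue = []
    for i, text in enumerate(events):
        if model_score(text) >= threshold:        # err on the side of alerting
            context = events[max(0, i - context_window): i + context_window + 1]
            queue.append(Alert(text, model_score(text), context))
    return queue                                  # a designated staff member decides

# Toy stand-in for the trained classifier.
events = ["math homework", "suicide methods", "soccer scores"]
alerts = triage(events, lambda t: 0.9 if "suicide" in t else 0.1)
```

The key design point from the article survives even in this sketch: the model only filters and prioritizes; the final call to act always rests with a person reading the alert in context.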

The software can stumble in two ways: by sending schools unneeded alerts that drown out real flags, or by missing important signs and failing to send a critical one.

  • GoGuardian CEO Advait Shinde says the system errs on the side of alerting more often, but the company declined to share statistics about its software's failure rate.
  • Gennie Gebhart, a researcher at the Electronic Frontier Foundation and co-author of a 2017 report on student privacy, said companies that monitor students are not transparent enough about the accuracy of their software.

Electronic snooping always raises weighty ethical questions, even when it’s meant to safeguard young people.

  • Privacy experts worry about early, all-encompassing monitoring. Gebhart said pervasive surveillance normalizes electronic snooping, and can keep kids from testing out new ideas and identities as they grow.
  • "This piece of software is going beyond ed-tech and toward a new kind of surveillance in the classroom," Gebhart said.
  • When I asked GoGuardian if students have a right to privacy, Shaddix said the benefits of monitoring "outweigh the negatives."

Go deeper: Wired reports on schools that monitor students’ social media feeds for threatening posts or cyberbullying.