May 27, 2018 - Technology

Researchers seek early warnings of online nastiness

In London, an argument over apartments, 1969. Photo: William Lovelace/Daily Express/Hulton Archive/Getty

Among efforts to make social media a more congenial place, researchers at Cornell are working on artificial intelligence that detects online conversations just as they begin to turn nasty.

What's going on: Most studies of online conversation look for phrases such as "What the hell is wrong with you." But by then, it's too late. In their new paper, Justine Zhang, Jonathan Chang and Cristian Danescu-Niculescu-Mizil say they aim to ferret out anti-social clues "when the conversation is still salvageable."

How they did it: The Cornell team, working with researchers from Jigsaw and Wikimedia, studied some 1,200 conversations on Wikipedia Talk pages, MIT Tech Review reports.

Among their findings:

  • Cues of civil conversations include greetings, expressions of gratitude, and hedged opinions.
  • Conversations headed for trouble tend to contain sentences that start with the word "you," which signals potential trouble (a rough illustration follows this list).
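
To make those cues concrete, here is a minimal, hypothetical sketch of how one might count them in a single comment. The keyword lists, scoring, and function name are invented for illustration; they are not the features or model described in the Cornell paper.

```python
import re

# Hypothetical illustration only: a crude keyword-based check for the kinds of
# cues the study associates with civil vs. derailing conversations. The cue
# lists below are invented for demonstration, not taken from the paper.
GREETINGS = {"hi", "hello", "hey", "greetings"}
GRATITUDE = {"thanks", "thank you", "appreciate"}
HEDGES = {"i think", "perhaps", "maybe", "it seems", "i could be wrong"}


def crude_civility_cues(comment: str) -> dict:
    """Count simple civility cues and 'you'-initial sentences in one comment."""
    text = comment.lower()
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "greeting": any(g in text for g in GREETINGS),
        "gratitude": any(g in text for g in GRATITUDE),
        "hedged": any(h in text for h in HEDGES),
        # Sentences opening with "you" (word boundary, so "young" doesn't count)
        "you_initial_sentences": sum(
            bool(re.match(r"you\b", s)) for s in sentences
        ),
    }


if __name__ == "__main__":
    print(crude_civility_cues(
        "Thanks for the edit! I think the citation could be clearer."))
    print(crude_civility_cues(
        "You clearly didn't read the source. You should revert this."))
```

A system like the one in the paper would learn such patterns from labeled conversations rather than rely on a fixed keyword list, but the sketch shows the kind of surface cues involved.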

That won't surprise many of us: The researchers found that humans are still better at this than AI. A control group of humans detected a bad conversation in advance 72% of the time, compared with 61.6% for the AI.

  • Yet if humans were reliably good at heading off online battles, we would not be in the mess we currently confront. In other words, automated early detection would still be a good thing.

One bit of good news: In some of the conversations the AI flagged, the participants eventually self-corrected.

  • The paper concludes, "Interactions which initially seem prone to attacks can nonetheless maintain civility, by way of level-headed interlocutors, as well as explicit acts of reparation."