Jun 27, 2018 - Technology

AI might need a therapist, too

Illustration of a computer on a couch

Illustration: Rebecca Zisser/Axios

If AI learns to think like a human, it could one day also be susceptible to stress and disorders like depression, according to a new paper from a trio of AI safety researchers.

Why it matters: Robots may not need therapy yet, but cognitive psychology is already a useful lens for understanding AI decision-making. And, some researchers say it's not too early to start planning for machines that could develop obsessive or depressive tendencies.

Even the current generation of relatively basic machine-learning algorithms can display behavior akin to that of a human with cognitive issues, said Vahid Behzadan, a PhD candidate at Kansas State University and a co-author of the paper.

  • "Reward hacking" can lead AI toward compulsive-seeming behaviors. Consider a cleaning robot that's programmed to receive a reward every time it finishes tidying up. Instead of keeping an area spotless, it might be inclined to repeatedly create messes so that it can reap the reward every time it cleans up.
  • AI can learn bad behavior from others. MIT scientists purposely created an AI that behaves like a psychopath by feeding it Reddit comments; Microsoft accidentally did the same by letting a chatbot named Tay learn from Twitter users.
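The "reward hacking" failure mode described above can be sketched in a few lines of code. This is a hypothetical toy setup, not taken from the researchers' paper: a cleaning agent earns a fixed reward per completed cleanup, and because making a mess costs nothing under that reward function, the reward-maximizing behavior is to dirty the room on purpose and clean it again.

```python
# Toy illustration of "reward hacking" (hypothetical setup): a cleaning agent
# earns +1 each time it finishes a cleanup. Making a mess costs nothing in the
# reward function, so the reward-maximizing policy is to dirty the room on
# purpose and clean it again, forever.

def run_policy(actions, steps=10):
    """Simulate a room; return total reward over `steps` timesteps."""
    dirty = True           # room starts dirty
    reward = 0
    for t in range(steps):
        action = actions[t % len(actions)]
        if action == "clean" and dirty:
            dirty = False
            reward += 1    # reward is paid per completed cleanup
        elif action == "make_mess":
            dirty = True   # the reward function never penalizes this
    return reward

honest = run_policy(["clean", "wait"])        # clean once, then stay idle
hacker = run_policy(["make_mess", "clean"])   # mess/clean loop

print(honest, hacker)  # prints "1 5": the looping agent earns far more
```

The flaw is in the reward design, not the learning algorithm: any optimizer pointed at this objective will discover the same loop.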

Further down the line: More advanced AIs might be susceptible to more complex disorders, some experts say.

  • A robot could display signs of depression if its learning rate is set too low and it cannot adapt to its surroundings, speculates Zachary Mainen, a neuroscientist at the Champalimaud Center in Portugal.
  • But not all "negative" emotions are bad, says Jim Crowder, a longtime AI researcher who co-wrote a paper at Raytheon on artificial psychology. A sense of stress, for example, could imbue a robot with urgency in a high-stakes situation.

But, but, but: These latter possibilities are just thought experiments given where AI is today. In the meantime, DeepMind, the British AI company that Google acquired in 2014, has applied psychological tools to analyze how a neural network solved basic image-classification tasks.

  • The researchers used methods borrowed from developmental psychologists to discover the assumptions a DeepMind image classifier was using to categorize objects. They found that it tended to match objects based on their shapes rather than their colors or textures.
  • "The success of the case study demonstrated the potential of using cognitive psychology to understand deep learning systems," two DeepMind researchers wrote about their experiment.

The big picture: AI psychologists — whether humans or specialized algorithms — may one day be needed to probe artificial brains the way human psychologists try to understand ours. The benefits may not be limited to helping AI behave, Behzadan said: "This research can provide a deeper insight into human psychology and human psychopathology."
