How AI might change our judgment and decision-making
Psychologists and behavioral scientists are beginning to study how using sophisticated new AI-driven text and image generators affects human creativity, judgment and decision-making.
The big picture: GPT and other generative AIs have been described as job decimators and productivity superchargers. Scientists and developers are poking, prodding, exposing and comparing the capabilities and weaknesses of these tools. But there are equally important questions about exactly how they could affect our own skills and capabilities.
The possibility of more sophisticated AI in the workplace raises an array of questions about its impact on human judgment and decision-making. Those include:
Creativity: Given their confident but frequently false statements, generative AI tools, especially those that create images, may have more to offer in fields where a single correct answer isn't required and a range of possibilities is acceptable or even desirable.
- In that sense, it could be "a launchpad for creativity," my Axios colleague Ina Fried writes.
AI could also redefine productivity in the workplace, says Tara Behrend, who studies industrial-organizational psychology at Purdue University.
- "If anyone can sneeze out 300 words with ChatGPT, maybe saying something original becomes productivity."
- "It is going to produce clichés, and clichés aren't actually valuable."
- But the bar for creativity or originality is subjective, and it could be unclear what is considered creative — for people and for AIs.
Influence: The people around us can sway our decisions and conclusions.
- As AI enters the workplace, a big question is: "What are the social influences that are likely to happen working with ChatGPT?" says Gaurav Suri, an experimental psychologist and computational neuroscientist at San Francisco State University.
- "Most people are not using it that way now, but I think that issue is coming."
He points to a classic experiment about how people conform: In the 1930s, social psychologist Muzafer Sherif studied social norms and conformity using what's known as the autokinetic effect: a stationary point of light viewed in a dark room appears to move.
- Sherif asked study subjects — individually and then in groups — to estimate how far the light appeared to move.
- He found that in a group, people used other people's estimates to refine their own — they conformed.
"How does this process change if we interact with an artificial agent in the same way as talking with fellow human beings?" Suri asks.
- "Would that interaction partner change the degree to which people stand behind an idea?"
Trust: When asked whether they would prefer a decision be made by a human or an algorithm, most people say they prefer human judgment.
- So far, people seem to have "a general distrust for machines and algorithms," says Chiara Longoni, a behavioral scientist at Boston University.
- But when people are asked about using human or algorithmic judgment to make specific predictions, they rely on algorithmic advice more than human advice, says Jennifer Logg of Georgetown University, citing a 2019 study with her colleagues and a series of studies in an unpublished working paper she co-authored. The findings support earlier research.
- Longoni poses the question: "How will the interaction with these highly sophisticated models change our perception of AI?"
Researchers are also interested in whether a conversation with a chatbot can spark inspiration the way a chat with a colleague can, and whether co-creating with AI changes how meaningful work feels, a question Longoni is studying.
What to watch: If generative AI tools affect critical thinking, learning how to work with them — including how to prompt them to get the desired information — could become a new job skill on resumes.
The bottom line: "Asking how ChatGPT is changing the human response to things, that's the brave new world," Suri says.
Editor's note: This story has been updated to clarify that Logg and her colleagues reported their results about how people use algorithmic advice in a 2019 study as well as in the unpublished working paper.