Jan 19, 2023 - Technology

ChatGPT is the talk of Davos

Illustration: Shoshana Gordon/Axios

Forget crypto and blockchain: The tech conversation at this year's World Economic Forum in Davos is all about the rise of artificial intelligence, particularly the text-generator ChatGPT.

Why it matters: Tools like OpenAI's ChatGPT and image generators like Stable Diffusion and Dall-E have been in the works for years — but even the tech experts in the Davos crowd are shocked at just how fast they have matured.

What they're saying: On panels and in side conversations at this week's gathering in Davos, Switzerland — as well as at last week's DLD Conference in Munich — everyone wants to talk about this latest crop of generative AI tools, from how they are experimenting with it personally to how they see it reshaping their businesses and lives.

  • One major tech company CEO I spoke to on the sidelines of the Forum told me he knew all about the large language model approach that underlies these generative AI tools, but even six months ago he wouldn't have predicted they would emerge as the game-changers they are shaping up to be.

The big picture: It's clear that generative AI has captured the public imagination in a way that no technology has since the arrival of the iPhone in 2007.

  • Everyone is still trying to make sense of just how these technologies will change how they live and work, with some incredibly excited, some fearful, and many just staying busy typing queries into ChatGPT.

Optimists see a world in which AI gives superpowers to knowledge workers and speeds the time needed to achieve breakthroughs in health and sustainability.

  • Hanzade Dogan, chairwoman of Turkish e-commerce company Hepsiburada, noted the opportunity for AI to dramatically lower the costs of expensive services, expanding access to legal help, health care and more.
  • "Or, if we get it wrong, it could be the dystopia of our world," she said during a panel I moderated Wednesday at the Forum. "It's that serious, what we are facing."
  • Investor Jim Breyer has put money into a dozen companies working to use AI in a range of health care applications, including early detection of prostate and breast cancers.
  • "I believe the largest commercial application of AI will be precision medicine. Hard stop," Breyer said.

Yes, but: Concerns range from the inevitable flood of AI-generated misinformation to the biases baked into systems that have been trained on real-world data that's filled with stereotypes and dominated by rich countries.

C3.ai CEO Tom Siebel said it's important to understand biases in data when choosing which problems to point AI at.

  • For example, he said his firm rejected a big contract with the military to use AI to help determine Army promotions, noting that the system would inevitably recommend white, male West Point graduates.
  • "We're not going to touch it, and my recommendation is you don't touch it either," he said.

Access Now executive director Brett Solomon told Axios earlier this week he worries this new crop of AI technologies will be another weapon used against human rights activists, journalists and others.

  • "Given the fact civil society is already under attack, our ability to defend ourselves against generative AI phishing attacks, impersonations and falsehoods will put us even further at risk," Solomon said.

Another big concern is what AI will mean for jobs.

  • The experts I talked to agree these shifts are inevitable and the best that governments can do on this front is to help train workers for a reshaped world. (I'm moderating another panel for the Forum on Friday focused specifically on AI and jobs.)

What's next: One big question is how regulators will approach the technology. The EU is already working on an AI Act, which aims to be the first broad legislation governing such technology.

  • Another key debate is whether AI systems need to be able to, essentially, show their work, or whether steadily improving their accuracy is good enough.

The bottom line: Everyone agrees that today's generative AIs need some big improvements — particularly because of their tendency to be confidently wrong.

  • At the pace this field keeps advancing, those improvements might not take long.