Stories by Kaveh Waddell

A reality check for AI hubris

Illustration: Sarah Grillo/Axios

For the better part of a decade, artificial intelligence has been propelled by a rocket fuel in seemingly endless supply. Deep learning, a method that allows machines to identify hidden patterns in data, has powered commercial applications like autonomous vehicles and voice assistants, and it's potentially worth trillions of dollars a year.

The other side: The rosy portrait of unstoppable progress belies a fear among some AI luminaries that the field is not on the right path. In a new sort of resource curse, they say, deep learning has sucked energy away from other strains of inquiry, without which AI may never approach even a child's intellectual capabilities.

Social media reconsiders its relationship with the truth

Illustration of La Vérité by Jules Joseph Lefebvre holding a mobile phone with the Facebook logo in place of her mirror.
Illustration: Aïda Amer/Axios

For years, Facebook and other social media companies have erred on the side of lenience in policing their sites — allowing most posts with false information to stay up, as long as they came from a genuine human and not a bot or a nefarious actor.

The latest: Now, the companies are considering a fundamental shift with profound social and political implications: deciding what is true and what is false.

Looking to AI to understand how we learn

Illustration of a robot arm holding a brain.
Illustration: Aïda Amer/Axios

Two parallel quests to understand learning — in machines and in our own heads — are converging in a small group of scientists who think that artificial intelligence may hold an answer to the deep-rooted mystery of how our brains learn.

Why it matters: If machines and animals do learn in similar ways — still an open question among researchers — figuring out how they do it could simultaneously help neuroscientists unravel the mechanics of knowledge or addiction and help computer scientists build much more capable AI.