Personal flying machines are almost here, and anyone with the money and a few hours of training will be able to fly one.
Why it matters: Many people dream of flying, but getting a pilot certificate takes time, study and dedication — plus small aircraft can be dangerous and expensive to own and operate.
The preconceived notions people have about AI — and what they're told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.
Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology's creators levers they can use to enhance users' experiences — or manipulate them.
A hit-and-run incident that left a pedestrian gravely injured in San Francisco earlier this week is raising questions about whether autonomous vehicles (AVs) can handle the unexpected as well as, or better than, human drivers.
Driving the news: The incident involved both a human-driven car (which made the initial impact with the pedestrian) and a Cruise AV (which then also struck the victim).
Driving the news: The SEC said in a filing on Thursday that Musk and the agency both agreed that he would sit for testimony in September 2023, but he failed to appear.
Assassin's Creed Mirage, out today from Ubisoft, is designed to transport players to 9th-century Baghdad. It is an unabashed celebration of Arab culture and the Golden Age of Islam, its developers tell Axios.
Why it matters: Video games have long relegated Arab characters to villains or side characters and have largely avoided any inclusion of Islam.
AI is transforming job hunting and skill development — threatening to relegate four-year college degrees to the category of merely nice-to-have on your CV.
The big picture: In AI-driven workplaces, employers will need to treat up-skilling investments as a "critical priority" rather than a perk, per a pitch LinkedIn executives made to 2,000 of the nation's top recruiters this week in New York City.
Why it matters: "Responsible AI" has become a go-to slogan for organizations signaling that they're taking AI, and AI safety, seriously. But in the rush to look responsible, and in today's regulatory void, many are confused about what the concept means in practice.
After months of experimenting with artificial intelligence to make their work more efficient, some newsrooms are now dipping their toes in more treacherous waters — trying to harness AI to detect bias or inaccuracies in their work.
Why it matters: Confidence in the news media is at an all-time low, pressuring news leaders to look for new ways to win back trust. But today's AI, which has its own biases and fabricates facts, is an unlikely savior.
X began stripping headlines from news links in posts, a major overhaul of the platform formerly known as Twitter.
Driving the news: Elon Musk said in August the changes were "coming from me directly" and they would "greatly improve" the aesthetics of the site. Users now click on images to access news reports posted to X.