Axios Future

December 07, 2019
Welcome back to Future. Thanks for reading! Get in touch by replying to this email or writing me at [email protected]. Erica, who writes this newsletter on Wednesdays, is at [email protected].
📺 In the first “Axios on HBO” special, Joe Biden accuses the media of misjudging how liberal the Democratic Party really is and dismisses the idea that Rep. Alexandria Ocasio-Cortez defines it.
- Catch a sneak peek and watch the full interview this Sunday at 6:30 p.m. ET.
This issue is 1,389 words, a ~5-minute read.
1 big thing: An obstacle course for AI

Illustration: Aïda Amer/Axios
AI is better at recognizing objects than the average human — but only under super-specific circumstances. Even a slightly unusual scene can cause it to fail, I report with Axios managing editor Alison Snyder.
Why it matters: Image recognition is at the heart of frontier AI products like autonomous cars, delivery drones and facial recognition. But these systems are held back by serious problems interpreting the messy real world.
Driving the news: Scientists from MIT and IBM will propose a new benchmark for image recognition next week at NeurIPS, the premier academic conference on AI.
- It takes aim at a big problem with existing tests, which generally show objects in ordinary habitats, like a kettle on a stove or floss in a bathroom.
- But they don't test for all-important edge cases: rare situations that humans can still interpret in an instant, even if they're confounding — like a kettle in a bathroom or floss in a kitchen.
- To get at those cases, this new dataset is made up of 50,000 images compiled by crowdsourced workers. They photographed 313 household objects at 50 different angles with varying backgrounds and perspectives.
The goal is to put object recognition through more realistic paces.
- "We’re not intentionally mean to computer systems, and we should start doing that," Andrei Barbu of MIT tells Axios.
- "We don’t want them to only recognize what is very common," says MIT's Boris Katz of robots and automated vehicles. "We want [a robot] to recognize a chair that is upside down on the floor and not say it is a backpack. In order to do that, they need to be able to generalize."
The big picture: Ten years ago, image recognition got a huge boost from a humble source — a free database with millions of pictures of everyday things, each paired with a label.
- Scientists began using that dataset, ImageNet, to train algorithms to tell cats from dogs and trees from people, using thousands of labeled examples from each category.
- But the gold-standard dataset is limited, despite its scale.
Key stat: Tested against the new MIT/IBM benchmark, ObjectNet, the accuracy of leading image-recognition systems dropped by 40–45 percentage points.
- "This says that we have spent tons of our resources overfitting on ImageNet," says Dileep George, cofounder of the AI company Vicarious.
- Overfitting is AI-speak for teaching to the test: It refers to a system that can pass a specific benchmark but can't perform nearly as well in the messy real world. (A rough sketch of that benchmark gap follows this list.)
- "I don't think we're anywhere near the finish line," says Rayfe Gaspar-Asaoka, a VC investor at Canaan.
What's next: The creators of the new benchmark hope that more realistic tests will prod much-needed changes to AI.
- Now, they're showing the images to humans to understand the compromises the brain makes in processing objects.
- Katz says the ultimate goal is to create detectors that make the same patterns of errors as the brain — and generalize as humans do.
Go deeper: Teaching robots to see — and understand
2. The new AI co-writer

Illustration: Eniola Odetunde/Axios
A recently released AI program that generates hyper-realistic writing has become a powerful tool for storytelling, hinting at a new genre of computer-aided creativity.
What's happening: Inventive programmers are using it to generate poetry, interactive text adventures, and even irreverent new prompts for the popular game Cards Against Humanity.
The big picture: AI-written text is reaching new levels of realism — so much so that when scientists at OpenAI released a groundbreaking text generator earlier this year, they warned of potential dangers from mass-produced fake news. The risks are still present, but recent projects demonstrate the creative upsides.
- A new text-based adventure — similar to games from the '70s and '80s where you read a prompt and then type in what you want to do — is built on OpenAI's language model. Players create a new story, generated on the fly, every time.
- A new book of poetry published this week is made up of AI-generated completions to the beginnings of famous poems.
- Cards Against Humanity — a game where players compete to submit the card that best pairs with an outrageous prompt — used the same model last week to come up with a slew of new cards.
How it works: The OpenAI language model is a bit like autocomplete: Based on an enormous amount of human writing, it predicts the best words to generate next. "Fine-tuning" it on a smaller corpus helps make it sound like an expert on that particular subject. (A minimal code sketch follows this list.)
- The text adventure, AI Dungeon 2, was fine-tuned with 100+ human-generated choose-your-own-adventure stories.
- The Cards Against Humanity generator drank a sea of human-written cards.
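The model in question is OpenAI's GPT-2, and the autocomplete analogy maps directly onto how it's used in code. Here's a minimal, hypothetical sketch of prompt-and-continue generation using the Hugging Face transformers library; the prompt is invented, and fine-tuning on adventure stories or card text would be a separate training step not shown here.

```python
# Minimal sketch of autocomplete-style generation with GPT-2 via the
# Hugging Face transformers library. The prompt is invented; fine-tuning
# on a custom corpus (stories, cards) is a separate step not shown.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "You are a knight standing at the mouth of a dark cave."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Repeatedly predict likely next words; sampling keeps the output varied,
# which is what gives the "dungeon master" its wacky improvisation.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```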
"It's good enough to generate a story that gets you emotionally invested," says Nick Walton, a senior at Brigham Young University and the creator of AI Dungeon 2. He says he spent somewhere between 200–500 hours on the side project — to the detriment of his GPA.
- The game's AI — the "dungeon master," in D&D-speak — generally deals with human inputs in a highly creative, if slightly wacky, fashion.
- "It's the first time you can really decide to eat the moon and the AI will respond," says Janelle Shane, a Colorado-based scientist and the author of the newly released book, You Look Like a Thing and I Love You.
When they work, the game, the poetry and the cards can feel like magic. But in reality, they're using tricks of probability and dizzyingly enormous datasets to imitate human writing and all the thought that goes into it.
- "They can surprise you so consistently. It's just so vivid, the language they come up with and the ways in which they seem to know what's going on," says Robin Sloan, a Bay Area author who experiments with AI text generation.
- "And then it does break down, and you realize it's not a person or a dungeon master or a novel — it's a weird AI with a relatively limited model of the world," Sloan says.
Go deeper: Where will predictive text take us? (The New Yorker)
3. Cars have become computers

Illustration: Sarah Grillo/Axios
Software features are rising to rival horsepower and styling as the most important elements of the driving experience, Axios transportation correspondent Joann Muller writes.
What's happening: Automakers face an urgent need to redesign their vehicles' electronic architecture to handle the onslaught of advanced features that will one day allow cars to talk to each other and drive themselves.
The big picture: With more than 100 million lines of code in the modern car, advanced software features are testing the limits of the computer hardware under the hood. And it will only get worse: Electric, connected and automated cars will devour even more computing power in the future.
The software-driven shift will likely have massive implications for both the automotive and semiconductor industries.
- The market for automotive computer chips is expected to grow from about $40 billion today, per IHS Markit, to as much as $200 billion by 2040, according to KPMG.
- Semiconductor companies are salivating over the growth opportunity, but it will require automakers and their suppliers to collaborate with chipmakers to seamlessly integrate hardware and software.
The state of play: Today's cars can have as many as 100 electronic control units (ECUs), each dedicated to a separate function — the engine, the window actuators or the lane-keeping system, for example.
- As cars have gotten more sophisticated, all those ECUs, along with the wiring and power supply they require, have turned into an unwieldy, expensive and inefficient tangle.
- “The growth of software content and associated [computer] processing ... is really breaking the current vehicle architecture,” said Glen De Vos, chief technology officer at Aptiv, a major auto tech supplier.
- Worth noting: Tesla, which began in 2009 with a clean sheet, doesn't face the same constraint. Its cars were designed as rolling computers from the get-go, and they receive frequent, over-the-air software updates. Earlier this year, Tesla introduced its own computer chip.
What to watch: If the automobile evolves in the way cellphones, PCs and data centers did, there could be a lopsided contest to grab revenue, with a handful of winners and many losers, warns KPMG in a new report.
4. Worthy of your time

Illustration: Rebecca Zisser/Axios
Tech's liability shield becomes trade-deal flashpoint (Margaret Harding McGill - Axios)
China's overblown AI investments (Karen Hao - MIT Tech Review)
2020 Democrats answer 7 key tech questions (Emily Stewart & Rani Molla - Vox)
Taking virtual reality for a test drive (Patricia Marx - The New Yorker)
An e-waste sting ends in betrayal (Colin Lecher - The Verge)
5. 1 fun thing: NYC's linguistic landscape

Multilingual brochures. Photo: Jeffrey Greenberg/Universal Images Group/Getty
A new map from the Endangered Language Alliance plots more than 600 languages across New York City, placing each near the sites where it's spoken.
The result is an incredibly dense, colorful spread that spans the city's usual suspects — Puerto Rican Spanish, Cantonese, Russian — plus tons of infrequently heard languages, like Syriac, Balti and Jola.
Go deeper: Lost languages found in New York (NYT)