Jul 22, 2020

Axios Future

Welcome to Axios Future, where we're currently studying up for the Turing Test.

Today's Smart Brevity count: 1,703 words or about 6 minutes.

1 big thing: Meet the AI that can write

Illustration: Eniola Odetunde/Axios

A new general language machine-learning model is pushing the boundaries of what AI can do.

Why it matters: OpenAI's GPT-3 system can reasonably make sense of and write human language. It's still a long way from genuine artificial intelligence, but it may be looked back on as the iPhone of AI, opening the door to countless commercial applications — both benign and potentially dangerous.

Driving the news: After announcing GPT-3 in a paper in May, OpenAI recently began offering a select group of people access to the system's API to help the nonprofit explore the AI's full capabilities.

  • The reaction from many who tried GPT-3 was nothing short of ecstatic. "Playing with GPT-3," tweeted developer Arram Sabeti, "feels like seeing the future."

How it works: GPT-3 works the same way as predecessors like OpenAI's GPT-2 and Google's BERT — analyzing huge swaths of the written internet and using that information to predict which words tend to follow one another.

  • What sets GPT-3 apart is the vast amount of data it was trained on: half a trillion words.
  • And the program has 175 billion parameters — the values an AI aims to optimize during training — which is 10 times more than its closest competitor.
  • The result is what Suraj Amonkar of the AI company Fractal Analytics calls "the best language model in the field of AI."
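GPT-3 itself is a 175-billion-parameter transformer, but the underlying idea — predicting which words tend to follow one another — can be sketched with a toy bigram model. This is purely illustrative (not OpenAI's implementation); the corpus and function names are made up for the example.

```python
# Toy sketch of next-word prediction: count which words follow which
# in a tiny corpus, then predict the most frequent follower.
# GPT-3 does this with a neural network over half a trillion words;
# this bigram counter only captures the core statistical idea.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scaling this idea up — from counting adjacent word pairs to learning billions of parameters over a vast training set — is what lets GPT-3 produce fluent multi-sentence text instead of one plausible next word.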

Details: As early testers begin posting about their experiments, what stands out is both GPT-3's range and the eerily human-like quality of some of its responses.

  • Want to write poetry like Walt Whitman? Prompt it with a few lines of "O Captain! My Captain!" and GPT-3 will fill out the rest, generating new poetry in the style of the Bard of Brooklyn.
  • Looking to write workable computer code? GPT-3 is your geek squad.
  • Care to engage in a philosophical debate about the nature of God with an AI? GPT-3 is the stoned freshman-year roommate you never had.

Yes, but: Give it more than a few paragraphs of text prompts and GPT-3 will quickly lose the thread of an argument — sometimes with unintentionally hilarious results, as Kevin Lacker showed when he gave GPT-3 the Turing Test.

  • GPT-3 can use its vast dataset to predict words, but as Amonkar notes, "It most probably does not even have any semantic understanding of the underlying words."

The big picture: Just because GPT-3 lacks real human intelligence doesn't mean that it lacks any intelligence at all, or that it can't be used to produce remarkable applications.

  • Technologist and entrepreneur Azeem Azhar argues that GPT-3 should be seen as a major step forward in knowledge manipulation. Unlike Google's search engine, which organizes the information of the web but still requires you to find the answer, GPT-3 aims to produce direct answers for a query, wrapped in a nice bow.
  • It's still far from perfect, but focusing on those imperfections risks missing GPT-3's true significance. "I'm pretty sure Gutenberg's first press wasn't as good as the modern press," says Azhar.
  • GPT-3 is a "contender for the most spectacularly newsworthy happening of 2020," economist Tyler Cowen writes for Bloomberg. He imagines how it could "generate an entire ecosystem" of spinoffs and services.

Of note: OpenAI has already begun partnering with commercial companies on GPT-3, including Replika and Reddit, though pricing is still undecided.

The bottom line: Humans who assemble letters for a living aren't out of a job — yet. But we may look back upon GPT-3 as the moment when AI began seeping into everything we do.

2. The continuing problem of AI bias
Data: Language Models are Few-Shot Learners; Table: Axios Visuals

While GPT-3 has earned ecstatic reviews from many experts for its capabilities, some critics have pointed out clear issues around bias.

Why it matters: As AI becomes more powerful and more integrated into daily life, it becomes even more important to root out the persistent problem of bias and fairness.

What's happening: Researchers at OpenAI noted in the paper introducing GPT-3 that "internet-trained models have internet-scale biases." A model trained on the internet like GPT-3 will share the biases of the internet, including stereotypes around gender, race and religion.

  • As the table above from the paper shows, females were more often described with appearance-associated adjectives, while males were described with a wider spectrum of adjectives.
  • The paper also found that GPT-3 associated different races with different degrees of sentiment, with Black ranking consistently low.

In a Twitter thread, Facebook AI head Jerome Pesenti raised concerns that GPT-3 can "easily output toxic language that propagates harmful biases."

  • OpenAI CEO Sam Altman responded that he shared those concerns, and he argued that part of the reason the nonprofit was starting off GPT-3 in a closed beta was to do safety reviews before it went fully live.
  • He noted that OpenAI had introduced a new toxicity filter that was on by default.
  • The original paper also found that GPT-3 seemed less prone to bias than earlier, smaller models, offering some preliminary hope that size could help minimize the problem.

What to watch: A system that can generate near-human quality writing could be used for misinformation, phishing and other hacking efforts. And while malicious humans already do all of those things, GPT-3 and future AI systems could effectively scale those efforts up.

The bottom line: If an AI produces racist or sexist content, it's because the system learned it by watching us. That puts the onus on programmers to curb their creations.

3. The deepening financial risks of water stress

New Mexico, a part of the U.S. where water stress is expected to intensify. Photo: Michael Robinson Chavez/Los Angeles Times via Getty Images

Roughly 60% of real estate investment trust (REIT) properties are projected to experience high water stress by 2030 — more than double the number today, per a new analysis from asset management giant BlackRock.

Why it matters: Climate change is set to exacerbate water scarcity in much of the world. Investors who fail to price in the cost of adapting to water stress risk being left high and dry.

Details: Water stress occurs when the need for water exceeds supply, due to a combination of population growth and urbanization — which increases demand — and the effects of climate change, which can alter the distribution of water supplies.

  • BlackRock used the distribution of REITs to identify where investors will feel the pain of water stress.

By the numbers: Almost all REIT properties in Malaysia, Japan and Australia, among other countries, will likely be in what are classified as high-risk water zones within 10 years, according to the report.

  • Roughly two-thirds of U.S. REIT properties are projected to be in high-risk water zones, double the proportion today. This includes most of the country west of the Mississippi.

The big picture: The report comes as BlackRock is increasingly urging fossil fuel companies to get more aggressive on climate, part of a wider sustainability strategy unveiled this year.


4. Space's big year is blunted

Illustration: Eniola Odetunde/Axios

Ambitious plans for space companies and agencies are threatened by the pandemic and its economic fallout, exacerbating the growing pains of a promising industry, Axios' Miriam Kramer writes.

Why it matters: The U.S. has historically dominated the global space industry, which some have projected could be worth up to $1 trillion by 2040. Delays and setbacks come at a huge cost — both financially and symbolically — in the global space race.

The industry has put up some solid wins this year despite the pandemic — like SpaceX's first crewed launch to the International Space Station and the UAE's launch this weekend of its first Mars mission.

  • Yes, but: Boeing was also expected to get astronauts to the launch pad this year, though that's looking increasingly unlikely after a troubled uncrewed test flight left the company with a series of fixes to implement ahead of another test.


5. Worthy of your time

America's innovation engine is slowing (Caleb Watney — The Atlantic)

  • Policy moves to restrict immigration, combined with financial problems at the country's universities, threaten to stall the engine of economic growth.

The future of housing looks nothing like today's (Kelsey Campbell-Dollaghan — Fast Company)

  • Multigenerational homes are coming back into vogue.

The fake nerd boys of Silicon Valley (Lyta Gold — Current Affairs)

  • Peter Thiel, Elon Musk and a very particular — and lucrative — vision of the future of tech.

Automatic for the bosses (David A. Banks — Real Life)

  • Much of the concern around automation is about the potential for mass job losses, but workers should be more worried about what happens when an AI becomes their boss.

6. 1 sports thing: How modern baseball is like an out-of-control AI

I mean, have you been to a Blue Jays game lately? Photo: Steve Russell/Toronto Star via Getty Images

Modern baseball's relentlessly analytics-driven strategy — which increasingly comes at the cost of spectator enjoyment — resembles the workings of a powerful but poorly aligned AI.

Why it matters: It's difficult to grasp the existential threat that artificial general intelligence could pose because true AI would likely act in a very unnatural fashion, so every analogy helps.

Driving the news: Baseball returns tomorrow for its pandemic-shortened season, coming off another year when in-person attendance was down, as were TV ratings for last year's World Series.

It's likely not a coincidence that interest in baseball is declining at the same time that analytics-driven strategy has come to focus on the three true outcomes of an at-bat: a strikeout, a walk or a home run.

  • Plate appearances that end in one of those outcomes — none of which involve, you know, a ball actually going in play — hit a record high of 35.1% in 2019, compared to 27.4% in 2005.
  • If the point is to win the game, this is an optimal strategy. The drawback is that an optimal strategy for winning is not necessarily an optimal strategy for spectators.

The other side: Not unlike modern baseball GMs — at least, the ones who don't run the Mets — AI is about optimization, which means finding the most efficient way to achieve a given objective.

Of note: In part to head off some of those unintended side effects — like an absence of viewers — baseball has instituted a number of rules changes this year.