AI will be at the center of the next financial crisis, SEC chair warns
- Felix Salmon, author of Axios Markets

Illustration: Annelise Capossela/Axios
AI will be at the center of future financial crises — and regulators are not going to be able to stay ahead of it. That's the message being sent by SEC chair Gary Gensler, arguably the most important and powerful regulator in the U.S. at the moment.
Why it matters: A paper Gensler wrote in 2020, while a professor at MIT, is an invaluable resource for understanding those risks — and how little regulators can do to try to address them.
The big picture: The most obvious risk from AI in financial markets is that AI-powered "black box" trading algorithms run amok, and all end up selling the same thing at the same time, causing a market crash.
- "There simply are not that many people trained to build and manage these models, and they tend to have fairly similar backgrounds," wrote Gensler. "In addition, there are strong affinities among people who trained together: the so-called apprentice effect."
- Model homogeneity risk could also be created by regulation itself. If regulators exert control over what AIs can and can't do, that increases the risk that models will all end up doing the same thing at the same time. It also makes it more likely that firms will converge on AI-as-a-Service offerings from the same small number of beyond-reproach large providers.
Be smart: Because the rules governing when the models buy and sell are opaque to humans and not knowable in advance (or even retrospectively), it's very difficult for regulators to prevent such a crash.
- As Gensler wrote: "If deep learning predictions were explainable, they wouldn't be used in the first place."
Between the lines: The risks from AI go much deeper than trading algos.
- A lot of AIs are devoted to judging creditworthiness, for instance. Because of their opacity, it's very hard to tell whether they're judging humans in a discriminatory manner. And because AIs are constantly evolving in unpredictable ways, it's impossible to know in real time whether an AI that wasn't racist yesterday might have become racist today. (One minimal monitoring check is sketched below.)
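Gensler's paper doesn't prescribe a specific test, but a minimal version of that kind of monitoring would be a daily fairness check on the model's output. Here's a sketch in Python, assuming synthetic scores for two hypothetical groups and the "four-fifths" threshold borrowed from U.S. employment-discrimination guidelines; none of these specifics come from the paper.

```python
# Illustrative sketch only: a daily disparate-impact check on a black-box
# credit model's output. The synthetic scores and the four-fifths threshold
# are assumptions for demonstration, not Gensler's method.
import numpy as np

rng = np.random.default_rng(0)

def approval_rate(scores: np.ndarray, threshold: float = 0.5) -> float:
    """Share of applicants the model approves at a given score cutoff."""
    return float((scores >= threshold).mean())

# Stand-ins for today's model scores for two demographic groups.
group_a = rng.uniform(0, 1, 10_000)    # baseline group
group_b = rng.uniform(0, 0.8, 10_000)  # model has quietly drifted against B

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" red line from employment law
    print("ALERT: yesterday's fair model may be discriminating today")
```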
Where it stands: "It is likely that regulatory gaps have emerged and may grow significantly with the greater adoption of deep learning in finance," Gensler wrote. "We conclude that deep learning is likely to increase systemic risks."
- The simplest and possibly most effective regulatory response might well just be to increase the amount of capital that financial institutions need to hold when they (or their regulators) are using AI tools.
- Regulators could also require that all AI-generated results pass a "sniff test" from a more old-fashioned, explainable linear model, as in the sketch below. Firms could be discouraged or barred from taking actions that can't be broadly explained in terms of fundamentals.
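One way such a sniff test could work in practice: fit a plain linear model to the black box's own outputs, then flag whatever the linear model can't account for. This is a minimal sketch, assuming a synthetic black-box signal, made-up fundamental features, and an arbitrary two-sigma tolerance; none of it is a method prescribed by Gensler.

```python
# Illustrative sketch only: an explainable linear "sniff test" for a
# black-box trading signal. The black box, features, and tolerance are
# hypothetical stand-ins, not a prescribed regulatory method.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are fundamentals the firm can actually explain.
X = rng.normal(size=(5_000, 4))  # e.g. leverage, momentum, value, rates
weights = np.array([0.5, -0.3, 0.2, 0.1])

# A stand-in black box: mostly linear signal plus an opaque interaction.
black_box = X @ weights + 2.0 * np.tanh(X[:, 0] * X[:, 1])

# Fit the old-fashioned linear model to the black box's own outputs.
coef, *_ = np.linalg.lstsq(X, black_box, rcond=None)
surrogate = X @ coef

# Flag signals the linear model can't broadly explain via fundamentals.
residual = np.abs(black_box - surrogate)
flagged = residual > 2.0 * residual.std()
print(f"{flagged.mean():.1%} of signals fail the sniff test")
```

Whatever the surrogate can explain maps back to named fundamentals via its coefficients; the flagged residual is, by construction, the part of the signal the firm can't explain.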
Threat level: Regulators might be able to slow down the rate of increase, but it's very unlikely they'll be able to prevent systemic risk from rising.
- Gensler himself laid out a long list of regulatory approaches that would help, but he was very clear that even in aggregate they're "insufficient to the task" at hand.
The data conundrum
AI has an "insatiable demand for data," noted Gensler in his paper.
Why it matters: The risk is that AI models inevitably converge on the same enormous training set (Common Crawl, for example), collectivizing whatever inherent weaknesses that set might have.
- "Models built on the same datasets are likely to generate highly correlated predictions that proceed in lockstep, causing crowding and herding," Gensler wrote.
The demand for enormous data sources tends to lead to monopolies.
- Gensler noted that Intercontinental Exchange has quietly come to dominate the mortgage-data business, via its acquisitions of MERS, Ellie Mae, and Simplifile.
- Those monopolies can then become "single points of failure" that threaten the entire network — much as the failure of a single midsize investment bank, Lehman Brothers, caused a global financial catastrophe.
Even the biggest data sets are dangerously incomplete. "Internet use, wearable data, telematics data, and GPS and smartphone data simply do not have long enough time horizons to cover even a single, complete financial cycle," noted Gensler.
- That can have devastating consequences, as we saw in 2008, when mortgage-risk models built on data that had never included a nationwide decline in house prices failed catastrophically.
- Crowding risk is already with us. "It is hypothesized that herding and crowding in high-frequency algorithmic trading is partially responsible for causing flash crashes," wrote Gensler. As those traders move increasingly to AI, that risk can only increase.
- Companies in developing economies might end up using AIs that weren't trained on domestic data at all, making the risks larger still.
The bottom line: AIs don't know what they don't know. And that can be very dangerous.