Campbell Brown co-launches Forum AI

Campbell Brown, a veteran news anchor and the former head of news at Meta, has raised a $3 million seed round to co-launch a new company called Forum AI that evaluates AI models for bias and makes judgment calls about high-stakes topics.
Why it matters: Brown believes much more transparency and expertise are required to inform human-level intelligence within AI systems.
- When it comes to how large language models are trained or what data is used, "the lack of transparency in AI means we don't know who the people are — or what their credentials are, or what their experience is — because attribution disappears," Brown tells Axios.
How it works: Forum AI pairs proprietary technology with a network of more than 500 curated domain experts across an array of topics to evaluate, for a monthly fee, how AI systems handle certain topics, Brown said.
- It also provides real-time expert insights on sensitive topics as major events unfold, as well as new data for AI companies to train their algorithms in responding to delicate queries.
- Forum AI aims to help AI companies assess whether consumer-facing outputs around complicated topics strike the right tone, balance and context, as well as whether the answers lack critical perspectives or include inadvertent biases.
- Examples of thorny topics the company hopes to help AI models better address include geopolitics, politics, health care and mental health.
Between the lines: The company has already onboarded several experts who will be listed on its website, including former Democratic U.S. Treasury Secretary Larry Summers, former Republican House Speaker Kevin McCarthy, historian Niall Ferguson, CNN anchor Fareed Zakaria, and journalist and author Salena Zito.
- Some experts are paid, while others have equity in the new company, which Brown co-founded with Robbie Goldfarb, a veteran of Meta's AI Trust and Safety team. Still others are participating because they see a branding opportunity in having their expertise cited by popular LLMs.
- Forum AI is also collaborating with several institutions to ensure its auditing and content recommendation work is accurate, including Cleveland Clinic, Mount Sinai Health System, Stanford's Institute for Human-Centered AI, Atlantic Council, Carnegie Endowment for International Peace, Hudson Institute, Foundation for Defense of Democracies, and Manhattan Institute.
Follow the money: The startup, which has been in beta for several months, has raised a $3 million seed round led by Lerer Hippeau with investment from Perplexity's venture fund.
- Brown said the firm's current focus is to establish itself and expand the network.
- She has been advising AI startup TollBit over the past year.
Zoom out: Tech companies have historically struggled with how to best inform their algorithms while also dodging the responsibility of making editorial judgments.
- Meta famously created an Oversight Board in 2020 to outsource tricky content moderation questions.
- The board, which it funded, is still operational. But the company has recently taken a more hands-off approach to content moderation, letting community feedback influence its decisions more than fact-checkers.
The bottom line: In the early stages of the AI era, consumers have mostly accepted that models will sometimes "hallucinate" or make up answers and get things wrong. But as the technology progresses, there will be more pressure on tech firms to explain how and why they cite certain information.
- Regulators are also eyeing ways to begin holding AI firms accountable for the information they provide to consumers.
- The EU's AI Act, for example, requires tech companies to create risk management systems that address possible bias.
Editor's note: This story has been corrected to say some of the experts the company is working with are paid.
