Axios Pro Exclusive Content

Cohere CEO: Good regulation will make for better business

Ashley Gold
Sep 18, 2023

Aidan Gomez in Toronto in June. Photo: Piaras Ó Mídheach/Sportsfile for Collision via Getty Images

For enterprise AI companies like Cohere, it's a welcome development that government is thinking about regulation now as opposed to years into generative AI's boom, CEO Aidan Gomez told Axios in an interview last week.

What's happening: After signing on to voluntary "responsible AI" commitments at the White House, Gomez chatted with Axios about how the U.S. compares with Canada and the United Kingdom on tech regulation, why enterprise AI companies need different rules from consumer-facing ones, and more.

This interview has been edited and condensed for clarity.

What is your take on the voluntary commitments the White House is leading for "responsible AI"? What do they really mean?

I think this time we are early, and before wide deployment we're getting these commitments and we're all really thinking through how to make this stuff go well and ensure that both consumers and enterprises are protected.

  • I'm excited to see that there's such close engagement from the government with experts like the folks who have been in research for decades on this stuff.

How should people be thinking about the key differences between consumer and enterprise generative AI?

In one case, you're putting it directly in the hands of the general public, who may not have the expertise or familiarity with this technology to properly understand it.

  • When you have organizations that are already deploying this tech, they have some degree of expertise inside them with AI, so they might be more familiar with the technology. But they require a certain set of protections in order to adopt it.

What sort of protections are you talking about?

Data privacy is a big one. We can't have new risks in terms of data leakage, or these models picking up data proprietary to organizations and leaking it to the outside world. There are a lot of unique concerns on the enterprise side.

  • I think this administration and this group at the White House really cares about protecting that, as well as the broader consumer-focused efforts.

Cohere has offices in Toronto, NYC, California and the U.K. Is that complicated for the company when it comes to conversations around policy?

It's three parties, so it's not overwhelming. They're quite tightly knit and coordinated among the three of them. Everyone is speaking to each other, trying to wrap their heads around this and trying to come up with some consistent set of guidelines.

Are doomsday scenarios that have been laid out for AI overblown?

A lot of what bad actors could do with open source models is very real ... but a lot of it is a distraction from the actual stuff that could go wrong, which is a little more banal or less extreme.

  • I don't want to see this extreme vision of the future dominate our discourse when what we should be protecting against is much more well known and understood, such as misinformation and very scalable phishing campaigns and data privacy.
  • It's not a new risk; it's very amplified existing risk.
  • When we talk about concrete regulation, we have to stay focused on the risks that are most likely.

What would be the most useful type of regulation for a company like Cohere?

Guidelines on the sorts of safeguards that are required for particular use cases; for instance, deployment of these models into medical or legal scenarios. That would make it a lot easier for us and our customers to feel comfortable actually pressing the button on deployment and moving ahead.

  • We don't have certainty around clearly moving forward [in some industries]; having those guidelines is what good regulation does.

Our thought bubble: Governments will have to shift out of the regulatory mindset that developed around social media and remember that businesses and individuals have different needs for laws around generative AI.

  • The biggest companies, which often serve consumers, tend to dominate the discourse around regulation, even as generative AI is already being used daily by businesses without incident.