
Tudorache on Sept. 24, 2016. Photo: Joe Klamar/AFP via Getty Images
Dragoș Tudorache, a member of the European Parliament who leads the body's negotiations on AI policy, has been to Washington many times before, meeting with members of Congress and the executive branch.
But this time, something was different: U.S. lawmakers actually seemed motivated to act on AI, a welcome change for Tudorache, he told Axios in an interview last week amid his D.C. meetings.
The following interview has been edited and condensed for clarity.
What are you doing in D.C. this week?
I've been trying to cover all bases, [seeing] people in Congress who are working on AI, along with the White House and the State Department. It's true that on previous trips, I was trying to explain [the EU's] angle and how we saw the urgency of dealing with AI, and at the beginning, I was not necessarily fully understood or heard on the Hill.
- I leave much more reassured this time; there is a driving energy in Congress that I never saw before. It's really changed, and that change happened in the last nine months.
- The processes taking place in the Senate, for example, the working group led by [Majority Leader Chuck] Schumer, are the kinds of conversations and processes we were doing three years ago.
- I don't want to sound arrogant, but that's just a fact.
Lawmakers here have been saying they don't want to repeat the same mistakes made with social media by failing to regulate AI. Do you think that's a good frame for looking at potential AI legislation?
I think it's a good parallel to make, because we have, in a way, made the same mistake. We passed the [General Data Protection Regulation] quite early on, but we were still somewhat ignoring the risk.
- We thought a Code of Conduct on misinformation would be sufficient to deal with the rest of social media. Six or seven years later, we realized that with voluntary compliance alone, you don't achieve anything. Hard rules are necessary.
- What the White House is doing with voluntary commitments and standards is a good start, but from my point of view, that will not suffice.
- We still have the limitation of relying on the self-discipline of companies and their own moral compass. As far as I understand it, this is just a temporary solution to plug the gap.
How was the experience creating the AI Act given that larger tech companies try to influence legislation to fit what they want? That’s often an issue here in the U.S.
We always try to listen to everyone. But we never let our decisions in terms of how we regulate be dictated by one lobby or another.
- We did nine months of consultation with everyone: Big Tech, small tech, civil society groups, academia, everyone that had a contribution to make.
- Of course, Big Tech tried to muscle in with their own vision of the world and how they should be playing it. You listen to them, you take their point of view, but ultimately you make the decisions.
- I've heard all possible arguments. Did that stop us in our tracks? No, it didn't, because we saw that this is the right time and the right place, and we simply cannot afford not to bring in regulation also for those models [foundational and generative AI], whether the big companies like it or not.
When Schumer had his CEO forum, he came out of it saying the EU had moved too fast on regulating AI and that the U.S. should be cautious. What's your response to that?
We didn't go back and revise. I would venture a guess that, at the end of the process here, the result will not be very different from ours.
- What might be different is a more sectoral approach than the horizontal approach that we have taken.
- I disagree that we moved too fast; I think we moved just in time.
- But with all the openness and friendship and partnership, I think that sooner or later, we will arrive at the conclusion that we are not far apart from each other.
