Jul 17, 2023 - Technology

Leaders, experts offer artificial intelligence advice for real people

Illustration: Aïda Amer/Axios

We reached out to a host of tech luminaries, executives, academics, critics and regulators, and asked them the same question: What’s the single most important thing that people should be doing to prepare for AI?

Here is what they had to say:

Sam Altman, OpenAI CEO: "Use the tools, get a sense of the possibilities, and participate in the conversation so that safe AGI [artificial general intelligence] is as beneficial as possible."

Genevieve Bell, professor of cultural anthropology: "Actively seek to disentangle the science fiction from the reality. Just because it feels a little bit like the movies doesn’t mean the robot uprising is also coming! AI isn’t a technology, and it isn’t singular. It is, in fact, a complex system of many different technical components, systems, infrastructures and processes."

Satya Nadella, Microsoft CEO: "With this next generation of AI, we’re moving from autopilot to copilot, which will unlock a new wave of productivity growth and empowerment for every role, organization, and industry. For the first time, we have access to AI that is as empowering as it is powerful. With this empowerment comes greater human responsibility — all of us who build, deploy, and use AI have a collective obligation to do so responsibly and safely, so AI evolves in alignment with our social, cultural, and legal norms."

Eric Schmidt, former Google CEO: "The responsible application of AI should enhance human intelligence, not replace it or repeat its mistakes. We cannot surrender our sense of autonomy and moral agency even as we rely on AI. It is better to retain responsibility for important decisions affecting civic life, even if AI has a superior capacity to execute them."

Lila Ibrahim, Google DeepMind COO: “Responsibility needs to be the number one priority. That means investing in safety and responsibility early and supporting teams to focus on these priorities, bringing in outside voices to challenge your thinking and embracing a culture of experimentation. AI is one of the most transformational technologies of our time; it’s critical we prioritize responsible development and deployment at the start.”

Aidan Gomez, Cohere CEO: “We need to think carefully about where, when, and most importantly, how, AI is deployed. Any technology, from the combustion engine to the internet, can be used in ways that are beneficial or destructive to society. Each AI use case, from the general public plugging information into chatbots, to companies giving knowledge workers tools to be more effective in their jobs, is different. We need to be more sophisticated as a society weighing the costs and benefits of different ways we use AI than we have been with previous technological breakthroughs.”

Achim Steiner, administrator of the United Nations Development Programme: “We need to ask which values we as humans want artificial intelligence to represent. AI platforms are coming up with perspectives and values based on how they are trained and which data they draw upon, which is quite different from saying they represent the balanced and diverse views of the global population.”

Rep. Yvette Clarke (D-N.Y.): “AI is and always will be a tool in the toolkit. It cannot replace human creativity and innovation, and we should not look for AI to solve all our challenges.”

Eva Maydell, MEP and the European Parliament’s lead negotiator on the EU AI Act: “We need to already be engaging in discussions about the shared vision we have for the future and what this means for social trust, democracy and society.”

Rep. Jay Obernolte (R-Calif.): “Preparing for the major shifts it will bring to our workforce. Embracing the concept of lifelong learning will be critical to our success in integrating AI into our everyday lives.”

Kent Walker, president of global affairs at Google: "I’ve never seen a time of more promise for human progress. With AI bringing us science at digital speed, all of us — technologists, policymakers, civil society, and citizens — have a chance to seize this moment boldly and responsibly. We can be optimistic without being utopian, building and using machine learning to advance opportunity, promote responsibility, and strengthen U.S. and international security."

Julie Samuels, Tech:NYC president and executive director: "We must incorporate this technology in our schools so that the next generation knows how to use it in a safe, responsible, and productive way and is prepared for the AI-powered jobs of the future."

Anja Manuel, executive director of the Aspen Strategy Group and Aspen Security Forum: “It’s time to double down and be creative about how we coordinate regulation. Those discussions have to happen at the science-to-science level, business-to-business and government-to-government.”

Kian Katanforoosh, CEO and founder of Workera: “The most crucial step individuals can take in the realm of AI is to upgrade their skills. By arming yourself with even basic AI knowledge, you can heighten your productivity and efficiency. Even a subtle 3% efficiency increase, compounded over time, could significantly transform your life.”

Tom Graham, Metaphysic CEO: “Every individual should own their biometric and private data that is used in generative AI models. And we should all work together to help individuals protect their rights, identity and digital likeness in a world that will be filled with AI-generated content and products.”

Sarah Kate Ellis, GLAAD president and CEO: “It is past time for AI executives and developers to deliberately and explicitly fix training and other errors that lead to biased outputs which cause harm to people of color, LGBTQ people, people with disabilities and other marginalized communities. The world is also seeing AI’s dangerous role in the spread of disinformation and lies, and the industry should create stronger and more explicit guidelines to thwart such outputs.”
