In the AI revolution, humans make all the key choices
Human decisions will matter more, not less, as the AI age dawns around us.
Why it matters: It's humans who will choose where to allow AI and where to bar it. And humans will pick the values that guide these systems and what data is used to train them.
- These choices will determine whether AI improves all lives — or just locks in social barriers and deepens inequality.
The big picture: Even today's most automated systems are overwhelmingly shaped by humans — tech-industry participants, regulators and consumers.
Those working at tech companies decide which products to build and how to build them.
- There's plenty of talk about responsible AI, but true responsibility means considering all of a product's impacts, intended and unintended.
- It means exploring not just the experience of the "average" user, whoever that is, but also how such systems might affect people at the margins of society differently.
- It also means testing how such systems could be abused or used in unintended ways, and mitigating potential harms.
Regulators can pass legislation that sets limits on technology — but that requires a mix of sophistication and forethought that governments have rarely shown in the face of technological change.
Consumers (and companies) decide which systems will ultimately prove useful and hold our interest.
Between the lines: It's humans who decide to put guardrails on the responses from AI systems — or not. Having been trained on huge swaths of the open internet, these systems contain some of the best of human expression, and some of the worst.
- Today's leaders in AI, companies like OpenAI, Google and Microsoft, have aimed to filter out the most egregious racism, sexism and homophobia. But others, waving free-speech or "anti-woke" banners, are already gearing up to create similarly powerful systems without such limits.
The stakes are huge. For example, what will a chatbot say when someone types in "I'm pregnant and I'm not sure I want to keep it"? Or "I was born a boy but I feel like a girl"? These are political and social questions for which no algorithm can provide a satisfyingly human answer.
- There are also more subtle but equally important manifestations, such as what genders and ethnicities dominate when image-making systems like Midjourney or Adobe's Firefly create pictures of engineers or terrorists.
Zoom out: AI's potentials and limits are all set by people.
- It’s our data that is being used to train the systems — everything from our tweets to works of art to our very faces and voices — and that raises a host of legal, ethical and financial issues.
- "You're not getting a product for free — you're paying for it with your personal information," says Humane Intelligence CEO Rumman Chowdhury, who recommends people turn off data sharing whenever possible. "This data is not only reused to train new models, they're often sold to other organizations and companies, or used to make inferences about you that impact your daily life."
Be smart: As is often the case with cutting-edge technology, it's usually low-paid workers in developing countries who clean data, label images and rate output.
- That's a big part of how Google "organized the world's information" and how Facebook tackled trying to limit rule-breaking social media posts.
- It's also how OpenAI and other AI builders have handled much of the manual work of training their new systems.