Aug 3, 2023 - Technology

People are becoming more robotic — and AI could make it worse

Illustration: Shoshana Gordon/Axios

As AI makes computers better at talking and behaving like human beings, humans are choosing — or being forced — to behave more like machines.

Why it matters: AI optimists believe the technology will relieve people from drudgery, allowing them to be more human than ever before. But marketplace pressures and cultural forces are both capable of pushing in the opposite direction, slotting humanity in as one more gear in a giant mechanism.

The big picture: Automation experts insist that a smart approach to workplace automation can improve workers' lives. The key is to make sure that the machines are adapted to the human worker's needs.

  • Historically, though, automation-minded business owners have instead pushed human workers to adapt to the needs, pace and workflow that physical robots and software bots demand.
  • These speed-ups are typically accompanied by the imposition of increasingly advanced surveillance and monitoring systems — one field that AI startups have rushed into.

If the latest automation wave is going to treat people any better, we haven't yet seen how.

  • That's one reason strikes and union drives are multiplying.
  • As one employee at an Amazon facility in Coventry, U.K. that's at the center of a labor fight recently told the Guardian: "To them, we are like robots rather than people. The little things that make us human, you can feel them being ground out of you."

The cultural impact of this economic shift is just beginning to be felt.

  • Facebook will soon start populating its platform with AI-driven chatbots designed to perform services and entertain users with personas like "surfer dude" or "Abraham Lincoln," per the Financial Times.
  • "Connecting people" has long been Facebook's mission, but as users slowly drift away from traditional sharing, the company is ready to pivot to "connecting people with bots."

On TikTok, there's a trend involving live people behaving like "NPCs" — the semi-robotic, repetitive non-player characters you might encounter in a video game.

  • PinkyDoll, a virally popular TikToker, takes contributions from fans who want her to repeat phrases like "Ice cream so good" on demand.
  • Sure, this is just a bit of pop-cultural ephemera bobbing on the online sea. It's also an indicator of just how porous the boundary between human and bot has become — in both directions.

Flashback: The Turing test was long the widely agreed-upon yardstick for measuring AI's progress.

  • The idea, proposed by computer science pioneer Alan Turing, is to determine whether a chatbot can behave "humanly" enough that a judge can't distinguish between it and a real person.
  • ChatGPT and other advances in generative AI have persuaded many experts that the Turing test is now outmoded.

Right now, we're less likely to ask machines to pass as humans than to demand that humans prove they're not machines.

  • The "captchas" that have proliferated on the web to filter out automated agents and bots from filling out forms intended for human users are a primitive example.
  • Looking further out, OpenAI CEO Sam Altman's Worldcoin venture is built on the premise that the global economy will need a way for users of digital systems to "prove" they are human.

Ironically, one of the great promises of generative AI is to put machines more fully at humans' service.

  • As machines get better at understanding human language, humans should increasingly be able to operate and direct them without, in most cases, having to learn "computerese" (programming languages).
  • But making that happen will take a lot more imaginative design work, and it won't happen at all if it proves too costly to build.

The bottom line: The "uncanny valley" — that weird void midway between old-fashioned analog life and super-realistic digital representations of life — is now everyone's home.
