
Many observers are skeptical that Facebook/Meta is capable of keeping people safe in the virtual metaverse it plans to build, given the company's struggles to moderate today's online world.
Yes, but: Veteran Facebook executive Andrew Bosworth argues Facebook's history is what makes it the firm best suited to the task.
Bosworth says Facebook has the largest team working on the potential harms that might befall users in the metaverse — everything from harassment, hate speech and misinformation to digital theft — and is the only company with experience tackling such challenges at the scale of entire populations.
- "We are uncommonly well suited to spot around corners in new areas like the metaverse," Bosworth told me in an interview last week.
Catch up quick: Bosworth, the man in charge of implementing the company's technical vision for the metaverse, has spent the last few years leading Facebook's AR and VR efforts.
- He's set to become company-wide CTO next year, is one of Facebook's longest-tenured employees and played a significant part in building key pillars of its core service, including the News Feed.
Between the lines: Bosworth notes that many of the hardest problems aren't about implementing policy on a large scale.
- Even devoting 20 times as many resources toward enforcement would not end controversy around a service like Facebook, he said.
- "It's not just about effort," he said. "It is also about the substance of what the policies should be."
When it comes to governing the real world, Bosworth argues, people embrace a broad range of ideas.
- A highly centralized and controlled regime — like Singapore, for example — can keep things safer and cleaner, but at the expense of individual liberties.
- More open societies foster greater freedom and dialogue, but with less order, more danger and other downsides.
- "This is not unique to software or hardware," he said.
The big picture: CEO Mark Zuckerberg's metaverse keynote last week urged companies to collaborate on building a single digital universe, along the lines of today's internet.
- Getting companies like Microsoft, Google and Apple to buy into a shared vision will itself be quite a challenge.
Yet the more open the metaverse is, the harder it may become to moderate. Existing real-world legal authorities could struggle to assign jurisdiction over a digital world that transcends physical location.
Yes, but: Bosworth notes that there are also some aspects of a metaverse that should be easier to handle than either moderating widely accessible content or policing the real world.
- For one, he says, you could have the option to mute someone who is behaving inappropriately so that they can never bother you again, an option not available in the real world.
What's next: Bosworth says society needs to debate what we expect in terms of security and privacy in the virtual world.
- For example, should Facebook or any company be able to listen to conversations taking place in a virtual park or living room?
- Doing so might provide some users with more safety, but others would legitimately bemoan a lack of privacy.
"I am eager to have those conversations," he said. "I don't think we should be making those decisions, certainly not by ourselves."
Meanwhile, Bosworth says Facebook is already considering some options that balance privacy and security.
- One possibility is allowing users to keep a local copy of their most recent metaverse interactions.
- Such a buffer would let someone report an unwanted encounter and provide the digital evidence, without requiring Facebook or a government to constantly monitor private interactions; a rough sketch of the idea appears below.
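The sketch assumes a fixed-size rolling buffer kept entirely on the user's device and exported only when the user chooses to file a report; the class names, event fields and buffer size are hypothetical illustrations, not anything Facebook has described.

```python
from collections import deque
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class InteractionEvent:
    """One interaction the user's device records locally (hypothetical schema)."""
    timestamp: str
    other_party: str
    kind: str       # e.g. "voice", "gesture", "chat"
    summary: str


class LocalInteractionBuffer:
    """Rolling, on-device buffer of a user's most recent interactions.

    Nothing leaves the device unless the user files a report, and even then
    only the small buffered window is exported as evidence.
    """

    def __init__(self, max_events: int = 500):
        # Oldest entries drop off automatically once the buffer is full.
        self._events = deque(maxlen=max_events)

    def record(self, other_party: str, kind: str, summary: str) -> None:
        """Append one interaction to the local buffer."""
        self._events.append(InteractionEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            other_party=other_party,
            kind=kind,
            summary=summary,
        ))

    def export_report(self, reason: str) -> str:
        """Package the current window as a report the user chooses to submit."""
        return json.dumps({
            "reason": reason,
            "events": [asdict(e) for e in self._events],
        }, indent=2)


# Usage: the device records interactions as they happen; the user exports
# the buffer only if something goes wrong.
buffer = LocalInteractionBuffer(max_events=3)
buffer.record("avatar_123", "voice", "greeting in virtual park")
buffer.record("avatar_456", "voice", "harassing remarks")
print(buffer.export_report(reason="harassment"))
```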