How to establish a safe, secure metaverse from the ground up
As opportunities to monetize Web3 proliferate, so too will those bent on exploiting unwitting consumers. The companies that win will be those that best safeguard user data, privacy, and trust.
Netscape cofounder and crypto VC Marc Andreessen recently compared “Web3”—the catchall term for blockchain-based technologies—to the early days of the internet. Just as his browser became a window onto what's now called “Web 1.0,” today the metaverse—envisioned as a persistent, immersive online environment—is poised to become the entryway to Web3. The potential of the metaverse is vast, with opportunities in gaming, commerce, and education totaling $5 trillion in value by 2030, according to McKinsey. But like Web 1.0 before it, it also possesses boundless potential for abuse—including regularly hacked accounts, “rug pull” scams, and even virtual sexual assault.
The first iteration of the web solved these problems by developing secure protocols and systems. The social networks of Web 2.0 took this a step further, heavily investing in content moderation to keep their walled gardens safe for billions of users. For the metaverse to make the leap from gaming to the future of computing, similar safeguards are needed. And this time, the responsibility won't fall to a handful of platforms, but to any organization or community wishing to build a metaverse of its own, aided by the companies building tools to help them. “Proactive intervention will make the metaverse a better place for all,” says Mark Childs, high-tech leader at Genpact, a professional services company. “Hyper-personalization won't just be about content consumption but privacy and personal control over data in everyone's digital lives.”
Based on his work helping top tech and media companies embed trust and safety at the core of their products and operations, Childs hopes anyone entering the metaverse now will focus on three things. “One,” he says, “is digital identity—how do you ensure people are who they say they are and permanently remove the bad actors? Another is artificial intelligence platforms to detect and remove toxic content or bad actors in real time and not post facto. And finally, they'll need skilled moderators to train those platforms and prevent the things they can't detect.”
Blockchain-based ID
If the joke about Web 1.0 was that “no one knows you're a dog,” then Web 2.0 tried to solve the identity dilemma by authenticating users, starting with Facebook's “real name” policy. The user data these platforms owned became the cornerstone of their monetization strategies. Creating a mainstream metaverse will require equally strong safeguards to stop trolls and scammers from wreaking havoc. But this time, the mechanism will be Web3 itself.
“We believe the virtual world should be constructed along similar lines as the real world,” says Rodric David, president and cofounder of Infinite Reality, a builder of metaverses. While users may be strangers to one another, their avatars will be anchored to a blockchain-based ID. “While in the real world we don't know the person we pass on the footpath, in the metaverse the individual avatar is always known to the metaverse owner,” he adds.
This mechanism has the added benefit of users owning their data and the value that stems from it, creating strong incentives for metaverse communities to police themselves. “Communities formed around principles are able to grow and expand with very little support because they've cultivated such a positive community,” says Infinite Reality cofounder and chief innovation officer Elliott Jobe, whose model is the utopian anarchy of Burning Man. Given similar control of their identity and community, metaverse residents could aspire to much more than consumption.
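The identity mechanism David describes can be pictured as a challenge-response flow: the platform issues a one-time nonce, the user's wallet signs it, and the resulting avatar ID, rather than a real name, is what the metaverse owner tracks and can permanently ban. The sketch below is illustrative only: the class and function names (`Metaverse`, `avatar_id`) are hypothetical, and HMAC with a shared secret stands in for real wallet signatures (e.g., ECDSA over secp256k1) so the example runs on the Python standard library alone.

```python
import hashlib
import hmac
import secrets

def avatar_id(wallet_address: str) -> str:
    """Derive a stable, pseudonymous avatar ID from a wallet address."""
    return hashlib.sha256(wallet_address.encode()).hexdigest()[:16]

class Metaverse:
    """Toy model of a metaverse owner that admits avatars, not names."""

    def __init__(self) -> None:
        self.banned: set[str] = set()       # permanently removed avatar IDs
        self.pending: dict[str, str] = {}   # avatar ID -> outstanding nonce

    def challenge(self, wallet_address: str) -> str:
        """Issue a one-time nonce the wallet must sign to log in."""
        nonce = secrets.token_hex(16)
        self.pending[avatar_id(wallet_address)] = nonce
        return nonce

    def login(self, wallet_address: str, signature: str, key: bytes) -> bool:
        """Admit the avatar only if the nonce was signed and it isn't banned."""
        aid = avatar_id(wallet_address)
        nonce = self.pending.pop(aid, None)  # each nonce is single-use
        if nonce is None or aid in self.banned:
            return False
        # Stand-in for wallet-signature verification.
        expected = hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    def ban(self, wallet_address: str) -> None:
        """Permanently remove a bad actor by avatar ID."""
        self.banned.add(avatar_id(wallet_address))
```

Because the avatar ID is derived from the wallet address rather than a real name, the platform can enforce a permanent ban without ever learning who the person behind the avatar is.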
Guardians of the metaverse
Idealism aside, building massive, persistent, multiplayer universes will necessitate algorithmic content moderation at scale. This monumental lift can't be tackled by any one company. It will require new metaverse-native outfits to build the tools and provide the services needed to guarantee a safe experience for users—whether that means automatically blocking known bad actors or removing harmful content.
Platforms have mostly relied on people to moderate content. But human oversight takes a toll: it depends on thousands of individuals whose mental health has suffered from witnessing untold horrors in order to protect users. AI can dramatically reduce the volume of harmful content that moderators and users alike are exposed to—though it will also be critical to ensure that any AI tasked with guarding the metaverse is both ethical and explainable, to prevent real-world bias from creeping in.
The metaverse promises to raise the stakes, thanks to its combination of spatial awareness, haptic feedback, and sensory immersion. It's unrealistic to expect anyone building their own metaverse to employ a moderation workforce of thousands—partnering with specialist firms that combine AI with human judgment is inevitable. Harnessing data and analytics to build positive experiences will be a critical capability for any aspiring metaverse.
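The division of labor described above—automatic blocking of known bad actors, real-time removal of clearly harmful content, and human judgment for the uncertain middle—can be sketched as a simple triage pipeline. Everything here is hypothetical: the keyword scorer stands in for a trained toxicity model, and the function names, user list, and thresholds are illustrative, not any platform's real system.

```python
# Illustrative moderation triage: block known bad actors, auto-remove
# high-confidence violations, and escalate uncertain cases to humans.

BLOCKED_USERS = {"known_scammer_42"}       # identity layer: banned accounts
TOXIC_TERMS = {"scam", "rugpull"}          # stand-in for a trained classifier

def toxicity_score(message: str) -> float:
    """Toy scorer: fraction of words matching the toxic-term list."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_TERMS for w in words) / len(words)

def moderate(user: str, message: str,
             remove_at: float = 0.5, review_at: float = 0.2) -> str:
    """Return one of 'block', 'remove', 'human_review', or 'allow'."""
    if user in BLOCKED_USERS:
        return "block"            # known bad actor: stopped at the door
    score = toxicity_score(message)
    if score >= remove_at:
        return "remove"           # high confidence: act in real time
    if score >= review_at:
        return "human_review"     # uncertain: escalate to a moderator
    return "allow"
```

The middle band is the point of the design: it is what keeps skilled humans in the loop for exactly the cases the automated layer can't judge, while sparing them the highest-volume, most clear-cut material.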
But this, in turn, offers an opportunity to elevate the moderator's role to that of a respected front-line worker. “Why shouldn't this role be filled by the professional service firms keeping the digital streets safe?” Childs asks. These firms must take care to recruit for resilience first and foremost, he adds, then invest in programs such as staggered rotations to preserve moderators' health and well-being. “The reality is: if not us, then who?” he says. “You don't want this done by organizations who aren't putting their people first.”
Freedom vs. responsibility
This tension between centralized responsibility and decentralized freedom means the struggle between Meta's vision of the ultimate walled garden and Web3's frontier mentality is far from settled. While the former aims to carry forward the Web 2.0 model into new territory, startups such as Infinite Reality are betting that offering creators the tools to build their own worlds will prove more enticing in the end.
“If you want people to feel safe in the Wild West—which is what Web3 feels like right now—you're going to need some infrastructure,” Jobe says. “That's how we developed cities, communities, marshals, and sheriffs. The latter didn't replace the ability to homestead and innovate but allowed for enough safety and security to show up.” In that sense, the metaverse is well on its way to being civilized.
This article was originally published in Fast Company.