Technology // August 10, 2022

How to fix the metaverse’s sexual harassment problem (and make ‘metawork’ a reality)

Since Meta – the tech titan formerly known as Facebook – revealed last year that it would invest heavily in the metaverse, there has been massive enthusiasm about the possibilities of this nascent technology, not least for the future of work.

Indeed, at the end of July, a study by Grand View Research predicted the booming metaverse market will reach $6.8 trillion by 2030. However, alarming recent data indicates that almost two-thirds of adults believe metaverse technologies will enable sexual harassment.

A national tracking poll by business-intelligence company Morning Consult, published in March, found that 61% of 4,420 U.S. adults were concerned about this specific subject. Women appear more worried than men: 41% of female respondents said they have “major” concerns, compared with 34% of male respondents.

The same research showed that 79% of adults are worried about the tracking and misuse of personal data in the metaverse. Add in the numerous articles written about people’s personal experiences of harassment in the metaverse, and it’s clear there is a deep-rooted trust issue that business leaders should consider before funding metaverse worlds for employees, whether for onboarding staff, hosting events, or holding meetings.

In truth, the concept of a metaverse – loosely defined as a virtual reality space where users can interact with a computer-generated environment and other users – has been around for decades. 


The video game Second Life, launched in 2003, is often cited as the first metaverse, although the term itself was coined in Neal Stephenson’s 1992 novel Snow Crash. The sci-fi novel described a virtual topography where real estate can be bought and where humans, represented by avatars, interact with one another and with software agents.

Back in the realm of reality, business decision-makers should approach the metaverse with caution, certainly in a work context.

Digital Wild West

Phill Brown, global head of analytics and insights at London-headquartered outsourced recruitment company Resource Solutions, is one of a raft of business leaders deeply unconvinced by the metaverse, at least in its current state. “There’s a lot of excitement around the metaverse right now, with a growing number of businesses and brands entering this space and considerable investments being made,” he said. 

“It remains to be seen, though, when and where exactly the hype will be matched by utility and, more pertinently, how much of the buzz may translate into concrete value for companies.”

Brown is unsurprised that most people are concerned that the metaverse could be a hotbed for sexual harassment, with users cloaked in anonymity behind the masks of their avatars. It’s currently like a digital Wild West. Furthermore, data security in the metaverse is a risk at both the personal and business level.


“Employers need to be aware that, at the moment, early examples may be poorly regulated, and businesses need to carefully consider the safety and security of employees and clients, as well as the privacy of business-sensitive information when circulating in the metaverse,” he warned.

Jonathan Merry, CEO of BanklessTimes.com, a news outlet focused on alternative finance and cryptocurrency, explained how interacting in the metaverse could be conducive to abuse. 

“The potential for harassment of any kind in a digital landscape is likely to be something companies will need to be wary of if they plan on embracing ‘metawork,’” he said. 

Discriminatory or lewd comments may be a significant problem. And employees might make colleagues uncomfortable by making certain gestures or letting their avatars get too close to others, Merry said. “Any unwanted behavior in a virtual space considered intimidating, degrading, offensive or hostile could be cause for concern.”

Using tech to flag inappropriate behavior 

But whose responsibility is it to safeguard employees in this potentially dangerous, digital dimension? “Making the metaverse a safe and secure place will be something multiple parties need to invest in,” said Merry. “Companies creating metaverse environments will need policies to guide behavior to a certain extent.”

And in the workplace, HR groups will need to think carefully about how they can “establish an environment that champions respectful behavior in both the physical and digital worlds,” he added.

Could the solution to this tech challenge be more technology? Surbhi Rathore, CEO and founder of Symbl, a company offering conversational AI products, thinks doubling down is vital. “Although curbing sexual harassment online and in the metaverse has been on the minds of technologists and HR teams for many years, recent advancements in a specific area of AI, named conversation intelligence, make this dream a reality,” she said. 

The technology has now matured to the point where sophisticated machine learning can understand context in live conversations, including tone, emotion, and gestures, Rathore posited.

“AI that understands human conversation, such as voice and chat, provides an important opportunity to fight against sexual harassment and other anti-social online behaviors,” she said. “When directed toward a particular outcome, such as harassing speech, a new generation of AI technologies can be beneficial and used for coaching and reporting.” 

Rathore provided an example of how this might work in practice. If, say, a young professional doesn’t realize their comments in an online workspace could be deemed offensive, the tech could flag a real-time warning. An explanation of how the comment is harmful, delivered before they hit send, “may be enough to coach and guide their conversation in the future,” she added.

And if an employee repeatedly violates the basic rules of professional behavior, the system can alert a trained HR team member, who reviews the conversation and takes further action if necessary.
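As a rough illustration of the flag-coach-escalate flow Rathore describes, here is a minimal Python sketch. It is not Symbl’s actual API: the is_harmful() check, the FLAGGED_TERMS rule set, and the escalation threshold are placeholder assumptions standing in for a real conversation-intelligence model.

```python
# Hypothetical sketch of the coach-then-escalate moderation flow described above.
# is_harmful() is a stand-in for a real conversation-intelligence model that
# scores tone, context, and intent; names and thresholds are illustrative only.
from collections import defaultdict
from dataclasses import dataclass, field

FLAGGED_TERMS = {"offensive_term_1", "offensive_term_2"}  # placeholder rule set
ESCALATION_THRESHOLD = 3  # violations before a human HR review is triggered


@dataclass
class ModerationPipeline:
    violations: dict = field(default_factory=lambda: defaultdict(int))

    def is_harmful(self, message: str) -> bool:
        # Stand-in for an ML classifier; here, a simple keyword match.
        return any(term in message.lower() for term in FLAGGED_TERMS)

    def review(self, user_id: str, message: str) -> str:
        """Decide what happens to a message before it is delivered."""
        if not self.is_harmful(message):
            return "deliver"
        self.violations[user_id] += 1
        if self.violations[user_id] >= ESCALATION_THRESHOLD:
            # Repeated violations: route the conversation to a trained HR reviewer.
            return "escalate_to_hr"
        # Early offences: hold the message and show a real-time coaching prompt.
        return "coach"


if __name__ == "__main__":
    pipeline = ModerationPipeline()
    print(pipeline.review("user_42", "hello team"))            # -> deliver
    print(pipeline.review("user_42", "offensive_term_1 ..."))  # -> coach
```

In a real deployment the classification would presumably run on the conversation-intelligence provider’s side, and the escalation step would feed an HR case-management workflow rather than simply returning a label.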

Stress caused by being snooped on

Harold Li, vice president of software firm ExpressVPN, linked concerns about sexual harassment in the metaverse with a general push for workplace technology solutions that have not been fully thought through. For instance, the use of staff-monitoring software has risen by 56% since the start of the coronavirus crisis – but few employees are happy about it.

“As with many privacy issues, there are psychological effects of employee surveillance that are particularly worrying,” said Li. ExpressVPN research indicated that many employees would rather leave their job than be subjected to the anxiety and stress caused by being snooped on.

“Surveillance can also have particularly sinister uses if we factor in serious workplace issues like harassment,” he added. “Can businesses ensure that a worker is protected when their private messages could potentially be viewed by their harasser?”

Ultimately, if the answer to this puzzler is “no,” then pursuing monitoring solutions in the metaverse or elsewhere hardly seems advisable.