As a single mother and successful small business owner, Chukurah Ali had long been the sole breadwinner for her family. But a car accident last February plunged her into depression.
While she recovered from her physical injuries, the psychological trauma was harder to fix. “Everything was extremely hard for me,” said Ali. “Finding a therapist was hard, picking up the phone to make a call was hard, and I was scared of driving.”
She couldn’t afford a therapist, but she knew that she needed assistance. Her doctor recommended Wysa, a free artificial intelligence bot that responds to a person’s expressed emotions and uses cognitive-behavioral techniques, meditation, breathing and other micro-actions to help users feel better.
At first, Ali was skeptical about how much a bot could really help. But she soon found herself consulting it seven to 10 times a day, including late at night when there was no one else to talk to.
A year on, Ali hasn’t felt the need to use the bot in two months and is in a much more stable state of mind. While she hasn’t returned to work yet, she is pursuing a degree instead.
While Ali found the service through her doctor, it is also offered to employers as a mental health benefit through Wysa for Work, the AI chatbot-guided cognitive behavioral therapy app.
The debate around AI’s role in the future of work, and its influence on how we live, has heated up over the last few months, since the generative AI bot ChatGPT burst into the mainstream.
Some argue that the technology could drive people further apart, with harmful effects like isolation and a lack of human connection. But there are also ways the technology can be used for good.
“AI will inevitably alter the nature of work but its larger impact will always be to remain a tool that complements and augments humans’ strength and not replace them,” said Dr. Sameer Maskey, founder and CEO of AI strategy and solutions company Fusemachines. “AI bots such as Wysa can offer behavioral change and cognitive insights based on advanced cognitive behavioral techniques.”
AI can play a valuable role in mental health care, an ongoing area of concern for the workplace. Wysa isn’t the only company looking to offer AI-based mental health tools. Last year, Headspace, which has been a common workplace benefit for years, acquired Sayana, an AI-powered mental health and wellness company. Sayana leverages chat-based sessions with an AI persona that encourages users to track their moods.
Aside from tools like Wysa and Headspace, AI can provide educational resources, self-help tools, and support. These are ideally used in addition to traditional mental health care, but can also help fill the gaps when someone doesn’t have access to therapy.
Naturally, these bots are no true replacement for the medical support a professional human psychologist can offer. But they can complement it.
Wysa clearly states that its AI coach is not intended to provide a diagnosis, treatment or cure for any condition. It’s meant to assist people experiencing low mood, stress or anxiety, offering techniques to maintain mental well-being in a self-help context.
Currently, more than 40 companies use Wysa, including Colgate-Palmolive and L’Oreal. Other clients include Accenture, Aetna International, Cincinnati Children’s Hospital Medical Center, and Singapore’s Ministry of Health. Chaitali Sinha, head of clinical development and research at Wysa, said some of its largest clients have offices in more than 50 countries.
At the end of 2022, The Travelers Companies, the largest workers’ compensation insurer in the U.S., announced it would offer Wysa. The goal was to reach injured employees whose psychological barriers to recovery are preventing them from returning to work.
Wysa’s research found that as many as 4 in 10 employees suffer from symptoms of depression or anxiety. While talking to Wysa, 42% of employees opened up about their declining mental health. Every year, unaddressed depression and anxiety cost $580 per employee in absenteeism, lost productivity and turnover. That works out to nearly $30 million a year for an employer with 50,000 people, according to Wysa.
“Enterprises must take concerted efforts into establishing these AI-human partnerships,” said Maskey.
One way of doing just that is with conversational AI tools like Wysa. Sinha said Wysa’s research found that people in in-person therapy don’t see a difference in their mood for three to five weeks, whereas app users see a difference within three to five days. One way the company measures that difference is with a tool known as the WAI-SR, which assesses agreement on the tasks of therapy, agreement on the goals of therapy, and the development of an effective bond with the bot.
Health insurer Vitality identified 60,000 U.K. VitalityHealth members likely to need mental health support and measured their change in mood. At onboarding, each user’s anxiety and depression were scored through standardized clinical questionnaires: the GAD-7 for anxiety and the PHQ-9 for depression. After a month of using Wysa, the same questionnaires showed a 31% reduction in moderate anxiety symptoms and a 38% reduction in severe anxiety symptoms. For depression, there was a 40% reduction in symptoms among those with moderate depression and a 35% reduction among those with severe depression.
AI bots like Wysa are also being pitched as helpful tools for people who want advice on how to deal with challenging work conditions.
“For example, if an employee isn’t able to work with their manager, and they feel a sense of alienation, they might be wondering how they can improve it and what kind of questions they should be asking,” said Sinha. “They can double up those skills with the app or have a safe space to talk to what they are thinking about because they are ready for the conversations.”
Wysa is currently focused on scaling globally by supporting people on disability and workers’ compensation. “Like anything powerful in this world, it all depends on how the technology is used,” said Sinha. “There should be clear intent and it depends a lot on the use, the safety measures in place, and how much the technology could help or harm.”