AI is actually making workers less productive
Generative AI tools were supposed to eliminate time-consuming tasks and deliver major boosts in workplace productivity, but they have yet to make good on that promise. In fact, right now they are doing the opposite: giving employees more work to do and contributing to burnout.
Almost 80% of workers who use generative AI on the job say it has added to their workload and is hampering their productivity, according to an Upwork survey of more than 2,500 full-time workers, freelancers and executives.
Workers say they’re spending more time reviewing or moderating AI-generated content and more time learning how to use the tools, and their experience is sharply at odds with their employers’ perceptions: some 96% of executives expect AI to boost productivity, while about 40% of employees say they don’t know how that will ever happen, the Upwork survey found.
Accordingly, employers will need to adjust their expectations and approach to effectively integrate new tech and see some ROI — though it’ll likely be less than they’re hoping for.
“What’s happening is that this hype bubble is just huge, and it’s disproportionate to the actual impact the technology can have right now, especially the way it’s being deployed,” said Emily Rose McRae, senior director analyst at Gartner.
“I talked with one client who said their board declared they should cut headcount by 20% due to generative AI. I’m going to be frank that I’m not actually sure any company has come anywhere near that level of headcount cutting, and I don’t know that they will, because that’s just not how the tool works,” McRae said.
A key problem is that the tools themselves remain imperfect. Generative AI is still prone to hallucinate or fabricate an answer that sounds reasonable, meaning a human’s input is needed to double-check the material. “What it does is it saves you the time of putting it all together in the first place,” McRae said.
To be sure, some companies have found that generative AI genuinely saves workers time. And in certain narrow use cases it has had a major impact, like the legal field, where during discovery AI tools can help lawyers research and analyze existing case law and summarize huge amounts of information, she said.
In other cases, gen AI-powered chatbots can significantly speed up the time it takes staff to learn new software or new tasks, or save HR leaders time by answering employee questions and pointing them to resources. But in all cases, a human is still needed to review the validity of the output. That review process can itself be time-consuming, and skipping it carries varying levels of risk.
“A lot of your large language models only operate at best when a human’s in the loop and when there is human judgment and oversight, and that’s just the reality of where we are with this technology,” said Kelly Monahan, managing director and head of the Upwork Research Institute.
“In order to really capture the productivity gains, we actually have to take a big step back and say what is the business problem we’re solving for? How do I rethink the way that I’m doing my job in order to achieve that? And how does this tool help? And I’m not sure that a broader AI strategy in workforce development has taken place yet in many organizations,” Monahan said.
Instead, workers feel left on their own to make the tools work for them. The Upwork survey found that 40% of workers feel their company is asking too much of them when it comes to AI, and they are spending far more of their own time teaching themselves how to use the tools.
Employers can better support staff by holding focus groups to determine exactly what barriers they’re facing and what kind of targeted training is needed, McRae said.
“If it’s a priority that your workforce experiment with and learn how to use generative AI, make sure you have real cases for what you want them to be doing with it and give them the tools and space to learn how to do that. But also ideally be very open to the feedback, that this doesn’t do what we need it to do, and it’s actually not helpful,” she said.
McRae believes employers will soon start teaching staff to spot AI hallucinations the same way they teach them to spot phishing, through “information skepticism” training. Workers would learn the cues that warrant skepticism toward an output, or toward any AI-generated content, whether it’s labeled as such or not, and when they can accept one more confidently. Just as with phishing, they would also be able to report such instances and use that data to better inform the language models they are working with.