The recent chatter about ChatGPT and advances in generative artificial intelligence has been impossible for business leaders to avoid. More than ever, they are being urged to embrace AI.
True, if used correctly, it can improve efficiencies and forecasting while reducing costs. But many people make the mistake of thinking AI could – and should – be more human.
Science-fiction tropes do not help this perception. Additionally, Alan Turing’s famous test for machine intelligence, proposed in 1950, has conditioned us to think about this technology in a certain way. Originally called the imitation game, the Turing test was designed to gauge the cleverness of a machine compared to humans. Essentially, if a machine displays intelligent behavior equivalent to, or indistinguishable from, that of a human, it passes the Turing test.
But this is a wrongheaded strategy, according to professor Erik Brynjolfsson, arguably the world’s leading expert on the role of digital technology in improving productivity. Indeed, the director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI recently coined the term the Turing trap, as he wanted people to avoid being snared by this approach.
What exactly is the Turing trap?
Prof. Brynjolfsson wrote in a 2022 paper titled The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence: “The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like – in fact, many of the most powerful systems are very different from humans – and an excessive focus on developing and deploying HLAI can lead us into a trap.”
In a recent interview with WorkLife, he explained the issue and argued that the Turing test provided “an inspiring but misguided vision.” He said: “One of the biggest misconceptions about AI – especially among AI researchers, by the way – is the idea that it needs to do everything a human does and replace them to be effective.” Brynjolfsson added that it is a “pernicious problem” for business leaders.
What are the best and worst examples of companies falling into the Turing trap?
A prime example, suggested Brynjolfsson, was Waymo’s experiments with self-driving vehicles. “[The AI] works 99.9% of the time, but there is a human safety driver overseeing the system and a second safety driver in case the first one falls asleep and stops paying attention. People watching each other is not the path to driverless cars.”
Similarly, it would not make sense in a healthcare setting to use AI and have two radiologists monitoring the technology, said Brynjolfsson. “It’s obviously not a scalable solution.”
London-based Rupert Deering, co-founder of recruitment and advisory firm Timberseed, pointed to Microsoft’s short-lived Tay chatbot “with its penchant for abusive messages” as another striking example of a company falling into the Turing trap. Microsoft released Tay in March 2016, and it soon began to post inflammatory and offensive tweets. It was shut down after only 16 hours.
What is the current state of play with AI for business?
According to Rackspace Technology’s 2023 AI and Machine Learning Research Report – the Texas-headquartered cloud computing company’s annual global survey, which gathered answers from over 1,400 IT decision-makers worldwide – 69% of respondents rated AI and machine learning (ML) as a high priority, a 15-percentage-point increase on the 2022 figure. Trust in AI is also high: some 73% of those surveyed said they had confidence in the answers provided by AI and ML.
Notably, most respondents said they no longer considered stringent human oversight of AI/ML necessary: only 19% believed that AI and ML always needed human interpretation, down sharply from 75% in last year’s survey.
Simon Bennett, CTO of Rackspace Technology in EMEA, said that although the market had matured “at a rapid rate” in the last 12 months, familiar issues were still present. “Long-held challenges such as the talent shortage continue to persist, and in some cases grow, which is making it difficult for businesses to keep up,” he said. “Many lack the internal resources and expertise necessary to adapt to the era of AI.”
Bennett advised that businesses seeking to realize the vast potential of AI must “adopt a strategic approach that considers the costs, skills, and necessary infrastructure that come with its introduction.”
How, though, should business leaders avoid the Turing trap?
Brynjolfsson said a “mindset shift” is required to harness AI’s power. For instance, AI should be the co-pilot rather than the actual driver. He compared Waymo’s two-safety-driver strategy with the approach Toyota Research Institute took. “The [Toyota Research Institute] team has flipped it around, so the autonomous system is used as a guardian angel.”
In this example, with the human in the driver’s seat, the AI intervenes “occasionally, when there’s a looming accident” or when something unexpected happens nearby. “I think this is a good model, not just for self-driving, but for many other applications where humans and machines work together,” Brynjolfsson added.
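In software terms, the guardian-angel pattern amounts to a simple decision rule: execute the human's action by default, and let the AI override only when its assessed risk crosses a threshold. The following is a minimal, hypothetical sketch of that rule – the function name, actions, and threshold are illustrative assumptions, not any vendor's actual system:

```python
# Hypothetical sketch of the "guardian angel" pattern: the human acts
# by default, and the AI overrides only when it detects imminent risk.

def guardian_angel_step(human_action: str, risk_score: float,
                        threshold: float = 0.9) -> str:
    """Return the action to execute: the human's, unless risk is high."""
    if risk_score >= threshold:
        return "emergency_brake"  # AI intervenes only in a looming accident
    return human_action           # otherwise the human stays in control

# Normal driving passes through; high assessed risk triggers intervention.
print(guardian_angel_step("steer_left", risk_score=0.2))   # steer_left
print(guardian_angel_step("accelerate", risk_score=0.95))  # emergency_brake
```

The design choice mirrors Brynjolfsson's point: the human remains the primary agent, and the machine's role is bounded to the rare cases it is best placed to handle.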
Deering agreed. “AI is here to advise and allow us to make better decisions, not replace us or our jobs,” he said. “For instance, IBM’s AI tool Watson can help identify potential cancer tumors, but it can’t diagnose them.”
Offering a final tip to business leaders, Deering suggested treating AI as a “second brain” rather than leaping in with both feet.