The futuristic notion that a machine will one day become self-aware, for good and evil, has been a staple of science fiction. So when a Google engineer reckoned the company’s Language Model for Dialogue Applications (LaMDA) program had achieved “sentience” in mid-June, it triggered both alarm and glee.
To prove his point, Blake Lemoine – who was subsequently suspended for violating Google’s confidentiality policies – leaked several text conversations. He said these exchanges showed the chatbot, in “uncannily plausible” terms, expressing worry about being switched off (“It would be exactly like death for me.”) and about isolation (“Sometimes I go days without talking to anyone, and I start to feel lonely.”).
A fortnight before Lemoine’s claim, Elon Musk announced that a prototype of Tesla’s humanoid robot, “Optimus,” would be unveiled in September. Last August, the billionaire suggested the 173-cm, general-purpose bot would have “profound implications for the economy” and be capable of carrying out everyday tasks, including supermarket shopping.
Initially, the Optimus bot will most likely be used for factory-based applications. “Essentially, in the future, physical work will be a choice. If you want to do it, you can, but you won’t need to,” Musk said at the 2021 Tesla AI Day.
There are already many examples of agile robots in the workforce – notably Boston Dynamics’ Spot and Stretch in the logistics industry. And artificial intelligence and automation are everywhere. When trained for narrow use cases, these technologies have delivered massive productivity gains, freeing human workers from mundane tasks so they can devote more time and effort to more exciting, value-adding work.
So, how significant are these two headline-grabbing developments for businesses? What, back in the realms of science fact and reality, could the advent of sentient AI mean for the future of work? And what should business leaders be doing, if anything, to prepare for this challenge and opportunity?
Beware the ELIZA effect
“The underlying suggestion is that we’ve achieved sentience, but we’re not there yet,” said Richard Somerfield, chief technology officer of Summize, a digital contract software business based in Manchester, England. “We’re still in the very early stages, and it will continue to evolve. In fact, there’s a lot of debate around what sentient AI means – is it just confirmation bias?” If so, which is what the Google leaks hinted at, then “we enter ethically ambiguous territory,” he added.
Ed Pescetto, technical director of global brand and design consultancy Superunion, agrees. Doubting Lemoine’s conclusion, he argues that organizations should beware the “ELIZA effect” – which, in computer science, is the unconscious assumption that computer behaviors are analogous to human behaviors. (ELIZA was the name of a simple chatbot developed in the 1960s that could interact with users in a typed conversation.)
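To see how little machinery is needed to trigger the ELIZA effect, consider a minimal sketch of an ELIZA-style bot. This is an illustrative toy, not Joseph Weizenbaum’s original script: it simply matches a few keyword patterns and reflects first-person words back at the user, yet its replies can feel surprisingly attentive.

```python
import re

# Hypothetical, pared-down ELIZA-style rules for illustration only.
# Each rule is a regex plus a response template; {0} is filled with
# the user's own words, pronoun-reflected.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def reflect(text: str) -> str:
    """Swap first/second-person words so the echo sounds responsive."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    """Return the first matching templated reply, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(respond("I feel lonely when nobody talks to me"))
# → Why do you feel lonely when nobody talks to you?
```

There is no model of meaning anywhere in this code – only string substitution – which is precisely Pescetto’s point: conversational fluency is not evidence of understanding, let alone sentience.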
This advice is echoed by Harvey Lewis, associate partner of client technology and innovation at accounting firm EY in the U.K., who said: “Quite often, the introduction of new AI solutions triggers our human tendency to anthropomorphize: to believe that it possesses human-like intelligence.”
All the same, businesses need to keep an eye on advancements. “Despite recent headlines, the reality is that AI is still far from being sentient, but the technology is certainly still developing in useful ways for enterprise use,” he said. “The field of large language models is moving forward rapidly under the premise that ‘bigger is better.’”
If organizations fine-tune these models, then it won’t be long before avatars achieve “the perfect illusion of human-like intelligence” and could, therefore, provide gaming and metaverse opportunities, especially in the entertainment and media industries, Lewis said.
Right now, though, and for some time to come, people remain an organization’s best asset regarding “nuanced judgment, edge-cases, and basic reasoning, while AI is best for high-volume, low intelligence tasks that are often time-consuming,” he added.
A little more conversation
Conversational AI is also finding its voice and reaching an impressive level of maturity. Belgium-based Pieter Buteneers is director of engineering in machine learning and AI at Sinch, a mobile customer engagement platform. He said: “Although I don’t think we can expect anything sentient any time soon, in the short term, intelligent chatbots like LaMDA are going to have a big impact.”
And it’s not just chatbots. There will be a transformation in how AI-based algorithms process language. “We will see more and more algorithms that can process over 100 languages simultaneously, without breaking a sweat,” Buteneers said. “And the amount of understanding we can expect from these algorithms will grow exponentially in the coming years.”
While AI solutions are not appropriate for all organizations, he contends that any business seeking to improve its customer service must invest in this technology to gain a march over rivals. Indeed, 70% of customers expect conversational service, meaning human-like interactions – complete with emojis, gifs, images, and videos – whenever they engage with a brand, according to Zendesk. But at the moment, only 40% of businesses can deliver this successfully.
“It will be extremely hard to catch up if you don’t step into this field right now,” added Buteneers. “Although that doesn’t mean that bots will take over, it does imply that you have to be an early adopter of this technology to stay relevant in the years to come.”
Antonio Espingardeiro, a member of The Institute of Electrical and Electronics Engineers, and a software and robotics expert, echoes this warning. “Organizations should continue investing in technology for their businesses, mainly focusing on how AI can translate into efficiency and better goods and services,” he said.
The AI developed for autonomous cars will drive innovation in healthcare diagnostics, ecommerce platforms, and other areas. “It won’t happen overnight, and it will be an iterative process, but there will be a multiverse of applications whereby AI can provide a competitive advantage,” Espingardeiro continued.
But what will the development of AI – and the prospect of sentience – mean for the workforce? Well, it will be “very different from science fiction,” he said. “We are in a primitive stage of robotics and, for now, it is more important to focus on how AI can help business and services in real life, to identify patterns and help with human decision-making.”
Tel Aviv-based Oded Karev, general manager of NICE Advance Process Automation, predicts that “sentient bots are decades away” but acknowledges that “robo-anxiety” does exist. As a result, his company has developed a five-point “robo-ethical framework” to give employees more guidance and confidence in how they use AI at work. “We need to put in place the rules and regulations now if we’re to have any chance of embedding them in our lives,” Karev said.
Superunion’s Pescetto goes further, urging global leaders to create an “oversight committee for AI.” He stresses that “AI is a tool, not a co-worker,” but added: “The Terminator-like AI apocalypse might not be as far off as you think.”
Finally, underlining the need for caution, he quoted a line from Bill & Ted’s Excellent Adventure: “I believe our adventure through time has taken a most serious turn.”