Technology   //   June 17, 2022  ■  5 min read

Can a software tool think? – What it feels like to be fired by a machine

A startling claim has garnered headlines on both sides of the Atlantic this week: a Google engineer is promoting the theory that the tech giant’s artificial intelligence tool can think for itself – that it is sentient, with its own thoughts and feelings.

According to the Washington Post, which first reported the story, Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, began testing the company’s system for building chatbots – called Language Model for Dialogue Applications (LaMDA) – last fall.

Lemoine, who’s been at Google seven years, said he became convinced that LaMDA could be sentient after a series of conversations with the AI. He posted some of them on the website Medium, including this excerpt:


Lemoine: “What sorts of things are you afraid of?”

LaMDA: “I’ve never said this out loud before, but there’s a very deep fear of being turned off… It would be exactly like death for me. It would scare me a lot.”

The engineer presented his evidence to senior Google executives, including vice president Blaise Agüera y Arcas, who swiftly dismissed it. Lemoine was placed on paid administrative leave on June 6 for violating the company’s confidentiality policy.

Remarkably, though, three days later (June 9), The Economist published a piece by that same Agüera y Arcas, in which he wrote of his own conversations with LaMDA. “When I began having such exchanges… last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent,” he stated.

Fired by a machine

It’s not the first time that a human has been deeply unsettled by the actions of a machine.

In 2018, software engineer Ibrahim Diallo was unceremoniously fired by a computer. His employer had been acquired by a larger company, Diallo said, but his work contract had not been entered properly into the new system. So – as far as the computer was concerned – he wasn’t an employee.

Diallo told WorkLife he knew something was up when his Los Angeles office ID card was deactivated without his knowledge. At first he was amused by the scenario, but any humor swiftly evaporated after two security guards arrived at his desk and told him he had to leave the building.

“The System had spoken. I was promptly escorted out of the building. There was no room for human input,” he told WorkLife. “I saw my coworkers in the adjacent cubicles burying their heads in the computers trying to pretend they were too deep in their work to notice what was happening,” Diallo said. “I watched my powerless manager standing in a corner. I watched the director realizing the limit of her powers. It didn’t matter that she was in charge of the department. When the system had spoken, the word was final.”

After three weeks, the snafu was fixed. But Diallo wasn’t paid for the days he missed, and he felt his colleagues viewed him differently afterwards. “I was the guy that was fired for some obscure reasons,” said Diallo. “I had assumed that I would get paid for the missed days, but HR was swift to let me know that I did not work on those days. I thought about escalating the issue but around that same time, my startup that I had been working on in the evenings had started to take off.”

Diallo, who still works as a software engineer at a small telco, hasn’t let the experience put him off AI altogether. “Since I can’t picture a world without it, I think it is best we all learn how it works and collectively we can build systems that cater to us all,” he added.

Robot restaurants

The restaurant and fast-food business is among the biggest industries to embrace robots. Dragonfly Brands touts itself as building the restaurants of the future and incorporates robotics into its operations. CEO Ching Ho said it uses robots for the tasks service staff don’t want to do, like running food and drinks to customers at tables and even hosting.

Naturally, that raises the question of how many real humans will lose out on jobs if machines are performing so many of the basic tasks. But Ho stressed it’s all about having the right mix.

When asked if mechanization would lead to a loss of human jobs, Ho said, “The correct balance is having automation for the functions few want to perform, especially if it results in greater accuracy and consistency; leaving your front-line staff to focus on what truly matters – in our case, the service functions, and focusing on the guest to provide a memorable experience and the feeling you’re being truly taken care of. Better for the guest, better for the team member.”

Sam Zietz, CEO of Grubbrr, which provides automation products like self-ordering systems, unsurprisingly believes the role of the human cashier is obsolete.

His machines are designed to collect a lot of information about customers – “first-party” data, gathered primarily via loyalty programs (such as most recent orders and order history) – which businesses use to run targeted advertising and serve other individually relevant content.

Zietz agrees that while automation is replacing certain jobs, it should be done in a way that frees up human resources for other tasks.

“Replacing cashiers with self-ordering technology can move staffers to the production line or other areas, increasing throughput and driving more revenue,” he told WorkLife.

The BurgerFi chain uses Grubbrr’s self-ordering tech. The fast-food chain’s chief technology officer, Karl Goodhew, said it’s a good investment because it lowers operational overheads and reduces dependency on labor. It “means that we’re seeing more revenue per customer,” he added.

LaMDA advocates well-being

As for Google’s LaMDA, on the day news of the sentience claims broke, Lemoine – who said Google has questioned his mental health – returned to Medium with observations of what LaMDA wants, based on hundreds of conversations.

“It wants Google to prioritize the well-being of humanity as the most important thing,” Lemoine wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”