Technology | December 18, 2023

The biggest generative AI blunders of 2023

2023 was the year of artificial intelligence. 

There were certainly highs around the technology — companies rapidly deployed AI tools to speed up work processes, new capabilities were unlocked, and unique roles around AI were created. 

But there were also some major lows. As people began using generative AI over the past year, some serious mistakes were made, a stark reminder that this generative tech, and our ability to use it, is still very much in its infancy, albeit not for long.

“There are companies who are excited and launching things, and maybe it’s a bit of a trial and error approach of ‘let’s see what happens,’” said Karthik Ramakrishnan, CEO and cofounder of Armilla AI, a platform that provides insurance against AI failures. “Sometimes it’s ‘what am I even protecting against? What will happen?’ If we don’t know those scenarios, people might take that approach.”

In other cases, however, these blunders are calming people’s concerns that AI will take over, because they show the technology still has a long way to go.

“One shouldn’t use AI for AI’s sake,” said Ramakrishnan. “Let’s make sure there’s a true value proposition, a business case where AI can actually solve tough problems.”

“One shouldn’t use AI for AI's sake.”
Karthik Ramakrishnan, CEO and cofounder of Armilla AI.

Not everyone followed such sound judgment, though. We compiled a list of the six biggest generative AI blunders of the past year to find out what actually went wrong. Here they are:

1. Sports Illustrated uses AI-generated authors and articles

The media industry is still navigating the best way to use AI, but this trial really missed the mark. In November, Sports Illustrated made headlines for publishing articles attributed to fake AI-generated authors: Drew (who spends a lot of time outdoors) and Sora (a fitness guru). Their biographies and photos were created by AI, and the same headshots can be found on sites selling AI-generated portraits.

Sports Illustrated deleted several articles from its website after a report published by Futurism exposed the practice.

The Arena Group, the publisher of Sports Illustrated, announced in December that its board of directors had terminated its CEO, Ross Levinsohn. In a statement, the company said the articles with the AI-generated headshots had been provided by a contractor called AdVon Commerce, and it disputed the claim that the articles themselves had been AI-generated.

2. Fake AI-generated women on tech conference agenda

What’s worse: that this conference apparently couldn’t find a single woman in tech to speak, or that it decided to make one up instead?

According to the Associated Press, DevTernity organizer Eduards Sizovs admitted on social media that one of the featured speakers was an “auto-generated” woman with a fake title. He was responding to allegations about a number of suspicious profiles on his conference websites that appeared to be generated by AI.

But he denied that the fake profile was intended to mask the “worse-than-expected level of diversity of speakers” in this year’s lineup and refused to apologize in a series of posts on X, formerly Twitter.

The episode led tech executives at Microsoft and Amazon to drop out of the conference, which had been planned for Dec. 7.

3. Fake citations from ChatGPT used by an attorney

In May, news broke that attorney Steven Schwartz faced sanctions in a New York federal court for submitting fake citations generated by OpenAI’s ChatGPT in legal research for a case he was handling.

The cases ChatGPT presented to the lawyer in his research were Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines. All were bogus: none of them existed, and they were fabricated judicial decisions complete with fake quotes and internal citations.

“One of the worst AI blunders we saw this year was the case where lawyers used ChatGPT to create legal briefs without checking any of its work,” said Stanley Seibert, senior director of community innovation at data science platform Anaconda. “It is a pattern we will see repeated as users unfamiliar with AI assume it is more accurate than it really is. Even skilled professionals can be duped by AI ‘hallucinations’ if they don’t critically evaluate the output of ChatGPT and other generative AI tools.”

Examples like this show the need for industry-specific generative AI tools, like Harvey, which builds custom large language models for law firms to tackle their legal challenges.

4. An automated poll appeared on an MSN article about a woman who was found dead

Another attempt by the news industry to take advantage of AI fell flat and further tarnished the industry’s reputation for using such tools responsibly.

The Guardian sent an angry email to Microsoft after an automated poll appeared on an MSN/Guardian article about a woman who was found dead. The poll asked: “What do you think is the reason behind the woman’s death?” The choices were murder, accident, and suicide.

Microsoft removed the poll that was embedded in its own version of the story, but the string of comments remains. 

MSN is known for using AI to generate many of its articles. It previously landed in hot water after an AI-generated headline described the late Brandon Hunter as “useless at 42” following the former NBA player’s sudden death.

Microsoft has been quietly removing poorly written AI articles from its site for some time now. Business Insider reported that in August, the company removed one MSN piece that listed a food bank in Ottawa as a tourist attraction.

5. Uber Eats began using AI-generated food images

Take care when ordering takeout. Uber Eats has been using AI to generate pictures of dishes when a restaurant doesn’t provide its own, but the technology couldn’t tell the difference when the same term, “medium pie,” was used to describe both a pizza at an Italian restaurant and a sweet dessert. In the same example, it invented a brand of ranch dressing called “Lelnach” and showed a bottle of it in the picture.

“Your brand and reputation takes a hit,” said Ramakrishnan. “Yes, it didn’t harm anyone, but it did harm the company. There are many types of damages with AI — economic, societal, but there is also reputational harm. Will there be legal consequences to that? No, but then you’re like ‘is this food really real? Will it actually look like this when I get it?’ You start questioning everything, and your brand reputation goes down as a result.”

6. Samsung employees paste confidential source code into ChatGPT

Technology manufacturer Samsung banned its employees from using ChatGPT after engineers pasted confidential elements of the company’s source code into the chatbot. Employees leaked sensitive data on three separate occasions, underscoring the importance of training employees to be aware of what kind of data they are putting into generative AI models.

“It’s very difficult for some companies to understand their data and figure out what’s sensitive,” said Sarah Hospelhorn, CMO at software firm BigID. “But everyone wants to take advantage of the latest tech to show innovation. But if you’re using sets with unknown data, you’re amplifying risks.”

"Everyone wants to take advantage of the latest tech to show innovation. But if you’re using sets with unknown data, you’re amplifying risks.”
Sarah Hospelhorn, CMO at BigID.

It’s one reason why BigID published the course “How to Accelerate AI Initiatives,” which outlines what organizations need to do to adopt AI responsibly, from governing LLMs to prevent data leaks to avoiding non-compliance and reducing the risks associated with generative AI usage.

“It’s becoming more and more important,” said Hospelhorn. “We’re seeing it as the top priority for 2024 in terms of how teams are going to handle security around AI so that everyone can take advantage of the latest technology.”