Technology // March 6, 2023

To ban or not to ban: Data privacy concerns around ChatGPT and other AI

“Yes, companies should be concerned about what data their employees are putting into ChatGPT or any other similar platform.” The bot itself told Digiday that last week. 

As more and more workers enjoy the benefits of ChatGPT, some company leaders are growing concerned about employees inputting sensitive information into the bot. It’s led companies like JPMorgan Chase, Verizon and Amazon to limit or ban their employees from using it.

CNN reported that JPMorgan staff were asked not to enter sensitive information into OpenAI’s free-to-use chatbot, due to compliance concerns with third-party software. Earlier this month, Nozha Boujemaa, vp for digital ethics at retailer IKEA, also warned of the possible risks that could arise from using tools like ChatGPT on a daily basis, as reported by AI Business. JPMorgan declined WorkLife’s request for comment.

Meanwhile, Verizon has decided to “keep ChatGPT off of all company-owned devices for employees and contractors,” a spokesperson said over email. “While artificial intelligence is integral to Verizon’s strategy, ChatGPT is not synonymous with AI, as AI is a much broader science and discipline involving using computers to do things that traditionally require human intelligence,” said Nasrin Rezai, chief information security officer at Verizon, on the company’s program “Up to Speed.”

Rezai added that using ChatGPT “can put us at risk of losing control of customer information, source code and more.”

Apple has gone a step further and blocked an update to the email app BlueMail, which uses a customized version of OpenAI’s GPT-3 language model. Apple was reportedly concerned the new AI feature in the app could show inappropriate content to children.

The pushback from these organizations has stirred debate over whether language-generating AI tools are ready for widespread use. Companies will need to give thoughtful consideration to how much, and how quickly, they embrace emerging AI-powered technologies.

OpenAI makes it clear, both when you use ChatGPT and in its FAQ, that the information you enter helps train the bot. Overall, though, Kohei Kurihara, CEO at data privacy community Privacy by Design Lab, said it’s important to promote awareness of privacy concerns. “There is a risk we need to take into consideration, even if it’s not apparent at first,” said Kurihara.

Others believe such AI tools may usher in a chaotic period as people come to grips with how any sensitive data they input may be assimilated and used by language-generating AI tools.

“A lot of this will be a time bomb,” said Craig Balding, author of the cybersecurity and AI newsletter and blog ThreatPrompt.com. “An employee will submit something, and then it later comes out in someone else’s completion. You could have your own password showing up somewhere else. Although there’s no attribution, there are identifiers like company names, well-known project names and more,” he added.

In its FAQ, OpenAI states explicitly: “please don’t share any sensitive information in your conversations,” and notes that it is not able to delete specific prompts from a user’s history.

OpenAI also states that, as part of its commitment to safe and responsible AI, it reviews conversations to improve its systems and to ensure the content complies with its policies and safety requirements. OpenAI’s privacy policy states that the personal information collected is used to conduct research, “which may remain internal or may be shared with third parties, published, or made generally available,” and to develop new programs and services, among other things.

So should companies simply trust that their employees are using this new tool in a way that doesn’t put important information at risk?

“Companies have always had to worry about data leakage and one of the common ways data leaks is through employees pasting company data into a website,” said Balding. “It’s not malicious … but you could have personally identifiable information getting pasted into ChatGPT.”

Experts aren’t saying that means ChatGPT should be banned. What it does mean, though, is that leaders need to remind employees about their data privacy policies and create a plan for how to use this new tool.

“Get ahead of it and agree how you want to use it,” said Balding. “Create a plan where incrementally you try new things. But also have red lines for a certain amount of time that say definitely do not do x, y and z.”

For many people it might be common sense not to put company or client information into these new AI tools; however, others might not fully understand the risks attached. Rachel Woods, founder of media and education company the AI Exchange, said the top questions on business leaders’ minds currently are around data privacy.

“There is a lot of excitement around how to use ChatGPT, but the technology is so new that there aren’t clear policies yet and that’s causing some confusion because people don’t know,” said Woods. “The goal of ChatGPT and why it’s free is to get public feedback and continue improving the models and their abilities. To do that, you need real use cases.”

One option Woods recommends is to check out OpenAI’s GPT Playground, where you can opt out of data sharing if you use it directly.

For people using ChatGPT daily for work, Woods suggests asking one question before opening the website: Are you okay with someone at OpenAI, and potentially other companies too, seeing that information? If that gives you pause, then it’s probably best not to put it in.

Andrew Obadiaru, chief information security officer at Cobalt, agrees: “Communications within the platform should be treated as public information.”

However, a company setting out clear policies can help both employees and the employer feel more confident about how ChatGPT is being used.

“This is a great opportunity for leaders and security teams to reevaluate their IT best practices when it comes to sharing information with third parties,” said Obadiaru.

An outright ban might make employees feel like they are losing an opportunity to learn a new technology that could be a job differentiator. For those who have run into data privacy roadblocks using ChatGPT at work, Woods noted an alternative: ChatGPT itself still shares data, but as of March 1, using it via the API by default does not.

“A lot of people want to learn it to be successful in this next wave,” said Woods. “When you’re banning it as a company, there is a good chance your competitors aren’t. Are you putting your team and organization at a disadvantage because you’re choosing to ban it instead of investing the energy to train your team?”