August 10, 2023

Why Zoom’s AI privacy snafu should be a lesson for other companies

Zoom has made waves this week: first for its return-to-office (RTO) mandates, and second for a potential AI-related data privacy oversight.

Like many other companies this year, Zoom has upped its AI capabilities. In March, it updated its policies to allow broad access to user data to train its AI models. That change drew intense scrutiny when it was reported over the weekend, sparking questions and alarm from its customers and data privacy advocates. 

The big question is: Should we be able to opt out of having our data used to train generative AI systems? AI needs data to train on and get smarter, but which data should be used for that?

Zoom’s new AI features

The company launched some of its AI-powered features earlier this year, which let clients summarize meetings without having to record an entire session. Plenty of workplace tools already offer something similar; Otter.ai’s OtterPilot, for example, is an AI meeting assistant that can join meetings on Zoom, Google Meet or Microsoft Teams.

Zoom’s features include Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, which are offered on a free trial basis. Zoom account owners and administrators control whether to enable these AI features for their accounts.

The utility is compelling. Say a team member joins a Zoom meeting late: they can ask Zoom IQ to summarize what they’ve missed in real time and ask follow-up questions. Or if they need a whiteboard session for their meeting, Zoom IQ can generate one based on text prompts.

Generative AI’s ability to speed up tasks and improve work efficiency is clear. But ensuring it is used ethically and responsibly, by both organizations and individuals, is less clear-cut. And that’s exactly what Zoom got grilled for.

“There are a lot of positive benefits of integrating AI with their [Zoom] platform, but not at the expense of consumer privacy,” said Jeff Pedowitz, an AI expert and author.  

Privacy concerns grow

What data is that AI model being trained on? Are our conversations on Zoom being saved? 

These questions quickly bubbled to the surface, and they are highly pertinent given that so many employees are still working remotely and confidential information is being shared over the platform. Some experts warned the original wording of Zoom’s terms of service could have allowed Zoom to access more user data than needed, including from customer calls.

The fact the features were offered as a free trial alarmed privacy experts. By default, however, customers aren’t enrolled in the free trial. Those who enable Zoom IQ Meeting Summary can deselect data sharing if they wish, and when Zoom IQ is enabled there is an in-meeting notification, similar to the one shown when a recording starts.

“If you’re having a call with a prospect, client, or partners that do not work for you, you have to get permission from everyone that is participating in the meeting before you proceed,” said Pedowitz. “It’s the same thing as asking permission to record; 90% of the time it’s not an issue, but sometimes it is. Then you respect their wishes.”

Transparency from Zoom

In an emailed statement, a Zoom spokesperson said: “Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes. We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”

Account owners and administrators can choose whether to turn on the features, which are still available on a trial basis. People who turn them on will “be presented with a transparent consent process for training our AI models using your customer content,” according to Zoom’s latest blog post on the matter.

Zoom CEO Eric Yuan sought to quell the controversy Tuesday night with a LinkedIn post, writing that he wanted to “set the record straight.”

“It is my fundamental belief that any company that leverages customer content to train its AI without customer consent will be out of business overnight,” he wrote in the post. “Given Zoom’s value of care and transparency, we would absolutely never train AI models with customers’ content without getting their explicit consent.”

He reiterated that customers who use either of the two new generative AI features, which have been available for two months, will see the prompt to opt in. He said the earlier change to the terms of service in March stemmed from an internal process failure.

“Let me be crystal clear – for AI, we do NOT use audio, video, screen share, or chat content for training our AI models without customer explicit consent,” he wrote in the post.

What it means for the future of AI and data usage

The complaints have forced Zoom to provide some transparency around how and when it is using people’s data, but they also raise a tricky subject for other businesses looking to train their AI models.

For example, Google says AI systems should be able to mine publishers’ work unless publishers opt out.

“All big tech companies are using our data in one way, shape or form, to either improve these features or functions or to sell better targeted advertising, but there is ethics involved and it starts with consent,” said Pedowitz. “You can’t take shortcuts. As any company continues to innovate, it should be grounded in ethics and morals. Let that be a guide post.”

Cliff Jurkiewicz, VP of strategy at global HR tech company Phenom, says privacy will continue to erode, but that it’s crucial for companies to work together to figure out the most responsible way to handle the emergence of this new technology, especially when it is moving so fast that legislation can’t keep up to provide safeguards.

"Let’s work together to figure out a balance between feeding more data to the AI while protecting sensitive conversations.”
Cliff Jurkiewicz, vp, strategy at global HR tech company Phenom.

“If it’s training their model, and it is sensitive data, that is a concern and we should know about that,” said Jurkiewicz. “Companies have a responsibility to be a little more specific about what types of data they’re going to be using. What exactly are they training models on?”

Jurkiewicz says models will only get better, and more accurate, if we give them more data and content to train on.

“Either we expect these tools to get better so we can be more efficient and more productive, or we don’t, and all we can do is demand and work with these companies to say ‘what you’re providing is not quite good enough, so let’s work together to figure out a balance between feeding more data to the AI while protecting sensitive conversations,’” said Jurkiewicz.