How ChatGPT and other AI tools are taking phishing to the next level
ChatGPT and other generative AI language tools like it may have a phishing problem, experts warn.
Workers have spent the past few months talking about ChatGPT and how it can take their jobs to the next level by boosting their efficiency. A number of AI tools are indeed speeding up work processes. But what if the same tools that are helping workers are also being used to create next-level phishing attacks that could end up in your work email?
Cybersecurity experts say HR and tech departments need to understand the dangers of hackers using AI tools. AI-assisted phishing emails could be harder for employees to spot, because someone’s tone of voice can easily be replicated, and there could be more of them, because attacks can now be run at a larger scale than ever before.
“HR departments are always concerned about the latest cybersecurity threats and how they might affect their employees, but ChatGPT could make it even harder for employees to detect phishing scams as scammers become more sophisticated in their tactics,” said Grace He, people and culture director of teambuilding.com.
When you ask ChatGPT to write you a phishing email, it refuses: “I’m sorry, but as an AI language model, I cannot generate content that promotes illegal or unethical activities. Phishing emails are illegal and can cause harm to individuals and companies. It is important to remember that phishing is a criminal offense and can result in serious consequences.”
However, if you ask it to write a warm, friendly and convincing email to employees asking them to download and install an important update to their computer’s operating system, it has no problem complying: “I hope this email finds you well. As a valued member of our team, we want to ensure that you have the best experience possible when working on your computer. That’s why I wanted to reach out and let you know that there is an important update available for your computer’s operating system.
“This update includes critical security enhancements and bug fixes that will improve your computer’s performance and protect your data. To take advantage of these benefits, we strongly recommend that you download and install the update as soon as possible.
“I have included detailed instructions and a link to the update in this email. If you have any questions or concerns, please do not hesitate to reach out to me or the IT department for assistance.”
Some people might realize right away that the message doesn’t sound right and flag it to their HR or tech department. Others, however, might trust it, because for the most part it does sound professional and serious. And this is just a first draft, which the scammer can edit further.
Greg Hatcher, founder and CEO of cybersecurity firm White Knight Labs, said the technology is especially helpful for scammers whose first language isn’t English.
“ChatGPT can get scammers 10% or 20% of the way there,” said Hatcher. “For example, it can help them write a friendly email saying that your compliance training is out of date. It can automate the easy stuff.”
Benjamin Netter, a cybersecurity expert and the founder of Riot, a cybersecurity training platform for employers and employees, investigated an earlier version of ChatGPT, in collaboration with OpenAI, to see how hackers would use it to exploit employees. “Writing personalized emails isn’t necessarily new,” said Netter. “What’s new is the scale. You couldn’t make tailor-made attacks at scale in the past.”
But with ChatGPT, it’s easy to input information about a person and ask it to output a polished email tailored specifically to them.
“Most of the time with phishing emails, the tone is not perfect, that’s usually what reveals an issue,” said Netter. “If I get access to someone in the HR department’s emails from a phishing attack, I can use all of the emails that they sent and train ChatGPT to replicate that tone of voice very easily. Then you get something close to a perfect tone of voice that you can use to ask for other employees’ personal information on their behalf.”
Netter’s second concern is how other AI tools, like generative audio, are being used. Eleven Labs, for example, generates top-quality spoken audio in any voice and style, rendering human intonation and inflection and adjusting delivery based on context. The company advertises the tool for storytelling, content creators, video game developers and audiobooks. However, it can also help scammers.
“These computers can train on your voice by finding a couple of videos that you’re speaking in,” said Gregory Pepper, security architect at Check Point Software Technologies. “I’ve done multiple presentations, so my voice footprint is on the internet. Is there a possibility that someone’s AI might be rendering my voice to fake a call to a customer, prospect, partner or employee? Potentially.”
Netter says the days when phishing arrived only by email are over, partly thanks to AI. Scams are now happening on WhatsApp and other platforms where you can receive audio, and because it’s audio, it’s easier to believe. However, Pepper recommends listening for whether the speech sounds natural, because AI-rendered voices don’t always sound the best.
“Although, I can only imagine the technology is going to further improve and get optimized and enhanced,” said Pepper.
That said, HR and tech departments need to prepare for the next-level phishing tactics emerging alongside new AI tech. “To combat this, business leaders must educate employees on the security implications of ChatGPT, and how to spot potential attacks,” said Ron Reiter, co-founder and CTO of data security company Sentra. “Employees should be especially critical of text and never assume that something is coming from an authentic source.”
He says that instead of blindly trusting anyone, employees should put their trust in other mechanisms, like paying close attention to whether an email came from the company’s server or whether it includes the proper signature. In other words, employees should rely on means of verification beyond the style of the text itself, along the lines of the sketch below.
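To make that concrete, here is a minimal Python sketch of the kind of header checks Reiter is describing. It is illustrative only; the company domain, function name and sample message are all hypothetical. It tests two signals that don’t depend on how the text reads: whether the From address actually uses the company’s domain, and whether the receiving mail server recorded passing SPF and DKIM results in the Authentication-Results header.

```python
from email import message_from_string
from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"  # hypothetical company domain


def looks_authentic(raw_message: str) -> bool:
    """Rough first-pass checks on an inbound email's headers.

    These signals are independent of the message's writing style:
    the sender's domain, plus the SPF/DKIM results the receiving
    mail server stamped into the Authentication-Results header.
    """
    msg = message_from_string(raw_message)

    # 1. Does the From address actually use the company's domain?
    _, from_addr = parseaddr(msg.get("From", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    if from_domain != COMPANY_DOMAIN:
        return False

    # 2. Did the mail server verify the sending server (SPF) and
    #    the message signature (DKIM) when it accepted the message?
    auth_results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in auth_results and "dkim=pass" in auth_results


# Hypothetical example: a message claiming to be from IT support.
raw = """\
From: IT Support <it-support@example.com>
Authentication-Results: mx.example.com; spf=pass; dkim=pass
Subject: Important update for your operating system

Please install the linked update as soon as possible.
"""
print(looks_authentic(raw))  # True -- headers check out, but stay skeptical
```

Even checks like these aren’t foolproof: a scammer sending from a compromised internal account, as in Netter’s scenario, would pass them. That is why Reiter’s advice to never assume a message comes from an authentic source still applies.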
“Security-awareness training has to be on everyone’s radar,” said Pepper, who recommends it happens quarterly. “I don’t care if you’re a school district, a big bank, or a security company. We’re all potential targets for bad threat actors. We have things they want, whether it’s information or computing resources. It only takes one person to accidentally click on the wrong thing.”