The AI impostor: How fake job candidates are infiltrating companies

In an alarming trend reshaping corporate hiring, businesses are increasingly facing sophisticated AI-generated job candidates — complete with fake IDs, fabricated work histories and AI-powered interview responses.
Security experts warn this isn’t just about polished resumes; it’s a fundamental shift in cybersecurity threats that could leave companies vulnerable to infiltration and data breaches.
According to Gartner predictions, by 2028, one in four job applicants globally will be fake, largely driven by AI-generated profiles.
“As generative AI becomes more sophisticated and more widely available to the general public, the line between authentic and AI-generated content is becoming more blurred than ever before,” said Matt Moynahan, CEO of GetReal Security. He notes that fraudsters are deploying real-time deepfake video technology during virtual interviews that can match facial expressions and lip-sync with remarkable accuracy, while voice cloning technology can simulate accents and intonations from just minutes of audio samples.
HR departments — many of them understaffed and often working with outdated technology — have become prime targets. “Often thought of as the weakest part of the organization, HR departments have now become the focal point for hackers to obtain access to an organization,” Moynahan said, explaining that coming through the “front door” via the hiring process is frequently easier than breaking through infrastructure vulnerabilities.
This represents a fundamental shift in cybersecurity strategy. Before generative AI, cybersecurity focused on protecting information and infrastructure by building secure barriers around privileged information. Human employees were seen as the last control against attacks, using their senses to visually confirm communications were legitimate. “This is no longer possible as workers are now unable to trust their senses,” Moynahan said.
The consequences can be catastrophic. Moynahan says that at one company, lookalike candidates and fake interviewers conspired to get a fraudster hired twice, a scheme he said is playing out at virtually every Fortune 500 company. He warns that such breaches can result in capital losses, leaked customer data and ransomware attacks that take the form of reputational extortion against executives and the corporate brand.
To combat these threats, Moynahan's company has developed solutions including a tool for on-demand verification of audio, video, voice and image files, and another that prevents impersonation and deepfake attacks in real time by analyzing streaming audio and video.
Serena Huang, DEI and AI expert and author of The Inclusion Equation: Leveraging Data & AI for Organizational Diversity and Well-Being, warns that these AI-generated deepfakes create an environment “where it’s harder to trust what you see and hear, increasing the risk of costly hiring mistakes and data breaches.” As a former people analytics executive and chief data officer at companies including GE, Kraft Heinz and PayPal, Huang understands the dual impact: wasted employer resources and eroded trust in virtual hiring.
Among Huang’s recommendations: human-centered solutions including secure identity verification, behavior-based assessments that require critical thinking, training hiring teams to recognize red flags, and maintaining fairness and privacy throughout the process.
Lauren E. Aydinliyim, a professor in the business department at Baruch College, points out the irony that AI was first introduced by businesses to improve recruitment processes but has come to enable candidates to game the system. She notes the ethical challenge of distinguishing between legitimate AI enhancement — using technology to present oneself more effectively — and outright fabrication of experience or credentials.
Aydinliyim suggests that transparency and disclosure could be part of the solution, much like academic requirements for disclosing an AI assist. “Just as academics are required to note AI usage in their research … candidates should be encouraged to disclose when they’ve used AI tools in job applications or interviews,” she said.
In a world where it’s getting harder and harder to tell the authentic from the phony, Moynahan sees determining the “realness” of something as perhaps “the paramount challenge of our day.”