Most managers use AI for their top personnel decisions — here’s why that’s a problem

Three out of five people managers now rely on AI to make the most pressing decisions about their direct reports — not just basic administrative tasks.
According to resume service Resume Builder, 78% of those managers are using AI to determine raises, 77% for promotions, 66% for layoffs and 64% for terminations. What's more, some 1 in 5 frequently let AI make final decisions without any human input at all.
The quick adoption of AI in people management represents a fundamental shift in how workplace decisions get made, but as the numbers suggest, it's happening largely without proper guardrails. Resume Builder's survey of 1,342 people managers in the U.S. revealed that two-thirds of those who use AI to manage employees never received formal AI training, even as nearly half are tasked with assessing whether AI might replace direct reports altogether.
“It’s essential not to lose the ‘people’ in people management,” says Stacie Haller, chief career advisor at Resume Builder. “While AI can support data-driven insights, it lacks context, empathy and judgement. AI outcomes reflect the data it’s given, which can be flawed, biased or manipulated.”
Companies have encouraged people managers to use AI to boost efficiency, enable faster decision-making, reduce overhead, and support data-driven insights that enhance productivity and scalability. Yet the rush to automate is surfacing new risks that organizations may not have fully considered.
Cleo Valeroso, vp of people at AI Squared, a company that helps organizations integrate AI into their operations, has seen that challenge firsthand. “When managers use AI without understanding how it works or where it can go wrong, they tend to just trust the process,” she says.
One of the most common issues Valeroso has seen is blind faith in resume screening or ranking. For example, an AI system or tool will spit out a top 10 list of candidates, which then becomes the shortlist. “No one questions how the list was generated, what data it prioritized or what patterns it learned from,” she explains.
That kind of unquestioning reliance can perpetuate biases in ways that are not immediately obvious. For example, Valeroso describes seeing hiring algorithms consistently prioritize candidates with certain job titles or employers, excluding those who took alternative but hardly disqualifying career paths. "These tools are only as good as the data they're trained on," she says. "If the historical data says, 'Here's what a strong performer looks like,' the system will mimic that, flaws and all."
The solution is not to abandon AI but to approach it with the same critical thinking one would apply to any other business tool, according to experts. “You wouldn’t hand someone a financial model and tell them to approve a budget without understanding the assumptions,” Valeroso says. “The same logic applies here. If we don’t invest in training, we’re just outsourcing judgment, and that never ends well.”
While employers, fearing legal liability or employee pushback, may resist transparency about their AI usage, Valeroso argues that can also backfire. “The greater risk comes from not saying anything,” she says. “Silence creates confusion, and confusion leads to mistrust. And once you lose employee trust, it’s a much harder and more expensive thing to rebuild.”
Effective communication doesn’t require revealing every technical detail, but it does mean being able to answer basic questions about how AI is being used and ensuring that employees know real people remain in the loop for important decisions. As Valeroso warns, “When companies don’t communicate this proactively, employees will fill in the blanks themselves, often with worst-case scenario assumptions.”
As Haller explains, “Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture and maintain trust among employees.”