WTF is agentic AI? (and what HR leaders need to know)
Had enough of AI hype? Tough.
For a while, 2025 was shaping up to look like a period where everyone (at least in the business world) took a collective breath and set an AI path that was less driven by hype and threats of “don’t adopt at your peril,” and more anchored in assessing ROI.
But it was only a matter of time.
Digital tech has always been susceptible to buzzwords, and now that generative AI has bedded in as mainstream vocabulary, it’s being shoulder-barged out of the way by the latest term du jour: “agentic AI.”
So if you’ve seen the term increasingly pop up but aren’t fully sure what it is, or even whether you need to know – you’re not alone. Here’s an explainer.
What exactly is agentic AI, and why is it suddenly being discussed?
Agentic AI refers to a type of AI that is designed to take independent action, rather than just responding to commands given by humans. The term was coined by Andrew Ng, a renowned AI researcher, in June. Big tech companies and vendors have begun to push it more aggressively. The end goal is for these so-called “AI agents” to carry out complex tasks on behalf of humans, with minimal human oversight – the belief being that this will lead to better organizational efficiency and employee productivity.
How does it dovetail with generative AI?
To understand it fully you need to think about all types of AI and what they do, at their core.
So-called traditional AI completes tasks based on predefined rules and patterns (set by humans). It is not adaptable and doesn’t learn or improve from new data (think calculators, logic-based chess programs, or a chatbot with scripted responses). Then you have machine learning (ML) – a subset of AI – which involves training a model on large datasets to identify patterns and make decisions. It is adaptable and improves as it processes more data. Generative AI is a subfield of ML. It uses models trained on vast datasets, such as large language models, to create new content: text, images, music or code. Think of it as the creative kind of AI, versus ML’s analytical kind.
Now, we have agentic AI, which takes the components of traditional and generative AI and uses them to anticipate needs autonomously and execute against them. It can adapt dynamically, learning from experience and adjusting future actions based on new information. And it can be proactive, rather than waiting for human commands like other forms of AI.
Give an example of what an AI agent can do for an HR practitioner.
Like generative AI, agentic AI is being touted as an employee productivity enhancer: it automates dull, time-consuming routine tasks, freeing people up to focus on more strategic work. “It’s important HR leaders understand the impact of agentic AI on workforce performance and well-being because it introduces new dynamics to the employee experience and how work is managed and executed,” said Jay Patel, svp and general manager of Webex Customer Experience Solutions at telco giant Cisco.
Be more specific, let’s have an example.
Take learning and development. Agentic AI can benefit employees’ career development by offering more individually tailored training and development programs.
This is how Jill Goldstein, global managing partner for HR and talent transformation at IBM Consulting, sums it up: Traditional AI will recommend training courses based on an employee’s role and past performance. Generative AI can develop custom learning modules and quizzes, and lets the employee interact with that content based on their specific needs. Agentic AI manages that employee’s personal learning plan and continually mines for more information. It adjusts the individual’s learning path as new information becomes available, and provides real-time performance data and feedback that then funnels back into the AI to help that employee advance their skills. “So the first one [traditional AI/ML] executes, generative AI helps to customize the content, and the agentic AI builds on both of that and helps me problem solve and actually produces the outcomes associated with it,” she added.
It should also free up HR teams from their own time-consuming administrative tasks, so they can give more attention to strategic planning around employee engagement, personalized support and productivity – for instance, by automating responses to common HR-related inquiries like leave policies, payroll information, or training opportunities. Cisco claims HR professionals can save an estimated 2.5 hours a week this way.
So why do HR leaders need to know about, and prepare for, agentic AI capabilities?
Well, it’s kinda happening already, regardless. The technologies already integrated within organizations are adding agentic AI capabilities, just as they did with generative AI. So it makes sense to be informed about what it can do before it’s readily available. On Dec. 9, Amazon unveiled a new research and development lab in San Francisco, focused purely on building foundational capabilities for AI agents. Visier claims its generative AI product Vee is used by over 20,000 companies, and it has now launched its agentic AI platform. So the building blocks are already there within HR tech stacks.
Also, like most professionals, HR leaders will need to sharpen their tech chops continuously over the coming year and beyond. “To be able to interact with those ecosystem providers, plus help the workforce more broadly, they [HR leaders] have to invest in increasing their technical acumen,” said Goldstein. “You need to increase that so you can have an intelligent conversation with those people around you, including your own CIO,” she added.
So start by doing an inventory of your existing tech. “CHROs probably have more in their toolbox already than they realize,” said Goldstein.
Increased autonomy always goes hand in hand with greater risk though, right?
Right. A so-called AI agent may need to go outside an organization’s firewall, because it will need wider access to data to operate at full capacity. For instance, an AI agent might need to access market data to define a successful career path for a particular role, and then automatically include that in the learning path of an employee who wants the agent to create a development plan for them. Going outside an organization’s firewall always increases risk, and minimal human oversight heightens it further.
How do you mitigate that?
Stay on top of regulation and compliance, particularly around whether external data may be mined. And be clear, as an organization, about when AI can be used and when it can’t. “It’s critical for organizations to mitigate inappropriate uses of AI as it may enable more advanced scams, convincing deep fakes, and more,” said Patel.
Above all, be transparent with employees. “Transparent practices, such as informing users when AI is being used to make decisions, is essential to building trust in the use of the technology and empowering users to make informed decisions around how they interact with technology that leverages AI,” added Patel.
Largely, it will come down to creating a well-rounded organizational culture around using AI ethically, Goldstein stressed. At IBM, employees are clear on where the company’s red lines are when it comes to the use of AI, because they’re displayed all over the company website and embedded in training.
“We commit to human in the loop,” she said. “We also commit to actively providing use cases. As an employee, I have visibility into all the use cases that have gone before our ethics board. So for example, I can see [on the website] that IBM has authorized the use of AI for matching employees with roles for internal learning development purposes, but not for recruiting purposes. And because I have visibility to that, I also understand the governance around it, I understand the commitments, and so I have a little bit more trust. So it is more than just training employees. It is around developing a culture around responsible AI.”