On the Job   //   September 10, 2025

EY’s Simon Brown on preparing HR departments for the agentic AI revolution

As AI evolves from simple automation to sophisticated autonomous agents, HR executives face one of the most significant workforce transformations in modern history. The challenge isn’t just understanding the technology — it’s navigating culture change, skills development and workforce planning when AI capabilities double every six months.

Simon Brown, EY’s global learning and development leader, has spent nearly two years helping the firm’s 400,000 employees prepare for an AI-driven future. Drawing on his past experience as chief learning officer at Novartis and his work with Microsoft, Brown offers critical insights on positioning organizations for success in an autonomous AI world.

What are the top questions C-suite executives need to ask their teams about agentic AI initiatives?

Are people aware of what’s possible with agents? Are we experimenting to find ways agents can help us? Do we have the skills and knowledge to do that properly?

But the most critical question is: Is the culture there to support this? Most organizations are feeling their way through which tools work, what the use cases are, what drives value. There’s a lot of ambiguity. Some organizations manage well through uncertainty; others need clear answers and can’t fail — that’s hard when there’s no clear path and people need to experiment.

How can leaders assess whether their organization has the right culture for agentic AI?

Look at how AI tools like Microsoft Copilot are being embraced. Are people experimenting and finding productivity value, or are they threatened and not using it? If leaders are role modeling use and encouraging their people, that comes through in adoption metrics. Culture shows through communication, leadership role modeling, skill building and time to learn.

What are common blind spots when executives evaluate AI readiness?

Two major issues. First, executives often aren’t aware of what’s possible with the latest AI systems due to security constraints and procurement processes that create 6-to-12-month lags.

Second, the speed of improvement. If I tried an AI tool a month ago versus today, I might get a completely different experience because the underlying model improved. Copilot now has GPT-5 access, giving it a significant overnight boost. Leaders need to shift from thinking about AI as static systems upgraded annually to something constantly improving and doubling in power every six months.

How should leaders approach change management with AI agents?

Change management is essential. When OpenAI releases new capabilities, everyone has access to the technology. Whether organizations get the benefit depends entirely on change management — culture, experimentation ability, skills and whether people feel encouraged rather than fearful. We’re addressing this through AI badges, curricula, enterprise-wide learning — all signaling the organization values building AI skills.

What’s your framework for evaluating whether AI investment will drive real business value?

I think about three loops. First, can I use this to do current tasks cheaper, faster, better? Second, can I realize new value — serving more customers, new products and services? Third, if everyone’s using AI, how do we reinvent ourselves to create new value? It’s moving beyond just doing the same things better to what AI helps us do differently.

How should HR leaders rethink workforce planning given AI’s potential to automate job functions?

Understand which skills AI will impact, which remain uniquely human and what new roles get created. The World Economic Forum predicts significant reductions in certain roles but a net increase in jobs overall. We’re seeing new, more sophisticated roles created that move people higher up the value chain.

From HR’s perspective, are our processes still fit for AI speed? How are we incentivizing reskilling? Are we ensuring learning access and time? How are we signaling which skills are in demand versus at risk of automation?

How should HR measure success after implementing agentic AI?

Tie success back to why it was implemented: business value. Use the same metrics as before, but look at what changed. Maybe it’s the same output but cheaper, faster, better. Or new capabilities: our third-party risk team uses agents to provide much more extensive supplier analysis than before. Same team size, more client value.

What’s your timeline perspective on when agentic AI becomes a competitive necessity versus an advantage?

That’s the ultimate question. I’m amazed daily by what I achieve using AI and agents. GPT-5’s recent capabilities are mind-blowing, suggesting dramatic impact quickly. But when deep AI experts hold vastly different views, from AGI being around the corner to decades away, it’s understandable why leaders struggle to navigate this landscape.