June 16, 2023

Q&A with EEOC commissioner: ‘Employers are liable for whatever employment decision is made using AI’

Thanks to the generative artificial intelligence boom, employers now have a wide variety of algorithmic decision-making tools available to assist them with employment decisions around hiring, recruitment, performance monitoring and more. To help make sense of it all, the U.S. Equal Employment Opportunity Commission (EEOC) – the federal agency that enforces workplace anti-discrimination laws – has released new guidance on the use of AI in the workplace.

The guidance, released in mid-May, discusses the potential adverse impacts of using AI if proper safeguards aren’t in place.

EEOC commissioner Keith Sonderling oversees how AI and workplace technologies are incorporated by organizations. He told WorkLife that ensuring employers don’t inadvertently flout existing civil rights laws by incorrectly implementing AI is one of his top priorities. We spoke to him to learn more about the importance of responsible AI, the decisions leaders face around how to use the technology, and more.

Answers were edited for clarity and flow.

We know that AI raises ethical concerns around bias, transparency, accountability and fairness. What are HR leaders doing to understand how far these ethical implications can go, especially in the absence of comprehensive AI regulation? 

The absence of regulation specific to AI use, whether it’s broad AI use or use in HR, is a distraction. It doesn’t matter, because for HR professionals and businesses who want to use this software now, there are longstanding laws that apply. Title VII of the Civil Rights Act, which really governs most of the day-to-day of HR, prevents employment discrimination and promotes equal opportunity – all the big-ticket items we deal with here at the EEOC.

Laws have always been on the books and continue to be on the books, and they continue to apply equally, whether employers use humans for decision making in HR or delegate it to AI. That’s why it’s so important for HR professionals to understand that everything they know, everything they’ve been trained on with regard to complying with HR laws and anti-discrimination laws, applies equally in this context. AI tools are just helping decisions be made faster, more efficiently, more economically and at a greater scale.

Is it a heavy lift for HR leaders to understand this new technology? 

Employers are using AI to write job descriptions, screen resumes, chat with applicants, conduct job interviews, predict if an applicant will accept a job offer, track productivity, assess worker sentiment, and decide who gets upskilling and reskilling opportunities. 

"It’s not reviewing documents. It’s not doing routes for deliveries. It’s making decisions on people’s livelihoods. It’s such a higher standard that the law holds to when you’re dealing with civil rights. The awareness of that is critical."
Keith Sonderling, EEOC commissioner.

But what’s really difficult here is that employers are ultimately liable for whatever employment decision is made. Whether a discriminatory hiring decision is made by somebody injecting their own personal biases against a protected class, or by a computer working from a job description that’s not relevant and unintentionally screens out certain individuals, the employer is going to be liable.

That’s why it’s so tricky for employers who want to go all in on AI technology in HR, because like any bad employment decision, you really need to know: how did we get there? Did somebody intend to discriminate, or was it a neutral policy that had those discriminatory results? With AI it’s not very different.

The EEOC recently issued guidance to ensure employers aren’t violating Title VII of the Civil Rights Act of 1964. Will there be more guidance to come, and what role will the EEOC play in ensuring responsible AI?

Regulating AI is very challenging because of the ever-changing technologies and the current interest in it. On one hand, state and local governments that want to regulate it should be commended for really trying to take up the complicated issue of algorithmic discrimination. But at the same time, they potentially create a patchwork of local and state laws that may be conflicting, that may make it very hard for employers to implement and may dissuade employers from using technology that can really benefit the workforce.

In New York City, for instance, there are certain audit requirements if you’re using AI for hiring or promotion. But they cover only certain categories, like race, sex and ethnicity, whereas federal law would require you to look at all the protected characteristics: age, national origin, religion, disability, sexual orientation and so on. So if you’re going to be doing it there, you can’t have a false sense of security just because you’re complying with New York’s very limited audit for promotion and hiring, versus the federal government, which looks at A to Z of the employment relationship.

The other bigger issue is that with everything going on in Congress and all we’re hearing about potential AI laws, legislation, new government bodies, everything that’s going on in the EU [European Union] – that is a huge distraction. If you’re using AI in HR right now, laws [already] apply. You may say, ‘Well, I’m okay using some of this technology now because Congress hasn’t made an AI body. Congress hasn’t made rules yet.’ But every single use of AI in HR is already governed by the EEOC, with longstanding compliance obligations the EEOC has been talking about since our existence.

My plea to employers is to figure out how you are going to comply with these longstanding laws for each use case of AI. I’m raising awareness of this because the liability exists no differently than if a human were making the decision.

How much is AI an opportunity versus a threat?

There are a lot of ways AI can eliminate bias, because you’re not having a human inject their own potential bias; instead, you’re using actual parameters you set the AI to look for, such as skills or the ability to perform the job. That’s a really good thing. It can help employers be more transparent and say, ‘Here’s what we put into the computer, here are the skills we need for this job at this location and here’s how we use machine learning to help us get there.’ If you’re not using this technology, you’re staying where you were with human decision-making processes, where we don’t really know what people are basing hiring decisions on.

But at the same time, if the AI is not properly designed or not properly used, some of those existing biases can be replicated, and it can discriminate at a far greater scale than any individual HR professional.

There are a lot of promises and a lot of ways it can make decision making better. But if you’re using parameters that will likely lead to discrimination instead of actual job qualifications, there can be significant harm as well. For each use, you have to weigh the benefits against the potential negative impacts and balance them with compliance with longstanding civil rights laws.

If you’re thinking about using this, before you ever let it make a decision on someone’s livelihood, get assurances from vendors that they are going to help you implement it, and train the employees who have access to it, in a way that does not violate the law. At the end of the day, the employer has the responsibility, so it’s critical to make the right decision about what to use.

What advice do you have for companies that are struggling with either understanding or implementing AI?

When deciding on a vendor, ask questions like: what are you going to use it for, and how are you going to ensure compliance with the laws for each use? Beyond that, what are you going to do internally at your company to have the guardrails saying, ‘Here are the policies and procedures around how to use this AI; you can only use it for a lawful purpose, only if you’ve been trained, and only within the parameters the company set’? Are you going to get the support not only to buy it, but to build an infrastructure around it so those policies are in place?

You have to have testing to make sure it’s not discriminating. Employers who do that will be in a much better position than those who just buy the software and let it go. This isn’t reviewing documents. It’s not planning delivery routes. It’s making decisions about people’s livelihoods. The law holds you to a much higher standard when you’re dealing with civil rights. The awareness of that is critical.
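
The testing Sonderling describes has a concrete starting point: the EEOC’s May 2023 guidance discusses the longstanding “four-fifths rule,” a rule of thumb under which a selection rate for one group that is less than 80% of the rate for the most-selected group may indicate adverse impact. Below is a minimal sketch of such a check in Python; the group names and counts are hypothetical, and the guidance itself cautions that the rule is only a starting point, not a substitute for a full statistical analysis covering every protected characteristic.

```python
# Minimal sketch of a "four-fifths rule" selection-rate check.
# All group names and counts below are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / applicants."""
    return {group: sel / apps for group, (apps, sel) in outcomes.items()}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose rate falls below 80% of the highest group's rate."""
    top = max(rates.values())
    return {group: rate < 0.8 * top for group, rate in rates.items()}

# Hypothetical screening results: (applicants, selected) per group.
outcomes = {
    "group_a": (200, 60),  # 30.0% selection rate
    "group_b": (180, 40),  # ~22.2% selection rate -> flagged
    "group_c": (150, 30),  # 20.0% selection rate -> flagged
}

rates = selection_rates(outcomes)
for group, flagged in four_fifths_flags(rates).items():
    status = "potential adverse impact" if flagged else "ok"
    print(f"{group}: rate={rates[group]:.1%} -> {status}")
```

Run regularly against real screening outcomes, a check like this is the kind of ongoing audit that puts an employer in a “much better position than those who just buy the software and let it go.”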