Businesses that stumble blindly into using artificial intelligence, without vetting the source of its information or ensuring it's used ethically and responsibly, do so at their peril. Most are trying to put guardrails in place, but governments are still playing catch-up when it comes to passing new legislation that accounts for the latest developments.
New laws are being drafted by the European Union and, in the U.S., by individual states including New York and Illinois (there is no federal AI regulation), but these are a long way from covering all the bases and will take time to be fully ratified.
That’s why the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency that enforces workplace anti-discrimination laws, is pressing companies to turn to existing laws to help guide their application of AI. First up: the pithily named Title VII of the Civil Rights Act of 1964.
Here’s what you need to know.
WTF is Title VII of the Civil Rights Act of 1964?
Title VII of the Civil Rights Act of 1964 is focused on preventing discrimination against job seekers and workers. It protects employees and job applicants from employment discrimination based on race, color, religion, sex and national origin. It is one of the most important employment laws today when it comes to workplace discrimination, which means most human resources departments are already familiar with it. All employees are protected under Title VII.
Keith Sonderling, EEOC commissioner, describes it simply: “Title VII of the Civil Rights Act, which really governs most of the day-to-day of HR, prevents employment discrimination and promotes equal opportunity – all the big ticket items we deal with here at the EEOC.”
This law is from nearly 60 years ago. How can we apply it to AI today?
“It’s making the connection to things that already exist that we can leverage,” said Cliff Jurkiewicz, vp of strategy at global HR tech company Phenom. “The government can take a really long time to regulate or create or update laws. Sonderling is saying that’s not necessarily the path that is going to get us where we need to go.”
The EEOC guidance released in mid-May makes that connection, describing how to apply key established aspects of Title VII of the Civil Rights Act to an employer’s use of automated systems, including those that use AI. It discusses adverse impact, a key civil rights concept, to help employers prevent the use of AI from leading to discrimination in the workplace. Nicolette Nowak, associate general counsel for AI ethics & regulation, data privacy & information security at Beamery, said workplace leaders have felt confused by the introduction of new technology with no laws regulating it. That’s where the guidance comes into play.
“There’s been a demand or call for new legislation to regulate AI use, but Sonderling is trying to remind everyone it’s actually already regulated,” said Nowak.
“It gives us a starting point to say no matter what the outcome is, no matter what tool you’re using, these laws still persist,” said Jurkiewicz. “We need to be cognizant of that.”
Even though the law is from nearly 60 years ago, Sonderling, Jurkiewicz and Nowak, along with other workplace experts, argue that it’s more relevant and applicable than ever.
“The absence of an AI regulation specific to AI use, whether it’s broad AI use or use in HR, is a distraction,” said Sonderling. “It doesn’t matter because for HR professionals and businesses who want to use this software now, there are long standing laws that apply.”
What do companies need to know?
Most HR professionals should already be extremely familiar with Title VII of the Civil Rights Act of 1964, which means understanding its application to AI won’t be a big lift for them. The key, Nowak said, is understanding that companies aren’t off the hook just because a computer made the decision that violates a law or regulation. “It’s a reminder to people that maybe we don’t need more regulation at the moment, we currently have the protections in place,” said Nowak.
“My plea to employers is to figure out how you are going to comply with these longstanding laws for each use case of AI,” said Sonderling. “My raising awareness of this is coming from the place that the liability exists no different than if a human is making the decision.”
Jurkiewicz argues that how well an organization grasps this legislation and its application to AI depends on its level of resistance to change, with some leaders believing that a law that’s 60 years old is no longer relevant today.
“At the core, it’s human civil rights,” said Jurkiewicz. “There’s no piece of technology anywhere that is going to outweigh that. If a company is challenging that, I would say there is a fundamental problem that exists regardless of what law we’re talking about.”
Companies that are pushing to be responsible will most likely lean on the guardrails this legislation puts in place. Meanwhile, many private companies, Microsoft among them, are also publishing their own guiding principles. It’s an opportunity to take what we know from Title VII of the Civil Rights Act of 1964 and apply it to those newer frameworks.
The EEOC’s technical assistance document is part of its AI and algorithmic fairness initiative, which works to ensure that software, including AI, used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.
“Laws have always been on the books and continue to be on the books, and they continue to apply equally, whether employers use humans for decision making in HR or if they delegate it to AI,” said Sonderling. “That’s why it’s so important for HR professionals to understand that everything they know, everything they’ve been trained on with regards to complying with HR laws and anti-discrimination laws, apply equally in this context.”