Technology // April 17, 2023 // 6 min read

What to know about NYC’s soon-to-be-enforced HR AI bias law

It’s no secret that artificial intelligence is revving up, especially with ChatGPT giving everyone a taste of what generative AI can do for both employees and employers. 

However, it has raised plenty of questions too, such as how to ensure that AI is not biased, which is particularly important when it is used to make hiring and recruiting decisions. Fortunately, lawmakers have been getting ahead of this.

Hiring tools have become increasingly automated over the years, which is why Local Law 144, a law regulating NYC employers’ use of automated employment decision tools, was enacted in December 2021. Since then, it has gone through multiple iterations. At its core, it requires that a bias audit be conducted on an automated employment decision tool before that tool is used.

“These are decisions that impact employees and their lives in a really major way,” said Ian White, CEO, founder and CTO of data and operations company ChartHop. “Getting a job or getting promoted are life changing things for people. It’s not a bad thing for people to have a reminder of some of the ethical consequences of deploying some of these technologies.”

Following a final round of rule updates, the law will be enforced from July 5. If successful, it could become a blueprint for other cities. Here’s a breakdown of what employees and employers need to know.

What the law requires

The law requires that candidates or employees who reside in NYC be notified about the use of automation tools in the assessment or evaluation for hire or promotion, as well as about the job qualifications and characteristics the automated employment decision tool will use. Enforcement was postponed from mid-April to July 5. Violating the law could subject an employer to fines of between $500 and $1,500 per violation, per day.

“This is for each instance of discrimination,” said Siobhan Savage, CEO and co-founder of Reejig, a workforce intelligence platform. “If you’re thinking about large companies, and the amount of decisions being made with AI, you’re talking millions of dollars that could be fined.”

The law’s definition of an automated employment decision tool includes any automated process that either replaces or substantially assists discretionary decision-making for employment decisions. For example, an automated process that screens resumes and schedules interviews based on such screening would be deemed an automated decision tool and subject to the law’s requirements. However, an automated process that simply transfers applicant information from resumes to a spreadsheet, without otherwise scoring or ranking the applicants, would not be subject to the law’s requirements.

“We’re entering a period of rapid transformational change when it comes to AI,” said Sam Shaddox, head of legal for AI recruiting company SeekOut. “Because of that, we as a society need to think about the impact that’s going to have on how we function and how day-to-day life operates.”

Shaddox compares the legislation to a “nutrition label for the use of automated employment decision tools.”

Why the legislation is important

Experts have welcomed the forthcoming legislation, which puts necessary guardrails around AI adoption just as development of the technology picks up speed. Without pausing to conduct these audits, employers risk deploying tools that disproportionately impact individuals applying for certain roles.

“When it comes to causing no harm and making good and fair decisions around people, it’s incredibly important,” said Savage. “A lot of the algorithms are trained on previous decisions by the company. If you have hired historically on a very similar path, the model will learn that it’s the recommendation it should make. That’s where you need a circuit breaker.” 
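Savage’s point about learned bias can be made concrete with a toy example. The sketch below is purely illustrative, with made-up data and a deliberately naive scoring function rather than any vendor’s actual model; it shows how a system trained only on past hiring decisions simply echoes the historical pattern:

```python
# Toy illustration of a model "learning" historical hiring patterns:
# a scorer fit only to past hires reproduces whatever skew they contain.
from collections import Counter

# Hypothetical history: 90% of past hires came from one background.
past_hires = ["background_x"] * 90 + ["background_y"] * 10

hire_counts = Counter(past_hires)
total_hires = sum(hire_counts.values())

def learned_score(candidate_background: str) -> float:
    # The "model" is just the historical hire rate for this background,
    # so it recommends whatever the company has always done.
    return hire_counts[candidate_background] / total_hires

print(learned_score("background_x"))  # 0.9 - favored by history alone
print(learned_score("background_y"))  # 0.1 - penalized regardless of merit
```

This self-reinforcing loop is exactly what the law’s mandated audits, the “circuit breaker” Savage describes, are meant to catch before a tool goes into production.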

Douglas Brundage, founder and CEO of creative consultancy Kingsland, said: “Using AI in any capacity before we understand its tendency towards bias, which has been proven over and over again, should be reconsidered.” For example, Amazon had to scrap its automated hiring tool in 2018 after finding it discriminated against women. 

The law also sets the tone for other municipalities to introduce similar legislation.

“When something like this happens in New York, you will feel the ripple effects of this everywhere around the world,” said Savage. “Governments and legislation will really have to live up to that benchmark.”

Other countries are also trying to slow the pace of AI adoption until the technology has been thoroughly vetted. Italy banned ChatGPT in early April, and in Europe, regulators are preparing an AI regulation that would be enforced across all 27 member countries of the European Union (which, post-Brexit, no longer includes the U.K.).

“It may be cumbersome and bureaucratic, but it’ll be broadly effective and will set a standard,” said Parry Malm, CEO at Phrasee, an AI-powered content platform for marketing.

The audit process will be long

Savage and Shaddox both shared that their companies have conducted audits ahead of the law’s enforcement. These took between six and 12 months to complete, which signals that this might be a heavy lift for some employers. However, it’s an important one. “You can’t mark your own homework and say it’s all good and fine,” said Savage of why an independent audit is necessary.
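For context on what an audit actually measures: the city’s rules describe bias audits in terms of selection rates (the share of applicants in each demographic category that a tool advances) and impact ratios comparing each category’s rate against the most-selected one. The sketch below is a minimal illustration of that arithmetic using invented numbers and placeholder category names; a real audit must be performed by an independent auditor on actual historical data:

```python
# Minimal sketch of the selection-rate / impact-ratio arithmetic behind a
# bias audit. All figures and category names here are hypothetical.
outcomes = {
    "category_a": {"applicants": 400, "selected": 120},
    "category_b": {"applicants": 350, "selected": 70},
    "category_c": {"applicants": 250, "selected": 90},
}

# Selection rate: fraction of each category's applicants the tool advanced.
selection_rates = {
    name: counts["selected"] / counts["applicants"]
    for name, counts in outcomes.items()
}

# Impact ratio: each category's selection rate relative to the highest one.
# Ratios far below 1.0 are the red flags an auditor looks for.
highest_rate = max(selection_rates.values())
for name, rate in sorted(selection_rates.items()):
    print(f"{name}: selection rate {rate:.2f}, impact ratio {rate / highest_rate:.2f}")
```

Running this prints an impact ratio of roughly 0.56 for category_b, the kind of disparity that would warrant scrutiny in a published audit.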

The first step a NYC-based company should take is to determine whether it is using AI tools to screen for hiring or promotion. Shaddox said that in many cases, a company might turn to its vendors to lead the audit. However, it is ultimately up to the company to ensure that the audit is done.

“It creates a unique dynamic where the vendors are going to want to be compliant, and need to be compliant, in order to meet the needs of the customers who are the employers,” said Shaddox. “However, it is ultimately the company that is responsible for their vendors.”

“My whole theory is that whether it’s a human or robot making a decision, you as a company are liable,” said Savage.

Transparency is a key component

The next step is to engage an independent auditor to conduct a bias audit of any AI tool being used. The results must then be published on the employer’s website.

“Some of these models can seem to be mysterious,” said White. “Publishing results for transparency and clarity for people can help build confidence in how a piece of tech is being used. That transparency piece is a good feature of this as well.”

Each employer must also provide applicants and employees notice of its use of AI in hiring decisions, via its website, a job posting, mail or email.

“It’s a positive sign to see new regulations regarding the importance of organizations ensuring transparency and reducing bias when using AI in hiring and recruitment techniques,” said Sultan Saidov, president and co-founder of talent management company Beamery. “This will help us guarantee visibility into the algorithms used, and provide an ability to audit AI-powered tools.”

Rijul Gupta, co-founder and CEO of DeepMedia, an AI platform focused on responsible synthetic media use, said it would be helpful for companies to view these audits as a catalyst for change.

“Embracing transparency, investing in unbiased AI tools, and cultivating a culture of fairness will ultimately give rise to stronger, more agile organizations ready to conquer the challenges of tomorrow,” said Gupta. “Law is not a constraint, but an invitation to revolutionize the way we think about work, talent, and technology.”

While there is still time before the law is enforced, each expert said organizations should get ahead of it and evaluate their own practices, even those not based in NYC.

“Organizations should use this time given to them to evaluate their own AI practices and get ahead of any other future legislation,” said Saidov.