Technology // December 4, 2023 // 6 min read

How AI regulation differs in the U.S. and EU

There’s a global artificial intelligence race – and so far the U.S. appears to be in the lead. 

That’s in part thanks to the U.S. being home to huge tech companies like OpenAI, Microsoft, Google and Meta. But it’s also because of the lack of federal legislation. So far, the only law on the books specifically governing AI is New York City’s Local Law 144, which requires that a bias audit be conducted on automated hiring tools. Other states, like California and New Jersey, aren’t far behind in drafting legislation of their own.

The White House has issued an Executive Order on safe, secure and trustworthy AI and a blueprint for an AI Bill of Rights. The Equal Employment Opportunity Commission (EEOC) has also been firm in saying that it will continue to uphold Title VII of the Civil Rights Act, which protects job seekers and workers from discrimination, whether the risk comes from a human or a robot.

Here’s a look at how the regulatory approaches taken in the U.S. and the European Union compare.

The EU’s precautionary approach

Overall, it’s clear the U.S. has a more decentralized and sector-specific approach to AI regulation. Across the Atlantic, the EU has taken a more comprehensive and precautionary tack. This is embodied in the EU AI Act, which the European Parliament approved its negotiating position on in June 2023 and which is due to be finalized before the European Parliament elections in June 2024. The law would classify AI systems by level of risk and impose obligations depending on which category they fall into.

The legislation focuses on five main priorities: AI use should be safe, transparent, traceable, non-discriminatory, and environmentally friendly. The legislation also requires that AI systems be overseen by people and not by automation, establishes a technology-neutral, uniform definition of what constitutes AI, and would apply to systems that have already been developed as well as to future AI systems. 

Both the U.S. and EU hold pivotal positions in shaping global AI governance and setting standards for AI risk management. However, Europe-based tech startups are concerned that heavier legislation coming out of the EU will hinder innovation, leaving them to fall behind their U.S. counterparts, who face far less red tape.

The issue has led leadership teams at companies like France-based AI firm Mistral to lobby for diluted regulations, arguing that the rules make the global AI innovation race unequal.

‘We don’t want to cripple our winner, right?’

The U.K. lands somewhere in the middle. No longer part of the EU, the country is developing its own AI rulebook. And yet, just as with the General Data Protection Regulation, if a U.K. company has customers in the EU and works with partners across member states, it will need to play by the EU AI Act’s rules.

It’s a rock-and-a-hard-place situation. “Then every government kind of says, ‘Well, we don’t want to cripple our winner, right?’” said James Clough, CTO and co-founder of Robin AI, a U.K.-based startup using AI to transform the legal industry. “They might see that they might have a really successful AI company growing in their country and they don’t want to regulate it away. But then it gets harder and harder to come up with meaningful regulations.”

And any regulation brings bureaucracy. That’s burdensome for companies, especially smaller ones, which can’t match the legal and compliance resources of the tech giants.

“The result of that is it tends to favor established players and big companies,” said Clough. “They [big tech] can handle all of that regulation and it doesn’t stop them from doing what they want to do. Whereas smaller companies might be doing something really innovative, but if they don’t have the big compliance team to write a big report on potential risks, it makes it harder for them to innovate.”


Cliff Jurkiewicz, vp of strategy at global HR tech company Phenom, said that the effects of these pending laws are already reverberating across countries including France, Germany and Italy.

“They don’t have the resources to compete with already established U.S.-based businesses that have already built well-known and well-developed foundational models,” said Jurkiewicz. “They’re saying, ‘You’re not even allowing us to get to the starting line.’ What they want is for the end products to be regulated. That’s the real challenge in this. Do you regulate the framework or the end product?”

The three countries reached an agreement in mid-November that supports mandatory self-regulation through codes of conduct for these foundational models. Jurkiewicz explained that the agreement “waters down” the legislation, letting companies like Mistral continue to build their products with less reporting.

But Jurkiewicz argues U.S. companies are simply too far ahead. “The corporate influence is far too embedded in politics here,” he said. Even the White House’s Executive Order emphasizes promoting innovation and competition as a pillar of its strategy around regulation now and in the future.

“Any regulation should still advocate for innovation responsibly,” said Asha Palmer, a former assistant U.S. attorney and current svp of compliance solutions at training platform Skillsoft. “Oftentimes regulation is developed because we don’t trust that people are innovating responsibly. Any good regulation will balance risk with innovation.”

Staying informed during the AI boom

While different countries navigate AI regulation, remaining well informed as the technology moves so rapidly will be critical.

“Employers need to be looking at what other cities, states or countries are doing because it’s going to impact how they look to govern themselves and how they plan to use AI,” said Keith Sonderling, EEOC commissioner. “That’s especially important for multinational employers, which a lot of the companies that use these AI programs are. Employers need to pay attention to what’s going on in the EU, especially if they will have to comply with the standards there. It might be easier or more cost effective to then apply those standards here in the U.S.”

And Palmer goes even further, saying that beyond developers understanding AI legislation and regulation, it’s important that consumers grasp what’s going on as well.

“What we’re missing is guidance that directs the majority of people who will interact with AI, the consumers, who are the majority of the population,” said Palmer. “What are their responsibilities and scrutiny and obligations as they’re using it? I don’t know who that should come from, to be honest.”


The Executive Order does lean into this a little, stating that Americans should be protected from AI-enabled fraud and deception. The Department of Commerce plans to develop guidance for content authentication and watermarking to clearly label AI-generated content. 

It wouldn’t be the first time we’ve seen awareness campaigns urging caution around new technology. Even on Facebook and X, posts get flagged for misinformation. Is that what needs to happen as AI inches deeper and deeper into our lives? Jurkiewicz believes it does.

“What I would personally like to see is the explainability of artificial intelligence,” said Jurkiewicz. “There needs to be a broader effort with a consumer focus every time these tools are being used. There has to be some notification on these social media platforms when something is AI-generated.”

Right now, you might be able to tell, more or less, that something is AI-generated, but in a year’s time, when the technology gets smarter, that will be harder to do. It’s another angle that needs to be considered when governing bodies decide which parts of AI to regulate and how. “If you’re going to use AI, it’s fine, but you have to blatantly notify people,” added Jurkiewicz.