What the US Can Learn From the UK and EU About Regulating AI

There are ways to protect the public from the potential dangers of AI without stifling innovation—and the Europeans have already shown us how.
November 6, 2024

This article originally appeared in SC Media.

California Gov. Gavin Newsom vetoed a bill last month that would have enacted the most significant AI legislation to date in the United States.

The measure was seen by legislators as offering a potential blueprint for federal regulation, focused on making tech companies legally liable for the harm caused by their AI models. It would have forced the industry to conduct safety tests on powerful AI models and mandated that tech companies enable a “kill switch” for AI technology to stop potential misuse.

Newsom argued that while the AI safety bill's intentions were valid, it took a broad-brush approach, applying uniform regulation to all large models without distinguishing between high-risk AI applications and more benign ones.

The governor pointed out that the bill focused on large-scale, expensive AI models, which would potentially give the public a false sense of security by targeting only high-cost systems. Smaller, more specialized AI models, which arguably pose equal or even greater risks, were not sufficiently addressed. Additionally, the bill applied strict safety protocols to all large models, regardless of their actual deployment in high-risk environments or their involvement with sensitive data. As a result, Newsom feared that the bill could create an overly restrictive environment that might hamper innovation.

The bill—and Newsom’s decision to veto it—has sparked widespread debate about the best approach to regulating AI, specifically when it comes to reducing risk without stifling innovation. Other regions, such as the UK and the European Union (EU), are also navigating this debate using various approaches.

Considerations for Regulating AI

So, what are some of the important considerations that go into developing regulation for AI? And what can the U.S. learn from the UK and EU, which are already doing this more effectively?

Let’s take a closer look:

Compared to the U.S., both the UK and EU are further along in their regulatory efforts. And unlike the proposed AI bill in California, both regions emphasize regulation that distinguishes high-risk applications from lower-risk ones, regardless of whether those applications use large models or smaller, specialized ones.

For example, under Prime Minister Keir Starmer’s government, the UK promotes a safety-focused AI regulatory framework that seeks to prevent misuse by enhancing transparency, human oversight, and data quality standards. It’s particularly focused on high-risk sectors like healthcare and criminal justice, areas in which AI is most likely to be misused or abused.

This approach aligns closely with the EU’s AI Act, which also imposes compliance requirements on high-risk AI applications, such as those in healthcare, finance, and public services. The stringent EU AI Act bans AI systems that pose an "unacceptable level of risk," including social scoring algorithms. Both the UK and EU recognize the importance of public trust in AI, especially in critical sectors, and their regulatory frameworks aim to ensure that AI systems are explainable, reliable, and fair.

But while both the UK and EU regulations aim to mitigate risks, there are still concerns that this strict approach might stifle innovation, particularly for smaller companies. For example, the compliance costs associated with these regulations could become prohibitive for startups—potentially limiting the development of cutting-edge AI technologies.

Lessons for the United States

The U.S., which today lacks comprehensive federal AI regulation, could learn several lessons from the UK and EU. First, the European regulations are based on the actual risk an AI system poses. Both the UK and EU focus on strictly regulating high-risk AI systems while allowing more flexibility for low-risk applications. This targeted approach could help avoid stifling innovation through over-regulation, which was one of the main concerns Newsom highlighted in his veto.

Additionally, the emphasis on transparency, human oversight, and accountability in both models offers a roadmap for how the U.S. could structure its own AI governance. Ensuring that AI systems are explainable and accountable is crucial for public trust, particularly as these technologies become more integrated into everyday life.

Another strategy that the UK has adopted, which the U.S. could potentially benefit from, is the use of regulatory sandboxes. Sandboxing lets tech companies experiment with AI technology in a controlled environment, fostering innovation while ensuring that AI applications are subject to rigorous safety testing before being deployed at scale.

Finally, as the U.S. considers its own AI regulations, it should also focus on international competitiveness. The EU's AI Act has already set a global standard, and many U.S. companies will need to comply with these rules when operating in Europe. Aligning U.S. regulations with global standards could help streamline compliance and ensure that American companies remain competitive on an international stage.

In short, Gavin Newsom’s veto of California’s AI safety bill highlights the challenges of balancing innovation with safety in a rapidly evolving landscape. While his concerns about over-regulation are valid, the experiences of both the UK and the EU show that it’s possible to create a regulatory framework that protects public safety without unduly restricting technological development.

Adopting targeted, risk-based regulations, fostering transparency and accountability, and supporting innovation through regulatory sandboxes are just a few of the strategies that the U.S. may consider as it continues to develop complex legislation around AI—legislation that's essential for maintaining public trust and driving responsible AI development.

Interested in learning more about AI and how it can protect your organization from advanced cyber attacks? Schedule a demo today!
