UK businesses must brace themselves to adhere to new EU artificial intelligence (AI) standards if they plan to trade in the bloc, lawyers and industry experts have said.

It comes as members of the European Parliament voted overwhelmingly last week to adopt the Artificial Intelligence Act, which aims to encourage “human-centric and trustworthy” AI.

Tamara Quinn, partner in Osborne Clarke’s international AI team, said a challenge for the UK is that “access to neighbouring EU markets will depend on compliance with the EU AI Act”.

In the absence of AI-specific legislation in the UK, developers might make the EU standards their “benchmark for compliance”, Quinn explained.

She added it was “well worth” businesses investing time in understanding AI compliance if they want to launch AI products and services in EU markets.

The EU AI Act is based on a high-to-low risk scale, with AI technologies deemed ‘high risk’ regulated more heavily. This category includes AI systems used in education and elections, generative programmes such as ChatGPT, and systems deployed by large social media companies.

By contrast, the UK – or at least Rishi Sunak’s government – is adopting what has been dubbed a ‘light touch’ regulatory approach. The latest white paper encourages safety but primarily promotes a “pro-innovation framework”.

Despite the difference, Angus Miln, partner at Taylor Wessing, said the EU regulation “might significantly impact the UK business landscape” as companies try to juggle “compliance with the domestic regime and stricter EU laws for their European operations”.

International companies that manage EU and UK businesses as a single unit could wind up applying the former’s rules to both for “operational convenience”, explained Miln.

If approved by Brussels, the AI Act is anticipated to come into force in early 2024, making it the world’s first comprehensive law regulating AI.

The Commission has proposed a compliance period of two years, by the end of which AI developers and companies must conform to the standards.

The future of AI regulation

But as AI rapidly evolves, regulation “will have to keep pace and grow in complexity to ensure organisations act responsibly and use AI to serve a common good as well as their bottom line”, said Greg Hanson, group vice president of EMEA & LATAM at NYSE-listed Informatica. 

It could take years of “close collaboration” between policymakers and businesses to fully develop AI regulation, Hanson added.

What’s more, a change in government may alter the UK’s regulatory position. Speaking at London Tech Week last Tuesday, Labour leader Keir Starmer highlighted his desire to enforce a “stronger” and “overarching” AI regulatory framework if his party came to power.

Stronger regulations, whether at home or in the EU, “should not come as a surprise or cause a barrier to business” though, said David Spencer, chief technology officer at WPP company Acceleration.

“British companies operating in the EU have always had to adapt to a more stringent interpretation of regulations”, he explained.

For instance, in 2016 the EU adopted the General Data Protection Regulation (GDPR), widely described as “the toughest privacy and security law in the world”. The UK’s 2018 Data Protection Act is based on the core principles of the EU’s GDPR, according to the Information Commissioner’s Office.

The EU may well steer the direction of tech regulation again.


