The EU has officially adopted the AI Act, the world’s first comprehensive rulebook on artificial intelligence.

Members of the European Parliament (MEPs) overwhelmingly endorsed the regulation on Wednesday. The law passed with 523 votes in favour, 46 against, and 49 abstentions.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the bloc’s commissioner for the internal market, wrote on X.

Breton’s claim was echoed across the tech world — although not always with praise. Supporters praised the act’s attempt to reduce AI risks, while critics warned that the rules will inhibit innovation.

Still, both sides expect the legislation to have a powerful impact. By approving the first law of its kind, the EU has set a precedent for governments across the world.

Enza Iannopollo, an analyst at tech advisory firm Forrester, hailed the law’s adoption as the “beginning of a new AI era.”

“Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI,” she told TNW. “Every other region can only play catch-up.”

The bloc’s timing has also caught attention. The EU had initially planned to vote on the law next month, but decided that an earlier vote was required.

“They recognise that the technology and its adoption is moving so fast that there is no time to waste, especially when there isn’t an alternative framework available,” Iannopollo said.

The path to implementation

Despite being an EU law, the AI Act will apply to companies around the world that do business in the bloc.

Breaches of the rules can trigger fines of up to 7% of a company’s global turnover. That has sparked concern in big tech, which faces more favourable regulation in the US.

Many European businesses have also raised objections. They worry that the continent’s tech sector will be pushed further behind its competitors in the US and China. Lobbying from European startups Mistral AI and Aleph Alpha reportedly pushed the EU to relax the rules for foundation models, the general-purpose technology that underpins the likes of OpenAI’s ChatGPT.

In an attempt to allay their concerns, the AI Act divides applications into different risk categories. The strictest rules are reserved for “high-risk” systems, from cars to law enforcement tools. Deployments designated “unacceptable” — such as social credit scoring — will be banned altogether.

These rules are expected to take effect this May. Iannopollo advises organisations to assemble AI compliance teams as soon as possible.

“There is a lot to do and little time to do it,” she warned.


