Europe’s provisional AI legislation attempts to strike a tricky balance between promoting innovation and protecting citizens’ rights.

The European Union reached a provisional agreement on its much-anticipated Artificial Intelligence Act on Dec. 8, becoming the first global power to pass rules governing the use of AI.

The legislation outlines EU-wide measures designed to ensure that AI is used safely and ethically, and includes limitations on the use of live facial recognition and new transparency requirements for developers of foundation models, such as the one underpinning ChatGPT.

What is the AI Act?

The AI Act is a set of EU-wide legislation that seeks to place safeguards on the use of artificial intelligence in Europe, while simultaneously ensuring that European businesses can benefit from the rapidly evolving technology.

The legislation establishes a risk-based approach to regulation that categorizes artificial intelligence systems based on their perceived level of risk to and impact on citizens.

The following use cases are banned under the AI Act:

  • Biometric categorisation systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race).
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring based on social behaviour or personal characteristics.
  • AI systems that manipulate human behaviour to circumvent people’s free will.
  • AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation.

However, there are caveats to the provisional agreement as it currently stands. Perhaps most significant is that the AI Act won’t come into force until 2025, leaving a regulatory gap in which companies can develop and deploy AI without risk of penalties. Until then, companies will be expected to abide by the legislation voluntarily, essentially leaving them free to self-govern.

What do AI developers need to know?

Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”

AI models that are considered high-impact and pose a systemic risk – meaning they could cause widespread problems if things go wrong – must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights.

To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world testing. These will be set up by national authorities to allow companies to develop and train their AI technologies before they’re introduced to the market “without undue pressure from industry giants controlling the value chain.”

What about ChatGPT and generative AI models?

Providers of general-purpose AI systems must meet certain transparency requirements under the AI Act; this includes creating technical documentation, complying with European copyright laws and providing detailed information about the data used to train AI foundation models. The rule applies to models used for generative AI systems like OpenAI’s ChatGPT.

SEE: Generative AI: UK Business Leaders Face Investment Challenges as Everyone Claims to Be an Expert (TechRepublic)

What are the penalties for breaching the AI Act?

Companies that fail to comply with the legislation face fines ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of global turnover, depending on the infringement and the size of the company.

How significant is the AI Act?

Symbolically, the AI Act represents a pivotal moment for the AI industry. Despite its explosive growth in recent years, AI technology remains largely unregulated, leaving policymakers struggling to keep up with the pace of innovation.

The EU hopes that its AI rulebook will set a precedent for other countries to follow. Posting on X (formerly Twitter), European Commissioner Thierry Breton labelled the AI Act “a launchpad for EU startups and researchers to lead the global AI race,” while Dragos Tudorache, MEP and member of the Renew Europe Group, said the legislation would strengthen Europe’s ability to “innovate and lead in the field of AI” while protecting citizens.

What have been some challenges associated with the AI Act?

The AI Act has been beset by delays that have eroded the EU’s position as a frontrunner in establishing comprehensive AI regulations. Most notable has been the arrival and subsequent meteoric rise of ChatGPT late last year, which had not been factored into plans when the EU first set out its intention to regulate AI in Europe in April 2021.

As reported by Euractiv, this threw negotiations into disarray, with some countries expressing reluctance to include rules for foundation models on the basis that doing so could stymie innovation in Europe’s startup scene. In the meantime, the U.S., U.K. and G7 countries have all taken strides towards publishing AI guidelines.

SEE: UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety (TechRepublic)

What are critics saying about the AI Act?

Some privacy and human rights groups have argued that these AI regulations don’t go far enough, accusing the EU lawmakers of delivering a watered-down version of what they originally promised.

Privacy rights group European Digital Rights labelled the AI Act a “high-level compromise” on “one of the most controversial digital legislations in EU history,” and suggested that gaps in the legislation threatened to undermine the rights of citizens.

The group was particularly critical of the Act’s limited ban on facial recognition and predictive policing, arguing that broad loopholes, unclear definitions and exemptions for certain authorities left AI systems open to potential misuse in surveillance and law enforcement.

Ella Jakubowska, senior policy advisor at European Digital Rights, said in a statement:
“It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc. Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm. Our fight against biometric mass surveillance is set to continue.”

Amnesty International was also critical of the limited ban on AI facial recognition, saying it set “a devastating global precedent.”

Mher Hakobyan, advocacy advisor on artificial intelligence at Amnesty International, said in a statement: “The three European institutions – Commission, Council and the Parliament – in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning artificial intelligence (AI) regulation.

“Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”

What’s next with the AI Act?

The AI Act is now pending formal adoption by both the European Parliament and the Council in order to be enacted as European Union legislation. The agreement will be subject to a vote in an upcoming meeting of the Parliament’s Internal Market and Civil Liberties committees.
