The EU has reached an agreement on the AI Act, which is likely to come into effect in 2024 or 2025 as scheduled.
This landmark decision resulted from an intense 37-hour negotiation session involving the European Parliament and EU member states.
Thierry Breton, the European Commissioner primarily responsible for the new suite of laws, described the agreement as “historic.” The talks had been in progress since Tuesday, with negotiators working through the night on several occasions.
The EU becomes the very first continent to set clear rules for the use of AI 🇪🇺
The #AIAct is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global AI race.
The best is yet to come! 👍
— Thierry Breton (@ThierryBreton) December 8, 2023
Carme Artigas, Spain’s Secretary of State for AI, was crucial in steering these negotiations to a successful conclusion.
Artigas pointed out the significant backing the text received from major European countries, specifically stating, “France and Germany supported the text.” This is notable, as France and Germany, keen to encourage their own growing AI industries, were questioning some of the stricter elements of the law.
The EU is now set to lead the way on AI regulation. While the specific contents and implications of the new law are still emerging, it is expected to take effect in 2024 or 2025.
Key agreements and aspects of the EU AI Act
The provisional agreement on the AI Act represents a historic step in regulating AI. It follows in the footsteps of other EU technology regulations, such as GDPR, which has subjected tech firms to billions of euros in fines over the years.
Carme Artigas, Spanish secretary of state for digitalization and artificial intelligence, said of the law, “This is a historical achievement and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”
Here are the core new aspects of the agreed legislation:
- High-impact and high-risk AI systems: The agreement introduces rules on general-purpose AI models that could pose ‘systemic’ risks. Precisely what this means remains ambiguous, but it’s broadly designed to cater to new generations of models at GPT-4 level and beyond.
- Governance and enforcement: A revised governance system was established, including some enforcement powers at the EU level. This ensures a centralized approach to regulating AI across member states.
- Prohibitions and law enforcement exceptions: The agreement extends the list of prohibited AI practices but allows for the use of remote biometric identification by law enforcement in public spaces under strict conditions. This aims to balance public safety with privacy and civil liberties.
- Rights protection: A key aspect of the agreement is the obligation for deployers of high-risk AI systems to assess impacts on people’s rights before using an AI system. “Deployers” is the keyword here, as the Act assigns responsibilities throughout the AI value chain.
Other agreed areas include:
- Definitions and scope: The definition of AI systems has been aligned with the OECD’s definition, and the regulation excludes AI systems used exclusively for military, defense, research, and innovation purposes or by individuals for non-professional reasons.
- Classification of AI systems: AI systems will be classified based on risk, with high-risk systems subject to stringent requirements and limited-risk systems subject to lighter transparency obligations, consistent with the risk-based approach intended from the outset.
- Foundation models: The agreement addresses foundation models, large AI systems capable of performing various tasks like ChatGPT/Bard/Claude 2. Specific transparency obligations are set for these models, with stricter regulations for high-impact foundation models.
- High-risk AI systems requirements: High-risk AI systems will be allowed in the EU market but must comply with specific requirements, including data-quality and technical-documentation obligations, drafted so that compliance remains feasible for SMEs.
- Responsibilities in AI value chains: The agreement clarifies the roles and responsibilities of different actors in AI system value chains, including providers and users, and how these relate to existing EU legislation.
- Prohibited AI practices: The act bans certain unacceptable AI practices, such as cognitive behavioral manipulation, untargeted scraping of facial images, and emotion recognition in workplaces and educational institutions.
- Emergency procedures for law enforcement: An emergency procedure allows law enforcement agencies to deploy high-risk AI tools that have not passed the conformity assessment in urgent situations.
- Real-time biometric identification: Law enforcement’s use of real-time remote biometric identification systems in public spaces is permitted under strict conditions for specific purposes like preventing terrorist attacks or searching for serious crime suspects.
Governance, penalties, and enforcement:
- Governance: An AI Office within the Commission will oversee advanced AI models, supported by a scientific panel of independent experts and the AI Board comprising member states’ representatives.
- Penalties: The agreement sets fines based on a percentage of the company’s global annual turnover for various violations, with more proportionate caps for SMEs and start-ups.
- Fines: Penalties for non-compliance have been established based on the severity of the violation. The fines are calculated either as a percentage of the company’s global annual turnover from the previous fiscal year or as a fixed amount. The highest of the two is taken. The fines are structured as follows: €35 million or 7% of turnover for violations involving banned AI applications, €15 million or 3% for breaches of the Act’s obligations, and €7.5 million or 1.5% for providing incorrect information.
- Support: The agreement includes AI regulatory sandboxes to test innovative AI systems under real-world conditions and provides support to smaller companies.
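The fine structure described above reduces to a simple rule: for each violation tier, the applicable cap is the higher of a fixed amount and a percentage of global annual turnover. A minimal illustration in Python (the function name and the example turnover figure are invented for this sketch, not taken from the Act):

```python
def applicable_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed cap and the given share of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Banned-application tier (EUR 35M or 7%) for a hypothetical EUR 2B turnover:
print(applicable_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```

For a smaller firm, the fixed amount dominates: at EUR 100M turnover, 7% is only EUR 7M, so the EUR 35M cap applies instead.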
This tentative deal still requires approval from the European Parliament and the EU’s 27 member states.
How will the legislation affect ‘frontier models’?
Under the new regulations, all developers of general-purpose AI systems, particularly those with a wide range of potential applications like ChatGPT and Bard, must maintain up-to-date information on how their models are trained, provide a detailed summary of the data used in training, and have policies that respect copyright laws and ensure acceptable use.
The Act also classifies certain models as posing a “systemic risk.” This assessment is primarily based on the computational power used to train these models. The EU has set the threshold for this category at models trained with more than 10^25 floating-point operations (roughly 10 trillion trillion).
Currently, OpenAI’s GPT-4 is widely believed to be the only model that automatically meets this threshold. However, the EU could designate other models under this definition.
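The compute threshold lends itself to a one-line check. The sketch below encodes the reported 10^25-FLOP figure; the constant and function names are mine, and the final legal wording may define the trigger differently:

```python
# Reported training-compute threshold for the "systemic risk" presumption.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a general-purpose model's training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Outside estimates (unconfirmed by OpenAI) put GPT-4's training compute
# around 2e25 FLOPs, which would clear the bar:
print(presumed_systemic_risk(2e25))  # True
```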
Models deemed systemic risk will be subject to additional, more stringent rules. These include:
- Mandatory reporting of energy consumption.
- Conducting red-teaming or adversarial tests.
- Assessing and mitigating potential systemic risks.
- Ensuring robust cybersecurity controls.
- Reporting both the information used for fine-tuning the model and details about their system architecture.
How has the agreement been received?
The AI Act has sparked a wide range of reactions concerning innovation, regulation, and societal impact.
Fritz-Ulli Pieper, a specialist in IT law at Taylor Wessing, pointed out that, while the end is in sight, the Act is still liable to change.
He remarked, “Many points still to be further worked on in technical trilogue. No one knows how the final wording will look like and if or how you can really push current agreement in a final law text.”
Pieper’s insights reveal the complexity and uncertainty surrounding the AI Act, suggesting that much work remains to ensure the final legislation is effective and practical.
A key theme of these meetings has been balancing AI risks and opportunities, particularly as models can be ‘dual-natured,’ meaning they can provide benefits and inflict harm. Alexandra van Huffelen, Dutch Minister of Digitalisation, noted, “Dealing with AI means fairly distributing the opportunities and the risks.”
The Act also seemingly failed to protect EU citizens from large-scale surveillance, which caught the attention of advocacy groups like Amnesty International.
Mher Hakobyan, Amnesty International’s Advocacy Advisor on Artificial Intelligence, said of this point of controversy, “Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”
Following this provisional agreement, the AI Act is set to become applicable two years after its official enactment, enabling governments and businesses across the EU to prepare for compliance with its provisions.
In the interim, officials will negotiate the technical details of the regulation. Once the technical refinements are concluded, the compromise text will be submitted to the member states’ representatives for endorsement.
The final step involves a legal-linguistic revision to ensure clarity and legal accuracy, followed by formal adoption. The AI Act is sure to change the industry both in the EU and globally, though the extent of that change is hard to predict.