The European Union’s landmark artificial intelligence law officially enters into force Thursday — and it means tough changes for American technology giants.
The AI Act, a first-of-its-kind rule that aims to govern the way companies develop, use and apply AI, was given final approval by EU member states, lawmakers, and the European Commission — the executive body of the EU — in May.
CNBC has run through all you need to know about the AI Act — and how it will affect the biggest global technology companies.
What is the AI Act?
The AI Act is a piece of EU legislation governing artificial intelligence. First proposed by the European Commission in 2020, the law aims to address the negative impacts of AI.
The regulation sets out a comprehensive and harmonized regulatory framework for AI across the EU.
It will primarily target large U.S. technology companies, which are currently the primary builders and developers of the most advanced AI systems.
However, plenty of other businesses will fall within the scope of the rules — even non-tech firms.
Tanguy Van Overstraeten, head of law firm Linklaters’ technology, media and telecommunications practice in Brussels, said the EU AI Act is “the first of its kind in the world.”
“It is likely to impact many businesses, especially those developing AI systems but also those deploying or merely using them in certain circumstances.”
The legislation applies a risk-based approach to regulating AI, which means that different applications of the technology are regulated differently depending on the level of risk they pose to society.
For AI applications deemed to be “high-risk,” for example, strict obligations will be introduced under the AI Act. Such obligations include adequate risk assessment and mitigation systems, high-quality training datasets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed documentation on models with authorities to assess compliance.
Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.
The law also imposes a blanket ban on any applications of AI deemed “unacceptable” in terms of their risk level.
Unacceptable-risk AI applications include “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and the use of emotional recognition technology in the workplace or schools.
What does it mean for U.S. tech firms?
U.S. giants like Microsoft, Google, Amazon, Apple, and Meta have been aggressively partnering with and investing billions of dollars into companies they think can lead in artificial intelligence amid a global frenzy around the technology.
Cloud platforms such as Microsoft Azure, Amazon Web Services and Google Cloud are also key to supporting AI development, given the huge computing infrastructure needed to train and run AI models.
In this respect, Big Tech firms will undoubtedly be among the most heavily targeted names under the new rules.
“The AI Act has implications that go far beyond the EU. It applies to any organisation with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you’re located,” Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian, told CNBC via email.
“This will bring much more scrutiny on tech giants when it comes to their operations in the EU market and their use of EU citizen data,” Thompson added.
Meta has already restricted the availability of its AI model in Europe due to regulatory concerns — although this move wasn’t necessarily due to the EU AI Act.
The Facebook owner earlier this month said it would not make its LLaMa models available in the EU, citing uncertainty over whether they comply with the EU’s General Data Protection Regulation, or GDPR.
The company was previously ordered to stop training its models on posts from Facebook and Instagram in the EU due to concerns it may violate GDPR.
Eric Loeb, executive vice president of government affairs at enterprise tech giant Salesforce, told CNBC that other governments should look to the EU’s AI Act as a blueprint for their own respective policies.
Europe’s “risk-based regulatory framework helps encourage innovation while also prioritizing the safe development and deployment of the technology,” Loeb said, adding that “other governments should consider these rules of the road when crafting their own policy frameworks.”
“There is still much work to be done in the EU and beyond, and it’s critical that other countries continue to move forward with defining and then implementing interoperable risk-based frameworks,” he added.
How is generative AI treated?
Generative AI is labeled in the EU AI Act as an example of “general-purpose” artificial intelligence.
This label refers to tools that are meant to be able to accomplish a broad range of tasks on a level similar to — if not better than — a human.
General-purpose AI models include, but aren’t limited to, OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude.
For these systems, the AI Act imposes strict requirements such as respecting EU copyright law, issuing transparency disclosures on how the models are trained, and carrying out routine testing and adequate cybersecurity protections.
Not all AI models are treated equally, though. AI developers have said the EU needs to ensure open-source models — which are free to the public and can be used to build tailored AI applications — aren’t too strictly regulated.
Examples of open-source models include Meta’s LLaMa, Stability AI’s Stable Diffusion, and Mistral’s 7B.
The EU does set out some exceptions for open-source generative AI models.
But to qualify for exemption from the rules, open-source providers must make their parameters, including weights, model architecture and model usage, publicly available, and enable “access, usage, modification and distribution of the model.”
Open-source models that pose “systemic” risks will not qualify for the exemption, according to the AI Act.
It’s “necessary to carefully assess when the rules trigger and the role of the stakeholders involved,” Van Overstraeten said.
What happens if a company breaches the rules?
Companies that breach the EU AI Act could be fined anywhere from 7.5 million euros or 1.5% of their global annual revenues up to 35 million euros ($41 million) or 7% of their global annual revenues — whichever amount is higher.
The size of the penalties will depend on the infringement and size of the company fined.
That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.
Oversight of all AI models that fall under the scope of the Act — including general-purpose AI systems — will fall under the European AI Office, a regulatory body established by the Commission in February 2024.
Jamil Jiva, global head of asset management at fintech firm Linedata, told CNBC the EU “understands that they need to hit offending companies with significant fines if they want regulations to have an impact.”
Just as GDPR showed the EU could “flex their regulatory influence to mandate data privacy best practices” on a global level, the bloc is now trying to do the same for AI with the new act, Jiva added.
Still, it’s worth noting that even though the AI Act has finally entered into force, most of the provisions under the law won’t actually come into effect until at least 2026.
Restrictions on general-purpose systems won’t begin until 12 months after the AI Act’s entry into force.
Generative AI systems that are currently commercially available — like OpenAI’s ChatGPT and Google’s Gemini — are also granted a “transition period” of 36 months to get their systems into compliance.