The European Union’s landmark artificial intelligence legislation went into effect on Thursday, marking a significant step in regulating AI while aiming to foster innovation and safeguard citizens’ rights.
Earlier this year, after difficult negotiations, the EU adopted the world’s first comprehensive rules for AI, targeting in particular advanced systems such as OpenAI’s ChatGPT.
The legislation, initially proposed in 2021, gained urgency with the emergence of ChatGPT in 2022, which demonstrated generative AI’s capability to produce sophisticated text rapidly. Other generative AI technologies, such as DALL-E and Midjourney, can create images in various styles from simple text prompts.
European Commission President Ursula von der Leyen emphasized that the AI Act establishes new safeguards to protect individuals and their interests while providing businesses and innovators with clear rules and certainty.
While companies have until 2026 to comply in full, rules governing general-purpose AI models such as ChatGPT will apply one year after the law’s entry into force. Strict prohibitions, including bans on the use of AI for predictive policing based on profiling and on systems that use biometric data to infer an individual’s race, religion, or sexual orientation, take effect six months after the law enters into force.
The AI Act adopts a risk-based approach: high-risk systems face more stringent requirements to protect citizens’ rights. The greater the potential risk to health or rights, the stricter the obligations for companies.
Marcus Evans, a partner at Norton Rose Fulbright, highlighted that the AI Act’s broad geographic scope means organizations with any connection to the EU must establish an AI governance program to meet their obligations.
Companies that violate the rules on banned practices or data requirements could face fines of up to seven percent of their global annual revenue. To enforce the new law, the EU has established an “AI Office” staffed by technology experts, lawyers, and economists.