
On Wednesday, March 13, the European Parliament approved the Artificial Intelligence Act, a landmark regulation that introduces comprehensive legal oversight of AI usage—including powerful systems such as OpenAI’s ChatGPT. The regulation, which had already been agreed upon by the European Commission, the European Parliament, and member states, was formally presented and debated in the plenary session. It passed with a decisive majority: 523 votes in favor versus 46 against. Once published in the Official Journal of the European Union, the new law will enter into force gradually across all member states over the course of two years starting in May. Bans on unacceptable AI systems will take effect within six months, while regulations concerning generative AI models such as ChatGPT and Midjourney will begin applying next year. The final provisions—particularly those related to human rights assessments that determine whether AI applications exhibit bias or discrimination—are scheduled to come into effect in May 2026.
In this article, we will outline the scope and purpose of the Artificial Intelligence Act, provide an overview of its risk-based classification framework, clarify which actors will be subject to the regulation, and briefly summarize the penalties foreseen for non-compliance.
Scope and Purpose of the Artificial Intelligence Act
The Artificial Intelligence Act adopted by the EU is designed to foster innovation while ensuring that companies align their AI systems with ethical and legal standards. The regulation seeks to promote the development, deployment, and use of trustworthy AI within the European Union by mandating systems that are safe, transparent, traceable, non-discriminatory, and environmentally sustainable. At its core, the Act aims to safeguard public health, safety, and fundamental rights. It is of critical importance for companies to analyze their artificial intelligence systems and develop a strategic roadmap to comply with this law.
The Artificial Intelligence Act, one of the first of its kind in the world, seeks to regulate a rapidly advancing technology through a risk-based and human-centric approach. The Act, agreed upon and approved by the European Parliament in its plenary session on March 13, 2024, is also reported to include further remedial provisions to align it with data protection legislation. Special rules have likewise been agreed for foundation models, which are subject to certain transparency obligations. So-called “high-impact” foundation models, such as OpenAI’s GPT or Google’s Gemini, which are trained on large datasets and exhibit advanced complexity, capabilities, and well-above-average performance, will be required to comply with a stricter regime, including obligations in areas such as cybersecurity and energy consumption reporting, since such models can propagate systemic risks along the value chain.
Risk-Based Approach and Sanctions Linked to Risk Categories
The Artificial Intelligence Act adopts a risk-based approach, classifying AI systems according to their potential risk levels and subjecting them to specific procedures and sanctions based on their respective risk category. Accordingly, four distinct categories of risk are defined: AI systems posing Unacceptable Risk, High-Risk AI Systems, Limited-Risk AI Systems, and those presenting Minimal or No Risk.
In the risk-based classification under the Artificial Intelligence Act, the category subject to the strictest obligations and prohibitions is that of AI systems posing an Unacceptable Risk. These are considered to present a level of risk to individuals’ fundamental rights and safety that cannot be justified under any circumstances; as such, their development, deployment, and use are categorically prohibited. This classification encompasses applications such as cognitive behavioral manipulation, indiscriminate extraction of facial images from the internet or CCTV footage, emotion recognition systems deployed in workplaces and educational institutions, social scoring based on personal traits and behavioral patterns, biometric categorization relying on sensitive attributes (e.g. sexual orientation or religious beliefs), and certain predictive policing practices targeting individuals. It should be noted, however, that although biometric identification systems are, in principle, prohibited, narrowly defined exceptions have been established for their use by law enforcement in publicly accessible spaces. These systems may only be deployed subject to prior official authorization and strictly for the prosecution of criminal offences enumerated in a clearly defined and limited list.
Artificial intelligence systems classified as high-risk are those deemed likely to pose a significant threat to society, the environment, or individuals’ health, safety, and fundamental rights. Examples include AI systems used in critical infrastructure (such as transport systems that could endanger lives, or utilities like water, gas, and electricity), in medical devices, in assessing candidates’ access to educational institutions, in recruitment processes, and in domains such as law enforcement, border control, the administration of justice, and democratic processes. Within this framework, it is emphasized that certain AI systems must be classified as high-risk because of their potentially profound implications for democracy, the rule of law, individual freedoms, and fundamental rights such as the right to an effective remedy and the right to a fair trial.
For Limited-Risk AI systems, such as chatbots, the Act imposes a transparency obligation: users must be informed that they are interacting with a machine, so that they are aware of this and can decide how to proceed.
Finally, the Artificial Intelligence Act permits the free use of AI systems that pose Minimal or No Risk to the rights or safety of individuals and society, such as AI-powered recommendation systems or spam filters.
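To make the four-tier framework easier to follow, the short Python sketch below models it as a simple lookup. It is purely illustrative: the tier names follow the categories described above, but the example use cases and the compliance notes attached to each tier are our own simplified assumptions, not the text of the Act and not legal advice.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict conformity obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # free use

# Hypothetical, simplified mapping of example use cases to tiers,
# based on the examples discussed in this article (not legal advice).
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition at work": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "medical device component": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def compliance_note(use_case: str) -> str:
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    notes = {
        RiskTier.UNACCEPTABLE: "Prohibited: development, deployment, and use are banned.",
        RiskTier.HIGH: "Allowed only with risk management, documentation, and oversight.",
        RiskTier.LIMITED: "Allowed with transparency duties (disclose machine interaction).",
        RiskTier.MINIMAL: "Free use; no specific obligations under the Act.",
    }
    return f"{use_case}: {tier.value} risk. {notes[tier]}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(compliance_note(case))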
Groups Expected to Be Affected by the Artificial Intelligence Act
The Artificial Intelligence Act imposes obligations on AI system providers, importers, distributors, and operators, even if they are not established within the EU. The Act also applies to companies based outside the EU that offer AI systems, or their outputs, to users within the Union. As such, all companies trading with the EU or exporting products and services to its member states must ensure compliance with the transparency requirements and the other specific obligations set out in the Act by the time it enters into force. Consequently, companies in Türkiye that trade with or export products and services to EU countries will also be required to comply with the EU Artificial Intelligence Act.
What Are the Penalties for Non-Compliance with the Artificial Intelligence Act?
Penalties for failing to comply with the EU Artificial Intelligence Act are determined as whichever is higher: a fixed amount or a percentage of the company’s worldwide annual turnover from the previous financial year. Specifically, for prohibited AI systems involving Unacceptable Risk, the fine may reach €35 million or 7% of annual turnover; for breaches of other obligations under the AI Act, up to €15 million or 3% of turnover; and for the provision of incorrect or misleading information, up to €7.5 million or 1.5% of turnover. As a result, with the AI Act expected to enter into force in the near future, all companies trading with the EU or exporting products and services to its member states will be required to comply. Otherwise, they will inevitably face both financial and reputational damage.
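As a back-of-the-envelope illustration of how these caps combine, the short Python sketch below computes the theoretical maximum fine for a given violation category and company turnover. It is a simplified reading of the figures quoted above; the actual amount is set by the competent authorities case by case, and different rules can apply, for example, to SMEs.

# Illustrative calculation of the maximum fine caps described above.
# Simplified assumption: the cap is the higher of the fixed amount and
# the percentage of worldwide annual turnover; real fines are decided by
# the competent authorities and may follow different rules (e.g. for SMEs).

FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),   # €35M or 7% of turnover
    "other_obligation":    (15_000_000, 0.03),   # €15M or 3% of turnover
    "misleading_info":     (7_500_000, 0.015),   # €7.5M or 1.5% of turnover
}

def max_fine(category: str, annual_turnover_eur: float) -> float:
    fixed_cap, pct_cap = FINE_CAPS[category]
    return max(fixed_cap, pct_cap * annual_turnover_eur)

if __name__ == "__main__":
    # Example: a company with €2 billion worldwide annual turnover.
    turnover = 2_000_000_000
    for category in FINE_CAPS:
        print(f"{category}: up to €{max_fine(category, turnover):,.0f}")

For a company with €2 billion in worldwide turnover, this yields caps of €140 million, €60 million, and €30 million respectively, since the percentage exceeds the fixed amount in each category.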