What is Article 3(1) of the EU AI Act?
The EU AI Act is a European Union regulation establishing a legal framework for the development, deployment, and use of artificial intelligence (AI) systems. First proposed by the European Commission in 2021 and adopted in 2024 as Regulation (EU) 2024/1689, it seeks to ensure that AI systems are safe, transparent, and respect fundamental rights, while fostering innovation and competitiveness in the EU.
Article 3(1) provides the Act's central definition: that of an "AI system". In the Commission's original 2021 proposal (the version quoted below; the final adopted text later revised this definition), it read:
"‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."
Annex I of the proposal lists the techniques and approaches that qualify as AI, including:
- Machine learning approaches (e.g., supervised, unsupervised, and reinforcement learning)
- Logic- and knowledge-based approaches (e.g., expert systems, knowledge representation)
- Statistical approaches, Bayesian estimation, and search and optimization methods
This definition is critical because it determines the regulation's scope: which systems fall under the AI Act and are therefore subject to its requirements. The Act categorizes AI systems into four risk tiers (unacceptable risk, high risk, limited risk, and minimal risk) and imposes correspondingly stricter obligations on providers and users as the risk level rises.
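The tier-to-obligation structure described above can be sketched as a simple lookup. This is only an illustration: the tier names come from the Act, but the obligation summaries below are simplified paraphrases, not the regulation's legal text, and the function name is hypothetical.

```python
# Hypothetical sketch of the AI Act's four risk tiers. The tier names
# follow the Act; the obligation strings are simplified illustrations,
# not the regulation's actual wording.
RISK_TIERS = {
    "unacceptable": "prohibited practices (e.g., social scoring by public authorities)",
    "high": "conformity assessment, risk management, human oversight, logging",
    "limited": "transparency duties (e.g., disclosing that a user is interacting with AI)",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the illustrative obligation summary for a given risk tier."""
    key = tier.strip().lower()
    if key not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[key]

if __name__ == "__main__":
    for tier in RISK_TIERS:
        print(f"{tier}: {obligations_for(tier)}")
```

The point of the sketch is the ordering of the tiers: obligations scale with risk, from outright prohibition down to purely voluntary measures.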