The EU has recently published guidance concerning the AI Act. The guidelines on defining AI systems clarify the practical application of the legal concept established in the AI Act. By providing these guidelines, the European Commission aims to help providers and other relevant stakeholders determine whether a software system qualifies as an AI system, ensuring the effective implementation of the regulations.
These guidelines are not legally binding. They are intended to evolve over time and will be updated as needed, particularly in response to practical experiences, emerging questions, and new use cases. In addition to these guidelines, the Commission has also published the Guidelines on Prohibited AI Practices, as defined by the AI Act.
The AI Act seeks to foster innovation while maintaining high standards of health, safety, and fundamental rights protection. It categorizes AI systems by risk level, distinguishing prohibited systems, high-risk systems, and those subject to transparency obligations. As of Sunday, 2 February 2025, the first provisions of the AI Act have come into effect. These include the AI system definition, AI literacy requirements, and a limited set of prohibited AI use cases deemed to pose unacceptable risks within the EU.
The guidance can be accessed via the following link: