EU Issues Guidelines for AI Models with Systemic Risks

The European Commission has released guidelines to help providers of AI models with systemic risks comply with the EU AI Act, which will apply to those models from August 2, 2025.

The European Commission has issued guidelines to help providers of AI models identified as having systemic risks comply with the European Union's artificial intelligence regulation, known as the AI Act. The regulation, which became law last year, will apply from August 2, 2025, to AI models with systemic risks and to foundation models developed by companies such as Google, OpenAI, and Meta Platforms. Companies have until August 2, 2026, to fully comply with the legislation.

The guidelines aim to address companies' concerns about the regulatory burden of the AI Act while clarifying compliance requirements. AI models with systemic risks are defined as those with advanced computing capabilities that could significantly affect public health, safety, fundamental rights, or society. Providers of these models will need to conduct model evaluations, assess and mitigate risks, perform adversarial testing, report serious incidents, and ensure adequate cybersecurity protection.

General-purpose AI models will also be subject to transparency requirements, including the creation of technical documentation, the adoption of copyright policies, and the provision of detailed summaries of the content used to train their algorithms. According to the EU tech chief, the guidelines are part of a broader effort to ensure the smooth application of the AI Act.

The guidelines are designed to provide legal certainty to AI providers and clarify the scope of obligations under the AI Act. They are part of a comprehensive package that includes the General-Purpose AI Code of Practice, which was developed with input from independent experts and stakeholders.
