
Beyond Fines: The Real Impact of the EU AI Act
Justus Schuster
Nov 10, 2025

- Who is allowed to sanction – and how does the procedure work?
- More than just fines: Administrative intervention measures
- How fines are actually calculated
- Special rules for foundation and general-purpose models
- Transition periods affect sanctions
- Interaction with other EU legal frameworks
- Companies outside the EU – extraterritorial effect
- Conclusion
The EU Artificial Intelligence Act (AI Act) entered into force on 1 August 2024 and is considered the world’s first comprehensive law regulating artificial intelligence. Its goal is clear: to enable innovation without endangering safety, transparency, and fundamental rights.
In overview articles we have already described the basic structures of the law: the risk-based model, what makes an AI system high-risk, and recommendations for companies in the DACH region.
But how will the requirements of the AI Act actually be implemented?
- Who exactly controls and sanctions violations?
- According to which criteria is a fine actually determined?
- Which alternative or supplementary measures – such as product bans or public announcements – are available to the authorities?
- And how do the rules interact with other EU regulations, such as the GDPR or the Product Liability Directive?
Who is allowed to sanction – and how does the procedure work?
Each EU Member State appoints one or more competent authorities responsible for supervision and sanctions. These national bodies are coordinated by the newly created European Artificial Intelligence Board (AI Board), which works to ensure that sanctions are imposed consistently across Europe.
In cross-border cases – for example, when a provider distributes AI systems in several EU countries – the Board takes on a mediating role. This creates an enforcement mechanism similar to the “one-stop-shop” principle known from the GDPR, but extended by technical review bodies.
More than just fines: Administrative intervention measures
Unlike many data protection laws, the AI Act is not limited to financial sanctions. It allows immediate market interventions, including:
1. Prohibition of placing or making an AI system available on the market
2. Recall or withdrawal orders for already distributed systems
3. Orders for technical corrections
4. Public announcements of identified violations
In practice, this means a company can effectively lose market access for its AI product before a fine is even legally final. This combination of market intervention and the threat of penalties significantly increases the pressure for early compliance.
How fines are actually calculated
The maximum amounts stated in the regulation – up to €35 million or 7% of worldwide annual turnover for prohibited practices, €15 million or 3% for most other violations, and €7.5 million or 1% for supplying incorrect information – are only the outer limit. When determining a fine, the authorities must take into account several balancing factors (a worked sketch follows the list below), including:
- Severity, duration, and frequency of the violation
- Degree of fault (intent or negligence)
- Measures taken to mitigate damage
- Willingness to cooperate with the authorities
- Size and economic capacity of the company
- Previous violations or systemic misconduct
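To make these limits concrete, here is a minimal Python sketch of how the statutory caps from Article 99 scale with company size. The tier structure and amounts follow the regulation; the function and category names are our own illustration, not official terminology.

```python
# Illustrative sketch of the statutory fine caps in Article 99 AI Act.
# The tiers and amounts reflect the regulation; the names and structure
# are hypothetical scaffolding for this example only.

FINE_TIERS = {
    # violation category: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice":   (35_000_000, 0.07),  # Art. 5 violations
    "other_obligation":      (15_000_000, 0.03),  # e.g. high-risk requirements
    "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
}

def statutory_cap(category: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine; the actual fine is set below
    this cap based on the balancing factors listed above."""
    fixed_cap, turnover_share = FINE_TIERS[category]
    # For most companies the cap is whichever amount is HIGHER;
    # for SMEs the regulation applies the lower of the two instead.
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a provider with EUR 2 billion turnover violating a high-risk duty
print(f"{statutory_cap('other_obligation', 2_000_000_000):,.0f}")  # 60,000,000
```

The point of the example: for large companies, the turnover-based percentage, not the fixed amount, usually determines the ceiling.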
Thus, enforcement of the AI Act resembles competition or data protection law more than classic product liability cases. Important: self-reporting and transparent communication with authorities can significantly reduce fines.
Special rules for foundation and general-purpose models
Large AI models – i.e. General Purpose AI (GPAI) or foundation models – are subject to their own sanctioning regime. Violations of documentation or transparency obligations (e.g., missing information about training data or lack of copyright compliance) can be penalized separately.
Providers of these models face stricter obligations, especially where a model is classified as posing systemic risk: they must perform adversarial testing, report serious incidents, and submit risk documentation to the Commission’s AI Office. Violations of these obligations incur independent fines – for GPAI providers, the Commission can impose up to €15 million or 3% of worldwide annual turnover, whichever is higher – regardless of whether any downstream system is classified as “high-risk”.
Transition periods affect sanctions
The obligations of the AI Act apply gradually. The prohibitions on certain practices have applied since 2 February 2025 and the GPAI obligations since 2 August 2025, while the requirements for high-risk systems only apply in full from 2 August 2026 (or 2 August 2027 for AI embedded in already regulated products).
This is relevant for fine practice: a company will not be sanctioned for an obligation that does not yet apply – but it can be penalized if it misses transition deadlines or cannot demonstrate sufficient preparations.
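The phase-in can be captured in a simple date lookup. The dates below reflect the AI Act’s published application schedule; the labels and the function itself are purely illustrative.

```python
from datetime import date

# Application dates per the AI Act's phased schedule.
# Obligation labels are simplified for illustration.
APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),   # Art. 5 bans
    "gpai_obligations":     date(2025, 8, 2),   # general-purpose AI models
    "high_risk_annex_iii":  date(2026, 8, 2),   # stand-alone high-risk systems
    "high_risk_annex_i":    date(2027, 8, 2),   # AI embedded in regulated products
}

def applicable_obligations(on: date) -> list[str]:
    """Which obligation sets already apply on a given date."""
    return [name for name, start in APPLICATION_DATES.items() if on >= start]

print(applicable_obligations(date(2025, 11, 10)))
# ['prohibited_practices', 'gpai_obligations']
```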
Interaction with other EU legal frameworks
Another often overlooked point: sanctions can be imposed in parallel under several EU regulations.
For example, if an AI system unlawfully processes personal data, fines under the GDPR may also apply.
If it causes physical harm, product liability law may apply.
The interaction of these frameworks makes the risk profile more complex. In practice, supervisory authorities are expected to establish joint investigation procedures and information exchanges, similar to cooperation in data protection and consumer protection.
Companies outside the EU – extraterritorial effect
Particularly relevant for international providers: like the GDPR, the AI Act has extraterritorial effect. AI companies without an establishment in the EU also fall under its scope if their systems are placed on the EU market or their output is used within the EU.
This concerns, for example:
- U.S. or U.K. providers whose AI models are integrated into European cloud services
- Swiss companies that license AI components to EU customers
- Providers from Asia whose software reaches EU end customers via distributors
These companies must appoint an authorised representative in the EU who acts as the point of contact for supervisory authorities and can be held responsible in enforcement and sanction proceedings.
- Fines can also be imposed when the actual company is located outside the EU – they are then directed at the European branch or the representative.
- In cases of persistent non-cooperation, the European Commission can take measures equivalent to a market ban, such as removing AI services from European platforms or app stores.
- For cloud-based AI providers: mere access by European users can be sufficient to trigger the applicability of the AI Act (a simplified scope check follows below).
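As a rough approximation, the territorial-scope test can be expressed as a small decision function. This sketch deliberately simplifies Article 2 – the real test covers more roles (importers, distributors, product manufacturers) and exemptions (e.g., military and research use) – and all predicate names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AIActor:
    # Simplified, hypothetical predicates loosely modelled on Art. 2 AI Act.
    established_in_eu: bool
    places_system_on_eu_market: bool
    output_used_in_eu: bool

def in_scope(actor: AIActor) -> bool:
    """Rough approximation of the territorial-scope test; the real
    provision is considerably more granular."""
    return (
        actor.established_in_eu
        or actor.places_system_on_eu_market
        or actor.output_used_in_eu
    )

# A U.S. provider whose model output is consumed by EU customers:
us_provider = AIActor(established_in_eu=False,
                      places_system_on_eu_market=False,
                      output_used_in_eu=True)
print(in_scope(us_provider))  # True
```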
This extraterritorial structure makes the AI Act a global standard-setter – similar to the GDPR. International companies must therefore prepare not only technically but also organizationally for EU compliance.
Conclusion
The sanctions of the AI Act are about more than sums of money: they require companies to demonstrate technical and organizational responsibility.
Those who establish transparency, documentation, and internal control mechanisms early reduce not only their financial risk but also protect their market position. The AI Act is therefore less a “fine law” and more a compliance framework for responsible AI development.


