In July 2024, after more than three years of legislative work, Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (the “AI Act”) was published in the Official Journal of the European Union – the world’s first comprehensive regulation of artificial intelligence (AI), with the potential to significantly shake up not only Europe but the entire tech world. In this article we summarise what the AI Act regulates, who it affects, and what obligations it imposes. We also offer a practical guide on how you can prepare for the AI Act now, an overview of the regulation’s gradual roll-out, and a look at current developments around the regulation in the Czech Republic.
Over the past few years, AI has become an essential part of the modern technological world, and its influence is expanding into ever more fields – from heavy industry to services and public administration. As AI becomes increasingly integrated into the everyday operations of companies and individuals alike, the EU has seen a growing need to regulate it to ensure its safe and ethical use. This step became a reality with the adoption of the AI Act earlier this year.
The AI Act is not just about defining what is permitted or prohibited. It is a wide-ranging regulation that classifies AI systems based on their risk potential and sets out specific obligations for a wide range of entities – from technology start-ups developing advanced AI algorithms to public institutions such as hospitals and authorities using AI systems to improve the efficiency of their processes. The European Commission expects the new regulation to have a far-reaching impact. Although this approach has drawn criticism from various quarters for potentially stifling innovation, let us take a pragmatic look at the key areas and aspects of the regulation.
Regulated technologies
In terms of substantive scope, the AI Act covers two key technologies – AI systems and so-called “general purpose” AI models (GPAI). While AI systems are classified according to their level of risk, GPAI models constitute a separate category with its own system of regulation. The table below provides an overview of both technologies and their classification under the AI Act.
What does the AI Act regulate?

Technology | Description |
---|---|
AI system | Machine-based systems that generate outputs such as predictions, recommendations, decisions or content, which can influence physical or virtual environments. They are divided by level of risk into four categories: prohibited systems (unacceptable risk), high-risk systems, systems subject to specific transparency obligations (limited risk) and minimal-risk systems. |
GPAI | Models that display significant generality, are capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications. The AI Act regulates both basic GPAI models and GPAI models with systemic risk, which may have a significant impact on the market or safety and are subject to more stringent requirements. |
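To make the classification concrete, the risk tiers for AI systems (plus the separate GPAI track) can be thought of as a simple enumeration. The following Python sketch is purely illustrative – the tier names are our own shorthand, not terms of art prescribed by the AI Act:

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Illustrative shorthand for the AI Act's risk-based classification of AI systems."""
    PROHIBITED = auto()    # unacceptable-risk practices, banned outright
    HIGH_RISK = auto()     # strict obligations: conformity assessment, logging, etc.
    LIMITED_RISK = auto()  # transparency obligations (e.g. chatbots, deep fakes)
    MINIMAL_RISK = auto()  # largely outside the Act's specific obligations

class GPAIKind(Enum):
    """GPAI models sit on a separate regulatory track from AI systems."""
    BASIC = auto()         # baseline transparency and documentation duties
    SYSTEMIC_RISK = auto() # additional obligations for models with systemic impact
```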
The role of stakeholders in the AI value chain
The AI Act distinguishes between the various stakeholders in the AI value chain. Providers, deployers, importers and distributors are defined with respect to their specific roles and responsibilities within the AI ecosystem. The table below describes these roles and the entities to whom the AI Act applies.
What entities are regulated by the AI Act?

Entity | Description |
---|---|
Providers | An AI system provider is an entity that: develops an AI system or a GPAI model, or has an AI system or a GPAI model developed; or places those systems on the market or puts them into service under its own name or trademark, whether for payment or free of charge. Providers may also include entities that modify or fine-tune an existing model and place it on the market. |
Deployers | Entities that use AI systems for professional or commercial purposes. |
Importers | Entities established in the EU that place on the market AI systems developed by a non-EU entity and bearing the latter’s name or trademark. |
Distributors | Entities in the supply chain (other than providers or importers) that make AI systems available on the EU market but do not develop or modify the systems themselves. |
Obligations under the AI Act vary significantly depending on the role an entity plays in the value chain. Providers bear the most obligations, especially with regard to high-risk AI systems, including conformity assessment and registration in the EU database. Deployers and distributors carry a lower level of responsibility, but in some cases they can be treated as providers – for example, if they make substantial modifications to a system or place it on the market under their own name.
Thus, the AI Act may in practice affect a wide range of stakeholders, including those who:
- use AI systems in their operations (e.g. financial institutions using sophisticated AI systems for real-time fraud detection and risk analysis);
- adapt AI systems developed by a third party for their own purposes (e.g. marketing companies adapting AI chatbots provided by third parties to communicate with customers); and in particular
- develop and implement their own AI systems (e.g. technology start-ups creating new machine learning systems for autonomous vehicles).
Obligations under the AI Act
The obligations of the various stakeholders in the AI value chain are divided according to their role and the classification of AI systems. The table below shows the most important obligations that individual entities must comply with under the AI Act.
Entity | Obligations | What is the risk of non-compliance? |
---|---|---|
Providers of high-risk AI systems | Conformity assessment and CE marking: Formal conformity assessment before placing on the market. Identification of the provider: Clear identification on the system. Automatic logging: Recording of events and input data. Registration in the EU database: Registration in the central database. Quality management system: Establishing a quality management system. | Fines of up to EUR 15 million or 3% of annual worldwide turnover (whichever is higher); the same maximum applies to the categories of breach listed in this table. |
Deployers of high-risk AI systems | Compliance with guidelines: Use in compliance with the provider’s technical instructions. Human oversight: Adequate oversight by trained personnel. Representative data: Use of relevant input data. Monitoring and records: Regular monitoring and record-keeping. | |
Importers and distributors of AI systems | Conformity assessment: Check that the system has the CE mark and an EU declaration of conformity. Storage and transport: Ensure that conditions do not jeopardise the compliance of the system. Market stop: Refrain from placing on the market or supplying a system that is not compliant. Providing information: Submit documentation and declarations of conformity at the request of the authorities. Cooperation with the authorities: Cooperate to reduce and mitigate risks. | |
Providers of GPAI models | Technical documentation: Maintaining technical documentation for regulatory authorities. Information for integrators: Providing information for safe integration. Copyright protection: Ensuring copyright compliance. Publication of training data: Publishing a summary of the training data used. | |
Providers of GPAI models with systemic risk | Notification to the European Commission: Reporting on systemic risk. Mitigation of risks: Risk assessment and mitigation. Incident reporting: Reporting serious incidents to regulatory authorities. Cybersecurity: Defence against cyberattacks. | |
Providers and deployers of certain AI systems subject to transparency obligations | Informing about AI (providers): Inform users that they are communicating with AI. Labelling of AI-generated content (providers): Label artificially generated content. Labelling of deep fakes (deployers): Label deep-fake content as artificially generated. Disclosure of information (providers/deployers): Clear information on first interaction. | |
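As a worked illustration of the penalty cap in the table above (the turnover figures are our own example numbers, not taken from the Regulation): the applicable maximum is whichever is higher of the fixed amount and the turnover-based percentage.

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum administrative fine for the breaches listed above:
    the higher of EUR 15 million and 3% of annual worldwide turnover."""
    return max(15_000_000.0, 0.03 * annual_worldwide_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 30 million,
# since 3% of turnover exceeds the EUR 15 million floor; for a company
# with EUR 200 million in turnover, the EUR 15 million floor applies.
assert max_fine_eur(1_000_000_000) == 30_000_000
assert max_fine_eur(200_000_000) == 15_000_000
```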
Implementation timeline
The AI Act was published in the Official Journal of the EU on 12 July 2024. It entered into force on 1 August 2024, and its implementation will be gradual. Key milestones include deadlines for the fulfilment of obligations by obliged entities, the issuance of codes of practice, the designation of competent national authorities, the creation of regulatory sandboxes, and more.
The table below summarises the most important dates related to the implementation of the AI Act:
Deadline | Measures | Description |
---|---|---|
2 February 2025 | The prohibitions of certain AI systems apply | The prohibitions include workplace emotion recognition, social scoring, behavioural manipulation and predictive policing, as well as biometric categorisation based on sensitive data and the untargeted scraping of facial images from the internet. |
2 May 2025 | Publication of codes of practice for GPAI by the European Commission | The codes should provide specific technical recommendations for GPAI models that are applicable across the EU. |
2 August 2025 | Designation of national competent authorities and publication of their contacts | Member States must designate the competent authorities that will be responsible for enforcing compliance with the AI Act. |
2 August 2025 | Rules on GPAI models, administrative organisation and penalties apply | The rules include obligations for GPAI model providers and penalties for violations. Fines for GPAI model providers will become enforceable once the Regulation is generally applicable on 2 August 2026. |
2 February 2026 | Publication of guidelines on the practical implementation of Article 6 by the European Commission | The guidelines should clarify how AI systems are to be classified as high-risk, including practical examples of high-risk and non-high-risk use cases. |
2 August 2026 | Establishment of at least one operational national AI regulatory sandbox | Regulatory sandboxes are intended to support the testing of AI systems in a controlled environment and accelerate their deployment to market. |
2 August 2026 | The remaining parts of the AI Act apply, except for Article 6(1) | The provisions cover the obligations of AI system providers, including transparency and security measures. |
2 August 2027 | Article 6(1) of the AI Act becomes applicable | The provision covers obligations in relation to high-risk AI systems that are products, or safety components of products, already subject to third-party conformity assessment under EU sectoral legislation. |
31 December 2030 | AI systems as part of large IT systems must comply with the Regulation | AI systems that are part of large-scale IT systems, such as the Schengen Information System, must comply with the requirements of the AI Act. |
What can you do now?
Although the provisions of the AI Act will become applicable gradually, you should not delay your preparations if you want to avoid regulatory problems and possible penalties. A proactive approach will ensure a smooth transition into the period when the Regulation – or the provisions relevant to you – becomes applicable. Below are some of the measures you can take now to minimise potential regulatory risks.
1. Determine the relevance of the AI Act to your use case
Conducting an inventory of AI systems is crucial to determining the applicability of the Regulation (a simple way to structure such an inventory is sketched after the list below). The process will typically consist of:
- identification of the systems you develop, use or distribute;
- determining whether the systems are internally developed or provided by third parties; and
- specifying their purpose, input and output data, and place of use.
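As a purely illustrative sketch of how such an inventory might be recorded (the field names are our own, not prescribed by the AI Act), each system could be captured as a structured record:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry per AI system, mirroring the three steps above."""
    name: str
    role: str          # do we develop, use or distribute the system?
    origin: str        # "internal" or the name of the third-party supplier
    purpose: str       # intended purpose of the system
    input_data: str    # categories of input data processed
    output_data: str   # predictions, recommendations, decisions, content
    place_of_use: str  # where / in which process the system is used

# Hypothetical example entry:
chatbot = AISystemRecord(
    name="Customer support chatbot",
    role="deployer",
    origin="Third-party vendor",
    purpose="Answering customer queries",
    input_data="Customer messages",
    output_data="Generated text responses",
    place_of_use="Public website",
)
```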
For companies that develop AI systems themselves, the AI Act is naturally more likely to be relevant. However, the risk and applicability will primarily depend on the sophistication and capability of the system. While basic systems such as spam filters or simple AI in video games will generally remain outside the scope of the Regulation, more sophisticated systems entail greater risk, which increases the likelihood that the AI Act – focused mainly on higher-risk systems – will apply. The same logic applies to companies that merely use AI systems, with respect to the systems they deploy.
2. Build an AI management system
If the AI Act imposes obligations on you, we recommend putting together a compliance management team and building a robust system to manage the AI you use or develop. This system should provide the fundamental structure for further concrete steps, such as introducing an internal policy on the use of AI as a starting point for compliance with the new legislative requirements.
3. Consider implementing certification standards
Next, consider certification under ISO/IEC 42001, the first international standard for AI management systems, which specifies requirements for establishing, implementing, maintaining and continually improving an AI management system. It focuses on risk management, data protection, security, transparency and fairness throughout the lifecycle of AI systems, for developers and deployers alike. In addition to ISO/IEC 42001, other standards, such as ISO/IEC 23894 (AI-related risk management), can serve as useful inspiration for ensuring compliance with the AI Act.
4. Update your contracts with suppliers
If you are not a developer and your AI systems are supplied by third parties, seek to incorporate into your supplier contracts obligations concerning the delivered system’s compliance with the requirements of the AI Act, so that compliance responsibility is passed to your suppliers. Also review existing contracts to ensure that even long-standing systems will meet the new legislative requirements. This approach will minimise the risk of having to make changes after a system has been implemented and protect your investment.
5. Ongoing assessment and adaptation of systems
In the medium to long term, be sure to regularly assess which obligations apply to you under the AI Act, whether in the context of a regulatory change or a technical modification to your system. This approach allows you to respond flexibly to change and keep your systems compliant with the AI Act.
What to expect next?
As with other European regulations, the provisions of the AI Act mostly specify desired outcomes rather than specific technical or operational details. This lack of a detailed operational framework means that the current wording of the AI Act may be harder for obliged entities to understand and implement.
However, this shortcoming should be addressed by the forthcoming harmonised standards for high-risk AI systems and the codes of practice for GPAI models, which are expected to provide a clearer technical and operational framework. These documents, which are still being drafted, will be complemented by European Commission guidance on the definition of AI systems, the prohibited categories, and the requirements for high-risk systems.
Unclear situation in the Czech Republic
In the context of the implementation of the AI Act in the Czech Republic,[1] some rather cosmetic changes will need to be made to several laws. Relevant amendments should be made to the Czech National Bank Act, the Personal Data Protection Act and the Electronic Communications Act, which will establish the market surveillance authorities and regulate their powers. For biometric identification issues, the Act on the Police of the Czech Republic will be amended, and very minor modifications will be made to the Labour Code for labour-law issues.
Regarding the regulatory authority, it is not yet clear which body will play the main role in overseeing AI. Ivan Bartoš, former Deputy Prime Minister for Digitalization, proposed that this function be taken over by the Czech Telecommunications Office (CTU), supported by the Czech National Bank (for the financial sector) and the Office for Personal Data Protection (for biometrics and privacy supervision). The reasons for choosing the CTU include its existing experience in overseeing digital services and the potential for synergies in the area of digital transformation.
The implementation will also include the designation of a notifying authority responsible for the certification of high-risk AI systems. This authority will have to assess whether laboratories and testing facilities are sufficiently competent to issue certifications. Certification will take the form of third-party conformity assessment, which will be mandatory for some high-risk AI systems; their exact scope will not be known until the European Commission issues the harmonised standards. The proposal envisages that the CTU should also act as the notifying authority.
However, it is still uncertain which authorities will ultimately be designated. The CTU has resisted taking on this role because of its existing focus on the oversight of services, whereas AI regulation is closer in nature to product regulation, which requires a different approach. Both the National Cyber and Information Security Agency (NUKIB) and the Digital and Information Agency (DIA) have also declined the role, citing capacity constraints and other legislative obligations.
While the Czech Republic is still looking for an optimal solution, some European countries are already taking concrete steps. Spain is setting up a new office with well-paid experts to oversee AI, and Poland is establishing an AI oversight commission, which it sees as a more effective solution than expanding the powers of existing regulators. Countries outside the EU, to which the AI Act does not apply, are in a different position. The UK, for example, has chosen a path of sectoral regulation (rather than blanket regulation like the EU’s), coordinated across the authorities and ministries involved. It will seek to set up regulation that does not hinder the potential for innovation while allowing very flexible changes and adjustments in response to the rapid technological developments in this field.
Conclusion
The AI Act is undoubtedly a very complex and technically demanding piece of legislation that introduces a number of new obligations for a wide range of entities. By its nature, its current version does not answer all key AI-related questions – most notably in the areas of intellectual property law and liability for damage caused by AI systems, which are addressed only marginally or not at all.
Many entities may find this regulation in its current form challenging to interpret and implement. Nonetheless, we are hopeful that this will improve in the coming months as harmonised standards and implementing regulations are progressively released, bringing more specific and tangible rules.
However, notwithstanding the above reservations and the criticism levelled at the Regulation, we trust that the AI Act can serve as a solid foundation for the EU in an area that undoubtedly needs to be regulated. Such regulation should nonetheless remain flexible, continuously reviewed and amended in line with technological developments and the socio-economic needs of European companies and citizens. A cautious approach to its implementation and subsequent revisions can help ensure that the regulation reflects future exponential developments in the field of AI.
If you have questions about the AI Act and its implications for your business – whether you are an AI systems developer or a company implementing AI into your processes – we are here to provide expert assistance. With our expertise and in-depth knowledge of the legal aspects of AI, we can offer end-to-end support, whether in interpreting the regulation, applying it in practice or ensuring compliance with the new obligations.
- [1] – Although the AI Act is a regulation rather than a directive and will therefore have direct effect, some passages will still need to be reflected in Czech law.