A practical breakdown of the Code of Practice for AI providers and deployers.
The EU AI Act is the European Union’s legal framework for regulating artificial intelligence used in the European market. It officially came into force on August 1, 2024, but its obligations phase in over a transition period, with most rules not expected to be fully enforced until August 2026. General-Purpose AI (GPAI) models, however, are expected to comply with AI Act obligations by August 2, 2025. To help GPAI model providers comply with these obligations, the EU AI Office has published the Code of Practice, which details the commitments and measures that GPAI model providers are expected to undertake in order to adopt responsible behaviour before the corresponding obligations become enforceable.
Who is the Code of Practice for?
The Code of Practice is primarily aimed at Providers (i.e., developers) of GPAI models, in particular those with potential for systemic risk (SR) or that exceed certain computational thresholds. There may be very few such providers (5-15, according to the Code of Practice); however, entities or individuals who modify (e.g., fine-tune) the developed models or integrate them into AI Systems may also be affected if the modification leads to ‘significant changes to the model’s generality, capabilities, or systemic risk’ or exceeds certain computational thresholds [EU Commission Clarification 3.2-62]. In that case, the modifiers become Downstream Providers [Article 3-68]. The Code of Practice recommends capability-related considerations (and other mechanisms) as the more reliable way to understand systemic risk potential.
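To make the computational threshold concrete: under Article 51(2) of the AI Act, a GPAI model is presumed to pose systemic risk when the cumulative compute used to train it exceeds 10^25 FLOPs. Below is a minimal Python sketch of the back-of-the-envelope check a provider might run, using the common 6 × parameters × training-tokens approximation for dense transformer training compute. The model sizes are illustrative values, not real measurements.

```python
# Rough systemic-risk compute check, assuming the common heuristic
# training_FLOPs ~= 6 * parameter_count * training_tokens (dense transformers).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act Article 51(2) presumption threshold


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Illustrative (hypothetical) training runs:
print(presumed_systemic_risk(70e9, 15e12))   # 6.3e24 FLOPs -> False, below threshold
print(presumed_systemic_risk(400e9, 15e12))  # 3.6e25 FLOPs -> True, presumed GPAISR
```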
When do model providers sign onto the Code of Practice?
According to the official timeline, GPAI models (with or without systemic risk) that are newly entering the EU market must comply with the EU AI Act from August 2025, while models that are already on the market have until August 2027 to be fully compliant. Model providers simply need to fill out and submit the Signatory Form to the EU AI Office to indicate their intent to adhere to the Code of Practice. Though there is no hard deadline for signing the form, existing providers should sign by August 1, 2025, so that the AI Office knows who intends to adhere to the code before the EU AI regulations for GPAI models start applying on August 2. These dates reflect the gradual enforcement of the EU AI Act.
What does the Code of Practice say?
There are three major sections in the Code of Practice: Transparency, Copyright, and Safety and Security.
The first two sections of the Code of Practice—Transparency and Copyright—are relatively brief and straightforward, and they apply to all GPAI models.
Now let’s dive into Safety and Security, the most complex chapter, which makes up the majority of the Code of Practice. This chapter applies only to GPAI models with systemic risk (GPAISR): essentially, models deemed so highly capable that their capabilities could lead to certain risks, or whose training compute exceeds certain thresholds. Examples of GPAISRs include OpenAI’s GPT-4 and Google DeepMind’s Gemini.
Safety and Security
The Safety and Security section concerns frontier models and AI systems that are deemed to have capabilities or propensities that can lead to systemic risks. It also serves as an excellent set of guidelines and a resource for any provider or deployer that wishes to implement high-quality risk monitoring procedures and be transparent to the public about its model’s risk profile. We group the 10 commitments of the Safety and Security chapter into four major functional categories:
- The Risk Framework that outlines and drives the overall process for one or more models that providers aim to put on the market. GPAISR model providers should develop and maintain a structured framework to identify, assess, and mitigate systemic risks throughout the model lifecycle. [Commitment I]
- Commitments related to Risk Assessment, outlining an IADM loop (Identify, Analyze, Determine, Mitigate+Monitor); a code sketch of this loop follows the list. These steps need to be repeated throughout the model lifecycle, as well as every time the model undergoes an update. Additionally, providers must monitor their models and AI Systems after putting them on the market, to identify new risks and implement new evaluations and appropriate mitigations. [Commitments II-V]
- A set of corresponding Deliverables. These commitments are reports and documentation that model providers should produce and update after performing the IADM loop. [Commitments VII, IX, X]
- Two further commitments covering organizational Accountability and Security. [Commitments VI, VIII]
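As a rough illustration of how the IADM loop could be wired into internal tooling, here is a minimal Python sketch. The class, the method names, the risk examples, and the lifecycle events that trigger re-assessment are all hypothetical; the Code of Practice prescribes the process, not any particular implementation.

```python
from dataclasses import dataclass, field


@dataclass
class RiskRecord:
    risk: str                  # e.g. "cyber-offense uplift" (illustrative)
    severity: float = 0.0      # filled in during analysis
    acceptable: bool = False   # filled in during determination
    mitigations: list = field(default_factory=list)


class IADMLoop:
    """Hypothetical sketch of the Identify-Analyze-Determine-Mitigate+Monitor loop."""

    def identify(self) -> list[RiskRecord]:
        # Enumerate candidate systemic risks (taxonomies, red-teaming, etc.).
        return [RiskRecord("cyber-offense uplift"), RiskRecord("harmful manipulation")]

    def analyze(self, record: RiskRecord) -> None:
        # Run evaluations and estimate severity; the value here is a placeholder.
        record.severity = 0.4

    def determine(self, record: RiskRecord, threshold: float = 0.5) -> None:
        # Compare against the acceptance criteria defined in the provider's framework.
        record.acceptable = record.severity < threshold

    def mitigate_and_monitor(self, record: RiskRecord) -> None:
        if not record.acceptable:
            record.mitigations.append("apply safety mitigations, then re-evaluate")

    def run(self) -> list[RiskRecord]:
        records = self.identify()
        for r in records:
            self.analyze(r)
            self.determine(r)
            self.mitigate_and_monitor(r)
        return records


# Re-run the loop on lifecycle events: pre-release, model updates, post-market signals.
for event in ("pre-release", "model-update", "post-market-incident"):
    reports = IADMLoop().run()  # outputs feed the deliverables (e.g., Model Reports)
```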
Transparency
The Transparency chapter of the Code of Practice advises that GPAI model providers prepare and maintain technical documentation that includes information such as model architecture, training data summaries, intended use cases, training and testing processes and results, compute resources, acceptable use policies, and more. The documentation must be provided to Downstream Providers (i.e., providers of AI systems who integrate the GPAI model into their systems) and to the EU AI Office upon request. GPAI model providers need to consistently make sure their documentation is current, accurate, and secure from tampering.
GPAI model providers are encouraged to use the Model Documentation Form, which is a standardized template provided by the EU AI Office.
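As a loose illustration, a provider might track these fields internally as a structured record before transferring them into the Model Documentation Form. The dataclass below is a hypothetical sketch mirroring the fields named above; the form itself defines the authoritative fields and format.

```python
from dataclasses import dataclass, field


@dataclass
class ModelDocumentation:
    """Hypothetical internal record of Transparency-chapter documentation fields."""
    model_name: str
    architecture: str                  # e.g. "decoder-only transformer"
    training_data_summary: str         # high-level description of data sources
    intended_use_cases: list[str] = field(default_factory=list)
    training_and_testing: str = ""     # processes and key results
    compute_used_flops: float = 0.0    # training compute estimate
    acceptable_use_policy_url: str = ""
    last_reviewed: str = ""            # keep documentation current and accurate


doc = ModelDocumentation(
    model_name="example-model-v1",     # illustrative values only
    architecture="decoder-only transformer",
    training_data_summary="Public web text and licensed corpora (summary).",
    intended_use_cases=["chat assistant", "code completion"],
)
```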
Keep in mind that fine-tuning or other significant modifications to the model may require downstream providers to also provide transparency documentation (limited to those modifications).
Copyright
The Copyright chapter recommends that GPAI model providers prepare and implement an internal copyright policy that ensures lawful data collection and reproduction. This includes identifying and respecting rights reservations, using mechanisms like robots.txt and other machine-readable protocols to detect opt-outs. The copyright policy should also lay out processes for handling infringement claims and overfitting on copyrighted content, as well as designate a point of contact for copyright complaints.
The copyright policy applies both to the upstream (data that is collected) and to the downstream (when the model is released, used, or shared).
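For the robots.txt mechanism mentioned above, Python’s standard library already ships a parser. The sketch below checks whether a given URL may be fetched before adding it to a training corpus; the crawler’s user-agent string is hypothetical, and a real pipeline would also need to honour other machine-readable opt-out protocols.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser


def may_collect(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Check robots.txt before collecting a page for training data."""
    parts = urlparse(url)
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # fetch and parse the site's robots.txt
    return robots.can_fetch(user_agent, url)


# Usage: skip any page whose operator has reserved their rights via robots.txt.
if may_collect("https://example.com/articles/1"):
    pass  # fetch and ingest the page
```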
What’s Next?
The Code of Practice is a key step toward pragmatic, global AI governance, prioritizing practical safety and security over regulatory excess. Although becoming a signatory of the code is optional for most providers, it can provide a useful and concrete basis on which to build best practices for developing trustworthy AI, and it allows the community to start building toward a cohesive ecosystem of shared tools and frameworks. Contact LatticeFlow AI to get started on assessing your AI for risk.