What Open Source Developers Need to Know about the EU AI Act
Cailean Osborne | 03 April 2025
Disclaimer: This post is provided for informational purposes only and should not be considered as legal advice. For specific compliance questions, please consult with your own legal counsel.
The EU’s AI Act entered into force on 1 August 2024 as the world's first comprehensive regulation for AI. It introduces risk-based rules, to be implemented in phases until 2 August 2027, that determine what kinds of AI systems and general-purpose AI (GPAI) models can be placed on the EU market and how.
The good news for the open source community is that the AI Act explicitly recognises the value of open source for research, innovation, and economic growth. For this reason, it creates certain exemptions for the providers (i.e. developers) of AI systems, GPAI models, and tools that are released under free and open source licenses.
However, the open source exemptions are not blank cheques. For example, providers of open source GPAI models are exempt from some but not all obligations, and providers of open source GPAI models with systemic risks are not exempt from any of their obligations.
If you are an open source AI developer, understanding whether the obligations and corresponding exemptions under the AI Act apply to you is not only crucial but urgent, as certain prohibitions already apply and the obligations of providers of GPAI models apply on 2 August 2025.
To help you navigate the AI Act, we provide the following guidance and make three calls to action to raise the open source community's readiness for the AI Act. This guidance builds on the helpful Open Source Developers Guide to the EU AI Act by Bruna Trevelin, Lucie Kaffee, and Yacine Jernite over at Hugging Face.
Contents
- The AI Act basics for open source developers
- Obligations of AI system providers and the open source exemptions
- Obligations of GPAI model providers and the open source exemptions
- Additional considerations for open source developers
- Calls to Action for the open source community
The AI Act basics for open source developers
Aims and scope
The AI Act has been in force since 1 August 2024 and will be implemented in phases until 2 August 2027. Spanning 180 recitals, 113 articles, and 13 annexes, it lays down a comprehensive set of rules that will determine what kinds of AI technologies can be placed on the European market and how.
It aims to strike a balance between supporting AI investment, innovation, and adoption in the EU’s single market on the one hand, and ensuring a high level of protection of the health, safety, and fundamental rights of EU citizens on the other. Towards this end, it adopts a risk-based approach, imposing different obligations on different parties depending on the risk category of the AI systems and GPAI models concerned.
In this post, we focus on the providers of AI systems and GPAI models, who are defined in Article 3(3) as:
A natural or legal person, public authority, agency or other body that develops an AI system or a GPAI model or that has an AI system or a GPAI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
It is important to note that the AI Act has extraterritorial reach. As per Article 2(1), it applies to providers that place on the market or put into service AI systems, or place on the market GPAI models, in the EU, irrespective of whether they are established in the EU or a third country. In addition, as per Article 2(1)(c), it applies to providers and deployers of AI systems established in third countries, where the output of their AI system is used in the EU.
Definition of "AI systems"
AI systems are defined as follows:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The European Commission has published guidelines to clarify this definition. As per the risk-based approach, AI systems are classified into four risk categories:
- Unacceptable risk: AI systems that present a clear threat to the safety, livelihoods and rights of individuals in the EU. The AI Act prohibits eight practices, including AI systems used for harmful deception, social scoring, and criminal risk assessments or predictions.
- High risk: AI systems that can present serious risks to the health, safety or fundamental rights of individuals in the EU. Article 6 defines two sub-categories of high-risk AI system. First, as per Article 6(1), AI systems that are intended to be used as a safety component of a product, or that are themselves a product, covered by the Union harmonisation legislation listed in Annex I and required to undergo a third-party conformity assessment. Second, as per Article 6(2), AI systems used in the contexts listed in Annex III, including critical infrastructure, law enforcement, and the administration of justice, among others.
- Transparency risk: AI systems that present transparency risks to individuals in the EU, such as generative AI applications. The AI Act introduces disclosure obligations to ensure individuals are informed when they are interacting with an AI system, and providers must ensure that AI-generated content is both identifiable as such by individuals and marked in a machine-readable format.
- Minimal or no risk: AI systems that present minimal or no risks to individuals in the EU, including AI-enabled video games and spam filters.
Definition of "GPAI models"
GPAI models are defined as follows:
“‘General-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”
A briefing paper for Members of the European Parliament clarifies that GPAI models are synonymous with foundation models. The AI Act classifies GPAI models into two risk categories: GPAI models and GPAI models with systemic risks. The latter are models whose cumulative training compute exceeds 10^25 floating point operations (FLOPs). According to Epoch AI, as of February 2025 there were 25 models globally that surpass this compute threshold, including GPT-4o, Mistral Large 2, and Gemini 1.0 Ultra.
The European Commission acknowledges that “systemic risks” may change over time, and it may therefore update the threshold to ensure that this risk category continues to single out the most advanced models. The Commission has published guidelines to clarify the terminology and obligations of GPAI model providers. Providers of GPAI models may demonstrate compliance with their obligations, which apply on 2 August 2025, through voluntary adherence to the GPAI Code of Practice, which is due to be published on 2 May 2025.
GPAI models with systemic risks as of February 2025 (Source: Epoch AI)
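To get an intuition for the 10^25 FLOPs threshold, a common back-of-the-envelope estimate of training compute from the scaling-law literature is 6 × parameters × training tokens. Note this heuristic is not the Act's official counting method; the sketch below uses it, with hypothetical model sizes, purely for illustration:

```python
# Rough training-compute estimate using the common "6 * N * D" heuristic,
# where N is the parameter count and D is the number of training tokens.
# This is an informal approximation, NOT the AI Act's official methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's compute threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs via 6 * N * D."""
    return 6 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """Check the rough estimate against the systemic-risk threshold."""
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, exceeds threshold: {exceeds_threshold(70e9, 15e12)}")
```

Under this heuristic, a hypothetical 70B-parameter model trained on 15T tokens lands at roughly 6.3 × 10^24 FLOPs, below the threshold, which illustrates why only a few dozen frontier models currently qualify.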
Overview of the open source exemptions
The AI Act recognises the value of open source in promoting research, innovation, and economic growth. For this reason, it creates exemptions from certain obligations, specified in Article 2(12), Article 25(4), Article 53(2), and Article 54(6), and summarised as follows:
- Article 2(12): The AI Act “does not apply to AI systems released under free and open source licenses, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.” Article 5 concerns prohibited AI systems and Article 50 concerns AI systems with transparency risks, such as ones that directly interact with individuals or generate or modify content like text or images.
- Article 25(4): Third party providers, whose “tools, services, processes, or components, other than GPAI models” are used or integrated in high-risk AI systems, are exempt from reporting obligations if they release them under a free and open source licence.
- Article 53(2): Providers of GPAI models that are released under free and open source licenses are exempt from obligations set out in Article 53(1a-b), which concern providing (a) technical model documentation, and (b) information and documentation to AI system providers, who intend to integrate the GPAI model into their AI systems.
- Article 54(6): Providers of GPAI models established in third countries must, by written mandate, appoint an authorised representative established in the EU before placing a GPAI model on the EU market; this obligation does not apply to providers that release GPAI models under a free and open source licence.
It is crucial to understand how free and open source licences are defined, given that eligibility for these exemptions hinges on using an appropriate license. Recital 102 provides the following definition:
“Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy … The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models, under the condition that the original provider of the model is credited, the identical or comparable terms of distribution are respected.”
If you are a developer of an open source AI system or GPAI model, it is vital that you check that your license complies with this definition. To support open source developers, last year the GenAI Commons published the Model Openness Framework, which recommends permissive licenses for the code, data, and documentation components of AI models. More recently, in March 2025, it introduced the 1.0 draft of OpenMDW, a new permissive license designed to cover the weights, datasets, software, tools, and documentation components of AI models in a single license (see slides). The license is expected to be finalised and published soon.
It is also important to note that Recital 103 elaborates that AI components that are provided against a price or otherwise monetised should not benefit from the exemptions provided to free and open-source AI components. It defines "AI components" as "the software and data, including models and GPAI models, tools, services or processes of an AI system" and states that "making AI components available through open repositories should not, in itself, constitute a monetisation." In other words, if you monetise your open source AI components, including GPAI models, then the open source exemptions should not apply to you.
Obligations of AI system providers and the open source exemptions
Providers will have to comply with obligations that depend on the risk category that their AI system falls into. Most obligations fall on providers of high-risk AI systems.
Article 2(12) introduces a narrow open source exemption to these obligations. It states that the regulation does not apply to AI systems that are released under free and open source licenses, unless the AI system falls under Article 5 (i.e. unacceptable risk), it is placed on the market or put into service as a high-risk system (Article 6), or it falls under Article 50 (i.e. transparency risk). In other words, this exemption is limited to AI systems that are classified as posing minimal or no risk.
Let’s take a closer look at the obligations of AI systems providers in the various risk categories and what the open source exemptions mean in practice.
Unacceptable risks
If you develop an AI system that falls into the unacceptable risk category (see Article 5), which includes AI systems used for deception, social scoring, or criminal offence risk predictions, among others, then the AI system is prohibited and the exemption does not apply. The prohibitions applied on 2 February 2025 and the European Commission published guidelines to aid the implementation of these prohibitions.
Takeaway for open source developers: If you develop and release an open source AI system that poses unacceptable risks under the AI Act, then the AI system is prohibited in the EU and the exemption does not apply. Therefore, it is essential that you determine if your AI system falls into this risk category by reviewing Article 5 or the European Commission’s guidelines for prohibited AI practices. The prohibitions applied on 2 February 2025, and the fines for non-compliance are up to €35 million or 7% of global annual turnover in the preceding financial year, whichever is higher.
High risks
If you develop an AI system that falls into the high risk category (see Article 6), then you must undergo processes that are significantly beyond typical practices of open source developers, and the exemption does not apply.
The obligations of providers of high-risk AI systems are defined in Article 16. They include undergoing third party conformity assessments to demonstrate compliance before market entry, including meeting requirements set out in Articles 9 to 15, submitting the high-risk AI system to an EU database, and adding CE marking, among others. The requirements include establishing a risk management system (Article 9), appropriate data governance and management practices (Article 10), and providing technical documentation (Article 11).
As per Article 40(1), high-risk AI systems or GPAI models, which are in conformity with harmonised standards published in the Official Journal of the EU, shall be presumed to be in conformity with the requirements. The Joint Technical Committee 21 (JTC21) of CEN-CENELEC has been mandated to develop the standards, which are expected to be completed ahead of the obligations applying on 2 August 2026.
While the official standardisation request focuses on high-risk AI systems, the European Commission has stated that “the standards requested should be applicable to all high-risk AI systems, including those integrating GPAI models as components. Therefore, standardisers should fully consider state-of-the-art AI techniques and modern AI system architectures when defining requirements for high-risk AI systems.”
The open source exemption does not apply to these obligations, so it is important that you know whether your AI system qualifies as a high-risk system or not. To help providers determine whether their AI system is high-risk, the European Commission shall, no later than 2 February 2026, provide guidelines specifying the practical implementation of the obligations together with a comprehensive list of practical examples of high-risk and non-high-risk AI systems.
Takeaway for open source developers: If you develop and release an open source AI system that poses high risks under the AI Act, then the open source exemption does not apply. Therefore, it is essential that you determine if your AI system falls into this risk category by reviewing Article 6, Annex I, and Annex III. The European Commission will publish guidelines that specify practical examples of high-risk AI systems no later than 2 February 2026. The obligations apply on 2 August 2026, unless the AI system is used as a safety component in a product or is a product covered by Union harmonisation legislation, in which case the obligations apply on 2 August 2027.
Transparency risks
If you develop an AI system that falls into the transparency risk category (see Article 50), then you must comply with certain obligations and the exemption does not apply.
Article 50 introduces obligations for both providers and deployers of AI systems that pose transparency risks. For example, providers of AI systems that directly interact with human beings must inform individuals that they are interacting with an AI system if this is not apparent, and this information must be provided no later than the individual's first interaction with or exposure to the AI system. Furthermore, providers of AI systems that generate content, such as text, images, audio, or videos, must ensure that outputs are labeled in a machine-readable format and identifiable as artificially generated or manipulated. As noted in Hugging Face's AI Act guidance, tools like Gradio can be used to watermark AI-generated content and comply with these obligations.
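In practice, machine-readable labeling would typically rely on an interoperable provenance standard (such as C2PA) or watermarking tooling. As a deliberately simplified, stdlib-only illustration of the idea, and not a compliance-certified method, a provider could attach a JSON provenance record to each generated artifact; all file and field names here are hypothetical:

```python
import json
from pathlib import Path

def label_as_ai_generated(content_path: str, generator: str) -> Path:
    """Write a JSON sidecar marking a file as AI-generated.

    Simplified illustration only: production systems should use an
    interoperable provenance standard (e.g. C2PA) or watermarking.
    """
    sidecar = Path(content_path + ".provenance.json")
    sidecar.write_text(json.dumps({
        "ai_generated": True,    # machine-readable disclosure flag
        "generator": generator,  # which system produced the content
    }, indent=2))
    return sidecar

# Usage: label an output file produced by a (hypothetical) generator
Path("output.txt").write_text("Some generated text.")
marker = label_as_ai_generated("output.txt", generator="example-model-v1")
print(json.loads(marker.read_text())["ai_generated"])  # True
```

The sidecar approach keeps the disclosure separate from the content itself; real deployments embed the provenance in the artifact so it travels with the file.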
Takeaway for open source developers: If you develop and release an open source AI system that poses transparency risks under the AI Act, then the open source exemption does not apply. Therefore, it is essential that you determine if your AI system falls into this risk category by reviewing Article 50. The obligations apply on 2 August 2026.
Minimal or no risks
If you develop an AI system that falls into the minimal or no risk category, such as AI-enabled recommender systems or spam filters, then you do not have obligations under the AI Act and therefore the exemption is not relevant.
However, as per Article 95, providers and deployers of such AI systems may voluntarily adhere to codes of conduct. The AI Office will draw up codes of conduct to foster the voluntary application of some or all of the requirements for high-risk AI systems.
Takeaway for open source developers: If you develop and release an open source AI system that poses minimal or no risks under the AI Act, then the open source exemption is not relevant since there are no obligations. Nonetheless, developers are encouraged to follow codes of conduct for the voluntary application of some or all of the requirements for high-risk AI systems, the same requirements that JTC21 of CEN-CENELEC is developing into harmonised standards.
Obligations of GPAI model providers and the open source exemptions
The AI Act specifies obligations of providers of GPAI models and GPAI models with systemic risks in Article 53 and Article 55, which apply on 2 August 2025.
GPAI models
Providers of GPAI models must comply with the following obligations:
- Article 53(1a): Draw up and keep up-to-date model technical documentation, including training and testing process and the evaluation results.
- Article 53(1b): Draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems, as per criteria in Annex XII.
- Article 53(1c): Put in place a policy to comply with the EU Copyright Directive.
- Article 53(1d): Draw up and make publicly available a sufficiently detailed summary about the training data of the general-purpose AI model, according to a template provided by the AI Office.
The providers of GPAI models released under free and open source licenses are exempt from some but not all GPAI-specific obligations in Article 53; specifically, they are exempt from the obligations prescribed in Article 53(1a) and (1b). However, providers are not exempt from putting in place a policy to comply with the Copyright Directive and publishing a sufficiently detailed summary of the training data.
Takeaway for open source developers: If you develop and release a GPAI model, which does not present systemic risks, under a free and open source license, then you will be exempt from obligations to provide model documentation and information for downstream AI system providers that integrate your GPAI model. But you will still be required to put in place a policy to comply with the EU Copyright Directive and provide sufficiently detailed information about training data. The Code of Practice will provide guidance for complying with these obligations, which apply on 2 August 2025.
GPAI models with systemic risks
Providers of GPAI models with systemic risks must comply with the following obligations and are not exempt from any obligations:
- Article 55(1a): Perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model.
- Article 55(1b): Assess and mitigate possible systemic risks, including their sources, that may stem from the development, the placing on the market, or the use of GPAI models with systemic risk.
- Article 55(1c): Keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and take possible corrective measures.
- Article 55(1d): Ensure adequate level of cybersecurity protection for GPAI model and its physical infrastructure.
Takeaway for open source developers: If you develop and release a GPAI model with systemic risks under a free and open source license, you will not be eligible for the exemption and additional obligations apply. The Code of Practice will provide guidance for complying with these obligations, which apply on 2 August 2025.
GPAI Code of Practice
Providers of GPAI models and GPAI models with systemic risks may demonstrate compliance with these obligations, which apply on 2 August 2025, if they voluntarily adhere to the Code of Practice, which will include a transparency template for model documentation and measures for complying with the EU’s Copyright Directive.
Thanks to input from open source advocates, a positive development for open source developers in the third draft of the Code of Practice, published on 11 March 2025, is that the transparency template for model documentation no longer requires an “acceptable use policy” to be in place (“none exists” is an acceptable answer). Please note that the transparency template, as well as other sections of the Code of Practice, may change following the final round of expert feedback; a follow-up analysis will be warranted once the final version is published on 2 May 2025.
The AI Office is currently developing a template for the sufficiently detailed training data summary that GPAI model providers can use to comply with the obligation in Article 53(1d) (see update provided on 17 January 2025).
Takeaway for open source developers: The Code of Practice will provide guidance for complying with these obligations, which apply on 2 August 2025. The finalised Code of Practice will be published on 2 May 2025.
Additional considerations for open source developers
The AI Act creates additional exemptions for third-party providers of “tools, services, processes, or components, other than GPAI models” that are released under a free and open source licence, and used or integrated in high-risk AI systems. Specifically, Article 25(4) states that they are exempt from the obligation to provide written documentation to downstream providers of high-risk AI systems that use or integrate their tools, services, processes, or components.
Recital 89 also encourages open source developers “to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.” As noted in Hugging Face's AI Act guidance, fortunately many of these practices are already common in the open source community. Various tools, including model cards, dataset cards, and watermarking tools like Gradio or SynthID, can help developers implement these best practices. In addition, open source frameworks like PyTorch, LM Evaluation Harness, lighteval, and Inspect easily enable evaluations of models on various metrics and benchmarks.
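A model card is ultimately structured documentation: machine-readable metadata plus human-readable prose. As a minimal sketch, a card in the style used on model hubs (YAML front matter followed by markdown) can be generated programmatically; the field names follow common model-hub conventions but are illustrative, so check the relevant hub's model card specification for the exact schema:

```python
def render_model_card(name: str, license_id: str, datasets: list[str],
                      intended_use: str) -> str:
    """Render a minimal model card: YAML front matter plus a markdown body.

    Field names mimic common model-hub conventions but are illustrative;
    consult your hub's model card specification for the exact schema.
    """
    front_matter = "\n".join([
        "---",
        f"license: {license_id}",
        "datasets:",
        *[f"  - {d}" for d in datasets],
        "---",
    ])
    body = "\n".join([
        f"# {name}",
        "",
        "## Intended use",
        intended_use,
    ])
    return front_matter + "\n\n" + body

# Hypothetical model and dataset identifiers, for illustration only
card = render_model_card(
    name="example-model",
    license_id="apache-2.0",
    datasets=["example-dataset"],
    intended_use="Research on text classification; not for high-risk uses.",
)
print(card.splitlines()[0])  # ---
```

Keeping the metadata machine-readable is what allows downstream AI system providers, and regulators, to process documentation automatically along the value chain.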
Calls to Action for the Open Source Community
While the AI Act creates exemptions for open source, they are by no means a free pass, and in many cases the exemptions do not apply to providers of open source GPAI models or AI systems. With prohibitions for certain AI systems already in effect and the obligations for GPAI model providers approaching on 2 August 2025, understanding the obligations and open source exemptions under the AI Act is not only a crucial but an urgent priority for the open source community.
This post provides some clarification of the obligations and open source exemptions, but there is still much we must do to prepare the community for this new regulatory environment. Towards this end, we make the following three calls to action.
1. Raise the open source community's AI Act awareness and readiness
There's an immediate need to raise the open source community's awareness of and readiness for the AI Act, with prohibitions already in effect and the obligations of GPAI model providers approaching on 2 August 2025. Open source AI developers may be unaware that these obligations can apply to them and of the steep costs of non-compliance. We should, therefore, conduct research to better understand awareness levels and accordingly raise awareness about whether open source AI developers may be affected and, if so, how they may comply with the AI Act.
2. Develop AI Act guidance for open source developers
Developers that release open source GPAI models or AI systems may lack clarity on how to meet their obligations under the AI Act, the timelines for compliance, or the penalties for non-compliance. Additionally, developers may be uncertain about which licenses are likely to meet the definition of "free and open source licenses". We should, therefore, develop guidance on complying with these obligations and on commonly used licenses that align with the AI Act's definition of free and open source licenses. An immediate priority will be to familiarise ourselves with the Code of Practice when it comes out on 2 May 2025, as well as the AI Office’s training data template when it comes out. Since the Code of Practice, including its transparency template, may change in light of the final round of expert feedback, a follow-up analysis is called for.
3. Develop open source tools for AI safety and model evaluations
Open source tools already provide the means to comply with some of the obligations under the AI Act. For example, frameworks like PyTorch, LM Evaluation Harness, lighteval, and Inspect can be used to evaluate AI models on various metrics and commonly used benchmarks, and tools like SynthID and Gradio can be used to watermark AI-generated content. It is key that we continue to invest in the development and maintenance of such tools. Additionally, as AI capabilities advance, we should invest in maintaining commonly-used benchmarks and developing new ones where appropriate. It is also timely to address benchmark gaps for industry-specific use cases and risk profiles, building on initiatives like MLPerf Automotive, MedPerf, LegalBench, and the Open Financial LLM Leaderboard. We encourage developers to consult the BetterBench checklist of best practices for producing high-quality benchmark datasets.
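Stripped of their task libraries and adapters, evaluation harnesses like those named above automate one core loop: run the model on benchmark items and score its predictions against references. A stdlib-only sketch of that loop, using a trivial stand-in "model" and a made-up four-item benchmark for illustration:

```python
from collections.abc import Callable

def evaluate_accuracy(model: Callable[[str], str],
                      benchmark: list[tuple[str, str]]) -> float:
    """Score a model on (prompt, expected_answer) pairs - the core loop
    that evaluation harnesses automate across many tasks and metrics."""
    correct = sum(1 for prompt, expected in benchmark if model(prompt) == expected)
    return correct / len(benchmark)

# Trivial stand-in "model" for illustration: answers arithmetic prompts
def toy_model(prompt: str) -> str:
    return str(eval(prompt))  # never use eval on untrusted input

# Made-up benchmark; the last item is deliberately unanswerable correctly
benchmark = [("1+1", "2"), ("2*3", "6"), ("10-4", "6"), ("9/3", "5")]
print(evaluate_accuracy(toy_model, benchmark))  # 0.75
```

Real harnesses add what this sketch omits: standardised prompts, many metrics beyond exact-match accuracy, and reproducible task definitions, which is precisely why maintaining them matters for demonstrating state-of-the-art evaluation under Article 55.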
By taking these proactive steps, the open source community can better position itself to navigate the changing regulatory landscape while continuing to make collective progress in and through open source. If you would like to know more or get involved in our efforts, consider joining the GenAI Commons and Linux Foundation Europe.