EU AI Act Compliance Guide for Non-EU Businesses: Obligations, Risk Classification, and Enforcement
A practitioner-level guide to EU AI Act (Regulation (EU) 2024/1689) compliance for non-EU businesses that use, deploy, or sell AI systems in European markets. Risk classification, provider and deployer obligations, conformity assessments, prohibited practices, and enforcement under the EU AI Office — with actionable guidance for CLOs, CTOs, and Chief Compliance Officers.
Morvantine Legal Editorial Team
20 October 2025
Regulation (EU) 2024/1689 — the EU Artificial Intelligence Act — entered into force on 1 August 2024 and has been phasing into application on a rolling schedule ever since. On 2 February 2025, the prohibition of unacceptable-risk AI practices became enforceable. On 2 August 2025, the framework governing general-purpose AI (GPAI) models and the institutional architecture of the EU AI Office became operative. The obligations applying to high-risk AI systems under Annex III take effect on 2 August 2026. The CLO or Chief Compliance Officer of any organisation that builds, deploys, or sells AI products touching EU markets can no longer treat the AI Act as prospective regulation. It is current law with active enforcement machinery.
The most significant architectural feature of the AI Act — and the one least understood by non-EU businesses — is its extraterritorial reach. Unlike the first generation of EU product regulation, which achieved extraterritorial effect primarily through market access conditions, the AI Act expressly imposes obligations on entities established outside the European Union. This article provides a practitioner-level analysis of those obligations, the risk classification system that determines their scope, the distinct compliance postures required of providers versus deployers, the conformity assessment process, and the enforcement mechanisms now operational under the EU AI Office.
Extraterritorial Reach: Who Is Caught and How
The geographic scope of the AI Act is defined by Article 2(1). Three of its jurisdictional hooks do the heavy lifting for non-EU businesses. The regulation applies to:
(a) providers placing AI systems on the EU market or putting them into service in the EU, regardless of whether those providers are established in the EU (Article 2(1)(a));
(b) deployers of AI systems established or located in the EU (Article 2(1)(b)); and
(c) providers and deployers of AI systems established or located in a third country, where the output produced by the system is used in the EU (Article 2(1)(c)).
The functional consequence for a US technology company offering an AI-powered HR screening platform, a Japanese manufacturer selling a machine-vision quality-control system, or a Canadian fintech providing AI-driven credit decisioning to European lenders is identical: if the output of the AI system is used in the EU, the company is within the regulatory perimeter of the AI Act — regardless of where it is incorporated, where its servers are located, or whether it has any EU establishment.
This architecture is not simply a replication of the GDPR's Article 3(2) "targeting" approach. It is in some respects broader: the AI Act does not require active targeting of EU users. The decisive criterion is output use in the EU. A system built and operated entirely in a third country, sold to an EU-based company, and producing outputs consumed by EU employees or EU-resident customers, engages the extraterritorial provisions even if the non-EU vendor never contemplated the EU as a primary market.
The Article 2 framework also imposes obligations on importers (Article 23) — entities established in the EU that place on the EU market AI systems bearing the name or trademark of an entity established outside the EU — and distributors (Article 24). For non-EU providers, the importer assumes a material share of the compliance obligations that would otherwise rest with the provider, including verifying that the provider has carried out the required conformity assessment, that the AI system bears the CE marking where required, and that the technical documentation required under Article 11 is available. This creates a downstream compliance dependency: non-EU providers who want their EU importers to remain viable distribution partners must deliver compliant systems and documentation.
The AI Act creates a limited exclusion under Article 2(3) for AI systems placed on the market, put into service, or used exclusively for military, defence, or national security purposes. Scientific research and development activities are also carved out under Article 2(6), subject to conditions. These carve-outs are narrow and require affirmative demonstration — they do not apply by default to dual-use AI systems with both civilian and defence applications.
Risk-Based Classification: The Architecture of Obligations
The AI Act's compliance obligations are not uniform. They are calibrated against a four-tier risk classification, supplemented by a parallel regime for general-purpose AI models, that determines which obligations apply, at what level of intensity, and with what enforcement consequences. The classification is determined by the system's intended purpose and the use case in which it is deployed — not by the technology itself or the marketing description.
The Risk Classification Framework
| Risk Tier | Regulatory Treatment | Examples | Primary Applicable Provisions |
|---|---|---|---|
| Unacceptable (Prohibited) | Absolute prohibition — placing on market, putting into service, and use are all prohibited | Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions); social scoring; subliminal manipulation; exploitation of vulnerabilities of specific groups; predictive policing based solely on profiling; biometric categorisation inferring sensitive attributes; emotion recognition in the workplace and education | Article 5; penalties up to €35M or 7% global turnover |
| High-Risk (Annex I — Product Safety) | Full conformity assessment; CE marking; Notified Body involvement mandatory for certain categories | AI as safety components in: medical devices (Regulation (EU) 2017/745); in vitro diagnostic medical devices (Regulation (EU) 2017/746); aviation (Regulation (EU) 2018/1139); machinery (Regulation (EU) 2023/1230); motor vehicles (Regulation (EU) 2018/858) | Articles 8–25; Chapter III; Annex I |
| High-Risk (Annex III — Use Case) | Conformity assessment (primarily internal, some third-party); registration in EU database; post-market monitoring | Biometric identification systems; AI in critical infrastructure management; education/vocational training assessment; employment, HR, and worker management; essential services access (credit, insurance, public benefits); law enforcement; migration/asylum/border control; justice administration; democratic processes | Articles 8–25; Chapter III; Annex III |
| GPAI Models | Technical documentation; copyright compliance; transparency; systemic risk measures for models trained with >10²⁵ FLOPs of compute | Large language models, multimodal foundation models (GPT-series, Gemini, Claude, Llama, Mistral) | Articles 51–56; Chapter V |
| Limited Risk (Transparency) | Disclosure obligations only | Chatbots; emotion recognition systems; biometric categorisation; deep fakes and synthetic media | Article 50 |
| Minimal Risk | No mandatory obligations (voluntary codes of conduct encouraged) | AI-enabled spam filters, AI in video games, basic recommendation systems without material decision impact | Article 95 (voluntary measures) |
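The table's decision logic can be encoded as a first-pass triage over an AI system inventory. The sketch below is a minimal illustration, assuming our own tag vocabulary and matching rules (none of which are terms defined by the Regulation), and it deliberately omits the parallel GPAI regime; every output would still need legal review against the statutory wording.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH_RISK_ANNEX_I = "high-risk (Annex I)"
    HIGH_RISK_ANNEX_III = "high-risk (Annex III)"
    LIMITED = "limited (transparency)"
    MINIMAL = "minimal"

# Illustrative tag sets; a real inventory would map each system's
# documented intended purpose to the statutory wording, not keywords.
PROHIBITED_TAGS = {"social-scoring", "workplace-emotion-recognition",
                   "untargeted-face-scraping"}
ANNEX_III_TAGS = {"hr-screening", "credit-scoring", "exam-proctoring",
                  "remote-biometric-id"}
TRANSPARENCY_TAGS = {"chatbot", "deepfake-generation"}

@dataclass
class AISystem:
    name: str
    intended_purpose_tags: set = field(default_factory=set)
    safety_component_of_regulated_product: bool = False  # Annex I hook

def triage(system: AISystem) -> RiskTier:
    """First-pass tier assignment; counsel reviews every result."""
    if system.intended_purpose_tags & PROHIBITED_TAGS:
        return RiskTier.PROHIBITED
    if system.safety_component_of_regulated_product:
        return RiskTier.HIGH_RISK_ANNEX_I
    if system.intended_purpose_tags & ANNEX_III_TAGS:
        return RiskTier.HIGH_RISK_ANNEX_III
    if system.intended_purpose_tags & TRANSPARENCY_TAGS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```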
Annex III: The Use Cases That Catch Most Non-EU Businesses
The Annex III list is the practical focus for most non-EU technology companies. It covers eight domains, within each of which specific use cases are designated as high-risk:
1. Biometric identification and categorisation — AI systems intended for remote biometric identification of natural persons, except for verification whose sole purpose is to confirm that a person is who they claim to be. This catches one-to-many facial recognition deployments (watchlist matching, identification across camera feeds) but not one-to-one verification used for customer onboarding, identity checks in financial services, or access control.
2. Critical infrastructure — AI systems intended as safety components in the management or operation of critical digital infrastructure, road traffic, and energy, water, gas, and heating systems. Cloud providers offering AI-managed infrastructure services to EU operators of essential services fall within this category.
3. Education and vocational training — AI systems used to determine access or assignment to educational institutions, to evaluate learning outcomes, and to monitor students during examinations. EdTech companies with EU institutional clients must classify their assessment and proctoring AI as high-risk.
4. Employment, workers management, and access to self-employment — AI systems used for recruitment and selection, evaluation of performance and behaviour, promotion decisions, and assignment of tasks based on individual behaviour or personality traits. HR AI platforms — applicant tracking systems with AI scoring, performance management tools, workforce scheduling AI — are directly within scope. This is the category generating the most enforcement interest in 2025–2026.
5. Access to essential private services and benefits — AI systems used to assess the creditworthiness of natural persons, to perform risk assessment and pricing in life and health insurance, to determine eligibility for public assistance benefits, and to classify and dispatch emergency calls. AI credit scoring is the most commercially significant application within this category.
6. Law enforcement — AI used for individual risk assessments, polygraphs, evaluation of evidence reliability, assessment of recidivism risk, and crime profiling. Limited to public authorities in most Member States, but vendors providing such systems to EU law enforcement agencies bear provider obligations.
7. Migration, asylum, and border control management — AI used for processing migration applications, assessing threats to security or health, automated decision-making in visa processing, and surveillance monitoring at EU borders.
8. Administration of justice and democratic processes — AI used to assist courts, to apply the law to facts, and to influence electoral outcomes. AI-powered legal research tools deployed by EU courts or in EU judicial proceedings fall within scope.
Obligations for Providers vs. Deployers: A Structural Distinction
The AI Act creates a fundamental regulatory distinction between providers — entities that develop an AI system and place it on the market or put it into service — and deployers — entities that use an AI system under their own authority for professional purposes. For non-EU businesses, understanding which role applies in each supply chain configuration is essential, because the compliance burden is substantially asymmetric.
Provider Obligations (Articles 8–25)
Providers of high-risk AI systems bear the primary and most demanding obligations under the AI Act. For non-EU providers, these obligations attach by virtue of Article 2(1)(a) and (c) and cannot be contractually transferred to EU-based deployers (though importers share certain obligations under Article 23).
The principal provider obligations are:
Risk management system (Article 9): A continuous, iterative risk management process that identifies reasonably foreseeable risks to health, safety, and fundamental rights; estimates and evaluates those risks; adopts risk mitigation measures; and documents residual risks. This is not a one-time risk assessment — it is an operational system that must remain current throughout the lifecycle of the AI system.
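Article 9 describes a lifecycle process, not a one-off document, so many teams operationalise it as a living risk register that is re-scored on a schedule. A minimal sketch follows, assuming an invented five-point scoring convention and acceptance threshold; neither is prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    affected_interest: str   # "health", "safety", or "fundamental rights"
    severity: int            # illustrative 1-5 scale
    likelihood: int          # illustrative 1-5 scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def residual_score(self) -> int:
        # Crude illustration: each documented mitigation steps the
        # likelihood down one notch, with a floor of 1.
        return self.severity * max(1, self.likelihood - len(self.mitigations))

def review_cycle(register: list[Risk], threshold: int = 6) -> list[Risk]:
    """One iteration of the continuous process: flag risks whose
    residual score still exceeds the (assumed) acceptance threshold."""
    return [r for r in register if r.residual_score > threshold]
```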
Data and data governance (Article 10): High-risk AI systems that use training data must be trained, validated, and tested using data that meets quality criteria relating to relevance, representativeness, freedom from errors, and completeness. The AI Act's data governance requirements create express obligations to examine training data for potential biases. For AI systems trained on datasets that include personal data of EU residents, the intersection with GDPR Article 35 DPIA requirements demands a coordinated compliance framework.
Technical documentation (Article 11 and Annex IV): Providers must prepare and maintain technical documentation before placing a high-risk AI system on the market. Annex IV specifies the mandatory content: system description and intended purpose; elements and architecture; development process and training methodologies; performance metrics and thresholds; known risks; accuracy, robustness, and cybersecurity measures; post-market monitoring plan. For non-EU providers supplying EU importers, this documentation must be available to national market surveillance authorities and the EU AI Office on request.
Automatic logging (Article 12): High-risk AI systems must be designed to automatically generate logs of their operation — at a minimum, recording the period of each use, the input data, and any decision output — to the extent technically feasible. These logs are the primary audit trail for post-deployment monitoring and enforcement investigation.
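Article 12 prescribes content, not a schema, so any concrete log format is a design choice. The sketch below is one plausible shape for a per-use record, assuming hypothetical field names; hashing inputs rather than storing them raw is one way to reconcile the audit trail with GDPR data minimisation.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UseRecord:
    """One Article 12-style log entry. Field names are our own
    illustration; the Act prescribes content, not a schema."""
    system_id: str
    session_start: str        # ISO 8601, UTC
    session_end: str
    input_digest: str         # hash of the input, not the raw data
    output_summary: str       # the decision/score the system produced
    oversight_user: str       # who was responsible for human oversight

def log_use(system_id: str, start: datetime, end: datetime,
            raw_input: bytes, output_summary: str, overseer: str) -> str:
    record = UseRecord(
        system_id=system_id,
        session_start=start.astimezone(timezone.utc).isoformat(),
        session_end=end.astimezone(timezone.utc).isoformat(),
        # Hashing the input preserves an audit trail without retaining
        # personal data in the log itself.
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        output_summary=output_summary,
        oversight_user=overseer,
    )
    return json.dumps(asdict(record))  # ship to append-only storage
```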
Transparency and information to deployers (Article 13): High-risk AI systems must be designed to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. The instructions for use — which must accompany every high-risk AI system — must contain specific information about the system's intended purpose, performance level, foreseeable misuse scenarios, technical measures available to deployers, and the human oversight measures required.
Human oversight (Article 14): High-risk AI systems must be designed to enable effective human oversight by natural persons during the period of use. This is a design requirement with substantive content: the system must be designed to allow the person responsible for oversight to understand the system's capabilities and limitations, detect anomalies and malfunctions, and intervene to override, interrupt, or stop the system. The human oversight requirement for AI Act purposes intersects with, but is not identical to, the GDPR Article 22(3) requirement for human intervention in automated decision-making — both must be satisfied concurrently for AI systems that produce legal or similarly significant effects on natural persons.
Accuracy, robustness, and cybersecurity (Article 15): High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Accuracy metrics and thresholds must be declared in the technical documentation. The Act requires that systems maintain consistent performance across foreseeable variations in input data and be resilient against adversarial attacks.
Quality management system (Article 17): Providers must implement a quality management system that covers the full lifecycle of their high-risk AI systems: development, testing, conformity assessment, post-market monitoring, and the management of non-conformities. The QMS must be proportionate to the size and nature of the provider — but for providers of multiple high-risk AI systems, the Act encourages a unified QMS framework.
Conformity assessment (Article 43): Before a high-risk AI system can be placed on the EU market or put into service, the provider must complete a conformity assessment demonstrating that the system meets the requirements of Articles 8–15. For most Annex III systems, this is an internal conformity assessment based on the provider's own procedures; third-party assessment through a Notified Body is required for remote biometric identification systems (unless harmonised standards have been applied in full) and for AI systems that are safety components of regulated products under Annex I legislation. Upon completion, the provider must draw up an EU Declaration of Conformity under Article 47 and affix the CE marking under Article 48.
Registration in the EU AI database (Article 49): Providers of high-risk AI systems listed in Annex III must register the system in the EU-maintained public AI database before placing it on the market. The database records the system's intended purpose, accuracy and performance metrics, conformity assessment procedure used, and post-market monitoring plan.
Deployer Obligations (Article 26)
Deployers of high-risk AI systems bear a materially lighter obligation set, but it is not negligible. For non-EU businesses that deploy — rather than develop — AI systems for EU-facing professional use, Article 26 defines the compliance floor:
- Use the AI system in accordance with the instructions for use provided by the provider (Article 26(1)).
- Ensure that natural persons responsible for human oversight have the competence, training, and authority to perform oversight effectively (Article 26(2)).
- Monitor the operation of the AI system based on the instructions for use and report serious incidents to the provider and the relevant national competent authority (Article 26(5)).
- Conduct a fundamental rights impact assessment (FRIA) before first deploying a high-risk AI system, where the deployer is a body governed by public law or a private entity providing public services, or where the system is used for creditworthiness assessment or for risk assessment and pricing in life and health insurance (Article 27).
- Inform workers' representatives and affected workers before deploying AI systems in the employment and worker management context (Article 26(7)).
The deployer's obligation to report serious incidents creates an important supply chain dependency: non-EU deployers must maintain the operational capacity to identify, assess, and report incidents involving EU-resident users or EU-facing operations within the timelines specified by the national competent authorities of the Member States where the incident is reported.
When the Provider/Deployer Line Shifts
The AI Act contains a critical provision — Article 25(1) — under which a deployer (or any distributor, importer, or other third party) becomes a provider for regulatory purposes when it:
- places a high-risk AI system on the market under its own name or trademark;
- makes a substantial modification to a high-risk AI system such that it remains high-risk; or
- modifies the intended purpose of an AI system not classified as high-risk such that it becomes high-risk.
For non-EU businesses that white-label AI systems, fine-tune foundation models on proprietary data for specific EU-facing use cases, or integrate AI components into products sold under their own brand in the EU, Article 25(1) is a reclassification trap. The obligation analysis does not follow the corporate structure of the supply chain — it follows the question of who substantively shapes the AI system that reaches the EU market.
Conformity Assessments: Process and Practical Implications
The conformity assessment process is the procedural centrepiece of the AI Act's compliance architecture. Its purpose is to establish, before deployment, that a high-risk AI system meets the substantive requirements of Articles 8–15. The process differs materially depending on whether the AI system falls under Annex I or Annex III.
Annex III systems (most commercial AI): The default conformity assessment procedure is internal — the provider conducts the assessment against their own documented procedures. The legal basis is Article 43(2) and Annex VI. The provider must: review the technical documentation against each requirement of Articles 8–15; document the methodology and outcome of the assessment; draw up the EU Declaration of Conformity listing the applicable requirements and attesting compliance; and retain both the technical documentation and the conformity assessment records for ten years from placement on the market.
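Operationally, the Annex VI internal procedure reduces to an evidence-gathering exercise against each requirement of Articles 8 to 15. The checklist gate below is our own illustration of that exercise, not an official template; the Article keys and the readiness rule are assumptions.

```python
from datetime import date

# Requirements of Chapter III, Section 2, keyed by Article.
# Descriptions paraphrase the Act; evidence pointers are illustrative.
REQUIREMENTS = {
    "Art. 9":  "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation (Annex IV)",
    "Art. 12": "automatic logging",
    "Art. 13": "transparency and instructions for use",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness, cybersecurity",
}

def ready_for_declaration(evidence: dict[str, str | None]) -> bool:
    """True only if every requirement has a documented evidence record.
    'evidence' maps each Article key to a document reference or None."""
    missing = [a for a in REQUIREMENTS if not evidence.get(a)]
    if missing:
        print(f"Cannot issue Declaration of Conformity; missing: {missing}")
        return False
    print(f"All requirements evidenced as of {date.today().isoformat()}; "
          "retain records for 10 years from placing on the market.")
    return True
```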
The exception to the internal assessment default applies to AI systems used for remote biometric identification of natural persons. Under Article 43(1) and Annex VII, these systems are subject to third-party conformity assessment by a notified body unless the provider has applied harmonised standards (or, where applicable, common specifications) in full. Notified bodies are conformity assessment bodies designated by national notifying authorities, typically on the basis of accreditation under Regulation (EC) No 765/2008. The European Commission maintains the NANDO database of notified bodies; as of late 2025, the notified body infrastructure for AI Act purposes is still being populated at national level.
Annex I systems (safety-component AI): These AI systems are subject to the conformity assessment procedures prescribed by the product safety legislation under which the regulated product is certified. For a medical device incorporating an AI diagnostic component, the conformity assessment follows the Medical Device Regulation (Regulation (EU) 2017/745) Annex IX or Annex X procedures. The AI Act requirements are incorporated into the existing sector-specific conformity assessment via the general safety and performance requirements.
Substantial modification: When a provider modifies a high-risk AI system in a way that constitutes a "substantial modification" as defined in Article 3(23) — a change not foreseen in the initial conformity assessment that affects the system's compliance or alters its intended purpose — the conformity assessment must be repeated for the modified system (Article 43(4)). This has material implications for AI systems that are updated through continuous learning, fine-tuning, or retraining: each update cycle must be assessed against the substantial modification threshold, although changes pre-determined by the provider and documented in the initial conformity assessment are expressly excluded.
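For teams that ship continuous model updates, the practical control is a pre-release gate that screens every retrain or fine-tune against the substantial-modification test. The sketch below is a deliberate simplification of the Article 3(23) definition, using criteria we have chosen for illustration; it is not a legal test.

```python
from dataclasses import dataclass

@dataclass
class UpdateReview:
    changes_intended_purpose: bool     # new use case or user population
    degrades_declared_metrics: bool    # performance below declared thresholds
    predetermined_in_initial_ca: bool  # change foreseen in the original
                                       # conformity assessment (Art. 43(4))

def requires_new_conformity_assessment(r: UpdateReview) -> bool:
    """Simplified screen: changes pre-determined by the provider and
    documented at initial assessment are excluded; anything that shifts
    intended purpose or declared performance triggers reassessment."""
    if r.predetermined_in_initial_ca:
        return False
    return r.changes_intended_purpose or r.degrades_declared_metrics
```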
For non-EU providers without EU staff, the practical challenges of managing conformity assessments, maintaining technical documentation in EU-accessible formats, and responding to national authority information requests have led most compliance advisers to recommend appointment of an EU-authorised representative under Article 22. This entity — which must be established in the EU, mandated in writing, and identified in the EU AI database registration — acts as the legal point of contact for national competent authorities and the EU AI Office, and must terminate its mandate if it considers the provider to be acting contrary to its AI Act obligations.
Prohibited Practices: The Red Lines in Effect Since February 2025
Article 5 of the AI Act establishes a list of AI practices that are prohibited outright — not regulated as high-risk with conditions, but banned, subject only to the narrow exceptions written into Article 5 itself. These provisions have been in effect since 2 February 2025, making them the most immediately relevant obligations for any business with live AI deployments that touch the EU market.
The prohibited practices are:
Subliminal manipulation (Article 5(1)(a)): AI systems that deploy subliminal techniques beyond a person's consciousness, or manifestly manipulative or deceptive techniques, in ways that materially distort behaviour in a manner that causes harm. This prohibition extends beyond purely subliminal techniques to encompass AI-driven persuasion architecture that is deceptive or that exploits psychological vulnerabilities.
Exploitation of vulnerabilities (Article 5(1)(b)): AI systems that exploit vulnerabilities of natural persons due to their age, disability, or a specific social or economic situation, in ways that distort their behaviour in a manner that causes or is reasonably likely to cause significant harm. This provision is directly relevant to AI systems deployed in social media, gaming, consumer finance, and digital health that use personalisation and behavioural nudging techniques targeting users in these categories.
Social scoring (Article 5(1)(c)): AI systems used to evaluate or classify natural persons or groups over a period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified or unrelated to the context in which the data was generated. The prohibition is aimed at governmental social scoring systems of the type that have generated concern in international human rights discourse, but the final text applies to public and private actors alike.
Real-time remote biometric identification (RTRBI) in public spaces (Article 5(1)(h)): AI systems used for real-time remote biometric identification — primarily facial recognition — in publicly accessible spaces for the purposes of law enforcement are prohibited, subject to three narrow exceptions: targeted searches for specific victims of abduction, trafficking, or sexual exploitation and for missing persons; preventing specific, substantial, and imminent threats to life or physical safety, or a genuine and foreseeable threat of a terrorist attack; and localising or identifying suspects of the serious criminal offences listed in Annex II. The prohibition is confined to law enforcement purposes: commercial deployments of facial recognition in retail environments, transport hubs, or entertainment venues are not caught by Article 5(1)(h), but they are classified as high-risk under Annex III and face severe constraints under the GDPR's Article 9 regime for biometric data.
Predictive policing based solely on profiling (Article 5(1)(d)): AI systems used to make individual risk assessments for the sole purpose of predicting the likelihood of a natural person committing a criminal offence based solely on profiling or on assessment of personality traits and characteristics.
Untargeted scraping for facial recognition databases (Article 5(1)(e)): AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This provision directly addresses the Clearview AI model of building biometric databases from scraped imagery.
Emotion recognition in the workplace and education (Article 5(1)(f)): AI systems used to infer the emotions of natural persons in the workplace and in educational institutions, except where intended for medical or safety reasons. This prohibition covers AI-powered engagement monitoring tools that infer emotional states from facial expressions, voice patterns, or physiological signals in employment or educational settings.
Biometric categorisation by sensitive attributes (Article 5(1)(g)): AI systems that categorise natural persons individually based on biometric data to infer or deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This prohibition applies to AI systems that use facial analysis or other biometric signals to infer these attributes — including AI systems that deploy these inferences as marketing signals.
Any AI system that falls within any of these prohibitions must be immediately taken off the EU market and withdrawn from EU-facing service, regardless of when it was developed or the terms under which it was supplied to EU customers. Compliance reviews that identify a prohibited practice must result in immediate operational suspension — not a remediation timeline.
EU AI Office Enforcement: The Institutional Architecture
The EU AI Office — established within the European Commission by Commission Decision of 24 January 2024 (C(2024) 390) — is the central enforcement authority for GPAI model obligations and the coordinator of the broader AI Act enforcement ecosystem. Its role in relation to high-risk AI systems is primarily supervisory over national authorities rather than directly enforcement-oriented, but for GPAI model providers, the AI Office is the primary regulator.
Market surveillance authorities (MSAs): For high-risk AI systems, primary enforcement falls to national market surveillance authorities designated by each Member State under Article 74. Member States had until 2 August 2025 to designate their MSAs. Approaches diverge: Germany has indicated the Bundesnetzagentur as its central coordinating MSA alongside sector-specific authorities; Spain established AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) specifically for AI Act enforcement and has had it operational since 2024; in several other Member States, including France and Ireland, the MSA function is distributed across multiple existing sectoral regulators.
Investigation powers (Article 74 et seq.): National MSAs have broad investigative powers: access to AI systems, training data, documentation, and source code (subject to confidentiality safeguards); the power to require testing of AI systems and inspection of underlying datasets; the power to impose corrective measures; and the power to impose administrative fines.
Penalties (Article 99): The AI Act's penalty structure is tiered:
- Infringement of Article 5 prohibitions: up to €35 million or 7% of global annual turnover, whichever is higher.
- Infringement of obligations applicable to providers, importers, distributors, and deployers of high-risk AI systems under Articles 8–15, 25–27, and the registration and documentation requirements: up to €15 million or 3% of global annual turnover, whichever is higher.
- Supply of incorrect, incomplete, or misleading information to notified bodies or competent authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher.
For SMEs and startups, fines are capped at the lower of the absolute or percentage ceiling. For large multinational technology companies, the percentage ceiling will typically govern — and 7% of global annual turnover exceeds the largest GDPR fine ever imposed (approximately €1.2 billion) for any company with annual revenue above roughly €17 billion.
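The "whichever is higher" mechanics, and their inversion to "whichever is lower" for SMEs, are easy to misstate in board papers. A short worked calculation under assumed revenue figures makes the structure and the roughly €17 billion crossover point concrete.

```python
def fine_ceiling(turnover_eur: float, pct: float, absolute_eur: float,
                 is_sme: bool = False) -> float:
    """Article 99 ceiling: higher of the two limbs for large undertakings,
    lower of the two for SMEs and startups (Article 99(6))."""
    pct_limb = turnover_eur * pct
    return min(pct_limb, absolute_eur) if is_sme else max(pct_limb, absolute_eur)

# Article 5 infringement tier: 7% or EUR 35M, assuming EUR 40bn turnover.
print(fine_ceiling(40e9, 0.07, 35e6))        # 2.8e9 -> percentage limb governs
print(fine_ceiling(40e9, 0.07, 35e6, True))  # 35e6  -> SME cap applies

# Crossover with the largest GDPR fine to date (~EUR 1.2bn):
# 7% of turnover exceeds 1.2bn once turnover exceeds 1.2bn / 0.07.
print(1.2e9 / 0.07)                          # ~1.71e10, i.e. ~EUR 17bn
```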
The AI Office has additional enforcement jurisdiction over GPAI model providers under Articles 88–94, including the power to conduct model evaluations and require corrective measures; fines of up to €15 million or 3% of global annual turnover, whichever is higher, may be imposed for non-compliance with GPAI obligations (Article 101).
Practical Takeaways for Corporate Counsel
1. Conduct an immediate AI system inventory against the Article 5 prohibitions — not just the high-risk classification.
The prohibited practices have been enforceable since 2 February 2025. Any AI system within your portfolio that touches EU users must be reviewed against each of the eight prohibitions in Article 5 if it is to remain operational. Particular attention should be paid to: emotion recognition tools used in HR or customer-facing contexts; facial recognition or biometric categorisation systems used in EU retail, hospitality, or public-access venues; and AI systems that use persuasive design techniques targeting vulnerable groups. Engage a cross-functional team (legal, technology, product) to conduct the assessment — the legal analysis depends on understanding what the AI system actually does at a technical level, not what the product description says it does.
2. Map your AI supply chain to determine whether you are a provider, deployer, or both in each EU-facing context.
The regulatory obligation set differs materially between providers and deployers, and it shifts when Article 25(1)'s reclassification triggers apply. For each AI system you develop, supply, or use in EU-facing operations, document: who trained it and owns the technical documentation; who places it on the EU market or puts it into service; whether fine-tuning, integration, or white-labelling by any party in the chain has constituted a substantial modification; and who bears the obligations at each tier. This mapping exercise should be refreshed each time the system is updated or the distribution model changes.
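This mapping is most durable when captured as structured data rather than a memo, so that every system update forces a re-review. The record layout below is purely our own suggestion; the field names, trigger labels, and example values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class SupplyChainEntry:
    """One EU-facing AI system's role analysis; refreshed on every
    update or distribution change. Field names are illustrative."""
    system: str
    developer: str                 # who trained it / owns the Annex IV docs
    placed_on_eu_market_by: str    # provider of record
    our_role: str                  # "provider", "deployer", or "both"
    art_25_triggers: list[str] = field(default_factory=list)
    last_reviewed: str = ""

# Hypothetical example: white-labelling flips the role to provider.
entry = SupplyChainEntry(
    system="resume-scoring-v3",
    developer="UpstreamVendor Inc.",
    placed_on_eu_market_by="UsCo (white-label)",
    our_role="provider",
    art_25_triggers=["own trademark on high-risk system"],
    last_reviewed="2025-10-01",
)
```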
3. Appoint an EU-authorised representative if you are a non-EU provider of high-risk AI systems.
Article 22 requires non-EU providers of high-risk AI systems to appoint an EU-authorised representative before placing their systems on the EU market. The authorised representative must be established in the EU, mandated in writing, and identified in the EU AI database registration alongside the system. The representative serves as the authorities' point of contact for verifying compliance and must keep the required documentation at their disposal. Selection of the appropriate representative — with the operational capacity to respond to national authority inquiries on short timelines and the legal expertise to manage conformity documentation — should be approached with the same rigour as engagement of an EU GDPR representative.
4. Build the GDPR and AI Act compliance intersection into a single integrated governance framework.
High-risk AI systems that process personal data are subject to both the AI Act and the GDPR simultaneously, with overlapping but not identical obligations. The DPIA required under GDPR Article 35 for automated processing likely to result in high risk must be coordinated with the AI Act risk management system under Article 9, the technical documentation under Article 11, and — for deployers — the fundamental rights impact assessment under AI Act Article 27. Running these as separate exercises creates duplication and inconsistency risk. Legal teams should design a unified AI governance protocol that addresses both frameworks, with clearly assigned ownership for each obligation and a shared data flow mapping that serves both the DPIA and the AI Act documentation requirements.
5. Prepare for the 2 August 2026 Annex III enforcement wave with a specific timeline and accountability structure.
The high-risk AI obligations for Annex III use cases take effect on 2 August 2026. For organisations that have not yet completed conformity assessments, compiled technical documentation, registered systems in the EU AI database, and implemented the quality management system required under Article 17, the urgency is immediate. Assign a named project owner for AI Act compliance within the legal or compliance function, establish a timeline with hard deadlines for each deliverable, and budget appropriately for third-party technical documentation review and, where required, Notified Body assessment. The cost of a compliant conformity assessment programme is substantially lower than the cost of a regulatory investigation — and MSAs across the EU have signalled active enforcement intentions for the Annex III wave.
Conclusion
The EU AI Act imposes a comprehensive compliance architecture on any organisation — wherever incorporated — that participates in the development, supply, or deployment of AI systems whose outputs reach the EU market. For non-EU businesses, the extraterritorial provisions of Article 2 eliminate the option of regulatory distance. The prohibited practices of Article 5 are already in force, the GPAI obligations of Articles 51–56 have been operative since August 2025, and the full high-risk AI system obligations under Annex III become enforceable on 2 August 2026.
The compliance pathway is technically demanding but structurally clear: classify each AI system against the risk tiers, determine the provider/deployer role within each distribution chain, complete the required conformity assessments before deployment, maintain technical documentation throughout the system lifecycle, and implement the governance structures — risk management, quality management, post-market monitoring — that demonstrate ongoing compliance rather than point-in-time certification.
The EU AI Office and national market surveillance authorities have the investigative tools, the legal basis, and the stated institutional commitment to enforce these obligations against non-EU entities. The fine structure — calibrated as a percentage of global annual turnover — ensures that scale does not confer regulatory immunity. For the CLO or Chief Compliance Officer advising international boards, the AI Act is not a future project. It is today's compliance obligation.
Legal Disclaimer: This article is provided for informational and educational purposes only. It does not constitute legal advice and does not create an attorney-client relationship. The analysis reflects the text of Regulation (EU) 2024/1689, implementing measures, and publicly available guidance as of the date of publication. The AI Act's application to specific facts and systems requires qualified legal counsel with expertise in EU AI regulation. Readers should not act on the basis of this article without seeking independent professional advice tailored to their specific circumstances.