The EU AI Act Is Law. Here's What Changes for AI Infrastructure.
---
By Wingston Sharon | August 2024
On August 1, 2024, Regulation (EU) 2024/1689, the EU AI Act, entered into force. It had been published in the Official Journal on July 12. After years of negotiation, the world's first comprehensive AI regulation is now legally binding across all EU member states.
I've been building AI infrastructure in Europe throughout this legislative process. Here's my honest read on what this means for people who actually build and deploy AI systems: not the version written for compliance lawyers, but the one for engineers and founders.
What the AI Act Actually Does
The regulation establishes a risk-tiered framework. Every AI system deployed in the EU gets assigned to one of four risk categories, and your obligations follow from that classification.
Unacceptable risk: prohibited outright. This covers things like social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), and AI that exploits psychological vulnerabilities to manipulate behavior. These bans apply 6 months after entry into force, in February 2025.
High risk: permitted, but with substantial compliance obligations. This is where most of the regulatory weight sits. High-risk systems include AI used in critical infrastructure, education and vocational training, employment and worker management, essential private and public services, law enforcement, migration and border control, and the administration of justice. Annex III of the regulation lists the specific use cases. If you're building AI that feeds into any of these domains, read that annex carefully.
Limited risk: transparency requirements only. Chatbots and systems that interact with humans must disclose they're AI. Deepfakes must be labeled. These provisions apply relatively quickly in the implementation timeline.
Minimal risk: no mandatory requirements. Spam filters, AI in video games, basic recommendation systems. The vast majority of deployed AI falls here.
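As a mental model, the tiered structure boils down to a lookup from use case to obligation level. Here's an illustrative sketch (the categories and examples are paraphrased from the regulation; the actual classification rules live in Article 5 and Annex III, and this is not a compliance tool):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment + ongoing obligations"
    LIMITED = "transparency requirements only"
    MINIMAL = "no mandatory requirements"

# Illustrative examples only; Article 5 and Annex III are authoritative.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening for hiring decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Map a use case to its risk tier and summarize the obligations."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

The point of the sketch: your engineering obligations are a function of the classification, so classification is the first compliance question to settle, not an afterthought.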
What High-Risk Means in Practice
For high-risk AI systems, the Act imposes a set of requirements that are technically demanding and organizationally intensive:
- Risk management systems: documented, ongoing throughout the system lifecycle
- Data governance: training, validation, and testing data must meet quality criteria
- Technical documentation: detailed, before the system is placed on the market
- Record-keeping: automatic logging of events over the system's lifetime
- Transparency to deployers: instructions for use, including limitations
- Human oversight: design must allow humans to monitor, intervene, override
- Accuracy, robustness, cybersecurity: performance metrics that must be maintained
Providers of high-risk AI systems also need to register in the EU database before deployment (for systems listed in Annex III) and undergo conformity assessments. For some categories, this means third-party auditing.
This is not lightweight. If you're building AI infrastructure that ends up classified high-risk, plan for significant compliance overhead.
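To make the record-keeping requirement concrete, here is a minimal sketch of structured event logging for an inference service. The field names are my own assumptions about what "automatic logging of events" might usefully capture, not a schema from the regulation:

```python
import json
import uuid
from datetime import datetime, timezone

def log_inference_event(model_id: str, input_ref: str, output_ref: str,
                        human_reviewed: bool, log_sink: list) -> dict:
    """Append a structured, timestamped record for each inference.

    Storing references (hashes/IDs) rather than raw payloads keeps the
    audit trail useful without duplicating personal data into the logs.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "input_ref": input_ref,
        "output_ref": output_ref,
        "human_reviewed": human_reviewed,
    }
    log_sink.append(json.dumps(event))
    return event

# In production the sink would be append-only, tamper-evident storage,
# not an in-memory list.
sink: list = []
log_inference_event("credit-scorer-v3", "sha256:ab12...", "sha256:cd34...",
                    False, sink)
```

The design choice worth noting: if logging is bolted on after the fact, reconstructing a lifetime audit trail is nearly impossible, which is why this requirement pushes teams toward logging as a first-class part of the serving path.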
The GPAI Layer
General-purpose AI models (the large foundation models that underpin most modern AI applications) get their own section of the regulation (Articles 51-56). This matters if you're offering an API or a model that others build on top of.
GPAI providers face two tiers of obligation:
All GPAI models must provide technical documentation, maintain policies complying with EU copyright law, and publish summaries of training data used.
GPAI models with systemic risk (defined as models trained with more than 10^25 FLOPs) face additional requirements: adversarial testing, incident reporting to the AI Office, cybersecurity measures, and energy efficiency reporting.
The systemic risk threshold is set to catch frontier models like GPT-4 class systems. If you're building on top of those models via API, you're a deployer, not a GPAI provider, and the obligations are different.
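A back-of-envelope way to see where the 10^25 threshold bites: a common heuristic from the scaling-law literature (not from the Act itself) puts dense-transformer training compute at roughly 6 FLOPs per parameter per training token. A sketch under that assumption:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token
    (a standard heuristic for dense transformers, not a regulatory formula)."""
    return 6.0 * params * tokens

# Article 51's presumption threshold for systemic risk.
SYSTEMIC_RISK_THRESHOLD = 1e25

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# Example: a 70B-parameter model trained on 15T tokens comes out at
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just under the threshold.
```

In other words, the threshold is calibrated to catch only the largest current training runs, which is consistent with its stated aim of targeting frontier models.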
The Timeline: What Applies When
This is the part that most summaries get wrong. The AI Act does not apply all at once.
- February 2025 (6 months): Prohibited AI practices banned
- August 2025 (12 months): GPAI provisions apply; AI Office starts work
- August 2026 (24 months): High-risk system requirements fully apply
- August 2027 (36 months): requirements apply to high-risk AI embedded in products already covered by existing EU product-safety legislation (Annex I)
For most AI builders right now, the immediately relevant question is: do you need to start compliance work for high-risk systems? The honest answer is yes, even though the rules don't technically apply until August 2026. Conformity assessments, technical documentation, and risk management systems take time to build. Starting in 2025 is realistic. Starting in 2026 when the deadline hits is not.
Provider vs. Deployer: The Distinction That Matters
The Act draws a hard line between providers (who develop and place AI systems on the market) and deployers (who use AI systems in their own operations). Most obligations sit with providers.
If you're building and selling an AI product, you're a provider. If you're buying an API and integrating it into your own service, you're a deployer, but you still have obligations. Deployers of high-risk systems must conduct fundamental rights impact assessments in some cases, ensure human oversight, monitor system performance, and report serious incidents.
The supply chain implication: if you're building infrastructure that others will use to build high-risk applications, your customers' compliance posture depends partly on the documentation and transparency you provide. This creates real incentives for infrastructure providers to build compliance tooling into their products.
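One form that compliance tooling could take is machine-readable documentation shipped alongside a model, so downstream deployers can reference it in their own conformity work. A sketch, with field names that are entirely my own assumptions rather than any standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDocumentation:
    """Hypothetical documentation payload an infrastructure provider
    might serve next to a model API. Field names are illustrative."""
    model_id: str
    intended_purpose: str
    known_limitations: list = field(default_factory=list)
    training_data_summary: str = ""
    eval_metrics: dict = field(default_factory=dict)

doc = ModelDocumentation(
    model_id="org-discovery-embedder-v2",
    intended_purpose="entity matching for organization discovery",
    known_limitations=["not validated for employment or credit decisions"],
    training_data_summary="public registry and web data; see datasheet",
    eval_metrics={"match_f1": 0.91},
)
payload = asdict(doc)  # serializable, ready to expose via the API
```

Explicitly stating what a model is *not* validated for matters here: it helps a downstream customer argue that their use stays outside the high-risk categories, or flags when it doesn't.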
What I'm Actually Uncertain About
The AI Office, the new EU body responsible for GPAI oversight and enforcement coordination, is still standing up. Member state market surveillance authorities, which handle enforcement for most categories, vary significantly in capacity and approach.
The standards bodies (CEN/CENELEC) are developing harmonized standards for high-risk AI requirements. Until those standards are finalized and referenced in the Official Journal, there's genuine uncertainty about exactly how to demonstrate conformity. The regulation specifies what outcomes you need to achieve; the standards will specify how to demonstrate you've achieved them. We don't have those yet.
There's also ongoing interpretation work around the Annex III categories. What counts as "AI used in employment decisions"? What's the line between general-purpose AI and a specialized system? The AI Office will issue guidance, but that guidance will take time.
What This Means for How We Build
At Agentosaurus, we're building AI infrastructure for organization discovery and ESG verification. None of what we currently do falls into the high-risk categories โ organization OSINT and sustainability data pipelines aren't in Annex III. But the GPAI transparency obligations do affect what we need to document about the models we use, and the general principle that AI systems should be auditable and explainable is one we've been building toward anyway.
More broadly, the AI Act is shifting the calculus on EU data sovereignty. If you're building AI that might eventually touch high-risk domains, doing that on EU-hosted infrastructure with documented data provenance is a significantly easier compliance path than trying to retrofit sovereignty requirements onto a US-cloud-first architecture later.
That's been my working assumption for a while. The AI Act makes it a regulatory reality.
The regulation text is dense, and I've simplified considerably here. If you're making actual compliance decisions, read Recitals 1-150 and talk to a lawyer who specializes in EU technology law. The AI Office is also publishing guidance materials that are worth tracking.
Questions or pushback on any of this? hello@agentosaurus.com.
Build This Infrastructure?
We help AI teams build sovereign GPU clouds and autonomous systems. Free 30-minute consultation. Fixed-price projects from €5K.
Schedule Free Consultation