Most companies approaching the EU AI Act start with a simple assumption:
“We didn’t build the AI — so we’re not the provider.”
This assumption is wrong more often than it is right.
And more importantly, it is operationally dangerous.
Because under the AI Act, your role is not determined by what you built, but by what you control, modify, or place on the market.
The Core Distinction: Provider vs Deployer
The regulation introduces a structural distinction:
- Provider
The entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
- Deployer
The entity that uses an AI system under its authority in the course of a professional activity.
At first glance, this seems straightforward.
In practice, it is not.
Because modern AI systems are rarely used “as-is.”
They are:
- integrated into products
- fine-tuned
- configured
- embedded into workflows
- exposed to end-users under a company’s interface
And this is where classification breaks down.
Where Misclassification Happens
Most companies sit in a grey zone between usage and control.
Typical scenarios:
1. Wrapped AI Products
You integrate a third-party model into your SaaS product.
- You define inputs and outputs
- You design the interface
- You control how users interact with the system
You did not build the model.
But you may still be placing an AI system on the market under your name.
→ This can shift you toward provider obligations.
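To make the control point concrete, here is a minimal sketch of such a wrapper. `call_model` is a stand-in for whatever third-party model SDK you actually use; the product name, prompt, and output filter are all illustrative, not a real vendor API.

```python
# Minimal sketch of a "wrapped AI product". call_model stands in for a
# third-party model SDK (hypothetical, not a real vendor API); everything
# around it is controlled by the wrapping company, not the model vendor.

def call_model(prompt: str) -> str:
    """Placeholder for the third-party model call."""
    return "model output for: " + prompt[:40]

def answer_support_ticket(ticket_text: str) -> str:
    # The wrapper defines the only inputs the model ever sees...
    prompt = (
        "You are the support assistant for AcmeSaaS. "
        "Answer only billing and account-access questions.\n\n"
        f"Ticket: {ticket_text}"
    )
    raw = call_model(prompt)
    # ...and constrains the outputs before they reach the end-user.
    if "refund" in raw.lower():
        return "A human agent will review your refund request."
    return raw

print(answer_support_ticket("Why was I charged twice this month?"))
```

Nothing in this flow belongs to the model vendor except the call itself. That asymmetry of control is what pulls the wrapper toward provider territory.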
2. Fine-Tuned or Modified Models
You take an existing model and:
- fine-tune it
- adjust its behavior
- constrain outputs
- connect it to proprietary data
At this point, the system is no longer the one the original provider placed on the market.
→ You may be deemed to have developed the system. Under the Act, substantially modifying a high-risk AI system, or putting your own name or trademark on one, makes you its provider.
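As an illustration, a fine-tuning job might be specified roughly as below. Every name here (base model, data path, constraint keys) is hypothetical rather than a real vendor API; the point is that the resulting artifact is shaped by your data and your constraints.

```python
# Hypothetical fine-tuning job specification. All names (base model,
# data path, constraint keys) are illustrative, not a real vendor API.
fine_tune_job = {
    "base_model": "vendor-base-v2",                     # the vendor's original model
    "training_data": "s3://acme/support-transcripts/",  # your proprietary data
    "output_model": "acme-support-v1",                  # a new, distinct artifact
    "behavior_constraints": {
        "refuse_topics": ["legal advice", "medical advice"],
        "max_output_tokens": 512,
    },
}
# "acme-support-v1" no longer behaves like "vendor-base-v2" -- which is
# precisely why the Act may treat the fine-tuner as its developer.
```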
3. Internal Systems with External Impact
Even purely internal deployments can complicate classification when:
- outputs affect customers
- decisions influence rights or access
- the system becomes part of a service offering
The boundary between deployer and provider becomes functional, not technical.
Why This Distinction Matters
Because obligations differ significantly.
A provider must:
- classify the system's risk level and meet the corresponding requirements
- implement a risk management system
- prepare and maintain technical documentation
- operate post-market monitoring
- complete conformity assessment before placing the system on the market
A deployer must:
- use the system in accordance with the provider's instructions for use
- monitor its operation and performance
- ensure appropriate human oversight
Misclassification leads to:
- missing documentation
- incomplete risk controls
- exposure during procurement
- regulatory vulnerability
And most critically:
→ false confidence
The Procurement Reality
This is not theoretical.
Increasingly, enterprise procurement teams are asking:
- Who is the provider?
- Who carries compliance responsibility?
- Where is the technical documentation?
- How is risk classified?
If your answer is:
“We just use OpenAI / Anthropic / etc.”
You have not answered the question.
You have avoided it.
And that becomes visible immediately in due diligence processes.
The Strategic Insight
The AI Act is not trying to map technical architecture.
It is trying to assign accountability.
So the real question is not:
“Did we build the model?”
But:
“Are we responsible for how this system is presented, controlled, and used?”
If the answer is yes — even partially —
you are likely operating closer to a provider role than you think.
What Companies Should Do Next
Before building documentation, policies, or controls:
- Map all AI systems in use
- Identify how each system is integrated
- Determine who controls outputs and user interaction
- Assess whether systems are placed on the market under your name
- Classify your role — realistically, not optimistically
This is the foundation of any credible AI governance approach.
Without it, everything else is misaligned.
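One way to keep that mapping honest is to record it as structured data rather than prose. Below is an illustrative sketch: the fields and the triage rule are assumptions for organizing the assessment, not legal criteria from the Act, and any real classification belongs with counsel.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    base_model: str
    fine_tuned_or_modified: bool     # did we change the system's behavior?
    controls_user_interaction: bool  # do we define the inputs, outputs, UI?
    under_own_name: bool             # is it offered to the market as ours?

    def provisional_role(self) -> str:
        # Rough triage only -- the Act's actual tests are more nuanced.
        if self.under_own_name or self.fine_tuned_or_modified:
            return "likely provider-side obligations; review with counsel"
        if self.controls_user_interaction:
            return "grey zone; document the control boundary"
        return "likely deployer"

inventory = [
    AISystemRecord("support-bot", "vendor-base-v2", True, True, True),
    AISystemRecord("meeting-summarizer", "vendor-base-v2", False, False, False),
]
for record in inventory:
    print(f"{record.name}: {record.provisional_role()}")
```

A record like this also answers the procurement questions above directly: who the provider is, where control sits, and why the role was classified the way it was.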
Final Word
The biggest early mistake companies make with the AI Act is not non-compliance.
It is misclassification.
Because once you misunderstand your role,
every downstream decision becomes structurally wrong.
And that is much harder to fix later.