One of the earliest and most consequential questions under the EU AI Act is also one of the most overlooked.
Before risk classification, before role mapping, before timelines, a team needs to answer a simpler question: do we actually have an AI system under the Regulation?
Many organisations assume the answer is obvious. In practice, it rarely is.
The EU AI Act uses a specific definition of an AI system. That definition determines whether the Regulation applies at all. Getting it wrong at this stage leads to unnecessary work for some teams and missed obligations for others.
Article 3(1) of the Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This definition is intentionally broad, but it is not unlimited.
Two elements matter more than the rest: inference and influence.
Inference refers to the system’s ability to derive patterns, predictions, classifications, recommendations, or generated content from data. Influence refers to whether those outputs affect decisions, behaviour, or environments in a meaningful way.
If both elements are present, the system is likely in scope.
This is why many systems that feel simple still qualify as AI systems under the Act.
Examples include resume screening tools, credit scoring models, dynamic pricing engines, fraud detection systems, and recommendation systems that shape user choices. These systems may rely on relatively standard techniques, but they infer from data and influence outcomes.
By contrast, systems that follow fixed, deterministic rules without inference generally fall outside the definition. Simple calculators, static dashboards, or scripts that execute predefined logic without learning or generalising are usually not considered AI systems under the Act.
The difference is not sophistication. It is behaviour.
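That behavioural difference can be made concrete with a small sketch. The function names and the fraud-flagging scenario below are hypothetical, chosen only to contrast the two behaviours: the first function applies a fixed rule that never changes, while the second derives its threshold from past data, so its behaviour shifts whenever the data does.

```python
# Deterministic rule: the same input always produces the same output.
# Nothing is learned or generalised from data, so behaviour never drifts.
def rules_based_fee(amount: float) -> float:
    return amount * 0.02 if amount > 1000 else 5.0

# Inference (sketch): the flagging threshold is derived from labelled
# history, so different data produces different behaviour.
def fit_flagging_threshold(history: list[tuple[float, bool]]) -> float:
    """Return the lowest transaction amount previously marked as fraud."""
    flagged = [amount for amount, was_fraud in history if was_fraud]
    return min(flagged) if flagged else float("inf")

# Two different histories yield two different behaviours:
threshold_a = fit_flagging_threshold([(100.0, False), (5000.0, True)])
threshold_b = fit_flagging_threshold([(100.0, False), (900.0, True)])
```

Here `threshold_a` is 5000.0 and `threshold_b` is 900.0: same code, different data, different behaviour. That data-dependence, not algorithmic sophistication, is what pulls a system toward the definition.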
Another common misconception is that human involvement removes a system from scope.
Human-in-the-loop or human-on-the-loop systems are still AI systems if they perform inference and shape decisions. Human oversight affects obligations and risk controls, but it does not change whether the system qualifies as AI in the first place.
This distinction matters because many teams incorrectly exclude systems simply because a person reviews the output.
Determining whether something counts as an AI system is not a one-time decision. Systems evolve. Features are added. Models are retrained. What began as a rules-based tool may later incorporate inference.
This is why AI system classification should be revisited when systems change, not treated as a historical label.
A practical way to approach this question internally is to avoid debating terminology and instead focus on behaviour.
Ask:
- Does the system generalise or infer from data?
- Does it produce outputs that influence decisions or environments?
- Would its behaviour change meaningfully if the data changed?
If the answers are consistently yes, the system likely falls within scope.
If the answers are consistently no, the system likely does not.
Ambiguity is a signal to look more closely, not to assume exemption.
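The three questions above amount to a simple triage rule, which can be sketched as follows. This is an illustrative internal-screening aid, not legal advice, and the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScopingAnswers:
    """Answers to the three behavioural questions for one system."""
    infers_from_data: bool       # Does it generalise or infer from data?
    influences_outcomes: bool    # Do outputs influence decisions or environments?
    data_sensitive: bool         # Would behaviour change meaningfully with different data?

def triage(a: ScopingAnswers) -> str:
    """Rough first-pass triage: 'likely in scope', 'likely out of scope',
    or 'review' when the answers are mixed."""
    answers = (a.infers_from_data, a.influences_outcomes, a.data_sensitive)
    if all(answers):
        return "likely in scope"
    if not any(answers):
        return "likely out of scope"
    # Mixed answers are a signal to look more closely, not to assume exemption.
    return "review"
```

For example, `triage(ScopingAnswers(True, False, True))` returns `"review"`, reflecting that ambiguity routes a system to closer examination rather than exemption.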
Misclassifying AI systems creates downstream problems. Teams may spend months preparing documentation for systems that are out of scope. Others may overlook obligations entirely because they assumed their tools were too simple to matter.
The EU AI Act does not reward overconfidence or excessive caution. It rewards accurate classification.
Clarity at this stage makes everything that follows more predictable.