EU Artificial Intelligence Act – what companies should know about the obligations of providers and deployers
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework regulating artificial intelligence. Its aim is to ensure that AI systems used in Europe are safe, transparent, and under human control.
The Act is based on a risk-based approach and rests on the idea that AI is like any other product: the party that places it on the market or puts it into use is responsible for its safety and performance.
The AI Act is therefore a product safety law for AI. It applies to all AI systems that operate with some level of autonomy and learning and can affect people’s lives, work, or rights.
AI risk levels – four tiers
In the Act, AI systems are divided into four risk levels based on how much they can affect people and society:
- Unacceptable risk – practices banned outright, such as social scoring by public authorities
- High risk – systems that significantly affect people’s safety or rights, such as AI used in recruitment or critical infrastructure, subject to strict requirements
- Limited risk – systems subject to transparency obligations, such as chatbots that must disclose the user is interacting with AI
- Minimal risk – the vast majority of AI applications, such as spam filters, with no specific obligations
Provider and deployer – different roles, different responsibilities
The Act clearly distinguishes two main actors:
| Provider | Deployer |
|---|---|
| An organization that develops an AI system or places it on the market under its own name or trademark. The provider is responsible for ensuring the AI system is safe and compliant before it enters the market. | An organization that deploys an AI system in its own operations and is responsible for using it correctly and responsibly. |
In other words: the provider ensures the AI is a safe product, and the deployer ensures it is used correctly and responsibly.
General Purpose AI (GPAI)
General-purpose AI models, such as those behind ChatGPT or Copilot, can operate across many applications and tasks. Because such models can form the foundation of numerous systems, they can involve systemic risks.
From 2 August 2025 onward, GPAI providers are also subject to specific rules relating to transparency, copyright, and safety.
In July 2025, the European Commission published three tools to support the responsible development of GPAI models:
- Guidelines that clarify the scope of GPAI obligations
- Code of Practice that provides practical guidance on transparency and safety
- A training-data summary template, in which providers give a public overview of the content used to train their models and how it is handled
These tools support safe and innovative AI development and strengthen public trust in AI.
AI literacy and competence obligation
Article 4 of the AI Act requires both providers and deployers to ensure their own and their staff’s AI literacy.
This means a sufficient understanding of how AI works, its risks, and its impacts so that systems can be used knowledgeably and responsibly.
This is about competence, not just training. Without understanding, responsibility cannot be assumed.
What should companies do now?
- Determine your role – are you a provider, a deployer, or both?
- Assess risks – where does AI affect people’s rights and lives?
- Document and monitor – compliance must be demonstrated in writing.
- Train staff – AI literacy is part of the company’s compliance work.
- Follow national implementation – supervisory authorities will publish interpretation guidance and clarifications.
Summary
The EU AI Act is a significant step toward safer, more transparent, and more human-centered AI. It does not block innovation but ensures that AI operates on human terms.
AI may be used, provided it is done responsibly.
Sources
- European Commission: EU Artificial Intelligence Act – Regulatory Framework for AI, Digital Strategy, 2025
- Ministry of Economic Affairs and Employment: EU AI Act: bans on AI practices enter into force on 2.2.2025 (31.1.2025)
- University of Helsinki / HY+: Fundamentals of the AI Act (Anna-Mari Wallenberg, Tuomas Mattila, 2025)
- HY+: Information Brief – AI Literacy (Article 4)
About the author

Markus Aho
Advertising agency entrepreneur, 15+ years
M.Eng., vocational teacher, doctoral researcher
markus.aho@funlus.fi
050 585 6005
www.funlus.fi
https://www.linkedin.com/in/markusaho