EU Artificial Intelligence Act – what companies should know about the obligations of providers and deployers

The European Union’s Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework regulating artificial intelligence. Its aim is to ensure that AI systems used in Europe are safe, transparent, and under human control.

The Act takes a risk-based approach and rests on the idea that AI is like any other product: the party that places it on the market or puts it into service is responsible for its safety and performance.

The AI Act is therefore a product safety law for AI. It applies to AI systems that operate with some level of autonomy, may adapt after deployment, and can affect people’s lives, work, or rights.

AI risk levels – four tiers

In the Act, AI systems are divided into four risk levels based on how much they can affect people and society.

Unacceptable risk

AI systems that threaten people’s safety, livelihood, or fundamental rights are prohibited outright. Prohibited practices include, for example:

harmful AI-based manipulation and deception
using AI to exploit vulnerabilities
social scoring
predicting an individual’s risk of committing a crime based solely on profiling
creating facial recognition databases through large-scale web scraping
emotion recognition at workplaces and in educational institutions
biometric categorization to infer protected attributes
real-time remote biometric identification for law enforcement in public spaces (with narrow, strictly defined exceptions)

High risk

High-risk systems are AI solutions that can significantly affect people’s lives or rights. Examples include:

  • AI safety components in critical infrastructure (e.g., transport)
  • AI applications that affect education, employment, or health
  • AI-based processing of loans, social benefits, or legal decisions
  • biometric identification, emotion recognition, and categorization
  • use of AI in law enforcement or border control

High-risk AI systems must meet strict requirements before they can be placed on the market. Among other things, the Act requires the following (a code sketch of some of these duties follows the list):

  • a risk management system and risk mitigation measures
  • high-quality, non-discriminatory training data
  • logging and traceability of operations
  • documentation and user instructions
  • human oversight and the ability to stop the system
  • high accuracy and cybersecurity
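
The Act does not prescribe how these duties must be implemented in software. As a minimal, hypothetical sketch, the Python snippet below shows what decision logging (traceability), a human-review fallback (human oversight), and an emergency stop might look like; the class, threshold, and model interface are assumptions for illustration, not part of the Act or any compliance library.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit_trail")


class OverseenModel:
    """Hypothetical wrapper adding logging, human review, and a stop switch."""

    def __init__(self, model, confidence_threshold=0.8):
        self.model = model                # assumed to return (label, confidence)
        self.confidence_threshold = confidence_threshold
        self.halted = False               # supports "ability to stop the system"

    def predict(self, case_id, features):
        if self.halted:
            raise RuntimeError("System stopped by human operator.")
        label, confidence = self.model(features)
        # Log every decision with a timestamp so it can be traced later.
        logger.info("case=%s time=%s label=%s confidence=%.2f",
                    case_id, datetime.now(timezone.utc).isoformat(),
                    label, confidence)
        if confidence < self.confidence_threshold:
            # Defer low-confidence cases to a human instead of deciding alone.
            return {"case": case_id, "decision": None,
                    "status": "sent_to_human_review"}
        return {"case": case_id, "decision": label, "status": "automated"}

    def stop(self):
        """Emergency stop invoked by the human overseer."""
        self.halted = True
        logger.warning("System halted by human oversight.")


# Example with a stub model that always returns ("approve", 0.65):
model = OverseenModel(lambda features: ("approve", 0.65))
print(model.predict("case-001", {"income": 3200}))  # low confidence -> review
```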

Limited risk

For limited-risk systems, the key obligation is transparency. For example, a chatbot or generative AI must inform the user that they are interacting with AI. Providers of generative AI must also ensure that AI-generated content is recognizable and, where necessary, labeled as such; this applies in particular to deepfakes.
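
As a minimal illustration of these transparency duties, the sketch below shows one way a service might disclose AI involvement and attach a machine-readable label to generated content. The disclosure wording and metadata fields are assumptions chosen for the example; the Act mandates the outcome, not a specific format.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."


def wrap_generated_content(text, model_name):
    """Package AI output with a user-facing disclosure and provenance metadata."""
    return {
        "content": text,
        "disclosure": AI_DISCLOSURE,        # shown to the user
        "metadata": {
            "ai_generated": True,           # machine-readable label
            "generator": model_name,        # which system produced the content
        },
    }


reply = wrap_generated_content("Here is a summary of your contract...", "demo-llm-1")
print(reply["disclosure"])
print(reply["content"])
```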

Minimal risk

This category covers the majority of current AI applications, such as AI-powered video games and spam filters. The Act imposes no specific obligations on these systems.
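
The four tiers can be summarized in a compact sketch that maps this article’s own examples to risk levels. The mapping below is purely illustrative; actual classification follows the Act’s annexes and official guidance.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"


# Illustrative mapping of examples from this article to the four tiers.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "AI-based loan processing": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```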

Provider and deployer – different roles, different responsibilities

The Act clearly distinguishes two main actors:

Provider

An organization that develops an AI system or places it on the market under its own name or trademark.
The provider’s obligations include:

  • preparing risk management and technical documentation
  • ensuring the quality of training data
  • creating user instructions and warnings
  • ensuring transparency and human oversight
  • post-market monitoring and cooperation with authorities

The provider is responsible for ensuring that the AI system is safe and compliant before it enters the market.

Deployer

An organization that uses an AI system in its own operations. The deployer’s obligations include (see the sketch after the list):

  • following user instructions and monitoring procedures
  • conducting a Fundamental Rights Impact Assessment (FRIA) if the AI directly affects people
  • informing employees and customers about the use of AI
  • ensuring continuous human oversight of AI functions
  • reporting deviations and incidents to authorities and the provider
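
As a minimal sketch of the last point, a deployer might record a deviation in structured form before notifying the provider and the competent authority. The fields below are illustrative assumptions; the Act defines the duty to report, not a data format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentReport:
    """Hypothetical record of a deviation, kept for the provider and authorities."""
    system_name: str
    description: str
    affected_persons: int
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    reported_to_provider: bool = False
    reported_to_authority: bool = False


report = IncidentReport(
    system_name="loan-scoring-v2",
    description="Applications from one postal-code area rejected at an unusual rate.",
    affected_persons=42,
)
print(report)
```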

In other words: the provider ensures the AI is a safe product, and the deployer ensures it is used correctly and responsibly.

General Purpose AI (GPAI)

General-purpose AI models, such as those underlying ChatGPT or Copilot, can operate across many applications and tasks. Because such models can form the foundation of numerous downstream systems, they can also pose systemic risks.

From 2 August 2025 onward, GPAI providers are also subject to specific rules on transparency, copyright, and safety.

In July 2025, the European Commission published three tools to support the responsible development of GPAI models:

  • Guidelines that clarify the scope of GPAI obligations
  • a Code of Practice that provides practical guidance on transparency and safety
  • a Training Data Summary Template with which providers give a public overview of the data used to train their models and how it is handled

These tools support safe and innovative AI development and strengthen public trust in AI.

AI literacy and competence obligation

Article 4 of the AI Act requires both providers and deployers to ensure their own and their staff’s AI literacy.
This means a sufficient understanding of how AI works, its risks, and its impacts so that systems can be used knowledgeably and responsibly.

This is about competence, not just training. Without understanding, responsibility cannot be assumed.

What should companies do now?

  1. Determine your role – are you a provider, a deployer, or both?
  2. Assess risks – where does AI affect people’s rights and lives?
  3. Document and monitor – compliance must be demonstrated in writing.
  4. Train staff – AI literacy is part of the company’s compliance work.
  5. Follow national implementation – interpretation guidance and clarifications are still to come.

Summary

The EU AI Act is a significant step toward safer, more transparent, and more human-centered AI. It does not block innovation but ensures that AI operates on human terms.

AI may be used, provided it is done responsibly.

Sources

  • European Commission: EU Artificial Intelligence Act – Regulatory Framework for AI, Digital Strategy, 2025
  • Ministry of Economic Affairs and Employment: EU AI Act: bans on AI practices enter into force on 2 February 2025 (31 January 2025)
  • University of Helsinki / HY+: Fundamentals of the AI Act (Anna-Mari Wallenberg, Tuomas Mattila, 2025)
  • HY+: Information Brief – AI Literacy (Article 4)

About the author

Markus Aho
Advertising agency entrepreneur, 15+ years
M.Eng., vocational teacher, doctoral researcher
markus.aho@funlus.fi
050 585 6005
www.funlus.fi
https://www.linkedin.com/in/markusaho