


AI Act
The EU AI Act is the world’s first comprehensive, binding framework regulating artificial intelligence. Adopted by the European Union, it establishes risk-based rules for how AI systems are developed, deployed, and governed across the EU.
Its core goal: protect fundamental rights, safety, and democracy while enabling trustworthy AI innovation.
1️⃣ Scope & Who Must Comply
The EU AI Act applies to:
- AI providers (developers of AI systems)
- Deployers/users (companies using AI in products or operations)
- Non-EU companies, if their AI affects people inside the EU
If your AI:
- Is sold in the EU, or
- Produces outputs used in the EU

then you are in scope, even if you're US-based.
2️⃣ Risk-Based Classification System
The Act classifies AI into four risk tiers, each with different obligations:
Unacceptable Risk (Banned)
AI practices that threaten fundamental rights, such as:
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Manipulative or exploitative AI targeting vulnerable groups
Status: Prohibited outright.
High-Risk AI
AI used in regulated or sensitive areas, including:
- Hiring, recruitment, and employee monitoring
- Creditworthiness and lending
- Education admissions and exams
- Biometric identification
- Medical devices and healthcare
- Law enforcement and border control
Key obligations:
- Risk management system
- High-quality training data
- Bias and accuracy controls
- Technical documentation
- Human oversight
- Logging & record-keeping
- Post-market monitoring
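The logging and record-keeping duty is the most directly implementable item on this list. As a rough illustration only (the field names, function name, and JSONL file format are our assumptions, not anything prescribed by the Act), an append-only decision log for a high-risk system might look like:

```python
import datetime
import json
import uuid

def log_decision(system_id: str, model_version: str,
                 inputs: dict, output, human_reviewer=None) -> dict:
    """Append one audit record for an automated decision.

    Illustrative sketch only: field names and the JSONL format
    are assumptions, not requirements from the EU AI Act.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # Recording who reviewed the output supports the human-oversight duty.
        "human_reviewer": human_reviewer,
    }
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, timestamped format like this is one way to make post-market monitoring and incident reconstruction straightforward.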
Limited Risk AI
Systems that interact with humans or generate content:
- Chatbots
- AI-generated images, video, or text (deepfakes)
Requirement: transparency. Users must know they are interacting with AI or viewing AI-generated content.
Minimal Risk AI
Common AI uses such as:
- Spam filters
- Game AI
- Photo enhancement
- Recommendation engines
Status: No new obligations (voluntary codes encouraged).
3️⃣ Obligations by Role
AI Providers
- Classify AI systems
- Maintain technical documentation
- Conduct conformity assessments (for high-risk AI)
- Implement risk & quality management systems
AI Deployers
- Use AI according to instructions
- Ensure human oversight
- Monitor performance and incidents
- Keep usage logs (for high-risk AI)
4️⃣ Foundation Models / GPAI (General-Purpose AI)
The Act introduces special rules for general-purpose AI (GPAI):
- Transparency on training compute and data
- Copyright safeguards
- Extra risk mitigation if models pose systemic risk
This directly affects:
- Large language models
- Multimodal foundation models
- Open vs closed model providers
5️⃣ Enforcement & Penalties
Fines are severe and tiered; each cap is the higher of a fixed amount or a share of worldwide annual turnover:

| Violation Type | Maximum Penalty |
| --- | --- |
| Prohibited AI use | €35m or 7% of global turnover |
| High-risk non-compliance | €15m or 3% of global turnover |
| Supplying incorrect information | €7.5m or 1% of global turnover |
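Because each cap is the higher of the two figures (different rules apply to SMEs), the effective maximum scales with company size. A minimal sketch of that arithmetic, where the tier keys are our own labels rather than terms from the Act:

```python
def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Maximum fine cap under the Act's tiered penalty structure:
    the higher of a fixed amount or a share of worldwide annual
    turnover. Illustrative sketch only; not legal advice."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_noncompliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, share = tiers[violation]
    return max(fixed, share * global_turnover_eur)

# For a company with €1bn global turnover, the 7% share (€70m)
# exceeds the €35m fixed amount, so the cap is €70m.
print(max_fine_eur("prohibited_practice", 1_000_000_000))
```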
6️⃣ Timeline (Simplified)
- 2024: Act enters into force; phased preparatory obligations begin
- 2025: Bans on prohibited practices and GPAI rules start to apply
- 2026+: Most high-risk obligations and full enforcement follow
7️⃣ Why This Matters for Companies
If you build or use AI:
- AI governance becomes mandatory
- Compliance requires documentation, processes, and evidence
- Investors, customers, and regulators will expect AI risk controls
- EU AI Act compliance is likely to become a global standard, similar to GDPR
Bottom Line
The EU AI Act transforms AI from a “move fast” technology into a regulated system with auditability, accountability, and legal risk. Companies that prepare early gain:
- Faster enterprise sales
- Lower regulatory risk
- Stronger trust positioning