Compliance
EU AI Act in 5 Minutes: The Practical Guide for SMEs
EU AI Act: What do you really need to do as an SME? No legal jargon, no panic, just concrete steps.

For: CEOs, Team Leads, Compliance Officers
Goal: You'll know what to do. Today.
Your Quick Check: 3 Questions
Before we start, three questions. Answer honestly.
| Question | Yes | No |
|---|---|---|
| Does your company use AI tools? (ChatGPT, Copilot, chatbots...) | ☐ | ☐ |
| Do you use AI for HR decisions? (Recruiting, performance evaluation) | ☐ | ☐ |
| Do you have AI-generated content on your website? | ☐ | ☐ |
Evaluation:
- All "No"? The AI Act barely affects you. Still, keep reading; this changes quickly.
- At least one "Yes"? Keep reading. You have action items.
What is the EU AI Act?
In one line: The world's first AI regulation. Applies throughout the EU.
| | |
|---|---|
| In force since | August 2024 |
| Applies to | Everyone who uses or offers AI |
| Approach | Risk-based (more risk = more obligations) |
| Model | GDPR for data → AI Act for AI |
No more theory. Now to your obligations.
The Risk Matrix: Where Do You Stand?
The AI Act divides AI into four risk levels. Your obligations depend on this.
| Level | What does this mean? | Examples | Your obligations |
|---|---|---|---|
| 🔴 Prohibited | Not allowed at all | Social scoring, manipulative AI, emotion recognition at workplace | Don't use. Period. |
| 🟠 High | Critical decisions | AI in recruiting, credit assessment, education | Documentation, human oversight, training |
| 🟡 Limited | Interaction with humans | Chatbots, AI-generated content | Labeling requirement |
| 🟢 Minimal | Everyday use | ChatGPT for text, translation, research | No special obligations |
Reality for 90% of SMEs: You're at 🟢 or 🟡.
Category 🟢 Minimal: Business as usual
This applies if you use AI like this:
- ChatGPT, Claude, Perplexity for research and text
- AI translation
- Spam filters
- AI-powered search in tools
Your obligations
No specific AI Act obligations.
Still important
| | |
|---|---|
| Data protection | No personal data in AI tools without review |
| Quality | Review outputs (AI hallucinates) |
| GDPR | Still applies. Unchanged. |
Category 🟡 Limited: Transparency is mandatory
This applies if:
- You have a chatbot on your website
- You publish AI-generated text or images
- Customers interact with AI (without knowing it)
Your obligation: Label it
| Situation | What you must do |
|---|---|
| Chatbot | Notice: "You're speaking with an AI assistant" |
| AI-generated text | Notice: "Created with AI assistance" |
| AI-generated image | Visible marking or in description |
| Synthetic media / Deepfakes | Clearly visible marking |
Implementation: 1 hour of work
- Add chatbot disclaimer (IT ticket, 15 min)
- Create editorial guideline for AI content (template below)
- Inform team
Editorial guideline template:
AI-generated or AI-assisted content is labeled with:
"This content was created with AI assistance."
Placement: At the end of text / In image description
Responsible: [Name/Role]
Done.
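If AI-assisted text goes out through a script or CMS pipeline, the label from the template above can be appended automatically. A minimal Python sketch, assuming you control the publishing step; the function name is illustrative, and the label wording should match your own guideline:

```python
# Disclosure line from the editorial guideline (adapt the wording to yours).
AI_LABEL = "This content was created with AI assistance."

def label_ai_content(text: str, ai_assisted: bool) -> str:
    """Append the disclosure line to AI-assisted content before publishing."""
    if ai_assisted and AI_LABEL not in text:
        return text.rstrip() + "\n\n" + AI_LABEL
    return text
```

Calling it twice is safe: the label is only added once.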
Category 🟠 High: Now it gets serious
This applies if AI is involved in these decisions:
| Area | Examples |
|---|---|
| HR | AI sorts applications, evaluates performance, recommends promotions |
| Finance | AI checks creditworthiness, calculates insurance premiums |
| Education | AI grades exams, decides on admissions |
| Critical infrastructure | AI controls energy, water, traffic |
Your obligations as a user
You don't develop the AI, but you use it. This brings obligations.
| Obligation | Specifically |
|---|---|
| Documentation | Document in writing: Which tool, what for, since when |
| Human oversight | AI recommends, human decides. Always. |
| Information | Let affected persons know AI is involved |
| Training | Whoever operates the tool must understand it |
| Keep logs | Store usage data for at least 6 months |
Example: AI in recruiting
You use a tool that pre-sorts or ranks applications.
Checklist:
- ☐ Tool and purpose documented
- ☐ HR team trained
- ☐ Final decision by humans ensured
- ☐ Applicants informed about AI use (job ad/process)
- ☐ Logs are stored (min. 6 months)
Time required: 1-2 days, one-time. Then routine.
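The "keep logs" obligation from the checklist can be met with something as simple as an append-only file: one record per decision, showing what the AI recommended and what the human decided. A minimal Python sketch; the field names are illustrative, not prescribed by the Act:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(logfile: str, applicant_id: str, tool: str,
                    ai_recommendation: str, human_decision: str,
                    decided_by: str) -> None:
    """Append one audit record per decision. Keep the file at least 6 months."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "tool": tool,
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,  # the human keeps the final say
        "decided_by": decided_by,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One line per decision (JSON Lines) keeps the file append-only and easy to hand over during an inspection.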
Category 🔴 Prohibited: Hands off
These AI applications are banned from February 2025. Only narrow exceptions remain (e.g., emotion recognition for medical or safety reasons).
| Prohibited | Meaning |
|---|---|
| Social scoring | Rating people based on social behavior |
| Manipulative AI | Subliminal influence on behavior |
| Emotion recognition at workplace | AI analyzes employees' emotions |
| Biometric categorization | Classification by race, religion, sexuality |
| Predictive policing (individuals) | Predicting whether someone will commit crimes |
Mostly irrelevant for SMEs. But if a tool vendor promises such things: Don't buy.
The Timeline: What applies when?
| Date | What happens |
|---|---|
| Feb 2025 | Prohibitions and AI literacy obligations take effect |
| Aug 2025 | Rules for providers of general-purpose AI models (GPAI) |
| Aug 2026 | Transparency and high-risk rules fully effective |
| Aug 2027 | Remaining rules (high-risk AI embedded in regulated products) |
Your next deadline: February 2025 - check the prohibitions.
After that: August 2026 - transparency and high-risk rules. Implement labeling now anyway; it takes an hour.
What does ignoring cost?
The AI Act has penalties. Real ones.
| Violation | Maximum penalty |
|---|---|
| Prohibited AI | €35M or 7% annual revenue |
| High-risk violations | €15M or 3% annual revenue |
| False statements | €7.5M or 1% annual revenue |
Reality for SMEs: The maximum amounts target large corporations; for SMEs, the Act caps each fine at the lower of the fixed sum and the revenue share. Proportional fines can still hurt. And reputational damage even more.
Probability in 2025: Low for SMEs. Authorities focus on large providers. But: Complaints can trigger inspections.
Your 7-Day Plan
Concrete. Without consultants. Doable this week.
Day 1-2: Inventory
Create a list of all AI tools in the company.
| Tool | Who uses it? | What for? | Risk level |
|---|---|---|---|
Ask proactively. Many tools have built-in AI without it being obvious.
Day 3: Risk classification
For each tool: Which category?
- 🟢 Minimal → No AI Act obligations
- 🟡 Limited → Labeling required
- 🟠 High → Documentation + Oversight + Training
- 🔴 Prohibited → Stop immediately
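A first pass at this classification can even be scripted: match each tool's purpose description from your Day 1-2 inventory against keywords from the risk matrix. A rough Python sketch; the keyword list is illustrative and a starting point only, not a substitute for reading the Act:

```python
# Keyword → risk level, ordered from most to least severe (first match wins).
RISK_RULES = [
    ("social scoring", "🔴 prohibited"),
    ("emotion recognition", "🔴 prohibited"),
    ("recruiting", "🟠 high"),
    ("credit", "🟠 high"),
    ("grading", "🟠 high"),
    ("chatbot", "🟡 limited"),
    ("generated content", "🟡 limited"),
]

def classify(purpose: str) -> str:
    """Return a first-guess risk level for a tool's purpose description."""
    p = purpose.lower()
    for keyword, level in RISK_RULES:
        if keyword in p:
            return level
    return "🟢 minimal"  # default: review manually before relying on this
```

Treat the output as a draft for your Day 3 review, not a legal verdict.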
Day 4: Implement quick wins
Fulfill transparency obligations:
- ☐ Chatbot notice on website
- ☐ AI labeling for published content
- ☐ Create editorial guideline
Day 5: Check high-risk (if relevant)
If you use AI in recruiting, credit or education:
- ☐ Create documentation
- ☐ Ensure human oversight
- ☐ Inform affected persons
- ☐ Designate responsible person
Day 6: Inform team
Short message to everyone:
Subject: AI use in the company, quick update
Hi team,
We reviewed our AI use and set a few rules:
- [Tool X, Y, Z] are approved for [purpose]
- No sensitive data (customer data, HR data) in AI tools
- AI-generated content will be labeled
Questions? [Contact person]
More details: [Link to internal documentation]
Day 7: Store documentation
Everything in one place:
- Tool list with risk classification
- Editorial guideline
- Training proof (if high-risk)
- Responsibilities
Done. You're compliant.
The 5 Most Common Mistakes
| Mistake | Why problematic | Better |
|---|---|---|
| "Doesn't affect us" | Everyone using ChatGPT is affected | Do inventory |
| No labeling | Every unlabeled AI chatbot or AI text is a violation | Add disclaimer |
| AI decides alone | Prohibited for high-risk | Human has final say |
| No documentation | No proof during inspection | Keep simple list |
| Done once, never updated | New tools, new risks | Review quarterly |
Checklist: AI Act Ready
- ☐ Tool inventory created
- ☐ Risk levels assigned
- ☐ Chatbot disclaimer live
- ☐ AI content labeling defined
- ☐ High-risk processes documented (if applicable)
- ☐ Team informed
- ☐ Contact person designated
- ☐ Documentation stored
7 of 8? You're ahead of 90% of SMEs.
Conclusion: 3 Sentences
- The AI Act applies to you as soon as you use AI.
- For most SMEs, this means: Label + Document.
- One week of work. No lawyer needed. Start today.
Resources
| | |
|---|---|
| AI Act full text (English) | EUR-Lex |
| EU Commission FAQ | ec.europa.eu |
| ENISA AI Security Guide | enisa.europa.eu |
Keep learning
The AI Act is the beginning. AI competence in your team is the next step.
