EU AI Act in 5 Minutes: The Practical Guide for SMEs

EU AI Act: What do you really need to do as an SME? No legalese, no panic, just concrete steps.


For: CEOs, Team Leads, Compliance Officers
Goal: You'll know what to do. Today.

Your Quick Check: 3 Questions

Before we start, three questions. Answer honestly.

| Question | Yes | No |
|---|---|---|
| Does your company use AI tools? (ChatGPT, Copilot, chatbots...) | ☐ | ☐ |
| Do you use AI for HR decisions? (Recruiting, performance evaluation) | ☐ | ☐ |
| Do you have AI-generated content on your website? | ☐ | ☐ |

Evaluation:

  • All "No"? The AI Act barely affects you. Still, keep reading; this changes quickly.
  • At least 1x "Yes"? Keep reading. You have action items.

What is the EU AI Act?

In one line: The world's first comprehensive AI law. Applies throughout the EU.

In force since: August 2024
Applies to: Everyone who uses or offers AI
Approach: Risk-based (more risk = more obligations)
Model: GDPR for data → AI Act for AI

No more theory. Now to your obligations.

The Risk Matrix: Where Do You Stand?

The AI Act divides AI into four risk levels. Your obligations depend on this.

| Level | What does this mean? | Examples | Your obligations |
|---|---|---|---|
| 🔴 Prohibited | Not allowed at all | Social scoring, manipulative AI, emotion recognition in the workplace | Don't use. Period. |
| 🟠 High | Critical decisions | AI in recruiting, credit assessment, education | Documentation, human oversight, training |
| 🟡 Limited | Interaction with humans | Chatbots, AI-generated content | Labeling requirement |
| 🟢 Minimal | Everyday use | ChatGPT for text, translation, research | No special obligations |

Reality for 90% of SMEs: You're at 🟢 or 🟡.

Category 🟢 Minimal: Business as usual

This applies if you use AI like this:

  • ChatGPT, Claude, Perplexity for research and text
  • AI translation
  • Spam filters
  • AI-powered search in tools

Your obligations

No specific AI Act obligations.

Still important

Data protection: No personal data in AI tools without review
Quality: Review outputs (AI hallucinates)
GDPR: Still applies. Unchanged.

Category 🟡 Limited: Transparency is mandatory

This applies if:

  • You have a chatbot on your website
  • You publish AI-generated text or images
  • Customers interact with AI (without knowing it)

Your obligation: Label it

| Situation | What you must do |
|---|---|
| Chatbot | Notice: "You're speaking with an AI assistant" |
| AI-generated text | Notice: "Created with AI assistance" |
| AI-generated image | Visible marking or in the description |
| Synthetic media / deepfakes | Clearly visible marking |

Implementation: 1 hour of work

  1. Add chatbot disclaimer (IT ticket, 15 min)
  2. Create editorial guideline for AI content (template below)
  3. Inform team

Editorial guideline template:

AI-generated or AI-assisted content is labeled with:

"This content was created with AI assistance."

Placement: At the end of text / In image description

Responsible: [Name/Role]

Done.
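If your chatbot's greeting is built in code, the notice takes one line. A minimal sketch in Python; the function name and the messages are made up, not tied to any specific chatbot framework:

```python
# Minimal sketch: prepend the mandatory AI notice to a chatbot's first message.
# Function name and wording are illustrative, not from a real framework.

AI_NOTICE = "You're speaking with an AI assistant."

def first_reply(greeting: str) -> str:
    """Return the chatbot's opening message with the AI notice prepended."""
    return f"{AI_NOTICE}\n\n{greeting}"

print(first_reply("Hi! How can I help you today?"))
```

The same idea works in any stack: put the notice before the first AI-generated message, not buried in the imprint.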

Category 🟠 High: Now it gets serious

This applies if AI is involved in these decisions:

| Area | Examples |
|---|---|
| HR | AI sorts applications, evaluates performance, recommends promotions |
| Finance | AI checks creditworthiness, calculates insurance premiums |
| Education | AI grades exams, decides on admissions |
| Critical infrastructure | AI controls energy, water, traffic |

Your obligations as a deployer

You don't develop the AI, but you use it. In AI Act terms, that makes you a "deployer", and deployers have obligations of their own.

| Obligation | Specifically |
|---|---|
| Documentation | Document in writing: which tool, what for, since when |
| Human oversight | AI recommends, a human decides. Always. |
| Information | Let affected persons know AI is involved |
| Training | Whoever operates the tool must understand it |
| Keep logs | Store the tool's logs for at least 6 months |

Example: AI in recruiting

You use a tool that pre-sorts or ranks applications.

Checklist:

  • ☐ Tool and purpose documented
  • ☐ HR team trained
  • ☐ Final decision by humans ensured
  • ☐ Applicants informed about AI use (job ad/process)
  • ☐ Logs are stored (min. 6 months)

Time required: 1-2 days, one-time. Then routine.
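The written documentation from the checklist can be as lightweight as one structured record per tool. A sketch in Python; the field names and the example tool are made up, not prescribed by the Act:

```python
# Minimal sketch of a written record for one high-risk AI tool.
# Field names and the example entry are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    tool: str                      # which tool
    purpose: str                   # what it is used for
    in_use_since: date             # since when
    human_oversight: str           # who makes the final decision
    log_retention_months: int = 6  # keep logs at least 6 months

record = AIToolRecord(
    tool="Example CV screener",    # hypothetical tool name
    purpose="Pre-sorts incoming applications",
    in_use_since=date(2024, 9, 1),
    human_oversight="HR lead reviews every shortlist before rejections",
)
print(record)
```

One record per tool, stored wherever your team already keeps process docs, is enough to answer "which tool, what for, since when" during an inspection.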

Category 🔴 Prohibited: Hands off

These AI applications are illegal from February 2025. No exceptions.

| Prohibited | Meaning |
|---|---|
| Social scoring | Rating people based on social behavior |
| Manipulative AI | Subliminal influence on behavior |
| Emotion recognition in the workplace | AI analyzes employees' emotions |
| Biometric categorization | Classification by race, religion, sexuality |
| Predictive policing (individuals) | Predicting whether someone will commit crimes |

Mostly irrelevant for SMEs. But if a tool vendor promises such things: Don't buy.

The Timeline: What applies when?

| Date | What happens |
|---|---|
| Feb 2025 | Prohibitions take effect |
| Aug 2025 | Transparency obligations for generative AI |
| Aug 2026 | High-risk rules fully effective |
| Aug 2027 | All remaining rules |

Your next deadline: February 2025 - Check prohibitions.
After that: August 2025 - Ensure transparency.

What does ignoring cost?

The AI Act has penalties. Real ones.

| Violation | Maximum penalty |
|---|---|
| Prohibited AI | €35M or 7% of annual revenue |
| High-risk violations | €15M or 3% of annual revenue |
| False statements | €7.5M or 1% of annual revenue |

Reality for SMEs: The maximum amounts won't hit you. But proportionate fines can still hurt. And reputational damage even more.

Probability in 2025: Low for SMEs. Authorities focus on large providers. But: Complaints can trigger inspections.

Your 7-Day Plan

Concrete. Without consultants. Doable this week.

Day 1-2: Inventory

Create a list of all AI tools in the company.

| Tool | Who uses it? | What for? | Risk level |
|---|---|---|---|
| | | | |
| | | | |

Ask proactively. Many tools have built-in AI without it being obvious.
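If you prefer code over spreadsheets, the same inventory fits in a few lines. A sketch in Python; all entries are made-up examples:

```python
# Minimal sketch of an AI tool inventory; all entries are made-up examples.
inventory = [
    {"tool": "ChatGPT", "who": "Marketing", "purpose": "Draft texts", "risk": "minimal"},
    {"tool": "Website chatbot", "who": "Support", "purpose": "Customer FAQ", "risk": "limited"},
    {"tool": "CV screener", "who": "HR", "purpose": "Pre-sort applications", "risk": "high"},
]

# Quick view: which tools carry obligations beyond business as usual?
needs_action = [t["tool"] for t in inventory if t["risk"] != "minimal"]
print(needs_action)
```

The format doesn't matter; what matters is that the list exists, is complete, and names who uses what for which purpose.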

Day 3: Risk classification

For each tool: Which category?

  • 🟢 Minimal → No AI Act obligations
  • 🟡 Limited → Labeling required
  • 🟠 High → Documentation + Oversight + Training
  • 🔴 Prohibited → Stop immediately
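The four categories map one-to-one to obligations, which makes the classification easy to keep consistent across the team. A sketch in Python; the summary strings mirror the list above:

```python
# Minimal sketch: map each risk level to its AI Act obligation summary.
OBLIGATIONS = {
    "minimal": "No AI Act obligations",
    "limited": "Labeling required",
    "high": "Documentation + oversight + training",
    "prohibited": "Stop immediately",
}

def obligations_for(risk_level: str) -> str:
    """Return the obligation summary; unknown levels get flagged for review."""
    return OBLIGATIONS.get(risk_level.lower(), "Unknown level: review manually")

print(obligations_for("limited"))
```

Anything that doesn't clearly fit a category gets flagged rather than silently treated as minimal.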

Day 4: Implement quick wins

Fulfill transparency obligations:

  • ☐ Chatbot notice on website
  • ☐ AI labeling for published content
  • ☐ Create editorial guideline

Day 5: Check high-risk (if relevant)

If you use AI in recruiting, credit or education:

  • ☐ Create documentation
  • ☐ Ensure human oversight
  • ☐ Inform affected persons
  • ☐ Designate responsible person

Day 6: Inform team

Short message to everyone:

Subject: AI use in the company, quick update

Hi team,

We reviewed our AI use and set a few rules:

  • [Tool X, Y, Z] are approved for [purpose]
  • No sensitive data (customer data, HR data) in AI tools
  • AI-generated content will be labeled

Questions? [Contact person]

More details: [Link to internal documentation]

Day 7: Store documentation

Everything in one place:

  • Tool list with risk classification
  • Editorial guideline
  • Training proof (if high-risk)
  • Responsibilities

Done. You're compliant.

The 5 Most Common Mistakes

| Mistake | Why problematic | Better |
|---|---|---|
| "Doesn't affect us" | Everyone using ChatGPT is affected | Do the inventory |
| No labeling | Violation for every AI chatbot, every AI text | Add disclaimers |
| AI decides alone | Prohibited for high-risk | Human has the final say |
| No documentation | No proof during inspection | Keep a simple list |
| Done once, never updated | New tools, new risks | Review quarterly |

Checklist: AI Act Ready

  • ☐ Tool inventory created
  • ☐ Risk levels assigned
  • ☐ Chatbot disclaimer live
  • ☐ AI content labeling defined
  • ☐ High-risk processes documented (if applicable)
  • ☐ Team informed
  • ☐ Contact person designated
  • ☐ Documentation stored

7 of 8? You're ahead of 90% of SMEs.

Conclusion: 3 Sentences

  1. The AI Act applies to you as soon as you use AI.
  2. For most SMEs, this means: Label + Document.
  3. One week of work. No lawyer needed. Start today.

Resources

  • AI Act full text (English): EUR-Lex
  • EU Commission FAQ: ec.europa.eu
  • ENISA AI Security Guide: enisa.europa.eu

Keep learning

The AI Act is the beginning. AI competence in your team is the next step.

  • Daily micro-learning units (5 min, directly in Slack/Teams)
  • EU AI Act compliance module available as add-on
  • Team dashboard for progress tracking
Try 14 days free