EU AI Act & Swiss SMEs: 2026 Compliance Guide
The EU AI Act applies to Swiss SMEs with EU clients. This guide covers the 2025-2027 deadlines, risk classification, obligations per category, sanctions, and a 7-step compliance checklist.
Many Swiss SME executives still believe the AI Act is an EU regulation that doesn't concern them. False. Like the GDPR before it, the AI Act applies extraterritorially: it covers any company whose AI systems are used within the EU, or whose outputs are used there. In plain words: if you have a single EU client, you're concerned. Below: the deadlines, the concrete obligations per risk category, the interplay with the Swiss nFADP, and a 7-step compliance checklist.
Why the AI Act concerns Swiss SMEs
Three extraterritorial channels make it inescapable:
- Placing on the EU market: if you sell an AI service or product in an EU country, you're a provider under the AI Act.
- Use by EU clients: a Swiss consulting firm delivering an AI-generated report to a German client is bound by transparency obligations.
- Output used in the EU: a Romandie SME whose multilingual chatbot answers prospects in France falls within the Act's scope.
The nFADP covers personal data, the AI Act covers the AI system itself (design, market placement, post-market surveillance). They stack — they don't replace each other. More on the data protection side in our nFADP guide for SMEs.
The 4 AI Act risk levels
| Level | Definition | Status |
|---|---|---|
| Unacceptable | Social scoring, cognitive manipulation, real-time biometric ID for law enforcement | Prohibited since February 2025 |
| High risk | Recruitment, credit scoring, medical exams, critical infrastructure, justice, immigration | Strict obligations: conformity, audit, CE marking, human oversight |
| Limited risk | Chatbots, deepfakes, generative AI for end users | Transparency obligation: signal AI interaction |
| Minimal risk | Spam filters, game AI, simple e-commerce recommendations | No specific obligations |
For Swiss SMEs, 90% of use cases fall under limited or minimal risk. But 10% can tip into high risk: automated CV screening, lead scoring, retail video surveillance, HR evaluation tools.
Key 2025-2027 deadlines
| Date | Obligation | Affected |
|---|---|---|
| 2 February 2025 | Ban on unacceptable-risk AI systems | All |
| 2 August 2025 | Rules on GPAI models (GPT, Claude, Gemini, Mistral) | LLM providers |
| 2 February 2026 | Mandatory codes of conduct for GPAI providers | LLM providers |
| 2 August 2026 | High-risk obligations (Annex III) applicable | SMEs using HR/scoring/health AI |
| 2 August 2027 | High-risk obligations (Annex I: regulated products) | Industry, medical devices |
The 7 compliance steps
Step 1: Exhaustive AI system inventory
List all AI tools used (declared AND shadow IT) with: business use, vendor, data processed, target audience (EU yes/no). SMEs typically discover 3 to 5 forgotten tools. Start with a free AI audit.
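A minimal sketch of what one inventory entry could capture, assuming a simple structured record; the field names are our illustration, not something the Act prescribes:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI inventory (fields are illustrative, not prescribed)."""
    name: str            # e.g. "ChatGPT Team"
    vendor: str          # e.g. "OpenAI"
    business_use: str    # e.g. "marketing copy drafting"
    data_processed: str  # e.g. "prospect emails (personal data)"
    eu_exposure: bool    # is the output used by or delivered to EU clients?
    declared: bool       # False = shadow IT discovered during the inventory

inventory = [
    AISystemRecord("ChatGPT Team", "OpenAI", "client email drafting",
                   "client correspondence", eu_exposure=True, declared=True),
    AISystemRecord("DeepL Pro", "DeepL", "contract translation",
                   "contract text", eu_exposure=True, declared=False),
]
```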
Step 2: Risk-level classification
Assign each tool to one of the 4 categories. HR tools, scoring, surveillance, and predictive customer analytics require fine-grained analysis.
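To make this triage repeatable across the inventory, a rule-based first pass can flag the obvious cases before legal review. A sketch, where the keyword lists are our illustrative shorthand for the Annex III categories, not the legal text:

```python
# First-pass triage only; the final classification always needs legal review.
HIGH_RISK_TERMS = {"recruitment", "cv screening", "credit scoring",
                   "employee evaluation", "video surveillance"}
LIMITED_RISK_TERMS = {"chatbot", "content generation", "deepfake"}

def classify(business_use: str) -> str:
    use = business_use.lower()
    if any(term in use for term in HIGH_RISK_TERMS):
        return "high"     # strict obligations apply from 2 August 2026
    if any(term in use for term in LIMITED_RISK_TERMS):
        return "limited"  # transparency obligations
    return "minimal"      # no specific obligations (verify, don't assume)

print(classify("CV screening for sales roles"))  # -> high
```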
Step 3: Transparent user information
For limited risk (chatbots, generative AI): clearly state that the user is interacting with an AI. Standard notice: "This response was generated by an artificial intelligence." Add it to your T&Cs, site footer, and chatbot welcome message.
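A sketch of how to enforce the notice systematically rather than relying on each integration to remember it; the helper name and wiring are our own:

```python
AI_NOTICE = "This response was generated by an artificial intelligence."

def with_ai_disclosure(generated_text: str) -> str:
    """Append the transparency notice to every AI-generated reply."""
    return f"{generated_text}\n\n{AI_NOTICE}"

# Example: chatbot welcome message
print(with_ai_disclosure("Hello! How can I help you today?"))
```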
Step 4: Technical documentation for high risk
Produce Annex IV-compliant documentation (system description, training data, performance metrics, cybersecurity). Vendors (Microsoft, OpenAI) provide part of it; you document your own usage.
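As a working aid, a documentation skeleton keyed to the items named above can track what remains to be filled in; the keys paraphrase those items and are not the official Annex IV headings:

```python
# Illustrative skeleton; keys paraphrase the items listed above,
# not the official Annex IV wording.
annex_iv_doc = {
    "system_description": "",   # what the system does in your deployment
    "training_data": "",        # largely vendor-supplied (model card, data sheet)
    "performance_metrics": "",  # vendor benchmarks plus your own acceptance tests
    "cybersecurity": "",        # access control, logging, incident response
    "deployment_usage": "",     # your part: how, by whom, for which decisions
}

missing = [section for section, text in annex_iv_doc.items() if not text]
print("Sections still to document:", missing)
```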
Step 5: Human oversight
Every high-risk system must allow effective human oversight: users can disregard the AI output, are trained to detect bias, and can intervene. Concretely: a human validates each final decision.
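One common pattern, sketched below with our own naming: the AI proposes, a trained reviewer decides, and any override is recorded so oversight is demonstrable.

```python
from dataclasses import dataclass

@dataclass
class FinalDecision:
    ai_recommendation: str  # what the system proposed
    human_decision: str     # what the trained reviewer decided
    overridden: bool        # did the reviewer depart from the AI output?
    reviewer: str           # named person, for accountability

def finalize(ai_recommendation: str, human_decision: str,
             reviewer: str) -> FinalDecision:
    """No high-risk decision becomes final without a named human sign-off."""
    return FinalDecision(ai_recommendation, human_decision,
                         overridden=(human_decision != ai_recommendation),
                         reviewer=reviewer)

d = finalize("reject candidate", "invite to interview", "hr.manager")
print(d.overridden)  # -> True: the human disregarded the AI output
```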
Step 6: Team training (Art. 4 AI Act)
In force since 2 February 2025: every employee using an AI system in a professional context must be appropriately trained. For an SME: 2-4h initial training + annual refresh. See our SME AI training catalogue.
Step 7: Logs, audits, post-market surveillance
Retain AI usage logs for at least 6 months. Set up an incident notification procedure. Re-audit your classification annually.
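A minimal append-only usage log, assuming plain JSONL files; the six-month floor comes from the Act, while the path and field names are our illustration:

```python
import json
from datetime import datetime, timezone

LOG_FILE = "ai_usage_log.jsonl"  # illustrative path; retain entries >= 6 months

def log_ai_use(system: str, user: str, purpose: str) -> None:
    """Append one AI-usage event as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "user": user,
        "purpose": purpose,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use("ChatGPT Team", "m.dupont", "client email draft")
```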
Sanctions for SMEs
| Violation | Maximum fine |
|---|---|
| Prohibited AI practice (unacceptable risk) | EUR 35M or 7% global turnover |
| High-risk non-compliance | EUR 15M or 3% global turnover |
| Misleading information to authorities | EUR 7.5M or 1% global turnover |
Sanctions are size-calibrated: for SMEs, the cap is the lower of the fixed amount and the turnover percentage. For a CHF 5M turnover SME, the unacceptable-risk maximum is therefore around CHF 350,000 (7% of turnover), far from the headline EUR 35M but enough to threaten the SME's survival.
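The arithmetic behind that figure, as a sketch assuming the SME cap is the lower of the fixed amount and the turnover percentage, and a 1:1 CHF/EUR rate for simplicity:

```python
def sme_fine_cap(turnover: float, fixed_cap: float, pct: float) -> float:
    """For SMEs the AI Act caps each fine at whichever is lower:
    the fixed amount or the turnover percentage (simplified, CHF ~ EUR)."""
    return min(fixed_cap, turnover * pct)

# Unacceptable-risk violation for a CHF 5M turnover SME
print(sme_fine_cap(5_000_000, 35_000_000, 0.07))  # -> 350000.0
```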
Interplay with nFADP: don't redo everything
Good news: if you're already nFADP-compliant, you've already covered roughly 60% of the AI Act path. The processing register (Art. 12 nFADP), processor contracts, DPIAs (Art. 22 nFADP), and data protection training are all reusable.
What's specifically new for the AI Act:
- 4-level risk classification
- CE marking for high-risk systems placed on the market
- Annex IV technical documentation
- 6-month usage logs
- "You're interacting with an AI" notice
FAQ
Is a 5-person Swiss SME really concerned?
If it has a single EU client or runs a public-facing chatbot or generative AI, yes, at least for the transparency obligations (limited risk). These are quick and cheap to implement.
Do I need an AI Act officer?
No, the AI Act doesn't require a dedicated officer for SMEs. An internal AI lead (often DPO or IT manager) suffices.
Are GPT, Claude, Copilot high risk?
The GPAI models themselves are not high risk. Usage determines classification. ChatGPT for email drafting = limited risk. Same ChatGPT for candidate evaluation = high risk.
What happens if I don't act?
As long as no EU client complains, the concrete enforcement risk stays low. But B2B markets increasingly require compliance attestations: large companies impose them on their Swiss suppliers.
Can I combine nFADP + AI Act compliance?
Yes, recommended. Obligations overlap by 40-60%. Our free AI audit covers both dimensions.
Want to know where your SME stands on the AI Act? Book a free AI audit. Also see our SME AI budget guide and our consulting offering.