Your AI product serves EU users. That means risk classification, technical documentation, conformity assessments, and phased obligations already in effect. Complara tracks which EU AI Act requirements apply to your systems — and what you need to build before each deadline.
10-day free trial · No credit card required · Setup in 3 minutes
The EU AI Act entered into force in August 2024 with a phased timeline: prohibited practices banned from February 2025, GPAI model obligations from August 2025, and high-risk AI obligations from August 2026. If you're building AI products for EU users, the clock is already ticking.
Determine whether your AI systems are unacceptable risk (prohibited), high risk (strict requirements), limited risk (transparency obligations), or minimal risk (no specific rules).
High-risk AI providers must maintain detailed technical documentation covering system architecture, training data, performance metrics, and intended purpose before market placement.
High-risk AI systems must be designed to allow human oversight — enabling operators to monitor, override, and stop the system in operation.
Most high-risk AI providers must conduct a conformity assessment before placing systems on the EU market. Some categories require third-party assessment by a notified body.
AI systems that interact with humans, such as chatbots, must disclose that they are AI. Deepfakes and AI-generated content must be labelled. Emotion recognition and biometric categorisation systems have specific disclosure rules.
Providers of general-purpose AI models must maintain technical documentation, provide usage policies, and, for models classified as posing systemic risk, conduct adversarial testing and report serious incidents.
The EU AI Act's phased timeline means different obligations kick in at different times. Complara helps you track what's due, when, and who owns each requirement.
Work through a structured checklist to classify each AI system you build or deploy, and understand which compliance track applies.
Track which technical documentation artifacts you've completed — architecture docs, training data summaries, performance benchmarks, and conformity assessment records.
Know which obligations apply now versus August 2025 versus August 2026, so you can prioritise without missing critical deadlines.
If you also need GDPR compliance for your AI's data processing, Complara covers both in the same workspace.
EU AI Act compliance intersects heavily with GDPR for data processing, and with SOC 2 or ISO 27001 for enterprise customer requirements.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level and imposes requirements proportionate to that risk. It entered into force in August 2024 and is being phased in through 2026.
Yes. Like GDPR, the EU AI Act has extraterritorial reach. It applies to any provider placing AI systems on the EU market or putting them into service in the EU, regardless of where the company is incorporated.
Unacceptable risk — prohibited (social scoring, real-time remote biometric identification in public spaces). High risk — strict requirements (critical infrastructure, employment, education, law enforcement). Limited risk — transparency obligations (chatbots, deepfakes). Minimal risk — no specific obligations.
Prohibited practices: February 2025. GPAI model obligations: August 2025. High-risk AI obligations: August 2026. Read the full EU AI Act guide →
Risk classification, technical documentation, oversight requirements, and timeline management — all in one plain-English compliance workspace.