AI Compliance & Regulations — EU AI Act Guide for Engineers (2026)
AI compliance is no longer a legal team’s problem. The EU AI Act — the world’s first comprehensive AI regulation — places technical obligations directly on the engineers who build and deploy AI systems. Risk classification, documentation, transparency labeling, and monitoring are engineering deliverables now. This guide covers the 5 compliance checkpoints every AI engineer should implement, with practical examples for GenAI applications.
1. Why AI Compliance Matters for Engineers
Regulations are shifting compliance responsibility from legal departments to engineering teams, and the penalties for getting it wrong are severe.
The Engineering Obligation
Until 2024, AI compliance was something lawyers worried about. Engineers built systems, legal teams reviewed them, and compliance was a checkbox exercise. That model is dead.
The EU AI Act, which entered into force on August 1, 2024, creates obligations that only engineers can fulfill. Risk classification requires understanding your system’s architecture. Technical documentation requires describing your training data pipeline, model provenance, and evaluation methodology. Transparency requirements demand engineering changes to your UI and API responses. Human oversight mechanisms are architectural decisions, not legal memos.
The penalties are proportional to revenue, not a flat fine. Deploying a prohibited AI system carries fines up to 35 million EUR or 7% of global annual revenue — whichever is higher. Violating high-risk system requirements reaches 15 million EUR or 3% of revenue. Even providing incorrect information to regulators costs up to 7.5 million EUR or 1% of revenue.
Who This Guide Is For
If you are building applications that use large language models — chatbots, content generators, coding assistants, RAG systems, AI agents — this guide is for you. Senior engineers need the architectural patterns for compliance-by-design. Junior engineers need to understand why compliance gates exist in their CI/CD pipelines and how to write documentation that satisfies audit requirements.
For the broader safety context, see the AI Safety guide. For runtime protection patterns, see AI Guardrails.
2. EU AI Act Enforcement Timeline and Scope
The EU AI Act follows a phased enforcement timeline with three key dates that determine when specific obligations become legally binding.
The Three Enforcement Milestones
| Date | What Takes Effect | Impact on Engineers |
|---|---|---|
| February 2, 2025 | Prohibited AI practices banned | Social scoring systems, manipulative AI, real-time biometric identification in public spaces must be shut down |
| August 2, 2025 | General-purpose AI model rules | Providers of foundation models must publish model cards, disclose training data summaries, and comply with copyright obligations |
| August 2, 2026 | Full enforcement | High-risk system requirements, conformity assessments, complete penalty framework, market surveillance active |
Which AI Applications Are Affected
The Act applies to any AI system placed on the EU market or whose output is used within the EU — regardless of where the company is headquartered. If your chatbot serves a single EU user, you are in scope.
Risk Categories at a Glance
| Risk Level | Examples | Obligations | Engineering Impact |
|---|---|---|---|
| Unacceptable | Social scoring, subliminal manipulation, real-time facial recognition in public | Banned | Must not be deployed |
| High | Hiring tools, credit scoring, medical devices, critical infrastructure, law enforcement | Conformity assessment, technical documentation, human oversight, post-market monitoring | 40%+ dev overhead; dedicated compliance pipeline |
| Limited | Chatbots, content generators, deepfakes, emotion recognition | Transparency obligations | AI disclosure labels, content watermarking |
| Minimal | Spam filters, video game AI, inventory optimization | No mandatory obligations | Voluntary codes of practice |
The critical insight for GenAI engineers: most LLM applications fall into the “limited risk” category, not “high risk.” A customer support chatbot, a code review assistant, a content summarizer — these require transparency obligations (telling users they are interacting with AI), but they do not require the full conformity assessment pipeline that high-risk systems demand.
The exception: if your GenAI application operates within a high-risk domain — screening job applicants, assessing creditworthiness, providing medical triage — the domain determines your risk level, not the technology.
3. How AI Compliance Works — Core Concepts
AI compliance under the EU AI Act operates on a risk-based classification system where your obligations scale with the potential harm your system can cause.
The Risk-Based Classification Mental Model
Think of it as building codes for software. A garden shed (minimal risk) needs no permit. A residential house (limited risk) needs basic inspections. A hospital (high risk) needs full architectural review, certified materials, and ongoing structural monitoring. The regulations do not ban construction — they scale oversight to match the stakes.
Five Pillars of EU AI Act Compliance
1. Risk Classification. Every AI system must be classified into one of four risk tiers. This is not a one-time decision — reclassification is required when you change the system’s intended purpose, expand to regulated domains, or modify the architecture in ways that change the risk profile.
2. Transparency Requirements. Users must know they are interacting with AI. Content generated by AI must be identifiable as such. This applies to all risk levels above minimal. For chatbots, a clear disclosure statement is required. For AI-generated text, images, or audio, machine-readable metadata labeling is the emerging standard.
3. Technical Documentation. High-risk systems need detailed documentation covering system design, training data, evaluation results, and known limitations. General-purpose AI models require model cards. This is not a PDF you write once — it is a living document updated with every significant model or system change.
4. Human Oversight. High-risk systems must include mechanisms for human intervention. This means human-in-the-loop patterns: approval workflows, override controls, and the ability for humans to stop the system. The Act specifies that humans must be able to understand the system’s outputs and override or reverse automated decisions.
5. Post-Deployment Monitoring. You must monitor your system after deployment for performance degradation, bias drift, and incidents. This is not optional for high-risk systems, and it is a best practice for all risk levels. See the LLMOps guide for production monitoring patterns.
General-Purpose AI Model Obligations
Section titled “General-Purpose AI Model Obligations”If you are building on top of a foundation model (GPT-4, Claude, Gemini, Llama), the model provider carries certain obligations under the Act. But as the deployer, you carry your own. The model provider must disclose training methodology and known limitations. You must ensure the final application meets the transparency and risk requirements for your specific use case.
4. Implement AI Compliance Step by Step — 5 Checkpoints
Five compliance checkpoints transform regulatory requirements into engineering tasks that integrate with your existing development workflow.
Checkpoint 1: Classify Your Application’s Risk Level
Start here. Every other decision flows from your risk classification.
Step 1: Check the prohibited list. Does your system perform social scoring, subliminal manipulation, or real-time biometric identification in public? If yes, stop — you cannot deploy this system in the EU.
Step 2: Check Annex III (high-risk). Does your system fall into any of these domains?
- Biometric identification and categorization
- Critical infrastructure management (water, gas, electricity, transport)
- Educational access and vocational training assessment
- Employment and worker management (hiring, promotion, termination)
- Access to essential services (credit scoring, insurance, social benefits)
- Law enforcement, migration, and justice
- Democratic processes
Step 3: If your system is not prohibited or high-risk, determine whether it generates content, interacts with users, or detects emotions. If yes, it is limited risk with transparency obligations. If none of these apply, it is minimal risk.
Document your classification with the reasoning. Store this in your repository alongside the system documentation. You will need it for audits.
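The three steps above can be sketched as a single classification function. This is a minimal sketch: the practice, domain, and trait sets below are illustrative assumptions for the example, not an exhaustive legal checklist, so always verify against the current text of the Act.

```python
# Sketch of the three-step risk classification flow.
# The sets below are illustrative, not exhaustive.

PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                        "realtime_public_biometric_id"}
ANNEX_III_DOMAINS = {"biometrics", "critical_infrastructure", "education",
                     "employment", "essential_services", "law_enforcement",
                     "migration_justice", "democratic_processes"}
LIMITED_RISK_TRAITS = {"generates_content", "interacts_with_users",
                       "detects_emotions"}

def classify_risk(practices: set, domain: str, traits: set) -> str:
    """Return the EU AI Act risk tier for a described system."""
    if practices & PROHIBITED_PRACTICES:   # Step 1: prohibited list
        return "unacceptable"
    if domain in ANNEX_III_DOMAINS:        # Step 2: Annex III high-risk domains
        return "high"
    if traits & LIMITED_RISK_TRAITS:       # Step 3: transparency triggers
        return "limited"
    return "minimal"

# A customer support chatbot: no prohibited practice, general domain,
# interacts with users -> limited risk.
print(classify_risk(set(), "customer_support", {"interacts_with_users"}))
```

Note how the domain check precedes the trait check: a chatbot deployed for hiring decisions classifies as high risk even though it also "interacts with users," matching the exception described earlier.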
Checkpoint 2: Implement Transparency Requirements
For limited-risk GenAI applications, transparency is your primary obligation.
For chatbots and conversational AI:
```python
# Add AI disclosure to every conversation start
DISCLOSURE_MESSAGE = (
    "You are interacting with an AI assistant powered by a large "
    "language model. Responses are generated automatically and may "
    "contain errors. For critical decisions, consult a qualified "
    "professional."
)

async def start_conversation(user_id: str) -> dict:
    """Initialize conversation with mandatory AI disclosure."""
    return {
        "disclosure": DISCLOSURE_MESSAGE,
        "conversation_id": generate_id(),
        "ai_system": True,  # Machine-readable flag
        "model_provider": "anthropic",
        "model_version": "claude-3.5-sonnet",
    }
```

For AI-generated content:

```python
# Label AI-generated content with metadata
def label_ai_content(content: str, generation_metadata: dict) -> dict:
    """Attach EU AI Act transparency metadata to generated content."""
    return {
        "content": content,
        "ai_generated": True,
        "generation_timestamp": datetime.utcnow().isoformat(),
        "model_id": generation_metadata.get("model"),
        "disclosure": "This content was generated by an AI system.",
        # C2PA-compatible metadata for images/video
        "c2pa_manifest": generation_metadata.get("c2pa"),
    }
```

Checkpoint 3: Document Training Data and Model Provenance
Even if you are using a third-party model, document your fine-tuning data, RAG knowledge base sources, and system prompt design decisions.
Minimum documentation for a GenAI application:
- Base model provider, model ID, and version
- Fine-tuning dataset description (if applicable): sources, size, filtering criteria, known biases
- RAG knowledge base: source documents, update frequency, data governance policies
- System prompt: version-controlled, with change history
- Evaluation results: accuracy metrics, bias assessments, failure mode analysis
Store documentation in version control alongside your code. A model card template in your repo ensures every deployment has its documentation.
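One way to keep that documentation machine-readable is a model card file checked in next to the code. This is a sketch under the assumption of one JSON file per deployment; the field names and file paths are illustrative, not a mandated schema.

```python
# Sketch: a version-controlled model card covering the minimum
# documentation fields listed above. Field names are illustrative.
import json

model_card = {
    "base_model": {
        "provider": "anthropic",
        "model_id": "claude-3.5-sonnet",
        "version": "20241022",
    },
    "fine_tuning": None,  # or {"sources": [...], "size": ..., "filters": [...]}
    "rag_knowledge_base": {
        "sources": ["internal-docs", "public-faq"],
        "update_frequency": "weekly",
        "governance_policy": "docs/data-governance.md",
    },
    "system_prompt_version": "v14",  # the prompt itself lives in version control
    "evaluation": {
        "accuracy": 0.91,
        "bias_assessment": "reports/bias-2026-q1.md",
        "failure_modes": ["hallucinated citations", "stale KB answers"],
    },
}

# Committed alongside the code; diffed and reviewed like any other change.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

Because the card is plain JSON in version control, every deployment carries its documentation, and the git history doubles as the change log auditors ask for.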
Checkpoint 4: Add Human Oversight Mechanisms
For high-risk applications, human oversight is mandatory. For limited-risk applications, it is a best practice that reduces liability.
Key patterns from the Human-in-the-Loop guide:
- Confidence-based routing: Actions below a calibrated threshold go to human review
- Approval workflows: Irreversible or high-impact actions require human sign-off before execution
- Override controls: Humans can stop the system, correct outputs, or disable automation at any time
- Escalation paths: Tiered review based on risk severity — not every action needs the same level of oversight
```python
# Requires: python >= 3.10

class ComplianceRouter:
    """Route AI decisions based on risk and confidence."""

    def __init__(self, confidence_threshold: float = 0.85):
        self.confidence_threshold = confidence_threshold

    def route_decision(
        self, action: str, confidence: float, risk_level: str
    ) -> str:
        if risk_level == "high":
            return "human_approval_required"
        if confidence < self.confidence_threshold:
            return "human_review_recommended"
        return "autonomous_execution"
```

Checkpoint 5: Set Up Monitoring and Incident Reporting
Post-deployment monitoring is where compliance becomes an ongoing engineering practice, not a one-time checklist.
Four monitoring components:
- Audit trail: Log every input, output, and routing decision with timestamps. Immutable storage — logs cannot be retroactively modified.
- Performance drift detection: Track accuracy, latency, and failure rates against your documented baselines. Alert when metrics deviate beyond defined thresholds.
- Incident classification: Categorize failures by severity (critical, major, minor) and regulatory impact (safety, transparency, documentation).
- Reporting cadence: Prepare quarterly compliance summaries. For high-risk systems, serious incidents must be reported to the relevant national authority without undue delay.
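The drift-detection component above can be sketched as a simple comparison against documented baselines. This is a minimal sketch: the baseline values and per-metric tolerances are illustrative, and a production system would typically use statistical tests over time windows rather than fixed deltas.

```python
# Sketch: alert when live metrics deviate from documented baselines
# by more than a per-metric tolerance. All values are illustrative.

BASELINES = {"accuracy": 0.91, "p95_latency_ms": 800.0, "failure_rate": 0.02}
TOLERANCES = {"accuracy": 0.03, "p95_latency_ms": 200.0, "failure_rate": 0.01}

def detect_drift(current: dict) -> list:
    """Return the metrics that have drifted beyond their tolerance."""
    alerts = []
    for metric, baseline in BASELINES.items():
        if abs(current[metric] - baseline) > TOLERANCES[metric]:
            alerts.append(metric)
    return alerts

# Accuracy dropped from 0.91 to 0.85: a 0.06 deviation against a
# 0.03 tolerance, so only "accuracy" fires an alert.
print(detect_drift({"accuracy": 0.85, "p95_latency_ms": 820.0,
                    "failure_rate": 0.02}))
```

The documented baselines double as audit evidence: the same numbers that appear in your model card are the thresholds your monitoring enforces.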
For implementation patterns, see the LLMOps guide and Hallucination Mitigation for output quality monitoring.
5. AI Compliance Architecture and System View
A compliant GenAI system wraps the LLM in five layers, from risk classification at the top through monitoring at the base.
The AI Compliance Stack
The compliance stack mirrors a production safety architecture. Each layer handles a specific regulatory obligation. Requests flow down through classification, transparency, and documentation before reaching the LLM, then pass through oversight and monitoring on the way back.
📊 Visual Explanation
[Figure: AI Compliance Stack for GenAI Applications — five layers of regulatory compliance wrapping the LLM, each layer addressing a specific EU AI Act obligation]
6. AI Compliance Code Examples in Python
Here is what compliance looks like in practice — three patterns you can adapt to your own GenAI applications.
Example A: Adding AI Disclosure to a Chatbot
Every conversational AI system must disclose its AI nature to users. This is a transparency obligation under Article 50 of the final Act text (Article 52 in the earlier draft numbering).
```python
# Requires: fastapi >= 0.100.0, pydantic >= 2.0

from fastapi import FastAPI, Request
from pydantic import BaseModel
from datetime import datetime

app = FastAPI()

class ChatResponse(BaseModel):
    """Response model with mandatory AI disclosure fields."""
    message: str
    ai_disclosure: str
    ai_generated: bool
    model_id: str
    timestamp: str

AI_DISCLOSURE = (
    "This response is generated by an AI system. "
    "It may contain inaccuracies. Verify important information "
    "with authoritative sources."
)

@app.post("/chat", response_model=ChatResponse)
async def chat(request: Request, user_message: str):
    # Generate response from your LLM
    llm_response = await generate_response(user_message)

    return ChatResponse(
        message=llm_response,
        ai_disclosure=AI_DISCLOSURE,
        ai_generated=True,
        model_id="claude-3.5-sonnet-20241022",
        timestamp=datetime.utcnow().isoformat(),
    )
```

The disclosure field is not optional. Include it in every response, not just the first message. Returning ai_generated: True as a structured field enables downstream systems and clients to handle AI-generated content appropriately.
Example B: Human-in-the-Loop for High-Risk Decisions
When your application enters a high-risk domain — even temporarily — route the decision through human approval.
```python
# Requires: python >= 3.10

from enum import Enum
from dataclasses import dataclass

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class ComplianceDecision:
    action: str
    risk_level: RiskLevel
    requires_human_review: bool
    reason: str
    confidence: float

# Domain-specific risk keywords that trigger elevated classification
HIGH_RISK_DOMAINS = {
    "hiring", "credit", "medical", "legal", "insurance",
    "law_enforcement", "education_access",
}

def classify_request(user_input: str, context: dict) -> ComplianceDecision:
    """Classify incoming request and determine oversight requirements."""
    domain = context.get("domain", "general")
    confidence = context.get("confidence", 0.5)

    if domain in HIGH_RISK_DOMAINS:
        return ComplianceDecision(
            action="process_with_oversight",
            risk_level=RiskLevel.HIGH,
            requires_human_review=True,
            reason=f"Domain '{domain}' is classified as high-risk under EU AI Act Annex III",
            confidence=confidence,
        )

    return ComplianceDecision(
        action="process_with_transparency",
        risk_level=RiskLevel.LIMITED,
        requires_human_review=False,
        reason="Standard limited-risk GenAI application — transparency obligations apply",
        confidence=confidence,
    )
```

Example C: Building an Audit Trail for LLM Outputs
Every LLM interaction should be logged with enough context to reconstruct the decision chain during an audit.
```python
# Requires: python >= 3.10

import json
import hashlib
from datetime import datetime
from dataclasses import dataclass, asdict

@dataclass
class AuditEntry:
    """Immutable audit record for a single LLM interaction."""
    interaction_id: str
    timestamp: str
    user_id: str
    input_text: str
    output_text: str
    model_id: str
    risk_classification: str
    human_reviewed: bool
    reviewer_id: str | None
    confidence_score: float
    # Hash for tamper detection
    content_hash: str = ""

    def __post_init__(self):
        payload = f"{self.interaction_id}:{self.input_text}:{self.output_text}"
        self.content_hash = hashlib.sha256(payload.encode()).hexdigest()

class ComplianceAuditLog:
    """Append-only audit log for EU AI Act compliance."""

    def __init__(self, storage_backend):
        self.storage = storage_backend

    async def log_interaction(self, entry: AuditEntry) -> None:
        """Write audit entry to immutable storage."""
        record = asdict(entry)
        record["logged_at"] = datetime.utcnow().isoformat()
        await self.storage.append(json.dumps(record))

    async def query_by_user(self, user_id: str, start: str, end: str) -> list:
        """Retrieve audit entries for GDPR subject access requests."""
        return await self.storage.query(
            user_id=user_id, start_date=start, end_date=end
        )
```

The content_hash field provides tamper detection. If an audit entry is modified after the fact, the hash will not match. Store logs in append-only storage — object storage with versioning enabled, or a database with write-once permissions.
7. AI Compliance Trade-offs and When Not to Over-Invest
Compliance adds development overhead, and the right level of investment depends on your risk classification — not the maximum possible effort.
The Cost of Compliance
Adding compliance infrastructure to a GenAI application increases development time by 15-25% for limited-risk applications (transparency labels, basic documentation, audit logging). For high-risk applications, the overhead reaches 40-60% when you include conformity assessments, extensive documentation, and formal human oversight mechanisms.
The Real Trade-off Matrix
| Approach | Dev Overhead | Audit Readiness | Risk Exposure |
|---|---|---|---|
| No compliance | 0% | None | Maximum — fines up to 7% of revenue |
| Minimal transparency | 5-10% | Partial | Moderate — covers limited-risk basics |
| Full limited-risk compliance | 15-25% | Good | Low — meets all limited-risk obligations |
| Full high-risk compliance | 40-60% | Complete | Minimal — meets all obligations |
Where Engineers Get Burned
Over-compliance is the first trap: building a full high-risk conformity pipeline for a limited-risk chatbot burns 40%+ overhead on obligations that do not apply to you. Under-compliance is equally dangerous in the opposite direction. Teams that ignore transparency requirements because “we are just a startup” discover that the EU AI Act applies based on market access, not company size. If a single EU resident uses your product, you are in scope.
The practical approach: implement transparency requirements immediately (they are cheap and universally required). Build documentation infrastructure that grows with your system. Prepare audit trail capabilities even if you are not high-risk today — your classification may change as you enter new markets or add new features.
8. AI Compliance Interview Questions and Answers
Compliance questions test whether you understand the intersection of engineering, law, and ethics — a signal of senior-level thinking.
Q1: How would you ensure your AI application complies with the EU AI Act?
What the interviewer is testing: Can you translate regulatory requirements into engineering deliverables?
Strong answer: “First, I would classify the application’s risk level by checking it against the prohibited list and Annex III high-risk categories. Most GenAI applications fall into limited risk, which requires transparency obligations — AI disclosure labels, content marking, and user notification. I would implement these as part of the API response schema, not as an afterthought. Then I would set up documentation infrastructure: model cards in version control, training data provenance logs, and evaluation result tracking. For monitoring, I would add audit trail logging to every LLM interaction with immutable storage and tamper-detection hashes. The key insight is that compliance is an engineering system, not a legal document.”
Weak answer: “I would check with the legal team to make sure we are compliant.” (This shows no understanding of the technical obligations.)
Q2: Design a monitoring system for AI compliance
What the interviewer is testing: Can you architect production-grade observability for regulatory requirements?
Strong answer: “I would build four monitoring layers. First, an audit trail that logs every input, output, model version, confidence score, and routing decision to immutable storage. Second, a drift detection pipeline that compares current model performance against documented baselines — accuracy, latency, failure rates — and alerts when thresholds are breached. Third, an incident classification system that tags failures by severity and regulatory impact, with automatic escalation for safety-critical issues. Fourth, a reporting module that generates quarterly compliance summaries from the audit data. The entire system feeds into the team’s existing LLMOps observability stack.”
Q3: What documentation does a GenAI application need for EU AI Act compliance?
What the interviewer is testing: Do you know the difference between limited-risk and high-risk documentation requirements?
Strong answer: “For limited-risk applications, which covers most GenAI systems, I would maintain: a system description document covering architecture, intended purpose, and scope; a model card for each model version with training methodology, evaluation metrics, and known limitations; a risk classification record with reasoning; and transparency implementation evidence — screenshots or API specs showing AI disclosure. For high-risk systems, add: formal risk assessment documentation, detailed training data provenance with bias analysis, human oversight mechanism specifications, conformity assessment records, and a post-market monitoring plan. All documentation lives in version control alongside the code and updates with every significant system change.”
9. AI Compliance in Production — Cost, Monitoring, and Insurance
Production compliance extends beyond initial implementation into ongoing operations that affect cost, legal standing, and organizational risk posture.
Audit Trails at Scale
At production scale — millions of LLM interactions per day — audit trail storage costs become significant. A single interaction record (input, output, metadata, hash) averages 2-5 KB. At 1 million daily interactions, that is 2-5 GB per day, or 730 GB-1.8 TB per year. Use tiered storage: hot storage (30 days) for active investigations, warm storage (1 year) for regulatory review windows, cold storage (7 years) for legal retention requirements.
The EU AI Act does not specify exact retention periods for limited-risk systems, but GDPR’s data minimization principle applies. Retain audit logs for the minimum period needed for your compliance obligations and delete them when they are no longer needed. High-risk systems should plan for regulatory review cycles of 12-24 months.
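The storage arithmetic above can be checked with a small estimator, and the tiered-retention policy expressed as a lookup. This is a sketch; the tier boundaries follow the 30-day / 1-year / 7-year split described above, and the record size is whatever you measure for your own logs.

```python
# Sketch: audit-log storage estimate and tier assignment.

def yearly_storage_gb(daily_interactions: int, kb_per_record: float) -> float:
    """Estimate audit-log volume per year in GB (1 GB = 1,000,000 KB)."""
    return daily_interactions * kb_per_record * 365 / 1_000_000

def storage_tier(age_days: int) -> str:
    """Map a record's age to the hot/warm/cold tier described above."""
    if age_days <= 30:
        return "hot"      # active investigations
    if age_days <= 365:
        return "warm"     # regulatory review window
    return "cold"         # legal retention, up to 7 years

# 1M interactions/day at 2 KB and 5 KB per record:
print(round(yearly_storage_gb(1_000_000, 2)))   # lower bound, in GB
print(round(yearly_storage_gb(1_000_000, 5)))   # upper bound, in GB
```

Running the estimator reproduces the figures in the text: roughly 730 GB at 2 KB per record and about 1.8 TB at 5 KB per record.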
GDPR Intersection
If your AI system processes personal data and makes automated decisions, GDPR Article 22 grants individuals the right to meaningful information about the logic involved. Your system design must support:
- Subject access requests: Export all AI-related data for a specific user within 30 days
- Right to explanation: Explain why the AI made a specific decision (this is where audit trails pay for themselves)
- Right to object: Users can opt out of purely automated decision-making for decisions with legal or significant effects
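Given the audit trail from Checkpoint 5, a subject access request export is mostly a filtered dump of that log. This is a minimal sketch: the in-memory list and record shape are assumptions standing in for a real audit backend, which would be queried by user ID and date range instead.

```python
# Sketch: GDPR subject access request export over an in-memory
# audit store (a stand-in for a real storage backend).
import json

AUDIT_LOG = [
    {"user_id": "u1", "input": "hi", "output": "hello", "ts": "2026-01-05"},
    {"user_id": "u2", "input": "hey", "output": "hi", "ts": "2026-01-06"},
    {"user_id": "u1", "input": "bye", "output": "goodbye", "ts": "2026-02-01"},
]

def export_user_data(user_id: str) -> str:
    """Bundle every AI interaction for one user into a JSON export."""
    records = [r for r in AUDIT_LOG if r["user_id"] == user_id]
    return json.dumps({"user_id": user_id, "interactions": records}, indent=2)

# Export everything held on user "u1" for a 30-day SAR deadline.
export = export_user_data("u1")
```

Because the audit trail already records inputs, outputs, and routing decisions per user, the same infrastructure serves compliance audits and data-subject rights without a separate pipeline.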
Model Cards as Living Documents
A model card is not a PDF you write once at launch. It is a versioned document that updates with every fine-tuning run, knowledge base refresh, or system prompt change. Include:
- Model ID, version, and provider
- Intended use cases and out-of-scope applications
- Training data summary (without exposing proprietary data)
- Evaluation metrics: accuracy, bias assessments, safety benchmarks
- Known limitations and failure modes
- Incident history: what went wrong and what was fixed
SOC 2 and ISO 27001 Implications
If your organization already holds SOC 2 or ISO 27001 certification, AI compliance adds new control requirements. Your existing information security management system (ISMS) needs to cover AI-specific risks: model integrity, training data security, output validation, and access controls for model parameters. The overlap is significant — audit logging, access management, and incident response processes serve both frameworks.
Insurance Considerations
Cyber liability insurance policies are evolving to cover AI-specific risks. Insurers increasingly require evidence of AI governance — documented risk classifications, compliance controls, and incident response plans — before providing coverage for AI-related claims. Your compliance documentation is not just for regulators; it is for your insurer.
For a broader view of production operations, see the LLMOps guide. For security-specific patterns, see LLM Security.
10. Summary and Key Takeaways
AI compliance is an engineering discipline. Here is what to remember.
- Most GenAI applications are limited risk — you need transparency labels and basic documentation, not a full conformity assessment. Classify your system before scoping your compliance work.
- The EU AI Act applies based on market access, not company location. If EU residents use your product, you are in scope regardless of where your company is headquartered.
- Five compliance checkpoints structure the work: risk classification, transparency implementation, documentation, human oversight, and monitoring. Implement them in order.
- Compliance adds 15-25% development overhead for limited-risk applications. Over-compliance wastes resources; under-compliance risks fines up to 7% of global revenue.
- Audit trails are the foundation. Log every LLM interaction with immutable storage and tamper-detection hashes. They serve compliance, debugging, GDPR subject access requests, and incident investigation.
- Model cards are living documents, not one-time deliverables. Update them with every significant system change.
- Full enforcement begins August 2026. Prohibited practices are already banned (February 2025) and general-purpose AI rules apply from August 2025. The window to prepare is narrowing.
Related
- AI Safety — broader safety principles beyond regulatory compliance
- AI Guardrails — runtime safety controls for LLM applications
- Human-in-the-Loop — approval workflows and oversight patterns
- LLM Security — securing AI systems against adversarial attacks
- Hallucination Mitigation — output quality monitoring
- LLMOps — production monitoring and operations
- System Design — architecture patterns for GenAI applications
- Interview Questions — comprehensive GenAI interview preparation
- Evaluation — measuring AI system performance
Last updated: March 20, 2026. EU AI Act enforcement timeline and risk categories verified against the official EU AI Act text (Regulation 2024/1689) as of March 2026.
Frequently Asked Questions
What is AI compliance and why does it matter for engineers?
AI compliance is the process of ensuring your AI system meets legal and regulatory requirements. For engineers, it matters because the EU AI Act places technical obligations — risk classification, documentation, transparency labeling, and monitoring — directly on the teams that build and deploy AI systems. Non-compliance carries fines up to 7% of global annual revenue.
What risk category do most GenAI applications fall under in the EU AI Act?
Most GenAI applications — chatbots, content generators, coding assistants, summarization tools — fall into the limited risk category. This requires transparency obligations: you must clearly disclose that content is AI-generated and that users are interacting with an AI system. Only applications in regulated domains like hiring, credit scoring, or medical diagnosis typically qualify as high risk.
What are the EU AI Act enforcement deadlines?
The EU AI Act follows a phased enforcement timeline. February 2025: prohibited AI practices banned. August 2025: general-purpose AI model rules and governance structures apply. August 2026: full enforcement of all provisions including high-risk system requirements, conformity assessments, and the complete penalty framework.
What fines does the EU AI Act impose for non-compliance?
The EU AI Act has a three-tier penalty structure. Deploying prohibited AI systems: up to 35 million EUR or 7% of global annual revenue. Violating high-risk or general-purpose AI obligations: up to 15 million EUR or 3% of global annual revenue. Providing incorrect information to regulators: up to 7.5 million EUR or 1% of global annual revenue.
How do you classify your AI application's risk level under the EU AI Act?
Start by checking if your application falls into the prohibited category (social scoring, real-time biometric identification in public spaces). Then check high-risk annexes: hiring tools, credit scoring, medical devices, critical infrastructure. If your application generates or manipulates content, it is limited risk with transparency requirements. Everything else is minimal risk with no mandatory obligations.
What transparency requirements apply to GenAI applications?
Under Article 50 of the EU AI Act (Article 52 in the earlier draft numbering), GenAI applications must clearly disclose AI involvement. Chatbots must inform users they are interacting with AI. AI-generated content (text, images, audio, video) must be labeled as artificially generated. Deepfakes must be disclosed. These requirements apply regardless of the application's overall risk classification.
What technical documentation does the EU AI Act require?
For high-risk systems, documentation must cover: system description and intended purpose, risk assessment methodology and results, data governance practices including training data provenance, human oversight measures, accuracy and robustness metrics, and post-deployment monitoring plans. For general-purpose AI models, model cards with training methodology, evaluation results, and known limitations are required.
How does AI compliance intersect with GDPR?
GDPR and the EU AI Act are complementary. GDPR governs personal data processing and grants individuals the right to explanation for automated decisions. The EU AI Act adds AI-specific requirements on top. If your AI system processes personal data and makes automated decisions, you must comply with both: GDPR for data protection rights and the AI Act for risk classification, transparency, and documentation.
What is a model card and why do engineers need to maintain one?
A model card is a standardized document describing a machine learning model's intended use, training data, evaluation metrics, known limitations, and ethical considerations. Under the EU AI Act, general-purpose AI model providers must maintain technical documentation equivalent to model cards. Engineers need model cards for audit readiness, internal knowledge transfer, and regulatory compliance.
How do you implement AI compliance monitoring in production?
Production AI compliance monitoring requires four components: audit trail logging that records every input, output, and decision with timestamps; drift detection that flags when model behavior deviates from documented metrics; incident classification that categorizes failures by severity and regulatory impact; and automated reporting that generates compliance summaries for quarterly review. See the LLMOps guide for implementation patterns.