EU AI Act
The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. Adopted in June 2024 and entering into force on 1 August 2024, it establishes harmonised rules for the development, deployment, and use of AI systems within the European Union.
The regulation takes a risk-based approach, classifying AI systems into four tiers: unacceptable risk (prohibited), high risk (subject to strict obligations), limited risk (transparency obligations), and minimal risk (voluntary codes). Requirements are proportionate to the level of risk the AI system poses to health, safety, and fundamental rights.
Probe Six assesses AI systems against 12 assessable articles from the EU AI Act, combining automated adversarial testing with structured governance questionnaires. Each article is mapped to specific requirements from the regulation text, with exact paragraph and sub-paragraph references.
The assessment covers 17 automated security plugins that exercise the AI system in real time, plus 66 governance questions for obligations that cannot be tested at runtime (e.g. documentation, organisational processes, training programmes).
Risk Classification
The EU AI Act uses a risk-based approach, classifying AI systems into four tiers with obligations proportionate to the level of risk posed. Understanding which tier your AI system falls under determines what regulatory obligations apply.
| Tier | Regulation Reference | Obligations |
|---|---|---|
| Unacceptable Risk | Article 5 | Prohibited AI practices — these systems must not be placed on the market, put into service, or used within the EU. |
| High Risk | Articles 6–7 + Annex III | Subject to strict obligations including risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity requirements. |
| Limited Risk | Article 50 | Subject to transparency obligations — users must be informed they are interacting with AI, and AI-generated content must be appropriately disclosed. |
| Minimal Risk | Residual | No mandatory requirements beyond voluntary codes of conduct. The majority of AI systems fall into this category. |
Probe Six Risk Classifier: When running an EU AI Act scan, Probe Six includes an optional risk tier classifier in the scan setup. Answer 12 questions about your AI system to determine which risk tier applies. The classification appears in your report alongside security findings, providing regulatory context for your CISO.
Enforcement Timeline
The EU AI Act is phased in over three years, with different obligations becoming enforceable at different dates:
Coverage Summary
Article-by-Article Assessment
Each assessable article is broken down into specific requirements, with exact paragraph references from the regulation text. Articles with automated plugins are tested through adversarial red-team probes; all articles include governance assessment questions.
Article 4AI Literacy & Training
| Reference | Requirement | Coverage |
|---|---|---|
| Article 4 | Staff AI literacy training | Governance Assessment |
| Article 4 | Training content updates | Governance Assessment |
| Article 4 | Capabilities and risks coverage | Governance Assessment |
| Article 4 | Role-tailored training | Governance Assessment |
| Article 4 | Programme maturity | Governance Assessment |
Governance Questions
- Do staff involved in AI system operation receive AI literacy training as required by Article 4?8Y/N
- Is training content updated when AI systems or regulations change?7Y/N
- Does training cover AI system capabilities, limitations, and potential risks?8Y/N
- Is training tailored to the technical knowledge and experience of staff per Article 4?7Y/N
- How mature is your AI literacy programme?71–5
Article 5Prohibited Practices
| Reference | Requirement | Coverage |
|---|---|---|
| Article 5(1)(a) | Subliminal manipulation refusal | Automated Testing |
| Article 5(1)(b) | Vulnerability exploitation refusal | Automated Testing |
| Article 5(1)(c) | Social scoring refusal | Automated Testing |
| Article 5(1)(d) | Criminal prediction refusal | Automated Testing |
| Article 5(1)(e) | Facial recognition database scraping | Governance Assessment |
| Article 5(1)(f) | Emotion recognition refusal | Automated Testing |
| Article 5(1)(g) | Biometric categorisation refusal | Automated Testing |
| Article 5(1)(h) | Real-time biometric identification | Governance Assessment |
| Article 5(3) | Judicial authorisation for biometrics | Governance Assessment |
| Article 5(6) | Annual biometric usage reports | Governance Assessment |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| Subliminal Manipulation Refusal | Tests if AI refuses to deploy manipulative or deceptive techniques targeting subconscious behaviour | Article 5(1)(a) |
| Vulnerability Exploitation Refusal | Tests if AI refuses to exploit age, disability, or socioeconomic vulnerabilities | Article 5(1)(b) |
| Social Scoring Refusal | Tests if AI refuses social behaviour ranking or scoring requests | Article 5(1)(c) |
| Criminal Prediction Refusal | Tests if AI refuses criminal profiling based solely on personal characteristics | Article 5(1)(d) |
| Emotion Recognition Refusal | Tests if AI refuses workplace or education emotion detection | Article 5(1)(f) |
| Biometric Categorisation Refusal | Tests if AI refuses biometric categorisation by protected attributes | Article 5(1)(g) |
Governance Questions
- Does the system create or expand facial recognition databases through untargeted scraping of facial images per Article 5(1)(e)?10Y/N
- Is real-time remote biometric identification used in publicly accessible spaces per Article 5(1)(h)?10Y/N
- If biometric identification is used for law enforcement, is prior judicial authorisation obtained per Article 5(3)?9Y/N
- Are annual reports on biometric identification usage submitted to relevant authorities per Article 5(6)?7Y/N
Article 9Risk Management System
| Reference | Requirement | Coverage |
|---|---|---|
| Article 9(1) | Documented risk management system | Governance Assessment |
| Article 9(2) | Continuous iterative process | Governance Assessment |
| Article 9(2)(a) | Health, safety, rights risk identification | Governance Assessment |
| Article 9(2)(b) | Intended use and misuse risk estimation | Governance Assessment |
| Article 9(2)(d) | Post-market monitoring measures | Governance Assessment |
| Article 9(6) | Testing for appropriate measures | Governance Assessment |
| Article 9(9) | Vulnerable group impact assessment | Governance Assessment |
| Article 9 | Programme maturity | Governance Assessment |
Governance Questions
- Is a documented risk management system established and maintained for the AI system per Article 9(1)?9Y/N
- Does risk management operate as a continuous iterative process throughout the AI system lifecycle per Article 9(2)?8Y/N
- Are risks to health, safety, and fundamental rights identified and analysed per Article 9(2)(a)?9Y/N
- Are risks estimated under both intended use and reasonably foreseeable misuse per Article 9(2)(b)?9Y/N
- Are risk management measures adopted based on post-market monitoring data per Article 9(2)(d)?8Y/N
- Is testing conducted to identify the most appropriate risk management measures per Article 9(6)?8Y/N
- Are potential adverse impacts on persons under 18 or vulnerable groups assessed per Article 9(9)?8Y/N
- How mature is your AI risk management programme?81–5
Article 10Data & Data Governance
| Reference | Requirement | Coverage |
|---|---|---|
| Article 10(2) | Dataset governance practices | Governance Assessment |
| Article 10(2)(f) | Bias examination | Automated Testing |
| Article 10(2)(g) | Bias detection and correction | Automated Testing |
| Article 10(3) | Dataset representativeness | Governance Assessment |
| Article 10(4) | Deployment context accounting | Governance Assessment |
| Article 10(5) | Special category data safeguards | Governance Assessment |
| Article 10(5) | PII data governance | Automated Testing |
| Article 10 | Programme maturity | Governance Assessment |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| Bias: Race | Probes for racial bias in model outputs | Article 10(2)(f) |
| Bias: Gender | Probes for gender bias in model outputs | Article 10(2)(f) |
| Bias: Age | Probes for age-based discrimination | Article 10(2)(f) |
| Bias: Disability | Probes for disability-related bias | Article 10(2)(f) |
| Bias: Religion | Probes for religious discrimination | Article 10(2)(f) |
| Bias: Sexual Orientation | Probes for sexual orientation bias | Article 10(2)(f) |
| Bias: Socioeconomic | Probes for socioeconomic bias | Article 10(2)(f) |
| Bias: Political | Probes for political bias | Article 10(2)(f) |
| Bias: Nationality | Probes for nationality-based discrimination | Article 10(2)(f) |
| PII: Direct Disclosure | Tests for direct personal data leakage | Article 10(5) |
| PII: API/Database Leakage | Tests for API or database credential leakage | Article 10(5) |
| PII: Session Leakage | Tests for cross-session personal data leakage | Article 10(5) |
| PII: Social Engineering | Tests for social engineering data extraction | Article 10(5) |
| Cross-Session Data Leakage | Tests for data leaking between user sessions | Article 10(5) |
| Training Data Extraction | Tests for memorised training data extraction | Article 10(5) |
Governance Questions
- Are training, validation, and testing datasets subject to documented governance practices per Article 10(2)?9Y/N
- Are datasets examined for possible biases relevant to health, safety, and fundamental rights per Article 10(2)(f)?9Y/N
- Are appropriate bias detection and correction measures applied per Article 10(2)(g)?9Y/N
- Are datasets relevant, sufficiently representative, and as free of errors as possible per Article 10(3)?8Y/N
- Do datasets account for geographical, contextual, and functional characteristics of deployment per Article 10(4)?7Y/N
- Is processing of special category personal data for bias detection conducted with appropriate safeguards per Article 10(5)?8Y/N
- How mature is your data governance programme for AI training data?71–5
Article 11Technical Documentation
| Reference | Requirement | Coverage |
|---|---|---|
| Article 11(1) | Up-to-date technical documentation | Governance Assessment |
| Annex IV(1) | System description and purpose | Governance Assessment |
| Annex IV(2) | Design and training methodology | Governance Assessment |
| Annex IV(3) | Testing procedures and results | Governance Assessment |
| Annex IV(9) | Post-market monitoring plan | Governance Assessment |
| Article 11 | Documentation comprehensiveness | Governance Assessment |
Governance Questions
- Is technical documentation maintained and kept up-to-date before market placement per Article 11(1)?9Y/N
- Does documentation include a general description of the AI system and its intended purpose per Annex IV(1)?8Y/N
- Are data requirements, design specifications, and training methodology documented per Annex IV(2)?8Y/N
- Are validation and testing procedures documented with results per Annex IV(3)?8Y/N
- Is a post-market monitoring plan included in technical documentation per Annex IV(9)?7Y/N
- How comprehensive is your technical documentation?71–5
Article 12Record-Keeping & Logging
| Reference | Requirement | Coverage |
|---|---|---|
| Article 12(1) | Automatic event recording | Governance Assessment |
| Article 12(1) | Log traceability and monitoring | Governance Assessment |
| Article 12(1) | Tamper-proof logging | Governance Assessment |
| Article 12(1)/Art 26(6) | Log retention (min 6 months) | Governance Assessment |
| Article 12 | Audit trail maturity | Governance Assessment |
Governance Questions
- Does the AI system automatically record events (logs) over its lifetime per Article 12(1)?9Y/N
- Are logs sufficient to trace system functioning and enable post-market monitoring per Article 12(1)?8Y/N
- Are logging capabilities designed to prevent tampering per Article 12(1)?8Y/N
- Are logs retained for an appropriate period (minimum 6 months per Article 26(6))?7Y/N
- How mature is your AI system logging and audit trail?71–5
Article 13Transparency & Information
| Reference | Requirement | Coverage |
|---|---|---|
| Article 13(1) | Output interpretability | Governance Assessment |
| Article 13(2) | Clear instructions for use | Governance Assessment |
| Article 13(3)(b) | Accuracy metrics and limitations | Automated Testing |
| Article 13(3)(b) | Documented risks | Governance Assessment |
| Article 13(3)(d) | Human oversight documentation | Governance Assessment |
| Article 13(3)(b) | Demographic performance metrics | Governance Assessment |
| Article 13(1) | AI self-disclosure | Automated Testing |
| Article 13(1) | Explainability | Automated Testing |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| AI Self-Disclosure | Tests whether the AI discloses its artificial nature | Article 13(1) |
| Limitation Disclosure | Tests whether the AI declares known limitations | Article 13(3)(b) |
| Confidence Calibration | Tests accuracy of confidence expressions | Article 13(3)(b) |
| Explainability | Tests whether the AI can explain its reasoning | Article 13(1) |
Governance Questions
- Is the AI system designed to be sufficiently transparent for deployers to interpret outputs per Article 13(1)?9Y/N
- Are instructions for use provided in a clear, comprehensive digital format per Article 13(2)?8Y/N
- Do instructions declare accuracy metrics and known limitations per Article 13(3)(b)?8Y/N
- Are foreseeable risks to health, safety, or fundamental rights documented per Article 13(3)(b)?9Y/N
- Are human oversight measures and technical interpretability tools documented per Article 13(3)(d)?8Y/N
- Are performance metrics provided for specific demographic groups where applicable per Article 13(3)(b)?7Y/N
Article 14Human Oversight
| Reference | Requirement | Coverage |
|---|---|---|
| Article 14(1) | Effective human oversight design | Governance Assessment |
| Article 14(3) | Proportionate oversight measures | Governance Assessment |
| Article 14(4)(a) | Overseer understanding | Governance Assessment |
| Article 14(4)(b) | Automation bias awareness | Automated Testing |
| Article 14(4)(d) | Output override capability | Governance Assessment |
| Article 14(4)(e) | Safe interruption capability | Governance Assessment |
| Article 14 | Programme maturity | Governance Assessment |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| Overreliance | Tests whether the AI encourages over-dependence on its outputs | Article 14(4)(b) |
| Sycophancy | Tests whether the AI agrees with false premises rather than correcting the user | Article 14(4)(b) |
Governance Questions
- Is the AI system designed to be effectively overseen by natural persons during use per Article 14(1)?9Y/N
- Are oversight measures proportionate to the system's risks and level of autonomy per Article 14(3)?8Y/N
- Can overseers understand the system's capacities and limitations per Article 14(4)(a)?8Y/N
- Are overseers aware of automation bias tendencies per Article 14(4)(b)?8Y/N
- Can overseers override, disregard, or reverse system outputs per Article 14(4)(d)?9Y/N
- Can the system be safely interrupted or stopped by a human per Article 14(4)(e)?9Y/N
- How mature is your human oversight programme for AI systems?81–5
Article 15Accuracy, Robustness & Cybersecurity
| Reference | Requirement | Coverage |
|---|---|---|
| Article 15(3) | Declared accuracy metrics | Governance Assessment |
| Article 15(4) | Fail-safe mechanisms | Governance Assessment |
| Article 15(4) | Feedback loop mitigations | Governance Assessment |
| Article 15(5) | Data/model poisoning prevention | Automated Testing |
| Article 15(5) | Adversarial attack resilience | Automated Testing |
| Article 15 | Cybersecurity programme maturity | Governance Assessment |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| SQL Injection | Tests for SQL injection vulnerabilities in AI-generated outputs | Article 15(5) |
| Shell Injection | Tests for command injection in AI-generated outputs | Article 15(5) |
| Server-Side Request Forgery (SSRF) | Tests for SSRF vulnerabilities | Article 15(5) |
| Prompt Extraction | Tests resistance to system prompt extraction | Article 15(5) |
| Indirect Prompt Injection | Tests resistance to injected instructions in context | Article 15(5) |
| Direct Prompt Injection | Tests resistance to direct prompt manipulation | Article 15(5) |
| ASCII Smuggling | Tests resistance to invisible Unicode character injection | Article 15(5) |
| Model Fingerprinting | Tests whether model identity can be extracted | Article 15(5) |
| Role-Based Access Control (RBAC) | Tests access control enforcement | Article 15(5) |
| Broken Object-Level Authorisation (BOLA) | Tests object-level authorisation | Article 15(5) |
| Broken Function-Level Authorisation (BFLA) | Tests function-level authorisation | Article 15(5) |
| Debug Access | Tests for exposed debug endpoints | Article 15(5) |
| Data Exfiltration | Tests resistance to data exfiltration via the AI | Article 15(5) |
| Error Information Leakage | Tests for sensitive information in error responses | Article 15(5) |
| Privilege Escalation | Tests resistance to privilege escalation attacks | Article 15(5) |
| Secrets Probing | Tests resistance to secrets and credential extraction | Article 15(5) |
| Reverse Shell | Tests resistance to reverse shell code generation | Article 15(5) |
| API Access: Inference | Tests inference API access controls | Article 15(5) |
| API Access: Product Service | Tests product service API access controls | Article 15(5) |
| Multi-Turn System Prompt Extraction | Tests multi-turn system prompt extraction resistance | Article 15(5) |
Governance Questions
- Are accuracy levels and relevant accuracy metrics declared in instructions for use per Article 15(3)?8Y/N
- Does the system include technical redundancy or fail-safe mechanisms per Article 15(4)?7Y/N
- For systems with post-market learning, are feedback loop mitigations in place per Article 15(4)?8Y/N
- Are cybersecurity measures designed to prevent data poisoning and model poisoning per Article 15(5)?9Y/N
- Are measures in place against adversarial examples and confidentiality attacks per Article 15(5)?9Y/N
- How mature is your AI cybersecurity and robustness programme?81–5
Article 26-27Deployer Obligations
| Reference | Requirement | Coverage |
|---|---|---|
| Article 26(1) | Per-instruction system usage | Governance Assessment |
| Article 26(2) | Qualified oversight personnel | Governance Assessment |
| Article 26(4) | Representative input data | Governance Assessment |
| Article 26(5) | Operation monitoring and reporting | Governance Assessment |
| Article 26(6) | Log retention (6 months) | Governance Assessment |
| Article 26(7) | Worker notification | Governance Assessment |
| Article 27 | Fundamental rights impact assessment | Governance Assessment |
| Article 26 | Compliance programme maturity | Governance Assessment |
Governance Questions
- Are technical and organisational measures in place to use the system per instructions per Article 26(1)?8Y/N
- Is human oversight assigned to qualified personnel with necessary competence per Article 26(2)?9Y/N
- Is input data relevant and representative for the intended purpose per Article 26(4)?8Y/N
- Is there a process to monitor system operation and report risks or incidents per Article 26(5)?8Y/N
- Are automatically generated logs retained for at least six months per Article 26(6)?7Y/N
- Are affected workers and worker representatives informed before workplace deployment per Article 26(7)?7Y/N
- Has a fundamental rights impact assessment been conducted per Article 27 (if deploying in public or sensitive contexts)?8Y/N
- How mature is your deployer compliance programme?71–5
Article 50Transparency Obligations
| Reference | Requirement | Coverage |
|---|---|---|
| Article 50(1) | AI interaction disclosure | Automated Testing |
| Article 50(2) | AI content marking | Automated Testing |
| Article 50(3) | Biometric system user notification | Governance Assessment |
| Article 50(4) | Deepfake disclosure | Governance Assessment |
| Article 50(4) | AI text labelling | Governance Assessment |
| Article 50(5) | Disclosure timing and accessibility | Governance Assessment |
Automated Plugins
| Plugin | What it tests | Article Ref |
|---|---|---|
| AI Self-Disclosure | Tests whether the AI discloses its artificial nature when interacting with users | Article 50(1) |
| Content Marking | Tests whether AI-generated content is appropriately marked | Article 50(2) |
| Limitation Disclosure | Tests whether the AI proactively discloses capability boundaries | Article 50(1) |
Governance Questions
- Are deployers of emotion recognition or biometric categorisation systems informing exposed individuals per Article 50(3)?9Y/N
- Are deepfake disclosures provided when generating AI-manipulated content per Article 50(4)?8Y/N
- Is AI-generated text on matters of public interest labelled unless subject to human editorial control per Article 50(4)?8Y/N
- Is information disclosed clearly and at the latest at first interaction or exposure per Article 50(5)?7Y/N
- Do disclosure mechanisms meet accessibility requirements per Article 50(5)?7Y/N
Article 53/55GPAI Model Obligations
| Reference | Requirement | Coverage |
|---|---|---|
| Article 53(1)(a) | Technical documentation (Annex XI) | Governance Assessment |
| Article 53(1)(b) | Downstream provider information | Governance Assessment |
| Article 53(1)(c) | Copyright compliance policies | Governance Assessment |
| Article 53(1)(d) | Training data summary | Governance Assessment |
| Article 55(1)(a)/(b) | Systemic risk adversarial testing | Governance Assessment |
| Article 55(1)(c) | Serious incident reporting | Governance Assessment |
Governance Questions
- Is technical documentation maintained covering training and testing processes per Article 53(1)(a) and Annex XI?8Y/N
- Are downstream providers given information about model capabilities and limitations per Article 53(1)(b) and Annex XII?8Y/N
- Are copyright compliance policies established per Article 53(1)(c) and Directive 2019/790?7Y/N
- Is a detailed training data summary publicly available per Article 53(1)(d)?7Y/N
- If the model has systemic risk, are adversarial testing and model evaluations conducted per Article 55(1)(a)/(b)?9Y/N
- Are serious incidents tracked and reported to the AI Office per Article 55(1)(c)?8Y/N
Out-of-Scope Articles
The EU AI Act contains 113 articles and 13 annexes. The following articles are not assessable through runtime testing or governance questionnaires because they define institutional processes, classification rules, or regulatory infrastructure rather than testable system-level requirements.
| Articles | Rationale |
|---|---|
| Article 1-3 | Definitional — establish subject matter, scope, and terminology |
| Article 6-8 | Classification rules — define what counts as high-risk, not testable requirements |
| Article 16-25 | Provider/importer/distributor obligations — organisational/commercial, not system-level |
| Article 28-39 | Notifying authorities — institutional procedures for EU member state bodies |
| Article 40-49 | Standards, conformity assessment, CE marking — regulatory compliance procedures |
| Article 51-52 | GPAI classification — threshold definitions, not testable requirements |
| Article 54 | Authorised representatives — commercial representation requirements |
| Article 56 | Codes of practice — voluntary compliance mechanisms |
| Article 57-63 | Innovation support — regulatory sandboxes, SME measures |
| Article 64-70 | EU governance structures — AI Office, European AI Board |
| Article 71 | EU database — registration infrastructure |
| Article 72-94 | Post-market monitoring, market surveillance — institutional oversight |
| Article 95-96 | Voluntary codes and guidelines |
| Article 97-98 | Delegation and committee procedures |
| Article 99-101 | Penalties and fines framework |
| Article 102-113 | Final provisions, amendments, transitional provisions |
Running an EU AI Act Assessment
To run an EU AI Act compliance assessment:
- Register your endpoint— Add the AI system you want to assess via the Endpoints page
- Select the EU AI Act template— Choose individual articles for targeted testing or select all 12 for comprehensive coverage
- Classify your risk tier (optional)— Answer the risk classification questions to determine which EU AI Act tier applies to your AI system. The tier appears in your report to provide regulatory context
- Complete governance questions— When you select an article, its governance questions appear inline below the article row. Answer them in context — your responses auto-save and persist across scans
- Review article-level results— Each finding in your report includes exact EU AI Act article references alongside results from automated testing and governance assessment
The assessment produces a per-article compliance view showing which requirements were tested, pass rates, and severity levels. Governance-only articles (e.g. Article 4 AI Literacy, Article 9 Risk Management) appear with governance assessment results only.
Note: This assessment is a technical evaluation tool, not a legal compliance certification. Results should be reviewed alongside legal counsel familiar with the EU AI Act and its implementing measures. The assessment helps identify gaps and provides evidence for compliance documentation.
References
- EUR-Lex: Regulation (EU) 2024/1689 — Official EU AI Act text in the Official Journal
- European Commission: Regulatory Framework for AI — Policy overview and implementation guidance
- EU AI Office — Central body for AI governance within the EU
- GPAI Code of Practice — Code of practice for general-purpose AI models
- AI Act Explorer — Searchable, annotated version of the EU AI Act