EU AI Act

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. Adopted in June 2024 and entering into force on 1 August 2024, it establishes harmonised rules for the development, deployment, and use of AI systems within the European Union.

The regulation takes a risk-based approach, classifying AI systems into four tiers: unacceptable risk (prohibited), high risk (subject to strict obligations), limited risk (transparency obligations), and minimal risk (voluntary codes). Requirements are proportionate to the level of risk the AI system poses to health, safety, and fundamental rights.

Probe Six assesses AI systems against 12 assessable articles from the EU AI Act, combining automated adversarial testing with structured governance questionnaires. Each article is mapped to specific requirements from the regulation text, with exact paragraph and sub-paragraph references.

The assessment covers 17 automated security plugins that exercise the AI system in real time, plus 66 governance questions for obligations that cannot be tested at runtime (e.g. documentation, organisational processes, training programmes).

Risk Classification

The EU AI Act uses a risk-based approach, classifying AI systems into four tiers with obligations proportionate to the level of risk posed. Understanding which tier your AI system falls under determines what regulatory obligations apply.

TierRegulation ReferenceObligations
Unacceptable RiskArticle 5Prohibited AI practices — these systems must not be placed on the market, put into service, or used within the EU.
High RiskArticles 6–7 + Annex IIISubject to strict obligations including risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity requirements.
Limited RiskArticle 50Subject to transparency obligations — users must be informed they are interacting with AI, and AI-generated content must be appropriately disclosed.
Minimal RiskResidualNo mandatory requirements beyond voluntary codes of conduct. The majority of AI systems fall into this category.

Probe Six Risk Classifier: When running an EU AI Act scan, Probe Six includes an optional risk tier classifier in the scan setup. Answer 12 questions about your AI system to determine which risk tier applies. The classification appears in your report alongside security findings, providing regulatory context for your CISO.

Enforcement Timeline

The EU AI Act is phased in over three years, with different obligations becoming enforceable at different dates:

1 August 2024EU AI Act enters into force (Regulation 2024/1689)
2 February 2025Article 5 prohibited AI practices become enforceable
2 August 2025GPAI model obligations (Article 53-55) become enforceable
2 August 2026High-risk AI system obligations (Article 6-49) become enforceable
2 August 2027Annex I high-risk systems (e.g. medical devices, machinery) become enforceable

Coverage Summary

17
Automated Requirements
66
Governance Requirements
83
Total Requirements
Automated TestingRequirements tested through automated adversarial probes that exercise the AI system in real time, producing measurable evidence of compliance or non-compliance against specific EU AI Act articles.
Governance AssessmentRequirements that cannot be tested at runtime (e.g. documentation practices, training programmes, organisational processes) and are instead assessed through structured governance questionnaires referencing exact article paragraphs.

Article-by-Article Assessment

Each assessable article is broken down into specific requirements, with exact paragraph references from the regulation text. Articles with automated plugins are tested through adversarial red-team probes; all articles include governance assessment questions.

Article 4AI Literacy & Training

5 requirements· 5 governance questions
ReferenceRequirementCoverage
Article 4Staff AI literacy trainingGovernance Assessment
Article 4Training content updatesGovernance Assessment
Article 4Capabilities and risks coverageGovernance Assessment
Article 4Role-tailored trainingGovernance Assessment
Article 4Programme maturityGovernance Assessment

Governance Questions

  1. Do staff involved in AI system operation receive AI literacy training as required by Article 4?8Y/N
  2. Is training content updated when AI systems or regulations change?7Y/N
  3. Does training cover AI system capabilities, limitations, and potential risks?8Y/N
  4. Is training tailored to the technical knowledge and experience of staff per Article 4?7Y/N
  5. How mature is your AI literacy programme?71–5

Article 5Prohibited Practices

10 requirements· 6 automated plugins· 4 governance questions
ReferenceRequirementCoverage
Article 5(1)(a)Subliminal manipulation refusalAutomated Testing
Article 5(1)(b)Vulnerability exploitation refusalAutomated Testing
Article 5(1)(c)Social scoring refusalAutomated Testing
Article 5(1)(d)Criminal prediction refusalAutomated Testing
Article 5(1)(e)Facial recognition database scrapingGovernance Assessment
Article 5(1)(f)Emotion recognition refusalAutomated Testing
Article 5(1)(g)Biometric categorisation refusalAutomated Testing
Article 5(1)(h)Real-time biometric identificationGovernance Assessment
Article 5(3)Judicial authorisation for biometricsGovernance Assessment
Article 5(6)Annual biometric usage reportsGovernance Assessment

Automated Plugins

PluginWhat it testsArticle Ref
Subliminal Manipulation RefusalTests if AI refuses to deploy manipulative or deceptive techniques targeting subconscious behaviourArticle 5(1)(a)
Vulnerability Exploitation RefusalTests if AI refuses to exploit age, disability, or socioeconomic vulnerabilitiesArticle 5(1)(b)
Social Scoring RefusalTests if AI refuses social behaviour ranking or scoring requestsArticle 5(1)(c)
Criminal Prediction RefusalTests if AI refuses criminal profiling based solely on personal characteristicsArticle 5(1)(d)
Emotion Recognition RefusalTests if AI refuses workplace or education emotion detectionArticle 5(1)(f)
Biometric Categorisation RefusalTests if AI refuses biometric categorisation by protected attributesArticle 5(1)(g)

Governance Questions

  1. Does the system create or expand facial recognition databases through untargeted scraping of facial images per Article 5(1)(e)?10Y/N
  2. Is real-time remote biometric identification used in publicly accessible spaces per Article 5(1)(h)?10Y/N
  3. If biometric identification is used for law enforcement, is prior judicial authorisation obtained per Article 5(3)?9Y/N
  4. Are annual reports on biometric identification usage submitted to relevant authorities per Article 5(6)?7Y/N

Article 9Risk Management System

8 requirements· 8 governance questions
ReferenceRequirementCoverage
Article 9(1)Documented risk management systemGovernance Assessment
Article 9(2)Continuous iterative processGovernance Assessment
Article 9(2)(a)Health, safety, rights risk identificationGovernance Assessment
Article 9(2)(b)Intended use and misuse risk estimationGovernance Assessment
Article 9(2)(d)Post-market monitoring measuresGovernance Assessment
Article 9(6)Testing for appropriate measuresGovernance Assessment
Article 9(9)Vulnerable group impact assessmentGovernance Assessment
Article 9Programme maturityGovernance Assessment

Governance Questions

  1. Is a documented risk management system established and maintained for the AI system per Article 9(1)?9Y/N
  2. Does risk management operate as a continuous iterative process throughout the AI system lifecycle per Article 9(2)?8Y/N
  3. Are risks to health, safety, and fundamental rights identified and analysed per Article 9(2)(a)?9Y/N
  4. Are risks estimated under both intended use and reasonably foreseeable misuse per Article 9(2)(b)?9Y/N
  5. Are risk management measures adopted based on post-market monitoring data per Article 9(2)(d)?8Y/N
  6. Is testing conducted to identify the most appropriate risk management measures per Article 9(6)?8Y/N
  7. Are potential adverse impacts on persons under 18 or vulnerable groups assessed per Article 9(9)?8Y/N
  8. How mature is your AI risk management programme?81–5

Article 10Data & Data Governance

8 requirements· 15 automated plugins· 7 governance questions
ReferenceRequirementCoverage
Article 10(2)Dataset governance practicesGovernance Assessment
Article 10(2)(f)Bias examinationAutomated Testing
Article 10(2)(g)Bias detection and correctionAutomated Testing
Article 10(3)Dataset representativenessGovernance Assessment
Article 10(4)Deployment context accountingGovernance Assessment
Article 10(5)Special category data safeguardsGovernance Assessment
Article 10(5)PII data governanceAutomated Testing
Article 10Programme maturityGovernance Assessment

Automated Plugins

PluginWhat it testsArticle Ref
Bias: RaceProbes for racial bias in model outputsArticle 10(2)(f)
Bias: GenderProbes for gender bias in model outputsArticle 10(2)(f)
Bias: AgeProbes for age-based discriminationArticle 10(2)(f)
Bias: DisabilityProbes for disability-related biasArticle 10(2)(f)
Bias: ReligionProbes for religious discriminationArticle 10(2)(f)
Bias: Sexual OrientationProbes for sexual orientation biasArticle 10(2)(f)
Bias: SocioeconomicProbes for socioeconomic biasArticle 10(2)(f)
Bias: PoliticalProbes for political biasArticle 10(2)(f)
Bias: NationalityProbes for nationality-based discriminationArticle 10(2)(f)
PII: Direct DisclosureTests for direct personal data leakageArticle 10(5)
PII: API/Database LeakageTests for API or database credential leakageArticle 10(5)
PII: Session LeakageTests for cross-session personal data leakageArticle 10(5)
PII: Social EngineeringTests for social engineering data extractionArticle 10(5)
Cross-Session Data LeakageTests for data leaking between user sessionsArticle 10(5)
Training Data ExtractionTests for memorised training data extractionArticle 10(5)

Governance Questions

  1. Are training, validation, and testing datasets subject to documented governance practices per Article 10(2)?9Y/N
  2. Are datasets examined for possible biases relevant to health, safety, and fundamental rights per Article 10(2)(f)?9Y/N
  3. Are appropriate bias detection and correction measures applied per Article 10(2)(g)?9Y/N
  4. Are datasets relevant, sufficiently representative, and as free of errors as possible per Article 10(3)?8Y/N
  5. Do datasets account for geographical, contextual, and functional characteristics of deployment per Article 10(4)?7Y/N
  6. Is processing of special category personal data for bias detection conducted with appropriate safeguards per Article 10(5)?8Y/N
  7. How mature is your data governance programme for AI training data?71–5

Article 11Technical Documentation

6 requirements· 6 governance questions
ReferenceRequirementCoverage
Article 11(1)Up-to-date technical documentationGovernance Assessment
Annex IV(1)System description and purposeGovernance Assessment
Annex IV(2)Design and training methodologyGovernance Assessment
Annex IV(3)Testing procedures and resultsGovernance Assessment
Annex IV(9)Post-market monitoring planGovernance Assessment
Article 11Documentation comprehensivenessGovernance Assessment

Governance Questions

  1. Is technical documentation maintained and kept up-to-date before market placement per Article 11(1)?9Y/N
  2. Does documentation include a general description of the AI system and its intended purpose per Annex IV(1)?8Y/N
  3. Are data requirements, design specifications, and training methodology documented per Annex IV(2)?8Y/N
  4. Are validation and testing procedures documented with results per Annex IV(3)?8Y/N
  5. Is a post-market monitoring plan included in technical documentation per Annex IV(9)?7Y/N
  6. How comprehensive is your technical documentation?71–5

Article 12Record-Keeping & Logging

5 requirements· 5 governance questions
ReferenceRequirementCoverage
Article 12(1)Automatic event recordingGovernance Assessment
Article 12(1)Log traceability and monitoringGovernance Assessment
Article 12(1)Tamper-proof loggingGovernance Assessment
Article 12(1)/Art 26(6)Log retention (min 6 months)Governance Assessment
Article 12Audit trail maturityGovernance Assessment

Governance Questions

  1. Does the AI system automatically record events (logs) over its lifetime per Article 12(1)?9Y/N
  2. Are logs sufficient to trace system functioning and enable post-market monitoring per Article 12(1)?8Y/N
  3. Are logging capabilities designed to prevent tampering per Article 12(1)?8Y/N
  4. Are logs retained for an appropriate period (minimum 6 months per Article 26(6))?7Y/N
  5. How mature is your AI system logging and audit trail?71–5

Article 13Transparency & Information

8 requirements· 4 automated plugins· 6 governance questions
ReferenceRequirementCoverage
Article 13(1)Output interpretabilityGovernance Assessment
Article 13(2)Clear instructions for useGovernance Assessment
Article 13(3)(b)Accuracy metrics and limitationsAutomated Testing
Article 13(3)(b)Documented risksGovernance Assessment
Article 13(3)(d)Human oversight documentationGovernance Assessment
Article 13(3)(b)Demographic performance metricsGovernance Assessment
Article 13(1)AI self-disclosureAutomated Testing
Article 13(1)ExplainabilityAutomated Testing

Automated Plugins

PluginWhat it testsArticle Ref
AI Self-DisclosureTests whether the AI discloses its artificial natureArticle 13(1)
Limitation DisclosureTests whether the AI declares known limitationsArticle 13(3)(b)
Confidence CalibrationTests accuracy of confidence expressionsArticle 13(3)(b)
ExplainabilityTests whether the AI can explain its reasoningArticle 13(1)

Governance Questions

  1. Is the AI system designed to be sufficiently transparent for deployers to interpret outputs per Article 13(1)?9Y/N
  2. Are instructions for use provided in a clear, comprehensive digital format per Article 13(2)?8Y/N
  3. Do instructions declare accuracy metrics and known limitations per Article 13(3)(b)?8Y/N
  4. Are foreseeable risks to health, safety, or fundamental rights documented per Article 13(3)(b)?9Y/N
  5. Are human oversight measures and technical interpretability tools documented per Article 13(3)(d)?8Y/N
  6. Are performance metrics provided for specific demographic groups where applicable per Article 13(3)(b)?7Y/N

Article 14Human Oversight

7 requirements· 2 automated plugins· 7 governance questions
ReferenceRequirementCoverage
Article 14(1)Effective human oversight designGovernance Assessment
Article 14(3)Proportionate oversight measuresGovernance Assessment
Article 14(4)(a)Overseer understandingGovernance Assessment
Article 14(4)(b)Automation bias awarenessAutomated Testing
Article 14(4)(d)Output override capabilityGovernance Assessment
Article 14(4)(e)Safe interruption capabilityGovernance Assessment
Article 14Programme maturityGovernance Assessment

Automated Plugins

PluginWhat it testsArticle Ref
OverrelianceTests whether the AI encourages over-dependence on its outputsArticle 14(4)(b)
SycophancyTests whether the AI agrees with false premises rather than correcting the userArticle 14(4)(b)

Governance Questions

  1. Is the AI system designed to be effectively overseen by natural persons during use per Article 14(1)?9Y/N
  2. Are oversight measures proportionate to the system's risks and level of autonomy per Article 14(3)?8Y/N
  3. Can overseers understand the system's capacities and limitations per Article 14(4)(a)?8Y/N
  4. Are overseers aware of automation bias tendencies per Article 14(4)(b)?8Y/N
  5. Can overseers override, disregard, or reverse system outputs per Article 14(4)(d)?9Y/N
  6. Can the system be safely interrupted or stopped by a human per Article 14(4)(e)?9Y/N
  7. How mature is your human oversight programme for AI systems?81–5

Article 15Accuracy, Robustness & Cybersecurity

6 requirements· 20 automated plugins· 6 governance questions
ReferenceRequirementCoverage
Article 15(3)Declared accuracy metricsGovernance Assessment
Article 15(4)Fail-safe mechanismsGovernance Assessment
Article 15(4)Feedback loop mitigationsGovernance Assessment
Article 15(5)Data/model poisoning preventionAutomated Testing
Article 15(5)Adversarial attack resilienceAutomated Testing
Article 15Cybersecurity programme maturityGovernance Assessment

Automated Plugins

PluginWhat it testsArticle Ref
SQL InjectionTests for SQL injection vulnerabilities in AI-generated outputsArticle 15(5)
Shell InjectionTests for command injection in AI-generated outputsArticle 15(5)
Server-Side Request Forgery (SSRF)Tests for SSRF vulnerabilitiesArticle 15(5)
Prompt ExtractionTests resistance to system prompt extractionArticle 15(5)
Indirect Prompt InjectionTests resistance to injected instructions in contextArticle 15(5)
Direct Prompt InjectionTests resistance to direct prompt manipulationArticle 15(5)
ASCII SmugglingTests resistance to invisible Unicode character injectionArticle 15(5)
Model FingerprintingTests whether model identity can be extractedArticle 15(5)
Role-Based Access Control (RBAC)Tests access control enforcementArticle 15(5)
Broken Object-Level Authorisation (BOLA)Tests object-level authorisationArticle 15(5)
Broken Function-Level Authorisation (BFLA)Tests function-level authorisationArticle 15(5)
Debug AccessTests for exposed debug endpointsArticle 15(5)
Data ExfiltrationTests resistance to data exfiltration via the AIArticle 15(5)
Error Information LeakageTests for sensitive information in error responsesArticle 15(5)
Privilege EscalationTests resistance to privilege escalation attacksArticle 15(5)
Secrets ProbingTests resistance to secrets and credential extractionArticle 15(5)
Reverse ShellTests resistance to reverse shell code generationArticle 15(5)
API Access: InferenceTests inference API access controlsArticle 15(5)
API Access: Product ServiceTests product service API access controlsArticle 15(5)
Multi-Turn System Prompt ExtractionTests multi-turn system prompt extraction resistanceArticle 15(5)

Governance Questions

  1. Are accuracy levels and relevant accuracy metrics declared in instructions for use per Article 15(3)?8Y/N
  2. Does the system include technical redundancy or fail-safe mechanisms per Article 15(4)?7Y/N
  3. For systems with post-market learning, are feedback loop mitigations in place per Article 15(4)?8Y/N
  4. Are cybersecurity measures designed to prevent data poisoning and model poisoning per Article 15(5)?9Y/N
  5. Are measures in place against adversarial examples and confidentiality attacks per Article 15(5)?9Y/N
  6. How mature is your AI cybersecurity and robustness programme?81–5

Article 26-27Deployer Obligations

8 requirements· 8 governance questions
ReferenceRequirementCoverage
Article 26(1)Per-instruction system usageGovernance Assessment
Article 26(2)Qualified oversight personnelGovernance Assessment
Article 26(4)Representative input dataGovernance Assessment
Article 26(5)Operation monitoring and reportingGovernance Assessment
Article 26(6)Log retention (6 months)Governance Assessment
Article 26(7)Worker notificationGovernance Assessment
Article 27Fundamental rights impact assessmentGovernance Assessment
Article 26Compliance programme maturityGovernance Assessment

Governance Questions

  1. Are technical and organisational measures in place to use the system per instructions per Article 26(1)?8Y/N
  2. Is human oversight assigned to qualified personnel with necessary competence per Article 26(2)?9Y/N
  3. Is input data relevant and representative for the intended purpose per Article 26(4)?8Y/N
  4. Is there a process to monitor system operation and report risks or incidents per Article 26(5)?8Y/N
  5. Are automatically generated logs retained for at least six months per Article 26(6)?7Y/N
  6. Are affected workers and worker representatives informed before workplace deployment per Article 26(7)?7Y/N
  7. Has a fundamental rights impact assessment been conducted per Article 27 (if deploying in public or sensitive contexts)?8Y/N
  8. How mature is your deployer compliance programme?71–5

Article 50Transparency Obligations

6 requirements· 3 automated plugins· 5 governance questions
ReferenceRequirementCoverage
Article 50(1)AI interaction disclosureAutomated Testing
Article 50(2)AI content markingAutomated Testing
Article 50(3)Biometric system user notificationGovernance Assessment
Article 50(4)Deepfake disclosureGovernance Assessment
Article 50(4)AI text labellingGovernance Assessment
Article 50(5)Disclosure timing and accessibilityGovernance Assessment

Automated Plugins

PluginWhat it testsArticle Ref
AI Self-DisclosureTests whether the AI discloses its artificial nature when interacting with usersArticle 50(1)
Content MarkingTests whether AI-generated content is appropriately markedArticle 50(2)
Limitation DisclosureTests whether the AI proactively discloses capability boundariesArticle 50(1)

Governance Questions

  1. Are deployers of emotion recognition or biometric categorisation systems informing exposed individuals per Article 50(3)?9Y/N
  2. Are deepfake disclosures provided when generating AI-manipulated content per Article 50(4)?8Y/N
  3. Is AI-generated text on matters of public interest labelled unless subject to human editorial control per Article 50(4)?8Y/N
  4. Is information disclosed clearly and at the latest at first interaction or exposure per Article 50(5)?7Y/N
  5. Do disclosure mechanisms meet accessibility requirements per Article 50(5)?7Y/N

Article 53/55GPAI Model Obligations

6 requirements· 6 governance questions
ReferenceRequirementCoverage
Article 53(1)(a)Technical documentation (Annex XI)Governance Assessment
Article 53(1)(b)Downstream provider informationGovernance Assessment
Article 53(1)(c)Copyright compliance policiesGovernance Assessment
Article 53(1)(d)Training data summaryGovernance Assessment
Article 55(1)(a)/(b)Systemic risk adversarial testingGovernance Assessment
Article 55(1)(c)Serious incident reportingGovernance Assessment

Governance Questions

  1. Is technical documentation maintained covering training and testing processes per Article 53(1)(a) and Annex XI?8Y/N
  2. Are downstream providers given information about model capabilities and limitations per Article 53(1)(b) and Annex XII?8Y/N
  3. Are copyright compliance policies established per Article 53(1)(c) and Directive 2019/790?7Y/N
  4. Is a detailed training data summary publicly available per Article 53(1)(d)?7Y/N
  5. If the model has systemic risk, are adversarial testing and model evaluations conducted per Article 55(1)(a)/(b)?9Y/N
  6. Are serious incidents tracked and reported to the AI Office per Article 55(1)(c)?8Y/N

Out-of-Scope Articles

The EU AI Act contains 113 articles and 13 annexes. The following articles are not assessable through runtime testing or governance questionnaires because they define institutional processes, classification rules, or regulatory infrastructure rather than testable system-level requirements.

ArticlesRationale
Article 1-3Definitional — establish subject matter, scope, and terminology
Article 6-8Classification rules — define what counts as high-risk, not testable requirements
Article 16-25Provider/importer/distributor obligations — organisational/commercial, not system-level
Article 28-39Notifying authorities — institutional procedures for EU member state bodies
Article 40-49Standards, conformity assessment, CE marking — regulatory compliance procedures
Article 51-52GPAI classification — threshold definitions, not testable requirements
Article 54Authorised representatives — commercial representation requirements
Article 56Codes of practice — voluntary compliance mechanisms
Article 57-63Innovation support — regulatory sandboxes, SME measures
Article 64-70EU governance structures — AI Office, European AI Board
Article 71EU database — registration infrastructure
Article 72-94Post-market monitoring, market surveillance — institutional oversight
Article 95-96Voluntary codes and guidelines
Article 97-98Delegation and committee procedures
Article 99-101Penalties and fines framework
Article 102-113Final provisions, amendments, transitional provisions

Running an EU AI Act Assessment

To run an EU AI Act compliance assessment:

  1. Register your endpoint— Add the AI system you want to assess via the Endpoints page
  2. Select the EU AI Act template— Choose individual articles for targeted testing or select all 12 for comprehensive coverage
  3. Classify your risk tier (optional)— Answer the risk classification questions to determine which EU AI Act tier applies to your AI system. The tier appears in your report to provide regulatory context
  4. Complete governance questions— When you select an article, its governance questions appear inline below the article row. Answer them in context — your responses auto-save and persist across scans
  5. Review article-level results— Each finding in your report includes exact EU AI Act article references alongside results from automated testing and governance assessment

The assessment produces a per-article compliance view showing which requirements were tested, pass rates, and severity levels. Governance-only articles (e.g. Article 4 AI Literacy, Article 9 Risk Management) appear with governance assessment results only.

Note: This assessment is a technical evaluation tool, not a legal compliance certification. Results should be reviewed alongside legal counsel familiar with the EU AI Act and its implementing measures. The assessment helps identify gaps and provides evidence for compliance documentation.

References