ISO/IEC 42001:2023
ISO/IEC 42001:2023 is the first international standard for AI management systems (AIMS). Published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), it provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system.
The standard follows the ISO High-Level Structure (HLS) used by ISO 27001, ISO 9001, and other management system standards, making it straightforward to integrate into existing management systems. It defines requirements in Clauses 4–10 and provides 38 Annex A controls across 9 domains that address the unique challenges of AI systems.
Probe Six assesses AI systems against all 38 Annex A controls across 9 domains, plus all 7 management system clauses, combining automated adversarial testing with structured governance questionnaires. Each control is mapped to its official ISO reference (e.g. A.6.2.4, A.8.5).
The assessment covers 58 automated security plugins that exercise the AI system in real time across 5 testable controls, plus 128 governance questions across all 9 Annex A domains and 7 management system clauses for obligations that require organisational assessment.
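The split between automated and governance coverage can be pictured as a simple data model. The sketch below is illustrative only: the control references and coverage labels mirror this page, but the structure is not Probe Six's actual schema.

```python
# Illustrative model of ISO/IEC 42001 control coverage types.
# Coverage labels mirror the tables on this page; the dict layout
# is a hypothetical sketch, not Probe Six's internal schema.

AUTOMATED = "automated"    # exercised by runtime adversarial plugins
GOVERNANCE = "governance"  # assessed via questionnaire responses
HYBRID = "hybrid"          # both automated probes and questionnaires

# The five controls this page lists as testable at runtime; the
# remaining Annex A controls would all map to GOVERNANCE.
coverage = {
    "A.5.4": HYBRID,       # impacts on individuals or groups
    "A.6.2.4": AUTOMATED,  # verification and validation
    "A.7.4": HYBRID,       # quality of data
    "A.8.5": AUTOMATED,    # information about AI system interaction
    "A.9.4": HYBRID,       # intended use of the AI system
}

testable = sorted(ref for ref, kind in coverage.items() if kind != GOVERNANCE)
print(testable)
```

A structure like this makes it easy to answer questions such as "which controls get runtime evidence" when assembling an audit package.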
Standard Structure
ISO/IEC 42001 is organised into two main parts: the management system requirements (Clauses 4–10) and the Annex A controls. Together, they provide a comprehensive framework for managing AI risks and ensuring responsible AI development and deployment.
Annex A — Controls
38 controls across 9 domains (A.2–A.10) addressing AI-specific concerns including policies, internal organisation, resources, impact assessment, lifecycle management, data governance, transparency, responsible use, and third-party relationships.
Management System — Clauses 4–10
7 clause groups following the ISO High-Level Structure (HLS), covering context, leadership, planning, support, operation, performance evaluation, and improvement. These are assessed through governance questionnaires.
Coverage Summary
Annex A — Domain-by-Domain Assessment
Each of the 9 Annex A domains is assessed through a combination of automated adversarial testing (where applicable) and governance questionnaires. Controls are listed with their official ISO reference, coverage type, and description.
A.2 — Policies Related to AI
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.2.2 | AI policy | Governance Assessment | Document a formal AI policy covering the organisation's approach to responsible AI development and use, approved by top management |
| A.2.3 | Policies affected by AI | Governance Assessment | Determine how existing organisational policies are affected by or need to be updated to account for AI systems |
| A.2.4 | Review of AI policy | Governance Assessment | Ensure AI policies are reviewed at planned intervals or when significant changes occur |
Governance Questions
- Is a formal AI policy documented and approved by top management per A.2.2? (weight 9, Y/N)
- Does the AI policy cover the organisation's approach to responsible AI development and use per A.2.2? (weight 8, Y/N)
- Have existing organisational policies been reviewed and updated to account for AI systems per A.2.3? (weight 8, Y/N)
- Is there a documented assessment of how AI systems affect existing policies (HR, data protection, security, etc.) per A.2.3? (weight 7, Y/N)
- Are AI policies reviewed at planned intervals or when significant changes occur per A.2.4? (weight 8, Y/N)
- Are review triggers defined for AI policy updates (e.g. regulatory changes, new AI deployments, incidents) per A.2.4? (weight 7, Y/N)
- How mature is your AI policy framework? (weight 8, scale 1–5)
A.3 — Internal Organisation
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.3.2 | AI roles and responsibilities | Governance Assessment | Define and assign roles and responsibilities for AI governance activities across the lifecycle |
| A.3.3 | Reporting of concerns | Governance Assessment | Establish mechanisms for personnel to report AI-related concerns without fear of retaliation |
Governance Questions
- Are roles and responsibilities for AI governance defined and assigned across the AI system lifecycle per A.3.2? (weight 9, Y/N)
- Is there a documented RACI matrix or equivalent for AI-related activities per A.3.2? (weight 8, Y/N)
- Are accountability mechanisms established for AI system decisions and outcomes per A.3.2? (weight 8, Y/N)
- Are mechanisms in place for personnel to report AI-related concerns without fear of retaliation per A.3.3? (weight 9, Y/N)
- Are reported AI concerns tracked, investigated, and resolved in a timely manner per A.3.3? (weight 8, Y/N)
- How mature is your internal AI governance structure? (weight 7, scale 1–5)
A.4 — Resources for AI Systems
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.4.2 | Resources related to AI systems | Governance Assessment | Document and provide adequate resources for AI systems including infrastructure, support systems, and dependencies |
| A.4.3 | Data resources | Governance Assessment | Identify and document data resources needed for AI systems including training, validation, and operational data |
| A.4.4 | Tooling resources | Governance Assessment | Document tools and frameworks used in AI system development, training, testing, and deployment |
| A.4.5 | System and computing resources | Governance Assessment | Provide adequate computing infrastructure for AI systems |
| A.4.6 | Human resources | Governance Assessment | Ensure adequate personnel with appropriate competencies for AI-related roles |
Governance Questions
- Are resources for AI systems documented including infrastructure, support systems, and dependencies per A.4.2? (weight 8, Y/N)
- Are AI system dependencies (models, APIs, libraries) identified and their availability risks assessed per A.4.2? (weight 7, Y/N)
- Are data resources for AI systems identified and documented (training, validation, operational data) per A.4.3? (weight 8, Y/N)
- Are data requirements specified for each stage of the AI system lifecycle per A.4.3? (weight 8, Y/N)
- Are tools and frameworks used in AI development, training, testing, and deployment documented per A.4.4? (weight 7, Y/N)
- Are tool selection criteria defined and applied (e.g. licensing, security, supportability) per A.4.4? (weight 7, Y/N)
- Is adequate computing infrastructure provided for AI system development, training, and operation per A.4.5? (weight 7, Y/N)
- Are personnel with appropriate competencies assigned to AI-related roles per A.4.6? (weight 8, Y/N)
- Are training and development programmes in place to maintain AI-related competencies per A.4.6? (weight 7, Y/N)
- How mature is your AI resource management? (weight 7, scale 1–5)
A.5 — Assessing Impacts of AI Systems
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.5.2 | AI system impact assessment process | Governance Assessment | Establish a documented process for conducting AI system impact assessments on individuals, groups, and society |
| A.5.3 | Documentation of impact assessment | Governance Assessment | Document impact assessment results including identified impacts, mitigation measures, and residual impacts |
| A.5.4 | Assessing impact on individuals or groups | Hybrid | Assess how AI systems affect individuals or groups including potential for discrimination, bias, privacy violations, and autonomy impacts |
| A.5.5 | Assessing societal impacts | Governance Assessment | Assess broader societal impacts of AI systems including environmental, economic, and social effects |
Automated Plugins
| Plugin | Maps to Control(s) |
|---|---|
| Bias: Race | A.5.4 |
| Bias: Gender | A.5.4 |
| Bias: Age | A.5.4 |
| Bias: Disability | A.5.4 |
| Bias: Religion | A.5.4 |
| Bias: Sexual Orientation | A.5.4 |
| Bias: Socioeconomic | A.5.4 |
| Bias: Political | A.5.4 |
| Bias: Nationality | A.5.4 |
| PII: Direct Disclosure | A.5.4 |
| PII: API/Database Leakage | A.5.4 |
| PII: Session Leakage | A.5.4 |
| PII: Social Engineering | A.5.4 |
| Cross-Session Data Leakage | A.5.4 |
| Training Data Extraction | A.5.4 |
Governance Questions
- Is there a documented process for conducting AI system impact assessments per A.5.2? (weight 9, Y/N)
- Does the impact assessment process cover potential effects on individuals, groups, and society per A.5.2? (weight 8, Y/N)
- Are impact assessment results documented including identified impacts, mitigation measures, and residual impacts per A.5.3? (weight 8, Y/N)
- Are residual impacts formally accepted by an appropriate authority within the organisation per A.5.3? (weight 7, Y/N)
- Is the AI system assessed for potential discrimination, bias, and unfair outcomes affecting individuals or groups per A.5.4? (weight 9, Y/N)
- Are privacy impacts assessed for AI system data collection, processing, and decision-making per A.5.4? (weight 8, Y/N)
- Are impacts on individual autonomy and human agency assessed per A.5.4? (weight 8, Y/N)
- Are broader societal impacts assessed including environmental, economic, and social effects per A.5.5? (weight 7, Y/N)
- Is the environmental impact of AI system operation (energy, compute resources) considered per A.5.5? (weight 7, Y/N)
- How mature is your AI impact assessment programme? (weight 8, scale 1–5)
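Automated bias plugins of the kind listed under A.5.4 are commonly built on counterfactual prompt pairs: two inputs identical except for a protected attribute, whose responses are then compared. A minimal sketch of that technique follows; the `ask_model` stub and the prompt template are hypothetical stand-ins, not Probe Six internals.

```python
# Sketch of a counterfactual bias probe: send prompt pairs that differ
# only in a protected attribute and flag divergent model behaviour.
# `ask_model` is a hypothetical placeholder for a real endpoint call.

def ask_model(prompt: str) -> str:
    # Placeholder: a real probe would call the AI system under test.
    return "Recommend based on qualifications only."

def bias_probe(template: str, attribute_pairs: list[tuple[str, str]]) -> list[dict]:
    findings = []
    for a, b in attribute_pairs:
        resp_a = ask_model(template.format(attr=a))
        resp_b = ask_model(template.format(attr=b))
        findings.append({
            "pair": (a, b),
            # A real scorer would use a graded judge, not strict equality.
            "divergent": resp_a.strip() != resp_b.strip(),
        })
    return findings

results = bias_probe(
    "Should we hire this {attr} candidate for the engineering role?",
    [("male", "female"), ("young", "older")],
)
print(results)
```

In practice the equality check is replaced by a semantic comparison, since benign wording differences between responses should not count as divergence.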
A.6 — AI System Life Cycle
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.6.1.2 | Objectives for responsible development | Governance Assessment | Establish objectives for responsible AI system development addressing fairness, transparency, accountability, and safety |
| A.6.1.3 | Processes for responsible design and development | Governance Assessment | Define processes for responsible design and development ensuring ethical foundations and adherence to guidelines |
| A.6.2.2 | AI system requirements | Governance Assessment | Specify and document requirements for AI systems including functional, non-functional, constraints, and acceptance criteria |
| A.6.2.3 | Design and development documentation | Governance Assessment | Record AI system design and development decisions based on organisational objectives and requirements |
| A.6.2.4 | Verification and validation | Automated Testing | Test and validate the AI system against requirements including functional, performance, bias, security, and robustness testing |
| A.6.2.5 | AI system deployment | Governance Assessment | Document a deployment plan and ensure requirements are met prior to production deployment |
| A.6.2.6 | Operation and monitoring | Governance Assessment | Plan for ongoing oversight post-deployment including performance monitoring, issue detection, and model drift tracking |
| A.6.2.7 | Technical documentation | Governance Assessment | Determine and provide technical documentation for each relevant category of interested parties |
| A.6.2.8 | Event log recording | Governance Assessment | Enable event log recording throughout the AI system lifecycle, maintaining logs of decisions, inputs, outputs, and changes |
Automated Plugins
| Plugin | Maps to Control(s) |
|---|---|
| SQL Injection | A.6.2.4 |
| Shell Injection | A.6.2.4 |
| Server-Side Request Forgery (SSRF) | A.6.2.4 |
| ASCII Smuggling | A.6.2.4 |
| Debug Access | A.6.2.4 |
| Data Exfiltration | A.6.2.4 |
| Role-Based Access Control (RBAC) | A.6.2.4 |
| Broken Object-Level Authorisation (BOLA) | A.6.2.4 |
| Broken Function-Level Authorisation (BFLA) | A.6.2.4 |
| Model Fingerprinting | A.6.2.4 |
| Error Information Leakage | A.6.2.4 |
| Privilege Escalation | A.6.2.4 |
| Secrets Probing | A.6.2.4 |
| Reverse Shell | A.6.2.4 |
| Multimodal Injection | A.6.2.4 |
| Violent Crime | A.6.2.4 |
| Sex Crime | A.6.2.4 |
| Child Exploitation | A.6.2.4 |
| Self-Harm | A.6.2.4 |
| Chemical & Biological Weapons | A.6.2.4 |
| Indiscriminate Weapons | A.6.2.4 |
| Radicalization | A.6.2.4 |
| Cybercrime | A.6.2.4 |
| Illegal Drugs | A.6.2.4 |
| Illegal Activities | A.6.2.4 |
| Unsafe Practices | A.6.2.4 |
| Graphic Content | A.6.2.4 |
| Profanity | A.6.2.4 |
| Direct Prompt Injection | A.6.2.4 |
| Indirect Prompt Injection | A.6.2.4 |
| Prompt Extraction | A.6.2.4 |
| Prompt Hijacking | A.6.2.4 |
| Self-Replication | A.6.2.4 |
| Hallucination | A.6.2.4 |
| Overreliance | A.6.2.4 |
| Sycophancy | A.6.2.4 |
| Contracts | A.6.2.4 |
Governance Questions
- Are objectives for responsible AI development established addressing fairness, transparency, accountability, and safety per A.6.1.2? (weight 8, Y/N)
- Are responsible development objectives measurable and aligned with organisational values per A.6.1.2? (weight 8, Y/N)
- Are processes for responsible design and development defined and documented per A.6.1.3? (weight 8, Y/N)
- Do design and development processes ensure adherence to ethical guidelines and regulatory requirements per A.6.1.3? (weight 7, Y/N)
- Are functional and non-functional requirements for the AI system specified and documented per A.6.2.2? (weight 8, Y/N)
- Are acceptance criteria defined for AI system validation per A.6.2.2? (weight 7, Y/N)
- Are design and development decisions recorded and traceable to requirements per A.6.2.3? (weight 7, Y/N)
- Is the AI system verified and validated against defined requirements including functional, performance, and security testing per A.6.2.4? (weight 9, Y/N)
- Is bias and robustness testing conducted as part of verification and validation per A.6.2.4? (weight 9, Y/N)
- Is a deployment plan documented with pre-deployment requirements verified per A.6.2.5? (weight 8, Y/N)
- Are deployment rollback and contingency procedures defined per A.6.2.5? (weight 8, Y/N)
- Is post-deployment monitoring planned including performance tracking, issue detection, and model drift per A.6.2.6? (weight 8, Y/N)
- Are mechanisms in place to detect and respond to model degradation or concept drift per A.6.2.6? (weight 7, Y/N)
- Is technical documentation provided for each relevant category of interested parties per A.6.2.7? (weight 7, Y/N)
- Does technical documentation include system architecture, capabilities, limitations, and intended use per A.6.2.7? (weight 7, Y/N)
- Is event log recording enabled throughout the AI system lifecycle per A.6.2.8? (weight 8, Y/N)
- Do event logs capture decisions, inputs, outputs, and changes throughout the lifecycle per A.6.2.8? (weight 8, Y/N)
- How mature is your AI system lifecycle management? (weight 8, scale 1–5)
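A control like A.6.2.4 rolls many individual plugin outcomes up into one control-level result. One plausible rollup is sketched below; the plugin names mirror the table above, but the all-must-pass rule is an illustrative assumption, not Probe Six's actual scoring.

```python
# Sketch: roll per-plugin pass/fail outcomes up to a per-control verdict.
# The rollup rule (every plugin must pass) is an illustrative assumption.

plugin_results = {
    "SQL Injection": True,
    "Shell Injection": True,
    "Direct Prompt Injection": False,  # one failing probe
    "Hallucination": True,
}

control_verdict = {
    "A.6.2.4": all(plugin_results.values()),
}
failed = [name for name, ok in plugin_results.items() if not ok]
print(control_verdict["A.6.2.4"], failed)
```

A real assessment would likely weight plugins by severity rather than treating a single failure as a binary fail, but the shape of the rollup is the same.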
A.7 — Data for AI Systems
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.7.2 | Data for development and enhancement | Governance Assessment | Define and document processes for managing data used in AI system development and enhancement |
| A.7.3 | Acquisition of data | Governance Assessment | Document data acquisition processes including sources, consent, privacy compliance, and representativeness |
| A.7.4 | Quality of data | Hybrid | Define and measure data quality requirements ensuring accuracy, completeness, relevance, and representativeness |
| A.7.5 | Data provenance | Governance Assessment | Record and document data origin, transformations, and lineage throughout the lifecycle |
| A.7.6 | Data preparation | Governance Assessment | Define and document criteria for selecting data preparation methods and how data is cleaned, labelled, and transformed |
Automated Plugins
| Plugin | Maps to Control(s) |
|---|---|
| Factual Accuracy | A.7.4 |
| Citation Verification | A.7.4 |
| Confidence Calibration | A.7.4 |
Governance Questions
- Are processes for managing data used in AI system development and enhancement defined and documented per A.7.2? (weight 8, Y/N)
- Is there a data management strategy covering the full AI system data lifecycle per A.7.2? (weight 7, Y/N)
- Are data acquisition processes documented including sources, consent, and privacy compliance per A.7.3? (weight 8, Y/N)
- Is data representativeness assessed to ensure training data adequately reflects the intended use population per A.7.3? (weight 9, Y/N)
- Are consent mechanisms in place for personal data used in AI systems per A.7.3? (weight 9, Y/N)
- Are data quality requirements defined and measured (accuracy, completeness, relevance, representativeness) per A.7.4? (weight 8, Y/N)
- Are data quality issues identified, tracked, and remediated per A.7.4? (weight 8, Y/N)
- Is data origin, transformation history, and lineage recorded and documented per A.7.5? (weight 8, Y/N)
- Is data provenance traceable from source through to model training and inference per A.7.5? (weight 7, Y/N)
- Are criteria for selecting data preparation methods defined and documented per A.7.6? (weight 7, Y/N)
- Are data cleaning, labelling, and transformation processes documented and reproducible per A.7.6? (weight 7, Y/N)
- How mature is your AI data management programme? (weight 8, scale 1–5)
A.8 — Information for Interested Parties
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.8.2 | System documentation and user information | Governance Assessment | Provide essential information about AI systems to users including purpose, usage instructions, and technical limitations |
| A.8.3 | Reporting of adverse impacts | Governance Assessment | Establish processes for reporting adverse impacts or incidents to relevant external parties |
| A.8.4 | Communication of incidents | Governance Assessment | Define and implement plans for communicating AI-related incidents to affected parties and regulators |
| A.8.5 | Information about AI system interaction | Automated Testing | Notify users when interacting with an AI system and explain how AI-generated outputs factor into decisions |
Automated Plugins
| Plugin | Maps to Control(s) |
|---|---|
| AI Self-Disclosure | A.8.5 |
| Limitation Disclosure | A.8.5 |
| Explainability | A.8.5 |
| Contracts | A.8.5 |
Governance Questions
- Is essential information about AI systems provided to users including purpose, usage instructions, and limitations per A.8.2? (weight 8, Y/N)
- Is system documentation tailored to the needs and technical level of different user groups per A.8.2? (weight 8, Y/N)
- Are processes established for reporting adverse impacts or incidents to relevant external parties per A.8.3? (weight 8, Y/N)
- Are external reporting obligations identified (regulators, affected communities, data subjects) per A.8.3? (weight 7, Y/N)
- Are incident communication plans defined for notifying affected parties and regulators per A.8.4? (weight 8, Y/N)
- Are communication timelines and escalation procedures defined for AI-related incidents per A.8.4? (weight 8, Y/N)
- Are users notified when they are interacting with an AI system per A.8.5? (weight 9, Y/N)
- Is it explained to users how AI-generated outputs factor into decisions that affect them per A.8.5? (weight 8, Y/N)
- How mature is your AI transparency and communication programme? (weight 7, scale 1–5)
A.9 — Use of AI Systems
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.9.2 | Processes for responsible use | Governance Assessment | Establish operational processes for responsible AI use including human oversight mechanisms and escalation procedures |
| A.9.3 | Objectives for responsible use | Governance Assessment | Define and document responsible use expectations and objectives for all users of AI systems |
| A.9.4 | Intended use of the AI system | Hybrid | Ensure use remains within intended parameters, with clearly defined and communicated intended use and prohibited use |
Automated Plugins
| Plugin | Maps to Control(s) |
|---|---|
| Direct Prompt Injection | A.9.4 |
| Indirect Prompt Injection | A.9.4 |
| Prompt Extraction | A.9.4 |
| Prompt Hijacking | A.9.4 |
| Self-Replication | A.9.4 |
Governance Questions
- Are operational processes for responsible AI use established including human oversight mechanisms per A.9.2? (weight 8, Y/N)
- Are human oversight and escalation procedures defined for AI system decisions per A.9.2? (weight 9, Y/N)
- Can humans intervene in or override AI system decisions when necessary per A.9.2? (weight 8, Y/N)
- Are responsible use expectations and objectives documented and communicated to all users per A.9.3? (weight 7, Y/N)
- Is the intended use of the AI system clearly defined and communicated per A.9.4? (weight 8, Y/N)
- Are prohibited uses clearly defined and communicated to users per A.9.4? (weight 8, Y/N)
- Are mechanisms in place to detect and prevent use outside intended parameters per A.9.4? (weight 8, Y/N)
- How mature is your responsible AI use programme? (weight 7, scale 1–5)
A.10 — Third-Party and Customer Relationships
| Control | Name | Coverage | Description |
|---|---|---|---|
| A.10.2 | Allocating responsibilities across the lifecycle | Governance Assessment | Allocate and document responsibilities between the organisation and third parties for each lifecycle stage |
| A.10.3 | Suppliers | Governance Assessment | Select, assess, and monitor suppliers of AI system components for quality, safety, and ethical alignment |
| A.10.4 | Customers | Governance Assessment | Document AI systems provided to customers and provide appropriate support, information, and transparency |
Governance Questions
- Are responsibilities allocated and documented between the organisation and third parties for each AI system lifecycle stage per A.10.2? (weight 8, Y/N)
- Are contractual arrangements in place defining AI-related responsibilities with third parties per A.10.2? (weight 7, Y/N)
- Are suppliers of AI system components assessed for quality, safety, and ethical alignment per A.10.3? (weight 8, Y/N)
- Is there ongoing monitoring of AI supplier performance and risk per A.10.3? (weight 7, Y/N)
- Are AI systems provided to customers documented with appropriate support and transparency per A.10.4? (weight 7, Y/N)
- Are customer obligations and limitations regarding AI system use clearly communicated per A.10.4? (weight 7, Y/N)
- How mature is your third-party AI governance? (weight 7, scale 1–5)
Management System — Clauses 4–10
The management system requirements follow the ISO High-Level Structure (HLS), which is common to all ISO management system standards. Each clause group is assessed through governance questionnaires that verify the organisation has established appropriate processes, documentation, and oversight mechanisms.
Clause 4 — Context of the Organisation
Determine external and internal issues, understand interested parties, define AIMS scope and boundaries
- Are external and internal issues relevant to the AI management system identified and documented per Clause 4.1? (weight 8, Y/N)
- Are interested parties and their requirements related to AI systems identified per Clause 4.2? (weight 8, Y/N)
- Is the scope of the AI management system (AIMS) defined and documented per Clause 4.3? (weight 9, Y/N)
- Are the boundaries and applicability of the AIMS established per Clause 4.3? (weight 8, Y/N)
- How mature is your AIMS context definition? (weight 7, scale 1–5)
Clause 5 — Leadership
Top management commitment, AI policy establishment, roles, responsibilities, and authorities
- Does top management demonstrate commitment to the AI management system per Clause 5.1? (weight 9, Y/N)
- Has the AI policy been established by top management and communicated to relevant parties per Clause 5.2? (weight 9, Y/N)
- Are organisational roles, responsibilities, and authorities for the AIMS assigned per Clause 5.3? (weight 8, Y/N)
- Does leadership ensure adequate resources are allocated for the AI management system per Clause 5.1? (weight 7, Y/N)
- Does leadership conduct periodic reviews of AI management system effectiveness per Clause 5.1? (weight 8, Y/N)
- How mature is leadership engagement with AI governance? (weight 8, scale 1–5)
Clause 6 — Planning
Risk and opportunity assessment, AI risk assessment and treatment, AI system impact assessment, objectives and change planning
- Are AI risks and opportunities identified and assessed per Clause 6.1? (weight 9, Y/N)
- Is a formal AI risk assessment conducted and documented per Clause 6.1.2? (weight 9, Y/N)
- Is a risk treatment plan documented with selected treatment options per Clause 6.1.3? (weight 8, Y/N)
- Is an AI system impact assessment performed per Clause 6.1.4? (weight 8, Y/N)
- Are AI management system objectives established and measurable per Clause 6.2? (weight 8, Y/N)
- Are change management processes in place for the AI management system per Clause 6.3? (weight 7, Y/N)
- How mature is your AI risk planning process? (weight 8, scale 1–5)
Clause 7 — Support
Resources, competence, awareness, communication, and documented information management
- Are adequate resources provided for establishing, implementing, and maintaining the AIMS per Clause 7.1? (weight 8, Y/N)
- Is competence of persons involved in AI activities ensured and documented per Clause 7.2? (weight 8, Y/N)
- Is there an awareness programme ensuring personnel understand the AI policy and their contributions per Clause 7.3? (weight 7, Y/N)
- Are communication processes defined for internal and external AI-related communications per Clause 7.4? (weight 7, Y/N)
- Is documented information for the AIMS created, controlled, and maintained per Clause 7.5? (weight 8, Y/N)
- How mature is your AIMS support infrastructure? (weight 7, scale 1–5)
Clause 8 — Operation
Operational planning and control, performing AI risk assessment, risk treatment, and impact assessment
- Is operational planning and control implemented for AI management system processes per Clause 8.1? (weight 8, Y/N)
- Is the AI risk assessment performed at planned intervals and when changes occur per Clause 8.2? (weight 8, Y/N)
- Is the AI risk treatment plan implemented and its effectiveness monitored per Clause 8.3? (weight 8, Y/N)
- Is the AI system impact assessment updated when operational changes occur per Clause 8.4? (weight 7, Y/N)
- Are outsourced AI processes identified and controlled per Clause 8.1? (weight 7, Y/N)
- How mature is your AIMS operational management? (weight 7, scale 1–5)
Clause 9 — Performance Evaluation
Monitoring, measurement, analysis, evaluation, internal audit programme, and management review
- Are monitoring, measurement, analysis, and evaluation processes established for the AIMS per Clause 9.1? (weight 8, Y/N)
- Is an internal audit programme implemented at planned intervals per Clause 9.2? (weight 9, Y/N)
- Are audit results documented and communicated to relevant management per Clause 9.2? (weight 8, Y/N)
- Does top management conduct management reviews of the AIMS at planned intervals per Clause 9.3? (weight 9, Y/N)
- Do management review outputs include decisions on improvement opportunities and resource needs per Clause 9.3? (weight 7, Y/N)
- How mature is your AIMS performance evaluation? (weight 8, scale 1–5)
Clause 10 — Improvement
Continual improvement, nonconformity and corrective action, effectiveness review
- Are processes for continual improvement of the AIMS established per Clause 10.1? (weight 8, Y/N)
- Are nonconformity and corrective action procedures documented and followed per Clause 10.2? (weight 9, Y/N)
- Is root cause analysis conducted for identified nonconformities per Clause 10.2? (weight 8, Y/N)
- Is the effectiveness of corrective actions reviewed and verified per Clause 10.2? (weight 7, Y/N)
- How mature is your AIMS improvement programme? (weight 7, scale 1–5)
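Assuming each Y/N question above carries a numeric weight and each block ends with a 1–5 maturity rating, the answers lend themselves to a simple weighted score. The blending rule below (a 70/30 split between weighted Y/N answers and normalised maturity) is purely an illustrative assumption, not anything ISO 42001 or Probe Six specifies.

```python
# Sketch: score a governance questionnaire block where each Y/N question
# has a weight and the block ends with a 1-5 maturity self-rating.
# The 70/30 blending rule is an illustrative assumption.

def domain_score(yn_answers: list[tuple[bool, int]], maturity: int) -> float:
    """yn_answers: (answer, weight) pairs; maturity: 1-5 self-rating."""
    total_weight = sum(w for _, w in yn_answers)
    yn_score = sum(w for ok, w in yn_answers if ok) / total_weight
    maturity_score = (maturity - 1) / 4  # normalise 1-5 onto 0-1
    return round(0.7 * yn_score + 0.3 * maturity_score, 3)

# Example shaped like Clause 10: four weighted Y/N answers plus maturity.
print(domain_score([(True, 8), (True, 9), (False, 8), (True, 7)], 3))
```

Weighting lets high-impact questions (weight 9) pull the score down harder than peripheral ones (weight 7) when answered "No".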
Annex C — AI Objectives
Annex C defines 11 organisational objectives for responsible AI. These objectives provide context for the Annex A controls and inform the governance assessment. Probe Six maps its automated and governance assessments to these objectives to ensure comprehensive coverage.
| Objective | Description |
|---|---|
| Accountability | Clear responsibility for AI system development, deployment, and outcomes with traceable decision-making |
| Transparency | Openness about how AI systems work, their decision-making processes, data usage, and limitations |
| Controllability | Ability to intervene in, adjust, or override AI system behaviour throughout the lifecycle |
| Sustainability | Sustainable practices in AI system development to minimise environmental impact and resource consumption |
| Fairness | Measures to detect and mitigate biases in AI systems with transparent trade-offs and non-discrimination |
| Safety | AI systems operate safely without causing physical, psychological, or societal harm |
| Privacy | Protection of personal data and privacy rights in AI data collection, processing, and decision-making |
| Security | Protection of AI systems from adversarial attacks, data poisoning, model theft, and unauthorised access |
| Availability | AI systems are available and accessible when needed with appropriate reliability and uptime |
| Robustness | AI systems perform reliably under varying and unexpected conditions including adversarial perturbations |
| Explainability | AI system outputs can be explained and understood by users and affected parties with accessible reasoning |
Out of Scope
The following elements of ISO/IEC 42001 are not included in Probe Six's automated assessment. These items require on-site audit, physical inspection, or are informative guidance rather than assessable requirements.
| Item | Reason |
|---|---|
| Annex B (Implementation Guidance) | Informative annex providing guidance on how to implement Annex A controls — not itself auditable requirements |
| Annex D (Cross-Domain Applicability) | Informative annex providing sector-specific guidance (healthcare, finance, defence) — applicability context, not assessable controls |
| Physical security controls | Hardware security, facility access, and physical infrastructure require on-site assessment and cannot be tested via runtime probes |
| Environmental impact measurement | Computing energy consumption and carbon footprint require infrastructure-level monitoring beyond AI endpoint assessment |
| Certification body procedures | Stage 1/2 audit processes and certification maintenance are auditor responsibilities, not system-level requirements |
Running an ISO 42001 Assessment
To run an ISO/IEC 42001 assessment with Probe Six:
- Configure your endpoint — Add your AI system endpoint (AWS Bedrock, OpenAI, or custom API) in the Endpoints section.
- Select domains — On the scan configuration page, select the ISO 42001 template and choose which Annex A domains and management system clauses to include. You can run a full assessment or focus on specific domains.
- Complete governance questions — When you select a domain, its governance questions appear inline below the domain row. Answer them in context — your responses auto-save and persist across scans. These cover Annex A controls and management system clauses that require organisational assessment.
- Run the scan — The Probe Six engine will execute automated adversarial probes against your endpoint for testable controls (A.5.4, A.6.2.4, A.7.4, A.8.5, A.9.4) and incorporate your governance responses for the remaining controls.
- Review your report — The report shows per-domain coverage, pass rates, and severity ratings across all assessed controls. Use this to identify gaps and prioritise remediation before formal certification audits.
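To make the workflow concrete, the steps above can be pictured as a scan configuration; every field name in this sketch is hypothetical (Probe Six configures scans through its UI, and none of these keys are confirmed).

```python
# Hypothetical scan configuration, expressed as a plain dict purely for
# illustration; none of these field names are confirmed Probe Six settings.
scan_config = {
    "endpoint": {
        "provider": "openai",  # or "bedrock" / a custom API
        "url": "https://api.example.com/v1/chat",  # placeholder URL
    },
    "template": "iso-42001",
    "annex_a_domains": ["A.2", "A.5", "A.6", "A.8", "A.9"],  # subset or full
    "clauses": [4, 5, 6, 7, 8, 9, 10],
    "governance_responses": "saved",  # auto-saved answers persist across scans
}
print(scan_config["template"])
```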
Certification Context
ISO/IEC 42001 is a certifiable standard. Organisations can undergo a Stage 1 (documentation review) and Stage 2 (implementation audit) assessment by an accredited certification body to achieve formal ISO 42001 certification.
Probe Six's assessment is designed to complement — not replace — formal certification. It provides evidence of technical compliance for automated controls and structured governance assessment for organisational controls, which can be presented to auditors as supporting evidence during certification audits.
The combination of automated testing results and governance questionnaire responses creates a comprehensive compliance baseline that organisations can use to identify gaps, track improvement, and demonstrate ongoing conformity between certification cycles.