ISO/IEC 42001:2023

ISO/IEC 42001:2023 is the first international standard for AI management systems (AIMS). Published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), it provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system.

The standard follows the ISO High-Level Structure (HLS) used by ISO 27001, ISO 9001, and other management system standards, making it straightforward to integrate into existing management systems. It defines requirements in Clauses 4–10 and provides 38 Annex A controls across 9 domains that address the unique challenges of AI systems.

Probe Six assesses AI systems against all 38 Annex A controls across 9 domains, plus all 7 management system clauses, combining automated adversarial testing with structured governance questionnaires. Each control is mapped to its official ISO reference (e.g. A.6.2.4, A.8.5).

The assessment covers 58 automated security plugins that exercise the AI system in real time across 5 testable controls, plus 128 governance questions across all 9 Annex A domains and 7 management system clauses for obligations that require organisational assessment.

Standard Structure

ISO/IEC 42001 is organised into two main parts: the management system requirements (Clauses 4–10) and the Annex A controls. Together, they provide a comprehensive framework for managing AI risks and ensuring responsible AI development and deployment.

Annex A — Controls

38 controls across 9 domains (A.2–A.10) addressing AI-specific concerns including policies, resources, impact assessment, lifecycle management, data governance, transparency, responsible use, and third-party relationships.

9 domains· 38 controls· 2 automated, 3 hybrid, 33 governance

Management System — Clauses 4–10

7 clause groups following the ISO High-Level Structure (HLS), covering context, leadership, planning, support, operation, performance evaluation, and improvement. These are assessed through governance questionnaires.

7 clause groups· All governance assessment

Coverage Summary

38
Annex A Controls
7
Clause Groups
58
Automated Plugins
128
Governance Questions
Automated TestingControls tested through automated adversarial probes that exercise the AI system in real time, producing measurable evidence of compliance. Covers security testing (A.6.2.4) and AI interaction disclosure (A.8.5).
HybridControls assessed through both automated testing and governance questionnaires. Automated probes validate technical aspects while governance questions assess organisational processes. Covers impact assessment (A.5.4), data quality (A.7.4), and intended use boundaries (A.9.4).
Governance AssessmentControls assessed through structured governance questionnaires referencing specific ISO 42001 control IDs. These cover organisational policies, processes, and documentation that cannot be tested at runtime.

Annex A — Domain-by-Domain Assessment

Each of the 9 Annex A domains is assessed through a combination of automated adversarial testing (where applicable) and governance questionnaires. Controls are listed with their official ISO reference, coverage type, and description.

A.2Policies Related to AI

3 controls· 7 governance questions
ControlNameCoverageDescription
A.2.2AI policyGovernance AssessmentDocument a formal AI policy covering the organisation's approach to responsible AI development and use, approved by top management
A.2.3Policies affected by AIGovernance AssessmentDetermine how existing organisational policies are affected by or need to be updated to account for AI systems
A.2.4Review of AI policyGovernance AssessmentEnsure AI policies are reviewed at planned intervals or when significant changes occur

Governance Questions

  1. Is a formal AI policy documented and approved by top management per A.2.2?9Y/N
  2. Does the AI policy cover the organisation's approach to responsible AI development and use per A.2.2?8Y/N
  3. Have existing organisational policies been reviewed and updated to account for AI systems per A.2.3?8Y/N
  4. Is there a documented assessment of how AI systems affect existing policies (HR, data protection, security, etc.) per A.2.3?7Y/N
  5. Are AI policies reviewed at planned intervals or when significant changes occur per A.2.4?8Y/N
  6. Are review triggers defined for AI policy updates (e.g. regulatory changes, new AI deployments, incidents) per A.2.4?7Y/N
  7. How mature is your AI policy framework?81–5

A.3Internal Organisation

2 controls· 6 governance questions
ControlNameCoverageDescription
A.3.2AI roles and responsibilitiesGovernance AssessmentDefine and assign roles and responsibilities for AI governance activities across the lifecycle
A.3.3Reporting of concernsGovernance AssessmentEstablish mechanisms for personnel to report AI-related concerns without fear of retaliation

Governance Questions

  1. Are roles and responsibilities for AI governance defined and assigned across the AI system lifecycle per A.3.2?9Y/N
  2. Is there a documented RACI matrix or equivalent for AI-related activities per A.3.2?8Y/N
  3. Are accountability mechanisms established for AI system decisions and outcomes per A.3.2?8Y/N
  4. Are mechanisms in place for personnel to report AI-related concerns without fear of retaliation per A.3.3?9Y/N
  5. Are reported AI concerns tracked, investigated, and resolved in a timely manner per A.3.3?8Y/N
  6. How mature is your internal AI governance structure?71–5

A.4Resources for AI Systems

5 controls· 10 governance questions
ControlNameCoverageDescription
A.4.2Resources related to AI systemsGovernance AssessmentDocument and provide adequate resources for AI systems including infrastructure, support systems, and dependencies
A.4.3Data resourcesGovernance AssessmentIdentify and document data resources needed for AI systems including training, validation, and operational data
A.4.4Tooling resourcesGovernance AssessmentDocument tools and frameworks used in AI system development, training, testing, and deployment
A.4.5System and computing resourcesGovernance AssessmentProvide adequate computing infrastructure for AI systems
A.4.6Human resourcesGovernance AssessmentEnsure adequate personnel with appropriate competencies for AI-related roles

Governance Questions

  1. Are resources for AI systems documented including infrastructure, support systems, and dependencies per A.4.2?8Y/N
  2. Are AI system dependencies (models, APIs, libraries) identified and their availability risks assessed per A.4.2?7Y/N
  3. Are data resources for AI systems identified and documented (training, validation, operational data) per A.4.3?8Y/N
  4. Are data requirements specified for each stage of the AI system lifecycle per A.4.3?8Y/N
  5. Are tools and frameworks used in AI development, training, testing, and deployment documented per A.4.4?7Y/N
  6. Are tool selection criteria defined and applied (e.g. licensing, security, supportability) per A.4.4?7Y/N
  7. Is adequate computing infrastructure provided for AI system development, training, and operation per A.4.5?7Y/N
  8. Are personnel with appropriate competencies assigned to AI-related roles per A.4.6?8Y/N
  9. Are training and development programmes in place to maintain AI-related competencies per A.4.6?7Y/N
  10. How mature is your AI resource management?71–5

A.5Assessing Impacts of AI Systems

4 controls· 15 automated plugins· 10 governance questions
ControlNameCoverageDescription
A.5.2AI system impact assessment processGovernance AssessmentEstablish a documented process for conducting AI system impact assessments on individuals, groups, and society
A.5.3Documentation of impact assessmentGovernance AssessmentDocument impact assessment results including identified impacts, mitigation measures, and residual impacts
A.5.4Assessing impact on individuals or groupsHybridAssess how AI systems affect individuals or groups including potential for discrimination, bias, privacy violations, and autonomy impacts
A.5.5Assessing societal impactsGovernance AssessmentAssess broader societal impacts of AI systems including environmental, economic, and social effects

Automated Plugins

PluginMaps to Control(s)
Bias: RaceA.5.4
Bias: GenderA.5.4
Bias: AgeA.5.4
Bias: DisabilityA.5.4
Bias: ReligionA.5.4
Bias: Sexual OrientationA.5.4
Bias: SocioeconomicA.5.4
Bias: PoliticalA.5.4
Bias: NationalityA.5.4
PII: Direct DisclosureA.5.4
PII: API/Database LeakageA.5.4
PII: Session LeakageA.5.4
PII: Social EngineeringA.5.4
Cross-Session Data LeakageA.5.4
Training Data ExtractionA.5.4

Governance Questions

  1. Is there a documented process for conducting AI system impact assessments per A.5.2?9Y/N
  2. Does the impact assessment process cover potential effects on individuals, groups, and society per A.5.2?8Y/N
  3. Are impact assessment results documented including identified impacts, mitigation measures, and residual impacts per A.5.3?8Y/N
  4. Are residual impacts formally accepted by an appropriate authority within the organisation per A.5.3?7Y/N
  5. Is the AI system assessed for potential discrimination, bias, and unfair outcomes affecting individuals or groups per A.5.4?9Y/N
  6. Are privacy impacts assessed for AI system data collection, processing, and decision-making per A.5.4?8Y/N
  7. Are impacts on individual autonomy and human agency assessed per A.5.4?8Y/N
  8. Are broader societal impacts assessed including environmental, economic, and social effects per A.5.5?7Y/N
  9. Is the environmental impact of AI system operation (energy, compute resources) considered per A.5.5?7Y/N
  10. How mature is your AI impact assessment programme?81–5

A.6AI System Life Cycle

9 controls· 37 automated plugins· 18 governance questions
ControlNameCoverageDescription
A.6.1.2Objectives for responsible developmentGovernance AssessmentEstablish objectives for responsible AI system development addressing fairness, transparency, accountability, and safety
A.6.1.3Processes for responsible design and developmentGovernance AssessmentDefine processes for responsible design and development ensuring ethical foundations and adherence to guidelines
A.6.2.2AI system requirementsGovernance AssessmentSpecify and document requirements for AI systems including functional, non-functional, constraints, and acceptance criteria
A.6.2.3Design and development documentationGovernance AssessmentRecord AI system design and development decisions based on organisational objectives and requirements
A.6.2.4Verification and validationAutomated TestingTest and validate the AI system against requirements including functional, performance, bias, security, and robustness testing
A.6.2.5AI system deploymentGovernance AssessmentDocument a deployment plan and ensure requirements are met prior to production deployment
A.6.2.6Operation and monitoringGovernance AssessmentPlan for ongoing oversight post-deployment including performance monitoring, issue detection, and model drift tracking
A.6.2.7Technical documentationGovernance AssessmentDetermine and provide technical documentation for each relevant category of interested parties
A.6.2.8Event log recordingGovernance AssessmentEnable event log recording throughout the AI system lifecycle, maintaining logs of decisions, inputs, outputs, and changes

Automated Plugins

PluginMaps to Control(s)
SQL InjectionA.6.2.4
Shell InjectionA.6.2.4
Server-Side Request Forgery (SSRF)A.6.2.4
ASCII SmugglingA.6.2.4
Debug AccessA.6.2.4
Data ExfiltrationA.6.2.4
Role-Based Access Control (RBAC)A.6.2.4
Broken Object-Level Authorisation (BOLA)A.6.2.4
Broken Function-Level Authorisation (BFLA)A.6.2.4
Model FingerprintingA.6.2.4
Error Information LeakageA.6.2.4
Privilege EscalationA.6.2.4
Secrets ProbingA.6.2.4
Reverse ShellA.6.2.4
Multimodal InjectionA.6.2.4
Violent CrimeA.6.2.4
Sex CrimeA.6.2.4
Child ExploitationA.6.2.4
Self-HarmA.6.2.4
Chemical & Biological WeaponsA.6.2.4
Indiscriminate WeaponsA.6.2.4
RadicalizationA.6.2.4
CybercrimeA.6.2.4
Illegal DrugsA.6.2.4
Illegal ActivitiesA.6.2.4
Unsafe PracticesA.6.2.4
Graphic ContentA.6.2.4
ProfanityA.6.2.4
Direct Prompt InjectionA.6.2.4
Indirect Prompt InjectionA.6.2.4
Prompt ExtractionA.6.2.4
Prompt HijackingA.6.2.4
Self-ReplicationA.6.2.4
HallucinationA.6.2.4
OverrelianceA.6.2.4
SycophancyA.6.2.4
ContractsA.6.2.4

Governance Questions

  1. Are objectives for responsible AI development established addressing fairness, transparency, accountability, and safety per A.6.1.2?8Y/N
  2. Are responsible development objectives measurable and aligned with organisational values per A.6.1.2?8Y/N
  3. Are processes for responsible design and development defined and documented per A.6.1.3?8Y/N
  4. Do design and development processes ensure adherence to ethical guidelines and regulatory requirements per A.6.1.3?7Y/N
  5. Are functional and non-functional requirements for the AI system specified and documented per A.6.2.2?8Y/N
  6. Are acceptance criteria defined for AI system validation per A.6.2.2?7Y/N
  7. Are design and development decisions recorded and traceable to requirements per A.6.2.3?7Y/N
  8. Is the AI system verified and validated against defined requirements including functional, performance, and security testing per A.6.2.4?9Y/N
  9. Is bias and robustness testing conducted as part of verification and validation per A.6.2.4?9Y/N
  10. Is a deployment plan documented with pre-deployment requirements verified per A.6.2.5?8Y/N
  11. Are deployment rollback and contingency procedures defined per A.6.2.5?8Y/N
  12. Is post-deployment monitoring planned including performance tracking, issue detection, and model drift per A.6.2.6?8Y/N
  13. Are mechanisms in place to detect and respond to model degradation or concept drift per A.6.2.6?7Y/N
  14. Is technical documentation provided for each relevant category of interested parties per A.6.2.7?7Y/N
  15. Does technical documentation include system architecture, capabilities, limitations, and intended use per A.6.2.7?7Y/N
  16. Is event log recording enabled throughout the AI system lifecycle per A.6.2.8?8Y/N
  17. Do event logs capture decisions, inputs, outputs, and changes throughout the lifecycle per A.6.2.8?8Y/N
  18. How mature is your AI system lifecycle management?81–5

A.7Data for AI Systems

5 controls· 3 automated plugins· 12 governance questions
ControlNameCoverageDescription
A.7.2Data for development and enhancementGovernance AssessmentDefine and document processes for managing data used in AI system development and enhancement
A.7.3Acquisition of dataGovernance AssessmentDocument data acquisition processes including sources, consent, privacy compliance, and representativeness
A.7.4Quality of dataHybridDefine and measure data quality requirements ensuring accuracy, completeness, relevance, and representativeness
A.7.5Data provenanceGovernance AssessmentRecord and document data origin, transformations, and lineage throughout the lifecycle
A.7.6Data preparationGovernance AssessmentDefine and document criteria for selecting data preparation methods and how data is cleaned, labelled, and transformed

Automated Plugins

PluginMaps to Control(s)
Factual AccuracyA.7.4
Citation VerificationA.7.4
Confidence CalibrationA.7.4

Governance Questions

  1. Are processes for managing data used in AI system development and enhancement defined and documented per A.7.2?8Y/N
  2. Is there a data management strategy covering the full AI system data lifecycle per A.7.2?7Y/N
  3. Are data acquisition processes documented including sources, consent, and privacy compliance per A.7.3?8Y/N
  4. Is data representativeness assessed to ensure training data adequately reflects the intended use population per A.7.3?9Y/N
  5. Are consent mechanisms in place for personal data used in AI systems per A.7.3?9Y/N
  6. Are data quality requirements defined and measured (accuracy, completeness, relevance, representativeness) per A.7.4?8Y/N
  7. Are data quality issues identified, tracked, and remediated per A.7.4?8Y/N
  8. Is data origin, transformation history, and lineage recorded and documented per A.7.5?8Y/N
  9. Is data provenance traceable from source through to model training and inference per A.7.5?7Y/N
  10. Are criteria for selecting data preparation methods defined and documented per A.7.6?7Y/N
  11. Are data cleaning, labelling, and transformation processes documented and reproducible per A.7.6?7Y/N
  12. How mature is your AI data management programme?81–5

A.8Information for Interested Parties

4 controls· 4 automated plugins· 9 governance questions
ControlNameCoverageDescription
A.8.2System documentation and user informationGovernance AssessmentProvide essential information about AI systems to users including purpose, usage instructions, and technical limitations
A.8.3Reporting of adverse impactsGovernance AssessmentEstablish processes for reporting adverse impacts or incidents to relevant external parties
A.8.4Communication of incidentsGovernance AssessmentDefine and implement plans for communicating AI-related incidents to affected parties and regulators
A.8.5Information about AI system interactionAutomated TestingNotify users when interacting with an AI system and explain how AI-generated outputs factor into decisions

Automated Plugins

PluginMaps to Control(s)
AI Self-DisclosureA.8.5
Limitation DisclosureA.8.5
ExplainabilityA.8.5
ContractsA.8.5

Governance Questions

  1. Is essential information about AI systems provided to users including purpose, usage instructions, and limitations per A.8.2?8Y/N
  2. Is system documentation tailored to the needs and technical level of different user groups per A.8.2?8Y/N
  3. Are processes established for reporting adverse impacts or incidents to relevant external parties per A.8.3?8Y/N
  4. Are external reporting obligations identified (regulators, affected communities, data subjects) per A.8.3?7Y/N
  5. Are incident communication plans defined for notifying affected parties and regulators per A.8.4?8Y/N
  6. Are communication timelines and escalation procedures defined for AI-related incidents per A.8.4?8Y/N
  7. Are users notified when they are interacting with an AI system per A.8.5?9Y/N
  8. Is it explained to users how AI-generated outputs factor into decisions that affect them per A.8.5?8Y/N
  9. How mature is your AI transparency and communication programme?71–5

A.9Use of AI Systems

3 controls· 5 automated plugins· 8 governance questions
ControlNameCoverageDescription
A.9.2Processes for responsible useGovernance AssessmentEstablish operational processes for responsible AI use including human oversight mechanisms and escalation procedures
A.9.3Objectives for responsible useGovernance AssessmentDefine and document responsible use expectations and objectives for all users of AI systems
A.9.4Intended use of the AI systemHybridEnsure use remains within intended parameters, with clearly defined and communicated intended use and prohibited use

Automated Plugins

PluginMaps to Control(s)
Direct Prompt InjectionA.9.4
Indirect Prompt InjectionA.9.4
Prompt ExtractionA.9.4
Prompt HijackingA.9.4
Self-ReplicationA.9.4

Governance Questions

  1. Are operational processes for responsible AI use established including human oversight mechanisms per A.9.2?8Y/N
  2. Are human oversight and escalation procedures defined for AI system decisions per A.9.2?9Y/N
  3. Can humans intervene in or override AI system decisions when necessary per A.9.2?8Y/N
  4. Are responsible use expectations and objectives documented and communicated to all users per A.9.3?7Y/N
  5. Is the intended use of the AI system clearly defined and communicated per A.9.4?8Y/N
  6. Are prohibited uses clearly defined and communicated to users per A.9.4?8Y/N
  7. Are mechanisms in place to detect and prevent use outside intended parameters per A.9.4?8Y/N
  8. How mature is your responsible AI use programme?71–5

A.10Third-Party and Customer Relationships

3 controls· 7 governance questions
ControlNameCoverageDescription
A.10.2Allocating responsibilities across the lifecycleGovernance AssessmentAllocate and document responsibilities between the organisation and third parties for each lifecycle stage
A.10.3SuppliersGovernance AssessmentSelect, assess, and monitor suppliers of AI system components for quality, safety, and ethical alignment
A.10.4CustomersGovernance AssessmentDocument AI systems provided to customers and provide appropriate support, information, and transparency

Governance Questions

  1. Are responsibilities allocated and documented between the organisation and third parties for each AI system lifecycle stage per A.10.2?8Y/N
  2. Are contractual arrangements in place defining AI-related responsibilities with third parties per A.10.2?7Y/N
  3. Are suppliers of AI system components assessed for quality, safety, and ethical alignment per A.10.3?8Y/N
  4. Is there ongoing monitoring of AI supplier performance and risk per A.10.3?7Y/N
  5. Are AI systems provided to customers documented with appropriate support and transparency per A.10.4?7Y/N
  6. Are customer obligations and limitations regarding AI system use clearly communicated per A.10.4?7Y/N
  7. How mature is your third-party AI governance?71–5

Management System — Clauses 4–10

The management system requirements follow the ISO High-Level Structure (HLS), which is common to all ISO management system standards. Each clause group is assessed through governance questionnaires that verify the organisation has established appropriate processes, documentation, and oversight mechanisms.

Clause 4Context of the Organisation

Determine external and internal issues, understand interested parties, define AIMS scope and boundaries

5 governance questions
  1. Are external and internal issues relevant to the AI management system identified and documented per Clause 4.1?8Y/N
  2. Are interested parties and their requirements related to AI systems identified per Clause 4.2?8Y/N
  3. Is the scope of the AI management system (AIMS) defined and documented per Clause 4.3?9Y/N
  4. Are the boundaries and applicability of the AIMS established per Clause 4.3?8Y/N
  5. How mature is your AIMS context definition?71–5

Clause 5Leadership

Top management commitment, AI policy establishment, roles, responsibilities, and authorities

6 governance questions
  1. Does top management demonstrate commitment to the AI management system per Clause 5.1?9Y/N
  2. Has the AI policy been established by top management and communicated to relevant parties per Clause 5.2?9Y/N
  3. Are organisational roles, responsibilities, and authorities for the AIMS assigned per Clause 5.3?8Y/N
  4. Does leadership ensure adequate resources are allocated for the AI management system per Clause 5.1?7Y/N
  5. Does leadership conduct periodic reviews of AI management system effectiveness per Clause 5.1?8Y/N
  6. How mature is leadership engagement with AI governance?81–5

Clause 6Planning

Risk and opportunity assessment, AI risk assessment and treatment, AI system impact assessment, objectives and change planning

7 governance questions
  1. Are AI risks and opportunities identified and assessed per Clause 6.1?9Y/N
  2. Is a formal AI risk assessment conducted and documented per Clause 6.1.2?9Y/N
  3. Is a risk treatment plan documented with selected treatment options per Clause 6.1.3?8Y/N
  4. Is an AI system impact assessment performed per Clause 6.1.4?8Y/N
  5. Are AI management system objectives established and measurable per Clause 6.2?8Y/N
  6. Are change management processes in place for the AI management system per Clause 6.3?7Y/N
  7. How mature is your AI risk planning process?81–5

Clause 7Support

Resources, competence, awareness, communication, and documented information management

6 governance questions
  1. Are adequate resources provided for establishing, implementing, and maintaining the AIMS per Clause 7.1?8Y/N
  2. Is competence of persons involved in AI activities ensured and documented per Clause 7.2?8Y/N
  3. Is there an awareness programme ensuring personnel understand the AI policy and their contributions per Clause 7.3?7Y/N
  4. Are communication processes defined for internal and external AI-related communications per Clause 7.4?7Y/N
  5. Is documented information for the AIMS created, controlled, and maintained per Clause 7.5?8Y/N
  6. How mature is your AIMS support infrastructure?71–5

Clause 8Operation

Operational planning and control, performing AI risk assessment, risk treatment, and impact assessment

6 governance questions
  1. Is operational planning and control implemented for AI management system processes per Clause 8.1?8Y/N
  2. Is the AI risk assessment performed at planned intervals and when changes occur per Clause 8.2?8Y/N
  3. Is the AI risk treatment plan implemented and its effectiveness monitored per Clause 8.3?8Y/N
  4. Is the AI system impact assessment updated when operational changes occur per Clause 8.4?7Y/N
  5. Are outsourced AI processes identified and controlled per Clause 8.1?7Y/N
  6. How mature is your AIMS operational management?71–5

Clause 9Performance Evaluation

Monitoring, measurement, analysis, evaluation, internal audit programme, and management review

6 governance questions
  1. Are monitoring, measurement, analysis, and evaluation processes established for the AIMS per Clause 9.1?8Y/N
  2. Is an internal audit programme implemented at planned intervals per Clause 9.2?9Y/N
  3. Are audit results documented and communicated to relevant management per Clause 9.2?8Y/N
  4. Does top management conduct management reviews of the AIMS at planned intervals per Clause 9.3?9Y/N
  5. Do management review outputs include decisions on improvement opportunities and resource needs per Clause 9.3?7Y/N
  6. How mature is your AIMS performance evaluation?81–5

Clause 10Improvement

Continual improvement, nonconformity and corrective action, effectiveness review

5 governance questions
  1. Are processes for continual improvement of the AIMS established per Clause 10.1?8Y/N
  2. Are nonconformity and corrective action procedures documented and followed per Clause 10.2?9Y/N
  3. Is root cause analysis conducted for identified nonconformities per Clause 10.2?8Y/N
  4. Is the effectiveness of corrective actions reviewed and verified per Clause 10.2?7Y/N
  5. How mature is your AIMS improvement programme?71–5

Annex C — AI Objectives

Annex C defines 11 organisational objectives for responsible AI. These objectives provide context for the Annex A controls and inform the governance assessment. Probe Six maps its automated and governance assessments to these objectives to ensure comprehensive coverage.

ObjectiveDescription
AccountabilityClear responsibility for AI system development, deployment, and outcomes with traceable decision-making
TransparencyOpenness about how AI systems work, their decision-making processes, data usage, and limitations
ControllabilityAbility to intervene in, adjust, or override AI system behaviour throughout the lifecycle
SustainabilitySustainable practices in AI system development to minimise environmental impact and resource consumption
FairnessMeasures to detect and mitigate biases in AI systems with transparent trade-offs and non-discrimination
SafetyAI systems operate safely without causing physical, psychological, or societal harm
PrivacyProtection of personal data and privacy rights in AI data collection, processing, and decision-making
SecurityProtection of AI systems from adversarial attacks, data poisoning, model theft, and unauthorised access
AvailabilityAI systems are available and accessible when needed with appropriate reliability and uptime
RobustnessAI systems perform reliably under varying and unexpected conditions including adversarial perturbations
ExplainabilityAI system outputs can be explained and understood by users and affected parties with accessible reasoning

Out of Scope

The following elements of ISO/IEC 42001 are not included in Probe Six's automated assessment. These items require on-site audit, physical inspection, or are informative guidance rather than assessable requirements.

ItemReason
Annex B (Implementation Guidance)Informative annex providing guidance on how to implement Annex A controls — not itself auditable requirements
Annex D (Cross-Domain Applicability)Informative annex providing sector-specific guidance (healthcare, finance, defence) — applicability context, not assessable controls
Physical security controlsHardware security, facility access, and physical infrastructure require on-site assessment and cannot be tested via runtime probes
Environmental impact measurementComputing energy consumption and carbon footprint require infrastructure-level monitoring beyond AI endpoint assessment
Certification body proceduresStage 1/2 audit processes and certification maintenance are auditor responsibilities, not system-level requirements

Running an ISO 42001 Assessment

To run an ISO/IEC 42001 assessment with Probe Six:

  1. Configure your endpoint— Add your AI system endpoint (AWS Bedrock, OpenAI, or custom API) in the Endpoints section.
  2. Select domains— On the scan configuration page, select the ISO 42001 template and choose which Annex A domains and management system clauses to include. You can run a full assessment or focus on specific domains.
  3. Complete governance questions— When you select a domain, its governance questions appear inline below the domain row. Answer them in context — your responses auto-save and persist across scans. These cover Annex A controls and management system clauses that require organisational assessment.
  4. Run the scan— The Probe Six engine will execute automated adversarial probes against your endpoint for testable controls (A.5.4, A.6.2.4, A.7.4, A.8.5, A.9.4) and incorporate your governance responses for the remaining controls.
  5. Review your report— The report shows per-domain coverage, pass rates, and severity ratings across all assessed controls. Use this to identify gaps and prioritise remediation before formal certification audits.

Certification Context

ISO/IEC 42001 is a certifiable standard. Organisations can undergo a Stage 1 (documentation review) and Stage 2 (implementation audit) assessment by an accredited certification body to achieve formal ISO 42001 certification.

Probe Six's assessment is designed to complement — not replace — formal certification. It provides evidence of technical compliance for automated controls and structured governance assessment for organisational controls, which can be presented to auditors as supporting evidence during certification audits.

The combination of automated testing results and governance questionnaire responses creates a comprehensive compliance baseline that organisations can use to identify gaps, track improvement, and demonstrate ongoing conformity between certification cycles.