ISO and IEC Make Foundational Standard on Artificial Intelligence Publicly Available

Published on
Sep 15, 2023

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have made ISO/IEC 22989, a foundational AI standard that defines terminology for the various aspects of an AI system, available to the public.

The foundational standard defines more than 110 key concepts in the field of AI, including terms like ‘datasets’, ‘AI agents’, ‘transparency’, and ‘explainability’.

Also providing conceptual guidance on aspects associated with Natural Language Processing (NLP) and Computer Vision (CV) models, ISO/IEC 22989 aims to promote the development of a shared vocabulary, terminology, and framework for essential AI concepts, thereby facilitating dialogue between stakeholders.

A crucial building block in articulating different aspects of AI systems, ISO/IEC 22989 is expected to pave the way for the development of technical standards focused on establishing performance baselines, processes, and protocols on responsible AI development and deployment, as well as metrics to gauge model efficacy.

Snapshot of key definitions in ISO/IEC 22989

  • AI System: “An engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.”
  • Artificial General Intelligence (AGI): “Type of AI System that addresses a broad range of tasks with a satisfactory level of performance.”
  • AI Auditor: “An organization or entity that is concerned with the audit of organizations producing, providing or using AI systems, to assess conformance to standards, policies or legal requirements.”
  • Robustness: “Ability of a system to maintain its level of performance under any circumstances.”
  • Explainability: “Property of an AI system to express important factors influencing the AI system results in a way that humans can understand.”
  • Predictability: “Property of an AI system that enables reliable assumptions by stakeholders about the output.”
  • Transparency: “Property of an AI system that appropriate information about the system is made available to relevant stakeholders.”
  • Trustworthiness: “Ability to meet stakeholder expectations in a verifiable way.”

The standard further clarifies that trustworthiness also encompasses reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality and usability.

The importance of multistakeholder consultation

The standard defines a stakeholder as “any individual, group or organization that can affect, be affected by or perceive itself to be affected by a decision or activity” and provides a comprehensive map of the AI stakeholder ecosystem, clearly delineating the different kinds of players associated with an AI system. ISO/IEC 22989 also notes that a single entity can take on multiple stakeholder roles.

AI Stakeholder roles and their sub-roles. ISO/IEC 22989

Considering the range of stakeholders that can be invested in a single AI system, the standard highlights the need for multi-stakeholder consultations that draw on diverse subject-matter expertise to better identify the risks of each system and ensure regulatory compliance.

Why AI standardisation is needed

With AI regulations multiplying rapidly in jurisdictions like the European Union, United Kingdom, and United States, the need for governance standards is growing increasingly clear. For instance, the EU AI Act aims to fulfil its objectives regarding AI trustworthiness, accountability, risk management, and transparency by adopting technical and procedural standards.

The urgent need for standardisation is further underscored by the lack of harmonisation in the regulatory language used across different regulations. This has translated to a lack of global alignment and consensus on crucial issues like AI taxonomy, governance mechanisms, assessment methodologies, and measurement.

Establishing clear and universally accepted standards would enable a more coherent and consistent approach to governing AI technologies, mitigating risks, and fostering responsible and ethical AI development and deployment.

How Holistic AI can help

For organisations using AI in their business, third-party audits and other conformity assessment processes are increasingly demanded by emerging regulations.

Holistic AI are Governance, Risk, and Compliance specialists. Through our proprietary AI Governance Platform and suite of innovative solutions, we can help you operationalise technical standards at scale, giving you the tools to ensure your AI systems are developed and deployed safely, effectively, and in line with compliance obligations.

We assist organisations to demonstrate responsible AI practices to regulators and consumers through:

  1. AI Assessments: Through quantitative and qualitative assessments, we ensure the dependability of AI-driven products across five key verticals: efficacy, robustness, privacy, bias and explainability.
  2. Third-party Risk Management: Customised recommendations to manage and mitigate AI risks.
  3. Compliance: Assessment of compliance against applicable AI regulations and industry standards.

Schedule a call with one of our specialists to find out more about how we can help your organisation.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

See the industry-leading AI governance platform in action

Schedule a call with one of our experts

Get a demo