
Mapping AI Standards Across AI Governance, Risk and Compliance

Authored by Siddhant Chatterjee, Public Policy Strategist at Holistic AI, and Ayesha Gulley, Policy Product Manager at Holistic AI
Published on Jul 21, 2023

In the rapidly evolving ecosystem of artificial intelligence (AI), the development and establishment of AI standards has become a pressing necessity. These standards serve as common guidelines, principles and technical specifications for the development, deployment and governance of AI systems. By providing comprehensive baselines for reliability, ethics and trust, standards can help promote transparency, mitigate risks and address concerns relating to fairness, privacy and accountability. They also play a coordinating role in the increasingly complex AI governance landscape by facilitating collaboration, consensus and dialogue between stakeholders in the ecosystem.

[Figure: The Intersection of AI Standards with Governance, Risk, and Compliance]

Key Takeaways:

  • The rise of AI regulation across the globe, particularly in the EU, UK, and US, underscores the growing need for standardised AI governance to ensure trustworthiness, accountability, risk management, and transparency.
  • Differences in regulatory language and approaches create a lack of global alignment and consensus on crucial AI aspects like taxonomy, governance mechanisms, assessment methodologies, and measurement, making standardisation crucial.
  • Standards development organisations (SDOs), such as ISO, IEC, CEN/CENELEC, ETSI, NIST, and BSI, facilitate the development of consensus-driven standards through multi-stakeholder deliberations, promoting global and regional harmonisation.
  • Technical standards in AI governance encompass foundational, process, measurement, and performance standards, providing a structured approach to responsible AI development and deployment.
  • Adopting standards enables organisations to benchmark, audit, and assess AI systems, ensuring conformity and performance evaluation, benefiting developers, consumers, and data subjects impacted by AI technologies.

AI Governance needs standardisation

The need for standards in the AI governance landscape has become increasingly evident given the growing instances of AI regulation across the globe, particularly in the European Union, United Kingdom and United States. For example, the EU’s AI Act seeks to achieve its objectives on AI trustworthiness, accountability, risk management and transparency, among others, through the adoption of technical and process standards. This is evident in Articles 40 and 41 of the legislation's compromise text, which provide for the development of harmonised standards, and of common specifications in instances where harmonised standards are absent or not applicable.

Further, regulatory language on AI governance differs across jurisdictions, resulting in a lack of global alignment and consensus on crucial aspects like AI taxonomy, governance mechanisms, assessment methodologies, and measurement. This lack of harmonisation underscores the urgent need for standardisation: establishing clear and universally accepted standards would enable a more coherent and consistent approach to governing AI technologies, mitigating risks, and fostering responsible and ethical AI development and deployment.

Standards bodies

Standardisation is a multi-stakeholder endeavour, requiring rounds of iteration by technical committees of experts to develop consensus-driven standards. These deliberative processes are facilitated by standards development organisations (SDOs), which can have global as well as regional or national remits. Global SDOs include the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), while regional and national SDOs include the European Committees for Standardisation and Electrotechnical Standardisation (CEN/CENELEC) and the European Telecommunications Standards Institute (ETSI) in the European Union, the National Institute of Standards and Technology (NIST) in the United States, and the British Standards Institution (BSI) in the United Kingdom.

[Figure: AI Governance Standards Bodies]

Mapping technical standards

Here, we provide an overview of the key types of technical standards for AI governance, risk and compliance (GRC), with notable examples of each, drawn primarily from the ISO:

1. Foundational standards

These standards help build common language, terminologies and taxonomies on foundational concepts, thereby facilitating dialogue between stakeholders. Serving as building blocks, these pave the way for the development of process, performance, and other forms of specific standardisation endeavours. Preeminent foundational standards include:

  • ISO/IEC 22989: Establishes definitions and terminologies of different aspects of AI systems. The standard covers over 110 concepts used in AI such as datasets, bias, transparency, and explainability, among others.
  • ISO/IEC 23053: Building on ISO/IEC 22989, this standard provides a framework to clearly explain AI systems that use Machine Learning (ML). The framework outlines the various components of the system and their respective roles within the broader AI ecosystem.

2. Process standards

These standards help create the organisational architecture for the responsible development and deployment of AI systems by universalising the adoption of best practices in management, process design, quality control and governance. Certain process standards are considered "certifiable," meaning that organisations can undergo independent assessments to determine whether they adhere to the prescribed good practice and subsequently obtain certification against that specific standard. Notable process standards for AI systems include:

  • ISO/IEC CD 42001: Provides a template to guide organisations in responsibly integrating and using AI management systems. Currently at draft stage, the standard will be certifiable and auditable, and will help organisations comply with conformity assessment requirements mandated by legislation such as the EU AI Act.
  • ISO/IEC CD 42006: Provides guidance and specifies requirements to enable accredited certification bodies to reliably audit and assure the management systems of organisations that develop and/or use AI systems in accordance with ISO/IEC 42001.
  • ISO/IEC 38507: Offers guidance to the governing body of an organisation that is using or contemplating the use of AI systems. Noting the importance of governance structures to facilitate effective, efficient and acceptable AI use, it promotes the adoption of relevant standards to support effective governance in AI implementation.
  • ISO/IEC 23894:2023(E): Provides guidance and clarity on how organisations can manage risks emanating from the development, deployment and use of AI systems. Based on earlier risk management standards such as ISO 31000:2018, it describes processes for the integration and implementation of AI risk management.

3. Measurement standards

These standards provide universal mechanisms and terminologies for measuring different aspects of an AI system’s performance. They are particularly crucial, as the development and efficacy of trustworthy AI systems depend primarily on defensible measurement methods and mechanisms:

  • ISO/IEC DTS 4213: Provides methodologies and metrics for measuring and assessing the performance of classification algorithms and ML models.
  • ISO/IEC TR 24027: Provides measurement techniques and metrics for assessing bias in AI-enabled decision-making; a brief illustrative sketch of such metrics follows this list.
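
To make this concrete, here is a minimal Python sketch of the kinds of metrics these standards formalise: standard classification measures (accuracy, precision, recall) alongside a simple group-fairness ratio. The toy data, the choice of disparate impact as the bias metric, and the four-fifths threshold referenced in the comments are illustrative assumptions of ours, not prescriptions of ISO/IEC DTS 4213 or TR 24027.

```python
# Minimal sketch of classification-performance and bias metrics.
# Pure Python; no external dependencies.
from typing import Sequence


def classification_metrics(y_true: Sequence[int], y_pred: Sequence[int]) -> dict:
    """Accuracy, precision and recall for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }


def disparate_impact(y_pred: Sequence[int], groups: Sequence[str],
                     protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates between two groups.

    Values below ~0.8 are often flagged under the informal
    "four-fifths rule" heuristic (an assumption of this sketch,
    not a threshold set by either standard).
    """
    def rate(g: str) -> float:
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)

    return rate(protected) / rate(reference)


# Toy labels, predictions and group membership for eight individuals.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(classification_metrics(y_true, y_pred))      # accuracy 0.75 here
print(disparate_impact(y_pred, groups, "b", "a"))  # 1.0 -> parity here
```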

4. Performance standards

These standards help establish thresholds, requirements and expectations that must be met for the satisfactory operation and use of an AI system:

  • IEEE 2937: Developed under the aegis of the Institute of Electrical and Electronics Engineers (IEEE), this standard establishes methodologies for assessing the performance of AI servers, server clusters and other AI high-performance computing (HPC) systems. In addition to providing guidance on performance testing, metrics and measurement, it also prescribes technical requirements for benchmarking tools; a rough illustrative sketch follows this list.
  • ISO/IEC AWI 27090: Currently under development, this standard seeks to provide guidance for organisations to address, detect and mitigate information security risks, threats and failures in AI systems.
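
As a loose, hypothetical illustration of what throughput benchmarking can look like in practice, the sketch below times a stand-in inference workload and reports batches per second. The workload (a pure-Python matrix multiply), the batch count and the reported metric are assumptions made for this sketch; IEEE 2937 itself specifies far more rigorous methodologies and tooling requirements.

```python
# Illustrative throughput benchmark in the spirit of IEEE 2937.
# The workload and batch count are assumptions of this sketch,
# not requirements of the standard itself.
import time


def run_inference_batch(size: int = 128) -> None:
    """Stand-in workload: multiply two size x size matrices."""
    a = [[float(i + j) for j in range(size)] for i in range(size)]
    b = [[float(i - j) for j in range(size)] for i in range(size)]
    _ = [
        [sum(a[i][k] * b[k][j] for k in range(size)) for j in range(size)]
        for i in range(size)
    ]


batches = 5
start = time.perf_counter()
for _ in range(batches):
    run_inference_batch()
elapsed = time.perf_counter() - start
print(f"Throughput: {batches / elapsed:.2f} batches/s ({elapsed:.2f}s total)")
```
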
[Table 1: Emerging technical standards mapped across data governance, cybersecurity and risk management]

Why adopt standards?

Standards serve as a fundamental framework for benchmarking and auditing systems and organisations, offering a means of conformity assessment before high-risk or high-impact AI systems are introduced into the market. Additionally, they facilitate post-market monitoring to evaluate system performance. This comprehensive approach provides assurance and benefits not only for AI system developers but also for consumers and individuals whose data is processed or impacted by these systems' decision-making.

How can Holistic AI help support conformity to standards?

It is natural for both businesses and consumers to harbour concerns when exploring AI solutions. To increase AI adoption and encourage innovation, businesses developing and using AI need assurance mechanisms to demonstrate responsible AI behaviour to regulators, consumers, and each other. To enable organisations to demonstrate the trustworthiness of their products, third-party audits and other conformity assessment processes are increasingly required. Holistic AI’s proprietary Governance, Risk and Compliance solution can help operationalise technical standards at scale to ensure AI systems continue to be developed and deployed responsibly.

We assist organisations in closing the trust divide through:

  1. AI Assessments: Through quantitative and qualitative assessments, we ensure the dependability of AI-driven products across five key verticals: efficacy, robustness, privacy, bias and explainability.
  2. Third-party Risk Management: Customised recommendations to manage and mitigate risks from third-party AI systems.
  3. Compliance: Assess compliance against applicable AI regulations and industry standards.

Schedule a call to find out more about how Holistic AI can help.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
