In the rapidly evolving ecosystem of artificial intelligence (AI), the development and establishment of AI standards has become a pressing necessity. These standards provide a set of common guidelines, principles and technical specifications for the development, deployment and governance of AI systems. By providing comprehensive baselines for reliability, ethics and trustworthiness, standards can help promote transparency, mitigate risks and address concerns relating to fairness, privacy and accountability. They also play a coordinating role in the increasingly complex AI governance landscape by facilitating collaboration, consensus and dialogue between stakeholders in the ecosystem.
The need for standards in the AI governance landscape has become increasingly evident given the growth of AI regulation across the globe, particularly in the European Union, the United Kingdom and the United States. For example, the EU's AI Act seeks to achieve its objectives on AI trustworthiness, accountability, risk management and transparency, among others, through the adoption of technical and process standards. This is evident in Articles 40 and 41 of the legislation's compromise text, which provide for the development of harmonised standards and, where harmonised standards are absent or not applicable, common specifications.
Further, regulatory language on AI governance differs across jurisdictions, resulting in a lack of global alignment and consensus on crucial aspects such as AI taxonomy, governance mechanisms, assessment methodologies, and measurement. This lack of harmonisation underscores the urgent need for standardisation. Establishing clear and universally accepted standards can enable a more coherent and consistent approach to governing AI technologies, mitigating risks, and fostering responsible and ethical AI development and deployment.
Standardisation is a multi-stakeholder endeavour, requiring rounds of iteration by technical committees composed of experts to develop consensus-driven standards. These deliberative processes are facilitated by standards development organisations (SDOs), which can have global as well as regional or national remits. Examples of global SDOs include the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC), while regional and national SDOs include the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI) in the European Union, the National Institute of Standards and Technology (NIST) in the United States, and the British Standards Institution (BSI) in the United Kingdom.
Here, we provide an overview of the key types of technical standards for AI governance, risk and compliance (GRC), with notable examples drawn primarily from the ISO:
These standards help build a common language, terminology and taxonomy for foundational concepts, thereby facilitating dialogue between stakeholders. Serving as building blocks, they pave the way for the development of process, performance, and other more specific forms of standardisation. Preeminent foundational standards include:
These help create the organisational architecture for the responsible development and deployment of AI systems by universalising the adoption of best practices in management, process design, quality control and governance. Certain process standards are considered "certifiable", meaning that organisations can undergo independent assessments to determine whether they adhere to the prescribed good practice and subsequently obtain certification against that specific standard. Notable process standards for AI systems include:
These standards help provide universal mechanisms and terminologies for measuring different aspects of an AI system's performance. They are particularly crucial, as the development and efficacy of trustworthy AI systems depend primarily on defensible measurement methods and mechanisms (a simple illustrative sketch follows the next category):
These help establish thresholds, requirements and expectations that must be met for the satisfactory operation and use of an AI system:
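To make the measurement and performance categories above concrete, the sketch below computes a simple fairness metric (the disparate impact ratio) and checks it against a minimum threshold. The metric choice, function names, and the 0.8 cut-off (echoing the commonly cited "four-fifths rule") are illustrative assumptions on our part, not requirements of any particular standard.

```python
# Illustrative sketch only: a measurement-style metric (disparate impact
# ratio) checked against a performance-style threshold. The 0.8 cut-off
# mirrors the commonly cited "four-fifths rule" and is an assumption for
# the example, not a requirement of any ISO or other standard.

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of favourable-outcome rates between two groups (1.0 = parity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def meets_threshold(ratio, threshold=0.8):
    """Performance-style check: does the measured ratio clear the threshold?"""
    return ratio >= threshold

if __name__ == "__main__":
    # 1 = favourable decision, 0 = unfavourable, for two demographic groups
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% favourable
    group_b = [1, 0, 0, 1, 0, 1, 0, 1]  # 50% favourable
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")         # 0.67
    print("PASS" if meets_threshold(ratio) else "FAIL")   # FAIL
```

The value of a measurement standard lies precisely in pinning down how such a metric is defined and computed, so that results are comparable across systems, audits and jurisdictions.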
Standards serve as a fundamental framework for benchmarking and auditing systems and organisations, offering a means of conformity assessment before high-risk or high-impact AI systems are introduced to the market. They also facilitate post-market monitoring of system performance. This comprehensive approach provides assurance and benefits not only for AI system developers but also for consumers and for individuals whose data is processed, or who are impacted, by these systems' decision-making.
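By way of illustration, a conformity assessment can be thought of as a set of measured values checked against agreed thresholds, run before a system enters the market and re-run during post-market monitoring. The requirement names and figures in the sketch below are invented for the example and are not drawn from any published standard.

```python
# Illustrative sketch only: a conformity-assessment-style report. Each entry
# pairs a measured value with a hypothetical minimum threshold; the
# requirement names and figures are invented for the example and do not
# come from any published standard.

REQUIREMENTS = {
    # requirement: (measured_value, minimum_threshold)
    "accuracy": (0.91, 0.85),
    "disparate_impact_ratio": (0.67, 0.80),
    "uptime": (0.999, 0.995),
}

def conformity_report(requirements):
    """Return (overall_pass, per-requirement pass/fail) for a set of checks."""
    results = {name: measured >= minimum
               for name, (measured, minimum) in requirements.items()}
    return all(results.values()), results

if __name__ == "__main__":
    passed, results = conformity_report(REQUIREMENTS)
    for name, ok in results.items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    print("Overall conformity:", "PASS" if passed else "FAIL")
```

In a real conformity assessment, the thresholds would come from the applicable standard or regulation, and repeating the same checks over time is what enables meaningful post-market monitoring.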
It is natural for both business owners and consumers to harbour concerns when exploring AI solutions. To increase AI adoption and encourage innovation, businesses developing and using AI need assurance mechanisms to demonstrate responsible AI behaviour to regulators, consumers, and each other. To enable organisations to demonstrate the trustworthiness of their products, third-party audits and other conformity assessment processes are increasingly required. Holistic AI's proprietary Governance, Risk and Compliance solution can help operationalise technical standards at scale to ensure AI systems continue to be developed and deployed responsibly.
We assist organisations in closing the trust divide through:
Schedule a call to find out more about how Holistic AI can help.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.