The regulation of artificial intelligence (AI) has become an urgent priority, with countries around the world proposing legislation aimed at promoting the responsible and safe application of AI and minimising the harms it can pose. However, while these initiatives all aim to regulate the same technology, there is some divergence in how they define AI – leaving the term lost in translation.
In this blog post, we survey how AI is defined by different regulatory initiatives and bodies, including the UK's Information Commissioner's Office, the EU AI Act, the OECD, Canada's Artificial Intelligence and Data Act, and California's proposed amendments to its employment regulations. We then analyse the commonalities and differences that set these definitions apart, centring our analysis on system outputs, the role of humans, autonomy, and the types of technology involved.
Key Takeaways:
- Regulatory initiatives around the world target the same technology but define AI in different ways.
- The definitions vary across four key themes: system outputs, the role of humans, autonomy, and the types of technology that characterise AI.
- There are early signs of convergence around the OECD definition, but a single standardised definition remains elusive.
The UK's Information Commissioner's Office (ICO), which regulates data protection, was among the first to issue guidance on regulating AI, with its draft guidance on AI auditing and its 2020 publication Explaining decisions made with AI. Co-authored with The Alan Turing Institute, the latter provides enterprises with a framework for selecting appropriate methods for increasing the explainability of systems based on the context in which they are used. Here, artificial intelligence is defined as:
“An umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated, or with a ‘human in the loop.’”
This definition does not specify the outputs of such systems or the role that humans play, and lacks clarity on which algorithm-based technologies fall under the scope of AI.
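To make the ICO's distinction between fully automated decisions and those with a human in the loop concrete, the hypothetical sketch below contrasts the two modes; all names, thresholds, and logic are our own illustrative assumptions, not drawn from the ICO guidance.

```python
# Illustrative contrast between the ICO's two decision modes.
# All function names and thresholds are hypothetical.

def model_score(application: dict) -> float:
    """A stand-in for the output of any algorithm-based technology."""
    return 0.8 if application.get("income", 0) > 30_000 else 0.3

def fully_automated_decision(application: dict) -> str:
    """Fully automated: the system's output is the final decision."""
    return "approve" if model_score(application) > 0.5 else "reject"

def human_in_the_loop_decision(application: dict, reviewer_approves: bool) -> str:
    """Human in the loop: a person reviews the suggestion and has the final say."""
    suggestion = fully_automated_decision(application)
    return suggestion if reviewer_approves else "escalate for manual review"

application = {"income": 45_000}
print(fully_automated_decision(application))           # -> "approve"
print(human_in_the_loop_decision(application, False))  # -> "escalate for manual review"
```

The distinction matters for regulation: under definitions like the ICO's, both modes count as decisions made using AI.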
With its proposal for harmonised rules on AI, the European Union is seeking to regulate AI using a risk-based approach, where requirements and penalties are proportional to the risk that a system poses. Likely to become the global gold standard for AI regulation, the EU AI Act, in the version put forward in the Czech Presidency's Draft General Approach, defines AI as:
“A system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”
More comprehensive than the ICO definition, the EU AI Act specifies the typical outputs of AI systems and acknowledges the role that humans play in providing data.
This text was then debated and revised ahead of a European Parliament vote on 26 April 2023. A political agreement was reached on 27 April, ahead of a key committee vote on 11 May 2023 in which the leading parliamentary committees adopted the revised text by majority vote. The revision tweaked the definition of AI to align more closely with the OECD's:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
While not identical to the OECD definition, the revised wording is more concise than its predecessor and captures many of the same elements.
The EU AI Act is not the only piece of legislation converging on the OECD definition; California policymakers have proposed regulation targeting deployers and developers of automated decision tools, which use AI to make consequential decisions. Here, AI is defined as:
“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing a real or virtual environment”.
This definition, therefore, is aligned with the shortened OECD definition of AI, indicating some progress towards greater standardisation of what constitutes AI. However, it omits the autonomy component present in the OECD definition; the wording is otherwise identical, making the alignment selective.
According to the Organisation for Economic Co-operation and Development (OECD)'s Recommendation of the Council on Artificial Intelligence, an AI system is defined as:
“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Similar to the EU AI Act, this OECD definition recognises the outputs of AI systems and notes that their objectives are human-defined, but it does not acknowledge the role of humans in providing data and is vague about what AI systems actually are.
The OECD AI Principles, however, provide a more comprehensive definition of AI compared to the Council's definition:
“A machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”
This lengthier definition distinguishes between the different processes involved in AI systems and, like the Council's definition, recognises that these systems can have various levels of autonomy. It also makes the important point that the analysis stage can be automated or conducted manually, as sketched below.
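The three stages the OECD describes can be made concrete with a minimal sketch. The Python below is a hypothetical illustration under our own assumptions (all names are invented, and the "model" is deliberately trivial): it perceives an environment as raw data, abstracts those perceptions into a model in an automated manner, and uses model inference to formulate an output.

```python
# A minimal illustration of the OECD AI Principles' three stages.
# All names are hypothetical; this is not a reference implementation.

def perceive(environment: list[list[float]]) -> list[list[float]]:
    """Stage (i): perceive a real or virtual environment as observations."""
    return [row for row in environment]

def abstract_to_model(observations: list[list[float]]) -> list[float]:
    """Stage (ii): abstract perceptions into a model through automated
    analysis. Here the 'model' is simply the per-feature mean; the OECD
    notes this step can also be performed manually."""
    n = len(observations)
    dims = len(observations[0])
    return [sum(row[d] for row in observations) / n for d in range(dims)]

def infer(model: list[float], new_input: list[float]) -> str:
    """Stage (iii): use model inference to formulate an option for an
    outcome - here, a prediction about a new input."""
    above = sum(x > m for x, m in zip(new_input, model))
    return "above average" if above > len(model) / 2 else "below average"

# Environment -> perceive -> abstract -> infer -> output (a prediction)
environment = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
model = abstract_to_model(perceive(environment))
print(infer(model, [4.0, 5.0]))  # -> "above average"
```

Even this toy pipeline produces an output (a prediction) for a given set of objectives, which is precisely what makes the boundary of these definitions so hard to draw.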
According to ISO/IEC 22989:2022, published by the International Organization for Standardization's (ISO) JTC 1/SC 42 technical committee, which defines AI-relevant terminology, artificial intelligence is:
“the research and development of mechanisms and applications of AI systems”
Where an AI system is defined as an:
“engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.”
While it is one of the shortest definitions and lacks specific examples of AI technology, the ISO definition draws on the key themes seen in other definitions, including the role of humans and the ability of AI systems to produce various outputs.
Adapting the definitions of AI put forward by ISO/IEC 22989:2022 and the OECD, NIST’s AI Risk Management Framework (AI RMF 1.0) defines an AI system as:
“an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”
While taking inspiration from the two sources, this definition is not as comprehensive as the one offered by the EU AI Act in that it does not explicitly outline the role of humans. However, the adaptation of existing definitions could be an early sign that there will be some convergence on how AI is defined in the future.
As part of its efforts to regulate AI, Canada has proposed the Digital Charter Implementation Act, which comprises three laws intended to increase trust and privacy concerning digital technologies. One of this trio, the Artificial Intelligence and Data Act (AIDA), defines AI as:
“A technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”
Going beyond the EU's definition, AIDA provides specific examples of the techniques AI systems can use and of their outputs.
As part of its efforts to address the use of automated decision systems in employment-related contexts, California has proposed amendments to its employment regulations. Here, a two-step approach is taken, with AI defined as:
“A machine learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
and machine learning is defined as:
“An application of Artificial Intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”
While this definition specifies system outputs, it is circular, defining AI as machine learning and machine learning as an application of AI.
In the most recent version of the text, AI is defined as:
“A machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
While machine learning is defined as:
“The ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
The revised definition, therefore, overcomes the circularity of the previous one, although it still limits AI to machine learning, even though there are applications of AI that do not rely on machine learning.
The European Commission’s High-Level Expert Group on AI (AI-HLEG) defines AI systems, for the purpose of its deliverables, as:
“systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal.”
The document also adds that such systems can be designed to learn to adapt their behaviour based on how their previous actions affect the environment they operate in. It further notes that AI can include approaches such as machine learning (e.g., deep learning and reinforcement learning), machine reasoning (e.g., planning, scheduling, knowledge representation and reasoning, search, and optimisation), and robotics (e.g., control, perception, sensors, and actuators).
Taking a more succinct approach to defining AI than the AI-HLEG, the Council of Europe asserts that AI:
“brings together sciences, theories and techniques (including mathematical logic, statistics, probabilities, computational neurobiology and computer science) and whose goal is to achieve the imitation by a machine of the cognitive abilities of a human being.”
This definition is unique in its depiction of AI as multidisciplinary, which it often is. However, given that the goal of AI here is to mimic human cognitive abilities, this approach is more aligned with the concept of artificial general intelligence.
According to UNESCO's Recommendation on the Ethics of Artificial Intelligence, AI systems are:
“Information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. AI systems are designed to operate with varying degrees of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations. AI systems may include several methods, such as but not limited to: (i) machine learning, including deep learning and reinforcement learning; (ii) machine reasoning, including planning, scheduling, knowledge representation and reasoning, search, and optimization”
Although it is among the lengthiest and provides specific examples of AI technologies that others do not, this definition does not comment on the role that humans play in relation to AI, focusing instead on how the systems generate outputs using modelling.
Effective 1 July 2023, Connecticut Senate Bill 1103 will regulate the use of AI by state agencies, requiring an annual inventory of such systems and an assessment of their impact before they are introduced. In another lengthy definition, AI is defined here as:
“(A) an artificial system that (i) performs tasks under varying and unpredictable circumstances without significant human oversight or can learn from experience and improve such performance when exposed to data sets, (ii) is developed in any context, including, but not limited to, software or physical hardware, and solves tasks requiring human-like perception, cognition, planning, learning, communication or physical action, or (iii) is designed to (I) think or act like a human, including, but not limited to, a cognitive architecture or neural network, or (II) act rationally, including, but not limited to, an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communication, decision-making or action, or (B) a set of techniques, including, but not limited to, machine learning, that is designed to approximate a cognitive task”
With some similarities to the UNESCO definition, SB 1103 addresses the use of AI for cognitive tasks (or knowledge representation and reasoning) but, echoing the Council of Europe definition, arguably extends the scope to technologies such as artificial general intelligence with its mention of "human-like perception, cognition, planning, learning, communication or physical action".
However, despite its length, it fails to note that AI can have various levels of autonomy, is vague about the types of technologies that AI can use, and does not outline the role that humans play in AI.
Defining what AI is and what it is not is a daunting task; across the world, policymakers, academics, and technologists seem to be at a standstill in establishing a single sufficient definition of artificial intelligence. From humans in the loop to the data sets used to make predictions, clearly defined terms are needed now more than ever to adopt a practical approach to AI governance. While earlier work focused on technical specifications and more recent efforts take a more conceptual approach, the definitions broadly comprise four key themes: system outputs, the role of humans, autonomy, and the type of technology that characterises AI.
Most of the definitions note that the outputs of AI systems are typically predictions, recommendations, and decisions, with the EU AI Act and AIDA adding content as an output; the ICO definition, however, fails to specify any outputs at all. Thus, with the exception of the ICO, the outputs of AI systems are something that is broadly agreed on.
As with the outputs of AI systems, the ICO definition does not acknowledge the role that humans play in the functioning of AI systems. The other definitions, however, note two key roles for humans: providing the data (and inputs) for the models and defining the models' objectives.
Something touched on by almost all of the definitions is the automation associated with the use of AI systems, although the way this automation is described varies. The OECD definition posits that systems can have varying levels of autonomy, a sentiment shared by AIDA, which states that systems can be fully or partly autonomous. The EU AI Act, on the other hand, seemingly proposes that AI systems can only be partially autonomous, not fully. Throwing a curveball, the ICO definition uses the term automated rather than autonomous, noting that AI systems can either be fully automated or have a human in the loop. The California definition, in contrast, fails to mention autonomy at all.
The greatest divergence between the definitions centres on the types of technologies that fall under the scope of AI. While the ICO and OECD definitions simply define AI as algorithm-based technologies and machine-based systems respectively, AIDA's definition is more extensive, qualifying technological systems that use genetic algorithms, neural networks, machine learning, or other techniques as AI.
California's proposed amendments initially defined AI as a machine learning system and machine learning as an application of AI, producing a circular definition. The EU AI Act similarly falls short: Annex I lists three categories of modern AI techniques – machine learning, symbolic approaches, and statistics. The Act's breadth has already caused dismay among statisticians, who had no idea they were deploying "AI" all along. While simple classification methods are covered by Annex I (see the sketch below), intelligence is surely more than a catalogue of techniques.
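To see why statisticians were dismayed, consider the hedged sketch below: a plain logistic regression, a decades-old statistical method, learns from data and produces predictions, and so plausibly satisfies Annex I-style definitions. The data and scenario are invented for illustration, and the example assumes scikit-learn is available.

```python
# A deliberately ordinary statistical model that nonetheless fits most
# regulatory definitions of AI: it learns from data and outputs predictions.
# Data and scenario are invented; requires scikit-learn.

from sklearn.linear_model import LogisticRegression

# Human-provided data and inputs: hours studied vs. pass/fail outcomes.
X = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
y = [0, 0, 0, 1, 1, 1]

# Textbook statistics - yet arguably a 'statistical approach' under Annex I.
model = LogisticRegression().fit(X, y)

# A 'prediction influencing a real or virtual environment': for example,
# a decision about a student, exactly the kind of consequential output
# these definitions are written to capture.
print(model.predict([[3.5]]))        # predicted class (0 or 1)
print(model.predict_proba([[3.5]]))  # predicted probabilities
```

Whether such a model should be regulated as AI is exactly the line-drawing problem the definitions above are wrestling with.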
Moreover, a definition that needs to be kept up to date as technology evolves is the opposite of future-proof. Unsurprisingly, this lack of certainty makes reaching a standardised definition of AI nearly impossible. For example, the U.S. Office of Science and Technology Policy has made regulating automated systems a top federal priority to "protect citizens from the harms associated with artificial intelligence", yet its AI Bill of Rights neglects even to define the term. At the core of these definitions lies only a murky agreement on what exactly constitutes AI, with no unified conceptualisation of the term or of the objectives that embody its function.
What is clear is that all five pieces of legislation are roughly saying the same thing: AI is a form of automated, human-defined intelligence. The problem is that we have little to no understanding of what intelligence actually is. One reason for this may be the philosophical debates and legal uncertainties around the word intelligence, not to mention the lack of unanimity around defining 'artificial intelligence', which leaves most definitions of AI in the academic literature rather vague.
To support efforts to differentiate the various applications of AI and types of AI systems, OECD.AI experts have developed a Framework for the Classification of AI Systems based on the type of model used, data and input, output, economic context, and impact on humans, converging with the themes identified above. However, the framework is best suited to AI systems with specific applications rather than broader systems.
To conclude, it seems that, for now, we can agree that defining the term artificial intelligence is complex. With varied definitions and a range of interpretations, attempts by regulators to nail down a clear meaning with absolute precision may prove futile. However, for any useful discussion to occur, we need to begin with a common understanding of the term. Until then, we remain lost in transl(A)t(I)on.
Written by Ayesha Gulley, Public Policy Associate at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.