As organizations begin to grapple with the range of emerging AI regulations, differing definitions of AI in legislation can be puzzling. In this blog post, we survey how multiple institutions and laws define AI, including the Information Commissioner’s Office, the EU AI Act, the OECD, Canada’s Artificial Intelligence and Data Act, and California’s proposed amendments to its employment regulations.
For AI and GRC leaders, the conceptual backdrop of emerging legislation is useful for understanding which regulations may apply to your organization and which technologies may be implicated. Though many rules have already taken effect, AI regulation is still in its earliest days, and it is important to stay on top of this ever-changing landscape.
The UK’s Information Commissioner’s Office (ICO), which regulates data protection, was among the first to issue guidance on regulating AI, with its draft guidance on AI auditing and its 2020 publication Explaining decisions made with AI. Co-authored with The Alan Turing Institute, the latter publication provides enterprises with a framework for selecting appropriate methods for increasing the explainability of systems based on the context in which they are used. Here, artificial intelligence is defined as:
“An umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated, or with a ‘human in the loop.’”
This definition does not specify any outputs of the systems or the role of humans and lacks clarity on the algorithm-based technologies that fall under the scope of AI.
With its proposal for harmonized rules on AI, the European Union is seeking to regulate AI using a risk-based approach, where requirements and penalties are proportional to the risk that a system poses. Likely to become the global gold standard for AI regulation, the Act’s Draft General Approach, adopted on December 6, 2022, defines AI as:
“A system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”
More comprehensive than the ICO definition, this EU AI Act definition specifies the typical outputs of AI systems and acknowledges the role that humans play in providing data.
This text was then debated and revised ahead of a European Parliament vote on April 26, 2023. A political agreement was reached on April 27, ahead of a key committee vote on May 11, 2023, where leading parliamentary committees accepted the adopted version of the text by majority vote. The revised text tweaked the definition of AI to align more closely with the OECD definition, examined below:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
While not identical to the OECD definition, the revised definition is more concise than the previous one and captures many of the same elements as the OECD definition.
However, in the latest version of the text that was endorsed by Coreper, the definition of an AI system has once again been revised and is now:
“a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The key addition to this definition is the concept of AI systems exhibiting adaptiveness, which could limit the scope to dynamic systems that are constantly learning and adapting and exclude static systems that are updated periodically.
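As a toy illustration of this distinction (a sketch with illustrative names, not drawn from the Act itself), consider a "static" predictor whose parameters are frozen at release versus an "adaptive" one that keeps updating from feedback after deployment:

```python
# Toy sketch (illustrative names, not from any regulation) contrasting a
# "static" system, updated only via periodic releases, with an "adaptive"
# system that keeps learning after deployment.
class StaticPredictor:
    """Parameters are frozen at release; changing them requires a new version."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x


class AdaptivePredictor:
    """Exhibits adaptiveness: each observation nudges the model's parameter."""
    def __init__(self, weight, lr=0.1):
        self.weight = weight
        self.lr = lr

    def predict(self, x):
        return self.weight * x

    def observe(self, x, y):
        # Online gradient step on squared error: the system keeps learning
        # from post-deployment feedback.
        error = self.predict(x) - y
        self.weight -= self.lr * error * x


static = StaticPredictor(weight=0.5)
adaptive = AdaptivePredictor(weight=0.5)
for _ in range(100):
    adaptive.observe(1.0, 2.0)  # feedback after deployment: true outcome is 2

print(static.predict(1.0))        # unchanged until a manual re-release: 0.5
print(round(adaptive.weight, 2))  # has drifted toward 2.0 from feedback alone
```

Under a definition that hinges on adaptiveness, only something like the second system clearly qualifies; the first changes behaviour only through periodic updates.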
The EU AI Act is not the only piece of legislation converging on the OECD definition; California policymakers have proposed regulation targeting deployers and developers of automated decision tools, which use AI to make consequential decisions. Here, AI is defined as:
“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing a real or virtual environment”.
This definition is therefore aligned with the shortened OECD definition of AI, indicating some progress towards greater standardisation of what constitutes AI. However, it omits the autonomy component present in the OECD definition; it is otherwise identical.
According to the Organisation for Economic Co-operation and Development (OECD)’s Council on Artificial Intelligence, an AI system is defined as:
“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Similar to the EU AI Act, this OECD definition recognises the outputs of AI systems, but it does not acknowledge the role of humans and is vague about what AI systems actually are.
The OECD AI Principles, however, provide a more comprehensive definition of AI compared to the Council's definition:
“A machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”
This lengthier definition distinguishes between different processes involved in AI systems and, like the original definition, also recognises that these systems can have various levels of autonomy. It also makes an important note that analysis can be automated or conducted manually.
However, as of November 2023, the OECD has updated its definition of AI, with the updated definition also likely to be mirrored by the AI Act once finalised. The OECD AI Principles now define AI as:
“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Key changes to the definition include making explicit that objectives can be implicit or explicit and that outputs are inferred from inputs. Importantly, the definition now removes any mention of human influence.
According to ISO/IEC 22989:2022, published by the International Organization for Standardization’s (ISO) JTC 1/SC 42 technical committee, which defines AI-relevant terminology and was recently made publicly available, artificial intelligence is:
“The research and development of mechanisms and applications of AI systems”
Where an AI system is defined as an
“Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.”
While this is one of the shortest definitions and lacks specific examples of AI technology, the ISO definition draws on key themes seen in other definitions, including the role of humans and the possibility for AI systems to produce various outputs.
Adapting the definitions of AI put forward by ISO/IEC 22989:2022 and the OECD, NIST’s AI Risk Management Framework (AI RMF 1.0) defines an AI system as:
“An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
While taking inspiration from the two sources, this definition is not as comprehensive as the one offered by the EU AI Act in that it does not explicitly outline the role of humans. However, the adaptation of existing definitions could be an early sign that there will be some convergence on how AI is defined in the future.
Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was published by the Biden Administration on October 30, 2023, directing multiple agencies and government departments to develop rules and regulations and introduce laws to promote responsible AI in the US. Beginning in the same way as the OECD Council on AI’s definition, the executive order defines AI as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
The definition also borrows part of the pre-November 2023 OECD AI principles definition, demonstrating the widespread influence of the definitions proposed by the OECD and hinting at the potential for increased convergence on a single definition.
As part of its efforts to regulate AI, Canada has introduced the Digital Charter Implementation Act, which comprises three laws intended to increase trust and privacy concerning digital technologies. Among this trio of laws, the Artificial Intelligence and Data Act (AIDA) defines AI as:
“A technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”
Going beyond the EU’s definition, AIDA provides specific examples of AI systems and their outputs.
As part of its efforts to address the use of automated-decision systems in employment-related contexts, California has proposed amendments to its employment regulations. The second version of these amendments takes a two-step approach, first defining AI as:
“A machine learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
And then defining machine learning as:
“An application of Artificial Intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”
While this definition specifies system outputs, it is circular, defining AI as machine learning and machine learning as an application of AI.
In the most recent version of the text, AI is defined as:
“A machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
While machine learning is defined as:
“The ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
The revised definition therefore overcomes the circularity of the earlier one, although it still limits AI to machine learning, even though there are approaches to AI that do not rely on machine learning.
The European Commission’s High-Level Expert Group on AI (AI-HLEG) defines AI systems, for the purpose of its deliverables, as:
“Systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal.”
The document also adds that the systems can be designed in a way that allows them to learn to adapt their behaviour based on how their previous actions affect the environment they operate in. It is also noted that AI can include approaches such as machine learning (e.g., deep learning and reinforcement learning), machine reasoning (e.g., planning, scheduling, knowledge representation, and reasoning, search, and optimisation), and robotics (e.g., control, perception, sensors and actuators).
Taking a more succinct approach to defining AI compared to the AI-HLEG, the Council of Europe asserts that AI
“brings together sciences, theories and techniques (including mathematical logic, statistics, probabilities, computational neurobiology and computer science) and whose goal is to achieve the imitation by a machine of the cognitive abilities of a human being.”
This definition is unique in its depiction of AI as being multidisciplinary, which it often is. However, given that the goal of AI here is to mimic human cognitive abilities, this approach is more aligned with the concepts of Artificial General Intelligence.
Introduced in May 2023, Brazil’s so-called AI Bill, Bill 2338, bears significant similarities to the EU AI Act, including a risk-based approach. Translated from Portuguese, an AI system is defined as:
“a computational system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data from machines or humans, with the aim of producing predictions, recommendations or decisions that may influence the virtual or real environment”.
Although the approach of the Bill does echo the AI Act, there is some divergence in how AI is defined, with Brazil’s bill defining AI as a computational system in contrast to a machine-based system. However, the remainder of the definition does share many commonalities with the OECD and AI Act definitions.
According to UNESCO’s publication of their Recommendation on the Ethics of Artificial Intelligence, AI systems are:
“Information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. AI systems are designed to operate with varying degrees of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations. AI systems may include several methods, such as but not limited to: (i) machine learning, including deep learning and reinforcement learning; (ii) machine reasoning, including planning, scheduling, knowledge representation and reasoning, search, and optimization”
Although this definition is among the lengthiest and provides some specific examples of AI technologies that others do not, it does not comment on the role that humans play in relation to AI, instead focusing on how the systems generate outputs using modelling.
Effective July 1, 2023, Connecticut Senate Bill 1103 regulates the use of AI by state agencies, requiring an annual inventory of AI systems and assessments of their impact before they are introduced. With another lengthy definition, AI is here defined as:
“(A) an artificial system that (i) performs tasks under varying and unpredictable circumstances without significant human oversight or can learn from experience and improve such performance when exposed to data sets, (ii) is developed in any context, including, but not limited to, software or physical hardware, and solves tasks requiring human-like perception, cognition, planning, learning, communication or physical action, or (iii) is designed to (I) think or act like a human, including, but not limited to, a cognitive architecture or neural network, or (II) act rationally, including, but not limited to, an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communication, decision-making or action, or (B) a set of techniques, including, but not limited to, machine learning, that is designed to approximate a cognitive task.”
With some similarities to the UNESCO definition, SB 1103 addresses the use of AI for cognitive tasks (or knowledge representation and reasoning), but its mention of “human-like perception, cognition, planning, learning, communication or physical action” arguably extends the scope to technologies such as artificial general intelligence, echoing the Council of Europe definition.
However, despite its length, it fails to note that AI can have various levels of autonomy, is vague about the types of technologies that AI can use, and does not outline the role that humans play in AI.
Defining what AI is and what it is not is a daunting task: across the world, policymakers, academics, and technologists seem to be at a standstill in establishing a single sufficient definition of artificial intelligence. From human-in-the-loop requirements to the data sets used to make predictions, clearly defined terms are needed now more than ever to adopt a practical approach to AI governance. Earlier work focused on technical specifications, while more recent approaches are more conceptual.
Broadly, the definitions we’ve covered above comprise four key themes: system outputs, the role of humans, autonomy, and the type of technology that characterizes AI.
While most of the definitions note that the outputs of AI systems are typically predictions, recommendations, and decisions, with the EU AI Act and AIDA adding content as an output, the ICO definition fails to specify any outputs at all. Thus, with the exception of the ICO, the outputs of AI systems are broadly agreed upon.
As with the outputs of AI systems, the ICO definition does not acknowledge the role that humans play in the functioning of AI systems. The other definitions note two key roles that humans play: providing the data (and inputs) for the models and defining the objectives of the models.
Something that is touched on by almost all of the definitions is the automation associated with the use of AI systems. However, the way that this automation is described varies between each definition.
The OECD definition posits that systems can have varying levels of autonomy, a sentiment that is shared by AIDA, which states that systems can be fully or partly autonomous. On the other hand, the EU AI Act seemingly proposes that AI systems can only be partially autonomous, not fully. Throwing a curveball, the ICO definition uses the term automated rather than autonomous and notes that AI systems can be either fully automated, or can have elements of human in the loop. In contrast, the California definition fails to even mention autonomy.
The greatest divergence in the definitions is centred around the types of technologies that fall under the scope of AI. While the ICO and OECD definitions simply define AI as algorithm-based technologies and machine-based systems respectively, AIDA’s definition is more extensive, qualifying technological systems that use genetic algorithms, neural networks, machine learning, or other techniques as AI.
California’s proposed amendments define AI as a machine learning system and machine learning as an application of AI, providing a circular definition. The EU AI Act similarly falls short: Annex I lists three categories of modern AI techniques (machine learning, symbolic approaches, and statistics), a list that has already caused dismay among statisticians, who had no idea they were deploying “AI” all along. While simple classification methods are covered by Annex I, intelligence is not just about cataloguing techniques, and a definition that needs to be kept up to date as technology evolves is the opposite of future-proof.

Unsurprisingly, this lack of certainty makes reaching a standardized definition of AI nearly impossible. For example, the U.S. Office of Science and Technology Policy has made regulating automated systems a top federal priority to “protect citizens from the harms associated with artificial intelligence”, yet the AI Bill of Rights neglects to even define the term. At the core of these definitions is a murky agreement on what exactly constitutes AI, missing a unified conceptualization of the term together with the objectives that embody its function.
What is clear is that these pieces of legislation are roughly saying the same thing: AI is a form of automated, human-defined intelligence. One problem here is that we have little to no understanding of what intelligence actually is. This challenge builds on philosophical debates and legal uncertainties around the word intelligence, not to mention the ambiguity around defining ‘artificial intelligence’, which makes most definitions of AI in the academic literature rather vague.
To support efforts to differentiate various applications of AI and types of AI systems, the OECD.AI experts have developed a Framework for Classifying AI Systems based on the type of model used, data and input, output, economic context, and impact on humans, converging with the themes identified above. However, the framework is best suited to AI systems with specific applications rather than broader systems. Further, although the revised definition under the EU AI Act and California Assembly Bill 331 have converged on the OECD Council definition, neither has fully adopted it.
Regardless of differences in definitions, it is important for organizations to stay on top of regulations in their jurisdictions (or wherever their AI is used). AI is being built into an ever-wider array of products and services, and systems for inventorying AI uses and tracking news on regulations and lawsuits are of utmost importance to both AI and GRC professionals.
To explore what more robust AI tracking and governance looks like for your team, be sure to book a time with our AI policy and machine learning experts.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts