With the expansion of computational power over the last decade, artificial intelligence (AI) models have gained ever more ground across industry and academia. A 2021 article signed by a prominent group of researchers (including Adriano Koshiyama and Emre Kazim) argued that we are entering the Age of Algorithms (whether AI, machine learning, or similar). It follows that we are increasingly close to the Age of AI Economies, in which work processes, the way different markets are organised, the consumption patterns of economic agents, and the way economic phenomena occur and are analysed are all permeated by algorithms whose impacts remain largely unknown.
Measuring an algorithm's efficacy gives important insights into how well a model is working and whether it does what it is designed to do. In this blog post, we give an overview of different metrics that can be used to measure the performance of classification and regression systems.
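As a minimal illustration of the kind of classification metrics covered in the post, the sketch below computes accuracy, precision, and recall from hypothetical true and predicted labels (plain Python; the label data is made up for the example):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    # Fraction of all predictions that are correct
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    # Of the items predicted positive, how many really are positive
    tp, fp, _, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(y_true, y_pred):
    # Of the truly positive items, how many were found
    tp, _, fn, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fn) if (tp + fn) else 0.0
```

In practice a library such as scikit-learn provides these metrics directly; the point here is only to show what each one measures.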
The European Commission's proposed harmonised rules on artificial intelligence, the EU AI Act, aim to regulate AI systems on the EU market to create greater trust. Spain has not contributed to the development of the Act but has launched the first regulatory sandbox to experiment with its obligations. Spain has also published a National AI Strategy, announced Europe's first AI Supervisory Agency, and targeted platform-based worker rights with its Rider Law.
With artificial intelligence (AI) being increasingly used in high-stakes applications, such as in the military, for recruitment, and for insurance, there are growing concerns about the risks that this can bring. This is because algorithms can introduce novel sources of harm, where issues such as bias can be amplified and perpetuated by the use of AI. As such, recent years have seen a number of controversies around the misuse of AI, which have affected a range of sectors.
Today, artificial intelligence (AI) is increasingly present in our lives and becoming a fundamental part of many systems and applications. However, like any technology, it is important to ensure that AI-based solutions are trustworthy and fair. That's where the Holistic AI library comes in. In this blog post, we provide an overview of Holistic AI's bias analysis and mitigation framework, defining bias and how it can be mitigated, before giving an overview of the bias metrics and mitigation methods available in the Holistic AI library.
OpenAI's GPT-4 can now process image-based prompts in addition to text-based ones, although the output is still text-based for now. While OpenAI has implemented ethical safeguards, there are still risks in using GPT. Check out our most recent blog on the dangers.
On 15-16 February 2023, the first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM) was held in the Netherlands. The US used the summit as an opportunity to put forth their “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” In this blog, we look at the US's latest development in promoting the adoption of responsible AI in the military, before briefly discussing military investment in the US and China and the macro-level implications of AI in military capabilities, making a case for the growing importance of risk management and auditable methodologies.
In an attempt to regulate the use of artificial intelligence (AI) by State Agencies, Connecticut lawmakers have taken decisive action and proposed SB-1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy. The Proposed Bill aims to establish an Office of Artificial Intelligence and a task force to study artificial intelligence and develop an AI Bill of Rights.
Game Theory was an intellectual advance developed at the end of World War II, with mathematicians as its initial main contributors. Over time, researchers from other areas, notably economists and political scientists, began to adopt this theoretical framework. The Shapley value describes a method for distributing the total gain among players when they all collaborate in a specific coalition strategy. SHAP (SHapley Additive exPlanations) values attribute to each feature the change in the expected model prediction when conditioning on that feature.
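To make the idea concrete, here is a small sketch (plain Python, with a hypothetical two-player game) that computes exact Shapley values by averaging each player's marginal contribution over all orderings — the same averaging that SHAP approximates when the "players" are a model's features:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering. Only feasible for small player sets, since the
    number of orderings grows factorially."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p given who joined before it
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical coalition game: each coalition maps to its total gain
game = {
    frozenset(): 0,
    frozenset({"A"}): 1,
    frozenset({"B"}): 2,
    frozenset({"A", "B"}): 4,
}
phi = shapley_values(["A", "B"], lambda s: game[s])  # {'A': 1.5, 'B': 2.5}
```

Note that the attributions sum to the value of the full coalition (1.5 + 2.5 = 4), the efficiency property that makes Shapley values attractive for explaining model predictions.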
While many regulatory efforts surrounding the use of AI are focused on business applications, the increasing use of AI within the public sector has led to an increased focus on regulating governmental use of AI. In this blog post, we give a high-level overview of some of these initiatives, focusing on the US, EU, and UK.
As artificial intelligence (AI) becomes more prevalent in various industries, it is crucial that all stakeholders are equipped to comprehend and articulate the outcomes produced by AI models. This understanding process must be clear and transparent at different dimensions to ensure that the results generated are ethical, unbiased, and trustworthy.
Artificial Intelligence (AI) risk management is an iterative process that requires an understanding of the risks associated with AI systems and the best practices for managing them. The key steps for implementing a successful AI risk management strategy include: identifying and assessing risks, implementing a risk management plan, and monitoring development. It is important to identify and mitigate AI risks to ensure a successful implementation of AI technologies and gain a competitive edge.
At the heart of this technology lies the innovative Transformer architecture, a deep learning model that has redefined the way we process natural language text thanks to its remarkable efficiency. In this article, we dive into the details of the Transformer, exploring its history of modification and improvement. By the end, you'll have a solid grasp of the cutting-edge technology driving today's language models.
In this blog post, we compare New York City Local Law 144, which will require independent bias audits of automated employment decision tools from the 15th of April 2023, and the New Jersey Assembly Bill 4909, which has been recently introduced and will have similar requirements if passed.
The use of large language models (LLMs) such as Galactica, ChatGPT, and BARD have seen significant growth over the past few months. These models are becoming increasingly popular and are being integrated into various aspects of daily life, ranging from grocery lists to helping to write Python code. As with any novel technology, it is essential for society to understand the limitations, possibilities, biases, and regulatory issues brought about by these tools.
Within business organisations, human resources (HR) teams have been at the forefront of innovating their business practices through operationalising emerging technologies such as artificial intelligence (AI) and incorporating them into their talent sourcing and talent management practices.
The European Union’s (EU’s) proposed AI Act aims to harmonise requirements for AI systems across the EU with its risk-based approach. Ahead of this, some countries, like the Netherlands, are pressing ahead with specific national requirements. The Dutch approach is shaped by the scandal around a biased algorithm that their tax office used to assess benefits claims. The tax office implemented the system in 2013 and, after civil society raised concerns, two formal investigations in 2020 and 2021 uncovered systematic bias affecting 1.4 million people.
In 2022, China passed and enforced three distinct regulatory measures on the national, regional and local levels. This momentum has carried into 2023, where in January alone, China has already cracked down on deepfake technology through a national level legislation. In this blog, we survey China’s AI regulatory endeavours, highlighting interesting enforcement mechanisms and comment on China’s approach to AI governance.
AI systems are becoming increasingly integrated into our daily lives and are being used to make high-stakes decisions that can have significant implications for an individual’s life chances. Therefore, there are increasing calls to ensure that there is transparency about the capabilities of AI systems and that their outputs are explainable. In this blog post, we discuss what is meant by AI transparency and explainable AI and how they can be implemented through governance and technical approaches.
The National Institute of Standards and Technology (NIST), one of the leading voices in the development of artificial intelligence (AI) standards, launched the first version of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) on 26 January 2023. Underpinning the AI RMF is a focus on moving beyond computational metrics and instead focusing on the socio-technical context of the development, deployment, and impact of AI systems. We sat down with NIST to discuss the AI RMF and learn about their vision for how it can be implemented.
On 31 January 2023, the Equal Employment Opportunity Commission (EEOC) held a public hearing on how the use of automated systems, including artificial intelligence, in employment decisions can comply with the federal civil rights laws the EEOC enforces. Check out our latest blog article on the key takeaways from the hearing, including how to promote diversity, equity and inclusion.
Speech technology is widely used and has many applications, including automatic speech recognition (ASR) for voice control of devices and accessing information. However, ASR systems can be fragile and biased, disproportionately affecting certain groups. This post explores the metrics used to measure ASR bias and recommends datasets to consider.
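A common ingredient of ASR bias metrics is the word error rate (WER), which can be compared across demographic groups to reveal performance gaps. As a minimal sketch (plain Python, hypothetical transcripts), WER can be computed as word-level edit distance divided by the reference length:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)
```

A simple group-level bias measure would then be the difference (or ratio) between the average WER for one demographic group's recordings and another's.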
In recent years, the fairness of automated employment decision tools (AEDTs) has received increasing attention. In November 2021, the New York City Council passed Local Law 144, which mandates bias audits of these systems.
Under Local Law 144, employers and employment agencies are required to commission independent, impartial bias audits of their tools, where, under the latest version of the Department of Consumer and Worker Protection’s (DCWP) proposed rules, bias should be determined using impact ratios based on outcomes for different subgroups. In this blog post, we outline the metrics required to conduct the bias audit, how small sample sizes can pose issues, and how they can be dealt with when carrying out audits.
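As an illustration of the impact-ratio approach described above (a simplified sketch with hypothetical outcome data, not the DCWP's official procedure), the ratio divides a subgroup's selection rate by that of the most-favoured reference group:

```python
def selection_rate(outcomes):
    """Fraction of candidates in a group with a positive outcome (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_outcomes, reference_outcomes):
    """Selection rate of a subgroup divided by the selection rate of the
    most-favoured (reference) group. Values near 1 indicate parity."""
    return selection_rate(group_outcomes) / selection_rate(reference_outcomes)

# Hypothetical audit data: 1 = selected, 0 = not selected
group_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 80% selected (reference group)
group_b = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 40% selected
ratio = impact_ratio(group_b, group_a)     # 0.4 / 0.8 = 0.5
```

The small-sample issues mentioned above arise because, with only a handful of candidates in a subgroup, a single outcome can swing the selection rate — and hence the ratio — dramatically.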
The New York City Council took decisive action to mandate bias audits of automated employment decision tools (AEDTs) used to evaluate employees for promotion or candidates for employment in New York City, signaling that the risks of Artificial Intelligence (AI) are becoming an increasing regulatory concern. Local Law 144, also known as the NYC Bias Audit Law, is the first of its kind to codify independent, impartial bias audits in law.
The Digital Markets Act (DMA) came into effect on November 1st 2022 and focuses on regulating how online platforms operate with respect to fair competition and consumer choice by reducing the bottlenecks that so-called gatekeepers create by monopolising the digital economy.
First proposed on the 21st of April 2021, the European Commission’s proposed Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seeks to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much like the General Data Protection Regulation (GDPR) did for privacy regulation, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.
AI is being adopted across all sectors, with the global revenue of the AI market set to grow by 19.6% each year and reach $500 billion in 2023.
In January 2019, the New York Department of Financial Services published a circular letter to all insurers authorized to write life insurance in New York State. The letter makes it clear that insurers should not use an external data source, algorithm or predictive model in underwriting or rating unless it has been determined by the insurer (not just the vendor) that the system does not collect or use prohibited criteria.
The ongoing proliferation of automated systems and artificial intelligence (AI) across industries has led to the development of regulation governing the use of these systems. The first of its kind, New York City Local Law 144 mandates independent bias audits of automated employment decision tools (AEDTs) used to evaluate candidates for employment or employees for promotion in New York City.
The Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) was signed into law by President Biden in October 2022. The Act is premised on education and training to inform procurement and facilitate the adoption of AI for services at the Federal Level.
The regulation of artificial intelligence (AI) has started to become an urgent priority, with countries around the world proposing legislation aimed at promoting the responsible and safe application of AI to minimise the harms that it can pose.
In recent years, the field of AI Ethics, and related fields, such as trustworthy AI and responsible AI, have gained much attention due to increasing concerns about the risks that AI can pose if it is not used safely and ethically.
The latest and final compromise text of the EU AI Act (released on 6 December 2022) marks the EU ministers' official greenlight to adopt a general approach to the AI Act.
The Digital Services Act (DSA) is a lengthy (300 pages) and horizontal (cross-sector) piece of legislation with composite rules and legal obligations for technology companies. Notably, there is a focus on social media, user-oriented communities, and online services with an advertising-driven business model.
The UK government has not yet proposed any AI-specific regulation but has published several policy papers, frameworks, standards, and strategies. This blog post outlines the major AI regulations in the UK.
The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with the proposed EU AI Act. This article explores the proposed penalties of the EU AI Act for organisations that are non-compliant with the Act.
Washington DC has proposed the Stop Discrimination by Algorithms Act, which aims to prevent algorithmic discrimination against protected classes.
Canada has proposed the Digital Charter Implementation Act (Bill C-27), a trio of laws on trust and privacy that includes a new AI law regulating AI systems.
Spain’s Royal Decree 9/2021, or the Rider Law, gives platform delivery workers employment rights and imposes algorithmic transparency obligations.
California has proposed a Workplace Technology Accountability Act and modifications to its employment regulations to address automated decision systems. In this blog, we compare these proposals to the proposed EU AI Act.
Following their proposed rules for the NYC Bias Audit legislation, the Department of Consumer and Worker Protection held a public hearing. In this blog, we summarise the key takeaways from this session.
The blog compares the NYC bias audit law with California’s proposed Workplace Technology Accountability Act and Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems.
California has proposed amendments to its employment regulations to extend non-discrimination practices to automated decision tools. Here are a few things you need to know about the proposed modifications.
The California Workplace Technology Accountability Act aims to increase accountability surrounding the use of technology in the workplace and reduce potential harm. Here are 10 things that you need to know about the proposed Act.
AI Risk Management is the process of identifying, verifying, mitigating and preventing AI risks. Concrete steps must be taken at each stage of the AI lifecycle to reduce the likelihood of bias.
Ethical AI is the practice of incorporating the principles of AI ethics, and other related concepts, such as trustworthy AI and responsible AI, into the design, development and deployment of AI systems.
Local Law 144 takes effect on 1st January 2023. It mandates independent bias audits of automated tools which are used to make or support decisions about hiring candidates or promoting employees within New York City. Learn about how RHR International, a global leadership consulting firm, are working with Holistic AI to prepare for the upcoming deadline.
The White House recently published a Blueprint for an AI Bill of Rights, signalling the intent of the U.S. government to regulate AI.
The AI Liability Directive is the EU’s proposed new law to make it easier to prove liability in cases where AI systems cause harm.
AI is increasingly being used in the insurance sector for risk assessments, fraud detection, underwriting, sales, and customer service. While this automation can increase efficiency, it can also introduce novel harms that must be managed.
The EU AI Act was first proposed by the European Commission in April 2021. It will be the first law worldwide which regulates the development and use of AI in a comprehensive way.
The New York City (NYC) Department of Consumer and Worker Protection (DCWP) published proposed amendments to Local Law 144, which mandates independent bias audits of ‘automated employment decision tools’ used by employers or employment agencies.
Aware of the NYC Bias Audit legislation, Hired's CTO Dave Walters began vetting AI audit providers, deciding that Holistic AI was just what they were looking for. In his interview with Protocol, Dave outlines the criteria he used in his search for a partner.
Bias refers to unjustified differences in outcomes for different subgroups. To contextualise this, bias in recruitment could take the form of white candidates being hired at a greater rate than non-white when the race is not related to job requirements.
Facial recognition has several applications, from controlling access to a building and unlocking devices to replacing a boarding pass and identification of suspects by law enforcement.
Colorado’s General Assembly enacted legislation last year that restricts insurers’ use of ‘external consumer data’, prohibits data, algorithms, or predictive models from unfairly discriminating, and requires insurers to test their systems and demonstrate that they are not biased.
This blog explains the key elements of NIST’s AI RMF and why AI risk management will become embedded as a core business function in the coming years.
In response to concerns about harm that can result from the use of AI and the calls for greater governance of AI systems by the AI ethics movement, legislation addressing the use of these technologies has begun to emerge.
An overview of three high-profile cases that highlight the risks associated with the use of algorithms, and outline how applying AI ethics principles could have prevented these harms from occurring.
The upcoming EU AI Act requires algorithmic impact assessments (AIAs) to determine whether a system is high-risk and subject to additional regulation. This blog post gives an overview of AIAs and data protection impact assessments (DPIAs), explains the difference between them, and provides some examples of legislation that requires them.
While auditing and assurance are related concepts and practices, they are distinct. In this blog, we give an overview of algorithm auditing and assurance, outlining the key components of each practice and how they link to each other.
In this blog post, we provide an overview of AI ethics, first defining the term before discussing the approaches that bring about more ethical AI and the major themes in the field.
Under a new UK framework, AI regulation will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the appropriate regulator(s).
The EU Artificial Intelligence (AI) Act aims to lead the world in the governance of AI, requiring impact assessments to identify the risk associated with the use of AI systems and continuous management and mitigation of this risk.
New York City’s legislation requiring bias audits of automated employment decision tools is coming into effect in just a few months (1st January 2023) and has left many people wondering – do I need an audit, or am I exempt?
Recruitment tools driven by artificial intelligence (AI) algorithms – including game- or image-based assessments and algorithmically analysed video interviews – are becoming more mainstream, with uptake accelerated by the pandemic. The growing adoption of these tools has led to concerns about how they can be applied ethically and without discrimination.
Originally due to come into effect on 1st January 2023, the enforcement date of the NYC Bias audit law (Local Law 144) has been pushed back to 15th April 2023. Here’s what you need to know.
The Artificial Intelligence Video Interview Act came into effect in Illinois on 1st January 2020, affecting employers who use artificial intelligence to analyse video interviews of job applicants. Here are the 5 things you need to know about this legislation.
Our automated AI Risk Management platform empowers your enterprise to confidently embrace AI.