The use of biometric technology and artificial intelligence (AI), from fingerprint scanners to facial recognition, has become increasingly widespread, offering benefits such as convenience, faster service, and improved security. As technology advances, so too will the forms of biometric data that can be derived from individuals. In this paper, we explore what biometrics are, the regulatory landscape and current legal actions concerning biometrics, and how to manage the associated risks moving forward.
The EU AI Act is coming. With the policy proposal — which is seeking to set the gold standard for AI regulation across the globe — now in the final stage of the lawmaking process, it's time to get clued-up on the risk management obligations it will entail.
This study explores transparency challenges in algorithmic fairness. After reviewing progress in technical and regulatory transparency, we suggest that some level of opacity is inherent to AI systems.
HR has well and truly embraced AI, using the technology to automate processes such as candidate sourcing, performance reviews, and internal mobility. This has allowed HR professionals to allocate their time and resources more efficiently, while automation can also make the recruitment process more engaging and coherent for candidates. But automated tools also come with risks, particularly in terms of algorithmic bias.
Artificial intelligence is revolutionising industries worldwide, saving time and money and reducing the burden on human workers. However, this is not without risks, as demonstrated by the harms and lawsuits observed in recent years. This white paper explores the risks of using AI, with a focus on HR Tech, Insurtech, biometrics, fintech, healthcare, housing, and social media and generative AI, as well as how AI Governance, Risk, and Compliance can make AI safer and increase trust.
To address growing concerns about the use of automated employment decision tools (AEDTs) in making employment decisions, particularly in relation to the risk of discriminatory outcomes, the New York City Council took decisive action and passed legislation that mandates bias audits of these tools. In this paper, we take an in-depth look at the New York City Bias Audit Law (Local Law 144) now that the NYC Department of Consumer and Worker Protection (DCWP) has released the final version of its adopted rules, announcing a final enforcement date of 5 July 2023.
Artificial intelligence (AI) and automated employment decision tools are revolutionizing talent management in organizations, providing a scalable and efficient solution to sourcing and retaining top talent.
The diversity-validity dilemma is an issue that industrial-organisational psychologists have faced for decades. It describes the trade-off between choosing selection procedures that are the most predictive of future job performance and choosing procedures that result in less adverse impact, that is, differential hiring rates for subgroups based on characteristics such as sex/gender and race/ethnicity. In this paper, we explore why the diversity-validity dilemma is significant and how it can be overcome.
Artificial intelligence (AI) and machine learning have opened up new opportunities for psychological assessments and psychometrics. These technologies have also transformed assessments; game and image-based assessments are increasingly being used in place of traditional self-report measures. We no longer have to completely rely on self-report data and can instead rapidly infer insights from a number of sources such as verbal and non-verbal behaviour in videos and the language used in social media posts.
Insurance practices are considered a high-risk application of AI, since access to policies can have significant implications for an individual's life, particularly in the case of life and health insurance. Recent years have therefore seen an emergence of efforts to regulate insurtech. In this whitepaper, we outline some of the risks associated with the misuse of insurtech before surveying the regulatory efforts targeted at this sector, with a focus on the US and EU.
AI is increasingly being used in talent management, with research from the Society for Human Resource Management finding that almost 25% of companies are using AI to support their HR practices, including recruitment and hiring. In this paper, we give an overview of some novel assessment formats that use AI in their scoring.
Rapid advancements in artificial intelligence (AI) technology have brought about a plethora of new challenges in terms of governance and regulation. AI systems are being integrated into various industries and sectors, creating a demand for decision-makers to possess a comprehensive and nuanced understanding of the capabilities and limitations of these systems.
The issue of fairness in AI has received increasing attention in recent years. The problem can be approached by looking at different protected attributes (e.g., ethnicity, gender) independently, but fairness for individual protected attributes does not imply intersectional fairness. In this work, we frame the problem of intersectional fairness within a geometrical setting.
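To make that point concrete, consider a minimal synthetic sketch (our illustration, not the paper's method or data; all group labels and counts are invented): selection rates can be identical for each protected attribute taken on its own while differing sharply across intersectional subgroups.

```python
# Synthetic illustration: per-attribute parity without intersectional parity.
# All group labels and counts are invented for illustration.

# counts[(gender, ethnicity)] = (selected, total)
counts = {
    ("F", "A"): (90, 100),   # 90% selected
    ("F", "B"): (10, 100),   # 10% selected
    ("M", "A"): (10, 100),   # 10% selected
    ("M", "B"): (90, 100),   # 90% selected
}

def rate(groups):
    """Selection rate pooled over the given (gender, ethnicity) groups."""
    selected = sum(counts[g][0] for g in groups)
    total = sum(counts[g][1] for g in groups)
    return selected / total

# Marginal (single-attribute) selection rates are identical: 50% each.
for g in ("F", "M"):
    print(f"gender={g}: {rate([k for k in counts if k[0] == g]):.0%}")
for e in ("A", "B"):
    print(f"ethnicity={e}: {rate([k for k in counts if k[1] == e]):.0%}")

# But the intersectional subgroups differ by a factor of nine.
for k in counts:
    print(f"subgroup={k}: {rate([k]):.0%}")
```

Here both marginal rates come out at 50%, while the intersectional subgroup rates range from 10% to 90%; gaps of this kind are what an explicitly intersectional framing is designed to expose.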
Recent advancements in GANs and diffusion models have enabled the creation of high-resolution, hyper-realistic images. However, these models may misrepresent certain social groups and exhibit bias. Understanding bias in these models remains an important research question, especially for tasks that support critical decision-making and could affect minorities.
The use of automated decision tools in recruitment has received an increasing amount of attention. In November 2021, the New York City Council passed legislation (Local Law 144) that mandates bias audits of Automated Employment Decision Tools. From 15th April 2023, companies that use automated tools for hiring or promoting employees are required to have these systems audited by an independent entity.
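To give a sense of the kind of calculation such an audit involves, the adopted rules centre on comparing selection rates across demographic categories via an impact ratio. The following is a minimal sketch; the category names and counts are invented for illustration and are not drawn from the rules or any real audit.

```python
# Minimal sketch of an impact-ratio calculation of the kind required by
# Local Law 144 for selection decisions. All names and counts are invented.
selected = {"Category 1": 120, "Category 2": 45, "Category 3": 30}
assessed = {"Category 1": 300, "Category 2": 150, "Category 3": 120}

selection_rate = {c: selected[c] / assessed[c] for c in selected}
best = max(selection_rate.values())  # rate of the most-selected category

for category, sr in selection_rate.items():
    impact_ratio = sr / best  # 1.0 for the most-selected category
    print(f"{category}: selection rate {sr:.2f}, impact ratio {impact_ratio:.2f}")
```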
The EU AI Act (EU AIA) proposes a “risk-based approach” for regulating AI systems, where systems are classed as having (1) low or minimal risk, (2) limited risk, (3) high-risk, or (4) unacceptable risk.
This year, 2023, the groundwork will be laid for the EU AI Act to take effect within the next two years, prompting the establishment of risk management frameworks. In the United States, the focus will be on how regulatory bodies and case law lead the pack in targeting companies that proliferate algorithmic discrimination, intentionally use bad data, or deploy dark patterns.
The use of AI is proliferating globally across all sectors. While this can have many benefits including increased efficiency and greater accuracy, the use of these systems can pose novel risks. As such, policymakers around the world are starting to propose legislation to manage these risks.
Across the US, legislation aiming to regulate the use of artificial intelligence (AI) and automated systems is starting to emerge. While most of these efforts are at the state and local level, with California, New York City, DC, and Colorado all proposing legislation, some efforts have also been made at the federal level.
The Discussion Paper clarifies how existing regulations and guidance, including on risk management, consumer protection, and data protection, apply to the use of AI. It seeks feedback on whether new regulations are required.
Researchers, policy-makers, and industry representatives sharing this view convened to collectively identify future areas of focus for advancing AI standards, particularly the acute need to ensure that proposed standards are practical and empirically informed.
We investigate the question of what value is being expressed by an algorithm, which we conceptualize in terms of a digital asset: a valued digital thing derived from a particular digital technology (in this case, an algorithmic system). Our main takeaway is to invite the reader to consider artificial intelligence as a representation of the capture of value sui generis, and that this may be a step change in the capture of value vis-à-vis the emergence of digital technologies.
The purpose of this white paper, which is part of our Holistic AI thought experiment series, is to provide a write-up of the talk given at the start of the session; it should be read as a text to stimulate discussion, much like the discussion that took place at the event.
In this article, we provide a summary and discussion of the key points of the legislation before providing a commentary, where we identify four key themes: i) the creation of boundaries that can contribute to a healthy work-life balance and protect the privacy of workers; ii) how the requirement for impact assessments of automated decision tools and worker information systems reflects the wider movement towards algorithmic assurance; iii) the necessary and potentially problematic requirement to share notices and impact assessment reports with the Labor Agency; and iv) how the proposed legislation might conflict with existing law while not exempting smaller businesses.
In this paper, we review applicants’ perceptions of the procedural fairness of algorithmic recruitment tools based on key findings from seven key studies, sampling over 1300 participants between them. We focus on the sub-facets of behavioural control, the extent to which individuals feel their behaviour can influence an outcome, and social presence, whether there is the perceived opportunity for a social connection and empathy.
This paper maps out the auditing process, explaining its verticals and their regulatory significance. We also look at the current financial regulation, likely future financial regulation, and the current proposals for AI regulation to describe how these could and should operate effectively together. Finally, we provide a case study of an audit in financial services: testing a credit scoring system for bias based on protected characteristics.
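To give a flavour of what such a test can look like in practice, the sketch below (our illustration, not the audit described in the paper; all counts are invented) compares approval rates across a protected characteristic using a disparate impact ratio and a two-proportion z-test:

```python
# Illustrative bias check on credit approval outcomes. All counts are
# invented for illustration; this is not the audit from the paper.
from math import sqrt

approved = {"group_a": 640, "group_b": 410}
applied  = {"group_a": 1000, "group_b": 800}

p_a = approved["group_a"] / applied["group_a"]
p_b = approved["group_b"] / applied["group_b"]

# Pooled proportion under the null hypothesis of equal approval rates.
pooled = sum(approved.values()) / sum(applied.values())
se = sqrt(pooled * (1 - pooled) * (1 / applied["group_a"] + 1 / applied["group_b"]))
z = (p_a - p_b) / se

print(f"approval rates: {p_a:.2%} vs {p_b:.2%}")
print(f"disparate impact ratio: {min(p_a, p_b) / max(p_a, p_b):.2f}")
print(f"two-proportion z statistic: {z:.2f}")  # |z| > 1.96 ~ significant at 5%
```

A real audit would go further, for example controlling for legitimate risk factors before attributing a rate gap to the protected characteristic, but the basic selection-rate comparison above is the usual starting point.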
In this brief, we summarise and comment on the ‘Presidency compromise text’, which is a revised version of the proposed act reflecting the consultation and deliberation by member states and actors (November 2021).
Given the relative maturity of the data protection debate and that it has translated into legal codification, it is indeed a natural place to start for AI. In this paper, we anticipate directions in what we believe will become a dominant and impactful forthcoming debate, namely, how to conceptualise the relationship between data protection and AI impact.
Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage.
Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms with the remit to validate artificial intelligence, machine learning, and associated algorithms.
In this study, we compare the machine-learning-based Lasso approach to ordinary least squares regression, as well as the summative approach that is typical of forced-choice formats. We find that the Lasso approach performs best in terms of generalisability and convergent validity, although the other methods have greater discriminant validity. We recommend the use of predictive Lasso regression models for scoring forced-choice image-based measures of personality over the other approaches. Potential further studies are suggested.
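As an illustration of the comparison (a minimal sketch, not the study's code or data; the synthetic data generator, feature counts, and regularisation strength are our assumptions), cross-validated R² can be used to compare the two regression approaches:

```python
# Sketch: comparing Lasso and ordinary least squares with cross-validated
# R^2 on synthetic data. All data and hyperparameters are invented.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 60 predictors of which only 10 are truly informative,
# loosely mimicking many item-level responses predicting a trait score.
X, y = make_regression(n_samples=300, n_features=60, n_informative=10,
                       noise=10.0, random_state=0)

for name, model in [("OLS", LinearRegression()), ("Lasso", Lasso(alpha=1.0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")
```

The intuition behind the finding is that the Lasso's L1 penalty shrinks the weights of uninformative items toward zero, which tends to improve out-of-sample generalisability relative to unpenalised OLS when predictors are numerous and noisy.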
In this paper, we provide an overview of these proposed modifications, contextualising them within the larger artificial intelligence ethics debates and other similar legislation, as well as industry activity. We then identify some key themes in the legislation, namely i) ambiguity in terms of providing and justifying discrimination based on protected characteristics; ii) the lack of penalties for non-compliance; and iii) the lack of active obligations required of employers by the proposed amendments.
Through an examination of the concept of inclusion, this paper explores how to improve the terms on which African populations and subpopulations and their concerns are included in the global AI ethics discourses.
Gamification can mitigate some of these issues through greater engagement and shorter testing times. One avenue of gamification is image-based tests. Although such assessments are gaining traction in personnel selection, few studies describing their validity and psychometric properties exist. The current study explores the potential of a five-minute, forced-choice, image-based assessment of the Big Five personality traits to be used in selection.
This initiative aims to produce guidance that encompasses both technical (e.g., system impact assessments) and non-engineering (e.g., human oversight) components of governance, and represents a significant milestone in the movement toward standardising AI governance.
The publication of the EU’s draft AI legal framework is a milestone in the regulatory debate on AI. It proposes a risk-based approach to regulating and reporting. In this white paper, we provide a high-level overview of the risk tiers, which we take as the kernel of the legislation, and follow this by offering our initial thoughts and feedback on strategic points of contention in the legislation.
The two-tier approach of the Algorithmic Transparency Standard encourages inclusive transparency across distinct audiences, facilitating trust among algorithm stakeholders. Moreover, implementation of the Standard within the UK's public sector is likely to inform standards more widely, influencing best practice in the private sector. This article provides a summary and commentary of the text.
In this paper, we frame the discussion of the impact of new digital technologies within the rubric of ‘futurism’, which we take to be a philosophy that speculates about where humanity is headed in light of new digital technologies.
We outline risk sources for using algorithmic hiring tools, suggest the most appropriate opportunities for audits to take place, recommend ways to measure bias in algorithms, and discuss the transparency of algorithms.
This paper presents the focus of the latest research in AIEd: reducing teachers' workload, contextualizing learning for students, revolutionizing assessments, and developing intelligent tutoring systems.
In this brief, we provide an exposition of the regulation, serving two agendas: firstly providing a summary for those unfamiliar with the legislation aimed at ensuring that AI-driven recruitment tools do not prevent the hiring of a diverse workforce; and secondly, providing a commentary that discusses some shortcomings with the legislation and what it might signal for the future.
In this paper, we map out key strategic and normative dilemmas that regulators must navigate in regulating the development and application of AI. We propose three such dilemmas.
As has been the case for decades with financial audits, governments, businesses, and society will soon require algorithm audits: formal assurance that algorithms are legal, ethical, and safe. A novel industry of Auditing and Assurance of Algorithms will arise with the task of validating autonomous systems.
In this white paper, which is part of our Holistic AI thought experiment series, we pick up on the responsibility of the AI ethics community, or more specifically 'AI ethicists', advocating that the role of the AI ethicist in the public debate comes with a responsibility to educate and inform (to generate questions and possibilities) rather than to lead and dictate (to provide answers and ideology).
The principal aim of this article is to provide a high-level conceptual discussion of the field by introducing basic concepts and sketching approaches and central themes in AI ethics.
This paper proposes a tacit knowledge elicitation process for capturing the operational best practices of experienced workers in industrial domains, based on a mix of algorithmic techniques and a cooperative game.
The ethics of Artificial Intelligence (AI Ethics) can be thought of as undergoing three broad phases: the first two being principles and processes, and the current, third phase being assurance. The manner in which AI assurance matures is likely to be a result of the extent and reach of any regulatory intervention.
This critical perspective makes a timely contribution to the tech policy debate concerning the monitoring and moderation of online content. Governments globally are currently considering a range of legislative interventions to limit online abuse, disinformation, and the dissemination of illegal content on social media platforms. These interventions will significantly impact online free speech, competition between platforms, and the democratic function of online platforms. By investigating the UK’s Online Safety Bill, comparing it with similar interventions, and considering the political impact of different digital tools for moderation, this perspective aims to inform the current policy debate by combining technical and political insight. It indicates the need for further research into the comparative efficacy of different methods of content monitoring and moderation.
We provide an overview of the document's structure and offer an emphasised commentary on various standouts. Our main takeaway is 'Innovation First': a clear signal that innovation is at the forefront of the UK's data priorities.
Our key takeaway is that, despite the promising development opportunities, there are actual and potential challenges that African countries need to consider in deciding whether to scale up or down the application of AI in agriculture.
Here, we tackle the important worry that digital ethics in general, and AI ethics in particular, lack adequate philosophical foundations. In direct response to that worry, we formulate and rationally justify some basic concepts and principles for digital ethics/AI ethics, all drawn from a broadly Kantian theory of human dignity.
In this paper, we survey how AI assurance will be used in the wider industry and how the CDEI will work with other organisations to develop regulations and industry standards and to create a profession of AI assurance practitioners. We also comment on the CDEI's roadmap and our views on the complexities of building 'justified trust'. Finally, we analyse the role of research in AI assurance and current developments in the AI assurance industry, and give an overview of some international regulations.
This paper was written following an ICO workshop led by UCL and UCL’s Centre for Digital Innovation, co-authored by Holistic AI. The paper surveys existing regulation, comparing the broad approach of the European Union with the sector-specific approach from the UK.