Artificial intelligence is revolutionizing industries worldwide, saving time and money and reducing the burden on human workers. However, this is not without risks, as demonstrated by a number of harms and lawsuits in recent years. This white paper explores the risks of using AI, with a focus on HR Tech, insurtech, biometrics, fintech, healthcare, housing, social media, and generative AI, as well as how AI Governance, Risk, and Compliance can make AI safer and increase trust.
To address growing concerns about the use of automated employment decision tools (AEDTs) in employment decisions, particularly the risk of discriminatory outcomes, the New York City Council took decisive action and passed legislation mandating bias audits of these tools. In this paper, we take an in-depth look at the New York City Bias Audit Law (Local Law 144) now that the NYC Department of Consumer and Worker Protection (DCWP) has released the final version of its adopted rules, announcing a final enforcement date of 5 July 2023.
The use of biometric technology and artificial intelligence (AI), from fingerprint scanners to facial recognition, has become increasingly widespread, offering a range of benefits such as convenience, faster service, and improved security. As technology advances, so too will the forms of biometric data that can be derived from individuals. In this paper, we explore what biometrics are, the regulatory landscape and current legal actions against biometrics, and how to manage the risks associated with biometrics moving forward.
Insurance practices are considered a high-risk application of AI, since access to policies can have significant implications for an individual’s life, particularly in the case of life and health insurance. Accordingly, recent years have seen an emergence of efforts to regulate insurtech. In this white paper, we give an overview of some of the risks associated with the misuse of insurtech before surveying the regulatory efforts targeted at this sector, with a focus on the US and EU.
The EU AI Act (EU AIA) proposes a “risk-based approach” for regulating AI systems, where systems are classed as having (1) low or minimal risk, (2) limited risk, (3) high risk, or (4) unacceptable risk.
While the use of artificial intelligence (AI) is proliferating across all industries, this is particularly true in the HR sector, where previously human-led responsibilities such as candidate sourcing and screening, onboarding, performance review and management, and internal mobility are increasingly being automated.
The use of AI is proliferating globally across all sectors. While this can have many benefits including increased efficiency and greater accuracy, the use of these systems can pose novel risks. As such, policymakers around the world are starting to propose legislation to manage these risks.
Artificial intelligence (AI) and automated employment decision tools are revolutionizing talent management in organizations, providing a scalable and efficient solution to sourcing and retaining top talent.
Across the US, legislation aiming to regulate the use of artificial intelligence (AI) and automated systems is starting to emerge. While most of these efforts are at the state and local level, with California, New York City, DC, and Colorado all proposing legislation, some efforts have also been made at the federal level.
Our AI Governance, Risk and Compliance platform empowers your enterprise to confidently embrace AI.