The future of artificial intelligence (AI) is at an inflection point with the mass adoption of Generative AI. Comprising Large Language Models (LLMs), transformers, and other neural networks, Generative AI and Foundation Models can create new outputs from raw data. These serve as building blocks for developing more complex and sophisticated models that have the potential to bring exponential benefits across a variety of use cases, from commerce and cancer research to climate change.
In recent weeks, a discussion has arisen around the need for a “pause” in the advancement of products based on generative artificial intelligence, such as ChatGPT. In this blog post, we explore how artificial intelligence can be made safer and whether a “pause” on AI advancement is warranted, or even possible.
Recent years have seen multiple harms resulting from the mismanagement of artificial intelligence (AI) systems in the workplace. Indeed, there are several examples of these systems going wrong, from biased applicant screening tools to discriminatory monitoring of worker behaviour. In this blog, we outline the key requirements of Massachusetts HD 3051, an Act Preventing a Dystopian Work Environment.
Algorithms are increasingly playing a significant role in facilitating connections on social media platforms, from powering recommendations that help businesses connect with users, to amplifying movements like #MeToo that enable positive socio-cultural shifts. However, they have also been deployed as vectors of harm, both intentionally and unintentionally. In this blog, we take a quick look at what some of these harms might be, what governments are doing to mitigate them, and what can be done to ensure that algorithms are developed with safety, fairness and ethics in mind.
With the expansion of computational power observed over the last ten years, artificial intelligence (AI) models have become increasingly prevalent across industry and academia. A 2021 article signed by a prominent group of researchers stated that we are entering the Age of Algorithms (whether AI, machine learning or similar). This statement logically leads to the idea that we are now increasingly close to the Age of AI Economies: work processes, the way different markets are organised, the consumption patterns of economic agents, and the way in which economic phenomena occur and are analysed are all permeated by algorithms that generate impacts that are, as of now, still unknown.
With artificial intelligence (AI) being increasingly used in high-stakes applications, such as the military, recruitment, and insurance, there are growing concerns about the risks that this can bring. This is because algorithms can introduce novel sources of harm, where issues such as bias can be amplified and perpetuated by the use of AI. As such, recent years have seen a number of controversies around the misuse of AI, which have affected a range of sectors.
OpenAI's GPT-4 can now process image-based prompts in addition to text-based ones, although the output is still text-based for now. While OpenAI has implemented ethical safeguards, there are still risks in using GPT-4. Check out our most recent blog on the dangers.
On 15-16 February 2023, the first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM) was held in the Netherlands. The US used the summit as an opportunity to put forth their “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” In this blog, we begin by looking at the US’s latest development in promoting the adoption of responsible AI in the military, before briefly discussing military investments in the US and China, as well as the macro-level implications of AI for military capabilities. We conclude by making a case for the growing importance of risk management and auditable methodologies.
As artificial intelligence (AI) becomes more prevalent in various industries, it is crucial that all stakeholders are equipped to comprehend and articulate the outcomes produced by AI models. This process must be clear and transparent across multiple dimensions to ensure that the results generated are ethical, unbiased, and trustworthy.
Artificial Intelligence (AI) risk management is an iterative process that requires an understanding of the risks associated with AI systems and the best practices for managing them. The key steps for implementing a successful AI risk management strategy include identifying and assessing risks, implementing a risk management plan, and monitoring development. It is important to identify and mitigate AI risks to ensure a successful implementation of AI technologies and gain a competitive edge.
The use of large language models (LLMs) such as Galactica, ChatGPT, and Bard has seen significant growth over the past few months. These models are becoming increasingly popular and are being integrated into various aspects of daily life, from drafting grocery lists to helping write Python code. As with any novel technology, it is essential for society to understand the limitations, possibilities, biases, and regulatory issues brought about by these tools.
Within business organisations, human resources (HR) teams have been at the forefront of innovating business practices by operationalising emerging technologies such as artificial intelligence (AI) and incorporating them into their talent sourcing and talent management practices.
The regulation of artificial intelligence (AI) has started to become an urgent priority, with countries around the world proposing legislation aimed at promoting the responsible and safe application of AI to minimise the harms that it can pose.
AI systems are becoming increasingly integrated into our daily lives and are being used to make high-stakes decisions that can have significant implications for an individual’s life chances. Therefore, there are increasing calls to ensure that there is transparency about the capabilities of AI systems and that their outputs are explainable. In this blog post, we discuss what is meant by AI transparency and explainable AI and how they can be implemented through governance and technical approaches.
The National Institute of Standards and Technology (NIST), one of the leading voices in the development of artificial intelligence (AI) standards, launched the first version of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) on 26 January 2023. Underpinning the AI RMF is a focus on moving beyond computational metrics and instead focusing on the socio-technical context of the development, deployment, and impact of AI systems. We sat down with NIST to discuss the AI RMF and learn about their vision for how it can be implemented.
Speech technology is widely used and has many applications, including automatic speech recognition (ASR) for voice control of devices and accessing information. However, ASR systems can be fragile and biased, disproportionately affecting certain groups. This post explores the metrics used to measure bias in ASR and recommends datasets to consider.
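To make the idea of an ASR bias metric concrete, here is a minimal Python sketch, assuming the open-source `jiwer` library and entirely hypothetical transcripts: it computes word error rate (WER) per subgroup and a simple worst-to-best ratio as a disparity measure.

```python
# Minimal sketch: measuring ASR bias as a word error rate (WER) gap
# across subgroups. Transcripts below are hypothetical examples only.
from collections import defaultdict

import jiwer  # pip install jiwer

# (group, reference transcript, ASR hypothesis)
samples = [
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_a", "set a timer for ten minutes", "set a timer for ten minute"),
    ("group_b", "turn on the kitchen lights", "turn on the kitchen nights"),
    ("group_b", "set a timer for ten minutes", "set a time for ten minutes"),
]

refs, hyps = defaultdict(list), defaultdict(list)
for group, ref, hyp in samples:
    refs[group].append(ref)
    hyps[group].append(hyp)

# Per-group WER: lower is better; large gaps between groups suggest bias.
wer_by_group = {g: jiwer.wer(refs[g], hyps[g]) for g in refs}
for group, wer in wer_by_group.items():
    print(f"{group}: WER = {wer:.2%}")

# One simple disparity measure: the ratio of worst to best group WER.
ratio = max(wer_by_group.values()) / min(wer_by_group.values())
print(f"WER ratio (worst/best): {ratio:.2f}")
```

Other disparity measures, such as absolute WER gaps or statistical tests across groups, can be substituted in the final step depending on the evaluation design.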
AI is being adopted across all sectors, with the global revenue of the AI market set to grow by 19.6% each year and reach $500 billion in 2023.
The ongoing proliferation of automated systems and artificial intelligence (AI) across industries has led to the development of regulation governing the use of these systems. The first of its kind, New York City Local Law 144 mandates independent bias audits of automated employment decision tools (AEDTs) used to evaluate candidates for employment or employees for promotion in New York City.
In recent years, the field of AI Ethics and related fields such as trustworthy AI and responsible AI have gained much attention due to increasing concerns about the risks that AI can pose if it is not used safely and ethically.
AI Risk Management is the process of identifying, verifying, mitigating, and preventing AI risks. Concrete steps must be taken at each stage of the AI lifecycle to reduce the likelihood of bias.
Ethical AI is the practice of incorporating the principles of AI ethics, and other related concepts, such as trustworthy AI and responsible AI, into the design, development and deployment of AI systems.
AI is increasingly being used in the insurance sector for risk assessments, fraud detection, underwriting, sales, and customer service. While this automation can increase efficiency, it can also introduce novel harms that must be managed.
Bias refers to unjustified differences in outcomes for different subgroups. To contextualise this, bias in recruitment could take the form of white candidates being hired at a greater rate than non-white candidates when race is not related to job requirements.
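To make this concrete, here is a minimal Python sketch of one common way such bias is quantified, using hypothetical hiring numbers: it compares selection rates between groups and applies the widely used “four-fifths rule” threshold to the disparate impact ratio.

```python
# Minimal sketch: quantifying hiring bias via the disparate impact ratio.
# All numbers are hypothetical; the 0.8 ("four-fifths rule") threshold is
# a common heuristic from US employment guidance, not a definitive test.

hired = {"white": 40, "non_white": 15}        # candidates hired, per group
assessed = {"white": 100, "non_white": 60}    # candidates assessed, per group

# Selection rate per group: hired / assessed
rates = {g: hired[g] / assessed[g] for g in hired}
for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.2%}")

# Disparate impact ratio: lowest selection rate over highest
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Below the four-fifths threshold: potential adverse impact.")
```

A ratio close to 1 indicates similar outcomes across groups; values well below 0.8 are often treated as a signal of adverse impact warranting further investigation.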
Facial recognition has several applications, from controlling access to a building and unlocking devices to replacing a boarding pass and helping law enforcement identify suspects.
This blog explains the key elements of NIST’s AI RMF and why AI risk management will become embedded as a core business function in the coming years.
An overview of three high-profile cases that highlight the risks associated with the use of algorithms, and an outline of how applying AI ethics principles could have prevented these harms from occurring.
The upcoming EU AI Act requires algorithmic impact assessments (AIAs) to determine whether a system is high-risk and subject to additional regulation. This blog post gives an overview of AIAs and data protection impact assessments (DPIAs), explains the difference between them, and provides some examples of legislation that requires them.
While auditing and assurance are related concepts and practices, they are distinct. In this blog, we give an overview of algorithm auditing and assurance, outlining the key components of each practice and how they link to each other.