Key Takeaways
AI is being adopted across all sectors, with global AI market revenue forecast to grow by 19.6% year on year to reach $500 billion in 2023.
However, as AI becomes increasingly common in everyday life and industry, regulation has grown alongside it, introducing new obligations designed to future-proof the technology and protect consumers from the harm that unchecked adoption of AI can bring.
In an ongoing lawsuit against State Farm, it is alleged that the company's automated claims processing resulted in algorithmic bias against Black homeowners. In another recent case, Louisiana authorities' use of facial recognition technology led to a mistaken arrest.
For businesses and organisations to integrate AI into their operations smoothly and reduce potential harms, employing a responsible AI approach is the first step.
Responsible AI is about taking proactive steps to safeguard against the harms that AI can bring. Holistic AI outlines five best practices for employing responsible AI:
A major source of harm is bias. Bias can emerge from imbalanced training data, where particular subgroups are underrepresented. For example, the Gender Shades project found that commercial gender classification tools were the least accurate for darker-skinned females and the most accurate for lighter-skinned males. This disparity reflected the data used to train the models, which consisted primarily of white males.
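A simple first check for this kind of disparity is to compare model accuracy across subgroups. The minimal sketch below assumes evaluation results in a table with illustrative column names (`group`, `label`, `prediction`); it is not tied to any specific dataset or tool:

```python
import pandas as pd

# Hypothetical evaluation results: one row per prediction, with a
# demographic subgroup attached for auditing purposes (values illustrative).
results = pd.DataFrame({
    "group":      ["darker_female", "darker_female", "lighter_male", "lighter_male"],
    "label":      [1, 0, 1, 0],
    "prediction": [0, 0, 1, 0],
})

# Accuracy per subgroup: large gaps between groups are a red flag
# that the training data may under-represent some populations.
per_group_accuracy = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
```

A real audit would use held-out evaluation data and report additional metrics (false positive and false negative rates per group), but even this coarse check surfaces the kind of gap the Gender Shades project identified.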
Data governance refers to the protocols and measures in place for both the technical and non-technical aspects of the development and deployment of AI. Data governance is critical because biases, errors and inconsistencies in the data stage can feed into the model itself, resulting in harm, as seen in the aforementioned cases. For instance, on an industry level, HR Tech and Insurance continue to see issues with biased data affecting their models, leading to adverse effects for those from marginalised groups. By having governance mechanisms which track where data is coming from, the quality of it, and how it is being used and manifesting on a model level, the threat of harm decreases.
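One lightweight way to start tracking where data comes from and how it is used is to attach a provenance record to every dataset that feeds a model. The sketch below is a hypothetical illustration; the field names are assumptions, and a real programme would align them with the organisation's own governance policies:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Minimal provenance record for a training dataset (illustrative)."""
    name: str
    source: str                     # where the data came from
    collected_on: date              # when it was collected
    intended_use: str               # what the data may be used for
    known_limitations: list[str] = field(default_factory=list)
    models_trained_on: list[str] = field(default_factory=list)

# Example record for a hypothetical insurance dataset.
record = DatasetRecord(
    name="claims_history_2022",
    source="internal claims system export",
    collected_on=date(2022, 6, 30),
    intended_use="claims-processing model training only",
    known_limitations=["under-represents rural policyholders"],
)
record.models_trained_on.append("claims_triage_v3")
print(record)
```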
Stakeholder communication signals a commitment to transparency in the use of AI. It can mean informing consumers that the products or platforms they interact with use AI. If you are an AI vendor, it means making clear to buyers where the data your product uses is sourced from, how it has been used in the model, and any potential implications of the technology.
Responsible AI is only possible with the commitment of upper-level executives and the C-suite. From the top, there must be active efforts to deploy responsible AI best practices and foster cross-functional collaboration; the onus lies on more than just developers and engineers. Harms can be predicted and mitigated by pushing teams to collaborate across disciplines, such as pairing social scientists with data scientists.
This push for collaboration from the top is also critical to ensuring there are internal efforts to identify and mitigate key risks. These include testing for bias, ensuring that data is diverse and representative, and applying data minimisation techniques, drawing on the perspectives of different disciplines; a simple representativeness check is sketched below.
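One way to check whether data is representative is to compare subgroup shares in the training data against a reference population, such as census figures. The sketch below uses hypothetical group names and shares; the 0.8 threshold is an illustrative assumption, not a legal standard:

```python
import pandas as pd

# Hypothetical subgroup shares in the training data vs. a reference
# population -- all values are illustrative.
training_shares = pd.Series({"group_a": 0.70, "group_b": 0.20, "group_c": 0.10})
population_shares = pd.Series({"group_a": 0.50, "group_b": 0.30, "group_c": 0.20})

# Representation ratio: values well below 1.0 indicate a subgroup
# is under-represented in the training data relative to the population.
representation_ratio = training_shares / population_shares
under_represented = representation_ratio[representation_ratio < 0.8]
print(under_represented)
```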
The regulatory landscape surrounding AI is emerging fast across the globe. Companies are expected to do their due diligence and comply or risk reputational damage along with hefty fines.
For example, upcoming legislation in New York City will require independent, impartial bias audits of automated employment decision tools used to screen candidates for positions or employees for promotions. Likewise, legislation in Colorado will prohibit insurance providers from using discriminatory data or algorithms in their insurance practices.
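Bias audits of employment tools typically compare selection rates across demographic groups, with the "four-fifths rule" as a common benchmark for flagging potential adverse impact. The sketch below uses hypothetical group names and counts, and is a generic illustration of the calculation rather than the specific methodology any law prescribes:

```python
import pandas as pd

# Hypothetical screening outcomes: candidates selected per group.
outcomes = pd.DataFrame({
    "group":    ["group_a", "group_b"],
    "applied":  [200, 150],
    "selected": [80, 30],
})

# Selection rate per group, and each group's impact ratio relative to
# the most favoured group. Ratios below 0.8 (the four-fifths rule)
# are a common flag for potential adverse impact.
outcomes["selection_rate"] = outcomes["selected"] / outcomes["applied"]
outcomes["impact_ratio"] = (
    outcomes["selection_rate"] / outcomes["selection_rate"].max()
)
flagged = outcomes[outcomes["impact_ratio"] < 0.8]
print(outcomes)
print(flagged)
```

Here group_b's selection rate (0.2) is half of group_a's (0.4), giving an impact ratio of 0.5 and triggering the flag.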
The AI Act is the EU’s proposed law to regulate the development and use of ‘high-risk’ AI systems, including those used in HR, banking, and education. It will be the first law worldwide to regulate the development and use of AI comprehensively. The AI Act is set to be the global standard with hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organisations which develop and deploy AI.
Keeping up-to-date with relevant regulations and taking steps to comply is an essential component of responsible AI.
Holistic AI has pioneered the field of responsible AI and empowers enterprises to adopt and scale AI confidently. Our team has the technical expertise needed to identify and mitigate risks, and our policy experts track and act on proposed regulations to inform our product. Get in touch with a team member or schedule a demo to find out how you can take steps towards external assurance.
Written by Ashyana-Jasmine Kachra, Public Policy Intern at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.