Responsible AI: 7 Best Practices

Authored by Ashyana-Jasmine Kachra, Policy Associate at Holistic AI, and Airlie Hilliard, Senior Researcher at Holistic AI
Published on Jun 30, 2023

Key Takeaways

  • Responsible AI is about doing what you can to safeguard against the harms that AI can bring.
  • For businesses to seamlessly integrate AI into their operations and reduce potential harms, employing a responsible AI approach is the first step.
  • Data Governance: Tracking where data comes from, its quality, and how it is used and manifests at the model level reduces the risk of harm.
  • Stakeholder Communication: Indicates a commitment to transparency by letting consumers and buyers know when the systems and/or platforms they are interacting with use AI.
  • Engagement at the Board-Level & Collaboration: From the top, there must be active efforts to deploy responsible AI best practices and foster cross-functional collaboration.
  • Complying with Relevant Regulation: Companies are expected to do their due diligence by keeping up-to-date with relevant regulations and taking steps to comply.
  • Taking Steps Towards External Assurance: Holistic AI has pioneered the field of responsible AI and empowers enterprises to adopt and scale AI confidently by providing external assurance.

AI is being adopted across all sectors, with the global revenue of the AI market set to grow by 19.6% each year and reach $500 billion in 2023.

However, the increasing use cases and normalisation of AI in everyday life and industry have been accompanied by increased regulation, aimed at future-proofing against and protecting consumers from the potential harms that unchecked adoption of AI can bring.

In an ongoing lawsuit against State Farm, it is alleged that the company's automated claims processing has resulted in algorithmic bias against black homeowners. In another recent case, Louisiana authorities' use of facial recognition technology led to a mistaken arrest.

For businesses and organizations to seamlessly integrate AI into their operations and reduce potential harms, employing a responsible AI approach is the first step.

What is Responsible AI?

Responsible AI is the practice of developing and deploying AI in a fair, ethical, and transparent way. This ensures that AI systems are aligned with human values and do not harm individuals or society. Below are Holistic AI’s seven pillars of responsible AI:

  1. Data Governance
  2. Stakeholder Communication
  3. Engagement at the Board-Level & Collaboration
  4. Develop human-centred AI
  5. Complying with Relevant Regulation
  6. Explainable AI
  7. Taking Steps Towards External Assurance

1. Data governance

A major source of harm is bias. Bias can emerge from imbalanced training data, where particular subgroups are underrepresented in the data. For example, the Gender Shades project found that commercial gender classification tools were the least accurate for darker-skinned females and the most accurate for lighter-skinned males. This disparity reflected the data used to train the models, which comprised primarily white males.

Data governance refers to the protocols and measures in place for both the technical and non-technical aspects of the development and deployment of AI. Data governance is critical because biases, errors, and inconsistencies introduced at the data stage can feed into the model itself, resulting in harm, as seen in the aforementioned cases. For instance, on an industry level, HR Tech and Insurance continue to see issues with biased data affecting their models, leading to adverse effects for those from marginalised groups. Governance mechanisms that track where data comes from, its quality, and how it is used and manifests at the model level reduce the threat of harm.
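
As a concrete illustration, a minimal representation check along these lines might look like the sketch below. It assumes a pandas DataFrame with a hypothetical demographic column and threshold; a real data governance pipeline would also track provenance, quality, and downstream use.

```python
# A minimal sketch of a subgroup representation check; the "gender"
# column and the 25% floor are illustrative assumptions, not a standard.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, floor: float = 0.10) -> pd.DataFrame:
    """Flag subgroups whose share of the training data falls below `floor`."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < floor
    return shares

# Toy example: the "female" subgroup makes up only 20% of the rows.
df = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(representation_report(df, "gender", floor=0.25))
```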

2. Stakeholder communication

Stakeholder communication indicates a commitment to transparency whilst employing AI. It can inform consumers that the products or platforms they interact with utilise AI. If you are an AI vendor, this looks like making clear to buyers where the data your product uses is sourced from, how it has been used in the model, and any potential implications of the technology.

3. Engagement at the board-level & collaboration

Responsible AI is only possible with the commitment of upper-level executives and the C-Suite. From the top, there must be active efforts to deploy responsible AI best practices and foster cross-functional collaboration. The onus lies on more than just developers and engineers. Harms can be predicted and mitigated by pushing teams across disciplines to collaborate, such as social scientists with data scientists.

This push for collaboration from the top is also critical to ensure there are internal efforts towards identifying and mitigating key risks. This includes testing for bias, ensuring the data is diverse and representative, and applying data minimisation techniques, drawing on perspectives from differing disciplines.
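
To make "testing for bias" concrete, one common check is the disparate impact ratio (the four-fifths rule often applied in employment contexts). The sketch below is illustrative only; the column names, groups, and threshold are assumptions rather than a prescribed methodology.

```python
# A minimal sketch of a disparate impact check (four-fifths rule),
# assuming binary outcomes and a hypothetical protected attribute column.
import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, group: str, privileged) -> float:
    """Ratio of the worst-off group's selection rate to the privileged group's."""
    rates = df.groupby(group)[outcome].mean()
    return rates.drop(privileged).min() / rates[privileged]

# Toy example: group "b" is selected at a third of group "a"'s rate.
df = pd.DataFrame({
    "hired": [1, 0, 1, 1, 0, 0, 1, 0],
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
})
ratio = disparate_impact(df, outcome="hired", group="group", privileged="a")
print(f"Impact ratio: {ratio:.2f} (values below 0.80 often warrant review)")
```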

4. Develop human-centred AI

One pillar of responsible AI increasingly being adopted by organisations at the vanguard of AI ethics is to develop systems that complement rather than replace or burden humans. This might involve spotting opportunities where AI could make an employee's daily duties easier, such as automating a menial yet essential task. This school of thought, known as Human-Centred AI, holds that AI systems should be used to increase efficiency and pass the time saved on to humans. The human-centred approach also emphasises the importance of human feedback in AI development and deployment, with a view to creating a future where humans and AI work in tandem to create better social, working, and living conditions.

5. Explainable AI

In AI governance, Explainable AI (XAI) – defined as systems capable of explaining and assessing their decision-making processes – is increasingly being adopted to promote transparency and mitigate bias. As well as being conducive to bias-free and accountable AI, explainability also acts as a bulwark for regulatory compliance, allowing enterprises to document how their systems reached a decision.
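
XAI spans many techniques. As one widely used post-hoc example (an illustration, not a method the authors prescribe), permutation importance measures how much a model's held-out performance drops when each feature is shuffled, indicating which inputs its decisions depend on; the dataset and model below are placeholders.

```python
# A minimal sketch of post-hoc explainability via permutation importance;
# the breast cancer dataset and random forest are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# large drops mark the features the model's decisions rely on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```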

6. Complying with relevant regulation

The regulatory landscape surrounding AI is fast emerging across the globe. Companies are expected to do their due diligence and comply or risk reputational hits along with hefty fines.

For example, upcoming legislation in New York City will require independent, impartial bias audits of automated employment decision tools used to screen candidates for positions or employees for promotions. Likewise, legislation in Colorado will prohibit insurance providers from using discriminatory data or algorithms in their insurance practices.

The AI Act is the EU’s proposed law to regulate the development and use of ‘high-risk’ AI systems, including those used in HR, banking, and education. It will be the first law worldwide to regulate the development and use of AI comprehensively. The AI Act is set to be the global standard with hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organisations which develop and deploy AI.

Keeping up-to-date with relevant regulations and taking steps to comply is an essential component of responsible AI.

7. Taking steps towards external assurance

Holistic AI has pioneered the field of responsible AI and empowers enterprises to adopt and scale AI confidently. Our team has the technical expertise needed to identify and mitigate risks, and our policy experts track and act on proposed regulations to inform our product. Get in touch with a team member or schedule a demo to find out how you can take steps towards external assurance.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
