What is Ethical AI?

July 4, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

Ethical AI refers to the development and deployment of artificial intelligence systems that emphasize fairness, transparency, accountability, and respect for human values.

AI ethics concerns the impact of AI on individuals, groups, and wider society. The goal is to promote safe and responsible AI use, mitigate AI’s novel risks, and prevent harm. Much of the work in this area centers around four main verticals:

  • Bias - risk that the system unfairly discriminates against individuals or groups
  • Explainability - risk that the system or its decisions may not be understandable to users and developers
  • Robustness - risk that the algorithm fails in unexpected circumstances or when under attack
  • Privacy - risk that the system does not adequately protect personal data

There are three major approaches to reducing the risks and making AI more ethical:

  • Principles: guidelines and values that inform and direct the design, development and deployment of AI and the standards it should comply with
  • Processes: incorporation of principles into the design of AI systems to address risk in both the technical (accountability and transparency of the technology and design choices) and non-technical (decision-making, training, education, level of human-in-the-loop) aspects of a system
  • Ethical consciousness: taking actions motivated by a moral awareness and a desire to do the right thing when designing, developing, or deploying AI systems

Why is Ethical AI important?

AI Introduces Novel Risks

AI is being adopted across all sectors, with the global revenue of the AI market set to grow by 19.6% each year and reach $500 billion in 2023.

While AI and automation can have major benefits, including increased efficiencies, greater innovation, personalization of services, and reduced burden on human workers, using AI can present novel risks that must be addressed.

For example, in the insurance sector, AI can lead to minority individuals receiving higher quotes for automotive insurance and to white patients being prioritized over sicker black patients for healthcare interventions. In law enforcement, algorithms used to predict recidivism, or the likelihood that offenders will reoffend, can be biased against black defendants, assigning them higher risk scores than their white counterparts even when factors such as prior crimes, age, and gender are controlled for.

Ethical AI can safeguard against harms

However, abiding by AI ethics principles provides an opportunity to prevent or significantly reduce these harms before they occur.

For example, a major source of bias can be imbalanced training data, where particular subgroups are underrepresented. The Gender Shades project found that commercial gender classification tools were least accurate for darker-skinned females and most accurate for lighter-skinned males. This disparity reflected the data used to train the models, which was made up mostly of lighter-skinned males.
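For illustration, here is a minimal sketch of the kind of per-subgroup accuracy comparison that can surface this sort of disparity. The labels, predictions, and group assignments below are invented placeholders, not data from any real system:

```python
# Minimal sketch: comparing classifier accuracy across demographic subgroups.
# y_true, y_pred, and group are hypothetical arrays standing in for a labelled
# evaluation set annotated with subgroup membership.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])   # model predictions
group  = np.array(["A", "A", "B", "B", "A", "B", "B", "A"])  # subgroup labels

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"Accuracy for group {g}: {acc:.2f}")

# A large gap between the best- and worst-served groups is a signal that the
# training data (or the model) needs attention before deployment.
```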

Likewise, Amazon’s scrapped resume screening tool penalized resumes containing the word “women’s” (e.g., “women’s only college”), since the model was trained on the resumes of applicants who had applied for technical positions at the company over the previous ten years, the majority of whom were male.

In both cases, the disparity in the data used to train the model resulted in bias against the underrepresented group. This could have been prevented by incorporating AI ethics principles into the design stage.
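A simple design-stage check is to audit how each subgroup is represented in the training data before any model is fit. The sketch below is purely illustrative: the DataFrame, its "gender" column, and the counts and threshold are invented for the example:

```python
# Minimal sketch: auditing subgroup representation before model training.
# The DataFrame and its "gender" column are hypothetical stand-ins for real data.
import pandas as pd

train = pd.DataFrame({
    "gender": ["male"] * 90 + ["female"] * 10,   # invented counts, purely illustrative
    "hired":  [1, 0] * 50,
})

shares = train["gender"].value_counts(normalize=True)
print(shares)

# Flag any subgroup below a chosen representation threshold
# (the 20% figure is an arbitrary illustration, not a standard).
underrepresented = shares[shares < 0.20]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```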

The importance of incorporating AI ethics principles, particularly explainability, was also highlighted when Apple came under fire after the algorithm used to determine credit limits for its Apple Card reportedly gave a man a much higher credit limit than his wife, despite her having a higher credit score. Apple was ultimately cleared of illegal activity, and Goldman Sachs, the provider of the card, was able to justify the decisions that the model came to, highlighting the need to make AI as transparent as possible so that its decisions can be explained to relevant stakeholders.
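One widely used (though by no means the only) way to make such decisions more explainable is to quantify how much each input feature contributes to a model’s predictions. The sketch below uses scikit-learn’s permutation importance on a synthetic dataset; the feature names are hypothetical stand-ins for credit inputs, not Apple’s or Goldman Sachs’ actual model:

```python
# Minimal sketch: permutation feature importance as one explainability technique.
# The data is synthetic; feature names are hypothetical stand-ins for credit inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # e.g. income, credit_score, utilization (illustrative)
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Ideally this would be computed on a held-out set rather than the training data.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["income", "credit_score", "utilization"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```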

What does it mean for companies?

As AI adoption becomes more widespread, public awareness of the risks increases, and regulatory attention intensifies, companies are under increasing pressure to ensure they are designing and deploying AI in an ethical manner.

Soon, companies will be legally required to incorporate ethical AI into their work. For example, upcoming legislation in New York City will require independent, impartial bias audits of automated employment decision tools used to screen candidates for positions or employees for promotions. Likewise, legislation in Colorado will prohibit insurance providers from using discriminatory data or algorithms in their insurance practices.
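For context, the kind of comparison such bias audits tend to involve is a selection-rate (impact ratio) calculation across demographic groups. The applicant counts below are invented purely for illustration, and the 0.8 benchmark is the familiar “four-fifths” rule of thumb rather than a universal legal threshold:

```python
# Minimal sketch: comparing selection rates across groups for a screening tool.
# The counts below are invented purely for illustration.
selected = {"group_a": 48, "group_b": 30}      # candidates advanced by the tool
applicants = {"group_a": 100, "group_b": 100}  # candidates screened

rates = {g: selected[g] / applicants[g] for g in selected}
best = max(rates.values())

for g, rate in rates.items():
    impact_ratio = rate / best
    print(f"{g}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")

# Impact ratios well below 1.0 (commonly benchmarked against the 0.8
# "four-fifths" rule of thumb) suggest the tool warrants closer review.
```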

The EU AI Act is the EU’s proposed law to regulate the development and use of ‘high-risk’ AI systems, including those used in HR, banking, and education. It will be the first law worldwide to regulate the development and use of AI in a comprehensive way. The AI Act is set to be the “GDPR for AI”, with hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organizations that develop and deploy AI.

How to make your AI more ethical

Ethical AI requires a combination of expertise in computer science and in AI policy and governance to ensure that best practices and codes of conduct are followed when developing and deploying systems. Taking steps early is the best way to manage the risks of unethical AI and get ahead of upcoming regulations. No matter the stage of system development, steps can always be taken to make AI more ethical. This will be vital for companies to protect their reputation, ensure compliance with evolving legislation, and deploy AI with greater confidence.

Holistic AI can support you in making your AI more ethical

At Holistic AI, we are thought leaders in AI ethics, having published over 50 papers on AI ethics, algorithm assurance, and auditing, and developed our own frameworks to guide audits. We have a diverse team of experts in computer science, algorithms, auditing, and public policy who combine their expertise to make AI more ethical and safeguard against potential future harms.

To learn more on how we can empower your enterprise to adopt and scale AI with confidence, schedule a demo with us today!

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

