In recent years, the field of AI Ethics, along with related fields such as trustworthy AI and responsible AI, has gained much attention due to increasing concerns about the risks AI can pose if it is not used safely and ethically.
As pioneers of the field, we define AI Ethics as a nascent discipline that has emerged in response to growing concern about the impact of AI. In particular, AI Ethics is concerned with the psychological, social, and political impact of AI and is characterised by three main approaches:
Ethical AI operationalises AI Ethics, with research in this field converging on four key verticals:
How well a system performs on each of these verticals can be determined through algorithm auditing: the practice of assessing, mitigating, and assuring an algorithm’s safety, legality, and ethics. An audit can occur at any point in a system’s lifecycle; it is an ongoing process and should be repeated annually or following any major update to the system.
While the applications of AI are vast, from recruitment to automating insurance claims, one application that has gained attention recently is conversational AI. Indeed, OpenAI has recently released ChatGPT, which uses a large language model to answer a series of questions in a conversational way. The model was initially trained by humans, who played the roles of both the user and the AI assistant, and was later trained using reinforcement learning, rewarding the model when it produced desirable responses.
Like our own definition, ChatGPT highlights the potential for biased systems, and recommends auditing as a useful approach for ensuring that AI is more ethical. The language model’s article also considers the social impact of the technology, touching on how automation can result in the displacement of jobs. However, seeing as the generated post is largely indistinguishable from a human’s efforts, who’s to say it won’t be this very system that puts a content writer out of a job?
Whilst experimenting with ChatGPT or making art with DALL·E 2 is bound to be amusing, AI will take many tasks out of our hands in the coming years, and that should not allow us to sleepwalk into the development of poorly designed systems.
The EU High-Level Expert Group on AI and the IEEE have formulated moral values that should be adhered to in the design and deployment of artificial intelligence. However, building ethical AI will require, at a minimum, verifying whether a model complies with the values it was designed to uphold. We must ask ourselves the right questions: Is the model fully explainable? Was it designed to be interpretable? What are the derived variables used in the model? Are they biased?
Bridging AI ethics from theory to practice will depend on regulatory oversight combined with AI auditing to ensure that technologies placed on the market are adequately monitored and regulated. When the chatbots themselves recognise the importance of ethical AI, it’s certainly time for us to take note!
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.