In recent weeks, a debate has emerged around calls for a “pause” in the development of products based on generative artificial intelligence, such as ChatGPT. The first crucial point to emphasise is that such a pause is improbable in practice. AI models are already part of the daily lives of thousands of people, academic research projects, Kaggle competitions, business routines, and public systems, as well as a growing set of applications in the industrial and service sectors.
Furthermore, it is a mistake to believe that there is a clear distinction between “safe AI” and “dangerous AI”. The use and impact of AI are highly contextual: they depend on the data used to train a model, the application it is put to, and the ethical and regulatory frameworks in place. AI development and deployment therefore need to be approached with a nuanced understanding of potential benefits and risks, with responsible and ethical practices prioritised throughout the AI lifecycle.
Like any other tool, AI systems produce beneficial or harmful effects depending on the user's intentions and capabilities. Social networks, for example, can serve as spaces for interaction, socialising, and knowledge sharing, or as instruments for spreading misinformation, cyberbullying, and crime.
The fundamental elements that increase trust in AI systems are fairness, bias mitigation, model transparency, robustness, and privacy. These elements are essential to ensuring that AI systems are developed and deployed responsibly and ethically, and that they do not perpetuate or amplify existing societal biases and inequalities.
These are concrete elements that make AI-based systems safer. The growing interest in these topics is illustrated in Figure 1 by the rise in submissions to the leading conference in the field, the ACM Conference on Fairness, Accountability, and Transparency (FAccT), a trend that highlights the increasing recognition of the importance of responsible and ethical AI development and deployment.
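To make one of these elements concrete, the minimal sketch below shows what a simple bias check can look like in practice: computing the demographic parity difference between two groups in a classifier's predictions. The predictions and group labels here are entirely hypothetical, and a real audit would use richer metrics and real data; this is illustrative only.

```python
# Minimal sketch: measuring demographic parity in a binary classifier's
# predictions. All data below is hypothetical and for illustration only.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups.

    A value near 0 suggests the model selects both groups at similar
    rates; larger absolute values flag a potential disparity worth
    investigating (a single metric is never proof of fairness).
    """
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return float(rate_a - rate_b)

# Hypothetical model outputs (1 = positive decision) and a binary
# protected attribute for ten individuals.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):+.2f}")
```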
According to a study by QuantumBlack AI (McKinsey), consumers increasingly value companies that adopt responsible AI policies. Companies that invest in this area can therefore distinguish themselves, attract conscientious consumers, and avoid legal and reputational issues, while contributing to a more trustworthy and beneficial AI ecosystem.
As another study shows (Figure 3), making models more transparent can help build an AI ecosystem that benefits everyone, from engineers and data scientists to professionals in leadership positions, who also need to trust the results that models generate.
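One common way to provide this kind of transparency is a model-agnostic explanation technique such as permutation feature importance, which measures how much a model's held-out accuracy drops when each feature is randomly shuffled. The sketch below is a minimal illustration using synthetic data and a scikit-learn classifier chosen purely for convenience; it is not tied to any specific study or model mentioned above.

```python
# Minimal sketch of one transparency technique: permutation feature
# importance on a synthetic dataset. Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary classification task with 4 features, 2 of them informative.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:+.3f}")
```

Outputs like these give non-specialists a readable summary of what drives a model's decisions, which is one small step towards the trust between practitioners and leadership described above.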
To ensure the development of safe and transparent AI systems, it is crucial to invest in research focused on mitigating potential risks, including biases in data or models, the potential for malicious use, and unintended consequences of deployment. Only through sustained and focused research effort can we effectively address these challenges and create a more trustworthy AI ecosystem.
A temporary "break" in AI development is not a viable solution, as simply pausing research and development will not address the underlying issues and challenges associated with AI. Instead, we must continue to invest in research and development efforts aimed at creating more responsible and transparent AI systems. This involves not only technical research but also collaboration between researchers, policymakers, and stakeholders from diverse sectors to ensure that AI is developed and deployed in a responsible and ethical manner.
Written by Kleyton da Costa, Researcher at Holistic AI.