Policymakers around the world are taking markedly different approaches to promoting responsible AI. The EU has made significant progress with its trio of laws targeting AI and algorithms – the EU AI Act, Digital Services Act, and Digital Markets Act – while the US is introducing laws at the state, federal, and local levels, particularly targeting HR Tech and insurtech, and the UK has taken a comparatively light-touch approach through white papers. China has also passed multiple laws regulating AI, with a particular focus on generative AI, and Brazil has so far introduced four laws, although none has yet progressed out of Congress. The ecosystem in Australia, on the other hand, is less mature, with the Australian Government having published only two resources to contribute to the regulatory ecosystem.
Published by the Australian Government’s Department of Industry, Innovation, and Science in 2019, the discussion paper on Australia’s AI Ethics Framework marked the opening of a public consultation on the eight proposed core principles needed for responsible AI in Australia. These are:
Posing eight questions about the proposed principles, tools required for responsible AI adoption, and the existence of best practices, the discussion paper recognises the efforts from countries around the world that have published ethical guidance on AI, as well as efforts from entities such as Google and Microsoft.
The discussion paper also places a focus on the importance of having a human in the loop in increasing accountability and reducing harm. With its focus on preventing societal harm and promoting innovation to harness the social benefits of AI, the paper also discusses the need for society in the loop, where the end users of the technologies are adequately considered during the design and development processes to ensure that frameworks are actionable and will be effective when deployed in the real world.
Indeed, the discussion paper draws on a number of scandals and harms to illustrate the importance of ethical AI frameworks, including well-known cases such as Amazon’s scrapped recruitment tool and Northpointe’s COMPAS recidivism tool. As such, the discussion paper proposes a toolkit for preventing these risks, based on nine practices:
The Australian government has yet to codify these principles and tools into regulatory or legal requirements.
Published in June 2021 and now archived, Australia’s AI Action Plan sets out the Australian Government’s vision to position Australia as a global leader in secure, trusted, and responsible AI. In particular, the action plan proposes a combination of new and existing initiatives to achieve this, including direct AI measures, programs and incentives to drive technological growth, and foundational policies to support businesses, innovation, and the economy. The plan envisions doing this through four key focus areas:
For each of these focus areas, the plan outlines how each of the three initiatives – direct AI measures, programs and incentives, and foundational policies – can help to support these efforts.
With the AI Action Plan archived, it is unlikely to lead to any AI-specific regulation or legislation; instead, it will be considered as part of Australia’s Digital Economy Strategy. However, that is not to say that Australia is not taking responsible AI seriously, or that existing laws cannot be applied to AI.
Indeed, the Australian Government itself has been held accountable for the failure of its automated debt recovery tool, robodebt. In September 2019, Melbourne-based law firm Gordon Legal filed a class action lawsuit on behalf of clients whose government-provided payments were unjustifiably reduced or withheld after the tool falsely accused them of underreporting their income between July 2015 and November 2019. The class action represents approximately 648,000 group members against the Commonwealth.
A settlement for the class action was reached in September 2022, under which the Australian Government agreed to pay $112 million in compensation, including legal costs, to around 400,000 eligible individuals. The government has also repaid more than $751 million to citizens affected by debt collection initiated by the tool, and agreed to drop repayment requests for $744 million in invalid debts that had been partially repaid and $258 million in invalid debts that had not been repaid at all. Overall, over $1.7 billion has been paid out to around 430,000 group members.
Prioritise responsible AI
Although AI-specific laws and regulations have not yet been proposed in all jurisdictions, an increasing number of lawsuits, harms, and scandals highlight the importance of responsible AI to avoid harm, minimise liability, and prevent reputational damage. Schedule a demo to find out how Holistic AI’s approach to AI Governance, Risk, and Compliance can help you embrace AI with confidence.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts