Australia’s national policy for the ethical use of AI takes shape

In line with global trends, the Australian Government released a report in June setting out a national framework for the assurance of AI in government, to be adopted by federal, state and territory governments across the country.

The framework provides the basis for a nationally consistent approach to AI assurance: the process of testing AI systems to ensure they are trustworthy and adhere to ethical principles.

This move is part of a series of measures the country is taking to ensure the safe, responsible and ethical use of AI in government, balancing AI’s transformative potential against its possible harms.

Digital NSW takes the lead

New South Wales was the first Australian state to develop a government-wide AI ethics policy, in 2020, and in 2022 was among the first jurisdictions in the world to develop a framework for the application of AI in government.

Digital NSW recently announced that this framework will be integrated into the state’s broader Digital Assurance Framework, ensuring it applies to all relevant agency projects with a budget of over A$5 million (S$4.36 million). The updated framework guides agencies through compliance with the mandate to use the AI Ethics Policy and AI Assessment Framework.

In July, the NSW Legislative Council released a report on the use of AI in the state. The report included ten recommendations for the state government, including the establishment of a NSW Chief AI Officer to work towards responsible use of AI by government departments and agencies.

The report also recommends the establishment of a Joint Standing Committee on Technology and Innovation to ensure ongoing oversight of AI and other emerging technologies, and an analysis of regulatory gaps to identify where new laws may be needed.

The state government’s response to the report is expected at the end of October.

Risk-based approach

The Australian Government’s national framework states that governments should take a risk-based approach when assessing the use of AI on a case-by-case basis, and recommends oversight mechanisms such as external or internal review bodies in high-risk situations.

Procurement decisions should be guided by factors such as AI ethics, data transparency and evidence of performance testing. For content created using generative AI (GenAI), the framework specifies that decisions about its use should be subject to human oversight and accountability.

The framework also addresses the implementation of Australia’s AI Ethics Principles in government. Among other things, these principles state that AI systems should benefit individuals, society and the environment, uphold individuals’ privacy rights and not result in unfair discrimination, and that people should be able to challenge the use or outcomes of AI systems.

Australia’s AI governance frameworks were put to the test earlier this year when the Department of Industry, Science and Resources partnered with Singapore’s Infocomm Media Development Authority (IMDA) to test both countries’ AI ethics policies.

The collaboration, which took place under the Digital Economy Agreement between the two countries, examined the extent to which the countries’ respective governance frameworks are applicable to the National Australia Bank’s machine learning-based Financial Difficulty Campaign.

The key outcome of the exercise was that the two countries’ AI governance frameworks are aligned and compatible: it found no particular barriers that would prevent an Australian company from complying with both the Australian AI Ethics Principles and Singapore’s Model AI Governance Framework.

Guidelines for civil servants

In July last year, the Federal Digital Transformation Agency (DTA) published interim guidelines on the use of public generative AI tools by the Australian Public Service (APS). The guidelines specify that APS staff must adhere to the AI Ethics Principles when using these tools.

In September 2023, the Federal Government convened a Taskforce on AI in Government to make recommendations on the use of AI by the APS. Co-led by the DTA and the Department of Industry, Science and Resources, the Taskforce amended the interim guidance to introduce two “golden rules” that APS staff should follow.

These rules specify that APS staff must be able to explain, justify, and take responsibility for advice and decisions related to the use of the technology. APS staff should also assume that any information entered into public GenAI tools could become public and avoid entering confidential or sensitive information.

Meanwhile, the Commonwealth Ombudsman has published best practice guidelines on the use of AI tools for automated decision-making. The guidelines include principles for assessing the suitability of automated systems, ensuring compliance with administrative law requirements, designing automated systems to comply with data protection requirements, establishing governance for automated systems projects, developing quality assurance processes to maintain the accuracy of decisions, and ensuring the transparency and accountability of AI systems.

The responsible use of automated decision-making is a sensitive issue in Australia due to the Robodebt scandal. The previous government was forced to apologise in Parliament in 2020 after an automated system was used to unlawfully issue hundreds of thousands of incorrectly calculated debt notices to welfare recipients over the preceding four years.

Safe and responsible AI

In January, the Australian Federal Government released its preliminary response to last year’s consultation on the safe and responsible use of AI in society. The consultation concluded that Australia’s current regulatory framework does not adequately address the risks of AI, and that existing laws do not prevent AI-related harms before they occur.

The government has stated that it will take a risk-based approach to promote the safe use of artificial intelligence in society. It will limit any mandatory regulatory requirements to high-risk AI applications, while seeking to allow the unhindered development of low-risk applications.

The government has committed to conducting consultations before introducing mandatory guardrails for organisations developing and deploying AI systems in high-risk environments.

The government will also ask the National AI Centre to develop an AI safety standard for industry to ensure that deployed AI systems are safe and secure. Another approach being explored is the voluntary labelling and watermarking of AI-generated material in high-risk settings.

(The author is a freelance journalist based in Australia)
