This is the sixth in a special series of posts on the 50th anniversary of the Kerr Report, examining whether Australian administrative law is still fit for purpose. To see other posts in this series, click here.


The Kerr Committee’s vision for a new administrative justice system led to the ground-breaking introduction of the ‘new administrative law’ package in the 1970s, incorporating the establishment of a generalist administrative tribunal, statutory judicial review, the office of the Commonwealth Ombudsman, and later, in the 1980s, freedom of information (FOI) and privacy legislation.

These reforms aimed to increase the opportunities and avenues for citizens to challenge governmental action, thus enhancing the accountability of the executive, as well as to promote good decision-making within government.

Professor Dennis Pearce hailed the new system as the ‘vision splendid of the means by which an affected citizen [would] be able to test Commonwealth government decisions’.

Yet 50 years after the watershed Kerr Committee report, the operation of government has been fundamentally transformed by the technological revolution, which has increased the prevalence of automation in government decision-making. These developments could not have been envisaged by either the Committee or the federal Parliament when these institutions were established.

It is therefore time to evaluate the continuing utility and relevance of the institutions established on the basis of the Kerr Committee report, in light of the modern technological developments in government. In this context, this post will consider the adequacy of merits and judicial review, FOI and privacy legislation, and the operation of the Commonwealth Ombudsman.

Automated Decision-Making in Australia

As early as 2003, the Administrative Review Council listed a large range of major federal government agencies, including Comcare, the Department of Defence, the Department of Veterans’ Affairs, and the Australian Taxation Office, that were making use of automated systems in governmental decision-making. Since then, there have been major advances in technology, including big data analytics, artificial intelligence and machine learning, which provide new opportunities for government authorities to develop automated decision-making tools.

The Australian public sector is already using technology-assisted decision-making in a wide range of contexts, with Centrelink’s automated debt raising and recovery system and the Australian Border Force’s SmartGate identity-checking at Australian airports being some of the best-known examples.

The use of automation in administrative processes can improve efficiency, certainty, predictability and consistency. As Justice Melissa Perry and Alexander Smith argue, automated systems have the capacity ‘to process large amounts of data more quickly, more reliably and less expensively than their human counterparts’ and have a useful role to play ‘when high frequency decisions need to be made by government’.

However, the use of emerging technologies is rarely unproblematic. The utilisation of automated decision-making raises a range of rule of law issues, in particular regarding procedural fairness, the transparency of decision-making, the protection of personal privacy and the right to equality.

The disastrous rollout of Centrelink’s online compliance initiative (dubbed ‘Robodebt’) is illustrative of the issues in automating government decision-making. Errors in the debt-calculation methodology resulted in incorrect or inflated debt calculations for over 450,000 individuals.

These large-scale incorrect calculations have reduced public trust in computer-supported government decision-making and led to grave repercussions for vulnerable low-socioeconomic debtors, including individuals experiencing severe mental health issues, with reports of suicide amongst those affected.

The Federal Court consent orders in Amato v Commonwealth (VID611/2019) declared that the automated decisions in Robodebt were irrational and thus unlawful.

Judicial Review and Merits Review of Automated Decisions

Statutory judicial review is governed by the Administrative Decisions (Judicial Review) Act 1977 (Cth) (ADJR Act), while merits review is governed by the Administrative Appeals Tribunal Act 1975 (Cth) (AAT Act). A further avenue, which pre-dated the Committee’s recommendations, is judicial review under s 75(v) of the Constitution or s 39B of the Judiciary Act 1903 (Cth).

In order to challenge a government decision under the ADJR Act, an applicant must establish three elements to enliven the jurisdiction of the relevant court: that there is a ‘decision’, ‘of an administrative character’, ‘made under an enactment’.

The majority decision of the Full Federal Court in Pintarich v Deputy Commissioner of Taxation (2018) 262 FCR 41 throws doubt on whether automated decisions are reviewable under the ADJR Act. Justices Moshinsky and Derrington held that a ‘decision’ under the ADJR Act must involve a mental process of deliberation (at [140]).

It is notable that Kerr J provided a significant dissent that recognised the difficulties in imposing a requirement that human mental processes need to be engaged for an act to be a ‘decision’ under the ADJR Act, particularly in the context of automated decision-making systems:

The hitherto expectation that a ‘decision’ will usually involve human mental processes of reaching a conclusion prior to an outcome being expressed by an overt act is being challenged by automated ‘intelligent’ decision-making systems that rely on algorithms to process applications and make decisions.

What was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional. Automated systems are already routinely relied upon by a number of Australian government departments for bulk decision-making (at [46]-[47]).

If a human makes a decision guided or assisted by automated systems, this would still be a decision under the ADJR Act on the majority’s interpretation, as it would still involve a mental process of deliberation and cogitation by a human decision-maker.

Where the decision is actually made by an automated system without any human involvement, there is unlikely to be a decision under the ADJR Act, as the majority’s test presumes that a human brain is involved in a mental process. This may lead to a perverse incentive for departments and agencies to automate so as to avoid judicial review.

The question of whether an automated decision is subject to tribunal review at the AAT has not yet been determined. Janina Boughey argues (chapter 8) that a tribunal might not be satisfied that the decision reached by an automated system is the ‘correct or preferable’ one, if the tribunal member is unable to comprehend it.

There are two options for reforming judicial and merits review to enable review of automated decisions:

  • The AAT Act and ADJR Act could be amended to make it clear that automated decisions fall within the scope of those Acts; or
  • The enabling legislation could make it clear that recourse to the courts is available.

The first option of reforming the ADJR Act is preferable, as it is a one-step solution and does not require each piece of enabling legislation to be amended. This reform should amend the definition of a ‘decision’ in the ADJR Act to clarify that it includes a decision wholly or partly made by an automated system. Failure to make these reforms may risk sidelining statutory judicial review even further, particularly in the context of the rise of privative clauses, as discussed by Tom Liu in his post in this series.

Another avenue of challenge is via s 75(v) of the Constitution, which gives the High Court original jurisdiction in all matters where constitutional writs are sought against an officer of the Commonwealth. There is no issue where a computer merely assists a human and the actual decision is made by a human who is an officer of the Commonwealth, such as a public servant.

However, it is more difficult to argue that a fully automated decision falls within the scope of s 75(v), as courts have read in a requirement of a formal appointment of a natural person, and a prohibition against artificial persons.

On the other hand, it may be argued that s 75(v) review remains available, although perhaps not in respect of the decision itself: the focus of s 75(v) is on the decision-maker, rather than the method of decision-making. However, there is no definitive case law on this issue to date.

Freedom of Information Laws and AI Data

The Freedom of Information Act 1982 (Cth) (FOI Act) provides a right of access to documents in the possession of public sector bodies (s 11) and also requires agencies to proactively publish their ‘operational material’ (ss 8 and 8A), that is, the material that assists them to perform or exercise their functions or powers in making decisions or recommendations that affect members of the public.

The FOI Act provides a potential avenue of obtaining crucial information about the software used to automate decisions, the circumstances in which it was created or purchased, the materials that were used to train it and any tests run to gauge its accuracy.

The FOI Act applies to ‘documents’ rather than ‘information’. ‘Document’ is broadly defined and includes ‘any article on which information has been stored or recorded, either mechanically or electronically’ (s 4), which seems to include computer programs.

However, the Act’s focus on ‘documents’ has the consequence that applicants must specify the documents to which they are seeking access. Changing the coverage of the FOI Act to apply to ‘information’, consistently with legislation in the United Kingdom and New Zealand, is desirable because it is an inherently broader term, and therefore better able to deal with evolving technology. It would also make it easier for applicants who may be aware that information exists but are unable to identify the specific documents where the information is located.

Another key limitation of current FOI oversight is that AI systems are often protected by trade secrets. As Darren O’Donovan noted (chapter 3), this means that much of the data will fall within the trade secrets or commercial information exemption in the FOI Act (s 47), apart from the limited context where it relates to software developed in-house by an agency purely for non-commercial purposes. This creates an incentive for governments to outsource the creation of information that ought to be available for public scrutiny.

In Cordover and Australian Electoral Commission [2015] AATA 956, the AAT concluded that the source code developed by the Australian Electoral Commission (AEC) constituted a trade secret, based on evidence that the AEC had taken precautions to limit its dissemination and that it had commercial value and was used in trade. It also concluded that the evidence established a potential for the diminution in commercial value of software to the AEC as its owner, if its disclosure was required. Given that the AAT has found that a system developed by a government agency is a trade secret, it is highly likely that systems developed by the private sector are similarly exempt from the FOI Act.

The inaccessibility of information about an important aspect of government which impacts directly on individuals is a cause for concern. This raises the issue of whether it is appropriate to purchase automated systems in circumstances where their key features are subject to trade secrets claims and, if so, what compensating safeguards are required.

There are three key amendments to the FOI Act which would enhance its ability to provide transparency in relation to automated decision-making:

  • Extending its scope so it applies to ‘information’ rather than ‘documents’.
  • Providing an exception to the trade secrets/commercial information exemption for information that is necessary to shed light on algorithms used to make decisions that affect individuals.
  • Including in its proactive disclosure requirements a general description of automated decision-making technology that an agency uses to make decisions about persons.

Public Sector Privacy Law and AI Data

Australian public sector agencies processing personal information are subject to the Privacy Act 1988 (Cth) (Privacy Act). Australian law currently contains no specific requirements regarding automated decision-making, which means that the laws generally applicable to government handling of personal information also apply to automated decision-making.

There are issues with the definition of ‘personal information’ in the Privacy Act. In Privacy Commissioner v Telstra Corporation Ltd (2017) 249 FCR 24, the Full Federal Court adopted a narrow approach to the question of when information is about an individual. This has the potential to exclude certain metadata, such as mobile phone location data and IP address information, that, while generated by or in relation to an individual, is of a largely technical nature. This interpretation appears to underestimate that such information can nonetheless reveal personal characteristics or attributes, especially when combined with other information to create a profile, and that significant effects can follow from decisions made on the basis of such a profile.

It is welcome that, in its response to recommendations arising from the ACCC Digital Platforms Inquiry, the Commonwealth Government has committed to reviewing the definition of ‘personal information’ in the Privacy Act, with a view to capturing technical data and other online identifiers.

A further issue to be considered is that automated decision-making may often involve data sharing between agencies. This was the case in the Robodebt scenario, where the Department of Social Services used annual income data obtained from the Australian Taxation Office to calculate average fortnightly incomes.
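The methodological flaw in that averaging step can be illustrated with a simple sketch. All figures below are hypothetical, invented purely for demonstration: averaging an annual income evenly across 26 fortnights assumes steady earnings throughout the year, and so overstates income in the fortnights a person was actually on benefits and earning nothing.

```python
# Hypothetical illustration of the income-averaging flaw underlying Robodebt.
# All figures are invented for demonstration purposes only.

ANNUAL_INCOME = 26_000   # total income reported to the ATO for the year
FORTNIGHTS = 26          # fortnights in a year
THRESHOLD = 450          # hypothetical fortnightly income-free threshold

# Actual earnings: all income was earned in the first half of the year,
# and nothing was earned while the person received benefits thereafter.
actual_fortnightly = [2_000] * 13 + [0] * 13

# The averaging approach spreads the annual income evenly across the year.
averaged_fortnightly = [ANNUAL_INCOME / FORTNIGHTS] * FORTNIGHTS

def fortnights_over_threshold(incomes, threshold=THRESHOLD):
    """Count fortnights in which reported income exceeds the threshold."""
    return sum(1 for income in incomes if income > threshold)

# Looking only at the second half of the year (the benefit period):
# actual data shows no income, so no fortnight suggests an overpayment...
print(fortnights_over_threshold(actual_fortnightly[13:]))    # 0

# ...but the averaged data makes every benefit fortnight appear to exceed
# the threshold, wrongly suggesting an overpayment in each of them.
print(fortnights_over_threshold(averaged_fortnightly[13:]))  # 13
```

The sketch shows why income averaging could generate debts against people who had no income at all during the period in which they received benefits.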

The Commonwealth Government is currently reviewing its data governance framework, including considering the possible enactment of a federal Data Availability and Transparency Act, which will deal with data sharing issues.

Despite these developments, there is also a need for more specific protections addressing automated decision-making, such as those contained in newer international data protection frameworks, including the European Union’s General Data Protection Regulation (GDPR).

Under article 22 of the GDPR, a data subject (ie an identified or identifiable natural person) has the right not to be subject to solely automated decision-making, including profiling, which produces legal or similarly significant effects for the data subject.

Exceptions to this right apply, for example, where the automated decision-making is authorised by EU or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, or where it is based on the data subject’s explicit consent.

The Ombudsman and Scrutiny of Automated Decisions

The Commonwealth Ombudsman has been vigilant in investigating the use of automated systems. It played an instrumental role in investigating the government’s deficiencies in administering the Robodebt system, publishing reports on the issue in 2017 and 2019.

The Ombudsman’s 2017 report identified issues with the ‘fairness, transparency and usability of the online system’, and found that ‘many of these issues could have been avoided by better project management, design, user testing and support for users of the online system’.

The Ombudsman’s follow-up investigation in 2019 found that the Departments of Social Services and Human Services had made significant progress in implementing the Ombudsman’s recommendations.

These investigations illustrate the Ombudsman’s strong monitoring role in investigating the Robodebt issue and scrutinising the Departments’ implementation of its recommendations, which has resulted in changes in departmental practices in implementing AI systems.

It should be noted that, for the Ombudsman to continue to scrutinise departments and agencies effectively, it needs technical expertise and proper funding. It is unfortunate that some federal oversight bodies in Australia have been attacked, through defunding, legislative amendments reducing their powers and even personal attacks, by governments with parliamentary majorities that were not persuaded of their merits or enamoured of their activities.

Further, there should ideally be a systemic examination of the activities of departments and agencies before large-scale programs such as Robodebt are rolled out. This could potentially be carried out by oversight bodies such as the Ombudsman, or through reinstatement of the abolished Administrative Review Council, tasked with supervisory authority over the administrative law system.


The Kerr Committee was prescient in recommending the introduction of a sophisticated system of checks and balances on executive power, which has reconfigured the administrative state.

However, these institutions, set up in the 1970s, were premised upon certain basal assumptions; in particular, that humans would be the decision-makers rather than automated systems, and that documentation would be paper-based rather than electronic.

Legislative reforms are necessary in the areas of judicial review, merits review, FOI and public sector privacy laws to ensure that these institutions are ‘fit for purpose’ to scrutinise automated decision-making in government.

These reforms will enhance the effectiveness and accountability of automated government decision-making and, in doing so, enhance and modernise the operation of Australia’s system of administrative justice.

Dr Yee-Fui Ng is an Associate Professor in the Faculty of Law at Monash University, and the Deputy Director of the Australian Centre for Justice Innovation.

Suggested citation: Yee-Fui Ng, ‘The Rise of Automated Decision-Making in the Administrative State: Are Kerr’s Institutions still “Fit for Purpose”?’ on AUSPUBLAW (20 August 2021) <>