Data and Judicial Impartiality

Daniel Ghezelbash, Keyvan Dorostkar, Saul Wodak and Robert Ross

This is a post in a special series that AUSPUBLAW is featuring on the Australian Law Reform Commission’s report on Judicial Impartiality. This special series will be hosted across two weeks on AUSPUBLAW, and the full series can be accessed here.

19.08.2022

In the age of ‘big data’, governments and corporations are using data analytics to evaluate and improve programs and services across a wide range of areas. The courts have been relatively immune to this trend — but the tide may finally be turning.

The recent Australian Law Reform Commission (ALRC) Report on judicial impartiality includes ground-breaking findings and recommendations on how data can be used to promote judicial impartiality and public confidence in the legal system. The Report’s 14 recommendations draw on research from the fields of law and social science to formulate strategies to address these issues at an institutional level. A number of these recommendations focus on the role data can play in promoting transparency and judicial impartiality. This includes collecting data on court users’ subjective perceptions of procedural justice (recommendation 12) and collecting and reporting statistics regarding the diversity of the federal judiciary (recommendation 8).

While we commend all the recommendations regarding the collection of additional data, our focus in this blog post is on recommendation 13, which calls on Commonwealth courts to develop a policy on the creation, development, and use of statistical analysis of judicial decision-making. We begin with an overview of the risks and benefits of such statistical analysis. We welcome the ALRC’s recognition that collecting such statistics can play a role in counteracting cognitive and social biases in decision-making and in boosting transparency and public confidence in the judicial system, and we discuss how statistics could be used to achieve these ends. We conclude with a brief discussion of the role that statistical analysis of decision-making can play in claims of apprehended bias.

Risks and Benefits

The Report recognises the sensitivities regarding statistical analysis of judicial decision-making. In particular, it notes concerns from within the legal profession that statistics can never capture the nuances and merits of each case, and that statistical data is prone to misinterpretation that may undermine public trust in the legal system.

The Report illustrates some of these risks by reference to a project in France examining judicial decision-making in refugee cases, which prompted a significant backlash. The published statistics named individual judges, who raised concerns that this put pressure on them to move towards the ‘average’ outcome, interfering with judicial independence and undermining public confidence in the courts. The controversy surrounding the project contributed to the French government enacting a criminal prohibition in 2019 banning the publication of statistical analysis of the decisions of individual judges. Offenders face up to five years in prison for violating the law.

Two of the present authors (Ghezelbash and Dorostkar) were involved in launching a similar project in Australia in recent weeks. The Kaldor Centre Data Lab collated and published data on judicial decisions from more than 6,700 refugee cases in the Federal Circuit and Family Court of Australia. Analyses of the data revealed a high level of variability in outcomes depending on which judge hears a case, with a refugee applicant’s average chance of success ranging from 0.6% to 23.0% depending on the judge.
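To give a concrete sense of what this kind of analysis involves, the sketch below shows how per-judge success rates might be computed from case-level data. It is a minimal illustration in Python, not the Data Lab’s actual pipeline; the file name and column names (judge, outcome) are hypothetical.

```python
import pandas as pd

# Hypothetical input: one row per judicial review case, recording the
# deciding judge and a binary outcome (1 = decided in the applicant's favour).
cases = pd.read_csv("refugee_cases.csv")  # columns: judge, outcome

# Success rate and case count for each judge.
by_judge = cases.groupby("judge")["outcome"].agg(
    success_rate="mean", n_cases="count"
)
by_judge["success_rate"] = (by_judge["success_rate"] * 100).round(1)

print(by_judge.sort_values("success_rate"))
```

Even this simple tabulation makes variability across judges visible, although, as discussed below, interpreting that variability requires care.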

In response to the release of the study, the Federal Circuit and Family Court issued a statement arguing that raw statistical data ‘should be accompanied by a relevant analysis of the context in which those decisions were made and a thorough review of the judge’s reasons for judgment.’ The spokesperson also stated that ‘without such analysis, one cannot make an informed assessment of the significance of any raw statistics’.

We agree that statistics on judicial decision-making should always be interpreted with caution and nuance. Judicial decision-making is a complex undertaking influenced by many factors. However, we argue that while statistics clearly do not capture the nuance of every case, they undoubtedly tell us something meaningful about how courts are operating, a fact the ALRC clearly recognised in its Report.

While noting the potential risks associated with statistical analysis, the ALRC relied on several grounds to justify its recommendation that courts should proactively engage in data collection on judicial decision-making.

The first justification was that this form of statistical analysis of decision-making would continue to happen with or without the input of courts. Given that such analysis is already taking place, and will likely increase over time, the ALRC’s view was that Commonwealth courts should actively engage in this issue in order to mitigate risks this may pose for public confidence in the judiciary. As researchers who have been involved in collecting and analysing this form of statistical data, we have been strong advocates of such an approach. In a recent piece we argued:

Courts could transparently compile and distribute this form of statistical data themselves. This would allow them to provide relevant explanation and context, and reduce any potential for the misuse of this data.

The other key justification provided by the ALRC was the potential for statistical data to assist with identifying and, ultimately, addressing institutional-level and individual-level biases that may impact on judicial decisions.

Counteracting Cognitive and Social Biases        

The Report sets out research from economists, legal academics, psychologists, and political scientists that shows that, like all other forms of decision-making, judicial decision-making involves unconscious cognitive processes that may be influenced by various biases.

Several decades of research have produced substantial progress in understanding how humans process information and make judgements and decisions. Drawing on this research, Nobel Prize-winning psychologist Professor Daniel Kahneman identifies two systems in the mind:

  • System 1 operates automatically and quickly, with little or no effort or sense of voluntary control.

  • System 2 allocates attention to effortful mental activities that demand it. The operations of System 2 are often associated with the subjective experience of agency, choice and concentration.

An overreliance on System 1 thinking can lead to unwanted influences on judicial decision-making. The Report draws on empirical research demonstrating that judges engaging in intuitive System 1 thinking may be susceptible to a range of influences that can lead to bias. These include mental shortcuts and heuristics — common ‘rules of thumb’ for solving problems and processing information. Such shortcuts are essential in guiding human behaviour and decision-making, helping to reduce cognitive overload in our information-rich lives. However, System 1 can also be influenced by unconscious biases. For example, System 1 thinking can open the door to stereotyping that leads to inequitable outcomes for different groups within society.

Making judges aware of the existence of such biases is an important starting point, and is likely to make them better equipped to counteract them. But this will likely not be enough.

As the Report recognises, cognitive and social biases are notoriously difficult to counteract. Indeed, research suggests that interventions that involve informing people of the existence of unconscious biases before asking them to complete a task are largely ineffective. The same goes for related interventions, including implicit bias training.

Statistics as a feedback tool

In our submission to the inquiry, we set out evidence showing that one of the most effective interventions for counteracting cognitive and social biases is the use of statistics as a feedback tool for judges, a process known as ‘post-decision auditing’.

It is very difficult to spot the influence of implicit cognitive and social biases in a single case. However, if similar judicial decisions are logged across time and multiple decision-makers, then data can reveal patterns in decision-making outcomes. Providing these statistics and feedback to judges gives them information that they can use to reflect on their decision-making, exposing automatic System 1 thinking to the scrutiny of analytic and deliberative System 2 thinking. Scrutinising statistics may also provide insights at an institutional level and highlight potential institutional biases.
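As a rough sketch of what such post-decision auditing might look like in practice, the example below compares each judge’s outcome rate with the pooled court-wide rate and flags large deviations for review. The figures and threshold are hypothetical and ours alone; a flag is a prompt for reflection and further inquiry, not a finding of bias, and a real audit would also need to account for how cases are allocated.

```python
from scipy.stats import binomtest

# Hypothetical audit data: (judge, favourable decisions, total decisions).
audit = [("Judge A", 2, 310), ("Judge B", 31, 270), ("Judge C", 12, 150)]

# Pooled court-wide rate of favourable outcomes.
pooled = sum(k for _, k, _ in audit) / sum(n for _, _, n in audit)

# Flag judges whose rate deviates from the pooled rate by more than
# chance would readily explain (two-sided exact binomial test).
for judge, k, n in audit:
    p = binomtest(k, n, pooled).pvalue
    flag = "review" if p < 0.01 else "ok"
    print(f"{judge}: {k}/{n} = {k/n:.1%} (court-wide {pooled:.1%}) -> {flag}")
```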

The ALRC’s recommendation that courts should proactively engage in developing policies related to the creation, development and use of statistical analysis of judicial decision-making is a welcome first step towards having courts embrace such an intervention. To be clear, the recommendation is for courts to explore developing such policies – and the ALRC was careful not to dictate that data collection must occur, or that the data should be used in particular ways.

The President of the ALRC, Justice SC Derrington, provided some examples of how data could potentially be used on the ABC’s Law Report:

The data could potentially be useful in relation to bias and unconscious bias… A head of jurisdiction might be alerted to a statistic that shows that a particular judge has never found in favour of a refugee, for example…

But that might only be enough to raise a question and then to look at the nature of cases being allocated to that particular judge, to look at if there have been appeals of decisions of that particular judge, and if so whether those appeals have been successful.

So it might just be enough to ask a question and to ask a question of the judge themselves – have you reflected deeply enough on your own biases before you have made your decision on these cases.

Private or public data-sharing?

In our submission, we summarised evidence supporting the effectiveness of the internal use of such data by courts, particularly when coupled with some form of peer review process that gives judges an opportunity to account for the outcomes of their decision-making to respected members of their profession. However, we also set out a robust body of social science research demonstrating that publishing the data publicly can be even more effective than internal use as a feedback tool alone.

The Report did canvass our proposal and the possibility of publishing statistical data publicly, but did not go as far as endorsing such an approach. In our view, having courts publish this form of statistical analysis would not only enhance the effectiveness of such data in counteracting cognitive and social biases on the bench, but would foster greater transparency and hence promote community trust in the judicial system.

Possible Next Steps

Research examining the effectiveness of using data as a feedback tool to counteract biases has been undertaken in other contexts. For instance, a 2012 Cochrane review of 140 studies evaluating the effectiveness of auditing healthcare practitioners and providing performance feedback on suboptimal practices concluded that behavioural feedback can drive improvement. Suboptimal practices included improper use of treatments or laboratory tests, or mismanagement of patients with chronic disease such as heart disease or diabetes.

By contrast, there has been little research on the efficacy of such interventions in the judicial context. This could be remedied by running carefully targeted studies. These could focus on specific areas of law in specific courts, targeting potential judicial bias against specific groups. As noted in the Report, this would likely work best in areas where comparable judicial decisions are made on a high-volume basis, such as in the Federal Circuit and Family Court.

These studies could focus on evaluating the efficacy of using statistics in the context of private feedback to judges. Even better would be a study designed to compare the relative efficacy of private and public feedback. The findings from these studies could inform future scaling of feedback interventions to other courts.

Case allocation

The usefulness of statistical analysis could be further enhanced through reforms to the way cases are allocated to individual judges. While the Report notes that the docket system used in Commonwealth courts should in theory ‘closely reflect a system of random allocation based on effective resource allocation principles’, heads of jurisdiction retain broad discretion over how cases are allocated. The ALRC explains how reforms that bolster transparency in case allocation and remove discretion from the allocation process can increase public confidence in the impartiality of the judicial system.

We further argue that more transparency in case allocation would also enhance the effectiveness of interventions using statistics to counteract cognitive and social biases. In particular, if specific types of cases were assigned to judges within a specific panel entirely at random (or close to random), then it would be possible to draw much more robust inferences about whether different judges make different decisions when they consider comparable cases.
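To illustrate the analytic payoff of random allocation: if comparable cases are distributed among a panel’s judges at random, differences in outcome rates cannot be attributed to differences in the cases each judge receives, so standard statistical tests become meaningful. A minimal sketch, with hypothetical counts, is a chi-square test of homogeneity across judges.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts under (near-)random allocation within one panel:
# rows are judges, columns are [favourable, unfavourable] outcomes.
table = np.array([
    [ 4, 296],  # Judge A
    [30, 270],  # Judge B
    [18, 282],  # Judge C
])

# Test of homogeneity: do outcome rates differ across judges by more
# than random allocation of comparable cases would explain?
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```

A small p-value would point to genuine differences between judges rather than differences in their dockets.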

Apprehended Bias

The ALRC also examined the intersection of statistics with the substantive law on bias. The Report provides an in-depth overview of attempts to date to rely on statistics of a judge’s ‘track record’ of decision-making to make out a claim of apprehended bias. These involve

arguments that a judge’s record of decisions in particular types of cases or concerning particular types of litigants is so one-sided that the fair minded lay observer might reasonably apprehend that the judge might not bring an impartial mind to the resolution of the dispute (p 384)

These arguments have been rejected by the courts to date, which have shown great scepticism about statistical analysis alone being sufficient to make out a claim of apprehended bias.

That was the approach taken by the Full Federal Court in ALA15 v Minister for Immigration and Border Protection [2016] FCAFC 30. The claim relied on statistical material relating to Judge Street’s decision-making in migration cases over a six-month period. The data demonstrated that Judge Street had only decided 0.79% of cases in favour of the applicants, with that figure dropping to 0% for contested cases. Data drawn from the annual reports of the migration tribunals revealed that over a comparable period, the average success rates for judicial review cases were more than 10%.
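For readers interested in the statistical weight of such figures, a one-sided binomial test offers a rough benchmark. The counts below are hypothetical, chosen only to be consistent with the reported 0.79% rate; the 10% comparison rate follows the tribunal figures mentioned above.

```python
from scipy.stats import binomtest

# Hypothetical counts consistent with a 0.79% success rate:
# 2 favourable decisions out of 254 cases (2/254 ≈ 0.79%).
result = binomtest(k=2, n=254, p=0.10, alternative="less")

# Probability of observing 2 or fewer favourable outcomes if the
# judge's underlying rate matched the ~10% benchmark.
print(f"p = {result.pvalue:.2g}")
```

The resulting p-value is vanishingly small, though, as the Court’s reasoning below shows, raw statistics of this kind have not been treated as sufficient on their own.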

The reasons provided by Allsop CJ, Kenny and Griffiths JJ in dismissing the claim of apprehended bias fell into two broad categories. The first dealt with the quality and nature of the statistics that were relied on by the applicant. We have argued elsewhere that these concerns could likely be overcome with more sophisticated data compiled through computational methods. The second set of arguments relied on to dismiss the application questioned whether any form of statistical data alone could make out a claim of apprehended bias. While the Court set out a number of discrete arguments against the relevance of statistics, they all revolved around the central premise that ‘raw statistics are generally likely to be irrelevant to the knowledge and information which is imputed to the hypothetical observer’.

Allsop CJ, Kenny and Griffiths JJ reasoned that statistics would need to be accompanied by a relevant analysis of the individual’s decisions, so that the ‘statistics were placed in proper context’. Their Honours’ view was that such analysis may conclude that many of the decisions were rightly decided. They further reasoned that ‘even if some or all of the judgments were wrongly decided’, this may not be sufficient, as wrong decisions may be the result of ‘human frailty on the part of the judge’, a ‘consideration which a fair-minded lay observer would take into account.’

The Report canvasses various critiques of this approach, including that it may set the threshold too high and ‘reflect an unrealistic approach to the view a fair-minded lay observer would take in the situation.’ The Report refers to Professor Matthew Groves’ observation that:

Sometimes statistics are so extreme, so one-sided, that their sheer weight alone might say something even in the absence of a detailed analysis of the cases that comprise the statistical set.

The Law Council of Australia similarly noted that:

disregarding a statistical analysis of a judge’s decisions for the purposes of assessing actual or apprehended bias may sit uncomfortably with community expectations.

While not recommending any amendments to the substantive law on apprehended bias, the ALRC rightly flagged the potential use of statistics in claims of apprehended bias as an area where further clarification through case law would be desirable.

With technological advances and the increasing ease and sophistication of data collection and analysis, it is inevitable that we will see more attempts to use statistics in claims of apprehended bias. This means the courts will have ample opportunity to provide further clarity on this issue. It is our hope that when undertaking this task, the courts will more directly draw on the scientific research on bias in determining what insights statistical analysis can provide.

Dr Daniel Ghezelbash is an Associate Professor and Deputy Director of the Kaldor Centre for International Refugee Law at UNSW Law & Justice.

Keyvan Dorostkar is a Master of Research candidate at Macquarie University Law School. His research focuses on applying data and computational methods to understand and improve legal systems and processes.

Saul Wodak is a behavioural science researcher and advisor with the Behavioural Insights Team in their Sydney office. He specialises in qualitative and quantitative research methodologies to investigate human behaviours with social or environmental implications.

Dr Robert Ross is a postdoctoral research associate in the Department of Psychology at Macquarie University in Sydney. His research focuses on the psychology of belief — in particular, religious belief, delusional belief, and belief in misinformation.

Suggested citation: Daniel Ghezelbash et al, ‘Data and Judicial Impartiality’ on AUSPUBLAW (19 August 2022) <https://www.auspublaw.org/blog/2022/08/data-and-judicial-impartiality>
