
Automated decision-making: The importance of human oversight and transparency

04 October 2022

6 min read

#Data & Privacy

Published by:

Clare Giugni

The use of automated systems to make decisions (often using artificial intelligence) is more common than ever before as businesses and government agencies look to automate and digitise various processes.

Problems primarily arise where an entity's automated systems make 'decisions' on their own, without a human supervising and verifying the outcomes (think: Robodebt).

In this article, we take a look at a recent decision of the Federal Court of Australia regarding Origin Energy (Origin) and the Australian Energy Regulator (AER) which shows that not every organisation has learned a lesson from Robodebt on the importance of ensuring sufficient oversight of automated decision-making.

We also discuss the issues that arise when entities fail to provide consumers with adequate transparency around automated decisions that affect their access to services. This will often happen where the decision-making has been outsourced to a third party provider's proprietary automated systems. Guidance from the NSW Information Commissioner offers a helpful reminder to government agencies that, notwithstanding outsourcing and automation of decision-making, they must consider their freedom of information obligations.

Oversight of automated decision-making: The importance of having a human ‘in the loop’

The Federal Court recently examined the automated systems Origin uses to make decisions about payment arrangements for customers experiencing financial hardship. The National Energy Retail Rules (Rules) require energy retailers to have specific policies in place to assist customers experiencing financial hardship.

Origin was found to have implemented automated systems that operated without sufficient human supervision and unilaterally varied payment plans established under the hardship rules "without having regard to the relevant customer's capacity to pay". This failure to take individual circumstances into account is reminiscent of Robodebt.

Origin's automated systems undertook three key groupings of decision-making activity in breach of the hardship provisions of the Rules:

  1. in the first grouping, which related to just over 4,000 customers, the automated system sent an SMS advising customers of a proposed increase to their current payment plan and asking them to contact Origin if the new amount was not affordable. If Origin did not hear from the customer within three days of the text being sent, the existing plan was automatically cancelled and a new plan with higher instalment amounts was established
  2. in the second grouping, the automated system automatically cancelled payment plans for customers who failed to pay an instalment within four business days of the due date for that instalment. Even where it was the customer's first missed payment, the plan was cancelled and a new plan unilaterally established, again without regard to the relevant customer's capacity to pay
  3. in the third grouping, the automated system attempted an outbound call to the relevant customer in order to implement proposed increases to a payment plan. If that call was unsuccessful, the system sent an email or letter asking the customer to contact Origin within seven business days and warning that, if they did not, they might be removed from the program. If the customer did not contact Origin within 10 days of the letter or email being sent, their account was automatically cancelled on the basis of non-engagement.
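To make the gap concrete, the decision rules described above can be contrasted with a human-in-the-loop design. The following Python sketch is purely illustrative: all names, fields and thresholds are hypothetical, and it does not represent Origin's actual system. It shows a decision rule that refers any non-response, and any proposed instalment above the customer's assessed capacity to pay, to a human reviewer rather than varying the plan unilaterally:

```python
from dataclasses import dataclass

@dataclass
class HardshipCustomer:
    """Hypothetical record for a customer on a hardship payment plan."""
    name: str
    capacity_to_pay: float      # assessed affordable instalment amount
    responded_to_contact: bool  # did the customer reply to the SMS/call/letter?

def propose_plan_change(customer: HardshipCustomer, new_instalment: float) -> str:
    """Decide whether a proposed instalment increase may proceed.

    Unlike a fully automated system that cancels or varies a plan once a
    fixed no-response window expires, this sketch escalates to a human
    reviewer instead of acting unilaterally.
    """
    if not customer.responded_to_contact:
        # Silence is never treated as consent to a plan variation.
        return "refer_to_human"
    if new_instalment > customer.capacity_to_pay:
        # Capacity to pay must be considered before any increase.
        return "refer_to_human"
    return "apply_change"
```

The design choice is simply that the automated path can only approve changes already consistent with an affordability assessment; every other outcome requires a human decision.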

The Federal Court found that Origin had breached its obligations under the relevant legislation and ordered a range of remedies, including staff training, a review of Origin's internal operations, and the establishment and maintenance of a compliance and training program for three years, to be reviewed by an independently appointed reviewer reporting to the AER. The Court also imposed penalties totalling $17 million, the largest penalty ever ordered for breaches of the Rules.

Using technology to automate business systems can reduce costs and streamline processes. However, organisations must guard against breaches of their statutory obligations of the kind that occurred for Origin by maintaining regulatory compliance controls and keeping a human in the loop. Numerous guidelines have now been published on implementing artificial intelligence and automated systems in business processes. We can assist you with corporate systems and processes to meet your regulatory obligations and to avoid a similar outcome.

Transparency about automated decision-making: A reminder for government agencies

Guidance issued by the NSW Information Commissioner reminds government agencies of the need to ensure they can meet their freedom of information obligations under the Government Information (Public Access) Act 2009 (NSW) (GIPA Act), even where a process is digitised and outsourced to a third party contractor.

The Information Commissioner recognises that individuals are increasingly subject to automated decision-making and that “to fully exercise their rights, it is important that individuals are able to access information on how a decision is made and what information was used to reach that decision.”

As agencies digitise more and more of their processes, a greater volume of information is held by third party technology or software providers. To ensure such information remains publicly accessible, it is critical that agencies ensure all government contracts provide the relevant government customer with an immediate right of access to information held by the contractor, as required by section 121 of the GIPA Act.

The outsourcing and automation of government processes can, however, put into conflict an agency’s freedom of information obligations and the commercial interests of the third party contractor. A NSW Civil and Administrative Tribunal (NCAT) decision published earlier this year highlights this conflict.

That case involved an application for access to information relating to the calculation of social housing rental subsidies by the Department of Communities and Justice (Department). The Department refused the application on the basis that the information was held by a third party software provider, Northgate Public Services (Northgate), which used a proprietary algorithm to determine the subsidy.

The NCAT ultimately upheld the refusal to grant access to this information on the basis that it would put the contractor at a “substantial commercial disadvantage in relation to the agency”. Section 121(2) of the GIPA Act states that the right to access information from a contractor does not extend to information that, if disclosed, would have this effect. The NCAT determined Northgate would be at risk of such a disadvantage because if the Department had access to Northgate’s proprietary algorithm, it may no longer require Northgate’s services.

While this case highlights a tension between freedom of information obligations and contractors' intellectual property, it does not preclude government agencies from securing the contractual right to immediate access to a contractor's records contemplated in section 121. Whether particular information sought would put the contractor at a "substantial commercial disadvantage" requires consideration on a case-by-case basis.

Future regulation of automated decision-making

The Australian Government is currently undertaking a review of the Privacy Act 1988 (Cth) (Privacy Act). As part of that review, the Government is considering whether private and public sector entities should have an obligation to be more transparent about their use of automated systems to make decisions that have legal or similarly significant effects for a consumer.

The Government is also contemplating the prospect of designating the use of personal information for automated decision-making with legal or significant effects as a “restricted practice” under the Privacy Act. Under current proposals, an entity engaging in a “restricted practice” would be required to take additional steps to identify and mitigate privacy risks.

Given the likely regulatory reform in this area, private and public sector entities would be well-advised to ensure they have robust controls in place to supervise automated systems, provide consumers with adequate transparency around automated decision-making and monitor associated privacy risks.

If you have any questions, please contact us below or send us your enquiry here.

Authors: Lyn Nicholson & Clare Giugni

The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.