
Use of generative AI in government – key considerations for agencies

30 November 2025

4 min read

#Data & Privacy, #Government

As government agencies increasingly use artificial intelligence (AI) tools to support service delivery, policy development and internal operations, it is important to remain vigilant about the potential risks that may arise from AI-generated outputs.

Automation does not equal accuracy. AI tools can produce false, exaggerated or biased information, especially when prompted with misleading or poorly structured instructions. A single inaccurate or inappropriate AI-generated response can undermine public trust, damage reputation and, in some cases, expose the agency to legal and ethical risks.

Potential liability when using AI in government

Unlike private businesses, government agencies operate under heightened obligations of transparency and accountability and are ultimately responsible for the information they publish or rely upon, even if it originates with help from an AI tool.

Key risks for those in government include:

  • breach of secrecy provisions and government protocols
  • infringement of third-party copyright, for example if the AI tool created an image or other content that reproduced copyrighted material from its training data, the original copyright holder could pursue an infringement claim
  • provision of false or misleading information to customers
  • provision of outputs that are defamatory (particularly if outputs are not properly checked)
  • bias and discrimination, for example if training data or algorithms have pre-existing biases
  • lack of transparency and accountability, which can make it difficult to explain, audit or justify decisions
  • automation of decision-making, potentially leading to administrative law challenges

In practice, much will depend on the tool being used and the contractual terms governing its use. Most providers of publicly available generative AI platforms, such as ChatGPT, exclude any liability for the outputs their tools generate. This means that if something goes wrong, an agency is unlikely to have meaningful recourse against the supplier. If the platform is engaged directly by the agency, or through or as part of a whole-of-government procurement, there is more scope to ensure the procurement meets the agency's needs and builds in protections like any other ICT procurement.

Key considerations when using AI tools

Agencies currently using or planning to introduce AI tools should consider the following:

  • supplier familiarity – does the supplier have experience working with government or public sector agencies? If they are based overseas, is support available during your business hours and will this comply with your data sovereignty obligations or preferences?
  • functionality – do staff understand how the tool operates? Has training been arranged for those without the required knowledge?
  • data source – what datasets has the tool been trained on? Do they include copyrighted, biased or unreliable information?
  • data handling and control – how will the supplier use information entered into the tool, particularly protected information or classified government information? Does this align with obligations under applicable privacy and secrecy laws, including the Privacy Act 1988 (Cth), the Australian Privacy Principles and equivalent laws at the state and territory level?
  • scope of decision-making – what decisions will the tool assist with? Is there human oversight for highly sensitive or important decisions?
  • traceability and transparency – can the agency trace and justify the tool’s reasoning from input to output? Have outputs been tested for accuracy?
  • accountability – who within the agency is ultimately responsible for AI-generated outputs? Are policies and record-keeping practices in place to support accountability if issues arise?

The future of AI in government service delivery

AI will continue to play a key role in improving government service delivery, policy outcomes and overall productivity. The federal government recently released its AI Plan for the Australian Public Service, which will give government employees access to generative AI tools, along with training and guidance on handling government information when using existing platforms such as ChatGPT, Claude and Gemini. The plan also includes appointing a Chief AI Officer in every federal agency in 2026.

We expect this plan to complement existing guidance at the state government level, including the Victorian Government's Guidance for the safe and responsible use of generative AI in the Victorian public sector; the NSW Government's three pillars approach (an overall strategy supported by an ethics policy and an assessment framework); and the Queensland Government's Use of generative AI in Queensland Government guideline.

These frameworks emphasise the need for agencies to conduct thorough assessments to ensure transparency and accountability. Before deploying any AI tool, agencies should review its functionality, training data and governance controls, and put clear policies and human oversight in place to manage any legal risks.

If you have any questions about the use of AI or its associated legal risks, please contact us.

Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.
