30 November 2025
4 min read
As government agencies increasingly use artificial intelligence (AI) tools to support service delivery, policy development and internal operations, it is important to remain vigilant about the risks that can arise from AI-generated outputs.
Automation does not equal accuracy. AI tools can produce false, exaggerated or biased information, especially when prompted with misleading or poorly structured instructions. A single inaccurate or inappropriate AI-generated response can undermine public trust, damage an agency’s reputation and, in some cases, expose it to legal and ethical risk.
Unlike private businesses, government agencies operate under heightened obligations of transparency and accountability, and they are ultimately responsible for the information they publish or rely upon, even where it was generated with the help of an AI tool.
Key risks for those in government include:
In practice, much will depend on the tool being used and the contractual terms governing its use. Most providers of publicly available generative AI platforms, such as ChatGPT, exclude any liability for the outputs their tools generate. This means that if something goes wrong, an agency is unlikely to have meaningful recourse against the supplier. If the platform is engaged directly by the agency, or through or as part of a whole-of-government procurement, there is more scope to ensure the procurement meets the agency’s needs and builds in protections, as with any other ICT procurement.
Agencies currently using or planning to introduce AI tools should consider the following:
AI will continue to play a key role in improving government service delivery, policy outcomes and overall productivity. The federal government recently released its AI Plan for the Australian Public Service, which will give government employees access to generative AI tools, together with training and guidance on handling government information when using existing platforms such as ChatGPT, Claude and Gemini. The plan also includes appointing a Chief AI Officer in every federal agency in 2026.
We expect this plan to complement existing guidance at the state government level, including the Victorian Government’s Guidance for the safe and responsible use of generative AI in the Victorian public sector, the NSW Government’s three-pillars approach (an overall strategy supported by an ethics policy and an assessment framework) and the Queensland Government’s Use of generative AI in Queensland Government guideline.
These frameworks emphasise the need for agencies to conduct thorough assessments to ensure transparency and accountability. Before deploying any AI tool, agencies should review its functionality, training data and governance controls, and put clear policies and human oversight in place to manage any legal risks.
If you have any questions about the use of AI or its associated legal risks, please contact us.
Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.