13 November 2025
4 min read
#Corporate & Commercial Law, #Data & Privacy
Published by:
As businesses increasingly use artificial intelligence (AI) tools and chatbots to support their daily operations, it is important to remain vigilant to the potential business and reputational risks that may arise from an AI-generated response.
Teams implementing AI into their operations should be aware that automation does not equal accuracy. AI tools can produce false, exaggerated or biased information, especially when instructed with misleading or poorly structured prompts. A single inaccurate or inappropriate AI-generated response can undermine customer trust, damage reputation and, in some cases, expose the business to legal and compliance risks.
Businesses are ultimately responsible for any AI-generated responses that give rise to a claim, for example, where a response is inaccurate, misleading, defamatory or infringes another person’s copyright.
At the same time, businesses have little control over the prompts written by users, which can significantly affect the quality and tone of the responses generated by AI tools.
In practice, much will depend on the tool being used and the contractual terms governing its use. Most providers of publicly available generative AI platforms, such as ChatGPT, exclude any liability for the outputs their tools generate. This means that if something goes wrong, the business is unlikely to have meaningful recourse against the supplier.
On the customer-facing side, businesses may seek to protect themselves by including disclaimers for AI-generated responses. While this may help defend against claims from customers who are dissatisfied or misled by the response, it provides limited protection against third-party claims.
For example:
The financial implications of such claims can be significant and could threaten the future of a business. For instance, under the Australian Consumer Law (ACL), penalties for misleading or deceptive conduct can reach up to 30% of a company’s annual turnover or $50 million.
Content that appears on your public-facing website or social channels is not immune from liability simply because an AI tool created it. Even the best-written disclaimers will not provide complete protection.
Businesses currently using or planning to introduce AI chatbots or equivalent tools should consider the following questions to minimise potential risks:
While many of the risks may seem low for a tool as simple as a chatbot, there have been instances where AI products have produced concerning outputs that caused significant issues for businesses. Working through the questions above can help identify potential problems early and reduce the risk of adverse outcomes.
Businesses looking to adopt AI tools should conduct a thorough risk assessment to minimise the risk of misuse. This assessment should consider the tool’s functionality, the data sets it was trained on and how the business will control and manage access to generated results.
It is also important to have clear policies and processes in place to handle any claims that may arise from misleading or inaccurate outputs. If AI-generated content on your website defames an individual or infringes copyright, claiming that “AI did it” will not provide a complete defence. Therefore, businesses must carefully consider how their chosen tool operates and who is ultimately responsible for its output.
If you have any questions regarding your business’s current use of AI, or need assistance with conducting a risk assessment before deploying an AI tool, please contact us here.
Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.