As Artificial Intelligence (AI) regulation develops, businesses entering AI contracts must confront legal and ethical considerations head-on.
Currently, a range of AI governance frameworks are being rolled out. Last month CSIRO's Data61 released a discussion paper on AI, which included a proposed ethics framework. The following week the EU's High-Level Expert Group on Artificial Intelligence released AI ethics guidelines.
Data61's proposed ethics framework principles are largely consistent with existing regulation. The proposed core ethics principles for AI are: generates net benefits; do no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability.
Australian organisations should, as a bare minimum, ensure that products which utilise AI (including the data upon which any AI output is based) comply with legislation, including anti-discrimination laws and data and privacy laws. However, due to the nature of AI, compliance may be more complex than it first appears.
AI discrimination considerations
Discrimination on certain bases is illegal in Australia under anti-discrimination laws. Similarly, the "fairness" principle under Data61's proposed ethics framework requires that AI decisions do not unfairly discriminate against individuals.
Although direct discrimination may be easy to spot, where the data set an AI product uses is biased, decisions based on that data will inherit the bias and may result in indirect discrimination. In particular, data variables that are highly correlated with discriminatory criteria (such as gender or age) can act as proxies for those criteria and are likely to cause indirect discrimination.
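The proxy-variable problem can be sketched in a few lines. The example below uses entirely fabricated toy data: the protected attribute (gender) never appears as a feature, yet a correlated proxy feature ("club" membership) lets a naive scorer reproduce the historical bias anyway.

```python
# Illustrative sketch with fabricated data: removing a protected attribute
# does not remove bias if a correlated proxy feature remains.
from collections import defaultdict

# Hypothetical historical hiring records. Gender is not a feature, but in
# this toy data "club" is strongly correlated with it.
history = [
    {"club": "chess",   "hired": 1},
    {"club": "chess",   "hired": 1},
    {"club": "chess",   "hired": 1},
    {"club": "netball", "hired": 0},
    {"club": "netball", "hired": 0},
    {"club": "netball", "hired": 1},
]

# "Train" a naive scorer: the historical hire rate for each feature value.
totals, hires = defaultdict(int), defaultdict(int)
for record in history:
    totals[record["club"]] += 1
    hires[record["club"]] += record["hired"]
score = {club: hires[club] / totals[club] for club in totals}

print(score)  # chess scores 1.0, netball scores ~0.33
```

Any model trained on these records would rank future "chess" applicants above "netball" applicants for reasons that have nothing to do with merit, which is the mechanism behind indirect discrimination.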
Technology giant Amazon is one company that has learnt the dangers of biased data the hard way.
In 2014, Amazon developed a resumé selection tool. The tool assigned job applicants a rating based on their resumés and was fed 10 years' worth of applicant information. As men have historically dominated the technology industry, the tool assigned higher ratings to men's resumés compared with women's resumés. Resumés containing words like "women's" were penalised with lower scores, while verbs more often used by male applicants, like "executed", were favoured.
In another case, Amazon released same-day delivery in certain American cities. The service was made available to neighbourhoods with high numbers of Amazon users. Consequently, predominantly non-white neighbourhoods were essentially excluded, resulting in indirect racial discrimination.
When purchasing AI, businesses should ensure that the technology has been developed with the appropriate sample data. The input data should be relevant to the decision at hand and algorithms should be thoroughly tested. If this can’t be guaranteed, further contractual provisions to protect the purchasing party should be considered.
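One way such testing is sometimes done is to compare selection rates across groups. The sketch below applies the "four-fifths rule" used in US employment guidance (the selection rate for any group should be at least 80% of the highest group's rate); the outcome numbers are fabricated for illustration.

```python
# Illustrative sketch with fabricated outcomes: a simple disparate impact
# check comparing selection rates across two groups.
outcomes = {
    # group: (selected, total) -- hypothetical numbers for illustration
    "group_a": (40, 100),
    "group_b": (20, 100),
}
rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8  # below the 80% benchmark: investigate before deployment
print(f"disparate impact ratio: {ratio:.2f}, flagged: {flagged}")
```

A check like this is a coarse screen, not a legal test, but it is the kind of evidence a purchaser might ask a vendor to produce, or warrant, under an AI contract.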
AI transparency considerations
To ensure fairness, it's important that a business's AI system is transparent and accountable. "Transparency and explainability", "contestability" and "accountability" are each principles of Data61's proposed ethics framework.
According to the framework, organisations should tell people when an algorithm is being used and what information the algorithm uses to make decisions. People should also be able to challenge the decision where an algorithm significantly impacts them.
As well as helping prevent discrimination and promoting fair process, this requirement is in line with the European Union’s General Data Protection Regulation (GDPR). The GDPR includes provisions allowing individuals to challenge automated decisions.
Shrouding AI processes in secrecy may come back to bite organisations. For instance, a school district in Texas used an AI system in deciding to dismiss teachers. The AI software used students' test scores to assess teacher performance. The relevant algorithms could not be scrutinised, as they were the proprietary information of third-party software owners and considered trade secrets. A US federal court found the arrangement may have violated the teachers' procedural due process rights, including their right to hear the reasons for their dismissal. A major issue was that the outputs could not be checked for error.
Although the law may differ in Australia, such examples demonstrate the reputational and ethical risks involved in blindly relying on AI results. For these reasons, when entering AI agreements businesses should ensure that they can access, and where necessary disclose, information about how the AI makes its decisions.
AI privacy considerations
An AI system should protect the privacy of the personal information in its data sets. For instance, under Australian law, consent is generally required when collecting identifiable information from individuals. "Privacy protection" is another principle of Data61's proposed ethics framework.
Sharing identifiable data sets without consent may also breach privacy laws. Organisations should ensure publicly available data is truly de-identified. In 2016, the Department of Health published de-identified medical billing records online. It was hoped the data would be useful for research and policy making. However, when combined with other publicly available information, such as dates of childbirth, individuals could be re-identified.
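The re-identification mechanism is a simple data linkage. The sketch below uses entirely fabricated records: a "de-identified" data set is joined to a public data set on quasi-identifiers (here, postcode and birth year), and a unique match re-identifies the individual.

```python
# Illustrative sketch with fabricated data: "de-identified" records can be
# re-identified by linking on quasi-identifiers found in public data.
deidentified = [
    {"postcode": "3000", "birth_year": 1980, "diagnosis": "condition A"},
    {"postcode": "3000", "birth_year": 1975, "diagnosis": "condition B"},
]
public = [
    {"name": "Alice Example", "postcode": "3000", "birth_year": 1980},
]

# Join the two data sets on (postcode, birth_year).
reidentified = [
    (p["name"], d["diagnosis"])
    for d in deidentified
    for p in public
    if (d["postcode"], d["birth_year"]) == (p["postcode"], p["birth_year"])
]
print(reidentified)  # [('Alice Example', 'condition A')]
```

This is why removing names alone is not de-identification: any combination of attributes that is rare in the population can serve as the join key.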
There is a fine balance between fostering transparency and protecting privacy. Although businesses should promote transparency of decisions, they also must ensure that the AI data sets comply with privacy laws.
Authors: Dan Pearce & Louise Almeida
Dan Pearce, Partner
T: +61 3 9321 9841
Angela Flannery, Partner
T: +61 2 8083 0448
Andrew Hynd, Partner
T: +61 7 3135 0642
The information in this publication is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this newsletter is accurate at the date it is received or that it will continue to be accurate in the future.
Published by Dan Pearce