
AI & Privacy: The Right to Erasure vs. Machines with Great Memories

April 6, 2020
By Amanda Branch and Jasmine Godfrey

This is the third in our series of articles considering privacy challenges introduced by Artificial Intelligence ("AI"). Our first article addressed the challenge of obtaining appropriate consent from users and issues related to data retention. Our second article considered issues of transparency and appropriate data retention in the context of AI. This article discusses some of the complexities and challenges AI introduces in implementing a Right to be Forgotten, which is beginning to achieve recognition in Canada.

Presently, there is no Canadian legislation that squarely governs artificial intelligence; however, this may change. Between January and March of this year, the Office of the Privacy Commissioner of Canada ("OPC") welcomed submissions from interested stakeholders in response to its Consultation on the OPC's Proposals for ensuring appropriate regulation of artificial intelligence (the "Consultation"). The OPC is currently engaged in legislative reform policy analysis, is of the view that the Personal Information Protection and Electronic Documents Act ("PIPEDA") falls short in its application to AI systems, and has identified several areas where PIPEDA could be enhanced. The OPC has put forth several proposals for consideration, including requiring organizations to provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing, as well as requiring organizations to ensure data and algorithmic traceability, including in relation to the datasets, processes and decisions made during the AI system lifecycle.

Presently, several frameworks have been introduced to address ethical considerations, including privacy and human-centricity; however, these frameworks set out general principles and are not yet prescriptive. This can present challenges for organizations looking for concrete requirements around which to structure their compliance efforts. Organizations should look to existing legislation, such as PIPEDA, to help guide best practices for using AI.


Privacy & the Right to be Forgotten

In the European Union, the General Data Protection Regulation (the "GDPR") codified a version of the "Right to Erasure", often called the "Right to be Forgotten", which gives EU citizens more control over their personal information by allowing them to demand, in specific instances, that data about them be deleted. For example, in certain circumstances, this may result in search engines like Google having to remove links to information that is outdated or embarrassing. This is illustrated by Google Spain v AEPD and Mario Costeja González, which arose from a 2010 complaint against Google and a Spanish newspaper over search results that displayed a link to a newspaper article about a property sale the complainant had made to resolve his debts. It was determined that the EU Data Protection Directive (the "DPD"), the predecessor to the GDPR, applied to companies that market their services in the EU, irrespective of their physical presence, and that consumers have a right to request that search engine companies remove links referencing their personal information. Google explains that since May 2014, when it started to apply a version of the right to be forgotten in Europe, it has received more than 845,000 requests to remove a total of 3.3 million web addresses, with about 45% of the links ultimately being delisted.

The right to be forgotten does not currently exist in Canada in the same way as it does in Europe; however, PIPEDA does include data access and correction rights. While Canada does not have a formal rule for de-listing requests, there are signs suggesting the beginnings of recognition of a GDPR-like Right to be Forgotten in Canada. In A.T. v Globe24H.com, the court concluded that it could issue an order declaring that a Romania-based website, by republishing thousands of Canadian judicial and tribunal decisions, was violating the law. Such a declaration could then be used to support a request that Google remove the offending links from its search database. In so ordering, the court may have created the equivalent of a Canadian Right to be Forgotten and suggested that there is now a way to globally remove search results that may jeopardize people's privacy rights, despite being factually correct. Further, in its 2017-18 Annual Report to Parliament on PIPEDA, the OPC took the position that, under the existing law, Canadians have a right to ask search engines to de-index web pages, and websites to remove or amend content, that contain inaccurate, incomplete or outdated information.

A recent Federal Court case also suggests that a Right to be Forgotten is gaining traction in Canada. In 2019, the federal Privacy Commissioner, Daniel Therrien, referred to the Federal Court, for hearing and determination, two preliminary questions of jurisdiction that arose in the context of his investigation of a complaint against Google LLC. The complaint alleged that Internet searches of the complainant's name using Google's search engine return results prominently displaying links to news articles that he says are outdated and inaccurate and that disclose sensitive personal information, in a way that causes him direct harm. The complainant alleged that Google was in contravention of PIPEDA and requested that Google de-index the articles from its search results. The Federal Court dismissed Google's challenge.


AI & The Right to be Forgotten

The Right to be Forgotten can be relatively straightforward in a strictly privacy context, but it becomes more complicated once the data has been fed into AI and machine-learning algorithms. This raises the issue of how to reclaim the data and its influence on the resulting output. Generally speaking, AI cannot be taught to "forget" something the way a human can. On a more emotional level, it has also been argued that AI does not recognize mistakes or accidents and does not take remorse into account. Some have even suggested that we should focus on a "right to be forgiven", in recognition of the challenges of enabling AI to meaningfully give effect to a Right to be Forgotten.
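To make the difficulty concrete, the bluntest way for a machine-learning system to "forget" an individual is to discard that person's records and retrain the model from scratch, an approach discussed in the research literature under the heading of machine unlearning. The sketch below is purely illustrative (the dataset, user IDs and "forget_user" helper are hypothetical), but it shows why honouring erasure requests this way is costly in practice: every request triggers a full retrain.

    # A minimal sketch of "unlearning by retraining": drop the data
    # subject's rows and refit so their data no longer shapes the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data, each row tagged with the user it came from.
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    user_ids = rng.integers(0, 100, size=1000)

    model = LogisticRegression().fit(X, y)

    def forget_user(user_to_erase):
        # Honour an erasure request by retraining without that user's rows.
        keep = user_ids != user_to_erase
        return LogisticRegression().fit(X[keep], y[keep])

    model = forget_user(42)  # user 42's data no longer influences the model

More efficient unlearning schemes exist (for example, sharding the training data so that only the affected shard is retrained), but full retraining remains the most straightforward way to guarantee that a person's data no longer influences the output.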

There are some ways that organizations can address these challenges. First, the Right to be Forgotten is not absolute: an organization may continue to have a legitimate interest in the personal information, which can override the request of the data subject.

Second, there may be technical solutions. Just as a key or link to personal information can be erased on Google, a key that allows access to data can be removed or deleted from an AI system. Big data applications can handle a Right to be Forgotten in other ways as well: for example, they can break personal data up into smaller sets so that it is extremely difficult to re-identify an individual from any separate set. Inevitably, AI is much more complex than simple file deletion, and solutions will differ. It will be interesting to see how the law evolves to account for the ways memory differs between humans and AI.
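The "lose the keys" approach is sometimes called crypto-shredding: each individual's data is encrypted under that individual's own key, and an erasure request is honoured by destroying the key, leaving only unreadable ciphertext behind. A minimal sketch, assuming per-user keys held in a simple in-memory store (the store and helper names here are ours, purely for illustration):

    from cryptography.fernet import Fernet

    # Hypothetical per-user key store; in practice this would be a key
    # management service or hardware security module, not a dict.
    user_keys = {}

    def store_record(user_id, plaintext):
        # Encrypt each user's data under that user's own key.
        key = user_keys.setdefault(user_id, Fernet.generate_key())
        return Fernet(key).encrypt(plaintext.encode())

    def read_record(user_id, ciphertext):
        return Fernet(user_keys[user_id]).decrypt(ciphertext).decode()

    def erase_user(user_id):
        # Crypto-shredding: destroying the key makes every record encrypted
        # under it permanently unreadable, without touching the ciphertext.
        del user_keys[user_id]

    record = store_record("alice", "sensitive personal information")
    print(read_record("alice", record))  # readable while the key exists
    erase_user("alice")
    # read_record("alice", record) would now fail: the data is effectively
    # gone, even though the ciphertext itself was never deleted.

Note that this prevents future access to the stored data but does not remove whatever influence the data has already had on a trained model, which is why the retraining problem described above remains.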

Third, organizations should consider increased transparency in their consumer-facing privacy policies. In its Consultation, the OPC notes that transparency is a foundational element of PIPEDA's openness principle and a precondition to trust; however, the OPC holds that the principle, as currently framed, lacks specificity in relation to the challenges posed by AI systems. The OPC believes the openness principle under PIPEDA should include a right for individuals to receive the reasoning underlying any automated processing of their data, and the consequences of such reasoning for their rights and interests. In the OPC's view, this would also help satisfy PIPEDA's existing obligations of providing individuals with access and correction rights.

Even if these tools can be used to enable AI to mimic human memory and "forget", there is a conflict in balancing the importance of clean and complete data with the need to protect people's privacy. Specifically, AI systems rely on large amounts of personal data to train and test algorithms, and limiting any of that data could reduce the robustness of the output. The difficulty of balancing these interests is especially clear in data retention and AI-enabled toys, where the data of some of society's most vulnerable members is at stake. One such toy is "My Friend Cayla", an internet-connected doll that uses voice recognition technology to interact with children in real time. These conversations are recorded and transmitted online to a voice analysis company, which enables the company's AI to learn and improve how it speaks to the child. These toys raise many concerns from a privacy perspective, however, and have in fact been banned in certain countries. The data from AI toys contains everything a child says to the device, or says while the device is in close proximity, which raises the question of whether the child has a right to have this data deleted. There are currently many measures in place to protect children, ranging from prohibiting invasive online privacy practices to regulating the type of food marketing children are exposed to. The question is whether the same level of protection should be afforded to children's personal data in an AI context.

It will be interesting to see what, if any, legislative developments take place in the near future, particularly with respect to GDPR-type rights. Until such rights are formally adopted in Canada, it will be important for organizations deploying AI systems to comply with privacy principles and ethical guidelines.
