Copyright and Privacy Legal Issues Resulting from the Rising Popularity of Artificial Intelligence Use
December 21, 2020
By François Larose, Amanda Branch and Naomi Zener
More and more companies are turning to artificial intelligence (AI) and machine learning to help them make sense of the data they are collecting. The use of AI raises a number of interesting and novel legal issues, particularly in the areas of copyright and privacy.
In 2019, we wrote about the USPTO’s call for comments on the impact of providing copyright protection to works created using AI or AI-created works. On October 7, 2020, the USPTO released its AI report compiling its findings and discussions regarding all areas of intellectual property and AI. Much of the feedback concerning copyright revolved around the use of copyright-protected works to train AI, which may infringe the copyright owner’s reproduction rights and may not qualify as fair use (see p. 4 of the report). However, many commentators stated that they have few concerns about AI and fair use, believing that the fair use doctrine is adaptable to the use of copyright-protected works by AI. Currently, under U.S. copyright law, a work created by an AI application without human involvement has no copyright protection. Where an AI-created work does involve a human, such a work may be protectable by copyright, provided the legal requirements are met (see p. 24 of the report). The U.S. Copyright Office is clear, however, that it will not grant copyright registration for a work whose author is not a human being. The majority of commentators stated that AI is merely a tool, like “Photoshop, Garage Band” (see p. 26 of the report), not capable of authorship in and of itself. The report confirms that the current U.S. legal position, which requires a human author to exercise creative effort for a work to be protected by copyright, continues to stand.
In Canada, the Standing Committee on Industry, Science and Technology (INDU) released its report on the statutory review of the Copyright Act, containing 36 policy recommendations, on June 3, 2019; to date, however, the Copyright Act remains unamended.
In CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13, [2004] 1 S.C.R. 339 (CCH), the Supreme Court of Canada made clear that a work “must not be so trivial that it could be characterized as a purely mechanical exercise”, which raises the question of whether an AI-created work is protectable by copyright. For data collected by a human author to be protected by copyright, the assessment comes down to whether the data is a fixed expression that the author used their skill and judgment to create. If the data is merely factual, it will not be protected by copyright. Furthermore, where the data is collected through an AI application, one must assess the degree of human involvement in preparing the device used to collect the data, and whether the AI application amounts to a purely mechanical exercise or technique; if so, the output would not be protectable by copyright.
In our 2016 Copyright Year in Review article, we discussed Andrews v McHale, 2016 FC 624, a decision that has not been appealed. We noted that while the Court implied there may be authorship in software, it did not go so far as to say so; it did, however, say that the author must have written the code for any copyright protection to arise. Notably, the Court emphasized at paragraph 88 of the decision “the category of ideas, methods, procedures, algorithms or other categories of contributions which, while perhaps valuable, fall outside the type of intellectual effort protected by copyright law.” The decision leaves open the possibility that copyright may subsist in software where the author goes beyond writing code and also contributes to the design, features and structure of the program. In the case of AI applications, it is possible that human authorship of all features of the application, not simply the code itself, may result in copyright protection.
In our 2017 Copyright Year-in-Review article, we discussed Geophysical Service Incorporated v EnCana Corporation, 2017 ABCA 125 (Geophysical), in which the Alberta Court of Appeal held that copyright subsisted in the seismic data at issue and in the compilation thereof. While the decision was fact-specific, its holding that copyright protected both raw and processed data, and not only expressions and compilations of data, may prove useful in arguing that data collected through an AI application can be a proper subject of copyright. In this case, Geophysical Service Incorporated (GSI) conducted offshore seismic surveys, the results of which it licensed to oil and gas companies. GSI claimed that its seismic data (raw or processed) was a protectable “work” under the Copyright Act on the basis that the raw data was original, having been created through the exercise of human skill and judgment with the aid of computers. GSI further argued that copyright subsisted in the “seismic sections,” graphical representations of the processed data that professional geophysicists could use for their own interpretation. The defendant energy boards argued that the creation of the raw data by computer programs was tantamount to a purely mechanical exercise, such that the raw data was not an original work protectable under the Copyright Act. The Alberta Court of Appeal upheld the Alberta Court of Queen’s Bench’s 2016 decision, finding that the raw data and the “seismic sections” were protected by copyright as both an original literary compilation and an artistic compilation. The Court held that:
- the raw data was protected by copyright because the seismic professionals demonstrated skill and judgment in preparing the devices that collected the data, by selecting the proper location, angles, positioning, etc. for the instruments. The Court noted that this was done in a fashion similar to how photographers create photographs;
- the humans processing the raw data exercised skill and judgment in creating a useable product from the raw data; and
- the seismic sections were akin to a map or chart, requiring selection or arrangement of the underlying data (as is the case with a compilation).
The Geophysical decision suggests that data collected and processed by humans using AI applications could amount to an original expression capable of attracting copyright protection, in both the raw data itself and the compilation thereof, depending on the skill and judgment exercised by the author. Since GSI’s application for leave to appeal to the Supreme Court of Canada was denied, the Alberta Court of Appeal decision is the established precedent in that province. However, in Toronto Real Estate Board v. Commissioner of Competition, 2017 FCA 236 (Toronto Real Estate Board), the Federal Court of Appeal held that there was no copyright in TREB’s Multiple Listing Service® database because it lacked the skill and judgment necessary for it to be an original compilation protectable under the Copyright Act. The Court found that the database consisted of factual information and that inputting that information was a merely mechanical exercise, failing to meet the standard required by the CCH decision (although the superseded test from Tele-Direct (Publications) Inc. v. American Business Information, Inc., [1998] 2 FCR 22, was applied in error, the Federal Court of Appeal reached the same result applying the CCH standard). Leave to appeal Toronto Real Estate Board to the Supreme Court of Canada was also denied, so it remains to be seen which of the Geophysical and Toronto Real Estate Board approaches will be followed. Until then, both decisions stand, creating a dichotomy as to when collected data will be protected by copyright and when it will not.
It remains to be seen how a court would treat a human-created AI application, and the data collected or created by it, for copyright purposes. What is clear is that there is still no definitive answer on whether an AI application or tool can, with human involvement, author a copyright-protectable work, or whether the data collected and compiled by such an AI application or tool is itself protected by copyright.
Copyright protection is not the only legal issue raised by the use of AI technology; there are also a number of privacy implications that organizations must consider. These issues often co-exist.
A recent example is Clearview AI, an organization that reportedly used its technology to collect images and make facial recognition available to law enforcement for the purpose of identifying individuals. It was reported that Clearview AI collected and used photographs of individuals by scraping social media platforms such as Facebook, Instagram, Twitter and YouTube, and then used those images to train its algorithms.
Clearview AI’s practices also raised a number of privacy concerns, chiefly that individuals had not consented to Clearview AI’s collection and use of their images. In February of this year, the federal Privacy Commissioner and his provincial counterparts in Quebec, British Columbia and Alberta announced a joint investigation into Clearview AI and its use of facial recognition technology.
In July 2020, Clearview AI advised Canadian privacy protection authorities that, in response to the joint investigation, it would cease offering its facial recognition services in Canada. The privacy commissioners are continuing their investigation into the deletion of the personal information of Canadians that has already been collected by Clearview.
Consent and AI
The Clearview AI investigation again highlights that consent is a central element of privacy law. As we’ve discussed in previous articles, consent is considered valid only if it is “meaningful”, that is, if it is reasonable to expect that the individuals to whom a business’s activities are directed would understand the nature, purpose and consequences of the collection, use or disclosure to which they are consenting.
It is generally recognized that obtaining meaningful consent can be challenging, particularly in the context of AI processing which can be unpredictable or nearly impossible to explain to the average consumer.
Earlier this year, the Office of the Privacy Commissioner of Canada (OPC) launched a public consultation on reforming the Personal Information Protection and Electronic Documents Act (“PIPEDA”). As part of this consultation, the OPC sought input from experts as to how privacy principles could be applied to the responsible development of AI. The OPC received 86 submissions and, in November 2020, released its final recommendations in A Regulatory Framework for AI: Recommendations for PIPEDA Reform (the “Framework”). In the Framework, the OPC recognizes that “in 2020, privacy protection cannot hinge on consent alone”. To that end, it recommended a series of new exceptions to consent that would allow the benefits of AI to be better achieved within a rights-based framework, such as specific exceptions for research and statistical purposes, compatible purposes and legitimate commercial interests.
Also, in November 2020, the federal government introduced Bill C-11, the Digital Charter Implementation Act, 2020, which enacts the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act. Bill C-11 would repeal the privacy provisions of PIPEDA and introduce new obligations for organizations, including new transparency requirements that apply to automated decision-making systems, like AI. Under the CPPA, organizations would have to be transparent about how they use such systems to make significant predictions, recommendations or decisions about individuals. The CPPA also gives individuals the right to request that businesses explain how a prediction, recommendation or decision was made by an automated decision-making system and explain how the personal information that was used to make the decision was obtained.
Commissioner Daniel Therrien issued a statement shortly after Bill C-11 was tabled. The Commissioner commends Bill C-11 as being “a clearer, more readable law”, but expresses concern that while Bill C-11 opens the door to new commercial uses of personal information without consent, it does not specify that such uses are conditional on privacy rights being respected. The Commissioner is clear that he considers privacy to be a fundamental human right, and that such right should prevail in the event of a conflict between individual privacy rights and the interests of commercial enterprises.
Significant developments in Canadian privacy legislation are on the horizon. In the meantime, organizations should continue to ensure that AI systems are designed with privacy considered from the outset (a Privacy by Design approach), should use de-identified information wherever possible, and should limit the collection of personal information to only what is reasonably required.
*The authors would like to thank Prudence Etkin for her research assistance with this article.
Content shared on Bereskin & Parr’s website is for information purposes only. It should not be taken as legal or professional advice. To obtain such advice, please contact a Bereskin & Parr LLP professional. We will be pleased to help you.