
AI, Face Swapping, and Right of Publicity

By Marie-Andrée Weiss

Last April, several plaintiffs filed a putative class action against NeoCortext, Inc., the developer of the Reface face swapping application, alleging that the application infringed their right of publicity.

NeoCortext moved to dismiss the complaint, claiming that plaintiffs’ right of publicity claim was preempted by the Copyright Act and barred by the First Amendment. NeoCortext also moved to strike the complaint, claiming that the suit was a strategic lawsuit against public participation (SLAPP) aimed at “gagging a novel application that enables users to engage in creative activities that are protected by the First Amendment.”

On September 5, 2023, U.S. District Judge Wesley L. Hsu denied both motions.

The case is Kyland Young v. NeoCortext, Inc., No. 2:23-cv-02496-WLH-PVC (C.D. Cal.).

The Reface app

NeoCortext developed Reface, a smartphone application using an artificial intelligence algorithm that allowed users to replace their faces in photographs and videos with the faces of celebrities (“face swap”), to place their faces into movie scenes, and to “mix [their] face[s] with a celebrity.”

Users were able to search for their favorite characters or individuals in a catalog of images and movie and show clips, which was compiled from several websites, such as mybestgif.com, https://tenor.com/, Google Video, and Bing Video. Among the individuals featured in the catalog was one of the plaintiffs, Kyland Young, a finalist on season 23 of CBS’s Big Brother.

Users could then upload a photograph featuring one or more human beings, and the app “swapped” the faces with the faces of the individuals featured in the image or clip chosen by the user from Reface’s catalog. NeoCortext offered a free version of the service, in which the “face swap” image or video was watermarked with the Reface logo. The complaint referred to these watermarked images and clips as “Teaser Face Swaps.” A paid subscription to the app allowed the user to remove the watermark.

Does the app infringe plaintiff’s right of publicity?

The complaint alleged that the app allowed users to recreate Mr. Young’s scenes from Big Brother, but that NeoCortext never asked for his consent nor paid him any royalties, thus profiting from Mr. Young’s likeness, and that defendant used the likeness of plaintiffs in violation of California’s right of publicity “to pitch its product for profit.” Plaintiff argued that Teaser Face Swaps were “essentially ads intended to entice users to buy PRO subscriptions, and the paid PRO version of the applications makes money by including Californians in its library of content.”

California Right of Publicity Law

California recognizes a right of publicity both at common law and by statute. California Civil Code § 3344 prevents the use, without prior consent, of a person’s name, voice, signature, photograph or likeness in products, merchandise or goods, or to advertise, sell, or solicit the purchase of goods or services.

To succeed, a plaintiff must allege that (1) the defendant used the plaintiff’s identity; (2) the defendant appropriated the plaintiff’s name or likeness to its advantage, commercially or otherwise; (3) the defendant did not have consent; and (4) injury resulted from the unauthorized use (see for instance Fleet v. CBS, Inc. at 1918).

The two anti-SLAPP steps

In its motion to strike the case, NeoCortext argued that the app allowed its users to create “humorous and sometimes absurd new works for personal use” and that “[t]his is exactly the type of creative activity that the First Amendment protects and that the right of publicity does not.”

There are two steps in an anti-SLAPP analysis, the second step being equivalent to the standard courts use to evaluate a motion to dismiss.

First step:

The first step under California anti-SLAPP law, Cal. Civ. Proc. Code § 425.16, was for NeoCortext to show that its use of Mr. Young’s image was made “in furtherance of [NeoCortext’s] right of petition or free speech… in connection with a public issue.” Such speech can be conduct, including “all conduct in furtherance of the exercise of the right of free speech” (Lieberman v. KCOP Television, Inc., at 166).

Judge Hsu reasoned that the conduct at the basis of Mr. Young’s complaint was the inclusion of his image in the app, allowing users to create a new image. As such, it was the users who exercised their freedom of speech, not NeoCortext. But because the app is a tool that users can use to exercise their free speech rights, NeoCortext’s use of plaintiff’s image in the app was conduct taken in furtherance of the users’ exercise of free speech.

Such speech is connected with a public issue under the test used by California courts, as it involves: (1) a statement concerning a person or entity in the public eye (Mr. Young); (2) conduct that could directly affect a large number of people beyond the direct participants; or (3) a topic of widespread public interest (“the use of technology to alter images and videos of individuals in a way that makes them look realistic” is such a topic).

NeoCortext had thus shown that its conduct was in furtherance of the right of free speech and made in connection with a public issue, satisfying its burden on the first step of the anti-SLAPP analysis.

Second step:

The burden then shifted to plaintiff to show “a probability of prevailing on the claim”, the second step required by California anti-SLAPP law and identical to the standard for a motion to dismiss. Plaintiff carried that burden, leading Judge Hsu to deny both motions.

NeoCortext had argued, unsuccessfully as we will now see, that the Copyright Act and the First Amendment preempted the right of publicity claim.

Copyright Act does not preempt the right of publicity claim

NeoCortext had argued that, if a right of publicity claim is entirely based on the display, reproduction or modification of a work protected by copyright, the claim is preempted by the Copyright Act.

Section 301 of the Copyright Act preempts state law rights that are equivalent to the exclusive rights detailed in Section 106 of the Copyright Act.

The Ninth Circuit uses a two-part test to determine whether a state law claim is preempted by the Copyright Act: first, whether the subject matter of the state claim falls within the subject matter of copyright; and second, whether the rights asserted under state law are equivalent to the exclusive rights conferred by Section 106.

NeoCortext had claimed that plaintiff’s claim fell within the subject matter of copyright, as the images and clips in NeoCortext’s catalog were protected by copyright.

In Maloney v. T3Media, Inc., the Ninth Circuit Court of Appeals held:

“that a publicity-right claim is not preempted when it targets non-consensual use of one’s name or likeness on merchandise or in advertising. But when a likeness has been captured in a copyrighted artistic visual work and the work itself is being distributed for personal use, a publicity-right claim interferes with the exclusive rights of the copyright holder, and is preempted by section 301 of the Copyright Act.” (Maloney, at 1011, our emphasis).

NeoCortext’s argument relied further on Maloney, which held that:

“…where a likeness has been captured in a copyrighted artistic visual work and the work itself is being distributed for personal use, a publicity-right claim is little more than a thinly disguised copyright claim because it seeks to hold a copyright holder liable for exercising his exclusive rights under the Copyright Act.” (Maloney, at 1016).

First part of the Ninth Circuit test: Plaintiff’s right of publicity claim does not fall within the subject matter of copyright

Noting that the Copyright Act protects ownership of photographs but does not protect the exploitation of a person’s likeness, “even if it is embodied in a photograph”, and citing the Ninth Circuit decision in Downing v. Abercrombie & Fitch, Judge Hsu found that “[plaintiff]’s right of publicity claim does not fall within the subject matter of copyright”. Judge Hsu distinguished the case from Maloney, where a photograph of the plaintiff, protected by copyright, had been sold. In contrast, the use of Mr. Young’s likeness went beyond the original work protected by copyright, as it was used to create a new product containing the plaintiff’s image. Because plaintiff’s claim did not fall within the subject matter of copyright, it was not preempted by the Copyright Act.

Second part of the Ninth Circuit test: State law rights asserted are not equivalent to Section 106 rights

Judge Hsu also found that NeoCortext’s argument failed on the second part of the test, because Section 106 of the Copyright Act does not give the owners of the photographs the right to use plaintiff’s name and likeness to advertise the free version of the app and to induce users to buy the subscription. Plaintiff was “not seeking to ‘merely’ restrict the reproduction or distribution of the original photographs/works, as the plaintiffs in Maloney….”

The rights asserted by plaintiff were not equivalent to the rights conferred by the Copyright Act on the owners of the photographs in the app’s catalog. Under the two-part test used by the Ninth Circuit, the claim was therefore not preempted by the Copyright Act.

The First Amendment does not preempt the right of publicity claim

NeoCortext had also argued that the First Amendment preempted the claim, as users used the app to create “their own unique, sometimes humorous and absurd expressions” which are protected by the First Amendment. NeoCortext further argued that the photos and clips thus created had “creative and aesthetic value” and that they were “new works … distinct from the originals”.

California courts apply the “transformative use” test to balance the right of publicity against the First Amendment, as detailed by the California Supreme Court in Comedy III Productions v. Gary Saderup, Inc. (at 142):

“In sum, when an artist is faced with a right of publicity challenge to his or her work, he or she may raise as affirmative defense that the work is protected by the First Amendment inasmuch as it contains significant transformative elements or that the value of the work does not derive primarily from the celebrity’s fame.” (Our emphasis).

NeoCortext had to show that its use was transformative as a matter of law. Judge Hsu found it had not done so, noting that plaintiff’s face “is the only thing that changes in the end product” and that the body is sometimes unchanged. He cited Hilton v. Hallmark Cards, where the Ninth Circuit found that a greeting card featuring the likeness of Paris Hilton, arguably more transformative than the swap images created with the app, was not transformative enough to entitle the defendant to a First Amendment affirmative defense as a matter of law.

What is next?

On September 8, NeoCortext appealed to the U.S. Court of Appeals for the Ninth Circuit.

There have already been several complaints alleging that an AI-powered product or service infringes the copyright of authors whose works were used to train the models, but Young v. NeoCortext is one of the first cases where an AI-powered product or service is alleged to infringe the right of publicity.

As such, it is worth following. To be continued…

Large Language Models and the EU AI Act: the Risks from Stochastic Parrots and Hallucination

By Zihao Li[1]

With the launch of ChatGPT, Large Language Models (LLMs) are shaking up our whole society, rapidly altering the way we think, create and live. For instance, the GPT integration in Bing has altered our approach to online searching. While nascent LLMs have many advantages, new legal and ethical risks[2] are also emerging, stemming in particular from stochastic parrots and hallucination. The EU is the first and foremost jurisdiction that has focused on the regulation of AI models.[3] However, the risks posed by the new LLMs are likely to be underestimated by the emerging EU regulatory paradigm. Therefore, this correspondence warns that the European AI regulatory paradigm must evolve further to mitigate such risks.

Stochastic parrots and hallucination: unverified information generation

One potentially fatal flaw of LLMs, exemplified by ChatGPT, is that the information they generate is unverified. For example, ChatGPT often generates pertinent but non-existent academic reading lists. Data scientists attribute this problem to “hallucination”[4] and “stochastic parrots”.[5] Hallucination occurs when LLMs generate text based on their internal logic or patterns rather than the true context, leading to confident but unjustified, unverified and deceptive responses. “Stochastic parrots” refers to the repetition of training data or its patterns rather than actual understanding or reasoning.

The text production method of LLMs is to reuse, reshape, and recombine the training data in new ways to answer new questions, while ignoring the question of the authenticity and trustworthiness of the answers. In short, LLMs only predict the probability of a particular word coming next in a sequence, rather than actually comprehending its meaning. Although the majority of answers are high-quality and true, they are generated without regard to their truth. Even though most training data is reliable and trustworthy, the essential issue is that recombining trustworthy data into new answers in a new context may produce untrustworthy results, as the trustworthiness of information is conditional and often context-bound. If this precondition of trustworthy data disappears, trust in the answers will be misplaced. Therefore, while the LLMs’ answers may seem highly relevant to the prompts, they are made up.
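To make the point concrete, here is a toy sketch of that text production method: a hypothetical bigram model with invented counts (real LLMs are vastly larger and neural, but the generation loop is conceptually similar) that samples each next token from a conditional probability distribution. Nothing in the loop checks whether the output is true.

```python
import random

# Invented bigram counts standing in for a trained model's parameters.
counts = {
    "the": {"court": 3, "model": 3, "cat": 4},
    "court": {"held": 6, "denied": 2},
    "model": {"predicts": 7, "hallucinates": 1},
    "cat": {"sat": 5, "ran": 2},
}

def next_token(context: str) -> str:
    """Sample the next token in proportion to how often it followed `context`."""
    dist = counts[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
word = "the"
sentence = [word]
while word in counts:          # generate until a token has no recorded successors
    word = next_token(word)
    sentence.append(word)

# Fluent-looking output, but no step anywhere verified a fact.
print(" ".join(sentence))
```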

However, merely improving the accuracy of the models through new data and algorithms is insufficient: the more accurate the model, the more users will rely on it and be tempted not to verify the answers, leading to greater risk when stochastic parrots and hallucinations appear. This situation, where an increase in accuracy leads to higher reliance and potential risk, can be described as the ‘accuracy paradox’. The risk is beyond measure if users encounter these problems in especially sensitive areas such as healthcare or the legal field. Even when utilizing real-time internet sources, the trustworthiness of LLMs may remain compromised, as exemplified by the factual errors in the new Bing’s launch demo.

These risks can lead to ethical concerns, including misinformation and disinformation, which may adversely affect individuals through misunderstandings, erroneous decisions, loss of trust, and even physical harm (e.g., in healthcare). Misinformation and disinformation can reinforce bias,[6] as LLMs may perpetuate stereotypes present in their training data.[7]

The EU AI regulatory paradigm: advanced legal intervention required

The EU has already commenced putting effort into AI governance. The AI Act (AIA) is the first and globally most ambitious attempt to regulate AI. However, the proposed AIA, employing a risk-based taxonomy for AI regulation, encounters difficulties when applied to general-purpose LLMs. On the one hand, categorizing LLMs as high-risk AI due to their generality may impede EU AI development. On the other hand, if general-purpose LLMs are regarded as chatbots, falling within the limited-risk group, merely imposing transparency obligations (i.e., providers need to disclose that the answer is generated by AI) would be insufficient,[8] because the danger of parroting and hallucination lies not only in whether users are clearly informed that they are interacting with AI, but also in the reliability and trustworthiness of LLMs’ answers, i.e., in how users can distinguish between true and made-up answers. When a superficially eloquent and knowledgeable chatbot generates unverified content with apparent confidence, users may trust the fictitious content without undertaking verification. Therefore, the AIA’s transparency obligation is not sufficient.

Additionally, the AIA does not fully address the role, rights, or responsibilities of end-users. As a result, users have no avenue to contest or complain about LLMs, especially when stochastic parrots and hallucination occur and affect their rights. Moreover, the AIA does not impose any obligations on users, yet disinformation is largely the product of deliberate misuse by users. Without imposing responsibilities on the user side, it is difficult to regulate the harmful use of AI by users. Meanwhile, it has been argued that the logic of the AIA is to work backward from certain harms to measures that mitigate the risk that these harms materialize.[9] The primary focus ought to shift towards the liability associated with the quality of input data, rather than imposing unattainable obligations on data quality.

Apart from the AIA, the Digital Services Act (DSA) aims to govern disinformation. However, the DSA focuses only on the responsibilities of intermediaries, overlooking the source of the disinformation. Imposing obligations only on intermediaries when LLMs are embedded in services is insufficient, as such regulation cannot reach the underlying developers of LLMs. Similarly, the Digital Markets Act (DMA) focuses on the regulation of gatekeepers, aiming to establish a fair and competitive market. Although scholars have recently claimed that the DMA has significant implications for AI regulation,[10] the DMA primarily targets the effects of AI on market structure; it can only provide limited help with LLMs. The problem that the DSA and DMA will face is that both govern only the platform, not the usage, performance, and output of AI per se. This regulatory approach is a consequence of the current platform-as-a-service (PaaS) business model. However, once the business model shifts to AI model-as-a-service (MaaS),[11] this regulatory framework is likely to become nugatory, as the platform does not fully control the processing logic and output of the algorithmic model.

Therefore, it is necessary to urgently reconsider the regulation of general-purpose LLMs.[12] The parroting and hallucination issues show that minimal transparency obligations are insufficient, since LLMs often lull users into misplaced trust. When using LLMs, users should be acutely aware that the answers are made-up, may be unreliable, and require verification. LLMs should be obliged to remind and guide users on content verification. Particularly when prompted with sensitive topics, such as medical or legal inquiries, LLMs should refuse to answer, instead directing users to authoritative sources with traceable context. The suitable scope for such filter and notice obligations warrants further discussion from legal, ethical and technical standpoints.
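A minimal sketch of what such a filter-and-notice obligation might look like follows. The keyword lists, referral wording and reminder text are all invented for illustration; a real system would need far more robust topic detection than keyword matching.

```python
# Hypothetical sensitive-topic keywords and referral targets (illustrative only).
SENSITIVE = {
    "medical": ("diagnosis", "dosage", "symptom"),
    "legal": ("lawsuit", "contract", "liability"),
}
REFERRALS = {
    "medical": "a licensed physician or an official health authority",
    "legal": "a qualified lawyer or an official legal aid service",
}

NOTICE = "\n\n[AI-generated content: unverified; please check before relying on it.]"

def respond(prompt: str, model_answer: str) -> str:
    lowered = prompt.lower()
    for domain, keywords in SENSITIVE.items():
        if any(k in lowered for k in keywords):
            # Filter obligation: refuse and redirect to an authoritative source.
            return (f"This appears to be a {domain} question. I will not answer it; "
                    f"please consult {REFERRALS[domain]}.")
    # Notice obligation: remind the user that the answer is generated text.
    return model_answer + NOTICE

print(respond("What ibuprofen dosage is safe for a child?", "..."))
print(respond("Summarize the plot of Hamlet.", "Hamlet is a tragedy in which..."))
```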

Furthermore, legislators should reassess the risk-based AI taxonomy in the AIA. The above discussion suggests that the effective regulation of LLMs needs to ensure their trustworthiness, taking into account the reliability, explainability and traceability of generated information, rather than solely focusing on transparency. Meanwhile, end-users, developers and deployers’ roles should all be considered in AI regulations, while shifting focus from PaaS to AI MaaS.


[1] The work is adapted and developed from the preprint version of a paper published in Nature Machine Intelligence, “Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence. https://doi.org/10.1038/s42256-023-00672-y”

[2] ‘Much to Discuss in AI Ethics’ (2022) 4 Nature Machine Intelligence 1055.

[3] Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence.

[4] Ziwei Ji and others, ‘Survey of Hallucination in Natural Language Generation’ [2022] ACM Computing Surveys 3571730.

[5] Emily M Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM 2021) <https://dl.acm.org/doi/10.1145/3442188.3445922> accessed 14 January 2023.

[6] Marvin van Bekkum and Frederik Zuiderveen Borgesius, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770.

[7] Zihao Li, ‘Affinity-Based Algorithmic Pricing: A Dilemma for EU Data Protection Law’ (2022) 46 Computer Law & Security Review 1.

[8] Lilian Edwards, ‘The EU AI Act: A Summary of Its Significance and Scope’ (Ada Lovelace Institute 2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf> accessed 17 January 2023.

[9] Martin Kretschmer and others, ‘The Risks of Risk-Based AI Regulation: Taking Liability Seriously’.

[10] Philipp Hacker, Johann Cordes and Janina Rochon, ‘Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and Beyond’ [2022] SSRN Electronic Journal <https://www.ssrn.com/abstract=4316944> accessed 8 January 2023.

[11] Tianxiang Sun and others, ‘Black-Box Tuning for Language-Model-as-a-Service’, Proceedings of the 39th International Conference on Machine Learning (PMLR 2022) <https://proceedings.mlr.press/v162/sun22e.html> accessed 10 February 2023.

[12] Philipp Hacker, Andreas Engel and Theresa List, ‘Understanding and Regulating ChatGPT, and Other Large Generative AI Models: With input from ChatGPT’ [2023] Verfassungsblog <https://verfassungsblog.de/chatgpt/> accessed 20 May 2023.

New EU rules for a Common Charger for Electronic Devices

By Olia Kanevskaia

On November 23, 2022, the European Union (“EU”) adopted Directive 2022/2380, which mandates a common, EU-wide charger for electronic equipment (“the Common Charger Directive”).[1] The Directive prescribes the USB Type-C port as a mandatory standard for wired charging for a range of devices.

The new law amends the Radio Equipment Directive, which established a framework for the placing of radio and telecommunications equipment on the EU market.[2] The legislation was passed after the Council’s and the European Parliament’s approval of the European Commission’s proposal, introduced in September 2021, and has been in force since December 2022. The EU Member States are required to transpose the Common Charger Directive into their national laws by December 28, 2023.

Objectives of the Directive

The new Directive mainly pursues two objectives: 1) the economic objective of the EU internal market, and 2) the EU environmental objectives of reducing CO2 emissions and electronic waste. The economic objective prevails, since the legal basis for the Common Charger Directive is harmonization for the purpose of the proper functioning of the EU internal market.[3]

The new Directive harmonizes the EU-wide communication protocols and interfaces for wired chargers used, among others, in mobile phones, keyboards and laptops. As follows from the recitals, fragmentation of the EU market for radio equipment due to different national regulations and practices risks affecting cross-border trade and jeopardizing the functioning of the EU internal market.[4]

Furthermore, the Common Charger Directive aims to reduce electronic waste and greenhouse gas emissions that are the result of production and disposal of different electronic chargers. The Directive thus fits among the recent EU legislative initiatives aiming to boost circular economy.[5]

While the Directive does not explicitly list “consumer protection” as one of its objectives, it makes frequent references to consumer benefits and convenience from a common charger.

Wired charging standards

Electronic devices can be charged through cables or wires that plug into the device on one side and into the power outlet on the other. Connectors for wired chargers are not harmonized across different categories of radio and telecommunications equipment: while most devices support USB Type-C connectors, iPhones famously ran on Apple’s proprietary Lightning connector. The type of connector – in other words, the standard for wired charging – is typically determined by the market rather than by law.

The USB standards are developed by the USB Implementers Forum – a global non-profit organization dedicated to developing, testing and promoting USB technologies. The USB Type-C standard is also endorsed as an international standard by the International Electrotechnical Commission (“IEC”) and transposed into a European standard by the European Committee for Electrotechnical Standardization (CENELEC).[6] The USB Type-C standard is widely used for different types of devices; yet this standard, and its international and European implementations, in principle remains voluntary.[7]

The EU has been restating the importance of compatibility between wired chargers for quite a while, but until recently it mainly relied on the industry to agree on common rules. In 2009, fourteen major phone manufacturers, including Samsung, LG and Apple, signed a voluntary commitment to develop a common charging solution in the form of a Memorandum of Understanding (“MoU”).[8] While many devices have since adopted Micro USB or, later, the USB Type-C standard as a wired charger connector, the MoU still allowed for the existence of proprietary charging interfaces like Apple’s Lightning. Attempts in European standardization committees to agree on a common connector seemed to have reached an impasse, and the voluntary approach resulted in many frustrations for the European legislator and consumer alike.

Upon the expiration of the MoU in 2014, the European Commission launched two impact assessment studies on the potential for implementing a common solution for wired chargers, followed by a European Parliament resolution on a common charger for mobile radio equipment in 2020.[9] This eventually led to the Commission’s proposal to amend the Radio Equipment Directive and to mandate the USB Type-C standard as an EU-wide standard for electronic devices. Similar requirements may be adopted for wireless chargers in the near future.[10]

Key requirements of the new Directive

Article 3 of the Directive mandates a USB Type-C charger for a list of electronic equipment, including mobile phones, tablets, headsets, keyboards, e-readers and laptops.[11] This means that devices must already be manufactured with a USB-C connector to be legally marketed in the EU. The European Commission reserves the right to amend the list of equipment that has to comply with the USB Type-C charger in the light of scientific and technological progress or market developments. The listed equipment must comply with the mandated wired charging requirement by December 28, 2024; for laptops, the deadline is April 28, 2026.

The Commission may further adopt rules for charging interfaces and communications protocols for equipment that can be charged by means other than wired charging. This includes requesting the European standardization organizations to develop harmonized standards for charging interfaces and communications protocols for such equipment. Harmonized standards are voluntary, but compliance with them grants presumption of compliance with European legislation.

When adopting or amending the rules for equipment charged by either wired or other means of charging, the European Commission should take into account the market acceptance of technologies under consideration, consumer convenience, and the reduction of environmental waste and market fragmentation. According to Article 3 (4) of the Directive, these objectives are presumed to be met by technical specifications that are based on relevant available international or European standards. The Directive, however, does not explain what it means by “being based on” and “relevant” or “available” standards. If such standards do not exist, or if the Commission determines that they do not meet the required objectives in an optimal manner, the Commission may develop its own technical specifications: this is in line with the Commission’s power to develop “common specifications” under the new legislation that heavily relies on harmonized standards.[12]

Furthermore, consumers should also be able to purchase electronic equipment without any charging device,[13] provided that the economic operators clearly indicate on a label whether or not the charger is included.[14] The Commission will monitor the extent to which this “unbundling” of charging devices from the radio equipment needs to be made mandatory.[15]

Outlook

The new Directive was met with enthusiasm by consumers, who will not need to purchase a new charger every time they buy a new electronic device. This will also reduce switching costs and prevent consumer lock-in in particular technologies or equipment. The disposal of wired chargers is also likely to be reduced, contributing to the EU’s environmental goals.

In turn, the requirement of a mandated standard for wired chargers does not sit well with some equipment manufacturers. For Apple, the new law means re-designing its products to comply with the EU legal requirements. Furthermore, many companies oppose the approach of mandating standards and technologies “top down”, since technology selection typically occurs through industry rather than the legislature. The danger is that, while pursuing the objective of greater interoperability, the EU will use this Directive as a precedent for intervening in market processes and, by this means, stifle innovation and technological advancement.


[1] Directive (EU) 2022/2380 of the European Parliament and of the Council of 23 November 2022 amending Directive 2014/53/EU on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment, OJ L 315

[2] Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonization of the laws of the member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC, OJ L 153

[3] Article 114 TFEU

[4] Recitals 7 and 8 Common Charger Directive

[5] Recital 3 Common Charger Directive

[6] European Standard EN IEC 62680-1-3:2021 ‘Universal serial bus interfaces for data and power – Part 1-3: Common components – USB Type-C® Cable and Connector Specification’

[7] Case C-613/14, James Elliott Construction Ltd v. Irish Asphalt Ltd [2016] para 53

[8] MoU regarding Harmonisation of a Charging Capability for Mobile Phones (June 5th, 2009)

[9] European Parliament resolution of 30 January 2020 on a common charger for mobile radio equipment (2019/2983(RSP)) OJ C 331

[10] Recital 13 Common Charger Directive

[11] Article 3(4) and Annex Ia Part I Common Charger Directive

[12] See, for example, Article 41 of the Communication (COM)2021 206 final from the Commission of 21 April 2021 on a Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts

[13] Article 3a Common Charger Directive

[14] Articles 10(8), 12(4) and 13(4) Common Charger Directive

[15] Article 47 Common Charger Directive

Return to Sender: The Right of Withdrawal for Contracts for the Supply of Digital Content

By Sebastian Pech

The Consumer Rights Directive (2011/83/EU) from 2011 provides for a right of withdrawal for distance contracts not only for digital content distributed on a tangible medium (e.g., DVD, Blu-ray), but also for digital content supplied without a data carrier, e.g., downloading or streaming. The Consumer Rights Directive was modified in 2019 by the Enforcement and Modernisation Directive ((EU) 2019/2161). The amendments were to be transposed into national law by the member states by November 28, 2021; the provisions are to be applied as of May 28, 2022.

The Directive on Contracts for the Supply of Digital Content and Digital Services ((EU) 2019/770), that also dates from 2019, governs further aspects of contract law regarding the supply of digital content to consumers, such as in particular the requirements for providing digital content in conformity with the contract, as well as remedies. The respective implementations in the member states are already applicable from January 1, 2022.

This contribution provides an overview of the right of withdrawal for contracts for the supply of digital content under the new regulations.

The right of withdrawal in a nutshell

The right of withdrawal aims to provide the consumer with an opportunity to test the goods, since unlike a retail store, a distance contract does not allow the consumer to examine the goods before the contract is concluded.

Therefore, the consumer can withdraw from a distance contract within a period of 14 days, without giving any reason (Article 9 (1) Consumer Rights Directive). As a result, the contracting parties are freed from the obligations under the contract (Article 12). In case of a withdrawal, the trader must reimburse all payments received from the consumer (Article 13 (1)), and the consumer must return the received goods to the trader (Article 14 (1)). The consumer must pay compensation for a loss of value of the goods only if it is caused by handling of the goods which was unnecessary for examining the goods (Article 14 (2)).

Distinctive aspects of contracts for the supply of digital content

Digital content differs from other products since it can be copied as often as desired without any loss of quality. In some cases, such as movies and series, digital content is often purchased for one-time consumption only. These special aspects must also be considered in the context of the right of withdrawal. Otherwise, the consumer could abuse this right by consuming the content before withdrawing or retaining a copy and thereby appropriating its economic value.

Digital content supplied on a tangible medium

Therefore, the Consumer Rights Directive provides for an exception to the right of withdrawal if the consumer unseals a sealed medium containing digital content (Article 16 (i)).

In the case that the right of withdrawal is not excluded, the consumer can withdraw within a period of 14 days after receiving the tangible medium from the trader (Article 9 (2) (b)).

Even if the value of digital content is not usually tied to the physical medium holding it, a return by the consumer is in the trader’s interest: it ensures that the consumer does not continue to use the digital content stored on the medium, and the trader can potentially resell the medium to another customer. If the medium is sealed, the trader can check the integrity of the seal to verify that the right of withdrawal has not been excluded.

If the consumer has made a copy of the digital content contained on the medium, for example by installing software on a device, they could continue to use the content even after returning the medium. Therefore, in the event of withdrawal from the contract, the new provision of Article 14 (2a) requires the consumer to refrain from using the digital content and from making it available to third parties, e.g., by deleting any copies.

Digital content supplied on an intangible medium

In the case that the digital content is supplied on an intangible medium, Article 16 (m) provides for an exception to the right of withdrawal once the trader has initiated the performance of the contract. In addition, it is required that (i) the consumer has provided prior express consent to initiate the performance, (ii) the consumer has acknowledged that they thereby lose the right of withdrawal, and (iii) the trader has provided confirmation in accordance with Article 8 (7).

The Enforcement and Modernisation Directive added this last requirement, which was missing from the old version of the Consumer Rights Directive: the confirmation must, inter alia, include (where applicable) the consumer’s prior express consent and acknowledgment in accordance with Article 16 (m).

In case the right of withdrawal is not excluded, the consumer can withdraw within a period of 14 days from the day of the conclusion of the contract (Article 9 (2) (c)).

The trader will usually have no interest in the consumer returning the downloaded data, for example by e-mail; new customers can download their copy from the master copy on the server. If the content is retrieved directly from the server, as in the case of streaming, there is no permanent storage on the consumer’s device. The decisive factor for the trader is preventing the consumer’s continued use of the content after exercising the right of withdrawal. The consumer must refrain from using the digital content and making it available to third parties in the event of withdrawal from the contract (Article 14 (2a)). If the consumer has downloaded digital content, they are obliged to delete it. In the case of direct server retrieval, the obligation to refrain from using the digital content pertains to the consumer not accessing the content any further.

In the event of withdrawal from a contract for the supply of digital content on an intangible medium, the consumer must pay no compensation for the use of the digital content before the withdrawal. This results from Article 14 (4) (b) in combination with Article 14 (2), (3), and (5). The reason for not requiring compensation is that digital content transmitted via the Internet does not degrade. In addition, the trader can decide to exclude the consumer’s right of withdrawal with beginning of the performance of the contract in accordance with Article 16 (m).

Provision of data by the consumer in return for the supply of digital content

The new provision of Article 3 (1a) introduced by the Enforcement and Modernisation Directive states that the Consumer Rights Directive also applies to contracts for digital content supplied on an intangible medium where the consumer provides personal data to the trader instead of paying a price. Even if the consumer is not obliged to use the content in the case of “payment with data”, the consumer may still wish to be able to terminate the contract easily by means of a withdrawal, without being bound, e.g., by notice periods.

The performance of the contract or compliance with statutory provisions often requires the provision of personal data (e.g., name, e-mail address) of the consumer, in which case the trader cannot gain any independent economic advantage from the data provided. Therefore, Article 3 (1a) excludes these cases from the scope of the Directive, provided that the trader processes the data exclusively for this purpose.

According to Article 16 (m), the right of withdrawal is excluded with the beginning of the performance of the contract for contracts that do not oblige the consumer to pay a price.

Conclusion

The Consumer Rights Directive distinguishes between digital content distributed on a tangible medium and digital content distributed on an intangible medium. Depending on the type of digital content, different regulations apply to the prerequisites and legal consequences of the right of withdrawal.

The Enforcement and Modernisation Directive contains several modifications to the Consumer Rights Directive; the extension of the scope of application to contracts in which the consumer pays in data instead of paying a price is most significant. This creates a concurrence with the Directive on Contracts for the Supply of Digital Content and Digital Services which also applies to contracts where the consumer provides personal data, instead of money, to the trader.

EU Artificial Intelligence Act: The European Approach to AI

By Mauritz Kop[1]

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. As a Fellow at Stanford University’s Transatlantic Technology Law Forum and a Member of the European AI Alliance, I made independent strategic recommendations to the European Commission. President Ursula von der Leyen’s team adopted some of the suggestions that I offered, or arrived at the same conclusions on its own. That is encouraging. This contribution lists the main points of this novel regulatory framework for AI.

Core horizontal rules for AI

The EU AI Act sets out horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU. The draft regulation provides core artificial intelligence rules that apply to all industries. The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of 4 risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets. The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law.

Objectives of the EU Artificial Intelligence Act

The proposed regulatory framework on Artificial Intelligence has the following objectives:

1. ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

2. ensure legal certainty to facilitate investment and innovation in AI;

3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Subject Matter of the EU AI Act

The scope of the AI Act is largely determined by the subject matter to which the rules apply. In that regard, Article 1 states that:

Article 1
Subject matter

This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;

(e) rules on market monitoring and surveillance.

Pyramid of Criticality: Risk based approach

To achieve these goals, the draft Artificial Intelligence Act combines a risk-based approach, built on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with negligible risk, and that applications posing an unacceptable risk are banned. Between these extremes of the spectrum, regulation becomes stricter as risk increases, ranging from non-binding, self-regulatory soft-law impact assessments accompanied by codes of conduct, to heavy, externally audited compliance requirements throughout the life cycle of the application.

[Figure: The Pyramid of Criticality for AI Systems]
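As a rough sketch, the layered regime can be pictured as a mapping from risk tier to duties. The tier names follow the pyramid; the obligations listed are simplified paraphrases for illustration, not the Act’s actual legal requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # CE-marking conformity regime
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no new obligations

# Simplified, paraphrased duties per tier (illustrative only).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the Union market"],
    RiskTier.HIGH: ["pre-market conformity assessment", "CE marking",
                    "registration in an EU database", "post-market monitoring"],
    RiskTier.LIMITED: ["disclose that the interaction or content is AI-generated"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Stricter duties apply as risk increases up the pyramid."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```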

Unacceptable Risk AI systems

Unacceptable Risk AI systems can be divided into four categories: two concern cognitive behavioral manipulation of persons or specific vulnerable groups; the other two prohibited categories are social scoring and real-time remote biometric identification systems. There are, however, exceptions to the main rule for each category. The criterion for qualification as an Unacceptable Risk AI system is the harm requirement.

Examples of High-Risk AI-Systems

High-Risk AI systems will be carefully assessed before being put on the market and throughout their lifecycle. Some examples include:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams)
  • Safety components of products (e.g. AI application in robot-assisted surgery)
  • Employment, workers management and access to self-employment (e.g. CV sorting software for recruitment procedures)
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan)
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents)
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)
  • Surveillance systems (e.g. biometric monitoring for law enforcement, facial recognition systems)

Market Entrance of High-Risk AI-Systems: 4 Steps

In a nutshell, these four steps should be followed before a High-Risk AI system can enter the market (a schematic sketch follows step 4). Note that these steps apply to components of such AI systems as well.

1. A High-Risk AI system is developed, preferably using internal ex ante AI Impact Assessments and Codes of Conduct overseen by inclusive, multidisciplinary teams.

2. The High-Risk AI system must undergo an approved conformity assessment and continuously comply with AI requirements as set forth in the EU AI Act, during its lifecycle. For certain systems an external notified body will be involved in the conformity assessment audit. This dynamic process ensures benchmarking, monitoring and validation. Moreover, in case of changes to the High-Risk AI system, step 2 has to be repeated.

3. Registration of the stand-alone High-Risk AI system takes place in a dedicated EU database.

4. A declaration of conformity must be signed, and the High-Risk AI system must carry the CE marking (Conformité Européenne). The system is then ready to enter the European markets.
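The sequence can be sketched as a simple pipeline. This is a loose illustration only: the names and flags are invented, and the real procedure under the Act is considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class HighRiskAISystem:
    name: str
    conformity_ok: bool = False   # step 2: conformity assessment passed
    registered: bool = False      # step 3: entered in the EU database
    ce_marked: bool = False       # step 4: CE marking affixed

def conformity_assessment(system: HighRiskAISystem) -> None:
    """Step 2: internal control or an externally audited assessment,
    depending on the type of system; repeated after any change."""
    system.conformity_ok = True

def bring_to_market(system: HighRiskAISystem) -> HighRiskAISystem:
    conformity_assessment(system)          # step 2
    system.registered = True               # step 3
    system.ce_marked = True                # step 4
    return system

def substantial_change(system: HighRiskAISystem) -> None:
    system.conformity_ok = False           # a change invalidates conformity...
    conformity_assessment(system)          # ...so step 2 is repeated

# Step 1: the system is developed (ideally with ex ante impact assessments).
system = bring_to_market(HighRiskAISystem("CV-sorting tool"))
print(system)
```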

But this is not the end of the story…

In the vision of the EC, after the High-Risk AI system has obtained market approval, authorities at both Union and Member State level ‘will be responsible for market surveillance, end users ensure monitoring and human oversight, while providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.’[2] In other words, continuous upstream and downstream monitoring.

Since people have the right to know whether and when they are interacting with a machine’s algorithm rather than a human being, the AI Act introduces specific transparency obligations for both users and providers of AI systems, such as bot disclosure. Likewise, specific transparency obligations apply to automated emotion recognition systems, biometric categorization, and deepfake/synthetic content disclosure. Limited Risk AI Systems, such as chatbots, are subject to specific transparency obligations as well. The only category exempt from these transparency obligations sits at the bottom of the pyramid of criticality: Minimal Risk AI Systems.

In addition, natural persons should be able to oversee the High-Risk AI system. This is termed the human oversight requirement.

Open Norms

The definition of high-risk AI applications is not yet set in stone, although Article 6 does provide classification rules. Presumably, the qualification will remain a somewhat open standard within the regulation, subject to changing societal views and to interpretation by the courts, ultimately by the EU Court of Justice. Such a standard is open in terms of content and needs to be fleshed out in more detail under different circumstances, for example using a catalog of viewpoints. Open standards entail the risk of differences of opinion about their interpretation. If the legislator does not offer sufficient guidance, the courts will ultimately have to decide on the interpretation of the standard. This can be seen as a less desirable side of regulating with open standards. A clear risk taxonomy would contribute to legal certainty and offer stakeholders appropriate answers to questions about liability and insurance.

Enforcement

The draft regulation provides for the installation of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violation of the rules can be up to 6% of global turnover, or 30 million euros for private entities.

‘The proposed rules will be enforced through a governance system at Member States level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.’[3]

CE-marking: pre-market conformity requirements

In line with my recommendations, Article 49 of the Artificial Intelligence Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity & legal compliance mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Please note that this pre-market conformity regime also applies to machine learning training, testing and validation datasets on the basis of Article 10. These corpora need to be representative (I would almost say: inclusive), high-quality, adequately labelled and error-free to ensure non-discriminatory and non-biased outcomes. Thus, the input data must abide by the high standards of trustworthy AI as well.

Pursuant to Article 40, harmonized standards for high-risk AI systems are published in the Official Journal of the European Union:

Article 40
Harmonised standards

High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.

The CE marking for the individual types of high-risk AI systems can be applied for via the procedure described in Article 43.

Article 43
Conformity assessment

1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:

(a) the conformity assessment procedure based on internal control referred to in Annex VI;

(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.

Article 43 paragraph 6 aims to prevent or avoid risks with regard to health, safety and fundamental rights:

6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.

Article 48 paragraph 1 (EU declaration of conformity) indicates that:

Article 48
EU declaration of conformity

1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.

Further, Article 49 (CE marking of conformity) determines that:

Article 49
CE marking of conformity

1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.

2. The CE marking referred to in paragraph 1 of this Article shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.

3. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.

Finally, Article 30 of the draft regulation on notifying authorities provides that:

Article 30
Notifying authorities

1. Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.

2. Member States may designate a national accreditation body referred to in Regulation (EC) No 765/2008 as a notifying authority.

3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies and the objectivity and impartiality of their activities are safeguarded.

Self-assessment too non-committal (non-binding)?

First, it is crucial that certification bodies and notified bodies are independent and that no conflicts of interest arise due to a financial or political interest. In this regard, I wrote elsewhere that the EU should be inspired by the modus operandi of the US FDA.

Second, the extent to which companies can achieve compliance with this new AI ‘product safety regime’ through risk-based self-assessment and self-certification, without third-party notified bodies, determines the effect of the Regulation on business practices and thus on the preservation and reinforcement of our values. Internally audited self-assessment is too non-committal given the high risks involved. Therefore, I think it is important that the final version of the EU AI Act subjects all high-risk systems to external, independent third-party assessment requirements. Self-regulation, combined with awareness of the risks via (voluntary or mandatory) internal AI impact assessments, is not enough to protect our societal values, since companies have completely different incentives for promoting social good and pursuing social welfare than the state does. We need mandatory third-party audits for all High-Risk AI Systems.

In this regard, it is interesting to compare the American way of regulating AI with the European approach. In America, people tend to advocate free-market thinking and a laissez-faire approach. For example, The Adaptive Agents Group, a Stanford University, Silicon Valley group, recently proposed the Shibboleth Rule for Artificial Agents. Their proposal is reminiscent of the EU human oversight requirement, and maintains that:

‘Any artificial agent that functions autonomously should be required to produce, on demand, an AI shibboleth: a cryptographic token that unambiguously identifies it as an artificial agent, encodes a product identifier and, where the agent can learn and adapt to its environment, an ownership and training history fingerprint.’[4]

Their modest proposition contrasts strongly with the broadly scoped European legal-ethical framework. However, history has already taught us, dramatically, that the power and social impact of AI are too great to be left largely to the companies themselves.

In addition, it is key that international standard-setting bodies like ISO and IEEE adopt and translate the norms and values of the EU AI Act into their own technical standards, so that the two are in line with each other. Such harmonized standards will encourage sustainable innovation and responsible business practices. In other words, worldwide adoption of such technical standards increases the chance that leading firms will adjust their behavior vis-à-vis AI.

Moreover, a harmonized global framework prevents forum shopping. By forum shopping I mean seeking out the most favorable regime for asserting one’s own rights, motivated by financial interests that often come at the expense of consumers, competition, the environment and society.

Innovation-Friendly Flexibilities: Legal Sandboxes

In line with my recommendations, the draft aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. To guarantee room for innovation, it therefore introduces various flexibilities and exceptions, including AI regulatory sandboxes that afford breathing room to research institutions and SMEs. Further, an IP Action Plan has been drawn up to modernize technology-related intellectual property laws.

‘Additional measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures to reduce the regulatory burden and to support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.’[5]

The concept thus seeks to balance divergent interests, including democratic, economic and social values, which inevitably means that trade-offs will be made. It is to be hoped that during its journey through the European Parliament, the proposal will not be reduced to an unworkable compromise, as recently happened with the Copyright Directive under the influence of the lobbying power of a motley crew of stakeholders.

Sustainability

Moreover, the explanatory memorandum pays attention to the environment and sustainability: the ecological footprint of technologies should be kept as small as possible, and the application of artificial intelligence should support socially and environmentally beneficial outcomes. This is in line with Article 37 of the EU Charter of Fundamental Rights (‘the Charter’) and the EU Green Deal, which strives for the decarbonization of our society.

Sector specific rules

On top of the new AI rules, AI-infused systems, products and services must also comply with sector-specific regulations such as the Machinery Directive and the Regulations for medical devices (MDR) and in vitro diagnostics (IVDR). Furthermore, besides the General Data Protection Regulation (GDPR) for personal data, the FFD Regulation for non-personal data and both the GDPR and the FFD Regulation for mixed datasets, the upcoming Data Act will apply. The latter is relevant, among other things, to B2B and B2G data sharing (depending on the types of data used), the use of privacy-preserving synthetic dataset generation techniques, and the use of machine learning training and validation data sets. In addition, audits of products and services equipped with AI must fit into the existing quality management systems of industries and economic sectors such as logistics, energy and healthcare.

Regulations versus Directives

In the EU, regulations result in the unification of legal rules: they are directly applicable in the national legal orders of the Member States, without the need for transposition or implementation, and Member States have no discretion for their own interpretation of the Brussels rules. Directives, by contrast, lead to harmonization of legal rules and do leave Member States that room; the recent Copyright Directive, for example, had to be transposed into national law. As soon as the European Parliament and the Council agree on the final text in mid-2022, and if it is adopted, the AI Regulation will be immediately applicable law in all countries of the European Union.

AI Governance: trans-Atlantic perspectives

It is understandable that the European Union considers AI to be part of European strategic autonomy. Moreover, a degree of strategic European digital sovereignty is needed to safeguard European culture. Nevertheless, it is of existential importance for the EU to work in concert with countries that share our European digital DNA, based on common respect for the rule of law, human rights and democratic values. Against this background, it is essential to stimulate systematic, multilateral transatlantic cooperation and to jointly promote and achieve inclusive, participatory digitalization. The transatlantic and geopolitical dialogue on transformative technology, together with the development of globally accepted technology standards and protocols for interoperability, should be strengthened.

Setting Global Standards for AI

It takes courage and creativity to legislate in this stormy, interdisciplinary field, forcing US and Chinese companies to conform to values-based EU standards before their AI products and services can access the European market with its 450 million consumers. Consequently, the proposal has extraterritorial effect.

By drafting the Artificial Intelligence Act and embedding our norms and values into the architecture and infrastructure of our technology, the EU provides direction and leads the world towards a meaningful destination, as the Commission did before with the GDPR, which has since become the international blueprint for privacy, data protection and data sovereignty.

Methods also useful for other emerging technologies

While enforcing the proposed rules will be a whole new adventure, the novel legal-ethical framework for AI enriches the way we think about regulating the Fourth Industrial Revolution (4IR). This means that, if proven useful and successful, methods from this legal-ethical framework can also be used to regulate 4IR technologies such as quantum technology, 3D printing, synthetic biology, virtual reality, augmented reality and nuclear fusion. It should be noted that each of these technologies requires a differentiated horizontal-vertical legislative approach in terms of innovation incentives and risks.

Trustworthy AI by Design

Responsible, Trustworthy AI requires awareness from all parties involved, from the first line of code. The way in which we design our technology shapes the future of our society, and in this vision democratic values and fundamental rights play a key role. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct, executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.


[1] Mauritz Kop is Stanford Law School TTLF Fellow at Stanford University and is Managing Partner at AIRecht, Amsterdam, The Netherlands.

[2] https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en

[3] Ibid.

[4] https://hai.stanford.edu/news/shibboleth-rule-artificial-agents

[5] https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en

EU Digital Consumer Contract Law – The Directive on Contracts for the Supply of Digital Content and Digital Services

By Sebastian Pech

The Directive (EU) 2019/770 on Contracts for the Supply of Digital Content and Digital Services governs contracts between traders and consumers for the supply of digital content and digital services, and will apply from January 1, 2022. The Directive’s scope of application is broad and affects many types of contracts. This contribution provides an overview of the new regulations.

1. Background of the Directive

The Directive is intended to ensure a high level of consumer protection and legal certainty in cross-border transactions involving digital content and services (see Recitals 4–11). To that end, the Directive follows the principle of full harmonization, meaning that Member States may not maintain or introduce national rules that diverge from the Directive, whether more stringent or more lenient (Article 4). Furthermore, the regulations set forth in the Directive are of a mandatory nature: contractual terms between traders and consumers that deviate from them to the detriment of the consumer are not binding on the consumer (Article 22).

The member states had to transpose the Directive into national law by July 1, 2021, and the new regulations will apply from January 1, 2022 (Article 24).

2. Scope of Application

a. Material Scope

The material scope of the Directive relates to digital content and digital services:

  • Digital content means “data which are produced and supplied in digital form” (Article 2 (1)). This includes computer programs, games, music, videos, and texts in digital form, regardless of whether they are provided on a physical medium (e.g., CD, DVD, USB stick), as a download, or via streaming (Recital 19). The supply of digital content can occur through a single act (e.g., a file that is downloaded to the consumer’s device, to which the consumer then has indefinite access) or on an ongoing basis for a specified period (e.g., a movie on a streaming platform, to which the consumer has access only during the term of the contract) (Recitals 56, 57).
  • Digital services comprise services “that allow[…] the consumer to create, process, store or access data in digital form” as well as services “that allow[…] the sharing of or any other interaction with data in digital form, [which are] uploaded or created by the consumer or other users of that service” (Article 2 (2)). Examples of digital services are cloud storage, messenger services, online games, and social networks (Recital 19).

The definitions of digital content and digital services are intentionally broad, to cover future technical developments (Recitals 10, 19). In practice, it is not always possible to clearly distinguish between digital content and digital services. In most cases, however, this distinction is unnecessary, as both categories are largely treated in the same way.

The Directive applies not only to contracts in which the consumer pays money to the trader, but also to those in which the consumer provides personal data that are processed by the trader (Article 3 (1)). The only exception is where the personal data are used by the trader exclusively for performing the contract (e.g., requesting an email address because the contract is performed via e-mail) or for complying with a legal obligation (e.g., one originating from tax law). In practice, “paying with data” is particularly common in contracts on social networks.

b. Personal Scope

Regarding the personal scope of the Directive, the contract must be concluded between a trader and consumer (B2C). Contracts between businesses (B2B) are not covered.

3. Obligation of the Trader to Supply Digital Content or Service to the Consumer

a. Extent of the obligation

The trader has an obligation to supply the digital content or service to the consumer by making it accessible or available to them (Article 5). Actual transmission to the consumer is not required: the trader’s obligation is fulfilled as soon as the consumer can use the digital content or service without any further action by the trader (Recital 41).

If no time of performance has been agreed on by the parties, the trader must provide the digital content or service without undue delay after the conclusion of the contract (Article 5 (1)).

b. Burden of Proof

The burden of proof regarding whether the digital content or service was supplied in time is on the trader.

c. Remedies

In case the trader fails to supply the digital content or service to the consumer in time, the consumer is entitled to terminate the contract, after having unsuccessfully requested the trader to provide the content or service (Article 13 (1)). In certain cases, the consumer’s request is not required, for example, if the trader refuses to provide the content or service (Article 13 (2)).

In the event of termination of the contract, the trader must reimburse the consumer for any payments already made (Articles 13 (3), 16 (1)).

4. Obligation of the Trader to Supply Digital Content or Service to the Consumer in Conformity with the Contract

Furthermore, the trader has an obligation to supply the digital content or service to the consumer in conformity with the contract (Article 6).

a. Extent of the obligation

Conformity with the contract requires that the digital content or service meets subjective and objective requirements, is integrated correctly, and does not infringe on the rights of third parties (Article 6):

  • Subjective requirements for conformity result from the agreement between the trader and the consumer (Article 7).
  • Objective requirements are determined by the circumstances of the contract and the nature of the digital content or service involved. Relevant factors are, in particular, (a) whether the digital content or service is fit for its usual purpose and (b) whether it possesses the usual quality for content or services of the same type, measured against what the consumer can reasonably expect given the nature of the content or service (Article 8 (1)). The usual quality includes requirements that result from public statements (e.g., advertising statements) of the trader and/or the developer of the digital content or service.

The trader may deviate from the objective requirements by agreement with the consumer. However, strict requirements are placed on such an agreement. The trader must inform the consumer specifically as to the deviations from the objective requirements, at the time of the conclusion of the contract, and the consumer must expressly and separately accept these deviations (Article 8 (5)).

  • A lack of conformity with the contract can also result from an incorrect integration of the digital content or service into the consumer’s digital environment, either by the trader or by the consumer due to shortcomings in the integration instructions provided by the trader (Article 9).
  • Finally, conformity with the contract requires that the use of the digital content or service does not violate the rights of third parties, especially intellectual property rights (Article 10).

The relevant point in time when the content or service must conform to the contract is determined by the type of supply:

  • Where a contract provides for a single act of supply, the content or service must comply with the contract at the time of supply (Article 11 (2)).
  • In the case of a continuous supply over a specific period, the content or service must conform to the contract during the entire period that it is supplied to the consumer (Article 8 (4), 11 (3)).

b. Burden of Proof

In general, the burden of proof as to whether the digital content or service was supplied in conformity with the contract is on the consumer (Recital 59). However, to protect the consumer, the burden of proof is shifted to the trader in certain instances. Here too, a distinction is made according to the type of supply:

  • Where a contract provides for a single act of supply, the burden of proof as to whether the supplied digital content or service conformed to the contract at the time of supply is on the trader for a period of one year from the time of supply (Article 12 (2)).
  • In the case of a continuous supply over a specific period, the burden of proof as to whether the digital content or service conformed to the contract within the period of supply is on the trader (Article 12 (3)).

c. Remedies

If the digital content or service is not provided to the consumer in conformity with the contract, the consumer is entitled to have the digital content or service brought into conformity with the contract, to receive a reduction in the price, or to terminate the contract (Article 14 (1)):

  • If the consumer demands to have the content or service brought into conformity, the trader must comply with this demand within a reasonable period and at its own cost, unless bringing the content or service into conformity is impossible or would involve disproportionate costs (Article 14 (2), (3)).

It is left to the trader’s discretion how to bring the content or service into conformity, for example by providing a new copy of the content or service to the consumer or by issuing an update (Recital 63). In practice, however, it is often not just the individual copy supplied to the consumer that lacks conformity, but the entire series (e.g., a software version); in that case, providing a new copy to the consumer will be insufficient. In addition, updating the digital content or service will often be impossible or disproportionately costly for the trader if the trader is not its developer. The Directive therefore leaves it to the member states to introduce a direct claim by the consumer against the developer of the digital content or service (Recital 13).

  • If certain conditions are met, for example when bringing the content or service into conformity is impossible or is refused by the trader, the consumer can demand a proportionate reduction of the price (Article 14 (4), (5)). When “paying with data,” however, such a reduction is excluded.
  • Instead of demanding a price reduction, the consumer may also terminate the contract. If the lack of conformity is only minor, termination of the contract is not possible (Article 14 (6)), unless the consumer “pays with data” (Recital 67).

Similar to the termination of the contract due to a failure to supply the digital content or service in time, the trader must reimburse the consumer for payments already made. However, in the case of continuous supply over a specific period, reimbursement will occur only for the time during which the digital content or service was not in conformity with the contract (Article 16 (1)).

After termination of the contract, the consumer may not continue to use the digital content or service or make it available to third parties (Article 17 (1)). In practice, this will not always be easy to control. If the digital content was provided on a physical medium, the consumer is obligated to return it at the request and expense of the trader (Article 17 (2)). The trader may also actively prevent the consumer from using the digital content or service, for example by disabling the user’s account or through technical measures (Article 16 (5)).

Conversely, if the consumer has created or supplied digital content to the trader (e.g., user-generated content), the trader must refrain from using that content after termination of the contract and must make it available to the consumer upon request (Article 16 (3), (4)). In most cases, however, the content created or provided by the consumer will be personal data; in that respect, the General Data Protection Regulation (GDPR) applies, and not the Directive on digital content and services (Recital 38).

5. Updates and Other Modifications of Digital Content and Service

a. Updates

The trader must provide updates that are necessary to maintain the conformity of the digital content or service with the contract (e.g., security updates) and inform the consumer thereof (Article 8 (2)). This applies not only to contracts for the continuous supply of digital content or services over a specific period, but also to contracts for a single act of supply.

The relevant duration for providing updates is determined by the type of supply:

  • In the case of a continuous supply over a specific period, the obligation to update runs for the entire contract term (Article 8 (2) (a)).
  • Where a contract provides for a single act of supply, the period depends on how long the consumer can reasonably expect updates to be provided (Article 8 (2) (b)). Factors to be considered here are the type and purpose of the digital content or service, the circumstances, and the nature of the contract.

The Directive does not establish an independent obligation on the trader to provide updates to the consumer, but instead treats updates as part of the obligation to supply the digital content or service in conformity with the contract. Therefore, if the trader fails to provide updates, the content or service falls short of the objective requirements for conformity. As a result, the consumer can, inter alia, demand to have the content or service brought into conformity by the trader (Article 14 (2), (3)). However, updating the digital content or service will often be impossible or disproportionately expensive for the trader if the trader is not its developer. In practice, if there is no direct claim against the developer of the content or service, the consumer is left only with the options of demanding a price reduction or terminating the contract with the trader.

b. Other Modifications

In the case of continuous supply over a specific period, the trader may have an interest in modifying the content or service beyond what is necessary to maintain conformity with the contract. This applies, for example, to a software’s range of features or to the content available on an audio or video streaming platform. Such modifications require that: (a) the contract allows, and provides a valid reason for, the modification, (b) the modification is made without additional cost to the consumer, and (c) the consumer is informed in a clear and comprehensible manner of the modification (Article 19 (1) (a)–(c)). These requirements apply to all modifications, regardless of whether they are favorable or unfavorable to the consumer (see Recital 75). However, if the modification negatively impacts the consumer’s access to or use of the digital content or service, the consumer must be informed reasonably in advance of the features and time of the modification (Article 19 (1) (d)). In addition, the consumer must be notified of their right to terminate the contract (Article 19 (2)) and of the possibility of keeping the digital content or service without the modification (Article 19 (4)).

6. Right of Redress

If the trader is liable to compensate the consumer because of a failure to supply the digital content or service, or because of its lack of conformity with the contract, and the issue was caused by a person in the supply chain (e.g., the developer), the trader is entitled to remedies against that person (Article 20).

7. Aspects Not Covered by the Directive

The Directive does not cover aspects of general contract law, such as the formation or validity of a contract on the supply of digital content or a digital service (Article 3 (10)). Nor does it classify the legal nature of contracts for digital content or services; these could, for example, take the form of sales, rental, or sui generis contracts (Recital 12). Furthermore, the Directive does not contain any provisions regarding the consumer’s right to damages in the case of a failure to supply digital content or services or in the event of a lack of conformity with the contract (Article 3 (10)). Finally, the question of what occurs if the consumer exercises their rights under the GDPR (e.g., withdrawing consent to the processing of personal data) is not addressed (Recital 40). This becomes particularly relevant when the consumer “pays with data.”

The issues that are not covered by the Directive can be regulated by the member states, at their own discretion.

8. Conclusion

The Directive establishes specific regulations for consumer contracts regarding digital content and services. These apply not only to contracts where the consumer pays a price, but also to contracts where the consumer provides personal data to the trader.

The Directive leaves not only certain aspects to be regulated by the Member States, but also specific questions to be clarified by the courts, such as the duration of the trader’s obligation to provide updates under contracts for a single act of supply.

It is also uncertain whether, in practice, consumers will be able to enforce the rights to which they are entitled, particularly claims against the trader to have the content or service brought into conformity where the trader is not the developer of the digital content or service.

Therefore, it remains to be seen whether the new regulations will achieve the goal of the Directive: to ensure a high level of consumer protection and legal certainty in cross-border transactions involving digital content and services.

European Commission Action Plan on Intellectual Property

By Pratyush Nath Upreti

On 25 November 2020, the European Commission adopted a new ‘Action Plan on Intellectual Property’ for the EU’s recovery and resilience. The action plan reaffirms intellectual property as a key driver of economic growth in the European Union and was drafted with the impact that Covid-19 may have on innovators and small and medium-sized enterprises (SMEs) in mind.

Challenges

Generally, the action plan aims to ensure that innovators have access to fast, effective, and affordable means to protect their intangible capital. It identifies five challenges that EU companies face in protecting that capital: (i) fragmentation in the EU’s IP system; (ii) SMEs’ inadequate use of the opportunities offered by IP protection; (iii) insufficient development of tools to facilitate access to IP; (iv) counterfeiting and piracy that are still thriving; and (v) a lack of fair play at the global level.

Focus areas of intervention

The action plan emphasizes ensuring fast, effective and affordable protection tools for innovators. To this end, the Commission has prioritized four improvements to IPR protection: (i) the rapid roll-out of the unitary patent system; (ii) optimizing the supplementary protection certificates system; (iii) reforming industrial design protection to support the digital and green economy; and (iv) improving the EU geographical indications system, with the prospect of extending protection to non-agricultural products. Another focus area the Commission aims to address is the EU’s capacity to innovate, by encouraging innovators and creators to utilize the opportunities that IP provides. This is done by introducing IP vouchers for SMEs hit by the Covid-19 crisis; financial support and help in managing SMEs’ IP portfolios are further short-term plans.

Access and Sharing of IP protected assets

The action plan emphasizes developing better licensing tools to facilitate access to IP in times of crisis. The Commission acknowledges the World Health Organization (WHO) Resolution in response to the COVID-19 crisis and reaffirms the relevance of ‘voluntary pooling and licensing of IP related to COVID-19 therapeutics and vaccines’ pursuant to that Resolution. Additionally, the Commission recognises the need for an ‘effective system for issuing compulsory licenses’, but only as ‘a means of last resort… when all other efforts to make IP available have failed’.

Concerning standard-essential patents (SEPs), the Commission will focus on reforms clarifying and improving the framework governing the licensing and enforcement of SEPs. Similarly, the Commission commits to promoting data sharing in line with the European Strategy for Data.

Fighting Infringements and Global Fair play

To address concerns regarding online platforms, the Commission commits to ‘clarify and upgrade the responsibilities of online platforms’ and to improve the capacity of law enforcement authorities. Similarly, to overcome the challenges of counterfeiting and piracy, the Commission plans to establish an ‘EU Toolbox against counterfeiting’ promoting the use of new technologies such as artificial intelligence, image recognition and blockchain.

Finally, the action plan demonstrates the Commission’s continuing interest in the IP chapters of free trade agreements as a means to ensure higher standards of IP protection for EU businesses. Similarly, to protect brands, the Commission commits to EU accession to the Singapore Treaty on the Law of Trademarks.