
AI, Face Swapping, and Right of Publicity

By Marie-Andrée Weiss

Last April, several plaintiffs filed a putative class action against NeoCortext, Inc., the developer of the Reface face swapping application, alleging that the application infringed their right of publicity.

NeoCortext moved to dismiss the complaint, claiming that plaintiffs’ right of publicity was preempted by the Copyright Act and barred by the First Amendment. NeoCortext also moved to strike the complaint, claiming that the suit was a strategic lawsuit against public participation (SLAPP) aiming at “gagging a novel application that enables users to engage in creative activities that are protected by the First Amendment.”

On September 5, 2023, U.S. District Judge Wesley L. Hsu denied both motions.

The case is Kyland Young v. NeoCortext, Inc., Case No. 2:23-cv-02496-WLH-PVC.

The Reface app

NeoCortext developed Reface, a smartphone application using an artificial-intelligence algorithm which allowed users to replace their faces in photographs and videos with the faces of celebrities (“face swap”), to place their faces into scenes and movies, and to “mix [their] face[s] with a celebrity.”

Users were able to search for their favorite characters or individuals in the catalog of images, movie and show clips, which was compiled from several websites, such as mybestgif.com, https://tenor.com/, Google Video, and Bing Video. Among the individuals featured in the catalog was one of the plaintiffs, Kyland Young, a finalist on season 23 of CBS’s Big Brother.

Users could then upload a photograph featuring one or more human beings, and the app “swapped” the faces with the faces of individuals featured in the image or clip chosen by the user from Reface’s catalog. NeoCortext offered a free version of the service, in which the “face swap” image or video was watermarked with the Reface logo. The complaint referred to these watermarked images and clips as “Teaser Face Swaps.” A paid subscription to the app allowed the user to remove the watermark.
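As a rough illustration of the workflow the complaint describes (upload a photo, swap faces into a catalog clip, watermark the free-tier output), here is a minimal, hypothetical sketch. The class and function names are invented for illustration and are not NeoCortext’s actual code.

```python
from dataclasses import dataclass

@dataclass
class SwapRequest:
    user_photo: bytes           # photograph uploaded by the user
    catalog_clip_id: str        # image or clip chosen from the app's catalog
    has_pro_subscription: bool  # the paid tier removes the watermark

class FaceSwapModel:
    """Placeholder for the proprietary AI model that swaps faces into a clip."""
    def swap(self, user_photo: bytes, clip_id: str) -> bytes:
        raise NotImplementedError("illustrative stub only")

def apply_watermark(media: bytes, label: bytes = b"Reface") -> bytes:
    # Stub: a real implementation would overlay the logo on the output media.
    return media + label

def render_output(request: SwapRequest, model: FaceSwapModel) -> bytes:
    result = model.swap(request.user_photo, request.catalog_clip_id)
    if not request.has_pro_subscription:
        result = apply_watermark(result)  # the complaint's "Teaser Face Swap"
    return result
```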

Does the app infringe plaintiff’s right of publicity?

The complaint alleged that the app allowed users to recreate Mr. Young’s scenes from Big Brother, that NeoCortext never asked for his consent nor paid him any royalties and thus profited from his likeness, and that defendant used the likeness of plaintiffs in violation of California’s right of publicity “to pitch its product for profit.” Plaintiff argued that Teaser Face Swaps were “essentially ads intended to entice users to buy PRO subscriptions, and the paid PRO version of the applications makes money by including Californians in its library of content.”

California Right of Publicity Law

California recognizes a right of publicity at common law and also by statute, California Civil Code § 3344, which prevents the use without prior consent of a person’s name, voice, signature, photograph or likeness, in products, merchandise or goods, to advertise, sell, or solicit the purchase of goods or services. 

To succeed, a plaintiff must allege that (1) the defendant used the plaintiff’s identity; (2) the defendant appropriated the plaintiff’s name or likeness to its advantage, commercially or otherwise; (3) the defendant did not have consent; and (4) injury resulted from the unauthorized use (see, for instance, Fleet v. CBS, Inc., at 1918).

The two anti-SLAPP steps

In its motion to strike the case, NeoCortext argued that the app allowed its users to create “humorous and sometimes absurd new works for personal use” and that “[t]his is exactly the type of creative activity that the First Amendment protects and that the right of publicity does not.”

There are two steps in an anti-SLAPP analysis, the second of which is equivalent to the standard used by courts to evaluate a motion to dismiss.

First step:

The first step under California’s anti-SLAPP law, Cal. Civ. Proc. Code § 425.16, was for NeoCortext to show that its use of Mr. Young’s image was made “in furtherance of [NeoCortext’s] right of petition or free speech… in connection with a public issue.” Such speech can be conduct, including “all conduct in furtherance of the exercise of the right of free speech” (Lieberman v. KCOP Television, Inc., at 166).

Judge Hsu reasoned that the conduct at the basis of Mr. Young’s complaint was the inclusion of his image in the app, allowing users to create a new image.  As such, it was the users who exercised their freedom of speech, not NeoCortext. Because the app is a tool that users can use to exercise their free speech rights, NeoCortext’s use of plaintiff’s image in the app was conduct taken in furtherance of users’ exercise of free speech.

Such speech is connected with a public issue under the test used by California courts, which covers (1) a statement concerning a person or entity in the public eye (here, Mr. Young); (2) conduct that could directly affect a large number of people beyond the direct participants; or (3) a topic of widespread public interest (“the use of technology to alter images and videos of individuals in a way that makes them look realistic” is such a topic).

NeoCortext had thus shown that its conduct was in furtherance of the right of free speech and made in connection with a public issue, satisfying its burden on the first step of the anti-SLAPP analysis.

Second step:

The burden then shifted to plaintiff to show “a probability of prevailing on the claim,” the second step required by California’s anti-SLAPP law, which is identical to the standard for a motion to dismiss. He did so, leading Judge Hsu to deny both motions.

NeoCortext had argued, unsuccessfully as we will now see, that the Copyright Act and the First Amendment preempted the right of publicity claim.

Copyright Act does not preempt the right of publicity claim

NeoCortext had argued that, if a right of publicity claim is entirely based on the display, reproduction or modification of a work protected by copyright, the claim is preempted by the Copyright Act.

Section 301 of the Copyright Act preempts state laws equivalent to the exclusive copyright rights as detailed by Section 106 of the Copyright Act.

The Ninth Circuit uses a two-part test to determine whether a state law claim is preempted by the Copyright Act: (1) whether the subject matter of the state law claim falls within the subject matter of copyright, and (2) whether the rights asserted under state law are equivalent to the exclusive rights of Section 106.

NeoCortext had claimed that plaintiff’s claim was within the subject matter of copyright, as the images and clips in NeoCortext’s catalog were protected by copyright.

In Maloney v. T3Media, Inc., the Ninth Circuit Court of Appeals held:

“that a publicity-right claim is not preempted when it targets non-consensual use of one’s name or likeness on merchandise or in advertising. But when a likeness has been captured in a copyrighted artistic visual work and the work itself is being distributed for personal use, a publicity-right claim interferes with the exclusive rights of the copyright holder, and is preempted by section 301 of the Copyright Act.” (Maloney, at 1011, our emphasis).

NeoCortext’s argument relied further on Maloney, which held that:

“…where a likeness has been captured in a copyrighted artistic visual work and the work itself is being distributed for personal use, a publicity-right claim is little more than a thinly disguised copyright claim because it seeks to hold a copyright holder liable for exercising his exclusive rights under the Copyright Act.” (Maloney, at 1016).

First part of the Ninth Circuit test: Plaintiff’s right of publicity claim does not fall within the subject matter of copyright

Noting that the Copyright Act protects ownership of photographs, but that it does not protect the exploitation of a person’s likeness, “even if it is embodied in a photograph,” citing the Ninth Circuit decision in Downing v. Abercrombie & Fitch, Judge Hsu found that “[plaintiff]’s right of publicity claim does not fall within the subject matter of copyright.” Judge Hsu distinguished the case from Maloney, where a photograph of the plaintiff, protected by copyright, had been sold. In contrast, the use of Mr. Young’s likeness went beyond the original work protected by copyright, as it was used to create a new product containing the plaintiff’s image. As plaintiff’s claim did not fall within the subject matter of copyright, it was not preempted by the Copyright Act.

Second part of the Ninth Circuit test: State law rights asserted are not equivalent to Section 106 rights

Judge Hsu also found that the second part of the test was not met, because Section 106 of the Copyright Act does not give the owners of the photographs the right to use plaintiff’s name and likeness to advertise the free version of the app and to induce users to buy the subscription. Plaintiff was “not seeking to ‘merely’ restrict the reproduction or distribution of the original photographs/works, as the plaintiffs in Maloney ….”

The rights asserted by plaintiff were not equivalent to the rights conferred by the Copyright Act to the owners of the photographs from the app catalog. Under the two-part test used by the Ninth Circuit, the claim was not preempted by the Copyright Act.

The First Amendment does not preempt the right of publicity claim

NeoCortext had also argued that the First Amendment preempted the claim, as users used the app to create “their own unique, sometimes humorous and absurd expressions” which are protected by the First Amendment. NeoCortext further argued that the photos and clips thus created had “creative and aesthetic value” and that they were “new works … distinct from the originals”.

California courts apply the “transformative use” test to balance the right of publicity against the First Amendment, as detailed by the California Supreme Court in Comedy III Productions, Inc. v. Gary Saderup, Inc. (at 142):

“In sum, when an artist is faced with a right of publicity challenge to his or her work, he or she may raise as affirmative defense that the work is protected by the First Amendment inasmuch as it contains significant transformative elements or that the value of the work does not derive primarily from the celebrity’s fame.” (Our emphasis).

NeoCortext had to show that its use was transformative as a matter of law. Judge Hsu found it had not done so, noting that plaintiff’s face “is the only thing that change[s] in the end product” and that the body is sometimes unchanged, citing Hilton v. Hallmark Cards, where the Ninth Circuit found that a greeting card featuring the likeness of Paris Hilton, arguably more transformative than the swapped images created by the app, was not transformative enough to entitle the defendant to a First Amendment affirmative defense as a matter of law.

What is next?

On September 8, NeoCortext filed an appeal to the U.S. Court of Appeals for the Ninth Circuit.

There have already been several complaints alleging that an AI-powered product or service infringes the copyright of authors whose works were used to train the models, but Young v. NeoCortext is one of the first cases where an AI-powered product or service allegedly infringes a right of publicity.

As such, it is worth following closely. To be continued…

Large Language Models and the EU AI Act: the Risks from Stochastic Parrots and Hallucination

By Zihao Li[1]

With the launch of ChatGPT, Large Language Models (LLMs) are shaking up our whole society, rapidly altering the way we think, create and live. For instance, the GPT integration in Bing has altered our approach to online searching. While nascent LLMs have many advantages, new legal and ethical risks[2] are also emerging, stemming in particular from stochastic parrots and hallucination. The EU is the first and foremost jurisdiction that has focused on the regulation of AI models.[3] However, the risks posed by the new LLMs are likely to be underestimated by the emerging EU regulatory paradigm. Therefore, this correspondence warns that the European AI regulatory paradigm must evolve further to mitigate such risks.

Stochastic parrots and hallucination: unverified information generation

One potentially fatal flaw of LLMs, exemplified by ChatGPT, is that the information they generate is unverified. For example, ChatGPT often generates pertinent, but non-existent, academic reading lists. Data scientists attribute this problem to “hallucination”[4] and “stochastic parrots”.[5] Hallucination occurs when LLMs generate text based on their internal logic or patterns, rather than the true context, leading to confident but unjustified, unverified, and deceptive responses. “Stochastic parrots” refers to the repetition of training data or its patterns, rather than actual understanding or reasoning.

The text production method of LLMs is to reuse, reshape, and recombine the training data in new ways to answer new questions while ignoring the problem of authenticity and trustworthiness of the answers. In short, LLMs only predict the probability of a particular word coming next in a sequence, rather than actually comprehending its meaning. Although the majority of answers are high-quality and often true, the content of the answers is nonetheless fabricated rather than verified. Even though most training data is reliable and trustworthy, the essential issue is that the recombination of trustworthy data into new answers in a new context may lead to untrustworthiness, as the trustworthiness of information is conditional and often context-bound. If this precondition of trustworthy data disappears, trust in answers will be misplaced. Therefore, while the LLMs’ answers may seem highly relevant to the prompts, they are made-up.
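To make the point concrete, here is a minimal, illustrative sketch of next-token prediction: the model assigns probabilities to candidate continuations and samples one of them, with no step that checks whether the resulting statement is true. The candidate tokens and probabilities below are invented for illustration.

```python
import random

def next_token(context: str, vocab_probs: dict[str, float]) -> str:
    # Score-and-sample: pick a continuation by probability given the context;
    # nothing here verifies the factual accuracy of the output.
    tokens, weights = zip(*vocab_probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Plausible-sounding candidates, any of which may be wrong in a given context.
candidates = {"1998": 0.45, "2001": 0.35, "1989": 0.20}
print("The treaty was signed in", next_token("The treaty was signed in", candidates))
```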

However, merely improving the accuracy of the models through new data and algorithms is insufficient, because the more accurate the model is, the more users will rely on it, and thus be tempted not to verify the answers, leading to greater risk when stochastic parrots and hallucinations appear. This situation, where an increase in accuracy leads to higher reliance and potential risks, can be described as the ‘accuracy paradox’. The risk is beyond measure if users encounter these problems in especially sensitive areas such as healthcare or the legal field. Even if utilizing real-time internet sources, the trustworthiness of LLMs may remain compromised, as exemplified by factual errors in new Bing’s launch demo.

These risks can lead to ethical concerns, including misinformation and disinformation, which may adversely affect individuals through misunderstandings, erroneous decisions, loss of trust, and even physical harm (e.g., in healthcare). Misinformation and disinformation can reinforce bias,[6] as LLMs may perpetuate stereotypes present in their training data.[7]

The EU AI regulatory paradigm: Advanced Legal intervention required

The EU has already commenced putting effort into AI governance. The AI Act (AIA) is the first and globally most ambitious attempt to regulate AI. However, the proposed AIA, employing a risk-based taxonomy for AI regulation, encounters difficulties when applied to general-purpose LLMs. On the one hand, categorizing LLMs as high-risk AI due to their generality may impede EU AI development. On the other hand, if general-purpose LLMs are regarded as chatbots, falling within the limited-risk group, merely imposing transparency obligations (i.e., providers need to disclose that the answer is generated by AI) would be insufficient.[8] This is because the danger of parroting and hallucination is related not only to whether users are clearly informed that they are interacting with AI, but also to the reliability and trustworthiness of LLMs’ answers, i.e., how users can distinguish between true and made-up answers. When a superficially eloquent and knowledgeable chatbot generates unverified content with apparent confidence, users may trust the fictitious content without undertaking verification. Therefore, the AIA’s transparency obligation is not sufficient.

Additionally, the AIA does not fully address the role, rights, or responsibilities of end-users. As a result, users have no avenue to contest or complain about LLMs, especially when stochastic parrots and hallucination occur and affect their rights. Moreover, the AIA does not impose any obligations on users. However, as noted above, disinformation largely stems from deliberate misuse by users. Without imposing responsibilities on the user side, it is difficult to regulate the harmful use of AI by users. Meanwhile, it is argued that the logic of the AIA is to work backward from certain harms to measures that mitigate the risk that these harms materialize.[9] The primary focus ought to shift towards the liability associated with the quality of input data, rather than imposing unattainable obligations on data quality.

Apart from the AIA, the Digital Services Act (DSA) aims to govern disinformation. However, the DSA’s legislators focus only on the responsibilities of the intermediary, overlooking the source of the disinformation. Imposing obligations only on intermediaries when LLMs are embedded in services is insufficient, as such regulation cannot reach the underlying developers of LLMs. Similarly, the Digital Markets Act (DMA) focuses on the regulation of gatekeepers, aiming to establish a fair and competitive market. Although scholars have recently claimed that the DMA has significant implications for AI regulation,[10] the DMA primarily targets the effects of AI on market structure; it can only provide limited help on LLMs. The problem that the DSA and DMA will face is that both only govern the platform, not the usage, performance, and output of AI per se. This regulatory approach is a consequence of the current platform-as-a-service (PaaS) business model. However, once the business model shifts to AI model-as-a-service (MaaS),[11] this regulatory framework is likely to become nugatory, as the platform does not fully control the processing logic and output of the algorithmic model.

Therefore, it is necessary to urgently reconsider the regulation of general-purpose LLMs.[12] The parroting and hallucination issues show that minimal transparency obligations are insufficient, since LLMs often lull users into misplaced trust. When using LLMs, users should be acutely aware that the answers are made-up, may be unreliable, and require verification. LLMs should be obliged to remind and guide users on content verification. Particularly when prompted with sensitive topics, such as medical or legal inquiries, LLMs should refuse to answer, instead directing users to authoritative sources with traceable context. The suitable scope for such filter and notice obligations warrants further discussion from legal, ethical and technical standpoints.
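As an illustration of the kind of filter-and-notice layer proposed here, the sketch below wraps a generator with a crude sensitive-topic check and a verification notice. The keyword lists, messages, and function names are assumptions for illustration only; a real system would need far more robust classification.

```python
from typing import Callable

SENSITIVE_TOPICS = {
    "medical": ("diagnosis", "dosage", "symptom", "treatment"),
    "legal": ("lawsuit", "liability", "contract", "sentencing"),
}

VERIFICATION_NOTICE = (
    "Note: this answer was generated by an AI model, may be unreliable, "
    "and should be verified against authoritative sources."
)

def guarded_answer(prompt: str, generate: Callable[[str], str]) -> str:
    lowered = prompt.lower()
    for topic, keywords in SENSITIVE_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            # Refuse and redirect rather than answer sensitive prompts.
            return (f"This looks like a {topic} question. Please consult an "
                    f"authoritative, traceable source or a qualified professional.")
    return generate(prompt) + "\n\n" + VERIFICATION_NOTICE

# Example with a stand-in generator:
print(guarded_answer("What dosage of ibuprofen should I take?", lambda p: "..."))
```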

Furthermore, legislators should reassess the risk-based AI taxonomy in the AIA. The above discussion suggests that the effective regulation of LLMs needs to ensure their trustworthiness, taking into account the reliability, explainability and traceability of generated information, rather than solely focusing on transparency. Meanwhile, end-users, developers and deployers’ roles should all be considered in AI regulations, while shifting focus from PaaS to AI MaaS.


[1] The work is adapted and developed from the preprint version of a paper published in Nature Machine Intelligence, “Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence. https://doi.org/10.1038/s42256-023-00672-y”

[2] ‘Much to Discuss in AI Ethics’ (2022) 4 Nature Machine Intelligence 1055.

[3] Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence.

[4] Ziwei Ji and others, ‘Survey of Hallucination in Natural Language Generation’ [2022] ACM Computing Surveys 3571730.

[5] Emily M Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM 2021) <https://dl.acm.org/doi/10.1145/3442188.3445922> accessed 14 January 2023.

[6] Marvin van Bekkum and Frederik Zuiderveen Borgesius, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770.

[7] Zihao Li, ‘Affinity-Based Algorithmic Pricing: A Dilemma for EU Data Protection Law’ (2022) 46 Computer Law & Security Review 1.

[8] Lilian Edwards, ‘The EU AI Act: A Summary of Its Significance and Scope’ (Ada Lovelace Institute 2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf> accessed 17 January 2023.

[9] Martin Kretschmer and others, ‘The Risks of Risk-Based AI Regulation: Taking Liability Seriously’.

[10] Philipp Hacker, Johann Cordes and Janina Rochon, ‘Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and Beyond’ [2022] SSRN Electronic Journal <https://www.ssrn.com/abstract=4316944> accessed 8 January 2023.

[11] Tianxiang Sun and others, ‘Black-Box Tuning for Language-Model-as-a-Service’, Proceedings of the 39th International Conference on Machine Learning (PMLR 2022) <https://proceedings.mlr.press/v162/sun22e.html> accessed 10 February 2023.

[12] Philipp Hacker, Andreas Engel and Theresa List, ‘Understanding and Regulating ChatGPT, and Other Large Generative AI Models: With input from ChatGPT’ [2023] Verfassungsblog <https://verfassungsblog.de/chatgpt/> accessed 20 May 2023.

EU Artificial Intelligence Act: The European Approach to AI

By Mauritz Kop[1]

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. As a Fellow at Stanford University’s Transatlantic Technology Law Forum and a Member of the European AI Alliance, I made independent strategic recommendations to the European Commission. President Ursula von der Leyen’s team adopted some of the suggestions that I offered them, or arrived at the same conclusions itself. That is encouraging. This contribution lists the main points of this novel regulatory framework for AI.

Core horizontal rules for AI

The EU AI Act sets out horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU. The draft regulation provides core artificial intelligence rules that apply to all industries. The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of 4 risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets. The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law.

Objectives of the EU Artificial Intelligence Act

The proposed regulatory framework on Artificial Intelligence has the following objectives:

1. ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

2. ensure legal certainty to facilitate investment and innovation in AI;

3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Subject Matter of the EU AI Act

The scope of the AI Act is largely determined by the subject matter to which the rules apply. In that regard, Article 1 states that:

Article 1
Subject matter

This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;

(e) rules on market monitoring and surveillance.

Pyramid of Criticality: Risk based approach

To achieve the goals outlined, the Artificial Intelligence Act draft combines a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Between these extremes of the spectrum, stricter regulations apply as risk increases. These range from non-binding self-regulatory soft law impact assessments accompanied by codes of conduct, to heavy, externally audited compliance requirements throughout the life cycle of the application.

[Figure: The Pyramid of Criticality for AI Systems]

Unacceptable Risk AI systems

Unacceptable Risk AI systems can be divided into four categories: two of them concern cognitive behavioral manipulation of persons or of specific vulnerable groups, and the other two prohibited categories are social scoring and real-time remote biometric identification systems. There are, however, exceptions to the main rule for each category. The criterion for qualification as an Unacceptable Risk AI system is the harm requirement.
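A compact way to see the pyramid of criticality is as a mapping from risk tier to obligations. The sketch below is an illustrative simplification of the scheme described above, not the Act’s own wording.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited
    HIGH = "high"                  # e.g. credit scoring, border control
    LIMITED = "limited"            # e.g. chatbots
    MINIMAL = "minimal"            # everything else

# Simplified obligation summaries for this sketch.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the Union market"],
    RiskTier.HIGH: ["pre-market conformity assessment", "CE marking",
                    "registration in the EU database", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency duties (e.g. bot disclosure)"],
    RiskTier.MINIMAL: ["no additional obligations"],
}

for tier in RiskTier:
    print(tier.value, "->", "; ".join(OBLIGATIONS[tier]))
```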

Examples of High-Risk AI-Systems

High-Risk AI systems will be carefully assessed before being put on the market and throughout their lifecycle. Some examples include:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams)
  • Safety components of products (e.g. AI application in robot-assisted surgery)
  • Employment, workers management and access to self-employment (e.g. CV sorting software for recruitment procedures)
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan)
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents)
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)
  • Surveillance systems (e.g. biometric monitoring for law enforcement, facial recognition systems)

Market Entrance of High-Risk AI-Systems: 4 Steps

In a nutshell, these 4 steps should be followed before a High-Risk AI system enters the market (a simplified sketch follows the list). Note that these steps apply to components of such AI systems as well.

1. A High-Risk AI system is developed, preferably using internal ex ante AI Impact Assessments and Codes of Conduct overseen by inclusive, multidisciplinary teams.

2. The High-Risk AI system must undergo an approved conformity assessment and continuously comply with AI requirements as set forth in the EU AI Act, during its lifecycle. For certain systems an external notified body will be involved in the conformity assessment audit. This dynamic process ensures benchmarking, monitoring and validation. Moreover, in case of changes to the High-Risk AI system, step 2 has to be repeated.

3. Registration of the stand-alone High-Risk AI system will take place in a dedicated EU database.

4. A declaration of conformity must be signed and the High-Risk AI system must carry the CE marking (Conformité Européenne). The system is then ready to enter the European markets.
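The sketch below models these four steps as a simple gate sequence. All helper functions and dictionary fields are illustrative stubs, not the Act’s actual procedure.

```python
def run_impact_assessment(system: dict) -> None:
    # Step 1: internal ex ante AI impact assessment / code of conduct.
    system["impact_assessed"] = True

def conformity_assessment(system: dict) -> bool:
    # Step 2: conformity assessment; repeated whenever the system changes.
    return system.get("meets_requirements", False)

def register_in_eu_database(system: dict) -> None:
    # Step 3: registration of the stand-alone system in the EU database.
    system["registered"] = True

def ce_mark(system: dict) -> None:
    # Step 4: declaration of conformity signed and CE marking affixed.
    system["declaration_signed"] = True
    system["ce_marked"] = True

def bring_to_market(system: dict) -> bool:
    run_impact_assessment(system)
    if not conformity_assessment(system):
        return False  # cannot enter the market until requirements are met
    register_in_eu_database(system)
    ce_mark(system)
    return True  # market entry; post-market monitoring still applies

print(bring_to_market({"meets_requirements": True}))
```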

But this is not the end of the story…

In the vision of the EC, after the High-Risk AI system has obtained market approval, authorities at both Union and Member State level ‘will be responsible for market surveillance, end users ensure monitoring and human oversight, while providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.’[2] In other words, continuous upstream and downstream monitoring.

Since people have the right to know if and when they are interacting with a machine’s algorithm instead of a human being, the AI Act introduces specific transparency obligations for both users and providers of AI systems, such as bot disclosure. Likewise, specific transparency obligations apply to automated emotion recognition systems, biometric categorization and deepfake/synthetics disclosure. Limited Risk AI Systems such as chatbots necessitate specific transparency obligations as well. The only category exempt from these transparency obligations can be found at the bottom of the pyramid of criticality: the Minimal Risk AI Systems.

In addition, natural persons should be able to oversee the High-Risk AI system. This is termed the human oversight requirement.

Open Norms

The definition of high-risk AI applications is not yet set in stone. Article 6 does provide classification rules. Presumably, the qualification remains a somewhat open standard within the regulation, subject to changing societal views and to be interpreted by the courts, ultimately by the EU Court of Justice. It is a standard that is open in terms of content and that needs to be fleshed out in more detail under different circumstances, for example using a catalog of viewpoints. Open standards entail the risk of differences of opinion about their interpretation. If the legislator does not offer sufficient guidance, the courts will ultimately have to decide on the interpretation of a standard. This can be seen as a less desirable side of regulating with open standards. A clear risk taxonomy will contribute to legal certainty and offer stakeholders appropriate answers to questions about liability and insurance.

Enforcement

The draft regulation provides for the installation of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violation of the rules can be up to 30 million euros or, for companies, up to 6% of total worldwide annual turnover, whichever is higher.

‘The proposed rules will be enforced through a governance system at Member States level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.’[3]

CE-marking: pre-market conformity requirements

In line with my recommendations, Article 49 of the Artificial Intelligence Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity & legal compliance mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Please note that this pre-market conformity regime also applies to machine learning training, testing and validation datasets on the basis of Article 10. These corpora need to be representative (I would almost say: inclusive), high-quality, adequately labelled and error-free to ensure non-discriminatory and non-biased outcomes. Thus, the input data must abide by the high standards of trustworthy AI as well.
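As a hypothetical illustration of the kind of dataset checks Article 10 points towards (completeness, labelling, and a crude representativeness proxy), consider the sketch below. The column names, the 10% threshold and the pandas-based approach are assumptions for illustration, not requirements of the Act.

```python
import pandas as pd

def basic_dataset_checks(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Illustrative sanity checks on a training/validation dataset."""
    return {
        "no_missing_values": not df.isnull().values.any(),
        "labels_present": label_col in df.columns and df[label_col].notna().all(),
        # Crude representativeness proxy: no group below 10% of the rows.
        "groups_represented": bool(
            (df[group_col].value_counts(normalize=True) >= 0.10).all()
        ),
    }

data = pd.DataFrame({
    "feature": [0.2, 0.7, 0.5, 0.9],
    "label": [1, 0, 1, 0],
    "group": ["A", "A", "B", "B"],
})
print(basic_dataset_checks(data, label_col="label", group_col="group"))
```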

Pursuant to Article 40, harmonized standards for high-risk AI systems are published in the Official Journal of the European Union:

Article 40
Harmonised standards

High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.

The CE marking for the individual types of high-risk AI systems can be applied for via a procedure as described in article 43.

Article 43
Conformity assessment

1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:

(a) the conformity assessment procedure based on internal control referred to in Annex VI;

(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.

Article 43 paragraph 6 aims to prevent or avoid risks with regard to health, safety and fundamental rights:

6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.

Article 48 paragraph 1, EU declaration of conformity indicates that:

Article 48
EU declaration of conformity

1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.

Further, Article 49 CE marking of conformity determines that:

Article 49
CE marking of conformity

1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.

2. The CE marking referred to in paragraph 1 of this Article shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.

3. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.

Finally, Article 30 of the draft regulation on notifying authorities provides that:

Article 30
Notifying authorities

1. Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.

2. Member States may designate a national accreditation body referred to in Regulation (EC) No 765/2008 as a notifying authority.

3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies and the objectivity and impartiality of their activities are safeguarded.

Self-assessment too non-committal (non-binding)?

First, it is crucial that certification bodies and notified bodies are independent and that no conflicts of interest arise due to a financial or political interest. In this regard, I wrote elsewhere that the EU should be inspired by the modus operandi of the US FDA.

Second, the extent to which companies can achieve compliance with this new AI ‘product safety regime’ through risk-based self-assessment and self-certification, without third-party notified bodies, determines the effect of the Regulation on business practices and thus on the preservation and reinforcement of our values. Internally audited self-assessment is too non-committal given the high risks involved. Therefore, I think it is important that the final version of the EU AI Act subjects all high-risk systems to external, independent third-party assessment requirements. Self-regulation combined with awareness of the risks via (voluntary or mandatory) internal AI impact assessments is not enough to protect our societal values, since companies have completely different incentives for promoting social good and pursuing social welfare than the state does. We need mandatory third-party audits for all High-Risk AI Systems.

In this regard, it is interesting to compare the American way of regulating AI with the European approach. In America, people tend to advocate free-market thinking and a laissez-faire approach. For example, the Adaptive Agents Group, a Stanford University group in Silicon Valley, recently proposed the Shibboleth Rule for Artificial Agents. Their proposal is reminiscent of the EU human oversight requirement, and maintains that:

‘Any artificial agent that functions autonomously should be required to produce, on demand, an AI shibboleth: a cryptographic token that unambiguously identifies it as an artificial agent, encodes a product identifier and, where the agent can learn and adapt to its environment, an ownership and training history fingerprint.’[4]
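As a rough sketch of what such an ‘AI shibboleth’ might look like in practice, the snippet below issues and verifies a signed token containing an agent identifier, a product identifier, and a training-history fingerprint. The field names and the HMAC-based scheme are assumptions for illustration, not part of the group’s proposal.

```python
import hashlib
import hmac
import json

def issue_shibboleth(secret_key: bytes, agent_id: str, product_id: str,
                     training_history: bytes) -> str:
    # Payload identifies the artificial agent and fingerprints its training history.
    payload = json.dumps({
        "agent_id": agent_id,
        "product_id": product_id,
        "training_fingerprint": hashlib.sha256(training_history).hexdigest(),
    }, sort_keys=True)
    tag = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + tag

def verify_shibboleth(secret_key: bytes, token: str) -> bool:
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

token = issue_shibboleth(b"issuer-secret", "agent-0042", "product-XYZ", b"training log")
print(verify_shibboleth(b"issuer-secret", token))  # True
```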

Their modest proposition contrasts strongly with the widely scoped European legal-ethical framework. However, history has already taught us dramatically that the power and social impact of AI is too great to be left largely to the companies themselves.

In addition, it is key that international standard setting bodies like ISO and IEEE adopt and translate the norms and values of the EU Act in their own technical standards, so that they are in line with each other. Such harmonized standards will encourage sustainable innovation and responsible business practices. In other words, worldwide adoption of such technical standards increases the chance that leading firms will adjust their behavior vis-a-vis AI.

Moreover, a harmonized global framework prevents forum shopping. By forum shopping I mean seeking out the most favorable regime for asserting one’s own rights, motivated by financial interests that often come at the expense of consumers, competition, the environment and society.

Innovation Friendly Flexibilities: Legal Sandboxes

In line with my recommendations, the draft aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities and exceptions, including legal sandboxes that afford breathing room to research institutions and SMEs. Thus, to guarantee room for innovation, the draft establishes AI regulatory sandboxes. Further, an IP Action Plan has been drawn up to modernize technology-related intellectual property laws.

‘Additional measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures to reduce the regulatory burden and to support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.’[5]

The concept thus seeks to balance divergent interests, including democratic, economic and social values. That irrevocably means that trade-offs will be made. It is to be hoped that during its journey through the European Parliament, the proposal will not be relegated to an unworkable compromise, as happened recently with the Copyright Directive, under the influence of the lobbying power of a motley crew of stakeholders.

Sustainability

Moreover, the explanatory memorandum pays attention to the environment and sustainability, in the sense that the ecological footprint of technologies should be kept as small as possible and that the application of artificial intelligence should support socially and environmentally beneficial outcomes. This is in line with article 37 of the EU Charter of Fundamental Rights (‘the Charter’), and the EU Green Deal, which strives for the decarbonization of our society.

Sector specific rules

On top of the new AI rules, AI-infused systems, products and services must also comply with sector-specific regulations such as the Machinery Directive and the Regulations for medical devices (MDR) and in vitro diagnostics (IVDR). Furthermore, besides the General Data Protection Regulation (GDPR) for personal data, the FFD Regulation for non-personal data and both GDPR and FFD for mixed datasets, the upcoming Data Act will apply. This applies, among other things, to B2B and B2G data sharing (depending on the types of data used), the use of privacy-preserving synthetic dataset generation techniques, and the use of machine learning training and validation datasets. In addition, audits of products and services equipped with AI must fit into existing quality management systems of industries and economic sectors such as logistics, energy and healthcare.

Regulations versus Directives

In the EU, regulations result in unification of legal rules: Member States have no discretion for their own interpretation of the Brussels rules. Directives, on the other hand, lead to harmonization of legal rules, and there Member States do have that room. Regulations such as the new Artificial Intelligence Act are directly applicable in the national legal orders of the Member States, without the need for transposition or implementation, as was necessary, for example, with the recent Copyright Directive. As soon as the European Parliament and the Council agree on the final text, expected in mid-2022, and if it is adopted, the AI Regulation will be immediately applicable law in all countries of the European Union.

AI Governance: trans-Atlantic perspectives

It is understandable that the European Union considers AI to be part of European strategic autonomy. Moreover, a degree of strategic European digital sovereignty is needed to safeguard European culture. Nevertheless, it is of existential importance for the EU to work together in concert with countries that share our European digital DNA, based on common respect for the rule of law, human rights and democratic values. Against this background, it is essential to stimulate systematic, multilateral transatlantic cooperation and jointly promote and achieve inclusive, participatory digitalization. The transatlantic and geopolitical dialogue on transformative technology, together with the development of globally accepted technology standards and protocols for interoperability, should be strengthened.

Setting Global Standards for AI

It takes courage and creativity to legislate through this stormy, interdisciplinary matter, forcing US and Chinese companies to conform to values-based EU standards before their AI products and services can access the European market with its 450 million consumers. Consequently, the proposal has extraterritorial effect.

By drafting the Artificial Intelligence Act and embedding our norms and values into the architecture and infrastructure of our technology, the EU provides direction and leads the world towards a meaningful destination. As the Commission did before with the GDPR, which has now become the international blueprint for privacy, data protection and data sovereignty.

Methods also useful for other emerging technologies

While enforcing the proposed rules will be a whole new adventure, the novel legal-ethical framework for AI enriches the way of thinking about regulating the Fourth Industrial Revolution (4IR). This means that – if proven to be useful and successful – we can also use methods from this legal-ethical cadre for the regulation of 4IR technologies such as quantum technology, 3D printing, synthetic biology, virtual reality, augmented reality and nuclear fusion. It should be noted that each of these technologies requires a differentiated horizontal-vertical legislative approach in terms of innovation incentives and risks.

Trustworthy AI by Design

Responsible, Trustworthy AI requires awareness from all parties involved, from the first line of code. The way in which we design our technology is shaping the future of our society. In this vision democratic values and fundamental rights play a key role. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams, that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.


[1] Mauritz Kop is Stanford Law School TTLF Fellow at Stanford University and is Managing Partner at AIRecht, Amsterdam, The Netherlands.

[2] https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en

[3] ibid

[4] https://hai.stanford.edu/news/shibboleth-rule-artificial-agents

[5] https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en