Archive | Legislation and policy-making

Franchise Agreements: The Case for Limited Non-Compete Clauses

By Alexandros Kazimirov

In early 2023, the Federal Trade Commission proposed a rule under the notice-and-comment process arguing that non-compete clauses constitute an unfair method of competition and therefore violate Section 5 of the Federal Trade Commission Act. The Commission’s intent is to free the labor force from binding clauses that restrict its movement in the market, restrictions which in turn harm competition in the country. In its accompanying fact sheet, the Commission highlights the widespread use of non-competes through grossly disproportionate instances, such as the case of a security guard prevented from taking a job with a new employer by virtue of a two-year non-compete with his previous employer. The Commission’s view is that such overbroad use of non-competes at all levels of employment cannot be justified as protecting trade secrets, particularly in light of the fact that some states, like California, no longer enforce such clauses. In the rule proposal, the FTC recognizes that some cases may require deeper inquiry and asks for feedback on whether franchisees should be covered by the rule.

Take as an example a fictional restaurant chain called Big Kahuna Burger (BKB). BKB is in the business of selecting locations, building restaurants, and then selling those restaurants to buyers under franchise agreements, so that each becomes an individually owned and operated BKB restaurant. Let’s assume that all individual BKB restaurants are owned and operated by franchisees. A prospective franchisee enters into a standardized franchise agreement which governs many aspects of the franchise operations. BKB franchisees do not receive an exclusive territory, and prospective franchisees are told that they may face competition from other BKB restaurants as well as, of course, from restaurants of other chains. The franchise agreement includes a non-compete clause, which states: “No employee may seek employment at a different BKB franchise within six months after their termination of employment with their initial BKB franchise.”

It would be an uphill battle to make the case that such a non-compete clause is per se illegal, because:

(i) it is fairly narrow in scope, i.e. it applies only to BKB franchises and for a limited time,

(ii) it does not prescribe price-fixing on its face, and

(iii) it may retain a pro-competitive effect vis-à-vis other chain restaurants.

The bench is more comfortable declaring something per se unlawful when the restraint is clearly restrictive on its face. When it is less obvious, the bench may exercise its discretion and review a restraint under the rule of reason. It is therefore worth considering whether limited non-competes between franchises can be considered an unreasonable restraint of trade under a rule of reason analysis.

Would it survive a rule of reason analysis?

In this analysis, the judge would identify two forces: an intrinsic anti-competitive force and an extrinsic pro-competitive force. The intrinsic force concerns the restraint viewed between one franchise and another. In this intrinsic market of employment, the employees are indeed restricted, and the clause functions as a brazenly anti-competitive feature because it limits the post-employment options of a franchise employee. The extrinsic force concerns the restraint as viewed between the franchise and competing restaurants. In this extrinsic market, the employees are not restricted and, taken holistically, the clause functions more as a pro-competitive feature, because it enhances labor security between franchises while the employees retain an “out” to competing chains.

Next, the analysis would turn to facts and circumstances, i.e. how many “out” opportunities the employees of a particular franchise actually have. For example, the judge can select a designated area around a BKB franchise and ask how many competing chains have a presence there versus BKB franchises. With an approximate understanding of the market, a judge can determine how broad or narrow the restrictive character of the clause is. If there are more BKB franchises within the designated area than other restaurants, then the intrinsic force of the clause is stronger and the clause may be interpreted as an unreasonable restraint on competition. Alternatively, if there are more competing chain restaurants than BKB franchises, then the extrinsic force is stronger and the clause may be interpreted as a reasonable restraint on competition.
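To make the screen concrete, here is a purely illustrative Python sketch of the counting exercise described above (the function name and figures are invented, and no court applies so mechanical a test; a real rule of reason analysis weighs many more facts):

```python
# Hypothetical screen: compare BKB outlets with competing chain outlets
# in the designated area to see which "force" of the clause dominates.

def restraint_screen(bkb_outlets: int, competitor_outlets: int) -> str:
    """Return which force dominates in the designated area."""
    if bkb_outlets > competitor_outlets:
        # Few outside options: the intra-brand restriction bites harder.
        return "intrinsic force dominates -> possibly unreasonable restraint"
    # Ample outside options: employees keep an "out" to competing chains.
    return "extrinsic force dominates -> possibly reasonable restraint"

print(restraint_screen(bkb_outlets=7, competitor_outlets=3))
print(restraint_screen(bkb_outlets=2, competitor_outlets=12))
```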

Some may argue that weighing the limitations of non-compete covenants and then rewriting them to an acceptable standard is a task that should not burden the courts. Traditionally, courts in Delaware and New York (where until recently state law has tolerated non-competes, although state legislatures have signaled a more hostile stance) have required that restrictions be reasonable in duration, geographic scope, and the kind of business restrained.

However, in Kodiak v. Adams the Court of Chancery admonished parties seeking to have the bench “blue pencil” restrictive covenants to a reasonable and enforceable scope, echoing the growing hesitancy not only to enforce but also to correct non-competes.

Conclusion

Whether the FTC will carve out an exception for post-termination non-competes in franchise agreements or include them in its ban remains unknown. Between courts’ hesitation to review non-competes and the lack of flexibility for franchises that a total ban may entail, the former may be the lesser evil.

Cyberstalking and Online Platforms’ Due Diligence in the EU Digital Services Act

By Irene Kamara

Cyberstalking: a pattern of abusive online behaviours

Cyberstalking, the act of using electronic communication devices to create a criminal level of intimidation, harassment, and fear in one or more victims,[1] is a form of – usually gender-based – cyberviolence, with immense impacts on the physical and mental well-being of the victim. The Council of Europe Istanbul Convention on preventing and combating violence against women and domestic violence defines stalking as “the intentional conduct of repeatedly engaging in threatening conduct directed at another person, causing her or him to fear for her or his safety.”[2] The defining characteristic of cyberstalking is the repeated nature of the online harassment: it constitutes a pattern of behaviour, rather than one isolated incident.[3] Because of this aspect, while the victim may feel a continuous threat, classifying different events by a single offender or multiple offenders as one cyberstalking offence and prosecuting it runs into several evidentiary obstacles. One such obstacle is that the victim needs to maintain records of the different events, over an extended period, that together amount to the cyberstalking offence. Where punishable, cyberstalking usually falls under criminal law provisions on harassment, especially in jurisdictions that have signed and ratified the Istanbul Convention of the Council of Europe. However, regulatory approaches targeting the offender are not the only strategy to mitigate cyberstalking as a phenomenon. Online platforms such as social media platforms de facto offer a means that facilitates cyberstalking, since offenders use them to engage in unwanted communication, such as threats against one or more victims, or to publicise defamatory or image-based abusive material. Several of the most popular platforms have adopted their own community standards on accepted behaviour. For example, Meta has a policy in place on bullying and harassment,[4] where inter alia the platform commits to “remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity.” Those policies, however, are largely voluntary measures, and their appropriateness is often not reviewed by external state actors, such as an independent supervisory authority.

Cyberstalking and the EU Digital Services Act

Since 2022, the EU has had a new Regulation in place assigning a range of responsibilities to online platforms, such as Meta, to identify and take down illegal content, including cyberstalking. The Digital Services Act (‘DSA’)[5] aims at providing harmonised EU rules for a “safe, predictable and trusted online environment”,[6] by inter alia establishing rules on due diligence obligations for providers of intermediary services. The DSA modernised some of the provisions of the 2000 e-Commerce Directive[7] and reinstated others, such as the provision clarifying that providers of intermediary services are under no general obligation to monitor the information on their services, nor to engage in active fact-finding to establish whether illegal activity is taking place through their services.[8]

Despite the absence of a general monitoring obligation, providers of intermediary services are subject to several obligations in order to ensure the online safety and trust of the users of their services.[9] Those due diligence obligations, explained in the next section, are centered around the concept of illegal content. The DSA defines illegal content in Article 3(h) as “any information that, in itself or in relation to an activity, including the sale of products or the provision of services, is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law.” The concept of content is thus very broad, meaning any information, ‘products, services and activities’,[10] and whether this content is illegal is determined by examining other EU or Member State law. Thus, once information that infringes EU or national Member State law is shared, publicised, transmitted, or stored, the due diligence framework established in the DSA applies to the provider of intermediary services. Recital 12 DSA provides additional interpretational clarity on the parameters and examples of illegal content: applicable rules might render the content itself illegal, or content might be rendered illegal because it relates to illegal activities. Examples include the sharing of child sexual abuse material (CSAM), hate speech or terrorist content, and online stalking (cyberstalking). As a result of this broad definition, even acts or information that are not illegal as such, but relate to the illegal activity of cyberstalking, would also qualify as illegal content and would be subject to the DSA. This is an important step towards regulating cyberstalking, and essentially towards limiting the single acts of the cyberstalker causing nuisance or harassment to the victim(s) and other related targets of the offence, such as the friends, family, or work environment of the victim(s).

The DSA due diligence framework: placing the responsibility on online platforms?

The e-Commerce Directive already provided an obligation for information society service providers to remove or disable access to information upon obtaining knowledge of an illegal activity.[11] The DSA develops a due diligence framework, which involves service providers undertaking action in a reactive manner (e.g. once a report about an abusive image is filed with an online platform), but also in a proactive manner. The due diligence framework ensures that service providers, and especially large online platforms, have assessed the systemic risks stemming from the design and functioning of their services.[12] The framework comprises rules relating to transparency, cooperation with law enforcement and judicial authorities, and proactive measures against misuse of the offered services. In terms of proactive measures, very large online platforms must put in place mitigation measures tailored to systemic risks and adapt their moderation processes, in particular in cases of cyberviolence, which includes cyberstalking. The risk of dissemination of CSAM is – according to Recital 80 DSA – one of the categories of such systemic risks. The mitigation measures include the expeditious removal of, or disabling of access to, the illegal content, and adapting the speed and quality of processing notices (Art. 35(1)(c) DSA). In terms of transparency, specifically for online platforms, the DSA imposes strict reporting rules as regards the use of automated moderation tools, including specification of error rates and applied safeguards,[13] but also detailed reporting of the number of suspensions of provision of services due to misuse.[14] As regards cooperation with law enforcement and judicial authorities, all hosting providers must notify the competent authorities of a suspicion that a criminal offence threatening an individual’s safety or life is taking place. The notification threshold is quite low, since Art. 18(1) DSA requires not proven illegal behaviour, but a suspicion that such behaviour is taking place. This means that in cases of cyberstalking, any act pointing the service provider towards potentially repeated threats against an individual, made directly or indirectly via friends, family, or colleagues, would require a report to the law enforcement authority.

Next steps

The DSA entered into force in 2022 but only starts to apply in early 2024, since the EU legislator provided a grace period for service providers within the scope of the DSA to adapt to the new set of obligations. While hate speech, CSAM, and copyright-infringing material can be expected to monopolise the focus of platforms, and the related complaints and reports, in the first period of the DSA’s application, the DSA will also be tested as a regulatory instrument against cyberstalking and the role of intermediaries, in this case online platforms, in combatting such abusive online behaviour.


[1] Pittaro, M. L. (2007). Cyber stalking: An Analysis of Online Harassment and Intimidation. International Journal of Cyber Criminology, 1(2), 180–197. https://doi.org/10.5281/zenodo.18794

[2] Article 34 Council of Europe Convention on preventing and combating violence against women and domestic violence (‘Istanbul Convention’), Council of Europe Treaty Series No. 210.

[3] Vidal Verástegui, J., Romanosky, S., Blumenthal, M. S., Brothers, A., Adamson, D. M., Ligor, D. C., … & Schirmer, P. (2023). Cyberstalking: A Growing Challenge for the US Legal System.

[4] https://transparency.fb.com/policies/community-standards/bullying-harassment/

[5] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) OJ L 277, 27.10.2022, p. 1–102.

[6] Article 1(1) DSA.

[7] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) OJ L 178, 17.7.2000, p. 1–16.

[8] Read further on the prohibition of general monitoring obligations: Senftleben, Martin and Angelopoulos, Christina, The Odyssey of the Prohibition on General Monitoring Obligations on the Way to the Digital Services Act: Between Article 15 of the E-Commerce Directive and Article 17 of the Directive on Copyright in the Digital Single Market, Amsterdam/Cambridge, October 2020, https://ssrn.com/abstract=3717022

[9] Recital 41 DSA.

[10] Recital 12 DSA.

[11] Recital 46 e-commerce Directive.

[12] Article 34 DSA.

[13] Art. 15 DSA.

[14] Art. 24 DSA.

EU Adoption of DAC 8 – Mandatory Exchange of Information between Tax Authorities on Crypto Assets

By Amedeo Rizzo

On 17 October 2023, the Council of the European Union approved Directive DAC 8 on administrative cooperation (Press Release), introducing significant modifications related to the communication and automatic exchange of information regarding proceeds from operations in crypto-assets and information on advance tax rulings for high-net-worth individuals. With this directive, the EU, considering the new opportunities brought about by digitalization, aims to expand the scope of the obligation for automatic exchange of information, fostering a higher degree of administrative cooperation among tax administrations.

Crypto assets definition and tax problems

The term crypto asset refers to a digital representation of value that relies on a cryptographically secured distributed ledger to validate and secure transactions[1]. This mechanism establishes a tamper-resistant record of transactions in the asset without the need for a central authority. The challenge in categorizing assets within this broad class arises from ongoing innovation and the diverse range of services that specific assets can offer; these factors make distinguishing such assets for tax purposes complex.
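The “tamper-resistant” property rests on hash-chaining: each ledger entry commits to the hash of the previous one, so retroactively altering any transaction invalidates everything recorded after it. The following Python sketch is a toy illustration of that idea only, not a model of any real crypto-asset protocol:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 hash of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, transaction: str) -> None:
    """Append a transaction that commits to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"tx": transaction, "prev_hash": prev}
    entry["hash"] = entry_hash(entry)  # hash covers tx + prev_hash
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Re-derive every hash; any retroactive edit breaks the chain."""
    prev = "genesis"
    for e in ledger:
        expected = entry_hash({"tx": e["tx"], "prev_hash": e["prev_hash"]})
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger: list = []
append(ledger, "Alice pays Bob 1 token")
append(ledger, "Bob pays Carol 1 token")
print(verify(ledger))                          # True
ledger[0]["tx"] = "Alice pays Bob 100 tokens"  # tamper with history
print(verify(ledger))                          # False: tampering detected
```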

However, a fundamental tax-relevant dimension that aids in their characterization is the distinction between their use for investment purposes and as a means of payment. At one end of the spectrum are “security tokens,” which essentially serve as digital representations of traditional financial or other assets. One example is “non-fungible tokens” (NFTs), which are cryptographically protected representations of unique assets, such as works of art. Conversely, central bank digital currencies (CBDCs) might be considered closer to fiat currency in digital form. While some national governments remain cautious about their adoption, the prevailing expectation is that the issuance of CBDCs will become widespread over time[2].

The primary impediment in the taxation of crypto assets stems from their inherent “anonymous” nature, wherein transactions employ public addresses that prove exceptionally challenging to associate with individuals or entities. This characteristic introduces a heightened susceptibility to tax evasion, placing the onus on tax authorities to address implementation challenges effectively.

When transactions occur through centralized exchanges, the challenge becomes more manageable as these exchanges can be subjected to standard know your customer (KYC) tracking rules and potential withholding taxes.

Background and content

On December 7, 2021, the Council, in its report to the European Council regarding tax matters, communicated its anticipation that the European Commission would present a legislative proposal in 2022 for the additional amendment of Directive 2011/16/EU on administrative cooperation in taxation (DAC).

This proposed amendment specifically pertained to the exchange of information regarding crypto-assets and tax rulings applicable to individuals with substantial wealth. According to the Council, it was imperative to fortify the stipulations of Directive 2011/16/EU pertaining to the information to be reported or exchanged to accommodate the evolving landscape of diverse markets and, consequently, to effectively address identified instances of tax fraud, tax evasion, and tax avoidance, by facilitating effective reporting and exchange of information.

In light of this objective, the Directive encompasses, among other aspects, the most recent revisions to the Common Reporting Standard (CRS) of the OECD. Notably, this includes the incorporation of provisions pertaining to electronic money and central bank digital currencies (CBDCs) delineated in Part II of the Crypto-Asset Reporting Framework and Amendments to the Common Reporting Standard, endorsed by the OECD on August 26, 2022.

Moreover, the Directive extends the purview of the automatic exchange of information concerning advance cross-border rulings to encompass specific rulings concerning individuals. In particular, it includes in the scope of the current regulation the rulings involving high-net-worth individuals, as well as provisions on automatic exchange of information on non-custodial dividends and similar revenues.

Additionally, the Directive enhances the regulations governing the reporting and communication of Tax Identification Numbers (TIN). The objective is to streamline the identification process for tax authorities, enabling them to accurately identify pertinent taxpayers and assess associated taxes. Additionally, the Directive seeks to modify provisions within the DAC concerning penalties imposed by Member States on individuals who fail to comply with national legislation related to reporting requirements established in accordance with the DAC.

This approach is adopted to ensure uniformity and coherence in the application of these provisions across Member States.

Problems addressed by the Directive

The bottom line of DAC 8 is the imperative of instituting mandatory reporting for crypto-asset service providers falling within the ambit of the Markets in Crypto-Assets (MiCA) Regulation. Additionally, all other crypto-asset operators offering services to residents of the EU are required to comply. Non-EU operators must undergo registration in a Member State to adhere to DAC 8 regulations, ensuring the reporting of pertinent information. This strategic approach equips the tax authorities of Member States with the requisite tools to monitor income generated from crypto assets by EU users and to implement necessary measures to ensure tax compliance.

The reporting mechanism entails three sequential steps. Initially, crypto-asset service providers collect information on the reportable transactions of their users. Subsequently, the providers submit the compiled information to the competent tax authority of their Member State (for EU providers) or the competent authority of the Member State of registration (for non-EU providers). Lastly, the competent tax authority transmits the reported information, inclusive of the TIN of the reported users, to the competent authorities of the users’ respective Member States of residence.
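As a rough sketch of that three-step flow (the field names and data format below are hypothetical; the Directive prescribes the content of the reports, not their technical representation):

```python
from dataclasses import dataclass

@dataclass
class ReportableTransaction:
    user_tin: str            # Tax Identification Number of the reportable user
    user_residence: str      # user's Member State of residence
    asset: str               # full name of the reportable crypto asset
    gross_amount_eur: float  # aggregate gross amount paid

# Step 1: the provider collects reportable transactions from its users.
collected = [
    ReportableTransaction("FR-123", "FR", "Bitcoin", 15_000.0),
    ReportableTransaction("IT-456", "IT", "Ether", 8_200.0),
]

# Step 2: the provider submits the compiled information to the competent
# authority of its own Member State (or of its Member State of registration).
# Step 3: that authority routes each record, TIN included, to the authority
# of the user's Member State of residence.
routing: dict[str, list[ReportableTransaction]] = {}
for tx in collected:
    routing.setdefault(tx.user_residence, []).append(tx)

for state, records in routing.items():
    print(f"{state} receives {len(records)} record(s)")
```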

The Directive also emphasizes reporting requirements concerning reportable users and crypto assets. Reportable users are mandated to furnish their:

  • complete name;
  • address;
  • Member State of residence;
  • date and place of birth;
  • TIN.

Reportable crypto assets are to be identified by their complete name and the aggregate gross amount paid or the aggregate fair market value.

Reporting crypto-asset service providers are obligated to obtain a self-certification from users, encompassing information crucial for determining the user’s tax residence, such as full name, date of birth, residence address, and TIN. The Directive allows a substantial degree of discretion in evaluating the reliability of this self-certification, permitting providers to verify information using alternative sources, including their own customer due diligence procedures, in case of doubt. If a user accesses the platform through a Member State’s digital identity system, the provider is exempt from collecting certain information but is still required to obtain the user’s full name, the identification service used, and the Member State of issuance.

The Directive incorporates provisions facilitating the effective implementation of the proposed measures, including mechanisms for enforcing compliance by non-EU crypto-asset operators with EU resident users. In instances where non-EU operators fail to comply with reporting obligations due to a lack of registration in a Member State, the DAC 8 grants Member States the authority to employ effective, proportionate, and dissuasive measures to ensure compliance, potentially encompassing measures that may prohibit the operator from operating within the EU as a last resort (Article 8ad).

Conclusion

In summary, the recently approved DAC 8 emerges as one of the needed responses to the evolving landscape of crypto assets, acknowledging some of the inherent challenges in taxation posed by their anonymous nature and the dynamic innovation within this domain.

By bridging the information gap and enhancing reporting mechanisms, DAC 8 empowers tax administrations to monitor and enforce compliance, thus mitigating some of the potential tax risks associated with crypto assets and tax rulings. The Directive, with its comprehensive approach and emphasis on international cooperation, is a critical step towards achieving transparency in the taxation of these emerging financial instruments.


[1] K. Baer, R. de Mooji, S. Hebous, M. Keen (2023). Taxing Cryptocurrencies, IMF WP/23/144.

[2] Ibid.

Large Language Models and the EU AI Act: the Risks from Stochastic Parrots and Hallucination

By Zihao Li[1]

With the launch of ChatGPT, Large Language Models (LLMs) are shaking up our whole society, rapidly altering the way we think, create and live. For instance, the GPT integration in Bing has altered our approach to online searching. While nascent LLMs have many advantages, new legal and ethical risks[2] are also emerging, stemming in particular from stochastic parrots and hallucination. The EU is the first and foremost jurisdiction that has focused on the regulation of AI models.[3] However, the risks posed by the new LLMs are likely to be underestimated by the emerging EU regulatory paradigm. Therefore, this correspondence warns that the European AI regulatory paradigm must evolve further to mitigate such risks.

Stochastic parrots and hallucination: unverified information generation

One potentially fatal flaw of LLMs, exemplified by ChatGPT, is that the information they generate is unverified. For example, ChatGPT often generates pertinent, but non-existent, academic reading lists. Data scientists attribute this problem to “hallucination”[4] and “stochastic parrots”.[5] Hallucination occurs when LLMs generate text based on their internal logic or patterns, rather than the true context, leading to confident but unjustified, unverified, and deceptive responses. Stochastic parroting is the repetition of training data or its patterns, rather than actual understanding or reasoning.

The text production method of LLMs is to reuse, reshape, and recombine the training data in new ways to answer new questions, while ignoring the question of the authenticity and trustworthiness of the answers. In short, LLMs only predict the probability of a particular word coming next in a sequence, rather than actually comprehending its meaning. Although the majority of answers are high-quality and true, they are generated with no check on their veracity. Even where most training data is reliable and trustworthy, the essential issue is that the recombination of trustworthy data into new answers in a new context may lead to untrustworthiness, as the trustworthiness of information is conditional and often context-bound. If this precondition of trustworthy data disappears, trust in the answers will be misplaced. Therefore, while the LLMs’ answers may seem highly relevant to the prompts, they are made up.
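A toy illustration of this generation method: the “model” below stores nothing but conditional word frequencies and samples the next token from them, with no representation of whether the resulting sentence is true. The data is invented; real LLMs learn such distributions with neural networks over vast corpora, but the sampling principle is the same:

```python
import random

# Toy next-token distributions, standing in for patterns "learned" from
# training data. The model stores which word tends to follow which --
# no facts, and no notion of truth.
next_token_probs = {
    "the":    {"court": 0.5, "author": 0.3, "model": 0.2},
    "court":  {"held": 0.7, "found": 0.3},
    "author": {"wrote": 0.6, "argued": 0.4},
    "model":  {"predicts": 1.0},
}

def generate(start: str, length: int = 4) -> str:
    """Sample a plausible-sounding sequence, one token at a time."""
    tokens = [start]
    while len(tokens) < length and tokens[-1] in next_token_probs:
        dist = next_token_probs[tokens[-1]]
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

# Fluent output, but nothing here checks whether the claim is true.
print(generate("the"))
```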

However, merely improving the accuracy of the models through new data and algorithms is insufficient, because the more accurate the model is, the more users will rely on it, and thus be tempted not to verify the answers, leading to greater risk when stochastic parrots and hallucinations appear. This situation, where an increase in accuracy leads to higher reliance and potential risks, can be described as the ‘accuracy paradox’. The risk is beyond measure if users encounter these problems in especially sensitive areas such as healthcare or the legal field. Even if utilizing real-time internet sources, the trustworthiness of LLMs may remain compromised, as exemplified by factual errors in new Bing’s launch demo.

These risks can lead to ethical concerns, including misinformation and disinformation, which may adversely affect individuals through misunderstandings, erroneous decisions, loss of trust, and even physical harm (e.g., in healthcare). Misinformation and disinformation can reinforce bias,[6] as LLMs may perpetuate stereotypes present in their training data.[7]

The EU AI regulatory paradigm: Advanced Legal intervention required

The EU has already commenced putting effort into AI governance. The AI Act (AIA) is the first and globally most ambitious attempt to regulate AI. However, the proposed AIA, employing a risk-based taxonomy for AI regulation, encounters difficulties when applied to general-purpose LLMs. On the one hand, categorizing LLMs as high-risk AI due to their generality may impede EU AI development. On the other hand, if general-purpose LLMs are regarded as chatbots, falling within a limited-risk group, merely imposing transparency obligations (i.e., providers need to disclose that the answer is generated by AI) would be insufficient,[8] because the danger of parroting and hallucination risks relates not only to whether users are clearly informed that they are interacting with AI, but also to the reliability and trustworthiness of LLMs’ answers, i.e., how users can distinguish between truth and made-up answers. When a superficially eloquent and knowledgeable chatbot generates unverified content with apparent confidence, users may trust the fictitious content without undertaking verification. Therefore, the AIA’s transparency obligation is not sufficient.

Additionally, the AIA does not fully address the role, rights, or responsibilities of end-users. As a result, users have no avenue to contest or complain about LLMs, especially when stochastic parrots and hallucination occur and affect their rights. Moreover, the AIA does not impose any obligations on users. However, as noted above, the occurrence of disinformation is largely due to deliberate misuse by users. Without imposing responsibilities on the user side, it is difficult to regulate the harmful use of AI by users. Meanwhile, it is argued that the logic of the AIA is to work backward from certain harms to measures that mitigate the risk that these harms materialize.[9] The primary focus ought to shift towards the liability associated with the quality of input data, rather than imposing unattainable obligations on data quality.

Apart from the AIA, the Digital Services Act (DSA) aims to govern disinformation. However, the DSA’s legislators focus only on the responsibilities of the intermediary, overlooking the source of the disinformation. Imposing obligations only on intermediaries when LLMs are embedded in services is insufficient, as such regulation cannot reach the underlying developers of LLMs. Similarly, the Digital Markets Act (DMA) focuses on the regulation of gatekeepers, aiming to establish a fair and competitive market. Although scholars have recently claimed that the DMA has significant implications for AI regulation,[10] the DMA primarily targets the effects of AI on market structure; it can provide only limited help with LLMs. The problem that the DSA and DMA will face is that both govern only the platform, not the usage, performance, and output of AI per se. This regulatory approach is a consequence of the current platform-as-a-service (PaaS) business model. However, once the business model shifts to AI model-as-a-service (MaaS),[11] this regulatory framework is likely to become nugatory, as the platform does not fully control the processing logic and output of the algorithmic model.

Therefore, it is necessary to urgently reconsider the regulation of general-purpose LLMs.[12] The parroting and hallucination issues show that minimal transparency obligations are insufficient, since LLMs often lull users into misplaced trust. When using LLMs, users should be acutely aware that the answers are made-up, may be unreliable, and require verification. LLMs should be obliged to remind and guide users on content verification. Particularly when prompted with sensitive topics, such as medical or legal inquiries, LLMs should refuse to answer, instead directing users to authoritative sources with traceable context. The suitable scope for such filter and notice obligations warrants further discussion from legal, ethical and technical standpoints.

Furthermore, legislators should reassess the risk-based AI taxonomy in the AIA. The above discussion suggests that the effective regulation of LLMs needs to ensure their trustworthiness, taking into account the reliability, explainability and traceability of generated information, rather than solely focusing on transparency. Meanwhile, end-users, developers and deployers’ roles should all be considered in AI regulations, while shifting focus from PaaS to AI MaaS.


[1] The work is adapted and developed from the preprint version of a paper published in Nature Machine Intelligence, “Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence. https://doi.org/10.1038/s42256-023-00672-y”

[2] ‘Much to Discuss in AI Ethics’ (2022) 4 Nature Machine Intelligence 1055.

[3] Zihao Li, ‘Why the European AI Act Transparency Obligation Is Insufficient’ [2023] Nature Machine Intelligence.

[4] Ziwei Ji and others, ‘Survey of Hallucination in Natural Language Generation’ [2022] ACM Computing Surveys 3571730.

[5] Emily M Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM 2021) <https://dl.acm.org/doi/10.1145/3442188.3445922> accessed 14 January 2023.

[6] Marvin van Bekkum and Frederik Zuiderveen Borgesius, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770.

[7] Zihao Li, ‘Affinity-Based Algorithmic Pricing: A Dilemma for EU Data Protection Law’ (2022) 46 Computer Law & Security Review 1.

[8] Lilian Edwards, ‘The EU AI Act: A Summary of Its Significance and Scope’ (Ada Lovelace Institute 2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf> accessed 17 January 2023.

[9] Martin Kretschmer and others, ‘The Risks of Risk-Based AI Regulation: Taking Liability Seriously’.

[10] Philipp Hacker, Johann Cordes and Janina Rochon, ‘Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and Beyond’ [2022] SSRN Electronic Journal <https://www.ssrn.com/abstract=4316944> accessed 8 January 2023.

[11] Tianxiang Sun and others, ‘Black-Box Tuning for Language-Model-as-a-Service’, Proceedings of the 39th International Conference on Machine Learning (PMLR 2022) <https://proceedings.mlr.press/v162/sun22e.html> accessed 10 February 2023.

[12] Philipp Hacker, Andreas Engel and Theresa List, ‘Understanding and Regulating ChatGPT, and Other Large Generative AI Models: With input from ChatGPT’ [2023] Verfassungsblog <https://verfassungsblog.de/chatgpt/> accessed 20 May 2023.

The EU Foreign Subsidies Regulation: a Structural Change to the Internal Market

By Amedeo Rizzo

The EU Foreign Subsidies Regulation (“FSR”) was published on 14 December 2022 and entered into force on 12 January 2023. The Regulation creates a new regime with the objective of protecting the internal market of the European Union from distortions created by foreign subsidies. In doing so, the FSR imposes an approval procedure for foreign subsidies to companies engaging in commercial activities in the EU, and notification obligations for M&A activities of significant EU businesses and for large EU public contracts.

The objective of the Regulation is to close an existing loophole in internal market supervision, which was very restrictive with respect to EU state aid but did not take into account possible distortions coming from non-EU countries. This is supposed to create a level playing field for all companies that operate in the EU, supervised by the European Commission through ex officio investigatory powers and the right to implement measures to ensure compliance.

Foreign Subsidies covered by the Regulation

The FSR covers any form of contribution, direct or indirect, provided by non-EU governments or by any public or private entity whose actions are attributable to the government of a non-EU country. Contributions could be distortive where they confer benefits that would not normally be available to an EU company on the market, and which are selective in the way they advantage one or more companies or industries as opposed to all companies, or all companies active in a particular industry.

The notion of financial contributions under the FSR is quite a broad concept, including many forms of advantages. As provided in the Regulation, financial contributions include but are not limited to:

  • the transfer of funds/liabilities, such as capital injections, grants, loans, guarantees, tax incentives, the setting off of operating losses, compensation for financial burdens imposed by public authorities, debt forgiveness, debt to equity swaps or rescheduling;
  • the foregoing of revenue that is otherwise due, such as tax exemptions or the granting of special or exclusive rights without adequate remuneration; or
  • the provision of goods or services or the purchase of goods or services.

These kinds of benefits include zero- or low-interest loans, tax exemptions and reductions, state-funded R&D and other forms of intellectual property subsidization, government contracts and grants of exclusive rights without adequate remuneration.

The subjects that are limited in their ability to provide contributions to companies operating in the EU internal market are all the entities related to the non-EU country and therefore include:

  • the central government and public authorities at all other levels;
  • any foreign public entity whose actions can be attributed to the third country, taking into account elements such as the characteristics of the entity and the legal and economic environment prevailing in the State in which the entity operates, including the government’s role in the economy; or
  • any private entity whose actions can be attributed to the third country, taking into account all relevant circumstances.

Distortion of competition in the EU

One of the fundamental factors to trigger the FSR is that the foreign subsidy needs to potentially distort competition in the EU, meaning that it negatively affects it.

Distortions in the internal market are determined on the basis of indicators, which can include:

  • the amount of the foreign subsidy;
  • the nature of the foreign subsidy;
  • the situation of the undertaking, including its size and the markets or sectors concerned;
  • the level and evolution of the economic activity of the undertaking on the internal market;
  • the purpose and conditions attached to the foreign subsidy as well as its use on the internal market.

In general, the Commission seems to have quite extensive discretion in the decision-making process of recognizing the negative effects. However, it will also have to take into account the positive effects on the market, which will burden the Commission with a balancing test.

The Regulation provides some size-related thresholds indicating which financial contributions are likely to distort competition (a simple illustration follows the list):

  • A subsidy that does not exceed the de minimis aid measures, contained in Regulation (EU) No 1407/2013 (EUR 200,000 per third country over any consecutive period of three years) shall not be considered distortive.
  • A subsidy that does not exceed EUR 4 million per undertaking over any consecutive period of three years is unlikely to cause distortions.
  • A subsidy that exceeds EUR 4 million is likely to cause distortions if it negatively affects competition in the EU.
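A minimal illustration of these thresholds (the EUR figures are the Regulation’s; the function name and category labels are my own simplification of what is, in practice, a fact-specific assessment):

```python
# Classify a subsidy received over any consecutive three-year period
# against the FSR's size-related thresholds described above.

def fsr_threshold_category(total_subsidy_eur: float) -> str:
    if total_subsidy_eur <= 200_000:   # de minimis, Reg. (EU) No 1407/2013
        return "not considered distortive"
    if total_subsidy_eur <= 4_000_000:
        return "unlikely to cause distortions"
    return "likely to cause distortions (if it negatively affects EU competition)"

for amount in (150_000, 2_500_000, 9_000_000):
    print(f"EUR {amount:>9,}: {fsr_threshold_category(amount)}")
```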

The role of the European Commission

On its own initiative, the Commission may review a transaction or a public procurement ex officio on the grounds of information received from any source, or on the basis of notifications of potentially subsidized M&A transactions or public procurement bids. If the Commission finds sufficient evidence of the existence of a distortive subsidy, it carries out a preliminary review.

When this procedure leads to enough evidence of the foreign distortive subsidy, the Commission initiates an in-depth investigation. When a foreign distortive subsidy is identified, the Commission can impose redressive measures or accept commitments.

The non-exhaustive list of redressive measures includes the reduction of capacity or market presence of the subsidized entity, refraining from certain investments, and the repayment of the foreign subsidy.

The recipient of the subsidy may offer commitments and, for instance, pay back the subsidy. The Commission may accept commitments if it considers them to be full and effective remedies to the distortion.

A separate mechanism of market investigations allows the Commission to investigate a particular business sector, a type of economic activity, or a subsidy if there is reasonable suspicion. In its surveillance activities, the Commission can issue requests for information requiring that entities or their associations provide certain information, irrespective of whether they are subject to an investigation.

To block damaging activities, the Commission can impose interim measures. Additionally, it is authorized to impose fines on entities for breaching procedural requirements or not providing information. The fines can reach 1% of the aggregate turnover, or 5% of the average daily aggregate turnover for each day of the violation, calculated on the previous year’s data. Fines can go up to 10% of the turnover when companies fail to notify a transaction or a subsidy granted during a public procurement procedure, implement a notified concentration before the end of the review period, or try to circumvent the notification requirements.
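To get a sense of the orders of magnitude, here is a small worked example with invented turnover figures (only the percentages come from the Regulation; the company data and the 30-day violation are assumptions):

```python
# Hypothetical company figures, previous financial year.
annual_turnover_eur = 500_000_000
avg_daily_turnover_eur = annual_turnover_eur / 365

# Caps for procedural breaches (e.g., failing to provide information):
procedural_fine_cap = 0.01 * annual_turnover_eur   # up to 1% of turnover
per_day_cap = 0.05 * avg_daily_turnover_eur        # up to 5% of avg daily turnover
periodic_cap_30_days = per_day_cap * 30            # assumed 30-day violation

# Cap for substantive breaches (e.g., failing to notify a concentration):
substantive_fine_cap = 0.10 * annual_turnover_eur  # up to 10% of turnover

print(f"Procedural cap:      EUR {procedural_fine_cap:,.0f}")   # 5,000,000
print(f"30-day periodic cap: EUR {periodic_cap_30_days:,.0f}")  # ~2,054,795
print(f"Substantive cap:     EUR {substantive_fine_cap:,.0f}")  # 50,000,000
```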

Conclusion

This measure constitutes a paramount change in the EU approach to competition in the internal market. It will be important to see how much the Commission uses this new instrument, and how it assesses market distortions on a case-by-case basis, as there will probably be a delicate equilibrium with trade legislation and possible countervailing measures.

It is important for companies operating in the EU that have received these kinds of financial contributions from non-EU countries to prepare quickly for this new Regulation. Groups that may fall within this situation might want to reform their internal processes to collect information, understand the reporting requirements, and prepare justifications or notifications to the EU.

I, Robot: The U.S. Copyright Office Publishes Guidance on Registration of Works Generated by AI

By Marie-Andrée Weiss

On March 16, 2023, the U.S. Copyright Office (USCO) published its Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (the Guidance).

Artificial Intelligence (AI) is now “capable of producing expressive material”. The USCO chose its words carefully: AI “produces” works, it does not “create” them. However, these works are “expressive materials”.

AI is now among us, but not in the shape imagined by Isaac Asimov: androids, such as Robbie, who took care of a little girl. AI is on our desktop and in our pockets, an app installed on our smartphone.

AI technology can be used to produce a work by first obtaining a large data set of preexisting works, using this data set to “train” a model, and then “us[ing] inferences from that training to generate new content.” Such content can be text, an image, or audio. The USCO mentioned in the Guidance that it would publish later this year a notice of inquiry about how the law should address the use of works protected by copyright in the data set.

The USCO mentioned two recent cases raising the issue of whether a work created using an AI program can be protected by copyright: “A Recent Entrance to Paradise”, a pictorial work, and Zarya of the Dawn, a comic book whose images were created by AI while a human authored the text. Are the works thus produced protectable by copyright?

A Recent Entrance to Paradise

Dr. Stephen Thaler created A Recent Entrance to Paradise, an image of abandoned train tracks framed by wisterias, using an AI program he had created and programmed, which he called the “Creativity Machine”.

Dr. Thaler sought to register the copyright in November 2018, but the USCO denied registration in August 2019 because the Office has a “Human Authorship Requirement” policy. Dr. Thaler filed two requests for reconsideration, both of which the USCO denied. Dr. Thaler then filed suit against the USCO in June 2022, claiming that “the denial creates a novel requirement for copyright registration that is contrary to the plain language of the Copyright Act…, contrary to the statutory purpose of the Act, and contrary to the Constitutional mandate to promote the progress of science.” The denials are subject to judicial review under the Administrative Procedure Act, 5 U.S.C. § 704.

On January 10, 2023, Dr. Thaler filed a motion for summary judgment, arguing that “the plain language of the Copyright Act… does not restrict copyright to human-made works, nor does any case law.” The work is a fixed, visual artwork. As the Supreme Court of the U.S. (SCOTUS) explained in 1991 in Feist Publications, Inc. v. Rural Telephone Service Company, “[t]o qualify for copyright protection, a work must be original to the author”, which means that the work must be independently created by the author and must possess at least some minimal degree of creativity.

Dr. Thaler also argued that “courts have referred to creative activity in human-centric terms, based on the fact that creativity has traditionally been human-centric and romanticized.”

Alternatively, Dr. Thaler argued that he owns the copyright in “A Recent Entrance to Paradise” under the work-for-hire doctrine: ownership originally vested in him because he invented and owns the Creativity Machine, and its outputs automatically vest in him.

Zarya of the Dawn

Kristina Kashtanova created a comic book, Zarya of the Dawn, using an AI program to illustrate it. She sought to register its copyright and was successful at first, but the USCO then canceled the certificate and issued a new one protecting only the text of the comic book and the selection, coordination, and arrangement of its written and visual elements. The images created by AI, however, were not protectable because they “are not the product of human authorship.” The letter of the USCO cited Burrow-Giles Lithographic Co. v. Sarony, an 1884 case where SCOTUS explained that photographs, still a technological novelty at the time, were protected by copyright because they were “representatives of original intellectual conceptions of the author.” SCOTUS defined authors in Burrow-Giles as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.” But the Court explained that if photography were a “merely mechanical” process “with no place for novelty, invention or originality” for the photographer, then the photographs could not be protected by copyright.

The USCO explained in its letter about Zarya of the Dawn that even if Ms. Kashtanova claimed to have “guided” the structure and content of the comic images, it was the AI program, not her, “that originated the “traditional elements of authorship” in the images.”

Public guidance on the registration of works containing AI

These two cases show that works may be generated by AI either entirely or only partially. The purpose of the Guidance is to assist the public (and its attorneys!) when seeking to register works containing content generated (not created!) by AI.

In the Guidance, the USCO explained that it evaluates works containing human authorship combined with uncopyrightable material generated by, or with the assistance of, technology by assessing whether technology was an “assisting instrument” or whether the work was conceived by it. In the case of AI, the USCO explained that it “will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead an author’s ‘own original mental conception, to which [the author] gave visible form’”, and that this would be assessed case by case.

If the AI receives solely a prompt from a human being, the work cannot be protected by copyright, as the human being does not have creative control over how the AI system interprets the prompt and generates the work; the prompts are more like instructions to a commissioned artist.

If a work contains AI-generated material and sufficient human authorship, it can be protected by copyright, for instance, if a human being selects and arranges AI-generated materials in a way original enough to be protectable.


Does the Copyright Act indeed require human authorship?

The USCO cited Burrow-Giles in its Guidance to support its view that authors must be human, and also cited the Ninth Circuit case Urantia Found. v. Maaherra, which concerned a book that both parties believed was “authored by celestial beings and transcribed, compiled and collected by mere mortals.” The defendant in this copyright infringement suit claimed that the book was not protected by copyright because it was not authored by a human being and thus was not a “work of authorship” within the meaning of the Copyright Act.

However, the Ninth Circuit noted that “[t]he copyright laws, of course, do not expressly require “human” authorship, and considerable controversy has arisen in recent years over the copyrightability of computer-generated works”. In this case, the Court noted that the Copyright Act was not intended to protect “creations of divine beings” and that “in this case some element of human creativity must have occurred in order for the [b]ook to be copyrightable.”

If the Copyright Act does not require human authorship, but refuses to accept that “divine beings” can be the author, and case law states that a monkey, human beings’ closest cousin, cannot be an author within the meaning of the Copyright Act (Naruto v. Slater, a case from the United States District Court, Northern District of California previously discussed in the TTL Newsletter), will robots ever be able to claim authorship of a work? Such works are already winning prizes at art fairs, such as Théâtre D’opéra Spatial, created using AI, which won first prize at the Colorado State Fair’s digital arts competition.

If works created by AI cannot be protected by copyright, the incentive to develop such technology may be lacking. We are likely to see more and more works created by humans using elements created by AI, and the border between elements created by human beings and by machines blurring more and more.

Can Banning Apps Contribute to a Privacy-friendlier Internet?

By Salome Kohler

1. Why banning could help

Recently, several governments, such as the UK and other European countries, have decided to ban the popular TikTok app from government devices.[1] In the U.S., the White House told federal agencies at the end of February 2023 that they had one month to remove the TikTok app from government devices.[2] So far, TikTok has already been banned on the devices of Congress, the White House, and the U.S. Armed Forces, due to concerns that the app collects users’ browsing history, location, etc.[3]

However, the U.S. is also considering banning the app from all devices in the U.S.[4] The main problem is that sensitive user data is collected, used, and sold by many different apps and websites, including TikTok.[5] In particular, there is no transparency for the end-user about what kind of data is being collected.[6] While banning apps may violate the First Amendment right to freedom of speech, a ban could be considered if security threats cannot be addressed by other means. A major concern could be the collection of a lot of (sensitive) user data on millions of Americans, leaving them largely unaware of the privacy attack against them. Not only private matters, but also business and other sensitive information can be collected and used by the data collector. Since we don’t know which person holds information that could be a security risk, a general prohibition could be supported.

In 2020, a ban on TikTok was rejected by federal courts, which found that the security risks did not outweigh the restriction of First Amendment rights.[7] However, if the RESTRICT Act were to become law, the government would be able to ban apps and other technology products if they come from countries that could be a threat to U.S. interests.[8] A key issue seems to be comprehensive information about how much online data is actually being collected. The effectiveness of a ban would also be a concern: even a ban might not prevent all collection of user data.[9]

2. Better approach: Tighter regulation

Computational studies have shown that even when a user declines an app’s collection of data, it is often collected anyway.[10] So we see a lot of privacy violations in Europe as well as the U.S.[11]

TikTok could grow as fast as it did due to the lack of privacy law.[12] Many other apps still collect intimate details about the user while profiting from that data.[13] This means that TikTok and other actors can still buy the data from data brokers, violating privacy as such.[14]

Therefore, a much better approach would be to address the privacy issues of all apps, websites, etc. that undermine the privacy of online users.[15]


[1] Sapna Mahehwari, Amanda Holpuch, Why Countries Are Trying to Ban TikTok, The New York Times (Mar. 27, 2023), https://www.nytimes.com/article/tiktok-ban.html.

[2] Id.

[3] Haleluya Hadero, Why TikTok is being banned on gov’t phones in US and beyond, AP News (Feb. 28, 2023), https://apnews.com/article/why-is-tiktok-being-banned-7d2de01d3ac5ab2b8ec2239dc7f2b20d.

[4] Sapna Mahehwari, Amanda Holpuch, Why Countries Are Trying to Ban TikTok, The New York Times (Mar. 27, 2023), https://www.nytimes.com/article/tiktok-ban.html.

[5] Id.

[6] Lauren Feiner, How a TikTok ban in the U.S. might work, CNBC, (Mar. 17, 2023), https://www.cnbc.com/2023/03/17/how-a-tiktok-ban-in-the-us-might-work-and-challenges-it-raises.html.

[7] Chloe Xiang, Jordan Pearsons, Jason Koebler, Banning TikTok Is Unconstitutional, Ludicrous, and a National Embarrassment, Vice (Mar. 23, 2023), https://www.vice.com/en/article/epv48n/banning-tiktok-is-unconstitutional-ludicrous-and-a-national-embarrassment.

[8] Id. (Vice), S. 686 – Restrict Act, 118th Congress (2023-2024), https://www.congress.gov/bill/118th-congress/senate-bill/686/text.

[9] Id. (Vice).

[10] J Reardon, A Feal, P Wijesekera, A Elazari bar On, N Vallina-Rodriguez, S Egelman, 50 Ways to Leak Your Data: An Exploration of Apps’ Circumvention of the Android Permissions System, Proceedings of the 28th USENIX Security Symposium, August 14-16, 2019, Santa Clara, CA, USA, 615.

[11] Id.

[12] Calli Schroeder, TikTok is Not the Only Problem, EPIC (Mar. 23, 2023), https://epic.org/tiktok-is-not-the-only-problem/.

[13] Id.

[14] Id.

[15] Id.