European Commission Working on Ethical Standards for Artificial Intelligence (AI)

By Paul Opitz

In the prominent areas of self-driving cars and Lethal Autonomous Weapons Systems, the development of autonomous systems has already led to important ethical debates.[1] On 9 March 2018 the European Commission published a press release announcing that it will set up a group of experts to develop guidelines on AI ethics, building on a statement by the European Group on Ethics in Science and New Technologies.

 

Call for a wide and open discussion

The Commission emphasizes the possible major benefits from artificial intelligence, ranging from better healthcare to more sustainable farming and safer transport. However, since there are also many increasingly urgent moral questions related to the impact of AI on the future of work and legislation, the Commission calls for a “wide, open and inclusive discussion” on how to benefit from artificial intelligence, while also respecting ethical principles.[2]

 

Tasks of the expert group

The expert group will be set up by May and tasked to:

  • advise the Commission on building a diverse group of stakeholders for a “European AI Alliance”
  • support the implementation of a European initiative on artificial intelligence
  • draft guidelines for the ethical development and use of artificial intelligence based on the EU's fundamental rights, considering, inter alia, issues of fairness, safety, transparency, and the future of work.[3]

 

Background

The goal of ensuring ethical standards in AI and robotics was recently set out in the Joint Declaration on the EU's legislative priorities for 2018-2019. Furthermore, the guidelines on AI ethics will build on the Statement on Artificial Intelligence, Robotics and Autonomous Systems by the European Group on Ethics in Science and New Technologies (EGE) of 9 March 2018. This statement summarizes relevant technological developments and identifies a range of essential moral questions.

Moral issues

Safety, security, and the prevention of harm are of utmost importance.[4] In addition, the EGE poses the question of human moral responsibility. How can moral responsibility be apportioned, and could it possibly be “shared” between humans and machines?[5]

On a more general level, questions about governance, regulation, design, and certification occupy lawmakers in order to serve the welfare of individuals and society.[6] Finally, there are questions regarding the transparency of autonomous systems and their effective value to society.

Key considerations

The statement explicitly emphasizes that the term “autonomy” stems from the field of philosophy and refers to the ability of human persons to legislate for themselves, the freedom to choose rules and laws for themselves to follow. Although the terminology is widely applied to machines, its original sense is an important aspect of human dignity and should therefore not be relativised. No smart machine ought to be accorded the moral standing of the human person or inherit human dignity.[7]

In this sense, moral debates must be framed broadly, so that narrow constructions of ethical problems do not oversimplify the underlying questions.[8] Discussions of self-driving cars, for instance, should not revolve solely around “Trolley Problem” thought experiments, in which every possible choice is associated with the loss of human lives. More important questions include the past design decisions that led up to such dilemmas, the role of values in design, and how to weigh values in case of conflict.[9]

For autonomous weapons systems, a large part of the discussion should focus on the nature and meaning of “meaningful human control” over intelligent military systems and how to implement forms of control that are morally desirable.[10]

Shared ethical framework as a goal

As initiatives concerning ethical principles are uneven at the national level, the European Parliament calls for a range of measures to prepare for the regulation of robotics and the development of a guiding ethical framework for the design, production and use of robots.[11]

As a first step towards ethical guidelines, the EGE defines a set of basic principles and democratic prerequisites based on fundamental values of the EU Treaties. These include, inter alia, human dignity, autonomy, responsibility, democracy, accountability, security, data protection, and sustainability.[12]

 

Outlook

It is now up to the expert group to discuss whether the existing legal instruments can deal effectively with the problems outlined above, and which new regulatory instruments might be required on the way towards a common, internationally recognized ethical framework for the use of artificial intelligence and autonomous systems.[13]

[1] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 10.

[2] European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[3] European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[4] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 8.

[5] Id., at p. 8.

[6] Id., at p. 8.

[7] Id., at p. 9.

[8] Id., at p. 10.

[9] Id., at p. 10-11.

[10] Id., at p. 11.

[11] Id., at p. 14.

[12] Id., at p. 16-19.

[13] Id., at p. 20.


Facebook’s Data Sharing Practices under Unfair Competition Law

By Catalina Goanta

2018 has so far not been easy on the tech world. The first months of the year brought a lot of bad news: two accidents with self-driving cars (Tesla and Uber) and the first human casualty,[1] another Initial Coin Offering (ICO) scam costing investors $660 million,[2] and Donald Trump promising to go after Amazon.[3] But the scandal that made the most waves had to do with Facebook data being used by Cambridge Analytica.[4]

 

Data brokers and social media

In a nutshell, Cambridge Analytica was a UK-based company that claimed to use data to change audience behavior in political and commercial contexts.[5] Without going into too much detail regarding the identity of the company, its ties, or its political affiliations, one of the key points of the Cambridge Analytica whistleblowing conundrum is that it shed light on Facebook data sharing practices which, unsurprisingly, have been around for a while. To create psychometric models which could influence voting behavior, Cambridge Analytica used the data of around 87 million users, obtained through Facebook's Graph Application Programming Interface (API), a developer interface providing industrial-scale access to personal information.[6]

The Facebook Graph API

The first version of the API (v1.0), which was launched in 2010 and remained in use until 2015, could be used to gather not only public information about a given pool of users, but also about their friends, in addition to granting access to private messages sent on the platform (see Table 1 below). The amount of information belonging to users' friends that Facebook allowed third parties to tap into is astonishing. The extended profile properties permission facilitated the extraction of information about: activities, birthdays, check-ins, education history, events, games activity, groups, interests, likes, location, notes, online presence, photo and video tags, photos, questions, relationships and relationship details, religion and politics, status, subscriptions, website and work history. Extended permissions changed in 2014 with the second version of the Graph API (v2.0), which has undergone many further changes since (see Table 2). However, one interesting thing that stands out when comparing versions 1.0 and 2.0 is that less information is gathered from targeted users than from their friends, even though v2.0 withdrew the extended profile properties (but not the extended permissions relating to reading private messages).

Table 1 – Facebook application permissions and availability to API v1 (x) and v2 (y)[7]
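To make these mechanics concrete, the following is a minimal, hypothetical sketch (in Python) of how a third-party app might have pulled friends' data under Graph API v1.0. The token value is invented and the exact field names are assumptions for illustration; only the general pattern, one authorized user exposing profile fields of friends who never touched the app, reflects the practice described above.

```python
import requests

# Invented token; in practice issued after a single user authorized the app
# and granted extended permissions such as friends_likes or friends_birthday.
ACCESS_TOKEN = "EAAB...illustrative-token"

# Under v1.0, one authorized user could expose profile fields belonging to
# friends who had never interacted with the app themselves.
resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "access_token": ACCESS_TOKEN,
        # Illustrative selection of the extended profile properties.
        "fields": "name,birthday,location,likes.limit(10)",
    },
)

for friend in resp.json().get("data", []):
    print(friend.get("name"), friend.get("birthday"), friend.get("location"))
```

From v2.0 onwards, the same endpoint returns only friends who have themselves installed and authorized the app, which is the contraction of friend data discussed above.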

Cambridge Analytica obtained Facebook data with help from another company, Global Science Research, set up by the Cambridge University-affiliated academics Aleksandr Kogan and Joseph Chancellor. Kogan had previously collaborated with Facebook through his work at the Cambridge Prosociality & Well-Being Lab. For his research, Kogan collected data from Facebook as a developer, using the Lab's account registered on Facebook via his own personal account, and he was also in contact with Facebook employees who directly sent him anonymized aggregate datasets.[8]

Table 2 – The History of the Facebook Graph API

The Facebook employees who sent him the data were working for Facebook's Protect and Care Team, but were themselves doing research on user experience as PhD students.[9] Kogan states that the data he gathered with the Global Science Research quiz is separate from the initial data he used in his research, and that it was kept on different servers.[10] Kogan's testimony before the UK Parliament's Digital, Culture, Media and Sport Committee does clarify which streams of data were used by which actors, but none of the Members of Parliament attending the hearing asked any questions about the very process through which Kogan was able to tap into Facebook user data. He acknowledged that to harvest information for Strategic Communication Laboratories (Cambridge Analytica's affiliated company) he used a market research recruitment strategy: for around $34 per person, he aimed to recruit up to 20,000 individuals to take an online survey.[11] The survey was accessible through an access token, which required participants to log in using their Facebook credentials.

Access Tokens

On the user end, Facebook Login is a single sign-on feature that issues access tokens allowing users to log in across platforms. The benefits of using access tokens are undeniable: operating multiple accounts through one login system allows for efficient account management. The dangers are equally clear. On the one hand, a single login point (one username and one password) for multiple accounts can be a security vulnerability. On the other hand, even if Facebook claims that the user is in control of the data shared with third parties, some apps using Facebook Login (for instance Wi-Fi access in cafés, or online voting for TV shows) do not allow users to change the information requested by the app, creating a ‘take it or leave it’ situation for users.

Figure 1 – Facebook Login interface

On the developer end, access tokens allow apps operating on Facebook to access the Graph API. The access tokens perform two functions:

  • They allow developer apps to access user information without asking for the user’s password; and
  • They allow Facebook to identify developer apps, the users engaging with each app, and the types of data the user has permitted the app to access.[12]

Understanding how Facebook Login works is essential to clarifying what information users see right before agreeing to hand their Facebook data over to other parties.
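The two token functions listed above can be illustrated with the Graph API's token inspection endpoint (/debug_token), which reports which app a token belongs to, which user authorized it, and which permissions were granted. The credential values below are placeholders; this is an illustrative sketch, not production code.

```python
import requests

APP_ID = "1234567890"             # placeholder app credentials
APP_SECRET = "app-secret"
USER_TOKEN = "EAAB...user-token"  # placeholder token issued via Facebook Login

# An app access token can be formed from the app's own credentials.
app_token = f"{APP_ID}|{APP_SECRET}"

# debug_token ties together the app, the user, and the granted permissions.
resp = requests.get(
    "https://graph.facebook.com/debug_token",
    params={"input_token": USER_TOKEN, "access_token": app_token},
)
data = resp.json().get("data", {})

print("app_id:", data.get("app_id"))    # which developer app the token identifies
print("user_id:", data.get("user_id"))  # which user is engaging with the app
print("scopes:", data.get("scopes"))    # data the user permitted, e.g. ['email', 'user_friends']
```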

 

Data sharing and consent

As Figure 1 shows, and as can be seen when browsing through Facebook's Terms of Service, consent seems to be at the core of Facebook's interaction with its users. This being said, it is impossible to determine, on the basis of these terms, what Facebook really does with the information it collects. For instance, the Statement of Rights and Responsibilities dated 30 January 2015 contains an entire section on sharing content and information:

  You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition:

  1. For content that is covered by intellectual property rights, like photos and videos (IP content), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.
  2. When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others).
  3. When you use an application, the application may ask for your permission to access your content and information as well as content and information that others have shared with you. We require applications to respect your privacy, and your agreement with that application will control how the application can use, store, and transfer that content and information. (To learn more about Platform, including how you can control what information other people may share with applications, read our Data Policy and Platform Page.)
  4. When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you (i.e., your name and profile picture).
  5. We always appreciate your feedback or other suggestions about Facebook, but you understand that we may use your feedback or suggestions without any obligation to compensate you for them (just as you have no obligation to offer them).

This section appears to establish Facebook as a user-centric platform that wants to give as much ownership as possible to its customers. However, the section says nothing about the fact that app developers used to be able to tap not only into the information generated by users, but also, to an even greater extent, into that of their friends. There are many other clauses in the Facebook policies that could be relevant for this discussion, but let us dwell on this section.

Taking a step back, from a legal perspective, when a user opens an account with Facebook, a service contract is concluded. If users reside outside the U.S. or Canada, clause 18.1 of the 2015 Statement of Rights and Responsibilities provides that the service contract is an agreement between the user and Facebook Ireland Ltd. For U.S. and Canadian residents, the agreement is concluded with Facebook Inc.[13] Moreover, according to clause 15, the law applicable to the agreement is the law of the State of California.[14] This clause does not pose any issues for agreements with U.S. or Canadian users, but it does raise serious problems for users based in the European Union. In consumer contracts, European law curtails party autonomy in the choice of applicable law, given that some consumer protection provisions in European legislation are mandatory and cannot be derogated from.[15] A choice-of-law clause imposing the much weaker protections of U.S. law on European consumers, for example, would not be valid under EU law. Along similar lines, in 2017 the Italian Competition and Market Authority fined WhatsApp €3 million on the ground that such contractual clauses are unfair.[16]

Apart from problems with contractual fairness, additional concerns arise with respect to unfair competition. Situated between competition law and private law, unfair competition is a field of law that takes into account both bilateral transactions and the broader effects they can have on a market. The rationale behind unfair competition law is that deceitful or unfair trading practices which give businesses advantages they might otherwise not enjoy should be limited by law.[17] As far as terminology goes, in Europe, Directive 2005/29/EC, the main instrument regulating unfair competition, uses the term ‘unfair commercial practices’, whereas in the United States the Federal Trade Commission refers to ‘unfair or deceptive commercial practices’.[18] The basic differences between the approaches taken in the two federal/supranational legal systems are set out in Figure 2 below:

Figure 2 – U.S. & EU unfair competition law (van Eijk, Hoofnagle & Kannekens, 2017)[19]

 

Facebook’s potentially unfair/deceptive commercial practices

In what follows, I will briefly refer to the three comparative criteria identified by van Eijk et al.[20]

The requirement that a business do something (a representation, omission, practice, etc.) which deceives or is likely to deceive or mislead the consumer is a criterion shared by both legal systems. There are two main problems with Facebook's 2015 terms of service in this regard. First, Facebook does not specify how exactly the company shares user data and with whom. Second, this version of the terms makes no reference whatsoever to the sharing of friends' data, as could be done through the extended permissions. These omissions, as well as the very limited amount of information offered to consumers, through which they are supposed to understand what Facebook's links to other companies mean for their own data, are misleading.

The second criterion, that of the reasonable/average consumer, is not so straightforward: the information literacy of Facebook users fluctuates, as it depends on demographics. With the emergence of new social media platforms such as Snapchat and Musical.ly, Facebook might not be the socializing service of choice for younger generations. Official statistics, however, are based on data that includes a lot of noise. Fake accounts seem to make up around 3% of the total number of Facebook accounts, and duplicate accounts around 10%.[21] This poses serious questions regarding the European standard of the average consumer, because there is currently no way to estimate how exactly this 13% share would change the features of the entire pool of users. There are many reasons why fake accounts exist, but let me mention two of them. First, the minimum age for joining Facebook is 13; however, enforcing this policy is not easy, and many minors can join the platform by simply lying about their age. Second, fake online profiles allow for the creation of dissociated lives: individuals may display very different behavior under the veil of anonymity, online bullying being one example.

Figure 3 – Distribution of Facebook users worldwide as of April 2018, by age and gender (Statista, 2018)

These aspects can make it difficult for a judge to determine the profile of the reasonable/average consumer as far as social media is concerned: would the benchmark include fake and duplicate accounts? Would the reasonable/average consumer standard have to be based on the real or the legal audience? What level of information literacy would this benchmark use? These aspects remain unclear.

The third criterion is even more complex, as it deals with the likelihood of consumers taking a different decision had they had more symmetrical information. Two main points can be made here. On the one hand, applying this criterion leads to a scenario in which we would have to assume that Facebook would disclose information to consumers more fully. This would normally take the form of specific clauses in the general terms and conditions. For consumers to be aware of this information, they would have to read these terms religiously and make rational decisions, both of which are known not to be the case: consumers simply do not have the time, do not care about general terms and conditions, and make impulsive decisions. If that is the case for the majority of the online consumer population, it is also the case for the reasonable/average consumer. On the other hand, consumers might perhaps feel more affected if they knew beforehand the particularities of data sharing practices as they occurred in the Cambridge Analytica situation: that Facebook was not properly informing them about allowing companies to broker their data to manipulate political campaigns. This, however, is not something Facebook would inform its users about directly; Cambridge Analytica is not the only company using Facebook data, and such notifications (even if desirable from a customer communication perspective) would not be feasible, or would lead to information overload and consumer fatigue. If this too translates into a reality where consumers do not really care about such information, the third leg of the test seems not to be fulfilled. In any case, this too is a criterion which will very likely raise many more questions than it aims to address.

In sum, two out of the three criteria would be tough to fulfill. Assuming, however, that they were indeed fulfilled, and even though there are considerable differences in the enforcement of the prohibition of unfair/deceptive commercial practices, the FTC, as well as European national authorities, can take a case against Facebook to court to seek injunctions, in addition to other administrative or civil measures. A full analysis of European and Dutch law in this respect will soon be available in a publication co-authored with Stephan Mulders.

 

Harmonization and its discontents

The Italian Competition and Market Authority (the same entity that fined WhatsApp) launched an investigation into Facebook on April 6 on the ground that its data sharing practices are misleading and aggressive.[22] The Authority will have to go through the same test as applied above and, in addition, will very likely also consult the blacklisted practices annexed to the Directive. Should this Member State institution find these practices unfair, and should the relevant courts agree with that assessment, a door will open for a European Union-wide discussion on this matter. Directive 2005/29/EC is a so-called maximum harmonization instrument, meaning that the European legislator intends it to level the playing field on unfair competition across all Member States. If Italy's example is followed and more consumer authorities restrict Facebook's practices, this could mark the most effective performance of a harmonizing instrument in consumer protection. If the opposite happens and Italy ends up being the only Member State outlawing such practices, this would be a worrying sign of how little impact maximum harmonization has in practice.

 

New issues, same laws

Nonetheless, in spite of the difficulties in enforcing unfair competition law, this discussion prompts one main take-away: data-related practices do fall under the protections offered by regulation of unfair/deceptive commercial practices.[23] This type of regulation already exists in the U.S. just as much as in the EU, and it is able to handle new legal issues arising out of the use of disruptive technologies. The only areas where current legal practice needs an upgrade are interpretation and proof: given the complexity of social media platforms and the many ways in which they are used, judges and academics should perhaps also make use of data science to better understand the behavior of these audiences, insofar as this behavior is central to legal assessments.

[1] Will Knight, ‘A Self-driving Uber Has Killed a Pedestrian in Arizona’, MIT Technology Review, The Download, March 19, 2018; Alan Ohnsman, Fatal Tesla Crash Exposes Gap In Automaker’s Use Of Car Data, Forbes, April 16, 2018.

[2] John Biggs, ‘Exit Scammers Run Off with $660 Million in ICO Earnings’, TechCrunch, April 13, 2018.

[3] Joe Harpaz, ‘What Trump’s Attack On Amazon Really Means For Internet Retailers’, Forbes, April 16, 2018.

[4] Carole Cadwalladr and Emma Graham-Harrison, ‘Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach’, The Guardian, March 17, 2018.

[5] The Cambridge Analytica website reads: ‘Data drives all we do. Cambridge Analytica uses data to change audience behavior. Visit our political or commercial divisions to see how we can help you.’, last visited on April 27, 2018. It is noteworthy that the company started insolvency proceedings on 2 May, in an attempt to rebrand itself as Emerdata; see Shona Ghosh and Jake Kanter, ‘The Cambridge Analytica power players set up a mysterious new data firm — and they could use it for a ‘Blackwater-style’ rebrand’, Business Insider, May 3, 2018.

[6] For a more in-depth description of the Graph API, as well as its Instagram equivalent, see Jonathan Albright, The Graph API: Key Points in the Facebook and Cambridge Analytica Debacle, Medium, March 21, 2018.

[7] Iraklis Symeonidis, Pagona Tsormpatzoudi & Bart Preneel, ‘Collateral Damage of Facebook Apps: An Enhanced Privacy Scoring Model’, IACR Cryptology ePrint Archive, 2015, p. 5.

[8] UK Parliament Digital, Culture, Media and Sport Committee, ‘Dr Aleksandr Kogan questioned by Committee’, April 24, 2018; see also the research output based on the 57 billion friendships dataset: Maurice H. Yearwood, Amy Cuddy, Nishtha Lamba, Wu Youyoua, Ilmo van der Lowe, Paul K. Piff, Charles Gronind, Pete Fleming, Emiliana Simon-Thomas, Dacher Keltner, Aleksandr Spectre, ‘On Wealth and the Diversity of Friendships: High Social Class People around the World Have Fewer International Friends’, 87 Personality and Individual Differences 224-229 (2015).

[9] UK Parliament Digital, Culture, Media and Sport Committee hearing, supra note 8.

[10] Ibid.

[11] This number mentioned by Kogan in his witness testimony conflicts with media reports which indicate a much higher participation rate in the study, see Julia Carrie Wong and Paul Lewis, ‘Facebook Gave Data about 57bn Friendships to Academic’, The Guardian, March 22, 2018.

[12] For an overview of Facebook Login, see Facebook Login for Apps – Overview, last visited on April 27, 2018.

[13] Clause 18.1 (2015) reads: If you are a resident of or have your principal place of business in the US or Canada, this Statement is an agreement between you and Facebook, Inc.  Otherwise, this Statement is an agreement between you and Facebook Ireland Limited.

[14] Clause 15.1 (2015) reads: The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.

[15] Giesela Ruhl, ‘Consumer Protection in Choice of Law’, 44(3) Cornell International Law Journal 569-601 (2011), p. 590.

[16] Italian Competition and Market Authority, ‘WhatsApp fined for 3 million euro for having forced its users to share their personal data with Facebook’, Press Release, May 12, 2018.

[17] Rogier de Vrey, Towards a European Unfair Competition Law: A Clash Between Legal Families : a Comparative Study of English, German and Dutch Law in Light of Existing European and International Legal Instruments (Brill, 2006), p. 3.

[18] Nico van Eijk, Chris Jay Hoofnagle & Emilie Kannekens, ‘Unfair Commercial Practices: A Complementary Approach to Privacy Protection’, 3 European Data Protection Law Review 1-12 (2017), p. 2.

[19] Ibid., p. 11.

[20] The tests in Figure 2 have been simplified in order to compare their essential features; upon a closer look, however, these tests include other details as well, such as the requirement of a practice being against ‘professional diligence’ (Art. 4(1) UCPD).

[21] Patrick Kulp, ‘Facebook Quietly Admits to as Many as 270 Million Fake or Clone Accounts’, Mashable, November 3, 2017.

[22] Italian Competition and Market Authority, ‘Misleading information for collection and use of data, investigation launched against Facebook’, Press Release, April 6, 2018.

[23] This discussion is of course much broader, and it starts from the question of whether a data-based service falls within the material scope of, for instance, Directive 2005/29/EC. According to Art. 2(c) corroborated with Art. 3(1) of this Directive, it does. See also Case C‑357/16, UAB ‘Gelvora’ v Valstybinė vartotojų teisių apsaugos tarnyba, ECLI:EU:C:2017:573, para. 32.

 

 

The Move Towards Explainable Artificial Intelligence and its Potential Impact on Judicial Reasoning

By Irene Ng (Huang Ying)

In 2017, the Defense Advanced Research Projects Agency (“DARPA”) launched a five-year research program on explainable artificial intelligence.[1] Explainable artificial intelligence, also known as XAI, refers to an artificial intelligence system whose decisions or output can be explained and understood by humans.

The growth of XAI within artificial intelligence research is noteworthy considering the current state of AI research, in which decisions made by machines are opaque in their reasoning and, in several cases, not understood even by their human developers. This is also known as the “black box” of artificial intelligence: input is fed into the “black box” and an output based on machine learning techniques is produced, but there is no explanation of why the output is what it is.[2] This problem is well documented: there have been several cases in which machine learning algorithms made certain decisions but their developers were puzzled as to how those decisions were reached.[3]
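A toy example makes the point tangible. In the Python sketch below (synthetic data, no connection to any system mentioned in this post), a neural network classifies accurately, yet the only artifact it can show for its decisions is a mass of numeric weights:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for any prediction task; purely illustrative.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

print("training accuracy:", model.score(X, y))

# The model's internals offer numbers, not reasons: thousands of weights
# with no human-readable account of why any individual output was produced.
print("learned weights:", sum(w.size for w in model.coefs_))
```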

The parallel interest in the use of artificial intelligence in judicial decision-making makes it interesting to consider how XAI will influence the development of an AI judge or arbitrator. Research on the use of AI for judicial decision-making is not novel. It was reported in 2016 that a team of computer scientists from UCL had developed an algorithm that “has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy”.[4] Much, however, remains to be said about the legal reasoning behind such an AI verdict.

The lack of explainable legal reasoning is, unsurprisingly, a thorny issue for those pressing for automated decision-making by machines. This sentiment has been echoed by several authors writing in the field of AI judges and AI arbitrators.[5] The opacity of an AI verdict's conclusion is alarming for lawyers, especially where legal systems are predicated on the legal reasoning of judges, arbitrators or adjudicators. In certain fields of law, such as criminal law and sentencing, the lack of transparency in an AI judge's reasoning in reaching a sentencing verdict can pose further moral and ethical dilemmas.

Furthermore, as AI judges are trained on datasets, who ensures that those datasets are not inherently biased, so that the AI verdict is not biased against specific classes of people? The output generated by a machine learning algorithm is highly dependent on the data fed to it during training. This has led to reports urging “caution against misleading performance measures for AI-assisted legal techniques”.[6]

In light of the opacity of the legal reasoning provided by AI judges or AI arbitrators, how would XAI change the field of AI judicial decision-making? Applied to judicial decision-making, an XAI judge or arbitrator would produce a verdict together with a reasoning for that decision. Whether such reasoning is legal or factual, or even logical, is not important at this fundamental level: what is crucial is that a reasoning has been provided, and that this reasoning can be understood and subsequently challenged by lawyers if they disagree with it. Such an XAI judge would at least function better in legal systems in which appeals are based on challenges to the reasoning of the judge or arbitrator.
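As a crude illustration of what “a reasoning that can be challenged” might mean at this fundamental level, an interpretable model can print the rule path behind its predictions. The feature names and data below are invented for the sketch and carry no legal weight; the point is only that the output is something a lawyer could read, question, and contest:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented features standing in for facts a tribunal might weigh.
feature_names = ["prior_convictions", "offence_severity", "mitigating_factors"]
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

# export_text renders the learned rules as nested if/else conditions,
# i.e. an explicit chain of conditions that can be understood and disputed.
print(export_text(tree, feature_names=feature_names))
```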

This should also be seen in light of the EU's upcoming General Data Protection Regulation (“GDPR”), under which a “data subject shall have the right not to be subject to a decision based solely on automated processing”;[7] it appears uncertain at this point whether a data subject has the right to ask for an explanation of the algorithm that made a decision.[8] For developers that are unable to explain the reasoning behind their algorithms' decisions, this may prove to be a potential landmine, considering the tough penalties for flouting the GDPR.[9] This may thus be an implicit call to move towards XAI, especially for developers building AI judicial decision-making software that uses the personal data of EU citizens.

As the legal industry still grapples with the introduction of AI into its daily operations, such as the use of the ROSS Intelligence system,[10] developments in other fields of AI, such as XAI, should not go unnoticed. While AI judges and AI arbitrators are not commonplace at present, if one considers that XAI may be a better alternative for the legal industry than traditional AI or machine learning methods, developing AI judges or arbitrators with XAI methods rather than traditional AI methods might be more ethically and morally acceptable.

Yet legal reasoning is difficult to replicate in an XAI: the same set of facts can lead to several different views. Would XAI replicate these multi-faceted views, and explain them? Before we even start to ponder such matters, perhaps we should first get the machine to give an explainable output that we can at least agree or disagree with.

[1] David Gunning, Explainable Artificial Intelligence (XAI), https://www.darpa.mil/program/explainable-artificial-intelligence.

[2] ‘Understanding Black Box Artificial Intelligence’, Sentient, https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/.

[3] Will Knight, The Dark Secret at the Heart of AI, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

[4] Chris Johnston and agencies, Artificial intelligence ‘judge’ developed by UCL computer scientists, October 24, 2016, online: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.

[5] See José Maria de la Jara & Others, Machine Arbitrator: Are We Ready?, May 4, 2017, online: http://arbitrationblog.kluwerarbitration.com/2017/05/04/machine-arbitrator-are-we-ready/.

[6] AI Now 2017 Report, online: https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf.

[7] Article 22, General Data Protection Regulation.

[8] ‘GDPR and its Impacts on Machine Learning Applications’, Medium, https://medium.com/trustableai/gdpr-and-its-impacts-on-machine-learning-applications-d5b5b0c3a815.

[9] Penalties under the GDPR range from €10 million or 2% of worldwide annual revenue on the lower scale to €20 million or 4% of worldwide annual revenue on the upper scale. See Article 83, General Data Protection Regulation.

[10] ROSS Intelligence, online: https://rossintelligence.com/.

Transatlantic Antitrust and IPR Developments, Newsletter Issue No. 1/2018 (March 27, 2018)

Contributors:
Gabriel M. Lentner, Giuseppe Colangelo,
Martin Miernicki, Nikolaos Theodorakis

 

Editor-in-chief: Juha Vesala

 


Contents     

Antitrust

European Union

The Ruling of the EU Court of Justice in Intel

Intellectual property

United States

Full-work Licensing Requirement 100 Percent Rejected: Second Circuit Rules in Favor of Fractional Licensing

International Investment Tribunal Accepts Jurisdiction over Trademark Dispute involving US-company

Other developments

European Union

The Commission Launches the EU Blockchain Observatory and Forum


The Ruling of the EU Court of Justice in Intel

By Giuseppe Colangelo

Almost ten years have passed since the Commission began its proceedings against Intel, yet the question of the lawfulness of Intel's practices remains unresolved. In its recent judgment (Case C-413/14 P), the Grand Chamber of the Court of Justice of the European Union (CJEU) set aside the previous ruling in which the General Court had affirmed the Commission's decision prohibiting Intel's practices, and referred the case back to the General Court.

The judgment turns on efficiency-enhancing justifications. The Grand Chamber of the CJEU, just as in Post Danmark I (Case C-209/10), reiterates that antitrust enforcement cannot disregard procompetitive effects even in the case of unilateral conduct, such as loyalty rebates. Although Article 102 does not reproduce the prohibition-exemption structure of Article 101, for the sake of consistency there must be room to allow unilateral practices as well. Therefore, like agreements restrictive by object, unilateral conduct which is presumed to be unlawful, as loyalty rebates are, can also be justified and rehabilitated because of the efficiency and consumer welfare benefits it can produce. The General Court’s formalistic approach towards Intel’s rebates demonstrated the need for the CJEU to clarify the role that assessing procompetitive effects must play in the analysis of dominant firms’ practices.

To this end, the CJEU suggests ‘clarifying’ the interpretation of Hoffmann-La Roche (Case 85/76), one of the totems of EU antitrust orthodoxy. Unfortunately, it accomplishes exactly the opposite. Intel clearly supports the economic approach by denying authorities a formalist, or per se, shortcut: the abusive character of a behavior cannot be established simply on the basis of its form.

In Hoffmann-La Roche, the CJEU pronounced any form of exclusive dealing anathema. To make the link to the exclusive dealing scenarios depicted in Hoffmann-La Roche apparent, the General Court introduced a class of ‘exclusivity rebates’ in its ruling on Intel's pricing practices, a new category of discounts distinct from the previously defined classes of quantity and fidelity rebates.

However, in its judgment the CJEU offers a different interpretation of the law on fidelity rebates. For those cases where dominant firms offer substantive procompetitive justifications for their fidelity rebates, the CJEU requires the Commission to proffer evidence showing the foreclosure effects of the allegedly abusive practice, and to analyze: (i) the extent of the undertaking’s dominant position on the relevant market; (ii) the share of the market covered by the challenged practice as well as the conditions and arrangements for granting the rebates in question, their duration and their amount; (iii) the possible existence of a strategy aimed at excluding from the market competitors that are at least as efficient as the dominant undertaking. As expressly acknowledged by the CJEU, it is this third prong – that is, the assessment of the practice’s capacity to foreclose – which is pivotal, because it “is also relevant in assessing whether a system of rebates which, in principle, falls within the scope of the prohibition laid down in Article 102 TFEU, may be objectively justified.”

The Intel ruling is also a significant step towards greater legal certainty. In addition to being able to effectively assert efficient justifications to overturn the presumption of anti-competitiveness, firms also know that for the CJEU the ‘as efficient competitor test’ (AEC test) represents a reliable proxy (although not the single or decisive criterion) of analysis that cannot be ignored, especially when used by the Commission in its evaluations.

The application of the effects-based approach to all unilateral conduct of dominant firms brings the European experience closer to the rule of reason analysis carried out under Section 2 of the Sherman Act. As the Court of Appeals explained in Microsoft [253 F.3d 34 (D.C. Cir. 2001)], when it comes to monopolistic conduct, the task of plaintiffs complaining of an antitrust violation is to show the exclusionary effects of the conduct at stake and how it has negatively affected consumer welfare, while the task of the dominant firm is to highlight the objective justifications for its behavior.


Full-work Licensing Requirement 100 Percent Rejected: Second Circuit Rules in Favor of Fractional Licensing

By Martin Miernicki

On 19 December 2017, the Second Circuit handed down a summary order on the BMI Consent Decree in the dispute between the Department of Justice (DOJ) and Broadcast Music, Inc. (BMI). The court ruled that the decree does not oblige BMI to license the works in its repertoire on a “full-work” basis.

 

Background[1]

ASCAP and BMI are the two largest U.S. collective management organizations (CMOs) licensing performance rights in musical works. Both organizations are subject to so-called consent decrees, which entered into force in 2001 and 1994, respectively. In 2014, the DOJ's Antitrust Division announced a review of the consent decrees to evaluate whether they needed to be updated. The DOJ concluded the review in August 2016, issuing a closing statement. It declared that it did not intend to renegotiate or amend the decrees, but stated that it interpreted them as requiring ASCAP and BMI to license their works on a “full-work” or “100 percent” basis. Under this rule, the CMOs may only offer licenses that cover all performance rights in a composition; co-owned works in which they represent only a “fractional” interest cannot be licensed. In reaction to this decision, BMI asked the “rate court” for its opinion on the matter. In September 2016, Judge Stanton ruled against the full-work licensing requirement, stating that the decree “neither bars fractional licensing nor requires full-work licensing.”

 

Decision of the court

On appeal, the Second Circuit affirmed Judge Stanton's ruling and held that fractional licensing is compatible with the BMI Consent Decree. First, referencing the U.S. Copyright Act (17 U.S.C. § 201(d)), the court highlighted that the right of public performance can be subdivided and owned separately. Second, as fractional licensing was common practice at the time the decree was amended in 1994, its language does not indicate a prohibition of this practice. Third, the court rejected the DOJ's reference to Pandora Media, Inc. v. ASCAP, 785 F.3d 73 (2d Cir. 2015), because that judgment dealt with the “partial” withdrawal of rights from the CMO's repertoire and not with licensing policies in respect of users. Finally, the Second Circuit considered it irrelevant that full-work licensing could potentially advance the procompetitive objectives of the BMI Consent Decree; rather, the DOJ has the option to amend the decree or to sue BMI in a separate proceeding based on the Sherman Act.

 

Implications of the judgment

The ruling of the Second Circuit is undoubtedly a victory for BMI, but also for ASCAP, as it must be assumed that ASCAP's decree, which is very similar to BMI's, can be interpreted in a similar fashion. Unsurprisingly, both CMOs welcomed the decision. The DOJ's reaction remains to be seen, however. From the current perspective, an amendment of the decrees appears more likely than a lengthy antitrust proceeding under the Sherman Act; the DOJ had already partly toned down its strict reading of the decree in the course of the proceedings before the Second Circuit. Yet legislative efforts might produce results and influence further developments before a final decision is made. A recent example of the efforts to update the legal framework for music licensing is the “Music Modernization Act”, which aims at amending §§ 114 and 115 of the U.S. Copyright Act.

[1] For more information on the background see Transatlantic Antitrust and IPR Developments Issue No. 3-4/2016 and Issue No. 5/2016.

 

International Investment Tribunal Accepts Jurisdiction over Trademark Dispute involving US-company

By Gabriel M. Lentner

Background

On 13 December 2017, an international investment tribunal delivered its decision on expedited objections, accepting jurisdiction to hear the trademark dispute in Bridgestone v Panama. The dispute arose out of a judgment of the Panamanian Supreme Court of 28 May 2014, which held the claimants liable to pay a competitor US$5 million, together with attorney's fees, on account of the claimants' opposition proceedings against the registration of a trademark (“Riverstone”). The claimants argued that the Supreme Court's judgment weakened, and thus decreased the value of, their trademarks (“Bridgestone” and “Firestone”). The tribunal rejected most of the expedited objections raised by Panama. The decision is particularly interesting because it is the first detailed exploration of whether and under what conditions a trademark and a license can be considered covered investments.

 

Trademarks are investments

On this issue, the tribunal first looked to the text of the definition of investment under the applicable investment chapter of the United States–Panama Trade Promotion Agreement (TPA) (Article 10.29 TPA). It held that the investment must be an asset capable of being owned or controlled. The TPA also includes a list of the forms an investment may take, including “intellectual property rights”, as many BITs do (paras 164 and 166). However, the TPA also requires that an investment have the “characteristics” of an investment, giving as examples the commitment of capital or other resources, the expectation of gain or profit, and the assumption of risk (para 164). The tribunal also noted that other characteristics, such as those identified in Salini v Morocco, may be present, including a reasonable duration of the investment and a contribution made by the investment to the host state's development. In this respect, the tribunal held that “there is no inflexible requirement for the presence of all these characteristics, but that an investment will normally evidence most of them” (para 165).

In deciding this issue, the tribunal reviewed the way in which trademarks can be promoted in the host state's market. It found that “the promotion involves the commitment of resources over a significant period, the expectation of profit and the assumption of the risk that the particular features of the product may not prove sufficiently attractive to enable it to win or maintain market share in the face of competition” (para 169). However, the tribunal noted that “the mere registration of a trademark in a country manifestly does not amount to, or have the characteristics of, an investment in that country” (para 171). According to the tribunal, this is because of the negative effect of a trademark registration: it merely prevents competitors from using the mark on their products and does not confer a benefit on the country where the registration takes place, nor does it create any expectation of profit for the owner of the trademark (para 171).

The exploitation of a trademark is key for its characterization as an investment (para 172). This exploitation accords to the trademark the characteristics of an investment, by virtue of the activities to which the trademark is central. It involves a “devotion of resources, both to the production of the articles sold bearing the trademark, and to the promotion and support of those sales. It is likely also to involve after-sales servicing and guarantees. This exploitation will also be beneficial to the development of the home State. The activities involved in promoting and supporting sales will benefit the host economy, as will taxation levied on sales. Furthermore, it will normally be beneficial for products that incorporate the features that consumers find desirable to be available to consumers in the host country.” (para 172)

 

Licenses are investments, too

Another way of exploiting a trademark is licensing it, i.e. granting the licensee the right to exploit the trademark for its own benefit (para 173). The tribunal then brushes aside the following counter-argument raised by Panama:

“Rights, activities, commitments of capital and resources, expectations of gain and profit, assumption of risk, and duration do not add up an ‘investment’ when they are simply the rights, activities, commitments, expectations, and risks associated with, and the duration of, cross-border sales.” (para 175)

The tribunal responded that Panama had not provided any authority for this argument, rebutting merely that the “reason why a simple sale does not constitute an investment is that it lacks most of the characteristics of an investment” (para 176). It further noted that “[i]t does not follow that an interrelated series of activities, built round the asset of a registered trademark, that do have the characteristics of an investment does not qualify as such simply because the object of the exercise is the promotion and sale of marked goods” (para 176).

The problem with this response is that Panama's point was precisely that the requirement that investments display certain characteristics was developed to distinguish an investment from a mere cross-border sale of goods. Arguably, the tribunal did not explain how the characteristics related to the trademarks at issue differ from those related to the marketing of ordinary sales of goods.

Against this background, the tribunal's finding that trademark licenses are also investments is even less convincing. Here the tribunal referred to the express wording of Article 10.29(g) of the TPA, which provides that a license will not have the characteristics of an investment unless it creates rights protected under the domestic law of the host state (para 178). After reviewing the arguments and expert testimony presented during the proceedings, the tribunal concluded that the license to use a trademark constitutes an intellectual property right under domestic law (para 195) and is thus capable of constituting an investment when exploited (para 198). It reasoned that “[t]he owner of the trademark has to use the trademark to keep it alive, but use by the licensee counts as use by the owner. The licensee cannot take proceedings to enforce the trademark without the participation of the owner, but can join with the owner in enforcement proceedings. The right is a right to use the Panamanian registered trademark in Panama” (para 195).

In conclusion, it will be interesting to see how future tribunals will deal with this question and react to the precedent set in this case.