By Nicole Daniel
On 22 March 2018, in a court hearing in the Qualcomm case, Judge Koh expressed her concern over possible abuses in asserting legal privilege over certain documents.
In January 2017, the U.S. FTC sued Qualcomm, alleging that the company consistently refused to license its essential patents to competitors, thereby violating its pledge to standards organizations that it would license them on FRAND terms (fair, reasonable and non-discriminatory). Allegedly, Qualcomm also engaged in a policy of withholding processors unless its customers agreed to patent licensing terms favorable to Qualcomm. A trial is set for January 2019.
Furthermore, a class action alleged that Qualcomm’s behavior raised the prices of devices operating with its chips.
At the hearing, Judge Koh said she was “deeply disturbed” by the very high percentage of privilege assertions by Qualcomm. Qualcomm has nevertheless continued to produce documents after re-reviewing them and withdrawing earlier assertions of privilege. Judge Koh voiced her concerns several times during the hearing and said that she will allow witnesses to be redeposed, as often as necessary, until all documents are available before testimony.
This issue centers around documents from Apple and other customers which were gathered under an EU investigation into the baseband chipsets market. Even though the plaintiffs have already obtained a redacted version of the Commission’s January 2017 decision fining Qualcomm EUR 997 million, they ask for an unredacted version. In this decision, Qualcomm was fined for paying Apple to refrain from buying rival manufacturers’ chips.
The U.S. plaintiffs argue that Qualcomm should have simply asked for third parties’ permission to share the information given to the EU investigators. Qualcomm in turn argued that it cannot circumvent EU law by making the disclosures asked for and referred to the version of the decision to be published by the Commission. In the public version, the Commission makes its own redactions. The U.S. plaintiffs further argued that they contacted Apple, as well as its contracted manufacturers, and those parties do not object to disclosure. Qualcomm replied that the plaintiffs could simply ask those parties directly for the information. In sum, the U.S. plaintiffs called Qualcomm’s behavior unfair, as it prevents them from fully understanding the EU decision.
As of early May 2018, no public version of the Commission’s decision was available. The Commission and the companies involved are still in the process of agreeing on a version of the decision that does not contain any business secrets or other confidential information.
By Catalina Goanta
2018 has so far not been easy on the tech world. The first months of the year brought a lot of bad news: two accidents with self-driving cars (Tesla and Uber) and the first human casualty, another Initial Coin Offering (ICO) scam costing investors $660 million, and Donald Trump promising to go after Amazon. But the scandal that made the most waves had to do with Facebook data being used by Cambridge Analytica.
Data brokers and social media
In a nutshell, Cambridge Analytica was a UK-based company that claimed to use data to change audience behavior either in political or commercial contexts. Without going too much into detail regarding the identity of the company, its ties, or political affiliations, one of the key points in the Cambridge Analytica whistleblowing conundrum is the fact that it shed light on Facebook data sharing practices which, unsurprisingly, have been around for a while. To create psychometric models which could influence voting behavior, Cambridge Analytica used the data of around 87 million users, obtained through Facebook’s Graph Application Programming Interface (API), a developer interface providing industrial-level access to personal information.
The Facebook Graph API
The first version of the API (v1.0), which launched in 2010 and remained in use until 2015, could be used not only to gather public information about a given pool of users, but also about their friends, in addition to granting access to private messages sent on the platform (see Table 1 below). The amount of information belonging to users’ friends that Facebook allowed third parties to tap into is astonishing. The extended profile properties permission facilitated the extraction of information about: activities, birthdays, check-ins, education history, events, games activity, groups, interests, likes, location, notes, online presence, photo and video tags, photos, questions, relationships and relationship details, religion and politics, status, subscriptions, website and work history. Extended permissions changed in 2014 with the second version of the Graph API (v2.0), which has undergone many further changes since (see Table 2). One interesting thing that stands out when comparing versions 1.0 and 2.0, however, is that less information is gathered from targeted users than from their friends, even though v2.0 withdrew the extended profile properties (but not the extended permissions relating to reading private messages).
Table 1 – Facebook application permissions and availability to API v1 (x) and v2 (y)
Cambridge Analytica obtained Facebook data with help from another company, Global Science Research, set up by Cambridge University-affiliated academics Aleksandr Kogan and Joseph Chancellor. Kogan had previously collaborated with Facebook for his work at the Cambridge Prosociality & Well-Being Lab. For his research, Kogan collected data from Facebook as a developer, using the Lab’s account registered on Facebook via his own personal account, and he was also in contact with Facebook employees who directly sent him anonymized aggregate datasets.
Table 2 – The History of the Facebook Graph API
The Facebook employees who sent him the data were working for Facebook’s Protect and Care Team, but were themselves doing research on user experience as PhD students. Kogan states that the data he gathered with the Global Science Research quiz is separate from the initial data he used in his research, and that it was kept on different servers. Kogan’s testimony before the UK Parliament’s Digital, Culture, Media and Sport Committee does clarify which streams of data were used by which actors, but none of the Members of Parliament attending the hearing asked any questions about the very process through which Kogan was able to tap into Facebook user data. He acknowledged that for harvesting information for the Strategic Communication Laboratories – Cambridge Analytica’s affiliated company – he used a market research recruitment strategy: for around $3–4 per person, he aimed at recruiting up to 20,000 individuals who would take an online survey. The survey would be accessible through an access token, which required participants to log in using their Facebook credentials.
On the user end, Facebook Login is an access-token-based feature which allows users to log in across platforms. The benefits of using access tokens are undeniable: being able to operate multiple accounts through one login system allows for efficient account management. The dangers are equally clear. On the one hand, a single login point (with one username and one password) for multiple accounts can be a security vulnerability. On the other hand, even if Facebook claims that the user is in control of the data shared with third parties, some apps using Facebook Login – for instance Wi-Fi access in cafés, or online voting for TV shows – do not allow users to change the information requested by the app, creating a ‘take it or leave it’ situation for users.
Figure 1 – Facebook Login interface
On the developer end, access tokens allow apps operating on Facebook to access the Graph API. The access tokens perform two functions:
- They allow developer apps to access user information without asking for the user’s password; and
- They allow Facebook to identify developer apps, users engaging with this app, and the type of data permitted by the user to be accessed by the app.
Understanding how Facebook Login works is essential in clarifying what information users are exposed to right before agreeing to hand their Facebook data over to other parties.
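The two token functions described above can be illustrated with a minimal sketch of how a developer app would assemble a Graph API request. This is an assumption-laden illustration, not Facebook’s actual SDK: the endpoint version, field names, and token value are all hypothetical.

```python
# Illustrative sketch only: how an app-held access token stands in for the
# user's password when querying a Graph-style API. The version string, field
# names, and token value are hypothetical placeholders.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v2.0"  # illustrative version


def build_graph_request(user_id: str, fields: list, access_token: str) -> str:
    """Build the URL a developer app would call to read a user's profile.

    The access token performs both functions described above: it authorizes
    the request without the user's password, and it tells the platform which
    app is asking and which permissions the user granted to it.
    """
    query = urlencode({
        "fields": ",".join(fields),       # only the permitted data categories
        "access_token": access_token,     # identifies app + user + grant scope
    })
    return f"{GRAPH_BASE}/{user_id}?{query}"


url = build_graph_request("12345", ["id", "name", "likes"], "EAAB-placeholder")
print(url)
```

The point of the sketch is the design choice it encodes: the set of `fields` an app may request is bounded by the permissions the user granted at login, which is exactly the control surface at issue in the ‘take it or leave it’ scenarios described above.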
Data sharing and consent
As Figure 1 shows, and as can be seen when browsing through Facebook’s Terms of Service, consent seems to be at the core of Facebook’s interaction with its users. That being said, it is impossible to determine, on the basis of these terms, what Facebook really does with the information it collects. For instance, the Statement of Rights and Responsibilities dated 30 January 2015 contains an entire section on sharing content and information:
- You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition:
- For content that is covered by intellectual property rights, like photos and videos (IP content), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.
- When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others).
- When you use an application, the application may ask for your permission to access your content and information as well as content and information that others have shared with you. We require applications to respect your privacy, and your agreement with that application will control how the application can use, store, and transfer that content and information. (To learn more about Platform, including how you can control what information other people may share with applications, read our Data Policy and Platform Page.)
- When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you (i.e., your name and profile picture).
- We always appreciate your feedback or other suggestions about Facebook, but you understand that we may use your feedback or suggestions without any obligation to compensate you for them (just as you have no obligation to offer them).
This section appears to establish Facebook as a user-centric platform that wants to give as much ownership as possible to its customers. However, the section says nothing about the fact that app developers used to be able to tap not only into the information generated by users, but also, to an even more extensive degree, into that of their friends. There are many other clauses in the Facebook policies that could be relevant for this discussion, but let us dwell on this section.
Taking a step back, from a legal perspective, when a user opens an account with Facebook, a service contract is concluded. If users reside outside of the U.S. or Canada, clause 18.1 of the 2015 Statement of Rights and Responsibilities provides that the service contract is an agreement between the user and Facebook Ireland Ltd. For U.S. and Canadian residents, the agreement is concluded with Facebook Inc. Moreover, according to clause 15, the law applicable to the agreement is the law of the State of California. This clause does not pose any issues for agreements with U.S. or Canadian users, but it does raise serious problems for users based in the European Union. In consumer contracts, European law curtails party autonomy in choosing the applicable law, given that some consumer law provisions in European legislation are mandatory and cannot be derogated from. To the extent such clauses impose the much weaker protections of U.S. law on European consumers, they would not be valid under EU law. Along these lines, in 2017 the Italian Competition and Market Authority fined WhatsApp €3 million on the ground that such contractual clauses are unfair.
Apart from problems with contractual fairness, additional concerns arise with respect to unfair competition. Set between competition law and private law, unfair competition is a field of law that takes into account both bilateral transactions, as well as the broader effect they can have on a market. The rationale behind unfair competition is that deceitful/unfair trading practices which give businesses advantages they might otherwise not enjoy should be limited by law. As far as terminology goes, in Europe, Directive 2005/29/EC, the main instrument regulating unfair competition, uses the terms ‘unfair commercial practices’, whereas in the United States, the Federal Trade Commission refers to ‘unfair or deceptive commercial practices’. The basic differences between the approaches taken in the two federal/supranational legal systems can be consulted in Figure 2 below:
Figure 2 – U.S. & EU unfair competition law (van Eijk, Hoofnagle & Kannekens, 2017)
Facebook’s potentially unfair/deceptive commercial practices
In what follows, I will briefly refer to the three comparative criteria identified by van Eijk et al.
Both legal systems share the criterion that a business must engage in some conduct (a representation, omission, practice, etc.) which deceives or is likely to deceive or mislead the consumer. There are two main problems with Facebook’s 2015 terms of service in this respect. First, Facebook does not specify how exactly the company shares user data and with whom. Second, this version of the terms makes no reference whatsoever to the sharing of friends’ data, as could be done through the extended permissions. These omissions, together with the very limited amount of information offered to consumers – through which they are supposed to understand Facebook’s links to other companies as far as their own data is concerned – are misleading.
The second criterion, that of the reasonable/average consumer, is not so straightforward: the information literacy of Facebook users fluctuates, as it depends on demographics. With the emergence of new social media platforms such as Snapchat and Musical.ly, Facebook might not be the socializing service of choice for younger generations. However, official statistics are based on data that includes a lot of noise. It seems that fake accounts make up around 3% of the total number of Facebook accounts, and duplicate accounts around 10% of the same total. This poses serious questions regarding the European standard of the average consumer, because there is currently no way to estimate how exactly this 13% proportion would change the features of the entire pool of users. There are many reasons why fake accounts exist, but let me mention two of them. First, the minimum age for joining Facebook is 13; however, this policy is not easy to enforce, and many minors can join the social media platform by simply lying about their age. Second, fake online profiles allow for the creation of dissociated lives: individuals may display very different behavior under the veil of anonymity, online bullying being one example.
Figure 3 – Distribution of Facebook users worldwide as of April 2018, by age and gender (Statista, 2018)
These aspects can make it difficult for a judge to determine the profile of the reasonable/average consumer as far as social media is concerned: would the benchmark include fake and duplicate accounts? Would the reasonable/average consumer standard have to be based on the real or the legal audience? What level of information literacy would this benchmark use? These aspects remain unclear.
The third criterion is even more complex, as it deals with the likelihood of consumers taking a different decision had they had more symmetrical information. Two main points can be made here. On the one hand, applying this criterion leads to a scenario in which we would have to assume that Facebook would better disclose information to consumers. This would normally take the form of specific clauses in the general terms and conditions. For consumers to be aware of this information, they would have to read these terms religiously and make rational decisions, both of which are known not to be the case: consumers simply do not have time for and do not care about general terms and conditions, and they make impulsive decisions. If that is true of the majority of the online consumer population, it is also true of the reasonable/average consumer. On the other hand, perhaps consumers might feel more affected if they knew beforehand the particularities of data sharing practices as they occurred in the Cambridge Analytica situation: that Facebook was not properly informing them about allowing companies to broker their data to manipulate political campaigns. This, however, is not something Facebook would inform its users about directly; as Cambridge Analytica is not the only company using Facebook data, such notifications (even if desirable from a customer communication perspective) would not be feasible, or would lead to information overload and consumer fatigue. If this too translates into a reality where consumers do not really care about such information, the third leg of the test seems not to be fulfilled. In any case, this too is a criterion which will very likely raise many more questions than it aims to address.
In sum, two out of the three criteria would be tough to fulfill. Assuming, however, that they were indeed fulfilled – and even though there are considerable differences in the enforcement of the prohibition against unfair/deceptive commercial practices – the FTC, as well as European national authorities, can take a case against Facebook to court to seek injunctions, in addition to other administrative or civil measures. A full analysis of European and Dutch law in this respect will soon be available in a publication co-authored with Stephan Mulders.
Harmonization and its discontents
The Italian Competition and Market Authority (the same entity that fined WhatsApp) launched an investigation into Facebook on April 6, on the ground that its data sharing practices are misleading and aggressive. The Authority will have to go through the same test as applied above, and in addition, will very likely also consult the black-listed practices annexed to the Directive. Should this public institution from a Member State find that these practices are unfair, and should the relevant courts agree with this assessment, a door for a European Union-wide discussion on this matter will be opened. Directive 2005/29/EC is a so-called maximum harmonization instrument, meaning that the European legislator aims for it to level the playing field on unfair competition across all Member States. If Italy’s example is to be followed, and more consumer authorities restrict Facebook practices, this could mark the most effective performance of a harmonizing instrument in consumer protection. If the opposite happens, and Italy ends up being the only Member State outlawing such practices, this could be a worrying sign of how little impact maximum harmonization has in practice.
New issues, same laws
Nonetheless, in spite of the difficulties in enforcing unfair competition, this discussion prompts one main take-away: data-related practices do fall under the protections offered by regulation on unfair/deceptive commercial practices. This type of regulation already exists in the U.S. just as much as it exists in the EU, and is able to handle new legal issues arising out of the use of disruptive technologies. The only areas where current legal practices are in need of an upgrade deal with interpretation and proof: given the complexity of social media platforms and the many ways in which they are used, perhaps judges and academics should also make use of data science to better understand the behavior of these audiences, as long as this behavior is central for legal assessments.
 Will Knight, ‘A Self-driving Uber Has Killed a Pedestrian in Arizona’, MIT Technology Review, The Download, March 19, 2018; Alan Ohnsman, Fatal Tesla Crash Exposes Gap In Automaker’s Use Of Car Data, Forbes, April 16, 2018.
 John Biggs, ‘Exit Scammers Run Off with $660 Million in ICO Earnings’, TechCrunch, April 13, 2018.
 Joe Harpaz, ‘What Trump’s Attack On Amazon Really Means For Internet Retailers’, Forbes, April 16, 2018.
 Carole Cadwalladr and Emma Graham-Harrison, ‘Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach’, The Guardian, March 17, 2018.
 The Cambridge Analytica website reads: ‘Data drives all we do. Cambridge Analytica uses data to change audience behavior. Visit our political or commercial divisions to see how we can help you.’, last visited on April 27, 2018. It is noteworthy that the company started insolvency procedures on 2 May, in an attempt to rebrand itself as Emerdata; see Shona Ghosh and Jake Kanter, ‘The Cambridge Analytica power players set up a mysterious new data firm — and they could use it for a ‘Blackwater-style’ rebrand’, Business Insider, May 3, 2018.
 For a more in-depth description of the Graph API, as well as its Instagram equivalent, see Jonathan Albright, The Graph API: Key Points in the Facebook and Cambridge Analytica Debacle, Medium, March 21, 2018.
 Iraklis Symeonidis, Pagona Tsormpatzoudi & Bart Preneel, ‘Collateral Damage of Facebook Apps: An Enhanced Privacy Scoring Model’, IACR Cryptology ePrint Archive, 2015, p. 5.
 UK Parliament Digital, Culture, Media and Sport Committee, ‘Dr Aleksandr Kogan questioned by Committee’, April 24, 2018; see also the research output based on the 57 billion friendships dataset: Maurice H. Yearwood, Amy Cuddy, Nishtha Lamba, Wu Youyoua, Ilmo van der Lowe, Paul K. Piff, Charles Gronind, Pete Fleming, Emiliana Simon-Thomas, Dacher Keltner, Aleksandr Spectre, ‘On Wealth and the Diversity of Friendships: High Social Class People around the World Have Fewer International Friends’, 87 Personality and Individual Differences 224-229 (2015).
 UK Parliament Digital, Culture, Media and Sport Committee hearing, supra note 8.
 This number mentioned by Kogan in his witness testimony conflicts with media reports which indicate a much higher participation rate in the study, see Julia Carrie Wong and Paul Lewis, ‘Facebook Gave Data about 57bn Friendships to Academic’, The Guardian, March 22, 2018.
 Clause 18.1 (2015) reads: If you are a resident of or have your principal place of business in the US or Canada, this Statement is an agreement between you and Facebook, Inc. Otherwise, this Statement is an agreement between you and Facebook Ireland Limited.
 Clause 15.1 (2015) reads: The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.
 Italian Competition and Market Authority, ‘WhatsApp fined for 3 million euro for having forced its users to share their personal data with Facebook’, Press Release, May 12, 2018.
 Rogier de Vrey, Towards a European Unfair Competition Law: A Clash Between Legal Families : a Comparative Study of English, German and Dutch Law in Light of Existing European and International Legal Instruments (Brill, 2006), p. 3.
 Nico van Eijk, Chris Jay Hoofnagle & Emilie Kannekens, ‘Unfair Commercial Practices: A Complementary Approach to Privacy Protection’, 3 European Data Protection Law Review 1-12 (2017), p. 2.
 Ibid., p. 11.
 The tests in Figure 2 have been simplified in order to compare their essential features; however, upon closer inspection, these tests include other details as well, such as the requirement that a practice be contrary to ‘professional diligence’ (Art. 5(2) UCPD).
 Patrick Kulp, ‘Facebook Quietly Admits to as Many as 270 Million Fake or Clone Accounts’, Mashable, November 3, 2017.
 Italian Competition and Market Authority, ‘Misleading information for collection and use of data, investigation launched against Facebook’, Press Release, April 6, 2018.
 This discussion is of course much broader, and it starts from the question of whether a data-based service falls within the material scope of, for instance, Directive 2005/29/EC. According to Art. 2(c) corroborated with Art. 3(1) of this Directive, it does. See also Case C‑357/16, UAB ‘Gelvora’ v Valstybinė vartotojų teisių apsaugos tarnyba, ECLI:EU:C:2017:573, para. 32.
By Irene Ng (Huang Ying)
In 2017, the Defense Advanced Research Projects Agency (“DARPA”) launched a five-year research program on the topic of explainable artificial intelligence. Explainable artificial intelligence, also known as XAI, refers to an artificial intelligence system whose decisions or output can be explained to and understood by humans.
The growth of XAI in the field of artificial intelligence research is noteworthy considering the current state of AI research, in which decisions made by machines are opaque in their reasoning and, in several cases, not understood by their human developers. This is also known as the “black box” of artificial intelligence: input is fed into the “black box” and an output based on machine learning techniques is produced, but there is no explanation of why the output is as it is. This problem is well documented – there have been several cases in which machine learning algorithms made certain decisions but developers were puzzled at how those decisions were reached.
The parallel interest in the use of artificial intelligence in judicial decision-making makes it interesting to consider how XAI will influence the development of an AI judge or arbitrator. Research into the use of AI for judicial decision-making is not novel. It was reported in 2016 that a team of computer scientists from UCL managed to develop an algorithm that “has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy”. Much, however, remains to be said about the legal reasoning behind such an AI verdict.
The lack of explainable legal reasoning is, unsurprisingly, a thorny issue for those pressing for automated decision-making by machines. This sentiment has been echoed by several authors writing in the field of AI judges and AI arbitrators. The opacity of an AI verdict’s conclusion is alarming for lawyers, especially where legal systems are predicated on the legal reasoning of judges, arbitrators or adjudicators. In certain fields of law, such as criminal law and sentencing, the lack of transparency in the reasoning by which an AI judge reaches a sentencing verdict can pose further moral and ethical dilemmas.
Furthermore, as AI judges are trained on datasets, who ensures that those datasets are not inherently biased, so that the AI verdict will not be biased against specific classes of people? The output generated by a machine learning algorithm is highly dependent on the data fed to train the system. This has led to reports urging “caution against misleading performance measures for AI-assisted legal techniques”.
In light of the opacity of the legal reasoning provided by AI judges or AI arbitrators, how would XAI change or impact the field of AI judicial decision-making? Applying XAI to judicial decision-making, an XAI judge or arbitrator would produce an AI verdict together with a reasoning for that decision. Whether such reasoning is legal or factual, or even logical, is not important at this fundamental level – what is crucial is that a reasoning has been provided, and that such reasoning can be understood and subsequently challenged by lawyers, if disagreed with. Such an XAI judge would at least function better in legal systems in which appeal of the verdict is based on challenges to the reasoning of the judge or arbitrator.
This should also be seen in light of the EU’s upcoming General Data Protection Regulation (“GDPR”), whereby a “data subject shall have the right not to be subject to a decision based solely on automated processing” and it appears uncertain at this point whether a data subject has the right to ask for an explanation about an algorithm that made the decision. For developers that are unable to explain the reasoning behind their algorithm’s decisions, this may prove to be a potential landmine considering the tough penalties for flouting the GDPR. This may thus be an implicit call to move towards XAI, especially for developers building AI judicial decision-making software that uses personal data of EU citizens.
As the legal industry still grapples with the introduction of AI in its daily operations, such as the use of the ROSS Intelligence system, the development of other fields of AI such as XAI should not go unnoticed. While the use of an AI judge or AI arbitrator is not commonplace at the present moment, if one considers how XAI may be a better alternative for the legal industry as compared to traditional AI or machine learning methods, development of AI judges or arbitrators using XAI methods rather than traditional AI methods might be more ethically and morally acceptable.
Yet legal reasoning is difficult to replicate in an XAI – the same set of facts can lead to several different views. Would XAI replicate these multi-faceted views, and explain them? But before we even start to ponder such matters, perhaps we should first get the machine to give an explainable output that we can at least agree or disagree about.
 David Gunning, Explainable Artificial Intelligence (XAI), https://www.darpa.mil/program/explainable-artificial-intelligence.
 Will Knight, The Dark Secret at the Heart of AI, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
 Chris Johnston and agencies, Artificial intelligence ‘judge’ developed by UCL computer scientists, October 24, 2016, online: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.
 See José Maria de la Jara & Others, Machine Arbitrator: Are We Ready?, May 4, 2016, online: http://arbitrationblog.kluwerarbitration.com/2017/05/04/machine-arbitrator-are-we-ready/.
 Article 22, General Data Protection Regulation.
 Penalties under the GDPR range from €10 million or 2% of worldwide annual turnover at the lower tier to €20 million or 4% of worldwide annual turnover at the upper tier. See Article 83, General Data Protection Regulation.
Full-work Licensing Requirement 100 Percent Rejected: Second Circuit Rules in Favor of Fractional Licensing
By Martin Miernicki
On 19 December 2017, the Second Circuit handed down a summary order on the BMI Consent Decree in the dispute between the Department of Justice (DOJ) and Broadcast Music, Inc. (BMI). The court ruled that the decree does not oblige BMI to license the works in its repertoire on a “full-work” basis.
ASCAP and BMI are the two largest U.S. collective management organizations (CMOs) licensing performance rights in musical works. Both organizations are subject to so-called consent decrees, which entered into force in 2001 and 1994, respectively. In 2014, the DOJ’s Antitrust Division announced a review of the consent decrees to evaluate whether they needed to be updated. The DOJ concluded the review in August 2016, issuing a closing statement. The DOJ declared that it did not intend to re-negotiate or amend the decrees, but rather stated that it interpreted them as requiring ASCAP and BMI to license their works on a “full-work” or “100 percent” basis. Under this rule, the CMOs may only offer licenses that cover all performance rights in a composition; thus, co-owned works in which they represent only a “fractional” interest cannot be licensed. In reaction to this decision, BMI asked the “rate court” to give its opinion on the matter. In September 2016, Judge Stanton ruled against the full-work licensing requirement, stating that the decree “neither bars fractional licensing nor requires full-work licensing.”
Decision of the court
On appeal, the Second Circuit affirmed Judge Stanton’s ruling and held that fractional licensing is compatible with the BMI Consent Decree. First, referencing the U.S. Copyright Act – 17 U.S.C. § 201(d) – the court highlighted that the right of public performance can be subdivided and owned separately. Second, as fractional licensing was common practice at the time the decree was amended in 1994, its language does not indicate a prohibition of this practice. Third, the court rejected the DOJ’s reliance on Pandora Media, Inc. v. ASCAP, 785 F.3d 73 (2d Cir. 2015), because that judgment dealt with the “partial” withdrawal of rights from the CMO’s repertoire, not with licensing policies in respect of users. Finally, the Second Circuit considered it irrelevant that full-work licensing could potentially advance the procompetitive objectives of the BMI Consent Decree; rather, the DOJ has the option to amend the decree or to sue BMI in a separate proceeding under the Sherman Act.
Implications of the judgment
The ruling of the Second Circuit is undoubtedly a victory for BMI, but also for ASCAP, as it must be assumed that ASCAP’s decree – which is very similar to BMI’s – can be interpreted in a similar fashion. Unsurprisingly, both CMOs welcomed the decision. The DOJ’s reaction, however, remains to be seen. From the current perspective, an amendment of the decrees appears more likely than a lengthy antitrust proceeding under the Sherman Act; the DOJ had already partly toned down its strict reading of the decree in the course of the proceeding before the Second Circuit. Yet legislative efforts might produce results and influence further developments before a final decision is made. A recent example of the efforts to update the legal framework for music licensing is the “Music Modernization Act,” which aims at amending §§ 114 and 115 of the U.S. Copyright Act.
By Gabriel M. Lentner
On 13 December 2017, an international investment tribunal delivered its decision on expedited objections, accepting jurisdiction to hear the trademark dispute in Bridgestone v Panama. The dispute arose out of a judgment of the Panamanian Supreme Court of 28 May 2014, which held the claimants liable to pay a competitor US$5 million, together with attorney’s fees, on account of the claimants’ opposition proceedings against the registration of a trademark (“Riverstone”). The claimants argued that the Supreme Court’s judgment weakened, and thus decreased the value of, their trademarks (“Bridgestone” and “Firestone”). The tribunal rejected most of the expedited objections raised by Panama. The decision is particularly interesting because it is the first detailed exploration of whether, and under what conditions, a trademark and a trademark license can be considered covered investments.
Trademarks are investments
On this issue, the tribunal first followed the text of the definition of investment under the applicable investment chapter of the United States–Panama Trade Promotion Agreement (TPA) (Article 10.29 TPA). It held that the investment must be an asset capable of being owned or controlled. The TPA also includes a list of the forms an investment may take, including “intellectual property rights,” as many BITs do (paras 164 and 166). However, the TPA also requires that an investment have the “characteristics” of an investment, giving as examples the commitment of capital or other resources, the expectation of gain or profit, and the assumption of risk (para 164). The tribunal also noted that other characteristics, such as those identified in Salini v Morocco, may be present, including a reasonable duration of the investment and a contribution made by the investment to the host state’s development. In this respect, the tribunal held that “there is no inflexible requirement for the presence of all these characteristics, but that an investment will normally evidence most of them” (para 165).
In deciding this issue, the tribunal reviewed the way in which trademarks can be promoted in the host state’s market. The tribunal found that ”the promotion involves the commitment of resources over a significant period, the expectation of profit and the assumption of the risk that the particular features of the product may not prove sufficiently attractive to enable it to win or maintain market share in the face of competition.” (para 169) However, the tribunal noted that “the mere registration of a trademark in a country manifestly does not amount to, or have the characteristics of, an investment in that country” (para 171). According to the tribunal, this is because of the negative effect of a registration of a trademark. It merely prevents competitors from using it on their products and does not confer benefit on the country where the registration takes place. Nor does it create any expectation of profit for the owner of the trademark (para 171).
The exploitation of a trademark is key for its characterization as an investment (para 172). This exploitation accords to the trademark the characteristics of an investment, by virtue of the activities to which the trademark is central. It involves a “devotion of resources, both to the production of the articles sold bearing the trademark, and to the promotion and support of those sales. It is likely also to involve after-sales servicing and guarantees. This exploitation will also be beneficial to the development of the home State. The activities involved in promoting and supporting sales will benefit the host economy, as will taxation levied on sales. Furthermore, it will normally be beneficial for products that incorporate the features that consumers find desirable to be available to consumers in the host country.” (para 172)
Licenses are investments, too
Another way of exploiting a trademark is licensing it, i.e. granting the licensee the right to exploit the trademark for its own benefit (para 173). The tribunal then brushed aside the following counter-argument raised by Panama:
“Rights, activities, commitments of capital and resources, expectations of gain and profit, assumption of risk, and duration do not add up to an ‘investment’ when they are simply the rights, activities, commitments, expectations, and risks associated with, and the duration of, cross-border sales.” (para 175)
The tribunal responded that Panama did not provide any authority for this argument, rebutting only that the “reason why a simple sale does not constitute an investment is that it lacks most of the characteristics of an investment” (para 176). It further noted that “[i]t does not follow that an interrelated series of activities, built round the asset of a registered trademark, that do have the characteristics of an investment does not qualify as such simply because the object of the exercise is the promotion and sale of marked goods” (para 176).
The problem with this response is that Panama’s point was precisely that the requirement that an investment display certain characteristics was developed to distinguish an investment from a mere cross-border sale of goods. Arguably, the tribunal did not explain how the characteristics related to the trademarks at issue differ from those related to the marketing of ordinary sales of goods.
Against this background, the finding of the tribunal that trademark licenses are also investments is even less convincing. Here the tribunal refers to the express wording of Article 10.29(g) of the TPA, which provides that a license will not have the characteristics of an investment unless it creates rights protected under domestic law of the host state (para 178). After reviewing the arguments and expert testimony presented during the proceedings, the tribunal concluded that the license to use a trademark constitutes an intellectual property right under domestic law (para 195), and is thus capable of constituting an investment when exploited (para 198). It reasoned that ”[t]he owner of the trademark has to use the trademark to keep it alive, but use by the licensee counts as use by the owner. The licensee cannot take proceedings to enforce the trademark without the participation of the owner, but can join with the owner in enforcement proceedings. The right is a right to use the Panamanian registered trademark in Panama” (para 195).
In conclusion, it will be interesting to see how future tribunals will deal with this question and react to the precedent set in this case.
By Nicole Daniel
The proceedings between Apple and Qualcomm began in January 2017 in the U.S. District Court in San Diego when Apple filed suit against Qualcomm over its allegedly abusive licensing practices with its wireless patents. Qualcomm then filed unfair competition law counterclaims. This case is being overseen by U.S. District Judge Gonzalo Curiel.
Apple then sued Qualcomm for similar violations in the UK, China, Japan, and Taiwan.
In July 2017 Qualcomm filed patent claims against Apple also in the U.S. District Court in San Diego. This case is being overseen by U.S. District Judge Dana M. Sabraw. At the same time Qualcomm filed a complaint with the U.S. International Trade Commission accusing the Apple iPhone of infringing five Qualcomm patents.
District Court Case I
In November 2017, Judge Curiel issued a split decision in the first patent and antitrust case between Apple and Qualcomm.
Apple had sought a declaratory judgment that it had not infringed the nine Qualcomm patents at issue and asked the court to determine a fair and reasonable licensing rate. Judge Curiel denied those claims, holding instead that no detailed infringement analysis as to the Additional Patents-in-Suit had been conducted.
Judge Curiel further held that Qualcomm had not adequately pleaded claims against Apple under California’s Unfair Competition Law. These allegations stemmed from Apple’s decision to use both Qualcomm and Intel chips in its iPhone; previously, Apple had used Qualcomm chips exclusively.
At a hearing in October 2017, lawyers for Qualcomm claimed that Apple executives had threatened to end the companies’ business relationship if Qualcomm publicly claimed that its own chipsets were superior to Intel’s. In his order, Judge Curiel held that Qualcomm had not adequately pleaded specific facts indicating its own reliance on an alleged omission or misrepresentation by Apple. Accordingly, Qualcomm lacked standing under the Unfair Competition Law.
District Court Case II
In the district court patent case, Apple filed counterclaims arguing that Qualcomm infringed patents relating to extending battery life in smartphones and other mobile devices by supplying power only when needed, thereby maximizing battery life.
Apple further argued that it created the smartphone as a product category in 2007 when it introduced the iPhone, and that Qualcomm merely developed basic telephone technology which is now dated.
Qualcomm, on the other hand, argued that the success of the iPhone is due to its technology as Qualcomm has developed high-speed wireless connectivity over decades.
The question of who essentially invented the smartphone is of importance because, under U.S. President Trump, the term “innovator” has gained significance. On 10 November 2017, Makan Delrahim, the new chief of the Department of Justice’s antitrust division, gave a policy speech stating that the government aims to rebalance the scales of antitrust enforcement away from implementers who incorporate the inventions of others into their own products. More emphasis will be placed on innovators’ rights, so as to protect patent holders in cases concerning patents essential to technology standards.
Further Cases filed and the Case at the US International Trade Commission
In November 2017, Qualcomm filed three new district court patent cases against Apple as well as one new complaint in the case pending before the U.S. International Trade Commission. In sum, Qualcomm accuses Apple of infringing 16 non-standard-essential patents for technology implemented outside the wireless modem chip.
Despite this litigation, Qualcomm has so far remained a key supplier of chips to Apple.
By Marie-Andrée Weiss
A 5-page copyright infringement complaint filed last April in the Southern District of New York (SDNY) is being closely watched by copyright practitioners, as it may lead the court to rule on whether a Twitter post incorporating a copyrighted photograph, without permission of the author, is copyright infringement. The case is Goldman v. Breitbart News Network LLC et al., 1:17-cv-03144.
In the summer of 2016, Justin Goldman took a picture of New England Patriots quarterback Tom Brady walking the streets of the Hamptons, in New York, with members of the Boston Celtics basketball team. The picture was of interest because it suggested that Tom Brady was helping the Celtics to acquire star player Kevin Durant.
The picture was published by several Twitter users on the microblogging site, and these tweets were then embedded in the body of articles about Tom Brady’s trip to the Hamptons published by Defendants including Yahoo!, Time, the New England Sports Network, Breitbart and others.
Justin Goldman registered his work with the Copyright Office and filed a copyright infringement suit against the publishers which had reproduced his photograph. Defendants moved to dismiss, claiming that the use was not infringing because it was merely embedding, and also because it was fair use. Judge Katherine B. Forrest denied the motion to dismiss on August 17, 2017, because whether embedding a tweet is equivalent to in-line linking could not be determined at that stage of the proceedings.
Defendants, minus Breitbart, then filed a motion for partial summary judgment on 5 October 2017. Plaintiff opposed the motion on 6 November 2017.
The Exclusive Right to Display a Work
Section 106(5) of the Copyright Act gives the copyright owner the exclusive right “to display the copyrighted work publicly.” Section 101 of the Copyright Act defines displaying a work as “to show a copy of it, either directly or by means of a film, slide, television image, or any other device or process or, in the case of a motion picture or other audiovisual work, to show individual images nonsequentially.” Plaintiff argues that “embedding” is one of the processes mentioned in Section 106(5).
Is Embedding a Tweet Just Like In-Line Linking?
Defendants claimed that incorporating an image in a tweet is no different from “in-line linking,” which the Ninth Circuit found to be non-infringing in Perfect 10, Inc. v. Amazon.com, Inc. In that case, the issue was whether the thumbnail versions of copyrighted images featured by Google on its image search result pages were infringing.
The Ninth Circuit had defined “in-line linking” in Perfect 10 as the “process by which the webpage directs a user’s browser to incorporate content from different computers into a single window.” In that case, Google had provided HTML instructions directing a user’s browser to access a third-party website, but did not store the images on its own servers. This was found not to be infringing: because Google did not store a copy of the protected photographs, it did not display them, since to “display” a work under Section 101 of the Copyright Act requires showing a copy of it. This reasoning is known as the “Server Test.”
Plaintiff distinguished the facts in our case from Perfect 10, claiming that his photograph was shown in full size, that it was not “framed” and that it was featured prominently on Defendant’s websites. He argued that the thumbnails in Perfect 10 were low-resolution pictures which users had to click in order to access the full photos, whereas an embedded tweet allows the user to see the full high-resolution image without further maneuvers.
Defendants argued instead that, similarly to the facts of Perfect 10, the tweets were embedded using code which directed users’ browsers to retrieve the Tom Brady picture from Twitter’s servers, and that the picture was indeed framed, with a light gray box. As publishers, they had merely provided an in-line link to a picture already published by the Twitter users, which was not direct copyright infringement. They argued that the embedded tweets were not stored on, hosted by, or transmitted from servers owned or controlled by them.
Meanwhile, in the European Union…
Defendants argued that an embedded tweet functions as a hyperlink, since clicking on it brings the user to the Twitter site. This case is somewhat similar to the European Court of Justice (ECJ) GS Media (see here for our comment) and Svensson cases. In Svensson, the ECJ found that posting a hyperlink to protected works which had been made freely available to the public is not a communication to the public within the meaning of Article 3(1) of the InfoSoc Directive, which gives authors the exclusive right of communication of their works to the public. Recital 23 of the Directive specifies that this right covers “any… transmission or retransmission of a work to the public by wire or wireless means, including broadcasting.” The ECJ reasoned that providing such a hyperlink is not a communication to a new public and is thus not infringing.
In GS Media, the ECJ found that posting hyperlinks to protected works which had been made available to the public, but without the consent of the right holder, is likewise not a communication to the public within the meaning of Article 3(1) of the InfoSoc Directive. However, if the links were posted by a person who knew or could reasonably have known that the works had been illegally published online, or if they were posted for profit, then posting the hyperlinks constitutes a new communication to the public and is thus infringing.
Could ECJ case law on hyperlinks inspire U.S. courts to revisit Perfect 10?