By Martin Miernicki
On 20 December 2017, the Court of Justice of the European Union (CJEU) handed down its decision in Asociación Profesional Élite Taxi v. Uber Systems Spain SL (C-434/15), holding that Uber’s services, in principle, constitute transportation services and thus remain regulated by national legislation. On 10 April 2018, the court essentially confirmed this ruling in Uber France SAS v. Nabil Bensalem (C-320/16).
Background of the cases
Both cases centered on the legal classification of the services provided by Uber under EU law. In the first case, the Asociación Profesional Elite Taxi – a professional taxi drivers' association – brought an action against Uber before the national (Spanish) court, claiming that the company had infringed the local rules on the provision of taxi services as well as the laws on unfair competition. The national court observed that neither Uber nor the non-professional drivers had the licenses and authorizations required by national law; however, it was unsure whether the services provided by Uber qualified as “information society services” within the meaning of article 2(a) of Directive 2000/31/EC (E-Commerce Directive) or rather as a “service in the field of transport”, thereby being excluded from said directive as well as from the scope of article 56 TFEU and article 2(2)(d) of Directive 2006/123/EC (Services Directive). The second case revolved around a similar question against the background of a private prosecution and civil action brought by an individual against Uber under French law.
Decisions of the court
The CJEU considered Uber’s service as a whole rather than merely its individual components, characterizing Uber’s business model as providing, “by means of a smartphone application, […] the paid service consisting of connecting non-professional drivers using their own vehicle with persons who wish to make urban journeys, without holding any administrative licence or authorisation” (C-434/15, para 2). The CJEU held that Uber did not offer a mere intermediation service which – as inherently linked to smartphones and the internet – could, viewed in isolation, constitute an information society service. Rather, Uber provides an integral part of an overall service “whose main component is a transport service”. Thus, Uber’s services qualified as “services in the field of transport”, rendering the E-Commerce Directive, the Services Directive and Art 56 TFEU inapplicable. Relying heavily on these findings, the court reached a similar conclusion in the subsequent case and essentially confirmed its prior ruling.
Meaning of the decisions and implications
The judgments are a setback for Uber and similar services: qualified as transportation services, they cannot rely on the safeguards and guarantees provided for by EU law (especially the freedom to provide services). Rather, the CJEU confirmed that transport services remain a field largely within the member states’ domain. This is especially challenging for companies which, like Uber, operate in a field where regulatory requirements differ widely, even within a single member state. It should be noted, however, that the court ruled on the service as described above; one might reach a different conclusion should Uber adapt or restructure its business model.
The dispute in the Uber cases can be seen in the larger context of “sharing economy” business models; Airbnb, for instance, is another prominent company in this field. European policy makers are aware of this emerging sector and have launched several initiatives to tackle the issue at the EU level. Among these are the Communication from the Commission on a European agenda for the collaborative economy (COM(2016) 356 final) and the European Parliament resolution of 15 June 2017 on a European Agenda for the collaborative economy (2017/2003(INI)).
By Jonathan Cardenas
On 8 March 2018, the European Commission (“Commission”) introduced its FinTech Action Plan, a policy proposal designed to augment the international competitiveness of the European Single Market in the financial services sector. Together with the FinTech Action Plan, the Commission introduced a proposal for a regulation on European crowdfunding services providers (“Proposed Regulation on Crowdfunding”). Both of these proposals form part of a broader package of measures designed to deepen and complete the European Capital Markets Union by 2019. This article briefly summarizes both the FinTech Action Plan and the Proposed Regulation on Crowdfunding.
- FinTech Action Plan
With the goal of turning the European Union (“EU”) into a “global hub for FinTech,” the FinTech Action Plan introduces measures that build upon several of the Commission’s prior initiatives, including the regulatory modernization objectives set forth by the Commission’s internal Task Force on Financial Technology, the capital market integration objectives identified in the Commission’s Capital Markets Union Action Plan, and the digital market integration objectives identified in the Commission’s Digital Single Market Strategy. Responding to calls from the European Parliament and European Council for a proportional, future-oriented regulatory framework that balances competition and innovation while preserving financial stability and investor protection, and also drawing upon the conclusions of the March–June 2017 Public Consultation on FinTech, the FinTech Action Plan consists of a “targeted,” three-pronged strategy that sets out 19 steps to enable the EU economy to cautiously embrace the digital transformation of the financial services sector.
- “Enabling Innovative Business Models to Reach EU Scale”
The first prong of the FinTech Action Plan is focused on measures that will enable EU-based FinTech companies to access and scale across the entire Single Market.
Recognizing the need for regulatory harmonization, the Commission calls for uniformity in financial service provider licensing requirements across the EU to avoid conflicting national rules that hamper the development of a single European market in emerging financial services, such as crowdfunding (Step 1). With crowdfunding specifically in mind, the Commission has proposed a regulation on European crowdfunding service providers (“ECSPs”), which, as discussed in further detail below, would create a pan-European passport regime for ECSPs that want to operate and scale across EU Member State borders. In addition, the Commission invites the European Supervisory Authorities (“ESAs”) to outline differences in FinTech licensing requirements across the EU, particularly with regard to how Member State regulatory authorities apply EU proportionality and flexibility principles in the context of national financial services legislation (Step 2). The Commission encourages the ESAs to present Member State financial regulators with recommendations as to how national rules can converge. The Commission also encourages the ESAs to present the Commission with recommendations as to whether there is a need for EU-level financial services legislation in this context. Moreover, the Commission will continue to monitor developments in the cryptocurrency asset and initial coin offering (“ICO”) space in conjunction with the ESAs, the European Central Bank, the Financial Stability Board and other international standard setters in order to determine whether EU-level regulatory measures are needed (Step 3).
Recognizing the importance of common standards for the development of an EU-wide FinTech market, the Commission is focused on developing standards that will enhance interoperability between FinTech market player systems. The Commission plans to work with the European Committee for Standardization and the International Organization for Standardization to develop coordinated approaches on FinTech standards by Q4 2018, particularly in relation to blockchain technology (Step 4). In addition, the Commission will support industry-led efforts to develop global standards for application programming interfaces by mid-2019 that are compliant with the EU Payment Services Directive and EU General Data Protection Regulation (Step 5).
In order to facilitate the emergence of FinTech companies across the EU, the Commission encourages the development of innovation hubs (institutional arrangements in which market players engage with regulators to share information on market developments and regulatory requirements) and regulatory sandboxes (controlled spaces in which financial institutions and non-financial firms can test new FinTech concepts with the support of a government authority for a limited period of time), collectively referred to by the Commission as “FinTech facilitators.” The Commission specifically encourages the ESAs to identify best practices for innovation hubs and regulatory sandboxes by Q4 2018 (Step 6). The Commission invites the ESAs and Member States to take initiatives to facilitate innovation based on these best practices, and in particular, to promote the establishment of innovation hubs in all Member States (Step 7). Based upon the work of the ESAs, the Commission will present a report with best practices for regulatory sandboxes by Q1 2019 (Step 8).
- “Supporting the Uptake of Technological Innovation in the Financial Sector”
The second prong of the FinTech Action Plan is focused on measures that will facilitate the adoption of FinTech across the EU financial services industry.
The Commission begins the second prong by indicating that its policy approach to FinTech is guided by the principle of “technology neutrality,” an EU regulatory principle that requires national regulators to ensure that national regulation “neither imposes nor discriminates in favour of the use of a particular type of technology.” In this regard, the Commission plans to set up an expert group to assess, by Q2 2019, the extent to which the current EU regulatory framework for financial services is neutral toward artificial intelligence and distributed ledger technology, particularly in relation to jurisdictional questions surrounding blockchain-based applications, the validity and enforceability of smart contracts, and the legal status of ICOs (Step 9).
In addition to ensuring that EU financial regulation is fit for artificial intelligence and blockchain, the Commission also intends to remove obstacles that limit the use of cloud computing services across the EU financial services industry. In this regard, the Commission invites the ESAs to produce, by Q1 2019, formal guidelines that clarify the expectations of financial supervisory authorities with respect to the outsourcing of data by financial institutions to cloud service providers (Step 10). The Commission also invites cloud service providers, cloud services users and regulatory authorities to collaboratively develop self-regulatory codes of conduct that will eliminate data localization restrictions, and in turn, enable financial institutions to port their data and applications when switching between cloud services providers (Step 11). In addition, the Commission will facilitate the development of standard contractual clauses for cloud outsourcing by financial institutions, particularly with regard to audit and reporting requirements (Step 12).
Recognizing that blockchain and distributed ledger technology will “likely lead to a major breakthrough that will transform the way information or assets are exchanged,” the Commission plans to hold additional public consultations in Q2 2018 on the possible implementation of the European Financial Transparency Gateway, a pilot project that uses distributed ledger technology to record information about companies listed on EU securities markets (Step 13). In addition, the Commission plans to continue to develop a comprehensive, cross-sector strategy toward blockchain and distributed ledger technology that enables the introduction of FinTech and RegTech applications across the EU (Step 14). In conjunction with both the EU Blockchain Observatory and Forum, and the European Standardization Organizations, the Commission will continue to support interoperability and standardization efforts, and will continue to evaluate blockchain applications in the context of the Commission’s Next Generation Internet Initiative (Step 15).
Recognizing that regulatory uncertainty and fragmentation prevent the European financial services industry from taking up new technology, the Commission will also establish an EU FinTech Lab in Q2 2018 to enable EU and national regulators to engage in regulatory discussions and training sessions with select technology providers in a neutral, non-commercial space (Step 16).
- “Enhancing Security and Integrity of the Financial Sector”
The third prong of the FinTech Action Plan is focused on financial services industry cybersecurity.
Recognizing the cross-border nature of cybersecurity threats and the need to make the EU financial services industry cyberattack resilient, the Commission will organize a public-private workshop in Q2 2018 to examine regulatory obstacles that limit cyber threat information sharing between financial market participants, and to identify potential solutions to these obstacles (Step 17). The Commission also invites the ESAs to map, by Q1 2019, existing supervisory practices related to financial services sector cybersecurity, to consider issuing guidelines geared toward supervisory convergence in cybersecurity risk management, and if necessary, to provide the Commission with technical advice on the need for EU regulatory reform (Step 18). The Commission also invites the ESAs to evaluate, by Q4 2018, the costs and benefits of developing an EU-coordinated cyber resilience testing framework for the entire EU financial sector (Step 19).
- Proposed Regulation on Crowdfunding
In line with the Commission’s Capital Markets Union objective of broadening access to finance for start-up companies, the Proposed Regulation on Crowdfunding is aimed at facilitating crowdfunding activity across the Single Market. The proposed regulation would enable investment-based and lending-based ECSPs to scale across Member State borders by creating a pan-European crowdfunding passport regime under which qualifying ECSPs can provide crowdfunding services across the EU without the need to obtain individual authorization from each Member State. The proposed regulation also seeks to minimize investor risk exposure by setting forth organizational and operational requirements, which include, among others, prudent risk management and adequate information disclosure.
 COM (2018) 109/2 – FinTech Action plan: For a more competitive and innovative European financial sector. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.
 COM (2018) 113 – Proposal for a regulation on European Crowdfunding Service Providers (ECSP) for Business. Available at: https://ec.europa.eu/info/law/better-regulation/initiative/181605/attachment/090166e5b9160b13_en.
 COM (2018) 114 final – Completing the Capital Markets Union by 2019 – time to accelerate delivery. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52018DC0114&from=EN.
 European Commission Press Release, “FinTech: Commission Takes Action For a More Competitive and Innovative Financial Market,” 8 March 2018. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.
 European Commission Banking and Finance Newsletter, Task Force on Financial Technology, 28 March 2017. Available at: http://ec.europa.eu/newsroom/fisma/item-detail.cfm?item_id=56443&utm_source=fisma_newsroom&utm_medium=Website&utm_campaign=fisma&utm_content=Task%20Force%20on%20Financial%20Technology&lang=en. See also European Commission Announcement, Vice President’s speech at the conference #FINTECHEU “Is EU regulation fit for new financial technologies?,” 23 March 2017. Available at: https://ec.europa.eu/commission/commissioners/2014-2019/dombrovskis/announcements/vice-presidents-speech-conference-fintecheu-eu-regulation-fit-new-financial-technologies_en. See also European Commission Blog Post, “European Commission sets up an internal Task Force on Financial Technology,” 14 November 2016. Available at: https://ec.europa.eu/digital-single-market/en/blog/european-commission-sets-internal-task-force-financial-technology.
 COM/2015/0468 final – Action Plan on Building a Capital Markets Union. Available at : http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015DC0468&from=EN.
 COM(2015) 192 final – A Digital Single Market Strategy for Europe, 6 May 2015. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015DC0192&from=EN. See also COM (2017) 228 final – Mid-Term review on the implementation of the Digital Single Market Strategy: A Connected Digital Single Market for All, 10 May 2017. Available at: http://eur-lex.europa.eu/resource.html?uri=cellar:a4215207-362b-11e7-a08e-01aa75ed71a1.0001.02/DOC_1&format=PDF.
 European Parliament Committee on Economic and Monetary Affairs, Report on FinTech: the influence of technology on the future of the financial sector, Rapporteur: Cora van Nieuwenhuizen, 2016/2243(INI), 28 April 2017. Available at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0176+0+DOC+PDF+V0//EN.
 EUCO 14/17, CO EUR 17, CONCL 5, European Council Meeting Conclusions, 19 October 2017. Available at: http://www.consilium.europa.eu/media/21620/19-euco-final-conclusions-en.pdf.
 European Commission Directorate-General for Financial Stability, Financial Services and Capital Markets Union, “Summary of contributions to the ‘Public Consultation on FinTech: a more competitive and innovative European financial sector,’” 2017. Available at: https://ec.europa.eu/info/sites/info/files/2017-fintech-summary-of-responses_en.pdf.
 FinTech Action Plan.
 European Commission Press Release, “FinTech: Commission Takes Action For a More Competitive and Innovative Financial Market,” 8 March 2018. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.
 EBA/DP/2017/02 – Discussion Paper on the EBA’s approach to financial technology (FinTech), 4 August 2017. Available at: https://www.eba.europa.eu/documents/10180/1919160/EBA+Discussion+Paper+on+Fintech+%28EBA-DP-2017-02%29.pdf.
 FinTech Action Plan, p. 8.
 Directive 2002/21 on a common regulatory framework for electronic communications networks and services (Framework Directive)  OJ L108/33. Available at: https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A32002L0021.
 FinTech Action Plan, p. 12.
 Capital Markets Union Action Plan.
By Paul Opitz
In the prominent areas of self-driving cars and Lethal Autonomous Weapons Systems, the development of autonomous systems has already led to important ethical debates. On 9 March 2018, the European Commission published a press release announcing that it would set up a group of experts to develop guidelines on AI ethics, building on a statement by the European Group on Ethics in Science and New Technologies.
Call for a wide and open discussion
The Commission emphasizes the possible major benefits from artificial intelligence, ranging from better healthcare to more sustainable farming and safer transport. However, since there are also many increasingly urgent moral questions related to the impact of AI on the future of work and legislation, the Commission calls for a “wide, open and inclusive discussion” on how to benefit from artificial intelligence, while also respecting ethical principles.
Tasks of the expert group
The expert group will be set up by May and tasked to:
- advise the Commission on building a diverse group of stakeholders for a “European AI Alliance”
- support the implementation of a European initiative on artificial intelligence
- draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights, considering, inter alia, issues of fairness, safety, transparency, and the future of work.
The goal of ensuring ethical standards in AI and robotics was recently set out in the Joint Declaration on the EU’s legislative priorities for 2018–2019. Furthermore, the guidelines on AI ethics will build on the Statement on Artificial Intelligence, Robotics and Autonomous Systems by the European Group on Ethics in Science and New Technologies (EGE) from 9 March 2018. This statement summarizes relevant developments in the area of technology, identifying a range of essential moral questions.
Safety, security, and the prevention of harm are of utmost importance. In addition, the EGE poses the question of human moral responsibility. How can moral responsibility be apportioned, and could it possibly be “shared” between humans and machines?
On a more general level, questions about governance, regulation, design, and certification occupy lawmakers in order to serve the welfare of individuals and society. Finally, there are questions regarding the transparency of autonomous systems and their effective value to society.
The statement explicitly emphasizes that the term “autonomy” stems from the field of philosophy and refers to the ability of human persons to legislate for themselves, the freedom to choose rules and laws for themselves to follow. Although the terminology is widely applied to machines, its original sense is an important aspect of human dignity and should therefore not be relativised. No smart machine ought to be accorded the moral standing of the human person or inherit human dignity.
In this sense, moral debates must be framed broadly, so that narrow constructs of ethical problems do not oversimplify the underlying questions. In discussions concerning self-driving cars, the ethical problems should not only revolve around so-called “Trolley Problem” thought experiments, in which the only possible choice is associated with the loss of human lives. More important questions include the past design decisions that led up to the moral dilemmas, the role of values in design, and how to weigh values in case of a conflict.
For autonomous weapons systems, a large part of the discussion should focus on the nature and meaning of “meaningful human control” over intelligent military systems and how to implement forms of control that are morally desirable.
Shared ethical framework as a goal
As initiatives concerning ethical principles are uneven at the national level, the European Parliament calls for a range of measures to prepare for the regulation of robotics and the development of a guiding ethical framework for the design, production and use of robots.
As a first step towards ethical guidelines, the EGE defines a set of basic principles and democratic prerequisites based on fundamental values of the EU Treaties. These include, inter alia, human dignity, autonomy, responsibility, democracy, accountability, security, data protection, and sustainability.
It is now up to the expert group to discuss whether the existing legal instruments are effective enough to deal with the problems discussed or which new regulatory instruments might be required on the way towards a common, internationally recognized ethical framework for the use of artificial intelligence and autonomous systems.
 EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 10.
 European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.
 European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.
 EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 8.
 Id., at p. 8.
 Id., at p. 8.
 Id., at p. 9.
 Id., at p. 10.
 Id., at p. 10-11.
 Id., at p. 11.
 Id., at p. 14.
 Id., at p. 16-19.
 Id., at p. 20.
By Catalina Goanta
2018 has so far not been easy on the tech world. The first months of the year brought a lot of bad news: two accidents with self-driving cars (Tesla and Uber) and the first human casualty, another Initial Coin Offering (ICO) scam costing investors $660 million, and Donald Trump promising to go after Amazon. But the scandal that made the most waves had to do with Facebook data being used by Cambridge Analytica.
Data brokers and social media
In a nutshell, Cambridge Analytica was a UK-based company that claimed to use data to change audience behavior in either political or commercial contexts. Without going too much into detail regarding the identity of the company, its ties, or political affiliations, one of the key points in the Cambridge Analytica whistleblowing conundrum is the fact that it shed light on Facebook data sharing practices which, unsurprisingly, have been around for a while. To create psychometric models which could influence voting behavior, Cambridge Analytica used the data of around 87 million users, obtained through Facebook’s Graph Application Programming Interface (API), a developer interface providing industrial-scale access to personal information.
The Facebook Graph API
The first version of the API (v1.0), which was launched in 2010 and remained in use until 2015, could be used to gather not only public information about a given pool of users, but also about their friends, in addition to granting access to private messages sent on the platform (see Table 1 below). The amount of information belonging to users’ friends that Facebook allowed third parties to tap into is astonishing. The extended profile properties permission facilitated the extraction of information about: activities, birthdays, check-ins, education history, events, games activity, groups, interests, likes, location, notes, online presence, photo and video tags, photos, questions, relationships and relationship details, religion and politics, status, subscriptions, website and work history. Extended permissions changed in 2014 with the second version of the Graph API (v2.0), which has undergone many further changes since (see Table 2). However, one interesting thing that stands out when comparing versions 1.0 and 2.0 is that less information is gathered from targeted users than from their friends, even though v2.0 withdrew the extended profile properties (but not the extended permissions relating to reading private messages).
Table 1 – Facebook application permissions and availability to API v1 (x) and v2 (y)
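The reach of the v1.0 friend permissions can be sketched in a few lines. The mock social graph, user names, and field list below are illustrative assumptions for the sketch, not real Facebook data or actual API behavior:

```python
# Sketch of how one consenting user exposed friends' data under
# Graph API v1.0. The graph, names and fields are illustrative only.

# A few of the fields reachable via the extended profile properties
# permission (subset of the list in the text above).
FRIEND_FIELDS_V1 = {"birthday", "education_history", "likes", "location"}

# A tiny mock social graph: user -> list of friends.
GRAPH = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
}

def data_reachable_from(consenting_user):
    """Return which profiles an app could read when ONE user consented."""
    reachable = {consenting_user}                      # the user's own profile...
    reachable.update(GRAPH.get(consenting_user, []))   # ...plus every friend's
    return reachable

# A single install by "alice" exposes alice, bob and carol:
print(sorted(data_reachable_from("alice")))  # -> ['alice', 'bob', 'carol']
```

This one-hop amplification is what let a quiz taken by a relatively small pool of users yield data on tens of millions of their friends.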
Cambridge Analytica obtained Facebook data with help from another company, Global Science Research, set up by Cambridge University-affiliated faculty Alexandr Kogan and Joseph Chancellor. Kogan had previously collaborated with Facebook for his work at the Cambridge Prosociality & Well-Being Lab. For his research, Kogan collected data from Facebook as a developer, using the Lab’s account registered on Facebook via his own personal account, and he was also in contact with Facebook employees who directly sent him anonymized aggregate datasets.
Table 2 – The History of the Facebook Graph API
The Facebook employees who sent him the data were working for Facebook’s Protect and Care Team, but were themselves doing research on user experience as PhD students. Kogan states that the data he gathered with the Global Science Research quiz is separate from the initial data he used in his research and was kept on different servers. Kogan’s testimony before the UK Parliament’s Digital, Culture, Media and Sport Committee does clarify which streams of data were used by which actors, but none of the Members of Parliament attending the hearing asked any questions about the very process through which Kogan was able to tap into Facebook user data. He acknowledged that, to harvest information for Strategic Communication Laboratories – Cambridge Analytica’s affiliated company – he used a market research recruitment strategy: for around $34 per person, he aimed to recruit up to 20,000 individuals who would take an online survey. The survey would be accessible through an access token, which required participants to log in using their Facebook credentials.
On the user end, Facebook Login is an access-token-based system which allows users to log in across platforms. The benefits of using access tokens are undeniable: the ability to operate multiple accounts through one login system allows for efficient account management. The dangers are equally clear. On the one hand, one login point (with one username and one password) for multiple accounts can be a security vulnerability. On the other hand, even if Facebook claims that the user is in control of the data shared with third parties, some apps using Facebook Login – for instance Wi-Fi access in cafés, or online voting for TV shows – do not allow users to change the information requested by the app, creating a ‘take it or leave it’ situation for users.
Figure 1 – Facebook Login interface
On the developer end, access tokens allow apps operating on Facebook to access the Graph API. The access tokens perform two functions:
- They allow developer apps to access user information without asking for the user’s password; and
- They allow Facebook to identify developer apps, users engaging with this app, and the type of data permitted by the user to be accessed by the app.
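These two functions can be illustrated with a minimal sketch. The token string, app identifiers and permission names below are invented for illustration and do not reflect Facebook's actual token format or internal implementation:

```python
# Minimal sketch of token-based API access, loosely modeled on the
# Graph API flow described above. All identifiers are hypothetical.

# The platform's token store maps an opaque token string to the app,
# the user who granted it, and the permissions that user granted.
TOKEN_STORE = {
    "EAAGm0PX4ZCps...": {  # opaque access token (illustrative)
        "app_id": "quiz_app_123",
        "user_id": "user_456",
        "permissions": {"public_profile", "user_friends"},
    },
}

def handle_request(token, resource, required_permission):
    """Resolve a token and decide whether the calling app may read `resource`."""
    grant = TOKEN_STORE.get(token)
    if grant is None:
        return {"error": "invalid token"}
    if required_permission not in grant["permissions"]:
        return {"error": f"missing permission: {required_permission}"}
    # The platform now knows which app is calling, on behalf of which
    # user, and what it may access -- without the app ever seeing the
    # user's password.
    return {"app": grant["app_id"], "user": grant["user_id"], "resource": resource}

# An app reads the user's friend list by presenting its token:
print(handle_request("EAAGm0PX4ZCps...", "/me/friends", "user_friends"))
```

The key design point is that the token, not the password, is the shared secret between app and platform, which is why the platform can revoke an app's access without the user changing credentials.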
Understanding how Facebook Login works is essential in clarifying what information users are exposed to right before agreeing to hand their Facebook data over to other parties.
Data sharing and consent
As Figure 1 shows, and as it can be seen when browsing through Facebook’s Terms of Service, consent seems to be at the core of Facebook’s interaction with its users. This being said, it is impossible to determine, on the basis of these terms, what Facebook really does with the information it collects. For instance, in the Statement of Rights and Responsibilities dating from 30 January 2015, there is an entire section on sharing content and information:
- You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition:
- For content that is covered by intellectual property rights, like photos and videos (IP content), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.
- When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others).
- When you use an application, the application may ask for your permission to access your content and information as well as content and information that others have shared with you. We require applications to respect your privacy, and your agreement with that application will control how the application can use, store, and transfer that content and information. (To learn more about Platform, including how you can control what information other people may share with applications, read our Data Policy and Platform Page.)
- When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you (i.e., your name and profile picture).
- We always appreciate your feedback or other suggestions about Facebook, but you understand that we may use your feedback or suggestions without any obligation to compensate you for them (just as you have no obligation to offer them).
This section appears to establish Facebook as a user-centric platform that wants to give as much ownership as possible to its customers. However, the section says nothing about the fact that app developers used to be able to tap not only into the information generated by users, but also, to an even more extensive degree, into that of their friends. There are many other clauses in the Facebook policies that could be relevant for this discussion, but let us dwell on this section.
Taking a step back, from a legal perspective, when a user opens an account with Facebook, a service contract is concluded. For users residing outside the U.S. or Canada, clause 18.1 of the 2015 Statement of Rights and Responsibilities provides that the contract is an agreement between the user and Facebook Ireland Ltd.; for U.S. and Canadian residents, the agreement is concluded with Facebook Inc. Moreover, according to clause 15, the law applicable to the agreement is the law of the State of California. This clause poses no issues for agreements with U.S. or Canadian users, but it raises serious problems for users based in the European Union. In consumer contracts, European law curtails party autonomy in the choice of applicable law, given that some consumer protection provisions in European legislation are mandatory and cannot be derogated from. To the extent that such clauses impose the much weaker protections of U.S. law on European consumers, they are not valid under EU law. On this basis, in 2017 the Italian Competition and Market Authority fined WhatsApp €3 million on the ground that such contractual clauses are unfair.
Apart from problems of contractual fairness, additional concerns arise with respect to unfair competition. Situated between competition law and private law, unfair competition is a field of law that takes into account both bilateral transactions and the broader effects they can have on a market. The rationale behind unfair competition law is that deceitful or unfair trading practices, which give businesses advantages they might not otherwise enjoy, should be limited by law. As far as terminology goes, in Europe Directive 2005/29/EC, the main instrument regulating unfair competition, uses the term ‘unfair commercial practices’, whereas in the United States the Federal Trade Commission refers to ‘unfair or deceptive commercial practices’. The basic differences between the approaches taken in the two federal/supranational legal systems are set out in Figure 2 below:
Figure 2 – U.S. & EU unfair competition law (van Eijk, Hoofnagle & Kannekens, 2017)
Facebook’s potentially unfair/deceptive commercial practices
In what follows, I will briefly refer to the three comparative criteria identified by van Eijk et al.
The first criterion, shared by both legal systems, is that a business engages in conduct (a representation, omission, practice, etc.) which deceives, or is likely to deceive or mislead, the consumer. There are two main problems with Facebook’s 2015 terms of service in this respect. First, Facebook does not specify how exactly the company shares user data and with whom. Second, this version of the terms makes no reference whatsoever to the sharing of friends’ data, as could be done through the extended permissions. These omissions, together with the very limited amount of information offered to consumers, through which they are supposed to understand Facebook’s links to other companies as far as their own data is concerned, are misleading.
The second criterion, that of the reasonable/average consumer, is not so straightforward: the information literacy of Facebook users fluctuates, as it depends on demographics. With the emergence of new social media platforms such as Snapchat and Musical.ly, Facebook might not be the socializing service of choice for younger generations. Official statistics, however, are based on data that includes a lot of noise. Fake accounts appear to make up around 3% of the total number of Facebook accounts, and duplicate accounts around a further 10%. This poses serious questions for the European standard of the average consumer, because there is currently no way to estimate how exactly this 13% proportion changes the features of the entire pool of users. There are many reasons why fake accounts exist; let me mention two. First, the minimum age for joining Facebook is 13; however, enforcing this policy is not easy, and many minors join the platform by simply lying about their age. Second, fake online profiles allow for the creation of dissociated lives: individuals may display very different behavior under the veil of anonymity, online bullying being one example.
Figure 3 – Distribution of Facebook users worldwide as of April 2018, by age and gender (Statista, 2018)
These aspects can make it difficult for a judge to determine the profile of the reasonable/average consumer as far as social media is concerned: would the benchmark include fake and duplicate accounts? Would the reasonable/average consumer standard have to be based on the real or the legal audience? What level of information literacy would this benchmark use? These aspects remain unclear.
The third criterion is even more complex, as it deals with the likelihood of consumers taking a different decision had they had more symmetrical information. Two main points can be made here. On the one hand, applying this criterion leads to a scenario in which we would have to assume that Facebook disclosed information to consumers more fully. This would normally take the form of specific clauses in the general terms and conditions. For consumers to be aware of this information, they would have to read these terms religiously and make rational decisions, neither of which is known to be the case: consumers simply do not have time for, and do not care about, general terms and conditions, and they make impulsive decisions. If that is true of the majority of the online consumer population, it is also true of the reasonable/average consumer. On the other hand, consumers might perhaps feel more affected if they knew beforehand the particularities of data sharing practices as they occurred in the Cambridge Analytica situation: that Facebook was not properly informing them about allowing companies to broker their data in order to manipulate political campaigns. This, however, is not something Facebook would inform its users about directly: Cambridge Analytica is not the only company using Facebook data, and such notifications (even if desirable from a customer communication perspective) would either not be feasible or would lead to information overload and consumer fatigue. If this too translates into a reality where consumers do not really care about such information, the third leg of the test seems not to be fulfilled. In any case, this too is a criterion which will very likely raise many more questions than it aims to address.
In sum, two out of the three criteria would be tough to fulfill. Assuming, however, that they were indeed fulfilled, and even though there are considerable differences in the enforcement of the prohibition against unfair/deceptive commercial practices, the FTC, as well as European national authorities, can take a case against Facebook to court to seek injunctions, in addition to other administrative or civil measures. A full analysis of European and Dutch law in this respect will soon be available in a publication authored together with Stephan Mulders.
Harmonization and its discontents
The Italian Competition and Market Authority (the same entity that fined WhatsApp) launched an investigation into Facebook on April 6, 2018, on the ground that its data sharing practices are misleading and aggressive. The Authority will have to go through the same test as applied above and, in addition, will very likely also consult the black-listed practices annexed to the Directive. Should this public institution of a Member State find these practices unfair, and should the relevant courts agree with this assessment, a door will be opened for a European Union-wide discussion of the matter. Directive 2005/29/EC is a so-called maximum harmonization instrument, meaning that the European legislator aims for it to level the playing field on unfair competition across all Member States. If Italy’s example is followed and more consumer authorities restrict Facebook’s practices, this could mark the most effective performance yet of a harmonizing instrument in consumer protection. If the opposite happens and Italy ends up being the only Member State outlawing such practices, this would be a worrying sign of how little impact maximum harmonization has in practice.
New issues, same laws
Nonetheless, in spite of the difficulties of enforcing unfair competition law, this discussion prompts one main take-away: data-related practices do fall under the protections offered by regulation of unfair/deceptive commercial practices. This type of regulation already exists in the U.S. just as much as in the EU, and it is able to handle new legal issues arising out of the use of disruptive technologies. The only areas where current legal practice needs an upgrade concern interpretation and proof: given the complexity of social media platforms and the many ways in which they are used, perhaps judges and academics should also make use of data science to better understand the behavior of these audiences, whenever this behavior is central to legal assessments.
 Will Knight, ‘A Self-driving Uber Has Killed a Pedestrian in Arizona’, MIT Technology Review, The Download, March 19, 2018; Alan Ohnsman, Fatal Tesla Crash Exposes Gap In Automaker’s Use Of Car Data, Forbes, April 16, 2018.
 John Biggs, ‘Exit Scammers Run Off with $660 Million in ICO Earnings’, TechCrunch, April 13, 2018.
 Joe Harpaz, ‘What Trump’s Attack On Amazon Really Means For Internet Retailers’, Forbes, April 16, 2018.
 Carole Cadwalladr and Emma Graham-Harrison, ‘Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach’, The Guardian, March 17, 2018.
 The Cambridge Analytica website reads: ‘Data drives all we do. Cambridge Analytica uses data to change audience behavior. Visit our political or commercial divisions to see how we can help you.’, last visited on April 27, 2018. It is noteworthy that the company started insolvency procedures on 2 May, in an attempt to rebrand itself as Emerdata, see Shona Ghosh and Jake Kanter, ‘The Cambridge Analytica power players set up a mysterious new data firm — and they could use it for a ‘Blackwater-style’ rebrand’, Business Insider, May 3, 2018.
 For a more in-depth description of the Graph API, as well as its Instagram equivalent, see Jonathan Albright, The Graph API: Key Points in the Facebook and Cambridge Analytica Debacle, Medium, March 21, 2018.
 Iraklis Symeonidis, Pagona Tsormpatzoudi & Bart Preneel, ‘Collateral Damage of Facebook Apps: An Enhanced Privacy Scoring Model’, IACR Cryptology ePrint Archive, 2015, p. 5.
 UK Parliament Digital, Culture, Media and Sport Committee, ‘Dr Aleksandr Kogan questioned by Committee’, April 24, 2018; see also the research output based on the 57 billion friendships dataset: Maurice H. Yearwood, Amy Cuddy, Nishtha Lamba, Wu Youyou, Ilmo van der Lowe, Paul K. Piff, Charles Gronin, Pete Fleming, Emiliana Simon-Thomas, Dacher Keltner, Aleksandr Spectre, ‘On Wealth and the Diversity of Friendships: High Social Class People around the World Have Fewer International Friends’, 87 Personality and Individual Differences 224-229 (2015).
 UK Parliament Digital, Culture, Media and Sport Committee hearing, supra note 8.
 This number mentioned by Kogan in his witness testimony conflicts with media reports which indicate a much higher participation rate in the study, see Julia Carrie Wong and Paul Lewis, ‘Facebook Gave Data about 57bn Friendships to Academic’, The Guardian, March 22, 2018.
 Clause 18.1 (2015) reads: If you are a resident of or have your principal place of business in the US or Canada, this Statement is an agreement between you and Facebook, Inc. Otherwise, this Statement is an agreement between you and Facebook Ireland Limited.
 Clause 15.1 (2015) reads: The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.
 Italian Competition and Market Authority, ‘WhatsApp fined for 3 million euro for having forced its users to share their personal data with Facebook’, Press Release, May 12, 2018.
 Rogier de Vrey, Towards a European Unfair Competition Law: A Clash Between Legal Families: A Comparative Study of English, German and Dutch Law in Light of Existing European and International Legal Instruments (Brill, 2006), p. 3.
 Nico van Eijk, Chris Jay Hoofnagle & Emilie Kannekens, ‘Unfair Commercial Practices: A Complementary Approach to Privacy Protection’, 3 European Data Protection Law Review 1-12 (2017), p. 2.
 Ibid., p. 11.
 The tests in Figure 2 have been simplified in order to compare their essential features; upon a closer look, however, these tests include other details as well, such as the requirement that a practice be contrary to ‘professional diligence’ (Art. 5(2) UCPD).
 Patrick Kulp, ‘Facebook Quietly Admits to as Many as 270 Million Fake or Clone Accounts’, Mashable, November 3, 2017.
 Italian Competition and Market Authority, ‘Misleading information for collection and use of data, investigation launched against Facebook’, Press Release, April 6, 2018.
 This discussion is of course much broader, and it starts from the question of whether a data-based service falls within the material scope of, for instance, Directive 2005/29/EC. According to Art. 2(c) corroborated with Art. 3(1) of this Directive, it does. See also Case C‑357/16, UAB ‘Gelvora’ v Valstybinė vartotojų teisių apsaugos tarnyba, ECLI:EU:C:2017:573, para. 32.
By Irene Ng (Huang Ying)
In 2017, the Defense Advanced Research Projects Agency (“DARPA”) launched a five-year research program on the topic of explainable artificial intelligence. Explainable artificial intelligence, also known as XAI, refers to an artificial intelligence system whose decisions or outputs can be explained and understood by humans.
The growth of XAI within artificial intelligence research is noteworthy considering the current state of the field, in which decisions made by machines are opaque in their reasoning and, in several cases, not understood even by their human developers. This is also known as the “black box” of artificial intelligence: input is fed into the “black box” and an output based on machine learning techniques is produced, but there is no explanation of why the output is what it is. This problem is well documented: there have been several cases in which machine learning algorithms made certain decisions while their developers were puzzled as to how those decisions were reached.
The parallel interest in the use of artificial intelligence in judicial decision-making makes it interesting to consider how XAI will influence the development of an AI judge or arbitrator. Research into the use of AI for judicial decision-making is not novel. It was reported in 2016 that a team of computer scientists from UCL had developed an algorithm that “has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy”. Much, however, remains to be said about the legal reasoning behind such an AI verdict.
The lack of explainable legal reasoning is, unsurprisingly, a thorny obstacle to pressing for automated decision-making by machines. This sentiment has been echoed by several authors writing in the field of AI judges and AI arbitrators. The opacity of an AI verdict is alarming for lawyers, especially where legal systems are predicated on the legal reasoning of judges, arbitrators or adjudicators. In certain fields of law, such as criminal law and sentencing, the lack of transparency in an AI judge’s reasoning in reaching a sentencing verdict can pose further moral and ethical dilemmas.
Furthermore, as AI judges are trained on datasets, who ensures that such datasets are not inherently biased, so that the AI verdict will not be biased against specific classes of people? The output generated by a machine learning algorithm is highly dependent on the data used to train the system. This has led to reports urging “caution against misleading performance measures for AI-assisted legal techniques”.
In light of the opacity of the legal reasoning provided by AI judges or AI arbitrators, how would XAI change or impact the field of AI judicial decision-making? Applied to judicial decision-making, an XAI judge or arbitrator would produce an AI verdict together with a reasoning for that decision. Whether such reasoning is legal or factual, or even logical, is not important at this fundamental level; what is crucial is that a reasoning has been provided, and that it can be understood and subsequently challenged by lawyers if they disagree with it. Such an XAI judge would at least function better in legal systems in which an appeal against the verdict is based on challenges to the reasoning of the judge or arbitrator.
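The difference between a black-box verdict and an explainable one can be sketched in a few lines of code. The toy "judge" below is purely illustrative: the factors, weights and threshold are invented for this example and are not drawn from any real model, court or dataset. The point is only that the procedure returns its verdict together with a step-by-step trace that a lawyer could inspect and contest, rather than a bare output.

```python
# Illustrative sketch of an "explainable" decision procedure: it returns a
# verdict plus a human-readable reasoning trace. All factors, weights and the
# threshold are hypothetical, invented for illustration only.

def explainable_verdict(case_facts, weights, threshold=0.5):
    """Score a case and return (verdict, reasoning) instead of a bare output."""
    score = 0.0
    reasoning = []
    for factor, present in case_facts.items():
        if present and factor in weights:
            score += weights[factor]
            reasoning.append(f"Factor '{factor}' present: +{weights[factor]:.2f}")
    verdict = "violation" if score >= threshold else "no violation"
    reasoning.append(f"Total score {score:.2f} vs. threshold {threshold:.2f} -> {verdict}")
    return verdict, reasoning

# Hypothetical factors, loosely evoking a torture/degrading-treatment analysis.
weights = {"severe_treatment": 0.4, "state_agent_involved": 0.3, "no_effective_remedy": 0.2}
facts = {"severe_treatment": True, "state_agent_involved": True, "no_effective_remedy": False}

verdict, reasons = explainable_verdict(facts, weights)
for line in reasons:
    print(line)
```

Each line of the trace can be challenged individually on appeal, which is precisely what an opaque machine-learning model does not allow.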
This should also be seen in light of the EU’s upcoming General Data Protection Regulation (“GDPR”), under which a “data subject shall have the right not to be subject to a decision based solely on automated processing”; it appears uncertain at this point whether a data subject has the right to ask for an explanation of the algorithm that made the decision. For developers who are unable to explain the reasoning behind their algorithm’s decisions, this may prove a potential landmine considering the tough penalties for flouting the GDPR. It may thus be an implicit call to move towards XAI, especially for developers building AI judicial decision-making software that uses the personal data of EU citizens.
As the legal industry still grapples with the introduction of AI into its daily operations, such as the use of the ROSS Intelligence system, the development of other fields of AI such as XAI should not go unnoticed. While the use of an AI judge or AI arbitrator is not commonplace at present, if one considers how XAI may be a better alternative for the legal industry than traditional AI or machine learning methods, the development of AI judges or arbitrators using XAI methods might be more ethically and morally acceptable.
Yet legal reasoning is difficult to replicate in an XAI: the same set of facts can lead to several different views. Would XAI replicate these multi-faceted views, and explain them? Before we even start to ponder such matters, however, perhaps we should first get the machine to give an explainable output that we can at least agree or disagree with.
 David Gunning, Explainable Artificial Intelligence (XAI), https://www.darpa.mil/program/explainable-artificial-intelligence.
 Will Knight, The Dark Secret at the Heart of AI, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
 Chris Johnston and agencies, Artificial intelligence ‘judge’ developed by UCL computer scientists, October 24, 2016, online: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.
 See José Maria de la Jara & Others, Machine Arbitrator: Are We Ready?, May 4, 2017, online: http://arbitrationblog.kluwerarbitration.com/2017/05/04/machine-arbitrator-are-we-ready/.
 Article 22, General Data Protection Regulation.
 Penalties under the GDPR range from €10 million or 2% of worldwide annual turnover on the lower scale to €20 million or 4% of worldwide annual turnover on the upper scale. See Article 83, General Data Protection Regulation.
By Nikolaos Theodorakis
The European Commission (“Commission”) recently launched the EU Blockchain Observatory and Forum (“Observatory”) with the support of the European Parliament. The Observatory aims to highlight relevant developments and facilitate collaboration between the EU and involved stakeholders.
What is blockchain technology? What are its benefits?
Blockchain is a distributed ledger technology. In essence, it is a database that keeps a final and definitive record of transactions which no one can penetrate or alter. As a result, blockchain technology increases trust, traceability and security.
Distributed Ledger Technology (“DLT”), the backbone of blockchain, was introduced about a decade ago with the aim of developing new financial applications and facilitating decentralized data storage and management. The decentralization of the Internet is an idea that has been discussed for several decades, since it allows for user freedom and democracy on the web. Implementing it in practice involves avoiding a single centralized location, and with it the need for intermediaries to perform transactions. Blockchain information is shared, verifiable, public, and accessible.
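The "final and definitive record" described above can be sketched with a minimal hash chain. This is an illustration only, not a real blockchain protocol (which would add consensus, signatures, and distribution across many nodes): each block embeds the hash of its predecessor, so altering or erasing any earlier block invalidates every block after it, which is also why erasure of recorded data is so hard to reconcile with this design.

```python
# Minimal hash-chain sketch (illustrative only; transaction strings are invented).
import hashlib

def make_block(data, prev_hash):
    """Create a block whose hash commits to both its data and its predecessor."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256((data + prev_hash).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash; any tampering with earlier data breaks the chain."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i > 0 else "0" * 64
        expected = hashlib.sha256((block["data"] + prev_hash).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
    return True

# Build a three-block ledger of hypothetical transactions.
chain = []
prev = "0" * 64
for tx in ["A pays B 5", "B pays C 2", "C pays A 1"]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

print(chain_is_valid(chain))        # True
chain[1]["data"] = "B pays C 200"   # tamper with an earlier block
print(chain_is_valid(chain))        # False: stored hashes no longer match
```

Because every participant can rerun this verification independently, the ledger is verifiable without a trusted intermediary, which is the property the Observatory's use cases build on.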
The abovementioned traits can increase accountability, and blockchain has the potential to lead this technological breakthrough. The enhanced trust it creates can be used for legal services (e.g. smart contracts), financial services, transportation services (e.g. bill of lading disputes), energy, or healthcare.
Naturally, the European Commission wishes to further investigate blockchain’s potential, consolidate expertise, and address the challenges created by new blockchain paradigms. To achieve this, it created the Observatory within the Financial Technology pillar, and plans to further help develop the single market, Banking Union, the Capital Markets Union and retail financial services.
As an example of blockchain’s game-changing potential, 10% of global GDP could be stored, via digital assets, through this technology in less than 10 years. This means that governments can take advantage of blockchain to issue IDs that cannot be replicated, or monitor taxation reporting in a unique and transparent way. Insurance companies can utilize automatic execution of contracts, financial bodies can secure money and financial asset transfers, and the intellectual property sector can distribute IP rights pertinent to music, videos or other protected content.
Even if only a fraction of the above benefits materializes, blockchain can significantly change the way digital services are provided. The European Commission needs to assess, in the form of a feasibility study, whether this technology is fully compliant with EU law in particular (more on this below). Even while recognizing blockchain as a key emerging trend, it is equally important to manage it in a compliant way.
In essence, the Commission wants to build on existing initiatives launched by EU Member States that relate to offering blockchain-based solutions. The broader role of the Observatory is to help Europe fully grasp and exploit the opportunities this technology offers and to allow the continent to remain at the forefront of technological developments. The blockchain will enable cross-border cooperation and allow regulators to discuss and develop new ideas, and to learn, engage and contribute in an open way.
In a nutshell, the Observatory aims to:
- map key existing initiatives in Europe and beyond;
- monitor developments, analyze trends and address emerging issues;
- become a knowledge hub on blockchain;
- promote European actors and reinforce European engagement with multiple stakeholders;
- represent a major communication opportunity for Europe to set out its vision and ambition on the international scene;
- inspire common actions based on specific use-cases of European interest.
Despite the multiple benefits of blockchain, and its use for cryptocurrencies and many other applications, the technology comes with a number of drawbacks. For instance, blockchain is in direct conflict with upcoming EU privacy legislation (the General Data Protection Regulation), which imposes strict privacy requirements (including privacy by design and by default, encryption, and enhanced data subject rights). Blockchain makes it more difficult to attribute liability, due to its decentralized nature, and practically impossible to comply with certain privacy rights, such as the right to be forgotten (since blocks, once generated, cannot be erased). This direct conflict with EU regulatory standards may cause some bumps in the future development of the technology.
Further concerns about the use of blockchain relate to broader skepticism about security (whether the technology can remain immune to attacks in the long run), to the lack of regulation, which leads to unsafe exchange environments, particularly for cryptocurrencies, and to the funding of illicit activities and the circumvention of international sanctions.
 World Economic Forum Survey on Technological Tipping Points
CJEU’s Advocate General Bot: Administrators of Facebook Fan Pages May Be Held Responsible for the Data Processing Carried out by Facebook
By Katharina Erler
The opinion of Advocate General Bot, delivered on 24 October 2017 in case C-210/16 before the Court of Justice of the European Union (CJEU), suggests that administrators of fan pages on the Facebook social network may, as controllers under Article 2(d) of the EU Data Protection Directive (95/46/EC), be held responsible for the data processing carried out by Facebook and for the cookies Facebook installs for that purpose. In particular, the administrator should be regarded as being, along with Facebook Inc. and Facebook Ireland itself, a controller of the processing of personal data carried out for the purpose of compiling viewing statistics for that fan page. Furthermore, Advocate General Bot rejected Facebook’s assertion that its EU data processing activities fall solely under the jurisdiction of the Irish Data Protection Commissioner. The case is Unabhängiges Landeszentrum für Datenschutz v. Wirtschaftsakademie, C-210/16.
Facebook fan pages are user accounts that may be set up by individuals as well as businesses. Administrators may use their fan page to present themselves or their businesses for commercial purposes. Facebook also offers the administrators the opportunity to obtain viewing statistics containing information on the characteristics and habits of the visitors of their fan page. These statistics are compiled by Facebook, which collects data of the visitors via cookies, and then personalized by the fan page administrator using selection criteria. This may help administrators to better craft the communications on their fan pages. To compile these statistics Facebook stores at least one cookie containing a unique ID number, active for two years, on the hard disk of every fan page visitor.
The German company “Wirtschaftsakademie Schleswig-Holstein GmbH”, which provides education and training services via a fan page hosted on the Facebook social network, was ordered on November 3, 2011 by the German regional data-protection authority “Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein” to deactivate its fan page. The decision was based on the fact that neither the Wirtschaftsakademie as administrator nor Facebook had informed visitors to the fan page that Facebook was collecting and processing their personal data.
After the Wirtschaftsakademie challenged this order and the data-protection authority dismissed its objection, the Wirtschaftsakademie brought an action before a regional German administrative court, which ruled on October 9, 2013 that the administrator of a fan page is not a “controller” within the meaning of the German data protection act (“BDSG”) and therefore cannot be the addressee of an order to deactivate the fan page under § 38(5) BDSG. The Higher Administrative Court subsequently dismissed the data-protection authority’s appeal, holding that the prohibition of the data processing was unlawful: a prohibition under this provision is possible only if it is the only way to end the infringement, and it was Facebook that was in a position to end the processing of the data. The Wirtschaftsakademie was therefore not a “controller” of the data processing under § 38(5) BDSG.
In the further appeal, the German Federal Administrative Court confirmed that ruling, considering that the administrator of a fan page is not a data controller within the meaning of either § 38(5) of the German data protection act or Article 2(d) of Directive 95/46/EC. It nevertheless referred several questions to the CJEU, of which questions (1) and (2) concern the core issue: whether a body that is not a controller under Article 2(d) of Directive 95/46/EC may nevertheless be the addressee of orders issued by the supervisory authorities.
It is worth mentioning that, in order to rule on the lawfulness of the order in question, the referring court also asked, in questions (3) and (4), about the distribution of powers among the supervisory authorities in cases where a parent company has several establishments throughout the EU. Finally, questions (5) and (6) concern the coordination needed to align the decisions of the supervisory authorities in order to avoid divergent legal appraisals.
Article 2(d) of EU Data Protection Directive 95/46/EC provides that a ‘controller’ is the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data; where the purposes and means of processing are determined by national or Community laws or regulations, the controller or the specific criteria for his nomination may be designated by national or Community law.
Article 17(2) of the EU Data Protection Directive 95/46/EC states that the Member States shall provide that the controller must, where processing is carried out on his behalf, choose a processor providing sufficient guarantees in respect of the technical security measures and organizational measures governing the processing to be carried out, and must ensure compliance with those measures.
Article 24 of the EU Data Protection Directive 95/46/EC states that the Member States shall adopt suitable measures to ensure the full implementation of the provisions of this Directive and shall in particular lay down the sanctions to be imposed in case of infringement of the provisions adopted pursuant to this Directive.
Article 28(3) of EU Data Protection Directive 95/46/EC stipulates that each authority shall in particular be endowed with: investigative powers, such as powers of access to data forming the subject-matter of processing operations and powers to collect all the information necessary for the performance of its supervisory duties; effective powers of intervention, such as, for example, that of delivering opinions before processing operations are carried out, in accordance with Article 20, and ensuring appropriate publication of such opinions, of ordering the blocking, erasure or destruction of data, of imposing a temporary or definitive ban on processing, of warning or admonishing the controller, or that of referring the matter to national parliaments or other political institutions; and the power to engage in legal proceedings where the national provisions adopted pursuant to this Directive have been violated or to bring these violations to the attention of the judicial authorities. Decisions by the supervisory authority which give rise to complaints may be appealed through the courts.
Advocate General Bot’s assessment of the questions referred to the CJEU
First, Advocate General Bot emphasizes that the questions referred do not address the substantive issue of whether the processing of personal data in the case at hand is contrary to the rules of Directive 95/46/EC.
Proceeding on the assumption that the administrator of a fan page is not a controller under Article 2(d) of Directive 95/46/EC, the German Federal Administrative Court asks in particular whether Article 2(d) may be interpreted as definitively and exhaustively defining liability for data protection violations, or whether scope remains for the responsibility of a body which is not a controller within the meaning of that article. This leads to the central question, as Advocate General Bot frames it, of whether Articles 17(2), 24 and 28(3) of Directive 95/46/EC permit supervisory authorities to exercise their powers of intervention against such a non-controller.
Advocate General Bot, however, considers this underlying premise to be incorrect and clearly emphasizes that, in his opinion, the administrator of a Facebook fan page must be regarded as jointly responsible for the phase of data processing consisting in the collection of personal data by Facebook. Referring to the CJEU’s Google Spain judgment (C-131/12 of 13 May 2014), Advocate General Bot stresses, as a starting point, the fundamental role of the controller under the EU Data Protection Directive and its responsibility for ensuring the effectiveness of Directive 95/46/EC and the full protection of data subjects. Therefore, and in view of the history of the CJEU’s case law, the concept of the “controller” must be given a broad definition. As the “controller” is the person who decides why and how personal data will be processed, the concept ties responsibility to actual influence.
According to Bot, as the designer of the data processing in question, it is Facebook Inc., alongside Facebook Ireland, which principally decides on the purposes of this processing: it developed, in particular, the economic model combining the publication of personalized advertisements with the compilation of statistics for fan page administrators. Additionally, because Facebook Ireland has been designated by Facebook Inc. as responsible for the processing of personal data within the European Union, and because some or all of the personal data of Facebook’s users residing in the European Union is transferred to servers belonging to Facebook Inc. located in the United States, Facebook Inc. is responsible for the data processing alongside Facebook Ireland.
At this point, however, Bot additionally emphasizes that Article 2(d) of Directive 95/46/EC expressly provides for the possibility of shared responsibility, and that the responsibility of the fan page administrator must be added to that of Facebook Inc. and Facebook Ireland. Although Bot recognizes that a fan page administrator is first and foremost a user of Facebook, he stresses that this does not preclude administrators from being responsible for that phase of data processing. In his view, what makes a “controller” under Article 2(d) is influence, in law or in fact, over the purposes and means of data processing, not the carrying out of the processing itself.
Advocate General Bot argued that (1) fan page administrators, merely by having recourse to Facebook for the publication of their information, subscribe to the principle that their visitors’ data will be processed; (2) that data processing would not occur without the administrator’s prior decision to operate a fan page on the Facebook social network; and (3) by enabling Facebook to better target its advertisements on the one hand, and by acquiring better insight into the profiles of its visitors on the other, the administrator at least participates in the determination of the purposes of the data processing. According to Advocate General Bot, these objectives are closely related, which supports joint responsibility.
Moreover, (4) the administrator exercises decisive influence through the power to bring the data processing to an end by closing down the page. Finally, Bot argued that (5) by defining criteria for the compilation of statistics and using filters, the administrator is able to influence the specific way in which that data processing tool is used. This classification as a “controller” is contradicted neither by imbalances in the relationship of strength nor by an interpretation based solely on the terms and conditions of the contract concluded between the fan page administrator and Facebook. With reference to the CJEU’s Google Spain case, Bot pointed out that complete control over the data processing is not necessary. This broad interpretation of “controller” also serves the purpose of effective data protection and prevents parties from evading responsibility simply by agreeing to a service provider’s terms and conditions for the purposes of hosting information on its website.
Furthermore, Advocate General Bot drew a parallel with the pending case Fashion ID (C-40/17), in which the manager of a website embedded the Facebook Like Button, which, when activated, transmits personal data to Facebook. As to the question whether Fashion ID “controlled” this data processing, Bot holds that there is no fundamental difference between the two cases. Finally, the Advocate General clarified that joint responsibility does not imply equal responsibility: the various parties may be involved in the processing of data to different degrees.
It seems surprising that Advocate General Bot simply rejected the premise of the German Federal Administrative Court, instead foregrounding the interpretation of the “controller” under Article 2(d) and thereby shifting the focus of the referred questions. Furthermore, this broad interpretation and expansion of the fundamental concept of the “controller” suggests that, if the CJEU follows it, anyone who has any influence on data processing, in particular by merely using a service associated with data processing, might in the future be held responsible for infringements of data protection law.
With regard to the question of jurisdiction, it is worth noting that Advocate General Bot emphasized that the data processing at issue consists in the collection of personal data by means of cookies installed on the computers of visitors to fan pages and is specifically intended to enable Facebook to better target its advertisements. Therefore, in line with the CJEU’s Google Spain decision and in the interest of the effective and immediate application of national data protection rules, Advocate General Bot holds that this data processing must be regarded as taking place in the context of the activities in which Facebook Germany engages in Germany. The fact that Facebook’s EU head office is situated in Ireland does not, according to Bot, prevent the German data protection authority in any way from taking measures against the Wirtschaftsakademie. This may, however, be assessed differently under the EU’s General Data Protection Regulation (2016/679), which will replace the Member States’ data protection laws based on Directive 95/46/EC when it becomes applicable on 25 May 2018.