CJEU’s Advocate General Bot: Administrators of Facebook Fan Pages May Be Held Responsible for the Data Processing Carried Out by Facebook
By Katharina Erler
The opinion of Advocate General Bot, delivered on 24 October 2017 in case C-210/16 before the Court of Justice of the European Union (CJEU), suggests that administrators of fan pages on the Facebook social network may be held responsible, as controllers under Article 2(d) of the EU Data Protection Directive (95/46/EC), for the data processing carried out by Facebook and for the cookies Facebook installs for that purpose. In particular, the administrator should be regarded, along with Facebook Inc. and Facebook Ireland, as a controller of the processing of personal data carried out for the purpose of compiling viewing statistics for that fan page. Furthermore, Advocate General Bot rejected Facebook’s assertion that its EU data processing activities fall solely under the jurisdiction of the Irish Data Protection Commissioner. The related case is Unabhängiges Landeszentrum für Datenschutz v. Wirtschaftsakademie, C-210/16.
Facebook fan pages are user accounts that may be set up by individuals as well as businesses. Administrators may use their fan page to present themselves or their businesses for commercial purposes. Facebook also offers administrators the opportunity to obtain viewing statistics containing information on the characteristics and habits of the visitors to their fan page. These statistics are compiled by Facebook, which collects visitors’ data via cookies, and can then be personalized by the fan page administrator using selection criteria. This may help administrators better craft the communications on their fan pages. To compile these statistics, Facebook stores at least one cookie containing a unique ID number, active for two years, on the hard disk of every fan page visitor.
A German company, the “Wirtschaftsakademie Schleswig-Holstein GmbH”, which provides education and training services via a fan page hosted on the website of the social network Facebook, was ordered on November 3, 2011 by a German regional data-protection authority, the “Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein”, to deactivate its fan page. This decision was based on the fact that neither the “Wirtschaftsakademie” as administrator nor Facebook had informed visitors of the fan page that Facebook was collecting and processing their personal data.
After the “Wirtschaftsakademie” challenged this order and the data-protection authority dismissed its objection, the “Wirtschaftsakademie” brought an action before a regional German administrative court. That court ruled on October 9, 2013 that the administrator of a fan page is not a “controller” within the meaning of the German data protection act (“BDSG”) and therefore cannot be the addressee of an order to deactivate the fan page under § 38(5) BDSG. The Higher Administrative Court then dismissed the data-protection authority’s appeal, holding that the prohibition of the data processing was unlawful: under that provision, prohibiting data processing is only possible if it is the only way to end the infringement. Since Facebook was in the position to end the processing of the data, the order could not be directed at the “Wirtschaftsakademie” under § 38(5) BDSG.
In the subsequent appeal proceedings, the German Federal Administrative Court confirmed that ruling, considering that the administrator of a fan page is not a data controller within the meaning of either § 38(5) of the German data protection act or Article 2(d) of EU Directive 95/46/EC. The Court nevertheless referred several questions to the CJEU. The core issue, raised by questions (1) and (2), is whether a body that is not a controller under Article 2(d) of Directive 95/46/EC may nevertheless be the addressee of orders of the supervisory authorities.
It is worth mentioning that, in order to rule on the lawfulness of the order in question, the referring court also asked – in questions (3) and (4) – about the distribution of powers among the supervisory authorities in cases where a parent company has several establishments throughout the EU. Finally, questions (5) and (6) concern the coordination needed to align the decisions of the supervisory authorities in order to avoid divergent legal appraisals.
Article 2(d) of EU Data Protection Directive 95/46/EC provides that a ‘controller’ is the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data; where the purposes and means of processing are determined by national or Community laws or regulations, the controller or the specific criteria for his nomination may be designated by national or Community law;
Article 17(2) of the EU Data Protection Directive 95/46/EC states that the Member States shall provide that the controller must, where processing is carried out on his behalf, choose a processor providing sufficient guarantees in respect of the technical security measures and organizational measures governing the processing to be carried out, and must ensure compliance with those measures.
Article 24 of the EU Data Protection Directive 95/46/EC states that the Member States shall adopt suitable measures to ensure the full implementation of the provisions of this Directive and shall in particular lay down the sanctions to be imposed in case of infringement of the provisions adopted pursuant to this Directive.
Article 28(3) of EU Data Protection Directive 95/46/EC stipulates that each authority shall in particular be endowed with: investigative powers, such as powers of access to data forming the subject-matter of processing operations and powers to collect all the information necessary for the performance of its supervisory duties; effective powers of intervention, such as, for example, that of delivering opinions before processing operations are carried out, in accordance with Article 20, and ensuring appropriate publication of such opinions, of ordering the blocking, erasure or destruction of data, of imposing a temporary or definitive ban on processing, of warning or admonishing the controller, or that of referring the matter to national parliaments or other political institutions; and the power to engage in legal proceedings where the national provisions adopted pursuant to this Directive have been violated or to bring these violations to the attention of the judicial authorities. Decisions by the supervisory authority which give rise to complaints may be appealed through the courts.
Advocate General Bot’s assessment of the questions referred to the CJEU
First, Advocate General Bot emphasizes that the referred questions do not touch upon the substantive question of whether the processing of personal data in the case at hand is contrary to the rules of EU Directive 95/46/EC.
Proceeding from the assumption that the administrator of a fan page is not a controller under Article 2(d) of EU Directive 95/46/EC, the German Federal Administrative Court asks in particular whether Article 2(d) may be interpreted as definitively and exhaustively defining liability for data protection violations, or whether scope remains for the responsibility of a body which is not a controller within the meaning of that article. This leads to the central question, as Advocate General Bot points out, of whether Articles 17(2), 24 and 28(3) of Directive 95/46/EC permit supervisory authorities to exercise their powers of intervention against such a non-controller.
Advocate General Bot, however, considers the underlying premise to be incorrect and clearly emphasizes that, in his opinion, the administrator of a Facebook fan page must be regarded as jointly responsible for the phase of data processing which consists in the collecting of personal data by Facebook. Referring to the CJEU’s Google Spain judgment, C-131/12 of 13 May 2014, Advocate General Bot stresses, as a starting point, the fundamental role of the controller under the EU Data Protection Directive and its responsibility for ensuring the effectiveness of Directive 95/46/EC and the full protection of data subjects. Therefore, and in view of the CJEU’s case law, the concept of “controller” must be given a broad definition. As the controller is the person who decides why and how personal data will be processed, responsibility attaches wherever there is actual influence.
According to Bot, it is Facebook Inc., alongside Facebook Ireland, which, as the designer of the data processing in question, principally decides on its purposes: it developed the economic model that combines, on the one hand, the publication of personalized advertisements and, on the other, the compilation of statistics for fan page administrators. Additionally, because Facebook Ireland has been designated by Facebook Inc. as responsible for the processing of personal data within the European Union, and because some or all of the personal data of Facebook’s users residing in the European Union is transferred to servers belonging to Facebook Inc. located in the United States, Facebook Inc. and Facebook Ireland are jointly responsible for the data processing.
At this point, however, Bot additionally emphasized that Article 2(d) of Directive 95/46/EC expressly provides for the possibility of shared responsibility, and that the responsibility of the fan page administrator must be added to that of Facebook Inc. and Facebook Ireland. Although Bot recognized that a fan page administrator is first and foremost a user of Facebook, he stressed that this does not preclude administrators from being responsible for this phase of data processing. In his view, what is decisive for classification as a “controller” under Article 2(d) is influence in law or in fact over the purposes and means of data processing, not the carrying out of the processing itself.
Advocate General Bot argued that (1) by having recourse to Facebook for the publication of their information, fan page administrators subscribe to the principle that their visitors’ data will be processed; (2) that data processing would not occur without the administrator’s prior decision to operate a fan page on the Facebook social network; and (3) by enabling Facebook, on the one hand, to better target its advertisements and, on the other hand, acquiring better insight into the profiles of its visitors, the administrator at least participates in the determination of the purposes of the data processing. According to Advocate General Bot, these objectives are closely related, which supports joint responsibility.
Moreover, (4) the administrator has decisive influence in that it holds the power to bring the data processing to an end by closing the page down. Finally, Bot argued that (5) by defining criteria for the compilation of statistics and using filters, the administrator is able to influence the specific way in which that data processing tool is used. This classification as a “controller” is contradicted neither by imbalances in the parties’ relative strength nor by any interpretation based solely on the terms and conditions of the contract concluded between the fan page administrator and Facebook. With reference to the CJEU’s Google Spain case, Bot pointed out that it is not necessary to have complete control over the data processing. This broad interpretation of “controller” also serves the purpose of effective data protection and prevents parties from evading responsibility simply by agreeing to the terms and conditions of a service provider for the purposes of hosting information on its website.
Furthermore, Advocate General Bot drew a parallel with the pending CJEU case Fashion ID, C-40/17, in which the operator of a website embeds the Facebook Like Button, which, when activated, transmits personal data to Facebook. As to the question whether Fashion ID “controlled” this data processing, Bot holds that there is no fundamental difference between the two cases. Finally, the Advocate General clarified that joint responsibility does not imply equal responsibility: the various parties may be involved in the processing of data to different degrees.
It seems surprising that Advocate General Bot simply rejected the premise of the German Federal Administrative Court, bringing to the foreground instead the question of how to interpret “controller” under Article 2(d) – and thereby shifting the focus of the referred questions. Furthermore, this broad interpretation and expansion of the fundamental concept of the “controller” suggests that, if followed by the CJEU, anyone who has any influence on data processing – in particular by merely using a service associated with data processing – might in future be held responsible for infringements of data protection law.
With regard to the question of jurisdiction, it is worth mentioning that Advocate General Bot emphasized that the data processing at issue consists of the collection of personal data by means of cookies installed on the computers of visitors to fan pages and is specifically intended to enable Facebook to better target its advertisements. Therefore, in line with the CJEU’s Google Spain decision, and for the sake of the effective and immediate application of national data protection rules, Advocate General Bot holds that this data processing must be regarded as taking place in the context of the activities in which Facebook Germany engages in Germany. The fact that Facebook Inc.’s EU head office is situated in Ireland does not, according to Bot, prevent the German data protection authority in any way from taking measures against the “Wirtschaftsakademie”. This may, however, be assessed differently under the EU’s upcoming General Data Protection Regulation (2016/679), which replaces the existing EU Member State data protection laws based on Directive 95/46/EC when it becomes applicable on 25 May 2018.
By Maria E. Sturm
On 12 July 2016, the European Commission issued its implementing decision pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the EU-U.S. Privacy Shield (Decision 2016/1250). The decision became necessary after the ECJ declared the Commission’s Safe Harbour decision concerning the USA invalid in Maximillian Schrems v Data Protection Commissioner (C-362/14). The new Privacy Shield contains several alterations to its predecessor, as well as a commitment to an annual review to assess whether an adequate level of data protection is still ensured. The first annual report was published on 18 October 2017. It is based on meetings between the EU Commission and all relevant U.S. authorities, as well as on input from several stakeholders (companies, NGOs, data protection authorities of the Member States, etc.).
The review covered all aspects of the Privacy Shield: formally, its implementation, administration, supervision and enforcement; and, with regard to its content, the commercial aspects as well as governmental access to personal data. So far, 2,400 companies have been certified under the new Privacy Shield. This means, first, that it is being used actively and, second, that the review commission had sufficient data to examine whether it works and where there are possibilities for improvement and refinement.
The U.S. authorities have introduced complaint-handling and enforcement mechanisms, as well as procedures to protect individual rights, including the Ombudsperson mechanism. Furthermore, the relevant safeguards concerning access to personal data by public authorities, namely Presidential Policy Directive 28 (PPD-28), are still in force. Therefore, the report states that, in general, the United States provides an adequate level of protection as required by the European Court of Justice. However, the Commission still made some recommendations for further improvement:
- Companies should not be able to publicly refer to their Privacy Shield certification before the certification is finalized by the Department of Commerce (DoC): some companies referred to their certification after their application, but before the process had been finalized. This discrepancy can lead to wrong public information and can undermine the shield’s credibility.
- The DoC should search proactively and regularly for false claims: this refers to companies who initiated, but never completed the certification process, as well as to companies who never applied for a certification but still publicly suggest they comply with the requirements.
- The DoC should monitor compliance with the Privacy Shield Principles continuously: this could be done e.g. via compliance review questionnaires and/or annual compliance reports (either self-assessment or outside compliance review). The results could be used as starting point for follow up action, in case particular deficiencies are detected.
- DoC and Data Protection Authorities (DPAs) should further strengthen awareness raising: in particular, EU citizens should receive information about their rights and how to lodge complaints.
- DoC, DPAs and Federal Trade Commission (FTC) should improve their cooperation: more intensive cooperation between all involved authorities on both sides of the Atlantic can help to implement and enforce the Shield.
- Protections of PPD-28 should be enshrined in the Foreign Intelligence Surveillance Act: this could ensure stability and continuity with regard to the protections of non-US persons.
- Privacy Shield Ombudsperson should be appointed as soon as possible: although the Ombudsperson mechanism already works, the Ombudsperson itself still has not been appointed. This should be done as soon as possible to complete this tool.
- Privacy and Civil Liberties Oversight Board (PCLOB) members should be appointed swiftly: the same argument applies as for the Ombudsperson appointment. The board has already started its work, but is not fully staffed and is therefore not as effective as it could be.
- Reports should be released in a timely manner and publicly: given its relevance, the U.S. administration should publicly release the PCLOB’s report on the implementation of PPD-28. In addition, the U.S. authorities should provide the Commission with comprehensive reports on recent relevant developments.
Furthermore, on behalf of the Commission, a study on automated decision-making will take place to collect further information and assess the relevance of automated decision-making for transfers carried out on the basis of the Privacy Shield.
After just one year, one could not expect everything to work perfectly, but the report gives an optimistic evaluation. Thus, with some further refinement, it seems that the United States and the EU have found a helpful and viable tool that balances companies’ and governments’ need for data with individuals’ right to protect their data from unauthorized access.
By Irene Ng (Huang Ying)
At the recent ChatbotConf 2017, hosted in Vienna, Austria on October 2-3, distinguished speakers from leading technology companies convened to discuss an up-and-coming technology – none other than the chatbot. The speakers covered a range of topics, such as “Competing with character”, “turn(ing) conversations into relationships”, and “building conversational experiences”, all of which are viewable at the ChatbotConf website.
If you thought that the above topics describe human relations, you were not exactly wrong – the focus was actually on developing a human character for chatbots. For some of us, the chatbot is a piece of tech we are already acquainted with. We may have interacted with these bots on social media platforms such as Facebook Messenger, or used bots on Twitter to track down Pokémon to catch in the famous augmented reality game Pokémon Go. In some cases, these chatbots are designed to provide customer support or service to a target audience. In other cases, they are built to provide simple, up-to-date information to users, such as the TyranitarBot on Twitter, or Poncho, a bot designed to send “fun, personalized weather forecasts every morning”.
This growing prevalence and use of chatbots by businesses and organizations on various platforms is not something to be ignored. Within the legal industry, several companies have created “legal bots” designed either to direct users to the right place (e.g. what kind of lawyer they should be seeking) or to perform a simple, repetitive service that can easily be automated. A famous case displaying the potential of chatbots in the legal industry is that of DoNotPay, a chatbot that has reportedly helped “overturn 120,000 parking tickets in New York and London” by challenging parking tickets. Besides DoNotPay, there are other bots in the legal industry, such as LegalPundits, which helps determine what kind of legal advice a potential client needs in order to “match [the client] with the resources that [the client] needs”.
As users become more comfortable interacting with chatbots and using them to resolve customer queries, an interesting avenue to explore is the use of chatbots by institutions providing pro bono services. Institutions that provide pro bono services, in particular those that run free legal clinics, can benefit from chatbots in various ways. First, these institutions can use chatbots as a screening tool to check whether an applicant meets the means test that qualifies them for pro bono services. Means tests usually require applicants to fulfill a fixed set of criteria, and if such criteria are generally inflexible (e.g. the applicant’s income must be less than USD 1,000.00, and anything above this amount will be rejected), then a chatbot can be deployed to interact with applicants and determine whether, at this first screening, they meet the basic criteria for pro bono services.
Similarly, institutions can use these chatbots to direct applicants or callers to the ministry or non-profit organization best placed to assist them with their specific legal query. For example, an institution providing pro bono services may often get inquirers making simple requests, such as “where can I appeal my parking ticket” or “how do I get a divorce”. For the latter scenario, the chatbot can be trained to respond by indicating that the inquirer ought to seek a divorce lawyer, pointing the inquirer to a set of easily digestible information on divorce, followed by a list of divorce lawyers the inquirer may contact.
Granted, there may – or will – be pitfalls in using chatbots to handle pro bono legal queries. Applicants or inquirers who approach pro bono institutions may become emotional when discussing their legal problems, and a human touch may be preferable to a machine when attending to such a person’s legal needs. Furthermore, while chatbots can be trained to fulfill certain functions, such as determining whether an applicant meets the means test, borderline cases may not be adequately attended to. Using the means test example above, where applicants must have an income of less than USD 1,000.00, an applicant who declares that she earns USD 1,001.00 may be rejected automatically if the developer did not train the chatbot to consider such borderline cases.
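The borderline-case concern above can be addressed at the design stage. The following is a minimal sketch of how a means-test screening step might route near-threshold applicants to a human volunteer instead of rejecting them outright; the USD 1,000 threshold, the margin, and all function names are illustrative assumptions, not any real intake system.

```python
# Hypothetical means-test screening step for a pro bono intake chatbot.
# The threshold, margin, and labels below are illustrative assumptions.

INCOME_THRESHOLD = 1000.00   # assumed qualifying monthly income ceiling (USD)
BORDERLINE_MARGIN = 50.00    # band around the ceiling routed to a human

def screen_applicant(monthly_income: float) -> str:
    """Return a routing decision for the first screening step."""
    if monthly_income < INCOME_THRESHOLD - BORDERLINE_MARGIN:
        return "qualified"            # clearly under the ceiling
    if monthly_income <= INCOME_THRESHOLD + BORDERLINE_MARGIN:
        return "refer_to_volunteer"   # borderline: a human should decide
    return "not_qualified"            # clearly over the ceiling
```

Under this sketch, the USD 1,001.00 applicant from the example falls inside the borderline band and is referred to a volunteer for human review rather than being rejected automatically.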
However, despite these concerns, there is still much room for chatbots to grow and help serve a public service function by providing greater accessibility to law. A good chatbot can help pro bono institutions make better use of their resources. By implementing a chatbot to help with simple tasks such as diverting inquirers to the right pages, or assisting volunteers to sift out genuine applicants that fulfill the means test, these pro bono institutions can divert resources or manpower, which would otherwise be used to tackle these relatively simple and repetitive tasks, to other areas, thereby increasing efficiency with the same limited budget that such institutions providing pro bono services have.
While there has been much chatter in the chatbot scene to develop an emotional intelligence for chatbots, ultimately, providing legal aid is a form of public service – and as with all types of service, it is unavoidable that humans may still want to converse with a real human being. As we move forward to explore new avenues of providing legal aid through different platforms in a more efficient and cost-effective manner, we should never forget nor neglect to still provide a physical helping hand to those who need legal aid – and not assume that a chatbot can take our place and release us from our social duty as lawyers to help the needy.
By Nikolaos Theodorakis
China’s new cybersecurity law (“Cybersecurity Law”), which came into force on 1 June 2017, is a milestone. Unlike the EU, which has adopted the General Data Protection Regulation, China does not have an omnibus data protection law. Instead, it regulates privacy and cybersecurity issues through a number of industry-specific laws, for example in the health and education sectors. The Cybersecurity Law is different in that it has a wide scope and contains provisions relevant to both data privacy and cybersecurity.
What is the new law about?
The Cybersecurity Law focuses on the protection of personal information and privacy, and regulates the collection and use of personal information. Companies based in or doing business with China will now be required to introduce data protection measures, and certain data must be stored locally on domestic servers. Depending on their activity, companies may need to undergo a security assessment prior to moving data out of China.
The Cybersecurity Law defines personal information as any information that, on its own or in combination with other information, can determine the identity of a natural person (e.g. name, DOB, address, telephone number, etc.). It mainly regulates two types of organizations, network operators and Critical Information Infrastructure (CII) providers.
Network operators must:
- Acquire the user’s consent when collecting their personal information (it is yet unclear whether consent must be express or not);
- State the purpose, method and scope of data collection;
- Keep the information secure and private (e.g. use back up and encryption methods);
- In the event of a data breach or likely data breach, take remedial actions, inform users and report to competent authorities;
- Erase personal information in case of an illegal or unauthorized collection, and correct inaccurate information;
- Keep log-files of cybersecurity incidents and implement cybersecurity incident plans.
CII providers are required to observe the same cybersecurity practices as network operators, along with additional requirements such as conducting annual cybersecurity reviews. Furthermore, they are required to store personal information and “important data” within China, as discussed below.
What does this mean for businesses?
If your company is doing business in China, or has a physical presence in China, you will need to conduct a gap assessment to determine whether you must undertake changes to be fully compliant with the cybersecurity law.
Failure to comply with the new law comes with significant consequences: a monetary fine up to 1 million yuan (about $150,000) and potential criminal charges. Individuals (e.g. company directors/ managers) may be subject to personal, albeit lesser, fines as well. In determining the applicable sanction, elements taken into account include the degree of harm and the amount of illegal gains. Fines could go up to ten times the amount of ill-gotten gains, potentially skyrocketing the amount. The law also gives the Chinese government the ability to issue warnings, confiscate companies’ illegal income, suspend a violator’s business operations, or shut down a violator’s website.
Not every aspect of the Cybersecurity Law applies to all companies, however. Many of the law’s provisions only apply to the two types of companies mentioned above, network operators and critical information infrastructure providers. However, these categories are defined quite broadly. Even companies that would not ordinarily consider themselves as network operators or CII providers could see the law applying to them.
In fact, network operators include network owners, administrators and service providers. Networks are “systems consisting of computers or other data terminal equipment and relevant devices that collect, store, transmit, exchange, and process information according to certain rules and procedures” (Article 76 of the new Cybersecurity Law). The Cybersecurity Law does not differentiate between internal and external networks; the Law is broad enough to include any company that owns an internal network. The Cybersecurity Law therefore suggests that any company that maintains a computer network, even within its own office, could qualify as a network operator. Companies that are based outside of China that use networks to do business within China could also fall under this definition (e.g. an EU-based company that uses networks in China to process data for its operations).
Critical Information Infrastructure providers are defined more narrowly: they are providers of infrastructure that, if lost or destroyed, would damage Chinese national security or the public interest. This includes information services, transportation, water resources and public services. The law also contains more generally applicable cybersecurity requirements, as well as provisions that apply to other types of entities, such as suppliers of network products and services.
Current and upcoming data localization requirements
The new cybersecurity law also requires critical information infrastructure providers to store personal information and important data within China and conduct annual security risk assessments. Important data is not defined in the Cybersecurity Law, yet it likely refers to non-personal information that is critical.
Apart from CII providers, it is anticipated that many foreign companies doing business in China will be required to make significant changes to how they handle data. The draft version of the “Measures for Security Assessment”, published by the Cyberspace Administration of China, suggests expanding the data localization requirements to all network operators. If adopted, this measure would mean that practically all personal information that network operators collect within China must not leave the country other than for a genuine business need and after a security assessment. In anticipation of this development, there is a trend among foreign companies to set up data centers in China so as to be able to store data locally.
The Draft Implementation Rules also suggest that individuals and entities seeking to export data from China – even if they are not network operators and are based outside China – must conduct security assessments of their data exports. This development, if adopted, would significantly broaden the Cybersecurity Law’s data localization requirements.
Over the coming months, the Chinese government will continue to issue implementing legislation and official guidance clarifying the scope of the law.
By Maria Sturm
On 6 May 2015, the European Commission issued a communication entitled “A Digital Single Market Strategy for Europe” to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. This digital single market strategy comprises three main pillars:
- Better access to online goods and services for consumers and businesses across Europe.
- Creating the right conditions for digital networks and services to flourish.
- Maximizing the growth potential of the European Digital Economy.
The second pillar includes the goal of creating new possibilities to process communication data and to reinforce trust and security in the Digital Single Market. Therefore, in January 2017, the EU Commission issued a proposal for a “Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)”. A study was conducted on behalf of the EU Commission to evaluate and review Directive 2002/58/EC. The most important findings of the study were:
- The Member States transposed the directive in very different ways. This uneven transposition led to legal uncertainty and an uneven playing field for operators.
- This fragmented implementation leads to higher costs for businesses operating cross-border in the EU.
- New means of communication (e.g. WhatsApp) are not covered by the directive. This means that EU citizens enjoy a different level of protection, depending on which communications tools they use.
Based on these findings, the new proposal seeks to keep pace with fast-developing IT services. The data business is an important economic sector that creates many jobs, and it needs to be able to use data and make it available. On the other hand, consumer protection and privacy, as enshrined in Art. 7 of the Charter of Fundamental Rights of the EU, are essential for establishing and maintaining trust in the digital single market. The proposal therefore aims to strike the right balance between the expectations of businesses and those of consumers, and to establish a framework that offers more security to both sides.
The focal points of the proposal are:
- The directive will be replaced by a regulation to create a level playing field for operators across the EU. While a directive must be transposed by each Member State, a regulation is directly applicable in all Member States.
- The proposal covers new means of communication, such as instant messaging or VoIP telephony, the so-called “Over-the-Top communications services”. It therefore guarantees the same level of confidentiality no matter whether a citizen of the EU uses a new communication system or makes a “traditional” phone call.
- New business development opportunities can emerge, because once consent is given, communication data can be used to a greater extent.
- The cookie rules, which today are cumbersome and result in an overload of consent requests, will be streamlined and made more user-friendly.
- Spam protection will be increased.
- Enforcement will be delegated to national data protection authorities, which are already responsible under the General Data Protection Regulation. This makes enforcement more effective.
The proposal directly addresses the problems and issues identified by the study on Directive 2002/58/EC and aligns the ePrivacy legislation with the General Data Protection Regulation of April 27, 2016 (see also TTLF Newsletter of February 3, 2017). Further changes may be made to the proposal in the course of the legislative discussion, and it remains to be seen what those changes will entail. What is certain, however, is that the current legislation on privacy and electronic communications is fragmentary and needs to adapt to new technological developments and needs.
 European Commission, Press Release IP-17-16.
 Voice over Internet Protocol.
By Nikolaos Theodorakis
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, replacing the UK's Data Protection Act 1998 (DPA). It is as yet unclear how Brexit will play out, but in the meantime the United Kingdom is moving to adopt the GDPR principles so that it adequately protects personal data transferred within the EU. The GDPR sets a high standard for consent and compliance, which means that companies must start preparing for this transition.
The Information Commissioner’s Office (ICO) issued guidance on GDPR consent on 2 March 2017, explaining its recommended approach to compliance and its definition of valid consent. The ICO also provides examples and practical advice to help companies decide when consent is appropriate and when other lawful bases should be sought.
The guidance’s main points on consent are:
- Individuals should be in genuine control of consent;
- Companies should check their existing consent practices and revise them if they do not meet the GDPR standard. Evidence of consent must be kept and reviewed regularly;
- The only way to adequately capture consent is through an opt-in;
- Explicit consent requires a very clear and granular statement;
- Consent requests should be separated from other terms and conditions. Companies should avoid making consent a precondition of service;
- Every third party who relies on the consent must be named;
- Individuals should be able to easily withdraw consent;
- Public authorities and employers may find using consent difficult. In cases where consent is too difficult, other lawful bases might be appropriate.
The basic notion of consent is not new. It was initially defined under the Data Protection Act 1998 (DPA), which implemented the currently applicable Data Protection Directive 95/46/EC. The GDPR builds on the standard of consent introduced in the DPA and adds more detailed and specific requirements. Consent is now defined in Article 4(11) of the GDPR in a way similar to the previous legislation, but with the added requirements of unambiguity and a clear affirmative action. Further provisions throughout the GDPR, however, also relate to consent (e.g. Article 7 and recitals 32, 42 and 43), which complicates the notion of consent and what companies need to do to secure valid consent.
The ICO is running a public consultation on the draft guidance until 31 March 2017 to solicit the views of relevant stakeholders and the public. The feedback received will be taken into account in the final version of the guidance, which is provisionally scheduled for May 2017. The GDPR consent guidance can be found here, and the public consultation form here.
Other European countries have already launched relevant public consultation events:
In June 2016, the French data protection authority (“CNIL”) launched a public consultation on the GDPR. Two hundred twenty-five organizations participated in the public consultation, and the outcome was integrated into recent guidance from the Consortium of European Data Protection Authorities. The CNIL’s report on the French public consultation is available (in French) here.
In Germany, the Interior Ministry has been drafting a proposed Data Protection Amendments and Implementation Law (Datenschutz-Anpassungs- und Umsetzungsgesetz, or “DSAnpUG”) roughly since the GDPR was passed. The DSAnpUG implements the GDPR as well as the EU Law Enforcement Directive 2016/680. At present, several committees of the Upper House of Parliament (Bundesrat) are debating the draft, and a full vote of the Upper House is scheduled for March 8, 2017.
In February 2017, the Spanish Ministry of Justice launched a public consultation as a preliminary step before the drafting of a new bill implementing the GDPR. The press release on the Spanish consultation is available (in Spanish) here.
It is important to remember that invalid consent can have severe financial consequences, in addition to reputational damage. Infringements of the basic principles for processing personal data, which include consent, are subject to the highest tier of administrative fines. This means a fine of up to EUR 20 million, or 4% of a company’s total worldwide annual turnover, whichever is higher, could be issued.
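The “whichever is higher” ceiling can be sketched as a simple calculation. This is an illustration of the upper bound only, assuming the top-tier rule of Article 83(5) GDPR; the actual fine within that ceiling is set by the supervisory authority case by case:

```python
def max_gdpr_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR administrative fine:
    the greater of EUR 20 million or 4% of total worldwide
    annual turnover (Article 83(5) GDPR)."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the 4% limb governs:
print(max_gdpr_fine_eur(1_000_000_000))  # 40000000.0

# For a company with EUR 100 million in turnover, the flat cap governs:
print(max_gdpr_fine_eur(100_000_000))  # 20000000.0
```

As the second call shows, for smaller companies the flat EUR 20 million cap is the binding limb, so the 4% figure matters mainly for large multinationals.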
National Competition Authorities take position on regulatory measures for online transport platforms
By Gabriele Accardo
In May 2015, the European Commission committed to assess the role of online transportation platforms, such as Uber, as it launched a public consultation to better understand the social and economic role of platforms, market trends, the dynamics of platform development and the various business models underpinning the platforms. According to the Commission, knowledge gained through this exercise will also contribute to various legislative initiatives—including online platforms regulation—which the Commission plans to launch to boost the Digital Single Market.
Currently there is a heated discussion as to whether online platforms should be subject to regulation at all.
While the European Commission may still take some time to evaluate the contributions to the public consultation and to state whether, and to what extent, some form of regulation may be warranted, two national competition authorities, the UK Competition and Markets Authority (CMA) and the Italian Competition Authority (ICA), have recently made their views public.
The Position of the ICA
On September 29, 2015, the ICA issued an opinion on the legality of activities carried out by companies like Uber, which are carried out by either professional (e.g. Uber Black) or non-professional (e.g. Uber Pop) drivers through digital platforms accessible by tablets and smartphones.
The ICA first noted that it is not yet clear whether acting as an intermediary, by managing IT resources, between the owner of a vehicle and a person who needs to make a trip is merely a transport service or must instead be considered an electronic intermediary service, i.e. an information society service as defined by Article 1(2) of Directive 98/34/EC.
The ICA noted that the Court of Justice of the European Union is to rule on this specific issue, and that until then it cannot be ruled out that the activity falls within the second category (i.e. an electronic intermediary service), which is unregulated and therefore entirely lawful.
That said, the ICA made the following findings, taking into account the characteristics of the activities carried out by Uber.
First, the ICA recognized that even traditional taxi services are increasingly adopting technologies similar to those embraced by Uber. Yet the ICA stressed that services such as Uber ensure greater ease of use of the mobility service, a better response to a public need for which there is no current offering, and an ensuing reduction in costs for users of such services. Last but not least, to the extent that they discourage the use of private means of transportation, Uber-like services also contribute to the decongestion of urban traffic.
Second, with regard to the activity of UberBlack or UberVan, i.e. transport services carried out by professional drivers, the ICA considers the current regulation (Italian Law No. 21 of 1992 concerning the non-scheduled public transport of people) restrictive of competition insofar as its provisions limit the geographic scope of a vehicle’s activity to the municipality that granted its license, and further require that each car return to its base after each trip.
Third, with regard to services such as UberPop, which consist of acting as an intermediary between the owner of a vehicle (a non-professional driver) and a person who needs to make a journey within a city, the ICA observed that the Court of Milan ordered the blocking of UberPop throughout the national territory on the ground that this service would breach the rules regulating the taxi industry and might be characterized as an act of unfair competition. In that respect, the Court held that UberPop’s activity cannot be carried out to the detriment of people’s safety, in terms of the cars used for the service, the suitability of drivers, and insurance coverage.
Yet the ICA held that, even so, any regulation of such new services, if necessary at all, should be as minimally invasive as possible. In that respect, the ICA singled out measures such as a registry for online platforms providing such services and certain requirements for drivers.
The Position of the CMA
The position held by the UK Competition and Markets Authority is even firmer than that of its Italian counterpart.
As a preliminary matter, while it recognized that “private hire vehicles” need the protection of appropriate regulation, the CMA considered that consumers also benefit from effective competition, which exerts downward pressure on prices and upward pressure on service quality and standards.
The CMA takes the view that innovative services (which include app-based booking systems) may drive efficiencies through which it is possible to offer benefits such as lower prices and greater responsiveness to demand. The introduction of new services also has an inherent benefit in the form of greater choice for consumers.
From a general standpoint, the CMA thus considers that competition should be compromised or restricted by regulatory rules only to the extent that doing so is absolutely necessary for consumer protection. Above all, regulation should not favor certain groups or business models over others, and any measures that restrict the choices available to consumers should be minimized.
The CMA focused on a number of regulatory proposals (made by Transport for London, or “TfL”) that might have the greatest impact on competition.
5-minute wait requirement. TfL proposes that operators must provide booking confirmation details to the passenger at least 5 minutes prior to the journey commencing.
According to the CMA, this proposal reduces the competitiveness of alternatives to black cabs by artificially hampering the level of service that new services can provide.
Approval for changes to operating models. TfL proposes that operators will be required to seek TfL approval before changing their operating model. The CMA considers that ex ante regulation of business models is liable to reduce incentives for innovation (a key competitive parameter) and by extension to restrict competition.
Mandatory pre-booking facilities. In the CMA’s view, mandating ancillary functions (such as a facility to pre-book up to seven days in advance) can place undue burdens on some providers, leading to increased costs for private hire vehicles and thus distorting competition, as those unable or unwilling to provide these functions will be excluded from the market. The CMA notes that in instances where consumers find ancillary facilities useful, they are likely to be provided by a competitive market where different offerings proliferate.
Fixed landline telephone requirement. Similarly, the CMA believes that TfL’s proposal requiring operators to have a fixed landline telephone number available for passenger use at all times could raise barriers to entry (entrants would have to provide both a number and staff to handle calls), restrict innovation (including platform-based business models), and therefore reduce competition among private hire vehicle operators. Moreover, it is not clear that this functionality needs to be mandatory: consumers who value having a landline number to contact can choose private hire vehicle operators that provide one.
Requirement to specify the fare in advance. Another proposal that the CMA rejects is mandating operators to specify the fare for each journey prior to the commencement of that journey. According to the CMA, the supply of a precise and fixed fare at the time of booking would effectively prohibit innovative pricing models that could be more efficient than pre-calculated fares (e.g. by varying according to supply and demand). This would remove another parameter of competition among private hire vehicle operators.
Drivers to only work for one operator at a time. TfL further proposed a requirement that licensed private hire vehicle drivers can only work for one operator at a time, claiming that this is necessary to reduce the risk of drivers working excessive hours for a number of different operators.
The CMA notes that this proposal may not be suitable or necessary to meet the stated objective. First, TfL’s proposal seems to address only excessive hours among drivers working for multiple operators, and not the risk of excessive hours among drivers working for a single operator, or the danger of black cab drivers working excessive hours.
More interestingly, the CMA believes that ‘multi-homing’ (i.e. the ability of drivers to work for multiple platforms) can allow drivers to switch their supply to where it is needed in the market. Mandatory single-homing can create a strong network effect, as it gives drivers the incentive to only work for the platform with the most customers. The consequence could be fewer private hire vehicle operator platforms, or even a single dominant platform, with the potential for all the consumer harm that platform dominance might bring.