By Irene Ng (Huang Ying)
At the recent ChatbotConf 2017, hosted in Vienna, Austria on October 2-3, distinguished speakers from leading technology companies convened to discuss an up-and-coming technology – none other than the chatbot. The speakers discussed a range of topics, such as “Competing with character”, “turn(ing) conversations into relationships”, and “building conversational experiences”, all of which are viewable on the ChatbotConf website.
If you thought that the above topics were describing human relations, you were not exactly wrong – the focus is actually on developing a human character for chatbots. For some of us, the chatbot might already be a familiar piece of technology. We may have interacted with these bots on social media platforms such as Facebook Messenger, or used bots on Twitter to track down Pokémon to catch in the famous augmented reality game, Pokémon Go. In some cases, these chatbots are designed to provide customer support or service to a target audience. In other cases, such chatbots are built to provide simple, up-to-date information to users, such as the TyranitarBot on Twitter, or Poncho, a bot designed to send “fun, personalized weather forecasts every morning”.
This growing prevalence and use of chatbots by businesses and organizations on various platforms is not something to be ignored. Within the legal industry, several companies have created “legal bots” that are designed either to direct users to the right place (e.g. what kind of lawyer they should be seeking), or to perform a simple, repetitive service that can be easily automated. A famous case displaying the potential of chatbots in the legal industry is that of DoNotPay, a chatbot that challenges parking tickets and has reportedly helped “overturn 120,000 parking tickets in New York and London”. Besides DoNotPay, there are other bots in the legal industry, such as LegalPundits, which helps determine what kind of legal advice a potential client needs, in order to “match [the client] with the resources that [the client] needs”.
As users become more comfortable with interacting with chatbots and using them to resolve customer queries, an interesting avenue to explore is the use of chatbots by institutions providing pro bono services. Such institutions, in particular those that run free legal clinics, can benefit from chatbots in various ways. Firstly, these institutions can use chatbots as a screening tool to determine whether an applicant has met the means test to qualify for pro bono services. Means tests usually require applicants to fulfill a fixed set of criteria, and if such criteria are generally inflexible (e.g. the applicant’s income must be less than USD $1,000.00; anything above this amount will be rejected), then the chatbot can be deployed to interact with applicants and determine, at a first screening, whether they meet the basic criteria for pro bono services.
Similarly, institutions can use these chatbots to direct applicants or callers to the right ministry or non-profit organization that may be able to assist them further with their specific legal query. For example, an institution providing pro bono services may often get inquirers making simple requests, such as “where can I appeal my parking ticket” or “how do I get a divorce”. For the latter scenario, the chatbot can be trained to respond by indicating that the inquirer ought to seek a divorce lawyer, pointing the inquirer to a set of easily digestible information on divorce, followed by a list of divorce lawyers that the inquirer may contact.
Granted, there may – or will – be pitfalls in using chatbots to deal with legal pro bono queries. Applicants or inquirers who approach institutions providing pro bono services may become emotional when discussing their legal problems, and a human touch attending to such a person’s legal needs may seem preferable to a machine. Furthermore, while chatbots can be trained to fulfill certain functions, such as determining whether an applicant meets the means test, borderline cases may not be adequately attended to. Using the means test example provided earlier, where applicants must have an income of less than USD $1,000.00, an applicant who declares that she earns USD $1,001.00 may be rejected by the chatbot automatically if the developer did not train the chatbot to consider such borderline cases.
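The borderline problem described above can be mitigated if the developer builds an explicit referral band around the hard threshold, so that near-miss applicants are routed to a human reviewer rather than rejected outright. The following is a minimal, hypothetical sketch – the USD $1,000.00 ceiling comes from the example above, while the $50.00 referral margin and the function name are illustrative assumptions, not features of any real clinic’s chatbot:

```python
# Hypothetical means-test screening step for a pro bono intake chatbot.
# The ceiling mirrors the article's example; the margin is illustrative.

INCOME_CEILING = 1000.00   # monthly income limit in USD (example threshold)
BORDERLINE_MARGIN = 50.00  # near-misses within this band go to a human

def screen_applicant(monthly_income: float) -> str:
    """Return a routing decision instead of a bare accept/reject."""
    if monthly_income <= INCOME_CEILING:
        return "eligible"
    if monthly_income <= INCOME_CEILING + BORDERLINE_MARGIN:
        # An applicant earning USD $1,001.00 lands here, flagged for
        # human review, rather than being rejected automatically.
        return "refer_to_human"
    return "ineligible"

print(screen_applicant(950.00))    # eligible
print(screen_applicant(1001.00))   # refer_to_human
print(screen_applicant(2500.00))   # ineligible
```

The design point is simply that the chatbot’s output need not be binary: a third “refer to a human” outcome preserves the efficiency gains of automated screening while keeping borderline applicants in front of a volunteer.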
However, despite these concerns, there is still much room for chatbots to grow and to serve a public service function by providing greater accessibility to the law. A good chatbot can help pro bono institutions make better use of their resources. By implementing a chatbot to handle simple tasks, such as directing inquirers to the right pages or helping volunteers sift out applicants who fulfill the means test, these institutions can redirect the resources and manpower that would otherwise be spent on such relatively simple and repetitive tasks to other areas, thereby increasing efficiency within the same limited budget that institutions providing pro bono services have.
While there has been much chatter in the chatbot scene about developing emotional intelligence for chatbots, ultimately, providing legal aid is a form of public service – and as with all types of service, it is unavoidable that people may still want to converse with a real human being. As we move forward to explore new avenues of providing legal aid through different platforms in a more efficient and cost-effective manner, we should never neglect to provide a physical helping hand to those who need legal aid – and never assume that a chatbot can take our place and release us from our social duty as lawyers to help the needy.
Big Data: Italian Authorities Launch Inquiries on Competition, Consumer Protection and Data Privacy Issues
By Gabriele Accardo
On 30 May 2017, the Italian Competition Authority, the Italian Data Protection Authority and the Communications Authority opened a joint inquiry on “Big Data”.
The joint sector inquiry by the Italian Competition Authority, the Italian Data Protection Authority and the Communications Authority will focus, respectively, on potential competition and consumer protection concerns, data privacy, as well as on information pluralism within the digital ecosystem.
First, the inquiry starts from the assumption that the collection of information and its use through complex algorithms play a strategic role for firms, especially those offering online platforms, which use the collected data to create new forms of value. The inquiry will thus assess whether, and under which circumstances, access to “Big Data” might constitute a barrier to entry, or otherwise facilitate anticompetitive practices that could hinder development and technological progress.
Secondly, the use of such large amounts of information may create specific risks for users’ privacy, given that new technologies and new forms of data analysis often allow companies to “re-identify” individuals from apparently anonymous data, may enable new forms of discrimination, and, more generally, may restrict individual freedoms.
A further risk for the digital ecosystem is linked to how online news is now commonly accessed. In fact, digital intermediaries use information about users for profiling and to define their algorithms, which in turn can affect both the preservation of the net neutrality principle and the plurality of representations of facts and opinions.
It may be expected that, while the inquiry will focus on certain specific businesses (typically platform-related ones), the authorities may send requests for information to all businesses that collect and make significant use of customer or user data.
Relatedly, on 10 May 2016, French and German Competition Authorities published their joint report on competition law and Big Data. Separately, the French Competition Authority announced the launch of a full-blown sector inquiry into data-related markets and strategies.
In recent months, data-related issues have been at the core of specific investigations by the Italian Competition Authority (against Enel, A2A and ACEA for an alleged abuse of dominance, and against Samsung and WhatsApp for unfair commercial practices), and the Italian Data Protection Authority (against WhatsApp), showing that Italian authorities are getting ready for the challenges that the data-driven economy brings.
Enel, A2A, and ACEA, ongoing investigations on alleged abuse of dominance
On 11 May 2017, following a complaint by the association of energy wholesalers, the Italian Competition Authority (“ICA”) raided the business premises of Enel, A2A and ACEA in order to ascertain whether the energy operators may have abused their dominant positions in the electricity market in order to induce their respective customers (private individuals and small businesses) to switch to their market-based electricity contracts.
In particular, according to the ICA, each energy operator may have used “privileged” commercial information (e.g., contact details and invoicing data) about customers eligible for regulated electricity tariffs (the so-called Servizio di maggior tutela), information each held in its capacity as incumbent operator (at the national level for Enel, and in the Milan and Rome areas for A2A and ACEA, respectively), as well as its dedicated business infrastructure, to sell its market-price electricity supply contracts to private individuals and small business customers.
Enel may also have misled consumers by stating that it would be able to guarantee a more secure energy supply than Green Network in order to win back “former” customers and thus induce them to choose its contracts.
The investigation is similar to the one recently concluded by the French Competition Authority against energy operator Engie, which resulted in a fine of Euro 100 Million.
Interestingly, both the Italian and French investigations raise issues similar to those addressed in September 2015 by the Belgian Competition Authority against the Belgian National Lottery. The Belgian Authority held that the Belgian National Lottery used personal data acquired as a public monopoly to market its new product Scooore! on the adjacent sports betting market. The Belgian Competition Authority found that such conduct constituted an abuse of dominance insofar as the information used by the infringer could not be replicated by its competitors in a timely and cost-effective manner.
Samsung – unfair commercial practices
On 25 January 2017, the Italian Competition Authority (“ICA”) levied a 3.1 Million Euro fine on Samsung in relation to two unfair commercial practices related to the marketing of its products, one of which concerned the forced transfer of personal information for marketing purposes.
In essence, Samsung promoted the sale of its electronic products by promising prizes and bonuses (e.g. discounts, a bonus on the electricity bill, and a free subscription to a TV content provider) to consumers. However, contrary to what the advertising promised, consumers could not get the prize or bonus when buying the product, but could only receive it at a later stage, following a complex procedure that was not advertised, but was only made available in the Terms and Conditions and to consumers who registered on Samsung People online. In addition, consumers were repeatedly asked to provide the same documents.
The ICA also found the practice of making discounts conditional upon registering with the company’s digital platform and giving consent to the processing of personal data unfair and aggressive, insofar as consumers could not get the promised prize or bonus without consenting to the commercial use of their personal data, which Samsung used for purposes unrelated to the promotional offer of the product itself. The ICA thus found that the data requested by Samsung were irrelevant and unrelated to the specific promotion in question.
WhatsApp – unfair commercial practices and privacy issues
This is yet another case concerning the forced transfer of personal information for marketing purposes, which follows the same lines as the Samsung case.
Preliminarily, the ICA held that data is a form of information asset, and that an economic value can be attached to it (e.g., Facebook would in fact be able to improve its advertising activity with more data). The ICA further found that a commercial relationship exists in all instances where a business offers a “free” service to consumers in order to acquire their data.
In particular, the ICA found that:
- users were not provided with adequate information on the possibility of denying consent to share their WhatsApp personal data with Facebook;
- the option to share the data was pre-selected (effectively an opt-out mechanism), so that, while users could in fact have chosen not to give their assent to the data sharing and still continue to use the service, this possibility was not readily apparent and, in any event, users had to actively remove the pre-selected choice;
During the investigation, WhatsApp offered a set of remedies, but this offer was rejected by the ICA, based on the fact that, as a result of the methods used by WhatsApp to obtain customers’ consent to transfer their data to Facebook, the practice could be characterized as overtly unfair and aggressive, and as such deserved a fine (in any case WhatsApp halted the practice of sharing data with Facebook in light of ongoing discussions with national data protection agencies in Europe).
Interestingly, while the ICA decision is based on consumer protection grounds, last year the German Federal Cartel Office (FCO, the Bundeskartellamt) launched an investigation into similar conduct by Facebook, WhatsApp’s parent company, on competition law grounds. Specifically, the investigation was based on suspicions that, with its specific terms of service on the use of user data, Facebook may have abused its alleged dominant position in the market for social networks. The presence of excessive trading conditions is the underlying theory of harm for the investigation: the FCO is assessing whether Facebook’s position allows it to impose contractual terms that would otherwise not be accepted by its users.
Yet consumer, competition law, and privacy considerations appear entangled in such cases, as shown by the investigation that the Italian Data Protection Authority launched against WhatsApp in parallel with the ICA’s.
It is understood that, while the investigation is still ongoing, the Italian Data Protection Authority has requested WhatsApp and Facebook to provide information in order to assess the case thoroughly. In particular, the two companies were asked to provide detailed information on:
- data categories that WhatsApp would like to make available to Facebook;
- arrangements that are in place to obtain users’ consent to disclose their data;
- measures that have been taken to enable exercise of users’ rights under Italy’s privacy legislation, since the notice given to users on their devices would appear to only allow withdrawing consent and objecting to data disclosure for a limited period.
In addition, the Italian Data Protection Authority is seeking to clarify whether the data of WhatsApp users who do not use Facebook will also be disclosed to that company, insofar as no reference to marketing purposes was made in the information notice initially provided to WhatsApp users.
Businesses are moving fast to figure out how best to harness the wealth of consumers’ data and make good commercial use of it. Authorities around the globe are putting together their toolkits to address emerging issues in the data-driven economy.
In this cops-and-robbers game, it appears clear that businesses are struggling to understand which set of rules may apply to their business models, either because there are multiple laws that could potentially apply or because the rules are not readily foreseeable or clear. Obviously, if the same conduct can be caught from many angles, then there is something wrong that needs to be addressed, particularly where it risks stifling innovation.
That said, the message for businesses sent by these mushrooming initiatives in Europe and around the world is clear: consumers’ freedom to choose whether or not to allow their data to be transferred to parties intending to use this information in order to generate a profit from it should be and will be protected. Enforcers will tackle conduct that unduly influences consumers’ ability to take informed and free decisions.
Consumers on Fyre: Influencer Marketing and Recent Reactions of the United States Federal Trade Commission
By Catalina Goanta
Social media disruptions
Silicon Valley continues to change our world. Technology-driven innovations that are disseminated with the help of the Internet have met with great success. This success translates into heaps of followers, as one can see in the case of platforms such as Facebook and Instagram. However, it is the followers themselves who continually reshape the purposes of these platforms. A good example in this sense is Youtube: what started out as an alternative channel for sharing low-resolution home videos soon became a place where users could create their own content professionally. If well received, this content leads to real Internet phenomena and is eventually monetized via direct or indirect advertising. Individuals around the world now have access to their own TV stations, where they can attract funders and make a good living out of running their channels.
Online content creation raises issues that are similar to those in the sharing economy (e.g. Uber, Airbnb, etc.). On the one hand, online platforms connect individual content providers with viewers, in the same peer-to-peer fashion that Airbnb connects an apartment owner and a tourist. Given the service orientation of both activities, provided they are monetized, a clear issue emerges: when does an individual stop being a peer? In other words, what does it mean to be a consumer in this environment? Relatedly, what legal standards apply to the process of creating such content?
The Fyre Fiasco
The Fyre Festival was supposed to be a luxury music festival scheduled for April and May 2017 in the Bahamas, organized by rapper Ja Rule and young entrepreneur Billy McFarland. The latter has built other businesses catering to the rich, such as Magnises, a members-only benefits card programme aimed at wealthy millennials. However, instead of the promised luxury and exclusivity, the Fyre Festival organizers could not provide their guests with even the most basic of amenities, from accommodation to food and transport. This led to a massive social media fury, with tens of thousands of reposts on Facebook, Instagram and Twitter showcasing the disastrous conditions, which were far removed from the luxury advertisements and the matching price tags (participants paid up to $100,000 to attend the festival). Apart from criminal allegations of mail, wire and securities fraud, Fyre Media – Ja Rule and McFarland’s company – is already facing a $100,000,000 class action. In its Introduction, the complaint emphasizes that “[t]he festival’s lack of adequate food, water, shelter, and medical care created a dangerous and panicked situation among attendees—suddenly finding themselves stranded on a remote island without basic provisions—that was closer to ‘The Hunger Games’ or ‘Lord of the Flies’ than Coachella.” Because of the trust-building social media campaign Fyre Media had launched to promote the event, festival-goers had no suspicion of fraud before they arrived. Influencers such as Kendall Jenner, Bella Hadid, and Emily Ratajkowski were involved in making Instagram posts about the festival (without any proof of concept), thereby endorsing the event and communicating their trust in the Fyre Festival to their millions of followers.
The Federal Trade Commission takes action
Influencer marketing is a grey area of consumer advertising. It entails companies reaching out to celebrities who benefit from a faithful following of individuals whom they can easily sway to buy certain products. Monetizing a Youtube channel is a process requiring sustained effort, as channel owners have to strike a balance between keeping their followers entertained and generating enough revenue from their activity. Popularity is correlated with the amount of earnings celebrities can make from sponsored content.
What makes this a great marketing technique is also what may hurt consumers the most. The trust-based relationship between a celebrity and their fan base appeals to marketers; it creates a more genuine story for their products or services. But trust is a fine line, and if a celebrity endorses material things only for money, they are not being honest with their audience, who might go and buy those products under mistaken assumptions.
The Federal Trade Commission labels these actions as endorsements, and is very clear that, since such advertising tools can persuade consumers to engage in commercial transactions, endorsements must be truthful and not misleading. For this reason, the FTC created the Guides Concerning the Use of Endorsements and Testimonials in Advertising, soft rules designed to address the application of Section 5 of the Federal Trade Commission Act on unfair or deceptive acts or practices.
In the light of its guides and the Fyre fiasco, on 19 April 2017 the FTC notified more than 90 online influencers about the need to disclose their relations to the brands they endorse on social media. According to the Guides, if there is a “material connection” between an influencer and an advertiser which can influence the credibility of the messages posted on social media, the endorser must make this connection clear. In practice, that means adding hashtags such as #ad, by which the public understands that the celebrity in question has been paid to sing the praises of specific products. Still, not many celebrities seem to be bothered by this existing guideline, as only one post relating to the Fyre Festival was actually tagged in a clear and conspicuous way to reveal the commercial interest behind it.
Prior to the Fyre Festival debacle, in 2016 the FTC had filed a complaint against retailer Lord & Taylor, which paid more than 50 fashion influencers up to $4,000 to post photos of themselves on Instagram styling a specific dress and using the hashtag #DesignLab, without disclosing the material connection. The consumer deceit charges were eventually settled.
Are the guides enough to tackle the issue of endorsement? Perhaps there might be a deterrent effect with respect to aligning celebrities with legal standards, but the problem is wider if we consider the fact that it is not only celebrities advertising products on social media.
Just like Instagram, Youtube is a huge market for reviews on products or services relating to technology, games, clothing or make-up, just to name a few. Ordinary people become channel owners and post regular videos focusing on a particular theme. With time, some of these people reach quasi-stardom and become known names on the Internet. To take an example, NikkieTutorials, a successful make-up vlogger based in the Netherlands, has gained a total of 6,998,037 followers since joining Youtube in 2008, and her videos have been viewed 537,159,106 times so far. And while that might look like a lot, these numbers really fade into oblivion when compared to one of the most famous Youtubers of all time, the Swedish game vlogger PewDiePie. With a total following of around 55,538,695 individuals, his videos have collected an overwhelming total of 15,449,755,042 views ever since he joined Youtube in 2010 and earned approximately $7,400,000 in 2014 on the basis of this following. But these are only examples of very well established Youtubers; thousands if not hundreds of thousands of people are currently turning to Youtube to make a living, and in doing so, they seek to earn money from potential collaborators.
Youtube monetization often entails two main streams of revenue: AdSense and sponsorships. AdSense is a Youtube feature that allows channel owners to play ads in various formats before their own content; their remuneration depends on the number of views their videos receive. Sponsorships are separate from the Youtube channel, in that external companies can contact a popular Youtuber and offer to pay them under a sponsorship agreement. These agreements are likely to require the Youtuber to endorse specific companies or products. As one of the most important traits of Youtubers is being relatable – the feeling that Youtubers are normal people, just like their followers – channel owners will likely not want to openly disclose sponsorships. This creates a conflict of interest: the channel owner’s main activity is generating consumer opinions and reviews, while at the same time being secretive about the products that he or she is being paid to advertise.
On the basis of Section 5 of the FTC Act, such practices could be deemed to be unfair if they “cause or [are] likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” However, this seems to be a test that is not applicable to the mundane low-value objects normally advertised online, which begs the question – should the FTC do something more to align social media advertisers with the public interests it upholds? If that is the case, it most certainly cannot do so alone and will need the willingness of the platforms enabling these new practices to properly address this growing problem.
By Nikolaos Theodorakis
China’s new cybersecurity law (“Cybersecurity Law”), which came into force on 1 June 2017, is a milestone. Unlike the EU, which has adopted the General Data Protection Regulation, China does not have an omnibus data protection law. Instead, it regulates privacy and cybersecurity issues through a number of sector-specific laws, for example in the health and education sectors. The Cybersecurity Law is different in that it has a wide scope and contains provisions relevant both to data privacy and to cybersecurity.
What is the new law about?
The Cybersecurity Law focuses on the protection of personal information and privacy. It regulates the collection and use of personal information. Companies based in or doing business with China will now be required to introduce data protection measures, and certain data must be stored locally on domestic servers. Depending on their activity, companies may need to undergo a security clearance prior to moving data out of China.
The Cybersecurity Law defines personal information as any information that, on its own or in combination with other information, can determine the identity of a natural person (e.g. name, date of birth, address, telephone number, etc.). It mainly regulates two types of organizations: network operators and Critical Information Infrastructure (CII) providers.
Network operators must:
- Acquire the user’s consent when collecting their personal information (it is yet unclear whether consent must be express or not);
- State the purpose, method and scope of data collection;
- Keep the information secure and private (e.g. use back up and encryption methods);
- In the event of a data breach or likely data breach, take remedial actions, inform users and report to competent authorities;
- Erase personal information in case of an illegal or unauthorized collection, and correct inaccurate information;
- Keep log-files of cybersecurity incidents and implement cybersecurity incident plans.
CII providers are required to observe the same cybersecurity practices as network operators, along with additional requirements such as conducting annual cybersecurity reviews. Furthermore, they are required to store personal information and “important data” within China, as discussed below.
What does this mean for businesses?
If your company is doing business in China, or has a physical presence in China, you will need to conduct a gap assessment to determine whether you must undertake changes to be fully compliant with the cybersecurity law.
Failure to comply with the new law comes with significant consequences: a monetary fine up to 1 million yuan (about $150,000) and potential criminal charges. Individuals (e.g. company directors/ managers) may be subject to personal, albeit lesser, fines as well. In determining the applicable sanction, elements taken into account include the degree of harm and the amount of illegal gains. Fines could go up to ten times the amount of ill-gotten gains, potentially skyrocketing the amount. The law also gives the Chinese government the ability to issue warnings, confiscate companies’ illegal income, suspend a violator’s business operations, or shut down a violator’s website.
Not every aspect of the Cybersecurity Law applies to all companies, however. Many of the law’s provisions apply only to the two types of organizations mentioned above, network operators and Critical Information Infrastructure providers. These categories, however, are defined quite broadly: even companies that would not ordinarily consider themselves network operators or CII providers could find the law applying to them.
In fact, network operators include network owners, administrators and service providers. Networks are “systems consisting of computers or other data terminal equipment and relevant devices that collect, store, transmit, exchange, and process information according to certain rules and procedures” (Article 76 of the new Cybersecurity Law). The Cybersecurity Law does not differentiate between internal and external networks; the Law is broad enough to include any company that owns an internal network. The Cybersecurity Law therefore suggests that any company that maintains a computer network, even within its own office, could qualify as a network operator. Companies based outside of China that use networks to do business within China could also fall under this definition (e.g. an EU-based company that uses networks in China to process data for its operations).
Critical Information Infrastructure providers are defined more narrowly: they are providers of infrastructure whose loss or destruction would damage Chinese national security or the public interest. This includes information services, transportation, water resources and public services. The law also includes more generally applicable requirements relating to cybersecurity and contains provisions that apply to other types of entities, such as suppliers of network products and services.
Current and upcoming data localization requirements
The new cybersecurity law also requires critical information infrastructure providers to store personal information and important data within China and conduct annual security risk assessments. Important data is not defined in the Cybersecurity Law, yet it likely refers to non-personal information that is critical.
Apart from CII providers, it is anticipated that several foreign companies doing business in China will be required to make significant changes to how they handle data. The draft version of the “Measures for Security Assessment”, published by the Cyberspace Administration of China, suggests expanding the data localization requirements to all network operators. If adopted, this measure will mean that practically all personal information that network operators collect within China must not leave the country other than for a genuine business need and after a security assessment. In anticipation of this development, there is a trend for foreign companies to set up data centers in China so as to be able to store data locally.
The Draft Implementation Rules also suggest that individuals and entities seeking to export data from China, even if they are not network operators and are based outside China, must conduct security assessments of their data exports. This development, if applied, would significantly expand the cybersecurity law’s data localization requirements.
Over the coming months, the Chinese government will continue to issue implementing legislation and official guidance clarifying the scope of the law.
By Maria Sturm
On 6 May 2015, the European Commission issued a communication with the title “A Digital Single Market Strategy for Europe” to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. This digital single market strategy is comprised of three main pillars:
- Better access to online goods and services for consumers and businesses across Europe.
- Creating the right conditions for digital networks and services to flourish.
- Maximizing the growth potential of the European Digital Economy.
The second pillar includes the goal of creating new possibilities to process communication data and to reinforce trust and security in the Digital Single Market. Therefore, in January 2017, the EU Commission issued a proposal for a “Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)”. A study was conducted on behalf of the EU Commission to evaluate and review Directive 2002/58/EC. The most important findings of the study were:
- The Member States transposed the directive in very different ways. This uneven transposition led to legal uncertainty and an uneven playing field for operators.
- This fragmented implementation leads to higher costs for businesses operating cross-border in the EU.
- New means of communication (e.g. WhatsApp) are not covered by the directive. This means that EU citizens enjoy a different level of protection, depending on which communications tools they use.
Based on these findings, the new proposal seeks to keep pace with fast-developing IT services. The data business is an important economic sector that creates many jobs, and it needs to be able to use data and make it available. On the other hand, consumer protection and privacy, as enshrined in Art. 7 of the Charter of Fundamental Rights of the EU, are essential to establishing and maintaining trust in the digital single market. Thus, the proposal aims to strike the right balance between the expectations of businesses and the expectations of consumers, and to establish a framework that offers more security to both sides.
The focal points of the proposal are:
- The directive will be replaced by a regulation to create a level playing field for operators across the EU. While a directive needs to be transposed by each individual Member State, a regulation is directly applicable in all Member States.
- The proposal covers new means of communication, such as instant messaging or VoIP telephony, the so-called “Over-the-Top communications services”. It therefore guarantees the same level of confidentiality no matter whether a citizen of the EU uses a new communication system or makes a “traditional” phone call.
- New business development opportunities can emerge, because once consent is given, communication data can be used to a greater extent.
- Cookie-rules, which today are cumbersome and result in an overload of consent requests, will be streamlined and made more user-friendly.
- Spam protection will be increased.
- Enforcement will be delegated to national data protection authorities, which are already responsible under the General Data Protection Regulation. This makes enforcement more effective.
The proposal directly addresses the problems identified by the study on Directive 2002/58/EC and aligns the ePrivacy legislation with the General Data Protection Regulation of April 27, 2016 (see also TTLF Newsletter of February 3, 2017). Further changes may be made to the proposal during the remainder of the legislative process, and it remains to be seen exactly what those developments will entail. It is clear, however, that the current legislation on privacy and electronic communications is fragmentary and needs to adapt to new technological developments and needs.
 European Commission, Press Release IP-17-16.
 Voice over Internet Protocol.
By Martin Miernicki
On 10 February 2017, Italy ratified the Agreement on a Unified Patent Court. The UK had already announced its commitment to continuing the ratification process of the agreement, despite the ongoing Brexit discussions.
The unitary patent – an overview
The legal basis for the unitary patent is the so-called “patent package” adopted between 2012 and 2013. It consists of three main instruments:
- Regulation (EU) No 1257/2012 creating a unitary patent (Unitary Patent Regulation)
- Council Regulation (EU) No 1260/2012 on translation arrangements (Unitary Patent Translation Regulation)
- Agreement on a Unified Patent Court (UPC Agreement)
The patent package is the result of an enhanced cooperation (art. 326 et seq. TFEU) between, originally, 25 EU member states. Italy joined in 2015, leaving Spain and Croatia as the only member states not participating in the enhanced cooperation. The adoption of the patent package was accompanied by several disputes, especially regarding translation arrangements.
The unitary patent (European patent with unitary effect) supplements the existing options for international patent protection, such as the systems under the Patent Cooperation Treaty (PCT) and the European Patent Convention (EPC). The unitary patent is designed as a European patent issued by the European Patent Office (EPO) under the EPC. A European patent granted with the same set of claims in respect of all the participating member states can, upon request of the patent owner, benefit from the unitary effect under the Unitary Patent Regulation. In this case, the patent provides uniform protection and has equal effect in the participating member states (art. 3 of the Unitary Patent Regulation). Translations – in addition to those required under the EPC procedure – may be necessary if a dispute arises relating to the infringement of a unitary patent, and during a transitional period (arts. 4 and 6 of the Unitary Patent Translation Regulation). The Unified Patent Court (UPC) has jurisdiction over unitary patents according to the UPC Agreement.
Entry into force
The Unitary Patent Regulation’s entry into force is linked to the UPC Agreement (art. 18), and the same applies to the Unitary Patent Translation Regulation (art. 7). The UPC Agreement will enter into force upon ratification by thirteen signatory states, which must include France, Germany, and the UK (as the countries with the highest number of European patents). As of March 2017, twelve signatory states, including France, have ratified the agreement.
What can be expected?
The British announcement to continue preparing for ratification was somewhat surprising given the circumstances surrounding Brexit. It remains to be seen how the UK government will proceed, especially in light of the upcoming negotiations between the EU and the UK on their future relationship. The announcement alludes to this point, saying, “[t]he decision to proceed with ratification should not be seen as pre-empting the UK’s objectives or position in the forthcoming negotiations with the EU.” Furthermore, British minister Jo Johnson presented a favorable explanatory memorandum on the UPC to the British Parliament earlier this year. Italy’s ratification, in turn, shows that preparation for the unitary patent is ongoing and that the patent package could enter into force sooner rather than later. Meanwhile, the UPC Preparatory Committee is working towards the phase of provisional application, which it expects to start in spring 2017.
 Spain unsuccessfully asked the ECJ to annul the Unitary Patent Regulation, see Spain v. European Parliament, C‑146/13 (2015).
By Nikolaos Theodorakis
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, replacing the UK’s Data Protection Act 1998 (DPA). It is still unclear how Brexit will play out, but in the meantime the United Kingdom is moving to adopt the GDPR principles so that it adequately protects personal data transferred within the EU. The GDPR sets a high standard for consent and compliance, which means that companies must start preparing for this transition now.
The Information Commissioner’s Office (ICO) issued guidance on GDPR consent on 2 March, explaining its recommended approach to compliance and its definition of valid consent. The ICO also provides examples and practical advice that can help companies decide when consent is appropriate and when other lawful bases should be sought.
The guidance’s main points on consent are:
- Individuals should be in genuine control of consent;
- Companies should check their existing consent practices and revise them if they do not meet the GDPR standard. Evidence of consent must be kept and reviewed regularly;
- The only way to adequately capture consent is through an opt-in;
- Explicit consent requires a very clear and granular statement;
- Consent requests should be separated from other terms and conditions. Companies should avoid making consent a precondition of service;
- Every third party who relies on the consent must be named;
- Individuals should be able to easily withdraw consent;
- Public authorities and employers may find using consent difficult. In cases where consent is too difficult, other lawful bases might be appropriate.
The basic notion of consent is not new. It was initially defined under the Data Protection Act 1998 (DPA), which implemented the Data Protection Directive 95/46/EC currently in force. The GDPR builds on the standard of consent introduced in the DPA but includes more detailed and specific requirements. Consent is now defined in Article 4(11) of the GDPR in a similar way as in previous legislation, with the added requirements of unambiguity and clear affirmative action. Several other provisions throughout the GDPR, however, also relate to consent (e.g. Article 7 and recitals 32, 42 and 43), which complicates the notion of consent and what employers need to do to secure valid consent.
The ICO is running a public consultation on the draft guidance until 31 March 2017 to solicit the views of relevant stakeholders and the public. The feedback received will then be taken into account in the published version of the guidance, which is provisionally aimed for May 2017. The GDPR consent guidance can be found here, and the public consultation form here.
Other European countries have already launched relevant public consultation events:
In June 2016, the French data protection authority (“CNIL”) launched a public consultation on the GDPR. Two hundred twenty-five organizations participated in the public consultation, and the outcome was integrated into recent guidance from the Consortium of European Data Protection Authorities. The CNIL’s report on the French public consultation is available (in French) here.
In Germany, the Interior Ministry has been drafting a proposed Data Protection Amendments and Implementation Law (Datenschutz-Anpassungs- und Umsetzungsgesetz – or “DSAnpUG”) since roughly the time the GDPR was passed. The DSAnpUG implements the GDPR as well as the EU Law Enforcement Directive 2016/680. At present, several committees of the Upper House of Parliament (Bundesrat) are debating the draft, and a full vote of the Upper House is scheduled for March 8, 2017.
In February 2017, the Spanish Ministry of Justice launched a public consultation as a preliminary step before the drafting of a new bill implementing the GDPR. The press release on the Spanish consultation is available (in Spanish) here.
It is important to remember that invalid consent can have severe financial consequences, in addition to reputational damage. Infringements of the basic principles for processing personal data, which include consent, are subject to the highest tier of administrative fines. This means a fine of up to 20 million Euro, or 4% of a company’s total worldwide annual turnover, whichever is higher, could be issued.