Archive | IT Law

The UK House of Commons Treasury Committee Report on Crypto Assets

By Jonathan Cardenas[1]

On September 19, 2018, the UK House of Commons Treasury Committee (the “Committee”) published a Report on Crypto-assets (the “Report”), which provides regulatory policy recommendations for the UK Government, the UK Financial Conduct Authority (the “FCA”) and the Bank of England.[2]   The Report forms part of the Committee’s Digital Currencies Inquiry, which was launched in February 2018 to examine the potential impact of distributed ledger-based digital currencies on the UK financial system and to prepare a balanced regulatory response from the UK Government.[3]  This article briefly summarizes the Committee’s UK regulatory policy recommendations.

 

  1. Crypto Asset Risk

The Committee’s regulatory policy recommendations are structured around a variety of risks that crypto assets pose to crypto asset investors.  These risks include: high price volatility; loss of investment due to fraud and/or third-party hacking of crypto asset exchanges; loss of access to crypto asset exchange accounts and/or digital wallets due to unrecoverable lost passwords; price manipulation due to poor market liquidity and relatively low trading volumes; potential facilitation of money laundering and terrorist financing; and macro-level risk to UK financial stability.  Mindful of these risks, the Committee notes that crypto assets presently fall outside the scope of FCA regulation merely because the “transferring, buying and selling of crypto assets, including the commercial operation of crypto asset exchanges”[4] do not meet the legal definitional criteria to be considered either a “specified investment” under the Financial Services and Markets Act 2000 (Regulated Activities) Order (the “Regulated Activities Order”), or “funds” or “electronic money” under applicable payment services and electronic money regulation, as referenced in expert witness reports provided to the Committee.[5]

 

  2. Initial Coin Offerings

Consumer fraud in the context of initial coin offerings (“ICOs”) is a topic of special concern to the Committee.  The Committee recognizes that there is currently “little the FCA can do to protect individuals”[6] from fraudulent ICOs as a result of a regulatory loophole that permits ICOs to escape FCA jurisdiction.  Since most ICOs do not directly promise financial returns, but rather, offer future access to a service or utility, they do not fall squarely within UK law definitions of “financial instrument,”[7] as referenced in expert witness reports provided to the Committee, and therefore are not FCA regulated.

The Committee concurs with the view of U.S. Securities and Exchange Commission Chairman Jay Clayton that ICOs should not escape the ambit of securities regulation merely because they change the form, and not the actual substance, of a securities offering.[8]  The Committee also concurs with the view expressed in an FCA warning that consumers should be prepared to lose their entire investment in early stage ICO projects due to the FCA’s lack of jurisdiction and consequent inability to protect consumers.[9]  As a result, the Committee recommends that the Regulated Activities Order be updated, as a matter of urgency, in order to bring ICOs within the scope of FCA jurisdiction.

 

  3. Crypto Asset Exchanges

The facilitation of money laundering and terrorist financing through crypto asset exchanges is another area of major concern addressed by the Committee.  Crypto asset exchanges are not currently required to comply with anti-money laundering (“AML”) rules under UK law because their activities are not specifically captured by the language of UK AML regulation, including, most notably, the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017.[10]  Although current UK AML regulation does not target crypto asset exchange activity, crypto asset exchanges do fall within the scope of the European Union’s 5th Anti-Money Laundering Directive (the “5th AML Directive”).[11]  As a consequence, the Committee recommends that the UK Government either (1) transpose the 5th AML Directive into UK law prior to the UK’s planned exit from the EU, or (2) replicate relevant provisions of the 5th AML Directive in UK law as quickly as possible.

 

  4. Regulatory Implementation

The Committee proposes two ways of introducing crypto asset regulation in the UK: (1) by amendment of existing financial services regulation, or (2) by adoption of new regulation tailored specifically to crypto assets.

Amending the existing financial services regulatory framework would involve classifying crypto asset activity as a regulated activity within the Regulated Activities Order.  Doing so would enable the FCA to regulate crypto asset activities by, for example, mandating that licenses be obtained in order to carry out specified crypto activities in the UK.  This approach has previously been used in the context of peer-to-peer lending,[12] and is regarded as the fastest way of providing the FCA with the powers needed to regulate crypto asset activities and protect UK consumers.

Adopting a new regulatory framework separate from pre-existing financial services rules would allow for a more flexible and tailored approach to crypto asset regulation, but would also require substantially more time to formulate and finalize.

Given the rapid growth of crypto asset markets and the expanding set of risks faced by UK consumers, the Committee recommends that the UK Government regulate crypto asset activities by expanding the scope of the Regulated Activities Order, rather than by adopting a separate body of rules.  The Committee also recommends that the UK Government examine the exact type of crypto asset “activity” that would be included in an amended Regulated Activities Order, as well as the ramifications of doing so.

The Committee notes that although the global regulatory response to crypto assets is in early stages, the UK is in a position to learn from the experience of other jurisdictions given the fact that the UK has not yet introduced any specific type of crypto asset regulation.  As a result, the Committee encourages UK regulators to engage with their international counterparts in order to ensure that best practices are applied in the UK.

[1] Disclaimer: The views and opinions expressed in this article are those of the author alone.  The material in this article has been prepared for informational purposes only and is not intended to serve as legal advice.

[2] UK House of Commons Treasury Committee, Crypto-assets, Twenty-Second Report of Session 2017-19, 19 September 2018. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmtreasy/910/910.pdf.

[3] UK House of Commons Treasury Committee, Digital Currencies inquiry: Scope of the inquiry, 2017.  Available at: https://www.parliament.uk/business/committees/committees-a-z/commons-select/treasury-committee/inquiries1/parliament-2017/digital-currencies-17-19/.

[4] UK House of Commons Treasury Committee, Evidence – Financial Conduct Authority (DGC0028): Financial Conduct Authority’s written submission on digital currencies, April 2018.  Available at: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/treasury-committee/digital-currencies/written/81677.pdf.

[5] UK House of Commons Treasury Committee, Evidence – Financial Conduct Authority (DGC0028): Financial Conduct Authority’s written submission on digital currencies, April 2018.

[6] UK House of Commons Treasury Committee, Crypto-assets, 19 September 2018, at para 87.

[7] UK House of Commons Treasury Committee, Oral evidence: Digital Currencies, Statement of David Geale, Q 193, HC 910, 4 July 2018. Available at: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/treasury-committee/digital-currencies/oral/86572.html.

[8] U.S. Securities and Exchange Commission, Statement on Cryptocurrencies and Initial Coin Offerings, December 11, 2017. Available at: https://www.sec.gov/news/public-statement/statement-clayton-2017-12-11.

[9] Financial Conduct Authority, Consumer warning about the risks of Initial Coin Offerings (‘ICOs’), 9 December 2017. Available at: https://www.fca.org.uk/news/statements/initial-coin-offerings.

[10] The Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 (S.I. 2017/692), 26 June 2017. Available at: http://www.legislation.gov.uk/uksi/2017/692/made.

[11] Directive (EU) 2018/843 of the European Parliament and of the Council of 30 May 2018 amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, and amending Directives 2009/138/EC and 2013/36/EU, OJ L 156, 19.6.2018, p. 43–74. Available at: https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32018L0843.

[12] See Financial Conduct Authority, The FCA’s regulatory approach to crowdfunding (and similar activities), Consultation Paper 13/13, October 2013. Available at: https://www.fca.org.uk/publication/consultation/cp13-13.pdf.


The European Data Protection Board starts its operations

By Nikolaos Theodorakis

The European Data Protection Board (EDPB) started its operations on the date the General Data Protection Regulation (GDPR) entered into force, 25 May 2018. The GDPR creates a harmonized set of rules applicable to all personal data processing taking place in the EU. The GDPR established the EDPB to contribute to the consistent application of data protection rules throughout the European Union and to promote cooperation between the EU’s data protection authorities.

The EDPB is the successor to the Article 29 Working Party, which operated under the previous legal regime. The EDPB is composed of representatives of the national data protection authorities and the European Data Protection Supervisor (EDPS). The EDPB also has a secretariat, provided by the EDPS and working under the instructions of the EDPB. The secretariat will have an important role in administering the One-Stop-Shop and the consistency mechanism, as explained below. The European Commission has the right to participate in the activities and meetings of the Board, without, however, having a voting right.

 

Objectives

The EDPB aims to ensure the consistent application of the GDPR and of the European Law Enforcement Directive. In doing so, the EDPB is expected to adopt general guidance to clarify the terms of European data protection laws, providing a consistent interpretation of stakeholders’ options and obligations. It can also issue binding decisions addressed to national supervisory authorities to ensure the consistent application of the GDPR.

In brief, the EDPB:

  • Provides general guidance (e.g. guidelines and recommendations) to clarify the law;
  • Advises the European Commission on personal data issues and proposed legislation;
  • Adopts consistency findings for cross-border data protection issues; and
  • Promotes cooperation and the effective exchange of information and best practice between national supervisory authorities.

The EDPB’s principles are independence and impartiality, good governance, collegiality, cooperation, transparency, efficiency, and proactivity.

 

Program and future actions

The EDPB acknowledged the continuity of its predecessor, the Article 29 Working Party, and endorsed a series of important guidelines on the first day of operations:

  • the guidelines on consent;
  • the guidelines on transparency;
  • the guidelines on automated individual decision-making and profiling for the purposes of the GDPR;
  • the guidelines on personal data breach notification under the GDPR;
  • the right to data portability guidelines;
  • the guidelines on data protection impact assessments and on determining whether processing is “likely to result in a high risk”;
  • the Data Protection Officers guidelines;
  • the Lead Supervisory Authority guidelines;
  • the paper on the derogations from the obligation to maintain records of processing activities;
  • the working document for the approval of “Binding Corporate Rules” for controllers and processors;
  • the recommendation on the standard application for approval of Controller and Processor Binding Corporate Rules, and the elements and principles to be found in said Rules; and
  • the guidelines on the application and setting of administrative fines for the purposes of the GDPR.

 

Moving forward, the EDPB is expected to issue guidance on a number of important privacy-related issues, such as the right to data portability, Data Protection Impact Assessments, certifications, the extraterritorial applicability of the GDPR, and the role of Data Protection Officers. In doing so, the EDPB plans to regularly consult business and civil society representatives on how to implement the GDPR.

 

One-Stop-Shop and Consistency Mechanism

Apart from issuing guidelines and binding decisions, the EDPB will be instrumental in assisting with the One-Stop-Shop mechanism and the consistency mechanism. The One-Stop-Shop involves designating a lead Data Protection Authority to resolve data protection issues involving more than one EU Member State. This innovative GDPR framework will allow for better cooperation on processing activities that span multiple Member States.

The consistency mechanism is set out in Article 63 of the GDPR: a mechanism through which DPAs cooperate to contribute to the consistent application of the GDPR. The GDPR makes several references to this mechanism, and it is expected to be an important area for the EDPB to administer and interpret. In essence, the EDPB should ensure that where a national data protection authority’s decision affects a large number of individuals in several EU Member States, there is prior collaboration and consistency in the interpretation and application of that decision. This is in line with the EU’s digital single market agenda, which seeks to bring consistent application of EU laws throughout the single market.

 

A true transformation?

It is too early to tell whether the EDPB will prove to be a truly transformed body or merely a rebranded version of the Article 29 Working Party. Even though it seems that the WP29 subgroups will continue their work as usual, the action plan indicates that the EDPB will undergo significant changes and that it aspires to be at the epicenter of data protection developments in the European Union. The first indications demonstrate that the EDPB wants to become a prominent body through administrative restructuring and a clearer communication strategy. GDPR enforcement has brought data protection into the spotlight, and the EDPB will certainly have a chance, if it so desires, to prove that it is a larger, more influential, and more important body than its predecessor.

Regulation of Taxi Apps: Two Judgements and Bad News for Uber

By Martin Miernicki

On 20 December 2017, the Court of Justice of the European Union (CJEU) handed down its decision in Asociación Profesional Élite Taxi v. Uber Systems Spain SL (C-434/15), holding that Uber’s services, in principle, constitute transportation services and thus remain regulated by national legislation. On 10 April 2018, the court essentially confirmed this ruling in Uber France SAS v. Nabil Bensalem (C-320/16).

 

Background of the cases

Both cases centered on the legal classification of the services provided by Uber under EU law. In the first case, the Asociación Profesional Elite Taxi – a professional taxi drivers’ association – brought an action against Uber before the national (Spanish) court, claiming that the company infringed the local rules on the provision of taxi services as well as the laws on unfair competition. The national court observed that neither Uber nor the non-professional drivers had the licenses and authorizations required by national law; however, it was unsure whether the services provided by Uber qualified as “information society services” within the meaning of article 2(a) of Directive 2000/31/EC (E-Commerce Directive) or rather as a “service in the field of transport”, thereby being excluded from said directive as well as from the scope of article 56 TFEU and article 2(2)(d) of Directive 2006/123/EC (Services Directive). The second case revolved around a similar question against the background of a private prosecution and civil action brought by an individual against Uber under French law.

 

Decisions of the court

The CJEU considered Uber’s service as a whole, not merely its individual components, characterizing Uber’s business model as providing, “by means of a smartphone application, […] the paid service consisting of connecting non-professional drivers using their own vehicle with persons who wish to make urban journeys, without holding any administrative licence or authorisation” (C-434/15, para 2). The CJEU held that Uber did not offer a mere intermediation service which – being inherently linked to smartphones and the internet – could, seen in isolation, constitute an information society service. Rather, Uber provides an integral part of an overall service “whose main component is a transport service”. Thus, Uber’s services qualified as “services in the field of transport”, rendering the E-Commerce Directive, the Services Directive and Art 56 TFEU inapplicable. Relying heavily on these findings, the court reached a similar conclusion in the subsequent case, essentially confirming its prior ruling.

 

Meaning of the decisions and implications

The judgements are a setback for Uber and similar services: being qualified as transportation services, they cannot rely on the safeguards and guarantees provided for by EU law (especially the freedom to provide services). On the contrary, the CJEU confirmed that transport services remain a field largely within the Member States’ domain. This is especially challenging for companies which, like Uber, operate in a field where regulatory requirements differ widely, even within the borders of a single Member State. It should, however, be noted that the court gave its opinion on the service as described above; one might reach a different conclusion should Uber adapt or restructure its business model.

The dispute in the Uber cases can be seen in the larger context of “sharing economy” business models; Airbnb is another example of a company active in this field. European policy makers are aware of this emerging sector and have launched several initiatives to tackle the issue at the EU level. Among these are the Communication from the Commission on a European agenda for the collaborative economy (COM(2016) 356 final) and the European Parliament resolution of 15 June 2017 on a European Agenda for the collaborative economy (2017/2003(INI)).

The European Commission’s FinTech Action Plan and Proposed Regulation on Crowdfunding

By Jonathan Cardenas

On 8 March 2018, the European Commission (“Commission”) introduced its FinTech Action Plan, a policy proposal designed to augment the international competitiveness of the European Single Market in the financial services sector.[1]  Together with the FinTech Action Plan, the Commission introduced a proposal for a regulation on European crowdfunding services providers (“Proposed Regulation on Crowdfunding”).[2]  Both of these proposals form part of a broader package of measures designed to deepen and complete the European Capital Markets Union by 2019.[3]  This article briefly summarizes both the FinTech Action Plan and the Proposed Regulation on Crowdfunding.

 

  1. FinTech Action Plan

With the goal of turning the European Union (“EU”) into a “global hub for FinTech,”[4] the FinTech Action Plan introduces measures that build upon several of the Commission’s prior initiatives, including the regulatory modernization objectives set forth by the Commission’s internal Task Force on Financial Technology,[5] the capital market integration objectives identified in the Commission’s Capital Markets Union Action Plan,[6] and the digital market integration objectives identified in the Commission’s Digital Single Market Strategy.[7]  Responding to calls from the European Parliament[8] and European Council[9] for a proportional, future-oriented regulatory framework that balances competition and innovation while preserving financial stability and investor protection, and drawing upon the conclusions of the March–June 2017 Public Consultation on FinTech,[10] the FinTech Action Plan consists of a “targeted,”[11] three-pronged strategy that sets out 19 steps[12] to enable the EU economy to cautiously embrace the digital transformation of the financial services sector.

  • “Enabling Innovative Business Models to Reach EU Scale”

The first prong of the FinTech Action Plan is focused on measures that will enable EU-based FinTech companies to access and scale across the entire Single Market.

Recognizing the need for regulatory harmonization, the Commission calls for uniformity in financial service provider licensing requirements across the EU to avoid conflicting national rules that hamper the development of a single European market in emerging financial services, such as crowdfunding (Step 1).  With crowdfunding specifically in mind, the Commission has proposed a regulation on European crowdfunding service providers (“ECSPs”), which, as discussed in further detail below, would create a pan-European passport regime for ECSPs that want to operate and scale across EU Member State borders.  In addition, the Commission invites the European Supervisory Authorities (“ESAs”) to outline differences in FinTech licensing requirements across the EU, particularly with regard to how Member State regulatory authorities apply EU proportionality and flexibility principles in the context of national financial services legislation (Step 2).  The Commission encourages the ESAs to present Member State financial regulators with recommendations as to how national rules can converge.  The Commission also encourages the ESAs to present the Commission with recommendations as to whether there is a need for EU-level financial services legislation in this context.  Moreover, the Commission will continue to monitor developments in the cryptocurrency asset and initial coin offering (“ICO”) space in conjunction with the ESAs, the European Central Bank, the Financial Stability Board and other international standard setters in order to determine whether EU-level regulatory measures are needed (Step 3).

Recognizing the importance of common standards for the development of an EU-wide FinTech market, the Commission is focused on developing standards that will enhance interoperability between FinTech market player systems.  The Commission plans to work with the European Committee for Standardization and the International Organization for Standardization to develop coordinated approaches on FinTech standards by Q4 2018, particularly in relation to blockchain technology (Step 4).  In addition, the Commission will support industry-led efforts to develop global standards for application programming interfaces by mid-2019 that are compliant with the EU Payment Services Directive and EU General Data Protection Regulation (Step 5).

In order to facilitate the emergence of FinTech companies across the EU, the Commission encourages the development of innovation hubs (institutional arrangements in which market players engage with regulators to share information on market developments and regulatory requirements)[13] and regulatory sandboxes (controlled spaces in which financial institutions and non-financial firms can test new FinTech concepts with the support of a government authority for a limited period of time),[14] collectively referred to by the Commission as “FinTech facilitators.”[15]  The Commission specifically encourages the ESAs to identify best practices for innovation hubs and regulatory sandboxes by Q4 2018 (Step 6).  The Commission invites the ESAs and Member States to take initiatives to facilitate innovation based on these best practices, and in particular, to promote the establishment of innovation hubs in all Member States (Step 7).  Based upon the work of the ESAs, the Commission will present a report with best practices for regulatory sandboxes by Q1 2019 (Step 8).

  • “Supporting the Uptake of Technological Innovation in the Financial Sector”

The second prong of the FinTech Action Plan is focused on measures that will facilitate the adoption of FinTech across the EU financial services industry.

The Commission begins the second prong by indicating that its policy approach to FinTech is guided by the principle of “technology neutrality,” an EU regulatory principle that requires national regulators to ensure that national regulation “neither imposes nor discriminates in favour of the use of a particular type of technology.”[16]  In this regard, the Commission plans to set up an expert group to assess, by Q2 2019, the extent to which the current EU regulatory framework for financial services is neutral toward artificial intelligence and distributed ledger technology, particularly in relation to jurisdictional questions surrounding blockchain-based applications, the validity and enforceability of smart contracts, and the legal status of ICOs (Step 9).

In addition to ensuring that EU financial regulation is fit for artificial intelligence and blockchain, the Commission also intends to remove obstacles that limit the use of cloud computing services across the EU financial services industry.  In this regard, the Commission invites the ESAs to produce, by Q1 2019, formal guidelines that clarify the expectations of financial supervisory authorities with respect to the outsourcing of data by financial institutions to cloud service providers (Step 10).  The Commission also invites cloud service providers, cloud services users and regulatory authorities to collaboratively develop self-regulatory codes of conduct that will eliminate data localization restrictions, and in turn, enable financial institutions to port their data and applications when switching between cloud services providers (Step 11).  In addition, the Commission will facilitate the development of standard contractual clauses for cloud outsourcing by financial institutions, particularly with regard to audit and reporting requirements (Step 12).

Recognizing that blockchain and distributed ledger technology will “likely lead to a major breakthrough that will transform the way information or assets are exchanged,”[17] the Commission plans to hold additional public consultations in Q2 2018 on the possible implementation of the European Financial Transparency Gateway, a pilot project that uses distributed ledger technology to record information about companies listed on EU securities markets (Step 13).  In addition, the Commission plans to continue to develop a comprehensive, cross-sector strategy toward blockchain and distributed ledger technology that enables the introduction of FinTech and RegTech applications across the EU (Step 14).  In conjunction with both the EU Blockchain Observatory and Forum, and the European Standardization Organizations, the Commission will continue to support interoperability and standardization efforts, and will continue to evaluate blockchain applications in the context of the Commission’s Next Generation Internet Initiative (Step 15).

Recognizing that regulatory uncertainty and fragmentation prevent the European financial services industry from taking up new technology, the Commission will also establish an EU FinTech Lab in Q2 2018 to enable EU and national regulators to engage in regulatory discussions and training sessions with select technology providers in a neutral, non-commercial space (Step 16).

  • “Enhancing Security and Integrity of the Financial Sector”

The third prong of the FinTech Action Plan is focused on financial services industry cybersecurity.

Recognizing the cross-border nature of cybersecurity threats and the need to make the EU financial services industry cyberattack resilient, the Commission will organize a public-private workshop in Q2 2018 to examine regulatory obstacles that limit cyber threat information sharing between financial market participants, and to identify potential solutions to these obstacles (Step 17).  The Commission also invites the ESAs to map, by Q1 2019, existing supervisory practices related to financial services sector cybersecurity, to consider issuing guidelines geared toward supervisory convergence in cybersecurity risk management, and if necessary, to provide the Commission with technical advice on the need for EU regulatory reform (Step 18).  The Commission also invites the ESAs to evaluate, by Q4 2018, the costs and benefits of developing an EU-coordinated cyber resilience testing framework for the entire EU financial sector (Step 19).

 

  2. Proposed Regulation on Crowdfunding

In line with the Commission’s Capital Markets Union objective of broadening access to finance for start-up companies,[18] the Proposed Regulation on Crowdfunding is aimed at facilitating crowdfunding activity across the Single Market.  The proposed regulation plans to enable investment-based and lending-based ECSPs to scale across Member State borders by creating a pan-European crowdfunding passport regime under which qualifying ECSPs can provide crowdfunding services across the EU without the need to obtain individual authorization from each Member State.  The proposed regulation also seeks to minimize investor risk exposure by setting forth organizational and operational requirements, which include, among others, prudent risk management and adequate information disclosure.

[1] COM (2018) 109/2 – FinTech Action plan: For a more competitive and innovative European financial sector. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.

[2] COM (2018) 113 – Proposal for a regulation on European Crowdfunding Service Providers (ECSP) for Business. Available at: https://ec.europa.eu/info/law/better-regulation/initiative/181605/attachment/090166e5b9160b13_en.

[3] COM (2018) 114 final – Completing the Capital Markets Union by 2019 – time to accelerate delivery. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52018DC0114&from=EN.

[4] European Commission Press Release, “FinTech: Commission Takes Action For a More Competitive and Innovative Financial Market,” 8 March 2018. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.

[5] European Commission Banking and Finance Newsletter, Task Force on Financial Technology, 28 March 2017. Available at: http://ec.europa.eu/newsroom/fisma/item-detail.cfm?item_id=56443&utm_source=fisma_newsroom&utm_medium=Website&utm_campaign=fisma&utm_content=Task%20Force%20on%20Financial%20Technology&lang=en.  See also European Commission Announcement, Vice President’s speech at the conference #FINTECHEU “Is EU regulation fit for new financial technologies?,” 23 March 2017.  Available at: https://ec.europa.eu/commission/commissioners/2014-2019/dombrovskis/announcements/vice-presidents-speech-conference-fintecheu-eu-regulation-fit-new-financial-technologies_en.  See also European Commission Blog Post, “European Commission sets up an internal Task Force on Financial Technology,” 14 November 2016.  Available at: https://ec.europa.eu/digital-single-market/en/blog/european-commission-sets-internal-task-force-financial-technology.

[6] COM/2015/0468 final – Action Plan on Building a Capital Markets Union.  Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015DC0468&from=EN.

[7] COM(2015) 192 final – A Digital Single Market Strategy for Europe, 6 May 2015.  Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015DC0192&from=EN.  See also COM (2017) 228 final – Mid-Term review on the implementation of the Digital Single Market Strategy: A Connected Digital Single Market for All, 10 May 2017.  Available at: http://eur-lex.europa.eu/resource.html?uri=cellar:a4215207-362b-11e7-a08e-01aa75ed71a1.0001.02/DOC_1&format=PDF.

[8] European Parliament Committee on Economic and Monetary Affairs, Report on FinTech: the influence of technology on the future of the financial sector, Rapporteur: Cora van Nieuwenhuizen, 2016/2243(INI), 28 April 2017.  Available at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0176+0+DOC+PDF+V0//EN.

[9] EUCO 14/17, CO EUR 17, CONCL 5, European Council Meeting Conclusions, 19 October 2017. Available at:  http://www.consilium.europa.eu/media/21620/19-euco-final-conclusions-en.pdf.

[10] European Commission Directorate-General for Financial Stability, Financial Services and Capital Markets Union, “Summary of contributions to the ‘Public Consultation on FinTech: a more competitive and innovative European financial sector,’” 2017.  Available at: https://ec.europa.eu/info/sites/info/files/2017-fintech-summary-of-responses_en.pdf.

[11] FinTech Action Plan.

[12] European Commission Press Release, “FinTech: Commission Takes Action For a More Competitive and Innovative Financial Market,” 8 March 2018. Available at: https://ec.europa.eu/info/sites/info/files/180308-action-plan-fintech_en.pdf.

[13] EBA/DP/2017/02 – Discussion Paper on the EBA’s approach to financial technology (FinTech), 4 August 2017. Available at: https://www.eba.europa.eu/documents/10180/1919160/EBA+Discussion+Paper+on+Fintech+%28EBA-DP-2017-02%29.pdf.

[14] Id.

[15] FinTech Action Plan, p. 8.

[16] Directive 2002/21 on a common regulatory framework for electronic communications networks and services (Framework Directive) [2002] OJ L108/33.  Available at: https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A32002L0021.

[17] FinTech Action Plan, p. 12.

[18] Capital Markets Union Action Plan.

European Commission Working on Ethical Standards for Artificial Intelligence (AI)

By Paul Opitz

In the prominent areas of self-driving cars and Lethal Autonomous Weapons Systems, the development of autonomous systems has already led to important ethical debates.[1] On 9 March 2018, the European Commission published a press release announcing that it would set up a group of experts to develop guidelines on AI ethics, building on a statement by the European Group on Ethics in Science and New Technologies.

 

Call for a wide and open discussion

The Commission emphasizes the major potential benefits of artificial intelligence, ranging from better healthcare to more sustainable farming and safer transport. However, since there are also many increasingly urgent moral questions related to the impact of AI on the future of work and legislation, the Commission calls for a “wide, open and inclusive discussion” on how to benefit from artificial intelligence while also respecting ethical principles.[2]

 

Tasks of the expert group

The expert group will be set up by May and tasked to:

  • advise the Commission on building a diverse group of stakeholders for a “European AI Alliance”
  • support the implementation of a European initiative on artificial intelligence
  • draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights, considering, inter alia, issues of fairness, safety, transparency, and the future of work.[3]

 

Background

The goal of ensuring ethical standards in AI and robotics was recently set out in the Joint Declaration on the EU’s legislative priorities for 2018-2019. Furthermore, the guidelines on AI ethics will build on the Statement on Artificial Intelligence, Robotics and Autonomous Systems published by the European Group on Ethics in Science and New Technologies (EGE) on 9 March 2018. This statement summarizes relevant developments in the area of technology and identifies a range of essential moral questions.

Moral issues

Safety, security, and the prevention of harm are of utmost importance.[4] In addition, the EGE poses the question of human moral responsibility: how can moral responsibility be apportioned, and could it possibly be “shared” between humans and machines?[5]

On a more general level, questions about governance, regulation, design, and certification occupy lawmakers in order to serve the welfare of individuals and society.[6] Finally, there are questions regarding the transparency of autonomous systems and their effective value to society.

Key considerations

The statement explicitly emphasizes that the term “autonomy” stems from the field of philosophy and refers to the ability of human persons to legislate for themselves, the freedom to choose rules and laws for themselves to follow. Although the terminology is widely applied to machines, its original sense is an important aspect of human dignity and should therefore not be relativised. No smart machine ought to be accorded the moral standing of the human person or inherit human dignity.[7]

In this sense, moral debates must be held in broad terms, so that narrow constructs of ethical problems do not oversimplify the underlying questions.[8] In discussions concerning self-driving cars, the ethical debate should not revolve solely around so-called “Trolley Problem” thought experiments, in which every possible choice is associated with the loss of human lives. More important questions include the past design decisions that led up to the moral dilemmas, the role of values in design, and how to weigh values in the event of a conflict.[9]

For autonomous weapons systems, a large part of the discussion should focus on the nature and meaning of “meaningful human control” over intelligent military systems and how to implement forms of control that are morally desirable.[10]

Shared ethical framework as a goal

As initiatives concerning ethical principles are uneven at the national level, the European Parliament calls for a range of measures to prepare for the regulation of robotics and the development of a guiding ethical framework for the design, production and use of robots.[11]

As a first step towards ethical guidelines, the EGE defines a set of basic principles and democratic prerequisites based on fundamental values of the EU Treaties. These include, inter alia, human dignity, autonomy, responsibility, democracy, accountability, security, data protection, and sustainability.[12]

 

Outlook

It is now up to the expert group to discuss whether the existing legal instruments are effective enough to deal with the problems discussed or which new regulatory instruments might be required on the way towards a common, internationally recognized ethical framework for the use of artificial intelligence and autonomous systems.[13]

[1] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems,  http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 10.

[2] European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[3] European Commission, Press release from 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[4] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems,  http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 8.

[5] Id., at p. 8.

[6] Id., at p. 8.

[7] Id., at p. 9.

[8] Id., at p. 10.

[9] Id., at p. 10-11.

[10] Id., at p. 11.

[11] Id., at p. 14.

[12] Id., at p. 16-19.

[13] Id., at p. 20.

Facebook’s Data Sharing Practices under Unfair Competition Law

By Catalina Goanta

2018 has so far not been easy on the tech world. The first months of the year brought a lot of bad news: two accidents with self-driving cars (Tesla and Uber) and the first human casualty,[1] another Initial Coin Offering (ICO) scam costing investors $660 million,[2] and Donald Trump promising to go after Amazon.[3] But the scandal that made the most waves had to do with Facebook data being used by Cambridge Analytica.[4]

 

Data brokers and social media

In a nutshell, Cambridge Analytica was a UK-based company that claimed to use data to change audience behavior either in political or commercial contexts.[5] Without going too much into detail regarding the identity of the company, its ties, or political affiliations, one of the key points in the Cambridge Analytica whistleblowing conundrum is the fact that it shed light on Facebook data sharing practices which, unsurprisingly, have been around for a while. To create psychometric models which could influence voting behavior, Cambridge Analytica used the data of around 87 million users, obtained through Facebook’s Graph Application Programming Interface (API), a developer interface providing industrial-level access to personal information.[6]

The Facebook Graph API

The first version of the API (v1.0), launched in 2010 and in use until 2015, could be used to gather not only public information about a given pool of users but also information about their friends, in addition to granting access to private messages sent on the platform (see Table 1 below). The amount of information belonging to users’ friends that Facebook allowed third parties to tap into is astonishing. The extended profile properties permission facilitated the extraction of information about: activities, birthdays, check-ins, education history, events, games activity, groups, interests, likes, location, notes, online presence, photo and video tags, photos, questions, relationships and relationship details, religion and politics, status, subscriptions, website and work history. Extended permissions changed in 2014 with the second version of the Graph API (v2.0), which has undergone many further changes since (see Table 2). However, one interesting thing that stands out when comparing versions 1.0 and 2.0 is that less information is gathered from targeted users than from their friends, even though v2.0 withdrew the extended profile properties (but not the extended permissions relating to reading private messages).

Table 1 – Facebook application permissions and availability to API v1 (x) and v2 (y)[7]
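To make the permission model concrete, the following is a minimal sketch, in Python with the requests library, of the kind of call a v1.0-era app could make once a user had granted it the extended friends_* permissions. The token, the field list, and the response handling are illustrative assumptions rather than a reconstruction of any actual app, and the v1.0 endpoint itself was retired in 2015:

```python
import requests

GRAPH = "https://graph.facebook.com/v1.0"  # v1.0-era endpoint; retired in 2015
ACCESS_TOKEN = "EAAB..."  # hypothetical user token obtained via Facebook Login

# Under v1.0, a token carrying friends_* extended permissions let an app read
# profile fields of the user's friends, not just of the user who installed it.
params = {
    "fields": "id,name,birthday,likes,location",  # a subset of the fields listed above
    "access_token": ACCESS_TOKEN,
}
resp = requests.get(f"{GRAPH}/me/friends", params=params, timeout=10)
resp.raise_for_status()

# One consent dialog, shown to one user, yields data on the whole friend list.
for friend in resp.json().get("data", []):
    print(friend.get("id"), friend.get("name"))
```

The point of the sketch is the asymmetry discussed above: the consent dialog was shown to a single user, yet the response enumerated profile data belonging to that user’s entire friend list.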

Cambridge Analytica obtained Facebook data with help from another company, Global Science Research, set up by the Cambridge University-affiliated academics Aleksandr Kogan and Joseph Chancellor. Kogan had previously collaborated with Facebook through his work at the Cambridge Prosociality & Well-Being Lab. For his research, Kogan collected data from Facebook as a developer, using the Lab’s account registered on Facebook via his own personal account, and he was also in contact with Facebook employees who sent him anonymized aggregate datasets directly.[8]

Table 2 – The History of the Facebook Graph API

The Facebook employees who sent him the data were working for Facebook’s Protect and Care Team, but were themselves doing research on user experience as PhD students.[9] Kogan states that the data he gathered with the Global Science Research quiz is separate from the initial data he used in his research, and that it was kept on different servers.[10] Kogan’s testimony before the UK Parliament’s Digital, Culture, Media and Sport Committee does clarify which streams of data were used by which actors, but none of the Members of Parliament attending the hearing asked any questions about the very process through which Kogan was able to tap into Facebook user data. He acknowledged that, to harvest information for Strategic Communication Laboratories – Cambridge Analytica’s affiliated company – he used a market research recruitment strategy: for around $34 per person, he aimed to recruit up to 20,000 individuals to take an online survey.[11] The survey was accessible through an access token, which required participants to log in using their Facebook credentials.

Access Tokens

On the user end, Facebook Login is a single sign-on service, built on access tokens, which allows users to log in across platforms. The benefits of using access tokens are undeniable: operating multiple accounts through one login system allows for efficient account management. The dangers are equally clear. On the one hand, a single login point (one username and one password) for multiple accounts can be a security vulnerability. On the other hand, even if Facebook claims that the user is in control of the data shared with third parties, some apps using Facebook Login – for instance Wi-Fi access in cafés, or online voting for TV shows – do not allow users to change the information requested by the app, creating a ‘take it or leave it’ situation for users.

Figure 1 – Facebook Login interface

On the developer end, access tokens allow apps operating on Facebook to access the Graph API. Access tokens perform two functions (see the sketch after this list):

  • They allow developer apps to access user information without asking for the user’s password; and
  • They allow Facebook to identify developer apps, users engaging with this app, and the type of data permitted by the user to be accessed by the app.[12]
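As a rough illustration of these two functions, here is a minimal sketch in Python using the requests library. The app credentials and user token are hypothetical; the /me call and the /debug_token token-introspection endpoint reflect the Graph API of that era, though the exact request shapes here are illustrative assumptions rather than authoritative documentation:

```python
import requests

GRAPH = "https://graph.facebook.com/v2.0"
APP_ID = "1234567890"     # hypothetical app credentials
APP_SECRET = "app-secret"
USER_TOKEN = "EAAB..."    # token handed to the app after a Facebook Login

# Function 1: the app reads user information without ever seeing the user's
# Facebook password; the token alone authorizes the call.
me = requests.get(
    f"{GRAPH}/me",
    params={"fields": "id,name,email", "access_token": USER_TOKEN},
    timeout=10,
).json()
print(me)

# Function 2: the token identifies the app, the user, and the permissions the
# user granted; Facebook exposes this via its token-introspection endpoint,
# queried here with an app access token of the form "app_id|app_secret".
debug = requests.get(
    f"{GRAPH}/debug_token",
    params={
        "input_token": USER_TOKEN,
        "access_token": f"{APP_ID}|{APP_SECRET}",
    },
    timeout=10,
).json()

info = debug.get("data", {})
print(info.get("app_id"), info.get("user_id"), info.get("scopes"))
```

In this flow the user’s password never reaches the app: the token is simultaneously the app’s key to the permitted data and Facebook’s means of identifying who granted what to whom.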

Understanding how Facebook Login works is essential in clarifying what information users are exposed to right before agreeing to hand their Facebook data over to other parties.

 

Data sharing and consent

As Figure 1 shows, and as can be seen when browsing through Facebook’s Terms of Service, consent seems to be at the core of Facebook’s interaction with its users. That being said, it is impossible to determine, on the basis of these terms, what Facebook really does with the information it collects. For instance, the Statement of Rights and Responsibilities dated 30 January 2015 contains an entire section on sharing content and information:

You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition:

  1. For content that is covered by intellectual property rights, like photos and videos (IP content), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.
  2. When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others).
  3. When you use an application, the application may ask for your permission to access your content and information as well as content and information that others have shared with you.  We require applications to respect your privacy, and your agreement with that application will control how the application can use, store, and transfer that content and information.  (To learn more about Platform, including how you can control what information other people may share with applications, read our Data Policy and Platform Page.)
  4. When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you (i.e., your name and profile picture).
  5. We always appreciate your feedback or other suggestions about Facebook, but you understand that we may use your feedback or suggestions without any obligation to compensate you for them (just as you have no obligation to offer them).

This section appears to establish Facebook as a user-centric platform that wants to give as much ownership as possible to its customers. However, the section says nothing about the fact that app developers used to be able to tap not only into the information generated by users, but also, to an even more extensive degree, into that of their friends. There are many other clauses in the Facebook policies that could be relevant for this discussion, but let us dwell on this section.

Taking a step back, from a legal perspective, when a user opens an account with Facebook, a service contract is concluded. For users residing outside the U.S. or Canada, clause 18.1 of the 2015 Statement of Rights and Responsibilities states that the contract is an agreement between the user and Facebook Ireland Ltd.; for U.S. and Canadian residents, the agreement is concluded with Facebook Inc.[13] Moreover, according to clause 15, the law applicable to the agreement is the law of the state of California.[14] This clause does not pose any issues for agreements with U.S. or Canadian users, but it raises serious problems for users based in the European Union. In consumer contracts, European law curtails party autonomy in the choice of applicable law, given that some consumer law provisions in European legislation are mandatory and cannot be derogated from.[15] To the extent that such clauses impose the much weaker protections of U.S. law on European consumers, they would not be valid under EU law. On this basis, in 2017 the Italian Competition and Market Authority imposed a €3 million fine on WhatsApp on the ground that such contractual clauses are unfair.[16]

Apart from problems with contractual fairness, additional concerns arise with respect to unfair competition. Situated between competition law and private law, unfair competition is a field of law that takes into account both bilateral transactions and the broader effect they can have on a market. The rationale behind unfair competition law is that deceitful or unfair trading practices which give businesses advantages they might otherwise not enjoy should be limited by law.[17] As far as terminology goes, in Europe, Directive 2005/29/EC, the main instrument regulating unfair competition, uses the term ‘unfair commercial practices’, whereas in the United States the Federal Trade Commission refers to ‘unfair or deceptive commercial practices’.[18] The basic differences between the approaches taken in the two federal/supranational legal systems can be seen in Figure 2 below:

Figure 2 – U.S. & EU unfair competition law (van Eijk, Hoofnagle & Kannekens, 2017)[19]

 

Facebook’s potentially unfair/deceptive commercial practices

In what follows, I will briefly address the three comparative criteria identified by van Eijk et al.[20]

The fact that a business must do something (a representation, omission, practice, etc.) which deceives or is likely to deceive or mislead the consumer is a criterion shared by both legal systems. There are two main problems with Facebook’s 2015 terms of service in this respect. First, Facebook does not specify how exactly the company shares user data and with whom. Second, this version of the terms makes no reference whatsoever to the sharing of friends’ data, as could be done through the extended permissions. These omissions, as well as the very limited amount of information offered to consumers – through which they are supposed to understand Facebook’s links to other companies as far as their own data is concerned – are misleading.

The second criterion, that of the reasonable/average consumer, is not so straightforward: the information literacy of Facebook users fluctuates, as it depends on demographics. With the emergence of new social media platforms such as Snapchat and Musical.ly, Facebook might not be the socializing service of choice for younger generations. However, official statistics are based on data that includes a lot of noise. It seems that fake accounts make up around 3% of the total number of Facebook accounts, and duplicate accounts around 10%.[21] This poses serious questions regarding the European standard of the average consumer, because there is currently no way to estimate how this 13% proportion would change the features of the entire pool of users. There are many reasons why fake accounts exist, but let me mention two of them. First, the minimum age for joining Facebook is 13; however, enforcing this policy is not easy, and many minors join the social media platform by simply lying about their age. Second, fake online profiles allow for the creation of dissociated lives: individuals may display very different behavior under the veil of anonymity, online bullying being one example.

Figure 3 – Distribution of Facebook users worldwide as of April 2018, by age and gender (Statista, 2018)

These aspects can make it difficult for a judge to determine the profile of the reasonable/average consumer as far as social media is concerned: would the benchmark include fake and duplicate accounts? Would the reasonable/average consumer standard have to be based on the real or the legal audience? What level of information literacy would this benchmark use? These aspects remain unclear.

The third criterion is even more complex, as it deals with the likelihood of consumers taking a different decision had they had more symmetrical information. Two main points can be made here. On the one hand, applying this criterion leads to a scenario in which we would have to assume that Facebook would better disclose information to consumers. This would normally take the form of specific clauses in the general terms and conditions. For consumers to be aware of this information, they would have to read these terms scrupulously and make rational decisions, both of which are known not to be the case: consumers simply do not have time for, and do not care about, general terms and conditions, and they make impulsive decisions. If that is the case for the majority of the online consumer population, it is also the case for the reasonable/average consumer. On the other hand, perhaps consumers might feel more affected if they knew beforehand the particularities of data sharing practices as they occurred in the Cambridge Analytica situation: that Facebook was not properly informing them that it allowed companies to broker their data to manipulate political campaigns. This, however, is not something Facebook would inform its users about directly, as Cambridge Analytica is not the only company using Facebook data, and such notifications (even if desirable from a customer communication perspective) would not be feasible, or would lead to information overload and consumer fatigue. If this too translates into a reality where consumers do not really care about such information, the third leg of the test seems not to be fulfilled. In any case, this too is a criterion that will very likely raise many more questions than it aims to address.

In sum, two out of the three criteria would be tough to fulfill. Assuming, however, that they were indeed fulfilled, and even though there are considerable differences in the enforcement of the prohibition on unfair/deceptive commercial practices, the FTC, as well as European national authorities, can take a case against Facebook to court to seek injunctions, in addition to other administrative or civil remedies. A full analysis of European and Dutch law in this respect will soon be available in a publication authored together with Stephan Mulders.

 

Harmonization and its discontents

The Italian Competition and Market Authority (the same entity that fined WhatsApp) launched an investigation into Facebook on April 6, on the ground that its data sharing practices are misleading and aggressive.[22] The Authority will have to go through the same test as applied above and, in addition, will very likely also consult the black-listed practices annexed to the Directive. Should this public institution of a Member State find these practices unfair, and should the relevant courts agree with this assessment, a door will open for a European Union-wide discussion on this matter. Directive 2005/29/EC is a so-called maximum harmonization instrument, meaning that the European legislator intends it to level the playing field on unfair competition across all Member States. If Italy’s example is followed, and more consumer authorities restrict Facebook’s practices, this could mark the most effective performance of a harmonizing instrument in consumer protection. If the opposite happens, and Italy ends up being the only Member State outlawing such practices, this could be a worrying sign of how little impact maximum harmonization has in practice.

 

New issues, same laws

Nonetheless, in spite of the difficulties of enforcing unfair competition law, this discussion prompts one main take-away: data-related practices do fall under the protections offered by regulation of unfair/deceptive commercial practices.[23] This type of regulation already exists in the U.S. just as much as in the EU, and it is able to handle new legal issues arising out of the use of disruptive technologies. The only areas where current legal practice needs an upgrade concern interpretation and proof: given the complexity of social media platforms and the many ways in which they are used, judges and academics should perhaps also make use of data science to better understand the behavior of these audiences, insofar as that behavior is central to legal assessments.

[1] Will Knight, ‘A Self-driving Uber Has Killed a Pedestrian in Arizona’, MIT Technology Review, The Download, March 19, 2018; Alan Ohnsman, ‘Fatal Tesla Crash Exposes Gap In Automaker’s Use Of Car Data’, Forbes, April 16, 2018.

[2] John Biggs, ‘Exit Scammers Run Off with $660 Million in ICO Earnings’, TechCrunch, April 13, 2018.

[3] Joe Harpaz, ‘What Trump’s Attack On Amazon Really Means For Internet Retailers’, Forbes, April 16, 2018.

[4] Carole Cadwalladr and Emma Graham-Harrison, ‘Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach’, The Guardian, March 17, 2018.

[5] The Cambridge Analytica website reads: ‘Data drives all we do. Cambridge Analytica uses data to change audience behavior. Visit our political or commercial divisions to see how we can help you.’, last visited on April 27, 2018. It is noteworthy that the company started insolvency procedures on May 2, 2018, in an attempt to rebrand itself as Emerdata; see Shona Ghosh and Jake Kanter, ‘The Cambridge Analytica power players set up a mysterious new data firm — and they could use it for a “Blackwater-style” rebrand’, Business Insider, May 3, 2018.

[6] For a more in-depth description of the Graph API, as well as its Instagram equivalent, see Jonathan Albright, ‘The Graph API: Key Points in the Facebook and Cambridge Analytica Debacle’, Medium, March 21, 2018.

[7] Iraklis Symeonidis, Pagona Tsormpatzoudi & Bart Preneel, ‘Collateral Damage of Facebook Apps: An Enhanced Privacy Scoring Model’, IACR Cryptology ePrint Archive, 2015, p. 5.

[8] UK Parliament Digital, Culture, Media and Sport Committee, ‘Dr Aleksandr Kogan questioned by Committee’, April 24, 2018; see also the research output based on the 57 billion friendships dataset: Maurice H. Yearwood, Amy Cuddy, Nishtha Lamba, Wu Youyou, Ilmo van der Lowe, Paul K. Piff, Charles Gronin, Pete Fleming, Emiliana Simon-Thomas, Dacher Keltner, Aleksandr Spectre, ‘On Wealth and the Diversity of Friendships: High Social Class People around the World Have Fewer International Friends’, 87 Personality and Individual Differences 224-229 (2015).

[9] UK Parliament Digital, Culture, Media and Sport Committee hearing, supra note 8.

[10] Ibid.

[11] This number mentioned by Kogan in his witness testimony conflicts with media reports which indicate a much higher participation rate in the study, see Julia Carrie Wong and Paul Lewis, ‘Facebook Gave Data about 57bn Friendships to Academic’, The Guardian, March 22, 2018.

[12] For an overview of Facebook Login, see ‘Facebook Login for Apps – Overview’, last visited on April 27, 2018.

[13] Clause 18.1 (2015) reads: ‘If you are a resident of or have your principal place of business in the US or Canada, this Statement is an agreement between you and Facebook, Inc. Otherwise, this Statement is an agreement between you and Facebook Ireland Limited.’

[14] Clause 15.1 (2015) reads: ‘The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.’

[15] Giesela Ruhl, ‘Consumer Protection in Choice of Law’, 44(3) Cornell International Law Journal 569-601 (2011), p. 590.

[16] Italian Competition and Market Authority, ‘WhatsApp fined for 3 million euro for having forced its users to share their personal data with Facebook’, Press Release, May 12, 2017.

[17] Rogier de Vrey, Towards a European Unfair Competition Law: A Clash Between Legal Families: A Comparative Study of English, German and Dutch Law in Light of Existing European and International Legal Instruments (Brill, 2006), p. 3.

[18] Nico van Eijk, Chris Jay Hoofnagle & Emilie Kannekens, ‘Unfair Commercial Practices: A Complementary Approach to Privacy Protection’, 3 European Data Protection Law Review 1-12 (2017), p. 2.

[19] Ibid., p. 11.

[20] The tests in Figure 2 have been simplified in order to compare their essential features; however, upon a closer look, these tests include other details as well, such as the requirement that a practice be contrary to ‘professional diligence’ (Art. 4(1) UCPD).

[21] Patrick Kulp, ‘Facebook Quietly Admits to as Many as 270 Million Fake or Clone Accounts’, Mashable, November 3, 2017.

[22] Italian Competition and Market Authority, ‘Misleading information for collection and use of data, investigation launched against Facebook’, Press Release, April 6, 2018.

[23] This discussion is of course much broader, and it starts from the question of whether a data-based service falls within the material scope of, for instance, Directive 2005/29/EC. According to Art. 2(c) read in conjunction with Art. 3(1) of this Directive, it does. See also Case C‑357/16, UAB ‘Gelvora’ v Valstybinė vartotojų teisių apsaugos tarnyba, ECLI:EU:C:2017:573, para. 32.

 

 

The Move Towards Explainable Artificial Intelligence and its Potential Impact on Judicial Reasoning

By Irene Ng (Huang Ying)

In 2017, the Defense Advanced Research Projects Agency (“DARPA”) launched a five-year research program on the topic of explainable artificial intelligence.[1] Explainable artificial intelligence, also known as XAI, refers to an artificial intelligence system whose decisions or outputs can be explained to and understood by humans.

The growth of XAI within artificial intelligence research is noteworthy considering the current state of AI research, in which decisions made by machines are opaque in their reasoning and, in several cases, not understood even by their human developers. This is also known as the “black box” of artificial intelligence: input is fed into the “black box” and an output based on machine learning techniques is produced, but there is no explanation of why the output is what it is.[2] The problem is well documented – there have been several cases in which machine learning algorithms made certain decisions while their developers were puzzled as to how those decisions were reached.[3]
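
To make the “black box” concrete, here is a minimal sketch in Python (the dataset and model are invented for illustration and are not drawn from any study cited here): a small neural network classifies inputs confidently, yet the only artifacts it exposes are numeric weight matrices, which offer no human-readable reasons for any individual prediction.

```python
# A minimal sketch of the "black box" problem (all data synthetic).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic dataset: 1,000 cases described by 10 anonymous features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# A small multi-layer perceptron: often accurate, but opaque.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X, y)

# The model answers confidently...
print(clf.predict(X[:1]), clf.predict_proba(X[:1]))

# ...but its "reasoning" is only a stack of learned weight matrices,
# which cannot be read as reasons a human could review or contest.
for i, w in enumerate(clf.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
```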

The parallel interest in the use of artificial intelligence in judicial decision-making makes it worth considering how XAI will influence the development of an AI judge or arbitrator. Research into the use of AI for judicial decision-making is not novel. It was reported in 2016 that a team of computer scientists from UCL managed to develop an algorithm that “has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy”.[4] Much, however, remains to be said about the legal reasoning behind such an AI verdict.
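
The study was reportedly based on the text of published judgments. As a loose, hypothetical sketch of that general approach (the case texts, labels and model choice below are invented for illustration, not taken from the UCL study), a text classifier can be trained to map judgment language to outcomes:

```python
# A loose sketch of text-based verdict prediction (toy, invented data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training corpus: judgment texts paired with outcomes.
texts = [
    "applicant detained without judicial review for months",
    "complaint examined promptly, adequate domestic remedy provided",
    "prolonged solitary confinement and no effective investigation",
    "interference with correspondence prescribed by law and proportionate",
]
labels = ["violation", "no violation", "violation", "no violation"]

# Bag-of-words (tf-idf) features feeding a linear support vector machine.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# The model emits a verdict-like label, but no legal reasoning.
print(model.predict(["applicant held incommunicado with no court review"]))
```

Whatever label such a model outputs, nothing in it resembles a chain of legal argument, which is precisely the gap the following paragraphs address.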

The lack of explainable legal reasoning is, unsurprisingly, a thorny issue for those pressing for automated decision-making by machines. This sentiment has been echoed by several authors writing in the field of AI judges and AI arbitrators.[5] The opacity of an AI verdict’s conclusion is alarming for lawyers, especially where legal systems are predicated on the legal reasoning of judges, arbitrators or adjudicators. In certain fields of law, such as criminal law and sentencing, the lack of transparency in how an AI judge reaches a sentencing verdict can pose further moral and ethical dilemmas.

Furthermore, as AI judges are trained on datasets, who ensures that those datasets are not inherently biased, so that the resulting AI verdicts are not biased against specific classes of people? The output generated by a machine learning algorithm is highly dependent on the data used to train the system, as the toy audit below illustrates. This has led to reports highlighting “caution against misleading performance measures for AI-assisted legal techniques”.[6]
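
As a toy illustration of the point (all records and group labels below are invented), even a simple audit of outcome rates across a protected attribute can reveal skew in training data before any model is built:

```python
# A toy audit of training data for group-level skew (all data invented).
import pandas as pd

# Hypothetical historical case records that might train an "AI judge".
cases = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],  # 1 = adverse verdict
})

# A model fit to skewed data will tend to reproduce the skew.
print(cases.groupby("group")["outcome"].mean())
# Group A: 0.75 adverse rate vs. group B: 0.25 -- a red flag to
# investigate before this data trains any decision system.
```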

In light of the opacity of the legal reasoning provided by AI judges or AI arbitrators, how would XAI change the field of AI judicial decision-making? Applied to judicial decision-making, an XAI judge or arbitrator would produce a verdict together with a reasoning for that decision. Whether such reasoning is legal or factual, or even logical, is not important at this fundamental level – what is crucial is that a reasoning has been provided, and that it can be understood and subsequently challenged by lawyers who disagree with it. Such an XAI judge would at least function better in legal systems in which appeals are based on challenges to the reasoning of the judge or arbitrator.
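
One very simple way such challengeable reasoning could be surfaced – a sketch only, with hypothetical features and data, and bearing in mind that XAI research goes well beyond interpretable models like decision trees – is to have the system print the rule path behind each verdict:

```python
# A minimal sketch of "explainable" output: an interpretable model
# whose decision rules can be printed and contested (data hypothetical).
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical engineered case features.
feature_names = ["precedent_similarity", "statutory_match", "prior_offenses"]
X = [
    [0.9, 1, 0],
    [0.2, 0, 3],
    [0.8, 1, 1],
    [0.1, 0, 2],
]
y = ["for_applicant", "against_applicant", "for_applicant", "against_applicant"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike the opaque network sketched earlier, the tree can emit a rule
# trace that a lawyer could read, agree with, or challenge on appeal.
print(export_text(tree, feature_names=feature_names))
```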

This should also be seen in light of the EU’s upcoming General Data Protection Regulation (“GDPR”), under which a “data subject shall have the right not to be subject to a decision based solely on automated processing”,[7] while it remains uncertain at this point whether a data subject has the right to ask for an explanation of the algorithm that made the decision.[8] For developers who are unable to explain the reasoning behind their algorithms’ decisions, this may prove to be a potential landmine, considering the tough penalties for flouting the GDPR.[9] It may thus amount to an implicit call to move towards XAI, especially for developers building AI judicial decision-making software that processes the personal data of EU citizens.

As the legal industry still grapples with the introduction of AI into its daily operations, such as the use of the ROSS Intelligence system,[10] the development of other fields of AI, such as XAI, should not go unnoticed. While AI judges and AI arbitrators are not commonplace at present, XAI may be a better fit for the legal industry than traditional AI or machine learning methods: AI judges or arbitrators built with XAI methods might prove more ethically and morally acceptable.

Yet legal reasoning is difficult to replicate in an XAI – the same set of facts can lead to several different views. Would XAI replicate these multi-faceted views, and explain them? Before we even start to ponder such matters, perhaps we should first get the machine to give an explainable output that we can at least agree or disagree with.

[1] David Gunning, ‘Explainable Artificial Intelligence (XAI)’, DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence.

[2] ‘Understanding Black Box Artificial Intelligence’, Sentient, https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/.

[3] Will Knight, ‘The Dark Secret at the Heart of AI’, MIT Technology Review, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

[4] Chris Johnston and agencies, ‘Artificial intelligence “judge” developed by UCL computer scientists’, The Guardian, October 24, 2016, online: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.

[5] See José Maria de la Jara and others, ‘Machine Arbitrator: Are We Ready?’, Kluwer Arbitration Blog, May 4, 2017, online: http://arbitrationblog.kluwerarbitration.com/2017/05/04/machine-arbitrator-are-we-ready/.

[6] AI Now Institute, ‘AI Now 2017 Report’, online: https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf.

[7] Article 22, General Data Protection Regulation.

[8] ‘GDPR and Its Impacts on Machine Learning Applications’, Medium, https://medium.com/trustableai/gdpr-and-its-impacts-on-machine-learning-applications-d5b5b0c3a815.

[9] Penalties under the GDPR can reach €10 million or 2% of total worldwide annual turnover (whichever is higher) on the lower scale, and €20 million or 4% of total worldwide annual turnover (whichever is higher) on the upper scale. See Article 83, General Data Protection Regulation.

[10] ROSS Intelligence, online: https://rossintelligence.com/.