European Commission Working on Ethical Standards for Artificial Intelligence (AI)

By Paul Opitz

In the prominent areas of self-driving cars and Lethal Autonomous Weapons Systems, the development of autonomous systems has already led to important ethical debates.[1] On 9 March 2018, the European Commission published a press release announcing that it would set up a group of experts to develop guidelines on AI ethics, building on a statement by the European Group on Ethics in Science and New Technologies.


Call for a wide and open discussion

The Commission emphasizes the potential major benefits of artificial intelligence, ranging from better healthcare to more sustainable farming and safer transport. However, since AI also raises increasingly urgent moral questions, such as its impact on the future of work and on legislation, the Commission calls for a “wide, open and inclusive discussion” on how to benefit from artificial intelligence while also respecting ethical principles.[2]


Tasks of the expert group

The expert group will be set up by May and tasked to:

  • advise the Commission on building a diverse group of stakeholders for a “European AI Alliance”
  • support the implementation of a European initiative on artificial intelligence
  • draft guidelines for the ethical development and use of artificial intelligence based on the EU's fundamental rights, considering, inter alia, issues of fairness, safety, transparency, and the future of work.[3]


Background

The goal of ensuring ethical standards in AI and robotics was recently set out in the Joint Declaration on the EU's legislative priorities for 2018-2019. Furthermore, the guidelines on AI ethics will build on the Statement on Artificial Intelligence, Robotics and Autonomous Systems published by the European Group on Ethics in Science and New Technologies (EGE) on 9 March 2018. The statement summarizes relevant technological developments and identifies a range of essential moral questions.

Moral issues

Safety, security, and the prevention of harm are of utmost importance.[4] In addition, the EGE poses the question of human moral responsibility: how can moral responsibility be apportioned, and could it possibly be “shared” between humans and machines?[5]

On a more general level, lawmakers face questions about governance, regulation, design, and certification that must be answered in a way that serves the welfare of individuals and society.[6] Finally, there are questions regarding the transparency of autonomous systems and their value to society.

Key considerations

The statement explicitly emphasizes that the term “autonomy” stems from the field of philosophy and refers to the ability of human persons to legislate for themselves, that is, the freedom to choose the rules and laws they follow. Although the terminology is now widely applied to machines, its original sense is an important aspect of human dignity and should therefore not be relativised. No smart machine ought to be accorded the moral standing of the human person or inherit human dignity.[7]

In this sense, moral debates must be framed broadly, so that narrow constructions of ethical problems do not oversimplify the underlying questions.[8] In discussions of self-driving cars, for example, the ethical debate should not revolve solely around so-called “Trolley Problem” thought experiments, in which every available choice involves the loss of human lives. More important questions concern the past design decisions that led up to such moral dilemmas, the role of values in design, and how to weigh values in case of conflict.[9]

For autonomous weapons systems, a large part of the discussion should focus on the nature and meaning of “meaningful human control” over intelligent military systems and how to implement forms of control that are morally desirable.[10]

Shared ethical framework as a goal

As initiatives concerning ethical principles remain uneven at the national level, the European Parliament has called for a range of measures to prepare for the regulation of robotics and for the development of a guiding ethical framework for the design, production, and use of robots.[11]

As a first step towards ethical guidelines, the EGE defines a set of basic principles and democratic prerequisites based on fundamental values of the EU Treaties. These include, inter alia, human dignity, autonomy, responsibility, democracy, accountability, security, data protection, and sustainability.[12]


Outlook

It is now up to the expert group to discuss whether the existing legal instruments are effective enough to deal with the problems outlined above, or whether new regulatory instruments are required on the way towards a common, internationally recognized ethical framework for the use of artificial intelligence and autonomous systems.[13]

[1] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 10.

[2] European Commission, Press release of 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[3] European Commission, Press release of 9 March 2018, http://europa.eu/rapid/press-release_IP-18-1381_en.htm.

[4] EGE, Statement on Artificial Intelligence, Robotics and Autonomous Systems, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, p. 8.

[5] Id., at p. 8.

[6] Id., at p. 8.

[7] Id., at p. 9.

[8] Id., at p. 10.

[9] Id., at pp. 10-11.

[10] Id., at p. 11.

[11] Id., at p. 14.

[12] Id., at pp. 16-19.

[13] Id., at p. 20.
