
Report | Doc. 14288 | 10 April 2017

Technological convergence, artificial intelligence and human rights

Committee on Culture, Science, Education and Media

Rapporteur : Mr Jean-Yves LE DÉAUT, France, SOC

Origin - Reference to committee: Doc. 13833, Reference 4145 of 28 September 2015. 2017 - Second part-session

Summary

The pervasiveness of new technologies and their applications is blurring the boundaries between human and machine, between online and offline activities, between the physical and the virtual world, between the natural and the artificial, and between reality and virtuality.

This report explores social, ethical and legal consequences of technological convergence, artificial intelligence and robotics from the perspective of human rights. Safeguarding human dignity in the 21st century will require developing new forms of governance, new forms of open, informed and adversarial public debate, new legislative mechanisms and above all the establishment of international co-operation. The report makes a number of recommendations for action within the Council of Europe and calls for close co-operation with the institutions of the European Union and UNESCO to ensure a consistent legal framework and effective supervisory mechanisms at international level.

A. Draft recommendation [Note 1: Draft recommendation adopted unanimously by the committee on 22 March 2017.]

1. The convergence between nanotechnology, biotechnology, information technology and cognitive sciences and the speed at which the applications of new technologies are put on the market have consequences not only for human rights and the way they can be exercised, but also for the fundamental concept of what characterises a human being.
2. The pervasiveness of new technologies and their applications is blurring the boundaries between human and machine, between online and offline activities, between the physical and the virtual world, between the natural and the artificial, and between reality and virtuality. Humankind is augmenting its abilities with the help of machines, robots and software. Today it is possible to create functional brain-computer interfaces. A shift has been made from the “treated” human being to the “repaired” human being, and what is now looming on the horizon is the “augmented” human being.
3. The Parliamentary Assembly notes with concern that it is increasingly difficult for lawmakers to adapt to the speed at which science and technologies evolve and to draw up the required regulations and standards; it strongly believes that safeguarding human dignity in the 21st century implies developing new forms of governance, new forms of open, informed and adversarial public debate, new legislative mechanisms and above all the establishment of international co-operation making it possible to address these new challenges most effectively.
4. In this regard, the Assembly welcomes the initiative of the Council of Europe Committee on Bioethics to organise, in October 2017 on the occasion of the 20th anniversary of the Council of Europe Convention on Human Rights and Biomedicine (ETS No. 164, “Oviedo Convention”), an international conference to discuss the prospect of the emergence of these new technologies and their consequences for human rights, with a view to developing a strategic action plan during the next biennium 2018-19.
5. In addition, the Assembly considers that it is necessary to implement genuine world internet governance that is not dependent on private interest groups or just a handful of States.
6. The Assembly calls on the Committee of Ministers to:
6.1. finalise the modernisation of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108), so that new provisions can rapidly put in place more appropriate protection;
6.2. define the framework for the use of care robots and assistive technologies in the Council of Europe Disability Strategy 2017-2023 in the framework of its objective to achieve equality, dignity and equal opportunities for people with disabilities.
7. In the light of the above, the Assembly urges the Committee of Ministers to instruct the relevant bodies of the Council of Europe to consider how intelligent artefacts and/or connected devices and, more generally, technological convergence and its social and ethical consequences related to the field of genetics and genomics, neurosciences and big data, challenge the different dimensions of human rights.
8. Moreover, the Assembly proposes that guidelines be drawn up on the following issues:
8.1. strengthening transparency, regulation by public authorities and operators’ accountability concerning:
8.1.1. automatic processing operations aimed at collecting, handling and using personal data;
8.1.2. informing the public about the value of the data they generate, consent to the use of those data and the length of time they are to be stored;
8.1.3. informing everyone about the processing of personal data which have originated from them and about the mathematical and statistical methods making profiling possible;
8.1.4. the design and use of persuasion software and of information and communication technology (ICT) or artificial intelligence algorithms, that must fully respect the dignity and rights of all users and especially the most vulnerable, such as elderly people and people with disabilities;
8.2. a common framework of standards to be complied with when a court uses artificial intelligence;
8.3. the need for any machine, any robot or any artificial intelligence artefact to remain under human control; insofar as the machine in question is intelligent solely through its software, any power it is given must be able to be withdrawn from it;
8.4. the recognition of new rights in terms of respect for private and family life, the ability to refuse to be subjected to profiling, to have one’s location tracked, to be manipulated or influenced by a “coach” and the right to have the opportunity, in the context of care and assistance provided to elderly people and people with disabilities, to choose to have human contact rather than a robot.
9. The Assembly calls for close co-operation with the institutions of the European Union and the United Nations Educational, Scientific and Cultural Organisation (UNESCO) to ensure a consistent legal framework and effective supervisory mechanisms at international level.

B. Explanatory memorandum by Mr Jean-Yves Le Déaut, rapporteur


1. Introduction

1. Artificial intelligence makes it possible to simulate intelligence or even create intelligent machines thanks to the exponential increase in computing power. As a result of convergence between nanotechnology, biotechnology, information technology and cognitive sciences (“NBIC”), we can see increasing interaction between the life sciences, computer science and engineering.
2. Today, some robots can imitate human behaviour and are in competition with humans in the labour market and in daily life insofar as they are capable of automatic perceptual learning through experience, are becoming autonomous, have enormous memory capacities and have a certain type of artificial consciousness programmed into the machine, giving them, in a way, the ability to reason. New machines equipped with artificial intelligence will be used in expert systems, in military command systems, as aids to diagnosis and decision making, in risk evaluation, in financial management, and for speech and visual pattern recognition. Some machines will be able to express artificial emotions and will be capable of solving complex problems.
3. These developments raise new questions about their implications for human rights and human dignity, and sometimes the boundaries between a human being and an intelligent machine.
4. In my report, I seek to explore the social, ethical and legal consequences of technological convergence, artificial intelligence and robotics from the perspective of human rights, considering also future prospects in terms of new, desirable, forms of governance, the organisation of public debate, developments in regulations and legislation, and international co-operation.
5. I wish to thank Mr Rinie van Est and Mr Joost Gerritsen from the Rathenau Institute in the Netherlands, who have assisted me in this process by drafting an expert report which Mr van Est presented to the Committee on Culture, Science, Education and Media in December 2016. [Note 2: Document AS/Cult/Inf (2016) 11; his paper also includes a large number of references to interesting scholarly studies.] I also wish to thank all the other experts who took part in the committee hearings. [Note 3: Mr Raja Chatila, Director, Institute of Intelligent Systems and Robotics (Institut des systèmes intelligents et de robotique, ISIR), University Pierre et Marie Curie (UPMC), France; Mr Dmytro Shymkiv, Deputy Head of the Presidential Administration of Ukraine on administrative, social and economic reform; Mr Gérard Lommel, Vice-Chair of the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Council of Europe; Mr Jean-Marc Deltorn, Researcher at the Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg, France; and Ms Dafna Feinholz, Head of Section, Bioethics and Ethics of Science, UNESCO.] The following two chapters are based on the discussions we have had with the Rathenau Institute and with other experts.

2. Technological convergence

6. NBIC convergence refers to four key sectors: nanotechnology, biotechnology, information technology and cognitive sciences.
7. In the second half of the 20th century, the convergence of information technology (IT) with a wide range of scientific disciplines and industrial and service processes characterised the “information revolution” era. For example, the internet resulted from the convergence between IT and communication technologies. Advancing research on the human genome was based on convergence between biology and IT, as the mapping of the human genome depends on computing power. Conversely, developments in biology also inspired the IT community to develop neural networks, swarm intelligence and DNA computers. The rapid emergence of cognitive sciences in recent decades accelerated the expansion of NBI (nanotechnology, biotechnology and information technology) convergence into NBIC convergence, and stimulated the revival of “artificial intelligence” and robotics.
8. Two trends can be observed that indicate a growing interface between man and machine. [Note 4: R. van Est and D. Stemerding (eds.) (2012), European governance challenges in bio-engineering – Making perfect life: Bio-engineering (in) the 21st century, Final report, STOA, European Parliament, Brussels.] On the one hand, biology is becoming a technology. In other words, the physical sciences (nanotechnology and information technology) enable progress to be made in the life sciences, such as biotechnology and cognitive sciences. This type of convergence created a new set of ambitions with regard to biological and cognitive processes, including the improvement of human capacities. The Committee on Social Affairs, Health and Sustainable Development is currently preparing a report on “Genetically engineered human beings”. [Note 5: Motion for a recommendation, Doc. 13927.] I myself am drafting a report for the French Parliament on the genome editing revolution. [Note 6: “Les enjeux économiques, environnementaux, sanitaires et éthiques des biotechnologies à la lumière des nouvelles pistes de recherche”, Jean-Yves Le Déaut, member of the French National Assembly, and Catherine Procaccia, member of the French Senate, 2017.]
9. Some think that, with the development of neurosciences, the simulation of neuronal circuits will make it possible to determine what a person tends to think, do or want, or in other words to read people's minds, better assess individual and collective behaviours, and therefore to control or even manipulate people.
10. The second trend is that technology and biology are becoming much closer and complement each other, since the life sciences inspire, enable progress within and provide new concepts to the physical sciences. In other words, technologies, especially information technologies, are acquiring properties we normally associate with living organisms, such as self-assembly, self-healing, reproduction and intelligent behaviour. Accordingly, in the future we will see a proliferation of new types of man-made modifications (artefacts) using biological, cognitive and social technologies, which will be incorporated into our bodies and brains or intimately integrated into our social lives. Examples of these bio-inspired artefacts are biopharmaceuticals, engineered tissue, stem cells, xenotransplantation and hybrid artificial organs. Humanoid robots, avatars, softbots (software agents in a digital environment), persuasive technologies which can influence decisions and modify relationships with others, and emotion-detection techniques are examples of cognitive-inspired and socio-inspired “artefacts”.
11. Artificial intelligence and robotics use and build on the existing information and communication technology (ICT) infrastructure and nanotechnologies. Robots are not only being used in areas such as medicine, agriculture and manufacturing; they are now also capable of driving cars and piloting drones. In addition, smart devices, which have the ability to learn, are changing the nature of the internet, which is assuming the features of a gigantic robotic system.
12. Humans are becoming more and more intimately connected to technology. [Note 7: R. van Est, with the assistance of V. Rerimassie, I. van Keulen and G. Dorren (2014), Intimate technology: The battle for our body and behaviour, Rathenau Institute, The Hague.] We let technology nestle itself within us, close to us and between us. Technology becomes part of our lives on a large scale with smartphones, activity trackers, social media, massively multiplayer online games or augmented reality glasses. These digital machines penetrate our private and social life and increasingly influence how humans interact. Through our interactions with the machines that surround us – such as CCTV cameras, GPS data, smart shoes, DNA chips, face recognition technologies, internet search engines, smart cars, etc. – we are being digitally identified. Digital data on our genetic make-up, health, inclinations, hobbies, feelings, preferences, conversations and whereabouts are being collected. These data are not gathered without purpose, but are often used to categorise human beings in particular profiles with the explicit goal of intervening in our future choices.
13. Lastly, some technologies take on more and more human-like features. Machines can acquire human traits and move us with their outward appearance, mimic human activities, such as driving a car, and exhibit intelligent behaviour or even show emotions. Think, for example, of self-driving cars, social robots, digital assistants, chatbots, Google Translate, and IBM Watson Health, which functions as a clinical decision support system for use by medical professionals. It is important to note that the ability of these machines to mimic human features or activities is often enabled by the fact that there is an enormous amount of accumulated data on our own characteristics and activities. Google uses the digitised data of human translation, for example generated by translators at the European Parliament, to train its algorithms. Gathering a large amount of digital data about us enables engineers to create machines that behave just like us, and thereby to increase the interaction between humans and machines.
14. So far, most of the bioethical debate and related human rights treaties have focused on invasive biomedical technologies that work inside our organisms. The Council of Europe Convention on Human Rights and Biomedicine (ETS No. 164, “Oviedo Convention”) sets out a number of common guiding principles to preserve human dignity in the application of innovations in biomedicine. Meanwhile, a broad range of emerging ICT-based technologies that work outside the body – but still impact the bodily, mental, and social performance of human beings – has developed, which raise many new ethical, social and human rights issues.
15. Safeguarding human dignity in the 21st century obliges us to look at all kinds of “intimate technologies”, namely the technologies that are inside us (deep brain stimulation), close to us (electroencephalogram (EEG) neuromodulation), between us (social media), that have a lot of information about us (big data), and technologies that imitate us (for example robots and smart environments).
16. To indicate this cluster of technologies, the term “the Internet of Things” is often used. Through robotics, the internet is given “senses” by means of sensors, and “hands and feet” by means of actuators. In this way an Internet of Robotic Things is being shaped. A broad set of information and communication technologies, such as sensor networks, the internet, big data, artificial intelligence and robotics, play a role in this development. The pervasiveness of these ICTs is ever-increasing and leads to a blurring of the distinctions between human and machine. To indicate this human condition, the term “onlife” has been used. In this “onlife” world we are interacting with all sorts of intelligent or digitally coded artefacts.
17. Each type of interaction between humans and intelligent machines can raise various human rights issues. This report illustrates these issues with references to six specific technologies: self-driving cars, care robots, e-coaches, artificial intelligence used for social sorting, judicial applications of artificial intelligence and augmented reality. This report does not address the question of robots and drones used in defence matters.

3. Human rights related to intelligent artefacts

“Technology is neither good nor bad; nor is it neutral” (Melvin Kranzberg, Six Laws of Technology, 1986). However, it must be judged by the use human beings make of it.

3.1. Protection of personal data

18. The primary business model of the internet is built on mass surveillance. [Note 8: B. Schneier, “The Public-Private Surveillance Partnership”, Bloomberg, 31 July 2013, available at: https://www.bloomberg.com/view/articles/2013-07-31/the-public-private-surveillance-partnership.] For instance, Facebook tracks people all over the internet, even when they are not a member of this social media website. [Note 9: Currently, the Belgian Data Protection Authority is involved in a legal battle with Facebook about the tracking of non-users.] These data may reveal sensitive information about people’s lives such as their sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, and their use of addictive substances. Because this is about the processing of personal data, data protection regulations apply, namely the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108, hereafter “Convention No. 108”) and the Additional Protocol regarding supervisory authorities and transborder data flows (ETS No. 181).
19. Convention No. 108 contains the principles for fair and lawful processing of personal data. It also provides measures of control available to individuals, such as the right to obtain confirmation of whether personal data are stored and the right to obtain rectification of such data. [Note 10: Article 8.a, c and d.] At the level of the European Union, efforts have been made to “make Europe fit for the digital age” by establishing the General Data Protection Regulation (GDPR), which will apply from 25 May 2018. [Note 11: The General Data Protection Regulation’s predecessor, Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, will be repealed on the same date.]
20. The internet and personal data processing go hand in hand, given the broad definition of personal data. Even the processing of an IP address, used to identify a device connected to the internet, can trigger the applicability of data protection regulations. [Note 12: Court of Justice of the European Union, 19 October 2016, C-582/14 (Breyer), and Court of Justice of the European Union, 24 November 2011, C-70/10 (Scarlet/SABAM), paragraph 51.] With regard to big data analysis, the use of data for new or incompatible purposes, data maximisation, lack of transparency, the possibility to uncover sensitive information, the risk of re-identification, security implications and incorrect data are all issues that pose challenges to personal data protection. [Note 13: International Working Group on Data Protection in Telecommunications (Berlin Telecom Group) (2014), Working Paper on Big Data and Privacy – Privacy principles under pressure in the age of Big Data analytics, 55th Meeting, Skopje, 5-6 May 2014.] Other implications of (big) data analysis that relate to behavioural targeting are, for instance, take-it-or-leave-it choices; for example, the use of “cookie walls” that deny people access to a website unless they give consent to the website owner to track their activities. [Note 14: F.J. Zuiderveen Borgesius (2014), Improving privacy protection in the area of behavioural targeting, Kluwer Law International, Alphen aan den Rijn, p. 232.]
21. The Council of Europe has addressed many of these “big data” issues. [Note 15: See, for example, D. Korff (2013), The use of the Internet & related services, private life & data protection: Trends & technologies, threats & implications, T-PD(2013)07Rev, 31 March 2013, with regard to “internet & related services, private life & data protection”; and A. Rouvroy (2016), “Of Data and Men” – Fundamental rights and freedoms in a world of big data, T-PD-BUR(2015)09REV, 11 January 2016, regarding fundamental rights and freedoms in a world of big data.] Convention No. 108 is currently being updated in order to address the new challenges raised by the digital era. The main innovations concern the following issues: proportionality (so far implicit and concerning only the data); data minimisation; the obligation to demonstrate compliance with the applicable principles, especially for data controllers and processors; the obligation to declare data breaches; transparency of data processing; and additional safeguards for the data subject (such as the right not to be subject to a decision based solely on automatic processing without having his or her views taken into consideration, the right to obtain knowledge of the logic underlying the processing and the right to object). [Note 16: Council of Europe, Modernisation of the Data Protection “Convention 108”, 28 January 2016, https://www.coe.int/nl/web/portal/28-january-data-protection-day-factsheet.] In addition, guidelines on the protection of individuals regarding personal data processing in a world of big data [Note 17: Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data, 23 January 2017, T-PD(2017)01, Strasbourg, www.coe.int/dataprotection.] have recently been adopted by the Convention No. 108 Committee.
22. Most of the data protection challenges raised with regard to internet services also apply to the Internet of Things. Like websites and apps, machines within the Internet of Things can be used to collect data. Think of a robotised car that registers its surroundings or monitors its travel route (location-based data), or a care robot that tracks an elderly person’s face or emotions (biometric data). Shop owners already use technologies to track their customers within their shop, or even monitor people who pass by their shop. [Note 18: The Dutch Data Protection Authority imposed penalty payments on a company that could not demonstrate that Wi-Fi tracking in public spaces was necessary for a legitimate purpose. See also: Autoriteit Persoonsgegevens, “Dutch DPA investigates Wi-Fi tracking in and around shops”, 1 December 2015, https://autoriteitpersoonsgegevens.nl/en/news/dutch-dpa-investigates-wifi-tracking-and-around-shops.] Tech companies such as Google, Amazon, Facebook, Apple and Microsoft have developed business strategies to gather data in and around the home. As a result, one’s home – which once was one’s castle – has become a place where one’s movements and behaviour are continuously being “watched”, for example via a smartphone, smart meter or smart TV. This data collection – either via the internet or the Internet of Things – enables these companies to gain detailed insight into the lives of millions of people. Sometimes data collection results from legislation, for example the use of “smart meters” or event data recorders used in cars.
23. The development of the Internet of Things raises issues about the transparency of data processing and how the individual is able to exercise his or her rights based on Convention No. 108 or the legislative framework of the European Union. One of the main pillars of legitimate data processing – the individual’s consent to the proposed processing activity – will continue to be put under pressure. Specific questions arise from lack of transparency in automated decisions.
24. Article 15 of Data Protection Directive 95/46/EC, the so-called “Kafka provision”, prohibits certain fully automated decisions with far-reaching effects, covering not only “legal effects” but also decisions that “significantly affect” a person. Moreover, in 1992, the European Commission said that “data processing may provide an aid to decision-making, but it cannot be the end of the matter; human judgment must have its place”. [Note 19: F.J. Zuiderveen Borgesius (2014), Improving privacy protection in the area of behavioural targeting, Kluwer Law International, Alphen aan den Rijn, p. 373, referencing the European Commission amended proposal for a Data Protection Directive (1992), p. 26.] To safeguard the data subject’s rights and freedoms, he or she has a right to obtain human intervention on the part of the controller, to express his or her point of view and to challenge the decision.
25. However, it is very hard and often even impossible for people to notice that they are excluded from seeing a particular advertisement online or have to pay a higher price because artificial intelligence has identified them as “rich”. This makes it difficult to challenge the automated decision. Article 15 also does not help much in reducing filter bubbles and manipulation risks, since these activities might not significantly affect a person within the meaning of this article. Based on the European Union regulations, the controlling party that processes the data should, upon request, inform the “profiled” person of the logic involved in the processing. However, data protection regulations usually do not apply when people are not (in)directly identified. Consequently, there is a need to strengthen the position of the person being profiled through technologies enabling meaningful profiling transparency.
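
By way of illustration only, the following minimal sketch (in Python; the feature names, weights and threshold are hypothetical, not taken from any real system) shows one way the “logic involved in the processing” could be made meaningful to the person profiled: each decision is returned together with the per-feature contributions that produced it, rather than only the final label.

```python
# Minimal illustrative sketch of "meaningful profiling transparency" for a simple
# linear scoring model. Feature names, weights and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Explanation:
    score: float
    label: str
    contributions: dict  # per-feature contribution to the score

# Hypothetical profiling model: score = sum(weight * feature value)
WEIGHTS = {
    "visits_luxury_shops": 1.5,
    "device_price_tier": 0.8,
    "postcode_income_index": 1.2,
}
THRESHOLD = 2.0  # above this, the user is profiled as "likely affluent"

def profile_with_explanation(features: dict) -> Explanation:
    """Return the profiling decision together with the logic behind it."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
    score = sum(contributions.values())
    label = "likely affluent" if score > THRESHOLD else "not classified as affluent"
    return Explanation(score=score, label=label, contributions=contributions)

if __name__ == "__main__":
    result = profile_with_explanation(
        {"visits_luxury_shops": 1.0, "device_price_tier": 2.0, "postcode_income_index": 0.5}
    )
    # The person being profiled could be shown exactly this breakdown on request.
    print(f"Decision: {result.label} (score {result.score:.2f})")
    for feature, contribution in sorted(result.contributions.items(), key=lambda x: -x[1]):
        print(f"  {feature}: {contribution:+.2f}")
```
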

3.2. Right to respect for private life

26. One question of concern is the issue of “Computers As Persuasive Technologies”, abbreviated as “captology”. Captology includes the design, research and analysis of interactive computing products (computers, mobile phones, websites, wireless technologies, mobile applications, video games, etc.) created for the purpose of changing people’s attitudes or behaviour. [Note 20: Stanford Persuasive Technology Lab, “What is captology?”, http://captology.stanford.edu/about/what-is-captology.html.] Persuasive technologies rely on data gathering, analysis via artificial intelligence and smart interfaces. This means that captology enables massive psychological experimentation and persuasion on the internet.
27. Smartphone apps or websites, for instance, measure how people interact with these applications. In this way millions of people are tested on the internet each day. Via A/B testing – a randomised experiment comparing two variants – knowledge is gathered about our behaviour and how our brain makes choices. Based on these measurements, the applications automatically adjust their content in order to persuade the user to buy an article, click on a certain advertisement or extend his or her time using the application. Just as in the case of slot machines, the financial value of an app is mainly driven by the amount of time consumers spend using it.
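
As a purely illustrative sketch of the mechanics described above (the variant names, click probabilities and decision rule are hypothetical), the following Python fragment assigns users to one of two variants, measures the click-through of each, and then serves the better-performing variant to everyone.

```python
# Minimal illustrative sketch of A/B testing as described above.
# Variant names, click probabilities and the decision rule are hypothetical.

import random

VARIANTS = ("A", "B")  # two versions of the same page or news feed

def assign_variant(user_id: int) -> str:
    """Split users across the two variants (a simple stand-in for random assignment)."""
    return VARIANTS[user_id % 2]

def simulate_experiment(n_users: int = 10_000) -> dict:
    """Measure how often users in each variant click (simulated behaviour)."""
    # Hypothetical underlying click probabilities that the experimenter does not know.
    true_click_rate = {"A": 0.05, "B": 0.07}
    shown = {"A": 0, "B": 0}
    clicked = {"A": 0, "B": 0}
    for user_id in range(n_users):
        variant = assign_variant(user_id)
        shown[variant] += 1
        if random.random() < true_click_rate[variant]:
            clicked[variant] += 1
    return {v: clicked[v] / shown[v] for v in VARIANTS}

if __name__ == "__main__":
    observed = simulate_experiment()
    winner = max(observed, key=observed.get)
    print(f"Observed click-through rates: {observed}")
    print(f"Variant served to all users from now on: {winner}")
```
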
28. Parties developing persuasive technologies apply knowledge from neuroscience and psychology, but do not follow the existing ethics codes for psychologists and for conducting psychological research. For example, in 2014 Facebook and Cornell University academics tried to influence the emotions of almost 700 000 users via news feeds, [Note 21: A.D.I. Kramer, J.E. Guillory and J.T. Hancock (2014), “Experimental evidence of massive-scale emotional contagion through social networks”, Proceedings of the National Academy of Sciences 111 (24): 8788-8790.] without the users’ consent or the appropriate approval from an ethics committee. The users had no idea that their emotions were being influenced via predominantly negative or positive news feeds. These influencing activities evidently interfere not only with an individual’s autonomy and self-determination, but also with the individual’s freedom of thought, conscience and religion. [Note 22: R. Strand and M. Kaiser (2015), Report on Ethical Issues Raised by Emerging Sciences and Technologies, 23 January 2015.] How can people choose their own path if organisations nudge them towards certain emotions via websites or apps? This question becomes especially urgent when people are not aware that they are being influenced, rendering them practically defenceless against these influencing activities.
29. Such potentially negative effects on humans explain why psychologists have developed their own code of ethics. Individuals should not have less protection of their rights merely because they are not in a traditional psychologist-client relationship with the entity that performs the psychological experiment. It is quite remarkable that such experiments are being conducted on a massive scale via the internet (and the Internet of Things), while the human subjects are unaware of them and have not given their consent. It may not always be clear to citizens how to file a complaint against such experimental activities conducted by a website or app owner, especially if the data protection regulations do not apply, for example because no personal data are being processed.
30. People use electronic coaches, or “e-coaches”, to better manage their lives. [Note 23: L. Kool, J. Timmer and R. van Est (eds.) (2015), Sincere support: The rise of the e-coach, Rathenau Institute, The Hague.] These e-coach systems often use softbots which are integrated in smart wearables. For example, the Fitbit is a wrist bracelet which enables self-tracking. The device monitors the individual’s behaviour, such as sleep patterns or diet, in order to coach people to improve their lifestyle (better sleep, more exercise or weight loss). The sum of the data registered by such an e-coach constitutes an individual’s “data double” and is part of the datafication of that individual’s daily routine. This is a form of voluntary self-surveillance.
31. Even though users of an e-coach have the intention to take control of their lives, they need to be aware that there are more parties involved in this technology. The interests of these parties may not be aligned with those of the e-coach user. For instance, employers could try to oblige their employees to wear an e-coach in order to track their activities and be able to advise or steer someone into a certain lifestyle. The data registered by the e-coach may contain health information that should be treated carefully. This is also interference with a person’s informational privacy, since the individual loses control of his or her personal information. [Note 24: The Dutch Data Protection Authority decided against the use of e-coaches by employees which enabled employers to gain insight into, for example, someone’s sleeping behaviour (health data). See: Autoriteit Persoonsgegevens, “Verwerking gezondheidsgegevens wearables door werkgevers mag niet”, 8 March 2016, https://autoriteitpersoonsgegevens.nl/nl/nieuws/ap-verwerking-gezondheidsgegevens-wearables-door-werkgevers-mag-niet (in Dutch).] In addition to employers, the developers of the e-coaches have control over the collected data and could use them to “tell” users how they should adjust their lives. Here lies the risk of unwanted manipulation in which autonomy turns into heteronomy. [Note 25: M. Fuchs in: H. Whittall, L. Palazzani, M. Fuchs and A. Gazo (2015), Emerging technologies and human rights, international symposium, 4-5 May 2015, session 2: Technology, Intervention and Control of Individuals, https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168049596.] In addition to the developers, there are also third parties involved that receive data and analyse them, for example for their own commercial purposes, sometimes irrespective of whether the app or handset is in use. [Note 26: See, for example, the case of the fitness app “Runkeeper”: Forbrukerrådet, “Runkeeper tracks users when the app is not in use”, 13 May 2016, https://www.forbrukerradet.no/side/runkeeper-tracks-users-when-the-app-is-not-in-use/.]
32. Technology developers of, for example, e-coaches should be transparent about the persuasive methods they apply. [Note 27: L. Kool, J. Timmer and R. van Est (eds.) (2015), op. cit.] People should be able to monitor the way in which information reaches them. This also means that transparency about the revenue model should be mandatory. In order to address the quality of e-coaches and the responsibility of their developers, a seal of approval could be developed that would inform users about the quality of e-coaching apps and devices.
33. Care robots are designed to provide care to vulnerable groups, such as children, or elderly or disabled people. Robots can be applied to both the benefit and detriment of an individual’s autonomy and self-determination. Think of robots that improve the autonomy of elderly people by assisting them when they get dressed or take a bath. The Japanese robot Robear, for example, can lift patients from their beds without the help of a human care provider.
34. In contrast, care robots may also restrain an elderly person when their developers have programmed them to do so. How pushy may a robot become, for example, in reminding someone to take their medication? What if someone refuses to take their medication? The danger of paternalism comes into play. In this case, robot technology can force users to take a particular course of action on the basis that developers know what is best for these users. [Note 28: I.R. Van de Poel and L.M.M. Royakkers (2011), Ethics, technology, and engineering: An introduction, Wiley-Blackwell, Oxford, United Kingdom.]
35. Physical robots represent embodied artificial intelligence. This embodiment of the robot offers opportunities to improve the interaction between humans and machines. Machines, like social care robots, capitalise on the ability of people to attribute human form, traits, emotions and intentions to machines. Robotics makes use of this human ability to anthropomorphise to develop social robots which can engage with humans on an emotional level. [Note 29: L. Royakkers and R. van Est (2016), Just Ordinary Robots: Automation from Love to War, CRC Press, Boca Raton, FL.] Engineers may use this powerful social psychological phenomenon to build persuasive technology. To what extent do we want to deploy the emotional bond between people and machines? And how do we ensure that there is no abuse of the trust that is artificially built between man and machine? Since people can become addicted to their smartphones or virtual reality girlfriends, it is highly conceivable that people can develop strong feelings for social robots.
36. Over the last few years, the debate has been growing on how new ICTs influence the emotional and social skills of people and the quality of human relationships. Clinical psychologist and sociologist Professor Sherry Turkle sees, as a result of people’s attachment to their devices, a risk of social deskilling: the inability to cope with other humans, with their problems and shortcomings, and the unwillingness to invest in human relationships. Reliance on machines increases the risk of closing in on oneself. [Note 30: S. Turkle (2011), Alone together: Why we expect more from technology and less from each other, Basic Books, New York; S. Turkle (2015), Reclaiming conversation: The power of talk in a digital age, Basic Books, New York.]
37. Certain types of robots are equipped with artificial intelligence and are programmed to mimic social abilities in order, for example, to establish a conversation with their users. For instance, care robots can use affective computing in order to recognise human emotions and subsequently adjust the robot’s behaviour. [Note 31: R. van Est, with the assistance of V. Rerimassie, I. van Keulen and G. Dorren (translation: K. Kaldenbach) (2014), Intimate technology: The battle for our body and behaviour, Rathenau Institute, The Hague.] Potentially, robots can stimulate human relationships. The Dutch care robot Alice asks its care receivers if they have recently called their family members, with the aim of (re-)establishing contact and maintaining their relationships. Several studies on the effect of Paro, a soft seal robot, in inpatient elderly care seem to suggest that the mood of elderly people improves and that depression levels decrease; in addition, their mental condition becomes better, advancing the communication between the senior citizens and strengthening their social bonds. [Note 32: S.T. Hansen, H.J. Anderson and T. Bak (2010), “Practical evaluation of robots for elderly in Denmark: An overview”, Proceedings of the fifth ACM/IEEE international conference on human-robot interaction (pp. 149-150), Osaka, Japan, 2-5 March, IEEE Press, Piscataway, NJ, US.] However, there is a danger that robots could interfere with the right to respect for family life, as an (un)intentional consequence of how the robot affects its users. Due to anthropomorphism, vulnerable people such as the elderly may come to regard a social robot, for example, as their grandchild. If not treated carefully, the care receiver may focus primarily on the care robot, instead of, for example, his or her family members or other human beings.
38. Similarly, virtual or augmented reality technologies may improve someone’s ability to establish and develop relationships with human beings. For instance, such technologies could facilitate communications between family members; Microsoft Research showed this during its “holoportation” demonstration. [Note 33: Engadget, “‘Holoportation’ demo makes live-video holograms look easy”, 26 March 2016, https://www.engadget.com/2016/03/26/holoportation-demo-makes-live-video-holograms-look-easy/.] In contrast, these technologies could also decrease someone’s ability to establish and develop relationships if, for example, a virtual world is designed in a way which holds the person back from entering into (meaningful) contact with others and instead encourages interaction with virtual entities.

3.3. Human dignity

39. Human dignity is one of the core principles of fundamental rights and it also acts as the basis for freedoms and other rights. In this respect, several legal sources, including the Council of Europe’s European Social Charter (revised) (ETS No. 163) and the Charter of Fundamental Rights of the European Union, [Note 34: www.europarl.europa.eu/charter/pdf/text_en.pdf.] underline the importance of independent living and full participation in society for the elderly and persons with disabilities.
40. Communication and interaction with care robots may potentially impact physical and moral relations in our society; it could have positive consequences for someone’s dignity as well as negative ones. Even though the “soft impact” on human dignity may be difficult to estimate, the Committee on Legal Affairs of the European Parliament notes in its report on robotics [Note 35: European Parliament News, “Robots: Legal Affairs Committee calls for EU-wide rules”, 12 January 2017, www.europarl.europa.eu/news/en/news-room/20170110IPR57613/robots-legal-affairs-committee-calls-for-eu-wide-rules.] that these impacts need to be considered if and when robots replace human care and companionship. For example, mechanical feeding would offer people no choice or autonomy with regard to how they receive their nutrition and could be degrading. Within the European Union-funded Value Aging project, it was recommended that the robot must be capable of communicating its intention of doing something to the user, while the user must be able to cancel the intended action or switch off the robot completely. [Note 36: O. Vermesan and P. Friess (co-ordinators) (2015), Internet of Things – IoT Governance, Privacy and Security Issues, European Research Cluster on the Internet of Things (IERC), January 2015, p. 27, www.internet-of-things-research.eu/pdf/IERC_Position_Paper_IoT_Governance_Privacy_Security_Final.pdf.]
41. A call for the design and development of robotics that preserve human rights was made at the 38th International Conference of Data Protection and Privacy Commissioners by the European Data Protection Supervisor. The Council of Europe could follow this call and provide guidelines in this field.

3.4. The right to property

42. Two main developments can be distinguished in relation to ownership. Firstly, objects that people possess – such as their home or land – may become part of a virtual or augmented reality, and artefacts created as part of a virtual or augmented reality could be placed “on top” of possessions in the physical world. This raises the question: if you own your land, do you also own the virtual space that has been allocated to it by others? For instance, when the developer of the game Pokémon Go put virtual characters – Pokémon – on top of real-world homes and environments for gamers [Note 37: It has been estimated that 45 million people worldwide played the game at its peak in July 2016. See: Bloomberg, “These Charts Show That Pokemon Go Is Already in Decline”, 22 August 2016, https://www.bloomberg.com/news/articles/2016-08-22/these-charts-show-that-pokemon-go-is-already-in-decline.] to find and catch, this led to discussions about trespassing, land rights and the legal boundaries of property. [Note 38: Reuters, “Get off my lawn! Pokemon Go tests global property laws”, 22 September 2016, www.reuters.com/article/us-landrights-pokemongo-idUSKCN11S1GY.]
43. Secondly, ownership questions arise in relation to the use of an object, such as a robotised car, a smartphone or a printer. The issues here are twofold. On the one hand, these devices are part of the Internet of Things and therefore connected to networks. This enables entities other than the owner of the device to control it, and even intervene in someone’s use of the device. This is possible due to remote access via the internet by the manufacturer or as part of the software’s design that is incorporated into the device. As a consequence, the individual who owns a robot or a device is hindered in his or her peaceful enjoyment of these possessions.
44. On the other hand, the device (bought, rented or otherwise used by someone) registers data for all kinds of purposes. This is true with regard to a robotised car, which needs these data in order to operate safely. Does the user of the device own the data, which can be either personal data (which activates data protection regulations) or non-personal data (which may fall within the scope of other legal regimes such as intellectual property law)?
45. These examples show that in today’s world, it is not self-evident that one can interact or otherwise use goods such as robots or devices, even if these goods have been purchased. It is possible that some of these issues can be addressed via existing legal instruments as part of consumer law, competition law or intellectual property law. However, another approach could be based on the right to property, arguing that once an object has been bought, the manufacturer or other parties may not interfere with this possession unless the owner has given his or her permission (for example for software updates).

3.5. Safety, responsibility and liability

46. Over the last few decades, the automobile industry has made cars more and more intelligent. This is the long-term trend of car robotisation. [Note 39: L. Royakkers and R. van Est (2016), Just Ordinary Robots: Automation from Love to War, CRC Press, Boca Raton, FL.] From the 2000s onwards, cars gradually received automated capabilities, such as cruise control and park assist systems. The Science and Technology Options Assessment (STOA) of the European Parliament holds that safety aspects should be one of our primary concerns, that is to say that ways must be found for robots and humans to work together without accidents.
47. The French Parliamentary Office for Evaluating Scientific and Technological Choices and the Committee on Legal Affairs of the European Parliament consider that clarification is needed about the responsibility for the actions of robots, in order to ensure transparency and legal certainty for producers and consumers. [Note 40: European Parliament News, “Robots: Legal Affairs Committee calls for EU-wide rules”, op. cit.] At least the following potential players can be identified who may bear the blame in the event of a car crash: the car manufacturer, the software builders that programmed the car’s artificial intelligence, the seller, the buyer, the road authority or others. Addressing the issue of responsibility and liability will depend on the level of automation of the car. [Note 41: Six levels of automation for “on-road motor vehicles” have been defined by the global association SAE International. See: SAE International standard J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems: www.sae.org/misc/pdfs/automated_driving.pdf.]

3.6. Freedom of expression

48. Google and Facebook have become central information gatekeepers of our society. [Note 42: E. Bozdag (2013), “Bias in algorithmic filtering and personalization”, Ethics and Information Technology, September 2013, Volume 15, Issue 3, pp. 209-227.] Even though Facebook insists that it is not a media company, [Note 43: Reuters, “Facebook CEO says group will not become a media company”, 29 August 2016, www.reuters.com/article/us-facebook-zuckerberg-idUSKCN1141WN.] almost half of the people online use the website as their leading source for news. [Note 44: Reuters, “More than half online users get news from Facebook, YouTube and Twitter: study”, 14 June 2016, www.reuters.com/article/us-media-socialmedia-news-idUSKCN0Z02UB.]
49. Automated decisions can promote or hinder the free flow of information. If the artificial intelligence programmer provides the user with tools to gather and disseminate information, this could promote the right to freedom of expression. For instance, tools that bring together RSS feeds from newspaper sites can be helpful in imparting information in an easy manner. However, if artificial intelligence alone determines what information is to be shown, it challenges the freedom to receive and impart information and ideas without interference, as protected by Article 10 of the European Convention on Human Rights (ETS No. 5). For instance, there is a risk that an “information cocoon”, “echo chamber” [Note 45: C. Sunstein (2001), Echo chambers: Bush v. Gore, impeachment, and beyond, Princeton University Press, Princeton and Oxford.] or “filter bubble” [Note 46: E. Pariser (2011), The filter bubble: What the Internet is hiding from you, Penguin Books, London.] will emerge, hindering, among other things, our ability to freely develop our opinions. Limits may even arise when (automatically) selecting information out of a seemingly infinite pool. In the case of Facebook, an algorithm based on criteria such as affinity reinforces affinity, and Google’s search results, for example, depend on previous search history. According to the Youth Partnership: [Note 47: Youth Partnership – Partnership between the European Commission and the Council of Europe in the field of youth (2015), Report – Symposium on youth participation in a digitalised world, Budapest, 14-16 September 2015, http://pjp-eu.coe.int/en/web/youth-partnership/digitalised-world.] “Both cases of information restriction – individual decisions or automated algorithms – might lead to the loss of relevant ‘alternative’ information that should be included in the decision making process of active participation.”
50. According to the European Court of Human Rights, the media not only have the task of imparting information and ideas of public interest: the public also has a right to receive them. [Note 48: European Court of Human Rights, The Sunday Times v. the United Kingdom (No. 1), Application No. 6538/74, judgment of 26 April 1979, paragraph 65.] Therefore, there is a need to provide a blueprint as to how central information gatekeepers like Google and Facebook should use their algorithmic powers for the benefit of human rights, especially in relation to the right to receive and impart information and ideas. [Note 49: Building upon Committee of Ministers Recommendation CM/Rec(2012)3 on the protection of human rights with regard to search engines.] Moreover, the committee will be preparing a report on “Are social media contributing to limiting freedom of expression?” [Note 50: Motion for a resolution, Doc. 14184.]

3.7. Prohibition of discrimination

51. The delegation of decision making to artificial intelligence may provide an opportunity to combat discrimination. For example, artificial intelligence tools have been developed with the aim of eliminating bias from the hiring process, such as a tool that automatically flags potentially biased language in job descriptions. [Note 51: Fortune, “SAP Is Building Bias Filters Into Its HR Software”, 31 August 2016, www.fortune.com/2016/08/31/sap-successfactors-bias-filter/.] Virtual reality technologies have even been deployed to promote diversity education and combat discrimination. [Note 52: USA Today, “Virtual reality tested by NFL as tool to confront racism, sexism”, 10 April 2016, www.usatoday.com/story/tech/news/2016/04/08/virtual-reality-tested-tool-confront-racism-sexism/82674406/.]
52. However, technologies can also be used to interfere with human rights. This is also true with regard to the prohibition of discrimination. Racist groups may use artificial intelligence to propagate their message. Moreover, there is also unintentional algorithmic discrimination. For instance, in 2015 Google’s Photos app tagged a picture of two black people as “gorillas”. This tag was the result of Google’s artificial intelligence, which suggests categories and tags based on machine learning. Google removed the tags and apologised: according to Google, its algorithms will get better at categorising photos if more people correct mistaken tags. The system can therefore be “trained” not to show tag suggestions which one could consider racist. [Note 53: The Wall Street Journal, “Google Mistakenly Tags Black People as ‘Gorillas’, Showing Limits of Algorithms”, 1 July 2015, http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/.] Nonetheless, it could be influenced by pressure groups wishing to promote their ideas.
53. Machine learning depends upon data that has been collected from society; to the extent that society contains inequality, exclusion or other traces of discrimination, so too will the data. [Note 54: B. Goodman and S. Flaxman (2016), “European Union regulations on algorithmic decision-making and a ‘right to explanation’”.] Machine learning will reproduce discriminatory patterns in the “training” dataset. As a consequence, biased decisions are presented as the outcome of an objective algorithm and “unthinking reliance on data mining can deny members of vulnerable groups full participation in society”. [Note 55: S. Barocas and A.D. Selbst (2016), “Big data’s disparate impact”, California Law Review, Vol. 104, p. 671.] Profiling techniques are a specific subset of automated decisions. These techniques are used, for instance, to assess how rich a website user is, so that prices on the website can automatically be adjusted. [Note 56: The Wall Street Journal, “On Orbitz, Mac Users Steered to Pricier Hotels”, 23 August 2012, www.wsj.com/articles/SB10001424052702304458604577488822667325882.] This use of algorithmic profiling can be discriminatory.
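
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (the data, groups and decision rule are hypothetical): a trivial “model” fitted on historical hiring decisions that were biased against one group simply learns to reproduce that bias in its own recommendations.

```python
# Minimal illustrative sketch: a model trained on biased historical decisions
# reproduces the bias. All data, groups and thresholds are hypothetical.

# Historical decisions: (group, qualified, hired). Group "B" candidates were
# systematically rejected even when qualified.
TRAINING_DATA = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(data):
    """'Learn' the historical hiring rate per group - a stand-in for a real learning algorithm."""
    rates = {}
    for group in {g for g, _, _ in data}:
        outcomes = [hired for g, _, hired in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(model, group, threshold=0.5):
    """Recommend hiring if the learned hiring rate for the group exceeds the threshold."""
    return model[group] >= threshold

if __name__ == "__main__":
    model = train(TRAINING_DATA)
    for group in sorted(model):
        print(f"Group {group}: learned hiring rate {model[group]:.2f}, "
              f"recommended: {predict(model, group)}")
    # The output recommends group A and rejects group B candidates, regardless of
    # individual qualifications - the bias in the training data persists.
```
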
54. In order to combat algorithmic discrimination and manipulation, the notion of algorithmic accountability – in addition to the current “right to explanation” – is worth considering. In practice, this would imply things such as properly dealing with bias in data sets, discrimination-aware data mining, meaningful transparency in relation to algorithms and profiling, restriction of the contexts in which such artificial intelligence is being used, and demanding outputs that avoid disparate impacts. [Note 57: F. Pasquale (2016), “Bittersweet Mysteries of Machine Learning (A Provocation)”, LSE Media Policy Project Blog, The London School of Economics and Political Science, http://blogs.lse.ac.uk/mediapolicyproject/2016/02/05/bittersweet-mysteries-of-machine-learning-a-provocation/.]
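
One way to operationalise “outputs that avoid disparate impacts” is to audit a system’s decisions per group. The sketch below is illustrative only (the groups, decision log and the four-fifths threshold are example values, not a legal standard): it computes a simple disparate-impact ratio between the favourable-outcome rates of two groups.

```python
# Minimal illustrative audit: compare favourable-outcome rates between groups.
# Decisions, group labels and the 0.8 ("four-fifths") threshold are example values.

def positive_rate(decisions, group):
    """Share of favourable outcomes for one group. decisions: list of (group, favourable)."""
    outcomes = [favourable for g, favourable in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of the protected group's favourable rate to the reference group's."""
    return positive_rate(decisions, protected_group) / positive_rate(decisions, reference_group)

if __name__ == "__main__":
    # Hypothetical decision log produced by some automated system.
    decisions = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 40 + [("B", False)] * 60
    ratio = disparate_impact_ratio(decisions, protected_group="B", reference_group="A")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # example threshold; what counts as acceptable is a policy question
        print("Warning: the system's outputs show a potentially discriminatory disparity.")
```
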

3.8. Access to justice and the right to a fair trial

55. Courts are increasingly using tools that automate their decision processes. Software robots promise to make more consistent legal decisions than humans and drastically reduce the length of court proceedings. Algorithm-powered artificial intelligence can help litigants assess their case and estimate their chances, which could lead to a reduction in the number of litigation procedures. The use of automated tools by judges may also help to ensure a fair trial.
56. We can assume that computers could make sound decisions in uncomplicated cases, while respecting Article 6 of the European Convention on Human Rights. 
			(58) 
			R.H.
Van den Hoogen (2007), E-Justice, beginselen
van behoorlijke elektronische rechtspraak, SDu Uitgevers,
The Hague. However, some principles – aimed at maintaining accountability, transparency and recognisability – should be taken into account when judges use an automated tool to aid the decision-making process. In particular: it should be made known that the judge is being assisted by an artificial intelligence tool and how this tool affects the decisions reached; the judge shall remain responsible for the final decision, even where this decision has been reached with or by assistive [computer] systems; and if the judge deviates from the advice of the [computer] system, this must be recorded. 
			(59) 
			Ibid.,
pp. 152-153.
57. In contrast, biased artificial intelligence could act in breach of the impartiality principle. The increased use of risk-assessment algorithms in the American justice system raises accountability and transparency issues. 
			(60) 
			M. Smith,
“In Wisconsin, a Backlash Against Using Data to Foretell Defendants’
Futures”, The New York Times, 22 June
2016, <a href='https://www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-using-data-to-foretell-defendants-futures.html'>https://www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-using-data-to-foretell-defendants-futures.html</a>. It has been reported that software used to set bail was biased against African Americans, although the real extent of that bias remains contested. 
			(61) 
			The
Washington Post, “A computer program used for bail and
sentencing decisions was labeled biased against blacks. It’s actually
not that clear”, 17 October 2016, <a href='https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas'>https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas</a>. With regard to artificial intelligence agents whose use by police forces leads to criminal proceedings, we should not assume that the outcomes of such a tool are necessarily correct, complete or even relevant with regard to potential suspects. 
			(62) 
			M. Hildebrandt (2016),
“Data gestuurde intelligentie in het strafrecht”, pp. 137-240 in:
E.M.L. Moerel, J.E.J. Prins, M. Hildebrandt, T.F.E Tjong Tjin Tai,
G-J. Zwenne and A.H.J. Schmidt (2016), Homo
Digitalis, Handelingen Nederlandse Juristen-Vereniging
146th volume/2016-I, Wolters Kluwer. The “equality of arms” principle of Article 6 of the Convention cannot be respected unless the public prosecutor, the lawyer of the defendant and the judge are able to check how the police artificial intelligence agent reached its conclusions. Such artificial intelligence agents should log what they did, with what purpose and how they reached the outcome.
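The following sketch (hypothetical Python code, not an existing system; the case data and field names are invented for illustration) indicates how such logging obligations could be expressed in software: every consultation of the tool records its purpose, its inputs, its recommendation and reasoning, the final human decision, and a mandatory reason whenever the judge deviates.

```python
# Hypothetical sketch of audit logging for AI-assisted judicial decisions.
import json
import datetime
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class DecisionRecord:
    case_id: str
    purpose: str                     # why the tool was consulted
    inputs: dict                     # the data the tool actually used
    recommendation: str              # the tool's output
    explanation: str                 # how that output was reached
    final_decision: str              # the judge's own decision
    deviation_reason: Optional[str]  # must be given if the judge deviates
    timestamp: str = field(default="")

    def __post_init__(self):
        self.timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        if self.final_decision != self.recommendation and not self.deviation_reason:
            raise ValueError("a reason must be recorded when the judge deviates")

# Example entry (all values invented for illustration).
record = DecisionRecord(
    case_id="2017-0042",
    purpose="bail risk assessment",
    inputs={"prior_offences": 0, "failed_to_appear": False},
    recommendation="release on bail",
    explanation="low predicted flight risk (score 0.12)",
    final_decision="release on bail",
    deviation_reason=None,
)
print(json.dumps(asdict(record), indent=2))  # e.g. appended to a tamper-evident log
```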
58. We should consider establishing a framework of minimum standards to be taken into account when a court uses artificial intelligence, in order to prevent, as far as possible, individual States from devising their own frameworks, with the risk of offering varying degrees of protection within the meaning of Article 6 of the European Convention on Human Rights.

4. Two potential new rights

59. To keep the robot age human-friendly, we suggest introducing two rights: the right not to be measured, analysed or coached (that is, not to be subjected to profiling, not to have one’s location tracked and not to be manipulated or influenced by an e-coach), and the right to choose between human contact and assistance by a robot.

4.1. Right to refuse to be measured, analysed or coached 
			(63) 
			In the French version
of this document, the heading is “Le droit à la tranquillité” (the
right to be left alone).

60. Driven by the internet and the Internet of Things, profiling (by companies and State actors) has become commonplace. Since many technologies can nowadays operate from a distance, most of us are not even aware of this mass surveillance; people are also largely defenceless against it, as there are few ways to escape such surveillance. This creeping development and its impact on society and human rights have so far received little attention in political and public debate.
61. Georgetown University researchers recently published a study showing that half of all American adults, including innocent ones, are in a police face recognition database as part of a “perpetual line-up”. 
			(64) 
			Center
on Privacy & Technology at Georgetown Law (Georgetown CPT) (2016), The Perpetual Line-up: Unregulated Police Face Recognition in
America, 18 October 2016, <a href='http://www.perpetuallineup.org/'>www.perpetuallineup.org</a>. To give another example, in response to consumers’ concerns about Wi-Fi tracking by shop owners, the Dutch Minister of Economic Affairs and the (former) State Secretary of Security and Justice stated that people should just turn off their smartphone if they did not want to be tracked. 
			(65) 
			Tweakers, “Kabinet:
zet telefoon uit om wifi-tracking tegen te gaan”, 12 February 2014, <a href='https://tweakers.net/nieuws/94273/kabinet-zet-telefoon-uit-om-wifi-tracking-tegen-te-gaan.html'>https://tweakers.net/nieuws/94273/kabinet-zet-telefoon-uit-om-wifi-tracking-tegen-te-gaan.html</a> (in Dutch; “Cabinet: turn off your phone to counter Wi-Fi tracking”). This response suggests that the ability to track and trace people is treated as more important than the privacy rights of individuals. Until recently, people could turn off their PC if they did not want to be tracked online; in our “onlife” world this strategy has become outdated. There has been little debate about the cumulative effect of mass surveillance. Instead, triggered by specific applications and incidents, “mini debates” have been organised, and the outcome of each debate is a balancing act that mostly favours national security or economic interests. The sum of these debates, however, is the gradual but steady dissolution of the privacy and anonymity of the individual.
62. Several authors have stressed the detrimental effects of ubiquitous monitoring, profiling, scoring and persuasion. The Berlin Telecom Group regards large-scale monitoring and profiling activities as an unprecedented risk to the privacy of all citizens. In a worst-case scenario, the world could turn into a “global panopticon”. 
			(66) 
			International Working
Group on Data Protection in Telecommunications (Berlin Telecom Group)
(2013), Working Paper on Web Tracking and Privacy – Respect for
context, transparency and control remains essential 53rd meeting,
15-16 April 2013, Prague. What is at stake here is not only the risk of abuse, but also the right to remain anonymous and the “right to be left alone”, which in the digital era could be phrased as the right not to be electronically measured, analysed or coached.
63. In this respect, I should note that, in the context of the modernisation of Convention No. 108, two new rights have already been introduced in the draft proposal, 
			(67) 
			Draft modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data, September 2016: <a href='https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806a616c'>https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806a616c</a>, and the draft explanatory report: <a href='https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806b6ec2'>https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806b6ec2</a>. which seek to afford better protection for persons in a Big Data context. Article 8 states that: “Every individual shall have a right: a) not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration; ... c) to obtain, on request, knowledge of the reasoning underlying data processing where the results of such processing are applied to him or her ...;”. Moreover, the new Article 8bis stipulates that “... Each Party shall provide that controllers and, where applicable, processors, examine the likely impact of intended data processing on the rights and fundamental freedoms of data subjects prior to the commencement of such processing, and shall design the data processing in such a manner as to prevent or minimise the risk of interference with those rights and fundamental freedoms ...”. They are also required to “take into account the implications of the right to the protection of personal data at all stages of the data processing”.
64. Other issues, including data minimisation and security, have also been addressed in the current draft for modernising Convention No. 108; they too will help to address the concerns outlined above. As data-processing systems grow more complex, each stage of processing has to be tackled in order to ensure effective protection.

4.2. Right to choose between human contact and assistance by a robot

65. Robots may sometimes be able to take over a set of human tasks completely. In response to the development of autonomous military drones, hundreds of scientists and experts have proposed a ban on offensive autonomous weapons beyond “meaningful human control”. 
			(68) 
			Future
of Life Institute, “Open Letter on Autonomous Weapons – Future of
Life Institute”, 28 July 2015, <a href='https://futureoflife.org/open-letter-autonomous-weapons/'>https://futureoflife.org/open-letter-autonomous-weapons/</a>. This concept of meaningful human control is also relevant to other areas – such as the judiciary – in which autonomous or artificial intelligence systems can potentially take critical decisions. In contexts where human contact and interaction play a central role, as in raising children and caring for elderly people or people with disabilities, a “right to meaningful human contact” could be relevant.
66. At the level of the individual, a right to meaningful human contact could safeguard one’s well-being and prevent social and emotional deskilling. Modern technology should facilitate and not replace human contact: robots should only be used instrumentally for routine care jobs, and care-giving tasks that require emotional, intimate and personal involvement should be reserved for people.

5. Conclusion

67. At political level, we are not sufficiently aware of the growing impact of science and technology on society and on the daily lives of every individual. These developments raise highly controversial issues and ought to be treated as a political priority. They require new forms of open, informed and adversarial public debate – very early on in the process – involving not only lawmakers and experts but also non-governmental organisations (NGOs), the general public and the media. Science and technology cannot contribute to progress unless, at the same time, there is democratic progress. I would suggest that this issue be explored in depth in a future report.
68. In my opinion we need to raise awareness and better publicise “scientific, technological and industrial culture” through an informed debate. In too many cases, the media have tended to oversimplify complex issues by giving preference to controversy and sensationalism over deeper analysis. This has entrenched positions in public opinion that are later difficult to shift, making it hard to consider an issue in all its complexity with an open mind. Informed debate on scientific and technological developments and their ethical implications also needs to be part of school curricula.
69. We also need new forms of regulatory mechanisms and governance. It is increasingly difficult for lawmakers to match the speed at which science and technologies evolve with the required regulations and standards. The timespan available to evaluate risks and to determine the medium- and long-term consequences for human health and the implications for human rights is getting ever shorter. I therefore believe that in specific cases we need a new type of legislation that can be reviewed regularly (so-called “biodegradable rules”) in order to keep pace with such rapidly evolving and often radical developments in science and technology and their applications.
70. The Council of Europe Convention on Human Rights and Biomedicine (Oviedo Convention) has been a ground-breaking legal instrument, conceived in the 1990s to address human rights challenges with regard to the application of biology and medicine. As we have seen, the fields of NBIC application are now broadening well beyond the biomedical sector. The concerns that guided the drafting of the convention remain relevant today. The Oviedo Convention sets out fundamental guiding principles, some of which, in my opinion, could be extended to NBIC applications outside the biomedical field and implemented in this context.
71. We have moved on from the “treated” human being to the “repaired” human being, and what is looming on the horizon now is the “augmented” human being. This development raises new ethical questions owing to the new interfaces it creates between humans and machines or humans and molecules. The law must make it possible for individuals to resist pressures or constraints to adopt technologies which would improve their performance in areas such as sport, gaming and work. Equally, in a fast-developing digital era, individuals should have a right to refuse to be measured, analysed or coached.
72. There is an urgent need for greater supervision over automatic processing procedures for collecting, handling and using data produced by individuals and I would therefore strongly encourage the prompt conclusion of the modernisation of the Council of Europe’s Convention No. 108. Until recently, automated procedures relied on experts, specialists in their field, who captured, in the form of rules, an observation or an experience in order to translate it into a model. Nowadays, those experts have been replaced by “machine learning” processes (algorithms) which derive the same rules from the data themselves.
73. A few clicks and some metadata, filtered through such models, suffice to create a photofit of a person and to reveal their most intimate characteristics. Analysing preferences on social media networks, for example, has made it possible to pinpoint a person's political (85%), sexual (83%) and religious (82%) orientation and ethnic origin (95%) with levels of accuracy exceeding those obtained by humans. By modelling (profiling) individuals, these methods are also used to predict behaviour. GAFA (Google, Apple, Facebook, Amazon) make considerable investment in this research, which is developing fast and requires a legal framework.
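To illustrate the underlying mechanism only (the data below are synthetic and the accuracy obtained says nothing about the real studies referred to above), a simple model can be trained to infer a hidden personal attribute from nothing more than binary “like” signals:

```python
# Illustrative sketch with synthetic data: inferring a hidden attribute
# from binary "like" signals, the basic mechanism behind such profiling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_users, n_pages = 5_000, 200
trait = rng.integers(0, 2, n_users)  # hidden attribute to be predicted

# Each page is liked slightly more or less often depending on the trait.
page_bias = rng.normal(0, 0.1, n_pages)
like_prob = 0.2 + np.outer(trait - 0.5, page_bias)
likes = rng.random((n_users, n_pages)) < np.clip(like_prob, 0.01, 0.99)

# Train on 4 000 users, evaluate on the remaining 1 000.
model = LogisticRegression(max_iter=1000).fit(likes[:4000], trait[:4000])
print("accuracy on held-out users:", model.score(likes[4000:], trait[4000:]))
```

The ease with which such inferences can be drawn from seemingly innocuous signals is precisely why profiling on this scale requires a legal framework.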
74. We have seen throughout this report that diverse applications of new technologies and artificial intelligence can have serious consequences for human rights and therefore have to be regulated. The public should be informed of the value of the data being gathered and of the use made of them. A discussion process must be initiated without delay, because parliaments risk finding themselves powerless in the face of the development of these technologies by companies and large groups experienced in the rapid commercialisation of innovations. We need ethical and legal frameworks to govern these applications, and in order to develop them we need to improve exchanges between statisticians, IT specialists, legal experts, sociologists and specialists in ethics. Only through such interdisciplinary exchange, which would reflect the hybrid nature of the algorithms, could one begin to master these matters and put in place effective legal protection.
75. In a fast-moving world, scientific evaluation is an essential prerequisite to safeguarding representative democracy’s place in the functioning of our institutions. Consequently, I propose that the national parliaments equip themselves with “technology assessment” structures and that, additionally, they promote awareness-raising programmes and regular exchanges between those working in the human and social sciences and the technological sciences.
76. At European level, there is a need for improved co-ordination of the Council of Europe’s human rights work with the work of the European Union and of national parliaments. For example, the European Union already supports 120 robotics projects through the SPARC programme, which funds innovation in robotics by European companies and research institutions. For this, €700 million has been made available until 2020 under Horizon 2020, the European Union’s research and innovation programme. Moreover, in January 2017, the European Parliament’s Committee on Legal Affairs adopted the report by Ms Mady Delvaux; 
			(69) 
			European Parliament
News, Robots: Legal Affairs Committee calls for EU-wide rules, op.
cit. and the Resolution of the European Parliament (2015/2103(INL)) 
			(70) 
			<a href='http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2017-0051+0+DOC+XML+V0//EN'>www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2017-0051+0+DOC+XML+V0//EN.</a> was subsequently adopted on 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics that cover, inter alia, provisions on liability, intellectual property rights, standardisation, safety and security.
77. The EPTA network, of which both the European Parliament and the Parliamentary Assembly of the Council of Europe are members, might be one of the settings in which public hearings of opposing views could be held, bringing together experts, politicians and citizens. I would also advocate working on questions of ethics, science and technology in conjunction with United Nations Educational, Scientific and Cultural Organization (UNESCO) so as to harmonise recommendations at world level.