
“Dear Robot.”

The special effects of socially assistive robots on the privacy-related rights of care receivers

Tao Zi, “A Lonely Giraffe” (2016)*

Aviva de Groot

Master’s thesis, Information Law (IViR)
Monday 11 December 2017


* “Tao Zi enjoys the company of animals. However, during the creative process, these animals morph into treacherous machines that make her feel uncomfortable.” Image and text © Collection Dolhuys and Outsider Art Museum.



Contents

1. Introduction
   1.1 Robots: from fiction to badly understood reality
   1.2 Scope of researched field, robots, and object
   1.3 Research question and methodology
2. The emerging practice of socially assistive robots in care
   2.1 Peripheral sphere of care
   2.2 Socially assistive robots
   2.3 Special effects of robots on humans
   2.4 No guts, no glory? On the future of SARs and the moral machine question
3. Care receivers’ right to privacy: normative issues arising from interaction with robots
   3.1 Privacy and robots in a care context
      3.1.1 Privacy: protecting and enabling an autonomous life
      3.1.2 Data protection and data dominance
      3.1.3 Mapping HRI privacy issues in the current paradigm
      3.1.4 Privacy as an element of good care
   3.2 Effects on confidentiality, disclosure and trust
      3.2.1 Functions of medical confidentiality in the trusted network
      3.2.2 Robotic interference with trusted information flows
      3.2.3 Patient robot confidentiality?
   3.3 Issues arising from obscurity of robotic processing functionalities
      3.3.1 Three types of obscurity, and on using access and control as useful distinctions
      3.3.2 Obscurity of general operational capabilities
      3.3.3 Obscurity of software functionality: the cycle of social reactivity
   3.4 Influence on care receivers’ interaction
      3.4.1 Reserve: protecting the mental space of social actors
      3.4.2 Robotic manipulation of social interaction
      3.4.3 Chapter summary
4. Privacy and human robot interaction in the robot governance discourse
   4.1 Initiatives and challenges of legal and ethical robot governance
      4.1.1 Robot governance on the EU level — someday, maybe
      4.1.2 Regulating form, function and operation: the debate on using human concepts
      4.1.3 Technological standards
      4.1.4 Ethical principles
      4.1.5 Responsible research and innovation?
      4.1.6 Quality feedback loops
      4.1.7 Section summary
   4.2 Protecting disclosure autonomy
      4.2.1 “Dear robot:” disclosing as care receiver, confiding as friend?
      4.2.2 Privacy by design
      4.2.3 Section summary
   4.3 “Why, robot?” Understanding robotic behaviour
      4.3.1 Privacy by designing for visibly compliant behaviour?
      4.3.2 Algorithmic comprehensibility: rights, rules, feasibility
      4.3.3 Section summary
   4.4 Privacy within a group, privacy for the group
      4.4.1 Nudging groups and individuals
5. Conclusion
6. Bibliography


1. Introduction

1.1 Robots: from fiction to badly understood reality

From the first mention of robots by Czech writer Karel Čapek in his 1920 play R.U.R.: Rossum’s Universal Robots, a persistent image of artificial, human-like machine slaves with superhuman powers and slumbering world domination aspirations has surfaced in the popular (western) consciousness.[1] And now they are here, arguably on our own invitation, and although many are worried, up close and personal we seem to embrace their presence. They appeal to our fascination for real-life automata, which is indeed universal and predates Čapek by centuries.[2]

In industrial environments, confined factory robots performing pre-programmed tasks are joined by freely moving machines that cooperate with human workers. Outside of the factory, a fast-growing industry is responsible for robots showing up in areas like health care, education, hosting and entertainment. Robot toys, family and personal assistants are entering people’s homes. Robotic technology is developing at a fast pace to enable robots to operate in these increasingly complex environments.

Advanced sensors capture data from the robot’s environment and from the humans in it. Reactions are informed by pre-trained algorithms that are increasingly built to improve in the course of operation.[3] At an advanced level of machine learning, robots are expected to show autonomous progress and anticipate unexpected change in their environment. This is the field of artificial intelligence (AI), an independent field of study that only in part considers the humans involved in the world in which AI will operate.[4]
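To make this ‘sense, think, act’ cycle concrete, the sketch below shows a minimal control loop of the kind just described. It is illustrative only: all names (Observation, PolicyModel, read_sensors, act) are hypothetical, and the learning step stands in for whatever online training a given robot actually uses.

```python
# Illustrative sketch of a sense-think-act loop with in-operation learning.
# All names are hypothetical; no particular robot's architecture is implied.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Observation:
    camera_frame: Any    # raw image data captured by the robot's camera
    audio_level: float   # microphone volume reading
    touched: bool        # reading from a touch/pressure sensor


class PolicyModel:
    """Stand-in for a pre-trained reaction model."""

    def predict(self, obs: Observation) -> str:
        # A trained model would map sensor data to a social reaction here.
        return "greet" if obs.touched else "idle"

    def update(self, obs: Observation, feedback: float) -> None:
        # 'Built to improve in the course of operation': an online
        # learning step, omitted in this sketch.
        pass


def control_loop(policy: PolicyModel,
                 read_sensors: Callable[[], Observation],
                 act: Callable[[str], None],
                 get_feedback: Callable[[], float]) -> None:
    while True:
        obs = read_sensors()                # sense: the environment and the humans in it
        action = policy.predict(obs)        # 'think': choose a reaction
        act(action)                         # act: perform the behaviour
        policy.update(obs, get_feedback())  # learn from the human's response
```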

Isaac Asimov, like Čapek a novelist, in 1942 was the first to design laws for robots.[5] Unsurpassed for 50 years[6] (or 75, according to some[7]), his rule-based, top-down laws were written for robots and inscribed into their code. These kinds of laws aren’t adequate for robots that need to behave in harmony with laws that humans made for each other, and whose behaviour should reflect our social and ethical norms. They cannot steer robotic behaviour in our complex surroundings, nor can they keep up with the complexity of the technology itself. Modular, multipurpose designs are being developed, making it harder to foresee how the robots will be used, and to create comprehensive protective regulation.[8] Standardisation bodies are the first to catch up, with more and less successful results.

1. As projected by popular (western) media: R2D2, C3PO, Robby the Robot, B-9, Rosey, HAL 9000, Terminator’s machines, Ash, Cylons, Archos…
2. Examples range from approximately 1000 BC China, to Alexandrian inventor Ctesibius around 200 BC, the famous inventor Al Jazari around 1000 AD, and the Japanese craft of Karakuri in the 17th century, to name but a few landmarks.
3. Adam Greenfield, Radical Technologies: The Design of Everyday Life (Verso, Brooklyn, NY 2017)
4. Association for the Advancement of Artificial Intelligence, Introduction to symposium series Artificial Intelligence for Human-Robot Interaction, https://ai-hri.github.io/2017/
5. Isaac Asimov, 'Runaround' in The Complete Robot (first published in Astounding Science Fiction, Street & Smith Publications, 1942; used: Harper Collins Publishers, paperback edition 1995). 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Added later was the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.)
6. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014) 139
7. Robert Anderson, 'After 75 years, Isaac Asimov's Three Laws of Robotics need updating', theconversation.com, March 17, 2017
8. Neil Richards and William D. Smart, 'How should the law think about robots?' in Ryan Calo, Michael Froomkin, Ian Kerr (eds), Robot Law (Edward Elgar Publishing 2016) 3-22


The accelerating pace of development and deployment of robots, in so many sectors, prompts a growing expression of what are known as ELSI concerns: Ethical, Legal and Societal Issues.[9] Advanced robotics and related technologies[10] “unleash a new industrial revolution, which is likely to leave no stratum of society untouched.” This quote from the European Parliament’s request to the European Commission to create comprehensive governance instruments[11] echoes public sentiments of fear, hope and anticipation. These easily upgrade any robot conversation from topical to panoptical over a cup of morning coffee. Robots will set us free from ‘dull, dirty and dangerous’ jobs or replace us in cherished professions, will keep us safe on the road or dispose of us in utilitarian equations, end all wars or enable the worst kind yet.

The same concerns are voiced in the recent Council of Europe Parliamentary Assembly Recommendation on Technological convergence, artificial intelligence and human rights.[12] While “new technologies are blurring the boundaries between human and machine” at a fast pace, controversial issues require “new forms of governance, new forms of open, informed and adversarial public debate.” This debate on robotics and AI is qualitatively underdeveloped. Understanding of the technologies is hindered by competing media spectacles and by performance exaggeration by robot companies themselves.

To some extent, every new technology raises regulatory concerns. Collingridge describes what he calls the control dilemma: “when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”[13] Not all technologies used in robotics are new to us. But while we struggle to regulate algorithms at play on the internet, robots with and without advanced capabilities have special effects on us, as will be shown. The question is not just whether we need new laws, but also whether we need new laws to deal with ‘them,’ or (just) with us.

In the meantime, robots are already influencing the human robot interaction we look to regulate. To ‘show and tell’ acceptable reactivity, robots are built to assess human emotional states in increasingly advanced ways. Where self-learning technologies are used, a second cycle will assess the emotional effect of their actions, to improve upon these accordingly.[14] How this happens is partly obscure. Algorithmic technology sometimes baffles its own engineers,[15] and much of human robot interaction is not understood well enough to evaluate its effects. Robots are put on the market that are over- or underestimated, and badly understood. As Kranzberg’s first law reads, “technology is neither good nor bad; nor is it neutral.”[16] It needs to be understood. Our interactions with robots are already governed, too, through all four of the modalities identified by Lessig: (existing) law, social norms, markets and architecture.[17] In the real world as well as in the virtual, architecture is a form of governance that cannot be argued with. But where a road block sits plainly in our view, in ‘cyberspace’ as well as robotic software our behaviour can be steered in obscure ways. As technological architecture, this governance is not good, bad, or neutral either.[18]

9. UNICRI Centre for Artificial Intelligence and Robotics (http://www.unicri.it/in_focus/on/UNICRI_Centre_Artificial_Robotics)
10. Melvin Kranzberg, 'Technology and History: "Kranzberg's Laws"' (1995) Bulletin of Science, Technology, Society (15)1, 5-13. Kranzberg's second law, "Invention is the mother of necessity," means one technological invention spawns additional ones to deal with the enhanced affordances of the first. Advances in algorithmic technology to create value from the increased availability of sensory data are an example.
11. European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103, 27/01/17, Introduction, under B
12. Council of Europe, Parliamentary Assembly, Recommendation 2102 (2017), Technological Convergence, Artificial Intelligence and Human Rights
13. David Collingridge, The Social Control of Technology (St Martin's Press, edition 1982, New York 1980) preface
14. Eduard Fosch Villaronga, 'Towards a Legal and Ethical Framework for Personal Care Robots. Analysis of Person Carrier, Physical Assistant and Mobile Servant Robots' (Dissertation, Universitat Autònoma de Barcelona 2017) p. 251
15. Greenfield (n 3)
16. Kranzberg (n 10)
17. Lawrence Lessig, 'The Law of the Horse: What Cyberlaw Might Teach' (1999) 113 Harvard Law Review, 509-512



This paper inevitably jumps right into the middle of this hot soup of issues. By now, there is a vast and growing body of human robot interaction literature, and effects have been identified with varying degrees of understanding. Robotic technology is scrutinised by robot ethicists, and robot governance possibilities are being researched by a growing and interdisciplinary group of scholars. With the use of these sources, issues specific to human-robot interaction can be identified that are not yet adequately covered by existing regulation, nor addressed by emerging regulatory initiatives. Knowledge gaps are revealed that should inform an urgent policy and research agenda.

1.2 Scope of researched field, robots, and object

The focus of this research is a sector in which robot implementation takes place around vulnerable users: the care sector. Within this sector, the focus will be on day-to-day care, rather than strictly medical care. This distinction is not sharply delineated, as will be shown. The use of robots in care institutions is expected to grow, and robots with advanced ‘social’ capabilities will be offered to the sector.[19]

The object of concern will be the privacy-related rights of care receivers that interact with what are currently described as socially assistive robots (SARs).[20] These robots are deployed with social (inter)active aims: think of verbal guidance, physical encouragement, interaction (chat/play), and companionship.[21] In this emerging practice, the technological capabilities of the robots that are used are not well understood by care receivers. It is unclear what data are being processed, to what end, and with what effects. While new aims, functions and software are being developed fast, the private spheres of residents are being influenced. These range from intimate mental and physical space, to confidential conversations, to social group dynamics. In the current legal privacy regulation paradigm, data protection takes a leading role, labelling different types of information and processing stages in detail. However, technologies change, and confusion grows on how to protect the underlying values. The scientific project leader of a social robot development project recently announced the development of a sensor that maps the location and activities of residents without using cameras. With that, he states, their privacy is no longer interfered with.[22]

As this is a relatively new and rapidly developing field, the thesis will provide an introductory chapter. This chapter serves three purposes: it lays a necessary knowledge base for readers, it connects the research to the current state of robotic capabilities and of human robot interaction research, and it further explains the urgency of the project by pointing to some industry intricacies.

18. Roger Brownsword, 'In the year 2061: from law to technological management' (2015) Law, Innovation and Technology (7)1, 49
19. David Cameron et al, 'Presence of Life-Like Robot Expressions Influences Children's Enjoyment of Human-Robot Interactions in the Field' (2015) 4th International Symposium on New Frontiers in Human-Robot Interaction
20. David Feil-Seifer and Maja J. Matarić, 'Ethical Principles for Socially Assistive Robotics' (2011) IEEE Robotics and Automation Magazine 18(1), 24-31
21. George A. Bekey, 'Current trends in Robotics: Technology and Ethics' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 2; Theodore A. Metzler et al, 'Could robots become authentic companions in nursing care?' (2016) Nursing Philosophy 17(1), 36-48
22. Pieter Simoens, https://www.imec-int.com/nl/artikelen/vlaamse-zorgrobot-werkt-samen-met-zorgverleners-om-levenskwaliteit-van-bewoners-in-woonzorgcentra-te-verhogen


1.3 Research question and methodology

This thesis’ main research question was formulated as:

“What normative privacy-related issues arise from care receivers’ interaction with socially assistive robots in the non-medical sphere of health care?”

The following subquestions will support the eventual findings:

1. How are relevant privacy-related rights and values of care receivers (possibly) influenced by the implementation of socially assistive robots in the peripheral sphere of health care?

2. How are these issues addressed in the current robot law and robot ethics discourse?

3. In the absence of comprehensive governance, which issues warrant urgent policy response?

This thesis is situated within the European constitutional tradition, in the light of European practice and the current state of European robot governance. Within this setting, the European Convention on Human Rights (ECHR) and the interpretations of its provisions in the judgements of the European Court of Human Rights (ECtHR) provide the main legal privacy paradigm. The right to privacy and the separate right to data protection in the EU Charter of Fundamental Rights (CFR, the Charter) will also be discussed. The latter has been more prominently developed within the European Union (EU), and some especially relevant and debated concepts in the upcoming General Data Protection Regulation (GDPR) will be used in the analysis. International governance efforts like technical (ISO/IEEE) standards are looked at as well, as these are the first to respond and will be used in Europe, too.

The main method of research is literary study. Texts were sourced from multiple disciplines, as the robot law research community is a self-proclaimed mixed field by choice. Literature was sourced from legal studies (mainly used to crystallise recognised legal interests), engineering, science and technology studies, philosophy/ethics, psychology, and the already interdisciplinary field of privacy studies. As concerns the latter: just like robots, ‘privacy’ lacks a single legal or scholarly definition or typology. What’s more, discussions on privacy rights overlap with normative, conceptual approaches to privacy types and values.[23] This handicap is recognised by Koops et al, whose proposal for a typological framework was created to be projected on “current socio-technical and legal challenges.” In an effort to put this aim to a test, this paper will situate the identified issues within that framework. Issues are identified through the lens of legally protected rights, with an open eye for new phenomena that might not yet be articulated well enough. The existence of a codified right has therefore not been treated as a strict demarcation.

This methodology acknowledges the necessary information exchange between designers, engineers, policy makers, lawyers, ethicists and social scientists that will have to inform comprehensive robot governance as well as guide robotic development. As Calo states, “Whether at conferences or hearings, in papers or in draft legislation, the legally and technically savvy will need to be in constant conversation.”[24] After mapping the privacy issues, the study of all included disciplines has informed the governance section. Building on that chapter, the conclusion on what governance is there and what is urgently lacking has been drawn.

As a supporting method, contained qualitative research has been conducted in the course of eight interviews of approximately 2.5 hours each. Candidates from the industry and practice were selected on the basis of their experience with robots from different angles: engineering and distribution companies (2), robot research institutes (1), robot implementation trainers/advisers (2), care institutions/robot handlers (3).

23. Bert-Jaap Koops et al, 'A Typology of Privacy' (2017) University of Pennsylvania Journal of International Law 38(2), 483-575
24. Ryan Calo, 'Robotics and the Lessons of Cyberlaw' (2015) California Law Review (103)3, 561


All candidates are from the Dutch and Belgian industry. The interviews were mainly used as ‘reality checks’ on this research’s findings, and on companies’ (own) performance exaggerations.

The thesis will be structured as follows:

Chapter 2 looks at the emerging practice of social robots in peripheral care. The sector, the robots, and the special effects of robots on humans are introduced. The chapter ends with a (very) short introduction to the ‘morally autonomous robot’ discussion. This subject unhelpfully dominates news articles that feed the public debate, distracting from issues that need an urgent, realistic response. However, as the creation of ‘real’ autonomous robots and ‘real’ AI is pursued with some vigour in the industry, no robot-related study can ignore the subject.

In Chapter 3, privacy-related issues arising from human robot interaction are explored against the background of legally protected privacy rights and professional care ethics. The first section explores the legal privacy paradigm and how robots affect it in general; the ensuing sections thematically deal with effects on medical confidentiality, disclosure and trust (section 3.2), issues of obscurity of robotic processing functionalities (3.3), and influence on care receivers’ interaction (3.4). To avoid unwarranted robot-exceptionalism and to do justice to the novel field of robot law, robot governance issues are not treated in this chapter, but in the next.

Chapter 4 starts with an introduction to the current EU robot law paradigm. Affordances of soft law instruments are explored for dealing with governance questions on ethical issues, as suggested by the EU Commission in their response to the EU Parliament call. Next, specific protections and challenges for the issues identified in chapter 3 are investigated.

Chapter 5 concludes by explaining which issues possibly warrant a robot-exceptionalist, urgent policy response in the absence of a regulatory framework. The use of the word ‘possibly’ here means that salient knowledge gaps of robotic effects, as they surface in this paper, will be treated as risks that should inform care institutions’ experimental innovation agenda.

2. The emerging practice of socially assistive robots in care

In this chapter, the research’s scope of exploration is described in more detail. It explains the choices made therein: the growing industry focus on the chosen sector and the calls from within the sector, current capabilities and expected developments of socially assistive robots, and how robots have special impact on human behaviour. The chapter will support the claim that this interaction has high potential for interfering with the privacy of the chosen group.

2.1 Peripheral sphere of care

A prominent sector for growth of the whole robotics industry is health care,[25] including both medical and non-medical care. This is mostly explained by predictions of a (global) ageing population, which is named as one of the greatest political, social and economic challenges for EU States in the European Parliament report.[26] As technological innovation is seen as an important factor in solving societal problems, the addition of robot care seems inevitable in the light of predicted, declining human resources.[27]

25. Chris Holder et al, 'Robotics and law: Key legal and regulatory implications of the robotics age (Part I of II)' (2016) Computer Law & Security Review 32, 383-402
26. European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics (n 11), Introduction, under F



At home, robotic assistants could prolong autonomous living for the elderly, providing relief for informal (family) and professional care givers.[28] Additional monitoring functions, described as prospective tasks both in and out of institutions, are rapidly being developed to add to the domotica (“home automation”) industry that is already catering to these goals. Within institutions, as the population of persons with dementia grows, more pressure is put on very intensive care. For this group especially, but also for non-impaired residents, social interaction with robots (including ‘pets’[29]) is named as enhancing the quality of care receivers’ lives[30] and taking pressure off care givers. In a new way of catering to these separate but related goals, social robots are developed to be able to mingle with residents and proactively detect or even prevent disturbing behaviour by carrying out personalised interceptions.[31]

These kinds of robotic applications are a commercial R&D development of global scale,[32] but only part of this is care-specific from the start. In the current technological industry, ideas are being developed in a startup system. Bigger corporations harvest the output, and ideas end up in products, services and industries that they were not developed for.[33] Performance exaggeration and the inflated public debate are also at play here. Predictions and visions influence what is being developed: they can lay groundwork for acceptance of technologies,[34] promote investments and research agendas.[35] Producers want to make the first move and so put products on the market that are further developed afterwards.[36]

Other commercial sectors’ advances in ‘affective computing’ will be of interest for the care sector. Software used in robots will thus have different (kinds of) authors, which hampers predictability of their behaviour on the one hand, and new software version development on the other.[37] These processes are unhelpful if we assume that robots need to be carefully tuned to the needs of the vulnerable population that they are deployed with.[38] The problem is also examined by Gürses and Van Hoboken, who state that privacy governance should focus on the effects of what they call “the agile turn”.[39]

27. Ronald Leenes et al, 'Regulatory Challenges of Robotics: Some Guidelines for Addressing Legal and Ethical Issues' (2017) Law, Innovation and Technology (9)1, 30
28. David Feil-Seifer and Maja J. Matarić, 'Ethical Principles for Socially Assistive Robotics' (2011) IEEE Robotics and Automation Magazine 18(1), 24-31
29. Also answering to the need for live pets, which usually aren't allowed. Paradoxically, a growing concern for the well-being of actual pets increasingly plays a role. These suffer from the same anthropomorphic inclinations of humans, making us believe they like us as much as we like them. Jean-Loup Rault, 'Pets in the digital age: live, robot, or virtual?' (2015) Frontiers in Veterinary Science (2) Article 12
30. Robots are shown to alleviate stress, encourage human interaction, and provide time-intensive mental training. See among others David Feil-Seifer and Maja J. Matarić (n 28); Björn Kahl et al, 'Acceptance and communicative effectiveness of different HRI modalities for mental stimulation in dementia care' (New Frontiers of Service Robotics for the Elderly, Edinburgh 2014) 11-14
31. IMEC, Wonder project, https://www.imec-int.com/en/what-we-offer/research-portfolio/wonder
32. Theodore A. Metzler et al, 'Could robots become authentic companions in nursing care?' (2016) Nursing Philosophy 17(1), 36-48
33. Adam Greenfield, Radical Technologies: The Design of Everyday Life (Verso, Brooklyn, NY 2017)
34. Deborah G. Johnson, 'Technology with No Human Responsibility?' (2015) J Bus Ethics 127, 707-715
35. Adam Greenfield, Radical Technologies: The Design of Everyday Life (Verso, Brooklyn, NY 2017)
36. Seda Gürses and Joris van Hoboken, 'Privacy After the Agile Turn' in Selinger et al (eds), The Cambridge Handbook of Consumer Privacy (2017), 3
37. Ryan Calo, 'Robotics and the Lessons of Cyberlaw' (2015) California Law Review (103)3, 534
38. Aimee van Wynsberghe, Healthcare Robots: Ethics, Design and Implementation (Ashgate Publishing 2015)
39. Seda Gürses and Joris van Hoboken, 'Privacy After the Agile Turn' in Selinger et al (eds), The Cambridge Handbook of Consumer Privacy (2017), 12


One could say robots at least “need to be able to behave care-ful to be accepted,” reflecting acknowledged care values like empathy.[40] Van Wynsberghe distinguishes between designed-for-care robots and commercial robots used in care, or ‘serving care purposes’ without being integrated in the therapeutical relationship as such. In this categorisation, for the ‘real’ care robot, social bonds and interaction should be an expression of the robot meeting the ethical needs of a care receiver. For the other types, the social interaction is a goal in itself.[41] In light of the current lack of comprehensive governance of robots within the broader sector, this distinction will be hard to make. The SARs currently on offer are mostly not designed for their specific application. Non-medical certification is cheaper for manufacturers, while it is not forbidden to state that a robot has therapeutical qualities.[42] Robots designed for use in controlled therapeutical settings ‘scale down’ and are used by untrained operators.[43] Differently standardised robots can look exactly the same, adding to confusion about their capabilities.

Conflicting calls are also heard from within institutions. Predictability of robotic behaviour is valued for cognitively impaired care receivers, but when their high expectations of interaction capabilities are not met, this also disappoints and angers them. These and other calls from practice will feed back into the development process. As care receivers easily get attached to social robots’ presence,[44] needs based on habit might be a result of this interaction and get therapeutical life and weight of their own.[45] But as will be shown, the effects of human-SAR interaction aren’t understood well enough for such a feedback loop to be called safe.

In the next sub-section, some currently available SARs are described.

2.2 Socially assistive robots

Like other kinds of robots, the definition of socially assistive robots has been in continuous evolution together with their technology.[46] Appearance elements are used as classifiers: animal or doll-like machines are called humanoid, androids resemble real humans.[47] Movement seems to be a divisive factor, as many exclude non-moving personal assistants (PAs). Most, however, agree that current SARs and other robots are electrically powered and feature software-driven functionality that enables them to sense, analyse (‘think’) and react to their environment, in the absence of direct human control. Much cited is the working definition of Richards and Smart, although they exclude the above-named PAs: “a robot is a constructed system that displays both physical and mental agency but is not alive in the biological sense.”[48] Importantly, they add: the system only needs to appear to have agency. Both (semi-)autonomous systems and remote-controlled (“wizard-of-oz”[49]) operated systems are thus included.

40. Van Wynsberghe (n 38) 52
41. ibid. 64
42. Jordi Albo-Canals, 'Toy Robot versus Medical Device' in Marcel Heerink and Michiel de Jong (eds), Conference Proceedings New Friends 2015 - The 1st International Conference on Social Robots in Therapy and Education (Windesheim Flevoland), 96-97
43. Elaine Sedenberg, John Chuang, Deirdre Mulligan, 'Designing Commercial Therapeutic Robots for Privacy Preserving Systems and Ethical Research Practices within the Home' (2016) International Journal of Social Robotics (8)4, 575-587 - they warn that private experimentation with therapeutic robots actually warrants the protections of human subject research rules and ethics.
44. Matthias Scheutz in Matt Simon, 'Companion Robots Are Here. Just Don't Fall In Love With Them', ISPR, August 4, 2017, http://ispr.info/2017/08/04/companion-robots-are-here-just-dont-fall-in-love-with-them/
45. Van Wynsberghe (n 38)
46. David Feil-Seifer and Maja J. Matarić, 'Defining socially assistive robotics' (9th International Conference on Rehabilitation Robotics, 2005). On how definitions already differ in technical terms used by the International Federation of Robotics, see https://ifr.org/img/office/Service_Robots_2016_Chapter_1_2.pdf
47. 'Erica' is a currently much-named example: https://www.theguardian.com/technology/ng-interactive/2017/apr/07/meet-erica-the-worlds-most-autonomous-android-video



For legal and broader governance purposes, a precise enough understanding of the robot-related object of regulation is necessary, whether this object is the robot structure, a functionality or a capability. However, the lack of a commonly accepted definition holds particular value in the light of currently very fast technological developments. Robot law scholars connected to the EU-funded Robolaw Project have proposed a robot taxonomy of five classes: embodiment; autonomy level; function; environment; and human-robot interaction.[50] In the European Parliament report, “characteristics” for future definitions are proposed that include sensory and/or other environmental data exchange, including the trading and analysing of those data; self-learning capacity (optional); physical support; environmentally adaptive behaviour; and absence of life in the biological sense.[51]

Current examples of SARs used in care are Paro the baby seal, Justocat, dinosaur Pleo rb, and Probo, a fantasy animal designed for HRI research. Humanoids include the wheel-driven Pepper, which also features a touchscreen, and the small Nao/Zora with arms and legs. The Zora version is equipped with care-tailored Zora software.[52] All these robots move: some only in a stationary fashion (Paro), some break-dance (Nao/Zora). Some speak (Nao, Pepper), some only purr (Paro, Justocat). Some have facial expressions (Probo), others show their ‘emotional’ state by ‘body language’ and/or sound (Nao, Pleo). All of them record image and audio, and they feature a range of other sensors among them. Some can avoid obstacles, some have motion/surface/temperature sensors to assess how humans touch them. Some are coupled with wearable sensors of care receivers, some feature voice and/or emotion recognition modules. All of them perform more or less autonomous actions.

These SARs are partly or entirely controlled remotely, and so feature a wireless connection to a local (institutional) or external network or internet application. As even the simpler robots currently feature inadequate storage and processing powers, they will also need to be connected to an external server (for example a cloud service from the manufacturer).

48. Neil Richards and William D. Smart, 'How should the law think about robots?' in Ryan Calo, Michael Froomkin, Ian Kerr (eds), Robot Law (Edward Elgar Publishing 2016) 3-22
49. Wizard of Oz is a term used in research to indicate robotic functionality is covertly remote controlled, unknown or unseen by the human subject.
50. Pericle Salvini, 'Taxonomy of Robotic Technologies' in Robolaw Project (2007-2013) (2013) Deliverable D4.1, p. 2
51. European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103, 27/01/17, Introduction, p. 8
52. In featured order, with SoftBank Robotics featuring both Pepper and Nao: http://www.parorobots.com/, http://www.justocat.com/nl/, http://www.pleoworld.com/pleo_rb/eng/index.php, http://probo.vub.ac.be/Probo/, https://www.ald.softbankrobotics.com/en, http://www.zorarobotics.be/index.php/nl/


Discussions revolving around the internet of things (IoT) will apply in these cases, as robots can be considered a thing within the IoT.[53] However, I will leave these discussions outside of my scope for the most part, as they don’t follow from HRI specifically and are addressed to some extent in the discussions around other IoT objects.[54]

Now that the range of practice and the studied robots have been described, the next sub-section introduces how human interaction with SARs is a force to be reckoned with.

2.3 Special effects of robots on humans

To develop meaningful SAR governance instruments, descriptions of robotic functionality should include their architecture’s ‘social’ affordances.[55] This relates to robots’ social meaning and takes into account our human inclination to anthropomorphise.[56] Social robots “occupy the exact same space as humans”[57] and their social impact includes effects on humans’ cognitive, social and emotional status.[58] The privacy concerns this raises require “an in-depth examination of human-robot interaction within multiple disciplines over many years.”[59] Recalling Collingridge: by the time we know what we want to regulate, regulating might have become (very) difficult.

What is clear by now is that robots that show social behaviour especially “push our Darwinian buttons,” encouraging us to enter into social relationships with them.[60] It is not straightforward why this happens, or how to control it. Soldiers get attached to robotic carriers to a point where some worry they might risk their lives to save them from harm: the opposite effect of the aim to shift this risk away from humans. The operator of a robot mine defuser couldn’t bear the sight of the machine carrying on its last, not yet ‘sacrificed’ leg. On a lighter note, people feel sorry for their Roomba when it gets stuck under the sofa.[61]

Social robots are made for purposes or contexts in which our inclination to react this way is relied on. Some scholars say that because of our reactions, robots should be viewed as social agents instead of tools,[62] or at least not just as tools. Others warn that this is deceptive framing, and that we should not humanise machines.[63] Taking into account these machines’ charismatic power, it makes us vulnerable to manipulation.[64] Yet others stress that for evaluation purposes, whether [care] robots ‘are’ just machines is co-defined by how we perceive and use them, which depends on the specific context and (inter)actions.[65] All seem to agree at least on this: for research and governance purposes, HRI should be “a close ally.”[66]

53. Bibi van den Berg, 'Mind the Air Gap: Preventing Privacy Issues in Robotics' in Serge Gutwirth, Ronald Leenes and Paul De Hert (eds), Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection (Springer, Dordrecht 2016) 1-24
54. Professor Machine Learning Max Welling works on the development of smaller neural networks that would eliminate the need to use external clouds: Marc Hijink, NRC Handelsblad, 24 November 2017
55. Calo 2015 (n 37) p. 531-532
56. To attribute human qualities to non-living objects (Salem, Maha et al, 'To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability' (2013) Int Journal of Social Robotics 5, 313-323) and animals.
57. Joanna Bryson, 'The meaning of the EPSRC principles of robotics' (2017) Connection Science (29)2, 130-136
58. Pericle Salvini, Lecture, Summer School on "The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications", Scuola Superiore Sant'Anna, Pisa, July 6, 2017
59. Ryan M. Calo, 'Robots and Privacy' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 12
60. Sherry Turkle, 'Authenticity in the age of Digital Companions' (2007) Interaction Studies (8)3, 501-517
61. Matthias Scheutz, 'The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 13
62. Seifer and Matarić 2005 (n 46)
63. Woodrow Hartzog, 'Et tu, Android? Regulating Dangerous and Dishonest Robots' (2016) Journal of Human-Robot Interaction (5)3, 70-81
64. Bryson 2017 (n 57)



New objects (persons, situations, information flows[67]) worthy of privacy protection arising from HRI will need to be recognised. Addressing emerging issues now sheds light on what needs to be more thoroughly researched. This research will do both.[68]

The HRI-related privacy issues that this paper identifies can be grouped around three ‘themes’: 1. influence on confidential information flows of care receivers, 2. obscure collection and feedback of characteristic/ emotion-related data, and 3. privacy of care receivers as part of their residential group. This chapter will conclude by presenting some evidence of the special effects of robots within the connected spheres. How these effects can produce privacy interferences will be the subject of the next chapter. The themes used here will be used throughout the research.

1. The concept of medical confidentiality, as a species of privacy, will be further explained in the next chapter. It stretches out to the wider circle of carers and to day-to-day care practices. Care receivers need to trust this network, and be willing to disclose, to be able to receive good care.[69] Robotic presence seems to have a trust-building influence on care receivers, and could be used to help them feel at ease. An experiment by Jinnai et al examined the effect of a “humanlike” communication medium on human relationship building (in this case via telephone conversations).[70] Their medium was Elfoid, a ca. 15 cm tall gender-neutral, open-armed plastic figurehead mounted on a mobile phone. Results showed an increase in the amount of self-disclosure, which lasted for about a month in the crucial establishment phase. The researchers suggested proxemics theory could account for some of this: participants touching the Elfoid might misattribute affinity with the doll onto their conversation partners. Another explanation lies in the proxy function of the Elfoid: participants seemed to interact directly with the doll in ways they did not with the ‘naked’ phone, like gesturing towards it.

Other reports show people easily confide in robots themselves. Dementing patients can be enticed to interact with robots after refusing human interaction.[71] Less cognitively impaired care receivers experienced robots as more predictable and less risky than humans.[72] For monitoring purposes as well, users are also known to disclose more, and more kinds, of information to a robot than to video cameras.[73]


65. Mark Coeckelbergh, 'Care robots and the future of ICT-mediated elderly care: a response to doom scenarios' (2016) AI & Society 31, 455-462
66. Hartzog 2016 (n 63) 81
67. Helen Nissenbaum, 'Respect for context as a benchmark for privacy online' in Beate Roessler and Dorota Mokrosinska (eds), Social Dimensions of Privacy (Cambridge University Press 2015) ch 15
68. Michael Froomkin, 'Introduction' in Ryan Calo, Michael Froomkin, Ian Kerr (eds), Robot Law (Edward Elgar Publishing 2016), x
69. Anita Allen, 'Compliance-limited health privacy laws' in Beate Roessler and Dorota Mokrosinska (eds), Social Dimensions of Privacy (Cambridge University Press 2015)
70. N. Jinnai et al, 'The Impact of a Humanlike Communication Medium on the Development of Intimate Human Relationship' in Cheok A., Devlin K., Levy D. (eds), Love and Sex with Robots, Lecture Notes in Computer Science, vol 10237 (Springer, Cham 2017)
71. Björn Kahl et al, 'Acceptance and communicative effectiveness of different HRI modalities for mental stimulation in dementia care' (New Frontiers of Service Robotics for the Elderly, Edinburgh 2014) 11-14
72. Joanna J. Bryson, 'Robots Should Be Slaves' in Yorick Wilks (ed), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues (John Benjamins 2010) ch 11
73. Elaine Sedenberg, John Chuang, Deirdre Mulligan, 'Designing Commercial Therapeutic Robots for Privacy Preserving Systems and Ethical Research Practices within the Home' (2016) International Journal of Social Robotics (8)4, 575-587, citing Caine et al


Robots might also enhance an established (human) therapeutical relationship. Pettinati et al researched whether a Nao, as a “peripheral tool,” might alert to norm violations in talking sessions between care givers and care receivers. Violations can happen, for example, when patients suffering from Parkinson’s disease show a loss of expressivity and a doctor reacts negatively. As a first tested step, an attending Nao was considered unintrusive by test subjects even though it was experienced as listening in, while human attendance would disturb their disclosure readiness.

2. The second theme revolves around the operation of robotic hardware and software. The body of the robot functions as a support for anthropomorphic projections, for example by providing a head with eyes to look at. These features don’t necessarily confuse care receivers about robots being alive, although this can be different for children.[74] However, these features do seem to influence trusting social behaviour. One study illustrates this when a test subject stated “I think I want to see him as something that is alive. He’s charming.”[75]

To get the most out of their environment, autonomously moving robots carry themselves in optimal accordance with the location of their sensors. Users mainly explain a robot’s movements in relation to the goals that they expect it to pursue.[76] Unpredictable robot behaviour prompts stronger human projections,[77] and technical failures don’t necessarily disturb this process. A robot that asked a human to put an object ‘up there’ but gestured towards the floor was experienced as particularly likeable. A range of human characteristics was attributed to the robot to make sense of what it was doing. Erring was considered human-like; other participants called the failing robot “cheeky,” or said that it tried to fool them.[78] This human assessment of robotic capabilities also has people adapt their behaviour to that of SARs. Older people who did not understand why a robot did not ‘hear’ them right adapted their speech.[79] One report, fittingly named Robots have needs, too, explains how the process of perfecting a robot’s physical distance to a human to capture her data is also an interactive one.

Data capture by robots is done with the aid of advancing sensor technology. Algorithmic technologies are continuously developed to analyse these data and assess physical and emotional states. As social robots are expected to reciprocate ‘feelings,’ these assessments in turn will influence care receivers. The cycle is perpetual: human subjects are known to adjust their behaviour to that of robots, ‘helping’ them to get it right. One study showed that, to a certain extent, people also tend to conform to the “norm of reciprocity” with robots.[80]
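To make this perpetual cycle concrete, the sketch below shows, in deliberately simplified form, how an emotional-state estimate can become the robot’s own training signal. estimate_emotion and its inputs are invented stand-ins, not a description of any actual SAR.

```python
# Simplified sketch of the double cycle: estimate the human's emotional state,
# act, re-estimate, and use the change as feedback. estimate_emotion and its
# inputs are invented stand-ins, not a description of any actual SAR.

def estimate_emotion(audio_volume: float, smile_score: float) -> float:
    """Crude stand-in for an affect-recognition model: returns a valence
    score in [-1, 1], negative = distressed, positive = at ease."""
    return max(-1.0, min(1.0, smile_score - audio_volume / 100.0))


def reactivity_cycle(read_features, choose_action, perform, adapt, steps: int = 100) -> None:
    valence = estimate_emotion(*read_features())
    for _ in range(steps):
        action = choose_action(valence)  # cycle 1: react to the assessed state
        perform(action)
        new_valence = estimate_emotion(*read_features())
        # Cycle 2: the emotional effect of the robot's own action becomes
        # its training signal, closing the loop the text calls perpetual.
        adapt(action, reward=new_valence - valence)
        valence = new_valence
```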

3. A third cluster of findings revolves around robotic influence on care receivers’ social interaction. Reserve, a state of privacy described among others by Westin, allows humans to maintain their mental and physical distance(s) to others, for example in communal spaces of living quarters.[81]

74. Jacqueline K. Westlund et al, 'Measuring Children's Long-Term Relationships with Social Robots' (2017) Workshop on Perception and Interaction Dynamics in Child-Robot Interaction, held in conjunction with Robotics: Science and Systems XIII, MIT Lab
75. Susanne Frennert, Håkan Eftring, Britt Östlund, 'Case Report: Implications of Doing Research on Socially Assistive Robots in Real Homes' (2017) International Journal of Social Robotics 9, 401-415
76. Robert H. Wortham, Andreas Theodorou and Joanna Bryson, 'Robot Transparency: Improving Understanding of Intelligent Behaviour for Designers and Users' (TAROS 2017, 19-21 July 2017, University of Surrey)
77. Kate Darling, 'Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behaviour towards robotic objects' in Ryan Calo, Michael Froomkin, Ian Kerr (eds), Robot Law (Edward Elgar Publishing 2016) 213-231
78. Salem, Maha et al, 'To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability' (2013) Int Journal of Social Robotics 5, 313-323; see also Scheutz 2012 (n 61): robots that expressed emotions about their mistakes were liked for that.
79. Frennert, S. et al (n 75)
80. Eduardo Benítez Sandoval, 'Reciprocity in Human Robot Interaction' (Dissertation, University of Canterbury, Christchurch (NZ) 2016). As proposed by Gouldner and used by the author of the study (PhD thesis): "to those who help us, we should return help, not harm". Unpredictable behaviour of "briber robots" was not judged in moral terms as strongly as it would be in a human, though, and they were less rewarded for virtue.


SARs are known to stimulate interaction among residents and between residents and carers. They are praised for their stimulating presence, eliciting positive responses from care receivers.[82]

Outside of care, Hoffman and others developed a “peripheral empathy-evoking robotic conversation companion.” Their aim is to increase awareness and promote empathy “by proxy” between conversationalists without compromising natural conversation patterns. Kip1 resembles a desktop lamp without a bulb. It shows one of three ‘emotional’ states in reaction to the perceived friendliness of conversations: from sound (volume) variations it infers calm, curious, and fearful states; in the latter, the lamp retracts and shivers. Within care, these and other technologies are a focus point of the industry that, as we saw, is geared to cater to institutions that deal with large groups of ‘restless’ elderly, often dementing, persons. But where residents will be nudged ‘for their own good,’ questions of privacy, paternalism and autonomy will need to be raised as well.[83]
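As a rough illustration of how little data such an inference needs, the sketch below maps the variation in recent microphone volume readings onto Kip1’s three states. The thresholds and the choice of feature are invented for illustration; the actual Kip1 design is not described in that much detail here.

```python
# A guess at the kind of rule Kip1 could use, based only on the description
# above (volume variation -> calm / curious / fearful). The thresholds and
# the choice of feature are invented for illustration.
import statistics

def infer_state(volume_samples: list[float]) -> str:
    """Map recent microphone volume readings (e.g. in dB) to one of three states."""
    spread = statistics.pstdev(volume_samples)  # how much the volume varies
    if spread < 2.0:       # steady, quiet conversation
        return "calm"
    if spread < 8.0:       # lively variation
        return "curious"
    return "fearful"       # abrupt loudness swings: the lamp retracts and shivers

print(infer_state([55.0, 56.1, 54.8, 55.5]))  # -> calm
```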

2.4 No guts, no glory? On the future of SARs and the moral machine question

No paper on social robots should avoid the subject of whether we are, can, or should be building the ‘moral, autonomous robots’ that have such a prominent place in legacy and the media. Not just because some roboticists and engineers are trying (hard), but because the discussion sheds light on the roles we can or should deploy our current robots in, and how we can, and should, frame their current presence. Or even, as several writers state, because this discussion helps us to understand ourselves, the robot builders.[84]

That understanding, or rather the lack of it, takes up a prominent place in discussions among robot ethicists. As we will see, the unsettled question of what is morally right already complicates the development of ethically acceptable robot code. As coding is (still) an exact science, we need to be able to evaluate what ethical behaviour looks like.[85] In commercial industries, understanding “the guts of human decision making” is studied avidly to optimise personalised communication and elicit positive (commercial) response.[86] Insights from psychology and neurology inform us that human moral decision making leans on primitive and deliberative types of reasoning,[87] and on emotions, empathy and semantic understanding.[88] Whether these elements are also necessary ingredients of moral decision making is undecided. For those who agree they are, differences of opinion centre around whether these traits can (ever) be coded.

81. Alan Westin, Privacy and Freedom (The Bodley Head Ltd, 1962); Kirsty Hughes, 'The social value of privacy, the value of privacy to society and human rights discourse' in Beate Roessler and Dorota Mokrosinska (eds), Social Dimensions of Privacy (Cambridge University Press 2015) ch 12
82. Natalie Wood et al, 'The Paro robot seal as a social mediator for healthy users' (2015) 4th International Symposium on New Frontiers in Human-Robot Interaction
83. Jason Borenstein and Ronald Arkin, 'Nudging for Good: Robots and the Ethical Appropriateness of Nurturing and Charitable Behaviour' (2016) 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
84. Rob Sparrow, 'Can Machines be People? Reflections on the Turing Triage Test' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 19; Colin Allen and Wendell Wallach, 'Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?' ibid. ch 4
85. Anthony F. Beavers, 'Moral Machines and the Threat of Ethical Nihilism' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 21
86. Greenfield (n 33)
87. Gianmarco Veruggio and Keith Abney, 'Roboethics: The Applied Ethics for a New Science' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 22
88. Allen and Wallach (n 84)


A Kantian understanding of autonomy and moral decision making easily leads to the conclusion that robots, by nature lacking the possibility of free, intrinsically motivated goal setting, will never be moral machines.[89] Proponents of this view argue against the use of human concepts like ‘moral’ and ‘autonomous’ in the first place.[90] Others see this as a trivial question of language: “if the words [functional morality] bother Kantians, let them call our project by another name, such as norm-compliant computing.”[91]

Van Wynsberghe emphasises that in care, moral responsibility should recognisably lie with human care givers. As moral factors, robots will impact care givers’ decisions. The machines should be intentionally designed to avoid roles which require moral responsibility: to avoid the role of moral actors.[92] This also has an operational dimension. Care receivers will still treat them as moral actors; this might pose fewer problems when we all agree that robots are incapable and stress this in our communications.[93]

That might be easier said than done. As several studies show, the tasks that robots are currently being developed for easily include moral aspects. A robot instructed to remind an elderly person to take her medication will need to ‘sense’ when the task is fulfilled, as well as be instructed whether or not to warn family or care givers when the user refuses.[94] Another issue surfaces in experiments like that of Maha Salem et al, which show how humans knowingly follow instructions of an obviously faulty robot.[95]
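A minimal sketch of the medication example shows where the moral aspect ends up: in a design-time parameter. All names are hypothetical; the point is only that someone other than the robot decides whether a refusal is escalated.

```python
# Minimal sketch of the medication example: the 'yes/no warn' choice is a
# design-time parameter, not the robot's own moral judgement. All names are
# hypothetical.
from datetime import datetime

ESCALATE_ON_REFUSAL = True  # a moral choice made by designers/deployers


def medication_reminder(user_took_medication: bool, notify_caregiver) -> str:
    if user_took_medication:       # 'sensing' task fulfilment, however measured
        return "task fulfilled"
    if ESCALATE_ON_REFUSAL:
        notify_caregiver(f"medication refused at {datetime.now():%H:%M}")
        return "refusal reported"  # safety prioritised over confidentiality
    return "refusal respected"     # confidentiality kept, safety risk accepted
```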

As this sub-section shows the very human activity of struggling with technology, a citation of Kranzberg’s final law seems fitting. Stressing the interdisciplinary challenges, it serves as well to introduce the following chapters: “Behind every machine, I see a face, indeed, many faces: the engineer, the worker, the businessman, or businesswoman, and, sometimes, the general and the admiral. Furthermore, the function of technology is its use by human beings… and sometimes, alas, its abuse and misuse.”[96]

3. Care receivers’ right to privacy: normative issues arising from interaction with robots

In this chapter, privacy issues arising specifically from human interaction with social robots will be explored. The term social here relates not only to the type of robot, but also to the role in which it is introduced in institutions. With regard to interaction, it is important to point out that even when a SAR is not interacting with a care receiver, its presence is experienced as human-like. Although other objects like computers also have social presence, as was shown by Reeves and Nass in The Media Equation,[97] that of a SAR is purposefully enacted through its interface. Future robot governance might need to take this into account.

89. Pericle Salvini, 'Taxonomy of Robotic Technologies' in Robolaw Project (2007-2013) (2013) Deliverable D4.1
90. Shannon Vallor, Panel discussion, 4TU.Ethics conference, June 12-13, 2017
91. Colin Allen and Wendell Wallach, 'Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?' in Patrick Lin, Keith Abney and George A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press 2012) ch 4
92. Aimee van Wynsberghe, Healthcare Robots: Ethics, Design and Implementation (Ashgate Publishing 2015), 120
93. Joanna J. Bryson, 'Robots Should Be Slaves' in Yorick Wilks (ed), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues (John Benjamins 2010) ch 11
94. Heather Draper and Tom Sorell, 'Ethical values and social care robots for older people: an international qualitative study' (2016) Ethics of Information Technology 19, 49-68; Ajung Moon et al, 'The open Roboethics initiative and the elevator-riding robot' in Ryan Calo, Michael Froomkin, Ian Kerr (eds), Robot Law (Edward Elgar Publishing 2016)
95. Salem, Maha et al, 'Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust' (Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, 2015)
96. Melvin Kranzberg, 'Technology and History: "Kranzberg's Laws"' (1995) Bulletin of Science, Technology, Society, STS Press (15)1, 5-13
97. Byron Reeves and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (CSLI Publications 1996; paperback edition 1998)


The privacy related rights at stake will be explained along the lines of the three themes used in the previous chapter: medical confidentiality, disclosure and trust; obscurity of robotic access and processing functionality; and influence on care receivers’ interaction. A preceding section will introduce the concepts of privacy and data protection in the European constitutional tradition. This will show how social robots are inevitably cast to play a role within the different protected spheres.

To provide a concise introduction to the broad subject of privacy has never been a simple task. Since both privacy and data protection have become intensely debated subjects in the light of technological developments over the last decades, it has become decidedly harder. A short explanation of (the usefulness of) the typology of privacy that was used to select the themes will conclude the section, leaving room for reference to the professional ethics that are part of this paper’s chosen sectoral governance.

3.1 Privacy and robots in a care context

Although privacy and data protection remain closely related concepts, within Europe the latter is increasingly developing into an autonomous right as well. To distinguish their respective legal spheres, data protection will mainly feature in the second sub-section, while the more comprehensive right will be treated in the first. The section will also show how the growing focus on data (protection) can be problematic.

3.1.1 Privacy: protecting and enabling an autonomous life

The human right to privacy is built on one of the oldest legal principles, pertaining to the separation of public and private domains. 98 As we came to know it in Western jurisdictions, the power check on states secured protections for private life: in the home, of the body, of family and of correspondence. Increasingly, ‘private’ was also understood to refer to the individual freedom to live as one chooses, as opposed to being controlled. 99 This is often viewed in connection with the (again, Western) liberal origins of these legal spheres. But as social and anthropological studies show, similar privacy needs and corresponding values are identified worldwide, although their forms vary. 100

In Europe, privacy related rights are protected in both constitutional and common law jurisdictions. The wording of provisions differs across states, as does the clustering of protected objects, for example combining or separately protecting reputation, the home, or marriage rights. Some constitutions concentrate on the right(s) as protection(s) (NL, ES, IT); some (partly or entirely) formulate the right in provisions on personal freedom(s) (DE, PL, IT). The rights are (also) laid down in the European Convention on Human Rights and Fundamental Freedoms (ECHR) and in the EU Charter of Fundamental Rights (EU CFR, ’the Charter’).

This paper takes the ECHR and the Charter as backdrops for analysis. Courts of law in Council of Europe Member States are obliged to follow the jurisprudence of the European Court of Human Rights (ECtHR) when interpreting national privacy laws. The Charter also incorporates the meaning of the ECHR’s provisions in corresponding articles, 101 and the European Court of Justice (ECJ), which deals with complaints on the basis of the Charter, frequently refers to the ECtHR’s body of jurisprudence.

98 Bart van der Sloot, Privacy as a Virtue: Moving Beyond the Age of the Individual in Big Data (Dissertation, University of Amsterdam; School of Human Rights Research Series, Vol 81, Intersentia 2017)
99 Bert-Jaap Koops et al, 'A Typology of Privacy' (2017) University of Pennsylvania Journal of International Law 38(2) 493
100 See for example Sjaak van der Geest, ‘Life after dark in Kwahu-Tafo, Ghana’ (2007) Etnofoor 20(2) 23-39. From the same author, on how strong privacy may prove harmful where the social embedding of medical confidentiality cannot be relied on: Benjamin Kwansa, Jonathan M. Dapaah and Sjaak van der Geest, ‘The dark side of privacy: Stigma, shame, and HIV/AIDS in Ghana’ (published by author, http://www.sjaakvandergeest.socsci.uva.nl/pages/publications/selected/privacy.htm, 2007)
101 Charter of Fundamental Rights of the European Union (2000/C 364/01), article 52, under 3


Article 8 ECHR (“Right to respect for private and family life”) reads as follows: “1. Everyone has the right to respect for his private and family life, his home 102 and his correspondence. 103 2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.” The right is thus not absolute, and the court either weighs an individual’s right against another fundamental right in play (e.g., freedom of expression 104) or assesses the necessity and proportionality of the public interference (e.g., in cases of surveillance 105). For how these balancing exercises are frequently problematic, see for example Van der Sloot. 106

The Charter’s article 7 (“Respect for private and family life”) states: “Everyone has the right to respect for his or her private and family life, home and communications.” Here, as in the ECHR and other human rights treaties, 107 the right is not absolute. Exceptions to all Charter rights are laid down in article 52. 108 Article 8 (“Protection of personal data”) will mainly feature in the next sub-section and will be cited there.

As the ECtHR has stated, private life is “a broad term not susceptible to exhaustive definition.” 109 Both privacy of the person and privacy as related to family and social ties have been developed in a long line of jurisprudence. Some of this will be used below to further explain the protected spheres. The continuous, multidisciplinary studies of the morals and values that underpin the legal rights also influence their developing meaning. An important example is the long-lasting obfuscation of what happens in the paternal household, which left women without state protection from domestic abuse for centuries. 110

Before the Court’s interpretations of the provisions are described, it shall be noted that the notion of personal autonomy has been an important underlying principle. Autonomy itself is not formulated as a separate human right. This makes sense if one accepts that autonomy is a state humans are inevitably born into. 111 However, in- and outside of law, autonomy is recognised as a ‘soft’ state for two reasons: humans can be influenced, as well as denied actionable ‘space,’ both in a concrete and a virtual sense. Privacy rights can protect against both. 112 This is also recognisable in the fact that the negative obligations for states not to interfere are

102 ‘Dwelling’ is a more appropriate explanation, as the Court has made clear that ‘home’ is not defined by its outward design or appropriateness.
103 Jurisprudence has broadened the scope to other forms of communication.
104 Like that of the media: ECtHR, Von Hannover v. Germany II, Application no. 40660/08, 7 February 2012
105 ECtHR, Klass and Others v. Germany, Application no. 5029/71, 6 September 1978
106 Bart van der Sloot, 'Is the Human Rights Framework Still Fit for the Big Data Era? A Discussion of the ECtHR’s Case Law on Privacy Violations Arising from Surveillance Activities' in Serge Gutwirth, Ronald Leenes and Paul De Hert (eds), Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection (Springer, Dordrecht 2016) 411
107 E.g. the Universal Declaration of Human Rights (article 12), the International Covenant on Civil and Political Rights (articles 17, 23)
108 Charter of Fundamental Rights of the European Union (2000/C 364/01), article 52, under 1: “Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.”
109 ECtHR, Bensaid v. United Kingdom, Application no. 44599/98, 6 February 2001
110 On the broader phenomenon: Judith Wagner DeCew, ‘The Feminist Critique of Privacy’ in In Pursuit of Privacy: Law, Ethics, and the Rise of Technology (Cornell University Press 1997) ch 5
111 This is why slavery is a crime against humanity, and it explains the concept of legal representation, e.g. of children and of those considered mentally unfit to mind their own business. It is debated in the Autonomous Robot discussions.
112 This is also visible in consumer law, where the ability of consumers to make a free and informed choice is protected.


increasingly understood to entail positive obligations for states to make sure that the rights as laid down in their laws are not illusory. 113 Laws can then have ‘horizontal effect’ and protect against obstacles and interferences from parties other than states.

An important recognition of the soft state of autonomy can also be found in the capabilities approach developed by economist and philosopher Amartya Sen and, for human rights purposes, further developed by philosopher Martha Nussbaum and others. This approach is frequently cited in care-robot related literature. 114 The central idea is that merely providing a resource (like a right) ignores the actual possibilities of subjects to enjoy the functions of any particular (fundamental) capability. This approach is also present in care ethics, as will be explained below. For care receivers, especially those in hospitals or institutions, protecting and re-creating their private spheres, as well as enabling them to use their remaining capacities, carries even more weight than it does for more ‘capable’ citizens.

In the Court’s interpretations, the possibility for self-determination 115 and ‘decisional privacy’ 116 necessitates protection of a person’s physical and mental integrity. 117 This includes the right to know, and to shape, one’s social, gender, ethnic, and sexual identity, and to be able to develop one’s personality. 118 It can also protect one’s mental health, moral integrity being a requirement to form social bonds. 119 Related to this, article 5, wherein the right to liberty and security of the person is laid down, is also called upon in cases of involuntary institutionalisation. 120

In the explanation of family life, the freedom to create family ties and to have children is protected. 121 Family life can be established by living together, but this is not a requirement (e.g., independently living parent(s)). 122 Privacy as a necessary building block to establish and develop non-familial social relationships with other human beings and the outside world is also well established. Privacy is thus protected within the work environment, as many relationships are established in the course of working life. 123 And where migrants face expulsion, social ties within the migrant community can establish article 8 protected ‘private life’ in the absence of family ties. 124

Within this broad understanding of family life and social relations, the focus is on individuals’ need to have their abilities protected rather than on how these protections benefit society. A pluralist, tolerant community

113 ECtHR, Airey v. Ireland, Application no. 6289/73, 9 October 1979
114 Shannon Vallor, 'Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century' (2011) Philosophy and Technology 24: 251-268, at 262; Mark Coeckelbergh, 'Care robots and the future of ICT-mediated elderly care: a response to doom scenarios' (2016) AI & Society 31: 455-462, at 457; Ibo van de Poel and Peter Kroes, 'Can Technology Embody Values?' (2015) in S.O. Hansson (ed), The Role of Technology in Science: Philosophical Perspectives, Philosophy of Engineering and Technology 18, ch 5
115 ECtHR, Pretty v. the United Kingdom, Application no. 2346/02, 29 April 2002
116 Beate Roessler, The Value of Privacy (Polity Press, English edition 2005)
117 In the ECHR (article 5) and the Charter (article 3), these are also protected separately.
118 ECtHR, Gaskin v. United Kingdom, Application no. 10454/83, 7 July 1989; ECtHR, Von Hannover v. Germany, Application no. 59320/00, 24 June 2004; ECtHR, Mikulić v. Croatia, Application no. 53176/99, 7 February 2002
119 ECtHR, Goodwin v. United Kingdom, Application no. 28957/95, 11 July 2002; ECtHR, Gaskin v. United Kingdom (n 118); ECtHR, Bensaid v. United Kingdom (n 109)
120 ECtHR, Pleso v. Hungary, Application no. 41242/08, 2 October 2012
121 ECtHR, Evans v. United Kingdom, Application no. 6339/05, 8 March 2006; ECtHR, Dickson v. United Kingdom, Application no. 44362/04, 5 December 2007. The ECHR also has a separate article (12) for the right to marry and found a family, but it is rarely used.
122 ECtHR, Berrehab v. the Netherlands, Application no. 10730/84, 28 May 1988
123 ECtHR, Niemietz v. Germany, Application no. 13710/88, 16 December 1992
124 Pretty v. the United Kingdom (n 115); Niemietz v. Germany (n 123); ECtHR, Üner v. the Netherlands (Grand Chamber), Application no. 46410/99, 8 October 2006
