

The Uncanny Valley of Character

Philip Hutchison Barry
 11312904


MA New Media and Digital Culture
 University of Amsterdam


June 2017


Supervisor: Dr. Jan Simons 
 Second reader: Dr. Alex Gekker



Abstract

Robots are entering ever more spheres of daily life, and if this progress is to continue, it is necessary to understand the design ramifications of these devices. Research currently exists examining the physical appearance of the robot and how this affects interaction possibilities; however, there is a gap in the literature regarding how the robot acts. Thus, this paper offers a solution in the form of a modified version of Mori's uncanny valley hypothesis, relating to character rather than appearance. It also seeks to discuss how this new phenomenon relates to attachment, demonstrating how a robot's demeanor will be important for human-robot interactions. As robot technology is still in its infancy, this new uncanny valley of character theory is necessarily speculative, and as such utilizes examples from film media as justification. Ultimately the robot C-3PO is found to possess the type of character most likely to surmount the problematic valley region, and is thus an ideal candidate for robot designers to work towards.

Keywords:


Acknowledgements

I would like to express my sincerest gratitude to my thesis supervisor Dr. Jan Simons for his expert guidance. I thoroughly enjoyed our discussions related to robots, technology, and philosophy. Thank you for your advice, patience, encouragement, and all-round good nature; it made the whole process a joy. I would also like to thank Dr. Alex Gekker, the second reader of my thesis, for his valuable time.

Furthermore, I am deeply thankful to my family and friends for their outstanding support and encouragement.


Table of Contents

1. Introduction
2. Literature Review - David Levy's Love and Sex with Robots
3. Technological attachment
 3.1. A brief overview
 3.2. Emotional attachment design
4. The uncanny valley theory
5. The uncanny valley of character
 5.1. A new theory
 5.2. Concept explained
6. The uncanny cliff and effects on attachment
7. Design recommendations
8. Conclusion
9. Discussion


1. Introduction

The topic of technology is becoming ever more publicized. Artificial intelligence (AI), the internet of things (IoT), and robots are each becoming more dominant in the media as these technologies grow more sophisticated and integrated into people's daily lives. Furthermore, much of the press related to these technologies has come to be associated with a paradoxical duality: they are associated with freedom from the drudgeries of work, but also with the terrifying sense of human substitution, in that technology will replace humans in many roles we currently take for granted (Elliott).

Robots are of particular interest as they could replace humans in an extremely visible way. However, rather than being outright replacements, if robots are instead created as a form of human augmentation they may integrate far better into society. By augmentation I refer to the ability of a robot to complement the user's failings: a young person may not need a robot's help to carry groceries, yet an older person might. Conversely, the young person may not have time to cook, or clean, or do their taxes, all possible areas in which robots may be able to assist.

Unlike artificially intelligent software or algorithms, or even the IoT, robots are embodied in artifacts which invariably mimic natural fauna. More often than not, the aim within the robotics community is focused on building robots which emulate some form of human movement or affect (Broadbent 629). While depictions in the media often seek to portray robots as humanlike and relatable (Broadbent 628), robots are currently far removed from these examples, with their performance deficient in a multitude of ways. However, since the first truly functional robot was presented in 1959, robots have become ever more technologically advanced. In some limited ways robots now outperform humans, as in hazardous conditions, whilst in others robots complement human abilities, as is the case with the Da Vinci Surgical System. That said, robots are currently inadequately placed to replace humans en masse, especially in the short term, yet it is easy for distorted media depictions to guide public perceptions. It is plausible that more sophisticated robots will enter human environments in the future to assist or complement our lifestyles, as the Roomba vacuum has. Technology moves at such a rapid pace that it is abundantly likely advances will lead to the eventual creation of rather humanlike robots.

This thesis will discuss human attachment to the social robots hypothesized to appear in the future. It is important at this early stage to demarcate the usage of the term 'robot' within the context of this paper. According to Hegel and colleagues, a robot is: "(1) It is a physical object, (2) it is functioning in an autonomous and (3) in a situated manner." (Hegel et al. 2). In this paper I refer to social robots, that is, robots "explicitly developed for the interaction of humans and robots to support a human-like interaction" (Hegel et al. 2).

The physical appearance of social robots raises an interesting dichotomy, with some researchers suggesting that a non-human appearance is more appropriate, and others opposed. Borenstein and Arkin state that a robot's acceptance will be decided by its similarity to a human (3), while Rosenthal-von der Pütten and Krämer found that when a robot's appearance was dissimilar from a human's it was more easily categorized, and the robot was therefore less threatening (814). However, Rosenthal-von der Pütten and Krämer seem to invalidate this assertion somewhat, with the disclosure that certain robots were less liked due to their unfamiliarity (814). I would suggest that within the interaction paradigm there is little more familiar to one human than another human. Overall, research tends to support the idea that the more similar a robot looks to a person the better the interaction. More successful interactions are formed with robots which "conform to the expectations of the human interaction partner" (Pfadenhauer and Dukat 396). Likewise, "the more human-like a robot is the more people expect the robot to interact in the same way as humans do" (Hegel et al. 4). Thus the expectation of a humanlike robot is that it look and behave in a humanlike manner, and in achieving this the robot elicits the best interaction results. Additionally, current technological deficiencies restrict robots from appearing truly humanlike, and this could help to explain the confounding finding in some research that humanoid robots are less appealing than their non-humanoid counterparts. Therefore, when discussing robots in this paper I refer to those designed with a humanlike appearance, built for the purposes of sociability. As a terminological way to differentiate robots from humans in this paper, I try to designate robots as 'them' and 'they', separate from humans as 'us' and 'we'. Furthermore, 'attachment' is used often, and is to be understood as affection or fondness for someone or something. Attachment is also synonymous with the term bonding, and both terms have the consequential effect of leading to relationship forming. Attachment and its effects will be further explored in a later chapter.

It is likely robots will enter our homes just as any current white good or device has, for the simple purpose of assisting us with household work. Robots will be exceptionally useful compared to appliances, for the simple reason that they will not be limited to accomplishing one solitary task, as our current appliances are. Broadbent explains that roboticists endeavor to build humanlike robots to generate more natural interactions (635). This concept is particularly rational, as robots will need to be adaptable to our human-built environment. For generations humanity has designed private and social environments to be ideally suited to the human body, and so for a robot to be useful within our landscape and society its bodily articulations would necessitate a roughly average humanoid design. Duffy agrees, suggesting that a robot's humanlike design is inevitable (181). The closer it conforms to human proportions and dexterity, the more purposeful it would be. Within the home, counter tops, doors, and stairs are just some examples which conform to our height, size, and motor skill abilities. A social robot which purposely did not conform to rough human proportions would most likely be disadvantaged in average practical tasks (Duffy 181), in the same way a small child may need help to reach high cupboards. Conversely, a robot with additional capabilities, able to achieve or perform more than a normal person, like extra arms or extreme hyper-mobility, could seem 19th-century freak-show uncanny. According to Garland-Thomson, in her book Extraordinary Bodies, American freak shows were popular because they showed an alternative version of humanhood (16); they "united their audiences in opposition to the freaks' aberrance and assured the onlookers they were indeed 'normal'" (17). In a similar way, robots which are too physically different from humans may be found to be abnormal. Therefore, while specialized or industrial robots do not necessarily need to fit our human structure, it may be wise to design social robots within strict humanlike parameters. Levy suggests that current technology like robot pets will be superseded by humanlike robots (104). Further, he recognizes that current "special-purpose robots" (87) like the Roomba robot vacuum cleaner, solely capable of one specialized task, will be replaced or augmented by robots capable of multiple tasks (87). Home users will most likely expect greater generalized assistance from social robots than from specialized ones, and therefore social robots may necessitate humanlike dexterity. Again, due to this need, it is conceivable that robots will appear mostly humanoid, and this will allow for more natural humanlike interactions. Such interactions would necessitate proximity, and this could lead users to form bonds with their robot companion (Levy 143). A number of academics agree, with Borenstein and Arkin suggesting that intimate relationships will form in the near future (1).

Growing out of the imaginings of 1950s futurist depictions of robots, researchers have sought to advance the field, and as a result research has been conducted not only on robotics itself but also into the aptly named subcategory of human-robot interaction (Hegel et al. 1). Human-robot interaction is chiefly interested in the interaction of humans and robots. As such it covers a vast variety of topics, from psychology to technology, each of which has many distinctive subtleties. The robotics field has been said to be leading to the creation of an entirely new species (de Graaf 593), one which mimics humans, yet has been programmed to do so in a completely artificial way. Thus the topic of human-robot interaction encompasses much of our understanding of human-human interactions. More than this, engineering, design, and ethics must be taken into account. It therefore becomes an enormously complicated subject, and one must be relatively circumspect when interpreting some of the concepts raised. Due to the field's relative infancy, and current technological deficiencies, there is much still to ascertain.

Hegel and colleagues show the field of human-robot interaction to be developing at a rapid pace (1), and in this respect it is an important area of study if robots are to be introduced into people's lives. A speculative framework is laid here for future researchers to challenge, augment, and enhance. This framework relies on the writings of both Levy and Mori, two influential researchers within the robotics and human-robot interaction fields. Levy hypothesizes the future of attachment to robots, especially in regard to romance, while Mori examines the appearance of the robot and how pleasant or repulsive we find it. Together they postulate the design and outcome of robots, as well as predicted future behavioral styles. Levy's book Love and Sex with Robots optimistically espouses the virtues of future robot partners, insinuating that they will conceivably replace human-human relationships thanks to their excellence. In showing this he likens these idealized partners to 'Stepford wives' on multiple occasions (118; 130; 137). Significantly, he meticulously demonstrates how we can attach to a host of computer devices and robotic pets, and will therefore naturally be able to attach to more advanced iterations, namely robots (77). This point is particularly valid: if people can attach to devices and rudimentary robotic precursors, then it should be possible to bond with more evolved forms of robots, especially as they are predicted to mimic humans in looks and temperament. Mori, on the other hand, offers a different analysis. His scope is focused on a more specific area within robotics: the discomfort experienced when interacting with an entity which is humanlike, but not completely human. He determines that our ability to find elements pleasant is determined by our framing of said elements. Exemplifying this is Mori's oft-cited example of the prosthetic hand, which states that when shaking a prosthetic hand we sense a strangeness in its unfamiliarity. This common illustration of his principle in action causes the element, the prosthetic hand, to fall into an area described by Mori as the uncanny valley (2).

Levy shows us the contrived optimism of robots in the future, while Mori paints a more sobering picture, one which acknowledges both nuanced positives and negatives. Between these two researchers I situate my own work. Borrowing heavily from Mori, and expanding his concept to encompass more than the physical appearance he introduces, leads to an improved paradigm. This resultant paradigm, embodied within a robot, reflects the kind romanticized by Levy, yet emerges as more practical and arguably more appealing. My work has shortcomings, yet as a form of early groundwork it is beneficial for later researchers to rectify with more finesse than I can currently bring to the conversation, much as Mori did in 1970, by creating the opportunity for further research. As previously mentioned, there are many technological hurdles to overcome before humanity will truly appreciate the ideas proclaimed by Mori, Levy, or myself, and see whether these will accurately come to fruition. However, if humanoid robots do become available to the general public, then it is likely some effects discussed herein will be important to the social functioning and acceptance of these social robots.


2. Literature Review - David Levy’s Love and Sex with Robots

Independent researcher David Levy has a keen interest in robotics. His seminal book Love and Sex with Robots has influenced much of the robot attachment research to date, having helped cultivate research into the area of human-robot relations and served to raise the subject to a point worthy of academic consideration. As such, Levy's book is of significance when situating any argument about robot attachment, as this paper endeavors to do. Levy's enthusiastic appraisal of robot interaction introduces the prospect of future human-robot relationships. He structures his argument around the belief that humanoid robots are set to arrive in the future, and hence we will interact with them by default. This interaction could lead to attachment and love, described in the first half of his book, and additionally sex, covered in the second half. This paper is only interested in the effects of attachment, so will disregard the second half of Love and Sex with Robots. Levy's argument first introduces the idea that people often have attachment and loving feelings towards their pets (46). In a similar way, owners of robot pets have emotional relationships too (Levy 103). He also shows how some people can attach to computer devices (Levy 64). Thus, because of the precedents which allow for attachment to non-human beings and artifacts, Levy depicts our ability to find love with robots. His main premise is that attachment to robots will be adopted and eventually flourish due to technology's ability to be constantly customized. That is, the software embodied within the robot will be constantly adjusting the robot's behavior towards a kind of humanly imagined state of perfection. The needs and desires of each human user will be consistently evaluated by the robot, and said robot will adjust its behavior to meet our expectations and wishes. Although not expressed by Levy directly, the idea of customizability inherent in a robot's programming, combined with the enhanced abilities of robots, suggests that they will be endlessly programmable and adaptable. He points to the idea that robots will likely be superior to human lovers in relationships, thanks in part to the comparable ease of programming the robot.

Levy begins by explaining how the pet-owner relationship, common today, is an apt allegory for how human-robot relationships may form in the future. By understanding perceptions regarding the pet relationship, we can gain a better understanding of the ways people may view a human-robot relationship. Levy makes comparisons between pets and children, positing that they both bring similar "emotional rewards" (48) to their owner or parental figure. Additionally, both pets and children rely on owners and parents for enjoyment and fun (Levy 49), and likewise for nurturing and growth. He explains how the love felt for a pet is more similar to the love felt for a young child than to adult romantic love (Levy 49). This attachment relationship is largely based on the idea that the unconditional love a pet provides mimics that of an idealized mother: caring, loyal, attentive, and devoted (60). Levy counters any doubt about pet attachment being abnormal by pointing to the fact that pet ownership is too widespread a phenomenon to be considered odd (50), and that most pet owners have strong attachment bonds to their pets (53). The human-pet relationship therefore comprises a type of non-judgmental love, suggesting that the bonds of attachment can extend beyond simply human-to-human ones (52).

Levy continues the argument by showing the historical precedent of people's attachment to computers, proposing, in reference to Sherry Turkle's work, that the relationship bond can extend beyond humans and living beings to electronic artifacts. He argues that computers, with their simulated intelligence, vie with living beings for a place in our affections (Levy 63). This idea may seem odd, but research suggests that people show strong physiological responses when separated from their mobile phones (King et al. 141). This physiological response, namely stress and anxiety, demonstrates the ability to experience strong emotional effects in direct relation to technological artifacts. Levy, citing Turkle, confirms that some early AI researchers often constructed a type of quasi-relationship with their computers (Levy 64). Hackers too have had similar relationships, often due to the computer's interactive capabilities, the "immediacy of the feedback it gave" (Levy 67), and the amount of control afforded to the user. According to reports in Levy's book from Norman Holland, computer programmers have had similar experiences, as the computer is an ideal friend: loyal, helpful, faithful; programming has even been proposed to have corollaries with sex (68). This is an outrageous claim; however, computers do meet many of the requirements for an attachment-based relationship, providing comfort, reassurance, safety, and an empowering environment, one which can foster exploration (Levy 69). In providing a level of empowerment and presenting an environment which fosters growth and exploration, key components of psychologist Bowlby's attachment theory (Levy 26), computers offer a type of bond or partnership, an easily accessible attachment model. Building on these premises, many contemporary users will recognize the obvious pragmatic assistance a computer can bring, but some may also acknowledge the reassurance and comfort a computer can afford the user. Thus even lay-people can become relatively attached to their computers. Correspondingly, the separation distress felt when away from computers can be seen in the anxiety people experience when away from their phones, and helps to reinforce Levy's account of the concept of attachment.

Logically, the step beyond attachment might be the relationship. By Levy's own account, a 'relationship' is the interdependence of one actor on their relationship partner, a change in one bringing about a change in the other (71). Explained in computer interaction terms, user inputs result in instantaneous computer responses. Levy suggests that, unlike our interaction with static artifacts or objects, computers can offer us an acknowledgement and reply; the response to input or interaction is therefore two-way, much more like a relationship with a 'being' compared to the uni-directional interaction with non-beings (72). Levy reinforces his hypothesis citing surveys of both children and adults: a 2003 finding that 45 percent of children saw their computer as a trusted friend (70), and 34 percent of adults agreeing that by the year 2020 computers would be as important to them as "their own family and friends" (71). Although these claims now seem somewhat exaggerated, for many people computers are indispensable.

Anthropomorphism is a commonly observed phenomenon in which people attribute human characteristics to non-human animals or artifacts (Duffy 180). Levy describes how anthropomorphism contributes to attachment. He indicates the belief that when an object is anthropomorphized to the point that it becomes essentially humanized, it becomes possible to treat said object in a humanlike manner (Levy 74). Illustrating his point, he uses the example of a computer which refuses to work, demonstrating that the computer is bestowed with the human characteristic of 'work' (Levy 76). Levy explains further interesting concepts evolving from the anthropomorphism of computers. One such example is the idea that as computers show more signs of intelligence we view them as becoming more like "a kind of friend" (Levy 76). His overall point here is to demonstrate that the subconscious attributions of anthropomorphism we imbue into computers give them personality, a relatable trait for humans, and as a result we treat computers more like partners than tools (Levy 77). He then relates this ability to form relationships with computers to forming relationships with robots (Levy 77). There is an understanding, although not expressly discussed in the text, that robots will rely on computers for their functioning and interaction processing. In this regard there is little difference between the computer-as-partner on a desk and the computer-as-partner embodied within a robot. This raises a somewhat important distinction: when discussing computers, both Levy and I are referring to the computer and the software in combination. The user interacts with the invisible software through the visible hardware, currently the graphical user interface on the physical screen. As the hardware is visible, it has become the semantic norm to say that we interact with the computer rather than the software, and so for the purposes herein 'computer' is to be understood as hardware and software in unison.

It is from here that we see how the 'computer' mentioned above is the same system as that embodied within the robot's shell. Therefore the important designation 'robot' is the equivalent of the hardware and software combined. The point being that within the robot body resides some form of computer-based intelligence, or connectivity to some kind of 'brain'. Levy does not mention the distinction; however, it is relevant, as it is specious to imagine robots as media-derived imaginings rather than a gradual evolution of already existing technologies. Levy opens the topic suggesting that as robots become more lifelike in their behavior, people will be more likely to treat them as moral and social beings, thus "raising the perception of robotic creatures toward the level of biological creatures" (98). Levy explores this biological-creature quirk through the robot dog AIBO, revealing that while owners know AIBO is not alive, they experience feelings as if it were alive (99). He adds that when robot pets are sufficiently lifelike, the possibility of enhanced interaction, as compared to real pets, potentially makes them better prospects for consumers. He says that "[f]or children the social benefits of such attachments would include the learning of decent social behavior - being kind to their virtual pets - and unlearning negative social behavior" (Levy 103). This, to my mind, is somewhat problematic, pointing to some level of omniscience on behalf of the pet: morally distinguishing negative behavior from positive, and making the necessary verdict and adjustments. This point is important, as it is not simply the pet which makes the decision, but ultimately the software designer, who deems which behaviors are to be uprooted from society; this empowers the designer with much responsibility. This will be discussed at greater length in the discussion section.

Levy's discussion then delves into the concept of robots as social beings (105). Here he demonstrates that to be perceived as social beings, robots will need to display social competence through emotional intelligence: "the ability to monitor one's own and others' emotions, to discriminate among them, and to use the information to guide one's thinking and actions" (Levy 108, footnote from Daniel Goleman). Without sufficient competency in the display of emotional intelligence, people may interpret robot behavior as simply an act (112). Contravening Levy's claim somewhat, it is clearly possible for film audiences to empathize with characters in films, portrayed by actors. Alan Turing is best known for putting forth the hypothesis that if a machine can appear intelligent, then we should assume it has intelligence (Levy 120). Thus if a robot can demonstrate emotional intelligence, then we should assume that it has emotional intelligence. Ultimately the social robot will need to convince us of its sincerity; its act will need to be consistent and relatively unflawed. Levy demonstrates his point of view with the example of a plane's autopilot program, stating that its superiority in the area of flight has led to prosecutions of human pilots who failed to engage the system when they should have (110). Thus if a computer can be programmed with instructions as complicated as flying, besting humans in the process, then theoretically it can be programmed to emote. As long as the robot can appear to have emotions, we can assume that it has emotions. On this note, if robots can demonstrate genuine emotion, they will be easier for people to accept due to their inherent humanlike characteristics (Levy 119). Further, people who are already, on some level, attracted to their computer will potentially find it easy to relate and be attracted to robots (Levy 130).

Another aspect of the robot as social being refers to the robot's ability to recognize and respond to people's emotional expressions. The ability to judge human emotional responses and react accordingly leads to better attachment outcomes, and could in time lead to robot relations which mirror human-human ones. According to Levy, in the short term robots will be seen simply as robots, not equal to humans. Levy's proposal for the longer term is our reappraisal of robots from machines to human equals. This idea is demonstrated in his musings about dating:

“instead of a parent’s asking an adolescent child, “Why do you want to date such a schmuck?” … the gist of the conversation could be, “Which robot is taking you to the party tonight?” … [and finally] rewritten simply as, “Who’s taking you to the party tonight?” Whether it is a robot or a human will become almost irrelevant.” (Levy 110)

This optimistic view manifests itself throughout Levy's text, the underlying current of which is that robots will become almost human, with relatively little disturbance to society. This somewhat undermines much of the research into the minutiae surrounding human-robot interactions. One example of Levy's overconfidence is the notion that robots will be programmed never to fall out of love with their user. Conversely, to reduce the chances of the user falling out of love, the robot will constantly monitor user affection towards it, adapting its behavior and in doing so "restoring its appeal to you" (Levy 132). However, it is difficult to judge whether this hypothetical situation is appealing or the opposite.

Overall, the tempting vision Levy portrays of the human-robot relationship is, to my mind, overinflated and romanticized. While research shows much promise for human-robot interaction, Levy fails to recognize the difficulties in reaching this hypothetical situation of robot love, instead espousing an idealized vision of robot love in some distant future. In contrast, I intend to discuss some of the hurdles between contemporary research and Levy's imagined future, and how they could impact human-robot attachment. Of particular focus is the uncanny valley phenomenon and its impact on interaction and attachment. For this reason, Levy is a critical entry point for the conversation, as his groundwork allows for the imaginings of human-robot interaction, attachment, and finally love. In this respect, although his ambitions for robots may endure, there are more contemporaneous obstructions, both technological and psychological in nature, that first need to be overcome.


3. Technological attachment

3.1. A brief overview

The concept of attachment is innately familiar to people; however, the thoughts that manifest in relation to the word may not correlate well with another person's exact viewpoint. Therefore, for the purposes of context here, 'attachment' may be understood as a strong feeling of being emotionally close to someone or something. In other words, attachment relates to a person's ability to experience affectionate closeness, in this case for a robot entity. It is debatable whether this is even possible; to many, the idea itself may sound implausible, perhaps even ridiculous. However, by evaluating features of attachment in relation to artifacts, it is possible to construct an argument demonstrating potential signs of attachment to robots. Levy's account of the attachment we are predicted to establish with robots seems an overly optimistic and idealized one, yet the opinions he adopts in his work demonstrate potentially sound reasoning for the most part. That said, there is little evidence from the last decade (since his book was published) pointing directly to the inflated conclusions he hypothesized. There is, however, growing consensus from researchers showing some signs of primitive attachment (Broadbent 643). Added to this are indicators of attachment to other more common forms of technology (Gerber 140), ones which we interact with more regularly in our everyday lives, and these indicators are arguably more accurate and compelling than lab study results. Furthermore, anthropomorphism is a cognitive device which aids the process of attachment, especially when technology is embedded within a body.

Attachment theory, first proposed by the famous psychologist Bowlby in 1969, is a topic to which Levy dedicates a significant portion of his book. It predicts the bonding of a child to its parent as a natural security measure in youth, and this bonding has inherent repercussions for relationships in later life (Levy 26). A more recent evaluation has been conducted by Keefer and colleagues, who suggest that our increasing social isolation and individualism have deprived us of some of our traditional social bonds, and thus we seek out feelings of security by bonding with "non-human targets" (524), in other words artifacts. The motivation behind this is that while artifacts do not offer care or attention in the same way a caregiver might, their unerring predictability provides a sense of comfort which serves to satisfy our psychological security needs (Keefer et al. 528).

Currently there is an increased dependence upon technology (Keefer et al. 531). Bickmore and Picard cite the increased importance of human attachment to the computer (294). Sherry Turkle, a renowned researcher in the field of technology and the self, delves into the changing dynamic emerging in our connectedness to technology. In her 2011 article "The Tethered Self" she shows how thoroughly attached people have become to their devices: "[w]e text each other at family dinners, while we jog, while we drive, as we push our children on swings in the park" (28). For most readers this will be unsurprising; yearly statistics show that device use grows ever more popular and is almost ubiquitous across the population (Small et al. 117). As such our reliance on devices (King et al. 140) feeds the idea that people have become completely captivated by the technology. This reliance is not shocking or surprising, it has become normal. It has become so normal and ubiquitous that the opposite situation, having no device, is essentially abnormal. Reliance helps to increase attachment, as explained through the theories proposed by Bowlby, proximity for example (Levy 31). Device attachment seems to be transpiring, and users are becoming more dependently attached for reasons beyond mere proximity. Devices can also be seen as a means of information delivery. When assessing the device (a product), and separately the internet (a service), it is possible to understand the influence one has on the other. Each relies on the other's existence: device and internet augment each other's abilities, complementing each other's purpose. Without one, the other would have a less purposeful existence from a user's point of view. Similarly robots will likely embody connected computing, and users will thus have dual-purpose attachment: that of the physical, the body, and that of the mental, the mind. This is an important distinction as it uncovers a duality, discussed later in the text in relation to the uncanny valley, and in doing so further justifies the difference between the body and mind aspects of robots. By focusing on attachment possibilities in relation to the internet, rather than solely our artifact devices, we find that users are attached to each in different ways. Research has suggested that people use the internet to fulfill their interpersonal needs (Sun et al. 409; Ji and Fu 398). The internet is a form of reality escapism (Ji and Fu 401) which has become so ubiquitous and so necessary that many could not imagine living without it (Kim and Haridakis 1004). The other reason is more complex. Devices exploit our social attachment nature by making us insecure, and consequently reinforce our need to bond. Konrath calls this the "empathy paradox" (11), which explains how we have become closer through connected technology yet not psychologically closer. When applied to the internet, this paradox hinders users' contentment, causing them to feel equal parts connected and alone. Turkle elaborates: "we build a following on Facebook and wonder to what degree our followings are friends" (30). The friendship illusion offered through our digital connections removes our physical connectedness (Turkle 29). Thus it is appreciable that in the last decade people have become better able to attach to both devices and the internet.

As suggested previously, research shows that it is possible to draw links between devices and robots. Much of the interaction software devices currently utilize could evolve into interaction software installed into robots. Arguably robots will be vastly updated iterations of currently available devices. Part of the difference, though, between current devices and robots will be the addition of agency, which users will likely anthropomorphize due to the robot's outwardly humanoid appearance (Paauwe et al. 698). This perceived agency may affect attachment in a significant way. As the argument stands, people have been shown to be attaching to devices, so why not to robots? Human-robot interaction studies have confirmed some of Levy's work surrounding this idea of attachment to robots. Early evidence suggests that interactions with robots should be modeled on our experiences with other humans (Whitby 6). In this respect interacting with the robot becomes less like interacting with a piece of technology or a tool, and more like interacting with another human (de Graaf 592). Such a situation is arguably more appealing as it is a more naturalistic approach. Thus de Graaf's phrase "nonhumans as viable others" (593) justifies the embodiment of human capacities, creating a type of human equivalent. Investigating further, de Graaf et al. posit that there are twin perceptions of social robots: that of a simple 'utility' (performing simplistic tool-like labour) and the alternative of a sociable entity able to build relationships (2). If users view robots strictly as tools then there is little possibility of meaningful attachment; however if they are recognized as social entities then attachment may be possible.

Crucial to our attachment to robots is our ability to anthropomorphize. Anthropomorphism is the human propensity to ascribe human characteristics to artifacts. Anthropomorphism bestows human capacities, in this case on robots, and in doing so it helps people to rationalize behavior (Duffy 180). People confer upon the artifact personality, emotion, and mental states (Duffy 181, 182), all of which are, in reality, non-existent. Through the capacity to confer these humanlike abilities people are better able to interact, and this assists in building and supporting increased closeness. Many researchers agree that this ability to personify and bestow robots with humanlike traits "fosters a human willingness to form unidirectional emotional bonds" (de Graaf 590). Thanks to anthropomorphism, while robot capacities are currently less complex than those of other living beings, it is still possible to form a bond with such an entity (de Graaf et al. 12). Additionally, when interacting with a robot people often apportion it agency, an intentionality or autonomy (Wykowska et al. 767), in this way making it more humanlike than many other technological artifacts.

Demonstrating the effects of anthropomorphism is difficult, and few researchers have studied the underlying causes of the phenomenon (Duffy 180). Still, there are a number of observable ways in which people have been shown to attach to robots thanks to anthropomorphism (de Graaf et al. 2). The effect allows humans to better connect with non-humans. Levy makes note of this effect appearing during interactions with our pets (51). Anthropomorphism could be proposed to lead to a greater willingness on behalf of the user to interact recurrently with the robot, and this in effect motivates the early stages of bonding. Socially interactive robots will likely use natural language communication, and researchers have found that when this is the case a user's perception of the robot becomes more favorable (Birnbaum et al. 422). In a similar way, research conducted by Leite et al. indicates that we become more comfortable, and therefore bond better, when a robot mimics our affective state (252). This can either be achieved through incredibly nuanced analysis of the user, or through clever trickery. The trickery could occur by fabricating false, acted moods in the robot, with even modestly displayed character being read as more than it is, owing to anthropomorphism. Just as in human relationships, sympathetic mimicking functions like empathy (Leite et al. 252). This leads to "significantly higher ratings of companionship, reliable alliance and self-validation" (Leite et al. 258), and goes a long way to explaining why people will be willing to initiate forms of connection with robots. Unidirectional attachment with robots is currently unreciprocated and can only be initiated from the human side, and yet this bond can still be incredibly strong. Whitby cites cases of "soldiers bonding with military robots in combat" (3), with an extreme example of this type of connection illustrated by Scheutz and Arnold, who explain that soldiers have bonded so intensely with their bomb disposal robots (robotic IED detectors) in the battlefield that they perform funeral ceremonies for them (1). Trust is another human state stemming from anthropomorphism, and it is an important factor in increasing our ability to attach. The level of trust could presumably be very different from that in traditional human relationships, where trust is earned. In a human-robot relationship, trust may be inherently expected of the robot as a purchased and programmed device, one which complies with the user's wishes. This heightened level of trust would seemingly function to strengthen bonds between man and machine; the phenomenon has been seen in both the young and the old, as reported by Birnbaum et al. (417) and de Graaf et al. (12) respectively. Separate studies involving the two groups, preschool children and the elderly, saw the sharing of a secret or secrets with a robot. The experiments successfully demonstrated a two-fold effect: on the one hand the groups trusted the robot enough in the first place to reveal some intimate or private detail, and on the other hand the revelation of a confidential piece of information increased psychological attachment, thereby reinforcing and building trust. These examples help to substantiate the claim that anthropomorphism can aid in the initial attachment to robots.

Beyond Levy’s suggestions of perfect partners there are a number of reasons why people could find themselves attaching to robots, thanks in large part to anthropomorphism. One of the most glaring reasons, although I find it unlikely, is that we simply forget we are interacting with a programmed artifact (de Graaf 592). Through anthropomorphism we ascribe robots agency when they in actuality have none (de Graaf et al. 2). This results in a dichotomy whereby the robot has no human feeling, emotion, or experience, yet people could well believe that it does (Paauwe et al. 698). This idea is somewhat paradoxical. We currently interact daily with unfeeling artifacts, so there could be an interesting shift once artifacts can be anthropomorphized, raising them ever closer to the level of humans (Paauwe et al. 698). Robots lacking feelings or emotions familiar to human experience is potentially a difficult concept to grasp. Herein lies the paradox: users will confer onto robots emotional agency which they lack, but doing so will provide enhanced interaction and increase perceived agency (Kiesler et al. 177). In erroneously attributing robots with intelligence, sociability and agency, we also assign them empathy, thus helping to foster an atmosphere of security (Birnbaum et al. 416), just as a parent would for their child. This parental attachment idea was explored earlier through attachment to our devices, but here a social robot may elicit emotionally caring traits which are reminiscent of human-human interactions (Biswas and Murray, “The Effects Of Cognitive Biases And Imperfectness In Long-Term Robot-Human Interactions”, 2). This perception of sociability is a key motivator (Breazeal 168), likely allowing us to attach to robots and ultimately helping to legitimize their potential companionship status. A final effect, noticed by de Graaf et al., is that “habituation and familiarity” (13) led to a disregard of many technological deficiencies. Thus ill effects associated with robots initially may fade after some amount of habituation, a finding also acknowledged by MacDorman and Ishiguro (363). These effects help to reinforce the idea that not only does anthropomorphism assist in our attachment, but that if our attachment is hindered for some reason, habituation could possibly aid in remedying the problem.

This overview of attachment presents initial evidence that human-robot interactions will become more like human-human ones. People are attaching to devices with the aid of the internet, and this mode is becoming the new normal for human communication and attachment. According to Nourbakhsh in the preface of Robot Futures (2013), robots will be the “living glue between our physical world and the digital universe” (XV). Research has established evidence that people can, and do, attach to device technologies. Socially integrated robots are potentially some evolutionary step beyond current devices, in that software technologies available today will likely form one integrated robotic system. Thus people will potentially be able to attach to robots as they have to devices. To further validate this concept, robots will have additional attributes beyond those available in contemporary devices. A roughly humanoid appearance, for example, leads users to anthropomorphize the artifact, making attachment to robots more conceivable. In this respect a robot's social ability is more important to its acceptance than its intelligence (de Graaf and Allouch 1484). Understanding human-robot interaction and attachment is a relatively complicated research task, one which will not only rely heavily on studying traditional human-human relationships, but also upon observing robots and humans interacting together in familiar environments.

3.2. Emotional attachment design

Software embedded in robots will likely originate from software found on devices and other current technologies. Due to this lineage, attachment to devices is a possible precursor to attachment to robots; however robots will offer far greater levels of sophistication, thanks in part to embodiment. Due to the current restrictions of both embodiment and agency on mobile devices, a closer analogy for human to non-human attachment may be that of human-pet relationships. Although research is unclear as to exactly what animals are conscious of (Nagel 436), it is common for humans to believe that animals have some kind of innate understanding of what is happening and respond as such. Pet owners naturally anthropomorphize, ascribing human emotions to their pets (Levy 49). This differs from our interactions with other humans, whom we can more safely assume have emotional responses similar to our own, and whom we therefore need to anthropomorphize less. What is more obvious, in our interaction with other humans, is the way in which we interpret a wide variety of emotional cues to understand another person better. From this perspective, insights from animal and human attachment can be adapted together to propose a model of attachment to robots in the future. Due to our natural propensity to be aware of others' emotionality and personality, it is logical to assume our interpretations will affect robot interactions.

Anyone with a beloved pet will attest to the fact that humans can attach to their pets. People ascribe emotional states to pets, inferring that the animal can read and respond to their own emotional states. While we have scarce scientific evidence of pets reading emotional states, it is human nature to anthropomorphize pets to such an extent that we bestow upon them a 'mind' similar to a human's (Waytz et al. 385). It is common to assume that animals have emotional states similar to humans (Nagel 439), although this is difficult to prove. This assumption helps to demonstrate the strength of anthropomorphism. The ability to justify our similarity of mood makes it easier to empathize and to bond through the commonality of experience. Waytz and colleagues demonstrate how the ability to anthropomorphize lets us perceive a humanlike 'mind' residing within an entity, and that 'mind' necessitates a moral responsibility (386). Moral responsibility implies decision making, will and agency (Waytz et al. 386). This raises a dichotomy: although current knowledge cannot support these 'mind' assumptions empirically, humans bestow these criteria through anthropomorphism. In other words, we assume 'mind' but only because we bestow it.

When interacting there is little need for humans to anthropomorphize each other. Rather, we empathize, and can scrutinize others' behavior based upon our own experiences. Communication involves assessing all the information we sense and making suppositions from it. In this respect our tendency to anthropomorphize is replaced by our natural understanding of communication. Communication extends further than simply the verbal. McColl and Nejat find that “[b]ody language plays an important role in communicating human emotions during interpersonal social interactions” (262). These extraneous communication elements are so innately ingrained in human social context that the need to anthropomorphize becomes irrelevant. However within human-technology contexts the level of communication is currently less than that between humans, and therefore users still need to anthropomorphize, perhaps until robots are believed to be identical to people. Heylen et al. explain how simulating naturalistic human-human interaction styles in robots will lead to more enjoyable human-robot interactions (555). Building robot interaction programs which mimic human-like patterns will likely elicit improved attachment possibilities. While body language was the sole mention here, there is a huge amount of other nuanced data: gestures, inter-person distance, eye gaze, and repeated interactions all merge to create the motivation for attachment (Brayda and Chellali 219; Kamide et al. 829). Wykowska et al. agree with this sentiment, indicating that humans are responsive to “subtle characteristics of behavior (independent of appearance) that is typically human” (778).


Because we have no point of shared reference with animals, people will always understand human behavior styles better. Thus we reduce animalistic behavior to the simplest version of human behavior necessary; we force animals to conform to our understanding. Robots however will be programmed with justified code, and will therefore behave exactly as they are programmed to. To delve into the human-robot behavior paradigm, it is possible to show how current knowledge supports ideas of interaction derived from human interaction practices. Broadbent for example discusses the concept that human-likeness designed into a robot is only necessary if it is part of the robot's need to function (631). In other words an industrial robot need not look like a human. This idea may appear to raise ambiguity regarding which robots should look humanlike: an automated vacuum robot need look nothing like a human cleaner, yet it stands to reason that a socially helpful or interactive robot should look mostly human. Some may suggest this need not be the case; however ultimately the goal of humanoid robotics has been to build a robot indistinguishable from a human (Duffy 177). Specifically, when a robot looks and behaves in a human-like way it is perceived more favorably (Walters et al. 159) and thus has higher chances of establishing rapport with its user, with the consequential effect of boosting sales. Showing empathy, offering support, and providing physical touch are just a few dimensions which robots will utilize to convince users to build the semblance of an attachment relationship (Leite et al. 251).

From a pragmatic viewpoint social robots will need to fit into our human-friendly constructed world: they should be proportioned similarly to humans, and be similarly dexterous too. For example a wheeled robot will be of little use in a home with stairs, as people with robot vacuums have undoubtedly found. Thus if a robot's function is within a home-style environment, then it will likely need to look human (Dang et al. 137). Broadbent accompanies this idea with the example of Paro, the robot baby seal pet, suggesting that a seal was chosen, rather than a familiar animal like a cat, because people cannot judge its behavior accurately, being mostly unfamiliar with the behavior of a real seal (631). People are acutely aware of natural human behavior, and as researchers are aiming to build humanoid robots, these will likely need to be models of human expectation. As Broadbent explains, “engineers are attempting to make robots look and behave identically to humans in part so that humans can interact with robots on a more intuitive and natural level” (635).

Therefore it can be seen that robots with the best rates of acceptance will be the ones designed to fit into our human derived model of what it means to be social, and to behave as expected given the circumstances. This intuitive behavior affords us the ability to interact with robots better, allowing for potential attachment (de Graaf 589; Birnbaum et al. 417).

Attachment is therefore somewhat contingent upon our ability to readily associate with and comprehend another entity's motives. Anthropomorphism helps to negotiate any large discrepancies between our expectations and the reality of the situation. This however is of little use in human-human interactions, where instead we may fall into the trap of assigning too little mind to people's actions. In this respect, robots will presumably sit somewhere along a scale from completely anthropomorphized to completely human, with robots currently falling well shy of 'completely human'. Thanks to the potential for anthropomorphism to bridge some drawbacks, it does seem theoretically possible to attach to robot artifacts which are not completely humanlike. Most researchers however agree that the most appealing approach is to construct human-robot interaction modeled on human-human interaction. Unless these robots are disagreeable in some way, this could pave the way for human-robot attachment as proposed by Levy.



4. The uncanny valley theory

Masahiro Mori is a particularly important researcher in the field of robotic interaction. His influential (Saygin et al. 414) 1970 paper, The Uncanny Valley, has been the source of much discussion and debate within the academic community, especially in recent years (Wang et al. 393). This paper forms much of the basis for my hypothesized concept of robot attachment. His work has not been dissected in the literature review as the original paper is very short; its important features are instead discussed in this chapter. The main idea proclaimed within the paper has impacted fields beyond robotics, and has even entered the mainstream media, such is its influence. The term Uncanny Valley which originated in this work has been so significant it has begun to enter the cultural lexicon (Pollick 3). Recent advances in robot technology have led to a resurgent interest in Mori's paper. The paper was originally introduced soon after the world's first industrial robot, the Unimate, was presented to the Japanese, on the back of blossoming industrial robotization (Mori 1). The basis of much of Japan's early fascination with robots begins in 1950s post-war Japan with the manga Astro Boy (Robertson 574), which depicted the convergence of man and machine. Astro Boy, the robot hero of the manga, fought against injustice; he was a character easily associated with positive feelings after one of the most destructive wars in history (Richardson 120). From these fundamental underpinnings Mori formed his concept.

Eighteen years after Astro Boy first appeared, and following the introduction of the physically tangible Unimate robot, Mori proposed that as robots become more human looking “our sense of their familiarity increases until we come to a valley” (Mori 1). He coined the term ‘uncanny valley’ to explain this phenomenon (Mori 2). While this concept may be difficult to grasp (Destephe et al. 10), Mori explains our familiarity (or shinwa-kan) relative to human-likeness best with his graph, showing that the proposed curve is oddly nonlinear (figure 1). An unexpected result ensues, with human-likeness not being equally relative to familiarity. Before elaborating further, it is important to briefly discuss the term ‘familiarity’ as used. Mori's paper was originally written in Japanese and was translated into English by MacDorman and Minato. The original paper uses the term “shinwa-kan” (Bartneck et al. 368), which when translated is closer to “likability” according to Bartneck et al. (369).

Figure 1. The uncanny valley (Mori 2).

In his Uncanny Valley paper, Mori proposes that our impression of a robot is partly determined by its appearance. This appearance at some point becomes unlikeable as it approaches our own physical appearance. He uses the anecdote of a prosthetic hand, suggesting that it looks somewhat human, however not completely realistic (Mori 2). Upon closer inspection the hand is found to be artificial, and the illusion is revealed, making us feel uncomfortable. The hand then is regarded as unfamiliar, disconcerting and uncanny (Mori 2). This prosthesis resides within the uncanny valley with respect to human hands. In a similar way, because robots are not human, as they appear more human-like at some point they will become unfamiliar and uncanny. This uncanny point, when plotted on a graph, would form a valley, coinciding with the advancing design of robots. Mori hypothesized that this valley region may form along the evolutionary path from industrial-style robots to humanoid ones (Mori 2). The premise that we judge a robot's appeal based on its looks means that as the robot becomes slightly more human looking than industrial, we become slightly more familiar with it, finding it more likable than the purposeful and tool-like industrial robots. Conversely at the other extreme, when a robot is completely indistinguishable from a human in appearance, then we should find total familiarity in it. However between these two points a valley forms, coinciding with the unfamiliar or unlikeable characteristics associated with the uncanny. This is the same theoretical area which the prosthetic hand occupies. With respect to robots, the valley area is so negatively correlated with likability and familiarity that we judge the robot as far less appealing than the industrial robot residing at the zero point. So exaggerated is our dislike for robots succumbing to the uncanny valley that Mori suggested we liken them to a human corpse or zombie (Mori 4).

Mori proposed the uncanny valley as the worst possible outcome for a robot's design. This, he suggested, was to be avoided at all costs, and he instead recommended that designers aim for the first peak on the graph (Mori 3). This would avoid any entanglement with the uncanny valley, and would instead provide a safe design which users would be happier to interact with. However the ultimate goal of roboticists is to create a “synthetic realistic human” (Duffy 183), and therefore aiming for the first peak is hardly sufficient. Additionally, technological progression over the past 47 years since The Uncanny Valley was published has reached a point beyond the first peak, with examples like Japanese researcher Hiroshi Ishiguro building incredibly detailed human replicas. However even these fall victim to the uncanny valley (Sullins 406). Equally unfamiliar and rather disconcerting are currently available lifelike sex doll robots which, few would disagree, reside in the uncanny valley. Thus from a pragmatic standpoint the uncanny valley, or more pertinently avoiding the valley, is important to researchers and to manufacturers alike. As technology continues to evolve, and robots enter the home, consumers will be dissuaded from purchasing uncanny robots as they will be inherently unlikable. As mentioned earlier habituation has a tendency to negate many poor design results, however the initial uncanniness felt will deter positive reviews, and undoubtedly harm commercial sales of such a product.


From a researcher's standpoint the uncanny valley is important because the issue is unresolved: firstly because triumphing over the uncanny valley is yet to occur in robotics, and secondly because the underlying cause of the valley has eluded researchers thus far. Animation is one field susceptible to uncanny valley principles (Tinwell et al. 1617), and one in which the valley problem is very close to being overcome. Huge technological leaps have shifted animation from portraying uncanny characters as little as a decade ago, to the current situation where animations have, for the most part, surpassed the uncanny valley. This shows the feasibility of ignoring Mori's appeal to aim for the first peak, and instead focusing on the pinnacle. As to the second motive, the underlying cause of the valley has comparatively less straightforward answers. It relies on understanding the human motivation behind the valley phenomenon. Robots could be seen as contrasting with humans in a number of ways, predominantly by threatening our distinctiveness. To justify these notions we first have to understand that an uncanny robot will be contextualized as kindred to a zombie. Such uncanny robots cause discomfort within people, as they are a mixture of living appearance and lifeless object (Cowie 417). This viewpoint is important because the living dead is allegorical to the zombie, and this is deeply unfamiliar and terrifying. The threat to distinctiveness undermines our human uniqueness. Researchers Ferrari et al. and Aymerich-Franch et al. both address this concern by explaining how people fit into different groups, and that when two groups become too similar there can be category boundary challenges: “too much perceived similarity between robots and humans may trigger concerns about the negative impact of this technology because similarity blurs category boundaries” (Aymerich-Franch et al. 10), and this challenges our distinctiveness. Within the context of the uncanny valley, when robots become too humanlike yet not completely human, group dissimilarity is forecast to become a category boundary challenge. Further, Wang and colleagues propose that robots may fall victim to a dehumanization process, due to their out-group differences (401). This lack of clarity surrounding the human motivations for the uncanniness felt requires more research. Additionally, avoiding the valley area is now the loftiest goal achievable by robotics experts. Since in some ways we have already moved past the first peak, and are now aiming for the second, the time for Mori's recommendation of aiming for the first of the two peaks has arguably already passed.


Research into the uncanny valley phenomenon has elicited much debate (Mathur and Reichling 29). Fundamentally, researchers find both the existence and non-existence of the valley in their experiments (Pollick 4). Hanson et al. for example found that “[t]here does not appear to be an inherent valley”, finding participants “showed no sign of the repulsion that defined the “valley” of Mori’s uncanny valley” (29). However this study produced somewhat comical results, and thus cannot be taken very seriously: it asked participants to rate the amount of liveliness and disturbance they felt in the presence of a robot with “numerous unrealistic features (head on a stick, no back of head, etc)” (Hanson et al. 30). One would expect “no back of head” (30) to correlate with highly disturbing feelings, however “0.0% said that human-looking robots disturb them” (29), and further, for liveliness, “85% said the robots look lively, not dead” (29), which is surprising to say the least. More recent studies have found evidence supporting the uncanny valley hypothesis (Cafaro et al., Mathur and Reichling, and MacDorman and Ishiguro being a few); ultimately though there is an absence of agreement on the uncanny valley as a phenomenon. This situation can be explained by the fact that technological sophistication is not yet advanced enough to allow the exploration of the uncanny valley in a significant way. We are still some time away from creating realistically humanoid enough robots to see the true effect of the uncanny valley, and as Pollick explains, “limited empirical evidence both restricts extensive conclusions being drawn” (5). Moreover, because the phenomenon is anecdotally recognized in computer-generated imagery (CGI) animation (Walters et al. 161), it seems an effect is present and observable.

In light of the uncertainty surrounding the uncanny valley, it is easy to be cynical. However, several papers have found supporting examples, perhaps the most surprising being that of Steckenfinger and Ghazanfar, who established that even monkeys are affected by the valley (18364). Humans, it seems, experience the effect more powerfully, with MacDorman and Ishiguro finding that participants often initially mistook an android for a human, but upon longer viewing recognized it as artificial, whereupon it fell into the uncanny valley (363). This leads to an aversion to humanoid robot devices, as a robot residing in the intermediate state that is the uncanny valley is less than satisfying (Borenstein and Arkin 4) and even “revulsive” (Tay et al. 75). This intermediate state “seems to confuse the user” (Cafaro et al. 1078), with confusion being just one of many negative consequences for interaction, another being a loss of trust (Mathur and Reichling 31). Returning to Mori, he suggested that the uncanny valley will complicate the formation of social relations (Mori 4), which has serious ramifications for any future attachment of the kind hypothesized by Levy. The task, then, is to find a solution to the uncanny valley problem, if it truly does exist. This will mean continuing to test the latest robotics hardware with participants, gauging whether the effect is apparent with each new iteration and evolution.



5. The uncanny valley of character

5.1. A new theory

Robot depictions in films are often assumed to be extremely humanlike (Broadbent 627); however, I would fundamentally disagree. In media portraying humanoid robots, the robots’ outward appearance is often similar to humans’, but rarely completely convincing. The fact that film-goers can tell, often obviously, that they are robots is reason enough to suggest that these depictions are not human equals. Sometimes these robots are so non-human as to place them within the pit of the uncanny valley. Often a media portrayal shows a robot looking and moving in a humanlike fashion, and in this respect it is not uncanny, yet the robot swiftly becomes peculiar through its depiction of emotion and emotional response. Here a great many robots fail to exhibit any realistic humanlike characteristics. Robots in films often require some form of robotic-ness to differentiate them from human characters and to highlight the fact that they are indeed robots. As a result, however, it is incredibly difficult to take them seriously as human alternatives. Robots like this become entities which are not quite human, but look uncannily like humans. Thus I hypothesize that although robots may seem to be human, if they do not possess genuine humanness in terms of character or emotional portrayal, then they will seem stiff and obviously robotic. As robots are designed to be progressively more humanlike, they risk testing the category boundary of humanhood, in doing so creating discomfort in people and thereby succumbing to the uncanny valley. Logically, then, the solution may be to design robots completely identical to humans.

Here, however, a distinction must be made, for the term ‘appearance’ used in the original translation of Mori’s work has since been taken to denote the outward physical look of the robot. Some may disagree, suggesting that by appearance Mori meant something closer to overall impression; however, Mori himself writes most obviously of the physical look of the robot, with his examples referencing physicality. Additionally, work published since has followed this original premise. This is important to note, as the uncanny valley currently exists only in relation to a robot’s outward physical appearance and its movement. The uncanny valley does not extend to appearance in terms of character or emotion, and yet it should. A robot’s emotional intelligence, or outward emotion, is crucially judged by people as acceptable or not. This is to be expected, as “people mindlessly apply social rules to computers” (Broadbent 640), and this concept will extend to robots. Thus robots risk falling into an uncanny valley if their emotionality is not adequately aligned with users’ expectations. A robot whose character is seen to be almost human, but not quite, will be decidedly uncanny.

Zlotowski et al. endorse Mori’s hypothesis, summarizing it thus: “Once the appearance of a robot becomes indistinguishable from a real human, the affinity with it reaches its optimum at the same level as for human beings” (Zlotowski et al. 1). This declaration, while accurate, tells nothing of the characteristics of the robot, and nothing of our affinity in response to its character. This gap therefore necessitates the creation of a paradigm similar to Mori’s, relating to character (emotional portrayal) as opposed to physical appearance. It is thus possible to hypothesize how the uncanny valley may relate to its cousin, an uncanny valley of character. There is an obvious difference between the traditional uncanny valley, related to appearance, and the uncanny valley of character. In the appearance model, the robot designer can misjudge the outward appearance of the robot. When this robot is viewed, it is deemed uncanny or not depending on its physical features, for example its skin tone, facial structure or body design. Movement can also be judged as uncanny or not, and while the robot is not contextually bound to move in any particular way, as long as its movements are human in appearance it may pass the uncanny valley. In contrast, a robot’s ability to show emotion is not in itself bound by any such uncanny ideals, and so should not on its own cause the robot to fall into the uncanny valley. However, when interacting with people the robot would be contextually and socially bound to display the correct emotion at the correct time; otherwise it could seem very random, or even broken. These factors rely on more than simple design parameters, depending instead on decision management skills and programming design. If the robot does not understand the socially appropriate emotional response, then it will not know how to act correctly, and inappropriate action could result in a descent into an uncanny valley of emotion. This new uncanny valley would theoretically plot a graph similar to Mori’s, in that there would be an initial increase, then a large dip, followed by a triumphant peak (figure 2). I believe the diagram would follow a pattern most similar to Mori’s plot for movement, for the reason that, just as movement is more nuanced than the ‘still’ condition, character has many variables and multiple evaluation criteria.

Figure 2. The uncanny valley of character.

It is difficult to justify this idea with an illustrative model, as Mori did with the prosthetic hand in 1970. His prosthetic hand example did not seek to provoke amputees, and any example I give will not seek to provoke audiences either; yet the task is more difficult here, as there is a different practice of thinking about physical versus mental deviations from the norm. For example, many amputees may not be greatly offended by the idea that their prosthetic hand is uncanny to the majority of people; fundamentally, they were not born with the prosthetic hand, and may be able to divorce themselves somewhat from the item. It is not naturally their own flesh. Character, however, is something innately and uniquely ours. By many metrics it is a measure of who we are. If someone external to us judges our character negatively, we may be hurt, or respond that they are ignorant of the facts about us. Furthermore, character is a multitude of traits over which we feel we have some level of control; we are not victims of our circumstance like the amputee, but rather captain of the ship (the self). This creates a problem when searching for an allegory that mimics Mori’s example of the prosthetic hand, as many such examples may be deemed offensive or just plain wrong.
