In this paper, the term “chatbot” refers to any program that interacts with the user through dialogue only. The first chatbot saw the light of day in the 1960s with ELIZA (Weizenbaum, 1966), and things have changed considerably since then. Not only have these bots become better at talking to and understanding their human users, we have also started using instant messaging platforms such as WhatsApp and Facebook Messenger in our daily lives (Dale, 2016). As a result, our habituation to text-based messaging and the use of these platforms have dramatically improved the acceptance and accessibility of contemporary bots: whereas ELIZA was unfamiliar and required a dedicated system and location to engage with, current chatbots are familiar to most people and can be accessed virtually everywhere and at any time (Pounder et al., 2016; Weizenbaum, 1966). Consequently, these programs that talk are pervading our societies, taking up jobs in customer support, providing entertainment and forming the basis for voice-based assistants such as Google Assistant and Siri (Abu Shawar & Atwell, 2007; Dale, 2016; Følstad & Brandtzæg, 2017).

Chatbots can do these human things because of our tendency to treat our computer systems, and any virtual entity running on them, as social actors - despite being fully aware that a computer program is non-human and does not require such treatment.

This forms the basis of the paradigm of Computers As Social Actors (CASA) (Nass & Moon, 2000): we treat our computer systems socially, but we do not anthropomorphize them. That is, we do not really think our computers are human or have human characteristics. Kim and Sundar (2012) use the terms mindless and mindful anthropomorphism to indicate the discrepancy between our mindful attitude - computers are not human - and our mindless behavior - we treat them socially.

CASA is what enables a human to actually have a conversation with a chatbot and allows these bots to take on roles traditionally reserved for humans, but that does not make chatbots human. In fact, they are considered non-human by most people: on a scale of 0 (“poor”, machinelike) through 50 (“good”, but still machinelike) to 100 (humanlike), average scores for six contemporary chatbots lay between 24 and 63 (Shah et al., 2016). Moreover, we treat them differently than we treat other humans: for example, we use more profanity and send shorter messages that lack the richness in vocabulary of human-human interaction (Hill, Randolph Ford & Farreras, 2015).

2.1.1 Chatbot categories: roles and tasks

Currently, chatbots can be divided into three categories. Most of them are (1) intelligent assistants like Siri and Alexa, capable of doing several short tasks for us such as scheduling appointments or making a web search, or (2) task-focused bots - only capable of doing one thing, such as finding the cheapest flight. Only a few chatbots fit into the last category of (3) virtual companions. These are the bots that come closest to having an open conversation with their users, but there is still a long way to go before they actually do (Grudin & Jacques, 2019; Seering, Luria, Kaufman & Hammer, 2019).

Except for the virtual companion, all bots are intended to do things for us. This is reflected in what we expect from bots, namely to help us with menial tasks - providing weather updates or keeping an eye on our calendars (Brandtzaeg & Følstad, 2017; Zamora, 2017). Hence, the fact that chatbots function mostly as non-human shopping buddies or command-obeying entities (“Alexa, turn on the lights”) reveals what most chatbots are to us: digital “butlers” (Zamora, 2017), created to make our lives easier and to be used at will³.

However, Zamora (2017) and Brandtzaeg and Følstad (2017) also revealed participants’ interest in having access to relational kinds of chatbots - bots that are there to listen or to provide motivation. In other words, people would like to have bots that cater to their emotional needs as well as take care of their menial needs such as shopping lists and agendas. This shift towards bots for emotional needs also suggests a shift in the role that these chatbots fulfill for their users: when we start confiding our worries to bots and begin to depend on their input in order to feel better, they might become closer to us. In effect, this means that bots may be moving up a social rank from their earlier butler position to being a virtual assistant for feelings (Grudin & Jacques, 2019).

2.1.2 No judgment

A key element that makes these virtual assistants-for-feelings attractive is the pervasive notion that chatbots are non-judgmental: the reason participants in the Zamora (2017) study gave for wanting a listening “chatbot ear” was that a chatbot would not judge them for whatever it was they needed attention for. Moreover, a survey conducted by media agency Mindshare UK showed that people would rather share sensitive or embarrassing medical or financial information with a chatbot than with a human (Pounder et al., 2016). The tendency to share intimate information with chatbots was also seen in the earlier Vincent study (Lee et al., 2019) and has been found in other research as well: people disclose more to computers, and to computerized entities such as chatbots, than they do to humans - especially so when the information is sensitive (Lucas, Gratch, King & Morency, 2014; Weisband & Kiesler, 2003).

The rationale behind this openness towards chatbots seems to be a mix of two things. First, talking to a computer feels more anonymous than talking to another human being (Weisband & Kiesler, 2003). Second, people are less afraid of receiving negative evaluation from a chatbot than from a human when they share sensitive information (Lucas et al., 2014). In other words, people seem to feel that a chatbot will not judge them (Pounder et al., 2016; Zamora, 2017).

³One may wonder whether these bots are effectively treated - and verbally abused - as digital slaves, with the user as their superior master: “butler” may be more of a euphemism. See the work of De Angeli and Carpenter (2006) for a discussion on the subordinate role of chatbots.

Chatbot therapists This non-judgmental nature makes chatbots suitable candidates for assistants in (mental) health care, where people’s reluctance to share sensitive information often hinders adequate care (Lucas et al., 2014).

An example is that of chatbot therapist Woebot, developed at Stanford (Fitzpatrick et al., 2017). Woebot delivered a self-help program to its users through an instant messaging app, basing its daily conversations on Cognitive Behavior Therapy (CBT). After a period of two weeks, participants who engaged with Woebot showed significant decreases in depressive and anxiety-related symptoms, as compared to the group that only received a link to an online self-help guide (Fitzpatrick et al., 2017). Another example is that of Tess, Woebot’s younger sibling, which showed similar results using the same approach (Fulmer et al., 2018).