Both Woebot and Tess are examples of chatbots intended to make people less ill. Although their development is important and their success promising, they do not address the full picture - what about the people who are not ill? This question underlies the development of a field called positive psychology. This field is built on the idea that interventions aimed at improving well-being should be as important as those that cure illness (Seligman et al., 2005).

These interventions can also come in the form of technology, as Calvo and Peters (2014) demonstrate in their book “Positive Computing: Technology for Well-being and Human Potential”. Chatbots are no exception: for example, the chatbot Shim was born out of the desire to stimulate well-being in its users by applying insights from positive psychology (Ly, Ly & Andersson, 2017).

This paper intends to follow this line of thought by using the concept of self-compassion to stimulate well-being.

2.2.1 Why Self-Compassion?

Self-compassion is originally a Buddhist concept, stemming from general compassion: being open and receptive to suffering, generating kindness and the desire to help without judging the actions of those who suffer. Self-compassion, then, is exactly that, but oriented towards the self (Neff, 2003a, 2003b). In more detail, self-compassion consists of three parts: (1) being kind to oneself rather than being harsh and judgmental; (2) putting one’s experience in the bigger picture of all other humans rather than isolating oneself; and (3) being mindfully aware of the fact that emotions and painful thoughts are something one has, not something one is (Neff, 2003b).

In several studies and meta-analyses, self-compassion has been found to stimulate general well-being, reduce symptoms of anxiety and depression, and build resilience against stress (MacBeth & Gumley, 2012; Neff, 2003b; Zessin et al., 2015). More evidence for the causal effects of self-compassion comes from the work of Shapira and Mongrain (2010): participants who did self-compassion exercises for one week showed significant and sustained drops in their depressive symptoms, lasting up to six months after the intervention had ended. In contrast, the participants who worked with early childhood memories or optimism exercises for a week did not show the same result.

2.2.2 How can chatbots stimulate self-compassion?

To create chatbots for self-compassion, we need to understand how to stimulate self-compassion.

Most traditional (human-led) self-compassion interventions contain three components: (1) psychoeducation, which provides the rationale behind self-compassion; (2) mindfulness and meditation exercises; and (3) rounds of practicing compassion towards others in the intervention group (Kirby, 2016; Neff & Germer, 2013).

The third component is especially striking: apparently, practicing compassion for others helps one to be compassionate to oneself4.

4To be compassionate means to have compassion for all, including the self. In fact, introducing a separate concept for the self would be nonsensical in many Eastern philosophies - for instance, because it is questionable whether there is or should be a division between the self and others at all (Neff, 2003a, 2003b).

A digital example of an intervention that incorporates both receiving and giving compassion is the work of Falconer et al. (2016). They studied whether VR could be utilized to improve the self-compassion levels of people with depression. In their experiment, participants were instructed in how to give a compassionate response and were placed in a VR environment together with a crying, virtual child. They were asked to deliver the compassionate response to the child. Afterwards, they were placed in the virtual body of that child and received their own compassionate response from the perspective of the child. Hence, their participants both gave and received compassion, and experienced a large increase in self-compassion (Cohen’s dz = 1.5, p = 0.02) which remained stable after four weeks.
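For reference - and as a general note on the statistic rather than part of Falconer et al.’s own reporting - Cohen’s dz is the effect size conventionally used for within-subject (pre/post) comparisons: it standardizes the mean change score by the standard deviation of the individual change scores,

$$d_z = \frac{\bar{D}}{s_D}, \qquad D_i = X_i^{\text{post}} - X_i^{\text{pre}},$$

so a dz of 1.5 means that the average participant improved by roughly one and a half standard deviations of those change scores.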

However, it is not clear whether self-compassion needs this combination of receiving instructions and giving compassion to others in order to be stimulated: both have separately been found to improve self-compassion. For example, Leary et al. (2007) instructed some of their participants on how to do an exercise called compassionate letter writing, through which participants reflect on the three parts of self-compassion while dealing with an unfortunate event of their own. They concluded that this exercise successfully induced a state of self-compassion in their participants. On the other hand, Breines and Chen (2013) experimented with the effect of giving compassion to someone else. They gave participants a description of a stranger who had just experienced failure and let participants write down a comforting statement to this person. They measured self-compassion before and after the task and found that, on average, being compassionate to someone else positively influenced self-compassion scores.

2.2.3 Giving or receiving chatbots

Knowing all this, where do chatbots for self-compassion fit in? We know they can give instructions for exercises such as compassionate letter writing, like their therapeutic relatives Woebot and Tess (Fitzpatrick et al., 2017; Fulmer et al., 2018), so should they stick to that?

Alternatively, chatbots could also ask for compassion, like the stranger in the work of Breines and Chen (2013) - assuming that people are willing to show compassion to chatbots. Calvo and Peters (2014) write that people can only be compassionate to technology if they (1) witness the technology suffering, (2) understand how this suffering must feel for the technology and (3) have the possibility of doing something to alleviate the suffering. Provided these conditions are met, should a chatbot then ask for compassion instead of giving it?

This was exactly the question addressed by Lee et al. (2019) with their work on Vincent.

Like this paper, they targeted non-clinical people, with the goal of improving well-being for the average person. Their participants had daily chats with Vincent for a period of two weeks. Half (n = 31) talked with a Vincent that gave help (caregiving), resembling earlier therapist chatbots like Woebot and Tess, while the other half (n = 31) talked with a Vincent that asked for help (care-receiving), based on the findings of Breines and Chen (2013). The participants who talked with the care-receiving Vincent reported increased levels of self-compassion after the experiment (Cohen’s dz = 0.35, p = 0.029), whereas the participants who talked with the caregiving Vincent did not (dz = 0.1, p = 0.286). Hence, chatbots for self-compassion should ask for help instead of giving it.

Categories Referring back to the categories of task-focused, intelligent assistant or virtual companion chatbots (Grudin & Jacques, 2019), where do these two versions of Vincent belong?

Chatbot therapists like the caregiving Vincent, Woebot and Tess (Fitzpatrick et al., 2017; Fulmer et al., 2018) fit best as intelligent assistants for feelings - those that cater to their users’ emotional needs (Grudin & Jacques, 2019; Zamora, 2017). They seem to be effective in reducing symptoms of illness, but are less suited for the role of self-compassion therapist.

Care-receiving Vincent, on the other hand, shows promise in stimulating self-compassion. However, he does not seem to fit into either category: he does not assist with a specific task for the user - neither emotional nor menial - but he also does not engage in open conversation. In fact, conversations with care-receiving Vincent are about addressing his emotional needs, not those of the user. With participants making remarks such as “can I keep him?” (Lee et al., 2019, p. 9), he appears to be another type entirely, namely that of a talking pet - a “ChatPet”.