
Understanding the Intention to Use Mental Health Chatbots Among LGBTQIA+ Individuals: Testing and Extending the UTAUT

Tanja Henkel
Student ID: 12737658

Master's Thesis
Graduate School of Communication
Master's Programme Communication Science

Supervisor: Dr. A.J. Linn
Word count: 7460
3 February 2022


Abstract

This study aimed to test and extend the unified theory of acceptance and use of technology (UTAUT) in the context of mental health chatbot usage among LGBTQIA+ individuals. The proposed model uses performance expectancy, effort expectancy and social influence to predict the intention to use a mental health chatbot, and further incorporates medium- and target-group-specific predictors: willingness to self-disclose, perceived loss of privacy and trust. We hypothesized that these direct effects are moderated by age and previous experience with chatbots. We further sought to investigate whether gender identity moderates the effects of the predictors on the behavioral intention to use a mental health chatbot.

Regression results with data derived from an online survey (N = 305) indicate that performance expectancy, social influence, and willingness to self-disclose significantly and positively predict chatbot usage intention among LGBTQIA+ individuals, whereas effort expectancy had a significant negative influence on behavioral intention. Previous experience with chatbots negatively moderated the effect of social influence on behavioral intention, and age appeared to negatively moderate the relationship between willingness to self-disclose and the intention to use a mental health chatbot. With regard to gender identity, there were no differences except that the negative relationship between perceived loss of privacy and behavioral intention was stronger for males compared to females. Overall, the extended UTAUT provided a slight improvement in the variance explained in behavioral intention to use a chatbot compared to the initial UTAUT (from 67% to 70%) and thus showed a good fit to explain technology acceptance of mental health chatbots among the LGBTQIA+ community. The theoretical and practical implications of the results are discussed.

Keywords: Technology Acceptance, Mental Health Chatbots, UTAUT, LGBTQIA+ Community


Understanding the Intention to Use Mental Health Chatbots Among LGBTQIA+ Individuals: Testing and Extending the UTAUT

With the growing emergence of technology, an increasing need to understand technology usage became apparent. Consequently, the unified theory of acceptance and use of technology (UTAUT) was developed (Venkatesh et al., 2003). This theory-driven model combined several variables from prior technology usage models such as the technology acceptance model (Davis, 1989) and the theory of planned behavior (Ajzen, 1991). Initially, the UTAUT was applied to and tested in organizational settings (Venkatesh et al., 2003). However, in the past twenty years, technology has developed rapidly as computer systems became more sophisticated and complex. Technology usage is no longer just about using a computer programme at work but has become indispensable in all areas of our society: from smartphones and electric cars to manufacturing robots. Moreover, we currently live in a society where gender identity no longer corresponds only to the traditional "man or woman" image: boundaries are blurring between male, female, non-binary, queer, transgender and genderfluid individuals. The UTAUT in its original version is neither able to explain technology acceptance of modern technology, nor does it address variations in gender identity. Therefore, it is important to constantly test the boundaries of the UTAUT by applying it to modern technologies and including user groups which represent blurring gender identities, such as the Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual (LGBTQIA+) community.

The UTAUT has been applied and extended to different contexts before, e.g., to study the behavioral intention to use smart healthcare systems (Liu & Tao, 2022), to investigate consumers' usage of online banking (Tarhini et al., 2016) or to analyse social media usage of microbusinesses (Mandal & McQueen, 2012). However, the UTAUT is currently understudied in an emerging field: mental health chatbots (Goklani, 2021). Mental health chatbots use natural language processing (NLP) to detect and reframe cognitive patterns of users (Goklani, 2021). Applications such as Wysa (Mental Health Support, for Everyone., 2021) or Woebot (Woebot Health, 2022) offer great potential for users who lack access to professional help or who feel ashamed because of mental health issues (D'Alfonso, 2020). This is due to the many advantages of mental health chatbots: they are always available, easily accessible and cost-effective, offer a non-judgemental space and show both infinite patience and immediate feedback (Croes & Antheunis, 2021). Early studies suggest promising results regarding the effectiveness of mental health chatbots in combating mental health issues. For instance, chatbot usage has been shown to reduce feelings of stress (Ly et al., 2017), anxiety (Fulmer et al., 2018) and depression (Fitzpatrick et al., 2017). Both the advantages and the effectiveness of mental health chatbots may potentially increase people's intention to use them. Yet, the initial UTAUT by Venkatesh and colleagues (2003) is not able to fully explain the intention to use mental health chatbots, as it neglects crucial aspects like privacy, trust and whether individuals would even disclose to a chatbot. Individuals who would not disclose to a mental health chatbot in the first place will probably show a lower intention of using one (Lee et al., 2020a). Also, when disclosing sensitive data, users need to feel that they can trust the mental health chatbot and that their privacy is protected (van Wezel et al., 2021). Thus, people's willingness to self-disclose, their perceived loss of privacy and their trust in mental health chatbots are necessary concepts to consider in the context of mental health chatbots.

The second aspect limiting the UTAUT is its restriction to male and female individuals. It might be that transgender or non-binary individuals have different underlying reasons for their mental health chatbot usage intention than male or female individuals. By excluding those individuals from research by default, we might miss valuable insights about gender differences. Hence, this study aims to bring more inclusivity into research and include an under-researched target group – the LGBTQIA+ community – in the UTAUT.

To the author's present knowledge, no other study has applied and extended the UTAUT to the specific setting of mental health chatbot usage intention among LGBTQIA+ individuals. Hence, this paper aims to answer the overarching research question of to what extent the (extended) UTAUT can predict the behavioral intention to use a mental health chatbot among LGBTQIA+ individuals. In doing so, our research not only contributes to a deeper understanding of modern technology usage intention and proposes a more inclusive theoretical model, but might also provide valuable insights into whether LGBTQIA+ individuals would rely on a mental health chatbot to combat mental health issues.

Theoretical Background

The Unified Theory of Acceptance and Use of Technology (UTAUT) and Why it Needs Adjustment

The UTAUT was initially proposed by Venkatesh and colleagues (2003). In developing this model, the authors combined concepts from eight user acceptance models, among others the technology acceptance model (Davis, 1989), the theory of reasoned action (Fishbein & Ajzen, 1975) and the innovation diffusion theory (Rogers, 1995). In this way, Venkatesh et al. (2003) created a unified, theory-based model that would help future researchers predict user acceptance without having to pick and choose from several models.

According to the original model (Figure 1), three core variables predict the behavioral intention (BI) to use a certain technology: performance expectancy (i.e., how useful people perceive a technology to be), effort expectancy (i.e., how easy technology usage is perceived to be) and social influence (i.e., whether one believes that one's social environment thinks one should use the technology). In turn, BI influences actual technology use. Technology usage is further directly predicted by facilitating conditions (i.e., the degree to which one believes that the necessary technical infrastructure exists to enable technology usage) (Venkatesh et al., 2003). These relationships are moderated by age, gender, previous experience with the technology and voluntariness of use (Venkatesh et al., 2003). For the present research, we adapt performance expectancy (PE), effort expectancy (EE) and social influence (SI) to explain the behavioral intention to use a mental health chatbot among LGBTQIA+ individuals. Note that voluntariness of use and facilitating conditions were omitted from the model, as our research focuses on the voluntary usage of mental health chatbots and chatbot applications are nowadays easily installable on a smartphone (Brandtzaeg & Følstad, 2017). Finally, measuring the actual usage of a mental health chatbot is beyond the scope of this research; therefore, technology usage is also excluded from the model.

Figure 1

The Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2003)

Although the UTAUT serves as a solid theoretical framework and its generalizability has been tested numerous times (e.g., Chang et al., 2007; Neufeld et al., 2007; Tarhini et al., 2016; Venkatesh et al., 2012; Yi et al., 2006), it is not able to fully explain the intention to use a mental health chatbot, as it does not address factors linked to the unique features of mental health chatbots. Mental health chatbots are computer applications that use natural language – either text- or voice-based – to conduct human-like conversations (Almahri et al., 2020). They are trained in cognitive behavioral therapy (CBT) to challenge users' thinking patterns (Inkster et al., 2018). Based on the information the users provide – oftentimes anonymously – the chatbot can react immediately and show emotional support without judgement. These characteristics may allow people to disclose personal information that they would not have entrusted to a real human therapist (Lee et al., 2020a): their willingness to self-disclose (WSD) to a chatbot could thus enhance their intention to use a mental health chatbot, which is why WSD could be a potential predictor of the BI to use a mental health chatbot. At the same time, sharing sensitive information with a mental health chatbot raises privacy issues. Individuals using a mental health chatbot need to be sure that their personal thoughts are not misused or shared with third parties (Liu & Tao, 2022). Perceived loss of privacy (LOP) may therefore influence people's intention to use a mental health chatbot and needs to be considered. Moreover, before formulating the intention to use a mental health chatbot, individuals need to trust such an application. Trust has repeatedly been shown to be a significant predictor of the BI to use modern technology (e.g., Lee & See, 2004; Liu & Tao, 2022) and might be able to explain the intention to use a mental health chatbot. Following this argumentation, WSD, LOP and trust are included in the extended UTAUT and discussed further below.

Finally, the initial UTAUT considers gender as a dichotomous moderator. However, technology usage is becoming more and more personalized (Toch et al., 2012). Simultaneously, users themselves can express their individual personalities and blurring gender identities more freely. Transgender individuals use different mobile applications and visit different websites than cisgender individuals (McInroy, 2018). If we apply the traditional UTAUT, we are neither able to grasp the current societal climate nor to gain valuable insights into the nuances in technology use across gender identities. To adapt to these changing gender perceptions, we suggest replacing gender with gender identity in the extended UTAUT.

Hypotheses Development

Performance Expectancy

PE was initially described as "the degree to which an individual believes that using the system will help him or her to attain gains in job performance" (Venkatesh et al., 2003, p. 447). Based on this, we define PE as the degree to which an individual of the LGBTQIA+ community believes that a mental health chatbot will help them improve their mental health.

According to Venkatesh and colleagues (2003), PE has been shown to be among the strongest predictors of BI. In the context of chatbots, several studies have shown that PE is indeed a significant predictor of the BI to use a chatbot. For instance, Almahri and colleagues (2020) found that PE predicts students' intention to use a chatbot in an educational setting. Similarly, Chocarro et al. (2021) suggest that PE significantly predicts the intention to use a chatbot among teachers. In the healthcare sector, Liu and Tao (2022) were able to support the hypothesis that PE predicts the intention to use AI healthcare applications such as mental health chatbots. Venkatesh et al.'s (2003) initial work suggests that this relationship is moderated by age, meaning that the effect will be stronger for younger individuals as opposed to older users. Regarding PE, we thus hypothesize the following:

H1. PE positively influences BI to use a mental health chatbot among the LGBTQIA+ community.


H1a. The relationship between PE and BI to use a mental health chatbot is moderated by age, meaning the effect will be stronger for younger than for older individuals from the LGBTQIA+ community.

Effort Expectancy

Venkatesh and colleagues (2003) defined EE as "the degree of ease associated with the use of the system" (p. 450). To predict the BI to use a mental health chatbot, we define EE as the degree of ease LGBTQIA+ individuals associate with the use of a mental health chatbot. Previous research suggests that EE positively influences the BI of chatbot usage. For instance, the results of Almahri et al. (2020) indicate that EE increases students' intention to use a chatbot in an educational setting, and Chocarro et al. (2021) found similar results for teachers. The literature review by Abd-alrazaq and colleagues (2019) found that the overall perception of a chatbot's EE was high, although some users had difficulties phrasing their questions in a way that was understandable for the chatbot. Venkatesh and colleagues (2003) further argue that the effect of EE on BI is stronger for older users. This is because younger people become accustomed to new technology such as chatbots more easily than older individuals (Magsamen-Conrad et al., 2015). Additionally, Venkatesh et al. (2003) found that EE was more significant among less experienced individuals and that this effect disappeared when people gained more experience (Isaias et al., 2017; Magsamen-Conrad et al., 2015). Thus, EE seems to be more relevant in the early process of using a technology, when, for instance, phrasing questions for the chatbot can be an obstacle. Hence, we predict the following:

H2. EE positively influences BI to use a mental health chatbot among the LGBTQIA+ community.


H2a. The relationship between EE and BI to use a mental health chatbot is moderated by age, meaning the effect will be stronger for older LGBTQIA+ individuals than for younger LGBTQIA+ individuals.

H2b. The relationship between EE and BI to use a mental health chatbot is moderated by experience, meaning the effect will be stronger for less experienced LGBTQIA+ individuals than for experienced LGBTQIA+ individuals.

Social Influence

SI can be of great importance when it comes to adopting a certain behavior (Ajzen, 1991). Especially among LGBTQIA+ individuals, the community feeling is very strong (Fish et al., 2021), as they are less likely to receive sufficient family support and rely more on peers with a similar background instead (Jackson, 2017). Based on Venkatesh and colleagues' (2003) original definition – "the degree to which an individual perceives that important others believe he or she should use the new system" (p. 451) – we define SI as the degree to which an LGBTQIA+ individual thinks their close social environment believes they should use a mental health chatbot. Various studies indicate that SI is crucial for the BI to use a mental health chatbot. For instance, Melián-González et al. (2019) found support for SI predicting chatbot usage intention in the tourism context. The qualitative study by Schueller and colleagues (2018) suggests that recommendations from people's social environment might impact mental health app usage, as almost 37% of all participants found mental health apps through word-of-mouth. Prakash and Das (2020) used a thematic analysis approach to analyse patients' perceptions of mental health chatbots and found that recommendations from people's closest environment served as a driver for the usage of mental health chatbots.

The relationship between SI and BI to use a mental health chatbot is further considered to be moderated by age and previous experience. Previous research on age characteristics suggests that the need for affiliation is stronger for younger individuals (Venkatesh et al., 2000), and the effect of SI might thus be stronger for younger individuals (Magsamen-Conrad et al., 2015). Additionally, at the beginning of chatbot usage, people with little experience have less informed opinions than individuals who have used chatbots for longer (Venkatesh et al., 2003). Therefore, their opinions are more easily influenced by others, and problems that might arise in an early stage of chatbot usage may disappear (Venkatesh et al., 2011). We thus assume:

H3. SI positively influences BI to use a mental health chatbot among the LGBTQIA+ community.

H3a. The relationship between SI and BI to use a mental health chatbot is moderated by age, meaning the effect will be stronger for younger LGBTQIA+ individuals than for older LGBTQIA+ individuals.

H3b. The relationship between SI and BI to use a mental health chatbot is moderated by experience, meaning the effect will be stronger for less experienced LGBTQIA+ individuals than for experienced LGBTQIA+ individuals.

Willingness to Self-Disclose

When it comes to human-computer interaction, the characteristics of mental health chatbots can be highly beneficial for certain groups: some people might prefer to disclose personal information in an anonymous space without stigma, which may increase their intention to use a mental health chatbot (Lee et al., 2020b). However, for others, the lack of human cues might decrease their willingness to disclose personal information. Thus, the intention to use a chatbot is first preceded by users' WSD to mental health chatbots (Croes & Antheunis, 2021). Self-disclosure has further been found to be an important aspect in the treatment of mental health issues (Pennebaker, 1995): it can create a sense of relief for the person disclosing (Croes & Antheunis, 2021) and reduce stress (Barak & Gluck-Ofri, 2007). Adapted from Croes and Antheunis (2021), we define WSD as the willingness of LGBTQIA+ individuals to entrust personal information to a mental health chatbot. Croes and Antheunis (2021) found that people are equally willing to disclose to a chatbot as to a human therapist. This is in line with other studies which indicate individuals' high WSD to an empathic chatbot (e.g., Chaix et al., 2019; Ho et al., 2018; Lee et al., 2020a; Van Wezel et al., 2021). Lucas and colleagues (2014) found that participants showed less fear of self-disclosure, more intense expressions of emotions and, overall, a higher WSD with a computer system as opposed to a human operator. Assuming that people are more likely to use a chatbot if they believe they can openly disclose their feelings, we test whether WSD is a potential predictor of the BI to use a chatbot.

In addition, we expect this effect to be stronger for younger LGBTQIA+ individuals compared to older LGBTQIA+ individuals, because younger people are often more familiar with modern technology and therefore more likely to entrust personal information to a mental health chatbot (Schroeder & Schroeder, 2018). Regarding the role of previous experience, empirical research is scarce. To the author's best knowledge, no study has yet examined whether previous experience moderates the association between WSD and BI to use a mental health chatbot. We therefore aim to answer the research question of whether previous experience with chatbots moderates the relationship between WSD and BI to use a mental health chatbot. Accordingly, we expect the following:

H4. WSD positively influences BI to use a mental health chatbot among the LGBTQIA+ community.

H4a. The relationship between WSD and BI to use a mental health chatbot is moderated by age, meaning the effect will be stronger for younger LGBTQIA+ individuals than for older LGBTQIA+ individuals.


RQ1. To what extent does previous experience with chatbots moderate the relationship between WSD and BI to use a mental health chatbot among LGBTQIA+ individuals?

Perceived Loss of Privacy

With more sophisticated technology being implemented in various contexts, privacy concerns have become part of our everyday life (Kretzschmar et al., 2019). Especially in mobile health applications, where people disclose sensitive data, privacy is an important aspect to consider. When analysing three different mental health chatbots, Kretzschmar and colleagues (2019) were particularly concerned about third-party access to data and how well chatbots protect users in emergency situations. Liu and Tao (2022) refer to the term perceived loss of privacy (LOP), which describes the extent to which individuals think that using smart healthcare services such as mental health chatbots will violate their privacy. Although Liu and Tao (2022) found LOP to be a mediator rather than a significant direct predictor of the BI to use a mental health application, other studies found a direct negative effect of LOP on the acceptance of chatbot applications (e.g., Lipschitz et al., 2019; Nadarzynski et al., 2019).

Perceived LOP also depends on people's level of experience with technology. Bellman et al. (2004) found that privacy concerns decrease with more Internet experience, meaning that individuals with less Internet experience showed higher privacy concerns than experienced users. This is mostly in line with the findings of Bergström (2015), who found that in most Internet situations, experienced people were less concerned. Regarding the potential moderating effect of age on privacy concerns, previous research has been somewhat inconclusive. Some studies found no differences due to research measurements (Hoofnagle et al., 2010; Taddicken, 2013) or only small significant differences, with younger people being more concerned about privacy (Bergström, 2015). Guo and colleagues (2016) found that the effect of privacy concerns on BI is stronger for younger users, whereas older users were not affected. In contrast, Sheehan (2002) proposed different user typologies, with older consumers being more alarmed than younger users of online services. Accordingly, we test the following hypotheses and aim to answer the following research question:

H5. LOP negatively influences BI to use a mental health chatbot among the LGBTQIA+ community.

H5a. The relationship between LOP and BI to use a mental health chatbot is moderated by experience, meaning the effect will be stronger for less experienced LGBTQIA+ individuals than for experienced LGBTQIA+ individuals.

RQ2. To what extent does age moderate the relationship between LOP and BI to use a mental health chatbot among LGBTQIA+ individuals?

Trust

Trust is a crucial factor for establishing strong bonds with someone and has been shown to be equally important in human-computer interactions (Croes & Antheunis, 2021; Lee & See, 2004). Without trust, the implementation of AI-based mental health tools will be almost impossible (Powell, 2019). Several studies indicate that trust is an antecedent of the BI to use a mental health application (e.g., Liu & Tao, 2022; Prakash & Das, 2020). Based on the definition from Liu and Tao (2022), we specify trust in a chatbot as the degree to which an LGBTQIA+ individual perceives that mental health chatbots are dependable, reliable, and trustworthy in improving one's mental health. Schroeder and Schroeder (2018) investigated factors that influence trust in chatbots and found that individuals who are more experienced with chatbots and who are younger are more likely to trust a chatbot. Therefore, we expect the following:

H6. Trust positively influences BI to use a mental health chatbot among the LGBTQIA+ community.


H6a. The relationship between Trust and BI to use a mental health chatbot is moderated by age, meaning the effect will be stronger for younger LGBTQIA+ individuals than for older LGBTQIA+ individuals.

H6b. The relationship between Trust and BI to use a mental health chatbot is moderated by experience, meaning the effect will be stronger for more experienced LGBTQIA+ individuals than for less experienced LGBTQIA+ individuals.

The Role of Gender Identity

Gender effects on technology acceptance are not necessarily caused by biological sex but can also depend on more sociological concepts such as gender identity (Venkatesh et al., 2003). Gender identity – "one's sense and subjective experience of gender, which may or may not be consistent with birth sex" (Russel & Fish, 2016, p. 466) – can be male or female, but can also entail non-binary individuals (people who identify as neither male nor female (Cheung et al., 2020)), transgender individuals (people whose gender expression and/or identity deviates from their birth sex (Carpenter et al., 2020)) or intersex individuals (people who were born with non-typical sex characteristics or develop these during puberty (Carpenter, 2016)). Previous research looking into differences regarding gender identity in technology usage intention is scarce. McInroy and colleagues (2018), however, studied differences in media usage among LGBTQ+ youth and found that non-binary and queer adolescents show a higher use of mobile devices compared to male, female or transgender adolescents. With a higher affinity for mobile devices might come a higher perceived ease of using a mental health chatbot application, whereas males or females may perceive it as more difficult to use. Additionally, a recent study found that transgender individuals are 1.6 times more likely to suffer from untreated depression than cisgender individuals (Steele et al., 2017). Simultaneously, transgender individuals often seek social support online (McInroy et al., 2018). Considering this unmet need and high online presence, transgender individuals may perceive mental health chatbots more positively, which in turn increases their intention to use such a chatbot. To explore the BI to use a mental health chatbot among LGBTQIA+ individuals, we pose the following research question:

RQ3. To what extent does gender identity moderate the relationships between the predictors and BI to use a mental health chatbot among LGBTQIA+ individuals?

The proposed extension of the UTAUT to the context of mental health chatbot acceptance among the LGBTQIA+ community is depicted in Figure 2.

Figure 2

Extended UTAUT for Chatbot Usage Intention Among the LGBTQIA+ Community
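In regression terms, the extended model sketched in Figure 2 can be summarized as follows (a schematic equation, not reported in this form in the original; β denotes unstandardized coefficients, ε the error term, and the ellipsis stands for the remaining age and experience interactions listed later in Table 6):

\[ BI = \beta_0 + \beta_1 PE + \beta_2 EE + \beta_3 SI + \beta_4 WSD + \beta_5 LOP + \beta_6 TRU + \beta_7 (PE \times Age) + \dots + \varepsilon \]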


Method

Sampling Method

To test the hypotheses and answer the research questions, a survey design was applied. We developed an online questionnaire (see Appendix A) with the survey tool Qualtrics. Ethical approval was granted by the university's Ethics Review Board (project ID: 2021-PC-14159). The questionnaire was created in English to reach LGBTQIA+ individuals worldwide. We chose LGBTQIA+ individuals because research has repeatedly shown that they run a higher risk of developing a mental illness than heterosexual individuals (e.g., Steele et al., 2017; Russel & Fish, 2016; Yarns et al., 2016). Because of possible homophobia and lacking support in people's sociocultural environments, individuals might perceive their sexual differences as unacceptable or sinful, which in turn may lead to anxiety, feelings of shame and depression (Chazin & Klugman, 2014). Although social acceptance of the LGBTQIA+ community has increased in recent years, some individuals still face bullying, harassment and violence because of their sexual identity or orientation (Russel & Fish, 2016). At the same time, LGBTQIA+ individuals often lack the social support and psychological assistance necessary to understand their feelings and inclinations (Russel & Fish, 2016). A chatbot's potential to support vulnerable groups, combined with its beneficial characteristics, could therefore be especially valuable for LGBTQIA+ individuals.

We used purposive convenience sampling by sharing the survey in the author's direct social network via WhatsApp and Instagram, as well as posting a recruitment text (see Appendix B) in relevant LGBTQ+ Facebook groups and Reddit threads (see Appendix C for the full list of groups/threads). In addition, flyers with the survey QR code were spread at the University of Amsterdam campus. Eligible participants were individuals older than 16 years who (potentially) identify as LGBTQIA+. Participation was completely voluntary and anonymous, and respondents could withdraw at any time. Respondents were not compensated for their participation. Because of the length of the questionnaire (~10 minutes), the dropout rate was quite high (32.18%). In total, 354 responses were gathered. However, four respondents did not give consent, sixteen participants did not consider themselves part of the LGBTQIA+ community and four more respondents were aged below 16. These respondents, together with those who did not pass the attention check, were excluded from the data set. Additionally, we omitted one case whose answers showed zero variance (straight-lining). This leaves a final sample of N = 305 participants.

Pre-Testing

The questionnaire was pre-tested with eight LGBTQIA+ individuals. Based on their feedback concerning the length of the respective blocks, the overall length of the study and their understanding of mental health chatbots, we adjusted the survey: blocks with the UTAUT items were split and one question was left out to shorten the survey duration. Pre-testers further indicated having difficulties imagining what such a mental health chatbot would look like, which is why a screen recording showing an interaction with an existing mental health chatbot application (Wysa) was included.

Procedure

Data were collected from 9 December 2021 to 17 December 2021. Participants who clicked on the survey link or scanned the QR code were first exposed to the information letter, which provided information about the goal, the procedure, and additional details of the research. Afterwards, participants gave informed consent. If participants did not give consent, they were automatically led to the end of the survey.

All participants who agreed to the research terms were asked whether they consider themselves part of the LGBTQIA+ community. This question served as a control question to rule out heterosexual/cisgender individuals; those individuals were led to the end of the questionnaire. In the next section, respondents indicated their gender identity, age, level of education and whether they cope with mental health issues. The following block was dedicated to previous experience with chatbots. Respondents first saw a short description and examples of chatbots to increase their general understanding of chatbots (see Appendix A). They then indicated how often they had used different types of chatbots in the past. Afterwards, participants saw another description and a short example video that specifically explained mental health chatbots. After that, they were exposed to the initial UTAUT items, the LOP questions and the trust items. In the following section, participants stated their willingness to disclose to a mental health chatbot. Lastly, respondents gave their final approval for anonymous data submission and were thanked for their participation.

Measurements

The scales for PE, EE, SI and BI were adapted from Venkatesh et al. (2003) and adjusted to the context of this research. Participants' WSD to a chatbot was measured with five items adapted from Croes and Antheunis (2021). The scale for perceived LOP was drawn from Liu and Tao (2022) and consisted of three items. The last predictor, trust, was also measured on a three-item scale adapted from Liu and Tao (2022). Table 1 provides an overview of the original and adjusted items. All latent constructs were measured on a 7-point Likert scale ranging from 1 (Strongly Disagree) to 7 (Strongly Agree). A factor analysis with direct oblimin rotation revealed unidimensionality for all constructs, and reliability analysis confirmed the reliability of the scales. Table 2 shows the eigenvalues, the explained variance, Cronbach's α, and the M and SD of the computed mean variables for each latent concept.
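The reliability figures in Table 2 can be reproduced along the following lines; a minimal sketch in Python, assuming the item responses sit in a pandas DataFrame read from a hypothetical survey.csv with made-up column names pe1 to pe4 (the thesis itself used SPSS):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    df = pd.read_csv("survey.csv")               # assumed data file
    pe_items = df[["pe1", "pe2", "pe3", "pe4"]]  # hypothetical 7-point Likert items
    print(f"Cronbach's alpha (PE): {cronbach_alpha(pe_items):.2f}")

    # Mean score per respondent, as used for the computed mean variables in Table 2.
    df["PE"] = pe_items.mean(axis=1)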

Individuals' age was measured with an open text entry. Previous experience with chatbots was measured with one question: "How often have you used one of these chatbots in the past?". For different types of chatbots (e.g., customer service chatbots, healthcare chatbots, social messaging chatbots or other), respondents indicated their previous experience on a 5-point Likert scale ranging from "Never" to "A lot of times (>20 times)". Experience with customer service chatbots, mental health chatbots and social messenger chatbots were treated as separate variables. Gender identity was measured with one question item: "Which of the following most likely describes you?". Participants could choose between "Female", "Male", "Non-binary", "Transgender", "Intersex", "Queer or Questioning", "I prefer not to say" and a text field for individual specification.

Further, to describe the sample, data about participants' level of education were gathered with one question, "What is the highest degree or level of education you have completed?", ranging from "No schooling completed" to "Doctoral or equivalent level". Respondents also had to indicate whether they feel they cope with mental health issues and, if so, whether they receive professional help. Lastly, one attention check item ("Please click on 'Agree'") was included between the items addressing BI to ensure respondents paid attention throughout the questionnaire.

Table 1

Operationalization of Predictors and Behavioral Intention

Performance expectancy (Venkatesh et al., 2003)
Initial items:
PE1: I would find the system useful in my job.
PE2: Using the system increases my productivity.
PE3: Using the system enables me to accomplish tasks more quickly.
PE4: If I use the system, I will increase my chances of getting a raise.
Adjusted items:
PE1: I would find such a mental health chatbot useful in my daily life.
PE2: Using such a mental health chatbot would improve my mental health.
PE3: Using such a mental health chatbot would help me to improve my mental health more quickly.
PE4: Using such a mental health chatbot improves my mental well-being.

Effort expectancy (Venkatesh et al., 2003)
Initial items:
EE1: Learning to operate the system is easy for me.
EE2: My interaction with the system would be clear and understandable.
EE3: I would find the system easy to use.
EE4: It would be easy for me to become skilful at using the system.
Adjusted items:
EE1: Learning how to use such a mental health chatbot is easy for me.
EE2: My interaction with such a mental health chatbot would be clear and understandable.
EE3: I would find such a mental health chatbot easy to use.
EE4: It would be easy for me to become skilful at using such a mental health chatbot.

Social influence (Venkatesh et al., 2003)
Initial items:
SI1: People who are important to me think that I should use the system.
SI2: People who influence my behavior think that I should use the system.
SI3: In general, the organization has supported the use of the system.
SI4: The senior management of this business has been helpful in the use of the system.
Adjusted items:
SI1: People who are important to me think that I should use such a mental health chatbot.
SI2: People who influence my behavior think that I should use such a mental health chatbot.
SI3: In general, my social environment would support the use of such a mental health chatbot.

Behavioral intention (Venkatesh et al., 2003)
Initial items:
BI1: I intend to use the system in the next <n> months.
BI2: I predict I would use the system in the next <n> months.
BI3: I plan to use the system in the next <n> months.
Adjusted items:
BI1: I intend to use such a mental health chatbot in the future.
BI2: I will try to use such a mental health chatbot in my daily life.
BI3: I plan to use such a mental health chatbot frequently.

Willingness to self-disclose (Croes & Antheunis, 2021)
Initial items:
WSD1: During the conversation I was able to share personal information about myself.
WSD2: During the conversation I felt comfortable sharing personal information.
WSD3: During the conversation it was easy to share personal information.
WSD4: During the conversation I felt that I could be open.
Adjusted items:
WSD1: I feel I could share personal information about myself with such a mental health chatbot.
WSD2: I feel I would be comfortable sharing personal information with such a mental health chatbot.
WSD3: I feel it would be easy to share personal information with such a mental health chatbot.
WSD4: I feel that I could be open during a conversation with such a mental health chatbot.
WSD5: How likely are you to confide in an anonymous chatbot for mental health issues?

Perceived loss of privacy (Liu & Tao, 2022)
Initial items:
LOP1: I am concerned that smart healthcare services will collect too much personal information from me.
LOP2: I am concerned that smart healthcare services will use my personal information for other purposes without my authorization.
LOP3: I am concerned that smart healthcare services will share my personal information with other entities without my authorization.
Adjusted items:
LOP1: I am concerned that such a mental health chatbot will collect too much personal information from me.
LOP2: I am concerned that such a mental health chatbot will use my personal information for other purposes without my authorization.
LOP3: I am concerned that such a mental health chatbot will share my personal information with other entities without my authorization.

Trust (Liu & Tao, 2022)
Initial items:
TRU1: Smart healthcare services are dependable.
TRU2: Smart healthcare services are reliable.
TRU3: Overall, I can trust smart healthcare services.
Adjusted items:
TRU1: Such a mental health chatbot is dependable.
TRU2: Such a mental health chatbot is reliable.
TRU3: Overall, I can trust such a mental health chatbot.

Table 2

Eigenvalues, Explained Variance, Cronbach’s α, Means and Standard Deviations of Latent Concepts

Variable                        Eigenvalue   % of Variance   Cronbach's α   Mean   SD
Performance expectancy          3.43         85.65%          .94            4.12   1.43
Effort expectancy               2.69         67.34%          .83            5.23   1.11
Social influence                2.22         73.98%          .81            3.44   1.23
Willingness to self-disclose    3.94         78.82%          .93            4.10   1.62
Loss of privacy                 2.78         92.60%          .96            4.81   1.70
Trust                           2.25         74.88%          .83            4.16   1.31
Behavioral intention            2.73         91.13%          .95            3.37   1.55

Note. Factor analysis with direct oblimin rotation was used; M and SD refer to the computed mean variables.

Analysis

Data analyses were carried out in SPSS. To describe the sample, a frequencies analysis was conducted. The assumptions of linearity and homoscedasticity were checked by creating a scatterplot and a histogram of the residuals. Afterwards, all predictors and moderators were mean centered. This simplifies the interpretation of interaction effects: each coefficient then applies to respondents who score average on the other variable in the interaction. Subsequently, interaction variables were created to test the moderation effects. All hypotheses, the moderating role of previous experience in the relationship between WSD and BI (RQ1), and the moderating role of age in the relationship between LOP and BI (RQ2) were tested with a hierarchical regression analysis in which first the independent variables (PE, EE, SI, age, previous experience) as well as their interaction variables (i.e., PE*Age, EE*Age, EE*Pexp1, …) were entered. In the next block, we included WSD, LOP and trust together with the corresponding interaction variables (i.e., WSD*Age, LOP*Pexp1, LOP*Pexp2, …). This enabled a comparison between the initial UTAUT and the extended model. To answer RQ3, which addressed the moderating effect of gender identity, dummy variables for gender identity were created (i.e., female, male, trans, non-binary), with female participants as the reference group. Next, a linear regression model was estimated in which PE, EE, SI, WSD, LOP, trust, the dummy variables for male, trans and non-binary individuals, and lastly the interaction variables for the respective predictor*gender identity effects were included.
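This two-block setup can be mirrored outside SPSS; below is a minimal sketch in Python with statsmodels, assuming a data file survey.csv and column names (BI, PE, EE, SI, WSD, LOP, TRU, Age, Pexp1-Pexp3) that are hypothetical stand-ins for the actual variables:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # assumed file with the computed scale means

    # Mean-center all predictors and moderators so interaction terms are interpretable.
    predictors = ["PE", "EE", "SI", "WSD", "LOP", "TRU", "Age", "Pexp1", "Pexp2", "Pexp3"]
    for col in predictors:
        df[col + "_c"] = df[col] - df[col].mean()

    # Block 1: initial UTAUT predictors plus their hypothesized age/experience interactions.
    block1 = (
        "BI ~ PE_c + EE_c + SI_c + Age_c + Pexp1_c + Pexp2_c + Pexp3_c"
        " + PE_c:Age_c + EE_c:Age_c + SI_c:Age_c"
        " + EE_c:Pexp1_c + EE_c:Pexp2_c + EE_c:Pexp3_c"
        " + SI_c:Pexp1_c + SI_c:Pexp2_c + SI_c:Pexp3_c"
    )
    m1 = smf.ols(block1, data=df).fit()

    # Block 2: extended model adding WSD, LOP, trust and their interactions.
    block2 = block1 + (
        " + WSD_c + LOP_c + TRU_c"
        " + WSD_c:Age_c + LOP_c:Age_c + TRU_c:Age_c"
        " + WSD_c:Pexp1_c + WSD_c:Pexp2_c + WSD_c:Pexp3_c"
        " + LOP_c:Pexp1_c + LOP_c:Pexp2_c + LOP_c:Pexp3_c"
        " + TRU_c:Pexp1_c + TRU_c:Pexp2_c + TRU_c:Pexp3_c"
    )
    m2 = smf.ols(block2, data=df).fit()

    # Compare the initial UTAUT block with the extended model.
    print(f"R2 UTAUT: {m1.rsquared:.3f}, R2 extended: {m2.rsquared:.3f}")

With these assumptions, m1 corresponds to the UTAUT column and m2 to the extended column of Table 5.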

Results

Descriptive Analysis

Table 3 presents the sample demographics. Participants' age ranged from 16 to 59 (M = 24.69, SD = 7.28). The majority identified as female (43.60%). 10.80% specified their gender identity in a separate text field; common answers were "Agender", "Genderfluid" and "Questioning". Regarding respondents' level of education, the majority had completed upper secondary level (34.80%). Furthermore, the descriptive analysis of the control variable revealed that most of the participants cope with mental health issues without receiving professional help (39.70%). Remarkably, only 12.80% stated that they do not cope with mental health issues at all. When it comes to previous experience with chatbots, respondents had the most experience with customer service chatbots (M = 2.23, SD = 0.98) and social messaging chatbots (M = 1.84, SD = 1.05), followed by mental health chatbots (M = 1.37, SD = 0.73). Table 4 shows the frequency distributions for previous experience with chatbots.


Table 3

Characteristics of the Sample (N = 305)

Characteristic                              n (%)
Age
  16-23                                     157 (51.5%)
  24-30                                     89 (29.2%)
  31-35                                     37 (12.2%)
  36-40                                     10 (3.3%)
  >40                                       12 (3.8%)
Gender identity
  Male                                      67 (22.0%)
  Female                                    133 (43.6%)
  Non-binary                                54 (17.7%)
  Transgender                               13 (4.3%)
  Intersex                                  0 (0%)
  Other                                     33 (10.8%)
Level of education
  No schooling completed                    3 (1.0%)
  Lower secondary level                     31 (10.2%)
  Upper secondary level                     106 (34.8%)
  Vocational training                       13 (4.3%)
  Bachelor's or equivalent                  96 (31.5%)
  Master's or equivalent                    36 (11.8%)
  Doctoral or equivalent                    6 (2.0%)
  Other                                     9 (3.0%)
Mental health issues
  Yes, receive professional help            94 (30.8%)
  Yes, do not receive professional help     121 (39.7%)
  No                                        39 (12.8%)
  I am not sure                             51 (16.7%)

Table 4

Frequency Distribution for Previous Experience with Chatbots (N = 305)

Type of Chatbot              Never         Rarely        Sometimes    Often       Very often
Customer service chatbots    82 (26.9%)    102 (33.4%)   97 (31.8%)   17 (5.6%)   7 (2.3%)
Mental health chatbots       224 (73.4%)   58 (19.0%)    15 (4.9%)    6 (2.0%)    2 (0.7%)
Social messaging chatbots    152 (49.8%)   84 (27.5%)    46 (15.1%)   12 (3.9%)   11 (3.6%)


Model Fit and Hypotheses Testing

Main Effects

The regression model with BI to use a mental health chatbot as the dependent variable, PE, EE, SI, WSD, LOP and trust as independent variables, and age and previous experience as moderators was significant, F(31, 304) = 20.17, p < .001, and explained 69.60% of the variance in BI of chatbot usage. It also demonstrated a slightly better fit than the initial UTAUT, in which only PE, EE and SI were considered as predictors, F(16, 304) = 35.91, p < .001, R² = 66.60% (see Table 5). The extended regression model can therefore be used to predict the BI to use a mental health chatbot among the LGBTQIA+ population. However, not all predictors were significant. Only PE, b = 0.67, t = 11.70, p < .001, 95% CI [0.56, 0.79], showed a strong, significant association with BI. This indicates that people who score higher on PE, thus those who believe that a mental health chatbot will help them increase their mental wellbeing, have a higher intention to use a mental health chatbot. Similarly, SI, b = 0.18, t = 3.29, p = .001, 95% CI [0.07, 0.28], and WSD, b = 0.21, t = 3.69, p < .001, 95% CI [0.10, 0.32], showed significant, weak associations with BI. Hence, people who are more influenced by their social environment and LGBTQIA+ individuals who are more willing to self-disclose to a chatbot have a higher intention of using one. We therefore found support for H1, H3, and H4. Surprisingly, EE, b = -0.14, t = -2.40, p = .017, 95% CI [0.08, 0.29], showed a weak, negative relationship, which is opposite to what we expected. Our results indicate that the more people perceive a mental health chatbot as easy to use, the lower their intention to use such a chatbot. We therefore reject H2. Further, the results suggest that LOP, b = 0.03, t = 0.91, p = .362, 95% CI [-0.04, 0.11], and trust, b = -0.03, t = -0.42, p = .672, 95% CI [-0.14, 0.09], are not significant predictors of chatbot usage. Thus, H5 and H6 were rejected.
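The gain in explained variance from the initial to the extended model can be expressed as a hierarchical F-change test; plugging in the reported (rounded) values gives an illustrative calculation, since the thesis itself reports only the R² comparison:

\[ F_{\text{change}} = \frac{(R^2_{\text{ext}} - R^2_{\text{UTAUT}})/(k_{\text{ext}} - k_{\text{UTAUT}})}{(1 - R^2_{\text{ext}})/(N - k_{\text{ext}} - 1)} = \frac{(0.696 - 0.666)/15}{(1 - 0.696)/(305 - 31 - 1)} \approx 1.80, \quad df = (15, 273) \]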


Table 5

Comparison of Regression Models to Predict BI of Mental Health Chatbot Usage

Behavioral Intention to Use Mental Health Chatbots

                                 UTAUT model    Extended model
Constant                         3.36***        3.35***
Performance expectancy           0.75***        0.67***
Effort expectancy                -0.08          -0.14*
Social influence                 0.25***        0.18**
Willingness to self-disclose                    0.19**
Perceived loss of privacy                       0.04
Trust                                           -0.01
R²                               0.67           0.70
F                                35.91***       20.17***

Note. N = 305.

* p <.05, ** p <.01, *** p <.001.

Moderating Effects

In terms of interaction effects, we found only two weak, significant interactions. Firstly, the effect of SI on BI is moderated by previous experience with mental health chatbots, b = -0.16, t = -2.14, p = .033, 95% CI [-0.31, -0.01]. Figure 3 visualizes that, as predicted, the effect of SI becomes weaker the more experience LGBTQIA+ individuals have with mental health chatbots. However, this holds only for previous experience with mental health chatbots; previous experience with customer service or messaging chatbots was not a significant moderator. We thus found partial support for H3b. Secondly, the effect of WSD on BI seems to be very weakly moderated by age, b = -0.02, t = -2.43, p = .016, 95% CI [-0.04, -0.004]. Figure 4 indicates that, as hypothesized, the effect is stronger for younger compared to older LGBTQIA+ individuals. H4a was therefore supported. All other interactions turned out to be nonsignificant, which means we reject H1a, H2a, H2b, H3a, H5a, H6a and H6b. Table 6 summarizes the test results for all hypotheses and Figure 5 illustrates the relevant significant relationships.
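Interaction plots in the style of Figures 3 and 4 can be derived from the fitted extended model; a sketch, reusing the hypothetical m2 and centered column names from the Analysis sketch above:

    import numpy as np
    import matplotlib.pyplot as plt

    # Simple slopes for the SI x mental-health-chatbot-experience interaction (cf. Figure 3):
    # predicted BI across SI at low vs. high experience, other centered predictors at 0 (their mean).
    b = m2.params
    si = np.linspace(-2, 2, 50)  # centered SI values
    for exp_c, label in [(-1.0, "low experience"), (1.0, "high experience")]:
        bi = (b["Intercept"] + b["SI_c"] * si
              + b["Pexp2_c"] * exp_c + b["SI_c:Pexp2_c"] * si * exp_c)
        plt.plot(si, bi, label=label)
    plt.xlabel("Social influence (centered)")
    plt.ylabel("Predicted behavioral intention")
    plt.legend()
    plt.show()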


Figure 3

Moderating Effect of Previous Experience on the Relationship Between SI and BI

Note. Experience refers to previous experience with mental health chatbots.

Figure 4

Moderating Effect of Age on the Relationship Between WSD and BI


Table 6

Path Coefficient Estimation and Hypotheses Overview

Hypothesis: Path           Unstandardized B    t-value    p-value      Supported?
H1. PE→BI                  0.67                11.64      < .001***    Yes
H1a. PE*Age→BI             0.01                0.73       .467         No
H2. EE→BI                  -0.14               -2.40      .017*        No
H2a. EE*Age→BI             0.00                0.28       .778         No
H2b. EE*Pexp1→BI           0.08                1.22       .222         No
     EE*Pexp2→BI           -0.08               -0.73      .466
     EE*Pexp3→BI           0.02                0.35       .727
H3. SI→BI                  0.18                3.37       .001**       Yes
H3a. SI*Age→BI             0.01                0.60       .550         No
H3b. SI*Pexp1→BI           0.03                0.42       .675         Partly
     SI*Pexp2→BI           -0.16               -2.14      .033*
     SI*Pexp3→BI           0.05                1.13       .260
H4. WSD→BI                 0.19                3.35       .001**       Yes
H4a. WSD*Age→BI            -0.02               -2.43      .016*        Yes
H5. LOP→BI                 0.04                1.13       .259         No
H5a. LOP*Pexp1→BI          -0.00               -0.03      .974         No
     LOP*Pexp2→BI          0.01                -0.09      .927
     LOP*Pexp3→BI          0.03                0.79       .430
H6. TRU→BI                 -0.01               -0.21      .834         No
H6a. TRU*Age→BI            0.01                1.49       .139         No
H6b. TRU*Pexp1→BI          -0.01               -0.09      .931         No
     TRU*Pexp2→BI          -0.03               -0.29      .773
     TRU*Pexp3→BI          -0.01               0.10       .918
RQ1. WSD*Pexp1→BI          -0.04               -0.72      .473         No
     WSD*Pexp2→BI          0.07                0.92       .360
     WSD*Pexp3→BI          0.03                0.56       .578
RQ2. LOP*Age→BI            -0.00               -0.79      .432         No

Note. BI: Behavioral intention to use a chatbot; PE: Performance expectancy; EE: Effort expectancy; SI: Social influence; WSD: Willingness to self-disclose; LOP: Perceived loss of privacy; TRU: Trust in mental health chatbots; Pexp1: Previous experience with customer service chatbots; Pexp2: Previous experience with mental health chatbots; Pexp3: Previous experience with social messenger chatbots.

Research Questions

RQ1: Does Previous Experience Moderate the Relationship Between WSD and BI?

To answer RQ1, which asked whether previous experience with chatbots moderates the relationship between WSD and BI to use a mental health chatbot among LGBTQIA+ individuals, interaction variables were included in the regression model. As Table 6 shows, previous experience with any kind of chatbot is not a significant moderator of the relationship between WSD and BI.

RQ2: Does Age Moderate the Relationship Between LOP and BI?

RQ2 addressed the moderating role of age in the relationship between LOP and BI. The regression results for the interaction variable (see Table 6) show that there are no differences in the effect of LOP on BI to use a mental health chatbot between younger and older LGBTQIA+ individuals.

RQ3: Is the Effect Different Among Different Gender Identities?

RQ3 asked whether gender identity moderates the relationships between the predictors and BI to use a mental health chatbot among LGBTQIA+ individuals. The regression analysis indicated that the only significant, weak difference appears for the relationship between LOP and BI, where the effect seems to be stronger for male individuals, b = 0.19, t = 2.09, p = .037, 95% CI [0.01, 0.36], than for female individuals. We did not find support for any other moderating effects of gender identity (see Table 7).

Table 7

Coefficient Estimation of Interaction Variables

Interaction Variables       Unstandardized B    t-value    p-value
PE*Male→BI                  -0.05               -0.33      .745
PE*Transgender→BI           0.08                0.22       .826
PE*Non-binary→BI            0.03                0.18       .855
EE*Male→BI                  -0.20               -1.31      .191
EE*Transgender→BI           -0.38               -0.88      .382
EE*Non-binary→BI            0.05                0.26       .793
SI*Male→BI                  -0.14               -1.06      .292
SI*Transgender→BI           -0.28               -0.78      .435
SI*Non-binary→BI            -0.16               -1.00      .317
WSD*Male→BI                 0.18                1.34       .183
WSD*Transgender→BI          0.63                1.63       .105
WSD*Non-binary→BI           0.02                0.14       .887
LOP*Male→BI                 0.19                2.09       .037*
LOP*Transgender→BI          0.16                0.72       .471
LOP*Non-binary→BI           0.00                -0.00      1.000
TRU*Male→BI                 0.12                0.81       .420
TRU*Transgender→BI          -0.16               -0.44      .659
TRU*Non-binary→BI           0.13                0.83       .408

Note. Reference Group = Female Individuals

Figure 5

Significant Relationships of the Extended Model

Discussion

General Discussion

This study aimed to take a critical perspective on the UTAUT by exploring whether it can be tested and extended to the context of mental health chatbot usage intention among LGBTQIA+ individuals. By integrating contextual variables (WSD, LOP and trust) and considering gender identity, we were able to demonstrate that the extended UTAUT provides a better understanding of mental health chatbot usage intention among LGBTQIA+ individuals than the initial model. Our findings not only contribute to more inclusive technology acceptance models and the generalizability of the UTAUT, but also give valuable insights into which aspects influence the intention to use a mental health chatbot among LGBTQIA+ individuals.

Following our main findings, PE, SI and WSD significantly predicted BI to use a mental health chatbot. Unsurprisingly, PE was the strongest positive predictor: PE has repeatedly been an important predictor of technology acceptance in previous research (e.g., Almahri et al., 2020; Tarhini et al., 2016; Venkatesh et al., 2003). Hence, the belief that a mental health chatbot would improve their mental health seems to be a crucial driver of the BI to use a mental health chatbot among LGBTQIA+ individuals and should be highlighted in future chatbot interventions. Additionally, in line with prior research, the more an LGBTQIA+ individual believes that their social environment thinks they should use a mental health chatbot, the higher their BI (Tarhini et al., 2016; Venkatesh et al., 2012). This effect seems to be stronger for less experienced people, which means that particularly when individuals have little experience, their social environment can have a significant impact on their BI to use a mental health chatbot. It is worth noting that the effect of SI on BI was very weak; considering the strong community feeling among LGBTQIA+ individuals, we expected this effect to be stronger, especially since a study by Fish and colleagues (2021) demonstrated that emotional and mental health topics were the most popular themes discussed in a chat-based Internet community support programme, suggesting that LGBTQIA+ individuals are generally willing to discuss mental health problems with their peers. It might be that the usage of mental health chatbots is not as widespread as that of online communities (McInroy et al., 2018), making SI less important for the BI to use mental health chatbots.

In contrast to our expectations, another main finding indicates a negative relationship between EE and BI: the easier the usage of a mental health chatbot seems, the lower LGBTQIA+ individuals' intention to use it. This negative direction contradicts the existing literature: some studies found a significant positive relationship (Almahri et al., 2020; Isaias et al., 2017; Venkatesh et al., 2011; Venkatesh et al., 2012) and others did not find a significant association due to common use of the measured technology (Tarhini et al., 2016), but all demonstrate a positive rather than a negative direction. Simultaneously, our results show a high mean EE, which suggests that many participants perceived a mental health chatbot as easy to use. A possible explanation could be that one of the contextual variables was suppressing the effect of EE, since it only became significant when the other variables were added (see Table 5). To shed light on this, future research should add predictors to the model one by one.

Contrary to our expectations, perceived loss of privacy and trust were not significant predictors of chatbot usage intention. Especially the results regarding trust do not fit prior research. For Liu and Tao (2022), for instance, trust was the strongest predictor of BI to use a smart healthcare system. Besides, other studies have established trust as a crucial antecedent of chatbot acceptance (Mostafa & Kasamani, 2021; Nadarzynski et al., 2019). A plausible explanation could be that LOP and trust do not directly affect BI to use a mental health chatbot, but operate indirectly via WSD. Schroeder and Schroeder (2018) found that trust positively influences WSD to a chatbot. Similarly, lower privacy concerns seem to increase trust in chatbots (Guo et al., 2016). Since WSD directly influenced BI, future studies may consider LOP as an antecedent of trust, and trust as a predictor of WSD, rather than both as direct predictors of BI.

Moreover, this paper stressed the importance of including gender identity in the UTAUT. Interestingly, we could not find support for substantial differences among gender identities. Apparently, LGBTQIA+ individuals, regardless of their gender identity, perceive mental health chatbots similarly, even though transgender and non-binary individuals show a higher online presence compared to female or male individuals (McInroy et al., 2018). Only the effect of LOP on BI was stronger for males than for females. This is interesting, as prior research on online behavior revealed that women are more concerned about their privacy (e.g., Hoy & Milne, 2010; Sheehan, 2002). But then again, gender differences measured with non-LGBTQIA+ samples may deviate from our sample. Despite the nonsignificance of the results, this still provides important insights: future mental health interventions for LGBTQIA+ individuals do not need to consider different factors for the respective gender identities, except for stressing privacy protection more among male individuals.

Finally, while the relationship between WSD and BI remains equal for older LGBTQIA+ individuals, younger individuals have a higher intention to use a mental health chatbot when their WSD is also high. Note that our sample was quite young (M = 24 years), and our data on older LGBTQIA+ individuals (> 40 years) may not have been sufficient to reveal potential differences with increasing age.
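Age moderation of this kind is typically tested with a mean-centered interaction term that is then probed via simple slopes. The sketch below (Python/statsmodels, with the same hypothetical column names and synthetic placeholder data as in the earlier sketches; the wider age range is our assumption) illustrates how a follow-up study with an older sample could probe the WSD × age interaction.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data; AGE is deliberately drawn from a wider
# range than in our sample to cover older respondents.
rng = np.random.default_rng(21)
df = pd.DataFrame(rng.normal(size=(305, 2)), columns=["BI", "WSD"])
df["AGE"] = rng.integers(18, 70, size=305)

# Mean-center predictor and moderator so main effects stay interpretable.
df["WSD_c"] = df["WSD"] - df["WSD"].mean()
df["AGE_c"] = df["AGE"] - df["AGE"].mean()

# "WSD_c * AGE_c" expands to WSD_c + AGE_c + WSD_c:AGE_c.
mod = smf.ols("BI ~ WSD_c * AGE_c", data=df).fit()

# Simple slopes of WSD at one SD below and above the mean age.
sd_age = df["AGE_c"].std()
for label, a in [("younger (-1 SD)", -sd_age), ("older (+1 SD)", sd_age)]:
    slope = mod.params["WSD_c"] + mod.params["WSD_c:AGE_c"] * a
    print(f"{label}: simple slope of WSD = {slope:+.3f}")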

Limitations and Future Research

Although this study has successfully demonstrated a good fit of the extended model, several limitations should be considered. The present paper’s generalizability is somewhat limited due to methodological choices and sample characteristics. One major issue regarding the study design was that participants could not engage with a chatbot themselves; using existing chatbots like Wysa would have created privacy issues by involving third parties. At the same time, 92% of all respondents had never or only rarely used a mental health chatbot before and may therefore have had difficulty imagining such an interaction, which could have led to imprecise answers. This problem was already raised during the pre-test, which is why we included a screen recording of a mental health chatbot conversation; yet we could not control whether participants skipped the video. Secondly, actual usage of mental health chatbots was not measured. A follow-up study could let participants test a mental health chatbot and, at the end of the study, provide a link to the chatbot application for them to use freely. Measuring the click rate may reveal insights into the actual usage of the chatbot and lead to more precise results. Another limitation arises from the sample characteristics: no respondent identified as intersex, so the generalizability of the gender differences is constrained by the lack of intersex individuals. Future research could apply purposive quota sampling to guarantee that every gender identity is represented. Lastly, as this topic has not yet been researched in depth, researchers should consider applying a qualitative research design to gain an in-depth understanding of LGBTQIA+ individuals’ thoughts on mental health chatbots.

On Reddit especially, the recruitment text (see Appendix B) sparked elaborate discussions about whether individuals would use such a chatbot or not. Despite these limitations, this study offers valid knowledge about the predictors of mental health chatbot usage intention among LGBTQIA+ individuals and gives interesting directions for future research.

Conclusion

Using an online survey, this research tested and extended the UTAUT in the context of mental health chatbot usage intention among LGBTQIA+ individuals.

Although not all predictors were significant, our results indicate that our model is a good fit to explain mental health chatbot usage intention. The strongest predictor of behavioral intention to use a mental health chatbot was performance expectancy; thus, for future interventions, developers should stress the potential of mental health chatbots to alleviate mental health issues among LGBTQIA+ individuals. In addition, social influence and people’s willingness to self-disclose should be highlighted as crucial determinants of mental health chatbot usage intention. Ultimately, this study has provided a new perspective on the inclusion of gender identities in traditional communication models such as the UTAUT.

Even though this study did not find gender differences in chatbot usage intention, there might be other technologies where such differences prevail. Research should strive for a better understanding of increasingly blurred gender identities and, to achieve this, keep testing existing theories.

References

Abd-alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P., & Househ, M. (2019). An overview of the features of chatbots in mental health: A scoping review. International Journal of Medical Informatics, 132, 103978. https://doi.org/10.1016/j.ijmedinf.2019.103978

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.

Almahri, F. A. J., Bell, D., & Merhi, M. (2020). Understanding student acceptance and use of chatbots in the United Kingdom universities: A structural equation modelling approach. 2020 6th International Conference on Information Management (ICIM). https://doi.org/10.1109/icim49319.2020.244712

Barak, A., & Gluck-Ofri, O. (2007). Degree and reciprocity of self-disclosure in online forums. CyberPsychology & Behavior, 10(3), 407–417. https://doi.org/10.1089/cpb.2006.9938

Bellman, S., Johnson, E. J., Kobrin, S. J., & Lohse, G. L. (2004). International differences in information privacy concerns: A global survey of consumers. The Information Society, 20(5), 313–324. https://doi.org/10.1080/01972240490507956

Bergström, A. (2015). Online privacy concerns: A broad approach to understanding the concerns of different groups for different uses. Computers in Human Behavior, 53, 419–426. https://doi.org/10.1016/j.chb.2015.07.025

Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. Internet Science, 377–392. https://doi.org/10.1007/978-3-319-70284-1_30

Carpenter, M. (2016). The human rights of intersex people: Addressing harmful practices and rhetoric of change. Reproductive Health Matters, 24(47), 74–84. https://doi.org/10.1016/j.rhm.2016.06.003

Carpenter, C. S., Eppink, S. T., & Gonzales, G. (2020). Transgender status, gender identity, and socioeconomic outcomes in the United States. ILR Review, 73(3), 573–599. https://doi.org/10.1177/0019793920902776

Chaix, B., Bibault, J. E., Pienkowski, A., Delamon, G., Guillemassé, A., Nectoux, P., & Brouard, B. (2019). When chatbots meet patients: One-year prospective study of conversations between patients with breast cancer and a chatbot. JMIR Cancer, 5(1), e12856. https://doi.org/10.2196/12856

Chang, I. C., Hwang, H. G., Hung, W. F., & Li, Y. C. (2007). Physicians’ acceptance of pharmacokinetics-based clinical decision support systems. Expert Systems with Applications, 33(2), 296–303. https://doi.org/10.1016/j.eswa.2006.05.001

Chazin, D., & Klugman, S. (2014). Clinical considerations in working with clients in the coming out process. Pragmatic Case Studies in Psychotherapy, 10(2), 132–146. https://doi.org/10.14713/pcsp.v10i2.1855

Cheung, A. S., Leemaqz, S. Y., Wong, J. W. P., Chew, D., Ooi, O., Cundill, P., Silberstein, N., Locke, P., Zwickl, S., Grayson, R., Zajac, J. D., & Pang, K. C. (2020). Non-binary and binary gender identity in Australian trans and gender diverse individuals. Archives of Sexual Behavior, 49(7), 2673–2681. https://doi.org/10.1007/s10508-020-01689-9

Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 1–19. https://doi.org/10.1080/03055698.2020.1850426

Croes, E. A. J., & Antheunis, M. L. (2021). 36 questions to loving a chatbot: Are people willing to self-disclose to a chatbot? Chatbot Research and Design, 81–95. https://doi.org/10.1007/978-3-030-68288-0_6

D’Alfonso, S. (2020). AI in mental health. Current Opinion in Psychology, 36, 112–117. https://doi.org/10.1016/j.copsyc.2020.04.005

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339. https://doi.org/10.2307/249008

Fish, J. N., Williams, N. D., McInroy, L. B., Paceley, M. S., Edsall, R. N., Devadas, J., Henderson, S. B., & Levine, D. S. (2021). Q Chat Space: Assessing the feasibility and acceptability of an Internet-based support program for LGBTQ youth. Prevention Science, 130–141. https://doi.org/10.1007/s11121-021-01291-y

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Addison-Wesley.

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://doi.org/10.2196/mental.7785

Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Mental Health, 5(4), e64. https://doi.org/10.2196/mental.9782

Goklani, B. (2021, September 15). Chatbots in healthcare: Top benefits, risks and challenges you need to know. Mindinventory. https://www.mindinventory.com/blog/chatbots-in-healthcare/

Guo, X., Zhang, X., & Sun, Y. (2016). The privacy–personalization paradox in mHealth services acceptance of different age groups. Electronic Commerce Research and Applications, 16, 55–65. https://doi.org/10.1016/j.elerap.2015.11.001

Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026

Hoofnagle, C. J., King, J., Li, S., & Turow, J. (2010). How different are young adults from older adults when it comes to information privacy attitudes and policies? (SSRN Scholarly Paper No. 1589864). Social Science Research Network. http://papers.ssrn.com/abstract=1589864

Hoy, M. G., & Milne, G. (2010). Gender differences in privacy-related measures for young adult Facebook users. Journal of Interactive Advertising, 10(2), 28–45. https://doi.org/10.1080/15252019.2010.10722168

Inkster, B., Sarda, S., & Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR mHealth and uHealth, 6(11). https://doi.org/10.2196/12106

Isaias, P., Reis, F., Coutinho, C., & Lencastre, J. A. (2017). Empathic technologies for distance/mobile learning. Interactive Technology and Smart Education, 14(2), 159–180. https://doi.org/10.1108/itse-02-2017-0014

Jackson, S. D. (2017). “Connection is the antidote”: Psychological distress, emotional processing, and virtual community building among LGBTQ students after the Orlando shooting. Psychology of Sexual Orientation and Gender Diversity, 4(2), 160–168. https://doi.org/10.1037/sgd0000229

Kretzschmar, K., Tyroll, H., Pavarini, G., Manzini, A., & Singh, I. (2019). Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (Woebot) in mental health support. Biomedical Informatics Insights, 11. https://doi.org/10.1177/1178222619829083
