Why Do(n’t) We Cure Our Minds with Apps? Understanding the Drivers of m-Mental Health Uptake among Emerging Adults

Marek Háša

Master’s Programme Communication Science
Graduate School of Communication

University of Amsterdam

Master’s Thesis
Supervisor: dr. Sandra Zwier

Student number: 12112720
Due date: 28 June 2019

Abstract

The overarching objective of this study was to understand the drivers of the use of mobile applications designed to enhance one’s mental well-being (m-Mental Health). A new model grounded in both health communication research and technology acceptance studies was developed to offer an advanced perspective on the determinants of m-Health technology uptake. Furthermore, the study strived to investigate the role of privacy and safety concerns of potential users in the highly confidential area of mental health, and aimed to determine if the commercial or non-commercial origin of an m-Mental Health app predicts the level of such concerns.

A mixed design comprising a survey and a manipulation tested the new model among one of the populations most at risk in the domain of mental health, namely adults aged 18-29. A PLS-SEM analysis of 229 observations showed solid predictive power of the model and provided supportive empirical evidence for positive relationships of social influence, self-efficacy, and health technology efficacy with the attitude toward m-Mental Health or the intention to use such apps. Moreover, the results suggested that the predictive power of perceived health technology efficacy with regard to the attitude toward m-Mental Health decreases with stronger privacy and safety concerns. The extent to which potential users worried about their safety and privacy, on the other hand, was not influenced by the (non-)commercial nature of an app. Further research might focus on generating more accurate qualitative insight into the role of privacy and safety concerns in the field of m-Mental Health.

Why Do(n’t) We Cure Our Minds with Apps? Understanding the Drivers of m-Mental Health Uptake among Emerging Adults

In an age of ever-increasing prevalence of depressive symptoms and rising suicide rates among young people (Twenge, 2006; World Health Organization, 2013), helping this population foster healthy mental well-being poses a key public health challenge. Mental health disorders are most likely to onset between the ages of 18 and 25 (Public Health England, 2014; Stroud, Mainero, & Olson, 2013). This is the typical age span of the transition between adolescence and young adulthood, characterized by abruptly acquired independence and the exploration of life possibilities, which Arnett (2000) termed emerging adulthood. However, after receiving criticism for neglecting socioeconomic and ethnic differences (Hendry & Kloep, 2007), the demarcation of this period may more accurately apply to the enlarged category of 18-29-year-olds (Arnett, Kloep, Hendry, & Tanner, 2011).

Although numerous recent studies point to a link between the use of new technologies and psychological problems (e.g., Coyne, Santarossa, Polumbo, & Woodruff, 2018; Twenge, Joiner, Rogers, & Martin, 2018; van Velthoven, Powell, & Powell, 2018), the very same platforms can also serve as facilitators of positive health behaviour change (Mohr, Burns, Schueller, Clarke, & Klinkman, 2013), effectively diminishing stigma and overcoming the financial and accessibility barriers of traditional preventive interventions (Stiles-Shields, Ho, & Mohr, 2016; Watts & Andrews, 2014). The potential of online interventions to successfully reduce problems such as depression or anxiety (Ahmedani, Belville-Robertson, Hirsch, & Jurayj, 2016) is especially high for young people who generally tend to avoid offline help-seeking in case of mental health troubles (Rickwood, Deane, Wilson, & Ciarrochi, 2005). Specifically, m-Mental Health apps (i.e., mobile apps developed to help tackle or prevent psychological problems; Apolinário-Hagen, 2017) are among the approaches most commonly highlighted for their potential to cost-effectively improve psychological wellness (Kumar et al., 2013; Price et al., 2014) with wide-reach prevention and treatment solutions (Sort, 2017). In the past few years, there has been a boom in m-Mental Health apps for smartphones (Powell, 2016), with many commercial companies saturating the market with their own solutions for psychological well-being (Tucker & Goodings, 2015).

The present study aims to investigate the determinants of the use of m-Mental Health apps among emerging adults. By generating a quantitative insight into the factors that make young adults use or not use commercial and non-commercial m-Mental Health tools, the study strives to contribute to both academic and practical developments in this promising area of mental health treatment and prevention.

M-Mental Health apps may prove to be effective by allowing for non-stop user access and innovative technology-based features at relatively low costs compared to traditional mental health care. However, there remains a lack of evidence supporting such apps’ actual helpfulness (Hilty, Chan, Hwang, Wong, & Bauer, 2017). As Derks, De Visser, Bohlmeijer and Noordzij (2017) note, mHealth applications are often poorly designed and lack the necessary evidence base, making them ineffective and potentially even harmful. The review by Donker et al. (2013) confirmed that a substantial part of the available m-Mental Health apps lacks any systematic evidence. Another potential barrier to the uptake of m-Mental Health tools lies in concerns about data privacy; people experiencing symptoms of mental health problems may worry about the possible misuse or unwanted disclosure of their highly confidential data when considering downloading an m-Mental Health app (Patel et al., 2018). Despite attempts to understand the determinants of digital mental health counselling uptake (e.g., Apolinário-Hagen et al., 2016), it remains unclear whether or not the factors of efficacy and data privacy are among the main drivers of the intention to use m-Mental Health.

Drivers of m-Mental Health Uptake

Two distinct paths of research can be regarded as suitable for shedding light on the question of why people do or do not intend to use a specific health technology. First, health communication research, which studies the determinants of health-related behaviours such as attitude, social norms, and self-efficacy (Fishbein & Ajzen, 1975). Second, technology acceptance studies, which investigate the influence of factors such as perceived usefulness and ease of use (Venkatesh, Morris, Davis, & Davis, 2003). The present study aims to merge several theories to develop a model grounded within the domains of digital health communication and technology acceptance. Moreover, it enriches the existing models with a newly conceptualized variable – Confidence in Health Technology – encapsulating potential privacy concerns and uncertainties arising from the highly confidential nature of mental health. This variable is hypothesized to be largely determined by the source and nature of m-Mental Health solutions, with non-commercial tools arousing higher levels of trust in the technology compared to their profit-driven counterparts. After bridging the languages of technology acceptance research and health communication scholarship, the study turns to the young population currently at increasing risk of mental health issues, striving to answer the following research question:

What are the determinants of the use of m-Mental Health among emerging adults? And to what extent do they differ for commercial and non-commercial applications?

Theoretical Background

To offer a synthesis of health communication and technology acceptance perspectives to explain the uptake of m-Mental Health, the present study extracts parts of several theories which are relevant to the field of mental health promotion, mobile app technology, and the target group of emerging adults. Following the example of a similar effort by Calvin and Karsh (2006), and adhering to Glanz, Lewis, and Rimer (1997), this merger of existing evidence should result in a new contextualized model, which is hypothesized to explain the drivers of m-Mental Health uptake among emerging adults. The study thereby employs a user-oriented approach to studying the selection and use of a particular medium, focusing on user characteristics and expectations rather than the features of a technology (Flanagin & Metzger, 2001).

Figure 1 depicts a summary of the assumed relationships between those variables that can predict the intention to use m-Mental Health applications among emerging adults. The remainder of this chapter will offer an overview of the theoretical backgrounds of the model’s main variables and their relationships.

Figure 1. Conceptual model

For ease of reading, Table 1 below provides a glossary of the abbreviations for theories and concepts that will be used in the remainder of the present section.

Table 1

Glossary of abbreviations

TPB Theory of Planned Behaviour

TAM Technology Acceptance Model

HTE Health Technology Efficacy

HTSE Health Technology Self-Efficacy

HTSA Health Technology Social Acceptance

CHT Confidence in Health Technology

Attitude toward and Intention to Use m-Mental Health

Although some studies measured self-reported use of a health technology as the outcome variable (e.g., Or, Karsh, Severtson, & Brennan, 2008), a more common approach relies on assessing the intention to adopt the e-health tool of interest (e.g., Razmak, Bélanger, & Farhan, 2018). While behavioural intention plays the role of the central dependent variable in both the Theory of Planned Behaviour (TPB; Fishbein & Ajzen, 1975) and the Technology Acceptance Model (TAM; Holden & Karsh, 2010; Venkatesh et al., 2003), the construct of attitude derives solely from the health communication perspective, where it stands for an evaluative affect about performing the behaviour in question (Fishbein & Ajzen, 1975) and is mostly determined by one’s beliefs about the consequences of the given behaviour (Doğanyiğit, 2018). According to TPB, together with more positive subjective norms and higher self-efficacy, a more favourable attitude positively predicts behavioural intentions (Fishbein & Cappella, 2006); hence the first hypothesis of the study (H1):

H1: The more favourable the attitude toward m-Mental Health, the stronger the intention to use m-Mental Health apps.

Health Technology Efficacy

Health Technology Efficacy (HTE) represents the perceived ability of a specific technology to help with treating or preventing certain health problems. In the context of this study, it stands for the perceived efficacy of mobile apps as treatment or prevention tools for mental well-being. The health communication part of this variable can be regarded as a combination of behavioural beliefs and outcome evaluations from the Integrative Model by Fishbein and Cappella (2006) and response efficacy, which reflects one’s perceived likelihood that performing the recommended behaviour will lead to the desired health objective and is hypothesized to positively impact the persuasiveness of a health-related message (Cismaru, Nagpal, & Krishnamurthy, 2009). Its technology acceptance counterpart draws on the concepts of perceived usefulness (Holden & Karsh, 2010), performance expectancy (Venkatesh et al., 2003), and effectivity-related sought gratifications (Luo & Remus, 2014). All of these terms reflect a similar concept – namely one’s beliefs regarding the potential the technology brings to oneself in a specific context.

While the original TAM by Davis (1986) proposed that the effect of perceived usefulness on actual technology use was mediated by attitude, later versions such as TAM 3 by Venkatesh and Bala (2008) proposed behavioural intention as the mediator. However, taking a step back to the TPB perspective, both attitude and intention should serve as the mediators between outcome beliefs and actual behaviour (Fishbein & Cappella, 2006). To settle this inconsistency in the proposed relationships, this study turns to the Combined Technology Acceptance Model and Theory of Planned Behaviour (C-TAM-TPB; Venkatesh et al., 2003), which also merges the two perspectives and tests the effect of health technology efficacy on behavioural intention through attitude; hence the second hypothesis (H2) regarding the relationship between health technology efficacy and attitude is:

H2: Health Technology Efficacy positively predicts more favourable attitudes toward m-Mental Health.

Health Technology Self-Efficacy

Self-efficacy represents one’s perceived ability to perform certain actions. It reflects the perceived barriers to the target behaviour (Fishbein & Cappella, 2006), as well as the perceived required effort, one of the costs which should always accompany the concept of response efficacy according to Cismaru et al. (2009). As put forth by Bandura (1986), who originally coined the term (Bandura, 1977), the construct of self-efficacy should always be tailored to the specifics of the domain of interest. Rahman, Ko, Warren, and Carpenter (2016) adhered to this suggestion by introducing the concept of Health Technology Self-Efficacy (HTSE), which encapsulates one’s perceived ability to use healthcare technologies. However, given the rich variety of today’s e-Health technologies (e.g., mobile apps, tracking devices, websites, or virtual reality tools), this study strives to further deepen such contextualization of self-efficacy by assessing people’s perceived ability to use one particular health technology – mobile apps (i.e., mobile self-efficacy; Doğanyiğit, 2018) – for one particular health area – mental well-being.

The techno-humanist model for e-Health adoption introduced by Razmak et al. (2018) offers a potential avenue for combining the psychological factor of an individual’s self-efficacy with the technology-focused factor of perceived ease of use. Technology acceptance models build on the perceived ease of use (Holden & Karsh, 2010) or effort expectancy (Venkatesh et al., 2003) and successfully test their influence on behavioural intention. However, a review by Holden and Karsh (2010) suggests that compared to perceived usefulness, the direct predictive power of perceived ease of use is rather low. Conversely, numerous studies provide supportive evidence for strong positive effects of perceived ease of use on perceived usefulness (Holden & Karsh, 2010; Hung & Yen, 2012; Ryan, Bergin, & Wells, 2017). Therefore, this study hypothesizes two distinct predictive effects of HTSE, one’s perceived ability to easily and effectively use mobile apps to treat or prevent one’s (potential) mental health difficulties:

H3: Health Technology Self-Efficacy positively predicts one’s perceived Health Technology Efficacy.

H4: The higher one’s Health Technology Self-Efficacy, the stronger one’s intention to use m-Mental Health.

Health Technology Social Acceptance

Health communication theory regards social norms as one of the determinants of health-related behavioural intentions, whereby social norms represent the observed behaviours and presumed normative opinions of one’s important others (Fishbein & Ajzen, 1975; Fishbein & Cappella, 2006). Technology acceptance scholars use the term subjective norms to describe a person’s beliefs about whether or not their important others want them to use the technology in question. The Health Technology Social Acceptance (HTSA) concept this study proposes rests more on the health communication approach; it combines both descriptive (i.e., observed behaviours) and injunctive norms (i.e., assumed opinions; Prentice, 2008) to capture the normative influence, as well as the sociological factor of an increased awareness of m-Mental Health apps as proposed by Razmak et al. (2018).

There are empirical grounds to assume that the degree to which one believes other people use and approve of using m-Mental Health directly influences behavioural intentions (Doğanyiğit, 2018; Ryan et al., 2017; Venkatesh et al., 2003), perhaps even outweighing their negative beliefs regarding the efficacy or ease of use of m-Mental Health. However, similarly to HTSE, strong links have also been found between HTSA and HTE (Buccoliero & Bellio, 2014; Venkatesh & Bala, 2008); hence there are two hypotheses reflecting the role of social influence:

H5: Health Technology Social Acceptance positively predicts one’s perceived Health Technology Efficacy.

H6: The higher one’s Health Technology Social Acceptance, the stronger one’s intention to use m-Mental Health.

Confidence in Health Technology

As put forth by van Schaik, Flynn, van Wersch, Douglass, and Cann (2004), technology acceptance research should aim for a balanced model of the advantages and disadvantages of the technology studied, whereby the presence of both expected benefits and potential costs of using the platform increases the model’s predictive power. When certain risks are involved in the process of deciding whether or not to use an online health-related resource, people’s trust in the technology tends to decrease (Sillence & Briggs, 2015), which can negatively impact the acceptance of e-Health (Beldad, de Jong, & Steehouder, 2010). A study by Montagni et al. (2016) found that European university students generally do not trust online platforms for mental health-related issues. Lee and Cho (2016) further suggest that some of people’s most influential worries regarding health technologies relate to their credibility and accuracy. Similar concerns over intervention efficacy (usefulness) and credibility (privacy) can be identified in the context of both e-Health in general (Musiat, Goldstone, & Tarrier, 2014) and m-Mental Health in particular (Stiles-Shields, Montague, Lattie, Kwasny, & Mohr, 2017). To map the influence of such user worries, this study presents the newly conceptualized construct of Confidence in Health Technology (CHT), reflecting two key factors of safety concerns – treatment accuracy and data privacy.

While the general effectiveness of m-Mental Health is yet to be sufficiently proven (Hilty et al., 2017), it is of no surprise that developments in this domain are being inhibited by user concerns over the apps’ effectiveness (Powell, 2016). As shown in the reviews by Donker et al. (2013) and Hale, Capra, and Bauer (2015), a large proportion of the apps currently available within both m-Mental Health and e-Health in general lack proper empirical evidence. Such apps then pose a threat to users of not only being ineffective but potentially also causing harm to their well-being (Lal & Adair, 2013). Moreover, a qualitative study by Tucker and Goodings (2015) suggests that users might also feel sceptical about the suitability of a technology often labelled as a cause of stress for mental health promotion.

The second component of CHT should embody the fact that trust in a specific technology can largely vary among individuals due to different levels of data privacy concerns, which can be amplified by the intimate and confidential nature of the topic of mental health. In this highly sensitive context (Anderson & Agarwal, 2011), numerous studies have shown that data privacy and security concerns can reduce users’ confidence in an app, thus decreasing the app’s uptake (e.g., Gulliver et al., 2015; Young, 2005; van Velthoven et al., 2018). Together with the worries about treatment accuracy, the degree to which a person believes their highly confidential mental health input will be safely encrypted and secured (Kumar et al., 2013) and not disclosed to any third parties without their explicit consent constitutes the proposed CHT construct. Since the potential ineffectiveness or even harmfulness of a mental health app is directly linked to perceived efficacy, and privacy concerns were found to be closely linked to perceived usefulness by Chung, Park, Wang, Fulk, and McLaughlin (2010), this study hypothesizes a moderating role for CHT:

H7: The predictive effect of Health Technology Efficacy on attitude toward m-Mental Health increases with higher Confidence in Health Technology.
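Expressed more formally, H7 corresponds to a positive interaction term in the equation predicting attitude. The notation below is only an illustrative sketch using this study’s abbreviations (ATT, HTE, CHT); the coefficients and error term are generic regression symbols, not notation taken from the thesis itself.

```latex
% Illustrative formalization of H7 (generic regression notation, not the thesis's own):
% ATT = attitude toward m-Mental Health, HTE = Health Technology Efficacy,
% CHT = Confidence in Health Technology; beta_3 captures the hypothesized moderation.
\begin{equation}
  \mathrm{ATT} = \beta_0 + \beta_1\,\mathrm{HTE} + \beta_2\,\mathrm{CHT}
               + \beta_3\,(\mathrm{HTE} \times \mathrm{CHT}) + \varepsilon ,
  \qquad \text{H7: } \beta_3 > 0 .
\end{equation}
```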

(Non-)Commercial Nature of an App

In situations when it is not possible to critically assess the trustworthiness of a specific app, users are forced to form an initial trust impression by screening the most salient pieces of information (Briggs, Burford, De Angeli, & Lynch, 2002) and making heuristics-based credibility appraisals (Petty & Cacioppo, 1986). As noted by Pornpitakpan (2004), messages from more credible sources typically have a larger impact on the attitude and behaviour of the receiver. The present study aims to uncover whether a similar effect exists for commercial and non-commercial m-Mental Health apps. It builds on the work of Gulliver, Griffiths, and Christensen (2010), who suggest that young people are strongly influenced by the credibility of the provider of help when it comes to preventing or treating mental health issues.

When unable to map trust-forming factors such as the information quality, usability, or popularity of a particular app (Hale et al., 2015), users have no other choice but to look for heuristic indicators of quality, security, and privacy (Sillence & Briggs, 2015), including organization logos or accreditation endorsements by governmental entities (Batterham et al., 2015). As highlighted by Luxton, McCann, Bush, Mishkind, and Reger (2011), there exists no oversight or clear guidelines which would ensure that m-Mental Health apps are fully evidence-based and truly effective with no unwanted effects. Nevertheless, governmental and research projects generally rely on an evidence base and accuracy, as well as adherence to data privacy and security guidelines such as the Voluntary Code of Conduct for health apps (European Commission, 2016). As the Mental Health Commission of Canada (2014) has put forth, quality assurance frameworks and patients’ confidentiality and safety guidelines may be much less attractive or feasible for commercial projects. Therefore, this study presumes that the (non-)commercial nature of an m-Mental Health app might serve as a trust-forming marker; hence the final hypothesis:

H8: The Confidence in Health Technology is higher for a non-commercial m-Mental Health app compared to its commercial counterpart.

Methods

The forthcoming sections depict the mixed survey-experimental research approach of the present study. Before delving into the operationalization of the main variables, the sampling method and other important characteristics of the research design will be introduced.

Design and Participants

A cross-sectional survey design was chosen for its suitability for measuring the beliefs, attitudes, and intentions of a large number of people (Bryman, 2012). Moreover, the confidential nature of an individually administered, Web-based, self-completion questionnaire should encourage respondents to answer openly even questions regarding highly sensitive topics such as one’s mental health (Fowler, 2014).

The questionnaire was built in the Qualtrics software (www.qualtrics.com) and pilot-tested on a small convenience sample (N = 11) of emerging adults to identify potential flaws and enhance the survey’s internal validity. After improving the visual clarity of the questionnaire and the comprehensibility of several items, the final survey (see Appendix A) was published.

Due to the lack of a sampling frame, convenience sampling was adopted to recruit participants aged 18-29 using social media posts and private messages. Despite the contradictory evidence for the (in)effectiveness of incentives (Goritz, 2010), the study included a charitable incentive in an effort to counter growing survey fatigue (Fowler, 2014).

The data collection was launched on April 28 and closed on May 9, 2019. After deleting 11 responses by ineligible participants (2 did not own a smartphone, 9 had participated in the development of an m-Mental Health app) and 2 responses by people older than 29, there were slightly more female respondents (52%) than male participants in the final sample, N = 229. As shown in Table 2, the age distribution of the sample was fairly diverse within the target age span of 18 to 29 years, with a minimum of 19 years, a median of 23, and a maximum of 29 years. The majority of respondents lived in the Czech Republic, followed by the Netherlands, with the rest dispersed across 25 other countries. Most participants (61.1%) reported having obtained a university degree, while the others had completed high school education. Nearly half of the respondents identified as part-time (self-)employed students, over a third as full-time students or unemployed, and about one fifth as full-time (self-)employed.

Table 2

Demographics of the sample

Characteristic (N = 229)                      M (SD) or %
Sex
  Male                                        48
  Female                                      52
Age                                           23.40 (1.82)
Country
  Czech Republic                              57
  Netherlands                                 21.5
  Other                                       21.5
Education
  High school                                 38.9
  University degree                           61.1
Professional status
  Unemployed or full-time student             33.6
  (Self-)employed part-time                   3.9
  (Self-)employed part-time and student       41.9
  (Self-)employed full-time                   20.6

In line with research ethics standards, the online questionnaire first briefly presented the study, informed respondents that the research had been approved by the Ethics Review Board of the University of Amsterdam, and asked for their informed consent while highlighting that opting out was possible at any point during their participation. Additionally, given the high sensitivity of the mental health topic, respondents were reassured of the study’s confidentiality. After passing the eligibility check, participants were randomly assigned to read an example of either a non-commercial (condition A, N = 115) or commercial (condition B, N = 114) m-Mental Health app (see Appendix B), as well as the definition of mental well-being. The survey proceeded with displaying the scales of the main variables in a random order, with the exception of the instruments measuring attitude and intention, which were placed at the end of this block. Items were also randomized within scales. After completing the main section, participants were asked to answer the items for the control variables and report their demographics before being debriefed and given the opportunity to vote for a charity to receive a donation from the study’s author.

Main Variables

(Non-)Commercial Nature of an App. Respondents were randomly assigned to one of the two conditions, each offering a slightly modified example of an m-Mental Health app: either commercial or non-commercial. Participants were instructed to keep this example in mind while answering the ensuing questions. These instructions can be found in Appendix B. Manipulation of the app characteristics was based on a review of both commercial and non-commercial m-Mental Health apps by Anthes (2016). The names of the app and companies were fabricated.

Scale Validation. All the main latent variables were measured with multiple items using a 7-point Likert scale (1 = Completely disagree, 7 = Completely agree) and answered by all respondents, N = 229.

After recoding negatively worded items, the reliability of all latent variables was tested, and their validity inspected using exploratory factor analyses with a principal-axis factoring extraction. Appendix C presents a full overview of the results.
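As an illustration of this step, the following minimal Python sketch shows how a negatively worded item can be reverse-coded and Cronbach’s alpha computed for a multi-item scale. The data frame and column names (item_1 … item_4_neg) are hypothetical; the thesis’s own reliability and factor analyses were run in SPSS.

```python
import pandas as pd

def reverse_code(series: pd.Series, scale_max: int = 7, scale_min: int = 1) -> pd.Series:
    """Reverse-code a negatively worded Likert item (e.g., 7 becomes 1 on a 1-7 scale)."""
    return scale_max + scale_min - series

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to four 7-point Likert items of one scale;
# 'item_4_neg' is assumed to be the negatively worded item.
df = pd.DataFrame({
    "item_1": [5, 6, 4, 7, 5],
    "item_2": [4, 6, 5, 6, 5],
    "item_3": [5, 7, 4, 6, 4],
    "item_4_neg": [3, 1, 4, 2, 3],
})
df["item_4"] = reverse_code(df["item_4_neg"])
scale_items = df[["item_1", "item_2", "item_3", "item_4"]]
print(round(cronbach_alpha(scale_items), 2))  # reliability of the mean scale
df["scale_mean"] = scale_items.mean(axis=1)   # composite score used in later analyses
```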

Health Technology Efficacy. To measure this newly conceptualized variable that encapsulates the perceived ability of a specific technology to help people improve their health, a four-item indicator was developed, with each item referring to one specific potential effect of m-Mental Health. Respondents were asked to indicate to what extent they agree or disagree with statements such as “m-Mental Health apps could help me effectively manage my mental well-being”, which were all built on the basis of the original perceived usefulness operationalization (Davis, 1989) and inspired by similar items contextualized in the area of mental health from the study by Apolinário-Hagen, Fritsche, Bierhals, and Salewski (2018). A full overview of all items can be found in Appendices A and C.

A mean scale computed to measure this composite latent variable showed good reliability, Cronbach’s alpha = .82. On average, the sample scored slightly above the mid-point of the Health Technology Efficacy scale (M = 4.68, SD = 1.11).

Health Technology Self-Efficacy. Three items largely derived from the study by Rahman et al. (2016) and inspired by the perceived ease of use measures of Razmak et al. (2018) were used to capture one’s beliefs about their ability to effectively use m-Mental Health apps (see Appendices A and C).

The instrument was found to have low reliability in this sample, Cronbach’s alpha = .55. The mean scale (M = 5.18, SD = 1.10) was in line with the presumption that mobile app self-efficacy would be rather high in a young population (Cho, Park, & Lee, 2014).

Health Technology Social Acceptance. This variable comprises injunctive as well as descriptive norms surrounding the use of health technologies. The two items of its instrument (see Appendix A, section II) are in line with the operationalizations of the two norm types by Fishbein and Ajzen (2010) and the social influence variable in a TAM setting by Razmak et al. (2018).

While the injunctive norm scores were nearly one point above the mid-point (M = 4.99, SD = 1.41), the sample reported rather low observability of the use of m-Mental Health among people similar to them (M = 2.82, SD = 1.44). The two items were weakly but significantly correlated, r = .19, p = .003. The mean scale averaged slightly lower than the mid-point (M = 3.10, SD = 1.10).

Confidence in Health Technology. This variable considers potential disadvantages of using m-Mental Health apps and focuses on safety and privacy concerns (see Appendices A and C). The two privacy-related items were adapted from the MUIPC (Mobile User’s Information Privacy Concerns) scale (Xu, Gupta, Rosson, & Carroll, 2012) and were chosen to cover two factor sub-scales with the highest reliability in the study by Bol, Helberger and van Weert (2018), namely perceived intrusion and secondary use of personal data. The two items covering safety concerns represented negative outcome beliefs and had been inspired by the attitude-measuring scale of Rahman et al. (2016).

The Cronbach’s alpha of .64 indicated rather low reliability of the mean scale. Respondents reported on average a slightly higher Confidence in Health Technology than the mid-score, M = 4.29, SD = 1.16.

Attitude toward m-Mental Health. Respondents’ attitude toward m-Mental Health was measured using a three-item scale. Two rather general items were adapted from Schnall, Cho and Liu (2018) and one item reflecting more the specifics of m-Mental Health was added from Apolinário-Hagen et al. (2018), see Appendices A and C.

The mean scale showed somewhat low yet acceptable reliability, Cronbach’s alpha = .72. On average, the sample reported a rather positive attitude, M = 5.42, SD = .98.

Intention to Use m-Mental Health. Behavioural intention was measured with two items: one focused on recommending m-Mental Health apps, M = 4.11, SD = 1.86, and the other focused on actual use, M = 4.26, SD = 1.80. Each item was accompanied by an example situation of encountering mental health difficulties adapted from Fonseca et al. (2016), see Appendix A, section II.

On average, the sample reported behavioural intentions slightly higher than the mid-score (M = 4.19, SD = 1.69). The two items of this instrument were strongly correlated, r = .70, p < .001.

Control Variables

To prevent threats to internal validity by confounders which potentially affect the main variables and skew their relationships (Gravetter & Forzano, 2009), the following control variables were also included:

Age, Sex, Education. Previous studies provide empirical grounds to believe that age (e.g., Chung et al., 2010; Hung & Yen, 2012), sex (e.g., Cho et al., 2014; Venkatesh et al., 2003), and education (e.g., Cho et al., 2014) might play a confounding role in the present study’s model.

Health technology awareness, e-Health literacy, Previous health technology use. The research of Apolinário-Hagen (2017) adds health technology awareness, e-Health literacy (also in Khazaal et al., 2008), and previous health technology use (also in Venkatesh et al., 2003) to the list. To measure e-Health literacy, three items on a 5-point Likert scale were adapted from Razmak et al. (2018), who originally derived them from the eHEALS instrument developed and validated by Norman and Skinner (2006). The mean scale (M = 3.42, SD = .86) reported fairly good reliability, Cronbach’s alpha = .78, and showed that the sample had slightly higher e-Health literacy than the mid-score.

A full overview of the instruments measuring the control variables can be found in Appendix A, section III. Table 3 summarizes the distribution of the awareness and use of m-Mental Health apps, showing that most respondents did not know any specific m-Mental Health apps and an overwhelming majority had no experience with using them.

Current mental well-being. Additionally, following the example of Fonseca et al. (2016), the current state of mental well-being – one’s ability to cope with common stress, be productive, and contribute to their community (World Health Organization, 2014) – was also included as a control variable. To measure this construct, the seven-item Short Warwick-Edinburgh Mental Well-being Scale (Stewart-Brown et al., 2011) was used, asking respondents to indicate how often they felt positive about their abilities to handle life (e.g., “I’ve been dealing with problems well”) on a 5-point Likert scale (1 = None of the time, 5 = All of the time). See Appendices A and C for an overview of all items.

In line with the official guide to using this instrument (NHS Health Scotland, University of Warwick, & University of Edinburgh, 2006), a sum scale was computed with fairly good reliability, Cronbach’s alpha = .79. On average, the sample reported a rather low level of mental well-being, M = 24.21, SD = 4.24.

Table 3

Use and awareness of m-Mental Health apps

Characteristic (N = 229)                                       n (%)
Awareness (“Do you know any m-Mental Health apps?”)
  Yes                                                          54 (23.6)
  No                                                           147 (64.2)
  Not sure                                                     28 (12.2)
Use (“Have you ever used any kind of...?”; “During the past 6 months, how frequently...during a regular week?”)
  Never                                                        191 (83.4)
  Yes, but not in the past 6 months                            14 (6.1)
  Once per week                                                10 (4.4)
  Multiple times per week                                      1 (0.4)
  Once each day                                                1 (0.4)
  Don’t know                                                   12 (5.2)

Statistical Analysis

The method of analysis in this study was structural equation modelling (SEM). More specifically, the variance-based partial least squares technique (PLS-SEM) was chosen due to its ability to test more complex models even with smaller sample sizes or non-normal data (Hair, Sarstedt, Hopkins, & Kuppelwieser, 2014). As highlighted by Lowry and Gaskin (2014), while regression analysis is a good choice for simple models and highly normalized data, SEM offers flexible causal modelling with latent constructs. Contrary to the regression approach, PLS-SEM estimates all equations simultaneously and interdependently to provide a more accurate picture of complex models such as the one proposed in this study.

Data cleaning and preparation, exploratory factor analyses, and the testing of regression assumptions were conducted in the SPSS statistical software package by IBM. The model was tested in SmartPLS 3.

First, the model was run once with each of the control variables (or all the dummies of a particular multi-categorical control variable) to determine which of the seven potential confounders showed significant effects on the results. Finally, to test the hypotheses proposed in the previous chapter, the model was run one more time including the control variables with significant effects.

Results

Before proceeding with PLS-SEM to determine the model fit and deliver empirical evidence for accepting or rejecting the hypotheses, several regression assumptions were tested.

Testing of Assumptions

Although PLS does not require any particular distributions of the dependent variables (Lowry & Gaskin, 2014), acceptable multivariate normality was confirmed with histograms. The lack of multicollinearity was demonstrated by all VIF values of the multicollinearity test falling between 1.17 and 2.06. The Durbin-Watson statistic of 2.00 confirmed that the data were not autocorrelated. The inspection of scatterplots then served to confirm homoscedasticity and linear relationships for each path in the model. Finally, Harman’s single factor test for all first-order constructs was applied to check for common method bias, a type of error potentially present when measuring the endogenous and exogenous variables of a model at the same time and with the same method (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). A single factor extracted 31.28% of total variance, on the basis of which common method bias could be deemed unlikely.
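The sketch below illustrates comparable diagnostics in Python; it is not the thesis’s analysis script (which used SPSS), the variable names and simulated data are hypothetical, and Harman’s single-factor test is approximated here by the variance share of the first principal component of the correlation matrix.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Hypothetical data: three predictors of behavioural intention (names are illustrative only).
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(229, 3)), columns=["ATT", "HTSE", "HTSA"])
y = 0.4 * X["ATT"] + 0.3 * X["HTSE"] + rng.normal(scale=0.8, size=229)

# 1) Multicollinearity: VIF per predictor (values near 1-2 indicate little collinearity).
X_const = sm.add_constant(X)
vif = {col: variance_inflation_factor(X_const.values, i)
       for i, col in enumerate(X_const.columns) if col != "const"}
print("VIF:", {k: round(v, 2) for k, v in vif.items()})

# 2) Autocorrelation of residuals: Durbin-Watson (values around 2.0 suggest none).
residuals = sm.OLS(y, X_const).fit().resid
print("Durbin-Watson:", round(durbin_watson(residuals), 2))

# 3) Harman's single-factor test, approximated via the first principal component:
# common method bias is considered unlikely if it explains well under 50% of the variance.
items = pd.concat([X, pd.Series(y, name="BI")], axis=1)
corr = np.corrcoef(items.values, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
print("First-factor variance share:", round(eigenvalues.max() / eigenvalues.sum(), 3))
```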

The social acceptance and behavioural intention constructs were labelled as formative in SmartPLS 3 due to their composite nature. Therefore, the regular PLS algorithm and bootstrapping were chosen for the analyses over the Consistent PLS algorithm, which is intended for fully reflective models.

Finally, it was determined whether or not each control variable had a significant impact on the results. By the end of this iterative process, five relationships of confounders with the main variables were kept in the model for the final analysis, namely: 1) higher e-Health literacy was linked with a more positive attitude toward mobile mental health; 2) the use of m-Mental Health once per week was related to higher intention to use such apps. Further, 3) not having used m-Mental Health in the past six months, 4) never having used it, and 5) knowledge of specific apps were all associated with lower Confidence in Health Technology (see Table 4).

Model Fit

The final run of the PLS algorithm (see the SmartPLS 3 output in Appendix D) and regular bootstrapping (1000 subsamples, maximum 1000 iterations) tested the conceptual model and hypotheses of this study all at once.
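To make the role of bootstrapping more concrete, the following Python sketch resamples cases with replacement to obtain an empirical standard error and t-value for a single standardized path coefficient. It is only a conceptual analogue of what SmartPLS 3 performs for every path in the model, using simulated data and a plain OLS slope in place of a PLS path estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_boot = 229, 1000

# Simulated standardized scores for an exogenous and an endogenous construct (illustrative only).
hte = rng.normal(size=n)
att = 0.5 * hte + rng.normal(scale=0.85, size=n)

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized slope of y on x (equals the correlation for a single predictor)."""
    x_std = (x - x.mean()) / x.std(ddof=1)
    y_std = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(x_std, y_std, 1)[0])

original = path_coefficient(hte, att)

# Resample cases with replacement and re-estimate the path each time.
boot_estimates = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    boot_estimates[b] = path_coefficient(hte[idx], att[idx])

standard_error = boot_estimates.std(ddof=1)
t_value = original / standard_error
print(f"beta = {original:.2f}, bootstrap SE = {standard_error:.3f}, t = {t_value:.2f}")
```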

The standardized root mean square residual (SRMR), a measure of fit commonly used in the context of evaluating latent variable models, took a value of .065, indicating good model fit. In all, the model explained 46.3% of the variation in the intention to use m-Mental Health apps (R2 = .46, p < .001), 56.1% of the variance in attitude toward m-Mental Health (R2 = .56, p < .001), and 33% of the variation in Health Technology Efficacy (R2 = .33, p < .001). However, only 6.4% of the variance in Confidence in Health Technology was explained (R2 = .06, p = .071).

Table 4

PLS-SEM path analysis results summary

Path                       β      p         t       R2 (p)         Adjusted R2 (p)
ATT → BI                   .42    .000***   6.17    0.46 (.000)    0.46 (.000)
HTSA → BI                  .10    .133      1.50
HTSE → BI                  .26    .000***   4.27
USE_1pW → BI               .11    .000***   3.50
HTE → ATT                  .56    .000***   10.30   0.56 (.000)    0.56 (.000)
EHL → ATT                  .11    .034*     2.13
Moderation (HTE × CHT)     -.12   .008**    2.68
HTSA → HTE                 .25    .000***   3.87    0.33 (.000)    0.33 (.000)
HTSE → HTE                 .43    .000***   6.81
(Non-)Com → CHT            .02    .812      0.24    0.06 (.071)    0.05 (.245)
USE_Never → CHT            -.24   .001**    3.24
USE_NotIn6m → CHT          -.16   .037*     2.09
AWA_Yes → CHT              -.17   .020*     2.34

Note. ATT: Attitude toward m-Mental Health; BI: Intention to Use m-Mental Health; HTSA: Health Technology Social Acceptance; HTSE: Health Technology Self-Efficacy; HTE: Health Technology Efficacy; EHL: e-Health Literacy; CHT: Confidence in Health Technology; (Non-)Com: (Non-)Commercial Nature of an App; USE_1pW: used an m-Mental Health app once per week in the past six months; USE_Never: never used an m-Mental Health app; USE_NotIn6m: did not use an m-Mental Health app in the past six months; AWA_Yes: knows specific m-Mental Health apps. *p < .05, **p < .01, ***p < .001.

Testing of Hypotheses

The summary of hypotheses testing is provided in Table 5. The relationship between attitude toward m-Mental Health and intention to use such apps was found to be highly significant, β = .42, t = 6.17, p < .001, thus confirming H1. Similarly, the significant strong association between Health Technology Efficacy and attitude toward mobile mental health provided support for H2, β = .56, t = 10.30, p < .001.

Furthermore, confirming H3 and H4 respectively, Health Technology Self-Efficacy had a significant positive relationship with Health Technology Efficacy, β = .43, t = 6.81, p < .001, as well as with the behavioural intention to use m-Mental Health, β = .26, t = 4.27, p < .001. Similarly, Health Technology Social Acceptance had a significant positive relationship with Health Technology Efficacy, confirming H5. On the other hand, Health Technology Social Acceptance was not significantly related to the intention to use m-Mental Health, β = .10, t = 1.50, p = .133; therefore, H6 was not supported.

Table 5

Summary of hypotheses testing

Hypothesis   Relationship tested        β      p      Results
H1           ATT → BI                   .42    .000   Supported
H2           HTE → ATT                  .56    .000   Supported
H3           HTSE → HTE                 .43    .000   Supported
H4           HTSE → BI                  .26    .000   Supported
H5           HTSA → HTE                 .25    .000   Supported
H6           HTSA → BI                  .10    .133   Not supported
H7           Moderation (HTE × CHT)     -.12   .008   Not supported
H8           (Non-)Com → CHT            .02    .812   Not supported

Figure 2 provides a visual depiction of the main results, showing the path coefficients along with their levels of significance.

Figure 2. Conceptual model with main results. *p < .01, **p < .001

The moderating relationship between Confidence in Health Technology and Health Technology Efficacy in shaping the attitude toward m-Mental Health was found to be significant, β = -.12, t = 2.68, p = .008. However, no support was found for H7 because the direction of the relationship contradicted the hypothesis. Specifically, whereas H7 predicted that the role of Health Technology Efficacy in shaping the attitude toward m-Mental Health would become stronger with increased Confidence in Health Technology, the negative beta value indicates that higher Confidence in Health Technology actually weakened the role of Health Technology Efficacy in shaping the attitude toward m-Mental Health.
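For illustration, the sketch below probes a comparable moderation in an ordinary regression framework, using simulated data and hypothetical variable names: a negative product term means the slope of HTE on attitude flattens as CHT increases. The thesis itself estimated the moderation within the PLS-SEM model, so this is a didactic analogue rather than a reproduction of the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 229

# Simulated, roughly mean-centered constructs with a negative interaction,
# mirroring the pattern reported above (coefficients are illustrative).
hte = rng.normal(size=n)
cht = rng.normal(size=n)
att = 0.55 * hte - 0.12 * hte * cht + rng.normal(scale=0.7, size=n)
df = pd.DataFrame({"ATT": att, "HTE": hte, "CHT": cht})

model = smf.ols("ATT ~ HTE * CHT", data=df).fit()  # HTE*CHT expands to HTE + CHT + HTE:CHT
print(model.params.round(2))

# Simple slopes: effect of HTE on attitude at low (-1 SD) and high (+1 SD) confidence.
b = model.params
for level, label in [(-1.0, "low CHT (-1 SD)"), (1.0, "high CHT (+1 SD)")]:
    slope = b["HTE"] + b["HTE:CHT"] * level * df["CHT"].std(ddof=1)
    print(f"HTE slope at {label}: {slope:.2f}")
```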

Finally, the effect of the (Non-)Commercial Nature of an App was found to be non-significant, β = .02, t = 0.24, p = .812; hence, no support was found for H8.

Discussion

The final section of the present study translates the results outlined above into specific implications and demarcates potential avenues for further research based on the study’s limitations.

Interpreting the Results

First and foremost, bringing the prognosis of Viswanath (2015) to life, the approach of bridging health communication and technology acceptance perspectives to develop a model grounded in both disciplines for understanding the drivers of m-Mental Health uptake proved to be promising. The model of the determinants of m-Mental Health use succeeded in explaining a significant share of the variance in the intention to use mobile apps for mental health, and the significant paths were in line with both theory and previous studies. This suggests that when emerging adults are to decide whether or not to use an app to enhance their mental well-being, several factors come into play: perceived efficacy of the app, itself shaped by self-efficacy and social acceptance, influences the intention to use through the attitude toward m-Mental Health, while health technology self-efficacy is also directly related to the intention to use such an app.

Moreover, the study substantiated the claim by van Schaik et al. (2004) that balancing the advantages with a particular technology’s downsides increases predictive power. In spite of the unforeseen direction of the moderation by Confidence in Health Technology, this variable manifested its indispensable role within the model, thus demonstrating that new contextualized constructs indeed deserve attention in health technology studies (Holden & Karsh, 2010).

Finally, the study failed to deliver supportive evidence for the assumed effect of the (non-)commercial nature of an m-Mental Health app on the privacy and safety concerns it arouses in emerging adults. Thereby, this study disconfirmed the assumption that young people would be less confident in the app if they were confronted with signs of its profit-driven origin.

Limitations

The fact that an overwhelming majority of the sample did not have any awareness of or experience with specific m-Mental Health apps constitutes the most apparent limitation of this study. Due to the lack of their own knowledge and exposure, most respondents very likely had to rely fully on the examples and definitions provided at the beginning of the questionnaire when forming their attitude toward m-Mental Health and the intention to use such apps. Hence, the results are bound to apps designed to increase overall mental wellness by helping people with daily stress management without any therapist being actively involved (see Appendix B). However, the low use and awareness among the present sample can also be considered a testimony to the importance of studying the topic of m-Mental Health uptake.

Another limitation can be seen in the experimental part of the present study. Without a pre-test and an attention check testing the manipulation, it cannot be concluded that the (non-)commercial nature of the presented app example was successfully manipulated. The participants might have forgotten about the details of their example after having responded to a few items. A failed manipulation can be deemed a quite likely explanation for the lack of significant results in this part of the model.

Furthermore, the implications of the results are notably constrained by the low reliability of the scales measuring Health Technology Self-Efficacy and Confidence in Health Technology, as well as by the weak correlation of the two items of the composite social influence construct. The complexity of the model and the very limited space within the frame of a reasonably demanding online survey required highly economical scale development. As a result, very few instruments exceeded three items and item selection was partially based on intuitive choices.

The final limitation rests in the rather weak argumentation behind the position of the Confidence in Health Technology variable within the model. Other paths, such as a direct relationship between this variable and perceived Health Technology Efficacy, might have been similarly justifiable. However, the very limited number of previous studies utilizing this concept did not allow for more precise hypothesizing.

Implications

Several practical implications can be derived from this study. Understanding the drivers of uptake can be deemed the first step in the process of translating the promising potential of the recent boom in m-Mental Health apps (Powell, 2016) into actual uptake. Identifying the most influential factors may help both commercial and non-commercial practitioners in the field of m-Mental Health to tailor their communication targeting emerging adults. Additionally, despite their lack of awareness and personal experience with m-Mental Health, the sample in this study reported a positive attitude toward such apps and also a fairly high intention to use or recommend them. The mean scores suggest that emerging adults are generally quite open to trying an app for improving their mental health. Apart from using the concepts of perceived efficacy or self-efficacy in adjusting their offerings, m-Mental Health professionals should also build on this solid basis and aim to translate the positive attitude and behavioural intentions into actual uptake to boost the rather modest awareness and use among young adults so far.

Further Research

In times when issues of digital safety and personal data privacy are taking up an increasingly pivotal role in the public agenda, it can be presumed that such concerns might easily diminish the positive influence of, for instance, high perceived Health Technology Efficacy in shaping the uptake of m-Mental Health. Presently, there is a lack of empirical evidence on the role of privacy and safety concerns in the field of m-Mental Health, which only highlights the importance of devoting more attention to this matter in future studies. Given the empirical void outlined above, more narrowly focused qualitative research providing deep insights for follow-up quantitative studies would be the desirable next step. Such research could also investigate the underlying beliefs behind safety and privacy concerns regarding m-Mental Health, and uncover the relationships between use, awareness, and Confidence in Health Technology with increased precision. Moreover, it would be highly beneficial for further research to determine whether or not privacy and safety concerns should be treated as separate constructs in future models. Finally, future research might aim to apply the perspectives-bridging, variables-contextualizing approach outlined in this study to other health domains, technologies, and populations. For instance, it would be interesting to assess the role of privacy concerns among the other drivers of uptake in the field of women’s health apps, another highly private and confidential e-Health area, or to uncover new influential determinants for situations when parents decide whether or not their children should use such tools.

Conclusion

In spite of often being listed among the main contributors to the deteriorating mental well-being of today’s emerging adults, smartphones offer vast potential in the field of preventing or treating mental health issues. The present study aimed to advance m-Mental Health research and practice by studying the determinants of the uptake of such apps.

Furthermore, to depict a more accurate reflection of the highly personal context of m-Mental Health, the model was enriched with a newly conceptualized Confidence in Health Technology variable. Moreover, the study investigated whether the level of this variable is dependent on the commercial or non-commercial origin of an m-Mental Health app.

The study employed a mixed design, interlacing a manipulation with an online survey targeted at people aged 18-29. A PLS-SEM analysis of 229 observations showed solid predictive power of the newly developed model. However, the (non-)commercial nature of the provided app example had no significant effect on the Confidence in Health Technology. The results also showed an unforeseen direction of the assumed moderation – the stronger young adults believed in m-Mental Health apps in terms of safety and privacy, the weaker the predictive power of the perceived efficacy of such apps in relation to the attitude toward m-Mental Health. Future studies might adopt a qualitative approach to provide a deeper dive into the role of privacy and safety concerns within the model and to enhance its predictive power even further.

References

Ahmedani, B. K., Belville-Robertson, T., Hirsch, A., & Jurayj, A. (2016). An online mental health and wellness intervention supplementing standard care of depression and anxiety. Archives of Psychiatric Nursing, 30(6), 666–670.

Anderson, C. L., & Agarwal, R. (2011). The digitization of healthcare: Boundary risks, emotion, and consumer willingness to disclose personal health information. Information Systems Research, 22(3), 469–490. https://doi.org/10.1287/isre.1100.0335

Anthes, E. (2016). Pocket psychiatry: Mobile mental health apps have exploded onto the market, but few have been thoroughly tested. Nature, 532, 20–23. https://doi.org/10.1038/532020a

Apolinário-Hagen, J. (2017). Current perspectives on e-mental-health self-help treatments: Exploring the “black box” of public views, perceptions, and attitudes toward the digitalization of mental health care. In L. Menvielle et al. (Eds.), The digitization of healthcare: New challenges and opportunities (pp. 205–223). Palgrave Macmillan. https://doi.org/10.1057/978-1-349-95173-4_12

Apolinário-Hagen, J., Fritsche, L., Bierhals, C., & Salewski, C. (2018). Improving attitudes toward e-Mental Health services in the general population via psychoeducational information material: A randomized controlled trial. Internet Interventions, 12, 141– 149. https://doi.org/10.1016/j.invent.2017.12.002

Apolinário-Hagen, J., Trachse, A., Anhorn, L., Holsten, B., Werner, V., & Krebs, S. (2016). Exploring individual differences in online and face-to-face help-seeking intentions in case of impending mental health problems: The role of adult attachment, perceived social support, psychological distress and self-stigma. Journal of Health and Social Sciences, 1(3), 223–240. Retrieved from http://journalhss.com/wp-content/uploads/JHHS13_223-240.pdf

Arnett, J. J. (2000). Emerging adulthood: A theory of development from the late teens through the twenties. American Psychologist, 55(5), 469–480. https://doi.org/10.1037/0003-066X.55.5.469

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Batterham, P. J., Sunderland, M., Calear, A. L., Davey, C. G., Christensen, H., Teesson, M., … Krouskos, D. (2015). Developing a roadmap for the translation of e-mental health services for depression. Australian & New Zealand Journal of Psychiatry, 49(9), 776– 784. https://doi.org/10.1177/0004867415582054

Beldad, A., de Jong, M., & Steehouder, M. (2010). How shall I trust the faceless and the intangible? A literature review on the antecedents of online trust. Computers in Human Behavior, 26(5), 857–869. https://doi.org/10.1016/j.chb.2010.03.013

Bol, N., Helberger, N., & van Weert, J. C. M. (2018). Differences in mobile health app use: A source of new digital inequalities? The Information Society, 34(3), 183–193. https://doi.org/10.1080/01972243.2018.1438550

Briggs, P., Burford, B., De Angeli, A., & Lynch, P. (2002). Trust in online advice. Social Science Computer Review, 20(3), 321–332. https://doi.org/10.1177/089443930202000309

Bryman, A. (2012). Social research methods (5th ed.). Oxford: Oxford University Press.

Buccoliero, L., & Bellio, E. (2014). The adoption of “silver” e-Health technologies: First hints on technology acceptance factors for elderly in Italy. Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance, 2014, 304–307. https://doi.org/10.1145/2691195.2691303

Calvin, K.L., & Karsh, B. (2006). The Patient Technology Acceptance Model (PTAM) for Homecare Patients with Chronic Illness. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(10), 989-993.

Cho, J., Park, D., & Lee, H. E. (2014). Cognitive factors of using health apps: Systematic analysis of relationships among health consciousness, health information orientation, eHealth literacy, and health app use efficacy. Journal of Medical Internet Research, 16(5). https://doi.org/10.2196/jmir.3283

Chung, J. E., Park, N., Wang, H., Fulk, J., & McLaughlin, M. (2010). Age differences in perceptions of online community participation among non-users: An extension of the technology acceptance model. Computers in Human Behavior, 26(6), 1674-1684. https://doi.org/10.1016/j.chb.2010.06.016

Cismaru, M., Nagpal, A., & Krishnamurthy, P. (2009). The Role of Cost and Response-efficacy in Persuasiveness of Health Recommendations. Journal of Health Psychology, 14(1), 135-141. https://doi.org/10.1177/1359105308097953

Coyne, P., Santarossa, S., Polumbo, N., & Woodruff, S. J. (2018). The associations of social networking site use and self-reported general health, mental health, and well-being among Canadians. Digital Health, 4. https://doi.org/10.1177/2055207618812532

Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Massachusetts, United States: Sloan School of Management, Massachusetts Institute of Technology.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Derks, Y. P. M. J., De Visser, T., Bohlmeijer, E. T., & Noordzij, M. L. (2017). mHealth in mental health: How to efficiently and scientifically create an ambulatory biofeedback e-coaching app for patients with borderline personality disorder. International Journal of Human Factors and Ergonomics, 5(1), 61–92. https://doi.org/10.1504/IJHFE.2017.10009438

Current and emerging mHealth technologies: Adoption, implementation, and use (pp. 37–56). Springer International Publishing.

Donker, T., Petrie, K., Proudfoot, J., Clarke, J., Birch, M. R., & Christensen, H. (2013). Smartphones for smarter delivery of mental health programs: A systematic review. Journal of Medical Internet Research, 15(11), e247. https://doi.org/10.2196/jmir.2791

European Commission (2016). Draft code of conduct on privacy for mobile health applications. Retrieved from: https://ec.europa.eu/digital-single-market/en/news/code-conduct-privacy-mhealth-apps-has-been-finalised

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.

Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York, NY: Psychology Press.

Fishbein, M., & Cappella, J. N. (2006). The role of theory in developing effective health communications. Journal of Communication, 56, S1–S17. https://doi.org/10.1111/j.1460-2466.2006.00280.x

Flanagin, A. J., & Metzger, M. J. (2001). Internet use in the contemporary media environment. Human Communication Research, 27(1), 153–181. https://doi.org/10.1093/hcr/27.1.153

Fonseca, A., Gorayeb, R., & Canavarro, M. C. (2016). Women’s use of online resources and acceptance of e-mental health tools during the perinatal period. International Journal of Medical Informatics, 94, 228–236. https://doi.org/10.1016/j.ijmedinf.2016.07.016

Fowler, F. (2014). Survey research methods (5th ed.). Los Angeles, CA: Sage.

Glanz, K., Lewis, F., & Rimer, B. (1997). Linking theory, research, and practice. In Glanz, K., Lewis, F., & Rimer, B. (Eds.), Health behavior and health education: Theory, research, and practice (2nd ed.; pp. 19-35). San Francisco, CA: Jossey-Bass.

Goritz, A. S. (2010). Using lotteries, loyalty points, and other incentives to increase participant response and completion. In S. D. Gosling & J. A. Johnson (Eds.), Advanced methods for conducting online behavioral research (pp. 219-233). Washington, DC: American Psychological Association.

Gravetter, F. J., & Forzano, L.-A. B. (2009). Research methods for the behavioral sciences. Stamford, CT: Cengage Learning.

Gulliver, A., Bennett, K., Bennett, A., Farrer, L. M., Reynolds, J., & Griffiths, K. M. (2015). Privacy issues in the development of a virtual mental health clinic for university students: A qualitative study. JMIR Mental Health, 2(1), e9. https://doi.org/10.2196/mental.4294

Gulliver, A., Griffiths, K. M., & Christensen, H. (2010). Perceived barriers and facilitators to mental health help for young people. BioMed Central Psychiatry, 10, 113. https://doi.org/10.1186/1471-244X-10-113

Hair, J. F. Jr., Sarstedt, J., Hopkins, L., & Kuppelwieser, V. G. (2014). Partial least squares structural equation modeling (PLS-SEM). European Business Review, 26(2), 106–121. https://doi.org/10.1108/EBR-10-2013-0128

Hale, K., Capra, S., & Bauer, J. (2015). A framework to assist health professionals in recommending high-quality apps for supporting chronic disease self-management: Illustrative assessment of type 2 diabetes apps. JMIR MHealth and UHealth, 3(3), e87. https://doi.org/10.2196/mhealth.4532

Hendry, L., & Kloep, M. (2007). Conceptualizing emerging adulthood: Inspecting the emperor’s new clothes? Child Development Perspectives, 1(2), 74–79. https://doi.org/10.1111/j.1750-8606.2007.00017.x

Hilty, D. M., Chan, S., Hwang, T., Wong, A., & Bauer, A. M. (2017). Advances in mobile mental health: opportunities and implications for the spectrum of e-mental health

Holden, R., & Karsh, B. (2010). The Technology Acceptance Model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159–172. https://doi.org/10.1016/j.jbi.2009.07.002

Hung, M. C., & Jen, W. Y. (2012). The adoption of mobile health management services: An empirical study. Journal of Medical Systems, 36(3), 1381–1388. https://doi.org/10.1007/s10916-010-9600-2

Khazaal, Y., Chatton, A., Cochand, S., Hoch, A., Khankarli, M. B., Khan, R., & Zullino, D. F. (2008). Internet use by patients with psychiatric disorders in search for general and medical informations. Psychiatric Quarterly, 79(4), 301–309. https://doi.org/10.1007/s11126-008-9083-1

Kumar, S., Nilsen, W. J., Abernethy, A., Atienza, A., Patrick, K., Pavel, M., … Swendeman, D. (2013). Mobile health technology evaluation: The mHealth evidence workshop. American Journal of Preventive Medicine, 45(2), 228–236. https://doi.org/10.1016/j.amepre.2013.03.017

Lal, S., & Adair, C. E. (2013). E-mental health: A rapid review of the literature. Psychiatric Services, 65(1), 24–32. https://doi.org/10.1176/appi.ps.201300009

Lee, H. E., & Cho, J. (2016). What motivates users to continue using diet and fitness apps? Application of the uses and gratifications approach. Health Communication, 32, 1445–1453. https://doi.org/10.1080/10410236.2016.1167998

Lowry, P., & Gaskin, J. (2014). Partial least squares (PLS) structural equation modeling (SEM) for building and testing behavioral causal theory: When to choose it and how to use it. IEEE Transactions on Professional Communication, 57(2), 123–146. https://doi.org/10.1109/TPC.2014.2312452

Luo, M. M., & Remus, W. (2014). Uses and gratifications and acceptance of Web-based information services: An integrated model. Computers in Human Behavior, 38, 281– 295. https://doi.org/10.1016/j.chb.2014.05.042

Luxton, D. D., McCann, R. A., Bush, N. E., Mishkind, M. C., & Reger, G. M. (2011). mHealth for mental health: Integrating smartphone technology in behavioral healthcare. Professional Psychology: Research and Practice, 42(6), 505–512. https://doi.org/10.1037/a0024485

Maghnati, F., & Ling, K. C. (2013). Exploring the relationship between experiential value and usage attitude towards mobile apps among the smartphone users. International Journal of Business and Management, 8(4). https://doi.org/10.5539/ijbm.v8n4p1

Mental Health Commission of Canada (2014). E-mental health in Canada: Transforming the mental health system using technology: A briefing document. Retrieved from: https://www.mentalhealthcommission.ca/sites/default/files/MHCC_E-Mental_Health-Briefing_Document_ENG_0.pdf

Mohr, D. C., Burns, M. N., Schueller, S. M., Clarke, G., & Klinkman, M. (2013). Behavioral Intervention Technologies: Evidence review and recommendations for future research in mental health. General Hospital Psychiatry, 35(4), 332–338. https://doi.org/10.1016/j.genhosppsych.2013.03.008

Montagni, I., Donisi, V., Tedeschi, F., Parizot, I., Motrico, E., & Horgan, A. (2016). Internet use for mental health information and support among European university students: The e-MentH project. Digital Health, 2, 2055207616653845. https://doi.org/10.1177/2055207616653845

Musiat, P., Goldstone, P., & Tarrier, N. (2014). Understanding the acceptability of e-mental health: Attitudes and expectations towards computerised self-help treatments for mental health problems. BioMed Central Psychiatry, 14, 109. https://doi.org/10.1186/1471-244x-14-109

NHS Health Scotland, University of Warwick, & University of Edinburgh (2006). Short Warwick-Edinburgh Mental Well-being Scale ((S)WEMWBS). Retrieved from: https://warwick.ac.uk/fac/sci/med/research/platform/wemwbs/using/register/resources/swemwbs_final.pdf

Norman, C. D., & Skinner, H. A. (2006). eHEALS: The eHealth literacy scale. Journal of Medical Internet Research, 8(4). https://doi.org/10.2196/jmir.8.4.e27

Or, C., Karsh, B. T., Severtson, D. J., & Brennan, P. F. (2008). Patient Technology Acceptance Model (PTAM) – Exploring the potential characteristics of consumer health information technology acceptance by home care patients with chronic illness. Proceedings of the International Conference on Healthcare Systems, Ergonomics and Patient Safety in Strasbourg, France.

Patel, V., Saxena, S., Lund, C., Thornicroft, G., Baingana, F., Bolton, P., … Unützer, J. (2018). The Lancet Commission on global mental health and sustainable development. The Lancet, 392(10157), 1553–1598. https://doi.org/10.1016/s0140-6736(18)31612-x

Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York, NY: Springer-Verlag.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879

Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34, 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x

Powell, J. (2016). E-mental health special issue. Digital Health, 2. https://doi.org/10.1177/2055207616676792

Prentice, D. (2008). Mobilizing and weakening peer influence as mechanisms for changing behavior: Implications for alcohol intervention programs. In M. J. Prinstein & K. A. Dodge (Eds.), Understanding peer influence in children and adolescents (pp. 161–180). New York, NY: Guilford Press.

Price, M., Yuen, E. K., Goetter, E. M., Herbert, J. D., Forman, E. M., Acierno, R., & Ruggiero, K. J. (2014). mHealth: A mechanism to deliver more accessible, more effective mental health care. Clinical Psychology & Psychotherapy, 21(5), 427–436. https://doi.org/10.1002/cpp.1855

Public Health England (2014). Improving young people’s health and wellbeing: A framework for public health. Retrieved from: www.gov.uk/government/uploads/system/uploads/attachment_data/file/399391/20150128_YP_HW_Framework_FINAL_WP__3_.pdf

Rahman, M. S., Ko, M., Warren, J., & Carpenter, D. (2016). Healthcare Technology Self-Efficacy (HTSE) and its influence on individual attitude: An empirical study. Computers in Human Behavior, 58, 12–14. https://doi.org/10.1016/j.chb.2015.12.016

Razmak, J., Bélanger, C. H., & Farhan, W. (2018). Development of a techno-humanist model for e-health adoption of innovative technology. International Journal of Medical Informatics, 120, 62–76. https://doi.org/10.1016/j.ijmedinf.2018.09.022

Rickwood, D., Deane, F. P., Wilson, C. J., & Ciarrochi, J. (2005). Young people’s help-seeking for mental health problems. Australian E-Journal for the Advancement of Mental Health, 4(3), 218–251. https://doi.org/10.5172/jamh.4.3.218

Ryan, C., Bergin, M., & Wells, J. S. (2018). Theoretical perspectives of adherence to Web-based interventions: A scoping review. International Journal of Behavioral Medicine, 25(1), 17–29. https://doi.org/10.1007/s12529-017-9678-8

Schnall, R., Cho, H., & Liu, J. (2018). Health information technology usability evaluation scale (Health-ITUES) for usability assessment of mobile health technology: Validation study. Journal of Medical Internet Research, 20(1).

Sillence, E., & Briggs, P. (2015). Trust and engagement in online health: A timeline approach. In Sundar, S. (Ed.), The handbook of the psychology of communication technology (pp. 469–487). John Wiley & Sons, Inc.

Sort, A. (2017). The role of mHealth in mental health. mHealth, 3, 1. https://doi.org/10.21037/mhealth.2017.01.02

Stewart-Brown, S., Platt, S., Tennant, A., Maheswaran, H., Parkinson, J., Weich, S., … Clarke, A. (2011). The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): a valid and reliable tool for measuring mental well-being in diverse populations and projects. Journal of Epidemiology & Community Health, 65(Suppl 2), A38–A39. https://doi.org/10.1136/jech.2011.143586.86

Stiles-Shields, C., Ho, J., & Mohr, D. C. (2016). A review of design characteristics of cognitive behavioral therapy-informed behavioral intervention technologies for youth with depression and anxiety. Digital Health, 2, 2055207616675706. https://doi.org/10.1177/2055207616675706

Stiles-Shields, C., Montague, E., Lattie, E. G., Kwasny, M. J., & Mohr, D. C. (2017). What might get in the way: Barriers to the use of apps for depression. Digital Health, 3, 2055207617713827. https://doi.org/10.1177/2055207617713827

Stroud, C., Mainero, T., & Olson, S. (2013). Improving the health, safety, and well-being of young adults: Workshop summary. In Institute of Medicine and National Research Council (p. 203). National Academy of Sciences. https://doi.org/10.17226/18340

Tucker, I., & Goodings, L. (2015). Managing stress through the Stress Free app: Practices of self-care in digitally mediated spaces. Digital Health, 1, 205520761558074.
