I, financial avatar – the impact of the dehumanization level of an A.I avatar's design on individual behavioral intention to follow its advice

Master thesis author: Fabio Mangia (10418148)

Under the supervision of: Dr. Andrea Weihrauch

---

MSc in Business Administration – Marketing track

Amsterdam Business School

---

Key words: dehumanization, avatar, financial advice

---

Statement of Originality

This document is written by Fabio Mangia who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Thanks to the following people, whose support was essential to the completion of this project: Chris Warminger, Manou Knoef and Sueli Cortezia.

Abstract

With the ongoing development of artificial intelligence, coupled with the digitalization of banks and their appetite for new solutions involving A.I and robotic process automation, the arrival of A.I avatars is arguably near. Although research on avatars is both diverse and extensive, little is known about how the dehumanization level of an avatar's appearance affects consumer attitude and behavioral intention to follow the avatar's advice. As such, in this research I investigated the impact of the design of an A.I avatar, in terms of humanness and mechanization, on individual behavioral intention to follow its advice, future behavioral intention to use it and attitude towards the bank. In addition, I researched whether individual financial knowledge moderated the predicted effect of the dehumanization level of the avatar on the dependent variables. For the conceptual model, I adapted scales from previous literature on the likelihood of listening to recommendation agents (Kim, Ferrin & Rao, 2008; Wang & Benbasat, 2005) and from the unified theory of acceptance and use of technology (Venkatesh et al., 2012; Martins et al., 2014). Surprisingly, most hypotheses were not supported by the data, but two statistically significant effects were found: one concerning the effect of the dehumanization level of the avatar on the difference between how much participants planned to save and the amount advised, and the other a significant interaction of the dehumanization level and gender on mean perceived ability, that is, on how competent the avatars were perceived to be.

Contents

Abstract
Contents
1. Introduction
2. Literature Review
2.1 Avatar – a conceptual definition and application in previous research
2.2 Self-service technologies' definition, examples and conceptual overlap with A.I avatars
2.2.1 Self-service technologies' benefits and strategic customer goals
2.2.2 When consumers do not enjoy nor are willing to use SSTs
2.3 SSTs – UTAUT model, trust, perceived competence and privacy concerns
2.4 Virtual advisors – perceived trust, ability and privacy concerns
2.5 Avatars as persuasive agents and the importance of resemblance
2.6 Anthropomorphism – definition and a psychological perspective
2.6.1 How anthropomorphism would affect the A.I avatar's trustworthiness and persuasion
2.7 Why the design of the A.I avatar is expected to impact consumer attitude and behavioral intention – the uncanny valley and dehumanization perspective
3. Conceptual framework and hypotheses
3.1 Research question
3.2 Conceptual model
3.2.1 From a framework of technology adoption to recommendation agents
3.3 Hypotheses
3.3.1 Dehumanization level of the avatar on behavioral intention
3.3.2 Dehumanization level of the avatar and expert condition on future behavioral intention to use the A.I avatar
3.3.3 Dehumanization level of the avatar on attitude towards the bank
3.3.4 Dehumanization level of the avatar on Mean Trust, Mean Ability and Privacy Concerns
3.3.5 Moderation of the expert condition
3.3.6 Gender, Avatar and behavioral intention
4. Methodology and Data collection plan
4.1 Experiment flow
4.1.1 Questionnaire order and sampling
4.3 Independent variable
4.3.1 Dehumanization condition
4.4 Dependent variables and Likert scales
4.4.1 Benefits of Likert scales
4.6 Mediators
4.7 Proposed moderator – expert financial condition
4.8 Control variables
4.9 Summary of relevant variables
4.10 Data Collection
4.11 Method
4.11.1 ANOVA, mediation, and moderation
4.11.2 Methodological steps taken in the analysis
4.12 Descriptive and Frequency statistics
4.12.1 Randomization and control variables descriptive and frequency statistics
4.12.2 Dependent variables' descriptive and frequency statistics
5. Results
5.1 Manipulation check
5.2 Reliability analysis for scales
5.2.1 Reliability of the mediator scales
5.2.2 Reliability of the dependent variables scales
5.3 Correlation of the relevant variables
5.4 Testing the hypotheses
5.4.1 Testing H1
5.4.2 Testing H1 concerning other dimensions of behavioral intention
5.4.3 Testing H2a, H2b
5.4.4 Testing H3
5.4.5 Testing H4, H5, H6 and mediation
5.4.6 Testing H7
5.4.7 Testing H8 and the unexpected findings concerning gender, dehumanization level and mean ability
5.5 Overview of hypotheses
6. Discussion
6.1 Dehumanization level of the avatar and dependent variables
6.2 Expert condition and behavioral intention to follow financial advice
6.3 Managerial implications
6.5 Future research
7. Appendix
7.1 A note on the RPA & A.I information mentioned in the introduction
7.2 Illustration of key parts of the experiment

1. Introduction

"There are three great events in history. One, the creation of the universe. Two, the appearance of life. The third one, which I think is equal in importance, is the appearance of artificial intelligence" (Copeland, 1993, p.1). These words were said by Edward Franklin, a previous manager of the MIT Artificial Intelligence (A.I) laboratory over 20 years ago but are arguably even more relevant today. Indeed, artificial intelligence, especially concerning software, is nothing short of "booming" (New York Times, September 2016). It has been applied to fields ranging from finance and big data, up to healthcare (Patel et al., 2009).

It is possible to argue that this trend will continue in the future because, as the economy becomes larger and more complicated, so does the demand for efficient solutions. According to PwC's forecasts, the world economy is predicted to nearly triple in size by 2050 (PwC, 2015). This considerable predicted growth will likely translate into higher investment in and development of A.I. This may be particularly true in the field of finance and banking, due to the ongoing computerization of retail financial services and banks' strong appetite for effective innovation (Bátiz-Lazo, Maixé-Altés & Thomas, 2010).

The digital transformation of banks takes form in two main dimensions: robotic process automation (RPA) and A.I. Robots will be most present in the back and middle office, where they can eliminate repetitive, manual, non-value-adding tasks and thereby raise the efficiency of operations teams. They can do so by reducing human error and intervention, which drives down costs. On the A.I end, technology is expected to take on front-end roles involving customer care in the future, ranging from providing basic account information to even financial advice. As such, artificially intelligent avatars can be used to provide a better customer experience in digital banks, and they are the focus of this research.

While some authors focused on a more philosophical conceptualization of artificial intelligence (Clocksin, 2003; McDermott, 2007; Cole, 1991), others have pursued more specific applications in different fields (Turing, 1950), which is my goal with this research. In this thesis, I propose a different conceptualization of artificial intelligence that could enable fruitful investigation of topics relevant for academia and financial organizations alike. Meuter et al. (2000) defined self-service technology (SST) as an interface that allows customers to fulfill a need or produce a self-service without resorting to direct employee involvement. With this in mind, I believe that an artificially intelligent avatar able to engage in conversations with customers and answer personal financial inquiries, however simple they may at first be, is an advanced form of self-service technology. Given the theoretical nature of studying a service interface which has yet to be developed, I will offer here an illustrative example of what a case scenario of such an A.I. avatar could look like. Imagine that a bank client wishes to check the interest rate of a loan to open a bakery based on her current credit. She then accesses her bank's website on her laptop and clicks on the option to speak to the newly developed customer-oriented A.I. Having done that, an avatar appears on her screen which directly speaks to her and, ideally, answers all her questions without the need for her to talk to a human manager.

As futuristic as this case scenario might seem, the technology behind it could feasibly be developed over the medium term. Given its predominantly front-end role, it is relevant to ask how and why such an A.I avatar should be designed. When it comes to this topic, research has been done on the design of avatars in video games (Hussain, Nakamura & Marino, 2011), economics (Bélisle & Bodur, 2010) and even healthcare (Dean, Cook, Keating & Murphy, 2009), but none has been done in the context of artificially intelligent avatars providing financial advice. Therefore, I believe that this paper addresses a gap in the literature that has relevant theoretical and managerial implications.

Therefore, I will investigate the effects of the design of such an A.I. financial avatar in terms of humanness and mechanization on consumers' behavioral intention to follow its financial advice and their attitude towards the financial institution itself. To do so, I plan to run an online experiment wherein the impact of the design of the avatar in terms of its dehumanization level (i.e. how robotic or non-human it looks) on participants' behavioral intention to follow its financial advice is measured. For my conceptual model, I will use mediators adapted from authors who researched consumer behavioral intention to follow recommendations of SSTs (Kim, Ferrin & Rao, 2008; Wang & Benbasat, 2005).

The remaining parts of this thesis are as follows: firstly, a literature review where I discuss previous relevant research, followed by the presentation of the conceptual framework. Secondly, the methodology and data collection, followed by a discussion of the results, managerial implications and, finally, questions for future research.

2. Literature Review

In this section, I aim to expand on four themes: previous research on avatars, self-service technologies, anthropomorphism, and dehumanization. Firstly, I will offer a conceptualization of the avatar and briefly discuss the main research approaches to it. With regard to SSTs, I will discuss more in depth what they are, what their expected benefits may be and what might lead individuals not to wish to use them. In doing so, I will attempt to persuade the reader that the envisioned A.I avatar can be seen as an advanced form of SST and thus make it more tangible. Then, I will briefly discuss previous research on avatars and focus on the findings that are relevant for this study. As to dehumanization, I will describe why it matters in the context of the A.I avatar and how it may affect perceptions of credibility.

2.1 Avatar – a conceptual definition and application in previous research

Holzwarth et al. (2006, p.20) explained that the word derives from the ancient Indian language Sanskrit and refers to the embodiment of a deity on earth. As esoteric as this may sound, it is consistent with a more current definition of the avatar as a graphic representation personified through computer technology. The modern concept of the avatar is, therefore, related to its original one in so far as it reflects the embodiment of a being into another. It is, then, the pictorial representation of an agent from one environment (e.g., a human from the real world) in another environment (e.g., virtual reality).

The conceptual definition of an avatar proposed by Holzwarth et al. (2006) is shared by Hanus and Fox (2015, p.34), albeit in a more straightforward manner. The authors argued that an avatar is a “representation of a user in a virtual environment.” However, this interpretation can be somewhat misleading as far as scientific research is concerned, for it implies that the study of avatars revolves mostly around questions of identity. While quite some research has been done in this regard (Dunn & Guadagno, 2012; Vasalou & Joinson, 2009; Lin & Yang, 2014), there are many relevant questions for avatar use beyond personal identity.

Broadly speaking, there are two main scientific approaches to the study of avatars: one focuses on the personal relationship between a user and the avatar he or she creates in a given context, and the other on the dynamics between an independent avatar and an individual. The former has been addressed in different fields, going from video games (Lim & Reeves, 2010; Zhang, Dang, Brown & Chein, 2017; Hussain, Nakamura & Marino, 2011) to online interactions between users (Hooi & Cho, 2013). The latter encompasses more business-to-consumer scenarios, such as how clients react to avatars as virtual sales agents (Holzwarth et al., 2006; Moon, Kim, Choi & Sung, 2013).

The latter approach is evidently the one undertaken in this paper, as its topic is the dynamics between an A.I avatar and bank clients. However, given the theoretical nature of studying a product that has yet to be developed, it is fruitful to propose a more extensive conceptualization of the A.I avatar. Unlike the ones investigated in the previous research mentioned so far, the avatar proposed in this thesis would work as a financial advisor and would take a more predominant role in bank customer care. Thus, beyond an avatar, it can be perceived as a rather advanced form of self-service technology.

2.2 Self-service technologies' definition, examples and conceptual overlap with A.I avatars

Self-service technologies (SSTs) are technological interfaces which allow consumers access to a service without any direct employee involvement (Meuter, Ostrom, Roundtree & Bitner, 2000, p.50). Their increasing presence in daily life reflects an ongoing trend of customers providing more of their own service (Bitner, Meuter & Ostrom, 2002, p.96). Examples of SSTs range from automated teller machines (ATMs) and telephone banking to grocery store self-scanning (2002, p.97). Their current presence across different businesses is indicative of consumers' comfort with using technology as opposed to resorting to direct human contact.

From a strictly functional perspective, the SST definition of Meuter et al. (2000) would include the financial A.I avatar envisioned in this paper, because such an A.I would enable customers to fulfill their needs and provide them a service without the need for direct employee involvement. In addition, from a more conceptual perspective, it is possible to argue that this SST definition overlaps with that of artificial intelligence itself. McCarthy and Hayes (1969, p.4) asserted that a machine possesses intelligence if it can solve specific types of problems requiring intelligence in humans, or if it survives in an intellectually demanding environment. In other words, an artificially intelligent machine can solve specific problems that require human intelligence, such as providing bank account information or calculating an individual's loan interest rate.

An SST that provides an intellectually complex service, such as engaging in conversation with a person and offering him or her relevant financial information, is, too, an artificially intelligent machine according to McCarthy and Hayes's definition. Therefore, these functional and conceptual overlaps suggest that the A.I conceived in this research can be considered – and analyzed – not only as an artificially intelligent tool but also as a rather advanced form of self-service technology.

2.2.1 Self-service technologies' benefits and strategic customer goals

The proliferation of SSTs has arisen for diverse reasons, which include cost reduction, an ambition to raise customer satisfaction, and a need for new delivery channels to reach new customer segments (Bitner, Meuter & Ostrom, 2002, p.97). Cost reduction comes chiefly through lower labor costs; in other words, the effective implementation of an SST implies that fewer employees are necessary for the performance of a given service, which then means lower wage expenses. Similarly, SSTs can lead to faster customer care, which in turn could increase both customer satisfaction and loyalty. As to reaching new customer segments, SSTs are in some cases introduced to create new channels to reach clients who might have previously been unreachable (2002, p.98). This is particularly true for web-based SSTs, which make it possible for companies to reach worldwide markets.

It is clear that an effective implementation of SSTs is correlated with important customer metrics, such as customer satisfaction, but it is also relevant to understand more precisely what the strategic goals behind their implementation are and, also, the means through which they can lead to positive financial results. In this regard, Bitner et al. (2002, p.99) clarified three customer goals related to SSTs: customer service, enabling direct transactions and education.

SSTs provide different forms of customer service, such as creating the possibility to get questions about accounts answered, to pay bills or even to track delivery times (2002, p.98). Other examples are interactive kiosks that enable customers to take a virtual tour of chosen stores and offer a detailed overview of the products provided by a retailer. A more straightforward, yet widely present, example is SSTs used by consumers when a service failure occurs, as when problems are reported through phone lines or email forms.

Another key customer goal of SSTs is enabling transactions (2002, p.99). In this case, SSTs allow consumers to order, purchase and exchange goods with a company without any direct interaction with firm employees. This goal is directly linked to both cost reduction for the company and higher customer satisfaction, as consumers can buy the goods they choose in a faster and more convenient manner.

The third primary customer goal of SST implementation is "education" (2002, p.99). More concretely, SSTs may be used to help consumers learn and train themselves. Examples of such use range from phone-based information lines to training videos. Bitner et al. (2002) mentioned an interesting example in this regard which, though relatively old, is still relevant: GE Medical Systems' adoption of SSTs to provide video and satellite TV-based training for users of its hospital equipment.

It thus follows that the A.I. conceived in this paper would theoretically fulfill all three SST customer goals mentioned hitherto. On one level, it would provide an interactive and rapid form of customer care, while enabling fast transactions, as it is arguably quicker to enunciate financial operations than it is to type them. Furthermore, it could also be relevant for educational purposes if it is used to record tutorials for new clients. I will thus use the terms A.I avatar and SST interchangeably in this thesis.

2.2.2 When consumers do not enjoy nor are willing to use SSTs

Having clarified the benefits of SSTs for both companies and customers, the next logical issue is to understand under which circumstances consumers would not be willing to use them. According to Bitner et al. (2002, p.100), customers dislike and avoid SSTs in three scenarios. Firstly, SSTs are met with a negative attitude by consumers when they, simply put, do not function. In the authors' research, the largest share of negative experiences with SSTs revolved around some level of failure to perform the service. Secondly, customers do not appreciate such tools when they are poorly designed; that is, even when SSTs work, they will not be preferred over traditional options if they are not user-friendly enough. As such, the fulfillment of the client's need is not enough to ensure continuous use of the SST if the experience itself is complicated or not pleasurable due to design issues. Thirdly, the authors found that customers do not like SSTs when they make a mistake themselves (2002, p.102). In other words, even when the failure of the service is the consumer's own fault, they partially blame the firm and may avoid the SST next time.

This last reason for consumers to dislike and even avoid SSTs is connected with the former two. Should the SST interface not be user-friendly enough, customers are bound either to not enjoy the experience or to make a mistake that leads to partial or complete failure to deliver the service. It is, therefore, essential to keep design and ease of use in mind to successfully implement SSTs. In the case described in this paper, I believe that the envisioned financial A.I. has the potential to avoid these issues because it would enable direct interaction with the consumer. Assuming that it functions in terms of understanding the customer's speech and succeeds in offering relevant information back, such an SST could be extremely user-friendly and efficient in comparison to usual SSTs. This is because directly discussing one's need or problem is much faster and more convenient than typing it in an email or selecting different options on a phone call.

2.3 SSTs – UTAUT model, trust, perceived competence and privacy concerns

Because no previous study has contemplated the dynamics involved in an A.I avatar providing financial advice, I had to resort to research in other fields. As we have established that the envisioned avatar is, conceptually speaking, an advanced SST, it was logical to refer to studies that addressed the contingencies for individuals to engage in and trust SSTs. Some authors did so through the lens of the unified theory of acceptance and use of technology (Venkatesh et al., 2012; Martins et al., 2014). Others relied more on constructs of trust and perceived competence (Kim et al., 2008; Wang & Benbasat, 2005). For the conceptual model of this thesis, I combined and adapted the insights from both and will argue in the following paragraphs why this was a sensible course of action.

The unified theory of acceptance and use of technology is a relatively recent theoretical model that attempts to explain critical factors and contingencies in the prediction of behavioral intention to use technology (Venkatesh et al., 2012, p.157). Although it was originally devised to be applied mostly in organizational settings, the model has since been effectively used to study a broad range of technologies in both organizational and non-organizational settings (2012, p.158). Its main differentiator from previous models such as the Technology Acceptance Model (Davis, 1989 in Martins et al., 2014, p.2) is the extent to which its predictions of behavioral intention have been accurate. In fact, in longitudinal field studies of employee technology acceptance, the UTAUT explained around 70 per cent of the variance in behavioral intention to use technology and about 50 per cent of the variance in technology use (Venkatesh, 2012, p.157). The UTAUT has four main constructs: performance expectancy, effort expectancy, social influence and facilitating conditions (2012, p.159). As such, the model proposes that a consumer's perception of the effort necessary to use the technology, combined with his or her expectation of its performance, impacts his or her behavioral intention to use it. The necessary conditions to use the new technology, that is, the facilitating conditions, and the opinions of others about the technology are also predicted to influence behavioral intention.

To illustrate these constructs better, I will briefly use them to analyze what would impact consumer likelihood to adopt, say, mobile banking. In such a case, performance expectancy would translate to how well the customer expects mobile banking to function. Logically, if he or she doubts that the SST will work, for whatever reasons (be they rational or not), it is less likely that mobile banking will be adopted. Similarly, effort expectancy, in this case, refers to how complicated the individual expects mobile banking to be. This construct is, of course, related to how user-friendly the SST is, as mentioned by Bitner et al. (2002). The other two constructs, social influence and facilitating conditions, would, in turn, refer to how positively relevant people in the consumer's life feel towards mobile banking and whether or not he or she has access to a cell phone and the internet.

In the case of the financial A.I envisioned here, the model would need to be adapted. For example, facilitating conditions may not be quite as necessary to predict behavioral intention in this case, because laptop and internet access are widespread in the developed world. Similarly, given the novelty of the SST, it is questionable whether people would have a strong opinion towards it at all, let alone one strong enough to affect an individual's behavioral intention through social influence.

Furthermore, although the A.I avatar would be an advanced SST, the dynamics of its relationship with a bank client are different. For usual technologies, it is a matter of whether an individual is willing to buy them; in the case of the A.I avatar, it is more relevant to understand how likely clients are to trust it and follow its financial advice. As opposed to a general new technology with one specific instrumental goal, the A.I avatar envisioned here would act as an advisor agent.

The key relevant mediators between the A.I avatar design and consumer behavioral intention to use it would, therefore, primarily be performance and effort expectancy. Another relevant mediator could be perceived risk, as suggested by Martins et al.'s (2014) adaptation of the UTAUT model. The authors argued that perceived risk consists of performance, financial, time, social and privacy risk. In the case of this research, these components of perceived risk could also be adapted and operationalized with questions asking, for example, whether the design of the A.I avatar makes the respondent perceive the SST as more likely to waste his or her time or to share his or her private information. A more comprehensive discussion of the conceptual model and the mediators' adaptation is provided in the conceptual framework chapter.

Venkatesh et al. (2012) and Martins et al. (2014) argued that individual behavioral intention to adopt technology is dependent upon several factors, such as performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivations, price value and habit. In other words, it matters how well individuals expect the new technology to perform and how complicated its adoption is perceived to be. Because the envisioned A.I avatar in this thesis would speak directly to the individual, its perceived ease of use is not a pertinent construct for analysis.

2.4 Virtual advisors – perceived trust, ability and privacy concerns

From the literature review that I conducted, two articles seemed particularly suitable for deeper analysis: one focusing on trust-based consumer decision making in e-commerce (Kim et al., 2008) and another addressing trust in and adoption of online recommendation agents (Wang & Benbasat, 2005). Wang and Benbasat (2005, p.72) investigated the validity of trust in online recommendation agents by testing an integrated trust and technology acceptance model. Their main result was that individuals anthropomorphize computerized agents and in doing so attribute human characteristics such as trustworthiness and benevolence to them. Although their study focuses on the context of e-commerce, the manner in which they examined the online recommendation agent fits the A.I avatar. Both non-human agents provide recommendations – albeit the avatar offers financial ones – and the likelihood of individuals following them is dependent on constructs such as trust and perceived ability.

Similarly to website users sharing data with e-commerce recommendation agents, bank clients dealing with A.I avatars will also face different levels of privacy concerns. In this regard, Kim et al. (2008, p.1) argued that a website user's behavioral intention is affected by his or her perceived trust and risk.

While the UTAUT model is not entirely applicable to the case of this research, it does shed light on constructs that are relevant to the adoption of new technology and can be seen as an extension of the mediators mentioned above. Whether a consumer is considering adopting “bio-pay” (Thomas, Aljazeera, 2017) or having financial advice given by an A.I avatar, trust is paramount, as is believing in the performance or ability of the SST. It is thus useful to assess what previous research has covered in terms of avatars' persuasion capabilities.

2.5 Avatars as persuasive agents and the importance of resemblance

Research on avatars and their potential persuasiveness is as interesting as it is diverse. For example, Hanus and Fox (2015, p.33) analyzed the effect of avatar customization on purchase intentions. They found that customers able to customize an avatar's appearance had a more positive attitude towards the product and presented higher purchase intentions than participants in the control group. Similarly, Suh, Kim and Suh (2011, p.711) found that the greater the resemblance between the avatar's and the individual's looks, the greater the likelihood of the user having positive attitudes of affection towards and connection with the avatar. Logically, this more positive outlook translates into a higher behavioral intention to use the avatar.

The insights from both studies implicitly suggest that anthropomorphism can increase the persuasiveness of an avatar, as it leads to higher levels of identification with the individual and, in doing so, enhances the avatar's credibility. However, before expanding on the importance of anthropomorphism and avatars for attitude and behavioral intention, it is pertinent to offer a more comprehensive definition of anthropomorphism.

2.6 Anthropomorphism – definition and a psychological perspective

Epley, Waytz, and Cacioppo (2007, p.864) defined anthropomorphism as the tendency to attribute human characteristics to non-human agents; such characteristics can range from motivations and behaviors to even emotions. The authors developed a theory that explains the conditions which render individuals more likely to engage in anthropomorphism. This likelihood is dependent on three key conditions: the accessibility and applicability of anthropocentric knowledge (elicited agent knowledge), the motivation to understand the behavior of others (effectance) and, finally, the inclination for social contact (sociability).

More specifically, elicited agent knowledge is a condition wherein the subject conducts anthropomorphism based on data from other agents. In other words, the condition implies that one can observe the behavior of others or make mental simulations and, with this input, imbue non-human agents with human characteristics and behaviors (2007, p.868).

Effectance refers to the necessity to interact with one's environment (White, 1959 in Epley, Waytz & Cacioppo, 2007, p.867). In the context of anthropomorphism, it involves the motivation to interact effectively with non-human agents and contributes to enhancing the individual's capability to explain complex stimuli. By attributing human characteristics to a non-human agent, one can create a narrative which makes sense of the environment, thereby reducing uncertainty. On a social level, it can be argued that anthropomorphism implies a need and desire to create social connections (Epley et al., 2007, p.867). It is a means to achieve this end and, in doing so, leads to some of the benefits that actual human interaction yields - which suggests, in other words, that people are more likely to resort to anthropomorphism when they feel a lack of social connection. However, it is not clear whether social connection would be greater between the user and a realistically human avatar or an anthropomorphized robot avatar. Neither is it evident whether different levels of anthropomorphizing lead to correspondingly different levels of trust, attitude and behavioral intention.

2.6.1 How anthropomorphism would affect the A.I avatar's trustworthiness and persuasion

In another article, Waytz, Cacioppo, and Epley (2010, p.6) highlighted three main consequences of anthropomorphizing to the extent of assuming another agent to possess a human-like mind. Initially, this would imply that the given agent is able to have a conscious experience (Gray, Gray & Wegner, 2007 in Waytz, Cacioppo & Epley, 2010) and therefore affords it the category of moral agent. Secondly, an agent that has a human-like mind can carry out intentional action and thus be held accountable for it. Finally, such an agent is capable of noticing and engaging with the observer and can thus exert influence on the observer.

Anthropomorphizing, in general, does not mean that an individual invariably perceives a non-human agent to possess a human-like mind; it would be wise to argue that such a case would represent an extreme case of anthropomorphizing. However, the A.I avatar proposed in this research would inherently fit this category as an advanced self-service technology, due to its ability to calculate and offer financial advice. Considering the consequences of anthropomorphism defended by Waytz, Cacioppo, and Epley (2010, p.6), it is logical to wonder whether the dehumanization level of the avatar's design would hinder the A.I avatar's ability to “influence the observer”.

In other words, if anthropomorphism can render an agent more influential, then perhaps a less anthropomorphic A.I avatar would be comparatively less persuasive and trustworthy than a more human-looking avatar. The literature on anthropomorphism does suggest that the dehumanization level of an avatar's design can be a relevant factor to consider as far as the influence it exerts is concerned. However, other bodies of knowledge involving design and even robotics offer a similar perspective. A relevant one to address is the uncanny valley and the impact of realism.

2.7 Why the design of the A.I avatar is expected to impact consumer attitude and behavioral intention – the uncanny valley and dehumanization perspective

One of the first researchers to study the effect of the design of robotic non-human agents on individual affinity towards them was Masahiro Mori (1970), a former robotics professor at the Tokyo Institute of Technology. He observed that, as robots became more human-like, they were perceived as more familiar, but only up until a certain point. After reaching a high level of humanness, the robot would evoke an unpleasant sensation in the viewer instead of familiarity – it would thus fall into the so-called "uncanny valley." As such, robots that look very human-like may, in fact, trigger discomfort in individuals (Seyama & Nagayama, 2007, p.337).

The uncanny valley theory has been investigated and partially confirmed by different researchers. For instance, MacDorman and Ishiguro (2006) argued that the level of humanness is one of the aspects determining the perceived familiarity, strangeness or eeriness of the robot. However, Seyama and Nagayama argued that an almost perfectly human appearance is a necessary but insufficient condition for the uncanny valley (2007, p.337). These authors found in their experiment that the uncanny valley effect occurs when abnormal features are present in the robot (2007, p.348). This abnormality takes the form of "unpleasant impressions of the faces of avatars in virtual reality or robots."

Moreover, some research has been done on specific effects of avatars on consumer behavior. Holzwarth, Janiszewski and Neumann (2006, p.19), for example, found that using an avatar sales agent entails higher customer satisfaction, a more positive attitude toward the product and greater purchase intention. In another experiment, the authors found that an attractive avatar is a more efficient sales agent at moderate levels of product involvement, while an expert avatar leads to more sales at high levels of product involvement. In their experiment, the different avatar conditions were reflected in their stereotypical appearance; the expert condition, for instance, showed an avatar of an older man wearing glasses, thus evoking the idea of experience.

The underlying idea of the research mentioned here is that the design of the A.I avatar can logically be expected to affect customer attitude and behavioral intention to use it. While previous authors have studied different conditions involving general robots (Saygin, Chaminade, Ishiguro, Driver & Frith, 2012) or avatars in e-commerce (Holzwarth et al., 2006), similar research in the context of an A.I avatar in banking has not, to the best of my knowledge, been conducted. Therefore, I believe that the experiment proposed here has the potential to address a gap in the literature. My focus is how the design of the A.I. financial avatar in terms of humanness and mechanization impacts customer behavioral intention to use its services and his or her attitude towards the bank itself.

3. Conceptual framework and hypotheses

In this section, I will illustrate the conceptual framework and describe the hypotheses.

3.1 Research question

The research question of this thesis is: how does the design of an A.I avatar regarding humanness and mechanization impact individual behavioral intention to follow its financial advice? While the main dependent variable in this inquiry is behavioral intention, I also assess how the dehumanization level of the A.I avatar affects attitude towards the bank that offers the A.I avatar and future behavioral intention to use the SST, as well as whether the key DV, that is, behavioral intention to follow the advice, is mediated by perceptions of trust, ability and privacy concerns, by adapting scales from Kim, Ferrin & Rao (2008) and Wang & Benbasat (2005).

3.2 Conceptual model

3.2.1 From a framework of technology adoption to recommendation agents

Considering that the A.I envisioned here would act as a financial advisor, it is intuitive that the relevant constructs affecting the relationship between the individual and the advisor are the same as those of the online recommendation agents studied by Kim, Ferrin & Rao (2008) and Wang & Benbasat (2005). Intuitively, when receiving advice, be it an online purchasing recommendation or financial advice, perceptions of trustworthiness, competence and privacy risk concerning the advising agent can be predicted to impact the likelihood of the individual following the advice.

Further, the mediators of this conceptual model can be seen as adaptations of the UTAUT model analyzed by Venkatesh et al. (2012) and Martins et al. (2014). The mediators perceived trust and ability, for example, can be seen as an adaptation of UTAUT's construct of performance expectancy. This can be argued because, in the context of the A.I avatar, expectations of its performance are intuitively rooted in how much trustworthiness and competence it evokes. The more trustworthy and capable the avatar seems, the more likely the individual is to see its advice in a positive light. Similarly, privacy concerns can be perceived as a dimension of the perceived risk construct added to the UTAUT by Martins et al. (2014).
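To make the mediation logic of this model concrete, below is a minimal sketch, not the thesis's actual analysis, of how an indirect effect of the dehumanization level on behavioral intention through perceived trust could be estimated with two regressions and a percentile bootstrap. All column names ("dehumanization", "mean_trust", "intention") and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df):
    # Path a: dehumanization -> mean_trust; path b: mean_trust -> intention,
    # controlling for dehumanization (simple regression-based mediation).
    a = smf.ols("mean_trust ~ dehumanization", data=df).fit().params["dehumanization"]
    b = smf.ols("intention ~ mean_trust + dehumanization", data=df).fit().params["mean_trust"]
    return a * b

def bootstrap_ci(df, n_boot=1000, seed=1):
    # Percentile bootstrap confidence interval for the indirect effect a*b.
    rng = np.random.default_rng(seed)
    idx = np.arange(len(df))
    effects = [indirect_effect(df.iloc[rng.choice(idx, size=len(df), replace=True)])
               for _ in range(n_boot)]
    return np.percentile(effects, [2.5, 97.5])

# Synthetic illustration data; the effect sizes here are made up.
rng = np.random.default_rng(0)
n = 210
data = pd.DataFrame({"dehumanization": rng.integers(0, 3, n).astype(float)})
data["mean_trust"] = 5.0 - 0.4 * data["dehumanization"] + rng.normal(0, 1, n)
data["intention"] = 2.0 + 0.5 * data["mean_trust"] + rng.normal(0, 1, n)
print(indirect_effect(data), bootstrap_ci(data))
```

A confidence interval excluding zero would indicate that perceived trust carries part of the effect of the avatar's design onto behavioral intention, which is exactly the mediating role hypothesized here.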

3.3 Hypotheses

This section introduces the hypotheses. The more comprehensive reasoning for each hypothesis can be found in the Literature Review.

3.3.1 Dehumanization level of the avatar on behavioral intention

H1: The dehumanization level of the avatar is negatively associated with individual behavioral intention to follow the avatar’s financial advice.

As more anthropomorphic agents can be expected to exert influence on observers (Waytz, Cacioppo and Epley, 2010), I theorize that the less human the A.I avatar looks, the less likely respondents will be to follow its financial advice. With the conceptual model in mind, the less human the avatar looks, the less trustworthiness (and arguably competence) it will evoke, and the lower the behavioral intention to follow the advice will be. Further, I expect this lower level of trustworthiness to be reflected in higher privacy concerns.

3.3.2 Dehumanization level of the avatar and expert condition on future behavioral intention to use the A.I avatar

H2a: The dehumanization level of the avatar is negatively associated with individual future behavioral intention to use it.

H2b: The expert condition is negatively associated with individual future behavioral intention to use it.

The intuition behind H2b is straightforward – the more one believes to know about a given area of expertise, the less likely he or she is to resort to an A.I advisor or SST for information he or she already knows.

3.3.3 Dehumanization level of the avatar on attitude towards the bank

H3: The dehumanization level of the avatar is negatively associated with individual attitude towards the bank.

From research on brand attitude and product satisfaction (Suh & Yi, 2006; Homburg & Giering, 2000) and intuition alike, it is known that feelings towards a product often translate, in different measures, into feelings towards the brand. Taking this into consideration, as well as the consequences of anthropomorphism highlighted above (Waytz, Cacioppo & Epley, 2010), I expect that the less human the avatar is, the less positive individual attitude towards the bank will be.

3.3.4 Dehumanization level of the avatar on Mean Trust, Mean Ability and Privacy Concerns

H4: The dehumanization level of the avatar is negatively associated with mean perceived trust.

H5: The dehumanization level of the avatar is negatively associated with mean perceived ability.

H6: The dehumanization level of the avatar is positively associated with mean privacy concerns.

3.3.5 Moderation of the expert condition

H7: The expert condition moderates the effect of the dehumanization level of the avatar on mean trust, mean ability and mean privacy concerns.

The idea of adding financial expert knowledge as a moderator was suggested by my supervisor. Its underlying goal is twofold: to determine whether or not financial knowledge would indeed moderate the effect of the IV on the mediators and dependent variables alike, and to act as a proxy for a later point in the consumer journey. By providing financial knowledge before offering financial advice, it is possible to emulate the situation of a client who, say, is looking for a mortgage and has researched it considerably.
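In practice, a moderation hypothesis like H7 is commonly tested as an interaction term in a two-way ANOVA. The sketch below is purely illustrative and is not the analysis script used in this thesis; the column names ("avatar", "condition", "mean_trust") and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic data standing in for the real responses.
rng = np.random.default_rng(7)
n = 210
df = pd.DataFrame({
    "avatar": rng.choice(["human", "humanoid", "robot"], n),
    "condition": rng.choice(["expert", "novice"], n),
    "mean_trust": rng.normal(4.5, 1.0, n),
})

# The C(avatar):C(condition) row of the ANOVA table tests the moderation (H7):
# a significant interaction means the avatar effect depends on expert knowledge.
model = smf.ols("mean_trust ~ C(avatar) * C(condition)", data=df).fit()
print(anova_lm(model, typ=2))
```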

3.3.6 Gender, Avatar and behavioral intention

H8: Gender positively influences perception of trust, ability and privacy concerns regarding the avatar.

Suh, Kim and Suh (2011, p.711) argued, based on their results, that “the more closely an avatar resembles its user, the more the user is likely to have positive attitudes toward the avatar”. Since the avatar used in this experiment depicts a woman, I expect female participants to perceive the avatar more positively than their male counterparts. This may lead to higher perceptions of trustworthiness and ability, as well as lower privacy concerns, which can then translate into higher behavioral intention to follow the financial advice and even to use the A.I avatar in the future.

4. Methodology and Data collection plan

The purpose of this research is to assess how the design of an A.I avatar regarding humanness and mechanization impacts consumer behavioral intention to follow its financial advice and consumer attitude towards the bank itself. Also, I examine whether individual financial knowledge acts as a moderator. For better comprehension, it is necessary to define the operationalization of all relevant constructs and to explain the underlying reasons for the choice of the experiment design and its specific logistics.

Thus, in this section, I would like to discuss more in depth not just how I plan to collect the data for the investigation, but also to expand on the operationalization of all relevant variables and explain the logistics of the experiment. In the first subsection, the reader can find a summary of the experiment flow outlining the different scenarios participants faced. Then, the operationalization of all relevant variables is discussed, followed by the data collection plan and analysis method.

4.1 Experiment flow

As soon as participants had gone past the consent form, they were randomly assigned to one of two conditions: expert or novice. In the former, they were provided with a text which operationalized financial knowledge, whereas in the latter they were shown a text with a neutral stimulus unrelated to either finance or avatars. Then, it was explained to them that, before the experiment began, they would be asked a couple of questions to measure their financial knowledge. This step was employed to determine whether or not those assigned to the expert condition did indeed know more about finance than those who were not.

Then, participants faced the second randomization: they were randomly assigned to one of three possible avatars: human, humanoid or robot. Along with a picture of the given avatar came a text briefly explaining that it is an advanced self-service technology to be implemented in banks in the medium term. It was also mentioned that participants would next be asked questions about their personal finances so that the avatar prototype on the server could make calculations and provide financial advice.

When the final personal financial question was answered, a quick loading animation would pop up with a text at the top saying that the avatar was calculating the financial advice. After this quick animation, a picture of the same avatar randomly assigned to each participant would emerge with one of two messages: that the participant should save 12 per cent of his or her income to buy a property, or that he or she should keep the property in the short term. Which of the two a respondent was advised depended on whether they had identified themselves as a home owner in a previous question. The actual figure of the advice was the same for all conditions, so as to ensure that differences in behavioral intention to follow it stemmed from differences in the design.

After receiving the advice, participants were asked to what extent they planned to follow the financial advice and to use the avatar's service in the future, and about their attitude towards banks that employed such an SST. Then, questions that measured the mediators were asked, followed by control variable questions and, finally, a manipulation check before the end.

All these chronological steps are summarized in Table 1. The specific way in which the expert condition and all other variables were operationalized is described in the next subsections.

Table 1: Experiment flow

1) Consent form
2) First randomization: expert condition / novice condition
3) Financial knowledge questions
4) Second randomization: human avatar / humanoid avatar / robot avatar
5) Personal financial questions
6) Advice is given: keep property / buy property
7) Dependent variables
8) Mediators
9) Control variables
10) Manipulation check
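For illustration, the sketch below mimics the 2 x 3 assignment and the advice branching just described. The actual experiment relied on the survey tool's built-in randomizer, so the function names and structure here are hypothetical.

```python
import random

EXPERT_CONDITIONS = ("expert", "novice")
AVATAR_CONDITIONS = ("human", "humanoid", "robot")
SAVINGS_RATE = 0.12  # the advised figure was identical across all conditions

def assign_conditions(rng):
    # 2 x 3 between-subjects assignment: expert/novice first, then avatar design.
    return rng.choice(EXPERT_CONDITIONS), rng.choice(AVATAR_CONDITIONS)

def advice(is_homeowner):
    # The advice branch depends only on self-identified home ownership, never
    # on the avatar shown, so the design remains the sole manipulated factor.
    if is_homeowner:
        return "Keep your property in the short term."
    return f"Save {SAVINGS_RATE:.0%} of your income to buy a property."

rng = random.Random(42)
print(assign_conditions(rng), advice(is_homeowner=False))
```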

4.1.1 Questionnaire order and sampling

Following the suggestion of my supervisor, I ordered the questionnaire so that the first questions asked were the ones that measured the dependent variables. These were followed by questions concerning the mediators, then by control variables and, lastly, the manipulation check. By doing so, I minimized the chance that respondents would be paying less attention when providing information for the dependent variables.

Because I conducted a 3 by 2 between-subjects experiment, 300 participants would be ideal as a rule of thumb (50 for each avatar condition times the number of scenarios imposed by the moderator). However, time constraints coupled with a low response rate made that target difficult to achieve, and I ended up obtaining only 211 valid responses (although 399 people began my experiment). A mathematician at my internship, Rhadnes, argued that the sample size is large enough for the scope of the experiment.
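As a rough cross-check of this rule of thumb (my own addition, not part of the thesis), a conventional power analysis for a one-way ANOVA across the six cells lands in a similar neighborhood, assuming a medium effect size:

```python
from statsmodels.stats.power import FTestAnovaPower

# Assumed medium effect size (Cohen's f = 0.25) across the 6 cells
# (3 avatar designs x 2 expert conditions); alpha and power are conventional.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,
    k_groups=6,
    alpha=0.05,
    power=0.80,
)
print(f"Total N for 80% power: {n_total:.0f}")  # about 215 under these assumptions
```

Under these (assumed) parameters, the 211 valid responses obtained are close to what such an analysis would require, although smaller effects would naturally demand larger samples.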

4.3 Independent variable

4.3.1 Dehumanization condition

The independent variable of this research is the design of the A.I financial avatar in terms of humanness and mechanization. As such, it involves three different concepts: artificial intelligence, avatar, and dehumanization. As previously mentioned in the theoretical framework, I employ here McCarthy and Hayes' (1969, p.4) definition of A.I, that is, a machine which can solve particular problems that also require intelligence from humans. Because the envisioned A.I financial avatar can engage in discussion with the consumer and provide him or her with relevant information and, ultimately, even advice, I believe that McCarthy and Hayes' definition is indeed applicable. As to the conceptualization of the avatar, I combine here Hanus and Fox's (2015, p.34) definition of an avatar as a “representation of a user in a virtual environment” with its original definition. The avatar is the pictorial representation of an agent from one environment (e.g., a human from the real world) in another environment (e.g., virtual reality).

The dehumanization level of the A.I financial avatar refers to how human (or robotic) its appearance is. I hypothesize that differences in the avatar's dehumanization level will, in fact, lead to different levels of familiarity, and thus to distinctive behavioral intentions to use such avatars, based on the theory proposed by Mori (1970) and the consequences of anthropomorphism (Waytz, Cacioppo & Epley, 2010). The author theorized that the “physical appearance of robots and virtual reality agents” will lead to different levels of familiarity according to how realistic their designs are, that is, how realistic they seem in terms of humanness (Seyama & Nagayama, 2007, p.338).

I adopt here an approach similar to that of Seyama and Nagayama (2007). In their research, the authors provided “frames of image sequences in which an artificial face was gradually morphed into a real face” (2007, p.339). They also measured respondents' impressions of these facial images and, by doing so, tested Mori's uncanny valley. In this thesis, I too intended to provide different versions of the same picture, but within a more restricted context. Instead of focusing solely on evaluating the predictability of the uncanny valley, I propose to investigate whether the dehumanization of the A.I. avatar impacts consumer attitude towards the bank that may offer it and his or her behavioral intention to follow the financial advice of the self-service technology.

To achieve that, I used three versions of the same photo of a woman to produce three different conditions – one where it is unaltered (human condition), one where it is slightly altered (humanoid condition) and one where it is entirely changed (robot condition). By doing so, I expected to control for the levels of dehumanization, such that the robotic avatar had the highest level and the human one, logically, the lowest. In the questionnaire, I made sure to inform respondents that what they faced was an example of an A.I avatar, independently of which condition they were randomly assigned to. As my Photoshop skills were limited, I asked the Dutch photographer Anna Theunissen to help me alter the photos according to the following criteria:

1) The background must be the same for all pictures.

2) The body language and facial angle of the avatar are the same in all pictures.

3) The facial proportions do not considerably change.

The underlying reason for these criteria is to make sure that the single aspect of the photos that does change is the level of dehumanization itself. By ensuring this, the expected causal factor is isolated, and the different conditions can be compared (Field & Hole, 2003, p.20). Below are the photos used in the experiment:

4.4 Dependent variables and Likert scales

The dependent variables of this study are consumer behavioral intention to use the A.I financial avatar and his or her attitude towards the bank itself, which is assumed to offer the advanced self-service technology. More concretely, the former assesses to what extent respondents would plan to use the devised advanced self-service technology, assuming it was available, based on the stimulus (independent variable). The latter evaluates how positively or negatively the respondent feels about the bank after being exposed to the stimulus. To measure both constructs, I employed the same methodological approach as Martins et al. (2014, p.5). Like the authors, I measured most items (dependent variables and mediators) using 7-point Likert scales, ranging from totally disagree (1) to totally agree (7).

4.4.1 Benefits of Likert scales

Field and Hole (2003, pp.45-46) outlined the benefits of employing Likert scales relative to other self-report measures. For instance, they argued that such scales offer respondents greater scope to express how they feel. Moreover, because Likert scales consist of a statement to which one can express varying degrees of agreement, they are relatively straightforward to understand and, thus, may stimulate more respondents to finish filling in questionnaires. On the flip side, the authors asserted that Likert scales might contribute to respondents remembering the answers they gave due to the limited number of choices available, which can lead to biases should one use these scales to measure changes in response over time. In this regard, the authors argued that visual analogue scales (VAS) are superior, since they prevent participants from knowing what value they have given, leaving them unable to remember their responses.

This issue, however, does not apply in the case of this thesis, for I do not intend to record any difference in responses to the same stimulus over time, but rather to investigate distinct one-off responses to various stimuli. The use of Likert scales is then preferred over that of VAS or more simplistic yes-no scales.
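As an aside, the sketch below illustrates, with entirely hypothetical data and item names, how 7-point Likert items of the kind used here are typically combined into a mean scale score and how a scale's reliability (the Cronbach's alpha referred to for the adapted scales below) is computed.

```python
import pandas as pd

def cronbach_alpha(items):
    # items: DataFrame with one column per Likert item, one row per respondent.
    # Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 7-point responses to three trust-like items (illustration only).
trust_items = pd.DataFrame({
    "unbiased_advice": [6, 5, 7, 4, 6, 5],
    "is_honest":       [6, 4, 7, 5, 6, 4],
    "has_integrity":   [7, 5, 6, 4, 6, 5],
})
mean_trust = trust_items.mean(axis=1)  # one mean scale score per respondent
print(round(cronbach_alpha(trust_items), 2), mean_trust.round(2).tolist())
```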

4.6 Mediators

As mentioned in the previous section, to decide upon the mediators I resorted to literature regarding consumer perception of sales agents and financial advice. Intuitively, the likelihood of an individual following financial advice (or any advice, for that matter) is associated with three main constructs: trust, competence, and privacy concerns. In other words, I envisioned that behavioral intention to follow financial advice would be mediated by how trustworthy and competent the advising agent is perceived to be, and also by the extent to which an individual faces privacy concerns while receiving the advice. Since this conceptual model had not been used before, I had to combine scales used by different researchers.

To operationalize trust, I adapted a scale from Wang and Benbasat (2005, p.100). In their study, the authors measured two dimensions of trust: integrity and benevolence. Although I initially considered using both operationalizations, choosing only one seemed more efficient, so as to minimize the time necessary to complete the experiment. In the context of understanding whether respondents were more likely to follow the financial advice given, the integrity aspect of trust seemed more relevant than how benevolent the avatar was perceived to be. This, combined with the reliability of the integrity scale in Wang and Benbasat's research (Cronbach's alpha = .75), led me to choose it. Table 2 shows in the left column how the scale was originally worded by Wang and Benbasat (2005) and, in the right column, how I adapted it for my research.

Table 2: Trust scale adaptation

Original wording (Wang & Benbasat, 2005) | Adapted wording
This virtual advisor provides unbiased product recommendations. | The A.I avatar provides unbiased financial advice.
This virtual advisor is honest. | The A.I avatar is honest.
I consider this virtual advisor to possess integrity. | I consider the A.I avatar to possess integrity.

To measure both perceived competence and privacy concerns, I adapted the scales from Kim, Ferrin and Rao (2008). In their paper, the authors investigated the extent to which risk and trust affected consumers' purchasing decisions in an e-commerce setting. Although the context of their research differs from mine, they measured constructs that are relevant to this paper. I would argue that trust and, specifically, privacy risk are also significant factors to consider when analyzing whether or not consumers would be willing to follow the financial advice provided by an A.I avatar. This relevance, combined with the reliability of the scales themselves (all of them had a Cronbach's alpha higher than .7; 2008, p.14), rendered them a good choice for this study. Tables 3 and 4 show how the scales were originally worded and how I adapted them for my experiment.

Table 3: Competence scale adaptation

Original wording (Kim, Ferrin & Rao, 2008) | Adapted wording
Overall, I think this Website provides useful information. | Overall, I think that the A.I avatar provides useful information.
This Website provides reliable information. | The A.I avatar provides reliable information.
This Website provides sufficient information when I try to make a transaction. | The A.I avatar provides sufficient information in its financial advice.

Table 4: Privacy concerns scale adaptation

Original wording (Kim, Ferrin & Rao, 2008) | Adapted wording
I am concerned that this Website is collecting too much personal information from me. | I am concerned that the A.I avatar is collecting too much personal information from me.
This Web vendor will use my personal information for other purposes without my authorization. | The A.I avatar will use my personal information for other purposes without my authorization.
I think this Web vendor shows concern for the privacy of its users. | I think this A.I avatar shows concern for the privacy of its users.

4.7 Proposed moderator – expert financial condition

Although it is clear that expert advice impacts individual decision making and behavioral intention (Meshi, Biele, Korn & Heekeren, 2012, p.1), no previous research has investigated how individual financial knowledge may moderate perceptions of the trustworthiness and ability of an A.I financial advisor. Indeed, operationalizing financial knowledge is a considerable challenge due to its broad nature. Given the many possible operationalizations and the limited time and resources available, my supervisor, Dr. Weihrauch, suggested I focus on rephrasing a short summary of the process of obtaining a mortgage in the Netherlands, which she had received from her real estate agent. The transcript randomly assigned to participants can be found in the appendix.

When tailoring this text, I had two main concerns: how specific the knowledge should be and minimizing the length of the text itself. As mentioned in subsection 4.1, to test whether participants assigned to the expert condition did know more, or were at least effectively primed, compared to the others, I created two short multiple-choice questions about the financial information provided in the expert condition, which all respondents had to answer. I expected that, overall, those assigned to the expert condition would answer these questions correctly more often than those assigned to the neutral condition.

4.8 Control variables

When analyzing the impact of moderating conditions on the likelihood of individuals adopting new technology, Venkatesh et al. (2012) cited both age and gender. With regard to the former, they argued that older consumers struggle more than younger ones to process new or complex information, which translates into lower adoption of new technology.



4.9 Summary of relevant variables

Name | Type | Operationalization
Dehumanization level | IV (independent variable) | Photos
Expert condition | Moderator | Short text providing concrete financial insights
Behavioral intention | DV (dependent variable) | Likert scale
Percentage | DV (dependent variable) | Integer
Suggested Difference | DV (dependent variable) | Integer
Mean future behavioral intention | DV (dependent variable) | Likert scale
Attitude towards the bank | DV (dependent variable) | Likert scale
Mean Trust | Mediator | Likert scale
Mean Ability | Mediator | Likert scale
Mean Privacy Concern | Mediator | Likert scale

4.10 Data Collection

I used an online questionnaire to collect the data; that is, respondents completed and returned online forms. This method has several advantages, such as providing access to unique populations (Garton, Haythornthwaite, & Wellman, 1999, in Wright, 2005) and time-efficiency compared to other methods (Bachmann & Elfrink, 1996; Garton et al., 2006, in Wright, 2005). Another advantage of online questionnaires is their comparatively lower cost (Couper, 2000; Ilieva et al., 2002, in Wright, 2005).

However, this method also has its drawbacks. For example, Lefever, Dal, and Matthiasdottir (2002, p.578) argued that online questionnaires have a lower participation rate than, say, pencil-and-paper surveys. Another potential problem relates to sampling: since online surveys ultimately gather self-reported data, the accuracy of the obtained demographic information can be questioned (Wright, 2005).

Despite these potential problems, the online questionnaire was the best data collection method for this research for two main reasons: firstly, it imposes almost no financial cost; secondly, it offers flexibility, enabling quick alterations and a minimal time gap between developing the questionnaire and sending it to respondents.

To gather the data, I resorted to people in my network, many of whom were kind enough to share it within their own networks. I asked friends, colleagues and former teachers. In addition, I reached out to social media groups specialized in survey sharing on both Facebook and Reddit. LinkedIn was also used but led to comparatively fewer responses. Also, I shared the experiment in the internal communication channels of the bank where I interned.

4.11 Method

4.11.1 ANOVA, mediation, and moderation

The main statistical tool that I will employ in the data analysis is the two-way analysis of variance (ANOVA). Field (2014, p.430) described ANOVA as a way of comparing the ratio of systematic variance to unsystematic variance in an experiment, and defined this ratio as the F-ratio (2014, p.231). More specifically, Brace, Kemp, and Snelgar (2016, p.190) argued that the F-test calculates the ratio of the variance explained by the model (variance due to the manipulation of the IV) to the variance not explained (error variance). As such, should the error variance be small relative to the variance explained by the model, the F statistic will be higher than 1; should that not be the case, the F statistic will be lower than 1. The F-test used in the ANOVA thus offers a way to assess how effectively a regression model can predict an outcome vis-à-vis the error within that model (Field, 2014, p.431).
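In standard notation (a generic restatement of the definitions above, not a formula quoted from the sources), the F-ratio is

$$F = \frac{MS_M}{MS_R} = \frac{SS_M / df_M}{SS_R / df_R},$$

where $SS_M$ and $df_M$ denote the sum of squares and degrees of freedom of the model (between-groups variance due to the manipulation) and $SS_R$ and $df_R$ those of the residuals (within-groups error variance).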

Given the experimental design, with participants randomly assigned to different conditions, and the measurement of the dependent variables through Likert scales, ANOVA is an adequate statistical method to use. By assessing the ratio of the between-groups variance to the within-groups variance (Field, 2014), I am able to infer whether the manipulation of the IV had a statistically significant effect, as predicted by the hypotheses. Another advantage of using ANOVA in the context of this experiment is that it can work as an intermediate step before conducting mediation analysis. That is, by running a two-way ANOVA of the IV and moderator on the mediator, it is possible to assess whether a mediation relationship might exist: if neither significant results nor correlations are found, then it is unlikely that the mediation worked as predicted.

Having said that, it is worth noting that mediation analysis is also part of this research, given the conceptual model. Hayes and Preacher (2010, p.2) defined mediation between two variables (e.g. X and Y) by a third (e.g. M) as a relationship in which a change in X leads to a change in M, which in turn causes a change in Y. Mediation can be assessed via the SPSS add-on PROCESS by Andrew F. Hayes.
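In standard simple-mediation notation (a generic restatement of the model, not equations taken from this thesis), the two regressions estimated are

$$M = i_1 + aX + e_1, \qquad Y = i_2 + c'X + bM + e_2,$$

where $ab$ is the indirect effect of $X$ on $Y$ through $M$, $c'$ is the direct effect, and the total effect decomposes as $c = c' + ab$. PROCESS estimates the indirect effect $ab$ together with a bootstrapped confidence interval.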

As to moderation, it will be investigated through two-way ANOVA. Should there be statistically significant interactions between variables, moderation can be inferred from plots of the estimated marginal means, with the DV on one axis, the IV on the other, and a separate line for each level of the expected moderator.
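As a minimal illustration of this step (a sketch in Python rather than the SPSS workflow actually used; the file name and the column names intention, avatar and expert are hypothetical), a two-way ANOVA with an interaction term could look as follows:

# A minimal sketch of the two-way ANOVA described above; the thesis analysis
# itself was run in SPSS, and all names here are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per participant: a mean 7-point Likert score plus the two conditions.
df = pd.read_csv("responses.csv")

# C() marks categorical factors; '*' expands to both main effects
# plus their interaction (avatar x expert condition).
model = ols("intention ~ C(avatar) * C(expert)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p per effect; a significant
                                        # interaction row points to moderation,
                                        # to be probed via marginal-means plots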

4.11.2 Methodological steps taken in the analysis

The first steps taken in managing the data were to check for missing or invalid values. The criteria I used for invalid values of Percentage are covered in the Appendix. Then, I analyzed the main descriptive statistics of the data and conducted reliability analyses for the relevant variables (i.e. future behavioral intention, trust, ability, privacy concerns). Once their reliability was attested, I generated mean variables to account for the three-item scales used to measure the mediators and future behavioral intention. The next steps revolved around checking for homoscedasticity of errors through Levene's test and ANOVA analysis.
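For the reliability step, the statistic computed was Cronbach's alpha. The following is a minimal sketch of that computation (in Python, for illustration only; the thesis used SPSS, and the column names below are hypothetical):

# Cronbach's alpha for a multi-item Likert scale, computed before
# averaging the items into a mean variable.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per scale item, one row per respondent."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Example: reliability of the adapted trust scale before averaging its items.
# alpha_trust = cronbach_alpha(df[["trust_1", "trust_2", "trust_3"]])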

4.12 Descriptive and Frequency statistics

In this section, the most relevant variables are summarized and discussed, and their descriptive and frequency statistics examined.

4.12.1 Randomization and control variables descriptive and frequency statistics

As described in the methodology, the experiment consisted of two randomizations: one assigning participants to either the financial knowledge condition (FK) or the neutral stimulus (NS), and another assigning them to one of three avatars (human, humanoid and robot). Of the 211 participants, 100 were randomly assigned to the FK and 111 to the NS. After being assigned to one of the two initial conditions, every participant had to answer two multiple-choice financial questions. There was a small proportional difference between the percentages of respondents from each condition who answered the first financial question correctly: 89 per cent for FK (n=89) and 82.29 per cent (n=92) for NS. However, being assigned to the FK condition resulted in a greater likelihood of answering the second financial question correctly. In fact, 65 per cent (n=65) of those assigned to the FK condition answered the second financial question correctly, whereas only 30.63 per cent (n=34) of respondents assigned to the NS did. These results are summarized in the two graphs below. To make the difference easier to grasp, I only identified which answers were correct and which were not; the full answer options can be found in the appendix.

Figure 1: Answers to the first financial question
[Bar chart of correct answer, wrong answer 1 and wrong answer 2 counts (0-100 scale) for the Neutral Stimulus and Financial Knowledge conditions.]

Figure 2: Answers to the second financial question
[Bar chart of correct answer, wrong answer 1 and wrong answer 2 counts (0-70 scale) for the Neutral Stimulus and Financial Knowledge conditions.]

While a majority of respondents assigned to the FK condition answered the second financial question correctly, only roughly one-third of those assigned to the NS did the same. However, this does not necessarily mean that the financial knowledge operationalization worked per se. Whether it did will be discussed further in this section.
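One way to check this difference formally (a sketch, not an analysis reported in this thesis; the counts are reconstructed from the percentages above) is a chi-square test of independence between assigned condition and correctness on the second question:

# Chi-square test of whether answering the second financial question correctly
# is associated with the assigned condition.
from scipy.stats import chi2_contingency

#          correct  incorrect
table = [[65, 35],   # Financial Knowledge (n=100)
         [34, 77]]   # Neutral Stimulus (n=111)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p suggests the priming
                                          # conditions differed as intended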

As to the second randomization, 32.7 per cent of respondents (n=69) were assigned to the human avatar, while 33.6 per cent (n=71) were assigned to the humanoid avatar and another 33.6 per cent (n=71) to the robotic avatar. As such, the assignment of respondents was random and roughly evenly spread.

The mean age of respondents was 29.12, and only 1.5 per cent of participants (n=3) claimed to be younger than 18. These data indicate that the vast majority of respondents claimed to be old enough to have a bank account, which makes them a fitting target audience for the experiment. In addition, 53.1 per cent of participants (n=112) were female and 46.4 per cent were male (n=98). The nationalities of respondents varied, but a few countries stood out. For instance, 34.9 per cent of participants identified as Brazilian, the largest single-country group in the research. The second largest group was from the Netherlands, accounting for 19 per cent of total responses, and the third largest from the United States (8.7 per cent). The remaining responses were spread across 38 other nationalities.

In terms of education level, the sample was not representative of the average population, as 53.6 per cent of respondents had at least a Bachelor's degree and almost a third (32.1 per cent) had a Master's degree. Since most respondents were well educated, it is possible to question whether their perceptions of the avatars and their behavioral intention to follow the avatars' advice can be generalized.

The sample characteristics are summarized in Table 1.
