
AI chauffeurs as the future of car transportation


Academic year: 2021



AI chauffeurs as the future of car transportation

Finding the interactions between social cognition, trust, and cultural context, in regards to the acceptance of fully-autonomous cars

Date: 14 January 2019


Master Thesis

AI chauffeurs as the future of car transportation

Finding the interactions between social cognition, trust, and cultural context, in regards to the acceptance of fully-autonomous cars

Author: Jan Bogdan Ryzynski

Supervisor: dr. J. van Doorn
2nd Supervisor: A. Schumacher

University of Groningen
Faculty of Economics and Business
MSc Marketing Management
14 January 2019

Oostendamstraat 155b
9073 NE Rotterdam
+48 660 647 050
j.b.ryzynski@student.rug.nl


Abstract

Fully-autonomous cars are thought to be the next big innovation in transportation. Most car manufacturers, as well as some of the biggest technology companies, are working on developing them and plan to release them in the near future. Despite that, research into fully-autonomous cars is quite underdeveloped, especially from a marketing perspective. This research contributes to the field by applying to it the warmth and competence dimensions of social cognition, and cultural context theory. It also identifies new ways of increasing trust in, and acceptance of, fully-autonomous cars. The results show that the social cognition focus of the artificial intelligence affects the acceptance of fully-autonomous cars. Furthermore, this relation is mediated by perceived trust and moderated by cultural context. The research also shows that low cultural context interacts more positively with competence than with warmth, and confirms that trust increases the acceptance of fully-autonomous cars. The effects and relations found in this study show the importance of the social and cultural aspects of fully-autonomous cars, from both a scientific and a managerial standpoint. They indicate that communication and artificial intelligence design should not be overlooked in the research and development of driving automation.


Table of Contents

1. Introduction

2. Literature review

2.1 Humans' perception of fully-autonomous cars

2.2 Warmth and competence as human judgement tools

2.3 Cultural context

2.4 Conceptual model

3. Methodology

3.1 Survey design and data collection

3.2 Warmth and competence manipulation check

3.3 Measurement of trust

3.4 Describing countries based on the cultural context

3.5 Measuring the acceptance of fully-autonomous cars

4. Results

4.1 Sample descriptives

4.2 Scales reliability

4.3 Manipulation check

4.4 The effect of cars' AI character focus on the acceptance of fully-autonomous cars

4.5 Mediation effect of trust

4.6 Moderation effect of cultural context

5. Discussion

5.1 Importance of trust for fully-autonomous cars research

5.2 Competence as a car AI communication focus

5.3 Meaning of cultural context for fully-autonomous cars research

5.4 General findings

5.5 Limitations and possible future research

6. References

7. Appendices

7.1 Appendix A: survey questions


1. Introduction

Automation has been the key concept of cars since the very beginning. Early on, instead of "car" people used the word "automobile", which literally translated would mean "self-driving" (Maurer et al. 2016). That shows that cars have always been about reaching automation. Further developments in the industry, like Henry Ford's production line, only show that this direction has been kept constant. But for the car industry, automation in production was not enough; it wanted to stay true to the basic meaning of "automobile". That is why as early as the 1990s the Mercedes S-class had the option of adaptive cruise control, which meant that it could manage the distance between cars on its own (Bloomberg 2018). This was one of the very first steps for consumer cars towards truly reaching the "self-driving" that the word "automobile" implies.

Since the 1990s, the automotive industry has come a long way in the development of fully autonomous driving. It has introduced many new features that support drivers and contributed to building whole levels of driving automation (Techemergence 2018). Nowadays some form of driving automation is implemented in most cars, for example self-parking, automatic braking in emergency situations, or automatically keeping the car in its lane (Autotrader 2016). As impressive as it is to see Tesla models steer on their own, change lanes, and even let drivers take their hands off the wheel for a while, this is still not the ultimate goal. The journey of driving automation actually ends quite a bit further, with fully-autonomous cars (Maurer et al. 2016): the ones that have neither a steering wheel nor pedals, and are instead fully steered by the car's AI, which acts as a sort of "virtual chauffeur". That concept, level 5 automation, seemed far-fetched not that long ago, but it is now no longer a question of "Will it happen?", but rather "When will it happen?" (Wired 2018).

As for the "When will it happen?", most of the companies working on fully self-driving cars predict that a reasonable date for the full launch of their vehicles and services is somewhere in the 2020s. Yet as much as the business world seems completely set on this direction of automotive industry development, concern about consumers' attitudes still looms in the air (Maurer et al. 2016).

Researchers have already been trying to understand the acceptance of fully-autonomous cars in order to develop the science of this technology from a more marketing-oriented perspective (Maurer et al. 2016). Thanks to their work, some advancements have already been made. For example, it has been shown that perceived trust is among the key dimensions that decline with the progression of car automation (Rödel et al. 2014). Some answers to this issue have already been found: Waytz et al. (2014) conducted research showing that the anthropomorphic behaviour and voice of a car AI can improve the trust that the passenger feels. Nevertheless, Maurer et al. (2016) state that research in the domain of fully-autonomous cars is still very much in its beginning stages, and that what has been found so far definitely requires further development in order to find and explain all of the underlying concepts.

From the research of Fiske et al. (2007) we know that the two dimensions people use to judge others are warmth and competence. They are used to determine whether someone is trustworthy or not, regarding both their intentions and their capability to perform. These dimensions are universal, with warmth focusing on people and competence on performance (Cuddy et al. 2008). This could be an interesting extension of the study into car AI anthropomorphism conducted by Waytz et al. (2014), as it also deals with the issue of gaining trust and explains how it is perceived by humans, which is valuable for the study of anthropomorphism in automation. Especially so if we consider that virtual agents are judged the same way as people are (Demeure et al. 2012), and that a rise in the level of automated social presence (ASP) can lead to higher perceived warmth and competence (Van Doorn et al. 2017). That is because higher levels of ASP create a notion of interacting with another social being. Warmth can be increased because automations with higher ASP can be judged as more sociable, approachable, and friendly, and competence can be increased because more human-like behaviour, movement, and communication make the automation feel more intelligent (Van Doorn et al. 2017). Thus, knowing that trust stems from social cognition (warmth, competence) (Fiske et al. 2007), and that these dimensions can be increased through the social presence of automation (Van Doorn et al. 2017), one can assume that these concepts will play a crucial role in research on the acceptance of fully-autonomous cars. Setting a specific character of the human-like car AI, based on the social cognition dimensions, could expand the understanding of trust issues in fully-autonomous cars.

Maurer et al. (2016) have also noticed that the acceptance of fully-autonomous cars and their social image varies depending on the country. They expect that culture might play a role in this case, but they merely pointed it out, and no further research regarding it in the case of fully-autonomous cars has been conducted. Interestingly, papers from Wills et al. (1991) and Van Everdingen and Waarts (2003) point out that cultural context plays an important role in technology acceptance, but also state that this particular theory of culture has not been researched deeply enough in that case, and call for more studies to be made. Furthermore, cultural context has been found to affect the building and perceiving of trust (Hall 1976), which is an important issue for fully-autonomous cars research (Rödel et al. 2014). More precisely, cultures prioritise different social cognition dimensions when judging trust (Doney et al. 1998). For some cultures, trustworthiness might stem from characteristics connected with warmth, for example friendliness, kindness, and care for relationships, and for others it might come from characteristics connected with competence, for example authority, certainty, and task orientation. That is because people want other social beings to act in line with what they expect and value, and with how they behave or want society to behave (Lee and Seppelt 2009), and this changes depending on the cultural context (Hall 1976). Moreover, depending on a person's culture, the perception of social cognition and the importance of its dimensions might also change (Cuddy et al. 2008). This makes the warmth and competence theories very important in terms of describing differences between cultures (Bandura 2002). Still, even though the connections between concepts such as trust, cultural context, technology acceptance, and social cognition do theoretically exist, the amount of research combining them is lacking at best. As Maurer et al. (2016) state, most of the forces influencing the technology acceptance of fully-autonomous cars remain unclear and not tested thoroughly enough.

To address these gaps, the following research questions will be tested:

RQ1: How does the social cognition focus of a fully-autonomous car AI affect trust and, subsequently, the acceptance of fully-autonomous cars?

RQ2: How does the cultural context affect the relation between social cognition focus of a fully-autonomous car AI and trust?

2. Literature review

2.1 Humans' perception of fully-autonomous cars

"Car automation" and "autonomous vehicle" are very broad terms under which one can find research about many different technologies. Researchers often use different terms revolving around autonomous cars without clearly specifying the level they are writing about (Maurer et al. 2016). That causes confusion and problems with linking research together, as "automated" refers to a machine merely being able to perform certain actions for a driver, while "autonomous" means that the car has the authority to act independently from a driver (Maurer et al. 2016). To avoid this lack of conceptual clarity, which can distort the research picture, this paper will use the following definition whenever referring to a (fully-)autonomous or self-driving car: "The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver." (SAE (J3016) Automation Levels 2016). This definition by the Society of Automotive Engineers refers to what they rate as level 5 driving automation, where the "steering wheel is optional and no human intervention is required".
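For readers less familiar with the SAE taxonomy, the levels that the definition above belongs to can be sketched as a simple enumeration. The code below is illustrative only and is not part of the thesis; the one-line descriptions paraphrase the commonly cited SAE J3016 level names rather than quote the standard.

```python
# Illustrative sketch: the SAE J3016 levels of driving automation as an
# enum. Descriptions are paraphrased, not quoted from the standard.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # the human driver does everything
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # car steers and accelerates, driver monitors
    CONDITIONAL_AUTOMATION = 3  # car monitors, driver takes over on request
    HIGH_AUTOMATION = 4         # no takeover needed within a limited domain
    FULL_AUTOMATION = 5         # "steering wheel optional", all conditions

def is_fully_autonomous(level: SAELevel) -> bool:
    """Only level 5 matches the definition used in this thesis."""
    return level == SAELevel.FULL_AUTOMATION
```

Under this framing, the driver-assistance features mentioned earlier sit around levels 1 to 3, while the "virtual chauffeur" discussed in this thesis is strictly level 5.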

In the case of fully-autonomous cars, the interplay between the driver (who, given the previous definition, could more accurately be referred to as a "passenger" or simply a "user") and the car AI is crucial. That is why, for the literature review, research about virtual agents is equally as important as research about car automation (Maurer et al. 2016). Thus, for the discussion of the current state of research into car automation, both articles and books about the different levels of automation and about virtual agents and artificial intelligence will be taken into consideration. Also, a big part of the current research concerns not fully-autonomous but semi-autonomous cars, although such findings often become even more relevant as automation progresses (Koo et al. 2015).

The interaction between humans and the car AI plays a big role in the acceptance of this technology (Maurer et al. 2016). The design of the car AI's behaviour should match the thought processes and expectations of humans, so the challenge of car automation is to understand the complex interactions between the actors (Koo et al. 2015). In this interaction, one of the keys to success is the appropriate feedback that the car gives to the driver (Stanton and Young 1998), because misunderstood feedback can actually lead to an increase in mistrust (Lee and Seppelt 2009). Research by Koo et al. (2015) found that in the case of semi-autonomous cars, the information given about car actions significantly changes how they are perceived by the driver. Surprisingly, a full message about why the car is performing an action and how it is doing it leads to lower trust levels than informing about only one of these two (either "why" or "how"). That can be explained by the fact that too much information "overloads" the mind of the driver. The lowest level of anxiety and the highest level of trust are reached when the car informs only about why it is performing a certain action.

Humans prefer polite machines that communicate in a soft and informal way, not a strict and formal one (Norman 1990), as this is the way people treat them, so they expect the same in return (Koo et al. 2015). The communication between car and human must be in line with the expectations of the driver, and there should be a constant interaction between them. That should decrease the amount of blame that automation receives, and thus increase trust, in situations and industries that deal with higher risk, with transportation certainly being one of them (Norman 1990). Proper information exchange, conducted in a casual way, is the key to establishing relationships and cooperation between machines and humans (Norman 1990). However, the specifics of that interaction still need to be refined in the case of fully-autonomous cars (Maurer et al. 2016). The social aspects of car automation are currently underdeveloped, and according to the authors, technical aspects are still the main focus of studies, even though the social ones seem to be at least equally important for the acceptance of this technology (Maurer et al. 2016). Without knowing the social constructs that govern the relationship between humans and fully-autonomous cars, the construction of proper communication between them is impossible.

Several studies thus point to trust as a key factor in the acceptance of an automated driving system. It is important to point out, though, that none of these studies actually tested whether a higher level of trust will lead to a higher acceptance of fully-autonomous cars. To better understand this relationship, one must refer to literature that does not directly study fully-autonomous cars. There it can be found that trust is important for explaining human interaction with automation, because trust mediates relationships between people, and as we respond socially to technology, this mediation should also hold in the case of machines (Lee and Seppelt 2009). That is in line with the statement of Demeure et al. (2012) that people judge machines as they do humans, and with the findings of Nowak and Biocca (2003) that humans ascribe human-like behaviours to all entities, machines and AIs included. According to Lee and Seppelt (2009), if people do not trust an automation, they reject it. This trust often stems from features like the interface and other design aspects that might not be directly connected to the actual performance of the machine, and these are even more important in the case of autonomous cars (Wei et al. 2013).

At a certain level of automation, a machine starts to be truly perceived as an agent (Lee and Seppelt 2009). In that situation, communication and proper understanding become even more important. There might even occur something that can be described as an "etiquette" of behaviours between humans and machines or artificial intelligence (Parasuraman and Miller 2004). It can affect trust positively if the machine's etiquette matches the user's, but if it does not, it can lead to a decrease in trust. In other words, machine behaviour has to be in line with the user's expectations and norms (Parasuraman and Miller 2004), which also leads to the virtual agent being rated as more believable (Demeure et al. 2012). That has important implications for fully-autonomous cars and their AI, especially considering that the lack of a visual image of a virtual agent can actually be beneficial when it behaves in line with the user's expectations, as the consumer then creates a perfect image of the virtual agent in their head (Nowak and Biocca 2003). As the AI of a fully-autonomous car is presented purely by voice (hence no image of the virtual agent), behaving in line with the user's expectations and meeting this so-called "etiquette" between machines and humans can be of even greater importance than in the case of other automations.

While some research has found consumers to be open towards fully-autonomous cars, others have found the opposite. For example, Maurer et al. (2016) found that being just a passenger in your own car still is not really accepted as a concept. The traditional driver perspective, where one manually controls the car, is still the main vision of driving. On the other hand, some polls found that consumers are open towards autonomous cars (Continental 2013). Younger drivers especially think that the act of "manual driving" is not needed and that they can spend this time better (Deloitte 2011). More to the point, across different research it seems that people are often open to the idea of fully-autonomous cars, but are still not willing to actually accept them. It appears as though the concept is interesting to consumers as long as it is not implemented (Maurer et al. 2016).

Research results on the acceptance of fully-autonomous cars vary, but so do the countries where the studies have been conducted. As the structure of the questions is not the same, it is impossible to actually judge whether culture has an effect, so this is something that needs to be researched further (Maurer et al. 2016). These differences between results could also be due to social cognition: if the questions or descriptions of situations were phrased differently, they might have led respondents to perceive the autonomous cars or their AIs in contrasting ways. That is because, as noted before, humans judge machines the same way they judge people (Demeure et al. 2012), and a person's opinion of someone changes with their perceived levels of warmth and competence, which vary depending on descriptions and situations (Fiske et al. 2007).

Authors and summary of findings:

Demeure et al. (2012): A virtual agent is judged as more believable when it acts and communicates in line with what the user expects. Humans also show signs of judging media and machines the same way they do other people.

Nowak and Biocca (2003): The authors write about how people perceive virtual agents. They show that humans ascribe human-like behaviours to all entities, but that it helps when the physical image is actually less anthropomorphic, because then people build in their minds a vision that perfectly suits their perception, especially when the entity already behaves in line with their expectations. It is possible for humans to have social responses to virtual agents.

Van Doorn et al. (2017): The authors propose the concept of automated social presence (ASP) and theorise that higher levels of ASP will lead to higher scores of competence and warmth, which makes for a better service evaluation.

Rödel et al. (2014): In their research they found that the main dimensions that decrease when the car automation level is increased are trust, perceived control, and fun.

Maurer et al. (2016): Social aspects are at least as important as technical ones for autonomous car research. The success of car automation is mostly based on human-machine interaction, so its design is a key factor. The authors found that alignment with users' needs and expectations is crucial for how they perceive a fully-autonomous car. They also pointed out that trust is of great importance. Currently the acceptance of fully-autonomous cars is relatively low, but results vary across countries. Many interactions between social aspects and the acceptance of fully-autonomous cars are still unknown, even though it is already established that the behaviour of the AI should be in line with humans' expectations.

Waytz et al. (2014): Increasing the human-like features of an autonomous car AI makes it more trustworthy and liked. Their analysis confirmed that anthropomorphism of an AI mediates the relationship between car automation and perceived trust.

Wei et al. (2013): The user interface in fully-autonomous cars is even more important than in regular ones. A lack of good communication between the car and a passenger leads to panic.

Koo et al. (2015): The autonomous car's character should be designed based on human behaviour. Understanding the interactions between a machine and a human is key for successful automation. Feedback is crucial for autonomous driving, but it depends on context. In semi-autonomous driving, communication has a significant effect on a driver's attitude towards the car. In their study, the highest level of trust was reached when the car provided only the information about why it was performing an action.

Norman (1990): Polite, soft, and informal communication by technology suits humans better than a strict one.

Lee and Seppelt (2009): People accept automation that they trust and reject automation that they mistrust. Trust might have as strong a mediation effect for the automation-human relationship as it has for a human-human one. Trust in automation sometimes depends on features that are not directly connected to its capability, for example the design of the interface. Matching the behaviours and expectations of a user is crucial for trust in automation, especially when a virtual agent is involved.

Table 1. Automation literature

2.2 Warmth and competence as human judgement tools

Social cognition, which is used to judge and perceive others, is measured by many different traits in research. Depending on the author, these traits are then categorized under dimensions with various names (Cuddy et al. 2008). For example, Wojciszke et al. (1998) used traits like fair, generous, helpful, honest, righteous, sincere, tolerant, and understanding, which they put under the dimension of "morality". Over time, the research has converged more and more on the concepts and names used for social cognition (Cuddy et al. 2008). Nowadays there is a consensus that there are two universal dimensions of social cognition that cover all of the traits and the descriptions of the other dimensions used in different research: warmth and competence (Fiske et al. 2007). According to the authors, how people perceive others can be fully explained by these factors. That finds confirmation in the work of Kervyn et al. (2010), who also name warmth and competence as the two main dimensions of social cognition among the others that have been proposed. Fiske et al. (2007) concluded that the warmth dimension is used to evaluate what someone plans to do, in other words their "intention", whereas competence judges how likely someone is to actually act on that "intention". Traits used to describe warmth are morality, trustworthiness, sincerity, kindness, and friendliness. Traits that describe competence are efficacy, skill, creativity, confidence, and intelligence (Cuddy et al. 2008). Using just two dimensions to describe an issue as complex as social cognition seems like quite a simplification, but in fact it is enough. According to research findings, as much as 90% of the variance in social traits can be explained by just two factors, which can be boiled down to warmth and competence (Abele and Wojciszke 2007).
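The idea that two latent factors can capture most of the variance in many observed trait ratings can be illustrated with a small simulation. This is a hypothetical sketch, not the analysis from the cited work: the trait names follow Cuddy et al. (2008), the ratings are synthetic, and PCA on a correlation matrix stands in for the factor-analytic methods used in that literature.

```python
# Hypothetical illustration: synthetic trait ratings generated from two
# latent dimensions (warmth, competence), then checked with PCA to see
# how much variance the first two components capture.
import numpy as np

rng = np.random.default_rng(0)
n = 200
warmth = rng.normal(size=n)        # latent warmth score per "person"
competence = rng.normal(size=n)    # latent competence score per "person"

# Each observed trait loads mainly on one latent dimension, plus noise.
traits = [
    warmth + 0.2 * rng.normal(size=n),       # friendly
    warmth + 0.2 * rng.normal(size=n),       # kind
    warmth + 0.2 * rng.normal(size=n),       # sincere
    competence + 0.2 * rng.normal(size=n),   # skilled
    competence + 0.2 * rng.normal(size=n),   # intelligent
    competence + 0.2 * rng.normal(size=n),   # confident
]
X = np.column_stack(traits)

# PCA via eigenvalues of the trait correlation matrix (descending order).
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
print(f"share of variance in first two components: {explained[:2].sum():.2f}")
```

With this data-generating process, the first two components capture the bulk of the variance; the point is only that a two-dimensional structure is detectable in many-trait ratings, not that real rating data behave this cleanly.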

Virtual agents are largely judged the same way as humans are (Nowak and Biocca 2003), although some also state that certain emotions that are acceptable in interactions between humans are not acceptable when a person interacts with a virtual agent (Demeure et al. 2012). So, some small differences in judgement might actually occur, and with regard to the warmth and competence of robots, they might be caused by differences in the level of automated social presence (Van Doorn et al. 2017). Automated social presence (ASP) measures how much a person feels socially connected to a robot; it is the degree to which an automated being is perceived by a human as another social entity. High levels of ASP induce higher perceived warmth and competence, which helps to boost service outcomes (Van Doorn et al. 2017). That is because the more "human" the robot feels, the more its social cues are readable and appreciated by real humans. Thus, automation with high levels of ASP is perceived to be more sociable, friendly, intelligent, and capable, which in turn increases its perceived warmth and competence (Van Doorn et al. 2017). These higher levels of warmth and competence are also correlated with how believable the virtual agent is, but the current research cannot answer whether this is just a correlation or an actual causal relation (Demeure et al. 2012). However, it has been shown that making a virtual agent behave more like a human leads to higher likeability and trust (Bergmann et al. 2012). Additionally, if any social being (even beyond humans) is perceived as cooperative, it can be judged as warm, which leads to trust (Aaker et al. 2012).

Some researchers point to an interesting compensation effect between the social cognition dimensions. Specifically, when people compare two entities, the one that scores higher on competence automatically loads lower on warmth, and vice versa (Kervyn et al. 2010). Interestingly, it only occurs when comparing two different beings, and only with warmth and competence, although some authors, like Cuddy et al. (2005), have found this effect even without group comparison. Perceived warmth also decreases more easily than competence, and when it does, it is harder to build it back up (Bergmann et al. 2012). That could be because warmth is generally judged more quickly and, according to some authors, has "primacy" over competence (Fiske et al. 2007). Another explanation is that people believe traits connected to warmth are under a person's control, while competence is not always: intentions perceived as "good" create a feeling of warmth, while competence is often connected to skilfulness and performance (Demeure et al. 2012). Manipulating the "friendliness" of communication is easier than manipulating actual performance and skill. That opens the possibility of someone deceiving another person for gain (for example, pretending to "care about others" just to receive gratification), while a small "mishap" in competence might happen to anyone and does not have to change their core personality traits (Fiske et al. 2007).

Higher perceived warmth and competence also improve automated service outcomes (Van Doorn et al. 2017). Given that the social cognition dimensions are in play when judging artificial beings (Demeure et al. 2012), and that their perceived levels can grow as the social presence of the automation increases (Van Doorn et al. 2017), warmth and competence seem to be important aspects of technology acceptance. This holds especially for fully-autonomous cars, as Rödel et al. (2014) stated that trust decreases as driving automation progresses, and Maurer et al. (2016) concluded that trust seems to be the main barrier to the acceptance of fully-autonomous cars. Thus, any theoretical concept that helps in understanding the creation of trust is very helpful for establishing relationships and theories regarding fully-autonomous cars. In terms of social cognition and its use in the design of automation interfaces, it is important to remember that, theoretically, a compensation effect might occur between warmth and competence (Kervyn et al. 2010), which means that it is best to focus the AI on just one of the dimensions. This leads to the question of which social cognition dimension should be the focus of the fully-autonomous car AI. Theoretically, warmth should be more closely connected to trust than competence is. The theoretical primacy of warmth over competence occurs because people first want to know someone's intentions, so they base their trust first and foremost on warmth (Fiske et al. 2007). In theory, if a being has good intentions (warm) but is not competent enough to act on its plans (not competent), it would still be judged as relatively trustworthy (Fiske et al. 2007). It is also theorised that communication between machines and people should be informal and soft rather than strict and autocratic (Norman 1990), as that improves the relationship, which leads to more trust. So generally, automations are expected to be polite and friendly (Koo et al. 2015), which better suits the "caring" message of the warmth dimension (Würtz 2005). Theoretically, trust and warmth might even be heavily correlated (Fiske et al. 2007). Based on these theories, I propose the following hypotheses:

H1: A warmth-focused AI (vs. a competence-focused one) leads to a higher acceptance of fully-autonomous cars.

H1.2: Trust mediates the effect of the AI character focus on the acceptance of fully-autonomous cars.
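The mediation logic behind H1.2 can be sketched numerically. This is a hypothetical illustration with simulated data, not the thesis's survey analysis: variable names and effect sizes are invented, and it shows only the classic OLS decomposition of a total effect into direct and indirect parts.

```python
# Hypothetical illustration with simulated data (not the thesis's survey):
# X = AI character focus (0 = competence, 1 = warmth), M = trust,
# Y = acceptance. Classic regression decomposition: total = direct + indirect.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.integers(0, 2, size=n).astype(float)             # manipulated focus
trust = 0.5 * x + rng.normal(scale=0.8, size=n)          # M depends on X
accept = 0.7 * trust + 0.1 * x + rng.normal(scale=0.8, size=n)  # Y mostly via M

def ols(y, *predictors):
    """Return OLS coefficients: intercept first, then one per predictor."""
    A = np.column_stack([np.ones_like(y), *predictors])
    return np.linalg.lstsq(A, y, rcond=None)[0]

c = ols(accept, x)[1]               # total effect of X on Y
a = ols(trust, x)[1]                # effect of X on the mediator
cp, b = ols(accept, x, trust)[1:]   # direct effect c' and mediator effect b

print(f"total c = {c:.2f}, direct c' = {cp:.2f}, indirect a*b = {a * b:.2f}")
```

For OLS fitted on the same sample, the identity c = c' + a*b holds exactly, so the indirect path through trust is the gap between the total and direct effects. An actual mediation test would additionally require a significance test of a*b, for example via bootstrapping, which is omitted here.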

Social cognition judgements have also been compared across different groups and settings (including country comparison) (Kervyn et al. 2010). Research in that regard is relatively underdeveloped, though, as it has not been conducted in many different situations or with different social entities, nor has it properly dived into the cultural underpinnings. To better understand how culture could influence social cognition, one has to understand the ideas behind culture, such as its context.

Authors and summary of findings:

Fiske et al. (2007): The authors point out that the two main dimensions people use to judge others are warmth and competence. They are used to determine trust. Warmth is used to judge someone's intentions, the perceived "friendliness". Competence is used to judge someone's ability to act, the perceived "authority".

Kervyn et al. (2010): When a comparison is made between two groups, the one that loads higher on competence is perceived as less warm, and vice versa. The authors noticed this compensation effect taking place only when groups are compared directly, and only with the warmth and competence dimensions.

Cuddy et al. (2008): Depending on ethnicity, the perception of social cognition might change. Warmth and competence are universal dimensions, but their perceived importance and levels differ depending on the society. Warmth focuses on others, and competence on personal gains and performance. These dimensions are negatively correlated when judging others.

Aaker et al. (2012): Warmth is felt when someone is cooperative; that includes not only humans but all social entities.

Bergmann et al. (2012): Judgements of warmth are made quickly. Perceived warmth goes down more easily, and is harder to rebuild, than competence.

Bandura (2002): Social cognition theory is very important for differences between cultures. Even though cultures mix with each other more and more because of globalisation and digitalization, there is still no transnational culture.


2.3 Cultural context

Culture is a dynamic and diverse concept that changes depending on place and society (Bandura 2002). Furthermore, growing globalisation and digitalization lead to people interacting with others from different cultures on an everyday basis. This means that all cultural theories and analyses are even more important than before, as it is key to understand the differences that occur between cultures when they interact with each other (Bandura 2002). This plays a part not only in personal communication, but also in marketing and business in general, as culture impacts these aspects as well (Kim et al. 1998). To understand culture and how it shapes society, authors have proposed different concepts and models. Two of the most prolific ones are “cultural context” proposed by Hall (1976), and “cultural dimensions” proposed by Hofstede (1984). Although both of them are often cited and have a strong position in the research world, they also receive some criticism, especially for being outdated in the current global and digital world (Würtz 2005). Because of that, some cultural theories that are less strict and more interchangeable between countries have been proposed, for example by Morley and Robins (1995). Still, some authors, like Bandura (2002), argue that even though cultures currently mix and interact with each other more and more, especially because of digitalization, there is no “transnational” culture. Würtz (2005) also states that so far there hasn't been any convincing empirical evidence to stop relying on the dimensions created by Hall (1976) and Hofstede (1984). What is more, research actually shows that culture changes communication around the digital world in the way described by these “old” cultural theories (Würtz 2005). There are also some empirical studies which found that the cultural context theory proposed by Hall (1976) still determines the differences between countries in a more general way (Kim et al. 1998). Their findings support the validity of this concept, for example in regards to the USA, China, and Korea. So, even according to more recent research, the differences between cultures proposed by Hall (1976) still occur, even though his theory has faced some criticism.


dimensions” proposed by Hofstede (1984).

According to Hall (1976), cultural context is responsible for differences in how people interact with each other and form the society. It mainly affects the communication part of interpersonal interactions, but it extends to other aspects of people's lifestyle and perception as well. The whole premise is based on the fact that cultures have different traditions, history, and societal structures, which all create a “context” for people's actions (Würtz 2005). Hall (1976) proposed a differentiation between low cultural context countries and high cultural context ones, and he associated certain characteristics with both of them. People from low cultural context cultures communicate in a direct and explicit way. They are individualistic (Wendi et al. 1999), performance oriented, and less bound by traditions (Kim et al. 1998). On the other hand, high cultural context relies heavily on relationships, society, and traditions. For people from these cultures, the cultural context itself is much more important than for others (Hall 1976). They use more social cues in communication, and focus on close, personal relationships (Würtz 2005).

It is important to remember that although countries are referred to as “high cultural context” or “low cultural context” ones, they do not exclusively fall under one of the dimensions (Richardson and Smith 2007). Cultural context is a continuum scale, and although certain countries score more towards one of the scale's ends (high/low context), they will often not be exactly equal to the ones that score close to them. However, countries can actually cluster together depending on where they are on the cultural context scale, especially in the case of research results (Van Everdingen and Waarts 2003). So, even though cultural context has to be thought of as a continuum, researchers still refer to countries as high or low cultural context ones, as this refers to the general direction of behaviour that they display.

Although the main papers describing cultural context hint at its relevance for technology acceptance, little research has dealt with it directly. Wills et al. (1991) proposed that cultural context does influence innovation adaptation, but they haven't empirically researched the direct effect. One significant and complex study that used cultural context for technology acceptance was conducted by Van Everdingen and Waarts (2003). They found that companies from low cultural context countries are more willing to implement innovations, but as their study dealt with companies and not individuals, the need for more studies into that aspect of cultural context remains. Even the authors themselves have stated that future research should look into different scenarios.


2005), clear statements and logic are valued instead (Hall 1976). People from low cultural context countries are usually more individualistic (Wendi et al. 1999), less likely to conform (Kim et al. 1998), and they value the personal strength of a person (Hall 1976). Trust is based on concrete statements like written contracts, and in business the context and personal knowledge of another person is less relevant (Keegan 1989). People from low cultural context countries excel at adaptation, because they are not held down by their traditions (Hall 1976). That is why they do not lose integrity because of technology, and are more innovative when facing new situations (Kim et al. 1998). That is also why Van Everdingen and Waarts (2003) found that companies from low cultural context countries are more willing to implement innovations than the ones from high cultural context countries. In low cultural context, design is focused on being practical, and not so much on emotions (Würtz 2005). That is why goal completion and efficiency are key to success, especially in business and service interactions (Mattila 1999). The communication can even be impersonal, as long as it leads to a quicker and better performance (Riddle 1992). Examples of countries that can be described as low cultural context ones are Germany, Switzerland, and Sweden (Würtz 2005).

Differences in culture can change the way people perceive others, and so affect their social cognition (Cuddy et al. 2008). Especially in the case of cross-country comparison, people change their perceived levels of warmth and competence (Kervyn et al. 2010). Theoretically, that is because people have preferences for warmth or competence, and judge others based on that, meaning that they baseline their cognition on their country's culture (Kervyn et al. 2010). Because of these changes in warmth and competence, cultural context should in theory affect the perception of trust, as social cognition is used to judge it (Fiske et al. 2007). In line with that, Doney et al. (1998) theorised that depending on culture, people build trust differently. In the case of cultures that can be defined as having low cultural context, acquiring trust tends to rely more on the competence dimension of social cognition than on warmth (Doney et al. 1998). That is, in theory, because personal relationships, good will, and care for others, which all make up warmth, are less relevant compared to individual skill, efficiency, and performance orientation, which contribute to competence (Hall 1976). Also, a straightforward, task oriented communication style, which fits the competence dimension perfectly, is in theory preferred in a low cultural context (Würtz 2005). That is why I propose the following hypothesis:

H2.1: Low cultural context interaction with a competence focused car AI has a more positive effect on perceived trust, than an interaction with a warmth focused one.


In the case of high cultural context, there are a lot of social cues that build the “context” of behaviour and communication, which is understandable only to those belonging to the same culture (Hall 1976). Thus relationships, tradition, and society as a whole are more important. Communication is neither direct nor straightforward (Gudykunst et al. 1996); there are a lot of underlying implications that give “hints” about the full meaning (Richardson and Smith 2007). Communication makes more use of physical aspects like gestures and posture, but also timing, intonation, and even timed silence can be used to convey a message (Würtz 2005), so verbal aspects seem to be less relevant (Hall 1976). Showing good will, remaining trustworthy, and caring for others is highly important, as in high cultural context people are very much group oriented (Kim et al. 1998). The individual is less important compared to the collective. That is why high cultural context societies can generally be described as conformist: group gain and well-being prevail over personal beliefs and needs (Würtz 2005). There is a “beehive” mentality; personal decisions and actions are supposed to serve the society. That stems from the big relevance of relationships and the hierarchy of the society (Kim et al. 1998), which translates to the business world as well; for example, services are very much concentrated on each and every person, and on a close relationship with them (Riddle 1992). That causes people from high cultural context countries to have higher expectations when it comes to service encounters (Mattila 1999). They want a highly personal experience, and the process is as important as, if not more important than, the outcome.

The high relevance of personal relationships leads to more trust being shown, but it also hinders early technology acceptance, as it is harder for people to adopt a technology when the whole society is still not using it (Van Everdingen and Waarts 2003). This is because technology in high cultural context can hinder the societal integrity (Hall 1976), which means that early adopters are rare, as people are reluctant to start new things (Kim et al. 1998). On the other hand, once the society approves a certain idea or innovation, it spreads really quickly and everyone adopts it, as there is a “beehive mentality”. Also, people from high cultural context countries show more creativity when using already accepted technologies and systems, compared to those from low cultural context cultures (Kim et al. 1998). Still, they have difficulties dealing with new situations and find them mentally taxing. Countries that adhere to the high cultural context are, for example, Spain, Italy (Van Everdingen and Waarts 2003), and China (Würtz 2005).


address it, a proper communication between social entities has to be established, putting emphasis on relationships, friendliness, care, and personal contact in high cultural context communication (Hall 1976). Relying on the warmth dimension also translates into the business world of high cultural context, as it is theorised to have primacy over competence in service encounters, leading to more personal and trustworthy relations (Riddle 1992). All of this has further meaning for interactions between AI and people: to establish trust between automation and human, a machine has to act in accordance with the expectations and values of a user, especially in the case of a virtual agent (Lee and Seppelt 2009). For people from high cultural context countries, these values are based around characteristics of warmth (Hall 1976). Meeting them should have far reaching implications for building trust, as in high cultural context unity is theorised to help in establishing and transferring trust (Doney et al. 1998), since people have a “group mentality” (Kim et al. 1998). All of the above leads to the proposed hypothesis:

H2.2: High cultural context interaction with a warmth focused car AI has a more positive effect on perceived trust, than an interaction with a competence focused one.

Even though in this research it is hypothesised that the two AI character types, based on the social cognition dimensions, work better or worse depending on the cultural context, one of them might still perform better overall, meaning that one combination of cultural context and AI character focus might lead to the highest acceptance of fully-autonomous cars. Based on the literature, pointing to the best combination is not straightforward. Fiske et al. (2007) state that trust is closely related to the warmth dimension of social cognition, and that there is an overall primacy of warmth over competence. This higher trust should lead to a higher acceptance of fully-autonomous cars (Lee and Seppelt 2009). Theoretically, warm communication between humans and automation should lead to better cooperation and be overall more suitable for their relationship (Norman 1990), as people prefer robots that are easy to cooperate with and want to interact with users, rather than ones that act independently and focus purely on performance (Maurer et al. 2016). These preferred robot characteristics pointed out by Maurer et al. (2016) fit the warmth dimension of social cognition, especially the “cooperation” part (Aaker et al. 2012). Then again, I predict that the warmth dimension will combine better with high cultural context than with low cultural context (Doney et al. 1998), so this interaction has to be taken into account.


low cultural context countries are more open to new concepts, and accept new technologies quicker (Van Everdingen and Waarts 2003). Technological development doesn't disrupt their cultural context (Hall 1976), and as they are more task and efficiency oriented, they appreciate automation more (Mattila 1999). Then again, low cultural context theoretically combines best with the competence social cognition dimension, which, theoretically and as I hypothesised, should be less successful in reaching the acceptance of fully-autonomous cars.

One may then wonder whether the highest acceptance of fully-autonomous cars wouldn't be reached with a warmth focused car AI being used by a person from a low cultural context country, as these dimensions of social cognition and cultural context, respectively, seem best suited for automation acceptance. The problem in that case is that, according to some authors, there is a compensation effect between warmth and competence (Kervyn et al. 2010), meaning that once one dimension is perceived as higher, the other is usually automatically perceived as lower (Cuddy et al. 2005). This is not always the case, but if it were to hold for fully-autonomous cars, then a warmth focused car AI wouldn't perform well in a low cultural context, as it would be perceived as having low competence, which is the prevailing social cognition dimension in low cultural context societies (Würtz 2005). Thus it is important to match the user's expectations of technology behaviour, so to choose one of the combinations that complement each other, and not to combine them as one pleases (Demeure et al. 2012).

That is why I hypothesise that, in the end, the most successful combination of car AI and user cultural context is a competence focused AI in a low cultural context user environment. This combination will lead to the highest acceptance of fully-autonomous cars out of all the hypothesised possibilities. I predict that culture will prevail, and that people from low cultural context countries are simply much more open to new technologies, as they don't need much societal approval, which will be a big barrier for high cultural context countries, especially in the case of such an impactful technology as fully-autonomous cars (Kim et al. 1998). Thus I propose the following hypothesis:

H3: The configuration of a competence focused car AI and low cultural context will lead to the highest acceptance of fully-autonomous cars out of all of the hypothesised combinations.

Authors Summary of findings


society functions. Low cultural context countries rely on direct, explicit communication, and focus more on the individual person, efficiency, and the present time. Whereas in high cultural context, everything is based on socio-historical traditions and relationships with others. High cultural context communication uses many social cues and is more person oriented. Also, low cultural context countries are better at adopting new technologies, as these are not disruptive for their cultural context.

Mattila (1999) People from countries that fall under high cultural context are more people-oriented than the ones from low cultural context, who instead value efficiency and goal completion more, even if that means an impersonal delivery. Also, a correct emotional response to the interaction in services is crucial, but in cross-cultural situations the probability of misjudgement is very high, which leads to worse service ratings.

Wills et al. (1991) Cultural context impacts innovation acceptance.

Van Everdingen and Waarts (2003) Companies from low cultural context countries are more willing to adopt an innovation than companies from high cultural context countries.

Doney et al. (1998) Cultures build trust on different bases: some base it on perceived warmth and some on perceived competence.

Würtz (2005) There is no convincing evidence that cultural context theory is no longer valid in current times. Actually, even recent research findings show that this theory is still applicable, and people behave in a way that is predicted by cultural context theory. Relationships play a big role in high cultural context; low cultural context countries are rather individualistic. Preferred communication in low cultural context is efficient, practical, and rational. For high cultural context it should be focused on creating a relationship and centred around a person. These differences do take place in interface design.

Kim et al. (1998) Understanding cultural context helps with conducting proper


mentality). In low cultural context, people are focused on themselves and their closest surroundings. They are better at new situations, adapting to them quicker than people from high cultural context countries, which on the other hand are more creative with the innovations that are already accepted by the society. Their research supports the validity of cultural context theory. According to their findings, how people deal with what's “new” makes for one of the most significant differences between cultures.

Richardson and Smith (2007) Cultural context is a continuum scale; one can't state that a country is purely high or low context. All countries possess some level of characteristics connected to both ends of the scale. It is just that certain behaviours from one end of the scale generally tend to prevail over the others, which makes for the generalization of calling a country a high/low cultural context one.

Table 3. Cultural context literature

2.4 Conceptual model

Based on the findings and theories of the analysed papers, and further directions for research pointed out in them, I have designed the corresponding conceptual model:

Figure 1. Conceptual model

Findings laid down in Tables 1, 2 and 3 indicate that the acceptance of fully-autonomous cars depends on trust, as acceptance decreases with the increase of car automation (Rödel et al. 2014), and trust is pointed out as a key issue to overcome (Maurer et al. 2016). This trust is judged based on social cognition (competence and warmth), so the perceived warmth and competence of someone leads to the level of trustworthiness of that person (Fiske et al. 2007). What is more, this relation translates from humans to artificial beings, as they seem to be judged in the same way (Demeure et al. 2012). It has already been found that in regard to cars with some degree of automation, the level of anthropomorphism changes the perceived trust (the higher it is, the more trustworthy the car seems), but the effect of a different social cognition focus of the car AI in this situation hasn't been researched yet (Waytz et al. 2014).

Research shows that cultural context can influence technology acceptance (Wills et al. 1991), but it hasn't been tested empirically on a customer level, nor at all in regard to fully-autonomous cars (Van Everdingen and Waarts 2003). Looking at Table 1, it can be noticed that cultural context can change the way people perceive warmth and competence in regards to trust, so it could possibly change the effect that might occur between them (Doney et al. 1998), leading to a possible moderation effect. This conceptual model, which is based on the literature review, was used to analyse the possible relations between concepts, which led to all of the previously proposed hypotheses that will be tested in this research paper.

3. Methodology

3.1 Survey design and data collection

In order to provide answers to the questions about whether the social cognition focus of the car AI, depending on the cultural context, affects trust in fully-autonomous cars and subsequently their acceptance, a survey was created using the online software Qualtrics. Through it, all the data necessary for researching the conceptual model relations and checking the hypotheses was collected. To facilitate the two different social cognition character foci of the fully-autonomous car AI, two scenarios were described in the survey, each of them prepared in a way that best shows the character of its AI.


order to make sure that any perceived differences are due to the only changing aspect in the scenarios: the communication between the AI and the passenger (what the AI says). This changed from being warmth focused in one scenario to competence focused in the second scenario. The manipulation was designed based on many examples of characteristics and behaviours that contribute to either the warmth or the competence dimension, all of which were found in the reviewed literature (Cuddy et al. 2008; Fiske et al. 2007; Bergmann et al. 2012). Differentiating the AIs' character focus purely by changing the way they communicate also best facilitates the measurement of cultural context as a moderator, as this cultural theory is largely based exactly on communication differences (Hall 1976). Thus, in theory, this survey design suits the collection of the data needed for the relations theorised in the conceptual model (Figure 1). The full description of the scenarios can be found in the Appendix.

As a between-subjects design was used, two variations of the survey were needed: one with a scenario showing the warmth focused AI, and a second one showing the competence focused AI. Apart from the descriptions of the scenarios, both surveys were the same, meaning that all of the questions remained constant and the same number of them was asked. The questions were taken from developed and verified multi-item scales and answered on a seven point Likert scale. The exception to this were some demographic questions, such as nationality, gender, age, and education level. The scales were randomized each time the survey was filled in, and each participant was randomly shown only one of the survey types, although an even split between how often each was presented was maintained. The survey was available online and distributed through social media, both via direct messages as well as by posting it on various pages which accumulate people from different countries. Additionally, the survey was distributed in a printed format by approaching random people on the university campus. The distribution was conducted in a way that allowed reaching respondents from various countries which score differently on the cultural context scale, thus allowing for a comparison between cultural context dimensions. Overall, 190 responses were collected during the research, although after cleaning the data from the unfinished or otherwise defective surveys, 144 responses remained, which were further analysed.

3.2 Warmth and competence manipulation check


samples. Also, the questions were tested across different social entities depending on the research, so the scale is generally not restricted by the object of the study. The only problem could be that the scale was created for testing how humans are perceived, and in this research social cognition is used for the AI, an artificial being. This, though, shouldn't in theory pose any issues, as numerous studies show that machines, media, and artificial agents are judged the same way that humans are (Demeure et al. 2012).

The questions regarding the warmth dimension asked how much the car AI can be perceived as warm, nice, friendly, and sincere. The competence dimension was judged based on characteristics such as being competent, confident, skilful, and capable. The questions were answered on a seven point Likert scale, ranging from 1 = Not at all, to 7 = Extremely. To find out how the entity is judged on the social cognition dimensions using this scale, one has to calculate the mean scores for each of the question groups (warmth and competence). That also allows establishing whether the social cognition manipulation was successful in the case of this study.
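As a small illustration of this scoring step, the sketch below computes a respondent's mean score per dimension. The item names follow the scale described above, but the ratings are invented for illustration and are not data from this study.

```python
# Illustrative sketch (not the thesis's analysis code) of the manipulation
# check: each social cognition dimension score is the mean of its 1-7
# Likert items.

def dimension_score(responses, items):
    """Mean of a respondent's ratings on the items of one dimension."""
    return sum(responses[item] for item in items) / len(items)

WARMTH_ITEMS = ["warm", "nice", "friendly", "sincere"]
COMPETENCE_ITEMS = ["competent", "confident", "skilful", "capable"]

# One hypothetical respondent rating the car AI:
respondent = {
    "warm": 6, "nice": 5, "friendly": 6, "sincere": 4,
    "competent": 3, "confident": 4, "skilful": 3, "capable": 4,
}

warmth = dimension_score(respondent, WARMTH_ITEMS)          # 5.25
competence = dimension_score(respondent, COMPETENCE_ITEMS)  # 3.5
```

Comparing the two means per condition is what indicates whether the warmth or competence manipulation came through as intended.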

3.3 Measurement of trust

To judge trust (the mediator), the scale from Helldin et al. (2013) was used. In their research, they used this scale to measure the impact of displaying automation uncertainty on trust during automated driving. Their scale is a version of the Jian et al. (2000) trust in automation scale, customised specifically for car automation. The reliability of the Helldin et al. (2013) version was proven by them, and since this research is about fully-autonomous cars, their take on the Jian et al. (2000) scale was used for this survey. The Helldin et al. (2013) version is more fitting than its predecessor, as its questions reflect the nature of car automation, while still building on the solid foundation of the work of Jian et al. (2000). Nevertheless, both the original scale from Jian et al. (2000) and its version from Helldin et al. (2013) are statistically proven and used in research.


3.4 Describing countries based on the cultural context

To measure the moderator, the scale was adapted from Van Everdingen and Waarts (2003), which ranks countries based on a cultural context score. These scores are fixed values on a continuum scale going from 1 = the lowest cultural context, to 16 = the highest cultural context, with 8 being a midpoint where a country can be described as neither a low cultural context nor a high cultural context one. The ranking was created by combining the findings of two studies (Morden, 1999; Kotabe and Helsen, 2001) that precisely ranked countries based on their cultural context. Instead of just theorising about which country might be called a high cultural context one and which a low cultural context one, they created a continuum scale on which countries are ranked based on their score. Considering that, as pointed out by Richardson and Smith (2007), cultural context is continuous and not just fully high or fully low, this approach to ranking countries is a more valid way of assessing cultural context than just stating whether a country has a high or low cultural context. Using the ranking from Van Everdingen and Waarts (2003) also allows for a shorter and more manageable survey for the respondents. Normally, measuring a complex issue such as culture requires a lot of questions, but in this case all that is needed is a question about where the respondent is from, and the cultural context is then acquired based on the scale. Of course, the question might arise whether, in this highly globalised and mobile world, the simple question about the origin of a person even holds any cultural value, but it has actually been proven that this concept is still valid (Würtz 2005).
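The lookup described above can be sketched as a simple mapping from a respondent's country to a context score and group. The scores below are placeholders chosen only to illustrate the 1-16 continuum with its midpoint at 8; they are not the actual values from the Van Everdingen and Waarts (2003) ranking.

```python
# Hedged sketch: country -> cultural context score on the 1-16 continuum.
# The numbers are PLACEHOLDERS, not the published ranking values.
CONTEXT_SCORES = {
    "Germany": 2,   # placeholder: towards the low-context end
    "Spain": 13,    # placeholder: towards the high-context end
    "Poland": 8,    # placeholder: the balanced midpoint
}

def context_group(country):
    """Classify a country as low / high / neither, as in the thesis."""
    score = CONTEXT_SCORES[country]
    if score < 8:
        return "low"
    if score > 8:
        return "high"
    return "neither low nor high"
```

With such a table, a single nationality question is enough to assign each respondent a moderator value, which is exactly the survey-length advantage noted above.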


characteristics from both ends of the scale and vary on some level from each other (Richardson and Smith 2007).

3.5 Measuring the acceptance of fully-autonomous cars

The acceptance of fully-autonomous cars is measured using the scale from Davis and Venkatesh (2004). It uses ten questions split among three groups: two questions for (A) Intention to Use, and four each for (B) Perceived Usefulness and (C) Perceived Ease of Use. Respondents are presented with statements and asked to give their opinion on them. Examples of the questions are "Assuming I had access to [x], I intend to use it.", "Using [x] would increase my productivity.", and "I find [x] easy to use.". The full list of the questions can be found in the Appendix. All of the questions are answered using a seven point Likert scale, with 1 = Strongly disagree, and 7 = Strongly agree.
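To make the structure of the ten items concrete, the sketch below aggregates them per construct and overall. The thesis does not spell out the aggregation step, so averaging per construct and taking the grand mean is an assumption made here for illustration, and all ratings are invented.

```python
# Hedged sketch of aggregating the Davis and Venkatesh (2004) items:
# 2 items for (A), 4 for (B), 4 for (C); ratings are hypothetical 1-7 values.
ITEMS = {
    "intention_to_use": [6, 5],             # (A) 2 items
    "perceived_usefulness": [5, 6, 4, 5],   # (B) 4 items
    "perceived_ease_of_use": [4, 4, 5, 4],  # (C) 4 items
}

# Mean per construct:
construct_means = {name: sum(v) / len(v) for name, v in ITEMS.items()}

# Overall acceptance as the grand mean of all ten item ratings
# (an assumption, not a rule stated in the thesis):
all_ratings = [r for v in ITEMS.values() for r in v]
acceptance = sum(all_ratings) / len(all_ratings)  # 4.8 for these ratings
```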

The Davis and Venkatesh (2004) scale is based on the very popular and well tested TAM model (Davis 1989), but they refer to their version as the “new dynamic model”. The core remains the same, but their idea was to change it so that measurements taken in the early stages of a technology don't only work in the short term, but also have strong predictive power in the later stages of the innovation. This means that the “new dynamic model” was created to facilitate innovations in their early stages of development, so as to have predictive power even before a working prototype is available. Davis and Venkatesh (2004) have proven that measurements taken using their scale are reliable and valid. That is particularly important in this case, as Davis (1989) stated that when using very subjective measurements, such as the ones used in the Davis and Venkatesh (2004) scale, proper measurement instruments are key to properly conducting research. The scale has to be pretested many times and then additionally proven empirically, to make sure of its validity. This is also true in this case, as the scale has been used in other studies that supported its psychometric properties and validity.


survey of this research, so it furthers its validity.

4. Results

4.1 Sample descriptives

Overall, 190 respondents took part in the survey, but after cleaning the data from responses with missing values, the final sample included 144 respondents. The cleaning was done in order to maintain the integrity of the study, as data was missing not only in the case of support questions, such as age or education, but also in the case of the main scales used for this study, such as trust or acceptance. There were also a couple of cases where the online program used for the survey (Qualtrics) presented respondents with a mixture of questions from both possible study situations (car A and car B). The reason for this remained undetermined, but these responses also had to be deleted.

The final analysed data contained 65 respondents who participated in the survey with car A and 79 with car B. Across the whole study, the split between males and females was basically even. Slightly over half of the sample was in the 18-24 age group. The largest part of the sample had a Bachelor's degree as their highest finished education, closely followed by a Master's degree. Almost half of the sample is from countries recognized by Van Everdingen and Waarts (2003) as being in the middle of the cultural context scale, in which case neither low nor high cultural context characteristics prevail over the other. The rest of the sample was split almost evenly between countries that can be described by either predominantly high or predominantly low cultural context characteristics. Cultural context is a continuous scale, so the context grows as the score increases, but the properties of either low or high cultural context can prevail over the other. That is why countries scoring below 8 are often referred to as low cultural context ones, and the ones above 8 are referred to as high cultural context ones. At a score of 8, a country's context is so balanced that it can't really be referred to as having a low or high cultural context.


Age
18-24                     82   56.9%
25-34                     30   20.8%
35-44                     27   18.8%
45-54                      3    2.1%
55-64                      1    0.7%
65-74                      1    0.7%

Education (highest obtained)
Lower than high school     5    3.5%
High school graduate      22   15.3%
Bachelor's degree         64   44.4%
Master's degree           53   36.8%

Cultural context
Low                       37   25.7%
Neither low nor high      69   47.9%
High                      38   26.4%

Table 4. Demographics

4.2 Scales reliability

All of the scales, apart from the cultural context, are multi-item, so their reliability had to be analysed for this study. The analysis showed that the social cognition scale for warmth needed to be adjusted, as one question posed problems for its reliability. The Cronbach's Alpha of the warmth factor containing all four questions (as stated in the scale from Cuddy et al. 2008) was sufficient at 0.735, but deleting one question (Please rate how much car [x] can be perceived as sincere?) increased it to 0.818. This question was also cross-loading on both warmth and competence, and in the rotated matrix it actually loaded more strongly on the competence component (0.293 and 0.538 respectively). Additionally, its communality was 0.375, below the required cut-off point of 0.4. All of these reasons led to the exclusion of this question from the warmth factor, reducing the warmth scale from four items to three.

In the case of the competence factor, the Cronbach's Alpha was 0.675, with no question deletion leading to any increase, and no cross-loadings or communalities below 0.4. Thus no changes to the competence scale were needed.
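The reliability figures above follow the standard Cronbach's Alpha formula, and the question-deletion check mirrors the usual "alpha if item deleted" diagnostic. A minimal pure-Python sketch of both (function names are illustrative, not from the thesis):

```python
def _pop_var(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)


def cronbach_alpha(items):
    """Cronbach's Alpha for a multi-item scale.

    `items` is a list of item-score columns (one list per item,
    rows = respondents). Population variances are used; the ratio
    is identical as long as the same estimator is applied to both
    item and total-score variances.
    """
    k = len(items)
    n = len(items[0])
    totals = [sum(col[r] for col in items) for r in range(n)]
    item_var_sum = sum(_pop_var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / _pop_var(totals))


def alpha_if_item_deleted(items):
    """Alpha of the scale after dropping each item in turn,
    i.e. the check used to justify removing the warmth question."""
    return [cronbach_alpha(items[:i] + items[i + 1:])
            for i in range(len(items))]
```

For perfectly correlated items the function returns 1.0, and for items whose covariances cancel out it returns 0, matching the usual interpretation of the coefficient.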

Factor       Mean    Standard deviation  Cronbach's Alpha  F-statistic  Significance  Number of items
Warmth       4.9410  1.23619             0.818             9.864        0.000         3
Competence   5.0388  0.94502             0.675             11.772       0.000         4

Table 5. Warmth and competence factors


After the reliability analysis, both the trust and technology acceptance scales were found to be valid for this study. All of the items that were theorised to belong to them were retained, and both scales reached high reliability.

The trust scale had a Cronbach's Alpha of 0.846, with only a small possible increase to 0.869 after deleting one question (What is your opinion on the following statement: I understand how the system of car [x] works – its goals, actions and output.). This question was not deleted, since the increase in Cronbach's Alpha is not big enough to justify adjusting the scale, especially as there were no problems with the question's cross-loadings or communality. Technology acceptance, or fully-autonomous car acceptance in this case, had a Cronbach's Alpha of 0.912, with only a marginal possible increase to 0.918 after deleting one question (What is your opinion on the following statement: Interacting with car [x] would NOT require a lot of my mental effort.). The scale is already very reliable, so again no changes were made, as the marginal increase of an already very high Cronbach's Alpha does not justify altering the scale.

In the end, both the trust and technology acceptance scales were used as taken from Helldin et al. (2013) and Davis and Venkatesh (2004) respectively, and all of their predicted questions were used to create the new factors.

Factor       Mean    Standard deviation  Cronbach's Alpha  F-statistic  Significance  Number of items
Trust        4.5061  1.17044             0.846             16.738       0.000         7
Acceptance   4.9035  1.21617             0.912             14.094       0.000         10

Table 6. Trust and acceptance factors

4.3 Manipulation check


It was found that the difference between groups is indeed significant for perceived competence, with F = 6.811 and p = 0.010 (with the significance cut-off point at p < 0.05), but the difference in perceived warmth is not significant, with F = 2.675 and p = 0.104. In fact, for the warmth factor, p was not even close to being below 0.05, and was still above the more lenient cut-off point of p < 0.1.

Overall, the manipulation check was partially successful: competence was successfully manipulated between cars A and B, as the mean was higher for car B (competence focused) and this difference was significant (p < 0.05). Unfortunately, the warmth manipulation was not successful, as the difference in warmth means between cars A and B is not significant (p > 0.05). With warmth not differing significantly, it cannot be further used as a factor to compare cars A and B. The cars can still be compared on other dimensions, but this has implications for some of the proposed hypotheses.

Factor             Mean    Standard deviation  F-statistic  Significance
Warmth car A       5.1256  1.20042             2.675        0.104
Warmth car B       4.7890  1.25198
Competence car A   4.8167  0.93637             6.811        0.010
Competence car B   5.2215  0.91811

Table 7. Manipulation check
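The F-statistics reported for the manipulation check come from one-way ANOVAs comparing the two experimental groups. A minimal pure-Python sketch of that computation (the function name is illustrative, not from the thesis):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups
    (each a list of individual scores).

    F = (SS_between / df_between) / (SS_within / df_within)
    """
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)          # number of groups
    n = len(all_scores)      # total number of observations

    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With two groups, as in the car A vs. car B comparison, this F equals the square of the corresponding independent-samples t statistic.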

H1: Warmth focused AI (vs. competence) leads to a higher acceptance of fully-autonomous cars.

The assumption behind H1 was that the AI of car A would show a significantly higher level of warmth, and car B a significantly higher level of competence, and that a higher level of warmth would matter more for the acceptance of fully-autonomous cars than a higher level of competence. As warmth does not differ significantly between the cars, it cannot be used as a measurement, or as the reason why one car's AI leads to higher acceptance of fully-autonomous cars than the other's. However, since cars A and B still differ in the design of their AI, and differ significantly on dimensions such as competence, it can still be tested whether the design of car A's AI leads to higher acceptance of fully-autonomous cars than that of car B; warmth is simply not taken into account.
