
Aversion Towards Algorithms in

Autonomous Cars Related To User

Expertise

Ruben Kuik

S2929805

MSc. Technology and Operations Management Faculty of Economics and Business

University of Groningen


Abstract

With the number of fatal road traffic accidents still significant and rapid technological development offering new opportunities, demand for self-driving cars is growing. However, academic consensus regarding the acceptance of autonomous driving has not yet been reached. This study adds to the existing body of literature by combining the fields of technology acceptance and algorithm aversion. A survey is conducted amongst experts in the field of autonomous driving, and their individual user acceptance is compared to that of non-experts. Moreover, the effects of different complexity indicators, such as visual demands and weather conditions, are discussed. Expertise shows a significant correlation with user acceptance, but only for specific driving scenarios. Furthermore, the complexity indicators cause a wide variation in acceptance, and a moderation effect is observed in several cases. Finally, applications of the gained knowledge to the fields of autonomous driving and algorithm aversion are discussed.


Table of contents

Abstract

1. Introduction

2. Theory
2.1. Acceptance of autonomous driving
2.1.1. User acceptance
2.1.2. Legislation
2.1.3. Ethics
2.2. Algorithm aversion
2.2.1. General theory of algorithm aversion
2.2.2. Application of algorithm aversion

3. Research questions

4. Methodology
4.1. Research design
4.2. Test of control variables
4.3. Survey measures

5. Results
5.1. Normality of data
5.2. H1: Influence of expertise on user acceptance
5.3. H2: Moderation effect of situation complexity

6. Discussion
6.1. Research question(s)
6.2. Limitations
6.3. Applications of gained knowledge

7. Conclusion

References


1. Introduction

Each year, more than 1.2 million people die in road traffic according to the World Health Organization (WHO, 2015). Over 70% of these deaths were associated with human error (Dhillion, 2007). The technology of self-driving cars has the potential to dramatically reduce the traffic crashes caused by human error (NHTSA, 2016). Moreover, self-driving cars have a positive effect on traffic congestion and fuel consumption (Fagnant & Kockelman, 2015). As advanced algorithms have matured considerably over the past decades, demand for autonomous cars is growing (Bimbraw, 2015). Many companies, not limited to major car manufacturers such as Tesla and Nissan, are investing in technologies related to self-driving cars. Recent studies predict that autonomous cars will account for around 20-40% of car sales in 2040 (Litman, 2020) and that market penetration will reach 24.8-87.2% by 2045 (Bansal & Kockelman, 2017). Ultimately, this would save users time and result in a significant reduction of car accidents. However, public acceptance forms one of the main barriers to the adoption of autonomous driving. Developers of self-driving cars might face problems in terms of user demands which have not been considered before.

Since the technology of self-driving cars is emerging and will be used publicly, the future interaction with daily users is of high interest. This research forms the link between individual user acceptance of autonomous driving and algorithm aversion. Driving is a dynamic activity, and the driver of a (semi-)automated vehicle can, and sometimes is forced to, switch between manual and automatic control. It has already been established that user background affects user acceptance (Raue, et al., 2019; Nielsen & Haustein, 2018). However, by incorporating algorithm aversion and situation complexity, the dynamics of driving are approached more realistically. It will be determined whether there is a clear correlation between user acceptance and situation complexity for different driving scenarios. Subsequently, it will be discussed how this might affect the future market of self-driving cars. In addition, the contextual knowledge gained in the field of autonomous driving will be generalized in order to contribute to the general theory of algorithm aversion.

The concept of acceptance of autonomous driving will be described and the influence of its underlying aspects discussed. Subsequently, the theory of algorithm aversion is explained and then linked to the context of user acceptance of autonomous driving. It will be discussed how this general theory can be applied and which differences arise. Combining both insights leads to the research question, visualized in a conceptual framework. Finally, the results will be discussed and possible applications of the gained knowledge will be provided.

2. Theory


2.1. Acceptance of autonomous driving

In order to predict, explain and improve public acceptance, a thorough understanding of the underlying reasoning is needed (Nordhoff, et al., 2016). When it comes to the acceptance of autonomous driving, three main factors are defined and will be discussed separately: user acceptance, legislation and ethics. Other factors will obviously play a role, but these three are considered the most relevant.

To start with the term autonomous driving itself, the Society of Automotive Engineers (SAE) defines six levels of driving automation (2018). The latest visual chart summarizes all levels in figure 1. The focus of this research is on the moment at which a driver would switch between manual control and autonomous mode. Therefore, autonomous driving and self-driving cars refer to SAE levels 3/4, in which the car is capable of driving itself but the user is able to take over control at all times. The lower levels (1-2) are defined as semi-autonomous driver support features and cannot take over all control, while level 5 is fully autonomous and will most likely lack the possibility of manual driving altogether, e.g. through the absence of a driver's seat.

2.1.1. User acceptance

As figure 1 makes clear, automated cars of level 3/4 are able to drive autonomously under specified conditions. In other words, a driver must be present in order to take over control. On top of that, the driver has the ability to switch between manual and automatic control. Both situations require a reliable bond between driver and car. However, public acceptance might form the biggest barrier towards the adoption of autonomous cars, as argued by Shariff et al. (2017). Several causes, such as concerns regarding overtrust and distrust in the algorithms of autonomous cars, have been mentioned (Bazilinskyy, et al., 2015). As most people do not have any experience with self-driving cars, the degree of trust is determined by perception to a significant extent.

Perception, in turn, is directly influenced by feelings. When judgments involve complexity and uncertainty, people often rely on how they feel rather than engaging in effortful reasoning about arguments for their judgments (Raue, et al., 2019). This emphasizes the importance of feelings in how self-driving cars are perceived.

Moreover, it turns out that gender, experience and knowledge of autonomous driving directly affect user acceptance (Raue, et al., 2019; Nielsen & Haustein, 2018). This is in line with König & Neumayr (2017), who found that people who had experienced well-functioning automated features in their current cars showed a more positive attitude towards self-driving vehicles. In conclusion, a correlation between user acceptance and user background has been established. However, in none of these studies has expertise been considered as a relevant parameter.

As most research surveys are conducted online and participants have little or no experience with autonomous driving (Nordhoff, et al., 2017), the need for empirical testing is growing. In a field experiment, Xu et al. (2018) invited 300 students to experience nine automated driving scenarios. Their direct experience was found to increase their trust, perceived usefulness and perceived ease of use. This is confirmed by two real Automated Road Transport Systems (ARTS) tested in La Rochelle and Lausanne, analysed with the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh, et al., 2003). The results indicate that performance expectancy, effort expectancy and social influence were all useful predictors of behavioral intentions to use ARTS (Madigan, et al., 2016). The observed positive attitude towards prototypes suggests that involving people in the development of autonomous driving is beneficial.

Besides this direct influence, perception is also influenced indirectly. A clear example is media imaging. Kohl et al. (2018) conducted a wide analysis of tweets for text classification. They found that many people have a distorted perception of the potential risks and benefits of autonomous cars. An overestimation of benefits might lead to misuse of self-driving cars and disappointment among initial users, and could have fatal consequences. On the other hand, an overestimation of risks could lead to resistance against self-driving cars before they even become available. Moreover, the importance of experience is highlighted: companies need to rethink their strategies and incorporate early public experience with prototypes, which may lead to less susceptibility to distorted perceptions of self-driving cars (Kohl, et al., 2018).

2.1.2. Legislation

Besides individual acceptance of self-driving car technology, legal issues arise. One of the major challenges concerns liability. In the case of a semi-automated car, is the car manufacturer, the driver or the software developer responsible for caused damage? The question of who should be held liable becomes particularly hard to answer when the Product Liability Directive is taken into consideration (Council of the EU, 1985). This European law dates back to 1985, a period in which software was not as widespread as it is today.

The law should keep abreast of technological development, even though the legal environment obviously cannot change as fast as the technology related to autonomous driving improves. Several papers strive for a legal framework to cover the deficiencies exposed by the development of autonomous driving technology (Juhasz, 2018). Policy makers have a role to play in this, on both a national and a supranational level. Not only car manufacturers but also the members of society as a whole need a safe and predictable legal background (De Bruyne & Werbrouck, 2018).

2.1.3. Ethics

Independently of technological feasibility, it is important to involve ethics in autonomous decision-making around deadly accidents. A well-known problem is the trolley problem. Inevitably, an autonomous car will end up in a situation where it is impossible to avoid a collision, and the only choice is which person or people to hit. Since there is no correct choice, the algorithm can be described as "a targeting algorithm of sorts" according to Patrick Lin, a professor of philosophy at Cal Poly. He explains that such programs would be "systematically favoring or discriminating against a certain type of object to crash into." As a result, those in "the target vehicles will suffer the negative consequences of this algorithm through no fault of their own." According to Brown (2019) this will lead to major challenges, both ethically and legally.

However, this is refuted in the article by J. L. Gogoll (2018), who advocates a mandatory rule that aims at minimizing overall harm; state regulation seems the most obvious as well as the most practical way to achieve that. Furthermore, the article states that the classic trolley problem is conceptually inadequate for discussing the ethics of autonomous cars. This is also argued by Holstein (2017), who identifies ethical challenges and proposes a conceptual model which minimizes damage. The purpose is to move away from stagnating discussions over abstract thought experiments and to move forward to solving actual ethical issues.

To summarize, the acceptance of autonomous driving is influenced by several factors. Legislation is usually static and monitors the development of autonomous driving carefully; the adaptation of legislation is therefore assumed to be a matter of time. As a result, clarity will be provided and most ethical dilemmas will be settled by law. User acceptance, however, covers society's willingness to open up to autonomous driving, whose importance has been emphasized. Since few people have experience with self-driving cars, their attitude mainly depends on perception. Perception, in turn, is influenced both directly and indirectly. An overview of the acceptance of autonomous driving is given in figure 2.


2.2. Algorithm aversion

This chapter covers the general theory of algorithm aversion and how it can be applied to user acceptance of autonomous driving. The goal is to capture the broad range of influences on perception, so that the contextual knowledge gained can be generalized.

2.2.1. General theory of algorithm aversion

Algorithm aversion was introduced as a concept in 2015 (Dietvorst, et al.); it combines behavioral decision-making with the fact that current algorithms outperform human forecasters. Until 2015, research had been done from roughly two angles of approach: on one side, psychological research into the basics of human decision-making; on the other side, research comparing human predictions to data-driven predictions.

An example of the psychological research is Kruger (1999), who states that insufficient adjustment, i.e. people tending to judge themselves as above average, is a common error in human judgment due to its egocentric nature. This indicates that people are not able to properly reflect on themselves or on other people, due to overconfidence and prejudice respectively. Furthermore, the article of Önkal et al. (2009) can be seen as an example of preference for human decision-making over a data-driven method: advice from experts received more weight than a statistical model in the context of stock price forecasting. This is in line with Eastwood et al. (2012), who conclude that people hold a "human-is-better" and a "more-is-better" attitude towards the different decision-making strategies.

These papers make clear that people generally prefer human decision-making. The foundation of algorithm aversion in general, however, is laid by Dietvorst et al. (2015). That article focuses on empirical research in order to identify the principles behind algorithm aversion. Clear quantitative proof has been found that people lose trust in algorithms after they see them err. This corresponds to the earlier-mentioned reliance on experts' advice and the "human-is-better" attitude. To avoid people losing trust, the algorithm could be made invisible, so that people are not able to see the error in the first place. Obviously, this is not preferable, because a black-box situation arises, which negatively affects overall performance.

Moreover, van Wezel & Cegarra (in press) conducted a broad literature study on user involvement in scheduling models. An overview of usability factors is provided, resulting in a conceptual framework for the design of usable scheduling models. An algorithm should be heuristic, and the user should be involved through several interaction points; the latter enables the user to adjust and control the algorithm. It has already been established that interaction and the ability to modify the algorithm are positively correlated to usability and to overcoming algorithm aversion (Dietvorst, et al., 2016).

In contrast, Logg et al. (2019) challenge the idea that people prefer human decision-making and state that people appreciate algorithmic advice despite blindness to the underlying process. This sheds a different light on the theories discussed so far and raises the need for further research. However, the appreciation of algorithms was not uniform: it was lower among experts, who were simply less open to taking any advice. As experts are working on the development of self-driving cars, it is interesting to distinguish between their acceptance and the acceptance of people without any experience or knowledge.

2.2.2. Application of algorithm aversion

Now that the user acceptance of autonomous driving and the theory of algorithm aversion have been elaborated on, the link between the two can be formed. Perception, influenced both directly and indirectly, is the main source of user acceptance. The general theory of algorithm aversion is applied to perception as visualized in figure 3.

However, there is one main difference between scheduling methods, which are typically considered in algorithm aversion research, and the algorithms used in self-driving cars. For the latter, the system design consists of an input (a captured image of the road), a controller (a dynamic control system) and an output (steering and speed commands) (Isa & Jantan, 2005). Such a dynamic environment requires a feedback loop designed to achieve a certain response time (Abdelzaher, et al., 2008). While the Full Self-Driving (FSD) chip of Tesla is able to process 2,300 frames per second (WikiChip, 2019), a scheduling method is largely unconstrained in processing time. In that sense, a self-driving algorithm operates under far stricter real-time requirements than a scheduling algorithm.
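To make this difference concrete, the following minimal sketch illustrates such a sense-decide-act feedback loop with a fixed response-time budget. It uses a hypothetical measurement, a simple proportional controller and placeholder functions; it is an illustration of the loop structure only, not the control software of any actual vehicle.

import time

def read_camera_frame():
    # Input stage: a real system would return a processed road image; here a hypothetical measurement.
    return {"lateral_offset_m": 0.4}  # assumed distance from the lane centre in metres

def proportional_controller(offset_m, gain=0.8):
    # Controller stage: map the measured error to a steering command (simple P-control sketch).
    return -gain * offset_m

def apply_steering(angle_rad):
    # Output stage: in a real vehicle this command would go to the steering actuator.
    pass

CYCLE_TIME_S = 1.0 / 30  # assumed response-time budget per iteration (~30 Hz)

for _ in range(3):  # a few iterations of the sense-decide-act loop
    start = time.perf_counter()
    frame = read_camera_frame()                                   # input
    command = proportional_controller(frame["lateral_offset_m"])  # controller
    apply_steering(command)                                       # output
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, CYCLE_TIME_S - elapsed))                  # hold the fixed cycle time

A scheduling algorithm, by contrast, can usually run to completion without such a per-cycle deadline.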

Several studies have shown a correlation between user background and user acceptance (Raue, et al., 2019; Nielsen & Haustein, 2018). However, only basic demographic variables such as age and gender were considered. Moreover, Logg et al. (2019) argued that experts are less willing to rely on an algorithm's advice. As this has not been tested in a dynamic environment, the influence of expertise could contribute to the field of algorithm aversion. Since a driver of a level 3/4 autonomous vehicle can, and sometimes is forced to, switch between manual and automatic control, the dynamic user acceptance of autonomous driving is relevant.

Furthermore, as driving is a dynamic activity, mental workload differs significantly during a car ride, according to Paxion et al. (2014). In general, complex driving situations increase the perceived workload and impair performance; consequently, drivers often reduce their speed when facing situations assessed as moderately complex. Paxion et al. (2014) also emphasize the lack of studies combining the effects of situation complexity and driving experience on mental workload. Subsequently, Stapel et al. (2017) found that automation lowers the subjective workload, but that experience with automation is a prerequisite. This supports the idea of automation having a positive effect on mental workload. Stapel et al. (2017) compared the results of their field study (objective workload) to subjective workload as defined by NASA. Moreover, driving environment complexity has a strong correlation with both subjective and objective workload under all conditions. As mentioned, NASA defined a general task load index (NASA, 1986). When a complex driving situation is assessed, the effect on subjective mental workload becomes clear: relatively high scores on aspects such as mental demand, temporal demand and frustration level can be identified.

Combining both issues regarding the influence of expertise and situation complexity, the following main research question is defined: “How does algorithm expertise relate to individual user acceptance and how does situation complexity moderate this relation?”

Figure 3 visualizes how the conceptual framework of this research is positioned with respect to algorithm aversion and user acceptance. The dotted box indicates the main research question and how algorithm aversion is affected by algorithm expertise and situation complexity. Subsequently, algorithm aversion relates to perception and therefore has an effect on user acceptance.


3. Research questions

Now that this research has been positioned and the main research question has been stated, it is important to define the underlying research goals. A correlation between expertise and algorithm aversion has been established; however, it is unclear to what extent expertise affects user acceptance. Secondly, the moderation effect of situation complexity will be studied; at this moment it is not known how this parameter affects individual user acceptance. Finally, the gained knowledge should become generalizable, and it should be discussed how it might contribute to the development of self-driving cars.

Summarizing, the main research question is defined as:

“How does algorithm expertise relate to individual user acceptance and how does situation complexity moderate this relation?”

It can be subdivided into two research topics: the effect of expertise and the moderation effect of complexity. The following hypotheses are defined:

• H1: There is a significant correlation between user expertise and user acceptance of autonomous driving.
• H2: Experts are more reluctant to rely on autonomous driving algorithms in complex situations.

Furthermore, two follow-up questions regarding the application of gained knowledge will be discussed:

What are possible applications of gained knowledge in the (semi-)autonomous car industry?


4. Methodology

4.1. Research design

For data collection, a quantitative survey is most appropriate (Ahmed, et al., 2016). The focus of this research is on revealing the correlation between user acceptance, algorithm expertise and situation complexity. Surveys are also in line with other research in the field of human behavior, as discussed in the theoretical background. The use of a questionnaire allows large populations to be surveyed, since data can be collected simultaneously, in contrast to interviews for instance. The internet offers benefits for distributing the survey and administering the data. Nevertheless, there are some disadvantages associated with the absence of the researcher, for example a lack of control and accuracy due to less detailed questions and low response rates among hard-to-reach research populations (Denscombe, 2014).

Since autonomous driving is not yet widely available, the current state of scientific research is limited to field tests. Therefore, an explorative study is most appropriate, following the reasoning of von Krogh et al. (2012). In this so-called "embryonic stage", the goal is to generate first evidence and provide new insights for further research. For this purpose, a non-probability sampling method is chosen and participants are selected based on apparent expertise.

Reaching experts in the field of autonomous driving requires additional effort. To deal with this, experts of Rijksdienst voor het Wegverkeer (RDW), TomTom, ANWB, The Future Mobility Network, Crossyn, and ProDrive Academy have been contacted and were willing to cooperate. RDW, TomTom, ANWB, The Future Mobility Network and Crossyn are all involved in the development of autonomous driving, while ProDrive is a training center whose focus is on dealing with Advanced Driver Assistance Systems (ADAS). Their instructors can therefore be considered experts in the field of autonomous driving.

The survey is distributed via the online survey platform Qualtrics, licensed by the University of Groningen (RUG, 2020). The default language is set to Dutch, since most of the participants originate from the Netherlands; however, the survey is also available in English for international participants. All provided images were obtained online from practice exams for the car driving licence (Theorie examen oefenen, 2020; Itheorie, 2020; Stuvia, 2020; Oefentoetsen, 2020). The complete survey is provided in the appendix.

As mentioned before, experts in the field of autonomous driving have been contacted actively. In addition, the survey was spread online to gather data from people without expertise. Since the average expert turned out to be male and middle-aged, there was a slight preference for similar participants in the non-expert group. Moreover, this group might have some experience with (semi-)autonomous driving systems such as adaptive cruise control, lane departure warning or parking sensors. The influence of experience is also discussed in the next chapter.

ACCEPTANCE OF AUTONOMOUS DRIVING

ANSWER                     SCORE
Zeker handmatig            1
Waarschijnlijk handmatig   2
Neutraal                   3
Waarschijnlijk autonoom    4
Zeker autonoom             5


4.2. Test of control variables

To determine the actual effect of algorithm expertise and situation complexity on user acceptance, other factors which might influence user acceptance should be excluded first. An overview of all relevant control variables and the corresponding tests (and results) is provided in table 2.

CONTROL VARIABLE     TESTED?
AGE                  Not tested
GENDER               Independent samples t-test (no significance)
LEVEL OF EDUCATION   One-Way ANOVA (no significance)
FIELD OF STUDY       Not tested
JOB SECTOR           Not tested
EXPERIENCE           Independent samples t-test (no significance)

Table 2: Control variables

To start, considering the variables gender and experience with autonomous driving systems, the dataset can be divided into two sample groups. For each, an independent samples t-test is run. No significant results have been found for these factors, considering both the average acceptance and the acceptance for each complexity indicator. In other words, it cannot be concluded from the sample that gender or experience influences the acceptance of autonomous driving.

Secondly, the influence of level of education is subjected to a One-Way ANOVA test. This test compares the means of two or more independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different (Kent State University, 2020). It turned out that there is no evidence that level of education has a significant correlation with the average acceptance of autonomous driving.
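For illustration, these control-variable checks could be reproduced outside SPSS along the following lines. This is a minimal sketch with hypothetical acceptance scores and group sizes; the actual analysis was performed in SPSS on the survey data.

from scipy import stats

# Hypothetical average-acceptance scores (scale 1-5), grouped per control variable.
acceptance_men = [3.2, 2.8, 3.6, 3.1, 2.9]
acceptance_women = [3.4, 3.0, 2.7, 3.3]

# Independent samples t-test for two-group variables (gender, experience).
t_stat, p_gender = stats.ttest_ind(acceptance_men, acceptance_women)

# One-Way ANOVA for level of education (three or more groups).
secondary = [3.0, 3.2, 2.9]
bachelor = [3.1, 3.4, 3.3]
master = [2.8, 3.0, 3.5]
f_stat, p_education = stats.f_oneway(secondary, bachelor, master)

# A p-value above 0.05 means no significant effect of the control variable on acceptance.
print(p_gender, p_education)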


4.3. Survey measures

This research comprises the acceptance of autonomous driving, and the main research question scopes two aspects: algorithm expertise and situation complexity. The goal of the survey is therefore to identify and measure both.

Measuring expertise is a difficult task, since there are many definitions (Gobet, 2015). Nevertheless, algorithm expertise in the field of autonomous driving is determined on the basis of one survey question (Q7), in which experience with and/or involvement in the development of autonomous driving is assessed. Participants who answer this question positively are asked to specify their related knowledge.

On the other hand, the effect of situation complexity on the acceptance of autonomous driving will be tested. Complexity can be expressed as workload: the amount of information-processing resources used per unit of time to meet the required level of performance (Wickens & Hollands, 2015). In the context of driving complexity, external factors can influence the individual workload required to drive. Visual demands (i.e. navigation), traffic density, lane changing, speed and weather conditions are considered the most influential indicators; an overview is provided in table 3.

In total, ten scenarios are defined, each focusing on one particular complexity indicator in order to determine its effects. In this way, five scenarios classified as simple and five scenarios classified as complex are created. For example, scenarios 1 and 2 are identical except for the fact that navigation is incorporated in the latter, which is identified as a visual demand.


MEASUREMENT            INDICATOR
ALGORITHM EXPERTISE    Experience and/or involvement in development of autonomous driving
SITUATION COMPLEXITY   Visual demands (Tsimhoni, et al., 1999)
                       Traffic density (Schießl, 2008; Gold, et al., 2016)
                       Lane changing (Teh, et al., 2014)
                       Speed (Cacciabue, et al., 2007)
                       Weather conditions (Kilpeläinen & Summala, 2007)

Table 3: Measures and indicators

5. Results

The sample includes 76 participants (65 men; 11 women; average age = 40). Since the sample size n of both expertise groups is ≥ 30, the distribution of the sample means will be approximately normally distributed according to the Central Limit Theorem (LaMorte, 2016). Nevertheless, the normality of the data is tested in 5.1 to confirm this.

5.1. Normality of data

One of the assumptions for most parametric tests to be reliable is that the data are approximately normally distributed. Therefore, it should be determined whether the gathered data are normally distributed, in order to select the appropriate statistical test. First, the average acceptance of each participant is calculated as the sum of the acceptance scores for all ten driving scenarios divided by ten. Normality of the average acceptance data can then be concluded from the tests in table 4, which are not significant at the 0.05 level. Therefore, normality can be assumed for this dataset and an appropriate parametric test can be used.

Figure 4: Histogram of average acceptance data and normal distribution

Tests of Normality

                     Kolmogorov-Smirnov(a)          Shapiro-Wilk
                     Statistic   df   Sig.          Statistic   df   Sig.
Acceptance_average   .088        76   .200*         .980        76   .289

*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction

Table 4: Test of normality average acceptance
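As an illustration, the computation of the average acceptance per participant and a corresponding normality check could be carried out as follows. This is a sketch with randomly generated hypothetical responses; the thesis itself uses SPSS, which additionally applies the Lilliefors correction to the Kolmogorov-Smirnov test.

import numpy as np
from scipy import stats

# Hypothetical response matrix: one row per participant, one column per driving scenario (S1-S10).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(76, 10))

# Average acceptance per participant: sum over the ten scenarios divided by ten.
acceptance_average = responses.mean(axis=1)

# Shapiro-Wilk test of normality; a p-value above 0.05 means normality is not rejected.
w_stat, p_value = stats.shapiro(acceptance_average)
print(round(w_stat, 3), round(p_value, 3))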


5.2. H1: Influence of expertise on user acceptance

To determine whether there is statistical evidence that the associated population means are significantly different, the independent samples t-test is suitable (Kent State University, 2020). As described before, the two populations considered are based on expertise, i.e. experience with and/or involvement in the development of autonomous driving. This variable corresponds to Q7 of the survey. Common answers to this question were "assessment of autonomous vehicles", "involved in shuttle projects", "development of ADAS systems" and "research NVIDIA drive computers".

The average acceptance of both sample groups is shown in figure 5 below. The mean acceptance is 3.16 for the sample group without expertise and 3.03 for the sample group with expertise, both slightly above neutral.

Figure 5: Graphical representation of average acceptance of autonomous driving per expertise group

The average acceptance of autonomous driving seems independent of expertise. However, an independent samples t-test is used to verify this. The hypothesis H1: "There is a significant correlation between user expertise and user acceptance of autonomous driving." is tested. In other words, the null hypothesis µ1 = µ2 ("the two population means are equal") should be rejected in order to conclude that there is a significant difference between the two populations. The group statistics and the results of the independent samples t-test are provided in tables 5 and 6 respectively.

Group Statistics

Heeft u ervaring en/of bent u op enige wijze betrokken bij de ontwikkeling van autonoom rijden?

                     Group           N    Mean     Std. Deviation   Std. Error Mean
Acceptance_average   Nee             45   3.1622   .71551           .10666
                     Ja, namelijk:   31   3.0323   .76567           .13752

Table 5: Group statistics of t-test average acceptance with respect to expertise

As table 6 shows, the null hypothesis cannot be rejected at the 95% confidence level, since the significance exceeds 0.05. This holds regardless of whether equal variances are assumed or not. Therefore, there is no clear evidence that expertise has an effect on the average acceptance of autonomous driving.

Independent Samples Test

Levene's Test for Equality of Variances and t-test for Equality of Means

Acceptance_average            F       Sig.   t      df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
Equal variances assumed       1.095   .299   .756   74       .452              .12996            .17185                  -.21245        .47238
Equal variances not assumed                  .747   61.722   .458              .12996            .17404                  -.21796        .47789

Table 6: Independent samples t-test of average acceptance with respect to expertise
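The SPSS procedure behind tables 5 and 6 corresponds roughly to the following steps: Levene's test decides which t-test variant to report, after which the independent samples t-test is run. This is a minimal sketch with hypothetical group scores, not the original dataset.

from scipy import stats

# Hypothetical average-acceptance scores for the two expertise groups.
no_expertise = [3.4, 2.9, 3.6, 3.0, 3.1]
expertise = [3.2, 2.7, 3.3, 2.8]

# Levene's test for equality of variances.
_, p_levene = stats.levene(no_expertise, expertise)
equal_var = p_levene > 0.05

# Independent samples t-test; if p > 0.05, the null hypothesis of equal means cannot be rejected.
t_stat, p_value = stats.ttest_ind(no_expertise, expertise, equal_var=equal_var)
print(round(t_stat, 3), round(p_value, 3))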

While the focus has so far been on average acceptance, the analysis of each individual driving scenario might provide new insights. The mean acceptance of the whole sample is 3.11 (figure 4). However, some driving scenarios have a very strong effect, either positive or negative, on the average acceptance of autonomous driving. An overview of all ten driving scenarios, the corresponding mean acceptance for both expertise groups and the standard deviation is provided in table 7.

S1 (normal road/conditions) and S10 (motorway low speed) result in the highest acceptance of autonomous driving. On the other hand, S3 (bad weather) and S7 (lane changing) result in the highest aversion towards autonomous driving, i.e. manual driving is preferred. Moreover, the difference between expertise and no expertise is most noticeable in S6 and S9.

Besides the mean acceptance, the standard deviation should be considered; it provides information about how the acceptance scores are spread around the mean. Furthermore, the coefficient of variation (CV = σ/μ) is less than 1 for all scenarios, which means that the variation can be considered low (Kaufmann, 2014). On top of that, the standard deviation of S1 and S10 is extremely low compared to the other scenarios, which indicates a consensus among the sample group for autonomous driving under normal conditions and at low speed on a motorway.

DRIVING SCENARIO       ACCEPTANCE NO EXPERTISE   ACCEPTANCE EXPERTISE   STANDARD DEVIATION
S1: NO NAVIGATION      4.36                      4.29                   0.806
S2: NAVIGATION         3.91                      4.16                   1.077
S3: BAD WEATHER        2.11                      1.71                   1.199
S4: GOOD WEATHER       2.38                      2.52                   1.340
S5: CITY TRAFFIC       2.40                      2.48                   1.279
S6: CITY NO TRAFFIC    2.98                      2.29                   1.366
S7: LANE CHANGING      2.31                      2.35                   1.279
S8: NO LANE CHANGING   3.49                      3.71                   1.236
S9: HIGH SPEED         3.09                      2.35                   1.473
S10: LOW SPEED         4.60                      4.45                   0.807

Table 7: Mean acceptance and standard deviation per driving scenario
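The per-scenario group means, standard deviations and coefficients of variation reported above could be derived from the raw survey responses along these lines. The sketch assumes a long-format dataset with hypothetical column names; it is not the original analysis script.

import pandas as pd

# Hypothetical long-format data: one row per participant per scenario.
df = pd.DataFrame({
    "scenario": ["S1", "S1", "S3", "S3"],
    "expertise": [False, True, False, True],
    "acceptance": [5, 4, 2, 2],
})

# Mean acceptance per scenario and expertise group (as in table 7).
group_means = df.groupby(["scenario", "expertise"])["acceptance"].mean().unstack()

# Standard deviation and coefficient of variation (CV = sigma / mu) per scenario.
per_scenario = df.groupby("scenario")["acceptance"].agg(["mean", "std"])
per_scenario["cv"] = per_scenario["std"] / per_scenario["mean"]
print(group_means)
print(per_scenario)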

It was concluded above that there is no clear indication that expertise and the average acceptance of autonomous driving are significantly correlated. However, there could be a significant correlation for a specific driving scenario. The hypothesis H1: "There is a significant correlation between user expertise and user acceptance of autonomous driving." is therefore also tested for the individual driving scenarios. Since ten driving scenarios have been surveyed in total, each one has been subjected to an independent samples t-test; only the significant results are shown.

Group Statistics

Heeft u ervaring en/of bent u op enige wijze betrokken bij de ontwikkeling van autonoom rijden?

                          Group           N    Mean   Std. Deviation   Std. Error Mean
S6: City no traffic       Nee             45   2.98   1.305            .195
                          Ja, namelijk:   31   2.29   1.371            .246
S9: Motorway high speed   Nee             45   3.09   1.459            .217
                          Ja, namelijk:   31   2.35   1.404            .252

Table 8: Group statistics of t-test acceptance S6 and S9 with respect to expertise

Independent Samples Test

Levene's Test for Equality of Variances and t-test for Equality of Means

                                                        F      Sig.   t       df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
S6: City no traffic       Equal variances assumed       .064   .800   2.211   74       .030              .687              .311                    .068           1.307
                          Equal variances not assumed                 2.190   62.542   .032              .687              .314                    .060           1.315
S9: Motorway high speed   Equal variances assumed       .625   .432   2.189   74       .032              .734              .335                    .066           1.402
                          Equal variances not assumed                 2.205   66.252   .031              .734              .333                    .069           1.399

Table 9: Independent samples t-test of acceptance S6 and S9 with respect to expertise

As presumed, there is a significant difference in acceptance with respect to expertise for S6 (city no traffic) and S9 (motorway high speed) at the 95% confidence level (table 9). This means that expertise can be assumed to have a (negative) effect on acceptance in these two driving situations. The mean acceptance is lower for the expertise sample, which indicates that experts are less willing to drive autonomously in the given scenarios than people without expertise.

5.3. H2: Moderation effect of situation complexity

The effect of expertise has been thoroughly analyzed, both on average user acceptance and on the acceptance for specific driving scenarios. What has not been studied yet, however, is the moderation effect of situation complexity on user acceptance. The supposed interaction is visualized in figure 6 and is also incorporated in the conceptual framework (figure 3). In this case, algorithm expertise is classified as the independent variable, situation complexity corresponds to the moderator, and user acceptance is the dependent variable (Kean University, 2004).

Since the situation complexity is not related to the participant, i.e. the complexity of a particular driving scenario is equal for all participants, it is not possible to express complexity as a participant-level variable and thus not possible to conduct a statistical moderation test. However, the average acceptance of both sample groups can be calculated and graphed for both levels of complexity. S1, S4, S6, S8 and S10 are classified as simple, whereas S2, S3, S5, S7 and S9 are classified as complex. The results are provided in figure 7.

Figure 7: Moderation effect of complexity on average acceptance
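The comparison behind figure 7 amounts to averaging acceptance per expertise group over the simple and the complex scenario sets. A minimal sketch, reusing the hypothetical long-format layout introduced earlier:

import pandas as pd

SIMPLE = {"S1", "S4", "S6", "S8", "S10"}  # the five scenarios classified as simple

df = pd.DataFrame({
    "scenario": ["S1", "S2", "S1", "S2"],
    "expertise": [False, False, True, True],
    "acceptance": [5, 4, 4, 3],
})
df["complexity"] = df["scenario"].apply(lambda s: "simple" if s in SIMPLE else "complex")

# Mean acceptance per expertise group and complexity class, as plotted in figure 7.
print(df.groupby(["expertise", "complexity"])["acceptance"].mean().unstack())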

It is immediately clear that experts are in general less willing to drive autonomously, as discussed before. Furthermore, it is observed that an increase in situation complexity results in a lower acceptance. However, the moderation effect is negligible, as both graphs have an almost identical slope. In other words, the influence of situation complexity is assumed to be equal for both sample groups, i.e. no expertise and expertise.

While the average acceptance seems unaffected by a moderation effect of complexity, it is interesting to study specific driving scenarios. Considering the two situations where significant correlations with expertise were found, i.e. S6 (city no traffic) and S9 (motorway high speed), a different light is shed on the moderation effect of complexity.

To start, the significant difference in acceptance for S6 and S9 can be observed in figures 8 and 9 respectively. However, this difference diminishes under the influence of the moderator complexity. It should be mentioned that the moderation effect is exactly opposite in the two cases: increased complexity reduces the difference in acceptance for city traffic, while it amplifies the difference for high speed.

Moreover, the fact that the acceptance of the expertise sample group increases as complexity increases is remarkable. Possible explanations for this observation will be discussed in the next chapter.

Figure 8: Acceptance for city traffic (S5/S6) per expertise group
Figure 9: Acceptance for speed (S9/S10) per expertise group

Finally, the most extreme moderation effect is found for the complexity indicator "weather". While the no-expertise sample group shows only a slight decrease in acceptance with respect to weather conditions, the expert sample group is highly affected: the acceptance of non-experts decreases by 11%, whereas it decreases by 32% for experts.

Figure 10: Moderation effect of complexity on acceptance weather
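These percentages can be reproduced from the group means in table 7, comparing good weather (S4) with bad weather (S3):

no expertise: (2.38 - 2.11) / 2.38 ≈ 0.11, i.e. an 11% decrease
expertise:    (2.52 - 1.71) / 2.52 ≈ 0.32, i.e. a 32% decrease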

6. Discussion


6.1. Research question(s)

The main research question comprises the two relevant aspects of this research, i.e. the influence of expertise on user acceptance and the moderation effect of situation complexity. Both are captured by the two hypotheses.

According to the results, the effect of expertise on the acceptance of autonomous driving seems limited. However, significant evidence has been found for two driving scenarios: S6 (city no traffic) and S9 (motorway high speed). An explanation for this could be the presence of latent danger. Although there is no direct danger present in the scenario description and visualization, both driving in a city and driving at high speed carry a risk. This is in contrast to a busy city, lane changing or bad weather, which directly indicate a high perceived risk. It might be the case that experts are more aware of possible dangers and prefer to drive pro-actively instead of letting the autonomous driving algorithm react.

Furthermore, the average user acceptance is 3.11, which is slightly above neutral. However, different scenarios compensate each other in their effect on the average user acceptance, which gives a distorted picture. Considerable variation in user acceptance has been observed across the different complexity indicators: in some cases the driving scenario has a negative effect on user acceptance, while in other cases this effect is positive. In general, participants were more willing to drive autonomously in situations where perceived safety is higher, so the car could take over the monotonous task. Low-speed motorway driving, with a mean acceptance of 4.54 on a scale of 5, is a good example of such a task. However, when perceived safety is lower, e.g. in bad weather or at high speed, the mean acceptance of autonomous driving is significantly lower. As discussed, this effect is even stronger for the expert sample.


6.2. Limitations

A significant difference in acceptance between the two expertise groups is observed for two specific driving scenarios, and a possible explanation for this is provided in 6.1. However, it remains difficult to define a common cause, since a survey is a quantitative instrument and limited in depth. Therefore, further research into the exact causes and reasoning behind driving behavior in different complex situations is of interest. This could be done, for instance, by interviewing both experts and non-experts.

Another limitation of this research is the fact that the defined scenarios are mainly based on visual representations of driving situations. This comes with two disadvantages. Firstly, difficulties in representing the actual complexity indicator result in measurement errors (Coughlan, et al., 2009): participants might respond to aspects present in the picture which do not match the corresponding complexity indicator. Secondly, it is difficult to capture actual individual acceptance, as a survey is passive while driving is a dynamic activity. In practice, drivers have considerably less time to react than participants of the survey.

Moreover, weather conditions might have a moderation effect, as discussed in 5.3. However, even though the observed effect appears strong, it cannot be assumed on the basis of a single observation. Since it was not possible to conduct a statistical test on the gathered data, further research into the moderation effect is required.

6.3. Applications of gained knowledge

As observed, people are less willing to drive autonomously in situations with low perceived safety. On the other hand, people tend to switch to autonomous driving for monotonous tasks with high perceived safety. Since a significant part of the participants (61 out of 76) has experience with autonomous driving systems, such as adaptive cruise control, lane departure warning and/or park assist, participants might already be convinced of the capabilities of autonomous cars for relatively simple tasks. Therefore, the high acceptance of autonomous driving in high perceived safety situations could be caused by previous experience, which is in line with König & Neumayr (2017). The autonomous driving industry could address this by incorporating early experience with prototypes driving in complex situations in order to convince the public of the capabilities of an autonomous car, which is also suggested by Kohl et al. (2018). Such positive experiences seem beneficial for persuading people towards fully autonomous driving.

On the other hand, as the development of autonomous driving progresses, it might turn out that autonomous driving in complex situations is much harder to achieve. If so, the focus of the industry might shift to semi-autonomous cars which are only capable of driving in limited, predefined circumstances. In that case, public acceptance already seems sufficient. Further research is required to determine the exact possibilities and challenges of future (semi-)autonomous cars.


7. Conclusion

This research aimed to identify the effect of expertise on the acceptance of autonomous driving. Moreover, the moderation effect of different situation complexity indicators has been analyzed. Based on a quantitative analysis of a survey conducted amongst experts in the field of autonomous driving and a group of non-experts, expertise shows a significant correlation with user acceptance for specific driving scenarios only, while the complexity indicators cause a wide variation in acceptance, with a moderation effect observed in several cases.


References

Abdelzaher, T. et al., 2008. Introduction to Control Theory And Its Application to Computing Systems. In: Performance Modeling and Engineering. Boston: Springer, pp. 185-215.

Ahmed, V., Opoku, A. & Aziz, Z., 2016. Choosing an appropriate research methodology and method. Research Methodology in the Built Environment: A Selection of Case Studies, pp. 32-49.

Bansal, P. & Kockelman, K. M., 2017. Forecasting Americans' long-term adoption of connected and autonomous vehicle technologies. Transportation Research Part A: Policy and Practice, Volume 95, pp. 49-63.

Bazilinskyy, P., Kyriakidis, M. & De Winter, J., 2015. An international crowdsourcing study into people's statements on fully automated driving. Las Vegas, s.n.

Bimbraw, K., 2015. Autonomous Cars: Past, Present and Future - A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology. Colmar, France, 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO).

Brown, A., 2019. Interesting Engineering: Are We Programming Killer Cars? The Ethics of Autonomous Vehicles. [Online] Available at: https://interestingengineering.com/are-we-programming-killer-cars-the-ethics-of-autonomous-vehicles [Accessed 5 February 2020].

Bugala, M., 2018. Algorithms Applied In Autonomous Vehicle Systems, s.l.: s.n.

Cacciabue, P. C., Re, C. & Macchi, L., 2007. Simple Simulation of Driver Performance for Prediction and Design Analysis. In: Modelling Driver Behaviour in Automotive Environments. London: Springer, pp. 344-375.

Corbetta, P., 2003. Social Research: Theory, Methods and Techniques. London: SAGE Publications.

Coughlan, M., Cronin, P. & Ryan, F., 2009. Survey research: Process and limitations. International Journal of Therapy and Rehabilitation, 16(1).

Council of the EU, 1985. EUR-Lex: Access to European Union Law. [Online] Available at: https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=CELEX:31985L0374 [Accessed 22 March 2020].

De Bruyne, J. & Werbrouck, J., 2018. Merging self-driving cars with the law. Computer Law & Security Review, 34(5), pp. 1150-1153.

Denscombe, M., 2014. The Good Research Guide for Small-Scale Social Research Projects. 5 ed. Maidenhead, Berkshire, UK: Open University Press.

Dietvorst, B. J., Simmons, J. P. & Massey, C., 2016. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science, 64(3), pp. 1155-1170.

Eastwood, J., Snook, B. & Luther, K., 2012. What People Want From Their Professionals: Attitudes Toward Decision-making Strategies. Behavioral Decision Making, 25(5), pp. 458-468.

Fagnant, D. J. & Kockelman, K., 2015. Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transportation Research Part A: Policy and Practice, Volume 77, pp. 167-181.

Gobet, F., 2015. Understanding expertise: A multidisciplinary approach. s.l.: Palgrave/Macmillan.

Gogoll, J. L., 2018. Experimental and Normative Ethics: The Case of Autonomous Cars. München: s.n.

Gold, C., Körber, M., Lechner, D. & Bengler, K., 2016. Taking Over Control From Highly Automated Vehicles in Complex Traffic Situations: The Role of Traffic Density. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(4), pp. 642-652.

Hiray, J., 2008. Business Management. [Online] Available at: https://businessmanagement.wordpress.com/2008/04/24/open-ended-multiple-choice-and-likert-scales-items-in-surveys/ [Accessed 8 March 2020].

Holstein, T., 2017. The Misconception of Ethical Dilemmas in Self-Driving Cars. Gothenburg, Proceedings.

Isa, K. B. & Jantan, A. B., 2005. An Autonomous Vehicle Driving Control System. International Journal of Engineering Education, 21(5), pp. 855-866.

Itheorie, 2020. Itheorie. [Online] Available at: https://itheorie.nl/auto/theorie-examen/ [Accessed 5 May 2020].

Juhasz, A. B., 2018. The Regulatory Framework and Models of Self-Driving Cars. Law Journal Library, Volume 3, pp. 1371-1392.

Kaufmann, J., 2014. ResearchGate. [Online] Available at: https://www.researchgate.net/post/What_do_you_consider_a_good_standard_deviation [Accessed 10 June 2020].

Kean University, 2004. Moderation Mediation. [Online] Available at: http://orsp.kean.edu/documents/Moderation_Meditation.pdf [Accessed 22 June 2020].

Kent State University, 2020. University Libraries SPSS tutorial. [Online] Available at: https://libguides.library.kent.edu/SPSS/IndependentTTest [Accessed 9 June 2020].

Kilpeläinen, M. & Summala, H., 2007. Effects of weather and weather forecasts on driver behaviour. Transportation Research Part F: Traffic Psychology and Behaviour, 10(4), pp. 288-299.

Kohl, C. et al., 2018. Anticipating acceptance of emerging technologies using twitter: the case of self-driving cars. Journal of Business Economics, Volume 88, pp. 614-642.

König, M. & Neumayr, L., 2017. Users' resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, Volume 44, pp. 42-52.

Kruger, J., 1999. Lake Wobegon be gone! The 'below-average effect' and the egocentric nature of comparative ability judgments. American Psychological Association, 77(2), pp. 221-232.

LaMorte, W. W., 2016. Boston University. [Online] Available at: http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Probability/BS704_Probability12.html [Accessed 10 June 2020].

Litman, T., 2020. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning, s.l.: Victoria Transport Policy Institute.

Logg, J. M., Minson, J. A. & Moore, D. A., 2019. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, Volume 151, pp. 90-103.

Madigan, R. et al., 2016. Acceptance of Automated Road Transport Systems (ARTS): an adaptation of the UTAUT model. Warsaw, Elsevier.

NASA, 1986. NASA Technical Reports Server. [Online] Available at: https://ntrs.nasa.gov/search.jsp?R=20000021488 [Accessed 23 March 2020].

NHTSA, 2016. Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety, Washington D.C.: U.S. Department of Transportation.

Nielsen, T. A. S. & Haustein, S., 2018. On sceptics and enthusiasts: What are the expectations towards self-driving cars?. Transport Policy, Volume 66, pp. 49-55.

Nordhoff, S., van Arem, B. & Happee, R., 2016. Conceptual Model to Explain, Predict, and Improve User Acceptance of Driverless Vehicles. Washington D.C., Transportation Research Board.

Nordhoff, S. et al., 2017. User Acceptance of Driverless Shuttles Running in an Open and Mixed Traffic Environment. Strasbourg, s.n.

Oefentoetsen, 2020. Oefentoetsen. [Online] Available at: https://www.oefentoetsen.nl/toetsen/gevaarherkenning/ [Accessed 5 May 2020].

Önkal, D. et al., 2009. The Relative Influence of Advice from Human Experts and Statistical Methods on Forecast Adjustments. Behavioral Decision Making, 22(4), pp. 390-409.

Raue, M. et al., 2019. The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars. Risk Analysis, 39(2), pp. 358-374.

RUG, 2020. Survey Tools RUG. [Online] Available at: https://www.rug.nl/research/research-data-management/tools-services/rdm-guide/services/survey-tools [Accessed 5 May 2020].

Samuels, P. & Marshall, E., 2020. SPSS statstutor. [Online] Available at: https://www.sheffield.ac.uk/polopoly_fs/1.579181!/file/stcp-marshallsamuels-NormalityS.pdf [Accessed 9 June 2020].

Schießl, C., 2008. Subjective strain estimation depending on driving manoeuvres and traffic situation. IET Intelligent Transport Systems, 2(4), pp. 258-265.

Shariff, A., Bonnefon, J.-F. & Rahwan, I., 2017. Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, Volume 1, pp. 694-696.

Society of Automotive Engineers, 2018. Society of Automotive Engineers. [Online] Available at: https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-“levels-of-driving-automation”-standard-for-self-driving-vehicles [Accessed 23 February 2020].

Stapel, J., Happee, R. & Mullakkal Babu, F. A., 2017. Driver Behavior and Workload in an On-road Automated Vehicle. The Hague, Road Safety & Simulation International Conference.

Stuvia, 2020. Stuvia. [Online] Available at: https://www.stuvia.nl/toetsen/cbr-auto-theorie/examens [Accessed 5 May 2020].

Sugiono, S., Widhayanuriyawan, D. & Debrina, A., 2018. The impact of road complexity on the psychophysiological load experienced by car drivers using electroencephalography (EEG) measurement of brainwaves. Acta Neuropsychologica, 16(4), pp. 361-374.

Teh, E., Jamson, S., Carsten, O. & Jamson, H., 2014. Temporal fluctuations in driving demand: The effect of traffic complexity on subjective measures of workload and driving performance. Transportation Research Part F: Traffic Psychology and Behaviour, Volume 22, pp. 207-217.

Theorie examen oefenen, 2020. Theorie examen oefenen. [Online] Available at: https://www.theorieexamenoefenen.nl/theorie-examen-oefenen/auto [Accessed 5 May 2020].

Tsimhoni, O., Yoo, H. & Green, P., 1999. Effects of Visual Demand and In-Vehicle Task Complexity on Driving and Task Performance as Assessed by Visual Occlusion, Michigan: University of Michigan Transportation Research Institute.

Van Wezel, W. & Cegarra, J., in press. Reframing the theory-practice gap in scheduling: recognizing the usability of incomplete scheduling models.

von Krogh, G., Rossi-Lamastra, C. & Haefliger, S., 2012. Phenomenon-based Research in Management and Organisation Science: When is it Rigorous and Does it Matter?. Long Range Planning, 45(4), pp. 277-298.

WHO, 2015. Global Status Report on Road Safety, Geneva: World Health Organization.

Wickens, C. D. & Hollands, J. G., 2015. Engineering psychology and human performance. Third ed. New York: Routledge.

WikiChip, 2019. WikiChip. [Online] Available at: https://en.wikichip.org/wiki/tesla_(car_company)/fsd_chip [Accessed 23 March 2020].


Appendix

Master thesis autonoom rijden

Start of Block: Intro

Intro: Voor mijn master thesis onderzoek aan de Rijksuniversiteit Groningen doe ik onderzoek naar de acceptatie van autonoom rijden. Het invullen van deze enquête zal maximaal een kwartiertje in beslag nemen en is volledig anoniem. Echter, aan het eind heeft u de mogelijkheid om via een aparte hyperlink uw contactgegevens achter te laten als u geïnteresseerd bent in de resultaten. Alvast bedankt!

Page Break

Q0: Gaat u ermee akkoord dat uw resultaten gebruikt mogen worden voor dit onderzoek? (verplicht om verder te gaan)

o Ja (1)
o Nee (2)

Skip To: End of Survey If Gaat u ermee akkoord dat uw resultaten gebruikt mogen worden voor dit onderzoek? (verplicht om ve... = Nee

End of Block: Intro

Q1: Wat is uw leeftijd?

o Onder 18 (1)
o 18 - 24 (2)
o 25 - 34 (3)
o 35 - 44 (4)
o 45 - 54 (5)
o 55 - 64 (6)
o 65 - 74 (7)
o 75 - 84 (8)
o 85 of ouder (9)

Q2: Wat is uw geslacht?

o Man (1)
o Vrouw (2)

Q3: Wat is uw hoogst genoten opleiding?

o Middelbaar onderwijs (1)

Q5: In welke sector bent u werkzaam?

o Gezondheidszorg en welzijn (1)
o Handel en dienstverlening (2)
o ICT (3)
o Justitie, veiligheid en openbaar bestuur (4)
o Landbouw, natuur en visserij (5)
o Media en communicatie (6)
o Onderwijs, cultuur en wetenschap (7)
o Techniek, productie en bouw (8)
o Toerisme, recreatie en horeca (9)
o Transport en logistiek (10)
o Niet van toepassing (11)
o Anders, namelijk: (12) ________________________________________________

Q6: Heeft u ervaring(en) met (semi-)autonoom rijden? (Bijvoorbeeld ook: adaptive cruise control, lane departure warning en parkeerhulpsystemen)

o Nee (1)


Q7: Heeft u ervaring en/of bent u op enige wijze betrokken bij de ontwikkeling van autonoom rijden?

o Nee (1)
o Ja, namelijk: (2) ________________________________________________

End of Block: Background + expertise

Start of Block: Driving scenarios

Intro: De volgende vragen schetsen in totaal 10 verschillende scenario's die u mogelijk kunt treffen tijdens het rijden in een autonome auto. In dit geval betreft dit een normale auto die daarnaast ook in staat is om volledig autonoom te rijden.


Veronderstel dat u op een weg buiten de bebouwde kom rijdt onder normale omstandigheden (drukte/weer/snelheid), vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?

o Zeker handmatig (1)
o Waarschijnlijk handmatig (2)
o Neutraal (3)
o Waarschijnlijk autonoom (4)
o Zeker autonoom (5)

Page Break

Veronderstel dat u op een weg buiten de bebouwde kom rijdt onder normale omstandigheden (drukte/weer/snelheid), vergelijkbaar met onderstaande afbeelding. Echter bent u ondertussen druk bezig met uw navigatiesysteem omdat deze een omleiding aangeeft. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op rustige weg binnen de bebouwde kom rijdt, vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een relatief drukke weg in een stad rijdt onder normale omstandigheden (weer/snelheid), vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een relatief rustige weg in een stad rijdt onder normale omstandigheden (weer/snelheid), vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een meerbaans snelweg rijdt onder normale omstandigheden (drukte/weer/snelheid). Door invoegende vrachtwagens moet u van rijstrook wisselen, vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een meerbaans snelweg rijdt onder normale omstandigheden (drukte/weer/snelheid). Gezien de wegbezetting kunt u (voorlopig) nog links blijven rijden, vergelijkbaar met onderstaande afbeelding. In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een rustige autobahn in Duitsland rijdt onder normale omstandigheden, vergelijkbaar met onderstaande afbeelding. Gezien de drukte en omdat u op tijd op uw vakantiebestemming wil aankomen, rijdt u hier met 180 km/u (ervan uitgaande dat u ook in autonome stand zelf uw snelheid kan bepalen). In hoeverre zou u in deze situatie handmatig/autonoom rijden?


Veronderstel dat u op een rustige autobahn in Duitsland rijdt onder normale omstandigheden, vergelijkbaar met onderstaande afbeelding. Gezien het feit dat u niet op tijd staat en van het uitzicht wil genieten, rijdt u niet harder dan 120 km/u (ervan uitgaande dat u ook in autonome stand zelf uw snelheid kan bepalen). In hoeverre zou u in deze situatie handmatig/autonoom rijden?

o Zeker handmatig (1)
o Waarschijnlijk handmatig (2)
o Neutraal (3)
o Waarschijnlijk autonoom (4)
o Zeker autonoom (5)

End of Block: Driving scenarios

Start of Block: Contactgegevens

Mocht u geïnteresseerd zijn in de resultaten van dit onderzoek, kunt u via onderstaande hyperlink uw e-mailadres achterlaten. Uw e-mailadres is op deze manier niet gekoppeld aan de vragenlijst.

https://rug.eu.qualtrics.com/jfe/form/SV_eb8u5zOLQIW4C7r
