
Acceptance of social healthcare robots

Measuring differences in the acceptance level of social healthcare robots between different age groups

Rob Dekker

Student number: 11020067 July 2018

Supervisor: Ms. dr. J.A.C. (Jacobijn) Sandberg. 2nd examiner: Mr. ir. A. M. Stolwijk

Bachelor thesis Information Science Faculty of Science


Abstract

Research on the acceptance of new technology has yielded many different models, each with a different set of factors. This study focused on the acceptance level of assistive sociable healthcare robots. The Almere model of Heerink et al. (2010) therefore formed the basis of the model. One additional factor, called perceived human-likeness, was taken into account and completed the model tested in this study. PLS path modeling was performed to test the relations in the model. Most constructs were confirmed to have a significant direct or indirect influence on the intention to use social healthcare robots. In addition, the difference between two age groups, Digital Natives and Digital Immigrants, in explaining the acceptance level was confirmed to be significant for the constructs Attitude, Perceived Ease Of Use and Perceived Enjoyment.


Table of contents

Preface
1. Introduction
2. Theoretical background
2.1 Existing acceptance models
2.1.1 TAM
2.1.2 UTAUT
2.1.3 Almere model
2.1.3.1 Adopted factors
2.1.3.2 Additional factors
2.2 The robot's appearance
2.2.1 Additional factors and final model
2.3 Different age groups
3. Method
3.1 Participants
3.2 Design
3.3 Materials
3.3.1 Video material
3.3.2 The questionnaire
3.4 Procedure
3.4.1 Online procedure
3.4.2 Offline procedure
3.5 Data analysis
4. Results
5. Conclusion
6. Discussion
6.1 Implications
6.2 Further research
7. References
Appendix A - Questionnaire items per construct with Dutch translation and original source
Appendix B - Removed variables per construct
Appendix C - Hypothetical model in SmartPLS
Appendix D - Confirmed model in SmartPLS


Preface

First of all, I want to thank my supervisor Jacobijn for the excellent guidance during the process of writing my bachelor thesis. I learned a lot during our meetings.

Secondly, I want to thank Loek for the technical support with SmartPLS so I could continue analysing the data.

Third, I want to thank my parents for helping to recruit participants for this research. Last but not least, I want to mention my cat, which served as a social companion throughout the entire process of writing my thesis.


1. Introduction

After the invention of the digital computer in the 20th century, researchers tried to build this computer into a robot so that it would have a "mechanical brain". In the early stages, these robots were only deployed in factories to perform repetitive physical tasks. Human labor was partially replaced by robots that could operate autonomously. In 1966, a robot named Shakey was developed at the Artificial Intelligence Center of the Stanford Research Institute. Shakey was able to make its own decisions by identifying objects in its environment (Nilsson, 1984). Research into autonomous robots that operate independently and even make decisions on their own based on different types of input, e.g. visual, auditory or kinesthetic, is still thriving in today's digital society. Since the early 1990s, researchers have also been developing robots that interact socially with human beings. A so-called social robot is a robot that is able to perform non-physical tasks using technologies such as speech recognition, facial recognition and emotion recognition (Van Poelvoorde, 2016). An example of such a social robot is Kismet, made by Dr. Cynthia Breazeal (Breazeal, 2003). Kismet is a robot head that is able to interact with humans by recognizing and simulating human emotions.

Social robots currently operate in far more complex domains, such as healthcare and education. These domains are characterized by unstructured and changing environments. Social robots are therefore required to interact with human beings, converse with them and even form social relationships in order to understand their intentions and be able to operate in such environments (Breazeal, 2004). The integration of such robots in society may seem futuristic, but researchers are steadily making progress towards this goal. Healthcare has already adopted robots that perform work humans used to do. For instance, robots are being used in surgery to improve surgical precision (Howe & Matsuoka, 1999). In addition, healthcare robots have also been designed to assist the elderly. These assistive healthcare robots can be divided into two categories: rehabilitation robots and social robots (see Figure 1). An example of the first category is a smart wheelchair, which is primarily focused on physical assistance. Within the second category, where the focus is on communication with the user, two types of assistive social robots can be distinguished: service type robots and companion type robots. Service type robots assist by performing all kinds of tasks that support independent living (Broekens et al., 2009). An example of such a service type robot is Pepper, made by SoftBank Robotics. Pepper is a humanoid robot that is able to communicate through voice, gestures and the expression of emotions. Companion type robots are used to enhance the health and well-being of the user. An example of a companion type robot is Paro, a therapeutic seal robot, which is used as a social companion for the elderly (Wada & Shibata, 2007).


Figure 1. Categorization of assistive robots in eldercare. (Broekens et al., 2009)

However, it is important to know whether this new technology in healthcare will be accepted by the people who are meant to use it, because developing it would be a huge waste of time and money if they do not want to use it. The acceptance of social healthcare robots is essential to their successful implementation, which is why the behavioral intention of people to use the technology must be examined carefully. Several studies have predicted and explained the factors influencing the user's acceptance of new technology such as social healthcare robots. However, these studies were only conducted on two age groups, namely young adults (people from 40 to 65 years) and the elderly (people over 65 years), so these factors might have less influence on (or might not influence at all) the acceptance level of younger people. In the future, today's youth will become adults and eventually become part of the elderly themselves, so it is important to investigate the acceptance level for this younger age group.

2. Theoretical background

By now, a fairly detailed picture exists of the acceptance of new technology by elderly persons. Surprisingly, however, research on technology acceptance by the future elderly (today's youth) is scarce. Although various researchers have already outlined the influence of age on the acceptance level of robots (Kuo et al., 2009; Broadbent et al., 2009; Ezer, Fisk, & Rogers, 2009), the only two age groups investigated in these studies were (i) adults from 40 to 65 years, i.e. young adults, and (ii) adults over 65 years, i.e. elderly people. Over the years, different technology acceptance models have been created, which will be covered in the next section.

2.1 Existing acceptance models

2.1.1 TAM


The Technology Acceptance Model (TAM) of Davis (1989) states that two determinants shape the user's intention to use the system, which is the main factor that influences the actual use of the system. These determinants are perceived usefulness, defined as the degree to which a person believes that using a particular system would increase his or her job performance, and perceived ease-of-use, the degree to which a person believes that using a particular system would be free of effort (Davis, 1989). The TAM is presented in Figure 2. Both perceived ease-of-use and perceived usefulness are influenced by external variables and in turn influence the attitude towards the use of new technology.

Figure 2. The Technology Acceptance Model. (Davis, Bagozzi & Warshaw, 1989).

Criticism of this model has led to extended versions of the TAM, called the TAM 2 and TAM 3. The TAM 2 explains perceived usefulness and perceived ease-of-use in terms of social influence processes and cognitive processes (Venkatesh & Davis, 2000), while the TAM 3 addresses how managerial decision making can influence the acceptance of information technology (Venkatesh & Bala, 2008). These specific acceptance models form only a small part of the total number of acceptance models which have been created in various domains.

2.1.2 UTAUT

As stated before, research in the field of the acceptance of information technology has yielded many different models, such as the TAMs, each with a different set of acceptance determinants (Venkatesh, Morris, Davis & Davis, 2003). Venkatesh et al. reviewed and discussed eight prominent models in order to create a unified model called the Unified Theory of Acceptance and Use of Technology (UTAUT). This model outperformed all the individual models reviewed in their paper, which is why it is well suited to explain the user acceptance of new technology. The UTAUT model explains the intention to use and the use behavior of new information technology through four determinants: Performance Expectancy, Effort Expectancy, Social Influence and Facilitating Conditions. These are in turn moderated by Gender, Age, Experience and Voluntariness of Use (see Figure 3). The UTAUT explains 70 percent of the variance in the behavioral intention to use new technology (Venkatesh et al., 2003).


Figure 3. The Unified Theory of Acceptance and Use of Technology model. (Venkatesh et al., 2003).

2.1.3 Almere model

The previously discussed model of technology acceptance, the UTAUT, identifies the factors that explain the use behavior of new technology. Heerink et al. (2010) extended this model to create a model of technology acceptance that tests the acceptance of assistive sociable agents by elderly people. Apart from the factors from the UTAUT, this model, called the Almere model, explains intention and use behavior with factors mainly related to social interaction. Gender, Age, Experience and Voluntariness of Use were omitted from the model. These factors were found to be the main moderating influences in the UTAUT, yet Heerink et al. omitted them without explicit justification. However, as stated in their discussion, these moderating factors could complete their vision on the user's acceptance of social robots. Most of the factors in the Almere model are not directly related to the user's intention and actual use, but are interrelated, as can be seen in Figure 4. The Almere model has proven able to predict and explain the acceptance level of different kinds of systems. The relations that have been tested and confirmed for particular systems are indicated by numbers in Figure 4, where each number corresponds to a different system. The Almere model explains 59 to 79 percent of the variance in Intention To Use and 49 to 59 percent of the variance in Actual Use.


Figure 4. The Almere model. The numbers refer to: 1 iCat speech controlled, 2 Robocare videos, 3 iCat touch screen, 4 Screen agent Steffie. Dotted lines are not confirmed by any regression analysis. (Heerink et al., 2010).

2.1.3.1 Adopted factors

Perceived Usefulness and Perceived Ease of Use were originally adopted from the TAM. In the UTAUT, Perceived Usefulness was renamed Performance Expectancy and Perceived Ease of Use was renamed Effort Expectancy, both with a broader definition. Heerink et al. reused Perceived Usefulness and Perceived Ease of Use in the Almere model, because these fit the (care) home environment better than the work environment. Social Influence, Facilitating Conditions, Intention to Use and Actual Use were all adopted directly from the UTAUT.

2.1.3.2 Additional factors

The additional constructs of the Almere model are Perceived Enjoyment, Social Presence, Perceived Sociability, Trust, and Perceived Adaptivity. The model also includes the factors Anxiety and Attitude, because several studies found these to be directly influential (Heerink et al., 2010). Although these factors have been proven to largely explain the acceptance of assistive sociable agents by the elderly, additional research on the influence of the robot's physical appearance has also yielded significant differences, which will be covered in the next section.

2.2 The robot’s appearance

A study by Bartneck et al. (2010) examined how the degree of embarrassment patients experienced was influenced by the robot's level of anthropomorphism. There were three experimental conditions in this study: (i) a technical box, (ii) a technical robot and (iii) a lifelike robot, representing three different levels of anthropomorphism. Participants were asked to perform different kinds of tasks, each with a different level of embarrassment, and to rate their emotions during the experiment. The results show that Dutch students were less embarrassed when interacting with a technical box than with a robot, because the box was perceived less as a person. So the robot's level of anthropomorphism does influence the embarrassment a person experiences. This factor therefore needs to be taken into account when predicting the acceptance level, because the level of embarrassment might, directly or indirectly, influence the intention to use and eventually the actual use of social robots.

In addition, the human-likeness of a robot influences the familiarity one feels towards it. The more familiarity people perceive, the more likely the robot is to be accepted. Industrial robots are primarily made for their function and therefore have little human-likeness, which in turn results in little or no sense of familiarity, as can be seen in Figure 5. For the humanoid robot, the emphasis is not on functionality but rather on appearance. Humanoid robots are more human-like than industrial robots, and people feel a sense of familiarity because of this resemblance. A humanoid robot is therefore situated at the first peak. Once something reaches a point of human-likeness where it feels scary, strange or uncanny, familiarity drops to a negative value. This phenomenon is called the uncanny valley (Mori, 1970). A prosthetic hand is a good example, because advanced prosthetic hands are hardly distinguishable from a real human hand. At first glance it looks like a real hand, but when people find out that it is prosthetic, they get a feeling of strangeness, which results in a drop in familiarity. The more a social robot is perceived as human-like, the greater the risk of falling into the uncanny valley. A healthy person is placed at the top of the second peak, since people have a high sense of familiarity at one hundred percent human-likeness, i.e. one hundred percent resemblance.

Figure 5. The uncanny valley (Mori, 1970).

2.2.1 Additional factors and final model

Based on the literature review in section 2.2, it is advisable to add a new factor to the existing Almere model, namely Perceived Human-Likeness. Perceived Human-Likeness (PHL) can be defined as the degree to which one perceives the robot as a human being. Table 1 shows an overview of all constructs used in the final model of this paper, with their corresponding codes. Three additional hypothetical interrelations were formulated and tested in this study: Anxiety, Social Presence and Attitude are all influenced by Perceived Human-Likeness (see Figure 6). It is hypothesized that the higher the robot's level of anthropomorphism, the higher the anxiety towards the robot, because strong human-likeness can feel scary. Also, if the robot is perceived more as a human, it will feel more like a social entity; Perceived Human-Likeness is therefore expected to influence Social Presence as well. Finally, it is hypothesized that Perceived Human-Likeness also influences the Attitude toward the robot.

Code | Construct
ANX | Anxiety
ATT | Attitude
FC | Facilitating Conditions
ITU | Intention To Use
PAD | Perceived Adaptivity
PENJ | Perceived Enjoyment
PEOU | Perceived Ease Of Use
PHL | Perceived Human-Likeness
PS | Perceived Sociability
PU | Perceived Usefulness
SI | Social Influence
SP | Social Presence
TR | Trust

Table 1. Constructs and their corresponding codes.


Figure 6. The Almere model including the additional factor perceived human-likeness and the hypothetical relations. Dotted lines represent non-confirmed relations.

2.3 Different age groups

Prensky (2001) distinguishes two age groups, namely the Digital Natives and the Digital Immigrants. He states that today's students, who have grown up with technology, i.e. Digital Natives, have different thinking patterns than those who have not. Those who were not born in the digital world, but became fascinated by and adopted many aspects of new technology at some later point in their lives, are called Digital Immigrants (Prensky, 2001). Prensky points out that they will always retain their "accent" to some degree, i.e. they will fall back on the old-fashioned way of doing things. The two groups think about and use technology very differently, so this may influence the acceptance of new technology such as social healthcare robots. Wikipedia (n.d.) adds a third group, the Digital Intermediates: people who are close to the cutoff between Digital Natives and Digital Immigrants. However, the actual classification of people into Digital Natives, Digital Immigrants or Digital Intermediates is controversial. The classification used in this paper is covered in section 3.2.

In this study, the difference in technology acceptance between these different age groups is the central thread, which brings us to the following research question:

To what extent are the factors that explain technology acceptance of the older population equally valid for technology acceptance of younger people?


3. Method

3.1 Participants

In this study, a total of 97 participants was recruited through diversity sampling (or heterogeneity sampling). Diversity sampling is a form of non-probability sampling in which participants are intentionally selected to represent all different types, which in this study means participants from different age groups. They were partially recruited through online platforms such as Facebook, WhatsApp and LinkedIn, and partially at a local retirement home where the survey was conducted offline, i.e. on paper. Six participants who already had experience with social robots, and therefore had prior knowledge, were excluded from this study, resulting in 91 valid participants (32 male, 59 female). Ages ranged from 15 to 92 years, with a mean of 42.59 years. Further details on the age of the participants per category, conducted either offline or online, can be found in Table 2.

Report on Age

Conducted | N | Minimum | Maximum | Mean | Mean Std. Error | Std. Deviation
Offline | 10 | 70 | 92 | 78.00 | 2.155 | 6.815
Online | 81 | 15 | 80 | 38.22 | 2.026 | 18.233
Total | 91 | 15 | 92 | 42.59 | 2.240 | 21.369

Table 2. Descriptive statistics of the participants.

3.2 Design

The experiment used a between-groups design to identify differences between the age groups (Burns & Burns, 2008). The independent variable in this study is age. Three age groups were distinguished, reflecting the Digital Natives, Digital Immigrants and Digital Intermediates. Participants up to 25 years old were labeled Digital Natives, whereas participants of 50 years and older were labeled Digital Immigrants, in order to make a clear distinction between people who have grown up with technology and people who have not. The intermediate category, people aged 26 to 49, was labeled Digital Intermediates. Table 3 shows the total number of participants per age group. The dependent variable was the participants' intention to use the technology, as measured by the questionnaire: the extent to which participants would accept and use social robots in the foreseeable future.
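For clarity, the grouping rule above can be expressed as a small function. This is a minimal sketch; the function name and labels are illustrative and not taken from the thesis.

```python
def age_group(age: int) -> str:
    """Classify a participant by age, using the cutoffs from section 3.2."""
    if age <= 25:
        return "Digital Native"      # grew up with technology
    if age >= 50:
        return "Digital Immigrant"   # adopted technology later in life
    return "Digital Intermediate"    # close to the cutoff, aged 26-49

# The sample spans the full age range reported in Table 2:
print(age_group(15), age_group(42), age_group(92))
```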


Age group | N
Digital Natives | 34
Digital Immigrants | 35
Digital Intermediates | 22

Table 3. Number of participants per age group.

3.3 Materials

3.3.1 Video material

The robot used in this experiment is Pepper, which is categorised as a service type robot (Figure 1). To give participants a broad view, a self-compiled video showed Pepper in three settings in which it could be used in the current healthcare system: (i) in home care, (ii) in the hospital and (iii) at the general practitioner's office. The video consisted of four fragments. The first two fragments showed Pepper as a social companion robot for elderly people living alone; Pepper is able to hold conversations and be entertaining at the same time. The third fragment showed Pepper as a host in a hospital, guiding visitors to the right room. The last fragment showed Pepper as a general practitioner performing various tasks, such as measuring blood pressure and talking about the patient's current health. Dutch subtitles were added to the video to make sure the participants understood the spoken text, because it is easier for people who do not (fully) master the English language to follow the content of a video if they can read the text while listening.

3.3.2 The questionnaire

To measure the influence of the factors in the model, a questionnaire was constructed. A minimum of four statements per construct was included to improve reliability, resulting in a total of 60 statements.

Questionnaire items were adopted for each construct of the Almere model from the study of Heerink et al. (2010). Most of these items were originally derived from Venkatesh et al. (2003), but Heerink et al. focused on another domain, namely assistive sociable robots for the elderly, so most items had been slightly edited to fit that context. New questionnaire items were constructed for Perceived Human-Likeness (which was added as a new construct to the model), Attitude, Perceived Usefulness, Social Presence and Trust. Furthermore, a number of questionnaire items were adopted from a study by a fellow student (Gubler). All statements were carefully translated to Dutch using a form of Heerink et al. that was used in the iCat experiment. Finally, statements were modified slightly because participants did not actually use a social healthcare robot but viewed video content about it, and a number of statements were negatively formulated to counteract acquiescence response bias. In total, 9 out of 60 statements were negatively formulated. Participants could respond to the statements on a 5-point Likert scale ranging from 1 to 5: totally disagree - disagree - don't know - agree - totally agree.

3.4 Procedure

A distinction is made between the online and the offline procedure, because they differed slightly. In total, 81 people participated in this study through the online procedure, versus 10 people through the offline procedure. Both procedures are described in detail below.

3.4.1 Online procedure

The survey was distributed by means of a hyperlink to Google Forms, so that participants could complete it anywhere. The survey was divided into four sections: (i) an introduction to the subject, (ii) video material, (iii) personal information about the participant, and (iv) questionnaire items reflecting the factors of the created model. First, the purpose of the study was explained and an explanatory text introduced the concept of social healthcare robots. Secondly, a video was shown containing four fragments in which the social healthcare robot Pepper is presented. Afterwards, questions about the gender (Male/Female/Other) and the age of the participant were asked, which were used as background variables. A statement about the participant's experience with social robots (True/False) was appended to verify that the participant had no prior knowledge. Emphasis was placed on the anonymity of the data. Then, the participant was asked to fill in the questionnaire of 60 statements. Participants were thanked for their time and effort once they had successfully completed the survey.

3.4.2 Offline procedure

At the local retirement home, the examiner explained the purpose of the study while the residents were having coffee. Almost immediately after the concept of social healthcare robots was introduced, a discussion started in which various examples of robots and their negative aspects were mentioned. For instance, one participant remarked that "a robotic lawn mower cannot detect a hedgehog, which will therefore be crushed". After the discussion, the examiner emphasized the voluntariness of participation and the anonymity of the data obtained. The group was split in half, so that each group, hereafter group A and group B, contained 5 to 6 people. First, the video material was shown to group A. Group A then filled in the questionnaire while the video was shown to group B. When all participants had completed the questionnaire, they were thanked for their time and effort.


3.5 Data analysis

Before performing statistical analyses on the dataset, IBM SPSS Statistics (version 25.0) was used to convert all data into the right types and measurement levels. Negatively phrased questionnaire items were reverse coded to make sure all items were scored in the same direction. Non-valid participants, i.e. participants with prior experience with social robots, were removed from the dataset. The dataset was then ready for path modeling with Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS (version 3.2.7).
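The preprocessing described here was done in SPSS; the following is a minimal sketch of the same two steps in Python. The file name, item labels and the `experience` column are hypothetical placeholders, not the actual variable names of the study.

```python
import pandas as pd

# Hypothetical item labels for the negatively phrased statements;
# the real dataset uses its own column names.
NEGATIVE_ITEMS = ["ITU3", "ITU4", "PENJ5", "PU4"]

df = pd.read_csv("responses.csv")

# Reverse code 5-point Likert scores so that all items point the same way:
# 1 <-> 5, 2 <-> 4, 3 stays 3.
df[NEGATIVE_ITEMS] = 6 - df[NEGATIVE_ITEMS]

# Drop participants who reported prior experience with social robots
# (assuming a boolean "experience" column).
df = df[~df["experience"]].reset_index(drop=True)
```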

4. Results

First, a reliability analysis was performed to determine whether the variables accurately measure the constructs of the created model. The Cronbach's alpha test showed that Facilitating Conditions (0.637), Intention To Use (0.560), Perceived Adaptivity (0.655), Perceived Ease of Use (0.664), Perceived Usefulness (0.682) and Trust (0.629) did not meet the threshold value of 0.7, which is indicated as the minimum value to be considered acceptable (Burns & Burns, 2008). After removing the variables with a factor loading below 0.7, most constructs exceeded the minimum Cronbach's alpha, as can be seen in Figure 7. The initial and remaining number of variables per construct can be found in Table 4; the complete list of removed variables can be found in Appendix B. The alphas of Intention To Use (0.648), Perceived Adaptivity (0.691) and Perceived Ease Of Use (0.674) increased but still failed to reach the threshold. These constructs were nevertheless kept in the model because their values were close to the threshold.
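Cronbach's alpha can be computed directly from the item scores of a construct. A minimal sketch of the standard formula follows; SPSS produces the same statistic.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)
```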

Figure 7. Results of the Cronbach's alpha test.

Figure 8. AVE per construct.

The next step was to test whether the Average Variance Extracted (AVE) of each construct was 0.5 or above. Figure 8 shows that all constructs satisfy this requirement. Next, the Partial Least Squares (PLS) Algorithm was run on the remaining model. The PLS path modeling method was developed by Wold (1982); the PLS algorithm is a sequence of regressions in terms of weight vectors, using a path weighting scheme that yields the path coefficients and the explained variance of every construct in the model.
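For a reflective construct, the AVE is simply the mean of the squared factor loadings of its items. A minimal sketch, with illustrative loadings that are not the study's actual values:

```python
import numpy as np

def average_variance_extracted(loadings) -> float:
    """AVE of a reflective construct: the mean of the squared factor loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Illustrative loadings for a three-item construct:
print(average_variance_extracted([0.82, 0.75, 0.79]))  # about 0.62, above the 0.5 threshold
```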


Construct | Variables before the Cronbach's alpha test | Variables after the Cronbach's alpha test
ANX | 5 | 2
ATT | 5 | 5
FC | 4 | 2
ITU | 4 | 2
PAD | 5 | 3
PENJ | 5 | 4
PEOU | 6 | 3
PHL | 4 | 2
PS | 4 | 3
PU | 4 | 3
SI | 4 | 3
SP | 5 | 4
TR | 5 | 2

Table 4. Number of variables per construct before and after performing the Cronbach's alpha test.

The PLS Algorithm was performed on the complete dataset. The resulting model with path coefficients and explained variances is shown in Appendix C. The next step was to determine whether these PLS-SEM results are statistically significant using bootstrapping. The number of bootstrap subsamples was set to 5000 to ensure stable results. The results of the bootstrapping can be found in Table 5, where the effect size (Cohen, 1988) of each path coefficient is also reported. Each hypothesis in the model, i.e. each link between a dependent and an independent factor, was tested for statistical significance. A new model was created from the confirmed (inter)relations (see Figure 9); its path coefficients and explained variance in SmartPLS are shown in Appendix D. The confirmed model has an explained variance (R²) of 0.596 for the construct Intention To Use.
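SmartPLS re-estimates the whole PLS model on each of the 5000 bootstrap subsamples. The idea can be illustrated for a single standardized path; this is a simplified sketch, not SmartPLS's full procedure.

```python
import numpy as np
from scipy import stats

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000, seed: int = 1):
    """Bootstrap significance of a single standardized path y ~ x.

    Resamples participants with replacement, recomputes the coefficient on
    each subsample, and derives a two-sided p-value from the bootstrap
    standard error (normal approximation).
    """
    rng = np.random.default_rng(seed)
    coef = np.corrcoef(x, y)[0, 1]  # standardized coefficient of one path
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
        boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    t = coef / boot.std(ddof=1)     # coefficient divided by its bootstrap SE
    p = 2 * stats.norm.sf(abs(t))   # two-sided p-value
    return coef, p
```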

Hypothesis | Path Coefficient | Effect Size | P-Value
ANX -> ATT | -0.287 | Medium | 0.000*
ANX -> PEOU | 0.067 | Small | 0.501
ANX -> PU | -0.155 | Small | 0.056
ATT -> ITU | 0.619 | Medium | 0.000*
ATT -> TR | 0.558 | Medium | 0.000*
FC -> ITU | 0.062 | Small | 0.429
PAD -> ATT | 0.233 | Medium | 0.009*
PAD -> PU | 0.329 | Medium | 0.010*
PEOU -> PU | 0.249 | Medium | 0.050*
PENJ -> ITU | -0.053 | Small | 0.739
PENJ -> PEOU | 0.619 | Medium | 0.000*
PHL -> ANX | -0.176 | Small | 0.077
PHL -> ATT | 0.045 | Small | 0.595
PHL -> SP | 0.472 | Medium | 0.000*
PS -> PENJ | 0.799 | Medium | 0.000*
PS -> SP | 0.474 | Medium | 0.000*
PU -> ITU | 0.322 | Medium | 0.001*
SI -> ATT | 0.512 | Medium | 0.000*
SI -> ITU | 0.025 | Small | 0.808
SP -> PENJ | -0.046 | Small | 0.620
TR -> PS | 0.662 | Medium | 0.000*

Table 5. Path coefficient, effect size and P-value per hypothesis. * = significant at p < 0.05.

Figure 9. The confirmed model with path coefficients.

Next, a Multivariate Analysis of Variance (MANOVA) with the LSD Post-Hoc Test was performed to identify significant differences between the means of the age groups. The mean score per construct for each age group can be found in Table 6; the output of the Post-Hoc Test is shown in Table 7. Significant differences between the means of Digital Natives and Digital Immigrants were found for the constructs Attitude, Perceived Ease Of Use and Perceived Enjoyment. The mean differences between Digital Immigrants and Digital Intermediates were statistically significant for the constructs Anxiety and Trust. A significant difference between the means of Digital Natives and Digital Intermediates was found for the construct Facilitating Conditions.
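Fisher's LSD amounts to pairwise t-tests that share the pooled within-group error variance from the ANOVA decomposition. A minimal sketch per construct follows; SPSS reports the same quantities (mean difference, standard error, significance).

```python
from itertools import combinations
import numpy as np
from scipy import stats

def lsd_posthoc(groups: dict) -> None:
    """Fisher's LSD test: pairwise comparisons with a pooled error variance.

    `groups` maps group names to arrays of construct sum scores, e.g.
    {"Digital Natives": ..., "Digital Immigrants": ..., "Digital Intermediates": ...}.
    """
    n_total = sum(len(g) for g in groups.values())
    k = len(groups)
    # Pooled within-group variance (the ANOVA mean squared error):
    mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups.values()) / (n_total - k)
    df_error = n_total - k
    for (name_i, gi), (name_j, gj) in combinations(groups.items(), 2):
        diff = np.mean(gi) - np.mean(gj)
        se = np.sqrt(mse * (1 / len(gi) + 1 / len(gj)))
        p = 2 * stats.t.sf(abs(diff / se), df_error)
        print(f"{name_i} vs {name_j}: mean difference {diff:.4f}, SE {se:.5f}, p = {p:.3f}")
```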


Construct | Age group | Mean | Std. Deviation
ANX | Digital Natives | 12.7647 | 3.52531
ANX | Digital Immigrants | 13.9429 | 4.38542
ANX | Digital Intermediates | 11.5000 | 3.58236
ATT | Digital Natives | 15.9706 | 3.79382
ATT | Digital Immigrants | 13.0857 | 4.17536
ATT | Digital Intermediates | 15.5909 | 4.85660
FC | Digital Natives | 11.1765 | 2.28924
FC | Digital Immigrants | 11.7143 | 3.18610
FC | Digital Intermediates | 12.5455 | 2.13201
ITU | Digital Natives | 11.9118 | 2.84304
ITU | Digital Immigrants | 11.2286 | 2.77686
ITU | Digital Intermediates | 11.8636 | 3.19666
PAD | Digital Natives | 15.6471 | 2.77306
PAD | Digital Immigrants | 14.7143 | 3.65876
PAD | Digital Intermediates | 15.9545 | 2.64534
PEOU | Digital Natives | 20.4706 | 2.69944
PEOU | Digital Immigrants | 18.7143 | 3.99685
PEOU | Digital Intermediates | 20.2273 | 4.05829
PENJ | Digital Natives | 16.7941 | 3.36452
PENJ | Digital Immigrants | 14.3429 | 4.48490
PENJ | Digital Intermediates | 16.7273 | 4.88127
PHL | Digital Natives | 9.0294 | 2.50436
PHL | Digital Immigrants | 9.8286 | 3.09160
PHL | Digital Intermediates | 9.8636 | 3.07518
PS | Digital Natives | 11.6176 | 2.44967
PS | Digital Immigrants | 10.2000 | 2.98821
PS | Digital Intermediates | 11.3636 | 3.28844
PU | Digital Natives | 12.4412 | 2.88351
PU | Digital Immigrants | 11.1143 | 2.96818
PU | Digital Intermediates | 12.4091 | 2.68433
SI | Digital Natives | 10.8529 | 3.12500
SI | Digital Immigrants | 9.7429 | 3.15616
SI | Digital Intermediates | 10.6818 | 2.69720
SP | Digital Natives | 12.3235 | 3.25409
SP | Digital Immigrants | 11.7714 | 4.19443
SP | Digital Intermediates | 12.2727 | 3.70562
TR | Digital Natives | 14.6765 | 2.99212
TR | Digital Immigrants | 12.9143 | 2.99383
TR | Digital Intermediates | 15.6364 | 3.52603

Table 6. The mean and standard deviation per construct for each age group.

Dependent Variable | (I) Age group | (J) Age group | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound
ANX | Digital Natives | Digital Immigrants | -1.2040 | .96312 | .215 | -3.1180 | .7100
ANX | Digital Natives | Digital Intermediates | 1.0047 | 1.03025 | .332 | -1.0427 | 3.0521
ANX | Digital Immigrants | Digital Intermediates | 2.2087* | 1.04380 | .037 | .1344 | 4.2831
ATT | Digital Natives | Digital Immigrants | 2.6893* | 1.05165 | .012 | .5994 | 4.7793
ATT | Digital Natives | Digital Intermediates | .9306 | 1.12495 | .410 | -1.3050 | 3.1662
ATT | Digital Immigrants | Digital Intermediates | -1.7587 | 1.13974 | .126 | -4.0238 | .5063
FC | Digital Natives | Digital Immigrants | -.3548 | .64545 | .584 | -1.6375 | .9279
FC | Digital Natives | Digital Intermediates | -1.5035* | .69043 | .032 | -2.8756 | -.1314
FC | Digital Immigrants | Digital Intermediates | -1.1487 | .69951 | .104 | -2.5389 | .2414
ITU | Digital Natives | Digital Immigrants | .5055 | .71837 | .483 | -.9221 | 1.9331
ITU | Digital Natives | Digital Intermediates | .3518 | .76843 | .648 | -1.1753 | 1.8789
ITU | Digital Immigrants | Digital Intermediates | -.1538 | .77854 | .844 | -1.7009 | 1.3934
PAD | Digital Immigrants | Digital Intermediates | -.5113 | .84072 | .545 | -2.1820 | 1.1595
PEOU | Digital Natives | Digital Immigrants | 1.8143* | .88271 | .043 | .0601 | 3.5685
PEOU | Digital Natives | Digital Intermediates | .3506 | .94423 | .711 | -1.5259 | 2.2270
PEOU | Digital Immigrants | Digital Intermediates | -1.4638 | .95665 | .130 | -3.3649 | .4374
PENJ | Digital Natives | Digital Immigrants | 2.3879* | 1.04397 | .025 | .3132 | 4.4625
PENJ | Digital Natives | Digital Intermediates | .4341 | 1.11673 | .698 | -1.7851 | 2.6534
PENJ | Digital Immigrants | Digital Intermediates | -1.9538 | 1.13142 | .088 | -4.2022 | .2947
PHL | Digital Natives | Digital Immigrants | -1.1268 | .70614 | .114 | -2.5301 | .2765
PHL | Digital Natives | Digital Intermediates | -.4106 | .75535 | .588 | -1.9117 | 1.0905
PHL | Digital Immigrants | Digital Intermediates | .7162 | .76529 | .352 | -.8046 | 2.2371
PS | Digital Natives | Digital Immigrants | 1.1176 | .71682 | .123 | -.3069 | 2.5422
PS | Digital Natives | Digital Intermediates | .7776 | .76677 | .313 | -.7462 | 2.3014
PS | Digital Immigrants | Digital Intermediates | -.3400 | .77686 | .663 | -1.8838 | 1.2038
PU | Digital Natives | Digital Immigrants | 1.2224 | .71252 | .090 | -.1936 | 2.6384
PU | Digital Natives | Digital Intermediates | .3212 | .76218 | .674 | -1.1935 | 1.8359
PU | Digital Immigrants | Digital Intermediates | -.9012 | .77221 | .246 | -2.4359 | .6334
SI | Digital Natives | Digital Immigrants | .9154 | .75348 | .228 | -.5819 | 2.4128
SI | Digital Natives | Digital Intermediates | .5329 | .80599 | .510 | -1.0688 | 2.1347
SI | Digital Immigrants | Digital Intermediates | -.3825 | .81659 | .641 | -2.0053 | 1.2403
SP | Digital Natives | Digital Immigrants | .1360 | .92296 | .883 | -1.6982 | 1.9702
SP | Digital Natives | Digital Intermediates | .6435 | .98728 | .516 | -1.3185 | 2.6055
SP | Digital Immigrants | Digital Intermediates | .5075 | 1.00027 | .613 | -1.4803 | 2.4953
TR | Digital Natives | Digital Immigrants | 1.5202 | .79361 | .059 | -.0569 | 3.0974
TR | Digital Natives | Digital Intermediates | -.3235 | .84892 | .704 | -2.0106 | 1.3635
TR | Digital Immigrants | Digital Intermediates | -1.8438* | .86009 | .035 | -3.5530 | -.1345

Table 7. Output of the LSD Post-Hoc Test. * = the mean difference is significant at p < 0.05.


5. Conclusion

Existing models of the acceptance of new technology do not fully agree on which factors matter. By means of a literature review, an additional factor influencing the acceptance of assistive sociable robots was identified. In this study, the Perceived Human-Likeness of the robot was investigated and confirmed to influence one other factor in the model: Perceived Human-Likeness has a statistically significant influence on Social Presence. This study confirms that Perceived Human-Likeness, Perceived Adaptivity, Anxiety, Social Presence, Perceived Sociability, Social Influence, Attitude, Perceived Usefulness, Perceived Ease Of Use, Perceived Enjoyment and Trust have a significant direct or indirect influence on the Intention To Use social healthcare robots. The confirmed model explains 59.6 percent of the variance (R²) in the acceptance of social healthcare robots, which is in line with the findings of Heerink et al. (2010). Based on the output of the Post-Hoc Test in Table 7, no significant differences were found between the means of Intention To Use, i.e. the acceptance level, of the three age groups. However, the differences between the means of Digital Natives and Digital Immigrants were statistically significant for the constructs Attitude, Perceived Ease Of Use and Perceived Enjoyment. The answer to the research question of this paper is therefore: Digital Immigrants are less inclined to accept a social healthcare robot than Digital Natives based on their scores on Attitude, Perceived Ease Of Use and Perceived Enjoyment. All other constructs are equally valid for both age groups in explaining the acceptance level of social healthcare robots.


6. Discussion

6.1 Implications

As several participants indicated, the questionnaire of 60 statements could be daunting, which may make people less willing to participate. Apart from that, qualitative observations during the experiment yielded another point of interest: various participants pointed out that it was hard to form an opinion about something they had never used. Powers et al. (2007) investigated the differences in people's social interaction with a humanoid robot versus a computer agent on a monitor, and found large differences in attitude across the conditions used in their study. So, to get more reliable results, a social humanoid robot should be used instead of a video. It was, however, beyond the scope of this study to use an actual social robot that interacts with the participants. Still, showing visual content should give a good impression of what a social healthcare robot might look like and how it could be useful in different areas of healthcare.

Another concern is that 8 out of the 9 negatively formulated statements in the questionnaire were removed after the reliability analysis, because the factor loadings of these variables were too small. This might indicate that participants either were confused by the negative statements or were simply responding randomly.

6.2 Further research

Another way to test whether the influence of the social robot's appearance on the acceptance level differs significantly between Digital Natives and Digital Immigrants is a 2x2 design in which two social robots, each with a different level of anthropomorphism, are presented. Each age group should use both robots and fill in the questionnaire, after which it can be examined whether there is a statistically significant difference in the level of acceptance between the groups.

Also, increasing the sample size of each age group would make a Multi-Group Analysis (MGA) possible to identify significant differences in the group-specific PLS path model estimations. MGA via PLS-SEM is an effective way to evaluate moderation across multiple relations.


7. References

Bartneck, C., Bleeker, T., Bun, J., Fens, P., & Riet, L. (2010). The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn, Journal of Behavioral Robotics, 1(2), 109-115.

Breazeal, C. (2003). Toward sociable robots. Robotics and autonomous systems, 42(3-4), 167-175.

Breazeal, C. L. (2004). Designing sociable robots. MIT press.

Broadbent, E., Stafford, R., & MacDonald, B. (2009). Acceptance of healthcare robots for the older population: Review and future directions. International Journal of Social Robotics, 1(4), 319.

Broekens, J., Heerink, M., & Rosendal, H. (2009). Assistive social robots in elderly care: a review. Gerontechnology, 8(2), 94-103.

Burns, R. P., & Burns, R. (2008). Business research methods and statistics using SPSS. Sage.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quarterly, 319-340.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Management science, 35(8), 982-1003.

Digital native. (n.d.). In Wikipedia. Retrieved June 16, 2018, from https://en.wikipedia.org/wiki/Digital_native

Ezer, N., Fisk, A. D., & Rogers, W. A. (2009, July). Attitudinal and intentional acceptance of domestic robots by younger and older adults. In International Conference on Universal Access in Human-Computer Interaction (pp. 39-48). Springer Berlin Heidelberg.

Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: the Almere model. International journal of social robotics, 2(4), 361-375.


Kuo, I. H., Rabindran, J. M., Broadbent, E., Lee, Y. I., Kerse, N., Stafford, R. M. Q., & MacDonald, B. A. (2009, September). Age and gender factors in user acceptance of healthcare robots. In Robot and Human Interactive Communication, 2009. RO-MAN 2009. The 18th IEEE International Symposium on (pp. 214-219). IEEE.

Mori, M. (1970). The uncanny valley. Energy, 7(4), 33-35.

Nilsson, N. J. (1984). Shakey the robot. Menlo Park, CA: SRI International.

Powers, A., Kiesler, S., Fussell, S., & Torrey, C. (2007, March). Comparing a computer agent with a humanoid robot. In Human-Robot Interaction (HRI), 2007 2nd ACM/IEEE International Conference on (pp. 145-152). IEEE.

Prensky, M. (2001). Digital natives, digital immigrants part 1. On the horizon, 9(5), 1-6.

Van Poelvoorde, R. (2016, May 2). Sociale robot. Retrieved from https://www.ensie.nl/randall-van-poelvoorde/sociale-robot

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision sciences, 39(2), 273-315.

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management science, 46(2), 186-204. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. ​MIS quarterly: Management Information

Systems, 27(3), 425-478.

Wada, K., & Shibata, T. (2007). Living with seal robots—its sociopsychological and physiological influences on the elderly at a care house. IEEE Transactions on Robotics, 23(5), 972-980.

Wold, H. (1982). Soft modeling: the basic design and some extensions. Systems under indirect observation, 2, 343.


Appendix A - Questionnaire items per construct with Dutch translation and original source

Each row lists the question item, its Dutch translation, and the original source.

Anxiety
If I should have the robot, I would be afraid to make mistakes with it. | Als ik een robot zou hebben, zou ik bang zijn er fouten mee te maken. | Heerink et al. (2010)
If I should have the robot, I would be afraid to break something. | Als ik een robot zou hebben, zou ik bang zijn iets stuk te maken. | Heerink et al. (2010)
I find the robot scary. | Ik vind de robot eng. | Heerink et al. (2010)
I find the robot intimidating. | Ik vind de robot intimiderend. | Heerink et al. (2010)
I would be afraid the robot would hurt me. | Ik zou bang zijn dat een robot me pijn zou doen. | Thesis Edward Gubler

Attitude
I think it's a good idea to be assisted by a robot. | Ik vind het een goed idee om door een robot geholpen te worden. | Heerink et al. (2010)
The robot would make life more interesting. | Een robot kan mijn dagelijkse leven interessanter maken. | Heerink et al. (2010)
I would like to be assisted by a robot. | Ik zou het fijn vinden om door een robot geholpen te worden. | Dekker
I have a positive feeling about the robot. | Ik heb een positief gevoel over de robot. | Dekker
I am interested in what a social robot can do for me. | Ik ben geïnteresseerd in wat een robot voor me kan doen. | Dekker

Facilitating Conditions
I have everything I need to use the robot. | Ik heb alles wat ik nodig heb om goed met een robot te kunnen omgaan. | Heerink et al. (2010)
I know enough of the robot to make good use of it. | Ik denk dat ik genoeg weet om goed met een robot om te gaan. | Heerink et al. (2010)
If necessary, there is someone who can help me with using the robot. | Er is, indien nodig, iemand die me kan helpen met het gebruik van een robot. | Heerink et al. (2010) (Form iCat)
Using the robot would be comparable to other technology I use. | De robot is vergelijkbaar met andere technologie die ik ken. | Thesis Edward Gubler

Intention to Use
I think I'll use the robot in the future. | Ik denk erover om me in de toekomst door een robot te laten ondersteunen. | Heerink et al. (2010)
I'm certain to use the robot in the future. | Ik weet zeker dat ik in de toekomst te maken zal krijgen met een robot. | Heerink et al. (2010)
I have no interest in using a robot whatsoever. | Ik ben tot nu toe niet geïnteresseerd in de assistentie van een robot. | Thesis Edward Gubler
I am not intending to ever use a robot in the future. | Ik ben niet van plan om ooit door een robot ondersteund te worden. | Thesis Edward Gubler

Perceived Adaptivity
I think the robot can be adaptive to what I need. | Ik denk dat de robot zich aanpast aan wat ik nodig heb. | Heerink et al. (2010)
I think the robot will only do what I need at that particular moment. | Ik heb het idee dat de robot alleen dat voor me doet waar ik op dat moment behoefte aan heb. | Heerink et al. (2010)
I think the robot will help me when I consider it to be necessary. | De robot zal me pas helpen als ik dat nodig vind. | Heerink et al. (2010)
I think the robot would know when I don't need any help. | Ik denk dat de robot zal weten wanneer ik geen hulp nodig heb. | Thesis Edward Gubler
When my needs change I think the robot will be able to recognise this. | Ik denk dat de robot zal begrijpen wanneer mijn behoeften veranderen. |

Perceived Ease of Use
I think I will know quickly how to use the robot. | Ik denk dat ik snel door heb hoe ik met een robot moet omgaan. | Heerink et al. (2010)
I find the robot easy to use. | De robot lijkt mij makkelijk in de omgang. | Heerink et al. (2010)
I think I can use the robot without any help. | Ik denk dat ik met een robot kan omgaan zonder hulp. | Heerink et al. (2010)
I think I can use the robot when there is someone around to help me. | Ik denk dat ik met een robot kan omgaan als er iemand in de buurt is om te helpen. | Heerink et al. (2010)
I think I can use the robot when I have a good manual. | Ik denk dat ik met een robot kan omgaan als ik een goede handleiding heb. | Heerink et al. (2010)
It would be hard to make a robot understand what I want. | Het lijkt mij moeilijk om een robot duidelijk te maken wat ik wil. | Heerink et al. (2010) (Form iCat)

Perceived Enjoyment
I enjoy the robot talking to me. | Het lijkt mij leuk als een robot tegen me praat. | Heerink et al. (2010)
I enjoy doing things with the robot. | Het lijkt mij leuk om met een robot dingen te doen. | Heerink et al. (2010)
I find the robot enjoyable. | De robot lijkt mij plezierig. | Heerink et al. (2010)
I find the robot fascinating. | Ik vind de robot boeiend. | Heerink et al. (2010)
I find the robot boring. | Ik vind de robot saai. | Heerink et al. (2010)

Perceived Human-Likeness
The robot's physical appearance is very similar to that of a person. | Het uiterlijk van de robot lijkt sterk op dat van een mens. | Dekker
The robot comes across as human-like. | Ik vind dat de robot mensachtig over komt. | Dekker
The robot does not appear to be human at all. | Ik vind dat de robot ... | Dekker
The robot is indistinguishable from a real human. | De robot is niet te onderscheiden van een mens. | Dekker

Perceived Sociability
I consider the robot a pleasant conversational partner. | Ik vind de robot een prettige conversatiepartner. | Heerink et al. (2010)
I find the robot pleasant to interact with. | Ik vind de robot prettig in de omgang. | Heerink et al. (2010)
I feel the robot understands me. | Ik heb het gevoel dat de robot begrip voor mensen heeft. | Heerink et al. (2010)
I think the robot is nice. | Ik vind de robot aardig. | Heerink et al. (2010)

Perceived Usefulness
I think the robot is useful to me. | Ik denk dat een robot nuttig is voor mij. | Heerink et al. (2010)
It would be convenient for me to have the robot. | Ik zou het handig vinden om een robot te hebben. | Heerink et al. (2010)
I think the robot can help me with many things. | Ik denk dat een robot me met veel dingen kan helpen. | Heerink et al. (2010)
The robot would be of no use to me. | Ik zou niets aan een robot hebben. | Dekker

Social Influence
I think it would give a good impression if I should use the robot. | Het zou een goede indruk geven als ik een robot zou hebben. | Heerink et al. (2010)
My family would be pleased to see me using a robot. | Mijn familie zou graag zien dat ik door een robot word geassisteerd. | Heerink et al. (2010) (Form iCat)
I think the people around me would appreciate it if I should use the robot. | Ik denk dat de mensen om me heen het zouden waarderen als ik een robot zou hebben. | Heerink et al. (2010) (Form iCat)
I think many people would like me if I had the robot. | Ik denk dat veel mensen het leuk zouden vinden als ik een robot zou hebben. | Heerink et al. (2010) (Form iCat)

Social Presence
I have the feeling the robot interacts like a real person. | Ik heb het gevoel dat de robot als een echt persoon communiceert. | Heerink et al. (2010)
It sometimes felt as if the robot made really eye contact with the person. | Ik had af en toe het gevoel dat de robot echt oogcontact maakte met de persoon. | Heerink et al. (2010)
I can imagine the robot to be a living creature. | Ik kan de robot zien als een levend wezen. | Heerink et al. (2010)
I often think the robot is not a real person. | Ik zie de robot niet als een echt persoon. | Heerink et al. (2010)
Sometimes the robot seems to have real feelings. | Ik vind dat de robot soms echt gevoel lijkt te hebben. | Heerink et al. (2010)

Trust
I would trust the robot if it gave me advice. | Ik zou een robot vertrouwen als het me advies gaf. | Heerink et al. (2010)
I would follow the advice the robot gives me. | Ik zou een advies van een robot ook opvolgen. | Heerink et al. (2010)
The robot would not take advantage of the information I give. | Als ik de robot informatie zou geven, zou daar geen misbruik van gemaakt worden. | Heerink et al. (2010) (Form iCat)
I would not trust a robot with my personal belongings. | Ik zou een robot niet vertrouwen met mijn persoonlijke bezittingen. | Dekker
I would trust a real person more than a robot. | Ik zou een echt persoon meer vertrouwen dan een robot. |


Appendix B - Removed variables per construct

Construct | Removed variable(s) | Factor loading
Anxiety | If I should have the robot, I would be afraid to make mistakes with it. | 0.678
Anxiety | If I should have the robot, I would be afraid to break something. | 0.572
Anxiety | I would be afraid the robot would hurt me. | 0.697
Attitude | - | -
Facilitating Conditions | If necessary, there is someone who can help me with using the robot. | 0.430
Facilitating Conditions | Using the robot would be comparable to other technology I use. | 0.618
Intention To Use | I'm certain to use the robot in the future. | 0.480
Intention To Use | I have no interest in using a robot whatsoever. | 0.493
Perceived Adaptivity | I think the robot will only do what I need at that particular moment. | 0.318
Perceived Adaptivity | I think the robot would know when I don't need any help. | 0.617
Perceived Ease Of Use | I think I will know quickly how to use the robot. | 0.626
Perceived Ease Of Use | I think I can use the robot when there is someone around to help me. | 0.484
Perceived Ease Of Use | It would be hard to make a robot understand what I want. | 0.390
Perceived Enjoyment | I find the robot boring. | 0.530
Perceived Human-Likeness | The robot does not appear to be human at all. | 0.655
Perceived Human-Likeness | The robot is indistinguishable from a real human. | 0.569
Perceived Sociability | I feel the robot understands me. | 0.546
Social Influence | It would give a good impression if I should use the robot. | 0.681
Social Presence | I often think the robot is not a real person. | 0.315
Trust | The robot would not take advantage of the information I give. | 0.612
Trust | I would not trust a robot with my personal belongings. | 0.434
Trust | I would trust a real person more than a robot. |

Appendix C - Hypothetical model in SmartPLS

Appendix D - Confirmed model in SmartPLS
