
To be seen or not to be seen

The influence of eye pictures on socially desirable responding in surveys


Taco Bos

University of Groningen, Faculty of Economics and Business, Master's thesis

September 13, 2008

A: Plantage Middenlaan 82-IV 1018 DJ Amsterdam

The Netherlands

E: tacobos@gmail.com T: +31 6 12 13 20 10


Preface

After a lot of hours behind my computer, enough printouts to make Al Gore pretty angry and hectolitres of coffee, it's finally there: my Master's thesis. Besides my computer, printer, coffee machine, and the waiters and waitresses of my favourite terrace, I'd like to thank Marieke de Ruiter, Liesje Goldschmidt and Debra Trampe for their helpful support. But first and foremost I want to thank Erjen van Nierop, my first supervisor. He gave me the opportunity to work on a wacky project the way I like it best: with a lot of freedom and critical yet constructive feedback when I needed it. Grazie mille.


Management Summary

In previous studies about the influence of eye pictures on human behaviour, experimental subjects turned out to behave more honestly and/or more cooperatively when they were gazed at by pictures of human eyes, compared to test situations in which control pictures were used. Could this mechanism be used in market research situations? What if pictures of eyes were added to self-administered questionnaires? Would respondents give more honest answers? Social Desirability Bias (SDB), or the tendency to respond to self-report items in a manner that makes the respondent look good rather than in an accurate and truthful manner, is considered to be one of the most common sources of bias affecting the validity of experimental and survey research findings in psychology and the social sciences. If SDB could be decreased with an inexpensive means like eye pictures, even a little bit, it would be an important step towards more reliable market research results.

To test whether eye pictures influence the degree of social desirability bias, different groups of respondents (over 2500 in total) filled in the same Impression Management (IM) items of the Balanced Inventory of Desirable Responding (BIDR), a widely used method to measure the degree of socially desirable responding by survey subjects. Different groups saw different pictures (women's eyes, men's eyes or control pictures of flowers), and average IM-scores per group were calculated. It turned out that the type of picture did not make any significant difference. In conclusion, no empirical evidence for the expected effect was found.


- Guarantee anonymity and confidentiality;

- Add questions from social desirability scales (like the BIDR) to the survey, to determine the extent to which results are contaminated by social desirability bias. Compensate when possible;

- Warn respondents that tricks are used to test their honesty. Even if those tricks are in fact not used, mentioning them helps.


Table of contents

Preface...2

Management Summary...3

1. Introduction...7

1.1 Background ...7

1.2 Goal and scientific relevance...8

1.3 Research questions ...8

1.4 Thesis outline ...10

2. Theoretical framework...11

2.1 Introduction...11

2.2 Influence of eye pictures on human behaviour...11

2.2.1 Eyes in the coffee room...11

2.2.2 How robots turn this world into a nicer place ...14

2.2.3 Cartoonesque eyes do the job as well ...16

2.3 Social desirability bias and its drivers...17

2.3.1 Definition of SDB ...17

2.3.2 Influence of survey topic on SDB ...17

2.3.3 Influence of the degree of anonymity and confidentiality on SDB ...18

2.3.4 Influence of interview mode on SDB ...18

2.4 Scales to measure SDB...20

2.5 Conceptual model ...21

3. Research design...24

3.1 Introduction...24

3.2 Set up of the experiment ...24

3.3 Which SDB-scale should be used? ...26

3.4 Sample size and approach of respondents ...28

3.5 Data-analysis ...28

4. Results ...29

4.1 Introduction...29

4.2 Hypothesis 1...29


4.4 Hypothesis 3...32

4.5 Summary...33

5. Conclusion and discussion...34

5.1 Introduction...34

What is the influence of pictures of eyes in surveys on the social desirability bias of the results? ...34

5.2 Findings ...34

5.3 Discussion ...35

5.4 Recommendations for market researchers ...36

5.5 Recommendations for further academic research ...37

References ...38

Appendix 1: Translated IM-survey...42

Appendix 2a: BIDR version 6 - form 40A ...47

Appendix 3: Scoring key for BIDR version 6...49

Appendix 4: ANOVA assumptions check...51

Appendix 5: Kolmogorov-Smirnov-test of normality ...52

Appendix 6: Levene’s Test ...59


1. Introduction

1.1 Background

Take a look at the picture above. This picture, and the picture alone, can make you behave more honestly. This surprising conclusion can be drawn from the research findings of a British behavioural biologist and her colleagues (Bateson, Nettle and Roberts 2006).

How do they know? The researchers used the coffee room at their university as a testing spot. In this room, consumed coffee and tea had to be paid for via an honesty box, but the layout of the room was such that people could easily dodge payment without anyone noticing. Bateson et al. added pictures of eyes to the price list, because they expected that cues of being watched would be enough to make people behave more honestly. Their suspicion turned out to be true: people paid almost three times as much for their drinks when eyes were displayed instead of a control image of flowers.

If pictures of eyes really can make people behave more honestly or in a more socially desirable way, could this same mechanism be used in other settings? One of the possible applications is in a survey, on paper or via the Internet. Much marketing research depends on honest answers from respondents, sometimes about sensitive subjects like household income. Or take the yearly Durex sex survey as an example, in which last year 350,000 people were interviewed about the most intimate details of their sex life.¹ (The results were a smack in


the face of the Dutch: with an average of 103 times sex per year, this sober-minded nation dangles in the lower regions. Or were the Dutch simply more honest than, say, the French? Will we ever know? Could pictures of eyes help to bridge this painful gap in the future?)

1.2 Goal and scientific relevance

The goal of this research is to investigate whether pictures of eyes in surveys can help reduce socially desirable answers given by respondents. King and Bruner (2000) emphasize the relevance of such a reduction:

"Social-desirability bias (SDB) is considered to be one of the most common and pervasive sources of bias affecting the validity of experimental and survey research findings in psychology and the social sciences (Nederhof, 1985; Paulhus, 1991; Peltier & Walsh, 1990). Although identification and control of SDB has presented an ongoing challenge for researchers in most social science disciplines (cf. Fisher, 1993), only a handful of articles

published in marketing research journals have reported attempts to systematically identify contamination from this source. Considering the frequent use of self-report measures in marketing research, as well as the social and normative implications inherent in many marketing investigations, the low incidence of routine testing for this potential confound may indicate that marketing researchers have underestimated the importance of SDB in interpretation and application of research findings."

1.3 Research questions

The resulting research question is as follows:

What is the influence of pictures of eyes in surveys on the social desirability bias of the results?

Definitions of the key concepts can be found in Table 1.

Table 1
Definitions of key concepts

Survey: "A gathering of a sample of data or opinions considered to be representative of a whole."²
Social desirability bias: "A tendency to respond to self-report items in a manner that makes the respondent look good rather than to respond in an accurate and truthful manner." (Holtgraves 2004)

To answer the main question thoroughly, some sub questions have been formulated:

1) What is currently known about the effect of eye pictures on human behaviour?

2) What exactly is social desirability bias?

3) What are the main influential factors (drivers) of social desirability bias in surveys?

4) How can the extent to which respondents give socially desirable answers be measured? What kinds of social desirability scales exist?

5) Do respondents have a significantly different score on the social desirability scale if pictures of eyes are included in the survey, compared to a control group in which pictures of flowers are used?

6) If an effect can be found, does it still hold when people are aware of the reason behind the eye pictures? In other words: if the trick works initially, will it still work if everyone knows the trick?


1.4 Thesis outline


2. Theoretical framework

2.1 Introduction

The aim of this chapter is to give an overview of existing scientific knowledge, relevant for partially answering the sub questions posed in the introduction. The following subjects are included in this chapter. First of all, an overview of studies about the influence of eye pictures on human behaviour will be given. How were those studies set up, and what were the results? Second, an explanation of social desirability bias (SDB) follows. What exactly is SDB, and which factors influence its likelihood? Third, different ways to measure SDB will be discussed, as well as the advantages and disadvantages of different procedures and scales. Finally, the most important findings will be bundled into a conceptual model.

2.2 Influence of eye pictures on human behaviour

2.2.1 Eyes in the coffee room

As said before, the findings of Bateson, Nettle and Roberts (2006) formed the starting point for this study. With their experiment, they found that an image of a pair of eyes led to an increase in the amount of money paid for tea and coffee in an honesty box. How exactly was this experiment conducted, and are there any possible rival explanations for the results?

Participants of the study came from a population of 48 members (25 females and 23 males) of the Division of Psychology at the University of Newcastle. They had the option to pay for coffee and tea via an honesty box in their coffee room, and this system had been used for years prior to the start of the experiment, when the images were added directly above the pricing notice. Nobody reported being aware of the purpose of the images. In the first week, images of eyes were used. The week after, they were swapped for control images of flowers. Then eye pictures were used again for another week, and so on. This weekly swap was done for ten weeks in total, and the volume of consumed milk was recorded every week because this was the best index available of total beverage consumption. Figure 1 shows the relation between the picture used and the amount of money paid per litre of milk. When eyes were shown, people paid significantly more than when flowers were shown. Pictures of men's eyes turned out to be more effective than pictures of women's eyes.

Figure 1
Money paid per litre of milk in weeks with eye images versus weeks with flower images (Bateson, Nettle and Roberts 2006; graph not reproduced)

2.2.2 How robots turn this world into a nicer place

At Harvard, another remarkable experiment was conducted. Burnham and Hare (2007) asked college students to play a six-round public goods game which was designed in such a way that participants were forced to choose between egoistic behaviour and altruism. Groups of 24 participants were split up into six subgroups of four people, and the group compositions changed every round, so no one was ever matched in a group with the same person more than once. This way, people knew that building relationships with team members to perform better together in the next round was of no use. Moreover, people did not know who their team members were, because everyone was sitting behind networked computers and communication in any other way was forbidden. In each round, people received ten tokens and were asked to choose which part they would keep in their own private account and which part would go to a public account. This public account was shared with the other temporary team members. At the end of each round, the tokens in the public account were doubled and divided equally over the four team members. If you contributed ten tokens, this would be favourable for the group result because they would be doubled to twenty tokens. This is five per person, which is, of course, a worse individual result than just keeping the ten tokens to yourself. Bottom line: being egoistic would always be the best strategy to get as many tokens as possible. At the end of the game, the tokens in your own account could be exchanged for real cash: each token was worth $0.20, and all participants knew that the earned money would be paid out in private.
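To make the incentive structure concrete, here is a minimal sketch of one round's payoff under the rules described above; the helper function and its name are illustrative and not part of the original study.

```python
# Sketch of one round of the public goods game described above.
# Each player starts with 10 tokens; public-account tokens are doubled
# and split equally over the 4 group members; tokens convert to $0.20 each.

def round_payoff(own_contribution, others_contributions, endowment=10, group_size=4):
    """Tokens a player ends the round with."""
    public_pot = own_contribution + sum(others_contributions)
    share = 2 * public_pot / group_size          # pot is doubled, then split
    return endowment - own_contribution + share

# If everyone contributes everything, each player ends with 20 tokens ($4.00),
# but a lone free-rider does even better: 10 kept + 15 from the pot = 25 tokens.
print(round_payoff(10, [10, 10, 10]))   # 20.0 tokens
print(round_payoff(0,  [10, 10, 10]))   # 25.0 tokens
print(round_payoff(10, [0, 0, 0]))      # 5.0 tokens -> why free-riding dominates
```

The numbers illustrate why, in purely monetary terms, contributing nothing is always the dominant strategy, exactly the tension the experiment exploits.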


In one condition of the experiment, the students saw a movie on their screen of Kismet, a robot, saying: "Hi, I'm Kismet and I'm here to help you through the experiment." During the experiment, Kismet remained visible on the desktop of the computer screen. The researchers expected that this cue of being watched would result in more cooperative behaviour. The results of the experiment supported this hypothesis: when "Big Brother" Kismet was watching, students contributed 29% more to the public good (Burnham and Hare 2007).

Picture 1


2.2.3 Cartoonesque eyes do the job as well

Haley and Fessler (2005) played a very simple game to measure the influence of pictures of eyes on cooperative behaviour. Students were anonymously put in groups of two, and one of the randomly selected students (the "dictator") received ten dollars. This student was given twenty seconds to decide in an anonymous computer game which part of this ten dollars he/she would keep and which part he/she would give away to the other unknown student. Different students saw different backgrounds; two of the used backgrounds are shown in Figure 2.

The results show that even drawn eyes can influence human behaviour: when these eyes were shown, dictators donated an average of $3.79 to their counterparts, compared to $2.45 when the control background was shown. In a third setting, a skewed version of the eyes was shown (no picture available). In this version, the left and right eyespots were not on the same horizontal plane, creating a less face-like image on the desktop. However, even this picture yielded significant differences: dictators donated $3.00 on average.

Figure 2


2.3 Social desirability bias and its drivers

2.3.1 Definition of SDB

Social desirability bias can be defined as "a tendency to respond to self-report items in a manner that makes the respondent look good rather than to respond in an accurate and truthful manner" (Holtgraves 2004). For example, people tend to overreport socially desirable behaviour such as voting (Karp and Brockington 2005) and to underreport socially undesirable behaviour such as drug and alcohol abuse (Aquilino 1994). The influence of this bias has been demonstrated in self-reported measures of personality traits, attitudes, behaviours and psychopathology (Holtgraves 2004). The subject of the survey can significantly affect the extent to which SDB trickles through into the survey results. Other factors that are often mentioned in the literature as increasing contamination through socially desirable responding are the degree of anonymity and confidentiality and the interview mode. These factors will be discussed in more detail in the subsequent paragraphs.

2.3.2 Influence of survey topic on SDB

Issues of self-presentation threaten the validity of self-report measures when interviews tap into sensitive areas of behaviour or subjective experience (Schwarz et al. 1991). Matters like voting (Karp and Brockington 2005) and drugs (Aquilino 1994) were mentioned before as examples; sexual behaviour is another matter in which a social desirability bias lurks. Newcomer and Udry (1988) asked 1152 junior high students whether they were still virgins and, if not, when they first had sex. Two years later, the same question was asked and seven percent admitted to having lied during the first interview. In another ten percent of cases, statements in the second interview were not in line with the earlier answers. In those cases, it was not possible to determine whether people lied or simply suffered from a bad memory.


Siegel, Aten and Roghmann (1998) asked middle and high school students about their sexual experience and, at the end of the questionnaire, simply asked whether the respondents had been honest or not. As many as fourteen percent of middle school males admitted to having overstated their actual behaviour, while middle school girls were most likely to understate (8%) their behaviour. Obviously, the researchers are aware of the limitations of this approach: "An objection can be raised as to the usefulness of asking subjects directly about their questionnaire response honesty, on the ground that one might lie about whether one is lying. This has been referred to as the 'Liar's paradox.' However, we do not feel that this argument should be used to completely discount the value of information gathered by such questioning, particularly given the number of surveys measuring sexual behaviour in which there is no attempt to ascertain from subjects their stated level of response validity." In short: something is better than nothing, and this argument could also hold in market research.

2.3.3 Influence of the degree of anonymity and confidentiality on SDB

Promising anonymity and/or confidentiality to respondents is widely seen as a means of reducing social desirability bias (e.g. Aquilino 1994; King and Bruner 2000; Siegel, Aten and Roghmann 1998; Singer 1978). Reduced public self-awareness should lead to a decrease in self-presentation concerns and, therefore, to less socially desirable responding (Crowne and Marlowe 1960). For example, Joinson (1999) assigned respondents at random to anonymous and non-anonymous surveys; in the anonymous surveys, people scored lower on social desirability.

2.3.4. Influence of interview mode on SDB


also, to a certain extent, to a selection bias (Aquilino 1994). Holding out the prospect of a face-to-face interview about a sensitive subject could be a reason for someone to refuse cooperation, whereas the same person may be willing to join a self-administered survey. Aquilino (1994) solved this problem by selecting respondents first, after which he assigned them randomly to different interview modes: by telephone, face-to-face, or with self-administered questionnaires (SAQs). In the last case, respondents received a face-to-face explanation about the survey. In all modes, the same questions about drug and alcohol use were asked. Admission of illicit drug and alcohol use turned out to be most likely in the personal mode with SAQs, less likely in the personal mode without SAQs (face-to-face interview) and least likely in interviews over the telephone. Aquilino expected that this difference could be explained by a combination of anonymity and confidentiality. The use of SAQs increases the feeling of anonymity, but by the same reasoning SDB should be lower in telephone interviews than in face-to-face interviews, and this is not the case. According to Aquilino, this is because confidentiality claims are less convincing over the telephone. This idea has been tested: "If the credibility of confidentiality guarantees varies by interview mode, then interview mode effects should be greatest among those respondents who are generally more suspicious of the claims of others and least among respondents who have higher levels of trust in other people." The research results confirmed this hypothesis.

Some years later, a comparable study was conducted, but this time paper-and-pencil SAQs were compared with computer-assisted SAQs (Wright, Aquilino and Supple 1998). Adolescents turned out to admit higher levels of alcohol and illicit drug use and psychological distress to a computer than to a piece of paper, while for older people no significant differences were found. The expected mode-by-mistrust interactions were confirmed again.


In the so-called bogus pipeline procedure, respondents are hooked up to some impressive-looking electronic equipment and are told that this equipment is capable of detecting lies, which is not true. It functions as a fake lie detector, and the idea behind it is that people are afraid of being embarrassed by the machine and will therefore probably be more honest. A milder version of this bogus pipeline that also turned out to work is a warning at the beginning of a survey that the survey contains certain questions to detect lying (Paulhus 1991, p. 19).

2.4 Scales to measure SDB

The idea behind social desirability scales is that they consist of items listing issues and behaviours that are either socially desirable but infrequently practised, or frequently practised but socially undesirable. In theory, lying can be diagnosed when a respondent claims to perform rarely performed but socially desirable acts as a habit, while denying frequently performed undesirable acts (Francis 1991).

Although various social desirability scales were initially developed by different researchers, the Marlowe-Crowne Social Desirability Scale (Crowne and Marlowe 1960) and its short forms (e.g. Loo and Thorpe 2000; Reynolds 1982; Strahan and Gerbasi 1972) have gained the most widespread acceptance (King and Bruner 2000). Low intercorrelations between the original version and the shorter ones have caused doubts about the unidimensionality of the Marlowe-Crowne scale, and factor analysis revealed two underlying primary factors: self-deception and impression management. Self-deception can be defined as "the unconscious tendency to see oneself in a favourable light. It is manifested in socially desirable, positively biased self-descriptions that the respondent actually believes to be true." Impression management refers to the "conscious presentation of a false front, such as deliberately falsifying test responses to create a favourable impression." Self-deception is a relatively invariant personality trait and, unlike impression management, should not be seen as a response to the specific survey situation (Zerbe and Paulhus 1987). To measure both factors separately, Paulhus created the Balanced Inventory of Desirable Responding. This scale reportedly offers minimal correlation between impression management and self-deception (King and Bruner 2000). Other researchers created scales dedicated to measuring either impression management or self-deception (e.g. Sackeim and Gur 1979). In the current study, invariant personality traits are not of interest and therefore the focus will be solely on impression management.

2.5 Conceptual model

The former findings give reasons to expect both positive and negative effects of eye pictures on social desirability bias. In the coffee room experiment as well as in the Kismet experiment, the researchers point out the influence of eye pictures on cooperative behaviour. If surveys are used in which the importance of honesty is stressed, more willingness to cooperate could lead to less conscious impression management (deception of others). Furthermore, people tend to admit their socially undesirable behaviours more in face-to-face interviews than through the telephone. According to Aquilino (1994), confidentiality claims are less convincing over the telephone because of the greater social distance between interviewer and respondent. Is this larger social distance the result of the physical distance, lack of eye contact or the inability to see other non-verbal signals? If eyes play a role, pictures of eyes could possibly help in increasing the credibility of confidentiality claims. Different studies point out the importance of eye contact in credibility judgements (Burgoon et al. 1985).


The arrows in figure 3 represent expected causal links. The expectation about the most important relationship is summarized in the first hypothesis:

H1: Respondents will have a different IM-score if pictures of eyes are used in the Impression Management (IM) survey, compared to respondents who saw control pictures of flowers.

Because of the expected contradictory sub effects, no predictions about the direction of the main effect were made. Research about the influence of eye pictures on human behaviour is still in its infancy and therefore no solid scientific ground is available to pose sensible hypotheses about the strength of the different sub effects and the resulting direction of the main effect.

Bateson, Nettle and Roberts (2006) found in their coffee room experiment a stronger effect of pictures of male eyes compared to the eyes of women.

Figure 3
The influence of eye pictures on Impression Management

(Conceptual model, diagram not reproduced: boxes for "Pictures of eyes", "Gender of depicted eyes" and "Respondent's knowledge about the aim of the pictures" are linked to "Impression Management (IM)" by the hypothesised relationships H1 and H2.)


This factor was also examined in this study. This led to the second hypothesis:

H2: The expected effect is stronger when pictures of male eyes are used.

Last but not least, it is important to know if a possible effect will still hold if respondents are aware of the effect. Put in a more straightforward way: if eye pictures work initially, will they still work if everyone knows their purpose? This is summarized in the third hypothesis:

H3: The expected effect is weaker when respondents are informed about the purpose of the eye pictures than if they are not.


3. Research design

3.1 Introduction

This chapter contains an explanation about how the empirical part of this research has been set up. It clarifies how the experiment was conducted, how many respondents were used, how these respondents were approached, what kind of questionnaire was used and how the collected data was analyzed.

3.2 Set up of the experiment

As explained in preceding chapters, the aim of this study was to measure the influence of eye pictures on socially desirable responding among survey respondents. To isolate and manipulate this relation and to exclude other possibly influencing factors, the choice was made for an experiment. This is the pre-eminent way to test whether there is a causal relationship between two variables.

To prevent a selection bias, respondents were assigned randomly to one of the six subgroups and all respondents filled in surveys that were identical, apart from the pictures that were used and the explanation that was given at the beginning of the survey.
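As an illustration of such random assignment, a minimal Python sketch follows; the condition labels mirror the subgroup tables later in this chapter, and the function itself is hypothetical rather than code used in the study.

```python
import random

# The six survey versions: explanation (yes/no) x picture type.
CONDITIONS = [
    ("explanation", "flowers"), ("explanation", "women's eyes"), ("explanation", "men's eyes"),
    ("no explanation", "flowers"), ("no explanation", "women's eyes"), ("no explanation", "men's eyes"),
]

def assign_condition(respondent_id, rng=random):
    """Give each incoming respondent an independent random draw from the six conditions."""
    return rng.choice(CONDITIONS)

# Example: group sizes end up roughly, but not exactly, equal (compare Table 3).
print(assign_condition("s1234567"))
```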

The aim of the SDB-questions in the survey was kept secret, to keep the experimental conditions as natural as possible. Therefore, respondents were told that the Faculty of Psychology of the University of Groningen was conducting research about the relationship between personality and political preferences. The following explanation was given to respondents from groups one to three:

“In some of the surveys, pictures of flowers/eyes³ have been added. Based on earlier research, the expectation is that those pictures can have an unconscious effect on the honesty of the given answers. Respondents might therefore find it easier to admit to having voted for controversial political parties. This explanation about the pictures has only been added to a part of the surveys, to test whether a possible effect will still hold when respondents are aware of the reason behind those pictures.”

³ Depending on the pictures used, respondents saw the word "flowers" OR "eyes" instead of "flowers/eyes" as shown in the example. The explanation about the purpose of the pictures could lead to a self-fulfilling prophecy effect: when students think that certain pictures make them more honest, this thought alone could make them more honest (assumption of the researcher). If both words were shown together, students could see through the purpose of the flower pictures as control images, and in that case the flower pictures would trigger less of the self-fulfilling prophecy. The extent of self-fulfilling prophecy would then be an unwanted, contaminating extra factor. Therefore, students in survey group 1 were told that pictures of flowers are expected to make respondents more honest, in an attempt to minimize this factor. Furthermore, the explanation in the real survey was in Dutch because the research was conducted in the Netherlands. The original survey can be found in Appendix 1.

Table 2
Different subgroups that were used for this study

Main group 1: respondents received explanation    Main group 2: respondents received NO explanation
1a: Flowers                                        2a: Flowers
1b: Men's eyes                                     2b: Men's eyes

3.3 Which SDB-scale should be used?

In the preceding chapter, the lack of unidimensionality of many early SDB-scales has been mentioned. Factor analysis revealed two underlying primary factors: Self-Deception (SD) and Impression Management (IM) (Robinson, Shaver and Wrightsman 1991). Because SD-scores say more about relatively constant personality traits (contrary to the more situationally dependent IM-scores), the scope of this research is limited to the influence of eye pictures on IM. Therefore, a scale is needed which measures solely IM, with as little contamination through SD-mechanisms as possible.

Figure 4 compares the factor loadings (SD versus IM) of different SDB-scales.

As can be seen, the Impression Management Scale (IMS, part of the Balanced Inventory of Desirable Responding or BIDR) performs best at measuring solely IM. Furthermore, the IMS is widely used and tested, and has a high internal consistency (different studies revealed Cronbach's coefficient alphas ranging from .75 to .86). The test-retest correlation is high (r = .65 over a five-week period) and the IMS has high correlations with a cluster of measures traditionally known as lie scales (e.g. Eysenck's Lie Scale, MMPI Lie Scale) and role-playing measures like Wiggins' Sd and Gough's Gi (Robinson, Shaver and Wrightsman 1991, p. 38). For these reasons, the IMS has been chosen to measure IM in this study; see Textbox 1 for some examples of statements that are used in the IMS. The entire English version of the IMS and a manual can be found in Appendix 2. A Dutch translation (see Appendix 1) was made because the experiment was conducted among (mainly) Dutch students.
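For readers who want to verify internal-consistency figures like the alphas quoted above on their own item data, here is a small sketch of Cronbach's alpha using the standard formula; this is not code from the thesis, and the toy data are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 4 respondents answering 3 items on a 7-point scale.
print(round(cronbach_alpha([[5, 6, 5], [2, 3, 2], [7, 6, 7], [4, 4, 5]]), 2))
```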


Figure 4

Typical Factor Loadings of SDR measures (Robinson, Shaver and Wrightsman 1991, p. 22)

Textbox 1

Examples of statements in the IMS (respondents have to indicate how true they are)


3.4 Sample size and approach of respondents

The aim was to involve at least fifty respondents per group, thus three hundred respondents in total. The survey was web-based for three reasons: it is faster, it is cheaper, and its popularity is increasing (Wright 2005), which makes SDB-issues in an online environment more and more relevant. For this study, a couple of thousand students of the University of Groningen (The Netherlands) were approached. This does not yield a sample that is representative of the Netherlands or the western world, but because of budget and time constraints this was the best option. For this study, it was considered far more important that the six groups were mutually comparable, and from this point of view a more homogeneous sample is a benefit. To increase the response rate, a prize of € 100 was promised to one of the participants.

3.5 Data-analysis


4. Results

4.1 Introduction

In chapter 2, the following hypotheses were set:

H1: Respondents will have a different IM-score if pictures of eyes are used in the Impression Management (IM) survey, compared to respondents who saw control pictures of flowers.

H2: The expected effect is stronger when pictures of male eyes are used.

H3: The expected effect is weaker when respondents are informed about the aim of the eye pictures.

A couple of thousand students of the University of Groningen were emailed with a request to fill in the survey, and this yielded more than 2500 respondents. Those respondents were randomly put in one of the six following subgroups:

Main group 1: respondents received explanation    Main group 2: respondents received NO explanation
1a: Flowers                                        2a: Flowers
1b: Men's eyes                                     2b: Men's eyes

4.2 Hypothesis 1

To test the first hypothesis, a comparison was made between the average IM-score in the "flower groups" and the IM-scores of respondents in the "eye groups". In Appendix 2b, an explanation about the calculation of the IM-scores can be found. Table 3 shows the number of respondents per group and the average IM-scores. Note that the IM-score ranges from 0 (low level of impression management) to 20 (high level of impression management, respondent is probably lying). Respondents who did not fill in the entire survey and those who appeared to have filled in random answers (for example the same answer to every IM-question) were filtered out.
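A minimal sketch of how the per-group averages reported in Table 3 below could be computed once every respondent has an IM-score and a group label; this is pandas-based illustration code, and all column names are assumptions rather than the thesis data file.

```python
import pandas as pd

# df is assumed to hold one row per respondent with columns:
# "group" (1a..2c), "im_score" (0-20), and the raw IM item answers im_1..im_20.
def group_means(df, item_cols):
    # Drop incomplete surveys, then drop straight-liners (same answer to every IM item).
    complete = df.dropna(subset=item_cols + ["im_score"])
    varied = complete[complete[item_cols].nunique(axis=1) > 1]
    return varied.groupby("group")["im_score"].agg(["count", "mean"])

# Usage: group_means(df, item_cols=[f"im_{i}" for i in range(1, 21)])
```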

Table 3
IM-scores per group

Group   Explanation?   Picture        N     Average IM-score
1a      Yes            Flowers        399   5.18
1b      Yes            Women's eyes   398   5.21
1c      Yes            Men's eyes     360   5.20
2a      No             Flowers        435   5.61
2b      No             Women's eyes   377   5.62
2c      No             Men's eyes     439   5.60

Note: the IM-score ranges from 0 (low level of impression management) to 20 (high level of impression management, respondent is probably lying).

Because several means had to be compared and different respondents were used for the various experimental conditions (picture type and explanation yes/no), an independent ANOVA was used to test the first hypothesis. Because no predictions about the direction of a possible effect of the eye pictures had been made, a two-sided test was used. To be able to run such a test, the data set had to meet some assumptions. See Appendix 4 for more details.
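As an illustration, the basic comparison of IM-scores across the three picture types could be run as a one-way independent ANOVA with SciPy; the arrays below are random stand-in data, not the thesis data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in IM-scores (0-20) for the three picture conditions; replace with real scores.
im_flowers = rng.integers(0, 21, size=400)
im_women   = rng.integers(0, 21, size=400)
im_men     = rng.integers(0, 21, size=400)

# One-way independent ANOVA on picture type alone.
f_stat, p_value = stats.f_oneway(im_flowers, im_women, im_men)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```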


The respondent's faculty and gender and possible interaction effects were also taken into account. This required a four-way independent ANOVA. The results can be found in Table 4 (see Appendix 7 for detailed results).

Table 4

Independent variable                              Degrees of freedom   F      Significance
Picture type                                      2                    0.01   .99
Explanation                                       1                    8.44   .01*
Gender                                            1                    4.64   .03*
Faculty                                           8                    3.14   .00*
Picture type * Explanation                        2                    0.14   .87
Picture type * Gender                             2                    0.80   .45
Explanation * Gender                              1                    2.18   .14*
Picture type * Explanation * Gender               2                    0.58   .56
Picture type * Faculty                            16                   0.58   .90
Explanation * Faculty                             8                    0.57   .80
Picture type * Explanation * Faculty              15                   1.04   .41
Gender * Faculty                                  8                    0.79   .61
Picture type * Gender * Faculty                   15                   0.89   .57
Explanation * Gender * Faculty                    8                    1.08   .37
Picture type * Explanation * Gender * Faculty     14                   1.55   .09

* significant value

When the independent factors were tested in isolation (without taking interaction effects into account), a significant effect was found for all independent factors except picture type. So, on average, it makes a difference for a respondent's IM-score whether he or she is a woman or a man and whether he or she received an explanation or not. The faculty he or she studies at also counts, but the type of picture that was shown does not make a difference. Even when interaction effects are taken into account, "Picture type" does not make a significant difference. Therefore, Hypothesis 1 can be rejected.
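The factorial analysis summarized in Table 4 could be expressed with a statsmodels formula along the following lines; this is a sketch on synthetic stand-in data with illustrative variable names, not the original SPSS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
# Stand-in data frame: one row per respondent (replace with the real survey data).
df = pd.DataFrame({
    "im_score": rng.integers(0, 21, size=n),
    "picture_type": rng.choice(["flowers", "women", "men"], size=n),
    "explanation": rng.choice(["yes", "no"], size=n),
    "gender": rng.choice(["f", "m"], size=n),
    "faculty": rng.choice(["economics", "arts", "law"], size=n),
})

# Four-way independent ANOVA with all interactions, as reported in Table 4.
model = smf.ols(
    "im_score ~ C(picture_type) * C(explanation) * C(gender) * C(faculty)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=3))  # Type III sums of squares, as in the SPSS output
```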

4.3 Hypothesis 2

If an effect had been found, it was expected to be stronger when pictures of male eyes were used, compared to pictures of the eyes of women. No effect was found, so Hypothesis 2 is not applicable.

4.4 Hypothesis 3

H3: The expected effect is weaker when respondents are informed about the aim of the eye pictures.

Again: no effect was found, so Hypothesis 3 is not applicable and can be rejected as well. This does not mean that the explanation in itself did not have any effect; the ANOVA described in paragraph 4.2 (also see Table 4) revealed a highly significant difference between the subgroups that received an explanation and those that did not. When Table 3 is examined further, this is not a surprise: the average IM-scores of subgroups 1a to 1c are all around 5.19, whereas the average for subgroups 2a to 2c is 5.61 (see Table 5).

Table 5
IM-scores related to received explanation (yes/no)

Subgroup     Explanation?   Picture   Average IM-score
1a+1b+1c     Yes            All       5.19
2a+2b+2c     No             All       5.61

This is an interesting conclusion: if respondents are told that certain techniques are used to make them more honest, they become more honest, even though the mentioned techniques (pictures of eyes) do not work. It seems like some kind of placebo effect: the pills themselves do not work, but the idea that they will work still does the job. This is in line with the earlier described effect of the bogus pipeline and its milder versions (see paragraph 2.3.4).
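The explanation effect by itself boils down to a two-group comparison (roughly 5.19 versus 5.61 in Table 5); as a sketch, it could be checked with an independent-samples t-test. The data below are stand-ins generated to resemble the reported group sizes and spread, not the real scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in IM-scores; replace with groups 1a-1c (explanation) and 2a-2c (no explanation).
with_explanation = rng.normal(5.19, 2.7, size=1157)
without_explanation = rng.normal(5.61, 2.7, size=1251)

t_stat, p_value = stats.ttest_ind(with_explanation, without_explanation)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```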

4.5 Summary

All hypotheses were rejected: the type of picture does not have any


5. Conclusion and discussion

5.1 Introduction

The aim of this study was to answer the following research question:

What is the influence of pictures of eyes in surveys on the social desirability bias of the results?

First of all, an explanation of social desirability bias has been given. It was defined as "a tendency to respond to self-report items in a manner that makes the respondent look good rather than to respond in an accurate and truthful manner." Secondly, an overview of current research about the influence of eye pictures on human behaviour was given. Research in this field is still in its infancy, but the results of all three studies that were found pointed in the same direction: people who feel gazed at by pictures tend to behave more cooperatively as a result of a decrease in perceived anonymity. These conclusions were a mixed blessing for this study: more cooperative behaviour could be good news if the importance of honest answers is stressed at the beginning of the survey, but the decrease in perceived anonymity was likely to increase social desirability bias.

5.2 Findings

To test whether eye pictures influence the degree of social desirability bias or not, different groups of respondents all filled in the same Impression


5.3 Discussion

How can the findings in this study be explained? Was it because of the expected contrary effects of the eyes, namely increased willingness to cooperate versus a decrease in perceived anonymity? While this possibility cannot be completely excluded, it seems improbable that these two contrary effects were exactly equally strong. It is more likely that the respondents simply were not affected by the pictures at all. How could this be the case, if researchers found strong effects for pictures of eyes in other experiments? One possible explanation is that in the other experiments it was very unlikely that respondents consciously noticed that the eyes were part of the experiment. In this study, at least a couple of respondents were curious about the purpose of the eye pictures and sent e-mails to ask what the real research goal was. Picture 2 shows a screenshot of the online survey that was used for this study; it is no surprise that the eye pictures led to curiosity.

Picture 2


Another possible explanation is that presenting yourself favourably in a questionnaire is less socially undesirable than stealing coffee or tea (coffee room experiment) or giving money to yourself, at the expense of someone else (Harvard experiment and dictator game, see paragraph 2.2).

An interesting finding is the large difference in average IM-scores between students from different faculties (see Table 6 for a ranking from low to high IM-score). The reason for these differences could not be found; one could expect that students in the social sciences score higher than science students, but this does not seem to be the case.

Table 6
Average IM-scores per faculty

Faculty                            Average IM-score   N     SD
Spatial Sciences                   4.51               134   2.31
Economics and Business             4.89               335   2.63
Philosophy                         5.13               16    2.73
Behavioural and Social Sciences    5.35               401   2.63
Arts                               5.46               487   2.74
Law                                5.53               243   2.89
Theology                           5.53               32    3.20
Mathematics & Natural Sciences     5.68               284   2.85
Medical Sciences                   5.87               417   2.66

5.4 Recommendations for market researchers


1) Guarantee anonymity and confidentiality;

2) Add questions from social desirability scales (like the BIDR) to determine the extent to which results are contaminated by social desirability bias. Compensate when possible;

3) Warn respondents that tricks are used to test their honesty. Even if those tricks are in fact not used, mentioning them helps.

5.5 Recommendations for further academic research

If there were a follow-up to this experiment, it would be a good idea to hide the eye pictures even better. The pictures could be designed as, for example, banners with an advertisement for contact lenses. One drawback of this approach is the extra stimuli that have to be added: advertisement text and a company logo to make it look convincing. These could also have a contaminating effect on the IM-scores.

Still, it is questionable whether hidden eyes would yield the expected effect. In the introduction of this thesis, some results of the Durex sex survey were described, for example the enormous difference in sexual activity between the French and the Dutch. I joked that this gap could be explained by more


References

Aquilino, William S. (1994), “Interview Mode Effects in Surveys of Drug and Alcohol Use,” Public Opinion Quarterly, 58, 210-240.

Baarda, Ben and Martijn de Goede (2001), Basisboek Methoden en Technieken - Handleiding voor het Opzetten en Uitvoeren van Onderzoek. Groningen: Wolters-Noordhoff.

Bateson, Melissa, Daniel Nettle and Gilbert Roberts (2006), “Cues of Being Watched Enhance Cooperation in a Real-World Setting,” Biology Letters, 2, 412-414.

Breazeal, Cynthia and Brian Scassellati (2002), “Robots That Imitate Humans,” Trends in Cognitive Science, 6 (11), 481-487.

Burgoon, Judee K., Valerie Manusov, Paul Mineo and Jerold L. Hale (1985), “Effects of Gaze on Hiring, Credibility, Attraction and Relational Message Interpretation,” Journal of Nonverbal Behaviour, 9 (3), 133-146.

Burnham, Terence and Brian Hare (2007), “Engineering Human Cooperation - Does Involuntary Neural Activation Increase Public Goods Contributions?,” Human Nature, (18), 88-108.

Carter, Mike and David Williamson (1996), Quantitative Modelling for Management & Business. Essex: Pearson Education.


Field, Andy (2005), Discovering Statistics Using SPSS. London: Sage Publications.

Fisher, R.J. (1993), “Social Desirability Bias and the Validity of Indirect Questioning,” Journal of Consumer Research, 20, 303-315.

Haley, Kevin J. and Daniel M.T. Fessler (2005), “Nobody's Watching? Subtle Cues Affect Generosity in an Anonymous Economic Game,” Personality and Individual Differences, 12 (12), 1255-1260.

Holtgraves, Thomas (2004), “Social Desirability and Self-Reports: Testing Models of Socially Desirable Responding,” Personality and Social Psychology Bulletin, 30 (2), 161-172.

Joinson, Adam (1999), “Social Desirability, Anonymity, and Internet-Based Questionnaires,” Behavior Research Methods, Instruments, & Computers, 31 (3), 433-438.

Karp, Jeffrey A. and David Brockington (2005), “Social Desirability and Response Validity: A Comparative Analysis of Over-Reporting Turnout in Five Countries,” The Journal of Politics, 67 (3), 825-840.

King, Maryon F. and Gordon C. Bruner (2000), “Social Desirability Bias: A Neglected Aspect of Validity Testing,” Psychology & Marketing, 17 (2), 79-103.

Loo, Robert and Karran Thorpe (2000), “Confirmatory Factor Analyses of the Full and Short Versions of the Marlowe-Crowne Social Desirability Scale,” The Journal of Social Psychology, 40 (5), 628-635.


Newcomer, Susan and J. Richard Udry (1988), “Adolescents’ Honesty in a Survey of Sexual Behavior,” Journal of Adolescent Research, 3 (3-4), 419-423.

Reynolds, William M. (1982), “Development of Reliable and Valid Short Forms of the Marlowe-Crowne Social Desirability Scale,” Journal of Clinical Psychology, 38 (1), 119-125.

Robinson, John P., Philip R. Shaver and Lawrence S. Wrightsman (1991), Measures of Personality and Social Psychological Attitudes. San Diego: Academic Press, 17-60.

Sackeim, Harold A. and Ruben C. Gur (1979), “Self-Deception, Other-Deception, and Self-Reported Psychopathology,” Journal of Consulting and Clinical Psychology, 47 (1), 213-215.

Schwarz, Norbert, Fritz Strack, Hans J. Hippler and George Bishop (1991), “The Impact of Administration Mode on Response Effects in Survey Measurement,” Applied Cognitive Psychology, 5, 193-212.

Siegel, David M., Marilyn J. Aten and Klaus J. Roghmann (1997), “Self-Reported Honesty Among Middle and High School Students Responding to a Sexual Behavior Questionnaire,” Journal of Adolescent Health, 23, 20-28.

Singer, Eleanor (1978), “The Effect of Informed Consent Procedures on Respondents’ Reactions to Surveys,” Journal of Consumer Research, 5, 49-57.

Strahan, Robert and Kathleen Carrese Gerbasi (1972), “Short, Homogeneous Versions of the Marlowe-Crowne Social Desirability Scale,” Journal of Clinical Psychology, 28 (2), 191-193.

Wright, Debra L., William S. Aquilino and Andrew J. Supple (1998), “A Comparison of Computer-Assisted and Paper-and-Pencil Self-Administered Questionnaires in a Survey on Smoking, Alcohol and Drug Use,” Public Opinion Quarterly, 62, 331-353.

Wright, Kevin B. (2005), “Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services,” Journal of Computer-Mediated Communication, 10 (3), article 11.

Zerbe, Wilfred J. and Delroy L. Paulhus (1987), “Socially Desirable Responding in Organizational Behavior: A Reconception,” Academy of Management Review, 12 (2), 250-264.

Appendix 1: Translated IM-survey

The online version can be found at

http://www.thesistools.com/?qid=42029&ln=ned ***

BELANGRIJK: LEES DIT EERST!

Bedankt voor je medewerking aan dit onderzoek!

Het doel van dit onderzoek is het in kaart brengen van relaties tussen persoonlijkheidskenmerken en politieke voorkeuren. Om betrouwbare resultaten te krijgen, is het belangrijk dat je eerlijk en zorgvuldig antwoord geeft.

Alleen voor het uitkeren van de prijs van € 100,- hebben we je studentnummer nodig. Je antwoorden worden volledig anoniem verwerkt. Alleen door de enquête VOLLEDIG in te vullen, maak je kans op de € 100,-.

Let op: gebruik NOOIT de back-knop in je browser. Dit kan de resultaten verstoren. Teruggaan in de enquête is helaas niet mogelijk.

1) Op welke dag van je geboortemaand ben je geboren?


Uitleg:

In sommige enquêtes zijn plaatjes van bloemen/ogen opgenomen. Op basis van eerder onderzoek is de verwachting dat de plaatjes een onbewust effect hebben op de eerlijkheid van antwoorden. Daardoor zouden respondenten mogelijk gemakkelijker toegeven op controversiële partijen te hebben gestemd. Deze uitleg over de plaatjes is niet in alle enquêtes opgenomen, maar alleen in sommige. Zo kan getoetst worden of een eventueel effect ook nog stand houdt als respondenten zich bewust zijn van de bedoeling achter de plaatjes.

2) Wat is je geslacht?

O Man

O Vrouw

3) Wat is je leeftijd? ... jaar

_____

4) Aan welke faculteit studeer je?


5) Op welke politieke partij heb je bij de laatste (landelijke) tweede kamerverkiezingen gestemd?

O Christen Democratisch Appèl (CDA)
O Partij van de Arbeid (P.v.d.A.)
O VVD
O SP (Socialistische Partij)
O GroenLinks
O Democraten 66 (D66)
O ChristenUnie
O Staatkundig Gereformeerde Partij (SGP)
O Partij voor de Dieren
O EénNL
O Groep Wilders / Partij voor de Vrijheid
O Verenigde Senioren Partij
O Ik heb niet gestemd
O Ik heb een blanco stem uitgebracht
O Weet ik niet meer
O Wil ik niet zeggen
O Anders, namelijk: _____

6) Als er nu weer (landelijke) tweede kamerverkiezingen zouden zijn, op welke partij zou je dan stemmen?

O Christen Democratisch Appèl (CDA)
O Partij van de Arbeid (P.v.d.A.)
O VVD
O SP (Socialistische Partij)
O GroenLinks
O Democraten 66 (D66)
O ChristenUnie
O Partij voor de Dieren
O EénNL
O Groep Wilders / Partij voor de Vrijheid
O Verenigde Senioren Partij
O Ik zou niet gaan stemmen
O Ik zou een blanco stem uitbrengen
O Wil ik niet zeggen
O Anders, namelijk: _____

7) Tot slot volgt hierna de persoonlijkheidsvragenlijst, bestaande uit twintig stellingen.

Geef aan in hoeverre de volgende stellingen waar zijn.

(Elke stelling wordt beoordeeld op een vijfpuntsschaal van "niet waar" tot "waar".)

1. Als het nodig is vertel ik wel eens een leugen.
2. Ik verhul nooit de fouten die ik maak.
3. Het is wel eens voorgekomen dat ik misbruik heb gemaakt van iemand.
4. Ik vloek nooit.
5. Soms speel ik liever quitte dan dat ik iemand iets vergeef.
6. Ik gehoorzaam altijd de wet, ook als de kans klein is dat ik voor een overtreding word gepakt.
7. Ik heb iets onaardigs over een vriend(in) gezegd achter zijn/haar rug om.
8. Als ik mensen privé hoor praten, vermijd ik meeluisteren.
9. Ik heb te veel wisselgeld teruggekregen van een cassière zonder hem of haar dit te vertellen.
10. Toen ik jong was stal ik wel eens iets.
11. Ik heb nog nooit afval op straat gegooid.
12. Soms rijd ik sneller dan de toegestane snelheid.
13. Ik lees nooit seksueel getinte boeken of magazines.
14. Ik heb dingen gedaan waar ik andere mensen niet over vertel.
15. Ik neem nooit dingen mee die mij niet toebehoren.
16. Ik heb me ziek gemeld voor werk of school terwijl ik niet echt ziek was.
17. Ik heb nog nooit een bibliotheekboek of verkoopwaar in een winkel beschadigd zonder dit te melden.
18. Ik heb een paar behoorlijk afschuwelijke gewoontes.
19. Ik roddel niet over andermans zaken.

8) Hoe eerlijk heb je deze enquête ingevuld?

O Volledig oneerlijk
O Enigszins oneerlijk
O Noch heel oneerlijk, noch heel eerlijk
O Enigszins eerlijk

O Volledig eerlijk

Nogmaals heel hartelijk bedankt voor je medewerking!


Appendix 2a: BIDR version 6 - form 40A

Using the scale below as a guide, write a number beside each statement to indicate how true it is.

+_________+_________+_________+_________+_________+_________+ 1 2 3 4 5 6 7

not true somewhat very true

____ 1. My first impressions of people usually turn out to be right.
____ 2. It would be hard for me to break any of my bad habits.
____ 3. I don't care to know what other people really think of me.
____ 4. I have not always been honest with myself.
____ 5. I always know why I like things.
____ 6. When my emotions are aroused, it biases my thinking.
____ 7. Once I've made up my mind, other people can seldom change my opinion.
____ 8. I am not a safe driver when I exceed the speed limit.
____ 9. I am fully in control of my own fate.
____ 10. It's hard for me to shut off a disturbing thought.
____ 11. I never regret my decisions.
____ 12. I sometimes lose out on things because I can't make up my mind soon enough.
____ 13. The reason I vote is because my vote can make a difference.
____ 14. My parents were not always fair when they punished me.
____ 15. I am a completely rational person.
____ 16. I rarely appreciate criticism.
____ 17. I am very confident of my judgments.
____ 18. I have sometimes doubted my ability as a lover.


+_________+_________+_________+_________+_________+_________+ 1 2 3 4 5 6 7

not true somewhat very true

____ 21. I sometimes tell lies if I have to.
____ 22. I never cover up my mistakes.
____ 23. There have been occasions when I have taken advantage of someone.
____ 24. I never swear.
____ 25. I sometimes try to get even rather than forgive and forget.
____ 26. I always obey laws, even if I'm unlikely to get caught.
____ 27. I have said something bad about a friend behind his/her back.
____ 28. When I hear people talking privately, I avoid listening.
____ 29. I have received too much change from a salesperson without telling him or her.
____ 30. I always declare everything at customs.
____ 31. When I was young I sometimes stole things.
____ 32. I have never dropped litter on the street.
____ 33. I sometimes drive faster than the speed limit.
____ 34. I never read sexy books or magazines.
____ 35. I have done things that I don't tell other people about.
____ 36. I never take things that don't belong to me.
____ 37. I have taken sick-leave from work or school even though I wasn't really sick.
____ 38. I have never damaged a library book or store merchandise without reporting it.
____ 39. I have some pretty awful habits.


Appendix 3: Scoring key for BIDR version 6

Self Deceptive Enhancement (SDE): Items 1 - 20

Reverse scored items: 2,4,6,8,10,12,14,16,18,20.

Impression Management(IM): Items 21 - 40

Reverse scored items: 21,23,25,27,29,31,33,35,37,39.

The 40 items may be interspersed for administration.

Dichotomous Scoring procedure

First, reverse the Likert ratings for the items indicated above.

Then add up the points on each scale. If the items were 5-point, the scoring is a little different.

7-point scales. For each scale, add one point for every '6' or '7'.

5-point scales. For SDE scale, add one point for every '5'; For IM scale, add one point for every '4' or '5'.

In both scoring systems, the minimum score is 0; the maximum is 20.
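A minimal sketch of the dichotomous scoring rule above for the 7-point IM items; the item numbers and reverse-keying follow the scoring key, but the function itself is illustrative and not part of the original manual.

```python
# Dichotomous scoring of the BIDR Impression Management scale (items 21-40, 7-point format).
IM_ITEMS = list(range(21, 41))
IM_REVERSED = {21, 23, 25, 27, 29, 31, 33, 35, 37, 39}

def im_score(ratings):
    """ratings: dict mapping item number (21-40) to a 1-7 Likert rating."""
    score = 0
    for item in IM_ITEMS:
        r = ratings[item]
        if item in IM_REVERSED:
            r = 8 - r               # reverse-score the indicated items
        if r in (6, 7):             # one point for every '6' or '7' after reversing
            score += 1
    return score                    # 0 (low IM) .. 20 (high IM, respondent is probably lying)
```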

Reliabilities: Typical alphas are .67-.77 (SDE) and .77-.85 (IM)

Norms: Means and standard deviations for UBC undergraduates under two scale formats and two instructional sets.

                               7-point scale                  5-point scale
                               Males (182)   Females (251)    Males (122)   Females (248)
Respond Honestly        SDE    7.5 (3.2)     6.8 (3.1)        2.3 (2.3)     2.1 (2.0)
                        IM     4.3 (3.1)     4.9 (3.2)        5.5 (3.5)     6.1 (3.6)
Play Up Your Good Points


For more information, consult the following papers:

Paulhus, D.L. (1991), “Measurement and control of response bias,” in Measures of Personality and Social Psychological Attitudes, J.P. Robinson, P.R. Shaver, & L.S. Wrightsman, eds. San Diego: Academic Press, 17-59.

Paulhus, D.L., & Reid, D. (1991), “Enhancement and denial in socially desirable responding,” Journal of Personality and Social Psychology, 60, 307-317.

Paulhus, D.L., Bruce, M.N., & Trapnell, P.D. (1995), “Effects of self-presentation strategies on personality profiles and their structure,” Personality and Social Psychology Bulletin, 21, 100-108.

Stober, J., Dette, D.E., & Musch, J. (2002), “Comparing continuous and dichotomous scoring of the Balanced Inventory of Desirable Responding,” Journal of Personality Assessment, 78, 370-389.


Appendix 4: ANOVA assumptions check

To be able to run an ANOVA, the collected data had to meet some assumptions:

1) Data should be from a normally distributed population;
2) The variances in each experimental condition are fairly similar;
3) Observations should be independent;
4) The dependent variable should be measured on at least an interval scale (Field 2005, p. 324).
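As a sketch, the first two assumptions could be checked per group with SciPy along the lines below; the thesis itself used SPSS (see Appendices 5 and 6), and the data here are stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-in IM-scores per subgroup; replace with the real scores of groups 1a-2c.
groups = {name: rng.normal(5.5, 2.7, size=400)
          for name in ["1a", "1b", "1c", "2a", "2b", "2c"]}

# Assumption 1: normality within each experimental condition (Kolmogorov-Smirnov).
for name, scores in groups.items():
    z = (scores - scores.mean()) / scores.std(ddof=1)   # standardize before comparing to N(0, 1)
    stat, p = stats.kstest(z, "norm")
    print(f"group {name}: KS statistic {stat:.3f}, p = {p:.3f}")

# Assumption 2: homogeneity of variances across conditions (Levene's test).
stat, p = stats.levene(*groups.values())
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")
```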

To test the first assumption, a Kolmogorov-Smirnov test was run. The results show that within each group the IM-scores are normally distributed (see Appendix 5); all significance scores are well below α = 0.05. A Levene's test was used to test the second assumption about the homogeneity of variances. All p-values of the different Levene tests were above 0.05 (see Appendix 6), which is good news in this case: the second assumption can be accepted as well. To be able to pass the third criterion, different respondents were placed at random in different groups, so no respondent saw both pictures of flowers and pictures of, say, male eyes. To prevent respondents from participating multiple times, which could have been tempting because a hundred euros were at stake, people were asked to fill in the e-mail address of their university account (no other) before the survey got started. The possibility of


Appendix 5: Kolmogorov-Smirnov-test of normality

Case Processing Summary (IMSCORE)

Group                              Valid            Missing        Total
                                   N     Percent    N    Percent   N     Percent
Explanation: Yes, Flowers          386   100.0%     0    0.0%      386   100.0%
Explanation: Yes, Women's Eyes     388   100.0%     0    0.0%      388   100.0%
Explanation: Yes, Men's Eyes       355   100.0%     0    0.0%      355   100.0%
Explanation: No, Flowers           427   100.0%     0    0.0%      427   100.0%
Explanation: No, Women's Eyes      366   100.0%     0    0.0%      366   100.0%
Explanation: No, Men's Eyes        427   100.0%     0    0.0%      427   100.0%

Tests of Normality (IMSCORE)

Group                              Kolmogorov-Smirnov(a)        Shapiro-Wilk
                                   Statistic   df    Sig.       Statistic   df    Sig.
Explanation: Yes, Flowers          .106        386   .000       .973        386   .000
Explanation: Yes, Women's Eyes     .090        388   .000       .973        388   .000
Explanation: Yes, Men's Eyes       .130        355   .000       .959        355   .000
Explanation: No, Flowers           .089        427   .000       .978        427   .000
Explanation: No, Women's Eyes      .129        366   .000       .966        366   .000
Explanation: No, Men's Eyes        .105        427   .000       .970        427   .000

IMSCORE stem-and-leaf plots (SPSS output, not reproduced) were generated for each group: Explanation: Yes, Flowers; Explanation: Yes, Women's Eyes; Explanation: Yes, Men's Eyes; Explanation: No, Flowers; Explanation: No, Women's Eyes; Explanation: No, Men's Eyes.

Tests of Between-Subjects Effects
Dependent Variable: IMSCORE

Source                      Type III Sum of Squares   df     Mean Square   F         Sig.
Corrected Model             710.486(a)                52     13.663        2.011     .000
Intercept                   6383.657                  1      6383.657      939.354   .000
GROUP                       1.237                     2      .619          .091      .913
GENDER                      61.780                    1      61.780        9.091     .003
FACULTY                     74.427                    8      9.303         1.369     .206
GROUP * GENDER              19.315                    2      9.658         1.421     .242
GROUP * FACULTY             113.404                   16     7.088         1.043     .407
GENDER * FACULTY            66.016                    8      8.252         1.214     .287
GROUP * GENDER * FACULTY    195.269                   15     13.018        1.916     .018
Error                       7312.274                  1076   6.796
Total                       38553.000                 1129
Corrected Total             8022.760                  1128


Appendix 7: four-way ANOVA
