
The Double-edged Sword of Information Transparency as a Remedy for Privacy Concerns


Academic year: 2021


by Christopher Bunn
Stationsplein 9, 9726 AE Groningen
bunn.christopher90@gmail.com
Phone: +49 160 758 3226
Student number: s3242242
31 July 2017

Master's Thesis, MSc Marketing Intelligence, University of Groningen

Faculty of Economics and Business, Department of Marketing


Management summary

We live in the century of big data. Interactions between online companies and customers are observed and analyzed in order to reach the individual customer in as personalized a way as possible. Modern marketing builds on Customer Relationship Management (CRM). Customers are targeted with individually tailored offers, services, and even a wholly unique customer experience. While they enjoy the benefits, they are increasingly concerned about their privacy online. Many have already experienced privacy violations online. Additionally, the media pays these issues a lot of attention, and negative news about data breaches spreads like wildfire. Customers also realize how powerful data gathering and analysis tools have become. As a consequence, they become reluctant to share their personal information and stop cooperating with companies in order to protect their privacy. These so-called protective responses impair the effectiveness of customer-centric marketing by decreasing the quality of data.

Our goal was to learn more about a remedy for privacy concerns and protective responses that is already in the center of attention: transparency. A transparent data collection and usage approach is practiced by more and more companies online in order to address privacy concerns and to keep customers open to information sharing.

To simulate transparency, we modelled different degrees of transparency that were presented to a customer who wants to use an online service. Based on the literature, we expected no transparency at all to raise concerns because customers become suspicious. We also expected very high degrees of transparency to be concerning because they make risks salient. We therefore argued that a low-transparency approach is the better trade-off.


Further, we found that customers with low Internet self-efficacy react especially sensitively to transparency, which supports our explanation that these customers lack knowledge about data collection and therefore become concerned when confronted with information about it. Customers with high Internet self-efficacy, on the other hand, prefer low degrees of transparency, while high degrees as well as no transparency make them suspicious.


Preface

By handing in this thesis, I am, for now, finishing my academic journey. First of all, I want to thank my parents, who enabled me to take all these chances and to walk this path to a promising future. Their support and belief in me played a big role in why I am here today.

I also want to thank all the friends who accompanied me on this journey. They made this demanding time a lot more enjoyable. Further, I want to thank all the people who took part in the survey for this research. Their participation is the cornerstone of my work. I also thank my thesis workgroup. Our mutual exchange and support made the whole process far more valuable.

Finally, thanks to Dr. Lara Lobschat, who gave me the chance to work on this interesting topic and guided me through the whole process. I can certainly say that I have never learned as much as during these months. Thanks also to Prof. Dr. Peter Verhoef for the additional supervision. His lectures in Customer Management helped me to better understand Customer Relationship Management processes and were therefore an important part of my theoretical inspiration.


1. Introduction

“Information is the oil of the 21st century, and analytics is the combustion engine.” — Peter Sondergaard (Gartner 2011).

We live in the century of big data. With estimated revenues for big data analytics of $203 billion in 2020, the whole economy revolves around data trends like deep learning and the Internet of Things (Press 2017). Companies have probably never before collected as much data from their customers. Consider just the five largest tech companies, Google, Facebook, Apple, Amazon, and Yahoo, with their 3 billion monthly active users. Ad clicks, device-specific information, phone numbers, profile information, and browser histories are just a few types of data that are gathered during the daily interaction with customers (Baynote). Customer data is the foundation of modern customer-centric marketing. It is used to determine customers' needs and desires and to deliver personalized advertisements and services to individual customers (Kumar and Reinartz 2012). Research supports the usefulness of customer data for firms; for example, customer data can improve marketing returns (McAfee 2012; Brynjolfsson 2012). Facebook's quarterly mobile ad revenues of $5.4 billion in 2016 illustrate this importance. These revenues, which account for 84% of Facebook's total revenue, are generated through the more than 1 billion users who use Facebook every day and are targeted with personalized ads in their daily lives (Gara 2016). Companies also depend on user information to build relationships with their customers. Personalized service is necessary to build lasting relationships with the customer and to create customer loyalty (Awad and Krishnan 2006).


As a reaction to their concerns, customers engage in different protective responses to preserve their privacy (Son and Kim 2008). For instance, customers become reluctant to disclose personal information to the company. Malhotra et al. (2004) found privacy concerns to be a major determinant of the willingness or unwillingness to share personal information with companies. In 2016, 44% of customers withheld personal information due to privacy concerns (TRUSTe 2016). Other protective responses are the removal of personal information or negative Word of Mouth (WOM) about the company. Research mostly agrees that "privacy concerns are the most influential predictor of privacy protective responses" (Chen et al. 2016, p. 415). Son and Kim (2008) conclude that the reluctance to cooperate with companies threatens data quality and creates costs.

Following these insights, companies need to find strategies to remedy these behaviors in order to use high-quality data and to provide well-suited services to individual customers (Kumar and Reinartz 2012). Ultimately, it is the protective response that diminishes the effectiveness of marketing. For example, many customers have privacy concerns but are still willing to share their data for very little expected benefit, a phenomenon known as the Personalization Privacy Paradox (Aguirre et al. 2016; Carrascal et al. 2013). The direct impact of protective responses on the effectiveness of CRM makes them an interesting aspect for researchers and practitioners (Martin et al. 2017). Privacy concerns and the resulting Information Privacy Protective Responses (IPPR) are therefore the focus of this research.


company stores the data for a certain period. Third, the knowledge about how the company actually uses the customer’s data after collecting and storing them (Son and Kim 2008; Awad and Krishnan 2006; Kumar and Reinartz 2012).

So far, not enough evidence for a positive effect of transparency has been provided. While confirming the usefulness of transparency, Martin et al. (2017) point out that the interaction between transparency and control is what makes transparency successful. In two of their studies, transparency alone had no significant effect on experienced vulnerability, trust, or financial returns. The authors argue that customers who have only the knowledge (transparency) about potential risks but no possibility to intervene remain concerned. Customers who have only control have the possibility to act but lack the necessary knowledge to be effective (Martin et al. 2017).


It is important to mention that, in terms of online privacy, the individual customer's attitude towards the Internet and technology in general has an important influence that needs to be considered in this study. For example, Youn (2009) found that young adolescents who use the Internet less frequently seek out more advice from parents or experts. It has also been shown that only 35% of the baby-boomer generation are comfortable with the way companies handle their data, while 51% of the millennial generation feel this way, which suggests a different attitude towards the Internet (Quint and Rogers 2015). Conversely, teenagers who are more concerned about their privacy online make their Facebook profiles less visible and implement stricter privacy settings (Feng and Xie 2014). Chen et al. (2016, p. 423) suggest further investigation into the role of Internet efficacy, which is "a person's understanding of and confidence in using a computer and the Internet". The skills customers possess play an important role in their responses. Advanced Internet skills such as encrypted communication, browsing the Internet anonymously, and installing anti-spy programs are necessary for more aggressive responses, while an avoidance response does not demand advanced skills (Chen et al. 2017). We propose that a customer's Internet self-efficacy might strengthen the suggested effect of transparency on privacy concerns because customers who are more skilled and knowledgeable in Internet matters are more aware of the potential risks of data collection. Following this argument, a higher Internet self-efficacy also strengthens the protective response to privacy concerns because of the higher awareness and the higher confidence in one's ability to cope with the problem.

The current study therefore addresses the following research questions: How do different degrees of transparency (none, low, high) by the company impact the different protective behaviors of customers (mediated by privacy concerns)? How is this impact moderated by the customer’s Internet self-efficacy?


feelings of helplessness, which can be a motivator for protective responses. Conversely, if the risk is less visible, it might be perceived as less threatening, or the customer may not perceive a threat at all (Chen et al. 2016; Watson 2014; Morey et al. 2015). Besides the contribution to academia, the results on transparency may also have practical relevance, since several countries mandate certain degrees of transparency in their governmental regulations. For example, Germany has one of the strictest data protection laws regarding the collection, processing, and use of personal information by private actors for commercial purposes (Kumar and Reinartz 2012).

In addition, we want to give practitioners further advice on how to treat different types of customers in order to deal with their concerns. We want to find answers as to how customers with different levels of Internet self-efficacy differ in their protective responses. Even though previous research has mainly found a positive link between Internet affinity and heightened skepticism and customer knowledge (Feng and Xie 2014), we suggest that customers who are more open to technological innovation (e.g. the Internet of Things) focus more on the benefits of big data, while others are more cautious and skeptical (Youn 2009).

2. Theoretical Background

This section reviews the existing literature on privacy concerns and their impact on protective responses. We examine the role of transparency as a strategy for dealing with privacy concerns, but also as a general concept in the field of privacy. Additionally, the importance of Internet self-efficacy in the perception and behavior of customers will be presented. Finally, we suggest a conceptual framework of how these concepts interact, formulate expected effects, and derive hypotheses for the empirical research (see graphic 1).

2.1 Information Transparency


benefit from stating openly why they request data and how they plan to use it. This information could also prove that customer data is not used entirely to the company's advantage, which reduces negative feelings (feeling upset, irritated, or distressed) about the whole information-disclosure process.

It has also been shown that customers desire distributive, procedural, and interactional justice between themselves and the company. If their sense of justice is hurt, privacy concerns and protective responses increase as a result. Information transparency is an elementary part of procedural justice. Procedural justice means that the customer is able to make informed choices. This can include information about the usage of personal data and the opportunity to remove these data, for instance in the form of a proper privacy policy by the company. The findings therefore hint at a positive impact of transparency (Wirtz and Lwin 2009).

Awad and Krishnan (2006) found that customers' rated importance of information transparency increases with higher privacy concerns. Also, customers who have previously experienced incidents of privacy invasion rate information transparency as more important. Accordingly, those customers who rate information transparency as important are also more reluctant to share personal information or to participate in online profiling.

The growing privacy concerns and a greater desire for information transparency have not gone unnoticed by companies. Therefore, they increasingly adopt privacy as a strategy (Martin and Murphy 2017). This means privacy management is seen not as an inconvenient cost but as a way to improve customer experience by meeting the customer's need for privacy. Transparency about information collection and usage leads to a more balanced relationship between customer and company, which creates justice. Privacy as a strategy makes sense, since customers are becoming increasingly knowledgeable and savvy about digital profiling. This way, companies with good privacy management can gain a competitive advantage in the market (Martin and Murphy 2017). Additionally, Quint and Rogers (2015) recommend transparency as a tool to sustain brand trust and reputation, which are key components of a positive long-term customer relationship. Urban et al. (2009) confirm the central importance of transparency for building and maintaining trust in the online buyer-seller relationship. A lack of access to privacy and security information (transparency), e.g. in the form of an unclear privacy policy, can break the trustworthiness of an online company.


personally (vis-à-vis other individuals, groups, organizations, etc.) information about one's self" (Stone et al. 1983, p. 460). The authors state the importance of control over personal information in reducing customers' perceived privacy concerns. One core element of perceived control is knowledge. The authors therefore conclude that knowledge (transparency) is a manifestation of the actual desire for perceived control (Awad and Krishnan 2006).

This idea is in line with the findings of Martin et al. (2017) that knowledge is only truly powerful in reducing protective responses when it interacts with control. In two of their studies, they manipulated low and high degrees to which a company's data management policies are 'clear, straightforward, and easy to understand'. While confirming the usefulness of transparency in mitigating protective responses, Martin et al. (2017) point out that the interaction between transparency and control is needed to see significant impacts. They found that transparency alone has no significant suppressing effect on the vulnerability the customer experiences. The strongest suppressing effect came from the combination of high transparency with high control. The authors argue that customers who only have the knowledge (transparency) about potential risks but no possibility to intervene remain concerned. Customers who only have control have the possibility to act but lack the necessary knowledge to be effective.

In our study, we want to find out more about transparency as an isolated tool without control. This way, we can put the focus entirely on transparency and go into further detail instead of focusing on the interaction between transparency and control. Specifically, we want to further investigate the impact of different degrees of transparency (none, low, high) on privacy concerns and IPPR. We therefore observe three degrees of transparency instead of the two in Martin et al. (2017).


on customer perceptions of an online business. It was found that partitioning the total sale price into a base price and one surcharge (e.g. a shipping fee or sales tax) increased the perceived trustworthiness of the online business. This supports the idea of transparency as a strategy to build a trust-based relationship between customer and company (Martin and Murphy 2017; Urban et al. 2009). The information about the single components of the price is a transparent approach, which customers value more highly than the non-transparent approach of showing just the total price. However, when participants were provided with a price split into three components (a lower base price, a shipping fee, and a sales tax), the perception of trustworthiness dropped significantly below the level of the price with only two components. It is suggested that additional surcharges direct attention to additional costs even if the total price is the same (Xia and Monroe 2004). This finding suggests that higher transparency does not further improve trust in the company compared to a moderate transparency approach. On the contrary, the high amount of detail repels customers because it directs their attention to negative aspects instead of creating trust.


On the other hand, we assume that a total lack of transparency leads to distrust and will raise privacy concerns about covert data collection practices (Aguirre et al. 2016). Therefore, a low-transparency approach is expected to be the optimal trade-off between a lack of information and overly detailed information. Based on the presented literature on information transparency, we develop the following hypothesis:

H1: The effect of different degrees of information transparency (none, low, high) on privacy concerns is u-shaped.
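With three ordered conditions, the u-shape predicted by H1 corresponds to a planned quadratic contrast (weights 1, -2, 1 on the none/low/high group means). The following is a hedged sketch with simulated data; the group means, sample size, rating scale, and variable names are illustrative assumptions, not results of this study:

```python
import numpy as np
from scipy import stats

# Hypothetical privacy-concern scores (7-point scale) per condition.
# A u-shape means the low-transparency group scores lowest.
rng = np.random.default_rng(0)
n = 60                                  # assumed participants per condition
none = rng.normal(5.0, 1.0, n)          # no transparency: suspicion
low = rng.normal(3.5, 1.0, n)           # low transparency: trade-off
high = rng.normal(4.8, 1.0, n)          # high transparency: salient risks

# Quadratic contrast (1, -2, 1): significantly positive -> u-shape.
contrast = none.mean() - 2 * low.mean() + high.mean()

# Pooled variance and standard error of the contrast (equal group sizes).
s2 = (none.var(ddof=1) + low.var(ddof=1) + high.var(ddof=1)) / 3
se = np.sqrt(s2 * (1 + 4 + 1) / n)
t = contrast / se
p = 2 * stats.t.sf(abs(t), df=3 * (n - 1))
print(f"contrast={contrast:.2f}, t={t:.2f}, p={p:.4f}")
```

A significantly positive contrast would support the hypothesized u-shape; the actual plan of analysis (chapter 3) may of course use a different test, such as an ANOVA with post-hoc comparisons.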

2.2 The effect of Privacy Concerns on Protective Responses

Kumar and Reinartz (2012, p. 280) define privacy in the CRM context, following Stone et al. (1983), as "the power of the individual to personally control (vis-à-vis other individuals, groups, organizations, etc.) information about one's self". However, we define the construct of information privacy concern as used by Son and Kim (2008, p. 508): the "degree to which an Internet user is concerned about online companies' practices related to collection and use of his or her personal information". We chose this narrower definition since we want to focus on the numerous situations in practice where customers have little or no control over the interaction in which data is collected and used.

The level of privacy concerns a customer experiences depends on a wide variety of factors that are internal or external to the company. For example, how transparently a company communicates its collection processes, or past incidents of unauthorized access by third parties, are internal to the company. The customer's cultural background, past experiences, and public media coverage are external to the company's frame of action (Malhotra et al. 2004; Son and Kim 2008; Kumar and Reinartz 2012). Even the mere knowledge that companies use customer data to provide personalized content creates the idea of being manipulated, since customers find it difficult to evaluate the risks of covert data collection (Aguirre et al. 2016). Additionally, customers do not know whether their data is transmitted to third parties (Angwin 2010).


But company-external trends, such as the endless possibilities of the Internet and technology, are also central drivers of privacy concerns. Technologies like cookies and location-based smartphone services make the customer aware of being constantly tracked. Additionally, privacy issues receive attention in the public media. In combination with social media, negative incidents spread extremely widely in a short amount of time (Kumar and Reinartz 2012).

The context also influences the level of customers' privacy concerns (John et al. 2011). For example, it has been shown that customers have fewer privacy concerns when the company is well known (Aguirre et al. 2016). Besides that, mandatory data collection causes more negative feelings than collection on a voluntary basis, which leads to higher falsification of personal information (Norberg and Horne 2013). This supports our previous remarks on the importance of the control experienced by the customer.

High privacy concerns impact companies' processes negatively in a number of ways. Customers show lower online purchase intentions, a higher unwillingness to disclose personal information, and less communication with the company (Hallam and Zanella 2017; Son and Kim 2008).

In other words, customers engage in protective responses to deal with their privacy concerns. Chen et al. (2016) differentiate approach and avoidance protective responses. Approach strategies include active problem solving, like restricting the visibility of online postings to certain contacts or deleting certain contacts (Chen et al. 2016; Feng and Xie 2014). Avoidance strategies, meanwhile, aim at avoiding the risk, e.g. by using fake information when registering on a website in order to hide one's identity from other users online (Chen et al. 2016). The authors found privacy concerns to positively predict both approach and avoidance protective responses.

Youn (2009) focused on the protective responses of young adolescents. The observed approach strategies are fabricating personal information and seeking social support (e.g. asking teachers to examine the privacy policy). Avoidance strategies include refusal to visit a website that asks for personal information (Youn 2009). While the author found that seeking advice and refusing to visit the website were positively correlated with privacy concerns, there was no significant relationship with fabricating personal information.


and the usage of privacy settings on social network websites, too. Further, they confirm the findings from Malhotra et al. (2004) and Son and Kim (2008) that users who are less concerned leave their privacy settings at default (Mohamed and Ahmad 2012).

Martin et al. (2017) researched the effect of customer data vulnerability on the responses 'falsifying', 'negative WOM', and 'switching', mediated by either emotional violation or cognitive trust. They confirmed that all relationships are positive and either partially or fully mediated.

Son and Kim (2008) probably contributed the most extensive conceptual model of protective responses. They defined three categories: information provision (refusal, misrepresentation), private action (removal, negative WOM), and public action (complaints to the company or third parties). Similar to Youn (2009), they find a positive effect of privacy concerns on refusal (Kumar and Reinartz 2012). In line with Youn (2009), they do not find support for misrepresentation through a false identity. Removal means boycotting the company's data usage by removing personal information (Son and Kim 2008; Kumar and Reinartz 2012). All IPPR except misrepresentation were found to be caused by privacy concerns.

In this research, we want to look at active and passive protective responses. We also want to include protective responses that impair the data quality of marketing in a direct way. For this purpose, we use refusal, misrepresentation, removal, and negative WOM as Son and Kim (2008) define them under IPPR. As demonstrated above, the positive effect of privacy concerns on protective responses has been found in numerous studies. We use the established constructs and already supported hypotheses of Son and Kim (2008). This is important to make the results on the actual topic of interest, the effect of transparency, as valid as possible. Refusal aims at avoiding information interaction with the company. It does not demand advanced skills or effort, since it is basically sufficient to not use a service. We therefore developed the following hypothesis for the effect of privacy concerns on refusal:

H2: Information privacy concerns will have a positive impact on customers' refusal to provide their personal information to companies.

In contrast, removal is a proactive approach that demands several actions and a certain degree of Internet-efficacy to be effective. We propose the following hypothesis:


Misrepresentation is somewhere in between. It is especially interesting because the presented research finds contradictory results (Son and Kim 2008; Youn 2009; Martin et al. 2017; Chen et al. 2015). We therefore suggest the following hypothesis:

H4: Information privacy concerns will have a positive impact on customers’ misrepresentation of their personal information to companies.

Finally, we consider a response that does not directly affect data quality, yet is still potentially harmful to the company. When customers have a certain degree of privacy concerns caused by a concrete privacy violation, this might lead to the spread of negative WOM about the company. By doing this, customers try to protect friends and family from similar incidents (Son and Kim 2008). Because of the important role of WOM, we decided to include it as a fourth IPPR.

H5: Information privacy concerns will have a positive impact on customers spreading negative Word of Mouth about the companies.

As already presented, the link between privacy concerns and protective responses is well supported. Therefore, by using these hypotheses in our model, we want to replicate the prior findings in our study (Son and Kim 2008). Still, previous research on falsification/misrepresentation responses is not as clear as for the other IPPR. This leaves room for further research.

2.3 Internet self-efficacy


Internet self-efficacy is a central part of privacy self-efficacy online (LaRose, Mastro and Eastin 2001). Youn (2009, p. 403) describes Internet self-efficacy as the "general confidence in protecting their privacy from e-marketers' information practices". Based on this, Internet self-efficacy in the context of online privacy means that an individual believes in his or her ability to gain knowledge of marketers' data collection practices, to be aware of potential privacy risks, and to cope with these risks. Following protection motivation theory, we assume that Internet self-efficacy plays a central role in the way a customer engages in protective responses. The psychological reaction to transparent data collection methods (privacy concerns) and the resulting behavior (protective responses) are most probably fundamentally different for two individuals from opposite ends of the Internet-efficacy spectrum. The IT specialist will process information on data collection entirely differently than the retiree who was recently introduced to the Internet. In the same way, their protective responses to privacy concerns will differ. Therefore, we assume that an individual's Internet self-efficacy promotes the previously described relationships between transparency, privacy concerns, and IPPR and therefore needs to be considered as a moderating effect.

Feng and Xie (2014) point out the role that the Internet and social media play in the socialization of teenagers. They argue that Internet usage helps young users to gain knowledge about CRM and advertising practices, which increases awareness of potential risks. Also, Moscardelli and Divine (2007) mention that heavy Internet users always keep track of the latest Internet developments, such as marketers' data collection practices. It is suggested that a higher Internet self-efficacy causes stronger protective responses (Milne, Rohm and Bahl 2004; Moscardelli and Divine 2007).

Additionally, very active social media users disclose more personal information online, which creates the need for skills in managing and protecting one's privacy effectively (Lewis et al. 2008). Further, heavy users of social networks already have skills in managing social networks, which makes it easier for them to come up with the right coping strategies (Feng and Xie 2014; Moscardelli and Divine 2007). Feng and Xie (2014) find support for this theory: they found heavy social network users to have higher concerns about their personal information being collected by marketers, and these users therefore adopt stricter privacy settings in their social networks.


less privacy concerns. The findings of Youn (2009) and Miyazaki and Fernandez (2001) suggest that high Internet self-efficacy decreases privacy concerns and protective responses.

Summarizing the presented research, there is sufficient evidence that Internet users with greater knowledge and skills have a higher awareness of risks and are therefore more motivated to engage in protective responses. The confidence in dealing with these risks might further promote engagement in protective responses when privacy concerns arise. On the other hand, there is still some contradiction in the literature. Some insights suggest that heavy Internet users are less concerned about companies' practices. One explanation might be that high-efficacy users are more used to, and desensitized to, risks because of frequent confrontation, while less confident users become more concerned because of their lacking skills.

Even if the majority of the presented literature hints at a positive relationship, the remaining ambiguity makes it interesting to add our own insights on this topic. We also want to find out where the effect actually comes into play. For example, we suspect that a customer with high self-efficacy who is confronted with data collection methods might be more aware of risks and therefore more concerned, which leads to protective responses. However, the customer might also be less concerned because he feels more confident on the Internet and because it is easier for him to engage in protective responses.

The literature suggests that heavy Internet users always keep track of CRM and advertising practices, which creates a higher awareness of the associated risks (Moscardelli and Divine 2007; Feng and Xie 2014). Therefore, they react with more privacy concerns when confronted with transparent data collection practices.

H6: Customers’ Internet self-efficacy strengthens the effect of information transparency on privacy concerns.


positive but weaker. No effect was found on the refusal to read unwanted mails or the refusal to register for a website. We suppose that high-efficacy users are more reliant on online services and therefore prefer not to avoid the service (refusal). Instead, they prefer to use other strategies to cope with the risks, namely misrepresentation and removal. In terms of negative WOM, it has been shown that customers engage more in WOM when they expect a positive judgement of their person by others. More socially confident customers believe in their competence, expect a positive judgement, and therefore engage in more WOM (Clark et al. 2008). We assume that Internet-efficacious customers believe in their competence in the same way and therefore engage in more negative WOM.

We therefore expect Internet self-efficacy to have the following moderation effects:

H7: Customers’ Internet self-efficacy strengthens the effect of privacy concerns on the customers’ refusal to provide their personal information to companies (low effect).

H8: Customers’ Internet self-efficacy strengthens the effect of privacy concerns on the customer’s removal of his/her personal information from the databases of companies that threaten information privacy (moderate effect).

H9: Customers’ Internet self-efficacy strengthens the effect of privacy concerns on the customer’s misrepresentation of his/her personal information to companies (low to moderate effect).

H10: Customers’ Internet self-efficacy strengthens the effect of privacy concerns on the customer’s negative Word of Mouth about the companies (moderate effect).


3. Methodology

In the previous chapter, we elaborated the theoretical constructs and discussed their expected interactions in a conceptual model. In the current chapter, the experimental setup and data collection methods are defined and the operationalization of the constructs is described. Finally, we present our plan of analysis.

3.1 Study Design

In order to gain insight into the hypothesized effects and answer the research question, an experiment was conducted. Within this experiment, we manipulated the previously discussed three levels of transparency in order to examine their effect on the participants’ privacy concerns and IPPR. This means we created a no-, a low- and a high-transparency condition. Each participant was randomly assigned to one of the three conditions. We therefore use a single-factor, between-subjects design with three manipulation groups. We look at the effect of the independent variable (IV) transparency on the four dependent variables (DV) (IPPR). This effect is expected to be mediated by privacy concerns. Meanwhile, the effects of the IV on the mediator and of the mediator on the DVs are moderated by Internet self-efficacy, as depicted in the conceptual framework.
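The random assignment described above can be sketched in a few lines of Python; the function name, condition labels and participant IDs below are illustrative, not part of the actual survey software.

```python
import random

def assign_conditions(participant_ids, seed=None):
    """Randomly assign each participant to one of the three
    transparency conditions (single-factor, between-subjects)."""
    rng = random.Random(seed)
    conditions = ("none", "low", "high")
    return {pid: rng.choice(conditions) for pid in participant_ids}

# example: 125 hypothetical participants, seeded for reproducibility
groups = assign_conditions(range(125), seed=42)
```

With independent random draws per participant, group sizes are only approximately equal, which matches the slightly unequal n per condition reported later (41/45/39).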

An experimental setup has several advantages. First, it allows us to create the conditions according to our ideas. Second, while creating the conditions, we can keep other influences as constant as possible and control for external effects. Also, the random assignment of participants to the conditions ensures that differences between the groups beforehand are minimal. This way, differences in their behavior later on can safely be attributed to our manipulation. The downside is that an experimental situation is not as realistic as observations of real-life situations (Aronson et al. 1998).

3.2 Survey and Data Collection


First, general questions on the topic are posed. Next, a scenario is introduced in which each respondent experiences one of three conditions. The scenario depicts the situation of using an online sneaker store for the first time after registration. After the scenario, all respondents continue with the same flow of questions again. These questions ask how transparent the company in the scenario was perceived to be and how many privacy concerns the participants experience. The participants were also asked to state the IPPR they would potentially engage in and to rate their Internet self-efficacy. The next set of questions covers previous privacy violations, attitudes towards the presented online store, involvement with the product (sneakers) and demographics (gender, age, education). Answers to these questions will especially help us to control for external effects. Besides that, respondents were asked to rate the comprehensibility of the survey in order to assess the overall quality of the data.

Before publishing the survey, a pre-test was conducted with around 10 respondents. All of them rated the flow of questions and the scenario as clearly understandable and straightforward. Also, the desired difference in the perception of the conditions could be observed. After small adjustments, we therefore decided to proceed with the survey.

Since the survey was sent out to other students at the university, friends and relatives, the approach can be considered convenience sampling. The author invited fellow students, friends and family via the instant messenger WhatsApp, Facebook, LinkedIn and via email to follow a hyperlink to the survey. The participants mostly live in the Netherlands and Germany. The survey can be conveniently completed on a computer or mobile device. The invitation included a note that participation in the survey offers the chance to take part in a lucky draw for an Amazon voucher worth 30€.

3.3 Measurement of Scales


We also operationalized eleven control variables, which include mood, perception of the online store, involvement with sneakers, familiarity with the topic of online privacy, previous privacy violations as well as demographics (gender, age, education). Further, we use two scales to assess the overall comprehension of the survey. The operationalization of the control variables can be seen in tables 1 to 8 in Appendix A. Please note that the constructs presented on the following pages are already displayed with factor loadings and Cronbach’s alpha. For a detailed description of the factor analysis, please read chapter 4.3 on reliability of scales.

3.3.1 Independent variable

Displaying one of three different mock websites to each respondent marks the operationalization of the IV information transparency. The websites were created with ‘wix’, an online tool which offers templates to create simple website surfaces. For each degree of information transparency (none, low, high), we use the same website of an online store for sneakers, called ‘Awesome Sneakers’. This is a fictitious name and not an existing website, shop or company. By using a hypothetical shop, we try to avoid bias through store or brand perceptions and keep the scenario company as neutral as possible. The website displays a pop-up window which asks the visitor to read a short text and agree to the conditions in order to proceed to the service. The three conditions differ in the amount and detail of the information displayed concerning data collection, storage and usage. These represent the three degrees of information transparency: none, low and high. Pictures of the scenarios are visible in Appendix C (see questions 9, 10 and 11). In all three conditions, we kept the website surface, the pop-up window, buttons etc. as constant as possible so that only the presented information on data collection differs between the conditions. This is necessary to minimize other effects besides transparency.

In order to perform a manipulation check for information transparency, we measured how transparent the participants perceived the company in the scenario (Awesome Sneakers) to be. We therefore adopted and adjusted the scales from Awad and Krishnan (2006) so that they match our scenario (see table 1).


Perceived Information transparency: Seven-point Likert scales

(Awad and Krishnan 2006) Cronbach’s alpha = 0.820

1. Please specify the degree to which ‘Awesome Sneakers’ will allow you to find out what information about you they keep in their database. (FL = 0.686)

2. Please specify the degree to which ‘Awesome Sneakers’ tells you how long they will retain information they collect from you. (FL = 0.823)

3. Please specify the degree to which ‘Awesome Sneakers’ tells you about the purpose for which they want to collect data from you. (FL = 0.745)

4. Please specify the degree ‘Awesome Sneakers’ is going to use the information they collect from you in a way that will identify you. (FL = 0.723)

5. Now, please rate how transparent ‘Awesome Sneakers’ is about their data collection and usage. (FL = 0.832)

Table 1: Items of perceived transparency with factor loadings (FL) and Cronbach’s alpha(CA)

3.3.2 Dependent variables

The constructs of the DVs refusal, removal, misrepresentation and negative WOM, adopted and adjusted from Son and Kim (2008), are presented in table 2. Note that each DV is measured on three very similar semantic scales (see Appendix C, questions 25 to 28).

Refusal: Seven-point semantic scales (Son and Kim 2008); Cronbach’s alpha =0.899

Please specify the extent to which you would refuse to give information to ‘Awesome Sneakers’ because you think it is too personal.

very unlikely (1) - very likely (7) (0.864)
not probable (1) - probable (7) (0.897)
impossible (1) - possible (7) (0.889)

Misrepresentation: Seven-point semantic scales (Son and Kim 2008);

Cronbach’s alpha =0.969

Please specify the extent to which you would falsify some of your personal information if it is asked for by ‘Awesome Sneakers’.

very unlikely (1) - very likely (7) (0.939)
not probable (1) - probable (7) (0.954)
impossible (1) - possible (7) (0.948)

Removal: Seven-point semantic scales (Son and Kim 2008); Cronbach’s alpha =0.938

Please specify the extent to which you would take actions to have your information removed from ‘Awesome Sneakers’ database when your personal information was not properly handled.

very unlikely (1) - very likely (7) (0.929)
not probable (1) - probable (7) (0.933)
impossible (1) - possible (7) (0.842)

Negative Word-of-Mouth: Seven-point semantic scales (Son and Kim 2008);

Cronbach’s alpha =0.969

Please specify the extent to which you would speak to your friends and/or relatives about your bad experience with ‘Awesome Sneakers’ mishandling personal information when your personal information was not properly handled.

very unlikely (1) - very likely (7) (0.930)
not probable (1) - probable (7) (0.942)
impossible (1) - possible (7) (0.922)


3.3.3 Mediator and Moderator variables

The mediating variable privacy concerns was adopted from Son and Kim (2008) and adjusted as shown in table 3.

Internet Privacy Concerns: Seven-point Likert scales (Son and Kim 2008); Cronbach’s alpha = 0.851

1. I am concerned that the information I submit to ‘Awesome Sneakers’ could be misused. (FL = 0.840)

2. I am concerned that a person can find private information about me on the Internet. (FL = 0.711)

3. I am concerned about providing personal information to ‘Awesome Sneakers’, because of what others might do with it. (FL = 0.893)

4. I am concerned about providing personal information to ‘Awesome Sneakers’, because it could be used in a way I did not foresee. (FL = 0.873)

Table 3: Items of Internet privacy concerns with factor loadings and Cronbach’s alpha

Table 4 shows the items of the moderator variable Internet self-efficacy. Items 1 to 6 were adopted from Eastin and LaRose (2000) and items 7 and 8 from Youn (2009). Please note that the italicized items by Eastin and LaRose (2000) and Youn (2009) did not show sufficient communalities and were therefore dropped from the construct. The displayed factor loadings and the Cronbach’s alpha are therefore based on the remaining items 1 to 5.

Internet self-efficacy: Seven-point Likert scales (Eastin and LaRose 2000; Youn 2009); Cronbach’s alpha = 0.839

1. I feel confident understanding terms/words relating to Internet software. (FL = 0.821)

2. I feel confident describing functions of Internet hardware. (FL = 0.803)

3. I feel confident using the Internet to gather data. (FL = 0.678)

4. I feel confident learning advanced skills within a specific Internet program. (FL = 0.839)

5. I feel confident explaining why a task will not run on the Internet. (FL = 0.779)

6. I feel confident turning to an online discussion group when help is needed.

7. I feel confident dealing with the ways that companies collect and use my personal information on the Internet. (Youn 2009)


3.4 Plan of analysis

In chapter 3, we defined our experimental setup and explained how the experiment was conducted, including the content and distribution of the survey and the measurement of the scales and constructs. The following chapter contains all steps of the analysis. First, we prepare the data for the analysis and give an overview of the sample population in terms of demographics. Then, we conduct a factor and reliability analysis in order to test the reliability within each of the presented constructs. Further, we test the DVs and privacy concerns for discriminant validity to see if they measure distinctly different things. Since we want to run several regression models to test our hypotheses, we also test whether the central constructs are normally distributed, which is an assumption of ANOVA and regression. Additionally, before we can interpret results of the moderated mediation analysis, we check whether the applied manipulation actually worked, i.e. whether the participants experienced the three levels of transparency as we intended. This is done by running an ANOVA with our three conditions (none, low, high) as a factor and perceived transparency (see table 1) as the DV. Additional ANOVAs are conducted to test whether we need to control for certain effects in the moderated mediation analysis. For example, if the mean of mood differs significantly between the three groups, we need to control for mood to minimize unwanted effects. Finally, we run several regression models and use Hayes’ macro PROCESS to see if we find the expected moderated mediation. First, we regress privacy concerns on transparency (path A). Afterwards, we run four regression models with privacy concerns as the IV and each IPPR as the DV (path B). Further, four regression models with each IPPR regressed on transparency are conducted (path C’). This way, we can make clear statements about whether full or partial mediation is at play. We then use Hayes’ macro PROCESS to test whether the mediation and the moderation effects at paths A and B actually exist.
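The A/B/C’ path regressions can be illustrated with plain ordinary least squares. The sketch below uses simulated data; the effect sizes and variable names are made up for demonstration and are not results from the study.

```python
import numpy as np

def ols_coefs(X, y):
    """OLS coefficients for y ~ [intercept, X]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(42)
n = 125  # sample size matching the study
transparency = rng.integers(1, 4, n).astype(float)    # manipulated IV (1, 2, 3)
concerns = 0.5 * transparency + rng.normal(0, 1, n)   # simulated mediator
refusal = 0.6 * concerns + rng.normal(0, 1, n)        # one of the four IPPR

a = ols_coefs(transparency, concerns)[1]              # path A: IV -> mediator
coefs_b = ols_coefs(np.column_stack([concerns, transparency]), refusal)
b, c_prime = coefs_b[1], coefs_b[2]                   # path B and direct path C'
indirect = a * b                                      # indirect effect A x B
```

A full mediation pattern would show a significant indirect effect (a·b) together with a non-significant direct effect c'.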

4. Results

4.1 Preparation of the dataset


Attention checks are questions in the survey whose purpose is to test whether the participant is actually reading the question or just ticking randomly without paying attention. An example is: “This is an attention check, please tick totally disagree = 7 here”. Questions like this were randomly included in the survey flow.

Another observation was dropped because the respondent was the only outlier in a boxplot for comprehensibility of the survey content (see Appendix C, questions 19 and 49). These two questions were intended to measure how well the participants comprehended the survey and the scenario. This way, we could evaluate the overall comprehensibility and discard observations that have no value for the analysis because the respondent did not understand important aspects. The boxplot indicated that the mentioned observation was the only outlier in the whole sample. The final dataset for the analysis thus contained 125 observations.

4.2 Descriptives

The dataset consists of 65.9% female and 34.1% male participants, which means we work with a female-dominated dataset. Owing to the method of sharing the survey in the university environment, the sample is quite young. 61.1% of the respondents are between 18 and 24 and 30.2% between 25 and 34, while the three age groups between 35 and 64 account for only 8.8%. Also in terms of education, the sample fits the academic environment. 39.7% of the respondents stated a Bachelor’s degree as their highest education and 38.1% a Master’s degree. The third biggest group holds a high school degree (15.1%). 4.8% state that they attended some college but obtained no degree. 1.6% hold a doctorate degree. 0.8% achieved less than a high school degree. In terms of previous negative experiences with online privacy, the observations are less clear-cut. 40.8% claim that they have been a victim of privacy violation in general. However, this also includes issues like unwanted spam. Only 8% state that their privacy was violated in a data breach.


We observe that the no and low transparency groups contain more older participants than the third group. 12.1% of group 1 and 13% of group 2 are between 35 and 64, while group 3 consists only of participants between 18 and 34. For education, in group 1 the majority holds a Master’s degree (41.5%), while in groups 2 and 3 most people hold a Bachelor’s degree (43.5% and 41%). Group 3 has the most high school graduates (17.9%), second is group 2 (15.2%) and group 1 has the fewest (12.2%). Previous negative experiences seem highly similar among the three groups. We conclude that the respondents are properly divided over the three manipulation groups in terms of numbers. However, there are differences between the three groups in terms of demographics which might have unwanted effects on the participants’ behavior. In order to investigate whether we have to control for these differences, we test whether the differences among the manipulation groups are significant. For this, we cannot conduct an ANOVA since that test requires a DV on an interval/ratio scale (Malhotra 2010). However, gender, education, previous privacy violations and victim in data breach have a nominal scale, while age has an ordinal scale.
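For nominal variables such as education, a chi-square test of independence is a common alternative to ANOVA. The counts below are hypothetical, chosen only to illustrate the mechanics with scipy, and are not the study's actual cross-tabulation.

```python
import numpy as np
from scipy import stats

# hypothetical counts: highest education (rows) per manipulation group (columns)
table = np.array([
    [5, 7, 7],      # high school
    [15, 20, 16],   # Bachelor's degree
    [17, 14, 12],   # Master's degree
])

# H0: education is independent of the assigned condition
chi2, p, dof, expected = stats.chi2_contingency(table)
```

A non-significant p-value here would indicate that the educational composition does not differ systematically between the manipulation groups.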


Variable                       Group 1 (n=41)   Group 2 (n=45)   Group 3 (n=39)   Overall (n=125)
Age (1-9)                      2.71             2.53             2.36             2.54
Victim in data breach (1, 2)   1.95             1.91             1.90             1.92

Table 5: Mean comparison for age and victim in a data breach

4.3 Reliability and Validity of scales

Before we can begin with a profound analysis, we need to conduct an exploratory factor analysis and a reliability analysis (Cronbach’s alpha) on the constructs presented above. This way, we can test the reliability within each construct, which helps to establish the quality of the scales. It means that the items of a theorized construct load on the same factor and measure the same thing. Besides that, single scales for each construct make further calculations more efficient. Additionally, we also need to check whether the constructs qualify for discriminant validity, which means that the constructs are clearly different from each other and measure different things. For this, we check whether items load higher on other theorized constructs than on their own. This is done especially for the four DVs and privacy concerns since these variables are very similar in content and scale (see tables 2 and 3). All DVs belong to the same category, which is IPPR. Similarly, refusal and misrepresentation fall into the category of information provision, while removal and negative WOM are grouped under private action (Son and Kim 2008). Therefore, we test whether the four IPPR can be treated as separate constructs or whether we can reduce them.

Before each factor analysis (FA), it needs to be checked whether this method is appropriate. Therefore, a Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) and a Bartlett’s test of sphericity are performed. Only if the KMO measure is above 0.5 and the Bartlett test is significant (p<0.05) are the sample and variables adequate for FA. This way, we can guarantee that the variables share enough variance for factoring (Field 2009). This was the case for all constructs presented below. Additionally, only items with communalities above 0.4 were included in the FA (Field 2009). This way, we make sure that the factors have enough explanatory value for each item.
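Both adequacy checks can be computed directly from the correlation matrix. The sketch below uses the standard textbook formulas; it is a minimal illustration, not the SPSS output used in the thesis.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test: H0 = the correlation matrix is the identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # partial correlations derived from the inverse of R
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(R, 0)
    np.fill_diagonal(partial, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())
```

KMO values approach 1 when partial correlations are small relative to the raw correlations, i.e. when the items share common variance that a factor can capture.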


We consider principal component analysis (PCA) the appropriate method. Also, we want to analyze the data with a minimum of necessary constructs. Further, PCA is less complex than other methods (Field 2009).

Each extracted factor needs to have an eigenvalue above 1 and needs to explain more than 5% of the variance in the DV. Cumulatively, the factor(s) should explain more than 60% of the variance in the DV (Malhotra 2010). A rotated version of the factor matrix is applied (Varimax), which prevents all variables from loading on one factor. The factor loadings were checked to be higher than 0.5. To guarantee significance with a threshold of 0.5, a minimum sample size of 120 is necessary (Hair et al. 1998), which is the case with our 125 observations. We further confirm the constructs’ reliability by estimating Cronbach’s alpha for each factor with more than two components. This value should be higher than 0.7 for the factor to be reliable (Nunnally 1978).
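Cronbach's alpha is straightforward to compute from the raw item scores. A minimal sketch, assuming a respondents × items array:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield an alpha of 1, while uncorrelated items yield values near 0, which is why 0.7 serves as a lower bound for a reliable scale.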

The first FA was conducted with the five items of perceived information transparency (see table 1). One factor with an eigenvalue of 2.920 and an explained variance of 58.405% was extracted. The variance is slightly below the threshold of 60%. Still, since all factor loadings are between 0.686 and 0.832, the extraction of one factor seems appropriate. The Cronbach’s alpha is 0.820, which confirms that the factor is reliable. Therefore, a single construct for perceived transparency can be created.

Next, the four items of Internet privacy concerns (see table 3) were used for FA. All four items have factor loadings between 0.711 and 0.893, well above the threshold. One factor with an eigenvalue of 2.771 and explained variance of 69.279% was extracted. The Cronbach’s alpha was 0.851, which suggests high reliability. As a consequence, the scale named Internet Privacy Concerns was created.

The FA for the nine items of Internet self-efficacy showed insufficient communalities for items 6 (0.333), 7 (0.322) and 8 (0.248) (see table 4). Further, the extracted factor showed insufficient explained variance (47.75%). Because of this, items 6 to 8 were dropped and the FA was conducted again. It suggested the extraction of one factor, which has an eigenvalue of 3.089 and 61.779% explained variance. All factor loadings are between 0.678 and 0.839.


A combined FA over the four IPPR and privacy concerns was conducted because of the similarity that privacy concerns and IPPR share in theory. Since we expect two or more factors to be extracted, we use a rotated matrix to display the factor loadings (Varimax). The results suggest the extraction of five factors that cumulatively explain 85.426% of the variance. Each factor explains more than 7.634% of the variance and has an eigenvalue above 1. The rotated factor matrix (see table 6) shows that the items of each theorized construct load distinctly on their respective factor. In order to establish convergent validity, we only accept constructs with an average factor loading above 0.7 (Fornell and Larcker 1981). This is the case for privacy concerns (0.7927), refusal (0.8333), misrepresentation (0.947), removal (0.9013) and negative WOM (0.9313). The smallest distance is observed for the item removal (possible), which loads with 0.845 on its own factor and with 0.346 on the theoretical negative WOM factor. Still, the distance is quite big and all items load distinctly on their theorized constructs, which suggests the extraction of five factors for the IPPR and privacy concerns. Convergent validity of the constructs is established.

Item                           Factor 1   Factor 2   Factor 3   Factor 4   Factor 5
Privacy concerns (1)           .065       .159       .778       .033       .253
Privacy concerns (2)           .025       .194       .704       .017       .018
Privacy concerns (3)           .104       .092       .847       .047       .239
Privacy concerns (4)           .078       .124       .842       .104       .163
Refusal (likely)               -.039      .090       .257       .055       .864
Refusal (probable)             .047       .056       .241       .051       .897
Refusal (possible)             .086       .102       .094       .070       .889
Misrepresentation (likely)     -.073      .939       .189       .015       .101
Misrepresentation (probable)   -.027      .954       .199       .013       .057
Misrepresentation (possible)   -.018      .948       .166       .063       .103
Removal (likely)               .179       -.013      .108       .929       .077
Removal (probable)             .252       .015       .060       .933       .074
Removal (possible)             .351       .105       .013       .842       .035
Negative WOM (likely)          .930       -.056      .093       .244       .049
Negative WOM (probable)        .942       -.066      .090       .258       .042
Negative WOM (possible)        .922       -.008      .075       .246       .017

Table 6: Rotated factor matrix for refusal, misrepresentation, removal, negative WOM and privacy concerns


To further assess discriminant validity, we examine cross-correlations between the constructs. For this, we use a method suggested by Campbell and Fiske (1959). We take the lowest significant correlation coefficient between the items within a chosen construct (e.g. refusal). Then we look at all correlations between the items of the chosen construct (refusal) and the items of the other constructs. We count all cross-correlations that are bigger than the lowest correlation within the respective construct; these are considered violations. The number of violations is allowed to be at most 50% of the number of total comparisons. We found 0 violations for our five variables. However, we see some correlations between the items of removal and negative WOM that are around 0.5, which raises concerns. To further confirm discriminant validity, we apply another measure suggested by Henseler et al. (2015). To do this, we calculate the average variance extracted (AVE) across all five constructs by taking the mean of the five average factor loadings (AFL = 0.881) and squaring this number (AVE = 0.7763). This number needs to be bigger than the squared inter-construct correlations to confirm discriminant validity (see table 7). The highest correlation exists between the constructs removal and negative WOM (0.503). The square of this correlation coefficient is 0.253 and therefore smaller than the AVE across all five constructs (0.7763). Therefore, discriminant validity is established. However, the correlations between the constructs tell us that the constructs are by far not independent from each other. Therefore, when looking at one IPPR, it is important to control for the remaining IPPR.
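The AVE comparison above reduces to a few arithmetic steps. The sketch below reproduces the reported numbers; note that it implements the thesis's proxy (squared mean of the average factor loadings), which is a simplification of the construct-level AVE in Fornell and Larcker (1981).

```python
def discriminant_check(avg_loadings, max_corr):
    """Compare the squared mean of the average factor loadings (the AVE
    proxy used in the text) against the largest squared inter-construct
    correlation (Henseler et al. 2015 style criterion)."""
    afl = sum(avg_loadings) / len(avg_loadings)
    ave = afl ** 2
    return ave, ave > max_corr ** 2

# average factor loadings and the highest correlation reported in the text
ave, ok = discriminant_check([0.7927, 0.8333, 0.947, 0.9013, 0.9313], 0.503)
```

With these inputs, the AVE proxy (≈0.776) clearly exceeds the largest squared correlation (0.503² ≈ 0.253), matching the conclusion in the text.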

Construct           Privacy concerns   refusal   misrepresentation   removal   neg. WOM
Privacy concerns    1                  0.492     0.344               0.157     0.164
refusal             0.492              1         0.201               0.153     0.101
misrepresentation   0.344              0.201     1                   0.073     -0.48
removal             0.157              0.153     0.073               1         0.503
neg. WOM            0.164              0.101     -0.48               0.503     1

Table 7: Inter-construct correlations between privacy concerns and the four IPPR


All other constructs show a Cronbach’s alpha above 0.7 and are therefore reliable scales (see table 2 in Appendix B).

Finally, a construct for survey comprehensibility was tested. The supposed construct includes two scales from the survey (table 6 in Appendix A). A correlation coefficient was estimated to test whether the two scales move together. The correlation is 0.464**, which is below our set threshold for factor loadings of 0.5 (Hair et al. 1998). We decided to use the two scales separately since they measure distinctly different things: one asks about the survey and the other about the scenario. It makes sense to measure survey and scenario comprehensibility separately since these can differ significantly.

4.4 Test for Standard Normal Distribution


All variables fulfill our criteria and are therefore considered normally distributed (see table 3 in Appendix B). Negative WOM shows a notable skew towards the low end of the scale (-1.329) and a peak at the high end (0.847), which we can observe in graphic 2. Even if the skewness (-1.329) does not exceed the cut-off value of -2, we argue that it should be kept in mind for further analysis. Some other variables also show slightly skewed or peaked distributions. However, we do not consider these problematic since distributions leaning to one end of the scale are also valuable observations that we want to consider. To conclude the testing for normality, the assumption of normal distribution is met for all continuous variables. We therefore continue the analysis with all variables, including negative WOM.

Graphic 2: Distribution of negative WOM (item 1); very unlikely (1) to very likely (7)
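The ±2 rule of thumb applied above can be checked per variable with scipy. A minimal sketch (the function name is illustrative); note that `scipy.stats.kurtosis` returns excess kurtosis (Fisher's definition) by default, matching the convention where a normal distribution scores 0.

```python
import numpy as np
from scipy import stats

def normality_ok(x, cutoff=2.0):
    """Accept a variable as close enough to normal when both the
    skewness and the excess kurtosis stay within +/- cutoff
    (the +/-2 rule of thumb applied in the text)."""
    return abs(stats.skew(x)) < cutoff and abs(stats.kurtosis(x)) < cutoff
```

Under this rule, negative WOM (skewness -1.329, kurtosis 0.847) passes, as reported in the text, even though its distribution is visibly skewed.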

4.5 Comprehensibility of the survey


The group means for the scenario comprehensibility confirm this finding. The no transparency group scored on average 4.61, low transparency 5.2 and high transparency 5.31. To test the statistical significance of the differences between the three conditions, two one-way ANOVAs were conducted with transparency (1, 2, 3) as the factor and each comprehensibility variable (survey and scenario) as a separate DV. In both cases, the Levene’s test gives an insignificant p-value (p=0.801; p=0.786) (see table 1 in Appendix B). The assumption of homogeneity of variances, an assumption of ANOVA, is therefore met (Field 2009). However, the differences in comprehensibility between the manipulation groups are not significant for the survey (p=0.155) or the scenario (p=0.105).

Even if it was the intention to provide more information in the low and high transparency conditions, it needs to be considered for the final analysis that the participants of the no transparency condition experienced the scenario as less clear. Besides that, participants from all conditions experienced the scenario (mean=5.06) as less comprehensible than the overall survey (mean=5.52). However, average scores above 5 signal that the experiment in general was well understood. The insignificant results of the ANOVA tests for both scales show that we do not need to control for comprehensibility later on.

Besides that, one clear outlier was visible at the low end of the scale in a boxplot. This observation was deleted for all analyses, which reduced the dataset to the 125 observations mentioned earlier. The outlier is neither visible in the graphs nor included in any calculations since the observation was already dropped during the data preparation.


4.6 Manipulation check

Before the moderated mediation can be tested, it needs to be confirmed that the manipulation of transparency actually worked, meaning that the created conditions apply. We can do that by testing whether the means of perceived transparency differ significantly between the three conditions. First, the Levene’s test was conducted to check whether the assumption of homogeneity of variances is met (see table 1 in Appendix B). We cannot reject the null hypothesis that there is homogeneity of variances (p=0.384). Therefore, a one-way ANOVA can be conducted to test whether there is significant variance between the three group means (see table 8). The results suggest that there is a highly significant difference (p=0.000). Also, the F-value of 50.853 is far higher than the threshold of ~3.09 (Field 2009). The sum of squares also indicates substantial variance between the groups (56.571).

                 Sum of squares   df    Mean square   F        Sig.
Between groups   56.571           2     28.285        50.853   0.000
Within groups    67.859           122   0.556
Total            124.429          124

Table 8: One-way ANOVA; Factor: transparency (1,2,3), DV: Perceived transparency
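The Levene test and the one-way ANOVA above are one-liners in scipy. The sketch below uses simulated group scores that roughly mimic the reported condition means; the exact values are invented for illustration, not the study's raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# hypothetical perceived-transparency scores for the three conditions
none = rng.normal(2.2, 0.75, 41)
low = rng.normal(3.1, 0.75, 45)
high = rng.normal(4.7, 0.75, 39)

lev_stat, lev_p = stats.levene(none, low, high)  # homogeneity of variances
f_stat, p = stats.f_oneway(none, low, high)      # one-way ANOVA on group means
```

An insignificant Levene p-value licenses the ANOVA, and an F-value above the critical ~3.09 with p < 0.05 confirms that the group means differ, mirroring the logic of the manipulation check.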


Graphic 4 shows the mean comparison for item 5, which was measured on a 7-point Likert scale (1 = “very low” to 7 = “very high”). We observe that the no transparency group scored lowest (2.17), the low transparency condition around one point higher (3.09), while the high transparency condition shows the highest value on the Likert scale (4.72). Again, we see that the participants experienced the difference between conditions 2 and 3 more clearly than between 1 and 2.

Still, we come to the conclusion that the participants show the desired perception of each condition, with ‘no transparency’ as the lowest, low in the middle and high as the highest transparency. While the differences between conditions 1/3 and 2/3 are not only obvious but also statistically significant, the difference between 1/2 is not significant if we look at the whole construct. Therefore, this limitation has to be kept in mind for the focal analysis.

Graphic 4: Post-hoc ANOVA; mean comparison for perceived transparency (item 5); X-axis: transparency (1,2,3), Y-axis: perceived transparency (7-point Likert scale)

4.7 Testing of control variables


Separate one-way ANOVAs are conducted for the control variables mood, familiarity with the topic of privacy, perception of the online store (Awesome Sneakers) and involvement with the product (sneakers). Further, we also include Internet efficacy, which is not only a moderator but will also be used as a covariate later on (see table 9). First, the assumption of homogeneity of variances has to be tested for the variables. The Levene’s test confirms the assumption for all five variables (see table 1 in Appendix B).

Dependent variable         Sum of squares (between)   F       Sig. (p-value)
Positive mood              1.648                      0.817   0.444
Familiarity with privacy   3.145                      1.576   0.211
Perception online store    4.125                      2.106   0.126
Involvement sneakers       0.051                      0.025   0.975
Internet self-efficacy     1.567                      0.778   0.462

Table 9: Five separate one-way ANOVA tests; Factor: transparency (1,2,3)

While positive mood, familiarity with privacy, involvement with sneakers and Internet efficacy show p-values above 0.200, perception of the online store has the lowest p-value (0.126) and the highest F-value (2.106). The differences in store perception between the conditions are not significant, but they stand out against the much higher p-values of the other variables. Since this research is based on an experiment, we argue that external effects always need to be held constant and controlled for. Even if the differences are not statistically significant, there can still be a potential influence on our DVs. Further, corporate image and store design are considered fundamental differentiators for companies in many fields (Varley 2005). Therefore, we decide to include store perception as a control variable in the following analysis. Due to the moderation, Internet efficacy will be used as a covariate by default.
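The five checks in table 9 all follow the same pattern (Levene's test, then a one-way ANOVA per covariate), so they can be scripted in a single loop. A sketch with simulated covariate scores; the variable names, group sizes and distributions are assumptions, not the thesis data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cond = np.repeat([1, 2, 3], [41, 42, 42])   # transparency factor, N = 125
# Hypothetical control-variable scores, drawn independently of condition
controls = {
    "positive_mood":          rng.normal(4.5, 1.0, 125),
    "familiarity_privacy":    rng.normal(5.0, 1.1, 125),
    "store_perception":       rng.normal(4.8, 1.2, 125),
    "involvement_sneakers":   rng.normal(3.9, 1.3, 125),
    "internet_self_efficacy": rng.normal(5.2, 1.0, 125),
}

results = {}
for name, scores in controls.items():
    groups = [scores[cond == g] for g in (1, 2, 3)]
    lev_p = stats.levene(*groups).pvalue     # homogeneity assumption
    f, p = stats.f_oneway(*groups)           # do the group means differ?
    results[name] = (lev_p, f, p)
    print(f"{name:24s} Levene p={lev_p:.3f}  F={f:.3f}  p={p:.3f}")
```

Because the simulated covariates are generated independently of the condition, most p-values should land above .05, mirroring the randomization check reported above.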

4.8 Moderated Mediation Analysis


However, to reduce the complexity of the model building and analysis process, we decided to look at the mediation first. Afterwards, we will add and test the moderation part. This also has the advantage that we can demonstrate the effect of the moderator more distinctly.

For the mediation, we do not follow the traditional procedure of Baron and Kenny (1986), since parts of it are considered outdated by modern research on mediation (Hayes 2009; Hayes 2013). This means we do not consider a significant total effect (path C in graphic 5) of X on Y as essential to establish mediation. Instead, we first examine whether there is a significant effect of transparency (X) on privacy concerns (M = mediator) (path A). If that is the case, we test the effect of privacy concerns (M) on IPPR (Y) while controlling for transparency (X) (path B). Then we look at path C', the direct effect of X on Y while controlling for M. To find full mediation, the direct effect (C') must not be significant and the indirect effect (A×B) must be significant (Hayes 2009). PROCESS estimates this using bootstrapping and confidence intervals; we inspect the confidence intervals to check whether the indirect effect (A×B) differs significantly from 0. As a final step, we examine how this indirect effect is influenced by different levels of the moderator, i.e. whether there are significant interaction effects between transparency and Internet efficacy (W) on privacy concerns (T×I in graphic 5) and between privacy concerns (M) and Internet efficacy (P×I) on the DV. Simply put, whether Internet efficacy moderates the effects marked as paths A and B.
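The bootstrapped indirect effect that PROCESS reports can be illustrated outside SPSS. A minimal sketch of the simple (unmoderated) mediation step on simulated data; the coefficients, sample size and variable names are assumptions. It estimates path A (M regressed on X), path B (Y regressed on M while controlling for X) and a percentile bootstrap confidence interval for A×B:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated data loosely mirroring the model (all values hypothetical):
# X = transparency condition, M = privacy concerns, Y = one DV (e.g. IPPR)
X = rng.integers(1, 4, n).astype(float)
M = 5.0 - 0.8 * X + rng.normal(0, 1, n)            # path A: transparency lowers concerns
Y = 2.0 + 0.6 * M - 0.1 * X + rng.normal(0, 1, n)  # path B plus a small direct effect C'

def indirect_effect(x, m, y):
    """A*B from two OLS fits: M ~ X, then Y ~ M + X."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X (path A)
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of Y on M given X (path B)
    return a * b

# Percentile bootstrap CI for A×B, analogous to what PROCESS reports
boot = np.array([indirect_effect(X[idx], M[idx], Y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
# Mediation is supported when this interval excludes zero
ab = indirect_effect(X, M, Y)
print(f"indirect effect = {ab:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

With the assumed coefficients the true indirect effect is -0.8 × 0.6 = -0.48, so the interval should exclude zero. Model 58 extends this by letting the moderator W shift both the A and B slopes, which is why PROCESS reports conditional indirect effects at several levels of W.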


Graphic 5: Model 58 in PROCESS: Moderated mediation model with moderation at path A and B; only one DV displayed to reduce graphical complexity (4 DVs in estimation)
