
Optimizing open innovation contests (OIC) in online market:

The effects of award structure on OIC performance

14 March 2016

Master Thesis

MSc BA Strategic Innovation Management
MSc Marketing Intelligence

University of Groningen

Faculty of Economics and Business
PO Box 800, 9700 AV Groningen

Aina Kunigelyte
s2444887
K. de Vriezestraat 61, 9741 AH Groningen
a.kunigelyte@student.rug.nl

Supervisors:


Optimizing open innovation contests (OIC) in online market:

The effects of award structure on OIC performance

Master Thesis

MSc BA Strategic Innovation Management
MSc Marketing Intelligence

Aina Kunigelyte

Abstract

Although the popularity of OIC as a novel innovation tool is growing, most companies struggle with the question of how to use this tool effectively. The literature on contests and award structure is mostly theoretical and/or based on offline settings, and thus provides limited guidance for managers. In this paper, we show that distributing a fixed budget across awards in different ways can lead to significant differences in OIC performance. We extend prior studies that examined the differential impacts of single and multiple-award structures on OIC performance by deriving three sub-types of the multiple-award structure. Overall, we empirically test the differential impacts of four types of award structure on large-scale data extracted from a real-world OIC setting. Effects on two OIC performance indicators, participation and quality of solutions, are estimated. Each performance indicator is specified by two different variables, which demonstrates the robustness of our models. To our knowledge, this is the first study to provide a comprehensive empirical analysis of the award structure as a determinant of OIC performance in a public and dynamic online setting. We demonstrate that the award structure type we derive, a multiple-award structure with a relatively high range between the largest and smallest awards, is the most effective structure in terms of participation and the number of top quality solutions. This finding shows that the common practice of choosing the number of awards based on the number of solutions needed might be misleading. Furthermore, contrary to prior literature, we show that participation is strongly triggered by the extrinsic motivator of award size, whilst award size is not a primary driver of the quality of solutions.
We suggest that managers, when distributing the budget across award(s), should consider whether they want access to a large pool of solvers and solutions, or whether they prioritize a relatively larger number of top quality solutions from a smaller and potentially less diverse pool. This study sheds much-needed light on the effective implementation of current OIC, which differ in nature from traditional contests, and calls for further research that could help managers release the potential of this innovation tool.


Table of Contents

1. Introduction
1.1 The growing popularity of OICs
1.2 Problem statement
1.3 Purpose of the study and research questions
1.4 Significance of the study
1.5 Overview of the research
2. Conceptual framework and background
2.1 The paradigm of open innovation, crowdsourcing and OIC
2.2 Hypotheses Development
2.2.1 The type of award structure and its effect on OIC participation
2.2.2 The moderating role of OIC budget
2.2.3 The moderating role of OIC duration
2.2.4 The type of award structure and its effect on the quality of solutions
2.2.5 Assurance of payment and its effect on participation and the quality of solutions
3. Methodology
3.1 Research design
3.2 Empirical setting
3.2.1 The process and characteristics of OIC
3.2.2 Data
3.3 Measures
3.4 Research method and model specification
4. Results
4.1 Award structure, total budget and duration of OIC: effects on participation
4.1.1 Main effects models: OLS estimation results after HCSE correction
4.1.2 Moderating role of budget and duration: estimation and testing for assumptions
4.1.3 Estimation results of moderating effects Models 3 and 4 after HCSE correction
4.2 Award structure, number of solvers: effects on the quality of solutions
4.2.1 Estimation results of Models 5 and 6: effects on the quality of solutions
4.2.2 Model fit: the quality of solutions
5. Conclusions and recommendations
5.1.1 Synthesis of the main findings: award structure and its differential impacts on participation and the quality of solutions
5.1.2 Theoretical contributions
5.1.3 Managerial implications
5.1.4 Limitations and future research
Bibliography
Appendix 1: Little's MCAR test
Appendix 2: VIF scores of main effects Models 1 and 2
Appendix 3: Distributions of standardized residuals of Models 1 and 2
Appendix 4: SPSS macro for heteroscedasticity-robust regression
Appendix 5: OLS estimation results of main effects Models 1 and 2 before HCSE correction
Appendix 6: VIF scores of moderating effects Models 3 and 4
Appendix 7: Estimation results of moderating effects Models 4a and 4b before HCSE correction


1. Introduction

1.1 The growing popularity of OICs

In his recent work, Chesbrough (2013) states that in the current environment, firms have to leverage the benefits offered by open innovation in order to innovate effectively. Thus, many firms no longer rely solely on internal R&D; instead, they have been transitioning from closed to more open innovation development processes. Firms are increasingly seeking novel ways to collaborate and develop innovations (Chesbrough, 2007; Friedman, 2005; Metters et al., 2010). Online innovation contests (OIC) represent one of the most cost-effective and straightforward ways to innovate by outsourcing tasks to an undefined group of participants (Bockstedt, Druehl & Mishra, 2015; Estellés-Arolas & González-Ladrón-de-Guevara, 2012; Howe, 2006). Simply put, a seeker, usually a firm, identifies a specific task, offers a monetary award or a number of awards, broadcasts an invitation to submit solutions online and eventually rewards the best solution or a number of solutions (Boudreau & Lakhani, 2013).


innovation contest which resulted in 154,000 submissions from all over the world in only a two-month period (Boudreau & Lakhani, 2013). Predictions are that by 2017, crowdsourcing will be used by more than 60% of firms as a way of engaging external parties in making a wide variety of decisions with a wide variety of providers (McIntyre et al., 2013).

1.2 Problem statement

While success stories of utilizing OIC by multinational companies such as IBM, Intel, Google or P&G are increasingly attracting significant media attention, this may give the distorted impression that OIC is already a commonplace and effectively managed innovation tool (Adamczyk, Bullinger & Moslein, 2012; Boudreau & Lakhani, 2013; Terwiesch & Xu, 2008). In reality, however, only a small minority of companies implement OIC effectively (Boudreau & Lakhani, 2013; Terwiesch & Xu, 2008). In the past decade, the number of intermediary platforms, and of OIC hosted on private firm websites, has been growing steadily. In order to adopt OIC and attract more and better-skilled solvers, companies have started exploring the very different nature and complexity of OIC posed by the online environment (Wang, Butler & Ren, 2013). Furthermore, according to Chesbrough (2013), in the current environment only open innovation will lead to effective innovation performance. Hence, it is of crucial importance for organizations to understand how OIC function and to manage the effectiveness of eliciting fruitful suggestions from external contributors (Mahr & Lievens, 2012).


Scheremeta, 2010), their risk tolerance (Archak & Sundararajan, 2009), or solvers' knowledge of each other's abilities to solve the task in laboratory settings (Freeman & Gelber, 2010). Even though these studies are relevant to OIC settings and provide a substantial base of knowledge about the nature and mechanisms of contests in general, the differences cannot be ignored (Feller et al., 2009). Typically in the setting of OIC, the number of solvers and the number of solutions submitted per solver are not limited, and traits of solvers such as risk tolerance and skill level can be observed only to a limited extent (e.g. skill level inferred from successful prior experience in publicly published OIC) or cannot be observed at all (e.g. the skill level of new members who gained experience in other settings, or the risk tolerance of participants). In the OIC setting, contrary to the settings studied in the theoretical and experimental work described above, tasks are focused on the development of creative ideas; creativity reflects the production of novel and useful ideas and is seen as the starting point of innovation (Amabile, 1996; Oldham, 2002). Thus, in order to further investigate the topic of OIC, it is important to understand to what extent previously developed theories and experiments on contests hold in the real-world setting of OIC.


launch OIC with a certain fixed budget, they have to consider a number of issues. For example, is it more effective to assign the whole budget to one award or divide it across multiple awards? Even if only one creative solution is needed, does the structure of one big prize attract solvers to participate and encourage them to maximize the effort they put into the task? Does the difference between the largest and smallest award matter in terms of participation and quality of solutions? To our knowledge, these questions have not been tackled in prior literature; in particular, there is a lack of empirical evidence based on data derived from existing OIC. Thus, the very different nature of OIC still has to be untangled in order to fill these literature gaps and provide managers with guidelines on how to leverage OIC in more effective ways (Sun, Fang & Lim, 2012).

1.3 Purpose of the study and research questions

In order to address the research gaps identified above, the main goal of this study is to deepen the understanding of the award structure as an antecedent of OIC performance by answering the main research question: How does the award structure affect OIC performance, i.e. participation and the quality of solutions? We refer to participation in OIC as the number of solvers that participate and the number of solutions submitted, whilst we refer to the quality of OIC as the number of highly evaluated solutions.

In order to answer the main research question, we address the following sub-questions:

1. Is it more effective to award one prize or distribute the budget across multiple prizes in terms of participation and the quality of solutions?

2. Is it more effective to distribute the budget across multiple prizes equally or unequally in terms of participation and the quality of solutions?

3. Is it more effective to distribute the budget across multiple unequal prizes such that the relative range between the smallest and largest prize is smaller or larger, in terms of participation and the quality of solutions?

4. Do the differential impacts on participation of award structures that differ in the number and distribution of awards change as the budget and/or duration of OIC increases?
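To make the award-structure types behind these questions concrete, here is a small illustrative sketch. All prize amounts and the budget figure are hypothetical examples chosen for this sketch, not data from the thesis:

```python
# Four ways to distribute a fixed budget across awards, mirroring the
# award-structure types compared in this study. Amounts are hypothetical.

BUDGET = 500

structures = {
    "single":              [500],                 # whole budget on one prize
    "multiple-equal":      [125, 125, 125, 125],  # budget split evenly
    "multiple-low-range":  [175, 150, 100, 75],   # unequal, small spread
    "multiple-high-range": [300, 100, 60, 40],    # unequal, large spread
}

def relative_range(prizes):
    """Spread between the largest and smallest prize, relative to the budget."""
    return (max(prizes) - min(prizes)) / sum(prizes)

for name, prizes in structures.items():
    assert sum(prizes) == BUDGET  # same total budget in every case
    print(f"{name:20s} prizes={prizes} relative range={relative_range(prizes):.2f}")
```

The point of the sketch is that all four structures spend exactly the same budget; only the number of prizes and the relative range between the largest and smallest prize differ.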


Figure 1 provides a visual illustration of these questions.

Figure 1: Preliminary conceptual model

1.4 Significance of the study

From a theoretical perspective, this study aims to contribute to the literature on contests and provides a deeper and more comprehensive view of the award structure as a determinant of OIC performance. To our knowledge, it is the first empirical study to use large-scale real-world data to examine the effects of award structure on OIC performance. It extends prior literature by not only distinguishing between the most generic award structure types, single and multiple-award structures, but also deriving sub-types of the multiple-award structure and demonstrating that all four award structure types entail differential impacts on OIC performance. We show that, given a fixed budget assigned to an OIC, not only can allocating it to a single award versus distributing it across awards make a difference, but the relative range between the largest and smallest awards also leads to differential impacts on OIC performance. From a managerial perspective, this study is beneficial both for managers of firms conducting OIC and for the intermediaries hosting them. Managers with a richer understanding of how to formulate and conduct online innovation contests would be able to utilize them more effectively, while for intermediaries this knowledge would improve their ability to give guidelines to seekers and to increase participation and solver performance, which would eventually lead to higher satisfaction for both parties and thus to better performance of the intermediary platform. At the end of the paper, we provide novel managerial guidelines on how to utilize OIC more effectively based on the different award structure types.

1.5 Overview of the research


followed by a discussion of the main findings; the implications and limitations of the study are then presented.

2. Conceptual framework and background

In this section, the most relevant background information about OIC is provided. Secondly, hypotheses are deduced from the reviewed literature, together with a conceptual model that incorporates them. Hypotheses reflect the relationships between variables, while the conceptual model visualizes the respective relationships (Bacharach, 1989).

Figure 2: Conceptual model

2.1 The paradigm of open innovation, crowdsourcing and OIC

The concept and usage of OIC can be related to the emergent paradigm of open innovation, representing the idea that innovation can emerge both inside and outside the company (Chesbrough, Vanhaverbeke & West, 2006; von Hippel, 2005). One way to conduct open innovation is crowdsourcing, which is defined as a means of transferring a function once performed by employees and outsourcing it to an undefined and generally large group of people in the form of an open call, powered by advanced information technologies (Howe, 2008; Martinez, 2015).


awards (Zheng, Li & Hou, 2011). Boudreau and Lakhani (2013) define crowd contests as a way to engage a crowd, in which a seeker (a company) identifies a specific task, offers a monetary award or a number of awards and broadcasts the invitation to submit solutions. While contest tasks can vary, this study is limited to creative tasks, such as design problems. Creativity describes the production of novel and useful ideas and is seen as the starting point of innovation (Amabile, 1996; Oldham, 2002).

Furthermore, OIC can be hosted on a company's own website or through intermediary online platforms that "aggregate both demand for capabilities (firms seeking innovators capable of meeting new challenges) and supply (a large and diverse population of innovators)" (Feller et al., 2012). More precisely, the intermediary's role is to help seekers list a project in an appropriate category, to adjust the contest if needed, and to process the payment to the creative and the transfer of IPR rights after the seeker chooses the winner. It is important to note that intermediaries take a share of the prizes awarded to participants, which makes them concerned about the effectiveness of OIC. Hence, slightly modifying the definition provided by Boudreau and Lakhani, the OIC analyzed in this study are defined as a means to engage a crowd, in which a seeker identifies a specific creative task, offers a monetary award or awards and broadcasts the invitation to solvers (creatives) to submit solutions through an intermediary online marketplace.

2.2 Hypotheses Development

2.2.1 The type of award structure and its effect on OIC participation

Potential contestants decide whether to take the opportunity to participate in a contest by assessing the size of the prizes and the likelihood of winning, and in this way being compensated for the effort put into the task (Freeman & Gelber, 2010; Mathews & Namoro, 2008).

According to extrinsic motivation theory, the size of the award is one of the drivers that can explain participation in crowdsourcing contests (Zhao & Zhu, 2014). Overall, prior research has shown that higher awards attract more solvers (Brabham, 2010; Fuller, 2010; Lakhani et al., 2007; Shao, Shi & Liu, 2012). Another factor potential solvers consider when deciding whether or not to participate in a contest is the likelihood of winning (Freeman & Gelber, 2010). The higher the perceived likelihood of winning, the more likely an individual is to enter the contest (Dixit, 1987).


award and thus provides bigger potential compensation for solvers' work compared to the same budget divided into multiple awards. On the other hand, despite the sense of competence and self-esteem that might motivate one to enter such an OIC, this type of structure still entails higher risks and a lower perceived likelihood of being compensated at all than a multiple-award structure, even though the latter offers smaller compensation (Archak & Sundararajan, 2009; Moldovanu & Sela, 2001). Simply put, when participating in an OIC that offers a single award, two outcomes are possible for solvers: to win the prize, or not to win it and not be compensated for the effort put into the OIC at all; under the multiple-awards condition, the alternatives are to win the first prize, the second prize, the nth prize, or not to win at all (Moldovanu & Sela, 2001). We argue that the option of smaller awards under a multiple-award structure, with a higher likelihood of winning, is preferred by potential solvers over a relatively higher award with a lower perceived likelihood of winning. Research suggests that individuals generally tend to avoid risk and choose a higher perceived likelihood of being compensated at all over the option of risking more to possibly win the larger prize under a single-award structure (Moldovanu & Sela, 2001). Szymanski and Valletti (2005) complemented the propositions of Moldovanu and Sela (2001) by modeling participation incentives while distinguishing between weaker and stronger contestants. Weaker potential solvers become increasingly discouraged from participating when they face stronger competition and do not consider themselves candidates to win the first prize. Thus, the participation of these solvers is driven by the incentive to win the second, third or nth prize (Szymanski & Valletti, 2005).
While the multiple-award structure is preferred over the single-award structure by weaker participants, the experimental study of Cason, Masters and Scheremeta (2010) demonstrated that the participation of individuals with stronger abilities is not altered by the type of award structure. As a result, we expect that overall a multiple-award structure should attract more solvers than a single-award structure, given the same budget. Hence, the following hypothesis is proposed:

H1a: Multiple-equal award structure will attract more solvers compared to a single-award structure.
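The risk argument behind H1a can be illustrated with a stylized expected-utility sketch. The concave utility function, the prize amounts and the winning probabilities below are assumptions of this sketch (a textbook-style simplification), not a model estimated in the thesis:

```python
import math

# A risk-averse solver with concave utility u(x) = sqrt(x) compares a
# single $400 prize won with probability 0.1 against four $100 prizes,
# where each entrant wins some prize with probability 0.4. Note that the
# expected monetary value is identical ($40) in both cases, so any
# difference in attractiveness comes purely from risk attitude.

def u(x):
    return math.sqrt(x)  # concavity models risk aversion

# Single-award structure: win $400 with prob 0.1, else nothing.
eu_single = 0.1 * u(400)

# Multiple-equal structure: win a $100 prize with prob 0.4, else nothing.
eu_multiple = 0.4 * u(100)

print(eu_single, eu_multiple)  # 2.0 vs 4.0: multiple awards preferred
```

Under these assumed numbers, the multiple-equal structure yields double the expected utility even though the expected payout is the same, which is the intuition behind risk-averse solvers preferring a higher likelihood of being compensated at all.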


structure and approaches the most unequal multiple-award structure. The more the budget is concentrated on the first prize, the smaller the second prize and/or the rest of the prizes become. As argued above, potential solvers consider two crucial factors when deciding whether to participate in a contest: the size of the prizes and the likelihood of winning (Freeman & Gelber, 2010; Mathews & Namoro, 2008). The major advantage of the single-award structure, as well as of multiple-award structures with a relatively high range between the largest and smallest awards, is that it offers higher potential compensation for the incurred costs compared to the situation where the same budget is distributed in a less varied way (Cohen, Kaplan & Sela, 2008). Given the same budget, the higher the range between the largest and smallest awards, the relatively larger the first prize becomes. However, contrary to the single-award structure, multiple-non-equal award structures not only provide relatively high compensation but also entail a higher likelihood of winning because multiple prizes are offered. Therefore, multiple-non-equal award structures perform well on both factors, the size of award and the likelihood of winning, while the single-award structure is better only in terms of the size of award. Hence, given the same budget, we expect multiple-non-equal award structures to attract more solvers compared to a single-award structure:

H1b: Multiple-non-equal award structures will attract more solvers compared to a single-award structure.

While all multiple-award structures share the advantage of a relatively higher perceived likelihood of being compensated at all, multiple-non-equal award structures also offer a relatively higher first prize than the multiple-equal structure. The size of the possible compensation for the effort put into the contest is in the interest of both weaker and stronger potential solvers (Szymanski & Valletti, 2005). Thus, multiple-non-equal award structures perform relatively well on both factors, the size of award and the likelihood of winning, whereas the multiple-equal awards structure is superior only in terms of a higher likelihood of winning. Hence, given the same budget, we expect that multiple-non-equal award structures will attract more solvers compared to the multiple-equal award structure:

H1c: Multiple-non-equal award structures will attract more solvers compared to a multiple-equal award structure.

Multiple-non-equal award structures entail higher largest prizes compared to the multiple-equal award structure, and both structures offer a higher likelihood of winning compared to the single-award structure. However, with an increase in the relative range between the largest and smallest award, the size of the highest award increases as well. Since the size of prizes is an important motivator to enter a contest (Zhao & Zhu, 2014), we expect a multiple-award structure with a relatively higher range to outperform the structure with a relatively lower range in terms of participation. Hence, given that the budget is fixed, we propose the following hypothesis:

H1d: Multiple-high-range award structure will attract more solvers compared to a multiple- low-range award structure.

2.2.2 The moderating role of OIC budget

We argue that with an increase in the budget of OIC, the preference for multiple-award structures will be even stronger compared to a single-award structure. Following the same line of thought, the major advantage of the single-award structure is the larger award concentrated in one prize (Cohen, Kaplan & Sela, 2008). When the fixed budget is low, dividing it into multiple awards might entail a higher perceived likelihood of being compensated, but only with a relatively small award, which becomes smaller still when the multiple awards are equal and the number of awards increases. On the other hand, when the fixed budget is high, a multiple-equal award structure might still result in a substantial amount of money per award, enough to work as an incentive to participate in the OIC. According to Kalra and Shi (2001), there is a threshold size of award that is large enough to work as an incentive by offering a substantial expected utility of participating in a contest. Hence, when the OIC budget is higher, the relative disadvantage of the multiple-equal award structure compared to the single-award structure can be balanced out.

H2a: Total budget of OIC moderates the differential impact of multiple-equal award structure and single-award structure, such that with an increase in total budget the relative positive effect of multiple-equal award structure on participation becomes stronger.


H2b: Total budget of OIC moderates the differential impact of multiple-non-equal award structures and single-award structure, such that with an increase in total budget: the positive effect of multiple-non-equal award structures on participation becomes stronger.

Regarding multiple-non-equal award structures: while these structures share with the multiple-equal award structure the advantage of a higher perceived likelihood of winning, they additionally imply a higher award concentrated in the largest prize (Szymanski & Valletti, 2005). In a similar vein, while we hypothesized that multiple-non-equal award structures generally have a stronger effect on participation than the multiple-equal award structure because of the larger first prize, this differential impact will be less pronounced as the total budget increases, simply because the prizes under the multiple-equal structure become large enough to motivate individuals to join the OIC. As argued for hypothesis 2a, with an increase in the total budget of an OIC, the size of the prizes distributed equally across awards increases as well, and the prizes potentially become large enough to motivate individuals to enter the OIC (Kalra & Shi, 2001). Thus, the relative advantage of multiple-non-equal award structures in offering a relatively larger first prize over the multiple-equal award structure will be less pronounced when the budget is increased. Therefore, we hypothesize:

H2c: The total budget of OIC moderates the differential impact of multiple-non-equal award structures and multiple-equal award structure, such that with an increase in total budget, the relative positive effect of multiple-non-equal award structures on participation becomes weaker.

In addition, we expect the budget of OIC to moderate the differential impact of the multiple-high-range awards structure and the multiple-low-range award structure on participation. As argued in hypothesis 1d, the multiple-high-range award structure offers a relatively higher award than the multiple-low-range award structure and thus triggers participation. When comparing the two types of multiple-non-equal award structures, we follow the same logic as for hypothesis 2c: with an increase in total budget, the size of the highest award under the multiple-low-range award structure becomes large enough to motivate individuals to participate in an OIC (Kalra & Shi, 2001). In this way, the relative advantage of the multiple-high-range award structure in offering a larger first prize than the multiple-low-range award structure will be mitigated. Hence, we propose the following hypothesis:

H2d: The total budget of OIC moderates the differential impact of multiple-high-range award structure and multiple-low-range award structure, such that with an increase in total budget the relative positive effect of multiple-high-range award structure on participation becomes weaker.

2.2.3 The moderating role of OIC duration

As argued above, the single-award structure overall entails higher risks and a lower perceived likelihood of winning, and due to the higher perceived challenge it will attract fewer solvers than the multiple-equal award structure. However, a longer duration of the contest, referred to as the number of days between the announcement of the OIC's start and its end (Bockstedt, Druehl & Mishra, 2015), provides solvers with more time to prepare their solutions and requires less intensive effort in the process of developing a solution to the specified task (Shao et al., 2012). According to Banker and Hwang (2008), more challenging tasks entail higher time costs. Empirical evidence shows that individuals are generally less likely to participate in more difficult tasks (Sonsino, Benzion & Mador, 2002). Thus, giving solvers more time to complete the task may lower its perceived difficulty, challenge and the risks associated with entering a single-award OIC.

Furthermore, the literature on creativity argues that time pressure impedes creativity (Amabile, 1996); thus, in the context of OIC, where creative skills are required for successful performance, time pressure is likely to reduce the motivation to enter the contest. Having more time provides the opportunity for solvers to show their full potential and create their best possible solutions; this applies especially to solvers who are slower in the creative process (Shao et al., 2012). A longer OIC duration thus decreases the risks and increases the perceived likelihood of winning the contest, consequently motivating solvers to enter the contest and put effort into the task (Shao et al., 2012). Since an OIC with a single-award structure poses higher risks, due to the lower perceived likelihood of winning compared to the multiple-equal and multiple-non-equal award structures, we expect that these relationships will be mitigated by a longer OIC duration.

H3a: Duration of OIC moderates the differential impact of multiple-equal award structure and single-award structure, such that with an increase in duration the relative positive effect of multiple-equal award structure on participation becomes weaker.


Furthermore, multiple-non-equal award structures as well as the single-award structure entail relatively higher prizes compared to the multiple-equal award structure. However, aiming to win the biggest prize also poses higher risks, since solvers aiming at the largest prize will work harder on the task without any guarantee of being compensated for that work (Mathews & Namoro, 2008). In addition, the more resources are assigned to compensate a task, the higher the perceived challenge of completing it (Amabile, 1996). Thus, we expect that these risks will be mitigated by a longer OIC duration, since more time allows solvers to work on the task less intensively and leaves enough time to take on and complete more challenging tasks (Shao et al., 2012; Banker & Hwang, 2008). Therefore, we assume that, given the same budget, when the OIC lasts longer individuals will be more willing to challenge themselves and aim at the higher prizes offered by multiple-non-equal structures. Similarly, since the multiple-high-range award structure offers a relatively larger first prize than the multiple-low-range award structure, we also expect that with an increase in duration the former will provide even stronger incentives to join than the latter, due to the lower perceived risks associated with having more time to complete the task. Thus, we propose the following hypotheses:

H3c: Duration of OIC moderates the differential impact of multiple-equal award structure and multiple-non-equal award structures, such that with an increase in duration the relative positive effect of multiple-non-equal award structure on participation becomes stronger.

H3d: Duration of OIC moderates the differential impact of multiple-high-range award structure and multiple-low-range award structure, such that with an increase in duration the relative positive effect of multiple-high-range award structure on participation becomes stronger.

2.2.4 The type of award structure and its effect on the quality of solutions


reaction function is increasing in rivals' effort (Szymanski & Valletti, 2005). Thus, due to this competitive effect, multiple-award structures motivate all contestants, regardless of their abilities, to put in more effort and consequently increase the quality of solutions. Hence, given that the budget and the number of solutions submitted to the OIC are the same, we propose the following hypothesis:

H4a: Multiple-equal award structure will elicit more top quality solutions compared to a single-award structure.

While we expect that the multiple-equal award structure will elicit more high quality solutions than the single-award structure, another question is whether it is most efficient to distribute the fixed budget of an OIC equally across awards or in a more varied way.

According to the theorem proposed by Archak and Sundararajan (2009), when solvers are sufficiently risk neutral, every new prize introduced to the award structure contributes half as much to the OIC outcome as the prize above it. For example, when a seeker offers two prizes, the marginal incentive generated by a one-dollar increase in the first prize is higher than the marginal incentive generated by the same increase in the second prize (Archak & Sundararajan, 2009). Thus, given that multiple awards are offered, concentrating the budget on the first prize triggers participants to put more effort into a task. In their experimental study, Freeman and Gelber (2010) demonstrated that, given the same budget, neither a single-award structure with the highest possible award nor a multiple-equal award structure maximizes aggregate effort. Instead, a multiple-non-equal award structure proved to be the most efficient in motivating solvers to work harder on tasks (Freeman & Gelber, 2010). Therefore, given that the budget and the number of solvers participating in OIC is fixed, we propose the following hypotheses:

H4b: Multiple-non-equal award structure will elicit more top quality solutions compared to a single-award structure.

H4c: Multiple-non-equal award structure will elicit more top quality solutions compared to a multiple-equal award structure.

H4d: Multiple-high-range award structure will elicit more top quality solutions compared to a multiple-low-range award structure.

2.2.5 Assurance of payment and its effect on participation and the quality of solutions


participated in OIC (Erat & Krishnan, 2012). Such a commitment poses risks to seekers in case the best solution proposed in the OIC is of less value than the prize announced (Erat & Krishnan, 2012). On the other hand, the lack of such a commitment from the seekers' side poses risks to solvers. Not only do solvers have to evaluate their costs of effort and their potential to propose a solution better than those of their competitors, they also bear the risk that their solution, even if better than the competitors', may still not be purchased by a seeker. In their study on the role of intermediary platforms in managing crowdsourcing contests, Feller et al. (2012) claimed that one of the roles of intermediaries is to enhance appropriability, i.e. to ensure that both parties – seekers and solvers – capture value from participation in OIC. The potential situation in which none of the proposed solutions is acceptable to a seeker hampers future participation of both seekers and solvers. As a response, the intermediary can ensure diverse rewards for all participants, including the majority who lost the OIC, i.e. non-financial benefits such as learning, hedonic awards, increased reputation or possible career prospects (Feller et al., 2012). In our study, consistent with the literature on crowdsourcing, we assume that solvers are encouraged to participate and to put more effort into a task by a combination of extrinsic and intrinsic motives (Fuller, 2006). Therefore, we argue that the assurance of getting compensated, given that the proposed solution is the best of the set of all submitted solutions, would increase both participation and the incentive to put more effort into a task. Thus, given the same award structure and the same budget assigned to the OIC, we expect that assurance of payment will positively affect participation and the number of top quality solutions in OIC.

H5: OIC with the assurance of payment will attract more solvers and more top quality solutions than OIC without the assurance of payment.

3. Methodology

3.1 Research design


Since the research question – how does the award structure affect OIC performance, i.e. participation and the quality of solutions? – is concerned with learning how changes in award structure produce changes in OIC performance, it implies the causal nature of the study (Blumberg, Cooper & Schindler, 2013).

3.2 Empirical setting

Data used for this research was collected from a well-known OIC intermediary marketplace that is based in the US and operates online all over the world, with more than 182 thousand registered creatives and 138 entries per OIC on average. The website hosts OIC for design works that are classified into four main categories: graphic design, web design, mobile design and industrial design.

3.2.1 The process and characteristics of OIC

The analyzed online intermediary marketplace is based on interactions between three main parties: an innovation seeker, solvers - creatives who participate in OIC, and an online intermediary marketplace itself.

Preparation phase: Prior to broadcasting the contest, seekers can choose from a number of project packages that generally differ in terms of a) privacy of solutions, b) a limit on entries based on screening of creatives before allowing them to participate, c) additional promotion of the contest, and d) the price of the package. The advanced promotion feature refers to the additional service of including the project in a special Featured Projects newsletter, which the marketplace sends to tens of thousands of solvers every week, and a private electronic invitation to the most successful solvers in a particular project category. Furthermore, for an additional fee a seeker can buy Twitter promotion and get the project advertised on the official marketplace Twitter page. In our study we focus on contests that are public, i.e. the gallery is observable to everyone from the start of the contest until the winner is announced, and the number of entries is not controlled by seekers. The feature of additional promotion is controlled for in our study.

The marketplace guarantees to refund money to seekers who are not satisfied with the solutions submitted or whose project does not receive at least one hundred solutions. However, a seeker can assure the project and in this way guarantee to solvers that a winner will be chosen and the award will be granted. The preparation phase ends with choosing the number of awards, the budget allocation and the contest duration.

Notes:
1. Information for this sub-chapter is taken from the online intermediary marketplace used in this study.
2. Due to confidentiality reasons, the name of the OIC intermediary marketplace is kept anonymous.

Competition phase: Once the contest is broadcasted, solvers can start submitting solutions. The nature of this phase entails three important features that distinguish the setting of OIC used in this study from traditional offline contests and that are relevant for the conclusions of our study. Firstly, most previous studies assumed that the number of solvers is known before the start of the contest and that all solvers start competing from the moment the contest starts. However, in the setting of this study, the process of entering is dynamic, since solvers can join the contest at any time before it ends. Therefore, the final number of solvers is unknown until the contest ends. Secondly, seekers can start evaluating solutions before the contest ends by giving them a number of stars ranging from one to five, or even rejecting a solution. Finally, the solutions submitted are observable to everyone. Altogether, these three features make the empirical setting of our study significantly different from most previous studies on contests, with an impact on the perception of winning and the quality of solutions (Yang, Chen & Pavlou, 2009).

Post-competition phase: After the OIC is closed, seekers either select winner(s) or ask their money back in case solutions do not live up to their expectations. The latter option can be chosen only when seekers did not assure the payment initially. In case of the former option, seekers and selected winners continue to communicate privately and finalize the project. Eventually, with the assistance of the marketplace, the IPR of the solution is transferred to the seeker and the winners are awarded. Lastly, both parties reflect on the process and rate each other on the dimensions of timing, communication and overall satisfaction on a scale ranging from zero to one hundred.

3.2.2 Data

OIC is the focal unit of our analysis and data is aggregated at the OIC level. Initially, 4919 OICs conducted over a seven-year period – from March 2008 till May 2015 – constituted the basis of the empirical analysis of this research.


To determine whether values were missing randomly or not, Little's MCAR (Missing Completely At Random) test was conducted (Field, 2013). Results showed that these values were indeed missing at random (p = .638 > .05). Therefore, 119 cases with list-wise missing values were excluded from the analysis. Test results can be found in Appendix 1. After cleaning and adjusting the data, the final sample for the models of participation consisted of 4547 cases.

Furthermore, to assure that the number of top quality solutions (solutions evaluated with the highest score of five) is a good representation of the quality of solutions, we excluded 747 cases in which a seeker did not give any evaluation to any solution. This situation does not necessarily mean that no solution was considered of top quality; it might also be that a seeker chose the winner or asked for the refund once the OIC ended without engaging in the scoring process. By taking into account only those cases where seekers engaged in the scoring system, we can consider the absence of solutions scored with the highest score as a sign that, in the eyes of the seeker, no solution represented top quality. Hence, the final sample for the models of the quality of solutions consisted of 3800 cases.
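As a toy illustration of this sampling rule (hypothetical contests and scores, not the actual data), contests without any seeker evaluation are dropped from the quality models, while scored contests without a five-star solution stay in the sample with a count of zero:

```python
# Hypothetical contests: each holds the scores a seeker gave to its solutions.
contests = [
    {"id": 1, "scores": [5, 4, 2]},  # seeker engaged in scoring; one top-quality solution
    {"id": 2, "scores": []},         # no evaluations at all -> excluded from quality models
    {"id": 3, "scores": [3, 4]},     # scored, but no solution rated five -> kept, count zero
]

# Keep only contests in which the seeker engaged in the scoring system.
quality_sample = [c for c in contests if c["scores"]]

# Number of top-quality solutions (score of five) per retained contest.
top_quality = {c["id"]: sum(s == 5 for s in c["scores"]) for c in quality_sample}
```

Contest 3 illustrates why the exclusion matters: its zero count is an informative judgment by a scoring seeker, not missing data.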

3.3 Measures

Dependent variables

Participation:


Figure 3: Distribution histograms of dependent variables before and after log transformation.

Quality of solutions:


We measure the quality of solutions as the total Number of top quality solutions (evaluated with the score of five) in the OIC. In order to probe robustness, we also measure the effects on the total Number of high quality solutions, which refers to solutions evaluated with a score of either five or four.

Independent variables

Award structure

Four types of award structure

Prior theoretical studies distinguished between single and multiple awards and developed models for the optimal distribution of the budget across awards (Archak & Sundararajan, 2009; Szymanski & Valletti, 2005). In their empirical study on solving mazes in an offline environment, Cason, Masters and Scheremeta (2010) tested the differential impacts of a single-award structure and a proportional (multiple-non-equal) award structure on contest performance. This study aims to broaden the understanding of the differential impacts of various award structure types in an OIC setting. We go one step further by distinguishing not only between single and multiple award structures, but also between sub-types of the multiple-non-equal award structure. The categorization of award structure types with the corresponding frequencies is depicted in Figure 4, where grey boxes represent the four focal award structure types examined in this study.


[1] RR = (A1 – An) / A1,

where A1 = size of the first (largest) award in the awards' distribution, An = size of the last (smallest) award in the awards' distribution, and n = the number of awards.

In order to divide the multiple-non-equal award structure type into relatively equal sub-types, we analyzed frequencies based on the relative range. The first sub-type is defined as a multiple-non-equal award structure with a low relative range, i.e. the multiple-low-range award structure, which refers to the third type of award structure studied in this paper. This sub-type occupies the interval of relative range values from 0.000001 to 0.51. The second sub-type is defined as a multiple-non-equal award structure with a comparatively high relative range, i.e. the multiple-high-range award structure, occupying the interval of relative range values from 0.5100001 to 1 (excluding 1 from the interval), and refers to the fourth type of award structure used in this study. The four final types of award structure were defined by binary variables, where "1" refers to a specific type of award structure, and "0" indicates that another type of award structure was chosen.
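To make the classification concrete, the following sketch (hypothetical award vectors; the relative range is assumed to be computed as RR = (A1 – An)/A1, and the 0.51 cutoff follows the intervals above) assigns a contest to one of the four focal types:

```python
def relative_range(awards):
    """Relative range between the largest and smallest award in a
    contest's prize distribution: RR = (A1 - An) / A1."""
    a = sorted(awards, reverse=True)
    return (a[0] - a[-1]) / a[0]

def award_structure_type(awards, cutoff=0.51):
    """Classify one contest's award vector into the four focal types:
    S (single), ME (multiple-equal), LR (multiple-low-range),
    HR (multiple-high-range)."""
    if len(awards) == 1:
        return "S"                   # whole budget on one award
    rr = relative_range(awards)
    if rr == 0:
        return "ME"                  # multiple, equally sized awards
    return "LR" if rr <= cutoff else "HR"
```

For example, awards of $300 and $200 give RR = 1/3 (multiple-low-range), whereas $400 and $50 give RR = 0.875 (multiple-high-range).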

Figure 4: Types and the distribution of award structure.

Abbreviations of focal award structure types: 1) S: single-award, 2) ME: multiple-equal, 3) LR: multiple-low-range, 4) HR: multiple-high-range.

Assurance of payment

Assurance of payment refers to the commitment of a seeker to award a winner no matter what, i.e. even if the seeker is not satisfied with any of the solutions submitted. This variable was defined as binary ("1" – award(s) assured, "0" – award(s) not assured).

Duration


For interpretability of effects, this variable was expressed in days. When duration was included as an interaction term, it was mean-centered so that the interpretation of the potential change of the main effect due to interactions would be meaningful, i.e. relative to the mean value of the variable.

Total budget

The total budget of OIC was measured in dollars. We expressed this variable in hundreds of dollars and mean-centered it when testing for interaction effects.
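Mean-centering, as applied to Duration and Budget before forming interaction terms, can be sketched as follows (illustrative values only):

```python
def mean_center(values):
    """Subtract the sample mean so that a coefficient involving an
    interaction term is interpreted at the mean of the moderator,
    not at the (often meaningless) value of zero."""
    m = sum(values) / len(values)
    return [v - m for v in values]

duration_days = [7.0, 14.0, 21.0]   # hypothetical contest durations
centered = mean_center(duration_days)
```

After centering, the main effect of an award structure dummy in an interaction model is read as its effect for a contest of average duration.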

Control variables

Some variables are controlled for in both the participation and the creative performance models, while others are included in one model exclusively. Firstly, Duration is included as a control variable in both models. Yang, Chen & Pavlou (2009) found a positive effect of duration on the number of solvers, and Amabile (1996) demonstrated that time pressure impedes creativity. Furthermore, in order to test the effect of award structure types on both participation and quality of solutions, we control for the total budget of OIC, i.e. Budget. Controlling for Budget enables us to infer whether, given the same budget, distributing it across awards in different ways makes a difference in terms of OIC performance.


any other category.

In examining the role of award structure on the quality of solutions, we control for the Ability of solvers. Prior research showed that solvers higher in ability are more likely to win a contest (e.g. Boudreau, Lacetera & Lakhani, 2011; Cason, Masters & Scheremeta, 2010). The ability of a solver is defined as the ratio of the total number of awarded OIC to the total number of OIC that a solver participated in. This variable is a contest-level variable, i.e. it is aggregated within the contest. An overview of the variables used in this study is presented in Table 1. Descriptive statistics of the variables are summarized in Tables 2 and 3.
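A minimal sketch of how the contest-level Ability variable could be constructed (hypothetical solver histories; the within-contest aggregation is assumed here to be the mean of participants' win ratios, which the text leaves unspecified):

```python
# Hypothetical solver histories: (number of OIC entered, number of OIC won).
solver_history = {"ann": (10, 2), "bob": (20, 1), "eve": (5, 0)}

def win_ratio(solver):
    """Solver-level ability: awarded OIC / OIC participated in."""
    entered, won = solver_history[solver]
    return won / entered

def contest_ability(participants):
    """Aggregate the participating solvers' abilities within one OIC
    (mean aggregation is an assumption of this sketch)."""
    ratios = [win_ratio(p) for p in participants]
    return sum(ratios) / len(ratios)
```

A contest entered by "ann" (win ratio 0.20) and "bob" (0.05) would thus receive an Ability value of 0.125.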

Table 1: Operationalization of the variables

Variable | Role of the variable | Description | Scale
Number of solvers | Dependent, control | Total number of solvers participating in OICi (ln). Reflects participation. | Ratio
Number of solutions | Robustness check | Total number of solutions submitted to OICi (ln). Reflects participation. | Ratio
Number of top quality solutions | Dependent | Total number of top quality solutions (evaluated with a score of five) submitted to OICi. | Ratio
Number of high quality solutions | Robustness check | Total number of high quality solutions (evaluated with a score of four or five) submitted to OICi. | Ratio
Award structure, type 1: Single-award | Independent | Budget of OICi is concentrated on one award. | Binary: "1" = yes, "0" = no
Award structure, type 2: Multiple-equal | Independent | Budget of OICi is distributed equally across multiple awards. | Binary: "1" = yes, "0" = no
Award structure, type 3: Multiple-low-range | Independent | Budget of OICi is distributed across multiple awards so that the relative range between the largest and the smallest awards is relatively low. | Binary: "1" = yes, "0" = no
Award structure, type 4: Multiple-high-range | Independent | Budget of OICi is distributed across multiple awards so that the relative range between the largest and the smallest awards is relatively high. | Binary: "1" = yes, "0" = no
Assurance of payment | Independent | A seeker guarantees that the prize offered in OICi will be awarded. | Binary: "1" = yes, "0" = no
Duration | Independent (interaction), control | Number of days between the start and the end of OICi. | Ratio
Budget ($100) | Independent (interaction), control | Total budget assigned to OICi, expressed in hundreds of dollars. | Ratio
Promotion | Control | Besides the intermediary marketplace, OICi is promoted in other channels. | Binary: "1" = yes, "0" = no
Category: Graphic | Control | OICi belongs to the graphic design category that comprises the subcategories logo, package, illustrations and print. | Binary: "1" = yes, "0" = no
Number of evaluations | Control | Total number of solutions evaluated by seekers in OICi. | Ratio
Ability | Control | Ratio of the total number of awarded OIC to the total number of OIC that solvers participated in, aggregated at OICi level. | Ratio

Variables | N | Minimum | Maximum | Range | Mean | St. deviation | Variance
No. solvers | 4547 | 1.00 | 991.00 | 990.00 | 162.31 | 194.11 | 3767.90
No. of solutions | 4547 | 2.00 | 4240.00 | 4238.00 | 162.31 | 194.11 | 37679.90
No. of top quality solutions | 3800 | 0.00 | 112.00 | 112.00 | 3.89 | 8.20 | 67.18
No. of high quality solutions | 3800 | 0.00 | 443.00 | 443.00 | 17.78 | 25.93 | 672.11
Budget ($100) | 4547 | 2.00 | 70.00 | 68.00 | 7.67 | 3.91 | 15.28
Duration (days) | 4547 | .00 | 904.17 | 904.17 | 11.98 | 14.61 | 213.40
Ability | 3800 | .00 | .50 | .50 | .09 | .07 | .01
No. of evaluations | 3800 | 1.00 | 1009.00 | 1008.00 | 75.66 | 103.86 | 10787.30
Relative range | 4547 | .00 | .98 | .98 | .06 | .18 | .03
No. of awards | 4547 | 1.00 | 12.00 | 12.00 | 1.30 | .77 | .60

Table 2: Descriptive statistics of continuous variables

Variable | N | Frequencies | Percent
S str. type | 4547 | 3673 | 80.80
ME str. type | 4547 | 344 | 7.60
LR str. type | 4547 | 306 | 6.70
HR str. type | 4547 | 224 | 4.90
Assurance | 4547 | 1253 | 27.60
Promotion | 4547 | 1529 | 33.60
Cat. Graphic | 4547 | 2836 | 62.40

Table 3: Frequencies of categorical variables

3.4 Research method and model specification


The Poisson approach can only be used when the mean and the variance of the dependent variable are equal. However, in our case this assumption did not hold, so we proceeded using Negative Binomial regression (NBR). Section 4.2 provides a detailed explanation of this issue. In order to test the effects of award structure on participation in OIC (hypotheses 1-3), four general models were specified: two main effects models testing the effects on the Number of solvers and the Number of solutions, respectively, and two models including interactions with Budget and Duration, testing the effects on the Number of solvers and the Number of solutions, respectively. These models are general in the sense that they do not account for switching between different reference categories of award structure types. Instead, a general term is used representing the three award structure types n included in a model, leaving the type which is not included in the equation to represent the reference category.

Since testing hypotheses 1-3 requires changing reference categories, each model will be estimated two or three times with different reference categories. More detailed specifications of these models will be provided in the results section. The generic econometric models are expressed in equations 1-4: equations 1 and 2 refer to the main effects models, while equations 3 and 4 refer to the moderating effects models.
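Switching the reference category only changes which dummy is dropped from the design matrix; the fitted model is the same, and only the interpretation of the contrasts changes. A dummy-coding sketch (hypothetical contests):

```python
AWARD_TYPES = ("S", "ME", "LR", "HR")

def dummy_code(types, reference):
    """Dummy-code award-structure types against a chosen reference
    category; the reference level is absorbed into the intercept."""
    levels = [t for t in AWARD_TYPES if t != reference]
    return [{lvl: int(t == lvl) for lvl in levels} for t in types]

# Same three contests coded twice, against two different reference categories.
rows_me = dummy_code(["S", "ME", "HR"], reference="ME")
rows_s = dummy_code(["S", "ME", "HR"], reference="S")
```

With ME as the reference, the coefficient on S estimates the S-versus-ME contrast; re-estimating with S as the reference recovers the ME-versus-S contrast with the opposite sign.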

[1] ln(No_Solvers_i) = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + ε_i

[2] ln(No_Solutions_i) = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + ε_i

[3] ln(No_Solvers_i) = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + β10·(Type1_i × Budget_i) + β11·(Type2_i × Budget_i) + β12·(Type3_i × Budget_i) + β13·(Type1_i × Duration_i) + β14·(Type2_i × Duration_i) + β15·(Type3_i × Duration_i) + ε_i

[4] ln(No_Solutions_i) = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + β10·(Type1_i × Budget_i) + β11·(Type2_i × Budget_i) + β12·(Type3_i × Budget_i) + β13·(Type1_i × Duration_i) + β14·(Type2_i × Duration_i) + β15·(Type3_i × Duration_i) + ε_i

where Type1_i–Type3_i denote the three award structure type dummies included in the model; the omitted type represents the reference category.


high quality solutions (evaluated with the score of either four or five points) were estimated. The equation for the Negative Binomial model is adopted from Lawless (1987) and is defined as follows:

P(Y_i = y | x_i) = [Γ(y + a⁻¹) / (Γ(a⁻¹) · y!)] · (aμ_i / (1 + aμ_i))^y · (1 + aμ_i)^(−1/a), with μ_i = exp(x_i′β),

where y refers to the number of (top) high quality solutions, which can take non-negative discrete values y = 0, 1, 2, …, n; x denotes the explanatory variables; μ is the mean given by the parameter vector β; and a is a non-negative dispersion parameter (a ≥ 0). Equations 5-6 represent the additive forms of the models that follow the Negative Binomial distribution.

[5] ln(μ_i) for No_TQ_Solutions_i = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + β9·No_Evaluations_i + β10·Ability_i + ε_i

[6] ln(μ_i) for No_HQ_Solutions_i = β0 + β1·Type1_i + β2·Type2_i + β3·Type3_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + β9·No_Evaluations_i + β10·Ability_i + ε_i

where:
β0 = constant (intercept)
β = parameter
i = the OIC identification (ID) number
No_TQ_Solutions_i = total number of solutions evaluated with a score of five in OICi
No_HQ_Solutions_i = total number of solutions evaluated with a score of either four or five in OICi
No_Solvers_i = total number of solvers in OICi (ln)
No_Solutions_i = total number of solutions in OICi (ln)
Type_n,i = binary variable for the award structure type n (Single-award (S), Multiple-equal (ME), Multiple-low-range (LR), Multiple-high-range (HR)) in OICi: "1" if the award structure type is n, "0" otherwise
Assurance_i = binary variable for the assurance of OICi: "1" if OICi is assured, "0" otherwise
Duration_i = number of days between the start and the end of OICi
Budget_i = budget assigned to OICi expressed in hundreds of dollars ($100)
Graphic_i = binary variable for the category to which OICi belongs: "1" if it belongs to the category Graphic, "0" otherwise
No_Evaluations_i = total number of solutions evaluated by seekers in OICi
Ability_i = ratio of the total number of awarded OIC to the total number of OIC that solvers participated in, aggregated at OICi level
ε_i = error term for OICi
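The choice of NBR over Poisson rests on overdispersion: Poisson regression assumes the conditional mean equals the variance, which the descriptive statistics contradict (e.g. top quality solutions in Table 2: mean 3.89, variance 67.18). A minimal dispersion check on a count variable can be sketched as:

```python
def overdispersed(counts):
    """Compare the sample variance with the mean: Poisson regression
    assumes they are equal; a clearly larger variance points toward
    Negative Binomial regression instead."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var > mean

# A small count sample with a heavy right tail, as in the thesis data.
sample = [0, 0, 1, 2, 30]
```

In practice the formal choice is usually based on a test of the dispersion parameter a in the fitted NB model, but the raw mean-variance comparison already signals the problem.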

4. Results

Our discussion of results is divided into two sections: (1) a section discussing the effects on participation (hypotheses 1-3), and (2) a section in which the effects on the quality of solutions are explained (hypotheses 4-5). As mentioned previously, testing these hypotheses requires switching between reference categories of the award structure types, thus models 1-4 have to be estimated several times. An overview of the models, corresponding hypotheses and reference categories is provided in Table 4. Some hypotheses can be tested using either one or another reference category, which is convenient in case potential multicollinearity issues become an obstacle to inferences about the hypothesized effects.

Model | Hypotheses and reference categories
1, 2: Main effects | H1a: ME/S; H1b: S; H1c: ME; H1d: LR/HR
3, 4: Interactions with budget and duration | H2a: ME/S; H2b: S; H2c: ME; H2d: LR/HR; H3a: ME/S; H3b: S; H3c: ME; H3d: LR/HR

Table 4: Overview of the reference categories
Abbreviations of award structure types: 1) S: single-award, 2) ME: multiple-equal, 3) LR: multiple-low-range, 4) HR: multiple-high-range.

4.1 Award structure, total budget and duration of OIC: effects on participation


2014, Leeflang, 2015). The reason for this can be explained by the central limit theorem, which posits that the shape of the data should not affect significance tests provided the sample is large enough (Lumley et al., 2002; Field, 2014). Also, as sample size increases, the null hypothesis of normally distributed error terms is rejected more often, because even minor deviations from normality can lead to the violation of this assumption. The third assumption, homoscedasticity, refers to the requirement that the error term should be homoscedastic, i.e. the variance of the error term is the same across all cases cross-sectionally and/or over time (Leeflang, 2015). Violating this assumption does not affect the size of the parameters, but it does invalidate estimates of the variance of the parameters, which leads to spurious significance inferences (Hayes & Cai, 2007; Leeflang, 2015).

4.1.1 Main effects model: Estimation and testing for assumptions

The assumption of no multicollinearity was tested by examining Variance Inflation Factors (VIFs). If VIFs are greater than five, predictor variables are correlated to an extent which might bias parameter estimates. Firstly, Models 1 and 2 were estimated three times with three different reference categories, i.e. the multiple-equal, single-award and multiple-low-range award structure types. Equations 1a and 2a represent Models 1a and 2a with the multiple-equal award structure type as the reference category. In Models 1b, 2b and 1c, 2c, the single-award and multiple-low-range award structure types, respectively, are used as reference categories.
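For the special case of two predictors, the VIF reduces to 1/(1 − r²), where r is the correlation between the predictors; with more predictors, r² is replaced by the R² from regressing one predictor on the others. A pure-Python sketch of the two-predictor case (illustrative data):

```python
def vif_two_predictors(x1, x2):
    """VIF for a two-predictor model: 1 / (1 - r^2), with r the Pearson
    correlation between the two predictors."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1 = sum((a - m1) ** 2 for a in x1) ** 0.5
    s2 = sum((b - m2) ** 2 for b in x2) ** 0.5
    r = cov / (s1 * s2)
    return 1.0 / (1.0 - r * r)
```

For two weakly related predictors (r = 0.6), the VIF is 1/(1 − 0.36) = 1.5625, far below the rule-of-thumb threshold of five used here.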

[1a] ln(No_Solvers_i) = β0 + β1·S_i + β2·LR_i + β3·HR_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + ε_i

[2a] ln(No_Solutions_i) = β0 + β1·S_i + β2·LR_i + β3·HR_i + β4·Assurance_i + β5·Budget_i + β6·Duration_i + β7·Promotion_i + β8·Graphic_i + ε_i

where:
S_i = binary variable for the award structure type in OICi: "1" if the award structure type is single-award, "0" otherwise
ME_i = binary variable for the award structure type in OICi: "1" if the award structure type is multiple-equal, "0" otherwise
LR_i = binary variable for the award structure type in OICi: "1" if the award structure type is multiple-low-range, "0" otherwise
HR_i = binary variable for the award structure type in OICi: "1" if the award structure type is multiple-high-range, "0" otherwise


Appendix 2.

Furthermore, in order to test the normality assumption, histograms of the standardized residuals were examined. The shape of the residual distribution has a minor right skew but overall does not seem to deviate far from a normal distribution. The histograms of Models 1a and 2a are depicted in Figure 5, while the histograms of Models 1b, 2b, 1c and 2c can be found in Appendix 3.

Figure 5: Distributions of standardized residuals of Models 1a and 2a.

In order to statistically test the normality of residuals, Kolmogorov-Smirnov (K-S) and Shapiro-Wilk (S-W) tests were conducted (Table 5). Results for all models rejected the null hypothesis that residuals are normally distributed (p < .01).

Model | K-S statistic | df | p-value | S-W statistic | df | p-value
1a, 1b, 1c | 0.046 | 4547 | 0.000 | 0.053 | 4547 | 0.000
2a, 2b, 2c | 0.046 | 4547 | 0.000 | 0.053 | 4547 | 0.000

Table 5: Results of normality tests

As argued above, given a large sample size (4547 cases), the results of the OLS regression should not be affected (Field, 2013). In addition, the histograms illustrate that the normality assumption is not seriously violated. Thus we proceed to testing for potential heteroscedasticity.

Figure 6: Residuals against predicted values for Model 1a (left).
Figure 7: Residuals against predicted values for Model 2a (right).

The plots illustrate that residuals might systematically differ from zero: the variances fluctuate from larger to smaller and again to larger across predicted values. Density also changes: only a few values lie on the right side of the plot; however, these values do not deviate beyond the largest range of variance of the denser parts of the plot. Thus, heteroscedasticity is suspected, and subsequently a Breusch-Pagan (B-P) test was performed, regressing the squared residuals on the explanatory variables used in Models 1 and 2. The B-P test allowed us to reject the null hypothesis and conclude that heteroscedasticity occurred. The results of the B-P tests for Models 1 and 2 are depicted in Table 6.
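The Lagrange-multiplier form of the Breusch-Pagan test can be sketched for a single regressor (the thesis regresses the squared residuals on all model regressors and reports the F-statistic form; this one-variable pure-Python version illustrates the same idea, with illustrative data):

```python
def breusch_pagan_lm(resid, x):
    """LM form of the Breusch-Pagan test with one regressor: regress the
    squared residuals on x and compute LM = n * R^2, which is referred to
    a chi-squared distribution with 1 degree of freedom."""
    u2 = [r * r for r in resid]
    n = len(u2)
    mu, mx = sum(u2) / n, sum(x) / n
    cov = sum((a - mu) * (b - mx) for a, b in zip(u2, x))
    su = sum((a - mu) ** 2 for a in u2)
    sx = sum((b - mx) ** 2 for b in x)
    if su == 0 or sx == 0:
        return 0.0                      # squared residuals carry no variation
    r2 = cov * cov / (su * sx)          # R^2 of the auxiliary regression
    return n * r2
```

When the residual spread grows with x, the LM statistic exceeds the 5% chi-squared critical value of 3.84 and homoscedasticity is rejected.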

Table 6: Results of B-P tests for Models 1 and 2

In order to remedy heteroscedasticity, a heteroscedasticity-consistent standard error (HCSE) estimator of the OLS parameters was adopted (Hinkley, 1977). Hayes and Cai (2007) argued that this method has an advantage over GLS and WLS approaches because it simply does not require errors to be homoscedastic. The macro developed by Hayes and Cai (2007) was used (Appendix 5). Results for the main effects Models 1 and 2 before correction with HCSE can be found in Appendix 6, and the final results after HCSE correction are depicted in Table 7.
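The HCSE idea can be illustrated for the one-regressor case; this is a sketch of the White/HC0 form (the simplest heteroscedasticity-consistent variant), not the Hayes-Cai macro itself:

```python
def ols_slope_with_hc0(x, y):
    """Simple-regression slope with a heteroscedasticity-consistent (HC0)
    standard error: each squared residual is weighted by its own squared
    x-deviation instead of assuming one common error variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sxx
    a0 = my - b * mx
    resid = [c - (a0 + b * a) for a, c in zip(x, y)]
    var_b = sum(((a - mx) ** 2) * (u ** 2) for a, u in zip(x, resid)) / sxx ** 2
    return b, var_b ** 0.5
```

The point estimate b is the ordinary OLS slope; only the standard error changes, which is why HCSE correction alters significance tests but not coefficient sizes, as noted below Table 7.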

Breusch-Pagan tests:
Model | F-statistic | p-value

We can see that the effects on the number of solvers are comparable with the effects on the number of solutions, demonstrating that Model 1 is robust.

4.1.2 Main effects models: OLS estimation results after HCSE correction

Model 1a (dependent variable: Ln(No. of solvers)) and Model 2a (dependent variable: Ln(No. of solutions)):

Variables | β (1a) | S.E. | t-statistic | β (2a) | S.E. | t-statistic
Constant | 2.103 | 0.546 | 34.853 | 3.272*** | .0866 | 37.773
S str. type | .333*** | .0551 | 6.005 | .394*** | .0772 | 5.108
ME str. type | Reference category
LR str. type | .241*** | .0654 | 3.690 | .250*** | .0791 | 3.158
HR str. type | .405*** | .0710 | 5.711 | .475*** | .0833 | 5.702
Assurance | .180*** | .0267 | 6.744 | .333*** | .0291 | 11.428
Budget | .045*** | .0044 | 10.216 | .053*** | .0054 | 5.543
Duration | .003 | .0114 | .227 | .004 | .0190 | .185
Promotion | .151*** | .0304 | 4.963 | .219*** | .0395 | 5.543
Cat. Graphic | 1.206*** | .0230 | 52.490 | 1.305*** | .0261 | 50.020
F-test = 406.241*** | F-test = 438.282***
R2 = .417, R2 adj. = .416 | R2 = .443, R2 adj. = .442

Model 1b (dependent variable: Ln(No. of solvers)) and Model 2b (dependent variable: Ln(No. of solutions)):

Variables | β (1b) | S.E. | t-statistic | β (2b) | S.E. | t-statistic
Constant | 2.436 | .0160 | 151.966 | 3.667*** | .0206 | 178.058
S str. type | Reference category
ME str. type | -.329*** | .0547 | -6.005 | -.391*** | .0765 | -5.111
LR str. type | -.091* | .0484 | -1.884 | -.144** | .0484 | -2.973
HR str. type | .073 | .0564 | 1.289 | .081 | .0584 | 1.386
Assurance | .180*** | .0267 | 6.753 | .333*** | .0292 | 11.429
Budget | .045*** | .0044 | 10.211 | .053*** | .0054 | 9.879
Duration | .003 | .0114 | .227 | .004 | .0190 | .185
Promotion | .151*** | .0304 | 4.963 | .219*** | .0396 | 5.538
Cat. Graphic | 1.205*** | .0230 | 52.483 | 1.304*** | .0261 | 50.012
F-test = 438.177*** | F-test = 435.288***
R2 = .417, R2 adj. = .416 | R2 = .443, R2 adj. = .442

Model 1c (dependent variable: Ln(No. of solvers)) and Model 2c (dependent variable: Ln(No. of solutions)):

Variables | β (1c) | S.E. | t-statistic | β (2c) | S.E. | t-statistic
Constant | 2.340*** | .0514 | 45.486 | 3.519*** | .0545 | 64.611
S str. type | .096** | .0483 | 1.981 | .147*** | .0483 | 30.035
ME str. type | -.234*** | .0647 | -3.620 | -.245*** | .0778 | -3.153
LR str. type | Reference category
HR str. type | .168** | .0692 | 2.422 | .227*** | .0688 | 3.301
Assurance | .180*** | .0267 | 6.759 | .333*** | .0292 | 11.430
Budget | .045*** | .0044 | 10.212 | .053*** | .0054 | 9.875
Duration | .003 | .0114 | .227 | .004 | .0190 | .185
Promotion | .151*** | .0304 | 4.964 | .219*** | .0395 | 5.540
Cat. Graphic | 1.206*** | .0230 | 52.476 | 1.305*** | .0261 | 50.005
F-test = 438.196*** | F-test = 435.271***
R2 = .417, R2 adj. = .416 | R2 = .443, R2 adj. = .442

Table 7: Models 1 and 2: OLS estimation results after correcting for heteroscedasticity. ***p<.01, **p<.05, *p<.10
Note: Abbreviations of award structure types: 1) S: single-award, 2) ME: multiple-equal, 3) LR: multiple-low-range, 4) HR: multiple-high-range.

As argued by Hayes and Cai (2007), we can see that heteroscedasticity did not distort the sizes of the effects. However, the heteroscedasticity-consistent standard errors are greater than in the initial OLS model, which caused a difference in significance tests between the two estimations: duration consistently became insignificant in all three models. Furthermore, in Model 1b the multiple-low-range award structure has a highly significant effect in the initial OLS estimation (p < .01), while in the HCSE model it is significant only at the .10 level (p < .10). Overall, we can conclude that heteroscedasticity did not have very pronounced effects on the main effects models. Finally, the results of Models 1 and 2 can be interpreted.

41.6% (R2 adjusted = .416) of the variation in the number of solvers can be explained by Model 1, and 44.2% (R2 adjusted = .442) of the variation in the number of solutions can be explained by Model 2. In both models the F-test showed highly significant results, meaning that overall Models 1 and 2 predict the number of solvers and the number of solutions significantly well.
