
Crowdsourced innovation contests:

a study of design and management

variables to improve idea generation

University of Groningen Faculty of Economics and Business

MSc. BA - Business Development

L. van der Meulen
E-mail: larsvdm@outlook.com

Supervisor:
Dr. K.R.E. Huizingh, Faculty of Economics and Business

Second supervisor:
Prof. Dr. Bijmolt


ACKNOWLEDGEMENTS

The way organizations use the internet to involve people in innovation has fascinated me during the last year of my studies. I explored the areas in which organizations use the internet to encourage people to participate in innovation, and I found that the possibilities of involving people in innovation through new technologies are still in early development. Eventually this search led me to the subject of online innovation contests. Although a large number of innovation contest platforms (websites) were known to me, it was interesting to discover that academia, and economic researchers in particular, have only begun to examine these platforms in recent years. Because innovation contest platforms give rise to new business models and new ways to innovate, and much discussion takes place on how to design and manage these contests, I considered this subject well worth exploring. This paper is the end result.

It is also the final part of the Master of Science in Business Administration (MSc. BA), specialization Business Development, at the University of Groningen. My research aims to provide an explorative review of online innovation contests and of how to effectively design, manage and organize this specific open innovation tool.

I would like to thank my supervisor, Dr. K.R.E. Huizingh, for his advice, patience and guidance throughout the entire research process. Finding the optimal path for researching an interesting academic subject is not always easy. I have learned that research is mostly an iterative process of continuously changing and fine-tuning the research design and findings and putting them into context and perspective. Dr. Huizingh helped me narrow the subject down, choose a worthwhile research path and stay on track.

Secondly, I want to thank Prof. Dr. Bijmolt for acting as my second supervisor.

Last but not least, I would like to thank my girlfriend Marijke and my parents for their support. They have kept me motivated during this final stage of my studies.

I hope you enjoy reading my thesis.

Lars van der Meulen Sauwerd, August 2015

(…) the world is becoming too fast, too complex and too networked for any organization to have all the answers inside.

- Yochai Benkler, Professor at Harvard Law School and author of The Wealth of Networks.


EXECUTIVE SUMMARY

Competition is an important driver of innovation. For this purpose, online innovation contests (OICs) have become increasingly popular owing to the growing use of the internet and the adoption of social media tools. By creating an OIC as a platform for ideas, an organizer obtains access to the knowledge and creativity of a large number of participants by offering possibilities for interaction and cooperation. The organizer then faces several challenges, the main one being two-fold: organizers have to motivate people to enter a contest, and they have to set conditions that ensure participation and the development of useful ideas.

After thorough research on crowdsourcing, open innovation and innovation contests, this paper examines the effects of the management and design variables of online innovation contests with the aim of maximizing their success. In the proposed model, the effects of multiple variables on performance and effectiveness have been deduced from theory and evaluated through statistical analysis. Data have been collected from the online innovation contest website crowdSPRING, where hundreds of competitions are organized on a wide range of creative subjects (e.g. logo, web, graphic, industrial and packaging design, and writing projects). These contests help organizers find a solution by appraising participants' entries and selecting the best ideas.

The analysis of the data in this research has provided insight into how specific design and management variables correlate with the performance of online innovation contests. All variables included in this research showed a positive influence on performance, although they varied in significance: offering the prospect of higher rewards attracted far more contest watchers and participants than composing a simple-to-understand description of the contest subject or maintaining a high reputation score as organizer. This suggests that when organizing an online innovation contest, each factor or variable should be weighted differently to maximize results, although other mediating variables may also play a role. Future research in this direction is proposed: expanding the model with other potential mediating variables, and ascertaining and extending the proposed data collection model by examining other platform websites as well. Moreover, future research should examine whether promotion across other social media platforms (e.g. communication and discussion through Facebook, Twitter, Google+, LinkedIn) influences participation and helps to acquire ideas from a larger or different group of contributors.

Keywords: innovation contest, tournaments, problem solving, crowdsourcing, open innovation, OICs, crowdSPRING, crowdsourced innovation


TABLE OF CONTENTS

Executive Summary
1. Introduction
1.1 Studies of online innovation contests
1.2 Main research question
2. Research design
2.1 Scientific contribution
2.2 Research questions
2.3 Definitions
2.4 Research strategy
2.5 Research model
2.6 Methodology
2.6.1 Literature review
2.6.2 Case study: statistical analysis
2.7 Introducing crowdSPRING
3. Theory
3.1 Open innovation and crowdsourcing: ideation
3.2 Defining online innovation contests
3.3 Performance of online innovation contests
3.3.1 Evaluating the quality of ideas
3.4 Design of online innovation contests
3.4.1 Rewards, duration and complexity
3.5 Management of online innovation contests
3.6 Control variables of online innovation contest platforms
3.7 Conceptual model
4. crowdSPRING: Data collection and analysis
4.1 crowdSPRING
4.2 Data collection model
5. Conclusion & discussion
5.1 Conclusion
5.2 Discussion


1. INTRODUCTION

The last decade has been characterized by the use of the internet as a source of innovation. During this time company-centered innovation has moved towards a more open innovation approach. This has offered possibilities for companies to expand innovation activities across organizational boundaries and to increase chances for gaining competitive advantages by using the knowledge and creativity of people outside the organization. Moreover, the principle of competition has long been conducive to innovation.

In recent years, a large number of online innovation contest (OIC) websites have been developed for this purpose. OICs are contests on the internet where people - seekers - can post innovative projects and where others - solvers - can contribute by offering and suggesting business solutions (Yang et al., 2009). These contests can be hosted by third parties (e.g. Topcoder, Innocentive, Ideedock and CrowdSPRING) or organized by companies on dedicated websites (e.g. Dell's IdeaStorm). Describing and presenting a project problem in contest form is a relatively cheap and quick activity. However, the experience of seekers organizing online innovation contests shows that the management and evaluation process can take a lot of time, is often difficult and can be very costly (Jouret, 2009). Contests on these platforms usually start with the description of an innovation problem, deciding which reward(s) can be won, setting the contest runtime and determining the eligibility of the contest solvers, which is constrained by the choice of platform. These platforms have the objective of generating ideas and finding solutions for product, process or company-wide innovation, or a combination of these. The challenge for organizations in designing and managing these online competitions is two-fold: motivating people to participate and setting conditions that ensure a high quality of ideas (Witt et al., 2011).

The structure of this paper follows a logical flow by discussing and examining the design, control and management variables for OICs separately. Chapter 2 elaborates the research design. Chapter 3 discusses the scope of the research and the position it aims to fill in the existing scientific literature; it forms the theoretical basis of this research, from which a data collection model is constructed. The empirical part of this research examines data obtained from the website crowdSPRING.com through statistical analyses. Chapter 4 provides a summary of the research findings. Conclusions are drawn and elaborated in the final chapter (chapter 5), where suggestions for implementing and organizing OICs are also discussed, together with the limitations of this research and opportunities for future research on variables that are also relevant to organizing successful OICs but fall beyond the scope of this study.


1.1 STUDIES OF ONLINE INNOVATION CONTESTS

OICs offer the possibility of interaction and cooperation among participants, but the growth in practical applications contrasts with the availability of academic knowledge. Only a limited number of studies have been done on OICs. Most research focuses on two major issues: reward mechanisms, and the interaction of a (sufficient number of) solvers in a contest and their contributions. A major concern in the literature is the lack of measurable success criteria for online innovation contests (Hallerstede and Bullinger, 2010). This concern is predominantly caused by the fact that many authors focus on a single dimension of organizing and maintaining OICs successfully. This is illustrated, for example, by the research of Dahan and Mendelson (2001), who pursue an extreme value model and conclude that the final performance of a product development contest is decided by the top of the solver distribution. Terwiesch and Xu (2008) extend this model to a more general contest situation in which projects can have multiple dimensions, such as trial-and-error projects or expertise- and ideation-based projects. They conclude that a larger distribution of solvers is beneficial because it gives the seeker a higher chance of obtaining successful solutions. On the contrary, another body of research suggests that attracting a larger number of solvers decreases the likelihood of winning, which discourages solvers' efforts and eventually decreases the quality of contest ideas (Che and Gale, 2003).

1.2 MAIN RESEARCH QUESTION

Similarly, the overall review of the literature shows that existing guidelines still miss important integral design variables, which are necessary for setting up an OIC. Successful or effective idea generation occurs in those contests whose outcomes are perceived by the organizer as highly valued, useful and relevant. To complement existing research, this study focuses on the requirements for formulating and maintaining OICs and on the extent to which certain design and management variables are relevant for successful idea generation. Consequently, the main research question of this paper is:

What (integral) design and management variables influence the outcomes of idea generation in online innovation contests and how can these contests be designed and managed effectively?


2. RESEARCH DESIGN

As mentioned in the introduction, this study aims to give insight into how OICs function and how they can be designed and managed effectively. This resulted in the statement that the challenge seekers face is two-fold. They need to establish an environment in which enough participants join the contest and contribute (attracting and motivating), and they need to set conditions that ensure high-quality ideas. According to Bullinger and Möslein (2010), current studies on OICs call for exploring design elements in more detail; moreover, relationships and interdependencies need to be examined further. This motivates looking at the design and management of OICs more integrally, because these elements are inseparable when it comes to obtaining successful ideas.

2.1 SCIENTIFIC CONTRIBUTION

This research aims to consolidate relevant existing knowledge on the common ground of the open innovation and crowdsourcing literature. OICs and their contiguous theories on reward mechanisms, motivation, design attributes and management principles are scattered and isolated. Research on innovation contests has also been held back by the large differences in terminology used by many authors: innovation contests, technology contests, idea contests and innovation tournaments are all terms used for the same concept (Lampel et al., 2012). Moreover, economic modelling has dominated research on organized competitions, in which incentives and performance criteria were set and manipulated (e.g. Nambisan and Baron, 2009).

The stance taken in this research ensures an integral identification of the design and management variables that play an important role in OICs. Unfortunately, existing knowledge is insufficiently used by practitioners (Van Aken, 2007). Therefore, a broad literature review is part of this research, establishing a comprehensive data collection model to be tested in practice; case study research forms the empirical part of testing the proposed model. The conclusions drawn from the findings can provide insights for future research directions.

2.2 RESEARCH QUESTIONS

The previous paragraph has provided insight into the research gap this study aims to fill. This leads to the following sub research questions, formulated in order to answer the main research question:

Sub research questions:


II. What design variables influence idea generation in OICs, and which ones are effective in facilitating participation and improving the quality of ideas?

III. What management variables influence idea generation in OICs, and which ones are effective in facilitating participation and improving the quality of ideas?

2.3 DEFINITIONS

A few concepts need a definition before the theory and the case study are explained.

• Virtual community:
- A group of people who interact regularly on a common interest, problem or task in an organized manner over the internet (Ridings, Gefen and Arinze, 2002).

• Open evaluation / community evaluation:
- Integration of external stakeholders into the assessment of product and service ideas by means of IT (Haller, 2012). Seekers in an OIC can leverage the crowd in order to evaluate ideas or products.
- An evaluation method that represents and bundles the judgment of people who are not part of the general group of decision makers (Füller, 2010).

• Flesch Reading Ease score and Flesch–Kincaid Grade Level:
- The Flesch/Flesch–Kincaid readability tests are designed to indicate how difficult an English passage is to comprehend. There are two tests, the Flesch Reading Ease and the Flesch–Kincaid Grade Level. Although they use the same core measures (word length and sentence length), they have different weighting factors, and the results of the two tests correlate approximately inversely: a text with a comparatively high score on the Reading Ease test should have a lower score on the Grade Level test.
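Both scores are linear functions of the average sentence length (words per sentence) and average word length (syllables per word). As an illustrative sketch (the syllable counter below is a naive vowel-group heuristic; real readability tools, and the macros used later in this study, may count syllables differently):

```python
import re

def count_syllables(word: str) -> int:
    """Naive estimate: count vowel groups, dropping a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

A short, plain sentence scores high on Reading Ease and low on Grade Level; long, polysyllabic prose does the opposite, illustrating the approximate inverse correlation described above.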

2.4 RESEARCH STRATEGY

The research strategy used for this research is based on the theory of case research in information systems (Benbasat et al., 1987), which can be applied to OICs as well. Case study research in crowdsourced innovation contests has a few benefits:

1. The researcher can study the OIC in a natural setting and generate theories from practice.

2. The case method allows the researcher to answer ‘how’ and ‘why’ questions; to understand the nature and complexity of the processes taking place.


3. A case approach is an appropriate way to research an area in which few previous studies have been carried out. With the rapid pace of change on the internet and technologies, many new topics emerge each year for which valuable insights can be gained through the use of case research.

Theoretical assumptions can be tested against elements found in practice. Using the related literature on open innovation and crowdsourcing, a theoretical/conceptual model is built and used as guidance. The website crowdSPRING serves as the case study object for confronting theory with practice.

2.5 RESEARCH MODEL

The research model, illustrated in figure 1, visualizes the research process. The process starts with the introduction of the concepts related to OICs (phase 1), which are explored, discussed and examined. The exploration of these concepts consists of reading wikis, blogs and articles on open innovation, crowdsourced innovation and online contests. This creates an overview of which elements are important in organizing OICs. These elements are elaborated in the next phase, culminating in a theoretical model that is applied (and translated) to a case study (phase 3). A translation of the elements found in theory is necessary because literature and practice use different terminology, topics and themes. Confronting the findings from theory and practice forms the last phase, after which conclusions are drawn.

Figure 1: Research model

2.6 METHODOLOGY

As described in the research model, the core of this research consists of two major parts: a literature review (theory) and a case study (crowdSPRING).


2.6.1 Literature review

The articles by Chesbrough (2006), Terwiesch (2008) and Yang (2009) provided a starting point for the literature review in the next chapter. These authors are cited very often; they are the most referred to in the scientific literature on subjects about or related to 'open innovation' and 'online contests' (found using the software program 'Publish or Perish', see http://www.harzing.com/pop.htm). The main sources for the literature review are articles found via this program, via Google Scholar and Microsoft Academic, or via databases like EBSCOhost and ScienceDirect.com. Along with these sources, scientific journals, blogs, wikis and newspapers have been used. Although the internet offers an unlimited number of websites and blogs with interesting debates and observations, it is difficult to determine which sources are the most reliable; references and arguments for certain statements are often lacking.

2.6.2 Case study: statistical analysis

The website crowdSPRING, introduced in chapter 2.7 and described further in chapter 4.1, was chosen for the case study because the website itself collects a lot of data, offering the opportunity to access and obtain many qualitative and quantitative variables about its hosted contests. Case study research is suitable for theory building and explanatory research (Eisenhardt, 1989) and for explaining and describing complex phenomena (Kohn, 1997). The selection of crowdSPRING was also a pragmatic one, given the limited access to contest data on other innovation contest websites: other platforms required subscriptions and invitations, and data about the contests themselves was unavailable.

The sources of evidence used for this case study are documents, problem descriptions, feedback, scores and data for all contests with a reward set higher than $450. This minimum reward was chosen to include the contests that can be perceived as relevant. Scores from these contests can be generalized more easily, which increases validity.
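The reward threshold can be expressed as a simple filter over the collected contest records. The field names below (`reward`, `entries`) are hypothetical stand-ins, not crowdSPRING's actual data schema:

```python
# Hypothetical contest records; the real dataset holds the variables
# described above (problem descriptions, feedback, scores, ...).
contests = [
    {"id": "logo-001", "reward": 300, "entries": 45},
    {"id": "web-017", "reward": 650, "entries": 112},
    {"id": "naming-042", "reward": 1200, "entries": 203},
]

MIN_REWARD = 450  # only contests rewarded above this amount are included

relevant = [c for c in contests if c["reward"] > MIN_REWARD]
print([c["id"] for c in relevant])
```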

As discussed by Yin (1994), there are logical tests to judge the quality of any given research design. Four tests are commonly used to establish the quality of empirical research; they are also applied to the empirical part of this study:

• Construct validity: operational measures are the basis for studying concepts. The concepts of effective OIC design and management are operationalized for crowdSPRING and explained.
• Internal validity: causal relationships between the different variables are examined through statistical analysis.


• External validity: a discussion of generalization takes place in the last chapter. The research domain is publicly accessible contests; sometimes a subscription is necessary to enter.
• Reliability: the data collection procedures can be applied to other platforms if their data is publicly available.

Data have been collected by first entering the relevant variables and text into a database and MS Excel, and using macros to calculate the Flesch Reading Ease and Flesch–Kincaid Grade Level scores. The variables and calculated scores were then exported to SPSS for statistical analysis.
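A scripted equivalent of this export step could look as follows (file and field names are hypothetical; the study itself used MS Excel macros and SPSS):

```python
import csv

# Hypothetical pre-computed records: one row per contest, with the
# readability scores already calculated.
rows = [
    {"contest_id": "logo-001", "reward": 650, "reading_ease": 62.1, "grade_level": 8.4},
    {"contest_id": "web-017", "reward": 1200, "reading_ease": 48.7, "grade_level": 11.2},
]

# Write a flat CSV that a statistics package (e.g. SPSS) can import directly.
with open("contests.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["contest_id", "reward", "reading_ease", "grade_level"]
    )
    writer.writeheader()
    writer.writerows(rows)
```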

2.7 INTRODUCING CROWDSPRING

CrowdSPRING is a typical crowdsourcing contest website, co-founded in May 2007 by Ross Kimbarovsky and Michael Samson and based in Chicago. Thousands of contests are organized on crowdSPRING, and it claims to be the #1 marketplace for logos, graphic design and naming. A typical project receives 110+ entries, and more than 150,000 designers and writers from 200 countries participate in contests across four design categories (graphic design, web design, industrial design and mobile design) and four writing categories (naming, creative, business and online). The organization of a contest is a standardized process consisting of a creation phase (the seeker posts the problem description, rewards and duration), a design phase (solvers upload solutions and the seeker rates ideas) and a selection phase (the seeker picks one or multiple winners).

In some cases a winner cannot be determined and the reward (in $) is not granted. Some contests are marked 'reward assured', meaning the winner will be granted the reward even if the quality of the ideas falls below the seeker's expectations. Creatives (contributors/solvers) from around the world (over 100,000 from 200 countries) submit actual work; seekers choose from among actual work, not bids and proposals.
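The standardized contest lifecycle described above can be sketched as a three-phase sequence (the names are illustrative, not crowdSPRING's implementation):

```python
from enum import Enum
from typing import Optional

class Phase(Enum):
    CREATION = 1   # seeker posts the problem description, reward and duration
    DESIGN = 2     # solvers upload entries; the seeker rates them
    SELECTION = 3  # the seeker picks one or more winners

def next_phase(phase: Phase) -> Optional[Phase]:
    """Advance to the next phase; None once selection is finished."""
    order = list(Phase)
    i = order.index(phase)
    return order[i + 1] if i + 1 < len(order) else None
```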


3. THEORY

The exploration of the innovation subjects from which innovation contests derive resulted in a clear starting point for the literature review and the development of theory. This chapter focuses specifically on the effects of the design and management variables of OICs, elaborated from different theories on open innovation, crowdsourcing, co-development and community building. First, online innovation contests are discussed and described, along with the choices organizations have to make in order to use OICs for innovation successfully. Secondly, the design variables of OICs are reviewed. Thirdly, the managerial variables of OICs are discussed. Finally, the control variables, which are the basic conditions under which a contest takes place, are explained; these control variables help in determining the generalization of outcomes across contest categories, organizer scores and user diversity. The objective of this elaboration is to develop a guiding set of predictions to explore the nature of organizing effective OICs.

3.1 OPEN INNOVATION AND CROWDSOURCING: IDEATION

Developing new ways to innovate is no longer optional for organizations; it has become imperative for growing and gaining competitive advantages. Tapping into different sources beyond organizational boundaries to enhance innovation, and the trend of business evolving towards a more open business model, has been called 'open innovation', a term first introduced by Chesbrough (2003). His definition is often used: the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and to expand the markets for external use of innovation, respectively. It indicates that organizations can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology (Chesbrough, 2006). Tools that support open innovation activities, e.g. innovation markets, communities, toolkits, technologies and contests, are well known and increasingly used by organizations (Hülsmann and Pfeffermann, 2011).

Several developments, such as the rise of new technologies (Dahlander and Gann, 2010) and the internet, have made it possible for organizations to innovate, develop virtual communities and cooperate across organizational boundaries using the 'wisdom of crowds'. Not only the barrier of distance but also the cost of collaboration has become insignificant, and people are no longer solely passive users of information. The act of taking a task traditionally performed by a designated agent and outsourcing it through an open call to an undefined but large group of people has become known as 'crowdsourcing' (Howe, 2008). As communication and social media tools become more widespread and easier to use, they seem to offer even more opportunities for crowdsourcing. Using the internet and its emerging technologies, an organization gains access to knowledge and a method to generate a large number of solutions in a relatively short amount of time (Archak and Sundararajan, 2009). Modern innovation processes require firms to master highly specific knowledge about different users, technologies and markets. The way organizations draw knowledge from external sources has been extensively characterized by Laursen and Salter (2006) through two concepts that define the openness of organizational search strategies: external search breadth and external search depth. They argue that deep search, building long-term relationships with a small number of external knowledge sources, plays a crucial role in promoting innovation. This openness has led to a new paradigm of innovation beyond the boundaries of the organization: open innovation.

The paradigm of open innovation assumes that organizations can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology. It means innovating with partners by sharing both risk and reward. The boundaries between a company and its environment have become more open; the internet forms a platform on which an organization can engage with others to communicate, collaborate, share and learn. While customer interaction has always been important in new product development (von Hippel, 1988), the widespread deployment of the internet has improved the capability of organizations to engage with others in R&D processes (Dahan & Hauser, 2002). In contrast, a study by Berchicci (2013) shows that organizations that extend the boundaries of their R&D configuration by engaging in external R&D activities need to balance the benefits of tapping into external sources against the costs of searching, coordinating and monitoring linkages; these costs therefore play an important role in the decision to engage in crowdsourced innovation. Likewise, Dunlap-Hinkler et al. (2010) show that breakthrough innovations are more likely to come from joint attempts.

Figure 2: The overlap between Open Innovation and Crowdsourcing

Open innovation intertwines with the concept of crowdsourcing where the input for innovation includes activities aimed at getting ideas and solutions from external sources; crowdsourcing can be a means of open innovation, and open innovation can be a reason for crowdsourcing (see figure 2). Open innovation is the concept of allowing information to flow out and in, and it does not in itself imply anything about the number of contributors, which could be limited to a select few or unlimited. With crowdsourcing, the organization sends out information about a problem/challenge/contest while allowing the inflow of suggested solutions/developments from an unlimited number of contributors. Crowdsourcing does not necessarily involve innovation, although 'ideation' is one of the easier types of work that can be outsourced to crowds.

Crowdsourcing occurs when an organization outsources projects to the public: the organization decides to tap into the knowledge of a wider crowd, and input is sourced from a large and diverse group of people. Crowdsourcing requires a lower level of engagement and involvement than open innovation. An organization using crowdsourcing sets a challenge to the public and asks for opinions, insights and suggestions. It is an open call whereby the organization solicits solutions from the crowd rather than genuine contribution and collaboration. Open innovation and co-creation imply a stronger involvement from the stakeholders included in the creation process.

Kleemann et al. (2008) grouped crowdsourcing by its forms of application. Competitive bids on specifically defined tasks or problems is one of the crowdsourcing types Kleemann et al. (2008) discuss, and it is also the form that entails implementing a system that encourages competitiveness in order to accelerate ideation. In this respect, successfully involving contributors who deliver relevant solutions is the main challenge in organizing innovation contests.

Crowdsourcing type — Illustration of application

1) Participation in product development and configuration — Dell's "IdeaStorm": a call for comments and suggestions regarding the company's entire product palette.

2) Product design — Fiat: a call announced by the auto manufacturer for its new Fiat 500 resulted in ten million clicks, 170,000 designs from (potential) consumers, and 20,000 specific comments on things like particular exhaust pipe forms.

3) Competitive bids on specifically defined tasks or problems (crowdsourcing contests) — Innocentive.com: over 80,000 independent scientists solve R&D challenges for Fortune 500 companies.

4) Permanent open calls — CNN: use of "amateur reporters" who submit photos or short articles for publication or broadcast.

5) Community reporting — Trendwatching.com: individuals are asked to notify the company of any observable changes in market supply or consumer demand; the information is used commercially for market reports.

6) Product rating by consumers and consumer profiling — Amazon.com: customers submit unpaid reviews of the products it sells.

7) Customer-to-customer support — Nike: users can upload their running times via their iPods and then use this data to engage in various competitions with other users.


The internet and its virtual environments enhance an organization's capacity to use the social dimension of customer knowledge by enabling the creation of virtual communities of consumption (Kozinets, 1999). Furthermore, they increase the flexibility of customer interactions and the level of involvement (Hagel & Singer, 1999). By offering challenges and transforming potential customer groups into active partners in R&D, ideas can lead to innovative solutions, and online innovation contests can be used to enhance and reward this user involvement (Yang et al., 2011). In conducting OICs, companies aim to integrate customers into the early phases of the innovation process; OICs are thus a method to expand the source of potential new ideas.

Figure 3: Simplified illustration of attraction and facilitation in innovation contests (Möslein, 2012)

Research on innovation contests is scarce, but researchers underline the importance of examining their effective design and management. Examining both attraction and facilitation of innovation contests remains an ongoing effort: although existing research has identified the innovation contest as a suitable instrument for realizing open innovation, it has disregarded how to systematically attract people to innovation contests and how to appropriately facilitate participants in order to achieve relevant innovations. Closely related theories on collective intelligence, creativity and idea tournaments contribute to knowledge about the functioning of OICs. Collective intelligence is a form of mass collaboration. There are four principles of collective intelligence that can also assist in organizing OICs:

1. Openness

Sharing ideas and intellectual property: allowing others to share ideas and gain significant improvement and scrutiny through collaboration.


2. Peering

Horizontal organization where users are free to modify and develop. Peering succeeds because it encourages self-organization – a style of production that works more effectively than hierarchical management for certain tasks.

3. Sharing

Companies have started to share some ideas while maintaining some degree of control over others, like potential and critical patent rights. Limiting all intellectual property shuts out opportunities, while sharing some expands markets and brings out products faster.

4. Acting Globally

The advancement of communication technology has led to collaboration without geographical boundaries and opens access to new markets, ideas and technology.

3.2 DEFINING ONLINE INNOVATION CONTESTS

Before discussing design, management and control variables for OICs, a clear definition of innovation contests must be established. Research shows that innovation contests can be a good alternative to traditional new product (or service) development initiatives for generating new ideas.

OICs come in many formats, with variations in subject, reward structure, rating system, and entry restrictions. InnoCentive, founded in 2001, was the first online marketplace to host open innovation projects in the form of contests (Allio 2004). It was built to help find innovative medical solutions. Nowadays, a wide range of projects is posted on InnoCentive, ranging from website logo design to algorithm design and manufacturing projects.

In the scientific literature, authors define innovation contests differently but largely in correspondence with each other. Several authors give definitions of online innovation contests that are mostly based on the theory of 'crowdsourcing', of which online innovation contests are the most popular form (Archak & Sundararajan, 2009). The most common and cited definitions of OICs are:

I. Online innovation contests are contests on the internet where people can post innovative projects - seekers - and where others - solvers - can contribute by offering and suggesting business solutions (Yang et al. 2009).


II. Online contests are contests where an organization posts a problem online, a vast number of individuals offer solutions to the problem, the winning ideas are awarded some form of a bounty, and the organization mass produces the idea for its own gain (Brabham, 2008).

III. An online (innovation) contest is a web-based competition of innovators who use their skills, experiences and creativity to provide a solution for a particular contest challenge defined by an organizer (Bullinger & Möslein, 2010).

Common to these definitions is that the contests take place online. This is an important difference from 'offline' contests and has consequences for feedback and participation. One distinguishing feature is that online contests allow players to compete dynamically instead of simultaneously (Yang, 2010; see Table 2).


Distinctions | Traditional Contest | Online Contest

Feedback
- Traditional contest: it is usually assumed that there is no communication between seekers and solvers, so there is no feedback. Consequence: each solver has identical information and will only exert equilibrium effort.
- Online contest: seekers may send feedback to preferred solvers. Such feedback may indicate changes desired by the seeker and signal to the solver that his solution is preferred. Consequence: some solvers may perceive a higher incentive due to an increased probability of winning and will likely increase their effort.

Participation
- Traditional contest: when a contest starts, a certain number of solvers compete simultaneously and the number of competitors is known to every solver. Consequence: more solvers, lower equilibrium effort.
- Online contest: solvers enter and submit solutions at different times; it is a dynamic process. Consequence: the final number of solvers is uncertain but balanced by the contest design.

Table 2. Traditional Contest vs. Online Contest (Source: Yang et al. 2010)

Although Tidd & Bessant (2009) state that there is a difference between 'innovation contest' and 'idea contest', noting that the latter does not cover the entire innovation process from idea creation and concept generation to selection and implementation, several other authors do not make such a clear distinction and also use the terms idea and design competitions for innovation contests (Leimeister et al. 2009; Piller and Walcher 2006). In this paper the broader concept of online innovation contests will be applied, including online idea challenges and design competitions as innovation contests.

In summary, this paper will use a combination of the definitions for OICs described above:

An online innovation contest is a web-based competition of solvers who use their skills, experiences and creativity to provide a business solution for a particular contest challenge defined by a seeker (an organizer), in which the winning ideas are rewarded by monetary or non-monetary rewards.

From this definition the process of an innovation contest can be deduced and explained. The process of an online innovation contest is usually as follows (Brabham, 2008):

1. The seeker posts a problem or task online;
2. A vast number of individuals (solvers) offer solutions to the problem or task, using their skills, experiences and creativity (enabled by the internet);
3. The winning ideas are awarded some form of bounty (often prize money or reputation);
4. The organizer mass produces the idea for its own gain.

Figure 3 shows a similar process, but here subscribing to the OIC is necessary to submit solutions.

Figure 3: The dynamic timeline of online contests (Yang, Cheng & Banker, 2011)

The seeker starts a contest by describing the problem and setting a duration within which solvers can propose ideas. Participation in each contest is on a voluntary basis.

In the first two phases the project (challenge) is described and contributors are invited to enter the contest. In some cases no one enters the project, and new rewards and revised information are presented. Furthermore, the organizer can intensify its participation in a contest if it receives too little attention. The submitted ideas are reviewed and rated by the organizer, a review committee or participants, and a winner will eventually, after optional feedback rounds, be selected (Leimeister et al., 2009).

This framework of dimensions and specifications in innovation contests is a reference point for a systematic approach to the evaluation of OICs. The next paragraphs elaborate on the performance, design and management of OICs.

3.3 PERFORMANCE OF ONLINE INNOVATION CONTESTS

The quality of the outcomes of online competitions is evaluated by their organizers. The feedback system, the quantity of ideas and the quality of ideas are often determined by a rating system set up by the platform creators. The competitive character inherent in an OIC encourages participants to produce a winning idea that is innovative and unique. However, the selection of the most promising submissions in OICs is still an unsolved task (Möslein et al. 2010) and can be difficult. The literature on open innovation discusses whether openness is always beneficial (Chesbrough and Appleyard, 2007). Negative side-effects such as the disclosure and sharing of strategic information might damage the reputational effectiveness of an organization (Bond et al. 2004). Other researchers (e.g. Praest Knudsen & Bøtker Mortensen, 2011) have found that open innovation might lead to worse time to market and slower, more costly processes. In addition, some argue that breakthrough innovations are less likely to come out of OICs than from internal R&D sources. However, OIC performance mostly relies on understanding solver responses and obtaining, from a large group of contributors, sufficient solutions that meet the seeker's expectations. In this research the solutions that are contributed are easily quantifiable. By selecting the most promising ideas through ranking and scoring, the contributors with the best chances of winning are identified. The number of wins in past performance is a good predictor of a solver's future winning probability (Yang et al., 2010).

3.3.1 Evaluating the quality of ideas

Evaluating the quality of ideas is the task of the organizer. The number of incoming ideas, debates and votes is quantifiable and measurable, as is the number of contributors who submitted ideas and solutions, commented and voted. Defining the quality of ideas, comments and dialogue is more difficult, and most organizers use comments, rankings and 'star ratings' to determine the winners.

The first evaluation of the quality of ideas is often done by users voting and commenting on published ideas. The performance of the OIC is decided by the best several solutions, i.e. the extent to which the submitted ideas meet the organizer's business solution requirements (Dahan and Mendelson 2001).

Rewards form an important incentive for contributors to participate in a contest and for winner determination. These rewards can be monetary or non-monetary. By pairing a clear, measurable end goal with a variety of motivators, well-designed rewards leverage the principles of competition to bring a field of solvers to contribute to a contest. The effectiveness of rewarding problem-solvers depends on several circumstances and on the organizer's goal.

3.4 DESIGN OF ONLINE INNOVATION CONTESTS

The design of the contest depends on the strategic intentions and can occur along multiple avenues. A good online contest design should take solver behavior into account, and it is important to understand the strategic interactions between solvers (Yang et al. 2011). The probability of solvers entering a contest is improved by meeting solvers' intrinsic motivations as well as offering extrinsic rewards (Borst, 2010). Intrinsic motivation is stressed in combination with social motivation, covering positive community feedback, reputation among relevant peers, and self-realization (Füller, 2006). Extrinsic rewards can be monetary or non-monetary.

Several researchers have developed frameworks of design criteria for online innovation contests (e.g., Ebner, Leimeister & Krcmar, 2009; Leimeister et al., 2009). Such design characteristics are, among others, the contest period (minutes, days, weeks, months), the reward system (monetary vs. non-monetary), the community functionality (with or without community interaction), the task specificity (from very open to very specific), the degree of elaborateness (simple vs. more elaborate descriptions), the targeted participants (qualified or not qualified), the evaluation form (performance- or participation-oriented) and the evaluation criteria (e.g., novelty, feasibility, usefulness).
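The design dimensions listed above can be made concrete as a contest configuration object. The sketch below is purely illustrative: the field names and example values are assumptions, not an actual platform schema.

```python
# Illustrative sketch: the design dimensions of an OIC as a configuration
# object. Field names and values are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class ContestDesign:
    period_days: int              # contest period (minutes to months)
    reward_monetary: bool         # monetary vs. non-monetary reward
    community_interaction: bool   # community functionality on or off
    task_specificity: str         # from "open" to "specific"
    elaborateness: str            # "simple" vs. "elaborate" description
    participants: str             # "qualified" or "unqualified"
    evaluation_form: str          # "performance" or "participation" oriented
    evaluation_criteria: tuple    # e.g. novelty, feasibility, usefulness

# Example: a short, specific design-type contest with a cash prize
logo_contest = ContestDesign(
    period_days=14,
    reward_monetary=True,
    community_interaction=True,
    task_specificity="specific",
    elaborateness="simple",
    participants="unqualified",
    evaluation_form="performance",
    evaluation_criteria=("novelty", "feasibility", "usefulness"),
)
```

Encoding the dimensions this way makes explicit that each contest is one point in a multi-dimensional design space that the organizer fixes before opening the contest.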

The importance of supporting participants with a very high degree of cooperative orientation is emphasized by Bullinger et al. (2010); this leads to highly innovative outputs and the establishment of a community of participants. Furthermore, if an organizer is mainly interested in highly innovative single-person submissions, the contest should be designed to support a high degree of competitiveness. Competitiveness can be stimulated by making rankings visible to other solvers, prompting them to revise and improve their contributions based on entries that received higher scores. Third, if an organizer is interested in both highly innovative submissions and community building, both very low and very high cooperative orientation should be supported. In general, contests do not work well in situations where the participants' performances are interdependent.

Additionally, the design of the virtual interaction has to be tailored to its participants and to the development (sub)task(s) transferred to them. There is no single best solution for the design of the interaction as it depends on the specific context (Füller et al., 2004). In fact, many online contest platforms are configured to enable participants to form teams and merge their efforts. Some design-contest platforms enable the sponsors to run completely open design-contests in which all entries are visible to all competitors, allowing for rapid learning.

The OIC can be viewed as an aggregation tool to achieve collective intelligence in problem solving. In practice, the power of collective intelligence is best applied to idea generation and evaluation (Bonabeau, 2009), which is important, given that a successful innovation project needs to first generate numerous good ideas or solutions, and to then evaluate those solutions to identify those that are 'exceptional' (Terwiesch and Ulrich 2009). Formulating the innovation problem to be solved is crucial for innovation contests. The main question that arises is how an organizer can formulate the problem in such a way that the description motivates competent solvers to participate and does not reveal the organizer's own competence deficits or strategic innovation plans, while still being clear enough to deliver relevant information for its own innovation activities and internal innovation processes.

For creating and applying selection criteria for these ideas or solutions, Terwiesch and Ulrich (2010) also discuss the need to implement filters to find and select the winners in a competition: absolute filters to make sure core criteria for success are adhered to, and relative filters to let the best contributors win out.
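The two filter types can be combined in a simple selection pipeline. The sketch below is a minimal illustration: the rating field, the minimum-stars threshold, and the top-k cutoff are assumptions, not Terwiesch and Ulrich's actual parameters.

```python
# Sketch of a two-stage selection pipeline: an absolute filter (core success
# criterion) followed by a relative filter (only the top-k entries win).
# Field names and thresholds are illustrative assumptions.

def select_winners(entries, min_stars=3, top_k=3):
    # Absolute filter: every winner must meet the core rating criterion
    qualified = [e for e in entries if e["stars"] >= min_stars]
    # Relative filter: of the qualified entries, only the best top_k win out
    return sorted(qualified, key=lambda e: e["stars"], reverse=True)[:top_k]

entries = [{"id": i, "stars": s} for i, s in enumerate([1, 5, 3, 2, 4, 3])]
print([e["id"] for e in select_winners(entries)])  # [1, 4, 2]
```

The absolute filter guards quality regardless of the field's strength; the relative filter guarantees a bounded number of winners even when many entries qualify.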


Usually, an organizer dedicates the contest to a specific topic, the details of which vary extensively. Each innovation contest is set to run for a limited period of time; during this contest period participation is allowed. Contest periods range from a few hours to more than four months, or ongoing contests (Bullinger & Möslein 2010).

In summary, designing an OIC encompasses setting a predefined duration, reward and evaluation method, and describing the challenge presented and its goal.

3.4.1 Rewards, duration and complexity

The rewards of an online competition are an important factor in motivating solvers to enter. Incentives are part of presenting a problem and making it worthwhile for solvers to come up with possible solutions.

Four design characteristics are leading in determining how to set a reward (Wagner, 2011):

1. Value in attracting multiple solutions;
2. An understandable explanation of the problem to contributors and spectators;
3. Clearly set and maintained criteria for playing and winning;
4. Incentives in accordance with the organizer's goals.

The main intuitive message from existing models is as follows: in a winner-takes-all contest with only one participant, the contestant has little incentive to exert effort to improve their work because there are no parties against whom they will be evaluated. Thus, adding some minimum level of competition should lead to greater effort (Harris and Vickers 1987).

Following the conclusion of Erkal and Xiao (2014), how the optimal award level depends on the distribution of ideas critically depends on the market value of the innovation. If the market value is not very sensitive to the quality of the solution, then as the scarcity of good ideas increases, the optimal award level increases. However, if the market value is quite sensitive to the quality of the solution, then as the scarcity of good ideas increases, the optimal award level decreases.

Furthermore, the chances of winning a reward diminish when a greater number of solvers enter the contest. This is illustrated by Aghion et al. (2005), who show a 'U'-shaped relation between competition and incentives. Organizers counteract this phenomenon by promising multiple rewards.

Research suggests that innovation contests with higher awards, lower time costs, shorter problem descriptions, longer duration and greater popularity will attract more participants to the underlying innovation task. Longer duration and higher popularity ('trending problems') capture more solvers. However, seekers in innovation contests have to be able to provide a clear description of the problem. The uncertainty of an innovation problem description suggests that the likelihood of entering competitions is larger when problems are more complex (Boudreau et al. 2009), which often coincides with offering higher rewards. This diminishes the reducing effect of greater rivalry on the incentive to exert effort and make investments. Thus, uncertainty, rewards and the nature of the innovation problem should be explicitly considered in the design of innovation contests. Moreover, if the problem is highly complex, it is not suitable for an innovation contest (Möslein, 2012).

3.5 MANAGEMENT OF ONLINE INNOVATION CONTESTS

The involvement of the organizer, coinciding with usable feedback, timing and activity, can help in getting solvers involved in OICs. The facilitation of OICs is necessary to help participants understand the problem and to address the participants' specific needs. Facilitation means managing people and the process of submitting ideas, avoiding participants turning into opponents (Collins, 2007). Schepers, Schnell, and Vroom (1999) identified success factors of innovation contests, including a sense of urgency, the right moment, management commitment, awards, evaluation and feedback. In correspondence with the research of Ebner et al. (2009), they found that the right communication instruments, motivating elements and trust-supporting elements play an important part in the success of innovation contests. Innovation contests often have a large number of registered solvers who do not contribute (Yang et al., 2008). They also conclude that most users become inactive after a few submissions and optimize their win-to-submission ratio (users make strategic choices). It is imperative to activate and motivate the core group of users, because 80% of all solutions are submitted by this group.

3.6 CONTROL VARIABLES OF ONLINE INNOVATION CONTEST PLATFORMS

In order to standardize innovation contests in terms of form, description and evaluation, an innovation contest platform (website) provides a structure to capture ideas and to let organizers present their problem. In fact, it is this governance system that defines and administers the rules by which the competition will be decided. Each competition can be categorized, and each category has its own target group. These structural choices are predetermined by the platform and form the constraints under which each competition takes place. The platform needs to strike a balance between value creation for organizers and community values. The innovation contest platform should therefore be deployed with caution and attention to its governance mechanisms (Boudreau and Lakhani, 2009).


Moreover, organizers' choice of a certain platform and its characteristics can influence how various online media instruments are used, relying on social network ties in order to reach a critical mass of participants.

3.7 CONCEPTUAL MODEL

The previous paragraphs were intended to shed light on different aspects of OICs and to sketch the context of the research questions. The conceptual model is the summarized outcome of examining the variables which influence the results of OICs.

Figure 4: Conceptual model

This conceptual model has been created based on the literature review. It represents the interaction between design, management and control variables; all should positively influence the effectiveness and performance of OICs in order to increase the success of OICs.

Design variables: variables which are part of composing an individual competition on an OIC platform. The organizer sets the conditions and constraints before opening the contest to the public.

Management variables: variables which are part of maintaining and supporting participants of an OIC.

Control variables: variables which are part of defining the governance system of an OIC. It administers the rules by which the competition will be decided and identifies the role and reputation of the organizer (the seeker who determines the winner).


4. CROWDSPRING: DATA COLLECTION AND ANALYSIS

An innovation platform was chosen for analyzing the relation between management and design variables and project performance because its data is publicly available. This research studies one crowdsourcing contest website, chosen to evaluate the different factors that influence the outcomes of online innovation contests.

4.1 CROWDSPRING

The important elements discussed in the literature review are also integrated and used on the previously introduced platform crowdSPRING. This contest platform collects statistics on every challenge and shows them in different graphs and tables. The data are open to each subscriber.

Data on the participation of seekers and solvers, the rating of ideas, the duration of the contest, the reputation of the seeker, the satisfaction score of the seeker and the number of comments are publicly available to users who subscribe to the website (see figure 5). Furthermore, the data of each organizer are collected and shown on his or her profile (see figure 6). This enables organizers to present themselves and to work on their 'satisfaction score'. This score is determined by how fast organizers respond to solvers' contributions and helps motivate organizers to be more active on the website.

These data have been used in statistical analyses to ascertain the correlation between management and design variables and the quality of ideas (rating). The results are used to determine the degree of influence on project quality and performance. All contests are listed with contest title, category, number of entries, duration and reward (and whether the reward is assured or not) (figure 7).


Figure 5: crowdSPRING Project statistics Figure 6: Organizer statistics

Figure 7: crowdSPRING contest categories, profile stats and project stats

When contest creation on crowdSPRING is completed, the contest starts and is open for solvers to contribute. All contributions are shown in the gallery, and the organizer can rate and comment on all entries. In this stage the goal is to actively communicate with solvers and to agree or disagree on the quality of the ideas that have been posted. This communication takes place by sending text messages, responding to posted entries, posting feedback on the creative brief page and rating ideas. The duration of each contest differs because the complexity and requirements of the organizer vary.

4.2 DATA COLLECTION MODEL

A clear distinction has been made between the variables that can be obtained from crowdSPRING. These have been listed under the main categories of design, management and control variables for an OIC. The variables have been translated into the characterization of different elements on the website (see figure 5).

Figure 5: Data collection model of Online Innovation Contests; applied to crowdSPRING.com

PERFORMANCE VARIABLES
- Quantity of Participation: # of ideas submitted, # of votes, # of comments, # of watchers
- Quality of Participation: quality of the ideas, quality of comments, quality of dialogue
- Idea rating: star index, entries marked 'no thanks'

CONTROL VARIABLES
- Idea - user diversity
- Contest category: Logo, Print Design, Website, Illustration
- Organizer satisfaction score

ONLINE CONTEST DESIGN & FORMAT VARIABLES
- Incentives: amount (in cash), reward assurance
- Contest duration (set by organizer)
- Contest description in creative brief: complexity/readability, description elaboration

MANAGEMENT VARIABLES
- Organizer activity and timing: # organizer comments on creative brief, average time it takes to rate ideas, percentage of answered questions posted under submitted ideas, # contests on platform
- Organizer participation score


VARIABLES | DESCRIPTION | MEASUREMENT / INDICATOR

Online contest design & format variables
Incentives | Cash amount: what cash reward did the organizer set for the contest? | Dollar ($) amount
Incentives | Reward assurance: is there a guaranteed contest pay-out? | Yes/no
Contest duration (set by organizer) | Time of contest duration set by the organizer, in days | Days between the contest start date and the closing date, determined by the organizer
Contest description | Complexity/readability: Flesch Reading Ease Score / Flesch-Kincaid Grade Level | 0-100 point scale / grade level score (students of which grade are able to understand the text)
Contest description | Description elaboration: length of creative brief (including template and questions set by website) | # of words

Control variables
Idea - user diversity | Uniqueness of ideas and users: do all ideas come from the same user or are they all submitted by different users? | Diversity index of contributors
Contest category | In which category has the contest been placed (set by website's format)? | Logo / Print Design / Website / Illustration
Organizer satisfaction score | What satisfaction score does the organizer have? | 0-100, as displayed on the website

Management variables
Organizer activity and timing | Number of comments to clarify the creative brief, time to give feedback on ideas, time it takes the organizer to answer questions, and the organizer's activity on the whole platform | # organizer comments on creative brief; average time between idea posting and rating (in days, hours, minutes); % of answered questions posted under submitted ideas; # contests the organizer has made available on the platform at the moment of data collection
Organizer participation | Participation score | High / medium / low (determined by website's format)

Performance variables
Quantity of Participation | Frequency of participation | # of ideas submitted per contest; # of votes per contest; # of watchers per contest
Quality of Participation | Quality of the ideas, comments and dialogue: do they improve over time? | See 'idea rating'; posting date of idea versus idea rating
Quantity of social media links (social media engagement) | Has social media been used (e.g. Facebook and Twitter) to communicate/share the contest with others? | # of Tweets on Twitter; # of Likes on Facebook
Idea rating | Star index (website scoring mechanism) | 0-5 star ideas (0 = not rated yet)
Idea rating | Entries marked 'no thanks' | # of entries marked 'no thanks'

Table 3: Description of management, design and control variables in OICs, applied to crowdSPRING.com

Table 3 gives a more detailed description of the variables collected from crowdSPRING. In the next paragraph the data on these variables are analysed.
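The creative-brief complexity variable relies on the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. The sketch below computes both; the syllable counter is a rough heuristic (real readability tools use dictionaries and better rules), so results will differ slightly from commercial implementations.

```python
# Sketch of the two readability scores used for the creative-brief
# complexity variable. The syllable counter is a crude vowel-group
# heuristic, included only to make the example self-contained.
import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, dropping a trailing silent 'e'
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    flesch_ease = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    return flesch_ease, fk_grade

ease, grade = readability("The cat sat on the mat. It was warm.")
```

Higher Reading Ease means a simpler brief (the 0-100 scale in Table 3), while the grade level approximates the school grade needed to understand the text; very simple texts can even yield a negative grade.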

4.3 RESULTS

The analysis of the data provides an overview of the influence of several performance indicators. Correlation and regression analyses have been carried out on the measured design and management variables of the OIC. The basis for this analysis is the model described in the previous paragraph. Finally, a multiple regression analysis has been performed.

All challenges with a reward higher than $450 have been taken into account; the statistical data have been collected and analyzed with a database (MySQL), MS Excel and SPSS. Competitions with a reward lower than $450 are regarded as less significant to users and organizers. In this research the variables were selected to determine their impact on idea quality, quality of participation and quantity of participation. Because N > 50 + 8m (m being the number of included independent variables), multiple regression analysis can be used. Duration has been standardized by subtracting the start and end time UNIX timestamps (see https://en.wikipedia.org/wiki/Unix_time).
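The two preprocessing steps described above (filtering on the reward threshold and deriving contest duration from Unix timestamps) can be sketched as follows; the field names are illustrative assumptions, not crowdSPRING's actual export format.

```python
# Sketch of the preprocessing described in the text: drop low-reward
# contests and convert Unix-timestamp differences into a duration in days.
# Field names are illustrative assumptions.

SECONDS_PER_DAY = 86_400

def preprocess(contests, min_reward=450):
    kept = []
    for c in contests:
        if c["reward_usd"] < min_reward:
            continue  # rewards below the threshold are excluded
        duration_days = (c["end_ts"] - c["start_ts"]) / SECONDS_PER_DAY
        kept.append({**c, "duration_days": duration_days})
    return kept

raw = [
    {"id": 1, "reward_usd": 600, "start_ts": 1_300_000_000, "end_ts": 1_300_604_800},
    {"id": 2, "reward_usd": 200, "start_ts": 1_300_000_000, "end_ts": 1_300_086_400},
]
clean = preprocess(raw)
print(clean[0]["duration_days"])  # 7.0
```

Because Unix timestamps count seconds since 1970-01-01 UTC, subtracting them gives an exact elapsed time that is independent of time zones, which is what makes this standardization convenient.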

All the descriptive statistics are as follows:

Descriptive Statistics participants

Variable | N | Minimum | Maximum | Mean | Std. Deviation
Number of Comments | 3756 | 0 | 3913 | 86,52 | 149,673
Number of Entries | 3756 | 0 | 4240 | 150,66 | 196,382
Number of Creatives | 3756 | 0 | 906 | 32,89 | 40,482
Number of Watchers | 3756 | 0 | 186 | 19,28 | 17,679


Descriptive Statistics ratings

Variable | N | Minimum | Maximum | Mean | Std. Deviation
@5starRating | 3756 | 0 | 109 | 3,63 | 8,242
@4starRating | 3756 | 0 | 109 | 3,63 | 8,242
@3starRating | 3756 | 0 | 584 | 22,93 | 38,699
@2starRating | 3756 | 0 | 437 | 19,28 | 35,283
@1starRating | 3756 | 0 | 1113 | 17,97 | 51,846
Number Marked with 'NoThanks' | 3756 | 0 | 3254 | 10,61 | 65,799
Number of Entries To Be Scored | 3756 | 0 | 1131 | 21,07 | 51,444
Valid N (listwise) | 3756

Descriptive Statistics organizer

Variable | N | Minimum | Maximum | Mean | Std. Deviation
AmountUSD reward | 3756 | $450.00 | $7,800.00 | $786.2572 | $407.76024
ProjectOwner Number of Projects | 3756 | ,00 | 92,00 | 3,6480 | 11,49428
ProjectOwner Timing score | 3308 | 1,00 | 100,00 | 95,1466 | 10,80475
ProjectOwner Communication score | 3308 | ,00 | 100,00 | 96,0012 | 10,12976
ProjectOwner Overall Satisfaction score | 3308 | 1,00 | 100,00 | 96,6258 | 9,01621
Number of Awards | 3152 | 0 | 26 | 1,25 | ,904
Flesch Reading Ease | 2294 | 18,60 | 100,00 | 62,7708 | 11,85194
Flesch-Kincaid Grade Level | 2296 | 1,90 | 33,10 | 7,7503 | 2,27138
Valid N (listwise) | 1559


Model Summary

Model | R | R Square | Adjusted R Square | Std. Error of the Estimate
1 | ,528a | ,279 | ,278 | 12,1924
a. Predictors: (Constant), Number of Comments, Number of Awards, Project Owner Number of Projects, Amount in USD (Prize)

Correlations (Pearson; ** = significant at the 0.01 level, 2-tailed; Sig. = ,000 for all pairs; N = 3755-3756)

Variable | Number of Comments | Number of Entries | Project Owner Number of Projects | Amount of USD (Prize) | Number of Watchers
Number of Comments | 1 | ,724** | -,061** | ,122** | ,467**
Number of Entries | ,724** | 1 | -,081** | ,175** | ,582**
Project Owner Number of Projects | -,061** | -,081** | 1 | ,070** | -,116**
Amount of USD (Prize) | ,122** | ,175** | ,070** | 1 | ,245**
Number of Watchers | ,467** | ,582** | -,116** | ,245** | 1
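Each cell in a correlation matrix of this kind is a Pearson correlation coefficient. A minimal pure-Python sketch of its computation, run on synthetic data (the numbers below are illustrative, not crowdSPRING values):

```python
# Minimal Pearson correlation sketch: covariance divided by the product of
# the standard deviations. Data are synthetic, for illustration only.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: do contests with more comments attract more entries?
comments = [10, 20, 30, 40, 50]
entries = [15, 35, 40, 80, 95]
r = pearson_r(comments, entries)
```

The coefficient ranges from -1 to 1; a value near ,724 (as between Number of Comments and Number of Entries above) indicates a strong positive linear relationship.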


ANOVAa

Model | Sum of Squares | df | Mean Square | F | Sig.
1 Regression | 181189,545 | 4 | 45297,386 | 304,714 | ,000b
1 Residual | 467669,217 | 3146 | 148,655 | |
1 Total | 648858,762 | 3150 | | |
a. Dependent Variable: Number of Watchers
b. Predictors: (Constant), Number of Comments, Number of Awards, Project Owner Number of Projects, Amount in USD (Prize)

Coefficientsa

Model 1 | B | Std. Error | Beta | t | Sig. | 95,0% CI Lower Bound | 95,0% CI Upper Bound
(Constant) | 7,060 | ,485 | | 14,555 | ,000 | 6,109 | 8,011
Number of Awards | ,826 | ,276 | ,052 | 2,992 | ,003 | ,285 | 1,367
Amount in USD (Prize) | ,006 | ,001 | ,177 | 10,348 | ,000 | ,005 | ,007
Project Owner Number of Projects | -,134 | ,018 | -,114 | -7,335 | ,000 | -,170 | -,098
Number of Comments | ,047 | ,002 | ,445 | 29,029 | ,000 | ,044 | ,050
a. Dependent Variable: Number of Watchers

Multiple regression analysis was carried out to predict the number of watchers monitoring a contest (Number of Watchers) from the number of awards set by the organizer, the number of comments, the organizer's number of previous projects, and the contest reward in USD. These variables statistically significantly predicted Number of Watchers, F(4, 3146) = 304.714, p < .0005, R² = .279. All four variables added statistically significantly to the prediction, p < .05.
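The analysis above was run in SPSS, but the same overall statistics (coefficients, R², F) can be reproduced with ordinary least squares in any numerical environment. A minimal NumPy sketch, using synthetic illustrative data rather than the thesis sample (all variable names and values below are made up for demonstration):

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept.
    Returns the coefficient vector (intercept first), R^2,
    and the overall F statistic."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])        # design matrix with constant
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid                        # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()          # total sum of squares
    r2 = 1 - ss_res / ss_tot
    k = X.shape[1]                                # number of predictors
    f = (r2 / k) / ((1 - r2) / (n - k - 1))       # overall F statistic
    return beta, r2, f

# Illustrative synthetic data: four predictors driving a "watchers" outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 7 + X @ np.array([0.8, 0.006, -0.13, 0.05]) + rng.normal(scale=1.0, size=500)
beta, r2, f = ols(y, X)
```

The estimated intercept and slopes recover the generating values up to sampling noise; with real contest data the same function yields the R² and F reported in the tables.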


Correlations (Pearson, N = 3755-3757)

                             1        2        3
1. AmountUSD                 1        .175**   .022
2. NumEntries                .175**   1        .103**
3. FleschKincaidGradeLevel   .022     .103**   1

**. Correlation is significant at the 0.01 level (2-tailed). The AmountUSD - FleschKincaidGradeLevel correlation is not significant (Sig. = .178).

Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .128a   .016       .016                4.1421

a. Predictors: (Constant), NumEntries, NumWatchers

ANOVAa

Model 1       Sum of Squares   df     Mean Square   F        Sig.
Regression    1065.727         2      532.864       31.058   .000b
Residual      64374.112        3752   17.157
Total         65439.839        3754

a. Dependent Variable: FleschKincaidGradeLevel
b. Predictors: (Constant), NumEntries, NumWatchers

Coefficientsa

Model 1       B       Std. Error   Beta    t        Sig.
(Constant)    4.157   .101                 41.310   .000
NumWatchers   .022    .005         .093    4.682    .000
NumEntries    .001    .000         .048    2.430    .015

a. Dependent Variable: FleschKincaidGradeLevel


Multiple regression analysis was carried out to predict the complexity of the organizer's creative brief, measured as the Flesch-Kincaid Grade Level, from the number of watchers monitoring the contest (NumWatchers) and the number of entries (NumEntries). These variables statistically significantly predicted Flesch-Kincaid Grade Level, F(2, 3752) = 31.058, p < .0005, although the explained variance was small, R² = .016. Both variables added statistically significantly to the prediction, p < .05.
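The Flesch-Kincaid Grade Level used here as the brief-complexity measure is defined as 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A rough Python sketch of the computation, using a crude vowel-group syllable heuristic (production readability tools use dictionary-based syllable counts, so exact scores will differ):

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels,
    discounting a silent final 'e'. Real tools use dictionaries."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:   # silent final 'e'
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short, monosyllabic briefs score low (the formula can even go negative for very simple text), while long, polysyllabic sentences push the grade level up.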

From a second, larger sample the following correlations were found:

Correlations (Pearson, N = 6579)

                              1        2        3        4        5        6
1. num_entries                1        .183**   .558**   .157**   .248**   .096**
2. num_awards                 .183**   1        .123**   .460**   .118**   .010
3. num_watchers               .558**   .123**   1        .269**   .223**   .157**
4. award_amount_USD           .157**   .460**   .269**   1        .060**   .044**
5. num_star_rating_5          .248**   .118**   .223**   .060**   1        .029*
6. unix_timestamp end-start   .096**   .010     .157**   .044**   .029*    1

**. Correlation is significant at the 0.01 level (2-tailed). *. Correlation is significant at the 0.05 level (2-tailed). The num_awards - unix_timestamp end-start correlation is not significant (Sig. = .427).
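Pearson correlation matrices like the one above can be computed directly with NumPy's corrcoef. A sketch on synthetic contest-like data (the variables and values are illustrative, not the sample of 6,579 contests):

```python
import numpy as np

# Illustrative synthetic contest data (not the thesis sample):
rng = np.random.default_rng(42)
watchers = rng.poisson(20, size=1000).astype(float)
entries = 0.8 * watchers + rng.normal(scale=5, size=1000)  # built to correlate with watchers
prize = rng.exponential(300, size=1000)                    # independent of the other two

# np.corrcoef treats each row of the input as one variable.
r = np.corrcoef(np.vstack([entries, watchers, prize]))
```

The result is a symmetric matrix with a unit diagonal, matching the layout of the SPSS table; entries and watchers come out strongly correlated by construction, mirroring the .558 entries-watchers correlation observed in the sample.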


Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .259a   .067       .066                8.595

a. Predictors: (Constant), num_entries, unix_timestamp end-start, award_amount_USD, num_awards

ANOVAa

Model 1       Sum of Squares   df     Mean Square   F         Sig.
Regression    34870.019        4      8717.505      118.001   .000b
Residual      485663.305       6574   73.876
Total         520533.324       6578

a. Dependent Variable: num_star_rating_5
b. Predictors: (Constant), num_entries, unix_timestamp end-start, award_amount_USD, num_awards

For this sample, multiple regression analysis was carried out to predict the number of ideas rated 5 stars, the highest rating possible, from the number of entries (num_entries), the number of awards (num_awards), the award amount in USD (award_amount_USD) and the duration of the contest (unix_timestamp end-start). These variables statistically significantly predicted the number of 5-star-rated ideas, F(4, 6574) = 118.001, p < .0005, R² = .067.
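As a consistency check on the reported statistics, the overall F value follows from R² and the degrees of freedom via F = (R²/k) / ((1 − R²)/df_residual). Plugging in the Model Summary values (R Square = .067, k = 4 predictors, 6,574 residual degrees of freedom) reproduces the ANOVA's F of roughly 118:

```python
def f_from_r2(r2, k, df_resid):
    """Overall F statistic of an OLS model, recovered from R^2,
    the number of predictors k, and the residual degrees of freedom."""
    return (r2 / k) / ((1 - r2) / df_resid)

f = f_from_r2(0.067, 4, 6574)   # Model Summary / ANOVA values above
```

The same identity checks the first model: f_from_r2(0.279, 4, 3146) gives roughly 304, in line with the reported F = 304.714.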
