
What drives idea acceptance on crowdsourcing

platforms: initiator expertise, idea quality or popularity?

Master Thesis

Steven Zaal

First supervisor: dr. T.L.J. (Thijs) Broekhuizen

Second supervisor: prof. dr. P.C. (Peter) Verhoef

University of Groningen

Faculty of Economics and Business

MSc Marketing

MSc Business Administration: Strategic Innovation Management

22-June-2015 Word count: 9.595


Abstract

This study examines the role of initiator expertise and popularity in shaping idea acceptance on voluntary crowdsourcing platforms. It draws on the literature on signaling, feedback and popularity to develop hypotheses about the drivers of idea acceptance. Using data on 115 ideas from the crowdsourcing platform Dell IdeaStorm, this study finds that being an initiator with expertise, as signaled on the platform by past realized success (the "Rockstar" label), is no guarantee of future success. Signaling expertise through the "Rockstar" label appears to have no effect on popularity (number of votes and feedback quantity) or on idea acceptance over and above idea quality. Users cast more votes for high quality ideas, and these votes, together with platform owner feedback, lead to higher idea acceptance. Although accepting feedback may positively influence idea quality, this study found no evidence that initiators can increase idea acceptance by accepting more feedback, nor that the valence of the feedback matters. The study provides managerial implications for platforms and initiators. For the platform, it is interesting that the crowd is highly capable of rating ideas, which could accelerate the process of reviewing ideas. Initiators, in turn, could try to attract more votes for their idea by sharing it on social media.

Key words: crowdsourcing, feedback, quality, expertise, popularity, idea acceptance

Acknowledgements


Table of Contents

1. Introduction
2. Crowdsourcing and idea acceptance
2.1 Setting and added value of the study
2.2 Theoretical framework & conceptual model
2.2.1 Initiator type & consequences of expertise
2.2.2 Feedback acceptance
2.2.3 Antecedents of idea acceptance
2.2.4 Control variables
3. Methodology
3.1 Research design
3.2 Data collection
3.3 Variables and measures
3.4 Analysis
3.5 Robustness checks
4. Results
4.1 Consequences of expertise
4.2 Antecedents of idea acceptance and mediation effects
4.4 Robustness checks and additional tests
5. Conclusion
5.1 Discussion and implications
5.1.1 Theoretical implications
5.1.2 Managerial implications
5.2 Limitations and future research
References
Appendix


1. Introduction

Innovation has a high priority among top managers (Andrew et al. 2010, Jaruzelski & Dehoff 2010). Traditionally, firms relied on internal investments and efforts for a constant stream of ideas and innovations (Ernst et al. 2000, Schulze & Hoegl 2008). As organizations find it increasingly difficult and costly to understand their customers and face the challenge of developing products that meet the demands of differentiated consumers (Arkakji & Lang, 2007), they have responded by using different tools to gauge consumer needs. The internet and related web 2.0 technologies have made it easier to reach and interact with consumers, and provide the opportunity to gain useful insights via crowdsourcing (Jain, 2010). Crowdsourcing is a collaboration model enabled by people-centric web technologies that solves individual, organizational, and societal problems with a dynamically formed crowd of people who respond to an open call for participation (Pedersen et al., 2013). It has been adopted by many leading organizations, including Fortune 100 companies such as PepsiCo, General Electric, General Motors and Procter & Gamble (Bratvold, 2011).

Crowdsourcing offers organizations the opportunity to publish a problem or concern to which individuals can respond by posting their ideas or solutions. This leads to a large pool of ideas that has to be examined by a selector, who can be the organization and/or platform owner. Regarding the selection of ideas there are two important issues. Firstly, the crowd has to be motivated to contribute (e.g., ideas, feedback, voting) so that a minimum number of ideas is submitted and promoted. Secondly, the platform or organization has to find and select the best idea. Many studies (see table 1) have shown that tools such as the possibility to earn points, the possibility to earn an expert label and financial rewards may motivate the crowd to contribute. For the platform or organization, the large quantity of solutions and ideas makes it difficult to find and select the best idea: it can be costly and time consuming to go through all ideas, many of which may be of low quality. For platforms and organizations it would therefore be interesting to make this process more efficient. For example, using expert labels for users on the platform may provide a (soft) pre-selection to identify users with better ideas (Pornpitakpan, 2004).

Idea initiators on the platform share a motivation for engagement: they want their voices to be heard by other participants on the platform and by the platform owner (Nam, 2012). For them, acceptance of their idea is very important, because it shows that they are noticed and that their need for engagement is fulfilled.


Most existing work takes the perspective of how the organization (or platform) may benefit from the crowd. This study takes an initiator's perspective by looking at which initiators' ideas get selected, using initiator expertise, feedback and voting as determinants, while controlling for idea quality. This research thus shifts the focus from the selector to the selected idea. This dissertation investigates which variables influence the degree of acceptance of ideas, and aims to answer the following research question:

How does the idea initiator’s expertise, user’s feedback and voting influence the acceptance of ideas generated on crowdsourcing platforms?

This dissertation's contribution is threefold. Firstly, it is the first study to shed light on the role of initiator expertise in idea acceptance. It considers whether ideas from experts receive more attention, whether their acceptance is higher, and whether the quality or popularity of those ideas leads to higher acceptance. Popularity, irrespective of idea quality, may positively influence acceptance, because an idea that gets promoted (votes) or receives feedback is more visible and therefore more likely to be selected by the platform. Secondly, this study investigates how feedback acceptance (i.e., the degree to which the initiator accepts the feedback) may influence idea acceptance. An initiator could be hesitant to accept feedback because he or she does not want to sacrifice the fundamentals of the idea for the benefit of the masses. Thirdly, this research focuses on voluntary social voting platforms, where the platform owner selects the idea that will be implemented.


2. Crowdsourcing and idea acceptance

2.1 Setting and added value of the study

Figure 1 presents a schematic overview of how the different parties interact, in chronological order and with possible feedback loops, towards idea acceptance on voluntary platforms.

Figure 1. Idea creation & social voting process

When the initiator posts an idea, it becomes available to the rest of the crowd, who are able to comment, provide feedback and vote on it. The willingness of the crowd to interact in this process could depend on the expertise of the initiator. Signaling theory suggests that the signaler (initiator) signals his or her expertise to the users, who may be more persuaded to act favorably when the signal fits a certain role model profile (Connelly et al., 2011). Users may consider expert initiators attractive for their knowledge and possible past successes on the platform (Pornpitakpan, 2004). Depending on the reactions of the crowd (comments and votes), an indication of the popularity and usefulness of the idea is given to the platform and its users. Users can interact with each other and with the idea initiator by commenting and providing feedback on the idea. Initiators can decide whether or not to alter the idea by accepting feedback. Eventually the platform owner decides whether to accept the idea (initial or altered) or not, and then decides to (partially) implement it.

Initiator’s integration of feedback is optional, leading to multiple possible paths to acceptance (see dotted lines fig.1). Firstly, if an idea is directly interesting for a platform owner, the votes and feedback become redundant. Secondly, when the initiator decides not to react to the provided feedback, no interaction takes place between the initiator and feedback-giver.

This setting is interesting to investigate since companies have an easy way to get in touch with initiators with specialized knowledge. Furthermore, these initiators are often intrinsically motivated to participate (Fuller, 2010; Von Hippel, 2005), which may lower R&D expenditures and improve new product development (NPD) performance (Boutin, 2006). In addition, research shows that crowdsourcing may result in solutions that are more commercially viable; customers are more willing to pay for innovations initiators propose, and the chance of churning is higher when initiator needs are



not taken into account (Franke & Hippel, 2003). Finally, the current acceptance rate in crowdsourcing is very low. For example, the crowdsourcing platform of Starbucks has an acceptance rate of roughly 0.16% (300 accepted out of 190,000 posted ideas), and the success rate was even lower, about 0.07%, in the case of the Gulf Oil Spill (Goldenberg, 2011). Companies and platforms will benefit from a higher acceptance rate, which will attract more users and make them more successful in finding innovations. Table 1 summarizes the extant literature covering the topic of crowdsourcing and points out how this study differs from existing research. In contrast to most research, which investigates platforms with financial rewards, this study focuses on a voluntary platform where the platform owner selects the best idea. The fact that the platform owner decides whether an idea is executed or not is a yet unexplored topic. This means that an idea is not chosen and funded by the masses (as is the case for Kickstarter and Indiegogo), but by an expert who will try to exploit the idea with the goal of financial profit.


Table 1. Previous research compared with the focal paper. Columns (studies): Ipeirotis et al. (2010), Bayus (2013), Dow et al. (2012), Fuller (2006), Huang & Fu (2013), Jain (2010), Wooten & Ulrich (2012), Belleflamme et al. (2014), Gerber et al. (2013), Schultz (2015), this paper. Rows (dimensions): type of platform (micro task, no creativity; ideation task, involves creativity; crowdfunding); incentive (financial; non-financial); decision maker (problem owner; platform owner; crowd/platform population); feedback (non-user experts; users; platform); other variables (initiator expertise; other*); selection of ideas based on (DV) (creativity/quality; feedback acceptation).
* initiator characteristics, transparency, governance mechanism; ** personal characteristics, hedonic value

2.2 Theoretical framework & conceptual model

Figure 2 displays the conceptual model, which is built upon signaling and feedback theory. This study analyzes the impact of initiator expertise on idea acceptance, mediated via popularity (as measured by number of votes and feedback quantity), feedback valence, platform feedback and platform feedback valence. It also incorporates feedback acceptance by the initiator, which is tested against idea acceptance. The level of attention that an idea receives (popularity) consists of two variables, number of votes and feedback quantity, because they share the common factor of quantity (volume) (Yang & Leskovec, 2011). To investigate whether initiator expertise has an influence via popularity or via idea quality, the conceptual model includes user characteristics (initiator, platform owner), user response (number of votes, feedback quantity and valence), initiator response (feedback acceptance) and idea acceptance.



In the following sections, the relevance and the hypothesized effects of these variables are elaborated.

Figure 2. Conceptual model

2.2.1 Initiator type & consequences of expertise

Users on a platform vary in their level of expertise, ranging from very inexperienced users to experienced specialists in a certain profession or area. Expertise, in this respect, is defined by the qualifications, track record, professional standing, and experience of the initiator (Burgman et al., 2011). Most crowdsourcing platforms do not make the exact background of their initiators explicit, but often grant certain labels to participants who have delivered an outstanding contribution. With this labeling system, all users of the platform are equal until a proven form of expertise has been demonstrated. This signal of expertise can influence the initiator's credibility on the platform and could raise the initiator's chances of success (Pornpitakpan, 2004). An initiator can influence perceived idea quality directly by signaling his or her expertise, or the ideas shared can be of objectively higher quality. This study focuses on the objective quality.

The labeling enhances the visibility of these participants, so that other users know what kind of status someone has. Users tend to react more favorably when perceived expertise is higher (Horai et al., 1974). In other words, they are more likely to agree with, and engage in developing the posted ideas of, people who have a higher perceived expertise (Crisci & Kassinove, 1973; Horai et al., 1974;


Pornpitakpan, 2004). Therefore, the expertise of the initiator is hypothesized to impact the number of votes, feedback quantity and feedback valence.

2.2.1.1 Initiator expertise and number of votes

Initiators with an expertise indicator may receive a greater number of votes than non-expert initiators. Users are more inclined to give a "thumbs up" to an idea when they perceive the initiator's expertise to be higher, compared with ideas from non-experts (Liao & Fu, 2014). This could mean that, when the crowd regards an initiator as having higher expertise, the idea receives more votes than when it is posted by an initiator with lower expertise. Furthermore, an idea could become 'trending' on a platform thanks to higher expertise, as these ideas tend to have more visibility. As such, the probability of a higher number of votes increases with the expert label. Another factor that could stimulate votes for expert initiators is the extrinsic motivation of the crowd to support 'winners' (Lepper & Henderlong, 2000). Participants with higher expertise could have a higher success rate, and by voting on these ideas or providing feedback, users could feel a form of recognition (associating with winners) when the idea gets accepted and implemented (Lepper & Henderlong, 2000).

H1: Initiators with higher expertise attain more votes.

2.2.1.2 Initiator expertise: platform and user feedback quantity


H2a: Initiators with higher expertise receive more user feedback.

In this study's setting, the platform owner can also deliver feedback. Although platform owners may be less driven by recognition, they seek the most valuable solutions for themselves and may be particularly attracted to ideas with greater visibility and higher perceived quality from contributors with more expertise. Platform owners only invest time when they see value for the organization, which could be higher for an expert initiator because of the initiator's perceived knowledge (Crisci & Kassinove, 1973; Horai et al., 1974; Liao & Fu, 2014). Platform owners may therefore provide more feedback to initiators with greater expertise.

H2b: Initiators with higher expertise attain more platform feedback.

2.2.1.3 Initiator expertise: platform and user feedback valence

Feedback valence is the tone of the feedback, which can be positive, negative or neutral. Because of their perceived expertise, initiators with various levels of expertise may receive feedback of different valence. Users are more inclined to agree with a person with higher perceived expertise (Horai et al., 1974; Liao & Fu, 2014); therefore feedback towards experts is more often positive than negative (Finkelstein & Fishbach, 2012). This positive feedback could stem from the automatic assumption that everything an expert does is good. This leads to the expectation that when feedback is provided, by either users or the platform, its tone will be more positive towards experts.

H2c: Initiators with higher expertise attain more positive feedback from the user and platform.

2.2.2 Feedback acceptance


offered, while reinforcement is ensuring that the receiver is on the right track (McIntire, 2007). The effect of valence on feedback acceptance is not hypothesized but used as an extra test.

When feedback is accepted, the initiator further develops the initial idea towards a more valuable and/or popular idea for the crowd and the platform. The initiator then incorporates multiple perspectives within the idea and mitigates a form of groupthink or tunnel vision. When the initiator decides to ignore the feedback (if provided), the original idea stays intact, but it is not improved towards the preferences of the crowd and/or platform; therefore, the chances of acceptance may be lower. The acceptance of feedback is also important for the continuity and viability of the platform: by incorporating feedback, the idea should ideally improve and a sense of reciprocity could emerge (Dellarocas et al., 2004).

H3: Feedback acceptance is positively related to the idea acceptance.

2.2.3 Antecedents of idea acceptance

Idea acceptance refers to the extent to which an idea is approved by the platform owner, and varies from rejection to full acceptance in four stages (rejection, under review, partially accepted, fully accepted). The ultimate goal of the initiator is that an idea gets fully accepted and transforms from an idea into reality. With a higher acceptance rate, a platform becomes more interesting for new initiators thanks to the perceived chance of success.

2.2.3.1 Initiator expertise

Initiators earn their expert label through outstanding contributions on the platform or a proven track record in real life (All, 2011). This leads to the expectation that initiators with expert labels are more successful and better at developing useful ideas than normal users on the platform. Apart from having an indirect influence via idea quality and popularity, initiator expertise may have a direct positive influence on acceptance.

H4: Initiator expertise is positively related to idea acceptance.

2.2.3.2 Popularity and idea quality


2010). By receiving more votes, the crowd signals to the platform that it likes the idea and that the idea is useful, which may stimulate the acceptance of the platform owner (Capra et al., 2010). This makes the growing number of votes an important factor for acceptation.

Just like the number of votes, the quantity of feedback is an indicator of popularity, since they share the common factor of quantity (volume) (Yang & Leskovec, 2011). An idea that receives more feedback will also stand out to the platform owner. Although feedback can point out that the idea is not yet perfect and needs improvement, it also indicates that users see potential in improving the idea, thereby signaling the attractiveness of the idea to the crowd and hence to the platform. Because popularity signals usefulness to the crowd, this study proposes the following hypotheses.

H5a: Popularity (number of votes and feedback quantity) is positively related to idea acceptance.

H5b: Idea quality is positively related to idea acceptance.

2.2.3.3 Platform owner feedback

Besides the users, the platform owner can also provide feedback. The frequency and quantity of platform owner feedback are lower compared to those of normal users. The platform owner only provides feedback when an idea is potentially useful for the organization but not yet fully satisfying. As such, when an idea receives feedback from the platform owner, this could indicate a higher chance of acceptance. In addition, the platform owner possesses the power of decision and can choose which ideas are accepted and which are not. These two factors make the involvement of the platform owner a strong indicator of idea acceptance.

H6: Platform owner’s feedback is positively related to idea acceptance.

2.2.4 Control variables

To increase the robustness of the findings and provide a stronger test, this study incorporates several control variables that may also impact acceptance. These control variables make it possible to test whether initiators with greater expertise are selected more often because their ideas are of higher quality, or because they primarily benefit from popularity advantages. The following control variables all relate to idea quality.

2.2.4.1 Idea clarity


2.2.4.2 Number of words

The number of words indicates the degree of elaboration on a particular idea. Ideas could be more effective when details are provided on how the idea works, or what the benefits may be for the platform owner and the crowd. This elaboration consists of, among other things, providing worked-out examples (Shute, 2008). In general, detailed ideas are perceived to be of higher quality compared to less detailed ideas, but too much detail may have drawbacks: if an idea is too long or too complicated, receivers may not pay attention to it.

2.2.4.3 Content quality


3. Methodology

This study uses Dell IdeaStorm as the focal research context. On this platform, ideas are initiated by contributors and may be amended based on feedback from users and the platform owner. Ultimately, some ideas are accepted.

This chapter elaborates on the methodology used. Firstly, the research design is explained, followed by the choice of data source. Secondly, an insight is given on how data were collected. Finally, the variables, measurements and analysis will be discussed.

3.1 Research design

To test the hypotheses, quantitative data were collected from the website and supplemented with expert ratings. The transparency of the Dell IdeaStorm website provides the advantage that all the necessary idea, feedback and voting data could be obtained. Data such as the popularity of ideas, the valence of feedback and the final acceptance of the idea are all openly available.

Dell IdeaStorm is a voluntary platform where participants (users and non-users of Dell offerings) can post suggestions, ranging from advertising tips to product hardware improvements. Furthermore, it is possible to comment and cast votes on ideas of other participants. This platform was chosen, among other reasons, for the possibility of open discussion, in which both non-expert users (normal users) and expert users (Rockstars) can provide feedback or ask questions about specific ideas, and because it is possible to distinguish between the two groups. When a user posts high quality ideas, feedback and votes, this user can be promoted to Rockstar by the platform.

If an idea becomes popular enough through votes and comments, it is assigned to the appropriate stakeholder within Dell (the platform owner) to evaluate whether it can be implemented. At that point, the status of the idea is changed to Under Review, and Dell keeps the IdeaStorm community up to date on its progress. An idea can receive several statuses, which are described in table 2. These statuses indicate the acceptance by the platform.

Table 2. Idea statuses and their descriptions

Status | Description
Acknowledged | The idea is open for voting and feedback after a quality check by the IdeaStorm team.
Under review | The idea is popular enough for Dell to take a closer look and assess its value for the organization.
Already offered | The idea is already part of Dell's products or otherwise already implemented within Dell.
Partially implemented | The idea is partially implemented or implemented in stages and remains available for further consideration.
Implemented | The idea is fully implemented within Dell.
Archived | After 6 months of acknowledgement without one of the above statuses, the idea is archived.


Dell IdeaStorm launched in 2007 and has since received more than 22,000 ideas, with close to 550 ideas accepted and implemented. At the end of March 2012, the platform was updated and several improvements were applied in terms of user friendliness and layout (Jervis, 2012). This platform has a higher acceptance rate compared to other platforms, which makes it easier to assess the factors that drive acceptance: as acceptance is less rare, it is easier to predict.

3.2 Data collection

This study considers ideas submitted after April 1, 2012, following the implementation of IdeaStorm 2.0. With this update, the Rockstar and platform owner status classifications were implemented, meaning that these classifications became visible through their own labels and logos, so users could more easily assess the expertise of an initiator. After six months an idea gets archived; to ensure that projects were not still running during the data collection, a 6-month buffer was taken into account. Data were collected from the 'Desktops and Laptops' section, as this category contains the most submitted ideas. This is not a cross-category study, because of the difficulty of finding raters with multiple fields of expertise and of comparing the quality of ideas from experts in different fields. The section held 115 ideas in a timeframe of 26 months. Of these 115 ideas, 14 were from Rockstars and 101 from non-Rockstars. Furthermore, one idea was implemented, eight were partially implemented, 14 were under review and the rest were archived. Only 28 ideas did not receive any platform or user feedback.

3.3 Variables and measures

Initiator expertise is a dummy variable to differentiate Rockstars from normal users (Rockstar = 1, normal user = 0), and can be obtained by viewing the contributor’s label on the platform.

For the variables that indicate popularity (number of votes, feedback quantity) and the platform owner feedback, this study uses an objective count variable indicating the quantity.


idea becomes more specific and (hopefully) more relevant. The feedback valence score is based on the average of all feedback messages. For example, an idea that receives three neutral and two positive feedback comments, transforms into a score of 0.4 (=2/5), while the feedback quantity score is 5.
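To make the scoring concrete, the sketch below computes feedback quantity and feedback valence for a single idea in Python. The coding of positive = +1 and neutral = 0 follows from the 2/5 = 0.4 example above; coding negative comments as -1 is an assumption, as the text does not state it explicitly.

# Minimal sketch of the popularity and valence measures described above.
# Coding negative = -1 is an assumption; positive = +1 and neutral = 0
# follow from the 2/5 = 0.4 example in the text.
VALENCE_CODES = {"positive": 1, "neutral": 0, "negative": -1}

def feedback_measures(comments):
    """Return (feedback_quantity, feedback_valence) for one idea."""
    if not comments:
        return 0, 0.0
    scores = [VALENCE_CODES[c] for c in comments]
    return len(scores), sum(scores) / len(scores)

# Example from the text: three neutral and two positive comments.
print(feedback_measures(["neutral", "neutral", "neutral", "positive", "positive"]))
# -> (5, 0.4)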

Idea acceptance: This study uses an objective four-point score, ranging from one to four, to indicate the different statuses on the platform. One is not accepted, two is under review, three is partially accepted and four is accepted. The different stages of acceptance by the platform can be found in table 2.

The control variables, number of words and idea clarity, are measured using objective measures. Previous work has indicated that the quality of the description has an influence on innovation success and on acceptance (Schulze & Hoegl, 2008; Tornatzky & Klein, 1982). Number of words is the word count in the original description. Idea clarity represents the three objective components: (a) visualization (i.e., use of pictures, graphs), (b) use of bullet points, and (c) use of references via hyperlinks. With these three components, the score ranges from zero (no idea clarity) to three (high clarity).
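A minimal sketch of how the two objective controls could be computed, assuming the description text and the three clarity components are recorded per idea (the function and variable names are illustrative, not taken from the thesis):

# Sketch of the objective idea-quality controls: word count of the original
# description and a 0-3 clarity score (visuals, bullet points, hyperlinks).
def number_of_words(description: str) -> int:
    return len(description.split())

def idea_clarity(has_visuals: bool, has_bullets: bool, has_hyperlinks: bool) -> int:
    # One point per component, so the score ranges from 0 to 3.
    return int(has_visuals) + int(has_bullets) + int(has_hyperlinks)

print(number_of_words("Add a matte-screen option for the XPS line"))          # 8
print(idea_clarity(has_visuals=True, has_bullets=False, has_hyperlinks=True))  # 2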

Content quality is rated by two independent expert raters. Two experts with experience in or great affinity with the computer/IT industry were asked to independently rate the ideas. To stimulate objectivity, the raters did not receive any idea creator characteristics (e.g., Rockstar status and other variables) that could influence their opinion. The expert raters rated each idea on a 5-point Likert scale on three items that represent the quality of the idea (compatibility, usefulness, novelty). To ensure inter-rater reliability, this study first tested the homogeneity of the raters by providing them with five identical ideas, which were rated individually after reaching consensus on the interpretation of the three content quality items. Inter-rater reliability is necessary to ensure the reliability of the measure when the remaining ideas are divided among the raters. The calibration of the expert raters shows that the intraclass correlation for single measures is .76 (p < .01), indicating that the raters correspond strongly on idea quality. The Cronbach's alpha of this 3-item construct is well over .7 (.865), which justifies using a composite score of the three items as the measure of content quality.
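The composite can be illustrated with a short sketch that averages the three expert-rated items per idea and computes Cronbach's alpha; the ratings shown are made up for illustration, not the thesis data.

# Sketch of the content-quality composite: average the three expert-rated
# items (compatibility, usefulness, novelty) and check Cronbach's alpha.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: ideas x items matrix of ratings."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

ratings = np.array([[4, 5, 3], [2, 2, 3], [5, 4, 4], [3, 3, 2], [4, 4, 5]], dtype=float)
content_quality = ratings.mean(axis=1)          # composite score per idea
print(round(cronbach_alpha(ratings), 2), content_quality)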

3.4 Analysis

To assess the differences between Rockstars and non-Rockstars while including the control variables, that is, to assess the impact of the IVs (e.g., initiator expertise) on the DVs (e.g., number of votes), this study uses an ANCOVA.
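A sketch of such an ANCOVA in Python with statsmodels is shown below, assuming a hypothetical data file and column names (rockstar, votes, content_quality, idea_clarity, num_words); it mirrors the model 2 specification with covariates reported in Table 3.

# Sketch of the ANCOVA: number of votes regressed on the Rockstar dummy
# while controlling for the idea-quality covariates.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ideastorm_ideas.csv")  # hypothetical export of the 115 ideas

model = smf.ols(
    "votes ~ C(rockstar) + content_quality + idea_clarity + num_words", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p per term, as in Table 3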


towards idea acceptance should become clear. With this expanding test, variables which were significantly related to idea acceptance at first sight, could become less or non-significant when another variable was entered. In this manner, the most important and strongest variables were found.

To test the possible mediating effects in the conceptual model, regression analyses were executed. Specifically, this study uses the indirect (multiple mediation) model of Preacher and Hayes (2008). This approach allows the simultaneous assessment of multiple mediators and the influence of covariates (number of words, idea clarity, content quality). It uses bootstrapping to assess statistical significance: a computationally intensive method that involves repeatedly resampling from the dataset and estimating the indirect effect in each resampled dataset. When this is repeated thousands of times, an empirical approximation of the sampling distribution of the mediation path is built and used to construct confidence intervals for the indirect effect (Preacher & Hayes, 2008).
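The sketch below illustrates this bootstrapping logic for one mediation path (initiator expertise -> number of votes -> idea acceptance) with the covariates included; it is a simplified stand-in for the Preacher and Hayes (2008) procedure, and the dataset and column names are assumptions.

# Sketch of a bootstrapped indirect effect (expertise -> votes -> acceptance).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(data: pd.DataFrame) -> float:
    # Path a: expertise -> mediator; path b: mediator -> acceptance (with covariates).
    a = smf.ols("votes ~ rockstar + content_quality + idea_clarity + num_words",
                data=data).fit().params["rockstar"]
    b = smf.ols("acceptance ~ votes + rockstar + content_quality + idea_clarity + num_words",
                data=data).fit().params["votes"]
    return a * b

df = pd.read_csv("ideastorm_ideas.csv")          # hypothetical dataset
boot = [indirect_effect(df.sample(len(df), replace=True)) for _ in range(5000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(indirect_effect(df), ci_low, ci_high)      # point estimate and 95% bootstrap CI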

To test for a curvilinear relationship, this study adds a squared term to the regression analysis to examine whether feedback quantity and feedback acceptance are curvilinearly related. This curve is expected to take the form of an inverted U-shape: as feedback quantity rises, the chance of conflicting feedback rises, which makes the choice of which feedback to accept harder. Furthermore, with a higher amount of feedback, the receiver could overlook feedback that would otherwise be accepted.
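A sketch of this quadratic specification, under the same hypothetical dataset and column names as above; a significant negative coefficient on the squared term would support the expected inverted U-shape.

# Sketch of the curvilinear test: feedback acceptance on feedback quantity
# and its square.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ideastorm_ideas.csv")  # hypothetical dataset
quad = smf.ols("feedback_acceptance ~ feedback_quantity + I(feedback_quantity ** 2)",
               data=df).fit()
print(quad.summary())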

3.5 Robustness checks


4. Results

This results section is divided into two parts. The first part considers the relation between initiator expertise and the variables number of votes, feedback quantity, platform feedback, feedback valence and platform feedback valence, to test H1-H2. The second part covers the antecedents of idea acceptance and tests H3-H6.

4.1 Consequences of expertise

To analyze the difference between the Rockstars and non-Rockstars on the variables number of votes, feedback valence, feedback quantity, platform feedback and platform feedback valence, an ANCOVA was performed. With an ANCOVA, this study could control for the idea quality (number of words, idea clarity, content quality). Table 3 summarizes the outcome of the tests.

Contrary to H1, the ANCOVA found no significant difference between Rockstars (M = 17.64, SD = 4.35) and non-Rockstars (M = 7.42, SD = 1.62) concerning the number of votes after controlling for number of words, idea clarity and content quality (F(1, 113) = 1.51, p = .22). Although the mean seems higher (+10 votes), this difference is not significant. Before including the covariates, there appears to be a significant but limited influence of initiator expertise on the number of votes; however, when controlling for idea quality, no direct effect between expertise and number of votes was found. Among the control variables, idea clarity is significant at the 10% level and positively related to number of votes (F(1, 113) = 2.71, p = .10). This study thus rejects H1, as it did not find evidence that higher expertise leads to more votes on the platform.

To test H2a-H2c, the same steps were taken as in the analysis of H1. Contrary to H2a, which states that initiators with higher expertise receive more user feedback, the results showed no significant difference between Rockstars (M = 1.36, SD = .64) and non-Rockstars (M = 1.62, SD = .17) (F(1, 85) = .29, p = .59), thereby rejecting H2a. This study thus found no significant mean difference between Rockstars and non-Rockstars in the quantity of feedback they receive. Although the results were not significant and, as such, no firm conclusion can be drawn, they suggest that users are somewhat more hesitant to provide feedback to Rockstars.

H2b stated that initiators with a higher expertise attain more platform feedback. It has to be mentioned that the number of observations (N = 11) regarding platform owner feedback was small. For the ANCOVA analyzing platform owner feedback, this study uses the model with covariates, but no significant effects were found. This outcome rejects H2b.


Regarding the control variables, content quality was positively related to feedback valence (F(1, 85) = 4.50, p = .04), meaning that ideas with higher content quality receive more positive feedback.

As a final check, the platform owner feedback valence was studied including the covariates. The test showed no significant difference between Rockstar (M = 1.09, SD = .73) and non-Rockstar (M = .09, SD = .23) after controlling for number of words, idea clarity and content quality (F(1, 85) = 1.67, p = .24).

Dependent variable | Model | Independent variable | F | p value | ηp²
Number of votes | 1 (adj. R² = .033) | Initiator expertise | 4.85 | .03** | .041
Number of votes | 2 (adj. R² = .064) | Content quality | 1.59 | .21 | .014
 | | Idea clarity | 2.71 | .10* | .024
 | | Number of words | .52 | .47 | .005
 | | Initiator expertise | 1.51 | .22 | .014
Feedback quantity | 1 (adj. R² = -.006) | Initiator expertise | .29 | .59 | .003
Feedback quantity | 2 (adj. R² = -.019) | Content quality | 1.47 | .23 | .013
 | | Idea clarity | .01 | .93 | .000
 | | Number of words | .19 | .66 | .002
 | | Initiator expertise | .22 | .64 | .002
Feedback valence | 1 (adj. R² = -.012) | Initiator expertise | .01 | .93 | .000
Feedback valence | 2 (adj. R² = .015) | Content quality | 4.50 | .04** | .052
 | | Idea clarity | .01 | .93 | .000
 | | Number of words | .27 | .61 | .003
 | | Initiator expertise | .08 | .78 | .001
Platform feedback | 1 (adj. R² = -.008) | Initiator expertise | .11 | .75 | .001
Platform feedback | 2 (adj. R² = -.007) | Content quality | 1.52 | .22 | .014
 | | Idea clarity | .99 | .32 | .009
 | | Number of words | .46 | .50 | .004
 | | Initiator expertise | .01 | .91 | .000
Platform feedback valence | 1 (adj. R² = .034) | Initiator expertise | 1.35 | .28 | .131
Platform feedback valence | 2 (adj. R² = .080) | Content quality | .29 | .61 | .045
 | | Idea clarity | 1.64 | .25 | .215
 | | Number of words | .70 | .43 | .105
 | | Initiator expertise | 1.67 | .24 | .220

* p < 0.10, ** p < 0.05, *** p < 0.01; ηp²: variance explained per variable

Table 3. ANCOVA outcome summary; model 1: without covariates, model 2: with covariates

4.2 Antecedents of idea acceptance and mediation effects

To test H3-6 this study used a regression analysis. Table 4 reports the results. Model 1 regressed the initiator expertise on idea acceptance. Model 2 added feedback acceptance, model 3 idea quality, model 4 popularity and model 5 platform feedback.
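The sketch below illustrates this block-wise (hierarchical) build-up of the regression, again with the hypothetical dataset and column names used in the earlier sketches; each step prints the number of blocks entered, the R-squared and the F-value, analogous to the bottom rows of Table 4.

# Sketch of the hierarchical regression behind Table 4: predictors are added
# block by block and the R-squared gain is tracked.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ideastorm_ideas.csv")  # hypothetical dataset
blocks = [
    "rockstar",                                      # model 1: initiator expertise
    "feedback_acceptance",                           # model 2
    "content_quality + idea_clarity + num_words",    # model 3: idea quality
    "votes + feedback_quantity",                     # model 4: popularity
    "platform_feedback",                             # model 5
]
rhs = []
for block in blocks:
    rhs.append(block)
    fit = smf.ols("acceptance ~ " + " + ".join(rhs), data=df).fit()
    print(len(rhs), round(fit.rsquared, 2), round(fit.fvalue, 2))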


In model 1, although the relation is not significant, initiator expertise is negatively related to idea acceptance, which is noteworthy. This leads to the rejection of H4.

Model 2 adds feedback acceptance to the regression. Even though the explained variance rises slightly (R² = .01), there is no significant effect of feedback acceptance (B = .14, SE = .18, p = .44; see table 4). This implies that even when feedback is accepted, it does not affect idea acceptance. H3 is therefore rejected.

Model 3 adds idea quality to the regression of model 2. This increases the explained variance (R² = .17). Idea quality is decomposed into three variables (content quality: β = .42, p < .00; idea clarity: β = .04, p = .80; number of words: β = .00, p = .26). Of these, only content quality is significant, meaning that content quality contributes strongly to the explanation of idea acceptance. As such, this study partially supports H5b, suggesting that the platform owner selects, among other things, based on the content quality of the idea.

Model 4 adds popularity to the total regression. Popularity is divided into number of votes (β = .02, p < .00) and feedback quantity (β = .00, p = .98) an idea receives. Again this leads to a strong increase of the explained variance of the test (R2 = .35). Number of votes was highly significant in relation to idea acceptance. H5a stated that popularity is positively related to idea acceptance and with this outcome the hypothesis was partially supported. This outcome shows that when an idea has more votes, the probability of acceptance by the platform owner increases.


 | Model 1 B (SE) | Model 2 B (SE) | Model 3 B (SE) | Model 4 B (SE) | Model 5 B (SE)
Initiator expertise (H4) | -.12 (.20) | -.12 (.20) | -.22 (.20) | -.30 (.19) | -.33 (.17)
Feedback acceptance (H3) | | .14 (.18) | .13 (.17) | .18 (.16) | .19 (.14)
Idea quality (H5b): content quality | | | .42*** (.12) | .37*** (.11) | .32*** (.10)
Idea quality (H5b): idea clarity | | | .04 (.14) | -.10 (.13) | -.03 (.12)
Idea quality (H5b): number of words | | | .00 (.00) | .00 (.00) | .00 (.02)
Popularity (H5a): number of votes | | | | .02*** (.00) | .02*** (.00)
Popularity (H5a): feedback quantity | | | | .00 (.04) | -.04 (.04)
Platform feedback (H6) | | | | | .77*** (.17)
Constant | .37 | .33 | -1.20 | -1.11 | -.99
R² | .00 | .01 | .17 | .35 | .50
F-value | .35 | .48 | 3.29 | 5.96 | 9.37

Notes: DV = idea acceptance; * p < 0.10, ** p < 0.05, *** p < 0.01

Table 4. Regression outcomes: antecedents of idea acceptance

The conceptual model assumes that the effect of initiator expertise on idea acceptance is mediated by the variables number of votes, feedback quantity, idea quality and platform feedback. However, because there is no direct effect of initiator expertise on idea acceptance, there can be no full mediation. The test of Preacher and Hayes (2008) yielded no significant indirect effects; only the direct effects of paths a and b were significant, and since these are already covered by the previous tests they are not repeated here. Figure 3 shows the mediation paths; the results of the mediation tests are reported in Appendix A.



Table 5. Summary of hypotheses and results

Hypothesis | Result
H1: Initiators with higher perceived expertise attain more votes. | Not fully supported; experts do attain more votes, but this is due to higher idea quality
H2a: Initiators with higher perceived expertise receive more user feedback. | Not supported
H2b: Initiators with higher perceived expertise attain more platform feedback. | Not supported
H2c: Initiators with higher perceived expertise attain more positive feedback from the user and platform. | Not supported
H3: Feedback acceptance is positively related to idea acceptance by the platform owner. | Not supported
H4: Initiator expertise is positively related to idea acceptance. | Not supported
H5a: Popularity (number of votes and feedback quantity) is positively related to idea acceptance. | Partially supported for number of votes
H5b: Idea quality is positively related to idea acceptance. | Partially supported for content quality
H6: Platform owner's feedback is positively related to idea acceptance. | Supported

4.4 Robustness checks and additional tests

To assess whether Rockstars objectively provide better ideas, this study examines the relation between initiator expertise and idea quality using an ANCOVA. A significant difference was found between Rockstars (M = 243.57, SD = 210.81) and non-Rockstars (M = 134.85, SD = 139.59) in the number of words used in their ideas (F(1, 113) = 6.50, p = .01). The same holds for idea clarity (F(1, 113) = 14.97, p < .00), with Rockstars (M = .79, SD = .98) scoring higher than non-Rockstars (M = .21, SD = .43). This means that Rockstars write more clearly structured ideas and use more words (greater elaboration and specificity). However, the difference in content quality is not significant (Rockstars: M = 3.57, SD = .48; non-Rockstars: M = 3.42, SD = .59).

Dependent variable | Model (adj. R²) | F | p value | ηp²
Content quality | adj. R² = -.002 | .83 | .37 | .007
Idea clarity | adj. R² = .109 | 14.97 | .00*** | .117
Number of words | adj. R² = .046 | 6.50 | .01** | .054

* p < 0.10, ** p < 0.05, *** p < 0.01; ηp²: variance explained per variable

Table 6. ANCOVA outcome summary: initiator expertise and idea quality

To provide some insight into the acceptance of platform feedback of different valences, a chi-square test was undertaken (see table 7). No significant association was found between platform feedback valence and the acceptance of that feedback (Χ² = 4.30, df = 2, p = .12). This means that platform feedback of a particular valence (positive, neutral or negative) is not accepted more (or less) often.

Platform feedback valence | Negative | Neutral | Positive | Total
Accepted | 2 (100%) | 1 (20%) | 1 (25%) | 4
Not accepted | 0 (0%) | 4 (80%) | 3 (75%) | 7
Total | 2 | 5 | 4 | 11

Χ² = 4.30, df = 2, p = .12
Note: all cells have a count lower than 5.

Table 7. Cross table of platform feedback valence: accepted vs. not accepted

The same was done for user feedback valence. The chi-square test, shown in table 8, indicates a significant relationship between user feedback valence and the acceptance of that feedback (Χ² = 9.95, df = 2, p < .01). The outcomes show that positive feedback is accepted less often than the other two valence types. However, initiators are not eager to accept negative feedback either.

User feedback valence | Negative | Neutral | Positive | Total
Accepted | 11 (29%) | 20 (50%) | 17 (22%) | 48
Not accepted | 27 (71%) | 20 (50%) | 61 (78%) | 108
Total | 38 | 40 | 78 | 156

Χ² = 9.95, df = 2, p < .01

Table 8. Cross table of user feedback valence: accepted vs. not accepted
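Both tests operate on the cross-table counts alone; the sketch below runs scipy's chi-square test of independence on the counts from Tables 7 and 8 (rows: accepted / not accepted; columns: negative, neutral, positive).

# Sketch of the chi-square tests on feedback valence vs. feedback acceptance.
from scipy.stats import chi2_contingency

platform = [[2, 1, 1], [0, 4, 3]]        # Table 7
user = [[11, 20, 17], [27, 20, 61]]      # Table 8
for name, table in [("platform", platform), ("user", user)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(name, round(chi2, 2), dof, round(p, 3))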


Finally, this study tested whether there is a curvilinear relation between feedback quantity and feedback acceptance by comparing the linear and squared terms of feedback quantity. The linear term (feedback quantity) was not significant (β = -.03, p = .32), and the same applies to the squared term (β = .01, p = .17). Since neither term is significant, no support is found for the expected inverted U-shaped relation between feedback quantity and feedback acceptance.

 | Model 1 B (SE) | Model 2 B (SE)
Feedback quantity | -.03 (.03) | -.12 (.06)
Feedback quantity squared | | .01 (.01)
Constant | .35 | .46
R² | .01 | .04
df | 83 | 82
F-value | 1.02 | 1.80

Notes: DV = feedback acceptance; * p < 0.10, ** p < 0.05, *** p < 0.01


5. Conclusion

To conclude this study, the main research question is answered by discussing the theoretical and managerial implications. The research question was stated as follows:

How does the idea initiator’s expertise, user feedback and votes influence the acceptance of ideas generated on crowdsourcing platforms?

This is followed by the limitations and future research.

5.1 Discussion and implications

This study investigated the role of initiator expertise on voluntary crowdsourcing platforms and the impact of popularity and quality on idea acceptance. Although initiator expertise is positively associated with the number of votes, this effect becomes insignificant when controlling for idea quality. Initiator expertise (the label) has no significant influence on the popularity (feedback quantity and number of votes) of the ideas themselves. However, it should be noted that Rockstars write clearer ideas (higher idea clarity), which ultimately leads to more votes. Furthermore, Rockstars use more words in their idea descriptions, which could indicate that they are more specific and elaborate. However, this does not lead to higher idea acceptance.

One of the most striking findings is the non-significant effect of initiator expertise on idea acceptance. It could be that the platform does not favor (nor disfavor) previous winners, or that users dislike promoting Rockstars. Another explanation is that Rockstars find it difficult to deliver continuously high-quality input to the platform or, because of their previous success, are less willing to put in the effort needed to win. The only significant drivers of idea acceptance were the number of votes, content quality (idea quality) and platform owner feedback. Popularity may facilitate acceptance, but popularity alone will not suffice: a high-quality idea is equally important. Furthermore, when the platform owner engages by providing feedback on an idea, the chance that the idea will be implemented increases.

5.1.1 Theoretical implications


difference in creativity and usefulness. Ideas are most likely to win when they are popular (number of votes and feedback quantity) and of high quality (content quality). This study confirms the literature that popularity (Geiger et al., 2011; Riedl et al., 2010) and idea quality (Schulze & Hoegl, 2008; Tornatzky & Klein, 1982) help idea acceptance.

Literature on the effect of non-experts providing feedback to experts is scarce. Given the insignificant effect, this study finds no clear difference between the two groups (experts vs. non-experts) in the feedback they receive. The same holds for feedback valence, where the literature predicts an overall positive tone of feedback towards Rockstars (Finkelstein & Fishbach, 2012), which was not found in this study. This means that users are neither particularly hesitant nor particularly eager to provide (positive) feedback to Rockstars. A direct effect of expertise on idea acceptance was not found either, which strengthens the finding of Bayus (2013) that an expert label is no guarantee for success. Past successes are no guarantee for the future, which suggests that the label is redundant with respect to idea acceptance.

According to the literature, the acceptance of feedback follows a curvilinear pattern: as feedback quantity rises, the receiver finds it more difficult to assimilate the information (Laursen & Salter, 2006). This could not be confirmed by this study. With regard to feedback valence, this study shows no difference between experts and non-experts in the degree to which feedback of a certain tone is accepted. This contrasts with the current literature, which predicts that negative feedback should have an impact (Finkelstein & Fishbach, 2012). Moreover, the notion that feedback acceptance (one of the fundamental mechanisms on the platform) would lead to idea acceptance (McGourty et al., 2013) is not confirmed.

5.1.2 Managerial implications

This study offers several managerial implications for platforms. Firstly, this study finds that the expert label does not distinguish winners from losers. There is no direct effect of initiator expertise on acceptance by the platform; however, Rockstars write clearer ideas, which leads to more votes. In turn, more votes increase the chance of idea acceptance (together with content quality and platform feedback).


The third implication concerns feedback acceptance, which does not lead to a direct increase in idea acceptance and does not seem to improve idea quality. With respect to idea acceptance these feedback mechanisms seem redundant, but feedback could still be a useful way to keep users active on the platform, as it may give them the feeling of adding value. Furthermore, as in the study of Laursen & Salter (2006), the acceptance of feedback could have an important impact on the use of the innovation itself after acceptance: when the initiator accepts feedback (crowd input), the chance that the innovation will be used could be larger because it better satisfies a market need.

Considering the managerial implications from the initiator's point of view, this study shows how users can provide better ideas and contribute more to the platform. Firstly, this study found value in writing a well-structured idea, for instance with visualizations and bullet points, to attain more votes. A higher number of votes leads to a larger chance of idea acceptance by the platform.

Secondly, striving for the Rockstar label is not that important: it does not provide the user with more benefits towards acceptance than a non-Rockstar has. To be successful on the platform, the initiator has to deliver a continuous effort.

Thirdly, because platform owner feedback is related to idea acceptance, it could be beneficial to actively engage the platform owner with the idea. Of course, this could weaken the effect of platform feedback on idea acceptance if the platform becomes less critical about which ideas it provides feedback on. However, it can be expected that the platform has a professional mindset and only engages when it considers an idea useful for the organization.

Lastly, since the popularity of the idea helps to stimulate acceptance, it would be valuable for the initiator to promote his or her idea on other platforms (social media, forums fitting the idea category), in order to spark the interest of users on the platform as well.

5.2 Limitations and future research

After undertaking this study, several limitations came to light that give reason for future research. First of all, difficulties were experienced while compiling the dataset concerning platform owner influence. This study tried to gather a dataset as large as possible (100+ ideas). However, because the data were bound to a specific category and to the period after the renewal of IdeaStorm, the sample was not large enough to reliably assess the effects of platform owner feedback valence (hence fewer than 5 observations per cell in the chi-square test). A solution may be an experimental setup with two groups, where one group receives feedback from a platform owner (decision maker) and the other group receives normally labeled feedback, both with a fair distribution of valence. The reliability of this test would then rise because the manipulations can be controlled.


The second limitation concerns the type of platform. When the decision on acceptance is made by the users of the platform, as with crowdfunding, the votes and feedback could have an even stronger impact, because users experience the consequences of wrong or good judgment themselves: crowdfunders face financial consequences if they invest in and vote for undesired ideas. Feedback acceptance could also have a stronger effect on acceptance on crowdfunding platforms, because without alteration creators will not get funded, which is not necessarily the case on a voluntary platform when the content is strong. With crowdfunding the focus lies on convincing users instead of the platform, which should make the effect of votes stronger. For future research it would be interesting to repeat this research on another type of platform.

The third limitation has to do with the expert label. This study considered the initiator's label at the moment the idea was submitted. However, it could be that during the period some non-experts became experts, meaning that they had expert characteristics but simply lacked the label. This study tried to overcome this with a blind review: the expert raters did not see the Rockstar labels, so they were not influenced by them. In contrast, it is not possible for voters to see the ideas without labels, which could bias their choices; some votes may have been driven by the label. A study with an experimental setup (two equal groups) should focus on the effect of the label on idea quality to isolate the effect of the label from "objective" content quality: one group would vote on or rate ideas with labels, and the other group would rate the same ideas without labels.


References

All, A. (2011, June 15). Blogs: Customers in control at Dell's IdeaStorm. Retrieved November 7, 2014, from IT Business Edge: http://www.itbusinessedge.com/cm/blogs/all/customers-in-control-at-dells-ideastorm/?cs=11673

Arkakji, R. Y., & Lang, K. R. (2007). Digital consumer networks and producer-consumer collaboration: Innovation and product development in the video game industry. Journal of Management Information Systems, 195-219.

Bayus, B. (2013). Crowdsourcing New Product Ideas over Time: An Analysis of the Dell IdeaStorm Community. Management Science, 226-244.

Belleflamme, P., Lambert, T., & Schwienbacher, A. (2013). Crowdfunding: Tapping the right crowd. Journal of Business Venturing, xx-xx.

Borg, G. (1998). External, physiological and psychological factors and perceived exertion. Human Kinetics , 68-74.

Bratvold, D. (2011, August 31). The Top Five Crowdsourcing Mega-trends. Retrieved November 3, 2014, from Businesses Grow: http://www.businessesgrow.com/2011/08/31/the-top-five-crowdsourcing-mega-trends/

Burgman, M., Carr, A., Godden, L., Gregory, R., McBride, M., Flander, L., et al. (2011). Redefining expertise and improving ecological judgment. Conservation Letters, 81-87.

Capra, E., Francalanci, C., Merlo, F., & Rossi-Lamastra, C. (2010). Firms’ involvement in Open Source projects: A trade-off between software structural quality and popularity. The Journal of Systems and Software , 144-161.

Connelly, B., Certo, T., Ireland, D., & Reutzel, C. (2011). Signaling Theory: A Review and Assessment. Journal of Management , 39-67.

Crisci, R., & Kassinove, H. (1973). Effect of perceived expertise, strength of advice and environmental setting on parental compliance. The Journal of Social Psychology, 245-250.

Deci, E., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 627-668.

Dell. (n.d.). Dell Community Rockstars. Retrieved November 7, 2014, from Dell Community: http://en.community.dell.com/p/dcf-rockstars

Dellarocas, C., Fan, M., & Wood, C. (2004). Self-Interest, Reciprocity, and Participation in Online Reputation Systems . Cambridge: MIT Sloan School of Management.

Dow, S., Kulkarni, A., Klemmer, S., & Hartmann, B. (2012). Shepherding the Crowd Yields Better Work. Seattle.

(32)

32

Floyd, K., Freling, R., Alhoqail, S., Cho, Y. H., & Freling, T. (2014). How Online Product Reviews Affect Retail Sales: A Meta-analysis. Journal of Retailing, 217-232.

Franke, N., & Hippel, E. (2003). Satisfying Heterogeneous User Needs via Innovation Toolkits: The Case of Apache Security Software. Research Policy, 1199-1215.

Fuller, J. (2006). Why Consumers Engage in Virtual New Product Developments Initiated by Producers. Advances in Consumer Research, 639-645.

Gemser, G., & Van Oostrom, M. (2007). The impact of film reviews on the box office performance of art house versus mainstream motion pictures. Journal of Cultural Economics, 43-63.

Gerber, E., Hui, J., & Pei-Yi, K. (2013). Crowdfunding: Why People Are Motivated to Post and Fund Projects on Crowdfunding Platforms. Evanston: Northwestern University.

Girotra, K., Terwiesch, C., & Ulrich, K. (2009). Idea Generation and the Quality of the Best Idea. Fontainebleau: INSEAD.

Goldenberg, S. (2011, July 12). Environment: BP Deepwater Horizon oil spill crowdsourcing. Retrieved December 20, 2014, from The Guardian: http://www.theguardian.com/environment/2011/jul/12/bp-deepwater-horizon-oil-spill-crowdsourcing

Hanson, W., & Putler, D. (1996). Hits and Misses: Herd Behavior and Online Product Popularity. Marketing Letters, 297-305.

Harackiewicz, J. (1979). The effects of reward contingency and performance feedback on intrinsic motivation. Journal of Personality and Social Psychology , 1352-1363.

Hattie, J., & Timperly, H. (2007). The Power of Feedback. Review of Educational Research , 81-112.

Horai, J., Naccari, N., & Fatoullah, E. (1974). The Effects of Expertise and Physical Attractiveness Upon Opinion Agreement and Liking. Sociometry, 601-606.

Howe, J. (2006, June 14). The Rise of Crowdsourcing. Wired Magazine, pp. 1-5.

Huang, S.-W., & Fu, W.-T. (2013). Don't hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 621-630.

Ilgen, D., Fisher, C., & Taylor, S. (1979). Consequences of Individual Feedback on Behavior in Organizations. Journal of Applied Psychology, 349-371.

Ipeirotis, P., Provost, F., & Wang, J. (2010). Quality Management on Amazon Mechanical Turk. New York: Department of Information, Operations, and Management Sciences.


Jervis, C. (2012, March 22). Direct2Dell. Retrieved November 15, 2014, from Dell Community: http://en.community.dell.com/dell-blogs/direct2dell/b/direct2dell/archive/2012/03/22/ideastorm-2-0-launched-today

Jervis, C. (2013, October 2). Idea2read. Retrieved November 7, 2014, from Dell IdeaStorm: http://www.ideastorm.com/idea2ReadIdea?v=1394605780726&Id=087700000008pILAAY

Keskustalo, H., Järvelin, K., & Pirkola, A. (2006). The Effects of Relevance Feedback Quality and Quantity in Interactive Relevance Feedback: A Simulation Based on User Modeling. Tampere: Springer.

Laursen, K., & Salter, A. (2006). Open for innovation: The role of openness in explaining innovation performance among U.K. manufacturing firms. Strategic Management Journal, 131-150.

Lepper, M., & Henderlong, J. (2000). Turning "play" into "work" and "work" into "play": 25 years of research on intrinsic versus extrinsic motivation. Stanford University, Department of Psychology, 257-307.

Liao, V., & Fu, W.-T. (2014). Expert voices in echo chambers: Effects of Source expertise indicators on exposure to diverse opinions. Urbana: University of Illinois.

Mazzola, D., & Distefano, A. (2010). Crowdsourcing and the participation process for problem solving: The case of BP. VII Conference of the Italian Chapter of AIS: Information technology and innovation trend in organization.

McGourty, J., Dominick, P., & Reilly, R. (2013). Incorporating Student Peer Review and Feedback into the Assessment Process. Frontiers in Education, 14-18.

McIntire, M. (2007). Four types of Feedback: Using the Right Feedback Style in the Right Situation. Las Vegas: Innovative Management Group.

Morran, D. K., Robison, F. F., & Stockton, R. (1985). Feedback exchange in counseling groups: An analysis of message content and receiver acceptance as a function of leader versus member delivery, session, and valence. Journal of Counseling Psychology , 57-67.

Nam, T. (2012). Suggesting frameworks of citizen-sourcing via Government 2.0. Government Information Quarterly, 12-20.

Nemet, G. (2009). Demand-pull, technology-push, and government-led incentives for non-incremental technical change. Research Policy , 700-709.

Oldham, G., & Cummings, A. (1996). Employee creativity: personal and contextual factors at work. The Academy of Management Journal , 607-634.

Pittman, T., Davey, M., Alafat, K., Wetherill, K., & Kramer, N. (1980). Informational versus controlling verbal rewards. Personality and Social Psychology Bulletin, 228-233.


Preacher, K., & Hayes, A. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 879-891.

Schultz, M. (2015). How Idea Creativity and Hedonic Value Influence Project Success in Crowdfunding. St. Gallen: Institute of Information Management.

Schulze, A., & Hoegl, M. (2008). Organizational knowledge creation and the generation of new product ideas: A behavioral approach. Research Policy , 1742–1750.

Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 153-189.

Sykes, A. (1993). An introduction to regression analysis. Chicago: Chicago Unbound.

Tornatzky, L., & Klein, K. (1982). Innovation Characteristics and Innovation Adoption-Implementation: A meta-Analysis of Findings. IEEE Transactions on Engineering Management , 28-43.

Traxler, D. (2013, February 08). Articles. Retrieved December 16, 2014, from Practical Ecommerce: http://www.practicalecommerce.com/articles/3906-21-Key-Performance-Indicators-for-Ecommerce-Businesses

Urban, G., & Hippel, E. (1986). Lead user analyses for the development of new industrial products. Management Science.

von Hippel, E. (2005). Democratizing Innovation. Massachusetts: The MIT Press.

Vuurens, J., de Vries, A., & Eickhoff, C. (2011). How Much Spam Can You Take? Crowdsourcing Results to Increase Accuracy. Delft: Mediamatica.

Waardenburg, M. (2008, November 12). Artikel, marketing artikel [Article, marketing article]. Retrieved October 3, 2014, from De Communicatie Desk:

http://www.decommunicatiedesk.nl/artikel/-/marketing_artikel/293546/Crowdsourcing+in+stappen;jsessionid=772977D27AEFDE234892D1EA068C0D5E?p_p_auth=5fFAvzUr

Wooten, J., & Ulrich, K. (2012). Idea Generation and the Role of Feedback: Evidence from Field Experiments with Innovation Tournaments. Pennsylvania: The Wharton School.

Wu, F., Wilkinson, D., & Huberman, B. (2009). Feedback loops of attention in peer production. Palo Alto: HP Labs.

Yang, J., & Leskovec, J. (2011). Patterns of Temporal Variation in Online Media. Hong Kong: Stanford University.

Zhao, Y., & Zhu, Q. (2012). Evaluation of crowdsourcing research: Current status and future direction. Information Systems Frontiers, 417-434.


Appendix A

Table A. Mediation effects summary (* p < 0.10, ** p < 0.05, *** p < 0.01)

DV: Idea acceptance
Mediators (idea quality): Nr. of words (W), Idea clarity (C), Content quality (cq)
IV: Initiator expertise (adj. R2 = .069, df = 113)

Path                                 B         t        p
C   (total effect)                  -.08      -.46      .65
A   Initiator expertise -> W       108.72      2.55     .01**
A   Initiator expertise -> C          .58      3.87     .00***
A   Initiator expertise -> cq         .15       .91     .37
B   W -> Idea acceptance              .00       .34     .74
B   C -> Idea acceptance             -.01      -.04     .97
B   cq -> Idea acceptance             .34      3.37     .00***
C'  (direct effect)                  -.15      -.78     .44

DV: Idea acceptance
Mediators (popularity): Feedback quantity (fq), Number of votes (V)
IV: Initiator expertise (adj. R2 = .224, df = 113)

Path                                 B         t        p
C   (total effect)                  -.08      -.46      .65
A   Initiator expertise -> fq        -.25      -.53     .64
A   Initiator expertise -> V         6.02      1.23     .22
B   fq -> Idea acceptance             .02       .56     .58
B   V -> Idea acceptance              .01      3.66     .00***
C'  (direct effect)                  -.23     -1.33     .19

DV: Idea acceptance
Mediator: Platform owner feedback
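Note on the path labels. The A, B, C and C' rows in Table A follow the conventional mediation decomposition associated with Preacher and Hayes (2008). As a compact restatement only (the symbols Y, X, M_j, a_j, b_j, c and c' are notation introduced here for exposition, not estimates reported elsewhere in this thesis), each panel corresponds to the regressions:

Y    = i_1 + c X + e_1                       (path C: total effect)
M_j  = i_2j + a_j X + e_2j                   (path A, one equation per mediator j)
Y    = i_3 + c' X + sum_j b_j M_j + e_3      (paths B and C': mediator effects and direct effect)

where X is initiator expertise, M_j are the mediators listed in a panel (e.g., number of votes), and Y is idea acceptance; the indirect effect through a given mediator is then estimated as the product a_j * b_j.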
