
Bachelor thesis

"Who has more power over the users' purchasing decisions making online: peers or professionals?"

Ivanna Slipets, student number 11185422

Faculty of Economics and Business, BSc Business Administration
Supervisor: Frederik Situmeang


Statement of Originality

This document is written by Student Ivanna Slipets who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

1. Abstract
2. Introduction
3. Theoretical framework
4. Methodology
5. Results
6. Discussion
7. Limitations and suggestions for future research
8. Conclusion
9. References


Abstract

Word of mouth (WOM) used to be the most effective method of advertising to boost sales of a product. Since the sales of many products moved online, word of mouth shortly followed in the form of online reviews. This study aims to investigate the effects of this new form of WOM on reducing informational asymmetry. The existing literature provides mixed findings regarding the effectiveness of jury personality and different aspects of online reviews, such as volume, valence, and sentiment. This paper focuses on the difference between critics and users regarding the valence of the review and compares the strength of the effects of sentiment and valence. A sample of 1769 entries collected from the Metacritic.com website was analyzed using hierarchical regression. The results show that critics have a significant influence on sales, compared to users, and that valence is a stronger predictor than sentiment.


Introduction

The entertainment industry constitutes a vast share of the market, circulates billions of euros annually, and continues to grow (Marchand & Hennig-Thurau, 2013). Niches within the industry such as films, books, and travel have been extensively researched by scholars, but video games have been mostly ignored (Livingston, Nacke & Mandryk, 2011). Video games are an experience good, meaning users take a risk when purchasing one, as they cannot assess the quality of the product beforehand (Nelson, 1970). To reduce the risk of buying a "bad" experience, customers rely on reviews from other users and professional critics. Zhu & Zhang (2006) state that online reviews have a strong effect on the demand for a video game. There has been little prior research on the influence of different sources and features of reviews on video game sales. Research on related industries provides mixed results: Resnick and Zeckhauser (2002) and Zhang (2006) find that the effect of review scores on sales is significant, while Chen et al. (2004) and Duan et al. (2005) find that only the amount of feedback matters. Other previous studies examine the general difference between independent and amateur reviews, the game score and its effects on sales, and some characteristics of the review text. Researchers have tried to investigate which aspects of a review influence users' decision making. Some of these attempts are devoted to comparing the valence of the review with the level of emotional activation that the review evokes. There have been some attempts to understand the mechanics behind users' responses to different types of reviews. Still, there is a gap in knowledge on whose opinion is more influential on the sales of a game, professionals or users, and which sentiments have the biggest impact (Livingston, Nacke & Mandryk, 2011). This research tries to answer the question of to what extent a jury of experts influences the sales of a video game differently than a jury of users.

The paper will draw insights into the process of choosing which game to purchase. Websites like metacritic.com can be used to improve convenience and customer experience, as well as to decrease customers' search time (Santos et al. 2019). Game developers would get a better understanding of the needs of their customers. Moreover, insights from this research will help to create better-targeted campaigns for a game release.

The structure of the paper is as follows: the theoretical framework highlights the problems of online reviews and presents the current knowledge about them. It is followed by the methodology section, with a detailed explanation of the study's methods and procedures. The next section reveals the results of the analysis and the discussion of the findings. The discussion and conclusion summarise the study and provide implications for practice, as well as limitations and suggestions for future research.

Theoretical framework

Experience goods and the role of the online review

Products can be classified into two types: search goods and experience goods (Nelson, 1970). Experience goods are those whose quality cannot be known beforehand and can only be evaluated after the purchase. The purchase of experience goods can involve considerable risk, especially if the product is expensive (Marchand & Hennig-Thurau, 2013), and there is only a limited number of indicators that can help to assess the quality before the purchase.


A product has both search and experience attributes (Huang, Lurie, & Mitra, 2009). Search attributes include general information about the product and its features. Experience attributes, in contrast, are harder to convey and constitute the subjective opinion of the user.

Word of mouth is one of the earliest forms that experience attributes can take and used to be the main way to signal quality, but it had a limited geographical and social span (Zhu & Zhang, 2006). With the development of Web 2.0, it became possible to transfer reviews to the online environment, the Internet (Belk, 2013). Web 2.0 was a disruptive technology that opened the Internet to the mainstream public by making it easy to share information and exchange feedback between users in the web environment (Christensen, Raynor & McDonald, 2015). So word of mouth moved to the Internet in the form of online reviews. Previous research has indicated that 60% of potential customers perceive reviews as an important source that contributes to decision making (Smith, 2013). Still, online reviews are missing some important aspects that word of mouth has, such as the identity of the reviewer or the interactive element of offline conversation (Ziegele & Weber, 2015). As a result, website visitors cannot assess the reliability or expertise of the review, and its influencing power decreases.

A lot of experience goods are sold on the Internet. Websites specialized in selling products online enabled the option to leave a review that would be visible to other customers. Consumers are heavily reliant upon online reviews because of bounded rationality (Payne, Bettman, & Johnson, 1992). This implies that customers are not able to collect all the available information about a product, given the limited time they are willing to spend searching for it, and can only process a limited amount of information. Still, user reviews and critics' reviews were scattered all over the Web, and consumers would need to invest a lot of time to find all of them (Zhu & Zhang, 2006).


Video game industry

Specialized websites started to emerge with the sole goal of collecting experience attributes from across the Web in one location that is easily accessible to any user looking for them (Greenwood-Ericksen, Poorman & Papp, 2013). One of the industries built around experience goods is the video game industry. Users cannot know the quality of a game before they buy it, and the purchase carries high risk, as a game is considered an expensive purchase by its main audience (Bounie et al. 2005). The industry's customers span a wide segment of people, from the youngest to the elderly, with an average age of 35 ("U.S. average age of video gamers 2019 | Statista", 2020). So, customers invest time in research before a purchase. One of the websites that provide reviews is metacritic.com ("About Us - Metacritic", 2020). The website provides a review score for each game, which is a combined score from independent critics, the critics' written reviews, the user score, and users' written comments (Greenwood-Ericksen, Poorman & Papp, 2013).

User versus expert reviews

User and critic reviews differ in several ways. Critics are professionals who are hired by an independent party to test the game and provide their opinion of it (Reinstein & Snyder, 2005). Users are regular players who voluntarily leave a review. The critic's review is provided by the independent professional shortly after the game's release and rarely later (Santos et al. 2019). Critics are more focused on the technical aspects and give more objective comments, and their reviews are usually emotionless. One critic writes reviews for a large number of games as part of their job. On the contrary, user reviews are often highly emotional and less informative (Santos et al. 2019). Users tend to describe their experiences and feelings during the game, and they keep adding reviews long after the game is released.

Critics' and users' reviews can align in their opinions or contradict each other. Two theories that explain the behaviour of users in response to such discrepancies are the bandwagon effect and the authority heuristic (Sundar et al. 2009). The first refers to the notion that users tend to follow other users' opinions and do as the majority does. The authority heuristic, in contrast, treats the expert's opinion as the dictating one, meaning that the user follows the expert's opinion because of their status and authority. Sundar et al. (2009) showed that the bandwagon effect can prevail (Livingston, Nacke & Mandryk, 2011). So expert and user scores may have different influencing power over the customer and can outweigh each other. There have been many studies examining the effect of reviews, but the results tend to be mixed. The following are some of the aspects that explain why the results of former studies are mixed. The main reason is the question of credibility. Expert reviews are supposed to be more influential, as they are perceived as more credible and enjoyable (Park & Nicolau, 2015). Expert ratings possess sufficient trustworthiness through their status and unbiased reputation, which is not influenced by the volume of reviews (Flanagin & Metzger, 2013). Expert credentials are not widely available, which means they are complicated to obtain and more resistant to external influences and pressure. Nevertheless, experts should be perceived as such by customers, otherwise their authoritative power is reduced (Willemsen, Neijens, & Bonner, 2012). Expert judgments are powerful in customers' minds as a traditional source.


The credibility of user-generated content is not clear, due to its volume and scattering (Hovland, Janis, & Kelley, 1953). Another reason is that traditional signals of credibility are absent (Metzger & Flanagin, 2008). Customers cannot assess the quality of the assessment or the level of expertise that the reviewer holds. On the other hand, users often possess experiential credibility, as they have tried the product, which is the essential feature of experience goods (Flanagin & Metzger, 2008); that is why conventional authority might be replaced by the wisdom of the crowd (Madden & Fox, 2006, p. 2). The next aspect in favour of user-generated reviews is that they are difficult to manipulate; because of their volume, user reviews can be more trustworthy than critic reviews. Moreover, there is informational social influence, which states that, lacking their own experience, people trust peers' judgments more than their own (Cosley, Lam, Albert, Konstan, & Riedl, 2003). In line with this theory, a user's rating of an item is influenced by other users' judgments. All of this makes user-generated reviews largely immune to manipulation or attempts at influence by an interested party (Flanagin & Metzger, 2013). Another important aspect is that consumers trust reviews from other users, as they believe in similarities between themselves and other users, which is absent in the case of experts (Tussyadiah, Park, & Fesenmaier, 2008).

To sum up, previous studies conclude that experts still exert more authoritative power over customers. Still, the world has been experiencing rapid changes in consumption and buying habits, as more products move to the online environment and rely heavily on reviews (Schuckert, Liu & Law, 2015; Bordonaba-Juste, Lucia-Palacios & Polo-Redondo, 2012), simultaneously with the rise of the sharing economy (Bounie et al. 2005).


Another point concerns the nature of the video game field: the core of a computer game is the user experience, the personal, active interaction between the player and the game (Calvillo-Gámez, Cairns & Cox, 2015). This research assumes that previous studies might be outdated and that social forces have shifted power towards the experiential credibility of users. Experiential credibility is therefore assumed to be more important, with the potential to outweigh the more traditional expert credibility. Therefore:

Hypothesis 1: The user jury has a bigger influence on game sales than the expert jury, independently of the direction of the review valence.

The sentiment of the online reviews

Online reviews often consist of a review score (the valence of the review) and a review text (Ziegele & Weber, 2015). Both influence the customer's decision making and possess different features. Website visitors use the aggregated review score and the text reviews for different reasons. The review score usually serves as a short indicator of the product's performance (Park, Lee & Han, 2007), while text reviews are used as the next level, for customers who would like to know more and get a feeling for the product (Park & Lee, 2008). Moreover, the two types differ in terms of trustworthiness. Aggregated scores are supposed to summarise the opinion of many customers but do not provide any grounds, while text reviews can be perceived as more valid and legitimate (Ziegele & Weber, 2015). Ziegele & Weber (2015) state that even though both types are usually available on a website, there has not been enough investigation into the difference in effects between the two. In their research, they supported the hypothesis that the score has a stronger influence than the text review itself for reviews that customers find credible. This paper is going to examine the text sentiments of online reviews as well as the score of the review.

Reviews have been heavily studied by previous scholars from different angles, and some of the results are outlined below. A review can be positive, negative, mixed, or moderate. Zagal, Ladd & Johnson (2009) managed to identify nine common themes that game reviews possess but did not look at the influence of each separate theme. Thomas, Orland et al. (2007) conducted a study showing that online reviews are vastly inconsistent with each other. Research by Baumeister et al. (2001) showed that negative reviews have more influencing power than positive ones; however, this was not examined in the area of video games (Livingston, Nacke & Mandryk, 2011). Another set of studies focused on extreme versus moderate review valence, but the results were mixed as well: one study showed that the most helpful reviews were those with an extreme opinion (Forman, Ghose & Wiesenfeld, 2008), while another proved the opposite, that balanced reviews are more useful (Mudambi & Schuff, 2010). The reason why moderate reviews can have a bigger effect is that consumers perceive them as more honest and truthful, which increases the credibility of the text (Purnawirawan, 2015; Hennig-Thurau, Wiertz & Feldhaus, 2015). Forman et al. (2008) found that one-sided reviews are more helpful than moderate ones. This implies that users perceive extreme reviews, such as 1 or 5 stars, as more useful than a rating of 3 (Pavlou and Dimoka, 2006). The reason is that they reduce informational asymmetry better than moderate reviews (Cheung, Lee & Rabjohn, 2008). This tendency gives rise to a U-shaped relationship. Despite the extensive investigation, online reviews are still a grey area for scholars. Below are a couple of theories that strive to explain this phenomenon.


The first theory that can partially explain the customer's response is attribution theory (Hao, Ye, Li & Cheng, 2010). It states that the perceived warrant of a review depends on whether the underlying reasons for the review are external or internal, that is, whether the review is based on the product's features or on the personal experience of the person writing it. Reviews with external reasons are seen as more legitimate.

Consumers face informational asymmetry when a product is released and use online reviews as a tool to fill this gap. Hennig-Thurau, Wiertz & Feldhaus (2015) showed that negative reviews are more influential, especially in the early stages of adoption (see also Hao et al., 2010). The effect can be explained by the diagnosticity of information and by prospect theory.

The diagnosticity of information states that positive reviews dominate in the online environment, which makes negative reviews more attention-catching, as the user does not expect them. Moreover, negative reviews are perceived to be more honest (Hennig-Thurau, Wiertz & Feldhaus, 2015; Purnawirawan, 2015).

Prospect theory states that consumers want to protect themselves from a negative experience more than they want to gain a positive one, so they value negative reviews more. Another reason why negative reviews might be more useful is that they allow customers to assess the quality of products and differentiate between categories, while positive ones are rather vague about the level of quality (Purnawirawan, 2015).

On the other hand, customers looking for reviews already have predisposed attitudes; they look at positive reviews to reassure themselves of the correctness of their choice and disregard negative comments.

Nevertheless, negative reviews are on average perceived as more helpful, while positive ones are considered more enjoyable (Park & Nicolau, 2015).


To sum up, Ziegele & Weber (2015) showed that the aggregated score has a stronger influence than the review text. Other studies highlighted the difference between the effects of sentiments. The main focus of this paper is the comparison of jury personalities. All of these factors manifest in the proposed hypotheses:

H2a: Critics' valence of the online review has a bigger influence on game sales than critics' positive sentiment of online reviews.

H2b: Critics' valence of the online review has a bigger influence on game sales than critics' negative sentiment of online reviews.

H2c: Users' valence of the online review has a bigger influence on game sales than users' positive sentiment of online reviews.

H2d: Users' valence of the online review has a bigger influence on game sales than users' negative sentiment of online reviews.

As mentioned in the introduction, the game industry is rapidly growing, and understanding its processes and relationships with customers is crucial for its survival and development. As reviews help customers decide on a purchase, they can be a strong leverage point for game sales. It has been shown that game sales are heavily influenced by the review score: a one-point increase in the review score is associated with a 4% increase in game sales (Zhu & Zhang, 2006). The average industry price for the development of a new game is seven million dollars, so a bad review can waste a large investment of a company. Additionally, feedback received from critics and users in the form of reviews can significantly shape the direction of industry development and the adoption of standards and technologies (Goldenberg et al. 2004). So, knowledge of the triggers in online reviews can bring a lot of insight into what the most profitable vector of development for producers would be. The precise effect of valence on sales is unknown: Chevalier & Mayzlin (2006) showed that there is a positive relationship between positive reviews and sales, while Chen, Wu, & Yoon (2004) did not find any relationship at all. Moreover, Berger et al. (2010) showed that negative reviews can be associated with an increase in sales, the reason being that negative reviews tend to stick in the mind better, which means the game stays in the customer's mind longer. This paper assumes that the effect of online reviews is made up of such factors as jury personality, valence, and the sentiment of the review. The investigation proposed in this paper is going to fill this knowledge gap and provide industry players with insights regarding the behaviour of their audience.

Methodology

This study has an empirical nature and examines the video game industry. It uses second-hand data provided by the University of Amsterdam. Quantitative data is used because the aim of the study is to establish the correlational link between different types of jury ratings and the number of product sales. The secondary data was collected from the metacritic.com website in 2019 with the use of Python software. The website provides critics' and users' reviews and scores for the video game, movie, TV, and music industries ("About Us - Metacritic", 2020). Metacritic was chosen because it is considered one of the most respected sources among users and professionals alike, has an extensive base of reviewed games (Greenwood-Ericksen, Poorman & Papp, 2013), and covers a wide geographical area. The Metacritic databank consists of 3001 game samples and includes items such as the Metacritic score (the critics' opinion), the user score, sales per game, and the texts of reviews. The databank carries more content, but all other indicators are ignored for this study. The paper takes into account qualitative data in the form of the review text and the game score. The study examines the valence (score) and the sentiment of the review, that is, whether the review is negative or positive. The text data was analyzed and coded into quantitative items with the use of the LIWC scale (Tausczik & Pennebaker, 2010). Reviews without a defined emotional shade are excluded from the study.
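LIWC itself is a proprietary dictionary-based tool, so the exact coding cannot be reproduced here; the following minimal Python sketch only illustrates the general idea of dictionary-based sentiment scoring, with hypothetical word lists standing in for the LIWC categories.

```python
# Simplified illustration of dictionary-based sentiment scoring.
# LIWC is proprietary; the word lists below are hypothetical stand-ins
# used only to show the shape of the output (percentage of words falling
# into a positive or negative category, as LIWC reports).
import re

POSITIVE_WORDS = {"great", "fun", "love", "amazing", "enjoyable"}   # hypothetical
NEGATIVE_WORDS = {"boring", "broken", "hate", "buggy", "awful"}     # hypothetical

def sentiment_percentages(review_text: str):
    """Return (positive %, negative %) of words in the review text."""
    words = re.findall(r"[a-z']+", review_text.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

print(sentiment_percentages("Great story but the ending was boring and buggy"))
# roughly (11.1, 22.2) for this 9-word example
```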

A correlational research design is adopted in this study (Trochim & Donnelly, 2001). The research strives to show patterns in the data but does not explain them or prove causality, so irrelevant qualitative content is dropped. Hierarchical multiple regression analysis is adopted as the design choice for this study. This type of regression is chosen because it allows comparing the intensity of the interventions and shows a clear pattern in the data, while also allowing to control for other variables. So this type of regression is the most effective in reducing the noise and identifying the strongest effect. The study has six independent variables that are examined separately: critic review valence, user review valence, critic review sentiment positive, critic review sentiment negative, user review sentiment positive, and user review sentiment negative. The dependent variable is the sales of the game. Control variables include the genre, thumbs up user, total thumbs user, rating, volume, and the number of players.
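As an illustration of how such a design matrix with dummy-coded controls could be assembled, the sketch below uses pandas; the file and column names are hypothetical placeholders, not the actual field names of the databank.

```python
# Sketch of assembling the regression dataset with dummy-coded controls.
# File and column names (e.g. "genre", "rating", "global_sales") are
# hypothetical placeholders.
import pandas as pd

df = pd.read_csv("metacritic_games.csv")          # hypothetical file

# One dummy per genre / rating category, dropping the reference level
genre_dummies = pd.get_dummies(df["genre"], prefix="genre", drop_first=True)
rating_dummies = pd.get_dummies(df["rating"], prefix="rating", drop_first=True)

controls = pd.concat(
    [df[["volume_user", "volume_critics", "thumbs_up_user",
         "total_thumbs_user", "multiplayer"]],
     genre_dummies, rating_dummies],
    axis=1,
)
y = df["global_sales"]
```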

The study includes six baseline equations, one for each independent variable:


1) y = b0 + b1x1 + e, where x1 is critic review valence

2) y = b0 + b1x2 + e, where x2 is user review valence

3) y = b0 + b1x3 + e, where x3 is critic review sentiment positive

4) y = b0 + b1x4 + e, where x4 is critic review sentiment negative

5) y = b0 + b1x5 + e, where x5 is user review sentiment positive

6) y = b0 + b1x6 + e, where x6 is user review sentiment negative

In each equation, y is the sales result, b0 is the expected sales result when the game score and review are absent, b1 is the slope, and e is the residual for the i-th unit.
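The following is a minimal sketch of the blockwise (hierarchical) regression described above, using Python and statsmodels in place of SPSS. The file and variable names are hypothetical, and all variables are z-standardized so that the printed coefficients are comparable to standardized βs.

```python
# Minimal sketch of the hierarchical (blockwise) regression, with an
# F-test on the R^2 change between consecutive blocks.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("metacritic_clean.csv")            # hypothetical cleaned file

controls = ["volume_user", "volume_critics", "total_thumbs_user",
            "thumbs_up_user", "multiplayer", "genre_action", "genre_strategy"]
blocks = [
    controls,
    controls + ["valence_user", "valence_critics"],
    controls + ["valence_user", "valence_critics",
                "user_sent_pos", "critics_sent_pos"],
]

z = (df - df.mean()) / df.std()                      # z-standardize all columns
y = z["global_sales"]

previous = None
for cols in blocks:
    model = sm.OLS(y, sm.add_constant(z[cols])).fit()
    if previous is not None:
        # F-test for the R^2 change between this block and the previous one
        df_num = model.df_model - previous.df_model
        df_den = model.df_resid
        f = ((model.rsquared - previous.rsquared) / df_num) / \
            ((1 - model.rsquared) / df_den)
        p = stats.f.sf(f, df_num, df_den)
        print(f"R2 change: {model.rsquared - previous.rsquared:.3f}  F={f:.2f}  p={p:.3f}")
    print(model.params.round(3))                     # standardized coefficients
    previous = model
```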


The SPSS program is used for the analysis of the data, as it offers the flexibility and the wide range of statistical tools needed for the study. To analyze the data, an ANOVA is computed. The analysis shows the intensity of the different examined effects in different versions of the model. The hypotheses are tested by comparing the standardized β of the relevant variables to identify the strongest effect. The data is considered valid, and a reliability cutoff point of .95 is used. The source of the data is independent and perceived as reliable by the user audience; it is also the opinion leader on the market and functions internationally (Santos et al. 2019).

Results

Critics' and users' reviews were analyzed and coded into six quantitative variables. The Metacritic scores provided by users and critics were coded into two variables, namely user review valence and critic review valence. After cleaning the data with the Alteryx software (see Appendix) and excluding irrelevant and missing items, the sample size is 1769, which is large enough for the data to be considered approximately normally distributed. No missing values were present in the final sample. The scatterplot did not show a random allocation or a linear relationship (see Appendix). Another assumption for the regression analysis is that multicollinearity is not present. All the data meets this assumption except for the TotalThumbs and ThumbsUp variables. As these are control variables, they do not affect the independent variables and will be ignored (Allison, 2012). For the purpose of this research, the data is considered to be valid.
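The multicollinearity check reported here can be reproduced by computing variance inflation factors; the sketch below, with hypothetical file and column names, shows one way to do this in Python.

```python
# Sketch of the multicollinearity check: variance inflation factors (VIF)
# for the predictors. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("metacritic_clean.csv")             # hypothetical cleaned file
X = sm.add_constant(df[["volume_user", "total_thumbs_user", "thumbs_up_user",
                        "volume_critics", "valence_user", "valence_critics"]])
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vifs.round(2))   # values well above 10 flag problematic collinearity
```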

The data has been analyzed and the results are presented in this section (Table 1). The average number of sales of a game is .1477 mln. (SD = .48056). The sample includes a wide variety of genres. The most common genres are action (M = .4946; SD = .50011), adventure (M = .2431; SD = .42906), sports (M = .0582; SD = .23423), strategy (M = .1730; SD = .37834), and shooter (M = .1798; SD = .38410). Total Thumbs User has an M of 5.1059 (SD = 11.05234), and Thumbs Up User has an M of 2.9397 (SD = 7.17130). Multiplayer games constitute .5709 (SD = .49508) of the whole sample. The games in the sample had different ratings: T-rated games account for .3482 (SD = .47654) of the sample and M-rated games for .3024 (SD = .45944), with a variety of other ratings making up the rest. The volume of user reviews ranges from 1 to 100 with an average of 38.5263 (SD = 38.43132), while the volume of critics' reviews ranges from 1 to 112, with an average of 23.0475 (SD = 18.46435).

The user review valence ranges between 0 and 10, with an M of 6.7265 (SD = 2.08546). The critics' review valence ranges between 6 and 100, with an M of 73.1791 (SD = 12.86883). The critics' review sentiment scores were mainly positive, with an average positive score of 6.0640 (range 0 to 22.22; SD = 2.00538) and an average negative score of 2.4187 (range 0 to 21.02; SD = 1.64023). The user review sentiments were also mainly positive, with an average positive score of 6.9423 (range 0 to 55.56; SD = 4.02751) and an average negative score of 2.5279 (range 0 to 25.85; SD = 1.67439). The correlational analysis of the key variables of the study provides evidence that critics' valence (r = .197, p < .001) has a positive correlation with global sales, while users' positive sentiment (r = -.044, p = .033) and critics' negative sentiment (r = -.048, p = .021) have negative correlations with sales. The rest of the independent variables do not show significant correlations with the dependent variable.

Table 2

Summary of Hierarchical Regression Analysis for Variables Predicting Games Sales (N = 1769)

Variable                       Model 1            Model 2            Model 3            Model 4
                               β         VIF      β         VIF      β         VIF      β         VIF
Volume User                    0.133***  2.190    0.085**   2.380    0.084**   2.393    0.085**   2.397
Total Thumbs User              0.066     15.079   0.063     15.087   0.060     15.106   0.059     15.108
Thumbs Ups User                0.108     14.123   0.109     14.217   0.111     14.243   0.111     14.244
Volume Critics                 0.062*    1.960    0.040     2.013    0.040     2.013    0.041     2.014
Multiplayer                    -0.012    1.095    -0.023    1.118    -0.026    1.123    -0.025    1.125
Genre: Action                  -0.126*** 2.059    -0.123*** 2.060    -0.125*** 2.066    -0.122*** 2.100
Genre: Adventure               -0.041    1.493    -0.043    1.496    -0.035    1.534    -0.034    1.547
Genre: Sports                  -0.018    1.194    -0.029    1.222    -0.036    1.240    -0.034    1.247
Genre: Strategy                -0.082*** 1.412    -0.076**  1.419    -0.080**  1.425    -0.077**  1.459
Genre: Shooter                 0.033     1.730    0.040     1.748    0.039     1.756    0.042     1.780
Rating: T                      0.045     1.446    0.046     1.448    0.043     1.455    0.045     1.472
Rating: M                      0.043     2.046    0.057     2.063    0.050     2.107    0.053     2.151
Valence User                                      -0.034    1.408    -0.040    1.642    -0.047    1.752
Valence Critics                                   0.138***  1.565    0.159***  1.792    0.154***  1.971
User Sentiment Positive                                              0.017     1.243    0.015     1.252
Critics Sentiment Positive                                           -0.05*    1.269    -0.05*    1.297
User Sentiment Negative                                                                 -0.023    1.347
Critics Sentiment Negative                                                              -0.003    1.423
R²                             .106               .118               .120               .121
F for change in R²             17.261**           12.427**           2.088              .443

*p < .05. **p < .01. ***p < .001


A hierarchical linear regression analysis was performed to assess the relationship between sales and user review valence, critics' review valence, user review sentiment, and critics' review sentiment. The chosen control variables are the volume of user and critics' reviews, the ratings, Total Thumbs User and Thumbs Up User, the number of players, and the five main genres. The summary of the analysis can be found in Table 2; other relevant information can be found in the Appendix.

The first block included all the control variables. The R² was 10.6% and the model was statistically significant (p < .001).

The second block added the first two independent variables, user review valence and critics' review valence. This block of the model was also statistically significant (p < .001). It accounts for 11.8% (R²) of the variability in game sales, which means that the R² change was .012 with the addition of the two independent variables.

Two additional variables were added to the model in block 3, namely user review sentiment positive and critics' review sentiment positive. The third block of the model was statistically significant (p < .001). The R² associated with the third block is 12%, which corresponds to a change of .002 that is not significant (p = .124).


The final block of the analysis was devoted to the negative review sentiment of users and critics. The fourth model is statistically significant (p < .001) and its R² is 12.1%. The R² change was .000 and is not significant (p = .642).

Moving on to the influence of the individual variables, the first block examined the effect of the control variables. The volume of user reviews (p < .001; β = .113) and the volume of critics' reviews (p = .051; β = .062) have a significant influence on game sales. The volume of critics' reviews stays significant for the first two models and loses significance in the third and fourth, while the volume of user reviews stays significant in all the models. The genres that show a significant influence are action (p < .001; β = -.126) and strategy (p < .001; β = -.082), and they stay significant in all the models. None of the ratings proved to be significant in any of the models.

The block two model includes the first set of independent variables. User review valence does not have a significant effect on the sales of the game (p = .195), with a β coefficient of -.034; game sales decreased by 0.08 with an increase of 1 unit in the user review score.

Critics' review valence does have a significant effect on game sales (p < .001). The regression coefficient is .138, and a change of 1 unit in critics' review valence is associated with an increase in game sales of .005 points.

So we can conclude that Hypothesis 1 is partially rejected. The ideal comparison would require both variables to be significant, in which case the comparison of the standardized β would point out the stronger effect. However, given that critics' review valence is significant and has a higher β, we can conclude that critics' review valence is the stronger predictor of sales.


Block three includes the positive sentiment variables and is reported in the following paragraphs. The user positive sentiment variable did not prove to be significant, while the critics' positive sentiment variable shows significant results for game sales. User positive mood is not significant (p = .498; β = .017; unstandardized B = .002). Critics' positive mood does have a significant impact (p = .05; β = -.05; unstandardized B = -.012).

The fourth block includes the remaining two variables, which correspond to the negative sentiment of both juries. User negative mood is not significant (p = .372; β = -.023; unstandardized B = -.007). Critics' negative mood does not have a significant impact either (p = .896; β = -.003; unstandardized B = -.001).

We can safely compare only critics' valence and critics' positive sentiment, as both have a significant score. Based on the βs, valence outweighs the sentiment score. Therefore, Hypothesis 2a is supported. The other variables do not have a significant score, so we cannot draw a valid conclusion about their effect. Nevertheless, by looking at the β coefficients of the remaining variables, it is clear that the valence scores outweigh the corresponding sentiment scores. This allows us to partially support Hypotheses 2b, 2c and 2d.

Discussion

Multiple studies have already been conducted on the topic of online reviews. Nevertheless, the results were mainly mixed (Resnick & Zeckhauser, 2002; Zhang, 2006; Chen et al., 2004; Duan et al., 2005). The goal of this study was to confirm or challenge previous findings regarding the general effect of online reviews and to broaden the knowledge base in the specific field of video games.

Two hypotheses were proposed in the study. The first focused on the difference between critics' and users' influence. The literature review showed that the influencing dominance fluctuates between the two, with a slight preference for the personality of the critic (Park & Nicolau, 2015; Madden & Fox, 2006). The assumption was made that the power has shifted to users due to multiple environmental reasons (Schuckert, Liu & Law, 2015; Bordonaba-Juste, Lucia-Palacios & Polo-Redondo, 2012; Bounie et al. 2005). After conducting the analysis, this appeared to be a wrong assumption. The test showed that experts have a significant influence on game sales, and thus on consumer choice. Surprisingly, users do not have a significant influence on sales. This could be evidence that society does not trust the online community just yet (Brandtzæg & Heim, 2008). So, Hypothesis 1 can be partially rejected. The finding stays the same in all the tested models, which had additional sentiment variables. The examined aspect of the review used for this test was the review valence, a numerical representation of the reviewer's opinion on a scale from 1 to n. The critics' valence score has a positive relationship with game sales, as predicted (Flanagin & Metzger, 2013); as the score increases, game sales rise. Still, the findings have reduced reliability, as only one of the examined variables proved to be significant.

The second hypothesis focused on the comparison of different variables that the review holds. This research focused on sentiment as the variable to examine and compare with valence. The result contradicts previous research. Out of the four examined options (users and critics, negative and positive emotional shade), only one proved to be a reliable predictor of game sales. Critics' positive mood appeared to be the only combination of sentiment direction and jury personality that influences potential buyers. Moreover, the variable shows a negative relationship with sales: the better the sentiment of the review, the worse the game sales. Both findings show that negative sentiments are not influential and that positive sentiment can lead to a decrease in sales, which is unexpected and contradicts the previous literature (Baumeister et al., 2001). Nevertheless, as Livingston, Nacke & Mandryk (2011) mentioned, this had never been tested in the field of video games. This result shows that there is a gap in the existing scientific literature about online reviews and that further investigation is needed. Moreover, the reliability of the conclusion is reduced, as four out of six variables did not show significant results.

Interesting findings are connected to the chosen control variables. Supporting the finding of Duan et al. (2005), the volume of user and critics' reviews is indeed a strong and reliable predictor of game sales, and this holds for all the models tested. Surprisingly, the volume of user reviews has a bigger slope than that of critics' reviews, meaning that user volume has a bigger influence than the critics', which is not in line with the other findings of this study. Both show a positive relationship with game sales: the more reviews there are, the higher the sales will be. Conversely, the relationship could also run the other way around, as the more copies of a game are sold, the more users are likely to leave reviews. So, future research should examine the causality of these relationships. Other significant findings include two out of the five game genres: strategy and action. Interestingly, although these are among the most popular genres, they have a negative relationship with sales (Research, 2020). So some of the most in-demand genres are indicators of a decrease in sales. This is also a potential research question for future studies.

Implications

This research carries a set of important implications both for the private sector and for the academic community. The paper focuses on a field of rising importance that is still relatively new and unexamined. More and more economic, social, and political operations are moving to the Web, which means that the power of the online environment is going to increase (Dhillon & Uppal, 2016). Consequently, online reviews are gaining weight and power as well. Because online environments and electronic word of mouth are relatively new phenomena, the field is under-examined and contains a lot of gaps. This study attempts to fill some of these gaps and map the road for future exploration. To be more precise, the research laid out evidence that variables such as critics' review valence and critics' positive emotions have significant effect sizes, meaning that these aspects of the online review have the strongest influence on game sales. The lack of significant effects for any of the examined variables focused on users' reviews, except for the volume, showed that users do not have a consequential effect on sales.

Moreover, this is the first paper that focuses on the comparison of the critic and user juries while controlling for different aspects of the review. The current research reinforces the previous assumption that, so far, the critics' opinion is more influential than the users'.


Another finding of the analysis adds to the view that the review valence has a stronger effect than the volume of the reviews, the sentiment, or other features.

This study is unique in the sense that it focuses on the video game industry, while the majority of research has been conducted in other fields, such as films and hospitality (Livingston, Nacke & Mandryk, 2011). This could explain the deviation from some of the expected results shown in other studies, such as negative reviews being more influential than positive ones.

This paper derives important practical implications for video game production companies as well as for website administrators.

The website administration should invest more in involving more professional critics to review games. Moreover, it should build the website so that the critics' opinions are more easily accessible than the users' feedback. Both recommendations follow from the finding that professional opinion has more power over the potential buyer's decision making. The website has an interest in reducing and simplifying search costs for buyers, as they will then be more satisfied and more likely to return for the next purchase (Srinivasan, Anderson & Ponnavolu, 2002). Furthermore, the website can build nudges into the user interface. The website should also try to highlight the aspects of the reviews that make the biggest difference, such as clearly showing the volume of reviews or the positive sentiment of the critics. The less important features should be made less attention-catching. All of the above would help create the most effective and efficient user experience on the website and turn visitors into buyers.


Game developers should keep in mind that while the end user is the one who purchases and enjoys the game, the critic is the one who influences the purchase to a large extent. Previous research has extensively explained what experts focus on while giving a review, so by targeting the aspects that are important to critics, developers can considerably boost their valence and positive sentiment scores, which have a direct impact on sales (Santos et al. 2019).

Limitations and suggestions for future research

The study has several limitations. The biggest issue is that the research was done on one particular industry, video games. So the generalisability of the results is debatable, as consumers in different industries might behave differently. The conclusions that were drawn regarding online reviews stay valid for video games but might deviate in other fields. To overcome this limitation, future studies should conduct research that includes multiple industries.

Another limitation of the research is that causality was not taken into account while conducting the regressions. As mentioned above, some of the effects might have a reversed relationship, such as the volume of reviews and sales. The other point is that the study shows that critics are more influential than users, while it is unclear whether this is due to their expert credibility or to the character of the review itself, which is more objective and less emotional. So deeper investigation into the underlying mechanism is needed.


The third limitation concerns the quality and variety of the previous studies which formed the basis for the theoretical framework. Only a small portion of studies of online reviews focused on video games, while the majority researched the movie and travel industries (Livingston, Nacke & Mandryk, 2011). So the transferability of information and insights is lower than it would be if a bigger proportion of previous studies had covered the video game field.

The fourth limitation of the study concerns the scatterplot. The data pattern obtained by plotting the independent and dependent variables proved to be non-random. This finding means that a key variable was missed in this research. Finding the missing variable can be a direction for future research.

The final limitation is that the review data was collected from one website, while there are multiple websites on the Web that function similarly, so the chosen sample is limited to only one source. Future studies should collect data from multiple sources to ensure validity and generalisability.

Conclusion

The topic of online reviews is gaining more attention as companies move online and the online environment develops further (Schuckert, Liu & Law, 2015; Bordonaba-Juste, Lucia-Palacios & Polo-Redondo, 2012). So, it is important to understand which aspects of a review make a difference and lead to a change in sales. Overall, online reviews have a significant impact on video game sales. The paper looked at two authoritative personas, users and critics, who can provide potential buyers with reviews. Both possess a specific influencing power over the customer: critics have expert credentials and users have experiential credibility. This research tested which of these powers prevails, as expert credibility is traditionally stronger, while the development of the Web could have shifted power towards experiential credibility. The result is that critics still have a stronger influence over the website visitor than other users. The other examined variable was the sentiment of the review, the mood that prevails in the text review, positive or negative. The result of the analysis showed that only positive sentiment is a reliable predictor of game sales, on the condition that the review author is a critic. Moreover, website visitors are more affected by the aggregated score than by the individual text message. The volume of reviews, as well as some of the genres, was shown to have a significant effect on sales. The findings add insights to the knowledge pool of online reviews and provide developers and website administrators with valuable information. It is clear that the Web will continue to develop and integrate into more aspects of modern life, so it is crucial to understand its rules and processes. That is why this paper and similar ones are important, and further research on this topic should be encouraged.


References:

Allison, P. (2012). When can you safely ignore multicollinearity? Statistical Horizons. Retrieved 18 June 2020, from https://statisticalhorizons.com/multicollinearity
Baumeister, R., Bratslavsky, E., Finkenauer, C., & Vohs, K. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323-370.
Berger, J., Sorensen, A. T., & Rasmussen, S. J. (2010). Positive effects of negative publicity: When negative reviews increase sales. Marketing Science, 29(5), 815-827.
Bounie, D., Bourreau, M., Gensollen, M., & Waelbroeck, P. (2005). The effect of online customer reviews on purchasing decisions: The case of video games. Retrieved July 8, 2009.
Brandtzæg, P. B., & Heim, J. (2008, January). User loyalty and online communities: Why members of online communities are not faithful. In Proceedings of the 2nd International Conference on Intelligent Technologies for Interactive Entertainment (pp. 1-10).
Calvillo-Gámez, E. H., Cairns, P., & Cox, A. L. (2015). Assessing the core elements of the gaming experience. In Game User Experience Evaluation (pp. 37-62). Springer, Cham.
Chen, P.-Y., Wu, S.-Y., & Yoon, J. (2004). The impact of online recommendations and consumer feedback on sales. In Proceedings of the International Conference on Information Systems (ICIS 2004) (pp. 711-724). Seattle: Association for Information Systems.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43, 345-354.
Cheung, C. M. K., Lee, M. K. O., & Rabjohn, N. (2008). The impact of electronic word-of-mouth: The adoption of online opinions in online customer communities. Internet Research, 18, 229-247.
Christensen, C. M., Raynor, M. E., & McDonald, R. (2015). What is disruptive innovation? Harvard Business Review, 93(12), 44-53.
Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., & Riedl, J. (2003). Is seeing believing? How recommender system interfaces affect users' opinions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03).
Dhillon, D. K., & Uppal, R. S. (2016). Internet of Things: Making sense of the next mega-trend.
Duan, W., Gu, B., & Whinston, A. B. (2005). Do online reviews matter? An empirical investigation of panel data. Working Paper, University of Texas at Austin.
Flanagin, A. J., & Metzger, M. J. (2013). Trusting expert- versus user-generated ratings online: The role of information volume, valence, and consumer characteristics. Computers in Human Behavior, 29(4), 1626-1634.
Forman, C., Ghose, A., & Wiesenfeld, B. (2008). Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Information Systems Research, 19(3), 291-313.
Goldenberg, J., Libai, B., Moldovan, S., & Muller, E. (2004). The economic implications of negative word of mouth: A dynamic small-world approach. Mimeo.
Greenwood-Ericksen, A., Poorman, S. R., & Papp, R. (2013). On the validity of Metacritic in assessing game value. Eludamos. Journal for Computer Game Culture, 7(1), 101-127.
Hao, Y., Ye, Q., Li, Y., & Cheng, Z. (2010, January). How does the valence of online consumer reviews matter in consumer decision making? Differences between search goods and experience goods. In 2010 43rd Hawaii International Conference on System Sciences (pp. 1-10). IEEE.
Hennig-Thurau, T., Wiertz, C., & Feldhaus, F. (2015). Does Twitter matter? The impact of microblogging word of mouth on consumers' adoption of new movies. Journal of the Academy of Marketing Science.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and Persuasion. New Haven, CT: Yale University Press.
Livingston, I. J., Nacke, L. E., & Mandryk, R. L. (2011, August). The impact of negative game reviews and user comments on player experience. In Proceedings of the 2011 ACM SIGGRAPH Symposium on Video Games (pp. 25-29).
Madden, M., & Fox, S. (2006). Riding the waves of Web 2.0. Retrieved 10.08.10, from http://pewresearch.org/pubs/71/riding-the-waves-of-web-20
Marchand, A., & Hennig-Thurau, T. (2013). Value creation in the video game industry: Industry economics, consumer benefits, and research opportunities. Journal of Interactive Marketing, 27(3), 141-157.
Metzger, M. J., & Flanagin, A. J. (Eds.). (2008). Digital Media, Youth, and Credibility. Cambridge, MA: MIT Press.
Mudambi, S. M., & Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1), 185-200.
Nelson, P. (1970). Information and consumer behavior. Journal of Political Economy, 78 (March-April), 311-329.
Nelson, P. (1974). Advertising as information. Journal of Political Economy, 82 (July-August), 729-754.
Park, D., & Lee, J. (2008). eWOM overload and its effect on consumer behavioral intention depending on consumer involvement. Electronic Commerce Research and Applications, 7, 386-398.
Park, D., Lee, J., & Han, I. (2007). The effect of on-line consumer reviews on consumer purchasing intention: The moderating role of involvement. International Journal of Electronic Commerce, 11, 125-148.
Park, S., & Nicolau, J. L. (2015). Asymmetric effects of online consumer reviews. Annals of Tourism Research, 50, 67-83.
Pavlou, P., & Dimoka, A. (2006). The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Information Systems Research, 17(4), 392-414.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1992). Behavioral decision research: A constructive processing perspective. Annual Review of Psychology, 43, 87-131.
Purnawirawan, N., Eisend, M., De Pelsmacker, P., & Dens, N. (2015). A meta-analytic investigation of the role of valence in online reviews. Journal of Interactive Marketing.
Reinstein, D. A., & Snyder, C. M. (2005). The influence of expert reviews on consumer demand for experience goods: A case study of movie critics. The Journal of Industrial Economics, 53(1), 27-51.
Research, S. (2020). Top 10 most popular gaming genres in 2020. Retrieved 22 June 2020, from https://straitsresearch.com/blog/top-10-most-popular-gaming-genres-in-2020
Resnick, P., & Zeckhauser, R. (2002). Trust among strangers in internet transactions: Empirical analysis of eBay's reputation system. In M. R. Baye (Ed.), The Economics of the Internet and E-Commerce (Advances in Applied Microeconomics, Vol. 11). Elsevier Science.
Santos, T., Lemmerich, F., Strohmaier, M., & Helic, D. (2019). What's in a review: Discrepancies between expert and amateur reviews of video games on Metacritic. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-22.
Schuckert, M., Liu, X., & Law, R. (2015). Hospitality and tourism online reviews: Recent trends and future directions. Journal of Travel & Tourism Marketing, 32(5), 608-621.
Smith, A. (2013). Civic engagement in the digital age. Pew Research Center.
Srinivasan, S. S., Anderson, R., & Ponnavolu, K. (2002). Customer loyalty in e-commerce: An exploration of its antecedents and consequences. Journal of Retailing, 78(1), 41-50.
Sundar, S. S., Xu, Q., & Oeldorf-Hirsch, A. (2009). Authority vs. peer: How interface cues influence users. In Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (pp. 4231-4236). ACM.
Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24-54.
Thomas, D., Orland, K., et al. (2007). The Videogame Style Guide. Game Press & International Game Journalists Association.
Trochim, W. M., & Donnelly, J. P. (2001). Research Methods Knowledge Base (Vol. 2). Cincinnati, OH: Atomic Dog Publishing.
Tussyadiah, I. P., Park, S., & Fesenmaier, D. R. (2011). Assessing the effectiveness of consumer narratives for destination marketing. Journal of Hospitality & Tourism Research, 35(1), 64-78.
U.S. average age of video gamers 2019 | Statista. (2020). Retrieved 10 May 2020, from https://www.statista.com/statistics/189582/age-of-us-video-game-players-since-2010/
Bordonaba-Juste, V., Lucia-Palacios, L., & Polo-Redondo, Y. (2012). Antecedents and consequences of e-business adoption for European retailers. Internet Research.
Willemsen, L. M., Neijens, P. C., & Bonner, F. (2012). The ironic effect of source identification on the perceived credibility of online product reviewers. Journal of Computer-Mediated Communication, 18(1), 16-31.
Zagal, J. P., Ladd, A., & Johnson, T. (2009, April). Characterizing and understanding game reviews. In Proceedings of the 4th International Conference on Foundations of Digital Games (pp. 215-222).
Ziegele, M., & Weber, M. (2015). Example, please! Comparing the effects of single customer reviews and aggregate review scores on online shoppers' product evaluations. Journal of Consumer Behaviour, 14(2), 103-114.
Zhang, X. M. (2006). The lord of the ratings: Is a movie's fate influenced by professional and amateur reviews? Mimeo.
Zhu, F., & Zhang, X. (2006). The influence of online consumer reviews on the demand for experience goods: The case of video games. ICIS 2006 Proceedings, 25.


Amsterdam Business School
Plantage Muidergracht 12
1012 TV Amsterdam
The Netherlands
T +31 20 525 7384
www.abs.uva.nl

Date: June 03, 2020
Our reference: EC 20200603070617
Contact: Sophia de Jong
Telephone: (31)20-5255311
E-Mail: secbs-abs@uva.nl
Subject: EBEC approval

To: Situmeang

Ethics Committee Economics and Business (EBEC) University of Amsterdam

Dear Frederik Situmeang,

The Economics & Business Ethics Committee (University of Amsterdam) received your request nr 20200603070617 to approve your project "Who has more power over the user online: peers or professionals?".

We evaluated your proposed research in terms of potential impact of the research on the participants, the level and types of information and explanation provided to the participants at various stages of the research process, the team's expertise in conducting the proposed analyses and particularly in terms of restricted access to the data to guarantee optimal levels of anonymity to the participants.

The Ethics Committee approves of your request.

The information as filled in the form can be found at https://www.creedexperiment.nl/EBEC/showprojectAVG.php?nummer=20200603070617

Best regards,

On behalf of the Ethics Committee Economics and Business,

Prof. Dr. J.H. Sonnemans Chairman of the Committee


[DataSet1] C:\Users\Jamie\Desktop\thesis\mydata1.sav

Descriptive Statistics (N = 1769)

Variable               Mean      Std. Deviation
Global                  .1477      .48056
volumeUser            38.5263    38.43132
Avg_totalThumbsUse     5.1059    11.05234
Avg_totalUpsUse        2.9397     7.17130
VolumeCritics         23.0475    18.46435
multiplayer             .5709      .49508
action                  .4946      .50011
adventure               .2431      .42906
sports                  .0582      .23423
strategy                .1730      .37834
shooter                 .1798      .38410
T                       .3482      .47654
M                       .3024      .45944
valenceUser            6.7265     2.08546
ValenceCritics        73.1791    12.86883
Avg_UserPositive       6.9423     4.02751
Avg_criticspositive    6.0640     2.00538
Avg_UserNegative       2.5279     1.67439
Avg_criticsnegative    2.4187     1.64023

Correlations

Pearson correlation matrix with one-tailed significance levels (N = 1769 for every variable) among Global, volumeUser, Avg_totalThumbsUse, Avg_totalUpsUse, VolumeCritics, multiplayer, action, adventure, sports, strategy, shooter, T, M, valenceUser, ValenceCritics, Avg_UserPositive, Avg_criticspositive, Avg_UserNegative, and Avg_criticsnegative.
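
The correlation matrix was produced with SPSS. As a rough cross-check, it can also be reproduced outside SPSS; the sketch below is a minimal illustration in Python (pandas/SciPy), assuming the data set has been exported to a CSV file named mydata1.csv with the same column names as above. The file name, the export step, and the reduced variable subset are assumptions made only for illustration.

import pandas as pd
from scipy import stats

# Assumption: the SPSS file mydata1.sav has been exported to CSV with identical column names.
df = pd.read_csv("mydata1.csv")

# Subset of the 19 variables, chosen only to keep the illustration short.
cols = ["Global", "volumeUser", "Avg_totalThumbsUse", "Avg_totalUpsUse",
        "VolumeCritics", "valenceUser", "ValenceCritics"]

r = df[cols].corr(method="pearson")   # Pearson r, as in the SPSS Correlations table

# One-tailed p-values, matching the SPSS "Sig. (1-tailed)" rows.
n = len(df)
p = r.copy()
for a in cols:
    for b in cols:
        r_ab = r.loc[a, b]
        t = abs(r_ab) * ((n - 2) / (1 - r_ab ** 2)) ** 0.5 if abs(r_ab) < 1 else float("inf")
        p.loc[a, b] = stats.t.sf(t, df=n - 2)

print(r.round(3))
print(p.round(3))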

Model Summary

Model  R      R Square  Adjusted R Square  Std. Error of the Estimate  R Square Change  F Change  df1  df2   Sig. F Change
1      .325a  .106      .099               .45605                      .106             17.261    12   1756  .000
2      .344b  .118      .111               .45311                      .012             12.427     2   1754  .000
3      .347c  .120      .112               .45283                      .002              2.088     2   1752  .124
4      .347d  .121      .112               .45297                      .000               .443     2   1750  .642

a. Predictors: (Constant), M, sports, multiplayer, adventure, Avg_totalUpsUse, strategy, VolumeCritics, shooter, T, action, volumeUser, Avg_totalThumbsUse
b. Predictors: (Constant), M, sports, multiplayer, adventure, Avg_totalUpsUse, strategy, VolumeCritics, shooter, T, action, volumeUser, Avg_totalThumbsUse, valenceUser, ValenceCritics
c. Predictors: (Constant), M, sports, multiplayer, adventure, Avg_totalUpsUse, strategy, VolumeCritics, shooter, T, action, volumeUser, Avg_totalThumbsUse, valenceUser, ValenceCritics, Avg_UserPositive, Avg_criticspositive
d. Predictors: (Constant), M, sports, multiplayer, adventure, Avg_totalUpsUse, strategy, VolumeCritics, shooter, T, action, volumeUser, Avg_totalThumbsUse, valenceUser, ValenceCritics, Avg_UserPositive,
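
The four-step hierarchical regression summarised above was estimated in SPSS. The nested models can also be re-estimated outside SPSS; the sketch below is a minimal illustration in Python (statsmodels), assuming the same exported CSV file (mydata1.csv) and column names as in the output above. The file name and export step are assumptions, and the fourth block is left out here because its predictor list is truncated in footnote d.

import pandas as pd
import statsmodels.formula.api as smf

# Assumption: the SPSS file mydata1.sav has been exported to CSV with identical column names.
df = pd.read_csv("mydata1.csv")

# Predictor blocks as implied by footnotes a-c of the Model Summary.
blocks = [
    ["M", "sports", "multiplayer", "adventure", "Avg_totalUpsUse", "strategy",
     "VolumeCritics", "shooter", "T", "action", "volumeUser", "Avg_totalThumbsUse"],
    ["valenceUser", "ValenceCritics"],
    ["Avg_UserPositive", "Avg_criticspositive"],
]

predictors, previous = [], None
for step, block in enumerate(blocks, start=1):
    predictors += block
    model = smf.ols("Global ~ " + " + ".join(predictors), data=df).fit()
    if previous is None:
        print(f"Model {step}: R2 = {model.rsquared:.3f}")
    else:
        # F test on the R-square change contributed by the newly added block (SPSS "F Change").
        f_change, p_value, df_diff = model.compare_f_test(previous)
        print(f"Model {step}: R2 = {model.rsquared:.3f}, "
              f"R2 change = {model.rsquared - previous.rsquared:.3f}, "
              f"F change = {f_change:.3f}, p = {p_value:.3f}")
    previous = model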
