
Game On

Improving the profitability of beta tests in the gaming

industry

Tim Aardenburg - 10279792

Amsterdam, July 15, 2015

Thesis seminar for Economics and Business studies

Supervisor: Dhr. A.C.C. Gruijters

Second Supervisor: Dhr. J. Sol

Academic year: 2014-2015

Semester 2, Block 3


Abstract

The gaming industry has grown to be an ever larger part of the entertainment industry. This growth has led to rising profits for game producers, but it has also brought with it a rise in risk and production costs. For managers it is therefore becoming more important to make sure that every part of the game production process pays off, in order to negate as much risk as possible and to increase profits. In the hope of adding to the literature available on this subject, this study looks at opportunities for increasing profits in an area mostly overlooked by the scientific community when it comes to profits: beta tests. Social ties in the form of recommendations are analysed for different levels of involvement among beta testers and end users of the online game Hearthstone. The results, in combination with previous literature, show that even though higher levels of involvement do lead to more recommendations to friends, and therefore increased profits, there is only a partial difference in this effect between beta testers and end users.


Statement of originality

This document is written by student Tim Aardenburg, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Contents

Abstract

1. Introduction

2. Literature review

2.1 Social ties in gaming

2.2 Word of mouth advertising

2.3 Involvement

2.4 Conclusion

3. Hypotheses

4. Method

4.1 Sample

4.2 Data collection

4.3 Measures

4.3.1 Demographics

4.3.2 Involvement

4.3.3 Likeliness to recommend

5. Results

5.1 Combining the Revised Personal Involvement Inventory

5.2 Hypothesis 1

5.3 Hypothesis 2

5.4 Hypothesis 3

6. Discussion

6.1 Findings on the hypotheses and research question

6.2 Limitations

7. Conclusion

8. References

Appendix 1: Survey

Appendix 2: Forum message


1. Introduction

The gaming industry is one of the strongest markets in the entertainment industry. With PC games alone generating a projected revenue of over 20 billion dollars in 2015 (Meulen & Rivera, 2013) and with the gaming software market in total expected to rise above 100 billion dollars in 2018 (Brightman, 2014), it is becoming clear that gaming has long since risen above the entertainment niche of its past. However, even though the total industry revenue for gaming is well over 110 billion dollars right now (Meulen & Rivera, 2013), developing games is risky business (Griffiths, 2013). Where computer games in the past were built by a group of college students in a basement somewhere, where the only real investments were pizza, Red Bull, and time, game development in this day and age is a fair bit more costly. The budgets for big games are starting to rival those of blockbuster movies, with the new Grand Theft Auto game, GTA V, smashing the record for most expensive game ever at a mind-bending 265 million dollars in production costs (Gamecentral, 2013). Such an amount might seem like a sure loss for Rockstar Games, GTA V's developer, but even they made a profit in the end: three days after the game was launched it had earned in excess of 1 billion dollars, another entertainment industry record (Webster, 2013).

When production costs are high, and they do not have to be in the hundreds of millions of dollars, it is important that the end product sells well enough to make a profit. This goes for any product, be it a car, a movie or a game. For software, this is where beta tests come in. Even though they are not a guarantee of success, software developers will almost always want to test a product to see if it is working "as intended" (Hosselman, 2013). Just like a store might hand out free samples to test the reaction of users, software developers can have versions of a program tested by a group of potential customers to see if the program is used and perceived in the way that is intended and, more importantly, if the users are enthusiastic enough to predict good sales. These tests are called "beta tests" (Businessdictionary, n.d.).

Beta tests, or betas, have been a tried and tested method for evaluating new products, especially software. Research about their effectiveness dates back to the early 1990s (Dolan & Matthews, 1993) and the method is so widely used that there are companies that solely provide products to manage the beta testing process (Centercode, 2015). However, even though beta tests can provide a way of validating the product concept, eliminating performance issues before launch and even promoting a product (Dolan & Matthews, 1993), they are not risk free. A poorly designed beta test can destroy account relationships, generate inaccurate data on product performance, and stimulate negative publicity (Dolan & Matthews, 1993). These risks have always been there, but in recent years they have grown to become a much bigger hazard. This can partly be explained by the fact that in the past beta tests for games were usually done by a small and selected group, whilst nowadays some companies have their software tested by hundreds,


thousands or even millions of people (The Street, 2015). This means that mistakes will affect far more potential customers. Added to this is the rising popularity of "game streaming" (Kaytoue, Silva, Cerf, & Meira, 2012) through platforms like Twitch or YouTube (Twitch, 2014), where people watch other people play video games, even when these games are still in beta. For example, the final match of the first Heroes of the Storm tournament, a game that was at that time still in beta, was watched over eight hundred thousand times on YouTube alone (Heroes of the storm, 2014).

Put shortly, the risks of negative publicity and damaged relationships through beta testing have grown alongside gaming's popularity. Nowadays a mistake or error in a game can be seen by thousands of people. Add this to rising production costs and it becomes apparent that it is more important than ever to make beta testing worth its while (Hosselman, 2013). At this point it is important to emphasize that this paper will not be about improving the methods by which beta tests are done. There is already an extensive body of research on this matter (Dolan & Matthews, 1993) (Kaulio, 1998) (Tychsen, 2008) and to truly perform a meaningful analysis of different beta testing methods would be beyond the scope of a mere bachelor's thesis. Instead the focus of this study will be on an aspect of beta tests that has been largely neglected in the past. Where earlier research has focused mainly on the technical aspects of beta tests (Dolan & Matthews, 1993) (Gold & Wolfe, 2012) (Hosselman, 2013) or on the selection of the correct type of testers (Kaulio, 1998) (Olsen, Procci, & Bowers, 2011), there is no real research to be found on the direct economic implications of beta tests. Where beta tests are mostly used as a tool to reduce future risk, they bring with them their own costs and risks, as discussed previously. This study will look for a way to offset a part of these costs, not by assessing the beta tests themselves but the social environment of the testers that are participating in them, and the possibilities it provides for generating revenue. This leads to the following research question:

To what extent can the social ties of beta testers be used to increase the profitability of a beta test?

To answer this question we shall first take a look at three different aspects related to the social environment in games: social ties in general, word of mouth (WOM) advertising, and involvement. After this review of previous literature, a conceptual framework will be given, showing the hypotheses that will be used in answering the research question. Prior to analysing the actual results, a short overview will be given of the methods used and the research design. Finally, the paper ends with a discussion and conclusion of the research.


2. Literature review

2.1 Social ties in gaming.

Gaming has changed. Not only has it grown more popular (Meulen & Rivera, 2013), as shown in the introduction, but the way people play games has been changing for the last few years as well. The importance of social aspects in games has been rising steadily and, with the boom in gaming on mobile platforms (Soh & Tan, 2008), it will not be stopping. Increasingly, games have gone from being a pure form of entertainment to fulfilling part of the social needs of their players (Ang, Zaphiris, & Wilson, 2010). Whether it is through something as simple as playing Wordfeud with a Facebook friend or something as complex as a guild in a massive online environment like World of Warcraft, nowadays most games will allow a user to build a network of online connections (Kolo & Baur, 2004). These social ties are important to address when looking at the research question: not only do they have a direct effect on gaming habits, but through this effect they influence revenues as well.

Previous research has shown that social connections are a significant determinant of how long and how often a person plays a game (Wei, Yang, & Adamic, 2010). When someone starts playing a game through a friend, this person will not only play more often, but will also keep playing the game over a longer period of time. This friend does not even have to be a "real life" friend or relative; it could be someone they know from another game, and the effect remains similar (Debeauvais, Nardi, Schiano, Ducheneaut, & Yee, 2011). In today's market, where many games gain revenue over time through subscription fees or in-game micro transactions (I'm looking at you, Candy Crush...), a few weeks of extra game time will boost revenues directly (Chang, Ku, & Fu, 2013). This means that from an economic standpoint it is preferable to generate as many friend invites as possible.

In short, increasing the number of social ties among gamers will increase usage and therefore profits.


2.2 Word of mouth advertising.

In the previous section it was shown that an increase in the number of friend invites in a game should have a positive effect on profits. This is of course important to know, but how exactly does one go about increasing the number of friend invites? The answer to this specific question proves hard to find. With the gaming industry only recently coming up on the scientific radar, and the preponderance of studies aimed at the potential negative effects of gaming (Przybylski, Rigby, & Ryan, 2010), the answer will have to come from other sources. To do this we shall move away from the niche character of the question and look at a comparable concept that has been a staple in marketing research for decades: word of mouth advertising, or WOM (Johnson Brown & Reingen, 1987).

Word of mouth has long played a major role in the buying decisions of customers (Richins & Root-Shaffer, 1988). It provides unbiased product information from other consumers (Thorsten, Gwinner, Walsh, & Gremler, 2004), and the rising popularity of review sites like TripAdvisor, or Gamespot for a more study-related example, is a testament to its importance. To remain close to the original concept of friend invites, the focus here shall only be on positive word of mouth; its determinants will be discussed and, if possible, matched to a driver of friend invites.

Historically, the drivers of word of mouth advertising have proven to be a point of discussion among researchers, with several papers giving opposite findings (Anderson E. W., 1998). This might seem problematic, but it is easily addressed by making a distinction between different types of studies. Lately most research on WOM tends to be more practice-oriented, focusing on one-to-many word of mouth, especially relating to so-called "viral marketing" (Thorsten, Gwinner, Walsh, & Gremler, 2004). Even though this type of WOM is becoming increasingly important, it is not comparable to the more personal type of recommendation related to friend invites. It is therefore important to look only at drivers of WOM that are applicable on the one-on-one level instead of one-to-many, as is the case for most types of electronic word of mouth.

One of the most prominent studies on the motives for positive WOM has been done by Hennig-Thurau et al. (2014). The authors describe five main motivational categories for WOM: "focus-related utility (concern for other consumers, helping the company, social benefits, and exerting power), consumption utility (post-purchase advice-seeking), approval utility (self-enhancement and economic rewards), moderator-related utility (convenience and problem-solving support), and homeostase utility (expressing positive emotions and venting negative feelings)" (Cheung & Lee, 2012). Comparing these five motivational categories to different studies done on the drivers of personal WOM (Peres, Lovett, & Shachar, 2013) (Thurau, Gwinner, Walsh, & Gremler, 2014) (Sundaram, Mitra, & Webster, 1998) leaves us with two main determinants of WOM, satisfaction and involvement, of which only involvement is applicable to the current study.


This is because satisfaction fulfils a moderating role that relates more to WOM in general than to positive WOM specifically. Satisfaction determines whether a recommendation will be positive or negative (Anderson E. W., 1998), and since a friend invite is assumed to already be positive, this determinant falls outside the scope of this research.

2.3 Involvement.

This leaves the driver of WOM that will be used in this study: involvement. Involvement in relation to products and WOM has been extensively researched (Peres, Lovett, & Shachar, 2013) (Thurau, Gwinner, Walsh, & Gremler, 2014) (Sundaram, Mitra, & Webster, 1998), and reliable methods have been built to measure it (Zaichkowsky, 1994). Though it is often used as a standalone item in analyses, it can be split into two aspects, or as Sundaram, Mitra, and Webster (1998) formulated it: "product involvement: the personal interest in the product, excitement resulting from product ownership and product use." Both product use and ownership are important here, but it is the latter that makes product involvement an ideal research subject for beta tests.

The reason for this is that normal users, or end-users, usually have little to no say in the development of a game; after all, the game is supposed to already be finished when they buy it. Beta testers, on the other hand, do: helping to build the game is one of the main reasons they are testing (Anderson R., 2002). It is expected in this study that this difference in input and intrinsic motivation will affect product involvement and therefore WOM. This expected effect will be discussed further in the hypotheses section.

2.4 Conclusion

Previous studies on social ties in gaming show that a larger number of friend invites – or social ties – will positively affect revenues (Wei, Yang, & Adamic, 2010). When looking at word of mouth advertising to find a driver for these friend invites, it was found that product involvement is usable in analysing the difference between regular gamers and beta testers. Though it is far from the only motivator available (see table 1), it is a solid choice, both because of the relative reliability of its measurement (Zaichkowsky, 1994) and because of the expected difference in sense of ownership between the two groups (Sundaram, Mitra, & Webster, 1998). In the next section we will formulate the different hypotheses derived from the reviewed literature, as well as introduce the methods that will be used in analysing them.


3. Hypotheses

In the review of previous literature it was shown that there is a positive relation between product involvement and the amount of WOM generated by the product's users (Peres, Lovett, & Shachar, 2013) (Thurau, Gwinner, Walsh, & Gremler, 2014) (Sundaram, Mitra, & Webster, 1998). Even though this relation should serve as a foundation for the current research, none of the previous studies was done in the entertainment industry. To be sure of its validity, the relation between involvement and WOM, or more precisely between involvement and recommendations, will be tested first.

Hypothesis 1: Higher involvement with a game leads to more recommendations.

Also discussed was the expected difference in involvement between beta testers and end-users. The reason for this expectation is the higher level of input beta testers have in the end product (Anderson R., 2002). Though the sense of ownership and invested time are also expected to play a role here, resources for this study were limited. Therefore the choice was made to only check whether there is a difference in levels of involvement between beta testers and end-users. Afterwards, depending on the results that are found, this information might serve as a basis for further research in the area.

Hypothesis 2: Beta testers feel more involved with a game than end users.

To match the previous two hypotheses to the research question given in the introduction:

To what extent can the social ties of beta testers be used to increase the profitability of a beta test?

They will have to be combined, leading to the third and final hypothesis:

Hypothesis 3: Beta testers are more likely to recommend a game than end users.


4. Method

For this study an online, questionnaire-based survey was used to collect the data needed for analysing the hypotheses. The choice to use a survey instead of a more personal method was made for several reasons. First, surveys are ideal for gathering data from a large number of people (Saunders, Thornhill, & Lewis, 2009), and since the expectation was that the difference between beta testers and end-users would be small, the use of surveys is preferable. Second, by definition all social games take place at least partly online; it therefore makes sense to collect data on these games online as well, and surveys provide a quick and affordable way of doing so (Saunders, Thornhill, & Lewis, 2009). Last, to enable a thorough statistical analysis of involvement the data has to be comparable; previous research achieved this by using surveys as well (Debeauvais, Nardi, Schiano, Ducheneaut, & Yee, 2011).

It has to be mentioned that the use of surveys as a tool for data gathering brings with it some drawbacks. First, because of the lack of personal contact with the people filling in the survey, they are usually not willing to fill in a long set of questions (Saunders, Thornhill, & Lewis, 2009). This is a problem because, as said before, it is expected that a large sample size will be needed. To balance out this lack of willing subjects the survey was kept very short in the hope of acquiring more respondents. Keeping the survey short, however, brings with it another problem. With a survey, unlike with interviews, there is only one chance of data collection. This means that if any questions are missing they cannot be added later on; this risk is inherent to the use of surveys (Saunders, Thornhill, & Lewis, 2009), and though it cannot be offset completely, it was kept in mind during the survey design.

The survey was self-administered. Even though this discards the ability to give an explanation in case of ambiguity, it greatly enhances the speed at which data can be gathered and allows for a faster and easier analysis, with the data already being digital (Saunders, Thornhill, & Lewis, 2009). Furthermore, filling in the survey online guarantees anonymity, reducing subject and participant bias and increasing the reliability of any potential findings (Saunders, Thornhill, & Lewis, 2009).


4.1 Sample

The focus of this study is on the potential difference in involvement between beta testers and end-users of a game. The survey therefore had to be filled in by the players of a game that fulfilled certain requirements. First, the game of choice had to have had a beta testing period, preferably not too long ago, to increase the possibility of finding people that were active at that time. Second, the community of players of the chosen game had to allow for data gathering. And third, the players had to be easy to reach, through for example community forums or in-game mail, to enable the survey to be spread to a large number of potential subjects.

With these requirements in mind the choice was made to use the online game Hearthstone. Hearthstone is a free-to-play game available on PCs as well as mobile devices, with a player base of over thirty million players worldwide (Sirani, 2015). Moreover, the game went through two distinct beta test phases: a "closed beta" starting in August 2013, where people could only test the game if they were invited by the developer, and an "open beta" starting in January 2014, where the beta test was available to anyone interested (Gamepedia, 2014). Finally, Hearthstone has an active community on several forums, making it easier to reach players with a request to fill in the survey.

To increase the generalizability of any results (Saunders, Thornhill, & Lewis, 2009) the goal was to get as large a sample as possible. Though the minimum sample size for these types of surveys is generally set at thirty responses per category being compared, to make sure that the data is normally distributed (Saunders, Thornhill, & Lewis, 2009), this would probably come up short for this study. Given the small number of questions, as discussed previously, and the drawbacks of using online surveys, a sample size of at least 150 beta testers and 150 end-users was preferred, just to be safe.

The final sample size ended up at 2,966…

The reason for this unexpectedly high number lies in the method of data collection, which is discussed further in the next section.


4.2 Data collection

As mentioned before, data collection was done by means of a short online survey. The survey was built in Qualtrics to add to the anonymity and user-friendliness of the survey, in the hope of increasing the likelihood of people filling it in (Saunders, Thornhill, & Lewis, 2009). To get people to fill in this survey, I first contacted people in my own network who play the game, asking them to fill it in and to ask players they knew to do so as well. This first round of surveys was used not so much to acquire data as to gather feedback on the survey itself. The time respondents took to fill in the survey was measured (between 3 and 5 minutes) and a check was done to see if the questions were clear. These responses led to a small revision in which the questions were spread over more pages and spelling errors were removed. For the final version of the survey used, see appendix 1.

After the revision I again went to gamers in my own network asking for help in spreading the survey; however, data acquisition proved too slow, especially since the people that had filled in the previous version could not do so again in order to prevent biases (Saunders, Thornhill, & Lewis, 2009). In the hope of increasing the number of responses, links to the survey were posted on the official European and American forums of Hearthstone as well as on two community-built forums. Overnight this led to nearly three thousand people filling out the survey. For the text posted on the forums, see appendix 2.

To ensure data quality, several steps were taken in cleaning the data. First, any incomplete surveys were deleted. Second, surveys coming from a duplicate IP address were all deleted; though it is possible that two or more different people used the same computer to fill in the survey, the risk of insincere answers was deemed too high. Last, every set of answers was checked by hand for "joke" results, eventually leaving a usable sample size of 2,390.
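To make these steps concrete, a minimal sketch of how such cleaning could be scripted is given below. It assumes a pandas DataFrame loaded from the Qualtrics export with hypothetical column names (finished, ip_address, age); it illustrates the procedure described above and is not the actual script used for this thesis.

```python
import pandas as pd

def clean_responses(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the first two cleaning steps described above to a raw survey export."""
    # Step 1: drop incomplete surveys (hypothetical 'finished' flag).
    kept = raw[raw["finished"] == 1]
    # Step 2: drop every response whose IP address occurs more than once.
    kept = kept[~kept.duplicated(subset="ip_address", keep=False)]
    return kept

def flag_joke_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Heuristically flag responses for the manual "joke result" check, e.g. implausible ages."""
    return df[(df["age"] < 10) | (df["age"] > 90)]

# Hypothetical usage:
# cleaned = clean_responses(pd.read_csv("hearthstone_survey_export.csv"))
# to_review = flag_joke_candidates(cleaned)
```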


4.3 Measures

Here the different measures used in the survey will be explained; they are handled in the order in which they appeared in the survey.

4.3.1 Demographics

Even though for this study there will be no statistical analyses on either age or gender, these items were still put in the questionnaire to provide a warm-up of sorts. Furthermore, by looking at these items trends might become visible in the data that can be used as a basis for further research.

The main item of importance in this part is the question asking when a player started playing. The answer to this question serves as a way to split the data set into three groups: closed beta testers, open beta testers and end users (see appendix 1). These are the groups that will be compared to each other to test differences in involvement and likeliness to recommend the game.
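As an illustration of this split, the answer to the start-date question could be mapped onto the three groups as sketched below. The answer labels mirror the options shown in appendix 1; the function and column names are hypothetical.

```python
def classify_player(start_answer: str) -> str:
    """Map the 'When did you start playing?' answer to one of the three user groups."""
    if start_answer.startswith("During or before the closed Beta"):
        return "closed beta tester"
    if start_answer.startswith("During the open Beta"):
        return "open beta tester"
    return "end user"  # "After the game was launched"

# Hypothetical usage on the cleaned DataFrame from the previous sketch:
# cleaned["group"] = cleaned["start_playing"].map(classify_player)
```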

4.3.2 Involvement

In order to measure involvement amongst the participants, the Personal Involvement Inventory, or PII (Zaichkowsky, 1985), was used in the survey. Even though the PII is thirty years old, it is still used to this day, especially in the area of consumer relations (Söderlund & Colliander, 2015). One problem with the PII is that the original set of twenty items used to measure involvement was somewhat redundant (Zaichkowsky, 1994). This was especially a problem in this study because the survey was meant to be as short as possible in order to raise the response rate. Therefore a revised version of the PII was used, in which the number of items has been brought down from twenty to ten (Zaichkowsky, 1994). The ten items in the revised PII ask whether the subject finds an object important, interesting, relevant, exciting, meaningful, appealing, fascinating, valuable, involving and needed (Zaichkowsky, 1994).

To enable statistical analyses a seven-point Likert scale was used. Though a scale with an even number of points would counter the tendency of people to be drawn to the middle option (Saunders, Thornhill, & Lewis, 2009), a seven-point Likert scale was used in the original research (Zaichkowsky, 1994) and it was therefore deemed unwise to switch.


In order to keep the subject's attention and to further validate the data, six of the ten items were reverse coded (Saunders, Thornhill, & Lewis, 2009). This means that instead of ranging from negative to positive (e.g. boring-interesting), these items range from positive to negative (e.g. important-unimportant).
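For reference, a reverse-coded item on a seven-point scale is usually recoded back to a common direction before scoring by subtracting the raw score from eight. The sketch below shows this transformation; which six items are treated as reversed here is purely illustrative, as are the column names.

```python
import pandas as pd

# Hypothetical names of the six reverse-coded PII items.
REVERSED_ITEMS = ["important", "relevant", "exciting", "appealing", "valuable", "involving"]

def recode_reversed(df: pd.DataFrame) -> pd.DataFrame:
    """Flip reverse-coded 7-point items so that 7 is always the most involved answer."""
    df = df.copy()
    for item in REVERSED_ITEMS:
        df[item] = 8 - df[item]  # 1 <-> 7, 2 <-> 6, ..., 4 stays 4
    return df
```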

4.3.3 Likeliness to recommend

The last item measured in the survey is how likely a subject thinks it is that he or she will recommend the game to a friend. For this a seven-point Likert scale was used as well, to make the data easily comparable with the data gathered on involvement. Though only a single item is used here to measure the likeliness to recommend, this is done quite often (Danaher & Haddrell, 1996); in measuring satisfaction among customers this single-item measure has proven to be sufficiently reliable and is recommended for future research (Danaher & Haddrell, 1996).


5. Results

In this section we shall take a look at the results of the study. First, the revised PII will be checked for reliability and combined into one variable for further use in the analyses. Second, the three hypotheses will be analysed separately to provide a foundation for the discussion and concluding remarks.

5.1 Combining the Revised Personal Involvement Inventory.

As mentioned before, the Personal Involvement Inventory is mostly used in the area of marketing. Therefore it was deemed appropriate to check the validity of its content in the current research before using the resulting data in further analyses. One of the most common measures of scale reliability is Cronbach's alpha (Field, 2013). Furthermore, this test was also used in testing the original and revised PII, making it easier to compare findings.

The Cronbach's alpha of the ten items in the revised PII was originally 0.95 (Zaichkowsky, 1994). When looking at the current data the findings are as follows:


Though the Cronbach's alpha of 0.87 found in this analysis fails to live up to the 0.95 found in the original research, it is still well over the 0.80 reliability limit for applied research (Nunally, 1978).

To provide a clear measurement for assessing the three hypotheses, the choice was made to combine the ten involvement items in the database into one variable named 'Overall Involvement'. This should have minimal effect on the statistical outcomes because of the high levels of correlation and the relatively high reliability found in the previous test. This combined variable is what is meant in the following analyses when talking about involvement.
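As an illustration, the reliability check and the construction of the 'Overall Involvement' variable could be reproduced along the following lines. This is a sketch that computes the standard Cronbach's alpha formula directly rather than relying on a statistics package, and it assumes the ten (already recoded) PII items are columns of a pandas DataFrame with hypothetical names.

```python
import pandas as pd

PII_ITEMS = ["important", "interesting", "relevant", "exciting", "meaningful",
             "appealing", "fascinating", "valuable", "involving", "needed"]  # hypothetical names

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage on the cleaned, recoded data (the thesis reports an alpha of 0.87):
# alpha = cronbach_alpha(cleaned[PII_ITEMS])
# cleaned["overall_involvement"] = cleaned[PII_ITEMS].mean(axis=1)  # one way to build the composite
```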

5.2 Hypothesis 1: Higher involvement with a game leads to more recommendations.

As mentioned in the literature review, this hypothesis has been supported in several previous studies (Peres, Lovett, & Shachar, 2013) (Thurau, Gwinner, Walsh, & Gremler, 2014) (Sundaram, Mitra, & Webster, 1998). However, it will be tested again here to validate it with regard to the gaming market.

Because only two variables are of importance for this hypothesis, involvement and recommendations, an independent samples t-test was performed. The findings are as follows:


As can be seen in figure 2, there is a highly significant positive correlation between involvement and the likeliness to recommend, with p < 0.01 and a correlation of 0.438. Of course it must be kept in mind that even though the hypothesis seems to be supported by these findings, correlation does not imply causation. Or, more directly put, it is not possible to see the direction of the relation from the data. To say whether higher levels of involvement lead to more recommendations, or the other way around, is impossible without a laboratory-style experiment (Field, 2013). That being said, for now it is enough to know that involvement and likeliness to recommend are significantly correlated, especially since comparable relations have been found for other products in the past (Thurau, Gwinner, Walsh, & Gremler, 2014).
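The reported association could be checked with a simple Pearson correlation, as sketched below using SciPy. The column names follow the hypothetical naming of the earlier sketches, and the snippet is an illustration rather than the analysis that produced the figures above (the thesis reports r = 0.438, p < 0.01).

```python
from scipy.stats import pearsonr

def involvement_recommendation_correlation(df):
    """Pearson correlation between the involvement composite and the recommendation item."""
    r, p_value = pearsonr(df["overall_involvement"], df["recommend"])
    return r, p_value

# Hypothetical usage:
# r, p = involvement_recommendation_correlation(cleaned)
```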

5.3 Hypothesis 2: Beta testers feel more involved with a game than end users.

Because there are three different types of users, an ANOVA was used to compare levels of involvement between the groups. The results were as follows:


Looking at the data it becomes clear that there is no significant difference between the involvement of beta testers and after-launch players (p > 0.05). There is a marginally significant difference (p < 0.05) between open beta users and regular users; however, the difference is in the unexpected direction, with after-launch players scoring higher levels of involvement than open beta testers. Because of the lack of significance in the main part of the findings, no post-hoc tests were needed. The findings will be further discussed in the conclusion; for now it is enough to say that the results do not provide evidence in support of the second hypothesis.

5.4 Hypothesis 3: Beta testers are more likely to recommend a game than end users.

Though it goes without saying that the lack of supporting evidence for the second hypothesis will likely affect the third hypothesis as well, an analysis of the data was done regardless.

Similarly to the second hypothesis, an ANOVA was needed to compare the likeliness to recommend between the different groups of subjects. The results were as follows:


An odd thing happens here. Even though the second hypothesis can be rejected based on the results, there are still significant differences to be found for the third hypothesis. Both the difference in likeliness to recommend between open and closed beta testers (p < 0.01) and the difference between closed beta testers and end users (p = 0.047) are significant. A post-hoc Scheffe test shows that closed beta testers (M = 5.826, SE = 0.051) are significantly more likely to recommend the game to a friend than open beta testers (M = 5.564, SE = 0.061, p < 0.01) or end-users (M = 5.688, SE = 0.043, p < 0.05).
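For illustration, the group comparisons used for the second and third hypotheses could be reproduced roughly as follows. The sketch runs a one-way ANOVA with SciPy and a pairwise post-hoc comparison with statsmodels; note that pairwise_tukeyhsd performs a Tukey HSD test, used here only as a stand-in for the Scheffe test reported above, and that all column and group names are hypothetical.

```python
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

GROUPS = ["closed beta tester", "open beta tester", "end user"]

def compare_groups(df, outcome: str):
    """One-way ANOVA of an outcome (involvement or recommendation) across the three user groups,
    followed by pairwise post-hoc comparisons."""
    samples = [df.loc[df["group"] == g, outcome] for g in GROUPS]
    f_stat, p_value = f_oneway(*samples)
    posthoc = pairwise_tukeyhsd(endog=df[outcome], groups=df["group"])
    return f_stat, p_value, posthoc

# Hypothetical usage on the grouped data from the earlier sketches:
# compare_groups(cleaned, "overall_involvement")  # hypothesis 2
# compare_groups(cleaned, "recommend")            # hypothesis 3
```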

6. Discussion

6.1 Findings on the hypotheses and research question.

The focus in this study was on answering the research question: To what extent can the social ties of beta testers be used to increase the profitability of a beta test?

The first hypothesis used in trying to answer this question proposed that higher involvement with a game leads to more recommendations. There turned out to be a significant correlation between the two variables, so even though correlation does not imply causation, these findings do seem to support the hypothesis. This was to be expected, since similar relations have been found in markets other than gaming (Peres, Lovett, & Shachar, 2013) (Thurau, Gwinner, Walsh, & Gremler, 2014) (Sundaram, Mitra, & Webster, 1998); nevertheless, these results might still add to the inter-industry validity of this relation.

The second hypothesis proposed a correlation between the type of user (open beta tester, closed beta tester or end user) and the involvement they would feel regarding a game, because of the expected higher levels of ownership of beta testers and their input in building the final product (Anderson R., 2002). However, the results regarding these variables did not support the hypothesis. Some of the likely reasons for this will be discussed in the limitations section of this discussion.

The third and last hypothesis proposed a correlation between the type of user and their willingness to recommend a game to a friend. Because this hypothesis is a combination of the first two and the second hypothesis is not supported, it is surprising to see that there are still some significant differences to be found. Closed beta testers score significantly higher than both of the other two types of users, showing that even though the second hypothesis found no support, there might still be some truth in what was expected to be found.


6.2 Limitations.

First and foremost, the biggest problem with the current research was the way in which the data was gathered. By posting links to the survey on public forums, the number of responses became too high, leading to an early closing of the online survey. The ungodly amount of 2,390 data points that all had to be checked by hand in order to provide a decent level of data quality led to a loss of time that in turn endangered the planning of the rest of the research. Moreover, because nearly all the respondents came from community forums, there probably was an involvement bias: people who are active on the forum of a game are probably more involved than regular users. It is expected that this raised the level of overall involvement.

A second limitation was the use of data from a single point in time; ideally, the involvement of beta testers after a few months of playing in the beta would be compared to the involvement of end users after playing the finished game for a few months. Though this falls outside the scope of this research, it might be a nice start for future research, especially since there were some correlations between the different types of users and their involvement.

A third limitation was the game that was used: though ideal in terms of user base and types of beta testing, Hearthstone is a relatively simple game. It makes sense that a game that is played on the train to school is less involving than a pure PC game. Again, for the current research Hearthstone was a solid choice; however, if the subject is ever dug up again, it might be better to take measurements in different types of games as well.


7. Conclusion

To what extent can the social ties of beta testers be used to increase the profitability of a beta test?

The main goal of this study was to try and find a possibility for increasing the profitability of game beta tests by using friend invites. In order to do this, first the determinants of friend invites were discussed using previous research in the area of word of mouth advertising. Afterwards, product involvement and the likeliness to recommend a game to friends were measured and analysed for end users as well as for two types of beta testers of the online game Hearthstone.

Though there were some significant results relating certain types of beta testers to higher levels of recommending, there was no support for the hypothesis that beta testers feel more involved with a game than end users. Whether this was caused by a bias in the type of subjects or because there is actually no difference between the groups is hard to say.

The results might not have been conclusive, but the review of the literature did hint at a positive relation between the social ties a player has in a game and the revenue that player generates. Therefore it can be concluded that even though there might not be clear differences when it comes to beta testers, there is money to be made by using the social ties of gamers, be it in a beta test or not.

Furthermore, by contributing to the literature regarding the gaming industry, even if only a little, this study might provide a basis for future research in an area that is mostly ignored by the scientific community.


8. References

Anderson, E. W. (1998). Customer Satisfaction and Word of Mouth. Journal of Service Research, 5-17.

Anderson, R. (2002). Security in Open versus Closed Systems – The Dance of Boltzmann, Coase and Moore. Open Source Software Economics (pp. 1-13). Cambridge: Cambridge University.

Ang, C. S., Zaphiris, P., & Wilson, S. (2010). Computer Games and Sociocultural Play: An Activity Theoretical Perspective. Games and Culture, 354-380.

Brightman, J. (2014, June). Game software market to hit $100 billion by 2018 - DFC. Retrieved from gamesindustry: http://www.gamesindustry.biz/articles/2014-06-25-game-software-market-to-hit-usd100-billion-by-2018-dfc

Businessdictionary. (n.d.). Beta Test. Retrieved from Businessdictionary: http://www.businessdictionary.com/definition/beta-test.html

Centercode. (2015). Beta Testing Objectives. Retrieved from Centercode: http://www.centercode.com/beta/goals/

Chang, T.-S., Ku, C.-Y., & Fu, H.-P. (2013). Grey theory analysis of online population and online game industry revenue in Taiwan. Technological Forecasting and Social Change, 175-185.

Cheung, C. M., & Lee, M. K. (2012). What drives consumers to spread electronic word of mouth in online consumer-opinion platforms. Decision Support Systems, 218-225.

Danaher, P. J., & Haddrell, V. (1996). A comparison of question scales used for measuring customer satisfaction. International Journal of Service Industry Management, 4-26.

Debeauvais, T., Nardi, B., Schiano, D. J., Ducheneaut, N., & Yee, N. (2011). If you build it they might stay: retention mechanisms in World of Warcraft. Proceedings of the 6th International Conference on Foundations of Digital Games (pp. 180-187). New York: ACM.

Dolan, R. J., & Matthews, J. M. (1993). Maximizing the utility of customer product testing: Beta test design and management. Journal of Product Innovation Management, 318-330.

Field, A. (2013). Discovering Statistics using IBM SPSS Statistics. Los Angeles: Sage.

Gamecentral. (2013, September). GTA 5 is most expensive video game ever at $265 million. Retrieved from Metro: http://metro.co.uk/2013/09/09/gta-5-is-biggest-budget-video-game-ever-at-265-million-3955201/

Gamepedia. (2014, December 4). Hearthstone beta. Retrieved from hearthstone.gamepedia.com: http://hearthstone.gamepedia.com/Hearthstone_beta

Gold, S. C., & Wolfe, J. (2012). The Validity and Effectiveness of a Business Game Beta Test.

Griffiths, D. N. (2013, January). The Break Up - Bankrupt THQ's Assets Sold At Auction. Retrieved from Forbes: http://www.forbes.com/sites/danielnyegriffiths/2013/01/24/the-break-up-bankrupt-thq-assets-sold-at-auction/

Heroes of the Storm. (2014, November). Evil Geniuses vs. Cloud 9 - Finals - Heroes of the Storm Exhibition Tournament 2014. Retrieved from YouTube: https://www.youtube.com/watch?v=3QxaGLZpVm4

Hosselman, E. (2013, June). The Risks of Running a Public Beta Test. Retrieved from Centercode: http://www.centercode.com/blog/2014/06/the-risks-of-running-a-public-beta-test/

Johnson Brown, J., & Reingen, P. H. (1987). Social Ties and Word-of-Mouth Referral Behavior. Journal of Consumer Research, 350-362.

Kaulio, M. A. (1998). Customer, consumer and user involvement in product development: A framework and a review of selected methods. Total Quality Management, 141-149.

Kaytoue, M., Silva, A., Cerf, L., & Meira, W. J. (2012). Watch me Playing, I am a Professional: a First Study on Video Game Live Streaming. WWW 2012 Companion (pp. 16-20). Lyon.

Kolo, C., & Baur, T. (2004). Living a Virtual Life: Social Dynamics of Online Gaming. The International Journal of Computer Game Research, 1-30.

Meulen, R. v., & Rivera, J. (2013, October). Gartner Says Worldwide Video Game Market to Total $93 Billion in 2013. Retrieved from Gartner: http://www.gartner.com/newsroom/id/2614915

Nunally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.

Olsen, T., Procci, K., & Bowers, C. (2011). Serious Games Testing: How to ensure Proper Usability, Playability, and Effectiveness. Design, User Experience, and Usability: Theory, Methods, Tools and Practice, 625-634.

Peres, R., Lovett, M., & Shachar, R. (2013). On brands and word-of-mouth. 1-47.

Przybylski, A. K., Rigby, S. C., & Ryan, R. M. (2010). A Motivational Model of Video Game Engagement. Review of General Psychology, 154-166.

Richins, M. L., & Root-Shaffer, T. (1988). The Role of Involvement and Opinion Leadership in Consumer Word-of-Mouth: An Implicit Model Made Explicit. Advances in Consumer Research, 32-36.

Saunders, M., Thornhill, A., & Lewis, P. (2009). Research methods for business students. Essex: Prentice Hall.

Sirani, J. (2015, May 5). Hearthstone reaches 30 million players. Retrieved from IGN.com: http://www.ign.com/articles/2015/05/05/hearthstone-reaches-30-million-players

Söderlund, M., & Colliander, J. (2015). Loyalty program rewards and their impact on perceived justice, customer satisfaction, and repatronize intentions. Journal of Retailing and Consumer Services, 47-57.

Soh, J. O., & Tan, B. C. (2008). Mobile gaming. Communications of the ACM, 35-39.

Sundaram, D. S., Mitra, K., & Webster, C. (1998). Word-of-Mouth Communications: a Motivational Analysis. Advances in Consumer Research, 527-531.

The Street. (2015, February). Activision Blizzard (ATVI) Earnings Report: Q4 2014 Conference Call Transcript. Retrieved from The Street: http://qamobi.thestreet.com/story/13038379/2/activision-blizzard-atvi-earnings-report-q4-2014-conference-call-transcript.html

Thorsten, H.-T., Gwinner, K. P., Walsh, G., & Gremler, D. D. (2004). Electronic word-of-mouth via consumer-opinion platforms: what motivates consumers to articulate themselves on the internet? Journal of Interactive Marketing, 38-52.

Thurau, T. H., Gwinner, K. P., Walsh, G., & Gremler, D. D. (2014). Electronic word-of-mouth via consumer-opinion platforms: what motivates consumers to articulate themselves on the internet? Journal of Interactive Marketing, 38-52.

Twitch. (2014). Twitch 2014. Retrieved from Twitch: http://www.twitch.tv/year/2014

Tychsen, A. (2008). Crafting User Experience via Game Metrics Analysis. Goals and Strategies for Studying User Experience, 1-5.

Webster, A. (2013, September 20). 'Grand Theft Auto V' sets record by earning $1 billion in just three days. Retrieved from The Verge: http://www.theverge.com/2013/9/20/4752458/grand-theft-auto-v-earns-one-billion-in-three-days

Wei, X., Yang, J., & Adamic, L. A. (2010). Diffusion dynamics of games on online social networks. Proceedings of the 3rd Conference on Online Social Networks (pp. 1-9). Pelotas: USENIX Association, Berkeley.

Zaichkowsky, J. L. (1985). Measuring the Involvement Construct. Journal of Consumer Research, 341-352.

Zaichkowsky, J. L. (1994). The Personal Involvement Inventory: Reduction, Revision, and Application to Advertising. Journal of Advertising, 59-70.


Appendix 1. Survey

Dear Hearthstone player,

Thanks in advance for helping me with my thesis by filling in this questionnaire. The data given here will be used for statistical purposes only and after the results have been generated all personal data will be scrambled and deleted. Please make sure to read all of the questions carefully and take your time filling them in (the survey is not that long don't worry.)

For any further questions or remarks you can contact me at: tim.aardenburg@student.uva.nl

Greetings,

Tim.

Some personal info that will of course stay personal.

Still, if it bothers you, you can choose the option "Personal" and we'll leave it at that.

What is your age?

Personal

In Years

What's your gender?

Male

Female

Personal

Page 1


When did you start playing?

During or before the closed Beta (Before 21- January - 2014)

During the open Beta (Before 11 - March - 2014)

After the game was launched (After 11- March - 2014)

On to the main part.

Please take your time here, you're almost done!

To me Hearthstone is:

Important - Not Important
Boring - Interesting
Relevant - Irrelevant
Exciting - Unexciting
Means Nothing - Means a lot to me
Appealing - Unappealing
Fascinating - Mundane
Worthless - Valuable
Involving - Uninvolving
Not Needed - Needed

How likely are you to recommend Hearthstone to someone?

Unlikely

Likely

Page 2

You're done! Thanks a lot for the help!

Any feedback on this questionnaire?

Battle Tag / Email? (to make sure I can reach you if you've won the awesome T-shirt)


Appendix 2. Forum Message

(http://www.hearthpwn.com/forums/hearthstone-general/general-discussion/46531-hearthstone-thesis#c4)

Hello there my hearthstoning amigo's,

My name is Tim and I've enjoyed playing hearthstone off and on for the past year or so, so much even that I've decided to incorporate hearthstone as the subject of my Bachelor thesis. As a Mad Scientist I need test-subjects (like a bloody thousand or so), for which I am turning to you guys (and girls).

I have a survey that contains a couple of questions and will honestly not take more than 2-3 minutes to fill in, there is even a prize to be won!

Among the respondents I will hand out one thatshirtwascash t-shirt! which is awesome if you don't have enough Pizza Dj-ing cats on your t-shirts yet (if you win you get to pick out the shirt of course). The winner will be announced somewhere around July.

You can access the questionnaire through this link the rest is done automatically: -->Survey<--

The provided information will of course stay private and after the research I will personally destroy all of it (with great pain though).

Thanks in advance and I hope to see you in game! Greetings,
