
Master Thesis

Critical Success Factors of Digital Platforms

A Quantitative Measurement Framework

Jeroen Martijn van Uden 10250840

MSc Business Administration Digital Business Track

(2)

Statement of originality

This document is written by student Jeroen van Uden, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

Digital platforms have disrupted multiple incumbent industries (Kenney & Zysman, 2016). Although it is stated that pursuing a digital platform strategy results in higher revenue and faster growth, a better understanding of what makes a digital platform successful is still missing (De Reuver, Sørensen, & Basole, 2017). The research question in this study, therefore, is: “How can the success factors of digital platforms be measured quantitatively and effectively?”

Based on a literature review, critical success factors are transformed into objective, quantitative measures that are put together in one framework, with which digital platforms can be given a ‘success score’. Next, using an adapted version of the Technology Acceptance Model (TAM), consumer opinions on five different platforms are collected. To collect these opinions, an online survey is sent to students and young professionals. The results are combined with objective performance measures and used to validate the scores from the framework.

Analysis of the collected data shows support for the conceptual model and its serial mediation effect. Next, the results show that Snappcar scores highest on three of the four variables and second highest on the remaining variable. These results are in line with the success score of the platform, as Snappcar scores moderately high to high on all factors in the framework. Moreover, the objective performance measures confirm the current success of Snappcar.

These outcomes suggest that the framework can be used to measure critical success factors objectively. Although, due to confidentiality issues, the framework is only partly validated, the process of assessing a platform is still considered valuable for managers. Also, the literature underlying the framework provides researchers with a comprehensive overview of multiple essential concepts.


Table of contents

1. Introduction
2. Literature review
   2.1 Digital platforms
   2.2 Network effects
   2.3 New rules of strategy
   2.4 Success factors and reasons for failure
3. Research design
   3.1 Framework
   3.2 Technology Acceptance Model
   3.3 Platforms
4. Method
   4.1 Procedure
   4.2 Population and sample
   4.3 Measures
5. Results
   5.1 Sample
   5.2 Reliabilities
   5.3 Factor analysis
   5.4 Serial mediation effect
   5.5 Platform ranking per variable
   5.6 Repeated measures ANOVA
   5.7 Stepwise regression
   5.8 Platform scores
6. Conclusion
   6.1 Technology Acceptance Model
   6.2 Framework
7. Discussion
8. Limitations
9. Bibliography
10. Appendix
   10.1 Survey questions


1. Introduction

In 2007, five global players in the mobile-phone industry together controlled 90% of the industry’s profits. 2007 was also the year the iPhone was released. By 2015, the iPhone alone generated 92% of global profits, while the original global players generated no profit at all (Van Alstyne, Parker, & Choudary, 2016b). The original global players had classic competitive advantages, such as strong product differentiation, trusted brands, and huge R&D budgets. However, Apple disrupted that industry by using the power of a digital platform and leveraging new rules of strategy. Apple created a digital platform connecting the participants in a two-sided market, app developers and app users, creating value for both sides (Van Alstyne et al., 2016b).

The technological developments of recent years, such as cloud tools, allow digital platforms to disrupt multiple incumbent industries (Kenney & Zysman, 2016). The rise of these digital platforms will ultimately result in a reorganization of value creation and capture. Bughin and van Zeebroeck (2017) state that companies that pursue an offensive platform strategy have a better payoff in both revenue and growth. These insights create an opportunity for businesses to change the way they work, socialize, and compete for profits, according to Kenney and Zysman (2016). Digital attackers can disrupt entire industries, draining profits from incumbent firms; incumbent firms, on the other hand, can protect themselves by quickly adopting new digital business models, thereby cannibalizing existing models (Bughin & van Zeebroeck, 2017).

However, not all digital platforms are successful (Van Alstyne, Parker, & Choudary, 2016a). Many new platforms struggle in the beginning and eventually fail. Even leading platforms, such as Apple and Google, have not succeeded with every platform they launched, which raises the question of what a digital platform needs to be successful. The existing literature is unable to address this question completely (De Reuver, Sørensen, & Basole, 2017). Most studies mention and discuss only one of the essential dynamics or factors that influence strategy, without combining their respective effects. Recurring topics, for example, are network effects (Hendler & Golbeck, 2008; Lee, Kim, Noh, & Lee, 2010; Muzellec, Ronteau, & Lambkin, 2015), the control versus generativity paradox (Andersson Schwarz, 2017; Boudreau, 2010; Tilson, Sørensen, & Lyytinen, 2013; Wareham, Fox, & Cano Giner, 2014), and ecosystem value (Adner & Kapoor, 2010; Wareham et al., 2014; Xu, 2014). As mentioned before, a better understanding of how all of these factors together influence the success of a platform is still missing. Next to that, De Reuver et al. (2017) state that the focus of most researchers has been on success stories, ignoring why others fail.

Therefore, the purpose of this research is to examine and objectify critical success factors of digital platforms and see if they can be used to predict or explain platform success. This objective leads to the following research question: “How can the success factors of digital platforms be measured quantitatively and effectively?” To answer this question, a literature study will be conducted to identify critical conditions and factors for platform success. The essential factors will be transformed into quantifiable, objective measures and put together in one comprehensive framework. To test these measures, some existing platforms will be assessed and given a ‘success score’. After that, an adapted version of the Technology Acceptance Model will be used to examine how consumers feel about the assessed platforms and whether they would use them. In addition, some objective measures will be used to measure actual performance. With these two measures, the scores, and thus the framework, can be validated.


This research will contribute to the literature by combining multiple theories about the different factors that influence the success rate of digital platforms, thereby creating a better understanding of the different factors and how they might interact. Although there has been some research on the success factors, most researchers discuss only one or two factors. In this paper, one comprehensive overview will be created. Next to that, this research will try to objectify the success factors by transforming them into quantitative measures, after which they can be used to ‘score’ different digital platforms. Besides the academic contributions, this research will help managers to assess their digital platforms objectively. More objective assessments can help managers to design their platforms better and to strategize for success.

2. Literature review

In the first part of this literature review, definitions of a digital platform will be discussed and explained; the nature of competition and the network effects that drive these platforms will also be described. Secondly, it will be explained how strategy has changed with the rise of digital platforms. After that, critical conditions and success factors for digital platforms will be identified and discussed.

2.1 Digital platforms

Andersson Schwarz (2017) defines a platform as a digital infrastructure, usually software-based, intended for users to apply computer code, for example, to run applications. Such an application is called a module, or an app. Modules are add-on software subsystems that add functionality to the platform (Tiwana, Konsynski, & Bush, 2010). Next to this technical definition of a digital platform, Boudreau (2008) describes a platform as a set of components used in common across a product family whose functionality can be extended by third parties, which means that platforms do not necessarily have to make use of modules. Due to the rise of information technology and the Internet, there is an increasing number of these platforms, as described by Boudreau, which have all become digital (Muzellec et al., 2015). They are diverse in function and structure (Kenney & Zysman, 2016), implying that they can emerge in a variety of industries and that they can result in the reorganization of whole markets, work arrangements, and methods of value creation and capture. These platforms are loosely organized ecosystems in which more than one participant supplies components that together form the total ecosystem (Tilson et al., 2013). Due to this flexibility, the platform owners might not know the end-users in advance. Different types of sellers, buyers, or advertisers come together on platforms to reduce their respective transaction costs (Andersson Schwarz, 2017).

Digital platforms usually are multi-sided networks, which operate in multi-sided markets (Bakos & Katsamakas, 2008; Xu, 2014). They provide an interface that connects multiple types of participants, to create synergies and network effects (Xu, 2014). A specific form of these multi-sided networks is a two-sided network, or market, whose two sides can either be a business side (B2B) or a consumer side (B2C) (Bakos & Katsamakas, 2008; Muzellec et al., 2015). The network does not necessarily have to consist of a B2B and a B2C side; there could also be, for example, two B2C sides, in which case it can be described as a peer-to-peer platform. Each side of the network/market is a customer of the platform, and the sides have direct interactions with each other (Hagiu, 2014). Examples include marketplaces that connect buyers and sellers, or users and advertisers (eBay and Google); social media platforms that connect users with service or application providers (Facebook); and music platforms that connect producers with consumers of digital media (iTunes) (Bakos & Katsamakas, 2008). These two-sided networks will be the focus of this research.

2.2 Network effects

In these two-sided markets, the participants on each side derive positive externalities from each other (Muzellec et al., 2015). These externalities are known as network effects. A network effect describes the value of a service to a user that arises from the number of people using the service (Hendler & Golbeck, 2008). These effects can be positive or negative. Positive effects exist when the value of a platform increases as the number of users, on either side of the platform, increases (Srinivasan, Lilien, & Rangaswamy, 2004). Negative effects exist when the value of a platform decreases as the number of users increases (Lai, Wang, Hsieh, & Chen, 2007). In this paper, the focus lies on positive network effects, as they are crucial to platform success (Lee et al., 2010). The best way to describe these effects is by using Metcalfe’s law (Hendler & Golbeck, 2008). According to Hendler and Golbeck (2008), Metcalfe’s law has been used to explain the growth of many technologies. It states that as the number of users of a platform grows, connectivity increases, and if the users can link to each other, the value grows in proportion to the square of the number of users. Stoelhorst (2005) describes this as the ‘fax effect’: technologies that are subject to positive network externalities experience explosive growth at one point in time. These dynamics often lead to the emergence of one dominant standard for the market or industry.
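In formula form (a textbook statement of Metcalfe’s law, added here for illustration and not taken from the thesis itself): with n users who can all link to each other, the number of possible pairwise connections, and hence the potential value V of the network, grows roughly with the square of n, while the cost of building the network typically grows only linearly:

V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}, \qquad \text{cost}(n) \propto n

The ‘explosive growth’ Stoelhorst describes corresponds to the point where the quadratic value curve overtakes the linear cost curve.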


2.3 New rules of strategy

Digital platforms are different from traditional businesses and need a different perspective on strategy to survive (Van Alstyne et al., 2016b). Van Alstyne et al. (2016b) state that there are three crucial shifts in strategy, which are needed for platforms to succeed. The first transition is to shift the focus from resource control to resource orchestration. The second transition is to shift from internal optimization to external interactions. Finally, a transition from a focus on customer value to a focus on ecosystem value is needed.

Resource orchestration is part of a process called resource management (Sirmon, Hitt, Ireland, & Gilbert, 2011). Resource management is about creating value for customers and gaining competitive advantage for the business by structuring, bundling, and leveraging the business’s resources (Sirmon, Hitt, & Ireland, 2007). Sirmon et al. (2007) define these three concepts as follows. Structuring the portfolio of resources is about acquiring, accumulating, and divesting resources, to establish the business’s resource portfolio. “Bundling resources to build capabilities” means that resources are integrated. The integration involves stabilizing and enriching current resources and pioneering, meaning that new capabilities are being created. Leveraging these capabilities in the marketplace is about the exploitation of a firm’s capabilities and taking advantage of market-specific opportunities. Investments that follow a well-defined leveraging strategy can lead to higher performance (Sirmon et al., 2011). Defining a leveraging strategy well involves mobilizing, coordinating, and deploying resources. Sirmon et al. (2011) state that the orchestration of these processes is key to developing a competitive advantage, not only because many different variations in resource orchestration can be pursued, but also because the technology on which a platform is based is continuously changing (Yoo, Boland, Lyytinen, & Majchrzak, 2012). These aspects imply that there needs to be flexibility within the organizational functioning, meaning that the resources and capabilities need to be orchestrated continuously to deal with fast-changing environments.

The next transition is from internal optimization to external interactions. These external interactions are essential for digital platforms, as platforms are a complete ecosystem with different participants (Wareham et al., 2014). These ecosystems need governance to work efficiently. According to Wareham et al. (2014), autonomous contributions of the participants can create complementary products and services that together solve a problem better than the platform would do alone. The shift in strategy is thus toward purposefully organizing the contributions of the participants in the ecosystem, instead of merely designing and developing products (Wareham et al., 2014). Platforms need these innovative contributions to create value and thus need valuable external interactions. The fact that all participants can contribute implies, however, that the platform is no longer under the full control of the platform owner (Tilson et al., 2013). Managers need to focus on the tension between controlling the platform to extract value and opening up the platform for others to innovate, resulting in a paradox, which will be further explained in section 2.4.

The final transition is from focusing on customer value to focusing on ecosystem value. In the past, to gain competitive advantage, managers shifted their focus from internal process optimization to delivering customer value (Woodruff, 1997). Delivering customer value is the idea of gaining commercial success through the satisfaction of the customer, making sure they perceive the value just as they expected (Huber, Herrmann, & Morgan, 2001). Woodruff (1997) states that businesses need a deep understanding of their customers to deliver customer value. However, in their way of defining a digital platform, Tilson et al. (2013) state that the end-users of a platform are generally unknown. When this is combined with the aforementioned dynamics of value creation in an ecosystem, it can be stated that focusing solely on customer value is outdated. In addition, to create value, there needs to be innovation, coming from the platform participants (Adner & Kapoor, 2010). These innovations need to be accompanied either by changes in the platform ecosystem or by complementary innovations. These needs imply that focusing on ecosystem value is more beneficial than focusing solely on customer value. Xu (2014) confirms this by stating that it is crucial to success to build an ecosystem in which everyone gets enough value to be satisfied, keeping future growth in mind, thus confirming the shift in strategy from creating customer value to creating ecosystem value, as stated by Van Alstyne et al. (2016b).

2.4 Success factors and reasons for failure

For platforms to succeed, it is not only essential to account for the different dynamics and strategy changes mentioned above, but it is also crucial to look at the reasons why other platforms have failed (Van Alstyne, Parker, & Choudary, 2016a). Van Alstyne et al. (2016a) state six reasons why platforms fail: failure to optimize ‘openness’, failure to engage developers, failure to share the surplus, failure to launch the right side, failure to put critical mass ahead of money, and failure of imagination.

Platform openness can be seen as the degree to which the platform owners give up value, or ‘open up’, to third-party developers (Parker & Van Alstyne, 2017). A platform should be opened as wide as possible to all consumers (Xu, 2014). However, managers need to choose the number of participants to take on board strategically. According to Xu (2014), businesses should establish the platform with fewer participants first, before expanding it. This way a business can better balance issues such as cost-benefit trade-offs, complexity, agility, and flexibility. Tilson et al. (2013) build upon this with the idea that managers need to focus on the tension between controlling the platform to extract value and opening up the platform for others to innovate. This tension creates a paradox between ‘control’ and ‘generativity’. When there is more autonomy within an ecosystem, generativity increases, resulting in participants pursuing self-interested, innovative activities (Wareham et al., 2014; Yoo, Henfridsson, & Lyytinen, 2010). This way the rate of innovation is optimal and participants will develop locally relevant solutions (Wareham et al., 2014). However, letting all participants act fully autonomously can result in undesired variance in outcome quality, resulting in agency costs, where the platform owner incurs the costs of the poor quality of a participant’s contribution (Wolter & Veloso, 2008). These downsides suggest that full autonomy is not desirable and that platform owners should, to some extent, control not only total output quality but also participant contributions (Wareham et al., 2014). Xu (2014) emphasizes that effective governance, for example regulating interactions on the platform, is crucial for platform success.

Successful platforms need to identify, attract and retain valuable developers (or suppliers) to not only implement but also develop the platform (Xu, 2014). These developers need good support and the right tools from the platform to participate and deliver value (Antikainen, Mäkipää, & Ahonen, 2010). Developers want to design their contributions efficiently. The platforms can thus attract them with the right design tools, which allow them to realize their valuable ideas, and thereby contribute to the total value creation. The creation of these tools requires the allocation of sufficient resources by the platform, both financial and human resources (Xu, 2014).


Next to that, Oreg and Nov (2008) state that there are three key motivations for contributors to participate in open source involvement. The first motivation is the desire to establish a reputation and gain approval from peers. For this, it is essential to have the right tools to make reputation-building contributions, for example, reviews or rewards that are visible to others. The second motivation is the desire for self-development. For this, it is necessary that developers can ask for feedback on their work. The third motivation is the desire to help the community. As stated by Van Alstyne et al. (2016a), platforms need to have tools for innovation, feedback, and rewards. These tools satisfy at least the first two motivations, making it appealing for developers to participate.

Sharing the surplus is about distributing the created value fairly between participants. Fairly distributing the value helps the platform to avoid creating too much competition on one side of the platform, which is critical for platform success (Xu, 2014). Van Alstyne et al. (2016a) elaborate on this with the example of Covisint, an online marketplace designed to match buyers and suppliers of auto parts. They state that the structure of the platform forced suppliers into heavy price competition, leaving them with little or no residual value. Consumers, on the other hand, captured most of the value. Suppliers felt like they were getting ripped off, so they left. Thus, the platform failed because it did not share the surplus evenly. Having a dynamic pricing strategy and providing appropriate incentives can help ensure evenly distributed value (Xu, 2014).

Typically, the focus of the value proposition of new digital platforms shifts from customers to businesses over time (Muzellec et al., 2015). At first, companies try to ensure participation of the critical mass, giving the value proposition a B2C orientation. However, to generate a steady revenue stream and a sustainable business model, other businesses need to get involved eventually (Muzellec et al., 2015). This need forces the platform owner to change the orientation of the value proposition and marketing strategy, incorporating B2B partners. However, Van Alstyne et al. (2016a) name Google Health as an example of a platform that failed because it focused on customers before providers. So, in some cases, the initial value proposition should be focused on attracting developers, sellers, or other businesses. This example illustrates the corresponding reason for failure stated by Van Alstyne et al. (2016a): businesses should strategically choose the right side to focus their value proposition on first.

“Putting the critical mass ahead of money” means that monetization should come after attracting customers (Muzellec et al., 2015). Platform owners need to focus first on creating a user-centric platform, for both developers (or suppliers) and customers (Xu, 2014). The platform needs to be reliable and dynamic, satisfying the users’ needs with high functionality and user-friendliness. This way it can effectively build a large user base and create large network effects, which will result in new and larger revenue sources (Xu, 2014). The monetization comes after that, by defining the value proposition for businesses. If the value proposition for other businesses is not appropriately defined, the platform will have difficulty monetizing its offer (Muzellec et al., 2015).

Failure of imagination might be the hardest factor to explain, but it might also be the most important one. As digital platforms are a growing feature of competition and platform strategies yield superior returns relative to non-adopters, it is crucial to ride this new wave (Bughin & van Zeebroeck, 2017). Van Alstyne et al. (2016a) name different examples of companies that failed to see the platform opportunity and chose to focus merely on product development. Sony, HP, and Garmin were all beaten by platforms such as iOS and Android. By allowing developers to create apps like Google Maps for iPhone, Apple beat Garmin simply because users did not need Garmin anymore; they could do it all on their iPhone, regardless of what new features Garmin might have developed. “Product managers who focus on features are not just measuring the wrong things; they are thinking the wrong thoughts.” (Van Alstyne et al., 2016a). However, Xu (2014) states that businesses do not necessarily need to adopt a digital platform strategy. The success of a platform depends on the fit between the platform and the business. Some companies could benefit from more control, for example, companies that sell high-demand products and optimized bundling packages (Xu, 2014).

Although research has been done on the different dynamics and strategies of digital platforms, a deeper understanding of why some platforms succeed and others fail is still missing (De Reuver et al., 2017). Researchers focus mainly on the success stories, ignoring why others fail. In their article, Van Alstyne et al. (2016a) provide a starting point for this research, as they state six reasons why platforms fail; a scientific underpinning, however, is still lacking. The purpose of this research is, therefore, to examine these factors and see if they can be transformed into quantifiable, objective measures, to make predictions about or explain platform success. The research question will be: “How can the success factors of digital platforms be measured quantitatively and effectively?”


3. Research design

In this section, the research design will be described. First, the proposed framework will be explained. After that, the adapted version of the Technology Acceptance Model will be presented, together with the conceptual model that is going to be tested and the corresponding hypotheses. Lastly, the platforms that are going to be assessed will be introduced.

3.1 Framework

To create a better understanding of the critical success factors for digital platforms, both qualitative and quantitative research designs are used. Firstly, a literature study was conducted to identify critical conditions and factors for platform success. The literature review resulted in the following factors: failure to optimize ‘openness’, failure to engage developers, failure to share the surplus, failure to launch the right side, failure to put critical mass ahead of money (monetization), and failure of imagination. These factors are transformed into quantifiable measures, which can be used to assess digital platforms.

The openness of a platform will be measured using a 5-point Likert scale, ranging from ‘high level of platform control’ to ‘high level of autonomy’. As Wareham et al. (2014) and Xu (2014) suggest, neither full control nor full autonomy is desirable. However, managers need to choose a strategy, so either moderate control or moderate autonomy will be considered positive.

The engagement of developers will be measured by looking at the number of resources that are available to developers. The resources, or tools, will be divided into categories, such as innovation, feedback, and rewards, as they are all essential for motivating developers (Oreg & Nov, 2008). Tools in one category will not be assessed as more valuable than tools in another category. The total sum of available tools will be taken as the measure, meaning that more tools will result in a better score.

The sharing of the surplus will be measured as the distribution of the total value created. Does the platform profit by taking a percentage of the suppliers’ revenue, by adding costs for the consumer, or both? The more evenly split these percentages are, the better the score, as Xu (2014) states that an uneven distribution of value leads to too much competition on one side of the platform.

The ‘right side’ will be measured by looking at the orientation of the initial value proposition (B2B versus B2C), and thereafter by measuring the changes in orientation and the timing of those changes (e.g., stage in the platform lifecycle and time between orientations), as Muzellec et al. (2015) state that, to create a steady revenue stream, most platforms have to change orientation once. If a platform changes its orientation often, or relatively late or early in its lifecycle, this implies that it does not know which orientation to choose, or that it has made wrong decisions. So, in this case, fewer and well-timed changes will lead to a better score.

The ‘monetization focus’ will be measured by looking at the timing and the number of changes in marketing strategy, e.g., “At what number of platform participants did the platform owner change the strategy to generate more revenue?” As Muzellec et al. (2015) and Xu (2014) state, platforms need to focus on creating a user-centric platform first, to attract customers, before they can focus on monetization. So for this measure, again, fewer changes are considered better; for the timing, changing too early in particular will be considered a wrong choice.


The ‘imagination’ will be measured by looking at the number of strategy/product changes before adopting a platform strategy and the time between digital disruption in the industry and the adoption of a platform strategy. As Bughin and van Zeebroeck (2017) state, it is essential for companies to adopt a digital platform strategy. Fewer product/strategy changes will, therefore, be considered better, as they show ‘imagination’. Next to that, the response time to digital disruptions, whether incremental or disruptive, is also a crucial factor. The quicker a company can respond, the better the score.

To test the measures, some existing platforms will be assessed and given a ‘success score’. The platforms will assess themselves, as it can be difficult for the researcher to judge most of the measures. To enable this self-assessment, a questionnaire has been created (see appendix 10.2).
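To make the framework concrete, the sketch below shows one possible way to encode the six factors and combine them into a single success score. This is an illustration only: the thesis does not publish an aggregation rule, so the 0–1 normalization, the unweighted mean, and all field names and example values below are assumptions.

```python
from dataclasses import dataclass, fields

@dataclass
class PlatformScores:
    """Hypothetical encoding: each factor normalized to 0-1 (1 = best)."""
    openness: float              # peaks at moderate control/autonomy
    developer_engagement: float  # normalized count of available developer tools
    surplus_sharing: float       # evenness of the value distribution
    right_side: float            # few, well-timed orientation changes
    monetization_focus: float    # user base built before monetizing
    imagination: float           # speed of adopting a platform strategy

    def success_score(self) -> float:
        # Unweighted mean, purely as an illustrative aggregation rule.
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Illustrative numbers only, not the thesis's actual assessments.
example = PlatformScores(0.8, 0.6, 0.7, 0.8, 0.7, 0.9)
print(round(example.success_score(), 2))  # 0.75
```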

3.2 Technology Acceptance Model

To validate the scores, a slightly adapted version of the Technology Acceptance Model (TAM) will be used to examine how consumers feel about the platforms that are assessed, and whether they would use them. TAM was created to understand why people accept or reject information systems (Davis, 1989). Whether a person accepts an information system is determined by his/her intentions. Intentions are formed by the person’s attitude toward an information system and the perception of its usefulness. Attitudes are a result of the person’s beliefs, which consist of the perception of the usefulness and ease of use of the information system (Szajna, 1996). As there are many different forms and applications of the TAM (Davis, 1989; Szajna, 1996; Venkatesh & Davis, 2000), a slightly adapted model has been created for this study. This way all relevant variables will be tested, and the items that measure the variables will suit the research question. The conceptual model is shown in figure 1.

Figure 1: Conceptual model

However, as Davis (1989) points out, perceived usefulness and perceived ease of use are subjective measures. The outcomes do not necessarily give a good representation of the objective reality. Therefore, the given ‘success scores’ will also be validated against objective performance measures, to gain insights into the actual platform success. Financial measures, such as profit and revenue growth, and nonfinancial measures, such as the number of users, will be used, as they are widely accepted as reliable performance indicators (Venkatraman, 1986).

When these two measures are combined, statements (or conclusions) can be made about the success of a platform. There will be not only subjective but also objective data on platform success. Using the statements on success, the scores that resulted from the framework can be validated. If a platform scores high on most critical success factors in the framework, and the statements about actual success are positive, conclusions can be drawn about the effectiveness of the framework measures.


To verify the conceptual model, the following hypotheses will be tested:

Hypothesis 1: There is a positive relationship between Perceived Ease of Use and Intention to Use.

Hypothesis 2: The positive relationship between Perceived Ease of Use and Intention to Use is mediated by Perceived Usefulness.

Hypothesis 3: The positive relationship between Perceived Ease of Use and Intention to Use is mediated by Perceived Usefulness and Attitude in serial.

Hypothesis 4: The positive relationship between Perceived Ease of Use and Intention to Use is mediated by Attitude.

Hypothesis 5: There is a positive relationship between Perceived Ease of Use and Perceived Usefulness.

Hypothesis 6: There is a positive relationship between Perceived Ease of Use and Attitude.

Hypothesis 7: There is a positive relationship between Perceived Usefulness and Attitude.

Hypothesis 8: There is a positive relationship between Perceived Usefulness and Intention to Use.

Hypothesis 9: There is a positive relationship between Attitude and Intention to Use.


3.3 Platforms

In this study, there will be an evaluation of five different digital platforms. These platforms are Snappcar, Fitmo, Peerby, Udemy, and Publitas. In this section, there will be a short description for every platform.

Snappcar is an online platform aiming to connect car owners with people looking to rent a car, a so-called ‘peer-to-peer’ platform. “In the Netherlands, eight million cars stand idle for 23 hours a day. Why shouldn’t the owner share their car with people that only need one occasionally?” With this idea, Pascal Ontijd and Victor van Tol started Snappcar in 2011 (Owler, n.d.-c; Snappcar, 2016). Ontijd and Van Tol launched the platform in Utrecht, and it was the first car-sharing platform in the Netherlands (Snappcar, 2011). People looking to rent a car can select where, when, and for how long they want to rent a car, and Snappcar shows them all available cars in that period and area. Next to that, Snappcar also facilitates the communication, the transactions, and the insurance matters.

Fitmo allows people to get personal support from a professional coach on the road to achieving their fitness goals (Fitmo, n.d.). Fitmo was founded in 2013 in Amsterdam by Dave Roeloffs and can be described best as a service marketplace (Owler, n.d.-a). People that want to start living healthier or exercise more can become a member and connect with personal trainers from all over the world via an app. This connectivity is not only convenient for the (service) user, but also for the personal trainers (service provider), who can give personal assistance to more people than before.

Peerby is an online platform that lets people easily borrow or rent things from their neighbors, thus a peer-to-peer platform (Peerby, n.d.). Daan Weddepohl founded Peerby in 2011 in Amsterdam, with three goals: save people money, promote social cohesion, and enable green living (Owler, n.d.-b; Peerby, n.d.). As one-eighth of the earth’s CO2 emissions derive from the production and consumption of durable goods, Peerby tries to limit those emissions by proposing a different mindset to people: why buy when you can borrow (Peerby, n.d.)? People can ask Peerby for something they need, and Peerby will show them people in their area from whom they can borrow or rent it. With renting, people pay a small amount, but they are sure to get exactly what they need, when and where they need it.

Udemy aims to improve people’s lives through learning (Udemy, n.d.). In 2010 Kevin Johnson founded Udemy in San Francisco, California as an online service marketplace (Owler, n.d.-d). Udemy is now a global marketplace for online learning and teaching. Students can master new skills and achieve their goals by learning from the courses provided by expert instructors. The platform not only allows students to gain knowledge but also instructors to distribute theirs.

Publitas aims to help retailers to inspire shoppers by publishing their print catalogs online (Publitas, n.d.). In 2006, Guillermo Sanchez and Khalil Mehdi founded the Netherlands-based company as a content platform (Crunchbase, n.d.). Shoppers can quickly browse through catalogs online and from any device, and easily purchase the products they like. This way of online shopping is advantageous for retailers as well, as it increases their reach and conversion rate.


4. Method

In this section, first, a detailed description of the procedure of the study will be given, along with an explanation of specific design-related choices. After that, there will be a description of the population and an explanation of the chosen sample. Finally, the measures that are going to be used in the survey will be described.

4.1 Procedure

To answer the research question, the scores that are given to the platforms have to be validated. To validate these scores, explanatory, quantitative research will be performed, in the form of a cross-sectional survey design. Because of the observational nature of the design, it shows what happens in the real world, without interference. This design is most suitable because it yields data that are easily comparable. The collected opinions can be compared with other measures, to validate the scores. When the data has been collected, the following steps will be taken to analyze it.

First, Principal Axis Factoring will be used to estimate the number of factors in the conceptual model and the relevance of all items to their scale (De Winter & Dodou, 2012). Then, since the model has a serial mediation effect, an observed variable regression path analysis will be conducted, using PROCESS (Hayes, 2012). This analysis will test whether the model works, as well as the strength and significance of all direct and indirect effects for all platforms combined. After the model has been tested, descriptive statistics for all variables will show the actual opinions on the different platforms per variable, on which the platforms can be ranked. Repeated measures ANOVA will show whether there are significant differences between outcomes, from a statistical point of view. Also, a stepwise regression analysis will be done to show which variables are strong predictors of Intention to Use (Saunders, Lewis, & Thornhill, 2008). This analysis will be done per platform as well.
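As an illustration of the last step, the sketch below implements a simple forward stepwise selection. This is an assumption about the procedure: statsmodels has no built-in stepwise routine, the selection criterion used here (AIC) differs from the F-to-enter tests SPSS typically applies, and the DataFrame and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def forward_stepwise(df: pd.DataFrame, response: str, candidates: list) -> list:
    """Greedily add the predictor that lowers the model AIC the most."""
    selected, remaining = [], list(candidates)
    best_aic = float("inf")
    while remaining:
        scored = [(smf.ols(f"{response} ~ " + " + ".join(selected + [c]), df).fit().aic, c)
                  for c in remaining]
        aic, best = min(scored)
        if aic >= best_aic:  # no remaining predictor improves the fit
            break
        best_aic = aic
        selected.append(best)
        remaining.remove(best)
    return selected

# e.g., run once per platform:
# forward_stepwise(df, "IntUse", ["PrcEase", "PrcUse", "Att"])
```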

4.2 Population and sample

The population for this study is Dutch consumers. The general population is large, and a sample frame is unavailable; therefore this study will be conducted using a non-probability, convenience sample. Respondents will mostly consist of students from the University of Amsterdam and young professionals. This sample is a good representation of the population in relation to platform success: students and young professionals are more likely to use digital technologies and tend to learn them faster, so their opinions and intention to use will have a more significant impact on, and be a better predictor of, platform success. Given a confidence level of 90% and a margin of error of 7%, the minimum sample size is 136 respondents. This represents an analyzable minimum; the aim will be to get as many respondents as possible. The response rate cannot be predicted accurately, due to the various ways the survey will be distributed (Facebook, personal approach, and e-mail).
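For reference, the sketch below reproduces the standard sample-size calculation for estimating a proportion at these settings. It assumes maximum variance (p = 0.5) and an infinite population, which yields roughly 138; the thesis reports 136, so it may have applied a slightly different rounding or correction.

```python
from scipy import stats

confidence = 0.90
margin_of_error = 0.07
p = 0.5  # worst-case proportion, maximizes the required n

z = stats.norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value (~1.645)
n = z**2 * p * (1 - p) / margin_of_error**2
print(round(n))  # ~138
```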

4.3 Measures

All respondents will first be asked to answer two control questions. Before answering platform-specific questions, respondents will have to state whether they have prior knowledge about that specific platform and whether they have ever used it before. Prior knowledge or previous experience is essential to control for, because it could significantly affect one’s opinion on a specific platform, both negatively and positively, and could thereby give a distorted impression of the population as a whole. After the control questions, the questionnaire will continue with the actual variables.


To measure the Perceived Ease of Use of a platform, a combination of two scales will be used. The first scale is an adapted version of the validated six-item, 7-point Likert scale by Davis (1989), which has a Cronbach’s alpha of 0.93. The measure is adapted in the sense that not all questions posed by Davis are relevant for this study and will thus not be used. The second scale is an adapted version of the validated four-item, 7-point Likert scale by van der Heijden (2004), which has a Cronbach’s alpha of 0.87. This measure is also adapted in the sense that not all questions posed by van der Heijden are used. The combined measure in this study has four items. An example of a question is: “I would find (platform) easy to use.” An adapted version of the validated six-item, 7-point Likert scale by Davis (1989) will be used to measure Perceived Usefulness. The measure has a Cronbach’s alpha of 0.98 and is likewise adapted in that irrelevant questions are left out, so that the measure now has four items. An example of a question is: “Using (platform) would make it easier to (activity).”

After that, Attitude is measured by a combination of two different scales. An adapted version of the validated three-item, 7-point Likert scale by Park (2009) will be combined with an adapted version of the validated four-item, 7-point semantic differential scale by Moon (2001). The scale as proposed by Park has a Cronbach’s alpha of 0.94, and the scale as proposed by Moon has a Cronbach’s alpha of 0.90. In the original scales, not all items are suitable for this study, so some of them are left out. The scale in this study uses two items from the scale by Park and one from the scale by Moon. Next to that, the items from the scale by Moon are transformed so that they fit a 7-point Likert scale. An example of a question is: “I think that (activity) through (platform) is a good idea.”


Lastly, to measure the Intention to Use, again a combination of two scales will be used. The first scale is the two-item, 7-point Likert scale by Luarn and Lin (2005), which has a Cronbach’s alpha of 0.94. The second scale is an adapted version of the three-item, 7-point Likert scale by Moon (2001), which has a Cronbach’s alpha of 0.88. The measure is adapted in the sense that not all questions posed by Luarn and Lin are relevant for this study and will thus not be used. The measure in this study has three items. An example of a question is: “Assuming that (platform) is available, I predict that I will use it on a regular basis in the future.”

When the respondents have answered all questions on all five platforms, they are asked to answer some questions about their age, gender, and educational background. This way, it is possible to get a sense of the characteristics of the sample, and it can be assessed whether the results can be generalized to the whole population.

5. Results

In this section, the results of the statistical analysis of the survey data will be described. Firstly, a description of the actual sample size and its characteristics will be given, followed by the reliability and descriptive statistics for the variables. After that, the results of the factor analysis and the serial mediation will be stated. Next, the platforms will be ranked based on their outcomes, after which the results of the repeated measures ANOVA will be presented. Lastly, the results of the stepwise regression will be shown.


After the statistical results, the scores that were given to the platforms via the created framework will be stated and explained. These scores will also be linked to the results of the survey and the actual performance measures.

5.1 Sample

The total sample consisted of 149 respondents, which is more than anticipated beforehand. However, not all respondents completed the whole questionnaire, so demographics are not available for all respondents. The effects of this partial completion are also visible later in this section. For the 140 respondents who did fill out the demographics part of the survey, the average age was 25, which seems logical, since the sample largely consists of students from the University of Amsterdam. The sample composition is also visible in the educational background: the majority of the sample (63%) studies or has studied at a university, while 28% attends or has attended a higher professional education program. The sample was evenly split between men and women.

Since not all respondents completed the survey, there was missing data. The data showed that respondents who did not complete the survey tended to quit after completing a whole section for a specific platform. The fact that they did complete one section means that the answers given in those sections can still be valuable and relevant to this study. Missing data points were therefore re-coded into a specific value (999) so that they could be excluded pairwise, which means that all cases are included and that only the missing data is not incorporated in the analysis.
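The sketch below illustrates this recoding-and-pairwise-exclusion step, assuming the responses live in a pandas DataFrame; the file and column names are hypothetical. Pandas computes each correlation over all cases where both variables are present, which corresponds to SPSS’s “exclude cases pairwise”.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the survey data
df = df.replace(999, np.nan)              # 999 was used to mark missing answers

# Pairwise deletion: every correlation uses all cases with both variables present.
print(df[["PrcEase", "PrcUse", "Att", "IntUse"]].corr())
```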


5.2 Reliabilities

After checking for errors and counter-indicative items, neither of which were present, the reliability of the scales was tested. The scale for Perceived Ease of Use has high reliability, with Cronbach’s Alpha = 0.96. All items correlate well with the total score of the scale, as the corrected item-total correlations are all above .30 (see table 1). The fourth item in the scale would substantially affect the reliability if it were deleted (Δα = .12). However, the item was not deleted, because its content is relevant to the study.

The Perceived Usefulness scale has a Cronbach’s Alpha of .93, which is also highly reliable. The corrected item-total correlations indicate that all items correlate well with the total score of the scale (all above .30). Items one and three in the scale would substantially affect the reliability if they were deleted (Δα = .14; Δα = .11). However, the items were not deleted, as they are relevant to the study.

The Attitude scale is highly reliable as well, with Cronbach’s Alpha = .94. All items correlate well with the total score of the scale, as the corrected item-total correlations are all above .30. None of the items would substantially affect the reliability if they were deleted.

The scale for Intention to use has high reliability, as the Cronbach’s Alpha is 0.91. The corrected item-total correlations indicate that all the items have a good correlation with the total score of the scale (all above .30). The third item in the scale would substantially affect the reliability if it were deleted (Δα = .13). However, the item was not deleted, because the content is relevant to the study. The reliability, correlations, means and standard deviations are shown in table 2.
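As an aside, Cronbach’s alpha can be computed directly from the item responses. The sketch below uses the classical formula; the DataFrame and column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()                       # complete cases within the scale
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars / total_var)

# e.g., the four-item Perceived Ease of Use scale (reported alpha = .96):
# cronbach_alpha(df[["PrcEase1", "PrcEase2", "PrcEase3", "PrcEase4"]])
```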


Table 1: Item-Total Statistics

Item       Scale Mean if    Scale Variance if    Corrected Item-      Cronbach's Alpha
           Item Deleted     Item Deleted         Total Correlation    if Item Deleted
PrcEase1   7.731            5.444                .914                 .937
PrcEase2   7.722            5.407                .898                 .942
PrcEase3   7.688            5.311                .930                 .933
PrcEase4   7.499            5.302                .839                 .961
PrcUse1    8.630            4.640                .784                 .923
PrcUse2    8.730            4.891                .907                 .877
PrcUse3    8.892            5.297                .800                 .912
PrcUse4    8.758            4.977                .838                 .899
Att1       5.989            2.455                .866                 .931
Att2       5.825            2.404                .871                 .938
Att3       5.734            2.399                .913                 .895
IntUse1    7.460            3.253                .813                 .872
IntUse2    7.111            2.901                .873                 .821
IntUse3    7.371            3.496                .773                 .905


Table 2: Means, Standard Deviations, Correlations

Variables                  M      SD    1     2     3     4     5     6      7      8      9
1. Age                     25.37  5.70  -
2. Gender                  1.50   .50   -.05  -
3. Education               3.61   .70   -.03  .13   -
4. Knowledge               2.59   .42   .07   -.04  -.19  -
5. Usage                   2.90   .22   -.06  .07   .01   .43   -
6. Perceived Ease of Use   2.55   .76   -.03  -.31  -.21  .17   .01   (.96)
7. Perceived Usefulness    2.93   .76   .02   -.30  -.13  .11   .12   .55    (.93)
8. Attitude                2.92   .77   -.01  -.33  -.09  .19   .06   .56    .77    (.94)
9. Intention to Use        3.60   .89   .02   -.21  .01   .21   .16   .33    .67    .72    (.91)

Note: scale reliabilities (Cronbach's alpha) appear in parentheses on the diagonal; correlations significant at the .05 level (2-tailed) appear in bold


5.3 Factor analysis

To test the conceptual model and determine the relevance of all items, Principal Axis Factoring (PAF) was conducted on the scales. To verify the sampling adequacy for the analysis, the Kaiser-Meyer-Olkin measure was used, which resulted in KMO = .903. Bartlett’s test of sphericity, χ²(91) = 2335.96, p < .001, indicated that correlations between items were sufficiently large for PAF. Initial analysis showed that two components had eigenvalues over Kaiser’s criterion of 1. However, the scree plot showed two points of inflection, which implies four factors (see figure 2). The second point of inflection is admittedly small, but combined with the theoretical basis of the model, retention of four factors is most logical. The four factors together explain 83.22% of the variance. The factors are retained and rotated using oblimin rotation with Kaiser normalization, as shown in table 3. The items that cluster on the same factors suggest that factor one represents Perceived Usefulness, factor two Perceived Ease of Use, factor three Intention to Use, and factor four Attitude. The second item on the Attitude factor shows a high cross-loading on the Perceived Usefulness factor. This could be explained by the assumption that if someone has a positive attitude towards a platform, one automatically finds it useful.


Figure 2: Scree Plot

Table 3: Rotated Factor Loadings

Item            PrcUse   PrcEase   IntUse   Att
PrcEase1        .02      .94       -.3      .001
PrcEase2        -.06     .91       .01      -.06
PrcEase3        -.03     .94       -.04     -.08
PrcEase4        .11      .83       .08      .07
PrcUse1         .86      -.07      .03      .004
PrcUse2         .94      -.04      -.02     -.09
PrcUse3         .69      .32       .03      -.03
PrcUse4         .55      .29       .14      -.10
Att1            .06      .12       .05      -.75
Att2            .40      -.002     -.05     -.91
Att3            .01      -.02      .13      -.88
IntUse1         .10      -.06      .66      -.21
IntUse2         .002     -.01      1.07     .11
IntUse3         .003     .08       .64      -.19
Eigenvalues     8.60     2.22      .82      .63
% of variance   60.22    14.88     4.62     3.50
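For readers who want to reproduce this kind of analysis, the sketch below runs the same pipeline in Python with the factor_analyzer package. This is an assumption about tooling (the thesis most likely used SPSS), and the data file is hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("survey_items.csv")[  # hypothetical file with the 14 items
    ["PrcEase1", "PrcEase2", "PrcEase3", "PrcEase4",
     "PrcUse1", "PrcUse2", "PrcUse3", "PrcUse4",
     "Att1", "Att2", "Att3",
     "IntUse1", "IntUse2", "IntUse3"]].dropna()

chi2, p = calculate_bartlett_sphericity(items)  # reported: chi2(91) = 2335.96, p < .001
_, kmo_total = calculate_kmo(items)             # reported: KMO = .903

# 'principal' = principal axis factoring; 'oblimin' = the oblique rotation used above.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))  # compare with table 3
```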


5.4 Serial mediation effect

Given that the conceptual model contains a serial mediation effect with two mediators, both the direct effect of the independent variable on the dependent variable and the indirect effects through the two mediators have to be analyzed. This analysis is done with an observed variable regression path analysis.

The direct effect of Perceived Ease of Use on Intention to Use is negative and significant, c’ = -.192, t(142) = -2.447, p = .016. Three indirect effects further illustrate the underlying process, as shown in table 4. The first is the indirect effect of Perceived Ease of Use on Intention to Use through Perceived Usefulness. This effect indicates that people who find a platform easy to use find it significantly more useful (a1 = .55, p < .01), which in turn is associated with higher intentions to use it (b1 = .37, p < .01), independently of their attitudes. This indirect effect can be interpreted as significantly positive because the bootstrap confidence interval is above zero (Effect = .205, SE = .07, CI: .098 to .368).

The second indirect effect is the effect of Perceived Ease of Use on Intention to Use through Perceived Usefulness and Attitude in serial. People who find a platform easy to use consider it significantly more useful, which is also related to more positive attitudes towards the platform (a3 = .69, p < .01), and the increase in positive attitudes further results in higher intention to use it (b2 = .63, p < .01). This specific indirect effect is significantly positive (Effect = .241, SE = .06, CI: .136 to .370).

The third indirect effect indicates the specific effect of Perceived Ease of Use on Intention to Use through Attitude. People who find a platform easy to use have a significantly more positive attitude towards it (a2 = .16, p < .01), which in turn is associated with their intentions to use the platform (b2 = .63, p < .01), regardless of whether they find it useful. This indirect effect is significantly positive (Effect = .104, SE = .047, CI: .022 to .206).

Next to the direct and indirect effects, one of the control variables also showed a significant coefficient. The regressions showed that there is a positive relationship between Knowledge and Attitude (cv1 = .23, p = .03), which implies that someone with prior knowledge about a specific platform tends to have a more positive attitude towards using it.

These results show support for all but one hypothesis. The regressions showed a positive relationship between Perceived Ease of Use and Perceived Usefulness (a1 = .55, p < .01), which supports H5; a positive relationship between Perceived Ease of Use and Attitude (a2 = .16, p < .01), which supports H6; a positive relationship between Perceived Usefulness and Attitude (a3 = .69, p < .01), which supports H7; a positive relationship between Perceived Usefulness and Intention to Use (b1 = .37, p < .01), which supports H8; and a positive relationship between Attitude and Intention to Use (b2 = .63, p < .01), which supports H9. However, there was a negative, significant relationship between Perceived Ease of Use and Intention to Use (c’ = -.192, p = .016), which means that H1 is not supported.

Next to the direct effects, the regression also gave supporting results regarding the indirect effects. The positive relationship between Perceived Ease of Use and Intention to Use is not only mediated by Perceived Usefulness (Effect = .205) and Attitude (Effect = .104) individually but also in serial (Effect = .241). These results support H2, H3, and H4.


Table 4: Consequent

                        M (PrcUse)                  M (Att)                     Y (IntUse)
Antecedent              Coeff.   SE      p          Coeff.   SE      p          Coeff.   SE      p
X (PrcEase)        a1    .552    .069    < .01   a2  .164    .062    < .01   c'  -.192   .079    .016
M (PrcUse)               ---     ---     ---     a3  .688    .062    < .01   b1   .372   .105    < .01
M (Att)                  ---     ---     ---         ---     ---     ---     b2   .634   .104    < .01
C (Know)                 ---     ---     ---     cv1 .230    .104    .029        ---     ---     ---
C (Use)                  ---     ---     ---         ---     ---     ---         ---     ---     ---
Constant           im1   .423    .709    .551    im2 .638    .528    .229    iy   .057   .659    .931

                        R² = .391                   R² = .635                   R² = .578
                        F(144) = 22.492, p < .01    F(143) = 62.276, p < .01    F(142) = 38.812, p < .01
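The sketch below shows how the same quantities can be reproduced outside PROCESS with three OLS regressions and a percentile bootstrap. It approximates what PROCESS model 6 computes rather than replicating the thesis’s exact procedure; the control variables are omitted for brevity, and the data file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")  # hypothetical: one row per respondent

def indirect_effects(d: pd.DataFrame):
    """Return (a1*b1, a2*b2, a1*a3*b2): two simple and one serial indirect effect."""
    m1 = smf.ols("PrcUse ~ PrcEase", d).fit()                # X -> M1
    m2 = smf.ols("Att ~ PrcEase + PrcUse", d).fit()          # X, M1 -> M2
    y = smf.ols("IntUse ~ PrcEase + PrcUse + Att", d).fit()  # full model with direct effect c'
    a1 = m1.params["PrcEase"]
    a2, a3 = m2.params["PrcEase"], m2.params["PrcUse"]
    b1, b2 = y.params["PrcUse"], y.params["Att"]
    return a1 * b1, a2 * b2, a1 * a3 * b2

rng = np.random.default_rng(seed=1)
boot = np.array([indirect_effects(df.iloc[rng.integers(0, len(df), len(df))])
                 for _ in range(5000)])            # resample respondents with replacement
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% percentile confidence intervals
print(indirect_effects(df), lo, hi)
```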

5.5 Platform ranking per variable

An analysis of the descriptive statistics (mean and standard deviation) for all variables in the conceptual model showed how the different platforms rank on those variables. These results indicate the opinions of consumers, as shown in table 5.

The platform with the highest ranking on Perceived Ease of Use is Snappcar, with Mean = 2.27 and standard deviation (SD) = 0.93 (note that lower means indicate stronger agreement). This ranking means that, on average and rounded, people perceive Snappcar as moderately easy to use. The differences amongst the platforms on this variable are relatively small, with only Udemy and Publitas being perceived as slightly easy to use (Mean = 2.70, SD = 1.01; Mean = 3.10, SD = 1.16).

For the Perceived Usefulness of the platforms, Peerby has the highest ranking (Mean = 2.66, SD = 1.12), meaning that people perceive Peerby as the most useful one. On average and rounded, they scored Peerby as slightly useful. Again, the differences amongst platforms are relatively small, with Publitas as the lowest scorer still being perceived as slightly useful (Mean = 3.48, SD = 1.38).


The variable Attitude measured the attitudes of the consumers towards using the platform for the activity it is supposed to support. For this variable, the differences were relatively large. People had the most positive attitude towards using Snappcar (Mean = 2.45, SD = 1.02), meaning that, on average, they agreed that, for example, renting a car through Snappcar is a good idea. In contrast, people had the least positive attitude towards using Publitas (Mean = 3.59, SD = 1.36); on average, they were more or less indifferent to the idea of Publitas.

Lastly, for the Intention to Use, the highest scoring platform was again Snappcar (Mean = 3.30, SD = 1.23). This ranking means that people somewhat agreed with statements regarding their intentions to use Snappcar in the future. The lowest scoring platform was Publitas (Mean = 4.13, SD = 1.43), meaning that people neither agreed nor disagreed with the statement regarding intentions to use the platform.

5.6 Repeated measures ANOVA

After looking at the descriptive statistics for all variables per platform, a repeated measures ANOVA was performed to test whether the observed differences between the variable means per platform were statistically significant. The number of cases shown in the table with descriptive statistics (table 5) differs from the number of cases in table 6, in which the results of the repeated measures ANOVA are shown. This difference arises because a repeated measures ANOVA requires the same number of cases for every variable, whereas in the general analysis of the descriptive statistics, missing data were deleted pairwise, so that partly completed questionnaires were also included.
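The difference between pairwise and listwise deletion can be illustrated with a minimal pandas sketch (values and column names hypothetical):

    import numpy as np
    import pandas as pd

    # Two platform scores with different missing responses (hypothetical).
    df = pd.DataFrame({"Snappcar": [2.0, 3.0, np.nan],
                       "Fitmo": [4.0, np.nan, 2.0]})

    # Pairwise deletion: every mean uses all values available per column.
    print(df.mean())

    # Listwise deletion: only fully completed rows are kept, which is
    # what the repeated measures ANOVA in table 6 requires.
    print(df.dropna().mean())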


For the variable Perceived Ease of Use, Mauchly's test, χ²(9) = 14.33, p = .11, did not indicate any violation of sphericity, so sphericity is assumed. The difference between the means is statistically significant, F(4, 560) = 21.64, p < .01, meaning that, from a statistical point of view, the five platforms were rated differently. Mauchly's test for Perceived Usefulness, χ²(9) = 30.14, p < .01, did indicate a violation of sphericity; so, with Greenhouse-Geisser epsilon (ε) > .75, Huynh-Feldt results are used. For Perceived Usefulness, the difference between the means is also statistically significant: F(3.77, 527.64) = 13.56, p < .01. In the case of Attitude, Mauchly's test, χ²(9) = 24.75, p < .01, also indicated a violation of sphericity; so, with ε > .75, Huynh-Feldt results are again used. For this variable, the five platforms were not rated equally either, because the difference between the means is statistically significant: F(3.82, 530.40) = 20.23, p < .01. Finally, for Intention to Use, Mauchly's test, χ²(9) = 18.72, p = .03, did indicate a violation of sphericity. With ε > .75, Huynh-Feldt results were used. The difference between the means is again statistically significant: F(3.84, 537.51) = 11.74, p < .01.
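As an illustration, the same sequence of tests can be reproduced in Python with the pingouin package (a minimal sketch on simulated long-format data; note that pingouin reports the Greenhouse-Geisser corrected p-value, while the Huynh-Feldt epsilon has to be requested separately):

    import numpy as np
    import pandas as pd
    import pingouin as pg

    # Simulated long-format stand-in: one Perceived Ease of Use score
    # per respondent per platform (values and names hypothetical).
    rng = np.random.default_rng(1)
    platforms = ["Snappcar", "Fitmo", "Peerby", "Udemy", "Publitas"]
    df = pd.DataFrame([{"subject": s, "platform": p,
                        "ease": rng.normal(loc=2.5, scale=1.0)}
                       for s in range(141) for p in platforms])

    # Mauchly's test of sphericity.
    spher = pg.sphericity(df, dv="ease", within="platform", subject="subject")
    print(f"W = {spher.W:.3f}, chi2(9) = {spher.chi2:.2f}, p = {spher.pval:.3f}")

    # Huynh-Feldt epsilon, used here when sphericity is violated
    # and the Greenhouse-Geisser epsilon exceeds .75.
    eps_hf = pg.epsilon(df, dv="ease", within="platform",
                        subject="subject", correction="hf")
    print(f"Huynh-Feldt epsilon = {eps_hf:.3f}")

    # Repeated measures ANOVA; with correction=True pingouin adds the
    # Greenhouse-Geisser corrected p-value. Huynh-Feldt corrected degrees
    # of freedom follow from multiplying ddof1 and ddof2 by eps_hf.
    aov = pg.rm_anova(df, dv="ease", within="platform",
                      subject="subject", correction=True)
    print(aov[["Source", "ddof1", "ddof2", "F", "p-unc"]])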

These results confirm that the platforms are indeed not rated equally on any of the variables. The statistical significance of the differences in variable means implies that they are not coincidental; coincidental differences could lead to incorrect statements about which platform is rated higher on a specific variable. The results of the repeated measures ANOVA thus confirm the correctness of the statements in section 5.5.


Table 5: Descriptive Statistics per Variable

            Perceived Ease of Use    Perceived Usefulness     Attitude                 Intention to Use
            N     Mean   SD          N     Mean   SD          N     Mean   SD          N     Mean   SD
Snappcar    149   2.27   .93         149   2.69   1.07        148   2.45   1.02        149   3.30   1.23
Fitmo       144   2.47   1.05        144   2.96   1.42        143   3.03   1.29        144   3.94   1.49
Peerby      142   2.32   1.07        142   2.66   1.12        141   2.91   1.35        142   3.43   1.42
Udemy       141   2.70   1.01        141   2.86   1.11        140   2.72   1.12        141   3.54   1.27
Publitas    141   3.10   1.16        141   3.48   1.38        140   3.59   1.36        141   4.13   1.43

Table 6: Repeated Measures ANOVA

            Perceived Ease of Use     Perceived Usefulness       Attitude                   Intention to Use
            N    Mean  SD             N    Mean  SD               N    Mean  SD              N    Mean  SD
Snappcar    141  2.27  .95            141  2.69  1.09             140  2.47  1.03            141  3.34  1.24
Fitmo       141  2.43  .98            141  2.93  1.40             140  3.00  1.25            141  3.92  1.47
Peerby      141  2.29  1.00           141  2.64  1.10             140  2.88  1.31            141  3.41  1.41
Udemy       141  2.70  1.01           141  2.86  1.11             140  2.72  1.12            141  3.54  1.27
Publitas    141  3.01  1.16           141  3.48  1.38             140  3.59  1.36            141  4.13  1.43

Mauchly     χ²(9) = 14.33, p = .11    χ²(9) = 30.14, p < .01     χ²(9) = 24.75, p < .01     χ²(9) = 18.72, p = .03
ε           .945                      .915                       .925                       .931
ANOVA       F(4, 560) = 21.64,        F(3.77, 527.64) = 13.56,   F(3.82, 530.40) = 20.23,   F(3.84, 537.51) = 11.74,
            p < .01                   p < .01                    p < .01                    p < .01


5.7 Stepwise regression

To test which factors have the strongest effect on Intention to Use, after controlling for prior Knowledge and Usage, a stepwise linear regression was performed. The regression was done stepwise so that the results only show the variables that are strong, significant predictors of Intention to Use. In addition, the regression was run for each platform separately as well as for the total conceptual model.
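As an illustration, a forward variant of stepwise selection can be sketched as follows (a minimal sketch on simulated data; the entry threshold, the simulated relationships, and the omission of a removal step are simplifying assumptions):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated stand-in for the survey data; column names are hypothetical.
    rng = np.random.default_rng(7)
    n = 150
    df = pd.DataFrame({"Knowledge": rng.normal(size=n),
                       "Usage": rng.normal(size=n),
                       "PrcEase": rng.normal(size=n),
                       "PrcUse": rng.normal(size=n)})
    df["Att"] = 0.7 * df["PrcUse"] + rng.normal(scale=0.8, size=n)
    df["IntUse"] = 0.5 * df["Att"] + 0.3 * df["PrcUse"] + rng.normal(size=n)

    def forward_stepwise(data, response, candidates, alpha_enter=0.05):
        """Add, one at a time, the candidate with the lowest p-value,
        as long as that p-value is below the entry threshold."""
        selected = []
        while True:
            remaining = [c for c in candidates if c not in selected]
            if not remaining:
                break
            pvals = {}
            for c in remaining:
                X = sm.add_constant(data[selected + [c]])
                pvals[c] = sm.OLS(data[response], X).fit().pvalues[c]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha_enter:
                break
            selected.append(best)
        return selected

    predictors = ["Knowledge", "Usage", "PrcEase", "PrcUse", "Att"]
    print(forward_stepwise(df, "IntUse", predictors))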

As the results in table 7 show, for Snappcar, there is only one strong and significant predictor, namely Attitude (β = .64, p < .01), which explains 41% of the variance. The regression for Fitmo showed that both Perceived Usefulness (β = .28, p < .01) and Attitude (β = .53, p < .01) are strong predictors, with Attitude being the strongest one. Together they explain 57.8% of the variance. For Peerby, Knowledge (β = .04, p = .45), Perceived Usefulness (β = .19, p = .01), and Attitude (β = .67, p < .01) entered the model; however, Knowledge is no longer significant once the other variables are taken into account. Together they explain 66.5% of the variance. In the regression for Udemy, Knowledge (β = .15, p = .01), Perceived Usefulness (β = .31, p < .01), and Attitude (β = .46, p < .01) are also strong predictors, and in this case they are all significant when combined. Together they explain 57.8% of the variance. Lastly, the results for Publitas showed only one strong and significant predictor, namely Attitude (β = .83, p < .01), which explains 68.4% of the variance.

For all platforms, the platform-specific regressions showed Attitude as the strongest predictor of Intention to Use. The same holds for the regression on the total model, in which Attitude is also the strongest significant predictor (β = .53, p < .01), meaning that one's attitude towards using a specific platform has the strongest effect on one's intention to use it. In three of the five platform-specific regressions, Perceived Usefulness was also a strong predictor, which is also visible in the total model regression (β = .34, p < .01). These results mean that, next to one's attitude, the perceived usefulness of a platform influences one's intention to use it. Both effects are positive, meaning that if attitudes become more positive or the perceived usefulness increases, the intention to use the platform will increase as well. The final significant effect is only visible in the total model, namely that of Perceived Ease of Use (β = -.18, p = .01). This effect, however, is negative, meaning that when a platform is perceived as easier to use, the intention to use it decreases. All mentioned variables together, including Knowledge (β = .10, p = .09), explain 56.4% of the variance in the conceptual model.


Table 7: Stepwise Regression

Model 1 (all platforms)
                         Snappcar      Fitmo         Peerby        Udemy         Publitas      Total
Factor                   β      p      β      p      β      p      β      p      β      p      β      p
(Constant)               ---   .000    ---   .000    ---   .000    ---   .053    ---   .000    ---   .000
Knowledge                ---   ---     ---   ---     .179  .033    .350  .000    ---   ---     .205  .012
Attitude                 .641  .000    .744  .000    ---   ---     ---   ---     .828  .000    ---   ---
R²                       .410          .551          .032          .116          .684          .036

Model 2 (final model for Fitmo; Snappcar and Publitas stop after model 1)
                         Fitmo         Peerby        Udemy         Total
(Constant)               ---   .000    ---   .000    ---   .053    ---   .014
Knowledge                ---   ---     .035  .492    .179  .003    .072  .222
Perceived Usefulness     .275  .002    ---   ---     ---   ---     ---   ---
Attitude                 .532  .000    .803  .000    .671  .000    .706  .000
R²                       .578          .651          .537          .516

Model 3 (final model for Peerby and Udemy)
                         Peerby        Udemy         Total
(Constant)               ---   .013    ---   .868    ---   .101
Knowledge                .038  .446    .152  .009    .083  .147
Perceived Usefulness     .186  .009    .305  .000    .288  .001
Attitude                 .669  .000    .458  .000    .481  .000
R²                       .665          .578          .546

Model 4 (final model for the total conceptual model)
                         Total
(Constant)               ---   .057
Knowledge                .096  .089
Perceived Ease of Use    -.175 .011
Perceived Usefulness     .34   < .01
Attitude                 .53   < .01
R²                       .564

Note: --- = variable not entered in that step; Usage never entered any model. The model 4 coefficients for Perceived Usefulness and Attitude are reported at two decimals, as given in the text of section 5.7.


5.8 Platform scores

To assess the platforms and score them on the success factors, the measures described in section 4.1 were used. Managers of all platforms were asked to answer the corresponding questions (e.g., about marketing strategy and value proposition; see section 10.2). As most platform managers found the requested information confidential, only a manager of Snappcar was willing to participate and answer the questions. Consequently, only these answers can be used to validate the proposed framework.

On the 5-point Likert scale for Platform Openness, Snappcar was assessed as moderately autonomous. This score implies that participants can act freely and that their actions are controlled only to a limited extent. As for the engagement of the car owners, there are multiple tools in different categories: Snappcar provides one tool for rewarding, two for reputation, and three for overall support, which sums up to six tools in total. These results suggest that Snappcar has a moderately large number of tools for car owners. Then, for the sharing of the surplus, Snappcar gains profit from both sides of its network. The specific percentages were not disclosed, but it is estimated that, in the most extreme case, 60% comes from the car owners and 40% from the renters. This can be described as a high level of fair distribution. Next, Snappcar seems to have started at the 'right side', meaning that there have not been any definitive changes in the orientation of the value proposition. They adopted a value proposition that can be described as dynamic, which means that they can change the focus of the proposition without changing it as a whole. Further, there has been one small adjustment in the marketing strategy: the focus on monetization came after six years, meaning that they focused on sufficiently expanding their user base first. Lastly, for imagination, Snappcar scores moderately high. Although they were founded as a digital platform, they usually respond to disruptions (incremental innovations in their case) in the industry; their response time, however, is short (less than a year).
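To make this kind of assessment repeatable, the rubric could be encoded along the following lines (a minimal sketch; the 1-5 scales, the equal weighting, and the example values are assumptions made for illustration, not part of the validated framework):

    from dataclasses import dataclass, fields

    @dataclass
    class PlatformAssessment:
        """One score of 1-5 per success factor (the 1-5 mapping of the
        qualitative judgments is an assumption made for illustration)."""
        openness: int            # autonomy of the participants
        engagement_tools: int    # rewarding, reputation, and support tools
        surplus_sharing: int     # fairness of the profit distribution
        value_proposition: int   # static vs. dynamic orientation
        marketing_strategy: int  # growth first, monetization later
        imagination: int         # responsiveness to industry disruptions

        def success_score(self) -> float:
            # Equal weights assumed; the framework leaves weighting open.
            values = [getattr(self, f.name) for f in fields(self)]
            return sum(values) / len(values)

    # Illustrative values loosely based on the Snappcar assessment above.
    snappcar = PlatformAssessment(openness=3, engagement_tools=3,
                                  surplus_sharing=5, value_proposition=4,
                                  marketing_strategy=4, imagination=4)
    print(f"Success score: {snappcar.success_score():.2f} / 5")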

Next to the success factors, actual performance measures were also used to gain insight into the success of Snappcar. Since its launch in 2011, Snappcar has raised $12.3 million in funding and has an estimated revenue of $17.5 million. Its user base grows at a rate of 50% per year, which has resulted in 500,000 users today. The current market value was not disclosed, but it is expected to be over $10 million. Snappcar is now active in four countries.

6. Conclusion

The purpose of this research was to examine and objectify critical success factors of digital platforms, and to see whether they can be used to predict or explain platform success. To validate the objectified success factors, both subjective and objective measures were used: the subjective measure being consumer opinions, measured using an adapted version of the Technology Acceptance Model, and the objective measure being multiple performance criteria. In this section, concluding statements regarding the outcomes of the TAM are presented first; after that, conclusions on the validation of the success factors in the framework follow.

6.1 Technology Acceptance Model

To test the conceptual model, multiple statistical analyses were performed. The results showed support for the model and the included variables. All relationships between the variables are positive, except the one between Perceived Ease of Use and Intention to Use: the results indicate that the easier a platform is perceived to be, the fewer people intend to use it. This negative effect is unexpected, as it was assumed that a more complex platform would discourage consumers. However, as the sample consisted largely of university students, their interpretation of complexity might differ from that of the average consumer.

The regression analysis showed Attitude as the strongest predictor of Intention to Use, and Perceived Usefulness as the second strongest. Attitude also has a high cross-loading with Perceived Usefulness, which suggests that someone with a positive attitude towards a platform also tends to find it useful. This effect was already described by Davis (1989), who stated that intentions are formed by the person's attitude toward an information system and the perception of its usefulness. Another interesting outcome of the analysis is that Knowledge directly influences Attitude: when someone has prior knowledge about a specific platform, he or she generally has a more positive attitude towards it.

6.2 Framework

According to Xu (2014), a platform should be as open as possible to all consumers. High levels of platform openness (autonomy) will lead to generativity, which means that participants will pursue self-interested, innovative activities (Wareham et al., 2014; Yoo et al., 2010). In addition, the rate of innovation will be optimal, and the autonomous contributions of the participants will create only relevant and complementary products and services that together solve a problem better than the platform would alone (Wareham et al., 2014). As Snappcar is assessed as moderately autonomous, it can be stated that they have a good balance between autonomy and control.
