
Introducing a performance

framework specific for internal

corporate ventures

Institution:

University of Amsterdam

Study:

MSc Business Administration

Track:

Entrepreneurship & Innovation

Submission date: June 23, 2017

Supervisor:

dr. W. van der Aa


Abstract

An internal corporate venture program is an important tool for corporates to become more innovative in order to adapt to their rapidly changing environment. Currently, the approach to measure performance is subjective and inadequate. The goal of this research is to fill this gap by providing a framework that suggests concrete metrics that can be used to measure ICV performance.

The research is conducted at a leading Dutch bank with an extensive ICV program. Semi-structured interviews were used to apply an existing performance metrics framework, adapted from the NPD research field, and to find possible additional metrics. A survey was sent out afterwards to prioritize the metrics and gather deeper insights. The research considered industry effects, the influence of different purposes of ICV programs, and different phases.

The main contribution of this research is the final ICV performance framework, consisting of a concrete set of metrics. Explanations why these metrics are appropriate are also included in the research. Practitioners can use the framework to design their performance management system. Also researchers can use it to more objectively measure performance in their research. Furthermore, this research identified two different types of metrics, namely progress and decision metrics. Finally, the research critically analyses which phase model to use and brings up a new suggestion with regards to the standard in the research field.


Statement of originality

This document is written by Tom Westerbeek, who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Contents

List of Tables v

1 Introduction 1

2 Literature Review 4

2.1 Embracing entrepreneurial practices . . . 4

2.2 ICVs boost innovation . . . 5

2.3 Performance Measurement Systems . . . 6

2.3.1 Traditional performance measures . . . 8

2.3.2 Subjectivity of current measures . . . 8

2.3.3 ICV programs differ in purpose . . . 9

2.3.4 Different stages of ICVs . . . 11

2.3.5 Industry effects . . . 14

2.4 Creating an alternative . . . 14

2.4.1 Framework adoption from other research field . . . 15

2.4.2 NPD performance measurement frameworks . . . 17

2.5 Conceptual Model . . . 20

3 Methodology 22

3.1 Research Design . . . 22

3.2 Case Description . . . 23

3.2.1 Banking industry . . . 23

3.2.2 Rabobank . . . 24

3.2.3 ICV program . . . 24

3.3 Research methods . . . 26

3.3.1 Mixed methods research . . . 26

3.3.2 Semi-structured interviews . . . 26


3.4 Addressing the research questions . . . 29

3.5 Analysis . . . 30

4 Results 35

4.1 Findings . . . 35

4.1.1 NPD Framework testing . . . 35

4.1.2 Additions to NPD framework . . . 41

4.2 Findings per phase . . . 44

4.2.1 Phase 1 . . . 45

4.2.2 Phase 2 . . . 47

4.2.3 Phase 3 . . . 48

4.2.4 Phase 4 . . . 50

4.3 Final ICV performance framework . . . 51

5 Discussion 53

5.1 Core findings . . . 53

5.2 Theoretical contribution . . . 55

5.3 Practical implications . . . 56

5.4 Limitations . . . 58

5.5 Future research possibilities . . . 59

6 Conclusion 61

References 62

Appendix A: Interview questions 66


List of Tables

1 Terminology . . . 5

2 Purposes of ICV programs . . . 10

3 Set up methodology . . . 23

4 Overview of ventures . . . 25

5 Overview of participants . . . 28

6 Set up analysis . . . 33

7 Applied NPD metrics . . . 39

8 Additional metrics . . . 43

9 Phase 1: Customer Discovery . . . 45

10 Phase 2: Customer creation . . . 47

11 Phase 3: Customer validation . . . 48

12 Phase 4: Company building . . . 50

13 Final ICV performance framework . . . 52

14 Counted coding scheme 1 . . . 67


1 Introduction

We live in a world of technological revolution and increasing competition in international markets, which has enhanced the competitive importance of innovation. This perspective inspires large companies to look for ways to become more innovative and therefore explore and implement entrepreneurial practices. “Large firms and start-up ventures are decidedly different organizations. Each side has what the other one lacks. Shouldn’t great things happen if both sides combined their strengths?” (Weiblen & Chesbrough, 2015, p. 66). An Internal Corporate Venture (ICV) program is an example of such a combination and is defined as “an entrepreneurial initiative that originates within a corporate structure and is intended from inception as new business for the corporation” (Kuratko, Covin, & Garrett, 2009, p. 460).

Corporations invest billions of dollars in resources for internal venture development projects (Johnson, 2012). Despite the high expectations, the gap between the corporate and start-up ways of working poses real challenges for their collaboration (Weiblen & Chesbrough, 2015). After years of operations and irrespective of the measures of success (Campbell & Park, 2004; Garvin, 2002, 2004), 50–99 per cent fail to achieve their performance expectations (Birkinshaw, 2005; Chesbrough, 2000). These figures are not particularly precise, but they confirm how difficult it is for ICV programs to live up to their expectations.

This raises many questions. What is going wrong, and how can it be improved? A more fundamental question, however, may need an answer first: how is the success of ICVs actually measured? Does this way of measuring give a good understanding of how the ICV is doing? And, equally important, do the metrics provide relevant insights? Good performance measurement is vital (Chiesa, Frattini, Lazzarotti, & Manzini, 2009). These authors mention several reasons why, for instance the ability to motivate, monitor and evaluate. As a consequence, inadequate performance measures can lead to misdirected decision making by managers and misleading model development by researchers. The problem in finding adequate metrics is that traditional financial metrics are not applicable. For example, the use of sales growth rate as a performance criterion is problematic because ventures all start with zero sales, and profitability-related criteria for young ventures can vary too much among different accounting methods (Covin, Garrett, Kuratko, & Shepherd, 2015).

As an alternative, researchers (Thornhill & Amit, 2001; Johnson, 2012; Garrett & Neubaum, 2013) measure performance as "the venture's ability to meet milestones on schedule". They acknowledge the highly subjective nature of this method. Garrett and Neubaum (2013) give three reasons why it is hard to find a more appropriate way of measuring ICV performance: differing objectives among ICV programs, the different phases ICVs are in, and industry differences.

In conclusion, it is hard to define appropriate ICV performance metrics, but it is vital to fill this gap and to develop an in-depth understanding of this matter. Therefore, the main goal of this research is to understand which performance metrics are appropriate for ICVs and why. This study considers the challenging aspects mentioned by Garrett and Neubaum (2013). Consequently, the research question of this paper is "What are phase-dependent key performance metrics for ICVs, aiming for strategic benefits in the banking sector, and why are these appropriate?" The literature review discusses the sub research questions, which further clarify how these challenges are addressed and what the scope of this research is.

By answering the main question, this research generates a prioritized list of key performance metrics specific to ICVs, together with background on why these metrics fit the ICV environment. Practitioners can use this framework to design an effective performance measurement system, which has three main advantages. First, it increases the effectiveness of the ICV program by enabling stricter selection. Second, it provides insights into how and where to support the ventures. Third, clear expectations are set for all employees engaged in the ICV program. Researchers, in turn, can use it to measure the performance of ICVs more objectively.

The research is conducted as a case study at a top three bank of the Netherlands: the Rabobank. Mixed methods are used: twelve semi-structured interviews are conducted, followed by a survey.

The remainder of this research is as follows. First, a review of the literature is given. Then, the methodology is described. Finally, the results are shown and discussed, and the last section is the conclusion.


2 Literature Review

This chapter first explains what ICVs are and how they stimulate innovation. It then discusses in more depth why performance measurement is crucial and why it is challenging to find the right set of metrics for ICV environments. Finally, it discusses the selection of models that leads to a conceptual model, which is used to structure the research.

2.1 Embracing entrepreneurial practices

Corporates try to become more innovative to adapt to the rapidly changing business environment (Weiblen & Chesbrough, 2015). One of their efforts is to embrace corporate entrepreneurship. Unfortunately, there is ambiguity and confusion around the definition of terms like "corporate entrepreneurship", "intrapreneurship", "corporate venturing" and "internal corporate venturing" (Johnson, 2012). Therefore these terms require some clarification. Table 1 on page 5 provides an overview.


Term | Definition | Unique aspect

Corporate entrepreneurship | Process of revitalization of firms to stimulate entrepreneurial behaviours | Overarching term

Intrapreneurship | Refers to creating a more entrepreneurial climate; may also generate distinct new business units that would be managed separately | Employees keep their initial role

Corporate venturing | A corporate can obtain or acquire existing ventures | Originated outside the firm

Internal corporate venturing | An entrepreneurial initiative that originates within a corporate structure | Originated within the firm

Table 1: Terminology

To conclude, ICVs are defined as entrepreneurial initiatives that originate within a corporate structure and are intended from inception as new business for the corporation. ICVs can be a valuable tool for corporates to adapt to their rapidly changing environment. If employees have an idea they want to execute and are selected, they are taken out of their own function and get the opportunity to work on their own idea, as if it were their own start-up.

2.2 ICVs boost innovation

Garrett and Neubaum (2013) contend that innovation is the single common theme underlying all forms of corporate entrepreneurship. So how do ICVs actually influence corporate innovation? This can be explained by the different outcomes ICV programs can have. First of all, ICVs can be used for spin-offs, which aim to leverage innovations that do not fit the current strategy (Weiblen & Chesbrough, 2015). This reduces the financial risks of innovation, as the corporation can still profit from innovations, even if they do not fit the current strategy.

However, the most important way in which ICVs stimulate innovation is when they become an integrative part of the organization. The benefit of this can best be explained by comparison with the innovation streams of Tushman, Smith, Wood, Westerman, and O'Reilly (2010). They define innovation streams as the ability of a firm to host both incremental and discontinuous innovations. ICV programs mainly focus on discontinuous innovations (Johnson, 2012), while the core of the firm can still focus on incremental innovations.

Finally, Keil, McGrath, and Tukiainen (2009) introduce another perspective on how ICV programs increase the innovativeness of firms. They say ventures might spark the founding of new capabilities or might help develop capabilities, even if they fail financially. Their findings are based on 106 interviews among 37 ventures within one anonymous large player in the global electronics sector.

In summary, ICVs can be used for spin-offs, can become integrative parts of the organization, or can develop new capabilities. In these ways ICVs stimulate the innovativeness of corporates.

2.3 Performance Measurement Systems

The following paragraph discusses the value of performance measurement systems. Performance measurement is defined as "the acquisition and analysis of information about the actual attainment of company objectives and plans, and about factors that might influence this attainment" (Mascarenhas Hornos da Costa, Oehmen, Rebentisch, & Nightingale, 2014, p. 371). It is noteworthy that the terms metrics, measures, measurement and indicators are often used as synonyms in the literature. In this paper, however, the term metric is used and defined as: a variable that indicates one or more aspects of the effectiveness or efficiency of a process. An indication of "performance" refers to the combination of two or more metrics.

Control systems such as performance measurement are important in shaping an environment that stimulates creative processes such as identifying an idea that becomes the seed of a new company (Davila, Foster, & Oyon, 2009). The work of Chiesa et al. (2009) explains why performance measurement is so important within an NPD environment. Why this model might also be applicable to the ICVs, will be discussed more in detail in the section “Framework adoption from other research field”. Their framework consists of seven purposes of performance measurement systems of R&D departments:

• Motivate researchers and engineers and improve their performance in R&D activities

• Monitor the progress of R&D activities with respect to resource consumption targets, temporal milestones and technical requirements

• Evaluate the profitability of R&D activities and their contribution to the firm’s economic value

• Support the selection of the projects to be initiated, continued or discontinued

• Favor coordination and communication among the different people and organizational units taking part in R&D activities

• Reduce the level of uncertainty that surrounds R&D activities

• Stimulate and support individual and organizational learning

As mentioned in the introduction, incorrect or insufficient performance measurement may represent a risk to the organization. Frequently, managers base their decisions on poor performance measures (Chiesa et al., 2009). Consequently, misleading measures and associations can convey a wrong message about the progress or performance of a business. This can result in poor decisions that negatively impact the progress of a business. For managers, one implication is that they might assign too much budget to ICVs that are close to failure. For researchers, it might imply that they argue that some factors positively influence "ICV performance", while in fact these could be indicators of failure.

2.3.1 Traditional performance measures

This paragraph discusses why traditional performance measures might not be applicable to ICVs. Bhasin (2008) compiled several reasons why traditional metrics do not work in innovative environments, among which: traditional accounting measures are not suited for strategic decisions, and traditional metrics largely ignore value creation. The metric "market share" serves as an example of why traditional metrics do not apply in the ICV environment. Consider the case of pioneering ventures. If a new venture involves a genuinely new product or service, it may also entail the creation of an equally new market. A pioneering venture thus begins its life with a 100% market share, but to the extent that it is successful, it will attract followers. In other words, the true pioneer will probably lose market share if it is successful.

The inappropriateness of traditional metrics creates a need for a new set of metrics. This is partly solved by Thornhill and Amit (2001) who created an alternative. The next section “subjectivity of current measures” will explain why this alternative is not an adequate solution either.

2.3.2 Subjectivity of current measures

Davila et al. (2009) confirm the call for an alternative set of metrics. They state that entrepreneurial and innovative environments need special management control systems and, more specifically, a way of measuring performance. Thornhill and Amit (2001, p. 34) came up with an alternative way of measuring performance. Their method is currently the most widely used and defines ICV performance as "the venture's ability to meet milestones on schedule". Thus, performance highly depends on how these milestones are set. In this method, the venture managers are the ones who set these milestones. As every person is different and has different ambitions, it is likely that people set these goals differently. Consequently, the problem is that ventures are not assessed on their actual performance, but on the satisfaction of the venture manager. The highly subjective nature of this method is acknowledged to be inadequate by the authors themselves, as well as by other researchers (Johnson, 2012; Garrett & Neubaum, 2013).

There are three reasons why it is difficult for researchers to come up with a more adequate set of metrics (Garrett & Neubaum, 2013):

• ICV programs differ in purpose

• Different stages of ICVs

• Industry effects

How these issues are dealt with is discussed one by one in the next subsections.

2.3.3 ICV programs differ in purpose

It is hard to come up with one generic set of metrics if ICV programs have different aims. This paragraph explains how this is solved in the research. To illustrate the difficulty: "Some ICVs are founded for the purpose of leveraging a corporation's pre-existing assets in new business arenas, whereas other ICVs are founded as market probes" (Covin et al., 2015, p. 756). It is likely to take less time and effort to develop a product that is based on pre-existing assets. Therefore, a metric like development costs might be more relevant for market probes than for ventures that leverage pre-existing assets.

To enable this research to come up with proper metrics, a categorization of these different purposes is adopted from Miles and Covin (2002). They identified three main objectives firms can have with corporate venture programs, namely: organizational development & cultural change, strategic benefits & real option development, and quick financial returns. These three objectives are further clarified in table 2.

Purpose | Definition | Value

Strategic benefits and real option development | More fully appropriate value from current organizational competencies, or strategically reinvent or stretch the corporation | To capitalize on the many resources a firm can have, taking on (some of) the many possibilities this provides

Organizational development and cultural change | Aims to build an innovative or entrepreneurial capability | Firms recognize that they must embrace innovation or their long-term viability will be jeopardized

Quick financial returns | The promise of quick financial benefits | Circumvent the profitability caps inherent to one's core product-market segments or industries

Table 2: Purposes of ICV programs

To conclude, there are three types of objectives firms can have with ICV programs. The focus of this research is on strategic benefits & real option development, which from now on will be referred to as "strategic benefits". The reason for this focus is its alignment with the case of this study: the main goal of Rabobank is to benefit strategically, as will be discussed in more detail in the methodology. However, it is noted that this matter is often not that black and white; one venture can serve different purposes. Consider a venture that mainly delivered quick financial returns. The person who led this venture is still expected to have learned something about leading a venture, which refers to the second purpose.

Yet, to illustrate the selected objective "strategic benefits", consider the following example: "Tikkie" of ABN AMRO is an example of the rise of mobile apps in the banking sector. These are outcomes of market opportunities pursued by banks to use their competencies and stretch their digital strategies.

2.3.4 Different stages of ICVs

As an independent venture develops, what it takes to continue its development also changes (Johnson, 2012). Consequently, the appropriate metrics might differ for ICVs in different stages. For example, in the beginning ventures focus on acquiring new customers, while in a later stage they tend to focus more on operational improvements. Consider, for instance, the metric "customer satisfaction". This metric could score significantly higher in later stages, because operations are more mature and mistakes that might dissatisfy the customer are therefore less likely. This does not have to imply, though, that a venture in an earlier stage is not doing well. Even more so: constructive feedback in the beginning might be key to success.

Thus, it is important to consider differences in metrics across phases. Therefore, a proper phase model should be selected. Three models are considered, and the selection criterion is the degree to which the model is applicable to the ICV environment.

The first option to consider is the model of Thornhill and Amit (2001), which is widely used in the ICV literature (Johnson, 2012; Garrett & Neubaum, 2013). They define three phases for ICVs, namely:

• Early stage - year of first financial investment in the venture

• Middle stage - year the venture began to generate revenue

• Established stage - year the venture became profitable

The main advantage of this model is that outcomes of this research are easy to compare to other research outcomes. However, the applicability to practice is low, mainly because this typology in some cases does not group similar ventures in the same stage. Some ventures might have grown into relatively mature organizations but do not generate profits yet due to continuing large investments. This is the case for contemporary companies like Zalando and Cool Blue. Their focus, and therefore also the required performance metrics, will differ substantially from those of relatively small ventures that already started making profits because little investment was needed.

The second phase model to discuss is the one of Churchill and Lewis (1983). This traditional model is widely diffused in the literature and therefore important to consider. It describes five stages of growth small businesses go through, namely: existence, survival, success, take-off, and resource maturity. It focuses on how to structure the company rather than on how to successfully launch a product. As ICVs are heavily intertwined with the corporate, they have quite a specific way of structuring the organization, which diminishes the added value of the five stages model.

The final option is the customer development model of Blank (2013). It consists of four phases, namely: customer discovery, customer validation, customer creation and company building. The foundation of the model is to learn about customers. It aims to give room for discovery, failure and iteration, in order to prevent products being built that nobody actually wants. This fits the environment of ICVs, as one of their main goals is to build up capabilities and learn about new markets (Keil et al., 2009; Miles & Covin, 2002). As a consequence of this focus on learning and iteration, the model pays sufficient attention to the period after the inception of the company. This is an important phase for an ICV program, because every venture will go through it.

To conclude, the applicability of the last model is considered the highest. The model of Thornhill and Amit (2001) does not group similar ventures in the same group and the model of Churchill and Lewis (1983) is too much focused on how to structure an organization. The model of Blank (2013), as shown in figure 1, is not considered to be perfect, but as the best fit with this research to define different phases of the ICV process.


Figure 1: Customer development model (Blank, 2013)

The four phases of the model will be explained in more detail in this paragraph. The first phase of the customer development model is the customer discovery phase. This stage is all about testing the vision of the founder and understanding who the customers are. Hypotheses are set to structure the insights to be gained. These hypotheses describe aspects of the customer, the problem they have, and the envisioned solution to this problem.

Second, customer validation is about trying to find actual customers and coming up with an actual business model to serve them. The product is far from ready, but minimal viable products are launched, or tests are even done with only websites to see if customers click the "buy button". According to the model, it is likely that a venture will have to go back to the first phase or redo the second one to make sure the product is validated. This change of plan is called a "pivot". Because of possible pivots, the level of uncertainty is relatively high in the first two phases.

Third, at the end of the customer creation phase the first product can go live. The goal in this phase is to start growing the number of customers by intensifying marketing channels and further improving the product.

The final phase is company building. The organization and the operations need to be scaled. More formalized structures are built and the functions of people become more clearly defined.

2.3.5 Industry effects

The final challenge to consider is potential industry differences. The industry a firm operates in influences the innovativeness of the firm (Damanpour, 1991). Therefore, the common "magnitude" and/or "referent" of innovation might differ per industry (Crossan & Apaydin, 2010). Are innovations in this industry generally incremental or radical? And to whom is the innovation new: the firm, the market or the industry? This might influence which performance metrics are adequate to use. Consider, for instance, the metric "% of sales by new products". This number is expected to be more stable, and therefore provide fewer insights, if an industry in general produces innovations with a low level of magnitude or referent.

These differences make it hard to generalize metrics throughout industries and to come up with one generic set of metrics. Therefore, this research looks at the banking industry first. The possibility to apply findings to other industries is discussed later.

2.4 Creating an alternative

That brings us to the goal of this research. To explore and compose an adequate set of performance metrics for ICVs in financial services, the main research question of this paper is:

• What are phase-dependent key performance metrics for ICVs, aiming for strategic benefits in the banking sector, and why are these appropriate?

To be able to answer this question, the following sub questions need to be answered first. These sequential sub questions also indicate the scope of this research.

• What are possible performance metrics of ICVs, aiming for strategic benefits for the firm?


• Which performance metrics are most relevant to measure in the different phases of ICVs?

• Which performance metrics are most relevant to measure ICVs in the banking sector?

2.4.1 Framework adoption from other research field

To come up with a solid list of possible performance metrics, it is useful to look at existing frameworks from other research fields that can be (partly) applied to ICVs. Similar research fields that are discussed are the following:

• Independent Ventures (IVs)

• Incubators

• Venture capitalists (VCs)

• New Product Development (NPD)

This research considers two criteria to select the field from which to adopt a framework: the availability of frameworks within the field, and the similarity between the field and the ICV environment.

First, the performance of IVs is not assessed by other parties in the way a corporate monitors the performance of its ICVs. The way IVs measure their own performance is less mature, because the desire to formalize this process is lower. Therefore, the availability of similar performance frameworks is low. Also, ICVs tend to have access to valuable resources and support from the existing parent firm, whereas the independent entrepreneur lacks the "deep pockets" of a parent organization but enjoys more decision-making freedom without its bureaucracy (Johnson, 2012). This makes the two significantly different in nature.

Secondly, one might argue that incubators perform a supportive role for their ventures similar to that of the corporate for ICVs. However, research regarding incubators is mainly focused on the effectiveness of the incubator itself and therefore also delivers a limited number of applicable frameworks.

Third, there is an extensive amount of literature regarding VC assessment systems. However, these frameworks are only used as feed-forward measures: measures that forecast outputs and evaluate the inputs necessary to attain these outputs. This means they are used before investing in the venture. For ICVs, it is important to look at a framework that gives input for metrics both before and during the process. Also, the goal of VCs is to get a good ROI, not to increase the innovativeness of the firm. Therefore the VC literature is not selected for this research.

Finally, the overlap between NPD and ICV environments is recognized by many researchers (Cooper, Edgett, & Kleinschmidt, 2001; Ledwith, 2000; Simon & Houghton, 2003). Johnson (2012) warns other researchers against mixing up the two, which implies how similar they can be. Though his warning is valid, it does not apply to this research, because the aim is deliberately to find a similar, not equal, field of research to gain knowledge from. Similarities between the ICV and NPD environments are, for instance: both operate in an uncertain environment, both focus on generating innovations, and both appear within the context of large corporations. Yet, Greene, Brush, Hart, et al. (1999) argue that a new product does not always require the creation of a new organization, which is the main difference with an ICV. This difference might nevertheless lead to interesting insights into ICV-specific metrics.

Furthermore, there is an extensive body of adequate literature regarding performance measurement in NPD environments. That is because many researchers have aimed to reduce the ambiguity of NPD environments (Griffin & Page, 1996); the lack of consensus was a driver for the large amount of research.

To conclude, based on the two criteria, similarity of the fields and availability of frameworks, the NPD field is further explored to select a performance measurement framework. This is mapped out in the next section.


2.4.2 NPD performance measurement frameworks

Now that the NPD environment is selected, first some selection criteria will be discussed, followed by three possible frameworks.

Chiesa et al. (2009) argue that an effective Performance Measurement System (PMS) for R&D is built around a limited number of indicators that measure results rather than behaviour, and that privileges objective and external metrics over subjective and internal ones. In the same article, they argue that performance indicators for R&D should have a strategic orientation, reflect the firm's critical success factors, be simple, be able to encourage change, and balance financial and non-financial perspectives. These points are considered when selecting the final framework. Other criteria are: the empirical basis of the framework, its diffusion in the literature (times cited by other researchers), and the extent to which it has been used recently, to ensure the framework is up to date.

The first framework considered is that of Hertenstein and Platt (2000). They conducted a multi-case study and arrived at 43 different metrics, split into financial (27) and non-financial (16) metrics. A strong aspect of their work is the research design: they first held in-depth interviews with 11 NPD managers, followed by a workshop with the same people to gain more practical insights. Finally, they conducted a survey among 46 employees working in NPD departments.

A problem, however, is the number of metrics. As discussed above, a limited number of metrics is desired. Furthermore, the framework is not widely used by other researchers.

The second option is the work of Mascarenhas Hornos da Costa et al. (2014). This recent work focused on integrating the lean management vision into NPD performance measurement. They argue that "the importance of lean principles is unquestionable" (p. 371), that lean principles constitute a central part of every NPD environment, and that they should therefore be used as a guideline for companies willing to implement lean in their product development processes.

They first identified 153 metrics in 52 different articles. With the help of ten lean experts, they narrowed this down to a list of 50 metrics. Through a survey among 25 experts from the aerospace industry, these metrics were rated on current use and perceived usefulness. Finally, they categorized the metrics into five groups: stakeholder value and benefits, program objectives and requirements, results from product, results from process, and people. The strengths of this framework are its recent publication date and its integration of lean principles. However, the weaknesses, as stated by Mascarenhas Hornos da Costa et al. (2014) themselves, are that the research was conducted in one particular industry and has a weaker empirical basis than the other frameworks. Furthermore, the diffusion of the model is very limited and the number of metrics (50) is high.

The final and selected framework is the success/failure measures model of Griffin and Page (1993). They analysed 77 previously published articles with suggested metrics. In addition, they conducted a survey among 50 NPD managers, asking which metrics were being used and which ones they would like to use. All these metrics were then consolidated and categorized into four groups; this categorization was based on a bottom-up process at a conference. The four main categories they identified are: customer acceptance measures, financial performance, product-level measures, and firm-level measures.

The main reason to select this framework is its combination of a strong theoretical and empirical basis. The 77 scientific articles provide solid input, and the view of 50 experts is valuable. Still, the empirical basis does not go without questions: who were the participants involved in the survey and the conference, and how knowledgeable were they? Yet, it is the combination of the scientific and empirical basis that makes this model strong. That might also be the reason that its diffusion in the literature is around five times higher than that of the other considered frameworks. Furthermore, the number of core metrics is limited (four categories and sixteen metrics), which makes it easy to work with.

A weakness could be the year the research was conducted. This might lead to missing metrics that are relevant now but did not exist back when the research was conducted. However, their work is still widely used in recent research on success metrics in NPD environments (Godener & Söderquist, 2004; Chiesa et al., 2009; Mascarenhas Hornos da Costa et al., 2014). For example, it is used as a framework to find an appropriate set of metrics for a specific context.

As this model is selected and has a central role in this research, it is discussed here in more detail. The study considered three perspectives: measures researchers use, measures practitioners use, and measures practitioners desire to use. The core of the framework is the set of sixteen metrics in the middle, as shown in figure 2. These sixteen metrics are included in all three perspectives and are therefore the most actionable in both practice and research. The other 55 metrics outside the core circle are not at the forefront of this research, though they are used to analyse findings where this leads to better insights; in that case, they are referred to as "non-core metrics". Also, the four categories make the model more comprehensible and are used where they help to bring forward findings or insights.


Figure 2: Performance metrics in NPD environment (Griffin and Page, 1993)

2.5 Conceptual Model

This section concludes the literature review, before moving on to the methodology of this study. Three models are used to structure the set-up of this research; their selection is summarized here. The three models are:

• The success/failure measures of Griffin and Page (1993)
• The customer development model of Blank (2013)
• The seven purposes of performance measurement systems of Chiesa et al. (2009)

Together, these models form "the conceptual model", as it will be called from here onwards. More elaborate argumentation for the selection of these models can be found in the sections above.


First, as a starting point to find appropriate metrics, the model of Griffin and Page (1993) is selected, mainly because of its strong theoretical and empirical basis. NPD environments are expected to generate the most useful frameworks. The framework is not considered perfect, but it is the most applicable and defensible option for this research. The year of publication might mean missing out on newer metrics, but the fact that the framework is still widely used by researchers is reassuring. In addition, as will be discussed in the methodology, the interviewees also have the possibility to suggest new metrics. These two points combined give the confidence to use the framework as a starting point. From here onwards, the model is referred to as "the NPD framework".

Secondly, to define the different phases of the ICVs in this research, this study analysed three phase models. The customer development model of Blank (2013) is selected, based on its applicability to ICV environments. This model is from here onwards referred to as "the phase model" or "the customer development model". Finally, the model of Chiesa et al. (2009) is used to analyse the reasons why metrics might be appropriate. They mention seven purposes performance measurement systems can serve, such as selecting, monitoring, and motivating.


3 Methodology

This section first discusses how the research is set up. Then, it describes the case of Rabobank and its ICV program. It also discusses the research methods that are used and explains how the sub research questions are addressed. Finally, it describes the approach to analyse the data.

3.1 Research Design

This section gives an overview of the steps taken to answer the research question, which is also summarized in table 3. As discussed in the literature review, there is a lack of appropriate ICV performance metrics. Therefore, the aim of this research is to generate a new framework with performance metrics, specific to an ICV environment. Given the novelty of this topic, an explorative case study is needed. A case study is appropriate when the topic requires no control of behavioural events and focuses on contemporary events (Yin, 2009). This means this research can be done without any manipulation of context or time, and the insights of one single case can be revelatory and later applied to others. The approach of this research is to use the framework of Griffin and Page (1993) as a starting point; it served as the basis of the semi-structured interviews. The result of these interviews is a list of possible metrics, plus background information regarding the most relevant ones. The list was subsequently prioritized by means of a survey. A more detailed description of the research methods can be found in the section "Research methods".


Stage | What | Why
1 | Use framework of Griffin and Page (1993) as starting point | Served as a basis of the semi-structured interviews
2 | Semi-structured interviews | Generated a list of possible metrics by revising the framework (add, change, delete); got more background information (why)
3 | Survey | Prioritized the list of possible metrics

Table 3: Set up methodology

3.2 Case Description

This paragraph provides relevant details about the banking industry, Rabobank, and its ICV program, to enhance the understanding of the environment of the participants of this research.

3.2.1 Banking industry

The subject of this research is Rabobank, a leading Dutch bank. This paragraph discusses why the Dutch banking environment is rapidly changing, as mentioned in the introduction. After the financial crisis of 2008, numerous new regulations were introduced. The industry also sees many new entrants in the form of fintech companies. In general, the strength of these ventures is their customer-centric approach, serving customer needs with new technological solutions. Despite the increasing number of new entrants, the market is still relatively concentrated: four main players, ING, Rabobank, ABN AMRO and Volksbank, represent 80% of the market.

Digitalization is a strategy for banks to adapt to the new landscape. It enables them to serve the customer faster and better, and allows them to cut overhead costs. As a result, banks are forced to focus on aspects like cyber security, data analytics and IT solutions.

3.2.2 Rabobank

Rabobank was founded in 1972 after a merger of two parties. Cooperation is in its roots, as the bank started as a collective entity of farmers. It currently has about 46,500 employees internationally and 103 local banks in the Netherlands. It has held AA status since 2014, having always held AAA status in the years before.

3.2.3 ICV program

As a response to the changing landscape in the industry, Rabobank started an ICV program. This is the result of the "moonshot campaign", launched in 2015, with two follow-up editions in the years after. During these campaigns, employees can pitch their ideas and the winners are invited to start an ICV within the program. In 2016 and 2017, three ICVs stopped: Grppy, Bank Contact Cloud, and Fetch. The managers of these ventures are included in the research, because they may have valuable insights into which metrics are appropriate. The program takes in around six new ventures every year and currently has six active ICVs: Peaks, Easytrade, Tellow, Speedo, Farmbit, and Moovement. These are all technology-driven ventures with various value propositions, such as comfortable payment systems, automatic bookkeeping, and smart currency trading. Table 4 provides an overview of the ventures: the phase of the customer development model they are in, the number of employees, and the start date. The information reflects the status at the date this study was rounded off (June 2017).

One final valuable aspect of the ICV program of Rabobank is that it works with an external partner to support the ICVs in the first two phases of the customer development model. This partner, "Innoleaps", is a corporate accelerator and part of the start-up accelerator "Startupbootcamp". The employees of Innoleaps coach the ICV members to think and act more like a start-up, as the members are used to working in a large corporate.

Venture | Phase | Number of employees | Started in
Moovement | 1. Customer discovery | 3 | February 2017
Speedo | 1. Customer discovery | 5 | February 2017
Farmbit | 1. Customer discovery | 3 | February 2017
Tellow | 3. Customer creation | 4 | February 2015
Easytrade | 3. Customer creation | 6 | April 2016
Peaks | 4. Company building | 30 | February 2015
Grppy | Stopped in phase 3. Customer creation | 5 | February 2015
Fetch | Stopped in phase 2. Customer validation | 4 | April 2016
Bank Contact Cloud | Stopped in phase 3. Customer creation | 4 | April 2016

Table 4: Overview of the ICVs


3.3 Research methods

3.3.1 Mixed methods research

The nature of this research is partly qualitative and partly quantitative. When little is known about a phenomenon, qualitative methods can be used because of their ability to uncover the underlying nature of the phenomenon in question (Strauss, Corbin, et al., 1990). The quantitative part is a survey, which is described later in this section.

3.3.2 Semi-structured interviews

Semi-structured interviews were conducted with the following goals:

• Apply the NPD framework
• Find additional metrics
• Get more background information on why the metrics are appropriate

The choice for semi-structured interviews is based on the level of expertise of the interviewees: letting experts talk freely yields more in-depth insights (Leech, 2002). However, as the goals of the interviews were clear, a set of questions was pre-defined to guide the conversation towards the right outcomes. The questions are derived from the research of Griffin and Page (1996) and adjusted to this research. They can be found in the appendix; briefly, the set-up is as follows. First, interviewees were asked to indicate which metrics are currently used. These comments were not used as input, but only to check whether the participant understood the topic and to resolve ambiguities where needed. After that, the NPD framework was applied by checking which metrics apply and which do not. Then, interviewees could suggest new metrics they believed to be relevant. Finally, they were constantly encouraged to provide background information on why the metrics are appropriate.


Table 5 gives an overview of the twelve participants of this research, including their role, their perspective, the organization they belong to, and which phases were discussed with them. These people were selected because they are the ones most involved in the performance measurement of the ICV program. Three types of stakeholders were interviewed, so that three main perspectives are included: decision makers, supporters of the ventures, and venture managers. The decision makers are not directly involved with the ICVs, but decide whether the ventures can continue at certain points in time and whether they get access to capital of the corporate. The supporters are employees of the bank who facilitate the ventures where needed, in the form of knowledge or in a practical sense; they can be seen as mediators between the interests of the corporate and the venture. Finally, there are the venture managers, who directly drive the activities of the ventures. Including stakeholders from all three perspectives is an application of triangulation, which strengthens the credibility of the research (Tracy, 2010). If only one perspective were taken into account, the participants could argue only for their own interest, deliberately or subconsciously. Outcomes shared by all three stakeholder groups have more rigor, and disagreements can lead to interesting insights.

There are three aspects of table 5 that are important to clarify. First is the imbalance in numbers among the three perspectives: two decision makers, two supporters, and eight venture managers participated. This is simply due to the absence of more decision makers and supporters closely involved with the performance measurement of the ICVs. Furthermore, only CEOs participated because they carry the main responsibility for tracking performance and are therefore expected to have the most valuable insights. Also, which phases were discussed with whom depended on the perspective. The decision makers and supporters are involved in, and consequently gave input on, all phases. One venture manager discussed all phases; out of enthusiasm, he proposed to do a second interview, so there was no lack of time. The other venture managers gave input on one or more phases, depending on their expertise and the phase their venture is or was in. This was done to generate sufficient depth, which is especially important for finding out why certain metrics are important.

Participant | Role | Perspective | Organization | Discussed phases
1 | Head of Fintech & Innovation department | Decision maker | Rabobank | All
2 | Senior VP Fintech & Innovation | Decision maker | Rabobank | All
3 | Fintech & Innovation manager | Supporter | Rabobank | All
4 | Innovation manager & lean start-up coach | Supporter | Rabobank | All
5 | CEO | Venture manager | Easytrade | All
6 | CEO | Venture manager | Moovement | 1
7 | CEO | Venture manager | Speedo | 1 and 2
8 | CEO | Venture manager | Bank Contact Cloud | 1 and 2
9 | CEO | Venture manager | Fetch | 1 and 2
10 | CEO | Venture manager | Tellow | 3 and 4
11 | CEO | Venture manager | Grppy | 3 and 4
12 | CEO | Venture manager | Peaks | 3 and 4

Table 5: Overview of participants


3.3.3 Survey

Based on how often certain metrics were mentioned per phase in the interviews, key metrics per phase were selected. As a last step, participants indicated, by filling out the survey, how they would prioritize the metrics in the different phases. All participants filled out the information for all phases. It was decided that metrics scoring particularly low would be deleted from the framework; the minimum is five on a seven-point Likert scale, which corresponds to "somewhat agree". The argument is that if participants do not even somewhat agree that a metric is applicable, it should not be kept as a key metric. The survey also offered respondents room to clarify their answers, which can provide deeper insights. This approach is in line with Hertenstein and Platt (2000), who conducted a similar explorative case study regarding success metrics in a specific NPD environment. To summarize, the goals of the survey are:

• Prioritize metrics
• Filter less appropriate metrics
• Generate deeper insights
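The filtering rule described above (dropping a key metric if it scores below five, "somewhat agree", on the seven-point Likert scale) can be sketched as follows. The metric names and ratings in the example are illustrative placeholders, not actual survey data.

```python
# Sketch of the survey filtering rule: keep a key metric only if its
# mean rating reaches 5 ("somewhat agree") on a 7-point Likert scale.
# Metric names and ratings below are illustrative, not real survey data.

def filter_key_metrics(ratings, threshold=5.0):
    """Return metrics whose mean rating is at or above the threshold."""
    kept = {}
    for metric, scores in ratings.items():
        mean = sum(scores) / len(scores)
        if mean >= threshold:
            kept[metric] = round(mean, 2)
    return kept

example_ratings = {
    "customer acceptance": [7, 6, 6, 7, 5],
    "met market share goals": [3, 4, 2, 3, 4],
    "development costs": [5, 6, 5, 5, 6],
}

print(filter_key_metrics(example_ratings))
```

The cut-off at the mean could also be replaced by a stricter rule (for example, every individual score at least five); the thesis text only specifies the five-point minimum, so the mean-based variant here is an assumption.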

3.4 Addressing the research questions

This paragraph discusses how this research aims to answer the sub research questions. As a reminder, the sub questions are as follows:

• What are possible performance metrics of ICVs, aiming for strategic benefits for the firm?

• Which performance metrics are most relevant to measure in the different phases of ICVs?

• Which performance metrics are most relevant to measure ICVs in the banking sector?


The first question strives to consider only performance metrics that are applicable to ICVs striving for strategic benefits. Therefore, before starting the interviews, the person responsible for the ICV program confirmed that "strategic benefit" is indeed the main goal of the corporate's ICV program. He recognized that different purposes might be served simultaneously, but stated clearly that this is the main objective. The interviewees were also asked to select, out of three options, what they see as the main goal of the ICV program. The second question tries to reveal the most relevant metrics per phase. Therefore, at the beginning of every interview the phase model of Blank (2013) was discussed, and the interviews were structured according to this model. This means that interviewees answered the questions by going through the phases one by one, or focused on only one or two phases.

The third question aims to identify industry-specific metrics. Therefore, one of the last interview questions is: "Do you think particular metrics you mentioned are specific to the banking sector? If so, please list them".

3.5 Analysis

The following section describes the analysis of this research, which is summarized in table 6. For the analysis of the interviews, two different approaches are adopted from the work of Hsieh and Shannon (2005). First, the directed method is used: the framework of Griffin and Page (1993), adopted from the New Product Development (NPD) environment, served as guidance for the initial codes. The coding was therefore deductive in nature, which helps to quantify the data (Pratt, 2009). The customer development model is used to structure the different phases. In other words, the first step of the analysis was counting how many times the metrics from the framework were mentioned, in total and per phase (see table 14 in appendix B on page 67).

The risk of this approach is a strong researcher bias, because it can lead to the desire to justify the selection of a certain framework, paradigm, or model. Therefore, the interviewees were given the chance to shed light on which metrics should be tracked besides the NPD framework. As these empirical findings are unexplained, inductive coding is needed (Bitektine, 2008). For this, the conventional method was used to create codes for this specific content. The data was first read word by word to understand the full context. Then, similar answers were grouped together and the content was checked for overlap. After reading the content again, a logical name was given to each category based on the content itself, which can be found in the separate Excel appendix. Afterwards, the number of times each code was mentioned was counted (see table 15 in appendix B on page 68).

The next step was to understand why certain metrics were considered important. The directed approach is also used to analyse the content of these reasons. The framework of Chiesa et al. (2009), discussed in the literature review, defines seven purposes of performance measurement systems. The codes derived from the framework are: motivate, monitor, evaluate economic outcome, selection, coordination, stimulate learning, and reduce uncertainty. The value of coding this background information is to better understand what interviewees are trying to say with a particular statement. These codes can be found in the last column of tables 7 and 8.

Based on the analysis so far, the survey was constructed. First, key metrics per phase of the customer development model were selected, based on the counted coding scheme. Practically, this meant that all metrics mentioned three or more times in a particular phase were selected. The survey was sent out to all participants, who could then rate the metrics in all the different phases and add comments if preferred. To ensure reliable outcomes, the survey was not sent out to other people involved in the ICV environment: it can be challenging to think through the implications of the performance metrics, so it is important to include only people with direct experience in using them.
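The selection step described above (a metric becomes a key metric for a phase if it was mentioned three or more times in that phase) can be sketched as a simple count over the coded interview data. The (phase, metric) mention data shown is illustrative only, not the actual coding output.

```python
from collections import Counter

# Sketch of the key-metric selection rule: a metric becomes a key metric
# for a phase when it is mentioned three or more times in that phase.
# The (phase, metric) pairs below are illustrative, not real coding output.

def select_key_metrics(mentions, min_mentions=3):
    """Group metrics per phase that reach the mention threshold."""
    counts = Counter(mentions)  # counts per (phase, metric) pair
    selected = {}
    for (phase, metric), n in counts.items():
        if n >= min_mentions:
            selected.setdefault(phase, []).append(metric)
    return selected

example_mentions = [
    (1, "customer acceptance"), (1, "customer acceptance"), (1, "customer acceptance"),
    (1, "development costs"), (1, "development costs"),
    (2, "launched on time"), (2, "launched on time"),
    (2, "launched on time"), (2, "launched on time"),
]

print(select_key_metrics(example_mentions))
```

In the actual study this counting was done separately for progress and decision metrics, yielding two candidate lists per phase for the survey.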


For the final step of the analysis, two sources of data were considered. The first is the inductively as well as the deductively coded content. The second is two aspects of the survey outcomes: the distribution of the scores and the optional additional comments. All this content was systematically analysed by looking at differences and similarities; according to Tashakkori and Teddlie (2010), this is an effective approach to get a better understanding of the full context and generate deeper insights. For example, outcomes on which the participants were equally divided were brought up as interesting points of discussion, for instance to find out: does it depend on the perspective, do the participants simply disagree, or is there a context factor that decides the outcome? Furthermore, if one or two participants gave deviating answers, they were asked by email to clarify their point, to see whether they had unique information others did not consider. Finally, clear similarities and unanimous outcomes indicated that results could be brought forward with more certainty.


Stage | What | Why | Displayed in
1 | Use codes adopted from Griffin and Page (1993) to count metrics | See which of the NPD metrics apply to the ICV environment | Table 7
2 | Create codes for additional content and count them | Categorize and analyse input given for metrics outside the NPD framework | Table 8
3 | Use codes adopted from Chiesa et al. (2009) to analyse content regarding the background why metrics are important | Provide better understanding of the reasons why behind the metrics | Table 7 and table 8, last column
4 | Select key metrics per phase, based on Blank (2013), and send out survey | Prioritize metrics and gain additional insights | Table 9 to table 12
5 | Look at agreements and disagreements | Extract deeper insights | "Discussion"

Table 6: Set up analysis

Before moving on to the findings, one interesting observation from the first few interviews needs to be discussed, because it influenced the way of analysing. The interviewees referred to a distinction between two types of metrics: "progress" and "decision" metrics. The first refers to metrics that are tracked internally by the ventures and used in frequent, structured updates to the management team. Decision metrics are the ones used by the management team (decision-maker perspective) when making the final decision on whether ventures can proceed to the next phase. For some metrics this distinction is less clear, or they can even be used as both.

Yet, before explaining the methodological impact, a few examples will illustrate the possible difference. Consider the metrics "launched on time" and "speed to market". Launched on time refers to the several deadlines the team sets to give direction to the work they are doing. Speed to market simply analyses how fast the product or a new feature is (expected to be) in the market. The first can be reported to the managers, but is unlikely to become a direct breaking point. However, if the overall speed to market is too low, this can be a reason for the management team to stop investing in the venture, because competitive alternatives are likely to reach the market first. Furthermore, a typical decision metric is "team": you do not track the composition of your team on a daily or weekly basis, but it can be a reason for the management team to stop investing in a particular venture. A metric particularly useful as a progress metric is product performance level; it indicates small or larger errors in the product or service and should be tracked continuously.

To conclude, it was decided to make a distinction between these two types of metrics. Consequently, the interviewees were asked to indicate whether the metrics were appropriate in a given phase as progress metrics, decision metrics, or both. Another consequence is that for the survey, not one but two sets of key metrics were selected per phase, because trying to prioritize two different types of metrics would be like comparing apples and oranges. Distinguishing these two types leads to a better understanding of the appropriateness of the metrics. To further clarify: if a metric was noted as applicable as both a progress and a decision metric, its inclusion depended on how often it was mentioned relative to the other metrics in a particular phase. If a metric was mentioned more often than the others in both sets (progress and decision), it is included in both sets. It can, however, also happen that a metric had more "competition" in, for instance, the decision list, and is therefore excluded there while still being included in the progress list of the survey.


4 Results

This section discusses the following. First, the outcomes of applying the NPD framework are shared; these include the reasons why, and are not yet divided per phase, in order to provide a comprehensive overview. Then, the same is done for the metrics found in addition to the NPD framework. Afterwards, the selected key metrics per phase are shared, with an explanation of why they are appropriate in that particular phase and the scores of the survey. Finally, the final list of suggested metrics per phase is shown.

4.1 Findings

First of all, table 7 on page 39 shows all tested metrics and table 8 on page 43 shows all metrics mentioned in addition to the existing framework. Both tables show how often the metrics were mentioned, cumulatively over all phases, and provide key words with a supporting quote to explain what the metric actually displays and why it is important. Where interviewees shed light on why a particular metric should not be tracked, this is displayed between brackets. The final column clarifies what is meant with the quote and the key words by labelling them with the codes extracted from the framework of Chiesa et al. (2009). The order of table 7 is identical to how it is displayed by Griffin and Page (1993); table 8 follows a random order. The aim of these tables is to give an overview of all mentioned metrics; the phases of the customer development model are considered later. These tables also support the decision to shortlist the metrics to be ranked by the survey.


Columns: performance metric | # (times mentioned) | decision vs. progress | why (not) – key words | why – example of quote | why – code.

Customer acceptance | 17 | Both
Key words: market potential, problem/solution fit
Quote: "It is vital to keep track of this, to make sure that you are building something that your customers want."
Code: evaluate economic value, reduce uncertainty

Customer satisfaction | 16 | Both
Key words: customer feedback, customer trust, reputation
Quote: "People should know that they want something and should be happy about it. Then they will buy your product."
Code: stimulate learning

Met revenue goals | 5 | Decision
Key words: willingness to pay, (hard to estimate)
Quote: "How much money people already spend on your product is a good indicator for the future."
Code: motivate, evaluate economic value

Revenue growth | 9 | Both
Key words: willingness to pay, progress
Quote: "Growth in revenue is an indication of the progress you are making."
Code: evaluate economic value, monitor

Met market share goals | 1 | Decision
Key words: can indicate how disruptive you are, (highly differs per market)
Quote: "(It does not tell you a lot, because some markets are new, others are old and everything in between. If it's new, you start with 100%.)"
Code: evaluate economic value

Met unit sales goals | 6 | Decision
Key words: willingness to pay, operations can start
Quote: "You need to have some volume, so you actually have something to manage."
Code: motivate, evaluate economic value

Break-even time | 3 | Both
Key words: indication of trade-off between investments and time
Quote: "For every decision we look at the impact on break-even time. It provides insight in the pay-off between investments and time. If you are live sooner, you will start making money earlier."
Code: evaluate economic value

Attain margin goals | 5 | Decision
Key words: indicates readiness to scale up
Quote: "As soon as you have good margins, you know you are good. You can really start scaling."
Code: motivate, evaluate economic value, reduce uncertainty

Attain profitability goals | 4 | Decision
Key words: validation of customer lifetime value, (can be easily manipulated by accounting, hard to estimate)
Quote: "Profit is something that is too easy to manipulate. You can for example delay payments or change depreciation rates."
Code: motivate, monitor, evaluate economic value

IRR / ROI | 3 | Decision
Key words: gives an overview of the ratio between what you put in and what you get out, (unable to track in starting phase)
Quote: "You can only estimate this after around 5 years. By then, you are having less costs to develop your product and the customer acquisition costs are starting to go down."
Code: evaluate economic value

Development costs | 9 | Both
Key words: control costs and budgets
Quote: "How much you are spending determines how long you can last with your funding round."
Code: monitor, selection, evaluate economic value

Launched on time | 8 | Progress
Key words: get feedback in early stages, indicates ability to execute, communicate to stakeholders to show progress
Quote: "This is important, because it distinguishes nice ideas from actual successful projects. It measures your ability to continuously work out your idea."
Code: motivate, monitor, coordination

Product performance level | 5 | Progress
Key words: driver of customer satisfaction, error prevention
Quote: "We are constantly tracking if all the elements of our app are working. This is important to make sure there are no errors in our system, which leads to unhappy customers."
Code: monitor, stimulate learning

Met quality guidelines | 6 | Decision
Key words: standardized auditing, communicate minimums, (customers should decide)
Quote: "We have to meet all the audits of the Rabobank and other parties."
Code: monitor, stimulate learning, coordination

Speed to market | 8 | Decision
Key words: comparison with competitors, stimulates focus
Quote: "This distinguishes you from the competition. It also ensures focus. Do you focus on the right things? If yes, you will be able to get to the market quicker."

Motivate, monitor, selection

Table 7: Applied NPD metrics
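Two of the financial metrics in table 7, break-even time and IRR/ROI, lend themselves to a concrete illustration. The sketch below is a hypothetical example with invented monthly cash flows (all figures are assumptions, not data from the interviews): break-even time is taken as the first month in which cumulative cash flow turns non-negative, and the periodic IRR is approximated by bisection on the net present value.

```python
from itertools import accumulate

def break_even_month(cash_flows):
    """First month (1-based) in which cumulative cash flow is
    non-negative, or None if the venture never breaks even."""
    for month, cum in enumerate(accumulate(cash_flows), start=1):
        if cum >= 0:
            return month
    return None

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Approximate the periodic internal rate of return by bisection:
    the rate r at which the net present value of the flows is zero.
    Assumes a single sign change in the flows (one unique IRR)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical venture: 100k initial investment, growing monthly inflows.
flows = [-100_000, 5_000, 10_000, 20_000, 30_000, 40_000, 50_000]
print(break_even_month(flows))  # month in which cumulative flow turns non-negative
print(irr(flows))               # periodic IRR of these flows
```

This also illustrates the interviewees' point about IRR: with so few periods of real data, the result is only as good as the estimated future cash flows.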


Table 7 provides a few interesting insights. First, customer satisfaction and customer acceptance are mentioned the most, which stresses the importance of those metrics. However, this may also be because the definitions of these metrics are quite broad, so they can be operationalized in many different ways and apply to several situations.

On the other hand, table 7 also shows which metrics are not considered important. One metric is invalidated because it is not mentioned at all, namely "% of sales by new products". The reason given by interviewees is that the significance of a venture cannot be measured by the size of its sales: if, for instance, the aim of the ICV is to enter a new market with large growth potential, its contemporary significance is not important to consider. Market share, likewise, has been mentioned only once. If a new venture involves a genuinely new product or service, it may also entail the creation of an equally new market. A pioneering venture thus begins its life with a 100% market share, but to the extent that it is successful, it will attract followers. In other words, the true pioneer will probably lose market share if the venture succeeds.

Furthermore, nine metrics are applicable as decision metrics, two as progress metrics, and five as both. These numbers show that the framework of Griffin and Page (1993) mainly focuses on decision metrics.

Another observation, based on the fourth and fifth columns, is the overlap between some metrics. For example, customer satisfaction and quality guidelines both aim to ensure that customers are happy, and revenue growth and met revenue goals are both directly driven by sales. These overlaps were resolved through the selection of key metrics and through the survey: in each case only one metric of a pair is selected. The one exception is the overlap between speed to market and launched on time, which is resolved by the fact that the first is a decision metric and the latter a progress metric.

Finally, some insights are derived from the coding in the last column. Eleven of the sixteen metrics are coded "evaluate economic value", because they directly give an indication of it. All other metrics have at least an indirect impact on the evaluation of the economic value. Take speed to market, for example: if it is too low, competitors will enter the market first, which can damage the economic value of the product. This makes sense given the purpose of the ICV program, which is to pursue real options, and pursuing real options automatically includes creating economic value. Also, all seven codes are used, which indicates that the NPD framework addresses all the possible positive outcomes of a performance measurement system.

4.1.2 Additions to NPD framework

During the interviews there was room for the interviewees to share their views on which metrics outside the NPD framework they consider important. These might be metrics that only came up recently, or metrics specific to ICVs. Table 8 therefore provides a list of metrics that are called "additional metrics".

Table 8 shows nine additional metrics to the framework of Griffin and Page (1993). Some, like team and strategic fit, are mentioned often; others, like feasibility and internal support, only a few times. Feasibility is mentioned the least: interviewees indicated that they find it hard to say anything about feasibility before the idea is actually given a try.

Finally, because of the novelty of these metrics, implications regarding their operationalization are discussed. This is done for the three metrics where it is most needed. Even though this is not the core of the research, the interviewees provided sufficient information to clarify it. First, according to the interviewees, the metric "team" consists of two main aspects: team composition and entrepreneurial ability. The first is measured by a tool called "venture metrix", a test that maps out the personalities and skills of the team and provides insights into the comprehensiveness of the team. Entrepreneurial ability is a subjective estimation, but in the case of Rabobank it can be partly based on the team evaluation of Innoleaps.


| Performance metric | # | Decision vs. Progress | Why (not) – key words | Why – example of quote | Why – code |
|---|---|---|---|---|---|
| Team | 18 | Decision | Skill variety, team cohesion and entrepreneurial ability all stimulate effectiveness | "Team composition is very important, because you need different skills in a team. That will lead to success." | Selection |
| Learning – hypotheses testing | 8 | Both | Problem validation, gaining market knowledge, indicates ability to learn and focus | "The more hypotheses are tested, the more the team knows about the concept. It shows learning, speed and focus. Focus is important because the team should not move between completely unrelated topics of interest." | Stimulate learning, reduce uncertainty |
| Customer acquisition costs vs. customer lifetime value | 8 | Both | Indicator of future profit, indicates readiness to scale | "You do not necessarily have to make a profit. As long as this sum is positive, you will make money eventually." | Evaluate economic value, reduce uncertainty |
| Strategic fit | 12 | Decision | Indicates potential added value, (blocks opportunities in new directions) | "We should consider if the topic of interest fits the direction of the bank. However, are we also open for new ideas? Because who says that our strategy is the right…" | Selection, reduce uncertainty |
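The "customer acquisition costs vs. customer lifetime value" metric can be made concrete with a simple unit-economics check. The sketch below is an illustrative calculation with invented figures (the margin, churn rate and CAC are assumptions, not data from the case): customer lifetime is approximated as the reciprocal of the monthly churn rate, and the interviewees' "sum" is positive whenever CLV exceeds CAC.

```python
def customer_lifetime_value(avg_monthly_margin, monthly_churn_rate):
    """Simple CLV approximation: expected customer lifetime in months
    is 1 / churn rate, so CLV = monthly margin * expected lifetime."""
    return avg_monthly_margin / monthly_churn_rate

def ltv_cac_ratio(clv, cac):
    """Ratio > 1 means each acquired customer eventually earns back
    more than it cost to acquire, i.e. the venture is ready to scale."""
    return clv / cac

# Hypothetical venture figures (illustrative only).
clv = customer_lifetime_value(avg_monthly_margin=15.0, monthly_churn_rate=0.05)
print(clv)                            # 300.0: value of one customer over its lifetime
print(ltv_cac_ratio(clv, cac=100.0))  # 3.0: CLV is three times the acquisition cost
```

This matches the interview quote: the venture need not be profitable yet, as long as the CLV-minus-CAC "sum" per customer is positive.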
