
Amsterdam Business School

Master’s Thesis

The implications of Big Data for the open innovation

process

An exploratory study

Author: W.C.A. Rijssenbeek (11232110)
Supervisor: M. Paukku

A thesis submitted in fulfillment of the requirements for the degree of Master of Science in Business Administration, International Management

Faculty of Economics and Business, University of Amsterdam


Statement of originality

This document is written by Willem Rijssenbeek who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

Big Data is becoming increasingly important. It has disrupted many businesses and industries, and it continues to do so (McAfee & Brynjolfsson, 2012). With the increasing numbers of people and devices connected, the amount of data being generated is increasing exponentially. To survive, companies need to identify the impact Big Data has on their business. The impact of Big Data has already been identified in the areas of, for example, marketing, logistics, and decision making. One area that has not been understood well in this regard is innovation, and yet innovation is a significant driver of firm performance. Additionally, the innovation field is itself in transition towards a more open innovation approach, leading firms increasingly to leverage external knowledge for their innovation processes (Chesbrough, 2003). This research aims to identify how Big Data impacts the innovation process through open innovation. According to the existing literature, the innovation process is likely to be significantly influenced by Big Data. In semi-structured interviews, a panel of innovation experts and a panel of Big Data experts generally agreed that Big Data influences the innovation process in such a way that it becomes faster, more focused, and broader in scope. These results advance our knowledge of this research area and serve as a good starting point for further research. Furthermore, the findings can lead to significant benefits for firms: they demonstrate that firms have solid reasons to rethink their approach to the innovation process.


Table of contents

1. Introduction ... 5

1.1. The background of the problem ... 6

1.2. Statement of the problem ... 7

1.3. The contribution of this research ... 8

1.4. Outline of the thesis ... 9

2. Big Data... 10

2.1. Implications of Big Data for business ... 11

2.2. Conclusion ... 15

3. Innovation ... 16

3.2. What is open innovation? ... 17

3.3. Why the time of open innovation is now ... 18

3.4. Conclusion ... 19

4. The implications of Big Data for the innovation process ... 20

4.1. The innovation process ... 20

4.2. Motivations for selection of propositions ... 22

4.3. Propositions and conceptual model ... 23

4.4. Conclusion ... 24

5. Research Design ... 26

5.1. Research Structure ... 26

5.2. Data collection ... 27

5.3. Strengths and limitations ... 29

6. Data analysis and results ... 31

6.1. Analysis ... 31

6.2. A faster innovation process? ... 33

6.3. A more focused innovation process? ... 37

6.4. A broader scope within the innovation process? ... 40

6.5. Further analysis of the results ... 44

7. Conclusions and discussion ... 47

7.1. Conclusions ... 47

7.2. Discussion ... 48

7.3. Limitations and further research ... 49

References ... 51

Appendices ... 54

Appendix A: Introduction letter ... 55

Appendix B: Interviews ... 56

Interview transcript Maarten Bellekom ... 56

Interview transcript Marnix Bugel ... 64

Interview transcript Maurice Jilderda ... 70

Interview transcript Gijs Egberink ... 84

Interview transcript Bram Broeks ... 90

Interview transcript Ron Tolido ... 95

Interview transcript Koen Klokgieters ... 99

Interview transcript Ubald Kragten ... 108


Interview transcript Sonoko Takahashi ... 119

Appendix C: Interview questions ... 127


1. Introduction

1.1. The background of the problem

‘Big Data’ is by now a very well-known concept; most of the Fortune 1000 firms are using it in one way or another (Bean, 2016). It has been defined in many ways. Some scholars argue that Big Data is large amounts of unstructured data too ‘big’ and complicated for ordinary data software to process (Snijders, Matzat, & Reips, 2012). Others claim that Big Data does not have to be of high volume but rather of a certain level of complexity (Ward & Barker, 2013). The lack of quantification of the volume of Big Data makes it even harder to define; Intel is one of the few to tie Big Data to a concrete volume, stating that firms processing on average 300 terabytes of data per week are working with Big Data. Perhaps the most widely adopted definition comes from the Gartner report (Ward & Barker, 2013). This report does not explicitly mention Big Data, but a definition has nonetheless been extracted from it. It defines Big Data in terms of three Vs: volume, velocity, and variety. All are increasing at a rapid pace (Douglas, 2001). A recent literature review of definitions of Big Data identifies multiple works that deploy this three Vs definition, and the review itself uses it as a basis for its own ‘consensual definition’ (De Mauro, Greco, & Grimaldi, 2015).

Some have argued that Big Data is too big and complex for companies to process (Snijders, Matzat, & Reips, 2012). However, Moore’s law reminds us that the capabilities of computers and data processing software increase rapidly. So, too, does human expertise. As a consequence, there is now technology available that can allow companies to get valuable information out of Big Data. For example, because of improvements in data processing, causal relationships can be distinguished from mere correlations in data, which can lead to better decision making and increased financial and product performance (Brown, Chui, & Manyika, 2011). Take IBM’s self-learning supercomputer Watson, for example. It can read 200 million pages in 3 seconds, and this information can then be used for pattern discovery and to improve evidence-based decision making. It can also be used to discover new drugs for treating cancer, an application that is disrupting the entire industry (IBM, 2015). In the financial industry, algorithmic trading allows for more frequent trading executed at the best price level whilst excluding human error (Connors, Courbe, & Waishampayan, 2013). Dutch bank ING uses data to predict the future expenditure of its customers. Another good example is the transition of ING into a digital bank, a project in which the bank has invested 800 million euros (Keuning, 2016).


From the above-mentioned examples, we may conclude that Big Data has the potential to disrupt entire industries. Research has found that this will lead to a shift in the organizational structures of companies and innovation in their business models (Brown, Chui, & Manyika, 2011). Big Data has been found to have an impact upon marketing as well. To improve customized advertising, new European guidelines will allow companies, when given permission, to access their customers’ bank statements (Schouwenaars, 2016). In logistics, Big Data has been found to improve network and capacity planning and to increase operational efficiency through real-time route optimization (Jeske, Grüner, & Weiß, 2013). Further, it has been found that Big Data alters the way companies innovate (George, Haas, & Pentland, 2014). However, Big Data’s impact on innovation is still not well understood, and there is a need for additional research in this area (George & Lin, 2016). We know that Big Data leads companies to innovate, but there is little understanding yet of how companies use Big Data to innovate.

1.2. Statement of the problem

The fact that Big Data disrupts industries and leads organizations to change their business models demonstrates the importance of this topic. But the impact of Big Data on innovation is not well understood, even though innovation is a key business activity for creating competitive advantage (Neely & Hii, 1998). Clarifying the nature of the relationship between Big Data and innovation is thus of importance for management. One of the reasons this relationship has not been widely examined in the extant literature could be that Big Data is not directly linked to innovation. For example, traditional innovation is the introduction of a new technology or combination of technologies to meet a need that consumers do not yet realize they have (Utterback & Abernathy, 1975). This traditional approach to innovation links data to innovation only indirectly: the need is identified by market researchers and the technology is then developed through internal research and development to protect intellectual property. This leads companies to innovate only within the boundaries of their own organizations. However, firms that do not engage in open innovation (exchanging knowledge with other firms, institutions, or customers to gain competitive advantage) reduce their own knowledge base and lose competitiveness in the long run (Enkel, Gassmann, & Chesbrough, 2009). Open innovation recognizes that most of the available knowledge is outside of the firm’s walls. Through open innovation, organizations can get access to huge amounts of external knowledge, and this pool of knowledge will only grow as the adoption of open innovation increases and more data becomes available (Enkel, Gassmann, & Chesbrough, 2009). This increase in data will most likely force organizations to change the way they develop their knowledge bases and thus their innovation processes.

Another reason the relationship between Big Data and innovation has not been examined could be that the role of innovation within larger firms has been increasingly taken on by technological start-ups. Disruptive technologies and business models are levelling playing fields. Additionally, the potential customer base for new innovations is enormous, global, and easily accessed through the internet (Taneja, 2013). Dutch bank ING, for example, invests in or collaborates with 40 financial technology start-ups for innovation based on, amongst other things, Big Data (Timmermans, 2016). These financial technology start-ups develop their knowledge bases by using Big Data. These start-ups are part of ING, which means that Big Data, at least indirectly, affects the way the bank develops its knowledge through open innovation. So, in this case at least, Big Data has an impact upon the innovation process.

The aforementioned reasons may explain why the relationship between Big Data and innovation has not been properly explored. But, as the above examples show, it is likely that Big Data will alter the ways in which organizations develop their knowledge bases through open innovation. This has consequences for the innovation process. The objective of this research is therefore to identify the effect Big Data has on the innovation process through open innovation. To achieve this research objective, I have thus formulated the following research question:

“What impact does Big Data have on the open innovation process in organizations?”

To investigate this impact further, this research needs to elaborate on open innovation and Big Data, before exploring the effects of Big Data on open innovation to see what its implications are for the innovation process. Because there is, as yet, little known about this relationship, this research is of an exploratory nature.

1.3. The contribution of this research

The contribution of this research is that it uncovers the impact Big Data has on the innovation process through open innovation, a relationship that is currently not well understood. Because innovation is an important source of competitive advantage, every factor that might influence innovation is worth investigating. Exploring this relationship has direct implications for organizations as well. The conclusions of this research may be able to help managers make better decisions about how to implement Big Data for open innovation purposes. Furthermore, it is important to note that the findings of this research also have implications for consumers and for the political system. The impact of Big Data on open innovation most likely means an increased use of consumer data. This in turn implies the need for regulation by governments to protect the privacy of these consumers.

1.4. Outline of the thesis

As Big Data is a new and complex topic, the second chapter of this thesis discusses Big Data by describing its current state according to the extant literature on the topic. It will start with a general overview of what Big Data is before turning to its implications for business. The implications for various areas of business are discussed in a literature review, in the course of which I note that scholars have somewhat neglected the implications of Big Data for the area of innovation. I therefore discuss innovation in the third chapter. It will become clear that the field of innovation is transitioning towards a state that is hospitable to the facilitation of Big Data: open innovation. In chapter four, the implications of Big Data for a specific part of open innovation – the open innovation process – are discussed. I then discuss the relevance of this research and formulate three propositions about the impact of Big Data, which are outlined and visualized in a conceptual model. This model leads to the methodology that I describe in chapter five, and the methodology in turn underlies the presentation and analysis of the results in chapter six. Finally, in chapter seven, I answer the research question and discuss the results.


2. Big Data

What is Big Data, exactly? The definitions outlined in the introduction vary greatly, which cannot be considered a surprise. Given the rapid growth of Big Data, it is understandable that people have sought to refine their definitions of the term. Gartner’s 3Vs model is, despite having been introduced over 15 years ago, the one definition that seems to have endured (De Mauro, Greco, & Grimaldi, 2015). The general characteristics of Big Data have to do with its volume, variety, and velocity. Simply put, the 3Vs capture the essence of Big Data. Recently, the 3Vs definition has been expanded to emphasize that Big Data, apart from being large in volume and involving a wide variety and a continuous stream of data, needs specific technology to create value (De Mauro, Greco, & Grimaldi, 2016). Given that even ordinary data analysis is carried out with specific technology, however, one may debate the real added value of this new definition.

Is a new definition needed? In order to understand the impact of Big Data, no. Instead, to stress the significance of Big Data one should look at the ways in which it is applied and the solutions it offers. Consider, for example, machine learning, one of the applications supported by Big Data, which involves algorithms finding patterns in data in order to predict patterns in new data (Hall, Phan, & Whitson, 2016). Machine learning itself is not new, but the exponential growth of data and affordable processing power have improved it dramatically. Self-learning machines are now capable of processing and filtering information a million times faster than humans, finding patterns and automating processes without any human intervention. This has also led to a more comprehensive form of machine learning: deep learning. Deep learning involves learning several abstraction levels and modes of representation, which moves it closer to the once-fictional area of artificial intelligence (Deng & Yu, 2014). Deep learning is applied in audio and object recognition, and it is also used to help discover new drugs by predicting drug activity. It is clear, then, that the possibilities relating to Big Data are increasing and new methods relating to it are being developed; the necessary technology underlying these possibilities and methods is also becoming more widely available. All these factors indicate that the talk about Big Data is not just hype.
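To make the pattern-learning idea above concrete, here is a minimal sketch (illustrative only, not drawn from the thesis) of the basic ‘learn from historical data, predict on new data’ loop, written in Python and assuming the NumPy and scikit-learn libraries are available; the data set and the outcome it represents are invented for the example.

```python
# Illustrative sketch of "learning patterns from data to predict new data".
# The data is randomly generated; no real customer data is involved.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Toy "historical" data: 1,000 customers, 5 behavioural features,
# and a binary outcome (e.g., bought the product or not).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Learn patterns on one part of the data, then predict on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Accuracy on unseen data:", accuracy_score(y_test, predictions))
```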

McAfee and Brynjolfsson (2012) have shown that data-driven decisions allow for better decision making than do intuition-based decisions. Thus, with the increase in available data and technology, the playing field of business is shifting and businesses are seeking to exploit the potential of Big Data. With more data available, companies are able to collect a wider array of data from unusual sources, which allows them to connect up new nodes of data with one another (Tableau, 2016). But simply possessing a vast amount of data is not enough. In order to extract value from that data, it must be translated into information. Readily available, sophisticated technologies like data mining, as well as the technologies mentioned above, allow for deeper learning and lead to better insights and improved firm performance (Kwon, Lee, & Shin, 2014).

However, merely ‘using’ Big Data is not sufficient to gain a competitive advantage. Big Data is maturing, and it is becoming the new normal; now, the focus is shifting to the results it can deliver and the implications it has for business (Bean, 2016). This is understandable: after the burst of the dot.com bubble and the introduction of Web 2.0, Big Data is the new disrupter of business. Research by Capgemini and EMC has shown that Big Data investments are set to increase for 56% of their respondents – primarily managers from multinational corporations. Moreover, 65% of their respondents were convinced that their businesses would become irrelevant were Big Data not adopted (Capgemini, 2015). Thus, in order to capture value from Big Data and to understand where it may yield competitive advantage, it is important that its implications for business are clear.

2.1. Implications of Big Data for business

Researchers are convinced that Big Data will have an impact on almost all parts of firms, which will lead to productivity growth within the economy (McAfee & Brynjolfsson, 2012). Although it is well beyond the scope of this thesis to treat each business area in detail, this section considers aspects of business that are, according to the literature, affected by Big Data. These areas were chosen because the impact of Big Data on them is clear, but it should be noted that this section does not consider every area of business. Apart from functional areas of businesses, I also examine the impact of Big Data on firms’ decision making. While decision making is not a particular function of a business, it is worthy of examination because it is important for each of the other areas highlighted and for the operational success of an organization as a whole. Further, it is an issue that arises frequently in the literature on Big Data.


Supply chain management

Big Data has important implications for supply chain management, as supply chain management and Big Data are said to be a good fit for one another (Waller & Fawcett, 2013). This is because of the many ways in which data can be exploited to improve supply chains. New data processing tools will change the structure of supply chains, and even more so their management, which will lead to new challenges and opportunities such as predictive analyses in forecasting, inventory management, and transportation management. For example, delivery time windows can be reduced dramatically and manufacturers can respond to customer sentiment faster. Big Data can also allow for real-time capacity planning and optimal routing. Data concerning customer sentiment can also be used in order to predict future sales of new products (Waller & Fawcett, 2013). In this area of business, opportunities seem to lie particularly in increasing efficiency and reducing time to market, which can dramatically decrease inventory costs, transportation costs, and overhead costs. This can free up money to be used elsewhere. Although costs can be reduced, companies should be prepared to make initial investments. According to researchers, new skills are needed to act upon Big Data opportunities (Waller & Fawcett, 2013).

Marketing

Marketing departments are known to be affected by Big Data. Companies try to obtain as much data as they can to improve their marketing strategies. Marketers seek to predict what consumers are going to buy and where they are going to buy it, and Big Data can serve as a bridge between what consumers read, like, and view online and their purchasing behaviour (Dawar, 2016). User-generated text analysis and sentiment analyses allow marketers to identify negative or positive responses in real time (Chen & Storey, 2012). Furthermore, product recommendation algorithms provide ease of use and can build trust, which can lead to brand loyalty. In addition, the potential of personalized and predictive marketing is most likely increasing due to the increase in generated data and the increasing sophistication of technology. But there are also challenges that come with the journey towards ever more personalized marketing. Although identifying a consumer’s next purchase provides a short-term benefit, it will not lead to a sustained competitive advantage. Only if competitors are already behind in utilizing predictive analysis can premium returns from personalized marketing be expected (Dawar, 2016). Thus, despite the opportunities generated by Big Data, marketers should find ways to use Big Data other than as the basis for ‘merely’ targeted marketing or predictive algorithms aimed at identifying consumer preferences.
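As a rough illustration of the real-time sentiment flagging mentioned above (not the actual tooling used by any of the cited firms), the following Python sketch scores invented customer messages as positive, negative, or neutral using a simple keyword rule; production systems would rely on far richer text-analytics models.

```python
# Minimal keyword-based sentiment scoring of user-generated content.
# Word lists and messages are invented; this only illustrates the idea
# of flagging responses as positive or negative as they arrive.

POSITIVE = {"love", "great", "excellent", "recommend", "happy"}
NEGATIVE = {"broken", "slow", "terrible", "refund", "disappointed"}

def sentiment(message: str) -> str:
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

stream = [
    "Love the new app, would recommend it",
    "Delivery was slow and the box arrived broken",
    "Just ordered the standard package",
]

for message in stream:
    print(f"{sentiment(message):8s} | {message}")
```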


Human resources

Human resources analytics, or ‘people analytics’, is changing the way companies hire new talent as almost every aspect of an application is now subject to analysis. The recruitment process is changing. In addition to the traditional capability tests, firms now employ gaming data to assess an applicant’s creativity and problem-solving skills. Also, with the rise of massive open online courses, firms can tap into a whole new data set concerning the skills of potential employees (Ibarra, 2013). Big Data can identify not only the characteristics that are important to look for when hiring new talent, but also the reasons that a specific employee outperforms his or her peers, the drivers of employee retention, or the correlations between compensation and performance. But there is much still to be achieved in this area. One study found that only 4 per cent of its sample (large corporations, N=480) could use predictive analytics for their workforce, even though it improves recruitment efforts, leadership pipelines, and talent mobility (Bersin, O'Leonard, & Wang-Audia, 2013). It seems that human resources is an area that can benefit greatly from Big Data – that is, if firms organize themselves to utilize the data they have.

Innovation

Organizations are still struggling to determine the impact Big Data has on innovation and to understand how to get value out of Big Data (Schrage, 2016). Research has found that the relationship between Big Data and innovation is not yet well understood (George, Osinga, Lavie, & Scott, 2016). George and Lin (2016) have found that by using Big Data, firms can improve their businesses on a daily basis. They present four types of Big Data analytic-driven innovation, two of which possess an innovation focus and two an analytics focus. The innovation focus is the managerial effort focused on the innovation outcome, and the analytics focus is the organizational effort focused on the analytics itself (George & Lin, 2016). For the purposes of this thesis, the analytics mentioned here is treated as Big Data analytics.

1. Analytics as innovation: the use of analytics as part of innovation.

2. Innovation on analytics: employing new algorithms or dashboards, or improving current ones.

3. Analytics on innovation: performing analytics on innovation-related processes.

4. Innovation through analytics: driving innovation through analytics by integrating

This framework is useful in identifying how to approach Big Data, but its implications for innovation have not been explored. Although innovation is a driver of competitive advantage, the implications of Big Data for this area of business remain undetermined; companies are still groping in the dark – in part because the Big Data industry itself is not yet able to comprehend Big Data’s impact. Given how quickly Big Data is developing, this is only to be expected.

Decision making

As mentioned above, it has been shown that Big Data improves the quality and effectiveness of decision making. More data leads to more information and evidence on which to base decisions. This trend shows that decision making is increasingly moving away from intuition and gut feeling and towards more transparency (Kwon, Lee, & Shin, 2014). Moreover, there is already technology available that allows decisions to be made automatically based upon Big Data (Provost & Fawcett, 2013). These technologies relieve managers of the need to make decisions on a ‘hunch’ and increase the likelihood that, where decisions are made on the basis of data sets, the correct decisions can be made. But the vast amounts of data, the variety of data, and the velocity of data also have potentially negative implications for data-driven decisions. Unstructured compiled data from different sources may be of a lesser quality than structured data, and this may affect the quality and effectiveness of the decision making based on it (Kwon, Lee, & Shin, 2014). Also, as companies seek to become more agile, and more stakeholders become involved and more information becomes available, there can be delays in decision making because all stakeholders have access to all the data, leading to increased interference (Schrage, 2016). Beyond the effects it has on the process of decision making, Big Data may also alter the rights to make certain decisions. When a company moves towards a more data-intensive way of making decisions, other expertise becomes necessary if the right decisions are to be made (Schrage, 2016). For example, more analytical knowledge may be necessary. This leads to changes in the way decision-making rights are distributed and may alter organizational structures. One can assume that translating data into information is hard, as the quality and completeness of data are not always ensured. In short, companies should note that using Big Data can have significant implications for their decision making.
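A hedged sketch of what automated, data-driven decision making can look like in its simplest form: an invented churn score and customer value are combined into a retention decision by a threshold rule. The figures, thresholds, and field names are assumptions made purely for illustration and are not taken from the thesis or the cited sources.

```python
# Toy example: turning a data-driven score into an automated decision.
# Customers, scores, and thresholds are invented for illustration.

customers = [
    {"id": "C001", "churn_probability": 0.82, "annual_value": 1200},
    {"id": "C002", "churn_probability": 0.15, "annual_value": 300},
    {"id": "C003", "churn_probability": 0.55, "annual_value": 4500},
]

def decide(customer: dict) -> str:
    # Expected loss = probability of leaving * value of the customer.
    expected_loss = customer["churn_probability"] * customer["annual_value"]
    if expected_loss > 1000:
        return "offer personal retention deal"
    if customer["churn_probability"] > 0.5:
        return "send automated discount"
    return "no action"

for customer in customers:
    print(customer["id"], "->", decide(customer))
```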


2.2. Conclusion

Big Data is playing an increasingly important role in business, disrupting entire industries. We generated more data in the last two years than we did in all of preceding human history. Data is thus becoming more readily available (Waller & Fawcett, 2013). In combination with new technology and increasingly sophisticated algorithms, Big Data is becoming the new corporate standard and is touching upon all areas of business. But not all of the changes being brought about by Big Data are understood equally well. The next chapter discusses one of these areas: innovation.


3. Innovation

Innovation is an important source of competitive advantage for both service and manufacturing firms (Neely & Hii, 1998; Goffin & Mitchell, 2005). To innovate, companies traditionally focus either on differentiating themselves by creating new products or services or on optimizing existing processes to cut costs. But it seems that this is no longer enough. Some researchers have recently emphasized business model innovation, suggesting that it promises the most sustainable form of competitive advantage (Demil & Lecocq, 2010). Research has shown that business model innovators outperform traditional innovators over time (Lindgardt, Reeves, Stalk, & Deimler, 2009). Yet business model innovation should not serve as a company’s sole source of innovation. Rather, businesses should combine business model innovation with either product innovation or process innovation (or both). Consider, for example, Apple, which regained competitiveness through a combination of a new business model (a digital hub instead of a computer) supported by new products (the iPod) and services (iTunes and iMovie).

Firms can choose any of these approaches to innovation. But to innovate successfully, a firm needs an innovation strategy (Pisano, 2015). An innovation strategy defines how and to what extent a company employs innovation to gain competitive advantage (Gilbert, 1994). An innovation strategy sets the boundaries for the innovation approach and the innovation processes that companies adopt. A typical innovation strategy is internally oriented. An internal innovation strategy often involves secretive internal innovation processes like R&D activities and the protection of property rights, and these sorts of strategies were successfully employed in the past by large multinational enterprises such as DSM and Philips. However, the playing field is in transition. Since Web 2.0 was introduced, customers have had a voice – a loud one – that companies cannot ignore. Additionally, the Internet of Things, combined with the increasing number of digital devices, has created an ocean of new data to be used (Swan, 2012). Products like refrigerators are becoming ‘smart’ and are connected to other products (Teradata, 2015). This should lead companies to adopt novel innovation processes. Firms need to find ways to deal with the increase in user-generated content and customer empowerment, technology, and digitalization. Currently, companies often innovate within the frame of their business model or industry. Take for example the football industry: modern football clubs have enormous amounts of data, scouting reports, psychological reports, and statistics relating to individual players. Still, there is not one football club that uses all their available data to attract the right players, field the right squad, or analyse their opponents properly (De Hoog, 2017). This is conservative, short-term thinking. Companies should use the increasing amounts of data and increasingly sophisticated technology as an opportunity to reframe the way they approach their innovation processes. Only if they do this will they be able to withstand the increasing competition from data-enabled start-ups. Open innovation is one way that firms can start to reframe their innovation processes.

3.2. What is open innovation?

Exploiting external knowledge is considered central to innovation (Cohen & Levinthal, 1990). Research suggests that firms cannot innovate on their own, and many innovative firms use open innovation to achieve a competitive advantage (Laursen & Salter, 2006; Dahlander & Gann, 2010). But what is open innovation, exactly? Open innovation can be defined as ‘the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively’ (Chesbrough, Vanhaverbeke, & West, 2006). Although this definition is clear, there have already been changes to the technologies relating to open innovation since it was suggested. This thesis defines the knowledge inflows and outflows mentioned in the above definition as (public) databases, user-generated content, and (open source) technologies, which are obtained externally to help the firm in a specific way.

According to Dahlander & Gann (2010), open innovation can be categorized as either inbound or outbound. This distinction is based upon the flow of information in the open innovation process. Inbound open innovation is the flow of information from external sources to the focal firm; outbound open innovation is the opposite. A similar distinction is adopted by Gassmann et al. (2010) under the headings of ‘outside-in’ and ‘inside-out’ open innovation. There are different ways external knowledge can be made use of in innovation. In inbound open innovation, new knowledge can be either sourced or acquired. Sourcing means that firms scan the environment for existing technologies, databases, or knowledge. Acquiring is similar to sourcing, but firms must pay a licence fee to obtain the technology (Dahlander & Gann, 2010). Or, instead of paying a license fee for a certain technology, firms can acquire start-ups to obtain the necessary technology. Outbound open innovation can be categorized as either revealing or selling. These are similar to sourcing and acquiring, yet the flow of information is in the opposite direction (Dahlander & Gann, 2010). In line with my research question, this thesis will primarily consider inbound open innovation.


3.3. Why the time of open innovation is now

Even though it has been suggested that open approaches to innovation do not fit well with traditional business strategy theories, these approaches are increasingly being adopted (Chesbrough & Appleyard, 2007). Open innovation is also enjoying an increasing amount of attention in both the academic and business worlds (Gassmann, Enkel, & Chesbrough, 2010; Wass & Vimarlund, 2016). Thus, despite its disadvantages – such as difficulties in protecting property rights – open innovation is becoming increasingly popular (Dahlander & Gann, 2010). Firms increasingly look outside their own boundaries for knowledge that they can transform into value, and this is most likely to be explained by the above-mentioned increases in available technology, public data, and user-generated content. Even the pharmaceutical industry, which is traditionally thought of as a conservative industry, engages in open innovation to leverage outside knowledge (Hunter & Stephens, 2010). Because of the shift to open innovation, firms can also rely on a different kind of knowledge that is generated externally for competitive advantage (Doheny, Nagali, & Weig, 2012). In this regard, firms that open their doors can tap into a wealth of knowledge to meet their changing knowledge needs. This knowledge keeps on expanding due to the sheer number of devices connected to the internet – it is estimated that there will be 50 billion devices connected to the internet by 2020 (Swan, 2012). More data from a growing variety of sources (roads, buildings, and household appliances) offers more information to use in the search for competitive advantage. This information comes, for example, from sound waves, movement, and other variables, and it links real world objects to the internet. In other words, firms that are not yet looking outside of their own innovation boundaries are missing out.

Table 3.1: Inbound and outbound open innovation

Direction | Category | Examples
Outbound | Knowledge sharing: selling, revealing | Intellectual property, technology, data sets, user-generated content (forums, social media)
Inbound | Knowledge development: sourcing, acquiring | Intellectual property, technology, data sets, user-generated content (forums, social media), start-ups

Because firms are increasingly using open innovation, they are altering their methods of developing knowledge. Instead of relying on internal innovation practices such as research and development, firms are sourcing knowledge from outside of their own borders. The increasing amounts of data and the intensifying search for that data will have implications for innovation. In other words, the open innovation process is increasingly dependent on external knowledge flows. As Big Data is a part of these knowledge flows, understanding Big Data’s impact on open innovation is likely to lead to insights about the innovation process. The conclusions of market research are likely to become more trustworthy because of the supporting data they draw upon. This may, in turn, make it easier for firms to know whether to continue with or end an innovation project. Different and novel sources of data will provide new insights. The examples mentioned above are all likely to have effects on the innovation process. Still, to extract insights from the data, firms should employ Big Data analytics. With sentiment analysis, sensor patches, and deep learning, market research could become obsolete. Furthermore, data concerning consumers or products now comes in batches, which leaves companies chasing consumers. A continuous stream of data reduces the delay, saving companies a lot of money. Nanosensors within products provide real-time data on usage patterns, which can allow companies to identify which product components are malfunctioning instead of having to take the entire product apart. This can mean a more rapid improvement of the product and a faster time to market.
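To illustrate how a continuous stream of sensor data could point to the specific component that is failing, rather than requiring the whole product to be taken apart, the Python sketch below aggregates invented error events per component; the device identifiers, component names, and alert threshold are assumptions made for the example.

```python
# Illustrative aggregation of real-time sensor error events per component.
# Event data, component names, and the alert threshold are invented.
from collections import Counter

sensor_events = [
    {"device": "D-17", "component": "pump", "error": True},
    {"device": "D-17", "component": "valve", "error": False},
    {"device": "D-23", "component": "pump", "error": True},
    {"device": "D-41", "component": "pump", "error": True},
    {"device": "D-41", "component": "display", "error": False},
]

errors_per_component = Counter(
    event["component"] for event in sensor_events if event["error"]
)

ALERT_THRESHOLD = 3  # flag components with at least this many error reports

for component, count in errors_per_component.items():
    if count >= ALERT_THRESHOLD:
        print(f"Investigate component '{component}': {count} error reports")
```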

3.4. Conclusion

The ever-growing amount of data offers enormous potential. The transition towards open innovation means companies increasingly opening themselves up to external sources of knowledge in order to innovate. Because firms are looking beyond their own innovation boundaries, traditional ways of creating value are becoming less relevant, and other forms of knowledge are becoming more relevant. Therefore, the innovation playing field itself is shifting towards a new state, and the only way for firms to keep playing is to reframe their approach to innovation. They should start innovating outside the normal innovation boundaries. Big Data can help in facilitating this transition.


4. The implications of Big Data for the innovation process

Now that Big Data is being used by an increasing number of companies, the impact it has on business is becoming clear. Research into marketing, supply chain management, decision making, and HR has focused on the positive outcomes that can be achieved in these areas using Big Data. However, how Big Data affects the innovation process as a corporate activity is not yet understood. It is still not known what exactly the challenges and opportunities presented by Big Data are. Wessel (2016) argues that Big Data may provide a basis for disruptive innovation and open up opportunities for entering markets from different angles. As I explained in chapter 2, George and Lin’s (2016) typology of Big Data analytics and innovation is useful as a first step in identifying how Big Data can be used to approach innovation as a business activity. Still, George and Lin do not consider how and to what extent innovation is altered by Big Data. The traditional innovation framework has already shifted towards an ‘open innovation process’ (which will be discussed in section 4.1), through which companies can leverage external data (Chesbrough, 2003). Yet, even in the case of open innovation, the impact of Big Data on the innovation process is still a ‘black box’. What implications does Big Data have for the innovation process itself? Is it an extra tool that can be used to support the innovation process? Or does it yield so much information that it allows firms to discover completely different and new insights, thereby changing the innovation process itself? Will innovation driven by Big Data play a central role in the firm? Why is it important to know how Big Data changes the innovation process itself?

4.1. The innovation process

As mentioned in chapter 3, organizations can engage in process innovation, product innovation, or business model innovation. However, this does not yet tell us anything about how businesses go about developing their business models, products, or processes. The traditional definition of the innovation process is the ‘progression from scientific discovery, through technological development in firms, to the market place’ (Rothwell, 1994). Yet, as Rothwell (1994) argues, new generations of innovation processes were developed, which eventually led to a novel innovation process model – leveraging external knowledge – which saved companies money and development time. It can be argued that the innovation process itself still consists of three major stages. It is still a process in which ideas are generated (the ideation phase), a product or service is developed (the development phase), and a product or service is delivered to the marketplace (delivery/marketplace). Figure 4.1 shows these three major stages.

Figure 4.1: The open innovation process

Adapted from Rothwell (1994)

Even though the innovation process is depicted as a sequence, the dual arrows indicate that iterations making use of, for example, customer feedback are common and the stages are interdependent.

The idea generation, or ideation, phase is influenced by new technologies and new needs pushed by the focal organization. Knowledge can be developed in the form of needs and technologies that can be identified by leveraging external knowledge. This is indicated by societal needs and the state of the available technology. Customers – society – can voice their needs through social media or through direct interaction with the company. Furthermore, technology can be sourced or acquired (see chapter 3). This model thus shows that new ideas can be generated through both internal and external knowledge development. The dashed lines emphasize the open nature of the innovation process. As mentioned above, using external data can make it easier to identify successful innovations, but it can also lead to the connection of knowledge domains and the broadening of the scope of the innovation portfolio. In the development phase, external knowledge can be included in order to provide feedback at different stages of development (for example, when a pilot version is tested). To get a product to market as fast as possible, it is important to develop it quickly. In the marketplace, the new product or service is delivered to the market, after which adjustments can be made based on market sentiment.

In light of this model, it is clear that the impact of Big Data on the open innovation process is likely to be felt primarily at the ideation and development stages. Big Data can improve the ideation phase as external knowledge is used to connect knowledge domains. Big Data can also increase the scope for innovation because it opens up opportunities to connect unusual knowledge domains, which leads to the identification of new market segments or gaps in the existing portfolio. Big Data also influences the development phase as it facilitates an increased use of external data. This can lead to a more rapid identification of successful innovations, more reliable knowledge of which markets to serve, and fewer deviations from the innovation process. All in all, it leads to a better focus in the innovation process. Moreover, the development phase can be accelerated by shortening iteration cycles through faster treatment of real-time feedback data.

4.2. Motivations for the selection of propositions

Section 3.3 brought out some ways in which the innovation process might be influenced. This research uses some of those ways to build a case for the impact of Big Data on the innovation process. These factors – speed, focus, and scope – will be elaborated in the following section. The importance of these factors is stressed in the existing literature. Speed, or time to market, is considered one of the most critical factors in a firm’s success (Vesey, 1992). It has also been found that innovation can only be implemented successfully if there is a good fit between the product and the targeted user (Klein & Speer Sorra, 1996), and a focused innovation approach increases the likelihood of such a fit. Finally, connecting a wider variety of sources of data together may lead to greater diversification and thus reduced risk (Kim, Hwang, & Burgers, 1993). Big Data can stimulate this process.

As mentioned above, these three factors are not the only ways in which firms can benefit when implementing Big Data in their innovation process. However, speed, focus, and scope cover the central ways in which businesses are likely to be affected by Big Data. Of course, given that little literature is available on these matters, it is risky to make claims about these effects. However, these factors were mentioned frequently by the respondents in the interviews used to gather data for this thesis. What is more, because the interviews were semi-structured, these factors were not raised due to the prompting of the interviewer but were raised by the interviewees themselves. This fact strengthens the case that they are significant factors. Thus, the motivations for focusing on these three factors in the formulation of the propositions below are the previous mentions of these factors in section 3.3, the significance attributed to them in the literature, and their repeated mentions in the interviews.


4.3. Propositions and conceptual model

Why is filling this gap in our knowledge useful? Awareness of how Big Data affects open innovation can lead to competitive advantages for firms. For example, open innovation based upon Big Data can lead to the early identification of effective innovations, which may reduce the time to market of new products, saving companies precious time and capital. Products’ malfunctions and faults can be transmitted to businesses in real time. The continuous stream of data may lead to shorter iteration cycles, which speeds up the innovation process. This is valuable: one study found that 77 per cent of decision makers (N=1000) increasingly require real-time data (Capgemini, 2015). This leads to the first proposition:

P1: The impact of Big Data on open innovation leads to a faster innovation process

Furthermore, through the variety of data, innovation may become more focused. More data about customers and products, captured from a variety of sources, together with Big Data analytics can provide better insights into which customer segments should be served and which kinds of product or service fit that specific segment. Big Data can also increase the success of innovation. Data-driven innovation reduces the incidence of failure between the invention and innovation stages dramatically by selecting the most promising ideas (Kusiak, 2009). Big Data can help reduce this failure rate even more. This leads to the second proposition:

P2: The impact of Big Data on open innovation leads to a more focused innovation process

Third, and by extension from the second proposition, Big Data can also lead to innovations in new areas. Combining a larger variety of data gives rise to novel insights that can lead firms to innovate in areas that they may not have considered innovating in before. In addition to a more focused approach to innovation, then, Big Data expands the scope of innovation. This leads to the third proposition:


P3: The impact of Big Data on open innovation gives rise to a broader scope for the innovation process

It is likely that there are other advantages to making use of Big Data in open innovation. But the advantages just outlined are surely the most central ones, as they relate to areas that are heavily reliant on data and therefore subject to the effects of Big Data. Figure 4.2 illustrates the impact of Big Data on open innovation. Big Data influences open innovation, which in turn gives rise to changes in the speed, focus, and scope of the innovation process.

Figure 4.2: The impact of Big Data on open innovation

Now that we have identified why it is important to understand the effects Big Data has on open innovation, we can also see who needs to know about them. In business, decision makers at organizations that make use of open innovation and Big Data can benefit from understanding these effects as they will thereby acquire a greater understanding of the innovation process. Furthermore, firms may benefit through cost savings and through better and faster innovations. At the same time, a firm may be able to penetrate new markets due to the increased scope of innovation, leading to a more diversified portfolio. As this is one of the first studies to explore the impact of Big Data through open innovation on the innovation process, its results open the door to other scholars to do more research in this area.

4.4. Conclusion

Until now, the implications of Big Data for open innovation have not been clear. The transition of the innovation playing field towards open innovation is the first step in utilizing Big Data to add value to the firm. Still, it is not completely clear what the implications are. We can be sure, however, that an understanding of these effects will benefit both business and researchers. The three propositions presented above identify the aspects of open innovation affected by Big Data: speed, focus, and scope.


5. Research Design

This section will provide an overview of the research design of this study. Because this research seeks to identify how Big Data affects open innovation, which is carried out at firms, this study’s unit of analysis is the firm. Therefore, Big Data and innovation experts were interviewed about what they thought the impact on companies would be. The interviews were conducted in the Dutch language. This was the native tongue of all respondents and allowed for a smoother interview process.

5.1. Research Structure

Yin (2003) argues that there are three criteria for determining the right research structure. First, case studies are exploratory, explanatory, or descriptive. Exploratory studies are appropriate for problems that are not clearly defined. Explanatory studies are used to explain causal relationships. Descriptive studies are used to describe certain phenomena or processes that occur. It has already been shown above that there is very little understanding of how Big Data affects open innovation. Thus, this study is exploratory.

Secondly, research can involve either a single-case design or a multiple-case design. Which design is adopted depends on whether one ‘case’ is enough to explain the unit of analysis. A single-case design explores, explains, or describes phenomena in one ‘case’. A multiple-case design considers multiple ‘cases’. This research investigates the impact of Big Data across multiple firms, and therefore it adopts a multiple-case design. Zucker (2009) argues that multiple-case studies must have a replication logic, in either a literal or a theoretical form. Literal replication logic can be defined as similar results. Theoretical replication logic is similar or contrasting results for specific reasons. This study employs theoretical replication logic as the impact of Big Data differs from case to case.

Thirdly, the design can be either holistic or embedded, depending on whether there is only one unit of analysis (holistic) or multiple units of analysis (embedded). The units of analysis in this study are firms as this research seeks to identify the impact on knowledge development within firms. This is thus a holistic study.


5.2. Data collection

Data was collected through semi-structured interviews with respondents.

Respondents

To obtain insights from both innovation and Big Data perspectives, this study compares responses from innovation and Big Data experts. As experts, these respondents obviously know a great deal about the matters at issue. They all have university degrees and are currently working at firms involved in innovation or Big Data; they thus have both theoretical and practical expertise. To get to respondents, a gatekeeper was identified and contacted. After some negotiation, access to his network of professionals was secured. After the gatekeeper provided necessary information about the potential candidates, they were contacted via email. As the gatekeeper was vice president at Capgemini and had a consultancy firm operating in a large network, he was involved in several innovation and IT projects. He had access to respondents with a great deal of experience in either innovation or Big Data. The search was expanded through the author’s own network. The respondents are listed in Table 5.1.

Table 5.1: Respondents

Innovation experts

Name | Company/industry | Role | Firm size
Ubald Kragten | DSM – Chemicals | Manager Business Intelligence and Innovation | 4,300
Bart Jansen | S&I Consulting – Information technology consulting | Strategy & Innovation Partner | 10
Koen Klokgieters | RevelX – Growth entrepreneurship/consultancy | Strategy & Innovation Partner | 15
Sonoko Takahashi | PwC – Consultancy | Senior Manager Digital and Innovation | 4,445

Big Data experts

Name | Company/industry | Role | Firm size
Bram Broeks | Netscalers – Business development | Director | 11
Maurice Jilderda | Sitech – Chemicals | Project Director Predictive Analytics | 730
Ron Tolido | Capgemini – IT consultancy | Global CTO | 145,000
Maarten Bellekom | EY – Consultancy | Data Analytics Advisor | 4,284
Gijs Egberink | Decisive Facts – IT | Database Analyst | 7
Marnix Bügel | MiCompany – Management consulting | Founding Partner | –

The sample includes two multinational enterprises from the chemicals industry (DSM, Sitech). DSM provided an innovation expert whereas Sitech provided a Big Data expert. Furthermore, two large consultancy firms are represented (PwC, EY). Again, one of the respondents (PwC) was an innovation expert whereas the other respondent was a Big Data expert (EY). Two small consultancy firms complete the innovation pool (S&I, RevelX), and the Big Data pool of experts is completed by two data scientists from small IT firms comparable in size to RevelX and S&I. The Big Data pool also includes a very large IT consultancy (Capgemini) and a medium–large Big Data consultancy company (MiCompany). Both pools of experts were assembled in such a way as to ensure similar industries and firm sizes were represented in each; this increases the likelihood that similarities or differences in the responses of innovation and Big Data experts will be due to the respondent’s particular field of expertise. In the analysis and results section, similarities and differences between the two pools of experts will be identified and discussed. This will allow us to see whether the two groups of experts have different views of the impact of Big Data on the open innovation process.

Research instrument

The research instrument used was the semi-structured interview. The reasons for choosing this research instrument are the exploratory nature of the research, the novelty of the subject, and the resulting need to adopt a flexible and open approach. Furthermore, this research instrument allows for the exploration of some of the deeper motivations for respondents’ answers directly, which leads to higher quality answers and thereby achieves greater insight (Horton, Macve, & Struyven, 2004). Despite the need for flexibility, certain aspects need to be covered to ensure the research objective is achieved. The three propositions posed in the previous chapter outline clearly what those aspects are. A series of questions was created to ensure that the most important issues were covered and to provide a guideline for the interview. At the same time, there was room to deviate when respondents felt the need to elaborate on a specific topic. Leech (2002) and Campion et al. (1994) were consulted for guidance on the construction of questions and on broadening the scope of the question types. The interviews were conducted either face-to-face (four) or via Skype (six).

An initial interview was used to test the validity of the approach; this interview was carried out with the gatekeeper and was not included in the results. After the initial interview, the questions were adjusted to ensure higher quality and efficiency. The adjusted interview was used for the respondents in this thesis. Typical adjustments involved changes to the format of the questions. Also, the introduction of a brief general overview of the topic guaranteed that the respondents had an adequate understanding of the issue. Additionally, the reformulation of certain questions in order to make them more specific to the company in question allowed respondents to provide more in-depth answers on the topic, which in turn provided a greater insight into the effects of Big Data. The interviews typically lasted 40–60 minutes. This duration was based on the pilot interview and ensured enough time to cover the core topic. Ten interviews were conducted, four with innovation experts and six with Big Data experts. The interviews are included as an appendix (appendix B).

Interviews were recorded with an audio recording device and subsequently transcribed in full. Transcription ensures that no valuable data is lost and is considered a fundamental aspect of qualitative research (Davidson, 2009). The method of transcription matters considerably: McLellan et al. (2003) state that poor transcription of audio or digital recordings can harm the analysis of the data, resulting in a loss of quality and delays in the analytical process. Although there is no general framework for the transcription of data, there are many practices that improve it (McLellan, MacQueen, & Neidig, 2003). The transcriptions in this research follow the approaches of McLellan et al. (2003) and Davidson (2009) as starting points for preventing the loss of information. Furthermore, full transcription of the data supports its processing for analysis. After transcription, the data was coded using NVivo, a widely used coding tool that is well suited to modern qualitative research projects (Bazeley & Jackson, 2013).

5.3. Strengths and limitations

Because this is a qualitative research project with a small sample size, its conclusions are hard to generalize. It is, however, a good starting point from which to advance this field of research. Because the data is collected through interviews, there is also a possibility of interviewer bias. However, Maxwell and Reybold (2015) argue that even though qualitative research is subjective, it should not be considered inherently biased; recognizing this subjective aspect makes the research stronger.

The goal of this study is to identify the impact of Big Data on the open innovation process. This is a new and under-researched phenomenon, and, as I have already mentioned, the appropriate kind of study to approach such a phenomenon is a qualitative, exploratory one. Steps were taken to safeguard internal validity and improve reliability in such a way that the interview questions were of high quality and covered the intended ground. Also, the semi-structured interviews allowed certain aspects to be investigated in more detail where necessary.


6. Data analysis and results

This chapter presents and analyses the results of the interviews. The propositions stated in chapter four are used to structure the chapter as follows. First, I analyse proposition one, which states that the impact of Big Data on open innovation leads to a faster innovation process; the relevant single-case findings are presented in a table. Second, the results are categorized and analysed separately for the innovation experts and the Big Data experts. Third, the two pools are considered together in order to identify differences and similarities between the Big Data experts and the innovation experts. This procedure is then repeated for propositions two and three.

First, in order to give some idea of the experts’ general approaches, I briefly describe the companies and their approaches to innovation or Big Data. Each pool of experts includes at least one multinational, one large consultancy network, and some smaller consulting firms; it is possible that these different sorts of companies hold different views on the basic approach to innovation. DSM and Sitech place a large emphasis on innovation and Big Data, respectively. DSM has multiple innovation centres across the globe and relies on innovation to achieve a competitive advantage. Sitech currently has 18 pilots running on predictive analytics, supported by Big Data analyses, which highlights the firm’s expertise when it comes to Big Data. PwC and EY are both major consultancy firms. The PwC respondent was involved in multiple innovation projects, primarily in the retail sector; the EY respondent was also primarily involved in retail projects and in Big Data analytics. Both consultancy firms are thus significantly involved in either innovation or Big Data. The small consultancy firms in the innovation pool are chiefly focused on innovation projects and accordingly emphasize innovation. The same holds for the Big Data companies, whose work mostly consists of advising companies on how to increase performance through Big Data analytics. Finally, Capgemini is a very large IT consultancy firm that relies heavily on both innovation and Big Data analytics.

6.1. Analysis


For each proposition, the relevant answers are summarized in two tables: one for the Big Data experts and one for the innovation experts. The tables consist of three columns: the respondent’s initials (1), whether he or she is in favour of the proposition (2), and one or more supporting statements (3). The categories for the second column are ‘yes’ (in favour), ‘no’ (not in favour), ‘not clear’ (no explicit answer), and ‘no relevant answer’ (nothing relevant was said about the proposition). Whether respondents were in favour of the proposition or not was inferred from the interviews; the supporting statements in the third column show the basis for this inference.

To arrive at ‘yes’, ‘no’, ‘not clear’, or ‘no relevant answer’, the interviews were coded after transcription using NVivo. The codes were created to identify the sections of each interview relevant to the propositions. The initial codes were ‘Big Data’, ‘innovation’, and ‘Big Data impact’; further codes were added while reading and analysing the interviews to allow a more thorough categorization of the statements. To assign a coded statement to one of the four categories, explicit words or related statements tied to each proposition were identified in the interviews, and the surrounding context was examined for positive or negative connotations, both within the sentence itself and in related sentences. Below, I outline how answers were sorted into the categories for each of the propositions; an illustrative sketch of this rule-based categorization follows the three descriptions.

Proposition one

For proposition one, ‘yes’ was inferred if an answer yielded direct or related uses of words such as ‘faster’, ‘quicker’, and ‘accelerated’, preceded or followed by words with positive connotations. ‘Not clear’ was inferred when statements included indirectly related uses of words such as ‘stimulate’, ‘influence’, or ‘impact’. ‘No’ was inferred if such statements were made alongside words with negative connotations or if negative words such as ‘slow’, ‘challenge’, or ‘complex’ were used. ‘No relevant answer’ was inferred if the respondent did not mention anything regarding the proposition.

Proposition two

For proposition two, ‘yes’ was inferred if an answer yielded direct uses of words such as ‘direction’, ‘focus’, and ‘where to go’, preceded or followed by words with positive connotations. ‘Not clear’ was inferred when statements included indirectly related uses of words such as ‘stimulate’, ‘influence’, or ‘impact’. ‘No’ was inferred if similar statements were made alongside words with negative connotations or if negative words like ‘deviation’, ‘distraction’, or ‘failure’ were used. ‘No relevant answer’ was inferred if the respondent did not mention anything regarding the proposition.

Proposition three

For proposition three, ‘yes’ was inferred if an answer yielded direct uses of words such as ‘scope’, ‘knowledge domain’, ‘portfolio’, and ‘variety’, preceded or followed by words with positive connotations. ‘Not clear’ was inferred when statements included indirectly related uses of words such as ‘range’, or ‘opportunity’. ‘No’ was inferred if similar statements were made alongside words with negative connotations or if negative words like ‘smaller’ or ‘less variety’ were used. ‘No relevant answer’ was inferred if the respondent did not mention anything regarding the proposition.
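
To make these rules concrete, the sketch below illustrates how the keyword-and-connotation logic for proposition one could be expressed in code. This is purely illustrative and not part of the actual analysis: the categorization itself was performed manually on the NVivo-coded statements, and the keyword lists, connotation cues, and function below are simplified assumptions rather than the exact procedure used.

```python
# Illustrative sketch only: a simplified, rule-based version of the manual
# categorization of coded interview statements (here for proposition one).
# The keyword lists and connotation cues are assumptions for illustration.

PROPOSITION_ONE_KEYWORDS = {"faster", "quicker", "accelerat"}   # direct wording (stemmed)
INDIRECT_KEYWORDS = {"stimulate", "influence", "impact"}        # indirectly related wording
NEGATIVE_KEYWORDS = {"slow", "challenge", "complex"}            # negative wording
POSITIVE_CUES = {"helps", "earlier", "shorter", "absolutely", "advantage"}
NEGATIVE_CUES = {"too late", "not necessarily", "difficult"}


def categorize(statement: str) -> str:
    """Return 'yes', 'no', 'not clear', or 'no relevant answer' for one statement."""
    text = statement.lower()

    has_direct = any(word in text for word in PROPOSITION_ONE_KEYWORDS)
    has_indirect = any(word in text for word in INDIRECT_KEYWORDS)
    has_positive = any(cue in text for cue in POSITIVE_CUES)
    has_negative = any(cue in text for cue in NEGATIVE_KEYWORDS | NEGATIVE_CUES)

    if has_direct and has_positive and not has_negative:
        return "yes"                # direct keyword with a positive connotation
    if has_negative:
        return "no"                 # negative wording or negative connotation
    if has_direct or has_indirect:
        return "not clear"          # related wording without a clear connotation
    return "no relevant answer"     # nothing relating to the proposition


if __name__ == "__main__":
    print(categorize("Big Data is absolutely an accelerator of innovation."))  # yes
    print(categorize("If you need Big Data you are too late."))                # no
```

The same structure applies to propositions two and three, with the keyword and cue sets replaced by the words listed above for each proposition.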

This thesis focuses primarily on the three propositions put forward in section 4.3, and so the results section will deal with these three propositions. As mentioned above, the motivation for focusing on these three in particular was their prominence in the literature and in the interviews themselves. It is beyond the scope of this thesis to examine all the potential ways Big Data influences the open innovation process. In exploratory research in particular, it is important to provide a well-argued basis for further research. By using these carefully selected propositions, rather than attempting to ascertain all of the ways in which Big Data affects the open innovation process, this thesis is able to provide such a basis.

6.2. A faster innovation process?

The first proposition states that the impact of Big Data on open innovation leads to a faster innovation process. To test this proposition, the interviews were analysed for relevant answers. The results are presented in Tables 6.1 and 6.2.

6.2.1. Innovation pool

Table 6.1: Innovation experts on proposition one

Respondent | In favour of proposition one? | Supporting statement(s)

A (U.K.) | Yes | ‘Big Data is one of the tools that can be used … If Big Data helps to test, to accelerate [the innovation process] via earlier insights in user patterns, then those are things that are effective. If I can test earlier, or can fail and improve my solution earlier, then I’ll advance earlier [in the innovation process].’ ‘Yes, iteration cycles [of the innovation process] will become shorter – that is correct.’

B (B.J.) | No | ‘We would say, then [if you need Big Data] you are too late; testing can occur earlier on [with the Lean Start-up Method].’ ‘If you engage in open innovation and you collaborate with the crowd or other specialists, do you use big data sets, then? Not necessarily, I would say.’

C (K.K.) | Yes | ‘[Innovation goes] faster, absolutely.’ ‘So, Big Data is absolutely an accelerator of innovation.’

D (S.T.) | Yes | ‘Then [with more incremental innovations] you could engage in far more accurate planning or make a plan with a business case leading to earlier identification of what the advantage is for the company.’ ‘That you receive a “go” or “no go” earlier [on an innovation project] so that you can advance to something new [a product] sooner, or so that you can improve the product earlier, and then introduce it to the market earlier.’ ‘I think [it is likely] that the incremental innovation indeed [will become] easier … that it will go faster, but that might also be the case for disruptive [innovation].’

Most innovation experts agreed with the proposition. Respondents A, C, and D stated that Big Data could make innovation or the innovation process faster. Respondent A discussed the shorter iteration cycles in an innovation process, which lead to faster development and normally a shorter time to market. Respondent C added that Big Data definitely accelerates innovation, but did not discuss the innovation process specifically. Respondent D made a distinction between incremental and disruptive innovation, initially saying that incremental innovation was more likely to be impacted by Big Data; in a later statement, she added that disruptive innovation might also be affected. By contrast, respondent B did not think Big Data would necessarily speed up the innovation process. With regard to the use of Big Data for feedback, respondent B stated that if you have to wait for Big Data analyses to test hypotheses or new products, it is already too late to make use of them. Respondent B also questioned whether Big Data is needed in specific cases of open innovation, since in these cases businesses often make use of multiple specialists, each with their own data sets; this multiplicity of sources contributes to the unstructured nature of the data, which is a characteristic of Big Data.
