
Developing a Capabilities Approach for the Adoption of

Big Data in Organizations

Master thesis

MSc. Business Administration | International Management Track

Sophie Elsen 10876596 March 2, 2017

Amsterdam Business School | University of Amsterdam Dr. M.P. Paukku


Abstract

Big data is an innovation with great potential for organizations. However, both practice and the academic literature show that many big data projects fail. Big data influences the entire organization: it involves more than technological change, as it requires the organization itself to change. The aim of this study is to describe specific organizational big data capabilities that influence the successful adoption of enterprise-wide big data systems in legacy organizations, and to distinguish their relative importance. A qualitative approach is used for this research; exploratory interviews and case interviews were conducted. Leader characteristics, organizational structure, and external characteristics form the foundation for this study. Since big data has profound implications for the organization, these three categories are extended to five: people, organizational structure, external characteristics, organizational culture, and strategy. For each category, organizations are provided with capabilities to become successful in the adoption of big data. The research further shows that, even though all categories are important, a distinction can be made in their relative importance: people is the most important category, followed by culture, strategy, external characteristics, and organizational structure. The research provides a foundation for further research and gives managers more insight into how to succeed in adopting the disruptive innovation that big data is.

Keywords: Big data; adoption; big data capabilities; disruptive innovation; IT innovation


Acknowledgement

I would like to express my deepest gratitude to Markus. Without your supervision and valuable feedback, it would have been impossible to finish my thesis. Thank you for everything. Next, I would like to thank Bas Verheij. As my thesis supervisor from Accenture, he made it possible for me to get in touch with the global big data analytics department.

Statement of originality

This document is written by Student Sophie Elsen who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Contents

1. Introduction ... 6

2. Literature review ... 11

2.1 Value creation ... 11

2.1.1 Value creation theories ... 11

2.1.2 Big data an innovation to create value ... 14

2.2. Big data ... 15

2.2.1. Emergence ... 15

2.2.2 Definition of big data ... 15

2.2.3 Big data: beyond the 5 V’s ... 18

2.3 Adoption theories: the organizational perspective ... 20

2.3.1 Leader characteristics ... 21

2.3.4 Organizational structure ... 23

2.3.5 External characteristics ... 28

2.3.6 Conclusion ... 29

2.3.7 Expanding the organizational perspective ... 30

2.4 Conceptual framework ... 35

3. Methodology ... 37

3.1 Research strategy ... 37

3.2 Research design ... 37

3.3 Data collection ... 39

3.4 Data analysis ... 42

3.5.1 Coding ... 42

3.5.2 Analysis ... 42

3.5 Credibility, transferability, and dependability ... 43

4. Results ... 44

4.1 Exploratory interviews ... 44

4.1.1 People ... 44

4.1.2 Organizational structure ... 47

4.1.3 Culture ... 52

4.1.4 Strategy ... 54

4.2 Case interviews ... 55

4.2.1 Main categories ... 56


4.2.2 People ... 58

4.2.3 Organizational structure ... 60

4.2.4 External characteristics ... 62

4.2.5 Culture ... 63

4.2.6 Strategy ... 64

5. Discussion ... 67

5.1 Implications ... 67

5.1.1 People ... 67

5.1.2 Organizational structure ... 70

5.1.3 External characteristics ... 73

5.1.4 Organizational culture ... 74

5.1.5 Strategy ... 75

5.2 Contributions ... 75

6. Conclusion ... 79

7. References ... 80

8. Appendix ... 89

Appendix 1 ... 89

Appendix 2 ... 89

Appendix 3 ... 92

Appendix 4 ... 96


1. Introduction

‘Chief Information Officer replaces Chief Executive Officer’. Will this be a future headline in the Financial Times? The world is getting more and more connected (Marr, 2015). Mobile devices, laptops, iPads, and connectivity have increasingly become an integral part of doing business and of social interaction in the 21st century. There are already more devices connected to the internet than there are people on the face of planet earth (Marr, 2015). The increasing number of connections has led to a large growth in the types and volumes of data. Data is generated in the form of text, pictures, sensor readings, web clicks, Facebook likes, and so on. In the past two years (2011-2013), more data has been recorded than in all of human history before (Waller & Fawcett, 2013).

Big data has the potential to create value for companies (McAfee & Brynjolfsson, 2012), which is the ultimate goal of organizations. Some see big data as a gold mine (Mayer-Schonberger & Cukier, 2013), or even the new oil (Marr, 2015). It allows decision makers to base decisions on the analysis of large data sets. Big data can discover underlying (behavioral) patterns in the data that would be impossible for one person to discover. In other words, big data leads to more accurate decision making. Making decisions based on data is called data-driven decision making (DDD) (Provost & Fawcett, 2013). Enterprise-wide big data systems in particular have the ability to combine data from different departments, obtained from internal and/or external data sources, and thereby produce unique results. In this thesis, big data refers to such enterprise-wide systems. The concept of big data is a disruptive innovation (Kitchen, 2015) that requires a massive change in the organization (Court, 2015; Kuilen & Jacques, 2015; LaValle et al., 2011). In order for value to be created, big data needs to be adopted in the organization.

Despite this potential for value creation, research shows that around 50% of big data projects fail (Demirkan & Dal, 2014; Lui & Meloon, 2015). Companies struggle to become


successful adopters of big data and do not use analytics to their advantage (Ransbotham, Kiron & Prentice, 2016). When utilizing the available data for business purposes, problems are encountered with the existing data systems: these systems are unable to process the new data characteristics (Watson, 2014). However, that does not seem to be the only challenge. Given the innovative and disruptive nature of big data, its implications are bigger than technology alone. The biggest challenge in the adoption of big data is a business problem rather than a technology problem (Kuilen & Jacques, 2015), and scholars agree that the biggest challenges are mainly managerial and cultural in nature (LaValle et al., 2011). What capabilities, then, do organizations need in order to become successful in the adoption of big data? Traditional organizations, also called legacy organizations, experience difficulties in exploiting the benefits of big data. These organizations have developed a certain degree of administrative heritage (Bartlett & Ghoshal, 2002; Gertsen, Søderberg, & Torp, 1998). This administrative heritage is ingrained in all organizational processes, structures, and leadership styles, creating a dominant logic (Prahalad & Bettis, 1986) that is not compatible with the requirements of big data (Wamba et al., 2015). According to Bean (2009), old and large organizations (with administrative heritage) experience many difficulties in understanding, implementing, and exploiting the benefits of big data. Because of these difficulties, legacy organizations are the focus of this research. Legacy organizations are also defined by their large size. Large is defined by the Kamer van Koophandel (KvK), the Dutch chamber of commerce, as revenue above €40 million and an average of more than 250 employees.
It can be summarized that traditional organizations may be hindered by their administrative heritage, which reduces the benefits they can derive from adopting innovations such as big data (Bartlett & Ghoshal, 2002; Gertsen, Søderberg, & Torp, 1998).

In trying to understand this high failure rate, it is useful to review models that explain the adoption of technology. Several models try to explain the adoption of technology


including the Technology Acceptance Model (TAM) by Davis (1989), the Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2003), and the Diffusion Theory by Rogers (1995). The TAM provides reasons for the (non-)adoption of a technology (Wallace & Sheetz, 2014). The model is extensively studied and explains 30 to 40 percent of the variance in IT adoption (Yousafzai, Foxall, & Pallister, 2007). Adoption depends on the perceived usefulness and the perceived ease of use (Davis & Venkatesh, 1996). The UTAUT model emerged from eight different models that were validated by prior research. Differences in adoption are attributed to performance expectancy, effort expectancy, social influence, and facilitating conditions (Venkatesh et al., 2003). The model explains 70% of the variance in technology adoption (Venkatesh, Thong & Xu, 2012). Due to this high explanatory power, the model is frequently used in research on IT adoption in organizations (Bierstaker et al., 2014). The Diffusion Theory (Rogers, 1995) focuses on the conditions that increase or decrease the likelihood that an innovation will be adopted by the members of, in this case, a legacy organization. The theory explains innovation adoption in terms of the organizational readiness to adopt and the technological readiness to adopt (Rogers, 1995). When researching IT adoption, it is important to understand that, although the same research question is addressed, there is a distinction between individual adoption and organizational adoption (Jeyaraj et al., 2006). Jeyaraj et al. (2006) point out that there is a lack of understanding and integration between the two: models trying to explain adoption mainly focus on either individual adoption or organizational adoption. This is striking, since the adoption of a technology depends both on the users (individuals) and on the processes of the organization; without either one, the adoption will fail.

Placing the adoption literature in the context of big data brings out clearly that the model of Rogers (1995) provides a good basis for this research, as Rogers (1995) is the only adoption study that combines individual adoption with organizational adoption. Given


that big data is more of a business problem than merely a technological problem, the focus will be on the organizational readiness to adopt rather than the technological readiness to adopt. The organizational readiness to adopt depends on three factors: (1) leader characteristics, (2) internal characteristics of the organizational structure, and (3) external characteristics (Rogers, 1995). In earlier research the model has been tested on IT innovations (England, Stewart & Walker, 2000), but never on big data. Because big data is more of a business than a technological problem, it differs from other IT innovations: big data is not solely supporting technology, as IT is generally perceived, but requires fundamental adjustments in how legacy organizations operate. The question arises whether the model of Rogers (1995) is too limited or outdated to be applied to a disruptive innovation such as big data. The three factors of Rogers (1995) play an important role, but, as argued above, adopting big data requires changing the entire operating model of an organization. In such a context the three factors of Rogers (1995) are unlikely to be sufficient, and changes in organizational culture and strategy will also be required. In order to examine this research gap, the research question is: how do organizational factors influence big data adoption in legacy organizations, with a view to developing big data capabilities?

The contribution of this exploratory research is to point organizations in the right direction to become successful in the adoption, and thereby implementation, of enterprise-wide big data systems, with a focus on the organizational perspective. This helps organizations acknowledge the importance of the organizational perspective and shows how best to handle this transition when adopting big data, in which many organizations are failing. The research develops specific big data capabilities, something organizations evidently find hard, as the 50% failure rate confirms. This research also aims to contribute to the literature by further developing and expanding the adoption theory of Rogers (1995) and applying it to a relatively new innovation, big data, which is gaining more


importance in practice and in the academic field. Furthermore, the research gives guidance on the relative importance of the categories of the expanded model, which has not been done in previous research (Damanpour, 1987). In addition, unlike most prior research, which is confined to the healthcare sector (Damanpour, 1987), this research takes an industry-wide perspective.

This thesis is organized as follows. First, the literature will be discussed, starting with value creation in organizations. The literature review continues with an exploration of the concept of big data, emphasizing that big data is a way of working rather than a static definition, one that asks for large changes in the way organizations operate. The literature review then focuses specifically on the adoption of big data. This is followed by the method section. Next, the results of the conducted interviews will be discussed. Lastly, the discussion and conclusion will be presented.


2. Literature review

Overall, organizations adopt big data because it has the potential to create value. This value is created by making decisions based on data. The potential value is unlocked when organizations actually use the technology. In order for big data to be used it must be adopted in organizations. Given the disruptive nature of big data, this is not an easy task. First, value creation will be discussed, followed by big data. Next, the adoption theories will be discussed. The chapter will end with the conceptual framework.

2.1 Value creation

It is no secret that organizations want to create value. The aim is to achieve profit, preferably as high as possible, and to outsmart the competition. Organizations are successful when market demand aligns with the offerings of the organization; the focus is on the markets they compete in (Day, 1994). Managers strive to realize the marketing concept, which refers to making the customer the top priority of the organization: customer satisfaction is key (Day, 1994). Companies that are able to meet market demand and respond to changing conditions better than their competitors achieve a competitive advantage and thereby higher profitability, which is confirmed by empirical evidence (Day, 1994). However, there is a paradox concerning the marketing concept: little research has been done on how to successfully realize it (Day, 1994). What capabilities do organizations need to be successful? In order to determine what capabilities an organization needs to be successful in big data adoption, an understanding of how value is created is necessary first. The next section will examine different theories of value creation.

2.1.1 Value creation theories

Value creation drives organizations. Understanding this concept is essential, since it determines what strategic decisions organizations make. Different theories have been developed on


how organizations can create value, and the focus areas differ per theory. Porter (1980) focuses on the environment in which organizations compete. The resource-based theory (Barney, 1991) focuses on the organization itself and the resources it possesses.

Porter introduced the importance of competitive forces (Teece, Pisano & Shuen, 1997). According to Porter (1980), a superior competitive position is achieved by dealing with the company’s environment and industry competition. The attractiveness, in this case the ultimate profit potential, of an industry is determined by five external forces: industry competition, the potential of new entrants, the power of suppliers, the power of customers, and the threat of substitute products (Porter, 1980). Interestingly, research trying to explain differences in value shows that firm-specific factors are central, whereas industry effects are poor predictors (Teece et al., 1997).

A different perspective on value creation is the ‘emerging capabilities’ or ‘resource-based’ theory, which focuses on the firm level. It looks at the capabilities an organization owns that can create a sustainable competitive advantage. Value is created by two interrelated concepts: assets and capabilities (Day, 1994). ‘Assets are the resource endowments the business has accumulated. . . .; and capabilities are the glue that brings these assets together and enables them to be deployed advantageously’ (Day, 1994, p. 38). These resources, a combination of assets and capabilities, originate and develop over time, and the combination of these unique resources should be cherished and utilized by management to create a competitive advantage (Barney, 1991). The focus of this approach is on firm-specific resources, as opposed to market positioning (Teece et al., 1997).

A critical note on this view is in order. Organizations are focused on the market they compete in, so it is not only important what resources the organization owns, but also what resources it has access to. Many organizations obtain help from skilled people outside the organization when adopting big data. It may also be that the organization is including


data from outside the organization on its big data platforms. In other words, focusing only on firm-specific resources would be too narrow in this case; on some points the resource-based view can be too limited. Day (1994) and Teece et al. (1997) both acknowledge this shortcoming. Day (1994) introduces distinctive capabilities: some capabilities are superior when the business succeeds in outperforming the competition. Day (1994) describes those capabilities as dynamic capabilities, since they support a market position that is valuable and difficult to match. Day (1994) presents six steps to enhance capabilities: ‘(1) the diagnosis of current capabilities, (2) anticipation of future needs for capabilities, (3) bottom-up redesign, (4) top-down direction and commitment, (5) creative use of information technology, and (6) continuous monitoring of progress’ (Day, 1994, p. 37). Dynamic capabilities are also discussed by Teece et al. (1997): these capabilities provide the unique combination of integrating, building, and reconfiguring internal and external competence in order to keep up with a rapidly changing environment. Key in building capabilities is to identify the base for distinctive and difficult-to-replicate advantages that can be developed (Teece et al., 1997). When establishing the capabilities an organization needs in order to adopt big data, the market environment and the firm-specific assets will both play an equally important role.

Developing capabilities has become necessary due to the changing characteristics of the business environments legacy organizations compete in (Teece, 2007). In the current environment, firm-specific resources do not guarantee value creation; value creation depends on discovering and developing opportunities that must be embraced by the organization’s capabilities (Teece, 2007). According to Teece (2007), these capabilities have profound implications for current processes, organizational forms, and business models. The capabilities need to create and address the opportunities presented while still keeping the customer the top priority. They help organizations to see opportunities, seize them, and


transform the organization. For the purpose of this thesis, the focus will be on establishing what capabilities are required for organizations to embrace big data.

2.1.2 Big data an innovation to create value

Organizations continuously develop in order to maintain their competitive advantage. This is done by adopting innovations (Damanpour, 1987). Innovation means changing the status quo: changes at the firm level. The more disruptive the innovation, the bigger the changes the organization faces, and the disruptive nature of big data calls for large organizational changes if it is to be successfully adopted. The success of an organization adopting a new technology does not just depend on the opportunities it confronts, but also on the capabilities the organization can muster (Teece et al., 1997). Merely deciding to adopt big data is therefore not enough; the organization must muster capabilities that match big data adoption (Barton & Court, 2012). Given the disruptive nature of big data, current capabilities appear insufficient for successful adoption, and organizations need to adjust them to develop capabilities that match this innovative technology. Robeyns (2005, p. 54) provides the definition of capabilities used in this research: ‘capabilities are the ability or potential to do or be something-more technically’. ‘The core characteristic of the capability approach is its focus on what people are effectively able to do and to be; that is, on their capabilities’ (Robeyns, 2005, p. 94).

Big data clearly has a lot of potential, but reality shows that adopting big data is difficult. For companies to become successful in the adoption of big data, it is important first to understand the concept of big data, so that it can be assessed which capabilities are needed to successfully adopt the big data technology in the organization. The next section will discuss the concept of big data.


2.2. Big data

2.2.1. Emergence

The academic literature is not consistent on the origin of big data. Gandomi & Haider (2015) claim big data probably originated during a lunch-table conversation at Silicon Graphics Inc. in the mid-1990s. However, Dubé et al. (2014) do not agree; according to these scholars, big data emerged in the late 1980s. Chen, Tavichandar & Proctor (2016) and Gandomi & Haider (2015) agree that big data became a hype when IBM created the hashtag #bigdata and other leading technology companies also started promoting big data. Consulting Google Trends shows that public interest in big data started growing exponentially from 2011 onwards (see Appendix 1).

2.2.2 Definition of big data

Big data is often defined by the three V’s (Gandomi & Haider, 2015; Waller & Fawcett, 2013; Wamba et al., 2015; Ward & Barker, 2013), a definition later expanded with two additional V’s: value and veracity. According to Ward & Barker (2013), the three V’s were first introduced in 2001 and stand for Volume, Velocity, and Variety. See figure 1.

Volume

Every second more data crosses the internet than was stored on the entire internet twenty years ago (McAfee & Brynjolfsson, 2012). According to IBM (2016), the world generated 4.4 zettabytes of data in 2013, which will increase to 44 zettabytes in the year 2020. One terabyte can store up to 220 DVDs, or around 16 million photographs (Gandomi & Haider, 2015). To put this in perspective: one character corresponds to one byte, and one zettabyte equals 1000^7 (i.e. 10^21) bytes. Current systems cannot cope with this volume of data; organizations should find new storage solutions to increase their capacity.
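The unit arithmetic behind these volume figures can be made concrete with a short sketch (Python is used purely for illustration; the function name is invented here, and decimal SI prefixes are assumed):

```python
# Byte-unit arithmetic behind the volume figures cited above.
# Assumes decimal (SI) prefixes: 1 ZB = 1000**7 bytes = 10**21 bytes.

def zettabytes_to_bytes(zb: float) -> float:
    """Convert zettabytes to bytes using SI (decimal) prefixes."""
    return zb * 1000 ** 7

# The 4.4 ZB reportedly generated in 2013 (IBM, 2016):
print(f"{zettabytes_to_bytes(4.4):.1e} bytes")  # 4.4e+21 bytes

# Projected growth from 4.4 ZB (2013) to 44 ZB (2020):
print(44 / 4.4)  # roughly a tenfold increase
```

This makes plain why the projected growth to 2020 amounts to an order of magnitude more data than existing storage was designed for.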

Velocity

Wamba et al. (2015, p. 236) explain velocity as the increase in ‘the frequency of data generation and/or frequency of data delivery’. The spectacular increase in volume is driving a growing need for real-time analytics and evidence-based planning (Gandomi & Haider, 2015). For example, a Formula 1 car can generate up to 20 gigabytes of data from over 150 sensors placed on the car (George et al., 2014), and Bol.com offers every customer a customized home page that aligns with the products of interest the customer clicked on just seconds ago. Organizations should be able to react quickly to insights generated from the data; if not, the opportunity that presented itself could be gone within seconds. This changes the way decisions should be made.
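The difference between reacting within the session and analyzing later in batch can be hinted at with a minimal streaming sketch (the event names and the recommendation rule are hypothetical, invented for illustration):

```python
# Toy illustration of the velocity challenge: react to click events as
# they arrive, instead of analyzing them in a later batch job.
from collections import deque

def recommend(recent_clicks: deque) -> str:
    """Hypothetical rule: recommend based on the most recent click."""
    return f"products similar to {recent_clicks[-1]}"

clicks = deque(maxlen=5)                # sliding window over the last 5 events
stream = ["shoes", "shoes", "laptop"]   # stand-in for an endless click stream

for event in stream:                    # in a real system this loop never ends
    clicks.append(event)
    print(recommend(clicks))            # react within the session, not hours later
```

The point is architectural rather than algorithmic: a decision rule, however simple, must run inside the event loop to act before the opportunity is gone.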

Variety

Variety describes the increase in the different sources and formats from which data is generated (Wamba et al., 2015). It is easier to compare apples with apples to draw a conclusion than to compare apples with oranges. Ideally, data is stored in a highly structured format to maximize the information that can be extracted (Khan et al., 2014). However, current data consists of both structured and unstructured data, and the formats of structured, semi-structured, and unstructured data do not align (Gandomi & Haider, 2015). Unstructured data often lacks the structure required by software programs and machines for analysis (Gandomi & Haider, 2015). The group of data that is increasing exponentially is unstructured data, which can be explained as data evolving around human interaction, such as images, videos, Facebook data, web clicks, YouTube, Instagram, and LinkedIn. It makes up around 90% of all data (Khan et al., 2014). For organizations, this implies difficulty in


understanding the data. Not all data is usable right away, which requires a change in skill set.

Two additional V’s

Value and veracity were added to the academic definition of big data. Discord exists among scholars on which V was added first, but scholars agree that both should be part of the definition. Value is ‘the extent to which big data generates economically worthy insights and or benefits through extraction and transformation’ (Wamba et al., 2015, p. 236). Veracity, the last V, is about the unpredictability and uncertainty of the data. Unstructured data is mostly created by humans and thereby entails human judgment; the data is uncertain, unpredictable, and perhaps unreliable, but it can still produce valuable insights. In order to come to trustworthy results, some data requires analysis to establish whether it can be trusted (Wamba et al., 2015). Big data changes the way organizations are used to creating value: it changes the way of working. Organizations should be critical about where their data comes from, but this is beyond the scope of this research.

Nonacademic literature

Some online articles even discuss seven V’s. McNulty (2014) adds Variability and Visualization. Variability refers to change in the meaning of data. A good example is the analysis of language data. Take the sentence: ‘this morning I waited over an hour for my train to arrive, what a great start of my day’. In this sentence ‘great’ does not have a positive meaning, even though the dictionary definition of ‘great’ is positive. Computer systems should be smart enough to distinguish between different meanings of words and attribute the correct one. Visualization describes the challenges in presenting the data, in order for the data to


be accessible and for value to be extracted. However, since these two V’s have not been found in the academic literature, they will be disregarded in this research.
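Why variability is hard can be shown with a deliberately naive sketch (the word lists and scoring rule are invented for illustration, not a real sentiment library): a lexicon-based scorer sees the word ‘great’ and labels the sarcastic train sentence above as positive.

```python
# Naive lexicon-based sentiment scoring: counts positive minus negative
# words. The word lists below are invented for illustration.
POSITIVE = {"great", "good", "love"}
NEGATIVE = {"terrible", "bad", "awful"}

def naive_sentiment(sentence: str) -> int:
    words = sentence.lower().replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

s = ("this morning I waited over an hour for my train to arrive, "
     "what a great start of my day")
print(naive_sentiment(s))  # 1 -> labeled positive, though a human reads a complaint
```

Handling variability requires models that use context, not just word lookups, which is exactly the skill-set change the text describes.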

Overall

In summary, big data is explained using the five V’s. No concrete figures are linked to the definition, and it is impossible to identify universal benchmarks: they depend on size, industry, and location, and the numbers evolve over time (Gandomi & Haider, 2015). The definition used here to explain enterprise-wide big data systems, which is also used by Ward & Barker (2013, p. 2), is as follows:

‘big data is a term describing the storage and analysis of large and or complex data sets using a series of techniques including, but not limited to: NoSQL, MapReduce, and machine learning’.
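The MapReduce technique named in this definition can be hinted at with a minimal, single-process sketch: a word count. Real systems such as Hadoop distribute the map and reduce phases over many machines; the function names here are illustrative, not any library’s API.

```python
from collections import defaultdict

def map_phase(doc: str):
    # Map: emit a (word, 1) pair for every word in a document.
    for word in doc.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    # Reduce: sum the emitted counts per key (word).
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big value", "data driven decisions"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))
# {'big': 2, 'data': 2, 'value': 1, 'driven': 1, 'decisions': 1}
```

Because both phases operate on independent key-value pairs, they parallelize naturally, which is what makes the pattern suitable for the data volumes discussed above.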

After assessing the articles in depth, the definition of one V stands out: value. Extracting value from big data seems to be the overall goal, not a characteristic explaining the concept of big data itself. Moreover, it does not always hold: as stated before, many companies fail to translate the insights generated from big data into value. If value were a defining property, far fewer organizations could be said to use big data, since many fail. This suggests big data is more than the static definition of the 5 V’s.

2.2.3 Big data: beyond the 5 V’s

The technology alone provides no guarantee of adoption (Ward & Barker, 2013). Big data must be implemented in the organization’s day-to-day activities and processes: the data must be analyzed and the results used by decision makers to base decisions on data (Watson, 2014). Organizations need to integrate the application of big data into the organization. Big data is a way of working, and even though the technology is important, humans are perhaps even more important in making the adoption successful.


Big data has the potential to create value when it is adopted. But how is that value realized? Big data uses various hardware and software systems to store, organize, and process all the data. Data alone does not create value; the data should be understood and acted upon in order to create value. Bellinger, Castro & Mills

(2004) argue that, in order for big data to create value, the wisdom level should be achieved (see figure 2). This level is reached by passing through four consecutive stages. The first stage is data: raw symbols that have no significance beyond their existence. The second stage is information: in this stage the data is given a meaning, which can be

useful but does not have to be. The third stage is knowledge, which is achieved when someone memorizes information. For example, ask a ten-year-old what 4 x 4 is and he will know, but ask him what 650 x 239 is and the answer will be unknown. To answer the second question, the fourth stage, understanding, needs to be reached: the understanding stage provides answers to the why question, whereas the knowledge stage explains the how question. The last stage is the wisdom stage, which asks questions to which there is not always an answer; it is about reasoning. It is impossible for a computer to create wisdom (Bellinger, Castro & Mills, 2004). It is therefore important that new capabilities are created in which data and the human ability to create wisdom are combined.

Organizations should change the way they operate. The leader characteristics, organizational structure, and external characteristics can be changed in order for the adoption of big data to be successful (Rogers, 1995). The adoption of big data challenges the very


foundations of the organization. Organizations need to change and adjust in order to be successful in building the big data capability.

2.3 Adoption theories: the organizational perspective

When assessing adoption, it is crucial to understand the difference between the generation of an idea and its actual utilization by organizational members. An innovation is not in use when the decision is made to adopt big data, but only when organizational members actually use the insights generated from big data (Damanpour, 1987). People have the ability to make a difference. For example, an organization can introduce a new healthy canteen, but when employees prefer the unhealthy food that can be bought just around the corner, the change in the canteen will not make any difference: employees will go around the corner to buy their food.

Rogers (1995) describes (IT) adoption in terms of two factors: the organization’s readiness for adoption and the technology’s readiness for adoption. The focus of this research is on the organization’s readiness for adoption, for two reasons. The first reason is that big data is first and foremost a business problem rather than a technology problem (Barton & Court, 2012); the challenges involved in developing organizational capabilities to include big data capabilities are repeatedly underestimated (Ransbotham et al., 2016). Secondly, the focus is on the process of big data implementation and not on how to build the actual big data platform. Rogers (1995) and England et al. (2000) distinguish three organizational barriers to or enhancers of the adoption of IT innovations: (1) leader characteristics; (2) organizational structure: centralization, complexity, formalization, interconnectedness, organizational slack, and size; and (3) external characteristics of the organization. Figure 3 shows how the variables are related to the organizational readiness to adopt proposed by Rogers (1995). For the purpose of this thesis, the sub-category size from the organizational structure category is not taken into account, because the focus is on large organizations and only large organizations are included in the research. Size is therefore a constant factor. In the following section, the categories are discussed in detail.

2.3.1 Leader characteristics

In the adoption literature on IT innovations, leader characteristics are extensively researched. If leaders project a positive attitude towards change, this has a positive influence on the adoption of the innovation (England et al., 2000; Rogers, 1995). The involvement of senior managers and top management is important (Gates & Hemingway, 1999). When it comes to IT adoption, the majority of top management focuses on the costs rather than on the perceived benefits (LaValle et al., 2011). Big data challenges the way leaders take decisions (McAfee & Brynjolfsson, 2012). Is the adoption of this innovation different from prior IT innovations? Decision makers are hesitant towards big data, due to the high investment costs and the risks it brings (McAfee & Brynjolfsson, 2012). In the literature on big data and leader characteristics, four different items keep recurring.

First of all, Barton & Court (2012), LaValle et al. (2011), and Wamba et al. (2015) agree that leaders should be on board and trust the new big data models, even without the appropriate skill set to fully understand them. There is a lack of trust in big data models (Harris & Mehrotra, 2014). A critical note applies here: this should also be addressed from a technical point of view, which is not the angle of this research, since it is also the data scientist’s task to make the models understandable for the people who act upon them. Leaders should be supportive of the technology, because of their decision-making power and their ability to make organizational members follow (Chen et al., 2016; England et al., 2000; Kane et al., 2015; LaValle et al., 2011; Ransbotham et al., 2015). It is important for leaders to be fully supportive of the digital strategy and to convey this to their teams as well (Sivarajah et al., 2016). A distinction can be made between top management support and management support; it is especially important to have top management on board (Chen et al., 2016).

Secondly, it is important for leaders to actually act upon the insights provided by the enterprise-wide big data system (Barton & Court, 2012; McAfee & Brynjolfsson, 2012; Harris & Mehrotra, 2014; LaValle et al., 2011). It is important to act upon the models and insights to guide day-to-day operations: being supportive of the big data models is one thing, but actually working with them is something else, and priorities should not lie in a different area. For this to work, it is important that data scientists and leaders work closely together, due to the increased real-time decision making big data enables (Harris & Mehrotra, 2014).

Next, according to the literature, leaders should gain some understanding of the analytics process (Barton & Court, 2012). Leaders do not have to fully understand the technical process, but should establish some understanding, for example of what is and is not possible with the platforms, or gain knowledge of standard analytics vocabulary. Fourth, it is important for leaders to encourage data and knowledge sharing between departments (McAfee & Brynjolfsson, 2012; Sivarajah et al., 2016), because of the speed of real-time decision making required to reap maximum value.

Combining the above leads to the first working proposition: If leaders have a positive attitude towards big data, this will have a positive influence on the adoption of big data.

2.3.4 Organizational structure

Organizational structure is the way in which an organization divides its different tasks and coordinates among those tasks (Mintzberg, 1979). Research recurrently shows that technologies have the power to change society (Barley, 1986), and these changes have profound effects on organizational structure: technological change can cause a shift in organizational forms (Barley, 1986). Looking at this from a different perspective, putting the structure central: can a certain structure enhance, or form a barrier to, the adoption of big data? Spotify, a company transforming the music industry, is an example of a success story; its structure is seen as one of the most impressive examples (Kniberg & Ivarsson, 2012). Spotify changed its structure in order to keep up with the growth the organization was experiencing. This raises the question whether legacy organizations should change their structure when implementing big data, or whether the organization as it is today is equipped to reap the advantages of big data. The factors of importance are centralization, complexity, formalization, interconnectedness, and organizational slack.

Centralization

The innovation adoption literature, for many years, believed adoption works best in a centralized organization. However, Rogers (1995) found compelling evidence of the opposite: the adoption of innovation is enhanced in decentralized organizations as opposed to centralized ones. Decentralization implies a wide sharing of power and control among the organizational members and therefore increases the adoption rate (Rogers, 1995).

Big data is a specific and complex technology. Grossman & Spiegel (2014) provide pros and cons for locating the analytics function, but do not prescribe a specific location; managers should carefully balance the two options when considering which model works best for the organization. Chen et al. (2016) agree that organizations should be organized in a more decentralized way, so that employees working with big data can react quickly to the data. Decentralization leads to fewer communication channels, facilitating the communication between departments and management layers and facilitating the flow of innovative ideas (Damanpour & Gopalakrishnan, 1998). However, Lamba & Dubey (2015) point out that organizations need to centralize the analytics capabilities, given that centralization leads to quick decision making.

It seems the literature does not fully agree on which model is best. In taking a position for either centralization or decentralization, the authors seem to overlook that it can be seen as a continuum in which both are relevant. Rogers (1995) acknowledges this: elements of the centralized and decentralized model can be combined in any organization. However, on the specific subject of adopting innovations, Rogers (1995) believes that an organization should be organized decentrally in order to successfully adopt the innovation. This also applies to big data, because big data is an innovation. In a decentralized model, information is created and shared in order to establish a mutual understanding (Rogers, 1995). This is important for big data, since the business and the technology side should reach a mutual understanding in order to create value (Galbraith, 2012).

Therefore, working proposition 2a will be: If the organization is organized decentrally, this will enhance the adoption of big data.

Complexity

According to Rogers (1995), the more complex the organization, the quicker innovations are adopted, due to the relatively high skill levels and professionalism of the employees. Damanpour & Gopalakrishnan (1998) agree that a high diffusion rate of innovations is facilitated under the condition of complexity. Big data is a complex technology to understand and requires a specific skill set or understanding of the technology among an organization’s employees. However, the required skill set is so specialized that organizations often do not possess it in the current organization. It is questionable whether a skill level that does not match the big data capabilities influences the adoption rate in a positive way. Complex organizations are known for their competing objectives (Braithwaite et al., 1995), which would be expected to slow down the adoption process. Rogers (1995) pointed out that complexity may lead to difficulties in achieving consensus about the adoption, but still concludes that complexity is an enhancer. When competing objectives are at hand, the objectives need to be adjusted and aligned in order to reach an agreement and move forward. Likewise, when many different concerns need to be taken into account, this slows down the decision-making process. Given the totally different skill set that is required, and not present in traditional organizations, and the increased speed at which decisions regarding big data are taken (Galbraith, 2014), it is expected that complexity will slow down the adoption.

Therefore, working proposition 2b will be: If the organization is perceived to be complex, this will form a barrier to the adoption of big data.


Formalization

Formalization refers to prescribed processes. The more formal, routinized, or rigid an organization is, the less likely it is to adopt innovations (Rogers, 1995). On the one hand, formalization is necessary for effective and efficient communication between members of the organization. On the other hand, big data requires employees to act quickly upon insights (Galbraith, 2012). The outcome is always unknown: new processes, problems, and insights arise that could not have been foreseen at the beginning (Wamba et al., 2015). Formal and strict routines are therefore almost impossible to follow, due to the unpredictable nature of the technology. For big data it is thus even more important not to be bound to formal routines in order to create value.

Therefore, working proposition 2c will be: If organizations have formal routines in place, this will form a barrier to the adoption of big data.

Interconnectedness

According to Rogers (1995), interconnectedness is important for the successful adoption of innovations. When the organization is highly interconnected, ideas can flow at a higher pace and more easily amongst the members (Rogers, 1995). Interconnectedness is the degree to which different groups and members communicate on a frequent basis and the social system allows and/or encourages this (England et al., 2000). Interconnectedness is a frequently recurring point in the big data literature. Data sharing and communication between different teams and employees within the organization contribute to the success of big data (Chen et al., 2016; Sivarajah et al., 2016). Big data is about combining data from different departments and/or data sources; this process becomes highly difficult or almost impossible if a team is not sharing information or working with others. Data sharing and communication become possible when there is close collaboration between the data scientists, the employees who work with them, and the people who take the decisions (Chen et al., 2016; Sivarajah et al., 2016). This is especially the case in the combination of traditional organizations and big data. Traditional organizations tend to work in silos: one department is accountable for one specific task and that task only. A trade-off must be found between working efficiently and effectively and still being able to break down the silos. For big data to be adopted, it seems key that the organizational members, teams, and departments are interconnected.

Therefore, working proposition 2d will be: If organizations are highly interconnected, this will enhance the adoption of big data.

Organizational slack

Organizational slack refers to the spare capacity and resources within the organization. According to Rogers (1995), slack enhances IT adoption: it allows an organization to use resources for the adoption of an innovation that would not be available if resources were scarce. This factor is a tricky one, as little is available on this subject in the big data literature. As stated before, the outcome of big data is uncertain; it could be that spare capacity and resources are therefore important in order to further develop a successful big data platform. However, since the literature does not go into depth on this, no assumptions are made regarding this factor. Further research is required.

Combining all of the above, it is expected that organizational structure can influence big data adoption. Organizational structure can influence big data adoption in a positive manner when the organization is decentralized in nature, the organization is not complex, non-formal processes dominate, and interconnectedness is high.


2.3.5 External characteristics

The external characteristics of the organization are determined by its openness to the outside world (Rogers, 1995). Organizations are more likely to innovate, and to learn from and evaluate new ideas, when they embrace openness (England et al., 2000; Rogers, 1995). When an organization has an open system, information is exchanged across the organizational boundaries (Rogers, 1995). In market-driven organizations there is a drive towards competitive advantage over competitors; such organizations keep a close eye on the competition and what those organizations are doing. Market-driven organizations tend to be open towards the outside world in order to survive.

According to Ransbotham et al. (2016), in order for companies to become successful with big data, openness to new ideas is important. In order for an organization to gain a competitive advantage with analytics, a wide range of ideas is key, which demands openness to new ideas (Ransbotham et al., 2016). Sivarajah et al. (2016) discuss the importance of data and information sharing between business partners, which leads to close connections and harmonization with partners outside the organization. It is important for organizations to be open to changing the status quo (Kiron, Ferguson & Prentice, 2016). However, not many articles touch on this category, which makes it hard to tell from the literature review whether it influences the adoption of big data. There is some evidence in the literature that agrees with Rogers (1995); therefore, the original proposition is applied to big data.

The third working proposition is: System openness of organizations enhances the big data adoption within the organization.


2.3.6 Conclusion

This chapter has reviewed the variables key to innovation regarding the diffusion of big data. With these factors it becomes possible to make assumptions on the adoption of big data in large organizations. Figure 1 shows how the variables are related to the organizational readiness to adopt proposed by Rogers (1995) in combination with big data. The factors are summarized in table 1, which shows whether the variables are likely to speed up, be neutral for, or form a barrier to the adoption.

Table 1 | The expected impact on the adoption of big data in traditional organizations

Variable                 Expected impact   Comment
Leader characteristics   Rapid             Leader characteristics tend to play an important role in the adoption of big data
Centralization           Slow              Centralization slows down big data adoption
Complexity               Slow              Complexity slows down big data adoption
Formalization            Slow              Traditional organizations tend to have set rules and practices, which slows down the adoption of big data
Interconnectedness       Rapid             Organizations should stop thinking in silos and start working together closely to enhance big data adoption
Slack                    Normal            No conclusions can be drawn
Openness                 Rapid             Market-driven organizations tend to be open to the environment, which has a positive effect on big data adoption

As stated earlier, big data has profound implications for the entire organization. The most influential adoption barriers are managerial and cultural (LaValle et al., 2011). Review of the academic literature reveals that the Rogers (1995) model is too narrow to be applied to big data adoption in traditional organizations. The next section goes into more detail on the expansion of the organizational perspective.


2.3.7 Expanding the organizational perspective

Comparing the big data literature with the IT adoption model proposed by Rogers (1995), it becomes clear that successful adoption of big data requires more than the three categories mentioned by Rogers (1995). In total, 30 articles were examined to establish the success factors when adopting big data. This review reveals that, within the focus of this thesis, three more variables are important: hiring the right people, a data-driven strategy, and a data-driven culture. Table 2 shows the outcome of the examination of the 30 articles.

Table 2 | Outcome of literature study on expanding the organizational perspective of Rogers (1995)

Hiring the right people: Ransbotham et al. (2015); Lamba & Dubey (2015); Chen et al. (2016); McAfee & Brynjolfsson (2012); Davenport (2006); Grossman & Spiegel (2014); Galbraith (2014); Frankova (2016); Watson (2014); Wamba et al. (2015); Khan et al. (2015); Tambe (2014); Waller & Fawcett (2013); Davenport & Patil (2012)

Data-driven strategy: Ransbotham et al. (2015); Lamba & Dubey (2015); Davenport (2006); Galbraith (2012); Galbraith (2014); Watson (2014); Cao et al. (2015); Vallejo et al. (2016); Khan et al. (2014); Khan et al. (2015)

Data-driven culture: Ransbotham et al. (2015); Lamba & Dubey (2015); LaValle et al. (2011); Barton & Court (2012); McAfee & Brynjolfsson (2012); Kiron et al. (2013); Davenport (2006); Grossman & Spiegel (2014); Harris & Mehrotra (2014); Watson (2014); Vallejo et al. (2016); Khan et al. (2015); Kuilen & Jacques (2015)

Hiring the right people

In order to be successful adopters of big data, organizations need the right people. A change in the way employees think, work, and are treated is necessary (Davenport, 2006), as the current skills of employees do not match the skills required for big data. This sub-category will be briefly touched upon. Big data differs from analytics in general on two points: first, the ability to analyze and display enormous amounts of data; second, the ability to provide extra context by combining data from various business units or sources (Watson, 2014). For big data projects to succeed there are three roles, which can be seen as a continuum of big data users: end users, analysts, and data scientists (Watson, 2014). Data scientists need to be hired because they understand the complex technology and have the knowledge to work with it. Data scientists are responsible for gathering new insights from big data: they understand how the data is stored, for example on a Hadoop cluster, write code in Python, access the data, analyze it, and write reports in which findings are communicated (Watson, 2014). Big data requires quantitative skills (Davenport, 2006); according to Davenport & Patil (2012, p. 76), ‘data scientists today are akin to the Wall Street ‘quants’ of the 1980s and 1990s’. Analysts have an analytics understanding, but not as deep as the data scientist; they understand how the analytics applies to their business units (Watson, 2014), and it is their job to ask the correct questions, thanks to their business understanding (Franková, Drahošová, & Balco, 2016). It is nearly impossible to retrain an end user to become a data scientist; new people should be hired who possess the requested skills. Organizations should not forget that all the roles are important. The data scientist should not be overrated, because if the skills are missing to apply the data analysis to solve an actual business problem, no value is created. The same holds for the end users: if they are not willing to work with the data or the insights, all the previous steps make no sense. Harris & Mehrotra (2014) point out the importance of understanding one another: analysts or data scientists should be able to speak the language of business people and vice versa.
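The workflow Watson (2014) describes — accessing stored data, analyzing it, and communicating findings — can be sketched in a few lines of Python. This is a hedged, minimal illustration: the records, columns, and regions are invented, and in a real big data setting the data scientist would read from a distributed store such as a Hadoop cluster (e.g. via PySpark) rather than from an in-memory CSV string.

```python
import csv
import io
import statistics

# Hypothetical sample of transaction records; in practice this would be
# pulled from the organization's data platform, not embedded in the script.
raw = """customer,region,amount
1,North,120.50
2,South,80.00
3,North,230.25
4,East,45.10
5,South,310.80
"""

# Access the data.
rows = list(csv.DictReader(io.StringIO(raw)))

# Analyze it: average transaction amount per region.
by_region = {}
for row in rows:
    by_region.setdefault(row["region"], []).append(float(row["amount"]))
averages = {region: statistics.mean(amounts) for region, amounts in by_region.items()}

# Communicate the findings in a short, readable report for the business side.
for region, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{region}: average transaction {avg:.2f}")
```

The last step — turning the numbers into a plain-language report — is where the analyst's and end user's business understanding takes over from the data scientist's technical work.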

Hiring the right people is about having the right characteristics for big data. Leader characteristics, together with hiring the right people, will be sub-categories of the main category people. The above leads to the following working proposition: If leaders are willing to hire people with big data skills, this will have a positive influence on big data adoption.

Organizational culture: a data-driven culture

‘Technologies are better viewed as occasions that trigger social dynamics’ (Barley, 1986, p. 81). Culture plays an important role in the adoption of big data (as pointed out in table 2). Hofstede explains how culture influences the workplace by looking at differences in culture between countries (Hofstede, 1984). However, can it be assumed that an organization has one culture, or are there more? With the emergence of big data in organizations, it is almost as if there is a ‘divide’ between the business and the technology side. Even though both eventually have the same goal, the road towards that goal is completely different. For employees, it is easier to justify a significant change effort when the benefits of big data are understood (LaValle et al., 2011). Culture is important because it determines how people work. The people eventually decide whether the information produced by the technical systems will be used in their day-to-day practices (Kuilen & Jacques, 2015), and if the culture does not have data at its core, why would people adopt big data?

When establishing a data-driven culture, three different principles need to be taken into account: (1) break out of your box; (2) ride the ripple effects; (3) align all eyes on the target (Lamba & Dubey, 2015). Many organizations rely on a ‘HiPPO’ culture: the highest-paid person’s opinion. At all levels of the organization, employees rely on experience and intuition instead of trusting the data (LaValle et al., 2011; McAfee & Brynjolfsson, 2012; Watson, 2014). In order to reap the full benefits from big data, a change in the HiPPO culture is necessary (Ransbotham et al., 2015; McAfee & Brynjolfsson, 2012; Watson, 2014). Many executives support their decisions with data, but decisions are mostly based on experience and gut feeling. A shift in culture is needed, allowing organizational members to put the data first when making a decision, starting with managers, since they have the power and authority to make sure the entire organization will follow (McAfee & Brynjolfsson, 2012). In this way, data-driven decision making will become the cultural norm in organizations (Watson, 2014). Research shows that in organizations that are successful in creating value from big data, a data-driven culture is part of the organizational culture (Kiron et al., 2016); it should be at the core of the organization. In establishing a shift in culture, executives can start with two simple techniques. First, asking the right questions becomes important (McAfee & Brynjolfsson, 2012). Questions could be ‘what do the data say?’ and ‘how sure are we about the accuracy of the results?’. It also sends a powerful message if executives allow the data to overrule choices made earlier; executives will then be valued for the questions they ask and not for decisions made on experience and gut feeling (McAfee & Brynjolfsson, 2012). Second, when changes occur there are always people unwilling to change, and these people will have to be replaced (Watson, 2014). A different study, by The Economist Intelligence Unit (2012), reveals several strategies for changing the culture into one that fits the requirements of big data: top-down guidance from the CEO or senior managers, increased availability of training for employees at all functions and levels, and encouraging knowledge sharing and communicating the benefits it brings.

When establishing a data-driven culture, encouraging collaboration is important: 20% of respondents point out that the culture in place forms a barrier to the adoption of big data because it does not encourage knowledge sharing (LaValle et al., 2011). In order for an enterprise-wide big data platform to function, knowledge should be shared between departments through close collaboration. Big data requires real-time decision making, where people work closely together; sometimes decisions have to be made at high speed (Davenport, 2006). If the needed information is not provided, this slows down the process or causes it to fail.

Therefore, the following working proposition arises: An organizational culture that is data-driven, does not rely on the HiPPO principle, and encourages close collaboration will enhance big data adoption.

Strategy: a data-driven strategy

For big data initiatives to succeed, a clear strategic business direction is needed; otherwise the efforts will not move forward (LaValle et al., 2011). To get a big data platform up and running, the organization should be prepared to invest heavily, and organizations are investing heavily because they believe big data can create value (Corte-Real et al., 2016; LaValle et al., 2011). When a big investment is made, it would be assumed that the organization has a plan or direction on what to achieve. Shockingly, 7 out of 8 respondents do not have a long-term strategy for the big data platform, and 1 out of 4 respondents do not even have a plan on what to accomplish (Ransbotham et al., 2016).

Choices regarding innovations should be business driven as opposed to technology driven (Watson, 2014). Organizations do not want to fall behind and therefore engage in this fancy new technology. A reason why so many projects fail is that organizations forget the goal they want to achieve with big data (Ransbotham et al., 2016). Is it necessary for the organization to have an enterprise-wide big data system, or can the question the organization wants to answer be solved with a different technology? To prevent failure, organizations should be extremely clear on what they want to achieve with the big data platform. The absence of goals results in a lack of understanding of how to use big data to improve the business (LaValle et al., 2011), and as a result, the adoption will fail.

The technology of big data brings something new to the organization; it is something that has not been done before. For the data to create value, it must be presented in a format that everyone understands, the outcomes must be integrated, and the data must be stored in data warehouses and made accessible to all employees, which cannot be done without a data-driven strategy (Davenport, 2006). When companies develop a data-driven strategy, it is important that it is aligned with the business strategy already in place (Lamba & Dubey, 2015); it should support the business strategy. In big data-driven organizations, the alignment between the business and data strategy is so close that separation is impossible (Watson, 2014). Big data breaks down the silo thinking traditional organizations cope with; it requires close collaboration in order to create value. A perfect situation would be one where the data strategy makes it possible for the business strategy to succeed (Watson, 2014). Alignment plays an important role: Akter et al. (2016) show that the alignment of a data-driven strategy and a business strategy has a positive moderating effect on value creation.

Combining the above leads to the final working proposition: An organizational strategy that is data-driven, has clear goals, and is aligned with the existing strategy can enhance big data adoption.

2.4 Conceptual framework

All of the above comes together in the conceptual framework. Traditional organizations create value in order to survive. Big data allows organizations to make decisions based on data: data-driven decision making. This can create value for organizations. The adoption of big data is needed for it to actually be used and thereby create value, so organizations adopt big data with a view to creating value. The main challenge in big data adoption is the organizational readiness to adopt. Five main capabilities are examined to see how the adoption of big data can be influenced. The conceptual model of this thesis is shown in figure 4, including the new numbering of the working propositions.

Figure 4 | Conceptual framework


3. Methodology

In this chapter the research methodology is presented. The research strategy, research design, data collection, data analysis, and the quality of the research are discussed in detail.

3.1 Research strategy

In order to answer the research question, a research strategy is established. This warrants a critical look at the overall research, thereby ensuring its quality.

The main reason for this research is to gain in-depth understanding of the variables influencing the adoption of big data in traditional organizations. The overarching goal is to understand how these variables influence the adoption and could perhaps accelerate the adoption of big data across various industries. According to Miles, Huberman & Saldana (2003), qualitative research has the ability to expose complexity, and big data is complex to understand. Next, it is important to gain an understanding of the contextual conditions, which also fits with qualitative research (Miles, Huberman & Saldana, 2003). Therefore, a qualitative research method is most suitable.

3.2 Research design

According to Yin (2003), a good way to conduct explanatory qualitative research is by doing a case study. Multiple cases are examined in this research; multiple cases allow for the comparison of different variables (Yin, 2003), which is key to this research, and the use of different cases makes it possible to rule out other influential factors (Eisenhardt, 1989). The selected cases are from various industries, to establish an industry-wide view.

Two different sorts of interviews are conducted: (1) exploratory interviews; (2) case interviews. The exploratory interviews are conducted at Accenture. I started an internship at the Amsterdam office of Accenture Digital in September 2016. Accenture is a global professional services company and provides services in strategy, consulting, digital, technology, and operations. The company serves clients in more than 120 countries, works across more than 40 industries, and has around 400,000 employees worldwide. I applied for the internship because Accenture is known as an IT consultancy, which fits best with the purpose of this thesis and ensures the industry-wide view. The case interviews are conducted at five large Dutch organizations.

The participants for the exploratory interviews and the case interviews are selected because of their extensive knowledge of and experience with big data. All participants should have at least three years of experience with Hadoop, an open-source software framework that often forms the base of a big data platform, and preferably also have management experience. In order for the findings to be transferable, there is a mix between 'hardcore' technical participants and participants who also have an understanding of the business. Participants are approached individually by email and LinkedIn with a description of the research and a request to participate in an interview of 60 minutes; in practice the duration varies with the availability of the participants. The participants are approached in six different ways, listed below. Table 3 shows the division of the selected participants.

A – Accenture employees
B – Connections of Accenture employees
C – Via the website www.bigdata-alliance.org
D – Approached via LinkedIn, no prior connection
E – Third-degree connection on LinkedIn
F – Personal network

Both interview types are semi-structured. This allows for additional questions in response to the participant's answers in order to gain a more thorough understanding (Saunders & Lewis, 2014). Semi-structured interviews make it possible to discover what is happening and to gain new insights or identify general patterns by describing what is happening (Saunders & Lewis, 2014), which fits the research strategy. The semi-structured interviews are conducted using a list of predefined questions, the interview guidelines. Nevertheless, additional questions are asked when deemed necessary, as mentioned in the research design. Two separate guidelines are developed; they are presented in appendix 2. The interview guideline of the case interviews is supported by an additional document asking more specific questions about the five categories. Participants are asked to rank the five main categories from 1 (least important) to 5 (most important). Next, two questions are asked for each sub-category. The first question concerns the degree to which the aspect is implemented, with responses ranging from 1 (not at all) to 5 (fully present). The second question concerns the importance of the aspect when adopting big data in general, with responses ranging from 1 (not important at all) to 5 (extremely important). An example is shown in figure 4; the full document is found in appendix 3.

Table 3 – Division of the selected participants

#    Interview      Approach
1    Interview 1    A
2    Interview 2    A
3    Interview 3    A
4    Interview 4    A
5    Interview 5    A
6    Interview 6    A
7    Interview A    F
8    Interview B    E
9    Interview C    F
10   Interview D    B
11   Interview E    B
12   Interview F    C
13   Interview G    D
14   Interview H    B

Figure 4 – Additional document case interviews
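As an illustrative sketch only (not part of the thesis instrument), the 1-to-5 ranking responses collected with this additional document could be aggregated per main category as follows. The category labels and response values below are hypothetical examples, not the study's actual data.

```python
# Hypothetical aggregation of per-participant category rankings
# (1 = least important, 5 = most important).
from statistics import mean

# Each key is one of the five main categories; each list holds the
# rankings given by three hypothetical participants.
rankings = {
    "people": [5, 5, 4],
    "culture": [4, 3, 5],
    "strategy": [3, 4, 3],
    "external characteristics": [2, 2, 2],
    "organizational structure": [1, 1, 1],
}

# Mean ranking per category, sorted from most to least important.
mean_ranks = {category: mean(scores) for category, scores in rankings.items()}
for category, score in sorted(mean_ranks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {score:.2f}")
```

With these example responses, the sorted output would place "people" first and "organizational structure" last; the same computation applies to the implementation and importance questions for the sub-categories.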

3.3 Data collection

In total, 16 interviews are conducted, of which 14 are used for this research. Two exploratory interviews are withdrawn because they had a technological rather than an organizational focus and the projects described during those interviews did not deploy an enterprise-wide big data system. This leaves six exploratory interviews and eight case interviews, the latter conducted at five different companies. As described above, the participants are selected for their extensive knowledge of and experience with big data, with a mix between 'hardcore' technical participants (data scientists) and participants with more of a business understanding (analysts) to make the findings transferable.
