
Distributed Ledger Technology (DLT) is a hot topic; however, little research has been done to prove the potential that many claim it has. This thesis aims to reduce this knowledge gap by exploring the potential of the technology. This is done by reviewing its potential to increase the efficiency of a market, from the perspective of a Know Your Customer process. Although evidence has been uncovered that implies that DLT can indeed increase the efficiency of a market, the immutability of a DLT makes it unsuitable to process personal data, due to the General Data Protection Regulation (GDPR), which states that it must be possible to erase or rectify personal data.

Distributed Ledger Technology in Know Your Customer processes

Can distributed ledgers process personal information?

Daaf Egberts 10185607

1/26/2018

Universiteit van Amsterdam

MSc in Business Administration – Entrepreneurship and Innovation track

Dr. A.S. Alexiev


Statement of originality

This document is written by Daaf Egberts who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Contents

1. Introduction ... 2

2. Literature review ... 4

2.1 Distributed Ledger Technology (DLT) ... 4

2.2 Economic theory ... 9

3. Methodology ... 16

3.1 Nascent and Grounded Theory ... 16

3.2 Case study and Qualitative data ... 17

3.3 Data collection and Participants ... 18

3.4 Selection of participants ... 18

3.5 Case design and Interviews ... 19

3.6 The interviews ... 20

3.7 Data Analysis ... 21

3.8 Data description ... 22

4. DLT and KYC case study ... 23

4.1 Current state of Know Your Customer processes: ... 23

4.2 Transaction costs of KYC ... 29

4.3 DLT solutions ... 38

4.4 GDPR ... 47

5. Summary, discussion and conclusion ... 50

Appendix ... 52

Appendix 1: Coding scheme ... 52

Appendix 2: Overview of interviews ... 53

Appendix 3: Overview of secondary sources ... 54


1. Introduction

A specific group of technologies has been receiving a lot of attention lately (Tapscott and Tapscott, 2016). Although this group of technologies is often called "Blockchain technology", this name may cause some confusion, as not all of its varieties use the famous structure of blocks. There is, however, one element that connects the different technologies that are often called Blockchain technology: a decentralized ledger that is distributed peer-to-peer (P2P) over all the members of the network. Therefore, a more accurate denomination of this group is Distributed Ledger Technologies (DLT). To avoid confusion, this is the name that is used throughout this thesis. The term Blockchain will only be used to refer to a specific form of DLT.

Although little is known about DLT so far, the technology is credited with the ability to eliminate certain transaction costs and change the way firms within different value chains interact with each other. However, due to its novelty, there is a substantial lack of evidence to back those claims, and expectations may exceed reality (Tapscott and Tapscott, 2016).

As the technology matures, more and more parties are becoming interested in the possibilities it offers. During the first 9 months of 2017, investments in DLT already exceeded $4.5 billion (Forbes, Sep 2017). Besides the usual suspects from the technology industry like IBM and Microsoft, big players from the financial industry are also actively experimenting with the technology that might disrupt their own business models. For example, the DLT and fintech consortium R3 has raised over $107 million in funding from big investors like SBI Group, Merrill Lynch, HSBC, ING and many other large international banks (Corda, 2017).

However, up to this point, there is little evidence of the effect that DLT has on these industries. Therefore this research is aimed at exploring the potential of DLT by researching the possibilities it offers with regard to the reduction of market failure as described by Transaction Cost Theory (TCT) (Coase, 1937; Arrow, 1969; Williamson, 1971). The research question therefore is:

How can Distributed Ledger Technologies be used to reduce market failure?

By reviewing the effect DLT has on market failure, a framework is created from which the usefulness of the innovation can be discussed. According to TCT, market failure is the result of an inefficient allocation of resources, which in turn is a result of the existence of transaction costs (Coase, 1937). By analyzing the possibilities that DLT offers for the reduction of transaction costs, the effectiveness of the technology can be estimated with regard to the allocation of resources. If the technology is found to increase the efficiency with which resources are allocated, the technology can be said to reduce market failure.

To narrow the scope of the thesis, the potential DLT has with regard to reducing market failure is explored in one specific case: "Know Your Customer" processes. Know Your Customer processes, or KYC in short, constitute the due diligence process in which a financial institution is required to engage before it is allowed to go into business with a customer. Due to extensive regulations, KYC processes are known to be slow and often impose large financial costs on financial institutions and their customers. The KYC process is characterized by the need to collect, process and verify large amounts of data. This makes it a potential use-case for a DLT solution; amongst others, global consultancy firm Deloitte and the Singaporean government are exploring this possibility (Deloitte, 2017).


In order to explore the possibilities of DLT with regard to increasing the efficiency of KYC processes, a case study is conducted. In this case study, the experiences and knowledge of various experts are combined in order to generate insights into the potential of DLT, and thereby add to the scarce body of knowledge on the technology. The focus of the case will be an analysis of the effect that DLT solutions have on the various transaction costs.

Because there is little to no previous research available with regard to DLT, this research is of an exploratory nature. Therefore the choice was made to use a case study as the research method, which is an appropriate research strategy according to Eisenhardt (1989) when "little is known about a phenomenon and existing theories seem inadequate or insufficient" (Eisenhardt, 1989). However, one limitation of this research is the absence of quantitative data, which prevents this research from quantifying the effect that DLT has on the efficiency of the KYC process.

The results from this research are somewhat unexpected, as they reveal that DLT solutions are not well suited to process confidential information. Therefore, DLT solutions are not found to improve KYC processes, as these involve a lot of personal and confidential information. On top of that, it has been found that the new data protection regulation from the European Commission, the General Data Protection Regulation, even prohibits the use of DLT for any kind of data that can be related to a person (GDPR, 2016). Using a DLT solution for any process that involves personal data would therefore violate the GDPR and result in noncompliance. This does not mean that DLT is not able to improve processes that involve data other than personal data. Several use-cases, which are situations in which a technology can potentially be implemented, have been identified in the case study that reveal the possibilities DLT offers with regard to integrating networks and streamlining communications.

The structure of this thesis is as follows. The thesis starts with the literature review. The first part of the literature review contains an overview of the different forms of DLT and a short introduction to the technology and the process behind it. This is followed by an introduction of the theory in which the relationship between market failure, efficiency and transaction costs is explained from the perspective of Transaction Cost Theory. This forms the theoretical framework from which the research question is explored. Then the methodology is explained and justified, followed by a short description of the case design and the collected data. After this part, the analysis of the collected data is presented. First, the identified transaction costs within the KYC process are discussed, followed by a discussion of the identified DLT solutions and their effect on the previously identified transaction costs. Finally, in the last chapter, the results from the case study and the insights with regard to the research question are discussed.


2. Literature review

The following chapter introduces the relevant literature that forms the theoretical framework for the research. It starts with the introduction and explanation of Distributed Ledger Technology. Then the three main technologies (Blockchain systems, permissioned ledgers, and associated technologies) are explained, after which the economic literature is discussed. This starts with an explanation of the relationship between innovation, efficiency and market failure, followed by a discussion of Transaction Cost Theory and the different categories of transaction costs.

2.1 Distributed Ledger Technology (DLT)

This section contains a discussion of DLT, starting with an introduction of the first Blockchain Technology, followed by permissioned ledgers, and associated technologies. Without going into too many technical details, the key elements that enable distributed ledgers are also explained.

Lately, a lot of attention has been given to a technology often called Blockchain. A variety of sources, like industry experts and popular media, have been discussing the topic, and a substantial group believes it has the potential to change or even disrupt the current way of doing business (Tapscott and Tapscott, 2016). Blockchain technology is said to have the ability to dramatically lower transaction costs for many forms of online value exchange. The first point of impact is expected to be within the financial services industry, because it offers a unique opportunity to transfer assets without the need for an intermediary. However, due to its abilities regarding data storage, management and transfer, it is believed to impact all kinds of intermediary services. For instance, it offers great opportunities for online identity management, by giving individuals the power to securely store and encrypt personal data and granting them the power to allow or deny whoever they choose access to that data (Mainelli and Milne, 2016; Tapscott and Tapscott, 2016; Kosba et al., 2016; Guo and Liang, 2016; Mainelli and Smith, 2015).

Before continuing to discuss the possibilities this innovation might have to offer, a terminology issue needs to be resolved first. Within popular media and in scientific literature, including the sources cited above, the terms Distributed Ledger Technology (DLT) and Blockchain technology are often used interchangeably to designate a variety of technologies which are all related in some way. This often leads to unnecessary confusion.

To address this problem, it is helpful to understand that Distributed Ledger Technology is, more accurately, a collective name for a group of three different technologies: Blockchain technology or public ledgers, like those used by Bitcoin, Litecoin, Hyperledger and Ethereum; permissioned ledger or private ledger systems like Corda, Fabric, and Ripple; and associated technologies like Smart-Contracts, programmable money, peer-to-peer networks, and other decentralized applications. This division into three types is more accurate because permissioned ledger systems do not necessarily include blocks, which is the reason why systems using a chain of blocks, as used by Bitcoin, were called Blockchains in the first place (Nakamoto, 2008). Moreover, many of the associated technologies like peer-to-peer networks and Smart-Contracts already existed before the introduction of Blockchain systems, and their survival does not necessarily depend on Blockchain technology. In the following paragraphs, these different technologies and their different aspects will be discussed.


Blockchain Technology

The first and most famous technology is Blockchain technology. In its most basic form, Blockchain technology is a combination of two primary components: a decentralized network that facilitates and verifies transactions within the network, and a linear and immutable ledger maintained by the members of the network (IBM, 2017). Within the decentralized network, all the members, or nodes, are connected peer-to-peer. A peer-to-peer connection means that all members have a direct connection with each other, which allows them to interact and exchange data directly. This means that there is no need for intermediaries to facilitate transactions within a Blockchain ecosystem.

The members of the network use a process called the consensus protocol to validate transactions between network members, which makes controlling and facilitating transactions a community task. To incentivize participation in the consensus protocol, monetary or social incentives are often given to the participating members (Coindesk, 2017). When a transaction is verified and validated by the network, a copy containing all the necessary transaction data is distributed to all the members of the network. This allows them to update their copy of the database and create a distributed ledger of all the transactions (Mainelli and Schmindt, 2015).

To create an immutable database or ledger, cryptography in the form of hashing algorithms is used to form an unbreakable link between collections of verified transactions, which are recorded in "blocks" of data. Each block is connected to the previous block, thereby creating a chain of verified records: the Blockchain. The mathematical structure of hashing, in which each block is linked to its previous block, makes it nearly impossible to alter any data that is recorded on the Blockchain, creating a highly immutable ledger of previous transactions. Another form of cryptography, called public-key encryption, is used to secure the data by encrypting and decrypting it. This form of encryption only allows the parties involved to alter any transaction data (Mainelli and Milne, 2016; Lin and Liao, 2017; Wright and de Filippi, 2015).
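To illustrate the second cryptographic building block, the minimal Python sketch below uses the third-party `cryptography` package to sign a piece of transaction data with a private key and verify it with the matching public key. This is a simplified, hypothetical illustration of how key pairs restrict who can authorize changes to "their" records; it is not tied to any specific DLT platform, and real systems (Bitcoin, for example, which relies on ECDSA signatures) add many more safeguards.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Each network member holds a private key; the matching public key is shared.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

transaction = b"A transfers 5 units to B"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can check that the record was authorized by the
# key holder and has not been altered; verification fails otherwise.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("record was tampered with or not authorized")
```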

When looking at the process of adding a new block to the Blockchain and distributing the information throughout the network it becomes apparent that Blockchain is not a single technique or technology but a combination of cryptography, mathematics, computer algorithms, internet and economic models that run on a peer-to-peer structure.

This combination of different techniques that form Blockchain technology was first introduced by the unknown author Satoshi Nakamoto (2008), who created the first Blockchain-enabled application in 2008, called Bitcoin. Bitcoin is a fully digital currency, also called a cryptocurrency, that runs on a decentralized Blockchain network and is the first large-scale Blockchain application. Due to its design, the Bitcoin Blockchain creates a highly authoritative and immutable database, where members can directly interact with each other, each member has the same view of the same data, and no single actor is able to (falsely) change the records without the consent of the members forming the community (Mainelli and Milne, 2016). Due to its mathematical structure, the Bitcoin Blockchain creates an environment in which transactions can be made without the possibility of opportunistic behaviour from the counterparty; therefore, there is no need to trust or distrust the other members, which creates a "trustless network".


The technology

Figure 1 A Blockchain

To understand the basics of a Blockchain it is important to understand how a block is structured. Figure 1 gives a visual representation of a basic Blockchain. A simple block often contains five different elements: the hash of the current block (block 2), the hash of the previous block (block 1), a timestamp, the main data, and other information. The main data contains all the information of the transaction, such as who the sender and receiver are and what is being exchanged. The hash of the current block, Hash (2), is a string of numbers that is generated using a hash function and can be seen as a unique mapping or summary of the data within the file, or in this case the block (Bellare et al., 1996). The timestamp shows when the block was created.
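As a concrete, simplified illustration of this structure (not drawn from any particular Blockchain implementation), the Python sketch below assembles the five elements of a block and derives the block's own hash from its contents with SHA-256. The helper names `make_block` and `compute_hash` are chosen for illustration only.

```python
import hashlib
import json
import time

def compute_hash(block: dict) -> str:
    """Derive a fixed-length 'summary' of the block's contents with SHA-256."""
    # The hash field itself is excluded: it is the output, not part of the input.
    payload = {k: v for k, v in block.items() if k != "hash"}
    serialized = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()

def make_block(previous_hash: str, main_data: dict, other_info: str = "") -> dict:
    """Assemble the five elements described above: current hash, previous hash,
    timestamp, main data, and other information."""
    block = {
        "previous_hash": previous_hash,
        "timestamp": time.time(),
        "main_data": main_data,   # e.g. sender, receiver, what is being exchanged
        "other_info": other_info,
    }
    block["hash"] = compute_hash(block)
    return block

# A toy two-block chain: block 2 embeds the hash of block 1.
block1 = make_block(previous_hash="0" * 64, main_data={"from": "A", "to": "B", "amount": 5})
block2 = make_block(previous_hash=block1["hash"], main_data={"from": "B", "to": "C", "amount": 2})
```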

When two actors decide to engage in an exchange of any kind of data, a record of the information about the transaction is created, containing all the information on the trade and its conditions. The record is then sent into the network to be validated by the other members, often called miners. During the validation process, miners check whether both parties are able to fulfil the conditions of the exchange and the conditions of the network. When the transaction is validated, the miner enters all the information about the exchange into a block and includes a timestamp. This new block is linked to the chain of previous records by incorporating the hash of the previous block into the new block and generating a unique hash for this block. When this is done, the whole network receives a copy of the transaction and validates the work done by the miner. When the network reaches consensus, the transaction is approved and executed. Every member then accepts the new block and has knowledge of the new and longer chain; this way the whole network has access to the same and most recent information (Lin and Liao, 2017). The protocols that are used to reach consensus come in many different forms, like Proof-of-Work, Proof-of-Stake, Byzantine Agreement, and more, but their specifics will be disregarded in this paper (Coindesk, 2017). The steps that go into creating a new block mean that Blockchain technology creates a highly authoritative and immutable database, where each member has the same view of the same data and the records are linked together by hard-to-break and complex mathematics. In such a database, no single actor is able to (falsely) change the records without the consent of the members forming the network (Mainelli and Milne, 2016). This makes Blockchain perfectly suitable to solve the "double-spending" problem, which occurs when an actor acts opportunistically by making use of the information asymmetry in a network and engaging in multiple transactions with the same funds.
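Continuing the sketch above (and reusing the hypothetical `compute_hash` and `make_block` helpers defined there), the following fragment illustrates why such a chain is considered immutable: any member can recompute the hashes and check the links, so altering the data in an earlier block is immediately detected.

```python
def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute every hash and check that each block references its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != compute_hash(block):
            return False                      # the block's contents were altered
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False                      # the link to the previous block is broken
    return True

ledger = [block1, block2]
assert chain_is_valid(ledger)

block1["main_data"]["amount"] = 500   # an actor tries to rewrite an old record
assert not chain_is_valid(ledger)     # every honest member's check now fails
```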


Permissioned Ledger

Blockchain developers have been trying to create new structures that are better, faster, more anonymous, or provide a greater degree of autonomy. In some cases this forced them to change some key characteristics, which led to the development of a new form of DLT: Permissioned Ledger Systems. Although this form deviates on some key aspects from the original Blockchain technology, there is one important common denominator: the distributed ledger. As with Blockchain technology, permissioned ledger systems make use of a distributed ledger in which the members agree upon changes via a consensus mechanism, and all the records are timestamped, cryptographically signed, immutable, and replicated across all members (IBM, May 2017). By contrast, features like complete decentralization of control, anonymity, transparency and autonomy, which are often found and highly valued in the early Blockchain systems, are not necessarily part of this form of DLT.

These features aren’t always necessary or desired in a DLT solution, and removing or redesigning some of them can make DLT more suitable to tackle different problems in which, for instance, a form of central authority or confidentiality is desired. When DLT is used to store and exchange medical data between hospitals, for instance, transparency might not be a much-desired feature with regard to privacy, and neither will anonymity be very useful. But the high immutability and ease of record keeping and sharing between different medical institutions that DLT offers can be a very useful improvement of the current system of medical record-keeping (Harvard Business Review, March 2017).

Insights like these have created the division within DLT solutions between public ledgers, or Blockchains, and private ledgers, or permissioned ledgers. Even though permissioned ledgers often store their records in a structure of blocks as well, this is not necessarily true for all of them. Corda, for instance, a distributed ledger platform that records, manages and automates legal agreements between business partners, only shares records with the parties involved, does not store anything on a chain of records, and grants access only by permission. However, it does make use of a consensus protocol, albeit on an individual deal level, as well as peer-to-peer interaction and Smart-Contracts. This qualifies it as a distributed and permissioned ledger, but not as a Blockchain (Corda, whitepaper).

Associated Technologies

As becomes apparent, DLT is not just one technology with a single application, but rather a collection of techniques and architectures that allow decentralized data storage and facilitate interaction between members of a network. A variety of different applications or technologies can be built on top of this decentralized network, like programmable money, digital signatures, secured messaging systems, and many more. Without going into depth on all the kinds of applications that can utilize distributed ledgers, there is one sort of application in particular that is enabled by and inseparably connected with DLT: Smart-Contracts.

Smart-Contracts are programmable contracts that run on a distributed ledger and automatically execute when pre-defined conditions are met (CapGemini, 2016). These kinds of contracts are definitely not new. These digital, self-enforcing contracts were first introduced by Nick Szabo in 1997 (Szabo, 1997), approximately 10 years before the introduction of the first Blockchain. However, Smart-Contracts only work if there is an economic system and infrastructure on which the contract can autonomously execute according to the conditions in the contract, without the possibility of its actions being overruled or reversed. This only became possible when the peer-to-peer structure of Blockchain was introduced, in which there is no central power or possibility of reversal. The immutability and irreversibility of records on the Blockchain empower Smart-Contracts and give them a high degree of security (Luu, 2016).

In short, a smart contract is an application which executes a certain protocol when predefined conditions are met. When two or more parties have agreed to the terms of the contract, and the contract is created, it is given the authority to autonomously act on the Blockchain within the specified boundaries of the contract. This means that it can also be deployed as a gatekeeper to the ledger, by only allowing new entries or records to be recorded in the ledger if they meet the rules of the contract. Other applications in which the use of Smart-Contracts can be useful are, for instance, financial instruments, decentralized gambling, outsourced computation, governance applications, employee contracts and many more (Luu, 2016).

It is important to include Smart-Contracts when discussing disintermediation as a result of Blockchain, because Smart-Contracts in a way unlock and utilize the greater potential of distributed ledgers. As opposed to what is regularly said, DLT itself does not fully eliminate the need for intermediaries. Distributed ledgers allow for decentralized storage of data and build peer-to-peer communication channels, which might eliminate intermediaries that were previously offering those services. But the consensus protocol of a public Blockchain is often mostly concerned with solving the double-spending problem, whereas permissioned ledgers are sometimes primarily concerned with verifying read and write permissions. Both forms of DLT often do not bother with any additional conditions that are specific to an exchange, such as whether both parties upheld their end of the deal, or ethical dilemmas such as whether the money comes from criminal activities. Therefore a decentralized ledger by itself does not eliminate the need for intermediaries with the power and authority to protect the secondary conditions of a transaction and protect both parties from wrongdoing by the other (Szabo, 1997). This is where Smart-Contracts come in. As the use of Smart-Contracts allows two parties to engage in transactions that previously needed the presence of a third party, Smart-Contracts enable disintermediation. An example is the creation of a financial instrument, like a put or call option. Two parties can create a contract in which they specify a sell and/or buy date and price as well as a moment of execution. The contract then has the authority to hold and transfer the funds if the conditions are met. So instead of needing a financial intermediary, parties can now engage in riskier and more complex transactions, because they do not need a third party to intervene when a party does not uphold its end of the agreement (Christidis and Devetsikiotis, 2016).
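As a purely hypothetical sketch of such an option-like agreement (no real smart-contract platform or language is implied), the Python class below holds the buyer's funds and settles automatically once the predefined execution day is reached, with no third party able to intervene or reverse the outcome.

```python
from dataclasses import dataclass

@dataclass
class CallOptionContract:
    """Toy self-enforcing agreement: the buyer may buy an asset from the seller
    at strike_price once execution_day is reached; funds are held by the contract."""
    seller: str
    buyer: str
    strike_price: float
    execution_day: int
    escrowed_funds: float = 0.0
    settled: bool = False

    def deposit(self, amount: float) -> None:
        """The buyer locks funds in the contract when it is created."""
        self.escrowed_funds += amount

    def execute(self, today: int, market_price: float) -> str:
        """Runs automatically; once settled, the outcome cannot be reversed."""
        if self.settled or today < self.execution_day:
            return "nothing to do yet"
        self.settled = True
        if market_price >= self.strike_price and self.escrowed_funds >= self.strike_price:
            return f"transfer {self.strike_price} to {self.seller}; asset goes to {self.buyer}"
        return f"option not exercised; refund {self.escrowed_funds} to {self.buyer}"

# Both parties agree on the terms, the buyer deposits the funds,
# and the contract settles itself on the execution day.
option = CallOptionContract(seller="Bank A", buyer="Firm B",
                            strike_price=100.0, execution_day=30)
option.deposit(100.0)
print(option.execute(today=30, market_price=120.0))
```

On an actual distributed ledger, logic of this kind would be deployed as contract code and executed by the network itself rather than by either party.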


2.2 Economic theory

The following section covers the economic theories that are used to build the framework from which the efficiency of DLT solutions will be reviewed. The section begins by explaining that innovations aim to reduce inefficiencies, followed by a discussion of how Transaction Cost Theory offers a possibility to analyze the efficiency of an innovation by reviewing the effect it has on the different transaction cost categories.

Innovation and efficiency

Innovation can be defined as a "new idea, device or method". The goal of innovating is often the improvement of current products, services, processes, technologies or business models (Frankelius, 2009). Whenever an innovation is so revolutionary that it restructures the current economic structure, by re-defining the "rules of the game" and the roles and power of the players within the industry, the innovation is said to be disruptive. This process of revolutionizing the current economic structure is called "creative destruction" and was first described by Schumpeter (1942) in his work "Capitalism, Socialism and Democracy".

This process of creative destruction might result in a reallocation of resources, like labor, time, and money, as a result of the innovation. The improvement of efficiency might mean that fewer resources have to be spent on a certain process, and the change of roles and power might allocate the resources differently. When it comes to the development of DLT, it is often said that it has the potential to disrupt our current way of doing business and change the distribution of wealth (Tapscott and Tapscott, 2016).

Whenever an innovation merely changes the distribution of resources it might make the world a "better place", but it does not increase the efficiency with which the resources are used. If an innovation, however, changes a process in such a way that the same results can be achieved with the use of fewer resources, the innovation is said to improve that process and increase the efficiency with which resources are used. If, as a result of the innovation, a re-allocation of resources takes place in which at least one individual is better off, without making one or more individuals worse off, the increase in efficiency is said to generate a Pareto improvement (Ross, 1973).

In other words, a Pareto improvement occurs whenever goods and resources are reallocated in such a way that there are no losers and only winners. Whenever an economy has reached the point at which no other structure or allocation results in a Pareto improvement, the economy is assumed to have reached a point that is Pareto optimal or Pareto efficient (Breyer, 1989). From the point of view of a society, Pareto improvements increase the total wealth of the society as a whole, and can be seen as a measure of the efficiency of certain changes or actions. Therefore, judging an innovation, in this case Distributed Ledger Technologies, by the effect it has on Pareto efficiency offers a way of evaluating the innovation from the perspective of society.

However, there is one big objection against using Pareto efficiency as a criterion on which a decision is based: whenever certain changes or innovations affect a large and diverse population, it is very unlikely that there is not at least one person badly affected by the change, which makes finding a Pareto efficient solution very difficult or even impossible (Reckon, 2010). This is why two additional criteria derived from the Pareto criterion were introduced by Nicolas Kaldor (1939), whose work was founded on the work of John Hicks (1937). A Kaldor-Hicks improvement thus is achieved if one of the following criteria is met (Stringham, 2001):

• The "winners" gain enough from the change to (hypothetically) compensate the "losers" for their loss and still be better off than before the reallocation. This is called the Kaldor criterion.

• The "losers" cannot afford to bribe the "winners" into forgoing the change and still be better off than before the reallocation. This is the case when the losers lose less than the winners gain. This is called the Hicks criterion.

• There are no "losers", only "winners". This is the Pareto criterion.

So a Kaldor-Hicks improvement is an improvement that leads to a resource allocation that increases the total wealth of a society; in other words, a reallocation that leads to an increase in cumulative wealth. In the case of DLT, this would mean that the benefits of disintermediation are greater for the parties involved than the loss of revenue for the intermediating firm. Another example might be that DLT enables people to spend less time on verifying information, which leaves them with more time to spend on other activities.
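To make these criteria concrete, the small, purely illustrative Python helper below compares the gains of the "winners" with the losses of the "losers" of a hypothetical reallocation; in this simplified numeric setting the Kaldor and Hicks tests both come down to comparing total gains with total losses.

```python
def classify_reallocation(gains: list[float], losses: list[float]) -> str:
    """gains: wealth increases of the winners; losses: wealth decreases of the
    losers (both as positive numbers)."""
    total_gain, total_loss = sum(gains), sum(losses)
    if total_loss == 0 and total_gain > 0:
        return "Pareto improvement: no losers, only winners"
    if total_gain > total_loss:
        return "Kaldor-Hicks improvement: winners gain more than losers lose"
    return "no Kaldor-Hicks improvement: losers lose at least as much as winners gain"

# Hypothetical example: disintermediation saves two parties 60 and 50,
# while the intermediary loses 80 in revenue.
print(classify_reallocation(gains=[60, 50], losses=[80]))
```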

There are, however, some points of critique on using these criteria as a measure of the efficiency of policies or innovations. The first point is that they only measure quantifiable effects, and thereby do not leave room for factors like happiness and social impact, which makes them a measure of wealth and not of welfare. Due to the complexity of measuring welfare and the constraints on time and resources, this limitation is taken for granted in this research. It must thus be noted that whenever the term efficiency is used, it refers to a change in quantifiable wealth. Wealth is defined as the amount of resources, like time, money and labor, that an individual or company has at its disposal.

The second point of critique is given by Stringham (2001), who argues that it is impossible to know the exact effect of a policy or structural change, and that every prediction of future utility is therefore based on past preferences. Although this is undeniably true, according to this objection the only measure that would be suitable for the prediction of future results is one that uses future data. Since the use of future data is preferable but impossible, this point of critique will be disregarded.

Market Failure and transaction costs

Implicit in the assumption that innovations can improve the efficiency of a market is the assumption that the market is inefficient at a certain point. This phenomenon, when the market fails to reach an optimal equilibrium, is called market failure. A collection of scholars (Coase, 1937; Arrow, 1969; Williamson, 1971) argue that market failure is a result of externalities which find their roots in transaction costs. Their economic perspective, called new institutional economics, focuses on the role that institutions play in an economic system and builds on, but goes beyond, the beliefs of neoclassical economic theory. As a small recap, the assumptions underpinning the neoclassical theory are:

• People have rational preferences among outcomes

• Individuals maximize utility and firms maximize profits

• People act independently on the basis of full and relevant information

Under these assumptions a situation arises where there are free and fully competitive markets for all possible goods, externalities are absent, and transaction costs are negligible. Under neoclassical assumptions, the economy always moves towards a Pareto efficient equilibrium (Arrow, 1969; De Alessi, 1983).

However, the new institutional economists argue that these assumptions do not always hold and fail to explain the presence of transaction costs. Ronald Coase (1937) was the first to use the term "transaction cost" and showed that by relaxing one or more of the neoclassical assumptions, the existence of transaction costs can be explained. With his work, Coase laid the foundation for Transaction Cost Theory (TCT), which was further developed by a collection of scholars (Coase, 1937; Arrow, 1969; Williamson, 1971; Stiglitz, 1999). TCT states that all forms of externalities, and thus market failure, find their roots in transaction costs. It acknowledges that our economic system is characterized by imperfections and friction, which results in the existence of transaction costs that are at the base of all non-value-adding costs and activities (Coase, 1937).

Nobel prize winner Joseph E. Stiglitz showed that market failures and transaction costs find their origin in information asymmetry and incomplete markets. According to Stiglitz, information asymmetry and incomplete markets, as well as other forms of market inefficiencies, result in externalities, and the presence of these externalities generally leads to Pareto inefficient equilibria (Stiglitz, 1999). Although Stiglitz does not use the term transaction costs, his externalities describe unwanted non-value-adding costs that are inflicted on a subject as a result of participating in economic activities, which closely resembles the TCT definition of transaction costs. Coase (1937) acknowledges that, when the neoclassical assumptions are relaxed, certain costs arise from the activities that are necessary for using the market mechanism and for carrying out an exchange between two actors in a marketplace. These costs arise from any form of necessary economic organization that does not directly add value to the resource that is exchanged but merely serves the purpose of enabling the exchange (Anderson, 1986; Dahlman, 1979; Williamson, 2005). These costs include, but are not limited to, the cost of discovering what prices should be, the cost of creating and negotiating individual contracts and/or specifying the details of any transaction in long-term contracts, as well as the monitoring of counterparty behaviour and the cost of enforcement when contracts are breached. These costs are called transaction costs (Hobbs, 1996).

As transaction costs do not add value, they cause a waste of resources. Therefore, if the value of an innovation is determined by its ability to increase the efficiency of an economic process, and efficiency is achieved by an improved resource allocation, and resource allocation is constrained by transaction costs, then the value of an innovation can be determined by judging its ability to lower transaction costs. This means that to determine the value of DLT, its ability to lower transaction costs should be explored.

In order to be able to identify the effect DLT has on transaction costs, a full comprehension of their definition and the elements that stand at their origin is needed. Therefore, the next section is devoted to further exploring Transaction Cost Theory and the possibilities it offers with regard to innovation valuation.

Origin of Transaction Cost Theory

According to Transaction Cost Theory, real-world economic systems are characterized by the presence of positive transaction costs, which means that the use of market mechanisms is never costless. This, by itself, indicates the presence of imperfections and frictions within the market. In the beginning, TCT was used as a theory to describe a contracting problem and explain the choice of economic organization by firms. Economic organization is defined by the degree of vertical integration. At its base lies the assumption that the control or direction of certain behaviour and the reduction of certain transaction costs is more efficiently done within a firm than it is to accomplish similar results in a firm's external environment.

This means that firms will tend to expand and vertically integrate as long as the benefits of internal transactions and organization are higher than the additional cost of the extra organization, because firms aim to maximize their profits. Therefore TCT sees firms as a form of governing structure that functions as a substitute for the free market (Coase, 1937). Simply put, TCT attempts to explain the effects of the arrangements actors make to protect themselves from the hazards of their trading relationships.

Williamson (1970) expanded this notion further and showed that vertical integration, through mergers and acquisitions, does lower the transaction costs for an individual firm. However, the centralization of market power increases costs for the market as a whole. A high concentration of power and authority gives the individual firm the opportunity to engage in opportunistic behaviour and drive up the price of a product or service in order to maximize its own profits. As it is often too costly or difficult for customers to form a coalition that can restore the balance and fight against the opportunistic actions of big monopolists, this leads to a situation in which customers pay a higher price than in a competitive market. Williamson shows that the increase in revenue of the monopolist does not compensate for the loss for the customers. Therefore, centralization of market power often leads to a total social deadweight loss due to a low-end equilibrium (Williamson, 1970). Because deadweight loss is essentially synonymous with an inefficient allocation of resources, and thus market failure, centralized power results in Pareto inefficiency (Arrow, 1969).

Although Transaction Cost Theory was originally used to describe a contracting problem and explain the incentives behind the scale and scope of firms, over time TCT proved to be useful for studying a variety of economic phenomena ranging from transfer pricing, corporate finance, and marketing to long-term commercial contracting and other contracting relations (Shelanski and Klein, 1995). It can also be used as a measure of the effectiveness of certain technological innovations, like DLT, that a firm adopts to build or enhance its competitive advantage.

Transaction cost identification

In the process of valuing an innovation based on its ability to reduce transaction costs, the first step is to identify the transaction costs. This can be done through Transaction Cost Analysis (TCA), a process that divides costs into three main categories. The main categories into which transaction costs can be divided are (Hobbs, 1996):

• Search and Information costs arise because firms and individuals face costs in the search for information about different products, buyers, sellers, prices and inputs. These costs can be found in the time spent generating and verifying information, the costs of creating and using an information distribution network, and the storage of information. These costs are aimed at reducing information asymmetry and arise prior to the transaction.


• Negotiation costs arise as a result of the interaction between parties prior to the exchange, in which the conditions and responsibilities of the exchange are specified. These are costs such as writing contracts, negotiating, and including intermediaries in the process to carry out some of the activities.

• Monitoring and Enforcement costs are the costs of monitoring the behaviour of current or potential trading partners and the costs of enforcement when the counterparty does not live up to the predetermined conditions of the exchange. This may involve activities such as monitoring the quality of goods and due diligence, but also includes legal actions. These transaction costs occur after the transaction is made.

When differentiating between certain costs, the distinction between Monitoring and Enforcement costs and Search and Information costs is a subtle but important one. Although both deal with the gathering and verification of information, the difference lies in the fact that Search and Information costs are incurred prior to a transaction, whereas Monitoring and Enforcement costs are incurred after the transaction.

The assumptions

These categories are useful when it comes to classifying the costs; however, in order to reduce the costs or remove them entirely, the underlying factors that cause the transaction costs have to be identified. A wide range of disciplines, from psychology to economics and law, has been occupied with doing so, and four factors or assumptions have been identified that underpin transaction costs: Bounded Rationality, Opportunism, Asset Specificity and Information Asymmetry.

Bounded rationality

TCT assumes that people are limited in their ability to receive, store, retrieve and communicate information without errors. This leads to an inaccurate evaluation of all possible decision alternatives and outcomes, which restricts them from making optimal or fully rational decisions, even though they may intend to make rational decisions. In situations of high complexity or uncertainty, this means that people are often unable to make the most optimal decision. In such situations, higher Monitoring and Enforcement costs might be needed to monitor the effects of the decisions, as well as higher Negotiation costs to predetermine responsibility in case of failure (Hobbs, 1996; Grover and Malhotra, 2003).

Opportunism

Opportunism, as defined by Williamson (1979), is self-interest-seeking behaviour and occurs when an individual or company exploits a situation to their own benefit. Although this does not mean that everyone is acting opportunistically all of the time, the possibility of opportunism imposes a risk factor on the parties engaged. The possibility of opportunistic behaviour leads to a decrease in trustworthiness and an increase in risk. Transaction costs, in this case, arise from the costs of monitoring behaviour, safeguarding assets, negotiating and enforcement activities. This, for instance, means that innovations like DLT, which reduce the possibility of double spending, might lower the cost of opportunism by taking away the opportunity (Clark and Lee, 1999; Franklin and Reiter, 1997).


Asset specificity

Asset specificity occurs when one of the actors in the exchange has invested in assets that have little or no value in other transactions or exchanges. This means that the actor has a high dependency on the current transaction in order to gain benefits from his investments. Due to asset specificity, the actor has a weaker bargaining position because of limited alternative options, which might stimulate opportunistic behaviour from the counterparty with the stronger bargaining position. Besides, as asset specificity can be found in, for instance, the burden of legacy systems, it can make technological innovation increasingly difficult and costly (Hobbs, 1996).

Information asymmetry

TCT recognizes that many business transactions are characterized by a state in which both parties do not possess equal information, that is, by information asymmetry, which might give rise to moral hazard and adverse selection. It also happens that, although both parties have similar information, the information itself is imperfect or incomplete. This might lead to less than efficient decision making. The ability of DLT to provide wider access to verified data might reduce the level of information asymmetry and the need for intermediaries (Hobbs, 1996).

Allen and Santomero (2001) state that the need for risk management due to the presence of information asymmetry can be seen as the key reason for the existence of modern intermediaries. Intermediaries are in a way the embodiment of transaction costs and vertical integration, in which a part of the free market is substituted by a governing structure. Disintermediation, the removal of intermediaries from a process, is therefore a good example of increasing the efficiency of a market by removing a player from the process.

"Intermediaries create value via information collection, aggregation, display, and information processing; by managing workflow for a set of transactions between a buyer and seller; by coordinating logistical services to buyers and sellers; or by providing information processing services for end-to-end transaction management." (Bhargava and Choudhary, 2004, p.22)

Looking back on the three neoclassical assumptions, it becomes apparent that by relaxing the assumption of perfect and costless information, each factor that causes transaction costs can be explained. This even holds for the bounded rationality factor, which is not a relaxation of the assumption that people have rational preferences: bounded rationality still assumes that people make rational choices, but that they are limited in their comprehension of the variety of choices and their effects by a lack of information.

Transaction costs restrict the economy from achieving an efficient resource allocation. Therefore, the existence of transaction costs should provide individuals and firms with the incentive to reduce or remove these forms of market failure, because they lower their cumulative wealth. These costs drive a wedge between buyers' and sellers' prices and lead to a mismatch in supply and demand that results in welfare losses (Arrow, 1969). From the perspective of a firm, reducing transaction costs is a legitimate strategy when it results in a stronger competitive position and a more favorable allocation of resources for the company and its shareholders. From an individual's point of view, lowering transaction costs might lower the price of certain products. Furthermore, lowering transaction costs will lower the entrance barrier to the market, which will enhance the competitiveness of markets and protect them from the costs of power centralization.

Although the analysis of transaction costs offers a tool to theoretically review DLT, there are some limitations and concerns that need to be addressed. According to Hobbs (1996), these problems are a result of the fact that there have not been as many developments on the measurement aspect as there have been in the theoretical area. First of all, the resources spent on accommodating market failure are not always easy to separate from the cost of product or service development and often overlap. Moreover, even if they can be identified, quantifying them is often difficult.

Furthermore, due to the complex nature of these costs, the institutions that are often charged with the collection of economic data, like governments or accountancy firms, do not bother to separate them. Therefore, researchers are often not able to do more than make qualitative statements on the perceived or predicted effect that certain changes have on transaction costs.

In summary, by combining the discussed theories, a framework emerges that denotes transaction costs as the origin of inefficiencies in the market, or market failure. The presence of these inefficiencies leads to a less than optimal use and allocation of resources, which means that the economy currently is in a less than optimal state. In line with this reasoning, lowering transaction costs, which can be categorized as Search and Information costs, Negotiation costs, and Monitoring and Enforcement costs, will result in an efficiency improvement that benefits the economy as a whole. The current literature has pointed out four factors that can be seen as the main sources of these transaction costs: bounded rationality, opportunism, asset specificity and information asymmetry.


3. Methodology

This section describes the methodology used in the research. It starts with the justification of the chosen method of study, followed by a description of the process of data collection and the techniques used to analyze the data.

The objective of the case study is to explore how the use of DLT can reduce transaction costs within the Know Your Customer process. Additionally, the effect it has on the efficiency of resource allocation in the market will be reviewed. As explained earlier, there is little to no prior scientific research on which to build this work, due to the novelty of the topic. Therefore, this thesis is of an exploratory nature and aims to uncover the relationship between DLT and transaction costs and create a theoretical groundwork from which future research can develop. This will be done by means of a multiple-case study on DLT and KYC.

3.1 Nascent and Grounded Theory

This thesis makes use of the Grounded Theory approach as described by Glaser and Strauss (1991) to develop a theoretical framework through which to understand the impact of DLT on transaction costs and its effect on the efficiency of a market in terms of resource allocation. The Grounded Theory approach constitutes an analytical strategy whereby a scholar inductively collects data surrounding a phenomenon, without setting a priori hypotheses, in order to build a theory. According to Edmondson and McManus (2007), this form of research is often used when there is little or no previous work to build on and can be classified as nascent theory research. The use of Grounded Theory is particularly appropriate in the case of DLT research, as it is often used to explore novel and unexplored phenomena, mostly because it is very effective in generating responses to "how" and "why" questions. However, it has been argued persuasively by Perry (1998) that Grounded Theory is not suitable to corroborate or falsify preexisting theoretical frameworks. In consequence, the subsequent sections will not attempt to use Grounded Theory in such a manner.

When exploring a novel phenomenon through Grounded Theory, the theory should gradually evolve from inductive data collection and analysis. This is established by working iteratively, whereby new evidence is constantly incorporated into the research and new findings guide the research. Edmondson and McManus (2007) argue that data collection methods like interviews, open questionnaires, observations and longitudinal investigations are best suited for this kind of research. Accordingly, the collected data should be coded in a specific way to uncover themes and patterns from which to develop a theory (Coffey and Atkinson, 1996).

Building the actual theory starts after the first data is collected and can be divided into two main parts. The first element is the so-called Detective Work and the second is known as the Creative Leap. Detective Work is the process of collecting, coding and categorizing data in order to discover themes and patterns. This is done by looking for commonalities and consistencies within the data. During the Creative Leap, the researcher combines all the data and findings and looks for underlying connections from which to draw a theoretical conclusion (Mintzberg, 1979). As previously stated, this is done in an iterative manner, whereby discoveries in the Creative Leap can guide the direction of the Detective Work as well as new data collection.


3.2 Case study and Qualitative data

An inductive case study was chosen to study the potential that DLT offers for improving the efficiency of KYC processes. This choice was made because case studies are the preferred method among scholars when the research has an exploratory character and the topic is so new that general information still needs to be discovered to understand it (Flick, 2009). Eisenhardt (1989) agrees and states that a case study is the appropriate research strategy when "little is known about a phenomenon and existing theories seem inadequate or insufficient" (Eisenhardt, 1989). Additionally, Yin (2009) argues that case studies are most suitable when the "how and why" of a phenomenon is explored, the phenomenon is a contemporary event with a real-life application, and the researcher has little or no control over the phenomenon and its development. Since the topic is very novel and the research is aimed at answering a "how" question in an uncontrollable environment, a methodological approach based on one or more case studies appears justified in this instance.

As the efficiency of DLT solutions might differ due to the influence of external factors, multiple cases will be studied to obtain a broader picture of its impact on transaction costs. A methodology incorporating multiple case studies has been found to be very effective when studying complex and new phenomena, because multiple case studies enforce external validity by relying on multiple observations. In addition, such a methodology provides a more robust and generalizable outcome than performing a single case study (Cavaye, 1996; Yin, 2009; Eisenhardt and Graebner, 2007). Cavaye (1996) argues that although studying multiple cases may not enable the same rich descriptions as single cases, it does enable researchers to create a theory that is not just based on the idiosyncrasies of the setting in which a single case is studied. In the case of DLT and KYC, this is very much desired, as the KYC process is highly complex and can take different forms.

Due to the shortage of quantitative data concerning DLT solutions, this project is focused on the collection of qualitative data. Besides, it is important to keep in mind that this thesis aims to explore the potential applicability of an immature technology, DLT, by researching the technology's potential ability to improve the flow and availability of data in KYC processes. Qualitative data is better suited for such an endeavor than quantitative data, as it is more descriptive. As Cavaye (1996) states, qualitative data is well suited for the description of a phenomenon in a highly dynamic environment in which the contextual factors are very significant (Cavaye, 1996). As DLT is a technology that is fundamentally built on the involvement of multiple parties and legacy systems, the use of qualitative data over quantitative data seems appropriate.

There are some limitations that need to be addressed when it comes to using a Grounded Theory approach to build a theory. Primarily, a fully open and completely unstructured inductive approach might make it more difficult to compare separate cases, because an unstructured inductive approach can lead to very different insights. Secondly, according to Glaser and Strauss (1987), it is nearly impossible to conduct a study free of inductive reasoning, and this might slightly bias the results. Lastly, the number of available cases and the data the cases have to offer are not ideal, and due to the expected lack of quantitative results it might be hard to draw unequivocal conclusions. Therefore, to accommodate the first two limitations, the choice was made to deviate slightly from the original Grounded Theory approach by making use of semi-structured interviews. The last limitation has been taken into consideration while writing the conclusion.


3.3 Data collection and Participants

The collected data consists of both primary and secondary data. For the primary data, semi-structured interviews were conducted to document the experience of the selected experts. The secondary data was gathered from a variety of sources, including research papers, managerial articles, news articles and blogs. The choice to gather primary and secondary data from multiple sources was made to ensure the internal validity of the research. Internal validity refers to the degree to which the data agrees with reality and can be achieved by using data triangulation. Triangulation means that multiple perspectives from different sources are incorporated to study the subject and verify the findings (Holtzhausen, 2001).

For the primary data, 17 experts with varying backgrounds participated in interviews with an average duration of approximately 30 minutes. In addition, secondary data was extracted from 17 additional sources. A more extensive description of the collected data can be found in the Data description section.

During the data collection phase, three groups of experts were consulted for the primary data. The first group contains experts with direct experience in using DLT solutions for the KYC process, the second group contains experts with direct experience with DLT solutions in other use-cases, and the third group contains experts from the "context" of the case, such as consultants, developers or other industry associates with KYC knowledge.

Five experts have been interviewed who have been directly involved with the use of DLT in KYC processes. There is no consensus among scholars on the minimum number of cases, but Eisenhardt (1989) states that it is preferable to have four or more different cases. Because the field is so new that even the experts do not possess an abundance of experience, the choice was made to document experiences from five cases that directly relate to DLT and KYC. Three additional cases in which DLT was used in another context are considered as well, to generate additional insight into the use of DLT. Although it would have been preferable to get information from more cases, this depleted the available sources: no more experts could be found or were willing to participate. Therefore, a number of experts from the "context" were added who are considered experts in either the field of DLT or KYC. In this regard, the research strategy complies with Perry's assessment (1998) that considering the views of additional experts helps to generate a balanced and multi-sided perspective.

Secondary data was collected from multiple online sources. This data was mainly used in the first and the last phase of the research. In the first phase, it was used to document the environment in which the research was conducted and to build the foundation of the research. In the last phase, as the first patterns and themes were emerging, secondary data was collected by searching for literature on these emerging themes and patterns, and it was primarily used to support the findings of the analysis.

3.4 Selection of participants

The purposeful sampling method was used to select candidates. With this method, the criteria through which the participants are selected are determined beforehand. Even though the iterative sampling process of Grounded Theory is the most suited and unbiased method of selecting candidates for an explorative study (Marshall, 1996), the purposeful sampling method was used instead to identify the candidates, due to time constraints as well as limited resources to gain access to higher-placed experts. Because the pool from which the participants could be selected was limited in size, the criteria for inclusion were not very strict.

The first group, the DLT and KYC experts, was selected on the criterion that they have direct experience with the use of a DLT-based solution for KYC. It is expected that they are able to make a well-grounded assessment of the rationale behind using DLT in KYC processes and the effect it has on the market.

The second group, the DLT experts from other use-cases, was selected on the premise that they have direct experience with DLT-based solutions. It is expected that, based on their experiences, they are able to make a well-grounded assessment of the possibilities that the technology has to offer and its impact on processes and the market.

The last group, the context experts, was selected on the criterion that they are experts in either DLT or KYC procedures, and preferably have some basic knowledge of the other field.

All participants were approached by email; they were found through Google or referred by other interviewees. A group of over 50 experts and companies was approached by mail. In the e-mail, the research was briefly introduced and the conditions of the interview were stated, including the promise of full anonymity to protect any personal or company-specific data. Of this initial group, 20 potential participants reacted positively; 3 of them were eventually left out, which resulted in a group of 17 interview participants.

3.5 Case design and Interviews

The case contains four parts, which gradually work up to the formation of a theory. The first part introduces the landscape and current state of the field of Know Your Customer processes and aims to explain the process and purpose of KYC activities. This can be seen as the preparation phase, in which the foundation is laid on which the case will be built. This foundation contains information from extensive desk research, in which a combination of legal documents and managerial articles on KYC was studied, and findings from the interviews with context experts on KYC. The second part of the case describes the inefficiencies that were identified in the KYC process, from both the perspective of the client and the perspective of the institutions. In this part, data from desk research is combined with the insights from interviews with KYC experts and DLT and KYC experts. The insights are used to connect the identified inefficiencies to the transaction cost categories as described by J.E. Hobbs (1996). Additionally, explanations for the presence of these transaction costs are given from the perspective of the previously listed Transaction Cost Theory assumptions.

The third part discusses the opportunities that have been identified by the interviewees with regard to the solutions that DLT offers to increase the efficiency of the KYC process. For each of the identified solutions, the DLT characteristics that contribute to the solution are discussed, as far as the data allows. This is followed by a discussion of the technical, emotional and legal objections against using DLT for KYC processes and against the adoption of DLT in general. References from all collected data are used for this part. Additionally, the General Data Protection Regulation, as released by the European Parliament, is introduced and its relevance with regard to DLT is explained. This section mainly contains insights from the interviews with DLT and KYC experts.

Lastly, in the conclusion and discussion part, all the elements from the previous parts are combined and a summary of the findings is given, followed by a conclusion to the question: How can DLT be used to increase the efficiency of KYC processes? This is followed by an analysis of the effect it potentially has on the allocation of resources.

3.6 The interviews

As previously stated, the main sources of information are the interviews with experts, which have been conducted in a semi-structured fashion (Barriball and While, 1994). The choice to use semi-structured interviews was made to reduce the risk that the cases are non-comparable, as fully unstructured interviews can result in very divergent information. Besides, according to Barriball and While (1994), semi-structured interviews allow the researcher to capture equivalence of meaning from a varying group of experts in a dynamic environment, which makes them suitable for this research. Although different subjects might describe a situation differently, this method helps to standardize their meanings and facilitates comparability, by guiding the subjects towards similar topics while allowing them to explain their own experience. This, in turn, allows the researcher to steer the data collection towards the topics relevant to the research.

For each of the interviews, an interview guide was created. Although every interview guide was tailored to fit the profile of the participant, they all followed a similar outline. The purpose of the research was explained, followed by a quick explanation of the purpose of the interview, the topics that would be covered, and the role of the participant. This was followed by some introductory questions about the background of the participants. For the DLT and KYC experts, the main part of the interview was built around four core questions:

• Which KYC related problems can be solved by implementing a DLT solution?
• How are these problems solved with DLT?
• Are there any objections to using DLT for solving these problems?
• Why use DLT instead of a centralized database?

The interviews with the KYC experts were mainly focused on the questions:

• What is KYC?
• What inefficiencies can be identified in the KYC process?
• Is it legally possible to integrate KYC processes and share KYC data?

The interviews with DLT experts focused on the questions:

• What problems does your DLT solution solve?
• How are these problems solved with DLT?
• Are there any objections to using DLT for these problems?
• Why use DLT instead of a centralized database?

Although for each group a couple of topics were defined upfront, the interviewees were allowed and encouraged to deviate to other related topics if they felt it added to their story. In the last part of the interview, all participants were asked where they saw the development of DLT going in the next couple of years, and whether they foresaw obstacles for a wide-scale adoption of the technologies. All interviews have been transcribed for the data analysis. Most participants agreed to have the interview recorded so it could easily be transcribed afterwards. In case a participant did not wish to be recorded, due to confidentiality issues or company policy, notes were taken during the interview and a transcription was made afterwards based on these notes. All transcriptions were anonymized, at the request of the majority of the participants, by replacing the names of the participants with the title 'interviewee' followed by a number; company names were removed from the transcripts.

3.7 Data Analysis

Thematic analysis is used for the analysis of the collected data from the interviews and secondary sources (Aronson, 1995). This method of data analysis is often used in qualitative research and works well with the Grounded Theory approach, as it is focused on identifying themes and patterns that are important to describe a specific phenomenon. The method starts by identifying broad categories and gradually narrows these down until explicit references are identified that explain a certain aspect of the observed phenomenon. Following Aronson (1995), this was done in four steps.

Step 1: The first step is collecting the data and familiarizing oneself with it. During this step the interviews are transcribed and reviewed, and the first broad categories are created. The researcher can start with a list of potential codes but should stay flexible enough to include emerging patterns. For this case study, all interviews were transcribed and summarized. During each of the interviews, notes were taken of important or striking statements and comments, and after each interview a summary was made containing its key points. This allowed the first themes and patterns to emerge before the full transcription of the interviews. The interviews that could not be recorded were transcribed immediately after the interview; the recorded interviews were transcribed after all interviews had been conducted. Having already analyzed the interview summaries, the researcher was able to leave out irrelevant references from the transcriptions.

According to Braun and Clarke (2006), irrelevant data can be left out of the transcriptions and the researcher may use his interpretation of certain comments if it contributes to the clarity of the transcription. As explained during the literature review, the terms Blockchain and DLT are often used interchangeably; this was also the case during the interviews. To avoid confusion, the choice was made to replace most mentions of the word "Blockchain" with DLT, unless the interviewee was specifically referring to a Blockchain system as defined previously. After reviewing the summaries, recordings and transcriptions, the first major categories were established: DLT; KYC and DLT; KYC; and Transaction Costs. However, the last category was later removed.
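To make the terminology normalization described above more concrete, the sketch below shows one way such a review pass could be supported by a simple script. This is purely an illustrative sketch and not the procedure actually used in this study; the folder name, file naming pattern and helper function are hypothetical. The script deliberately only flags occurrences of the word "Blockchain" for manual review, since the decision to keep the term or replace it with DLT depends on what the interviewee meant.

```python
import re
from pathlib import Path

# Hypothetical folder containing anonymized, plain-text interview transcripts.
TRANSCRIPT_DIR = Path("transcripts")
TERM = re.compile(r"\bblockchain\b", re.IGNORECASE)


def flag_term_occurrences(text: str, window: int = 60):
    """Return (position, surrounding context) for every mention of 'Blockchain',
    so the researcher can decide case by case whether to keep the term or
    replace it with 'DLT'."""
    hits = []
    for match in TERM.finditer(text):
        start = max(match.start() - window, 0)
        end = min(match.end() + window, len(text))
        hits.append((match.start(), text[start:end].replace("\n", " ")))
    return hits


if __name__ == "__main__":
    for transcript in sorted(TRANSCRIPT_DIR.glob("interviewee_*.txt")):
        for position, context in flag_term_occurrences(transcript.read_text()):
            print(f"{transcript.name} @ {position}: ...{context}...")
```

Flagging rather than blindly replacing keeps the researcher's interpretation, in the sense of Braun and Clarke (2006), in the loop.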

Step 2: During step two the actual coding is done, in multiple rounds. Coding is a cyclical process in which the researcher goes back and forth between the data and the organization of code categories (Aronson, 1995). During this step, the first patterns occur, but the codes and categories are not linked together yet. Every comment in the transcriptions was analyzed and given a code, and multiple cycles of coding were done.

Step 3: During step three the codes are linked together and combined into sub-themes. These themes bring together components or fragments of the data that are meaningless when standing alone, but together form a comprehensive picture of the phenomenon. This step is often executed in an iterative manner with the previous step, as new insights might provide better codes than were used before.

During this step, the first sub-themes arose. Additionally, it was found that the Transaction Cost category should not be used as a category on its own, but should be viewed as a sub-theme of the three main categories. The categories were divided into sub-themes, and some of the sub-themes were further divided into sub-sub-themes. As some of the references were assigned to multiple themes and sub-themes, the first patterns and underlying connections emerged.

Step 4: This is the step in which the arguments are built, by reviewing the themes that emerged and the references that form them. The themes are now defined based on the references, and a hierarchy is developed based on the relevance of each theme with regard to the research. During this step, most of the secondary data sources were collected, as some references were discovered that needed clarification or additional argumentation. In this step, the actual case study is built and the results of the analysis are presented.
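As an illustration of the kind of overview that emerges from these coding steps, the sketch below tallies coded references per theme and sub-theme and counts the distinct sources they stem from, similar to the summary reported in Appendix 1. This is a minimal sketch under the assumption that coded references can be exported as simple (source, theme, sub-theme) records; it is not the actual Nvivo workflow used in this study, and the example references and labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical coded references: (source, theme, sub-theme) triples.
# The theme and sub-theme labels are illustrative only and do not
# reproduce the actual coding scheme in Appendix 1.
coded_references = [
    ("Interviewee 1", "KYC", "Inefficiencies"),
    ("Interviewee 1", "DLT", "Immutability"),
    ("Interviewee 2", "KYC and DLT", "Transaction costs"),
    ("Interviewee 2", "KYC", "Inefficiencies"),
    ("Secondary source 3", "DLT", "Adoption objections"),
]

# Count references and distinct sources per (theme, sub-theme) pair,
# mirroring the kind of overview reported in the coding scheme.
reference_counts = defaultdict(int)
sources_per_theme = defaultdict(set)

for source, theme, sub_theme in coded_references:
    key = (theme, sub_theme)
    reference_counts[key] += 1
    sources_per_theme[key].add(source)

for (theme, sub_theme), count in sorted(reference_counts.items()):
    n_sources = len(sources_per_theme[(theme, sub_theme)])
    print(f"{theme} / {sub_theme}: {count} reference(s) from {n_sources} source(s)")
```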

3.8 Data description

The processing and coding of the data and the creation of the coding scheme were done with the software program Nvivo, which contains multiple tools for processing, coding and analyzing qualitative and quantitative data. The Appendix section of this thesis contains the tables that display the coding scheme, as well as the details of the primary and secondary data.

Appendix 1 contains an overview of the coding scheme, including the different themes and subthemes that emerged. It also displays the number of references that are assigned to the themes and sub-themes, as well as the number of different sources from which those references stem. The most important details with regard to the interviews are listed in Appendix 2. As previously explained, the names of the respective interviewees have been replaced by numbers to ensure confidentiality. Besides a short description of the key topics of every interview, the table also displays the interview time, the number of useful references that were extracted from the interview, and the number of different themes and subthemes these references were subdivided into.

Additionally, Appendix 3 contains an overview of the secondary data sources that were used to support the findings and patterns recognized in the analysis of the interview data. As in the previous table, the key topics of each of the sources are described, as well as the number of useful references and the number of themes and subthemes the references were subdivided into. As these sources were often collected to clarify specific subjects, most articles contain references to fewer themes and subthemes than the primary sources.
