
DECOUPLING BLOCKCHAIN

THE POWER RELATIONS AND CONTROL OF THE INTERNET AND BLOCKCHAIN PROTOCOLS

By Paul Hoffman

[s2518481]

Master Thesis Digital Humanities LHU006M05

15 ECTS

Professors R. Prey & S. Aasman

18 September 2018


Contents

Introduction

Theoretical framework on organizational structures and power relations

Protocol & resistance, understanding the relationship

The three layers of the Internet

Understanding blockchain

State of blockchain research

The theories of Foucault and Deleuze in relation to blockchain

Blockchain specific issues related to power and control

Conclusion


Introduction

“Many individuals and organization units contribute to every large decision, and the very problem of centralization and decentralization is a problem of arranging the complex system into an effective scheme.” –Herbert Simon (1997, p. 132)

Organizing a large group of people in such a way that they work together in the most efficient manner is a widely debated topic. Insights regarding this topic change as new discoveries in technology and organizational structures are explored. Theories have engaged with a plethora of organizational strategies, ranging from emperors and dictators to egalitarian democracies. Historically, there has always been some form of centralized authority. This centralized authority may be responsible for everything, including law making, ruling, taxing, etc. Centrality, however, is susceptible to issues. For example, it marks a central point of failure, and a central entity may become corrupt or unable to perform. Nevertheless, there are many explanations why centralized authorities are so common: efficiency (one or a few people can make decisions quicker than a large group), strength of leadership (one central entity to rally behind) and logic (there will always be a group of people who are better leaders and decision makers than others). This demonstrates that centralization has some key strengths. Another, perhaps even more significant reason is that up until recently, it was simply impossible to efficiently run a decentralized organization. This is because a decentralized setup requires more efficient methods of communication and organization that rely on technological developments and advanced infrastructure.

History is rife with inventions that enable advances in communication and organizational techniques. For example, the telegraph and later the telephone allowed for instant communication. In relation to the concept of centralized versus decentralized organization, these inventions are a double-edged sword. On the one hand, instant communication means that centralized authorities have a larger and more prominent sphere of influence. At the same time, these advances in technology allow decentralized organizational structures like guerrilla armies to function effectively against larger, centrally organized armies. As technology advances, this double-edged sword becomes more pronounced. Most notably, technologies such as the Internet have enabled governments to increase their sphere of influence by equipping them with the ability to monitor civilians en masse. On the other hand, the Internet has also proven itself an invaluable tool in organizing protests and other acts of resistance to governments, as exemplified by the Arab Spring.

An important aspect of the Internet is the manner in which it is organized. More specifically, the Internet is an environment in which hundreds of millions of different computers, servers and other connected devices communicate with one another without one central node. It is essentially a decentralized network. Crucially, a decentralized network structure differs from a centralized structure in a number of key ways. For example, it delegates authority to many entities rather than one. In other words, instead of one centralized entity having responsibility over all communications within the network, the Internet makes use of a host of DNS servers that are responsible for organizing the Internet’s traffic. This means that there is no central point of failure or corruption, providing higher resilience. The decentralized nature of the DNS servers also makes it more efficient at organizing the communication of a large number of computers, because a single central node would be susceptible to being overrun. There is also the question of where such a central server would be placed and who would hold ownership over it; this would be a serious issue in world politics. Decentralized structures are also considered much easier to expand than centralized structures (Provan & Kenis, 2007). What this analysis therefore demonstrates is that a decentralized organizational structure offers many advantages over a centralized one.

In 2008, Satoshi Nakamoto introduced another method of organizing a large number of computers through blockchain technology (Nakamoto, 2008). More specifically, blockchain enables computers to digitally verify ownership in a decentralized manner. Historically, ownership is verified through a trusted, centralized party. For example, banks verify ownership of a particular sum of money, and ownership of a given identity is verified by governments and other institutions. Blockchain technology, on the other hand, enables a decentralized network of computers to digitally verify ownership without a trusted central authority. Adding this functionality to that of the Internet (decentralized communication and transfer of information) demonstrates that the ability to organize ourselves in a decentralized manner has advanced with the aid of technology. A quick example of how this technological advancement influences centralized entities is found in banking. As a whole, banks are part of a decentralized network of money ownership verifiers that enables the transfer of this ownership. The individual banks, in turn, function as gatekeepers to this ownership network, demonstrating elements of centrality. Blockchain upsets this structure by enabling individual nodes/computers to transfer wealth ownership to one another without these gatekeepers. In other words, money transfers from me to you without a trusted intermediary. Crucially, this element of direct transfers suggests that blockchain takes on aspects of a distributed, as opposed to a decentralized, network.

Herein a distributed network marks a key departure from a decentralized network in that a distributed network has no elements of centrality. For example, if the Internet were a distributed network there would be no DNS servers; computers would simply communicate with one another directly. This has obvious advantages, in that such an organizational structure further crystalizes the advantages that decentralized structures have over centralized ones. In distributed networks, authority is fully dissipated throughout the network. Also, the interaction between actors does not rely on any form of centralization, meaning there are no central points of failure, creating a network with incredibly high resilience. Finally, no single actor or group of actors holds ownership of the network; ownership is equal among users. What this demonstrates is that a distributed organizational structure is a continuation of the advances that a decentralized structure provides in relation to its centralized counterpart.

Another key element of organizational structures is how they manifest hierarchy and power. Herein power and hierarchy are crystalized in the way control is exerted, a particular ruleset is dictated and the responsibilities of given roles are allocated. For example, in a centralized structure, power emanates from whoever or whatever controls the central entity. In a business using a centralized structure, the business is controlled by an owner. In turn, he/she has control over which employee does which job and dictates the rules of the working environment. Finally, the role of the business owner dictates that he/she holds responsibility over the business and all those who work for it. In a decentralized business, instead of a business owner having sole responsibility, this might be shared among its chief officers. For example, a business might be run by a chief technology officer, a chief financial officer and a chief marketing officer. What is demonstrated, almost immediately, is that power is dissipated over three people instead of one. This also changes the hierarchy model in this structure. In a distributed business model, this power is distributed over all participants in the network. This also means that a distributed network should be hierarchy free. What this brief comparative analysis demonstrates is that power and hierarchy manifest themselves differently according to the organizational structure in use. In essence, organizational structures are closely tied to power relations and hierarchical structures.

Paradoxically, the dissipation of power and hierarchy is not always desirable. There are key functions of centrality that a distributed organizational model would struggle with. For example, how does the model make sure that actors adhere to a ruleset, and how does it punish/remove those who do not? A suitable answer would include the fact that a distributed model needs a ruleset that can be enforced without an enforcer; in other words, a distributed ruleset with a distributed ability to enforce it. Also, without a central authority, who or what will make sure that the model functions properly, and who will lead? These questions suggest that a fully distributed organizational structure is immensely difficult to achieve and maintain.

The solution to this issue comes in the form of a protocol. Protocol can mean a set of standards that are voluntarily followed during a ceremony, for example. Protocols are also found in the digital environment; however, the crucial difference is that there all actors must adhere to the protocol. Crucial examples hereof are the Internet Protocol Suite (IPS) and the blockchain protocol. One could therefore think of digital protocols as a set of laws which make it possible to organize a number of machines in a decentralized or distributed manner.

The study of Digital Humanities is concerned with the issues related to the ‘Digital,’ the Humanities and the tension(s) found in and between these two areas. Therefore, further investigation into the IPS and blockchain protocol (‘Digital’), organizational structures, power and control relations (Humanities), and the inherent tensions and relations between these two falls precisely within the Digital Humanities sphere of investigation and critical analysis. Similarly, Digital Humanities has a strong focus on the analysis of data to crystalize arguments related to the Digital and to the Humanities. This is another crucial component of this paper, as blockchain data will function as an important source for key lines of argumentation. In essence, I will make use of all areas of Digital Humanities academic engagement in order to crystalize my investigation and argumentation.


The questions I want to answer in this paper are specifically related to power relations and protocols. More specifically, I aim to crystalize how power relations manifest themselves in blockchain technology. This is important given the fact that blockchain is a completely new organizational structure in the digital environment. In order to do so, I will first demonstrate the current theoretical framework on power relations by examining its historical development. Herein the works of Foucault and Deleuze will be central, as they provide a crucial theoretical paradigm in regard to power relations and organizational structures. This section will also engage with the theories suggested by Galloway, who presented some of the initial theories on power relations as they relate to the Internet and the digital environment. The aim of this section is to demonstrate how the discussion on power relations has developed, how it manifests in a digital environment and to provide an overall theoretical framework with which to analyze blockchain.

The subsequent section will provide a closer analysis of power relations on the Internet specifically. I will engage with the “three layers” of the Internet, namely the protocological layer, the data & code layer and the platform layer. The importance herein lies in that these three layers are also found in the blockchain environment. A comparative analysis will function as a key component in providing a complete understanding of the structures of control and manifestations of power. This section also demonstrates how these three layers are related to one another and how different power relations manifest themselves at each layer. I will also demonstrate how governance of these three layers is conducted.

A fundamental aspect of control and power is how resistance manifests itself. In this section I aim to crystalize exactly where these pockets of resistance can be found. Herein the focus will be on the Internet’s protocological structures as they are used as a means of resistance. This will once again function as a critical element in the comparative analysis that demonstrates how blockchain technology changes aspects of resistance.

I then shift the focus to blockchain. First, I will investigate the state of blockchain research. This investigation will demonstrate that blockchain research is still in its infancy, an infancy that is especially pronounced in regard to blockchain and critical theory. This lack of a critical-theoretical paradigm in the existing literature reaffirms the necessity of my form of academic engagement and underlines the importance of my line of research into blockchain.

Next I will provide an in-depth technical analysis of the blockchain protocol. In order to do so I will crystalize exactly what technological problems blockchain solves and how blockchain is used. In this section I will also demonstrate the technical workings of the blockchain protocol by examining its protocological structure. This analysis will include engagement with key terminology, blockchain organizational structures, consensus mechanisms, key players and differences in particular blockchain solutions. Crucially, throughout this section I will demonstrate how blockchain’s open ledger functionality can be used as an important source for data analysis.


Theoretical framework on organizational structures and power relations

“Power is tolerable only on condition that it masks a substantial part of itself. Its success is proportional to an ability to hide its own mechanisms.” –Michel Foucault (1978, p. 86)

In the following section, the aim is to establish a theoretical framework on power, power relations, control and resistance, and to provide an overall understanding of the paradigms of power as described by a number of academics and theorists.

First, in a more general context, throughout this paper I will discuss power. But what is power exactly and how does it manifest in a group dynamic? In the simplest terms, “the concept of power [is defined in] saying that A exercises power over B when A affects B in a manner contrary to B’s interests” (Lukes, 2005, p. 37). This is essentially a scenario wherein someone is told to do something that contradicts their own wants and needs. It is power in its most elementary form and also the most directly tangible form of power.

Power, however, is also more complex and multilayered. This is made evident from the fact that Lukes introduces three dimensions of power. In short, one-dimensional power is the fundamental understanding of power; it perfectly describes the before-mentioned example given by Lukes of A exercising power over B. The two-dimensional view suggests that power has two faces: the first is discussed above, while the second introduces the “mobilization of bias” (Lukes, 2005, p. 20). Herein the mobilization of bias encompasses the fact that there are particular beliefs, values and institutionalized procedures that function to benefit a particular person or group at the cost of another. This analysis includes both decision-making and non-decision-making, meaning that a decision not to act also demonstrates power. This additional layer means that it is possible to consider how potential issues or decisions demonstrate power. The three-dimensional view of power includes latent conflicts “which consists in a contradiction between the interests of those exercising power and the real interests of those they exclude” (Lukes, 2005, p. 28). These latent conflicts are caused by the fact that power can be exercised in such a way that a person’s wants are influenced, possibly determined, by someone or something exercising power over them. What this means is that power is not always in the here and now, and its influence can be observed in contradictory behavior or decision-making. This type of analysis is much better suited to understanding power dynamics in larger groups and could be understood as power through ideology. What Lukes’ analysis demonstrates is the fact that power is not always direct; instead, power can be exerted indirectly and affect groups of people.

Crucially, power is not an attribute but a relationship; more specifically, an asymmetrical relationship in which one actor always has greater influence over another. Power also cannot be absolute: there must always be the possibility of resistance (Castells, 2009, p. 11). This is because if power is absolute, it fundamentally changes the nature of the relationship. At the same time, absolute power means that the control/power mechanism no longer has to be established and reinforced. In turn, given the dynamic relationship between power and resistance, this suggests that power is not static, and that over time new methods of establishing, reinforcing and exerting power develop as areas of resistance emerge. Crucially, this fluid dynamic calls for constant investigation, not only in established spaces, but especially in new spaces as they emerge. It also demonstrates that in any organizational structure where power is exerted, resistance is found.

The way that power is exerted and established has varied greatly through human history. In the past, power was established through the controlling forces of violence and discipline. However, over time, some societies have developed in such a manner that violence is no longer the controlling force. Instead, in control societies, it is the structure of institutions, a decentralized organization and visibility that reinforce power relations and control. The history of these control societies has been divided into three historical periods in which power emerges in distinct forms of organization.

The first two periods are described by Foucault in Discipline & Punish (1997). Foucault argues that sovereign societies, characterized by centralized power and prevalent in the classical era, were superseded by disciplinary societies, characterized by decentralized power and found throughout the modern era. More specifically, where control in societies during the classical era emanated from the King or Queen (centralized), as history proceeded, societal control could be decentralized into institutions such as schools, prisons, hospitals and other state-run apparatuses. In a sense, this means that power is less absolute because it does not lie with a monarch who could be directly opposed. However, this remains an area of some ambivalence, due to the fact that societal institutions are difficult to resist as well. One could argue that the reach of societal institutions makes their power equally absolute compared to that of a despot. Less ambivalent and more evident is the fact that this development marks a clear departure from centralized organization and takes on aspects of decentralized organization.

The way this decentralization is crystalized and effected is described in the chapter titled “Panopticism” (Foucault, 1997). More specifically, Foucault argues that control societies first emerged in the seventeenth century when cities started taking action against the plague. In order to curb the spread of the plague, space was partitioned, houses closed off and regular quarantines and purification operations were executed. In essence, “the plague is met by order” (Foucault, 1997, p. 197). Herein Foucault identifies the beginning of all modern mechanisms for controlling individuals through “multiple separations, individualizing distributions, an organization in depth of surveillance and control, an intensification and a ramification of power” (Foucault, 1997, p. 198). In other words, Foucault identifies how individuals are controlled by several entities, as opposed to one monarch. In turn, this control is exerted through the design of the control structure and not through direct force.

Foucault uses Jeremy Bentham’s Panopticon, which is a tower or building at the center of a prison that allows a single prison guard to overlook all the surrounding cells, to crystalize his argument.1

[Image: a panopticon prison. Source: Seelie (2017)]

Crucially, what becomes evident from this image is that for a prisoner, visibility is a trap, as he is “perfectly individualized and constantly visible” (Foucault, 1997, p. 200). Similarly, if we take into consideration that the central tower will probably be equipped with one-sided glass, the panopticon induces a sense of permanent one-sided surveillance. At the same time, given that individuals cannot communicate with one another, the possibility for individuals to organize themselves is removed. This makes the individual an “object of information [and] never a subject in communication” (Foucault, 1997, p. 200). Crucially, it is exactly this mechanism that ensures the “automatic functioning of power”: because the individual is unaware of when he is being watched, it is in his best interest to act according to the imposed rules. In essence, this mechanism makes power homogenous, disindividualized and, to some extent, automatic.

In Foucault’s work, the first case (the plague) is one where power is used against evil, while the second (the Panopticon) should be understood as a model of human control, or a “way of defining power relations in terms of the everyday life of men” (Foucault, 1997, p. 205). Crucially, these two mechanisms describe the transformation of the disciplinary aspects of control societies from one where control/power is enforced by heavy constraints constantly reinforced by the controlling entity, to one where control is a subtle force reinforced by the mechanism of power itself. In effect, Foucault marks this shift as “the formation of what might be called in general the disciplinary society” (Foucault, 1997, p. 209). Therefore, what Foucault essentially describes is how societies in which power was centralized have shifted their structure into one where control itself is many times more decentralized.

1 A physical manifestation of this concept is the prison built on the southern coast of Cuba on


Crucially, the source of this control is still centralized, but the control is no longer enforced by discipline directly; instead the control is enforced automatically, through the construction of the system. This means that it takes fewer people to control more people. It makes power more economical and efficient, and therefore desirable. Foucault’s example of the panopticon makes this near invisible force visible. This is the first crucial step to understanding how power relations manifest themselves in modern societies.

The next element in understanding contemporary power relations is described by Deleuze in “Postscript on the Societies of Control” (1997). This work is very much a direct continuation of and engagement with Foucault’s work, which is evident from the fact that its first two paragraphs function to position Foucault’s contributions to this discussion. Deleuze’s main goal is to theorize how, in an increasingly technological society where surveillance of communication is a given, the juxtaposing forces of control and freedom manifest themselves. In essence, Deleuze aims to describe how the disciplinary society described by Foucault has shifted into a society of control.

Deleuze begins by reiterating some of Foucault’s contributions, most notably how Foucault crystalized the transition from sovereign societies to disciplinary societies. Deleuze’s crucial contribution comes from the fact that he demonstrates the emergence of societies of control. The distinction between the two lies in forms of continuity and in the role of technology. Continuity, because in disciplinary societies one is always starting something new (from school to barracks, to factory), whilst in contemporary societies of control this process is continuous. This essentially also makes control more continuous. Technology also has an essential role to play in this. In Deleuze’s own words: “disciplinary societies equipped themselves with machines involving energy, with the passive danger of entropy and the active danger of sabotage; the societies of control operate with machines of a third type, computers, whose passive danger is jamming and whose active one is piracy or the introduction of viruses” (Deleuze, 1997, p. 6). This is supplemented with the fact that “[i]n the societies of control […] what is important is no longer either a signature or a number, but a code: the code is a password” (Deleuze, 1997, p. 5). What Deleuze demonstrates is the fact that technology has an important role to play in societies of control. On the one hand, technology brings with it the ability to understand people in terms of data points, markets or samples. In turn, Deleuze argues that technology creates “dividuals” because people are no longer self-contained units (individuals) but instead dividuals (divisible individuals) that can be understood as data points (Deleuze, 1997, p. 5). On the other hand, Deleuze demonstrates the key role technology plays in continuous control.

Where Lukes and Castells provide a framework for understanding what power is, Deleuze and Foucault establish a historical timeline of power relations and demonstrate the manifestation of control. What becomes evident from their analysis is that power as a concept is not as simple as A exerting power over B; power can be indirect and influence bias and decision-making. Power is also a fluid relationship subject to elements of resistance. The analysis of the historical timeline also demonstrates that power and control have shifted from a centralized form to a decentralized form. Crucial to this development are the socio-cultural structure of societies and the role technology plays in reinforcing and structuring the modes with which control is exercised. Although Foucault and Deleuze have contributed tremendously to this discussion, their analysis stops just before power relations and organizational control structures enter the next phase. With the advent and popularization of the Internet, power relations and control are about to step onto the global stage.

The aim of the Internet is to construct a way with which machines and humans can communicate with one another without a central point of failure. It is built on the principle of a fully distributed network, without central nodes, wherein the flow of information is uncontrolled (Brandom, 2017). Crucially, where decentralization relies on elements of centralization and hierarchy, a distributed network is one wherein all nodes are equal and communication between nodes is free. This is visualized in the following graph:

[Figure: visualization of centralized, decentralized and distributed networks. Source: Eagar, 2017]

What should be noted is that a distributed network is more difficult to organize given the fact that hierarchy and power relations are absent; this means that the organizational structure has to function differently. Although hierarchy and power relations in some cases have a bad reputation, they also function as a tool with which to establish and enforce a particular ruleset (such as laws) which enables a governed body to function as a cohesive whole. Similarly, a distributed network can only function if all actors conform to the same rules; it therefore requires some form of homogenization.

Paradoxically, the distributed nature of a network dictates that, theoretically, this space should be without power relations, elements of control and hierarchies that influence behavior. This is because all nodes on the network are considered equal, and communication is point-to-point and without any form of centrality. This means that establishing elements of control is difficult. This presupposition, in combination with the perceived identity of the early Internet as a free and uncontrolled space, demands further investigation into the properties of control and relations of power in this space. It prompts the question: how does control exist after decentralization? This question is central to Alexander Galloway’s Protocol: How Control Exists after Decentralization.

In addition to questions concerning control and power on the Internet, this work also aims “to flesh out the specificity of this third historical wave by focusing on the controlling computer technologies native to it” (Galloway, 2004, p. 3). Herein the third historical wave is a reference to Deleuze’s societies of control and the role computers and digital organization play in them. Whereas the first two waves demonstrate a gradual change from centralized to more decentralized control structures, the third wave takes on elements of a distributed organization. Crucially, as I have demonstrated earlier, a distributed organizational model requires some form of homogenization to make sure that all actors in the model adhere to a similar ruleset, thereby forming a cohesive whole. This comes in the form of a protocol.

Protocol has three distinct meanings. According to the Cambridge English Dictionary: one, as a rule, “the system of rules and acceptable behavior used at official ceremonies and occasions”; two, as an agreement, “a formal international agreement”; and three, in computing, “a computer language allowing computers that are connected to each other to communicate” (Cambridge, n.d.). For my purposes I will be using protocol in the sense of a computer language. More specifically, my understanding is that a protocol is a ruleset that a number of digitally unorganized machines should follow in order to engage (communicate, transfer information, etc.) with one another efficiently. It is also the ruleset that defines the digital structure. Herein a protocol is the key component in creating a distributed network that functions as a cohesive whole; it is the homogenizer.

Galloway has another understanding of protocol, in the sense of protocol as a management style. More specifically, this understanding of protocol encompasses a non-hierarchical management style and is meant as a way for the “protocological management of life itself” (Galloway, 2004, p. 87). The purpose hereof is to demonstrate that the protocological organizational structure found in the digital environment has a non-digital counterpart; in other words, life/humans managed according to a protocological structure. Although one could argue that the protocological management of life is a form of technological determinism, I would caution against this understanding, due to the fact that Galloway’s understanding of the protocological management of life also includes the confluence of protocol and life, whereas technological determinism is inclined towards the idea that there is a direct relationship of influence between technology and our lives, and not a coming together (Adler, 2006). Although there are elements of legitimacy to a protocological management of life itself, for my purposes I will focus solely on protocol as a digital force aimed at homogenization and control.

In order to establish a theoretical framework with which to analyze protocol, a central question Galloway poses in Protocol is “where has the power gone?” The reason is that in a distributed network such as the Internet, every node or member of the network is able to communicate with another freely and without an intermediary; there are no elements of centrality or hierarchy. More concretely, the Internet makes use of “intelligent end-point systems that are self-deterministic, allowing each end-point system to communicate with any host it chooses” (Hall, 2000, p. 6). In this, the network functions like a rhizome as opposed to a tree.

The rhizome was first introduced by Deleuze and Guattari, and is significant because a tree is strictly hierarchical and a rhizome is not (Holland, Smith & Stivale, 2009). Crucially, rhizomes are also heterogeneous (Gartler, 2018). Herein heterogeneity is synonymous with protocol in the sense that a protocol is the force that creates and enforces heterogeneity. This is paradoxical given the fact that I have also established that protocol is homogenous. This paradox also explains Galloway’s motivation to establish the power structures at the protocol level, as this is the exact force that makes the Internet both a heterogeneous and a homogenous network. Galloway argues “that power relations are in the process of being transformed in a way that resonates with the flexibility and constraint of information technology. The Internet is a form that is regulated. […] Information flows, but in a highly regulated manner. This dual property, regulated flow, is central to Protocol’s analysis of the [I]nternet” (Galloway, 2004, p. xxiv). This relates to the heterogeneity and homogeneity juxtaposition because it demonstrates that protocols can organize two seemingly opposing forces into a harmonious whole.

In order to establish his argument, Galloway introduces two core elements of the Internet’s protocol, namely TCP/IP and the DNS. These two protocols, found in the Internet Protocol Suite (IPS), demonstrate this dual property of regulated flow. Herein TCP/IP stands for Transmission Control Protocol (TCP) and Internet Protocol (IP); it is responsible for establishing connections between computers, servers and devices and allows data packets to be sent and received effectively (Braden, 1989). What this means is that, by design, devices can communicate with one another freely, “resulting in a nonhierarchical, peer-to-peer relationship” (Galloway, 2004, p. 8). The TCP/IP protocol consists of four layers, namely the application, transport, network and data link layers. The application layer is what a web browser engages with (through HTTP, for example). The next layer is the transport layer; this is where TCP lives.2 The transport layer is responsible for chopping up the data received from the application layer into smaller PDUs (Protocol Data Units) called segments. In turn, these segments can then be sent to their destination by the quickest and cheapest route. Next, in order for the segments to be put back together in the right order, TCP puts a header on them with ordering instructions. These segments then get pushed on to the network layer. On the network layer, the IP protocol is responsible for adding a source and destination IP address to the segments to make sure they are sent to the right machine. This turns the segment PDUs into packet PDUs. Finally, the packets are sent along the data link layer, which corresponds to the physical layer. This layer is the wire(less) infrastructure responsible for an internet connection; the PDU on this layer is called a frame. Crucially, this process can go both up and down (decapsulation and encapsulation respectively) and is described by Galloway and others such as Beniger (2009), Hall (2000) and Lessig (2003, 2006) as non-hierarchical. At the same time, these protocols make any form of control or regulation very difficult, given the fact that data, and the people sending/receiving it, are hard to trace and identify (Lessig, 2003).

2 This is also where we find UDP, a “best effort” alternative to TCP which, although faster, has no guarantee that packets reach their destination. UDP is popular in gaming because the application layer is then responsible for correctly placing data packets, but it is ultimately not as fundamental as TCP and will therefore not be investigated further.
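To make the layered encapsulation described above more tangible, the following toy sketch in Python models how application data might be chopped into ordered segments, addressed as packets and wrapped in frames, and then reassembled on the receiving side. All class names, field names and the segment size are simplified assumptions made purely for illustration; they do not reflect the real TCP/IP header formats.

from dataclasses import dataclass

SEGMENT_SIZE = 8  # toy payload size, far smaller than a real TCP segment

@dataclass
class Segment:      # transport-layer PDU: payload plus ordering instructions
    seq: int
    payload: bytes

@dataclass
class Packet:       # network-layer PDU: a segment plus source/destination addresses
    src_ip: str
    dst_ip: str
    segment: Segment

@dataclass
class Frame:        # data-link-layer PDU carried over the physical infrastructure
    packet: Packet

def encapsulate(data: bytes, src_ip: str, dst_ip: str) -> list:
    """Application data is chopped into segments, addressed, then framed."""
    segments = [Segment(seq=i, payload=data[i:i + SEGMENT_SIZE])
                for i in range(0, len(data), SEGMENT_SIZE)]
    return [Frame(Packet(src_ip, dst_ip, s)) for s in segments]

def decapsulate(frames: list) -> bytes:
    """The receiver reassembles the payload using the sequence numbers,
    even if the frames arrived out of order."""
    segments = [f.packet.segment for f in frames]
    return b"".join(s.payload for s in sorted(segments, key=lambda s: s.seq))

if __name__ == "__main__":
    frames = encapsulate(b"GET / HTTP/1.1 ...", "192.0.2.1", "203.0.113.7")
    frames.reverse()  # simulate out-of-order arrival
    assert decapsulate(frames) == b"GET / HTTP/1.1 ..."

Going down the layers corresponds to encapsulation and coming back up to decapsulation, mirroring the description above.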

On the other hand, there is the DNS. DNS stands for Domain Name System and functions according to predetermined and rigid hierarchies. In essence, the DNS is a large decentralized database that is crucial to the Internet’s functioning. Its role is to associate domain names with numerical IP addresses. The reason this is necessary is that we tend to remember and prefer to organize websites according to names and not their IP addresses. One way to imagine the DNS system is as the Internet’s phonebook. The way DNS works is as follows: when you want to visit, say, www.google.com., your machine does not necessarily know which IP address to go to. Note that in the address mentioned there is a final dot after .com that we normally never see or use; however, this dot is fundamentally important because this is where the hierarchy begins. This dot represents the root of a DNS query. More specifically, when your machine queries the DNS’ resolving name servers for an address, the resolving name servers first consult the root name servers, which cannot return the entire address; instead, they let the machine know where to find the .com name servers. These servers are known as the top level domain (TLD) name servers. When these servers are queried, they in turn return the google.com name servers, known as the authoritative name servers (ANS). Finally, when the ANS are queried, they return the exact IP address on which the site is hosted. This information is returned to the browser, which will ultimately take one to the Google website.
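The resolution chain just described can be sketched as follows. The iterative steps (root, TLD and authoritative name servers) are shown as comments, since a normal machine delegates them to its resolving name server; the standard-library call at the end only exposes the final answer that comes back. The domain and port used here are purely illustrative.

import socket

# Conceptually, resolving "www.google.com." proceeds as follows:
#   1. a root name server (".") is asked and refers the query to the .com TLD servers
#   2. a .com TLD server refers the query to the google.com authoritative name servers (ANS)
#   3. the ANS return the IP address(es) on which the site is hosted
# The resolving name server performs these steps on our behalf and caches the results.

if __name__ == "__main__":
    # What our machine ultimately receives back from its resolver:
    addresses = {info[4][0] for info in socket.getaddrinfo("www.google.com", 443)}
    print(sorted(addresses))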

There is, however, an incredibly important aspect to take into consideration when analyzing the DNS as a protocological structure. This is the fact that it “is structured like an inverted tree [where] each branch of the tree holds absolute control over everything below it” (Galloway, 2004, p. 9). What this means is that if the root of a branch or tree is blocked by a provider or government, the entire branch is blocked. An example of when this is problematic was during the Arab Spring uprising, when the Egyptian government shut down the official DNS servers, grinding internet use in Egypt to a halt (Cellan-Jones, 2011). In essence, given the fact that the DNS servers are structured hierarchically and are by design centralized (and therein susceptible to control), the DNS is the direct opposite of the rhizomatic, non-hierarchical protocological structure of TCP/IP. In turn, this means that the Internet Protocol Suite in its totality holds two opposing forces. Tim Berners-Lee (the inventor of the World Wide Web) even went as far as to say that the DNS system is “one centralized Achilles’ heel [that] can be brought down or controlled” (qtd. in Galloway, 2004, p. 10). The fact that this is problematic is also established in academia: “[u]nlike the distributed end-to-end architecture of the internet generally, the technology […] and control of internet and Web addresses and names is highly centralized and hierarchical” (Flanagin, Flanagin & Flanagin, p. 190). The reason Berners-Lee, the Flanagins and Galloway suggest that the DNS servers are centralized, as opposed to decentralized, is that there are only 13 root name servers, most of which are in America. One could therefore easily suggest that the DNS servers are centralized, but strictly speaking the system conforms to a decentralized organizational structure. The DNS protocol therefore combines elements of centrality with decentrality.
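Galloway’s “inverted tree” point can be illustrated with a small sketch: if a branch of the hierarchy is blocked, every name beneath that branch becomes unresolvable. The tree data and addresses below are invented purely for illustration and do not reflect the real DNS.

DNS_TREE = {
    ".": {"com": {"google": {"www": "192.0.2.10"},
                  "nytimes": {"www": "192.0.2.20"}},
          "org": {"wikipedia": {"www": "192.0.2.30"}}},
}

def resolve(name, blocked=frozenset()):
    """Walk the tree from the root downwards; a blocked label cuts off its whole subtree."""
    node = DNS_TREE["."]
    path = ""
    for label in reversed(name.rstrip(".").split(".")):   # e.g. "com", "google", "www"
        path = f"{label}.{path}" if path else label
        if path in blocked or label not in node:
            return None                                   # the entire branch is unreachable
        node = node[label]
    return node if isinstance(node, str) else None

if __name__ == "__main__":
    print(resolve("www.google.com"))                      # resolves normally
    print(resolve("www.google.com", blocked={"com"}))     # blocking .com blocks it...
    print(resolve("www.nytimes.com", blocked={"com"}))    # ...and everything else under .com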

What becomes evident from the analysis of the Internet Protocol Suite are several key aspects. First and foremost, the TCP/IP and DNS protocol structures are radically different. TCP/IP is non-hierarchical, peer-to-peer and distributed; DNS is strictly hierarchical and combines aspects of centralization and decentralization. Therefore, the Internet as a network appears to be a distributed network; however, closer analysis of its protocological ruleset dictates that although it does have distributed elements, centralized and decentralized organizational structures are of equal importance. Second, the goal of the Internet Protocol Suite is totality. As Galloway puts it, “[the Internet] accept[s] everything, no matter what source, sender, or destination” (Galloway, 2004, p. 42). More specifically, it is the lingua franca with which all computers speak to one another and transfer information. This totality is ensured by the virtues of the Internet in its “robustness, contingency, interoperability, flexibility, heterogeneity [and] pantheism” (p. 42). This demonstrates the oneness of a distributed network and a rhizome. Third, the Internet protocol is a distributed technology, a universal language, and is open to an unlimited number of machines physically located in different parts of the world (Galloway, 2004, pp. 46-7). What this demonstrates is that the combined properties of distributed, decentralized and centralized control found in the Internet Protocol Suite allow for a way to organize a fully distributed system.

What remains unexplored is the relationship between the IPS and resistance. In the next section I aim to crystalize this relationship. This is important because a protocol that desires totality to function suggests that resistance is not possible. Another way of interpreting this is by asking the question: can the IPS be used as a tool for resistance? Answering this question, and providing a better understanding of the relationship between the IPS, totality and resistance, will ultimately further the understanding of the relationship between protocol and control & power.


Protocol & resistance, understanding the relationship

“I believe in the resistance as I believe there can be no light without shadow; or rather, no shadow unless there is also light.” –Margaret Eleanor Atwood (1998, p. 134)

Closely related to power relations and the control over the digital as a space is the issue of resistance. More specifically, how does resistance in the digital space use protocological structures as a means, and which groups, institutions or persons form part of this resistance? Also, what is the motivation for using the protocological structure as a tool for resistance and to what ends is it being used? These elements are important to understand due to the fact that wherever there is control & power, there has to be a form of resistance to said forces.

There are two ways to answer these questions. One is by looking at the ways in which actors resist the homogenization and totality of the Internet Protocol Suite. The importance of this line of investigation is underlined by the fact that, as Foucault famously said, “where there is power, there is resistance” (Foucault, 1978, pp. 95-96). In other words, the presence of resistance demonstrates the prevalence of power. Crucially, resistance to homogenization and totality is achieved by establishing protocological environments that exist within the IPS but behave according to a different ruleset, because this would upset the homogenization and totality of the protocol suite. An example hereof is DNS hijacking, which involves a person or group setting up a rogue DNS server and rerouting internet traffic to their own servers. These servers contain illicit copies of trusted websites and are often used to gain private information in an act known as ‘phishing.’ However, the problem with this type of resistance is that it is not really resistance to the protocol itself. The resistance herein is simply a by-product of a hacker’s intention of gaining access to, or information about, their target. What this means is that although the totality and homogenization of a protocol can be upset, this is in fact not an act of resistance to protocol. This in turn means it is not a form of resistance to power.
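A small sketch can make the mechanism behind DNS hijacking visible: whichever resolver a machine is configured to use decides which addresses that machine receives for a given name, so rerouting queries to a rogue server means receiving whatever addresses its operator chooses to serve. The sketch below assumes the third-party dnspython package and uses two well-known public resolvers purely as an illustration.

import dns.resolver  # third-party package: pip install dnspython

def addresses_from(resolver_ip, name):
    """Ask one specific resolver (rather than the system default) for A records."""
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [resolver_ip]
    return sorted(rr.address for rr in res.resolve(name, "A"))

if __name__ == "__main__":
    for resolver_ip in ("8.8.8.8", "1.1.1.1"):
        print(resolver_ip, addresses_from(resolver_ip, "www.example.com"))
    # A machine pointed at a rogue resolver would simply print whatever
    # addresses the attacker chose to serve for this name instead.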

Another related example is DNSCrypt. This is a DNS service which encrypts DNS traffic (traffic to regular DNS servers is far less protected). Most notably, it is a DNS protocol that exists within the DNS protocol paradigm, demonstrating that it is possible for a different set of rules to exist within protocol, juxtaposing its homogenization and totality. This point, however, is complicated by the fact that the DNS servers of DNSCrypt simply feed back into the existing structure, so that machines connected to DNSCrypt’s servers can communicate with regular servers. This in essence serves to extend the totality of protocol, due to the fact that it adds a layer of potentially desirable capabilities. It is not at odds with normal DNS servers; ultimately it is in compliance. Direct resistance to protocol is therefore possible, but in essence only serves to extend the totality of protocol.

Another way of understanding the relationship between resistance and protocol is by understanding protocol as a tool for resistance or as an instrument used for suppressing resistance. Protocol as an instrument for suppressing resistance is exemplified by Yahya Jammeh, Gambia’s president during the 2016 election. Jammeh read reports that his rival Adama Barrow used social media as a way to organize his rallies. In turn, Jammeh brought down not only social media websites, but used the hierarchical DNS structure to black out all of Gambia’s internet (Petesch, 2016). This demonstrates the fact that if one has access to the higher echelons of the DNS hierarchy, this power can be used as a means to exert control over a large group of internet users. On the other hand, a notable example of the DNS structure being used as a tool for resistance is the DNS hack by the Syrian Electronic Army. This hack was aimed at taking over the domain of the New York Times in order to spread an anti-war message (Vaughan-Nichols, 2013). Crucially, by hacking Melbourne IT, the DNS provider of the New York Times, the hackers were able to get access to an even larger share of websites, including Twitter. What this demonstrates is that the hierarchy of the DNS structure was (mis)used in such a way that the hackers were able to extend their control and message of resistance to a larger audience than just that of the New York Times.

Another notable form of resistance is the Distributed Denial of Service (DDoS) attack. DDoSing an online service or resource involves overwhelming it with traffic from a large number of sources (computers known as bots) simultaneously. Although this form of attack is often used for malicious ends, there are numerous cases where DDoSing is used as a form of political expression (Zurkus, 2016). For example, a few weeks before the Brexit vote, the UK’s voter registration website was DDoSed (Lomas, 2017). At the same time, Google is notably expanding its free DDoS protection service to US political groups (Conger, 2018). Crucially, DDoSing involves using the Internet’s protocological structure in order to be successful. This is due to the fact that a large number of bots send out connection requests to a DNS server, which is overwhelmed and in turn forced to time out requests. DDoS attacks can also involve sending the target a large amount of data. This is effective in two ways: first, it overloads the target’s bandwidth and makes it inaccessible, and second, the target’s database could potentially be corrupted with useless data (Reiher, 2004). What makes DDoS attacks a notable example is the fact that it is exactly the protocological structure of the DNS that allows these attacks to be an option. Because it is not possible to fundamentally alter the DNS or IPS protocol structure, this form of resistance remains effective, which in turn demonstrates the rigidity of the DNS protocol.
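The mechanism described above, a server forced to time out requests once they arrive faster than it can answer them, can be illustrated with a toy capacity model. The numbers below are arbitrary assumptions chosen only to make the effect visible; this is not networking code.

from collections import deque

CAPACITY_PER_TICK = 10   # requests the server can answer per time step
QUEUE_LIMIT = 50         # pending requests it can hold before refusing new ones

def simulate(arrivals_per_tick):
    """Return (served, dropped) after processing the given arrival pattern."""
    queue, served, dropped = deque(), 0, 0
    for arrivals in arrivals_per_tick:
        for _ in range(arrivals):
            if len(queue) < QUEUE_LIMIT:
                queue.append(1)
            else:
                dropped += 1             # request is refused or times out
        for _ in range(min(CAPACITY_PER_TICK, len(queue))):
            queue.popleft()
            served += 1
    return served, dropped

if __name__ == "__main__":
    normal = [8] * 20                    # traffic below capacity: nothing is dropped
    flood = [8 + 200] * 20               # a botnet adds 200 extra requests per tick
    print("normal:", simulate(normal))   # (160, 0)
    print("flood: ", simulate(flood))    # most requests, legitimate or not, are dropped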

What becomes evident from this line of enquiry is the fact that although the totality of the IPS can be upset, as made evident from the example of DNS hijacking, this is not an act of resistance to the IPS per se. At the same time, acts of resistance to the IPS, and in particular to the DNS protocol, as exemplified by DNSCrypt, invariably only serve to extend the totality of the IPS. This means that the totality aspect, as suggested by Galloway, and essential to the regulated flow of the IPS, holds true even under close examination. Using the DNS as a tool for and against resistance, however, is possible. This is made evident from political DDoS attacks and the example of Gambia’s 2016 elections. Crucially, in order for a DDoS attack to be noticeable and efficient, a particular website has to go down or government servers have to be unavailable, for example. This hints at the fact that the IPS does not exist in isolation and that websites and databases are closely related to it. In the subsequent section I will further investigate other aspects of the IPS as they relate to websites, databases, code and the other key players/components of the Internet as a digital space.


The three layers of the Internet

“We are all now connected by the Internet, like neurons in a giant brain.” –Stephen Hawking (2014).

As I previously mentioned, the Internet Protocol Suite does not exist in a vacuum. For example, websites & applications make use of the IPS. These websites & applications are in turn linked to databases, all of which is written in code. More concretely, the IPS functions like a backbone or core structure for the Internet as it is used. Not only does this backbone make use of code and databases in order to function; the way we interact with the Internet is in essence through code and databases. In turn, applications and websites give us a comfortable way of interacting with other Internet users, the databases and the underlying code. What this means is that solely analyzing the Internet’s protocological structure would leave a large gap in the discussion as it relates to power and control structures. Additionally, the data & code and platform (applications & websites) layers have a tangible, direct relationship to the protocol layer, a relationship that I will also discuss in regard to blockchain.

In the early 1990s the Internet was initially received as a liberating medium where one could “reject the authorities of distant, uninformed powers” (Sills, 2018, par. 6). However, what becomes evident from the TCP/IP & DNS dichotomy is that the Internet is still a space in which power and control can be exerted. This has also been identified by Lawrence Lessig in Code and Other Laws of Cyberspace. Crucially, what becomes evident from Lessig’s analysis is that code, data and databases play a crucial role in power relations on the Internet. More specifically, in both Code (2003) and the subsequent update Code: Version 2.0 (2006), Lessig crystalizes how the economic needs and desires of online companies dictate how they structure their code and databases. In Lessig’s own words, “[w]e are just beginning to see why architecture of the space matters – in particular, why the ownership of that architecture matters” (p. 7). Herein Lessig does not refer to the physical architecture of wires and datacenters, but instead refers to what makes the digital environment structurally sound, namely code, hence “code is law” (p. 6). In turn, this architecture allows for the exertion of control and power in the digital environment. In essence, Lessig, similar to Galloway, describes how the Internet set out to be a liberating space but through the invisible hand of commerce became “a distributed architecture of regulatory control” (p. 5) where power exists nonetheless. What Lessig therefore describes is the fact that data is valuable, meaning that code, data and databases are structured in such a manner that the most valuable or largest total sum of data is extracted from interaction with users. In direct relation to the Internet Protocol Suite, code is law simply because protocol is written in code and all machines adhere to it. Similarly, the DNS is in essence a database.

The platform layer of the Internet is important to define as it functions as the predominant layer of interaction with the Internet. Evidence hereof is found in the work of José van Dijck, in De Platform Samenleving [The Platform Society] (2018). In this book, van Dijck argues that a small group of internet giants function as gatekeepers to the Internet. These platforms, which van Dijck dubs “the GAFA platforms” (Google, Apple, Facebook and Amazon), with their success and popularity facilitate a large part of online interaction, which directly influences real social interactions, structures and relations. In essence, “as ‘market masters’ they organize the environment in which interactions happen or not and under what circumstances these interactions take place”3 (van Dijck, 2016, p. 10). In other words, the GAFA gatekeepers or market masters directly influence interaction on the Internet. Equally important is the fact that they take on aspects of central gatekeepers in the sense that they function as central hubs for online interaction. These gatekeepers in turn have a vested interest in maintaining their gatekeeper position because it will invariably provide them with the largest amount of data. In turn, this data can be sold for a profit; this is why Google is profitable. It therefore takes away from the decentrality of the Internet by adding a layer of centrality to it. The relation to protocol herein is that the protocological design of the Internet makes it possible to curtail or punish the GAFA gatekeepers. This is exemplified by the EU Cookie Law, which is “a piece of privacy legislation that requires websites to get consent from visitors to store or retrieve any information on a computer, smartphone or tablet” (Cookielaw, 2011, par. 1). The reason this can be enforced is that the DNS servers are structured in a hierarchical manner and lend themselves to centralized control.

What a three-layered understanding of the Internet suggests is that a control mechanism that engages with all three layers would be most efficient. This is demonstrated by the PRISM program (Planning Tool for Resource Integration, Synchronization, and Management), a global surveillance program of the US National Security Agency (NSA). As made evident from the PowerPoint presentation that was leaked, the PRISM program made use of the protocological structure of the Internet and the underlying ruleset in order to fully engage in espionage:

[Slide from the leaked PRISM presentation. Source: Theguardian, 2013]

3 Original: “Als ‘marktmeesters’ organiseren zij de omgevingen waarin verbindingen al dan niet tot stand

This slide shows that most internet traffic flows through the US, the reason being that the US hosts a large number of DNS servers that are cheap to access. The NSA therein uses the protocol design of the Internet in an unintended manner to gain insight into the nature of connections between machines, a form of deep packet inspection (Longford, 2005). Interestingly, the (ab)use of the Internet Protocol Suite also demonstrates what Noble described in 1984 as the double life of technology: one life “which conforms to the intentions of designers and interests of power and another which contradicts them – proceeding behind the backs of their architects to reveal unintended consequences and unanticipated possibilities” (Noble, 2011, pp. 324-5). This in turn demonstrates the fact that control and power dynamics in the digital environment often work in contradicting and competing ways.

PRISM also demonstrates how the data & code and platform layers play into the discussion on power relations. Similarly, it points to the indirect influence of commerce on the digital environment. This is due to the fact that owners of data, such as Yahoo!, Google, Facebook, etc., are commercial companies that buy and sell data. In turn, it is in their best interest to harvest as much data as they can and to use it responsibly (not using it responsibly should theoretically lead to a loss of commercial viability through loss of customers and reputation). However, it is exactly this commercial interest in data that makes these companies such prime targets for data collection. At the platform layer, these companies function, as van Dijck argues, as gatekeepers. This gatekeeper position primes these companies for PRISM data collection. It is therefore no surprise that the companies were approached and recruited by the NSA for PRISM.


The PRISM example demonstrates three elements in relation to protocol, control and power. First, the Internet's protocological design can and will be used as a vehicle for gathering insight into data generated on the Internet. This in turn is evidence that the protocological structure can be used as a way to exert control. Crucially, this increases the power of the US government because of its ability to conduct one-sided monitoring, making resistance, and the organization of resistance, more difficult and consequently shifting power relations. Second, PRISM lays bare the fact that resistance on the protocological level is immensely difficult. For example, there have been theoretical attempts aimed at encrypting DNS traffic; however, researchers concluded that "the possibility of massive encryption of DNS traffic is very remote" (Grothoff et al., 2015, p. 1). Third, PRISM demonstrates that the three layers of the Internet play a significant role in understanding the complete picture of power relations on the Internet.

Another important component to discuss is the governance of the Internet; in other words, who or what the decision makers are when it comes to protocol maintenance, control over data and the development of platforms. As I will demonstrate later, blockchain technology significantly changes the governance structures of all three layers by changing the relationship between decision makers and those engaging with the technology from other perspectives.

Control and power on the platform layer ultimately lie with the companies and persons that run them. For example, whether Twitter continues to run as a social media platform or wishes to develop further into a sales platform falls completely upon the company itself. Herein the only real major restrictions, depending on the country, are legality, religion and ethics: a hypothetical Twitter sales platform could not, for example, sell drugs, promote hate speech or promote acts of aggression against a particular religion. Most typically, accounts on these platforms and the data generated by them are owned and decided over by the platforms themselves. This means that control and power predominantly lie with platform owners. What this leads to is a scenario in which undesirable voices can be "deplatformed." For example, Alex Jones, the controversial character and presenter of Infowars, has recently been deplatformed from YouTube, Apple's app store, Twitter, Facebook, Spotify and many other avenues of digital social interaction (Koebler, 2018). The problem is that nothing can stop platforms from deplatforming undesirable voices and that deplatforming actually works. Power and control therefore lie with the platforms themselves and not with the users. In other words, platforms have the power to control who has a voice and who does not.

Control over data, however, is a more complex scenario. On the one hand, given that data can easily be duplicated, it is difficult to really have control over data without isolating it completely. On the other hand, the transfer of data can be governed in the sense that data can be treated unequally, as some data streams can receive priority over others. This is made evident by the ongoing Net Neutrality discussion. Net Neutrality dictates that all data on the Internet should be treated equally; in other words, Internet service providers (ISPs) are not allowed to charge extra for particular websites or throttle particular services. This does not guarantee that it is not done, however, as made evident by Comcast's throttling of the BitTorrent service, a practice documented among ISPs in dozens of countries (Sar, 2011). Ultimately, what restricts these ISPs is government legislation. In turn, government legislation is dictated by political and economic objectives, which means that the EU has different Net Neutrality rules than the US. Governance of data is therefore ultimately the responsibility of governing bodies; however, ISPs will actively resort to exerting some form of control over data streams through direct throttling. What this means is that ISPs can control and upset the ability of distributed machines to communicate with one another freely. In turn, this adds a hierarchical element to an otherwise hierarchy-free environment and places the ISP in a powerful position.

Governance of the Internet Protocol Suite should be considered in two tiers. First, the RFC documents, written by the designers, dictate the terms under which the distributed network of computers communicate with one another. Communication between computers is therefore not restricted at any level on the protocol layer. This idea is expressly reiterated in RFC 3271, "The Internet is for Everyone" (Cerf, 2002). However, this document also states that "it will only be such if we make it so" (Cerf, 2002, par. 4). The significance hereof is that even though the protocol is designed to be accessible and open to all, access to it can still be restricted. The second tier, the operation of the Internet, is the responsibility of the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Society (ISOC), two American non-profits. ICANN is responsible for the maintenance and procedures of the root DNS servers, while ISOC handles Internet standards, policy development and education. Crucially, however, neither the RFC documents, ICANN nor ISOC have any legal authority over the Internet's structure or traffic. Instead, this falls upon the national governments in which the DNS servers are located, which have final authority over their traffic (Zittrain and Edelman, 2003). What becomes evident here is that although the protocol's design dictates that communication is open and free, there are still forces that can restrict the protocol's efficacy.


Finally, the Internet Protocol Suite, as a protocological structure that governs how machines communicate, desires totality. In other words, there exists only one way, one ruleset, with which machines can communicate with one another. This is crucial to its success, because it gives all machines a unified protocol to adhere to. A serious issue with this totality is that it lacks alternatives. Alternatives would provide channels through which resistance to the Internet could manifest, as they would allow machine users to subvert the totality of the Internet. At the same time, they could potentially propel and innovate our digital communication.

What should be noted is the fact that nowhere in this discussion of control over the Internet did the users enter the discussion. This effectively means that, as a user, you have no control over the Internet whatsoever. The only way one can vote is with one's money, i.e. by choosing one particular ISP over another, or by moving to another country. In turn, this underlines the fact that although the protocol is designed to make communication open, the control over the Internet is very much closed. This is essentially a form of two-dimensional power, because control over the Internet's protocol development or ownership has not entered public discourse.

What closer analysis of the three layers of the Internet therefore demonstrates is the following. Firstly, code is law. The crucial component herein is that it is not the wants and needs of users, but the economic desires of companies, that dictate exactly how this code is structured. These companies at the same time function as gatekeepers to the Internet. In other words, a majority of the data & code structures that day-to-day internet users engage with are oriented towards one specific purpose: the gathering of data and the consequent monetizing of said data. This suggests an immensely skewed relationship of power and control in favor of the GAFA companies in relation to their users. Closer investigation of the three layers also demonstrates that an institution that would hold some form of control over all three layers would be immensely powerful. More specifically, the NSA's tapping into the DNS structure and requesting of user data has provided it with a very powerful surveillance tool. Herein the efficiency of the PRISM tool is amplified by the fact that relatively few large corporations, such as GAFA, hold a majority of this data, as it is in their economic interest to do so. A crucial component that becomes evident in this analysis is that the wants and needs of users and consumers are never the purpose of the exercise. This in turn suggests that power and control over the three layers lie predominantly with the few and the powerful.

This completes my analysis of the IPS and the Internet as a space. Next, I will analyze the technical workings of the blockchain protocol and the blockchain space. Ultimately, the aim herein is to answer questions similar to those I posed in the IPS/Internet section in order to compare the two digital spaces. This comparison, coupled with the theories I used in the previous section, will provide an overall picture of control, power and power relations in blockchain.


Understanding blockchain

"Templates of how we organize and govern such as partnerships, corporations, or nonprofits have not been updated in centuries, we still use many of the ancient rules and vernacular today, patching expanding gaps of trust and accounting all stemming from increasing modernized complexity." -Richie Etwaru (2017, p.38)

The Internet is great for many things, such as sharing information, communication and providing entertainment. We use the Internet for e-mail, social media and cloud computing, all of which were previously incredibly expensive to organize or simply impossible. Although the Internet has many important uses, there is a set of practices that the Internet is simply not suited for, especially when it comes to governance, economic and business activities. For example, it is not possible to verify one's identity on the Internet; this is captured by the 1993 cartoon by Peter Steiner featured in The New Yorker:

(Peter Steiner's cartoon "On the Internet, nobody knows you're a dog." Source: Fleishman, 2000)

This makes it immensely difficult to undertake tasks related to governance, such as e-voting. At the same time, it is not possible to transfer money, determine ownership of digital assets or undertake actions that require a form of trust. As a result, the Internet is still the domain of intermediaries. For example, the biggest hotel business in the world is Airbnb, a company that owns no hotels; the biggest taxi company is Uber, which does not own a fleet of taxis; and the biggest video retailer is YouTube, which relies on content that it does not produce itself. In turn, these companies gather data on their users, sell their services and essentially function as middlemen in their respective areas of expertise. Ironically, this is a far cry from the image that the Internet had in the 90s, when it was promoted as a tool to cut out all middlemen (Khandelwal, 2016). The reason that these companies are so prevalent is that they facilitate trust. For example, if someone buys something using PayPal, they trust PayPal to return their funds in case of a dispute. In essence, trust is not built into the Internet's protocological architecture. This means that the Internet is not suited for the verification of identity and ownership, nor is it suited for the transfer of money or other actions requiring trust. In turn, large intermediary middlemen have taken over this realm and created the services we all enjoy.

A critical issue in verifying ownership and trust in a digital environment is the Byzantine Generals Problem, described by Leslie Lamport, Robert Shostak and Marshall Pease in 1982. They describe how a group of generals, each controlling a portion of a Byzantine army, besiege a city. In order to conquer the city, the generals have to come up with a plan; in the simplest terms, whether to attack or to retreat. Crucially, all generals have to agree and act at the same time, or else the attack will fail. Although this may appear simple, there are two elements which complicate the issue at hand. Firstly, there may be generals with bad intentions, i.e. generals who vote to attack but then retreat, or vice versa. Secondly, the generals are physically removed from one another, meaning they have to send their votes through messengers, who in turn may also be corrupt or die/fail in the process. A system or protocol that nullifies all of these issues is said to have Byzantine fault tolerance. In relation to the digital environment, imagine the computers as the generals and the communication systems that link the computers together as the messengers.
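
To make the difficulty concrete, the following is a small illustrative Python sketch of the classic three-general case. It is my own toy construction, not taken from Lamport, Shostak and Pease's notation: it shows that a loyal lieutenant receiving plain, unsigned messages cannot distinguish a lying commander from a lying fellow lieutenant, so no decision rule based on those messages alone can always be correct.

def lieutenant_1_view(order_from_commander, relay_from_lieutenant_2):
    # Everything Lieutenant 1 can observe: the order it received directly
    # and what Lieutenant 2 claims the commander said.
    return (order_from_commander, relay_from_lieutenant_2)

# Scenario A: the commander is the traitor. It orders Lieutenant 1 to "attack"
# and Lieutenant 2 to "retreat"; loyal Lieutenant 2 honestly relays "retreat".
view_if_commander_lies = lieutenant_1_view("attack", "retreat")

# Scenario B: Lieutenant 2 is the traitor. The loyal commander orders both
# lieutenants to "attack", but Lieutenant 2 falsely relays "retreat".
view_if_lieutenant_lies = lieutenant_1_view("attack", "retreat")

print(view_if_commander_lies == view_if_lieutenant_lies)  # True: identical views

# In Scenario B, Lieutenant 1 is obliged to obey the loyal commander and attack.
# In Scenario A, the same decision rule, applied by Lieutenant 2 to its mirror-image
# view ("retreat" from the commander, a relayed "attack"), would make Lieutenant 2
# retreat, so the two loyal lieutenants end up acting differently. No rule based on
# these unsigned messages can satisfy both demands at once.

This indistinguishability is exactly what the signature-based solution discussed below removes.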

In order to solve this issue, Lamport, Shostak and Pease suggest the addition of Lieutenants that execute in unison and in correspondence with what the Commander initially ordered, the latter being important in a scenario in which the Commander becomes disloyal. At the same time, they provide three other options to help establish Byzantine fault tolerance, one of which requires a signature that cannot be forged. In digital terms, this is achieved using public-key cryptography. Herein a message is 'hashed,' which is the act of using a mathematical function to change a message into a collection of numbers and letters that represents the original message. This form of signature use relies on a cryptographic system that functions according to two keys: one is a public key, which may be shared and essentially functions as an address; the other is a private key, which functions as a decryption tool for messages sent to the public key. The following schema helps visualize this process:

(simplified private/public key structure)

This solution fixes the issue of a messenger changing the message or knowing the contents thereof, but it does not immediately create a Byzantine fault tolerant environment.
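
To give a concrete feel for the hash-then-sign mechanism described above, below is a minimal Python sketch. It is a deliberately insecure, textbook-sized example: the key numbers are the classic classroom RSA values, and the function names (digest, sign, verify) are my own illustrative choices rather than anything prescribed by the thesis or by Lamport, Shostak and Pease.

import hashlib

# Toy "hash-then-sign" demonstration. The key below is the classic textbook RSA
# example (p = 61, q = 53, e = 17, d = 2753); it is far too small to be secure
# and is used only to show the mechanics.
n = 61 * 53   # public modulus (3233); shared as part of the public key
e = 17        # public exponent: (n, e) is the public key, the "address"
d = 2753      # private exponent: (n, d) is the private key, kept secret

def digest(message):
    # Hash the message with SHA-256 and fold it into the toy key's range.
    return int(hashlib.sha256(message.encode()).hexdigest(), 16) % n

def sign(message):
    # The sender transforms the hash with the private key; this is the signature.
    return pow(digest(message), d, n)

def verify(message, signature):
    # Anyone holding only the public key can check that the signature
    # corresponds to the hash of the message they received.
    return pow(signature, e, n) == digest(message)

order = "attack at dawn"
signature = sign(order)
print(verify(order, signature))              # True: the order is intact and authentic
print(verify("retreat at dawn", signature))  # almost certainly False: tampering is detected

The point of the sketch is the division of labor: anyone can check a signature with the public key, but only the holder of the private key could have produced it, which is precisely the unforgeable-signature property the Byzantine solution requires.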

What should also be noted is the fact that the Byzantine Generals Problem is an essential problem in relation to decentralized or distributed organizational structures. More specifically, whereas in a hierarchical system each commander follows the instruction of the leading commander, king, queen or other leader, in a decentralized or distributed system the commanders have to reach a consensus and act on that consensus simultaneously. What this means is that if societies wish to progress from centralization to decentralization and ultimately to distributed organizational structures, the Byzantine Generals Problem is an essential issue to tackle. The central issues in the Byzantine Generals Problem are trust and integrity, both of which distributed organizations require. Integrity here means a functional system that is completely consistent, safe and error free; trust means providing truth and reliability without requiring, but still allowing, investigation and provability. In a digital environment this means eliminating the possibilities of technical failures and malicious actors.

Blockchain technology aims to achieve both trustworthiness and integrity in peer-to-peer distributed systems. Herein blockchain is used as a name for a data structure, an algorithm, a suite of technologies and an umbrella term for peer-to-peer systems with a common application area (Drescher, 2017). More specifically, blockchain as a data structure puts data together in units called blocks. These blocks are linked to one another like a chain, therein forming a block-chain. A good way of thinking about this is to imagine a book: words, sentences and paragraphs are the information, and this information is written on different pages which are connected through their position in the book and their respective page numbers. In essence, the information is neatly ordered and chained together, relatively similar to data in the blockchain, as illustrated by the sketch below. Blockchain as an algorithm "refers to a sequence of instructions that
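
To make the block-and-chain description above concrete, here is a minimal, illustrative Python sketch. The class and field names are hypothetical choices of my own; it is not a reproduction of Bitcoin, Ethereum or any other real implementation, and it omits everything (consensus, mining, networking) except the chained-hash structure itself.

import hashlib
import json
import time

class Block:
    # One "page" of the book: some data, a timestamp, and the hash of the
    # previous block, summarized in this block's own hash.
    def __init__(self, index, data, previous_hash):
        self.index = index
        self.timestamp = time.time()
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "data": self.data, "previous_hash": self.previous_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def is_valid(chain):
    # Every block must still hash to its stored value and point at its predecessor.
    for previous, current in zip(chain, chain[1:]):
        if current.previous_hash != previous.hash:
            return False
        if current.hash != current.compute_hash():
            return False
    return True

# Build a three-block chain, like consecutive pages referencing the page before.
chain = [Block(0, "genesis", "0" * 64)]
chain.append(Block(1, "Alice pays Bob 5", chain[-1].hash))
chain.append(Block(2, "Bob pays Carol 2", chain[-1].hash))
print(is_valid(chain))   # True

chain[1].data = "Alice pays Bob 500"   # tamper with an earlier "page"
print(is_valid(chain))   # False: the stored hash no longer matches the block's contents

Because each block's hash covers the previous block's hash, altering an earlier "page" invalidates every later one, which is what gives the data structure its tamper-evidence.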
