
Market Failure and Uncertainty – An Attempt at Harmonizing Neoclassical and Austrian Perspectives of Market Failure

Abstract

Policy decisions on issues of market failure are often made with the help of economic analyses rooted in Neoclassical Economics. Israel Kirzner and various other scholars of the Austrian School have criticized this school of thought and provided alternatives. As different Austrian scholars have argued, much of their disagreement with it seems to concern issues of knowledge. In this thesis, I aim to identify the exact issue at the core of this disagreement about knowledge between the two schools and to show how it can help to arrive at a new, more nuanced understanding of market failure. After comparing it with related questions in other fields, I find it to be the perceived significance of non-linearities in the probability-payoff matrix of action hypotheses. The merit of either side’s arguments regarding issues of market failure therefore largely hinges on a statistical phenomenon. This makes the two schools much less rival theories for the same questions and much more complementary tools for different sets of tasks. By comparing how both theories’ implications differ in relation to their perception of the significance of non-linearities, I explore what such a common understanding could look like.

Fall 2016

10824502 – Timo Bremer
Universiteit van Amsterdam


Statement of Originality

This document is written by Student Timo Bremer who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

Abstract
Table of Contents
Introduction
(1.1) Market Failure – A First Definition
(1.2) Market Failure – The Neoclassical Perspective
(2.1) Neoclassical Economics – Basic Assumptions
(2.2) Neoclassical Economics – Additional Assumptions
(3.1) Austrian Critique of Neoclassical Economics
    Implications
    Abstractions
(3.2) Austrian Critique of Complementary Methodology
(4.1) Uncertainty – Degrees that Make the Difference
(4.2) Uncertainty – Pervasiveness
(4.3) Uncertainty – In Economic Theory
(5) Kirzner’s Market Process Theory
    Assumptions
    Discovery and Arbitrage in a Simple Example
    Applicability
    The Efficiency of Arbitrage
    A Process Based on Ignorance
    The Efficiency of the Process
    Implications
    Implications for Market Failure
(6) Conclusion
Summary


Introduction

Market failure is a pervasive phenomenon: it occurs wherever the opportunities for cooperation to further and align the individual interests of different parties have not been fully exploited. With the Neoclassical framework of marginal-utility decision making, market equilibria and welfare calculation, economists have developed a helpful, stylized way of analyzing and solving such problems.

However, numerous Austrian critics have questioned the realism of the assumptions of Neoclassical Economics, pointed out the multitude of phenomena it finds no explanation for, and been dissatisfied with the sometimes counterintuitive implications that have been derived from it. Since the first critics came forward, both sides have criticized the other’s methods and approach, but little progress has been made in evaluating the factors on which their disagreement is based in order to resolve the conflict. Much of their disagreement seems to concern issues of knowledge. In this thesis, I aim to examine what this disagreement about knowledge is based on and what implications can be drawn from it for the study of market failure.

For that purpose, I will first seek to define market failure and review Comparative Statics as a Neoclassical approach for examining it (1). I will take this as the foundation to then review the assumptions upon which such an analysis is based (2). In examining the limitations which such assumptions impose on an analysis and discussing how Austrian critics have taken these limitations as the basis for their critique, I trace the disagreement about knowledge between the two schools to an issue of uncertainty in section (3.1). In section (3.2) I review why this same issue arises in just the same way with the methodologies with which Neoclassical analysis is often complemented and which are equally subject to debate. In section (4) I will then further investigate the issue that I have identified as being at the core of the disagreement about knowledge by bringing in probability theory and the treatment of knowledge in other fields. Only after that, in section (5), will I review Israel Kirzner’s market process theory, as one of the most interesting alternatives based on Austrian assumptions, to examine what implications a different perspective on uncertainty entails. Finally, in section (6), I will draw a conclusion on what understanding has been gained and what recommendations might be drawn for evaluating policy questions.


(1.1) Market Failure – A First Definition

Economists talk about market failure if the circumstances of a situation are such that individual utility maximization does not entail total utility maximization and that total utility (or welfare) can therefore be increased by interfering with individual decision making or its incentives – often through a kind of hierarchical arrangement (Rubinfeld & Pindyck, 2013).

Market failure is a broad topic. It is frequently subdivided into many different categories. Examples of the partly overlapping classifications of market failure include externalities, public goods, adverse selection, asymmetric information, multiple equilibria, path dependencies, switching costs, search costs, network effects, transaction costs and market power (Rubinfeld & Pindyck, 2013; Todaro & Smith, 2011; Viscusi, Harrington & Vernon, 2005; Pepall, Richards & Norman, 2011; Motta, 2004). In this thesis, I choose externalities and market power as two very common kinds of market failure with which to illustrate the Austrian and Neoclassical disagreements about how to analyze and find solutions for market failure.

Market failure doesn’t just take many different shapes; it is also a very pervasive and important issue. In a population of utility maximizers, companies and many other hierarchical organizations whose main purpose is an economic one can only hold legitimacy and be desirable to the extent to which the constraints they impose on individuals help to maximize total utility. In other words, in such a world, their business model is to solve what I defined above as market failure. Ronald H. Coase explains this well (Coase, 1937). One might even consider whether this isn’t at least partly true for governments and government actions as well. Especially those governments which, for one reason or another, must be very concerned with legitimacy should be largely guided by market failure considerations in their attempt to satisfy a utility-maximizing populace.

That makes market failure central to a vast realm of questions of human organization and the driving force behind much of the changing institutional landscape. It makes the methods with which it is understood and analyzed all the more important. Those methods are what I would like to turn to next.

(1.2) Market Failure – The Neoclassical Perspective

While there are many kinds of models for market failure, most are based in Neoclassical Economics. One such approach is the methodology of Comparative Statics. With it, one can calculate total welfare as it results from individual utility maximization and compare it to alternative scenarios, thereby classifying a situation as market failure (Laing, 2011). Not only would it be difficult to speak of market failure at all without such classifying analyses; with them, the welfare effects of possible remedies can also be approximately calculated to determine the most promising course of action in terms of total welfare – an exercise that is known as welfare economics (Deardorff, 2014).

Neoclassical analysis is carried out with the help of calculated equilibria. When such equilibria are worked out for isolated markets, they are called partial equilibria, while a more comprehensive, multi-market model is solved with a general equilibrium (Deardorff, 2014). To calculate either of these kinds of equilibria, the decisions of all agents are calculated given their assumed preferences and the conditions that they are assumed to face. In Comparative Statics, which I would like to use to exemplify Neoclassical analysis, it is these equilibria, as they emerge under different circumstances and in response to different possible policies, that are compared. The utility that results from them is added up over all agents to arrive at a measure for total utility or total welfare (Laing, 2011; Kirzner, 1973; Hayek, 1948).

In the following indented paragraphs, I would like to use Comparative Statics to conduct an intentionally limited and simplistic partial equilibrium examination that can then serve as an illustration for the general discussion of Neoclassical Economics throughout this thesis. The first paragraph will deal with externalities, while the second will examine the issue of market power.

1. The first example concerns goods and services for which some of the benefits or costs of their production or consumption accrue to third parties in the form of positive or negative externalities (Rubinfeld & Pindyck, 2013). Polluting activities and national defense are typical examples. From the perspective of total utility maximization, these goods are thought to be over- or underprovided (or over- or underconsumed) under individual utility maximization. Because individual utility maximization neglects the external effects, it can be beneficial to total welfare to interfere with it, for example through government intervention. Governments frequently subsidize or tax the good in question or even ban or mandate it outright (C.I.A., 2016; European Union, 2008 & 2012).


In this example, I would like to use a subsidy in a market with a positive externality. Total welfare is here defined as the sum of producer surplus, consumer surplus and the external benefit of the production or consumption of the traded good. In the diagram below, the situation before the introduction of a subsidy is displayed by Supply, Demand and Social Marginal Benefit, with the difference between the latter two being the External Benefit. All three are plotted in a price-quantity space. Consumer surplus is marked yellow, producer surplus red and external benefit orange. The green area marks the External Benefit that could additionally be realized if more units, up to the efficient quantity Qe, were traded. However, the actually traded quantity is Q* at the price P*.

Diagram 1

Neoclassical Economics allows one not only to compare the status quo to a proposed policy, but also to derive the policy which, at least within the model, would maximize total welfare or be in some other way optimal (Demsetz, 1969; Kirzner, 1973). A commonly used optimality criterion is Pareto Optimality. Pareto Optimality is defined as a situation in which, given the external conditions, no other outcome is possible that is better for at least one participant and worse for none. By itself a Pareto Optimal equilibrium would only be attained if additional conditions such as well-defined property rights, specifically shaped cost curves and absent transaction costs were fulfilled (Rubinfeld & Pindyck, 2013). In my example it is the lack of well-defined property rights that hinders the market from attaining that equilibrium by itself. The Pareto Optimal solution is therefore found by acting as if all property rights were well-defined and making the Private Marginal Benefit, or Demand, equal to the Social Marginal Benefit.

The subsidy is therefore chosen equal to the External Benefit to shift the Demand graph up accordingly. In Diagram 2, the situation after the introduction of such a subsidy is displayed. The subsidy makes up the rectangular area between P* and the point SB for all traded units. At the new efficient quantity and price, Qe and Pe, for the units up until Q*, all that the subsidy has done is redistribute the orange External Surplus from Diagram 1 to the consumers and, through a higher price, also to the producers, in the shape of the green and purple areas in Diagram 2. For the newly traded units Qe-Q*, what had previously been lost as surplus and was marked green in Diagram 1 has now been turned into additional consumer and producer surplus, marked in purple and green in Diagram 2. The additional surplus declines as one moves through the units until, at the marginal unit, the consumer with his subsidy, and therewith society as a whole, is indifferent to buying this unit at the price at which the producer is indifferent to selling it. Total welfare has thus been maximized for the given situation and the subsidy was optimally chosen.

Diagram 2
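To make the comparative-statics arithmetic of this example concrete, the following minimal sketch computes the welfare comparison numerically. It assumes linear inverse demand and supply curves and a constant per-unit external benefit; all parameter values are hypothetical and chosen purely for illustration, not taken from the diagrams.

```python
# A minimal comparative-statics sketch of Diagrams 1 and 2, assuming linear
# curves and a constant per-unit external benefit. All parameter values are
# hypothetical illustrations.

def equilibrium_quantity(a, b, c, d, subsidy=0.0):
    """Quantity where shifted inverse demand a + subsidy - b*q meets inverse supply c + d*q."""
    return (a + subsidy - c) / (b + d)

def total_welfare(q, a, b, c, d, e):
    """Consumer plus producer surplus plus external benefit, up to quantity q.

    The subsidy itself cancels out: it is a transfer from the government to
    consumers and producers, so it does not appear in total welfare.
    """
    # Integral of [(a - b*x) - (c + d*x)] dx from 0 to q, plus e*q.
    return (a - c) * q - 0.5 * (b + d) * q ** 2 + e * q

a, b = 20.0, 1.0   # inverse demand:  P = 20 - q
c, d = 2.0, 1.0    # inverse supply:  P = 2 + q
e = 4.0            # external benefit per unit

q_star = equilibrium_quantity(a, b, c, d)             # unassisted market: Q*
q_eff = equilibrium_quantity(a, b, c, d, subsidy=e)   # Pigouvian subsidy = e: Qe

print(f"Q* = {q_star:.0f}, Qe = {q_eff:.0f}")
print(f"welfare at Q*: {total_welfare(q_star, a, b, c, d, e):.0f}")
print(f"welfare at Qe: {total_welfare(q_eff, a, b, c, d, e):.0f}")
```

With these numbers, the unassisted market trades Q* = 9 units, while a subsidy equal to the external benefit raises the traded quantity to Qe = 11 and total welfare from 117 to 121 – a welfare gain corresponding to the green triangle of Diagram 1.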

2. As a second example, I would like to discuss the abuse of market power through price gouging. For a company that holds market power, the incentives shift away from providing the best possible service at the best possible price and maximizing total utility. Instead, its position of power and lack of competition give it leverage to increase its price above production cost and charge a price closer to what consumers would maximally be willing to pay (Rubinfeld & Pindyck, 2013). As a result, total utility suffers, because some consumers won’t be able to buy as much of the good as would be efficient given both the utility they derive from it and how much it costs to produce it.

Governments often use competition policy to ban activities that are associated with the not otherwise useful attainment and use of market power. In areas of the economy where market concentration is extremely profitable (often called “natural monopolies”) and where the incentives for individual utility maximization differ the most from total utility maximization, governments often go beyond the use of competition policy in order to safeguard total utility and either run these sections of the economy themselves, as in the case of sewage networks, or allow a private company to take this monopoly position under the condition that it submits to economic regulation by the government (Viscusi, Harrington & Vernon, 2005; European Union, 2012; Rubinfeld & Pindyck, 2013).

I am going to consider as example a policy that decreases the market power of a company. This could be the banning of a certain practice that gives it market power, like what the European Commission might do regarding the ways in which Google displays its own services within search results, which it recently raised objections to (European Commission, 2016).

Before the introduction of such a policy the situation is as depicted in Diagram 3. MRb gives the marginal revenue of the company in the price-quantity-space. MC gives its marginal cost. It chooses the quantity it produces at the intersection of the two (Qb) and sells to its customers at the price their demand assigns to this quantity (Pb). The producer surplus is marked in red, the consumer surplus in yellow and the deadweight loss as compared to a situation without market power is marked in blue.

Diagram 3

The fourth diagram displays the situation after the introduction of the policy measure. With reduced market power, marginal revenue (MRa) is now closer to the demand curve than it was before (MRb). Qa and Pa are the new quantity and price. The orange area of surplus is transferred from the producer to consumers. Total welfare increased, because the blue deadweight loss decreased and both consumers (green) and the producer (purple) gained areas of surplus that had previously been part of the deadweight loss. Overall, producer surplus decreased, because the orange area exceeds the purple area in size. While the policy measure can definitely be said to have been total welfare enhancing, it can only be said to be a Pareto improvement if the producer is reimbursed for his loss out of the consumers’ gain.

Diagram 4
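The welfare comparison of Diagrams 3 and 4 can be reproduced with a similar numerical sketch. Here I parameterize market power by an index lam between 0 (price taking) and 1 (full monopoly), so that a policy reducing market power corresponds to lowering lam; the linear demand curve, the constant marginal cost and all parameter values are again hypothetical simplifications of the diagrams.

```python
# A minimal sketch of Diagrams 3 and 4, assuming linear demand P = a - b*q and
# constant marginal cost c. The firm acts on perceived marginal revenue
# a - (1 + lam)*b*q, so lam = 1 is full monopoly and lam = 0 is price taking.
# All parameter values are hypothetical.

def outcome(a, b, c, lam):
    q = (a - c) / ((1.0 + lam) * b)       # quantity where perceived MR = MC
    p = a - b * q                         # price read off the demand curve
    cs = 0.5 * (a - p) * q                # consumer surplus triangle
    ps = (p - c) * q                      # producer surplus rectangle
    q_comp = (a - c) / b                  # competitive benchmark quantity
    dwl = 0.5 * (q_comp - q) * (p - c)    # deadweight-loss triangle
    return q, p, cs, ps, dwl

a, b, c = 20.0, 1.0, 4.0
for lam, label in [(1.0, "before policy (more market power)"),
                   (0.4, "after policy (less market power)")]:
    q, p, cs, ps, dwl = outcome(a, b, c, lam)
    print(f"{label}: q={q:.1f}, p={p:.1f}, CS={cs:.1f}, PS={ps:.1f}, DWL={dwl:.1f}")
```

Lowering lam raises the traded quantity, increases consumer surplus, lowers producer surplus and shrinks the deadweight-loss triangle, in line with the movement from Diagram 3 to Diagram 4.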

These examples give a first impression of market failure and of the ways in which Neoclassical Economics is used to examine it and design appropriate government policies. The next section focuses on the assumptions that underlie this Neoclassical perspective on so important an issue.

(2.1) Neoclassical Economics – Basic Assumptions

The Neoclassical approaches that are commonly used to evaluate questions of market failure have as their central element calculable, fully defined, static states. As part of these, the decisions of all agents and their outcomes can be worked out mathematically for different circumstances and compared in the way demonstrated in the examples. That could make the problem of their assumptions a very simple one. As long as the person or machine that conducts the calculation has all necessary information and enough processing power, they only rely on a single one. They must assume that all factors relevant to human activity follow deterministic laws, so that it is possible to reliably calculate future courses of the world based on its current state.

Some disagree with that assumption and believe that our world is random and chaotic (Prigogine & Stengers, 1997). But as Albert Einstein said, it is and will always be possible that it is only our ignorance that prevents us (humans) from being able to calculate the future with perfect accuracy (Einstein & Born, 1926).

While this insight might lend support to the assumption about the deterministic nature of our world, it also violates the condition under which it is the only necessary assumption. As no one is yet able to calculate the future with perfect accuracy, there must be many things that everybody is ignorant of. Just as intuition suggests with regard to everyday decision-making, economic analyses also hinge on too many complex variables for an analyst ever to know and flawlessly understand them all. As it is not possible to isolate the issue at hand from the great realms of not-yet-understood complexity around it, complete certainty about anything remains unattainable. The necessary knowledge and processing power to solve the near-infinitely complex mathematical problem that accurately models the, for all practical purposes, infinitely many factors that can influence individual decisions and the economy that emerges from them simply are not available. In other words, while the world might be deterministic, it does not appear deterministic to human decision makers. However, how it appears to us, rather than what its fundamental nature is, is precisely what matters when contemplating our decision making (Einstein & Born, 1926; Hayek, 1948; Cilliers & Spurrett, 1999).

I should clarify that it is not claimed here, that one must be omniscient to precisely calculate the future course of the world. For example, the possibility for the development of a new technology which would revolutionize a particular field does not have to be known to the analyst as long as it will not be discovered in the foreseeable future. But because the analyst needs to have all relevant knowledge and also has to know which knowledge is relevant, he requires at least a kind of quasi-omniscience which he does not have (Hayek, 1948; Kirzner 1973).

However, while the world doesn’t appear deterministic to us, it also doesn’t appear completely chaotic and random. If it did, it wouldn’t make any sense for agents to weigh different options and consciously choose between them, because there would simply be no way of knowing which actions might be better than others. In such a world, they would do better just acting randomly and being satisfied with whatever the outcome (Chan, 2015; Douglas, 2007). But everyday experience violently contradicts this world view. Humans have developed large brains with which they make choices every day, sometimes with a lot of effort, because they have learned from experience that doing so can make a difference. The truth therefore must lie somewhere in between the extreme worlds of no and perfect knowledge.

To non-omniscient observers, the potentially deterministic world appears complex (Cilliers & Spurrett, 1999). The world is too complex for anyone to gain certainty about any decision ahead of time, but it is at the same time predictable enough that decisions matter. Decision makers have knowledge of the past and can learn from experience, observing statistical patterns and calculating approximate probabilities. They can even create deterministic models from those calculations which connect causes to their most likely outcomes. But there always remains the unknown effect of the unknown. They never know whether the insights from past experiences really apply to the current situation, and all decisions therefore retain a speculative element. They cannot establish the isolated, deterministic causal links necessary for making decisions in certainty. Instead of resembling the mechanistic optimization calculus of Neoclassical Economics, their actual decision making relies on statistics (Kirzner, 1973; Taleb, 2012; Prigogine & Stengers, 1997; Popper, 1935 & 1982; Nicholson & Snyder, 2005; Hayek, 1948; von Mises, 1949 & 2016).

The uncertainty that remains is not the kind of clearly defined, calculable uncertainty between a few known possible outcomes as one finds it in games. Instead, it is an open-ended, Knightian uncertainty that defines this realm of complexity between complete determinism and complete randomness (Taleb, 2012; Hayek 1948; von Mises 1949 & 2016; Kirzner, 1973). Whenever I speak of uncertainty in the remainder of this thesis, this is the kind I am referring to.

Because decision makers’ knowledge about the world is in such a difficult state, Neoclassical Economics must rely on abstractions in addition to the one about the deterministic nature of the world in order to arrive at calculable static states. Just as in everyday life people respond to the complexity of the real world by creating deterministic mental models of it through abstractions, so Neoclassical Economics does, too, with mathematical models. These can then be used to think about the world and the probable outcomes of possible actions. This is a case of what Nicholson said about science in general: without abstractions, very little theorizing can be done (Nicholson & Snyder, 2005).

(2.2) Neoclassical Economics – Additional Assumptions

The three basic assumptions of Neoclassical Economics that make possible the above exemplified theoretical ways of understanding phenomena of market failure are the following.

First, everything but the endogenous decisions of agents – that is, all exogenous factors which provide the context in which and the constraints under which these decisions are made – is assumed to be unchanging. Just like physics experiments conducted under lab conditions, Neoclassical Comparative Statics holds all factors not immediately relevant to the studied mechanism constant. By making this assumption, analysts alleviate the need to also model the whole rest of the world and can focus only on the isolated issue at hand. It makes their problem a lot simpler (Hayek, 1948; Nicholson & Snyder, 2005).

It is true that, as exemplified by the discussed difference between partial and general equilibrium models, narrower and broader models do exist, making this assumption stronger or weaker respectively. All Neoclassical models, however, must rely on this assumption to at least some extent.

Second, regarding the way that agents make decisions, Neoclassical Economics assumes that everyone tries to maximize their utility and that their preferences adhere to a number of attributes like local non-satiation, quasi-concavity and continuity (Varian, 1992; Nicholson & Snyder, 2005; Weintraub, 2007). As a result of that maximization, it is always the marginal cost against the marginal benefit of the purchase or sale of an additional unit that the individual or firm compares. Maximizing behavior is consistent with buying and/or selling until marginal cost and benefit are equal and the economic agent is indifferent to further purchases and/or sales. These assumptions make the calculation of agents’ choices a lot easier, because with these assumptions their choices take on a rational, goal oriented form and result in the described regularities.
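In symbols, this maximizing behavior reduces to a familiar first-order condition. A generic statement for a consumer with quasi-linear utility (the notation is mine and purely illustrative):

```latex
\max_{q \ge 0} \; U(q) - p\,q
\qquad \Longrightarrow \qquad
U'(q^{*}) = p
```

Purchases continue until the marginal benefit U'(q) of one more unit has fallen to the price p, at which point the agent is indifferent to further purchases; the analogous condition for the firm equates marginal revenue and marginal cost.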

Third, regarding the knowledge that agents have when making these choices, Neoclassical Economics assumes quasi-omniscience (perfect knowledge of all relevant factors in present and future) (Hayek, 1948; Kirzner, 1973; Nicholson & Snyder, 2005; Weintraub, 2007).

While an exact definition of each agent’s knowledge is crucial for being able to predict their behavior, the agents being quasi-omniscient is not strictly necessary. For example, the von Neumann-Morgenstern framework, often called game theory, neatly shows how choices remain calculable even when agents have no knowledge of the choices which their fellow agents are going to make (Nash, 1951; Nicholson & Snyder, 2005; Varian, 1992). However, this particular framework gets very complex once the number of agents rises above a few (except if further abstractions are made, as in evolutionary game theory (Smith, 1982)). Another possible abstraction regarding the knowledge of agents is to divide all agents into a few groups, each of which is assumed to have a different, well-defined state of knowledge (Varian, 2010). However, this does not significantly improve the researcher’s ability to model realistic degrees of heterogeneous ignorance. As it introduces a lot of additional complexity without doing away with the necessity to make further abstractions about the state of knowledge, it is only employed in special cases.

Hence, the assumption of perfect knowledge remains the most common abstraction. It makes the analyst’s job a lot easier, because he can simply assume agents to choose what is optimal for themselves instead of having to consider what information they might be lacking to correctly make such a choice.

Because this assumption is so common, the notion of equilibrium and Neoclassical Economics have become associated with the additional attributes that such a perfect-knowledge equilibrium has. It brings the market into equilibrium in a second sense beyond achieving a static state. Demanded and supplied quantities equal each other. A single, market-clearing price prevails and can be determined by finding the intersection of the market demand and supply graphs. All people who want to buy and/or sell at that price are able to do so, and the buying and selling decisions of all market participants are perfectly coordinated (Hayek, 1948; Kirzner, 1973). All of these attributes further simplify the job of the analyst, so that with these three main abstractions it becomes possible to calculate and compare welfare under different circumstances in the way shown in the examples and to draw conclusions about policy measures from these calculations. That makes this, perhaps the strongest, abstraction a very crucial one for Neoclassical Economics.

One justification for this choice is that knowledge is often assumed to have a tendency to disperse over time (Hayek, 1948). With this assumption, the eventual result of any policy measure, after all adjustments have run their course and all rigidities are overcome, is the same as what would prevail if everybody had perfect knowledge. If knowledge disperses over time, so the reasoning goes, there can be no static states to be compared and no equilibria at which the economy ceases to change until the economy reaches a state of perfect knowledge. With the assumption of knowledge dispersal, perfect knowledge becomes the defining characteristic of equilibrium (Hayek, 1948).

While Neoclassical Economics and these three assumptions are very helpful to understanding how the important issue of market failure manifests in a complex world, I would now like to scrutinize the possible problems that these assumptions entail and the ways in which they might make Neoclassical Economics vulnerable to the impact of uncertainty.

(3.1) Austrian Critique of Neoclassical Economics

In light of the difficult state of our knowledge about the world, a balance has to be struck when making a theory. Depending on how significant the impact of unknown unknowns is, a theory’s abstractions might either be too narrow or too broad. As already mentioned, abstracting too narrowly could make the remaining elements too complex and the required knowledge too large to use a model at all. At the very least it would unnecessarily limit the scope of new insights that the model can provide. On the other hand, abstracting too broadly can decrease accuracy and applicability to the point of making models useless for understanding reality. As I will now come to, behind much of the criticism of many Austrian critics of Neoclassical Economics seems to be the perception that, given the degree of uncertainty that human decision makers face in this world, the assumptions of Neoclassical Economics are too broad when it comes to the issue of knowledge.

According to Nicholson there are two ways of assessing such a claim, and both are employed by the critics (Nicholson & Snyder, 2005). The first is to assess the reasonableness of the theory’s predictions and implications. If these are generally accurate and if the theory allows one to understand what is going on without significant gaps, it is likely that the abstractions were reasonable and well-chosen. If many important phenomena that can be observed in the real world remain unexplained, or if many predictions contradict what can be observed in reality, the theory should be taken with a grain of salt. The second way of theory validation is to examine a theory’s assumptions directly with regard to whether they oversimplify and misrepresent reality or, on the contrary, fail to isolate useful regularities because they are too narrow.

Implications

In this section I will summarize the criticism that falls under the first category and directly concerns the implications.

Neoclassical Economics offers explanations for a lot of phenomena that would otherwise go unexplained. With there being very few alternative theories, much of the existing knowledge about economics, and almost all of the knowledge about market failure, has been learned or at least rationalized through Neoclassical Economics (Kirzner, 1973). But critics point out that, independent of whether this knowledge is accurate, it is at least not very comprehensive.

There are many crucial aspects and phenomena of the real economy which it would be desirable to understand, both for their own sake and for the role that they play in any question that economists examine, but which are in fact assumed away by Neoclassical Economics and about which it therefore offers few insights. As it is especially the third assumption of Neoclassical Economics that assumes away much of the real world’s complexity, it seems to be especially the issue of knowledge, and the role that it plays in everything from individual decisions to the economy at large, that is poorly understood. For a science concerned with decision making, knowledge seems like a very crucial issue to assume away.

For example, Neoclassical Economics doesn’t offer any insights into the creation and dispersion of knowledge in the economy and the emergence of coordination, which Hayek calls the successful “division of knowledge” (analogous to the division of labor) (Hayek, 1948). It furthermore seems to offer little to no input on the calculation debate and the question why highly planned and centralized economic systems like the Soviet one seem to have performed worse than more decentralized ones. It struggles not only to explain the limitations of planning, but also the emergence of order in its absence (Hayek, 1948; von Mises, 2016; Taleb, 2012; Kirzner, 1973 & 1989).

Other issues that are related to knowledge in less obvious ways, and are therefore also assumed away by Neoclassical Economics, are the role of competition in the layman’s sense of the term (as a process instead of a static state) (Hayek, 1948; Kirzner, 1973) as well as entrepreneurial activity and the source of the pure profit of the kind most companies make (Hayek, 1948; Kirzner, 1973).

While Neoclassical Economics seems like a very powerful and useful theory overall, its insufficiencies regarding the issue of knowledge indicate that the critics might indeed have a case: choosing different, less stringent abstractions might be insightful for understanding certain aspects of the economy and market failure. Such an alternative could act as a check on the possible biases of Neoclassical Economics.

Abstractions

The second way of theory validation concerns a theory’s abstractions.

It’s not just by looking at the implications that one could conclude that the third abstraction, regarding the quasi-omniscience of agents, might be stronger than optimal despite having the assumption of knowledge dispersal as its justification. It could be valid if the knowledge that agents have either approximated quasi-omniscience, or if moving towards different equilibria (the equilibria of perfect knowledge towards which knowledge dispersal makes agents move) had the same relative desirability as arriving at them. However, the critics argue that there is no reason to assume either.

Various critics, like Hayek, Kirzner and Taleb, would claim that human decision makers are too far from perfect knowledge for continual knowledge dispersal to make it a reasonable approximation. The decisive disagreement underlying this claim concerns the prevalence and relevance of unknown unknowns, or uncertainty as I have described it as necessarily confronting all human decision makers in a complex world. While an absence of prevalent and relevant unknown unknowns would make any degree of knowledge approximate quasi-omniscience, their prominence could render even comprehensive knowledge relatively worthless (Kirzner, 1997; Taleb, 2012). The critics argue that this is often the case. They point to the fact that, even though individual human beings as well as humanity as a whole gain knowledge over time, no one has yet come close to being omniscient. The rate of learning is too slow and ignorance too great for continual learning to imply omniscience even on an approximate level.

A similar case can be made regarding the point that moving towards different equilibria might have the same relative desirability as arriving at them. If one can expect the world to behave in a somewhat predictable fashion, without a massive surge of unforeseen events invalidating the majority of what one had previously thought true, then it might be reasonable to make the case for Neoclassical Economics here. If one, on the other hand, believes there to be many potentially highly important factors about which one has no idea, then one will likely find this abstraction rather daring. In fact, it will then seem very close to what Demsetz famously called the Nirvana Approach and what Kirzner claims Neoclassical Economics is often guilty of. The Nirvana Approach is to derive recommendations for reality from comparing it to a state that can be shown to be optimal if a number of assumptions (like all agents having perfect knowledge) that are in fact violated were fulfilled. Depending on how one assesses unknown unknowns, there is little reason for considering that a valid way of going about things (Kirzner, 1973; Demsetz, 1969).

While it is this third abstraction that seems to be the strongest and responsible for most of the issues regarding the implications of Neoclassical Economics, in some ways the first of the listed abstractions creates the same kind of problem (Hayek, 1948; Kirzner, 1973). The approach of holding a broad range of possible exogenous factors constant is reasonable where these factors are either very stable or have only minor effects. But if one perceives unknown unknowns to be a significant factor, one might expect these external factors to be highly variable and to frequently determine large parts of the observed variation. Assuming them away would then be a bad strategy. In a way this comes down to the same disagreement about the worth of human knowledge in the face of the world’s complexity as the criticism of the third assumption, only here more with regard to the analyst than to the agents of the theory.

Led by this impression of uncertainty, Hayek makes it one of his main arguments about Neoclassical Economics to point out the impossibility of trying to quantify the near infinite number of factors which he sees as potentially highly relevant (Hayek, 1948).

He argues that one way in which the contextual and variable nature of markets becomes obvious, and the problems of ignoring it manifest themselves in Neoclassical Economics, is that analysts frequently justify the failed predictions of their models with changes in the external conditions. The conditions haven’t changed, he says. The external conditions at a specific point in time are time-invariant by definition. They have just not remained constant as they were assumed to do (Hayek, 1948). The problem therefore must be attributed to the failure to consider other factors as potentially relevant in making the initial prediction.

Kirzner mainly bases his alternative theory on this Hayekian critique. Although he argues that the differences between his theory and Neoclassical Economics mainly derive from a different focus rather than a difference in underlying worldview, it is clear that, by refraining from assuming any factors to be irrelevant and thereby paying tribute to Hayek’s concerns, a large part of why his focus is a different one is his assessment of uncertainty in relation to this first abstraction of Neoclassical Economics (Kirzner, 1973).

At the end of this section about the limitations of Neoclassical Economics it appears that, while there are many possible issues with all of its assumptions, a large part of why Neoclassical Economics has been criticized concerns the complexity and uncertainty that it has assumed away, and with them the issue of knowledge, which loses its relevance in their absence.

This criticism implies that wherever unknown unknowns are infrequent and relatively minor, the implications of Neoclassical Economics will be uncertain, but a very useful approximation. Just as individuals in their everyday lives know the world to be too complex for them to trust their mental models perfectly, but also know that they can generally trust those models to generate useful hypotheses which must then be tested, so too do many economists rightly view their models and theories. Just as models in other scientific disciplines have successfully been used not to directly derive conclusions mathematically, but to generate new hypotheses, so the models of Neoclassical Theory are used to generate hypotheses which are then to be tested (Popper, 1935 & 1982).

Whenever and wherever unknown unknowns are more significant, however, the conclusions of Neoclassical Economics will turn from being uncertain to being useless. To paraphrase Edward Lorenz: the present might determine the future, but, if unknown unknowns are significant, the approximate present does not determine the approximate future (Danforth, 2013). If people find themselves in an environment of prevalent unknown unknowns, for example when they are just in the process of learning something entirely new, they typically know that their mental models might still be far from good approximations, and they are thus very reluctant to take risks based on their implications. This is exactly the situation that Hayek and others perceive economic agents and analysts to often find themselves in. In these situations it would be far more helpful to attempt to work towards understanding these issues of uncertainty and knowledge rather than assuming them away.

How prevalent unknown unknowns, and with them issues of knowledge, really are in any particular case is therefore at the heart of the disagreement and should always be examined when determining how much to trust Neoclassical Economics and how much attention to pay to its critics on issues of market failure.

In the next section I would like to discuss methodologies that are frequently used to complement and affirm Neoclassical Economics and evaluate to what extent they can lend it additional credibility against this attack before I then examine the issues of uncertainty and knowledge in more detail.

As a clarification, I should note that neither I nor any of the critics claim that the models of Neoclassical Economics, as a way of generating hypotheses on the issues in question, are in any way refuted by the presence of unknown unknowns. As long as unknown unknowns aren’t very prevalent, they might make the implications of Neoclassical Economics uncertain, but those implications remain very useful. There might have been some examples of misconduct in which this uncertainty was ignored and the implications of Neoclassical models were treated as certain conclusions rather than as hypotheses to be tested. In those cases, unknown unknowns were treated not as merely insignificant, but as completely absent. Critics were quick to point out these problems, for example in the calculation debate. However, that does not generally reflect the conduct of Neoclassical economists or the points of their critics. The argument between the Neoclassical and the Austrian school is more one of degree than of kind.

(3.2) Austrian Critique of Complementary Methodology

Aware of the limitations of Neoclassical Economics, economists often try to improve the certainty of its implications (in the style of the first way of theory validation) by examining them with statistical analyses before they are tested as hypotheses in the real world. This process improves the models of Neoclassical Economics by comparing their implications with data from the real world’s past, akin to how in individual decision making people draw on their experience to evaluate whether the implications of their mental models make sense. Akin to how individuals remember failures and successes and try to correlate them with the different paths of action that they took, economists studying market failure try to learn from correlating different policy measures with different outcomes. Models cannot always be made to fit what the statistics say, but then at least they can be used with the precaution that comes from knowing their inaccuracies.

But the same underlying phenomena that make the predictions of deterministic models so prone to failure also cause the implications of statistical analyses to be very uncertain. As already pointed out, both individuals and other scientific disciplines use statistical analyses on data from past experiments to suggest causal relationships, but never to generate conclusions based on such statistical prediction alone. That is because open-ended uncertainty, in the form of potentially relevant and prevalent unknown unknowns, creates two significant problems for statistical analyses. First, because dimensionality is literally infinite, no statistical relationship can ever be said to be reasonably certain if such uncertainty is prevalent. As with abstraction in models, statistical analysis is therefore an issue of excluding third factors and judging how relevant and prevalent their influence is. Second, because statistics can only be based on the past, one runs into the so-called “turkey problem” of not being able to predict or understand anything that has not happened before. If such events are numerous, the predictions will tend to be very bad. Just as with models, it therefore depends on the assessment of the environment whether these issues will merely make the implications of statistical analyses uncertain but a good way to determine which hypotheses to test, or completely useless (Hayek, 1948; Kirzner, 1973; Taleb, 2012; Boulding, 1981).
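The “turkey problem” is easy to make concrete with a deliberately crude simulation: a forecaster who extrapolates from an unbroken run of good observations assigns the decisive, unprecedented event a probability of zero. All numbers below are invented for illustration.

```python
# Toy illustration of the "turkey problem": 1000 days of feedings, then
# slaughter on day 1001. A naive frequency estimate from past data assigns
# probability zero to the one event that matters. Numbers are invented.
import statistics

history = [1.0] * 1000                # daily well-being: fed every day so far
forecast = statistics.mean(history)   # naive forecast for tomorrow: +1.0
day_1001 = -1000.0                    # the unprecedented event

print(f"forecast for day 1001: {forecast:+.1f}")
print(f"actual day 1001:       {day_1001:+.1f}")
print(f"average over all 1001 days: {statistics.mean(history + [day_1001]):+.2f}")
```

A thousand confident observations produce a confident forecast that is maximally wrong on the one day that dominates the average outcome.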

Hypotheses generated by Neoclassical models are furthermore sometimes tested within the framework of experimental economics, in which the decision-making behavior of people is studied in artificial, laboratory situations. It thus offers a choice that is somewhat intermediate between difficult, large-scale but reliable real-world testing and the easier but failure-prone ways of generating and screening hypotheses through modelling and statistical analysis. However, as for example Levitt and List have argued (Levitt & List, 2006), it has proven difficult to create laboratory conditions that are realistic, complex and relevant enough, and have a long enough time horizon, for the results of the experiments to be usable in answering real-world policy questions. This is because, in the end, testing under laboratory conditions is just another way of making abstractions from the real world. Even with much less complex questions like those in the field of physics, the application of laboratory findings to real-world engineering problems can be tricky. In the vastly more complex realms of biology and economics, this transition introduces even more uncertainty. Trying to learn about the real world in a laboratory can at times seem like learning how to ride a bicycle by making a little model to roll around on a desk, rather than by going out and doing the real thing.

The behavior of humans under laboratory conditions might be interesting in its own right, but when it comes to applying insights from laboratory experiments to the real world, the abstractions and simplifications of the laboratory setting create the same potential difficulty of disproportionately large impacts of not-yet-understood factors which also limits the applicability of insights from statistical analyses and deterministic models. These difficulties may, depending on the experimental setting, be larger or smaller, and experimental economics is thus a valuable addition to an economist’s toolbox, but it does not manage to free the hypotheses generated with Neoclassical Economics from the criticism of Hayek and others regarding unknown unknowns.

Our previous conclusion that whether Neoclassical Economics can be judged an uncertain but useful tool depends on the relevance and prevalence of unknown unknowns can therefore be extended to include all the testing, be it statistical or experimental, with which its hypotheses are often validated. That is not to say that such testing could not increase certainty. Performing such tests is definitely preferable to not doing so. The argument is rather that, because these tests also exclude, rather than seek to understand, the issues of uncertainty and knowledge, they too depend in their conclusions on the relative insignificance of unknown unknowns. There therefore remains a need to examine these uncertainty-related issues to find ways of understanding market failure in more uncertain contexts as well. In the next section I want to move forward with that by examining unknown unknowns and uncertainty in more detail.

(4.1) Uncertainty – Degrees that Make the Difference

A more technical term for the phenomenon of unknown unknowns is nonlinear effects. As Taleb describes in his book “Antifragile”, even the average outcome (or expected value) of an action can be negative despite having been estimated with high confidence to be positive (Taleb, 2012). This occurs if there is a chance, no matter how small, of a disproportionately large loss that more than makes up for the vast majority of positive outcomes. The decisive factor is the disproportionality of the loss in contrast to the small, “planned” payoff, or, in other words, the nonlinearity in the probability-payoff matrix. As all decision making in a world of open-ended uncertainty is statistical in nature, these nonlinearities do not just affect statistical analyses themselves but, in the way described above with regard to abstractions, all hypotheses people make. Very infrequent but significant deviations from the way things usually work significantly increase the kind of incalculable uncertainty that arises from unknown unknowns and can render the predictions of an otherwise useful model unreliable.

While such non-linearities render models and statistical analyses ever less useful, they make a trial-and-error approach, as part of which many hypotheses are tested in small trials, ever more promising, because such an approach limits the loss that can be incurred on any single hypothesis and increases the probability of discovering hypotheses which fulfill or even unexpectedly exceed their planned payoffs (Taleb, 2012; Hodgson, 1993; Kirzner & Seldon, 1980; Ewens, 2011; Kingman, 1961; Boulding, 1981). When non-linearity is prevalent enough, many small losses and a few big gains yield a better payoff than many small gains and a few big losses. The non-linear relationship in the probability-payoff matrix is concave for a single hypothesis, but convex for a trial-and-error approach. Hence Taleb calls taking such an approach a convex transformation.
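A small Monte Carlo sketch can illustrate this convex transformation. The payoff distributions below are invented: one large “planned” bet that almost always yields a small planned gain but occasionally incurs a disproportionate loss, against a portfolio of small capped-loss trials with occasional disproportionate gains.

```python
# Monte Carlo sketch of the convex transformation: one large "planned" bet
# with a rare, disproportionate loss versus many small capped-loss trials
# with rare, disproportionate gains. All payoffs are hypothetical.
import random

random.seed(1)
N = 100_000

def big_bet():
    # 99% chance of a small planned gain, 1% chance of a ruinous loss.
    return 10.0 if random.random() < 0.99 else -2000.0

def small_trials(k=20):
    # k independent trials; each usually loses its small stake (-1)
    # but occasionally pays off disproportionately (+100).
    return sum(100.0 if random.random() < 0.03 else -1.0 for _ in range(k))

print("mean big bet:     ", sum(big_bet() for _ in range(N)) / N)
print("mean small trials:", sum(small_trials() for _ in range(N)) / N)
```

The big bet pays off 99% of the time yet has a negative expected value (about -10), while the portfolio of small trials loses on most individual trials yet has a positive expected value (about +41): many small losses and a few big gains beat many small gains and a few big losses.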

Taleb shows that such an approach, which he names ‘antifragile’, plays an important role in people’s implicit, natural way of learning and decision making. When uncertain of something, people observe what has worked for others and for themselves (gaining experience) and develop heuristics that are motivated not by optimizing their payoff for the case that the world performs exactly in accordance with their mental model of it, but rather by eliminating failure, thereby minimizing large downside risks, and by sticking to what has worked. They furthermore have an urge to play with the matters of their work whenever they can afford to do so, and therefore naturally explore many different options and leave room for unexpected discoveries.

In finance and management this strategy is known as the barbell strategy. It aims to combine the moonshot experimentation and exorbitant returns of Silicon Valley start-ups with the security and conservative business model of a multinational corporation, harvesting optionality through tinkering and rapid trial and error without risking one’s existence (Taleb, 2012; Diamandis & Kotler, 2016).

In just the same way, the more prevalent non-linearities are expected to be, the more proper conduct with regard to market failure might be not to go for large-scale optimization, but instead to stick to proven lines of policy and launch small-scale experiments, in competition with each other and the status quo, making failsafe progress and discovering unexpected opportunities.

This is not to say that trial and error is not part of decision making under less uncertain circumstances. Again, as with the difference between Austrian and Neoclassical thinking before, the difference is one of degree and not of kind. Whether one generates hypotheses through models and statistics or not, actual insights are in any case only gained through trial and error hypothesis testing. Actual testing is the only way to add to experience and create experience relevant for the problem at hand. The more testing is done, the more accurate a picture of reality can be derived from it (Kirzner & Seldon, 1980). The difference is merely that with higher certainty, bigger trials in which more effort is devoted to modelling and planning do better, while the opposite is true for situations in which certainty is lower.

Independent of how uncertain a situation is deemed to be and how much effort is devoted to modelling and planning, trial and error as the basis of new insights and learning, and of the order and coordination that come with them, has been described in many different circumstances, and a lot is known about such processes. Popper has famously described for science in general how scientific progress is made through experiments that test hypotheses that were generated, for example, from models or statistical analyses (Popper, 1935 & 1982). Darwin has famously written about how species and their genetic mutations constantly generate new (but in this case random rather than model-derived) hypotheses which are put to the test of an individual’s survival and procreation and keep the species as a whole adjusted to its ever changing environment (Darwin & Costa, 2009). Susan Blackmore, Richard Dawkins and other biologists and anthropologists have suggested that the advantage of the human species over its rivals is the advent of language and intergenerational, cultural learning that allowed individuals to pass on much of what they had learned about optimal behavior with their own trials to their offspring. This way things did not have to be learned anew each generation, but could instead be built upon. In describing the way that societies learn, they employed the concept of memes to describe a learning process that is very similar to that of evolution and genes in nature (Dawkins, 2016).

Two questions naturally arise after discussing the phenomena that can make Neoclassical perspectives of market failure more or less useful. The first is in which areas and with which issues uncertainty can be expected to be particularly high or low. The second is how trial and error processes and the convex transformation could be integrated into economic theory to make questions of knowledge endogenous wherever they are relevant and thus make the theory applicable in circumstances of higher uncertainty. I will turn to these questions in this order.

(4.2) Uncertainty – Pervasiveness

It could be said that these insights create more questions than they answer, because it is difficult to tell how much certainty there is with any particular issue and how useful Neoclassical Economics might therefore be to find an answer.

One conclusion that can be drawn, however, is that projects which, because of the complexity, size or speed involved, prohibit the use of trials and therefore depend almost entirely on planning, modelling and prediction will be very vulnerable to the faults that I have described with regard to the planning process, and therewith to uncertainty. By relying on a single predictive hypothesis, they correlate the risk of all their parts and become likely to fall prey to one or two of the infrequent but highly negative (and therefore non-linear) effects (Taleb, 2012; Boulding, 1981). It is thus advisable to avoid such projects or to break them down into smaller, more testable or less unique parts as much as possible.

With regard to the significance of non-linearities in different fields, Taleb suggests that there might be a correlation with the age of the field. The idea is that in new fields there are still many significant discoveries and errors left to be made, while in fields that are old, in the sense that they haven’t changed in quite a while, so many of the possible mistakes have already been made and opportunities exploited that existing experience, in the form of data for statistical analysis, common practices or existing models and theories, can be expected to be of high certainty and usefulness (Taleb, 2012).

Finally, there are circumstances under which even simple hypotheses in old fields can be vulnerable to non-linearities. That is the case when a whole field is especially complex. In complexity science this concern is known as the butterfly effect: the idea that in highly complex environments very small and insignificant events can, under some circumstances, have large, highly significant consequences. While all of the discussed problems fall under that definition, in the sense that the world in which human decision makers operate is simply complex as a whole, it is noteworthy that some areas are more complex than others. In these, even humble hypotheses and a lot of experience cannot give one the degree of certainty that one could expect elsewhere. Even models with reasonable abstractions and only small inaccuracies can be completely inaccurate on average (Gleick, 1987).
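The butterfly effect can be demonstrated in a few lines with the logistic map, one of the simplest chaotic systems; this is a standard illustration from complexity science, not a model of any economy.

```python
# The butterfly effect in the logistic map x -> r*x*(1 - x) with r = 4:
# two starting points differing by one part in ten billion are at first
# indistinguishable, then diverge completely within a few dozen steps.

def trajectory(x, r=4.0, steps=50):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)   # a "butterfly-sized" perturbation
for t in (0, 10, 20, 30, 40, 49):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.2e}")
```

The two trajectories agree to many decimal places for a while and then decorrelate entirely: the approximate present, in Lorenz’s phrase, ceases to determine the approximate future.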

In conclusion, it seems that while the exact level of uncertainty in any particular case is itself uncertain, as is the effort that should therefore be put into hypothesis generation relative to hypothesis testing, there are a couple of red flags for areas of unusually high uncertainty: highly complex projects or fields, as well as particularly new fields, can be expected to be unusually uncertain.

(4.3) Uncertainty - In Economic Theory

There is, as the discussion of non-linearities and uncertainty in different fields and situations shows, a need for a range of different decision making tools rather than for a single, one-level-of-uncertainty-fits-all theory.

There is also, as the examples of how trial and error is discussed in different fields show, a fairly settled, uniform understanding of how learning processes take place across different domains and different levels of uncertainty (Lavoie & Prychitko, 1995). Yet, despite economics being a science of decision making, there have been only a few attempts to create models to which these issues of knowledge, learning and coordination are endogenous and which thus provide a better understanding of these types of processes while making themselves less vulnerable to non-linearities by using fewer abstractions.


One example of where agents have been modeled as hypothesis testing is evolutionary game theory. In evolutionary game theory, agents use trial and error to adapt their strategies in playing pre-defined games (Smith, 1982). As they can be modelled by computers, this approach is much cheaper and allows for many more repetitions than attempting to emulate learning processes in lab experiments or formalizing them in Neoclassical models. Similar to lab experiments and Neoclassical models, however, the problem with this approach tends to be the application of the simplified findings to the complex real world (McKenzie, 2009). So far, constraints like processing power and programming effort have not allowed these games and the agents therein to take on the complexity of any kind of higher life form and its environment, let alone the economy (Hsu, 2015). They might be helpful in understanding learning processes and their principles in general, and with increasing computing capabilities they hold much promise for the future, but in terms of answering today’s policy questions, their findings remain questionable in the same way as those of lab experiments or of Neoclassical modelling and statistical analyses.
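To give a flavor of this approach, the sketch below runs a simple replicator-style dynamic for the Hawk-Dove game, one of the standard pre-defined games of the field (Smith, 1982). The payoff values and the adjustment rate are illustrative assumptions; the point is that the population "discovers" the stable mix of strategies by trial and error, without any individual agent knowing the solution:

```python
# Minimal sketch of evolutionary trial and error: replicator-style dynamics
# for the Hawk-Dove game. Strategies earning above-average payoffs spread, so
# the hawk share converges to the analytic equilibrium V/C. All parameter
# values are assumptions chosen for illustration.

V, C = 2.0, 4.0  # assumed value of the contested resource and cost of a fight

payoff = {  # payoff to the row strategy when meeting the column strategy
    ("hawk", "hawk"): (V - C) / 2,
    ("hawk", "dove"): V,
    ("dove", "hawk"): 0.0,
    ("dove", "dove"): V / 2,
}

p = 0.1  # initial share of hawks in the population
for generation in range(200):
    f_hawk = p * payoff[("hawk", "hawk")] + (1 - p) * payoff[("hawk", "dove")]
    f_dove = p * payoff[("dove", "hawk")] + (1 - p) * payoff[("dove", "dove")]
    f_avg = p * f_hawk + (1 - p) * f_dove
    # Discrete update: shift the population toward the better-paying strategy.
    p += 0.5 * p * (f_hawk - f_avg)

print(f"hawk share after 200 generations: {p:.3f} (analytic V/C = {V / C:.3f})")
```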

Another attempt to create a theory more suitable to uncertain circumstances is the already mentioned theory of Israel Kirzner. It is an attempt to complement Neoclassical Economics with an alternative that, while still a theory that relies on abstractions and is thus vulnerable to uncertainty, makes do with fewer assumptions. This might mean that conclusions cannot be drawn from it with the same breadth as from Neoclassical theory, but it also makes it more suitable to more uncertain environments.

Another aspect that makes Kirzner’s Market Process Theory so interesting is that, by starting out with this objective and removing assumptions from Neoclassical Economics, issues of knowledge and learning had to be addressed and became an important part of his theory. Kirzner ended up with a theory that describes human action and the economy as a process of trial and error in just the same way that Darwin’s theory does for the natural world. Thus, his theory offers many novel explanatory mechanisms and perspectives for hitherto assumed-away phenomena.

In the following, I would like to discuss Kirzner’s theory as a possible complement to Neoclassical Economics for uncertain situations, focusing on the question of which implications and changes in understanding follow from abandoning the perfect knowledge assumption. Through this discussion, it should become clear that the implications I discuss depend only on giving up this assumption and not on possible further disagreements between Austrian and Neoclassical Economics.


(5) Kirzner’s Market Process Theory

Israel Kirzner published his theory in 1973 as “Competition and Entrepreneurship”. He built it on the works of von Mises, Hayek and other Austrian scholars who had previously thought about the role of knowledge in markets.

Hayek had already put forth the idea of building a theory of markets without the assumption of perfect knowledge, because he hoped such a theory, by furthering understanding of the crucial role of knowledge in markets, would explain the many crucial aspects of the real economy that Neoclassical Economics assumes away and therefore provide an overview of its potential biases (Hayek, 1948). He furthermore hoped that, by explaining knowledge creation, such a theory would also explain the assumed tendency of markets to move towards equilibria and therewith serve as a bridge between Austrian and Neoclassical Economics, legitimizing one with the other (Hayek, 1948). Instead of describing an economy as being made up of equilibria and static states, such a theory should offer a perspective of the economy closer to the dynamism that one can observe in reality (Kirzner, 1973).

Kirzner, having been inspired by these hopes of Hayek, explicitly refers to each of them and shows in what way his theory fulfills them. He starts out simply by taking up Hayek’s recommendation to abandon the perfect knowledge assumption, and ends up with a theory that mirrors theories of learning processes in other fields, such as biology or the theory of science.

One could say that, just as Coase and others explored what kinds of errors the nirvana approach produced with regards to the simplifying notion of Pareto Optimality, a theory of the market process, as Hayek advocated and Kirzner developed, explores what kinds of errors might have resulted from the simplifying notion of equilibrium itself (Kirzner, 1973; Demsetz, 1969; Coase, 1960).

Assumptions

Kirzner’s most significant change in assumptions is the abandonment of the perfect knowledge assumption. Instead of assuming market participants to have all relevant knowledge readily available to them, Kirzner assumed each one to have a patchwork of little pieces of information, acquired through their own experiences and the experiences of the contacts in their unique network.


This has two important implications that Kirzner recognized in developing his theory. The first is that all agents face open-ended uncertainty in all their decisions, because of the way that imperfect knowledge in some areas makes all other knowledge uncertain. The second is that imperfect knowledge yields discoordination and disequilibrium (Kirzner, 1973). Kirzner defines disequilibrium as a situation in which mutually beneficial trades are not carried out, because not all people are aware of all trading opportunities (Kirzner, 1989, p. 76).

Discovery and Arbitrage in a Simple Example

To describe the theory that emerges from Kirzner’s assumptions, and to see what implications follow from bringing uncertainty and disequilibrium into economic theory, it will be instructive to consider the market of a single, homogeneous commodity in a single period. Each market participant has a certain willingness to buy or sell that commodity, and these demands and supplies can thus be drawn as graphs in a price-quantity space. The difference from conventional Neoclassical Economics is that without the assumption of perfect knowledge, and the equilibrium that would result from that assumption, those demands and supplies are in themselves inconsequential. They are independent of each other, and no interactions in the form of trades necessarily result from them.

That there is a potential for trades to take place, because there is a price at which there is both a demand and a supply, is something that must first be discovered by one of the market participants. It must first be hypothesized and tested, because in a world dominated by the open-ended uncertainty described by Kirzner, one cannot truly know what other people will be willing to buy or sell at particular prices before actually offering them a respective trade. One can try to ask them what they would do, but talk is cheap. One can try to extrapolate from past behavior, but preferences and endowments change (Kirzner, 1973; Kirzner & Seldon, 1980). Because there are no trades without such discoveries, the process of hypothesis testing takes a central place in Kirzner’s theory. All trades thus start out as a hypothesis about an arbitrage opportunity, where one can buy from the supplier and sell to the demander at the prices at which one finds them willing to make the respective trades, and make a profit on the difference (Kirzner, 1973).

In the simplest case, in which the demanding and supplying participants had been completely unaware of any opportunities and the arbitrageur discovers all details of the situation, that difference is going to be the difference between their reservation prices and therewith equal to what would have been their surplus under a standard Neoclassical examination. By discovering that the trade is possible at all, the arbitrageur has virtually discovered and therewith earned that surplus. It would have remained unrealized without his discovery.

If, however, out of his partial ignorance about the demanding actor’s reservation price, the arbitrageur offered him a deal much better than necessary, the arbitrageur would still earn a profit equivalent to his discovery. He would additionally have triggered a second discovery by the demanding actor, the value of which that actor would then earn. This second discovery would consist of a trading opportunity at which the demanding actor can buy the desired product for much less than his reservation price. Thus, the total arbitrage profit remains the same, no matter how the discoveries are divided (Kirzner, 1973 & 1989).
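A small numeric sketch may make this invariance concrete. The reservation prices below are purely illustrative assumptions; the point is that however the arbitrageur divides the discovered gains, they always sum to the same total surplus:

```python
# Minimal sketch of Kirzner-style arbitrage with assumed reservation prices.
SELLER_RESERVATION = 5.0  # lowest price the supplying actor would accept
BUYER_RESERVATION = 9.0   # highest price the demanding actor would pay

def discovered_gains(buy_price, sell_price):
    """Split of the discovered surplus between arbitrageur, seller and buyer."""
    return (sell_price - buy_price,          # arbitrageur's profit
            buy_price - SELLER_RESERVATION,  # seller's discovered gain
            BUYER_RESERVATION - sell_price)  # buyer's discovered gain

# Case 1: the arbitrageur discovers both reservation prices exactly.
print(discovered_gains(buy_price=5.0, sell_price=9.0))  # (4.0, 0.0, 0.0)

# Case 2: partial ignorance -- he sells far below the buyer's reservation
# price, triggering a second discovery whose value the buyer earns.
print(discovered_gains(buy_price=5.0, sell_price=6.0))  # (1.0, 0.0, 3.0)

# In both cases the gains sum to the same total surplus of 4.0.
```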

It may also happen that the arbitrageur overestimates the potential for trade, the perceived opportunity turning out to have been no more than a mistaken hypothesis. Having bought something for more than it turns out he can sell it for, he will end up with a loss. In this way, many people see opportunities where there are none, while many actual opportunities are never discovered.

Instead of the potential for a trade being discovered by a third person, the arbitrageur can just as well be one of the two previously unaware market participants from whose reservation prices the potential for the trade arises. This participant would still have discovered the difference between his own reservation price and that of the other market participant, and would earn that difference as profit by conducting the trade. Because discovery and arbitrage work the same way in such a case as they would if the arbitrageur were a third person, the discovering role can always be viewed in isolation from the possible demanding or supplying roles of the same participant, and therewith always as if performed by a third person (Kirzner, 1973). Not just trading activity, but even the simple purchase or sale of something, can therefore be understood as hypothesis-driven arbitrage. Kirzner’s theory makes it possible to understand all economic action through this framework (Kirzner, 1973 & 1989; Kirzner & Seldon, 1980).

Applicability

This single-period example translates directly to a market with multiple periods. The intertemporal arbitrage present in such markets, often referred to as speculation, works in exactly the same way as the within-period arbitrage that I have discussed here.


The same is true for the more complex case of arbitrage between different markets. For example, steel, wood, machines and labor might be bought on several markets, combined in a certain way, and the resulting hammers sold in a different market. Just as a price difference can be exploited within a market, over time or over geographical distance, it can also be exploited between the markets of the different inputs and outputs in a production chain. Only if there is an overall price difference between all necessary inputs and the output waiting to be discovered can such a production of hammers be profitable. With this, the discovery of ignorance and of arbitrage opportunities extends beyond simple price differences and includes innovation, in the form of opportunities that consist of combining inputs in novel ways (Kirzner, 1973 & 1989; Kirzner & Seldon, 1980).
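In the same spirit as the earlier sketch, the following illustrates such cross-market arbitrage with purely assumed prices; the hypothesized margin is only confirmed or refuted once the hammers are actually offered on the output market:

```python
# Minimal sketch of cross-market arbitrage in a production chain. All prices
# are illustrative assumptions about what the entrepreneur believes he can
# buy the inputs and sell the output for.
input_prices = {"steel": 2.0, "wood": 1.0, "machine_use": 0.5, "labor": 3.0}
hammer_price = 8.0  # assumed price the output is hypothesized to fetch

# Production is hypothesized to be profitable only if an overall price
# difference between all inputs and the output exists to be discovered.
margin = hammer_price - sum(input_prices.values())
print(f"hypothesized arbitrage profit per hammer: {margin:.2f}")
```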

Not only production decisions, including the hiring of labor, can be understood as the exploitation of ignorance-derived arbitrage opportunities. Because time is just another asset one is endowed with, the decision to sell one’s labor to an employer can be understood in this way, too: as arbitrage between the price at which one does so and the value that this time would have to oneself as leisure or in pursuing other, possibly one’s own, productive endeavors (Kirzner, 1973).

Because this makes the decision to work for someone else, the decision to hire other people to work for one’s own business, and the many decisions about how to do so all dependent on discovering arbitrage opportunities for profitable transactions, the spontaneous emergence and transformation of economic organizations and institutions in human societies, as for example Hayek (1948) had written about it, can be understood through such arbitrage opportunities as well. Wherever a different institutional construct could be more profitable, or could afford one subsequent profit opportunities, it pays to disrupt the status quo with this new construct, act on the subsequent opportunities and realize said profit.

There are possibly even more issues on which Kirzner’s theory can offer new insights. After all, just like Neoclassical Economics, Kirzner’s theory views utility as the ultimate decision-making variable. Even though Kirzner himself had doubts about whether his ideas applied outside of the economic realm, his theory offers a very general model of human decision making under scarcity and uncertainty (Kirzner, 1973 & 1996). Leaving one’s spouse rather than staying with her, going fishing rather than watching TV, or attending church Sunday morning rather than going clubbing Saturday night could all be understood as profitable or unprofitable (in utility terms) acts of arbitrage that follow discoveries of new options and their relative desirability.
