
Uncertainty in Economics

Happy families are all alike;

every unhappy family is unhappy in its own way.

Leo Tolstoy, Anna Karenina

Pieter Jonker

Master Thesis

18 EC

Department of Philosophy

Graduate School of the Humanities

Universiteit van Amsterdam

Supervisor: dr. Federica Russo

Second Reader: prof. dr. Michiel van Lambalgen

June 2019


CONTENT

Abstract
1 Introduction
2 Probability as the Logic of Uncertainty in Economics
2.1 Keynes and Knight on Uncertainty and Risk
2.2 The Different Concepts of Probability
2.3 Knight and the Relative Frequency Interpretation of Probability
2.4 Keynes and the Logical Interpretation of Probability
2.5 Ramsey and the Subjective Interpretation of Probability
2.6 Rational Choice Theory and Game Theory
2.7 The Efficient Market Hypothesis
2.8 Behavioural Economics
2.9 Complexity Economics
2.10 Conclusion
3 Three Attempts to Incorporate Uncertainty in Economics
3.1 Introduction
3.2 Info-gap Theory
3.3 Agent-based Modelling
3.4 The Fractal View
3.5 No Rocket Science
4 Conclusion
References


Abstract

Risks are uncertainties that are susceptible of insurance. Before World War II, economists like Knight and Keynes paid due attention to the uncertainties that cannot be reduced to risk. After World War II, however, economics has concentrated on risks alone. This is strongly related to the treatment of probability in economics. Both objective and subjective interpretations of the probability of uncertain future events are used in mainstream economics to justify representing the combined effect of all factors excluded from the model by a disturbance term that can be thought of as the result of a single random process with a constant and finite probability density function. This is equivalent to the reduction of all uncertainties to risks.

In many cases these models deliver satisfactory explanations and predictions. But there are also cases where they are significantly wide of reality. The main reason is that the ceteris paribus clause was violated: something outside the model had intervened. That means that uncertainty does exist. Three attempts that have been made to incorporate uncertainty in economic models will be discussed. The conclusion is that none of them has succeeded, but all three can be used in a pluralistic approach to make economic explanations and predictions more resilient towards uncertainty.


1 Introduction

In November 2008 Queen Elizabeth II visited the prestigious London School of Economics. She asked the obvious question: why had nobody noticed that the credit crunch was on its way? The economic scholars present were unable to give her a satisfactory answer. Half a year later, the British Academy organised a Forum with leading academics, economic journalists, civil servants, and other practitioners in order to answer this question. Their conclusions were summarized in a Letter to the Queen, dated 22 July 2009.

The letter starts by admitting that there had been many warnings, but that they were not listened to. Most decision makers believed that the financial wizards had found new and clever ways of managing risks and had virtually removed them. The letter continues with: It is difficult to recall a greater example of wishful thinking combined with hubris. The letter concludes:

So, in summary, Your Majesty, the failure to foresee the timing, extent and severity of the crisis and to head it off, while it had many causes, was principally a failure of the collective imagination of many bright people, (…) to understand the risks to the system as a whole.

Providing understanding of what is happening and being able to make reliable predictions about what is going to happen in a certain context is the core business of science. Since Carl Hempel (1942) introduced his deductive-nomological model, the standard view of science, not only of the natural sciences but of the social sciences as well, has been the use of laws or general hypotheses to deliver explanations or predictions from a set of initial conditions. Nowadays, because of the shortcomings of the deductive-nomological model, other models of scientific explanation have come to the fore. One can mention scientific realism, the idea that theoretical claims constitute knowledge of the world; unificationism, the idea that explanation is a matter of discovering mere patterns in reality and then classifying states of affairs as instances of those patterns; or constructive empiricism, the idea that explanation is a three-term relationship between theory, fact and context (Woodward, 2017). In all cases, however, a prime position in scientific explanation is given to theories that describe the relationship between an explanans and an explanandum.

As the example described in the Letter to the Queen shows, there is serious doubt whether the theories used in mainstream economics can adequately cope with the role of risk and uncertainty in explaining and predicting economic events. This thesis tries to investigate the causes of this inadequacy.

Economics is one of the social sciences. As in other social sciences, many economic theories are qualified by the ceteris paribus clause: “other things being equal” or “assuming nothing else interferes”. This condition will seldom be met in practice, as the environment is constantly developing. This means that economic theories are not general laws fixing a relation between cause and effect; rather, they describe tendencies, forces impelling certain effects that can be counteracted by other forces. If enough evidence has been collected that such a tendency indeed exists, then it can be used for explanation and prediction, although there always remains the uncertainty that something will interfere (Kincaid, 1996).

Since World War II, economics came to rely more and more on modelling techniques. Building models in mathematical terms, concentrating on simple causal mechanisms, made it possible to investigate the complex economy. In order to be able to offer policy advice aimed at efficiency and rationality, these models had to be calibrated on experience with the use of statistics (Morgan, 2003). Most models used in economic theory assume that the combined effect of all factors that could influence the actual outcome but are excluded from the model can be summarized in a single disturbance factor that behaves as if it were produced by a random process with a constant probability density function. Most often this probability density function is assumed to be a normal distribution with a mean of zero and a standard deviation that is constant over time.

It is this assumption of a disturbance factor with a constant probability density function that provides the foundation for the treatment of risk in mainstream economics. The basic idea is that if the model is used repeatedly to make predictions, the average over time of the deviations of the actual outcomes from the predictions of the model will converge to zero and the deviations will have a constant variance. This axiom of convergence is also known as the law of large numbers. Davidson (2003) calls this the ergodicity assumption. This convergence could happen when the factors that interfere from time to time remain the same.
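To make this assumption concrete, the following small simulation (an illustration of my own, with arbitrary parameters, not part of the models discussed) draws a disturbance term from one fixed normal distribution and shows that its running mean settles towards zero as the number of periods grows, which is exactly what the ergodicity assumption requires.

import random

random.seed(1)

def running_means(n_periods, sigma=1.0):
    """Cumulative mean of n_periods i.i.d. normal(0, sigma) disturbances."""
    total = 0.0
    means = []
    for t in range(1, n_periods + 1):
        total += random.gauss(0.0, sigma)
        means.append(total / t)
    return means

means = running_means(100_000)
for t in (10, 100, 1_000, 10_000, 100_000):
    print(f"mean of first {t:>6} disturbances: {means[t - 1]: .4f}")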

Throughout history an increasing use of the law of large numbers has been made to provide protection against the precariousness of human existence. Examples are the now extensive availability of all kinds of property insurance, pension funds and social security schemes. Property insurance provides financial protection against a contingent or uncertain big loss in exchange for a guaranteed small payment. Pension funds are superannuation schemes for retirement income that provide insurance against longevity that ordinary savings cannot deliver. Social security schemes compensate people whose monetary income has fallen because of unemployment, disease or disability. To quote Bernstein in his bestselling Against the Gods. The Remarkable History of Risk:

The revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than a whim of the gods and that men and women are not passive before nature (Bernstein 1996, 1)

But, as the Letter to the Queen demonstrates, it is quite possible to rely too much on the axiom of convergence. The future is yet to come; there is always the possibility that something new will interfere. The past is prologue, setting the stage, but the possibility that the actors behave in an unexpected way cannot be excluded. In short, there is more uncertainty than just the quantifiable risk represented by the disturbance factor in economic models.

In this thesis, I will discuss the relation between the treatment of risk and uncertainty in economics and the different philosophical conceptions of probability. The two main categories are the objective conceptions of probability, probability as a characteristic of nature, to be known by observation, and the epistemic conceptions of probability, probability as a belief about what to expect. The treatment of risk in economics is firmly rooted in an objective conception of probability. Even cases where a clear subjective interpretation of probability should prevail are reduced to an objective interpretation, assuming that market forces will eliminate dissident views. Economic behaviour, however, cannot always be reduced to a tendency plus a disturbance factor, but is often shaped by people changing their mind about the future, assigning a different probability to future states of the world than before. I will start by discussing probability as the logic of uncertainty in economics in the first chapter of this thesis. The conclusion will be that mainstream economics has not succeeded in incorporating the uncertainty that is non-reducible to risk into its models and is therefore susceptible to the fallacy outlined in the Letter to the Queen quoted above.


In summary, the first two research questions I will try to answer are:

R1: Which philosophical interpretations of probability are used in economic modelling? And do they warrant the emphasis on reducing as much uncertainty as possible to insurable risks?

R2: As economics deals with complex systems, is it justified to concentrate on insurable risks and neglect irreducible uncertainties?

How could economics better incorporate uncertainty into its explanations and policy advice? I will discuss three proposals that have been made. The first is the so-called info-gap methodology developed by Ben-Haim (2010). He defined an info-gap as the disparity between what is known and what needs to be known in order to make a comprehensive and reliable decision. The info-gap methodology aims at making predictions that are more immune to errors that result from approximate models and imperfect understanding. Info-gap analysis starts from a non-probabilistic quantification of the uncertainty in either the parameters or functional form of the explanatory model and/or the probability density function of the disturbance. The second step is to use this quantification to build an amended model of the system under consideration. The third step is to combine the amended model with a robustness function, a quantification of how wrong we can be in our model or understanding before we arrive at an unacceptable outcome.
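To give an impression of what a robustness function does, the following toy sketch (my own construction with invented numbers, not an example from Ben-Haim) compares two hypothetical projects. The true return may deviate from the estimate by an unknown horizon of error, and the robustness of a project is the largest horizon under which its worst-case return still meets a critical requirement. The sketch shows the characteristic info-gap trade-off: the project with the higher estimated return can be the less robust one.

def robustness(u_nominal, error_scale, r_critical):
    """Largest error horizon h such that the worst case
    u_nominal - h * error_scale still satisfies r_critical."""
    return max(0.0, (u_nominal - r_critical) / error_scale)

# Two hypothetical projects: A promises more but rests on a shakier model,
# so its estimate is assumed to be more error-prone (larger error_scale).
projects = {
    "A (speculative)":     (0.08, 3.0),
    "B (well understood)": (0.05, 1.0),
}
r_c = 0.02   # lowest acceptable return

for name, (estimate, scale) in projects.items():
    print(f"{name}: nominal {estimate:.2f}, robustness {robustness(estimate, scale, r_c):.3f}")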

A second attempt to incorporate uncertainty is to use agent-based modelling that can accommodate complexity by tracking how market participants change their behaviour in reaction to developments in their environment. An example is the pattern-seeking approach of Bookstaber (2018), who uses an evolutionary perspective.

The third attempt is the fractal view. This view assumes that the disturbance factor is not generated by a combination of independent processes but by a combination of processes that influence each other, a multiplicative cascade. Such cascades can produce fat tails in the probability density function of the disturbance factor (Mandelbrot, 2004).
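The claim about fat tails can be illustrated with a crude simulation (my own toy version, not Mandelbrot's actual multifractal model): volatility is built as the product of several random factors, a rudimentary multiplicative cascade, and the resulting disturbance shows a clearly positive excess kurtosis, the statistical signature of fat tails, whereas a plain normal disturbance does not.

import random

random.seed(42)

def excess_kurtosis(xs):
    """Sample excess kurtosis; roughly 0 for a normal distribution."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

def cascade_draw(levels=8):
    """Volatility as a product of independent random factors (a crude
    multiplicative cascade), then a normal shock scaled by it."""
    sigma = 1.0
    for _ in range(levels):
        sigma *= random.lognormvariate(0.0, 0.3)
    return sigma * random.gauss(0.0, 1.0)

n = 100_000
normal = [random.gauss(0.0, 1.0) for _ in range(n)]
cascade = [cascade_draw() for _ in range(n)]
print("excess kurtosis, normal disturbance :", round(excess_kurtosis(normal), 2))
print("excess kurtosis, cascade disturbance:", round(excess_kurtosis(cascade), 2))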

If we take the requirement of being able to develop general hypotheses that can provide knowledge about the world as our criterion, then I argue that info-gap analysis is a more promising way of incorporating uncertainty in economics than agent-based modelling or the fractal view. But all three attempts can be used to make the explanations and predictions of economic theory more resilient towards uncertainty.

This leads to the third research question I will try to answer in this thesis:

R3: Several attempts have been made to incorporate uncertainties that cannot be reduced to risk into economic models. Have these attempts succeeded? Which lessons can be learned from these attempts about how economics could handle uncertainty?


2 Probability as the Logic of Uncertainty in Economics

2.1 Keynes and Knight on Uncertainty and Risk

The future is yet to come. This means that most of the time when taking a decision, one cannot be certain what the outcome of that decision will be. Natural processes of all kinds and actions taken by others will make it difficult to predict the conditions in which a decision will take effect. In his General Theory (1936) John Maynard Keynes addressed the economic consequences of uncertainty. When taking investment decisions, investors often have no other choice than to rely on conventions, generally held opinions about the future of the markets they operate in. These conventions are based on the idea that everything will remain the same unless there is a good reason to suppose that something will change. The conventions will rein in what Keynes called the animal spirits that would motivate entrepreneurs to invest whatever it takes to become and to remain the kind of professional they want to be, without regard to the question whether there is indeed demand for their products or services, and so risking overinvestment. But these conventions can also make investments fall short of the needs of society, leading to underinvestment. These conventions can, therefore, play a decisive role in determining the general level of activity in the economy. Uncertainty about future events will also influence the amount of cash balances economic agents will hold, either for being able to use future investment possibilities that will bring profits superior to those of the present ones, or for being able to cope with possible future set-backs without going bankrupt. Movements in these speculative cash balances can counteract the monetary policies of governments and central banks. Expanding the money supply will not boost demand when the additional money ends up being hoarded, while contracting the money supply can leave the interest rate unaffected when speculative cash balances are run down for transactional purposes.

In 1921, Frank H. Knight famously introduced a distinction between uncertainty and risk. Both relate to future events that can have an impact on the realisation of the goals of an economic agent. The future events for which one can obtain insurance are called risks, the others uncertainties. This distinction has important economic consequences. The costs of insurance against risks will enter the costs of production and will be passed on to the consumer like all other costs of production. But the entrepreneur will have to take the positive and negative effects of uncertainty on his own account, and he will demand an allowance for his willingness to bear those consequences. This explains why, even in conditions of perfect competition among suppliers, positive levels of profit will prevail besides occasional losses. Knight also thought that uncertainty is a driving force behind the rise of the corporation in the modern economy. Bundling multiple economic activities into one organisation will make the combined outcome of those activities better predictable, transforming (part of) the uncertainty into risk by imitating insurance.

The consequences of uncertainty indicated by Keynes and Knight are important arguments why economics should pay due attention to uncertainty. After World War II, however, economics has increasingly concentrated on risks alone. (Knightian) uncertainty got less and less attention. In a bibliometric survey, Hodgson (2011) found that uncertainty has gradually but consistently disappeared from the economic journals.

2.2

The Different Concepts of Probability

The classical interpretation of probability is derived from games of chance, like throwing a die or taking balls from an urn. Insurance is based on the relative frequency interpretation of probability. Keynes used a logical-relational interpretation of probability in his main philosophical publication, A Treatise on Probability (1921). His logical interpretation came under heavy criticism from a fellow Cambridge philosopher and mathematician, Frank P. Ramsey, who developed the subjectivist interpretation of probability as an alternative. It is this subjectivist interpretation that grounds both game theory and rational choice theory, the foundations of post-war mainstream economics. Keynes accepted Ramsey’s criticism that his logical interpretation has some serious defects but did not adhere completely to the subjectivist alternative. Turning to economics, he refrained from discussing probability and concentrated on the social aspect of assessing possible future developments. Gillies (2000) labelled this approach the intersubjective concept of probability. In post-war economics, however, those social aspects were considered mere irrationalities that should, and would, be discarded, as markets would develop for translating as much uncertainty as possible into risks. Those markets would shift the burden of uncertainty ultimately onto those agents who could best handle it. This line of reasoning has produced modern-day financial engineering.

In the following sections we will trace how these different concepts of probability have developed in economics.

2.3 Knight and the Relative Frequency Interpretation of Probability

Knight introduced the distinction between risk and uncertainty in the following way:

To preserve the distinction (..) between the measurable uncertainty and an unmeasurable one we may use “risk” to designate the former and “uncertainty” for the latter. (…) The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in high degree unique. (Knight, 233)

Knight thought that grouping of instances was the most obvious way of dealing with the uncertainty economic agents face with respect to future outcomes. Grouping makes most sense for measurable uncertainty (risk), because in that case the class to which an instance belongs can be defined without ambiguity. He calls this grouping consolidation (Knight, 245). The most important example of consolidation is insurance: the effects of many similar instances of an event, for instance that a building is damaged by fire, are pooled together, which has the effect that the combined outcome of the group of events can be predicted with more certainty than each event individually. This makes it possible to transform uncertainty into a risk by exchanging the full negative effects of an individual event for a corresponding share in the negative outcome of a large group of such events taken together.

But consolidation could also be obtained by combining several similar activities into one organisation:

The possibility of reducing uncertainty by transforming it into a measurable risk through grouping constitutes a strong incentive to extend the scale of operations of a business establishment. This fact must constitute one of the important causes of the phenomenal growth of the size of industrial settlements which is a familiar characteristic of modern economic life (Knight, 252)

The logic of consolidation is strongly correlated with the relative frequency interpretation of probability: the probability of an attribute A in an infinite reference class B is the limiting relative frequency of actual occurrences of A within B. (Hájek, 2012)


Applied to mass social phenomena, this means that, when the number of instances observed grows arbitrarily large, the value of the resulting relative frequency should not oscillate but will settle down to some value. Von Mises calls this the empirical axiom of convergence (Childers 2013, 7). At the same time, there is also the empirical axiom of randomness: it is impossible to predict individual outcomes in a series of mass phenomena. An example is fire. Because there are many possible causes for a building to fall victim to a fire, it will be impossible to predict whether a specific building will be damaged by fire in a certain period. But when many buildings of a certain type are taken together, the relative frequency of fires taking place in these buildings will be about the same in every period and can thus be predicted with a small margin of error.
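The effect of pooling can be shown in a few lines of simulation (with invented numbers): the year-to-year variation of the observed fire rate shrinks as the number of insured buildings grows, even though each individual fire remains unpredictable.

import random
import statistics

random.seed(7)
P_FIRE = 0.002   # assumed chance that one building burns in a given year
YEARS = 30

for n_buildings in (100, 10_000, 100_000):
    rates = []
    for _ in range(YEARS):
        fires = sum(1 for _ in range(n_buildings) if random.random() < P_FIRE)
        rates.append(fires / n_buildings)
    print(f"{n_buildings:>7} buildings: mean fire rate {statistics.mean(rates):.4f}, "
          f"year-to-year std dev {statistics.pstdev(rates):.5f}")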

The axiom of convergence is also known as the law of large numbers. Throughout history an increasing use of this law of large numbers has been made to provide protection against the precariousness of human existence. The basic idea is that the relative frequency found in the past will be replicated in the future because the same causes will apply. Davidson (2003) calls this the ergodicity assumption. To quote Bernstein in his bestselling Against the Gods. The Remarkable History of Risk:

The revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than a whim of the gods and that men and women are not passive before nature (Bernstein 1996, 1)

Take weather forecasts. The Dutch annual publication Enkhuizer Almanak, now in its 424th edition, gives a weather forecast for each week of the coming year, based on the average weather conditions over more than a hundred years. Its predictions have a remarkably high success rate of seventy percent. For many applications, for instance planning harvesting in arable farming, such a high success rate in predicting future conditions is good enough. But consulting the Enkhuizer Almanak will not be a good guide for deciding whether I should take an umbrella with me today. One would prefer an actual weather forecast. This shows the difference between the use that can be made of relative frequency for predicting the outcome of a group of events versus the outcome of a single event.

Insurance based on the relative frequency interpretation of probability has its limitations. The axiom of convergence is defined for events that are repeated under identical conditions and independently. The first requirement gives rise to the moral hazard problem, and the second to the exclusion of force majeure. Moral hazard refers to situations in which a party gets involved in a risky event knowing that it is protected against the risk and that another party will incur the cost. The probability that a loss will be incurred is then much higher than in the conditions in which the relative frequency was calculated. In insurance, the moral hazard problem arises in three forms: adverse selection, information asymmetry and the agency problem. Adverse selection takes place when only cases with high risk seek insurance and cases with low risk do not. Insurance companies will defend themselves against the consequences of adverse selection by limiting pay-outs, applying deductibles and co-payments, and charging higher tariffs. Information asymmetry takes place when one party has more information about the possibility that a loss will be incurred than the other party. Insurance companies will refrain from issuing a policy when they don’t know enough about the mechanism that might cause a loss. The agency problem arises when a principal depends on an agent to act on his behalf. Often the interests of the agent are somewhat different from the interests of the principal. This means that the insured party cannot always be certain that his damage will be reimbursed by the insurer. The moral hazard problem sets limits on the possibility to transform uncertainty into risks, because the insurance will not always be complete and unfailing.


LeRoy et al. (1987) think that most of what has become known as (Knightian) uncertainty is in fact a collection of moral hazard problems: failures of the insurance markets that leave it to the entrepreneur to bear the remaining risks himself. LeRoy et al. think that Knight’s main objective was to explain the existence of profits even in conditions of perfect competition, and that moral hazard is the main explanans for that. Knight did indeed pay much attention to moral hazard but did not equate it with “true uncertainty”, the uncertainty related to unique events for which insurance is impossible.

The second requirement for the axiom of convergence is that events are independent. When events have a common cause and happen simultaneously, insurance also has its limitations. For instance, when a flood takes place and hits many buildings simultaneously, the damage that house owners can claim under their building insurance will become so large that the insurance company would collapse. This leads to force majeure clauses indemnifying insurance companies in such cases, leaving the bill to be picked up by the house owners (or the government). This is another cause of some risks being uninsurable.

Summarizing, according to Knight profits need to be made, even in conditions of perfect competition, because entrepreneurs need to be rewarded for their willingness to assume uncertainty and uninsurable risk. Risks are insurable when the axiom of convergence applies and moral hazard and common causes are absent. Knight thought that, besides insurance, risks could also be handled by consolidation into big corporations. What remains is the “true” uncertainty related to unique events. Knight had little to say about how to handle this. Keynes, on the contrary, tried to develop a treatment of probability that could also include those unique events.

2.4 Keynes and the Logical Interpretation of Probability

John Maynard Keynes intended to extend probability to single events by using logic. The opening sentences of his A Treatise on Probability (1921) are:

Part of our knowledge we obtain direct; and part by argument. The Theory of Probability is concerned with that part we obtain by argument, and it treats of the different degrees in which the results so obtained are conclusive or inconclusive. (Keynes 1921, 3)

Keynes defined probability as the degree of justification one has from the available evidence for a proposition, for instance that it will be raining today. Experience and education will have taught me that rainfall is often preceded by phenomena like dark clouds and a lowering of the atmospheric pressure as witnessed by the barometer. The relation between these phenomena and rainfall, however, falls short of an implication; it is a probability relation. Keynes thinks that probability should be defined in this way, as the relation between two propositions in comparison with an implication. By treating probability in this way, Keynes stands in the intuitionist tradition in Cambridge at that time, initiated by G. E. Moore. Moore opposed the British Idealism that was dominant in England at the end of the 19th century. British Idealism had developed from German Idealism and was heavily influenced by both Kant and Hegel. Although not all philosophers in the British Idealism school of thought adhered to ontological idealism, all adhered to epistemological idealism: the only thing we can know about reality is its representation in the human mind. The consequence is a strict coherentism: our beliefs about reality can only be justified if they are part of a coherent set of beliefs. G. E. Moore, on the contrary, had the opinion that some beliefs about reality were justified because we can have knowledge of parts of reality by direct apprehension, by intuition. Moore concentrated mostly on ethics, on how one can have direct knowledge of the good. But his criticism of British Idealism marked the beginning of analytic philosophy. Davis (1994) has shown from early work of Keynes that he started his career in this intuitionist approach. Working on probability, Keynes assumed that one can have knowledge of separate parts of reality, be it a logical necessity or a probability. This knowledge is objective, because all rational men should arrive at the same conclusion on the evidence available.

Keynes’s so-called “logical-relational” definition of probability runs into a series of difficulties:

• The first is whether it is possible to give probability a numerical value in the range from 0 to 1, comparable to the frequencies calculated from collectives of data, or to the chances derived from classical games like throwing dice or taking balls from an urn. Keynes thought that it would be possible to express probability numerically by considering the group of all possible consequences of the available data. When rational reasoning leads to n>1 mutually exclusive and collectively exhaustive outcomes, and we have no justification to discriminate between them, then we can apply the Principle of Indifference and attach to each of them an equal probability of 1/n. When we have reasons to assume that the argument for one possible outcome is stronger than for the others, we can attach a higher value to its probability, under the condition that the sum of the ascribed probabilities over the exhaustive set of outcomes will be 1.

• A second, related problem with Keynes’s definition of probability is the difficulty in comparing the probabilities of two unrelated propositions, each belonging to a different group of possible outcomes. As the number of possible outcomes in each group will be different, there is no reason to assume that the probabilities derived by applying the principle of indifference to each of the two groups will have a common yardstick.

• A third problem is how to incorporate new information about relevant circumstances. This new information will make the argument stronger, as the stock of available evidence increases. Keynes calls this an increase in the weight of the argument. When all evidence points in the same direction, this might increase the probability of the proposition, but when the new evidence conflicts with the evidence already available, the probability could also be adjusted downwards. Nevertheless, when taking a decision, it seems rational to take not only the probability into account, but also the weight of the argument.

Frank P. Ramsey started criticizing Keynes’s concept of probability as early as 1922, in a review article (Ramsey 1989). On that occasion, he had two main points of criticism. The first was that the probability may be unknown to us through a lack of skill in arguing from given evidence. The faculty of perceiving the relation between two propositions is called insight. This insight might be imperfect. The conclusion is that we cannot say: “We have reason to suppose the probability is a”, but only “We have reason to suppose that we have reason to suppose the probability is a”, and so on ad infinitum. It is difficult to see how one can arrive at measurable probability in this way. Secondly, Ramsey pointed out that applying the principle of indifference is only allowed when the evidence is symmetrical regarding the various alternatives. This excludes the application of this principle when there is arbitrariness involved in classifying the alternatives relative to each other, for instance as small and large.

2.5 Ramsey and the Subjective Interpretation of Probability

Ramsey formulated a more fundamental criticism four years later, in 1926, in his essay Truth and Probability (Ramsey 1978).

The first step is to question the ontology of Keynes’s probabilities. Keynes starts from the supposition that we make probable inferences for which we claim objective validity; we proceed from full belief in one proposition to partial belief in another. Ramsey thinks, however, that there are no such things as the probability relations Keynes describes. Take the example: “This is red” as a conclusion, and “This is round” as evidence. When we observe several objects, some of which are both round and red, do we perceive a probability relation between those two statements, or are we simply developing a probability belief out of the inductive process of observing how many round objects are indeed red, as in the frequency interpretation? Ramsey, therefore, proposes to concentrate on the degree of belief we have in the conclusion.

The second step is to ask how we could measure degrees of belief: what does it mean that we believe a proposition to the extent of 2/3, or that a belief in a proposition is twice as strong as the belief in its contradictory? The first option is to suppose that beliefs differ in the intensity with which they are felt by the owner of the belief. Ramsey rejects this option: even if it were possible to ascribe numbers to the intensity of feelings, this view would be false, for the beliefs we hold most strongly are often accompanied by practically no feeling at all: “no one feels strongly about things he takes for granted” (Ramsey 1978, 71). This brings him to the supposition that the degree of belief is “a causal property of it, which we can express vaguely as the extent to which we are prepared to act on it” (Ramsey 1978, 71). This is the start of the subjectivist interpretation of probability as “the strength of our belief about how we should act in hypothetical circumstances”.

The third step is to assume that decisions are made by maximizing expected utility. Ramsey uses the example of somebody who hesitates about which road to take. He has a belief about which one is the right direction but is not certain about it. He sees somebody in the distance whom he could ask for directions. How far would he be willing to walk in order to ask? He will make a calculation, and will only go and ask when the disadvantage of walking the extra distance is smaller than the difference between the advantages of arriving at the right destination and the wrong one, multiplied by the probability that he has the wrong belief about the right direction. This means that, if these advantages and disadvantages are known, we can measure the degree of belief by observing the choices the person would make when the distance to the informer varies.
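As a numerical sketch of this elicitation idea (with invented values, not Ramsey's own), suppose the walker values the right destination at 100, the wrong one at 0, and each extra metre of walking at 0.05. The longest detour he is still willing to make then reveals the probability he assigns to being on the wrong road:

# Ramsey's road example as an elicitation device (illustrative numbers).
# The walker asks for directions only if the cost of the detour is smaller
# than p_wrong * (value of right destination - value of wrong destination).
V_RIGHT = 100.0          # value attached to arriving at the right place
V_WRONG = 0.0            # value attached to arriving at the wrong place
COST_PER_METRE = 0.05    # disutility of each extra metre walked

def max_detour(p_wrong):
    """Longest detour (in metres) worth walking, given belief p_wrong."""
    return p_wrong * (V_RIGHT - V_WRONG) / COST_PER_METRE

def implied_p_wrong(observed_max_detour):
    """Invert the rule: the observed indifference point reveals the belief."""
    return observed_max_detour * COST_PER_METRE / (V_RIGHT - V_WRONG)

print(max_detour(0.2))          # a 20% doubt makes a 400 m detour worthwhile
print(implied_p_wrong(400.0))   # observing that 400 m cut-off reveals 0.2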

The fourth step is to use betting to measure degrees of belief as bases of possible actions. Ramsey derives this in a three-step procedure; a small numerical illustration is given after the list:

• First, he defines the degree of belief of ½ as the situation in which the subject has no preference between the options (1) α if p is true, β if p is false, and (2) α if p is false, β if p is true. α and β stand for the values attached by the subject to different worlds. The importance of this definition is that the concept of value is much broader than monetary value alone; it includes all considerations that enter a comparison between two different worlds.

• The second step is to look for an option α for certain that is considered by the subject to be indifferent with that of β if p is true and γ if p is false. The subject’s degree of belief in p can then be defined as the ratio of the difference between α and γ to that between β and γ. This makes it possible to represent the values of different worlds on a common scale.

• The third step is to define the degree of belief in p given q. This expresses the idea that he would now bet on p, the bet only being valid if q is true. Suppose the subject to be indifferent between the options (1) α if q is true, β if q is false, and (2) γ if p true and q true, δ if p false and q true, β if q false. Then the degree of his belief in p given q is the ratio of the difference between α and δ to that between γ and δ.
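To see how the second step yields a number, here is a tiny worked illustration with invented stakes: a subject who is indifferent between 4 for certain and a gamble paying 10 if p and 0 if not-p reveals a degree of belief of 0.4, the belief at which the gamble's expected value equals the sure amount.

# Degree of belief as a ratio of values (second step above), with made-up
# stakes: alpha for certain is judged indifferent to "beta if p, gamma if not-p".
def degree_of_belief(alpha, beta, gamma):
    return (alpha - gamma) / (beta - gamma)

alpha, beta, gamma = 4.0, 10.0, 0.0        # indifference point reported by the subject
belief = degree_of_belief(alpha, beta, gamma)
print(belief)                               # 0.4
print(belief * beta + (1 - belief) * gamma) # expected value 4.0 = alpha, as required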


From the definitions in the three steps above, Ramsey derives four laws of probable belief:

1. Degree of belief in p + degree of belief in ~p = 1

2. Degree of belief in p given q + degree of belief in ~p given q = 1

3. Degree of belief in (p and q) = degree of belief in p x degree of belief in q given p

4. Degree of belief in (p and q) + degree of belief in (p and ~q) = degree of belief in p

Ramsey concludes that “these are the laws of probability, which we have proved to be necessarily true of any consistent set of degrees of belief. (…) If anyone’s mental condition violated these laws (…) he could have a book made against him by a cunning better and would then stand to lose in any event.”

In this way Ramsey has found that “a precise account of the nature of partial beliefs reveals that the laws of probability are laws of consistency”. Having degrees of belief that obey the laws of probability implies such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you. This has become known as the Dutch Book theorem.

Ramsey thinks his interpretation has three main advantages in comparison to the interpretation proposed by Keynes:

1. Keynes failed to show why partial beliefs should follow the axioms of the probability calculus; Ramsey’s argument shows that if partial beliefs are consistent, they must obey this calculus.

2. The Principle of Indifference can be dispensed with. There is no need to put any limits on the expectations the subject has if he remains consistent, i.e. that if he has certain expectations, he is bound in consistency to have certain others.

3. Probable beliefs can be justified by direct inspection; there is no need to look for a probability relation between the proposition in question and the things I know for certain.

Ramsey admits that this argument is based fundamentally on betting. He thinks this is reasonable when it is seen that all our lives we are, in a sense, betting: “The options God gives us are always conditional on our guessing whether a certain proposition is true” (Ramsey 1978, 85).

Is it good enough to have consistent beliefs about the future? That would indeed prevent a Dutch Book from being made against you. But should epistemic accuracy not rank higher: how close can we get to the situation in which we give true propositions a probability of one and false propositions a probability of zero?

Ramsey thinks this is “too high a standard to expect of mortal men” (Ramsey 1978, 86). Human beings form their expectations about future events by natural selection and adapt these expectations by incorporating newly observed facts, while keeping the expectations consistent. Ramsey thinks that consistency combined with observation and memory can lead to the truth. When more instances of a phenomenon are observed, my degree of belief will approach the relative frequency. Ramsey calls this the Logic of Consistency, as opposed to the Logic of Truth propagated by Keynes.

After Ramsey, his Logic of Consistency became formalized by using Bayes’s theorem. If, at a particular stage in an inquiry, a scientist assigns a probability Pr(H) to the hypothesis H (call this the prior probability of H) and assigns probabilities to the evidential report E conditionally on the truth of H, PrH(E), and conditionally on the falsehood of H, Pr-H(E), then Bayes’s theorem gives the probability of the hypothesis H conditionally on the evidence E by the formula

PrE(H) = Pr(H)*PrH(E) / (Pr(H)*PrH(E) + Pr(-H)*Pr-H(E))


(Routledge, 2017). Take a medical example as an illustration. Experience has shown that in a certain population a fraction Pr(H) suffers from a certain disease. This is the prior probability that a person belonging to this population indeed suffers from this disease. A test has been developed for deciding whether a specific person does suffer from it. This test gives an indication (E) with probability PrH(E) when a person indeed suffers from the disease, and fails to deliver this indication, a “false negative”, with probability (1 – PrH(E)). Suppose that the test also delivers a positive result when a person does not suffer from the disease, a “false positive”, with probability Pr-H(E). Bayes’s theorem can now be used to calculate the probability that a person who obtained a positive test result indeed suffers from the disease. Take Pr(H) = 0.25 as the prior probability, and the probability of correct results PrH(E) = 0.95. When the probability of a false positive Pr-H(E) = 0.004, then the probability that a person with a positive test result will indeed suffer from this disease, PrE(H), will be (0.25*0.95) / (0.25*0.95 + 0.75*0.004) ≈ 0.988. This means that there is a high probability that a person who has tested positively indeed suffers from the disease, much higher than the prior probability. In general: the prior probability is recalculated by taking the evidence into account. In this way, the subjective probabilities attached to the different possible outcomes can be updated in a consistent way when new evidence becomes available.
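The arithmetic of the medical example can be checked directly; the function below is a straightforward transcription of Bayes's theorem as stated above, applied to the same illustrative numbers:

# Bayes's theorem: posterior probability of the hypothesis H given evidence E.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    numerator = prior_h * p_e_given_h
    return numerator / (numerator + (1.0 - prior_h) * p_e_given_not_h)

# Medical example from the text: Pr(H) = 0.25, PrH(E) = 0.95, Pr-H(E) = 0.004.
print(round(posterior(0.25, 0.95, 0.004), 3))   # about 0.988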

The conclusion is that Ramsey has correctly pointed out that the logical interpretation of probability as proposed by Keynes runs into serious difficulties, while his own subjectivist interpretation has the advantage of being both consistent and measurable in a betting procedure. Although the probabilities so derived will approach relative frequencies after repeated Bayesian updating, there remains the problem of how one should derive the prior probabilities to start with. This problem is acute when dealing with single events or when new disruptive evidence has to be considered. Then it will be inevitable to use argumentation to obtain the probability of the future event needed for taking a decision. Keynes failed to provide a workable solution for this problem, but the subjectivist account does not solve it either.

2.6 Rational Choice Theory and Game Theory

The subjectivist interpretation of probability as developed by Ramsey has been elaborated in several ways. Mathematicians like R. von Mises and Savage axiomatized this interpretation in a rigorous way. Economists like Arrow (1951) incorporated the subjective interpretation of probability into the theory of decision making. When the (future) outcomes of an action are known with certainty, a decision maker should choose the action open to him that will lead to the best result according to his preferences, the one having maximum utility. To lead to a single maximum, the preferences of the decision maker should meet certain criteria, like stability, transitivity, and completeness. When the (future) outcomes of an action are uncertain, however, the decision maker should attach a probability distribution to the conceivable outcomes. The prescription of the theory of decision making then becomes that the action should be chosen that has the maximum expected utility.
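A compact sketch of this decision rule, with invented actions, probabilities and utilities: each available action is scored by its expected utility and the maximising action is chosen.

# Choosing the action with maximum expected utility (illustrative numbers).
# Each action maps to a list of (probability, utility) pairs over outcomes.
actions = {
    "expand production": [(0.6, 120.0), (0.4, -50.0)],   # risky
    "keep capacity":     [(1.0, 40.0)],                  # safe
}

def expected_utility(lottery):
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u for p, u in lottery)

for name, lottery in actions.items():
    print(name, expected_utility(lottery))
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("chosen action:", best)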

In this way, rational choice theory developed. Its main ingredients are a set of preferences over outcomes, a set of possible actions, a set of possible outcomes of each action and a probability distribution over these sets of possible outcomes. The theory says that a decision maker should choose the action that is at least as good, according to his preferences, as every other available action. Although rational choice theory was developed as a normative theory of how a rational decision maker should choose, mainstream economics has also applied it as a description of the actual behaviour of economic agents. A justification for assuming that economic agents choose rationally is evolution. It is assumed that economic agents that make choices other than the choice prescribed by rational choice theory will be wiped out by more rational competitors. The first to apply this kind of reasoning was Alchian (1950): firms will imitate the behaviour of more profitable firms. When conditions change, a process of trial and error will start and show the best way to cope with the new circumstances.

Related to rational choice theory is game theory. Game theory is a branch of applied mathematics that provides tools for analysing situations in which parties make decisions that are interdependent. Game theory started with the assumption that the pay-off of every action of each party is known with certainty. The main models of game theory are the strategic game, the extensive game, and the coalitional game. These models differ in two dimensions. A strategic game and an extensive game focus on the actions of individuals, whereas a coalitional game focuses on the outcomes that can be achieved by groups of individuals; a strategic game and a coalitional game consider situations in which actions are chosen once and for all, whereas an extensive game allows for the possibility that plans may be revised as they are carried out (Osborne 2009, 8). Which model is appropriate to study behaviour depends on the phenomenon studied. From the start it was clear that it was unrealistic to assume that the reactions of the other parties to the strategy chosen would be known with certainty. To loosen this assumption, the pioneers of game theory, Von Neumann and Morgenstern, already introduced so-called mixed strategies, in which parties have preferences regarding lotteries over outcomes and decide on their actions in a probabilistic way, i.e. by applying a frequency distribution over their possible actions (Briggs 2014). Having a lottery over outcomes is equivalent to having preferences over the expected value of a payoff function over deterministic outcomes with known probabilities. To find a so-called Nash equilibrium, i.e. a solution of the game in which no player has an incentive to change its strategy when the other players don’t change theirs, one must make the further assumption that the probabilities that all parties attach to possible outcomes are independent. These are strong assumptions indeed. For coping with imperfect information about the situations in which the actions will take effect, Bayesian games have been developed, in which players react to signals and adapt their beliefs about the actual state in which actions will take effect in a Bayesian way (Osborne 2009, chapter 9). In this way, there is a convergence between the assumptions made in game theory and rational choice theory.
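As a concrete instance of a mixed strategy, the following sketch computes the equilibrium mixing probabilities for Matching Pennies, the textbook two-player zero-sum game (the game and the indifference argument are standard illustrations, not taken from this thesis): each player randomises so that the opponent is indifferent between her two actions.

# Mixed-strategy Nash equilibrium of a 2x2 zero-sum game (Matching Pennies).
# A[i][j] is the row player's payoff; the column player receives -A[i][j].
A = [[1.0, -1.0],
     [-1.0, 1.0]]

def row_mix(A):
    """Probability with which the row player plays row 0 so that the
    column player is indifferent between her two columns."""
    return (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

def col_mix(A):
    """Probability with which the column player plays column 0 so that the
    row player is indifferent between his two rows."""
    return (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

print(row_mix(A), col_mix(A))   # 0.5 0.5: each player randomises evenly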

The application of game theory to the decision making of economic agents contributed to the dominance of rational choice theory in economics. Game theory, like rational choice theory, also has a normative character. Besides trying to improve our understanding of the world, game theory also suggests ways in which an individual’s behaviour may be modified to improve his own welfare.

An extreme example of this normative use of game theory was its post-war application at the highest military and security levels of the USA, the so-called Cold War Rationality. The idea was that wars were caused not by accidents but by decisions, and that the decisions of all parties were interdependent. So, game theory should be applied to make decisions rational. War games were played with military and political decision makers playing roles to simulate the results when prescribed rules would be applied in specific circumstances. In the introduction to their study of Cold War Rationality, Erickson et al. said:

What was distinctive about Cold War Rationality was the expansion of the domain of rationality at the expense of that of reason, asserting its claims in the loftiest realms of political decision making and scientific method – and sometimes not only in competition with but in downright opposition to reason, reasonableness, and common sense. (Erickson et al., 2013, 2)


Cold War Rationality ended with the Cold War itself. Not only was the need for elaborate studies in game theory and rational choice theory less urgent due to the reduced exposure to the risk of Mutual Assured Destruction, but scepticism was also accumulating about whether these theories were describing actual human behaviour in an adequate way. We will turn to this when discussing behavioural economics.

Cold War Rationality has recently made a comeback in the neorealist school of international relations. Key officials of the current Trump administration frequently quote the lessons that Thucydides, the Greek historian of the fifth century BC, drew from the Peloponnesian War (431-404 BC). The relation between Sparta and Athens in that war is now seen as a model for the relation between the USA and China. Thucydides is considered to be the father of the realist school in international relations. In the classical realist school, conflicts were seen as an ineradicable feature of international politics, and how these conflicts started and developed was explained by appealing to the darker features of human nature. The modern neorealist school, however, appeals to rational choice theory. The state is modelled as a unitary rational actor operating under conditions of uncertainty and imperfect information. In contrast to the (neo)liberal school, neorealists think that states should not aim at building international institutions and systems of complex interdependence, but should pursue only what self-interest dictates, akin to competition between economic firms (Korab-Karpowicz, 2017). By embracing both game theory and rational choice theory, mainstream economics concentrated on rationality as the guiding principle to explain the behaviour of economic agents. Rational decision making in game theory and rational choice theory is based on the idea that all uncertainty can be converted into risks by behaving like an astute gambler. Counterparts will be found that are willing to consolidate those risks: from lotteries and auctions to financial products like derivatives, collateralized debt obligations and credit default swaps. Economic agents that fail to follow the normative prescriptions of rationality will not survive in a competitive environment, as explained already by Alchian (1950).

2.7 The Efficient Market Hypothesis

In the seventies, mainstream economics added another argument to game theory and rational decision theory in favour of concentrating exclusively on insurable risks: the efficient market hypothesis. Its starting point was the monetarist school, which stated that there is a natural rate of unemployment, or, better said, a minimum rate of unemployment that will prevent businesses from continually raising prices. This theory implied that the full-employment policies of Keynesianism would only succeed in sparking inflation. Lucas (1972) carried monetarism one step further. He stated that if economic agents were perfectly rational, then they would correctly anticipate any effort on the part of governments to increase aggregate demand and adjust their behaviour correspondingly. This concept of rational expectations means that macroeconomic policy measures are ineffective not only in the long run but also in the very short run.

Lucas derived his theory in a very individualistic way, by his so-called islands model. In this model N islands are assumed, and on each island lives one producer who charges a price for his product. When the price an islander receives for his product rises in comparison to the average price in the whole archipelago, he will raise production, and vice versa. But how can the individual producer know this average price? Because he lives on an island, he can only observe his own price with certainty, and must make a guess about the prices being negotiated on other islands out of the imperfect information he receives from them. How does he make this guess? The rational expectations assumption says that each producer has learned by experience to make use of all available prior information in such a way that the remaining error is white noise: purely random deviations with mean zero and a constant variance. This behaviour is mirrored by the consumer on the island, who also has learned to separate structural changes from random deviations. Lucas showed that this model will result in a general equilibrium with minor fluctuations due to unexpected forecast errors.

This abstract theoretical model has important policy implications:

• Changes in the total money supply will not change production or consumption decisions because individuals will interpret them correctly as just nominal. So, there is no trade-off between unemployment and inflation, as stated in the Phillips curve used in Keynesian economics, that could be used systematically by monetary authorities;

• There will be random fluctuations around the general equilibrium due to the irregularities in the flow of money in the economy. These fluctuations are at the basis of the business cycle. Because over time these fluctuations will even out, there is no need for any government intervention. As more information becomes available, the fluctuations will become smaller. So, the assumption that everyone can correctly separate structural signals from noise leads to a general equilibrium in the economy with full employment without any government intervention. This result is known as the efficient market hypothesis.

Under the rational expectations hypothesis, fluctuations of prices around their equilibrium values are just white noise, the so-called random walk. This means that, under this hypothesis, all uncertainty about the forces influencing prices is reduced to risk, and risk is reduced to random variables having probabilities described by a density function with fixed parameters, e.g. the normal distribution. This reduction is the foundation for the pricing of financial instruments, like the Black-Scholes formula for option pricing (Black/Scholes, 1973) and its extensions.
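For concreteness, the standard Black-Scholes formula for a European call option can be written down in a few lines; the parameter values below are arbitrary illustrations. The formula prices the option as if all uncertainty about the underlying price were captured by a single constant volatility parameter, which is precisely the reduction of uncertainty to a fixed probability distribution described above.

import math

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot price, K: strike, T: years to maturity,
    r: risk-free rate, sigma: volatility (both annualised)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.02, sigma=0.2), 2))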

The financial crisis of 2008, however, raised serious doubts about the wisdom of the view that all uncertainty could be reduced to white noise by means of efficient markets. In the Introduction I quoted the Letter to the Queen of the British Academy as an example of the scepticism towards this view. A main factor in the credit crisis was the neglect of the moral hazard problem discussed above. When the insurer can no longer meet his obligations to compensate damage, the uncertainty returns to the party seeking insurance. Banks could take on too many risks, because they knew that a bad loan, bundled with good ones, could be offloaded to a secondary market of collateralized loans, so passing the risk to a less informed third party. This access to additional liquidity greatly expanded total risk taking, making banks too big to fail. When it became known that many financial institutions were unable to meet their commitments, a chain effect was produced because trading partners could not be found due to a lack of confidence, and it became necessary for governments to step in with taxpayers’ money to prevent a meltdown of the whole financial system. When trading was possible, it was often done at prices that had moved by up to eight times the standard deviation of historical fluctuations. This was far outside the range predicted by the rational expectations hypothesis, falsifying its assumption that by using efficient markets all risks can be reduced to white noise.

2.8 Behavioural Economics

But how adequate is rational choice theory as a description of actual human behaviour? As mentioned above, rational choice theory requires that economic agents have preferences that meet the requirements of stability, transitivity, and completeness. The last one requires that a subject knows all actions that are available to him and their expected outcomes. In practice, however, there are limits to our thinking capacity, available information, and time. According to Herbert Simon, people tend to make decisions by satisficing (a combination of sufficing and satisfying) rather than optimizing (Simon, 1956). Decisions that meet basic decision criteria are often simply good enough considering the costs and constraints involved. Simon calls this bounded rationality. It is rational choice, but the choice is bounded by the limits of human cognitive capacity for discovering alternatives, computing their consequences under certainty or uncertainty, and making comparisons among them. As many decisions will in practice be repeated several times, and people will feel the need to look for the best possible outcome, satisficing is akin to a process of learning.

But will this learning process in the end deliver the same result as the optimizing rationality assumed in rational decision theory? Kahneman and Tversky (1979) denied this. They proposed a new theory of choice, prospect theory, to describe how the environment influences the decisions taken. Their main assumption is that the carriers of value are changes in wealth or welfare, rather than final states. This assumption is compatible with basic principles of perception and judgment. Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes. When we respond to attributes such as brightness, loudness, or temperature, the past and present context of experience defines an adaptation level, or reference point, and stimuli are perceived in relation to this reference point (Kahneman and Tversky, 1979, 277). Furthermore, many sensory and perceptual dimensions share the property that the psychological response is a concave function of the magnitude of physical change. For example, it is easier to discriminate between a change of 3° and a change of 6° in room temperature than between a change of 13° and a change of 16°. For that reason, they hypothesized that the value function for changes of wealth is normally concave above the reference point and convex below it. This can explain loss-aversion: the tendency of people to prefer avoiding losses to acquiring equivalent gains.
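A common parametric form of this value function makes the asymmetry explicit. The sketch below uses the median parameter estimates Tversky and Kahneman reported in their later (1992) work (alpha = beta = 0.88, lambda = 2.25); they are quoted here only for illustration and are not part of the 1979 paper.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Prospect-theory value of a change x relative to the reference point:
    # concave for gains, convex and steeper (by the factor lam) for losses.
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

print(value(100))    # about  57.5
print(value(-100))   # about -129.5: a loss looms larger than an equal gain
```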

Loss-aversion can also explain status-quo bias: people feel greater regret for bad outcomes that result from new actions taken than for bad outcomes that are the consequence of inaction (Kahneman and Tversky, 1982). Loss-aversion and status-quo bias can thus prevent the learning process from ending up where optimizing rationality supposes it will.

Prospect theory was the start of a new branch of applied economics called behavioural economics. Behavioural economics attempts to incorporate the psychologist's understanding of human behaviour into economic analysis. It aims not only at describing behaviour but also at improving decision making by restructuring the conditions (framing) under which decisions are taken, an intervention known as nudging.

In this way, behavioural economics can be seen both as a critique of rational choice theory and as a complement to and enhancement of it.

2.9 Complexity Economics

Another relevant development is the birth of complexity science in the 1980s. Complexity science is concerned with complex systems and problems that are dynamic, unpredictable, and multi-dimensional, consisting of collections of interconnected relationships and parts. Unlike traditional linear "cause and effect" thinking, complexity science is characterized by nonlinearity. It has become relevant for many branches of science, from ecology to meteorology and from geology to political science.

Introducing complexity science into economics would mean abandoning its focus on equilibrium (Arthur, 1999). The actions of individual economic agents lead to an aggregate pattern, and individual agents react to this aggregate pattern, creating recursive loops. Mainstream economics has concentrated on those actions of individual agents that are consistent with the aggregate pattern, resulting in an equilibrium where there is no incentive to change actions. Examples are the Nash equilibrium in game theory and the general equilibrium of the efficient market hypothesis. This comes down to a shortcut through the recursive loops, a shortcut that was a natural way to study the aggregate patterns and to make them amenable to mathematical analysis.

Complexity economics, on the other hand, questions the idea that equilibrium is the normal state of an economy and holds that non-equilibrium is in fact the normal state, for two reasons. The first reason is (Knightian) uncertainty. In the words of Keynes (1937, 214):

"the prospect of a European war … the price of copper … the rate of interest twenty years hence…. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know."

In such cases, one must make a move, but one faces genuine not-knowingness: fundamental uncertainty. There is no "optimal" move. Things worsen when other agents are involved; such uncertainty then becomes self-reinforcing. If I cannot know exactly what the situation is, I can take it that other agents cannot know either. Not only will I have to form subjective beliefs, but I will have to form subjective beliefs about subjective beliefs. And other agents must do the same. Uncertainty engenders further uncertainty. Under such conditions rational decisions are impossible. There might be intelligent behaviour, there might be sensible behaviour, there might be farsighted behaviour, but rigorously speaking there cannot be deductively rational behaviour. Therefore, we cannot assume it (Arthur, 2013, 4).

The second reason is technological change. Mainstream economics incorporates it by allowing that from time to time its equilibria must adjust to such outside changes. But novel technology is not just a one-time disruption to equilibrium; it is a permanent, ongoing generator and demander of further technologies that themselves generate and demand still further technologies. The result is not occasional disruption but ongoing waves of disruption causing further disruptions. Technology breeds further change endogenously and continually, and this throws the economy into a permanent state of disruption.

Certainly, many parts of the economy could still be treated as approximately at equilibrium, and standard theory would still be valid there. Other parts could be treated as temporarily diverging from strongly attracting states, and we could study convergence there. But the non-equilibrium parts cannot be ignored. This calls for considering the economy as a system whose elements are constantly updating their behaviour based on the present situation: an ongoing, vast, distributed, massively parallel, and stochastic computation. The system evolves procedurally in a series of events; it becomes algorithmic.

All this suggests a way forward for our nonequilibrium way of looking at the economy. We can see the economy, or the parts of it that interest us, as the ever-changing outcome of agents' strategies, forecasts, and behaviours. And we can investigate these parts, and classic problems within economics (intergenerational transfers, asset pricing, international trade, financial transactions, banking), by constructing models where responses are specified not just at equilibrium but in all circumstances. Sometimes our models will be amenable to mathematical analysis, sometimes only to computation, sometimes to both. What we can seek is not just equilibrium conditions, but understandings of the formation of outcomes and their further unfolding, and of any dynamic phenomena that appear.
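What such a non-equilibrium, algorithmic model can look like is suggested by the toy sketch below, loosely inspired by Arthur's El Farol bar problem rather than taken from any of the texts discussed here (all numbers are illustrative): a hundred agents each forecast how crowded a market will be, act on that forecast, and then update it in the light of the aggregate outcome they have jointly produced.

```python
import random

random.seed(0)
N, THRESHOLD, ROUNDS = 100, 60, 20

# Each agent holds a forecast of next period's attendance and
# participates only if it expects attendance of at most THRESHOLD.
forecasts = [random.uniform(0, N) for _ in range(N)]

for t in range(ROUNDS):
    attendance = sum(1 for f in forecasts if f <= THRESHOLD)
    # The recursive loop: agents react to the aggregate pattern
    # that their own actions have just created, by adaptive updating.
    forecasts = [0.7 * f + 0.3 * attendance for f in forecasts]
    print(t, attendance)
```

If every agent used the same "rational" forecast, that forecast would defeat itself: expecting a quiet market, all would participate and make it crowded, and vice versa. This is Arthur's point that deductive rationality has no purchase here; the interesting object of study is the ongoing computation itself, not a fixed point of it.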

This approach is reminiscent of Janos Kornai's 1971 book Anti-Equilibrium. Kornai, living in Hungary in the communist period, had studied general equilibrium theory from the perspective of market socialism: would it be possible to let markets instead of planning decide on production in a socialist system? He concluded that general equilibrium is impossible, not only in socialism but in a market economy as well. General equilibrium theory assumes that the market price mechanism is sufficient to guide all economic decisions on production and consumption and to equate supply and demand for all goods and services. In reality, however, these decisions are taken on a much broader information base than prices alone. Demand and supply functions should take into account the response functions and decision algorithms used by the actual decision makers. Especially under conditions of uncertainty, it is highly improbable that these response functions and decision algorithms will result in an equality of supply and demand for all goods and services at the same time. In a market economy, it is more probable that for many goods supply will exceed demand, a situation which Kornai calls a pressure economy. In a socialist economy, on the contrary, the reverse will be normal: the supply of many goods will be unable to meet demand, a shortage economy. Kornai proposed a research programme that would start from considering the economy as a system in which information flows and agents react to those signals. For studying the functioning of such a system, one should use the methods of cybernetics.

A parallel development takes place in artificial intelligence and cognitive science. What use should artificial intelligence make of incomplete information? How are beliefs formed in reaction to perceptions that are not conclusive? In probability theory, this has led to the study of imprecise probabilities; see Bradley (2016) for an overview. It is remarkable that in the literature on imprecise probabilities there is a revival of the logical interpretation of probability, especially Keynes's concept of the weight of an argument.
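A minimal illustration of what an imprecise probability does to a decision: if the probability of an event can only be located in an interval rather than pinned to a point, expected values are bounded instead of unique. The Python sketch below uses arbitrary illustrative numbers of my own.

```python
def expected_value(p, win, lose):
    # Expected payoff of a gamble that pays 'win' with probability p
    # and 'lose' otherwise.
    return p * win + (1 - p) * lose

# The evidence only supports a probability somewhere between 0.2 and 0.6.
p_low, p_high = 0.2, 0.6
win, lose = 100, -50

# Because the expectation is linear in p, its bounds are attained
# at the endpoints of the probability interval.
bounds = [expected_value(p, win, lose) for p in (p_low, p_high)]
print(min(bounds), max(bounds))   # -20.0 40.0
```

Whether the gamble is attractive then depends on more than these numbers; Keynes's weight of an argument can be read as a measure of how much evidence stands behind the interval itself.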

2.10 Conclusion

Mainstream economics has banished Knightian uncertainty from its models. These models assume that all economic agents act rationally and have common knowledge about market conditions and market mechanisms. These assumptions allow the actual complex economy to be described in simple mathematical forms. There remains the risk that actual developments differ from anticipated ones, due to all kinds of outside influences, ranging from changing natural conditions to accidents and social events. Mainstream economics assumes that these outside influences are independent of each other and that their combined effect can be summarized in a disturbance term that is normally distributed with parameters that are constant over time. This assumption makes it possible to calibrate the models on past experience and, using the estimated parameter values, to predict future events. If the assumption about the combined effect of the outside influences is correct, then each economic agent can insure himself against the risk of unanticipated outcomes by hedging.

There are, however, good reasons to be sceptical about this assumption. In the first place, the individual disturbances may not be independent of each other: one can trigger others, setting off positive and negative feedback mechanisms. In that case the variance of the combined disturbance is no longer simply the sum of the individual variances but can be much greater, making risks uninsurable (as the small simulation below illustrates). In the second place, the outside influences can create environments that induce changes in behaviour, for instance refusing to act as a counterpart in a hedging transaction.
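The variance point can be checked with a small simulation (Python, purely illustrative numbers of my own): fifty unit-variance shocks are added up, once independently and once sharing a common driver.

```python
import random

random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_of_sum(rho, n_shocks=50, n_samples=20000):
    # Each shock has unit variance; 'rho' is the weight on a common
    # factor shared by all shocks, a crude way of making them dependent.
    totals = []
    for _ in range(n_samples):
        common = random.gauss(0, 1)
        shocks = [rho * common + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
                  for _ in range(n_shocks)]
        totals.append(sum(shocks))
    return variance(totals)

print(var_of_sum(0.0))   # about 50: equals the sum of the variances
print(var_of_sum(0.5))   # about 660: an order of magnitude larger
```

Diversification and hedging prices calibrated on the independent case will therefore badly understate the tail risk of the dependent case; this is the formal counterpart of the claim that such risks become uninsurable.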


In both cases, risks have become uncertainties again. In the terminology of constructive empiricism, this means that the context in the three-term relationship between theory, fact, and context has changed. In the terminology of unificationism, it means that the relationship between explanandum and explanans now belongs to a different category. In the terminology of scientific realism, it simply means that the theory fails to provide a true description of the world. The prescription is in all cases the same: try to improve the model describing the economic mechanisms at work. In the remaining part of this thesis I will discuss three attempts to amend the models of mainstream economics so as to incorporate the risks that have turned into uncertainties. The purpose should be to safeguard the possibility of analysing the complex economy with relatively simple models that allow for understanding and prediction.
