
Modelling Expectations using Least Squares

Heuristics and Likelihood

Th.E.J. de Kogel

Master Thesis

Advisor: Dhr. Dr. Ir. F.O.O. Wagener

August 18, 2014

Amsterdam School of Economics Faculty of Economics and Business


Abstract

Expectations play an important role in economics. The classical rational expectations hypothesis (REH) does not match the findings of learning-to-forecast experiments, especially in markets with positive feedback. This thesis builds on the universal switching Ordinary Least Squares (sOLS) model introduced by Wagener in 2013 and proposes a model for expectation formation using the data of Bao et al. (2012) on expectations in markets with a structural break in the fundamental price.

This model is subsequently tested against the REH on the data of Bao et al. (2012). The model performs on par with the REH in all markets with negative feedback and it outperforms the REH in all eight markets with positive feedback.


Contents

1 Introduction
2 Theoretical overview
   2.1 Experiments about Expectations
   2.2 Other Models about Expectation Formation
3 The Model
   3.1 Akaike switching Ordinary Least Squares model
   3.2 Adapted Akaike switching Ordinary Least Squares model
4 Results
   4.1 Switching Ordinary Least Squares model
   4.2 Akaike switching OLS model
   4.3 Adapted Akaike switching OLS model
5 Discussion
   5.1 AsOLS+ model on Bao et al. (2012)
      5.1.1 Information set
      5.1.2 Chosen heuristics
      5.1.3 Selection criterion
   5.2 AsOLS+ model on Heemeijer et al. (2009)
6 Conclusion
Bibliography

Chapter 1

Introduction

Expectations matter. Especially in economics, expectations play an important role. Investors who expect a stock to increase in price will buy the stock now and drive up the price themselves; their expectations thus become a self-fulfilling prophecy. How to model these expectations has been a key question over the past years.

The rational expectations hypothesis (REH) has long been the leading paradigm in expectation formation modelling in economics. The REH assumes agents have full knowledge of the market, having access to all publicly and privately owned information. Agents form their expectations based on this full set of information and will predict the fundamental price under these circumstances. The critique of the REH is clear: it assumes too much knowledge on the part of agents. Agents do not have full access to the entire market structure.

Besides this critique, recent experiments have shown that the REH can be quite wrong in predicting the expectations formed by agents. In experiments run by Hommes et al. (2005, 2008), Heemeijer et al. (2009) and Bao et al. (2012), prices deviate substantially from the REH. This deviation is most prominent in markets with a positive feedback structure (where higher expectations lead to higher realized prices). Heemeijer et al. (2009) and Bao et al. (2012) have shown that this feedback structure has a strong effect on aggregate behaviour. In markets with negative feedback, prices are driven towards the fundamental price, whereas in markets with positive feedback the prices tend to oscillate around the fundamental price. Any model built to capture these expectations should be able to cope with both kinds of feedback (something the REH fails to do in the case of positive feedback).

Over the years, more advanced models (see, for example, Sargent (1993), Hommes (1997, 1998), Evans (2001) and Anufriev (2012)) have been built to capture these expectations. However, many of these models are restricted


by the fact that they require additional parameters to be estimated on the data in hindsight. The recently introduced switching Ordinary Least Squares model does not face this restriction. Introduced by Wagener (2013), this model makes use of three simple heuristics and a homogeneous population structure to form expectations based on experimental data. The model performed quite well on the data of Heemeijer et al. (2009), giving better predictions than the REH in markets with a positive feedback structure and not performing much worse than the REH in markets with negative feedback. This thesis will continue on Wagener (2013) and the switching OLS model.

The goal of this thesis is twofold. First, it tests the switching OLS model of Wagener (2013) on the experimental data of Bao et al. (2012) on market behaviour in the case of structural breaks in the fundamental price. Based on these findings, this thesis proposes another OLS model, the Akaike sOLS model, which is designed to better cope with these structural breaks.

The setup of this thesis is as follows. First, there will be an overview of the learning-to-forecast experiments that provide the data on expectations. These data have already led to a couple of other models of expectation formation, which will be discussed second. After this theoretical overview, the Akaike sOLS model will be introduced. The results will subsequently be discussed. This thesis will conclude with a summary of all the findings and some possibilities for future research.


Chapter 2

Theoretical overview

The first methods of obtaining information about expectations were surveys, as reported on by Frankel and Froot (1987), Shiller (1990, 2000) and Turnovsky (1970). However, as it is hard to control (or even capture) the underlying fundamentals, determining the precise heuristics used to form expectations from these surveys is difficult. A convenient way to get clean data on expectations is to elicit these expectations in a laboratory experiment.

2.1

Experiments about Expectations

When it comes to any trading decision on real financial markets, there are usually two choices to be made: first one has to predict a future price and, with that price in mind, make a decision about whether and how much to trade. However, as Bao et al. (2013) show, subjects who have to make both decisions do worse overall than subjects who only have to either form a price forecast or make a trading decision. For this reason, this thesis looks at experiments where participants only have to form a price expectation and where the computer makes the optimal trading decision based on the predicted price, so-called learning-to-forecast experiments (LtFEs).

The participants in these experiments (as in most experiments in economics) are volunteers. In general, participants in learning-to-forecast experiments are told that they are an advisor of some sort to a producer or a pension fund and have to predict the price of a certain good (a consumable or a stock market index) for the following period. Although not always known to the participants, markets usually consist of groups of six (a group size small enough to allow for multiple markets and big enough so that some inference can be made from the data). Experiments usually last for a longer period


of time (more than 40 periods) to allow for some initial learning. During the experiment, participants know very little about the market they are in. They know that they are not the only one in the market and that the realized price is a function of the average expected price. However, they do not know the real underlying pricing mechanism. In general, participants are rewarded based on the quality of their predictions.

Hommes et al. (2005) use this setup and a simple asset pricing model to elicit expectations from participants. The participants were divided into groups of six. The pricing mechanism follows from the simple asset pricing model, in this case being:

p_t = \frac{1}{1+r}\left[(1 - n_t)\,\bar{p}^e_{t+1} + n_t p^f + y + \epsilon_t\right]   (2.1)

Here r, the interest rate, is 0.05. The average dividend y equals 3, \epsilon_t \sim N(0, 1/4) and n_t is a fraction of fundamental traders introduced to prevent the formation of bubbles that can reach the upper limit of the price range. This fraction is a function of the deviation from the fundamental price and cannot be larger than 17% (chosen such that the fundamental traders cannot influence the market much more than one of the other six participants in the market).
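To make the pricing mechanism concrete, the sketch below (Python) computes one realized price from Equation 2.1. The exact rule for the fraction n_t of fundamental traders is not reproduced in the thesis beyond its cap of 17% and its dependence on the deviation from the fundamental price, so the rule used here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

R, Y, P_F = 0.05, 3.0, 60.0   # interest rate, mean dividend, fundamental price y/r


def fraction_fundamentalists(last_price, cap=0.17):
    """Stylized share of fundamental traders: grows with the relative deviation
    from the fundamental price and is capped at 17% (assumed functional form)."""
    return min(cap, abs(last_price - P_F) / P_F)


def realized_price(mean_forecast, last_price):
    """Equation 2.1: realized price implied by the average one-period-ahead forecast."""
    n_t = fraction_fundamentalists(last_price)
    eps = rng.normal(0.0, 0.5)             # epsilon_t with variance 1/4
    return ((1 - n_t) * mean_forecast + n_t * P_F + Y + eps) / (1 + R)


print(realized_price(mean_forecast=65.0, last_price=63.0))
```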

Given this setup, Hommes et al. (2005) find that, even though the markets are all based on the same treatment, aggregate behaviour is completely different between markets. In two out of the ten markets the aggregate price shows a monotonic convergence to the fundamental price, while in another three markets the aggregate price also converges to the fundamental price, but does so with oscillations. A fifth market also shows convergence, but starts with permanent oscillations before the motion becomes monotonically convergent at a certain point. The other four markets all show persistent oscillations. As different as the behaviour is between markets, it is equally similar within markets. Hommes et al. (2005) find that, in all ten markets, participants seem to coordinate on some common prediction heuristic within that market. For 41 out of 60 participants, this rule seems to follow some sort of autoregressive scheme. For another sixteen, it is difficult to specify an exact heuristic.

A critique of this setup would be the introduction of a fraction of fundamental traders. However, Hommes et al. (2005) also run the same experiment without the fundamental traders and still find the same coordination on a similar prediction strategy. In 2008, a similar setup is used within an asset pricing model, again without the fundamental traders, by Hommes et al. (2008). Here the markets show significant bubbles (which only die out due to the upper price limit of 1000). Given that the fundamental price lies at p = 60,


these bubbles can be considered to be persistent. Hommes et al. (2008) find that these bubbles are driven by the trend-following behaviour of the participants.

Hommes et al. (2005, 2008) find that a simple asset pricing model combined with the behaviour of participants causes irrational price behaviour. Besides the simple asset pricing model, another commonly used underlying pricing mechanism is the cobweb model. The simple asset pricing model is commonly used to mimic the financial market; the cobweb market is used to mimic the market for consumable (or perishable) goods. Hommes et al. (2007) use the cobweb model to test the stability of participants' predictions. In this experiment, the participants are advisors to producers of a certain good. The demand curve is linear and fixed at

D(p_t) = a - b\,p_t + \epsilon_t   (2.2)

with a, b > 0 and \epsilon_t representing a possible shock in the demand. The supply curve is nonlinear and depends on the price expectations of all six participants in the market:

S(\bar{p}^e_t) = \sum_i S(p^e_{i,t}) = \sum_i \left[\tanh\!\big(\lambda(p^e_{i,t} - 6)\big) + 1\right]   (2.3)

Here \lambda differs between treatments: one \lambda where the model is stable, one where it is slightly unstable and one where it is highly unstable under naive expectations (p^e_t = p_{t-1}).
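As an illustration of how a realized price could follow from these two curves, the sketch below assumes the standard cobweb market-clearing step, with p_t solving D(p_t) = \sum_i S(p^e_{i,t}); both this step and the parameter values are illustrative assumptions, not taken from the thesis or from Hommes et al. (2007).

```python
import numpy as np


def cobweb_price(expected_prices, a=13.8, b=1.5, lam=0.5, eps=0.0):
    """Market clearing in the cobweb economy: total supply follows Equation 2.3
    applied to the six individual forecasts, and the realized price solves the
    linear demand of Equation 2.2, i.e. a - b*p + eps = S, so p = (a + eps - S) / b.
    Parameter values are assumed for illustration only."""
    supply = np.sum(np.tanh(lam * (np.asarray(expected_prices) - 6.0)) + 1.0)
    return (a + eps - supply) / b


print(cobweb_price([5.8, 6.1, 6.0, 5.9, 6.2, 6.0]))
```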

Hommes et al. (2007) find that the expectations of participants are closely related to the rational expectations. In the stable and the slightly unstable markets, the first and second moment mimic those of rational expectations, while in the unstable treatment only the first moment mimics that of rational expectations. This is in contrast with the findings of Hommes et al. (2005). The striking difference between the market expectations in the experiments by Hommes et al. (2005) and Hommes et al. (2007) inspired Heemeijer et al. (2009) to take a closer look at the effect of the feedback structure on agents' expectations and behaviour. The market in Hommes et al. (2007) was set up in a cobweb framework (negative feedback), while the market in Hommes et al. (2005) was set up in a simple asset pricing framework (positive feedback). However, as the experimental setup differed substantially between both experiments, direct comparison is difficult. Hommes et al. (2005) also hint at the possible effect of the underlying feedback system, but leave this as something for future research. To see how the sign of the feedback affects the expectations of participants, Heemeijer et al. (2009) design an experiment where the only difference between treatments is the sign of the feedback. The pricing mechanism is:

p_t = \alpha + \beta\,\bar{p}^e_t + \epsilon_t   (2.4)

with \bar{p}^e_t = \frac{1}{6}\sum_i p^e_{i,t}, the average expectation of all participants in the given market. The combination of \alpha and \beta is chosen such that the fundamental price in all markets is 60. For the negative feedback markets \beta = -0.95 and \beta = 0.95 for the markets with positive feedback. Each treatment consists of 6 markets, each with 6 participants. The error term is drawn from the normal distribution with mean 0 and variance 1/4.
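A minimal sketch of this linear feedback rule, using the form p_t = \alpha + \beta\bar{p}^e_t + \epsilon_t that the thesis restates in Section 5.1.2, with \alpha chosen so that the fundamental price is 60 in both treatments:

```python
import numpy as np

rng = np.random.default_rng(1)


def realized_price(forecasts, beta, p_fund=60.0, noise_sd=0.5):
    """Linear feedback rule p_t = alpha + beta * mean(forecasts) + eps, with
    alpha = (1 - beta) * p_fund so that p_fund is the fundamental price.
    beta = -0.95 gives negative feedback, beta = +0.95 positive feedback;
    the noise has variance 1/4 as in Heemeijer et al. (2009)."""
    alpha = (1.0 - beta) * p_fund
    eps = rng.normal(0.0, noise_sd)
    return alpha + beta * float(np.mean(forecasts)) + eps


print(realized_price([55, 58, 60, 61, 62, 63], beta=0.95))
print(realized_price([55, 58, 60, 61, 62, 63], beta=-0.95))
```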

Figure 2.1: Results Heemeijer et al. (2009): negative feedback

The results are indeed quite different between the treatments. Figure 2.1 shows the average forecast, the fundamental price and the realized price for four representative markets with the negative feedback structure. In the markets with negative feedback, there appears to be quick convergence towards the fundamental price, after which there also appears to be a coordination on a similar expectation rule (per market). Heemeijer et al. (2009) argue that this follows from the negative feedback structure. In markets with negative feedback, it is beneficial to deviate from the average market expectation.


This implies that prices quickly converge to the fundamental price (as this is the only price where deviation does not pay off). This is the opposite of what is happening in the markets with positive feedback. Figure 2.2 shows the same graphs for four markets with positive feedback. In these markets it is beneficial to follow the other participants in their expectations. This is also what is shown in the experiment of Heemeijer et al. (2009). The realized price (as well as the average expectation) oscillates around the fundamental price. There appears to be quick convergence on a similar expectation rule, but no convergence towards the fundamental price.

Figure 2.2: Results Heemeijer et al. (2009): positive feedback

Inspired by the findings of Heemeijer et al. (2009), Bao et al. (2012) test how markets with the different feedback structures cope with structural breaks in the fundamental price. The setup follows that of Heemeijer et al. (2009) with a few deviations. Firstly, the number of periods has been increased to 65. The pricing rule is still the same, but the fundamental price is no longer fixed,


but jumps twice during the experiment. For the first 20 periods it equals 56, for the next 22 it is 41 and for the last 23 periods it equals 62. The jumps in the fundamental price are unknown to the participants. The error term is drawn from the normal distribution with mean 0 and variance 0.09.

Figure 2.3 shows the findings for four out of eight markets with positive feedback and Figure 2.4 shows the findings for the markets with negative feedback. Bao et al. (2012) again find that agents react quite differently to the shocks in the fundamental price. In the case of a negative feedback structure, participants are able to process the shock almost instantly after it occurs. Due to the negative feedback, the price converges to the fundamental price only a few periods after the fundamental value has changed and remains there until the next shock. This is quite different from the markets with positive feedback. These markets show a tendency to underreact to the permanent shock in the short run and to overreact to it in the long run.


Figure 2.4: Results Bao et al. (2012): positive feedback

2.2

Other Models about Expectation Formation

Over the years, more and more focus has been put on models regarding the expectations of agents. When building a model of expectation formation, a couple of things should always be considered. First, one should look at the internal and external validity of the model. A forecasting model is internally valid if it only makes use of data that is known to the subjects at the given point in time. For example, the REH assumes agents to have full knowledge of the market they are in, while this is not the case in real life nor in experiments, so the REH is not internally valid. A model is externally valid if it also performs well out of sample.

A forecasting model is usually built up out of three things: what information to use, what heuristics to use to make forecasts and, lastly, what kind of selection method determines which rule (heuristic) to use. Examples of forecasting models can be found in Sargent (1993), Hommes (1997, 1998), Evans (2001) and Anufriev (2012). This section, however, will not discuss these models because the aim of this thesis is to build a universal model. The aforementioned models all require additional parameters to be estimated on the data and thus are not universal.

A model which does not face this issue is the switching Ordinary Least Squares (sOLS) model introduced by Wagener (2013). Its strength lies in the fact that it too, like the REH, is a universal model.

The sOLS model is built on the findings of Heemeijer et al. (2009). Based on the fact that there is fast coordination of expectations, Wagener (2013) treats the agents as homogeneous. The agents form their expectations based on all publicly known data (all prices known up to period t). This information set at time t is:

I_{t-1} = \{p_1, \ldots, p_{t-1}\}   (2.5)

Agents use this information set to estimate three least squares heuristics (with number of lags l={0,1,2}):

p^e_{l,t}(I_{t-1}) = c^l_{0,t} + \sum_{s=1}^{l} c^l_{s,t}\, p_{t-s}   (2.6)

The LS heuristics are estimated on the prices up to, but not including, p_{t-1}. Selection takes place based on the lowest sum of squared prediction errors, \sum_{s=1}^{t-1}(p^e_{l,s} - p_s)^2. This means the heuristics are used to forecast all prices up to period t, including p_{t-1}, and then the heuristic with the lowest sum of squared prediction errors is selected to predict the price for period t. The sOLS model has been tested on the data from Heemeijer et al. (2009). Figures 2.5 and 2.6 show the average expectation, the REH (fundamental price) and the sOLS model prediction for markets with positive feedback and negative feedback, respectively.

As can be seen, the sOLS model outperforms the REH in all markets with positive feedback. It sticks closely to the average forecast of the participants in each market. In the markets with negative feedback, the sOLS does not perform better than the REH. However, as can be seen from Figure 2.6, both models do stay close to the average expectation from the laboratory experiments of Heemeijer et al. (2009). Wagener (2013) shows that this sOLS model indeed satisfies internal and external validity (when making multiperiod forecasts, the results only favour the sOLS model more).
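A minimal sketch (in Python) of a single sOLS forecasting step as described above: the three heuristics with lags 0, 1 and 2 are fitted by OLS on the prices excluding the last observation, judged by their squared in-sample prediction errors, and the winner forecasts the next price. The minimum-data guards and the treatment of the very first periods are assumptions; Wagener (2013) may handle them differently.

```python
import numpy as np


def _design(prices, lags):
    """Regressor matrix (constant plus `lags` lagged prices) and targets."""
    n = len(prices)
    y = prices[lags:]
    cols = [np.ones(n - lags)]
    for k in range(1, lags + 1):
        cols.append(prices[lags - k:n - k])
    return np.column_stack(cols), y


def sols_forecast(prices):
    """One-step sOLS forecast of p_t from the known prices p_1, ..., p_{t-1}."""
    prices = np.asarray(prices, dtype=float)
    best_sse, best_forecast = np.inf, prices[-1]
    for lags in (0, 1, 2):
        if len(prices) < 2 * (lags + 1):           # assumed minimum-data guard
            continue
        X_fit, y_fit = _design(prices[:-1], lags)  # estimate without p_{t-1}
        coef, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
        X_all, y_all = _design(prices, lags)       # forecast all known prices
        sse = np.sum((X_all @ coef - y_all) ** 2)
        if sse < best_sse:
            x_next = np.concatenate(([1.0], prices[::-1][:lags]))  # [1, p_{t-1}, ...]
            best_sse, best_forecast = sse, float(x_next @ coef)
    return best_forecast


print(sols_forecast([55.0, 57.0, 58.5, 59.2, 59.8, 60.1]))
```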

Given the results of the sOLS model on the data of Heemeijer et al. (2009), the question arises how the sOLS model performs on other data. This question will be answered in the following two chapters.


Chapter 3

The Model

The results of using the switching OLS model on the data of Bao et al. (2012) (these results can be found in the next chapter) reveal some areas for improvement. Most prominent is the fact that the sOLS model seems to underestimate the predictions in the last 20 periods. This is due to the fact that the sOLS model makes use of the full information set available up to period t. However, Bao et al. (2012) find that agents seem to be good at distinguishing between useful and useless data. This is where this thesis starts with its universal model.

3.1

Akaike switching Ordinary Least Squares model

Building on the original switching OLS model by Wagener (2013) and the findings of Bao et al. (2012), the new model is set up as follows. Given the rapid coordination found in the experiments run by Heemeijer et al. (2009) and Bao et al. (2012), agents are assumed to be homogeneous, as in the sOLS model. Holding on to the notion of internal validity, agents have access to all publicly known data (all past realized prices). Following the notation of Wagener (2013), the total information set in period t is

I_t = \{p_1, \ldots, p_{t-1}\}   (3.1)

However, in the model introduced in this thesis, agents can choose which information set they will use. When confronted with a structural break in their data, agents may choose to only use the information of the last s periods. These information sets, with 1 < s \leq t - 1, are denoted as

I_t^s = \{p_{t-s}, \ldots, p_{t-1}\}   (3.2)

Agents use these information sets for the same three least squares heuristics as in the sOLS model:

p^e_{1,t} = \hat{c}_0   (3.3)

p^e_{2,t} = \hat{c}_0 + \hat{c}_1 p_{t-1}   (3.4)

p^e_{3,t} = \hat{c}_0 + \hat{c}_1 p_{t-1} + \hat{c}_2 p_{t-2}   (3.5)

Selection in each period t takes place as follows. Agents first estimate Equation 3.3 on all available data sets. As the model uses the entire information set for both estimation and testing, Equation 3.3 (rule 1) requires at least two observations. This means that at any period t there will be t - 2 estimates produced by rule 1 for period t - 1, namely \{p^e_{12,t-1}, p^e_{13,t-1}, \ldots, p^e_{1(t-1),t-1}\}. These values can be tested against the actual value of p_{t-1} using the selection criterion discussed below. Then the best performing information set will be selected for rule 1 (i.e., the information set which would have led to the best prediction in the previous period). The same will be done for Equation 3.4 (rule 2) and Equation 3.5 (rule 3). However, rule 2 requires at least four observations and rule 3 requires at least six observations. So at any point in time past period 7 (rules will only be selected if there is enough information to estimate and test them), each rule will have selected its best performing information set. Then the three predictions for p_{t-1} formed by these three rules are compared to each other using the same selection criterion, and the best performing rule will be selected to generate a forecast for period t.

Selection will take place based on the absolute prediction error and on the Akaike Information Criterion corrected for smaller sample sizes (AICc). The prediction error (for rules 1, 2 and 3) is defined as:

x_{rs,t} = p^e_{rs,t} - p_t   (3.6)

This implies that this model only looks at the performance in the last period and not over the entire information set; it only uses the information set to estimate the coefficients of said rules. Given that the error terms originate from the normal distribution in both Heemeijer et al. (2009) and Bao et al. (2012), the log likelihood will be calculated from the Student's t-distribution (this is a small deviation from a universal model, but in the chapter "Discussion" it will be argued that this can be converted to a selection criterion which does fit a universal model). The AICc has been selected as the criterion because it can deal with different sample sizes better than the log likelihood criterion itself. When taking the AICc over all rules (r = 1, 2, 3) and information sets (s = 2, \ldots, t - 1), the selection criterion becomes:


Choose the rule r and information set I_t^s that minimize the AICc, where the AICc is calculated as follows:

AICc = \frac{2r(r+1)}{n - r - 1} + 2r - 2\hat{L}_{s,r}   (3.7)

where n stands for the number of observations and \hat{L} represents the log-likelihood function, in this case:

\hat{L}_{s,r} = \frac{\Gamma\!\left(\frac{n}{2}\right)}{\sqrt{(n-1)\pi}\;\Gamma\!\left(\frac{n-1}{2}\right)} \left(1 + \frac{x_{rs,t}^2}{n-1}\right)^{-\frac{n}{2}}   (3.8)

with \Gamma(\cdot) being the Gamma function.

The choice of the AICc as selection criterion does, however, increase the minimum length of the information set to three. This just means there is one less rule-1 candidate to select from. The combination of rule and information set with the lowest AICc, call this combination RS_t, will then be used to predict the price in period t:

p^e_{AsOLS,t}(I_t) = p^e_{RS_t,t}(I_t^S)   (3.9)

This model shall be called the Akaike switching Ordinary Least Squares model.
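A minimal sketch (in Python) of one AsOLS forecasting step under one possible reading of the procedure above: each rule is fitted on the s most recent prices, judged by its error at p_{t-1} through the AICc of Equations 3.7-3.8, and the winning rule/information-set pair forecasts p_t. Taking the log of the density in Equation 3.8 as \hat{L}, and setting n equal to the information-set length s, are assumptions.

```python
import math
import numpy as np


def _design(prices, lags):
    """Constant plus `lags` lagged prices as regressors, with targets y."""
    n = len(prices)
    y = prices[lags:]
    cols = [np.ones(n - lags)]
    for k in range(1, lags + 1):
        cols.append(prices[lags - k:n - k])
    return np.column_stack(cols), y


def _aicc(x, n, r):
    """Equations 3.7-3.8 with the log of the Student-t density as L-hat (assumed)."""
    density = (math.gamma(n / 2)
               / (math.sqrt((n - 1) * math.pi) * math.gamma((n - 1) / 2))
               * (1 + x ** 2 / (n - 1)) ** (-n / 2))
    return 2 * r * (r + 1) / (n - r - 1) + 2 * r - 2 * math.log(density)


def asols_forecast(prices):
    """One-step AsOLS forecast: jointly select rule r (r parameters, r-1 lags)
    and an information-set length s, then forecast p_t with the winner."""
    prices = np.asarray(prices, dtype=float)
    t1 = len(prices)                            # known prices p_1, ..., p_{t-1}
    best_crit, best_forecast = np.inf, prices[-1]
    for r in (1, 2, 3):
        lags = r - 1
        for s in range(max(3, 2 * r), t1 + 1):  # rule 2 needs 4 obs, rule 3 needs 6
            window = prices[t1 - s:]
            X, y = _design(window, lags)
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = float(X[-1] @ coef - y[-1])   # prediction error at p_{t-1}
            crit = _aicc(err, s, r)
            if crit < best_crit:
                x_next = np.concatenate(([1.0], window[::-1][:lags]))
                best_crit, best_forecast = crit, float(x_next @ coef)
    return best_forecast


print(asols_forecast([56.2, 55.8, 56.1, 55.9, 41.3, 41.0, 40.8, 41.2]))
```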


3.2

Adapted Akaike switching Ordinary Least Squares model

As the results in the next chapter show, the Akaike switching OLS model seems to have two critical points. The model produces enormous predictions in the first period after each structural break becomes known to the agents in the markets. At these periods (t = 22 and t = 45), and only at these periods, the model seems to break down. However, as these spikes only last one period and occur at the point in time when the structural break first becomes known to the agents, this can be circumvented by introducing an additional requirement. This is specified as follows: when the prediction in any period is off by more than \alpha of the realized price, the model defaults back to the simplest rule, in this case p^e_t = \hat{c}_0. This can be related to human behaviour, where agents select the simplest heuristic when they do not know which data they can trust. However, the implementation of this additional rule in the Akaike sOLS model, which leads to the adapted Akaike sOLS (AsOLS+), does mean that the model can no longer be called universal, as the parameter \alpha needs to be chosen such that it does not interfere in markets with positive feedback (where large jumps in prices are not uncommon). This additional parameter will be discussed in the chapter "Discussion" as well.
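A minimal sketch of this safeguard, under one reading of the trigger (the new forecast deviating from the last realized price by more than \alpha times that price) and with the sample mean of the known prices standing in for the constant rule \hat{c}_0; both details are assumptions, since the thesis does not spell out either.

```python
def apply_fallback(asols_forecast_t, last_price, known_prices, alpha=0.30):
    """AsOLS+ safeguard: if the AsOLS forecast is off from the last realized
    price by more than a fraction alpha, fall back to the simplest rule
    (a constant, taken here as the mean of the known prices)."""
    if abs(asols_forecast_t - last_price) > alpha * last_price:
        return sum(known_prices) / len(known_prices)
    return asols_forecast_t


# A spike like the ones at periods 22 and 45 would be replaced by the constant rule:
print(apply_fallback(asols_forecast_t=120.0, last_price=41.0,
                     known_prices=[56.0, 55.5, 41.2, 40.9, 41.1]))
```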

The results of the sOLS model, AsOLS model and AsOLS+ model can be found in the next chapter.


Chapter 4

Results

Before showing the results of the Akaike switching OLS model, the sOLS model is first tested on the data of Bao et al. (2012).

4.1

Switching Ordinary Least Squares model

Figure 4.1 shows the outcome of the sOLS on the data of Bao et al. (2012) in four out of eight markets with positive feedback. These four representative runs show that the sOLS does again beat the classic REH. This is confirmed by testing the quality of prediction. The quality of prediction is defined as the mean absolute error of the forecasts: the mean of the absolute deviation between the forecast produced by the model and the forecast gathered from the laboratory (so a value closer to zero is better). Formally, the quality of prediction in this thesis is defined by:

e_{model} = \sum_{t=10}^{T} |p^e_{lab,t} - p^e_{model,t}|   (4.1)

For each run this gives two quality-of-prediction values, namely that of the sOLS and that of the REH. Table 4.1 shows the quality of prediction for each model and each run in the case of positive feedback. The table also shows the p-value for the hypothesis that both means are equal. This has been tested by first testing whether the variances of the quality of prediction were equal and then using the appropriate t-test. These p-values indeed show that the sOLS does perform better than the REH in the markets with positive feedback (as expected from the results of Wagener (2013) on Heemeijer et al. (2009)).
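A minimal sketch of this comparison: the per-period absolute deviations of Equation 4.1 are computed for both models, a test for equal variances decides between the pooled and the Welch t-test, and the resulting p-value is reported. The thesis does not name the exact variance test; Levene's test stands in for it here, and the error series in the example are artificial.

```python
import numpy as np
from scipy import stats


def absolute_errors(lab_forecast, model_forecast, start=10):
    """Absolute deviations |p^e_lab,t - p^e_model,t| from period `start` onwards."""
    lab = np.asarray(lab_forecast, dtype=float)[start - 1:]
    model = np.asarray(model_forecast, dtype=float)[start - 1:]
    return np.abs(lab - model)


def equal_means_pvalue(errors_a, errors_b, var_alpha=0.05):
    """p-value of the hypothesis that both mean absolute errors are equal,
    using a pooled t-test if equal variances are not rejected, Welch otherwise."""
    equal_var = stats.levene(errors_a, errors_b).pvalue > var_alpha
    return stats.ttest_ind(errors_a, errors_b, equal_var=equal_var).pvalue


# Illustration with artificial error series of the length used in Bao et al. (2012):
rng = np.random.default_rng(2)
e_sols = np.abs(rng.normal(0.0, 0.5, size=56))
e_reh = np.abs(rng.normal(0.0, 6.0, size=56))
print(equal_means_pvalue(e_sols, e_reh))
```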


Figure 4.1: Switching Ordinary Least Squares model (positive feedback)

Run   sOLS   REH    p-value
1     0.24   5.99   < 10^-10
2     0.78   7.36   < 10^-10
3     0.28   5.36   < 10^-10
4     0.57   5.24   < 10^-10
5     0.41   5.70   < 10^-10
6     0.30   6.12   < 10^-10
7     0.45   5.82   < 10^-10
8     0.52   6.56   < 10^-10

Table 4.1: Quality of prediction sOLS and REH (positive feedback)


Run   sOLS   REH    p-value
1     2.27   1.34   0.29
2     2.92   1.09   0.16
3     2.28   0.98   0.14
4     4.05   1.33   0.11
5     1.96   1.44   0.49
6     3.40   2.27   0.23
7     3.53   1.00   0.17
8     2.32   1.20   0.23

Table 4.2: Quality of prediction sOLS and REH (negative feedback)

The same has been done for the markets with a negative feedback structure. Figure 4.2 shows the results of the sOLS in four representative markets with negative feedback. The sOLS fails to best the REH in any of the eight markets. Table 4.2 shows the quality of prediction for the sOLS and the REH in all eight markets with negative feedback from Bao et al. (2012). The p-values shown in Table 4.2 are calculated in the same manner as in Table 4.1. These p-values from Table 4.2 show that the REH does not do significantly better than the sOLS model, but the REH does have a lower average deviation in all runs with negative feedback.


Figure 4.2 shows two areas where there is room for improvement for the sOLS model. Firstly, after the first structural break, the sOLS model shows estimates that are way off (for instance, see run 4). Secondly, the sOLS model consistently underestimates the prediction in the last 20 periods. This is due to the fact that the sOLS takes all past prices into its estimations. This is where the Akaike switching OLS model should be an improvement.

4.2

Akaike switching OLS model

Figure 4.3 shows the result of the AsOLS model on the data of Bao et al. (2012) in some of the markets with positive feedback. The runs shown in Figure 4.3 are the same runs as shown in Figure 4.1.

Figure 4.3: Akaike switching Ordinary Least Squares (positive feedback)

The findings are similar to those of the original sOLS model. In all eight markets the AsOLS model outperforms the REH. This is also confirmed by taking a look at the quality of prediction for the AsOLS model and the REH in Table 4.3: the AsOLS does significantly better than the REH.


Run   AsOLS   REH    p-value
1     0.53    5.99   < 10^-10
2     0.91    7.36   < 10^-10
3     1.04    5.36   < 10^-9
4     0.85    5.24   < 10^-10
5     0.89    5.70   < 10^-10
6     0.90    6.12   < 10^-10
7     0.90    5.82   < 10^-9
8     0.98    6.56   < 10^-10

Table 4.3: Quality of prediction AsOLS and REH (positive feedback)


Using the AsOLS model on the runs with a negative feedback structure gives Figure 4.4. The results here are less clear. What stands out in these graphs are the sudden, enormous jumps in the price expectation of the AsOLS model at the moments the structural break becomes known to the AsOLS model (periods 22 and 45).

Figure 4.5 shows the results for run 4 with negative feedback to show the size of the jumps. These enormous deviations also show up in the quality of prediction of the AsOLS model, as seen in Table 4.4. Given that these large spikes only happen once or twice during the 55 periods (the first ten periods do not affect the quality of prediction), the p-values may appear misleading. However, given that the spikes only last for one period, after which the AsOLS model does give good predictions, the Adapted AsOLS model should be a good contender for the REH in the runs with negative feedback.


4.3

Adapted Akaike switching OLS model

This new adaptation of the AsOLS model, the Adapted Akaike switching OLS (AsOLS+) model, has been set up in such a way that it does not affect the findings in the markets with positive feedback. The value of \alpha has thus been set at 30%. However, it should be an improvement in the markets with negative feedback. Figure 4.6 shows the results of the AsOLS+ model in four representative runs with negative feedback. The additional parameter only leads to changes in the prediction for periods 22 and 45 (exactly when the structural breaks become known to the model, or to the agents in the laboratory).

Figure 4.6: Adapted Akaike sOLS (negative feedback)

As seen from the four displayed markets in Figure 4.6, the new AsOLS+ model does better than the AsOLS model. However, it is hard to see whether the AsOLS+ model beats the REH. Taking a look at Table 4.5 reveals that the AsOLS+ model and the REH have a similar quality of prediction; the AsOLS+ does slightly better in 7 out of 8 markets, but these differences are not significant. The results of the AsOLS+ model will be discussed in the next chapter.


Run   AsOLS   REH    p-value
1     43.16   5.99   0.32
2     6.98    7.36   0.33
3     50.34   5.36   0.26
4     48.47   5.24   0.27
5     8.18    5.70   0.34
6     9.53    6.12   0.36
7     0.87    5.82   0.81
8     43.75   6.56   0.33

Table 4.4: Quality of prediction AsOLS and REH (negative feedback)

Run   AsOLS+   REH    p-value
1     0.92     1.34   0.48
2     1.10     1.09   0.98
3     0.94     0.98   0.95
4     1.30     1.33   0.97
5     1.28     1.44   0.78
6     1.52     2.27   0.31
7     0.87     1.00   0.81
8     1.11     1.20   0.87

Table 4.5: Quality of prediction AsOLS+ and REH (negative feedback)


Chapter 5

Discussion

The AsOLS+ model shows that it is possible to mimic (or recreate) agents' forecasts from the lab. Besides the results regarding the quality of prediction discussed previously, there are some other areas that require attention. These will be discussed here.

5.1

AsOLS+ model on Bao et al. (2012)

5.1.1

Information set

One of the strengths of the AsOLS+ model is its ability to select its own dataset. Figure 5.1 shows the average length of the information set used by the model in all eight markets with negative feedback. The blue line represents the average amount of data included in the optimal information set as selected by the AsOLS+ model. The black line represents the optimal amount of data that should be in the information set if agents are assumed to be fully rational.


Figure 5.1: Data selection (negative feedback)

What is clear here is the fact that the average data length indeed approaches the optimal data length. What also becomes clear is that the model suffers from some sort of chaotic behaviour in the first few periods after a structural break. Figure 5.2 shows the same picture for the eight runs with positive feedback. This figure shows a completely different image. The AsOLS+ model uses more and more data until period 35, at which point the amount of data used drops drastically. This might be due to the fact that the estimates of the agents in the experiment of Bao et al. (2012) on average break through the lower fundamental value. In the last 10 periods, however, the model again chooses to use nearly the optimal amount of data.


Figure 5.2: Data selection (positive feedback)

5.1.2

Chosen heuristics

The AsOLS+ could choose between three different least squares heuristics to form an expectation. What stands out in the actual selection of a given heuristic is that the AsOLS+ has a tendency to select the simplest rule nearly every time. Only in the periods of chaos (that is, the first few periods after a structural break) does the model select another rule; overall it prefers the simplest rule, p^e_t = \hat{c}_0. This might give rise to some concern, as the simplest rule uses the least amount of information while the underlying pricing mechanism in Bao et al. (2012) is of the form p_t = \alpha + \beta \bar{p}^e_t. However, given the quality of prediction of the AsOLS+, this might mean that agents in a laboratory experiment also tend to go for the simplest heuristics. Again, this model shows that it is possible to capture expectations using simple heuristics.


5.1.3

Selection criterion

The Akaike sOLS+ model makes use of the Akaike Information Criterion corrected for smaller sample sizes (AICc) as its selection method. This method was chosen because the likelihood on its own seemed too sensitive to small changes in the realized prices (with smaller sample sizes), leading to many more spikes, as seen in the AsOLS model. However, in this AsOLS+ model, the AICc is built on the log likelihood of the Student's t-distribution. This is only valid because the underlying error terms in both the Heemeijer et al. (2009) and the Bao et al. (2012) experiments were constructed using the normal distribution. This, however, is not a characteristic of a universal model. A better selection criterion thus might be based on a non-parametric distribution estimated on the data internally by the model. This is something left for future research, but it would bring the AsOLS+ another step closer to being a true universal model, one that might even beat the Rational Expectations Hypothesis.
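A minimal sketch of the kind of criterion suggested here, using SciPy's Gaussian kernel density estimate of the past prediction errors in place of the Student-t likelihood of Equation 3.8; this is one possible stand-in for "a non-parametric distribution estimated internally", not the author's specification.

```python
import numpy as np
from scipy.stats import gaussian_kde


def nonparametric_log_likelihood(past_errors, new_error):
    """Log-likelihood of the latest prediction error under a kernel density
    estimate of the past errors; could replace the t-based L-hat in the AICc."""
    kde = gaussian_kde(np.asarray(past_errors, dtype=float))
    return float(np.log(kde(new_error)[0]))


print(nonparametric_log_likelihood([0.4, -0.2, 0.1, -0.5, 0.3, 0.0], 0.25))
```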

5.2

AsOLS+ model on Heemeijer et al. (2009)

Given that the original sOLS model was tested on the data of Heemeijer et al. (2009), it is also interesting to see how the AsOLS+ model fares on these data. Figure 5.3 shows the AsOLS+ prediction in four markets with positive feedback and Figure 5.4 shows the same for markets with negative feedback.

As can be seen, the prediction of the AsOLS+ sticks quite closely to the prediction from the experiment in both the negative and the positive feedback structure. This is also confirmed by the quality of prediction in Table 5.1 and Table 5.2. There is no statistical difference between the results of the sOLS model and the AsOLS+ model in the markets with positive feedback, although it has to be said that the sOLS model does perform better in those markets. In the markets with negative feedback, there is no statistical difference between the quality of prediction of the REH and the quality of prediction of the Akaike sOLS+ model.


Figure 5.3: AsOLS+ on Heemeijer et al. (2009): positive feedback (Runs 1-4; series shown: Forecast Lab, Fundamental Price, Forecast AsOLS)

Run   sOLS   REH    AsOLS+
1     0.48   4.81   0.19
2     0.34   3.60   0.34
3     0.20   2.05   1.56
4     0.61   7.43   0.17
5     1.19   5.80   0.64
6     0.16   2.03   0.34

Table 5.1: Quality of prediction sOLS, REH and AsOLS+ (positive feedback, Heemeijer et al. (2009))


Figure 5.4: AsOLS+ on Heemeijer et al. (2009): negative feedback

Run   sOLS   REH    AsOLS+
1     0.46   0.24   0.19
2     1.20   0.32   0.34
3     2.02   1.12   1.56
4     0.30   0.14   0.17
5     1.01   0.60   0.64
6     0.66   0.35   0.34

Table 5.2: Quality of prediction sOLS, REH and AsOLS+ (negative feedback, Heemeijer et al. (2009))


Chapter 6

Conclusion

The goal of this thesis was to see how the switching OLS model introduced by Wagener (2013) performed on the experimental data on structural breaks in the fundamental price of Bao et al. (2012) in comparison with the REH, and whether another universal model proposed by this thesis based on the experiment of Bao et al. (2012) could do better. The original sOLS model performs well in the markets with positive feedback, besting even the REH. However, in the markets with a negative feedback structure the sOLS fails to defeat the REH. The graphs of the sOLS on the data of Bao et al. (2012), however, did show room for improvement.

Based on this, this thesis proposes the Akaike sOLS model. In this model, an adaptation of the sOLS model, agents choose the amount of data they want to use as well as the prediction rule, based on an Akaike Information Criterion corrected for smaller samples. This AsOLS does well in the markets with a positive feedback structure, besting the REH in all eight markets. However, the AsOLS does not perform well in the markets with a negative feedback structure. The model seems to derail in the first period after each structural break. This seems to be caused by the fact that the likelihood criterion prefers rules with more data points when the estimation error is similar for those rules (i.e., AR(2) over AR(1)). This is circumvented by adapting the model: when the prediction in period t deviates by more than \alpha = 30%, the model chooses the simplest rule, p^e_t = \hat{c}_0. This AsOLS+ model is then again tested on the data of Bao et al. (2012). The adaptation does not affect the performance of the AsOLS+ model in the markets with the positive feedback structure. However, it is an improvement in the markets with a negative feedback structure. In seven out of eight markets, the mean prediction error is smaller for the AsOLS+ than for the REH. However, the difference is not significant in any market with negative feedback.


Run   AsOLS+   REH    sOLS
1     0.53     5.99   0.24
2     0.91     7.36   0.78
3     1.04     5.36   0.28
4     0.85     5.24   0.57
5     0.89     5.70   0.41
6     0.90     6.12   0.30
7     0.90     5.82   0.45
8     0.98     6.56   0.52

Table 6.1: Quality of prediction AsOLS+, REH and sOLS (positive feedback, Bao et al. (2012))

Run   AsOLS+   REH    sOLS
1     0.92     1.34   2.27
2     1.10     1.09   2.92
3     0.94     0.98   2.28
4     1.30     1.33   4.05
5     1.28     1.44   1.96
6     1.52     2.27   3.40
7     0.87     1.00   3.53
8     1.11     1.11   2.32

Table 6.2: Quality of prediction AsOLS+, REH and sOLS (negative feedback, Bao et al. (2012))

Table 6.1 summarizes the findings in the form of the quality of prediction of the sOLS model, the REH and the AsOLS+ model for the markets with positive feedback, and Table 6.2 does the same for the markets with negative feedback. In markets with positive feedback, the switching OLS model outperforms both the AsOLS+ model and the REH. However, in markets with negative feedback, the AsOLS+ model and the REH perform about equally well, and both perform better than the sOLS model.

This thesis has shown that the AsOLS+ model is internally valid. However, its external validity should also be tested; this would be interesting, but is left for future research. Another possible route for a follow-up on this model would be to change the selection method, because including both the likelihood based on the Student's t-distribution and the additional parameter \alpha makes the AsOLS+ no longer a universal


model, as both have to be fitted on the data of the experiment. However, the likelihood could be replaced by a likelihood based on a non-parametric distribution which can be estimated internally in the model. Changing the selection criterion might also reduce the need for the additional parameter \alpha, but this is only a conjecture by the author.


Bibliography

[1] M. Anufriev and C.H. Hommes (2012), Evolution of Market Heuristics, The Knowledge Engineering Review, 27/2: 255-271.

[2] T. Bao et al. (2012), Individual Expectations, Limited Rationality and Aggregate Outcomes, Journal of Economic Dynamics and Control, 36: 1101-1120.

[3] T. Bao et al. (2013), Learning, Forecasting and Optimizing: An Experimental Study, European Economic Review, 61: 186-204.

[4] W.A. Brock and C.H. Hommes (1997), A Rational Route to Randomness, Econometrica, 65/5: 1059-1095.

[5] W.A. Brock and C.H. Hommes (1998), Heterogeneous Beliefs and Routes to Chaos in a Simple Asset Pricing Model, Journal of Economic Dynamics and Control, 22/8-9: 1235-1274.

[6] G.W. Evans and S. Honkapohja (2001), Learning and Expectations in Macroeconomics, Princeton University Press, Princeton, New Jersey.

[7] J.A. Frankel and K.A. Froot (1987), Using Survey Data to Test Standard Propositions regarding Exchange Rate Expectations, American Economic Review, 77: 133-153.

[8] P. Heemeijer et al. (2009), Price Stability and Volatility in Markets with Positive and Negative Expectations Feedback: An Experimental Investigation, Journal of Economic Dynamics and Control, 33/5: 1052-1072.

[9] C.H. Hommes et al. (2005), Coordination of Expectations in Asset Pric-ing Experiments, Review of Financial Studies, 18/3: 955-980.

[10] C.H. Hommes et al. (2007), Learning in Cobweb Experiments, Macroeconomic Dynamics, 11: 8-33.


[11] C.H. Hommes et al. (2008), Expectations and Bubbles in Asset Pricing Experiments, Journal of Economic Behavior and Organization, 67/1: 116-133.

[12] T.J. Sargent (1993), Bounded Rationality in Macroeconomics, Oxford University Press, Oxford.

[13] R.J. Shiller (1990), Speculative Prices and Popular Models, Journal of Economic Perspectives, 4: 55-65.

[14] R.J. Shiller (2000), Measuring Bubble Expectations and Investor Confidence, Journal of Psychology and Financial Markets, 1: 49-60.

[15] S.J. Turnovsky (1970), Empirical Evidence on the Formation of Price Expectations, Journal of the American Statistical Association, 65: 1441-1454.

[16] F.O.O. Wagener (2013), Expectations in Experiments, Annual Review of Economics, 6: submitted. DOI: 10.1146/annurev-economics-080213-040935.
