
A Heterogeneous Agents Model of Exchange Rate Determination and the Implications of a Currency Transaction Tax

M.R. Teisman

A thesis presented to the faculty of the University of Groningen in partial fulfillment of the requirements for the degree of Master of Science in International Economics and Business

Under the supervision of Dr. G.J. Lanjouw

July, 2011

Abstract

We develop a model of exchange rate determination based on the assumption of heterogeneous agents. In this model, agents attempt to predict the utilities of different trading strategies and adopt a strategy accordingly. The model is capable of explaining a number of empirical market anomalies. Next, the model is deployed to test the effectiveness of a Tobin-style currency transaction tax. It is found that a transaction tax has the potential to achieve long-run stability of exchange rates, possibly at the risk of increased short-term volatility due to reduced market liquidity.

Keywords: Heterogeneous Agents, Exchange Rates, Tobin Transaction Tax


1. Introduction

Over the last decades, the field of exchange rate economics has been undergoing an important paradigm shift. Based on the observation that fluctuations of asset prices appear to be significantly larger than what can be explained by underlying economic fundamentals (Shiller, 1981), and dissatisfied with the disappointing explanatory power of exchange rate models (e.g. Meese and Rogoff, 1983), a stream of international economists abandoned the representative agent approach. The representative agent approach, based on the assumption of rational behavior of all economic agents, suggests the existence of efficient markets (i.e. that securities are rationally priced at their fundamental value).

With new theory and evidence, behavioral finance emerged as an alternative view of financial markets. In this framework, economic agents are assumed to be subject to bounded rationality. Bounded rationality originates from limitations in the information mapping process, including the investment of time and monetary resources necessary for obtaining information (Simon, 1957). Kahneman and Tversky (1979) show that in the presence of the uncertainty that follows from imperfect information, agents tend to be subject to cognitive biases. Applied to financial markets, the presence of bounded rationality leads us to expect that, contrary to what the representative agent approach hypothesizes, there may be significant and sustained price misalignments, or deviations between price and value.

Along with the emergence of behavioral finance came theoretical models of price determination in speculative markets, which are generally based on the interaction of heterogeneous agents. Heterogeneous agent models (HAMs) have been employed by computational economists, who found that such models are capable of explaining a number of empirical anomalies of market prices. In this study, a heterogeneous agents model of exchange rate determination is developed, and, as proposed by Hommes (2006), this model is employed to investigate the economic implications of a Tobin-style currency transaction tax. Note that the ability of HAMs to explain market anomalies does not critically rely on the bounded rationality of agents: Kirman (1992) shows that even if heterogeneous agents are all objectively maximizing utility, this does not necessarily engender collective rationality.


Because a global transaction tax has never been implemented, there is no empirical evidence on its effects, and scholars must find other methods to assist policy makers in their decision making. By investigating the implications of a Tobin tax in the setting of a HAM, we hope to add to the literature that decision makers can consult in shaping what would be one of the most significant global policies to date.

In the next section we investigate the efficient market fallacy, which Tobin tax proponents use as an argument for the tax, but which also provides a theoretical foundation for HAMs in general. In section three, we provide a concise introduction to Tobin's proposed currency transaction tax. In section four, the HAM will be specified mathematically. In the subsequent section, the parameter values of the model are calibrated to fit empirical exchange rate characteristics, and we investigate whether the model can explain a number of empirical exchange rate "puzzles". In section six, we will conduct the policy experiments by investigating how the introduction of a Tobin tax affects the volatility, as well as the long-term stability of the simulated exchange rates.

Finally, the paper concludes with a discussion regarding certain issues surrounding the tax, and some concluding remarks.

2. The efficient market fallacy

The representative agent approach posits market efficiency, which, in its weakest form, implies that prices reflect all currently available information (Fama, 1970). Market efficiency depends on three progressively weaker assumptions (Shleifer, 2000). First, agents are assumed to be objectively rational, and therefore to value securities rationally. Second, to the extent that agents are not objectively rational, their behavior is assumed to be random; therefore, their trades cancel each other out without distorting market prices. Third, to the extent that agents are irrational in similar ways, they are met in the market by rational arbitrageurs, who eliminate their influence on prices.

The assumption of efficient markets received strong theoretical support from Friedman (1953) and Fama (1965), who posit that irrational speculators can only be destabilizing if they buy high and sell low. However, because buying high and selling low is by definition a losing strategy, speculators are naturally driven out of the market. In the meantime, rational arbitrageurs would trade against these speculators in the process of taking advantage of – and thus eliminating – market inefficiencies.

Destabilizing tendencies

Contrary to the theoretical support for efficient markets, there are a number of reasons to question the validity of the assumption that prices unfailingly reflect their fundamental value. First of all, it should be noted that fundamental values of financial securities are difficult, if not impossible, to compute. In the field of exchange rate economics, there is no consensus on the "right" model of exchange rate determination, and most models explain only a small portion of the variation of exchange rates (Taylor, 1995). Omitted variables may explain this lack of explanatory power. Nonetheless, Meese (1990, p.130) notes that "empirical researchers have shown considerable imagination in their specification searches, so that it is not easy to think of variables that have escaped consideration in an exchange rate equation." An alternative explanation of the low explanatory power of econometric models is that exchange rates follow highly complex, non-linear dynamics.

Keynes (1936) notes that market prices are determined by the consensus of the crowd. Then, in the absence of a clear consensus on fundamental values (e.g. due to the computational complexity described above), "he who attempts [to invest based on genuine long-term expectations] must surely … run greater risk than he who tries to guess better than the crowd how the crowd will behave" (Keynes, 1936, p.157). This implies that there may be a tendency for markets to be dominated by traders in the game of guessing what other traders are going to think (Tobin, 1978).

A third limiting factor for objective rationality is the notion of information gathering costs. Simon (1957) recognizes that in their decision making, agents need to engage in information gathering, which may be a costly process. If information gathering is costly, then agents need to determine the extent to which the mapping of information is to be refined. In doing so, agents continuously weigh the cost of obtaining additional information against the benefits of having such information. Based on the concept of costly information, Grossman and Stiglitz's paradox explicates the impossibility of informationally efficient markets: "If a market were informationally efficient, … then no single agent would have sufficient incentive to acquire information on which prices are based." (Stiglitz, 2001).

Alternatively, Tversky and Kahneman (1974) show that in solving complex problems, agents tend to rely on a limited number of heuristics, which reduce the complex task of assessing probabilities and predicting values to simpler judgmental operations. The authors identify three heuristics. The representativeness heuristic states that the probability of an event is estimated by consideration of the probabilities of a "comparable known". This may lead to biases including the gambler's fallacy, in which, in the game of roulette, after a series of subsequent red draws, agents incorrectly believe that a black draw becomes more likely, because a draw of black would lead to a more representative sequence. The second heuristic is the availability heuristic, in which people assess the probability of an event by the ease with which instances of occurrences can be brought to mind. This may lead to underestimation of the probability of highly improbable events. Third, there is the anchoring and adjustment heuristic, in which agents depart from an initial expectation, and adjust their expectations according to new information. However, the authors find that the adjustment of expectations is typically insufficient, which essentially leads to the inertia of forward looking expectations (Morris and Shin, 2006). In the case of the marketplace, the presence of these and other cognitive biases may lead to a tendency for sustained misalignment of prices.

Misalignment of prices

It can be concluded that in complex markets, the presence of costly information and cognitive biases may lead to the tendency for erroneous pricing of financial assets.

However, as Friedman (1953) and Fama (1965) note, there remain two forces that may prevent price misalignments from actually occurring. First, arbitrageurs would take positions against speculators, and therefore may inhibit price misalignments. Second, because prices ultimately revert to their fundamental value, speculation remains a money-losing strategy, and consequently speculators are expected to be driven out of the market through a process of natural selection. As will become apparent, there is a case against both arguments, allowing us to assume the possible existence of price misalignments.


First, we address the case against the market selection hypothesis. The hypothesis that speculators are driven out of the market because of their "unfit strategies" is attenuated by the theory of the speculative bubble, which states that speculators can be destabilizing without it being a money-losing strategy. In a speculative bubble, prices go up each period because traders expect them to go up further the next period, and in this expectation they are correct (Frankel, 1996). This means that speculation may be a profitable strategy for sustained periods of time, for as long as the bubble lasts.

Furthermore, Yan (2008) finds that the natural selection process is excessively slow, and that incorrect beliefs therefore can have a significant and long-lasting impact on prices.

Last but not least, as P.T. Barnum famously said, "a sucker is born every minute," implying that through the continuous entry of new noise traders into the market, their presence may persist indefinitely.

The case against the notion that arbitrage inhibits price misalignments is closely related to the above. Specifically, arbitrageurs face two types of risk (De Long, Shleifer, Summers and Waldmann, 1990). First of all, arbitrageurs face uncertainty about whether the fundamental value they computed is correct, which is called fundamental risk. Secondly, arbitrageurs face the risk that prices will not revert to their fundamental value anytime soon, due to the consistency of noise traders' "incorrect" beliefs over time. This leads to a type of risk that has been termed noise trader risk. Arbitrageurs' aversion to both types of risk limits their overall arbitrage activity.

To conclude, when speculation has the potential to be profitable for sustained periods of time, and when arbitrageurs are limited in their actions because of the risks they face, price misalignments may occur.

Based on the premises discussed above, heterogeneous agents models have been developed, in which exchange rates are determined by the interaction of different types of economic agents (for a survey, see Hommes, 2006). In these models, markets are generally populated by agents that choose from a set of simple rule-of-thumb strategies in making their trading decisions and market predictions. If trading volume is a reflection of idiosyncratic reactions of agents to price, as suggested by Beaver (1968), then the actual presence of heterogeneous agents in financial markets is confirmed by the overwhelming trading volume in these markets.

3. Tobin’s transaction tax

Tobin (1978) proposes an internationally uniform ad valorem tax on all spot market conversions of one currency into another. The author argues that short-term speculation can have severe real economic consequences, and therefore proposes to "throw some sand" in the wheels of our excessively volatile international money markets (Tobin, 1978). There are two main objectives of a currency transaction tax. First, a transaction tax is said to have the potential to stabilize market prices. Hypothetically, this is achieved through the discouragement of short-term speculation resulting from the increased transaction costs. At the same time, fundamentalists are taxed too, but supposedly the tax is less of a burden on them, as fundamentalists are expected to have a longer investment horizon.

Second, conventional belief is that the resources of central banks are no longer adequate for effective intervention (Felix, 1995). Presently, central bank reserves are only a fraction of the trading volume in currency markets. As will become apparent, a transaction tax is likely to reduce the foreign exchange market trading volume, thereby promoting autonomy of national macroeconomic and monetary policies (Tobin, 1996a).

Besides these two main objectives, there are secondary effects resulting from a transaction tax which may be socially desirable. Financial speculation is to a large extent a zero-sum game, and therefore adds little value to the economy. Vast resources of intelligence and enterprise are thus wasted in financial market activities (Schulmeister, Schratzenstaller and Picek, 2008). As a transaction tax provides a disincentive to engage in financial speculation, it may lead to the redirection of these resources to activities that are beneficial to the economy. In Keynes's (1936, p. 159) words, a Tobin tax would "mitigate the predominance of speculation over enterprise."

Finally, though again not a primary reason, the tax has serious revenue potential, which may add to its overall desirability. Due to the sheer foreign exchange (Forex) trading volume, even a small transaction tax is capable of generating significant tax revenues. Revenue projections strongly depend on the tax level and the assumed elasticity of trading volume to the rise in transaction costs. However, careful estimations suggest that a tax as low as 0.1% would raise over $100 billion in tax revenue (Frankel, 1996; Felix and Sau, 1996). Furthermore, as there is increasing dissatisfaction about the fact that taxpayers' money is spent on bailing out financial institutions that are often at the root of financial crises, part of the revenues of a financial transaction tax could be directed towards a financial sector rescue fund. This would make financial institutions compensate for the social burden associated with the necessary government interventions in the sector during a crisis.
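To see where estimates of this magnitude come from, a back-of-the-envelope sketch helps. The figures below are our illustrative assumptions in the spirit of the mid-1990s estimates – a pre-tax spot volume of roughly \$1.2 trillion per day, a halving of volume in response to the tax, and 250 trading days per year – not numbers taken from the cited studies:

$$R \approx \tau \times V \times d = 0.001 \times (0.5 \times \$1.2\,\text{trillion}) \times 250 \approx \$150\,\text{billion per year},$$

which is consistent with the "over \$100 billion" figure cited above.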

Opponents of the Tobin tax have commonly argued that introducing a transaction tax would significantly reduce incentives to trade, resulting in reduced trading volume and less liquid markets (e.g. Spahn, 1996). Liquidity, also known as market depth, indicates the degree to which securities can be bought and sold freely without affecting market prices. Put differently, liquidity is measured as the cost of immediate execution. Naturally, liquidity is essential for market efficiency, because if traders' actions have a significant price impact, this by definition leads to excessive volatility of exchange rates and the consequent misalignment of prices.

As Erturk (2006, p. 72) states: "If the Tobin tax is not stabilizing, then much of the rest of the discussion on its feasibility and other related issues are probably moot." Therefore, we set out to examine the economic implications of the Tobin tax in the setting of a heterogeneous agents model, and only subsequently engage in a discussion of secondary issues, such as political and technical feasibility.

4. A non-linear heterogeneous agent model

The model that will be employed in this paper is an extended version of the model by de Grauwe and Grimaldi (2005). In developing a non-linear exchange rate model, we start by defining a fundamental exchange rate. One of the simplest models is that of purchasing power parity, while more advanced exchange rate models include the new open economy macroeconomics models, as proposed by Obstfeld and Rogoff (1995). However, estimation of the fundamental exchange rate exceeds the scope of this paper, and therefore it is assumed that the fundamental exchange rate follows a random walk. This implies that

(4.1) $s^f_t = s^f_{t-1} + \varepsilon_t$

where $s^f_t$ is the logarithm of the fundamental exchange rate, $s^f_{t-1}$ is the logarithm of the fundamental rate in period $t-1$, and $\varepsilon_t$ is an independent, identically distributed (i.i.d.) random variable sampled from a normal distribution with zero mean and variance $\sigma^2_\varepsilon$, so that $\varepsilon_t \sim N(0, \sigma^2_\varepsilon)$.

Next, a general framework for the determination of the market exchange rate is specified. In this framework, the market price is determined by the sum of the weighted expectations of all market participants:

(4.2) $\Delta s_t = \sum_{q=1}^{Q} w_{q,t}\,E_{q,t}[\Delta s_{t+1}] + \eta_t$

where $\Delta s_t$ is the change in the exchange rate over period $t$. There are a total of $Q$ different types of strategies, indexed by $q$. There is a continuum of agents, and $w_{q,t}$ is the weight of agents following strategy $q$, so that $\sum_{q=1}^{Q} w_{q,t} = 1$. $E_{q,t}[\Delta s_{t+1}]$ is the expected change in the exchange rate of traders following strategy $q$. Last, $\eta_t$ is a normally distributed random noise component, with $\eta_t \sim N(0, \sigma^2_\eta)$.

Next the concept of heterogeneous beliefs of agents is introduced. A survey by Cheung, Chinn and Marsh (2004) shows that there indeed is heterogeneity of beliefs among agents in the foreign exchange market. Following the Frankel and Froot (1990) chartist/fundamentalist dichotomy, the model allows traders to choose between a fundamental and a chartist trading strategy.

Fundamentalists have information about the underlying fundamental exchange rate, and base their forecast on the disparity between the spot rate and the fundamental rate. Essentially, this means that fundamentalists act as arbitrageurs, attempting to profit from – and thus eliminating – price misalignments by entering long (short) positions when a currency is undervalued (overvalued). The forecast of fundamentalists can be modeled as

(4.3) $E_{f,t}[\Delta s_{t+1}] = -\psi\,(s_t - s^f_t)$

where $E_{f,t}[\Delta s_{t+1}]$ is the forecast of fundamentalists; $s_t - s^f_t$ equals the misalignment between the spot exchange rate and the underlying fundamental rate, and $\psi$ is a measure of the rate of mean reversion expected by fundamentalists, where $0 < \psi < 1$.

On the other hand there are chartists, whose forecasts are based on the recent movement of the exchange rate. Taylor and Allen (1992) find that "at least 90 percent of respondents place some weight on [technical analysis] when forming views at one or more time horizons." Moreover, the significance of extrapolative trading strategies is confirmed empirically by Cheung et al. (2004) and Ito (1990), who show that market participants expect that, for timeframes up to six months, bandwagon effects strongly influence exchange rates. Only on longer timeframes do underlying fundamental values become a major determinant. Therefore, it is assumed that chartists follow a positive-feedback rule, thus extrapolating recent exchange rate movements into the future. The forecast of chartists is given as

(4.4) $E_{c,t}[\Delta s_{t+1}] = \beta\,\frac{1}{T}\sum_{i=1}^{T}\Delta s_{t-i}$

where $E_{c,t}[\Delta s_{t+1}]$ is the forecast of chartists, who compute a moving average of the exchange rate changes over the periods $t-T$ through $t-1$ and extrapolate this movement by the factor $\beta$. Here $\beta > 0$, and for the model to be evolutionarily stable, $\beta < 1$.
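For concreteness, the two forecasting rules can be sketched in a few lines of Python. This is a minimal illustration of equations (4.3) and (4.4); the function names and default parameter values are our placeholders, not the calibrated values of table 1:

```python
import numpy as np

def fundamentalist_forecast(s, s_fund, psi=0.2):
    # Eq. (4.3): expect mean reversion toward the fundamental rate.
    return -psi * (s - s_fund)

def chartist_forecast(ds_history, beta=0.9, T=5):
    # Eq. (4.4): extrapolate the T-period moving average of past changes.
    return beta * np.mean(ds_history[-T:])
```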

The next step is to specify how traders choose between the two forecasting rules. An important property of this model is that the weights of the different strategies are determined endogenously. As suggested by Brock and Hommes (1997), in modeling the strategy choice behavior of the population, we refer to discrete choice theory. Due to the heterogeneity of individual agents, there likely exists heterogeneity in the choice environment, which motivates the decision to employ a disaggregate model. Discrete choice theory investigates the relationship between a discrete choice and an array of explanatory variables. Specifically, we employ a binomial logit model to estimate the probability of choice. In the setting of this logit model, agents compare the utilities of both strategies and choose one strategy accordingly. The relative weights of both strategies are then defined as

(4.5) $w_{f,t} = \frac{\exp(\gamma U_{f,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t})}$

(4.6) $w_{c,t} = \frac{\exp(\gamma U_{c,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t})}$

Here, $U_{f,t}$ and $U_{c,t}$ denote the expected utility of the fundamentalist and chartist strategy, respectively. The parameter $\gamma$ indicates the sensitivity of the population in choosing the highest-utility strategy. Large values of $\gamma$ imply that the population is very sensitive to utility, and thus that agents en masse choose the strategy with the highest utility. Conversely, small values of $\gamma$ imply that agents are relatively insensitive to differences in utility.
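A small numerical example illustrates the role of $\gamma$ (the utility values are made up for illustration): with $U_{f} = 0.5$ and $U_{c} = 0.3$,

$$\gamma = 1:\quad w_f = \frac{e^{0.5}}{e^{0.5}+e^{0.3}} \approx 0.55, \qquad \gamma = 10:\quad w_f = \frac{e^{5}}{e^{5}+e^{3}} \approx 0.88,$$

so the same utility gap shifts far more of the population toward the fundamentalist strategy when the sensitivity is high.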

As previously stated, agents are not well equipped to assess the probabilities of outcomes, and consequently neither the ex-ante utilities of the different strategies. Therefore, we follow de Grauwe and Grimaldi (2005) in letting agents base expected utility on previously realized utility. This means that if one strategy performed relatively well in the previous period, a proportion of traders adopts this strategy in the current period. Following the mean-variance framework, utility is modeled as a function of the recently obtained profit, $\pi_{q,t}$, and the according risk, $\sigma^2_{q,t}$:

(4.7) $U_{f,t} = \pi_{f,t} - \mu\,\sigma^2_{f,t}$

(4.8) $U_{c,t} = \pi_{c,t} - \mu\,\sigma^2_{c,t}$

where $\mu$ measures the agents' aversion to risk.

Traders make a profit if the sign of the expected exchange rate movement of their strategy corresponds with the sign of the actual exchange rate movement. The size of the gross profit is equal to the actual exchange rate movement. It should be noted that the concept described above is in line with the evolutionary approach proposed by Nelson and Winter (1982), in which traders make a selection from an array of trading rules based on their relative historical performance.

However, Simon (1957) notes that the costs of obtaining information about fundamental prices may be an obstacle to market efficiency. Agents must either incur information gathering costs to use sophisticated, fundamental strategies, or employ free and easily available rules of thumb that perform "reasonably well".

Specifically, the chartist strategy is assumed to be freely available, whereas the information about the fundamental rate that the fundamentalist strategy requires has to be obtained at cost $C$. The net profit of the fundamentalist strategy then equals the gross profit minus the information gathering cost, so that

(4.9) $\pi^{net}_{f,t} = \pi_{f,t} - C$

The estimated risks of both strategies are determined as exponentially weighted moving averages of the absolute prediction errors:

(4.10) $\sigma^2_{c,t} = (1-\alpha)\,\sigma^2_{c,t-1} + \alpha\,\big|E_{c,t-1}[\Delta s_t] - \Delta s_t\big|$

(4.11) $\sigma^2_{f,t} = (1-\alpha)\,\sigma^2_{f,t-1} + \alpha\,\big|E_{f,t-1}[\Delta s_t] - \Delta s_t\big| - \chi\,\big|s_t - s^f_t\big|$

where α marks the rate at which old information is discounted, so that higher values of α mean that old information is discounted faster. As can be seen, the perceived risk of the fundamentalist strategy is corrected for the size of the misalignment between the market rate and the fundamental rate. The reasoning is that as the misalignment becomes larger, the risk of following a fundamentalist strategy becomes smaller, as prices are bound to revert to their mean at some time in the future. The extent of the perceived risk reduction originating from the disparity is determined by the parameter $\chi$.

As a third market participant, the presence of non-financial firms is introduced. In modeling the behavior of non-financial firms in the Forex market, we touch upon the order flow literature. Order flow is signed transaction volume, which means order flow equals the excess buying or selling in the marketplace. Thus, order flow can be considered the market microstructure's equivalent of excess demand in general economics. Order flow is generally used as a proximate determinant of price in market microstructure models, as empirical research shows a strong positive correlation between order flow and nominal exchange rates (Evans and Lyons, 1999).

Through international activity, firms engage in transactions on the foreign exchange market. When the exchange rate is equal to its fundamental value, so that the exchange rate is at purchasing power parity (PPP), the order flow of non-financial firms may be considered randomly distributed around a mean of zero (i.e. there is no systematic excess supply or demand). On the other hand, when the spot exchange rate diverges from its PPP equilibrium condition, international goods prices are expected to adjust, so that the market clears.

However, evidence shows that international firms are responsive to deviations from fundamental exchange rates only in the long run, implying that it takes time for goods markets to clear. This results, for example, from the fact that firms tend to engage in long-term contracts. Confirming the presence of sticky international prices, Krugman (1986) finds that import prices initially tend to fall too little when a currency appreciates.

Nevertheless, over the long run it is expected that goods markets will clear, as described by the J-curve (Dornbusch, 1976).

All in all, non-financial firms do not engage in speculative behavior based on expectations. Nevertheless, through their order flow, originating from international activities, firms do exert pressure on the exchange rate. This order flow pressure is modeled so that it includes a mean-reverting component and a random component, so that

(4.12) $O_t = -\rho\,(s_t - s^f_t) + \zeta_t$

where ρ denotes the rate of mean reversion, and $\zeta_t$ is an i.i.d. normal random variable with a mean of zero and a variance equal to $\sigma^2_\zeta$, so that $\zeta_t \sim N(0, \sigma^2_\zeta)$.

Introduction of the Tobin tax

When a Tobin transaction tax is introduced, the profit functions have to be remodeled, as agents incur additional trading costs. The Tobin tax is a proportional tax levied on all market participants. The new profit functions can thus be specified as

(4.13) $\pi^{\tau}_{f,t} = \pi_{f,t} - C - 2\tau\,\frac{|\Delta s_t|}{|s_t - s^f_t|}$

(4.14) $\pi^{\tau}_{c,t} = \pi_{c,t} - 2\tau$

Here, τ is the proportional transaction tax. As the tax – as proposed by Tobin – is levied on all spot transactions, market participants incur the tax twice for a round-trip position, i.e. the purchase and subsequent sale of the position.

The difference between the two equations is that fundamentalists can amortize the total tax expenses over the entire misalignment, whereas chartists are assumed to have a one-day holding period for their position. Admittedly, the latter is quite a strong assumption. However, empirical data show that almost half of all transactions have a holding period of two days or less (Kaul, Grunberg and Haq, 1996). Furthermore, with the recent development of algorithmic trading and the emergence of highly leveraged institutions, the average holding period is expected to have declined even further (Galati and Heath, 2007).

Liquidity effects

In the Triennial Central Bank Survey of Foreign Exchange and Derivatives Markets Activity, conducted by the Bank for International Settlements, information about the transaction volume of different types of market participants is collected. The survey discriminates between financial and non-financial institutions, and finds that of the average daily volume of almost 4 trillion US dollars, non-financial customers (e.g. enterprises that engage in international activity) account for little over 10% (BIS, 2010).

In order to model the liquidity effects of a Tobin tax, we consult literature on the effects of transaction costs on trading volume. This seems reasonable, as the Tobin tax effectively is an increase of transaction costs, and trading volume is commonly used as a proxy for liquidity (Lee and Swaminathan, 2000).

Due to the real economic nature of non-financial foreign exchange transactions, and the small size of a transaction tax relative to overall transport costs, it is expected that a Tobin tax will have little impact on the absolute activity of non-financial institutions in the Forex market (Felix and Sau, 1996). As a matter of fact, a currency transaction tax may even stimulate international trade, and thus non-financial firms' activity in the foreign exchange market, because international trade could potentially benefit from increased long-term stability of exchange rates (Chowdhury, 1993). Regardless, non-financial firms' Forex trading volume is assumed to be insensitive to the tax.

In contrast, leveraged and unleveraged financial institutions generally speculate with the goal of obtaining monetary gains through market activity. The introduction of a Tobin tax may pose a severe threat to the ability of financial firms to generate such profits, and consequently we expect the trading volume of financial institutions to be rather sensitive to a transaction tax. In accordance with this hypothesis, Barclay, Kandel and Marx (1998) find a significant negative relationship between transaction costs and trading volume.

Up until now, it has been assumed that all agents follow either a fundamentalist or a chartist strategy. In modeling liquidity effects, agents are now also allowed to choose inactivity, i.e. not engaging in any market transaction.

The activity choice is determined by evaluating the expected utilities of the trading strategies against the alternative of inactivity. Agents assign inactivity a utility of 1, so that $U_{0,t} = 1$. With this third option added, the discrete choice model introduced earlier becomes a multinomial logit model. Inserting the option of inactivity into this logit model allows us to compute the total share of active traders in period t:

(4.15) $A_t = \frac{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t}) + \exp(\gamma U_{0,t})}$

Next, we calculate the average share of active traders, $\bar{A}$, in the situation without the Tobin tax:

(4.16) $\bar{A} = \frac{1}{N}\sum_{t=1}^{N} A_t$

The level of non-financial firm activity in the market, $F$, is then set so that, on average, its volume equals the empirically observed 10% of the market volume. This can be done with the following arithmetic operation, which follows from requiring $F/(F + \bar{A}) = 0.1$:

(4.17) $F = \frac{0.1}{0.9}\,\bar{A}$

As mentioned earlier, the absolute level of non-financial firm activity is kept stable across simulations with different tax levels. Now, the shares of all three types of market participants can be calculated by using the fixed activity of non-financial firms, and the relative weights of fundamentalists and chartists, adjusted for total trader activity.

(4.18) $w_{nf,t} = \frac{F}{F + A_t}$

(4.19) $w^{*}_{f,t} = (1 - w_{nf,t})\,\frac{\exp(\gamma U_{f,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t})}$

(4.20) $w^{*}_{c,t} = (1 - w_{nf,t})\,\frac{\exp(\gamma U_{c,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{c,t})}$
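A sketch of this three-way split in Python, following equations (4.15) through (4.20); the default parameter values are illustrative assumptions, not the calibrated values of table 1:

```python
import numpy as np

def participant_weights(U_f, U_c, F, gamma=1.0, U_inactive=1.0):
    # Multinomial logit over {fundamentalist, chartist, inactive},
    # combined with the fixed non-financial activity F.
    e_f, e_c = np.exp(gamma * U_f), np.exp(gamma * U_c)
    e_0 = np.exp(gamma * U_inactive)
    A = (e_f + e_c) / (e_f + e_c + e_0)    # share of active traders (eq. 4.15)
    w_nf = F / (F + A)                     # non-financial share (eq. 4.18)
    w_f = (1 - w_nf) * e_f / (e_f + e_c)   # fundamentalist share (eq. 4.19)
    w_c = (1 - w_nf) * e_c / (e_f + e_c)   # chartist share (eq. 4.20)
    return w_f, w_c, w_nf
```

Raising the tax lowers $U_f$ and $U_c$, which lowers $A_t$ and mechanically raises $w_{nf,t}$, reproducing the liquidity effect described above.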

When the Tobin tax is introduced, the expected utilities of both the fundamentalist and chartist strategies are expected to decline, leading to a growing inactive trader population. Because the absolute activity of non-financial firms remains constant, this in turn increases the relative share of non-financial firms' activity in the Forex market.

Implementing the three different strategies and their according weights into the pricing function now gives

(4.21) $\Delta s_t = w^{*}_{f,t}\,E_{f,t}[\Delta s_{t+1}] + w^{*}_{c,t}\,E_{c,t}[\Delta s_{t+1}] + w_{nf,t}\,O_t + \eta_t$

To give a final comprehensive overview of the mechanics of the model: after the market price has changed, agents assess the profits that both the fundamentalist and chartist strategies obtained, as well as the according risks. Based on the past returns and the risks, agents estimate the expected utilities of both strategies. Based on the relative utility of both strategies, agents make a choice between adopting a trading strategy, or remaining inactive. Once traders make a choice for one strategy, they compute their expected exchange rate movement.

Based on the total activity of traders, the non-financial firms‟ share of market volume can be computed, and subsequently, the exact weights of the other market participants can be calculated. Now that all the required values are known, the market price changes as a function of the weight adjusted expectations, and an error term resembling random noise. The entire process described above is then repeated.
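The full loop can be condensed into a short simulation sketch. The following Python fragment is our own illustrative rendering of the mechanics just described: the parameter values are placeholders (not the calibrated values of table 1), and the amortization of the fundamentalists' tax over the misalignment is a simplified reading of equation (4.13):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters -- not the calibrated values of table 1.
N, T = 2000, 5          # observations; chartist moving-average window
psi, beta = 0.2, 0.9    # mean-reversion expectation; extrapolation factor
gamma, mu = 1.0, 1.0    # choice sensitivity; risk aversion
alpha, chi = 0.5, 0.1   # information discounting; risk reduction per misalignment
C, tau, rho = 0.001, 0.0, 0.01   # information cost; Tobin tax; order-flow reversion
sig_e = sig_z = sig_n = 0.001    # std devs of the three shocks
F = 0.05                # fixed non-financial activity (~10% of volume)

s_f, s, ds = np.zeros(N), np.zeros(N), np.zeros(N)
risk_f = risk_c = 1e-6
E_f_prev = E_c_prev = 0.0

for t in range(1, N):
    s_f[t] = s_f[t-1] + rng.normal(0, sig_e)            # eq. (4.1)
    mis = s[t-1] - s_f[t-1]
    E_f = -psi * mis                                    # eq. (4.3)
    E_c = beta * ds[max(0, t-T):t].mean()               # eq. (4.4)
    # Realized profits: sign of last forecast times realized change, net of costs.
    pi_f = ds[t-1]*np.sign(E_f_prev) - C - 2*tau*min(1.0, abs(ds[t-1])/max(abs(mis), 1e-8))
    pi_c = ds[t-1]*np.sign(E_c_prev) - 2*tau            # eqs. (4.13)-(4.14)
    risk_f = max((1-alpha)*risk_f + alpha*abs(E_f_prev - ds[t-1]) - chi*abs(mis), 0.0)
    risk_c = (1-alpha)*risk_c + alpha*abs(E_c_prev - ds[t-1])   # eqs. (4.10)-(4.11)
    U_f, U_c = pi_f - mu*risk_f, pi_c - mu*risk_c       # eqs. (4.7)-(4.8)
    e_f, e_c, e_0 = np.exp(gamma*U_f), np.exp(gamma*U_c), np.exp(gamma*1.0)
    A = (e_f + e_c) / (e_f + e_c + e_0)                 # eq. (4.15)
    w_nf = F / (F + A)                                  # eq. (4.18)
    w_f, w_c = (1-w_nf)*e_f/(e_f+e_c), (1-w_nf)*e_c/(e_f+e_c)   # eqs. (4.19)-(4.20)
    O = -rho*mis + rng.normal(0, sig_z)                 # eq. (4.12)
    ds[t] = w_f*E_f + w_c*E_c + w_nf*O + rng.normal(0, sig_n)   # eq. (4.21)
    s[t] = s[t-1] + ds[t]
    E_f_prev, E_c_prev = E_f, E_c
```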

5. Calibration and testing

The study continues with the calibration and simulation analysis of the proposed model. Before the model can be operated, parameter values need to be determined. Unfortunately, there is little empirical guidance on parameter determination. Therefore, the goal is to find a set of theoretically realistic inputs that result in simulated exchange rates whose characteristics closely fit empirical properties. This is important because the better the model matches the behavior of real prices, the more reliable the policy experiments will be (Westerhoff and Dieci, 2006).


The model is calibrated so that the output (i.e. the simulated exchange rates) fits the distribution of daily logarithmic euro / US dollar changes. Calibration takes place through a process of manual trial and error, departing from presumably realistic initial parameter values. The empirical euro / US dollar data span from January 3rd, 2000 to December 30th, 2010, amounting to a total of 2804 observations. The data are obtained from the International Monetary Fund's International Financial Statistics database.

The calibration process leads to the parameter estimates that will be used as the baseline values in the subsequent simulation analyses. Table 1 presents these baseline parameter values. The distribution of the simulation output – using the values presented below – closely matches the distributional characteristics (extreme values and kurtosis) of the empirical exchange rate data, as can be observed in table 2. In the final tests on the economic implications of a Tobin tax, simulations will be run using a range of different parameter values, so as to also investigate the sensitivity of our results to the choice of parameters.

Table 1. Parameter values

In the subsequent tests, a Monte Carlo method is employed. A total of 1000 simulations are run, each consisting of 2000 time series observations. In starting each simulation, both the log of the fundamental value and the log of the exchange rate depart from 0. In reporting the results, in general, the 5th, 25th, 50th, 75th and 95th percentiles are specified.

Quantile Minimum Maximum Kurtosis

Simulations

5% -0.033 0.033 3.012

25% -0.037 0.037 3.324

50% -0.042 0.042 4.07

75% -0.058 0.057 6.483

95% -0.099 0.099 13.918

EURUSD -0.047 0.042 5.659

Table 2. Summary of basic statistics


Stylized facts

In this section we discuss four empirical properties of exchange rates, and subsequently conduct tests to see whether the model – which has been calibrated to match the price distribution of exchange rates – is also able to reproduce these other statistical properties. As mentioned earlier, the better the model's output resembles real exchange rate behavior, the more reliable our subsequent policy experiments will be. Conversely, if the model does not reproduce these stylized facts, this may indicate limitations in the model's ability to simulate exchange rates, which would in turn reduce the power of the ultimate findings concerning the effectiveness of a Tobin tax.

Excess kurtosis

Mandelbrot (1963) notes that the empirical unconditional distribution of price changes typically is fat-tailed compared to a normal or Gaussian distribution. Later research showed that part of this leptokurtosis can be explained by time-varying volatility (e.g. Baillie and Bollerslev, 2002). However, it is found that most price distributions are leptokurtic in excess of what can be explained by conditional models, which control for time-varying volatility (Cont, 2001). As shown in the first column of table 3, for the baseline parameters the model produces leptokurtic price distributions, similar to that of the daily euro / US dollar exchange rate.

It has been observed that over longer time intervals the excess kurtosis of exchange rate distributions tends to fall (Lux, 1998). Table 3 confirms this, as the kurtosis of the euro / US dollar distribution (bottom row) falls over longer time intervals. By aggregating the simulated time series, the output can be compared with this empirical property: five-period aggregate returns correspond to weekly data, and 20-period aggregate returns correspond to monthly data. As shown in table 3, in contrast to empirical findings, the kurtosis of the model's aggregated time series is relatively higher than that of single-period returns. This implies that there is a divergence between the model's output and empirical data.


Kurtosis

Quantile Daily Weekly Monthly

Simulations

5% 3.012 2.995 3.197

25% 3.324 3.704 4.278

50% 4.07 5.339 5.771

75% 6.483 9.318 7.955

95% 13.918 19.597 13.037

EURUSD 5.659 4.355 2.884

Table 3. Time aggregated kurtosis

Persistence of volatility

The next empirical observation is that exchange rates seem to be characterized by volatility clusters (Cont, 2001). Absolute first differences of logarithmic prices tend to display strong autocorrelation. This means that large price variations are likely to be followed by large price variations, and vice versa. The presence of this persistence in volatility means that exchange rates do not follow an ordinary geometric Brownian motion, but that volatility itself follows a stochastic process.

The clustering of volatility of exchange rates will be investigated by estimating the autocorrelation coefficients of the absolute first differences of the logarithmic exchange rates. The autocorrelation coefficient over a range of different lags will be considered.
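A sketch of this estimation in Python (our own helper, not code from the thesis); the ±1.96/√n band is the usual approximate 95% significance bound for sample autocorrelations:

```python
import numpy as np

def abs_return_autocorr(log_prices, lags=(1, 2, 3, 4)):
    # Autocorrelation of absolute first differences of log prices.
    x = np.abs(np.diff(np.asarray(log_prices, dtype=float)))
    x = x - x.mean()
    denom = np.sum(x * x)
    acf = {k: float(np.sum(x[k:] * x[:-k]) / denom) for k in lags}
    bound = 1.96 / np.sqrt(x.size)  # approximate 95% significance bound
    return acf, bound
```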

Autocorrelation coefficients

Period lag euro / US dollar Simulated

1 0.11997* 0.11146*

2 0.09613* 0.07051*

3 0.10620* 0.08064*

4 0.11764* 0.08827*

(*) significant at the 95% confidence level

Table 4. Autocorrelation of absolute logarithmic returns

In accordance with Hommes (2006), who suggests that the clustering of volatility may arise through the interaction and switching between different types of trading rules, table 4 indicates that the model produces significant autocorrelation of absolute logarithmic returns. Comparing the autocorrelation coefficients of the simulated exchange rates with those of the euro / US dollar rates, it can be concluded that in this respect the time series show a high degree of similarity.


The disconnect puzzle

The third observation that will be discussed is the exchange rate disconnect puzzle. International economists have had difficulty explaining why models of short-horizon exchange rate determination cannot outperform a simple random walk model (Meese and Rogoff, 1983). Instead, there seem to be sustained deviations between market rates and the underlying fundamental rate, which has become a major puzzle in exchange rate economics (Obstfeld and Rogoff, 2001). The disconnect between market rates and their fundamentals is, for example, displayed in work on purchasing power parity (PPP), which is one of the cornerstones of exchange rate economics. Short-run deviations from PPP tend to be large and volatile. The consensus on the speed at which PPP deviations damp is that there is a half-life of three to five years (Rogoff, 1996).

In examining the disconnect puzzle statistically, we test for cointegration of the market price and the fundamental value. This indicates whether the market price has the property of reverting to the fundamental value, and thus whether the two share stochastic trends. Specifically, the Engle-Granger two-step procedure is employed. In the first step, regression (5.1) is performed,

(5.1) $s_t = c + \beta\,s^f_t + u_t$

and subsequently an augmented Dickey-Fuller test is conducted on the residual values:

(5.2) $\Delta u_t = \mu\,u_{t-1} + \lambda\,\Delta u_{t-1} + v_t$

The lagged error term is included to control for any autocorrelation of the residuals. Based on the significance of the parameter μ, we conclude whether $s_t$ and $s^f_t$ are cointegrated, and thus whether there is a tendency for price to revert to value.
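A minimal Python version of the two-step procedure, using statsmodels (a sketch assuming one augmenting lag; note that the p-value reported here uses standard Dickey-Fuller critical values, whereas a strict Engle-Granger residual test uses slightly different ones):

```python
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger(s, s_fund):
    # Step 1 (eq. 5.1): cointegrating regression of the market rate on the fundamental.
    residuals = sm.OLS(s, sm.add_constant(s_fund)).fit().resid
    # Step 2 (eq. 5.2): ADF test on the residuals with one augmenting lag.
    adf_stat, p_value, *_ = adfuller(residuals, maxlag=1, autolag=None)
    return adf_stat, p_value
```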

The results in table 5 indicate that the error correction coefficient (μ) is very low, which indicates that mean reversion takes a very long time. At the median coefficient level, the half-life of a misalignment is calculated to be approximately 200 days. This is significantly shorter than the half-life of three to five years indicated by Rogoff, but the author's estimates are likely to be affected by imperfections of the models he employs.
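The 200-day figure follows directly from the median coefficient in table 5: with $\mu = -0.0035$, the half-life $h$ of a misalignment solves $(1+\mu)^h = 0.5$, so

$$h = \frac{\ln 0.5}{\ln(1 + \mu)} = \frac{-0.6931}{\ln(0.9965)} \approx 198 \text{ days}.$$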


            Tobin = 0%            Tobin = 0.5%
Quantile    μ         λ           μ         λ
5%          -0.0083   0.2365      -0.0093   0.1306
25%         -0.0052   0.2919      -0.0050   0.1628
50%         -0.0035   0.3413      -0.0031   0.1876
75%         -0.0020   0.4026      -0.0016   0.2187
95%         -0.0007   0.4943      -0.0004   0.2804

Table 5. Cointegration coefficients

Figure 1 provides a single realization of the pricing process. Note the periods of sustained misalignment, but also the tendency for mean reversion on longer horizons.

Figure 1. Sample realization of the pricing process (log price against time; fundamental value and exchange rate series).

The excess volatility puzzle

Last, floating exchange rates seem to exhibit excess volatility. Flood and Rose (1995) note that models of exchange rate determination suggest that when shifting between different exchange rate regimes (e.g. between fixed and floating exchange rate regimes), there should be a conservation of volatility, but that this volatility should be transferred between different economic loci (e.g. from exchange rate volatility to volatility in money supply). In contrast, the authors find that when nominal exchange rates become fixed, there doesn't appear to be a systematic effect on the volatility of other macroeconomic factors. This is evidence that floating exchange rates are excessively volatile relative to the underlying macroeconomic fundamentals.



In reality, the excess volatility of exchange rates is impossible to measure accurately, because the fundamental exchange rate is unknown. In our model, however, excess volatility can be measured by comparing the variance of the market rate with the variance of the fundamental rate, as done in equation 5.3.

(5.3) $\sigma^2_s = \sigma^2_f + \sigma^2_n$

where $\sigma^2_s$ is the variance of the simulated market exchange rate, $\sigma^2_f$ is the variance of the fundamental price, and $\sigma^2_n$ is the residual variance, also known as the noise produced by the model. In order to quantify the excess volatility, a noise-to-signal ratio is constructed:

(5.4) $NSR = \frac{\sigma^2_n}{\sigma^2_f}$

Values greater than 0 indicate that the spot exchange rate is more volatile than the fundamental exchange rate, implying the presence of excess volatility. Table 6 shows that in the model, the market exchange rate indeed is excessively volatile relative to the underlying fundamental value. The results can be interpreted as follows: at the 5th percentile, the exchange rate is 24 percent more volatile than the underlying fundamental rate, whereas at the 95th percentile, the exchange rate is almost twice as volatile as the fundamental rate.

Quantile Noise-to-signal ratio

5% 0.24350

25% 0.35859

50% 0.47528

75% 0.62417

95% 0.92007

Table 6. Noise-to-signal ratios

6. Measuring the effectiveness of a Tobin tax

After having investigated the major puzzles, and having found that the HAM generates time series with properties similar to those of empirical exchange rates, we shall proceed by measuring the effects of a Tobin tax. The effectiveness of the Tobin tax in achieving long-run stability of exchange rates can be quantified by measuring the extent to which it reduces the average squared misalignment.


The average squared misalignment (ASM) is defined in equation 6.1:

(6.1) $ASM = \frac{1}{N}\sum_{t=1}^{N}\big(s_t - s^f_t\big)^2$

Here N is the total number of time series observations. The misalignment is squared for two reasons. First, squaring converts the signed misalignment to an absolute value, which is what we are interested in. Second, more significant misalignments are assumed to have a greater impact on the real economy, and accordingly the ASM puts greater emphasis on relatively large misalignments. Preliminary analysis shows that there probably is a non-linear relationship between the height of the tax and the change in the average squared misalignment. The presence of a non-linear relationship between a Tobin tax and the misalignment of exchange rates could imply that there is an optimal level of the Tobin tax, at which the misalignment is minimized.

Second, we study the effects of a Tobin tax on short-term price volatility, using the previously constructed noise-to-signal ratio:

(6.2) $NSR = \frac{\sigma^2_n}{\sigma^2_f}$

Again, the presence of a non-linear relationship could imply the existence of an optimal tax level with respect to short-term volatility.
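Both statistics are straightforward to compute from a simulated run; a sketch follows (our own helpers, with the variances in the NSR taken over first differences, since the level of a random walk is non-stationary):

```python
import numpy as np

def average_squared_misalignment(s, s_fund):
    # Eq. (6.1): mean squared gap between market and fundamental log rates.
    return float(np.mean((np.asarray(s) - np.asarray(s_fund)) ** 2))

def noise_to_signal(s, s_fund):
    # Eqs. (5.3)-(5.4)/(6.2): excess variance of market changes over fundamental changes.
    var_s = np.var(np.diff(s))
    var_f = np.var(np.diff(s_fund))
    return float((var_s - var_f) / var_f)
```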

Based on the Monte Carlo simulations, the average squared misalignments and noise-to-signal ratios are computed and visualized in figure 2. The individual graphs denote the simulation results for different parameter values. All parameters used in the simulations are the same as the baseline values in table 1, with the exception of the parameter indicated on the vertical axes. Sigma denotes the percentage value of the parameter $\sigma_\zeta$, the variance of non-financial firms' order flow. Gamma denotes the parameter γ, the sensitivity of traders' activity and strategy choice. Both variables are of importance, as they significantly influence the liquidity effects of a transaction tax. A greater sigma implies that the order flow of non-financial firms shows strong variance, and thus that non-financial firms actively consume liquidity. Gamma denotes the extent to which the choice for inactivity is triggered by a rise in transaction costs, indicating the extent to which liquidity is affected by a Tobin tax.


As figure 3 indicates, the share of non-financial institutions in the market is expected to increase as a consequence of the Tobin tax. Also, as the Tobin tax has a strong adverse effect on the profitability of chartists, the share of chartists in the market exhibits a significant decline. Due to the reduced noise trader risk, both the relative share and the absolute number of fundamentalists are expected to increase.

Figure 3. Average weights of different market participants for different levels of Tobin tax.

Results

Based on figure 2, it can be observed that for a range of parameter values, the introduction of a Tobin tax may reduce the misalignment of exchange rates. However, it should be noted that for large values of $\sigma_\zeta$ (see: ASM, Sigma 10), relatively high levels of a Tobin tax may also have significant negative effects.

Moreover, it seems that volatility, measured by the noise-to-signal ratio, is relatively more sensitive to the tax level. For moderate parameter values and a low level of taxation, a Tobin tax may reduce price volatility. However, for more pronounced parameter values and higher tax levels, volatility may actually be induced as a consequence of the tax.


In comparing our results to those of others, Westerhoff and Dieci (2006) find that a transaction tax leads to a reduction in the distortion of market prices, together with a reduction in volatility. Our results are less positive, as for a range of parameter values, the tax leads to long-term stability, at the expense of short-term volatility. Our latter result is supported by Mannaro, Marchesi and Setzu (2008), who also predict a rise in short-term volatility, but who do not investigate the long-term effects specifically.

As Matheson (2011) notes, short- and long-term volatility may not necessarily be correlated. In fact, we find that for a range of parameter values, short-term volatility (NSR) and long-term price stability (ASM) are certainly not correlated. The unintuitive finding of increased volatility alongside a reduced misalignment can be explained as follows. Due to the declining trading activity as a consequence of the tax, there is relatively less liquidity to absorb random order flow. This causes a rise in short-term volatility. However, as the tax changes the composition of participants in the market, and specifically makes the chartist strategy less attractive, self-fulfilling speculative bubbles become less likely. This means that arbitrageurs face less noise trader risk, which promotes arbitrageurs' trading activity and consequently increases long-run price stability.

The economics literature generally fails to differentiate between short-term price volatility and long-term price swings. Matheson (2011) notes that long-term mispricing is of greater concern from a social point of view, as bubbles and the subsequent crashes are likely to have a more significant impact on real economic activity than short-term price fluctuations. Considering that a Tobin tax may add to long-term stability, it can be concluded that a global currency transaction tax may be desirable from a socio-economic perspective.

In the 1970s, Tobin initially proposed a one percent transaction tax. In light of the reduced transaction costs in modern financial markets, he later recommended a tax in the range of 10 to 25 basis points. The average effective bid-ask spread in the Forex market is roughly 10 basis points (Osler, Mende and Menkhoff, 2010). In light of current transaction costs, even a transaction tax of 10 basis points would imply a rise in effective transaction costs of at least 100 percent.
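To make the arithmetic explicit (our reading, taking the 10 basis point spread as the effective round-trip cost): a 10 basis point tax incurred once raises round-trip costs from 10 to 20 basis points, and if it is incurred on both legs of the round trip, to 30 basis points,

$$\frac{10 + 10}{10} - 1 = 100\%, \qquad \frac{10 + 2 \times 10}{10} - 1 = 200\%,$$

which is why the rise in effective transaction costs is at least 100 percent.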


In deriving the optimal tax level based on our results, we first emphasize that caution is necessary. There is no certainty about the correct parameter values, and as noted before, the outcomes are in fact fairly sensitive to them. Based on the second chart on the right-hand side, and the third chart on the left-hand side, we postulate that a tax which leads to a 30 basis point increase in transaction costs would significantly reduce price misalignments, while limiting the liquidity risk that would result in excess short-term volatility.

In the following discussion, we address some of the critique that the Tobin tax has received throughout the years, and we make the case that, given the current microstructure of the foreign exchange market, a significantly smaller transaction tax can result in the recommended 30 basis point increase in effective transaction costs.

7. Discussion

Critique

Technical feasibility

An early critique of a global transaction tax was that it would be technically infeasible. However, Stiglitz (Conway, 2010) notes that with modern technology, this is no longer the case. In fact, institutions already engage in thorough documentation in order to comply with current regulatory requirements. With the necessary documentation methods in place, implementing a transaction tax becomes a matter of formally introducing and administering it.

Tax avoidance

Another often-cited critique is that the tax can be avoided by relocating operations to international tax havens. Proponents offer a range of different solutions. First, the number of non-cooperating governments should be limited. This can be achieved either by stimulating participation or by deterring non-participation. For example, governments may be incentivized to adopt a Tobin tax if they are allowed to retain a share of the collected taxes. A progressive retention scheme may be implemented, where poor and small countries retain relatively more than their large and rich counterparts.


To deter non-participation of governments, Tobin (1996b) proposes that collection of the tax be required of all members of the IMF, as a condition of eligibility for credit from the fund. Furthermore, financial transactions between participating countries and tax havens may be taxed at penalty rates.

Additionally, to prevent tax avoidance through shell corporations set up in tax havens, the tax should be charged at the site where the dealers or financial institutions are physically located. Kenen (1996) posits that the financial sector is not as footloose as critics commonly assume, and therefore deems it unlikely that financial activities will be relocated to offshore tax havens. Nevertheless, the author conjectures that participation of the major developed countries is critical to the tax's success: non-participating countries that possess the knowledge and institutions necessary for financial intermediation may become attractive targets for relocation, and therefore have the potential to develop into competitive financial hubs. Kenen (1996) therefore concludes that successful adoption of the tax requires participation of at least the G-7 countries.

Then there is the threat of tax avoidance in the form of migration of trade to untaxed financial products. The financial industry has shown great creativity when it comes to developing new financial products that avoid existing tax laws. For example, the UK introduced a stamp duty: a tax on the transfer of official documents, including shares.

Consequently, trade in tax-exempt financial products – such as the contract for difference1 – has risen significantly. It is estimated that CFD trade currently accounts for over 25 percent of total volume on the London Stock Exchange.

To counter tax avoidance through financial product innovation, regulatory bodies should design laws that not only take into account existing financial products, but are also dynamic enough to respond adequately to financial innovations.

1 A CFD is a financial derivative in which two parties agree that the seller will pay the buyer the difference between the current value of an asset and its value at a given future time. Because no transfer of the underlying asset occurs, the common stamp duty is avoided.


The effects of a Tobin tax on effective transaction costs

In the HAM, it was estimated how a rise in effective transaction costs affects price stability, and the Tobin tax was assumed to equal the effective rise in transaction costs. Per contra, based on concepts originating from the market microstructure literature, we provide theoretical evidence that effective transaction costs may rise in excess of the Tobin tax. If this is true, it becomes necessary to investigate how a tax affects transaction costs in order to estimate the optimal tax level. First, we provide an introduction to the market microstructure, and then we point at the potential effects of a Tobin tax on effective transaction costs.

The Forex market consists of two levels. First there is the "retail" market, where dealers meet customers such as pension funds, mutual funds and non-financial firms. Dealers provide these clients with bid and ask quotes, on which clients can make a decision to either buy or sell. The difference between the bid and ask quotes is called the (bid-ask) spread, and this spread can be seen as the dealer's reward for his activities. If a customer engages in a transaction with a dealer, this naturally affects the dealer's currency inventory.

This is where the second level, the "interbank market," comes in. In this top-level market, dealers trade among each other, generally with the purpose of hedging their excessive inventory exposure. The repeated passing of unwanted inventory imbalances between dealers is termed 'hot potato trading' (Lyons, 1997). Osler et al. (2010) investigate the rate at which dealers offload their inventories, and find a median inventory half-life of about two hours. Hot potato trading may offer an explanation for the high volume on the interbank market, which accounts for approximately 40% of market volume (BIS, 2010).

Market microstructure theorists have identified three types of market making costs, which consequently affect the bid-ask spread (Stoll, 2000). First, dealers incur order processing costs (OPC), primarily consisting of employees' wages, office rent, and information infrastructure costs. To a large extent, these costs are fixed with respect to trading volume (Copeland and Stoll, 1990). Second, dealers have bounded information, because their primary source of information about prices comes from order flow. Because of the lacking information on the fundamental rate, dealers are subject to adverse
