
The rise of high frequency trading, a good or a bad trade?

Bachelor thesis

Hans van der Woude (10260285)
Supervisor: dhr. drs. D.F. (Dirk) Damsma


Abstract

This paper studies the impact of high frequency trading on extreme short term volatility by comparing 2004 tick data with 2013 tick data after the release of the monthly inflation figure, using the Zhou volatility estimator to estimate short term volatility. Using both an F-test and an independent-sample T-test, this paper finds that high frequency trading does not create excess volatility. In addition to the statistical results, the graphical results suggest a change in the volatility pattern in favor of high frequency trading. The results obtained in this paper give insight into how the volatility pattern changed over time and whether the rise of high frequency trading influenced this change.


Table of contents

1. Introduction

2. Literature review
 2.1 Introduction to the literature review
 2.2 Theoretical models
 2.3 Empirical findings
 2.4 Conclusion to the literature review

3. Theoretical background
 3.1 The estimator
 3.2 The inflation figure
 3.3 The hypothesis

4. Methodology
 4.1 The data sources
 4.2 Measuring samples
 4.3 Comparing results

5. Results
 5.1 Introduction to the results
 5.2 Graphical results
 5.3 Statistical results
 5.4 Summary

6. Conclusion
 6.1 Conclusion to the results
 6.2 Limitations of this paper and further research

References
Appendix


1. Introduction

Over the past two decades, stock market trading evolved from human trading to highly sophisticated computerized trading, accounting for 78% of dollar trading volume in the United States in 2009, up from zero in 1995 (Zhang, 2010). The use of computers to monitor market movements and execute trades within milliseconds, referred to as 'High Frequency Trading' (HFT), is often blamed for some major short term disruptions in the capital markets, such as the 'Flash Crash' of May 6, 2010, which caused a short period of extreme market volatility. Even though research found that HFT did not cause this disruption, it did reinforce the crash as the computers reacted to it (Kirilenko, Kyle, Samadi & Tuzun, 2011).

The increased use of HFT raised the question whether HFT leads to more efficient markets or whether it has the opposite effect and makes markets less efficient.

Additionally, not only scientific researchers were interested in the effects of HFT; regulatory bodies and national governments were also concerned about its impact, which led, amongst others, to the introduction of a Financial Transaction Tax in several European countries to dampen price volatility and curb short term speculation (Schulmeister, 2009). The research done so far on the effect of HFT and automated trading is divided and even contradictory. Chaboud, Chiquoine, Hjalmarsson and Vega (2011) found that HFT is weakly negatively correlated with volatility in foreign exchange markets. Hendershott and Riordan (2009) found that HFT on Deutsche Boerse contributed more to price discovery than human trading, and Hendershott et al. (2010) found that HFT increases liquidity. Zhang (2010), in contrast, found that stock price volatility is positively correlated with HFT after controlling for fundamental volatility and other exogenous determinants.

HFT firms trade on two different sources of information. The first is new macroeconomic information (Andersen, Bollerslev, Diebold, & Vega, 2007). The second is price imbalances in the order book, mainly caused by the bid/ask quotes (Cao, Hansch, & Wang, 2009). HFT reacting to these quotes implies reduced arbitrage opportunities and more informative markets. In this paper, I examine the influence of HFT on the currency pair EUR/USD right after the monthly inflation release by Eurostat. To measure the impact of HFT, I use tick-by-tick volatility from 2004, compare this data with tick-by-tick volatility from 2013, and take 2008 to control for the crisis. This time frame captures the rise of HFT and should provide a clear distinction between human trading and automated trading, since automated trading had not been allowed prior to this time frame (Chaboud, Hjalmarsson, Vega, & Chiquoine, 2011).


I use the inflation figure release as an example of macroeconomic information that HFT firms act upon, since this figure should be absorbed instantaneously into the exchange rate according to the Uncovered Interest Parity (Pilbeam, 2013). Above-normal volatility immediately after the release of this figure is a good way of measuring the potential inefficiency enhanced by HFT. Given the $5,000 billion size of the foreign exchange market, a quick new 'fair' price would be preferred (Pilbeam, 2013). It should therefore be carefully examined whether HFT has a positive or negative influence on short term volatility. The literature claims that computerized trading has a speed advantage over human trading and is able to absorb new information more quickly, thereby making prices more informative (Biais, Foucault, & Moinas, 2012). A quick implementation of the new price should result in lower volatility compared to the implementation of the new price by human trading. However, several researchers, such as Biais and Woolley (2011), argue that the correlation of trades executed by computers could lead to less informative prices and overreaction to new information, hence increased volatility.

In contrast to previous research done in this field, this paper examines the volatility using tick-by-tick data, instead of the minute and five minute data used in previous papers, to come to a definitive answer to the following research question: 'Does high frequency trading increase short term volatility, in comparison to human trading, right after the release of new public information?' The question is answered using the Zhou volatility estimator (Zhou, 1996) with data ranging from 2004 until 2013. Next to comparing volatility at fixed points, I also give a graphical representation of how volatility evolves right after the release of the inflation figure, to give insight into the market response to the figure release.

This paper is structured as follows. Section 2 contains an extended literature review. In section 3 I outline the theoretical background on volatility and which estimator I use to correct for the noise present in high frequency data. Section 4 contains the methodology, where I explain where the data comes from and how I calculated and compared the volatility estimator. Section 5 contains the results of the statistical tests on the obtained data and describes the findings. Section 6 outlines the conclusions drawn from the statistical analysis and proposes further research questions to be examined following this paper. Finally, the appendix contains detailed statistical results and a broader selection of the graphical results.


2. Literature review

2.1 Introduction

Although regulatory bodies and national governments have already started implementing measures and regulation to curb automated trading and short term speculation, the research done so far in this field is limited and contradictory. Well before the rise of automated trading and HFT, theoretical models were developed to predict the influence of short term traders on prices and volatilities. With the rise of automated trading and HFT, further theoretical models were developed and scientists started doing empirical research on the influence of this extreme short term trading. Although the theoretical models mainly predict a negative influence, the empirical research contradicts the theory on some aspects and supports it on others. Since the theoretical models have informed the hypotheses and methodology of the empirical studies, they will be discussed first in section 2.2. The empirical research is discussed next in section 2.3. Thus, this chapter will clarify the exact contribution of this research.

2.2 Theoretical models

In 1992, well before the rise of automated trading, a theoretical model developed by Froot, Scharfstein, and Stein showed that the existence of short-term speculators can lead to informational price inefficiency, even though their model consists of only rational agents (1992). To make this point clear they refer to Keynes' beauty contest, where the judges would be better off if they coordinated their choices, even if they chose someone who is not the most beautiful. Froot, Scharfstein and Stein argue the same happens with traders with a very short term horizon. If there are two sets of information available and all traders study the same set, studying the other set and trading on it is not profitable, since no one is willing to buy at the value this information set suggests (1992). Their model concludes that there are multiple equilibria when traders have a short term horizon and that in these equilibria traders herd by acquiring too much of one information set and ignoring the other, causing prices to be less informative and more volatile. Only when traders have a longer term horizon does a single equilibrium exist (Froot, Scharfstein, & Stein, 1992). The risk with HFT therefore would be that these firms all study and act on the same information, thereby disregarding other valuable information and leading to inefficient pricing and overcrowding in one direction, as suggested by Kozhan and Wah Tham (2012) and Biais and Woolley (2011).

Next to the overcrowding effect, the theoretical model of Biais, Foucault and Moinas notes that having the fastest access to quotes means having fast access to new information, since quotes contain market information (2012). The market participant with the fastest access to these quotes has an informational advantage, which can create adverse selection costs for those trading with the fastest participant. This led Biais, Foucault and Moinas to introduce a theoretical model for equilibrium investment in high frequency trading technology, in which they find that in equilibrium HFT has both positive implications, by helping traders cope with market fragmentation and thereby allowing them to trade at all prices until prices are fully informative, and negative implications, by accessing information first and thereby generating adverse selection costs. This leads to an equilibrium in which both high speed traders and low speed traders coexist (Biais et al., 2012).

2.3 Empirical findings

Both Chaboud, Hjalmarsson, Vega, & Chiquoine and Zhang empirically tested the theoretical implications of HFT introduced above. Chaboud, Hjalmarsson, Vega, & Chiquoine, who were able to exactly distinguish automated trading from human trading and liquidity taking from liquidity providing, used minute-by-minute exchange rate data from three currency pairs, obtained from Electronic Broking Services, to study the impact of automated trading on volatility, price discovery and adverse selection costs (2011). Using VAR regressions they found that the existence of triangular arbitrage, a clear example of prices not being informative, is negatively correlated with the presence of automated trading, implying that prices become more informative and efficient with the rise of automated trading. Interestingly, the reduction in arbitrage opportunities comes mainly from automated trading taking liquidity, meaning trading on posted quotes, which would support the view of Biais, Foucault and Moinas (2012) that automated trading gives rise to adverse selection costs. Second, they investigated whether automated trading contributes to temporary deviations from the fair price, causing excess short term volatility at high frequencies. Kozhan and Wah Tham argued that automated trading can theoretically push prices further away from their fair price, caused by the overcrowding effect of HFT agents competing for the same arbitrage asset (2012).

They argue not only that this overcrowding can cause prices to deviate from their fair price, but also that execution risk, the risk that an arbitrage trade cannot be completed profitably, prevents HFT participants from acting on arbitrage opportunities, implying uninformative prices caused by HFT participants competing for the same arbitrage asset (Kozhan & Wah Tham, 2012). The results obtained by Chaboud, Hjalmarsson, Vega, & Chiquoine show that automated trading improves price informativeness mainly by providing liquidity, which means that computers are able to provide new fair quotes quickly and, on the aggregate, do trade on arbitrage opportunities, as shown by the graph in their appendix (2011).

Next to the research done by Chaboud, Hjalmarsson, Vega, & Chiquoine, Brogaard, Hendershott & Riordan studied a dataset provided by NASDAQ, in which they were almost fully able to identify HFT and whether this HFT was taking or providing liquidity. Brogaard, Hendershott & Riordan found that HFT facilitates price efficiency by trading in the direction of permanent price changes and in the opposite direction of short term price inefficiencies when taking liquidity. This supports the adverse selection theory of Biais, Foucault and Moinas, but contradicts the theoretical model of Froot, Scharfstein and Stein, which suggested that short term traders can cause prices to deviate by acting on the same information. Interestingly, Brogaard, Hendershott & Riordan find that HFT liquidity providing trades are in the opposite direction of permanent price changes and in the same direction as short term price inefficiencies, causing short term price deviations and short term volatility, thereby contradicting the findings of Chaboud, Hjalmarsson, Vega, & Chiquoine (2012). Next to these findings, Brogaard, Hendershott & Riordan also find that HFT activity is positively correlated with macroeconomic news, suggesting that HFT is most active when news is released. As expected, when a news announcement is negative prices fall, and when a news announcement is positive prices rise. They show that liquidity demanding HFT buys on positive news and sells on negative news, and that the reverse is true for liquidity supplying HFT, where the effect of liquidity supplying HFT is larger. This results in HFT trading in the opposite direction of the news and shows that HFT keeps providing liquidity even when markets are stressed, thereby reducing the adverse selection costs caused by liquidity taking HFT (Brogaard, Hendershott & Riordan, 2012). Although the literature, for instance Pilbeam (2013), suggests that prices should adjust instantaneously to new information, Brogaard, Hendershott & Riordan find that prices keep drifting for a few seconds (2012).

In contrast to these positive findings about the implications of high frequency trading, Zhang found a weak positive relationship between automated trading and stock price volatility after controlling for fundamental volatility and other exogenous determinants of volatility (Zhang, 2010). Zhang used data obtained from CRSP and Thomson Reuters, where he had to estimate the quarterly share of automated trading by looking at institutional turnover rates, which restricted his analysis to the quarterly level. He estimated that in 2009 automated trading accounted for roughly 78% of trading volume, up from near zero in 1995, and that HFT is most active in the top 3,000 stocks measured by market capitalization (2010). Unlike the study of Chaboud, Hjalmarsson, Vega, & Chiquoine, which focused on volatility in exchange rates, Zhang focused on volatility in the US capital market. His study answered two questions: does HFT increase or decrease stock price volatility, and does HFT help or hinder price discovery after news?

Zhang found that HFT is negatively associated with the market's ability to incorporate news into asset prices, thus decreasing price discovery, which is consistent with the theoretical model of Froot, Scharfstein and Stein (1992). Taking analyst forecasts and earnings surprises as news, Zhang found that stock prices react more strongly to news when HFT trading volume is high and that the price reactions caused by HFT within this short timeframe are almost entirely reversed in the subsequent period, suggesting that HFT contributes to an overreaction to the news and causes excess short-term volatility (Zhang, 2010). As Zhang already noted, he was limited to conducting his research and empirical testing at the quarterly level, which only allowed him to conclude that stock prices are pushed too far in the direction of the news and tend to reverse in the subsequent months (Zhang, 2010).

2.4 Conclusion on literature review

The empirical research on the effect of automated trading and HFT on short term prices and volatility has been conducted in several different ways and on different data sets; part of the results support the theoretical models and part do not. Considering that the earliest theoretical models were developed well before the rise of automated trading and HFT, and that, for instance, the model of Froot, Scharfstein, and Stein was developed for the influence of short term human trading, new models may need to be developed to capture the effect of automated trading in general and HFT specifically. Next to the possibly outdated theoretical models, the empirical studies remain contradictory, which leaves room for further research. This paper will focus on the empirical part, but the empirical results hold little value if an economic theory behind them cannot be developed. Although this is left open for further research, the theories just discussed may help indicate what such a theory might contain.


3. Theoretical background

3.1 The estimator

In a recent study, the SEC expresses concern about HFT creating excessive short term volatility, otherwise called 'noise'. If HFT trades against this noise, this trading may contribute to lower overall trading costs for long term investors, but if HFT trades in the same direction as this noise, it will add to the noise and increase overall trading costs for long term investors (Brogaard, Hendershott & Riordan, 2012). Measuring the short term volatility to detect possible excessive short term volatility will help explain whether HFT trades in the correct direction and reduces the overall trading costs of long term investors.

In this paper the volatility will be measured with an estimator specifically developed by Zhou in 1996 to estimate volatility using high frequency data (Zhou, 1996). To estimate volatility in high frequency data, standard volatility measures such as the square of the returns will not suffice, due to the high level of incoherence or 'noise' present in high frequency data, which makes the standard volatility measures strongly biased (Zumbach, Corsi & Trapletti, 2002). This noise or uncertainty stems from the multiple contributor structure inherent in the foreign exchange markets. Corsi, Zumbach, Müller & Dacorogna (2001) gave several consequences of this structure:

• Disagreement on the 'true price', which causes the quoted prices to be distributed around the true price, instead of being representative of the true price.

• Market maker bias toward bid or ask prices due to the current position the market maker holds. A market maker with a large long position could have a preference for selling his stake, making his offer quote more attractive and his bid quote less attractive, influencing the logarithmic middle price.

• Fighting-screen effect for advertising purposes. To keep their name displayed on the screens of data suppliers such as Reuters and Bloomberg, market makers keep adjusting their quotes marginally to keep their quotes fresh on the screen.

• Delayed quotes. Several studies have shown that market participants release delayed quotes, sometimes older than one minute, creating distortion in price discovery, although this consequence has become less important with the rise of computerized quoting.


This paper uses tick data instead of the minute or even five minute data used in previous studies, for several reasons. The first is that HFT occurs on the millisecond timeframe; using one minute data would ignore the data most vital to analyzing the effect of HFT. The second is the concept of the 'minimal sufficient statistic', the smallest subset of data needed to evaluate a statistical estimate without losing information (Zumbach, Corsi & Trapletti, 2002). Zumbach, Corsi and Trapletti give a simple example in their paper "Efficient Estimation of Volatility using High Frequency Data" to illustrate the importance of using data at the highest frequency available when estimating the volatility of a random walk (2002). To measure volatility in the exchange rate using tick-by-tick data, Zhou introduced the following estimator:

Let

$$\hat{\sigma}^2_{[a,b]} = \sum_{a \le t_i \le b} \left( X_i^2 + X_i X_{i-k} + X_i X_{i+k} \right), \qquad X_i = S_{t_i} - S_{t_{i-k}},$$

where $k$ is the tick interval and $S_t$ is the logarithm of the exchange rate. Zhou allows the noise to be reduced further by setting $k > 1$, but given that some of the sources of noise mentioned by Zhou have diminished over time and that this paper studies extremely short time periods, $k$ is set equal to 1.
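As a minimal sketch of how this estimator could be computed in practice (the function name, array handling and synthetic example data below are illustrative choices, not taken from the thesis), the $k = 1$ case can be written as a single vectorized sum over the tick-level log mid-prices:

```python
import numpy as np

def zhou_volatility(log_prices, k=1):
    """Zhou (1996) volatility estimator over a window of tick-level log prices.

    Sums X_i^2 + X_i*X_{i-k} + X_i*X_{i+k} with X_i = S_i - S_{i-k}; the two
    cross terms offset the first-order autocovariance that microstructure
    noise adds to the squared returns.
    """
    s = np.asarray(log_prices, dtype=float)
    x = s[k:] - s[:-k]                 # k-tick log returns X_i
    core = x[k:-k]                     # X_i that have both a k-lag and a k-lead
    lag, lead = x[:-2 * k], x[2 * k:]
    return float(np.sum(core * (core + lag + lead)))

# Example with a synthetic series standing in for 101 log mid-prices:
rng = np.random.default_rng(0)
ln_mid = np.cumsum(rng.normal(0.0, 5e-5, 101))
sigma2_A = zhou_volatility(ln_mid[:51])     # first 50 tick returns
sigma2_B = zhou_volatility(ln_mid[50:])     # second 50 tick returns
print(sigma2_A, sigma2_B)
```

Setting the k argument above 1 would correspond to the stronger noise reduction that Zhou allows for.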

3.2 The inflation figure

The inflation figure released by Eurostat is the HICP, which measures inflation in the Euro area, calculated according to a harmonized approach and a single set of definitions. This figure is the official measure of inflation and is used for monetary policy purposes (EUROSTAT, 2014). It is released at the end of the month or at the beginning of the following month, at exactly 11:00 am during active trading, giving market participants the ability to act immediately on the part of the information that was not expected.

According to all three forms of the efficient market hypothesis, all publicly available information is reflected in prices and new information should be absorbed instantaneously into the new fair price, based on a no-arbitrage assumption (Bodie, Kane & Marcus, 2011). Releasing a new figure, such as the inflation figure, should thus lead to a new 'fair' price. How much the former price adjusts depends on the expectations market participants have already reflected in the current price. According to the efficient market hypothesis, the adjustment when the new information is released should be random and unpredictable. If this price adjustment were predictable and non-random, market participants would trade on this known bias until the price reflected all available information and expectations (Bodie, Kane & Marcus, 2011).


The determinants of the exchange rate are explained by the no-arbitrage covered interest rate parity presented by Bodie, Kane and Marcus, defined as follows:

$$F_0 = E_0 \left( \frac{1 + r_{US}}{1 + r_{EU}} \right)^T$$

where $F_0$ is the current futures price, $E_0$ is the current spot exchange rate, $r$ is the current interest rate in the given country and $T$ is the time horizon of the future. Inflation affects the parity through the Fisher equation, developed by Irving Fisher:

$$1 + i = (1 + r)(1 + \pi)$$

where $r$ is the real interest rate, $i$ is the nominal interest rate and $\pi$ is the inflation rate. Although multiple exchange rate models have been developed and the effect of inflation is not agreed upon in either the short run or the long run, most models developed to explain exchange rates work with real interest rates and/or expected inflation (Pilbeam, 2013). This means that new information on inflation should lead to increased volatility as the information is processed into the exchange rate.
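To illustrate the mechanics, the following sketch works through the two equations with invented numbers; the spot price, interest rates, inflation figures and horizon are hypothetical values, not data used in this paper:

```python
# Illustrative only: hypothetical rates, not figures from this paper.
E0   = 1.3000        # spot price in USD per EUR
r_us = 0.02          # one-year US nominal interest rate
r_eu = 0.01          # one-year euro-area nominal interest rate
T    = 1.0           # horizon in years

# Covered interest parity: the no-arbitrage forward price.
F0 = E0 * ((1 + r_us) / (1 + r_eu)) ** T
print(f"forward USD/EUR: {F0:.4f}")               # ~1.3129

# Fisher equation: a surprise rise in expected euro-area inflation raises
# the euro nominal rate for a given real rate ...
real_eu, new_infl_eu = 0.005, 0.015
r_eu_new = (1 + real_eu) * (1 + new_infl_eu) - 1
# ... which feeds straight back into the parity-consistent forward price.
F0_new = E0 * ((1 + r_us) / (1 + r_eu_new)) ** T
print(f"forward after inflation surprise: {F0_new:.4f}")   # ~1.2999
```

The point of the sketch is only that an inflation surprise changes the parity-consistent price, which is the channel through which the release should generate volatility.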

3.3 Hypothesis

As explained by Brogaard, Hendershott and Riordan, the release of new information can lead to increased short term volatility, creating noise around the price (2012). HFT can trade on this noise in two different ways. The first possibility is that HFT trades in the same direction as the noise, creating excess short term volatility. The second is that HFT trades against the direction of the noise, reducing volatility and costs to long term investors. The hypothesis in this paper is therefore:

$$H_0: \sigma_A^2 = \sigma_B^2 \qquad H_1: \sigma_A^2 < \sigma_B^2$$

where $\sigma_A^2$ stands for the volatility in the first 50 ticks after the release of new public information and $\sigma_B^2$ stands for the volatility in the second 50 ticks after the release of new public information.


4. Methodology

4.1 The data sources

This paper uses tick data of the Euro Dollar exchange rate, obtained from the platform of Gain Capital, which provides an electronic foreign exchange platform with volumes averaging approximately $30 billion daily in 2013, around 0.6% of total daily volume, estimated at $5,000 billion (Pilbeam, 2013). 66% of the volume handled by Gain Capital is institutional and 33% is retail (Gain Capital, 2014). Data on the Euro Dollar exchange rate around the release of the monthly inflation flash estimate can be downloaded directly from their web page. From this page, data from 2004, 2008 and 2013 have been obtained. The 2004 volatility will be compared with the volatility of 2013, and 2008 will function as a control year to measure whether the financial crisis had a substantial impact on short term volatility.

The inflation figure releases are obtained directly from Eurostat. These figures are released at the end of the corresponding month, or at the beginning of the new month, at exactly 11:00 am. This paper places no importance on the actual figure itself, but only measures the market reaction to this figure. According to the efficient market hypothesis this reaction should be a random process, as outlined in the theoretical background.

4.2 Measuring samples

To measure the difference in volatility caused by HFT and by human trading, this paper compares the volatility in the first 50 ticks, where a tick consists of either a new bid quote, a new ask quote or a new bid/ask volume, with the volatility in the second 50 ticks right after the release of the new information at 11:00 am. The crucial assumption here is that HFT is most active right after the release of new information and that the second 50 ticks are comprised of the normal mix of human trading and HFT trading, a view supported by the finding of Brogaard, Hendershott & Riordan that HFT is most active after macroeconomic news (2012). If HFT trades in the same direction as the noise created by a news release, the volatility in the first 50 ticks will be higher than in the second 50 ticks; the reverse is true if HFT trades in the opposite direction of the noise.

The raw data file obtained from Gain Capital consists of the bid/ask quotes at the corresponding time. From this data the first 100 ticks after 11:00 am are obtained. To prevent the data from being influenced by the bouncing between the bid and ask price, one price is created simply by taking the mean of the bid and the ask price, from here on referred to as the mid-price. From this price the natural logarithm is taken, referred to as the ln mid-price. With these prices the formula given in the theoretical background can be computed. Summary statistics of the tick returns in the three January months are provided below.

Variable Obs Mean Std. Dev. Min Max

Jan-04 99 0.0000106 0.0000624 -8E-05 0.00016

Jan-08 99 4.43E-07 0.0000283 -0.00012 0.000102

Jan-13 99 5.46E-06 0.000071 -0.0001 0.000135

It is worth noting that the standard deviation is larger than the mean in all three months, which supports the model of Zhou, in which he stated that the mean tick return is negligible compared to the standard deviation (1996).
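The preparation steps described above could look roughly like the sketch below; the file name, the release timestamp and the column names (`timestamp`, `bid`, `ask`) are assumptions about the Gain Capital export, not its documented format:

```python
import numpy as np
import pandas as pd

# Assumed layout of the Gain Capital export: one row per tick with a
# timestamp, a bid quote and an ask quote (column names are assumptions).
ticks = pd.read_csv("EURUSD_ticks.csv", parse_dates=["timestamp"])

# Keep the first 100 ticks at or after the 11:00 release (hypothetical date).
release = pd.Timestamp("2013-01-04 11:00:00")
window = ticks[ticks["timestamp"] >= release].head(100).copy()

# Mid-price, its natural logarithm and the tick returns, as in section 4.2.
window["mid"] = (window["bid"] + window["ask"]) / 2
window["ln_mid"] = np.log(window["mid"])
window["ret"] = window["ln_mid"].diff()

# Obs, mean, std. dev., min and max of the 99 tick returns.
print(window["ret"].describe())
```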

4.3 Comparing results

Given the fact that the volatility measure of Zhou is additive in time, or in ticks in this case since the timeframe is measured in ticks, a rise in volatility should increase the consecutive values of the per-tick term $X_i^2 + X_i X_{i-1} + X_i X_{i+1}$, which is calculated separately for each tick. By summing these values and plotting them against the ticks, referred to from here on as the cumulative volatility, a graphical representation is given of the evolution of the volatility. If there is a difference between HFT absorbing new price information and human trading absorbing new price information, there should be two visible increases in the cumulative volatility: one in the first 50 ticks, caused by the HFT, and one in the second 50 ticks, caused by the human trading. A graphical representation will show whether there has been a change in the volatility pattern and possibly whether there is a visible difference between the first 50 ticks and the second 50 ticks.
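A sketch of this cumulative plot, under the same illustrative assumptions as the data-prep sketch above (a synthetic random walk stands in for the 100 log mid-prices so the snippet runs on its own), could be:

```python
import numpy as np
import matplotlib.pyplot as plt

def zhou_terms(log_prices):
    """Per-tick contribution X_i^2 + X_i*X_{i-1} + X_i*X_{i+1} (k = 1)."""
    x = np.diff(np.asarray(log_prices, dtype=float))
    core, lag, lead = x[1:-1], x[:-2], x[2:]
    return core * (core + lag + lead)

# Synthetic stand-in for the 100 log mid-prices after the release.
rng = np.random.default_rng(1)
ln_mid = np.cumsum(rng.normal(0.0, 5e-5, 100))

cumulative = np.cumsum(zhou_terms(ln_mid))
plt.plot(np.arange(2, 2 + len(cumulative)), cumulative)
plt.axvline(50, linestyle="--")   # boundary between the two 50-tick windows
plt.xlabel("tick after the 11:00 release")
plt.ylabel("cumulative Zhou volatility")
plt.show()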

Next to the graphical representation, the total 50 tick volatility is measured by the Zhou volatility estimator in both the first period and the second. These two volatilities, from here on referred to as $\hat{\sigma}_A^2$ and $\hat{\sigma}_B^2$, where A refers to the first 50 ticks and B to the second 50 ticks, are then compared by an F-test for comparing two variances, given by:

$$F = \frac{\hat{\sigma}_A^2}{\hat{\sigma}_B^2}.$$

The mean volatility over the 12 months combined is also calculated, to be able to test whether there is a significant difference in volatility on average.


The mean volatility of the first 50 ticks is compared to the mean volatility of the second 50 ticks, using an independent sample T-test for comparing two means, given by:

$$t = \frac{\bar{\sigma}_A^2 - \bar{\sigma}_B^2}{\sqrt{\dfrac{s_A^2}{n_A} + \dfrac{s_B^2}{n_B}}},$$

where $\bar{\sigma}_A^2$ and $\bar{\sigma}_B^2$ are the mean volatilities over the twelve months, $s_A^2$ and $s_B^2$ are the corresponding sample variances and $n_A = n_B = 12$, assuming unequal variances as stated by the hypothesis.
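Both comparisons can be sketched with SciPy as below; the input arrays and the degrees of freedom chosen for the F-test are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np
from scipy import stats

# Hypothetical monthly Zhou volatility estimates (first vs. second 50 ticks).
sigma2_A = np.array([1.1e-7, 0.9e-7, 1.3e-7, 1.0e-7, 1.2e-7, 0.8e-7])
sigma2_B = np.array([1.2e-7, 1.4e-7, 1.1e-7, 1.5e-7, 1.0e-7, 1.3e-7])

# Per-month F-test on the two variance estimates; the degrees of freedom
# (one fewer than the ~50 returns per window) are an assumption here.
df = 49
F = sigma2_A[0] / sigma2_B[0]
p_f = 2 * min(stats.f.cdf(F, df, df), stats.f.sf(F, df, df))

# One-sided Welch t-test on the monthly means (H1: mean_A < mean_B),
# without assuming equal variances (requires scipy >= 1.6 for `alternative`).
t_stat, p_t = stats.ttest_ind(sigma2_A, sigma2_B, equal_var=False,
                              alternative="less")
print(F, p_f, t_stat, p_t)
```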

Volatilities produced in this paper are very small due to the short time period and are therefore displayed in scientific notation. However, to compare the volatilities in this paper with volatility estimates given by other estimators, one can use the formula proposed by Zumbach, Corsi & Trapletti in 2002 to convert the tick volatility to yearly figures. This formula is given by

$$\sigma_{\text{year}} = \sigma_{\text{tick}} \sqrt{\frac{1\,\text{year}}{\delta t}},$$

where $\sigma_{\text{tick}}$ is the tick volatility and $\delta t$ is the time interval of the tick. This enables comparing the short term volatility measured by the Zhou estimator to the long term volatility, which can be justifiably measured by the standard square of the returns, on for instance a yearly time frame.
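A small sketch of this conversion under the square-root-of-time scaling just described; the one-second tick spacing and the input variance are invented example values:

```python
import math

def annualize(sigma2_tick, dt_seconds):
    """Scale a per-tick variance to an annualized volatility:
    sigma_year = sqrt(sigma2_tick) * sqrt(seconds_per_year / dt)."""
    seconds_per_year = 365.25 * 24 * 3600
    return math.sqrt(sigma2_tick) * math.sqrt(seconds_per_year / dt_seconds)

# Example: a per-tick variance of 2e-8 with an assumed one-second tick spacing.
print(annualize(2e-8, dt_seconds=1.0))
```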

5. Results

5.1 Introduction to results

Where Zhang compared the drift rates of stocks on a quarterly basis, this paper examines volatility over an extremely short time period, assuming that the volatility in the first 50 ticks after the news release is caused mainly by HFT computers adjusting quotes and acting on quotes, and that the second 50 ticks consist of the normal mix of human trading and automated trading. Comparing these two volatilities over a time span of nine years that includes the rise of HFT (Chaboud, Hjalmarsson, Vega, & Chiquoine, 2011) will give a clear view of whether the volatility caused by HFT is significantly higher than the volatility caused by the mix of HFT and human trading, and whether this difference has increased or decreased with the rise of HFT. This section contains both graphical results and statistical results, which complement each other.


5.2 Graphical results

Although graphical results hold little statistical value and cannot be relied on solely, a graph does provide insight into a possible change in the volatility pattern and makes the concept easier to understand as a whole. The research in this paper produced 72 graphs, which is why only a selection is provided below.

This graph shows the cumulative volatility of the first and the second 50 ticks plotted against the ticks in 2004. There does not seem to be a big difference in the pattern of volatility. Further on in this paper the statistical difference in overall volatility will be provided. This graph suggests that in 2004, when there was little to no automated trading in the exchange rates, the volatility was constant over the 100 tick time period.


In contrast to the graph of 2004, this graph shows a jump in volatility at the 47th tick, and another two at the 60th and the 80th tick. The advance in volatility mainly occurs in the second 50 ticks, which are assumed to be caused by the standard mix of human trading and automated trading. Another interesting observation is that there seems to be little reaction to the news release in the first 10 ticks, contrary to the 2004 graph, which immediately shows a changing pattern.

Although not all graphs produced for this paper are the same or provide such a clear distinction between the years, overall the graphs seem to point to a change in the volatility pattern. The statistical results in the next section will provide statistical evidence for this change in pattern.

5.3 Statistical results

Statistical results have been produced in two ways. First, the volatility estimate $\hat{\sigma}_A^2$ of each month has been compared to the volatility estimate $\hat{\sigma}_B^2$ of the same month by the F-test previously explained. The table below shows how many times the null hypothesis is rejected at the different significance levels:

Year  α = 0.10  α = 0.05  α = 0.01
2004  7  6  4
2008  12  12  10
2013  6  6  3

The results show that in 2004 the volatility $\hat{\sigma}_A^2$ does not differ significantly from $\hat{\sigma}_B^2$ in 8 of the 12 months at the significance level α = 0.01. At the other significance levels the volatilities differ from each other about 50% of the time.

The results of 2013 resemble those of 2004, which indicates that there has not been a significant rise in short term volatility when comparing these two years.

The interesting part of this table is that in 2008, a year in which the financial crisis caused high levels of volatility in the capital markets, the null hypothesis is rejected in all cases but two, showing that the volatilities differed significantly. In that year, $\hat{\sigma}_A^2$ was larger than $\hat{\sigma}_B^2$ in five of the twelve months.


To test whether $\hat{\sigma}_A^2$ is significantly smaller than $\hat{\sigma}_B^2$, as stated in the hypothesis, the mean volatilities have been compared. The results are provided in the table below:

Year  t-value  Reject at α = 0.10  Reject at α = 0.05  Reject at α = 0.01
2004  0.210847  No  No  No
2008  -0.001944  No  No  No
2013  -1.355346  Yes  No  No

These results show that at the significance level of α = 0.10, there is enough statistical evidence in 2013 to reject the null hypothesis. These results support the suggestion of the graphical results that the volatility in the first 50 ticks is smaller than the volatility in the second 50 ticks.

The volatility in 2004 in the first 50 ticks does not differ significantly from the volatility in the second 50 ticks, which supports the suggestions of the graphical results that the volatility was constant over the 100 tick time frame in 2004.

Whereas the F-test yielded significant results for 2008, the mean comparison does not provide statistically significant results. Although not significant, the t-value is negative, suggesting that the volatility in the first 50 ticks was lower than in the second 50 ticks. These results are also supported by the graphical results.

5.4 Summary

The results obtained in this paper suggest a change in volatility during the time span in which HFT became active. Both the graphical results and the statistical results provide evidence for a change in volatility. The graphical results show a clear difference in the volatility pattern between 2004 and 2013. These graphical results are supported by the statistical results, which show that the volatility in the first 50 ticks in 2013 is lower than the volatility in the second 50 ticks in 2013, while in 2004 there was no significant difference. In 2008, the control year for the financial crisis, an interesting result was found by the F-test, in that the volatilities differed in all months at the α = 0.10 and the α = 0.05 levels.


6. Conclusion

6.1 Conclusion on results

Although the graphs produced by this paper do not all show as clear a difference in the volatility pattern as the graphs provided in section 5.2, overall the graphs suggest that there has been a change in the pattern of volatility. For 2004, the graphical results show no difference in the pattern over the 100 tick timeframe, which suggests that, when there is little to no automated trading, the volatility is constant over the 100 tick timeframe. For 2013, the graphical results show a clear difference between the first 50 tick volatility pattern and the second 50 tick volatility pattern, which suggests that when automated trading is involved, the volatility pattern is influenced in a different way than by human trading. During the control year 2008 there was already a visible change in the volatility pattern, which supports the result of Brogaard, Hendershott & Riordan that HFT is most active when many news releases occur.

The empirical results obtained in this paper show that in the first 50 ticks, when HFT is assumed to be most active in posting new quotes and trading on quotes, the volatility is lower than in the second 50 ticks, which are assumed to be comprised of the standard mix of human trading and automated trading. These results comply with the findings of Chaboud, Hjalmarsson, Vega, & Chiquoine that HFT improves price discovery in the short term, and contradict the theory of Kozhan and Wah Tham, which theoretically states that HFT can push prices away from their 'fair' price and thereby create excess short term volatility, a theory supported by the theoretical model of Froot, Scharfstein and Stein. Under the assumption that the first 50 ticks consist mainly of HFT posting new quotes and trading on quotes, these findings contradict the results obtained by Zhang, who conducted his research at the quarterly level and found that stock prices tend to reverse their course in the months following a news release. Where Brogaard, Hendershott & Riordan found that liquidity providing trades by HFT firms are in the same direction as the pricing error and thereby cause short term volatility, this paper finds that the overall volatility caused by HFT is less than the volatility caused by the mix of HFT and human trading. This suggests that the liquidity taking trades by HFT, which trade in the opposite direction of the pricing error, are more influential than the liquidity providing trades, making the overall effect negatively correlated with short term volatility.


In 2008, capital markets experienced very high levels of volatility. During periods of high volatility, arbitrage opportunities are abundant and HFT and automated trading are most active. The results produced by the F-test for 2008 show that in a period in which the financial crisis caused very high levels of volatility, there is a significant difference between the volatility caused by HFT and the volatility caused by human trading. Although not significant, the mean volatility during the first 50 ticks was lower than the volatility during the second 50 ticks, suggesting that HFT did not lead to an increase in short term volatility, which again contradicts the theoretical model of Kozhan and Wah Tham.

6.2 Limitations of this paper and further research

Unfortunately this paper was not able to measure exactly the share of HFT during the extremely short time frames in which the volatility is measured. The empirical findings and the conclusions are based on the assumption that the share of HFT is highest right after the news release, in the first 50 ticks. The question arising from this paper is whether this assumption can be proven empirically. Next to proving this crucial assumption, it would be valuable to distinguish liquidity taking trading from liquidity providing trading, to examine the effect of both types of trading and obtain a clear and definitive answer on whether there are adverse selection costs involved when trading with HFT. Assuming that HFT activity is negatively correlated with short term volatility, the question arises how we can quantify this potential benefit of lower volatility to the market, who benefits from this lower volatility and who suffers, and whether this results in an overall beneficial effect. This would then need to be modelled into a theory, from which further empirical research can follow.


References

Andersen, T. G., Bollerslev, T., Diebold, F. X., & Vega, C. (2007). Real-time price discovery in global stock, bond and foreign exchange markets. Journal of International Economics, 73(2), 251-277.

Cao, C., Hansch, O., & Wang, X. (2009). The information content of an open limit‐order book. Journal of futures markets, 29(1), 16-41.

Chaboud, A., Hjalmarsson, E., Vega, C., & Chiquoine, B. (2011). Rise of the Machines: Algorithmic Trading in the Foreign Exchange Market. SSRN Electronic Journal. doi:10.2139/ssrn.1501135

Corsi, F., Zumbach, G., Muller, U. A., & Dacorogna, M. M. (2001). Consistent High‐precision Volatility from High‐frequency Data. Economic Notes, 30(2), 183-204.

Biais, B., Foucault, T., & Moinas, S. (2012, March). Equilibrium high-frequency trading. In AFA 2013 San Diego Meetings Paper.

Biais, B., & Woolley, P. (2011). High frequency trading. Manuscript, Toulouse University, IDEI.

Bodie, Z., Kane, A., & Marcus, A. J. (2011). Investments and portfolio management. McGraw-Hill/Irwin.

Brogaard, J., Hendershott, T., & Riordan, R. (2012). High frequency trading and price discovery. SSRN eLibrary.

Froot, K. A., Scharfstein, D. S., & Stein, J. C. (1992). Heard on the street: Informational inefficiencies in a market with short‐term speculation. The Journal of Finance, 47(4), 1461-1484.

Hendershott, T., & Riordan, R. (2009). Algorithmic trading and information. Manuscript, University of California, Berkeley.

Hendershott, T., Jones, C., & Menkveld, A. (2010). Does algorithmic trading improve liquidity? Journal of Finance, forthcoming.


Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2011). The flash crash: The impact of high frequency trading on an electronic market. Manuscript, U of Maryland.

Pilbeam, K. (2013). International Finance (4th ed.). New York: Palgrave Macmillan.

Schulmeister, S. (2009). A General Financial Transaction Tax: a short cut of the pros, the cons and a proposal. Austrian Institute of Economic Research.

Zhang, F. (2010). High-Frequency Trading, Stock Volatility, and Price Discovery. SSRN Electronic Journal. doi:10.2139/ssrn.1691679

Zhou, B. (1996). High-Frequency Data and Volatility in Foreign-Exchange Rates. Journal of Business & Economic Statistics, 14(1), 45–52. doi:10.1080/07350015.1996.10524628

Zumbach, G., Corsi, F., & Trapletti, A. (2002). Efficient estimation of volatility using high frequency data. Manuscript, Olsen & Associates, Zürich, Switzerland.


Appendix

Graphs: January


Volatility table

Month  1st 50 ticks 04  2nd 50 ticks 04  1st 50 ticks 08  2nd 50 ticks 08  1st 50 ticks 13  2nd 50 ticks 13

Jan 1,14776E-07 1,21001E-07 1,29997E-07 5,01603E-08 3,29451E-08 3,19885E-08

Feb 1,81303E-07 7,1242E-08 4,27637E-07 2,04121E-07 3,64837E-08 6,30352E-08

Mar 6,61203E-08 9,24919E-08 4,38472E-08 2,43244E-07 1,15118E-08 1,36948E-08

Apr 1,67088E-07 1,6011E-07 4,13105E-08 1,40437E-07 2,69247E-08 2,62429E-08

May 1,60902E-07 5,36072E-08 8,35157E-09 2,50904E-08 2,84011E-08 9,65698E-08

Jun 1,62184E-07 5,19952E-07 1,12745E-07 2,01488E-08 1,28286E-08 1,03571E-08

Jul 3,51675E-07 2,61776E-07 1,15007E-07 2,67056E-07 2,33732E-08 4,37656E-08

Aug 2,09835E-07 1,82746E-07 1,10774E-07 3,19047E-07 1,20758E-08 7,29964E-09

Sep 5,18064E-08 9,06318E-08 5,09687E-07 2,7851E-07 7,8593E-09 2,5442E-08

Oct 2,4711E-07 2,28271E-07 4,70205E-07 7,3011E-07 3,34323E-08 2,4651E-08

Nov 1,5285E-07 4,52969E-08 8,7499E-07 2,23488E-07 1,75866E-08 2,81688E-08

Dec 2,03758E-07 1,30182E-07 3,57325E-07 7,02831E-07 7,83173E-09 1,06416E-08

Mean 1,72451E-07 1,63109E-07 2,66823E-07 2,6702E-07 2,09378E-08 3,18214E-08

F-test table

Month 2004 2008 2013

Jan 0,948559 2,591634 0,366008

Feb 2,544891 2,095018 0,823225

Mar 1,398843 5,547534 3,965803

Apr 1,04358 3,39954 0,306977

May 3,001495 3,004274 1,000926

Jun 3,205941 5,595605 0,572939

Jul 1,343421 2,322085 1,728487

Aug 1,148233 2,880155 1,046083

Sep 1,749432 1,83005 1,046083

Oct 1,082529 1,552748 0,697169

Nov 3,374403 3,915151 1,16025

Dec 1,565181 1,966922 1,256674
