
University of Amsterdam

Faculty of Economics & Business

The efficiency of the bitcoin exchange market during the years 2011-2013

February 20th, 2014

Author: Filippos Michailidis

Student Number: 6122000

Bachelor’s Programme: Economics & Business Specialization: Economics


Abstract

Of all recent developments in financial economics, the Efficient Market Hypothesis has achieved the widest acceptance among scholars and has prevailed in textbooks. The current paper investigates the efficiency of the exchange market for bitcoins, an innovative digital currency that has rapidly increased in popularity since its introduction in 2009. Employing a series of statistical tests, it investigates daily prices on the Mt.Gox bitcoin market for the period May 1st 2011 to December 15th 2013, and finds that the price of bitcoins is significantly autocorrelated:

future prices can be predicted based on the study of past prices. There is also a significant weekly effect. The model explains only 6% of the response variation, which is generally low. However, in the absence of transaction costs, an investor can make abnormal profits.


Table of Contents

1. Introduction
2. Bitcoin: the Currency
   a. Technical Analysis
   b. Economic Analysis
3. Efficient Market Hypothesis, Random Walk Hypothesis and Jensen's α: a Literature Review
4. Data
5. Specification and Methodology
6. Results
   a. Unit Root Tests
   b. Tests for Number of Lags
   c. Seasonality
   d. Error Term
   e. Evaluating the Autoregressive Model
   f. Jensen's alpha
7. Conclusion
8. Bibliography


1. Introduction

This paper merges one of the most discussed topics in Finance, the Efficient Market Hypothesis, with the controversially flourishing market of electronic currencies, namely Bitcoins.

The efficient market hypothesis (EMH) simply states: "security prices fully reflect all available information" (Fama, 1970). If all relevant information is incorporated into security prices, then the market is informationally efficient. According to the definition of "all available information", market efficiency is classified into three categories: the weak form, the semi-strong form and the strong form. This paper addresses explicitly the weak form of the EMH, which states that future prices cannot be predicted by analyzing prices from the past.

The EMH is closely connected to the Random Walk Hypothesis (RWH). The RWH, stemming from the theory of random walks, states that security prices cannot be predicted because all consecutive price changes in a security represent random departures from previous prices (Malkiel, 2003). The logic behind the RWH is that, if security prices fully reflect all available information at the current moment, then tomorrow's change in price will only reflect tomorrow's news and will be independent of today's news, because today's news has already been incorporated into the price. By definition, news cannot be predicted; therefore successive price changes are unpredictable. Consequently, an expert who invests based on technical analysis cannot consistently outperform a naïve investor by selecting a different portfolio, given comparable risk. The two theories reinforce each other and together set the current paradigm; EMH and RWH are highly complementary.

Most finance scholars believe the markets are weak form efficient; Doran, Peterson and Wright (2010) found that from a sample size of 642 U.S. finance academics, 59% generally disagree with the statement that returns can be predicted using past returns, while only 8% generally agree.

Equally substantial is the empirical work that finds market anomalies which allow the outperformance of passive investment and contradict the EMH. Once these anomalies are documented and published, investors exploit them in order to make abnormal profits. This sequence makes the anomalies weaken or disappear, reducing the relevance of the related papers and lending overconfident support to the EMH. In addition, most documented non-random effects become insignificant in the presence of the transaction costs involved in exploiting them (Grossman & Stiglitz, 1980). The present paper focuses on the weak form of the efficiency argument and applies it to a particular market that, at the current moment, remarkably satisfies the criterion of "zero transaction costs": the market for the virtual currency Bitcoin.

Bitcoin is a decentralized peer-to-peer digital currency introduced on January 3rd 2009. It has attracted a growing number of users and notable vendors. As of December 30th 2013, bitcoin’s market capitalization is approximately USD 9.147 billion. The price of bitcoin since its introduction has been extremely volatile. Unquestionably, this new ecosystem has puzzled people and its foundations and functionalities are still being explored. The importance of incorporating bitcoins to economic research becomes apparent by considering the pace of technology integration in everyday transactions over the past years and the ever-growing use of electronic wallets.

Monetary theories become holistic only by embracing new forms of currency, including them in research and eventually integrating them into economic models. The lack of peer-reviewed research on the bitcoin market, however recent its appearance may be, is remarkable. Focusing on conventional currencies and failing to investigate new forms introduced by technological advances will only lead to theories that lack freshness and applicability.

Furthermore, the bitcoin market is a perfect candidate to test the EMH because it offers new, unmapped data. The debate about efficient markets has resulted in a voluminous number of empirical studies attempting to determine whether specific markets are in fact "efficient" and, if so, to what degree. The present study differs from earlier studies on market efficiency because it takes into consideration data from a market with very low, if any, transaction fees and no bank account fees.

The key question implied when searching for weak form of efficiency is “How well do past returns predict future returns?” The above leads to the hypothesis that “The future price of bitcoin cannot be predicted by the series of historical price”. I attempt to detect and recognize patterns in historical prices of bitcoins through technical analysis.

The remainder of the paper is organized as follows. Section 2 provides a concise introduction to the bitcoin ecosystem, offering technical as well as economic intuition about the virtual currency. Section 3 focuses on the relevant literature on the EMH and RWH, incorporating both supporting and opposing sides. In section 4, the sources and format of the data are discussed and presented along with some descriptive statistics. Sections 5 and 6 present the methodology and the different statistical tools employed in constructing a model, and present the results accompanied by their economic implication and relevance. In section 7, I present the conclusions of the present study and suggestions for further research.


2. Bitcoin: the Currency

Bitcoin is a virtual currency scheme. It was introduced in a paper published under the pseudonym Satoshi Nakamoto (2009) as a digital currency allowing direct online payments without a mediating authority or institution. Based on a peer-to-peer network similar to the ones used for sharing files over a network, it can be globally used as a currency and allows transactions for both goods and services. As a virtual scheme, its foundations are similar to conventional currencies.

There are numerous exchange platforms for buying BTC, Mt.Gox being the most widely used and the one considered in the present study. In order to acquire any amount of BTC, one has to install an open-source software which operates as a digital wallet.

a) Technical Analysis

The engineering of the system is complex and an extensive discussion lies outside the scope of the present paper. Below, a simplified description of a transaction is presented.

A BTC is defined as a chain of digital signatures. Let us assume agents A and B, both of whom own an electronic wallet; electronic wallets generate public keys. In order for A to transfer an amount to B, the latter has to send A a public key. A uses B's public key to transfer the bitcoins by digitally signing the preceding transaction and adding B's public key to the chain of digital signatures of the coin. The new owner B is the only one who can spend the acquired BTC, as long as B has access to his electronic wallet (Virtual Currency Schemes, 2012).

Each BTC stores every transaction that has been executed with a timestamp, and all signed transactions are public, building upon a series of registered transactions called block chain. A full copy of the network block chain contains every transaction that has ever been executed. The block chain constitutes the basic innovation of BTC because it prevents double spending, a problem confronted by earlier versions of virtual currencies. Simply put, by broadcasting each new transaction, the network can verify their legitimacy.
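As a toy illustration of the chain-of-ownership idea described above (a conceptual sketch only, not the real Bitcoin data structures; the field names and the use of SHA-256 over a JSON record are simplifying assumptions), each transfer can be modelled as a record that points to the hash of the preceding transaction and names the new owner's public key:

```python
import hashlib
import json

def make_transaction(prev_tx_hash, new_owner_pubkey, amount):
    """Toy transaction: links to the previous transaction and names the new owner.
    A real Bitcoin transaction also carries the sender's digital signature."""
    tx = {"prev": prev_tx_hash, "to": new_owner_pubkey, "amount": amount}
    tx["hash"] = hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
    return tx

# A hypothetical chain of ownership: newly created coin -> A -> B
genesis = make_transaction(prev_tx_hash=None, new_owner_pubkey="pubkey_A", amount=1.0)
transfer = make_transaction(prev_tx_hash=genesis["hash"], new_owner_pubkey="pubkey_B", amount=1.0)

# Anyone can verify the link by comparing the stored reference with the previous hash
print(transfer["prev"] == genesis["hash"])  # True: the chain is intact
```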

The verification process is also called mining, and requires nodes within the network that perform calculations demanding high processing power. Mining induces the nodes of the network to act honestly: "if a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or by using it to generate new coins." (Nakamoto, 2009, p. 4). In that manner, the BTC algorithm induces honesty among users, because the processing power required to cheat or steal is rewarded less than using the same amount of processing power to participate in the mining process. Furthermore, the network requires minimal structure, which means that nodes can exit the network at any time and, upon return, join the longest chain of registered transactions.

The BTC algorithm is not a deus ex machina: it is the outcome of many attempts to construct a virtual currency that will prove itself sustainable by offering safe transactions and vast recognition, and it is based on existing and established peer-to-peer networks. Its mere existence is a fine balance between the technical difficulties that arise in the design process and the economic principles that I describe in the next sub-chapter.

b) Economic Analysis

BTC has been at the center of attention of journalists and economists, investors and speculators, and many communities operating online. BTC claims to be a form of currency; a closer look at this assertion is necessary.

To begin with, an object becomes money only by virtue of the fact that it is accepted as a medium of exchange by those engaging in the exchange of commodities and services (Mises, 1966). Nowadays, almost all countries have adopted fiat currencies; a fiat currency is legitimised by being declared legal tender by a legal system, and is therefore acceptable as a form of payment for both public and private debt. BTC is not a fiat currency because it is not accepted by any government as legal tender. On the contrary, it is still perceived with skepticism, and it underwent great volatility over the period under investigation, which limits its ability to become a stable medium of exchange. However, this has not prevented the expansion of the number of vendors and people commercially using BTC as a form of currency.

The inception of BTC shares common grounds with the Austrian theory of business cycles. According to F. A. Hayek (1937), the business cycle is the product of monetary interventions in the market, which disrupt relative prices: an expansion of money supply leads to artificially low interest rates, which stimulates overly ambitious investment that does not match contemporaneous consumers’ preferences, inevitably causing a recession. As a result, many Austrian School economists favour a system where monetary control is avoided.

Following the massive international flows of funds that have contributed to the instability of the world economy since the Bretton Woods system (Frankman, 2002), the BTC system is an experimental prototype of a new, unhampered global monetary system in which no central authority controls the money supply. In a period where the pursuit of global capital mobility and the extension of free trade have become central in international economics, the advantages of a global currency cannot be overlooked. The Tobin tax, an attempt to control capital flows that lead to exchange rate instability, addresses the symptoms and not the causes of this instability. Tobin infers that, out of the worldwide gross volume of foreign exchange transactions, which often exceeds US$1.5 trillion per business day, 90% of the


transactions are reversed within a week and 40% within a day. He concludes that the speculation contributing to currency instability is immense, and parallels this kind of speculation with a ‘bank run’ (Tobin, Financial Globalization, 2000).

Tobin further identifies the famous economic trilemma, where a country can choose to pursue at most two out of the following three: (i) a fixed exchange rate, (ii) free capital mobility or (iii) an independent monetary policy capable of achieving domestic macroeconomic or development objectives. A global currency like BTC, however, advocates a new system governed by one currency, abandoning the trilemma altogether. Tobin himself was a supporter of a global currency with supporting institutions: "A permanent single currency, as among the 50 states of the American union, would escape all this turbulence. The United States example shows that a currency union works to great advantage when sustained not only by centralized monetary authorities but also by other common institutions" (Tobin, 1994, p. 104). Rose and Wincoop investigate the effects of the Economic and Monetary Union (EMU) and find that a national currency can be a significant barrier to trade; the benefits of trade created by a currency union significantly surpass the costs of giving up monetary independence (Rose & Wincoop, 2001).

The transition to digital forms of currency can also provide many advantages over traditional forms of paper money. In a past paradigm, David Ricardo considered the costs related to the minting and preservation of metallic currencies a waste of resources; he believed that substituting paper currencies for metallic ones would free a significant amount of capital and labour for the production of goods and services that can directly satisfy human needs. This idea, among others presented in his paper "Proposals for an Economical and Secure Currency" (1816, pp. 8-10), was overlooked for a long period, only to be revived under the model of the gold standard. In the same respect, the resources employed to print and preserve printed money can in turn be freed by a shift towards digital currencies, where the cost of production can be minimized. Indicatively, printing a USD 1 or 2 note costs 5.4 cents. Accordingly, the Federal Reserve Board, on December 12, 2013, approved a budget of USD 826.7 million for new USD notes issued in 2014, incorporating transportation and counterfeit deterrence costs (Federal Reserve, 2014). This amount includes only the production costs of one out of the numerous currencies currently printed.

In the current period, where currency remains pivotal in national politics and the exchange rate is still the key tool for the application of external influence, new forms of currency attract great interest. Unquestionably, this new form of currency can be confusing, especially for those without programming skills. Nevertheless, it is a system into which massive amounts of money and time are put, and it has still failed to attract discussion from an academic perspective. Careful investigation of its foundations and assertions can prove whether it is a noble system or a variation of a large Ponzi or pyramid scheme.

3. Efficient Market Hypothesis, Random Walk Hypothesis and Jensen’s α: a Literature Review

Eugene Fama, considered the father of EMH, in the paper entitled "Efficient Capital Markets: A Review of Theory and Empirical Work," (1970) proposed three types of efficiency based on the information sets assumed in the price trends; (i) weak form, (ii) semi-strong-form and (iii) strong form.

Specifically, the weak form of the EMH, which is the focus of the present study, states that future prices cannot be predicted by analyzing prices from the past. Fama broadens the definition of the weak form of efficiency in his sequel paper on the EMH (1991). The traditional weak form of efficiency becomes a subset of the newer classification of tests of return predictability, which acknowledges the use of past prices, but also of other past economic variables (dividend yields, interest rates), as well as the closer study of anomalies (January effects, momentum, overreaction). Under the new definition, a market is weak form efficient if all public market data is incorporated in security prices.

One of the most eminent theories supporting the EMH is the Random Walk Hypothesis (RWH). Random walks puzzle the mathematical mind even today. One of the earliest published formulations of a random walk is by Karl Pearson, who, in an appeal to the readers of Nature (1905), gives a clear statement of the issue in question:

“Can any of your readers refer me to a work wherein I should find a solution of the following problem … A man starts from a point O and walks l yards in a straight line; he then turns through any angle whatever and walks another l yards in a second straight line. He repeats this process n times. I require the probability that after these n stretches he is at a distance between r and r + δr from his starting point, O ...”

Karl Pearson

Among Pearson's respondents was Lord Rayleigh, who referenced his own earlier work on sound vibrations (1880). Lord Rayleigh's work led Pearson to conclude that "the most probable place to find a drunken man who is at all capable of keeping on his feet is somewhere near his starting point!"

The RWH, stemming from the theory of random walks, states that security prices cannot be predicted because they follow a random walk. Eugene Fama first used the term in the article “Random Walks In Stock Market Prices” (1965)


which constituted a less technical version of his postgraduate dissertation. In its simple form, the hypothesis states that "properly anticipated prices fluctuate randomly" (Samuelson, 1965). The intuition behind the RWH is that, if a security fully reflects all available information, and information revealed at each point in time is independent of information revealed previously, then price changes are random. Furthermore, revealing new information will lead to a price adjustment, which will over-adjust as often as it will under-adjust to the initial disclosure of the news; the times of over- and under-adjustment are themselves random. The two theories reinforce each other and together set the current paradigm; EMH and RWH are highly complementary.

Poterba and Summers (1988), in an attempt to establish weak form market efficiency, investigate whether prices are mean reverting. They collect data from indexes and individual stocks from 18 countries since 1871 and find positive autocorrelation in returns over short horizon and negative autocorrelation over longer horizon, without rejecting RWH at conventional statistical levels. They argue, however, that about 50% of the price change of a security is not explained only by risk factors.

Fama and French (1988) also investigate the mean-reverting component of prices for long horizon returns. They examine the sample period 1926-1985 for autocorrelation of stock returns for increasing holding periods and find that autocorrelation is significant for long horizon returns (3-5 years). The explanation stems from the hypothesis that security prices have a slowly decaying stationary component, which eventually becomes dominated by the random-walk price component.

On the opposing side of the EMH, empirical studies document market anomalies, which are inefficiencies that cause securities to be mispriced and, if exploited, can lead to abnormal profits. For example, several empirical studies have examined the phenomena of calendar effects on securities, where returns tend to be higher or lower in specific calendar periods.¹

The study of calendar effects is relevant because they are inconsistent with the efficient market hypothesis; their mere existence poses a threat to the EMH, which claims that abnormal profits are possible but random. Taking the weekend effect as an example: if the flow of information is continuous and prices reflect all information, Monday returns should be around three times higher than other weekday returns, because the Friday-close-to-Monday-close interval spans three calendar days. In the alternative scenario, where the effect of weekends is negligible, Monday returns should at least be as high as other weekday returns. Both hypotheses are wrong.

In one of the first findings of the weekend effect, Frank Cross (1973), using a sample consisting of 844 pairs of Fridays and following Mondays from January 2, 1953 through December 21, 1970, studies the Standard & Poor's Composite Stock Index and finds that stocks rise on Fridays 22.5% more often than on Mondays. Empirical results that present evidence of an existing weekend effect are well documented in French (1980), Keim and Stambaugh (1984), Rogalski (1984), Chang et al. (1993) and Kamara (1997).

Empirical studies also support variations of the weekend effect. Brooks and Persand (2001) observe "significant negative returns on Tuesdays in Thailand and Malaysia, and a significant Wednesday effect in Taiwan". They conclude that market risk alone is not sufficient to explain these variations in returns. Similarly, Jaffe and Westerfield (1985) find that the day with the lowest average return on the Tokyo Stock Exchange is Tuesday.

Supporters of EMH have been critical of chartists claiming to consistently gain abnormal profits through technical analysis. A chartist believes that history repeats itself; their objective is to study a security's historical prices or levels to identify patterns and forecast its future price. Fama himself contemplates that “chart reading is of no value” (1991) and has been critical of their lack of empirical evidence. Given the voluminous evidence supporting the EMH, Fama challenges opponents for “equally well supported empirical work” proving that technical analysis can lead to consistent abnormal profits.

The paradox in Fama's assertion is that when a pattern that can lead to abnormal profits is spotted, it becomes widely known, and exploitation drives the pattern out of existence, as is evident in the studies presented above. This phenomenon has been well documented for several anomalies; there is evidence that the size effect, the value effect, the turn-of-the-year effect, the weekend effect and the dividend yield effect have significantly weakened or disappeared following their exposure (Booth and Keim, 2000). Sullivan, Timmermann and White (2001) use the bootstrapping method and find that calendar effects no longer persist. Kohers et al. (2004) find that the day-of-the-week effect in the world's largest equity markets was apparent in the 1980s and has since faded out. Schwert (2001) investigates the weekend effect as well as several additional anomalies and concludes that their effect fades out once they have been identified and documented in the academic literature, a process that leads to increased market efficiency.

Revealing market anomalies leads investors to exploit these strategies and consequently reverse them. Therefore, even if consistent abnormal profits can occur, they will not persist after being documented. With this in mind, chartists have no incentive to reveal their "tricks of the trade". In addition, a chartist who is consistently successful in identifying and exploiting anomalies would not only keep a successful strategy confidential while it is still valid, but would also avoid announcing it ex post, both because doing so might raise ethical issues and because spending her time identifying the next profitable strategy outweighs the opportunity cost of documenting a now-void strategy. Furthermore, given the possibility that profitable strategies based on technical analysis exist, chartists are in no case of "no value". They are the ones that make the market more efficient by eliminating strategies that lead to abnormal profits. Historically, strategies have been documented and consequently eliminated. This effect shows a considerable change in the structure of the market, indicating growing market efficiency: exploiting market anomalies is a continuous process of refining the rough edges of the market.

¹ The subsets of calendar effects that are relevant to our study are (i) the day-of-the-week effect and (ii) the month-of-the-year effect, the latter

Lastly, there are some concerns about the procedures under which empirical work on market efficiency might not correspond to formal statistical inferences. Sullivan, Timmermann and White (2001) introduce the concept of data snooping bias arising from the inability to generate new data sets on which to test the hypotheses independently of the data that led to a particular theory. Furthermore, as the number of studies on any single data set increases, inferential biases are also expected to increase.

The present study makes use of Jensen's alpha, a measure of performance for assets. The foundations of the model are based on the CAPM formula, allowing for forecasting ability through the intercept. The model is an elegant way to test the EMH; recall that the EMH suggests that it is impossible to consistently beat the market. Therefore, in order to measure the performance of an asset or strategy, we cannot just compare returns; we need to adjust for risk, and also make the reasonable assumption of risk-free borrowing and lending (although it might be unreasonable when it becomes unrestricted). Performing better than the market, after adjusting for risk, contradicts the EMH. Jensen investigates 115 open-end mutual funds from 1945 to 1964 and finds that the average alpha was -0.011, a value close to zero, providing evidence supporting market efficiency in the presence of transaction costs (Jensen M. C., 1968).

The next part will introduce the data, clarify the selection process, tackle the problem of missing values and finally present some descriptive statistics.

4. Data

This study is conducted in an empirical format by using secondary data from Mt.Gox on a daily basis, using a sample of 960 daily observations, covering the trading days of the period from May 1st, 2011 to December 15th, 2013.

Over the thirty-one-and-a-half months covered, this study includes the record peak of USD 1260 per BTC (reached on November 30th, 2013) and a low of USD 2.29 (reached on November 21st, 2011). The data were collected from the Mt.Gox exchange based in Tokyo, Japan, because it has been the largest BTC exchange by volume for USD/BTC transactions throughout the period under investigation (Cieśla). Since the market for BTC is continuous (24 hours/day, 7 days/week, 365 days/year), the price of BTC is the price at time 18:15:05 (UTC) daily.

This period, encompassing the majority of the observations since the launch of BTC, omits the first year of operation because the market is considered highly illiquid. To demarcate the illiquid from the liquid period, Ladislav Kristoufek investigates the number of ticks with a non-zero return during intervals of 8 hours, and finds that for the starting days of existence of the BTC market, there was practically no liquidity, while in May 2011, liquidity reaches adequate levels (Kristoufek, 2013).

Within the period of investigation, there are two cases of irregularities that were closely inspected. The first instance occurred on June 20th, 2011, when a security breach caused the price of USD/BTC to fraudulently drop to USD 0.01 per BTC. The event was documented in a press release by Mark Karpeles, the CEO of Tibanne Co. Ltd, the Tokyo-based company that acquired Mt.Gox Co. Ltd. in March 2011. Mark Karpeles explained that there had been a hack of the account used to pay the previous owner a percentage of commissions, as stated in the purchase agreement. On the 20th of June, a hacker used the credentials of a Mt.Gox trader to transfer a large amount of BTC to his personal account. He allegedly used the company's software to sell them all nominally, creating a massive "ask" order at any price. Within minutes the price reverted to its correct market value (Karpeles, 2011).

In the second instance, Mt.Gox suspended trading from 11 April 2013 until 12 April 2013 for a "market cooldown" (htt1). The price fell to a low of USD 55.59 per BTC after the resumption of trading before stabilizing above USD 100. In the latter case, based on the law of one price, I decided to use the price of the next largest BTC/USD exchange market by volume at that time, namely Bitstamp. Both irregularities are simply omitted from the dataset under investigation.

The continuously compounded return is used to measure the return of the specified period, using the following equation:

r_t = ln(p_t / p_{t-1})    (1)

where r_t is the return, ln is the natural logarithm, p_t is the current price and p_{t-1} is the previous price. The reason for using log returns of a series is that relative changes in the variable are easier to compare and interpret, especially in stochastic time series modeling.
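As a minimal sketch of Equation (1) in code (assuming a pandas Series of daily closing prices; the sample values and the name `price` are illustrative assumptions, not data from the paper):

```python
import numpy as np
import pandas as pd

# Hypothetical daily BTC/USD closing prices indexed by date
price = pd.Series([4.85, 5.10, 5.02, 5.30],
                  index=pd.date_range("2011-05-01", periods=4, freq="D"))

# Continuously compounded (log) return: r_t = ln(p_t / p_{t-1})
lret = np.log(price / price.shift(1)).dropna()
print(lret)
```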


Table 1: Descriptive Statistics & Normality

Variable: lret
Number of observations: 960
Mean: 0.0055
Standard deviation: 0.0624
Minimum: -0.396
Maximum: 0.515
Pr(Skewness): <0.001; Pr(Kurtosis): <0.001; Pr(Skewness & Kurtosis): <0.001

Breusch-Godfrey LM test for autocorrelation: p-value < 0.001

Summarizing some descriptive statistics (Table 1), the continuously compounded rate of return, namely lret, has a mean of 0.0055 and a standard deviation of 0.0624. Within the data boundaries, the minimum value of lret is -0.396 and maximum of 0.515. Next I check whether lret is normally distributed.

I employ a test for normality which combines skewness and kurtosis tests, as described by D'Agostino, Belanger, and D'Agostino (1990). We can reject the hypothesis that lret is normally distributed: the null hypothesis of normality is rejected at α = 0.01. This result foreshadows the rest of the study; the EMH is based on the underlying principle of the Central Limit Theorem, which states that the mean of a sample of independent random variables is approximately normally distributed as the number of observations becomes sufficiently large. If lret plotted over time were normally distributed, then the returns would be random and the EMH would be practically established.

Following from non-normality, serial correlation of the error terms is undoubtedly present; we reject the null hypothesis of no serial correlation using the Breusch-Godfrey LM test for autocorrelation at the 5% significance level (Table 1). In order to tackle serial correlation, the task is nothing more than the one we have already set out: adding lagged daily returns to the model eliminates serial correlation. Once we have incorporated the significant lags in our model, serial correlation should be eliminated, a claim we will revisit in the next chapter.
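Both pre-tests can be sketched as follows (a sketch, not the paper's code; `scipy.stats.normaltest` implements the D'Agostino-Pearson skewness-kurtosis omnibus test, and the simulated `lret` series is a placeholder for the actual returns):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(0)
lret = rng.normal(0.0055, 0.0624, size=960)       # placeholder for the actual return series

# D'Agostino-Pearson omnibus test (combined skewness and kurtosis)
stat, p_norm = stats.normaltest(lret)
print(f"normality p-value: {p_norm:.4f}")

# Breusch-Godfrey LM test for serial correlation, run on a mean-only regression
ols_res = sm.OLS(lret, np.ones((len(lret), 1))).fit()
lm_stat, lm_p, f_stat, f_p = acorr_breusch_godfrey(ols_res, nlags=7)
print(f"Breusch-Godfrey p-value: {lm_p:.4f}")
```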

For the estimation of Jensen’s alpha, the Fama/French factors are employed. The data were collected from the personal website of Kenneth French (2014), where current research returns are being posted frequently “for investors seeking benchmarks for asset class portfolio returns”. Calculating the Rm takes into account all NYSE, AMEX, and NASDAQ firms that have a “Center for Research in Security Prices” (CRSP) unique and permanent issue identification number, or share code, of 10 or 11 at the beginning of month t, good shares and price data at the beginning of t, and good return data for t. The risk free rate is the one-month Treasury bill rate.

One of the main drawbacks of using the Fama/French data is that we compare a continuous market, the market of bitcoins, with an index, which is not calculated during Saturday and Sunday. For this reason we omit from the dataset all weekend data, a decision that comes with a cost of loss of observations.

A further assumption for this estimation of alpha is that the NYSE, AMEX, and NASDAQ index is an appropriate measure for the base of market return. The bitcoin market by definition extends beyond the U.S. borders, operating on foreign exchange markets; therefore the Fama/French index may not be the best measure of the market. To counteract this problem, I repeat the estimation of Jensen's alpha, this time using the USDX index, which measures the performance of the USD against a weighted basket of six major world currencies: the Euro (EUR), Japanese Yen (JPY), British Pound (GBP), Canadian Dollar (CAD), Swedish Krona (SEK) and Swiss Franc (CHF).

The purpose of the next part is to specify the two models subsequently used and to outline the methodology and different statistical tests that are implemented.

5. Specification and Methodology

The current paper investigates the EMH using two models. The first is an autoregressive model, which incorporates a day-of-the-week effect. The second is an elegant approach to CAPM and EMH introduced by Jensen (1968).

An autoregressive model of order p, denoted AR(p), specifies that the output variable depends linearly on its previous p values. An autoregressive model is a valid instrument here precisely because the weak form of the EMH asserts that prices of a security are independent of past prices; finding significant autoregressive coefficients contradicts that assertion. An AR(p) model is specified as follows:

r_t = β_0 + β_1 r_{t-1} + β_2 r_{t-2} + … + β_p r_{t-p} + ε_t    (2)

where the β are the autoregression coefficients of the model, r_t is the series under investigation, t is the time index, and ε_t is the residual, which is normally distributed with zero mean.

The first step is to check against some of the restrictions posed by autoregressive models; primarily, the time series under investigation needs to be stationary, which leads us to tests for unit roots. An implementation of the augmented Dickey-Fuller (ADF) test is used to check whether the time series is stationary. The ADF test allows for higher order autoregressive processes. In addition, a modified version of the ADF test as well as the Phillips-Perron test in combination with Schwert's rule of thumb is employed.
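A sketch of these stationarity checks (using the `arch` package's unit root tests and a simulated placeholder series; the tooling is an assumption, since the paper does not state which software was used):

```python
import numpy as np
from arch.unitroot import ADF, DFGLS, PhillipsPerron

rng = np.random.default_rng(1)
lret = rng.normal(0.0055, 0.0624, size=960)    # placeholder for the actual return series

# Schwert's rule of thumb for the maximum lag order: p_max = [12 * (T/100)^(1/4)]
T = len(lret)
p_max = int(12 * (T / 100) ** 0.25)
print("Schwert p_max:", p_max)

print(ADF(lret, lags=p_max).pvalue)             # augmented Dickey-Fuller
print(DFGLS(lret, lags=p_max).pvalue)           # Elliott-Rothenberg-Stock (DF-GLS)
print(PhillipsPerron(lret, lags=p_max).pvalue)  # Phillips-Perron
```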


In order to decide on the number of lags included in the AR(p) model, I use some of the most commonly used information criteria as statistical measures of fit, namely the Schwarz Bayesian Information Criterion (SBIC) and the Akaike Information Criterion (AIC). For reasons of corroboration, I use both. A backwards elimination of lags is then implemented in order to identify the order of the autoregression.
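For illustration, the information-criterion search could be sketched with statsmodels' `ar_select_order` (again an assumption about tooling; the simulated series stands in for the actual returns):

```python
import numpy as np
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(2)
lret = rng.normal(0.0055, 0.0624, size=960)   # placeholder for the actual return series

# Search AR(0)..AR(10) and report the lag set preferred by each criterion
sel_aic = ar_select_order(lret, maxlag=10, ic="aic", trend="c")
sel_bic = ar_select_order(lret, maxlag=10, ic="bic", trend="c")
print("AIC-preferred lags:", sel_aic.ar_lags)   # the paper's data prefer 7 lags under AIC
print("BIC-preferred lags:", sel_bic.ar_lags)   # and 1 lag under SBIC
```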

The above tests indicate a significant effect which is not accounted for in a progressive autoregressive model, namely significant lags that are not consecutive. First, motivated by the extensive literature showing significant day-of-the-week effects, I test for this effect. For this purpose, a model including dummy variables for each day of the week is constructed. The model in its extended version is as follows:

r_t = α + Σ_{i=1}^{p} β_i r_{t-i} + Σ_{j=1}^{6} γ_j Day_{jt} + ε_t    (3)

where r_t is the continuously compounded rate of return with t the time index, α is the intercept, β_i are the autoregression coefficients, γ_j is the coefficient of the dummy variable Day_j, which takes the value 1 if date t falls on the jth day of the week and 0 otherwise, and ε_t is the residual, which is normally distributed with zero mean.

As extensively illustrated in the results, there is no evidence of a specific day-of-the-week effect, although there is evidence of significant non-consecutive lags. For this reason the model is finally reconfigured to include these lags as seasonal lags, omitting the daily dummy variables completely:

r_t = β_0 + β_1 r_{t-1} + β_2 r_{t-5} + β_3 r_{t-7} + ε_t    (4)

where, as above, the β are the autoregression coefficients of the model, r_t is the series under investigation, t is the time index, and ε_t is the residual, which is normally distributed with zero mean.

Finally, the portmanteau test for white noise is employed to detect the possibility of autocorrelation of the residuals. In addition, the residuals are regressed against their lagged values to further investigate absence of serial correlation in the constructed model.

The second model is another time-series regression, which estimates Jensen's alpha, a measure of performance. The foundations of the model are based on the CAPM formula, further assuming observable realizations rather than expectations, and allowing for forecasting ability through the intercept (Equation 5).

R_{qt} − R_{Ft} = α_q + β_q (R_{Mt} − R_{Ft}) + η_{qt}    (5)

In the above equation, R_{qt} is the realized return of a security, R_{Ft} is the one-period risk-free interest rate, R_{Mt} is the one-period realized market return consisting of an investment in each asset in the market weighted in proportion to the total value of all assets in the market, β_q represents the volatility of the portfolio with respect to the market volatility, or a measure of systematic risk, and η_{qt} is the error term. The coefficients α and β are assumed to be stationary (they are not subscripted by t). Note that Equation 5 holds for any length of time period, assuming returns are continuously compounded (Eq. 1). In the traditional CAPM version, introduced by Sharpe and Lintner, the expected CAPM risk premium fully explains the expected value of an asset's excess return, implying a zero Jensen's alpha.

The intercept, alpha (α_q), is a measure of performance. A positive value of alpha means that the asset or portfolio under investigation has performed better than the market, while a negative alpha indicates underperformance. Performing better than the market, after adjusting for risk, disputes the EMH.
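A sketch of the regression in Equation (5) follows (hypothetical column names and placeholder data; the intercept of the fitted model is the Jensen's alpha estimate):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 515
df = pd.DataFrame({
    "btc_ret": rng.normal(0.005, 0.06, n),   # placeholder bitcoin daily returns
    "mkt_ret": rng.normal(0.0004, 0.01, n),  # placeholder market returns (Fama/French or USDX)
    "rf": np.full(n, 0.0001),                # placeholder one-month T-bill rate
})

y = df["btc_ret"] - df["rf"]                                        # excess return of bitcoin
x = sm.add_constant((df["mkt_ret"] - df["rf"]).rename("mkt_excess"))
fit = sm.OLS(y, x).fit()

print("Jensen's alpha:", fit.params["const"])   # intercept = alpha estimate
print("beta:", fit.params["mkt_excess"])
print("p-values:", fit.pvalues.to_dict())
```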

The next part will extensively cover the results of both the augmented autoregressive model with all the relevant statistical tests and the estimation of alpha.

6. Results

The following chapter will present the results in a series of sub-chapters accompanied by their economic implication and relevance.

Unit Root Tests

An idea of whether a trend exists can be obtained by simply plotting the variable lret against the time variable finaldates (Figure 1). Although the time series seems stationary, for a more thorough investigation we start with a unit root test.

The Dickey-Fuller test, developed by David A. Dickey and Wayne A. Fuller (Fuller, 1979), tests whether a unit root exists in an autoregressive model of first order. In the present study the augmented Dickey-Fuller test is used because the specified model is of higher order than one, denoted AR(p), with p the order of the autoregression. The rationale behind the test is that if the series r_t is stationary then it features a tendency to return to a constant; negative changes tend to be followed by positive changes and vice versa.

The null hypothesis states that the variable contains a unit root, and the alternative is that the time variable is stationary. The outcome of the test clearly rejects the null hypothesis for all orders of autoregression up to order 10 (Appendix 1a).

In addition to the augmented Dickey Fuller test, I also implement a modified version which has greater statistical power, proposed by Elliott, Rothenberg, and Stock (1996). The latter, which is more powerful especially when there is an unknown mean or trend, also rejects the (same) null hypothesis, in this case up to 20 lags (Appendix 1b), therefore there is no unit root present and the time-series under investigation is stationary.

In general, it is not enough to use the Dickey-Fuller test alone; it is advisable to use additional methods to be confident about the result. Hence, an implementation of the Phillips-Perron test is employed. To calculate the Phillips-Perron statistic, the order of lags is decided by implementing Schwert's rule of thumb listed below:

p_max = [12 (T/100)^{1/4}]    (6)

where T is the number of observations (days). The rationale of the above rule is that adding more lags leads to a loss of power. Setting the null hypothesis to be that the series is non-stationary, the test rejects the null hypothesis (Appendix 1c). Consequently, the Phillips-Perron test supports the Dickey-Fuller test results.

In the light of the above findings, the time-series are stationary and the use of Ordinary Least Squares (OLS) for further analysis of the time-series makes sense. This is important because if the stochastic process is non-stationary, the use of OLS can produce invalid estimates.

Tests for Number of Lags

The first step is to determine the order of the autoregression: the optimal number of lags. To decide on the number of lags, I use the Schwarz Bayesian Information Criterion (SBIC) and the Akaike Information Criterion (AIC), because these suit a study of daily data while being asymptotically efficient. The above information criteria balance the marginal benefit of adding additional lags to the model against the marginal cost of increased uncertainty.

Schwarz states that qualitatively both the AIC and SBIC provide "a mathematical formulation of the principle of parsimony in model building", but quantitatively rejects Akaike's asymptotic optimality (Schwarz, 1978). Burnham and Anderson argue that the AIC has theoretical advantages over the BIC (2002).

For reasons of corroboration and because both information criteria have supporters and both are widely used, I implement both.

Akaike's information criterion is defined as

AIC = −2 ln L + 2k    (7)

where ln L is the natural logarithm of the maximized likelihood of the model and k is the number of parameters in the model (Akaike, 1974).

Schwarz's Bayesian information criterion (1978) is defined in equation 8:

SBIC = −2 ln L + k ln N    (8)

As above, ln L is the natural logarithm of the maximized likelihood of the model and N is the sample size.

Table 2: Information Criteria

Lag   AIC         SBIC
0     -2.71891    -2.71376
1     -2.75821    -2.74792*
2     -2.76069    -2.74525
3     -2.7586     -2.73802
4     -2.75719    -2.73139
5     -2.77423    -2.74335
6     -2.7737     -2.73768
7     -2.78006*   -2.73889
8     -2.77894    -2.73262
9     -2.77752    -2.72605
10    -2.77761    -2.72099
Endogenous: lret. Sample: 11 – 952. Number of observations = 942.

The above information criteria do not involve hypothesis testing and thus are designed explicitly for model selection. Their task is merely solving the basic problem of choosing the appropriate model by evaluating a loss function, defined as the probability of making incorrect decisions: adding more lags leads to unnecessary parameter estimation, while omitting lags runs the risk of neglecting important information contained in distant lags. In order to integrate this effect, the second terms in both equations act as 'penalty terms': they increase as more lags are included in the specification, making the criterion value higher, which means less suitable; minimizing the AIC and SBIC values leads to a better-fitting model.

The optimal number of lags according to the AIC is seven (7), while the SBIC suggests the data better fit the model when just one lag is included (Table 2). In general, the AIC leads to the same or a broader model, because the penalty term of the SBIC is stricter; the BIC penalizes model complexity more heavily. In this case, though, the difference is substantial, which is not explained solely by the penalty term. A closer look at the SBIC results reveals that, although it has a global minimum at lag 1, it also exhibits a local minimum at lag 7, confirming the AIC results. When an information criterion indicates a preferred model at the seasonal period (e.g. at lag 7 in daily data, at lag 4 for quarterly data or at lag 12 for monthly data), this indicates that seasonality has not been properly accounted for in the model, which will be addressed in the next chapter.

In order to conclude on the appropriate number of lags, and to pinpoint the reason for the discrepancy between the AIC and SBIC, a series of F and t-tests is implemented. This method is also called stepwise elimination. The stepwise elimination method initially fits a high-order model with many autoregressive lags and then sequentially removes autoregressive parameters until all remaining autoregressive parameters have significant t-tests.

Running a series of F-tests starting from lag 10, I find that lags 1, 5 and 7 are statistically significant (Table 3). Because autoregressive models are strictly progressive and do not skip lags, the final autoregressive model from stepwise elimination is of first order AR(1).
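Such a backward elimination can be sketched as follows (dropping the least significant lag and refitting until every remaining lag is significant; the lag-matrix construction and the 5% threshold are assumptions about implementation details, and the simulated series is a placeholder):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
lret = pd.Series(rng.normal(0.0055, 0.0624, size=960))  # placeholder return series

# Build a lag matrix L1..L10
lags = pd.concat({f"L{i}": lret.shift(i) for i in range(1, 11)}, axis=1).dropna()
y = lret.loc[lags.index]

cols = list(lags.columns)
while cols:
    fit = sm.OLS(y, sm.add_constant(lags[cols])).fit()
    pvals = fit.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:          # all remaining lags significant: stop
        break
    cols.remove(worst)                # drop the least significant lag and refit

print("Remaining lags:", cols)        # the paper reports lags 1, 5 and 7
```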

Seasonality

The findings in Table 3 do not follow the typical decaying behavior of an autoregressive model; nevertheless, the phenomenon needs to be tackled with the tools we have in hand. Seasonality tools allow an autoregressive model to take into account cyclical behavior of the dependent variable. Seasonality can be included in a regression model by (i) seasonally adjusting the variables, (ii) including seasonal lags or simply by (iii) including seasonal dummies.² In the present study, we first include seasonal dummies and then include seasonal lags to account for seasonality.

The way to find out if there is a day-of-the-week effect present in the data is to test statistically if higher or lower returns of specific days of the week persist over time. The following regression allows us to test for differences in mean return across all the trading days (equation 9).

                                                                                                               

² An attempt to explain whether the anomaly is due to an effect commonly known as the "day-of-the-week effect" can be found in Appendix 2.

Table 3: Stepwise Elimination Process (p-values)

Regressand: lret. Each row lists the p-values of that regressor across the models (Models I–X) in which it appears.

α:        0.026, 0.014, 0.015, 0.018, 0.038, 0.049, 0.035, 0.038, 0.034, 0.037
L1.lret:  <0.001, <0.001, <0.001, <0.001, <0.001, <0.001, <0.001, <0.001, <0.001, <0.001
L2.lret:  0.056, 0.064, 0.076, 0.082, 0.081, 0.169, 0.203, 0.237, 0.153
L3.lret:  0.908, 0.774, 0.930, 0.929, 0.923, 0.890, 0.938, 0.916
L4.lret:  0.404, 0.966, 0.937, 0.938, 0.951, 0.956, 0.986
L5.lret:  <0.001, <0.001, <0.001, <0.001, <0.001, <0.001
L6.lret:  0.173, 0.059, 0.061, 0.063, 0.068
L7.lret:  0.005, 0.018, 0.018, 0.016
L8.lret:  0.076, 0.054, 0.115
L9.lret:  0.334, 0.223
L10.lret: 0.138

Significance level α = 0.05


r_t = α + Σ_{i=1}^{p} β_i r_{t-i} + γ_1 Day_{1t} + γ_2 Day_{2t} + γ_3 Day_{3t} + γ_4 Day_{4t} + γ_5 Day_{5t} + γ_6 Day_{6t} + u_t    (9)

where Day_{it} are indicator dummy variables that take the value 1 if date t falls on the ith day of the week and 0 otherwise. In representing the days, we use 0 for Mondays, 1 for Tuesdays, 2 for Wednesdays, 3 for Thursdays, 4 for Fridays, 5 for Saturdays and 6 for Sundays. In order to avoid perfect multicollinearity, we omit one categorical dummy variable, namely Day_0. The coefficients γ_i represent the mean excess daily returns on the particular days of the week and u_t is the error term. It is important to note that the model includes all dummies, independent of their significance.

We test the global null hypothesis H0: γ_1 = γ_2 = γ_3 = γ_4 = γ_5 = γ_6 = 0. The F statistic is 0.92, therefore equality cannot be rejected (Appendix 2). This indicates that there is no evidence of a day-of-the-week effect in BTC returns over the period May 1st, 2011 to December 15th, 2013. These results are of limited help in explaining the information criteria results; the backward elimination, however, has revealed which lags are significant.
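A sketch of the day-of-the-week test in Equation (9), using a dummy encoding and a joint F-test (the calendar index, the single autoregressive lag and the placeholder data are simplifying assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
idx = pd.date_range("2011-05-01", periods=960, freq="D")
df = pd.DataFrame({"lret": rng.normal(0.0055, 0.0624, 960)}, index=idx)
df["lag1"] = df["lret"].shift(1)
df["day"] = df.index.dayofweek        # 0 = Monday ... 6 = Sunday

# C(day) creates the day dummies; Monday (0) is the omitted base category
fit = smf.ols("lret ~ lag1 + C(day)", data=df.dropna()).fit()

# Joint test that all day-of-the-week coefficients are zero
hyp = ", ".join(f"C(day)[T.{d}] = 0" for d in range(1, 7))
print(fit.f_test(hyp))
```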

 

Accounting for seasonality using dummy variables was not fruitful, therefore we turn to seasonal lags. Since lags 1, 5 and 7 are statistically significant (Table 3), the model becomes:

r_t = β_0 + β_1 r_{t-1} + β_2 r_{t-5} + β_3 r_{t-7} + ε_t

where the β are the autoregression coefficients of the model, r_t is the series under investigation, t is the time index, and ε_t is the residual. The model is an autoregressive model comprising the 1st, 5th and 7th lagged values of the compounded rate of return itself. The model has an F-value of 5.9 (p-value < 0.001), which means that the lags are jointly significant at a significance level of 5%. Because autoregressive models are strictly progressive and do not skip lags, the final autoregressive model is of first order, AR(1), which is expanded to include seasonal lags.
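The seasonal-lag specification of Equation (4) can be estimated by OLS along these lines (a sketch with a placeholder series):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
lret = pd.Series(rng.normal(0.0055, 0.0624, 960))   # placeholder return series

# Regressors: the 1st, 5th and 7th lags of the return series
X = pd.DataFrame({
    "L1": lret.shift(1),
    "L5": lret.shift(5),
    "L7": lret.shift(7),
}).dropna()
y = lret.loc[X.index]

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.summary())        # coefficients, t-tests, joint F statistic and R^2
```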

Further evaluation of the model will be discussed after we take a closer look at the error terms of the new model.

Error Term

A closer analysis of the error term itself is required because serial correlation of the error term in regression analysis using time series violates one of the fundamental OLS assumptions. Autocorrelation of the unobserved error terms is possible to identify because it causes autocorrelation in the observable residuals. Since we have included the lags that are significant in our model (Equation 4), autocorrelation should be eliminated.

First we employ the portmanteau test for white noise. Rejecting the null hypothesis of having white noise denotes serial correlation. The p-value of the Portmanteau test statistic is 0.6856, consequently the null hypothesis is not rejected and the residuals are not correlated at a significance level of 5%.

The above test for serial correlation of the error terms indicates an absence of serial correlation, and it is useful as a pre-test for serial correlation because it is valid only for strictly progressive models. In order to better assure the absence of serial correlation for the model (Equation 4), which includes lags 1, 5 and 7, I devise the following regression:

res_t = δ_0 + δ_1 res_{t-1} + δ_5 res_{t-5} + δ_7 res_{t-7} + θ_t    (10)

where the δ are the autoregression coefficients of the model, res_t are the residuals of Eq. 4, t is the time index, and θ_t is the residual. Essentially we construct an additional autoregressive model, this time taking as regressand and regressors the residuals of our model. Testing the hypothesis H0: δ_1 = δ_5 = δ_7 = 0, the F statistic is equal to 0.02 and we do not have enough evidence to reject the null hypothesis; we can safely conclude that the residuals are not significantly correlated with each other (Appendix 3). The absence of serial correlation in the constructed model is in line with our expectations, because including the significant lags of the dependent variable eliminates serial correlation. Therefore there is no need to concern our investigation with serial correlation of the error term, and traditional t and F tests are valid.
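Both residual diagnostics can be sketched as follows (the Ljung-Box statistic is used here as the portmanteau test; the placeholder series and the lag construction are assumptions about implementation details):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(7)
lret = pd.Series(rng.normal(0.0055, 0.0624, 960))        # placeholder return series
X = pd.DataFrame({"L1": lret.shift(1), "L5": lret.shift(5), "L7": lret.shift(7)}).dropna()
res = pd.Series(sm.OLS(lret.loc[X.index], sm.add_constant(X)).fit().resid)

# Portmanteau (Ljung-Box) test for white noise in the residuals
print(acorr_ljungbox(res, lags=[10]))

# Auxiliary regression of residuals on their own 1st, 5th and 7th lags (Equation 10)
R = pd.DataFrame({"L1": res.shift(1), "L5": res.shift(5), "L7": res.shift(7)}).dropna()
aux = sm.OLS(res.loc[R.index], sm.add_constant(R)).fit()
print(aux.f_test("L1 = 0, L5 = 0, L7 = 0"))
```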

Evaluating the Autoregressive Model

The return of BTC for any given day is correlated with the return of the BTC of the previous day, as well as the return of five (5) and seven (7) days ago.

The first effect is reasonably explained by low informational efficiency; specifically, there is an observed delay in the incorporation of new information into the price of bitcoins. The delay is not prolonged beyond the one-day lag because the decay takes place in one step. Possibly the decaying process can be broken down into smaller intra-day parts, which is a suggestion for further research.

Table 4: Autoregressive Model with Seasonal Lags
Model specification: regressand lret; regressors L1.lret, L5.lret, L7.lret

lret        Coefficient   P>|t|
Constant    0.004182      0.035
L1.lret     0.2041        0.001
L5.lret     0.1346        0.024
L7.lret     -0.0846       0.056
Number of observations = 953; F(3, 930) = 5.9; R² = 0.065; significance level α = 0.05

Next, the inclusion of the 7th lag indicates a weekly effect. When the information criteria indicate a preferred model at the seasonal period, the regression is cyclical in nature. This effect could not be attributed to a day-of-the-week effect; there is no specific day that exhibits higher returns on average. Rather, the inclusion of the 7th lag suggests that the return of any given day of the week is correlated with the return of a week beforehand. This is also a sign of informational inefficiency, because one can use this anomaly to predict future returns.

Finally, there is also an intuitive explanation behind the significance of the 5th lag, which stems from the dataset itself and the source used to draw the data. The BTC exchange platform under investigation, namely Mt.Gox, requires a verification period in order to activate a new account and purchase BTC or engage in exchanging. This period usually takes five (5) business days. The fact that counting is in business days, therefore not including weekends, can be an additional reason for the significance of the 7th lag. The market for BTC is relatively new and expanding, therefore new users join daily, which further supports the claim that newcomers might cause the 5th lag effect.

Supporting this argument is the positive coefficient of the 5th lag, especially if one considers the mindset of a new investor who has just received her account credentials and is eager to invest in BTC following her prolonged anticipation, coupled with the 'animal spirit' of 'joining the bandwagon'. This is of course in conflict with the EMH, where investors are rational and have a long-term perspective determined by changes in long-term income flows. However, BTC lacks intrinsic value; its value is determined by a combination of indefinite dynamics. For example, the price reflects the cumulative belief that BTC is a valid alternative to fiat currencies. It also reflects news coverage, either positive or negative, endorsements coming from celebrities and the number of BTC vendors.

Whether the above anomalies persist in the long run exactly as documented is highly improbable; every inefficiency, once identified, gradually disappears. It is not irrational to assume, however, that new inefficiencies will eventually arise; the market for BTC is still at an infant stage, and while it evolves new inefficiencies will come up, at least until the market matures, if of course the circumstances let it mature.

Jensen's alpha

We run the least squares regression as specified in Equation 5: the excess returns of bitcoins on the excess return of the market. The results are disclosed in Table 5. The market returns are estimated as suggested by Fama/French (Model I) and with the USDX index (Model II).

Table 5: Jensen's alpha
Regressand: R_{qt} − R_{Ft}

                   Model I (Fama/French)      Model II (USDX)
                   Coefficient    P>|t|       Coefficient    P>|t|
α_q                0.0078         0.032       0.009          0.005
R_{Mt} − R_{Ft}    0.005          0.003       -0.394         0.687
Number of observations = 515

The interpretation of the betas is straightforward. In Model I, the bitcoin returns are almost entirely unrelated to movements of the Fama/French index (β close to zero); this is anticipated because the bitcoin market extends beyond the U.S. borders, operating on foreign exchange markets, therefore the Fama/French index is a very narrow definition of the appropriate market. In Model II, the bitcoin returns generally move in the opposite direction to the USDX index (β < 0), which is also justified by the nature of foreign exchange markets; the USD and the six currencies represented by the USDX index are direct substitutes trading for each other. However, the p-value is large, therefore β is not significantly different from 0.

In determining a superior security, one that outperforms the market, we use the conventional test H0: α = 0 versus the one-sided alternative H1: α > 0. Given the results in Table 5, the p-value associated with the constant term α in both models is smaller than 0.05. Therefore we reject the null hypothesis that α is equal to zero at the 5% significance level; the risk-adjusted returns of bitcoins are superior to market returns.

It is important to note that the model assumes constant risk and volatility; if changes in risk and volatility are predictable, then it follows that the EMH holds, since predictable returns are a result of predictable changes in risk (Ferson & Harvey, 1991). This requires a closer study of events, which falls under the semi-strong EMH, and is a subject warmly suggested for further research.

Nevertheless, assuming constant volatility over the subject period, the results indicate that the EMH can be rejected based on the alpha value that is greater than zero; selecting bitcoins as a security has outperformed the risk-adjusted market returns. The excess return of almost 1%, albeit small, is significant given the absence of transaction costs.

In "The Performance of Mutual Funds in the Period 1945–1964" (1968), Jensen investigates the returns on the portfolios of 115 open-end mutual funds, estimates an alpha of -0.011 and concludes that on average the selected funds earned about 1.1% less per year than expected given their systematic risk. Furthermore, in the presence of brokerage commissions, the investigated funds were not able to outperform a buy-and-hold strategy. In contrast, the alpha estimated in the present study (α = 0.0078) is positive and considerably higher than the one reported by Jensen. Taking into account the absence of brokerage costs, we conclude that the market for BTC was not efficient over the period studied.

7. Conclusion

Of all recent developments in financial economics, the EMH has achieved the widest acceptance among scholars and has prevailed in textbooks. It has survived the empirical challenge of long-term anomalies on the grounds that transaction and information costs eliminate these effects.

The market for BTC is a natural candidate for challenging the voluminous empirical work on the EMH because it provides a new dataset with no transaction fees. Employing a series of statistical tests, the current paper investigates daily prices on the Mt.Gox bitcoin market for the period May 1st 2011 to December 15th 2013 and finds that the price of bitcoins is significantly autocorrelated: future prices can be predicted from past prices. In a period when the market is still young, full of uncertainty as well as hopes for high returns, lower informational efficiency is to be expected. The model explains only 6% of the response variation, which is low. However, in the absence of transaction costs, an investor can make abnormal profits.

In a period when currency remains pivotal to economic policy and the exchange rate is still a key instrument of external influence, further study of the bitcoin market and of innovative electronic currencies in general is highly recommended.

Bibliography

MtGox. (n.d.). [Tweet]. Retrieved from https://twitter.com/MtGox/status/322355614414147588
Akaike, H. (1974). Information theory and an extension of the maximum likelihood principle. Automatic Control, 19(6), 716-723.
Brooks, C., & Persand, G. (2001). Seasonality in Southeast Asian stock markets: some new evidence on day-of-the-week effects. Applied Economics Letters, 8(3), 155-158.
Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. New York: Springer.
Chang, E. C., Pinegar, J. M., & Ravichandran, R. (1993). International evidence on the robustness of the day-of-the-week effect. Journal of Financial and Quantitative Analysis, 28(4), 497-513.
Cieśla, K. (n.d.). Exchanges. Retrieved from Bitcoinity: http://bitcoinity.org/markets/list?currency=USD&span=6m
Coase, R. H. (1937). The Nature of the Firm. Economica, 4, 386-405.
Cross, F. (1973). The behavior of stock prices on Fridays and Mondays. Financial Analysts Journal, 67-69.
D'Agostino, R. B., Belanger, A., & D'Agostino, R. B., Jr. (1990). A suggestion for using powerful and informative tests of normality. The American Statistician, 44(4), 316-321.
Dickey, D. A., & Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366), 427-431.
Doran, J. S., Peterson, D. R., & Wright, C. (2010). Confidence, opinions of market efficiency, and investment behavior of finance professors. Journal of Financial Markets, 13(1), 174-195.
Doyle, J. R., & Chen, C. H. (2009). The wandering weekday effect in major stock markets. Journal of Banking and Finance, 33(8), 1388-1399.
Elliott, G., Rothenberg, T. J., & Stock, J. H. (1996). Efficient Tests for an Autoregressive Unit Root. Econometrica, 64(4), 813-836.
Fama, E. F. (1991). Efficient Capital Markets: II. The Journal of Finance, 46(5), 1575-1617.
Fama, E. F. (1965). The behavior of stock-market prices. The Journal of Business, 38(1), 34-105.
Fama, E. F., & French, K. R. (1988). Permanent and Temporary Components of Stock Prices. The Journal of Political Economy, 96(2), 246-273.

Federal Reserve. (2014, January 17). 2014 Currency Budget. Retrieved February 20, 2014, from Federal Reserve: http://www.federalreserve.gov/foia/2014currency.htm
Ferson, W. E., & Harvey, C. R. (1991). Sources of Predictability in Portfolio Returns. Financial Analysts Journal, 47(3), 49-56.
Frankman, M. (2002, May). Beyond the Tobin Tax: global democracy and a global currency. The Annals of the American Academy of Political and Social Science, 62.
French, K. R. (2014). Data Library. Retrieved January 20, 2014, from Tuck MBA: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
French, K. R. (1980). Stock returns and the weekend effect. Journal of Financial Economics, 8(1), 55-69.
Grossman, S. J., & Stiglitz, J. E. (1980). On the impossibility of informationally efficient markets. American Economic Review, 70(3), 393-408.
Hayek, F. A. (1937). Monetary nationalism and international stability. Longmans.
Helper, S. (2000). Economists and Field Research: "You Can Observe a Lot Just by Watching". American Economic Review, 90(2), 228-232.
Jaffe, J., & Westerfield, R. (1985). Patterns in Japanese Common Stock Returns: Day of the Week and Turn of the Year Effects. Journal of Financial and Quantitative Analysis, 20(2), 261-272.
Jensen, M. C. (1968). The Performance of Mutual Funds in the Period 1945–1964. The Journal of Finance, 23(2), 389-416.
Kamara, A. (1997). New evidence on the Monday seasonal in stock returns. Journal of Business, 63-84.
Karpeles, M. (2011, June 20). Clarification of Mt Gox Compromised Accounts and Major Bitcoin Sell-Off (press release). Retrieved January 3, 2014, from MtGox Co. Ltd: https://www.mtgox.com/press_release_20110630.html
Keim, D. B., & Stambaugh, R. F. (1984). A further investigation of the weekend effect in stock returns. The Journal of Finance, 39(3), 819-835.


Kohers, G., Kohers, N., Pandey, V., & Kohers, T. (2004). The disappearing day-of-the-week effect in the world's largest equity markets. Applied Economics Letters, 11(3), 167-171.
Kristoufek, L. (2013). BitCoin meets Google Trends and Wikipedia: Quantifying the relationship between phenomena of the Internet era. Scientific Reports, 3, 3415.
Malkiel, B. G. (2003). The Efficient Market Hypothesis and Its Critics. The Journal of Economic Perspectives, 17(1), 59-82.
Malkiel, B. G., & Fama, E. F. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work. The Journal of Finance, 25(2), 383-417.
Mises, L. v. (1966). Human action: A treatise on economics (3rd ed.). Chicago: Henry Regnery.
Nakamoto, S. (2009). Bitcoin: A peer-to-peer electronic cash system. Retrieved from http://www.bitcoin.org/bitcoin.pdf
Pearson, K. (1905). The problem of the random walk. Nature, 72, 249.
Poterba, J. M., & Summers, L. H. (1988). Mean reversion in stock prices: Evidence and Implications. Journal of Financial Economics, 22(1), 27-59.
Rayleigh, L. (1880). XII. On the resultant of a large number of vibrations of the same pitch and of arbitrary phase. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 10(60), 73-78.
Ricardo, D. (1816). Proposals for an economical and secure currency: with observations on the profits of the Bank of England, as they regard the public and the proprietors of bank stock. London: John Murray.
Rogalski, R. J. (1984). New Findings Regarding Day-of-the-Week Returns over Trading and Non-Trading Periods: A Note. The Journal of Finance, 39(5), 1603-1614.
Rose, A., & Wincoop, E. v. (2001). National Money as a Barrier to International Trade: The Real Case for Currency Union. The American Economic Review, 91(2), 386-390.
Samuelson, P. A. (1965). Proof that Properly Anticipated Prices Fluctuate Randomly. Industrial Management Review, 6(2), 41-49.
Schwarz, G. (1978). Estimating the Dimension of a Model. The Annals of Statistics, 6(2), 461-466.
StataCorp. (2011). Stata 12 Base Reference Manual. College Station, TX: Stata Press. Retrieved from http://www.stata.com/manuals13/restatic.pdf
Sullivan, R., Timmermann, A., & White, H. (2001). Dangers of data mining: The case of calendar effects in stock returns. Journal of Econometrics, 105(1), 249-286.
Tobin, J. (2000). Financial Globalization. World Development, 28, 1101-1104.
Tobin, J. (1994). Speculators' tax: International policy coordination and national monetary autonomy - why both are needed and how a transaction tax would help. New Economy, 1(2), 104-109.


Appendix 1: Tests for Unit Root

a) Augmented Dickey-Fuller

Number of Lags (q)    Test Statistic
1                     -25.29
2                     -17.34
3                     -14.68
4                     -11.60
5                     -10.36
6                     -10.72
7                     -10.69
8                     -9.77
9                     -9.79
10                    -9.19

1% Critical Value = -3.43
Number of Observations = 959 - q

b) Dickey-Fuller GLS

Number of Lags    Test Statistic
1                 -20.69
2                 -17.61
3                 -14.54
4                 -11.25
5                 -9.98
6                 -10.30
7                 -9.89
8                 -8.98
9                 -9.02
10                -8.23
11                -7.80
12                -7.14
13                -7.04
14                -6.38
15                -6.04
16                -5.87
17                -6.23
18                -5.71
19                -5.32
20                -4.84

1% Critical Value = -3.430
Number of Observations = 938

c) Phillips-Perron test

          Test Statistic    1% Critical Value
Z(rho)    -772.14           -14.1
Z(t)      -25.31            -2.86
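For readers who wish to reproduce these tests, a sketch in Python follows. It is illustrative only (the thesis itself used Stata): statsmodels provides the augmented Dickey-Fuller test, and the third-party arch package provides DF-GLS and Phillips-Perron; the variable `series` is a placeholder for the actual Mt.Gox series, and the simulated random walk below stands in for the real data.

```python
# Sketch of the three unit-root tests above (illustrative, not the thesis' code).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import DFGLS, PhillipsPerron

def unit_root_tests(series: pd.Series, max_adf_lags: int = 10) -> dict:
    """Run ADF for 1..max_adf_lags lags, plus DF-GLS and Phillips-Perron."""
    out = {}
    for q in range(1, max_adf_lags + 1):
        stat, pval, *_ = adfuller(series, maxlag=q, autolag=None)
        out[f"ADF (lags={q})"] = (stat, pval)
    dfgls = DFGLS(series, lags=1)
    pp = PhillipsPerron(series)
    out["DF-GLS (lags=1)"] = (dfgls.stat, dfgls.pvalue)
    out["Phillips-Perron"] = (pp.stat, pp.pvalue)
    return out

# Illustrative usage on a simulated random walk (placeholder data)
rng = np.random.default_rng(1)
series = pd.Series(np.cumsum(rng.normal(0, 1, 959)))
for name, (stat, pval) in unit_root_tests(series).items():
    print(f"{name}: statistic = {stat:.2f}, p-value = {pval:.3f}")
```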


Appendix 2: Test for day-of-the-week effect

lret       Coefficient    P>|t|
α          -0.0013        0.80
L1.lret    0.198          <0.001
Day1       0.0103         0.16
Day2       0.0108         0.14
Day3       0.0075         0.31
Day4       0.0055         0.46
Day5       0.0078         0.30
Day6       -0.0016        0.83

Number of observations = 959
Significance level α = 0.05
F(7, 951) = 6.33
R² = 0.0375
H0: γ1 = γ2 = γ3 = γ4 = γ5 = γ6 = 0
Prob > F = 0.237
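A sketch of how this day-of-the-week regression could be reproduced is given below. It is illustrative only: variable and dummy names are placeholders, the choice of which weekday serves as the omitted baseline is an assumption, and the simulated returns stand in for the Mt.Gox series. The residual regression in Appendix 3 follows the same pattern, with L1.res, L5.res and L7.res in place of the day dummies.

```python
# Sketch of a day-of-the-week dummy regression with a joint F-test
# (illustrative, not the thesis' actual workflow).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def day_of_week_test(lret: pd.Series):
    """Regress log returns on their first lag and six day dummies and
    jointly test H0: all day-dummy coefficients equal zero.
    Assumes `lret` is indexed by a DatetimeIndex."""
    df = pd.DataFrame({"lret": lret})
    df["lret_lag1"] = df["lret"].shift(1)
    for d in range(6):  # Monday..Saturday dummies; Sunday is the baseline
        df[f"day{d + 1}"] = (df.index.dayofweek == d).astype(int)
    df = df.dropna()

    formula = "lret ~ lret_lag1 + " + " + ".join(f"day{d + 1}" for d in range(6))
    fit = smf.ols(formula, data=df).fit()
    joint = fit.f_test(", ".join(f"day{d + 1} = 0" for d in range(6)))
    return fit, joint

# Illustrative usage on simulated returns (placeholder for the Mt.Gox series)
idx = pd.date_range("2011-05-01", periods=960, freq="D")
rets = pd.Series(np.random.default_rng(2).normal(0, 0.05, 960), index=idx)
fit, joint_test = day_of_week_test(rets)
print(joint_test)
```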

Appendix 3: Regressing Residuals

Model Specification
• Regressand: res
• Regressors: L1.res, L5.res, L7.res

res         Coefficient    P>|t|
Constant    -0.0006844     0.72
L1.res      -0.0000268     0.99
L5.res      -0.0080483     0.80
L7.res      0.00248        0.94

Number of observations = 946
Significance level α = 0.05
F(3, 942) = 0.02
H0: δ1 = δ5 = δ7 = 0
Prob > F = 0.9949

