Master Thesis

Finance

Earning abnormal returns via takeover target prediction

By Max Dijkstra

ABSTRACT

This paper shows how the US financial market can be beaten by applying investment strategies to portfolios of potential acquisition targets. It replicates Palepu's (1986) and Brar et al.'s (2009) acquisition likelihood models on a sample of 810 targets and 1,916 non-targets in the US from 2001 to 2015. Thirty-eight variables are chosen to represent seven takeover hypotheses. After testing discriminatory abilities and estimating a binary logit model based on a selection of these variables, I find that target firms are relatively undervalued, endure high leverage and show strong price momentum. Furthermore, their daily trading volume increases compared to non-targets. The model accurately classifies 73% of the firms in the sample based on a predetermined cut-off threshold. An investment portfolio is constructed based on the estimated probabilities for a subsample ranging from 2013 to 2015. This paper compares three investment techniques and finds that a stop-loss investment strategy delivers the most promising results by combining limited downside risk with the upside potential of the takeover likelihood model. I find that the investment portfolios earn significant cumulative abnormal returns compared to broad market indices over an investment horizon of up to one year. Afterwards, the benchmark indices outperform the investment portfolios.

Author: Max Dijkstra

Date: June 13th, 2016
Program: MSc Finance
Student number: s2033984
Supervisor: N. Heida
E-mail: maxdijkstra@chello.nl
Word count: 15,000

Key Words: abnormal returns, investment decisions, M&A, binomial logit, stock selection, takeover prediction, takeover targets


Contents

1. Introduction
2. Takeover prediction
   2.1 M&A in general
   2.2 Takeover models
      2.2.1 Early (non-binomial) models
      2.2.2 Palepu's paper
      2.2.3 Powell and Brar et al.'s critiques
   2.3 Firm characteristics
3. Methodology
   3.1 Research approach
      3.1.1 Selecting target firm characteristics for the model
      3.1.2 Estimating the takeover likelihood model
      3.1.3 Validating the model
      3.1.4 Applying the model
   3.2 Operationalisation of firm characteristics
4. Data
   4.1 Sample construction
   4.2 Data collection
   4.3 Descriptive statistics
5. Model construction
   5.1 T-tests
   5.2 Logit model
6. Results
   6.1 Validation of model
   6.2 Portfolio construction
      6.2.1 Cut-off probabilities
      6.2.2 Buy-and-hold strategy
      6.2.3 Dynamic portfolio strategy
      6.2.4 Stop-loss strategy
   6.3 Discussions
7. Conclusions
Appendix A: Independent variable definitions
Appendix B: Selected independent variable histograms
Appendix C: Descriptive statistics
Appendix D: Correlations of selected variables
Appendix E: Cut-off probability


1. Introduction

Up to the financial crisis of 2007, financial markets experienced an increased transaction volume in what is referred to as the sixth 'merger wave' (Alexandridis et al., 2012). Post-crisis economic circumstances, including low interest rates and recovering stock markets, have again fostered the market for corporate control and created the opportunity and incentive for corporations to switch from organic to acquisition-based growth strategies. With the volume of merger and acquisition (M&A) transactions over 2015 reaching an unparalleled level that surpasses the previous year by 37% (Dealogic, 2015), one understands why academics in economics, psychology and finance have established a vast body of literature regarding M&A activity.

Contemporary academic literature is focused on numerous aspects of the M&A market and varies from M&A strategies and value creation to financing aspects (Cartwright et al., 2012). An interesting aspect of M&A research is the question of who stands to gain from a takeover transaction. According to Jensen and Ruback (1983) target firm shareholders benefit and acquiring shareholders do not lose. This paraphrasing already shows that acquirer shareholder returns are not always positive and might even essentially be zero (Berkovitch and Narayanan, 1993; Bruner, 2002). Conversely, target firm shareholders can experience significant positive stock price changes in mergers and public tender offers (Goergen and Renneboog, 2004; Jensen and Ruback, 1983). One striking example is a recent public takeover where AB InBev offered a 50% premium to SABMiller’s pre-announcement share price (Chaudhuri et al., 2015). Such immense bid premiums imply that target shareholders stand to gain hefty returns over their investments due to these public offers. Nowadays, even speculative trading based on rumoured acquisitions can already drive up stock prices in the pre-announcement period (Gao and Oler, 2012).

This creates a fascinating niche of research that focuses on identifying companies as viable acquisition targets in advance, interesting due to the practical applicability of the results. Numerous studies attempt to use publicly available financial information to construct statistical models that estimate the likelihood of companies becoming acquisition targets, motivated by the potential gains of long positions in these firms' stocks. Overall, these studies have shown prediction accuracies of circa 60-90% (Palepu, 1986). If these models are able to accurately predict potential takeover targets, one should be able to construct a portfolio consisting of potential target companies and earn abnormal returns due to the bid premiums discussed earlier. However, the majority of these studies report insignificant excess returns on their constructed portfolios.

Surveys of research ranging from Langetieg (1978) to Houston (2001) find 21 studies with statistically significant positive cumulative average abnormal returns (CAAR). The CAARs found in this survey range from 7.5% to 126.9% for takeover transactions varying in time period, measurement period and deal type (that is, mergers, tender offers, et cetera). These results are similar to Jensen and Ruback (1983), who report premiums in the range of 20-30%. Such surveys of academic literature explicitly show that the financial market does not accurately predict M&A and that capturing abnormal returns from these transactions should be possible.

Secondly, statistical models may not be accurate enough. Palepu’s (1986) model misclassifies a large number of non-targets: circa 45%. This means that the models may accurately predict the transactions that actually occurred, but type I (a target is incorrectly classified as a non-target) and type II errors (a non-target is incorrectly classified as a target) still prevail, rendering the results of these models useless for investment strategies (Powell, 2004). Brar et al. (2009) propose including measures of a more technical nature, for instance momentum, trading volume and market sentiment measures to increase the accuracy of the prediction models and conclude that significant abnormal returns are possible.

Finally, the results of portfolio analyses may be suboptimal due to the portfolio strategies used in the final yet very important part of these studies. Some studies use statistical models to construct an equally weighted portfolio and back-test its performance using a buy-and-hold strategy over specified time windows (among others Palepu (1986) and Powell (2001)). However, contemporary portfolio theory and the increased prominence of derivative instruments suggest that this kind of strategy is outdated or at least insufficient (Perold and Sharpe, 1988; Shilling, 1992). Palepu (1986) finds that actual targets in his portfolio earn circa 21% return over a 250-trading-day holding period, whilst the entire predicted portfolio earns -1.6%. The results of earlier studies are therefore prone to be misjudged, since the portfolio strategies used are overly simple, warranting careful analysis of other trading strategies and instruments. One could for instance suggest a dynamic portfolio that rebalances as the likelihood ratios change over time. Brar et al. (2009) appear to be the only ones to rebalance their portfolio over time.

Whilst Brar et al. (2009) focus on European targets, this paper constructs a takeover likelihood model for US markets and investigates the efficient market hypothesis (EMH) by examining the possibility of earning abnormal returns. It adds to the existing body of literature by being the first to examine more than one portfolio strategy, to see whether these increase the predictability of the models or the applicability of their results. Furthermore, it is the first paper to examine variables of a technical nature on US markets.


The objective of this paper is twofold. Firstly, it validates existing takeover target prediction models. The theoretical basis from the fundamental model of Palepu (1986) is extended with comments and suggestions from, among others, Powell (2004) and Brar et al. (2009). Secondly, this paper investigates the practical use of the prediction models and uses the results from its validated model to construct a portfolio of stocks in which to invest. As a consequence, the (semi-strong form) efficient market hypothesis is implicitly tested, since one should not be able to earn abnormal returns. Overall, this paper endeavours to answer the question:

Can investment portfolios based on takeover target prediction earn abnormal returns?

To answer this question, this paper identifies seven high-level takeover hypotheses that motivate corporate takeovers. These hypotheses are represented by 38 (financial) ratios and characteristics, which are tested for their discriminatory power on takeover likelihood by means of a two-sample t-test for a difference in means. After selection based on this test, multicollinearity issues and data availability, 15 variables are further tested for significance in binary logit regression models that estimate the individual probability of a takeover. This results in a model with 7 independent variables, confirming the hypotheses that undervalued companies that endure increased leverage and strong price momentum are more likely acquisition targets. After testing the validity of this model, we find that its predictive ability is comparable to that of other papers: the model correctly classifies 73% of the sample. Afterwards, the computed probabilities are used to construct investment portfolios. The returns of these portfolios are subsequently examined to see if the market can be outperformed. This paper finds that with buy-and-hold, takeover timing and stop-loss investment strategies, abnormal returns can be captured for an investment horizon of up to one year. After one year these abnormal returns become negative. However, this may be attributed to strong performance of the benchmarks used, since it is clear that the model performs tremendously well when in-sample performance is reviewed.


2. Takeover prediction

This section lays out the theoretical and empirical background for this paper by elaborating on M&A activity in general. It then discusses the drivers and motives behind takeovers, which are essential to this study, as they constitute the theory behind empirical research that is discussed next. Subsequently, the most influential papers on takeover prediction are surveyed. Afterward, I elaborate on the various theories and hypotheses behind takeovers and takeover probabilities.

2.1 M&A in general

Takeovers entail capturing corporate control of another firm and occur "through merger, tender offer, or proxy contest, and sometimes elements of all three are involved" (Jensen and Ruback, 1983). Once an acquirer has identified a target, it often offers a premium over the target's historical market value. In a hostile takeover the acquiring firm acquires corporate control over the target without management consent. Friendly takeovers, on the other hand, are recommended to shareholders by the management and board of the target.

The most obvious motivation for mergers and acquisitions is the potential synergies that can be gained from such transactions. Synergy implies that the combination of two entities is a cause for extra gains, due to improved operating efficiencies; in a nutshell, it signifies how 1 + 1 = 3. These efficiencies are often based on economies of scale and scope and on skill transfers (Harrison et al., 1991). Inorganic growth (that is, growth through M&A) enables diversification, international expansion and a gain of market share. Moreover, acquiring firms can increase their production capacity or increase the utilization of their current capacity through acquisitions. By using the newly acquired sales channels, the combination can increase turnover (for instance via cross-selling). The increased sales and increased demand for raw materials can then increase the buying power of the new combination, and overhead costs can be reduced. Furthermore, firms acquire valuable knowledge, human capital and technology through M&A, thereby improving firm performance.

2.2 Takeover models

2.2.1 Early (non-binomial) models

[…] were the first to employ a logit analysis instead of multiple discriminant analysis (MDA), thereby increasing the statistical significance of the models. They achieved an accuracy of approximately 90% by testing a variety of financial ratios and using the ones that had a significant influence on acquisition likelihood. The logit analysis was also employed by Hasbrouck (1985), who assessed the financial characteristics of firms that were actual takeover targets.

2.2.2 Palepu’s paper

Whilst none of the early researchers examined the practical implications of their results, Palepu (1986) was a pioneer in linking the predictability of takeover targets to the possibility of earning abnormal returns. His paper is considered a milestone and addresses three methodological flaws in previous research. First, Palepu (1986) notes that equal-share samples lead to biased results, since the ratio of non-target to target firms is not 1:1 in reality. Therefore, he proposes to use a random subsample from the total sample of firms collected. In addition, he argues that arbitrary cut-off probabilities in prediction tests make the prediction accuracies difficult to interpret.

Palepu (1986) does not test a broad variety of financial ratios to determine the independent variables for the tests. He claims the best way of finding independent variables is through suggestions derived from (empirical) academic literature. He finds six drivers for M&A activity in academic literature and uses these to construct six hypotheses, represented by six independent variables that should identify potential targets. These are inefficient management, growth-resource mismatch, industry disturbance, firm size, market-to-book (undervaluation) and price-earnings.

Palepu's paper uses a binomial logit model with the following formula:

p(i, t) = 1 / [1 + e^(βx(i,t))]   (1)

where p(i, t) is the estimated probability that firm i becomes a takeover target in period t, x(i, t) is the vector of firm characteristics and β the vector of estimated coefficients.
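Concretely, a model of this form can be estimated with any standard logit routine. The snippet below is a minimal sketch in Python using statsmodels; the file name and the three illustrative regressors are hypothetical placeholders rather than the thesis's actual specification. Note that statsmodels parameterises the logit as 1/(1 + e^(-βx)), so coefficient signs are flipped relative to the convention of formula (1).

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical estimation sample: one row per firm, TARGET = 1 for acquired firms.
df = pd.read_csv("estimation_sample.csv")        # placeholder file name

# Illustrative subset of candidate regressors x(i, t); the thesis tests far more.
X = sm.add_constant(df[["PTBV", "LTDTE", "MOM3"]])
y = df["TARGET"]

result = sm.Logit(y, X).fit()                    # maximum likelihood estimation
print(result.summary())

p_hat = result.predict(X)                        # fitted takeover probabilities p(i, t)
```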


2.2.3 Powell and Brar et al.’s critiques

Powell (2004) extends Palepu's model by separating hostile and friendly takeovers. His multinomial logit model specifies the probability of UK firms being a hostile target, a friendly target or a 'non-target'. He concludes that only portfolios constructed with hostile takeovers earn significant abnormal returns. The variables Powell uses are mainly similar to those used in Palepu's study. He constructs several buy-and-hold portfolios based on his logit models as of January 1996 and finds significant buy-and-hold abnormal returns (BHAR) of up to 17% over a 36-month period.

Brar et al. (2009) also find room for improvement in the model. They propose including independent variables of a technical nature (that is, momentum and trading volume) in the hope of capturing the timing of the acquisition announcement (Brar et al., 2009). Using a sample of 1,486 European firms (including targets), their model accurately predicts up to 73% of the targets. Interestingly, Brar et al.'s model does provide a basis for an investment strategy that earns significant abnormal returns. They apply their binomial logit model to all non-financial companies in the S&P/Citigroup Broad Market Index and make monthly rankings of the estimated takeover likelihoods. Thereafter, they construct a monthly rebalanced portfolio consisting of the top 10% ranking firms and find that this portfolio outperforms the market with an average abnormal return of 8.5%.

2.3 Firm characteristics

As noted by Palepu (1986), the initial selection of financial ratios should be backed by literature, so that these ratios are not only statistically but also theoretically justified. This entails selecting preliminary variables based on prior research instead of starting with popular financial ratios and simply testing their significance, as is done by Simkowitz and Monroe (1971), an approach that risks overfitting the model. Palepu (1986) bases his variables on six hypotheses frequently suggested in a broad body of research. His paper is built upon, extended and validated by later research, for instance by Cudd and Duggal (2000), who replicate Palepu's study and conclude that adjustments for industry-specific distributional characteristics are not of added value. Brar et al. (2009) and Powell (2004) agree with the theoretical approach of Palepu (1986) and each present a range of variables that expands the literature. Brar et al. (2009) identify three sets of variables based on prior literature, consisting mainly of firm-oriented variables and a limited number of country and industry/market-oriented variables.

A survey of earlier research reveals seven ‘high-level’ takeover hypotheses that are relevant to this study. The initial selection of variables represents the seven takeover theories set out below.

(1) Inefficient management hypothesis: firms with inefficient management are more likely to become targets

One of the more prevalent theories for M&A (and often a motivation for LBOs) is the inefficient management hypothesis. This theory argues that the opportunity of replacing the inefficient management of a potentially lucrative business can be a significant motivation for corporate takeovers. Thus, firms with inefficient management are more likely to become targets. Nearly all studies on M&A drivers use proxies to measure inefficient management. Palepu (1986) uses the excess return on a firm's stock, averaged over an extended period, as well as accounting profitability, measured through return on equity (ROE). Low ROE should be an indication of inefficient management, thus increasing the likelihood of an acquisition and carrying a negative sign in the logit model (Brar et al., 2009; Cudd and Duggal, 2000; Powell, 2004). Palepu (1986) finds significant results, whilst Brar et al. (2009) do not and instead include the one-year sales growth as a significant proxy for inefficient management in their model.

(2) Growth-resource mismatch hypothesis: firms with a mismatch between their growth potential and their financial resources are more likely to become targets

Multiple studies mention the growth-resource mismatch as an indicator for potential takeovers (Barnes, 1999; Cudd and Duggal, 2000; Espahbodi and Espahbodi, 2003). According to this hypothesis, firms that bear an imbalance between their growth and financial resources have a higher probability of becoming a target. Such firms present an opportunity for acquirers to either provide the resources needed to sustain growth or aid in creating value from a target's resources through the acquirer's existing knowledge, thus creating value by eliminating the mismatch. However, results vary across studies. Cudd and Duggal (2000) find a significant positive relationship with takeover probability, as do Palepu (1986) and Powell (2004). Yet, Espahbodi and Espahbodi (2003) do not, and Brar et al. (2009) do not even consider the variable.

(3) Size hypothesis: larger firms are less likely to become targets

Transactions with smaller companies are easier to manage, less costly and more likely to succeed, due to the lack of takeover defences and the costs associated with the actual implementation of the acquisition. Hence, smaller firms are more likely to receive acquisition bids (Brar et al., 2009; Hasbrouck, 1985). Also, the probability of ending up in a bidding war with multiple bidders is likely to lessen with size (Palepu, 1986). Results are unambiguous across studies, showing a highly significant negative relationship.

(4) Undervaluation hypothesis: undervalued firms are more likely to become targets

Empirical results for this hypothesis are mixed. Bartley (1999) finds slightly significant results, whilst Brar et al. (2009), Cudd and Duggal (2000), Espahbodi and Espahbodi (2003) and even Palepu (1986) find insignificant relationships. Naturally, a low MTB ratio can also indicate a firm that is plainly losing market value due to poor business circumstances.

(5) Industry disturbance hypothesis: firms that are in an industry with M&A activity are more likely to become targets

"Merger waves occur in response to specific industry shocks that require large scale reallocation of assets" (Harford, 2005). This 'economic disturbance theory' is used by Palepu (1986) to explain variations in the level of M&A activity in certain industries and across certain periods. Acting as a sort of wake-up call, it is not uncommon to see (initial) mergers and acquisitions leading to a larger-scale industry consolidation (de Groot & Molenaar, 2016; Fitch Ratings, 2016). Palepu (1986) proxies this variable by constructing a dummy variable that takes a value of 1 for industries where an acquisition took place in the previous 12 months and zero otherwise. His results show a significant positive relationship between this factor and takeover likelihood. Brar et al. (2009) study industry disturbance in a similar manner, yet find an insignificant relationship. Cudd and Duggal (2000) find an insignificant relationship using Palepu's (1986) methodology, yet find a highly significant relationship when using a methodology similar to Brar et al.'s (2009).

(6) Financial distress hypothesis: firms that are financially distressed are more likely to be acquired

Distinctly investigated by Brar et al. (2009), the theory behind this hypothesis is that financially distressed companies often become the target of an acquisition. Distressed firms' shareholders have an incentive to sell, since their claim on the firm's cash flows is subordinated to secured lenders, who might be inclined to accept a discount on their claim if an acquirer with good prospects arises (Balcae et al., 2012). There are multiple variables that can proxy financial distress. Brar et al. (2009) examine leverage to represent this hypothesis. However, their study does not find enough evidence to confirm this hypothesis and therefore does not include any representing variables in their final model. This result is similar to Ambrose and Megginson (1992), who conclude that there is no significant relationship. Palepu (1986) does not explicitly mention this hypothesis, yet does find a significant relationship between a firm's leverage and the probability of an acquisition.

(7) Momentum hypothesis: firms with positive momentum are more likely to become targets

As mentioned before, Brar et al. (2009) propose including measures of a more technical nature as an extension of Palepu’s (1986) original model. As rumours of a possible takeover circulate, (proprietary) traders likely act on this, expecting gains due to a possible acquisition premium (Gao and Oler, 2012). Consequently, the trading volume of this stock is likely to increase and a strong short-term momentum is to be expected. Brar et al. find a significant positive relationship for both price momentum and trading volume.

3. Methodology

This section discusses the research methods of this study. It starts by describing the general process followed in this paper and continues by clarifying the selection process of independent model variables. The estimation method of the logit takeover likelihood model is explained next, after which the validation methods of this model are described. Subsequently, I explain how portfolio strategies are applied to the results of this model and finish by reporting how the firm characteristics are operationalised into independent variables.

3.1 Research approach

The research approach of this paper follows the foundations laid out by previous studies as discussed in the literature review. For each of the seven takeover hypotheses a number of independent variables are proposed as a measure or proxy. Following Brar et al. (2009), the discriminatory power of these variables is then tested and a selection of characteristics to be included in the model is made. Accordingly, several logit models are estimated using a subsample of targets and control firms. The final model is subsequently checked for robustness with a second subsample. Subsequently, the model is used to calculate the takeover probabilities for the investment sample. Finally, the calculated probabilities are used to construct an investment portfolio. The returns of this portfolio are examined through three investment strategies to see whether these can capture announcement premiums.

3.1.1 Selecting target firm characteristics for the model

This study uses several variables to represent the hypotheses discussed in section 2. A selection of variables is made based on their discriminatory ability, multicollinearity and their representation of the main takeover hypotheses as explained in this section.

3.1.1.1 Control sample

To mirror the annual distribution of takeover activity over 2001-2015, I randomly assign 5% of the total control group to the 2001 control sample. Non-target firms are assigned to such annual control groups only once.

3.1.1.2 Selection methodology (t-test)

Palepu's (1986) paper proposes six takeover hypotheses, which are subsequently tested by the logit regression. However, his research does not examine other possible financial ratios or factors that might be a close proxy for the same hypothesis. Brar et al. (2009) do test the discriminatory ability of multiple variables before including a selection of them in the model. Since theirs is a more inclusive and complete method of testing the takeover hypotheses, this methodology is followed. Also, this method avoids the danger of overfitting the model by making a prior selection.

A comparison of means with two-sample t-tests assesses the discriminatory abilities of the proposed variables. This involves examining all proposed financial characteristics (for both the target and control sample) for normality and subsequently testing whether a significant difference of means between targets and non-targets exists. T-test statistics assume a normal distribution; therefore, careful consideration of the variables is in order. Non-normality should not be an issue with a sufficiently large (that is, >200 observations) sample size. Nonetheless, normality is checked by means of skewness, kurtosis and Jarque-Bera tests and a visual inspection of the histograms of each variable. To reduce the effect of outliers, the data is winsorized at 95% rather than trimmed, to avoid deleting useful data (Powell, 2004). Since it is immediately clear that a number of variables still follow a non-normal distribution, the natural logarithm of these particular variables is calculated, hereby improving their normality tremendously, as can be confirmed by visual inspection of the corresponding histograms (some of which can be found in Appendix B). After testing the means and selecting variables, the untreated data is used for the logit regression, since logit analyses do not need to meet the linearity and normality assumptions of linear regressions and ordinary least squares analyses.
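As an illustration of this treatment, the sketch below (Python, synthetic data) log-transforms a skewed variable, winsorizes it and runs the two-sample t-test; capping at the 2.5th and 97.5th percentiles is one possible reading of "winsorized at 95%" and is an assumption here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw_targets = np.exp(rng.normal(0.2, 1.0, 800))     # synthetic right-skewed variable
raw_controls = np.exp(rng.normal(0.0, 1.0, 1900))

def winsorize(x, lower=2.5, upper=97.5):
    # Cap extreme observations at the chosen percentiles instead of deleting them.
    lo, hi = np.nanpercentile(x, [lower, upper])
    return np.clip(x, lo, hi)

x_t = winsorize(np.log(raw_targets))                # log transform improves normality
x_c = winsorize(np.log(raw_controls))

print(stats.jarque_bera(x_t))                       # normality check, as in the text
t_stat, p_val = stats.ttest_ind(x_t, x_c, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```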

3.1.2 Estimating the takeover likelihood model

After determining the discriminatory ability of the firm characteristics, this paper tests the selected characteristics for their statistical significance and influence on takeover probability by means of stepwise selection methods.

3.1.2.1 Logit models

3.1.2.2 Estimation and validation sample

Palepu (1986) notes that estimating and validating the model parameters on the same sample likely leads to biased results. His study therefore uses out-of-sample validation for his logit model. Brar et al. (2009) take a similar approach and split their total sample into two randomly separated halves, using the second sample for model validation, hereby creating an 'estimation' and a 'validation' sample. I follow Brar et al.'s methodology in part: all firms with announcement dates between 2001 and 2013 are randomly assigned to one of two subsamples with a 70/30 weighting, thereby creating an estimation and a validation subsample. Both subsamples include the 2007 financial crisis, so that the logit model estimated in this paper can be checked for robustness across a financial crisis. I do expect momentum in particular to be impacted by this crisis. Therefore, an interaction variable is constructed, capturing the interaction between the momentum variables and a dummy variable (taking a value of 1 for all firms during the financial crisis).²
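The sampling and interaction logic can be sketched as follows (Python/pandas, toy data); the crisis end date below is an assumption, since the thesis's footnote is cut off, and the column names are placeholders.

```python
import numpy as np
import pandas as pd

# Toy stand-in for the 2001-2013 sample frame.
df = pd.DataFrame({
    "DATE": pd.to_datetime(["2005-03-01", "2008-01-15", "2012-06-30"]),
    "MOM12": [0.4, -1.2, 0.9],
    "TARGET": [0, 1, 0],
})

rng = np.random.default_rng(1)
in_estimation = rng.random(len(df)) < 0.70            # random 70/30 assignment
estimation, validation = df[in_estimation], df[~in_estimation]

# Crisis dummy: start 09-Aug-2007 (Elliott, 2011); the end date is assumed here.
crisis = df["DATE"].between("2007-08-09", "2009-06-30")
df["MOM12_X_CRISIS"] = df["MOM12"] * crisis.astype(int)   # interaction variable
```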

3.1.3 Validating the model

As mentioned previously, the validation sample checks the robustness of the constructed model as an ‘out-of-sample’ validation method. In addition, the validation sample is used to assess the predictive ability of the takeover likelihood model.

3.1.3.1 Cut-off probability

To differentiate possible targets from non-targets, their estimated probability needs to meet a certain threshold: the cut-off probability. The cut-off probability is determined by a process similar to that of Powell (2004) and Brar et al. (2009), who maximise the number of correctly classified firms in the sample, claiming that this leads to the highest returns for subsequent investment portfolios. First, firms are placed into deciles in ascending order of takeover probability. The optimal cut-off threshold is the first probability in the decile with the highest concentration ratio (that is, the concentration of correct target classifications in the portfolio). Consequently, any firm in the sample with a calculated probability higher than the cut-off probability is classified as a target, and vice versa.
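One way to operationalise this decile-concentration rule is sketched below (Python/pandas); the exact maximisation and tie-breaking in Powell (2004) and Brar et al. (2009) may differ in detail.

```python
import pandas as pd

def cutoff_probability(p_hat, is_target):
    # Rank firms into deciles of estimated takeover probability (ascending).
    d = pd.DataFrame({"p": p_hat, "target": is_target})
    d["decile"] = pd.qcut(d["p"], 10, labels=False, duplicates="drop")
    # Concentration ratio: share of actual targets within each decile.
    concentration = d.groupby("decile")["target"].mean()
    best_decile = concentration.idxmax()
    # The cut-off is the lowest probability in the decile with the highest
    # concentration; firms above this threshold are classified as targets.
    return d.loc[d["decile"] == best_decile, "p"].min()
```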

3.1.4 Applying the model

3.1.4.1 Investment sample

This paper follows Palepu's (1986) and Powell's (2004) methodology and focuses model estimation on the sample of 2001-2012. After 'training' the model on this estimation sample and validating the model on the validation sample, the logit model is fitted to the 'investment sample' covering the period 2013-2015.

² The financial crisis is deemed to have started on 09-Aug-2007 (Elliott, 2011). The end of the financial crisis is […]

3.1.4.2 Abnormal returns

To see whether the portfolios possess the ability to outperform the market, the abnormal returns of the constructed portfolios are computed. Palepu (1986) tests the ability of his portfolio to earn abnormal returns through the calculation of cumulative excess returns (CER). His methodology subtracts the expected return of a single asset, E(R_{i,t}), from the actual return of that asset, R_{i,t}, to find the excess returns. In essence, the methodology is similar across studies: a market benchmark return (BR) is subtracted from individual stock returns. This BR is the expected return a stock would have earned had there been no acquisition (Martynova and Renneboog, 2006).

Brar et al. (2009) use the S&P/Citigroup BMI to estimate the probabilities and use the equally weighted S&P/Citigroup Pan-European index as a benchmark. This paper examines the returns of the equally weighted MSCI USA, Russell 2000 and S&P 500 indices in a similar fashion. These indices cover American listed firms and thus serve as a suitable benchmark. The portfolio's ability to earn excess returns thereby validates the results of Brar et al.'s (2009) European study through similar methodology, applied to the US market. This study follows the terminology of Powell (2004) and Brar et al. (2009) and terms these excess returns 'abnormal' returns, AR_{i,t}, as can be seen in formula (2).

AR_{i,t} = R_{i,t} − E(R_{i,t})   (2)

where R_{i,t} is the actual return of a single asset and E(R_{i,t}) is the expected return of that asset, represented by the average return of the benchmark indices. The average abnormal return on a portfolio of stocks on day t, AAR_t, is then simply the sum of all abnormal returns divided by the number of stocks (N) in the portfolio, as seen in formula (3).

AAR_t = (1/N) Σ_{i=1}^{N} AR_{i,t}   (3)

To conclude his return calculation, Palepu (1986) calculates the cumulative abnormal returns (CAR) per portfolio by adding up all AARs over a time period of 1 to k days.

CAR = Σ_{t=1}^{k} AAR_t   (4)

To calculate individual returns R_{i,t}, Datastream's Total Return Index (RI) is used as in formula (5). The RI benchmarks the value of an investment in a stock by tracking price movements as well as any stock events that affect a company's returns (for instance stock splits), while re-investing dividends earned on the stock.

R_{i,t} = ln( RI_{i,t} / RI_{i,t−1} )   (5)
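Formulas (2) through (5) translate directly into a few lines of code. The sketch below (Python/NumPy) assumes total-return-index histories are available as arrays; it is illustrative rather than the thesis's actual implementation.

```python
import numpy as np

def car(portfolio_ri, benchmark_ri, k):
    # portfolio_ri: (days, N) total return indices of the N portfolio stocks;
    # benchmark_ri: (days,) total return index of the benchmark.
    r = np.log(portfolio_ri[1:] / portfolio_ri[:-1])     # R_{i,t}, formula (5)
    e_r = np.log(benchmark_ri[1:] / benchmark_ri[:-1])   # E(R_{i,t}), benchmark return
    ar = r - e_r[:, None]                                # AR_{i,t}, formula (2)
    aar = ar.mean(axis=1)                                # AAR_t, formula (3)
    return ar, aar[:k].sum()                             # CAR over k days, formula (4)
```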

Palepu's (1986) methodology is then followed to test the statistical significance of the calculated abnormal returns. The individual daily abnormal returns AR_{i,t} are standardized by dividing them by the portfolio standard deviation σ_{p,t}. This standard deviation on day t is calculated over the daily abnormal returns for the 250 trading days preceding day t, to accommodate changes in the portfolio over time across investment strategies. This results in the standardized abnormal returns SAR_t of formula (6).

SAR_t = Σ_{i=1}^{N} AR_{i,t} / σ_{p,t}   (6)

To determine whether the portfolio's returns are significantly different from zero, the t-statistic is calculated with formula (7), where k is the number of days over which the portfolio excess return is calculated.

t = ( Σ_{t=1}^{k} SAR_t ) / √k   (7)
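A sketch of the standardisation and test statistic, under one reading of formula (6) (summing abnormal returns across the portfolio's stocks each day and scaling by a rolling 250-day standard deviation):

```python
import numpy as np

def car_t_stat(ar, k):
    # ar: (days, N) matrix of daily abnormal returns, with at least 250 trading
    # days of history before the k-day window being tested.
    port_ar = np.nansum(ar, axis=1)            # summed AR across stocks per day
    sar = np.empty(k)
    for j in range(k):
        t = 250 + j
        sigma_p = port_ar[t - 250:t].std()     # sigma_{p,t} over preceding 250 days
        sar[j] = port_ar[t] / sigma_p          # SAR_t, formula (6)
    return sar.sum() / np.sqrt(k)              # t-statistic, formula (7)
```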

Since this analysis considers more than one portfolio, it is important to examine the risks of individual portfolios. The (ex post) Sharpe ratio is a common measure for examining the risk-return relationship of different portfolios (Sharpe, 1994). The Sharpe ratio is calculated by formula (8) and measures the risk-adjusted returns of the portfolios.

Sharpe ratio = (R̄ − R_f) / σ_p   (8)

where R̄ is the average historic return of the portfolio, R_f is the risk-free rate and σ_p is the annualized standard deviation of the daily portfolio returns. A higher Sharpe ratio indicates that a portfolio performs better than the riskless investment in terms of the risk-reward relationship; usually, this entails a ratio larger than 1. Another measure of the risk-return relationship is the Sortino ratio, which examines only the downside (harmful) risk of a portfolio, making it suitable for skewed return distributions (Sortino and Price, 1994). This ratio is also examined in this paper and is calculated according to formula (9), where R̄ is again the expected return (or, in this case, the average historic return) of the portfolio, R_f is the target return (in this study, the risk-free rate) and DD_p is the downside deviation of the portfolio.

Sortino ratio = (R̄ − R_f) / DD_p   (9)

DD_p = √[ (1/n) Σ_{i=1}^{n} (R_i − R_f)² · f(i) ]   (10)

where f(i) = 1 if R_i < R_f and f(i) = 0 if R_i ≥ R_f.

3.1.4.3 Portfolio strategies

To conclude this paper, I investigate what influence various portfolio strategies of earlier studies have on the abnormal returns. Naturally, this study examines the ability of a buy-and-hold portfolio to earn abnormal returns over 3, 6, 12, 24 and 36-month holding periods. Furthermore, this paper employs the 'takeover timing' strategy used by Brar et al. (2009) to examine whether this dataset confirms their portfolio analysis. This involves calculating the probabilities of all companies in the investment sample based on new monthly data and rebalancing the investment portfolio accordingly.

In addition to strategies of earlier papers, this paper examines one more strategy, namely the stop-loss strategy. This strategy protects an investor's portfolio against falling share prices by cutting losses at a certain threshold. It involves setting a threshold at the start of an investment period and selling individual stocks when the stock price crosses this barrier. This threshold can be an absolute or a relative one. A stop-loss approach that cuts losses at a relative loss increases the threshold when the share price rises, enabling the portfolio to lock in profits when stock prices rise. However, volatile stocks can endure a price drop that exceeds the threshold, whilst the price may later increase to higher levels. As this paper expects share prices to rise significantly (though with greater volatility), we set a hard (absolute) stop-loss threshold at a 10% loss to the original investment price.
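The stop-loss rule and the risk-adjusted measures of formulas (8)-(10) can be sketched as follows (Python/NumPy); exiting exactly at the threshold price is a simplification, since a real fill may occur below the trigger.

```python
import numpy as np

def stop_loss_returns(prices, threshold=0.10):
    # prices: (days, N) price paths of the portfolio stocks, bought at prices[0].
    # Sell a stock the first day it closes at or below 90% of its purchase price.
    exit_price = prices[-1].copy()
    for i in range(prices.shape[1]):
        hits = np.where(prices[:, i] <= prices[0, i] * (1 - threshold))[0]
        if hits.size:
            exit_price[i] = prices[hits[0], i]
    return exit_price / prices[0] - 1          # per-stock holding-period returns

def sharpe_ratio(returns, rf=0.0):
    return (returns.mean() - rf) / returns.std()            # formula (8)

def sortino_ratio(returns, rf=0.0):
    downside = np.sqrt(np.mean(np.minimum(returns - rf, 0.0) ** 2))
    return (returns.mean() - rf) / downside                 # formulas (9)-(10)
```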

3.2 Operationalisation of firm characteristics

This section elaborates upon the hypotheses discussed in the literature review by defining the independent variables representing them and discussing the expected relationship of these variables to the takeover likelihood. Many variables are common across takeover likelihood studies and are therefore straightforward to operationalise. Moreover, many of the variables we investigate are similar to Brar et al.'s (2009), since this study replicates their EU-based research to validate abnormal returns in the US. The exact methods used to construct the individual measures can be found in Appendix A, along with the definitions of the abbreviations.

(1) Inefficient management

[…] sales (ROS) and the free cash flow to total assets. This study replicates this measure, but adjusts it to the operating cash flow to total assets (OCFTA), since that is a closer proxy of operational management inefficiency, undistorted by any financing or investment decisions. In addition, I examine the operating profit margin (OPM) and the total asset turnover (TAT). One more important measure of management inefficiency is historical sales growth: inefficient management is unlikely to grow a company organically, so historical sales growth will tend to lag for inefficiently run companies. Therefore, we include the 1- and 3-year historical sales growth (REV_1YG and REV_3YG) and the 1- and 3-year historical earnings growth (NI_1YG and NI_3YG) over the last twelve months (LTM).

(2) Growth-resource mismatch

Growth-resource mismatch is measured through a dummy variable (GRDUMMY) that is assigned a value of one for the combinations of low growth, high liquidity and low leverage, or high growth, low liquidity and high leverage, and zero for all other combinations. Growth is measured as the average one-year sales growth of a firm. Financial resource availability is measured through both liquidity, proxied by the firms' current ratio (CUR), and leverage, measured by the total debt-to-equity (DE) ratio. Each variable is defined as 'high' when its value is greater than the total sample average and 'low' otherwise.
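A minimal sketch of this construction, assuming a DataFrame with the REV_1YG, CUR and DE columns defined in Appendix A:

```python
import pandas as pd

def gr_dummy(df):
    # 'High' means above the total sample average for each underlying variable.
    high = lambda col: df[col] > df[col].mean()
    growth, liquidity, leverage = high("REV_1YG"), high("CUR"), high("DE")
    # Mismatch: low growth with high liquidity and low leverage, or the reverse.
    mismatch = (~growth & liquidity & ~leverage) | (growth & ~liquidity & leverage)
    return mismatch.astype(int)
```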

(3) Size

Total assets (TA) and market capitalisation (MV) are often put forward as measures of company size. I examine both, in addition to the number of employees (EMPL) and total sales (REV) of the individual firms. These variables are all sensible measures of a firm's size.

(4) Undervaluation

I follow the majority of prior literature by using the market-to-book and price-to-book values (MTBV and PTBV) and the dividend (DY) and earnings yields (EYLTM) as proxies for undervaluation.

(5) Industry disturbance


TABLE 1
Target probability hypotheses and associated independent variables

Inefficient management (expected sign: -)
    Operating cash flow to total assets; operating profit margin; asset turnover; return on equity; return on sales; 1y historical sales growth; 3y historical sales growth; 1y historical earnings growth; 3y historical earnings growth

Growth-resource imbalance (expected sign: +)
    Dummy if high-growth/low-resources or low-growth/high-resources

Size (expected sign: -)
    Market capitalisation; total sales; total assets; number of employees

Undervaluation (expected sign: -)
    Dividend yield; earnings yield (LTM); price to book value; market to book value

Industry disturbance (expected sign: +)
    Dummy if at least one acquisition occurred in the industry in the previous year

Financial distress (expected sign: +)
    Total debt to total assets; long term debt to assets; short term debt to assets; total debt to equity; long term debt to equity; short term debt to equity; 1y change in each of these six ratios
    Current ratio (expected sign: -); cash to capital (expected sign: -)

Momentum (expected sign: +)
    Price momentum (t-stat, 3-month); price momentum (t-stat, 12-month); daily trading volume as % of market capitalisation

Note: This table presents the full set of variables considered to be of influence on the acquisition likelihood of firms, together with the expected sign of their relationship to takeover probability.

(6) Financial distress

I investigate the financial distress hypothesis by including the same comprehensive list of measures Brar et al. (2009) examine in their paper. This list includes the total debt to assets (TDTA), long-term debt to assets (LTDTA) and short-term debt to assets (STDTA) as measures of overall debt levels. To measure leverage more directly, I investigate the total debt to equity (DE), long-term debt to equity (LTDTE) and short-term debt to equity (STDTE) ratios. Since Brar et al. (2009) also address the possibility that target firms experience an increase in their debt in the year prior to an eventual takeover, the 1-year changes in the above measures are included in the analysis as well (indicated by the suffix '1YG'). In addition, two measures of liquidity are included in the analysis. The current ratio (CUR) is a logical measure of a firm's liquidity, measuring the ratio of current assets to current liabilities. Furthermore, the cash-to-capital (CTC) measure is included to investigate the relative cash levels in a firm.

(7) Momentum

To replicate the significant results of Brar et al. (2009), this study follows their methodology by defining price momentum as the t-statistic of the slope of logged daily stock prices. This measure captures the price trend adjusted for volatility, for both the three- and twelve-month horizons (MOM3 and MOM12). Brar et al. (2009) also consider relative daily trading volume, to examine whether rumoured stocks endure heightened levels of trading volume. The absolute daily trading volume divided by market capitalisation (VOCAP) is used to replicate their research.
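A sketch of this momentum measure (Python/SciPy): regress logged prices on a linear time trend and take the slope's t-statistic; the window lengths below are assumptions.

```python
import numpy as np
from scipy import stats

def price_momentum(prices):
    # t-statistic of the slope from regressing log prices on a time trend,
    # i.e. the price trend adjusted for its volatility.
    y = np.log(np.asarray(prices, dtype=float))
    t = np.arange(len(y))
    fit = stats.linregress(t, y)
    return fit.slope / fit.stderr

# MOM3 would use roughly 63 trading days of prices, MOM12 roughly 252.
```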

All of these variables are representatives of their categorical hypotheses and will be tested in section 5 for their discriminatory ability on takeover likelihood. An overview of the seven hypotheses, the proposed variables and their expected signs can be found in table 1 above.

4. Data

This section discusses the sources and types of data used in this study. It starts by explaining how the target and control firms were identified and the data samples are constructed. Next, I clarify how all independent variable data is collected and treated, after which I conclude by presenting various descriptive statistics on the collected dataset.

4.1 Sample construction

[…] of this study. Targets are filtered on a number of restrictions: following Brar et al. (2009), I select all public transactions targeting stocks listed on the NYSE and/or NASDAQ with a market capitalisation exceeding $100 million. Both hostile and friendly deals are included in the database, since acquisition premiums are paid in both types of transactions and these premiums are the primary focus of this study. Furthermore, companies from the financial industry are excluded, since their income statements and balance sheets are structured differently from those of non-financial corporates, which would distort the analysis of 'regular' companies. This elimination is based on excluding all companies carrying a four-digit SIC code between 6000 and 6800. After cross-referencing our list with the NYSE and NASDAQ databases, 853 targets are found. Of these transactions, 467 are pending or intended and 67 have even been withdrawn⁵. Since announcement premiums can be witnessed (and captured) even before the announcement date due to rumours circulating the financial marketplace (Gao and Oler, 2012), these non-completed tender offers are not excluded from the analysis.

Palepu (1986) mentions the bias of portfolio construction by using matched pairs, a methodology used in earlier studies. In the real world, there is no 1:1 ratio of target to non-target firms. Therefore, a control sample is created consisting of non-target firms: all (non-financial) companies listed on either the NYSE or Nasdaq from 2001 through 2015 are included and cross-referenced with the target list to ensure targets are not included in the control sample. Due to the nature of the variables, only companies with at least three years of financial data available are included. Therefore, companies with an IPO after 2012 are excluded from the samples, resulting in a control sample of 3,890 firms (that is, a target to non-target ratio of 22%) as seen in table 2.

⁵ Transaction statuses are measured as per 13-May-2016.

TABLE 2
Sample construction

Year    No. of deals   Activity   No. of control firms in sample
2001          42          5%            192
2002          33          4%            150
2003          24          3%            109
2004          33          4%            150
2005          46          5%            210
2006          56          7%            255
2007          70          8%            319
2008          82         10%            374
2009          40          5%            182
2010          47          6%            214
2011          69          8%            315
2012          59          7%            269
2013          62          7%            283
2014          71          8%            324
2015         119         14%            543
Total        853        100%          3,890

Note: This table shows the distribution of control firms over the sample period.

TABLE 3
US public M&A transactions for the period Jan-2001 - Dec-2015

Year    No. of deals   Completed (%)   Cash (%)   Stock (%)   Hybrid (%)   Unknown (%)
2001          42            52%           62%         5%         17%          17%
2002          33            52%           64%         9%          9%          18%
2003          24            58%           83%         0%          4%          13%
2004          33            58%           67%        12%          9%          12%
2005          46            35%           78%         4%          7%          11%
2006          56            46%           86%         0%          4%          11%
2007          70            33%           83%         3%          4%          10%
2008          82            30%           83%         2%          4%          11%
2009          40            40%           70%        10%          3%          18%
2010          47            49%           83%         6%          6%           4%
2011          69            26%           87%         3%          0%          10%
2012          59            46%           90%         3%          5%           2%
2013          62            40%           81%         6%          3%          10%
2014          71            34%           76%         7%          7%          10%
2015         119            20%           76%         6%         14%           4%
Total / avg  853            37%           79%         5%          7%          10%
Observations 319            673            42         25           82

Note: This table presents the number of M&A transactions over the period Jan-2001 to Dec-2015 that are included in our analyses. 'No. of deals' represents the number of deals per year. 'Completed' denotes the ratio of completed deals to total deals per year. Cash, Stock, Hybrid and Unknown portray the method of payment per deal. 'Hybrid' includes both cash and stock combinations/choices and options.

4.2 Data collection


After removing cases with little or no data, 810 targets and 1,916 non-targets are left in the sample. To avoid losing useful data and decreasing the explanatory power of the model, I examine the missing data per variable. Although most missing data appears to be missing completely at random (MCAR) and only 12.9% of overall data is missing, some variables do show a substantial amount of missing data, namely DY, STDA1YG and STDE1YG, which are all missing over 30% of their observations. Since they do not contain enough observations, they are excluded from further analysis. There appears to be no substantial pattern of missing values in the dataset. Therefore, I employ a mean imputation technique to fill out the dataset.⁶

⁶ The mean imputation technique substitutes the mean of the population for a variable reporting missing data.
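Mean imputation amounts to a single pandas call; a toy illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ROE": [6.5, np.nan, -4.4], "PTBV": [2.8, 3.3, np.nan]})
df_filled = df.fillna(df.mean(numeric_only=True))   # column means replace missing values
```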

4.3 Descriptive statistics

As can be seen in the descriptive statistics reported in appendix C, the inefficient management variables include the largest numbers of observations. Noteworthy is the minimum operating profit margin, an overwhelming -209,600%, whilst the maximum is 4,275%. This is one particularly good motivation for the winsorisation of the sample dataset. Furthermore, it is remarkable how especially the size variables show a large standard deviation; this appears logical, as total assets range from $5.6 billion to $66.8 trillion. The growth-resource imbalance dummy shows a moderate number of observations. This is, however, expected, since it relies on observations for three separate ratios. The financial distress variables in particular show a low number of observations, mainly for their one-year changes. Apparently, the divisions of debt are not reported as consistently as one would expect. Conversely, the momentum variables show the highest number of observations, circa 3,800 per variable, increasing the explanatory power of these variables substantially. Finally, it is noteworthy that the Jarque-Bera statistics of all variables are significant at 1% for the raw data. Appendix B shows how logarithmic transformations greatly improve the normality of the distributions of selected variables. Overall, the dataset is large enough to approximate normality through the central limit theorem.

5. Model construction

5.1 T-tests

As table 4 shows, the majority of the firm characteristics test significant in the two-sample t-test, showing a highly significant difference in means. Out of the 31 variables tested, 22 test significant at 1%. Due to the high number of discriminatory variables, a danger of overfitting the model arises. Therefore, I scrutinize the variables based on correlations and relevance to their hypothesis. All variables that do not have at least slightly significant discriminatory ability (that is, at 10% significance) are excluded from the model. Since no liquidity measure shows a significant difference of means, I test both liquidity measures as proxies for the liquidity hypothesis.

For a logit regression with such a high number of independent variables, it is important to check for multicollinearity. Substantial multicollinearity makes results unreliable and consequently distorts any conclusions drawn from regression outputs. To address this issue, I examine the correlations of the selected variables to further differentiate the independent variables to be selected for the model. As a rule of thumb, variables with pairwise correlations higher than 0.70 (or lower than -0.70) are excluded from the model (Mukaka, 2012). As can be seen in Appendix D, there are 9 instances of high correlation coefficients (that is, below -0.50 or above 0.50).
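The correlation screen can be automated as below (Python/pandas, synthetic data); the variable names are illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
base = rng.normal(size=1000)
candidates = pd.DataFrame({
    "PTBV": base,
    "MTBV": base + rng.normal(0, 0.1, 1000),   # near-duplicate of PTBV by design
    "LTDTE": rng.normal(size=1000),
})

corr = candidates.corr()
flagged = [(a, b, round(corr.loc[a, b], 2))
           for i, a in enumerate(corr.columns)
           for b in corr.columns[i + 1:]
           if abs(corr.loc[a, b]) > 0.70]
print(flagged)   # e.g. [('PTBV', 'MTBV', 0.99)] -> drop one variable of the pair
```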

The inefficient management variables show great discriminatory ability: all 9 variables show a difference in means at 1% significance. However, the direction of the relationship appears counterintuitive to the hypothesis. All regular ratios portray higher averages for the target sample than for the control sample, indicating targets would have more efficient management than control firms. Only sales growth is in line with the hypothesis, as it appears to be smaller for targets than for the control group. To avoid overfitting the model, only the one-year sales and earnings growth rates are selected for the logit model. OCFTA and ROE are also included in the model as proxies of the inefficient management hypothesis, based on their discriminatory ability and sensible link to the hypothesis.

Size also appears to show a significant relationship to the probability of a takeover, as expected based on earlier research. Contrary to expectations, however, the target mean is higher than the control mean, indicating a positive relationship. Since MV and REV have several high correlations, these variables are removed from the estimations, leaving TA and EMPL as the proxies for size.

The undervaluation variables all show discriminatory ability, with a highly significant difference in means as well. As expected, target firms show a significantly smaller average price to book and market to book value. Their average earnings yield is, however, considerably higher than the control average. Naturally, PTBV and MTBV have a correlation coefficient of nearly one. The logit model therefore uses PTBV and EYLTM as proxies for undervaluation, since they are the most sensible proxies for this hypothesis and show the highest significance in the t-test.


TABLE 4
Two-sample t-test of the means of selected variables

Hypothesis              Variable                                       Acquired   Control   p-value
Inefficient management  Operating cash flow to total assets*              0.09      0.03     0.000
                        Operating profit margin                           6.15    -72.57     0.000
                        Asset turnover                                   -0.18     -0.37     0.000
                        Return on equity*                                 6.54     -4.41     0.000
                        Return on sales                                   0.00     -0.89     0.000
                        1y historical sales growth*                       0.66      0.74     0.000
                        3y historical sales growth                        0.85      0.96     0.000
                        1y historical earnings growth*                    2.13      2.08     0.000
                        3y historical earnings growth                     2.84      2.79     0.002
Size                    Market capitalisation                             7.10      6.03     0.000
                        Total sales                                      13.96     12.98     0.000
                        Total assets*                                    14.13     13.33     0.000
                        Number of employees*                              8.18      7.33     0.000
Undervaluation          Earnings yield (LTM)*                            -7.67    -27.58     0.000
                        Price to book value*                              2.78      3.28     0.000
                        Market to book value                              2.79      3.18     0.005
Financial distress      Total debt to total assets                       -1.75     -1.88     0.055
                        Long term debt to assets*                        -1.88     -2.11     0.002
                        Short term debt to assets                        -4.42     -4.07     0.000
                        Total debt to equity                              6.24      6.36     0.004
                        Long term debt to equity*                         3.66      3.35     0.001
                        Short term debt to equity                         0.12      0.13     0.382
                        1y change in Total debt to total assets           0.63      0.65     0.319
                        1y change in Long term debt to assets             0.68      0.68     0.788
                        1y change in Total debt to equity                 1.05      1.08     0.107
                        1y change in Long term debt to equity             0.68      0.71     0.146
Momentum                Price momentum (t-stat 3-month)*                  0.00     -0.02     0.087
                        Price momentum (t-stat 12-month)*                 0.01      0.03     0.000
                        Daily trading volume as % of market cap.*        -1.14     -1.47     0.000
Liquidity               Current ratio*                                    0.69      0.72     0.270
                        Cash to capital*                                  4.01      4.07     0.363

Note: This table presents the averages of firm-specific variables representing a certain hypothesis for both the acquired group and the control group, together with the p-value of a two-sample t-test for a difference in means. Variables selected for the logit models are marked with an asterisk (*).


As the total and long-term debt-to-assets and debt-to-equity ratios are substantially correlated, only the long-term ratios are included to test the relationship of leverage to takeover likelihood, based on their significance in the t-test. Both the current ratio and the cash-to-capital ratio show insignificant differences in means. To address the liquidity aspect of the financial distress hypothesis, both variables are therefore tested for significance in the initial model.

Daily trading volume as a percentage of market capitalisation tests highly significant and is higher for targets, which is consistent with Brar et al. (2009). In addition, the 12-month price momentum shows a highly significant difference in means as well, whilst the 3-month price momentum variable is only significant at 10%. Since none of these variables show substantial correlation with each other, all three are included in the analysis.

Overall, this study continues with 15 of the 37 financial ratios (excluding the dummy variables), most of which are similar to, or close representations of, the model used by Brar et al. (2009), providing ample opportunity to investigate the sign of the relationship (negative or positive) in the logit regression. The resulting selection of variables is indicated in table 4 with an asterisk.

5.2 Logit model

This paper tests various models for the significance and interaction of the selected variables to arrive at a final takeover likelihood model. First, the model is estimated with only one independent variable per hypothesis (except for 'inefficient management'), resulting in models 1 and 2. Next, the preferred variables per model are tested for significance and re-estimated in models 3 and 4. Model 4 is considered the final model; it is discussed in this section and validated in section 6. The estimated coefficients and statistics can be found in table 5.


TABLE 5
Estimates of binary logit takeover likelihood models

Variables                           Model 1    Model 2    Model 3    Model 4
[…]


TABLE 5 (Continued)
Estimates of binary logit takeover likelihood models

                                    Model 1    Model 2    Model 3    Model 4
McFadden R-squared                   0.0576     0.0225     0.0716     0.0688
Prob(LR statistic)                   0.0000     0.0002     0.0000     0.0000
No. of iterations for convergence         7         11          9          7
% of correct predictions              72.0%      70.2%      72.2%      72.5%
% of correct targets                  16.2%       2.6%      16.9%      17.4%
No. of targets                          390        390        390        390
No. of non-targets                      907        907        907        907

Note: This table presents the estimation results for the takeover likelihood models with dependent variable TARGET (value of 1 for targets and 0 for non-targets). Coefficient estimates, t-statistics (in parentheses) and significance (asterisks) are reported. The sample for these models lies between 01-Jan-2001 and 31-Dec-2012. Model 1 includes 12 variables and includes only 1 independent variable per hypothesis (except for inefficient management). Model 2 re-estimates model 1, except with other variables per hypothesis. Model 3 uses the most significant variables from models 1 and 2 and is re-estimated by model 4 as the final model with all variables significant at 10%. The % of correct predictions / targets is based on a cut-off probability of 0.500. * indicates significance at 10%; ** indicates significance at 5%; *** indicates significance at 1%.

The size hypothesis is not confirmed by the model: both the total assets and the number of employees measures return insignificant t-statistics, implying that the size of a company has no major influence on its takeover likelihood. This is surprising, since the theoretical argumentation from prior research and the difference in means unambiguously suggest otherwise.

Conversely, the undervaluation hypothesis is confirmed by the significant negative coefficient of the price-to-book value (PTBV): target firms are undervalued compared to non-targets, as the hypothesis predicts. Although the coefficient estimate is not very high, the variable does significantly influence takeover likelihood.

Relative debt levels support the financial distress hypothesis through the long-term debt-to-equity (LTDE) ratio, which shows a significant positive relationship with takeover likelihood. Even though the beta-coefficient is not very substantial, the null hypothesis is rejected at 5%, demonstrating that firms with a higher LTDE ratio are generally more likely to be taken over. The coefficient estimate is very small compared to those of other variables, but the average LTDE ratio is correspondingly large, so the relative effect is not that dissimilar. Overall, the financial distress hypothesis is confirmed: firms that are highly leveraged are more likely to be acquired.

The significantly positive coefficient of the three-month price momentum is consistent with Brar et al.'s (2009) results, who also find that the three-month share price momentum is significantly positive. The crisis interaction dummy is highly significant as well, indicating that the negative effects of the financial crisis are adequately captured by the model. Consequently, the momentum hypothesis is confirmed: firms with stronger price momentum are more likely to be acquisition targets. Additionally, trading volume as a percentage of market capitalisation shows a (marginally) significant influence on takeover likelihood. As expected, takeover targets show higher relative trading volumes than non-targets, as indicated by the positive sign of VOCAP in the logit models. This confirms the theory that increased trading volume indicates a higher takeover likelihood.

In conclusion, the growth-resources, disturbance, size and liquidity hypotheses are not supported by the analysis, whilst the undervaluation, financial distress and momentum hypotheses are. The inefficient management hypothesis is supported by the 1-year sales growth, yet contradicted by the operational cash flow to assets ratio. Overall, the final model has a relatively high McFadden R-squared, indicating a good fit, and the probability of the LR statistic is highly significant at 1%. Moreover, the numbers of correct target and total predictions are the highest in model 4. To test for any remaining multicollinearity in the final model, the Variance Inflation Factor (VIF) of each variable is computed. Variables with VIF > 10 are a cause for concern and indicate serious collinearity, thus becoming candidates for removal (O'Brien, 2007). Since all VIFs are smaller than 2.1, the included variables show no danger of multicollinearity.
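The VIF check can be reproduced with statsmodels. A minimal sketch, assuming the same regressor matrix `X` (constant included) as in the logit estimation sketch above:

    from statsmodels.stats.outliers_influence import variance_inflation_factor

    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    # flag anything above the usual threshold of 10 (O'Brien, 2007)
    worrying = {k: v for k, v in vifs.items() if v > 10}

Each VIF equals 1 / (1 - R²) from regressing one explanatory variable on all the others, so a value below 2.1 means that regression explains less than roughly 52% of that variable's variance.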

6. Results

This section reports the results of the estimated logit model. It starts by validating the model estimated in section 5 on the validation sample discussed earlier. Next, it examines the optimal cut-off probability along with the predictive ability of the model. Subsequently, I discuss the construction of investment portfolios and back-test their performance relative to market benchmarks.

6.1 Validation of model

As discussed in the methodology, the 2001-2012 sample is split into an estimation and a validation sample (with 70% and 30% of the data, respectively) to check the robustness of the model. As can be seen in table 6, the probability of the likelihood ratio remains highly significant and the McFadden R-squared even increases. Furthermore, the percentage of correct predictions does not decrease substantially and the percentage of correct target predictions increases. Most of the beta-coefficients of the included independent variables remain robust in the validation sample. Notably, however, PTBV, LTDE and VOCAP lose their significance in the validation subsample. Given their significance in the majority of the sample and their theoretical backing in prior literature, I do not remove these variables from the model. Instead, I test the validity of the model further by examining its concentration ratio (CR) and predictive ability.

The estimated probabilities are sorted into ten deciles as described in the methodology. Table 7 reports the descriptive statistics of these deciles. As the probabilities rise, the number of correctly classified non-targets per decile decreases and the number of correctly classified targets increases, thereby increasing the concentration ratio. The highest concentration ratio is first reached in the ninth decile, in which the model correctly identifies 29 of the 180 targets in the sample. Furthermore, the average total correct prediction is 68.4% (including non-targets). This percentage is comparable with Brar et al. (2009), whose model on average classifies 72.6% of firms correctly. The percentage of targets in the portfolio is somewhat lower than Brar et al.'s (2009): they find that their model correctly classifies circa 45% of the targets, whilst Powell (2004) finds a 28-48% correct target prediction rate. The model appears to be robust and adequate for takeover likelihood prediction.
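The decile sort and concentration ratios can be computed directly from the fitted probabilities. A minimal sketch, assuming `val_sample` and `X_val` are the validation analogues of the estimation objects in the earlier sketches (the names are assumptions):

    import pandas as pd

    val = val_sample.assign(p_hat=logit.predict(X_val))
    val["decile"] = pd.qcut(val["p_hat"], 10, labels=False)   # 0 = lowest probabilities

    deciles = val.groupby("decile").agg(
        low=("p_hat", "min"), high=("p_hat", "max"),
        targets=("TARGET", "sum"), firms=("TARGET", "size"))

    # concentration ratio: targets in the decile relative to all firms
    # in the model, the definition used in the note to table 7
    deciles["cr"] = deciles["targets"] / len(val)

With 180 targets and 361 non-targets, the 29 targets in the ninth decile indeed give a CR of 29 / 541, roughly 5.4%, matching table 7.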

TABLE 6

Validation of logit models

Variables                         Model 4                Validation
C                                 -0.942 (-11.76)***     -0.667 (-4.85)***
OCFTA                              1.696 (4.00)***        2.157 (2.69)***
REV1YG                            -0.489 (-2.84)***      -1.062 (-3.45)***
PTBV                              -0.017 (-1.94)*         0.000 (0.03)
LTDE                               0.000 (2.11)**         0.000 (0.48)
MOM3                               0.824 (2.88)***        1.173 (2.78)***
VOCAP                              0.055 (1.84)*         -0.198 (-1.55)
MOM3*CRISIS                       -4.701 (-7.18)***      -4.344 (-4.02)***
McFadden R-squared                 0.0688                 0.0768
Prob(LR statistic)                 0.0000                 0.0000
No. iterations for convergence     7                      8
% of correct predictions           72.5%                  69.9%
% of correct targets               17.4%                  20.4%
No. of targets                     390                    181
No. of non-targets                 907                    363

Note: This table presents the estimation results for takeover likelihood model 4, re-estimated on the validation sample (30% of the data). Coefficient estimates, t-statistics (in parentheses) and significance (asterisks) are reported. * indicates significance at 10%; ** indicates significance at 5%; *** indicates significance at 1%.


TABLE 7

Validating the predictive ability of model 4 on the validation sample

Low    High    Non-targets       Targets          Concentration   Cut-off       % Total   % Targets
               (n)     (%)       (n)     (%)      ratio           probability   correct   in portfolio
0.00   0.15    48      13%       6       3%       1.1%            0.000         33.5%     101%
0.15   0.23    46      13%       9       5%       1.7%            0.148         41.2%     97%
0.23   0.25    40      11%       14      8%       2.6%            0.229         47.9%     92%
0.25   0.28    38      11%       17      9%       3.1%            0.252         52.9%     84%
0.28   0.29    34      9%        20      11%      3.7%            0.276         56.7%     75%
0.29   0.32    36      10%       18      10%      3.3%            0.295         59.3%     64%
0.32   0.34    42      12%       12      7%       2.2%            0.316         62.5%     53%
0.34   0.37    28      8%        26      14%      4.8%            0.343         68.0%     47%
0.37   0.44    24      7%        29      16%      5.4%            0.373         68.4%     32%
0.44   0.97    25      7%        29      16%      5.4%            0.438         67.8%     16%
Total / average (%)    361                180

Note: This table presents the predictive ability of the model using solely the validation sample (i.e. 30% of all firms with lookup/announcement dates between Jan-2001 and Dec-2012). All observations included in the logit regression are sorted into deciles based on ascending order of takeover probability. 'Low' and 'High' denote the range of takeover probabilities per decile portfolio. 'Non-targets' (n, %) reports the number of control firms included in the decile portfolio, as does 'Targets' (n, %) for the target firms. The 'concentration ratio' denotes the ratio of correct classifications to the total number of firms in the model. '% Total correct' reports the percentage of correct classifications to the total number of firms in the portfolio. Similarly, '% targets in portfolio' shows the ratio of correct target classifications to the total number of targets in the model.

6.2 Portfolio construction

6.2.1 Cut-off probabilities

To assess the performance of the model in terms of its ability to capture takeover premiums, a cut-off probability needs to be determined. To ensure the cut-off is based on a comprehensive dataset, the threshold is determined on the estimation sample, resulting in the concentration ratios found in appendix E. As stated in the methodology, the estimated probabilities are again sorted into ten deciles and the cut-off probability is determined from the concentration ratio in a similar fashion to the previous section. The final cut-off probability of 0.449 is based on the tenth decile, which has a concentration ratio of 6.3%. Out of the estimation sample of 389 targets and 904 non-targets, the model correctly identifies 81 targets. The percentage of total correct predictions is very similar to Brar et al. (2009) at 72.7%, whilst the percentage of targets in the portfolio is somewhat lower at 21%.
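Translating this into the portfolio screen is straightforward. A minimal sketch, reusing the decile construction from the earlier validation sketch but now applied to the estimation sample; `deciles_est` and the other names are assumptions:

    # take the lower probability bound of the decile with the highest
    # concentration ratio as the cut-off (0.449 on the estimation sample)
    cutoff = deciles_est.loc[deciles_est["cr"].idxmax(), "low"]

    # firms whose fitted probability meets the cut-off are classified as
    # takeover targets and become candidates for the investment portfolio
    candidates = est_sample[logit.predict(X) >= cutoff]

Firms in the out-of-sample application period are then scored with the same coefficients and enter the investment portfolio whenever their fitted probability exceeds this threshold.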
