
Amsterdam Business School

MSc Business Economics, Finance Track

Master Thesis Finance

Do Analysts De-Bias Management Forecasts? New

Evidence on an Asymmetric Association

Student: Tom Kroes

Student number: 10287175

Supervisor: dr. F.S. Peters

Date: June 2015


Statement of Originality

This document is written by student Tom Kroes who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

This thesis examines the accuracy of analysts’ earnings forecasts relative to management’s earnings forecasts. It predicts and finds that analysts de-bias management forecasts asymmetrically. Analysts revise their forecasts more in reaction to a management forecast that conveys good news than in reaction to one that conveys bad news. Moreover, analysts de-bias optimistic management forecasts more than pessimistic management forecasts. Consistently, the stock market response at the earnings announcement associated with pessimistic analyst forecasts is larger than the response associated with optimistic analyst forecasts. In contrast to earlier empirical work showing that analysts are optimistically biased and no more accurate than management, the results of this thesis suggest that analysts are pessimistically biased and, on average, de-bias management forecasts.


Table of contents

1 Introduction

2 Literature review

2.1 Accuracy of management forecasts

2.2 Accuracy of analyst forecasts

2.3 The relationship between analyst forecasts and management forecasts

2.4 Accuracy of analyst forecasts relative to management forecasts

2.5 The market response to earnings announcements

3 Sample selection and methodology

3.1 Sample selection

3.2 Methodology

4 Descriptive statistics

5 Regression results

6 Robustness check

7 Conclusion

References

Appendix A

Appendix B


1 Introduction

One of the most important tasks of security analysts is to provide forecasts of earnings per share for the firms they cover. These earnings forecasts are useful to capital market participants (Healy & Palepu, 2001). Besides analysts, a firm’s management can voluntarily provide earnings forecasts or earnings guidance; the two terms are used interchangeably in academic literature. Prior research shows that not only analysts’ forecasts but also management’s forecasts are a source of information to capital markets and influence market expectations (Lennox & Park, 2006; Patell, 1976; Penman, 1980; Waymire, 1986). This thesis considers the accuracy of analyst forecasts relative to management forecasts. The relative accuracy of analyst forecasts is relevant to investors, because it is unclear how much weight investors should give to analyst and management forecasts, respectively. Analysts’ forecast accuracy also matters for managers: since it affects firms’ market valuations, it might influence managerial earnings guidance behaviour. Furthermore, analysts’ forecast accuracy matters for policy makers, since it plays a role in regulation regarding voluntary disclosures.

The relative accuracy of analyst forecasts is even more topical given the implementation of Regulation Fair Disclosure (Reg FD) by the Securities and Exchange Commission on October 23, 2000. Before Reg FD, managers privately provided earnings guidance to analysts. Reg FD prohibits firms from disclosing information to security analysts without simultaneously disclosing this information to the public, and it resulted in a substantial increase in the percentage of firms providing earnings forecasts (Heflin & Subramanyam, 2013). The implementation of Reg FD and the related increase in earnings guidance changed the environment in which earnings forecasts are produced (Hirst, Koonce & Venkataraman, 2008). This might have changed the informative content and accuracy of analyst forecasts relative to management forecasts.

The accuracy of analyst forecasts is affected by the incentives analysts face to issue biased forecasts. Prior literature is inconclusive about the direction of these biases. Some empirical evidence indicates analysts issue optimistically biased forecasts, since such forecasts enhance analysts’ career prospects and improve access to managerial information (Dugar & Nathan, 1995; Hong & Kubik, 2003; Lim, 2001). Other empirical evidence suggests managers prefer analysts to be pessimistic, since pessimistic analyst


forecasts result in a positive surprise at the announcement of actual earnings. This implies analysts might issue pessimistic forecasts to please managers and generate investment banking income for their brokerage houses (Chan, Karceski & Lakonishok, 2007). However, analyst forecasts are affected not only by the incentives analysts face, but also by management forecasts. Analysts on average revise their forecasts in the direction of a management forecast (Jennings, 1987; Baginski & Hassell, 1990). Credibility plays a role in these revisions: pessimistic management forecasts might be perceived as more credible by analysts than optimistic management forecasts (Hassell, Jennings & Lasser, 1988; Jennings, 1987). Analysts might therefore have a higher propensity to follow biases in management forecasts when management is too pessimistic than when management is too optimistic.

On the one hand, the notion of optimistic biases in management forecasts suggests analysts revise their forecasts more following an optimistic management forecast than following a pessimistic one. On the other hand, the notion of pessimistic biases in management forecasts and the effect of credibility suggest analysts revise their forecasts more following a pessimistic management forecast than following an optimistic one. This empirical puzzle also applies to the accuracy of analyst forecasts relative to management forecasts. If analysts are optimistically biased, they might lean against optimistic management forecasts. This suggests analysts de-bias pessimistic management forecasts more than optimistic management forecasts. However, if analysts are pessimistically biased and perceive pessimistic management forecasts as more credible, they might lean against pessimistic management forecasts. This implies analysts de-bias optimistic management forecasts more than pessimistic management forecasts. This thesis answers the question of whether security analysts de-bias management forecasts asymmetrically.

To test the predicted asymmetric association, a sample of 24,254 quarterly management forecasts from the I/B/E/S database issued after the implementation of Reg FD is related to consensus analyst forecasts. To test analysts’ responses to management forecasts, analysts’ forecast revisions following the release of management forecasts are examined. Controlling for the magnitude of the news conveyed by management forecasts, the results indicate analysts revise their forecasts more following pessimistic management forecasts than following optimistic management forecasts. Next, analysts’ forecast errors, which proxy for biases


in analyst forecasts, are related to management forecast errors, which proxy for biases in management forecasts. The results indicate analysts de-bias management forecasts on average. Moreover, the results suggest analysts tend to lean more against pessimistic management forecasts than against optimistic management forecasts. Hence, analysts de-bias optimistic management forecasts more than pessimistic management forecasts.
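The bias proxies mentioned above can be made concrete with a small sketch. This is an illustrative Python example with hypothetical numbers, not the thesis’s exact variable definitions (those appear in section 3.2); a signed forecast error serves as the bias proxy, with a positive error indicating an optimistic forecast.

```python
# Illustrative sketch of forecast errors as bias proxies.
# Hypothetical numbers; not the thesis's exact definitions.

def forecast_error(forecast_eps, actual_eps):
    """Signed forecast error: positive = optimistic, negative = pessimistic."""
    return forecast_eps - actual_eps

def classify_bias(error, tol=1e-9):
    """Label the direction of a forecast's bias."""
    if error > tol:
        return "optimistic"
    if error < -tol:
        return "pessimistic"
    return "accurate"

# Example: management guides $1.10, the analyst consensus is $0.95,
# and actual EPS comes in at $1.00.
mgmt_error = forecast_error(1.10, 1.00)     # positive -> optimistic guidance
analyst_error = forecast_error(0.95, 1.00)  # negative -> pessimistic consensus

# Analysts "de-bias" the management forecast if the analyst error is
# smaller in magnitude than the management error.
de_biased = abs(analyst_error) < abs(mgmt_error)
```

In this stylized example the analyst consensus undershoots the optimistic guidance by more than the truth, so analysts de-bias the management forecast while remaining (slightly) pessimistically biased themselves, mirroring the average pattern the thesis reports.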

In addition, the stock market response at the announcement of actual earnings is related to biases in analyst forecasts. Investors base their earnings expectations on analyst and management forecasts (Healy & Palepu, 2001; Patell, 1976; Penman, 1980). If analysts de-bias management forecasts asymmetrically, the surprise at the earnings announcement differs for pessimistic and optimistic forecasts, which implies the stock market response will differ. There is, however, empirical evidence suggesting investors are able to filter out biases in forecasts (Hutton & Stocken, 2009; McNichols, 1989). If investors fully detect biases in analyst forecasts, they will note an asymmetric bias and adjust their earnings expectations accordingly, implying that stock market responses at the earnings announcement are similar for pessimistic and optimistic forecasts. However, the results indicate stock market responses at the earnings announcement are significantly larger when forecasts are pessimistic than when forecasts are optimistic. This is consistent with the notion that analysts lean more against pessimistic management forecasts than against optimistic management forecasts. Moreover, it suggests investors are unable to fully filter out the asymmetric bias in analyst forecasts.

Existing literature on earnings forecast accuracy is inconclusive about the relative accuracy of analyst forecasts. Some evidence suggests management forecasts are more accurate than analyst forecasts (Hassell et al., 1988; Waymire, 1986), while other evidence suggests there is no significant difference in forecast accuracy (Hutton & Stocken, 2009; Ruland, 1978). This thesis contributes to these inconclusive findings on relative forecast accuracy by providing evidence that analysts de-bias management forecasts on average.

Moreover, existing literature considers analysts’ forecast accuracy either in relation to biases in management forecasts or in relation to the private incentives analysts face. This thesis reconciles both effects and considers the accuracy of analyst forecasts in relation to management forecasts and the other incentives analysts face. More specifically, this thesis contributes to the literature on analyst forecasts with the notion that analysts de-bias


management forecasts asymmetrically. Furthermore, this thesis adds to the literature on market reactions to earnings announcements with the notion that investors are unable to fully filter out the asymmetric bias in analyst forecasts. More generally, this thesis thus contributes to knowledge on the informative value of security analysts’ earnings forecasts.

The thesis proceeds as follows. Section 2 outlines related literature and poses the hypotheses to be tested. Since the accuracy of management forecasts affects analysts’ forecasts, section 2 includes a short overview of empirical findings on management forecasts. The topic of interest of this thesis is the accuracy of analyst forecasts, but it does not attempt to distinguish between different biases or test for one specific bias in analyst forecasts, since alternative biases often cannot be distinguished based on the available empirical data (Gong, Li & Wang, 2011). However, the summary of prior literature on biases in analyst forecasts in section 2 serves as a foundation for predictions on the direction of biases in analyst forecasts. Furthermore, section 2 elaborates on the association between analyst forecasts and management forecasts, the relative accuracy of analyst forecasts, and the market response to earnings forecasts and earnings announcements. Section 3 presents the sample selection procedure and the methodology used to test the hypotheses. Sections 4 and 5 provide descriptive statistics and regression results, respectively. Section 6 addresses robustness issues and section 7 concludes.

2 Literature review

2.1 Accuracy of management forecasts

Even though biases in management forecasts are not the topic of this thesis, they affect the accuracy of analyst forecasts relative to management forecasts. Therefore, a review of empirical findings on biases in management forecasts is provided, serving as a foundation for predictions on analysts’ relative forecast accuracy. Existing literature proposes various factors that affect the accuracy of management forecasts: accuracy is related to several firm characteristics, managerial incentives might result in biases in management forecasts, and concepts from behavioural finance are also used to explain managerial biases.


Lennox and Park (2006) state that firms provide earnings forecasts to mitigate the asymmetric information problem between managers and investors. This reduces investors’ costs of acquiring private information, which implies a lower cost of capital and increased stock liquidity for firms. Managers are more likely to issue an earnings forecast if the magnitude of earnings news is larger and if the market reaction to each unit of earnings news is larger. Market reactions to management forecasts confirm that investors perceive management forecasts as informative: management forecasts that convey good news are associated with positive abnormal stock returns, and management forecasts that convey bad news are associated with negative abnormal stock returns (McNichols, 1989; Patell, 1976; Penman, 1980; Waymire, 1986). Managers also issue forecasts to reduce litigation risk. Management forecasts are used to disclose bad news to investors in a timely manner, which reduces the risk of litigation and litigation costs (Skinner, 1994).

Several firm characteristics affect the accuracy of management forecasts. In a logistic regression, Baginski and Hassell (1997) find managers produce more accurate earnings forecasts for firms that are followed by more analysts. Managers seem to use forecasts to affect analysts’ beliefs: when more analysts follow a firm, the benefits of guiding analysts outweigh the costs of increased forecast precision. Furthermore, forecast accuracy decreases in firm size and in the volatility of security returns. For larger firms more public information is available, so the benefits of increased forecast precision are limited; forecast accuracy decreases in the volatility of security returns because earnings uncertainty affects the precision of forecasts. In addition, Chen (2004) finds that firms with less accounting flexibility and firms that are subject to exogenous shocks issue less accurate forecasts.

Rogers and Stocken (2005) examine management forecasts in relation to managerial incentives to issue biased forecasts. They state forecasts are less biased when investors are better able to detect misrepresentations. They use earnings volatility as a proxy for investors’ forecasting difficulty and mention four factors that affect managers’ intentional forecasting behaviour. First, managers will issue more pessimistic forecasts when the chance of being sued for providing misleading information is larger. Second, managers might release misleading forecasts if they can benefit from insider transactions based on stock mispricing induced by these forecasts. Third, managers of firms in financial distress might provide optimistic forecasts to secure their position.


Fourth, managers in concentrated industries possibly release pessimistic forecasts to discourage competitors from entering their market. Hence, management forecast errors are related to managerial incentives in interaction with the market’s ability to detect misrepresentations.

Managers also face an incentive to manipulate earnings expectations downward using management forecasts. By guiding analysts’ and investors’ expectations downward during a fiscal period, managers can more easily meet or beat expectations at the announcement of actual earnings. Bartov, Givoly and Hayn (2002) find that firms that meet or beat analysts’ expectations are associated with higher abnormal returns than firms that fail to meet analysts’ expectations. In addition, Kross, Ro and Suk (2011) find that firms that consistently meet or beat analysts’ earnings expectations issue bad news management forecasts more frequently than firms that do not have a record of meeting or beating analysts’ expectations. Moreover, bad news forecasts issued by these firms are more pessimistic on average, and the effect is more pronounced after the implementation of Reg FD. Hence, managers manipulate earnings expectations to ensure the earnings surprise at the announcement of actual earnings is positive (Bergman & Roychowdhury, 2008; Cotter, Tuna & Wysocki, 2006; Tan, Libby & Hunton, 2002). Even though this downward manipulation of earnings expectations comes at the cost of lower forecasts earlier in the quarter, Chan et al. (2007) find the positive market reaction at the announcement of actual earnings dominates the negative market reaction associated with lower earnings forecasts.

Finally, empirical evidence suggests concepts from behavioural finance are related to managers’ forecast accuracy. Hribar and Yang (2013) examine how overconfidence affects forecasts, dividing overconfidence into over-optimism and miscalibration. Over-optimism refers to managers’ unrealistically high expectations of uncertain future outcomes; miscalibration refers to managers underestimating the variance of uncertain outcomes. Managerial overconfidence increases the likelihood that management issues a forecast. Moreover, overconfident managers issue more optimistic forecasts that they subsequently miss. Finally, overconfident managers issue more specific forecasts, with a narrower forecast range. Libby and Rennekamp (2012) also examine management forecasts in relation to managerial overconfidence. In a survey among experienced financial managers, they find managerial overconfidence affects managers’ ability to predict firm performance and increases their willingness to issue forecasts.


2.2 Accuracy of analyst forecasts

Overall, empirical evidence suggests financial analysts’ forecasts are useful to capital market participants. Analyst forecasts are superior to time-series predictions of earnings: abnormal stock returns at earnings announcements are explained significantly better by analysts’ earnings forecasts than by time-series models (Brown, Hagerman, Griffin & Zmijewski, 1987; Brown & Rozeff, 1978). Moreover, analyst forecasts themselves affect stock prices (Francis & Soffer, 1997; McNichols, 1989). There is, however, also evidence of biases in analyst forecasts, and this evidence is inconclusive with respect to the direction of these biases.

Lim (2001) states a positive and predictable bias in analysts’ forecasts is rational, given analysts want to issue optimal forecasts with a minimum error. The intuition is that managers prefer optimistic forecasts, since these support higher stock valuations and thereby increase managerial compensation levels. A firm’s management is an important source of information to analysts. By issuing positively biased forecasts, analysts please managers and improve access to management information, which reduces future forecast errors. Especially for firms with uncertain information environments and for analysts with an interest in gaining access to management, it might be optimal to report positively biased forecasts to ensure access to information. Lim (2001) finds that companies with a more uncertain information environment are indeed associated with more optimistically biased analyst forecasts. Optimistic biases are larger for companies that are smaller, have more volatile earnings, experience negative prior earnings surprises, or have poor past stock returns. For these firms analysts seem to underreact to negative information by not fully revising their forecasts downward. A possible explanation is that when a company is sitting on bad news it is important for analysts to remain optimistic to ensure access to management information. Furthermore, Lim (2001) finds that analysts who rely less on management as a source of information, such as experienced analysts and analysts employed by large brokerage firms, are associated with smaller forecast biases.

Bergman and Roychowdhury (2008) relate analyst optimism to investor sentiment. They use the Michigan Consumer Confidence Index to proxy for investor sentiment and forecast errors to proxy for analyst optimism. Their results indicate analysts are optimistic on average and that analyst optimism increases in investor sentiment. Hence, an optimistic bias in analyst forecasts might result from investor sentiment.


Hong and Kubik (2003) relate biases in analyst forecasts to career concerns. In a logistic regression they find brokerage houses evaluate analysts based on the accuracy of their forecasts, implying security analysts are incentivized to issue accurate forecasts. However, controlling for forecast accuracy, they find analysts who issue relatively more optimistic forecasts are more likely to move up the hierarchy of brokerage houses and less likely to be fired. This effect holds especially for stocks that are underwritten by the brokerage house at which an analyst is employed. More optimistic forecasts promote stocks and thereby generate trade and investment banking business for brokerage houses. Hence, optimistic biases in analyst forecasts might result from career concerns.

In line with the findings of Hong and Kubik (2003), Dugar and Nathan (1995) find forecasts of analysts employed at a brokerage house that also provides investment banking services are more optimistic than earnings forecasts of analysts who are not related to investment banking. In addition, Dechow, Hutton and Sloan (1999) find sell-side analysts’ forecasts are more optimistic around equity offerings, especially when analysts are employed by the lead underwriter of the offering. This confirms analysts are incentivized to provide optimistic forecasts to generate higher investment banking fees for their brokerage house.

Chan et al. (2007), on the other hand, state analysts face a conflict of interest between providing accurate earnings forecasts and pleasing managers. Managers might prefer a positive earnings surprise at the announcement of actual earnings, which they can achieve when analysts issue pessimistic forecasts. A disproportionately large number of positive announcement surprises in the sample of Chan et al. (2007) indicates analysts follow managers in their downward manipulation of earnings expectations. Analysts might do so to please managers and win investment banking clients. Another explanation is that managers manipulate analysts’ expectations, which would imply analysts do reveal their true beliefs through their forecasts. However, Chan et al. (2007) find the fraction of positive earnings announcement surprises is smaller for analysts who are not related to investment banking business and have no incentive to please managers. Moreover, the fraction of positive earnings surprises is larger for growth firms, which are more likely to generate investment banking income through capital issues and mergers. This suggests analysts intentionally adjust their earnings forecasts downward to please managers.


Tan et al. (2002) also note managers prefer actual earnings to meet or exceed the consensus analyst forecast to avoid a negative earnings surprise. In an experiment with highly experienced sell-side analysts, they examine how security analysts respond to managerial earnings guidance. They find analyst forecasts are lower when management is too optimistic and higher when management is too pessimistic, relative to analyst forecasts for firms that provide accurate management forecasts. This suggests analysts anticipate managers’ efforts to manipulate earnings expectations and filter out the effects.

Overall, existing evidence suggests analysts face conflicting incentives when providing forecasts. Optimistic forecasts result in favourable career prospects, investment banking income for analysts’ brokerage houses, and improved access to management information. On the other hand, pessimistic forecasts might please managers and thereby generate investment banking income.

2.3 The relationship between analyst forecasts and management forecasts

Analyst forecasts are affected by management forecasts. Management forecasts convey news to analysts. The magnitude and direction of this news, the forecast surprise, depend on the deviation of the management forecast from analysts’ forecast at the time the management forecast is released. In general, analysts revise their forecasts in the direction of the news conveyed by a management forecast. If a management forecast is higher than the prevailing analyst forecast, it conveys good news and analysts revise their forecasts upward; if it is lower, it conveys bad news and analysts revise their forecasts downward. Moreover, the magnitude of analysts’ forecast revisions is positively related to the magnitude of the news in the management forecast (Hassell et al., 1988; Jennings, 1987). In addition, Baginski and Hassell (1997) find analysts follow firms more closely in the fourth quarter. The relationship between management forecast news and analysts’ revisions is stronger closer to the announcement of actual earnings; fourth-quarter management forecasts seem to contain more earnings information than earlier forecasts.
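The definitions above (the forecast surprise as the management forecast minus the prevailing analyst consensus, and the revision that follows) can be sketched in a few lines. This is an illustrative Python example with hypothetical, unscaled numbers; empirical work typically scales these quantities, for example by stock price, before comparing across firms.

```python
# Sketch of the forecast surprise and revision variables described above.
# Hypothetical, unscaled EPS numbers for illustration only.

def forecast_surprise(mgmt_forecast, prevailing_consensus):
    """News in a management forecast: positive = good news, negative = bad news."""
    return mgmt_forecast - prevailing_consensus

def consensus_revision(consensus_after, consensus_before):
    """Analysts' response: positive = upward revision."""
    return consensus_after - consensus_before

# Good news example: management guides above the prevailing consensus,
# and analysts revise upward, but not one-for-one.
surprise = forecast_surprise(1.20, 1.00)   # positive -> good news
revision = consensus_revision(1.12, 1.00)  # positive -> upward revision

# Revision per unit of news: a crude gauge of analysts' responsiveness
# to the management forecast (0.6 in this example).
responsiveness = revision / surprise
```

Comparing this responsiveness ratio across good news and bad news subsamples is, in spirit, how the asymmetry in analysts’ revisions is assessed later in the thesis.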

Jennings (1987) additionally states the reaction of analysts to a management forecast depends on the believability of the forecast. For two management forecasts with the same surprise, the change in analysts’ beliefs and the resulting security price movements are larger for the more credible forecast. Hence, the magnitude of analysts’


forecast revisions proxies for the believability of a management forecast: confirming revisions suggest a management forecast is more believable. In a cross-sectional regression without control variables, Hassell et al. (1988) find good news management forecasts are associated with more moderate revisions in analyst forecasts. This suggests it is more difficult for managers to convince analysts of the objectivity and accuracy of good news than of bad news. In this context, Williams (1996) relates analysts’ forecast revisions following a management forecast to the accuracy of prior management forecasts, controlling for other indicators of believability. The accuracy of prior forecasts does not affect the magnitude of analysts’ forecast revisions in reaction to bad news management forecasts. Revisions in reaction to good news forecasts, however, are positively related to the accuracy of prior forecasts: analysts revise their forecasts more in reaction to a good news forecast if prior management forecasts were more accurate. This again suggests the credibility of management forecasts plays a larger role for good news than for bad news forecasts.

On the one hand, empirical evidence on positive biases in analyst forecasts suggests analysts have the propensity to revise their forecasts more in reaction to a good news management forecast than in reaction to a bad news management forecast (Dugar & Nathan, 1995; Hong & Kubik, 2003; Dechow et al., 1999; Lim, 2001). On the other hand, the credibility of management forecasts might play a role. If analysts perceive bad news management forecasts as more credible than good news management forecasts, they might revise their forecasts more in reaction to a bad news management forecast than in reaction to a good news management forecast (Hassell et al., 1988; Williams, 1996).

H01: Security analysts revise their earnings forecasts more in response to a management forecast that conveys good news than in response to a management forecast that conveys bad news.

H11: Security analysts revise their earnings forecasts more in response to a management forecast that conveys bad news than in response to a management forecast that conveys good news.


2.4 Accuracy of analyst forecasts relative to management forecasts

Prior literature on the accuracy of analyst forecasts relative to management forecasts reports somewhat contradictory empirical findings. Moreover, most of this literature predates the implementation of Reg FD, which has likely affected analysts’ relative forecast accuracy.

Hassell and Jennings (1986) consider the accuracy of analyst forecasts relative to management forecasts in relation to the timing of these forecasts. In a Wilcoxon signed-rank test they find management forecasts are more accurate than analyst forecasts reported prior to, simultaneous with, and up to four weeks after the release of a management forecast. Analyst forecasts become significantly more accurate again nine or more weeks after a management forecast. Waymire (1986) also examines the accuracy of analyst forecasts relative to management forecasts in relation to timing, with similar results: management forecasts are on average more accurate than analyst forecasts made prior to management forecasts, while analyst forecasts made after management forecasts are on average no more accurate than management forecasts. Waymire (1986) suggests this result is driven by managers’ informational advantages. Hutton, Lee and Shu (2012) examine the relative accuracy of analyst forecasts in relation to these informational advantages. They find analysts have an informational advantage over managers at the macroeconomic level, resulting in relatively more accurate analyst forecasts when firms are exposed to macroeconomic factors such as regulation and the business cycle. Managers, on the other hand, have an informational advantage in situations where analysts find it hard to anticipate management’s response, such as when a firm has excess capacity or is experiencing losses. At the industry level, management and analysts have comparable ability to forecast earnings. Additionally, Hutton et al. (2012) find the frequency with which analyst forecasts are more accurate than management forecasts is only 50%. Consistently, Ruland (1978) finds no statistically significant difference in accuracy between analyst and management forecasts in a Wilcoxon matched-pairs signed-ranks test. Jaggi (1980), however, finds management forecasts are more accurate than analyst forecasts issued prior to the management forecast.

Even though prior literature is inconclusive about whether analyst or management forecasts are more accurate, there is empirical evidence suggesting analysts are able


to de-bias management forecasts. Gong et al. (2011) find analysts are less responsive to news in management forecasts if prior management forecasts were more biased. Upward analyst forecast revisions in reaction to good news management forecasts are smaller when prior management forecasts were too optimistic, and downward revisions in reaction to bad news management forecasts are smaller when prior management forecasts were too pessimistic. In addition, Hassell et al. (1988) compare revisions in analyst forecasts following the release of a management forecast to revisions in analyst forecasts for a control sample of firms that do not provide management forecasts. In a Wilcoxon signed-ranks test they find analysts’ forecast errors decline following the release of a management forecast relative to the control sample. This implies management forecasts provide useful firm-specific information to analysts, which results in more accurate analyst forecasts. Hassell et al. (1988) additionally relate analysts’ forecast revisions to ex-post management and analyst forecast errors. In a cross-sectional regression without control variables they find analysts are able to distinguish accurate management forecasts from less accurate ones and revise their forecasts accordingly. When initial analyst forecasts are more accurate than a management forecast, analysts do not revise their forecasts on average. When a management forecast surprise indicates the right direction but exaggerates the magnitude of the news, analysts revise their forecasts moderately. Consistently, Kross et al. (2011) find analysts’ forecast revisions following bad news management forecasts are less pronounced for firms with a record of guiding expectations downward.

The findings of Hassell et al. (1988), Gong et al. (2011) and Kross et al. (2011) suggest analysts anticipate biases in management forecasts and filter out these biases. Moreover, the implementation of Reg FD resulted in an increase in managerial earnings guidance and changed the environment in which earnings forecasts are produced (Hirst et al., 2008). Given that analysts use information in management forecasts and are able to detect biases in these forecasts, the relative accuracy of analyst forecasts might have increased after the implementation of Reg FD.

Empirical evidence on analyst forecasts suggests, however, that analysts de-bias management forecasts asymmetrically. Under the assumption of a positive bias in analyst forecasts, analysts are expected to lean more against optimistic management forecasts than against pessimistic management forecasts (Dugar and Nathan, 1995; Hong and Kubik, 2003; Dechow et al., 1999; Lim, 2001). This implies analysts de-bias pessimistic management forecasts more than optimistic management forecasts. Under the assumption of a pessimistic bias in analyst forecasts, analysts are expected to lean more against pessimistic management forecasts than against optimistic ones. A pessimistic bias in analyst forecasts might result from analysts' desire to please managers (Chan et al., 2007) or from analysts perceiving pessimistic management forecasts as more credible than optimistic management forecasts (Hassell et al., 1988; Williams, 1996). This implies analysts de-bias optimistic management forecasts more than pessimistic management forecasts.

H02: Security analysts de-bias management forecasts more when management forecasts are too pessimistic relative to when management forecasts are too optimistic.

H12: Security analysts de-bias management forecasts more when management forecasts are too optimistic relative to when management forecasts are too pessimistic.

2.5 The market response to earnings announcements

Part of the existing literature on analyst forecasts, management forecasts and market responses considers the stock price reaction to management forecasts; other work considers the stock price reaction to earnings announcements.

Patell (1976) examines the market response to management earnings forecasts and finds the sign and magnitude of a management forecast surprise are positively related to market reactions. Also Penman (1980) and Waymire (1984) find management forecasts are associated with significant abnormal stock returns, which increase in the magnitude of the news contained in the forecast. This suggests that management forecasts are used to convey information to investors and that this information is relevant to investors on average. Additionally, Pownall, Wasley and Waymire (1993) examine the stock price reaction to different types of management forecasts. They find point, range and one-sided directional forecasts are all considered informative by investors, with no significant differences in stock price reactions across the alternative forecast types. The stock price reaction to earnings announcements is larger than the stock price reaction to management forecasts, however, indicating earnings announcements are perceived as more credible.


Jennings (1987) additionally takes analysts' forecast revisions into consideration. When a management forecast conveys good news, analysts' forecast revisions have significant marginal explanatory power for abnormal stock returns. This suggests investors use analysts' forecast revisions to confirm the news conveyed by a good news management forecast. When a management forecast conveys bad news, however, analysts' forecast revisions have no additional explanatory power for abnormal stock returns. Hence, it seems to be more difficult for investors to assess the credibility of good news forecasts than of bad news forecasts, and investors base their earnings expectations on both analyst and management forecasts when a management forecast conveys good news.

The findings of Cornell and Landsman (1989) indicate that stock price reactions at earnings announcements are also related to analyst forecasts. Positive analyst forecast errors, which imply a negative earnings surprise at the announcement of actual earnings, are associated with negative abnormal returns. Negative analyst forecast errors are associated with positive abnormal returns. Also Bartov et al. (2002) relate abnormal returns at earnings announcements to analysts' forecast errors. They find that firms that meet or beat analysts' expectations are associated with higher abnormal returns than firms that fail to meet analysts' expectations. This suggests firms might manage analysts' and investors' earnings expectations downward to benefit from higher returns at the announcement of actual earnings.

However, investors might be able to filter out biases in earnings forecasts. McNichols (1989) finds abnormal stock returns in reaction to management forecasts are associated with both the forecast surprise and the management forecast error. A differential reaction to forecasts that ex-post turn out to be too low or too high indicates that investors take into account forecast errors. The findings of McNichols (1989) suggest investors correct errors in management forecasts, because they detect biased forecasting behaviour. In this context Hutton and Stocken (2009) examine whether the accuracy of prior forecasts affects investors’ reaction to current forecasts. They find a firm’s prior forecast accuracy is positively related to its current forecast accuracy, which suggests biases in management forecasts are persistent. Moreover the stock price response to the news in a current forecast is positively related to the accuracy of prior forecasts. Hence, investors take into account biases in forecasts.


Under the assumption that analysts de-bias management forecasts asymmetrically but investors are able to detect biases in analyst and management forecasts, the stock price response associated with pessimistic analyst forecasts is similar to the stock price response associated with optimistic analyst forecasts. If investors are unable to filter out biases in analyst forecasts, however, the stock price response associated with pessimistic analyst forecasts differs from the stock price response associated with optimistic analyst forecasts.

H03: The market response at earnings announcement associated with analyst forecasts that are too pessimistic is similar to the market response associated with analyst forecasts that are too optimistic.

H13: The market response at earnings announcement associated with analyst forecasts that are too pessimistic differs from the market response associated with analyst forecasts that are too optimistic.

3 Sample selection and methodology

3.1 Sample selection

The Guidance database of Thomson Reuters' Institutional Brokers' Estimate System (I/B/E/S) is used to collect management earnings forecast information. Forecasts are quarterly, one-period-ahead forecasts. The implementation of Reg FD in October 2000 resulted in a large increase in the volume of management earnings forecasts. Moreover, empirical evidence indicates that Reg FD affected management forecast practices, which implies management forecasts from before the implementation of Reg FD are not comparable to management forecasts issued after October 2000 (Heflin & Subramanyam, 2013). Therefore the sample is restricted to dollar-denominated management forecasts made between October 2001 and June 2014 by U.S. firms.

Analysts and managers can revise their forecasts. Therefore only individual analysts' most recent forecasts are related to managers' most recent forecast. For each forecast period the last forecast issued by managers and analysts is assumed to be their most accurate forecast for that period. Management forecasts that are issued after the end of the fiscal period to which they pertain are deleted from the sample, since these forecasts are pre-announcements of actual earnings (Rogers & Stocken, 2005).

Management earnings forecasts are provided with varying levels of specificity. I/B/E/S reports point, range and one-sided directional forecasts, but excludes qualitative forecasts. The unmodified values of point and one-sided directional management forecasts reported in I/B/E/S are used as proxies for management forecasts. For the purpose of comparability, range estimates are converted into point estimates by taking the average of the range endpoints (Baginski & Hassell, 1997; Hassell et al., 1988; Jennings, 1987; Rogers & Stocken, 2005; Williams, 1996).

Management forecasts are merged with financial analysts' earnings forecasts and actual earnings per share retrieved from the I/B/E/S Detail History database. Observations for which analyst forecasts are missing are deleted. Moreover, individual analysts' forecasts made after actual earnings are announced are deleted. Since a consensus (median) analyst forecast is required for all tests, management forecasts for which fewer than three analyst forecasts are reported are excluded from the sample.
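As an illustration, the consensus construction and the three-analyst screen described above might be sketched as follows in pandas; the column names are hypothetical, not actual I/B/E/S field names:

```python
import pandas as pd

# Hypothetical analyst detail file: one row per analyst forecast.
forecasts = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B"],
    "quarter": ["2010Q1"] * 5,
    "analyst": [1, 2, 3, 1, 2],
    "eps_forecast": [1.10, 1.20, 1.05, 0.50, 0.55],
})

# Keep only firm-quarters covered by at least three analysts,
# then form the consensus as the median forecast.
grouped = forecasts.groupby(["firm", "quarter"])
covered = grouped.filter(lambda g: g["analyst"].nunique() >= 3)
consensus = covered.groupby(["firm", "quarter"])["eps_forecast"].median()
print(consensus)  # firm B is dropped: only two analysts cover it
```

Firm A keeps its three forecasts and gets a median consensus; firm B, with two analysts, falls out of the sample.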

Stock prices are retrieved from the Centre for Research in Security Prices (CRSP). The I/B/E/S Guidance database reports management forecasts of earnings per share that are adjusted for stock splits. Therefore prices that are related to forecasts are adjusted for stock splits using the CRSP adjustment factor for prices. To mitigate the small denominator problem when scaling by price, firms with a pre-forecast share price below $2.00 are excluded from the sample.

Data on balance sheet items and net income are retrieved from the CRSP/Compustat Merged database. Earnings forecast data are merged with CRSP stock prices and CRSP/Compustat balance sheet data and observations for which variables are missing are deleted from the sample.

The test for hypothesis one makes different assumptions on analyst forecasts than the tests for hypotheses two and three. Therefore the final sample used to test hypothesis one differs from the final sample used to test hypotheses two and three. To test hypothesis one the consensus (median) analyst forecast in the month prior to and the consensus (median) analyst forecast in the month following a given management forecast are constructed. Hence, only individual analysts' forecasts made within a [-31, 31] day range around a management forecast are used. Since hypothesis one considers analyst forecast revisions, only the most recent forecast for every individual analyst just before and just after a management forecast is used (Gong et al., 2011). Waymire (1986) finds that when bad news announcements are issued, good news is often disclosed simultaneously, which affects investors' and analysts' reactions. To control for this effect observations for which dividends are announced in the [-3, 3] day period around the management forecast are excluded from the sample. The final sample used to test hypothesis one consists of 8,228 management forecasts divided over 1,539 firms. The most conservative specification of the test for hypothesis one includes a measure of prior management forecast usefulness. For this specification any firm's first forecast in the sample is deleted, reducing the sample size to 6,724 observations divided over 1,099 firms.
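The window filtering described above might be sketched as follows; this is a minimal illustration with hypothetical column names, and the dividend-window exclusion is indicated but not implemented:

```python
import pandas as pd

# Illustrative merged file: each analyst forecast already matched to the
# management forecast date (mf_date) for the same firm-quarter.
df = pd.DataFrame({
    "firm": ["A"] * 4,
    "analyst": [1, 1, 2, 2],
    "forecast_date": pd.to_datetime(
        ["2010-01-02", "2010-01-20", "2010-02-05", "2010-02-20"]),
    "mf_date": pd.to_datetime(["2010-02-01"] * 4),
    "eps_forecast": [1.00, 1.05, 1.10, 1.15],
})

# Keep forecasts within 31 days before or after the management forecast.
days = (df["forecast_date"] - df["mf_date"]).dt.days
pre = df[days.between(-31, -1)]
post = df[days.between(1, 31)]

# Each analyst's last pre-forecast and first post-forecast.
last_pre = pre.sort_values("forecast_date").groupby(["firm", "analyst"]).tail(1)
first_post = post.sort_values("forecast_date").groupby(["firm", "analyst"]).head(1)

# Observations with a dividend announcement in the [-3, 3] day window
# around mf_date would be dropped at this point (not shown).
```

Here analyst 1's January 20 forecast survives as the last pre-forecast and analyst 2's February 5 forecast as the first post-forecast.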

The test for hypothesis two considers the accuracy of analysts' forecasts relative to the accuracy of management forecasts. Individual analyst forecasts included in the sample used to test hypothesis one are conditioned on being issued in the month preceding or the month following a management forecast. For hypothesis two, individual analysts' forecasts included in the sample are conditioned on being the most recent forecast for an analyst and forecast period, since the most recent forecast is assumed to be the most accurate forecast made by that analyst (Rogers & Stocken, 2005). This implies individual analysts' forecasts are included in the sample even if they are made before the management forecast. The intuition here is that analysts revise their forecasts in reaction to a management forecast, taking into account the information in the management forecast. If individual analysts do not revise their forecasts, they do not perceive the management forecast as accurate or credible. Hence, forecasts that are not revised contain information on analysts' expectations and do not become stale (Hassell et al., 1988). The final sample used to test hypothesis two consists of 26,421 management forecasts divided over 2,272 firms. The most conservative specification of the test for hypothesis two includes measures of the prior management forecast error and the prior analyst forecast error. For this specification any firm's first forecast in the sample is deleted, reducing the sample size to 24,254 observations divided over 1,858 firms.

The test for hypothesis three relates analyst forecasts to abnormal returns at the announcement of actual earnings. Therefore individual firms' returns and value-weighted returns on the market index are retrieved from CRSP. Observations for which no returns or market returns are available in CRSP are deleted from the sample. The final sample used to test hypothesis three consists of 24,185 earnings announcements divided over 1,861 firms.

3.2 Methodology

The test for hypothesis one is based on methodology used by, among others, Baginski and Hassell (1990), Gong et al. (2011), Williams (1996) and Waymire (1986). If management releases an earnings forecast that deviates from the consensus (median) analyst earnings forecast at that moment, the management forecast conveys news to analysts. To determine whether a management forecast reveals good, bad or confirming news the management forecast is compared to the prevailing consensus (median) analyst forecast. A management forecast higher than the prevailing consensus analyst forecast contains good news and is defined as a positive management forecast surprise. A management forecast lower than the prevailing consensus analyst forecast contains bad news and is defined as a negative management forecast surprise. If a management forecast is equal to the consensus analyst forecast it is defined as a confirming management forecast surprise. In reaction to a management forecast surprise security analysts revise their forecasts. This revision, the analyst forecast revision, is measured as the change from the consensus analyst forecast prior to a management forecast to the consensus analyst forecast subsequent to it, and captures the reaction of analysts to the news conveyed by the management forecast.

Management forecast surprise is defined as:

MFS_{i,t} = (MF_{i,t} - PAF_{i,t}) / P_i   (1)

Where MFS_{i,t} is the management forecast surprise for firm i for quarter t. PAF_{i,t} is the consensus (median) financial analyst forecast for firm i for quarter t in the month preceding the management earnings forecast. P_i is the share price of firm i five days prior to the management earnings forecast.

Analyst forecast revision is defined as:

AFR_{i,t} = (AF_{i,t} - PAF_{i,t}) / P_i   (2)

Where AFR_{i,t} is the revision in the consensus analyst forecast for firm i for quarter t around the management forecast for that quarter. AF_{i,t} is the consensus analyst forecast for firm i in the month following the management earnings forecast for quarter t. PAF_{i,t} is the consensus analyst forecast for firm i in the month preceding the management earnings forecast for quarter t. P_i is the share price of firm i five days prior to the management earnings forecast.
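A minimal sketch of the surprise and revision measures in equations (1) and (2), with purely illustrative inputs:

```python
# Price-scaled surprise and revision measures; variable names are illustrative.
def forecast_surprise(mf, prior_consensus, price):
    """Management forecast surprise, equation (1): (MF - PAF) / P."""
    return (mf - prior_consensus) / price

def forecast_revision(post_consensus, prior_consensus, price):
    """Analyst forecast revision, equation (2): (AF - PAF) / P."""
    return (post_consensus - prior_consensus) / price

# A management forecast above the prior consensus is a positive surprise;
# analysts respond with an upward (but smaller) revision.
mfs = forecast_surprise(mf=1.20, prior_consensus=1.10, price=20.0)
afr = forecast_revision(post_consensus=1.18, prior_consensus=1.10, price=20.0)
print(round(mfs, 6), round(afr, 6))  # 0.005 0.004
```

Both measures are scaled by the same pre-forecast price, so they are directly comparable.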

There is little evidence on how rapidly security analysts respond to new information. Analysts might need some time to evaluate the news conveyed by a management forecast, especially if the forecast differs significantly from the prior consensus analyst forecast (Baginski & Hassell, 1990). On the other hand, analyst forecasts issued a longer time after a management forecast are more likely to be based on information that was not available at the time the management forecast was released (Jennings, 1987). Gong et al. (2011) use the median of individual analysts' first forecasts issued within thirty days after a management forecast minus the median of individual analysts' last forecasts issued within thirty days before a management forecast to proxy for analysts' forecast revision. Hassell et al. (1988) use a [-4, 4] week window around a management forecast to determine analysts' forecast revision. Baginski and Hassell (1990) construct analysts' forecast revisions using two, four and six week intervals after management forecasts. Their findings indicate that the length of the interval does not affect the results, and they find a positive relationship between management forecast surprises and analyst forecast revisions for all intervals. Overall, empirical evidence suggests that the period from one month before to one month after a management forecast provides a representative time horizon for analysts' forecast revisions.

The findings of Baginski and Hassell (1990), Gong et al. (2011), Williams (1996) and Waymire (1986) indicate revisions in the consensus analyst forecast are positively related to the sign and magnitude of the management forecast surprise. A positive management forecast surprise is associated with upward revisions in analyst forecasts, whereas a negative management forecast surprise is associated with downward revisions (Baginski & Hassell, 1990; Gong et al., 2011; Jennings, 1987; Williams, 1996). The null prediction of hypothesis one conjectures analysts respond to management forecasts in an asymmetric manner due to their optimistic bias. This implies analysts' forecast revisions are larger subsequent to a positive management forecast surprise than subsequent to a negative management forecast surprise. The alternative prediction of hypothesis one states analysts tend to be pessimistic and respond more to a negative management forecast surprise than to a positive one. To test which asymmetric prediction holds the following cross-sectional ordinary least squares regression is run with standard errors adjusted for heteroskedasticity and firm-level clustering:

AFR_{i,t} = β_0 + β_1 MFS_{i,t} + β_2 Positive_{i,t} + β_3 Positive_{i,t} × MFS_{i,t} + Industry Fixed Effects_i + Time Fixed Effects_t + Σ Controls_{i,t} + u_{i,t}   (3)

The dependent variable is analyst forecast revision, AFR_{i,t}, as defined in equation (2). Since the null hypothesis states the revision in analyst forecasts is larger when the management forecast surprise is positive, a dummy variable, Positive_{i,t}, is included. This dummy variable takes the value of one if a management forecast surprise is positive and zero otherwise. MFS_{i,t} measures the sign and magnitude of a management forecast surprise, as defined in equation (1). Given existing empirical findings the coefficient β_1 is expected to be positive and significant: a larger management forecast surprise is expected to be associated with larger analyst forecast revisions. The primary interest is the coefficient on the interaction term, Positive_{i,t} × MFS_{i,t}. The coefficient β_3 measures the differential effect of a management forecast surprise on analyst forecast revisions for positive relative to negative management forecast surprises. A positive coefficient β_3 is consistent with the null prediction of hypothesis one, because it implies analysts revise their forecasts more subsequent to a positive management forecast surprise than subsequent to a negative management forecast surprise of the same magnitude. A negative coefficient β_3 is consistent with the alternative prediction of hypothesis one and implies analysts revise their forecasts more subsequent to a negative management forecast surprise than subsequent to a positive one, controlling for the magnitude of the surprise.
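As a hedged illustration of this specification, the sketch below estimates the interaction model of equation (3) on simulated data with statsmodels, using heteroskedasticity-robust standard errors clustered by firm; the data, coefficients and variable names are synthetic, and the controls and fixed effects are omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a panel where the slope of AFR on MFS is steeper for positive
# surprises (true beta3 = 0.3), mimicking the null prediction.
rng = np.random.default_rng(0)
n = 500
mfs = rng.normal(0, 0.01, n)
positive = (mfs > 0).astype(int)
firm = rng.integers(0, 50, n)
afr = 0.4 * mfs + 0.3 * positive * mfs + rng.normal(0, 0.002, n)
data = pd.DataFrame({"afr": afr, "mfs": mfs, "positive": positive, "firm": firm})

# OLS with firm-clustered (heteroskedasticity-robust) standard errors.
model = smf.ols("afr ~ mfs + positive + positive:mfs", data=data).fit(
    cov_type="cluster", cov_kwds={"groups": data["firm"]})
print(model.params["positive:mfs"])  # recovers a value near the simulated 0.3
```

A positive estimate on `positive:mfs` would correspond to the null prediction; a negative one to the alternative.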

Existing literature on analysts’ forecast revisions identifies several variables that affect analysts’ reaction to management forecasts. These variables are related to the information environment of a firm and the credibility of its management forecasts. When informational uncertainty is larger it becomes more difficult for analysts to make

(25)

24 accurate earnings forecasts and analysts might rely more on management forecasts as a source of information. This results in analysts revising their forecasts to be in line with management forecasts. When management forecasts are perceived less credible or less accurate however, analysts’ forecast revisions subsequently to a management forecast decrease in magnitude.

The information environment is likely to be richer for firms that are larger. However, larger firms might be more complex, which reduces the accuracy of their management forecasts. Therefore the natural log of a firm's market capitalization five days prior to the management forecast is included to control for size (Baginski & Hassell, 1997; Hutton, Lee, & Shu, 2012; Lim, 2001). The information environment is also affected by the number of analysts following a firm. When more analysts cover a firm, the firm is subject to greater scrutiny and managers have a stronger incentive to maintain a reputation of credibility. Therefore analyst coverage, measured by the natural log of the number of analyst forecasts issued within sixty days before the management forecast, is controlled for (Hutton et al., 2012; Lennox & Park, 2006; Lim, 2001).

Rogers and Stocken (2005) identify several other variables that proxy for informational uncertainty and forecasting difficulty. First, the dispersion in analysts' forecasts for a certain firm is an indicator of its forecasting difficulty. This dispersion is measured by the standard deviation of individual analysts' forecasts scaled by the mean consensus analyst forecast, both over the sixty days prior to the management forecast (Rogers & Stocken, 2005; Hutton et al., 2012). Second, it is more difficult for analysts to predict earnings when a firm is unprofitable than when it is profitable. Two dummy variables are included to control for this asymmetric effect. Lagged loss equals one if a firm reports negative earnings in the prior quarter and zero otherwise. Predicted loss equals one when the management earnings forecast is negative and zero otherwise (Gong et al., 2011; Rogers & Stocken, 2005). Third, it is more difficult to forecast earnings when a firm's true earnings are more volatile. Since reported earnings might be subject to earnings smoothing and manipulation, the volatility of true earnings is measured by the standard deviation of a firm's daily stock price over the period from 120 days to two days prior to a management forecast (Abarbanell, 1991; Lim, 2001). Also the bid-ask spread is related to the level of asymmetric information in the market and is expected to increase in the level of uncertainty about a firm's earnings announcement. Therefore the average relative bid-ask spread over a twenty-day trading period ending two days before the management earnings forecast is controlled for (Rogers & Stocken, 2005). Moreover, more uncertain forecasts are expected to have wider ranges. The width of range estimates proxies for uncertainty revealed by managers. Width is scaled by the stock price five days prior to the management forecast and set to zero for point forecasts (Gong et al., 2011; Hutton et al., 2012). Also growth opportunities and leverage are likely to affect forecasting behaviour. On the one hand high-growth firms are more visible and attract more analyst coverage; on the other hand their earnings might be more difficult to predict. A firm's market-to-book value of equity proxies for growth opportunities. MB is defined as a firm's market capitalization five days prior to the management forecast divided by its book value of equity at the end of the previous quarter (Gong et al., 2011; Hutton et al., 2012; Rogers & Stocken, 2005). Firms with more debt are subject to greater scrutiny and monitoring by debt holders, which improves their information environment. Leverage is defined as a firm's total assets divided by its book value of equity at the end of the previous quarter (Hutton et al., 2012).

Baginski and Hassell (1990) find that the timing of management's earnings forecasts also matters for analysts' forecast revisions. Analysts tend to follow management forecasts more closely in the fourth quarter. Fourth quarter releases provide additional information on transitory components of earnings, since greater knowledge of transitory earnings is more likely at the end of the fiscal year. Moreover, management is more likely to correct previously reported errors closer to the fiscal year end, and management might delay issuing bad news to the fourth quarter. The relatively high informative content of fourth quarter management earnings announcements results in stronger analyst forecast revisions subsequent to these announcements. A fourth quarter dummy variable is included to control for this effect. This dummy variable equals one for fourth quarter management forecasts and zero for other forecasts. The horizon of a management forecast relative to the end of the forecast period also matters for analysts' forecast revisions. Management forecasts issued closer to the end of a forecast period are perceived as more accurate by analysts, since these forecasts are based on larger information sets. Hence, analysts' forecast revisions are expected to be inversely related to the horizon of a management forecast. Therefore forecast horizon, measured by the number of days until the end of the forecast period, is controlled for (Pownall et al., 1993; Rogers & Stocken, 2005).


Williams (1996), Jennings (1987) and Gong et al. (2011) find that prior management forecast usefulness or accuracy affects analysts' forecast revisions. Analysts revise their forecasts more for firms that have issued useful forecasts in the past. Williams (1996) defines a management forecast as useful when the absolute value of the management forecast error is smaller than the absolute value of the consensus analyst forecast error. Following Williams (1996) prior forecast usefulness is defined as:

PFU_{i,t-k} = (|AF_{i,t-k} - A_{i,t-k}| - |MF_{i,t-k} - A_{i,t-k}|) / P_i   (4)

Where PFU_{i,t-k} is the usefulness of the management forecast of firm i in the preceding quarter. AF_{i,t-k} is the consensus analyst forecast for firm i in the month following the management earnings forecast of firm i in the preceding quarter. A_{i,t-k} is the actual earnings per share for firm i in the preceding quarter. MF_{i,t-k} is the management forecast for firm i in the preceding quarter. And P_i is the share price of firm i five days prior to the management earnings forecast in the preceding quarter.

The more useful a management forecast, the higher the value of PFU_{i,t-k}. If analysts are closer to the actual than management, the management forecast is not useful and PFU_{i,t-k} is negative. If the management and analyst forecast errors are identical PFU_{i,t-k} is zero. If management is closer to the actual than analysts, the management forecast is useful and PFU_{i,t-k} is positive. Williams (1996) and Jennings (1987) find that especially for positive management forecast surprises analysts rely on the accuracy of previous forecasts. Managers are assumed to be reluctant to release bad news, so analysts perceive bad news announcements by managers as more credible than good news announcements. Therefore PFU_{i,t-k} is included as a control variable. To control for a possible asymmetric effect of PFU_{i,t-k} and the other control variables on analysts' forecast revisions, all control variables are interacted with the Positive_{i,t} dummy. Finally industry and quarter-of-year fixed effects are included, where an industry is defined as all firms with the same four-digit SIC code (Gong et al., 2011; Rogers & Stocken, 2005). To control for the effect of large outliers AFR_{i,t}, MFS_{i,t} and all non-dichotomous control variables are winsorized at the top and bottom one percentile.
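The sign logic of PFU can be illustrated with a small sketch; all inputs are hypothetical:

```python
def prior_forecast_usefulness(af, mf, actual, price):
    """PFU from equation (4): positive when the management forecast was
    closer to actual earnings than the consensus analyst forecast."""
    return (abs(af - actual) - abs(mf - actual)) / price

# Management closer to the actual: the prior forecast was useful (PFU > 0).
useful = prior_forecast_usefulness(af=1.00, mf=1.08, actual=1.10, price=20.0)
# Analysts closer to the actual: the prior forecast was not useful (PFU < 0).
not_useful = prior_forecast_usefulness(af=1.09, mf=1.00, actual=1.10, price=20.0)
print(useful > 0, not_useful < 0)  # True True
```

The scaling by price makes PFU comparable across firms of different share-price levels, in line with the other price-scaled measures.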

Hypothesis two considers the accuracy of analyst forecasts relative to the accuracy of management forecasts. For this purpose two measures of forecast error are constructed following Gong et al. (2011) and Rogers and Stocken (2005). A forecast error for a given firm and quarter is defined as the difference between forecasted earnings per share and actual earnings per share, scaled by the stock price five days prior to the announcement of actual earnings per share. Analyst forecast error is defined as:

AFE_{i,t} = (AF_{i,t} - A_{i,t}) / P_i   (5)

Where AF_{i,t} is the consensus analyst forecast for firm i for quarter t. The consensus analyst forecast is defined as the median of all individual analysts' most recent forecasts for firm i for quarter t. A_{i,t} is the actual earnings per share for firm i for quarter t. P_i is the share price of firm i five days prior to the announcement of actual earnings.

Management forecast error is defined as:

MFE_{i,t} = (MF_{i,t} - A_{i,t}) / P_i   (6)

Where MF_{i,t} is the management forecast for firm i for quarter t. A_{i,t} is the actual earnings per share for firm i for quarter t. P_i is the share price of firm i five days prior to the announcement of actual earnings.
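A minimal sketch of the two error measures in equations (5) and (6), with hypothetical inputs:

```python
def forecast_errors(af, mf, actual, price):
    """Analyst and management forecast errors (equations (5)-(6)),
    both scaled by the pre-announcement share price."""
    afe = (af - actual) / price
    mfe = (mf - actual) / price
    return afe, mfe

# An optimistic management forecast (MF > A) with a smaller analyst error:
afe, mfe = forecast_errors(af=1.12, mf=1.20, actual=1.10, price=20.0)
print(afe < mfe)  # True: analysts de-biased the optimistic management forecast
```

Because both errors share the same scaling, their relative magnitude is directly comparable within a firm-quarter.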

Both analyst forecast errors and management forecast errors result from their respective bias and an unexpected shock to earnings. This shock is common to analyst and management forecast errors and results from unforeseen circumstances that affect earnings in the period between the forecast and the realization of actual earnings. The shock ε is assumed to have a zero mean and a standard deviation σ_ε. The true expectation of earnings per share for both management and analysts is defined as μ. The bias in management's forecasts is defined as b^M and the bias in analysts' forecasts as b^A. Now analyst forecast errors and management forecast errors can be defined as:

AFE_{i,t} = AF_{i,t} - A_{i,t} = (μ_{i,t} + b^A_{i,t}) - (μ_{i,t} + ε_{i,t}) = b^A_{i,t} - ε_{i,t}   (7)

MFE_{i,t} = MF_{i,t} - A_{i,t} = (μ_{i,t} + b^M_{i,t}) - (μ_{i,t} + ε_{i,t}) = b^M_{i,t} - ε_{i,t}   (8)

Where μ_{i,t} is the true expectation of earnings per share for firm i for quarter t, b^A_{i,t} and b^M_{i,t} are the biases in analysts' and management's forecasts for firm i for quarter t, respectively, and ε_{i,t} is an unexpected shock to earnings for firm i for quarter t.

The test for hypothesis two relates analyst forecast errors to management forecast errors, so it considers the relative accuracy of analyst forecasts. This implies analyst forecast errors are expressed as a percentage of management forecast errors. Since analyst forecasts and management forecasts are both affected by the same shock ε_{i,t}, the relationship between analyst forecast errors and management forecast errors is equal to the relationship between analysts' biases and managers' biases. Therefore inferences on the bias of analysts relative to the bias of management can be made based on the test for hypothesis two.
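This equivalence follows directly from the error decompositions: differencing the two forecast errors cancels the common shock, leaving only the difference in biases:

```latex
AFE_{i,t} - MFE_{i,t}
  = \left(b^{A}_{i,t} - \varepsilon_{i,t}\right) - \left(b^{M}_{i,t} - \varepsilon_{i,t}\right)
  = b^{A}_{i,t} - b^{M}_{i,t}
```

Hence any systematic gap between analyst and management forecast errors reflects a gap in biases rather than in exposure to earnings shocks.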

Hypothesis two posits that analysts de-bias management forecasts. This implies AFE_{i,t} is smaller than MFE_{i,t}. From equations (7) and (8) it can be seen that when analyst forecast errors are smaller than management forecast errors, the bias in analysts' forecasts is smaller than the bias in management's forecasts. Hypothesis two predicts analysts de-bias management forecasts asymmetrically, however. The null prediction of hypothesis two assumes analysts are positively biased in general, so they tend to be too optimistic. This positive bias is expected to persist both when management is optimistic (the management forecast is higher than actual earnings) and when management is pessimistic (the management forecast is lower than actual earnings). Therefore analysts are expected to lean more against optimistic than against pessimistic management forecasts under the null prediction of hypothesis two. Figure 1a illustrates how average management forecasts and average analyst forecasts are expected to relate to actual earnings under the null prediction of hypothesis two, conditional on management being too pessimistic (MF<A) and management being too optimistic (MF>A). Analyst forecast errors are marked in grey. As can be seen from Figure 1a, analysts are expected to de-bias management forecasts on average, since analyst forecast errors are smaller than management forecast errors both when management is too optimistic and when management is too pessimistic. Figure 1a also shows analyst forecast errors are smaller when management is pessimistic than when management is optimistic, given analysts have the propensity to be optimistic.


Figure 1

Expected Average Management and Analyst Earnings Forecasts Relative to Actual Earnings

This figure displays expected average management and analyst earnings forecasts relative to actual earnings. MF, AF and A are management forecast, analyst forecast and actual earnings, respectively. MFE and AFE are management forecast error and analyst forecast error, respectively. Management forecast error is defined as the difference between a firm's management forecast and actual earnings for a given quarter. Analyst forecast error is defined as the difference between the consensus analyst forecast and actual earnings for a firm for a given quarter. The consensus analyst forecast is the median of all individual analysts' most recent forecasts for the quarter to which the management forecast pertains. The grey areas mark analyst forecast errors. All expected values are displayed conditional on management being pessimistic (MF<A) and conditional on management being optimistic (MF>A).

The alternative prediction of hypothesis two assumes analysts are negatively biased in general, so they tend to be too pessimistic. Under this assumption analysts are expected to lean more against pessimistic than against optimistic management forecasts. Figure 1b illustrates how average management forecasts and average analyst forecasts are expected to relate to actual earnings under the alternative prediction of hypothesis two, conditional on management being too pessimistic and management being too optimistic. Figure 1b shows that analyst forecast errors are larger when management is pessimistic than when management is optimistic, given analysts' propensity to be pessimistic.

To test which asymmetric prediction holds, a cross-sectional ordinary least squares regression is run with standard errors adjusted for heteroskedasticity and firm-level clustering.

[Figure 1 graphic: panel (a), management pessimistic (MF < A), and panel (b), management optimistic (MF > A); each panel places MF, AF and A on a line, with AFE shaded in grey and MFE marked.]
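The exact regression specification is not recoverable from this excerpt, so the sketch below uses an assumed illustrative form on simulated data: AFE = b0 + b1·MFE + b2·(MFE × GoodNews) + e, where GoodNews = 1 when the management forecast is optimistic (MFE > 0). Under this form, b1 < 1 indicates de-biasing of pessimistic forecasts and b1 + b2 the pass-through of optimistic ones. Plain least squares is used here; in practice one would add the heteroskedasticity-robust, firm-clustered standard errors the text describes (e.g., statsmodels' `fit(cov_type='cluster', ...)`).

```python
import numpy as np

# Assumed illustrative specification (not the thesis's exact equation):
#   AFE = b0 + b1*MFE + b2*(MFE x GoodNews) + e,  GoodNews = 1{MFE > 0}.
rng = np.random.default_rng(0)
n = 500
mfe = rng.normal(0.0, 1.0, n)           # simulated management forecast errors
good = (mfe > 0).astype(float)          # optimistic-management indicator
# Simulated asymmetry: analysts remove more of an optimistic bias.
afe = 0.6 * mfe - 0.3 * mfe * good + rng.normal(0.0, 0.2, n)

# OLS via least squares on [constant, MFE, MFE x GoodNews].
X = np.column_stack([np.ones(n), mfe, mfe * good])
beta, *_ = np.linalg.lstsq(X, afe, rcond=None)
print(beta)  # estimates should land near [0, 0.6, -0.3] by construction
```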
