This section analyzes the robustness of the results for the structural approach. The results in Section 6 already indicate that the sample analyzed with the direct approach does not give robust results; most of the robustness checks presented in this section are therefore not relevant for that sample.

Our first robustness check is to include a dummy indicating, for each study, which estimates are preferred by the authors of that study. The coefficient on this dummy shows that the preferred estimates do not differ significantly from the non-preferred ones. The coefficients on the other covariates and the estimates of the ‘true’ elasticities also do not change qualitatively.

As a second robustness check, we include interaction terms between the explanatory variables and the variance. As the interaction terms are most relevant for the significant regressors, in model (II) we include only the interactions between the variance and the variables ‘stock’, ‘published’, ‘age of publication’, ‘age of working paper’ and ‘ranking’. The highest VIF value in this regression is 23, indicating substantial multicollinearity.^{16} The main results are robust to the inclusion of the interaction terms, although the coefficient on the variance is no longer significant, possibly due to collinearity. When all interaction terms are included (model (III)), the highest VIF value increases to 70. Despite this, the main results still hold.

16 The Variance Inflation Factor (VIF) indicates the severity of multicollinearity on a scale from one to infinity. If all variables are orthogonal, the VIF equals one. A common rule of thumb is that multicollinearity is problematic if the VIF is larger than ten.

22 When the model does not include the stock dummy, the ‘true’ elasticity is pooled over flow and stock.


The last two columns in Table 7 show the results of the meta-regression when the standard error, instead of the variance, is used to correct for publication bias. The results on the covariates are robust to this change. The estimated ‘true’ elasticities are smaller, which can be explained by the fact that a regression with the standard error instead of the variance has a constant that is biased towards zero (Stanley and Doucouliagos, 2014).

The meta-regression in Section 6 uses 1/(VAR_is n_s) as weight, where VAR_is is the variance of estimate i from study s and n_s the number of estimates from study s. Table 8, columns one and two, shows the results when instead the weights 1/VAR_is or 1/(VAR_is (n_s)^2) are used. The results are largely robust to this change in weights. For weight 1/VAR_is the estimated ‘true’ elasticities are larger than in the main model, while for weight 1/(VAR_is (n_s)^2) they are smaller. This might be related to the fact that some studies combine a small n_s with small estimates; when weighting with 1/(VAR_is (n_s)^2), those studies receive a larger relative weight.

The third column of Table 8 presents the results of a weighted regression with cluster-robust standard errors, where the clusters are the individual studies s. Because the cluster-robust standard errors correct for the correlation between estimates from the same study, we do not apply the 1/n_s correction but instead weight with 1/VAR_is. As the number of clusters is small, the estimated standard errors could be biased. The estimates are again similar to those of the main model in Section 6.

Another option to correct for the correlation between estimates from the same study is to use a panel data model. The fourth column in Table 8 shows the results of a fixed effects model (with weights 1/VAR_is). Note that some variables, such as the impact ranking, are not included in this model, since they are constant within each study. Moreover, two studies that contribute only a single observation each are removed from the dataset. Study-level constants are not reported.

The standard errors of the estimated coefficients are higher than in the main specification in Section 6. Since the fixed effects model estimates 13 study-specific constants and exploits only within-study variation, the higher standard errors are not unexpected. The estimates themselves have the same order of magnitude as in the main specification, but the higher standard errors reduce their significance.
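The fixed effects model can be sketched as a least-squares-dummy-variable regression, with one dummy per study absorbing the study-invariant variables. Everything below is simulated and illustrative, including the `stock` regressor and its −0.1 effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with a study-specific intercept and one within-study
# regressor ('stock'); all names and effect sizes are illustrative.
rng = np.random.default_rng(4)
rows = []
for s in range(13):
    alpha_s = rng.normal(scale=0.1)        # study fixed effect
    for _ in range(int(rng.integers(2, 8))):
        var = rng.uniform(0.01, 0.2)
        stock = int(rng.integers(0, 2))
        rows.append({"study": s, "var": var, "stock": stock,
                     "effect": -0.2 + alpha_s - 0.1 * stock
                               + rng.normal(scale=np.sqrt(var))})
df = pd.DataFrame(rows)

# C(study) adds one dummy per study (LSDV); a study-invariant variable
# such as the impact ranking would be collinear with these dummies and
# has to be dropped, as in the text.
fe = smf.wls("effect ~ stock + var + C(study)", data=df,
             weights=1 / df["var"]).fit()
print("stock coefficient:", fe.params["stock"])
```

Only within-study variation identifies the `stock` coefficient here, which is why the standard errors rise relative to the pooled specification.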

The last column in Table 8 presents the results of a random effects model with weights 1/VAR_is. As with the fixed effects model, the estimates have the same sign and order of magnitude as in the main specification, but the standard errors are higher, leading to less significant coefficients.

The random effects model needs to estimate a considerably higher number of parameters, while the number of observations (82 estimates) is limited.
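A random effects sketch replaces the study dummies with a study-level random intercept. The data are again simulated; note that statsmodels' `MixedLM` has no direct analogue of the 1/VAR_is weighting used in the paper, so this unweighted fit is only a rough sketch of the model class:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with a study-level random intercept (all numbers
# illustrative; unweighted, unlike the paper's specification).
rng = np.random.default_rng(5)
rows = []
for s in range(13):
    u_s = rng.normal(scale=0.1)            # random study intercept
    for _ in range(int(rng.integers(2, 8))):
        var = rng.uniform(0.01, 0.2)
        rows.append({"study": s, "var": var,
                     "effect": -0.2 + u_s + rng.normal(scale=np.sqrt(var))})
df = pd.DataFrame(rows)

# One random intercept per study instead of 13 fixed study constants;
# 'Group Var' is the estimated variance of the study intercepts.
re = smf.mixedlm("effect ~ var", data=df, groups=df["study"]).fit()
print(re.params["Intercept"], re.params["Group Var"])
```

With only 13 groups and few estimates per study, the variance components are weakly identified, which matches the caveat about the limited number of observations.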


Notes: Standard errors in parentheses; robust standard errors for the first two models; * p<0.1; ** p<0.05; *** p<0.01.

For the sample of studies on direct effects, we find that weighting by 1/VAR_is or 1/(VAR_is (n_s)^2) does not qualitatively change the results of specification (III) in Table 6. Using cluster-robust standard errors, however, does change the results: SMEs and published studies now show a non-significant effect, while the effects of manufacturing and the rating of the study become significant. This is another sign that this sample of studies does not provide stable results, making it hard to draw conclusions on the origin of the variation between the studies.

**8. Conclusions**

We performed a meta-analysis of the literature on the effectiveness of R&D tax incentives. This literature consists of two families of micro-econometric studies. One family estimates the elasticity of private R&D expenditures with respect to the user cost of R&D capital. The other family estimates correlations between R&D expenditure and the presence of an R&D tax incentive scheme. We analyzed each family of studies separately.


For the studies that estimate the user-cost elasticity we found, after correcting for publication bias, a significant average elasticity of -0.21 for the flow of R&D expenditures and a significant average elasticity of -0.13 for the stock of R&D capital. The publication bias is substantial: the uncorrected average elasticities are -1.10 and -0.48 respectively. The estimates of the user-cost elasticity are also quite heterogeneous; part of this heterogeneity is caused by the significant difference between the stock and flow of R&D expenditures. We also found publication effects: recently published studies provide smaller elasticities than either older published studies or unpublished work, and outlets with a higher impact factor tend to publish higher elasticities. All the results for this family of studies are robust to different model specifications.

For the family of studies that presents correlations between R&D expenditure and the presence of an R&D tax incentive scheme, the presence of a scheme is associated with seven percent more R&D expenditure after correction for publication bias. This effect is significantly different from zero. Again, we found substantial publication bias as the uncorrected mean effect is 58 percent.

The estimates display a large amount of heterogeneity, but different model specifications suggest different sources of heterogeneity, making it hard to draw robust conclusions.

For both families of studies we found a robust but modest effect after correction for publication bias. This suggests that R&D tax incentives help to increase the level of private R&D, but are probably not a major determinant of a country’s innovativeness.

**References**

Agrawal, A., C. Rosell and T.S. Simcoe, 2014, Do Tax Credits Affect R&D Expenditures by Small Firms? Evidence from Canada, NBER Working Paper 20615.

Arellano, M. and S. Bond, 1991, Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations, The Review of Economic Studies, vol. 58, no. 2, pp. 277-297.

Baghana, R. and P. Mohnen, 2009, Effectiveness of R&D tax incentives in small and large enterprises in Quebec, Small Business Economics, vol. 33, no. 1, pp. 91-107.

Bozio, A., D. Irac and L. Py, 2014, Impact of research tax credit on R&D and innovation: evidence from the 2008 French reform, Banque de France Working Paper 532.

Castellacci, F. and C.M. Lie, 2015, Do the effects of R&D tax credits vary across industries? A meta-regression analysis, Research Policy, vol. 44, no. 4, pp. 819-832.

Chirinko, R.S., S.M. Fazzari and A.P. Meyer, 1999, How responsive is business capital formation to its user cost? An exploration with micro data, Journal of Public Economics, vol. 74, no. 1, pp. 53-80.

Corchuelo Martínez-Azúa, M.B., 2006, Incentivos Fiscales en I+D y Decisiones de Innovación [Tax incentives for R&D and innovation decisions], Revista de Economía Aplicada, vol. 14, no. 40, pp. 5-34.


Corchuelo, M.B. and E. Martínez-Ros, 2009, The Effects of Fiscal Incentives for R&D in Spain, Universidad Carlos III de Madrid Working Paper 09-23.

CPB, CASE, ETLA and IHS, 2015, A study on R&D tax incentives: Final report, DG TAXUD Taxation Paper 52.

Dagenais, M.G., P. Mohnen and P. Therrien, 1997, Do Canadian firms respond to fiscal incentives to research and development?, CIRANO Working Paper 97s-34.

Duguet, E., 2012, The effect of the incremental R&D tax credit on the private funding of R&D: an econometric evaluation on French firm level data, Revue d'Economie Politique, vol. 122, no. 3, pp. 405-435.

Dumont, M., 2013, The impact of subsidies and fiscal incentives on corporate R&D expenditures in Belgium (2001-2009), Reflets et Perspectives de la Vie Economique, no. 1, pp. 69-91.

Hægeland, T. and J. Møen, 2007, Input additionality in the Norwegian R&D tax credit scheme, Reports 2007/47 Statistics Norway.

Hall, B.H., 1993, R&D tax policy during the 1980s: success or failure?, Tax Policy and the Economy, Volume 7, MIT Press.

Hall, B.H. and J. Van Reenen, 2000, How effective are fiscal incentives for R&D? A review of the evidence, Research Policy, vol. 29, no. 4, pp. 449-469.

Harris, R., Q.C. Li and M. Trainor, 2009, Is a higher rate of R&D tax credit a panacea for low levels of R&D in disadvantaged regions?, Research Policy, vol. 38, no. 1, pp. 192-205.

Hines Jr, J.R., R.G. Hubbard and J. Slemrod, 1993, On the sensitivity of R&D to delicate tax changes: The behavior of US multinationals in the 1980s, Studies in International Taxation, University of Chicago Press.

Ho, Y., 2006, Evaluating the effectiveness of state R&D tax credits, University of Pittsburgh.

Huang, C.H., 2009, Three essays on the innovation behaviour of Taiwan's manufacturing firms, Graduate Institute of Industrial Economics, National Central University, Taiwan.

Ientile, D. and J. Mairesse, 2009, A policy to boost R&D: Does the R&D tax credit work?, EIB Papers 6/2009.

Kepes, S., G.C. Banks, M. McDaniel and D.L. Whetzel, 2012, Publication bias in the organizational sciences, Organizational Research Methods, vol. 15, no. 4, pp. 624-662.

Koga, T., 2003, Firm size and R&D tax incentives, Technovation, vol. 23, no. 7, pp. 643-648.

Lokshin, B. and P. Mohnen, 2007, Measuring the Effectiveness of R&D tax credits in the Netherlands, UNU-MERIT Working Paper 2007-025.

Lokshin, B. and P. Mohnen, 2012, How effective are level-based R&D tax credits? Evidence from the Netherlands, Applied Economics, vol. 44, no. 12, pp. 1527-1538.


Mairesse, J. and B. Mulkay, 2004, Une évaluation du crédit d'impôt recherche en France, 1980–1997 [An evaluation of the research tax credit in France, 1980–1997], Revue d'Economie Politique, vol. 114, pp. 747-778.

Mulkay, B. and J. Mairesse, 2003, The effect of the R&D tax credit in France, EEA-ESEM Conference.

Mulkay, B. and J. Mairesse, 2008, Financing R&D Through Tax Credit in France, LEREPS and UNU-MERIT Preliminary Draft.

Mulkay, B. and J. Mairesse, 2013, The R&D tax credit in France: assessment and ex ante evaluation of the 2008 reform, Oxford Economic Papers, vol. 65, no. 3, pp. 746-766.

Nelson, J.P. and P.E. Kennedy, 2009, The use (and abuse) of meta-analysis in environmental and natural resource economics: an assessment, Environmental and Resource Economics, vol. 42, no. 3, pp. 345-377.

Parsons, M. and N. Phillips, 2007, An evaluation of the federal tax credit for scientific research and experimental development, Department of Finance, Canada, Working Paper 2007-08.

Poot, T., P. den Hertog, T. Grosfeld and E. Brouwer, 2003, Evaluation of a major Dutch Tax Credit Scheme (WBSO) aimed at promoting R&D, FTEVAL Conference on the Evaluation of Government Funded R&D, Vienna.

Stanley, T.D. and H. Doucouliagos, 2014, Meta-regression approximations to reduce publication selection bias, Research Synthesis Methods, vol. 5, no. 1, pp. 60-78.

Stanley, T.D., 2005, Beyond publication bias, Journal of Economic Surveys, vol. 19, no. 3, pp. 309-345.

Stanley, T.D., 2008, Meta-regression methods for detecting and estimating empirical effects in the presence of publication selection, Oxford Bulletin of Economics and Statistics, vol. 70, no. 1, pp. 103-127.

Terrin, N., C.H. Schmid, J. Lau and I. Olkin, 2003, Adjusting for publication bias in the presence of heterogeneity, Statistics in Medicine, vol. 22, no. 13, pp. 2113-2126.

Wilson, D.J., 2009, Beggar thy neighbor? The in-state, out-of-state, and aggregate effects of R&D tax credits, The Review of Economics and Statistics, vol. 91, no. 2, pp. 431-436.

Yang, C.H., C.H. Huang and T.C.-T. Hou, 2012, Tax incentives and R&D activity: firm-level evidence from Taiwan, Research Policy, vol. 41, no. 9, pp. 1578-1588.

Yohei, K., 2011, Effect of R&D tax credits for small and medium-sized enterprises in Japan: evidence from firm-level data, RIETI Discussion Paper 11-E-066.