
Are Funded Scientific Projects Supporting High Quality Research? Evidence from the Management Studies Field

Academic year: 2021


Evidence from the Management Studies Field

DINH HUU TRONG

MSc Business Administration – Strategic Innovation Management Student Number: S3399567

Supervisor: Dr. Pedro De Faria Associate Professor

Co-assessor: Dr. Killian J. McCarthy Associate Professor



Are Funded Scientific Projects Supporting High Quality Research?

Evidence from the Management Studies Field

Dinh Huu Trong, University of Groningen, June 2018.

ABSTRACT

Top journals gather the most influential research across disciplines. While scientists and funding bodies view research quality from different perspectives, the value that research brings to society remains unclear. This study specifically questions whether a trade-off exists between the scientific and societal relevance of science. Based on a sample of 28 peer-reviewed journals in the strategic innovation management field, the results show that research published in top-tier journals has a greater probability of receiving financial support. In addition, because funding bodies perceive the quality of research differently, academics may face a choice between pursuing scientific or societal relevance. Albeit preliminary, the outcomes of this study allow us to draw some practical insights into the current situation of research funding.

Keywords: Journal ranking, Citation analysis, Research funding, Research quality.

INTRODUCTION


The principal quality criteria pursued by researchers are: (1) scientific excellence and (2) scientific relevance (Erno-Kjolhede and Hansson, 2011). Accordingly, such research, known as “Mode 1” knowledge, represents the ideal in academia (Gibbons et al., 1994). One of the main pillars on which basic research rests is its publication, preferably in prestigious international journals (Erno-Kjolhede and Hansson, 2011; Podsakoff et al., 2005; Singh et al., 2007). The scientific impact of academic research is therefore reflected by the quality of those peer-reviewed journals (Franke et al., 1990; Seglen, 1997), which is measured by traditional bibliometric indicators such as citation counts or the impact factor (Garfield, 1979). Although these indicators have been criticized for their measurement relevance, well-cited research is viewed as an important scientific contribution to its field of study (Garfield, 1979). To a scientist, a citation is part of the reward system of science and signals impact on the advancement of the body of academic knowledge (Aksnes and Rip, 2009).

While scientific impact can be assessed with objective measurement tools such as bibliometric indicators, it is more difficult to evaluate the benefits that basic research provides to society (Bornmann, 2012; Nelson, 1959). To date, there are no reliable or robust measures of the societal impact of science (Bornmann, 2012). In this context, societal impact is defined as “the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making” (Bornmann, 2012). Furthermore, Nelson (1959) argued that science is considered “fruitful” when a commercial application is attached to it. In this regard, applied research is easier to evaluate than basic research because the link between science and society is more straightforward in the former case (Bornmann, 2012; Erno-Kjolhede and Hansson, 2011; Nelson, 1959; Nightingale and Scott, 2007; van der Meulen and Rip, 2000). Practical knowledge derived from such research, also known as “Mode 2” knowledge (Gibbons et al., 1994), is thus in high demand from politics and society. As basic and applied research continue to grow within the scientific community, a new problem has emerged for policy-makers and funding agencies in terms of research fund allocation (Bornmann, 2012). Because conducting impactful research fundamentally requires financial, human, and physical assets that are limited, governments and funding institutions must revise their distribution of research funding (Bornmann, 2012).


In other words, a potential tension might appear between the researcher aiming for the advancement of knowledge and the funding body, which must direct its resources toward “economically useful” scientific projects (Pavitt, 1991; Salter and Martin, 2001). Because funding agencies allocate their resources according to a cost-based analysis, they will select the most promising research that presents potential benefits to society (Pavitt, 1991; Salter and Martin, 2001; Whittington, 1997). When lacking information about the potential of scientific projects, these funding bodies turn to the stature of top journals as a proxy for research quality to inform their funding decisions (Azoulay et al., 2013; Johnson et al., 1994; Kirkpatrick and Locke, 1992; Podsakoff et al., 2005; Singh et al., 2007).

Most past studies have investigated the effects of financial funds on the performance and research productivity of scientists (Bolli and Somogyi, 2011; Defazio et al., 2009; Goldfarb, 2008; Lee and Bozeman, 2005; Muscio et al., 2013; Spanos and Vonortas, 2012; Ubfal and Maffioli, 2011). Others have focused on the relationship between scientists’ reputation and their capacity to receive funds (Arora et al., 1998; Fedderke and Goldschmidt, 2015; Grimpe, 2012; Ponomarev et al., 2014). In general, the results show that scholars with excellent past performance and higher publication rates tend to receive financial support more easily. Robert K. Merton (1968) refers to such an effect as the “Matthew effect in science”: researchers with high performance and a recognized status enter a profitable cycle in which they are prioritized by funding agencies. While previous studies have primarily looked at the effects of the scientist’s performance on research grants, few have studied the relationship between funded research and its scientific contribution. Moreover, the management literature has mainly focused on the quantitative aspect of research, that is, on counting the number or the rate of publications. Instead, this study attempts to bring a new perspective to research funding by investigating whether financial support enhances the scientific contribution of academics.


The next sections are organized as follows. First, a literature review will explain in detail the current landscape of knowledge creation. The second part will look at the funding of scientific projects and establish the foundations to test the hypotheses. A methodological section will then elaborate on the sample selection procedure as well as the econometric tools used to analyze the input data. Finally, the results to my research questions will allow us to draw first implications and conclusions on the issue of research funding.

LITERATURE REVIEW

The role of citations

Traditionally, academic research is evaluated through a peer-review system (Bornmann, 2012; Erno-Kjolhede and Hansson, 2011; Nightingale and Scott, 2007; Seglen, 1997). A paper is considered high quality if it meets the criteria dictated by a disciplinary gate-keeper, such as: (1) reliability, (2) consistency, (3) originality, and (4) objectivity (Erno-Kjolhede and Hansson, 2011). The ultimate purpose of science is then to increase the existing repertoire of knowledge, which does not necessarily carry any commercial value (Erno-Kjolhede and Hansson, 2011; Nelson, 1959). Accordingly, a scientist’s work is cited if it meets the criteria established by the peer-review system (Seglen, 1997). Concretely, scholars have defined a citation as an indication of scientific quality and a sign of contribution to the field (Aksnes and Rip, 2009; Garfield, 1979; Seglen, 1997). For a scientist, being cited is perceived as showing impact and builds up reputation in the scientific community (Aksnes and Rip, 2009). It is not surprising that academics seek citations, as citations have long been considered indicators of influence on the field and of the stature of the journals in which papers are published (Franke et al., 1990).


Although citation counts are an important tool for evaluating the scientific relevance of academic research, they sometimes do not reflect the real scientific contribution aimed at by scholars (Aksnes and Rip, 2009). The literature refers to the “bandwagon effect” (or “ripple effect”) as a current issue affecting the use of citations (Aksnes and Rip, 2009; Fowler and Aksnes, 2007; Frandsen and Nicolaisen, 2013). The central idea is: citations lead to new citations. The dynamics behind the “bandwagon effect” lie in research quality and the question of visibility, that is, where the research has been published (Aksnes and Rip, 2009). Because of this effect, some research that presents the same characteristics as highly cited papers is overshadowed and neglected by the scientific community (Aksnes and Rip, 2009).

Additionally, Ponomarev et al. (2014) found that well-cited articles tend to be referenced more often than less cited papers. This phenomenon follows a process of cumulative advantage in which the most influential research continues to attract the focus of other scholars (Ponomarev et al., 2014). In particular, Robert K. Merton (1968) characterized the issue as the “Matthew effect in science”. The analogy with the parable of the talents in the Gospel of Matthew describes how, due to the status and past performance of scientists, successful research will continue to receive more credit than other research, even if the work is comparably similar. Consequently, scholars who wish to accumulate citations for scientific relevance are more likely to opt for prestigious international journals (Podsakoff et al., 2005; Seglen, 1994, 1997; Singh et al., 2007). Aksnes and Rip (2009) added that publishing in a “wrong” journal may earn authors fewer citations than if they had published in top journals. Thanks to the status of top journals, academic papers found in those journals benefit from the “Matthew effect”, which enrolls them in a favorable cycle of acquiring more citations (Azoulay et al., 2013). There is thus a real necessity for academics to publish their work preferably in prestigious international journals if they are to increase the value of their scientific contribution to the field (Franke et al., 1990; Kirkpatrick and Locke, 1992; Seglen, 1994, 1997; Singh et al., 2007).

Besides demonstrating an important scientific contribution to the advancement of the body of academic knowledge (Johnson and Podsakoff, 1994), publishing in top journals also shows signs of scientific excellence (Erno-Kjolhede and Hansson, 2011). In comparison with lower-ranked journals, top journals bring prestige and recognition to authors and their institutions (Coe and Weinstock, 1984). Moreover, Whittington (1997) argued that the stature of these journals is a crucial factor influencing the allocation decisions of research funding. Scholars who publish their work in prestigious journals not only do so to leverage their credibility and establish a reputation in the scientific community, but also to


better negotiate for financial support and attract the attention of research grant providers (Kirkpatrick and Locke, 1992; Whittington, 1997).

Scientific impact versus Societal impact

Nightingale and Scott (2007, p. 547) pointed out that “research that is highly cited or published in top journals may be good for the academic discipline but not for society”. In fact, the social benefits of basic research are often difficult to assess (Nelson, 1959; Pavitt, 1991; Salter and Martin, 2001). The principal cause is that scientists have objectives that may diverge from the real needs of society (Bornmann, 2012; Erno-Kjolhede and Hansson, 2011; van der Meulen and Rip, 2000). On the one hand, academics idealize the pursuit of “Mode 1” knowledge and its publication in prestigious international journals, which drives research towards the advancement of the existing repertoire of knowledge (Gibbons et al., 1994). On the other hand, there is a growing political demand for research that embeds a societal impact (Bornmann, 2012; Erno-Kjolhede and Hansson, 2011; van der Meulen and Rip, 2000). Nelson (1959) argued that science has economic value if “the outcomes of the research can be used to predict the results of trying one or another alternative solution to a practical problem”. In this context, societal impact is defined as “the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making” (Bornmann, 2012, p. 673). Gibbons et al. (1994) referred to such research as “Mode 2” knowledge (or applied research), that is, knowledge not bound by scientific excellence but rather focused on solving practical issues in society. Nowotny et al. (2001) added that in order to produce results that are relevant and applicable for the users of the research, the outcomes of “Mode 2” knowledge must be: (1) socially relevant, (2) socially robust, and (3) innovative.


In addition, academics consider evaluating societal impact a threat to their scientific freedom and therefore reject it (Hanney et al., 2000). Because the allocation of research grants is directed towards societal impact issues, it challenges the traditional peer-review system in which scholars are rewarded for their scientific contribution. In other words, an eventual system for evaluating the societal impact of academic research may restrict scientists to doing only research that is practical, rather than research that pushes knowledge boundaries (Hanney et al., 2000).

Nonetheless, as the growth of scientific research has outpaced the resources available to fund it, politicians and funding institutions must revise their allocation of funds (Bornmann, 2012; Nightingale and Scott, 2007). The challenge requires funding bodies to identify the most promising research with potential for societal impact. Subsequently, the social benefits of academic research play an even more important role in attracting the financial support of those funding agencies (Bornmann, 2012). In particular, funding agencies distribute their grants by looking at the rank of research in terms of quality (Singh et al., 2007; Whittington, 1997). Top journals, which serve as evidence of research quality (Coe and Weinstock, 1984; Singh et al., 2007), have direct and indirect effects on the allocation decisions of research funds (Johnson and Podsakoff, 1994; Kirkpatrick and Locke, 1992; Whittington, 1997). Indeed, previous studies have demonstrated that the prestige and reputation of academics are key determinants for receiving research grants (Arora et al., 1998; Fedderke and Goldschmidt, 2015; Grimpe, 2012; Ponomarev et al., 2014). Top journals thus serve a signaling function for funding agencies to support promising research.

Since demand increasingly exceeds supply in the research funding market, funding institutions are ever more constrained to distribute funds efficiently and effectively (Bornmann, 2012). Erno-Kjolhede and Hansson (2011) and Nightingale and Scott (2007) highlighted two consequences of such a distribution method. First, scientists must align with societal objectives and integrate a value for society in their research. Second, new indicators for measuring social relevance must be developed urgently, since traditional methods do not reflect the real needs of society. Grant et al. (2009) suggested that the evaluation of societal impact requires a panel of experts conducting case studies of each scientific project individually. This implies strict monitoring of scientists, who may reject the proposition (Bornmann, 2012). As the research funding landscape becomes more competitive than ever, academics may therefore face a trade-off between pursuing scientific or social relevance (Bornmann, 2012).


HYPOTHESES DEVELOPMENT

The status of top journals

In the academic world, top journals are used as proxies for research quality (Singh et al., 2007). In particular, publications in these prestigious journals are crucial for the advancement of science and the body of academic knowledge (Straub, 2009). To be published in top journals, academic papers need to go through a quality assessment that puts the accent on the scientific contribution of the research. Straub (2009) highlighted four reasons that explain why top journals accept scholars’ papers. First, the quintessential factor of contribution is based on novel ideas; in other words, prestigious journals prioritize research that moves into unexplored intellectual territories. Second, top journals prefer research questions that have not been investigated before. Third, scientific projects should explore popular themes that can spur new work. Finally, academic papers must be able to contribute to existing theory. In addition, Straub (2009) also mentioned other enhancing elements such as the writing style, a correct methodology, or a precise coverage of the key literature. Taken together, papers that are published in prestigious journals thus demonstrate quality in terms of the scientific contribution they make in extending the boundaries of academic knowledge.

Nonetheless, quality perception is often difficult to articulate (Azoulay et al., 2013). For funding agencies seeking promising research, top journals play a central part in influencing the distribution decisions for financial grants (Johnson and Podsakoff, 1994; Kirkpatrick and Locke, 1992; Whittington, 1997). Under high uncertainty about research quality, top journals serve as information for the funding bodies to reallocate their funds across scientific projects (Azoulay et al., 2013; Podsakoff et al., 2005).


Scientists with excellent past performance build up a strong reputation within the community and thereby show potential for promising future funded projects.

A core factor in Robert K. Merton’s (1968) “Matthew effect in science”, the notion of “reputation” was further developed by Fombrun (1996) as an intangible asset that indicates superior quality. Building a good reputation helps to promote trust, shows prospects for growth, and demonstrates the capacity to generate strong earnings (Fombrun, 1996). Fundamentally, Washington and Zajac (2005) argued that a line needs to be drawn between the concepts of reputation and status. On the one hand, they claimed that status is a sociological concept that captures differences in social rank which create privilege or discrimination regardless of performance-based awards. On the other hand, reputation is an economic concept that captures differences in quality based on performance-based awards. This clarifies how the status of top journals influences the funding decision. While their stature offers several advantages to academics in search of scientific excellence (Singh et al., 2007), receiving funds requires the evaluation of the scientist’s specific performance and thus of their potential to generate high-quality research. Scholars’ reputation thus acts as a signal when other information on quality is lacking (Washington and Zajac, 2005). Therefore, scientists who have previously published in prestigious journals build up a strong reputation and thereby have easier access to financial support.

Hypothesis 1: Research published in top-tier journals increases the probability of receiving funds for future scientific projects.

Trade-off between scientific and societal impact


As basic research does not necessarily incorporate any socio-economic value, its societal impact is often shunned by scholars (Erno-Kjolhede and Hansson, 2011; Hanney et al., 2000).

Nevertheless, regardless of scientists’ preferences, the societal impact of their research plays a central part in attracting financial support (Bornmann, 2012). Considering that the growth of science has outpaced the resources available to fund it, grant providers need to revise their funding allocation based on the economic usefulness of scientific projects. The societal impact of academic research has subsequently become an important criterion determining eligibility for funds. However, academics generally avoid highlighting the societal impact of their research because doing so takes them beyond their disciplinary expertise (Holbrook and Frodeman, 2011).

The problem is that research into societal impact is still in its early stages (Bornmann, 2012). While basic research already has an established evaluation method that is continuously refined over time, applied research does not yet possess a distinct community with its own accomplishments (Bornmann, 2012). Funding agencies and governments have attempted to set up budget-relevant measurements, and recommend that societal impact is best measured in a quantifiable way through specific case studies conducted by a panel of experts (Erno-Kjolhede and Hansson, 2011; Grant et al., 2009). Such a method is viewed as labor-intensive and time-consuming, and puts scientific freedom under the constraints of social relevance (Erno-Kjolhede and Hansson, 2011; Hanney et al., 2000). Compared to the traditional peer-review system, the assessment of the societal impact of research limits scientists to conducting research that brings direct benefits to society (Hanney et al., 2000).

Besides highlighting the social relevance of their research to acquire funds, scientists can also use the stature of top journals as a proxy for research quality (Singh et al., 2007). While top journals build up the scientist’s reputation, they function at the same time as indicators of high-potential research for the funding bodies (Podsakoff et al., 2005). Nonetheless, the evaluation of those prestigious journals still falls within the current peer-review system (e.g. the impact factor), which does not necessarily reflect the value perceived by society (Franke et al., 1990). Scholars who opt for publishing in these journals may end up facing a trade-off: either pursuing scientific excellence (by building up their reputation) and making a contribution to the scientific community, or following the funding bodies’ criteria in order to generate a societal impact. Consequently, as top journals attract more funds, financial aid aimed at contributing an impact to society will in turn be detrimental to creating a scientific impact.


METHODS

Sample selection

To test the hypotheses, I constructed a dataset covering the 28 best journals in the strategic innovation management field, drawn from two lists of peer-reviewed journals ranked by Google Scholar: (1) Strategic Management and (2) Entrepreneurship and Innovation. The ranking established by Google Scholar is based on three available metrics: (1) the h-index, (2) the h-core, and (3) the h-median. Below are their definitions, taken directly from Google Scholar:

• “The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.”

• “The h-core of a publication is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.”

• “The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the articles in the h-core.”

Moreover, these h-metrics can also measure the citations of peer-reviewed journals over a range of five complete calendar years, in which case they are termed the h5-index, h5-core, and h5-median.
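To make these definitions concrete, the three metrics can be computed with a short script. This is an illustrative sketch of the definitions above, not Google Scholar's own implementation:

```python
import statistics

def h_index(citations):
    """Largest h such that at least h articles have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank       # the rank-th most-cited article still has >= rank citations
        else:
            break
    return h

def h_core(citations):
    """The top-cited h articles that the h-index is based on."""
    return sorted(citations, reverse=True)[:h_index(citations)]

def h_median(citations):
    """Median citation count within the h-core."""
    return statistics.median(h_core(citations))

# Google Scholar's worked example: five articles cited 17, 9, 6, 3, and 2 times.
example = [17, 9, 6, 3, 2]
print(h_index(example))   # 3
print(h_core(example))    # [17, 9, 6]
print(h_median(example))  # 9
```

Running the script reproduces the worked example quoted in the definitions: an h-index of 3, an h-core of the articles cited 17, 9, and 6 times, and an h-median of 9.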

Initially, the sample comprised 40 peer-reviewed journals from the two ranking lists provided by Google Scholar (see Appendix A and B). Since some journals present content that is not part of the Strategic Innovation Management program, four were omitted from the selection: (1) Journal of the Academy of Marketing Science, (2) Journal of Operations Management, (3) Journal of Corporate Finance, and (4) Management Decision. Furthermore, the Harvard Business Review (HBR), considered one of the top-tier journals (see Appendix C), was also omitted because its content is structured in magazine form, which differs from ordinary academic papers and would therefore make the comparison more difficult and time-consuming. Additionally, four journals were not accessible due to the restrictions of the University of Groningen: (1) International Entrepreneurship and Management Journal, (2) Technology Analysis & Strategic Management, (3) International Journal of Entrepreneurial Behavior & Research, and (4) Research-Technology Management. Lastly, three journals


are displayed in both Google Scholar lists (i.e. Strategic Management and Entrepreneurship & Innovation) and thus are not double-counted. They are: (1) Entrepreneurship Theory and Practice, (2) Journal of Business Venturing, and (3) Journal of Product Innovation Management. In sum, the final sample accounts for 28 peer-reviewed journals.

As my intention was to construct the most diverse sample fitting my research questions, the final selection contains a total of 635 articles, taken from every first issue of the years 2014 and 2015. In addition, a list of journals provided by the Financial Times was used to classify the sample journals into two distinct categories: (1) top-tier journals and (2) second-tier journals. The appendix displays a table grouping all the journals ranked by Google Scholar and the 50 journals most used by the Financial Times (see Appendix A, B, and C).

Data measurement

Dependent variables. The two research questions exposed above explicitly highlight getting funding and scientific contribution as dependent variables. First, getting funding is measured by a dummy variable: the acknowledgements section of each article was examined to verify whether financial support was provided. If so, the variable takes the value 1; otherwise, it takes 0. Second, scientific contribution is assessed by the number of citations of each published article, as displayed directly by Google Scholar. Using bibliometric indicators to measure research quality may be subject to criticism because they do not always reflect actual scientific contributions (Garfield, 1979). However, Aksnes and Rip (2009, p. 895) argued that citations are still the best measurement instrument for research quality and scientific contribution since they are part of “the reward system for science through acknowledgements and builds up reputation”. The two dependent variables serve as the foundations for the two hypotheses: getting funding serves hypothesis 1, while research quality in terms of scientific contribution is tested in hypothesis 2.


variables are subject to an interaction, which will test for the existence of a trade-off between scientific and societal contribution.

Control variables. To control for the potential impact of other variables on the dependent variables, I listed a set of controls to hold constant during the analysis. The control variables are (1) the number of authors, (2) their affiliated countries, (3) whether the research is the result of a cross-country collaboration, and (4) the type of research, that is, whether it is qualitative or quantitative. Collaborations between multiple authors are generally found to be positively related to citation counts (Rousseau, 2001; Van Raan, 1998). Furthermore, papers resulting from international cooperation tend to be cited more often than those from national (thus single-country) cooperation (Rousseau, 2001). Finally, Swygart-Hobaugh’s study (2004) demonstrated that quantitative studies are more likely to be referenced than qualitative studies. These variables are widely believed to influence citation counts and therefore need to be held constant during the hypothesis tests.

The number of authors and their affiliated countries are counted from the description section of each article. Whether the paper is the outcome of a cross-country collaboration and the type of research (i.e. qualitative or quantitative) are both measured by dummy variables. More precisely, if the research involves more than one country, the variable takes the value 1 (0 in the case of only one country). Lastly, the type of research takes the value 1 for qualitative studies and 0 for quantitative ones.
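The coding scheme above can be sketched as a small helper. The function and field names below are illustrative, not taken from the thesis dataset:

```python
def code_controls(author_countries, is_qualitative):
    """Code the control variables for one article.

    author_countries : list of affiliation countries, one entry per author
    is_qualitative   : True if the study is qualitative
    """
    distinct = set(author_countries)
    return {
        "n_authors": len(author_countries),            # control (1)
        "n_countries": len(distinct),                  # control (2)
        "cross_country": 1 if len(distinct) > 1 else 0,  # control (3): 1 = more than one country
        "qualitative": 1 if is_qualitative else 0,       # control (4): 1 = qualitative, 0 = quantitative
    }

# A hypothetical quantitative paper with three authors from two countries:
print(code_controls(["Netherlands", "Germany", "Netherlands"], is_qualitative=False))
```

The example prints `{'n_authors': 3, 'n_countries': 2, 'cross_country': 1, 'qualitative': 0}`, matching the coding rules stated above.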

Data Analysis

For the analysis of my database, which mainly consists of primary data, I adopted a quantitative approach via theory testing. Given the nature of my research questions, the two categories of journals are subject to a regression analysis under the ordinary least squares (OLS) model and to a binomial logistic regression, since one of the dependent variables, getting funding, is measured by a dummy variable. More specifically, hypothesis 1 is tested with a binomial logistic regression, while hypothesis 2 follows a simple linear regression under the OLS model.

The binomial logistic regression equation to test the first hypothesis is:

logit(p) = b0 + b1X1 + b2X2 + … + bkXk + εi ; {1, …, k} ∈ ℕ

• Where p is the probability of getting funding or not.
• Where b0 is the constant intercept.

• Where εi is the error term.


The linear regression equation under OLS to test hypothesis 2 is:

yi = α + β1x1 + β2x2 + … + βkxk + εi ; {1, …, k} ∈ ℕ

• Where yi represents the number of citations of each article.
• Where α is the constant intercept.

• Where εi is the error term.

• Where β1, …, βk are the regression coefficients.
• Where x1, …, xk are the independent variables.
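As an illustration of the two specifications, here is a minimal numpy sketch on synthetic data. The variable names and coefficient values are invented for demonstration; the thesis itself fits these models on the collected sample, and the hand-rolled gradient ascent is merely a bare-bones stand-in for a packaged logistic regression:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic predictors: a top-tier dummy plus one control (number of authors).
top_tier = rng.integers(0, 2, n).astype(float)
n_authors = rng.integers(1, 6, n).astype(float)
X = np.column_stack([np.ones(n), top_tier, n_authors])  # first column = intercept

# Hypothesis 2 (OLS): yi = alpha + beta1*x1 + ... + eps_i, solved by least squares.
citations = 10 + 8 * top_tier + 2 * n_authors + rng.normal(0, 3, n)
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)

# Hypothesis 1 (binomial logit): logit(p) = b0 + b1*X1 + ..., fitted by
# gradient ascent on the log-likelihood.
true_b = np.array([-1.0, 1.5, 0.1])
funded = (rng.random(n) < 1 / (1 + np.exp(-X @ true_b))).astype(float)
b = np.zeros(X.shape[1])
for _ in range(20000):
    p = 1 / (1 + np.exp(-X @ b))     # predicted funding probability
    b += 0.05 * X.T @ (funded - p) / n

print(np.round(beta, 2))  # OLS estimates, should recover roughly (10, 8, 2)
print(np.round(b, 2))     # logit estimates, should land near (-1.0, 1.5, 0.1)
```

Both fits recover the planted coefficients up to sampling noise, which is all the sketch is meant to show: the funding dummy goes through the logit link, while the citation count is modeled linearly.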

The data analysis is divided into two stages. First, each type of regression is applied to its respective hypothesis. The analysis must be checked for multicollinearity and heteroscedasticity to ensure the fit of the model. These verifications also imply checking the coefficient of determination to avoid misinterpreting goodness of fit. Second, the significance of the p-values is checked before stating any potential linkage among the variables. The outcomes of the analysis, albeit preliminary, will lay out first results that can drive future research towards more precise tests. The following step is to reorganize the input data into descriptive statistics. This gives an overview of the current funding landscape and of how research grants are distributed among the journals in the strategic innovation management literature.
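One concrete diagnostic for the multicollinearity check mentioned above is the variance inflation factor (VIF), obtained by regressing each predictor on the others. The helper below is a generic sketch of that diagnostic, not code from the thesis:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X (predictors only, no intercept).

    VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing column j on the
    remaining columns; values well above ~10 are a common multicollinearity flag.
    """
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(y)), others])  # intercept for the auxiliary regression
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        residuals = y - Z @ coef
        r_squared = 1 - residuals.var() / y.var()
        factors.append(1.0 / (1.0 - r_squared))
    return factors

rng = np.random.default_rng(0)
a = rng.normal(size=300)
b = rng.normal(size=300)           # independent of a -> VIF near 1
c = a + rng.normal(0, 0.05, 300)   # nearly collinear with a -> very large VIF
print([round(v, 1) for v in vif(np.column_stack([a, b, c]))])
```

In the printed result, the independent column sits near 1 while the two nearly collinear columns blow up, which is exactly the pattern the multicollinearity check looks for before trusting the regression coefficients.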

FINDINGS

Descriptive statistics

Among the 28 journals selected from the strategic innovation management literature, 10 are listed as the best by the Financial Times in 2016. A third of the 635 articles analyzed (211 papers, 33.23%) appeared in these top-tier journals, so articles are spread fairly evenly across the journals of both categories (see Table 1). Interestingly, the percentage of funded research is slightly higher in second-tier journals (32.57% against 30.81%). This contradicts hypothesis 1, which predicted that research published in top-tier journals is more likely to receive financial support for future scientific projects. The results of the regression analysis will confirm whether these statistics are significant.

Table 1: Share of articles and funded research

Type of Journals       Number of Articles   Share of Articles   Funded Research
Top-tier Journals      211                  33.23%              30.81%
Second-tier Journals   424                  66.77%              32.57%


Well-cited research papers are often found in top-tier journals, which show a greater average citation count than second-tier journals. Compared side by side, an article published in a top-tier journal is on average cited twice as much as one found in a second-tier journal (see Table 2). This illustrates the findings of Arora et al. (1998) and Ponomarev et al. (2014), who claimed that well-cited papers tend to be cited even more thanks to the status conferred by publication in prestigious journals. In addition, funded research published in top-tier journals shows a lower average citation count than its non-funded benchmark (56.485 against 85.92). In line with hypothesis 2, we can assume that scientific impact is lowered when scholars must follow financial directives. A potential trade-off between scientific excellence and societal relevance may therefore exist and is the core of the second hypothesis.

Table 2: Average citations of funded and non-funded research

Type of Journals       Average citations of       Average citations of
                       non-funded research        funded research
Top-tier Journals      85.92                      56.485
Second-tier Journals   33.79                      35.315

The descriptive statistics provide evidence in line with hypothesis 2, but show the opposite of what hypothesis 1 expects. Whether top-tier journals are more or less likely to receive research funds, and whether the misperception of the intrinsic value of knowledge creation leads to a trade-off, is examined through the regression analysis in the following section.

Results

Before carrying out the regression analysis, a set of reliability checks must be performed (Belsey et al., 1980). These include controlling for multicollinearity and ensuring that the regression models are free of heteroscedasticity. Applying the diagnostics of Belsey et al. (1980) shows no multicollinearity issues across the regression models: all condition indices are smaller than 30, and the variance inflation factors (VIF) are below the conventional threshold of 10 (see Appendix D, E, and F).



Table 3: Correlation matrix

VARIABLES                          Mean    S.D.    1       2       3       4       5       6       7
1. Number of Citations             47.77   85.49   1
2. Top Journals                    0.33    0.47    0.22    1
3. Financial Funds                 0.34    0.48    -0.04   0.05    1
4. Number of Authors               2.86    1.83    0.04    -0.07   0.17    1
5. Number of Countries             1.54    0.99    0.06    0.02    0.11    0.65    1
6. Cross-countries Collaboration   0.38    0.49    0.04    0.08    0.05    0.27    0.69    1
7. Type of Research                0.31    0.46    0.04    0.02    -0.13   -0.06   -0.07   -0.06   1

To test the hypotheses, I have used two types of regression, reflecting the nature of the dependent variables. Table 4 reports the results of the regressions across five models. Model 1, which contains the four control variables, was computed under a logistic regression with the dummy dependent variable (i.e. getting funding or not).

Model 2 tests hypothesis 1 by adding the first independent variable of study: publishing in top-tier journals. Model 3 follows the same rationale as Model 1, computing the four control variables (i.e. number of authors, number of affiliated countries, cross-country collaboration, and type of research), but regresses the second dependent variable (i.e. number of citations). Model 4 takes the same computation as Model 3 and individually adds the independent variables getting funding and being published in top-tier journals. Model 5 further adds the interaction between these two variables.



Table 4: Regression analysis

                                 (1)         (2)         (3)         (4)         (5)
VARIABLES                        Financial   Financial   Citations   Citations   Citations
                                 funds       funds

Number of Authors                0.247***    0.261***    -0.310      1.475       0.592
                                 (0.0742)    (0.0749)    (2.608)     (2.577)     (2.610)
Number of Countries              0.0500      0.0390      6.231       5.312       5.682
                                 (0.200)     (0.199)     (6.425)     (6.269)     (6.258)
Cross-countries Collaboration    -0.0854     -0.108      -1.061      -4.424      -4.038
                                 (0.304)     (0.303)     (10.26)     (10.02)     (10.00)
Type of Research                 -0.593***   -0.606***   7.526       5.245       4.575
(Qualitative = 1; Quant. = 0)    (0.193)     (0.194)     (7.351)     (7.227)     (7.219)
Top Journals                                 0.319*                  40.88***
                                             (0.181)                 (7.108)
Financial Funds                                                      -10.44
                                                                     (7.142)
Top Journals x Financial Funds                                                   -29.30*
                                                                                 (14.95)
Intercept                        -1.222***   -1.341***   37.12***    25.45***    24.09***
                                 (0.256)     (0.264)     (7.299)     (7.735)     (7.748)

Observations                     635         635         635         635         635
R-squared                                                0.005       0.057       0.063

Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

Examining the results in Table 4, two findings emerge that are consistent with the hypotheses. First, the effect of publishing in top journals on getting funding is positive and significant in Model 2 (p < 0.1). Therefore, as predicted in hypothesis 1, research published in top-tier journals is more likely to receive funds than research in second-tier journals. Second, hypothesis 2 tests the significance of the interaction term between getting funding and publishing in top-tier journals. The results in Model 5 show that this coefficient is negative and significant (p < 0.1). As expected, there is empirical support that the trade-off between scientific excellence and societal relevance is the outcome of the misperception of the intrinsic value of knowledge creation.


DISCUSSION AND CONCLUSION

Implications and recommendations

The significance of the results has different implications for the scientific community as well as for the funding bodies. First, scientists in need of financial support for conducting their research must be better able to highlight the societal relevance of their study (Bornmann, 2012; Erno-Kjolhede and Hansson, 2011; Nightingale and Scott, 2007; van der Meulen and Rip, 2000). Through a tight collaboration, funding institutions can assist academics in establishing the fundamental criteria for assessing the societal impact of science. Because of the rising demand for "economically useful" knowledge (Pavitt, 1991; Salter and Martin, 2001), scientific and funding bodies ought to come together in setting up a new, modernized system for quality review.

Nonetheless, Nelson (1959) claimed that significant advances in basic research are needed before many practical problems can be solved. For instance, he argued that the invention of radio communication would have been impossible without the prior results of Maxwell and Hertz, whose work primarily aimed at explaining Maxwell's equations. Consequently, contributions to science through investments in basic research act as a stimulus for future applied research (Nelson, 1959). While social benefits and applied research are inextricably linked, the costs of achieving breakthroughs by a direct research effort alone are likely to be tremendous (Nelson, 1959). There is thus a need for policy-makers and funding agencies to establish a separate reserve for funding basic research (Pavitt, 1991; Salter and Martin, 2001). Among Nightingale and Scott's (2007, p. 547) propositions for bridging the gap between science and society, "funders need to recognise the distinction between relevance and academic impact" and must therefore develop appropriate criteria for assessing both scientific and societal impacts.


[Figure 1 appears here: citation counts (y-axis) for second-tier versus top-tier journals, with separate lines for non-funded and funded research.]

(2014) is an instrument designed to provide large-scale funding to research units with the objective of linking scientific excellence to purposes beyond the confines of academic science.

Second, scientists aiming for top journals in order to attract funds may reconsider their objective of pursuing either scientific or societal relevance. Considering the results in Table 4, the coefficient of the interaction term between getting funding and publishing in top-tier journals is negative and significant. This shows the existence of a trade-off between scientific and societal impact. On the one hand, scholars publishing in top-tier journals demonstrate a scientific contribution to the field and build up a reputation that eases access to research funds. On the other hand, they also need to answer societal needs by pointing out the economic and social benefits of their research. This in turn imposes constraints on their scientific freedom and limits their ability to expand the body of academic knowledge (Hanney et al., 2000). The results show a negative relationship with citation counts, which decreases the scientific impact of funded research in top-tier journals. Figure 1 illustrates the choice academics may face when applying for research grants. Without funding, research published in top-tier journals is on average cited 2.5 times more than work found in second-tier journals. However, once funds are granted to top-class research, the scientific impact of funded projects decreases because of the preliminary evaluation system, which emphasizes the economic and social benefits of science. In addition, the sensitivity of citation counts to the type of journal drops when an article receives funding, as illustrated by the steeper slope for non-funded projects. This exemplifies the negative impact of research grants on the scientific contribution of academics.

Figure 1: Relationship between the citation counts and top journals.


As a consequence, scholars in need of financial support who also aim at scientific relevance and impactful research may revisit the question: "What defines research quality?" Indeed, scholars have argued that traditional bibliometric indicators such as the number of citations only say something about "academic popularity" rather than revealing relevance for the development of science (Aksnes and Rip, 2009; Frey and Rost, 2010; Garfield, 1979). This again illustrates the "Matthew effect in science" described by Robert K. Merton (1968) and follows the visibility dynamics of research publication. Aksnes and Rip (2009, p. 902) referred to this as an obliteration dynamic, stating that "when one paper becomes more visible, others will become less so." Therefore, research quality in terms of scientific relevance does not necessarily translate into citation counts; it should instead be assessed by directly evaluating the content of the publication through a panel of experts (Frey and Rost, 2010). As Seglen (1994, p. 10; 1997, p. 1055) suggested: "Science deserves to be judged by its contents, not by its wrapping."

Limitations

The results of this study can only be qualified as preliminary for several reasons. First, with regard to the collection of information on the articles, some variables were not included for reasons of feasibility and time constraints. Only the most common and relevant variables were selected as controls in the five models studied. The lack of control variables partly explains why the fit of the models is weak (see Table 4). To obtain a higher R2, we would need to include additional variables that can potentially affect the number of citations, such as the number of pages per article, the number of keywords used, or details of the authors' affiliations and past publication performance. However, doing so would require a longer period of data collection as well as more personnel to analyze the selected papers. Further research can be more inclusive regarding the control variables.

Second, the results only apply to the strategic innovation management field (see Appendix A and B), whose scientific community is relatively small in comparison with areas in the natural sciences such as medicine, physics or chemistry (Ponomarev et al., 2014). Consequently, the financial support provided by funding agencies may have different purposes and impose conditions that are proportionate to the size of the scientific community. A possibility for further research is to investigate the amount of funds received or the total number of grant providers. Future research can also replicate this study in other fields of study.


… of citations is through access to collections of worldwide libraries such as SmartCat. The difference between the two engines is that Google Scholar counts citations from authors, students and readers who present a paper, whereas SmartCat only considers citations in academic papers (Thelwall and Kousha, 2015). Therefore, there is an inherent risk that the citation counts presented in the dataset are overestimated. I encourage future research to adopt a more concise and integrative method in order to construct a robust database.

The limitations noted in this study are meant to guide further research in overcoming the issues named above. Since no previous studies in the management literature have investigated the question of research quality in relation to financial support, the preliminary findings of my study hope to provide a better understanding of the current research funding situation and to trigger other studies that look at the issue in more depth.

Conclusion

This study investigated the question of research quality as perceived by academics and by funding institutions. The notion of research quality embeds different values for those who pursue scientific relevance and for the funds providers, who must allocate their resources according to society's needs. While there are still no reliable or robust tools for assessing the economic and social benefits of academic research, under conditions of high uncertainty about the potential quality of scientific projects, funding agencies turn to the status of top journals to guide their allocation decisions. It is undoubtedly the result of a natural phenomenon, the Matthew effect in science (Merton, 1968), that affects the current landscape of research.

Nonetheless, the empirical results indicate that providing funds to scientists who must serve interests different from their own will be detrimental to their scientific contribution to the field. Because the criteria for societal impact established by the funding bodies constrain the scientific freedom of academics, researchers may in the end face a choice between pursuing scientific or social relevance. Until a modernized evaluation system can bridge the gap between science and society's needs, scholars and funding bodies ought to collaborate even more closely in order to build the foundations of what it means to conduct high quality research.


ACKNOWLEDGEMENTS

This research would not have been possible without the infrastructure and IT equipment provided by the Faculty of Economics and Business of the University of Groningen. In addition, my thoughts go to Dr. Pedro De Faria, my direct supervisor for this thesis, for not only guiding me throughout the research process but also morally supporting me during this darkest period of my life. I also thank my classmates Elena Grigoropoulou, Gabriel Marin, Liu Yun-Chun and Veneta Fesliyska, as well as two anonymous reviewers in the business field, for their feedback. All errors are my own.

Dinh Huu Trong (born on September 12th, 1994 in Ho Chi Minh City,

Vietnam) is pursuing a Master’s degree in Business Administration with a specialization in Strategic Innovation Management at the University of Groningen (The Netherlands). He previously obtained a Bachelor’s degree in economics at the Solvay Brussels School of Economics and Management (Belgium). His research focuses on the impacts of funded scientific projects and the intrinsic value of knowledge creation.

REFERENCES

Aksnes, D.W. & Rip, A. (2009). Researchers' perceptions of citations. Research Policy, Vol. 38, 895-905.

Arora, A., David, P.A. & Gambardella, A. (1998). Reputation and Competence in Publicly Funded Science: Estimating the Effects on Research Group Productivity. Annales d'économie et de statistique, n°49/50.

Azoulay, P., Stuart, T. & Wang, Y. (2014). Matthew: Effect or Fable? Management Science, 60(1): 92-109.

Belsey, D.A., Kuh, E. & Welsch, R.E. (1980). Regression diagnostics. New York: John Wiley & Sons.

Bolli, T. & Somogyi, F. (2011). Do competitively acquired funds induce universities to increase productivity? Research Policy, Vol. 40, 136-147.

Bornmann, L. (2012). Measuring the societal impact of research. EMBO reports, 13(8): 673-676.


Defazio, D., Lockett, A. & Wright, M. (2009). Funding incentives, collaborative dynamics and scientific productivity: Evidence from the EU framework program. Research Policy, Vol. 38, 293-305.

Erno-Kjolhede, E. & Hansson, F. (2011). Measuring research performance during a changing relationship between science and society. Research Evaluation, 20(2): 131-143.

Fedderke, J.W. & Goldschmidt, M. (2015). Does massive funding support of researchers work? Evaluating the impact of the South African research chair funding initiative. Research Policy, Vol. 44, 467-482.

Fombrun, C. (1996). Reputation: Realizing value from the corporate image. Boston: Harvard Business School Press.

Fowler, J.H. & Aksnes, D.W. (2007). Does self-citation pay? Scientometrics, 72(3): 427-437.

Frandsen, T.F. & Nicolaisen, J. (2013). The ripple effect: Citation chain reactions of a nobel prize. Journal of the American Society for Information Science and Technology, 64(3): 437-447.

Franke, R.H., Edlund, T.W. & Oster, F. (1990). The development of strategic management: Journal quality and article impact. Strategic Management Journal. Vol. 11, 243-253.

Frey, B.S. & Rost, K. (2010). Do rankings reflect research quality? Journal of Applied Economics, 13(1): 1-38.

Garfield, E. (1979). Is Citation Analysis a Legitimate Evaluation Tool? Scientometrics, 1(4): 359-375.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. Sage.

Goldfarb, B. (2008). The effect of government contracting on academic research: Does the source of funding affect scientific output? Research Policy, Vol. 37, 41-58.

Grant, J., Brutscher, P.B., Kirk, S., Butler, L. & Wooding, S. (2009). Capturing Research Impacts. Cambridge, UK: RAND Europe.

Grimpe, C. (2012). Extramural Research Grants and Scientists’ Funding Strategies: Beggars Cannot be Choosers? Research Policy, 41(8): 1448-1460.


Holbrook, J.B. & Frodeman, R. (2011). Peer review and the ex ante assessment of societal impacts. Research Evaluation, 20(3): 239-246.

Johnson, J.L. & Podsakoff, P.M. (1994). Journal Influence in the field of management: An analysis using Salancik’s index in a dependency network. Academy of Management Journal, 37(5): 1392-1407.

Kirkpatrick, S.A. & Locke, E.A. (1992). The Development of Measures of Faculty Scholarship. Group & Organization Management, 17(1): 5-23.

Lee, S. & Bozeman, B. (2005). The Impact of Research Collaboration on Scientific Productivity. Social Studies of Science, 35/5, 673-702.

Merton, R.K. (1968). The Matthew Effect in Science. Science, 159(3810): 56-63.

Muscio, A., Quaglione, D. & Vallanti, G. (2013). Does government funding complement or substitute private research funding to universities? Research Policy, Vol. 42, 63-75.

Nelson, R.R. (1959). The Simple Economics of Basic Scientific Research. Journal of Political Economy, 67(3): 297-306.

Nightingale, P. & Scott, A. (2007). Peer review and the relevance gap: ten suggestions for policy-makers. Science and Public Policy, 34(8): 543-553.

Pavitt, K. (1991). What makes basic research economically useful? Research Policy, Vol. 20, 109-119.

Podsakoff, P.M., Mackenzie, S.B., Bachrach, D.G. & Podsakoff, N.P. (2005). The influence of management journals in the 1980s and 1990s. Strategic Management Journal, 26: 473-488.

Ponomarev, I.V., Williams, D.E., Hackett, C.J., Schneckenreither, J.D. & Hawk, L.L. (2014). Predicting highly cited papers: A Method for Early Detection of Candidate Breakthroughs. Technological Forecasting & Social Change, 81(1): 49-55.

Rousseau, R. (2001). Are multi-authored articles cited more than single-authored ones? Are collaborations with authors from other countries more cited than collaborations within the country? A case study. Berlin: Gesellschaft für Wissenschaftsforschung.

Salter, A.J. & Martin, B.R. (2001). The economic benefits of publicly funded basic research: a critical review. Research Policy, Vol. 30, 509-532.


Seglen, P.O. (1997). Citations and journal impact factors: Questionable indicators of research quality. Allergy, 52: 1050-1056.

Singh, G., Haddad, K.M. & Chow, C.W. (2007). Are Articles in “Top” Management Journals Necessarily of Higher Quality? Journal of Management Inquiry, 16(4): 319-331.

Spanos, Y.E. & Vonortas, N.S. (2012). Scale and performance in publicly funded collaborative research and development. R&D Management, 42(5): 494-513.

Straub, D.W. (2009). Why Top Journals Accept Your Paper. MIS Quarterly, 33(3): 3-10.

Swygart-Hobaugh, A.J. (2004). A citation analysis of the quantitative/qualitative methods debate’s reflection in sociology research: Implications for library collection development. Library Collections, Acquisitions, and Technical Services, 28, 180-95.

Thelwall, M. & Kousha, K. (2015). Web Indicators for Research Evaluation. Part 1: Citations and Links to Academic Articles from the Web. El professional de la informacion, 24(5).

Ubfal, D. & Maffioli, A. (2011). The impact of funding on research collaboration: Evidence from a developing country. Research Policy, Vol. 40, 1269-1279.

Van der Meulen, B. & Rip, A. (2000). Evaluation of societal quality of public sector research in the Netherlands. Research Evaluation, 8(1): 11-25.

Van Raan, A.F.J. (1998). The influence of international collaboration on the impact of research results. Scientometrics, 42(3): 423-428.

Washington, M. & Zajac, E. (2005). Evolution and Competition: Theory and Evidence. The Academy of Management Journal, 48(2): 282-296.


APPENDIX

Appendix A: Best journals of the Strategic Management list, Source: Google Scholar.

RANK TITLE H5-index H5-median CATEGORY

1. Journal of Management 82 126 Top-tier

2. Management Science 82 119 Top-tier

3. Journal of Business Research 81 120 Second-tier

4. Academy of Management Journal 80 134 Top-tier

5. Strategic Management Journal 76 110 Top-tier

6. Entrepreneurship Theory and Practice 65 95 Top-tier

7. Organization Science 64 100 Top-tier

8. Academy of Management Review 63 100 Top-tier

9. Omega 63 89 Second-tier

10. Journal of Business Venturing 62 102 Top-tier

11. (x) Harvard Business Review 60 84 Top-tier

12. Industrial Marketing Management 58 81 Second-tier

13. Technological Forecasting and Social Change 58 75 Second-tier

14. Journal of Management Studies 55 101 Top-tier

15. Journal of Product Innovation Management 52 77 Second-tier

16. (x) Journal of the Academy of Marketing Science 48 81 Top-tier

17. (x) Journal of Operations Management 47 80 Top-tier

18. (x) Journal of Corporate Finance 47 69 Second-tier

19. (x) Management Decision 43 76 Second-tier

20. The Academy of Management Perspectives 42 71 Second-tier

(x): Not included in the dataset

Appendix B: Best journals of the Entrepreneurship & Innovation list, Source: Google Scholar.

RANK TITLE H5-index H5-median CATEGORY

1. Research Policy 82 134 Top-tier

2. Entrepreneurship Theory and Practice 65 95 Top-tier

3. Journal of Business Venturing 62 102 Top-tier

4. Small Business Economics 58 93 Second-tier

5. Journal of Product Innovation Management 52 77 Second-tier

6. Technovation 48 68 Second-tier

7. International Small Business Journal 42 60 Second-tier



9. The Journal of Technology Transfer 35 50 Second-tier

10. Entrepreneurship & Regional Development 35 49 Second-tier

11. Journal of Intellectual Capital 33 52 Second-tier

12. Strategic Entrepreneurship Journal 30 48 Top-tier

13. Journal of Small Business and Enterprise Development 30 44 Second-tier

14. Creativity and Innovation Management 28 46 Second-tier

15. R&D Management 28 41 Second-tier

16. (x) International Entrepreneurship and Management Journal 27 44 Second-tier

17. Journal of Engineering and Technology Management 27 38 Second-tier

18. (x) Technology Analysis & Strategic Management 27 38 Second-tier

19. (x) International Journal of Entrepreneurial Behavior & Research 26 38 Second-tier

20. (x) Research-Technology Management 24 45 Second-tier

(x): Not included in the dataset

Appendix C: 50 Journals used in FT Research Rank, Source: Financial Times, 2016.

#   TITLE                                               #   TITLE
1.  Academy of Management Journal                       26. Journal of Management Studies
2.  Academy of Management Review                        27. Journal of Marketing
3.  Accounting, Organizations and Society               28. Journal of Marketing Research
4.  Administrative Science Quarterly                    29. Journal of Operations Management
5.  American Economic Review                            30. Journal of Political Economy
6.  Contemporary Accounting Research                    31. Journal of the Academy of Marketing Science
7.  Econometrica                                        32. Management Science
8.  Entrepreneurship Theory and Practice                33. Manufacturing and Service Operations Management
9.  Harvard Business Review                             34. Marketing Science
10. Human Relations                                     35. MIS Quarterly
11. Human Resource Management                           36. Operations Research
12. Information Systems Research                        37. Organization Science
13. Journal of Accounting and Economics                 38. Organization Studies
14. Journal of Accounting Research                      39. Organizational Behavior and Human Decision Processes
15. Journal of Applied Psychology                       40. Production and Operations Management
16. Journal of Business Ethics                          41. Quarterly Journal of Economics
17. Journal of Business Venturing                       42. Research Policy
18. Journal of Consumer Psychology                      43. Review of Accounting Studies
19. Journal of Consumer Research                        44. Review of Economic Studies
21. Journal of Financial and Quantitative Analysis      46. Review of Financial Studies
22. Journal of Financial Economics                      47. Sloan Management Review
23. Journal of International Business Studies           48. Strategic Entrepreneurship Journal
24. Journal of Management                               49. Strategic Management Journal
25. Journal of Management Information Systems           50. The Accounting Review

Appendix D: Multicollinearity and heteroscedasticity tests (Model 3).

VARIABLES VIF 1/VIF

Number of Countries 3.50 0.2859

Cross-countries Collaboration 2.16 0.4629

Number of Authors 1.97 0.5080

Type of Research 1.01 0.9935

Mean VIF 2.16

Breusch-Pagan / Cook-Weisberg test for heteroscedasticity:
- H0: Constant variance
- Variables: fitted values of Number of Citations
- Chi2(1) = 63.45
- Prob > Chi2 = 0.0000

Appendix E: Multicollinearity and heteroscedasticity tests (Model 4).

VARIABLES VIF 1/VIF


Breusch-Pagan / Cook-Weisberg test for heteroscedasticity:
- H0: Constant variance
- Variables: fitted values of Number of Citations
- Chi2(1) = 331.11
- Prob > Chi2 = 0.0000

Appendix F: Multicollinearity and heteroscedasticity tests (Model 5).

VARIABLES VIF 1/VIF

Number of Authors 2.08 0.4805

Number of Countries 3.50 0.2854

Cross-countries Collaboration 2.17 0.4610

Type of Research 1.02 0.9756

Top Journals 1.58 0.6320

Financial Funds 1.66 0.6042

Top Journals x Financial Funds 2.26 0.4434

Mean VIF 2.04

Breusch-Pagan / Cook-Weisberg test for heteroscedasticity:
- H0: Constant variance
- Variables: fitted values of Number of Citations
- Chi2(1) = 507.78
- Prob > Chi2 = 0.0000
