Eight clusters : a dynamic perspective and structural analysis for the evaluation of institutional research performance


Eight clusters : a dynamic perspective and structural analysis for the evaluation of institutional research performance

Thijs, B.

Citation

Thijs, B. (2010, January 27). Eight clusters : a dynamic perspective and structural analysis for the evaluation of institutional research performance. Retrieved from https://hdl.handle.net/1887/14617

Version: Not Applicable (or Unknown)

License: Licence agreement concerning inclusion of doctoral thesis in the Institutional Repository of the University of Leiden

Downloaded from: https://hdl.handle.net/1887/14617

Note: To cite this publication please use the final published version (if applicable).


3 THE DYNAMIC PERSPECTIVE ON RESEARCH PERFORMANCE

3.1 Four basic principles

The paper on Israeli research institutes (see Part II, chapter 4) introduced the dynamic perspective on institutional research performance. This dynamic perspective is an implementation of four principles distilled from guidelines and best practices known in the field of scientometrics. These are combined and extended with the classification model based on research profiles, which adds a unique tool to increase comparability between institutes. In addition, the use of citation-based indicators standardized with respect to both journal and field expectations, together with the additional information on publication strategies gained from the ratio between journal and field expected citation rates, nicely completes the requirements of a multidimensional methodology (Glänzel, Schubert et al., 2008; Glänzel, Thijs et al., 2009).

The four basic principles are:

• Institutions are embedded in their national situation.

• Past performance is a good reference point for the assessment of present and future performance.

• Multiple indicators and field differentiation allow a broad performance assessment.

• Institutions are best compared with similar institutions.

3.1.1 National situation

Research institutes are embedded in a national science system with its own properties and peculiarities. They depend on national funding and are part of an established cultural heritage. This means that national statistics can provide a good starting or reference point for the evaluation of institutional research performance. Bonaccorsi et al. (2007) state that ‘contextual information on national systems needs to be introduced at the level of analysis’. However, as institutes can have a much more specialized research profile than the country, the national situation alone cannot be used as a normative benchmark.

3.1.2 Evolution

A snapshot of an institute’s performance can give a clear idea of the relative position of this institute. However, changes in methodology, in the set of indicators used or in the underlying database can strongly influence a subsequent re-run of these snapshots, making them incomparable. Therefore, we suggest the use of evolution data to reveal possible trends in the performance and to point out strengths and weaknesses.


3.1.3 Multiple indicators and field differentiation

Research is a multi-faceted activity that cannot be captured in one overall indicator, not even in a composite indicator whereby multiple indicators are weighted and summed. Evaluation of research activity requires a broad set of well-chosen indicators, each with its own meaning. Schubert and Braun (1996) stressed ‘the utmost necessity of multidimensional thinking and methods’. Others, like Martin (1996) or Kostoff (1995), gave the same recommendation for studies of research performance.

It is also crucial that these indicators are well understood and, if necessary, explained to the possible users of the evaluation exercise. Additionally, each indicator can also be calculated for a specific field or subfield in order to get a more in-depth view of the research activities of an institute.

3.1.4 Comparability

The comparability of research performance of institutes can be problematic in many instances. In 1979, Garfield already stated about individual researchers:

‘Instead of directly comparing the citation count of, say, a mathematician against that of a biochemist, both should be ranked with their peers, and the comparison should be made between rankings’.

Martin and Irvine (1983) conclude that comparisons are only valid if they focus on similar research entities, i.e., one can only legitimately compare ‘like with like’. This does not only apply to individual researchers but also to higher levels of aggregation when comparing institute performance. It is better to compare the scores of an institute with those of similar institutes. As shown above, ‘similar’ can be defined as being active in the same areas of science.

The above-mentioned classification of institutions into eight separate groups can help to identify those similar institutions. In the paper about benchmarking (Thijs and Glänzel, 2009) we could show that these eight groups differ significantly from each other on publication and citation indicators. Given these differences between groups on the indicators, we conclude that it is a valid strategy to compare institutional research performance within its group or with the group average. In the Israel paper we enhanced this comparability even further by dividing each group into quintiles in order to describe the position of an institution within the group (rank score 1 to 5).
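
The quintile rank can be read as a simple within-cluster percentile computation. The sketch below illustrates this under that reading; the function name and the peer values are invented for illustration, not taken from the original implementation.

```python
# A minimal sketch of the quintile rank (1-5) of an institute within its
# cluster; names and data are illustrative, not from the original study.
def quintile_rank(value: float, cluster_values: list[float]) -> int:
    """Return 1 (bottom fifth) to 5 (top fifth) for `value` among peers."""
    below = sum(v < value for v in cluster_values)
    percentile = below / len(cluster_values)
    return min(int(percentile * 5) + 1, 5)

# Example: an indicator score of 1.3 among ten peer institutes
peers = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]
print(quintile_rank(1.3, peers))  # -> 3 (middle quintile)
```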

3.2 Application

This dynamic perspective was applied to five Israeli research institutions (see Part II, chapter 4):

• Tel Aviv University

• Hebrew University of Jerusalem

• Technion – Israel Institute of Technology


• Ben Gurion University of the Negev

• Weizmann Institute of Science

First, a description of Israel’s research performance was given to describe the national situation. Then, for the assessment of the evolution of the research performance, publication data from 1991 up to 2007 were analyzed in six successive time windows: 1991-1993, 1994-1996, 1997-1999, 2000-2002, 2003-2005 and 2006-2007. All citation indicators use a three-year citation window: the year of publication and the two subsequent years. For the 2006 and 2007 publications this window was not yet complete at the time of the analysis, so citation indicators could not be calculated for the last period.
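
The layout of the six time windows and the check for a complete citation window can be sketched as follows; this is a minimal sketch in Python, and LAST_CITATION_YEAR is an assumed cut-off for the available citation data, not a value taken from the original study.

```python
# Successive publication windows used in the analysis.
WINDOWS = [(1991, 1993), (1994, 1996), (1997, 1999),
           (2000, 2002), (2003, 2005), (2006, 2007)]

CITATION_WINDOW = 3        # publication year plus the two subsequent years
LAST_CITATION_YEAR = 2007  # assumed last complete year of citation data

def citation_window_complete(pub_year: int) -> bool:
    """True if the three-year citation window is fully covered by the data."""
    return pub_year + CITATION_WINDOW - 1 <= LAST_CITATION_YEAR

for start, end in WINDOWS:
    complete = all(citation_window_complete(y) for y in range(start, end + 1))
    print(f"{start}-{end}: citation indicators "
          f"{'computable' if complete else 'not computable'}")
```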

3.2.1 Multiple indicators and field differentiation

To comply with the third principle of the dynamic perspective, a broad range of indicators was calculated and a set of six performance indicators was presented in tables (a sketch of their computation follows the list):

• Share of publications within Israel

• Mean Observed Citation Rate (MOCR)

• Relative Citation Rate (RCR)

• Share of Self-Citations

• Relative Citation Rate Excluding Self-Citations (RCRX)

• Normalized Mean Citation Rate (NMCR)
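
A minimal sketch of how these six indicators can be computed from paper-level records is given below. The record fields and the simplified treatment of RCRX (self-citations removed from the observed rate only) are illustrative assumptions, following the definitions in section 3.3 of the second part, not the original implementation.

```python
# A minimal sketch of the six indicators listed above, computed from
# paper-level records; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Paper:
    citations_3y: int        # citations in the three-year window
    self_citations_3y: int   # author self-citations in that window
    journal_ecr: float       # expected citation rate of the journal
    field_ecr: float         # expected citation rate of the (sub)field

def indicators(papers: list[Paper], n_country: int) -> dict[str, float]:
    n = len(papers)
    cits = sum(p.citations_3y for p in papers)
    self_cits = sum(p.self_citations_3y for p in papers)
    mecr = sum(p.journal_ecr for p in papers) / n  # journal-based expectation
    fecr = sum(p.field_ecr for p in papers) / n    # field-based expectation
    mocr = cits / n                                # Mean Observed Citation Rate
    return {
        "share_within_country": n / n_country,     # share of national output
        "MOCR": mocr,
        "RCR": mocr / mecr,                        # observed vs. journal expectation
        "self_citation_share": self_cits / cits,
        "RCRX": (cits - self_cits) / n / mecr,     # RCR without self-citations
        "NMCR": mocr / fecr,                       # observed vs. field expectation
    }
```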

A figure plotting each institute with MECR/FECR against NMCR (see section 3.3 in the second part for the definition of these indicators) was used to present different publication strategies. These plots, ‘second-generation relational charts’, were created and described by Glänzel et al. (2008, 2009). Other indicators, like the share of internationally co-authored papers or citation indicators on internationally co-authored papers, were also calculated but not used in the publications. The field differentiation was also left out of the paper in order to keep it within a reasonable length for publication.
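
The layout of such a chart can be reproduced in a few lines. The sketch below is hypothetical, with invented institute names and values, and only illustrates the plot structure: the MECR/FECR ratio (publication strategy) against NMCR (field-normalized impact).

```python
# A hypothetical sketch of a second-generation relational chart;
# institute names and values are invented for illustration.
import matplotlib.pyplot as plt

institutes = {
    "Institute A": (1.10, 1.25),
    "Institute B": (0.95, 1.05),
    "Institute C": (1.20, 0.90),
}

fig, ax = plt.subplots()
for name, (strategy, nmcr) in institutes.items():
    ax.scatter(strategy, nmcr)
    ax.annotate(name, (strategy, nmcr))
ax.axhline(1.0, linestyle="--")  # reference line: impact at field expectation
ax.axvline(1.0, linestyle="--")  # reference line: journal set at field level
ax.set_xlabel("MECR/FECR (journal vs. field expected citation rate)")
ax.set_ylabel("NMCR (normalized mean citation rate)")
plt.show()
```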

3.2.2 Comparability

The five institutes were all classified using the model previously described.

All of them classify as ‘Multidisciplinary’ institutes. The Israeli co-authors were somewhat surprised by this classification of Technion, as they expected it to be a member of the Technical and Natural Sciences group. However, close inspection of the research profile revealed activities in other areas, like medicine, as well.


The indicator scores of the five institutes were compared with the overall average of the Multidisciplinary cluster, for all six time frames (five in the case of the citation indicators). A quintile rank was also calculated for each institute on the six selected indicators. These ranks make it possible to judge the evolution of the relative position of the institute within the cluster. As only institutes from the selected European countries are classified at the moment, this quintile rank allows a Europe-Israel institutional comparison.


References

Bonaccorsi, A., Daraio, C., Lepori, B., Slipersaeter, S. (2007), Indicators on individual higher education institutions: addressing data problems and comparability issues, Research Evaluation, 16(2), 66-78.

Garfield, E. (1979), Citation Indexing. Its Theory and Applications in Science, Technology and Humanities, Wiley, New York.

Glänzel, W., Schubert, A., Thijs, B., Debackere, K. (2008), A new generation of relational charts for comparative assessment of citation impact, Archivum Immunologiae et Therapiae Experimentalis, 56 (6), 373-379.

Glänzel, W., Thijs, B., Schubert, A., Debackere, K. (2009), Subfield-specific normalized relative indicators and a new generation of relational charts: Methodological foundations illustrated on the assessment of institutional research performance, Scientometrics, 78 (1), 165-188.

Kostoff, R.N. (1995), The handbook of research impact assessment (Fifth Edition), DTIC Report Number ADA296021.

Martin, B.R., Irvine, J. (1983), Assessing basic research: Some partial indicators of scientific progress in radio astronomy, Research Policy, 12, 61-90.

Martin, B.R. (1996), The use of multiple indicators in the assessment of basic research, Scientometrics, 36 (3), 343-362.

Schubert, A., Braun, T. (1996), Cross-field normalization of scientometric indicators, Scientometrics, 36 (3), 311-324.

Thijs, B., Glänzel, W. (2009), A structural analysis of benchmarks on different bibliometrical indicators for European research institutes based on their research profile, Scientometrics, 79 (2), 377-388.
