Scientometric indicators and their exploitation by journal publishers

Name: Pranay Parsuram

Student number: 2240564

Course: Master’s Thesis (Book and Digital Media Studies)

Supervisor: Prof. Fleur Praal

Second reader: Dr. Adriaan van der Weel

Date of completion: 12 July 2019

Contents

1. Introduction

2. Scientometric Indicators

2.1. Journal Impact Factor

2.2. h-Index

2.3. Eigenfactor™

2.4. SCImago Journal Rank

2.5. Source Normalized Impact Per Paper

2.6. CiteScore

2.7. General Limitations of Citation Count

3. Conceptual Framework

3.1. Academic Publishing

3.2. Academic Journal Publishing

3.2.1. Researcher Considerations

4. Scientometric Indicators and Journal Publishers

4.1. Use of Scientometric Indicators

4.1.1. Journal Pricing

4.2. Manipulation of Scientometric Indicators

4.2.1. Article Type

4.2.2. Co-authorship and Subject Area

4.2.3. Manipulation of Citable Items

4.2.4. Self-citation

4.2.5. Accessibility

4.3. Alternative to Scientometric Indicators

4.4. Open Science

5. Conclusion

1. Introduction

The emergence of the Web has shown considerable promise as a means of transforming academic communication.1 This is because the Web offers academics and research institutions the ability to ‘publish, annotate, review, discover and make links between research outputs’.2 Given these affordances, governments and funding agencies in many countries are now able to get directly involved in systematically evaluating the scientific output, research productivity and quality of universities and research institutions. This systematic evaluation helps identify measures to improve the performance of a given institution and provides a basis for decision-making about the allocation of research funding. The major means of determining the overall scientific output of an institution is scientometrics.3 In this thesis, I will shed light on how journal publishers exploit and manipulate this dependence of universities and research institutions on scientometrics for their own commercial interests. This chapter provides a brief introduction to scientometrics and its use in the context of my research question.

Until around 1970, research regarding the growth and development of academic and, in particular, scientific knowledge was considered a philosophical field. Emphasis was laid on the validity of knowledge, whereas the question of how this knowledge was produced received little attention and was considered to belong to the realm of the social sciences.4 Then, in 1969, the term scientometrics was coined in Russia to denote the study of all aspects of the literature of science and technology. By 1978, the term had attained wide recognition because of the establishment of the journal Scientometrics in Hungary.5 This journal is still in publication today, and it deals with ‘the quantitative features and characteristics of science and scientific research’.6 Tague-Sutcliffe provides a simple definition: ‘Scientometrics is the study of the quantitative aspects of science as a discipline or economic activity.’7

1 J. Stewart, R. Procter, R. Williams & M. Poschen, ‘The role of academic publishers in shaping the development of Web 2.0 services for scholarly communication’, New Media & Society, 15:3 (2013), p. 414.

2 Ibid., p. 414.

3 D. Pontille & D. Torny, ‘The controversial policies of journal ratings: Evaluating social sciences and humanities’, Research Evaluation, 19:5 (2010), pp. 347–348.

4 L. Leydesdorff, The challenge of scientometrics: The development, measurement, and self-organization of scientific communications, (Universal Publishers, 2011), p. 15.

5 W. Hood & C. Wilson, ‘The literature of bibliometrics, scientometrics, and informetrics’, Scientometrics, 52:2 (2001), p. 293.

6 Anon., ‘Description’, Scientometrics <https://link.springer.com/journal/11192> (10 June 2019).

7 J. Tague-Sutcliffe, ‘An introduction to informetrics’, Information Processing & Management, 28:1 (1992), p.

Scientometrics is related to and overlaps with bibliometrics.8 Bibliometrics has been in use in different forms for over a century. However, the term was officially coined only in 1969,9 around the same time as scientometrics. The major difference between bibliometrics and scientometrics is that the former mainly focuses on the quantitative aspects of the literature of science and scholarship, whereas the latter considers the literature as well as other aspects of science and technology, such as researcher practices, socio-organizational structures, research and development management and governmental regulations.10 Inherently, both concepts are used as tools to measure research output in the form of publications, and as such, the terms have been used inconsistently and interchangeably in the existing literature. However, as scientometrics is the broader concept, it fits better in the context of this thesis; therefore, I will use this term throughout. The two basic metrics involved in scientometrics are the number of publications and the number of citations that these publications receive. These metrics can be evaluated at different levels and for different objects, such as a single publication, a researcher, a research unit, an institution or a country.11 Conventional scientometrics only provides raw data about publication and citation counts, which is generally derived from databases such as Web of Science, Scopus, PubMed and Google Scholar. To interpret this data, more sophisticated indicators have been implemented, for example, the journal impact factor (IF), h-index, field-normalized citation indicators, Eigenfactor™ (EF), SCImago journal rank (SJR), source normalised impact per paper (SNIP) and CiteScore. These indicators were introduced because publication and citation trends differ by discipline and even sub-discipline; more objective and normalized indicators are therefore required for comparisons between disciplines.12 One of the first scientometric indicators was the IF,13 proposed in the 1950s.14 Since then, scientometric indicators have gradually become one of the main means of characterising the evolution of research.15 Scientometric indicators are an effective means of determining the evolution of research in a given field and can hence help

8 Hood & Wilson, ‘The literature of bibliometrics’, p. 291.

9 Ibid., p. 292.

10 Ibid., pp. 293–294.

11 J. Wilsdon, J. Bar-Ilan, R. Frodeman, E. Lex, I. Peters & P.F. Wouters, ‘Next-generation metrics: Responsible metrics and evaluation for open science’, Report of the European Commission Expert Group on Altmetrics, (2017), <https://openaccess.leidenuniv.nl/bitstream/handle/1887/58254/report.pdf?> (10 June 2019), p. 8.

12 Ibid., p. 9.

13 J.C. Oosthuizen & J.E. Fenton, ‘Alternatives to the impact factor’, The Surgeon, 12:5 (2014), p. 240.

14 E. Garfield, ‘Citation indexes to science. A new dimension in documentation through association of ideas’, Science, 122 (1955), p. 108; IF was proposed before the terms scientometrics and bibliometrics were coined.

15 Pontille & Torny, ‘The controversial policies of journal ratings’, pp. 347–348.

determine future research trends.16 With the advent of the Internet, their assessment has become easier thanks to the presence of various databases and repositories.17 Therefore, scientometrics plays a central role in the decision-making of universities and research institutions with respect to hiring as well as tenure and promotion decisions for their faculty. This is because these decisions are made mainly on the basis of the impact of a faculty member’s research, which is determined by evaluating the expected impact of an individual’s publications. Due to the limited timeframes available to determine this impact, most institutions rely on the impact of journal publications or of the journals in which the research is published, rather than the impact of an individual researcher’s work.18 Accordingly, most scientometric indicators generally focus only on articles published in academic journals; this is especially true for the sciences.19 However, measuring impact on the basis of scientometric indicators alone merely provides a quantitative assessment, because these indicators are usually derived from citation counts; the use of citation counts has some inherent limitations that will be discussed in detail in the next chapter.

Nevertheless, researchers have realised that journal publications serve not only as a means of communication but also as an indicator of quality and impact20 and hence of career development.21 Their choices of when, where and how to publish their work aim at maximising dissemination to the target audience, registering their claim on the work done and gaining prestige among their peers and superiors. As journal articles have become the dominant form of publication, even in disciplines where they were not dominant in the past, researchers themselves have increasingly come to rely on journals, in particular high-status journals, and have come to perceive other channels of communication, including those better suited to application- or practice-based research, as having low status and prestige in the academic world.22 This has directly affected the number of journals and the amount of research published in them. Overall, the number of academic journals has increased from 39,565 in 2003 to 61,620 in 2008, and among them, the number of peer-reviewed journals

16 Ò. Miró, P. Burbano, C.A. Graham, D.C. Cone, J. Ducharme, A.F. Brown & F.J. Martín-Sánchez, ‘Analysis of h-index and other bibliometric markers of productivity and repercussion of a selected sample of worldwide emergency medicine researchers’, Emergency Medical Journal, 34:3 (2017), p. 175.

17 Wilsdon et al., ‘Next-generation metrics’, p. 9.

18 V.D. Kosteas, ‘Journal impact factors and month of publication’, Economics Letters, 135 (2015), p. 77.

19 J. Fry, C. Oppenheim, C. Creaser, W. Johnson, M. Summers, S. White, G. Butters, J. Craven, J. Griffiths & D. Hartley, ‘Communicating knowledge: how and why UK researchers publish and disseminate their findings’, Research Information Network and JISC, (2009), <https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/5465/1/Communicat> (10 June 2019), pp. 17–18.

20 Ibid., pp. 17–18.

21 Kosteas, ‘Journal impact factors and month of publication’, p. 77.

22 Fry et al., ‘Communicating knowledge’, pp. 17–20.

has increased from 17,649 in 2002 to 23,973 in 2008. Moreover, the annual average number of articles per journal has increased from 72 in 1972 to 123 in 1995.23 Overall, the number of articles published each year and the number of journals have both grown steadily, by about 3% and 3.5% per year, respectively, for over two centuries, though there are some indications that growth has accelerated in recent years.24 As a result, in 2002, Morgan Stanley reported that academic journals had been the fastest-growing media sub-sector of the previous 15 years.25

Moreover, some scientometric indicators, in particular the IF, have rapidly evolved into being more than just a measure of a journal’s relevance and have come to be viewed as indicators of journal and author prestige, with far-reaching implications.26 This prestige predicts the overall flow of information in a given field, thus increasing the chances of both the journal and the author getting noticed and achieving recognition within the scientific community.27 Considering that journal publishers have to take into account the economic considerations of publishing,28 this prestige becomes especially important for financial viability. This is because publications are viewed as commodities sold by publishers to libraries. Since libraries play a central role in financing the publication infrastructure, it is important for journals that their publications, i.e. the commodities, are of high quality and unique, so as to maximise interest from libraries.29 In such a scenario, as the IF of a journal is assumed to be a marker of the quality of the research it publishes,30 it acts as a marketing tool for the given journal. Because of this importance as a marketing tool, journal publishers are inclined to manipulate and even exploit it for their own interests. In this thesis, I will examine the use of scientometric indicators by journal publishers for marketing and economisation purposes. To do so, I will first describe the various indicators in use today and how they are or can be used by journal publishers. I will also consider the effects of these indicators on authors’ motivations to submit their work to specific journals.

23 C. Tenopir & D.W. King, ‘The growth of journals publishing’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 110.

24 M. Ware & M. Mabe, The STM report: An overview of scientific and scholarly journal publishing, 4th edition (The Hague: International Association of Scientific, Technical and Medical Publishers, 2015), p. 6.

25 Morgan Stanley, ‘Scientific publishing: Knowledge is power’, Morgan Stanley Equity Research Europe (London), 30 September 2002 <http://www.econ.ucsb.edu/~tedb/Journals/morganstanley.pdf> (16 May 2019).

26 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240.

27 N.C. Taubert & P. Weingart, ‘Changes in scientific publishing: A heuristic for analysis’, in The future of scholarly publishing: Open access and the economics of digitization (Cape Town, South Africa: African Minds, 2017), p. 3.

28 Ibid., p. 6.

29 Ibid., pp. 6–9.

Lastly, I will also examine some alternatives to existing metrics that may play an important role in future academic communication and research.

2. Scientometric Indicators

Scientometric indicators are used to assess research performance quantitatively. These assessments may be done in conjunction with other metrics or using the scientometric indicators as stand-alone tools. There are a number of scientometric indicators with varying levels of sophistication.31 This chapter describes some of the most widely used indicators at present.

2.1. Journal Impact Factor

The journal IF is the most commonly employed scientometric indicator today.32 It was first proposed by Eugene Garfield in 195533 and has become the leading scientometric ranking in use.34 The main aim for introducing it was to achieve ‘a bibliographic system for science literature that can eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers’.35 Garfield introduced this concept to overcome the problem of researchers being influenced by unfounded assertions and unsubstantiated claims while writing. In other words, researchers were not always aware of criticisms of a certain finding. In the 1950s, in the absence of the Internet, a researcher would have to spend a considerable amount of time investigating the bibliographic predecessors of a given article. A citation index would make this check easier and more efficient. Moreover, the index was mainly aimed at minimising the citation of poorly conceived studies. The citation index was developed using a simple numerical-code system to identify individual scientific articles. Under this system, an alphabetical list of all periodicals was provided along with a numerical code for each of them.36 Thus, the first part of the code gave the periodical in which an article was published, and the second part of the code corresponded to articles in that periodical. Under each numerical code, the code numbers of other articles referring to the given article were to be provided; in addition, for each citing source, the type of article, i.e. original article, review, etc., was to be mentioned. The availability of the code

31 T. van Leeuwen, ‘Bibliometric research evaluations, Web of Science and the Social Sciences and Humanities: a problematic relationship?’, Bibliometrie-Praxis und Forschung, 2 (2013), p. 8-1.

32 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240.

33 Garfield, ‘Citation indexes to science’, p. 108.

34 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240.

35 Garfield, ‘Citation indexes to science’, p. 108.

36 When this system was proposed, a number of periodicals already had numerical codes. The alphabetical list

could be particularly useful for determining the overall historical impact of a given article.37 This system was based on Shepard’s Citations, a legal research system that has been used by lawyers and jurists with considerable success since 1873. Both systems are tools for secondary, rather than primary, research. The main function of Shepard’s Citations was to test the validity of a case based on previous rulings. However, it had another important function: it indexed all cases that derived from the first one.38 This latter function was considered useful for scientific communication.39 The major difference for the IF was that scientific disciplines were divided into broad categories (with journals acting as the categories) and the number of years covered was restricted.40

Although the initial aim of the experiment was to provide a list of articles citing any given article, with the expectation that researchers would go through these citing articles to judge the impact and quality of the cited article, over the years, the IF has become more of a quantitative measure for a journal. In the latter sense, the IF resembles the system proposed by Gross & Gross.41 The concept proposed by Gross & Gross was for chemistry journals and was meant to act as a guide for libraries to decide which journals to purchase for their students without bias or subjectivity. Here, the number of citations to a given journal in a five-year period was considered with respect to the total number of articles the journal published.42 The IF in its current form was defined by Garfield only in the 1970s.43 At present, the IFs of journals are published every September by the Thomson Reuters Institute for Scientific Information (ISI).44 The ISI publishes both two-year and five-year IFs.45 The formula for the two-year IF of a journal for a given year, e.g. 2018, is as follows:46

Journal Impact Factor = (Citations in 2018 to articles published in 2016–17) / (Number of articles published in 2016–17)
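For illustration, consider a hypothetical journal (the figures here are invented for the example): if its 250 articles published in 2016–17 received 500 citations in 2018, its two-year IF for 2018 would be

```latex
\mathrm{IF}_{2018} = \frac{500}{250} = 2.0
```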

37 Ibid., pp. 108–109.

38 W.C. Adair, ‘Citation indexes for scientific literature?’, American Documentation (Pre-1986), 6:1 (1955), pp. 31–32.

39 Garfield, ‘Citation indexes to science’, p. 108.

40 Adair, ‘Citation indexes for scientific literature?’, p. 32; IF ranges of journals greatly differ by discipline.

41 Garfield, ‘Citation indexes to science’, p. 109.

42 P.L. Gross & E.M. Gross, ‘College libraries and chemical education’, Science, 66:1713 (1927), pp. 385–386.

43 E. Garfield, ‘Citations-to divided by items-published gives journal impact factor; ISI lists the top fifty high-impact journals of science’, Current Contents, 7 (1972), pp. 5–8.

44 C. Scully & H. Lodge, ‘Impact factors and their significance; overrated or misused?’, British Dental Journal, 198:7 (2005), p. 391.

45 Kosteas, ‘Journal impact factors and month of publication’, p. 77.

46 Formula derived from Scully & Lodge, ‘Impact factors and their significance’, pp. 391–392. For five-year IF,

However, the IF has a number of shortcomings. First, it only provides an assessment of journal quality and not of the individual articles in the journal.47 Even as a marker for journal quality, it is not entirely accurate, as a single path-breaking article in a journal may be cited numerous times and greatly increase the numerator of the formula, whereas many other articles may not be cited at all. In such a case, the journal’s IF depends on the citations of one article alone. Another issue is that the IF of a journal can be easily manipulated depending on the type of article. For example, a journal publishing more review articles usually has a higher IF than one publishing more original articles, as the former are more frequently cited. In addition, there is disparity among journal IFs depending on their fields and type. For instance, scientific journals generally have higher IFs than clinical journals. Lastly, the IF is only valid for journals and does not consider books and book chapters.48

2.2. h-Index

The h-index was proposed by Hirsch in 2005. This indicator aims to provide a broad assessment of an individual researcher’s work and publication record, based on the number of papers published over a given period of time and the number of times each paper is cited. The index was initially proposed for physicists49 but has since been employed in other scientific disciplines as well.50 According to Hirsch’s definition, ‘a scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤h citations each’.51 In other words, if a scientist has an h-index of 20, he or she has published at least 20 papers, each of which has been cited at least 20 times.52 A simple method for calculating the index is to first look up the number of a researcher’s published works. The list should then be arranged in descending order of the number of times the publications have been cited. One then proceeds through the list until the number of citations for a publication falls below its position in the list; the number of papers counted up to that point gives the h-index value. Thus, an h-index of 0 means that a scientist has either not published

47 Wilsdon et al., ‘Next-generation metrics’, p. 9.

48 Scully & Lodge, ‘Impact factors and their significance’, pp. 392–393.

49 J.E. Hirsch, ‘An index to quantify an individual’s scientific research output’, Proceedings of the National Academy of Sciences, 102:46 (2005), p. 16569.

50 L. Bornmann & H.D. Daniel, ‘What do we know about the h index?’, Journal of the American Society for Information Science and Technology, 58:9 (2007), p. 1381.

51 Hirsch, ‘An index to quantify an individual’s scientific research output’, p. 16569.

52 Derived from Bornmann & Daniel, ‘What do we know about the h index?’, p. 1381.

papers or that his or her papers have had no or negligible visible impact. Thus, this index ensures that only the impactful papers authored by a scientist are considered for assessment and the others are neglected. Therefore, this index supports enduring performers with high publishing productivity coupled with high or at least above-average impact.53
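The calculation described above can be sketched in a few lines of Python; this is a minimal illustration of the ranking procedure, using invented citation counts, rather than any official implementation:

```python
def h_index(citation_counts):
    # Rank the per-paper citation counts in descending order; a paper at
    # rank r contributes to the h-index as long as it has at least r citations.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: five papers cited 10, 8, 5, 4 and 3 times.
# The fourth paper still has at least 4 citations, but the fifth has
# fewer than 5, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```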

However, the index too has some shortcomings. Primary among them is that the h-index tends to favour senior researchers over junior and untenured ones.54 This is because the index does not decrease over time, and tenured researchers would generally have higher h-index values, as their overall productivity would be higher;55 as a result, it does not accurately reflect recent scientific achievement.56 Another common complaint is that the h-index does not consider extreme values; however, in science, a single valuable paper can lead to a major breakthrough.57 Highly cited and significant papers therefore carry the same weight as any other paper when calculating the index.58 Further, since the h-index is derived from the same databases as those used by the ISI to collate the journal IF, it does not consider books and book chapters.59 In addition, unclear and incorrect citations within the databases, as well as some journals or publications not being indexed in the databases, may lead to incorrect calculation of h-index values.60 Moreover, like the IF, the h-index can be manipulated through self-citation. Also, single- and multi-authored papers are treated identically in this system. Lastly, like the IF, it is greatly dependent on the discipline, in particular on the overall number of scientists working in a discipline and its output.61

2.3. Eigenfactor™

EF was developed by Carl Bergstrom and Jevin West in 2007 to address two major issues related to both IF and h-index: that citations had the same value irrespective of the prestige of the journal in which they were published, and that these indices did not take into account

53 Bornmann & Daniel, ‘What do we know about the h index?’, p. 1381.

54 Wilsdon et al., ‘Next-generation metrics’, p. 9.

55 H.L. Roediger III, ‘The h-index in science: A new measure of scholarly contribution’, Observer: The Academic Observer, 19:4 (2006), <https://www.psychologicalscience.org/observer/the-h-index-in-science-a-new-measure-of-scholarly-contribution> (20 May 2019).

56 W.E. Schreiber & D.M. Giustini, ‘Measuring Scientific Impact With the h-Index: A Primer for Pathologists’, American Journal of Clinical Pathology, 151:3 (2018), p. 288.

57 Roediger III, ‘The h-index in science’.

58 Schreiber & Giustini, ‘Measuring Scientific Impact With the h-Index’, p. 288.

59 Roediger III, ‘The h-index in science’.

60 Bornmann & Daniel, ‘What do we know about the h index?’, p. 1383.

61 Schreiber & Giustini, ‘Measuring Scientific Impact With the h-Index’, p. 288.

differences among disciplines and their journals.62 Thus, the main aim of the EF was to provide a more sophisticated metric for measuring citation data by using network analysis.63 To achieve this, Bergstrom and West used a computational algorithm, known as the EF algorithm, to extract the information inherent in citation networks. The algorithm is related to a class of network statistics known as eigenvector centrality measures.64 It computes the visitation frequency of a given journal directly from a matrix that records how often other journals cite that journal.65 Importantly, the EF does not consider journal self-citations.66 The approach is similar to that used by Google to rank web pages when returning search results: Google’s algorithm considers not only the number of hyperlinks a given page receives but also where those hyperlinks come from. In a similar vein, the EF algorithm ranks journals the way Google ranks web pages, based on citation data obtained from the ISI,67 with citations playing the role of hyperlinks.68 Thus, a journal’s EF score is considered to indicate its overall importance in the scientific community69 over a 5-year period.70

The Article Influence™ score (AIS) is closely related to the EF.71 This score is calculated by dividing the EF score of a journal by the total number of articles published by the journal in the given period, normalised as a fraction of all articles in all journals. In general, the AIS provides a per-article comparison of journals and determines the average influence of a given journal on the scientific community over a 5-year period.72 The normalisation of the number of articles allows the AIS to be compared between journals.73 For example, if journal A has an AIS of 1.00 and journal B has an AIS of 5.00, the articles in journal B are, on average, considered to be five times as influential as those in journal A in terms of AIS.
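Based on this description, the calculation can be summarised as follows; the scaling constant 0.01, which normalises the mean AIS to 1.00, is how the measure is commonly presented and is stated here as an assumption rather than a sourced figure:

```latex
\mathrm{AIS} = 0.01 \times \frac{\mathrm{EF}}{X},
\qquad X = \frac{\text{articles published by the journal}}{\text{all articles in all journals in the database}}
```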

62 C.T. Bergstrom, J.D. West & M.A. Wiseman, ‘The eigenfactor™ metrics’, Journal of Neuroscience, 28:45 (2008), p. 11433.

63 Anon., ‘About’, EIGENFACTOR.org <http://www.eigenfactor.org/about.php> (5 June 2019).

64 The eigenvector is widely used in matrices and hence is ideal when considering networks.

65 Bergstrom et al., ‘The eigenfactor™ metrics’, p. 11433.

66 F. Franchignoni & S.M. Lasa, ‘Bibliometric indicators and core journals in physical and rehabilitation medicine’, Journal of Rehabilitation Medicine, 43:6 (2011), p. 472.

67 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 241.

68 Bergstrom et al., ‘The eigenfactor™ metrics’, p. 11433.

69 Anon., ‘About’, EIGENFACTOR.org.

70 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, p. 472.

71 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, pp. 241–242.

72 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, p. 472.

73 Anon., ‘About’, EIGENFACTOR.org.

Although the EF coupled with the AIS has been considered a viable alternative to the IF,74 especially given its use of normalisation, it does have some limitations. First, like the IF, it considers the influence of a journal and not of individual articles or researchers. Moreover, the categorisation of disciplines, and hence the matrix used for the calculation of the EF, has been found to be problematic: the categorisation is considered too broad, inconsistent, inaccurate and incomplete. This is because the categorisation is done using software and not manually. As a result, some journals have been assigned to incorrect categories, and some categories are too broad.75 This can lead to a distorted picture of the standing of some journals in their field or sub-field and makes it rather difficult to reproduce refined league lists of journals.76 Lastly, the EF and AIS are currently only available for journals listed in the Journal Citation Reports (JCR) database, which has been found to be less comprehensive than other citation databases.77

2.4. SCImago Journal Rank

The SJR, first proposed in 2010, is also an indicator of a journal’s prestige, but it is independent of a journal’s size. Like the EF, the SJR is based on citation weighting schemes and eigenvector centrality, and it aims to measure the average prestige per paper in a journal.78 In fact, its calculation and the algorithm and mechanism used are very similar to those used for the EF; what distinguishes it from the EF is that it draws on the Scopus database, which is considered more comprehensive than the JCR database used for the EF.79 Moreover, the SJR considers a 3-year citation window for each journal.80 The developers of the SJR, however, found some issues and proposed an improved version, known as SJR2, in 2012. This indicator considers the prestige of the citing journal as well as its closeness to the cited journal through vector calculations.81 By doing so, it also aims to capture the amount of ‘prestige’ each journal transfers to another by considering the percentage of citations of

74 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242.

75 P. Jacsó, ‘The problems with the subject categories schema in the EigenFactor database from the perspective of ranking journals by their prestige and impact’, Online Information Review, 36:5 (2012), pp. 764–765.

76 Ibid., p. 758.

77 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242.

78 B. González-Pereira, V.P. Guerrero-Bote & F. Moya-Anegón, ‘A new approach to the metric of journals’ scientific prestige: The SJR indicator’, Journal of Informetrics, 4:3 (2010), pp. 379–380.

79 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242.

80 González-Pereira et al., ‘A new approach to the metric of journals’ scientific prestige’, p. 390.

81 V.P. Guerrero-Bote & F. Moya-Anegón, ‘A further step forward in measuring journals’ scientific prestige:

the former made in articles of the latter. Overall, the SJR and SJR2 provide the average citation impact per paper of a journal.82

Like other indicators, the SJR too has some limitations. First, as with the EF and IF, it provides information about journals rather than about individual articles or researchers. Moreover, unlike the EF, the SJR does consider self-citations, up to a limit of 33%,83 which can still be problematic. Lastly, Scopus only contains citation data from after 1996;84 considering that journal articles have been widely read, circulated and cited since the early 1940s,85 the SJR does not seem to provide a complete picture of a journal’s overall historical impact.

2.5. Source Normalized Impact Per Paper

SNIP was proposed by H.F. Moed in 2010,86 and it measures a given journal’s contextual citation impact by considering the characteristics of the journal’s subject field, the temporal maturation of citation impact87 and the coverage of the subject field’s literature within a database.88 These considerations help avoid the distortion in impact caused by differences between disciplines and sub-disciplines.89 Mathematically, SNIP is the ratio of a journal’s citation count per paper to the citation potential within the journal’s field.90 The inclusion of the citation potential ensures that niche specialities or sub-specialities that tend to be cited less frequently are given a higher weighting, creating a more balanced rating system.91 Moreover, the disciplines are determined on an article-by-article basis and not a journal-by-journal basis,92 making SNIP a good indicator, especially for multi-disciplinary journals.
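As a hypothetical illustration of this ratio (the figures are invented): a journal whose papers are cited 4.0 times each on average, in a field with a citation potential of 2.0, would obtain a SNIP of 2.0, whereas the same raw impact in a field with a citation potential of 4.0 would yield a SNIP of only 1.0:

```latex
\mathrm{SNIP} = \frac{\text{citations per paper}}{\text{citation potential of the field}} = \frac{4.0}{2.0} = 2.0
```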

However, SNIP has some shortcomings too. First, SNIP does not recognise the difference between original and review articles; hence, it is not safe from the distortion caused by article type. Second, like the other journal-level indicators, it is a reflection of journal quality even if comparisons are made on an article-by-article basis. Thus, it does not indicate the impact of

82 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, pp. 472–473.

83 González-Pereira et al., ‘A new approach to the metric of journals’ scientific prestige’, p. 381.

84 L. Wildgaard, J.W. Schneider & B. Larsen, ‘A review of the characteristics of 108 author-level bibliometric indicators’, (2014), <https://arxiv.org/ftp/arxiv/papers/1408/1408.5700.pdf> (1 July 2019), p. 39.

85 Tenopir & King, ‘The growth of journals publishing’, pp. 106–107.

86 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243.

87 This indicates how rapidly an article is likely to have an impact within a particular field.

88 H.F. Moed, ‘Measuring contextual citation impact of scientific journals’, Center for Science and Technology Studies, 13 November 2009, <https://arxiv.org/ftp/arxiv/papers/0911/0911.2632.pdf>, p. 1.

89 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243.

90 Moed, ‘Measuring contextual citation impact’, p. 9.

91 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243.

92 Moed, ‘Measuring contextual citation impact’, p. 16.

individual articles. Lastly, although all scores are normalised according to the citation potential of a field, SNIP takes into account neither the development and growth of the literature within a given field or sub-field nor the frequency with which papers in a given field are cited from other fields.93

2.6. CiteScore

CiteScore was launched on 8 December 2016 by the academic publishing giant Elsevier to compete directly with the IF.94 The overall mechanism and formula used for calculating the two are identical.95 However, CiteScore differs from the IF in that it considers almost twice as many titles as are considered when computing the IF; it takes into account editorials, letters and news items while calculating the score; and it is calculated over a 3-year window. Also, CiteScore values are calculated for journals as well as conference proceedings, and the scores are openly available.96 In fact, all citation data related to CiteScore can be easily accessed, making its calculation seem more transparent than that of the IF.97
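By analogy with the IF formula in Section 2.1 and the 3-year window described here, the calculation for a given year, e.g. 2018, can be sketched as follows (my paraphrase of the description above, not a formula quoted from Elsevier):

```latex
\mathrm{CiteScore}_{2018} = \frac{\text{Citations in 2018 to documents published in 2015--17}}{\text{Number of documents published in 2015--17}}
```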

Despite this transparency and the relatively short lifetime of CiteScore, it has already been subject to some criticism. First, because of its similarities to the IF, most issues related to the IF persist.98 In fact, the expansion of the article types considered may well have a negative effect, as it may make CiteScore more vulnerable to manipulation. Further, like the SJR, it only considers the entries included in the Scopus database,99 and thus only citation data from after 1996. Moreover, the subject-area categorisation of journals has been found to be inconsistent, especially in fields such as pharmacy.100 Lastly, CiteScore is funded and developed by Elsevier, one of the largest journal publishers in the world, as a direct competitor to the IF.101 This leads to strong suspicions about its legitimacy, given the major conflict of interest.

93 Ibid., pp. 16–17.

94 J.A.T. Da Silva & A.R. Memon, ‘CiteScore: A cite for sore eyes, or a valuable, transparent metric?’, Scientometrics, 111:1 (2017), p. 553.

95 F. Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, Pharmacy Practice (Granada), 16:2 (2018), p. 1.

96 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, pp. 554–555.

97 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2.

98 Check Section 2.1 for details.

99 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, pp. 554–555.

100 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2.

101 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, p. 554.

2.7. General Limitations of Citation Count

Beyond the limitations mentioned above, all scientometric indicators have one major inherent limitation: their dependence on citation counts. Citation counts are a purely quantitative measure, yet they are used as a means of determining the quality of journals, and they remain the basis of all scientometric indicators. In particular, the IF, the most widely used scientometric indicator, has become deeply entrenched in academia as a measure of research impact despite being severely criticised by journal publishers and researchers alike.102 The IF is determined purely by the number of citations that articles in a given journal receive, with no consideration of which researchers cite those articles or in which journals the citing articles are published. In doing so, the IF arguably measures only the popularity of a given article or journal and not its prestige.103 The calculation is akin to calling the highest-grossing movie or the best-selling book the most prestigious one; in reality, they may be the most popular but not necessarily the most prestigious, because the total number gives no idea of who liked those works. Although modern scientometric indicators like the EF and SJR do consider the status of the citing journal, they still derive considerable amounts of data from the ISI database, which is used to determine the IF; this means that the prestige of the citing journal is still partly determined by the IF, making these indicators problematic as well. In addition, the h-index, which is meant to suggest the quality of an individual researcher, depends on citation counts alone, thus still providing a largely quantitative viewpoint.

Moreover, citing a work in order to criticise or oppose it also counts as a citation.104 Applying the same movie or book analogy, this implies that every review, good or bad, of a given movie or book is taken as evidence of its impact on cinema or literature. This issue is observed in all scientometric indicators, as none of them critically analyses the citations. Thus, even the popularity suggested by pure citation counts is misleading, in that it may represent notoriety or, in some cases, vehement disagreement.105 Therefore, using citation counts, and hence scientometric indicators, as a qualitative measure of the impact of research is a flawed approach in itself. However, as long as scientometric indicators continue to influence hiring, tenure and promotion decisions for researchers, they will remain an inescapable part

102 B. Cope & M. Kalantzis, ‘Evaluating Webs of Knowledge: A Critical Examination of the “Impact Factor”’, Logos, 21:3-4 (2010), pp. 61–65.

103 J. Bollen, M.A. Rodriquez & H. Van de Sompel, ‘Journal status’, Scientometrics, 69:3 (2006), p. 669.

104 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 62.

of academic communication and the overall research process. Consequently, their constant presence and significance make them vulnerable to misuse, manipulation and exploitation.

3. Conceptual Framework

3.1. Academic Publishing

In this chapter, I will focus on the overall communication flow of academic publishing, as this will clarify the roles and responsibilities of the different stakeholders in this type of publishing, thus providing context for the subsequent chapter of my thesis. Before proceeding to academic publishing, let us look at publishing in general. The verb ‘to publish’ is said to derive from the Anglo-Norman word puplier and the Middle French word publier, which loosely translated mean to make something public or known.106 The root words originally implied reading a work in public107 or making public a will or edict.108 One could argue that this is what publishing entails today, albeit with a much broader scope. However, defining publishing as a mere activity or step is too simplistic, because publishing entails a lot more than simply making something public: it is a process that also has social as well as economic contexts.109 Moreover, the word publishing today has an alternative meaning: the business of publishing.110 This is further evidence of it being a process involving multiple players and steps and the inter-relationships between them. Furthermore, the business of publishing clearly has an economic dimension for each stakeholder.

One of the most influential representations of publishing as a process, which takes into account the various considerations mentioned above, is Robert Darnton’s communication circuit. This circuit was first proposed in the 1980s to describe book publishing in the eighteenth century.111 Many aspects of the model remain relevant today, although it requires some changes, mainly due to the recent increase in the use of digital formats for publishing and other technological advancements. Figure 1 shows an updated version of Darnton’s communication circuit, which includes the aspects involved in digital publishing.

106 M. Bhaskar, The Content Machine: Towards a Theory of Publishing from the Printing Press to the Digital Network, (Anthem Press, 2013), p. 16.

107 R. Chartier, Forms and meanings: Texts, performances, and audiences from codex to computer, (Philadelphia: University of Pennsylvania Press, 1995), p. 33.

108 Bhaskar, The Content Machine, p. 17.

109 Ibid., p. 17.

110 Ibid., p. 19.

Figure 1: Digital publishing communication circuit (©Intellect Books, 2013).112

This circuit provides an excellent overview of the processes involved in digital book publishing. In simple terms, it depicts the steps associated with digital book publishing, including the various stakeholders involved as well as the factors influencing them. Although the model is not applicable to academic publishing per se and requires some changes owing to the complex dynamics involved there, it remains relevant for conveying the crux of publishing for the purpose of this thesis. In particular, it is interesting to note the various socio-economic factors affecting publishing and communication as a whole, which form an important part of publishing. These factors become especially important when considering publishing as a business; in some ways, the external factors can be seen as driving forces behind the publishing business. Moreover, the circuit suggests a two-way relationship between the author and the reader: even though these stakeholders may not always be in direct contact, reader tastes and opinions do

influence the author, both before and after the creation of content. Another major feature of publishing is the distribution or sale of published material to readers. This in itself makes publishing more than simply making something public. As shown in Figure 1, the publisher receives the content from the author and provides the finished material to the readers. Thus, publishing can act as both a product, with the author providing raw material that goes through the publishing process to become a finished product, and a service, wherein the publishing process serves the author by helping him / her refine the content and serves the readers by providing them with new content. In that sense, the publisher acts as a middleman between the author and the reader, which reflects Oscar Wilde’s definition: ‘a publisher is simply a useful middleman’.113

Here, the middleman performs two main functions: filtering and amplification. Filtering refers to the filtering of content; a publishing house filters content on the basis of a number of social, economic, political and intellectual considerations to reach a decision on whether the given content should be published. This filtering makes publishers act as gatekeepers of information and content. Filtering is diverse in that it can be driven by idealistic or economic reasons, and it can be massively inclusive or extremely exclusive in nature.114 Thus, filtering is expected to determine the overall value of publishing a given work. Amplification means ensuring that a given work is distributed and consumed as widely as possible.115 Amplification thus involves a series of actions whose end purpose is to increase both the consumption of a work and the awareness of it.116

However, in academic publishing, the role of the publisher differs from that described above. To understand this role more deeply, it is important to understand academic communication. Academic communication is broadly classified into two categories: formal and informal academic communication.117 Examples of informal communication include personal correspondence, lectures, seminars,

113 W.R. Cole, ‘No Author Is a Man of Genius to His Publisher’, The New York Times, 3 September, 1989 <https://www.nytimes.com/1989/09/03/books/no-author-is-a-man-of-genius-to-his-publisher.html> (11 June 2019).

114 Bhaskar, The Content Machine, pp. 106–109.

115 This would work differently in academic publishing, wherein the major focus for distribution would be the target audience and then others.

116 Ibid., pp. 114–115.

117 H.E. Roosendaal & P.A.T.M. Geurts, ‘Forces and functions in scientific communication: an analysis of their interplay’, Cooperative Research Information Systems in Physics, 31 (1997), <https://core.ac.uk/download/pdf/11464080.pdf> (10 June 2019), p. 11.

blogs, etc. Formal communication usually refers to published research.118 This could be either in the form of a scholarly monograph or a scientific article published in a journal.119 In academic publishing, the publisher gives a stamp of quality that helps formalise a researcher’s work.120 For example, a piece of research posted online by a researcher as a blog is not considered published. For the given blog to be considered a publication, the author would have to publish it in an academic journal or as a scholarly monograph.121 Thus, in a sense, an academic publisher formalises or legitimises a researcher’s work by publishing it. Although this holds true for academic publishing in general, in my thesis I will concentrate only on publishers of academic journal articles, since scientometric indicators are most relevant for them.

3.2. Academic Journal Publishing

Similar to general publishing, in academic publishing, the publisher acts as a middleman and gatekeeper of information. As mentioned in the previous section, in this case, the publisher plays the major role of legitimising the research and its results, which in turn helps future research. Figure 2 shows the role of academic journals in the overall research process.

118 Ibid., p. 8.

119 J.B. Thompson, Books in the digital age: The transformation of academic and higher education publishing in Britain and the United States, (Polity, 2005), pp. 81–84.

120 F. Praal & A. van der Weel, ‘Taming the digital wilds: How to find authority in an alternative publication paradigm’, TXT, 2016 (2016), p. 98.

121 This could be done with or without modification depending on the target audience and the style required by

Figure 2: Journal articles in the research process.122

As shown in the figure, researchers conduct their work in a given institute or research space to obtain results. To ensure that the results are legitimised, they submit their work to journal publishers. The journal publisher then publishes the content after filtering and delivers it to the readers, who are mostly other researchers, either directly or indirectly through libraries. Some of these readers then use the articles to conduct further research. Thus, journal articles not only help with the dissemination of research but also promote further research, and they therefore play a critical role in the overall research process. Moreover, the publisher achieves the legitimisation of a researcher’s work by performing four major functions: registration, certification, dissemination and archiving.123

122 Adapted from Thompson, Books in the digital age, p. 82.


3.2.1. Researcher Considerations

Given that journal articles themselves play a role in the research process, researchers and journal publishers have an interdependent relationship: the journal publisher requires reliable content produced by the researcher, and the researcher needs a journal to help legitimise his / her findings and to provide access to the latest research. As a result of this dependency, the overall research process has affected academic publishing and vice versa.124 Given the growth in the number of journals, and hence of journal articles published, since the mid-twentieth century, journal publishers have a larger amount of content to choose from and researchers have more journals to consider.125

Researchers have certain considerations when determining the journal to which they wish to submit their content. Moreover, like Darnton’s communication circuit, my proposed model also has some socio-economic factors in play. The most important among them is research funding. Apart from the funding aspect, political and legal guidelines, in conjunction with the guidelines of research institutions / universities, and intellectual and social influences also play a role. Because these factors affect the overall research process, they automatically affect all stakeholders involved in the process and become major considerations for researchers when selecting journals.

The other considerations for researchers relate to the four major functions of the academic publisher mentioned in the previous section. Registration refers to publicly acknowledging that the given academic(s) have researched a specific topic or made a certain discovery. In other words, publishing helps the researcher(s) stake a claim to a given result or discovery, whether path-breaking or not,126 thus not only distributing the content but also placing a time-stamp on the research. When the first scientific journals were launched, the Journal des Sçavans in 1665 in Paris and the Philosophical Transactions of the Royal Society in 1665 in London, most researchers, in particular scientists, had reservations about making their findings public. Although they were concerned about staking their claim to a discovery, they believed that sharing their findings would give a competitive edge to their rivals. Consequently, many scientists made their findings public through the use of ingenious cryptic messages, codes and anagrams.127 As a result, relevant readers were not

124 D.C. Prosser, ‘Researchers and scholarly communications: an evolving interdependency’, in D. Shorley & M. Jubb (eds.), The Future of Scholarly Communication (Facet Publishing, 2013), p. 39.

125 This is an ideal scenario. As we will see in the subsequent chapter, the researchers are at a disadvantage here.

126 Prosser, ‘Researchers and scholarly communications’, p. 39.

aware of the latest findings, making it difficult for further research to be conducted or for researchers to collaborate.128 The level of secrecy was so high that Henry Oldenburg, the editor of the Philosophical Transactions of the Royal Society, wrote to the leading scientists of the day, describing to them the merits of making their results public in an explicit and clear manner for the purpose of registration and staking claims.129 Thus, right from the start, the main incentive for publication in a journal was not communication or making something public to aid further research, but registering one’s claim and thus proving a scientist’s intellectual worth.130 Over the years, by the mid-nineteenth century, making one’s findings public became not only the norm but also a requirement for justifying a researcher’s intellectual worth, eventually becoming a major input for the reward structure for researchers.131 Thus, registration is the most basic expectation that a researcher has of a journal, and therefore, a journal being considered legitimate becomes a pre-requisite for any researcher wishing to be published.

Apart from assuring scientists of registration, Oldenburg also mentioned that publishing would help with the certification of results.132 Certification here refers to qualitatively validating a claim or discovery made by the researcher(s).133 This he hoped to achieve by ensuring that all submitted articles were reviewed by members of the Council of the Society.134 This was perhaps the first instance of peer-review of journal articles, which has become one of the cornerstones of academic publishing.135 Peer-review is ‘the process by which research output is subjected to scrutiny and critical assessment by individuals who are experts in those areas’.136 It is widely believed by the academic community, including researchers and journal editors, that an academic publication must be peer-reviewed ‘to establish its value to the field, its originality, and its argumentative rigor’.137 Moreover, the accuracy and quality of work that has not been peer-reviewed cannot

128 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16.

129 Prosser, ‘Researchers and scholarly communications’, p. 40.

130 Prosser, ‘Researchers and scholarly communications’, p. 40.

131 Ibid., pp. 40–42.

132 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16.

133 Prosser, ‘Researchers and scholarly communications’, p. 41.

134 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16.

135 I. Hames, ‘Peer review in a rapidly evolving publishing landscape’, in Academic and Professional Publishing (Chandos Publishing, 2012), p. 15.

136 Ibid., pp. 16–17.

137 D.A. Stirling, ‘Editorial peer review: Its strengths and weaknesses’, Journal of the Association for Information Science and Technology, 52 (2001), p. 984.

be trusted.138 Consequently, although there are some academic journals that do not conduct peer-review, these journals are usually considered less prestigious than peer-reviewed ones. Therefore, the absence or presence of peer-review becomes an important consideration for researchers when selecting journals. This benefits not only the authors, who can improve the quality of their work based on the critical assessment received, but also the readers, who are assured of access to high-quality, robust and relevant research.

While registration and certification have been considerations for researchers choosing journals from the outset, the emphasis on measuring the impact of their work on current and future research and development has been in practice only for the last two to three decades.139 As a result, it is important for researchers that their published material is made discoverable to others. This enhanced discoverability increases the probability of their work being noticed and cited by other researchers. As is evident from Chapter 2, since most scientometric indicators are derived from citation data, dissemination is a very important consideration for researchers when selecting journals. Dissemination refers to creating awareness of a given researcher’s work among the target audience, in particular his / her peers.140 This awareness can be created by improving the discoverability of articles and their accessibility to users. The increased discoverability and wider access would increase the probability of a given article being read and cited. This may benefit the author, as it can broaden the scope and reach of his / her results, and it may also help the reader gain easy access to the research they wish to consult. Consequently, researchers would prefer submitting their work to journals that are able to maximise the discoverability of their work. Earlier, this would have been determined mainly by the popularity of the journal. However, today, most journals have gone online; in fact, around 90 percent of journals in English are now available online. Also, most literature searches are carried out online.141 Moreover, there has been a tremendous increase in the number of articles being written,142 although the overall output of a single researcher remains similar.143 This means that there has been an

138 M. Ware, Peer review: benefits, perceptions and alternatives (London: Publishing Research Consortium, 2008), <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.214.9676&rep=rep1&type=pdf>, p. 6.
139 M. De Rond & A.N. Miller, 'Publish or perish: bane or boon of academic life?', Journal of Management Inquiry, 14:4 (2005), p. 322.
140 Prosser, 'Researchers and scholarly communications', p. 41.
141 B. Cope & A. Phillips, 'Introduction', in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 1–2.
142 Tenopir & King, 'The growth of journals publishing', p. 110.
143 B. Cope & M. Kalantzis, 'Signs of epistemic disruption: Transformations in the knowledge system of the academic journal', in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 14.


As a result, ensuring that one's research is discoverable has become all the more difficult. Yet getting one's work noticed and cited is important for researchers' career prospects and for future funding. Therefore, researchers usually choose to submit to journals that are prestigious, have a wide readership and are easily accessible, thus increasing the probability of their work being cited.144

The last function is archiving, which refers to the preservation of the published material over time.145 Archiving has become considerably more convenient as journals have moved to electronic formats. For example, the very first issue of the Philosophical Transactions of the Royal Society, along with all subsequent issues, is available in digital format.146 Again, this function benefits both the author and the reader: the author is assured of long-term preservation of his / her research, while the reader gains the opportunity to consult historical research on a given topic and thus trace the evolution of a field. Moreover, not only publishers but also libraries have started creating electronic archives and warehouses of information material. The main roadblock here is the lack of standardisation in how the information is stored, as practices vary from one library or publisher to another.147 Despite this, archiving, like registration, is a fundamental expectation that researchers have of a journal.

144 More details on accessibility will be provided in Chapter 4.
145 Prosser, 'Researchers and scholarly communications', p. 41.
146 Tenopir & King, 'The growth of journals publishing', p. 106.


4. Scientometric Indicators and Journal Publishers

The previous chapter suggests that, in the research cycle, the researcher and the journal publisher have a symbiotic relationship: the latter depends on the former for content, and the former needs the latter to register, certify, disseminate and archive that content. In reality, however, the relationship is far from balanced. It is skewed in favour of the journal publishers, in particular large publishers with multiple titles in their portfolios and publishers owning the most-cited journals. Like most sectors in the communication and media industries, academic and journal publishing has undergone considerable consolidation in the last three decades or so. This has led to an oligopolistic journal market controlled by a few dominant players, who wield an inordinate amount of power.148 As a result, publishers, who own only the means of dissemination, tend to dictate the production of content itself.149 This chapter explores the reasons for this imbalance and analyses how journal publishers can exploit and manipulate scientometric indicators for their own gain.

4.1. Use of Scientometric Indicators

First and foremost, scientometric indicators are used to evaluate researcher performance. Beyond evaluation, scientometric indicators, in particular the IF, also serve as a tool for journals to prove their prestige, or more accurately their popularity, in the academic world. This is most visible when visiting the official website of any journal listed in the ISI database. For example, Figure 3 shows a screen-shot of the journal metrics for Materials Science & Engineering: A. The journal metrics are advertised on the home-page just below the tabs that visitors use to navigate the site. As shown in the figure, the CiteScore, IF,150 five-year IF, SNIP and SJR are provided. Values for one or more of the scientometric indicators are usually provided for all journals. The main reason for this is that these indicators strongly affect how a journal is perceived in terms of quality and hence influence how authors select the journals they wish to submit their content to.

148 W. Peekhaus, 'The enclosure and alienation of academic publishing: Lessons for the professoriate', tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 10:2 (2012), pp. 578–580.
149 Ibid., p. 584.
150 Two-year IF values are the standard released by Thomson Reuters. More recently, they have also started releasing five-year IF values.

Figure 3: Screen-shot of the home-page of the journal Materials Science & Engineering: A.
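To make these advertised metrics concrete, the following minimal sketch shows how a two-year IF of the kind displayed on such home-pages is calculated: the citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those two years. The journal and all figures in the example are invented purely for illustration.

```python
# Minimal sketch of the two-year impact factor calculation.
# All figures below are invented for illustration; real values
# come from the citation index (e.g. Web of Science).

def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """IF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations received in 2019 to articles
# from 2017-2018, which together comprised 400 citable items.
print(two_year_impact_factor(1200, 400))  # 3.0
```

As Section 4.2 will show, both the numerator and the denominator of this ratio leave room for manipulation.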

4.1.1. Journal Pricing

Subscription-based journals have two broad categories of customers: individuals and libraries. However, the number of individual subscribers has been in constant decline. In fact, many publishers have stopped trying to attract individual subscribers, and those that do are usually forced to offer heavy discounts. As a result, libraries form the main market for journal subscriptions.151 The power shift in academic communication in favour of the journals occurred in the 1980s and 1990s, when government strategies with respect to research spending underwent major changes due to the rise of neo-liberalism. Neo-liberalism is linked to globalisation and 'is a particular element of globalization in that it constitutes the form through which domestic and global economic relations are structured.'152 While neo-liberalism had a number of effects on government spending, especially with respect to higher education, it also led to a re-calibration of how research was conducted and evaluated. In general, governments emphasised greater funding to universities for research conducted with a view to commercialisation. Consequently, more funding was allocated to applied research at the expense of other departments, including library budgets.153

151 Ware & Mabe, The STM report, p. 19.
152 M. Olssen & M.A. Peters, 'Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism', Journal of Education Policy, 20:3 (2005), p. 313.


Overall, the growth in research budgets has consistently outpaced the growth in library budgets.154 As a result, the spending power of libraries diminished. This decline was further exacerbated by the increasing prices of journals: during the 1980s and 1990s, journal prices rose at rates well above the rate of inflation. The increase was especially dramatic in the science, technology, engineering and medicine (STEM) fields, though increases were noted in all fields.155 This phenomenon of skyrocketing journal prices coupled with static or declining library budgets is known as the 'serials crisis'.156 The price increases were accompanied by growth in the number of journals itself.157
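A simple compounding calculation illustrates the squeeze. The sketch below assumes, purely for illustration, that journal prices rise by 10 percent per year while the library budget grows by 2 percent per year; these rates are assumptions chosen to demonstrate the mechanism, not historical figures.

```python
# Toy illustration of the serials crisis: journal prices compounding
# faster than library budgets. Growth rates are assumed for
# illustration only.

PRICE_GROWTH = 0.10   # assumed annual journal price increase
BUDGET_GROWTH = 0.02  # assumed annual library budget increase

price_index, budget_index = 100.0, 100.0  # both indexed to 100 in year 0
for year in range(15):
    price_index *= 1 + PRICE_GROWTH
    budget_index *= 1 + BUDGET_GROWTH

# After 15 years the same budget buys only about a third as many titles.
print(f"price index: {price_index:.0f}, budget index: {budget_index:.0f}, "
      f"purchasing power: {budget_index / price_index:.0%}")
```

Under these assumed rates, a library's purchasing power falls to roughly a third of its starting level within fifteen years, which is precisely the dynamic the term 'serials crisis' captures.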

As a result of the larger catalogues to choose from and their decreased spending power, libraries had to be more selective about the journals they purchased. Library decisions on which journals to subscribe to are based on a number of considerations, but the most important is the metrics of the journals,158 most often the IF.159 In general, the collection policies of academic libraries are governed by the objective of maintaining existing research holdings while attempting to expand them through the acquisition of new titles. Consequently, libraries are expected to subscribe to as many core journals as possible, and their faculty and users expect access to the key disciplinary journals. An important point here is that, unlike other goods and commodities, competing journals and journal articles are not substitutes for one another, despite their similar or overlapping subject areas. As a result, libraries generally cannot replace existing journals even if newer journals in the same discipline are available at a lower price. This is because newer journals initially fall outside the scope of scientometric indicators, and even once they attain a metric, it can be a long and arduous process for them to gain a reputation comparable to that of a journal that has been in circulation for decades. Also, journal prices are not necessarily governed by IF or quality.160 Libraries have therefore been caught between faculty members' need for access to certain journals and the increasing prices of those journals.
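Since metrics dominate these decisions, a subscription round can be caricatured as ranking candidate titles by IF and buying down the list until the budget runs out. The sketch below is a deliberately simplified toy model; the titles, IF values and prices are all hypothetical.

```python
# Toy model of metric-driven subscription decisions under a fixed
# budget: rank candidate journals by IF, then subscribe greedily.
# Titles, IF values and prices are hypothetical.

journals = [
    ("Journal A", 4.2, 3000.0),  # (title, IF, annual price)
    ("Journal B", 1.1, 800.0),
    ("Journal C", 6.8, 5500.0),
    ("Journal D", 0.4, 400.0),
]
budget = 9000.0

subscriptions = []
for title, jif, price in sorted(journals, key=lambda j: j[1], reverse=True):
    if price <= budget:
        subscriptions.append(title)
        budget -= price

print(subscriptions)  # ['Journal C', 'Journal A', 'Journal D']
```

The toy model also makes the non-substitutability point visible: a new, cheaper title with no metric yet (in effect, an IF of zero) would be ranked last regardless of its actual quality.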

154 Ware & Mabe, The STM report, p. 69.
155 Thompson, Books in the digital age, p. 99.
156 Peekhaus, 'The enclosure and alienation of academic publishing', p. 582.
157 Thompson, Books in the digital age, p. 99.
158 E. Roldan-Valadez, S.Y. Salazar-Ruiz, R. Ibarra-Contreras & C. Rios, 'Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics', Irish Journal of Medical Science (1971-) (2018), p. 3.
159 Other scientometric indicators are a fairly recent phenomenon. During the time of the serials crisis, the IF was the dominant scientometric indicator in use.
