
U-Map and U-Multirank: profiling and ranking tools for higher education institutions

Paper presented in track 5 at the EAIR 35th Annual Forum in Rotterdam, the Netherlands, 28-31 August 2013

Name of Author(s)

Ben Jongbloed

Frans Kaiser

Frans van Vught

Contact Details

Ben Jongbloed

Center for Higher Education Policy Studies, University of Twente

PO Box 217

7500 AE Enschede

The Netherlands

E-mail: b.w.a.jongbloed@utwente.nl

Key words

Diversity, Institutional performance measures, Mission, Marketing, Higher education policy/development


Abstract

U-Map and U-Multirank: profiling and ranking tools for higher education institutions

U-Map and U-Multirank are web-based transparency instruments that show institutional diversity in higher education. Both tools are multidimensional and user-driven, and make use of indicators relating to individual higher education institutions (HEIs). U-Map shows what a HEI is doing and how that compares to other institutions worldwide. U-Multirank visualises how well HEIs are performing relative to others. The advantages of both instruments over other classifications and rankings include their multi-dimensional approach and their user-driven character. We discuss the distinctive features of both instruments and the choice of indicators employed, and address some of the criticism.


U-Map and U-Multirank: profiling and ranking tools for higher education institutions

Introduction

There is huge diversity among higher education institutions (HEIs) worldwide. Making sense of this diversity presents quite a challenge for those who are interested in higher education: students, academics, institutional leaders, policy-makers, businesses and professional organisations, and others. This is where classifications and rankings come in, as transparency instruments that shed light on this diversity. The concept of diversity has moved rapidly up the political agenda of European higher education over the past decade, as exemplified by the recent (2011) European Commission Modernisation Agenda (EC 2011). In comparison to more homogeneous systems, diversified higher education systems are argued to be more responsive, effective and innovative. It is therefore important, first of all, to understand the diversity that exists within and between individual systems. The U-Map and U-Multirank projects have sought to address this need.

U-Map (CHEPS 2008; Van Vught et al. 2009; 2010) was developed in a series of projects that laid the groundwork for a classification of European HEIs. The U-Map web tool describes institutions on a number of dimensions, each representing an aspect of the activities of higher education institutions. U-Map can thus act as a tool for HEIs to present what they do and how that compares to the activities of other HEIs. U-Multirank addresses the question of how well HEIs are performing in the context of their institutional profile. U-Multirank (CHERPA Consortium 2011; Van Vught & Ziegele 2012) is a multi-dimensional and user-driven international ranking of HEIs. After the European Commission (EC)-funded U-Multirank feasibility study, the EC decided to fund a project to roll out U-Multirank. An international consortium including CHE, CHEPS, CWTS, INCENTIM, academic publisher Elsevier, the Bertelsmann Foundation and software firm Folge 3, supported by individual experts and stakeholder organisations, won the project and will implement U-Multirank over the coming years. The aim is to do this first for a sample of (at least) 500 HEIs worldwide and gradually increase the coverage of HEIs and disciplinary fields. U-Multirank has received €2 million in EU funding, with the possibility of a further two years of funding in 2015-2016. The goal is for an independent organisation to run the ranking thereafter.

This paper will outline the approach, methodology and indicators of U-Map and U-Multirank, and show where these tools differ from (and are perhaps superior to) other transparency instruments.

U-Map and U-Multirank: two new profiling and ranking tools

U-Map and U-Multirank address individual HEIs, showing what they do and how they perform. For this, both tools incorporate five dimensions: (1) Research, (2) Teaching & Learning, (3) Knowledge Exchange, (4) Internationalisation, and (5) Regional Engagement. Unlike U-Multirank, U-Map also includes a sixth dimension, Student Profile, which shows aspects of the size and composition of the student body of a HEI. Along each dimension, indicators describe the characteristics of a HEI. U-Map and U-Multirank allow HEIs to be described, grouped and compared in a variety of ways. Their users can highlight differences between HEIs and construct different classes per dimension.

U-Map

U-Map was created through an intensive and interactive process involving many higher education stakeholders that began in 2005 (Van Vught 2009). A prototype of U-Map was piloted in 2009, and in 2010 and 2011 the instrument was implemented in the Netherlands and later in Estonia, Portugal, Belgium (Flanders) and the Nordic countries. Its soon-to-be-released updated version will cover more than 200 individual higher education institutions, mostly from Europe.

(4)

4 U-Map and U-Multirank: profiling and ranking tools for higher education institutions

U-Map employs around 30 indicators to produce ‘sunburst charts’ that provide a snapshot of the extent to which a HEI is engaged in the various dimensions of institutional activity. When pictured (‘mapped’) side by side on the interactive U-Map website, selected aspects of different institutions’ activity profiles can be compared. U-Map’s on-line database allows users to select the institutions to be compared and the activities to be explored in more depth.

Figure 1: Example of a U-Map activity profile

Legend: the six dimensions shown in the sunburst chart are Teaching and learning, Student body, Research involvement, Knowledge transfer, International orientation and Regional engagement.

The six dimensions in the U-Map activity profile and the underlying indicators are presented in table 1. For a full description of the indicators the reader is referred to: www.u-map.eu.

Table 1: U-Map dimensions and indicators

Teaching and learning profile
 Degree level focus (1-4): % of degrees awarded at doctorate, master, bachelor and sub-degree level
 Range of subjects (5): number of large subject fields (ISCED) in which at least 5% of degrees are awarded
 Orientation of degrees (6-7): % of degrees awarded in general formative programmes vs. programmes for licensed/regulated and other career-oriented programmes
 Expenditure on teaching (8): expenditure on teaching activities as % of total expenditure

Student profile
 Mature students (13): % of mature (30+) students
 Part-time students (14): % of part-time students
 Distance learning students (15): % of students in distance learning programmes
 Size of student body (16): total number of students enrolled in degree programmes

Research involvement
 Peer reviewed academic publications (22): number of peer reviewed academic publications per fte academic staff
 Professional publications (23): number of professional publications per fte academic staff
 Other research output (24): number of other peer reviewed research outputs per fte academic staff
 Doctorate production (25): number of doctorate degrees awarded per fte academic staff
 Expenditure on research (26): expenditure on research activities as % of total expenditure

Involvement in knowledge exchange
 Start-up firms (9): number of start-up firms (new in last three years) per 1000 fte academic staff
 Patent applications filed (10): number of new patent applications filed per 1000 fte academic staff
 Cultural activities (11): number of concerts and exhibitions (co-)organised by the institution per 1000 fte academic staff
 Income from knowledge exchange activities (12): income from knowledge exchange activities (income from licensing agreements, copyrights, third party research and tuition fees from CPD courses) as % of total income

International orientation
 Foreign degree seeking students (17): number of students with a foreign qualifying diploma as % of total enrolment
 Incoming students in exchange programmes (18): number of incoming students in exchange programmes as % of total enrolment
 Students sent out in exchange programmes (19): number of students sent out in exchange programmes as % of total enrolment
 International academic staff (20): number of non-national academic staff (headcount) as % of total academic staff (headcount)
 Importance of international income sources (21): income from international sources as % of total income

Regional engagement
 Graduates working in the region (27): % of graduates working in the region (NUTS2)
 New entrants from the region (28): % of new entrants coming from the region (NUTS2)
 Importance of local/regional income sources (29): income from local/regional sources as % of total income

(xx) refers to the number of the "ray" in the sunburst chart (figure 1)

The information on the data underlying the indicators is collected using on-line questionnaires. After the data provided have been verified, the scores on the indicators are presented in a sunburst chart. The length of each 'ray' indicates the relative position of an institution compared to the other institutions covered in U-Map. For the sunburst chart, indicator scores are divided into four categories (typically no, some, substantial or major involvement in the activity in question). The boundaries between the categories are determined by cut-off points that depend on the distribution of the indicator scores across the institutions in the U-Map database. Mostly, quartile scores are used to establish the cut-off points. The category in which an indicator score is placed is reflected in the length of the corresponding ray in the sunburst chart. It should be stressed that the emergent classification does not imply a rank order: there is no hierarchy between dimensions, nor between the indicators within a dimension.
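To make the cut-off mechanics concrete, the following sketch (our illustration, not actual U-Map code; the indicator scores are invented) shows how quartile-based cut-off points could translate raw indicator scores into the four involvement categories that determine ray length:

```python
# Illustrative sketch of U-Map's category assignment (not the actual U-Map code).
# Indicator scores across institutions are split into four involvement
# categories using quartile cut-off points, as described above.
import statistics

def quartile_cutoffs(scores):
    """Return the three cut-off points (Q1, Q2, Q3) for a list of indicator scores."""
    return statistics.quantiles(scores, n=4)

def involvement_category(score, cutoffs):
    """Map a single indicator score to one of the four involvement categories."""
    labels = ["no", "some", "substantial", "major"]
    for label, cut in zip(labels, cutoffs):
        if score <= cut:
            return label
    return labels[-1]

# Invented scores: peer reviewed publications per fte academic staff
scores = [0.1, 0.4, 0.6, 0.9, 1.2, 1.8, 2.5, 3.1]
cuts = quartile_cutoffs(scores)
for s in scores:
    print(s, "->", involvement_category(s, cuts))
```

The category then fixes the length at which the corresponding ray is drawn, so the chart conveys relative involvement rather than exact scores.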

The activity profiles are published in the U-Map online tool.1 The website offers two tools, the Profile Finder and the Profile Viewer, that allow users to analyse institutional profiles and carry out specific comparative studies. Through the Finder, the user selects HEIs on the basis of user-defined characteristics, arriving at a subset of institutions that meet particular criteria chosen by the user her-/himself. In the next step, the Viewer provides a detailed picture of the activity indicators along the six dimensions covered in U-Map.
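As a rough sketch of the Finder step (our illustration; the profile fields and values are invented, not the U-Map schema), selecting institutions on user-defined characteristics amounts to filtering the database of activity profiles:

```python
# Hypothetical U-Map-style profile records; field names are invented.
institutions = [
    {"name": "Inst A", "doctorate_production": "major", "part_time_students": "some"},
    {"name": "Inst B", "doctorate_production": "no", "part_time_students": "major"},
    {"name": "Inst C", "doctorate_production": "major", "part_time_students": "major"},
]

def profile_finder(institutions, **criteria):
    """Return the subset of institutions matching every user-chosen criterion."""
    return [inst for inst in institutions
            if all(inst.get(field) == value for field, value in criteria.items())]

# A user looking for institutions with major doctorate production:
subset = profile_finder(institutions, doctorate_production="major")
print([inst["name"] for inst in subset])  # ['Inst A', 'Inst C']
```

The Viewer would then present the full activity profiles of this subset side by side.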

U-Multirank

A subset of U-Map's indicators is used in U-Multirank, to prepare the ground for quantifying and visualising the performance of HEIs along five dimensions. The underlying principle of U-Multirank is that rankings should be made only of HEIs that are comparable in terms of their profile or mission. Institutions and programmes should only be compared when their purposes and activity profiles are sufficiently similar. It makes no sense to compare the research performance of a major metropolitan research university with that of a remotely located university of applied sciences, or the internationalisation achievements of a national humanities college whose major purpose is to develop and preserve its unique national language with those of an internationally oriented UK university with branch campuses in Asia. Therefore, U-Multirank is closely connected to U-Map, as it adds the performance aspect to the mapping: how well are HEIs performing in the context of their institutional profile?

U-Multirank was developed by a consortium of CHE, CHEPS, CWTS, INCENTIM and OST, which was funded by the European Commission to look into the feasibility of developing a global multi-dimensional ranking of HEIs (CHERPA Consortium 2011). Such a ranking would have to be more comprehensive and rigorous than existing rankings by covering the various missions of HEIs, such as education, research, innovation, internationalisation and community outreach.

U-Multirank allows users to develop personalised rankings by selecting a set of performance indicators according to their own preferences. On the basis of data gathered on the indicators across five performance dimensions, the U-Multirank tool provides its users with an on-line functionality to create two general types of rankings:

 Focused institutional rankings: rankings on the indicators of the five performance dimensions at the level of institutions as a whole;
 Field-based rankings: rankings on the indicators of the five performance dimensions in a specific field in which institutions are active.

The design of U-Multirank builds on a feasibility study (CHERPA Consortium 2011), which was carried out on 150 HEIs from Europe and around the world. That study confirmed that both the concept and implementation of a multi-dimensional ranking were largely realistic. Apart from institutions as a whole, the study specifically focused on the disciplines of engineering and business studies. This is because, with very few exceptions, HEIs are combinations of stronger and less strong faculties, departments and programmes. Producing only aggregated institutional rankings disguises this reality and does not produce the information most valued by major groups of stakeholders: students, potential students, their families, academic staff and professional organisations. U-Multirank thus allows for the comparison of comparable institutions at the level of the organisation as a whole and also at the level of the broad disciplinary fields in which they are active.

The indicators that, after an intensive process of discussion and testing, were selected for the current (2013) version of U-Multirank are included in table 2.

Table 2: U-Multirank indicators and dimensions

TEACHING & LEARNING
 Student-staff ratio [field-based]
 Graduation rate [institutional, field-based]
 Percentage of academic staff with PhD [field-based]
 Percentage of graduates graduating in norm period [institutional, field-based]
 Rate of graduate unemployment [institutional, field-based]
 Inclusion of work experience [field-based]
 Indicators from the student survey* [field-based]: overall learning experience; quality of courses & teaching; organisation of programme; contact with teachers; social climate; facilities (libraries, rooms, IT, laboratories); research orientation of teaching/programme

RESEARCH
 External research income (per fte academic staff) [institutional, field-based]
 Doctorate productivity [institutional]
 Total research publication output (per fte academic staff), self-reported and from existing databases** [institutional, field-based]
 Art-related output [institutional]
 Field-normalised citation rate** [institutional, field-based]
 Highly cited research publications** [institutional, field-based]
 Interdisciplinary research publications** [institutional, field-based]
 Research orientation of teaching (student survey) [field-based]
 Number of post-doc positions [institutional, field-based]

KNOWLEDGE TRANSFER
 Income from private sources (research contracts, service contracts, licenses etc.) [institutional, field-based]
 Joint research publications with industry** [institutional, field-based]
 Patents (per fte academic staff)** [institutional, field-based]
 Co-patenting with industry (per fte academic staff)** [institutional, field-based]
 Number of spin-offs [institutional]
 Patent citations to research publications** [institutional, field-based]
 Revenues from Continuous Professional Development [institutional]

INTERNATIONAL ORIENTATION
 Educational programmes in foreign language [institutional]
 International orientation of programmes [field-based]
 Opportunities to study abroad (student survey) [field-based]
 Student mobility (incoming, outgoing) [institutional, field-based]
 Percentage of international academic staff [institutional, field-based]
 Percentage of PhDs awarded to foreign students [institutional, field-based]
 International joint research publications** [institutional, field-based]
 International research income [institutional, field-based]

REGIONAL ENGAGEMENT
 Percentage of graduates working in the region [institutional, field-based]
 Student internships in local enterprises [institutional, field-based]
 Degree theses in cooperation with local industry [field-based]
 Regional joint research publications** [institutional, field-based]
 Income from regional sources [institutional, field-based]

* The student survey collects information from students (using questionnaires) on aspects of student satisfaction.
** Data for bibliometric and patent indicators are collected from existing databases.

Three on-line questionnaires will gather the information to build the indicators:

 An on-line questionnaire to provide information on the indicators selected to measure the five performance dimensions at the institutional level.

 A second on-line questionnaire for institutions/faculties active in the specific disciplinary fields covered in the field-based rankings (engineering, business, etc.) to gather the information on the indicators selected to measure the five performance dimensions at the field level.

 A third on-line survey for a sample of students studying in the selected fields to collect the information needed for a range of “student satisfaction” indicators.

More on methodology

HEIs are predominantly multi-purpose, multiple-mission organisations undertaking different mixes of activities. Where most existing transparency instruments largely focus on only one or very few dimensions of the broad spectrum of functions of HEIs - primarily the research function - U-Map and U-Multirank distinguish five dimensions. They are not directed solely towards the large, comprehensive, internationally orientated research university, but also include indicators that allow the mapping and performance assessment of regionally-oriented and/or teaching-oriented HEIs.

An obvious corollary of multidimensionality is that institutional performance on these different dimensions should not be aggregated into a composite overall measure. There is neither theoretical nor empirical justification for assigning specific weights to individual indicators and aggregating them into composite indicators. Thus, U-Multirank refrains from using composite indicators, as they blur differences in performance across particular dimensions and indicators. Studies (e.g. Dill & Soo 2005) show that the weighting systems underlying composite indicators are anything but robust.
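The problem can be made concrete with a small numerical sketch (invented scores, our illustration): two equally defensible weighting schemes applied to the same indicator scores produce opposite orderings, which is why U-Multirank leaves judgements about the relevance of indicators to its users rather than baking them into a composite score.

```python
# Invented, normalised indicator scores (0-1) for two institutions.
scores = {
    "Inst A": {"research": 0.9, "teaching": 0.5},
    "Inst B": {"research": 0.6, "teaching": 0.9},
}

def composite(inst, weights):
    """Weighted-sum composite score - the construct U-Multirank avoids."""
    return sum(weights[dim] * value for dim, value in scores[inst].items())

for weights in ({"research": 0.7, "teaching": 0.3},
                {"research": 0.3, "teaching": 0.7}):
    ranking = sorted(scores, key=lambda inst: composite(inst, weights), reverse=True)
    print(weights, "->", ranking)

# Research-heavy weights rank Inst A first (0.78 vs 0.69); teaching-heavy
# weights reverse the order (0.62 vs 0.81). The 'winner' is an artefact of
# the weighting, not of the underlying performance data.
```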

The selection of dimensions and indicators was based on two processes:

 Stakeholder consultation: an intensive process of stakeholder consultation focused primarily on the relevance of potential dimensions and indicators, which should be the starting point for rankings.
 Methodological analysis: an analysis of the validity of the indicators, the reliability of the information to be gathered, and the expected feasibility of the use of the dimensions and indicators (availability of data; the data collection burden for HEIs).

Where other rankings often suffer from non-transparent procedures in terms of indicator construction, calculation and aggregation, the U-Map and U-Multirank designers chose a participative approach. During the design process all potential dimensions and indicators were clearly described and discussed during stakeholder workshops. Long lists of possible indicators drawn from the literature and from existing practice (including from areas beyond rankings) were discussed. In an iterative process, stakeholders assessed the relevance of these indicators. The outcomes of this stakeholder process were then integrated with the results of a methodological analysis to produce the set of indicators to be included in the feasibility tests. Here, issues of validity, reliability and availability of comparable data played a role. Afterwards, comments were received from external organisations and Advisory Boards. As a result, some indicators were dropped and new ones were introduced.

Decisions about whether to retain or discard indicators are taken in consultation with stakeholders. Three illustrative examples are given below:

 Although there were some problems with the feasibility of employability related indicators in the U-Multirank dimension Teaching & Learning, it was decided to retain these as they were seen as crucial for the multi-dimensional and multi-user character of U-Multirank. Retaining them underlines their importance and encourages institutions and national and international data agencies to pay greater attention to these indicators.

 “International prizes won” was discarded as an indicator, as there was little agreement on the list of prizes to be included.

 The feasibility of the indicators on regional engagement is problematic. This is partly due to a lack of consistent and comparable definitions underlying the data, and partly due to a lack of available information. Nevertheless, it was decided to retain the indicators as they add clear value to U-Multirank.

The feasibility study demonstrated that multi-dimensional and multi-level ranking is certainly possible in terms of the development of feasible and relevant indicators. It also showed the value of multi-dimensionality, with many institutions and faculties performing very differently across the five dimensions and their underlying indicators. The multi-dimensional approach makes these diverse performances transparent. In some dimensions (particularly knowledge transfer and regional engagement) and for some concepts (such as graduate employability and non-traditional research output), feasible indicators are more difficult to develop. It is not surprising that these dimensions and concepts lie in areas of higher education performance hardly explored by existing rankings. In all dimensions, U-Multirank goes beyond the scope of indicators implemented in existing worldwide rankings.

(9)

9 U-Map and U-Multirank: profiling and ranking tools for higher education institutions

Having an interactive web tool is a prerequisite for realising the basic philosophy and approach of U-Map and U-Multirank: the idea of multi-dimensional, user-driven classifications cannot be fully realised in a print version. When fully operational, U-Map and U-Multirank will allow users to create institutional and field-related profiles by including the indicators within (a selection of) the five dimensions in a multi-dimensional table. Such tables are presented interactively, so that end-users may decide which indicators are most important to them, supported by web-based technologies allowing interactive personalised listings. U-Multirank presents the indicator scores in three (top, middle, bottom) to five rank groups for each indicator, rather than in league tables with the spurious precision of ranking from position 1 to n.

This implies that U-Map and U-Multirank are user-driven. This principle 'empowers' potential users (or categories of users) to be the dominant actors in the design and application of classifications and rankings, rather than leaving these to the normative positions of a small group of constructors. This is in line with the Berlin Principles (IREG 2006).
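A minimal sketch of this user-driven grouping (our illustration; the indicator names, scores and quartile-based group boundaries are assumptions, not the published U-Multirank method): the user picks the indicators that matter to her or him, and each institution is then placed in a rank group per indicator instead of receiving a single league-table position.

```python
# Sketch of a user-driven, multidimensional ranking with rank groups.
# Group boundaries (top/middle/bottom by quartiles) are an illustrative assumption.
import statistics

data = {  # invented indicator scores per institution
    "Inst A": {"graduation_rate": 0.82, "citation_rate": 1.4},
    "Inst B": {"graduation_rate": 0.64, "citation_rate": 1.9},
    "Inst C": {"graduation_rate": 0.91, "citation_rate": 0.8},
    "Inst D": {"graduation_rate": 0.71, "citation_rate": 1.1},
}

def rank_groups(data, chosen_indicators):
    """Place each institution in a top/middle/bottom group per chosen indicator."""
    result = {}
    for ind in chosen_indicators:
        values = [scores[ind] for scores in data.values()]
        q1, _, q3 = statistics.quantiles(values, n=4)
        result[ind] = {
            name: ("top" if scores[ind] >= q3 else
                   "bottom" if scores[ind] <= q1 else "middle")
            for name, scores in data.items()
        }
    return result

# A user interested only in teaching-related performance:
print(rank_groups(data, ["graduation_rate"]))
```

Two users selecting different indicators will thus see different, equally valid groupings of the same institutions.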

Discussion

Because, at the time of writing, U-Map and U-Multirank are not yet fully operational, we are not able to present empirical results in terms of activity and performance profiles. Nevertheless, some reflections based on the experiences collected so far can be presented here, along with observations on where an instrument like U-Multirank differs from other rankings.

While U-Map and U-Multirank are based on a common conceptual framework, they are separate instruments, covering different aspects of HEIs. U-Multirank is a European Commission sponsored project, while U-Map is a stand-alone project that provides activity profiles for subscribing HEIs. U-Map's users have come to realise that having information on similarities and differences between the activity profiles of HEIs from different parts of the world can be very helpful.

The basic characteristics of U-Multirank differ substantially from those of existing international rankings. First, U-Multirank (like U-Map) is not confined to research; it takes into account different dimensions of the activities of HEIs. Second, U-Multirank does not calculate composite overall indicators based on pre-defined weights for indicators; it leaves the decision about the relevance of indicators to its users, who may have different preferences and priorities. Both U-Map and U-Multirank are user-driven in two ways: (1) stakeholders continue to be involved on an on-going basis in the development of the tools; (2) users have the option to select indicators according to their own preferences and thereby compile a personalised classification or ranking. U-Multirank will not provide over-simplified league tables. Institutions will be ranked into a number of different rank groups for each indicator. U-Map and U-Multirank encourage their users to compare like with like: based on a number of indicators describing institutional profiles, such as the ones included in U-Map, U-Multirank will only compare institutions with similar missions.

U-Multirank has received wide support as an attempt to design a tool that is more comprehensive and rigorous than existing rankings. At the same time, commentators have articulated various concerns and issues. The criticism voiced concerns both specific indicators that have been proposed and more general conceptual issues. We address some of this criticism below.

“U-Multirank is not particularly innovative in terms of its indicators”

The U-Multirank consortium has developed new bibliometric indicators for the dimensions of regional engagement, internationalisation and knowledge transfer, incorporating regional, international and university-industry co-publications. Furthermore, U-Multirank includes indicators relating to multidisciplinary research, internships, art-related outputs and external income, all going beyond the indicators in existing rankings and classifications. However, further development work is needed on some dimensions and indicators. In particular, in the dimensions 'knowledge exchange' and 'regional engagement', feasible and applicable indicators appear to be scarce. The future challenge certainly is to design and develop more, and more generally acceptable, indicators in these areas. Likewise, some concepts (such as graduate employability and non-traditional research output) require further work to develop feasible indicators. The U-Multirank experience so far has demonstrated the complexity of developing transparency instruments in higher education; it is unrealistic to expect a perfect new tool to be designed at the first (or even second) attempt.

“The quality and comparability of the data collected by U-Multirank (and U-Map) is not guaranteed”

A major issue in international rankings is the quality of the data generated. The key challenge is the availability of internationally comparable data. If we move beyond the traditional focus on bibliometric data, rankings largely have to rely on institutional data provision. In the feasibility study, measures were developed to ensure data quality and to minimise 'gaming' of the results: data cleaning procedures, plausibility checks, and feedback loops with the institutions. The option of 'pre-filling' the questionnaires with data from national sources, however, proved to be less fruitful than initially expected. In this respect, the further development of the EUMIDA2 database is another potential opportunity for U-Multirank. In the student survey we analysed whether the comparability of responses was distorted by systematic differences in students' expectation levels; no such distortions were found.
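To illustrate what such a plausibility check might look like (a sketch under our own assumptions; the indicator names and plausible ranges are invented, not the project's actual procedures):

```python
# Hypothetical plausibility check on self-reported indicator data.
# The plausible ranges below are invented for illustration.
PLAUSIBLE_RANGES = {
    "graduation_rate": (0.0, 1.0),        # must be a proportion
    "publications_per_fte": (0.0, 20.0),  # generous upper bound
}

def plausibility_flags(record):
    """Return the indicators whose reported values fall outside plausible
    ranges, to be fed back to the institution for verification."""
    return [indicator
            for indicator, (lo, hi) in PLAUSIBLE_RANGES.items()
            if indicator in record and not (lo <= record[indicator] <= hi)]

# A graduation rate above 1.0 is flagged and returned in a feedback loop:
print(plausibility_flags({"graduation_rate": 1.3, "publications_per_fte": 2.1}))
# -> ['graduation_rate']
```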

Based on the experiences and results of the feasibility study, the set of indicators has been revised. In particular, issues of validity, clarity and consistency of indicators led to methodological refinements and a sharpening of indicator definitions. Definitions and explanations are included in a glossary which was developed by the project.

Still, multidimensional rankings that want to take the variety of institutional missions and profiles into account cannot be realised without institutional and student surveys to collect self-reported data. These rankings therefore have to succeed in convincing HEIs and students to invest time and energy in data collection and reporting. This makes multidimensional rankings vulnerable: if HEIs or students do not see clear benefits from the ranking outcomes, they may not want to get involved in the data provision.

“U-Multirank and U-Map are primarily relevant for European HEIs”

While it proved particularly difficult for U-Multirank to recruit institutions from the USA and China, responses from Latin America, Australia, Asia, Russia, and a number of developing countries were enthusiastic, leading the U-Multirank project team to believe that there will be continuing interest from institutions outside Europe wishing to benchmark themselves against European institutions. So far, some 700 HEIs have expressed an interest in joining U-Multirank.

“The selection of indicators is not transparent”

The diversity of HEI profiles is large, and so is the diversity of users' views on what is and what is not good performance. This does not easily produce consensus about a definitive set of criteria defining the best performance for all stakeholders. The only way to deal with this diversity is to take the normative position of a user-driven approach, accepting the subjective character of a ranking as a design principle and thereby empowering the user. The idea of involving the users of rankings in the processes of selecting the indicators and compiling the data is relatively new in the ranking world. We feel that the application of feedback loops with users leads to a higher level of usefulness for these users, while also improving data accessibility. Experience shows that stakeholders often have strong feelings about the relevance of indicators, and are eager to interpret the outcomes of rankings in the context of their personal ideas about quality in higher education and research.

“U-Multirank and U-Map are too complicated to understand”

Multidimensional rankings seem less attractive than mono-dimensional league tables using composite indicators, particularly to the general public. Simple league tables are striking and easily taken up by the media. Multidimensional rankings that address a variety of target groups may offer more elaborate information, but cannot be reduced to an overall list of winners and losers. Multidimensional rankings need to invest in IT-assisted presentation modes and communication processes, explaining to clients and stakeholders how the various outcomes can be interpreted. To be effective in these communication processes, producers of multidimensional rankings will have to analyse the decision-making processes of user groups (such as students, parents, institutional leaders, policy-makers, business leaders) and the information needs in these processes. These needs can only be revealed by intensive stakeholder dialogue and by making use of the most recent visualisation techniques in data presentation.

“U-Multirank is not a real ranking”

The user-driven approach to ranking presents another specific challenge. If a ranking is based on the user's selection of institutions and indicators, the ranking result is not a unique performance list as in existing rankings. In a user-driven approach, each user can produce his or her own 'personalised' ranking. The release of a new ranking outcome is not the publication of an updated list, but the integration of a data update in the ranking database, allowing a variety of users to produce a large number of their own personalised rankings. We nevertheless still call such a multidimensional, user-driven methodology a 'ranking', since it remains a tool to make vertical diversity transparent. Multidimensional ranking results also show high and low performances, and position institutions/programmes in the context of the performance of their peers and competitors.

Finally

We have argued here that a multidimensional approach to ranking is more attractive than the currently popular approaches. It allows a large variety of institutional functions and profiles to be included, thus paying attention to the horizontal diversity of institutional missions and profiles. It also opens up the possibility of comparing sets of institutions with similar missions and profiles, which appears to be more useful than ranking institutions that are very different and can hardly be compared.

Although rankings are often criticised - and usually rightly so - their impact is nevertheless large. Several categories of stakeholders are heavily influenced by ranking results, although they are not always willing to admit so publicly. The various impacts of the outcomes of rankings make it clear that there is sufficient reason to take rankings seriously and to try to improve their conceptual and methodological bases. To ensure this, U-Map and U-Multirank will have to be dynamic instruments that can respond to new conceptual developments in indicator construction, data collection systems and opportunities for performance visualisation and assessment. It is up to the user to determine whether the designers of U-Map and U-Multirank have been (and will be) successful in achieving this.

Endnotes

1. The recently updated U-Map web tool (http://new.u-map.org/) will have a 'members only' functionality that provides premium content, with significant functionality in the areas of the use of U-Map for institutional research and benchmarking, such as a facility to find similar institutions in the U-Map database ("institutions similar to …").

2. The EUMIDA project, funded by the European Commission, explored the feasibility of building a consistent and transparent European statistical infrastructure (a "European tertiary education register" or ETER) at the level of individual HEIs. This includes the development of a sustainable data infrastructure as well as the collection of data in close cooperation with Member States' national statistical offices. The EC has decided to go ahead with the establishment of an ETER and recently awarded a grant to a project consortium that will collaborate with U-Multirank on areas of data definitions and related issues.


References

CHEPS (2008). Mapping Diversity. Developing a European Classification of Higher Education Institutions. Enschede: CHEPS. Available from: http://www.u-map.eu/CHEPS_Mapping%20Diversity.pdf

CHEPS (2010). U-Map. The European Classification of Higher Education Institutions. Enschede: CHEPS. Available from: http://www.u-map.org/U-MAP_report.pdf

CHERPA Consortium (2011). Design and Testing the Feasibility of a Multidimensional Global University Ranking. Available from: http://ec.europa.eu/education/higher-education/doc/multirank_en.pdf

Dill, D.D. and Soo, M. (2005). Academic quality, league tables, and public policy. Higher Education 49, 495-533.

European Commission (EC) (2011). Supporting Growth and Jobs - An Agenda for the Modernisation of Europe's Higher Education Systems. Communication from the Commission to the European Parliament, COM(2011) 1063 final. Brussels: European Commission.

International Ranking Expert Group - IREG (2006). Berlin Principles on Ranking of Higher Education Institutions. Available at: http://www.ireg-observatory.org/

Van Vught, F.A. (ed.) (2009). Mapping the Higher Education Landscape: Towards a European Classification of Higher Education. Dordrecht: Springer.

Van Vught, F.A. (ed.) (2010). The European Classification of Higher Education Institutions. Enschede: CHEPS.

Van Vught, F.A. and Ziegele, F. (eds.) (2012). Multidimensional Ranking. The Design and Development of U-Multirank. Dordrecht: Springer.
