
Quality Management in Higher Education

Are ranking systems quality drivers in higher education?

Simona Calugareanu

 

Introduction

In recent years, higher education has received growing attention all over the world as its important role in the development of the knowledge society has been recognized. Governments in different countries are calling for better evidence and greater transparency of academic processes, establishing national and international strategies and guidelines for assuring quality in an increasingly internationalized and diversified system. Among the instruments set up for accountability and for measuring quality in higher education, ranking systems have attracted particular popularity and attention. They have been developed with the aim of bringing clarity and transparency to such complex higher education systems, from an external and comparative perspective at both national and international levels. This paper stresses the impact that ranking systems have on higher education and seeks to establish whether this impact is really a driver of quality.

History and development of ranking systems

Society today is increasingly characterized by important phenomena such as globalization, internationalization, a growing market orientation, and competition in all areas, and higher education has not been left aside from these changes and challenges. The massification of universities on almost all continents, increased competition between universities at national and international levels for students, qualified staff, funding, and reputation, an expansion of diversity in the field creating a rich variety of courses and programs, and an international trend towards greater autonomy and self-governance for higher education institutions have created the need for greater accountability and quality assessment. This need is increasingly discussed both in the academic world and at the level of higher education policy making, as governments recognize higher education and academic research as a "vital engine for economic growth in the battle for world class excellence" (OECD, 2007, p. 88). In this context, students, employers, governments, and universities themselves have become increasingly interested in knowing where a particular institution stands in comparison with others around the globe.

Many instruments for measuring quality in higher education have been developed over time: peer review, accreditation systems, new degree systems (such as the Bologna system in Europe) based on specific standards and guidelines (the Standards and Guidelines for Quality Assurance in the European Higher Education Area - ESG), and tools imported from the business world for greater accountability, such as benchmarking. In recent years, however, ever greater importance has been attributed to ranking systems. These are instruments related to quality assurance, used to bring more clarity and transparency "in what one might be tempted to call the university jungle" (Federkeil, 2008, p. 222). Rankings take the form of "league tables" that hierarchically categorize universities, programs, teaching activities, and research according to specific performance output indicators, on a one-dimensional ordinal list running from best to worst. Their purpose is to orient specific target groups - prospective and current students, academic and non-academic staff, university management, and governments - in assessing strengths and weaknesses in order to further improve their systems. Rankings provide an external assessment of institutions, are not intended as an instrument for internal quality assurance, and can cover a whole system of institutions at both national and international levels.

Although the ranking of universities grew in importance worldwide in the last decade, the practice dates back to around 1900, with the publication in England of "Where we get our best men" (UNESCO, 2013, p. 7), which provided a list of universities ranked by the number of successful alumni. After a long period of disinterest, the publication of "America's Best Colleges" by the US News and World Report in 1983 made information about undergraduate programs in America's higher education institutions public, and a decade later, in 1993, the "Times Good University Guide" in the United Kingdom prompted a debate about the positions of institutions in the guide. Other countries followed this practice, and in a short while various league tables and rankings appeared around the world, mostly with the institutional marketing purpose of giving information to customers. A decade later, in 2003, ranking achieved a great breakthrough at the international level with the release of the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University in China, which aimed to find out the gap between Chinese universities and world-class ones by ranking the world's universities by their academic or research performance (Liu, Cheng, 2005, p. 1); it was followed a year later by the Times Higher Education World University Rankings. Other important ranking systems have since been developed, such as the World's Best Universities Ranking (US), the Global Universities Ranking, the European Multidimensional University Ranking System (U-Multirank), and research-only rankings such as the Performance Ranking of Scientific Papers for World Universities.

Rankings are meant to reduce the work of an entire institution to numerical indicators through a three-stage process: collecting data on the indicators, scoring the data, and combining the scores, according to their weights, into a final score and position, in order to give a particular target group a clear picture. The range of indicators used to compare institutions can vary significantly in number and in areas of activity - resources for education and research, results of education programs and research, types of institutional output, or academic reputation - and the choice of indicators and the weight given to each make a very big difference in the final output. These indicators are intended to reflect the quality of the institution in the chosen area of measurement. Their positive contribution is recognized as simplifying and clarifying the higher education picture for students and parties outside the sector, providing incentives for improving quality in higher education and research, and giving free publicity to universities. However, different ranking systems use very different indicators for this picture, their number worldwide running well into the hundreds (Usher, 2009, p. 25). From this point of view, rankings have generated a great deal of controversy and debate, especially the argument that the results strongly depend on the choice of indicators and the importance assigned to them, because quality itself is difficult to measure. Rankings have been accused of not reflecting the diversity of academic environments, of covering only some of the university missions, and of adopting methodologies which address only the world's top research universities (giving results for only 700-1,000 universities, a small proportion of the approximately 17,000 universities worldwide (EUA, 2011, p. 68)); by reflecting university research performance more accurately than teaching, they provide a misleading picture of quality in higher education.

Due to this strong debate, it became clear that some common principles were needed for producers of rankings to follow. Several initiatives were undertaken. The International Ranking Expert Group (IREG) came up in 2006 with a set of guidelines and goals for quality assurance, "The Berlin Principles on Ranking of Higher Education Institutions", and, starting in 2010, elaborated a framework procedure for undertaking audits of rankings in line with these principles. Another initiative was proposed by the European Commission, which established a consortium of institutions from four countries to design a new multi-level ranking system. An interesting initiative developed by the OECD, the Assessment of Higher Education Learning Outcomes (AHELO), aims to measure various types of learning outcomes and to examine the criteria that influence those outcomes (Sadlak, 2010, p. 6).

Impact of ranking systems

Ranking systems have gained such prominence because their figures are easy for all interested publics to access and understand; they attract more and more media attention, making it increasingly difficult for universities to ignore them. Their continuously growing impact strongly influences the behavior of universities: presence and position in the ranking tables raise an institution's profile and reputation, leading universities to invest enormous effort in continuously searching for ways to improve their position, while institutions that do not yet appear in the ranking tables feel strong pressure to be included.

The existence of rankings encourages universities to improve their performance. Many higher education institutions make use of ranking outcomes in setting targets and strategic planning, and in benchmarking to identify weaknesses, resolve internal problems, and carry out structural reorganization in order to achieve a higher place. Rankings also drive universities to be more effective and innovative, as a study by the Higher Education Funding Council for England (HEFCE) found: through improved evidence-based decision-making focused on better documentation and pursuit of success, new ways of capturing and reporting academic success and excellence, improved institutional practices, identification and replication of model programs, and increased institutional collaboration (Group of Eight, 2012, p. 36).

When used appropriately, rankings are a great help in identifying and spreading good practices, but many of their effects are negative. In trying to improve their position, universities are tempted to enhance their performance only in those areas that can be measured by the ranking indicators, which can lead an institution away from its own mission and distinctive values and reduce diversity within the higher education system. An increased focus on research and individual reputation rather than on teaching and learning, for example, even though some institutions have few areas of serious research, can lead to an imbalance in the academic field that can develop into a mainly vertically differentiated system without much institutional diversity (EUA, 2013, p. 22). Rankings also have a strong impact on the management of higher education institutions: in many cases the use of and demand for resources is justified by performance in the rankings, and even the salaries or positions of top university officials are linked to their institution's place in the ranking tables. In the desire to improve their position, institutions also base decisions on indicators that are easy to influence, such as the institution's branding, hiring Nobel Prize winners onto the staff, or choosing English for the institution's publications in specific journals counted in the international bibliometric databases (Vught, Ziegele, 2011, p. 33).

Even though a study conducted by the Institute for Higher Education Policy suggested that rankings foster collaboration in research partnerships, student and staff exchange programs, and institutional and faculty alliances, representing a tool for identifying institutions with which to collaborate (Group of Eight, 2012, p. 36), they are also a driver of competition between institutions to the detriment of collaboration. The "reputation race" (Vught, Ziegele, 2011, p. 34) has costly implications that do not always lead to better education and research: it concentrates resources in particular areas and neglects others, which can harm performance in the core activities. A new stratified system is taking shape as an effect of this race for reputation, with the emergence of the "world-class universities" concept encompassing the most highly ranked institutions. Thanks to their good positions, leading universities find it easier to secure funding and partners and to attract students; student mobility is on a seemingly unstoppable rise, in recent years of almost 10% towards the top 100 universities (QS World University Rankings, 2012, p. 4). University rankings also help students choose a university according to their needs, in their home country or abroad, and an increase in student demand and enrolment after a university obtains a better position in student-oriented rankings has been identified as a ranking effect, even though not all students make use of these rankings in the same way.

Global rankings also have an impact at the governmental level, reflected mostly in the desire of nations across the globe to have "world-class universities" for their prestige and as "an engine for the knowledge economy" (Vught, Ziegele, 2011, p. 33). As a consequence, new policies are being adopted: public funding is increasingly allocated on performance-based criteria and is sometimes directed towards the most highly ranked universities in the country; scholarships for national students are targeted more towards highly ranked international universities; and targets for raising national performance are set on a better-informed data basis documenting student and institutional success.

Conclusions

University rankings have become widely popular around the world. Despite their controversial nature, they are here to stay because of the increasing need for transparency in such a complex system. Ranking is a tool imported from the business world and used to measure quality in higher education; it takes the form of league tables that hierarchically categorize universities, programs, teaching activities, and research according to specific performance output indicators, on a one-dimensional ordinal list running from best to worst, in order to orient specific target groups. Their results are taken very seriously by the institutions ranked, which seek to address the weaknesses identified and find strategies to ascend to a higher position in the ranking table, even though this can prejudice other areas of the institution; in this sense the practice is classified as a dysfunctional consequence of rankings, as is the way rankings can come to substitute for institutional quality assurance. Rankings represent starting points that guide institutions in analyzing their strengths and weaknesses compared with their competitors. They are not based on a theoretically grounded concept of quality but develop a specific set of indicators according to their aims and target groups using available data; that is why they are regarded more as accountability tools, used to provide a picture of quality or excellence in higher education institutions.

The impact of rankings is not limited to the institutional level; it extends to the student level as well, by guiding students in finding a university according to their needs. As a side effect, rankings favor student mobility towards the most highly ranked universities, thereby increasing competition between institutions for various resources. Rankings are also taken seriously at the governmental level, where they influence the adoption of new policies, strategies for allocating funding according to performance-based criteria, and the measurement of institutional success, which has become important for setting targets to raise national performance and prestige.

Rankings can be quality drivers when used appropriately and given the appropriate weight. Higher education is a very complex system whose quality is challenging to measure; that is why rankings have generated great controversy around the indicators used and the importance assigned to them, being accused of not reflecting the entire diversity of the academic environment but of focusing on and promoting specific areas such as research, to the detriment of teaching and learning, thereby disfavoring an important part of universities' work. In order to be useful for quality assurance, rankings must find a balance according to their measuring targets: the data collected must be broad enough to allow the analysis of different aspects of performance, and it should refer to single specific fields, disciplines, or programs, so that institutions can clearly see in which areas improvement is needed.

References:

Altbach, Philip G., (2007), International Higher Education: Reflections on Policy and Practice, Massachusetts: Center for International Higher Education

EUA, (2011), Global University Rankings and Their Impact, Brussels: European University Association

EUA, (2013), Global University Rankings and Their Impact – Report II, Brussels: European University Association

Federkeil, Gero, (2008), Rankings and Quality Assurance in Higher Education, Higher Education in Europe, Vol. 33, No. 2/3, p. 219-231

Group of Eight, (2012), World University Rankings: ambiguous signals, Turner: The Group of Eight House

IHEP, (2007), College and University Ranking Systems, Global Perspective and American Challenges, Washington DC: Institute for Higher Education Policy

IREG, (2011), IREG Ranking Audit Manual, Brussels: IREG Observatory on Academic Ranking and Excellence

Liu, N.C., Cheng, Y., (2005), Academic Ranking of World Universities - Methodologies and Problems, Higher Education in Europe, Vol. 30, No. 2, p. 1-14

QS World University Rankings, (2012), QS World University Rankings "Trusted by students since 2004", 2012 Report

Sadlak, Jan, (2010), Ranking in Higher Education: Its Place and Impact, The Europa World of Learning, p. 1-11

UNESCO, (2013), Rankings and Accountability in Higher Education, Uses and Misuses, Paris: UNESCO Publishing

Usher, Alex, (2009), University Rankings 2.0: New Frontiers in Institutional Comparisons, Australian Universities' Review, Vol. 51, No. 2, p. 87-90

Vught, Frans van, Ziegele, Frank, (2011), U-Multirank: Design and Testing the Feasibility of a Multidimensional Global University Ranking - Final Report, Consortium for Higher Education and Research Performance Assessment (CHERPA Network)
