
Quality Management E-book

University Ranking: ‘A True Quality Evaluation Tool or a Misleading Indicator?’

Rediet Abebe

Abstract

This essay addresses the controversies surrounding university ranking systems. Arguments supporting the capacity of rankings to operationalize and appraise quality through various indicators are presented alongside criticisms that attack their methodological biases and hold that their unwanted consequences outweigh their benefits. The essay then identifies a compromise between these extremes by outlining the improvements rankings need if they are to serve higher education. Finally, it concludes by stressing that readers should by no means take ranking results for granted or rely on them alone; rather, they should remain skeptical and treat rankings as one among several tools for assessing higher education quality. The challenges inherent in any attempt to measure quality, as well as the likelihood that ranking practices will continue to proliferate despite criticism and shortcomings, are also indicated.

Introduction

Nowadays, different forms of university rankings have been gaining importance around the world. The initial attempts emerged in the US in the early 1980s as a “consumer guide” targeting domestic academic leaders, students, rating agencies, policy makers and other stakeholders (Liu and Cheng, 2005). It took a little more than two decades before Shanghai Jiao Tong University produced the first truly global university ranking (Rauhvargers, 2011). Today, there are various types of rankings distinguished by purpose, the parameters measured, the presentation of results, and the intended impact. Because economic success and social progress increasingly depend on knowledge and innovation, the importance of rankings seems set to keep rising.

However, rankings have always been one of the most disputed issues in higher education in the media, political circles and the academic community. Ranking bodies often argue that rankings are ‘objective’ measures that establish a correspondence between the state of ‘quality’ and a university’s place in the ranking tables. Critics, however, claim that measuring quality is difficult, if not impossible, and that ranking results, especially in the case of global league tables, depend greatly on which indicators are chosen and what weights are assigned to them. The non-neutrality of the very concept of quality, and the consequent claims of accurate quality or excellence appraisal based on more or less proxy gauging instruments, are also counted among the argued dark sides of the phenomenon.

Nevertheless, students, parents, the media, politicians, other stakeholders and society at large often like the hierarchical positioning of universities in league tables and thus firmly believe that universities positioned higher are essentially ‘better’ than their counterparts below. Moreover, growing interest in ranking results has been shaping how universities function: to appear ‘successful’, it has become increasingly indispensable for universities to achieve improvements particularly in those aspects that count in rankings.

This essay, therefore, presents a detailed discussion of this controversy, then sheds light on a possible compromise and offers concluding remarks. Relevant literature and internet sources are consulted in an effort to present a clear picture of the arguments beyond reasonable doubt.

University Ranking as a True Quality Evaluation Tool

The growing relevance of knowledge has brought changes with regard to what is generally referred to as a “quality agenda”. Controlling compliance with quality standards is passing out of the hands of academia-controlled self-regulation into legal and institutional arrangements. In this regard, rankings have played the role of a key differentiator between legitimate institutions and programs. The quality concept is operationalized with the help of quality indicators and proxies, including alumni and staff winning major international awards, highly cited researchers in major research fields, articles published in selected top journals, articles indexed by major citation indexes, and per capita performance. The data used to produce academic rankings is collected from various sources. Surveys and publicly available information capture the opinions of stakeholders, government agencies and other bodies involved in higher education and research, while data collected from bibliometric/scientometric databases, internet sources, and higher education institutions (HEIs) themselves help capture a picture of institutional governance and research performance. Together, these elements enable rankings to appraise quality and to tell where it is found and where it is not. Besides indicating significant aspects of quality, the development and use of quantified information through rankings provides a number of benefits of international significance, presented below.

Shared Criteria on Quality Appraisal

The increasing acceptance and popularity of rankings has encouraged the adoption of agreed definitions and compliance with agreed international standards in a more global context. This extends to the aspects on which data are collected to appraise the quality of higher education. It has also paved the way for collaborative learning and the sharing of good practices.

Informational Value and Comparison

Rankings provide the public with information and evidence on the standing of HEIs for individual or group decision making. They thereby enable countries to gain better insight into the performance of their own systems by comparing themselves with others, and allow institutions to benchmark their own performance.

Competition and Improvement

General rankings foster international competition and hence stimulate the evolution of centers of excellence by serving as leverage to do higher education in more effective and innovative ways (Green Paper, 2008; Sadlak, 2012). They also serve to prompt improvement in teaching/learning and research practices.

Policy Making and Allocation of Funds

When taking decisions on the allocation of resources or the structure of higher education systems, policy makers at the national and regional levels increasingly take ranking results into consideration (Rauhvargers, 2011). Rankings also initiate broader institutional and international discussions about what constitutes success and how it can be better documented and reported, so that policies can be made on the basis of verifiable evidence.


University Ranking as a Misleading Indicator

Despite their popularity for measuring quality and presenting results in simple formats, many have made strong arguments questioning rankings, pointing out several reasons why they may instead be misleading quality appraisal tools. An overview of the major criticisms follows.

Methodological Problems: How Far Can We Trust Indicators?

Prior to combining the individual indicators into a composite score, each indicator is subjected to a mathematical operation to make it dimensionless, in effect creating a rather indirect link to what it is supposed to measure. In the process, based on their subjective judgments about which indicators matter most, ranking providers assign varying weights to the indicators, reflecting their own version of the quality conception (Rauhvargers, 2011). In addition, the fact that most indicators are proxies for aspects of true quality (e.g. measuring the quality of education by the number of alumni winning Nobel Prizes in the sciences) results in misleading and often non-objective rankings, with scores that are usually not the indicator values themselves but something else.
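To illustrate how decisive the subjective weighting step alone can be, consider the minimal sketch below. The institutions, indicator names and figures are entirely hypothetical, and max-based normalization is only one common way of making indicators dimensionless; this is an illustration of the criticism above, not any ranking provider's actual method.

```python
# Hypothetical illustration: the same raw data, two weighting schemes,
# two opposite ranking orders.

raw = {
    "University A": {"citations": 40000, "awards": 2, "per_capita": 12.0},
    "University B": {"citations": 25000, "awards": 6, "per_capita": 18.0},
}

def normalize(values):
    # Scale an indicator to [0, 1] by dividing by its maximum value,
    # one common way of making heterogeneous indicators dimensionless.
    top = max(values.values())
    return {name: v / top for name, v in values.items()}

def composite(weights):
    # Weighted sum of normalized indicators: the usual composite score.
    per_indicator = {k: normalize({u: vals[k] for u, vals in raw.items()})
                     for k in weights}
    return {u: round(sum(w * per_indicator[k][u] for k, w in weights.items()), 3)
            for u in raw}

# A citation-heavy weighting puts University A first:
print(composite({"citations": 0.6, "awards": 0.2, "per_capita": 0.2}))
# {'University A': 0.8, 'University B': 0.775}

# An award-heavy weighting puts University B first, from identical data:
print(composite({"citations": 0.2, "awards": 0.6, "per_capita": 0.2}))
# {'University A': 0.533, 'University B': 0.925}
```

Both weighting schemes are internally consistent, yet they order the same two institutions in opposite ways. The reversal comes entirely from the weights, which is precisely why critics see composite scores as reflecting the provider's conception of quality rather than quality itself.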

Biases: Mission, Field, Language, Publication, Size, Scope, Peer Review

Mission

According to Montesinos, Carot, Martinez, and Mora (2008), although universities have three fundamental missions (teaching/learning, research, and the transfer of knowledge to society), rankings disproportionately address the second mission, only partially cover the first, and hardly ever deal with the third. Such approaches are therefore bound to plant seeds of skepticism in people's minds, as they fail to portray an accurate and comprehensive picture of the quality of a particular higher education institution.

Field and Scope

Most rankings are said to emphasize institutions that specialize in the natural sciences and medicine over those that are centers of the social sciences, humanities and arts. Comparing quality between disciplines is fraught with difficulty, since measures of quality are better aligned with some disciplinary pedagogical practices than with others (Gibbs, 2010). Moreover, universities worldwide do not have an equal chance of being included, as most league tables rank only a few top institutions. Whether ranking systems can do justice to the diversity of institutions is therefore doubtful.

Language, Publication, and Size

With English as the international language of academia, global rankings favor English-speaking scholars, publications and universities over non-English-speaking ones. This has been the hallmark of bibliometric indicators. Worse still, large institutions and older publications are better represented in rankings than small institutions, recent publications, and books.

Peer Review

Peer reviews, in the form of reputation surveys, are not free from bias either: they often suffer from high non-response rates; they are restricted to pre-prepared lists that omit many universities and even whole countries; and the established reputation of institutions can influence the opinions of ‘peers’.

Transparency Issues

Criticism in this regard centers on the lack of sufficient information about the identity of the ranking provider, the aims of a particular ranking, and its target groups (Rauhvargers, 2011). Descriptions of methodologies are likewise so simplified that it is difficult to understand what is actually being measured, how indicator values are calculated from the raw data, and how the final score is calculated from the indicator results.

Improving Quality or Improving Ranking Positions?

In the effort to improve their ranking position, universities come under strong internal and external temptation to boost performance particularly on those aspects appraised by ranking indicators. The result is partial, surface-level improvement on the part of HEIs chasing ranking results.

Disease to Others

As discussed earlier, rankings mostly evaluate institutions by their research standing and, as a result, identify only a few hundred ‘best’ universities in the world. This, however, has created a “social disease”, a need to be ranked, among the many well-functioning ‘normal’ universities that simply do their job (Sadlak, 2012). In the process, judging higher education by the principle of quality assurance and the ‘fitness for purpose’ principle has gone missing.

Compromise

Having presented the central arguments on both the bright and dark sides of university rankings, the essay now looks for a compromise in an effort to make a more realistic case. By now it is clear that, despite serious criticisms of rankings’ capability as genuine quality evaluation tools, they are at the same time increasingly popular systems. Cognizant of this, the focus should now be on how to correct their mistakes and utilize them properly.
In this regard, challenging tasks of improvement lie ahead. Ranking providers should recognize that, although objectives may be shared, no shared operational definition of ‘quality’ exists. Terms such as excellence, reputation, recognition, fame and brand, used to denote higher education quality, should therefore be abandoned as misleading concepts. The Harvey and Green (1993) observation that quality is relative and often means different things to different people supports this position. Sadlak (2012) likewise stressed that “collection of comparable data is feasible but it must be conceptually well-anchored and be based on mutual trust and shared objectives.”
The other point to consider seriously is the need to improve ranking methodology. The Green Paper explains this concern as follows:
“Well-chosen metrics and indicators can provide effective tools for decision-making, based on each institution’s strategic goals rather than a global conception of what an excellent university should be. If rankings are to be used, then they need to be within coherent sets of comparable universities, and choosing baskets of metrics, from the full set, that accurately reflect the nature of their engagements with society” (2008: 20).

To capture a more comprehensive view of higher education quality, fair assessment of HEIs regardless of their specialization, mission, size and language area is crucial. Sufficient information on ranking purposes and goals, the design and weighting of indicators, the collection and processing of data, and the presentation of ranking results helps improve transparency.
Furthermore, regular evaluation of the ranking agencies themselves helps ensure the adequacy of their resources and the fairness and relevance of their impact on institutions (Vardar, 2010). Finally, progress in all these aspects enhances rankings’ capability to initiate and promote genuine quality improvement on the part of HEIs, not just improvement of ranking positions.

Conclusion

University ranking is a new phenomenon that is storming the realm of higher education quality. Despite their different types, rankings generally indicate quality and excellence by arranging institutions in a particular order. This essay has presented the important controversies surrounding them. On the one hand, it showed that rankings operationalize the concept of quality and use a range of indicators intended to enable genuine assessment, and that they provide a number of additional benefits. On the other hand, many attack rankings as non-objective, methodologically flawed, non-comprehensive and lacking in transparency, which in effect makes them misleading. Acknowledging the relevance of both extremes, the essay argued for the possibility of compromise, outlining the need to improve ranking systems for the benefit of higher education.
People should therefore be cautious about any ranking and should not rely on rankings alone. Instead, they should use rankings simply as one kind of reference and form their own judgment about ranking results in light of the ranking methodologies. Rankings may be a good measure of quality as exception or excellence; however, they overlook other crucial conceptions of quality, including ‘fitness for purpose’, process quality, and transformation. Added to this is skepticism about whether the quantitative methods of ranking can effectively capture aspects that can only be described qualitatively.
In the end, it is important always to bear in mind that measuring quality is a challenging task, as there is no commonly agreed conception of the subject. Trends indicate that the number of rankings is likely to keep growing and to become more specialized, despite mounting criticism.

References

Gibbs, Graham (2010), Dimensions of Quality, The Higher Education Academy, Helsington, UK;

Harvey, Lee and Green, Diana (1993), ‘Defining Quality’, Assessment and Evaluation in Higher Education 18(1), pp. 9–34;

Liu, N.C. and Cheng, Y. (2005), Academic Ranking of World Universities – Methodologies and Problems, Institute of Higher Education, Shanghai Jiao Tong University, Shanghai 200240, China;

Montesinos, Patricio, Carot, Jose Miguel, Martinez, Juan-Miguel and Mora, Francisco (2008), 'Third Mission Ranking for World Class Universities: Beyond Teaching and Research', Higher Education in Europe, 33(2), pp. 259–271;

Rauhvargers, Andrejs (2011), Global University Rankings And Their Impact, The European University Association (EUA) Report on Rankings 2011;

Sadlak, Jan (2012), Educational Rankings and Measuring Methodologies, a text based on a presentation at the Universities’ Third Mission: Indicators and Good Practices Conference, 2–3 February 2012, Dublin, Ireland;

Vardar, Öktem (2010), Quality Management in Higher Education, in: Huisman, J. and Attila, P. (eds.), Higher Education Management and Development: Compendium for Managers, pp. 175–193;

Green Paper (2008): Fostering and Measuring ‘Third Mission’ in Higher Education Institutions, a consortium project of Universitat Politècnica de València, Spain; University of Helsinki, Finland; Donau-Universität Krems, Austria; University of Maribor, Slovenia; Universidade do Porto, Portugal; Istituto Superiore Mario Boella, Italy; Dublin Institute of Technology, Ireland; and Universidad de León, Spain. http://www.e3mproject.eu/docs/Green%20paper-p.pdf (30.10.2012).

 
