Nguyen Thanh Tung
Introduction
Vietnam's Law of Higher Education, enacted in January 2013, is the nation's first law dedicated to higher education (HE). The Law aims to reform and regulate Vietnam's HE system so that it can better produce the high-quality human capital Vietnam needs for its development towards a knowledge-based economy and society. One notable provision of the Law is the requirement for a national ranking of universities for the purposes of quality and reputation assessment, which would serve as input for the government's administration of HE and the allocation of the state budget[1]. This requirement seems to be the result both of the enthusiasm of universities and the public for a league table of Vietnam's own, and of the pressure of globalisation on the system to follow the rankings trend. There are plenty of news articles, seminars and workshops on why Vietnam needs a ranking and how to create and implement it. By contrast, there is very little serious literature or discourse analysing the pros and cons of ranking lists or league tables and the consequences for Vietnam's HE system of having one. And no one asks the most basic question: "Do we really need it?". This question is the foundation of this paper, which seeks to answer it by exploring the development of university ranking systems around the world to draw some essential lessons, then analysing the context and rationale behind the sudden enthusiasm for a national ranking in Vietnam, as well as the limitations and consequences of implementing such a ranking.
Rankings and League Tables in the Higher Education World
A short history
The first higher education ranking was compiled 30 years ago by an American magazine, US News and World Report, to help prospective students and their families compare colleges and universities in the United States and thus choose the right one for themselves. Since then it has become the most influential university and college ranking in the country and has been emulated by numerous national rankings all over the world. In 2003 came the first international ranking, the Academic Ranking of World Universities, compiled by Shanghai Jiao Tong University in China. Initially it was created to establish the international standing of Chinese universities, right after the launch of a government plan to create world-class universities. It was soon followed by the Times Higher Education World University Rankings and the QS World University Rankings. Today it is hard to keep count of how many university rankings there are in the world. They come in different shapes and colours, using different indicators and criteria, ranking everything from programs to departments to entire institutions, and devised by different actors: the media, public authorities or the institutions themselves.
Though different, they are all lists of institutions ranked against each other on a common set of indicators or criteria and tabulated in an ordinal table, usually in descending order from best to worst. They could be called the 'Billboard Hot 100 of higher education', but hundreds of times more complex and controversial. In general, they are produced by commercial publishing enterprises as 'consumer guides' and are warmly welcomed by the public. By contrast, they are reluctantly received by the institutions and heavily attacked by most academics. Most dangerously, they are now increasingly used by governments to make policy decisions (Attwood, 2009).
The good, the bad and the ugly
Federkeil (2009) has outlined some reasons for the existence and flourishing of rankings. The first is the ubiquitous rise of competition in HE, whether national, regional or international, as institutions vie fiercely for students, talent, funding and reputation. National and international rankings are both the results of and the fuel for this rising competition. Second, rankings emerged out of the needs of the market and of consumers faced with countless providers of HE. Consumers need tools that can inform them about the available products in a transparent and comparative way. Tofallis (2012) gives two examples for two specific types of 'consumers' of HE. Prospective students, before the appearance of rankings, would have had to contact each college or university to obtain its prospectus, which would mostly show only the institution's strengths and positive aspects. Consequently, prospective students would have only a collection of non-comparable and selective pieces of information, which hinders their choice-making process. Rankings or league tables offer them comprehensive and useful data in one convenient place and format. For employers, rankings and league tables are often a shortcut for evaluating the qualifications that applicants have acquired from different institutions. The third reason is the increasing autonomy of HEIs, which in turn requires more transparency and accountability from them; this transparency and accountability can, to some extent, be presented to the public and the authorities through rankings. In a nutshell, the selling point of rankings and league tables is the extensive aggregation of data in a simple form on which various stakeholders can base decisions. But the way they collect and analyse data to construct their lists is one of the most criticised areas.
Rankings merely gather what they are able to gather, usually from three types of sources: 1/ surveys of students, employers or opinion makers, who either have a one-sided view of the institution (in the case of the former) or little knowledge about it (in the case of the latter); 2/ third parties' data, which is rarely collected for the purpose of making rankings but rather as an administrative by-product; 3/ HEIs themselves, which 'all have a clear incentive to provide data which will benefit them' (Usher & Savino, 2007). These inherently inaccurate sources of data lead to a limited choice of indicators, mostly inputs rather than outputs, when 'it is what institutions do with those inputs that really matters for quality' (Brown, 2009). Furthermore, those indicators are heavily weighted towards the research capacities of HEIs, which favours research universities and leaves teaching-focused institutions at a disadvantage.
Also, because rankings 'count what is measured' rather than 'measure what counts' (HEFCE, 2008), 'vast differences exist between university league tables in terms of what they measure, how they measure it and how they implicitly define “quality”', which leads to inconsistency between rankings, except for a consensus on the best HEIs in a given country. This implies an ironic fact: 'institutional league tables don’t measure what their authors think they are measuring' (Usher & Savino, 2007). However different their methodologies, they surprisingly (or not) adopt the same 'one-size-fits-all' approach, in which an extremely wide range of HEIs are ranked against each other on the same chart with an overall score, for different purposes and for a diversity of stakeholders and audiences.
The consequences of these defects in methodology and approach are severe. Although supposed to support the choice-making process, rankings are in fact constrained by the limitations of their sources, so they cannot provide students with enough important information to make a meaningful choice (Bowden, 2000). Additionally, they do not cater for the diverse needs of various stakeholders (students alone differ in career interests, backgrounds and abilities). In their formation and selection of indicators, they adopt an impoverished definition of quality. They focus on inputs rather than outputs. They do not measure educational gain. They render 'higher education as a product to be consumed rather than an opportunity to be experienced' (Brown, 2009). And they reject 'quality as transformation' as Harvey & Green (1993) have defined it. They still apparently equate quality with reputation, but 'reputational data have a very poor reputation as a valid indicator of educational quality' (Gibbs, 2010).
To make things worse, this penchant for reputation widens the gap between HEIs. Since not only students but also academics, governments and other organisations are increasingly influenced by rankings and league tables (HEFCE, 2008), the tables reinforce the position of institutions already ranked high, as talent and funding continue to flow to them. It could be said that rankings create unfairness for small, teaching-focused or special/social-purpose HEIs. To compete in this unfair war, many HEIs concentrate on improving their scores on the indicators devised by the rankings instead of improving genuine educational quality. Universities even game the statistics to raise their standing, which leads to severe problems such as inequality of access when they attract already high-achieving students with financial aid while those from poor backgrounds are neglected.
In spite of their inherent shortcomings and problems, rankings and league tables continue to mushroom and grow more influential, because they are inevitable products of this marketised and globalised era. As Merisotis (2002) has argued, they are here to stay, so it is more imperative to improve them than merely to point out their defects and the predicaments they cause. The Centre for Higher Education Development (CHE) in Germany may have a solution with an alternative approach that allows users to create their own rankings according to their specific needs. The idea was taken forward in an initiative by the European Commission, U-Multirank, partly in order to tackle the under-representation of European HEIs in the top league tables. Unsurprisingly, it met resistance from leading institutions, especially in the UK, because of conflicts of interest (Redden, 2013). However, it is still in the experimental phase and has a long way to go before becoming an established ranking.
The rationale behind the need for a national ranking in Vietnam
As mentioned earlier, Vietnam is attempting to formulate its own national ranking. To understand why the country wants such a ranking, it is useful to understand the context of Vietnam's HE.
Vietnam's HE system is rapidly expanding in quantity. The number of HEIs increased dramatically from 9 in 1993 to 225 in 2005 and 441 in 2013, while the number of students grew from 162,000 to 1,380,000 and 2,200,000 respectively. The quality of HE, however, has not kept pace with this surge in quantity. HEIs in Vietnam have a low and worsening staff-student ratio (1:19 in 2002 and 1:27 in 2005). The quality of staff is also alarming: only around 47 percent of academic staff in 2005 held postgraduate qualifications, most at the master's level[2]. Furthermore, teaching curricula and methods are outdated, with most universities concentrating on teaching rather than research and on the undergraduate rather than the postgraduate level. All of this has resulted in the inability of Vietnamese HE to provide an educated and competent workforce, and in its irrelevance to the needs of the economy and society (Vallely & Wilkinson, 2008). At the same time, the rapid expansion puts a great strain on public funding, as 70% of public institutions' budgets are still funded by the State (The World Bank, 2008). But in a short-term-oriented culture[3], the most 'face-losing' embarrassment of all is the absence of Vietnamese HEIs from the top rankings.
With the introduction of a national ranking, the government hopes to 'hit ten pins with one ball', but the most targeted pin seems to be having at least one internationally recognised university.
Therefore, first of all, the upcoming ranking is expected to train Vietnamese HEIs before they go out to the bigger leagues. It would be an opportunity for them to assess their strengths and weaknesses against a set of indicators and criteria and to learn what to do to improve their standing in global leagues.
The government, backed by many scholars (V.U.N., 2010), also anticipates that these ranking exercises will enhance the quality of the system, because institutions will have to tackle problems such as high student-staff ratios, the lack of a research culture, under-qualified staff and insulation from international academic communities. Besides, a league table will create competition among HEIs, not only for students but also for talent and reputation, and consequently motivate them to develop plans and strategies. In fact, according to the Law of Higher Education, the government assesses institutions' educational quality and reputation based on their ranks[4].
For administrative purposes, the ranking of universities will inform the allocation of the state budget and of tasks and missions for public institutions, and the allocation of land and financial support for non-public ones. Interestingly, the degree of autonomy that an HEI obtains from the state also depends partly on its ranking result[5].
In addition, since the number of HEIs is already substantial and keeps rising, there is some hope that the ranking will help applicants make choices by providing useful and relevant information.
These are the government's and proponents' arguments for the urgent need for a national ranking. But the arguments are weak, and they may obscure a truth of which the proponents themselves may not be conscious: feeling the pressure of globalisation, and excited by a new fashion, Vietnam simply wants to jump on the bandwagon without considering whether the system is mature enough to adopt a ranking, or what consequences it may cause.
Is it the right time to have a national ranking in Vietnam?
There will be a university ranking in Vietnam eventually, as it is inevitable. However, it is still too early to devise and implement one, for the following reasons:
First, on the technical front, Vietnam has a chronic problem of inaccurate data for compiling any ranking of any type, be it something simple like a music chart or a movie box office, let alone something as complex as a college ranking. There are no national-scale surveys of HE. HEIs rarely conduct surveys themselves, nor do they have the habit of tracking their alumni. Third-party sources are rare as well. All one can get are some quantitative statistics at the most basic level. Moreover, since one of the government's objectives is to improve indicators such as the staff-student ratio or publication citations, the methodology can be expected to be problematic too, focusing mostly on inputs. When it comes to self-reported data, Vietnamese HEIs have an 'excellent' track record of manipulation. Worse still, the process of devising the national ranking is not open to professional and public discourse and does not involve stakeholders. Except for a few insiders, no one knows what is going on, let alone can contribute.
Second, a ranking does not matter when HEIs have low autonomy. Vietnam's HEIs are still tightly and centrally regulated by the state. What is the point, then, of having a development plan or strategy, innovating the curriculum or seeking international recognition, and how is it even possible, when everything is dictated by the government, from the programs and curricula to the number of students admitted and the appointment of rectors (London, 2010)? With low autonomy comes low accountability. Vietnam's HEIs are not transparent or accountable to external stakeholders other than the government. Industry, employers and even students do not know about universities' plans and strategies, let alone have a say in them. Even the government cannot control some aspects of HE, such as quality, because of its input-based control and a funding mechanism that is not related to performance or quality in any meaningful or competitive way (Vallely & Wilkinson, 2008).
Third, a ranking will widen the gap between HEIs, given that the law states that funding and autonomy are granted according to ranking results. With fewer than 10 universities considered multidisciplinary and research-focused (and even those only to a limited extent), it will be a vicious circle for most Vietnamese HEIs. A ranking matters only when there are enough players of comparable status and a mechanism giving everyone the opportunity to improve their rank. Otherwise, it is just a game for a few already established institutions.
Fourth, regarding quality improvement: if the government wants to improve the quality of HE, this is the wrong way to do it, especially given that Vietnamese HEIs do not possess a quality culture. Implementing a national ranking will encourage them to game the statistics rather than enhance quality, and then a quality culture may never develop in Vietnam's HE at all. Instead, the government should immediately implement a quality assurance system, something that has already been delayed for too long (Madden, 2013).
Fifth, the ranking is expected to provide prospective students with useful information for making choices. The ideal process would look like this: a student, well aware of what subject he or she would like to pursue and what he or she is capable of, turns to the league tables to identify some options, then examines those options by contacting the HEIs. But the problem is that Vietnamese students tend to choose a college based only on how high their scores are likely to be in the national university entrance exam, rather than on their passion and ability. Meanwhile, Vietnamese HEIs, particularly public ones, do not carry out communication activities aimed at prospective students and do not provide enough information on the institution, its programs and student life. The ranking will therefore aggravate the situation and mislead students further.
Conclusion
In a nutshell, Vietnam does not yet have the preconditions for carrying out a national ranking: sufficient quantity and quality of data sources, HEIs' autonomy and accountability, and a quality culture. As can be learned from other systems that have rankings and league tables, trying to implement one will put the quality culture in jeopardy, create inequality and unfairness, and stimulate game-playing or even corruption. However, since the coming of rankings and league tables is unavoidable, it is recommended that Vietnam first improve its governance of HE, encourage quality assurance and enhancement practices (Pham, 2004), and build a platform for career orientation and information on higher study opportunities so that prospective students can make meaningful choices. In the meantime, it should monitor the development of ranking trends in the world and learn from others' best practices and mistakes in order eventually to devise and undertake a good national ranking system.
References
Attwood, R. (2009). Redrawing ranking rules for clarity, reliability and sense. Times Higher Education.
Bowden, R. (2000). Fantasy higher education: University and college league tables. Quality in Higher Education, 6(1), 41–60.
Brown, R. (2009). Quality Assurance and the Market. The Future of Quality Assurance, Research, Policy and Practice in Higher Education Series, 13–28.
Federkeil, G. (2009). Rankings and Quality—A European Perspective. In T. W. Bigalke, & D. E. Neubauer, Higher Education in Asia/Pacific: quality and the public good (pp. 63-78). Palgrave Macmillan.
Gibbs, G. (2010). Dimensions of Quality. The Higher Education Academy.
Harvey, L., & Green, D. (1993). Defining quality. Assessment and Evaluation in Higher Education, 18(1), 9–34.
HEFCE. (2008). Counting what is measured or measuring what counts? Centre for Higher Education Research and Information (CHERI), Open University, and Hobsons Research.
Hofstede, G. J., & Minkov, M. (2010). Cultures and Organizations: Software of the Mind, 3rd ed. New York: McGraw-Hill.
London, J. (2010). Governance and the Governance of Higher Education in Vietnam. In K.-H. M. (ed.), The search for new governance of higher education in Asia (pp. 193-214). Palgrave Macmillan.
Madden, M. (2013). Walking the line: quality assurance policy development and implementation in Việt Nam. Higher Education.
Merisotis, J. (2002). On the Ranking of Higher Education Institutions. Higher Education in Europe, 27, 361–363.
Pham, T. N. (2004). Organization and governance reform for strengthening university autonomy and accountability in Vietnam. International seminar on Organization Reforms and University Governance: Autonomy and Accountability (pp. 81–93). Hiroshima University.
Redden, E. (2013). Different Kind of Ranking? Inside Higher Ed, http://www.insidehighered.com/news/2013/07/25/controversial-u-multirank-initiative-aspires-be-different-kind-international-ranking.
The World Bank. (2008). Vietnam: Higher Education and Skills for Growth. Human Development Department, East Asia and Pacific Region.
Tofallis, C. (2012). A different approach to university rankings. Higher Education, 63(1).
Usher, A., & Savino, M. (2007). A global survey of university ranking and league tables. Higher Education in Europe, 32(1), 5–15.
V.U.N. (2010). Evaluation and Ranking of Vietnamese Universities and Colleges. Ho Chi Minh: www.uef.edu.vn/resources/static/khao_thi/ky_yeu_hoi_thao.pdf.
Vallely, T. J., & Wilkinson, B. (2008). Vietnamese Higher Education: Crisis and Response. Higher Education Task Force, Asia Programs unit of the Kennedy School’s Ash Institute.
[1] Article 9 - Classification of HEIs, Vietnam's Law of Higher Education
[2] Source: MOET University Surveys 2002, 2005, 2013
[3] In short term oriented societies, values promoted are related to the past and the present, including steadiness, respect for tradition, preservation of one's face, reciprocation and fulfilling social obligations. (Hofstede & Minkov, 2010)
[4] Article 9 - Classification of HEIs, Vietnam's Law of Higher Education
[5] Article 32 - Autonomy of HEIs, Vietnam's Law of Higher Education