
Quality Management in Higher Education

Is student feedback the cornerstone of quality assurance?

Natalie Nestorowicz

 

Why is quality important for universities?

Over the last five decades the Higher Education sector has undergone dramatic changes across Europe. These changes are due to several factors, the most prominent of which is certainly the massification of university education in the face of decreasing public expenditure for Higher Education Institutions (HEIs). This development led to a crisis of capacity at universities. At the same time, the mutual trust between universities, the state and the public eroded gradually, which in turn resulted in a crisis of legitimacy (Fried 2006: 84). This means that the quality of universities is no longer taken for granted. Instead, universities are obliged to provide proof of their quality by making their operating activities transparent and accountable to their stakeholders and the public. In addition, a university's compliance with state-imposed quality criteria is linked to the amount of state funding it receives. The pressure from students is also increasing. As student numbers grow from year to year, graduates find themselves on a very competitive labour market. This competitive pressure raises students' expectations: they expect universities to equip them with the skills necessary to "survive" on the market, under the motto "survival of the fittest". The two sides of the coin are thus increasing transparency and accountability through control instruments on the one hand, and rising expectations of students and the public in general on the other (Lomas 2007: 32).

The university's efforts to assure and improve its quality must be seen against this background of changed conditions. These efforts were not exclusively "homemade", however, but also found their impetus at the supranational level, in the shape of the Bologna Process.

As early as 1999, the Bologna Declaration placed an emphasis on the quality issue by encouraging European universities to develop comparable quality criteria and methodologies in order to establish a common framework of reference for quality assurance. On the initiative of the Bologna Process, most European countries introduced quality assurance agencies, the "watchdogs" over universities that conduct regular external reviews of HEIs. The national agencies in turn are subject to umbrella organizations such as the European Association for Quality Assurance in Higher Education (ENQA), which supervises the development of a European quality agenda by defining common standards for quality assurance. The national quality agencies have to comply with these principles, which are laid down in the European Standards and Guidelines for Quality Assurance (ESG). The ESG, which were adopted by the European Ministers of Education at the Bergen meeting in 2005, are divided into three parts: the first refers to internal processes of quality assurance within HEIs, the second to external quality assurance and the third to external quality assurance agencies (http://www.enqa.eu).

Austria, as a signatory state of the Bologna Declaration, has also implemented these principles of quality assurance. In 2003 the Austrian Agency for Quality Assurance (AQA) was launched; it is responsible for external quality assurance through audits and the accreditation of HEIs in accordance with the ESG (http://www.aq.ac.at). At the internal level of quality assurance, the University of Vienna, for instance, established a unit dedicated to quality assurance in 2002. In alignment with the principles of the ESG, it is responsible for the periodic evaluation of faculties, performance evaluation of staff, course evaluation by students, graduate surveys and analyses of career paths, etc. (http://www.qs.univie.ac.at). One of the instruments the University uses for assuring its quality is Student Feedback. Student Feedback has become an important internal quality assurance tool in recent years and is often referred to as the cornerstone of quality assurance systems (Harvey & Williams 2010: 98).

Student Feedback - an instrument for quality assurance

Student involvement, in particular feedback from students on their university experience, has moved to the forefront of quality assurance processes throughout Europe (Nair & Mertova 2011). The Quality Assurance Agency for Higher Education (QAA) in the UK, a forerunner among European quality assurance agencies, claims: "Student engagement is increasingly taking center stage. For it to count, institutions need to provide genuine opportunities for students to influence their learning environment, and to act on the issues that are important to them." (http://www.qaa.ac.uk) The emphasis on student engagement in quality assurance can be seen as a result of a general shift in the perception of the student's role in higher education. Universities have recognized students as principal stakeholders, as they are the ones who are most affected by education and know best what they want and need (Williams & Cappuccini-Ansfield 2007: 159). Thus, they should be equally involved in quality mechanisms. Students' participation may take a variety of forms, such as informal discussions, formal qualitative sessions (e.g. focus groups and facilitated discussions), suggestion boxes or representative committees (Harvey 2003: 3). However, student feedback surveys are the most prominent among these instruments of student engagement in quality processes. The increasing use of student feedback is due to a number of key factors, such as the already mentioned expansion of the higher education sector, the increasing consumerism and marketization of higher education and, not least, the increasing concern over the quality of higher education (EUA 2013).

The literature on the importance of students' engagement in quality processes is also vast. The very first issue of the journal "Quality in Higher Education", published in 1995, focused on the student experience (Harvey & Williams 2010). Hill (1995), for example, suggested that students should also be included in the process of defining and assessing quality, and Ratcliff (1996) claimed that students should be involved in both internal and external quality assessments, e.g. nationwide evaluations (Harvey & Williams 2010: 98). This initiative was taken up a few years later, in 2005, with the launch of the National Student Survey (NSS) in England, Wales and Northern Ireland. The survey asks final-year degree students about various aspects of their study experience, including an "overall satisfaction" mark, and the results are then published [1] (http://www.thestudentsurvey.com). The NSS was later criticized by Harvey, who referred to the survey as "a hopelessly inadequate improvement tool" (Times Higher Education 2012). Nevertheless, Harvey was a strong supporter of student feedback, provided that its results were also fed back to the stakeholders. He describes this process as "closing the feedback loop". What exactly he means by that is explained in his article on student feedback published in 2003. There he asserts that although we are clear about the purpose of student feedback, we are "not always clear how views collected from students fit into institutional quality improvement policies and processes" (Harvey 2003: 4). He identifies the two main functions of feedback from students as "internal information to guide improvement as well as external information for potential students and other stakeholders, including accountability and compliance requirements" (ibid.: 4). Brennan and Williams (2004), in contrast, differentiate between two principal reasons for collecting student feedback: enhancing the students' experience of learning and teaching on the one hand, and contributing to the monitoring and review of quality standards on the other. In summary, six main purposes for the use of student feedback can be identified in the literature: information for improvement, information for prospective students, information for current students, accountability, benchmarking, and comparison between and within institutions (Williams & Cappuccini-Ansfield 2007: 164).

For Harvey, however, any collection of student feedback falls short if the information is not played back to the stakeholders, which he refers to as "closing the feedback loop". He argues that, for an effective contribution to internal improvement processes, student feedback must be integrated into a regular cycle of analysis, reporting, action and feedback. The cycle begins and ends with the students: in exchange for their feedback, they receive feedback on the improvements planned as a result of the survey findings. This process should happen in a perpetual loop. Only in this way can it be ensured that students remain engaged in the quality process without eventually becoming cynical about a constant flood of surveys that ultimately seems to change nothing (Harvey 2003; Alderman, Towers & Bannah 2012). According to Harvey, the institutions themselves benefit most from student feedback: it provides university management with valuable information that they can incorporate into their continuous quality improvement processes (Williams & Cappuccini-Ansfield 2007). Based on Harvey's model, Student Satisfaction Surveys have been successfully applied at a number of universities both in the UK and abroad (Alderman, Towers & Bannah 2012). Williams and Cappuccini-Ansfield (2007) compared the fitness for purpose of such Student Satisfaction Surveys with the UK's National Student Survey and came to the following conclusion: institutional surveys are indeed much better suited for internal quality improvement, whereas the NSS is primarily designed as a way of "league-tabling" institutions.

Risks of Student Feedback

Williams and Cappuccini-Ansfield also point out the dangers of the NSS's simplistic approach, in which the information from students' feedback is reduced to a single mean score for easier comparison of institutions. Furthermore, they draw attention to the possibility of distorted results due to students' concern about the reputation of their degree-awarding institution. As final-year students are well aware that the published results of the NSS may affect the reputation of their qualification, they may give higher scores to their university (Williams and Cappuccini-Ansfield 2007).

In an article published in 2012, Times Higher Education [2] also reported on the paradoxical effects the NSS has had on academic behaviour. A lecturer at Kingston University, for example, advised students to inflate the scores they gave their university in the NSS, saying: "If Kingston comes bottom, the bottom line is that no one is going to want to employ you because they'll think your degree is shit" (Times Higher Education 2012). As a consequence, Kingston University's Department of Psychology was removed from the 2008-09 league tables. [3] The NSS is much criticized, and its negative effects illustrate the drawbacks of student feedback as an instrument for quality assurance.

Numerous critics of the use of student feedback claim that it does not contribute to the improvement of quality. On the contrary, they argue, it leads to a decrease in quality, as universities lower their requirements for students and reduce the quality of learning for fear of being poorly rated by them. Ultimately, universities may alter their academic identity and behaviour in order to raise student satisfaction. The Times Higher Education article notes, furthermore, that student satisfaction is not an accurate measure of quality; at best, it reveals the subjective preferences of different groups of students. In addition, academic life becomes subordinated to the "imperative of cultivating student satisfaction" (Times Higher Education 2012), which undermines the authority of the university. Vuori (2013: 177) likewise diagnoses a "power shift in academia" as students' power increases at the expense of the university. There is a danger, when universities are more concerned about the student experience than about academic education, that university life becomes dictated by the question "What do students want?" rather than "What do students need?" (Times Higher Education 2012). Mark (2013: 7), in his article "Student satisfaction and the customer focus in higher education", likewise worries that the university's top priority will become "making students happy". Vuori (2013) takes the same line when she notes that universities are increasingly seen as service providers and teaching as a service provision, while students are more and more perceived as consumers. In the competition for students, universities aim for high scores in rankings so that students "choose" them. The author fears that "the quality assurance measures that are taken to guarantee student satisfaction may further strengthen students' potential customer identification" (Little & Williams 2010 in Vuori 2013: 177). Franz (1998) paints an even more dramatic picture: "universities become like shopping malls where students come to buy education, classes become popularity contests. Pedagogy becomes entertainment" (Mark 2013: 7).

All these statements are rather radical. Against such concerns, some studies argue that students attend university in order to improve their career prospects and therefore want a quality education that is valued on the labour market (Mark 2013). Nevertheless, it must be ensured that student feedback is not instrumentalized for the wrong purposes. Therefore, as Harvey already argued, an integrated use of student feedback is crucial for its effective contribution to internal improvement processes (Harvey 2003: 4).

Finally, it should be noted that students are usually involved only at the end of the quality process, when the university or study program is evaluated by means of student feedback. For the most part, universities or quality assurance authorities design these surveys themselves; students are very seldom involved in the definition of quality criteria. The construction of quality is de facto a top-down process, and every definition and measurement of quality therefore reflects power relations. In other words: what is not measured remains non-existent (Ribolits 2009). I am therefore convinced that students must be involved in the quality process from the very beginning. They should be included not only in the assessment but already in the definition of quality. In this respect, a qualitative approach by means of focus groups is preferable to the quantitative approach of student surveys.

 

References

ALDERMAN, L.; TOWERS, S.; BANNAH, S. (2012) Student feedback systems in higher education: a focused literature review and environmental scan, Quality in Higher Education, 18:3, 261-280.

BBC (2012) Faculty in league table expulsion. Available at: http://news.bbc.co.uk/2/hi/uk_news/education/7526061.stm.

EUA (2013) How does quality assurance make a difference? A selection of papers from the 7th European Quality Assurance Forum. Available at: http://www.eua.be/eua-work-and-policy-area/quality-assurance/eqaf/previous-.

ENQA (2009) Standards and Guidelines for Quality Assurance in the European Higher Education Area, 3rd edition. Available at: http://www.enqa.eu/files/ESG_3edition%20(2).pdf

FRIED, J. (2006) Higher education governance in Europe, autonomy, ownership and accountability – A review of the literature. In: Higher education governance between democratic culture, academic aspirations and market forces. (Brussels, Council of Europe).

HARVEY, L.; WILLIAMS, J. (2010) Fifteen Years of Quality in Higher Education (Part Two), Quality in Higher Education, 16:2, 81-113.

HARVEY, L. (2003) Student Feedback, Quality in Higher Education, 9:1, 3-20.

LITTLE, B.; WILLIAMS, R. (2010) Students' roles in maintaining quality and in enhancing learning: Is there a tension? Quality in Higher Education, 16, 115-127.

LOMAS, L. (2007) Are students customers? Perceptions of academic staff. Quality in Higher Education, 13:1, 31-44.

NAIR, C. S.; MERTOVA, P. (Eds) (2011) Student Feedback: The Cornerstone to an Effective Quality Assurance System in Higher Education. (Oxford, UK: Woodhead Publishing).

VUORI, J. (2013) Are Students Customers in Finnish Higher Education?, Tertiary Education and Management, 19:2, 176-187.

MARK, E. (2013) Student satisfaction and the customer focus in higher education, Journal of Higher Education Policy and Management, 35:1, 2-10.

TIMES HIGHER EDUCATION (2012) Satisfaction and its discontents, published 8 March 2012. Available at: http://www.timeshighereducation.co.uk/419238.article.

RIBOLITS, E. (2009) Bildungsqualität. Was ist das und woher rührt die grassierende Sorge um dieselbe? In: Christof, E.; Ribolits, E.; Zuber, J. (Hg.), Bildungsqualität! Eine verdächtig selbstverständliche Forderung. Jg. 34, Heft 136. (Innsbruck/Wien/Bozen: StudienVerlag).

WILLIAMS, J.; CAPPUCCINI-ANSFIELD, G. (2007) Fitness for Purpose? National and Institutional Approaches to Publicising the Student Voice, Quality in Higher Education, 13:2, 159-172.

Websites

The Agency for Quality Assurance and Accreditation Austria (AQ Austria). Available at: https://www.aq.ac.at/en.

The European Association for Quality Assurance in Higher Education.  Available at: http://www.enqa.eu.

The National Student Survey. Available at: http://www.thestudentsurvey.com.

The Quality Assurance Agency for Higher Education (QAA). Available at:  http://www.qaa.ac.uk.

 

 


[1] The advantages and disadvantages of the NSS are discussed below.

[2] In: Satisfaction and its discontents, Times Higher Education, published 8 March 2012. Available at: http://www.timeshighereducation.co.uk/419238.article.

[3] In: Faculty in league table expulsion, BBC, published 25 July 2008. Available at: http://news.bbc.co.uk/2/hi/uk_news/education/7526061.stm.

 


 
