The notion that universities have a responsibility to provide a “service” to students arises from the changes which higher education has been undergoing for some time, a significant outcome of which is the expectation that institutions provide opportunities for scholarly engagement and personal development within a supportive and inclusive environment. Such expectations re-position the role of the university in society: not just an education provider but an institution of life-long learning which sculpts our future labour force. This research uses the Deming quality cycle (here in the form of the Plan Implement Review Improve, or PIRI, cycle) to analyse the extent to which universities are improving their policies and practices relating to student engagement. The research is carried out in the context of the United Kingdom (UK).
Prior to embarking on any discussion about student engagement, it is important to define what is meant by this concept through a review of the various definitions available in the literature. The body of research around student engagement largely makes use of two key terms, often used interchangeably: ‘student involvement’ and ‘student engagement’. Whilst generally taken to be similar constructs, the use of these different terms can create some challenges for the consistency, reach and interpretation of research (Foubert & Grainger, 2006). There are a number of definitions of student engagement cited in the literature, several of which will be explored shortly.
The term “student involvement” was first coined by Astin (1984) and is defined as “the amount of physical energy and psychological effort that students put into their student experience” (p. 297). A more recent interpretation is given by Krause and Coates (2008) as “the extent to which students are engaging in activities that higher education research has shown to be linked with high-quality learning outcomes” (p. 493), which further specifies the type and outcome of the activities engaged in. These definitions focus on involvement or engagement from the student standpoint; however, other definitions place emphasis on the institution’s input. For example, the Higher Education Funding Council for England (HEFCE, 2008) defines student engagement as “the process whereby institutions and sector bodies make deliberate attempts to involve and empower students in the process of shaping the learning experience”. Providing a more holistic definition, Kuh (2009) combines these two perspectives: “the time and effort students devote to activities that are empirically linked to desired outcomes of college and what institutions do to induce students to participate in these activities” (p. 683).
Whilst Trowler’s (2010) literature review revealed many grey areas in student engagement research, she noted that the one irrefutable conclusion across these studies is the value that engagement brings to individual students. Specifically, national, cross-institutional student engagement analyses such as the Australasian Survey of Student Engagement (AUSSE) reveal that satisfaction, support and learning outcomes are the most important factors in preventing premature institutional departure (Coates, 2009). Moreover, it seems that cultivating an enriching environment for students is a prerequisite for institutions to retain their students and achieve positive student outcomes (Tinto, 1997). By using specific indicators to measure the above parameters, institutions can monitor student engagement more effectively and enhance educational success.
There is considerable variation in the nature and type of research conducted in the student engagement literature (Trowler, 2010). Studies conducted in the UK (with the exception of the United Kingdom Engagement Survey (UKES)) tend to be small-scale, single case studies. Whilst there is a wealth of empirical research on student engagement interventions globally, there is very little evidence about the nature, function and quality of these interventions, without which only limited conclusions about their effectiveness can be drawn.
From a national perspective, there is an expectation from the quality monitoring body that institutions should provide an engaging environment as part of a high-quality education. Coates (2005) emphasises the importance of data collection on student engagement and argues that it can be used to improve student learning and productivity. As engagement is a proxy for learning, and learning is a good indicator of quality, engagement data can be a useful indicator of quality. The Quality Assurance Agency (QAA) strongly advocates that “higher education providers should take deliberate steps to engage all students, individually and collectively, as partners in the assurance and enhancement of their educational experience” (QAA, 2012). This is reiterated by Buckley (2013), who also emphasises the duty of institutions to effectively engage their students: “It is the institution's responsibility to facilitate and improve engagement, by creating environments and opportunities that allow and encourage students to work hard, to invest emotionally and intellectually in their studies and to interact with their teachers, their course and each other in ways that will benefit their learning.” (p. 8)
Generally speaking, institutions are very successful at collecting data through formal assessment and student satisfaction surveys; what is less apparent is what institutions are doing with this data, and specifically how they are using it to enhance student engagement. Understandably, institutions may be overwhelmed by the data collected, but they have not been left unsupported, as reports such as “Engagement for Enhancement” (Buckley, 2013) are available to assist them in understanding their data. It is one of the aims of this thesis to establish where the bottleneck lies in using student engagement data for improvement purposes.
As previously alluded to, student engagement has positive implications for both institutions and students on multiple levels. For institutions, from an economic perspective, it helps prevent students from leaving their studies prematurely, so that they continue to pay tuition fees to the university; from a social perspective, higher engagement means a higher level of commitment to the institution, which is likely to mean a more positive reputation is communicated to the wider society. Finally, from a political perspective, it reflects effective governance of the higher education system as a whole. For students, it provides a richer educational experience, and, generally speaking, a higher level of engagement is also a strong predictor of success (Berger and Milem, 1999; Astin, 1999; Tinto, 1997; Bonwell and Eison, 1991; Bean, 1980; Pascarella and Terenzini, 2005).
In North America and Australasia, the term “student engagement” is well understood as a result of large-scale national annual surveys, namely the US National Survey of Student Engagement (NSSE) and the AUSSE, and is largely related to the “student involvement” literature. The term student engagement in the UK literature, however, links more to research around student feedback, student representation and approaches to learning, making it somewhat difficult to yield relevant and substantial research using the term “student engagement” and creating a bias towards research in the US and Australasia (Trowler, 2010).
Another issue in the existing literature on student engagement is the limited amount of research conducted with any confidence on specific, local engagement interventions, resulting in recommendations for practice which are general and non-specific. At the other extreme, some researchers make recommendations which are free of context or situation, assuming that what works for one institution will also work for another (Trowler and Trowler, 2010).
In her research on the assessment of student outcomes, Kinzie (2011) found that whilst institutions were going through the motions of assessment, they had not moved beyond assessment as an end in itself towards assessment as a stepping stone in the wider quality improvement process. Whilst that study refers explicitly to student outcomes, the same story can be told for student engagement. Moreover, Kinzie (2011) suggests that more needs to be done in order to “close the assessment loop” and subsequently understand the effect of improvements on, in this case, student learning. In other words, the data collected in assessment should be used not only to improve practices, but also to continually assess the impact of those improvements.
Research on student engagement in the UK lags far behind other Western countries, as can be seen from how recently a national student engagement survey was first carried out. The NSSE started in the United States in 2000, but it was over a decade before the first pilot survey of student engagement was conducted in the UK in 2011. Whilst there was evidence that a few institutions were undertaking their own assessments of student engagement, this was not commonplace (Buckley, 2014). Alex Buckley, taking the lead on developing the UKES, suggested that the latency of a UK student engagement survey could be explained by the dominance of other surveys related to student experience and quality enhancement (Buckley, 2013). However, whilst such surveys did include elements related to student engagement, none focused exclusively on it. Moreover, whilst student satisfaction is undoubtedly important, engagement is a better measure of education quality (Gibbs, 2014).
This research builds on the existing research on student engagement in three ways. Firstly, it makes use of a well-established quality improvement framework and applies it to an under-researched, yet highly important topic. Secondly, the research undertaken has direct implications not only for the case-study institutions but may also be used as guidance/benchmarking for other institutions in the UK. Thirdly, it highlights some of the existing issues in higher education quality improvement and suggests possible solutions. These objectives will be explored through a single research question:
To what extent are universities planning, implementing, reviewing and improving student engagement practices and policies?
This study makes use of a mixed methodology; semi-structured interviews and website analysis were selected as data collection tools because of the different perspectives they bring to the research. Semi-structured interviews with university personnel provide an insider perspective on student engagement, which is useful for two reasons: firstly, it provides insight into institutional attitudes towards student engagement and, secondly, it gives an indication of how knowledgeable staff are about student engagement practices and policies, and what is being done “on the ground”. The website analyses provide an external, and in some cases an idealised, perspective of student engagement. Together, the combined approaches serve to create a richer sense of the effectiveness of institutional practices than either method could alone (Pace, 1984), and to highlight discrepancies, if any, between what is publicised and what is practised.
Both the website analysis and interviews were conducted using the same inventory which is an adapted version of an inventory created to benchmark student engagement policies and practices in Australia. In the original version, a distinction was made between institution-wide practices and individual departmental practices; this distinction has been removed for the purposes of this study, due to the smaller sample size, and all items refer to institution-wide practices and policies, unless otherwise stated. The inventory consists of 26 items and is divided into four sections which follow the Plan Implement Review Improve (PIRI) quality cycle. The items of the inventory were formulated based on the research conducted on assessment of learning outcomes by Kuh and Ikenberry (2009).
Website analyses took place prior to conducting the staff interviews, in order to gain an understanding of the prevalence of student engagement in universities’ mission statements, strategies, policies and practices. Indicators of student engagement were measured against the aforementioned PIRI inventory. The first documents to be analysed were the mission statement and overall strategies of the university. Following this, keywords were used in each university’s website search engine, for example “student engagement”, “orientation” and “involvement”. The remainder of the analysis took a more fluid form. Whilst every attempt was made to analyse the website extensively, it was not an exhaustive search given the large quantity of information available on university websites.
Twelve universities were contacted to conduct interviews with members of staff. Initially the Office of the Vice-Chancellor was contacted to nominate a suitably positioned member of staff. Of the 12 universities, three responded positively, with which interviews were subsequently arranged. Interviews were conducted between 3 and 19 February 2016 and took approximately one hour per member of staff. Two members of staff were interviewed from each university, except for one. This helped to create a richer picture of student engagement practices and also to identify biases and inconsistencies. A further benefit of face-to-face interviews was the opportunity to view the university facilities, which added to an overall understanding of each institution’s environment.
The interviews and website analyses yielded some commonalities among the three institutions in terms of their governance of student engagement. To an extent the similarities are dictated by the regulatory requirements of the QAA, the Department of Education and the HEA’s interest in student involvement in programme design and curriculum development (Trowler, 2010). In addition, institutional comparison is likely to lead to convergence, for example through benchmarking exercises and the sharing of well-established, effective practices. Below, four trends drawn from the data collected are discussed; these are not the only trends which arose, but they represent the most prominent and interesting ones. Following this, a number of “recommendations” are given for each of the four quality cycle stages; these are areas identified as good practices or aspects that had worked particularly well for the specific institution in question, and as such should be considered with that context in mind.
When asked about which factors influence student engagement strategies, it became apparent that for all three institutions, the actions that have been taken have arisen from external pressures rather than internal aspirations. However, this is not to say that the latter is not happening, just that the former is the driving force. For example, during interviews, multiple references were made to QAA requirements and NSS ratings as forming the basis for action, and this is also visible on the websites to a degree. Whilst anything that instigates a desire to improve is positive, does this mean that student engagement practices will be less genuine than if they were the result of internal dissatisfaction with the status quo? As Kuh et al. (2008) note, it is not enough for universities simply to offer engaging programmes or practices, as this does not necessarily guarantee success; rather, the practices must be of high quality and rooted in a student-oriented culture embraced by the whole institution.
In the same vein as the drivers for change, all institutions had a firm understanding of the necessity for improvement. Interviewees acknowledged that it is not enough to keep doing the same thing, or even to improve incrementally, but that they are in direct competition with other institutions and therefore must be improving at all times. They are acutely aware that if they want to receive positive feedback in the National Student Survey (NSS), attract more students and be considered a high-quality institution, they must improve in every domain of the learning experience. It is encouraging that institutions are at least recognising (and in some cases acting on) the need for constant review and improvement.
The most commonly cited catalyst for change by interviewees, as anticipated, was the NSS results. With students now providing the largest proportion of funding for institutions, they have substantial sway over the financial success of their university, and therefore student satisfaction has become pivotal to universities. Statements such as “At least 90% overall satisfaction in NSS for every school” indicate that this particular national survey is having a large influence over universities’ strategy design. Hopefully, with the improvement and widespread use of the UKES in years to come, this will have a similar but more specific impact on student engagement, not just satisfaction.
Universities increasingly recognise the importance of students as stakeholders and consequently are making efforts to include them in the governance of the institution. This was identified in all three institutions, and was usually manifest in the form of a partnership agreement, depicting students as equal partners in the future of their education. The agreement is a publicly available document, created in collaboration with students, which outlines the responsibilities and expectations of both parties. This represents a shift from previous university governance and coincides with the increase in tuition fees and government ideals to make universities more accountable. From the students’ perspective, it gives them the opportunity to become more involved in their own learning experience and the power to influence decisions.
There is very little evidence in the literature on student engagement in institutional governance, but the data collected in this study suggest that every institution has at least attempted to include students in a significant proportion of university governance; the topic is also a current focus of UKES research. However, Magolda (2005) postulates that student governance is not always beneficial or a positive experience: for example, student committees can get caught up in trivial issues (this was also referred to in the interviews), and in some senses they neither represent the diversity of a university nor serve the needs of those who most need support, and in some cases they even limit active, meaningful engagement.
Whilst the nature and regularity of student engagement efforts vary greatly among institutions, a base level of activity can be identified at all universities. For example, each institution has an induction week at the beginning of the academic year with an extensive range of organised talks, tours and welcome events. Pittaway and Moss (2006) postulate that orientation is an important event which allows students to connect with their peers, mentors and staff, gain familiarity with the campus, settle into academic life and clarify expectations. For the most part the itinerary of events is the same, focusing on welcoming the students and offering practical support, but the level of engagement is likely to vary between institutions depending on factors such as communication of events, promotion efforts and the demographic of students. Naturally, universities channel a great deal of effort into the first few weeks of the semester, but after this the number of events decreases dramatically. Unfortunately, this is when students are likely to be most vulnerable and therefore need a higher level of support. It is possible that this drastic contrast in activity could be a trigger for students to disengage, after which point it might be difficult to re-engage them.
Another basic service that all universities offer is academic/pastoral support through the assignment of a personal tutor. Again, the nature of the mentorship mechanism differs among institutions, as do the engagement levels of students, and several interviewees reflected on the effectiveness of these systems. In one case, low engagement of tutors and students was due to the recent reconfiguration of the tutor role and subsequent confusion among both tutors and students about the function and expectations of the role. This highlights at least one area in which improvement is required and which could have a substantial impact on students’ learning experience.
The above discussions clearly show that institutions are making some efforts to enhance student engagement. However, a noticeable theme is that these processes are not well co-ordinated and not the result of an institution-wide strategy; even where they are, they have not been communicated successfully across all departments. To provide a couple of examples, the Student Voice Framework of one university identifies that only a small number of initiatives are implemented across all departments. There were also some discrepancies between the level of activity publicised on the website and interviewees’ knowledge of projects, which suggests that projects are still at a conceptual phase, that they are not being communicated/embedded across the whole institution, or a combination of the two.
This leads to the next observation: that pockets of activity are taking place. The majority of initiatives are developed and implemented at individual department level. Their effectiveness depends on both staff commitment and students’ desire to be more engaged. Some departments offer many extracurricular activities (a tendency more prevalent in social science/vocational programmes), whilst others offer very few.
Review of student engagement practices and mechanisms, where it exists at all, seems to be carried out only sporadically, and at best occurs once a year. The same is apparent for improvement: it appears to be piecemeal, and whilst changes are informed by student feedback, this does not appear to happen in a structured manner or as a continuous loop.
This research identified a range of practices being undertaken by universities, in some cases applied more consistently or with greater depth than others, but nonetheless, to some extent universities are involved in student engagement activities. They are also developing quality improvement protocols, but they are not necessarily integrating the two, that is, quality improvement of student engagement practices. As can be seen from the examples below, institutions excel at making plans, do well at implementing them and sometimes review them, but overall there is inconsistency and further improvement is required.
Here a compilation of the good practices adopted by universities is presented to fulfil several aims: firstly, to provide both a guide and a benchmark for universities considering student engagement; secondly, to highlight the current progress made in quality enhancement of student engagement; and thirdly, to inspire and motivate institutions to start using quality cycles to analyse and improve their current practices.
One university stood out in terms of the detail provided in their overall university strategy. Not only did it state what they wished to achieve, but also how they were planning to go about achieving these aims. Whilst there was no explicit mention of student engagement as a central strategy objective, it had been integrated in the nature of the actions put forward by the institution, for example “We will develop space to enable our research activities to grow and in particular to facilitate interdisciplinary and interdepartmental research.” (University A Strategy 2020)
All institutions have now established at least one position, if not a committee, dedicated to student engagement. One institution formed a student engagement team as a result of student feedback, and the team meets as often as twice a semester. Unfortunately, it is difficult at this stage to gauge the specific work of this position/committee and its effectiveness due to the recent formation of the team. On a positive note, it does suggest a plan to dedicate time, effort and resources to enhancing student engagement.
University C has a specific three level student engagement strategy. Of note, this was the only university-wide strategy identified which referred specifically to student engagement and outlined how this would be achieved. The three levels of engagement are involvement by providing multiple platforms for feedback, participation through the student university representative system which allows students to give their opinion, and partnership with the student union on development and implementation of plans. This represents the most progressive of planning strategies among the three case studies with regards to student engagement.
Two of the three institutions offer additional, non-core interdisciplinary courses which focus on exploration of social issues. Students can benefit from being in small classes and an interactive environment, and it also encourages academic engagement with fellow students and faculty. In principle this is a good practice, but interviewees were not sure about the extent of participation and the effectiveness of engaging students, so this highlights the need for review.
University A’s LifeCentre offers a one-stop-shop for students to ask questions about any aspect of their life. The idea behind this recent development is that it will build a sense of community for students and will also reduce confusion about who to contact, about what. It also increases efficiency in university administration, so the benefits are twofold.
The Higher Education Achievement Report is a national-level initiative which has been adopted by some universities. This is a place where students can document their non-academic/co-curricular activities and have them formally recognised, and it is hoped that this will encourage engagement. However, like the interdisciplinary courses, this is a fairly new initiative and its success has not yet been reviewed.
By far the best example of a student engagement review method is the Student Voice Framework, developed to identify different levels of implementation of projects (e.g. department level/institution wide) as well as gaps in service provision. Use of this framework by one university suggests a proactive attitude to student engagement improvement and concrete efforts towards reviewing practices.
At least two of the three institutions showed involvement in multi-institution research, which suggests a collective effort to review practices related to student engagement. For example, a publication titled “Understanding student attendance: Informing teaching and learning practice with evidence based decisions” suggests an active interest in improving student retention. However, what is not evident is whether this research is linked back to practice and improvement. Further, is such research seen by the right people in order to have an impact? In other words, is research connected to practice? If not, why not? And what can be done to ensure that there is a strong connection between the two?
Another positive review mechanism, mentioned earlier under “planning”, is the use of a student engagement team. In one university it was established as a result of student feedback, which suggests that the review process had already commenced. A diverse selection of staff attend these meetings, at which practical issues are discussed, such as student attendance, access to resources and other matters as they arise. This demonstrates a proactive and practical approach to reviewing student engagement.
The annual review and enhancement process conducted by one university, together with other formal structures which collect feedback from students such as the annual satisfaction survey, is a fundamental but unfortunately under-utilised mechanism. The review focuses on quality assurance, enhancement and engagement, although no further details were given about what this specifically entails. Importantly, however, it has equal representation of staff and students, and the university emphasises the value of student participation in this review process.
Involvement in national-level research and initiatives surrounding student engagement, for example the HEFCE Learning Gain projects and the UKES research, coincides with a more pronounced effort of review and improvement at the individual institutions. Presumably, institutions have chosen to be involved in these projects because they are interested in the review and improvement of practice. Alternatively, their involvement in these national-level projects may have fuelled their enthusiasm for change. Regardless of their motivation, these are positive steps forward.
A recurrent theme throughout the interviews and identifiable in the university strategies is investment in campus infrastructure to create a more inclusive and attractive environment for students to study in. For example, one university has an Estates Strategy which aims to “develop functional and innovative teaching and study space that improves the student experience”; progress already made in this area includes 24/7 access to libraries, more campus cafes, and a student union renovation. Another university also cited that to embed a deeper sense of community, making the campus buildings more connected would be on their list of priorities.
University C has developed an increased number of staff development programmes and is encouraging staff to take advantage of this opportunity. According to interviewees, this was stimulated by NSS feedback, and it is hoped that it will be facilitated through the development of a “Learning Enhancement Unit”. The university has also recently created the Postgraduate Certificate in University Teaching and Learning (PCUTL) programme, which is available to staff as a continuing professional development course in which they critically reflect on their teaching practice. One aspect of this is student engagement, and a number of reflective essays on this topic have been written, which indicates a desire within each faculty to facilitate a more engagement-centred learning experience.
On a general note, no institution could remark on, nor showed evidence of, indicators which explicitly measured or marked success in student engagement. Rather, reference was made to indicators assumed to be related to student engagement, for example measurements of satisfaction, student retention or number of projects completed. Could this suggest that institutions have not created indicators for student engagement and therefore are not seeking to review and improve their practices? This could be a significant barrier to institutions moving forward if important aspects of the planning stage are not well formulated, as indicated in the conceptual framework.
The field of research on improvement of student engagement is young, and there is much that can be done to add to it. A number of suggestions are made below, most of which relate to improvement in indicators and methodology. Naturally, this is not an exhaustive list of developments, but addresses the most pressing issues.
As discussed in the previous chapter, better indicators for student engagement, i.e. more direct measurements, would allow researchers to make better inferences and allow institutions to make more accurate evaluations of their initiatives. Pike and Kuh (2005) note that the most engaging institutions are those that are “marked by an unshakeable focus on student learning emphasized in their missions and operating philosophies. They also adapted their physical campus properties and took advantage of the surrounding environment in ways that enriched students’ learning opportunities.” (p. 187). The latter is something that all case studies make reference to and were taking action towards, but the former is more difficult to define and measure, as it requires a more in-depth study of the institution.
A longitudinal study adopting the PIRI cycle for student engagement would be greatly valuable as it would provide a clearer picture of progress made over time instead of just a snapshot. A more dynamic approach is required as the nature of the cycle is that improvement should not be static, but continuous.
More studies are needed that give specific recommendations which are practicable within the prevailing social, economic and political climate. Such recommendations are, however, highly contextual, and should be treated by institutions as guidelines rather than imperatives.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25(4), 297-308.
Astin, A. W. (1999). Student involvement: A developmental theory for higher education. Journal of College Student Development, 40(5), 518-529.
Bean, J. P. (1980). Dropouts and turnover: The synthesis and test of a causal model of student attrition. Research in Higher Education, 12(2), 155-187.
Berger, J. B., & Milem, J. F. (1999). The role of student involvement and perceptions of integration in a causal model of student persistence. Research in Higher Education, 40(6), 641-664.
Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. 1991 ASHE-ERIC Higher Education Reports. Washington, DC: ERIC Clearinghouse on Higher Education, The George Washington University.
Buckley, A. (2013). Engagement for enhancement: Report of a UK survey pilot. York: Higher Education Academy. Retrieved from: https://www.heacademy.ac.uk/sites/default/files/engagement_for_enhancement_final_0.pdf
Buckley, A. (2014). UK Engagement Survey 2014: The second pilot year. York: Higher Education Academy. Retrieved from: https://www.heacademy.ac.uk/sites/default/files/resources/ukes_report_2014_v2.pdf
Coates, H. (2009). Engaging students for success: 2008 Australasian Survey of Student Engagement. Victoria, Australia: Australian Council for Educational Research.
Gibbs, G. (2014, May 1). Student engagement, the latest buzzword. Times Higher Education. Retrieved from: https://www.timeshighereducation.com/news/student-engagement-the-latest-buzzword/2012947.article
Higher Education Funding Council for England (2008). Tender for a study into student engagement. Bristol: Higher Education Funding Council for England.
Kinzie, J. (2011). Colorado State University: A comprehensive continuous improvement system. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Krause, K. L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education, 33(5), 493-505.
Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79(5), 540-563.
Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College Student Development, 50(6), 683-706.
Kuh, G. D., & Ikenberry, S. O. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. National Institute for Learning Outcomes Assessment.
Magolda, P. (2005). Promoting student success: What student leaders can do. Occasional Paper No. 8. National Survey of Student Engagement.
Pace, C. R. (1984). Measuring the quality of college student experiences: An account of the development and use of the College Student Experiences Questionnaire.
Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students (Vol. 2). K. A. Feldman (Ed.). San Francisco, CA: Jossey-Bass.
Pike, G. R., & Kuh, G. D. (2005). A typology of student engagement for American colleges and universities. Research in Higher Education, 46(2), 185-209.
Pittaway, S., & Moss, T. (2006, July). Contextualising student engagement: Orientation and beyond in teacher education. In 9th Pacific Rim First Year in Higher Education Conference, Engaging Students, Griffith University, Gold Coast Campus, Australia.
QAA (2012). UK quality code for higher education: Chapter B5 – Student engagement. Gloucester: Quality Assurance Agency. Retrieved from: http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/Quality-Code-Chapter-B5.pdf
Tinto, V. (1997). Taking retention seriously: Rethinking the first year of college. NACADA Journal, 19(2), 5-9.
Trowler, V. (2010). Student engagement literature review. The Higher Education Academy, 11, 1-15.
Trowler, P., & Trowler, V. (2010). Student engagement evidence summary.