Can We Fix It? Yes We Can!
I often welcome new members of staff joining us from US universities and see them through the practicalities of settling into a new academic system as well as the, often prolonged, period of cultural adjustment. The practical adjustments in a global city such as London are usually minimal and painless (though getting a bank account, and children into a good school, can sometimes test the nerve). The adjustments to a new academic system are much larger. One of the biggest surprises is the UK system of quality assurance and oversight and, specifically, the time, patience and effort required to navigate it. Exam boards, second and external examiners, programme and module approvals, committees and reporting appear as bewildering and, frankly, unwelcome intrusions on academic judgment and independence. I sometimes find it difficult to disguise my own bewilderment.
Now, before you think that this is going to be a diatribe in which I pour scorn on quality management and assurance and all its works, let me set you straight. I think we need more scrutiny and oversight - not less! And certainly more quality; how could it be otherwise? My problem is that many of the existing processes and systems, sector-wide, are a sham. They provide the appearance of scrutiny and oversight without the substance - defined standards, critical review and feedback-driven improvement. They promote a (yes, I will say it) tick-box mentality and they operate on the principle that if we add sufficient bureaucratic impedance to the system then quality will spontaneously manifest itself.
All of this is readily fixable, though it will require us to make some difficult changes and, most significantly, to wrest control from those who have an investment in mastery of the existing system. It will also require additional effort, but at least it will be meaningful effort. So here are my prescriptions, some of which you may recognise from my previous comments on related topics.
No module and certainly no course or programme should be 'owned' by an individual. All modules should be assigned to teams of at least three people with collective responsibility for the quality of design and delivery. All of these individuals should play an active role in the teaching.
Approval of a module should be on the basis of the completed materials. In other words, the content of a module should be developed in its totality in advance of delivery (slides, exercises, etc.) and constitute part of the case for approval. Preparing whilst delivering should not be an option.
The core of quality assurance should be peer review. All academics should be involved in academic review of teaching materials on the same basis and with the same rigour expected for research publication. Senior academics across the institution should expect to act as anonymous reviewers for modules both in their area and outside it.
The module team should be subject to a panel interview in which they would be asked to pitch their module and explain the rationale, choice of content and teaching methods. Ideally, though this might be a step too far, there should be an element of competition, with more modules seeking an opportunity to be delivered than space in the programme permits.
The approval of modules should expire after 3 years, at which point re-approval should be sought. It would follow the same pattern as initial approval, though clear evidence of successful delivery, and lessons learnt, would be taken into account.
The system of 'external examiners' misuses effort. The idea of critical external oversight is a good one, but linking it so tightly to the exams process is mistaken. Specifically, spending this unique opportunity for quality input on checking for arithmetic errors and typos in exam papers is a waste. External examiners should have a much wider brief to scrutinise and comment on curriculum and standards, and should spend time in institutions outside the exam period. They should be properly remunerated to reflect the role they are expected to play and the seriousness with which they should take it. Too much academic effort is spent on managing examination processes and marking exam scripts; much more of this should be handled by administrators and teaching assistants. Good reporting of student performance data and statistical sampling of scripts should be used to inform a feedback and reflection process that focuses on outcomes and not the exams process itself.
In key areas it would be valuable to have subject-specific, sector-wide standardised tests. These would allow individual institutions to compare their students' performance in relation to core subject knowledge and skills. It might be feasible to provide feedback across a profile showing performance quartiles. I appreciate this is controversial, and certainly the tests would be difficult to design, but either we are serious about standards or we are not.
We have, in the UK and in most subjects, moved away from vivas as a means of determining performance, largely because they are expensive and unreliable. Unfortunately, we have thrown the baby out with the bath water. Quality rests ultimately upon student knowledge, understanding and development. It is impossible to gauge the level of student understanding without a probing diagnostic interview. We must therefore arrange for, if not vivas, then at least a range of detailed interviews with a sample of students, aimed at determining the level and depth of understanding they have achieved and eliciting 'deep' feedback on the module.
One of the by-products of the existing quality system has been an insistence that all new academics undergo training. Most institutions have put in place programmes that provide this training, and some encourage their academics to secure independent certification. University teaching is, however, not a 'generic' skill. It is deeply embedded in the subject and in the methods and modes of thinking that constitute the subject. Training is where a shared understanding of quality can be formed and where standards are debated and agreed. Subject associations, supported by the relevant professional bodies, should be taking responsibility for training new academics in their area and for the promulgation of quality.
Quality assurance cannot sit in an organisational silo. It cannot be the responsibility of a particular cadre of individuals and it cannot sit outside management. Hiving off quality into special committees and institutional roles is damaging. A focus on systems and processes shorn of the context in which they are to operate can never be effective. The mechanisms of critical review and feedback described above must be part of the direct operation of managerial and academic responsibility and their effectiveness must ultimately be overseen by senior institutional management.
I know that many individuals and some universities get a few of these things right. For the most part, this has been without any relation to their own formal quality assurance processes. Perhaps we can extend this guerrilla action. To the barricades, victory to the revolutionary forces of quality assurance!