I’m struck by the coincidence of Jonathan Rees and Ferdinand von Prondzynski both writing, in different ways, about the significance to academics of being able to set their own compass. Not only do we hope to be able to research and teach in areas that are the best fit with our skills and principles, but we also prefer to work for institutions that are able to determine the best mix of features that suits their operating context.
Whether you call this autonomy or freedom, what seems to be at stake is the trust placed in us to behave responsibly: to respond in localised, adaptive, thoughtful ways to the stuff that comes up, whether in the classroom or the economy. Agility depends in powerful ways on agency, and this works best when both the resources and the authority to use them are quartered fairly close to the situation in which they’re needed.
The enemy of freedom bearing down on two fronts here is something we might call standardised diversification, whether encountered in the enforced use of brand-driven institutional templates for online learning (JR), or the centralisation of strategic planning to save us all from the “unnecessary duplication of provision” (FVP). Higher education isn’t inventing any of this: these approaches animate business models all over the place. It’s a principle of sustainable mass production of anything that if you introduce sufficient minor variation over time, or horizontally across the range, the tension between supply and gratification creates the future of demand for whatever it is that you make. This is the straightforward reason why not all cars are white, a question my kids have asked me recently.
Diversification for us is about the sustainability of small disciplines (what we might call the species biodiversity model); for governments who have to pay to protect this, it’s also about controlling diversity within managed limits so that each institution isn’t overfunded to duplicate what’s being done more cheaply, or perhaps to higher standards, up the road. It’s very like agribusiness in this sense: universities are not hobby farms. Even the humanities, for all our faux organic packaging, are driven by business models that increasingly lean towards standardisation as the best way to achieve both scale and market position without expanding cost.
None of this is rocket science, but I think it’s part of the reason why academics grimace when standards are mentioned. There is very little confidence that the current hive-like coordination of planning, standards and quality management activities in higher education might have anything to do with a genuine wish to do things well. At one level, it’s self-congratulating busywork, which is the worst kind of strategic activity: the balance between time spent in reflection and time spent reporting against the workplan tips decisively in favour of the KPIs.
But it’s also the reason why higher education institutions are developing such an attachment to data, as a talismanic presence in their effort to pin down the swirling cloud of human complexity that brings learners, researchers and administrators together, day in, day out. My favourite data analysis process at the moment is comparative student outcomes reporting, which we can use to slice the data cube from all directions to see what, if anything, makes a difference. It’s like the opening credits to The Matrix for giving numbers a fetish value that leaps directly into the realm of abstraction. How can we know so much about our students, more or less down to their shoe size, and still understand so little about how it feels for them as they navigate through our systems, our communication channels, our expectations, and our obsessive collection of data on their experience?
So at one level, no wonder the academics are tiptoeing backwards out of the room as the engines of institutional strategic data analysis start to overheat. But perhaps we need to remember that we’re not above a bit of vanity dressed up as quality assurance ourselves; it’s just that we’ve been doing it for so long that it’s become natural to our way of life, as these things do.
Because I can’t really see the difference between making the effort to generate some useful standards in teaching and learning, and our entirely opportunistic enjoyment of self-advancing metrics in research. As researchers, we’ve lived very comfortably with the proposition that citation (not to mention journal ranking) has something to do with the measurement of our meaningful impact in the world. We do seem broadly able to turn a blind eye to the lack of evidence for this case, and just get on with laundering our personal performance indicators through the various instruments developed by journal publishers to make the quantification of scholarly impact seem like a halfway sensible thing to do. What is this if not adherence to standards?
In which case, perhaps we should all get a bit more sleeves-rolled-up about developing and implementing the standards that will help our students attach their learning to their professional futures. Surely these are no more contemptible or confronting to our freedom than the ones we exploit for ourselves?