This is an expanded version of the “Left Field” column published in the Irish Times.

Over the last couple of years there has been a growing feeling amongst all stakeholders – government (supposedly representing the people), the administration (the Department of Education and the HEA), and the higher education sector – that there is increased scrutiny on what exactly academics do. Nobody is quite clear where the impetus comes from, just as nobody is quite clear what the answer might be.

Much of the heat has come on the issues of contact hours and of how much research (and of what kind) is of any “use”. Some consider that universities – indeed all of third level – are merely secondary school for adults, and that time spent in the classroom, sorry, lecture theatre, is the only thing worth rewarding. These critics usually also show breathtaking ignorance of the process of scientific investigation, with the extreme suggesting that we should only fund research whose results we know in advance… Yes, that SFI grant into Academic Clairvoyance was a good idea.

But in essence it is a debate about value for money. Leaving aside the fact that not everything that has a price has value, and not everything that can be valued has a price, we can and should accept that it is simply good management practice to attempt to find out how efficient and effective our work processes are and how they might be improved. It is even more useful when the inputs – money and academic resources (although seemingly not administrative ones) – are in scarce supply.
Efficiency is a technical concept – it is about the transformation of inputs into outputs. For a given level of inputs (academics), can we get more outputs (graduates, Googles, whatever)? Effectiveness is a little fuzzier: it is about how well we achieve our goals. In the former case we have cost and technical issues which can and should be investigated. In the latter we need clarity and stability about those goals in order for them to be assessed. And in most cases we cannot even begin to measure effectiveness for years, if not decades.

Besides, directed research, chasing imposed metrics, is simply not the only way to do business. There is a very long tail and a very long half-life to both “good” teaching and research. James Clerk Maxwell published in 1873 equations he had worked on in 1865, abstruse and remote at the time. That they later formed the theoretical basis of the wireless in the early 1900s was not a directed outcome. Boole’s pamphlet on mathematical logic in 1847 had to wait the best part of a century, until the information revolution, to be seen as the transcendental work it is now recognised to be. Wöhler synthesised urea, closing the gap between organic and inorganic chemistry, in an accidental discovery. Perkin’s work on dyes, and the subsequent growth of synthetic organic chemistry, came from a nearly discarded experiment in the synthesis of quinine. Atomic clocks, developed for precision timekeeping and used to test relativity, now form the basis of GPS satellites. There are hundreds of examples of transformative products that emerged as by-blows, serendipities or sheer accidents from basic research. In a wonderfully lucid essay in 1939 Flexner noted the importance of curiosity, serendipity and generally pootling about.
In Ireland we are moving towards the introduction of workload models and key performance indicators. Key performance indicators, however, are inherently political. The EU “knowledge triangle” suggests that higher education trades off between supporting business, education and research. Like all policy triangles it is in fact a trilemma – it is difficult, if not impossible, to excel in all three domains. Note that this is not to say it is impossible to be active in all three – just that excelling in all three is another matter. And in Ireland we have tried for just that. We have two sectors which, if closely examined, have historical and natural affinities with two of the legs: the IoT sector has historically been orientated towards teaching and business support, the university sector towards teaching and research. If we require an increased emphasis on research from the IoTs without additional resources, or on business support from the universities without additional resources, we will degrade one or other of the existing strengths. And anyhow, how would we know whether what was being done was any good?
Part of the problem in the assessment of what we academics do is that we seem to do lots of things. We teach, we supervise, we design courses, we examine, we sit on committees, we hunt grant money, we reach out to the community… Most workload models try to fine-tune these to a fare-thee-well, allocating points for this and that. But in reality we do one thing – communicate knowledge, be it to the academy (research), to students (teaching) or to business and society. And here is where the problem lies. How can we measure these activities so as to allow managers to allocate resources and to reward those who outperform norms and expectations?
We can measure business impact in a crude fashion by patents gained, monies raised and spinoffs created that rival Google. This is the dominant metric now obsessing the government, which seems to treat the universities as machines for the creation of the next FaceGoogle and wonders why there hasn’t been a patent that morning. J. C. Maxwell, or Faraday, or Antonie van Leeuwenhoek would probably not have been funded by SFI. We can measure teaching efficiency in a crude fashion by how many hours an academic is in class. In the Institutes of Technology there is a class contact norm of typically 16 hours per week. That is not the case in universities, leading to the assumption that as university academics teach less they do less. Nothing, alas, could be further from the truth. The key difference between the modal IoT lecturer and the modal university lecturer is the expectation and requirement of research activity. And that takes time.
It is a reasonable question to ask about the impact or import of the research, even if we cannot realistically answer it – what is not reasonable is to assert that research is in some way less difficult or less time-consuming than teaching. Research is akin to venture capital: we need to put in a lot to get a little output. Success is rare and failure the norm. Publication in top-tier journals is exceedingly rare, and more than one is as fleeting as a shy Higgs boson in a ghillie suit on a heather-covered hillside. Acceptance rates in decent journals, or in gaining leading research grants, are often less than 10%. Research grant applications can take literally months to complete, with no guarantee of success. And writing a paper takes time. It typically takes in excess of 100 hours of work to get a finance paper to a stage where one is happy to put it out for even working-paper review. And then it takes a long time to get it published. It can take between six months and six years to get a paper from initial submission to final acceptance in finance, economics and the other social sciences. In fact, in finance, economics and political science, where papers are usually data-driven, this time requirement is perhaps on the low side for social science generally. In the arts and humanities there can be literal years with “no output”, as monographs and books are chronovores of enormous appetite. During that time the paper is usually under review at symposia, conferences and the like, and further hundreds of hours go into refining and tweaking. The total time required for publication of a single article in a set of leading finance journals was estimated to average over 1,600 hours in a 1998 study. Meanwhile younger academics – those seeking promotion or permanency, or seeking to move to sunnier climes – require publications, so this process starts during the PhD and continues with up to a dozen projects in various stages of the pipeline.
None of these times are captured in crude workload measurement approaches. To somehow assume that academics in universities engaging in research are idle is to display a gross ignorance of the scientific process – eureka moments are few and far between. If genius is 99% perspiration and 1% inspiration, adequacy is 99.9% perspiration and 0.1% inspiration.

A further aspect of research for those who are research-active is peer review and editing. Typically a paper gets published after a couple of anonymous referees have commented on it. Doing this refereeing is time-consuming – typically 4–5 hours of reading and writing per paper. If one does 12 reviews per annum that is 60 hours, or the best part of two working weeks. Imagine the editor of a journal receiving 300-plus papers per annum, each of which has to be read through prior to being sent out for review, and we see another hidden time cost of research.

Teaching also takes time beyond the classroom. For every hour spent in class one takes three more to prepare, review, reflect and redo the work. Even with a stack of slides and a good strategy it is imperative to do a mock run-through (which takes about the same time as teaching), noting as one does what is outdated, what is wrong, what doesn’t flow. Then one redoes the slides and delivers, finding in class that there are new issues to be incorporated, new questions raised, and that issues that seemed pellucid are actually as muddled as a government jobs initiative… So a 16-hour class contact load in the IoTs doesn’t leave a lot of time in teaching term to do any research, even if one were so interested. Of course, there are always the evenings and weekends, but the many IoT staff who wish to be research-active typically use the summer break to do it – Christmas and Easter are usually taken up with marking essays and so forth.
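For the numerically inclined, the back-of-envelope arithmetic above can be laid out explicitly. This is only a sketch using the figures quoted in this piece; the 39-hour working week is an added assumption for illustration, and the multipliers are estimates, not measurements:

```python
# Time-budget sketch using the figures quoted above.
# All numbers are the column's estimates; WORKING_WEEK is an assumed figure.

CONTACT_HOURS_IOT = 16   # typical weekly class-contact norm in the IoTs
PREP_MULTIPLIER = 3      # extra hours of preparation/review per contact hour
WORKING_WEEK = 39        # assumed full-time working week, in hours

# Each contact hour carries its prep hours with it: 16 * (1 + 3) = 64 hours.
teaching_load = CONTACT_HOURS_IOT * (1 + PREP_MULTIPLIER)
research_time_left = WORKING_WEEK - teaching_load

print(f"Weekly teaching time at a 16h contact norm: {teaching_load}h")
print(f"Hours left in a {WORKING_WEEK}h week for research: {research_time_left}h")

# Refereeing on top of that: 12 reviews a year at roughly 5 hours each.
annual_refereeing = 12 * 5
print(f"Annual refereeing time: {annual_refereeing}h")
```

On these numbers the weekly teaching load alone exceeds the working week, which is precisely the column’s point about where IoT research time has to come from.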
By all means, therefore, let us look at metrics of activity. But let us recall that these metrics typically measure output, not input, and therefore cannot as such be used for the measurement of efficiency, still less of effectiveness. That is not to say we should not measure research output. We should. Perhaps the issue is that we are scared of what we will find. Academics are terribly resistant to being managed and, by extension, to being measured. But we need to accept that without open, transparent measures of research we will not allow the universities to show the extent to which they are engaged with the second leg of their historic mission. Measurement of research output is a crude proxy for research activity, and an even cruder one for research excellence. But like patents and so forth it is measurable. We do not do this in Ireland. We have experience in the UK of several generations of research measurement. Many of us have been involved in these, as assessors or as units of measurement. We can and should design a model that builds on these and improves upon them – one that measures research output across units and across the sector. This will show that there are many who simply do not engage in research (as measured). Every academic knows of people who simply turn up, teach and disappear. This puts an unfair burden on those who do research, and it is they who should be shouting loudest for such a metric. We have some examples beyond the UK. Ontario has just done a similar exercise, and there are many measures of research activity for individual disciplines in Ireland, notably business and economics. What these all show is that there are many, many academics who do not come onto the research activity radar. One can only wonder what it is that they do all day. Conceivably they are engaging with business and society, but most who work in the sector would smile ruefully at that idea.
Until the universities and the system managers (the HEA and the DES) determine what the role of the universities is in relation to this trilemma, however, we cannot begin to reward or discipline academics, or the sector, for resource misallocation. What gets measured gets managed. By definition, a poor measurement system will deliver poor management. But we are not measuring research in any manner in Ireland. Perhaps we should start.