What gets measured gets managed – so how can we manage research without measuring it?

This is an expanded version of the “Left Field” column published in the Irish Times.

Over the last couple of years there has been a growing feeling amongst all stakeholders – government (supposedly representing the people), the administration (the Department of Education and the HEA), and the higher education sector – that there is increased scrutiny of what exactly academics do. Nobody is quite clear where the impetus comes from, just as nobody is quite clear what the answer might be.

Much of the heat has come on the issues of contact hours and how much research (and of what kind) is of any “use”. Some consider that universities, and indeed all of third level, are merely secondary school for adults, and that time spent in the classroom, sorry, lecture theatre, is the only thing worth rewarding. These usually also show breathtaking ignorance of the process of scientific investigation, with the extreme suggesting that we should only fund research whose results we know in advance… Yes, that SFI grant into Academic Clairvoyance was a good idea.

But, in essence, it’s a debate about value for money. Leaving aside the fact that not everything that has a price has value and not everything that has value has a price, we can and should accept that it is simply good management practice to attempt to find out how efficient and effective our work processes are and how they might be improved. It’s even more useful when the inputs – money and academic resources (although seemingly not administrative ones) – are in scarce supply.

Efficiency is a technical concept – it’s about the transformation of inputs into outputs: for a given level of inputs (academics), can we get more outputs (graduates, Googles, whatever)? Effectiveness is a little more fuzzy – it’s about how well we achieve our goals. In the former case we have cost and technical issues which can and should be investigated. In the latter we need clarity and stability about those goals in order for them to be assessed. And in most cases we cannot even begin to measure effectiveness for years, if not decades.

Plus, directed research, chasing imposed metrics, is simply not the only way to do business. There is a very long tail and a very long half-life to both “good” teaching and research. James Clerk Maxwell published in 1873 some equations he had worked on in 1865 which were abstruse and remote; that they later formed the theoretical basis of the wireless in the early 1900s was not a directed outcome. Boole’s pamphlet on mathematical logic in 1847 had to wait the best part of a century, until the information revolution, to be seen as the transcendental work it is now recognised to be. Wöhler synthesised urea, closing the gap between organic and inorganic chemistry, in an accidental discovery. Perkin’s work on dyes, and the subsequent growth of synthetic organic chemistry, came from a nearly discarded experiment in the synthesis of quinine. Atomic clocks were designed to test relativity and now form the basis of GPS satellites. There are hundreds of examples of transformative products that emerged as by-blows, serendipities or sheer accidents of basic research. In a wonderfully lucid essay in 1939, Flexner noted the importance of curiosity, serendipity and generally pootling about.

How we are moving in Ireland is towards the introduction of workload models and key performance indicators. Key performance indicators, however, are inherently political. The EU “knowledge triangle” suggests that higher education trades off between supporting business, education and research. Like all policy triangles it is in fact a trilemma: it is difficult if not impossible to excel in all three domains. Note that this is not to say that it is impossible to be active in all three – just that excelling in all three is. And in Ireland we have tried for exactly that. We have two sectors which, if closely examined, have historical and natural affinities with two of the legs: the IoT sector has historically been orientated towards teaching and business support, the university sector towards teaching and research. If we require an increased emphasis on research from the IoTs without additional resources, or on business support from the universities without additional resources, we will degrade one or other of the existing strengths. And anyhow, how would we know that what was being done was good?

Part of the problem in the assessment of what academics do is that we seem to do lots of things. We teach, we supervise, we design courses, we examine, we sit on committees, we hunt grant money, we reach out to the community… Most workload models try to fine-tune these to a fare-thee-well, allocating points for this and that. But in reality we do one thing – communicate knowledge, be it to the academy (research), to students (teaching) or to business and society. And here is where the problem lies: how can we measure these activities so as to allow managers to allocate resources and to reward those who outperform norms and expectations?

We can measure business impact in a crude fashion by patents gained, monies raised and Google-rivalling spinoffs created. This is the dominant metric now obsessing the government, which seems to treat the universities as machines for the creation of the next FaceGoogle and wonders why there hasn’t been a patent that morning. J. C. Maxwell or Faraday or Antonie van Leeuwenhoek would probably not have been funded by SFI. We can measure teaching efficiency in a crude fashion by how many hours an academic is in class. In the Institutes of Technology there is a class contact norm of typically 16h per week. That is not the case in universities, leading to the assumption that as university academics teach less they do less. Nothing, alas, could be further from the truth. The key difference between the modal IoT lecturer and the modal university lecturer is the expectation and requirement of research activity. And that takes time.

It is a reasonable question to ask about the impact or import of research, even if we can’t realistically answer it – what is not reasonable is to assert that research is in some way less difficult or less time-consuming than teaching. Research is akin to venture capital: we need to put in a lot to get a little output. Success is rare and failure the norm. Publication in top-tier journals is exceedingly rare, and more than one is as fleeting as a shy Higgs boson in a ghillie suit on a heather-covered hillside. Acceptance rates (success) in decent journals, or in gaining leading research grants, are often less than 10%. A research grant application can take literally months to complete, with no guarantee of success. And writing a paper takes time. It typically takes in excess of 100 hours of work to get a finance paper to a stage where one is happy to put it out even for working-paper review. And then it takes a long time to get it published: it can take between six months and six years to get a paper from initial submission to final acceptance in finance, economics and the other social sciences. In fact, in finance, economics and political science, where papers are usually data driven, this time requirement is perhaps on the low side for social science generally. In the arts and humanities there can be literal years with “no output”, as monographs and books are chronovores of enormous appetite. During that time the paper is usually under review at symposia, conferences and so on, and further hundreds of hours go into refining and tweaking. The total time required for publication of a single article in a set of leading finance journals was estimated, in a 1998 study, to average over 1,600 hours. Meanwhile, younger academics, those seeking promotion or permanency, or those seeking to move to sunnier climes, require publications, so this process starts during the PhD and continues with up to a dozen projects in various stages of the pipeline.
None of these times are captured in crude workload measurement approaches. To somehow assume that academics in universities engaging in research are idle is to display a gross ignorance of the scientific process – eureka moments are few and far between. If genius is 99% perspiration and 1% inspiration, adequacy is 99.9% perspiration and 0.1% inspiration. A further aspect of research, for those who are research active, is peer review and editing. Typically a paper gets published after a couple of anonymous referees have commented on it. Doing this refereeing is time-consuming – typically 4–5 hours of reading and writing per paper. If one does 12 reviews per annum that’s up to 60 hours, or the best part of two working weeks. Imagine the editor of a journal receiving 300+ papers per annum, each of which has to be read through prior to sending for review, and we see another hidden time cost of research.

Teaching also takes time beyond the classroom. For every hour spent in class one takes three to prepare, review, reflect and redo the work. Even with a stack of slides and a good strategy it’s imperative to do a mock run-through (which takes about the same time as the teaching itself), noting as one goes what is outdated, what is wrong, what doesn’t flow and so on. And then one redoes the slides and delivers, finding in the class that there are new issues to be incorporated, new questions raised, issues that seemed pellucid actually as muddled as a government jobs initiative… So 16h of class contact in the IoTs doesn’t leave a lot of time in teaching term to do any research, even if one were so inclined. Of course, there are always the evenings and weekends, but the many IoT staff who wish to be research active typically use the summer breaks to do it – Christmas and Easter are usually taken up with marking essays and so forth.

By all means, therefore, let’s look at metrics of activity. But let’s recall that these metrics typically measure output, not input, and therefore can’t be used as such for the measurement of efficiency, still less effectiveness. That’s not to say we should not measure research output. We should. Perhaps the issue is that we are scared of what we will find. Academics are terribly resistant to being managed and, by extension, to being measured. But we need to accept that without open, transparent measures of research we will not allow the universities to show the extent to which they are engaged with the second leg of their historic mission. Measurement of research output is a crude proxy for research activity, and even more so for research excellence. But, like patents and so forth, it is measurable. We don’t do this in Ireland. We have experience in the UK of several generations of research measurement; many of us have been involved in these as assessors or as units of measurement. We can and should design a model that builds on these and improves upon them, measuring research output across units and across the sector. This will show that there are many who simply do not engage in research (as measured). Every academic knows of people who simply turn up, teach and disappear. This puts an unfair burden on those who do research, and it is they who should be shouting loudest for such a metric. We have some examples beyond the UK: Ontario has just done a similar exercise, and there are many measures of research activity for individual disciplines in Ireland, notably business and economics. What these all show is that there are many, many academics who do not come onto the research activity radar. One can only wonder what it is that they do all day. Conceivably they are engaging with business and society, but most who work in the sector would smile ruefully at that idea.

Until the universities and the system managers (the HEA and DES) determine what the role of universities is in relation to the trilemma, we cannot begin to reward or discipline academics or the sector for resource misallocation. What gets measured gets managed. By definition, a poor measurement system will deliver poor management. But we are not measuring research in any manner in Ireland. Perhaps we should start.


6 thoughts on “What gets measured gets managed – so how can we manage research without measuring it?”

  1. Brian, interesting piece. As you note, measuring research is not a straightforward process given time-lags, differential access to resources and inputs, different kinds of outputs across researchers/disciplines, etc. and rote metrics often do a very poor job. I agree that there are different rates of activity and success across staff, though I think the number who are completely inactive are low, and many that are very active are working a lot more than 40 hour weeks. So, what would you suggest as a method that would allow for variances in inputs, desired outputs, and disciplinary norms but demonstrates effective academic work (given that systems such as the RAE in the UK have destroyed collegiality in many institutions by making faculty overly competitive with regards resource and workload allocation, and led to stressful and overly managerial working environments that are actually counter-productive to scholarship)? Is not individual performance accounted for by PMDS and promotions? I’m interested to know how you think this might work in a way that is fair, equitable and transparent? And which wouldn’t be used to unnecessarily beat all staff – those overly-productive and those less so – over the head and weaken what is already frail collegiality given cutbacks in the sector?

    I’d also like to point out that research activity in Ireland is measured in some cases. I’ve spent the last ten years filling out reports for the HEA for PRTLI, which includes daily timesheets for all staff and PhD students, quarterly financial audits, and six-monthly research audits in which you have to detail every new award, paper, presentation, collaboration, event organised, etc. (and if you have some EU funds you do the same, again down to daily timesheets). In other words, if you have exchequer research funding, you do get measured and managed. The irony, of course, is that you don’t get the funds unless you are highly productive and impactful to begin with.

  2. Hi Brian,

    I largely agree, in particular after a long night spent writing a research proposal. All the previous attempts by the HEA to measure impact were simply ridiculous, mostly because they never wanted to know the real time spent doing what we do; they always asked us to indicate the fraction of time of the fictional 35-hour week. It is fair to say that the discussion here is immature.

    Yet, I do believe there is room to marry the three mandates of a university. In CRANN we try to do basic science (our success in the ERC says we are doing well), we educate kids for a future (our employment rate post-PhD is essentially 100%), and we are strong in translating basic research to industry (we have >70 industrial partners). This however does not come cheap. I cannot think of any of my colleagues working less than 55 hours per week, and I sleep on average 5 hours per night. Many weekends are also sacrificed. The real question is how sustainable all this is in the long run.

  3. Excellent article, as ever. I should point out that the standard teaching load for IoT lecturers is 18 contact hours/week, not 16 – and 20 for assistant lecturers, who make up more than 25% of the workforce.
    In consequence, many young academics fresh from their university postgrad soon give up research on taking up a position in the IoT sector, a terrible waste of human resources.
    The exception is those in areas of technology considered important; these lucky and talented people sometimes get good enough funding to buy out teaching hours. For the rest of us, research is what you do 5 – 9 pm!
