Can we really measure research supervisory quality?

Research metrics are fraught with danger, most of all when they are abused. We can measure the citation history of a paper, but that tells us little beyond its citation history. We can measure raw output, but that tells us simply how busy someone is. We can measure lots of things, but they are all limited in some way. Those limitations do not prevent university administrations from seizing on metrics and using them appallingly. I was recently informed of an Irish academic unit where papers published in journals not indexed in the ISI Web of Science may not be counted towards promotion or any other college activity. They are un-papers. This is stark raving lunacy, but it shows how dangerous a simple metric can be in the hands of the ignorant.

A new addition to the mix is a proposed measure of how good one is as a research degree supervisor. It builds on the h-index: a researcher has an h-index of h if h of their papers have each been cited at least h times. The h1-index applies the same rule to the h-indices of the students one has supervised: someone with an h1-index of 4 has had 4 students, each individually with an h-index of 4 or more.
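For concreteness, here is a minimal sketch in Python of how the two indices would be computed from raw citation counts. The function names and the sample numbers are mine, for illustration only, and are not taken from the Tol-Ruane paper.

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

def h1_index(students_citations):
    """Largest h1 such that h1 supervised students each have an h-index >= h1.

    Computed by applying the h-index rule to the students' h-indices.
    """
    return h_index([h_index(c) for c in students_citations])

# Illustrative data: four students whose citation records give them
# h-indices of 6, 5, 4 and 4, so the supervisor's h1-index is 4.
students = [
    [10, 9, 8, 7, 6, 6],  # h = 6
    [12, 9, 7, 6, 5],     # h = 5
    [20, 15, 4, 4],       # h = 4
    [5, 5, 4, 4, 1],      # h = 4
]
print(h1_index(students))  # prints 4
```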

Devised by Richard Tol and Frances Ruane, it explicitly equates quality with citation, and that’s a problem. Let’s leave aside the confusion of conflating process (how good one is as a research supervisor) with product (how much one outputs): even on its own terms it’s very incomplete.

Tol and Ruane state:

A professor is a good PhD adviser if she trains many good researchers.

and that is true. However, the assumption is that good researchers go on to publish academic articles and to be cited, thus increasing their own h-indices and their supervisor’s h1-index. That is fallacious.

First, not every PhD graduate goes into academia. UK research suggests that, depending on field, between 60% and 90% of doctoral graduates DO NOT go into academia. Indeed, not every graduate goes into any work at all… About half of my own graduated PhD students have gone into academia.

Second, not every PhD graduate publishes, even those working in research outside academia. I can think of a number of people in my extended family with doctoral qualifications, working in research-intensive industrial areas, who do not publish. I think too of some of my own graduates, some I have externally examined, others I have known in other ways, all working in jobs that demand significant research skill and output, and none of them publishing.

Third, even when PhD graduates working outside academia do publish, much of it appears in non-academic outlets such as trade journals or reports. These are less cited than peer-reviewed journals regardless of the quality of the work. In economics and finance, think of the graduates who have gone into industry and produce regular research-based newsletters or other output. These, except in the rarest cases, don’t get cited.

Fourth, any citation-based approach is limited to those areas where citations are relevant or possible. Citation analysis, even in areas where there are data and where it is accepted as an approach, is very limited and limiting, as this Singapore report shows. Much citation analysis is performed on a few databases, most notably the ISI database, which has a well-known sample selection bias against the social sciences and humanities. In general, the further one goes from the hard sciences, the less useful citation-based approaches appear to be. There is a wonderful blog, rather skeptical of the “citation culture”, which is well worth reading if one is interested. The LSE Impact of Social Science blog is a must-read. All in all, citation measurement is a very partial first step towards any form of evaluation of impact or excellence. David Laband of Georgia Tech muses on the relative, metric-defined merits of different types of economics here.

Fifth, research supervision is a process, not a product. It is a human process, and it is vastly multifaceted. TCD defines the supervisory process as involving relationship building, assistance, supervision of research quality, training and development, management of student welfare, and supervisory organization. Each of these is further broken down into specific examples. This is not, I am sure, much different from other universities. Good supervision is about much more than ensuring that graduates graduate and publish. There is an excellent short piece in the THES on the “good shepherd” approach to PhD supervision.

So, the Ruane-Tol h1 measure is, in and of itself, a nice addition to the bibliometric toolkit. But as a measure of the quality of a process, it is woefully inadequate.


8 responses to “Can we really measure research supervisory quality?”

  1. Conor

    Hi Brian. Thanks for this one … and for all of the other blogs. Very much appreciated. We seriously need to move away from the current system to one that takes into account personal and professional impact, discipline specific outputs etc – just as you have mentioned. Remembering that “not all that can be measured counts and that not all of what counts can be measured”. This is so true of research supervision. With academic freedom, how can we really question received wisdom if we need to be thinking of the impact factor of the journal … regardless of whether it is read / referenced / communicates to the various constituencies that we wish to inform / persuade outside of the academy. Time to mobilise the troops :-)

  2. “A professor is a good PhD adviser if she trains many good researchers”
    I actually don’t agree with this, because it treats quantity as something to be measured as much as quality. I would argue that a professor is a good PhD adviser if the researchers she trains ultimately turn out to be good. Professors aren’t just assembly lines of PhD students at the end of the day; the key feature should be the quality of the work they supervise, regardless of the number of supervisees.

  3. Hi Brian,
    Thanks for this excellent post. I completely agree with your viewpoint on what makes a professor a good PhD supervisor. So many factors should be taken into account, among which the number of PhD students going into academia and their fields of research particularly matter. Not all research fields are alike.
    In general, I observe that professors who are good at scientific research are likely to produce good PhD students, as they have good research networks, know which hot topics are worth investigating, are in good academic institutions, and rigorously select PhD candidates.
    When I was a PhD student, I very much appreciated the way I was trained, motivated and guided while carrying out research, not necessarily the way to publish. This may be specific to my case.

  4. Brian: Thanks for this. Fully agree.

    ISI WoS indeed holds too much power over what counts as a worthy journal, and its decision-making is less than fully transparent. Your Irish example is not an isolated one. Entire countries, e.g. Spain, use this criterion.

    As to the core topic, our measure of course only applies to PhD alumni who stay in academia. We need to think of other measures for those who decide to leave the university (i.e., the majority).

    One of my alumni started a travel agency. She is very happy, but I doubt that has anything to do with me. Another is a rising star in a multinational. I take pride in that, but I guess it is about the skills and character that come from writing the PhD, rather than its contents.

    Our paper is supposed to be a starting point for designing appropriate metrics rather than the end point.

    • Hi Richard.
      Glad to hear that (as I hoped/suspected). The fear that prompted me to post is the Irish example, other monstrous misapplications of bibliometrics, and the use of blind metrification for management. As I think you have noted before, universities are complex and, like all complex organizations, require complex metric sets. These may conflict internally, so harassed administrators may simply pick and choose simple ones.
      It may be useful to add an addendum noting the academic-only nature of the measure; that would deflect further criticism. As you know, I am in favour of metrics, but, like salt, they are best used sparingly.

  5. Pingback: Ninth Level Ireland » Blog Archive » Can we really measure research supervisory quality?

  6. Reblogged this on DCU union and commented:
    In this post, economist Brian Lucey exposes the fallacies of the latest addition to the academic micro-measurement industry: measuring research supervisory quality. But it is also well worth reading as a recap on the limitations of citation metrics generally, and in particular the drive, currently also being pursued at DCU, to exclude papers and articles not found in selected commercial databases. This practice is described in unequivocal terms: ‘This is stark raving lunacy, but it shows how dangerous a simple metric can be in the hands of the ignorant.’
