Research metrics are fraught with danger, and the danger usually arises when they are abused. We can measure the citation history of a paper, but that tells us little beyond its citation history. We can measure raw output, but that tells us only how busy someone is. We can measure lots of things, but they are all limited in some way. These limitations do not prevent university administrations from seizing on metrics and using them appallingly. I was recently informed of an Irish academic unit where papers published in journals not indexed in the ISI Web of Science cannot be counted towards promotion or any other college activity. They are un-papers. This is stark raving lunacy, but it shows how dangerous a simple metric can be in the hands of the ignorant.

A new addition to the mix is a proposed measure of how good one is as a research degree supervisor. It builds on the h index, the largest number h such that a researcher has published h papers each cited h times or more; the h1 index is defined over the supervised students' h indices. Someone with an h1 index of 4 has had 4 students, each individually with an h index of 4 or more.
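Because the h1 index is simply the h index applied recursively to students' h indices, a short sketch may make the definition concrete. This is a minimal illustration in Python; the function names and the sample citation counts are mine, purely illustrative, and not taken from Tol and Ruane's proposal:

```python
def h_index(citations):
    """Largest h such that h papers have each been cited at least h times."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

def h1_index(per_student_citations):
    """Largest h1 such that h1 supervised students each have an h index
    of at least h1 -- i.e. the h index applied to the students' h indices."""
    return h_index([h_index(c) for c in per_student_citations])

# Illustrative numbers only: three students' per-paper citation counts.
students = [[10, 8, 5, 4, 3], [2, 1], [6, 6, 6]]
print([h_index(s) for s in students])  # [4, 1, 3]
print(h1_index(students))              # 2: two students have an h index of 2 or more
```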

Devised by Richard Tol and Frances Ruane, it explicitly equates quality with citation, and that's a problem. Let's leave aside the confusion of conflating process (how good one is as a research supervisor) with product (how much one outputs); even on its own terms, the measure is very incomplete.

Tol and Ruane state

A professor is a good PhD adviser if she trains many good researchers.

and that is true. However, the assumption is that good researchers go on to publish academic articles and get cited, thus increasing their own h index and their supervisor's h1 index. That assumption is fallacious.

First, not every PhD graduate goes into academia. UK research suggests that between 60% and 90% of doctoral graduates, depending on field, do NOT go into academia. Indeed, not every graduate goes into any work… About half of my own graduated PhD students have gone into academia.

Second, not every PhD graduate, even one working in research outside academia, publishes. I can think of a number of people in my extended family with doctoral qualifications, working in research-intensive industrial areas, who do not publish. I think of some of my own graduates, some I have externally examined, and others I have known in other ways, all working in employment that requires significant research skill and output, and none of them publishing.

Third, even when PhD graduates working outside academia do publish, much of their work appears in non-academic outlets, such as trade journals or reports. These are cited less than peer-reviewed journals regardless of the quality of the work. In economics and finance, think of the graduates who have gone into industry and produce regular research-based newsletters or other output. These, except in the rarest cases, don't get cited.

Fourth, any citation-based approach is limited to those areas where citations are relevant or possible. Citation analysis, even in areas where there are data and where it is accepted as an approach, is very limited and limiting, as this Singapore report shows. Much citation analysis is performed on a few databases, most notably the ISI database, which has a well-known sample selection bias against the social sciences and humanities. In general, the further one moves from the hard sciences, the less useful citation-based approaches appear to be. There is a wonderful blog, rather skeptical of the "citation culture", which is well worth reading if one is interested. The LSE Impact of Social Science blog is a must-read. All in all, citation measurement is a very, very partial first step towards any form of evaluation of impact or excellence. David Laband of Georgia Tech muses on the relative metric-defined merits of different types of economics here.

Fifth, research supervision is a process, not a product. It is a human process, and it is vastly multifaceted. TCD defines the supervisory process as involving relationship building, assistance, supervision of research quality, training and development, management of student welfare, and supervisory organization. Each of these is further broken down into specific examples. This is not, I am sure, much different from other universities. Good supervision is about much, much more than ensuring that graduates graduate and publish. There is an excellent short piece in the THES on the "good shepherd" approach to PhD supervision.

So the Ruane-Tol h1 measure is, in and of itself, a nice addition to the bibliometric toolkit. But as a measure of the quality of a process, it is woefully inadequate.
