Making public or institutional decisions on the basis of no measures is bad. It is not necessarily the case, however, that making them on the basis of flawed measures is better. In the parable of the blind men and the elephant, a number of blind men each feel a different part of an elephant. Each can only describe the part he himself feels, and so each impression of the animal is partial and incomplete. It is only when they pool their knowledge that they get a better picture.
Similar issues beset university rankings. They are many and varied, and no single ranking (perhaps no ranking at all) can be complete. A new report from the European University Association (available here) on the rankings issue evaluates both the overall rankings concept and the main ranking systems.
Here is the Main conclusions section (my emphasis).
4. Main conclusions
1. There have been significant new developments since the publication of the first EUA Report in 2011, including the emergence of a new venture, the Universitas 21 Rankings of National Higher Education Systems, methodological changes made in a number of existing rankings and importantly a considerable diversification in the types of products offered by several rankings providers.
2. Global university rankings continue to focus principally on the research function of the university and are still not able to do justice to research carried out in the arts, humanities and the social sciences. Moreover, even bibliometric indicators still have strong biases and flaws. The limitations of rankings remain most apparent in efforts to measure teaching performance.
3. A welcome development is that the providers of the most popular global rankings have themselves started to draw attention to the biases and flaws in the data underpinning rankings and thus to the dangers of misusing rankings.
4. New multi-indicator tools for profiling, classifying or benchmarking higher education institutions offered by the rankings providers are proliferating. These increase the pressure on and the risk of overburdening universities, obliged to collect ever more data in order to maintain as high a profile as possible. The growing volume of information being gathered on universities, and the new “products” on offer also strengthen both the influence of the ranking providers and their potential impact.
5. Rankings are beginning to impact on public policy making as demonstrated by their influence in the development of immigration policies in some countries, in determining the choice of university partner institutions, or in which cases foreign qualifications are recognised. The attention paid to rankings is also reflected in discussions on university mergers in some countries.
6. A growing number of universities have started to use data compiled from rankings for the purpose of benchmarking exercises that in turn feed into institutional strategic planning.
7. Rankings are here to stay. Even if academics are aware that the results of rankings are biased and cannot satisfactorily measure institutional quality, on a more pragmatic level they also recognise that an impressive position in the rankings can be a key factor in securing additional resources, recruiting more students and attracting strong partner institutions. Therefore those universities not represented in global rankings are tempted to calculate their likely scores in order to assess their chances of entering the rankings; everyone should bear in mind that not all publication output consists of articles in journals, and many issues relevant to academic quality cannot be measured quantitatively at all.
The takeaway I get is one of slight unease: the rankings are here to stay, and they are, self-admittedly, flawed and nascent metrics even of that which they purport to measure, yet these rankings are being taken much more seriously than cold analysis would warrant. Their face validity is greater than their construct validity. And yet we see that UCC has been outed as having taken great pains to ensure that its ranking on one metric was improved, a measure described in an influential online publication as “rigging the rankings?”. It is highly improbable that UCC are alone in the world in taking such measures.