
Friday 2 March 2018

Broken Ranks

The only rank that counts (Image: Brett Jordan, CC BY 2.0)
A couple of years ago I wrote in Funding Insight about the modern obsession with league tables and how, while relatively harmless when used to bulk out a Sunday newspaper, they can be dangerously corrosive when used to compare universities globally.

I was heartened, then, to read a report published before Christmas by the Higher Education Policy Institute. This arrived with little fanfare, possibly because the world in general was torn between seeing the election of Donald Trump as the End of Days, and obsessively comparing and buying scented candles for their loved ones.

A quick search of both Research Professional and Times Higher suggests that neither gave the report any airtime. This is surprising but, perhaps, inevitable. The Times Higher is one of the four major suppliers of global higher education rankings, and it is not in its interest to highlight their inherent unreliability. For RP, it may simply be that the report said nothing that wasn't already known, giving it roughly the shock value of a bear's toileting arrangements. Nevertheless, I think it is important to air the report's findings and look at their validity.

The report, written by Hepi president and former director Bahram Bekhradnia, was called International University Rankings: For Good or Ill? The title, of course, is somewhat disingenuous. The motivation for drafting a 25-page report was always unlikely to have been to endorse the status quo. 'Ill' was always more likely to be the conclusion than 'good'.

However, Bekhradnia sets to his task with admirable energy and clarity. He starts by cutting to the heart of the problem: “Rankings—both national and international—have not just captured the imagination of higher education but in some ways have captured higher education itself.”

Their rise has been the result of the marketisation of higher education. In an increasingly competitive environment, consumers want some sense of who the providers are, and which are worth spending their money on.

Four main international rankings have emerged to meet this demand:

  • THE World University Rankings (produced by Times Higher Education)
  • QS World University Rankings (produced by Quacquarelli Symonds)
  • Academic Ranking of World Universities (ARWU and sometimes described as ‘the Shanghai Rankings’, produced by Shanghai Jiao Tong University)
  • U-Multirank (produced by a consortium of four European institutions, and funded by the European Union).

All four work in fundamentally the same way. The compiler decides which indicators of ‘quality’ to use (such as research, teaching, commercialisation and engagement) and how those indicators should be weighted. The resulting algorithm then churns out, for three of the four rankings, an ‘ordinal’ list—that is, a rudimentary list that ranks every institution from best to worst, providing an easily digestible number but little in the way of nuance.
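
To make those mechanics concrete, here is a minimal sketch of how such a composite ranking might be assembled. The indicator names, weights, institutions and scores below are all invented for illustration and do not correspond to any real ranking's methodology or data.

```python
# A minimal sketch of a composite ranking: weighted indicator scores are
# combined into a single number and sorted into an ordinal list.
# All names, weights and scores below are invented for illustration only.

# Hypothetical weights for each 'quality' indicator (summing to 1.0).
WEIGHTS = {
    "research": 0.5,
    "teaching": 0.3,
    "commercialisation": 0.1,
    "engagement": 0.1,
}

# Hypothetical per-indicator scores, already scaled to 0-100.
institutions = {
    "University A": {"research": 92, "teaching": 70, "commercialisation": 55, "engagement": 60},
    "University B": {"research": 75, "teaching": 88, "commercialisation": 40, "engagement": 80},
    "University C": {"research": 60, "teaching": 95, "commercialisation": 70, "engagement": 85},
}

def composite_score(scores):
    """Return the weighted sum of an institution's indicator scores."""
    return sum(WEIGHTS[indicator] * value for indicator, value in scores.items())

# Sort from 'best' to 'worst' to produce the ordinal list.
ordinal = sorted(institutions, key=lambda name: composite_score(institutions[name]), reverse=True)

for rank, name in enumerate(ordinal, start=1):
    print(f"{rank}. {name} ({composite_score(institutions[name]):.1f})")
```

With these made-up numbers, the institution with the strongest teaching finishes last simply because research carries half the weight, which is precisely the kind of nuance a single ordinal position throws away.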

As well as being flawed by this sledgehammer presentation of results, the rankings are all limited and skewed both by the data available to them and by their own value judgements.

On the data side, there is a woeful lack of consistency and standardisation between countries. It's only really in citations that there is international agreement (though these create problems in other ways). Even the definition of a student varies widely, and in some countries there is no differentiation between a master's student and an undergraduate.

This unreliability is exacerbated by the lack of checks on the data submitted by institutions. The report gives the example of Trinity College Dublin, which was alarmed by its fall in the rankings. In checking the data it found that a decimal point had been misplaced in one of its figures, resulting in a tenfold difference in one key measure. There had been no verification of the submitted data at the time.

In addition, where no data are forthcoming, the compilers of the QS rankings sometimes “data scrape”. This is a euphemism for gathering the missing information from other, less robust sources, such as institutional websites. Inevitably this is of very variable quality, and the results aren’t validated or verified.

Such issues around the data become more alarming when one looks at the weight it is given by the compilers. For instance, a full 50 per cent of the QS ranking is based on “reputation surveys”, with a further 20 per cent on citation rates. The response rate for the surveys is, apparently, less than 10 per cent. The ARWU judges “education quality” (30 per cent of the score) on whether staff or alumni have Nobel Prizes or Fields Medals. How on Earth does that demonstrate education quality?

Moreover, Bekhradnia makes the point that the international rankings are heavily biased in favour of research. For me, a research manager, this isn’t a problem. In my ideal world everything would be biased in favour of research. But the rankings shouldn’t pretend to be other than what they are. Those keystone reputation surveys, for instance, will be based on colleagues’ awareness of articles, conference papers and historic track records, all of which are founded on research. How else will a Belgian academic know about a university in, say, Australia? Bekhradnia suggests that 85-100 per cent of the measures used by THE, QS and ARWU are in some way research related.

As such, the only way to rise in the rankings is to invest heavily in research. For Bekhradnia, “such a focus on research is appropriate only for a small number of universities. One important—perhaps the most important—function of universities is to develop the human capital of a country, and enable individuals to achieve their potential. Most institutions should be focusing on their students, and a ranking scheme that takes no account of that cannot, as all do, claim to identify the ‘best’ universities.”

The report is damning. While Bekhradnia concludes by suggesting some ways in which the rankings could be improved, the overall implication is that it is not in the interest of the compilers or the institutions at the top of the rankings to do so. Thus, we are likely to have to live with the rankings for some time to come. The best we can hope for is that senior management within higher education takes them less seriously. But I’m not going to hold my breath for that. As the crowds gather to watch the annual parade of providers, it would take a brave manager to be the first to shout out what’s apparent to all.

This article first appeared in Funding Insight in January 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
