So the waiting is over, the results are in, and they are what they are. Congratulations to the winners and commiserations to the losers. As Brucie used to say, "Good game, good game!"
A few weeks ago I posted on the Mryglod, Kenna, Holovatch and Berche (MKHB) predictions for the Sociology sub-panel 2014 rankings, based on the Hirsch (H) index. Just to remind you, the H index basically trades citations off against volume of production: a department has an H index of h if h of its outputs have each been cited at least h times. So it is no use producing a lot that nobody cares enough about to cite. To do well on H you have to produce work that people read.
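For the curious, the trade-off is easy to see in code. Here is a minimal sketch of the H-index calculation; the citation counts are made up for illustration, not taken from any real department.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four of them have at
# least 4 citations each, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note how one heavily cited paper on its own does little (a single paper with 1,000 citations still gives h = 1), and a mountain of uncited papers does nothing at all.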
So how did their predictions work out? Here's a graph I quickly ran off this morning. It plots 2014 REF GPA against MKHB's H index score. It looks like an impressive relationship, but in fact the Pearson correlation is only 0.15 (the Spearman rank correlation, to be fair, is 0.65).
What is interesting is who sits above and below the prediction line. The top four according to the 2014 REF ranking, York, Manchester, Cardiff and Lancaster, as well as Essex, who come in at number 7, are all punching above their weight in H index terms. Another way of putting it is that the panel rated their outputs more highly than the research community did (assuming that citation mainly reflects positive appreciation, significance, impact etc etc). Then there is the group that did less well than their H index suggests they should have: the OU, Warwick, Sussex, Brunel, Leicester and Queen's. If there is to be great wailing and gnashing of teeth, then there is some justification for it from these guys.
Looking at this picture, what strikes me most is how the H index really brings out three clusters of institutions:

1. Oxford, Manchester, Edinburgh, LSE and Cambridge, where broadly speaking the H index and the REF evaluations agree in rating the institutions highly;

2. City, Goldsmiths, Manchester Met., East London and Roehampton, where H index and REF agree in rating the institutions (relatively) poorly;

3. The crap shoot in the middle, where the H index rates everyone about the same and where whatever it is that the REF panel members are thinking about, trading off and higgling over makes all the difference.

It would be really nice to know what that was, but I guess nobody is telling...
Other snippets of information that may be worth knowing:
The Pearson correlation of REF GPA with the number of staff submitted is 0.11, but the rank correlation is 0.60, i.e. roughly the same as with the H index. Having at least one member of the REF panel from your institution is also correlated (modestly) with GPA (Pearson = 0.09, Spearman = 0.52). And if you want to predict REF GPA without any direct measure of research quality, then the way to go is to use the number of staff submitted plus whether you have a REF sub-panel member. The multiple correlation with these two measures is 0.19, i.e. you get a better prediction from this than from knowing the institution's H index score. Now that is food for thought.
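For readers who want to replicate this kind of exercise on their own data, here is a sketch of how the three statistics above can be computed: Pearson correlation, Spearman rank correlation (which is just Pearson on the ranks), and the multiple correlation from a two-predictor least-squares fit. The GPA, staff and panel-membership numbers below are toy values I invented for the example, not the actual REF figures.

```python
import numpy as np

def pearson(x, y):
    """Pearson product-moment correlation of two vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (this simple ranking assumes no tied values)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

def multiple_correlation(y, X):
    """Correlation between y and its least-squares fit on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pearson(y, X @ beta)

# Toy data: GPA, staff submitted, and whether the unit had a panel member.
gpa   = np.array([3.1, 2.8, 3.4, 2.5, 3.0, 2.9])
staff = np.array([40, 25, 55, 18, 35, 30])
panel = np.array([1, 0, 1, 0, 1, 0])

print(pearson(gpa, staff))
print(spearman(gpa, staff))
print(multiple_correlation(gpa, np.column_stack([staff, panel])))
```

The gap between a small Pearson value and a much larger Spearman value, as in the figures above, is exactly what you would expect when the relationship is monotone but not linear, or when a few outlying institutions drag the linear fit around.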
So champagne for some and sackcloth and ashes for others. But actually we are all losers from this ridiculous and demeaning process. It's time for those who have come out of it smelling of roses (this time) to stand up in solidarity with those who have the faint whiff of the farmyard about them. There but for the grace of God, etc.
And by the way, casting an eye over the rankings in a few cognate disciplines makes me think wtf!...