So, as reported in the THES, Oxford sociology is predicted to come top of the Sociology (sub-panel 23) REF assessment. This is the conclusion of a paper posted on arXiv by Mryglod, Kenna, Holovatch and Berche in which they use the Hirsch (H) index to forecast the 2014 REF rankings in four subjects: biological sciences, physics, chemistry and sociology.
The inputs to the H index are the number of publications and the number of citations to them; essentially it balances one against the other. A department that publishes a lot that nobody pays any attention to gets a low H index, while a department with a more modest output that is cited a lot (and presumably consists of higher quality publications) gets a higher one. For the prediction exercise the index was calibrated against the last RAE, taking into account the fact that only 4 outputs per person were submitted.
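The definition is simple enough to sketch in a few lines of code. This is just the standard Hirsch definition, not the calibrated variant used in the paper; the citation counts below are made-up illustrations:

```python
def h_index(citations):
    """The h-index: the largest h such that h publications
    each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this publication still "supports" an index of h
        else:
            break
    return h

# A prolific but ignored department scores low...
print(h_index([0, 1, 0, 2, 1, 0, 0, 1]))  # -> 1
# ...while a smaller, well-cited output scores higher.
print(h_index([25, 18, 12, 10, 9, 4]))    # -> 5
```

The second department has fewer publications but a much higher index, which is exactly the balancing property described above.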
Well, I like this result, but then again I would, wouldn't I? At the very least it gives some independent evidence to support my conviction that two departments I have been a member of have been hard done by in past UK research assessment exercises.
Nobody should claim that a single metric could tell you all you need to know about research quality. But it seems to me equally foolish to ignore this evidence. After all the actual procedure allegedly used by REF panels treats us as though we are idiots.
This time round there were about 30 submissions to the Sociology sub-panel. Let's assume an average of 30 staff per submission, each with 4 REFable publications. That gives us 3,600 publications for the panel to read. Let's assume that each output has to be read by two panel members (surely fairness would require that?). There are 20 members of the panel, so each must read 2 x 180 = 360 pieces of work. If we assume that the reading goes on for a whole year, then each panel member would have to read and reach an opinion about almost 7 outputs a week.
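The back-of-the-envelope calculation above can be checked directly. All the inputs here are the post's own assumptions (30 submissions, 30 staff, 4 outputs, 2 readers, 20 panel members), not official REF figures:

```python
submissions = 30          # assumed submissions to the Sociology sub-panel
staff_per_submission = 30 # assumed average staff per submission
outputs_per_person = 4    # REF outputs per person
readers_per_output = 2    # each output read by two panel members
panel_members = 20
weeks_per_year = 52

total_outputs = submissions * staff_per_submission * outputs_per_person
total_readings = total_outputs * readers_per_output
readings_per_member = total_readings / panel_members
per_week = readings_per_member / weeks_per_year

print(total_outputs)        # 3600
print(readings_per_member)  # 360.0
print(round(per_week, 1))   # 6.9, i.e. almost 7 outputs a week
```

Tweaking any one assumption (say, three readers per output, or a shorter reading window) only makes the per-member load heavier.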
That doesn't sound so bad. I could easily read 7 articles a week if I had nothing else to do. But an unknown proportion of outputs will be monographs. I couldn't read 7 monographs a week even if I got leave of absence from my day job. Most REF panel members will also have a day job to do, i.e. they are doing their REF work in their spare time. During term time I probably read about 2 new articles a week, usually things that are directly related to my research or teaching. Unless I gave up sleeping, it is not obvious how I would be able to do my day job and at the same time read and reflect on 7 outputs that are for the most part unrelated to my professional interests.
The conclusion is clear: either the REF panel members are selected for their superhuman reading capacities (to which hypothesis I assign a low prior probability) or the process is in part bogus.
I don't doubt for one moment that REF panel members take the job seriously. I also don't doubt that they pass at least some of the text of each output before their eyes. I do doubt that they read a substantial proportion of the submissions in anything like the common sense use of that word. No doubt a lawyer would be able to defend what they do as "reading" in some emaciated and purely formal sense of that word, but really that would be a rather pathetic and dishonest response.
In reality we all suspect what is really going on (and conversations with people that actually know lead me to believe that these suspicions are not without foundation) but nobody wants to break ranks and say the Emperor has no clothes. To mix my myths and metaphors we all know what happened to Cassandra.
When the real REF results are published on December 18th we will know how accurate the predictions have been. Of course I'm hoping for the best, not least because I know that the colleague responsible for our submission did a quality job. It's not beyond the bounds of possibility however that the results for the sociology panel will differ to a marked degree from the H index predictions. And if they do then somebody should be asking some hard questions about the REF process as it applies to sociology.
There has already been a bit of comment in the twittersphere to the effect that the H index is in some way biased towards elite institutions. A moment's thought suggests that this is rather unlikely. To produce that sort of bias would require a conspiracy on a fairly monumental scale. This conspiracy would have to involve the editors of journals, the referees of journal articles and, even more implausibly, scores of people, many personally unknown to the authors of the outputs, conspiring to inflate citations.
It's not impossible, but compare that just so story to one that involves a largely self-appointed clique that operates in the proverbial smoke filled room to reward, with no serious scrutiny, the type of sociology they like while pursuing vendettas against whoever and whatever they dislike. They don't even have to discuss doing it. A nod and a wink across a table and a tacit agreement not to stab my kind of sociology in the back if I leave your kind alone is sufficient.
During the course of writing this I became curious to know what my personal H index was. It's easy to find out: I did it with Google Scholar. Getting the numbers is one thing, but what do they mean? Rather conveniently there is an LSE publication that gives information about average H index scores for a number of social scientific disciplines. It turns out that the average sociology professor has an H index of about 3.7. That made me pretty happy. My personal score for any sensible periodization is, to adapt Harry Enfield, considerably larger than that. And I should add that I'm probably at the lower end of my department's score distribution.