Caveat Emptor

The opinions expressed on this page are mine alone. Any similarities to the views of my employer are completely coincidental.

Thursday, 18 December 2014

REF 2014

So the waiting is over, the results are in and they are what they are. Congratulations to the winners and commiserations to the losers. As Brucie used to say: "Good game, good game!"

A few weeks ago I posted on the Mryglod, Kenna, Holovatch and Berche (MKHB) predictions for the Sociology sub-panel 2014 rankings based on the Hirsch (H) index. Just to remind you, the H index basically trades off citations against volume of production. So it is no use producing a lot that nobody cares enough about to cite; to do well on H you have to produce work that people read.

So how did their predictions work out? Here's a graph I quickly ran off this morning. It plots 2014 REF GPA against MKHB's H index score. It looks like an impressive relationship, but in fact the Pearson correlation is only 0.15 (the Spearman rank correlation, to be fair, is 0.65).
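That sort of gap between the two coefficients is easy to reproduce. Here is a minimal sketch - with invented toy numbers, not the REF data - showing how a single institution far off the linear trend can drag the Pearson correlation right down while barely denting the rank correlation:

```python
from statistics import mean

def pearson(x, y):
    """Plain Pearson product-moment correlation."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (no tied values in the toy data below, so simple ranking suffices)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

# Toy numbers (NOT the REF data): one extreme H index score paired with
# a middling GPA wrecks the linear correlation but not the rank ordering.
h   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
gpa = [1, 2, 3, 4, 5, 6, 7, 8, 9, 5.5]
print(f"Pearson = {pearson(h, gpa):.2f}, Spearman = {spearman(h, gpa):.2f}")
```

With these made-up numbers the Pearson coefficient comes out at roughly 0.15 while the Spearman coefficient stays close to 0.9 - the same qualitative pattern as in the REF comparison.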

What is interesting is who is above and below the prediction line. The top 4 according to the 2014 REF ranking - York, Manchester, Cardiff and Lancaster - as well as Essex, who come in at number 7, are all punching above their weight in H index terms. Another way of putting it is that the panel rated their outputs more highly than the research community did (assuming that citation reflects, in the main, positive appreciation, significance, impact etc etc). Then there is the group that did less well than their H index suggests they should have: the OU, Warwick, Sussex, Brunel, Leicester and Queen's. If there is to be great wailing and gnashing of teeth then there is some justification for it from these guys.

Looking at this picture, what strikes me most is how the H index brings out three clusters of institutions: 1. Oxford, Manchester, Edinburgh, LSE and Cambridge, where broadly speaking the H index and the REF evaluations agree in rating the institutions highly; 2. City, Goldsmiths, Manchester Met., East London and Roehampton, where the H index and the REF agree in rating the institutions (relatively) poorly; 3. the crap shoot in the middle, where the H index rates everyone about the same and where whatever it is that the REF panel members are thinking about, trading off and higgling over makes all the difference. It would be really nice to know what that was, but I guess nobody is telling...

Other snippets of information that may be worth knowing: 

The Pearson correlation of REF GPA with number of staff submitted is 0.11, but the rank correlation is 0.60, ie roughly the same as with the H index. Having at least one member of the REF panel from your institution is also correlated (modestly) with GPA (Pearson = 0.09, Spearman = 0.52). And if you want to predict REF GPA without any direct measure of research quality then the way to go is to use number of staff submitted plus whether you have a REF sub-panel member. The multiple correlation with these two measures is 0.19, ie you get a better prediction from this than from knowing the institution's H index score. Now that is food for thought.
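For two predictors, the multiple correlation can be worked out directly from the three pairwise correlations. A small sketch - the predictor intercorrelation used below is invented for illustration, since the post above doesn't report it - shows how two individually weak predictors can jointly beat either one alone:

```python
def multiple_r(r_y1, r_y2, r_12):
    """Multiple correlation of y with two predictors x1 and x2,
    computed from the three pairwise Pearson correlations via
    R^2 = (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2)."""
    r_squared = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r_squared ** 0.5

# The two zero-order correlations reported above (0.11 and 0.09),
# with a hypothetical intercorrelation between the predictors:
print(round(multiple_r(0.11, 0.09, 0.0), 2))   # uncorrelated predictors
print(round(multiple_r(0.11, 0.09, -0.5), 2))  # negatively correlated predictors
```

With uncorrelated predictors the two correlations combine to about 0.14; a modest negative intercorrelation pushes the multiple correlation up to about 0.20, in the neighbourhood of the 0.19 reported above.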

So champagne for some and sackcloth and ashes for others. But actually we are all losers from this ridiculous and demeaning process. It's time for those who have come out of it smelling of roses (this time) to stand up in solidarity with those who have the faint whiff of the farmyard about them. There but for the grace of God etc.

And by the way, casting an eye over the rankings in a few cognate disciplines makes me think wtf!...

Tuesday, 16 December 2014

Causality fascism

While in Turku I heard a term that was new to me: "causality fascism". Actually, there is an important distinction to make about how it is used.

It could be used to designate someone who believes that the only useful social scientific work to be done involves the rigorous identification of some sort of average treatment effect. This view is beginning to take hold in some of the social sciences - political science seems to be particularly prone to colonization - and is clearly dumb, dumb, dumb.

It's OK to have other objectives, as long as you are absolutely clear about what they are. What the balance should be between uncovering heterogeneity and serious causal analysis will depend on the state of knowledge in the field. Frankly, if we are still struggling to establish what the facts of the matter are, then it is a little premature to put too much emphasis on causality.

On the other hand, if somebody starts to use words like "effect", "impact", "influence" etc rather than "differences", "heterogeneity" and so forth, and has no viable strategy to identify real causal effects, then a little "causality fascism" is surely a good thing. In this context, forcing people to address what the numbers they estimate actually mean in terms of the relevant counterfactual gives some sort of protection against the propagation of bullshit.

All of which gives me an excellent excuse to link to the classic Seinfeld Soup Nazi.

Monday, 15 December 2014

With Lenin in Turku - and on social mobility in Britain

I'm just back from Turku, where I was participating in an excellent workshop hosted (very generously) by Jani Erola and Elina Kilpi-Jakonen as part of their INDIRECT project. We were kept pretty busy in the conference room during the day and in other ways during the evening, so I didn't have much time for sightseeing until the morning of my departure, when I took a quick look around the city.

Turku is a charming place with a lot of elegant Jugendstil buildings in the vicinity of the central market square. The Swedish influence is still evident, and the Svenska Teater was advertising a forthcoming production of Ronja Rövardotter. Love of Astrid Lindgren is a part of North European culture that Britain doesn't really share. To be sure, Pippi Långstrump is known over here, but that's about it. My own daughter's love of Astrid Lindgren comes from her mother reading the stories to her in German, and Ronja Räubertochter is one of her favourites.

My wandering took me to the Turku Art Museum - a fine building in the Nordic Romantic style - where there was an exhibition of the work of the controversial Icelandic collage artist Erró. I'd never seen anything by him before, and the juxtaposition of incongruous images - the People's Liberation Army marching through New Jersey - was amusing and at the same time slightly unsettling.

Just opposite the museum is a bust of Lenin, who apparently stayed in the building behind it in 1907 when he was on the run from the secret police. The plaque doesn't say how long he was there, but it turns out that it must have been less than five hours, as he was pretty anxious to get on a boat to Stockholm - well, you would be if the alternative was a lengthy stay in Siberia.

And what was I doing there? Well, I was giving a paper on long-term trends in social class mobility in the UK. If you take care to assemble as much of the broadly comparable evidence as you can, it turns out that (at least for men - hold the headline for women) there is a pretty convincing case for believing that over the last 50 years - and possibly longer - there has been a continuous decrease in baseline levels of association of very roughly 1% per year, ie relative rates of social mobility have increased. Trimming the data and applying all sorts of data exclusions doesn't alter the story much.

Social mobility crisis? What crisis? You can find the slides from my presentation here.

Saturday, 29 November 2014

Oxford predicted first in 2014 REF!

So, as reported in the THES, Oxford sociology is predicted to come top of the Sociology (sub-panel 23) REF assessment. This is the conclusion of a paper posted on arXiv by Mryglod, Kenna, Holovatch and Berche in which they use the Hirsch (H) index to forecast the 2014 REF rankings in four subjects: biological sciences, physics, chemistry and sociology.

The inputs to the H index are the number of publications and the number of citations, and essentially it balances one against the other. A department that publishes a lot that nobody pays any attention to would get a low H index, but a department with a more modest output that is cited a lot (and presumably consists of higher-quality publications) would get a higher number. In the prediction exercise, the fact that only 4 outputs per person were submitted in the calibration period - the last RAE - is taken into account.
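For what it's worth, the index itself is simple to compute: it is the largest h such that there are h outputs with at least h citations each. A minimal sketch, using invented citation counts:

```python
def h_index(citations):
    """Return the Hirsch index: the largest h such that the author
    (or department) has h publications with at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Lots of output that nobody cites scores low...
print(h_index([1, 1, 0, 0, 0, 0, 0, 0]))  # 8 outputs, h = 1
# ...while a smaller, well-cited output scores higher.
print(h_index([25, 18, 12, 7, 4]))        # 5 outputs, h = 4
```

The two toy departments illustrate the trade-off described above: eight barely-cited outputs give h = 1, while five well-cited ones give h = 4.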

Well, I like this result, but then again I would, wouldn't I?! At the very least it gives some independent evidence to support my conviction that two departments I have been a member of have been hard done by in the past by the UK's research assessment exercises.

Nobody should claim that a single metric can tell you all you need to know about research quality. But it seems to me equally foolish to ignore this evidence. After all, the actual procedure allegedly used by the REF panels treats us as though we are idiots.

This time round there were about 30 submissions to the Sociology sub-panel. Let's assume an average of 30 staff per submission, each with 4 REFable publications. That gives us 3600 publications for the panel to read. Let's assume that each output has to be read by two panel members (surely fairness would require that?). There are 20 members of the panel, so each must read 2 x 180 = 360 pieces of work. If we assume that the reading goes on for a whole year, that means each panel member would have to read and reach an opinion about almost 7 outputs a week.
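The back-of-envelope arithmetic can be laid out explicitly - all the inputs below are the assumptions just stated, not official REF figures:

```python
# Assumed inputs for the Sociology sub-panel reading-load estimate.
submissions = 30           # roughly 30 submissions to the sub-panel
staff_per_submission = 30  # assumed average
outputs_per_person = 4     # outputs submitted per person
readers_per_output = 2     # assumed double reading for fairness
panel_members = 20
weeks = 52                 # a full year of reading

outputs = submissions * staff_per_submission * outputs_per_person
readings_each = outputs * readers_per_output // panel_members
per_week = readings_each / weeks

print(f"{outputs} outputs; {readings_each} readings per panel member; "
      f"{per_week:.1f} per week")
```

This reproduces the figures in the paragraph above: 3600 outputs, 360 readings per panel member, and just under 7 a week for a solid year.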

That doesn't sound so bad. I could easily read 7 articles a week if I had nothing else to do. But an unknown proportion of outputs will be monographs, and I couldn't read 7 monographs a week even if I got leave of absence from my day job. Most REF panel members also have a day job to do, ie they are doing their REF work in their spare time. During term time I probably read about 2 new articles a week, usually things that are directly related to my research or teaching. Unless I gave up sleeping, it is not obvious how I could do my day job and at the same time read and reflect on 7 outputs a week that are for the most part unrelated to my professional interests.

The conclusion is clear: either the REF panel members are selected for their superhuman reading capacities (to which hypothesis I assign a low prior probability) or the process is in part bogus.

I don't doubt for one moment that REF panel members take the job seriously. I also don't doubt that they pass at least some of the text of each output before their eyes. What I do doubt is that they read a substantial proportion of the submissions in anything like the common-sense use of that word. No doubt a lawyer would be able to defend what they do as "reading" in some emaciated and purely formal sense, but really that would be a rather pathetic and dishonest response.

In reality we all suspect what is really going on (and conversations with people who actually know lead me to believe that these suspicions are not without foundation), but nobody wants to break ranks and say the Emperor has no clothes. To mix my myths and metaphors, we all know what happened to Cassandra.

When the real REF results are published on December 18th we will know how accurate the predictions have been. Of course I'm hoping for the best, not least because I know that the colleague responsible for our submission did a quality job. It's not beyond the bounds of possibility, however, that the results for the sociology panel will differ to a marked degree from the H index predictions. And if they do, then somebody should be asking some hard questions about the REF process as it applies to sociology.

There has already been a bit of comment in the twittersphere to the effect that the H index is in some way biased towards elite institutions. A moment's thought suggests that this is rather unlikely. To produce that sort of bias would require conspiracy on a fairly monumental scale, involving the editors of journals, the referees of journal articles and, even more implausibly, scores of people, many personally unknown to the authors of the outputs, conspiring to inflate citations.

It's not impossible, but compare that just-so story with one that involves a largely self-appointed clique operating in the proverbial smoke-filled room to reward, with no serious scrutiny, the type of sociology they like while pursuing vendettas against whoever and whatever they dislike. They don't even have to discuss doing it: a nod and a wink across a table and a tacit agreement not to stab my kind of sociology in the back if I leave your kind alone is sufficient.

During the course of writing this I became curious to know my personal H index. It's easy to find out - I did it with Google Scholar. Getting the numbers is one thing, but what do they mean? Rather conveniently, there is an LSE publication that gives average H index scores for a number of social scientific disciplines. It turns out that the average sociology professor has an H index of about 3.7. That made me pretty happy: my personal score for any sensible periodization is, to adapt Harry Enfield, considerably larger than that. And I should add that I'm probably at the lower end of my department's score distribution.

Wednesday, 19 November 2014

Careful, careful...

It occurred to me only after I had written my last post that there is a danger in admitting professional ignorance. In David Lodge's Changing Places there is a dinner-party game called Humiliation in which Eng. Lit. professors compete over which canonical classics they haven't actually read. A particularly obnoxious and ultra-competitive character called Ringbaum wins by confessing that he hasn't read Hamlet. He is then sacked by his university.

Far-fetched? Well, there are many peculiar practices going on at British universities these days.

Monday, 17 November 2014

What have you been reading?

Kieran Healy has produced a fascinating graphic of the number of citations of the 10 most cited sociology journal articles in each decade from the 1950s onwards. He has a nice discussion of it which I won't repeat here. Several things struck me, though. Of the 14 journals that produced top-10 articles over the last 60 years or so, 13 are American and 1 is British. Perhaps not too surprising, but food for thought if and when metrics come to be used in the UK to measure research quality.

What did surprise me a little, though, is how few of the 60 articles I've actually read. I counted 13 that I've read with a modest degree of seriousness, and 7 of those are "classics" from the 50s (and before) and 60s. That leaves just 6 from the last 40 years. These are supposed to be the moving-and-shaking articles, and I've never heard of most of them.

So what should I conclude from this? One possible conclusion is that I'm an ignorant fellow who doesn't keep up with the literature. I'd be interested to know exactly how ignorant. Would anyone else like to reveal their own personal tally? Be honest.

Another possibility is that American and European sociology are actually quite different intellectual worlds, with rather different preoccupations and hence rather different reading and citation patterns. It would be odd to talk about American and European economics in that way - or at least I've not heard my economics colleagues discussing their discipline like that - whereas it might well make sense to talk about American and British sociology as distinct.

I'm primarily interested in my own society and that is reflected in what I read.

Friday, 14 November 2014

Sociology down the plughole

Just when you think things can't get any worse, you can always rely on the journal Sociology to prove you wrong. At the beginning of last year I was shaking my head in disbelief when they published a poem (in case you haven't heard of Sociology, it is supposed to be a serious academic journal).

Now they have gone one better (or worse) and published a piece by John Holloway - A Note on Hope and Crisis - that seems, in nine numbered paragraphs, to be a text originally intended for Radio 4's "Thought for the Day", or perhaps for an obscure marxisant literary journal. I don't think it is too extreme to say that it has zero social scientific content.

What are the editors thinking? What possible sense, for instance, can be made of paragraph 8, which consists of two sentences?

We are the crisis of capital and proud of it: that is our dignity, that is our hope. And our joy.

Well, at least the editors have done some really serious editorial work, as we can see from paragraph 9, where Holloway informs us: "The editors of Sociology have kindly suggested that it would be helpful to clarify my use of 'We'." Good to see they are earning their keep and maintaining linguistic, if not intellectual, standards.

So where are we going next? I dare someone to submit a photograph or the score of a musical composition. How could they possibly refuse it?