
Saturday 29 November 2014

Oxford predicted first in 2014 REF!

So, as reported in the THES, Oxford sociology is predicted to come top of the Sociology (subpanel 23) REF assessment. This is the conclusion of a paper posted on arXiv by Mryglod, Kenna, Holovatch and Berche in which they use the Hirsch (H) index to forecast the 2014 REF rankings in four subjects: biological sciences, physics, chemistry and sociology.

The inputs to the H index are the number of publications and the number of citations, and essentially it balances one against the other. A department that publishes a lot that nobody pays any attention to would get a low H index, but a department with a more modest output that is cited a lot (and presumably consists of higher quality publications) would get a higher number. For the prediction exercise the calibration period is the last RAE, and the calculation takes into account the fact that only 4 outputs per person were submitted.
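For readers who haven't met the metric, here is a minimal sketch of the calculation (in Python, with made-up citation counts); the departmental version used in the arXiv paper applies the same idea to a department's pooled outputs.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical department with 8 outputs and these citation counts:
print(h_index([25, 14, 9, 7, 4, 3, 1, 0]))
# -> 4: at least 4 papers have 4+ citations, but there aren't 5 papers with 5+
```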

Well, I like this result, but then again I would, wouldn't I?! At the very least it gives some independent evidence to support my conviction that two departments I have been a member of have been hard done by in past UK research assessment exercises.

Nobody should claim that a single metric could tell you all you need to know about research quality. But it seems to me equally foolish to ignore this evidence. After all, the actual procedure allegedly used by REF panels treats us as though we are idiots.

This time round there were about 30 submissions to the Sociology subpanel. Let's assume an average of 30 staff per submission, each with 4 REFable publications. That gives us 3,600 publications for the panel to read. Let's assume that each output has to be read by two panel members (surely fairness would require that?). There are 20 members of the panel, so each must read 2 × 3,600 / 20 = 360 pieces of work. If we assume that the reading goes on for a whole year, each panel member would have to read and reach an opinion about almost 7 outputs a week.
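For what it's worth, here is that back-of-the-envelope arithmetic spelled out; the figures are the rough assumptions made above, not official REF numbers.

```python
# Assumed inputs (rough guesses, as in the text above).
submissions = 30           # submissions to the Sociology subpanel
staff_per_submission = 30  # average staff per submission
outputs_per_person = 4     # REFable outputs per person
readers_per_output = 2     # each output read by two panel members
panel_members = 20
weeks_of_reading = 52      # reading spread over a whole year

total_outputs = submissions * staff_per_submission * outputs_per_person
readings_per_member = total_outputs * readers_per_output / panel_members
outputs_per_week = readings_per_member / weeks_of_reading

print(total_outputs)               # 3600
print(readings_per_member)         # 360.0
print(round(outputs_per_week, 1))  # 6.9 -- almost 7 outputs a week
```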

That doesn't sound so bad. I could easily read 7 articles a week if I had nothing else to do. But an unknown proportion of outputs will be monographs. I couldn't read 7 monographs a week even if I got leave of absence from my day job. Most REF panel members will also have a day job to do, i.e. they are doing their REF work in their spare time. During term time I probably read about 2 new articles a week, usually things that are directly related to my research or teaching. Unless I gave up sleeping it is not obvious how I would be able to do my day job and at the same time read and reflect on 7 outputs that are for the most part unrelated to my professional interests.

The conclusion is clear: either the REF panel members are selected for their superhuman reading capacities (to which hypothesis I assign a low prior probability) or the process is in part bogus.

I don't doubt for one moment that REF panel members take the job seriously. I also don't doubt that they pass at least some of the text of each output before their eyes. I do doubt that they read a substantial proportion of the submissions in anything like the common-sense use of that word. No doubt a lawyer would be able to defend what they do as "reading" in some emaciated and purely formal sense of the word, but really that would be a rather pathetic and dishonest response.

In reality we all suspect what is really going on (and conversations with people who actually know lead me to believe that these suspicions are not without foundation), but nobody wants to break ranks and say the Emperor has no clothes. To mix my myths and metaphors, we all know what happened to Cassandra.

When the real REF results are published on December 18th we will know how accurate the predictions have been. Of course I'm hoping for the best, not least because I know that the colleague responsible for our submission did a quality job. It's not beyond the bounds of possibility, however, that the results for the sociology panel will differ to a marked degree from the H index predictions. And if they do, then somebody should be asking some hard questions about the REF process as it applies to sociology.

There has already been a bit of comment in the twittersphere to the effect that the H index is in some way biased towards elite institutions. A moment's thought suggests that this is rather unlikely. To produce that sort of bias would require a conspiracy on a fairly monumental scale. It would have to involve the editors of journals, the referees of journal articles and, even more implausibly, scores of people, many personally unknown to the authors of the outputs, conspiring to inflate citations.

It's not impossible, but compare that just-so story with one that involves a largely self-appointed clique operating in the proverbial smoke-filled room to reward, with no serious scrutiny, the type of sociology they like while pursuing vendettas against whoever and whatever they dislike. They don't even have to discuss doing it. A nod and a wink across a table and a tacit agreement not to stab my kind of sociology in the back if I leave your kind alone are sufficient.

During the course of writing this I became curious to know what my personal H index was. It's easy to find out - I did it with Google Scholar. Getting the numbers is one thing, but what do they mean? Rather conveniently there is an LSE publication that gives information about average H index scores for a number of social scientific disciplines. It turns out that the average sociology professor has an H index of about 3.7. That made me pretty happy. My personal score, for any sensible periodization, is, to adapt Harry Enfield, considerably larger than that. And I should add that I'm probably at the lower end of my department's score distribution.

Wednesday 19 November 2014

Careful, careful...

It occurred to me only after I had written my last post that there is a danger in admitting professional ignorance. In David Lodge's Changing Places there is a dinner party game called Humiliation in which Eng. Lit. professors compete over which canonical classics they haven't actually read. A particularly obnoxious and ultra competitive character called Ringbaum wins by confessing that he hasn't read Hamlet. He is then sacked by his university.

Far-fetched? Well, there are many peculiar practices going on at British universities these days.

Monday 17 November 2014

What have you been reading?

Kieran Healy has produced a fascinating graphic of the number of citations of the 10 most cited sociology journal articles in each decade from the 1950s onwards. He has a nice discussion of it which I won't repeat here. Several things struck me though. Of the 14 journals that produced top-10 articles over the last 60 years or so, 13 are American and 1 is British. Perhaps not too surprising, but food for thought when and if metrics come to be used in the UK to measure research quality.

What did surprise me a little though is how few of the 60 articles I've actually read. I counted 13 that I've read with a modest degree of seriousness and 7 of those are "classics" from the 50s (and before) and 60s. That leaves just 6 from the last 40 years. These are supposed to be the moving and shaking articles and I've never heard of most of them.

So what should I conclude from this? One possible conclusion is that I'm an ignorant fellow who doesn't keep up with the literature. I'd be interested to know exactly how ignorant. Would anyone else like to reveal their own personal tally? Be honest.

Another possibility is that American and European sociology are actually quite different intellectual worlds with rather different preoccupations and hence rather different reading and citation patterns. It would seem odd to talk about American and European economics in that way - at least I've not heard my economics colleagues discuss their discipline in those terms - whereas it might make perfectly good sense to talk about American and British sociology as distinct worlds.

I'm primarily interested in my own society and that is reflected in what I read.

Friday 14 November 2014

Sociology down the plughole

Just when you think things can't get any worse, you can always rely on the journal Sociology to prove you wrong. At the beginning of last year I was shaking my head in disbelief when they published a poem (in case you haven't heard of Sociology, it is supposed to be a serious academic journal).

Now they have gone one better (or worse) and published a piece by John Holloway - A Note on Hope and Crisis - that seems, in nine numbered paragraphs, to be a text originally intended for Radio 4's "Thought for the Day", or perhaps for an obscure marxisant literary journal. I don't think it is too extreme to say that it has zero social scientific content.

What are the editors thinking of? What possible sense, for instance, can be made of paragraph 8, which consists of two sentences?

We are the crisis of capital and proud of it: that is our dignity, that is our hope. And our joy.

Well, at least the editors have done some really serious editorial work, as we can see from paragraph 9, where Holloway informs us: "The editors of Sociology have kindly suggested that it would be helpful to clarify my use of 'We'." Good to see they are earning their keep and maintaining linguistic if not intellectual standards.

So where are we going next? I dare someone to submit a photograph or the score of a musical composition. How could they possibly refuse it?


Social Mobility Grinding to a Halt?

I was at a half-day seminar yesterday sponsored by the ESRC and the National Centre for Research Methods on the theme of Social Mobility Grinding to a Halt? New Evidence from the Census and Birth Cohort Studies. There were four excellent papers, and Jo Blanden and I acted as discussants. Thanks to Pat Sturgis for organizing the event. It was good to see that a number of representatives of Whitehall departments were in attendance. Maybe there is some hope that they might eventually decide to collect some decent data.

For anyone who is interested, my comments on two of the papers are here (minus the figures and tables that contain Crown Copyright material).

Wednesday 12 November 2014

Oxford Q-Step Centre

The Oxford Q-Step Centre has put up a nice website with an excellent introductory video. It looks like there will be all sorts of goodies on it in the future.

Thursday 6 November 2014

Here we go again - more quantitative self-hatred

So, here we go again. Discover Society has decided to publish another piece of quantitative self-hatred, this time by the American sociologist Brian Castellani. It covers similar ground to David Byrne's piece in Sociology that I wrote about here, which is only to be expected as Castellani and Byrne seem to be close associates. It repeats the same old contentious claims about the alleged deficiencies of the standard toolbox of quantitative methods and combines them with a complete lack of understanding of the actual state of affairs in British sociology departments. On the plus side, Castellani's piece is less intemperate than Byrne's ridiculous polemic, so every cloud has a silver lining, I suppose.

Still, we should probably feel a bit of sympathy for Castellani because as a student he seems to have suffered from the intellectual equivalent of child abuse. He tells us that: "...my quantitative professors argued, statistics (and pretty much it alone) made 'sense' of the complexity of social reality." If they really forced that kind of garbage down his throat then I genuinely feel sorry for him. What can I say? He was clearly taught by people who didn't know what they were talking about and it has soured his whole understanding of statistics. That would explain sentences like: "And being a good quantitative social scientist, you would develop as simplistic a causal model as possible, what Capra and Luisi (2014) call mechanistic or reductionist social science." We're not pulling our punches then. People doing conventional quantitative social science prefer simplistic models - I assume Castellani is aware of the difference in nuance between simple and simplistic - and what they are doing is mechanistic and reductionist (cries of boo and hiss from les enfants du paradis). Jesus, those quant guys are so primitive it's amazing they can stand on two legs (not like us super-evolved complexity guys, because complex is always better than simplistic, right?).

Well, maybe, but it doesn't seem to imply an ability to stick to the point or to follow a coherent line of argument. What, for example, is the point of the figure, culled from Wikipedia, showing various Gaussian probability density functions? It has no clear relationship to anything that Castellani is discussing at the point it is presented to us. Is the idea just to serve up to the home crowd some totemic figures attributable to the enemy that they can focus their hatred on? Who knows? Who cares? Well, in a complex world seemingly anything goes. After all, Castellani tells us that students, instead of being force-fed that quant-methods hogwash, should have been following a curriculum stuffed with post-positivism, post-structuralism, eco-feminism, deconstruction, constructionism, constructivism, qualitative method and post-modernism. But hang on, isn't that what most British undergraduate sociology courses actually consist of? Come on Brian, pay attention.

There are, however, apparently beacons of hope. Though Oxford, Harvard, Stanford and Cambridge feed their undergraduates a diet of "...mechanistic and reductionist experimental or quasi-experimental design" - oh, the brutes! - the School of Political and Social Science at the University of Melbourne, Sciences Po in Paris and the BSc in Sociology at the LSE are held up as examples to us all. In these enlightened institutions, we are told, "...undergraduates are given courses in critical thinking, applied research and interdisciplinary and mixed methods." The gods be praised!

Well, fair dinkum to Melbourne, I can't gainsay 'em because I don't know what they do. Sciences Po I should know something about because my own institution has an exchange programme with them and I've supervised one or two students on it. I must say, they seemed perfectly normal to me. Not especially enlightened nor especially disdainful of the thin gruel they are obviously being served up in Oxford. And then the BSc in Sociology at the LSE. Well, there's something I do know about. Having been an undergraduate on it 35 years ago, I then taught on the degree for ten years - albeit over a decade ago - and am still in intimate contact with sources currently involved in delivering it. Let's say I have a pretty good idea of what the reality looks like on the ground. The rules of good manners and the UK's libel laws prevent me from saying at this point all of what I actually believe to be true. But let me put it this way: the department that hosts the degree is also home to that monument to absurdity resting on a pedestal of incompetence called the Great British Class Survey. If the future of quantitative methods in the UK relies on the skills and understanding of the people associated with that, then God help us, we're doomed.

What we're doomed to is, however, not entirely clear from Castellani's account. Of course he has his diagram that explains everything. It's a fantastic green and white web of names that starts with Isaac Newton in the top left-hand corner, who apparently was working in the 1940s and 50s (you might have to brush up on the periodization of your history of science a little there, Brian), and ends up with Emma Uprichard in the bottom right-hand corner. In between is more or less everything and everybody plus the kitchen sink, serious thinkers and doers along with notorious bullshit merchants.

This actually tells me nothing, and there's the rub. If you are going to persuade me that I need to learn a new set of tools - let's call them complexity science - to answer the questions I'm interested in, then I think I'm entitled to say: OK, but first show me the money. Please show me, by producing convincing empirical evidence, that these tools give better answers to the sorts of questions I and other social scientists are interested in. I don't mean by this demonstrations that the GLM is an inappropriate model for understanding weather systems or the swarming behaviour of birds and fish. I know that. I also don't want a philosophical disquisition about ontology. I want to see the pay-off for standard social science questions. Nor do I want demonstrations that RCTs and linear regressions are not appropriate for all questions. Nobody in their right mind thinks they are. In other words, cut out the endless - in some cases book-length - chat and demonstrate the empirical value of complexity tools when it comes to the sort of questions that most quantitatively oriented sociologists are interested in.

If the answer is that they add little or no value for the conventional questions, then that's fine. If your question is about networks then you need network models - nobody is going to dispute this. I went to my first course on social network analysis in 1989 and very good it was too. Since then I've not worked on problems that involved the analysis of network data, so there was no particular pay-off other than gaining an appreciation of what can be done. I also have colleagues whose substantive work involves the use of network models, who develop the statistical theory to make inferences about networks and who write the software to do it. None of them feels the need to disparage conventional quantitative methods or to shroud what they do in a sea of verbiage. I could say much the same sort of thing about colleagues whose work involves the use of agent-based models or system dynamics models. They have these things in their toolbox and they take them out when they are the right tool for the job and when they can be shown to work.

Though he doesn't present any in his article, Castellani probably has a whole bunch of convincing applications that he can pluck from his CV. I don't know, I haven't looked at it. I'm less sure that Byrne and associates could do that. Let's face it, this is a guy who tries to tell us that whatever it is we're doing is all wrong yet is unable to work out correctly the size of his state space (he's out by a factor of 10). In all serious disciplines that would earn you a one-way ticket to Palookaville. In sociology it gets you a chair. It also seems that in practice complexity science for Byrne comes down to QCA and cluster analysis. That's odd, for when I look at what seems to me to be the serious stuff on complexity in the social sciences - for instance at the Santa Fe Institute - it's all differential equations and random matrices. I think it would be great if we taught our sociology undergraduates that. But then again, given that most have given up maths at 16 and break out in a cold sweat if you show them a summation sign, this would be a tall order. Probably better to go for rubbishing the modest but tangible achievements of conventional quantitative methods and give them a bit of waffle from Luhmann, Latour, Castells, Urry, Thrift and so on. Then they'll really understand what is going on in the world.




Wednesday 5 November 2014

Crime and Punishment

I'm getting to the age where I don't remember what movies I've seen. A few weeks ago we started to watch a DVD borrowed from the library and only after watching for 10 minutes did it suddenly dawn on us that it looked terribly familiar... I suppose I'd have more to worry about if I didn't notice.

But the last two weekends have been a triumph of cinematic consumption. Last week we watched John Michael McDonagh's Calvary and the week before Lav Diaz's Norte, the End of History. Both are about crime and punishment and both, in their very different ways, are masterpieces.

Calvary, set in contemporary Ireland, is essentially a retelling of the Christian myth. Someone has to pay for the sins of the fathers, and who better than somebody who is good? Of course the twist is that the bad guys and the good guys all belong to the same institution - the Catholic Church - and that this exploration of the crisis of trust in the Irish Church is conducted with a considerable amount of deadpan black humour.

Norte starts off from Dostoyevsky's premise that there is only one thing worse than being punished for your crimes, and that is not being punished. The film is four hours long and contains a very large number of long shots from a stationary camera, which will not be to everyone's taste, but I found it increasingly enthralling. In fact the last two hours just fly by as you get drawn into the descent of one of the principal characters into a hell of insanity and violence.

The central message of both films can be summed up as: the fathers have taken too little care of this, and redemption entails tragedy.