Caveat Emptor

The opinions expressed on this page are mine alone. Any similarities to the views of my employer are completely coincidental.

Monday, 18 March 2013

Is there an anti-quantitative bias in British sociology?

My title is inspired by a book chapter written more than 30 years ago by my former colleague Christopher T. Husbands called The anti-quantitative bias in post-war British sociology. I was reminded of it early last year when I read an opinion piece by David Byrne published in the journal Sociology titled UK Sociology and Quantitative Methods: Are We as Weak as They Think? Or Are They Barking up the Wrong Tree? [Apologies to readers who cannot penetrate the pay-wall.] I doubt that refereed journals are an appropriate place for opinion pieces, which is why I'm commenting on it in my blog rather than sending a response to the journal. At the end of the day opinions are ten-a-penny, and a headcount of referees that agree doesn't make an opinion worth reading. If you want the bottom line, I think that Byrne's views are ill-informed, perverse and, if taken seriously, likely to do serious intellectual damage to tender young minds. If you want to know why I think this you'll have to read on.
Before we get down to brass tacks let me make a few things clear so that only those who actively want to misunderstand will misunderstand. Most of the sociological questions that I'm interested in  involve some form of quantification, ie ultimately they require answers of the form: how many, how much, how often. They also usually require some sort of formal apparatus of inference - which normally implies some numerical calculations - because I'm seldom interested in the data in front of me for its own sake and normally want to regard it as evidence about some larger population (or process) from which it has been sampled. 
In saying this I am not saying that quantification is all there is to sociology. That would be absurd. Before you can count anything you have to know what you are looking for, which implies that you have to have spent some time thinking out the concepts that will organize reality and tell you what is important. That's partly what sociological theory should be about, right? 
I also think that the institutionalized and therefore little questioned distinction between qualitative and quantitative empirical research is, to say the least, unhelpful and should be abolished. There is a much bigger intellectual gulf between those who just want to study what is in front of their eyes and those who view what is in front of their eyes as an instantiation of something bigger. Qualitative or quantitative, if your business is generalization you have to have some theory of inference, and if you don't then your intellectual project is, in my view, incoherent.
Final preliminary. I don't know whether there is an anti-quantitative bias in British sociology. I can think of a few people who in my judgment have an irrational dislike of numbers, but a few loonies don't make an asylum. For what it's worth my impression is that, en masse, fear and ignorance (rather than irrational antipathy) are mixed together in roughly equal parts to produce a climate in UK sociology that is characterized by indifference punctuated by occasional bouts of credulity. Since I think that what we believe about the state of society depends crucially on things we can count and measure, this is not a state of affairs I find attractive. I also think we already have an armoury of conventional statistical weapons that, if used sensibly, can get us a long way down the road towards what we want to know, and that it would be a good start if sociologists put some effort into mastering these before they set off in search of false gods.
I've spent a considerable part of my career teaching and using quantitative methods and over the years I've thought a bit (probably as much as any British sociologist) about what can and cannot be achieved with them. I'm guessing that I'm the sort of person that Byrne would dismiss as a purveyor of the old, bad, useless quantitative methods, although I'm not entirely sure, for I must admit I find myself in a similar position to Lord Liverpool who, on receiving a highly abstruse letter from Samuel Taylor Coleridge, wrote: "At least, I believe this is Mr Coleridge's meaning, but I cannot well understand him".
Byrne begins his article by discussing what he concedes are the rather uncontroversial conclusions of the International Benchmarking Review of UK Sociology (IBRS) to the effect that most UK sociologists either don't do quantitative work very well or don't do it at all (and see no need for it). "Many UK sociologists", he says, "are to all intents and purposes essentially innumerate. They lack fundamental mathematical skills and combine fear of quantitative work with an all too often scornful dismissal of it" (pp 14). So far we are reading from the same page of the hymn book.
But from there on in we start to drift apart. He takes the IBRS to task for a particularly feeble piece of drafting. They say: "...arguably statistical methods form the core of social sciences." He is right: this is a silly statement and taken out of context it looks ridiculous. Statistics is no more the core of social science than mathematics is the core of physics. The core of physics is the physical world, not the mathematics used to describe it, and the core of sociology is the social world, not a specific set of tools used to describe it. But isn't this criticism a little ungenerous? The IBRS context is as follows:
"Of course statistical methods are not the only valid mode of inquiry, and each of the social sciences also embrace its own theoretical and quantitative approaches. But, arguably statistical methods form the core of social science."
Reading this closely, I'm not entirely sure what it means, but I'd be prepared to guess that it is just a particularly unfortunate piece of committee-waffle designed to keep a number of constituencies happy. This is not to excuse it, but  it is a pretty flimsy peg on which to hang an argument that quantitative methods as we know them "are essentially useless" (pp 15).
Why exactly does Byrne think they are useless? This is difficult to convey succinctly to the reader who doesn't have access to his article; nevertheless I shall do my best to do justice to his argument.
The first plank amounts to a distaste for conventional approaches to social measurement. He starts by smearing a bit of mud on structural equation models (pp 15) - a rather soft target - and then goes on to glower at the purveyors of conventional statistical methods who, according to him:
"...in so far as they pay attention to measurement...do so in terms of issues of validity which never challenge the reality of the variates they deploy. So the question is always are we measuring this thing properly, as opposed to whether there is anything there, which has an independent and real character, to measure in the first place" [emphasis in the original].
This sounds bad: there are shadowy sociological currents at large (perhaps suicidogenic currents?) that purport to measure things that don't exist (cue hisses and boos from off-stage right). But Byrne throws them a life-line. They have strayed from the true path but can be brought back into the fold if only they can be made to understand (poor souls) that the measurements they make "...are attributes which have no reality outwith the cases" (pp 16). How could they be so stupid? Short answer: they aren't. Let's go back to the beginning; perhaps an example will help.
Let's say I had a reason to use the concept "authoritarian". I'm pretty sure it corresponds to something real even if I can't poke "it" with a stick. It certainly helps me to talk about the actions of a set of people I am interested in without having to resort to extensive enumeration of all the kinds of actions I  so designate, for example: a reluctance to discuss or justify decisions with subordinates; an expectation that people should unquestioningly acquiesce; a tendency to bark orders rather than ask people politely to do things; a love of rules and a dislike of individuality; a belief in strong leadership etc. What exactly it is that is real, beyond any specific instantiation,  I might have difficulty in saying. But then again it might not be important for my purposes to be able to say exactly. Presumably, I could pursue it back to the firing of neurons in the brain, and further back  to the molecular level and ultimately even further back to the sub-atomic level until I reach the Kantian boundary which shields the Ding an Sich from my perception.
But I don't need to and for most social-scientific purposes it would be pointless. What I do need to be able to do is 1) pass the "show me" test ie be able to articulate how I would know an authoritarian act/belief if I saw/heard it; 2) show that such actions/beliefs have some kind of conceptual coherence; 3) show that  these sorts of acts/beliefs have some kind of empirical coherence such that observing an individual displaying one increases the likelihood that I will witness that same individual displaying another.
Using general concepts to organize experience is inevitable in social science as much as in everyday life: "Concepts without percepts are empty; percepts without concepts are blind" as Kant put it. The general procedure is pretty straightforward. I organize my experience of some phenomena in terms of a general concept like "authoritarianism". I make sure that it passes my first two tests. Using information from the "show me" test  I dream up some plausible observable indicators that seem to cover the most important regions of the semantic domain of the concept. I design instruments and make measurements on the indicators. I make a serious effort to validate my instruments.
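For what it's worth, test 3 (empirical coherence) needs nothing fancier than a conditional relative frequency. Here is a toy sketch in Python; all the data and indicator names are invented purely for illustration:

```python
# Toy check of "empirical coherence" (test 3 above): across invented
# cases, does displaying one indicator raise the probability of
# displaying another? Indicator names are purely illustrative.
cases = [
    {"barks_orders": 1, "loves_rules": 1},
    {"barks_orders": 1, "loves_rules": 1},
    {"barks_orders": 1, "loves_rules": 0},
    {"barks_orders": 0, "loves_rules": 1},
    {"barks_orders": 0, "loves_rules": 0},
    {"barks_orders": 0, "loves_rules": 0},
    {"barks_orders": 0, "loves_rules": 0},
    {"barks_orders": 1, "loves_rules": 1},
]

def rate(indicator, given=None):
    """Relative frequency of `indicator`, optionally among cases showing `given`."""
    pool = [c for c in cases if given is None or c[given] == 1]
    return sum(c[indicator] for c in pool) / len(pool)

baseline = rate("loves_rules")                     # unconditional rate: 0.5
conditional = rate("loves_rules", "barks_orders")  # given the other indicator: 0.75
print(conditional > baseline)  # the indicators cohere empirically
```

Nothing metaphysical happens here: the check is a statement about the cases and only about the cases.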
The indicators tell me about (measure) an attribute of an individual - they appear to be rather more or rather less authoritarian. If there are no individuals then there is nobody to be authoritarian. Authoritarianism cannot stalk an empty land and as far as I'm aware nobody claims explicitly or implicitly that such an attribute could have reality "outwith the cases" that manifest it. In fact, to be an attribute it has to be an attribute of something and what could that possibly be except the cases themselves? 
Byrne goes on to quote with apparent approval Andrew Abbott's dystopian ontological fantasy:
"The people who called themselves sociologists believed that society looked the way it did because social forces and properties did things to other social forces and properties. … Sociologists called these forces and properties ‘variables’. Hypothesizing which of these variables affected which others was called ‘causal analysis’. … what made social science science (original emphasis) was the discovery of these ‘causal relationships’" (pp 18).
But, come on, this is just a piece of cheap rhetoric. Nobody that I know believes this and if they did they would be fools. The things that are of interest to sociologists in society happen because people act to carry out their intentions, which is not to say that they realize their intentions or that what emerges at the macro-level is what anyone intended or that the social actors are the only things that exist. If I use a conventional statistical method to describe the relationship between two variables it doesn't commit me to any particular ontological position regarding the "reality" of those variables. All it says is that I've observed that, for example, actors (cases) who experience a particularly strict potty training regime (an action by the parents that can be regarded as an attribute of the case) tend to develop (presumably unintentionally) ways of behaving that by common consensus we would label authoritarian. I could establish this fact descriptively with purely observational data and a correlation or experimentally with random assignment of subjects to strict and relaxed potty training regimes. If I do the latter I have more of a warrant - because of the presence of exogenously produced variation - to talk about causes.
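To put a little flesh on this: a correlation coefficient is computed from case-level observations and from nothing else. A toy sketch in Python (simulated data, variable names invented), in which the association exists only because the simulated cases carry it:

```python
import random

random.seed(42)

# Each case is an individual with two observed attributes. The built-in
# association (slope 0.5) stands in for the potty-training example; the
# data are simulated purely for illustration.
cases = []
for _ in range(500):
    strictness = random.gauss(0, 1)
    authoritarianism = 0.5 * strictness + random.gauss(0, 1)
    cases.append((strictness, authoritarianism))

def pearson_r(pairs):
    """Plain Pearson correlation: a summary of the cases, nothing more."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / (sxx * syy) ** 0.5

r = pearson_r(cases)
print(round(r, 2))  # a positive descriptive association
```

The coefficient has no existence "outwith" the 500 simulated cases: delete them and it is gone.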
I don't believe that conventional statistical methods imply an adherence to any particular ontological claims, other than ones that are obviously commonsensical. Byrne clearly does, but I have to say that  he doesn't actually produce any  clear arguments for  his belief. What  he does is make assertions and employ rhetorical tropes. Let me give you an example. This is what, for want of a better term, I call deception by  bullshit  citation (I use the term "bullshit" in the Frankfurtian technical sense). The move involves the bolstering of a profound sounding but actually banal statement with a quite irrelevant citation of a weighty sounding source. The trick works because nobody but a pedant would bother to check out the source and if someone does call you on it, well, you can always ridicule their pedantry. Byrne says:
"There is little evidence that most of those who utilize methods based on developments of the bivariate correlation...actually understand the ontological claims which are fundamental to their deployment" (pp 17). The argument is, apparently, clinched by a reference in footnote 6 where I find the following: "For a mathematical demonstration of this see Van de Geer, 1971". Wanting to find out what "a mathematical demonstration of this" would look like, I find in the bibliography that the Van de Geer referred to is the author of Introduction to Multivariate Analysis for the Social Sciences. Luckily the volume is sitting on my bookshelf; in fact about 30 years ago I taught myself the linear algebra you need to understand simple statistical models from it. Strange, I don't recall it containing any mathematical demonstrations of ontological claims ... That's because the book contains lots of useful, if mathematically straightforward, demonstrations of all sorts of things but has nothing to say whatsoever about ontology. Silly me, I've obviously misunderstood. What I suppose Byrne meant to say is that this book exemplifies the sort of thing he doesn't like precisely because it doesn't discuss ontology. But that's just a cheap shot. In fact it is a bit like saying that the Haynes manual I occasionally consult if I have to make a minor repair to my aging Skoda is no good because it says nothing about quantum thermodynamics. Well it wouldn't, would it?
In a state of some perplexity I moved on and found this puzzling claim. In the context of quoting favourably some of Goldthorpe's diagnoses of what ails contemporary empirical sociology, he comments:
"Goldthorpe’s suggested approaches include log linear methods which do not distinguish between independent and dependent variables. These methods still depend on an acceptance of the simple reality of the variables and can only deal with complex causation through the fitting of interaction terms, although in practice this is seldom done" (pp 16 my emphasis).
Let's look at this more closely. Byrne is quite correct, log-linear models do not distinguish between "independent" and "dependent" variables, but so what? And if it were important, it is trivial to impose the dependent/independent variable structure on any log-linear model with two or more factors, that's why you can read the estimated parameters of a logit model directly from the equivalent log-linear model. But I don't see that this is germane to anything.
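For the reader who wants to see the logit/log-linear equivalence in the flesh, here is a toy sketch in Python (counts invented): for a 2x2 table, the logit slope of Y on X and the XY interaction parameter of the saturated log-linear model are one and the same log odds ratio.

```python
import math

# Invented 2x2 table of counts: table[x][y] with X and Y coded 0/1.
table = [[30, 10],
         [20, 40]]

# Logit view: with a single binary predictor, the fitted slope in a
# saturated logit of Y on X is the difference of conditional log odds.
logit_slope = (math.log(table[1][1] / table[1][0])
               - math.log(table[0][1] / table[0][0]))

# Log-linear view: with dummy coding, the XY interaction parameter of
# the saturated log-linear model is the log odds ratio of the table.
loglinear_xy = (math.log(table[1][1]) - math.log(table[1][0])
                - math.log(table[0][1]) + math.log(table[0][0]))

print(abs(logit_slope - loglinear_xy) < 1e-12)  # True: same parameter
```

So imposing a dependent/independent structure costs nothing: the parameter is already sitting there in the log-linear model.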
Log-linear models "...still depend on the acceptance of the simple reality of the variables...": guilty as charged, but again so what? If I estimate a log-linear model based on crosstabulating sex by family income at age 14 by individual income at age 35, I will certainly assume that in reality (simple or otherwise) my respondents have penises and vaginas (though hopefully I won't have to ascertain that by direct inspection), and that there was a certain amount of money in the pocket, bank account or wage-packet at the points of measurement. Why would I doubt that these things are real (as distinct from worrying about measurement error, which is something quite different)? What exactly is his point?
And now for the dandy: log-linear models "...can only deal with complex causation through the fitting of interaction terms, although in practice this is seldom done" [my emphasis]. No, no and a thousand times no. Log-linear models were partly invented to facilitate the exploration of interactions, ie situations where the "effect" of X on Y varies according to the levels of Z - which is precisely the idea of complex causation.
Rather than rarely being estimated, models with interaction effects are reported all the time. And if an interaction is suggested by the data, but not included in the model, this will be immediately apparent to the reader in the reported fit statistics and will require justification.
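To make the point concrete, a toy sketch in Python (counts invented): the conditional X-Y odds ratio flips sign across levels of Z, and the gap between the two conditional log odds ratios is exactly the three-way interaction parameter that a saturated log-linear model estimates.

```python
import math

# Invented counts[z][x][y]: the X-Y association reverses across levels
# of Z, the textbook case of "complex causation".
counts = {
    "z0": [[40, 10], [10, 40]],  # positive X-Y association when Z = 0
    "z1": [[10, 40], [40, 10]],  # the association reverses when Z = 1
}

def log_odds_ratio(t):
    """Log odds ratio of a 2x2 table of counts t[x][y]."""
    return (math.log(t[1][1]) - math.log(t[1][0])
            - math.log(t[0][1]) + math.log(t[0][0]))

lor_z0 = log_odds_ratio(counts["z0"])   # positive
lor_z1 = log_odds_ratio(counts["z1"])   # negative
three_way = lor_z0 - lor_z1             # the three-way interaction parameter
print(lor_z0 > 0, lor_z1 < 0)  # True True
```

Fit the model without the three-way term to a table like this and the fit statistics give the game away immediately; that is exactly how the machinery is supposed to be used.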
All this is really a little embarrassing. But let's move on. Byrne cites with apparent approval the recent growth of interest in agent-based modelling (ABM). Fine, it's an interesting development and a reasonable approach to explore in situations where it is hard or even impossible to collect real data.  Two points: firstly it is entirely parasitic on the results of conventional quantitative techniques when it comes to calibrating the numerical outcomes of the simulations with real world quantities; secondly we've yet to see a flood of impressive and revelatory new sociological insights  from ABM (as opposed to endless exemplifications of the Schelling game and such like) so, as yet, much promise but  not enough delivery to suggest that the foundations of social science as we know it are going to be shaken. Maybe things will change, let's keep an open mind, but let's also keep a realistic sense of proportion. Let's also not mislead our readers. Byrne says apropos ABM:
"The UK sociology community is the original home of the Journal of Artificial Societies and Social Simulation and those responsible for this development might reasonably take umbrage at the suggestion that the UK is particularly weak in quantitative terms."
Really? Let's test that with an old fashioned conventional quantitative method: counting. I looked at the last 4 issues of JASSS and totted up the number of authors of articles (excluding book reviews). There were 89 (a couple are double counted because they authored more than 1 article). Thirteen had an institutional affiliation in the UK and 2 were sociologists. Only 1 had an institutional affiliation in the UK with an attachment to a sociology department and their article was about epistemology and contained no agent-based modelling or any other sort of quantification.
The creators of JASSS might take umbrage, but it wouldn't be reasonable, or intelligible on the basis of these numbers, for them to do so. Do you agree Professor Byrne? Or have my old fashioned quantitative skills, inadequate as they are to appreciate the relevance of "multi-dimensional torus attractors", let me down again? Perhaps I'll have to work a little harder at my ontology.
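For readers who haven't met it, the "Schelling game" mentioned above takes only a few lines to reproduce. A minimal one-dimensional sketch (all parameters invented; the canonical model uses a grid with empty cells, whereas this version simply swaps agents):

```python
import random

# A stripped-down, one-dimensional Schelling-style model: agents of two
# types on a ring relocate (swap) when fewer than half of their
# neighbours share their type. All parameters are invented.
random.seed(1)
N, RADIUS, THRESHOLD, STEPS = 100, 2, 0.5, 2000
agents = [random.choice("AB") for _ in range(N)]
n_a = agents.count("A")  # composition is preserved by swapping

def same_type_share(i):
    """Share of agent i's neighbours (on a ring) that match its type."""
    neigh = [agents[(i + d) % N] for d in range(-RADIUS, RADIUS + 1) if d != 0]
    return neigh.count(agents[i]) / len(neigh)

def mean_share():
    return sum(same_type_share(i) for i in range(N)) / N

before = mean_share()
for _ in range(STEPS):
    i = random.randrange(N)
    if same_type_share(i) < THRESHOLD:   # agent i is "unhappy"
        j = random.randrange(N)
        agents[i], agents[j] = agents[j], agents[i]
after = mean_share()
print(round(before, 2), "->", round(after, 2))  # clustering typically rises
```

The punchline of the original is that mild individual preferences typically generate marked aggregate clustering; the point here is only how little machinery the exemplification requires.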
But I digress. Let's bring up another misrepresentation by Byrne:
"Another fundamental issue is the reliance of conventional regression-based methods on linear relationships." (pp 18). Excuse me. Splines? Generalized additive models? Non-parametric regression? Not good enough, Dave, you're doing that old Harry Frankfurter soft-shoe shuffle again. The children are credulous, let's tell them a story, any old story will do. So you top it off with:
"...in the social realm the changes that matter are not incremental changes of degree but fundamental changes of kind." (pp18). Says who? So now you are the arbiter of "what really matters"?  And so it goes on.
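Since the linearity charge deserves more than a rhetorical question, here is a toy sketch in Python of the simplest spline idea (data, knot and slopes all invented): ordinary least squares is linear in the parameters, not in x, so a hinge term max(0, x - knot) in the design matrix fits a kinked relationship exactly.

```python
# Toy piecewise-linear ("hinge") regression: the data have slope 1 up to
# x = 5 and slope 3 after it, and OLS with a hinge column recovers the
# kink exactly. Everything here is invented for illustration.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

KNOT = 5.0
xs = [i * 0.5 for i in range(21)]  # 0.0, 0.5, ..., 10.0
ys = [x if x <= KNOT else KNOT + 3 * (x - KNOT) for x in xs]

# Design matrix: intercept, x, and the hinge max(0, x - KNOT).
X = [[1.0, x, max(0.0, x - KNOT)] for x in xs]
XtX = [[sum(row[a] * row[b] for row in X) for b in range(3)] for a in range(3)]
Xty = [sum(X[i][a] * ys[i] for i in range(len(X))) for a in range(3)]
beta = solve(XtX, Xty)
print([round(v, 2) for v in beta])  # intercept ~0, slope ~1, extra slope ~2
```

Add more hinge terms and you have a regression spline; let the data choose the smoothness and you are in GAM territory. "Regression means straight lines" has not been true for a very long time.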
It would be tedious to go through every false claim and downright misrepresentation, though  I must say I'm anticipating with some relish Byrne's promised, but as yet undelivered, demolition of psychometrics and econometrics (pp 17). No doubt my colleagues in experimental psychology and economics are quaking in their boots at the thought that their whole intellectual world is about to cave in.



3 comments:

druedin.com said...

Kudos for giving it such a close reading!

If you don't know it already, this will cheer you up: Cartmill, M. 1991. “Primate visions: Gender, race, and nature in the world of modern science.” International Journal of Primatology 12 (1): 67–75. doi:10.1007/BF02547559.

Paolo said...

Well, just a clarification: the Benchmarking Review of UK Sociology's assertion seems different.
At page 23 we read "statistical methods form A COMMON core of social science"; I'm not an English mother-tongue, but I think it's different from "statistical methods form THE core of social science". Am I wrong?

Colin said...

Paolo, thanks for this clarification. I haven't read the report, only Byrne's gloss on it, so I was taking it on trust that he managed to get this right. The fact that he can't even get the story right in his native tongue...well you don't need me to draw conclusions about how much trust you can put in the representation he makes of more important things.