Caveat Emptor

The opinions expressed on this page are mine alone. Any similarities to the views of my employer are completely coincidental.

Monday 6 September 2010

More on statistical significance

Back in 2008 I posted something on statistical significance. In his blog Andrew Gelman draws attention to a sloppy piece of writing on the subject which appears to have the imprimatur of the British Psychological Society. As we all know, the proper interpretation of a significance test is conditional on what is assumed: most often, p is the probability Pr(T ≥ t_obs | H0 is true), where T is some function of the observed data, ie a "test statistic", t_obs is its observed value and, crucially, the probability is computed under the condition that the null hypothesis is true.
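
To make the definition concrete, here is a minimal sketch in Python (entirely my own illustration, with invented data) of a one-sided z-test of H0: mu = 0:

    # Sketch only: invented sample, one-sided z-test of H0: mu = 0.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.3, scale=1.0, size=50)  # made-up data

    # Test statistic: standardised sample mean.
    t_obs = np.sqrt(len(data)) * data.mean() / data.std(ddof=1)

    # p = Pr(T >= t_obs | H0 true): the upper tail of the null
    # distribution (approximately standard normal here).
    p = 1 - stats.norm.cdf(t_obs)
    print(f"t_obs = {t_obs:.3f}, p = {p:.4f}")

The point of the exercise is that every probability statement here is made assuming H0; nothing in the calculation tells you the probability that H0 itself is true.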
This is basically what I had drummed into me in Stats for Social Science 101. Like riding a bicycle, once you get it you don't forget it. However, we shouldn't pretend that it is a "natural" way to think about inference, so it's not surprising to see all sorts of odd (and wrong) interpretations of p values purveyed by those who should know better. A lot of the time it's probably just a matter of not writing very clearly, but if you appoint yourself an authority figure on something then you have a responsibility to write precisely and get the content right. It is more than a little tedious to hear a student complain when I correct them: "...but that is what it says in the book."
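
The commonest wrong reading is to take p as the probability that H0 is true given the data. A toy simulation (again my own construction, with made-up proportions and effect sizes) shows why that reading fails:

    # Sketch only: suppose 90% of hypotheses tested are true nulls and the
    # rest have a modest real effect. Even among results "significant at
    # the 5% level", a large share of the nulls are still true.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_tests, n_obs = 100_000, 30
    null_is_true = rng.random(n_tests) < 0.9        # assumed base rate
    effect = np.where(null_is_true, 0.0, 0.3)       # assumed effect size
    samples = rng.normal(effect[:, None], 1.0, (n_tests, n_obs))
    z = np.sqrt(n_obs) * samples.mean(axis=1) / samples.std(axis=1, ddof=1)
    p = 1 - stats.norm.cdf(z)
    sig = p < 0.05
    print("Pr(H0 true | p < 0.05) =", round(null_is_true[sig].mean(), 2))

With these (invented) numbers, roughly half of the significant results come from true nulls: nowhere near 5%, which is exactly why p is not Pr(H0 | data).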
Funnily enough, a few years ago I read a short article in a British sociology journal in which a self-appointed guru waxed lyrical about the wonders of statistical significance tests. Only in the land of statistical ignorance (ie British sociology) would this pass muster as a serious contribution to sociological knowledge, but to make matters worse our "expert" managed to make exactly the same mistake that Gelman draws attention to. I wrote a short note pointing out the error and suggested that perhaps prophets should get the message straight before they start to preach.
The reaction from the journal was interesting. First my note was rejected without being sent to referees. I insisted that it should be sent to referees. With a certain amount of bad grace, it was. It was then rejected again on the grounds that, though I might possibly be technically correct, it was jolly bad form to point out the errors in the original piece and I was obviously motivated by personal malice towards the author. One referee even accused me of gross professional misconduct, presumably for airing the dirty linen. Actually I didn't know the author from Adam and was motivated only by the wish to prevent a silly error receiving reinforcement from publication in a professional journal. Trivial as it was, I found that the whole episode revealed a lot both about intellectual standards and about the attitudes of the scientific gatekeepers in British sociology.
On the subject of scientific communication, Ben Goldacre links to this hilarious YouTube post. Enjoy!

2 comments:

Andy said...

How do you feel about parametric versus non-parametric?

http://figuraleffect.wordpress.com/2010/09/05/statistics-in-psychology/

Or effect size?

http://figuraleffect.wordpress.com/2008/02/26/tired-of-people-going-on-about-effect-size/

Unnecessary brain damage.

Alexey said...

The comments on that entry in Gelman's blog do show that this misinterpretation is terribly widespread.

Recently I've read a good book by two dissenting economists that shows how often people get statistical significance tests wrong, even in journals like the American Economic Review.

Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: University of Michigan Press.

Alexey