H. G. Wells never said that “statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write”. If he had said it, he would have been 100% right. Politicians, administrators, scientists: everyone has an indicator, an average or a p-value at the ready to back up their arguments. The source of this information is not always clear, and occasionally the interpretation, or the results themselves, are incorrect.
One notable example of this is the controversy which arose in the UK in February, in which a group of doctors presented the British Secretary of State for Health, Jeremy Hunt, with a three-metre-high edition of the book “How to read a paper”. Hunt, in defence of his “seven-day NHS” plan, had stated that in the UK, stroke patients admitted to hospital at weekends were more likely to die. In a letter to the Sunday Times, 59 top neurologists accused Hunt of misrepresenting statistical results and using outdated data to justify his policies.
I do not know whether we should be giving books away, or what size they should be, but it might be interesting to take advantage of this space to reflect on the use and abuse of statistics and the almost religious fascination with significant p-values. One particular jingle, that of statistical significance, reminds me of the “scientifically proven” claim sported by many products advertised on television when I was a child. A statistically significant result is the seal of approval we all seek relentlessly, but we would do well to remember Tolstoy’s tale of Pahom, the peasant whose hunger for ever more land cost him his life, and ask ourselves: how many p-values does a researcher need? Statistically significant, or statistics as a significant excuse?
To be continued.
Post written by Cristian Tebé Cordomí (@Cristiantb), Statistical Advisory Service at Bellvitge Biomedical Research Institute and Associate Professor at Universitat Rovira i Virgili.