Slight change of tack today; instead of commenting on a political topic, I thought I’d have a pop at jotting down some musings on the world of statistics, particularly the way they are used in research and reporting.
Mehrabian Nights – an informative tale about (mis)communication provides an example of the misappropriation of statistics: findings from one investigation pressed into service to support an unrelated subject (in that case, statistics about one aspect of non-verbal communication being used in conversations about communication in general). There, one researcher’s statistics are, effectively, abused, but that is only one of the areas where problems arise with the use of statistics.
Gove, during his tenure in charge of Education, decided to align himself with campaigners for reform within education and was keen to trot out phrases like “evidence based” as a means of justifying whatever tack he was taking in his crusade to usurp the lefties in Education. Unfortunately, like many politicians who trot out statistics in support of their ideological viewpoints, Gove was wont to be highly selective over when, and from where, he took his statistical ‘evidence’: Sutton Trust Toolkit – evidence-based guidance contradicts much Government advice; Michael Gove wants greater rigour in schools. Perhaps he should stop using UKTV Gold for his statistics.

Use of statistics, or “evidence” as some like to term it (though they seem blithely to hold a very limited view of what ‘evidence’ might constitute), to justify reform is problematic. It creates a narrative that paints any suggestion for reform not backed up by “evidence” as unworthy, whilst ignoring some very basic, politically naive problems with how statistical evidence gets gathered. Research that gets published costs money; research that has ‘credibility’ costs money; research proposals that do not meet someone’s ideological drivers are unlikely to be funded. So we are faced with a problem: if we can only have reform that is backed by research, and we can only have research that is backed by ideological drivers, then we are unlikely to get reform that is not ideologically driven and is instead based on best practice (statistically speaking). Then there is the research that is commissioned but whose findings do not match the ideology of the group funding it, so it never sees the light of day. Yes, if you want to get that cruise ship to change course you should be able to support why, but sometimes, when you see an iceberg, you see an iceberg.
My final thoughts (for now) on the heralding of statistics as some form of Holy Grail of truth are, in a way, indirectly addressed by Nate Silver. There are some very clever, highly educated, highly experienced individuals who practise statistics for a living. They work across all manner of high-paying industries making forecasts, and £billions are wagered on, or reliant upon, those forecasts … and they get it wrong, often spectacularly. Good statistics rely upon good analysis and good research design, and the scope for getting things wrong at either or both of those stages means that you won’t know how good your predictions were until after the event; even then, you won’t know how much of your ‘getting it right’ was down to luck and how much was down to good analysis, or good research design, or neither, or both. It’s a crap shoot. If these highly educated specialists are unable to guarantee accurate research design, what hope do part-time ‘amateurs’ have of designing research that actually researches the thing they think it researches?
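That luck-versus-skill problem is easy to demonstrate with a toy simulation (this sketch is my own illustration, not from the original post, and the numbers chosen are arbitrary): give a large crowd of forecasters zero skill, have each one make a string of fifty-fifty calls, and some of them will still end up with a perfect record purely by chance.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def lucky_forecasters(n_forecasters=1000, n_predictions=10):
    """Simulate forecasters who guess each binary outcome at random
    (i.e. with no skill whatsoever) and count how many nevertheless
    end up with a 'perfect' prediction record."""
    perfect = 0
    for _ in range(n_forecasters):
        # each call is an independent coin flip: right or wrong
        calls_right = [random.random() < 0.5 for _ in range(n_predictions)]
        if all(calls_right):
            perfect += 1
    return perfect

# With 1000 skill-free forecasters each making 10 calls, we expect
# roughly 1000 / 2**10, i.e. about one, to be right every single time
# by luck alone -- and afterwards they look like a genius.
print(lucky_forecasters())
```

After the event, the perfect record of that one lucky forecaster is indistinguishable from genuine skill, which is precisely why a good track record alone doesn’t tell you whether the analysis or the research design behind it was any good.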
“But what’s the alternative?” I hear you cry. Ultimately, you have people whom you consider to be experts (or, at the very least, the best that you have), you trust them to give you their best opinions, and if those opinions align with what sounds good to you then you’re likely to follow the advice and either get it right or not. If not, hopefully the next person making the call after you does something different. And if all the research and all the statistics keep telling you to do something that doesn’t work, would you just keep doing it, hoping that it works eventually, or would you broaden your way of tackling the problem to include other viewpoints?