But a post I was reading today suggests a way in which teachers can at least alert their pupils to intellectual and moral dangers while at the same time sticking to the letter of their climate-related curricula. The post has a provocative title.
It is on a site called Straight Statistics, whose goals I greatly admire. They describe themselves as follows: 'We are a campaign established by journalists and statisticians to improve the understanding and use of statistics by government, politicians, companies, advertisers and the mass media. By exposing bad practice and rewarding good, we aim to restore public confidence in statistics.'
Extract from the post (italics and bold emphasis added):
'Suddenly, everybody’s saying it: the scientific and medical literature is riddled with poor studies, irreproducible results, concealed data and sloppy mistakes.
Since these studies underpin a huge number of government policies, from health to the environment, that’s a serious charge.
Let’s start with Stan Young, Assistant Director of Bioinformatics at the US National Institute of Statistical Sciences. He recently gave evidence to the US Congress Committee on Science, Space and Technology about the quality of science used by the US Environmental Protection Agency.
Some might think, he said, that peer review is enough to assure the quality of the work, but it isn’t. “Peer review only says that the work meets the common standards of the discipline and, on the face of it, the claims are plausible. Scientists doing peer review essentially never ask for data sets and subject the paper to the level of examination that is possible by making data electronically available.”
He called for the EPA to make the data underlying key regulations, such as those on air pollution and mortality, available. Without it, he said, those papers are “trust me” science. Authors of research reports funded by the EPA should provide, at the time of publication, three things: the study protocol, the statistical analysis code, and an electronic copy of the data used in the publication.
Further, he calls for data collection and analysis to be funded separately, since they call for different skills and if data building and analysis are together, there is a natural tendency for authors not to share the data until the last ounce of information is extracted. “It would be better to open up the analysis to multiple teams of scientists.”'
The key is to spot the 'trust me' science. We do need to take a lot on trust, especially in pre-university education, where there is neither the time nor necessarily the specialist skill to demonstrate the evidence and the arguments for every assertion. But when scientific assertions are made which others deploy to produce widespread alarm, and/or to support far-reaching policy decisions, it would seem obvious that someone somewhere should be able to check the results and the reasoning thoroughly and independently. Indeed, the naive observer might suppose that governments would insist upon it under such circumstances. That did not happen in the area of climate policy. In direct contradiction to the Nullius in Verba spirit of the original (but not the present) Royal Society, the words of alarmists were taken at face value, not least the Summaries for Policy Makers published by the IPCC.
The flaws of the hockey-stick plot could have been exposed earlier had the methods and the data involved been made available to all. In fact it took a remarkable amount of determined statistical sleuthing to find the truth, a story captured for posterity in Andrew Montford's superb book, The Hockey Stick Illusion. McKitrick and McIntyre's work was inspired by the modest goal of trying to reproduce a dramatic graph which had been pushed through letter-boxes throughout Canada by a government convinced by the alarming picture it conveyed. It was phoney. The government was fooled. The two part-timers (M&M), neither of them in the climate science field, took several years to overcome the barriers to making that clear. By then, the graphic had already made its huge political impact. The damage was done.
The sorry state of some corners of climate science - those occupied by those most active in steering the IPCC - has been revealed by the release of the ClimateGate materials. They include this quote from Professor Jones of the Climatic Research Unit at the University of East Anglia, subsequently entered into a Parliamentary record by the climatologist Warwick Hughes, who had requested some data held by Jones and received this in reply:
"Why should I make the data available to you, when your aim is to try and find something wrong with it."
Jones is clearly of the 'trust me' school of scientific method, as are any who push the output of complex computer models as 'evidence'. Mann, of the notorious hockey-stick plot, was also expecting others to trust him and his coworkers. We do not have to assign sinister motives to such people in order to be very concerned about this. We merely have to assign them human fallibility.
So, my tentative suggestion is this. When you have to display some scary graph projected, say, tens of years into the future by some hideously complicated software, or when you have to refer to unsubstantiated assertions about doomed polar bears, disappearing glaciers, spreading deserts, and so on and on, you can label it 'trust me' science. And explain, perhaps, that our trust should at best be tentative, pending further enquiries.