Tuesday, May 27, 2008

Innumeracy in Global Warming skepticism

There's an article in the latest issue of Skeptic Magazine called "A Climate of Belief" by Patrick Frank. It argues that the case for Global Warming being caused by CO2 is severely undermined by the uncertainty in computer models of the climate. At first, I thought it raised a fairly good objection, at least good enough that I, mostly clueless about climate science, would have no idea how to refute it. But it turns out that the article fails at basic statistics.

The main argument of the article goes like this:

Computer models of the climate show error bars in their results, but these error bars only show one kind of error: the variation between multiple runs of the simulation. What the error bars don't show is the "physical uncertainty": the difference between the predicted climate and the actual climate.

How do we estimate the physical uncertainty? We use the climate model to "retrodict" past climate, and then compare to the actual climate observed during that time. Frank shows that such retrodictions got total cloud cover right only to within about 10%. Of course, to show this, he uses retrodictions of the 1979-1988 period and compares them to observations from 1983-1990. I have to wonder if it's good practice to compare different time periods.

He goes on to say that a 10% error in cloud cover has a huge impact on global temperature. How big? 1.1°C per year. That means that after a hundred years, the uncertainty is 110°C! See the graph below, which shows this uncertainty increasing with time.

This graph is what really set my skeptical bells ringing. Yes, it's true that if the uncertainty is very large, we can draw no conclusions. But how can the error be so large? Intuitively, it does not make sense. If all your results are accurate to within, say, 10°C, but the error bars are 100°C, that means either you've overestimated your error or you got really, really lucky. Even global warming deniers will grant that the models are accurate to within 10°C. Are they feeling lucky?
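To make that concrete, here's a quick calibration sketch in Python. All the numbers are invented for illustration: I assume the claimed uncertainty is ±100°C while the model's actual misses scatter by only ±5°C, and then compare the two.

```python
import math
import random

random.seed(0)
CLAIMED_SIGMA = 100.0                                    # the article's implied error bar
residuals = [random.gauss(0, 5.0) for _ in range(1000)]  # assumed actual prediction misses

observed = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(f"Claimed uncertainty:  +/-{CLAIMED_SIGMA:.0f} C")
print(f"Observed miss spread: +/-{observed:.1f} C")  # comes out near 5 C
# Hitting within ~10 C every time under +/-100 C error bars would take
# astronomical luck; the honest conclusion is that the bars are overestimated.
```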

So where does his estimate of uncertainty go wrong? Frank's problem is pure statistical innumeracy. Unfortunately, statistics is not common knowledge, so this sort of innumeracy can go right over some people's heads. Allow me to explain.

Problem 1: Uncertainties do not add! If you have 1.1°C uncertainty in the first year, and 1.1°C uncertainty in the next year, what is the cumulative uncertainty? You might guess 2.2°C, but this assumes that both uncertainties are always in the same direction. Half of the time, they will be in opposite directions and partly cancel each other out. When you work out the math, independent uncertainties add in quadrature: √(1.1² + 1.1²) ≈ 1.56°C after two years. Sure, it's possible that the result will be off by 2.2°C, but error bars are only supposed to cover the most likely outcomes. The uncertainty does not increase in a straight line; it should be proportional to the square root of time. That is, it increases more and more slowly as time goes on. I was extremely shocked at such an egregious error. Has Frank never taken a statistics class?
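Here's a little Python sketch of the difference, using Frank's ±1.1°C per-year figure but otherwise made-up assumptions (in particular, that the yearly errors are independent and random):

```python
import math
import random

SIGMA = 1.1   # Frank's per-year uncertainty, degrees C
YEARS = 100

# Frank's approach: errors stack in the same direction every year.
linear = SIGMA * YEARS

# Independent errors add in quadrature and partly cancel.
quadrature = math.sqrt(YEARS * SIGMA**2)  # = SIGMA * sqrt(YEARS)

print(f"Linear after {YEARS} years:     +/-{linear:.1f} C")      # 110.0
print(f"Quadrature after {YEARS} years: +/-{quadrature:.1f} C")  # 11.0

# Monte Carlo check: accumulate 100 independent yearly errors, many times,
# and measure the spread of the final totals.
random.seed(0)
finals = [sum(random.gauss(0, SIGMA) for _ in range(YEARS)) for _ in range(10000)]
mean = sum(finals) / len(finals)
std = math.sqrt(sum((x - mean) ** 2 for x in finals) / len(finals))
print(f"Simulated spread: +/-{std:.1f} C")  # close to 11, not 110
```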

Problem 2: Uncertainties are reduced in a stable system. The environment is a mostly stable system. That is, it doesn't swing wildly in temperature every century. If the temperature is a little higher than average one year, something will push it back toward normal. For instance, higher temperature might increase cloud cover, which reflects more of the sun's light away from Earth. Therefore, a temperature uncertainty this year may not survive to the next year. When I said the uncertainty is proportional to the square root of time, I assumed that the system has no stabilizing mechanisms. In fact, the uncertainty will increase much more slowly than that.
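We can bolt a crude stabilizing feedback onto the same toy simulation. The feedback strength below is a number I made up purely for illustration; the point is just that any such feedback caps the growth of the uncertainty:

```python
import math
import random

SIGMA = 1.1      # per-year random error, degrees C (Frank's figure)
PERSIST = 0.5    # made-up feedback: fraction of each anomaly that survives a year
YEARS = 100
RUNS = 10000

random.seed(0)
finals = []
for _ in range(RUNS):
    anomaly = 0.0
    for _ in range(YEARS):
        # The stabilizing feedback pulls the anomaly back toward zero each
        # year before the new random error is added.
        anomaly = PERSIST * anomaly + random.gauss(0, SIGMA)
    finals.append(anomaly)

mean = sum(finals) / len(finals)
std = math.sqrt(sum((x - mean) ** 2 for x in finals) / len(finals))
# Without feedback the spread would be SIGMA * sqrt(YEARS), about 11 C.
# With feedback it plateaus near SIGMA / sqrt(1 - PERSIST**2), about 1.3 C.
print(f"Spread after {YEARS} years: +/-{std:.2f} C")
```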

Problem 3: What's the difference between Frank's uncertainty and the already reported error bars? Frank asserts that they are different, but I'm not so sure. Frank bases his uncertainty estimate on the predictions of cloud cover. But is this uncertainty different from the uncertainty between different runs of the simulation? I imagine each time the simulation is run, it gives a slightly different prediction of cloud cover in the same way that it gives a slightly different prediction of temperature. So not only is Frank calculating the uncertainty incorrectly, it may have already been accounted for.
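For illustration, here's a toy ensemble in Python. The relationship between cloud cover and temperature is entirely made up, but it shows the point: if each simulated run draws its own cloud cover, then the spread of temperatures across runs already includes the cloud-cover uncertainty.

```python
import math
import random

random.seed(0)
RUNS = 1000
CLOUD_MEAN, CLOUD_SD = 0.65, 0.065   # assumed mean cloud fraction, ~10% spread
DEG_PER_CLOUD = -10.0                # made-up temperature sensitivity to cloud fraction

temps = []
for _ in range(RUNS):
    cloud = random.gauss(CLOUD_MEAN, CLOUD_SD)   # each run draws its own clouds
    other = random.gauss(0, 0.1)                 # all other run-to-run variation
    temps.append(15.0 + DEG_PER_CLOUD * (cloud - CLOUD_MEAN) + other)

mean = sum(temps) / len(temps)
std = math.sqrt(sum((t - mean) ** 2 for t in temps) / len(temps))
# The cross-run spread already contains the cloud term (~0.65 C here), so
# counting a separate "cloud uncertainty" on top would double-count it.
print(f"Ensemble temperature spread: +/-{std:.2f} C")
```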

Frank seems incredulous that we can estimate the temperature decades from now when we can't even estimate next year's temperature accurately. But actually, this makes sense. We can't predict the weather next week, but we can predict overall trends between seasons. Large, overall trends are easier to predict than year-to-year fluctuations!
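One more toy sketch (again, invented numbers) shows why: averaging over many days shrinks the day-to-day noise by the square root of the number of days, so the seasonal trend stands out even though any single day is a crapshoot.

```python
import math
import random

random.seed(0)
DAILY_NOISE = 5.0   # made-up day-to-day weather noise, degrees C

def temp(day):
    seasonal = -10.0 * math.cos(2 * math.pi * day / 365.0)  # made-up seasonal cycle
    return 15.0 + seasonal + random.gauss(0, DAILY_NOISE)

# A single day's temperature carries the full +/-5 C weather noise...
print(f"One random summer day: {temp(183):.1f} C")

# ...but a 30-day average shrinks the noise by sqrt(30), to under +/-1 C,
# so the season-to-season trend comes through loud and clear.
winter = sum(temp(d) for d in range(0, 30)) / 30
summer = sum(temp(d) for d in range(168, 198)) / 30
print(f"Winter mean: {winter:.1f} C, summer mean: {summer:.1f} C")
```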

I can only spot the statistical errors because that's the part I know. Given the kinds of errors I see, I wouldn't be surprised if the rest of the article were also riddled with flaws.

[This post has been cross-posted at BASS. Visit the site and take a look around!]

Update: Pat Frank responds! See the BASS website for discussion.

2 comments:

Anonymous said...

Miller, the cloudiness error tests out as systematic and not random. The statistics of random errors do not therefore apply. Theory-bias error is cumulative. Please read the Supporting Information document that is linked to the HTML version of my article on the Skeptic web-site. You can get it here: http://tinyurl.com/6f3py6. It's an 892kB pdf download.

Anonymous said...

Game, set, match, to Pat Frank.