And I've been busy with other stuff too. Look what I made in Mathematica.
Anyways, I am still trying to figure out the global warming thing. Isn't it so much easier to talk about fallacies and reasoning rather than specific examples thereof?
This graph is what really set my skeptical bells ringing. Yes it's true that if the uncertainty is very large, we can draw no conclusions. But how can the error be so large? Intuitively, it does not make sense. If all your results are accurate within, say, 10°C, but the error bars are 100°C, that either means you've overestimated your error, or you got really, really lucky. Even global warming deniers will grant that the models are accurate within 10°C. Are they feeling lucky?
So where does his estimate of uncertainty go wrong? Frank's problem is pure statistical innumeracy. Unfortunately, statistics is not common knowledge, so this sort of innumeracy can go right over some people's heads. Allow me to explain.
Problem 1: Independent uncertainties do not simply add! If you have 1.1°C uncertainty in the first year, and 1.1°C uncertainty in the next year, what is the cumulative uncertainty? You might guess 2.2°C, but this assumes that both uncertainties are always in the same direction. Half of the time, they will be in opposite directions and partly cancel each other out. Independent uncertainties add in quadrature (you take the square root of the sum of the squares), and when you work out the math, the total uncertainty after two years is about 1.56°C. Sure, it's possible that the result will be off by 2.2°C, but error bars are only supposed to cover the most likely range. The uncertainty does not increase in a straight line. It should be proportional to the square root of time. That is, it will increase more slowly after a little while. I was extremely shocked at such an egregious error. Has Frank never taken a statistics class?
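To see the difference between linear and quadrature addition, here's a minimal sketch (the 1.1°C per-year figure is just the number from above, used for illustration):

```python
import math

sigma = 1.1  # hypothetical per-year uncertainty in °C

# Naive linear sum: assumes both errors always point the same way.
linear_two_years = 2 * sigma  # 2.2

# Independent errors add in quadrature instead.
quadrature_two_years = math.sqrt(sigma**2 + sigma**2)  # ~1.56

# More generally, after n years the cumulative uncertainty
# grows like the square root of n, not linearly.
def cumulative_sigma(n_years, sigma=1.1):
    return sigma * math.sqrt(n_years)

print(round(quadrature_two_years, 2))   # ~1.56
print(round(cumulative_sigma(100), 1))  # ~11.0, not 110
```

So even the naive random-walk assumption gives 11°C of uncertainty after a century, not 110°C.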
Problem 2: Uncertainties are reduced in a stable system. The environment is a mostly stable system. That is, it doesn't swing wildly in temperature every century. If the temperature is a little higher than average one year, something will push it towards normal temperature. For instance, higher temperature might increase cloud cover, which reflects more of the sun's light away from Earth. Therefore, a temperature uncertainty this year may not survive to the next year. When I said the uncertainty is proportional to the square-root of time, I assumed that the system has no stabilizing mechanisms. In fact, the uncertainty will increase much more slowly than that.
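You can check the stabilizing-mechanism claim with a quick Monte Carlo sketch. Below, each year's temperature anomaly is pulled partway back toward zero (a simple mean-reverting model; the pull strength `phi` and noise size are made-up illustrative numbers, not climate parameters):

```python
import random
import statistics

def spread_after(n_years, phi, sigma, trials=20000):
    """Std. dev. of the anomaly after n_years.

    phi = 1.0 is a pure random walk (no stabilizing mechanism);
    phi < 1.0 pulls the anomaly back toward zero each year.
    """
    finals = []
    for _ in range(trials):
        x = 0.0
        for _ in range(n_years):
            x = phi * x + random.gauss(0.0, sigma)
        finals.append(x)
    return statistics.stdev(finals)

random.seed(0)
# Random walk: uncertainty keeps growing like sqrt(time).
print(spread_after(50, phi=1.0, sigma=1.1))  # roughly 1.1 * sqrt(50) ≈ 7.8
# Mean-reverting system: uncertainty saturates at a fixed level.
print(spread_after(50, phi=0.5, sigma=1.1))  # roughly 1.3, and it stays there
```

With any stabilizing feedback at all, the spread stops growing after a few years instead of marching off to infinity.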
Problem 3: What's the difference between Frank's uncertainty and the already reported error bars? Frank asserts that they are different, but I'm not so sure. Frank bases his uncertainty estimate on the predictions of cloud cover. But is this uncertainty different from the uncertainty between different runs of the simulation? I imagine each time the simulation is run, it gives a slightly different prediction of cloud cover, in the same way that it gives a slightly different prediction of temperature. So not only is Frank calculating the uncertainty incorrectly, but that uncertainty may also have already been accounted for.
Frank seems incredulous that we can estimate the temperature decades from now when we can't even estimate next year's temperature accurately. But actually, this makes sense. We can't predict the weather next week, but we can predict overall trends between seasons. Large, overall trends are easier to predict than year-to-year fluctuations!
I only spot the statistical errors because that's the part I know. Given the kinds of errors I see, I wouldn't be surprised if the rest of it were also riddled with flaws.
[This post has been cross-posted at BASS. Visit the site and take a look around!]
Update: Pat Frank responds! See the BASS website for discussion.