Weitzman's Dismal Theorem?

Interesting post on this over at James' Empty Blog. So: Weitzman's basic thesis is: the PDF of the climate sensitivity has a long fat tail; the cost grows faster than the probability falls off; so the "expected utility", which is the integral of the product of the two, is divergent. This has echoes of mt's arguments, which he has been making for some time. But mt didn't wrap his arguments up in a pile of maths, so of course he gets ignored.
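To make the shape of the argument concrete, here is a toy numerical sketch (my own illustration, not Weitzman's actual model: the S**-3 tail, the exp(S) loss and the truncation points are all made-up assumptions): a polynomial tail times an exponentially growing loss gives a truncated expectation that keeps growing as the truncation point is pushed out, which is what divergence of the improper integral means.

```python
# Toy sketch only -- NOT Weitzman's model. Assumed: the tail of the sensitivity
# PDF falls off like S**-3, while the loss grows like exp(S).
import numpy as np
from scipy import integrate

def p_tail(S):
    return S ** -3        # fat (polynomial) tail; integrable on its own

def loss(S):
    return np.exp(S)      # loss growing exponentially with sensitivity

# Push the truncation point outwards: the "expected loss" grows without bound.
for upper in (10, 20, 40, 80):
    val, _ = integrate.quad(lambda S: p_tail(S) * loss(S), 1, upper)
    print(upper, val)
```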

There, I've given away what I think: this is a pretty piece of maths (or it may be: I can't say I've read it all through in detail) but it has precious little relevance to the real world. For two reasons: we have no good evidence for high values of CS; and we have no real knowledge of the cost function for high temperature change either. So in fact this post isn't really going to critique anything of substance about the paper itself, but merely ride a couple of my hobby horses.

On a technical point, I would have thought that the total cost would be bounded above by total destruction of the world, and would therefore be finite, so I can't quite see how the integral would diverge anyway. Even if you allow us to expand into the entire galaxy and last for the expected lifetime of the universe, the future value ought to be bounded above.
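In symbols (my paraphrase of the point, with L_max standing for the cost of total destruction): if the loss is capped, the expectation is capped too, whatever the tail looks like:

$$\mathbb{E}[L] \;=\; \int p(S)\,\min\{L(S),\,L_{\max}\}\,\mathrm{d}S \;\le\; L_{\max}\int p(S)\,\mathrm{d}S \;=\; L_{\max}.$$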

JA's main point, which appears to be true, is that CS actually has a value, we merely don't know what it is. That's his point though, so read him for that.

Perhaps more interesting for me is JA's comment: "Notably, although he talks in terms of climate sensitivity, there is nothing in Marty's maths that depends specifically on a doubling of CO2. A rise to 1.4x (which we have already passed) will cause half the climate change, but that would still give an unbounded expected loss in utility (half of infinity...). By the same argument, a rise even of 1ppm is untenable. Come to think of it, putting on the wrong sort of hat would become a matter of global importance (albedo effect)." This does seem to point up a confusion in the Marty paper: although it nominally talks about divergence of the integral due to long tails, in practice this can't happen because the tail isn't infinite. It's just about plausible to assert that the CS is 8 °C; it isn't possible to believe there is even an eensy-teensy probability of 80 °C, whatever pretty maths you may have put into generating your PDF. And since most of these start from a uniform prior on [0,10] or perhaps [0,20], 20 °C is a hard upper limit even just from the maths point of view. So, yet another reason why the integral can't diverge: all that he has done is put the wrong maths in.
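The same point in symbols (again my paraphrase): if the prior, and hence the posterior, is zero above some S_max (20 °C for a uniform prior on [0,20]), the integral runs over a finite range and is bounded by the largest loss on that range,

$$\mathbb{E}[L] \;=\; \int_0^{S_{\max}} p(S)\,L(S)\,\mathrm{d}S \;\le\; \max_{0 \le S \le S_{\max}} L(S),$$

which is finite so long as the loss at any finite sensitivity is finite.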

But stepping back from the maths for a moment, I don't think there is any possibility of sensibly characterising the "tail": let us say, the bit above 6 °C. Of course, you're entitled to say 6 °C is such a disaster that we should avoid it, but in that case the whole Marty thing becomes irrelevant. There is no observational evidence for CS above 6 °C, i.e. data points in that region (a bold assertion made mostly in ignorance, apart from a skim of AR4 chapter 9). There isn't really any good model evidence for it either (there are some runs with that high a value, but they could probably be discredited as implausible if looked at carefully...). So the idea that you can fit, mathematically, a reliable distribution to it is not true. Therefore, you cannot meaningfully attempt to integrate it. Similarly, the cost function for T > 6 °C has a completely unknown form.

So perhaps you could attempt to say: aha, but with more research we could characterise the PDF better, and maybe know its form better for high values, and then we can justify divergence. But no, this is to miss JA's point: the PDF isn't a property of CS, it's a property of our knowledge of CS. More research (assuming it's correct; and maybe allows us to throw out some old stuff) should narrow the PDF towards whatever the true value is.
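A minimal sketch of what "narrowing" means here (my own toy setup, not JA's: a true sensitivity of 3 °C, Gaussian pseudo-observations with a 2 °C error, and a uniform prior on [0, 20] are all assumptions for illustration):

```python
# Toy illustration: the PDF describes our knowledge of one true value, so
# accumulating (unbiased) data narrows the posterior around that value.
import numpy as np

rng = np.random.default_rng(0)
true_S, obs_sd = 3.0, 2.0                 # assumed "true" CS and observation error
S_grid = np.linspace(0.0, 20.0, 2001)
prior = np.ones_like(S_grid)              # uniform prior on [0, 20]

for n_obs in (1, 4, 16, 64):
    obs = rng.normal(true_S, obs_sd, n_obs)
    log_like = -0.5 * ((obs[:, None] - S_grid[None, :]) / obs_sd) ** 2
    post = prior * np.exp(log_like.sum(axis=0))
    post /= post.sum()                    # normalise on the grid
    mean = (S_grid * post).sum()
    sd = np.sqrt(((S_grid - mean) ** 2 * post).sum())
    print(n_obs, round(mean, 2), round(sd, 2))   # sd shrinks as data accumulate
```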

The bottom line: the Marty analysis is fun maths, but of no relevance to the real world. It simply amounts to mt's point, that high-damage low-probability events need to be weighted, but tells us nothing useful about how to do this weighting.

Oh dear, no, that's not the bottom line, since I just found "Such a 'planetary experiment' of an exogenous injection of this much GHGs this fast seems unprecedented in Earth's history stretching back perhaps even hundreds of millions of years. Can anyone honestly say now, from very limited information or experience, what are reasonable upper bounds on the eventual global warming or climate change that we are currently trying to infer will be the outcome of such a first-ever planetary experiment? What we do know about climate science and extreme tail probabilities is that planet Earth hovers in an un-stable climate equilibrium [9], chaotic dynamics cannot be ruled out...", which is all good well-meaning stuff, but it's over the top. There is no good evidence that we're hovering in an un-stable equilibrium (in fact it's obvious that we aren't, taking the words literally: if we were, we would have left it, that being the nature of unstable equilibria). So this is just hyperbole. Which is why ref [9] is to Hansen :-)


Perhaps you or James might just add a line or two re what Weitzman thinks he has proved and why it is good/bad. It is apparently bad for those of us concerned about climate change, but I can't tell why from your post and won't have time to read the actual paper for a few days.

But William, the Pleistocene is very palpably unstable. OTOH we do have a fairly good idea of what the equilibrium outcome will be (=Pliocene-ish). Even the bumpy ride to that new state has some constraints on it.

By Steve Bloom (not verified) on 06 Oct 2007 #permalink

Given that relatively small Milankovitch solar forcings seem to be correlated with ice age initiation and deglaciation, it would seem reasonable to assume that, at least at some tipping points, the climate is very sensitive. Most likely there exist zones between the tipping points where CS is moderate, but the response to CO2 forcing likely has some discontinuous (or near-discontinuous) points. Hitting one of these points would be the low-probability high-consequence scenario. The problem is I don't think we have any way to identify where these points are; Hansen's ">2C warming is dangerous, <2C isn't" is pretty arbitrary, and likely has more to do with political feasibility than any knowledge of the real nonlinear CS curve.

"CS actually has a value, we merely don't know what it is" is incorrect. CS actually has a value all other things being constant. They are not.

[It's a simplification, taking the concept of CS literally. Yes, the real world is more complex. But the point, that it has a value not a PDF, remains -W]

A divergent utility is an artifact of a wrong regularization on many fronts. First, the sensitivity is surely not greater than something like 5 degrees, and every sane person agrees, and the decrease of the probability distribution at higher values is much faster than needed to sustain the divergence.

Second, even if one makes the error described above and keeps the illogically slow decrease of the probability, it is still not true that the expected utility is infinite because a complete destruction of the world has a finite value. ;-)

[Well, that was my point, no need to re-make it -W]

Similar divergences are not special features of climate change. One could create the very same situation in the context of any kind of risk. Whoever does it suffers from an advanced form of paranoia.

Well I'm going to disagree with you on a few points. I don't think it is at all unreasonable to include a long tail in the prior, even up to extremely high values such as 100C. In this I disagree with people like Dave Frame and Myles Allen who argue that such high values should be arbitrarily prohibited a priori without any reference to the data :-) Of course I do think that such a broad prior should only assign very very small probability to such high values. But inverse polynomial as a prior may well be reasonable. There is plenty of data to narrow it down, of course.

[Seems rather arbitrary. But bound it at 100 if you like, the integral still doesn't diverge. Only if you insist on no upper limit is there a problem with divergence. So you can have divergence if you really want it, but it's physically unrealistic -W]

Secondly, the unbounded loss thing is an unbounded loss of utility, not cash. If utility is a sufficiently convex function of $ then losing all your money has infinite negative utility, and this is apparently a standard premise of modern economic theory. No-one claims that this is an accurate description of human behaviour, rather the claim was that this is a reasonable way to guide policy. Weitzman has, I think, refuted that belief (assuming one accepts his manuscript as valid). Although, as I said, I think the probability thing is wrong to start with.

[Um well, that was a subtlety I missed. So we can add in yet another arbitrary function mapping utility to cost to the mix of unknowns! -W]
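For concreteness (my addition; the comment above doesn't specify a functional form, so I am using the standard CRRA utility as an example): with relative risk aversion eta greater than 1,

$$U(c) \;=\; \frac{c^{1-\eta} - 1}{1 - \eta}, \qquad \lim_{c \to 0^{+}} U(c) \;=\; -\infty,$$

so consumption going to zero corresponds to an unbounded loss of utility even though the loss of cash is finite, which is the sort of premise the comment describes.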

Since, as you agree, the value of CS depends on other things, the likelihood of a particular value of CS has a probability distribution, depending on the likelihood of those other things. Given this, measurements of CS at different times must vary, and we are selecting from a distribution, which may be wide. True, this is a frequentist POV, but, as we know, the Bayesians are batshit crazy, and one can estimate an expert prior through measurements of points in the distribution with their associated uncertainties (in fact this is, I think, what James has done to get his priors).

Your position is well described by Thomas Knutson:

"Michaels et al. (2005, hereafter MKL) recall the question of Ellsaesser: "Should we trust models or observations?" In reply we note that if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time."

Given the expected fate of the Earth due to solar evolution (runaway greenhouse) in roughly a billion years, we can't completely (with exactly 0% probability) rule out some sort of feedback that brings on this state change early. I don't think any serious climatologist believes this fate will be triggered by today's crisis, but none can prove that it is an impossibility. So very large values >>100 °C would be justified in the strict mathematical sense.

There seems to be an increasing body of evidence that standard rational decision making based on utility theory is not what people do anyway.

If that is correct, the theorem is irrelevant.

By David B. Benson (not verified) on 08 Oct 2007 #permalink

"On a technical point, I would have thought that the total cost would be bounded above by total destruction of the world, and would therefore be finite, so I can't quite see how the integral would diverge anyway"

After showing that the integral diverges, Weitzman introduces a parameter based on "something like the value of statistical life on earth as we know it, or perhaps the value of statistical civilization as we know it", which he calls a "VSL parameter". He then proves his "Dismal Theorem", which states that as the parameter approaches infinity, the expected cost also approaches infinity.

By Peter Wood (not verified) on 10 Feb 2008 #permalink
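In rough symbols (my paraphrase of the comment above, not Weitzman's notation): capping the loss with a finite VSL-like parameter V keeps the expectation finite, but the claim is that the capped expectation blows up as the cap is removed:

$$\mathbb{E}[L \mid V] \;=\; \int p(S)\,\min\{L(S),\,V\}\,\mathrm{d}S \;<\; \infty \ \text{ for each finite } V, \qquad \lim_{V \to \infty} \mathbb{E}[L \mid V] \;=\; \infty.$$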