Quite a lot really. Unless, of course, you're looking at the wrong models in the wrong way. As Robert S. Pindyck does. I do have some sympathy for the paper, but it's badly written, somewhat confused, and the author has failed to emphasise some key distinctions.
To begin where I agree: I'm fairly happy with his assertion that "certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the [social cost of carbon] estimates". I'm only "fairly" happy, because to say that the discount rate is "arbitrary" is stupid (which is probably a hint that this thing hasn't been peer-reviewed; it's only a working paper); what he means is that people of good faith can nonetheless disagree about what the correct value should be. And it is certainly true that how much you think future climate damage should be costed now depends rather heavily on the discount rate. However, this is just the bleedin' obvious, so he gets no points for that. He also whinges about the uselessness of the models, but fails to realise that for exploring the impacts of different discount rates, or different damage functions, they are quite illuminating.
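The sensitivity in question is easy to illustrate. A minimal sketch follows; the $1 trillion damage figure, the 100-year horizon, and the rates are all invented for illustration, and none of it comes from Pindyck's paper:

```python
# Present value of a hypothetical future climate damage under
# different (constant) discount rates. All numbers are invented
# purely to show how strongly the answer depends on the rate.

def present_value(damage, rate, years):
    """Discount a damage incurred `years` from now back to today."""
    return damage / (1.0 + rate) ** years

damage = 1e12   # $1 trillion of damage, hypothetically 100 years out
for rate in (0.01, 0.03, 0.05):
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.0%}: present value ${pv / 1e9:,.0f}bn")
```

Run as-is, the spread is roughly $370bn at 1% down to under $10bn at 5%: a factor of nearly fifty, from a choice that people of good faith can disagree about.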
His paper is about "integrated assessment models" (IAMs) - the things used to turn parametrised versions of climate change into damage estimates. I know little about them, so I'm not going to say much about that bit. Of course his title doesn't specify that. And while his language in the abstract, when he says the models' descriptions of the impact of climate change are "completely ad hoc, with no theoretical or empirical foundation", is probably defensible, you have to read it fairly carefully to realise he is talking about the "damage functions" bit there - he isn't talking about the physical-climate-bit at all in the abstract. Later on he is more careless:
There are two types of inputs that lend themselves to arbitrary choices. The first is the social welfare (utility) function and related parameters needed to value and compare current and future gains and losses from abatement. The second is the set of functional forms and related parameters that determine the response of temperature to changing CO2e concentrations and (especially) the economic impact of rising temperatures.
The arbitrariness of "(especially) the economic impact of rising temperatures" may or may not be correct - like I say, that's not my thing; though I know enough to say that damage isn't just a function of temperature: it ought to include precip too, at least. But he's quite wrong to say that "the set of functional forms and related parameters that determine the response of temperature to changing CO2e concentrations" is arbitrary. That's just ignorance on his part.
He does talk about the physical-science-bit, wrapping the discussion around climate sensitivity, which is what it all boils down to at this level. But I don't think he knows what he is talking about; instead he just talks up the uncertainties. What he should have written for this section (prefixed with "I know nowt about CS...") is "Compared to the uncertainties elsewhere, the uncertainty in CS is small and we may as well assume 3 °C. Or 2 °C. Or somewhere like that". Instead he repeats the Roe/Baker heresy, even down to the Gaussian bit, which I pointed out was wrong ages ago.
mt will be happy that he does point out that the IAM models intrinsically don't deal with "catastrophic outcomes" well, since they're built to be smooth, i.e. they're built not to have any catastrophic outcomes. So he gets a point for that.
What to Do?
Well, don't read his paper or the "What to Do?" since it doesn't really go anywhere. Instead, you need to think like Karl Popper (I think; though I could easily have the wrong guy; let's call him "KP" for now) who was talking about building political systems. And laying into the likes of Plato, who designed their systems around philosopher-kings, who by their very nature were wise and good (even if they were instructed to lie to the populace; but that's another matter). KP tried to say No: we should not design our systems on the basis that our rulers will be Good; on the contrary, we should design them to survive rulers who are Bad.
Similarly, the answer to "oh noes, our IAM models suffer from unknown damage functions and uncertain discount rates" is not "fiddle with the models" or "select simpler more subjective models"; it's to design policies that work even if you don't know such things. Like - aha, you guessed it, I know - a carbon tax that we can ramp up or down slowly as needed.
If I had that name I'd change it...
[Do you know, I completely missed that? Still you folk are notorious for having names like "Peentangler" -W]
We have empirical tests - accidentally, anyway - of a carbon tax in the form of fuel duty. Although, being a duty (charged per litre) rather than a percentage, its effective rate varies with the fuel price, I believe the current rate is c. 150%. A fairly substantial carbon tax, therefore.
We can then compare to another country - the US - and see that there are two elements. Fleet economy is roughly 20 mpg in the US vs 30 mpg here; add in changes in miles driven and you find usage is roughly halved in the UK.
So we have what could be regarded as extreme levels of taxation required for a significant reduction in use - even if we attribute all of this reduction to tax, and not to the UK being a more compact country with better public transport. And it's taken decades to work.
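The back-of-envelope behind "roughly halved" can be sketched like so; the 20/30 mpg figures are from the comment, while the relative-mileage figure is an assumption invented to fit the claimed outcome:

```python
# UK vs US per-driver fuel use, using the comment's round numbers.
us_mpg, uk_mpg = 20.0, 30.0   # fleet economy, from the comment
us_miles = 1.0                # normalise US annual mileage to 1
uk_miles = 0.75               # assumed relative UK mileage (illustrative)

us_fuel = us_miles / us_mpg   # fuel used per normalised mileage unit
uk_fuel = uk_miles / uk_mpg
print(f"UK fuel use relative to US: {uk_fuel / us_fuel:.2f}")
```

Note the economy difference alone only gets you to two-thirds of US consumption; the rest of the halving has to come from miles driven.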
Now, the reason for this is simple: a car represents a large investment, and fuel is only one cost; for most cars, depreciation is a bigger cost than fuel even now, and insurance is significant too.
Now, if we look at electricity...
First, we note that consumers have essentially no direct control - unless you go off-grid, you'll just see higher bills. And because this will be across suppliers, they won't have a huge incentive to go out and build low-carbon capacity. After all, we are already replacing coal with natural gas. In any case, the turnover time of generating plant is measured in decades; in order to achieve rapid reductions in emissions, the tax would have to be set at a level where new-ish plants would have to be prematurely retired.
Natural gas for domestic use... well, the price has already doubled or tripled over the past decade, without drastic impacts on end use. Because - and you may see a theme here - the capital cost of going to 'not using gas for heating' is very high (and the options limited).
Now, I know that in economist land these big practical details somehow don't exist - much like the way that pulleys and cables have zero friction/mass in Newtonian mechanics land - but I would contend that in the real world, these 'details' would simply mean that a carbon tax set at a level that made any sort of difference would result in a huge hike in bills for customers, with no relief in sight. Which would be extremely unpopular. And hence dropped pretty quickly.
The cynic in me thinks that some of the conservative proponents of a carbon tax are fully aware of this, and in fact simply want it as a way of hiking VAT (the taxes being very similar) and cutting income taxes - a standard way of transferring from poor/middle to rich; if they can get some of the more gullible Green lobby to join in, so much the better.
[I think you've missed things. Take the price of natural gas for heating. Increasing the cost of gas is supposed to affect the balance people make between turning the heating down, paying more for gas, or paying for more insulation. Getting people to shell out for insulation (with the associated tedium) is difficult; the impact is more on standards for new houses. But the effect is there.
But you're also missing the point that, since this is a tax (not a cap) one option people have is simply to pay it. There's nowt wrong with that -W]
Increasing the cost of gas is supposed to affect the balance people make between turning the heating down, paying more for gas, or paying for more insulation.
You know, I'd love to pay more for insulation. Unfortunately, because of the type of property I live in, it's physically impossible to install any more insulation without completely gutting the place first. And if I let it get any colder during the winter, I'm going to start having serious problems with damp. The inescapable fact is that a lot of the housing stock in this country is crap, and cannot be retro-fitted to make it all that much better. The only real option I have in my current property is to upgrade the boiler, which (because I'm a pretty light heating user) has a payback time of about 15 years. I could go to a completely different heating source (in an ideal world, I'd quite fancy an air-sourced heat pump), but that's got an even longer payback time and some serious installation problems.
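The payback arithmetic in that comment is simple enough to make explicit. A sketch: the ~15-year figure is the commenter's, while the cost/saving split below is invented to match it:

```python
# Simple payback: years until cumulative fuel savings repay the
# up-front cost. Ignores discounting, maintenance and price changes.
def payback_years(install_cost, annual_saving):
    return install_cost / annual_saving

# e.g. a 2,250 GBP boiler upgrade saving a light user 150 GBP/year
print(payback_years(2250, 150))   # 15.0
```

This is also why a carbon tax bites so slowly here: the tax raises the annual saving only modestly, while the up-front cost stays put.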
Lots of good fun to be had over at Willard Tony's where the mouthbreathers think that IAM models are GCMs in drag.
An alarmingly productive search result:
"Popper is often, with Hayek, associated with a return to classical liberalism, or rather a certain caricature of it which sees no role for any social institutions but markets and a minimal nightwatchman state to enforce property rights. I think this is a gross misunderstanding, and that his actual, sound, political theory is quite compatible with the best traditions of social democracy; I would be happy to call myself a Left Popperian, if I thought anyone would get it."
Wouldn't precip be a function of temperature? As I understood it, CO2 wouldn't be a direct contributor to precipitation; it would be the warming created by CO2. In that respect, reducing cost damage to "temperature" and "CO2" would be correct, I would presume? If CO2 rose and temperature did not, I would think precipitation would be unaffected, though vegetation might be. Is that inaccurate?
[Not directly. I think it's reasonable to use global T change as a measure of total change, and it will be a direct input into (say) sea level change. But precip is much harder: the impacts are mostly ecological, and hence local; and we're just as likely to see impacts from shifts in rain patterns as from changes in the total -W]
Dunc --- You could always add more insulation on the inside of the walls. Anything from proper insulation covered by new drywall to just lots of heavy draperies.
Ceilings would be lower, aiding lower heating bills.
tim B --- Global precipitation is a function of temperature but regional and local precipitation changes are much more difficult to work out. Much of the excess precipitation is out at sea, for example.
Dunc is right about the problem: adding insulation (and caulking cracks, even air-barrier latex paint) restricts airflow. Dry rot gets active in wood at around 60 percent humidity -- very easy to reach that under a "cool roof" or inside walls.
This is an excellent paper on the issue: http://www.buildingscience.com/documents/insights/bsi-035-we-need-to-do…
With solar cells becoming cheap enough, PV solar directly wired to resistance heaters (much simpler than solar thermal hot water collectors) may solve the problem, for the leaky old buildings most of us live in.
WC- [Not directly. I think its reasonable to use global T change as a measure of total change, and it will be a direct input into (say) sea level change. But precip is much harder: the impacts are mostly ecological, and hence local; and we're just as likely to see impacts from shifts in rain patterns as from changes in the total -W]
Interesting. I have always understood that local effects could be different, both in terms of temperature, precipitation, etc, but I've never seen it postulated that changes to these effects were not attributable to a global net warming. Whence, I can see local amplifications where globally rising temperature affects different local areas, but never where a net zero global temperature rise would independently affect precipitation. That's why I thought global costing as a function of CO2 and temperature would be complete. Sort of like costing health care as a function of age and gender in the whole would encompass individual excursions.

Intuitively, local precipitation seems more sensitive to temperature changes even on larger-scale natural phenomena like ENSO. I have not seen (nor looked, as I am not a climate scientist) whether overall net precipitation is increased/decreased or just shifted around during cycles, but it's definitely a clear signal that local precipitation sensitivity to regional temperature shifts (at least as we correlate them to SST in certain regions) is fairly high. It would be interesting to see if small changes in global temperatures amplify/dampen natural oscillations (i.e. whether the "damping" of the oscillation is a function of net temperature - a first thought would be that warmer encourages more mixing/less viscosity, and whence a smaller SST range with a warming ocean; and of course there is the presumption that the SST difference is the ENSO driver and not merely a correlated observable). Interesting, though, as a perspective on net cost, with local effects being greater than the whole would indicate. More obvious too if effects are hemispherical, as the cost of drought in the Northern hemisphere would vastly outweigh the cost of flooding in the Southern hemisphere even if net precipitation is unchanged.
Hank Roberts - [With solar cells becoming cheap enough, PV solar directly wired to resistance heaters (much simpler than solar thermal hot water collectors) may solve the problem, for the leaky old buildings most of us live in.]
Be careful with this. For one, (at least in the USA) the cost of PV energy is higher than power from the utility alone. For another, it would be a serious safety/building-code violation to penetrate a liveable building space with primary PV wiring: you would need external overcurrent protection and disconnect capability before penetrating a wall or roof. Finally, I presume the cooling and damp are most pronounced at night, when PV produces nothing.
Sound PV design is built around fault tolerance and danger. For grid-tie systems, fault tolerance determines where the circuit breaker is placed in the panel. Even odder, wire size on the AC side of the inverter is determined by grid voltage variation, not simply by amperage as it is for general circuitry. I once had a system where the building's main circuit breaker had to be downsized because the fault current from the PV system exceeded the rating of the panel.
KP has always been controversial. He is arguably a genius, but he has his failings that occasionally make him easy to dismiss.
Part of his problem is his public image, although he is capable of producing some decent spin of his own. On the whole, it's best to judge him on his record, rather than a few odd creases in his career.
Oh, and I forgot to mention KP's habit of changing sides, which has caused a lot of debate in itself.
the weasel is, as per usual, missing the point. if you accept Pindyck's premise that the uncertainty bars for climate IAMs are so large as to render them functionally useless, then it follows that climate mitigation is best framed in terms of risk management, not optimization.
so, no, it doesn't follow, as the weasel seems to think, that a carbon tax sidesteps the problem. quite the opposite. under a risk management framework C&T is a much more suitable policy tool because it sets a limit on emissions (risk), not price (optimum).
"Still you folk are notorious for having names like "Peentangler" -W"
Oh yeah? Well, you guys live in places with names like Barking, Dorking, and Shellow Bowells 8^D!
@ Marlowe Johnson
No, this does not follow. Setting a cap at a certain target beyond which a specific risk lurks would mean that we are ready to avoid this risk at all costs. That is not how we value risk (and also not how we usually set a cap as soon as considerable costs for avoiding are involved!), and Pindyck acknowledges this (by bringing in preference parameters). In fact, he explicitly argues for a tax as a consequence of a more subjective valuation of the present value of avoided future risk in the conclusion. He still acknowledges IAM based SCC estimates as a useful, if somewhat arbitrary benchmark estimate for the introduction of a tax (with eventual updating). Incidentally, this is more or less to the letter what Weitzman has already suggested years ago.
Generally speaking, I have no idea why Pindyck puts this out as a working paper. The discount rate issue has been treated much more thoroughly in the literature since the Stern Review, together with the welfare/utility function question (especially by Nordhaus); Weitzman has discussed catastrophic risk (with regard to CS, exactly as Pindyck does!) and the convexity of the damage function in much, much greater detail. What Pindyck presents here is merely a sort of graduate term paper on what is already known (throwing quite a lot of the literature under the bus, in fact). There is absolutely nothing, nothing at all, that has not already been said much better, and in much more detail, before - and Pindyck doesn't even acknowledge it all! On his final point as to what to do in the face of failing IAMs and catastrophic risk (i.e. fat tails)? Oh, look here, the one we do not talk about, and another one we do not know:
That is, while Pindyck is talking about catastrophe insurance in excruciatingly vague terms, there have already been published papers that contain actual decision criteria for policy in the face of catastrophic risk, rather than some fluffy gobbledygook that tells you exactly nothing about what to do (other than that you should go with the IAM based estimate, anyway). "Avoiding catastrophic risk" is a lot, but it is not a policy.
Karl Popper was somewhat of a Social Democrat, but such a fierce opponent of Marxism that he is often appropriated by the right. I am basing this on the account of someone who knew Popper well - Professor Bryan Magee in Confessions of a Philosopher.
The joke about Popper as a liberal (yes, that is how he described himself) is that he was fiercely intellectually domineering. To his credit, some of his pupils went on to great eminence (Feyerabend, Lakatos, Gellner). It was Gellner who said that Popper's magnum opus The Open Society and its Enemies (a fierce attack on Plato, Hegel, Marx and historicism) should be called The Open Society by One of its Enemies.
PS We should not be too hard on Popper - there are many notorious examples of philosophers failing to live up to their highest ideals: Jefferson and slavery; Rousseau, who gave up all his illegitimate children to a foundling hospital with a 50% survival rate.
Ya know, more on modeling in general -- and what it's letting us see -- would be welcome. Understanding successful modeling calls for a revision of how we see the world, as marked as the change that came with understanding Statistics 101 -- pulling useful facts out of a mass of experience and data that people just didn't know what to do with.
Stunning example here:
Transportation Research Board of the National Academies
Fundamentals of Airplane Accident Dynamics: Extracting Meaning from Fatal Accidents
Short answer: transport aircraft build in a large safety margin; general aviation aircraft don't. General aviation has 10x as many accidents as transportation aviation. How can this be?
Long answer--see the paper, this is the sort of result only mathematical models -- computers and statistical analysis -- will get out of huge collections of data. In that study they learned these things:
(a) the concept of amplification of structural loads on close approach to an instability, which explained in-flight airframe failure;
(b) the establishment of load and speed at the point of failure;
(c) the finding that the most likely failure occurs slightly above design cruise speed if minimum strength and minimum stiffness were used in design;
(d) the correlation of accident rates with structural flexibility and speed;
(e) the finding that strong trends at high speed in four variables could not be intuitively separated until a math model of the failure process was created;
(f) the finding that the model matched the 10:1 difference in accident rates between GA and transports; and
(g) the finding that the model demonstrated that structural stiffness was more important than structural strength. ...
The article says that for fatal accidents in general aviation, the "largest single category" is "in flight airframe failure" -- they compared each "fleet" (specific aircraft model?) because each fleet had a characteristic pattern and rate of accidents. 10:1 was the worst rate found, compared to transport aircraft. They don't say which fleet reached that mark for in-flight airframe failures. I could guess but I won't. But y'all who are power pilots can read that and make more sense of it than I can. I found it remarkable: pulling the specific statistics out of a huge data set that's been collected for years but not understood till computer modeling of the failure process was done. The paper does say "these data suggest that design compromises affecting safety were more important than pilot error."
I'd guess that's an unwelcome suggestion. Other people make mistakes; everyone flies something designed, mostly by others, to some spec.
But -- If the spec's that cheeseparingly stingy -- eventually, years later, the numbers will emerge if there are significant externalized deferred costs. For customers whose planes fall apart in the air -- that's disturbing.
And, ya know, without modeling -- you wouldn't know about this stuff.
Modeling -- as much as statistics -- changes how we see the world.
(That's why there are speeches in the Congress against it, and rants about "virtual risks" -- risks that happen to show up with enough data, but don't leave bodies stacked along the sidewalks in every neighborhood)
Look at this history -- a very early case where a conclusion was pulled out of the collected experience , after long delay and much obfuscation:
"... the V-tail Bonanza had the best overall safety record of any general aviation aircraft, but had the worst record when it came to mid-air break-up. The V-tail Bonanza was twenty-four times more likely to suffer from in-flight structural failure than the Straight-tail Bonanza. The only difference between these two models was the configuration of the tail. Beech disputed this claim and said that the V-tail was only eight times more likely than the straight-tail to suffer from in-flight failure ...."
Ya know what else the modelers are able to understand, beyond the understanding individuals have managed before?
Social networks. That sort of analysis and modeling allows interesting discoveries.
Imagine you know that you're "the good guys" and subversives are trying to interfere with the marketing of your nation's precious geological fluids, pretending that fertilizer is bad stuff to add to the atmosphere. You can .... I dunno. Interfere with them.
A simpler older example is interesting:
[Hank Roberts] “… the V-tail Bonanza had the best overall safety record of any general aviation aircraft, but had the worst record when it came to mid-air break-up. The V-tail Bonanza was twenty-four times more likely to suffer from in-flight structural failure than the Straight-tail Bonanza. The only difference between these two models was the configuration of the tail. Beech disputed this claim and said that the V-tail was only eight times more likely than the straight-tail to suffer from in-flight failure ….”
It is known as the "fork-tailed doctor killer", and the issue was a flutter/balance problem with the tail. The reality is that GA pilots' time and operating constraints aren't as constricted. Commercial aircraft are designed with a more unstable CG for fuel efficiency, but operating limits, policy and regulation keep them in a stable envelope. Also, a GA aircraft can stall/spin and break up in the air, so an airframe failure gets recorded even when pilot error is the cause. All airplanes have a coffin corner, but if you never get near it (because if you do, you get sacked), failures will be fewer. This is an excellent example for separating cause and correlation.
> fluttering balance
Not the only problem. It's an early case of the issue pulled out by that more recent modeling analysis I quoted first, that found
"(c) the finding that the most likely failure occurs slightly above design cruise speed if minimum strength and minimum stiffness were used in design"
That's the scary one. And that's for general aviation generally -- that's the design choice made for general aviation but not made for transport aviation, where 'better than the minimum ... in design' was required.
Point is you can't always show causation, certainly not for each separate and individual failure --- but when you have the pattern to establish sufficient correlation -- tobacco, asbestos, these airframe failures, and much else -- a clear public health choice is there to be found.
PCA isn't causation.
Remember where that "correlation is not causation so you can't prove any risk" argument comes from.
Doctor Connolley, sorry if this is not the right place, but I saw this news story and hoped you could clarify.
The article originated from the Fail (sorry, Mail) on Sunday, so I'm already suspicious.
[I was hoping to avoid this entire sorry mess. That article is impressively bad - a garbled version of the Fail's nonsense.
The truth is: last year's sea ice was unusually low, record-breakingly so, as I'm sure we all remember. To no-one's great surprise, this year's ice won't break last year's record. That's all there is to it. Looking at different years (http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm, or perhaps http://tamino.files.wordpress.com/2013/03/extent_anom_1yr.jpg?w=500&h=3…) you can see immediately that the decline isn't monotonic -W]
The truth is: last year's sea ice was unusually low, record-breakingly so, as I'm sure we all remember. To no-one's great surprise, this year's ice won't break last year's record. That's all there is to it.
Thank you Doctor Connolley.