For Some Value of "Experiment"

I'm running about a day behind on my Inside Higher Ed commentary because the ongoing search has made this a Week From Hell, but there was an interesting news item yesterday about an economic study suggesting that health care subsidies would improve education more than tuition credits:

The study's bottom line finding, in the authors' words, is that "health plays an extremely important role in determining an individual's educational attainment. On average, having been sick before the age of 21 decreases [an individual's average educational attainment] by 1.4 years."

(I assume that "sick" here means something more serious than seasonal colds or allergies. At least, I sure hope so...)

The bit that I'm really curious about, though, is the next paragraph:

To gauge the logical policy implications of their findings, the authors ran two experiments on a pool of 8,000 individuals aimed at assessing the impact of two possible subsidies: a $2,100 per year college tuition subsidy (available to all who attend college) and a $778 a year subsidy for health expenditures made for four years to all 16-year-olds who attend high school. The overall costs of the two subsidies were the same.

They find that the health expenditure subsidy is more effective, increasing the highest level of educational attainment by 20-25% more than the tuition subsidy. Which is an interesting result.

I'm curious, though, as to what is meant by "experiment" in this context. They can't really have handed out $16 million to a bunch of college-bound students and tracked the results for fifteen years. At least, I would find that really hard to believe. At the same time, though, this seems like a difficult thing to get from a computer model. So I'm puzzled...

The abstract of the paper in question doesn't shed any light, referring only to "policy experiments." I don't have free access to this from home, and I don't have time to read it at work, but if somebody knows what the deal is here, I'd love to know.

Maybe we should find a hard-rockin' economist, and get them to do a "Basic Concepts" post defining "policy experiment" for us...

Of course it is not an actual experiment; it's just a model simulation. Given their model that being sick decreases educational attainment by a certain amount, they simulate what the effect would be of decreasing the number of sick students, and compute the cost necessary to produce that decrease (e.g., how much it costs in health subsidies to reduce sickness by a certain amount). They compare that to another model of how college tuition subsidies affect educational attainment, and the associated costs.

By Ambitwistor (not verified) on 17 Jan 2007 #permalink

Essentially, they use observational data--i.e. observations on a bunch of people without experimental interventions--to estimate a very complicated statistical model relating health, academic performance, tuition costs, and educational attainment. They assume that this model is causal, and use it to simulate the effects of the two policies they mention.
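To make that concrete, here is a toy sketch of what a "policy experiment" in this sense amounts to. Everything in it is invented for illustration (the variables, the coefficients, the way the two subsidies enter), and the paper's actual model is far more elaborate, but the logic is the same: fit a model to observational data, treat it as causal, and then feed it counterfactual inputs.

```python
# Toy sketch of a "policy experiment" as model simulation.
# All numbers and functional forms are made up; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n = 8000  # same pool size mentioned in the article

# Fake "observational" data: whether someone was sick before 21, family
# income (in $1000s), and resulting years of educational attainment.
sick = rng.random(n) < 0.3
income = rng.normal(50.0, 15.0, n)
attainment = 14.0 - 1.4 * sick + 0.02 * income + rng.normal(0.0, 1.0, n)

# "Estimation" step: fit the attainment equation from the data.
X = np.column_stack([np.ones(n), sick, income])
beta, *_ = np.linalg.lstsq(X, attainment, rcond=None)

# "Policy experiment" step: re-simulate attainment under counterfactual
# inputs, holding the fitted coefficients fixed.
# Health subsidy: assume (arbitrarily) it halves the chance of having been sick.
sick_subsidized = sick & (rng.random(n) > 0.5)
X_health = np.column_stack([np.ones(n), sick_subsidized, income])

# Tuition subsidy: assume (arbitrarily) that $2,100 acts like extra income.
X_tuition = np.column_stack([np.ones(n), sick, income + 2.1])

print("baseline mean attainment:", (X @ beta).mean())
print("with the health subsidy: ", (X_health @ beta).mean())
print("with the tuition subsidy:", (X_tuition @ beta).mean())
```

Nobody gets handed a check and followed for fifteen years; the "experiment" happens entirely inside the fitted model, which is why the assumptions behind the model do all of the work.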

The big problem in economics is that, for the most part, we can't do experiments: They tend to, as you say, cost $16 million and take 15 years, and would often be unethical in any case. There's a big divide among empirical economists about how to proceed.

One approach is to look for so-called "natural experiments," situations where the world happens to generate variation that is sufficiently experiment-like to permit credible causal inferences. Two classic examples are a study of the minimum wage, which examines employment on either side of the New Jersey-Pennsylvania border over a period when NJ raised its minimum wage but PA did not, and a study of the effect of immigration on labor markets that uses the Mariel Boatlift (when Cuba sent a bunch of unwanted citizens to Miami, for the most part against their will) to look at the effect on the Miami labor market.
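For concreteness, the arithmetic behind that kind of natural-experiment comparison (a "difference-in-differences" estimate) is simple. The numbers below are made up, not the actual NJ/PA figures:

```python
# Difference-in-differences with made-up numbers, in the spirit of the
# NJ/PA minimum wage comparison: the "treated" state raised its minimum
# wage, the neighboring "control" state did not.
employment = {
    #                 (before, after) average employment per store
    "NJ (treated)": (20.0, 21.0),
    "PA (control)": (23.0, 21.5),
}

nj_change = employment["NJ (treated)"][1] - employment["NJ (treated)"][0]
pa_change = employment["PA (control)"][1] - employment["PA (control)"][0]

# Attribute the difference between the two changes to the policy, on the
# assumption that both states would otherwise have followed the same trend.
print("diff-in-diff estimate of the wage increase's effect:",
      nj_change - pa_change)
```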

The problem with this approach is that the world doesn't give us enough good natural experiments to study everything we want to know about. So another approach is so-called "structural" estimation, which attempts to estimate causal effects from non-experimental data by modelling all of the determinants of these decisions. This style of research requires many more, and much stronger, assumptions, but if those assumptions are satisfied it tells us much more than an experiment ever can. The paper cited here is of this sort.

There's a lot of hostility between the two camps. I fall more into the "natural experiment" camp (also known as the "reduced form" camp, for the type of statistical model used), but I think that both have value so long as we are appropriately circumspect about the strength of the evidence. This paper fails that test pretty badly.

Usually in economics, "experiment" will refer to an actual experiment (e.g. when Tennessee randomly assigned elementary students to smaller or larger classes) or to a "natural experiment." That's enough of an abuse of language. But this paper's use of the word "experiment" when what they really mean is "simulation" gives the rest of us a bad name...

Chad, MattXIV has all the explanation you need at a "ground truth" level: Aggies.