A Poor Description of the Monty Hall Problem

My latest book project has been coediting the proceedings of the 2013 MOVES Conference held in New York City, which has turned out to be a lot harder than I anticipated. For the last few weeks it's been all-consuming, and spending so many hours in front of the computer staring at other people's writing has left me with little enthusiasm for producing any of my own.

Happily, the book is now finished (well, modulo the inevitable copy editing and production chaos at any rate), so it's time to do some blogging again. And what better way to get back into the swing of things than to look at an overly simplistic description of the Monty Hall problem, this time from F. D. Flam writing at The New York Times:

A famously counterintuitive puzzle that lends itself to a Bayesian approach is the Monty Hall problem, in which Mr. Hall, longtime host of the game show “Let's Make a Deal,” hides a car behind one of three doors and a goat behind each of the other two. The contestant picks Door No. 1, but before opening it, Mr. Hall opens Door No. 2 to reveal a goat. Should the contestant stick with No. 1 or switch to No. 3, or does it matter?

A Bayesian calculation would start with one-third odds that any given door hides the car, then update that knowledge with the new data: Door No. 2 had a goat. The odds that the contestant guessed right -- that the car is behind No. 1 -- remain one in three. Thus, the odds that she guessed wrong are two in three. And if she guessed wrong, the car must be behind Door No. 3. So she should indeed switch.

This comes in the middle of an article extolling the virtues of Bayesian over frequentist approaches to statistics. That's an interesting discussion, but I don't think the Monty Hall problem has much to contribute to it. The trouble with Flam's description is that there is nothing distinctively Bayesian about the calculation she describes. Moreover, her description obscures the really challenging part of the problem.

Yes, of course the contestant must revise his beliefs regarding the location of the car based on the information he obtained from Monty's actions. That's obvious. The challenge is determining the correct way of doing the updating. Flam's casual assertion that the probability of the car being behind door one remains one in three is precisely the point at issue. Most people's intuition is that Monty's actions raise the probability to one in two, and it is mighty hard to convince them they are wrong. The most effective way of doing that in practice is to take the frequentist approach of playing the game multiple times. You see quickly that you win far more often by switching.
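
For anyone who wants to run that experiment without a deck of cards, here is a minimal simulation sketch (assuming the standard rules, in which Monty always opens an unchosen goat door):

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard game: Monty always opens an unchosen goat door."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither the contestant's pick nor the car.
        monty = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the single remaining closed door.
            pick = next(d for d in doors if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # roughly 0.33
print("switch:", play(switch=True))    # roughly 0.67
```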

Certainly you can use Bayes' theorem to illuminate the issues in the Monty Hall problem, but a frequentist would have no problem with that. Bayes' theorem is, indeed, very useful in this context, since it focuses your attention on the relevant variables in updating your probabilities correctly. In particular, it explains why you need detailed information regarding Monty's decision procedure before you can determine whether his actions should change your view regarding the correctness of your initial choice. But that's a far different claim from the one about how Bayesian statistics provide some unique insight into the workings of the problem.
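
To make that concrete, here is a small illustrative sketch of the Bayes' theorem bookkeeping, using Flam's door numbering: the contestant picks door 1 and Monty opens door 2 to show a goat. The posterior depends entirely on the assumed model of how Monty chooses a door:

```python
def posterior_pick_is_right(p_open2_if_car1, p_open2_if_car2, p_open2_if_car3):
    """P(car is behind the picked door 1 | Monty opened door 2 and showed a goat),
    starting from a uniform 1/3 prior over the three doors."""
    prior = 1 / 3
    evidence = (p_open2_if_car1 + p_open2_if_car2 + p_open2_if_car3) * prior
    return (p_open2_if_car1 * prior) / evidence

# Standard Monty: he knows where the car is, always opens an unchosen goat door,
# and flips a fair coin when the contestant's own door hides the car.
print(posterior_pick_is_right(0.5, 0.0, 1.0))   # 1/3, so switching wins 2/3 of the time

# Ignorant Monty: he opens door 2 or 3 at random and merely happened to show a goat.
print(posterior_pick_is_right(0.5, 0.0, 0.5))   # 1/2, so switching gains nothing
```

Under the standard rules the probability that the first choice was right stays at one in three, so switching wins two-thirds of the time; if Monty had merely opened a random door and gotten lucky, it really would be fifty-fifty.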

As it happens, my book on the Monty Hall problem contains some discussion of how different approaches to probability illuminate different aspects of the problem. Bayesian, frequentist and classical approaches to probability all have something to say; all are useful and no one of them is more correct than the others. So I argue, at any rate.

And that leads me to a more general problem with Flam's article. From what she presents, it's very unclear what it is that frequentists and Bayesians are actually arguing about. Bayesians, no less than frequentists, are going to be interested in relative frequencies obtained from large samples of data. And frequentists, no less than Bayesians, understand that even very strong correlations found in a data set can ultimately turn out to be spurious. So what's all the fuss about?

Consider this:

Take, for instance, a study concluding that single women who were ovulating were 20 percent more likely to vote for President Obama in 2012 than those who were not. (In married women, the effect was reversed.)

Dr. Gelman re-evaluated the study using Bayesian statistics. That allowed him to look at probability not simply as a matter of results and sample sizes, but in the light of other information that could affect those results.

He factored in data showing that people rarely change their voting preference over an election cycle, let alone a menstrual cycle. When he did, the study’s statistical significance evaporated. (The paper’s lead author, Kristina M. Durante of the University of Texas, San Antonio, said she stood by the finding.)

Dr. Gelman said the results would not have been considered statistically significant had the researchers used the frequentist method properly. He suggests using Bayesian calculations not necessarily to replace classical statistics but to flag spurious results.

Indeed. The initial finding sounds like a straightforward example of poor statistical technique, as opposed to some fundamental indictment of frequentist statistics generally. Frequentists certainly do not believe that results and sample sizes are the only things that matter, and they do not need to have it explained to them that they should consider information that could affect their data.

The article builds up to this:

But even Hume might have been impressed last year, when the Coast Guard used Bayesian statistics to search for Mr. Aldridge, its computers continually updating and narrowing down his most probable locations.

The Coast Guard has been using Bayesian analysis since the 1970s. The approach lends itself well to problems like searches, which involve a single incident and many different kinds of relevant data, said Lawrence Stone, a statistician for Metron, a scientific consulting firm in Reston, Va., that works with the Coast Guard.

At first, all the Coast Guard knew about the fisherman was that he fell off his boat sometime from 9 p.m. on July 24 to 6 the next morning. The sparse information went into a program called Sarops, for Search and Rescue Optimal Planning System. Over the next few hours, searchers added new information -- on prevailing currents, places the search helicopters had already flown and some additional clues found by the boat’s captain.

The system couldn’t deduce exactly where Mr. Aldridge was drifting, but with more information, it continued to narrow down the most promising places to search.

What's described here does not sound like a great victory for Bayesian statistics. Rather, it sounds like simple common sense to consider all relevant information in trying to locate a small target in a large space. While this analysis was going on, was there a group of frequentists standing off to one side, scoffing at the idea of updating your views based on new information?

To my mind, the most important part of the article is this:

Others say that in confronting the so-called replication crisis, the best cure for misleading findings is not Bayesian statistics, but good frequentist ones. It was frequentist statistics that allowed people to uncover all the problems with irreproducible research in the first place, said Deborah Mayo, a philosopher of science at Virginia Tech. The technique was developed to distinguish real effects from chance, and to prevent scientists from fooling themselves.

Uri Simonsohn, a psychologist at the University of Pennsylvania, agrees. Several years ago, he published a paper that exposed common statistical shenanigans in his field — logical leaps, unjustified conclusions, and various forms of unconscious and conscious cheating.

He said he had looked into Bayesian statistics and concluded that if people misused or misunderstood one system, they would do just as badly with the other. Bayesian statistics, in short, can’t save us from bad science.

That's the long and short of it. Applying statistics correctly is hard, even for people with professional training in the subject. But the problems are found in the complexity of real-life situations, and not in the underlying philosophical approaches to probability and statistics.

Argh, the Monty Hall Problem! Arguments as to the proper choice abound, and switching is always the best strategy, two-to-one. Except: as an old fart who actually watched the show (when I was sick at home from school), I can report that Monty did NOT always open a door. What if he only opened a door when the contestant had picked the door with the car? Switching would be a bad idea then!

Now, this Monty Hall character seemed pretty smooth, and even as a child I suspected that he wasn't beyond tipping the game toward the house. And, in fact, I seem to recall that most contestants were loath to switch, so maybe he only showed the goat when the contestant had chosen the second goat, secure in the knowledge that the producer wouldn't have to give away the car until another day, tipping open the door merely to add to the excitement.

My point is: the conundrum as usually stated doesn't consider Monty's agency, but doesn't eliminate it either. Thus the answer isn't nearly as straightforward. If Monty, primed with the knowledge of where the car lurks, always opens a door with a goat behind it, then it all works out and Marilyn Vos Savant is left smiling. But that detail is usually left out.

The analogy to sloppy statistical analysis is left for the reader. Bayes wants no part of it.

By weirdnoise (not verified) on 30 Sep 2014 #permalink

I get so frustrated when I read about the Monty Hall problem. People start going on about updating probabilities, etc, etc. The problem is trivial. Once you make a selection of a door you implicitly partition the probability space into two sets. The first set contains the door you chose, the other set contains the doors you didn't choose. Since each door has a 1/3 probability of having the prize behind it, there is an objective probability of 1/3 that the first set contains the prize and an objective probability of 2/3 that the prize is contained in the second set. You can open all the doors you want and this probability NEVER changes. The reason you switch to the second set is that you now know which door to choose, because it is the only door in the 2/3 probability space that is still closed.

This can be easily programmed on a computer with no minds involved and one gets the same result.

weirdnoise,

It is well known, and inarguable, that the solution to the Monty Hall problem depends on the precise formulation of the problem. That is not at issue; nobody is saying otherwise. In the classical formulation of the problem that is almost universally discussed, the contestant chooses a door, then Monty opens one of the other two doors to show a goat. There is no question of rigging of the game in this formulation. While this may or may not apply to the actual game show, this is the problem that is under discussion when we refer to the "Monty Hall Problem." This formulation seems very counterintuitive to many people, which is why it is an interesting problem in the first place.

Over on Jerry Coyne's page, "George" posted what I think is the most succinct method of describing the correct answer to Monty (even better than what I recall of your book, Jason!). It's this:

Suppose after choosing a door but before a door is opened, you were offered the opportunity to exchange your door for the other two – both of them. You would take that trade. You are still being offered the same thing after one door has been opened. Nothing has changed except you are now certain where one goat is.

Monty is effectively giving you the choice between "take car if it's behind A" and "take car if it's behind B or C." Of course the second one is better.

An "intuitionist" thinks that once Monty opens the goat door, the problem "resets" from a one-of-three chance to a one-of two-chance. A "frequentist" demonstration can disabuse this pretty effectively, but doesn't address the underlying intuitionist problem, which is why the game doesn't reset after Monty opens the door. Without that, an educated intuitionist can accept that switching is preferable, but will remain baffled as to why. Simply stating that the problem remains a 3-door problem does not help. While demonstrably true, it's not explanatory -- or at least not explanatory in a way an intuitionist is expecting.

By Jeff Chamberlain (not verified) on 02 Oct 2014 #permalink

I hope you sent this for publication to the NY Times. I found the assertion by Flam re Monty Hall to be so counterintuitive that I Googled the matter and found both your blog and the Wiki articles very helpful. Please ensure that the Editors get to see your comments. HG

By Howard Green (not verified) on 02 Oct 2014 #permalink

Jeff Chamberlain,

Perhaps this way of thinking is more intuitive. The fact that Monty opens a door and shows you a goat is irrelevant to the problem. You either selected the door with the car with your original selection or you did not. If you did not, then you should obviously switch. If you did, you should keep your door. What is the probability that you selected the right door to begin with? The intuitionist should have no difficulty with that question. The intuitionist should also have no difficulty with the calculation that P(wrong) = 1 - P(right). Therefore, it is intuitively obvious that there was a 2/3 probability that you selected the wrong door to begin with, so switching gives a 2/3 probability of winning the car.

Jeff,

If that's not convincing, consider an extension of the problem. In this extension, there are 100 doors, with 99 goats and 1 car. You select a door, and Monty opens 98 doors, all showing goats. He offers you the chance to switch your door for the remaining closed door. Should you switch? Again, what is the probability that your original choice was the correct one? Obviously, there was a 1% chance that you selected correctly to begin with. Therefore, switching gives you a 99% chance of winning the car.
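
A quick way to check the 100-door version is to generalize the same kind of simulation to n doors (an illustrative sketch; it uses the fact that, after Monty opens n-2 goat doors, switching wins exactly when the original pick was wrong):

```python
import random

def play_n_doors(n, switch, trials=100_000):
    """n-door version: Monty opens n-2 unchosen goat doors, leaving one other door."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(n)
        pick = random.randrange(n)
        # After the n-2 goat doors are opened, the lone remaining closed door
        # hides the car exactly when the original pick was wrong.
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print(play_n_doors(100, switch=True))    # roughly 0.99
print(play_n_doors(100, switch=False))   # roughly 0.01
```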

The problem is not convincing an "intuitionist" that switching is preferable. The problem is explaining why that's the case, i.e., why the problem doesn't "reset" once one (or more) goat doors have been opened. In a 3-door formulation, it is indeed obvious that the chooser has a one-in-three chance initially. But when a goat door is opened, the intuitionist thinks that door becomes irrelevant and a "new" 2-door problem has superseded the original 3-door problem. Similarly, in a 100-door problem, if 98 goat doors are opened, an intuitionist thinks that what's left is a 2-door problem. "But that's not right," while true, doesn't address whatever intuitional thinking has caused the statistical mistake.

An intuitionalist doesn't understand why opening a goat door doesn't change things. Suppose the original presentation is 2 closed doors and 1 open door (showing a goat). No one would choose the open door. Because of that an intuitionist will likely see this formulation as a 2-door problem ... and she will also likely not perceive a difference between this formulation and the standard version. In the standard version, the goat door is opened after the initial choice. In the revised version, the goat door is opened before the initial choice. In either case, an intuitionist perceives a practical choice between only 2 (closed) doors -- and doesn't understand what practical difference the open door makes in either presentation.

By Jeff Chamberlain (not verified) on 03 Oct 2014 #permalink

Jeff,

Of course the MH problem is counterintuitive; that's why it receives the attention it does. Let me give another shot at a more intuitive explanation. You brought up a situation in which you choose after Monty opens a door. Let's consider that situation as compared to the classical MH problem:

Choose after door is opened:

1. Monty opens a door and shows a goat.
2. You choose a door - P(car) = 1/2
3. Monty asks if you want to switch doors
a. You reject the switch - P(car) = 1/2 (obviously, nothing has changed since step 2.)
b. You accept the switch - P(car) = 1/2 (car must be somewhere, so probability in 3a + 3b = 1).

In that case, there is nothing to be gained (or lost) by switching. It doesn't matter whether you switch or not.

Classic MH Problem:
1. You choose a door - P(car) = 1/3 (1 door out of 3 has car)
2. Monty opens a door with a goat - P(car) = 1/3 (Monty opening a door doesn't affect the probability that your original choice was the door with the car).
3. Monty offers the switch:
a. You reject - P(car) = 1/3 (Rejecting the switch could not possibly change your probability of winning)
b. You accept - P(car) = 2/3 (Again, car must be somewhere, so 3a and 3b probabilities must add to 1).

In this case, the fact that Monty opens a door doesn't change the probability that you got it right to begin with. Since it's more likely that your original pick was wrong, you should switch.

Maybe a more intuitive way to look at it would be that you are given a choice: either you win a car if you pick the car door or you win a car if you pick one of the goat doors. Which choice would you make? I think that's intuitively obvious; you choose to win the car when you pick a goat door. That is equivalent to switching in the MH problem, whereas choosing to win the car if you pick the car door is equivalent to keeping your original door and rejecting the switch.

"the fact that Monty opens a door doesn’t change the probability that you got it right to begin with."

Yes, this is true, but it's precisely what an intuitionist needs explained, not just stated. To an intuitionist, Monty opening the door may not change the original probabilities, but she thinks it changes the original problem to a different problem.

By Jeff Chamberlain (not verified) on 03 Oct 2014 #permalink

I guess my intuition is different than yours. Monty opening a door, to my intuition, has nothing to do with whether or not I guessed right originally. I know when I make my original choice that Monty is going to open a door and show me a goat. He's going to do so whether I guess right or wrong. Why should the occurrence of this completely expected and predictable event affect the probability that I guessed right originally?

I'll give it another shot. If Monty were not to open any doors, but simply asked whether you wanted to change your mind and switch doors, we would have a totally different situation. In this case, the probability that the car is behind any of the three doors is 1/3. It doesn't matter whether you switch or not; that probability remains the same. I'm sure we can agree on that.

Now, back to the actual MH problem. Relative to the example I just gave, the act of opening a door and showing you a goat gives you important information that you did not have in the previous example. In that example, it did not matter whether you switched or not because, if you chose to switch, you didn't really have any idea which door to switch to. Either of the two remaining doors was equally likely to be right. In the actual problem, though, this is eliminated. You know which door to switch to if you choose to switch. In the previous example, we saw that switching is irrelevant. Surely, armed with that extra information, we must be able to do better than in the example where we didn't have it.

I'm sure it's still counterintuitive, but that can't be helped. Our intuition is not a particularly good guide when it comes to the mathematics of probability.

Jeff @9:

An intuitionalist doesn’t understand why opening a goat door doesn’t change things.

If Monty flipped a coin and opened a door at random (and got a goat), the intuitive answer would be correct; you would then have a 50/50 chance of being right. So I think the mistake being made here is that the 'intuiters' are not mathematically accounting for the fact that Monty always opens a goat door, by rule and design.

That seems to me a reasonable mistake to make, because if you just ask someone how that rule factors into calculating the probabilities, most people probably wouldn't be able to tell you. They obviously aren't considering that information the way that they should.
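
That coin-flip version is easy to check by simulation as well. A sketch (illustrative only) that discards the games where the randomly opened door turns out to hide the car:

```python
import random

def coin_flip_monty(trials=200_000):
    """Monty opens one of the two unchosen doors at random; keep only the games
    in which that door happens to hide a goat, then score the switching player."""
    switch_wins = kept = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        monty = random.choice([d for d in doors if d != pick])
        if monty == car:
            continue   # Monty accidentally revealed the car; discard this game.
        kept += 1
        final = next(d for d in doors if d != pick and d != monty)
        switch_wins += (final == car)
    return switch_wins / kept

print(coin_flip_monty())   # roughly 0.5, not 2/3
```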

Sean -- Knowing in advance that Monty will open a goat door is not part of the original formulation. What the contestant knows in advance (at least on reflection) is that if Monty opens a door it will be a goat door, and that's not the same thing. Do the probabilities change depending on whether the contestant knows beforehand that Monty will in fact open a door? And, to an intuitionist, the important information obtained when Monty opens a goat door is (or seems to be) that the car is now behind one of two -- not one of three -- doors.

Eric -- I think this may be on the right track. But to an intuitionist, why/how "what Monty knows" should affect the mathematics is likely baffling.

Remember, my comments have to do with the explanation, not the reality, of how the probabilities work. I'm not talking about "denialists." As I've worried the MHP over the years, what continually strikes me is how come the various explanations don't seem to meet the typical objections of intuitionists who accept the probabilities. Or, stated another way, why many smart and studious people don't "get it" despite myriad attempts to explain it by very smart and articulate people.

By Jeff Chamberlain (not verified) on 03 Oct 2014 #permalink

Hmmm...I think knowing in advance that Monty will open a goat door is part of the original formulation. Monty stands up there and tells the contestant that they will have a second chance to pick the car after he opens one of the doors.
If people fail to mention it, I would guess that's because they erroneously discount it as unimportant information, not because the problem was described incorrectly to them. In fact, if you look at the discussion on the Wikipedia entry, Vos Savant is quoted as saying that she read thousands of letters, and practically none of them described the problem wrong; they just disagreed with the answer.

It depends on what you mean by the “original formulation” of the problem. On the original game show Let's Make A Deal, it is true that Monty did not always open a door. It's also true that sometimes he offered cash incentives either to switch or to stay.

However, what mathematicians refer to as the Monty Hall problem is not intended to be a perfect imitation of what happened on the show. It was inspired by the show, but it is nonetheless separate from it. In the math problem, it is always taken for granted that Monty will open a door and that there will be a goat behind that door.

It is true, though, that casual statements of the problem in mainstream outlets tend not to state all of the necessary assumptions clearly. One big one that is almost never stated is that in those cases where the player initially chooses the correct door, so that in principle Monty Hall could open either of the other two doors to reveal a goat, we must stipulate that Monty chooses randomly. The trouble is that stating everything explicitly leads to a very unwieldy problem statement.

It's also mostly beside the point. What people find confusing has nothing to do with anything so subtle. Rather, the problem is that most people think that when Monty opens a door, the only thing you have learned is that the car is not behind that door. That's why they think the probabilities are now fifty-fifty. The subtle point is that you have learned more than just that there is no car behind that door, because you also know that Monty operates under certain restrictions when making his choice. That is, what you learn is not, “There is no car behind that door,” but rather, “Monty, who makes his choices in certain specific ways, chose to open that door.” That's what people find difficult to wrap their heads around.
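
To see why that stipulation matters, consider a small illustrative calculation: write q for the probability that Monty opens door No. 2 in the case where the contestant has picked door No. 1, the car is actually behind it, and Monty could therefore open either of the other doors.

```python
def posterior_after_monty_opens_2(q):
    """P(car behind the picked door 1 | Monty opens door 2), where q is the
    probability that Monty opens door 2 when the car is behind door 1 and he
    is free to open either of doors 2 and 3. Uniform 1/3 prior; Monty never
    reveals the car."""
    prior = 1 / 3
    likelihoods = (q, 0.0, 1.0)            # car behind door 1, 2, 3
    evidence = sum(l * prior for l in likelihoods)
    return (q * prior) / evidence

print(posterior_after_monty_opens_2(0.5))  # 1/3: the usual stipulation
print(posterior_after_monty_opens_2(1.0))  # 1/2: Monty always prefers door 2 when free
print(posterior_after_monty_opens_2(0.0))  # 0.0: Monty opens door 2 only when forced
```

Only the stipulation q = 1/2 gives the familiar one-third answer; a Monty with a known door preference changes the numbers, which is exactly why the assumption has to be stated.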

By Jason Rosenhouse (not verified) on 03 Oct 2014 #permalink

While this analysis was going on, was there a group of frequentists standing off to one side, scoffing at the idea of updating your views based on new information?

Of course not. They were finding the frequencies by dumping more people overboard...

In seriousness, I am curious: Do mathematicians of the different schools of thought actually disagree about the usefulness of either set of tools? It seems like frequentists accept Bayes' Theorem (because it is necessarily true even under frequentism) and Bayesians accept frequency analysis (since frequency is a good basic indicator for priors and updates).

Surely the difference is more philosophical, as with the difference between mathematicians who think math is purely a human invention and those who think it involves the fundamental nature of the universe (or whatever the right terms are for those camps)? When has the definition of probability led to fundamental practical disagreement?

Many confuse the concept of probability with what in fact is the case in a specific instance. Even if the guest initially chooses the correct door (that choice is random), it is still the case that there is a 2/3 probability that the prize is behind the other two doors. Nor do we have to stipulate that Monty has to choose randomly in this instance. The problem wouldn't even exist if Monty ever, under any circumstances, chose randomly. All he need do is follow the rule that he should open one of the two doors that has no prize - a completely deterministic rule.

If I do in fact detect particle decay, that doesn't change the probability that I would have detected it at that particular instance. It doesn't all of a sudden mystically become 100%.

In these cases, it is like the proverbial blind squirrel finding a nut. That it did indeed find one, does not change the probability of it finding one in the first place.

The fact that, in this instance, the guest might switch to the other door and lose out on the prize because he initially chose correctly doesn't change the fact that the prize had a 2/3 probability of being behind that other door. The problem is that the guest does not have the opportunity to replay the game and reap the rewards of the law of large numbers.

By Pedr Bran (not verified) on 06 Oct 2014 #permalink

BTW, my computer tells me that I am winning the prize 66% of the time when I switch doors. Mysteriously, no minds or intuition are involved.

1) Randomly assign prize to door A, B, or C.
2) Randomly pick door A, B, or C.
3) Open one of the doors that was not chosen, that does not have the prize assigned to it. A simple if-else clause is sufficient - of course the 'computer' has to 'know' to which door the prize is assigned.
4) Switch to the other unopened door.
66% of the time I get the prize and only 33% of the time a goat.

No Monty, no minds - just a Twitter Machine. There's nothing behind the screen folks.
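
Rendered directly in Python, those four steps look something like this (an illustrative sketch; when the first pick already holds the prize, any fixed rule for which losing door to open gives the same win rate for the always-switch player):

```python
import random

trials = 100_000
wins = 0
for _ in range(trials):
    prize = random.choice("ABC")     # 1) randomly assign the prize to a door
    pick = random.choice("ABC")      # 2) randomly pick a door
    # 3) open a door that was neither picked nor holds the prize
    opened = next(d for d in "ABC" if d != pick and d != prize)
    # 4) switch to the other unopened door
    pick = next(d for d in "ABC" if d != pick and d != opened)
    wins += (pick == prize)
print(wins / trials)   # roughly 0.66
```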

By Pedr Bran (not verified) on 06 Oct 2014 #permalink

Jason, maybe you should bring up the Tuesday Birthday Problem again. That one's even less intuitive, and a lot more depends on the exact formulation of the problem.

By Another Matt (not verified) on 06 Oct 2014 #permalink