Bayes Bars with Multiple Hypotheses

Bayesian bar diagrams—or Bayes bars, as I call them—make it easy to visualize how new evidence bears on multiple hypotheses at once. To see how this works, consider the following example.

Imagine you are a police detective investigating the mysterious, untimely death of a wealthy businessman. The victim apparently died on impact after falling from the third-story balcony of his luxurious mansion. Was it an accident, suicide, or murder? Of course, it’s logically possible that there is some other explanation. Perhaps the businessman is still alive, having faked his own death with a realistic wax dummy, or perhaps you are simply dreaming and the whole scene is illusory. But you have already ruled out such possibilities with confidence, so they occupy a negligibly tiny sliver of your prior Bayes bar. This leaves you with only three plausible hypotheses:

A: The death was an accident.

S: The victim died by suicide.

M: The victim was murdered.

Your first impression of the situation, based on your instincts as an experienced detective, gives you the sense that an accident is somewhat less likely than either suicide or murder. You regard the latter two hypotheses as equally plausible:

[Bayes bar: Accident | Suicide | Murder]

In search of clues, you decide to check whether a suicide note has been left anywhere in the house. The presence or absence of such a note won’t settle the case, of course. Not all suicide victims leave notes, and a cunning murderer might forge a note to mislead investigators. Nonetheless, the presence of a suicide note would constitute relevant evidence, since the existence of a note is more likely on some hypotheses than on others. You consider a note extremely unlikely to turn up if the death was an accident—so unlikely, in fact, that you can safely disregard the possibility. On the assumption that the death was a suicide, you think it’s about as likely as not that the victim left a note: the (S•N) and (S•~N) segments of your prior Bayes bar are roughly equal. Assuming the victim was murdered, you consider it less likely that a note will turn up: the (M•N) segment is shorter than the (M•~N) segment. Shading the segments where N is false, your prior Bayes bar looks something like this:

[prior Bayes bar: (A•~N) | (S•N) | (S•~N) | (M•N) | (M•~N)]
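The segment lengths in the prior Bayes bar are just joint probabilities: each hypothesis H splits into an (H•N) segment of length P(H and N) and an (H•~N) segment of length P(H and ~N). The sketch below computes these segments using illustrative numbers that are assumptions, not given in the text: priors of 0.2, 0.4, and 0.4 for A, S, and M, a note half as likely as not under S, and somewhat less likely under M.

```python
# Sketch of the prior Bayes bar, with illustrative numbers (assumed, not
# from the text): P(A) = 0.2, P(S) = P(M) = 0.4, and note probabilities
# P(N|A) = 0, P(N|S) = 0.5, P(N|M) = 0.3.
priors = {"A": 0.2, "S": 0.4, "M": 0.4}
p_note = {"A": 0.0, "S": 0.5, "M": 0.3}

# Each hypothesis splits into an N segment and a ~N segment whose lengths
# are the joint probabilities P(H and N) and P(H and ~N).
segments = {}
for h in priors:
    segments[(h, "N")] = priors[h] * p_note[h]
    segments[(h, "~N")] = priors[h] * (1 - p_note[h])

# Print the nonzero segments; the (A•N) segment has length 0 and is dropped.
for (h, n), length in segments.items():
    if length > 0:
        print(f"({h}\u2022{n}): {length:.2f}")
```

With these numbers the (S•N) and (S•~N) segments come out equal, and the (M•N) segment comes out shorter than the (M•~N) segment, matching the description above.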

After a thorough search of the premises, you conclude that there is no suicide note. How should you update your credences? First, you eliminate the unshaded segments (where N is true), like this:

[(A•~N) | (S•~N) | (M•~N)]

Then, you renormalize—that is, you stretch the truncated bar back to full length:

[posterior Bayes bar: (A•~N) | (S•~N) | (M•~N)]

To see what happened to your credences, compare this posterior Bayes bar with the prior bar, above. As you can see, the probability of A increased (its segment grew longer), S decreased (its segment shrank), and M stayed roughly the same. So, the absence of a suicide note in this example confirms the accident hypothesis, disconfirms the suicide hypothesis, and doesn’t provide significant evidence either way regarding the murder hypothesis.
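The eliminate-and-renormalize procedure is ordinary Bayesian conditionalization on ~N. Here is a minimal sketch of both steps, again using illustrative priors and note probabilities that are assumptions rather than figures from the text:

```python
# Conditionalizing on the absence of a note (~N), with illustrative
# numbers (assumed): P(A) = 0.2, P(S) = P(M) = 0.4, and
# P(N|A) = 0, P(N|S) = 0.5, P(N|M) = 0.3.
priors = {"A": 0.2, "S": 0.4, "M": 0.4}
p_note = {"A": 0.0, "S": 0.5, "M": 0.3}

# Step 1: keep only the ~N segments (eliminate the segments where N is true).
truncated = {h: priors[h] * (1 - p_note[h]) for h in priors}

# Step 2: renormalize, i.e. stretch the truncated bar back to full length
# by dividing each remaining segment by the truncated bar's total length.
total = sum(truncated.values())
posterior = {h: length / total for h, length in truncated.items()}

for h in posterior:
    print(f"{h}: {priors[h]:.2f} -> {posterior[h]:.3f}")
```

With these assumed numbers, the posterior for A rises above its prior, the posterior for S falls below its prior, and the posterior for M barely moves, reproducing the qualitative verdict in the text.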

We’ll encounter more examples involving multiple hypotheses when we discuss probabilistic models of evidence in the next chapter. First, let’s examine some surprising implications that arise from Bayesianism’s ability to handle multiple hypotheses at once.