Bayesian Bar Diagrams

Bayesian confirmation theory says you should update your beliefs by conditionalizing on any new evidence you acquire. Conditionalization can be described mathematically, as we saw on the previous pages, but we can also think about it in a more intuitive, visual way. To visualize the process of conditionalization, we can use segmented bar diagrams that I’ll call Bayes bars. (Others have called them Bayesian bars, but I like the two-syllable alliteration. See, for example, Ben Page, “Introducing Bayesianism Through The Bayesian Bar,” unpublished manuscript.) A Bayes bar depicts your credences in a set of propositions at a particular time, with the length of each segment representing your degree of belief in a specific proposition. If the propositions are mutually exclusive and exhaustive, the total length of the Bayes bar is 1, or 100%.

We’ll begin with a Bayes bar representing your prior credences in a hypothesis H and its negation. Since H and ~H are mutually exclusive and exhaustive, their probabilities must sum to 1. Let’s suppose your credences are .4 and .6, respectively. Here is a Bayes bar representing those prior credences:
[Bayes bar: H 40% | ~H 60%]
Next, we subdivide each segment of the bar into smaller segments representing your conditional credences about evidence E, given each of those two hypotheses. Let’s suppose your conditional credence in E given H is ¾. In other words, you think E is three times more likely than ~E, given the assumption that H is true. To represent this, we apportion ¾ of the blue region to E and ¼ to ~E. (These fractions of the blue region make up 30% and 10%, respectively, of the whole bar.) Suppose you also think E is half as likely as ~E on the assumption that H is false. So, we divide the yellow segment of the bar, giving 20% to E and 40% to ~E. Using diagonal shading for the segments where E is false, the subdivided bar looks like this:
[prior Bayes bar: (H•E) 30% | (H•~E) 10% | (~H•E) 20% | (~H•~E) 40%]
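
If it helps to see the arithmetic behind the subdivided bar above, here is a minimal Python sketch (the variable names are my own, not part of the text) that computes the four segment lengths from the prior credence in H and the two conditional credences:

    # Prior and conditional credences from the example above.
    prior_H = 0.4          # pr1(H)
    prior_notH = 0.6       # pr1(~H)
    E_given_H = 3 / 4      # pr1(E|H): E three times as likely as ~E, given H
    E_given_notH = 1 / 3   # pr1(E|~H): E half as likely as ~E, given ~H

    # Each segment of the prior Bayes bar is a joint probability.
    segments = {
        "H•E":   prior_H * E_given_H,              # 0.30
        "H•~E":  prior_H * (1 - E_given_H),        # 0.10
        "~H•E":  prior_notH * E_given_notH,        # 0.20
        "~H•~E": prior_notH * (1 - E_given_notH),  # 0.40
    }

    print(segments)                # the four segment lengths
    print(sum(segments.values()))  # 1.0 (up to rounding): the whole bar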

This prior Bayes bar, as I’ll call it, represents your credences prior to learning E. Now, how should your beliefs change if you discover that E is true? When you learn that E is true, this new evidence eliminates the shaded segments of the bar, since E is false in those segments. The two unshaded segments remain:

[truncated bar: (H•E) 30% | (~H•E) 20%]

Because you learned that E is true, you now regard (H•E) and (~H•E) as the only possibilities, so your credences in those propositions should add up to 100%. Therefore, we must renormalize those probabilities: we must increase them so that they sum to 1 while keeping the same proportions between them. In other words, we must stretch the bar back to its full length, keeping the same ratio between the blue and yellow segments. In this example, the truncated bar is half as long as the original, so we need to double its length. Imagine stretching the bar like a rubber band so that the segments maintain their relative proportions. That’s what the process of conditionalization looks like: some segments of the prior Bayes bar are eliminated by new evidence, and the remaining segments expand. Here’s the final result, which I’ll call the posterior Bayes bar:

[posterior Bayes bar: (H•E) 60% | (~H•E) 40%]

The posterior Bayes bar represents your posterior credences after conditionalizing on evidence E. Your posterior credence in hypothesis H is 60%, whereas your prior credence in H had been only 40%. Your credence in H has increased, so E confirms H. At the same time, your credence in ~H decreased, so E disconfirms ~H.
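
If you prefer to see the eliminate-and-stretch procedure spelled out step by step, here is a small Python sketch of conditionalizing on E; it continues the segments dictionary from the earlier sketch and is only an illustration of the renormalization step, not a general recipe:

    # Segments of the prior Bayes bar (joint probabilities), as above.
    segments = {"H•E": 0.30, "H•~E": 0.10, "~H•E": 0.20, "~H•~E": 0.40}

    # Step 1: learning E eliminates every segment in which E is false.
    surviving = {k: v for k, v in segments.items() if "~E" not in k}

    # Step 2: renormalize ("stretch") the survivors so they sum to 1,
    # keeping the same proportions between them.
    total = sum(surviving.values())  # 0.50, the length of the truncated bar
    posterior = {k: v / total for k, v in surviving.items()}

    print(posterior)  # {'H•E': 0.6, '~H•E': 0.4}, the posterior Bayes bar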


The amount of “stretching” needed to restore the bar to its full length doesn’t correspond to the Bayesian multiplier discussed on the previous page. In the example above, we stretched the truncated bar to double its length, but the Bayesian multiplier isn’t 2. The numerator of the Bayesian multiplier is pr1(E|H), which in this case is ¾, since E is true in ¾ of the blue region of the prior Bayes bar. The denominator of the Bayesian multiplier is pr1(E), which in this example is ½ because E is true in 50% of the prior Bayes bar. (E is true only in the two unshaded segments, which have lengths of 30% and 20%.) So, the Bayesian multiplier is
pr1(E|H) / pr1(E) = (¾) / (½) = 1.5
This means that when you learned E, your credence in H should have increased by a factor of 1.5, and indeed it did: as we can see in the diagrams above, your credence in H increased from 40% to 60%.
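
As a quick check on the distinction between the stretch factor and the Bayesian multiplier, here is a short Python sketch using the numbers from this example:

    prior_H = 0.4
    E_given_H = 3 / 4      # pr1(E|H)
    E_given_notH = 1 / 3   # pr1(E|~H)

    # pr1(E): the total length of the unshaded (E-true) segments.
    prior_E = prior_H * E_given_H + (1 - prior_H) * E_given_notH  # 0.50

    stretch_factor = 1 / prior_E            # 2.0: the truncated bar doubles in length
    bayes_multiplier = E_given_H / prior_E  # 1.5: pr1(E|H) / pr1(E)
    posterior_H = prior_H * bayes_multiplier

    print(stretch_factor, bayes_multiplier, posterior_H)  # 2.0, 1.5, 0.6 (up to rounding)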

Bayes bars make the process of conditionalization intuitive, and they offer another important advantage as well. We can use them to estimate how we should update our beliefs, even without assigning precise prior probabilities! For a mundane example, consider the following story:

You’ve just awakened from a sound night’s sleep, and you wonder whether it rained during the night. You think it’s somewhat more likely that it hasn’t rained recently, since you vaguely remember seeing a forecast for sunny skies this week, but you don’t recall the precise details of the forecast. Even if you can’t put an exact number on your credence for the hypothesis that it recently rained, you can visualize a Bayes bar with a shorter segment representing your credence that it rained and a longer segment representing your credence that it didn’t:
[Bayes bar: R (shorter segment) | ~R (longer segment)]

You decide to glance out the window to check whether the walkways are wet. Wet walkways won’t prove that it rained, you understand, since the walkways might be wet from dew even if it didn’t rain. Conversely, even if it rained sometime during the night, the walkways might have dried quickly. So, neither wet walkways nor dry walkways will conclusively settle the question whether it rained. Nonetheless, you consider proposition W (that the walkways are wet) more likely to be true than false if it rained, and you consider W more likely to be false than true if it didn’t rain. Your prior Bayes bar looks something like this:
[prior Bayes bar: (R•W) | (R•~W) | (~R•W) | (~R•~W). Given R, the W segment is somewhat longer than the ~W segment; given ~R, it is somewhat shorter.]
In this illustration, the blue region represents your prior credence in the hypothesis that it recently rained, subdivided into two possibilities: (R•W) is the possibility that it rained and the walkways are still wet, while (R•~W) is the possibility that it rained but the walkways aren’t wet anymore. The yellow region represents your prior credence that it didn’t rain, subdivided into the following two possibilities: it didn’t rain but the walkways are wet from dew (~R•W), or it didn’t rain and the walkways are not wet (~R•~W). You consider this last possibility the most likely of the four, as indicated by its longer length in the diagram.

Now, you look out the window and discover that the walkways aren’t wet. You’re not surprised; after all, you already expected they would be dry. (The two segments where W is false, (R•~W) and (~R•~W), together occupy more than half of the prior bar, so you were more than 50% confident the walkways would be dry even before you looked out the window.) But how should you update your credences in light of this new evidence? Does the fact that the walkways aren’t wet give you a reason to reduce your belief in R significantly? Or does the observation make little difference in this case, since you already regarded rain as somewhat unlikely?

To find out, let’s conditionalize on ~W. First, we eliminate the segments of the Bayes bar that are inconsistent with the evidence. Since we learned that W is false, the two unshaded segments have been ruled out, leaving us with the shaded segments:
[truncated bar: (R•~W) | (~R•~W)]
Then we renormalize, stretching the bar back to full length to represent your posterior credences:
[posterior Bayes bar: (R•~W) | (~R•~W)]
Although we may not be able to determine precise probabilities, the rough proportions indicate clearly enough that your credence in the rain hypothesis drops noticeably when you learn that the walkways aren’t wet. Even though this new information is compatible with the rain hypothesis (the walkways may have dried quickly), and despite the fact that your prior credence in R already was somewhat low at the beginning of the story, dry walkways still disconfirm R. This is because—according to your (admittedly imprecise) prior credences—the walkways are at least somewhat more likely to be wet if it rained recently than if it didn’t rain.
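
The story deliberately avoids exact numbers, but if you want to see the qualitative point numerically, here is a minimal Python sketch with made-up priors chosen only to match the rough proportions described above (rain somewhat unlikely; wet walkways more likely than not given rain, less likely than not given no rain):

    # Illustrative numbers only; the argument needs just the rough proportions.
    prior_R = 0.4        # rain somewhat less likely than not
    W_given_R = 0.7      # wet walkways more likely than not, given rain
    W_given_notR = 0.3   # wet walkways less likely than not, given no rain

    # The two segments of the prior Bayes bar where W is false.
    R_and_notW = prior_R * (1 - W_given_R)              # (R•~W)
    notR_and_notW = (1 - prior_R) * (1 - W_given_notR)  # (~R•~W)

    # Conditionalize on ~W: eliminate the W segments and renormalize.
    posterior_R = R_and_notW / (R_and_notW + notR_and_notW)

    print(prior_R, round(posterior_R, 2))  # 0.4 -> roughly 0.22

Swapping in different made-up numbers changes the exact posterior, but so long as wet walkways are more likely given R than given ~R, conditionalizing on ~W lowers your credence in R.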