Bayesian confirmation theory says you should update your beliefs by conditionalizing on any new evidence you acquire. Conditionalization can be described mathematically, as we saw on the previous pages, but we can also think about it in a more intuitive, visual way. To visualize the process of conditionalization, we can use segmented bar diagrams that I’ll call Bayes bars. (Others have called them Bayesian bars; see, for example, Ben Page, “Introducing Bayesianism Through The Bayesian Bar,” unpublished manuscript. But I like the two-syllable alliteration.) A Bayes bar depicts your credences in a set of propositions at a particular time, with the length of each segment representing your degree of belief in a specific proposition. If the propositions are mutually exclusive and exhaustive, the total length of the Bayes bar is 1, or 100%.
We’ll begin with a Bayes bar representing your prior credences in a hypothesis H and its negation. Since H and ~H are mutually exclusive and exhaustive, their probabilities must sum to 1. Let’s suppose your credences are .4 and .6, respectively. Here is a Bayes bar representing those prior credences:
H 40% | ~H 60%

Now subdivide each segment according to whether some evidence proposition E is true or false. Each of the four resulting segments represents one conjunction:

prior Bayes bar:
(H•E) 30% | (H•~E) 10% | (~H•E) 20% | (~H•~E) 40%
This prior Bayes bar, as I’ll call it, represents your credences prior to learning E. How should your beliefs change if you discover that E is true? When you learn that E is true, this new evidence eliminates the (H•~E) and (~H•~E) segments of the bar, since E is false in those segments. The two remaining segments are:
(H•E) 30% | (~H•E) 20%
Because you learned that E is true, you now regard (H•E) and (~H•E) as the only possibilities, so your credences in those propositions should add up to 100%. Therefore, we must renormalize those probabilities: we must increase them so that they sum to 1 while keeping the same proportions between them. In other words, we must stretch the bar back to its full length, keeping the same ratio between the (H•E) and (~H•E) segments. In this example, the truncated bar is half as long as the original (30% + 20% = 50%), so we need to double its length. Imagine stretching the bar like a rubber band so that the segments maintain their relative proportions. That’s what the process of conditionalization looks like: some segments of the prior Bayes bar are eliminated by new evidence, and the remaining segments expand. Here’s the final result, which I’ll call the posterior Bayes bar:
posterior Bayes bar:
(H•E) 60% | (~H•E) 40%
The posterior Bayes bar represents your posterior credences after conditionalizing on evidence E. Your posterior credence in hypothesis H is 60%, whereas your prior credence in H had been only 40%. Your credence in H has increased, so E confirms H. At the same time, your credence in ~H decreased, so E disconfirms ~H.
Notice that your credence in H increased by a factor of 1.5 (from 40% to 60%). That factor is the ratio of the likelihood of the evidence to its prior probability:

pr1(E|H) / pr1(E) = (3/4) / (1/2) = 1.5
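For readers who like to check the arithmetic, here is a small Python sketch of the same update. The segment names and numbers come from the example above; the code is just an illustration of the eliminate-and-renormalize procedure, not part of the theory.

```python
from fractions import Fraction as F

# Prior credences over the four conjunctions (the prior Bayes bar).
prior = {"H&E": F(3, 10), "H&~E": F(1, 10), "~H&E": F(2, 10), "~H&~E": F(4, 10)}

# Learning E eliminates the segments where E is false.
surviving = {w: p for w, p in prior.items() if not w.endswith("~E")}

# Renormalize: stretch the remaining segments so they sum to 1,
# preserving their proportions.
total = sum(surviving.values())
posterior = {w: p / total for w, p in surviving.items()}

print(posterior["H&E"])   # 3/5, i.e. the posterior credence .6 in H
print(posterior["~H&E"])  # 2/5

# The multiplier pr1(E|H) / pr1(E) works out to 3/2 = 1.5:
pr_E_given_H = prior["H&E"] / (prior["H&E"] + prior["H&~E"])
pr_E = total
print(pr_E_given_H / pr_E)  # 3/2
```

Using exact fractions rather than floating-point decimals keeps the renormalized credences tidy, which makes it easy to see that they match the posterior Bayes bar.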
Bayes bars make the process of conditionalization intuitive, and they offer another important advantage as well. We can use them to estimate how we should update our beliefs, even without assigning precise prior probabilities! For a mundane example, consider the following story:
Suppose you wake up one morning and wonder whether it rained overnight. Call that hypothesis R. You regard rain as somewhat unlikely, so your bar for R and its negation looks like this:

R | ~R

You decide to glance out the window to check whether the walkways are wet. Wet walkways won’t prove that it rained, you understand, since the walkways might be wet from dew even if it didn’t rain. Conversely, even if it rained sometime during the night, the walkways might have dried quickly. So, neither wet walkways nor dry walkways will conclusively settle the question of whether it rained. Nonetheless, you consider proposition W (that the walkways are wet) more likely to be true than false if it rained, and you consider W more likely to be false than true if it didn’t rain. Your prior Bayes bar looks something like this:
prior Bayes bar:
(R•W) | (R•~W) | (~R•W) | (~R•~W)

Assuming it rained, wet walkways are somewhat more likely than not, so the (R•W) segment is somewhat longer than the (R•~W) segment. Assuming it didn’t rain, wet walkways are somewhat less likely than not, so the (~R•W) segment is somewhat shorter than the (~R•~W) segment.
Now, you look out the window and discover that the walkways aren’t wet. You’re not surprised; after all, you already expected they would be dry. (The two segments where W is false, (R•~W) and (~R•~W), together occupy more than half of the bar, which means you were more than 50% confident the walkways would be dry even before you looked out the window.) But how should you update your credences in light of this new evidence? Does the fact that the walkways aren’t wet give you a reason to reduce your belief in R significantly? Or, does the observation make little difference in this case, since you already regarded rain as somewhat unlikely?
To find out, let’s conditionalize on ~W. First, we eliminate the segments of the Bayes bar that are inconsistent with the evidence. Since we learned that W is false, the two segments where W is true, (R•W) and (~R•W), are ruled out, leaving the segments where W is false:
(R•~W) | (~R•~W)
Then we stretch the remaining segments back to full length, preserving their proportions, to obtain the posterior Bayes bar:

posterior Bayes bar:
(R•~W) | (~R•~W)
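To see how this plays out with concrete numbers, here is a hedged Python sketch. The specific values are my own illustrative choices, not given in the story: the text says only that rain is somewhat unlikely, that wet walkways are somewhat more likely than not given rain, and somewhat less likely than not given no rain. I use .3, .6, and .4 for those, respectively.

```python
from fractions import Fraction as F

# Illustrative priors (assumed numbers; the story gives only qualitative
# comparisons like "somewhat more likely than not").
pr_R = F(3, 10)            # prior credence that it rained
pr_W_given_R = F(3, 5)     # wet walkways somewhat likely, given rain
pr_W_given_notR = F(2, 5)  # wet walkways somewhat unlikely, given no rain

# The four segments of the prior Bayes bar.
prior = {
    "R&W":   pr_R * pr_W_given_R,
    "R&~W":  pr_R * (1 - pr_W_given_R),
    "~R&W":  (1 - pr_R) * pr_W_given_notR,
    "~R&~W": (1 - pr_R) * (1 - pr_W_given_notR),
}

# Learning ~W eliminates the segments where W is true; renormalize the rest.
surviving = {w: p for w, p in prior.items() if w.endswith("~W")}
total = sum(surviving.values())
posterior_R = surviving["R&~W"] / total

print(posterior_R)  # 2/9, roughly .22
```

On these assumed numbers, your credence in R falls only modestly, from .3 to about .22: the dry walkways disconfirm R, but not dramatically, since you already expected them to be dry. Different assumed numbers would shift the result, but the qualitative shape of the update comes from the proportions of the bar, which is exactly the point of the diagram.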