In the next few pages, we’ll consider four common types of non-deductive inferences: enumerative induction, statistical syllogism, inference to the best explanation, and argument by analogy. Here’s a brief description of each:
Enumerative induction, sometimes called simple induction or just induction, is an inference that extrapolates observed patterns to unobserved cases. The structure of this inference can be formulated in two different ways, both of which begin with the premise that all observed things in one category are members of another category: all observed A’s are B’s. However, the two forms of enumerative induction differ in their conclusions. One form concludes that probably everything in the first category is in the second category: all A’s are B’s. For example, from the fact that all observed emeralds have been green, we can infer that probably, all emeralds are green. This first form of enumerative induction has been called inductive generalization or simply generalization.
A second form of enumerative induction is more modest, concluding only that the next thing we observe in category A will be in category B (the next A is a B). From the fact that all objects we have dropped have fallen to the ground, for instance, we infer that the next object we drop probably will fall to the ground. This second form of enumerative induction is also called projection or inductive prediction. Some authors classify projection as distinct from enumerative induction, using the term ‘enumerative induction’ more narrowly as a synonym for inductive generalization; see, for instance, Peter Godfrey-Smith, Theory and Reality: An Introduction to the Philosophy of Science, 2nd ed. (Chicago: University of Chicago Press, 2021), 57.
To summarize, enumerative induction can have either of the following forms:
Inductive generalization:
Premise: All observed A’s are B’s.
Conclusion: Probably, all (or nearly all) A’s are B’s.
Projection:
Premise: All observed A’s are B’s.
Conclusion: Probably, the next A to be observed will be a B.
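To make the contrast between the two forms vivid, here is a minimal Python sketch of the pattern; the emerald observations and the literal conclusion strings are invented for illustration and are not part of the philosophical apparatus itself.

```python
# Toy illustration of the two forms of enumerative induction.
# The emerald observations below are invented for the example.
observed_colors = {"emerald_1": "green", "emerald_2": "green", "emerald_3": "green"}

# Shared premise: all observed A's (emeralds) are B's (green).
premise_holds = all(color == "green" for color in observed_colors.values())

if premise_holds:
    # Inductive generalization: a conclusion about all A's.
    print("Probably, all emeralds are green.")
    # Projection: a conclusion only about the next observed A.
    print("Probably, the next emerald observed will be green.")
```

Both conclusions rest on the same premise; they differ only in how far beyond the observations they reach.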
A closely related inference form is the statistical syllogism, which begins with the premise that a certain proportion (or percentage) of observed things in one category are also members of another category, and concludes that probably a similar proportion holds in general. The structure of a statistical syllogism can be characterized as follows:
Premise: N% of observed A’s are B’s.
Conclusion: Probably, approximately N% of all A’s are B’s.
We can think of the statistical syllogism as a generalized form of enumerative induction. Conversely, we can think of inductive generalization as a special case of the statistical syllogism: namely, the case where N is 100.
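The relationship can be made concrete with a short, hedged sketch (the sample data are invented): when the observed proportion is 100%, the schema’s conclusion collapses into the conclusion of inductive generalization.

```python
def statistical_syllogism(observed: list[bool]) -> str:
    """Return the schema's conclusion, given whether each observed A is a B."""
    n = 100 * sum(observed) / len(observed)
    if n == 100:
        # Special case: the premise of inductive generalization is satisfied.
        return "Probably, all (or nearly all) A's are B's."
    return f"Probably, approximately {n:.0f}% of all A's are B's."

# Invented sample data: 9 of 10 observed A's are B's, then 10 of 10.
print(statistical_syllogism([True] * 9 + [False]))  # ~90% conclusion
print(statistical_syllogism([True] * 10))           # reduces to generalization
```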
Inference to the best explanation, also called abduction, abductive inference, or explanatory inference, is an argument which concludes that the best available explanation of some phenomenon is probably the true explanation. (What makes an explanation “best” is a difficult question to which we’ll return later in this chapter.)

A terminological note: Charles Sanders Peirce coined the term ‘abduction’ in the 19th century, and although he used it for a slightly different mode of explanatory reasoning than the one described here, philosophers today typically use ‘abduction’ as another name for inference to the best explanation; see the Stanford Encyclopedia of Philosophy entry on Abduction for further discussion. Some authors prefer the term ‘explanatory inference’: see, for instance, Peter Godfrey-Smith, Theory and Reality: An Introduction to the Philosophy of Science, 2nd ed. (Chicago: University of Chicago Press, 2021), 58. However, that term has also been used for inferences that do not involve explicit comparisons between rival hypotheses; Kevin Davey, for example, advocates a form he calls “immediate explanatory inference” in “On Inferring Explanations and Inference to the Best Explanation,” Episteme (2023), 1–18.

The structure of an inference to the best explanation can be characterized as follows:
Premise: C1, C2, C3, … etc. are the only candidate explanations we have.
Premise: C1 is a better explanation than C2, C3, … etc.
Conclusion: Therefore, C1 is probably true.
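As a purely schematic sketch, the comparative structure can be mimicked in a few lines of Python; the candidate labels and numeric “goodness” scores are invented placeholders, since what makes one explanation better than another is deferred to later in the chapter.

```python
# Schematic sketch of inference to the best explanation.
# The candidates and their scores are invented placeholders; how to measure
# how "good" an explanation is remains the hard, deferred question.
candidate_scores = {"C1": 0.8, "C2": 0.5, "C3": 0.3}

best = max(candidate_scores, key=candidate_scores.get)
rivals = [c for c in candidate_scores if c != best]

# If the top candidate beats every rival, conclude it is probably true.
if all(candidate_scores[best] > candidate_scores[r] for r in rivals):
    print(f"{best} is the best available explanation, so {best} is probably true.")
```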
Argument by analogy, or analogical inference, is an argument that draws a conclusion about one thing based on that thing’s relevant similarities to something else. This type of reasoning is used, for example, when we infer that other people and animals have conscious experiences similar to our own. From the fact that my cat and I are similar in relevant ways, I infer that the cat and I have an additional similarity that I cannot observe: the cat has conscious experiences too. Argument by analogy has the following structure:
Premise: x is an A.
Premise: x and y have many similarities relevant to being an A.
Conclusion: Therefore, y is probably an A.
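Here, too, a hedged toy sketch may help; the list of “relevant” features is invented for the example, and nothing in the code settles which similarities really are relevant, which is the substantive philosophical work.

```python
# Toy sketch of argument by analogy, using the cat example from the text.
# The "relevant" features are invented placeholders.
me = {"has_brain": True, "reacts_to_pain": True, "seeks_food": True}
cat = {"has_brain": True, "reacts_to_pain": True, "seeks_food": True}

i_am_conscious = True  # premise: x is an A
relevant_features = ["has_brain", "reacts_to_pain", "seeks_food"]
shared = sum(me[f] == cat[f] for f in relevant_features)

# If x is an A and x and y share many relevant similarities,
# conclude that y is probably an A as well.
if i_am_conscious and shared >= len(relevant_features) - 1:
    print("Probably, the cat has conscious experiences too.")
```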