# How the Distinction between Deductive vs. Inductive Arguments Can Mask Uncertainty

Everyone who has taken a philosophy 101 class has learned the distinction between *deductive* and *inductive* arguments. It goes like this. Only *deductive* arguments may be valid; an argument is *valid* if and only if the truth of its premises guarantees the truth of its conclusion. Otherwise, the argument is *invalid*. If an argument is both valid and contains all true premises, then the argument is *sound*.

Not all invalid arguments are worthless, however, and the concept of an *inductive* argument shows why. An (inductive) argument is *inductively strong* if and only if (1) it is invalid; and (2) its premises confer a *high probability* upon its conclusion. In order to reinforce (2), some inductive logic textbooks will place the word “probably” inside the conclusions of inductive arguments.

Notice that validity is a binary, all-or-nothing affair. Just as one cannot be “sort of pregnant,” an argument cannot be “somewhat” valid. In contrast, *inductive strength* is a matter of degree. *Inductively strong* arguments confer a high probability on their conclusions, whereas *weak* arguments don’t.

So far, so good. It seems to me, however, that this distinction can sometimes mask the fact that uncertainty is often present in “real world” deductive arguments.

Consider the following argument (Deductive Argument 1 or DA1):

(1) If A, then B.

(2) A.

(3) Therefore, B.

*Question: What is the probability of B?*

Since DA1 is a valid argument, we know that if (1) and (2) are true, then (3) has to be true. So the probability of B conditional upon A is 1. In symbols, Pr(B|A) = 1. This is the key “insight,” if you will, of learning that DA1 is valid.

But we want to know the *unconditional* probability of B, Pr(B), not merely the conditional probability Pr(B|A). So what contribution does A itself make to Pr(B)? Answer: Pr(A). Suppose A is true by definition. In that case, its probability is 1 and so Pr(B) = 1. Now suppose the probability of A is 50%. In that case, A’s contribution alone puts Pr(B) at 0.5 (B might, of course, also be probable for reasons other than A). This follows from the law of total probability, Pr(B) = Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A): when Pr(B|A) = 1, the contribution to Pr(B) made by A alone will always equal Pr(A), so long as Pr(A) is not zero, which in “real world” problems is usually the case.
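The arithmetic here can be sketched numerically. A minimal illustration: the parameter `pr_b_given_not_a` below is my stand-in for whatever probability B has when A is false.

```python
def pr_b(pr_a, pr_b_given_not_a=0.0):
    """Pr(B) by the law of total probability, when Pr(B|A) = 1 (valid argument)."""
    return 1.0 * pr_a + pr_b_given_not_a * (1.0 - pr_a)

print(pr_b(1.0))  # A true by definition: Pr(B) = 1.0
print(pr_b(0.5))  # Pr(A) = 0.5: A alone contributes 0.5 to Pr(B)
```

Whatever value `pr_b_given_not_a` takes, A’s own contribution to Pr(B) is always exactly Pr(A), which is the point of the paragraph above.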

In his book *Objecting to God*, Colin Howson makes a similar point and then writes this:

“A possibly more surprising feature of the logic of probability is that it subsumes the logic of conjecture and refutation. It tells us that if evidence is inconsistent with a hypothesis under test, i.e., if it is refuting evidence, then that evidence reduces the probability of a hypothesis to zero. The formal theory of probability implies that if H entails that E is not true then Prob(E|H)=0 so long as Prob(H) is nonzero, as in all applications it will be. Looking at Bayes’s Theorem, we see that if Prob(H) is nonzero and Prob(E|H)=0 then Prob(H|E)=0. Thus the logic of probability subsumes the deductive logic of refutation as a special case. What is more interesting is the vastly more extensive territory outside the relatively small and safe domain where deductive logic can play its protective role.”
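Howson’s point, that refutation is just the zero-probability special case of Bayes’s Theorem, can be sketched as follows. The prior and likelihoods here are made-up numbers for illustration only.

```python
def posterior(pr_h, pr_e_given_h, pr_e_given_not_h):
    """Pr(H|E) via Bayes's Theorem, with Pr(E) from the law of total probability."""
    pr_e = pr_e_given_h * pr_h + pr_e_given_not_h * (1.0 - pr_h)
    return pr_e_given_h * pr_h / pr_e

# If H entails not-E, then Pr(E|H) = 0, and the posterior Pr(H|E) collapses to 0:
print(posterior(0.5, 0.0, 0.3))  # 0.0
# If E is merely improbable given H, rather than impossible, H survives, diminished:
print(posterior(0.5, 0.1, 0.3))  # approximately 0.25
```

The first call is deductive refutation recovered inside the probability calculus; the second is the “vastly more extensive territory” Howson mentions, where evidence merely raises or lowers a hypothesis’s probability.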

Consider William Lane Craig’s version of the fine-tuning argument, which goes like this:

1. The fine-tuning of the universe’s initial conditions is either the result of chance, necessity or design. (Premise)

2. It is not the result of chance or necessity. (Premise)

3. Therefore, it is the result of design. (From 1 and 2)

This argument is clearly valid. We want to know the probability of (3). As in the case of DA1, the probability of (3) will depend upon the probability of (2). If we have a weak degree of belief that (2) is true, say we think Pr(2) = 0.25, then, *by itself* and assuming (1) is certain, this argument only warrants a degree of belief of 0.25 in (3).
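The last step is DA1’s arithmetic again. A minimal sketch, with the stipulation (mine, not Craig’s) that the trichotomy premise (1) is certain:

```python
pr_1 = 1.0    # the chance/necessity/design trichotomy, taken as certain
pr_2 = 0.25   # weak confidence that chance and necessity are ruled out
pr_3 = pr_1 * pr_2  # what the argument, by itself, warrants for the conclusion
print(pr_3)  # 0.25
```

On different numbers for Pr(2), the warranted confidence in (3) scales accordingly; the validity of the argument by itself settles nothing about it.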

* Thanks to Robert Greg Cavin for helpful comments on an earlier version of this post.