Monday, May 12, 2008

Induction and the Bayesian

Early in my blog, I distinguished between two types of reasoning, deduction and induction. Maybe you wouldn't know it from my writing, but I am absolutely fanatical about the concept. If the terms were common knowledge, I would talk about them all the time; as it is, I am afraid of confusing everyone.

So to recap: deduction is a type of reasoning whose conclusions follow with absolute certainty from its premises. Induction is a type of reasoning whose conclusions only have a certain probability of being true. Both types of reasoning are absolutely necessary, though induction is far more common. Read my post linked above for details and examples.

What I haven't yet explained is that there's actually a mathematical formulation of inductive reasoning. In practice, it's hard to apply rigorous mathematics to life, but I still think this will give us better insight into the inner workings of reason.

Bayes' Disease

To demonstrate, let's first use an example that is explicitly mathematical. Let's say there's a particular disease, "Bayes' disease," that occurs in 10% of the human population. We have a simple way to test for Bayes' disease, but there is a 25% chance (assume independence) that the test will give the wrong result. Let's say that you've just taken the test, and the results say you've got Bayes' disease. What is the probability that you really do have Bayes' disease?

Now, we could make an inductive argument, and say that because the test turned out positive, you're more likely to have Bayes' disease than before. But how much more likely?

To solve this problem, we first find the probability that any random person will test positive. This is equal to 0.25*0.9 + 0.75*0.1 = 0.3 (false positives among the 90% who are healthy, plus true positives among the 10% who are sick). Next, we find the probability that any random person will test positive and actually have Bayes' disease. This is equal to 0.75*0.1 = 0.075. Last, we divide these two numbers: 0.075 / 0.3 = 0.25. Conclusion: there is only a 25% chance that you actually have Bayes' disease.
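
If you'd like to check the arithmetic yourself, here is a tiny Python sketch of the same calculation (the variable names are just my own, nothing standard):

    # Bayes' disease: 10% prevalence, test wrong 25% of the time
    p_disease = 0.10
    p_wrong = 0.25

    # P(test positive): true positives among the sick plus false positives among the healthy
    p_positive = (1 - p_wrong) * p_disease + p_wrong * (1 - p_disease)   # 0.075 + 0.225 = 0.3

    # P(test positive AND actually sick)
    p_positive_and_sick = (1 - p_wrong) * p_disease                      # 0.075

    print(p_positive_and_sick / p_positive)                              # 0.25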

The way of stating this as a mathematical formula is called Bayes' theorem. See Wikipedia for a short derivation.
P(A|B) = P(B|A) * P(A) / P(B)
In the above formula, P(A) is the "prior" probability of A, and P(B) is the "prior" probability of B. In my example, P(A) is the 10% chance of having Bayes' disease, while P(B) is the 30% chance that the test will come out positive for a random person. P(B|A) is the "conditional" probability of B, given A, while P(A|B) is the "conditional" probability of A, given B. In my example, P(B|A) is the 75% chance that the test will come out positive, given a person who actually has Bayes' disease. P(A|B) is the probability that you actually have Bayes' disease, knowing that you tested positive.
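
The theorem also translates directly into code. Here is a small sketch (my own hypothetical helper, not any standard library function) that takes the prior P(A) plus the two conditional probabilities P(B|A) and P(B|not A), computes P(B) by the law of total probability, and returns P(A|B). Run on the disease example, it gives the same 25%:

    def posterior(prior_a, p_b_given_a, p_b_given_not_a):
        """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
        # P(B) by the law of total probability
        p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
        return p_b_given_a * prior_a / p_b

    # Bayes' disease: P(A) = 0.10, P(B|A) = 0.75, P(B|not A) = 0.25
    print(posterior(0.10, 0.75, 0.25))   # 0.25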

A More Realistic Example

Let's say we're looking for a particle predicted by an advanced physics theory. We use a particle accelerator, and repeat an experiment thousands of times. We notice a pattern in the results, but we're not sure whether it is the particle we're looking for or just a random fluctuation allowed by the Uncertainty Principle. If our theory is correct, then there was a 40% chance of getting this pattern. If our theory is incorrect, there was a 10% chance of getting this pattern.

Here, P(B|A) = 40%. P(B) can be calculated from P(A) and the other numbers. But the problem is that we don't know P(A), the prior probability that our theory is correct. In reality, our theory either has a 100% chance of being correct, or a 0% chance--we simply don't know which. A natural first estimate is P(A) = 50%, but this is in some ways naive. I can't just create any random theory and declare it to be 50% likely, pending more evidence. For example, I might claim "I have an apple," and then claim "I have an apple and a banana." The second claim is more specific than the first, so it can't be more likely; hand out a 50% prior to every claim I dream up and the probabilities quickly stop fitting together.

Some say that we can't assume anything about the prior probability. But I think that's also naive, since it leaves us with practically no way to learn anything about the universe. So we will assume P(A) = 50%, just so we can work through the math. Hopefully, P(A|B) will turn out to be so high that it doesn't matter what we picked for P(A). If that happens, nobody can complain!

The probability that our theory is correct given this new evidence turns out to be (0.4*0.5) / (0.4*0.5 + 0.1*0.5) = 80%. And remember, we had to assume a value for P(A). If I had assumed P(A) was much lower, say 10%, the end result would be about 31%. Is it 80% or 31%? We can't say which. This is what we mean when we say science is uncertain! Luckily, most established scientific theories are far more certain than that.
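
Here is the same kind of sketch for the particle example (repeating my hypothetical posterior() helper so it stands on its own), so you can see where the 80% and the 31% come from:

    def posterior(prior_a, p_b_given_a, p_b_given_not_a):
        p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
        return p_b_given_a * prior_a / p_b

    # P(pattern | theory correct) = 0.40, P(pattern | theory wrong) = 0.10
    print(posterior(0.50, 0.40, 0.10))   # 0.8, assuming a 50% prior
    print(posterior(0.10, 0.40, 0.10))   # about 0.31, assuming a 10% prior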

In Real Life

Of course, in most of life, we don't have any numbers at all! In fact, it's generally a bad idea to try to quantify things that are so uncertain. But Bayes' theorem still gives us insight into what makes a good inductive argument.
  • P(B|A) should be high: the piece of evidence should have a high chance of occurring, given that our theory is correct.
  • P(B) should be low: the piece of evidence should have a low chance of occurring in general.
  • If you've made your argument correctly, P(A|B) will be higher than P(A), no matter what P(A) is (unless P(A) is exactly 0 or 1). That is, a good inductive argument makes a claim more likely. Induction works! (See the quick sketch after this list.)
  • If P(A) is low, an inductive argument won't help much. That is, a claim that was extremely unlikely prior to the argument will still be rather unlikely, unless you've got an extraordinarily good inductive argument.
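To see the last two points concretely, here is one more sketch (again with my own hypothetical helper) that holds the evidence fixed at the particle-example numbers and sweeps the prior:

    def posterior(prior_a, p_b_given_a, p_b_given_not_a):
        p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
        return p_b_given_a * prior_a / p_b

    # Evidence that is 4x more likely if the theory is true (0.40 vs 0.10)
    for prior in [0.01, 0.10, 0.50, 0.90]:
        print(prior, "->", round(posterior(prior, 0.40, 0.10), 3))

    # 0.01 -> 0.039   the posterior always beats the prior...
    # 0.1  -> 0.308   ...but a claim that started out very unlikely stays fairly unlikely
    # 0.5  -> 0.8
    # 0.9  -> 0.973
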
In conclusion, you should frequently mention things like "prior probability" in your arguments in hopes of confusing your opponents! No, but seriously, think about Bayes' theorem when you use inductive arguments. Soon, mathematics will infect your whole mind! Mwahaha!
