While I'm on the topic of induction, I should discuss its relation to the philosophy of "falsification". Or rather, its opposition to said philosophy.
Falsification is perhaps the most well-known piece of philosophy of science. The idea was invented by Karl Popper around the 1930s. Among other things, it was meant to answer the demarcation problem, the question of what is and isn't science. A theory is scientific if it is falsifiable; it is unscientific if it is unfalsifiable. By "falsifiable," we mean that there is some piece of evidence that might disprove the theory. If we have the theory, "All crows are black," this can be falsified by the observation of a white crow. The reasoning behind this piece of philosophy is that you can never prove that all crows are black, at least not in practical terms. But we can disprove it, by observing a white crow. So instead of trying to prove it, we should simply try our best to disprove it.
Despite being the popular view of how to distinguish science from non-science, falsifiability is not really how most scientists themselves view it. This is because science doesn't actually work that way, not exactly. Nor is it apparent that it should work that way. Scientists don't exclusively spend their time trying to disprove their own ideas. But my criticism comes from a different direction.
My problem with falsification is that it buys into the dichotomy between "positive" and "negative" claims. It's said that you can't prove negative claims (i.e., the non-existence of a particular object) but you can prove positive claims (i.e., the existence of a particular object). While this certainly describes a lot of different claims, in general there is no dichotomy. It's not necessarily easier to prove positive claims than negative ones. After all, the distinction between positive and negative is artificial: any positive claim "P" can be made into the negative claim "not-(not-P)".
For example, consider the claim, "More peppered moths are black than white." You can't disprove this by simply finding a white peppered moth. Nor can you prove it by finding a black peppered moth. In fact, you can't ever absolutely disprove or prove it! You can come pretty close by observing a large random sampling, but you never prove or disprove anything.
More sophisticated forms of falsification account for this by saying you can falsify a theory when the contrary evidence is so great that holding the theory is no longer reasonable. But no single observation can falsify the theory, so when exactly does it go from unfalsified to falsified? Wouldn't it be more useful to characterize all the grays between proof and disproof (especially when neither extreme is actually possible), or perhaps even quantify them?
An alternative to falsification is inductionism. Induction does not purport to be able to prove or disprove anything. But it can argue that certain claims are more or less likely, and that can be almost as good as proof. There is even some mathematical underpinning to it, so you could, in principle, quantify your grays. There are a few assumptions made, but they are not unreasonable, and we can always make exceptions for those few circumstances in which the assumptions are questionable.
And of course, the third alternative is to accept both inductionism and falsification. I think Popper saw falsification as a replacement for induction, not a supplement, but who am I to let Popper dictate our options? The problem is that falsification is usually more or less the same as induction, only less powerful. Other times, it seems exactly the same, except with clunkier terminology. The only time I think falsification is useful is in its simple solution to the demarcation problem. It makes distinguishing science from non-science easy. But then, I think it can be wrong sometimes, because it is too simplistic. Perhaps there are some scientific claims that can't be falsified, or unscientific claims that can be falsified.
Perhaps I can't put the final nail in the coffin of falsification, but I intend this for a general audience that perhaps has not previously questioned Popper's ideas about science. The take-home message is that falsification is not a universally accepted way to think about science, and should not be taken for granted. Usually there is no particular point at which a scientific theory is clearly falsified, but that doesn't mean we can't make progress.
Monday, June 9, 2008
18 comments:
So... Here's why you're wrong.
First, as you note, falsificationism is a semantic theory, not a methodological one: it talks about the meaning of statements, not about how to establish the truth or falsity of a statement. Falsification doesn't say we must look to "disprove" our theory; it says only that unless a theory is in principle somehow disprovable by observation, observations that confirm the theory have no epistemic value.
Put another way, induction holds that all unfalsifiable theories are true. You don't make your definition of induction explicit, but I will assume that it means if the outcome of an experiment conforms to a theory, that adds evidence that the theory is true. An unfalsifiable theory entails by definition that all experimental outcomes conform to the theory, therefore all experiments add evidence that the theory is true, therefore the theory has strong evidentiary support.
Consider, for example, the theory that all people hate their father, and some are in denial about it. If you ask a person, "Do you hate your father?" and he replies in the affirmative, he hates his father and you have confirmed your theory. If he replies in the negative, he hates his father and is in denial, and you have confirmed your theory.
Another way of looking at falsification is to say that induction is meaningful if and only if it is theoretically possible for observation to disconfirm the theory. Otherwise, induction is vacuous.
The idea that there must be "some [single, individual] piece of evidence that might disprove the theory" is a simplification for the purpose of illustration. It's easy to generalize: a falsifiable statement can somehow be falsified by experiment. Consider your example, "More peppered moths are black than white." It can't be falsified by observing a single moth; not because falsification is somehow inept, but because the statement itself is not about individual moths or about all moths; it's a statement about a statistical distribution. We could in theory observe the whole population, which would indeed confirm or falsify the statement by observation. In practice, we settle for probabilistic confirmation or disconfirmation by observing samples, but that's a practical limitation, not a theoretical one.
While falsification does not establish a methodology, it does make some strong suggestions.
It is an oversimplification, and kind of misses the point, to say, "[I]nstead of trying to prove [a theory], we should simply try our best to disprove it." It is more accurate to say that instead of looking for confirmatory evidence, we should look for disconfirmatory evidence. This latter view seems obviously in the spirit of Feynman's Cargo Cult Science. Every good scientist (and engineer) is always looking for edge cases, not the typical case, to validate or invalidate her hypothesis.
The idea that there must be "some [single, individual] piece of evidence that might disprove the theory" is a simplification for the purpose of illustration.
-
I agree. However, it is a popular misunderstanding to think that it is not merely a simplification, but the entire idea. People expect scientific theories to be solidly disproven in one go. The idea of falsification is prone to misunderstanding, and that's one of the biggest things I dislike about it. But clearly, you have not made this error, so I will speak no more of it.
Provided that we've understood falsification correctly, I would not say that Popper's idea of falsification is wrong. (If I said it was wrong in the above post, then I now disagree with my past self.) Rather, it is equivalent to, or possibly weaker than induction.
I should make my alternative to falsification more explicit. I believe that induction is best modeled by Bayes' theorem. If Bayes' theorem says that the post-test probability is greater than the prior probability, then the test is confirmatory evidence. The extent to which Bayes' theorem is ambiguous (i.e., if the result is sensitive to our choice of priors) is approximately the extent to which our knowledge is ambiguous. From this model, we can prove that confirmatory evidence is possible if and only if disconfirmatory evidence is possible.
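This symmetry between confirmation and disconfirmation can be made concrete. Below is a minimal Python sketch (the likelihood numbers are hypothetical, not from any real experiment): if observing E raises the probability of H, then observing ~E must lower it, and a hypothesis that predicts E equally well whether it is true or false gets no support from any observation.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
p_eh, p_enh = 0.9, 0.3  # hypothetical likelihoods of E under H and under ~H

post_e = posterior(prior, p_eh, p_enh)              # after observing E
post_not_e = posterior(prior, 1 - p_eh, 1 - p_enh)  # after observing ~E

# E confirms precisely because ~E would have disconfirmed:
# post_e > prior > post_not_e

# An "unfalsifiable" hypothesis predicts E equally well either way,
# so observation moves nothing:
vacuous = posterior(prior, 0.8, 0.8)  # equals the prior
```

With these numbers the posterior after E is 0.75 and after ~E is 0.125; the vacuous case stays at 0.5, which is the Bayesian restatement of the falsifiability criterion.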
Popper says basically the same thing, only without all the nuances and without the quantitative underpinning.
While falsification does not establish a methodology, it does make some strong suggestions.
-
I agree. Falsification may have been historically useful. It might be pedagogically useful. For sure, it is useful in communication (by virtue of its being well-known). But beyond that, I wish to discard the idea of falsification, lest people mistake it for an entire methodology in itself.
Actually, I had the impression that Popper himself believed falsification to be a methodology in itself. Popper believed it to be a replacement for induction. He basically conceded the problem of induction, saying that induction is indeed unjustifiable. But because he needed a word to describe confirmatory evidence, he called it "corroboration". Popper's idea of corroboration is pretty much the same as induction, except that Popper refused to quantify it. But I admit that I could be wrong about Popper's views, and in any case it is irrelevant, since he is no authority.
I had the impression that Popper himself believed falsification to be a methodology in itself.
Maybe he did, maybe he didn't; I'm not interested enough in philosophology.
What's important is not whether Popper was mistaken on some points, but rather whether an idea that he deserves credit for is philosophically sound. In just the same sense, Darwin was ignorant of and made several wrong conjectures about the inheritance of variation. He still deserves credit for his central idea, however, of natural selection operating on heritable variation.
Rather, it is equivalent to, or possibly weaker than induction.
I still don't think you have quite grasped the point. Semantically, falsifiability denotes those statements that can be supported by induction.
More importantly, if you're looking specifically at Bayes' theorem, to get the maximum "bang for your buck" you want to look at precisely those situations where P(E|~H) is high (best is when P(E|~H) is high and P(E) is low), i.e. those situations that would most quickly identify the falsity of your hypothesis.
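This point can be illustrated numerically (with made-up likelihoods): compare a "risky" test, where the falsifying outcome E is near-certain if H is false, with a "safe" test whose likelihoods barely differ. Surviving the risky test is much stronger confirmation.

```python
def posterior(prior, p_e_h, p_e_nh):
    """P(H|E) by Bayes' theorem."""
    p_e = p_e_h * prior + p_e_nh * (1 - prior)
    return p_e_h * prior / p_e

prior = 0.5
# Risky test: P(E|~H) = 0.9, P(E|H) = 0.1. The hypothesis survives (~E observed),
# so we update on P(~E|H) = 0.9 vs P(~E|~H) = 0.1.
risky = posterior(prior, 0.9, 0.1)
# Safe test: P(E|~H) = 0.6, P(E|H) = 0.4. Surviving it means little.
safe = posterior(prior, 0.6, 0.4)
```

Here surviving the risky test takes the posterior to 0.9, while surviving the safe test only reaches 0.6: the experiments that could most decisively falsify are exactly the ones that can most strongly confirm.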
Semantically, falsifiability denotes those statements that can be supported by induction.
Yeah, I get it. I have in fact used the language of falsifiability to speak about what constitutes good evidence. The concept is for the most part well-known, intuitive, and understandable. And given a sufficiently charitable statement of falsification, such as yours, I cannot truly disagree with it.
But in principle, I would prefer to use the language of induction by Bayes' theorem. If only it were more established in the popular mind, and if people were more capable of understanding it.
You can't prefer to use Bayesian language over falsification any more than you can prefer to use gravitational calculations over fluid dynamics to talk about aerodynamics. You need both.
For example, with a strictly Bayesian paradigm, the Fine Tuning argument holds water: the physical constants of the universe are evidence — strong evidence — for the existence of God. You have to apply a semantic analysis to the theory to realize that the purely Bayesian analysis is bullshit. There are more semantic rules than just falsification (and the FTA violates all of them) but wrt falsification, the observation that the constants of the universe don't support life would also confirm the existence of a God, therefore the theory is not falsifiable.
...the observation that the constants of the universe don't support life would also confirm the existence of a God, therefore the theory is not falsifiable.
You are mistaken. This is quite simply mathematically impossible under Bayesian analysis. If we assume that both observation L and observation ~L support the existence of a god, then we could say that "L or ~L" supports the existence of a god, and therefore P(god) > P(god), which is a contradiction.
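The impossibility can be checked mechanically: by the law of total probability, the prior is a weighted average of the posteriors under L and under ~L, so both posteriors can never exceed the prior at once. A brute-force sketch over random parameterizations (the variable names are mine):

```python
import random

def posteriors(prior, p_l_h, p_l_nh):
    """Return (P(L), P(H|L), P(H|~L)) given the prior and likelihoods of L."""
    p_l = p_l_h * prior + p_l_nh * (1 - prior)
    post_l = p_l_h * prior / p_l
    post_nl = (1 - p_l_h) * prior / (1 - p_l)
    return p_l, post_l, post_nl

random.seed(0)
both_confirm = 0  # cases where L and ~L both raise P(H)
for _ in range(10_000):
    prior = random.uniform(0.01, 0.99)
    p_l, post_l, post_nl = posteriors(prior,
                                      random.uniform(0.01, 0.99),
                                      random.uniform(0.01, 0.99))
    # Law of total probability: the prior is the average of the posteriors.
    assert abs(post_l * p_l + post_nl * (1 - p_l) - prior) < 1e-9
    if post_l > prior + 1e-12 and post_nl > prior + 1e-12:
        both_confirm += 1
```

No parameterization ever produces mutual confirmation; `both_confirm` stays at zero.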
I recall reading a paper one time which used purely Bayesian analysis to refute the Fine-Tuning Argument (see here). IIRC, you were the one who had pointed me to this paper.
Perhaps you're correct; I'll have to check out the math more carefully.
From the article: If we remove the restriction that the inequalities be strict, then the only case where both inequalities can be true is if
P(N|~F&L)=P(N|L) and P(N|F&L)=P(N|L).
In other words, the only case where both can be true is if the information that the universe is "life-friendly" has no effect on the probability that it is naturalistic (given the existence of life); and this can only be the case if neither inequality is strict.
This is what I meant: they can escape actual contradiction only by making the theory unfalsifiable.
My point is that falsifiability and Bayes' theorem go hand-in-hand; the only way to make Bayes theorem do any useful work is to create falsifiable hypotheses.
Another way of looking at it is that falsifiability captures in a fairly straightforward way a deep mathematical property of Bayes theorem that is not immediately apparent to non-mathematicians such as myself. It's possible with only a smidgen of subtle bullshit to make Bayes theorem look like it supports supernaturalism. I've actually had atheist philosophers (as well as religious scientists) miss the subtlety of Ikeda Bill Jefferys' mathematical argument, and denounce me as incompetent because I asserted the FTA was not evidentiary.
The criterion of falsifiability, however, precisely states this subtle property in an obvious way. It is sufficient and powerful to say that the FTA fails because the observation of non-life-friendly physical laws and the observation of life-friendly physical laws would both confirm supernaturalism.
er... Ikeda and Jefferys' argument
The point is taken about how falsifiability captures a rather obscure property of Bayes' theorem, turning it into a much more clear and obvious statement. It could be that I am being unnecessarily dismissive and eggheaded. I really like that there's so much math involved in Bayesian arguments, but I shouldn't dismiss more intuitive arguments. Simplification is a worthy goal in itself.
But see, this is a strategy I take to counter a specific problem I see: People often abuse and misunderstand falsification. There are two strategies to counter this problem. I could either 1) Explain that falsification is sometimes right, sometimes wrong, and a Bayesian argument would be more reliable, or 2) Explain that they got the idea of falsification wrong (and sometimes Popper did too).
I've been taking strategy 1, and you would probably take strategy 2. Though the two strategies seem to contradict each other, the difference is one of semantics. They are not necessarily in conflict, and they have the same goal.
After some thought, I realize that one of the weaknesses of strategy 1 is that while people may misunderstand falsification, they may also fail to understand Bayes' theorem.
Keep in mind that creative and clever ways of misinterpreting the words of others form the backbone of the profession of philosophy. There are a few exceptions, but philosophers in general really are only a hair less intellectually dishonest than theologians, and not quite as honest as politicians, lawyers, and used-car salesmen.
Noting that philosophers can misconstrue falsification says little more than noting that creationists can misconstrue science.
Anonymous,
I may be repeating or contradicting my earlier self (twas two years ago!), but here's my answer:
Under falsification theory, you're supposed to try to falsify a hypothesis. And each time you fail to falsify it, you've "corroborated" the hypothesis.
Under Bayesian induction, you're supposed to perform experiments that will likely have different results depending on whether the hypothesis is true or not. Sometimes this means performing an experiment that could nearly completely disprove the hypothesis. If the experiment fails to disprove it, this provides some degree of evidence that the hypothesis is true.
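Sketched in Python with illustrative numbers: suppose the hypothesis, when true, always passes a given test, while a false hypothesis passes only half the time. Each survived attempt at falsification then doubles the odds in the hypothesis's favor, pushing the posterior toward 1 without ever reaching it.

```python
def update(prior, p_pass_h=1.0, p_pass_nh=0.5):
    """Posterior after the hypothesis survives one test (illustrative likelihoods)."""
    p_pass = p_pass_h * prior + p_pass_nh * (1 - prior)
    return p_pass_h * prior / p_pass

p = 0.5
for _ in range(10):
    p = update(p)  # another failed attempt at falsification

# After 10 survivals the odds are 2**10 : 1, i.e. p = 1024/1025,
# strong "corroboration" but never outright proof.
```

In Popper's vocabulary each pass is a "corroboration"; in Bayesian vocabulary it is a quantified increment of evidence. The two descriptions track the same process.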
Do these two things sound similar to you?
Some comments removed by request.