## Monday, June 30, 2008

You didn't think I had any particular reason for rambling about the Rubik's Cube? I started talking about it because in the recent issue of Scientific American, there is a very interesting article about the Rubik's Cube and similar puzzles. Sorry, it's subscription only, but I'll explain the basic ideas.

Permutation puzzles, as I previously explained, are puzzles where you have to arrange a number of items (like cubies or numbers) into the correct order. The underlying mathematics of permutation puzzles is called Group Theory. That's right, you're doing advanced mathematics when you turn a Rubik's Cube! Basically, all the possible positions of a permutation puzzle constitute a "group". For a Rubik's Cube, the size of this group is 43,252,003,274,489,856,000. But don't be intimidated by the size. As I said, the vast majority of permutation puzzles are easy. I mean, sure, you've got a 4-dimensional Rubik's Cube that boggles the mind, but even that one is easy in principle.
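That big number isn't pulled from thin air. It follows from a standard counting argument, which a few lines of Python (my own sketch, not from the article) can reproduce:

```python
from math import factorial

# Standard counting argument for the 3x3x3 Rubik's Cube group:
# 8 corner pieces can be permuted 8! ways and 12 edges 12! ways,
# but the corner and edge permutations must have equal parity (divide by 2).
# 7 corner orientations are free (3^7); the 8th is forced.
# 11 edge flips are free (2^11); the 12th is forced.
positions = factorial(8) * factorial(12) // 2 * 3**7 * 2**11
print(f"{positions:,}")  # 43,252,003,274,489,856,000
```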

The mathematical reason that most permutation puzzles are easy is that they are based on symmetric or alternating groups. In a symmetric group, every single permutation is possible. In an alternating group, half of the permutations are possible. Maybe the previous three sentences made no sense to you, so allow me to translate to a concrete example: the Rubik's Cube. If the Rubik's Cube is based on a symmetric group, that means you can find a way to switch two cubies without changing anything else. If it is based on an alternating group, that means you can find a way to switch three cubies without changing anything else.
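The "half of the permutations" in an alternating group is a statement about parity, and it's easy to check by machine. A quick sketch (my own illustration): the sign of a permutation flips with every two-item swap, a 3-cycle is even, and a single face turn of the cube performs two 4-cycles at once (four corners plus four edges), which is also even. That's why a lone swap of two cubies is forever out of reach.

```python
def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    perm = list(perm)
    sgn, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            # each cycle of length L contributes (-1)**(L-1)
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            sgn *= (-1) ** (length - 1)
    return sgn

swap      = (1, 0, 2, 3, 4, 5, 6, 7)   # exchange two items: odd
three_cyc = (1, 2, 0, 3, 4, 5, 6, 7)   # 3-cycle: even
face_turn = (1, 2, 3, 0, 5, 6, 7, 4)   # two 4-cycles at once: even
print(sign(swap), sign(three_cyc), sign(face_turn))  # -1 1 1
```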

But what happens if you've got a puzzle based on a much more complicated group? What if the simplest algorithm possible must switch 4 cubies at a time, or more? If we look at the mathematical research, there are lots of different finite simple groups. There are exactly 18 infinite families of groups, plus 26 oddball groups, called the sporadic groups. (What's really amazing to me is that mathematicians can prove that there are no other finite simple groups.) So what happens if we build a puzzle based on one of those sporadic groups?

Well, that's what the Scientific American article did. They built puzzles based on the M12 sporadic group, the M24 sporadic group, and the Co0 group (aka the dotto group). Try them yourself!

The M24 puzzle and the dotto puzzle look ridiculously complicated, and I don't, at the moment, have enough patience to solve them. But let's look at the M12 puzzle.

Basically, you've got two moves called Invert and Merge. You can also create a custom macro, if you ever find an algorithm that you like. To solve this puzzle, I followed the same basic steps as I explained for the Rubik's Cube. However, the steps are no longer easy. It is impossible to find an algorithm that switches exactly 2 or 3 numbers. In fact, you must switch at least 8 numbers at a time! Nevertheless, I came up with some useful algorithms.

So do you want some tips on solving the M12 puzzle? Try these simple algorithms. I came up with cryptic names for them too!

2-invert: MIM10I
2-bike: M2IM9
6-bike: M10IM
4-trike: M3IM8I
3-bike: M9IM2
bike/invert: M8IM3
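If you'd rather experiment offline, here is a small simulator I sketched for algorithm strings like the ones above. One big caveat: I'm guessing at the exact move definitions. I've assumed Invert reverses the row and Merge does a Mongean-style shuffle (cards go alternately on top of and underneath a pile), a pair of moves which together reportedly generate the Mathieu group M12. Check these guesses against the actual applet before trusting the output.

```python
import re

def invert(seq):
    """'I' move: reverse the whole sequence (my reading of the puzzle)."""
    return seq[::-1]

def merge(seq):
    """'M' move: a Mongean-style shuffle. This is a guess at the applet's
    move; swap in the real definition if it differs."""
    pile = [seq[0]]
    for i, x in enumerate(seq[1:]):
        if i % 2 == 0:
            pile.insert(0, x)   # on top
        else:
            pile.append(x)      # underneath
    return pile

def apply_algorithm(algo, seq):
    """Run a string like 'M2IM9': a letter optionally followed by a repeat count."""
    for move, count in re.findall(r"([MI])(\d*)", algo):
        for _ in range(int(count or 1)):
            seq = {"M": merge, "I": invert}[move](seq)
    return seq

start = list(range(1, 13))
result = apply_algorithm("MIM10I", start)   # the "2-invert" algorithm
moved = [i + 1 for i, (a, b) in enumerate(zip(start, result)) if a != b]
print(result, moved)
```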

Once you figure out what they do, you'll have to come up with a super-algorithm that combines them to solve the puzzle. It isn't easy, even with my hints.

## Friday, June 27, 2008

Did anyone take the Google Puzzle Championship besides me? If so, what were your scores? I ranked in the top 50. This is the result of much practice, taking the test for several years.

If you didn't take it, you can still try the test here.

### Rubik's Cube non-walkthrough

Ok, so I can solve the Rubik's Cube. You're probably not surprised. Well, if I may boast a bit here... I've been able to solve it for at least seven years. I figured it out completely on my own, without any walkthroughs. I never even saw the hint booklet that comes with the Rubik's Cube. Ours was really old, and the hint booklet was long gone. As a result, I solve the cube in a completely different order than most people do (I think the way I do it makes much more sense). I also own a 4x4x4 Rubik's Cube (16 squares to a face) and can solve that.

The internet will probably not be impressed--not until I can videotape myself solving it in 30 seconds while simultaneously doing interpretive dance. But in real life, people tend to be impressed. The thing is, people don't realize how easy the Rubik's Cube is. There are a few very simple principles for solving any permutation puzzle. By permutation puzzles, I mean the kind of puzzle where you have to arrange the numbers in the right order, put the cubies in the correct place, or otherwise sort a number of items into the "correct" order. I'm not going to go into the details of how to solve the Rubik's Cube. I'll do better: I'll reveal the secret to solving pretty much any permutation puzzle without a walkthrough.

Step 1: Sort as many items as common sense will allow. On the Cube, most people solve one face; I solve for some other weird combination. This is where most people tend to fail--luckily, this step is skippable.

Step 2: Develop several algorithms that will change the position of only a few items. That way, you can solve the remaining part of the puzzle without messing up your previous work. Developing an algorithm is pretty much a trial and error process. Just try a bunch of moves, write down the steps, and note the changes. Once you find an algorithm, it behaves like a black box--you can just memorize the process without knowing exactly what's going on inside it. (However, if you're good, you can open the black box, and manipulate the algorithm without trial and error.)
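One systematic trick for Step 2, which the post doesn't name but which cubers use constantly, is the commutator: do move A, then move B, then undo A, then undo B. If A and B overlap on only a few pieces, everything outside the overlap gets undone, and the net effect touches very few pieces. A small Python illustration (my own, with toy "moves" on 8 abstract pieces):

```python
def compose(p, q):
    """Apply permutation q first, then p (perms are tuples: i -> perm[i])."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def commutator(a, b):
    """The sequence a, b, a-undone, b-undone."""
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

# Two overlapping "moves" on 8 pieces: each cycles three pieces,
# and they share exactly one piece (piece 2).
a = (1, 2, 0, 3, 4, 5, 6, 7)   # cycles pieces 0 -> 1 -> 2 -> 0
b = (0, 1, 3, 4, 2, 5, 6, 7)   # cycles pieces 2 -> 3 -> 4 -> 2
c = commutator(a, b)
moved = [i for i in range(8) if c[i] != i]
print(moved)  # [0, 2, 3] -- only three pieces move
```

Together the two moves touch five pieces, but their commutator moves only three: exactly the kind of small-footprint algorithm Step 3 needs.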

Step 3: Combine these algorithms to solve the rest of the puzzle. The difficulty of this step depends on how good your algorithms are. For example, if you find a way to simply switch two cubies, you can easily repeat this over and over, putting each cubie into its place one by one. However, it is impossible to find any such algorithm on a Rubik's Cube. You must switch at least three cubies at a time. But if you've got enough thinking power, such an algorithm is enough to solve the puzzle. (Incidentally, you can switch two cubies for the 4x4x4, though there are other limits to what you can do.)

And that's it! Easy, right? Well, if you were looking for a solution to the Rubik's Cube, I probably didn't help you in the slightest bit. But now you know what you're looking for.

## Wednesday, June 25, 2008

### Half-truth #3

You should lie regularly. That way, no one will believe what you say.

## Tuesday, June 24, 2008

### Half-truth #2

The difference between the agnostic and the atheist is that the agnostic believes there is a difference, and the atheist doesn't.

## Monday, June 23, 2008

### Half-truth #1

One thing that separates skeptics from other people is that skeptics will question any generalization they hear.

## Sunday, June 22, 2008

### Skepticism in real life

Skepticism is about the truth. But there's more to it than just truth. Everybody is interested in the truth, after all. So what makes skepticism any different? What else does it value?

If you said "evidence", you're correct. But I view things more broadly. People are rather fond of saying that not everything is about cold hard evidence. I agree. In most areas of life, you will never be able to provide cold hard evidence, nor create overarching theories that explain broad swaths of phenomena. But I still think there is an evidence analogue in the fuzzier parts of life. While we do not care about cold hard evidence, we should still care about why things are true.

The first natural question is "Why ask why?" In science, we care about the why for several reasons. In science, we want to be able to convince other people. But in the rest of life, it's not our primary purpose to tell other people how to live. In science, we want to understand the underlying mechanisms so that you can predict other things too. But in the rest of life, underlying mechanisms, if they exist at all, are far too complicated to discern.

Lastly, in science, you want to know what would be needed to falsify your theory. This last reason is valid in the rest of life too. After all, anything you say about life in general is bound to be wrong some of the time. If we know why it is correct, we will have a much better idea of when it is incorrect.

Let's take an illustrative example: "Eat, drink, and be merry, for tomorrow we die." With slightly humorous intent, I will take this statement literally, and ignore its original context.

This little "truth" says we should eat, drink, and be merry. Why? Because tomorrow we die. Well, like all things, this truth is uncertain. There are at least some instances when it is wrong to eat, drink, and be merry. When is it wrong? Well, it might be wrong if tomorrow we don't die. If that is the case, perhaps we can find another reason to eat, drink, and be merry. Perhaps you should eat, drink and be merry because you enjoy doing so. Or maybe you can't find a reason, and should starve.

And another example: "A spoonful of sugar helps the medicine go down."

Why does a spoonful of sugar help the medicine go down? Because the medicine is good for you in the long run, but tastes gross by itself. Therefore, this wisdom does not apply when the medicine is not good for you in the long run. Nor does it apply when the medicine already tastes good by itself, or when you have no sugar to spare. The same reasoning still applies when the expression is taken metaphorically.

Sometimes it rather bothers me when I hear people simply quoting something clever without any hint of why it might be true. I mean, do people really think that there's anything in life that's true all the time without exceptions? For instance, consider the values of "open-mindedness", "diversity", or "moderation". Not to say these things aren't good, but it is nevertheless important to know why they are good, and thus when they are good.

If I may indulge myself with a statistical metaphor... not saying why something is true is like not reporting error bars.

And it's not even like it's hard to come up with reasons for these things. You don't need to write a dissertation about it or anything. Just a little something like "Be open-minded, for you just might be wrong". Easy. And you'll get better insight into the vagaries of life this way.

### Carnival of the Godless #94

The Carnival of the Godless is up at Earthman's Notebook. It includes my submission, Dogma and Metaphor.

I have had poor attendance at these carnivals, but they're still there even when I don't pay attention to them. Go read it!

## Friday, June 20, 2008

### Sorting a bookshelf

You have a bookshelf with eight books of equal thickness. The first four are fiction, the next four are nonfiction. Your goal is to sort the shelf such that the books alternate: fiction, nonfiction, fiction, nonfiction, etc.

The constraint: You may only move groups of three adjacent books. They must all be moved at once, without changing the order. Note that once you move a group, there are three empty spaces left behind. Books with spaces between them do not count as adjacent. Neither the starting nor the ending position should have any empty spaces.

Try to do it in ~~five~~ four moves.

This is based on a similar puzzle in the game 11th Hour. The puzzle in 11th Hour was essentially the same, except that you move groups of two adjacent books instead of three.

See solution

## Wednesday, June 18, 2008

### Dogma and metaphor

Why are metaphorical and symbolic thought so common in religion? Outside the literalist traditions, most religious people will tell you that the Bible must be interpreted metaphorically. The actual events described are not so important as the moral lessons derived from them. They will also interpret many words (e.g., soul, spirituality, God, faith) symbolically, as something far more mysterious than the stereotypical, vulgar definitions.

By contrast, I prefer words that are clearly defined. If a word has too many meanings, I'd rather discard it, lest I confuse people. Why is this done so rarely in religion?

One reason, I think, is because of dogma.

People's beliefs change a lot. The supposed rigidity of religion aside, religious people do think for themselves, and their beliefs move around throughout their lives. But despite the change, it is important that there is continuity of tradition. Believers want to believe that they are still within the tradition. The easiest way to do this is to keep the same language, even if it must be used to mean something slightly different. They still believe in the same things; they simply interpret them differently.

And this is not to say that the new interpretation is wrong. Oftentimes, it's an improvement, or has more historical basis than the previous interpretation. And it is not so much "moving the goalposts" as it is people genuinely having a change of mind. There is nothing wrong with changing one's mind--in fact, I'd say it's a good thing. Nor is there anything wrong with metaphor, or with wanting to preserve the religious language. But the fact that this is done so systematically is indicative of dogma.

So how does this work? We take a piece of dogma in religion, something general, like "God exists", "The soul exists", or "The Bible is good". To me, it is not so important whether these statements are true or false. It is important what we mean when we say they are true or false. If by "God", we mean the world, or the conception within the human mind, then of course God exists. If by "God" we mean a conscious metaphysical being who answers prayers, then I think not. If by "soul" we mean consciousness or the quality that makes one a person, then of course the soul exists. If by "soul" we mean something that comes in discrete quantities and survives into an afterlife, I think not. If by "good" we mean historically important, then of course the Bible is good. If by "good" we mean true, or interesting (to me), I beg to differ.

Now, I have little reason to prefer any of the above definitions over the others. However, in the presence of dogma, it is far preferable that we stick to definitions in which the general statements remain true. With this practice of selecting the right definitions, even I could be, if not an outright Christian, some sort of agnostic or theist. If a person takes an even weaker position than my own, he/she could easily frame him/herself as a Christian. And again, there is nothing intrinsically wrong with such a decision. But the fact that this practice is so common indicates that many people (though not all) have been influenced by dogma.

## Monday, June 16, 2008

### The other Uncertainty Principle

Update:  I have to say that out of all my science posts, this is the one I regret the most.  I'm not sure that I would say it's inaccurate, but I think it is incomplete.  A more complete explanation would have explained the uncertainty principle in a way that makes sense with virtual particles.  But I will not attempt to fix the post; it remains below unchanged.

I've previously discussed the Uncertainty Principle:

Δx Δp ≥ ħ/2

Δx is the uncertainty in the position of a particle, while Δp is the uncertainty in the momentum of the particle. ħ is a very small fundamental constant (the reduced Planck constant).

There is another Uncertainty Principle that people are fond of mentioning.

ΔE Δt ≥ ħ/2

Here, E is energy and t is time. This equation is often mentioned in footnotes, where writers note that not only is space uncertain, but time is also uncertain. The general thrust is "Quantum Mechanics: isn't it insane? Let us all marvel at Nature's imagination!"

The problem is that the equation is so often misunderstood. The equation is not exactly a myth, but it does not mean what people think it means.

Position (x), momentum (p) and energy (E) are all measurable quantities. Time (t) is not. What does it mean to measure the "time" of a particle? Ok, it is possible to measure "time" if you use the Theory of Relativity, but this equation has nothing to do with that. It is based on non-relativistic equations, and works for particles moving much slower than the speed of light. Because time is not a measurable quantity, the second uncertainty principle means something entirely different from the first one (and has a different derivation too).

Δt does not mean the same as Δx. Δx is defined as the standard deviation: the typical difference between the measured value of x and the expected value of x. Δt is defined as the amount of time it takes for a wavefunction to change by a "significant"* amount. If the uncertainty in energy (ΔE) is very low, then it will take a long time for the wavefunction to change. If the wavefunction changes very quickly (Δt is very small), then the energy must be uncertain. If the energy is known exactly, then the wavefunction does not change at all in any measurable way. This is called a "stationary state": a wavefunction that does not move, because the energy is exactly known.
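This reading of Δt has a standard name, the Mandelstam-Tamm relation, and it can be written so that the clock-like character of Δt is explicit. Here A is any measurable quantity (position, say), and Δt is the time required for the expectation value ⟨A⟩ to drift by one standard deviation ΔA:

```latex
\Delta E \, \Delta t \ge \frac{\hbar}{2},
\qquad
\Delta t \equiv \frac{\Delta A}{\left| \, d\langle A \rangle / dt \, \right|}
```

For a stationary state, d⟨A⟩/dt = 0 for every observable, so Δt is infinite, just as described above.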

In some sense, Δt really does indicate some sort of uncertainty. If you have a moving particle, you can use its position to determine how much time passes. Any such "clock" would have an uncertainty in time equal to Δt. This isn't due to any uncertainty in time, but rather, the uncertainty in position. If you have a particle in a stationary state, you can't use it as a clock at all, because its position never changes, and Δt is infinite. But it is very misleading to think Δt means the same thing as uncertainty in time. Time itself is not uncertain--that's just your clock.

While Quantum Mechanics is weird, it's not quite so weird that it denies universal time. No, it was Special Relativity that denied universal time. Glad that's settled, then...

*"Significant" change means that a measurable quantity changes by one standard deviation. For example, Δt could mean the time required for the expected position (x) to change by a quantity Δx.

## Saturday, June 14, 2008

### Occam's Razor: Don't claim too much

[Note: if you signed up for the Google Puzzle Championship, it's in two hours, 1 PM EDT!]

I've spoken on Occam's Razor before. And then I offered a different interpretation. If it seems like I'm trying to rationalize and salvage the few justifications for Occam's Razor, it's because I am. I'd like to emphasize that I don't think Occam's Razor is all it's cracked up to be. Its application is rather narrower than people think and its use is best avoided.

But there is yet another interpretation that I want to discuss. I said in a previous post that to do the mathematical calculations in Bayes' theorem, you need what is called the "prior" probability of a claim. The prior probability is the likelihood that the claim is true before we've looked at the evidence.

Estimating the prior probability of a claim is a fundamentally unsolvable problem. Sometimes you're lucky; if the claim is about people, you can simply test a bunch of people to get the probability that it is true for any one person. But if the claim is about the universe, you can't really test a bunch of universes--we can only see one! That's why it's generally a bad idea to give the prior probability an actual number. For an argument to be effective, it should not rely on questionable estimates of prior probabilities.

But sometimes there simply is no effective argument in either direction. What happens then? We start arguing over prior probabilities. We start arguing about how much "sense" the claim makes. The debate devolves into a matter of personal belief and incredulity.

Enter Occam's Razor. As I said before, the claim that "I have an apple" is more likely than the claim "I have an apple and a banana". That's because the latter necessarily implies the former. In general, the probability of any claim "A" is the sum of the probabilities of "A and B" and "A and not B". Therefore, by removing any mention of B, we've made our claim at least as likely. In fact, the claim becomes more likely the more elements you remove from it. The less you say, the less likely you are to be wrong.
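The decomposition is just finite probability arithmetic, and it can be spot-checked in a few lines (a toy sample space of my own choosing, where having an apple and having a banana are independent coin flips):

```python
from itertools import product
from fractions import Fraction

# Toy sample space: every combination of (apple?, banana?) equally likely.
space = list(product([True, False], repeat=2))

def prob(event):
    """Probability of an event: count the outcomes where it holds."""
    hits = sum(1 for outcome in space if event(*outcome))
    return Fraction(hits, len(space))

p_apple = prob(lambda a, b: a)
p_apple_and_banana = prob(lambda a, b: a and b)
p_apple_not_banana = prob(lambda a, b: a and not b)

# "A" decomposes exactly into "A and B" plus "A and not B" ...
assert p_apple == p_apple_and_banana + p_apple_not_banana
# ... so the shorter claim can never be less likely than the longer one.
assert p_apple >= p_apple_and_banana
print(p_apple, p_apple_and_banana)  # 1/2 1/4
```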

Of course, the logical conclusion is that in order to avoid being wrong, the best claim is one with zero elements: pure agnosticism. But such a claim is useless. There is a certain balance here between usefulness and likelihood. To strike this balance, you want to remove any unnecessary elements from your claims and keep the necessary ones.

Some people try to compare claims by simply counting up the number of elements. This is a useful move, but it doesn't follow logically from the above. And how can you really count the number of "elements"? What constitutes a single "element"? While these questions are not unanswerable, they are a major obstacle.

## Friday, June 13, 2008

### 9-square fold solutions

See the original puzzle

In order to keep the different squares straight, I'm going to label each one with a number.

Now, I could just give you the solutions by giving the correct order of numbers. For example, the "spineless" one is 743652189. But that's kind of boring, and it doesn't really tell you exactly how you're supposed to fold it anyway. Instead, I'll discuss the general principle behind the 9-square fold.

The general principle is that paper can't go through itself unless you tear it. Try poking your finger through paper without tearing it. It doesn't work. Simple, eh?

Let's consider a simpler case: a 4-square fold with just the numbers 1, 2, 4, and 5. Can we create the following stack?

1245

There are exactly 4 folds in every 4-square stack. Those folds are between squares 1 and 2, 2 and 5, 4 and 5, 1 and 4. I will call these folds 1-2, 2-5, 4-5, and 1-4 respectively. If you create a square stack, you will find that folds 1-2 and 4-5 are always on the same side of the square, while 1-4 and 2-5 are on another side of the square. That means that the folds 1-2 and 4-5 cannot cross each other, nor can the folds 1-4 and 2-5. Paper can't go through itself! In the stack 1245, folds 1-4 and 2-5 would cross each other, therefore it is impossible.

For a 9-square fold, there are 12 folds: 1-2, 2-3, 4-5, 5-6, 7-8, 8-9, 1-4, 2-5, 3-6, 4-7, 5-8, 6-9. Remember, the final stack will be a square with four sides. One side will have the folds 1-2, 4-5, and 7-8. Another will have 2-3, 5-6, 8-9. Another will have 1-4, 2-5, 3-6, and the last will have 4-7, 5-8, 6-9. Allow me to put this into a table:

| Side | Folds |
| --- | --- |
| 1 | 1-2, 4-5, 7-8 |
| 2 | 2-3, 5-6, 8-9 |
| 3 | 1-4, 2-5, 3-6 |
| 4 | 4-7, 5-8, 6-9 |

Within each of these groups, there can be no crossed folds. This is sufficient to distinguish between possible and impossible solutions. Now you can tell, at a glance, whether the sequence 123456789 is possible (it's not). Of course, this still doesn't help you actually fold the paper. That is something that can only be mastered by trying it yourself.
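The "no crossed folds" rule is mechanical enough to automate. Here's a sketch (my own, assuming the squares are numbered so that the folds are exactly the twelve listed above): on any one side, two folds cross exactly when their layer intervals interleave, i.e. they are neither nested nor disjoint.

```python
# The four sides of the final stack and the folds that lie on each.
SIDES = [
    [(1, 2), (4, 5), (7, 8)],
    [(2, 3), (5, 6), (8, 9)],
    [(1, 4), (2, 5), (3, 6)],
    [(4, 7), (5, 8), (6, 9)],
]

def possible(stack):
    """Check the paper-can't-pass-through-itself condition for a stack,
    given as a string like '743652189' (top to bottom)."""
    pos = {int(ch): i for i, ch in enumerate(stack)}
    for side in SIDES:
        spans = [sorted((pos[a], pos[b])) for a, b in side]
        for i in range(len(spans)):
            for j in range(i + 1, len(spans)):
                (a1, b1), (a2, b2) = spans[i], spans[j]
                # Two folds on one side cross iff their layer
                # intervals interleave (neither nested nor disjoint).
                if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
                    return False
    return True

print(possible("123456789"), possible("743652189"))  # False True
```

As the post says, this check tells you which stacks are attainable; it doesn't tell you the folding sequence.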

## Wednesday, June 11, 2008

### Global Warming recap

Remember that discussion I've been having with Pat Frank at BASS about his global climate article in Skeptic Magazine? Yes, it's been going on this whole time.

This sort of thing is way more time-consuming than the other stuff I write. General philosophy and skepticism is easy. Discussing specific claims is hard, especially when the opposing party is right there to call you out. But it's also rewarding in a way. I learn about a slice of climate science and the climate debate. My mistakes get corrected.

And I did make my mistakes. First of all, Pat Frank is by no means innumerate. In fact, that was probably a big lapse of judgment on my part. Second of all, I assumed the errors were random, but Pat Frank actually invested some time showing that they are not. Using these corrections, I narrowed down my criticism.

And the conclusion? You'll have to figure it out the hard way--by reading the discussion. I don't think the "conclusion" itself is as important as the content. I also think that our little discussion is but a pebble next to the larger global warming debate. For one thing, GCMs are but one part of climate science. For another, our discussion is more of a "popular" one than a "scientific" one. Skeptic, too, is a popular magazine, not a scientific journal. (Note: Skeptic included an opposing viewpoint in the same issue.)

Speaking of which, you have to wonder why Pat published in Skeptic rather than a scientific journal. An answer comes from a thread at RealClimate. At RealClimate, there are actually climate scientists discussing it. Unfortunately, the thread is really long, and at times too technical for me. Also, sometimes the scientists are simply dismissive, which doesn't really help me, the little guy! :-)

Anyways, in the RealClimate thread, Pat Frank replies to questions about his motivations.
> I submitted the manuscript to Skeptic because it has a diverse and intelligent readership that includes professionals from many disciplines. I’ve also seen how articles published in the more professional literature that are critical of AGW never find their way into the public sphere, and wanted to avoid that fate.
Ok. I understand wanting to popularize one's science--I'm a bit of a popularizer myself. Obviously, he was successful enough that I started talking about it. But the drawback to popularizing the way he did is that now we get to question his motivations. ;-) For example, ideally, he should have first published in a science journal, and then put a popular version into Skeptic.

Ok, I won't attack Pat's character anymore. I'm just having a bit of fun. But seriously, he's ok, and I respect his debate tactics. Many thanks to him for being polite throughout.

## Monday, June 9, 2008

### Induction vs Falsification

While I'm on the topic of induction, I should discuss its relation to the philosophy of "falsification". Or rather, its opposition to said philosophy.

Falsification is perhaps the most well-known piece of philosophy of science. The idea was invented by Karl Popper around the 1930s. Among other things, it was meant to answer the demarcation problem, the question of what is and isn't science. A theory is scientific if it is falsifiable; it is unscientific if it is unfalsifiable. By "falsifiable," we mean that there is some piece of evidence that might disprove the theory. If we have the theory, "All crows are black," this can be falsified by the observation of a white crow. The reasoning behind this piece of philosophy is that you can never prove that all crows are black, at least not in practical terms. But we can disprove it, by observing a white crow. So instead of trying to prove it, we should simply try our best to disprove it.

Despite being the popular view of how to distinguish science from non-science, falsifiability is not really how most scientists themselves think about science. This is because science doesn't actually work that way, not exactly. Nor is it apparent that it should work that way. Scientists don't exclusively spend their time trying to disprove their own ideas. But my criticism comes from a different direction.

My problem with falsification is that it buys into the dichotomy between "positive" and "negative" claims. It's said you can't prove negative claims (i.e., the non-existence of a particular object) but you can prove positive claims (i.e., the existence of a particular object). While this certainly describes a lot of different claims, in general, there is no dichotomy. It's not necessarily easier to prove positive claims than negative ones. After all, the distinction between positive and negative is artificial. Any positive claim "P" can be made into the negative claim "not-(not-P)".

For example, consider the claim, "More peppered moths are black than white." You can't disprove this by simply finding a white peppered moth. Nor can you prove it by finding a black peppered moth. In fact, you can't ever absolutely disprove or prove it! You can come pretty close by observing a large random sampling, but you never prove or disprove anything.

More sophisticated forms of falsification account for this by saying you can falsify a theory when the evidence is so great that it's no longer reasonable. But no one single observation can falsify the theory, so when exactly does it go from unfalsified to falsified? Wouldn't it be more useful to be able to characterize all the grays between proof and disproof (especially when neither extreme is actually possible), or perhaps even quantify them?

An alternative to falsification is inductionism. Induction does not purport to be able to prove or disprove anything. But it can argue that certain claims are more or less likely, and that can be almost as good as proof. There is even some mathematical underpinning to it, so you could, in principle, quantify your grays. There are a few assumptions made, but they are not unreasonable, and we can always make exceptions for those few circumstances in which the assumptions are questionable.
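Those "grays" really can be quantified. Here's a toy calculation (the moth counts and the uniform prior are my own choices, purely for illustration): given a random sample of peppered moths, compute the posterior probability that more moths are black than white. The answer is never 0 or 1, but it moves decisively with the evidence.

```python
def posterior_more_black(black, white, grid=2000):
    """P(true black fraction > 1/2 | observed sample), under a uniform
    prior on the fraction, via numerical integration over a grid."""
    weights = []
    for k in range(1, grid):
        p = k / grid
        # binomial likelihood of the sample (constant factor cancels)
        weights.append((p, p**black * (1 - p)**white))
    total = sum(w for _, w in weights)
    above = sum(w for p, w in weights if p > 0.5)
    return above / total

# Observing 14 black and 6 white moths makes "more black than white"
# quite likely, but never proven:
print(round(posterior_more_black(14, 6), 3))
```

Neither observation count ever "falsifies" anything outright; the probability just climbs toward 1 or sinks toward 0 as the sample grows.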

And of course, the third alternative is to accept both inductionism and falsification. I think Popper saw falsification as a replacement for induction, not a supplement, but who am I to let Popper dictate our options? The problem is that falsification is usually more or less the same as induction, only less powerful. Other times, it seems exactly the same, except with clunkier terminology. The only time I think falsification is useful is in its simple solution to the demarcation problem. It makes distinguishing science from non-science easy. But then, I think it can be wrong sometimes, because it is too simplistic. Perhaps there are some scientific claims that can't be falsified, or unscientific claims that can be falsified.

Perhaps I can't put the final nail in the coffin of falsification, but I intend this for a general audience that perhaps has not previously questioned Popper's ideas about science. The take-home message is that falsification is not a universally accepted way to think about science, and should not be taken for granted. In practice, there is rarely any particular point where a scientific theory is clearly falsified, but that doesn't mean we can't make progress.

## Saturday, June 7, 2008

### Religion and violence?

Take a look at this article: Too Much Faith in Faith. Alan Jacobs argues that it is not necessarily religion that causes people to behave violently or abuse power. Often, people are simply lying about their motivations.
> Yet when someone does something nasty and claims to have done it in the name of religion, our leading atheists suddenly become paragons of credulity: If Osama bin Laden claims to be carrying out his program of terrorism in the name of Allah and for the cause of Islam, then what grounds have we to doubt him? It's not like anyone would lie about something like that as a strategy for justifying the unjustifiable, is it?
>
> Though it may seem ironic for a Christian to be saying this, it's time to talk less about the power of religion and remember instead the dark forces in all human lives that religious language is too often used to hide.
The main problem with the article is that Alan Jacobs acts as if this were a new idea that no one thought of before. I'd consider it a venerable opinion on the intersection of religion and politics. It is a view that many atheists have already considered, or even agree with. I agree too, at least more than I disagree.

And since this view has been around awhile, there are plenty of objections already out there. Here's a comment by Ebonmuse that illustrates a typical response:
> Yes, I can imagine what people would be capable of if they did not believe in God. They would be capable of building a peaceful world of reason where our mutual differences are set aside in the name of our common humanity. Religion is not the only cause of our ills, but as long as it divides us, and as long as people think their dogmas are more important than other people's freedom and happiness, the killing you refer to will never end. Atheism is not the solution to all our problems, but it is definitely the solution to one of the bigger ones.
Another view from Greta Christina:
> Many defenders of religion do the exact same thing -- only in reverse. They point to people like King and Gandhi to show what a positive force religion is in the world... but then argue that the Bin Ladens and Torquemadas of the world would have acted exactly the same without religion.
But enough relying on authorities. Here's my take:

The fact is that many people use religion to justify violence. In all likelihood, the underlying causes of the violence are, whether they know it or not, political or social factors, not religious factors. And yet, people still use religion as a justification. Why? Is it because religion is especially good at "justifying the unjustifiable"? Whatever the reason, it doesn't speak well of religion.

In any case, this is not the sort of argument I like to use against religion. Just because a few adherents do something bad doesn't mean a whole lot. If we just look at religious violence, it's too difficult to sort out all the other possible causes. And then we might just forget about all the non-violent problems with religion. I prefer to criticize religion on rational grounds, not political ones--that way it becomes much clearer which parts of religion are bad, and what we can do to improve upon them.

On an unrelated note, I found this quote from the article wonderfully ironic.
> I would counsel our contemporary atheists to study some of their more consistently skeptical ancestors: George Orwell, for instance, who exposed the fundamental and incorrigible dishonesty of most political speech in his great essay "Politics and the English Language."
Did he just "counsel" atheists by appealing to authority? Incidentally, I already happen to think Orwell's famous essay was overrated. I just thought it was amusing how Jacobs' counsel backfired.

## Friday, June 6, 2008

### Now to enact... The Plan

I recognize a meme when I see it. You just know this will be all over the skeptical blogosphere, given the recent news about the video evidence of aliens.

## Thursday, June 5, 2008

### Absence of evidence in the Bayesian

Here's a little something that maybe you didn't know about induction. Let's say I have evidence B. I can use this evidence B to argue inductively for claim A. Evidence B doesn't prove A, but it does make A more likely. So what happens if I instead have evidence not-B? That is, I've looked, and found that evidence B is absent. Does that make not-A more likely?

In other words, does absence of evidence amount to evidence of absence? Yes. And I can prove it mathematically. [I mention math and thus lose half my readers... Skip the proof section if you must.]

**The Proof**

Good to read first: Induction and the Bayesian

Bayes' theorem states the following:
$P(A|B) = \frac{P(B | A)\, P(A)}{P(B)}.$
P(A|B) is equal to the probability that claim A is true if we find evidence B. P(A) is the "prior" probability that A is true. P(B|A) is the probability of finding evidence B if we know A is true. P(B) is the "prior" probability of finding evidence B. If B is evidence for A, then P(A|B) > P(A). If not-B (written as ~B) is evidence for not-A, then P(~A|~B) > P(~A). Thus we seek to prove the following:

If P(A|B) > P(A), then P(~A|~B) > P(~A)

We will use, in addition to Bayes' theorem, the following identities:

~(~X) = X
P(~X) = 1 - P(X)
P(~X|Y) = 1 - P(X|Y)

This theorem assumes two things. First, none of the prior probabilities can be zero. For example, if P(B) = 0, Bayes' theorem doesn't even make sense, since it divides by zero. Second, it assumes that probability is a good way to model knowledge.*

Proof:
1. Bayes' theorem: P(A|B) = P(B|A)*P(A)/P(B)
2. We're given: P(A|B) > P(A)
3. Combining 1 and 2: P(B|A)*P(A)/P(B) > P(A)
4. Multiply by P(B): P(B|A)*P(A) > P(B)*P(A)
5. Use the identities: [1-P(~B|A)]*P(A) > [1-P(~B)]*P(A)
6. Some algebra: P(~B)*P(A) > P(~B|A)*P(A)
7. Use Bayes' theorem: P(~B)*P(A) > P(A|~B)*P(~B)
8. Use the identities: P(~B)*[1-P(~A)] > [1-P(~A|~B)]*P(~B)
9. Some algebra: P(~B)*P(~A|~B) > P(~A)*P(~B)
10. Divide out by P(~B): P(~A|~B) > P(~A)
QED
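For the skeptical (or the algebra-averse), the theorem can also be checked numerically. Here is a quick sketch in Python that samples random joint distributions over A and B and verifies that, in every sampled case where B is evidence for A, ~B is evidence for ~A:

```python
import random

# Numerical sanity check of the theorem (a sketch, not a proof): sample
# random joint distributions over A and B, and verify that whenever
# P(A|B) > P(A), we also have P(~A|~B) > P(~A).
random.seed(0)
for _ in range(10000):
    # Random weights for the four joint outcomes, normalized to sum to 1
    w = [random.random() for _ in range(4)]
    total = sum(w)
    p_ab, p_a_nb, p_na_b, p_na_nb = (x / total for x in w)
    p_a = p_ab + p_a_nb                   # P(A)
    p_b = p_ab + p_na_b                   # P(B)
    p_a_given_b = p_ab / p_b              # P(A|B)
    p_na_given_nb = p_na_nb / (1 - p_b)   # P(~A|~B)
    if p_a_given_b > p_a:
        # Small tolerance guards against floating-point rounding
        assert p_na_given_nb > (1 - p_a) - 1e-12
print("no counterexamples found")
```

Of course, ten thousand random cases prove nothing that the algebra above hasn't already proven; the point is just that you can watch the theorem hold with your own eyes.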

*Note: Some people might consider this second assumption questionable. Under certain interpretations, probability is only reliable in analyzing repeatable phenomena, and the universe is not repeatable.

**Discussion and Conclusion**

What does this mean? It means that if the existence of some evidence supports a claim, then the non-existence of that evidence detracts from the claim. This is a logical necessity in induction. The only assumptions are that we can model our knowledge with probabilities, and that none of the prior probabilities are certain.

This directly contradicts the conventional wisdom that "Absence of evidence is not evidence of absence". So where did this conventional wisdom come from?

There are two justifications I can think of. First, "absence of evidence" might mean that we don't know whether there is or isn't evidence, because we haven't looked. In that case, the conventional wisdom is true. Second, though I proved that absence of evidence is evidence of absence, I did not prove that it's very good evidence of absence. For example, if I found bigfoot behind a tree, that would provide extremely good evidence for bigfoot, but if I didn't find him behind a tree, that would provide very weak evidence against bigfoot. But it's still evidence, mathematically speaking. I've previously explained this asymmetry as the basis for the concept of "burden of proof".
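To put hypothetical numbers on the bigfoot example: suppose a prior credence in bigfoot of 1%, and a one-in-a-million chance that, if he exists, he'd be behind this particular tree (both numbers assumed purely for illustration). A quick Bayesian update shows the posterior after not finding him is lower than the prior, but only barely:

```python
# Illustrative Bayesian update: how much does *not* finding bigfoot behind
# one tree lower our credence? All numbers here are assumed for illustration.
def posterior_given_absence(prior, p_found_if_exists):
    # Bayes' theorem: P(exists | ~found) = P(~found | exists) P(exists) / P(~found)
    p_not_found = (1 - p_found_if_exists) * prior + (1 - prior)
    return (1 - p_found_if_exists) * prior / p_not_found

prior = 0.01          # assumed prior credence in bigfoot
p = posterior_given_absence(prior, 1e-6)
print(prior - p)      # a minuscule drop: evidence of absence, but very weak
```

The posterior drops by only about a hundred-millionth, so it is evidence of absence in the mathematical sense, but nowhere near proof.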

So, I wasn't lying when I said the Bayesian gives us insight into the inner workings of reason! This is just one of the reasons that math is cool.

## Wednesday, June 4, 2008

Hey everyone, you like puzzles, right? Of course you do.

Then go register for the 2008 Google Puzzle Championship. It is on Saturday, June 14, at 1 PM EDT. You must register before June 12.

Answers are submitted online. You have no obligation to participate if you register. You are given 2 1/2 hours, but you're not expected to complete the whole thing in that time. Typical puzzles include Sudoku variants and other "grid" puzzles, but there are also a bunch of grab-bag puzzles, like a "spot the difference" puzzle. You can try the practice test if you want a better idea of what's on it.

I have participated in this competition for a few years now. I never win, of course, but my results have slowly improved. My tips: print out the puzzles, and use colored pencils. I'll take questions if anyone is curious about anything. Tell me if you intend to participate. I want to know if anyone is interested, or if I should just never bring it up again.

## Tuesday, June 3, 2008

### God as a magic feather

"Doesn't it ever get you down that there's no god?"

Not really, no. See, God is like a magic feather.

The magic feather has got to be one of the cheesiest tropes to ever be used in fiction. A character is given some item and is told that it's magic. Further into the story, the item is somehow lost. Then it's revealed that the item was never really magic, and the-magic-was-in-them-all-along. The most notable example is when Dumbo the elephant is given a magic feather that allows him to fly. Later, Dumbo realizes that he never actually needed the feather to fly, and the-magic-was-in-him-all-along. Awwwww.

Frankly, I don't really care for the magic feather trope. But the reason it's so cheesy is because it's overused and predictable, not because it's devoid of emotional content. If it ever happened in real life, it would be quite dramatic. That's one reason why deconversions can be dramatic. Somebody leads a good life, worshipping Christ, or Krishna, or whatever. Then he decides to consider it rationally, and finds that it doesn't make a whole lot of sense. At first he is devastated at the loss of something so precious. But over time, he realizes that all that was good in life was not of God's doing, since God never really existed in the first place.

So many things are credited to God, but we finally realize that it was our own doing, or nature's doing. Those people who successfully quit drugs or crime by surrendering to Jesus? They had the power all along! Those people who spent their lives trying to help the less fortunate? They didn't really need God for that. Our moral compass? It was never contingent upon God. The community spirit? If everyone would realize it, they wouldn't require the Holy Spirit.

The same goes for all the bad things that come out of religion. The prejudice, intolerance, ignorance, holy wars, suicide bombers... You can continue to do those without God too, if you really wanted to. That's because the magic was in you all along! Awwwww.

## Sunday, June 1, 2008

### The Uncertainty Principle

(See my previous series on Quantum mechanics and the double slit experiment)

**Position and Momentum of Wavefunctions**

In quantum mechanics, things are not described as particles. Nor are they described exactly like waves. They are described as wavefunctions. Wavefunctions look like this:

[figure: a localized wave packet]
Well, they don't always look like that. It can look like almost anything, really. Also, a true wavefunction would occupy three dimensions, while this one only occupies one. But this is a nice clean example, in that we have a fairly good idea of where the object "is" and where it's "going".

Where is it? Well, the object doesn't really have a "location", strictly speaking. And yet we can measure the location anyway. We'll get an exact location, limited only by the accuracy of our measuring device. But that exact location may be here, or it may be there. It may be millions of miles away. Of course, some locations are more likely than others. The chance of it being millions of miles away is much smaller than the chance that Quantum Mechanics is simply wrong. It's most likely to be near the middle. The probability of finding it at any point is proportional to the amplitude-squared of the wave. The bigger the wave is, the more likely you are to find the object there.

The "expected" position is right in the middle. But our measurement will never be exactly in the middle. You'd have to be impossibly lucky for that to happen. It will always appear slightly off to the side. The average distance* from the middle is called Δx.

Where is it going? The object doesn't really have a "velocity", strictly speaking. And yet we can measure the momentum (equal to mass times velocity). We'll get an exact momentum, again limited only by the accuracy of our measuring device. How do we tell the momentum just by looking at the wavefunction anyway? It turns out that the momentum is related to how much the wavefunction goes up and down. The faster it goes up and down, the higher the momentum.**

Now, this is a little more difficult to visually realize, but the momentum that we measure is not exact. Just like when we measured position, there will be an "expected" momentum, but the measured momentum will always be slightly different from the expected. The average difference between our measurement and the expected momentum is called Δp.

**The Leaky Faucet**

It's time for an (imperfect) analogy. Let's say you're listening to the sound of a leaky faucet. It keeps dripping. I ask you how quickly it is dripping. Since you've been listening for a long time, you can give me a very accurate answer. But then I ask you how quickly it is dripping this minute. It might have been dripping faster or slower than average this minute. Well, you count the number of drips during the minute, and you tell me. But you might be off by a fraction of a drip. Therefore, your uncertainty would be 1 drip per minute. When I ask you how quickly it is dripping this second, you will have an uncertainty of 1 drip per second. If I ask you how quickly it is dripping this instant, the question is nonsensical. There isn't any answer to give.

The wavefunction is analogous, with Δx playing the role of the amount of time you listen for drips, and Δp the rate of dripping. The smaller Δx is, the greater Δp is, because you have less space over which to count the number of "drips". The smaller Δp is, the greater Δx must be, because you need lots of space to count the number of "drips" accurately. In general, we have the Uncertainty Principle:

Δx Δp ≥ ħ/2

This means that the product of the uncertainty in position and the uncertainty in momentum is at least ħ/2, where ħ is a universal constant of nature (ħ/2 is equal to 5.27 × 10⁻³⁵ kg·m²/s; it is extremely small!). And yes, I can prove this mathematically from the axioms of Quantum Mechanics, though it is too difficult to do so here.
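While the full proof is beyond the scope of this post, the relation can at least be checked numerically for a simple case. A Gaussian wave packet actually saturates the bound, with Δx·Δp exactly equal to ħ/2. The sketch below (in natural units where ħ = 1, with an assumed packet width) computes Δx directly from the wavefunction and Δp from its Fourier transform:

```python
import numpy as np

# Numerical check of the uncertainty relation for a Gaussian wave packet,
# which saturates the bound Delta-x * Delta-p = hbar/2.
# Assumptions: natural units (hbar = 1) and an arbitrary packet width sigma.
hbar = 1.0
sigma = 0.7
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Gaussian wavefunction, normalized so the probabilities integrate to 1
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Delta-x: root-mean-square spread in position (expected position is 0 by symmetry)
delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Delta-p: spread of the momentum-space wavefunction (the Fourier transform)
p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx) * hbar
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p)
delta_p = np.sqrt(np.sum(p**2 * prob_p))

print(delta_x * delta_p, hbar / 2)   # the product comes out to hbar/2
```

A narrower packet (smaller sigma) shrinks Δx but widens Δp by the same factor, so the product stays pinned at ħ/2, just as the faucet analogy suggests.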

You might ask, "Can't you just look at the faucet to see how quickly the water is accumulating into a drip?" Yes, but this is where the analogy breaks down. You can't do the same with a wavefunction. Well, yes, you can do a variety of experiments to determine exactly what the wavefunction is. And then, using this wavefunction, you can tell exactly what the expected position and expected momentum is. But if you directly try to measure either one, you won't get exactly the expected values.

**Quantum Measurements**

Recall that I said when you take a measurement, you get an exact value, only limited by the accuracy of your measuring device. Another important element is that once you've measured it, that measured value becomes the true value. For example, if you measure the momentum exactly, the wavefunction "collapses" into something like this:

[figure: a sinusoidal wave extending to infinity in both directions]

In the wavefunction above (assuming it continues on to infinity), the momentum is exact. If you measure it again, the momentum will always be exactly equal to the expected momentum. However, the wavefunction above is physically impossible: it would be distributed evenly everywhere in the universe. Unless your device is capable of making the object be everywhere at once, it cannot measure the momentum exactly.

If you measure position exactly, you'll get this wavefunction:

[figure: a sharp spike at a single point]

In this wavefunction, the position is exact, at least for an instant. But the momentum is completely unknown. Asking for the momentum is like asking how quickly the faucet is dripping in a particular instant. The position won't stay exact for long because the wavefunction will quickly scatter in all directions. This wavefunction is also impossible because it has an infinite amount of energy. Unless you've got that kind of energy, you can't measure position exactly.

Although you can't measure position or momentum exactly, you can still measure them to any degree of accuracy you like, provided you have the instrument for it. So what's to stop you from violating the Uncertainty Principle? Well, once you've measured it, you've changed the wavefunction. If you measure it accurately enough, you drastically change the wavefunction. You can first measure the position extremely accurately, and then the momentum extremely accurately. But if you then try to measure the position again, you may get an entirely different value than the first time.

This will occur no matter how good your measuring devices are. It has little to do with our common conception of uncertainty as the result of mere human error. It is a different, more fundamental kind of uncertainty that only really occurs on tiny, tiny scales. And that, my friends, is the Uncertainty Principle.

*technically, it's the root-mean-square distance

**The attentive reader might ask, "How do we know whether it's going left or right?" You can't, at least not from what I've shown. The wavefunction actually has two components, the "real" part and the "imaginary" part, and you need to see both to know which way the object is going.