Dear reader, that was one of those times when I had my tongue firmly in my cheek. I can't use that as a serious argument against quantum mysticism, because I just know that quantum mystics will bite the bullet and agree that they really can violate causality.
I'm amused to see a new study doing just that. The study purports to show that "priming" (subliminally showing a word or concept) affects people even when they are primed after being tested. The author takes this as evidence of psi, and goes on to talk quantum nonsense:
"Those who follow contemporary developments in modern physics ... will be aware that several features of quantum phenomena are themselves incompatible with our everyday conception of physical reality."

Gee, I do follow modern physics, and I'm pretty sure that it's incompatible with retroactive psi effects. Whatever the explanation for the results, this is not it.
For us lay skeptics, the appropriate thing to do here is to stop and await replication. I'm probably not qualified to spot any errors in the study's methodology (and there's no guarantee that the report is written in such a way that spotting the errors is even possible).
But from the comments, I found that part of the study has already been replicated, with negative results. And someone went through the study itself and found serious flaws in its statistical analysis. Among other things,* the study ignores the distinction between exploratory and confirmatory research. Exploratory research tries out many hypotheses to see if any of them might be interesting; confirmatory research tests a specific hypothesis to see if it pans out. The authors appear to have done exploratory research without being upfront about it. That is, they tested so many different hypotheses that even a random data set was bound to confirm at least one of them. They were data snooping (the sketch below shows how easily that happens).
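Here's a minimal sketch of that failure mode, using plain numpy/scipy; the subject and hypothesis counts are made up, not taken from the study. Test a few dozen hypotheses against pure noise and count how many come out "significant":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_hypotheses = 100, 40  # hypothetical numbers, not from the study

false_positives = 0
for _ in range(n_hypotheses):
    # Each "hypothesis" is a one-sample t-test of pure-noise scores
    # against a chance level of 50%.
    scores = rng.normal(50, 10, size=n_subjects)
    _, p = stats.ttest_1samp(scores, 50)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_hypotheses} null 'hypotheses' tested significant")
# At alpha = .05, about 1 in 20 tests on pure noise will "succeed",
# so 40 shots at the target virtually guarantee a hit or two.
```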
*The study also ignores prior probabilities, and the positive results disappear under the more rigorous Bayesian t-test.
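For the curious, here's a minimal sketch of what that footnote means, assuming the pingouin library's default Bayesian t-test and made-up numbers (a small effect sized to squeak past p < .05):

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
# Hypothetical "hit rates": true mean 52% vs. a 50% chance level,
# 100 subjects, i.e. a small effect of roughly the size at issue.
scores = rng.normal(52, 10, size=100)

result = pg.ttest(scores, 50)  # one-sample t-test against chance
print(result[["T", "p-val", "BF10"]])
# A p-value just under .05 typically pairs with a Bayes factor near 1,
# meaning the data barely favor the effect over the null at all.
```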
This study reminds me of my brief experience with LIGO. The LIGO collaboration has some pretty zany schemes to prevent bias in data analysis, like secretly injecting fake signals into the data to see whether the analysis pipeline handles them honestly. Is it too much to ask that psi researchers do the same?
I just thought up a simple scheme they could use! First, simulate every study with a random number generator. Repeat, like, a hundred times. Then give all the data sets, real and simulated, to the data analysts without telling them which is which. The analysts must use the same analysis on every data set. How much do you want to bet that they find correlations in nearly every data set?
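A minimal sketch of the scheme, with everything hypothetical: the "real" data, the decoy count, and a stand-in t-test for whatever analysis psi researchers actually run:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_fakes = 100, 99

# One "real" data set plus 99 decoys from a random number generator.
real = rng.normal(50.5, 10, size=n_subjects)
datasets = [rng.normal(50, 10, size=n_subjects) for _ in range(n_fakes)]
datasets.append(real)

# Shuffle so the analysts never learn which data set is the real one.
order = rng.permutation(len(datasets))
datasets = [datasets[i] for i in order]

def analysis(scores):
    """The one pre-committed analysis, applied uniformly to every data set."""
    return stats.ttest_1samp(scores, 50).pvalue

hits = sum(analysis(d) < 0.05 for d in datasets)
print(f"{hits} of {len(datasets)} blinded data sets 'show an effect'")
# If the analysis is honest, roughly 5% of the decoys should light up.
# If nearly all of them do, the "effect" lives in the analysis, not the data.
```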
(via Freakonomics)