Dear reader, that was one of those times when my tongue was in my cheek. I can't use that as a serious argument against quantum mysticism, because I just know that quantum mystics will bite the bullet and agree that they really can violate causality.
I'm amused to see a new study doing just that. The study shows that "priming" (subliminally showing a word or concept) affects people even if they are primed after being tested. The author takes this as evidence of psi, and goes on to talk quantum nonsense.
"Those who follow contemporary developments in modern physics ... will be aware that several features of quantum phenomena are themselves incompatible with our everyday conception of physical reality."

Gee, I do follow modern physics, and I'm pretty sure that it's incompatible with retroactive psi effects. Whatever the explanation for the results, this is not it.
For us lay skeptics, the appropriate thing to do here is to stop and await replication. I'm probably not qualified to spot any errors in the study's methodology (and there's no guarantee that they wrote the report in such a way that it's even possible to spot the errors).
But from the comments, I found that part of the study has already been replicated--with negative results. And someone went through the study itself and found serious flaws in its statistical analysis. Among other things,* the study ignores the distinction between exploratory and confirmatory research. Exploratory research tries out many hypotheses to see if any of them might be interesting; confirmatory research tests one specific hypothesis to see if it pans out. The authors appear to have done exploratory research but failed to be upfront about it. That is, they tested so many different hypotheses that a random data set was bound to confirm at least one of them. They were data snooping; the sketch after the footnote shows how easily that manufactures positive results.
*The study also ignores prior probabilities, and the positive results disappear under the more rigorous Bayesian t-test.
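To see why untracked multiple hypotheses nearly guarantee a "hit," here's a minimal simulation (the sample size and hypothesis count are my own hypothetical numbers, not anything from the actual study). With twenty independent tests on pure noise at the conventional p < 0.05 threshold, the chance of at least one "significant" result is 1 - 0.95^20, or about 64%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 100    # hypothetical sample size
n_hypotheses = 20   # hypothetical number of hypotheses tested

# Pure noise: no real effect exists for any hypothesis.
data = rng.normal(size=(n_hypotheses, n_subjects))

# Apply the same one-sample t-test (mean = 0) to every hypothesis.
p_values = [stats.ttest_1samp(row, 0).pvalue for row in data]

hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {n_hypotheses} null hypotheses 'confirmed' at p < 0.05")
# With 20 independent tests, P(at least one hit) = 1 - 0.95**20 ≈ 0.64.

Report only the hypotheses that hit, stay quiet about the rest, and a null data set looks like a discovery.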
This study reminds me of my brief experience with LIGO. The LIGO collaboration has some pretty zany schemes to prevent bias in data analysis. Is it too much to ask that psi researchers do the same?
I just thought up a simple scheme they could use! First, they should simulate every study with a random number generator. Repeat like a hundred times. Then, give all the data sets to the data analysts without telling them which is the real data set. Data analysts must use the same analysis on all data sets. How much do you want to bet that they find correlations in nearly every data set?
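Here's a minimal sketch of how that scheme could work (the function name blind_bundle and all the numbers are my own inventions for illustration, not anything psi researchers or LIGO actually use):

import numpy as np

rng = np.random.default_rng(42)

def blind_bundle(real_data, n_fakes=100):
    """Hide the real data set among decoys drawn from a random number
    generator. Returns the bundled data sets and the secret index of
    the real one, to be kept sealed until the analysis is frozen."""
    datasets = [rng.normal(size=real_data.shape) for _ in range(n_fakes)]
    secret_index = int(rng.integers(0, n_fakes + 1))
    datasets.insert(secret_index, real_data)
    return datasets, secret_index

# Hypothetical "real" study data (noise here, just for the demo).
real = np.random.default_rng(7).normal(size=500)
bundle, secret = blind_bundle(real)

# Analysts commit to ONE analysis and run it on every data set alike;
# here, a simple one-sample t statistic against a mean of zero.
t_stats = [d.mean() / (d.std(ddof=1) / np.sqrt(d.size)) for d in bundle]

# Only after the results are frozen is `secret` unsealed, revealing
# whether the real data actually stood out from the decoys.

If the analysts report "significant" effects in most of the hundred noise-only data sets, you've learned that the analysis, not the phenomenon, is producing the results.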
(via freakonomics)
5 comments:
It doesn't necessarily violate causality; it just involves reverse causality.
Of course, I set the priors for reverse causality low and it would take some pretty compelling evidence to convince me, but I don't see why we should cross it off any more than Newton should have crossed off action at a distance.
Yes, indeed. The hidden premise is that reverse causality violates causality. Though come to think of it, I'm not sure what the expression "violate causality" means. Hmmm...
Cutting out the vague language in the middle, reverse causality implies that we need a way to resolve the grandfather paradox. That's an experiment I would really like to see: creating a grandfather paradox and watching how it resolves.
Come to think of it, I'm not sure I know what "violates causality" means either. I definitely like this grandfather paradox experiment concept though. Perhaps we could use the reverse causality psi powers to create one?
"The distinction between exploratory and confirmatory research" - the problems in the former is called the Texas Sharpshooter fallacy. I've been meaning to write about that fallacy, will link here if I do. ;)
I think of it as data snooping or data fishing, but I suppose you could think of it as the Texas Sharpshooter fallacy too.