Saturday, March 7, 2015
I just got back from March Meeting, one of the world's largest physics conferences. I thought I'd share one talk which used relatively basic physics to explain a surprising phenomenon:
Summary for people who don't watch videos: You have a long chain of metal beads in a beaker, and you let the chain fall out. Rather than slithering over the edge of the beaker, the chain jumps out of the beaker. It turns out the explanation has to do with the shape of the beads; when the chain is pulled out, geometry requires that the beaker gives the beads an extra kick.
I think it could be a neat science fair experiment for someone to design a chain which produces a larger fountain. The chain needs to be made of long beads, whose mass is concentrated in the middle. I'm not sure what you would make the beads out of.
Tuesday, January 6, 2015
Wind and fire
In many ancient traditions, the world is made up of a small number of elements, including water, earth, fire, and air. But since I'm more of a gamer than a classicist, I see the classical elements as a motif that gets way overused in fantasy video games. You know, you have your fire spells, which engulf things in flame, and your wind spells which create miniature hurricanes, and so on.
The classical elements are clearly a timeless idea. But the thing is, we know that the underlying mechanisms behind wind and fire are pretty similar. Both air pressure and air temperature arise from the random motion of molecules in a gas. Why, then, is there such an obvious distinction between getting burnt and getting blown away?
This question was asked by a reader. I also wrote something about temperature and pressure back in 2010, which was a bit more technical.
Defining terms
Temperature is basically "the willingness to transfer energy". That is to say, a hot object will transfer heat (energy) to a cold object, and not vice versa. The exact rate of transfer is influenced by many factors, but temperatures control the direction of transfer.
Temperature is distinct from thermal energy, which is the amount of energy in the random small-scale motion of a material. In general you can have objects with more thermal energy but lower temperature. For example, a cold glass of water has more thermal energy than a lit match, simply because it's bigger. But this can also be true of equally-sized objects, since it depends on the object's microscopic properties.
Pressure is the force that a gas applies outwards. Analogous to the above definition of temperature, pressure is "the willingness to expand". The pressures we are used to are incredibly strong. At sea level, the pressure is 2,116 pounds per square foot. The reason this force does not push you left and right is that usually the forces from air pressure cancel out exactly in all directions.
We often talk about pressure applying force on surfaces, such as the walls, floor, and ceilings of a room. But the gas also applies force on itself. You can divide a room of gas into a bunch of small units, and each unit of volume applies force on the adjacent units of volume. All the forces cancel out except at the surfaces--the walls, floor, and ceiling.
A room is divided into many units of volume, and the red arrows show the forces applied outwards by each volume of air. The image is mine.
Wind is actually distinct from air pressure. Wind is the large-scale flow of gas, a velocity averaged over all the molecules that make up the gas. Wind is often caused by gradients of air pressure. For example, if there are two nearby regions, one with higher pressure than the other, then the pressure gradient will push air from the high pressure to low pressure.
It seems pretty intuitive that when wind is flowing a certain direction, it will push you in that direction. But the physics behind this is actually horribly complicated. When the air flows around you, it creates a slight pressure difference between one side of you and the other side, and that pushes you in the direction that the wind is going. The details are beyond the scope of this post. Just know that it's correct to say that when you're blown away by wind, it's caused by air pressure, but that wind and air pressure are not the same thing.
Random motion of molecules
Image from Wikipedia. Depicts randomly moving particles in a box, applying force to the walls of the box whenever they collide.
Microscopically speaking, the source of air pressure comes from collisions of molecules. All the molecules of a gas are moving around randomly, and when they collide with any surface they apply some force for a brief instant as they bounce backwards. Averaged over many molecules, the pressure can be considered to be constant over time.
At room temperature, the mean speed of air molecules is about 1800 km/hr. For comparison, this is about as fast as the fastest winds on Saturn. A hurricane with winds over 252 km/hr counts as category five. The speed of sound is about 1,200 km/hr. So there's a big difference between wind, the average velocity of molecules; and pressure, which comes from the random motion of molecules.
As it turns out, in a gas, the temperature--the willingness to transfer heat--is also related to the random motion of molecules. This makes sense, since the faster the molecules move, the more "willing" they are to give up some of that energy to whatever they collide into. Furthermore, the pressure and temperature of a gas are proportional. It doesn't even matter whether the molecules are big or small, although it does depend on the density of molecules.
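To put that proportionality in symbols, kinetic theory gives, for an ideal gas,

$$P = \tfrac{1}{3}\, n\, m\, \langle v^2 \rangle = n\, k_B T,$$

where $n$ is the number density of molecules, $m$ is the molecular mass, and $\langle v^2 \rangle$ is the mean squared speed. At fixed density, pressure rises in proportion to temperature, and the size of the molecules never enters.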
Why won't a fire blow you away?
There are basically two explanations which play a role.
First, the temperature and pressure may be proportional in a gas, but that doesn't mean that the rate of heat transfer is proportional to pressure. Direct collisions are one mechanism of transferring heat from a gas to a surface, but radiation is another important mechanism.
In more detail: Radiation is the transfer of heat through light. For example, light can be created whenever two molecules in the gas collide with each other. This bit of light can travel to a solid and get absorbed as energy. Light also carries a little bit of momentum, but it's not enough to cause significant pressure in this situation. Where did the momentum go? Absent any wind, the average momentum of the gas is zero to begin with, so the momentum doesn't need to go anywhere.
The second explanation is that the density of gas molecules in a fire will simply decrease until its pressure is nearly at equilibrium with everything else. Pressure is only proportional to temperature when the density of molecules is constant.
In more detail: When you create a fire, it does create an increase in pressure. But the pressure quickly comes to an equilibrium with the surroundings, depending on the size and suddenness of the fire. Big wildfires can certainly create big winds. But with a small candle, there only needs to be a slight displacement of the air, and the pressure quickly equilibrates. So you have this hot gas with fast-moving molecules, but there are also fewer molecules. If you touch a flame, the faster, sparser gas will apply the same pressure as normal. But since the molecules are faster, they're more willing to give up some of their energy.
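Here's a back-of-envelope sketch of that trade-off, assuming ideal-gas behavior and a candle flame around 1300 K (the flame temperature is my assumption, just for illustration):

```python
# Compare a ~1300 K flame to 300 K room air at the same 1 atm pressure.
T_room, T_flame = 300.0, 1300.0  # kelvin; the flame temperature is assumed

# At fixed pressure, P = n*k*T means the number density n scales as 1/T.
density_ratio = T_room / T_flame
print(f"the flame has {density_ratio:.2f}x as many molecules per volume")  # ~0.23x

# Mean molecular speeds scale as sqrt(T), so each collision hits harder.
speed_ratio = (T_flame / T_room) ** 0.5
print(f"its molecules move {speed_ratio:.1f}x faster")  # ~2.1x
```

Fewer molecules, each hitting faster: the pressure comes out about the same, but each collision transfers more energy.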
It's not so much that the air molecules apply a lot of force to your finger. It's that they're moving really fast compared to the molecules in your finger, and so when they bounce backwards they lose some of their original speed. That energy is absorbed by your finger, possibly burning it. The rest is chemistry.
Friday, January 2, 2015
Ask me about physics
I'm not sure if I have enough readers to do this sort of thing, but it's worth a try.
Does anyone have any physics questions? Whenever I look at physics blogs, I see lots of questions in the comments, most of which never get answered. And while there are physics FAQs out there, you usually can't just send in any question and have it be answered.
Since I'm only a physics PhD student, I'm willing to answer questions informally, and even offer my subjective opinion. If I have a short answer, I'll put it in the comments. If I have a longer answer I'll write a post.
I study superconductivity, but I also know a lot about cosmology, quantum theory, and quantum interpretations. And when I read popular articles about physics, I tend to have a better understanding than the typical lay person.
Monday, July 28, 2014
Sleeping Beauty and other kinds of multiverses
After I wrote that post about Quantum Sleeping Beauty, Sean Carroll wrote about it too.
At the risk of too much Sleeping Beauty, I also want to discuss the implications not just for Everettian Quantum Mechanics (EQM), but for other multiverse scenarios.
Yes, there are multiple multiverse scenarios. For our purposes, we can use Max Tegmark's four-fold classification of multiverses:
Level I: There are different worlds separated by extremely large distances. For example, if you believe that the universe is uniform and infinite, then obviously the observable universe, which is limited by the speed of light, is much smaller than the universe as a whole. Thus there will be many copies of the observable universe with arbitrary configurations of stuff in them.
Level II: In some theories of cosmology, physical constants are also ultimately made of stuff, albeit the kind of stuff that gets decided very early on in a universe, and which is very stable thereafter. But some inflationary cosmology scenarios predict many pocket universes forming, each with possibly different physical constants.
Level III: This is the multiverse described by EQM.
Level IV: The particular laws which govern our universe are arbitrary, and the equations could have had many other arrangements. A level IV multiverse theory says that universes described by different math are not merely possible, but real.
[blockquote for organization, not because I'm quoting Tegmark]
As I explained in 2012, multiverses are not scientific theories by themselves, since they can't really be experimentally verified. Rather, they are predictions of larger scientific theories. The Level I multiverse is predicted by most theories of uniform cosmology. Level II is predicted by inflationary cosmology. Level III is predicted by quantum mechanics, depending on your interpretation. Level IV... well, I think that's just speculation.
Multiverse scenarios live or die based on the larger theory that predicts them. Nonetheless, some people, even physicists, have suggested that multiverses themselves can be confirmed or disconfirmed. For instance, physicists argue that one of the reasons to prefer inflationary cosmology to other theories is because it predicts a Level II multiverse. A Level II multiverse would explain why physical constants seem to be fine-tuned for the existence of life. Thus, the very fact that we exist is evidence for a Level II multiverse, and thus a confirmation of inflationary cosmology.
What does this all have to do with Sleeping Beauty?
Imagine that God is a philosopher. God flips a coin, and if it's heads, then he creates a multiverse, where there are many copies of Sleeping Beauty. If it's tails, he creates a universe where there are very few copies of Sleeping Beauty (or maybe none at all). Sleeping Beauty knows all this, and has just woken up. What probability should Sleeping Beauty assign to waking up in the multiverse?
If you take the thirder position on Sleeping Beauty, then Sleeping Beauty should conclude that she most likely woke up in a multiverse.* Possibly vastly more likely. Possibly infinitely more likely. In fact, you might ask why we even bother considering any non-multiverse theories. Seen this way, the unimaginable largeness of the multiverse is itself evidence for the multiverse.
*Here I'm talking about Level I, II, or IV multiverse. The case of the level III multiverse is analogous to the Quantum Sleeping Beauty problem discussed in the previous post. That post argued that you can consistently take a thirder position in the classical Sleeping Beauty problem, and a halfer position in the Quantum Sleeping Beauty problem.
But that doesn't seem right. My physicist intuition goes against it. Even if the multiverse explains fine-tuned constants, this is at best very weak evidence for the multiverse. Even physicists who make that argument don't take it to the inevitable conclusion that we are infinitely more likely to live in a multiverse.
Nonetheless, the philosophical consensus is that the thirder position is correct. Mind you, philosophy consensuses are rarely very strong, and maybe it's just wrong in this case. But still, the thirder arguments are pretty solid, and this is a conflict that needs to be resolved.
To summarize, we have three options:
- Physicists are wrong, and multiverse theories are infinitely preferred over their alternatives.
- Philosophers are wrong, and the thirder view is incorrect.
- There is a problem with the analogy between the two cases.
Some light can be shed by adopting the phenomenalist view of the Sleeping Beauty problem, which is a sort of middle ground between the halfers and thirders that everyone can agree with. The phenomenalist observes that the problem becomes trivial when we attach consequences to the probabilities.
Say that Sleeping Beauty (in the original problem) gets a reward every time she guesses the coin flip correctly. Then she should make bets as if she were a thirder, because if the coin is tails, she'll get double the usual payoff. On the other hand, say that Sleeping Beauty only gets a single reward after the experiment if she guessed correctly. In this case, she should make bets as if she were a halfer.
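A minimal simulation makes the two reward schemes concrete (my sketch; the strategy of always guessing tails is just for illustration):

```python
import random

def sleeping_beauty(trials=100_000):
    """Score the strategy 'always guess tails' under both reward schemes."""
    per_awakening_wins = total_awakenings = 0
    per_experiment_wins = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        awakenings = 2 if tails else 1  # tails: Monday and Tuesday; heads: Monday only
        total_awakenings += awakenings
        if tails:
            per_awakening_wins += awakenings  # rewarded at every correct awakening
            per_experiment_wins += 1          # rewarded once per correct experiment
    print("win rate per awakening :", per_awakening_wins / total_awakenings)  # ~2/3
    print("win rate per experiment:", per_experiment_wins / trials)           # ~1/2

sleeping_beauty()
```

Betting on tails wins two-thirds of awakenings (thirder odds) but only half of experiments (halfer odds).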
Now take the Sleeping Beauty multiverse problem. If each copy of Sleeping Beauty gets a reward for guessing correctly, then it seems better to guess that she's in a multiverse. That way, many copies of Sleeping Beauty get rewards. On the other hand, what exactly do we care about? Do we care about the average* reward given to Sleeping Beauty copies? Or do we care about the sum total of the rewards to all copies? If the former, then Sleeping Beauty should make bets as a halfer. If the latter, then Sleeping Beauty should make bets as a thirder.
*Tangentially, there's another complication when we talk about the average Sleeping Beauty in a multiverse with infinitely many copies of her. How do we average over infinite copies? We can't even average over finite volumes of space, because volume isn't even constant in inflationary cosmology. This is known as the measure problem.
But this doesn't seem to be a very satisfying solution. "Is there a multiverse or not?" is not satisfyingly answered by "It depends, do you think average utility or total utility is more important?" Philosophers need to get on this and figure out what's going on.
Categories:
cosmology,
philosophy,
physics
Friday, July 25, 2014
Sleeping beauty and quantum mechanics
My newest favorite philosophical dilemma is the Sleeping Beauty problem. The experiment goes as follows:
1. Sleeping Beauty is put to sleep.
2. We flip a coin.
3. If the coin is heads, then we wake Sleeping Beauty on Monday, and let her go.
4. If the coin is tails, then we wake Sleeping Beauty on Monday. Then, we put her to sleep and cause her to lose all memory of waking up. Then we wake her up on Tuesday, and let her go.
5. Now imagine Sleeping Beauty knows this whole setup, and has just been woken up. What probability should she assign to the claim that the coin was tails?
There are two possible answers. "Thirders" believe that Sleeping Beauty should assign a probability of 1/3 to heads (and thus 2/3 to tails). "Halfers" believe that Sleeping Beauty has gained no new relevant information, and therefore should assign a probability of 1/2 to tails. The thirder answer is most popular among philosophers.
This has deep implications for physics.
There is an argument that Everettian Quantum Mechanics (aka Many Worlds Interpretation, henceforth EQM) requires that you be a halfer. Say that you tell Sleeping Beauty that you will wake her up on Monday and Tuesday, with a memory wipe in between. The argument goes that this is exactly analogous to telling her that you will cause her wavefunction to branch, and in one branch she will wake up on Monday and in the other on Tuesday. In both cases, the Sleeping Beauty on Monday and Sleeping Beauty on Tuesday are both real, and neither has access to the other (either because of the memory wipe or because they are in separate branches).
Therefore, the standard sleeping beauty problem ("Two-Branch-Beauty" on left) is equivalent to the quantum sleeping beauty problem ("Three-Branch-Beauty" on right).
From "Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics".
In this paper, they also interpret the very first coin flip as a quantum measurement which causes the wavefunction to branch.
In the Three-Branch-Beauty problem, EQM straightforwardly says that Sleeping Beauty should assign a probability of 1/2 to being in the "tails" branch. If the Three-Branch-Beauty and Two-Branch-Beauty problems are indeed analogous, then she should also assign a probability of 1/2 in the Two-Branch problem. And therefore, if we accept EQM, we must accept the unpopular halfer solution.
The problem with this argument, I think, is that assigning probabilities in EQM is... not at all straightforward. This is actually the major disadvantage of EQM: it's not clear why we should interpret the branching worlds with probabilities. How can we say one branch is more likely than another, if both branches are equally real?
The paper, "Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics" purports to answer this question (also see Sean Carroll's blog about it). When we assign probabilities to branches, these are interpreted as "self-locating" probabilities. That is, even if all branches are real, we still have to ask the question, which branch am I in now? You might naively take the principle of "indifference", arguing that all branches are equally likely. But that gets you the wrong probabilities.
The paper argues for a different principle, the "Epistemic Separability Principle", the idea that Sleeping Beauty assigns probabilities on the basis of only what she observes. So if there is a second measurement device that Sleeping Beauty does not look at, then the probability she assigns to the first measurement should be independent of the result of the second measurement. There's a simple argument on page 4 which shows that if there are two branches of the wavefunction with equal amplitude, then we should assign equal probabilities to each branch.
So let's say we have the Three-Branch-Beauty experiment, and Sleeping Beauty has just woken up. By the Epistemic Separability Principle, she should assign equal probability to waking up in the first branch, and waking up in the second branch, because those two branches have equal amplitude. The third branch does not have equal amplitude, and based on more mathematical arguments, you can show that the third branch is twice as likely as the other two. Therefore, she would assign probabilities 1/4, 1/4, and 1/2 to the three locations.
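As a sanity check on those numbers, here's the branch bookkeeping in a few lines (my toy sketch; the amplitudes are the ones implied above: a quantum coin sending amplitude 1/√2 to heads, with the tails branch splitting into two sub-branches of amplitude 1/2 each):

```python
from math import sqrt

# Assumed branch amplitudes for Three-Branch-Beauty (a toy model consistent
# with the text, not taken verbatim from the paper)
amplitudes = {
    "tails-Monday": 0.5,
    "tails-Tuesday": 0.5,
    "heads-Monday": 1 / sqrt(2),
}

# Born-rule self-locating probabilities are |amplitude|^2
probabilities = {branch: a**2 for branch, a in amplitudes.items()}
print(probabilities)                # tails-Monday: 0.25, tails-Tuesday: 0.25, heads-Monday: 0.5
print(sum(probabilities.values()))  # 1.0
```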
Let's say we have the Two-Branch-Beauty experiment, and Sleeping Beauty has just woken up. By the Epistemic Separability Principle, she should assign equal probability between waking up on Monday in the first branch, and waking up on Monday in the second branch, since those two branches have equal amplitudes. Likewise, she should assign equal probability to waking up on Tuesday in the first branch, and waking up on Monday in the second branch. Therefore, she should assign probabilities of 1/3, 1/3, and 1/3 to the three locations. This recovers the popular thirder answer without giving up EQM.
In another post I will have further comments on Sleeping Beauty. (Update: it's written)
Categories:
philosophy,
physics,
quantum mechanics
Tuesday, June 17, 2014
More literal spaghetti
Yesterday, I wrote about the spaghetti metaphor for Everettian Quantum Mechanics (aka Many Worlds). Unfortunately, the metaphor breaks down because different worlds in quantum mechanics can constructively and destructively interfere with each other on a microscopic level, although we generally don't need to worry about this on a macroscopic level.
But there is another interpretation of quantum mechanics for which the spaghetti metaphor is more exact. I'm speaking of the Bohmian interpretation of quantum mechanics.
The Bohmian interpretation is usually the go-to example for how we can have a deterministic theory of quantum mechanics. In Bohmian theory, every particle has a well-defined trajectory, and only occupies one position at any given time. The fact that we can't predict exactly where the particle will be just has to do with the fact that we do not (and cannot) know with certainty the particle's initial position. Bohmian theory makes all the same predictions as the other major interpretations of quantum mechanics. But this comes at a cost: faster-than-light information transfer.
I'm largely a Many Worlds partisan, but I give the Bohmian interpretation credit, because it basically starts with the Many Worlds interpretation. In Bohmian theory there is no wavefunction collapse. Instead, the wavefunction simply evolves according to a single equation, and splits into many worlds just like in the Many Worlds interpretation.
The difference is that the wavefunction is not interpreted as a description of reality (and therefore there aren't really many worlds). While the wavefunction is a real object, it is seen as distinct from all the particles we see around us. The wavefunction is interpreted as a pilot wave which merely guides the motion of particles.* All particles have a definite position, and we just need this complicated pilot wave object to determine their motion.
*For those who have studied quantum mechanics, it's actually quite simple to understand. Even in standard quantum mechanics, we speak of the probability current. We can obtain the probability "velocity" by dividing the probability current by the probability density. Bohmian theory interprets this literally, by having the particle velocity equal the probability velocity.
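In symbols, using the standard textbook definitions for a single particle of mass $m$:

$$\mathbf{j} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\psi^{*}\nabla\psi\right), \qquad \rho = |\psi|^{2}, \qquad \mathbf{v} = \frac{\mathbf{j}}{\rho} = \frac{\nabla S}{m} \quad \text{for } \psi = R\,e^{iS/\hbar}.$$

The last equality is the Bohmian guidance equation: particles simply ride along the local probability flow.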
To relate this to the spaghetti metaphor, let me consider the classic double slit experiment. We send light through two slits, and the waves coming from the two slits interfere with each other.
Waves of light go through two slits, located at the bottom, and travel upwards. (Technical details: I'm just showing the real part of the wavefunction, with blue positive, red negative, and green zero.)
At some points, the waves interfere constructively, and at other points they interfere destructively. This creates alternating dark and light fringes. In more ordinary quantum interpretations, this is because the wavefunction of the light determines the probability that the light is in any particular location.
The colors here show the probability that light is in any particular location.
But in the Bohmian interpretation, we do not interpret the wavefunction as probability. Instead, we interpret it as a pilot wave. The light follows a well-defined trajectory; it's just hard to predict which particular trajectory it's on.
Here I show many possible trajectories for the light, as calculated using Bohmian theory. As you can see, particles don't respect conservation of momentum, and even though light only goes through one slit, it is obviously affected by the presence of the other slit. (You can find lots of similar images on the net, but this whole post is really an excuse for me to write a Bohmian calculator, so I'm giving you my image.)
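For the curious, here is roughly what such a calculation boils down to in one dimension (a toy sketch, not the code behind the figure above; the packet width, slit spacing, and crude Euler step are arbitrary choices):

```python
import numpy as np

HBAR = M = 1.0  # natural units

def psi(x, t, x0=1.0, s=0.2):
    """Superposition of two spreading free Gaussian packets at +-x0 (toy 'slits')."""
    st = s * (1 + 1j * HBAR * t / (2 * M * s**2))  # complex width of a free Gaussian
    def packet(center):
        return np.exp(-(x - center) ** 2 / (4 * s * st)) / np.sqrt(st)
    return packet(+x0) + packet(-x0)

def velocity(x, t, dx=1e-5):
    """Bohmian guidance equation: v = (hbar/m) * Im(psi'/psi)."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (HBAR / M) * np.imag(dpsi / psi(x, t))

# Euler-integrate a fan of trajectories; they bunch into interference fringes.
# (Crude: near nodes of psi, a real calculator would need an adaptive step.)
x = np.linspace(-2.0, 2.0, 21)  # initial positions spread across both packets
dt = 0.01
for t in np.arange(0.0, 5.0, dt):
    x = x + velocity(x, t) * dt
print(np.round(x, 2))
```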
Perhaps now you can see how this follows the spaghetti metaphor. Each possible trajectory for the light is a single strand of spaghetti. When we speak of probabilities in quantum mechanics, we're really talking about our degree of belief that we are in any particular strand of spaghetti.
The difference is that in the Bohmian interpretation, there is only one strand of spaghetti, the one that includes us. And in my Literal Spaghetti interpretation, there are many strands of spaghetti, of which we are just one.
Categories:
experiments,
physics,
quantum mechanics
Monday, June 16, 2014
Convergent spaghetti worlds
3:AM Magazine had an interview with Alastair Wilson, a philosopher who thinks about Everettian Quantum Mechanics (also known as the Many Worlds Interpretation, henceforth referred to as EQM). It's nice to know that some philosophers are thinking seriously about it, since physicists generally aren't trained or paid to do so. Wilson plays to the strengths of his academic discipline by identifying the most interesting questions of Many Worlds. I'd like to discuss a few of them.
Branching or parallel worlds?
Wilson says:
In informal or popular discussions, people use two metaphors pretty much interchangeably to describe the Everettian multiverse: branching worlds and parallel worlds. Of course, these two metaphors are in tension: the former suggests mereological overlap of worlds, like a branching tree, whereas the latter suggests mereological distinctness of worlds, like a packet of spaghetti.

Wilson goes on to note (correctly) that the many worlds of EQM are an emergent macroscopic structure, arising from the much more complex fundamental structure of quantum physics. Therefore, the question of a branching tree vs a packet of spaghetti is really a question of which is more useful.
Wilson prefers the spaghetti metaphor because it partially resolves the "probability problem" of EQM: if all of the many worlds exist, then how does it make sense to assign probabilities to each of the many worlds? In the spaghetti metaphor, when we speak of probabilities, the probabilities represent our belief that we are in any particular noodle.
I am in agreement with Wilson, but it's worth poking at that answer. Let's consider the simplest kind of world-splitting. No, not the double slit experiment, even simpler! Consider a beam splitter.
A beam splitter takes a beam of light, and reflects half of it while transmitting the other half. It's a rather standard optic, and we have several of them on the laser table in our lab.
But suppose that instead of a beam of light, we had a single photon of light. Even though there's only one photon, this single photon will still split paths, becoming a superposition of path 1 and path 2. We've generated two distinct worlds, one where the photon is on path 1, and one where it's on path 2.
But these worlds are not very different from one another. Put another way, they are very close to each other in parameter space--all their parameters are exactly the same, except for the one parameter describing the location of the photon. Because the worlds are very close to each other, it is possible for them to interact, and experimentally feasible to make them interact in a controlled manner. In the EQM picture, we would say that the worlds have not yet diverged. Note that the distinction between divergent worlds and not-yet-divergent worlds is an emergent distinction rather than a fundamental one, since it partly has to do with what is experimentally feasible.
The beamsplitter in reverse
Yes, it's entirely possible to join these two worlds together again, to perform the beamsplitter experiment in reverse. Simply place mirrors on the two beam paths. This is a very common setup, called an interferometer (used in many experiments, but best known for the one that led to Relativity theory).
It's called an interferometer because the light can actually end up on two paths: path A or path B. Whether it ends up on path A or B depends on whether the waves of light are in sync or out of sync, which is to say, whether they interfere constructively or destructively. Let's say that they interfere in such a way that they only go down path B.
This presents a problem for the metaphor of the packet of spaghetti. Let's treat path 1 as a packet of spaghetti, and path 2 as another packet of spaghetti. If we have just path 1 on its own, half of its spaghetti goes down path A, and half of it goes down path B. And if we have just path 2 on its own, the same thing happens. And yet, if we have the photon go down path 1 and path 2 simultaneously, all of the spaghetti ends up on path B.
There is spaghetti from 1 that goes down A, and spaghetti from 2 that goes down A, but rather than adding the quantities of noodles in A, we subtract them. Here the metaphor of spaghetti just breaks down, since there's no reason you would ever subtract quantities of spaghetti if we took the metaphor seriously.
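In amplitude language the subtraction is perfectly ordinary (a sketch with one assumed phase convention for the 50/50 beamsplitter):

$$a_A = \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} = 0, \qquad a_B = \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} = 1,$$

and probabilities are squared amplitudes: $P(A) = 0$ and $P(B) = 1$, whereas either path alone would give $1/2$ for each output. You never subtract quantities of spaghetti, but you do subtract amplitudes.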
But we had already admitted that the spaghetti was just a metaphor for an emergent macroscopic structure of EQM, so it's unsurprising to see the metaphor break down on the microscopic level. It's just good to keep in mind how exactly it happens, and maintain skepticism about our metaphors.
Why divergence?
Now that I've illustrated how in EQM worlds can converge as well as diverge, we can ask why divergence is so much more common than convergence. Ultimately, it has to do with the Second Law of thermodynamics, and the increase of entropy over time.
If you have two strands of spaghetti, there are many ways for them to be apart from each other, and only a few ways for them to be stuck to each other. That is to say, pulling strands of spaghetti apart is associated with an increase in entropy. Thus, strands are more likely to diverge than converge.
But there's also an issue of dimensionality. Imagine that the spaghetti strands are close together in parameter space, in that they only differ by one parameter: the location of a single photon. As long as this is the only parameter which distinguishes the strands of spaghetti, it's as if they're trapped in a low-dimensional space. They're like cars, trapped on a 2d surface, prone to crashing into one another if we're not careful. But once the worlds differ by more parameters, we add more dimensions. It's much harder for two airplanes to collide into each other than cars, because they live in a 3-dimensional space.
However, not impossible
Now imagine that the two spaghetti strands differ by billions of billions of parameters. This can easily happen, if for instance the photon on path 1 hits a screen, a screen which is composed of about 10^23 electrons and nuclei. So now we're talking about airplanes not in 3-dimensional space, but airplanes in 10^23-dimensional space. It's not surprising if they hardly ever collide.
And that's why the many worlds metaphor works. If macroscopic worlds converged as often as they diverged, then we'd constantly face this problem of interfering noodles. But convergence only tends to occur on the microscopic levels.
Update: I have a followup post talking about the Bohmian interpretation
Categories:
philosophy,
physics,
quantum mechanics
Tuesday, February 11, 2014
Quantum Mechanics for skeptics, redux
When I was an undergrad, I gave an informal talk on quantum mechanics for my friends in the skeptical student group. Unfortunately, the website hosting the slides went defunct, so it's no longer available. I offered to give a similar talk to the atheist student group at my current university. So I'm redesigning it as a chalk talk. It will probably be a bit more serious this time, since I'm not an undergrad, but it's still very informal.
To organize the talk, it helps me to write a blog post along the same lines. So that's what you're getting. Apologies if it's a bit rough.
---------------------------------------
Quantum Mechanics for Skeptics
I. Introduction
(These quotes will be handed out on cards)
I think I can safely say that no one understands quantum mechanics.
-Richard Feynman

The physical world, including our bodies, is a response of the observer. We create our bodies as we create the experience of our world.
-Deepak Chopra

The physical process of making a measurement has a very profound effect.
-David Albert

We're all connected by an energy field. We swim in a sea of light, basically, which is the zero point field.
-Lynne McTaggart

Light and matter are both single entities, and the apparent duality arises in the limitations of our language.
-Werner Heisenberg

I wake up in the morning and I consciously create my day the way I want it to happen... and out of nowhere little things happen that are so unexplainable, I know that they are the process or the result of my creation.
-Joe Dispenza

There's all sorts of universes sitting on top of each other, and they're splitting apart and differentiating as time moves on.
-Sean Carroll

A shift in quantum state brings a parallel lifetime. The relationship to you and your environment is lifted... You are now in a parallel existence.
-Ramtha, channeled by J.Z. Knight

Some of these quotes are from physicists, and some are nonsense. I do not intend for them to be difficult to distinguish. Most of the nonsense comes from people interviewed in the documentary What the Bleep Do We Know?
Richard Feynman of course was a famous physicist. But despite what he said, it is clear that some people understand quantum mechanics better than others. Now, most of you don't study physics (are there any physics majors in the audience?), so you probably don't understand quantum mechanics. The question is, how can you tell the nonsense from the science? Can you do it without deferring to an expert?
---------------------------------------
II. A quick overview of quantum mechanics
A. The context of quantum mechanics in physics
First I need to give some context. I am not a quantum physicist. The fact is that quantum physics is very well established, and isn't a topic of cutting edge research. Almost every physicist uses quantum physics as the framework to study something else. I'm a condensed matter physicist; I apply quantum theory to extremely large numbers of atoms. The fundamental rules of the game are well understood; it's scaling them up that's hard.
But yes, there are some things about quantum theory that are not well understood.

But quantum gravity isn't really relevant to this talk. Everything here is well understood. In fact I'll stick mostly to quantum mechanics.
B. The wavefunction and measurement
Quantum mechanics describes matter as made of things that are like particles and also like waves. Take for instance the electrons in atoms. The electrons can be in many possible states which we describe with a "wavefunction". This is usually represented with pictures of orbitals around the nucleus of the atom. But it's actually just some mathematical function, which I'll plot as a function of position.

The first consequence is that the possible energies of an electron are discrete. The energy of the electron is roughly related to the number of times this wave wiggles up and down. It needs to wiggle up and down an integer number of times, so there are discrete energy levels. In particular, there's a lowest energy level, which is a good thing. Otherwise the electrons would collapse into lower and lower energies, causing all atoms to implode.
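The cleanest toy version of this counting argument is the particle in a box (not a real atom, but the same idea): the wavefunction must fit an integer number of half-wavelengths into a box of width $L$, which gives

$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots$$

so the levels are discrete and there is a lowest one, $E_1 > 0$.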
There's the question of what this wavefunction actually represents. Well, say that you had an ultra-precise way of measuring the position of the electron. The probability of finding the electron in any position is equal to the square of the wavefunction's magnitude. So even if you prepare lots of electrons in the same way, you can never predict exactly where they are.
And here's where it gets weirder. Say that you make two measurements, one right after the other. The second measurement will agree with the first. So even though the position was uncertain to begin with, by measuring it you make its position certain. One way to describe this is by saying that the wavefunction has changed after measuring it. This is referred to as wavefunction collapse.

C. Quantum uncertainty vs classical uncertainty
The picture I've just drawn sounds a little bit like we just don't know where the electron is, and after we measure it, we know where it is. I call this "classical uncertainty". But the uncertainty in quantum mechanics is different.
For instance, let's say that we don't know whether an electron is in a 1s state or a 2p state. But it's not just that we don't know in the classical sense, let's say we don't know in the quantum sense. In quantum mechanics, you represent this by adding the wavefunctions together. Now we can take two kinds of measurements of this system. If you try to measure the energy, sometimes you'll get the 1s energy and sometimes you'll get the 2p energy. Then the electron will collapse into the 1s or 2p state according to what you measured.

But suppose that we instead measure the position of the electron. We would mostly see the electron on the left side here and not on the right side. Now if the electron were really in the 1s state, we'd see it on the right and left sides equally. And if it were in the 2p state, we'd see it in the right and left sides equally. But it's not merely that we don't know whether it's in 1s or 2p, it's that in some sense it's in both states.
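In symbols, taking an equal superposition and real wavefunctions for simplicity:

$$|\psi(x)|^2 = \frac{1}{2}\,\psi_{1s}(x)^2 + \frac{1}{2}\,\psi_{2p}(x)^2 + \psi_{1s}(x)\,\psi_{2p}(x).$$

Classical ignorance would give only the first two terms. The cross term is the quantum part: since $\psi_{2p}$ changes sign from one side of the atom to the other while $\psi_{1s}$ doesn't, the cross term piles probability onto one side.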
This, by the way, is entirely a thought experiment. Practically speaking, we wouldn't be able to control whether the electron was in a 1s + 2p state or a 1s - 2p state. If it's 1s + 2p, the electron would appear on the left, and if it's 1s - 2p, it would appear on the right. Since we don't know which one it is, we're back to the situation of classical uncertainty rather than quantum uncertainty. But there are other experiments that really do demonstrate that quantum uncertainty is special.
D. Entanglement
One of the strange consequences of quantum mechanics is that you can have correlations between particles, even if those particles are far away from each other. For instance, there's a way to prepare two photons such that they have the same polarization, even though we don't know the polarization of either individual photon. Again, it's not that we are ignorant of the polarization, it's that it's actually in a superposition of vertical/vertical and horizontal/horizontal
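In symbols, the two-photon polarization state is

$$|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|V\rangle|V\rangle + |H\rangle|H\rangle\right),$$

where $V$ and $H$ are vertical and horizontal polarization. Measuring either photon gives $V$ or $H$ with probability 1/2, but the two outcomes always agree.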

Sometimes people use entanglement to argue that if we think positive, positive things will come to us by the law of entanglement. But generally, if far-apart particles are correlated at all, there's no particular reason they would be correlated vs anticorrelated. If the particles are interacting with a random environment, they would switch between correlation and anticorrelation such that, effectively, there's no correlation at all.
---------------------------------------
III. How to recognize quantum nonsense
A. Vocabulary
Quantum nonsense uses lots of complicated terminology in order to confuse people. People also feel afraid to challenge it because maybe they just don't understand. The problem is that real science also uses lots of terminology, and if you're not an expert in the field, you may not be able to tell the difference.
It's difficult to make a rule of thumb to tell the difference, but here's what I propose: Look at who the intended audience is. If scientists are talking to other scientists, they need terminology in order to communicate precisely. If a scientist is speaking to the public, they may use terminology because they're not really sure how to say it in plain language. But plain language would be ideal.
In contrast, pseudoscientists are almost always talking to the public, and use scientific terminology intentionally. It's not that they don't know clearer ways of speaking, they actually want you not to understand.
B. Ignoring scale
In What the Bleep do We Know? there's a clip where they show a basketball bouncing in many places on a court. Then the basketball player looks at it, and it's only in one place. This is an okay illustration of quantum mechanics, but they neglected to explain how this only occurs on very small scales.
The appropriate scale is the atomic scale. When you have electrons in an atom, you don't know where the electron is, but there's an extremely high probability that it's not very far from the nucleus. The quantum uncertainty of a basketball is no larger than that scale (far smaller, really, since it's a much heavier object).
In fact the picture is very much complicated by a system which is made by more than a few particles. As I said earlier, in my research I apply quantum physics to very large numbers of particles, such as what you would find in a grain of dust. Quantum physics has a big impact (for one thing, the atoms aren't imploding), but large objects do not behave like small ones. Unless the system is really cold (ie at the very limits of our cooling technology), there's too much randomness. This randomness turns quantum uncertainty into classical uncertainty.
C. Observers
My favorite part of What the Bleep was the following argument. In order to cause wavefunction collapse we need conscious observers. Human cells can cause wavefunction collapse. Therefore, human cells are conscious beings. What follows is a computer-animated segment with anthropomorphic human cells. And when you think negative thoughts, the human cells have decadent parties and destroy your health. Long story short, you should throw out your medication and just think positive.
But seriously, there's nothing in quantum mechanics that requires conscious observers. Really what you need is some large complicated system, such as a grain of dust which introduces randomness. This makes a quantum system behave classically, and that's what wavefunction collapse is, more or less. Quantum mechanics doesn't say you're special (although you're free to think you're special anyway).
D. Quantum Interpretations
Now there are a few different interpretations of quantum mechanics, as to what it all really means at the bottom of it. The most popular interpretations are the Many Worlds Interpretation and the Copenhagen interpretation.
The Copenhagen interpretation is more or less what I've already described. There's a quantum system which follows certain rules. And you can measure or observe the system, which causes the system to change. In the Many Worlds interpretation, there's nothing fundamentally different about the observation process. The system just interacts with a measurement device, and becomes a superposition of two states. These two states don't really interact and for all intents and purposes are independently evolving worlds.

These two interpretations are equivalent to each other, at least experimentally. There is no experiment that can be performed to distinguish between these two. So if someone says something that makes sense in one interpretation, but totally contradicts the experimental predictions of the other interpretation, then it's probably nonsense.
For example, when someone says that quantum mechanics requires conscious observers, you know that's wrong because there are no observers whatsoever in the Many Worlds Interpretation. When someone says that you interact with the parallel worlds, you know that's nonsense because in the Copenhagen interpretation there are no parallel worlds to interact with.
---------------------------------------
IV. Conclusion
Quantum Mechanics is a little strange. Quantum uncertainty is fundamentally different from what we usually think of as uncertainty. We can have correlations between far away particles. But it does not make conscious observers special, and nobody "chooses" reality. I hope this helps you to distinguish quantum science and quantum nonsense. But if not, you can ask an expert. I can take questions now.
To organize the talk, it helps me to write a blog post along the same lines. So that's what you're getting. Apologies if it's a bit rough.
---------------------------------------
Quantum Mechanics for Skeptics
I. Introduction
(These quotes will be handed out on cards)
I think I can safely say that no one understands quantum mechanics.Some of these quotes are from physicists, and some are nonsense. I do not intend for them to be difficult to distinguish. Most of the nonsense comes from people interviewed in the documentary What the Bleep Do We Know?
-Richard Feynman
The physical world, including our bodies, is a response of the observer. We create our bodies as we create the experience of our world.
-Deepak Chopra
The physical process of making a measurement has a very profound effect.
-David Albert
We're all connected by an energy field. We swim in a sea of light, basically, which is the zero point field.
-Lynn Mc Taggart
Light and matter are both single entities, and the apparent duality arises in the limitations of our language.
-Werner Heisenberg
I wake up in the morning and I consciously create my day the way I want it to happen... and out of nowhere little things happen that are so unexplainable, I know that they are the process or the result of my creation.
-Joe Dispenza
There's all sorts of universes sitting on top of each other, and they're splitting apart and differentiating as time moves on.
-Sean Carroll
A shift in quantum state brings a parallel lifetime. The relationship to you and your environment is lifted... You are now in a parallel existence.
-Ramtha, channeled by J.Z. Knight
Richard Feynman of course was a famous physicist. But despite what he said, it is clear that some people understand quantum mechanics better than others. Now, most of you don't study physics (are there any physics majors in the audience?), so you probably don't understand quantum mechanics. The question is, how can you tell the nonsense from the science? Can you do it without deferring to an expert?
---------------------------------------
II. A quick overview of quantum mechanics
A. The context of quantum mechanics in physics
First I need to give some context. I am not a quantum physicist. The fact is that quantum physics is very well established, and isn't a topic of cutting-edge research. Almost every physicist uses quantum physics as the framework to study something else. I'm a condensed matter physicist; I apply quantum theory to extremely large numbers of atoms. The fundamental rules of the game are well understood; it's scaling them up that's hard.
But yes, there are some things about quantum theory that are not well understood. The standard example is quantum gravity: nobody has yet worked out how to fit quantum theory together with general relativity.

But quantum gravity isn't really relevant to this talk. Everything here is well understood. In fact I'll stick mostly to quantum mechanics.
B. The wavefunction and measurement
Quantum mechanics describes matter as made of things that are like particles and also like waves. Take for instance the electrons in atoms. The electrons can be in many possible states which we describe with a "wavefunction". This is usually represented with pictures of orbitals around the nucleus of the atom. But it's actually just some mathematical function, which I'll plot as a function of position.

The first consequence is that the possible energies of an electron are discrete. The energy of the electron is roughly related to the number of times this wave wiggles up and down. It needs to wiggle up and down an integer number of times, so there are discrete energy levels. In particular, there's a lowest energy level, which is a good thing. Otherwise the electrons would collapse into lower and lower energies, causing all atoms to implode.
There's the question of what this wavefunction actually represents. Well, say that you had an ultra-precise way of measuring the position of the electron. The probability of finding the electron at any position is equal to the square of the wavefunction. So even if you prepare lots of electrons in the same way, you can never predict exactly where they are.
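If you like code, here's a minimal Python sketch of that last point. The Gaussian is just a toy stand-in for a real orbital's wavefunction; only the logic matters.

    import numpy as np

    x = np.linspace(-5, 5, 1001)   # a 1D grid of possible positions
    psi = np.exp(-x**2 / 2)        # toy wavefunction, shaped like a ground state
    prob = psi**2                  # probability ~ square of the wavefunction
    prob /= prob.sum()             # normalize so the probabilities add up to 1

    rng = np.random.default_rng(0)
    # Five electrons prepared identically, five different measured positions:
    print(rng.choice(x, size=5, p=prob))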
And here's where it gets weirder. Say that you make two measurements, one right after the other. The second measurement will agree with the first. So even though the position was uncertain to begin with, by measuring it you make its position certain. One way to describe this is by saying that the wavefunction has changed after measuring it. This is referred to as wavefunction collapse.

C. Quantum uncertainty vs classical uncertainty
The picture I've just drawn sounds a little bit like we just don't know where the electron is. Then we measure it, and afterwards we know where it is. I call this "classical uncertainty". But the uncertainty in quantum mechanics is different.
For instance, let's say that we don't know whether an electron is in a 1s state or a 2p state. But it's not just that we don't know in the classical sense; let's say we don't know in the quantum sense. In quantum mechanics, you represent this by adding the wavefunctions together. Now we can take two kinds of measurements of this system. If you try to measure the energy, sometimes you'll get the 1s energy and sometimes you'll get the 2p energy. Then the electron will collapse into the 1s or 2p state according to what you measured.

But suppose that we instead measure the position of the electron. We would mostly see the electron on the left side here and not on the right side. Now if the electron were really in the 1s state, we'd see it on the right and left sides equally. And if it were in the 2p state, we'd see it on the right and left sides equally. But it's not merely that we don't know whether it's in 1s or 2p, it's that in some sense it's in both states.
This, by the way, is entirely a thought experiment. Practically speaking, we wouldn't be able to control whether the electron was in a 1s + 2p state or a 1s - 2p state. If it's 1s + 2p, the electron would appear on the left, and if it's 1s - 2p, it would appear on the right. Since we don't know which one it is, we're back to the situation of classical uncertainty rather than quantum uncertainty. But there are other experiments that really do demonstrate that quantum uncertainty is special.
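To make the difference concrete, here's a sketch with toy 1D stand-ins for the two states: a symmetric function playing the role of 1s and an antisymmetric one playing the role of 2p. (Real orbitals are 3D; the logic is the same.)

    import numpy as np

    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]
    psi_s = np.exp(-np.abs(x))          # symmetric "1s-like" state
    psi_p = x * np.exp(-np.abs(x) / 2)  # antisymmetric "2p-like" state

    def density(psi):
        p = psi**2
        return p / (p.sum() * dx)       # normalized probability density

    classical = 0.5 * density(psi_s) + 0.5 * density(psi_p)  # don't know which state
    quantum = density(psi_s + psi_p)                          # it's in both states

    left = x < 0
    print(classical[left].sum() * dx)  # 0.50: each state alone is left/right symmetric
    print(quantum[left].sum() * dx)    # ~0.32: the cross term piles probability on one side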
D. Entanglement
One of the strange consequences of quantum mechanics is that you can have correlations between particles, even if those particles are far away from each other. For instance, there's a way to prepare two photons such that they have the same polarization, even though we don't know the polarization of either individual photon. Again, it's not that we are ignorant of the polarization, it's that it's actually in a superposition of vertical/vertical and horizontal/horizontal.

Sometimes people use entanglement to argue that if we think positive, positive things will come to us by the law of entanglement. But generally, if far-apart particles are correlated at all, there's no particular reason they would be correlated vs anticorrelated. If the particles are interacting with a random environment, they would switch between correlation and anticorrelation such that, effectively, there's no correlation at all.
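Here's a toy simulation of that washing-out. It only tracks the correlation bookkeeping with ordinary bits (real entanglement can't be captured by bits), but it shows why a randomly flipping relationship is as good as no relationship.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    a = rng.integers(0, 2, n)  # polarization of photon A: 0 or 1 at random
    b = a.copy()               # photon B prepared to match photon A
    print(np.corrcoef(a, b)[0, 1])         # 1.0: perfectly correlated

    flip = rng.integers(0, 2, n)           # random environment flips the relationship
    print(np.corrcoef(a, b ^ flip)[0, 1])  # ~0.0: effectively no correlation at all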
---------------------------------------
III. How to recognize quantum nonsense
A. Vocabulary
Quantum nonsense uses lots of complicated terminology in order to confuse people. People also feel afraid to challenge it because maybe they just don't understand. The problem is that real science also uses lots of terminology, and if you're not an expert in the field, you may not be able to tell the difference.
It's difficult to make a rule of thumb to tell the difference, but here's what I propose: Look at who the intended audience is. If scientists are talking to other scientists, they need terminology in order to communicate precisely. If a scientist is speaking to the public, they may use terminology because they're not really sure how to say it in plain language. But plain language would be ideal.
In contrast, pseudoscientists are almost always talking to the public, and use scientific terminology intentionally. It's not that they don't know clearer ways of speaking; they actually want you not to understand.
B. Ignoring scale
In What the Bleep do We Know? there's a clip where they show a basketball bouncing in many places on a court. Then the basketball player looks at it, and it's only in one place. This is an okay illustration of quantum mechanics, but they neglected to explain that this only occurs on very small scales.
The appropriate scale is the atomic scale. When you have electrons in an atom, you don't know where the electron is, but there's an extremely high probability that it's not very far from the nucleus. The quantum uncertainty of the basketball is no larger than that (far smaller, really, since it's a much heavier object).
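You can put rough numbers on this with the uncertainty relation Δx ≥ ħ/(2mΔv). The speed spreads below are made-up but generous choices, just to show the scales involved.

    hbar = 1.054e-34                # Planck's constant over 2*pi, in J*s

    m_e, dv_e = 9.11e-31, 1e6       # an electron with an atomic-scale speed spread (m/s)
    m_b, dv_b = 0.6, 1e-6           # a basketball pinned down to a micron per second

    print(hbar / (2 * m_e * dv_e))  # ~6e-11 m: about the size of an atom
    print(hbar / (2 * m_b * dv_b))  # ~9e-29 m: hopelessly far below anything observable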
In fact the picture gets much more complicated for a system made of more than a few particles. As I said earlier, in my research I apply quantum physics to very large numbers of particles, such as what you would find in a grain of dust. Quantum physics has a big impact (for one thing, the atoms aren't imploding), but large objects do not behave like small ones. Unless the system is really cold (i.e., at the very limits of our cooling technology), there's too much randomness. This randomness turns quantum uncertainty into classical uncertainty.
C. Observers
My favorite part of What the Bleep was the following argument. In order to cause wavefunction collapse we need conscious observers. Human cells can cause wavefunction collapse. Therefore, human cells are conscious beings. What follows is a computer-animated segment with anthropomorphic human cells. And when you think negative thoughts, the human cells have decadent parties and destroy your health. Long story short, you should throw out your medication and just think positive.
But seriously, there's nothing in quantum mechanics that requires conscious observers. Really what you need is some large complicated system, such as a grain of dust which introduces randomness. This makes a quantum system behave classically, and that's what wavefunction collapse is, more or less. Quantum mechanics doesn't say you're special (although you're free to think you're special anyway).
D. Quantum Interpretations
Now there are a few different interpretations of quantum mechanics, as to what it all really means at the bottom of it. The most popular interpretations are the Many Worlds Interpretation and the Copenhagen interpretation.
The Copenhagen interpretation is more or less what I've already described. There's a quantum system which follows certain rules. And you can measure or observe the system, which causes the system to change. In the Many Worlds interpretation, there's nothing fundamentally different about the observation process. The system just interacts with a measurement device, and becomes a superposition of two states. These two states don't really interact and for all intents and purposes are independently evolving worlds.

These two interpretations are equivalent to each other, at least experimentally. There is no experiment that can be performed to distinguish between these two. So if someone says something that makes sense in one interpretation, but totally contradicts the experimental predictions of the other interpretation, then it's probably nonsense.
For example, when someone says that quantum mechanics requires conscious observers, you know that's wrong because there are no observers whatsoever in the Many Worlds Interpretation. When someone says that you interact with the parallel worlds, you know that's nonsense because in the Copenhagen interpretation there are no parallel worlds to interact with.
---------------------------------------
IV. Conclusion
Quantum Mechanics is a little strange. Quantum uncertainty is fundamentally different from what we usually think of as uncertainty. We can have correlations between far away particles. But it does not make conscious observers special, and nobody "chooses" reality. I hope this helps you to distinguish quantum science from quantum nonsense. But if not, you can ask an expert. I can take questions now.
Categories:
bass,
nonsense,
physics,
quantum mechanics
Wednesday, September 11, 2013
The fluidity uncertainty principle
This was cross-posted on The Asexual Agenda.
This is a silly observation only I would ever make.
Every physicist should know that frequency and time are complementary variables. They’re related by an uncertainty principle, similar to the uncertainty principle in quantum mechanics governing position and momentum. The more precisely you know the position of a particle, the less precisely you can know its momentum. The more precisely you know its momentum, the less precisely you can know its position.
Likewise, if you want to precisely know the frequency of something, you have to average over a long period of time. If you want to know how the frequency changes over short timescales, you must accept an inherent uncertainty in the frequency.
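Here's a quick numerical check, using a pure tone at an arbitrarily chosen 12.3 Hz. The measured frequency blur shrinks as the observation window grows, never beating roughly one cycle per window.

    import numpy as np

    def spectral_width(window_seconds, f0=12.3, rate=1000):
        """Spread of the measured spectrum of a pure tone watched for a finite time."""
        t = np.arange(0, window_seconds, 1.0 / rate)
        spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t)))
        df = 1.0 / window_seconds        # the window length sets the frequency resolution
        return (spectrum > spectrum.max() / 2).sum() * df

    for T in (0.1, 1.0, 10.0):
        print(f"watch for {T:4.1f} s -> frequency known to ~{spectral_width(T):.2f} Hz")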
One of the ways in which people are different from each other is in how frequently they’re sexually attracted to other people. However, our sexualities are not always constant throughout our lives. Therefore, you can describe (one aspect of) sexuality with a frequency, and you can say that this frequency changes over time.
But there’s a fundamental limitation to how precisely you can know frequency and time together. If you want to talk about how someone’s frequency of attraction varies from year to year, then it is impossible to pin down this frequency to better than about one attraction per year.
Mostly, this doesn’t matter. If you’re attracted to about ten people a year, then what does it matter if you’re not sure if it’s actually nine or eleven people a year? Who can even count up that high anyway?
However, if you’re very infrequently attracted to people, and experience high fluidity, then we enter what I’m going to call the quantum sexuality regime. Here, the fluidity uncertainty principle reigns.
Categories:
asexuality,
lgbta,
physics,
quantum mechanics
Tuesday, April 2, 2013
A portrait of an "unsolved problem"
I study high-temperature superconductivity (henceforth HTSC). It's one of the largest fields of condensed matter physics, which itself is the largest field of physics. HTSC is one of the outstanding unsolved problems in physics, and unsolved problems attract research. The two big questions are:
- What is the mechanism for HTSC? That is, how does it work?
- Can we find a superconductor that works at even higher temperatures, like room temperature?
So my perspective is very limited. Superconductivity is a vast field, and I occupy one tiny little corner. I don't have a great idea of the big picture, because I'm too busy trying to understand the details of the stuff near my own corner. And I don't even fully understand that. Consider this a distorted portrait.
What "unsolved" means
In fact, superconductivity is already understood. It was solved in 1957, when BCS theory was proposed. BCS theory is named for its creators: Bardeen, Schrieffer, and Cooper. The solution is that electrons pair up. In order to pair up, there needs to be an attractive force between electrons. There is an interaction between electrons and the ionic lattice that creates an effective attractive interaction between electron pairs.
But superconductivity reasserted itself as a mystery with the discovery of HTSC in 1986. BCS theory does not work for HTSC materials. It does not predict that superconductors could exist at such high temperatures (i.e., around minus 140 degrees Celsius). We need a new theory of superconductivity for the newly discovered materials. But it's not completely up for grabs. We're still fairly sure that electrons must be pairing up due to some effective attractive interaction. We're just unsure where the effective attractive interaction comes from.
Mind you, when I say low-temperature superconductivity is "understood" and high-temperature superconductivity is not understood, I'm not referring to my personal level of understanding. I don't really understand BCS theory. That is to say, I don't know how to calculate the electron-phonon interaction, and I don't know how to get from the microscopic theory to the Ginzburg-Landau theory. But that previous sentence might have been gibberish to most of you. Perhaps what I call "not understanding" is a much deeper understanding than that of the most educated lay person.
Surely when HTSC is solved, the solution will involve all these little technicalities. I will not be able to understand the solution. I will understand the cartoon picture that accompanies the solution, but I will not understand the calculations.
An excess, not a scarcity, of theories
People generally don't talk about mechanisms for HTSC. The expression "elephant in the room" comes to mind. My impression is that lots of mechanisms were proposed around 1986-1990, and then it became unfashionable. The problem isn't that we don't have a theory, it's that we have too many theories. We need evidence to knock down some of those theories.
New mechanisms for HTSC are occasionally proposed on arXiv (which is where most physicists share their upcoming publications). I often wonder if these are cranks. There's nothing really to stop cranks from putting things up on arXiv, since it's not peer-reviewed. I'm told there's even an unwritten special section for cranks (the "general physics" section). But perhaps many of these papers are completely legitimate and respectable. The point is I wouldn't be able to tell the difference. I've never gotten the impression that they have high impact anyway.
At March Meeting (a huge condensed matter physics conference with over 8000 talks) a few weeks ago, I saw a couple proposals for HTSC mechanisms. One proposal was made during a 12-minute talk trying to explain the observations of some recent experiment. The talk sounded exciting, but I didn't understand it at all. That's not unusual; I don't understand most of the talks.
The other proposal occurred in a poster presentation. The guy had a theory that did not involve electron pairing. That makes it "wacky". I had the impression that he was sort of a crank, since he said he was unable to get published or get funding. But I respected him anyway. I don't understand BCS theory, and I didn't understand his theory. If I'm honest, I can't argue with him. Let the knowledgeable theorists do the arguing.
I mentioned my impression that people don't really talk about mechanisms for superconductivity. He said that's because everyone thinks superconductivity is already solved, and that the solution happens to be the idea they themselves proposed. He alluded to (Nobel Laureate) Phil Anderson's theory. I'm told that Anderson's mechanism involves electron pairing, but a repulsive force is sufficient to allow the pairing. That sounds "wacky" too, but what do I know?
Approaching the problem indirectly
Earlier I said that as an experimentalist, I just test ideas proposed by theorists. But we don't really talk about mechanisms for HTSC. Instead, we test smaller ideas.
For example, one of the big debates is about a "kink" in the electronic structure. Is it caused by an interaction between electrons and phonons, or an interaction between electrons and magnons? And is it related to superconductivity or not? I suppose there must be a class of theories involving phonons, and a class of theories involving magnons, but we don't talk about the theories directly. We're just trying to establish the basic facts.
Another big debate is about the so-called "pseudogap" state, which is a strange state that has been observed above the superconducting temperature. What is the nature of this state of matter? Is it competing with superconductivity, or is it perhaps an incipient form of superconductivity? Perhaps it has something to do with CDWs or stripes? Not that any of this makes sense to you unless you're in the same field as me. But once we figure out the answer, I'm sure I'll be able to draw a cartoon of it that you'll understand.
I think when we think of historical physics discoveries, we often think of the Eureka! moment. Someone writes a great paper, and all the problems are solved as it clicks into place. I'm suspicious of this narrative, because that's not how the field of HTSC looks. It will be a slow and incremental progression. Slowly working out incomprehensible technicalities. But afterwards it will have looked simple. We'll have a nice cartoon, and we'll tell stories about the scientists who, in a flash of brilliance, dreamed up those cartoons.
Categories:
condensed matter,
physics,
science
Tuesday, February 26, 2013
What are topological defects?
It's often said that topology is the branch of mathematics where they can't tell the difference between a donut and a coffee mug. They each have a single hole (the mug's handle and the donut hole), and that's all that matters. If I may overanalyze this joke, the point seems to be that topology is so disconnected from our everyday experience. How is this useful?
I wish to explain one particular use of topology in physics: topological defects.
A topological defect is a sort of knot that exists in the microscopic structure of a material.* You can move the knot around from atom to atom, but you can't untie it. We'll get into how that works soon enough.
*Material is a vague term for "stuff". Later I'll discuss a few different materials, including magnets, liquid crystals, and superconductors.
2D Magnets
One classical material that everyone is familiar with is the permanent magnet (aka ferromagnet). Permanent magnets are made of lots of little atoms, each of which generates a tiny magnetic field. In fact, many materials contain atoms that produce magnetic fields. The miracle of permanent magnets is that all the magnetic fields of the atoms align together in the same direction.
(All images in this post, unless otherwise credited, are my creations. You may use them, if you credit me.)
Suppose that the atoms were instead in a configuration like one of these:
Both of these configurations are "smooth", meaning that each atom is pointing in nearly the same direction as its neighbors. However, there seems to be a single discontinuity at the center of each configuration. This is a topological defect. Specifically, this is called a point defect, because it exists at one point. It is not possible to remove the point defect without temporarily sacrificing the overall smoothness. Therefore, even though the topological defects are not the lowest-energy states, they are still somewhat stable.
To demonstrate, let's draw a loop around the point defect. As we cross each arrow with our loop, take note of the direction of that arrow. You'll note that the arrows change direction very smoothly (as long as we didn't draw the loop through the point defect). Once you get back to the beginning of the loop, the direction of the arrow should be the same as where you started. However, in the meantime, the direction of the arrow has gone around completely in a circle!
There's no way to smoothly deform the arrows so they don't go around in a circle. To steal an analogy from my professor, it's like I tied a string in a loop around my finger. I can't very well take the string off, because my finger is in the way (ignoring the possibility of slipping it off the end of my finger).
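For the numerically inclined, here's a sketch of that loop test in Python. The field is a toy one whose arrow direction is given by a formula, but the counting works for any smooth configuration.

    import numpy as np

    def winding_number(arrow_angle, n=400):
        """How many net turns the arrows make along a unit loop around the origin."""
        t = np.linspace(0, 2 * np.pi, n, endpoint=False)
        angles = arrow_angle(np.cos(t), np.sin(t))     # arrow direction at each loop point
        steps = np.diff(np.append(angles, angles[0]))
        steps = (steps + np.pi) % (2 * np.pi) - np.pi  # wrap each step into [-pi, pi)
        return round(steps.sum() / (2 * np.pi))

    print(winding_number(lambda x, y: np.arctan2(y, x)))   #  1: a point defect
    print(winding_number(lambda x, y: -np.arctan2(y, x)))  # -1: its anti-defect
    print(winding_number(lambda x, y: np.zeros_like(x)))   #  0: smooth, no defect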
Generalizing order
To better understand these topological defects, we have to think about the order parameter manifold. The "order parameter", in this case, is just the direction of each arrow. The "order parameter manifold" is the set of all possible directions that the arrow can point. In this example, the arrow may point in any direction in a 360 degree circle. Therefore, the order parameter manifold is a circle. If we draw a path around the circle, there is no way to continuously deform the path so that it no longer goes around the circle. That's why there exists a point defect.
In fact, there will be two kinds of point defects. One defect corresponds to going around the circle clockwise. The other corresponds to going around the circle counter-clockwise. (I showed one of each kind in an image above.) As it happens, if these two point defects meet, they cancel each other out. The defects behave a bit like particles and antiparticles, annihilating when they encounter each other.
Now let's consider the case where the magnetic field of each atom may point in any direction in three dimensions. In this case, the order parameter manifold is not a circle, but a sphere (specifically, the surface of the sphere). That makes a big difference, because if you draw a loop around the sphere, now it is possible to continuously deform the path into a single point. (In mathematical terms, every loop is "homotopic" to a point.) This means that there is no point defect (in 2D).
On the other hand, if you had an order parameter manifold which was shaped like a donut, then there would be some loops which you could not shrink down to a point. These loops would correspond to point defects.
Some order parameter manifolds may be even more complicated. For example, liquid crystals are made of long molecules. In the "nematic" phase of liquid crystals, the orientation of the molecules becomes an order parameter. But the order parameter manifold isn't quite a sphere, because you only need to go around 180 degrees to get back where you started. Instead, it is an exotic topology called the real projective plane. The real projective plane looks like a sphere, except that opposite points of the sphere are identified as the same point. Here is a point defect in a 2D liquid crystal:
Even though I drew all the orientations to be in the plane, they are still free to move out of the plane. But still there is no way to smoothly get rid of the defect.
What's interesting about defects in the real projective plane is that there are no separate defects and anti-defects. There is only one kind of defect. If any two of these defects meet, they will annihilate. This makes sense, because one defect corresponds to going around the sphere 180 degrees. Two defects correspond to going around the sphere 360 degrees.
Other kinds of defects
All the above examples were point defects in 2D materials (though the order parameter manifold may have more dimensions). But there are other kinds of defects in higher dimensions. For example, there is such a thing as a line defect, also called a vortex.
(I'm drawing more liquid crystals because my version of Mathematica can't draw arrows in 3D)
Whenever there would be a point defect in 2D, there will be a vortex in 3D. All it takes is a loop in the order parameter manifold that cannot be continuously deformed to a point.
On the other hand, if there is a bubble in the order parameter manifold that cannot be continuously deformed to a point, then we can have point defects in 3D. One example of such a defect is the "pincushion" pattern.
The region in the center can be moved, squeezed, spread out, but it cannot be removed smoothly. Topologically nontrivial field configurations exist in 3D as well, but they would be hard to visualize. Incidentally, if there is a nontrivial field configuration in 2D, then there must be a point defect in 3D. If there is a nontrivial field configuration in 3D, then hypothetically there would be a point defect in 4D.
Superconductors
I admit, I think it's just cool that topology appears in the real world, even if it's not so useful. But at the same time, I'm sure it is useful. I study superconductors, which have topological defects of their own. The superconductor's order parameter manifold is the same as my first example: a circle. (The order parameter, however, is not magnetic field direction, it's the complex phase of the superconducting condensate's wavefunction.) So that means that superconductors can have vortices.
And superconducting vortices are really important! As I've explained in a previous post, there are type I and type II superconductors. Type I superconductors block any magnetic fields, and a sufficiently strong external magnetic field will destroy the superconductivity. Type II superconductors can typically handle a stronger magnetic field, because they form vortices which the magnetic field can penetrate.
(Image from Hyperphysics)
If we want superconducting wires with high electric current, we need them to tolerate the magnetic fields produced by that current. The superconducting magnets in MRI machines, therefore, require type-II superconductors. Topological defects: saving lives!
Categories:
condensed matter,
math,
physics
Thursday, February 14, 2013
More Brillouin Zones with origami
These are the first and second Brillouin zones of the bcc crystal structure. These are geometrical shapes that are sometimes used in condensed matter physics, which is my field.
These models are much smaller and simpler than the model I created for the second Brillouin zone of the fcc crystal structure:
That's great. Simpler is better. Anyway, now I can cross this off my bucket list:
- Be a physicist.
- Get in the top 25 of the US Puzzle Championship.
- Write a novel.
- Learn to play "Pyramid Song".
- Create the second Brillouin zones of the fcc and bcc structures using origami.
Categories:
condensed matter,
creativity,
origami,
physics
Monday, November 19, 2012
Quantum interpretations are scientific
Quantum Mechanics is famous for having multiple interpretations. Among them, the two most common interpretations are the Many Worlds Interpretation (MWI) and the Copenhagen interpretation.
According to the Copenhagen interpretation, when you measure a system that is in a mixed quantum state, then the system "collapses" into a definite state (that is, a state that gives you only a single result for your measurement). There are different probabilities for the system to collapse into different states, but it will always be a definite state.
In contrast, MWI says that there is no collapse. Rather, when you measure a system in a mixed quantum state, now you are in a mixed quantum state. One component of your state consists of you having measured one outcome; another component consists of you having measured the other outcome. These different components don't interact with each other, and evolve independently. The ultimate consequence is that the entire universe is in a mixed state with many components that don't interact with one another. Thus the name "many worlds".
MWI and the Copenhagen interpretation give identical predictions in all experiments. So it's impossible to falsify one in favor of the other. That's why some contend that the interpretation question is non-scientific. I do not agree, for two reasons:
1. Different interpretations suggest different directions for future theories.
2. Experiments might have something to say about Copenhagen vs MWI after all.
1. Directions for future theories
I want to quote something Richard Feynman said. Not because Feynman said it, therefore it was right, but because Feynman put the idea into my head. This occurred in a lecture series Feynman gave at Cornell (specifically, the second lecture, section 8). Feynman explained that there are three different ways to state the law of gravitation:
- Each object senses where all the other objects are, and feels a force towards each object of magnitude GmM/R^2.
- There is a gravitational potential in every point of space, governed by laws that only look at its surrounding neighborhood, without looking at far away objects. The gravitational force is determined by this potential.
- Given a start and end point, an object travels by the path that minimizes a particular quantity.
They are equivalent, scientifically; it is impossible to make a decision, because there's no experimental way to distinguish if all the consequences are the same. Psychologically, they're very different in two ways. First, philosophically, you like them or don't like them--training is the only thing you can do to beat that disease. Second, psychologically they're different because they're completely unequivalent when you go to guess at a new law. As long as physics is incomplete, and we're trying to find out the other laws, and to understand the other laws, then the different possible formulations give clues as to what might happen in another circumstance. And they become not equivalent in psychologically suggesting to us to guess as to what the laws might look like in a wider situation.
Physics has developed a lot since we discovered the law of gravitation, so we know for a fact that the different interpretations have different uses. The second theory has helped us understand some of the fundamental character of quantum field theory. But the third theory gave us Feynman path integrals, which are related to Feynman diagrams, an easy way to represent fundamental particle interactions. The first theory has not been very useful, and that's that.
MWI and the Copenhagen interpretation are in the same situation as the law of gravity. They're equivalent in terms of predictions, but they lead to different ways of thinking which suggest different directions for expanding physical theories. The first thing that comes to mind is that MWI is deterministic and unitary (meaning it is deterministic even when time is played backwards). That's useful because it suggests that we can continue coming up with fundamental laws that obey time-symmetry. There may be other uses too.
As I argued in "Multiverses are scientific", scientific ideas can serve many roles. There are observations, hypotheses, experiments, theories, predictions, and so forth. Quantum interpretations also fulfill a role in science--they suggest different directions for future theories. They do not fulfill the role of hypotheses which can be tested. And that is okay, because not all scientific ideas need to fulfill every single role at once. A hypothesis doesn't also need to be a theory, and an interpretation doesn't also need to be a hypothesis. So the fact that the interpretations are not falsifiable doesn't necessarily mean it is unscientific.
2. How to (possibly) verify MWI
First, I need to explain how the Copenhagen interpretation and MWI, despite their differences, vary continuously into one another. Consider a thought experiment where a mechanical device detects whether a radioactive atom decays within a half-life. If it does, then it turns on a laser pointer, which provokes a cat to run through a hallway. At the other end of the hallway, I can see if the cat appears or not. Whatever I see, I publish my results for other scientists to see.
The question is, when does the collapse occur? Does it collapse when the radioactive atom observes itself decaying? Does it collapse when the mechanical device observes the radioactive atom? Do the mechanical device and radioactive atom both collapse when the cat sees the laser pointer? Do the cat, device, and atom collapse when I see the cat (or not)? Do the cat, device, atom, and I all collapse when other scientists see my results? Etc. etc.
If you answer "no" ad infinitum, you are taking the MWI. But if you eventually answer "yes", you are taking the Copenhagen interpretation. To make the Copenhagen interpretation similar to the MWI, all you have to do is say "no" lots of times before eventually saying "yes".
We don't know the answer to all those questions, and cannot know. But we do know the answer to the first few is "no". We can verify that small systems are in mixed states because we can measure interference effects. With more complex systems, it's harder because of the sheer randomness that occurs when you have lots of particles at a non-zero temperature. To my knowledge, the largest quantum system created was 40 microns in length, and needed to be cooled down to 0.1 K.
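The signature we look for is interference, and a toy two-path calculation shows what collapse would destroy. The phase profile here is arbitrary; it stands in for the path-length difference in a real interferometer.

    import numpy as np

    x = np.linspace(-1, 1, 9)      # a few detector positions across a screen
    dphi = 20.0 * x                # phase difference between the two paths

    psi1 = np.exp(+1j * dphi / 2)  # amplitude for taking path 1
    psi2 = np.exp(-1j * dphi / 2)  # amplitude for taking path 2

    both = np.abs(psi1 + psi2) ** 2                 # superposition of both paths
    either = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # definitely one path or the other

    print(np.round(both, 2))    # fringes: swings between 0 and 4
    print(np.round(either, 2))  # flat 2s: no interference left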
It will be hard to push that limit, but we definitely could, in very small steps. We could push the Copenhagen interpretation closer and closer to MWI, though we will never quite reach it. Alternatively, MWI could be falsified if it's found that mixed states do not exist past a certain point (ie if no interference is found where it is expected).
Conclusion
People sometimes say that MWI is unscientific, because it posits parallel worlds that cannot be observed. This is mistaken because it assumes that every single idea in science needs to be verifiable. Scientific ideas may also serve other roles, such as suggesting future directions for theories which themselves would be experimentally verifiable. Secondly, it so happens that MWI is partially verifiable in very small increments.
Categories:
physics,
quantum mechanics,
science
Friday, October 12, 2012
My position on emergence, as a physicist
Rationally Speaking is starting a new series on emergence. So far, I like it merely because it starts out by talking about Renormalization Group theory, which comes from condensed matter physics. Finally, the philosophers are talking about my field, rather than all that stuff about cosmology and particle physics.
I should take this opportunity to explain my position on emergence as a condensed matter physicist. And yes, here I am speaking as a physicist, not because my study of physics has compelled me to view it one way or another, but because physics has greatly influenced my view. I can definitely imagine another physicist coming to the opposite conclusions as me, but surely their opinion would also be greatly influenced by their study of physics.
In a way, condensed matter is all about emergence. Condensed matter is about throwing ~100,000,000,000,000,000,000,000 atoms together, and trying to predict what they will do. It is not an easy task. Consider: a single helium atom is already an unsolvable problem, because it's just too hard to solve the quantum mechanical equations for the nucleus and two electrons.
How do we solve the problem? Approximation upon approximation upon approximation upon approximation etc. Really, there are too many approximations stacked up to fully comprehend them all at once. One could say that condensed matter physics is the art of making approximations, and using experiments to test that the approximations are good enough.
An approximation is all about throwing out information, clearing out some of those irrelevant details so that we can zoom out and see the big picture. In fact, Renormalization Group theory is literally about zooming out. In the theory, we zoom out by some scaling factor--say, if we removed every other atom, and the remaining atoms were 1 micrometer apart from each other, instead of 0.5 micrometers. What laws governing this sparser atomic lattice would cause them to behave just like they did in the original atomic lattice? That is, how are the parameters, such as the coupling strength between atomic neighbors, affected by zooming out?
Renormalization Group theory explains the behavior of phase transitions (like gas to liquid) by showing that when you zoom out on a liquid, the parameters change in a different direction than when you zoom out on a gas. It's a very powerful theory.
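The simplest concrete example I know is the 1D Ising chain, where zooming out can be done exactly: trace out every other spin, and the survivors interact through a renormalized coupling K' = arctanh(tanh² K). A minimal sketch of that flow:

    import numpy as np

    def zoom_out(K):
        """One RG step for the 1D Ising chain: remove every other spin.
        The remaining spins act as if coupled with this new strength."""
        return np.arctanh(np.tanh(K) ** 2)

    K = 2.0                 # start with a strong coupling (in units of 1/kT)
    for step in range(8):
        print(f"zoomed out {step} times: K = {K:.4f}")
        K = zoom_out(K)     # the coupling only ever shrinks: no 1D phase transition

The coupling flows toward zero no matter where you start, which is the RG way of saying a 1D chain never orders at finite temperature. In higher dimensions the parameters can flow in different directions, which is exactly the liquid-versus-gas story above.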
On Rationally Speaking, emergent behavior was defined in the following way:
The idea being that a phenomenon is emergent if its behavior is not reducible to some sort of sum of the behaviors of its parts, if its behavior is not predictable given full knowledge of the behaviors of its parts, and if it is somehow new.

I'm sort of on board with emergence, but I'm sort of not. Emergent behavior is everywhere, and in particular it's right in front of me on my desk every work day. Call emergence an illusion if you will, but by that standard so is superconductivity, which IMHO is a pretty hard-sciencey thing. I suppose I'm taking the slippery slope here: I think superconductivity is a real thing, therefore emergence is a real thing, and therefore even highly-emergent patterns like the stock market and the internet are real things. Yes, I'm willing to bite the bullet and say that the internet is real, despite appearances to the contrary.
But at the same time, I think the standard way of describing and understanding emergence is all wrong. As with the above definition, emergence is about something new that appears in the big picture. But having worked with emergent systems, I do not think it's about adding something new. It's about taking information away.
Perceived patterns do not indicate a greater amount of information, they indicate less information. Usually, a pattern would consist of many things repeated over and over. Repetition and redundancy do not convey more information. Repetition and redundancy do not convey... oh, you get it.
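A compression program makes the same point directly: an obvious pattern squeezes down to almost nothing, while patternless noise doesn't squeeze at all.

    import os
    import zlib

    pattern = b"ABAB" * 2500            # 10,000 bytes of pure repetition
    noise = os.urandom(10_000)          # 10,000 bytes with no pattern at all

    print(len(zlib.compress(pattern)))  # a few dozen bytes: very little information
    print(len(zlib.compress(noise)))    # ~10,000 bytes: randomness won't compress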
When you find an emergent pattern that is difficult to predict, the problem isn't that you can't reduce it to fundamental physics. The problem is that you have to reduce it far, far beyond fundamental physics. You have to eliminate large swaths of useless information in increasingly creative ways. You have to do experiments on all levels to make sure that you didn't lose any of the important information.
Alternatively, you can take a more phenomenological approach, and "guess" the result of the information-elimination based on empirical observation and intuition. I get the sense that this is what many people call emergent behavior, because by taking guesses they're adding something new. Guessing is a tried-and-true practice used everywhere in science, so I'm cool with that.
Categories:
condensed matter,
philosophy,
physics,
science