Showing posts with label research. Show all posts

Thursday, January 15, 2015

Doing what you love

Should you find a job that you love, or should you just find a job that pays and do what you love on the side?  There is no ultimate answer to this question, only personal preferences and conventional wisdom.

The sense I get from US cultural history is that the conventional wisdom shifts from generation to generation, often tracing economic trends.  The clearest example I can think of is the idea of the "yuppies" in the 80s.  Yuppies were (supposedly) sellouts, people who chose corporate jobs over continuing the revolutions of culture in the 60s and 70s, or so the narrative goes.  In other words, Yuppies chose jobs that paid, rather than doing what they loved.

I am part of the millennial generation.  I feel it is impossible to ascribe motivations to a generation, as if it were a single individual, but I am a rather stereotypical millennial in many regards.  I am overeducated.  I am pessimistic about my career, and about the economy.  I don't expect or want much in the way of material goods.  I do not drive.  And I don't love my job.

Unlike the stereotypical millennial, I don't have student debt.  In the absence of debt, and of any expensive hobbies, I would be happy with a shorter work week.  Really, we should all have shorter work weeks; it might help reduce unemployment.

I know lots of grad students.  My lack of enthusiasm is common.  But for some reason the cultural expectation is that scientists do what they do for the pure joy of discovery.  Non-scientists view science through the lens of popular science, where everything is cool and exciting.  I can fit my own research into this narrative too.  Liquid helium, ultra-high vacuum, class 4 lasers!  But science isn't all exciting ideas and fascinating discoveries.  It is, first and foremost, a job.  It's work.  I wouldn't do it if I didn't get paid for it.

On second thought, perhaps that's not true.  One of my volunteer projects is analyzing community survey data.  I'm basically doing social science purely because I want to do so.  But considering how little time I put into that project, I think it only serves to show: liking what I do can only get me so far.

But even when "doing what you love" seems unattainable, it sounds like a nice ideal.  It would be great if different kinds of labor could be allocated to exactly the people who like them.  Who could oppose such potential for human happiness?

I don't oppose the ideal.  Rather, I oppose what people are expressing through the ideal:  You are not allowed to like things, unless by liking them you contribute materially to society.  You can't like art unless you're an artist or critic.  You can't like games unless you're a designer or competitor.  You can't like music unless you're a performer.  As for whatever job you might have, you must work really hard at it, because you love to do so.  Forget the 40-hour work week, why would you want to constrain yourself?  And while you may not have much remaining free time to enjoy the income you earn, you can always spend the extra income on status goods.  Giant houses, and lots of things to put in the houses!  That's what comfort is, what luxury is.

To me, comfort doesn't mean having more status and wealth than other people.  It means having more time to do the things I actually want to do.

Friday, October 11, 2013

A research project shift

Here I'm going to talk a bit about my research, but devoid of any particulars.  This is partly because the vast majority of you wouldn't understand the particulars.  But mostly it's because I can't leak any information on my group's research.

Over the past few months, I shifted research projects.  It wasn't a huge shift.  I'm still in the same research group, studying the same material, using the same experimental technique, but I'm no longer working on the same paper.  This has probably set me back a bit, because I spent a year on my project without any paper to show for it.

The issue was that my advisor and Famous Theorist are both pushing a particular interpretation of my data that I don't agree with.  My interpretation is much less exciting than theirs.  I'm also not sure what experiment I could perform to rule out my interpretation.  My interpretation is along the lines of "this kind of analysis is invalid" rather than a fully-formed theory, so it's not really the kind of thing you prove or disprove easily.

I told a friend about this situation, and he said it was very "principled" of me to drop the project just because I didn't agree with the exciting conclusions.  He didn't agree with this principled stance.  He thought it was better to publish a paper with overreaching conclusions than not to publish at all.  Sure, the conclusions will likely be disproven later, but there's a small chance they'll turn out to be correct.

I'm not sure that I was taking any principled stance.  Maybe if the paper was nearly ready to be published, I'd be fine publishing it even with the conclusions that I disagree with.  But the truth is I'd have to do a lot more work to get to that point.  And if anyone would acknowledge my alternative hypothesis, that would mean even more experiments to rule it out.  I'm not really willing to invest that extra time to go nowhere.

My advisor and I were a bit frustrated with each other's positions.  My advisor was frustrated with me because she thought I was misunderstanding Famous Theorist, and that I wasn't proposing any experiment to test my theory, which she didn't understand.  I was frustrated with my advisor because she was misunderstanding Famous Theorist, and also my own theory.

But I don't mean to make it out like this has been hurting my relationship with my advisor.  There's an easy resolution to the disagreement: I switch projects, and she assigns another student to continue my old one.  It's slightly awkward, because I openly believe that the other student is now stuck in a dead-end project, but I wish them success in any case.

One advantage of switching projects at this time is that I know enough to form my own ideas of what to study.  I had a new idea in France, which I excitedly presented to my group.  My advisor likes it when students come up with their own ideas, so she let me pursue it.

Long story short, I haven't published anything, and I'm not close to publishing anything yet.  Oh well.  Such is the life of a grad student.

Monday, October 22, 2012

Current research: All-nighters for science

Sometimes I run experiments at the Advanced Light Source.  You can read more about it here or here, but the basic idea is that there is a beam of electrons going around in a giant ring, much like in a synchrotron particle collider.  But in a particle collider, they have to worry about losing energy to radiation; at the Advanced Light Source, the radiation is the whole point.  All around the ring, the x-ray radiation is used for a wide range of experiments, from biology to materials science.

Anyway, the result is that thousands of scientists come every year to do experiments.  Even though there are a dozen different places around the ring where experiments can run simultaneously, "beam time" is in high demand.  So every semester there's a process where we propose experiments and get assigned specific days to run them.  Typically, we get 24-hour blocks.

So when I run experiments at the Advanced Light Source, that means I'm working for 24 hours straight.

Here's how a typical experiment might go:

Hour 1: I load samples into vacuum and wait for it to pump down.  I wait for the liquid helium to cool the sample.
Hour 2: I try to align the X-ray beam with the sample.  Is it not showing because it's not aligned or is one of the settings wrong?
Hour 3: The staff scientist comes by and instantly solves the problem we've been working on for the last hour.  We fiddle around with the settings to see if we can get the signal to look better.
Hour 4: Geez, I'm already tired, and beam time has hardly gotten started.  But finally, we can take our first data, a quick Fermi surface mapping.
Hour 5: The computer crashes repeatedly.  Even the staff scientist is puzzled for a while.  I'm hungry, so I produce dinner from thin air.  Just kidding, I painstakingly cooked all of that food the previous night.
Hour 6: This data doesn't look quite right.  Maybe we can solve the problem by taking more data?
Hour 7: Maybe it would look better if we tried a new sample?  We spend an hour switching to a new sample and cooling it down.
Hour 8: We've learned from our mistakes, and this time it only takes an hour to get the sample in the right place.  It doesn't look much better than the previous sample though.
Hours 9-12: Finally, we can take our data again.  I sort of nod off, only staying sufficiently awake to start new scans every hour.  I heat up more food and try reading my book, but soon I don't have the short-term memory to get through long sentences.
Hour 13: Apparently, the light polarization was all wrong, and that's why the data didn't quite look the way we wanted.  Good thing we figured it out early and didn't waste too much time.
Hour 14: I argue with my coworker over the best way to ration our time.  Better statistics, better resolution, more data points, it's all a trade off.  We take test scans trying to figure out our best options.
Hours 15-23: Now we start really taking data.  Strangely, I feel more awake now, even though I no longer have to think very much.  I do my physics homework.
Hour 24: We're done with our main plan, with one hour to go.  We spend 20 minutes arguing over the best way to spend the last 40 minutes.

Hour 48: We discover that all the data we acquired was useless because the sample wasn't cooled properly despite what the thermometers said.

Experimental work is frequently a shaggy dog story.

Wednesday, September 5, 2012

Current lab work: Helium shortage

Did you know: there is a national helium shortage?

Little kids are gonna grow up without helium balloons in their childhood!  Football games will have to discontinue their ancient traditions of releasing thousands of helium balloons into the air at half time!  Perhaps more importantly, doctors will be unable to maintain the magnets of their MRI machines.  And most importantly for me, it will impact my research.

One of the things I do in my lab is cool superconducting samples with liquid helium.  A company delivers our helium in 100 liter dewars.  I roll this dewar to the elevator, press an elevator button, then run down the stairs to catch up with it.  This is a safety precaution, so that if there is a leak, it doesn't displace all the oxygen in the elevator and suffocate me.

Earlier, I had gone for a month and a half without helium.  During the time I meant to be experimenting, I read about the national helium shortage instead.

Helium is not a renewable resource.  Because helium atoms are very light, at thermal equilibrium they move faster than the other particles in the air.  Helium doesn't stay in the atmosphere very long, because eventually the atoms move fast enough to simply escape the Earth.  Instead, we get our helium from natural gas deposits.
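The claim above is easy to sanity-check with a back-of-the-envelope calculation.  Here's a sketch in Python; the 1000 K temperature for the upper atmosphere is an assumed round number, and these are RMS averages (the actual escape relies on the fast tail of the speed distribution):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66053907e-27    # atomic mass unit, kg
V_ESCAPE = 11186.0      # Earth's escape speed, m/s

def rms_speed(mass_amu, temp_k):
    """RMS thermal speed sqrt(3kT/m) for a particle of given mass."""
    return math.sqrt(3 * K_B * temp_k / (mass_amu * AMU))

# Compare helium to nitrogen at an assumed 1000 K upper-atmosphere temperature
v_he = rms_speed(4.003, 1000)    # helium: ~2500 m/s
v_n2 = rms_speed(28.014, 1000)   # nitrogen: ~940 m/s

print(f"He rms speed:  {v_he:.0f} m/s")
print(f"N2 rms speed:  {v_n2:.0f} m/s")
print(f"escape speed: {V_ESCAPE:.0f} m/s")
```

Even the average helium atom is far below escape speed, but the Maxwell-Boltzmann tail reaches 11 km/s vastly more often for helium than for nitrogen, so helium leaks away over geological time while nitrogen stays put.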

Back in 1960, the US government thought helium would be useful for military dirigibles or the space race or something, and they put lots of helium underground in the National Helium Reserve (NHR).  Later, the NHR would accumulate debt.  In response, Congress passed the Helium Privatization Act in 1996.

So the government wanted to privatize helium, and to encourage this, they're selling off all the helium in the NHR.  At really low prices (enough to pay off the debt).  At a slow, fixed rate (I think this is a physical limitation of the extraction process?).  For an extended period of time (until 2018-2020).  Private helium companies can't compete with this, because they have to build all the infrastructure from scratch, while the NHR already has it.  And because helium is so cheap, a lot of users don't bother recycling it.

Something about this just seems dumb on Congress' part.  If they had kept helium nationalized and sold it at reasonable prices, I'm sure the debt would have been paid off by now.  If they wanted to encourage private helium companies, they shouldn't have mandated the fire sale.  They should have sold off ownership of the reserve like a publicly traded company or something like that.

Tuesday, January 17, 2012

Reflections on grad school

I am in my second year of a graduate physics program.  Students typically earn their PhD after five or more years.  In a few years, I will be at a completely different stage of research, and have a completely different perspective.  But right now, grad school is still somewhat "new".

My point of comparison is undergraduate university.  I thought undergraduate physics was really easy.  Obviously, I had many classmates who disagreed, which goes to show that my perspective is not necessarily representative.  But that was me.  I do very well in a class environment.  I never had to study for tests because I already understood the material from the time it was mentioned in lecture.

Grad school is not a class environment.  Or at least, not most of it.  I've been taking a few classes every semester, but they are not very important, and the grades don't really matter.  Soon I won't have any more classes to take.  At that point, I'll divert all my attention to research, which is the real centerpiece of physics graduate school.

My impression is that research uses a different set of skills from those used in classes.  It's hard to say exactly what that skill set is, but it includes self-motivation, good communication skills, and good paper-reading skills.  For me, this is somewhat of a disappointment, because I may be great in the classroom, but I am only decent at research skills.  'Twas to be expected, since life isn't a series of lectures, but still.  I am most annoyed by all these papers.  There is something to be said for the compact and efficient way that physics papers present information.  But one thing I would not say for papers is that they are welcoming to people who are new to a topic.  I would have a much easier time of it if they were in lecture format.

I came into grad school wanting to do theoretical physics, but now I am doing experimental physics.  That's the way a lot of people do it, actually.  For whatever reason, incoming students' interests skew towards theoretical, even though there is more room in experimental.  An obvious possible cause is that theoretical physics is glamorous.  String theory and cosmology are also glamorous, and thus also overrepresented among incoming students' interests.  I've also heard it suggested that incoming students want to do theoretical physics because most undergraduate work is essentially theoretical.  Students want to do more of the same, and think theoretical research will fit.

Myself, I just liked the idea of solving mathematical puzzles.  I've been a puzzle enthusiast for a long time, as you know.  But I was open to the idea of doing experimental physics.  So I tried it.  And now I see there are a lot of advantages to experimental work.  And the thing is, I still get to solve puzzles!  Last semester, I spent a lot of time trying to explain a feature in our data.  I talked to a theorist about it, and he suggested a direction, but I still had to work out the rest.  It was quite satisfying.  This made me realize that experimentalists will always have an abundant supply of their own problems to solve, and theorists can't solve all of them.

I'm not sure what theoretical physics is like, but I suspect that it is not really much like undergraduate study after all.  They probably have to read lots of theoretical papers, which are like ten times harder to read than experimental papers.  And they probably do most calculations by computer modeling rather than pencil and paper like undergrads do.  And I bet it's more stressful because it's more competitive too.  Or so I imagine.

So yeah, I like where I am.  My advisor fits the "perpetually absent" archetype, which suits me fine.  I've met her several times, and she gives a great pep talk.  Most of the time I just refer to the other grad students and postdocs for help, and they are very helpful.  I have no complaints so far.  Let's see if that changes in a few years!

Monday, December 12, 2011

Electrons, gaps, and pseudogaps

I want to give you a bit of flavor of what I've been researching this semester.  Problem 1: It is confidential information (there are scientific competitors). Problem 2: It is incomprehensible.

So I'm not going to talk about my research.  I'm going to talk about a few of the broad ideas in cutting-edge research on high temperature superconductors.  The point is not really to teach you about high temperature superconductors (which is hardly useful information if you're not studying them), but to give you an idea of what the field looks like.

So.  Superconductors.  Below a certain temperature "Tc", superconductors conduct electricity perfectly, with no resistance.  This property has obvious practical value, but unfortunately all known superconductors have a very low Tc.  Even the so-called "high Tc superconductors" discovered in 1986 still have a Tc of about -140 Celsius. 

We understand how low Tc superconductors work.  The problem was solved in 1957, and the solution is called BCS theory.  BCS theory provides a way for electrons to attract each other (despite their like charges, which would normally make them repel), and the electron pairs form a Bose-Einstein condensate.  The condensate of electron pairs is what makes a superconductor.  High Tc superconductors also have condensed electron pairs, but BCS theory doesn't work, and no one knows why the electrons attract each other.

There are two major classes of high Tc superconductors, the cuprates and the iron-based superconductors.*  Iron-based superconductors are currently a hot topic because they were just discovered in 2008.  But I study cuprates.  In particular, I spend most of my time on a material called Bi-2212, which is one of the most highly studied superconductors.  I am not sure why it gets so much study, but I would guess that it is because it is cheap, easy to study, and (somewhat self-referentially) has been studied enough that it allows for an ever higher tower of knowledge.

*There are other high Tc superconductors, but I will not speak of them.

I study Bi-2212 using a technique called ARPES.  In concept, ARPES is simple: shine light on the material, and look at the ejected electrons.  In particular, we look at the direction that the ejected electrons go, and the energy.  If we graph the energy of the electrons (vertical axis) vs the angle (horizontal axes), we get something like this:

From Ronning et al, Science 1998.  Figure cropped for clarity.  I marked the Fermi Surface in blue.  Don't worry about what a quasiparticle is.

Experimental physics being what it is, we don't really see that whole picture there.  We see tiny slices of it at a time.  If I showed you some real data, it would be unrecognizable.

The way the electrons work, there are a bunch of quantum states for them to fill.  The quantum states act like slots, and only one electron fits in each slot.  The electrons only fill slots up to a certain energy, but there are more empty slots above that energy.  The most interesting physics happens where the filled slots meet the empty slots, what's called the Fermi Surface.  In fact, this is where the electron pairs live.

Long story short, when we look at the Fermi Surface of a superconductor, there is a small energy gap between the filled and empty slots.  This gap represents the energy required to pull the attracting electron pairs apart.  The fascinating thing about cuprate superconductors is that the gap is not the same size everywhere on the Fermi Surface.

Also from Ronning et al.  The size of the gap has been greatly exaggerated.

Yes, in fact, there are even points on the Fermi Surface where there is no gap at all!  These points are called nodes. It would seem that at these nodes, there is no superconductivity happening.  This is very different from conventional superconductors, which have gaps everywhere on the Fermi Surface.
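The angle dependence can be sketched with the standard "d-wave" gap form often used to describe cuprates, where the gap goes as cos(2φ) with φ the angle around the Fermi Surface.  A minimal sketch (the 40 meV maximum gap is a made-up number for illustration):

```python
import math

def dwave_gap(phi_deg, delta0=40.0):
    """d-wave gap (in meV) at angle phi around the Fermi Surface.
    Maximum at phi = 0 (the antinode), zero at phi = 45 (the node).
    delta0 is an illustrative value, not a measured one."""
    return delta0 * math.cos(2 * math.radians(phi_deg))

for phi in (0, 15, 30, 45):
    print(f"phi = {phi:2d} deg -> gap = {dwave_gap(phi):6.1f} meV")
# The gap shrinks smoothly toward 45 degrees, where it vanishes: the node.
```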

And what about iron-based superconductors?  Iron-based superconductors also have gaps everywhere, just like conventional superconductors.   But it's not quite the same!  We have reason to think that there are nodes in iron-based superconductors, but they cannot be seen directly because they are between Fermi Surfaces, rather than on the Fermi Surfaces.  Of course, this is not a settled matter...

When you raise the temperature of a superconductor, the superconductivity disappears, and so does the gap.  But the cuprates do something funny.  The gap remains even at high temperatures, when the material is not a superconductor.  Or at least, part of the gap does.

From Lee et al, Nature 2007.  Figure cropped for clarity.  The horizontal axis is the position in the Fermi Surface; the vertical axis is the size of the gap.  The blue and green lines are at superconducting temperatures; the red line is above superconducting temperature.

The gap that remains when the material is no longer superconducting is called the "pseudogap".  It's a silly name, since the gap is real.  But is the gap there because of incipient superconductivity?  Or is it an unrelated property of the material?

Check out the date of that paper.  2007.  Scientists are still arguing over the pseudogap.  I've seen several talks about it, talks which disagree with each other.

So, that's what superconductivity research looks like.  Or at least, that's one very small part of it.

Wednesday, July 20, 2011

Current labwork: Vacuums

My mother is begging me to write about my current summer research.  I have already talked about my general research topic, but let's talk a little more about the experimental details.  Let's talk about this thing:


My reaction (and possibly yours too) is, "What the hell is that?"  I've toured a bunch of physics labs, and many of them are inhabited by these strange Physics Devices.  They just look like jumbles of spheres, windows, tubes, wires, and aluminum foil.  Who knows what those mad physicists are up to?

But after working on these Physics Devices for weeks, I've unlocked some of their secrets.  In short, they are vacuum chambers.

Actually, they are all sorts of different devices for different experiments, but vacuum chambers are a very common component.  A vacuum has obvious utility for lots of experiments.  In my own experiment, air will interact with the surfaces of the superconductors, and thus ruin the experimental results (which are very surface-sensitive).  Therefore, the superconductor has to be kept under vacuum.

Why do vacuum chambers look the way they do?  First, you need the chamber walls (those spheres) to keep air out of the vacuum.  Then you need the windows so you can actually see what's happening inside.  You need additional spheres as loading chambers (so you don't let air in everywhere whenever you load a material).  And you may need additional chambers between the main one and the loading chamber, in order to step up the vacuum quality gradually.  There are also multiple vacuum pumps and pressure gauges.

And then, you need some way to transfer materials from the loading chamber to the main chamber.  This is tricky.  I'm going to zoom in on the solution.


That long cylinder is a manipulator arm.  On the very right end is something you can slide back and forth to move the arm.  Typically, you'd have a sample (e.g. a superconductor) screwed onto a "stage" in the loading chamber, and you'd use the manipulator arm to unscrew the sample, move it into the main chamber, and then screw it onto another stage.

What's with all the aluminum foil?  Water molecules tend to stick to the inner surfaces of the vacuum chamber, and they slowly come loose, to the detriment of the vacuum.  We need a really good vacuum, down to 10^-11 atmospheres or so!  Therefore, before starting any experiments, we need to heat up the chamber and boil off all the water so it can be pumped out.  This is called a "bake-out".  The aluminum foil is there to keep the heat in during the bake-out.  Grad students are usually too lazy to take the foil off afterwards, especially since they'll just have to put it on again for the next bake-out.
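To get a feel for how empty "ultra-high vacuum" really is, the ideal gas law n = P/(kT) gives the number of molecules left per cubic centimeter.  A rough estimate, assuming room temperature:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
ATM = 101325.0       # one atmosphere in pascals

def molecules_per_cm3(pressure_atm, temp_k=293.0):
    """Ideal-gas number density n = P/(kT), converted to cm^-3."""
    n_per_m3 = pressure_atm * ATM / (K_B * temp_k)
    return n_per_m3 / 1e6

print(f"{molecules_per_cm3(1.0):.2e} molecules/cm^3 at 1 atm")
print(f"{molecules_per_cm3(1e-11):.2e} molecules/cm^3 at UHV")
```

Even at these pressures there are still hundreds of millions of molecules in every cubic centimeter, which is part of why a freshly exposed surface stays clean for hours, not forever.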

The wires are there to supply power and record data.  All the data ends up on a computer, which is actually where I do most of my work.  But that's boring to describe, so back to the vacuum.

Depending on the experiment, there could be lots of other attachments to the vacuum chamber.  In my experiment, we need to take off a layer of the superconductor while it's in the vacuum in order to expose a new surface that has never touched air before.  This is also quite tricky.  The solution involves gluing a little peg to the sample beforehand, and then using a "wobble stick" in order to jab the peg off.  I am not making this up.

My experiment also requires the addition of a hemispherical analyzer (shaped like a hemisphere), and a laser (which comes with a whole new jumble of lenses, mirrors, cameras, and other optics).  There's probably even more stuff that I don't understand.  Who am I kidding, I don't even fully understand the things I've described!

Friday, May 27, 2011

Summer research: High-Tc Superconductors

As you know, I am currently working on my physics Ph.D. with a specialization in condensed matter.  I am finally starting research this summer, next week in fact.  My research project is on one of the hottest topics in condensed matter physics, High-Temperature Superconductors.

I've written about superconductors before, but in case you're too lazy to read that...  *clears throat*

Just like many liquids freeze below a certain temperature, some materials change into superconductors below a certain temperature (that temperature is called Tc).  They won't look any different, but they have awesome properties like zero electrical resistance and magnetic levitation (which are used in MRIs and maglev trains respectively).  For the earliest discovered superconductors, Tc was no more than about 30 degrees above absolute zero (-243 Celsius), but so-called High-Tc Superconductors have a Tc as high as 135 degrees above absolute zero (-138 Celsius).  Which is still very cold.  Many physicists seek to understand High-Tc Superconductors with the dream of discovering superconductors at room temperature.

More specifically, I will be investigating superconductors through the use of the ARPES method.  ARPES stands for angle-resolved photoemission spectroscopy.  ARPES involves shooting a photon at the material, and looking at the electrons that pop out.  It's a lot like the photoelectric effect experiment which won Einstein his Nobel prize.

But ARPES is a little more sophisticated, because it doesn't just measure the energy of the electrons that come out, it also measures the angle at which they come out.  The angle tells you about the electron's momentum.  And so we can plot graphs of energy vs momentum.  This is a graph of the electronic band structure, which is of such great importance that I don't know how to properly convey it.  One of these days I will write a better explanation for lay people.  For now, remember those energy bands which were so crucial to the understanding of conductors, insulators, and semiconductors?  Those energy bands are merely a simplified form of the electronic band structure.
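The angle-to-momentum conversion is actually a one-liner: the in-plane momentum is k = (sqrt(2·m_e·E_kin)/ħ)·sin(θ), and in practical units the prefactor works out to about 0.5123 inverse angstroms per square root of an eV.  A sketch (the 20 eV kinetic energy is an arbitrary example):

```python
import math

def k_parallel(e_kin_ev, theta_deg):
    """In-plane electron momentum in inverse angstroms.
    k = sqrt(2*m_e*E_kin)/hbar * sin(theta); the prefactor
    sqrt(2*m_e)/hbar is about 0.5123 A^-1 per sqrt(eV)."""
    return 0.5123 * math.sqrt(e_kin_ev) * math.sin(math.radians(theta_deg))

# Example: a 20 eV photoelectron detected 30 degrees off the surface normal
print(f"k_par = {k_parallel(20.0, 30.0):.3f} A^-1")
```

So each detector angle maps to a momentum, and sweeping angles while recording energies builds up the band structure plot.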

Note that I haven't yet fully described my research project; ARPES plus Superconductors is way too broad for a single research project.  But that's just as well, as I don't start until next week.  Perhaps I will write more then, and say something about what exactly I'm doing in the lab.

Though, if it's anything like previous summers, I will probably never fully describe my research here, and instead opt for inside jokes.

Friday, July 30, 2010

Current lab work: Spintronics

As mentioned previously, I've just moved to the Bay Area so I can go to physics grad school.  Since I'm here a bit early, I'm doing a mini research project.  It's "mini" because the time frame is far too short to have anything approaching a full research project.

This mini research project is quite unlike my previous research experience because before I only did data analysis.  Now I'm actually working in a lab.

I wanted to include a picture, but I think taking a camera into the clean room would earn some strange looks.  So I got the next best thing: a stock image!

These here are silicon nanotubes, magnified by a factor of 10^9.

No, I'm just kidding.  They're drinking straws.  We use the straws to suspend crystal samples in the SQUID (superconducting quantum interference device).

The SQUID measures the crystal's response to strong magnetic fields.  To produce strong magnetic fields, we need a very strong current.  To carry a strong current, we need something that conducts electricity very well, like a superconductor.  To maintain a superconductor, we need to keep it cold.  To keep it cold, we use liquid helium.  So, you know, the SQUID is super advanced and super cool.

Why drinking straws?  They're cheap, and the right size.

My use of the SQUID is part of a larger research project on spintronics.  Spintronics is a kind of electronics that makes use not just of the electron's charge, but also of its spin.

Electron spin is a bit as if the electron were spinning.  Whenever electric charge moves in a circle, it creates a magnet.  So an electron with spin produces a small magnetic field.  However, you cannot say that the electron is really spinning, because an electron is a point-like particle, and there's no way it could spin fast enough to produce the magnetic field we see.  Electron spin is fundamentally a quantum property.  There are two discrete spin states of an electron: spin up or spin down.  Or it could be in some superposition of the two states.

Beyond that, I can't say anything about spintronics, because I don't know much.

To get spintronics to work, one thing you need is a "spin injector", which is a material that conducts mostly spin-up electrons, not spin-down electrons.  My project involves creating new crystals which could serve as spin injectors.

Growing crystals is a really complicated process.  I'm told that the training can take six months, so I'm not going to get to do it.  The device which grows crystals is called the e-beam.  The e-beam has a chamber under ultra-high vacuum.  Inside the vacuum are pure elemental metals.  These metals are heated up and evaporated using electron beams.  Some of the evaporated metal deposits on a little square (called the substrate).  The crystal grows at about half an angstrom per second.  Afterwards, the crystals are mechanically moved into an airlock so we can remove them without ruining the ultra-high vacuum.
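Half an angstrom per second sounds fast until you do the arithmetic.  A quick sketch (the 100 nm film thickness is a hypothetical example, not a number from our lab):

```python
def growth_time_minutes(thickness_nm, rate_angstrom_per_s=0.5):
    """Time to deposit a film at a fixed growth rate.
    1 nm = 10 angstroms."""
    angstroms = thickness_nm * 10.0
    return angstroms / rate_angstrom_per_s / 60.0

# Example: a hypothetical 100 nm film at half an angstrom per second
print(f"{growth_time_minutes(100):.0f} minutes")
```

At that rate, a film a tenth of a micron thick takes over half an hour, atom by atom.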

We grow lots of crystals with different proportions of different elements.  And then we need to characterize all those crystals by testing them in all sorts of different ways.  The SQUID is just one of those tests.

Did I mention it's all done in a clean room?  And that there are giant dewars of liquid nitrogen all over the place?  Fun times are had by all.

Sunday, December 20, 2009

The science of closed boxes

A friend pointed me to an article in New Scientist, "Why we shouldn't release all we know about the cosmos". The article suggests that data on the Cosmic Microwave Background Radiation (CMBR) should be released slowly, not all at once.
"If the whole data set is released at once, as is planned, any new ideas that cosmologists come up with may have to remain untested because they will have no further data to test them with."
It took a moment, but eventually I realized that they were suggesting the method of blind analysis.

We also used blind analysis in LIGO data (LIGO is the Laser Interferometer Gravitational-wave Observatory, a gigantic device designed to detect gravitational waves). Whenever LIGO records a set of data, only 10% of that data is released. That 10% is called the playground. We analyze the heck out of that playground! There's a huge computer program, called the data analysis pipeline, which is used to decide if there are any events in the playground which look like real gravitational waves. A large group of scientists build on the pipeline, finely adjusting parameters, adding new bells and whistles. And the whole time they are doing this, they are not allowed to peek at the other 90% of the data. That box is closed!

This is the sort of box I want you to visualize

Once the scientists are satisfied with the pipeline, they "open the box". That means they get to look at the other 90% of the data. But once the box is open, they're not allowed to change the pipeline in any way. If they want to add more bells and whistles to the pipeline, they have to wait until the next time LIGO takes a set of data, perhaps in a year or more.

What is the meaning of this silly ritual? Is it some sort of Christmas tradition among data analysts?

There are all sorts of ways you can bias your analysis. If you know what the results are every time you try a different method of data analysis, then you can, to some extent, "select" results you like. That's bad! We want the results to be unbiased, so that everyone can agree on them. Therefore, in blind analysis, there are two stages. First, you choose a method of data analysis without looking at the full results. Then you apply that method to the full results without changing it.
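The two-stage structure can be sketched in a few lines of Python. Everything here is invented for illustration (toy numbers, and "tuning" reduced to picking a single threshold), but the shape is the same: tune on the playground, freeze, then open the box.

```python
# Toy sketch of blind analysis, NOT the actual LIGO pipeline: made-up
# "detector scores", a 10% playground, and one tunable threshold.
data = [0.3, 1.1, 0.7, 9.5, 0.2, 1.4, 0.9, 0.5, 1.0, 0.6,
        0.8, 1.2, 0.4, 8.0, 0.7, 1.3, 0.6, 0.9, 1.1, 0.5]

# Stage 1: set aside a small playground; the rest stays in the closed box.
n_play = len(data) // 10
playground, box = data[:n_play], data[n_play:]

# Tune the analysis on the playground ONLY.  Here, "tuning" is just picking
# a threshold comfortably above the loudest playground event.
threshold = 2 * max(playground)

# Stage 2: open the box and apply the frozen threshold, with no more tuning.
candidates = [x for x in box if x > threshold]
print(candidates)
```

If the analysts could peek at the box while adjusting the threshold, they could (even unconsciously) nudge it until the "discoveries" came out the way they wanted. Freezing the method first is the whole point.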

I read the paper reported on in New Scientist, and it has another cool explanation of the same idea. The goal in science is to compare a bunch of different models, and determine which model best explains our observations. But first, we need to come up with those models. The models will be educated guesses based on all the evidence we've collected thus far. So if we want to test the models, it's somewhat redundant to use the present evidence; we should instead collect new observations to test the models.

The problem in cosmology is that at some point, there will be no new observations to make. There is only one universe. There is only one CMBR map, with all its random statistical fluctuations. If you stare long enough at those statistical fluctuations, chances are good that you'll find some false pattern. The pattern will be very difficult to falsify, since there is no more data to collect after that. The solution? Release data piece by piece, so that there will still be new data to test our models.

So you see, even something which sounds as boring as data analysis can have all these counter-intuitive tricks involved. Hiding data in a closed box? It sounds silly, possibly even counter to science's goal of obtaining as much true information about the world as possible. But if it's necessary to filter out human biases, I think we should do it!

Monday, December 7, 2009

On the hiding of climate data

I've been hearing a lot lately about this "ClimateGate" story? Someone hacked the e-mails of a bunch of climate scientists, and found evidence of fraud. That's pretty outrageous, isn't it? Seriously, what kind of person goes into science, which is a method of revealing truth, only to cover up and fabricate? Not my kind of scientist, that's for sure.

But then I actually saw what they consider evidence of fraud.
I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s to hide the decline.
I can see how someone might see this as evidence of fraud. This scientist is talking about using a "trick" to "hide the decline" in temperature! But even with my limited research experience in data analysis, it's clear to me that this is completely innocuous, even without seeing the context.

In my experience, a significant part of data analysis is knowing what data to keep and what data to throw out. That's right, I threw out lots of data. Well, I didn't really throw anything out in the sense of deleting it from computers. I just excluded it from the analysis and from the results.

Let me explain a bit more about my research from over two years ago. I was looking at data from magnetometers, which very precisely measure changes in the Earth's magnetic field. One of the problems was that every so often, the Earth's magnetic field would jump up by a factor of a trillion or more. I wanted to cover this up! The public shouldn't be allowed to know! So what did I do? One by one, I went through these gigantic spikes in the data, and removed them. In retrospect, this was not a very efficient way to do it, but then I was an undergraduate researcher, so my time was pretty worthless anyways.

Why did I throw it out? It was bad data. I didn't like it. I clearly had some sort of personal vendetta against the data. More seriously, it's because these gigantic spikes in the data are caused by glitches in the magnetometer devices or other electronics. What exactly causes these glitches? Well, how should I know? I'm just an undergraduate researcher, not an engineer, and all I know is that the magnetometers occasionally acted really funky. If the earth's magnetic field really were jumping up by a factor of a trillion, I'd expect to see the effects all across the earth, at all magnetometers all at once. And I don't. So it was bad data. I didn't like it. I hid it in a little corner marked "raw data".

In my experience, data analysis is more or less one long string of choosing which data to throw out.

Of course, you don't just throw out data willy nilly. You have to come up with justifications for it. And saying, "I like the conclusions which we would draw from this data, but not that data," is not sufficient justification. It's tricky, because you don't want to bias yourself towards a previously held belief by only selecting the evidence which confirms the belief. There are some famous examples where scientists threw out data they thought was bad, but which later turned out to be good. For example, before the cosmic microwave background radiation from the Big Bang was discovered, scientists had actually seen it on radio telescopes, but they thought it was just noise caused by pigeon droppings. Another example is the ozone hole, which was initially filtered out as bad data for about a decade. It's true, scientists make mistakes sometimes, not because they're conspiring against the public, but because Science Is Hard.

Of course, those examples are the exception, not the rule. Data analysts throw data out on a regular basis, and the vast majority of the time, it's because they ought to.

So in the case of the climate researchers, even without looking at context, we know they probably had a good reason to throw out data. In fact, I know they have a good reason, because I looked it up. Apparently it has to do with the unreliability of using tree growth data to determine the temperatures of the last few decades. I don't really understand any of that, because I don't have much interest in climate science, but it should at least be clear that the justifications for their methods have been published out in the open. If climate scientists are indeed throwing out data that should be kept, it's not because they're part of a secret conspiracy.

So why is it that the e-mail talks about using a "trick" to "hide" data? Isn't that an odd choice of words? Not really. "Trick" is commonly used to mean simply a clever method. "Hide" means that they're hiding unreliable data by putting more reliable data in its place. I have trouble seeing what the big deal is.

I'll tell you what it looks like to me now: confirmation bias. People wanted to find a conspiracy, so they looked through a thousand e-mails and found a few e-mails to confirm their beliefs.* You'd think that if there really were some giant conspiracy, it would show up in more than just a few. But let's all just forget about the rest of the e-mails and documents. We don't like the data, so let's just throw it out, eh?

*Yes, there were a few others, but they don't impress me. I think the worst example was a request to delete some e-mail correspondence. In the interest of brevity, my response consists only of two words, "Hanlon's" and "razor".

Friday, November 27, 2009

I go to a conference

The other day, I attended an undergraduate research conference. Fun! I made a presentation on my summer research on classification of gravitational wave candidates. I try to make at least the first few slides very easy to understand, and here's a little peek.

This is a basic picture of what my research was about. You have a bunch of events, some of which may be spiraling binary black holes or neutron stars, and some of which may be noise caused by something entirely different. Each event has multiple associated parameters (such as "x" and "y" shown in the diagram, but there could be many more). So to classify them, we need to choose some dividing line. In two dimensions, finding a dividing line is easy. In twelve dimensions, not so much.
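To make the two-dimensional case concrete, here's a toy version with made-up numbers: events living in a plane of two parameters, classified by which side of a line they fall on. This is an illustration of the concept, not the actual analysis.

```python
# Toy events: (x, y) parameters, labeled True for signal, False for noise.
# All numbers are invented for illustration.
events = [((2.0, 3.0), True), ((2.5, 2.8), True),
          ((0.5, 0.4), False), ((0.8, 0.2), False)]

def classify(x, y):
    # Dividing line x + y = 3: events above the line count as candidates.
    return x + y > 3.0

correct = sum(classify(*point) == label for point, label in events)
print(correct, "of", len(events), "classified correctly")
```

In twelve dimensions the "line" becomes an eleven-dimensional surface, and choosing a good one by hand is hopeless, which is why we make computers do it.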

Anyways, that's just the introduction to my presentation, and there's obviously a lot more to it than that.

Funny thing about research: often the topics can get mind-numbingly specific. When people ask me what I did research on, I tell them, "gravitational waves", but of course that couldn't really have been my research topic. That's far too broad a subject for just one research project. If I want to elaborate further on my research, I explain that I worked on data analysis for LIGO. Still too broad. I could explain that I worked on the classification of candidate gravitational wave events for inspiraling compact binary systems. Still too broad, but now also incomprehensible to a general audience.

It makes me think of how pseudosciences often try to imitate science's use of complicated technical words. In my experience, the abstruseness of science and scientific language isn't placed there to provide an air of authority, it's there out of necessity.

Ironically, the presentations at the conference that interested me the most were not in my own field, physics, but in pure math.

In a strange coincidence, I encountered a poster all about a two-player game that I had once posted on my blog, Puppies and Kittens! Apparently the game is known to mathematicians as Wythoff's Game. The poster was about a generalization of Wythoff's Game, called Linear Nimhoff. The game starts with a set of vectors, such as {(1,0),(0,1),(1,1)}. There are two piles, which will henceforth be referred to as puppies and kittens in a pet store. (1,0) corresponds to buying one puppy; (0,1) corresponds to buying one kitten; (1,1) corresponds to buying one puppy and one kitten. During each player's turn, the player selects one of the vectors, and buys a positive integer multiple of that vector. Whoever buys the last pet wins the game.

If you mathematically analyze the game, you find that there are "winning" and "losing" positions. If, for example, you are able to leave one puppy and two kittens in the pet store at the end of your turn, then you have won the game (provided that everyone plays the game perfectly). If you can determine what all the winning and losing positions are, then you have solved the game!

If you start with the vector set {(1,0),(0,1),(1,1)}, then it is called Wythoff's Game. But in general, you can have any set of vectors. Wythoff's Game is solved, but the general game is not. The poster did not have some amazing new general solution, but it did characterize the solutions: apparently, all the winning positions lie "near" one of several lines in the plane. The slopes of these lines can be determined from the initial vector set. Sounds pretty exciting if you ask me. But don't ask me what the practical applications are. I have no clue.
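For the curious, here's a sketch of what "solving" a game like this means, as my own brute-force illustration (not the poster's method): a position is losing for the player about to move exactly when every legal move hands the opponent a winning position.

```python
from functools import lru_cache

# Wythoff's Game vector set: buy puppies, kittens, or one of each.
MOVES = [(1, 0), (0, 1), (1, 1)]

@lru_cache(maxsize=None)
def is_winning(p, k):
    """True if the player to move from (p puppies, k kittens) can win."""
    for dp, dk in MOVES:
        m = 1
        while p - m * dp >= 0 and k - m * dk >= 0:
            if not is_winning(p - m * dp, k - m * dk):
                return True  # we can leave the opponent a losing position
            m += 1
    return False  # every legal move hands the opponent a win

# All losing positions with at most 7 of each pet.
losing = [(p, k) for p in range(8) for k in range(8) if not is_winning(p, k)]
print(losing)
```

Sure enough, (1, 2) comes out as a losing position: leave one puppy and two kittens at the end of your turn, and you have won the game.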

Another cool coincidence was that I saw a presentation on intrinsically knotted graphs. It's a coincidence because a fellow blogger (hi Susan!) did undergraduate research on the same thing, and had named her blog Intrinsically Knotted. Click the link for an explanation of what that is. The presentation I saw was about proving graphs to be "minor minimal"--that is, removing any edge of the graph will remove the intrinsically knotted property.

In summary, research in physics and math is a lot of fun. And if you like, you can extrapolate this hypothesis to other fields represented at the conference, like chemistry, biology, or even the social sciences.

Friday, August 21, 2009

Physicists are dreamin'

I will neither confirm nor deny having participated in the making of this video.

Friday, July 24, 2009

Classifying Exciting and Boring

My summer research job is all about classifying things. We got this big pile of events, and I want to classify them as Exciting and Boring. So what I do is I go through the events, one by one, inspect them, and then either toss them into the Exciting pile or the Boring pile. Okay, so it's not really that simple. I fiddle around a lot with Receiver Operating Characteristic Curves and Operating Points, and False Alarm Rates and Efficiency, and a whole bunch of other Concepts Which I Capitalize Ironically. (The CWICI will only get worse as I go on.)

Which is all to say, my thinking as of late has been colored by the science of classification. The other day, I classified some pens into the Out Of Ink or Just Needs To Be Shaken categories. Then I classified the different vegetables in my vegetable soup as Too Much or Not Enough. I took a list of books I wanted and classified them as In The Bookstore or Needs To Be Ordered Online.

Yeah, so I'm not actually doing all this ('twas a joke), but you get my point. Or at least, you get that I have a point.

One thing you realize in the science of classification is that you will always get some of them wrong. Some of those events which I classified as Exciting will turn out to be false alarms. Some events which I classified as Boring are in fact Exciting. In fact, it's sort of arbitrary. I can always choose to be pickier, so that there will be fewer false alarms, but I'll miss more of the Exciting events. Or I can be less picky, accepting more false alarms, but being more sure that I'll catch all the Exciting events.
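Here's a toy version of that trade-off with invented numbers (the real machinery behind this is the Receiver Operating Characteristic curve, which traces out every possible operating point):

```python
# Made-up "loudness" scores for events we secretly know to be Boring
# background and truly Exciting events.
boring = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
exciting = [2.8, 4.0, 5.0]

def rates(threshold):
    """Fraction of Boring events wrongly kept, and Exciting events caught."""
    false_alarm_rate = sum(x > threshold for x in boring) / len(boring)
    efficiency = sum(x > threshold for x in exciting) / len(exciting)
    return false_alarm_rate, efficiency

# Picky threshold: no false alarms, but we miss an Exciting event.
print(rates(3.7))
# Lenient threshold: catch everything Exciting, at the cost of false alarms.
print(rates(2.6))
```

No single threshold is "correct"; you just slide along the trade-off and pick the operating point you can live with.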

The tricky thing is, we don't really know how many events are actually Exciting. That's what we're trying to figure out! Maybe none of them are Exciting, and all I'm looking at are false alarms. How would I know? What's the False Alarm Rate when nothing is truly Exciting?

In my research, we have a very complex and precise way of estimating the False Alarm Rate.* But sometimes, we are not so lucky to have such a method available to us. So we make do with slightly less precise arguments.

*It probably involves time travel and robots.

For instance! UFO sightings: how many of them are Exciting Aliens, and how many are Overexcited Earthlings? UFOlogists assert that the False Alarm Rate is low, and that at least some sightings are Exciting Aliens. Skeptics assert that the sightings are well within the False Alarm Rate. How do we know which it is? Well, astronomers tend to inspect the sky much more often and more carefully than lay people. So astronomers would tend to have a smaller False Alarm Rate, while at the same time catching many more of the Exciting Aliens, if they indeed exist.

There are far, far fewer UFO sightings among astronomers than among lay people. This indicates that UFO sightings are well within the False Alarm Rate, and there may be no Exciting Aliens to speak of. But hey, all these Overexcited Earthlings are pretty cool and interesting in their own way.

Of course, I stole this argument from Phil Plait, the Bad Astronomer. And then I made it a whole lot more arcane and technical. Fun times.

Monday, July 6, 2009

Some crazy LIGO

Everyone is asking me, "Hey mr. miller, what crazy things are you doing this summer?" Well, as I've already let slip, I'm working on LIGO, the Laser Interferometer Gravitational Wave Observatory. We're looking for gravitational waves. Specifically, I'm in the group which looks for gravitational waves which come from compact binary coalescences (CBCs). That basically means when two black holes smash together.

Have we found any gravitational waves yet? Well, the other day, I was at Chandler Cafe, and I found one in my noodles. It looked sort of like this:
Graphic made using a Mathematica Demonstration

Unfortunately, I don't really own a camera, and I was hungry. So I slurped it up, and it made a sound like this: voooooooooooooooooooouP (also available as mp3, from "Gravitational Wave Sounds"). I guess we'll just have to find another one now, huh?

A slightly more serious answer: I couldn't tell you even if we had seen anything. It is "privileged" information. Exciting! But let me say this: We have a fairly good idea of the density of binary neutron stars and black holes, and how "loud" they would be when they merge. So we can calculate the expected rate of detection. By one estimate (see arxiv), the expected detection rate of neutron star mergers is once per two hundred years of observation (probably even smaller for black hole mergers). Basically, we don't expect to see anything. The real excitement will occur when "advanced LIGO" starts in 2013, increasing the expected detection rate to about 20 per year.

Of course, there are other sources of gravitational waves--Gravitational Wave Pulsars, Big Explodey Things, etc.--so maybe we'll see some of those. I think these other sources aren't as well understood, so we don't have such precise estimates on their expected detection rates. So who knows, we may be lucky.

And if it turns out we're lucky, you probably wouldn't notice right away. LIGO data is littered with what we call "non-Gaussian noise", meaning that every so often, there's a data glitch, causing the measurement to jump up by some really high number. These glitches look like gravitational waves; the computer has trouble telling the difference. And there are so many of them. We toss the glitches through every statistical filter we can think of, and we're still flooded with them.

But we still have some tricks up our sleeves. I'm working on one of those tricks. What I do is give the computer a bunch of false signals and a bunch of "real" signals (which are inserted artificially). Then the computer uses these to learn the difference between the two. It's basically Skynet, except it's not even remotely like Skynet.

Instead, I would analogize it to a tree. You throw a bunch of apples and oranges at the tree, and then the tree tries to tell a supercomputer what the difference is between a fruit and a black hole. (I am joking! Don't take my analogy too seriously. It's not really like a tree at all; if anything, it's a forest.)
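In slightly more earnest terms: you show the computer labeled examples, and it learns a rule for telling the classes apart. The sketch below uses a toy nearest-centroid rule with invented numbers, just to show the train-then-classify flavor; the actual analysis grows a forest of decision trees over many more parameters.

```python
# Toy training data: artificially injected "signals" and known "glitches",
# each described by two made-up parameters.
signals = [(5.0, 1.0), (5.5, 0.8), (6.0, 1.2)]
glitches = [(1.0, 4.0), (1.5, 4.5), (0.8, 5.0)]

def centroid(points):
    """Average position of a set of training examples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Training": summarize each class by its centroid.
sig_c, gli_c = centroid(signals), centroid(glitches)

def classify(event):
    """Label a new event by whichever class centroid it sits closer to."""
    d_sig = sum((a - b) ** 2 for a, b in zip(event, sig_c))
    d_gli = sum((a - b) ** 2 for a, b in zip(event, gli_c))
    return "signal" if d_sig < d_gli else "glitch"

print(classify((5.8, 1.1)))
print(classify((1.2, 4.2)))
```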

So basically, if you want to know what I'm doing this summer, you can visualize me tossing a bunch of black holes at trees in hopes of finding delicious spaghetti. That's more or less the right idea.

Monday, May 18, 2009

Great spiraling black holes!

Around this time is when you start to hear about everyone else's exciting plans for the summer. Hey, wait, I have one of those too! I got a research job at Caltech working with LIGO, the Laser Interferometer Gravitational Wave Observatory. It's probably not as glamorous as it sounds, but boy does it sound awesome.

Let's begin with the observatories. There is one observatory in Louisiana with a single detector, and another in Washington state with two. The gravitational wave detector consists of two lasers which go in perpendicular directions. Each laser is 4 km (2.5 miles) long, and encased in a vacuum pipe. Once the lasers have bounced back and forth in their tubes many times, they recombine and interfere with each other. By looking at the interference pattern of the lasers, we can determine the difference in length of the two laser paths. And by that, I mean we can measure the difference very sensitively, down to 10^-18 meters. That's about a thousand times smaller than a proton.

One of the Washington detectors. Credit: NASA

Why do we want to measure so sensitively the length of a laser path? It all goes back to Einstein.

Albert Einstein is most famous for his theories of Special Relativity and General Relativity. Special Relativity describes how physics behaves when things move near the speed of light. General Relativity is the theory which incorporates both Special Relativity and gravity. In fact, General Relativity is the theory which replaces the classical theory of gravity. The classical laws are very accurate under most conditions, but are decidedly incorrect near very massive objects and when things are moving near the speed of light.

In a way, it's rather surprising that General Relativity and classical gravity could possibly be describing the same thing. Classical gravity describes everything in terms of forces. General Relativity describes gravity as a distortion of the geometry of space-time. In other words, gravity influences the distances and time-intervals between different events. These small distortions cause a straight path through space-time to appear curved, as if it were acted upon by some force.

One of the predictions of General Relativity is the existence of gravitational waves. Gravitational waves are analogous to electromagnetic waves (aka light). Electromagnetic waves are fluctuations in the electric and magnetic fields. Gravitational waves are fluctuations in the geometry of space-time. Electromagnetic waves are created whenever an electrically charged object accelerates. Gravitational waves are created whenever a massive object accelerates. Both kinds of waves are characterized by a frequency, which tells you how quickly the waves fluctuate. If a gravitational wave passes through the LIGO detector, it will cause the two laser arms to fluctuate in length. If the gravitational wave has a frequency of 40 Hz, then the lengths will fluctuate 40 times per second.
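A quick back-of-the-envelope, using the numbers from earlier in this post: the "strain" of a gravitational wave is the fractional length change it produces, so 4 km arms and a measurable displacement of about 10^-18 meters imply a smallest detectable strain of roughly:

```python
# Back-of-the-envelope: strain h is the fractional arm-length change dL / L.
# Numbers from this post; this ignores the laser's many bounces, which
# effectively lengthen the arms and improve the sensitivity further.
arm_length = 4000.0        # meters: one LIGO arm
min_displacement = 1e-18   # meters: roughly the smallest measurable change

min_strain = min_displacement / arm_length
print(min_strain)  # on the order of 10^-22
```

A strain that tiny is why only the most violent events in the universe stand a chance of being detected.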

LIGO is only sensitive enough to detect gravitational waves with frequency 40 Hz or higher. At lower frequencies, it becomes too difficult to distinguish between gravitational waves and regular old earthquake activity.

What could possibly cause a gravitational wave of more than 40 Hz? Gravitational waves are caused by accelerating massive objects. For example, the earth is constantly accelerating towards the sun because it is in a circular orbit. But this should only cause gravitational waves with frequencies of about one cycle per year. However, we might be able to detect orbiting objects if they are orbiting much faster than the earth. One of the objects we are interested in is the binary black hole* system. Black holes are very massive objects, and also very small. So two black holes could be orbiting very quickly and closely to each other. If a pair of black holes is what it takes, then let's look for black holes!

*It could also be any other type of massive astrophysical compact halo object (MACHO), like a neutron star.

One other thing about gravitational waves, is that they carry energy, just like light does. As two black holes orbit each other, they emit energy in the form of gravitational waves. This causes the black holes to slowly lose energy, falling slowly towards each other. Because they're closer together, the "force" of gravity is stronger, and they orbit faster and faster. The picture we have here is of two black holes, spiraling around each other, getting closer together and moving faster. Eventually, they collide, coalescing into a single black hole. When there is only one black hole left, it no longer emits gravitational waves, and its signal disappears.

This could really use some animation. So I found some animations on the net from the Numerical Relativity Group.

The detection of gravitational waves is not only a way to test Einstein's theory of General Relativity under new conditions, it is also a new way to do astronomy. It's much like how we build telescopes to detect electromagnetic waves from far away sources. We can use gravitational waves to detect objects like binary black holes, as well as exploding stars, and a certain kind of pulsar. Scientists are also trying to detect something analogous to the cosmic microwave background radiation, only it would be cosmic gravitational background radiation. It would be very difficult to detect, but it comes from a very early point in the universe's history, far earlier than even the microwave background radiation.

Friday, May 30, 2008

I've been published!

Good news! For me!

I now have a single-authored research article published in an undergraduate science journal! I am quite happy about this. The paper has something to do with magnetospheric waves that are thought to energize electrons in the Van Allen Radiation belts.

The natural follow-up question is, does this make me a scientist? I like to think that it does.

Thursday, March 6, 2008

Impersonal science writing

I've got my first single-authored science paper in the review process right now. It is awesome probably. Guess what it's about. Ok, ok, I'll give it away... partly. It's about magnetospheric waves. But that's not what I'm going to talk about.

One of the revisions that they asked for was to remove all first-person mentions from the paper. I disagree with this idea. I'm not angry at the reviewers or anything. I suspect they're just students who are following guidelines, or doing what their past teachers have taught them. In truth, I didn't even hesitate to follow through with the changes, because I don't really care. But I'll put on my internet-angry face just for flavor. >:-( *wink*

So there's this idea that in science writing, you never use the first person. The first person is the use of pronouns like "I" or "we". The second person is "you". And the third person is "he", "she", "the experimenter" or "the present author", etc. The reason you're supposed to avoid the first-person is to make science look more objective, and to place less emphasis on the people doing science.

I disagree because, for one thing, it only creates the illusion of objectivity. Merely changing around the sentence structure of your paper cannot change how objective the research actually was; it only makes the research appear objective. And how objective is the research, exactly? The research was, after all, performed by a person or persons. Should we be trying to make the research look as objective as possible, or as objective as it really is? Trying to make a study look more objective than it really is just smacks of subjectivity.

On the other hand, I understand wanting to place less emphasis on the people. When you read a paper, you don't particularly care which scientists wrote it, nor which students did the grunt work. The study should be replicable, meaning that any other scientist can try the same experiment and get the same results. If other scientists can't replicate the work, its conclusions are called into question. Furthermore, prescriptive rules are entirely appropriate for technical writing. A uniform writing style makes for clearer communication. Scientists basically deal with information, so clear communication is essential. Individualistic styles, though flavorful, might be harmful to clear communication.

But what is the alternative to using the first person? There are two major alternatives. One is to use the third person, and the other is to use the passive voice.

In the third person, I would refer to myself as "the researcher", "the programmer", "the present writer" or something along those lines. The present blogger thinks that's pretty awkward. And is it really any better than the first person? I'm still mentioning that there are *gasp* people involved in my scientific research. So how exactly does this make it more objective, and how does it deemphasize the people involved? Well, it's true that "he" sounds more objective than "I", but this is, again, mostly an illusion. And I suppose this makes it easier to conceive of a replication of the study, since you only need to replace "the researcher" with a researcher of your own. Still, using the third person just sounds weird.

The other alternative is the passive voice. In the active voice, I would say, "I analyzed the data," "You do not amuse us," or "This researcher dislikes the third person." In the passive voice, I would say, "The data was analyzed by me," "We are not amused by you," or "The third person is disliked by this researcher." The basic idea is that the thing that is [verb]ed becomes the subject of the sentence. The advantage of the passive voice is that you can omit any mention of who or what is doing the [verb]ing. I can say, "The data was analyzed," "We are not amused," or "The third person is disliked." So this is the primary way to avoid the first person.

The problem is that the passive voice is sometimes very awkward, and some people are adamantly opposed to it. Some crazy people even think the passive voice should never be used. Orwell once said: "The passive voice is wherever possible used in preference to the active," apparently expressing extreme disapproval of the same passive voice he had just used. (An aside: someone way back asked me what I thought of Orwell's essay on language, so there you go.) I don't think it's a deadly sin to use the passive voice, but using it all the time is just as bad as using the active voice all the time. If you're forced to always use the passive voice, occasionally you'll get sentences that are suboptimal. Case in point, convert one of my earlier sentences into the passive voice: "Ok, ok, it will be given away (by me)... partly."

So, you see, writers are stuck between a rock and a hard place. Using "I" doesn't give us that oh-so-important illusion of objectivity. Using "the researcher" is awkward and contrived. Using the passive voice is aesthetically displeasing, and widely disapproved of. You can't satisfy everyone.

I prefer using the first person, because creating an illusion of objectivity seems relatively unimportant. I think nowadays, most science journals no longer require avoiding the first person. But it seems that this particular undergraduate science journal does. Of course, even I flinch at the mention of "I" in formal writing, so I prefer "we" instead. Then again, that doesn't make much sense in a single-authored paper. We guess we're using the royal "we"? In the end, I simply switched to the passive voice. It's not a big deal, really.

Sunday, December 9, 2007

Science is social

For reasons that will remain unexplained, I've been watching a Japanese show about a physicist who solves crimes using his amazing physicist abilities. The physicist sometimes fulfills the scientist stereotype, and sometimes subverts it. For example, in one episode, another protagonist tells him that he is more interested in numbers than people. Later, the physicist explains that scientists have a dull life (what!?) and seldom meet people, but are not anti-social.

I appreciate the effort to humanize scientists, but the characterization is still false, at least in my experience. I've found that scientists in fact meet lots of people. We basically deal with information and knowledge, so we must perpetually be in contact with other people. I only do a small amount of research, but I still need to meet once a week. The rest of the week, I make do with lots of e-mails.

Secondly, scientists do not spend most of their time in a dim lab repeating the same experiment over and over. Relatively little time is spent in lab--much more time is spent analyzing the results of experiments than actually performing them. In fact, scientists get to travel a lot. They often go to science conferences, which are all over the world. My professor goes to several of these a year. He also travels to see nearly every solar eclipse, and travels to set up or check on magnetometers, which are placed all over the world. Indeed, traveling is so important to science that it is a force for peace between nations. Go science!