Some Quick Links to Interesting Reads

In my class this semester, we’ve had occasion to discuss sustainability. I’ve recently run across two articles that highlight some of the issues here:

First off, an article about the perceived trade-offs between progress in our living standards and maintaining the environment…are the trade-offs as stark as some would suggest?

And then, a great blog post about the limits of growth as an economic strategy…the physical limits that is:

This latter article highlights something that has bugged me for a while: the lack of any reference to physical constraints in standard models of economics (at least those that I’m aware of).

Also, I just returned from a very nice trip to the University of North Carolina – Charlotte (UNCC). I met with Dr. Greg Gbur, whose blog I have now added to my increasingly alarming blog addiction. The blog is

Check it out! It’s an excellent mix of science, history, and fiction. I particularly enjoyed the following post on the many failed atomic models that preceded Niels Bohr’s:

Causality: An Addendum

Not long after the last post, I ran across this one:

in which Michael Nielsen grapples with something called the “causal calculus”. At the risk of massively oversimplifying, the key idea seems to be to construct a causal model for some system, and then to introduce a more refined type of conditional probability. The new type of conditional probability involves the probability of X given do(Y), where do(Y) means that we are interested not merely in instances of X occurring given that Y occurred, but in instances of X occurring when we *make* Y occur. What this means is that we imagine that we can manipulate the causal structure of our model so that occurrences of Y are externally controlled and disconnected from the “endogenous” factors that influence it in the model.
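At the risk of piling one simplification on another, here is a minimal sketch of the distinction in Python. The model, including the hidden factor Z and every probability in it, is entirely made up for illustration: Z influences both Y and X, and Y also influences X.

```python
# Toy structural causal model (all numbers invented for illustration):
#   Z -> Y   (a hidden common cause influences Y)
#   Z -> X   and   Y -> X
def p_z(z):                # P(Z = z)
    return 0.5

def p_y_given_z(y, z):     # P(Y = y | Z = z)
    p1 = 0.8 if z else 0.2
    return p1 if y else 1 - p1

def p_x1_given_yz(y, z):   # P(X = 1 | Y = y, Z = z)
    return 0.1 + 0.3 * y + 0.5 * z

def p_x1_given_y1():
    """Ordinary conditional P(X=1 | Y=1): we merely observe that Y occurred."""
    num = sum(p_z(z) * p_y_given_z(1, z) * p_x1_given_yz(1, z) for z in (0, 1))
    den = sum(p_z(z) * p_y_given_z(1, z) for z in (0, 1))
    return num / den

def p_x1_do_y1():
    """Interventional P(X=1 | do(Y=1)): sever the Z -> Y arrow and *make* Y = 1."""
    return sum(p_z(z) * p_x1_given_yz(1, z) for z in (0, 1))
```

In this toy model, observing Y = 1 makes X = 1 more likely (0.8) than forcing Y = 1 does (0.65), because merely observing Y also tells you something about the hidden Z.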

Interesting Things I’ve Been Reading: Causality Edition

This post is the first in a series that will link to the things I’ve read in the past year or so. I’m going to try to group these thematically, and why not start with a big picture issue: causality.

Causality seems to be embedded in the world around us, but this could be largely due to its being deeply embedded in our psychology. Empirically speaking, establishing a causal relationship between phenomena is actually quite tricky since there is no completely logical way to do so! If event B always follows event A, all you have established definitively is that, up until now, B has followed A. There’s no guarantee that such a relationship will continue. Even if it were to continue, there may be some hidden C that is causing A first and B next, so that the two always occur in sequence, but are not, in fact, causally related. This perspective on causality is essentially the one described by David Hume. For a more detailed discussion, check out

The econ-blogger Karl Smith takes a view that attempts to simply dodge the whole thorny business:

What he is describing in the post above is a deterministic view of the universe, and in fact an essentially classical (that is, non-quantum) one: a “block universe.” Patterns in time, including the things we interpret as cause-and-effect relations, are then simply a form of geometrical shape that we are forced to map as we proceed through the universe’s time dimension.

Of course, quantum mechanics radically alters this perspective. If you regard the collapse postulate as an approximate description of decoherence, then a global view of some large-scale system will probably look more like the many-worlds interpretation of quantum mechanics. That definitely doesn’t look like a block!

But without getting too exotic, the issue of causation versus correlation is extremely important. We tend to want to read causation into correlations, but we have to be careful. Here’s a good article describing problems this gives rise to in medicine:

Depending on my energy level, I may post some more about causality. But this is enough for now.

Interesting Things

It’s been a while since I last posted anything. Here are some interesting articles I’ve seen over the last few weeks:

Kepler has found a lot of candidate planets!

The role of a gene in the level of aggression in mice. Interestingly, in Frontiers we’re going to look at some work by Caspi on genes for the MAOA protein, child abuse, and the propensity to develop abusive or anti-social behavior. I wonder if there is a connection here…

Nabokov was an avid butterfly expert. Now some of his ideas appear to be vindicated.

We just recently looked at laser diffraction in class and some of the quantum mechanical ideas underlying it. One of my students brought this article to my attention: meet the anti-laser.

And while we’re talking about quantum mechanics, a friend sent me word of this article a little while ago. Apparently there is evidence that birds can sense very weak magnetic fields thanks to a molecule in their eyes that exhibits quantum entanglement effects.

I want to talk about quantum erasers and delayed choice experiments, but I haven’t the time right now. Here are the Wikipedia entries to whet your appetites:

1. Erasers

2. Delayed Choice

How to Detect Things Without Interacting With Them, and Other Things to Bend Your Mind Around (Also, Study Tips)

I’ve recently been thinking a bit about a really cool idea that may have important applications to quantum computing and other things. The idea goes by the name “quantum interrogation” or occasionally “interaction-free measurement”. Basically, you use the fact that quantum particles can interfere with themselves as they travel unobserved, since (to simplify immensely) particles that are not interacting strongly with other things probe the space around them rather like waves do.

So, you set up a situation where the particle splits along two tracks. Quantum mechanically we can think of the particle as going along both at the same time. At some later point, the particle rejoins itself and is detected. The trick is now to place an obstacle in one of the paths so that the particle can no longer “rejoin” with itself—i.e. you block its ability to interfere with itself. So even if the particle truly ends up going along the other track (the one that the object is not obstructing), you will actually get a different result if you set up your detectors correctly. There is a way to bootstrap this procedure to essentially ensure with arbitrarily high probability that the particle never ends up taking the track that is obstructed, and yet, you can detect the presence of the object precisely because of the lack of particle self-interference.
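One standard way to make this concrete (a textbook Mach–Zehnder interferometer, not any particular lab setup) is to track the two path amplitudes through two 50/50 beamsplitters. With nothing in the way, one detector, the “dark port”, never fires; block one path, and the dark port fires a quarter of the time, revealing the obstacle even though the detected particle never touched it.

```python
def beamsplitter(amp):
    """50/50 beamsplitter acting on (upper, lower) path amplitudes,
    with an i phase on reflection (a common convention)."""
    u, l = amp
    s = 2 ** -0.5
    return (s * (u + 1j * l), s * (1j * u + l))

def probabilities(blocked):
    """Return (P_dark, P_bright, P_absorbed) for a particle entering the upper port."""
    amp = beamsplitter((1.0, 0.0))   # first beamsplitter: split onto both tracks
    absorbed = 0.0
    if blocked:                      # an object sits in the lower track
        absorbed = abs(amp[1]) ** 2  # chance the object absorbs the particle
        amp = (amp[0], 0.0)          # surviving amplitude can no longer interfere
    u, l = beamsplitter(amp)         # second beamsplitter recombines the tracks
    return abs(u) ** 2, abs(l) ** 2, absorbed
```

With no obstacle, interference sends everything to the bright port: (0, 1, 0). With the obstacle, the probabilities become (0.25, 0.25, 0.5), so a dark-port click unambiguously signals the obstacle without any interaction. (The bootstrapping mentioned above, due to the quantum Zeno effect, drives the detection probability toward 1.)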

The idea is Elitzur and Vaidman’s and has been experimentally implemented by Kwiat and collaborators. Take a look at Kwiat’s explanation of these ideas. For a very nice cartoon explanation that gets at the heart of it (it involves adorable puppies), look at Sean Carroll’s explanation on Discover’s Cosmic Variance.

Continuing along our quantum theme, there was a relatively nice article in the New York Times Science section about quantum computation recently.

And in my quantum particle-like walk around the internet, I came across a Scientific American article about plants using quantum entanglement to transport energy around. This is really neat since it is a clear example of biological systems exploiting quantum effects in an explicit manner—I mean, covalent bonding of atoms is a quantum effect, but as you go up the ladder of distance scales, truly weird quantum stuff becomes harder to achieve and more washed out. Things usually just average out into classicality. So the fact that some plants use entanglement in a way that is essential for energy transport is really cool.

Also, lots of theories that aim at answering fundamental questions about the universe appear to be yielding different versions of the idea that our universe may be a lot more varied than we currently observe. Some like to refer to these notions as the “multiverse” concept. Anyway, listen to Brian Greene on Fresh Air talk about these ideas and his new book.

Speaking of alternate worlds, here’s a really amusing short video about the very strange world of Ms. Wind and Mr. Ug. Can you figure out what’s going on here? (thanks to Cosmic Variance for this!)

And that reminds me—I probably posted this a while back, but it’s really worth posting again. These are videos that act as tutorials for understanding various amazing mathematical concepts about geometry (including higher dimensions). Note that you can navigate to different videos by using the arrows that appear when you wobble the mouse over the video itself.

And, to mark the return to teaching Frontiers of Science this semester, enjoy the latest in our attempts to understand the best techniques for learning stuff. Here’s a helpful write-up in the New York Times.

(Mind you, I think a little too much is made of the testing angle of this. Perhaps that is important, but after chatting about this with my wife, I bet that the key here is getting the learners to actively recreate what they are supposed to be learning without reference to the source material. My guess is that this forces the learner to “own” the material themselves. I think it’s part of the reason that you become much better at a subject when you are forced to give lectures on it.)

The website Edge often posts intriguing questions. A pretty large cross-section of scientists and other thinkers then post their answers. The most recent question is: What scientific concept would most improve everyone’s scientific toolkit?

I’m not thrilled with the phrasing, but the answers are quite interesting. Three of my favorites were

Supervenience – Joshua Greene

Duality – Amanda Gefter

Science’s Methods Aren’t Just for Science – Mark Henderson

Taking the last one first: on the one hand, this is pretty obvious. On the other hand, I do think it is widely underappreciated! Part of the goal of the course I teach at Columbia, Frontiers of Science, is to equip our first-years with some of the most basic tools that scientists use all the time so that they can apply them broadly.

Now, duality is a great answer. It’s one of the most fascinating things nobody in the broader public really knows about. The earliest examples of duality—in the sense meant by Amanda Gefter—came from studying idealized physics models in two dimensions (one space, one time). These are rather near-and-dear to my heart since my PhD studies focused on these types of models. The key idea here is that two radically different physical theories can actually be descriptions of the same underlying system. Read the page for a bit more…I actually think it’s about time I wrote something discussing duality myself. Hopefully I can do so in the near future.

Finally, the philosophical concept of “supervenience” is a great choice that more people should know about. It’s a rather rigorous way of formulating the intuition that “higher level” things rest on “lower level” foundations. For example, an object’s temperature is a higher level property that arises from the random jiggling motion of the atoms that make up the object. The key point is that the precise way that the jiggling happens doesn’t really matter since the temperature is a sort of averaging out. However, if the temperature of the object changes, this inevitably must be reflected at the level of the jiggling—in particular, if the object gets hotter, the jiggling gets more violent. We say that the temperature supervenes on the underlying motion of the atoms.
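A quick numerical cartoon of this (with made-up units and a crude “temperature” proxy, namely mean kinetic energy per unit-mass particle): two completely different jiggling patterns realize essentially the same macro-level temperature, yet making the object hotter necessarily shows up as a change in the jiggling.

```python
import random

def temperature_proxy(speeds):
    """Macro-level property: mean kinetic energy per particle (unit mass)."""
    return sum(v * v for v in speeds) / (2 * len(speeds))

random.seed(42)
# Two different microstates: the individual atomic speeds differ entirely...
microstate_a = [random.gauss(0, 1) for _ in range(100_000)]
microstate_b = [random.gauss(0, 1) for _ in range(100_000)]
# ...yet they realize (nearly) the same macro-level temperature.
# Conversely, a hotter object *must* jiggle differently: uniformly more
# violent motion raises the macro-level property.
hotter = [1.5 * v for v in microstate_a]
```

Many microstates, one macro-value; no change in the macro-value without a change in the microstate. That is the supervenience relation in miniature.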

There seems to me to be a tension between the notions of duality and supervenience. To say something supervenes on something else implies that the something else is more fundamental—in fact, it could be taken as a definition of what it means to be more fundamental. However, one implication of duality is that fundamental entities in one description of the system actually become non-fundamental in the other. In other words, fundamentality is a property of how you describe the system. Now, there may be some systematic way of choosing which description is best in a given circumstance, but as far as I can tell, that doesn’t really get you around the problem. I haven’t seen anyone try to tackle this meaningfully…

Conifolds and Tunneling in the String Landscape: Part 4

The ever-lengthening series continues!

Part 1 gave an overview of the two main themes of our investigation of tunneling in the String Landscape—the collection of possible configurations that string theory’s extra dimensions can find themselves in, including certain flux fields through the extra dimensions.

In part 2 I told a more detailed story of what it means to curl up extra dimensions and I explained the notion of a flux that can keep the sizes and shapes of these dimensions stable.

In part 3 I explained a very special configuration called the “conifold.” When the extra dimensions are wrapped up in the shape of a Calabi-Yau manifold, there are extra parameters that control certain aspects of that shape. The conifold configuration of the Calabi-Yau manifold represents a specific choice of parameter values at which some part of the space collapses to zero size. If you had a little six dimensional ant that lived in the Calabi-Yau, it would experience the part of the space near the collapsed region as appearing to be like a six dimensional cone.

We now have all the ingredients for understanding what is meant by “String Landscape.” The extra dimensions are curled up into a Calabi-Yau shape. Generalized electric and magnetic fields pierce through parts of this shape to give you fluxes. For a given Calabi-Yau there are huge numbers of possible fluxes to put on the shape—that large number of flux configurations is usually what is referred to as the string landscape. It’s a landscape when you consider the energy function, which will look very bumpy with lots of peaks, ridges, valleys, and wells. The model universe will want to settle down in a well of low energy, much as a ball will roll into the valley between two peaks rather than stay teetering at the tip of one of them.

So if we are able to map out some of this landscape, we should be able to see which flux configurations are favored by string theory for that particular Calabi-Yau. We simply search for the lowest energy wells in the landscape of energy that describes the various flux configurations. Interestingly, the landscape for the simple models that we investigated in our paper has certain patterns. Energy wells that the universe would like to settle into tend to appear periodically as one travels around and around the conifold configurations of the extra dimensions. Thus, if you find an energy well somewhere in the landscape, and then you—the intrepid explorer—wind around the conifold point, then you will often find another energy well after making roughly a 360 degree turn about the conifold point.

This pattern presents us with an opportunity to explore the details of the following scenario applied to string theory. In a quantum system, objects will not usually just stay forever in a given configuration, even if it has a low energy and is at the bottom of an energy well. Quantum mechanics implies that systems will jitter and jiggle around such configurations. Usually these jitters are very small, but very very rarely, such a random jitter can drive you out of the low energy configuration you are in and into a neighboring one over the mountain ridge dividing the two.

This situation is often illustrated using the example of a particle in a box. In quantum theory, a particle isn’t described simply as a point in space. Instead it has associated with it a wavefunction that, for our purposes, tells us the probability of finding it at any given point. In this way, quantum theory tells us that even though when we observe them in space, particles look like little points, in order to understand how they move through space, we must model them as somehow extending out over all space in this wave-like manner.

So imagine placing a particle into a box and enclosing it tight. If you make the box out of some sort of impenetrable material—impermium let’s call it—then you know that the wave describing the particle is completely contained inside your impermium box. This means that sometime later, when you open the box up and try to locate the particle, you will inevitably find it somewhere inside the box.

But impermium is not a real material. Real materials always have some degree of permeability. So, if you build an almost impenetrable box and put the particle in, what happens? Well, sometime later you are likely to find that the particle is still in there when you open it up again. However, there is a tiny chance that the particle could have slipped out or “tunneled” through the box’s walls to escape. This is because a penetrable box allows the wavefunction of the particle to leak out a very slight amount. So despite having been initially placed in the box, there is a tiny but non-zero probability that it will get out after some time has passed.
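The leak-out probability can be made quantitative. For the standard textbook case of a particle with energy E meeting a rectangular barrier of height V0 > E and width a, solving the one-dimensional Schrodinger equation gives a transmission probability T = 1 / (1 + V0^2 sinh^2(ka) / (4 E (V0 - E))), where k = sqrt(2m(V0 - E)) / hbar. A rough sketch in Python (the example numbers are invented for illustration, not drawn from any particular experiment):

```python
import math

# Standard physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electron-volt

def transmission(energy_ev, barrier_ev, width_m):
    """Transmission probability for an electron of energy E < V0 hitting a
    rectangular barrier of height V0 and width a (standard textbook result)."""
    e, v0 = energy_ev * EV, barrier_ev * EV
    kappa = math.sqrt(2 * M_E * (v0 - e)) / HBAR   # decay rate inside the wall
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (v0 ** 2 * s ** 2) / (4 * e * (v0 - e)))
```

For a 0.5 eV electron meeting a 1 eV barrier one nanometer thick, this comes out small but decidedly non-zero (of order one in a thousand), and doubling the wall thickness suppresses the escape probability dramatically, which is why a thicker impermium-like box holds its particle so much better.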

Well, energy landscapes for general quantum systems behave in an analogous manner. If a system finds itself in a valley, but one of the mountain ridges is adjacent to another valley, then there is a tiny but non-zero probability that the system can tunnel out from its low-energy configuration described by the first valley and hop into the other low-energy configuration described by the adjacent valley. Note that this would not be possible if there were no quantum mechanics since there would be no quantum jitters that could drive the system up the mountain side and down into the adjacent valley.

So what does it look like when you take your quantum system to be the universe and you think about it as tunneling between two adjacent energy wells separated by mountain ridges in an energy landscape? In the simplest cases the transition would occur in some local region of the universe and it would look like a little bubble appearing. Within the bubble you have a state of the universe that is in the adjacent energy well. Outside the bubble is the state of the universe in the original energy well. Naturally this implies that the bubble has a wall at which the two states interface with each other.

If the energy state inside the bubble is higher than the one outside the bubble, then pressure from the outside will collapse the bubble and nothing much will change. But if the energy inside the bubble is lower, this will put pressure on the bubble wall to expand. This pressure constantly accelerates the bubble wall until it expands at close to the speed of light, so eventually what you get is an ever expanding region of the low energy state eating up the surrounding parts of the universe that are in the original energy state.

Note that the wall of interface between the inside and outside of the bubble is itself physically interesting. It is a spherical membrane that expands outwards and it could very well have its own dynamics. In other words, a more careful analysis that allows for complications would involve understanding how the membrane wall ripples and fluctuates as it interacts with things outside of the bubble (and also just due to its own quantum mechanical jitters). The creation of the bubble can thus be understood as equivalent to the spontaneous creation of the bubble wall as a dynamical, physical object in its own right.

This may sound rather farfetched, but there are real life examples of such processes. If you take two plates of opposite charge and position them so that they are parallel to each other you have formed a system called a capacitor. There is an electric field that goes from the positively charged plate to the negatively charged plate. If you crank up the strength of this field, you will eventually get to a point where quantum mechanical fluctuations become important. In particular, quantum mechanics tells us that pairs of electrons and anti-electrons are constantly being created and then annihilate each other everywhere in space. Usually these pairs are “virtual” since they come into and out of existence so quickly nobody can directly observe them. However, in a strong enough electric field, the pair can spontaneously come into existence—similarly to the bubble wall from before—and then be accelerated in opposite directions—the negatively charged electron will fly toward the positively charged plate while the positively charged anti-electron will fly into the negatively charged plate. This will actually reduce the strength of the electric field since some of its energy will have been converted into the mass and energy necessary for these electron/anti-electron pairs to come into existence and accelerate apart from each other.
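One can even estimate how strong the field must be before this “Schwinger pair production” stops being hopelessly rare. The critical field is given by the standard expression E_c = m_e^2 c^3 / (e hbar), which a quick back-of-envelope computation puts at roughly 10^18 volts per meter, far beyond any static field we can produce in a lab:

```python
# Standard physical constants (SI)
M_E  = 9.1093837015e-31   # electron mass, kg
C    = 2.99792458e8       # speed of light, m/s
E_CH = 1.602176634e-19    # elementary charge, C
HBAR = 1.054571817e-34    # reduced Planck constant, J*s

# Schwinger critical field: the electric field strength at which
# electron/anti-electron pair creation becomes unsuppressed.
e_crit = M_E ** 2 * C ** 3 / (E_CH * HBAR)   # volts per meter, ~1.3e18
```

Roughly speaking, this is the field at which the work done separating a virtual pair over its Compton wavelength pays for the pair’s rest mass, which is exactly the energy bookkeeping described above.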

So, if the universe is described by an energy landscape that determines the fluxes through the extra dimensions, then it likely has many hills, ridges, valleys, and wells. In our simple models, we can indeed find wells that are adjacent to each other by going around the conifold configuration. This allows us to explore what it would be like for a bubble to appear, representing a transition from one energy well to another one across the conifold configuration. As it turns out, the dynamics of this transition are somewhat intricate. The universe cannot spontaneously create a bubble membrane that separates the two configurations. Instead, the extra dimensions have to be able to deform as well. In other words, the tunneling transitions taking our model universes from one flux configuration to another must be accompanied by a sort of dance of the extra dimensions. In particular, they deform so that a portion of them shrink down to very tiny sizes—that is, they approach the conifold configuration—and it is essentially (but not exactly) at that point that the membrane or bubble wall is able to be spontaneously generated. This bubble wall acts a lot like the electron/anti-electron pair in that it absorbs some of the flux from the universe’s original energy well and thus leads to a lower energy configuration inside the bubble with less flux.

This has broader potential implications for understanding how configurations may tunnel between each other in the string landscape. We cannot blithely assume that if two configurations can be connected by some appropriately charged bubble wall then they can simply hop into each other. Instead, the shape and size of the extra dimensions needs to also be taken into account. They will likely have to be able to deform in the appropriate ways so that it becomes energetically favorable to spontaneously generate these bubbles.

This work is part of a continuing effort to understand what we string theorists actually mean when we talk about a string landscape and the way a universe—ours perhaps—could evolve when it is described by such a theory. There are lots of details that I’ve left out and there are many more that need to be included to get a completely satisfying picture. That said, the work highlights some of the immense richness of string theory and suggests some very natural directions for further exploration. In particular, these sorts of tunneling processes ought to also be able to describe the tearing and patching together of the universe into yet more radically different configurations—or at least provide an explanation for why such things cannot happen in string theory.

Phew! Okay! I think that that wraps up my “summary” of recent work. I’m half-inclined to produce a summary of this summary! (But I think I should resist that impulse).

Conifolds and Tunneling in the String Landscape: Part 3

In Part 2, I described how the extra dimensions can be rolled up and stabilized by adding fluxes—generalized versions of electromagnetic fields—through them. In Part 1, I listed the two main themes of the work. We are now ready to start exploring them in a bit more depth.

The first theme involves the search for stable configurations of the extra dimensions. We refer to such a configuration as a “vacuum” since to a first approximation, if the universe settles into such a setup, it looks like a relatively empty four dimensional space. Small fluctuations around this vacuum then describe the various particles that we should observe.

There are two sets of choices to make when curling up the extra dimensions. The first is what shape you want to roll them up into. Think about taking a piece of paper and rolling it up into a cylinder—clearly there really is only one shape although the size of the cylinder is a choice. When you go to higher dimensions, there are many more options—perhaps infinitely many—for how to roll things up. Needless to say, the options in six dimensions are essentially limitless. That said, string theorists typically restrict themselves to rolling the extra six dimensions up on something called a “Calabi-Yau manifold”. Remarkably, even though there are, again, probably an infinite number of these shapes, it has been conjectured that they are all closely related. In fact, Brian Greene and others have shown that certain very simple transformations that slightly alter the topology (the way things are connected) in these shapes can be used to transition between thousands of them. Furthermore, string theory provides a physical mechanism for such changes to occur. Thus, if you choose an initial Calabi-Yau shape as a starting point, there are likely physical processes (in string theory) that let you get to any of the others, so your initial choice wasn’t really much of a choice at all.

But let’s for the moment forget the connected nature of the set of Calabi-Yau manifolds and just pick a specific one. Once you’ve done that, you have another set of choices: you must choose the fluxes that stabilize the size and shape of the Calabi-Yau manifold. A typical Calabi-Yau manifold has hundreds of parameters that set its size and shape, which in turn implies the need for hundreds of fluxes passing through the Calabi-Yau in some intricate manner. In our work, we looked at a family of Calabi-Yaus that are simpler: one can focus on just two parameters that control the geometry of the manifold. Stabilizing these involves setting eight fluxes—a far more tractable problem to study (note to experts: by two parameters, I mean two real parameters. These can be combined into a single complex-valued parameter, which is usually what is done).

So, we’ve picked one of these special types of Calabi-Yaus and we pick eight fluxes. Then we check to see whether the potential energy of the configuration actually has a minimum—that is, is there a minimum energy that the system will naturally settle into, stabilizing the shape of the rolled-up dimensions and leading to an effectively four-dimensional theory? We conduct this search numerically, so we can only look at a finite portion of the potential energy, and we find that whether or not there is a minimum is a hit-or-miss affair.

In general, the trial-and-error process of choosing fluxes and looking for these minima in a numerically generated (i.e. generated by computer) situation is tedious. One of the technical things we do is to automate this by adapting methods previously used to investigate the statistical distribution of these sorts of minima. The key is that certain choices of the parameter that controls the shape of the rolled up dimensions are special. In our examples there are three special choices. Our paper focuses on one such choice—the so-called “conifold point”—although the automated methods for finding minima could be adapted to any of the other special choices as well. The conifold point is a choice of the shape of the Calabi-Yau where part of it has collapsed to zero size. If you look in the vicinity of this collapsed region it resembles a cone, where the tip of the cone is the endpoint of the collapse. It turns out that in the vicinity of this point, we know completely analytically (i.e. via pen-and-paper—no computers necessary) the way the shape of the Calabi-Yau is configured. This lets us do non-computer calculations and allows us to pinpoint energy-minimizing configurations very close to the conifold point.
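As a cartoon of the hit-or-miss part, here is the kind of brute-force scan involved. The “potential” below is a made-up bumpy stand-in, not the actual flux potential: pick some fluxes, sample the potential on a grid over a finite window, and check whether any interior grid point sits below both of its neighbors.

```python
import math

def potential(x, fluxes):
    """Hypothetical stand-in for the flux potential: a bumpy 1-D function
    whose bumps are controlled by a tuple of integer 'fluxes'."""
    return sum(n * math.cos((i + 1) * x) for i, n in enumerate(fluxes))

def find_interior_minimum(fluxes, lo, hi, steps=1000):
    """Scan the finite window [lo, hi]; return an x where the sampled
    potential dips below both neighboring grid points, or None."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    vs = [potential(x, fluxes) for x in xs]
    for k in range(1, steps):
        if vs[k] < vs[k - 1] and vs[k] < vs[k + 1]:
            return xs[k]
    return None   # hit-or-miss: this window showed no minimum
```

For fluxes (1,) the potential is just cos(x): scanning [0, 2*pi] finds the minimum near x = pi, while scanning only [0, 1], where the cosine is still falling, finds nothing, exactly the hit-or-miss behavior of looking at a finite portion of the landscape.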

I should probably back up here to explain something that may be confusing if you haven’t considered such things before. The Calabi-Yau manifold is a six dimensional collection of points, just like the space in your room is a three dimensional collection of points. To locate a point in a Calabi-Yau manifold you must give it a label with six numbers, just as you must give (x,y,z) coordinates to a point in your living room if you want to be able to locate it. There is a parameter that controls certain aspects of the shape of the Calabi-Yau manifold. To make this idea concrete, imagine if the walls of your living room were adjustable, but not independently of one another. You have a dial that can be turned to any value. Given a setting on the dial, your living room walls will twist and stretch, stopping at some preset configuration for the setting on the dial. So, you can imagine that for every number that you can choose using the dial, there is a configuration of the living room walls. The three dimensional space that is your living room is changed by choosing different settings on the dial.

The Calabi-Yau manifolds have a similar setup: any given shape is associated to a choice of two numbers. If you start at (0.421, 2.125) then the six dimensional Calabi-Yau will have some configuration. If you then alter these numbers a bit, changing them to (0.428, 2.121) then the shape of your Calabi-Yau will change a little bit. If you change the numbers a lot then the shape of the Calabi-Yau will differ by a lot. A key point is that even though the shape is changing, the topology—the way the points in the Calabi-Yau are connected—does not change in any substantial way. You are not poking any holes in the manifold, nor are you closing up the ones that may already be there. It’s like stretching and squeezing a donut, but without either breaking it, closing up the hole in it, or poking a new hole into it.

The conifold point is a choice of these two parameters such that the shape of the Calabi-Yau takes on a rather extreme form. A donut is a useful shape to keep in the back of your mind. As you adjust these two numbers so that you get closer to the conifold configuration, part of the Calabi-Yau begins to shrink. If you think of the donut, imagine taking your fingers, wrapping them around part of the donut and squeezing. Assuming this isn’t a crumbly donut, then by squeezing it, you are shrinking the part your fingers are wrapped around while the parts farther away don’t change much. Eventually, you can squeeze the part you are wrapping to a really tiny size—your donut will now look more like a very curved croissant whose ends are touching. This is analogous to choosing the Calabi-Yau parameters so that they are at the conifold configuration. Very near the point on the donut where it is crushed down to tiny size, the shape looks like a cone—that’s why the configuration is called a conifold—it’s a portmanteau of “cone” and “manifold”.
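To put some toy numbers on the donut picture, take an ordinary two-dimensional torus as a stand-in for the six-dimensional Calabi-Yau, with a single made-up “dial” controlling the minor radius:

```python
import math

def cycle_lengths(dial):
    """Toy shape dial for a donut (torus): the major radius R is fixed at 2,
    the minor radius r is set by the dial. The two independent cycles have
    circumferences 2*pi*R (the long way, around the hole) and 2*pi*r
    (the short way, through your squeezing fingers)."""
    R, r = 2.0, dial
    return 2 * math.pi * R, 2 * math.pi * r
```

Dialing the minor radius down from 1 to 0.01 collapses the short cycle by a factor of 100 while the long cycle is untouched; the limit dial -> 0 is the analogue of reaching the conifold point, where one cycle has pinched down to zero size.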

The conifold configuration of the Calabi-Yau—the configuration where it has a pinched point somewhere—is a major part of our analysis of the ways that space in string theory may undergo dramatic changes of shape. But we have to understand how quantum mechanics allows you to “tunnel” through barriers in order to understand this phenomenon. I think that that will have to wait for Part 4!

Happy New Year!

It’s Life, Jim…Maybe As We Know It?

There is a debate (as well there should be!) about the validity of the Arsenic bacteria discovery that I mentioned earlier. Here’s a post detailing one researcher’s analysis and skeptical take on the work that was done:

Any potentially major discovery will and should generate argument and debate. It’s even quite possible that it will prove to be wrong, or that the methods of the original research will be called into question, rendering the work inadequate for showing what the authors claim.

It’s Life Jim, But Not Quite As We Know It

A very exciting development in the world of biology. Apparently, bacteria have been grown that use arsenic in place of phosphorus as one of their basic building blocks. This is a big deal!

You see, up until now, the only life we’ve ever observed uses the following basic elements:

(H)ydrogen, (O)xygen, (N)itrogen, (C)arbon, (S)ulfur, and (P)hosphorus.

But if you look at the periodic table, the element directly below Phosphorus is Arsenic. This means that Arsenic’s outer shell of electrons has the same bonding structure as its upstairs cousin. This in turn means that at least as far as bonding is concerned, it might be possible to take chemicals that involve Phosphorus and to swap it out for Arsenic.

This is what happens with these bacteria: they are put into a Phosphorus-starved, Arsenic-rich environment, and they somehow develop the ability to use Arsenic instead of Phosphorus in basic chemical structures such as DNA!

There have been a lot of stories about this in the news. This one from Wired is at least as good as the others.

Conifolds and Tunneling in the String Landscape, a paper: Part 2

In part 1, I gave the standard background about why one wants to consider “compactifying” string theory. The idea is that string theory is consistent in ten dimensions, but we see four. The way to get from ten to four is to roll up the extra dimensions. However, when you do this, you introduce new particles into your theory that correspond to the sizes and shapes of the extra rolled up dimensions. To fix these sizes and shapes you must give the particles masses, and you do this by turning on certain generalized electric and magnetic fields inside the extra six dimensions.

There’s more background left to cover. Let’s get on with it.

When you have a magnetic or electric field passing through some surface—think of the fields as arrows of force—the amount of field passing through the surface is called flux. The generalized fields in string theory don’t only pass through surfaces: they can also pass through hyper-surfaces (three-dimensional volumes). There are others that can pass through objects of even higher dimension, but these are not important for our purposes.

(ASIDE: What does it mean for a field to pass through a hyper-surface? The best way to think of this is to lower the dimension: imagine a sphere with arrows poking out of it—the number of arrows poking through it is the flux through the sphere. A hyper-surface is then the three-dimensional analog of a surface. You can, for instance, imagine a hyper-sphere: a three-dimensional object with the property that if you travel in any fixed direction for a while—up, down, left, right, forward, backward, or some combination—you always eventually come back to the point where you started.)

The amount of energy that corresponds to different configurations of the rolled up dimensions given a set of fluxes through them can be computed. In fact, the energy can be thought of as a function of the sizes and shapes of the extra dimensions along with the fluxes. This function is called the potential energy.

A simple example of potential energy is the energy it takes to lift a ball from the floor to some height. Each height has its own potential energy, and in general, the higher you lift the ball, the greater the potential energy. Now imagine a terrain of hills and valleys. A ball rolling around in a valley will generally be stuck there—it will roll up against the sloping sides of the valley and, since it only has a finite amount of energy, it may climb a ways, but eventually it rolls back down. So it is stuck in the valley. A ball at the top of a hill, on the other hand, will very soon find itself rolling down the slopes, away from the peak, and eventually into some valley, where it will be trapped. The key here is that the potential energy on peaks is greater than in valleys, and if nothing stabilizes the ball at a peak, it will roll down to a place where it is stabilized—a valley.
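The floor-and-ball example can be made concrete in a couple of lines (the mass, the heights, and the value of g below are illustrative choices of mine, not numbers from the text):

```python
# Potential energy of a lifted ball: U = m * g * h
m = 0.5       # mass of the ball in kg (illustrative)
g = 9.81      # gravitational acceleration near Earth's surface, m/s^2

for h in [0.0, 1.0, 2.0]:     # heights in meters
    U = m * g * h             # potential energy in joules
    print(f"h = {h} m -> U = {U:.2f} J")
```

The higher you lift the ball, the larger U gets, which is exactly why balls roll downhill: they trade potential energy for motion.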

The configuration of the extra dimensions behaves in an analogous way; if the potential energy of a configuration is a “peak” then the configuration will adjust until it finds itself in a well—a region of potential energy where there are hills on all sides. If such a region exists, then the extra rolled up dimensions of ten dimensional space stabilize. That is, a stable balance is achieved between the forces due to the fluxes that are pushing on the extra rolled up dimensions. These wells in the potential energy are called “vacua” or “minima”. If the extra dimensions end up rolled up small enough, then an observer like us will see the world as four dimensional. Furthermore, now that the extra dimensions have been stabilized, the additional particles that I mentioned before gain very large masses—so they are very hard to push around. Basically, they won’t be observable unless we access them through extremely high energy collisions. So by stabilizing the extra dimensions we have solved a key problem in connecting ten dimensional string theory to the four dimensional world: we have gotten an *effectively* four dimensional world and we got rid of the extra gunk from the hidden dimensions by making it very heavy.

I have to go now, so I will continue this discussion in Part 3, but before I depart, I’d like to insert some caveats into the discussion above. The procedure I am outlining above is called “flux compactification”. The word “compactification” refers to rolling up the extra dimensions. The word “flux” refers to these electric and magnetic-like fields that we turn on to stabilize the sizes and shapes of the rolled up extra dimensions. There are actually a handful of different avenues for doing this. In our paper, we focus on one of the best studied avenues which arises from so-called Type IIB superstring theory. The caveat is this: I said that when we roll up the extra dimensions we get a bunch of different particles that correspond to random fluctuations in the sizes and shapes of the rolled up dimensions. I then said that fluxes stabilize these dimensions and give heavy masses to these particles. In the context of our paper, the fluxes actually only give masses to *some* of the particles—a class of them called “complex structure moduli”. There is another type called “Kahler moduli” that do not get fixed by our fluxes, but instead need other methods to stabilize them. In our paper we simply assert that these are stabilized in some way or other, and that we will not focus on the details of how to do so. This is okay as long as you’re honest about it—but it also means that one natural extension of our work is to treat these other particles more seriously and try to deal with stabilizing them in detail.

I must run!

Conifolds and Tunneling in the String Landscape, a paper: Part I

Well, my colleagues (Eugene Lim, I-Sheng Yang, Pontus Ahlqvist, Saswat Sarangi, and Brian Greene) and I have finally put out our paper on flux vacua and stringy tunneling. Take a look:

It’s rather long, in part because we tried to be very detailed in our appendices and have included some calculations that are rather standard (the Klebanov-Strassler stuff along with the near-conifold expansions) for completeness.

There are two basic themes in this work:

1) How do we efficiently find vacua for tractable flux compactifications, and what are some of the relationships between them?

2) What are the details of the tunneling processes that might take you from one vacuum to another one that is smoothly connected via the potential?

I’ll now repeat a story that has been told many times before. String theory is most naturally formulated in ten dimensions. This leads to the obvious problem that we only observe four dimensions, three spatial dimensions (jump up and down, move left and right, forward and back) and one time direction (get older). This leads to one of the key questions in the field, “who’re you gonna believe? String theory or your lyin’ eyes?”

Well, the question doesn’t necessarily have to be either/or—you probably should believe your lyin’ eyes (ish), but that doesn’t rule string theory out. In fact, the extra six dimensions that seem extraneous and unwanted can be rolled up into certain types of shapes (Calabi-Yau manifolds and related objects). In this way they can be made very small, so small, we wouldn’t be able to detect them—at least not up until now. An added bonus is that if you roll them up the right way, you can get physics that is remarkably similar to the kind observed by us both gravitationally and in particle physics.

This business of rolling up dimensions isn’t as abstract as it sounds. Brian Greene has a nice explanation of it in terms of a garden hose. When you are far away, the hose looks like a long one dimensional line. But when you zoom in, you see that there is a rolled-up dimension that actually makes the hose a cylinder.

Note however that I stipulate that the physics is *similar* to the stuff that we already observe—similar but not exactly the same. In fact, if you do just as I have described, the extra dimensions still pose some problems. Think about the garden hose: the size of the rolled up dimension is an important input, and in string theory, the sizes of certain handles and holes in these more complicated Calabi-Yau shapes, as well as the overall size of the shape itself, are not for you to simply declare by fiat. Instead, one hopes that physics itself should cause these sizes to stabilize in some manner. Thus, we allow these parameters to become fields—or, since it’s effectively the same thing, particles—in our 4D universe. The size that they stabilize at is going to be related to the mass of these particles—bigger masses mean smaller sizes.

That is *if* they stabilize at all. The problem with rolling up the extra dimensions in string theory on just a Calabi-Yau manifold is that these new particles (and there are potentially hundreds of them) actually don’t develop a mass. This means we should be detecting them if they exist. This of course is a problem.

But there are ways out. The reason that these particles don’t develop a mass is because there are no forces that act on the rolled up geometry to guide the sizes of these holes and handles. String theory however has an answer to that: it contains precisely the ingredients you would need to put fields similar to electric and magnetic fields through these shapes in just such a way that these new particles gain masses. The shapes settle at a size that minimizes the energy needed to balance out these generalized electric and magnetic fields. These fields through the rolled up shapes are called “fluxes” and the particles that represent the sizes of these shapes are called “moduli fields.” The whole business is called flux compactification of string theory.

I have to run and this is a natural place to pause. In the next part, I will try to explain what it is that my colleagues and I do in our paper.

Climate Stuff

There are always tons of articles in the media and scientific literature coming out related to the Earth’s climate. Thus, whenever we cover the topic in Frontiers it’s always nice to point out what’s going on and being said “out there in the world”:

First up, what happens when state legislatures that don’t have the time or…I don’t know what…try to legislate how to approach the issue of climate change? Here is Utah’s attempt:

Utah, Legislating Global Warming

And, for an even more entertaining take on this, here is South Dakota’s:

South Dakota, Legislating Global Warming


Now, yes, I am putting this up here to make fun of these two states’ legislatures and their decision to even consider such resolutions. They should also consider getting an editor and a fact checker, preferably someone who can explain the difference between astronomical and astrological (well, mainly South Dakota should consider that).

But state legislatures are pretty comically awful all around (as far as I can tell). See here and here. (note: Gail Collins is perhaps the best New York Times op-ed writer EVAR).

Moving to more sober things, NASA’s Goddard Institute for Space Studies (GISS) has a bunch of useful stuff to look at regarding the science aimed at understanding the Earth’s climate:

NASA, Goddard Page

NASA GISS, Gavin Schmidt Explains Climate Models

And finally, as if out of some sort of cosmic conspiracy to make Frontiers look extremely relevant, here is a DISCOVER magazine article about George Will and the need for some serious fact-checking:

DISCOVER, More Bad Science from George Will

And an article that discusses a recent comprehensive Nature review of the state of hurricane science, as it relates to climate change. The conclusion: it’s messy but there are hints that we should worry.

TNR, Hurricane Study

End of the Astro Unit

To mark the end of the astronomy unit, here are a number of articles that are of interest:

Hubble’s Ultra Deep Field has been updated:

NYTimes, Deep

One physicist’s ideas about how time travel would have to work (no killing your own grandfather!):

Discover, Time Travel

WMAP is examining the Cosmic Microwave Background with enormous precision. The hope was that it would pin down certain aspects of cosmology, like details of the Universe’s inflationary period. Unfortunately, the Universe is very good at hiding her secrets…

WMAP, When Science Is Too Successful

This is a really useful video about results from the Relativistic Heavy Ion Collider:

Gizmodo, RHIC

Also, there are always articles connected to the Earth and the climate coming out.

Mother Jones, Free Lunch
Nature News, Climate Controversy
TNR, Gates on Climate

Astronomy Lecture 2 Questions

Lots of really good questions from the students this week! Frontiers’ second astronomy lecture was mainly about the stellar life-cycle and how we learn about it.

Questions were mainly about two things: how do light and temperature relate, and what the heck are these neutrino thingies? I wanted to address the neutrino questions here since they are very interesting and cutting edge, but I don’t know if I’ll be able to talk about them much in seminar.

A) What are neutrinos? Where do they come from?

Neutrinos are neutral (hence the “neutri” part of the name) particles with very very tiny masses. In fact, for a while it was thought that neutrinos had no mass (more on this below).

Neutrinos come from a number of sources, but the simplest to understand is the decay of the neutron. Just from the words alone you might have expected that neutrons and neutrinos are related. They are both neutral particles (not charged).

Now, a neutron is actually slightly unstable. If you have a neutron in your lab and you wait some time, it may randomly decay into an electron and a proton. Notice that the two new particles are charged but they have opposite charges, so the net charge is zero.

Suppose your neutron is just sitting there. Even when a particle isn’t moving, it has an intrinsic energy–the energy of its own mass. The amount of energy stored in a particle of mass m is

E = mc^2

where c is the speed of light. Notice that a 1 kg mass stores almost 10^17 Joules (about 2 × 10^13 food Calories) worth of energy! That’s a lot.
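As a quick sanity check of that arithmetic (the value of c here is the standard one, and I’m taking 1 food Calorie = 4184 J):

```python
# Rest energy of a 1 kg mass via E = m c^2
c = 2.998e8            # speed of light in m/s (standard value)
m = 1.0                # mass in kg
E = m * c**2           # ~9.0e16 J, i.e. almost 10^17 joules
Calories = E / 4184    # convert using 1 food Calorie = 4184 J
print(E, Calories)
```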

A neutron that is just sitting around has energy stored in its mass. When the neutron decays, the total energy of the two particles that emerge from the decay should equal the energy that the neutron had in its mass. Well, people measured this, and they found out that things didn’t match up. Rather than scrap conservation of energy (which is one of the most fundamental principles of physics) people suggested that a new particle must exist. The particle had to be neutral since the electron and proton already had charges that cancelled. Thus, the idea for the neutrino was born.
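To see that bookkeeping in numbers, here is a minimal sketch using the standard rest masses of the three particles (values in MeV/c², supplied by me for illustration, not taken from the text):

```python
# Energy bookkeeping for neutron beta decay: n -> p + e + neutrino
# Rest masses in MeV/c^2 (standard values, for illustration)
m_neutron  = 939.565
m_proton   = 938.272
m_electron = 0.511

# The energy the proton and electron alone cannot account for:
Q = m_neutron - m_proton - m_electron
print(f"~{Q:.3f} MeV is shared with the unseen neutrino")
```

The proton and electron masses fall short of the neutron mass by roughly 0.8 MeV, and it is this leftover energy budget that the hypothesized neutrino carries away.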

So to reiterate, a neutrino is a neutral particle with a very small mass that most commonly arises from “beta” decay of neutrons–neutrons decay by spitting out an electron and a neutrino, and turning into a proton.

B) How do we detect them?

Aside from measuring the absence of energy in some process, how do we try to detect a neutrino?

Well, in order to detect anything, the thing has to interact with other matter and we need to be able to observe those interactions. Now, there are four possible ways for matter to interact:

* Gravity: all mass and energy tugs on other mass and energy. However, this is very weak. For particles with very small masses, this is pretty much useless as a method of detection.

* Electromagnetism: charged objects cause other charged objects to move due to electric and magnetic forces. However, neutrinos are neutral, so you can’t observe them this way.

* Strong nuclear force: this is a force that holds atomic nuclei together–remember protons in the nucleus want to push apart, and that electric repulsion is quite powerful. You need a REALLY strong force to hold them together. Thus, the strong nuclear force is orders of magnitude stronger than electromagnetism. BUT, neutrinos are not affected by it since the strong nuclear force actually only acts on quarks and things that are made of them (neutrons, protons). Neutrinos are not made of quarks, so they don’t have “strong force” charge (just like the fact that they have no electric or magnetic charge).

* Weak nuclear force: we’re left with this force. In fact, this force was first posited to explain things like beta decay. Indeed, this is how neutrinos interact. However, the weak force is very weak: while it is orders of magnitude stronger than gravity, it is minuscule compared to electromagnetism and the strong force. This is why neutrinos are really really hard to detect: they pass through everything with a very very low probability of interacting weakly with other particles.

So how do you detect something that interacts only weakly, with a very low frequency of interaction? Well, you build a giant vat of stuff (water) and tuck it way underground so that other particles that originate in the atmosphere don’t interfere with your observations. You also look for a copious source of neutrinos. As luck would have it, the sun is a huge source of neutrinos, since they are produced in large quantities by its nuclear reactions. There are lots of atoms of water in this giant vat, and lots of neutrinos passing through it. So, even though the probability of any single interaction occurring is very low, the numbers work out so that you will detect some smallish number of neutrinos through their interactions with the water nuclei or electrons.

But how do you know an interaction occurred? When a neutrino hits an electron in a water molecule, it can kick that charged particle to a speed momentarily faster than the speed of light IN WATER (note: things can move faster than light in a medium other than a vacuum; nothing can move faster than light in a vacuum). This produces a sonic boom of light—something called Cherenkov radiation. It looks like a cone of blue light. Now, you have cleverly lined your giant vat with 10,000 photo-detectors. All of a sudden, a small ring of these detectors goes off, indicating that a neutrino has collided with one of your water molecules. That’s how you detect the neutrinos.
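The “faster than light IN WATER” threshold is easy to compute: light in a medium of refractive index n travels at c/n, and n ≈ 1.33 is a standard approximate value for water (my numbers, for illustration):

```python
c = 2.998e8        # speed of light in vacuum, m/s
n_water = 1.33     # refractive index of water (standard approximate value)

# Light slows down to c/n inside the medium; a charged particle moving
# faster than this (but still slower than c!) emits Cherenkov radiation.
v_light_in_water = c / n_water
print(v_light_in_water)    # ~2.25e8 m/s
```

So an electron only needs to beat about three quarters of the vacuum speed of light to make the blue cone, and no relativistic rules are broken.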

There are other methods as well, but they all involve painstakingly careful and patient observation.

C) Three kinds of neutrinos? And what does this have to do with their mass?

So now there’s a catch. When people started observing neutrinos using the methods mentioned above, they actually found that the number of neutrinos observed was a third of what was predicted! This was weird since the physics of the sun is actually pretty straightforward. So either something about neutrinos was wrong, OR something else about physics was really wrong. We try to modify as little as possible when we have a very successful theory that correctly predicts lots of other things, so people suggested that the issue was with our understanding of neutrinos.

It turns out that there are three “flavors” of neutrino: the electron-neutrino, muon-neutrino, and tau-neutrino. How did theorists come up with that? Because we know that there are electrons, muons, and taus, and that they are all analogous to each other, except that their masses are different. So since we knew that electrons were partnered with a neutrino, the other analog particles needed their own partners (this is related to the deep principle of symmetry in physics, something I won’t talk about here unless somebody wants me to).

Well, there are three kinds of neutrino and we were detecting only 1/3 of what we expected…coincidence? NO! In fact, we were only detecting one type of neutrino out of the three because the water molecules in our giant vats will only interact with the electron-neutrino, not the other types. So you might think the story is over: the sun produces three kinds of neutrinos in equal proportion and we only detected one kind.

But it’s a bit more complicated. The sun’s nuclear reactions will actually only produce electron-neutrinos. So at first blush the above explanation doesn’t work. If the sun only produces electron-neutrinos, why do we only detect a third of them? The answer, as it happens, is that something funny happened on the way to the detector!

As the electron-neutrinos travel through space they can change their flavor. They have some probability of turning into a muon- or tau-neutrino. The connection with mass is this: the probability depends on the difference between the masses of the different types of neutrinos. If the masses were the same, then the neutrinos wouldn’t change into one another.
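To illustrate that last point, here is a sketch of the standard two-flavor vacuum oscillation formula (a simplification: the real solar-neutrino story involves three flavors and matter effects, and the mixing angle and other parameter values below are purely illustrative, not measured numbers):

```python
import math

# Two-flavor vacuum oscillation probability:
#   P(flavor change) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 the mass-squared difference in eV^2, L the distance traveled
# in km, and E the neutrino energy in GeV.
def p_flavor_change(theta, dm2, L_km, E_GeV):
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# If the masses were identical (dm2 = 0), the probability vanishes,
# exactly as the text says:
print(p_flavor_change(0.6, 0.0, 1.5e8, 0.01))   # prints 0.0

# With a nonzero mass-squared difference, flavors do mix:
print(p_flavor_change(0.6, 7.5e-5, 1.5e8, 0.01))
```

Note how the mass difference dm2 sits inside the oscillating sine: equal masses switch the flavor-changing term off entirely, which is why observing oscillations told physicists that neutrinos must have (unequal) masses.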

Astronomy Lecture 1 Questions

I’ve already received a couple of very interesting questions about the lecture from some students. Since these were relatively early I wrote up some answers. In the interest of giving everyone a chance to look at this, I’ve decided to create posts on the blog that will involve some of your questions and my responses. You can feel free to respond in the comments and I will periodically monitor what is written. That said, those of you who are my students should still e-mail me your lecture questions!

One student wrote me with the following:

“So we can measure the distance between us and another cosmic object by determining the period of nearby cepheids and then the corresponding luminosity. I want to know why the longer the measured period is, the brighter the luminosity. What is the physical mechanism behind that?”

There is a quick answer and a more in depth one. The quick one is that the higher the peak luminosity, the more time it takes to go from the peak luminosity to the minimum luminosity and back.

But why? Well, what makes a star shine? It’s the nuclear fuel that is fused due to the enormous gravitational pressures from having all that mass. Throughout most of a star’s life, the fusion involves abundant hydrogen and a little helium, which are in relatively steady supply. But as a star approaches the end of its life cycle, it can get into a situation where it has to fuse heavier elements (more helium than hydrogen, lithium, etcetera). The bang for your buck in fusing heavier elements is less, so the star is less able to withstand the gravitational pressures pulling it in on itself. However, as the star shrinks, it burns more rapidly (greater pressures), which causes it to expand out again. So the star pulsates in this way (again, this is more toward the end of the star’s life cycle).

A more massive star will have more extreme swings, which take more time to work themselves out. When it is on the more crushed-down side, it will be relatively more crushed down than a star with less mass; this creates more pressure, causing more energy to be released. But there is a quirk: the outermost helium becomes ionized, blocking the radiation from escaping. Instead, the helium is driven outward–the star expands, the helium cools and becomes electrically neutral again, and the star’s light comes through. The bigger the star, the more time this whole cycle takes, the higher the energies involved, and hence the greater the maximum luminosity.

The student asked another question which is quite thought provoking:

“our conclusion from the observation of galaxies moving away is that the space itself is stretching. Which fundamental force is actually stretching the space? What is the physical definition of space? If we go back in time until the universe itself is a point singularity, how would be describe the region that is outside that singularity?”

So, last week I mentioned that there are four fundamental forces: gravity, electromagnetism, and the weak and strong nuclear forces. Now, one of these forces is a sort of odd man out–it’s gravity. It turns out that you can think of gravity as a force, but alternatively, you can think about it as the actual shape of space itself–that is, imagine that space is like a rubber sheet. When you have a mass sitting on the sheet, it distorts it–this is what mass does to space, it changes its shape. When another much smaller mass rolls by on the rubber sheet its path gets curved. When a small mass passes by a big one in space, the big one’s gravity pulls on it causing its path to curve. So there is a nice analogy between gravity and the bending of space. So gravity is somehow related to the expansion of space–but in fact, it works AGAINST the expansion. Masses attract each other, which is sort of like the space between them getting squeezed. So what is responsible for the expansion? Well, if you run the expansion backwards, it implies that everything was all together at some point in time. This seems to imply that everything exploded into existence–what we call the big bang. It’s the initial impulse of this event that continues to drive expansion…although there are some caveats to this statement.

Okay okay, so what is space? Where is this “singularity” we call the big bang situated? Space can be thought of as the arena in which matter and energy interact. However, it is not passive, as I mentioned before, it actively participates in the interactions since it is distorted by mass and energy.

There are lots of ideas regarding the notion behind this singularity, but the most straightforward to state and yet very difficult to grasp is that there isn’t any “outside” of this singularity. The big bang happened–there is no before and there is no elsewhere since the big bang was the origin of space and time.

Now, an alternative view is that what we call the observable universe is embedded in a much bigger object—the “entire” universe, let’s call it. Perhaps our region of the entire universe was produced by some singular process, but the entire universe that it is embedded in has existed forever? This is a viable idea that theorists work with. It gets rid of this pesky notion of a beginning to time and suggests the possibility that there might be a “history” to trace back before the big bang. That said, these ideas are purely speculative (albeit with a lot of mathematics behind them). The simplest idea is that the universe (the entire universe) began with a big bang almost fourteen billion years ago—counter-intuitive as that may be…

An Accumulation of Interesting Things

Forests appear to be providing an important global ecosystem service:

NYTimes: Trees, Timber, and Global Warming

Newsweek: Carbon Farming (Columbia’s own Don Melnick is featured prominently in this!)

And while we’re talking about plants…

NYTimes: Plants Are Really Cool

The LHC has started smashing things together (finally!). The world appears to still be here (this may be a selection effect). Records have already been broken:


Hopefully the LHC will yield evidence for what dark matter is. CDMS is another experiment that’s been attempting to detect dark matter directly; its recent results are somewhere between 1 and 2 sigma (not statistically significant enough). How tantalizing!

Berkeley: CDMSII

Maybe there’s something alive down there on Mars after all?

EurekAlert: Methane in Mars’ Atmosphere

Speaking of the atmosphere, here are some articles worth reading regarding the phony hullabaloo surrounding the climate E-mails:


Popular Mechanics


Turning to where Brain and Behavior meets math we find that humans have a very hard time thinking probabilistically:

NYTimes: It’s Just Not Worth It (Unless You Win)!

NYTimes: The Uphill Battle for Evidence Based Medicine

And finally, an article from a while back on religion and science:

NYTimes: Evolution and Religion

Happy Holidays. Some Things To Keep You Thinking

Another semester done. Perhaps for some of you it was your first semester. With that came another batch of final exams to read and some very good responses to our final essay.

The essay’s essence is this: How do we–personally, or humanity as a whole–think about our place in the world in light of the scientific advances in subjects such as biodiversity, Earth science, neuroscience, and astronomy?

I enjoyed reading students’ responses. Several were personal: the topics we studied and discussed over the semester presented all of us with a new set of perspectives on the world. It seems that at times these new views bolstered beliefs that some students already held–the necessity for us humans to develop new, more sustainable ways of interacting with our planet, for example. At other times, the ideas we explored were simply awe-inspiring–contemplating the vastness of the Cosmos and its strange constituents and behavior (dark matter, expansion). Some students were downright disturbed by neuroscience’s Astonishing Hypothesis–the notion that all that we are, our perceptions, memories, impulses, fears, joys, and sorrows, even the feeling that we are the primary movers of our “Selves”, our senses of free will and self, are nothing more than a complicated dance of electrochemical action potentials flickering around a three-pound lump of matter in our craniums.

A theme that emerged in many of the essays was that our prejudice as humans, the almost reflexive instinct to place ourselves at Creation’s center, is a casualty of science’s progress. We used to consider ourselves central to the plan governing the Cosmos, yet we are continually pushed outward, to the outer provinces of the cosmic landscape we are charting: our planet isn’t central, it orbits the Sun. The Sun is merely one of some hundred billion stars in a galaxy, which in turn is one of a comparably vast number of galaxies in the observable universe. Even the remarkable discovery that almost all galaxies are rushing away from us reveals our lack of centrality. It is not evidence that we inhabit a unique location in space, but rather that space itself is expanding. In fact, the simplest model suggests that space has no center!

Our study of Brain and Behavior only served to sharpen and personalize this theme. As I mentioned before, our sense of self–our sense that we can freely impose our will on our bodies and our surroundings–is challenged by the reductive view that all of these perceptions are actually by-products of the complex interplay of stimuli among neurons.

Given the accumulation of scientific understanding that is built on these assumptions–the principle that we inhabit a typical part of the Universe, the Astonishing Hypothesis–it is easy to believe that we humans are indeed marginalized, even within our own bodies. That would certainly be a consistent position, but it does not logically follow. The complexity of our microscopic selves and the emergence of our more familiar first-person sense of self and the world around us is still rich with mystery.

And paradoxically, as progress in science appears to push us to a more marginal view of ourselves, the self-same progress reveals the profound impact we are beginning to have on our cosmic home, the Earth, and on our home’s fellow inhabitants. In fact, that progress is part of the reason our activities are taking their toll. Through a combination of tool use, problem solving, and better forms of social organization, the human species has become a global force whose impact on other species and the overall environment is comparable in magnitude to the planetary impact that simple, single-celled life had (and continues to have).

Up until relatively recently, we as a species have remained oblivious to this strange position we find ourselves in. But we now confront uncomfortable choices: to act on the knowledge that we gain through science, accepting our important global role and beginning the process of altering our ways of interacting with the natural world–a process that is potentially painful in the short run (although not likely to be as painful as skeptics and critics claim); or to blithely ignore what we are learning, delude ourselves into thinking that we will somehow muddle through, and risk all of the hard-won progress and achievements that we humans have made. The world will go on, life will likely go on, but drastic change can happen, and the mightiest can be laid low.

Speaking at a commencement, the astronomer Carl Sagan considered an image from space of a ray of sunlight, with a small reflective twinkle caught up in it:

Look again at that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

We may not be central to the Cosmos’ workings, but the life on our world will never come again. The histories, myths and legends of our cultures will not be remembered by others. Being part of a vast and almost infinitely complex cosmos means that we are incalculably unique. Our ability to understand this and to extend it to our world as a whole is a gift. This is the view I take from science. I think it is a powerful argument to cherish this place and to relish our amazing ability to gaze outward and inward and marvel at what we find.

Time Travel via Statistics and other Interesting Things…

I just read a very fun article:

WSJ, Statistics Time Travel

One thing that I’d love to see studied using these techniques is how Roger Federer compares to a previous generation of tennis pros–the Pete Sampras era of the 90s. I have no doubt that he would be up at the top, but one certainly gets the sense that the field has been relatively flat over the last decade compared to the decade before that…or maybe I’m just becoming an old codger.

Another article, this one perhaps of particular interest to students of Frontiers, is

Slate, Facial Profiling

The students were given an excerpt of this to critique for the essay question on our recent midterm.

On the Climate/Earth Science front, there are two things I found that are worth checking out. One is that Peter DeMenocal, one of our very own Columbia professors and a Frontiers lecturer, is featured in a PBS documentary on how humans evolved:

PBS, Becoming Human

The other is an interview with Stephen Schneider–who as a graduate student wrote a paper suggesting that we should worry about global *cooling*, but later found that this conclusion was the result of an error!

TNR, Interview With Stephen Schneider

Turning to Astronomy, we have a wonderful rundown of all the different proxies used for determining astronomical distances:

Ned Wright, The Distance Ladder

And a student of mine directed me to this. It’s strangely inspiring (“A still more glorious dawn awaits, not a sunrise, but a galaxy-rise. A morning filled with four-hundred billion suns.”):

Eating Meat

I like beef. I eat too much of it and am trying to cut back. Perhaps that is what inspired me to suggest a question for our recent midterm exam: estimate the percentage of carbon emissions eliminated if everyone ceased to consume beef (assuming that this means all the beef-cows somehow “go away”). Of course, to do this estimate one needs to make all sorts of simplifying assumptions, but I think it is ultimately interesting–particularly if you also try to estimate the change in carbon emissions if everyone shifts over to hybrid cars like the Prius.
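For what it’s worth, here is one way the estimate might be set up–every input below is a rough assumption I’m plugging in purely for illustration (herd size, methane per head, methane’s CO2-equivalence, total US emissions), so treat the output as a ballpark at best:

```python
# All inputs are rough assumed values, for illustration only.
CATTLE_US = 90e6             # assumed number of cattle in the US
CH4_PER_HEAD_KG = 100.0      # assumed methane per head per year, kg
GWP_CH4 = 25.0               # assumed CO2-equivalence factor for methane
US_EMISSIONS_KG = 6e12       # assumed total US emissions, kg CO2e per year

# Convert the herd's methane into CO2-equivalents, then take the share.
cattle_co2e = CATTLE_US * CH4_PER_HEAD_KG * GWP_CH4   # kg CO2e per year
share = cattle_co2e / US_EMISSIONS_KG
print(f"{share:.1%}")  # a few percent, under these assumptions
```

The point of the exercise isn’t the particular percentage; it’s seeing which assumed inputs the answer is most sensitive to.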

Ezra Klein at the Washington Post wrote an interesting op-ed about all this over the summer. It’s worth a read.

Interesting Reads…

A standard technique in many sciences, particularly the social sciences, is something called “linear regression”, along with its more complicated descendants (multiple regression, structural equation modeling, hierarchical linear modeling, …). This article critiques the overly uncritical use of such techniques:

Goertzel: Why Regressions Fail

In computer science and closely related parts of mathematics, a major unsolved problem is whether the class of problems that can be solved by “fast” algorithms is equivalent to the class of problems whose solutions can be quickly checked *given* a solution, but for which no “fast” solving algorithm is known. This is the P = NP question. Most people think that the answer is no–that just because you can check a problem’s solution quickly, you can’t necessarily find a quick algorithm for producing it. This article explains the issue very nicely:

MIT: P vs NP

And since we’re talking about math, and in Frontiers we’ve recently taken a look at probability, I thought it’d be nice to point you all toward one of my favorite radio shows looking at the question of what randomness is:

Radiolab: Stochasticity


Probability of Collapse

The “Economix” section of the New York Times online brings up an interesting observation:

“Autumn seems to beget a disproportionate share of American financial crises. But why?”

A quick glance at Wikipedia yielded a page that tallies the biggest one-day US stock market losses. Starting from the Great Depression, and compressing crashes that fall within a couple of years of one another (we’ll treat those as part of the same phenomenon and date them to the earliest member of the series), the total number of such crashes on the list is 6.

This is over an 80-year period, so the frequency of such major crashes is about 6/320 ~ 0.02 per season (a season = 3 months). There have been a total of 80*4 = 320 seasons and, of course, 80 fall seasons.

According to this method of counting, these crashes *all* started in the fall months of September/October/November. So the question is, how improbable is this? Assuming that such crashes could in principle be evenly distributed over the different seasons, and assuming that the 6 I’m counting are independent events (questionable assumptions, of course), what is the probability that out of 320 seasons, they would all land in the 80 Falls since the 1929 crash?

So, take the probability of a big crash occurring in any given season to be

P(crash in a season) ~ 6/320 = 0.02 = 2%.

The number of different ways 1 crash could have occurred is 320 (one possibility per season).

The number of different ways 2 crashes could occur is 320*319 / 2: the first could have occurred in any of 320 seasons, and the second in any of the 319 remaining seasons (strictly speaking, my counting above merged crashes only a couple of seasons apart into one phenomenon, but let’s not worry about that). We divide by two because the order doesn’t matter (we treat the crashes as indistinguishable).

Three crashes: 320*319*318/3!

And so on…

6 crashes: 320*319*318*317*316*315/6! = 1,422,630,723,360 possible placements of the 6 crashes. Round this to ~ 1.4 x 10^12.

Now consider the following *specific* scenario. Imagine 6 crashes happening in a row and then 314 seasons with no crashes. The probability of such a history is

(0.02)^6 x (1 – 0.02)^314 = (0.02)^6 x (0.98)^314 ~ 1.13 x 10^-13

This will in fact be the probability of any *specific* history involving 6 crashes and 314 crash-free seasons.

Therefore since there are a total of 1.4 x 10^12 ways of getting such histories, the probability of randomly getting any history involving 6 crashes and 314 non-crash seasons is

(1.4 x 10^12) (1.13 x 10^-13) = 0.16 = 16%

BUT we’re not done, we want to know the probability that all of these crashes occurred IN THE FALL given that they occurred at all.

The number of ways a single crash could occur in 80 fall seasons is 80.

The number of ways 6 crashes could occur only in fall seasons is

80*79*78*77*76*75/6! = 300,500,200 ~ 3.0 x 10^8

So the probability of landing in one of *these* all-fall histories is

(3.0 x 10^8)(1.13 x 10^-13) ~ 3.4 x 10^-5 = 0.0034%

This is the probability of living through a history where 6 crashes occurred AND they all occurred in the fall. The probability of living through 6 crashes that occurred in the fall GIVEN that they occurred is given by the formula for conditional probabilities:

P(6 *fall* crashes in 320 seasons GIVEN 6 crashes in 320 seasons)

= P(6 fall crashes) / P(6 crashes over 320 seasons)

~ 0.000034 / 0.16 = 0.0002 ~ 0.02% chance.

So given all the assumptions, there was only about a 0.02% chance–roughly 1 in 5,000–that a history with 6 crashes over 80 years would have all of them land in the fall.

Note: you could have gotten close to 0.02% by assuming that each crash independently has a 1-in-4 chance of landing in a fall season: (1/4)^6 = 1/4096 ~ 0.024%. That assumption isn’t strictly true, since once a crash has occurred during some fall season another crash won’t be counted in the same season, but the correction is small.
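The whole conditional-probability calculation can be checked in a few lines of Python (a sketch; `math.comb` computes the binomial coefficients used above, and the p^6 (1-p)^314 factors cancel in the ratio):

```python
from math import comb

SEASONS = 320   # 80 years x 4 seasons
FALLS = 80      # one fall season per year
CRASHES = 6

# P(all 6 crashes land in fall seasons | 6 crashes among 320 seasons)
p_all_fall = comb(FALLS, CRASHES) / comb(SEASONS, CRASHES)
print(f"{p_all_fall:.3%}")   # prints 0.021%

# The independent-seasons shortcut, (1/4)^6, lands nearby:
print(f"{(FALLS / SEASONS) ** CRASHES:.3%}")   # prints 0.024%
```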

If anyone sees an error, please let me know!


Wikipedia: Largest Daily Changes in the DJIA

NY Times: Why Do Financial Crises Happen in the Fall?


Global Cooling in the 70s?

A common misconception about the status of climate science in the 70s is that many scientists believed that global temperatures were cooling. Here’s an article from the excellent climate blog

And there’s also this paper from the American Meteorological Society.

The basic storyline is familiar: some scientists published papers about the topic. The papers were picked up in the general news media, and so, a new–and quite persistent–myth was born.

It should be noted that at the time, as I understand it, the data for global warming was very spotty, and the scientific consensus was nowhere near as strong as it is today.

Hello Earth Science!

Peter DeMenocal mentioned in Monday’s lecture that the last ten years have been a plateau of sorts in global temperature. How serious is this problem for proponents of global warming?

NY Times: What makes a trend?

Science Magazine: A Cold Spell

Another thing mentioned in the lecture: there are three broad factors that impact the global temperature.

a) The energy input from the Sun (energy in must ultimately equal energy out; part of the energy out is thermal radiation from the Earth).

b) Reflectivity, or albedo (this is the fraction of sunlight that the Earth reflects directly back to space–the Earth does not absorb this light and so is not warmed by it).

c) The greenhouse effect (certain gases absorb the outgoing thermal radiation and re-emit part of it back downward, heating the planet further. These gases act like a blanket. The most famous is CO2, over which we humans have a pretty strong influence. The most effective such gas, however, is H2O–water vapor–which is predominantly responsible for keeping our planet temperate enough for life; without it the Earth would be much too cold for us).
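Factors (a) and (b) alone already pin down a temperature via a classic one-line energy balance (a sketch using standard textbook values for the solar constant and albedo):

```python
# Minimal energy balance: solar input and albedo only, no greenhouse.
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving from the Sun
ALBEDO = 0.30             # fraction of sunlight reflected to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# Energy in = energy out:  S(1 - a)/4 = sigma * T^4
t_effective = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(round(t_effective))  # prints 255 (kelvin), about -18 C
```

The observed mean surface temperature is around 288 K (+15 C); the ~33 K gap is factor (c), the greenhouse effect, at work.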

One idea to help mitigate climate change is to seed more cloud cover using dust particles. The science behind this is still quite primitive however. A recent study may be of interest:

EurekAlert: Dust and Climate Change

A common argument against action to mitigate climate change is that it will wreck the economy. A number of articles have recently come out arguing against this. For example:

WSJ: Not As Costly As We Think

I think another important point along these lines is that the longer we wait, the greater the costs of changing course. This is true for many reasons: positive feedbacks will be stronger and thus require a more disproportionate response on our end; in order to achieve safe targets (like 350 ppm of CO2 in the atmosphere) we will have to do more in a shorter time, which will involve more drastic economic and social measures; and there will be a gradual increase in the severity of the climate impact on many regions of the world, taking an increasing economic (not to mention human) toll.

Finally, modeling the climate is very difficult as it is an extraordinarily complex system. A recent New York Times article looked at one area in applied mathematics that may someday be brought to bear on this:

NY Times: Lagrangian Coherent Structures

Farewell to Brain and Behavior (Fall 2009 Edition)

Over the course of our Brain and Behavior unit for Frontiers this semester there have been a number of interesting articles out on related topics. Here are some of them:

WIRED: The uncertainties of fMRI (do dead fish really think?)

EurekAlert: fMRI detection of our innate number sense

EurekAlert: The Neurology of Concepts

Should we be skeptical of this? (Show me the data!)

WIRED: Are Placebos Getting Stronger?

Finally, Wikipedia has an interesting article on something called the “Hard Problem of Consciousness”:

Wikipedia: The Hard Problem of Consciousness

Welcome Back!

Well, school has just about started up again. I apologize for the lack of posts over the summer–I have been swamped with some very interesting research projects and various other things besides…

What follows is mainly for my students, although if you are looking at the blog for the first time, perhaps this will be of interest to you.

This semester in Frontiers of Science we begin by exploring the mysteries of the human brain, grappling with the (for some) discomfiting notion that every aspect of our being ultimately comes down to the electrical and chemical workings of this three pound calorie emitter in our heads.

After that we will turn to Earth Science. The theme here is how climate change can occur and its possible impacts on life. We start with a lecture about climate change happening in the recent past, now, and the near future. From there we go on to explore how periodically changing climate may have affected human evolution. Finally, we’ll look at the extinction of the dinosaurs–death by asteroid. But it wasn’t the asteroid that killed them all. The likely scenario is that the abrupt climate changes brought about by this catastrophe killed off most of the plant life, thereby starving the survivors of the original impact.

Astronomy is up next, wherein we’ll learn how to “build the Universe.” Sounds a bit ambitious, but at the very least, we’ll come away with a deeper understanding of what the Universe is on the largest scales we can probe and the behavior of some of its residents: the stars and galaxies.

Finally, we come back down to Earth and look at Biodiversity. What is it? Why is it important? How is it presently threatened and what can we do about it?

For those of you who are new, here’s a little bit about me: I’m a theoretical physicist working at the physics department here at Columbia in Professor Brian Greene’s group (called ISCAP: the Institute for Strings, Cosmology, and Astroparticle Physics). I am a string theoristy kind of physicist (though my interests are broader, extending into some purer mathematics as well as gauge theory and condensed matter physics).

At the moment I’m working on a few projects. The most exciting involves the possibility that space itself can tear and reform in a new configuration (technically, we say that the topology of space can change). This is a possible scenario that might occur in string theory.

Well, that about sums things up for now. Once again, welcome back.

Neurotech: Mind over Matter

There’s an interesting article up at the Washington Post about toys that let people move objects with their minds:

Also, a recent House episode featured a similar idea: a man has a mysterious ailment that has left him unable to move anything but his eyes, so he communicates by blinking. At one point, he loses that, but they connect him to a computer where he is able to move a cursor around by thinking.

New Energy and the Environment: Unintended Consequences

The Washington Post has an article about some of the potential costs–in terms of wildlife–that CO2-free energy can have. I’m sure that as we shift to new sources of energy we will uncover more unintended consequences. I’m also pretty sure that ingenuity will find a way to ameliorate these issues.

Frontiers of Political Science

I teach a course at Columbia University called Frontiers of Science (students who are taking/have taken the course know this already!). As I like to say in class, the course is not a science course, but rather, a course about science. Our goal is to try to get a sense of what science is, what scientists do, and what sorts of useful “habits of mind” we can take from scientific thinking and apply to our own lives.

Inevitably, the course touches on issues that are politically sensitive. We discuss climate change, evolution, the birth of the Earth and various other potentially loaded topics. These issues have, unfortunately, always been deeply entangled with people’s religious beliefs and worldviews. These are sensitive issues since people’s beliefs and politics tend to be emotional–certainly they are for me–and so I think they ought to be dealt with sensitively.

I think that this semester the course may come across to some students as rather biased leftwards, politically. I think there are two reasons for this: one which I will argue is perfectly acceptable (if perhaps socially unfortunate), and another which is not so acceptable–or at the very least, which ought to be addressed and discussed.

One reason–the unfortunate, but valid one–that this course (and perhaps other courses like it) may have a political tilt is that certain sciences have been politicized. When it comes to climate science or evolution, one side of the debate, as it happens the liberal side, has allied itself more closely with the scientific consensus, while the more conservative side has tended to be at best skeptical of, and at worst willfully ignorant or misleading about what the science shows.

Evolution is scientifically an open-and-shut case. There is a tremendous body of evidence that supports evolution as the valid scientific explanation of how species arise and how they are related. It’s a beautiful science in that we understand the broad picture (the overall pattern and logic of evolutionary processes) as well as the fundamental units that allow this process to occur in biological organisms (genes and their basis in DNA). Where there is scientific debate regarding evolution is in the details of how the process is carried out (how much random mutation matters versus environmental selection, for example), and naturally the links between the fundamental level and the big picture always need to be fleshed out further. It seems to me that it is unreasonable to deny evolution as a scientific explanation for biological diversity and origins. There is a loophole here for those who choose to believe in other processes of creation–you could explicitly choose to go with a non-scientific explanation–but then you shouldn’t call it science or teach it in science courses.

With regards to climate science, there is ample room for debate regarding how to respond to what we are learning about the climate and our impact on it. There is even room for debate about what precisely we are learning about the climate as there are some important areas of this research where uncertainties are high (such as forecasting). However, the data we have is strongly pointing toward some broad trends: warming and greater volatility. There are strong reasons to think that humanity has a hand in these effects and that where we can, we should be more cautious about climate modification.

The reason that a course dealing with these scientific topics may be “biased” in one direction rather than another is that, scientifically speaking, one direction has the science behind it and the other does not. The reason I find the politics unfortunate is that, ideally, the main elements on all sides of the political spectrum should respect the science and the process of scientific discovery. This doesn’t mean that it cannot be debated at a more philosophical level (see my earlier post responding to Dennis Overbye’s New York Times article about science and societal values).

I’d like to note, before passing to the other, less comfortable (for me, at any rate) reason for the course’s bias, that it’s not only the right of the political spectrum that clashes with science and muddies the waters for political reasons. The left certainly has done so, and does do so. There are all sorts of scientific subjects that make some of a more liberal bent uncomfortable. For example, some on the left side of the spectrum would like to believe that Nurture beats Nature in determining the most important qualities of a person. This is a comforting view because it means that–at least where it counts–we are hugely adaptable, and it jibes well with the belief that all people are equal. But this view is not supported by the evidence–the issue is vastly more complicated. Innate factors matter a great deal, and where things get very hairy is in how Nurture and Nature interact (in other words, it’s not an either/or sort of thing, but an immensely complex dance). Those who choose to throw this out are being willfully ignorant.

Another less philosophical, more political example comes from the debate over nuclear power. The left is far more inclined to be skeptical of the use of nuclear power, and some politically manipulate the issue without giving fair consideration to the other side. Nuclear power, at least in a first pass, does not generate CO2, so it could be a useful source of power as we try to transition to sources that will not further burden the climate. The problem is that there are dangers and there is waste, which is very hard to truly dispose of.

Okay, as I’ve been saying, this semester I have gotten a slightly uncomfortable feeling of bias beyond the one mentioned above. This is because I think that we who are teaching the course, lecturers included, have not been as sensitive as we ought to be. I think that we may have inadvertently created an environment in which some students may feel unwelcome, made fun of, and perhaps justifiably a little resentful. I don’t think that it’s any one thing, but more a collection of references and jokes that, taken together, could produce this effect. I would mainly point to the following things that we did: (1) polled the students on when they believe the Earth was created–I don’t think that there’s anything wrong with this, but it could be taken the wrong way; (2) made a joke during one of the chem lectures that could have offended some religious students; (3) made a joke in a recent neuro lecture that could have offended people who identify politically as Republican or conservative. Also, at least in my seminar, I discussed Bobby Jindal’s mocking of the stimulus package’s money for the US Geological Survey.

I don’t think that any harm was meant, and hopefully most students understand that the jokes were generally made in a spirit of fun. That said, all of these things can be taken as aimed mainly at one side of the political spectrum–and, well, they are. Taken all together, I can understand why some students might be upset with the political tenor of the course–especially since we present scientific viewpoints on other contentious issues that also tend to split along political lines.

If you are a Frontiers student reading this, please don’t hesitate to respond to what I’m writing here. Perhaps you think I’m being overly sensitive about the politics, and that it’s clear where the science ends and the opinion begins, and that that’s fine. Perhaps you agree, or feel even more strongly than I am able to express here. Make your voices heard. If you are in my seminars in particular, I very much want to hear your views and discuss these things. I think that one of the key themes of this course is the interplay between science and society, and one of the places where this interaction occurs is in education, so the topic is certainly relevant.

Detecting Dark Matter

Dark matter is believed to make up about 90% of the matter in the universe. Our primary evidence for it rests on the fact that large gravitational systems–galaxies, clusters of galaxies, and the universe’s large scale structure–do not move as we would expect from just taking into account the visible mass (mass due to stars and such).

The simplest model for dark matter is that it consists of elementary particles that interact with other matter mostly through gravitation, and possibly the weak force. This makes these particles very hard to detect. (The neutrino also interacts only via the weak force; we can detect its presence only through very, very rare interactions–for example with heavy-water nuclei in giant vats of the stuff buried deep underground. Otherwise, we infer it indirectly, through the energy and momentum that go missing in particle reactions.)

Detection of the particles only through gravitation is good, but not entirely satisfying. Certainly, one would want another way to see that it’s out there. Well, according to this article:

we may well have started to see it. One possibility stems from the basic idea that dark matter, like all other matter, has an “anti” version of itself. When antiparticles and particles collide, they annihilate and release energy. Apparently, some models of dark matter predict that this will be seen as an increase in the number of positrons (anti-electrons) reaching us. Indeed, this is what PAMELA appears to be detecting.

The observation of alternative signatures for dark matter is a big deal. By learning more about what signatures it has, we learn more about which models of dark matter are right. Some of these models even stem from something called “supersymmetry”. Evidence for supersymmetry would be a tremendous boost for string theory (though not sufficient to claim that string theory has been proved).

Habits in the News: Back of the Envelope

Natalie Angier has an excellent article on back-of-the-envelope calculations. This is one of the main skills that we try to teach in Frontiers. I think her article describes the process very nicely!

This skill is very empowering once you’ve gotten some practice with it. In some ways, it gives a certain illusion of control that probably isn’t warranted, but on the other hand, it can come in very handy, and I would consider it a sort of extension of knowing our world’s “geography”. Or maybe it’s more like having some basic skills in navigation.

Now, one might ask: why can’t I just look facts up online or in some reference and be done with it? Well, you can, but isn’t it a good idea to have some way to double-check that what you are reading makes sense?

For example, recently there has been lots of news about the economy. We’ve been reading that the so-called GDP of the US is something like 14 trillion dollars. Can we try to understand how a number like this comes about? Let’s do a back-of-the-envelope calculation.

The first step is to figure out what is meant by GDP–what are the components that go into it? It is the economic output of the entire US economy, measured in dollars over a year. How do these dollars get split up?

The most obvious piece is the amount of money that the citizenry makes (i.e., the total of the annual salaries of everyone who works). Then there’s the amount that the government spends. Finally, we buy things from other countries, but we also sell things to other countries; the difference is the net exports.

How the heck do we estimate those things from scratch? Well, the key is not to get scared. One just needs to get a little creative.

There are about 300 million people in the US. Let’s say 200 million work. Suppose that the average salary is about 50,000 dollars a year. This comes out to about 10 trillion dollars.

Actually, this 10 trillion dollar component can be refined a bit. We know that the government taxes us, so people don’t get to keep their full salaries. Let’s say about a quarter of the money goes to taxes. That’s 2.5 trillion in taxes to the government, leaving households 7.5 trillion. We also know that the government spends more than it collects in taxes! Over the last several years, we’ve run up budget deficits of 500 billion dollars or so, so government spending is about 3 trillion. This means that the citizenry-plus-government piece is about 7.5 + 3 = 10.5, or roughly 11 trillion dollars.

Net exports get a bit trickier. First, let’s ask how much of their income people don’t spend (i.e., how much do we save?). It’s a well-known fact that for a while now, people in the US haven’t been saving very much at all. That means that essentially all after-tax income was spent on stuff. Let’s break this problem up a bit: what is the average percentage of income spent on food? Perhaps 10%? Most of this is food that gets made here. Next, what percent on services and such things (school, etcetera)? Perhaps 30%? I don’t know–let’s go with that. In fact, let’s round up and say that between food and services, 50% of the income is locked up here in the US. Now let’s split the difference and say 25% of the income is spent on other stuff from here, while 25% is spent on stuff from abroad.

That would mean that we spend about 25% of 7.5 trillion dollars, or roughly 1.9 trillion dollars, on imports.

We also export a great deal–an amount likely within a few hundred billion dollars of what we import–so the *net* exports component of GDP is small compared to what we’ve already estimated. So, for the purpose of back-of-the-envelope, I will argue that we can neglect it.

This means that our BOE calculation tells us that GDP is roughly 11 trillion–not bad, certainly in what I would consider the ballpark.

We can do more with this number. We’ve been reading reports of terrible job losses–it seems about 600,000 jobs a month. At this rate, we will lose about 7 million jobs over the whole year. In fact, let’s be “pessimistic” and put the figure up at 10 million jobs.

Unemployment insurance generally pays a fraction of people’s usual salaries–let’s assume about half. This means that personal income falls by about 10 million x 25,000 dollars = 250 billion dollars. Out of the roughly 11 trillion dollars we estimated before, this amounts to about a 2% drop in GDP.
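The whole estimate fits in a few lines of Python–every figure below is one of the rough assumptions from the text, not an official statistic:

```python
# Back-of-the-envelope GDP, using only the rough figures assumed above.
workers = 200e6            # ~200 million people working
avg_salary = 50_000.0      # assumed average salary, dollars/year

income = workers * avg_salary     # ~10 trillion in salaries
taxes = 0.25 * income             # ~2.5 trillion to the government
deficit = 0.5e12                  # ~500 billion of deficit spending
government = taxes + deficit      # ~3 trillion of government spending
household = income - taxes        # ~7.5 trillion of after-tax spending

gdp = household + government      # net exports neglected
print(gdp / 1e12)                 # prints 10.5 (trillion dollars)

# The recession scenario: 10 million jobs lost, with unemployment
# insurance replacing about half of each lost salary.
income_drop = 10e6 * avg_salary * 0.5
print(round(income_drop / gdp, 3))  # prints 0.024, about a 2% hit
```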

Let’s check these estimates: if you go to the Bureau of Economic Analysis, you can look up the different components of GDP. They find that private consumption during 2008 was about 10 trillion dollars, while private investment was about 2 trillion, giving 12 trillion dollars for the overall private piece. We estimated 10 trillion before taxes and about 7.5 trillion after (so, clearly somewhat off, but not catastrophically). They found that the government spent about 2.8 trillion dollars; we estimated 3 trillion dollars (not bad!). Net exports were apparently about -600 billion or so. We said that we’d neglect this since it would alter things only fractionally, and we didn’t care about the fraction.

Back-of-the-envelope calculations are a really useful tool for cross-checking facts and, more importantly, for giving us a sense of place in the world, in a manner analogous to knowing where the important places in the world are relative to one another, and relative to ourselves.


I’ve stumbled onto a fantastic site featuring a series of video demonstrations about how to visualize higher dimensions, how to understand complex numbers, and, putting the two together, how to visualize the Hopf fibration and other basic aspects of geometry and topology!

This should be checked out by anyone who has ever asked whether we can visualize higher dimensions.

Two Articles on Climate

The New York Times has an article in this week’s Week In Review section on the debates between climate scientists on “tipping points”–levels of temperature or CO2 (or other factors) that would lead to irreversible and fast climate changes:

Also–I am still reading this–there is an article about Freeman Dyson, one of the most eminent physicists around. Apparently he is skeptical about the dangers of climate change, and this is rankling people:

As I said, I haven’t finished the article. I get the impression that he doesn’t dispute the fact that climate change is occurring, but rather, argues that we shouldn’t really worry about it. He thinks that much of what gets said about it is alarmism and that there are bigger, more immediate issues to deal with.

I suppose that I mostly disagree with this perspective. I think that alarmism is bad, but I think that we should be concerned with the climate patterns that we are observing. We understand that CO2 can drive the temperature up, so we should work toward reducing emissions; this doesn’t have to be “anti” economy either. There’s plenty of room for innovation! That’s my general attitude: the climate should be approached as a serious problem with some very dangerous consequences whose risks we cannot properly assess. We have an opportunity to implement lots of things we should want to do anyway: higher energy efficiency, better fuels, etcetera.


A while back the New York Times reported on an experiment that managed to “teleport” the information from one atom to another over a distance of about a meter. It’s a fun read:

Quantum teleportation rests on the ability of quantum particles to become “entangled”. The basic idea here is that the state of a system of quantum particles does not always break down neatly into a sum of states of the individual particles. The total state is “bigger” than the sum of the parts.

This “not the sum of its parts” business is evident from some of the simplest quantum experiments one can do. The famous double-slit experiment rests on the fact that if you fire a beam of electrons at a screen that has two slits cut out of it (a very short distance apart from one another), then after the beam passes through the slits, each particle in the beam is described by a wavefunction that involves two pieces:

(Psi_1 + Psi_2)/(sqrt(2))

where the first piece corresponds to passing through slit 1, while the second piece corresponds to passing through slit 2.

The probability distribution for a particle in the beam is the squared magnitude of the above:

[|Psi_1|^2 + |Psi_2|^2 + 2 Re(Psi_1* Psi_2)]/2

The last term in the square-brackets would not come about if these electrons behaved like classical particles. There would simply be a 50% chance of the particles passing through slit 1 and 50% chance that they passed through slit 2. That last term has the effect of creating an interference pattern as the electrons hit some photographic plate beyond the double-slits.
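A minimal numerical sketch shows the cross term at work; the geometry below (wavelength, slit separation, screen distance) is invented purely for illustration:

```python
import cmath
import math

# Toy double-slit interference with made-up geometry.
lam = 500e-9   # wavelength (m)
d = 10e-6      # slit separation (m)
L = 1.0        # distance to the screen (m)

def prob(x):
    """Probability density at screen position x for the state (Psi_1 + Psi_2)/sqrt(2)."""
    delta = d * x / L                             # path-length difference (small-angle approx.)
    psi1 = 1.0                                    # reference contribution from slit 1
    psi2 = cmath.exp(2j * math.pi * delta / lam)  # slit 2 picks up an extra phase
    return abs((psi1 + psi2) / math.sqrt(2)) ** 2

# Classical particles would give a flat 1.0 at every x; the quantum cross
# term makes the probability oscillate between 0 and 2 across the screen.
for x in (0.0, 0.0125, 0.025, 0.0375, 0.05):
    print(f"x = {x:.4f} m  ->  P = {prob(x):.3f}")
```

The bright and dark fringes correspond to the cross term adding to, or cancelling against, the two “classical” pieces.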

Entanglement works on a similar principle, only now a *single* wavefunction necessarily describes the state of two particles rather than one. This wavefunction is set up to ensure that some constraint is met; if the particles were photons that arose from the decay of a particle with zero angular momentum, then the total angular momentum of the photons is zero. This means that if you measure the spin of your photon, you know the spin of your friend’s photon.

The crazy thing is that by *measuring* the photon’s spin, you appear to have an influence on the spin of your friend’s photon. This influence appears to act instantaneously. This is what makes entanglement puzzling–it seems to go against Einstein’s special theory of relativity. In fact, it doesn’t, because you cannot transmit the information you discover upon measuring your photon’s spin to your friend faster than light, and it is information transmission that matters. Still, as a professor of mine once said: quantum theory obeys the letter of the law, but certainly seems to skirt close to breaking its spirit…

I guess quantum mechanics is still better than some in the financial industry…

Overbye reports on the Kepler mission

The Kepler spacecraft will soon be launched and hopefully start gathering data that will reveal the existence of Earth like planets around other stars. Dennis Overbye has a nice overview of the mission and what’s involved in a Science Times piece:

Earth-like in this case means that the planet has roughly the same, or analogous, orbital properties as the Earth. I don’t think that the kind of observations Kepler makes can tell us much about the composition of the planet’s atmosphere, which is obviously of the utmost importance to whether the planet can host life similar to what we know.

Something called ‘volcano monitoring’

I watched the president’s speech last night. Afterwards, I took some time to read Bobby Jindal’s rebuttal (I saw the beginning of it and did not think that the delivery would improve my chances of taking it seriously).

I had lots of issues with it. Now, I don’t mean to imply that debate about the stimulus package isn’t important. But shouldn’t we be trying to elevate the debate to a serious level rather than picking out random things and telling 70/30 lies (to be generous) about them in order to gain political points?

One example: while discussing how the stimulus bill was wasteful, Jindal brings up the spending (we are supposed to feel outraged) on “something called ‘volcano monitoring'”. In Frontiers we just had a wonderful lecture by Professor Terry Plank on the dangers of volcanoes. They are many, varied, and awful–and rare, of course. But not TOO rare.

Well. Here’s my train of thought:


Second thought: What good timing!

Third thought: well, the probability of a volcano erupting in the US is pretty small…

Fourth thought: oh…wait…


There’s a nice article about all this here:

Oh, and as a parting point:  The $140,000,000 is not going to “something called ‘volcano monitoring'”. It’s going to something called the US Geological Survey. USGS studies the natural resources of the US, natural hazards, and other related things. I suppose that some could argue that it’s not worth it. But for a sense of scale: this spending accounts for 2 hundredths of a percent of the entire bill. If we can afford to stick in a cut to the alternative minimum tax, I think we can afford to help out our (generally underfunded) geologists who help with various things including something called volcano monitoring…

Tierney: Science and Politics

John Tierney at the New York Times discusses some of the ambiguities scientists face when they get involved in policy.

I think that the issue is a bit more complicated than he makes it out to be. It would be great if a scientist could simply say: here’s my science cap and here’s what we know (within limits of uncertainty). Now I’m putting on my policy cap and here’s what I believe we should do. Unfortunately, the separation is not so simple. It strikes me that the issue is rendered even worse by the fact that not everybody involved in debating these issues is an honest broker, and it would be unwise for scientists involved in policy to assume otherwise. Scientific uncertainty/debate is easily recast by politicians as reason for inaction and grounds for “hearing out the other side.” Even when the “other side” has no science to back it up (see the evolution/creationism “debate”). At times a scientist has to resort to calling into question the critics’ credentials, and if not that, then at least the critics’ understanding of the issues.

New estimates for Earth-like planets

Researchers have discovered over 300 extra-solar planets (exoplanets). These observations are mainly indirect: one observes the tug of a planet’s gravity on its parent star. Since gravity becomes stronger when separations between massive objects are small, and it is stronger for bigger masses, this “Doppler shift” method is biased towards detecting more massive planets and planets that are closer to their parent stars. Another method that nets us fewer planets but gives us more precise knowledge involves seeing the light from the parent star get slightly dimmer when the planet passes in front of it.

Of course, it would be best to directly see the planets themselves. So far, there is potentially one direct observation of a planet. I say “potentially” because these observations usually come with a host of uncertainties.

Regardless of the methods used, astronomers have been astounded at the number of planets that appear to exist and at their variety. We’ve managed to observe planets ranging from a couple times the mass of Earth (which is very exciting!) to several times the mass of Jupiter. The holy grail is to observe a planet of roughly Earth-like mass (could be a few times more massive) in an orbit that is similar to Earth’s around a star similar to our sun. This would be a phenomenal observation because such a planet could easily carry liquid water and all the same elements for life that we have here.

My student Cem and a former student, Tim, both pointed me to an article discussing recent attempts by astronomers to take all this recent data and estimate how many Earth-like planets may be out there:

Frontiers students are often asked to do so-called “back-of-the-envelope” calculations for a variety of things (quick: estimate the amount of carbon emitted yearly!). One fun one that you could try for yourself is to estimate the number of Earth-like planets in the observable universe, given that there are about 100 billion galaxies and about 100 billion stars per galaxy. Enjoy!
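If you want to see how such an estimate might be laid out, here is one way; the fraction of stars hosting an Earth-like planet below is a placeholder guess of mine, not data, so plug in your own:

```python
# Back-of-the-envelope: Earth-like planets in the observable universe.
galaxies = 1e11                # ~100 billion galaxies
stars_per_galaxy = 1e11        # ~100 billion stars per galaxy
frac_earthlike = 0.01          # assumed: 1 in 100 stars hosts one -- a pure guess

n_earthlike = galaxies * stars_per_galaxy * frac_earthlike
print(f"~{n_earthlike:.0e} Earth-like planets")
```

The interesting part of the exercise is arguing for a sensible value of that last fraction.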

Happy 200th Birthday, Darwin and Lincoln!

Just wishing Charles Darwin and Abraham Lincoln happy birthdays.

I recommend reading some of the many articles that have been written for the occasion; I won’t bother linking to them, though.

I hadn’t appreciated that Darwin and Lincoln were born on the same day! I’ve corrected this post to reflect this (before it was just happy birthday for Darwin).

A Scientific Worldview; A Worldview on Science

I don’t like to pigeonhole myself too much with regards to where I stand on certain issues that I believe are deeply complex and to some extent have no obvious “good” answer. That said, I think that I laid out, in an e-mail to a student, some things that I deeply believe underlie the scientific endeavor. These are the themes that I try to build the Frontiers course around. So at risk of a little bit of pigeon-holing, here are those themes and some of my thoughts about them:

The Awe of Nature: this theme comes up time and again and I would argue is the backbone of the vast majority of science. It is the purest motivation of the scientist and is a feeling that drives us to keep asking questions and open up new frontiers.

Questions, not Answers: this is more of a slogan than a theme. Obviously, science is partly a collection of facts (subject to revision). But science truly *lives* at its frontiers, where it is all about the questions that we try to answer (and to some extent the methods that we need to develop to answer the questions).

Science and Society (or is it Science vs. Society?): We come back to this again and again in the course–it is rather inevitable. Science is a human activity and it is pursued in a social context. Since science is done by people, there is a community of scientists and all the politics that goes along with that. In addition, science is embedded in broader society and must make contact with it. Sometimes this results in uncomfortable situations where new discoveries force us to confront old ideas and potentially even deeply-held beliefs and values. This is one of the sources of conflict between science and society. We’ll often argue about whether it has to be this way or not over the course of the course.

The Unknown (or how to know how much you don’t know): The frontiers of science constantly confront us with the limitations of our knowledge. Moreover, they confront us with the limits of our methods to gain knowledge. Often our techniques improve (usually based on new discoveries put to use), but there are things that will never go away. On the smaller, less philosophical scale these are:

a) Random error in measurement. You can never totally get rid of it!
b) Systematic errors (or bias) in measurements. Again, almost any method we employ to explore some aspect of the world has inherent biases. We’ll see examples of this throughout the course. Sometimes we can take these biases into account–and certainly where we can we must–but sometimes we may not even be aware of them.

We deal with these issues using statistics to try to estimate how much we don’t know–how uncertain we are about our results. This is why some basic statistical concepts and techniques will form an important element of our course. We’ll review/learn about things like means, standard deviations, standard errors, distributions, and the way various issues of measurement affect these things.

On a broader, more philosophical note, the question of how to know how much you don’t know is hopeless. As Donald Rumsfeld once put it: there are unknown unknowns: the things we don’t know we don’t know. There is no method I know of to guard against that *except* the fundamental aspect of science that is keeping an open mind and always being ready to scrap old ideas if they prove to be wrong, and to improve them where improvements need to be made. That said, it doesn’t take much time to think about this question to realize that there just may be frontiers to our knowledge that we may never cross–at least not scientifically. It is easy to ask questions (questions that you will confront in courses like Lit Hum and CC: what is The Good, The Beautiful, The Just?). This isn’t to say that science has no bearing on these things. It may shape our understanding of these questions–it may tell us things about ourselves as human beings that give us new tools for asking these questions. But it seems that in the deepest sense, these questions cannot be completely subsumed into science. Some might argue that where they can’t be made scientific, they should be thrown out as nonsensical. I don’t share that view, but I look forward to debating it with students who might.

The question of The Unknown brings us right back around to The Awe of Nature. A lot of the time, you’ll hear that science erases Nature’s mystery and eliminates our sense of awe since it just reduces Nature to a bunch of parts that work in some fixed way to produce the world we see around us. I can’t force you to feel otherwise, but I’ve always felt that knowing more about what is going on “behind the veil” enriches the mystery and wonder. This is partly because it forces us to contend with the question of where the ultimate boundaries to our knowledge truly lie and to appreciate how wonderful it is that we can comprehend all that we can (it seems a lot to us, but probably it isn’t much compared to how much we potentially could know).

Science and Society: Scientific Values and Ethics

One of the key themes that I like to highlight in our Frontiers of Science course is the interaction between science and society. I usually call this theme “Science and Society (or is it Science vs. Society?)”. Dennis Overbye at the New York Times has a nice essay called “Elevating Science, Elevating Democracy,” which argues against the view that science is somehow a values-neutral endeavor. It is certainly worth a read:

There are many things worth discussing, but I would like to focus on the basic question that Overbye raises: does science instill values in its practitioners and in the societies that foster it? His argument is essentially that science promotes inquiry, skepticism, honesty, as well as a sort-of egalitarian spirit–it doesn’t matter where you come from, if you are inquisitive enough and rigorous enough in your thinking and methods, you can go far in science.

I partly agree with the piece myself, but I have issues. Let’s start with this excerpt:

The knock on science from its cultural and religious critics is that it is arrogant and materialistic. It tells us wondrous things about nature and how to manipulate it, but not what we should do with this knowledge and power. The Big Bang doesn’t tell us how to live, or whether God loves us, or whether there is any God at all. It provides scant counsel on same-sex marriage or eating meat. It is silent on the desirability of mutual assured destruction as a strategy for deterring nuclear war.

Einstein seemed to echo this thought when he said, “I have never obtained any ethical values from my scientific work.” Science teaches facts, not values, the story goes.

Worse, not only does it not provide any values of its own, say its detractors, it also undermines the ones we already have, devaluing anything it can’t measure, reducing sunsets to wavelengths and romance to jiggly hormones. It destroys myths and robs the universe of its magic and mystery.

So the story goes.

But this is balderdash. Science is not a monument of received Truth but something that people do to look for truth.

It is true: science is partly a process for trying to find what is “true” about the world–but it is also a body of empirical facts (subject to revision and outright excision) and stories that we tell about how these facts connect up (i.e. theories). On top of scientific theory lies another level of interpretation: the nature of the “truth” that these theories contain. I am more-or-less a realist, and it sounds like Overbye is too. This means that we consider science to actually be uncovering real truth about the world (a funny sort of truth since it is subject to being proved out-and-out false, but rarely if ever able to be proved incontrovertibly 100% correct). But it is important not to lose sight of the fact that this “realist” view of science is an assertion about the nature of the knowledge science produces; it sits on top of the facts and the theories, and it is neither beyond objection nor closed to debate.

But I have already digressed. The point is that science is both process and content. And I think that Overbye is glossing this over by saying that “science is something people do to look for truth.” I think that Einstein was expressing his feeling that learning the facts of science does not truly infuse one with ethical values. And certainly there is something true in the criticisms of science that say that if there are values to extract, they tend toward materialism and some level of amorality. It’s easy to interpret the body of scientific facts as presenting a wholly amoral Nature, that has no shame and no compunction about what we might regard as cruelty.

So one has to be careful: I agree that the process of discovery tends towards democratic sorts of values, as Overbye argues. But one could argue that the sorts of people who engage in science tend to already be predisposed to this kind of thinking (mind you, there are all sorts of characters, and some might be quite authoritarian. Some might cling passionately to their pet ideas. Such people can still do great science and be important parts of the community of scientists). If there is a predisposition, then science cannot really instill these values; it can only reinforce them. That is something, but it is not as strong a position as the one Overbye asserts.

And that’s focusing on the scientists! Your average person is not exposed to the process of doing science, but rather, mainly to the end results. These end results rarely have a moral or present support for an ethical worldview. Take evolution: cooperation is “good” insofar as it provides advantages to the organisms that engage in it in some natural context. But where predation, parasitism, sheer brute instinct and ability suffice to allow an organism to survive and procreate, those are just as “good”. There is no natural judgment about the better way to be. And this is what people see of science, by and large. A body of facts, connected perhaps by theories–some beautiful like Natural Selection, some that are patchworks–but the facts are never really connected up by an ethical story–a story with a moral at the end.

So in this sense, science is really values-neutral. And this neutrality has led to dangerous episodes in the past. Take the application of Darwinian-style thinking to society, what is called “Social Darwinism.” This idea is insidious because it is very easy to argue for it based on the science alone. Dismissing Social Darwinism as a misapplication of science is not enough; you need to dismiss the notion that what we learn about Nature through science should form the basis for our moral or ethical thinking.

I don’t think that scientific knowledge should be kept 100% separate from ethics. I think that what we learn from doing science can greatly improve how we think about moral questions. But scientific knowledge by itself is an insufficient basis for ethics. I think that deep philosophical thinking about ethics, religious and otherwise, as well as our basic, instinctual humanity, are all important for morally grounding our societies. The scientific process of discovery reinforces, and probably even helps infuse, our society with many of the desirable attributes Overbye discusses, but this process itself is grounded in those values; they do not originate from it.

The Nature of Light; The Nature of Time

Frontiers students (and hopefully others of you reading this as well!) learn that we primarily gather information about the Universe by collecting the light that rains down on us from the distant stars (this is at least true of astronomy. Obviously, when it comes to information about what’s around us on Earth, there are other sources…).

Usually when we think of the information contained by light we think of visual images that arrive at our eyes instantaneously. Both of these intuitive notions (visual images and the instantaneous transmission of light) are approximations. Visible light is actually a small fraction of the full spectrum, which includes invisible forms of light such as x-rays, infrared, and radio waves. We detect these using instruments built for that purpose (although we can also detect infrared and microwaves directly, by feeling hotter in response to the energy they transmit).

The feeling that light emitted from, or reflected off of, some object reaches us instantaneously is an illusion due to the relatively short distance-scales we care about as we live our day-to-day lives *and* the enormous, but finite, speed of light. A pulse of light, let’s say emitted by a flashlight turned on and subsequently turned off, will propagate through the air at a speed of nearly three-hundred million meters per second (or 186,000 miles per second, or 669,600,000 miles per hour). If the light pulse were confined to traveling around the Earth’s equator, then the pulse would circle the world about seven and a half times in one second. It is easy to see why people might assume that light is transmitted through space instantaneously.

The fact that light’s nature is not confined to our intuitive, everyday assumptions has some startling consequences. The broad spectrum of light makes the Universe a vastly richer place. Using instruments like x-ray and infrared telescopes, augmented with computers that can generate images from the gathered data by artificially shifting the invisible light into the visible range, we increase our visual vocabulary, and are able to observe remarkable structures that would otherwise be hidden from us.

The finite speed of light means that when I teach a class, my students see me not as I “am” but as I was five or ten nanoseconds before, since the light bouncing off of me must traverse a distance (albeit a small one) to reach their eyes. A somewhat less trivial amount of time passes when we consider the light from the sun. The sun is about 93 million miles from the Earth, so it takes about 8 minutes for light to arrive from there. This means that the sun appears to us as it was a full eight minutes ago (by our clocks). More profoundly still, the light from even nearby stars takes years to reach us, which means that we see these stars as they were years ago. The further out into the Cosmos we gaze, the deeper into the past we look as well.
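These travel times all follow from t = d / c. Here is a quick check, using rounded, approximate distances:

```python
# Light travel times for the distances mentioned above (rounded values).
c = 299_792_458.0  # speed of light in m/s

distances_m = {
    "across a classroom (3 m)": 3.0,
    "around the equator (~4.0e7 m)": 4.0075e7,
    "Sun to Earth (~1.5e11 m)": 1.496e11,
    "nearest star, Proxima Centauri (~4.0e16 m)": 4.0e16,
}

for name, d in distances_m.items():
    print(f"{name}: light takes {d / c:.3g} seconds")
```

The classroom case comes out to about ten nanoseconds, the Sun to about 500 seconds (a little over eight minutes), and the nearest star to roughly 1.3e8 seconds, or about four years.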

This “lookback” time is what allows us to study the history of the observable Universe, since by collecting light from further and further away, we are able to see the status of the things that emitted that light as they were even as far back as ten billion years (even a little further back than that, actually).

So we see that the finite speed of light has implications for how we might measure time. We do not get a picture of the Universe that is simultaneously up to date everywhere–distance intervenes, isolating us in a tiny bubble of “present time” and populating the world around us with objects that are increasingly out of sync with us, the further away from us they are.

Perhaps this is disconcerting to you, and you ask, “well, why not just redefine what we mean by simultaneous, and say that everything as we see it is the way it is now, just by definition? Then we don’t have to get into all this confusing talk of things that we see now looking as they were at some other time.” Embedded in this suggestion is a key aspect of Einstein’s own redefinition of the nature of time.

To go further toward understanding our more modern view of time (at least in physics) I need to tell you (or remind you) that the speed of light acts as a sort-of universal speed limit and that light is observed to always travel at that speed no matter how you yourself are moving relative to the source of the light. So, no information can be transmitted faster than the speed of light and nobody ever sees light (in a vacuum) that travels slower than 300 million meters per second.

It’s easy to see that these two statements will have profound consequences, but it isn’t obvious what those consequences are. Without going into details, the basic point is that in order to accommodate these postulates, Einstein built a theoretical framework for physics in which observers in different states of motion will disagree about the lengths and time intervals that they measure with their rulers and clocks. In particular, they will disagree about what occurs simultaneously, which is related (at least in flavor) to the lookback time I discussed before, though the two concepts are distinct.

Our understanding of the nature of time (and light) would be updated as the 20th century wore on, often becoming stranger and stranger. Perhaps I’ll have more to say about these things myself in a future post, but for now, I leave you with a wonderful discussion of what time is from the Leonard Lopate Show:

In Search of Time: Leonard Lopate interviews Dan Falk

Steve Mirsky interviews biologist and science advisor Sharon Long

Sharon Long is a biologist at Stanford University. She was a science advisor to the Obama campaign. The interview was conducted the day after election day and provides an interesting glimpse into the Obama administration’s attitude towards science and science’s role in policy.

You can read the transcript or download the podcast at this link:

Steven Chu: Climate, Energy, and the Importance of Science

Steven Chu (Nobel laureate and soon-to-be Energy Secretary) has an excellent post up responding to comments about our policies for energy and climate change, but I really like how he ties in the importance of pursuing science itself. As he says, asking how the world around us works and what our place in it is is a pursuit likely to be as old as humanity itself–truly part of the human condition.

Paul Krugman: What Obama Must Do

It’s rather outside the areas of my expertise, but I think this is an interesting and useful piece by Paul Krugman:

In addition to the article, you can hear much the same thing in a nice interview done recently at the 92nd Street Y here in Manhattan:

Remixing Colbert and Lessig

Stephen Colbert recently interviewed Lawrence Lessig (a lawyer who fights against draconian application of copyright law). You can see their interview here (skip ahead to 14 minutes into the show):

Having a hunch about Colbert’s undue influence on his fans, I immediately went to youtube, where I discovered (amongst other things):


The Roar of the Cosmos

Here’s an interesting new development: the Universe is yelling at us. See the piece by Andrea Thompson, and also Dennis Overbye’s take.

Frontiers students, note that this is quite similar to the Cosmic Microwave Background phenomenon Kathryn mentions in her lecture. From the sound of the articles above, people think that the “roar” probably has a different origin.

First Post: Greetings

Hello, and welcome to the Frontier Scientist blog.

My name is David Kagan. I am a theoretical physicist at Columbia University with an interest in string theory, quantum field theory, and their application to both mathematics and physics. I’m particularly interested in the AdS/CFT correspondence and recent attempts to apply it to both our understanding of the strong force between quarks at high temperature (as observed in the quark-gluon plasma) and also various condensed matter systems (superconductors, quantum Hall systems…).

I also lead a seminar as part of Columbia’s Frontiers of Science course. Each semester we explore about four major areas in science (usually some combination of Astronomy, Climate, Geology, Neuroscience, Evolution, Biodiversity, and Physics and Chemistry). This is not a science course, but rather, a course on what science is. Obviously, fully explaining the nature of science is an impossible task. By looking at four varied areas, the students are (hopefully) able to grasp the breadth of scientific inquiry, the variety of methodologies and ideas, and some of the overarching unifying themes that run through all sciences to a greater or lesser extent.

This semester, the four topics are: Astronomy, Earth Sciences, Physics and Chemistry, and Neuroscience.

This blog is first and foremost a public airing of my thoughts on a variety of matters, scientific and otherwise. I hope too that students in my courses who come here find it a useful resource that will supplement our course and enrich their experience. Comments are welcome and encouraged, but please make sure that they are decent and respectful of others.