After a fortnight gallivanting around Europe and being more creative in my modes of transport than expected thanks to an unpronounceable mountain in Iceland, I’m back in Hawaii. On the flight to LA I ended up chatting to a bloke who works for a large US computer firm about various geeky things. The “what do you do?” question came up, and given he seemed worth talking to I opted for astronomer rather than physicist. Then at a lull in the conversation he volunteered the question, “how do they know how old the universe is?” We’ve been planning to add a “How do we know?” category to the blog so this seems like the perfect place for me to start.
The simplest measure of the age of the universe is known as the Hubble Time. We know the universe is expanding because light from distant galaxies is Doppler shifted towards redder wavelengths, indicating they are moving away from us. The further away a galaxy is, the faster it recedes; this is known as Hubble’s Law. The rate at which the recession velocity of a galaxy increases with its distance from us is known as the Hubble Constant. If we know how fast the universe is expanding, we can extrapolate back and see when the universe would have had a size of zero, i.e. when the big bang happened. And if we know how long ago the big bang was, we know roughly how old the universe is.
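To make that extrapolation concrete, here’s a minimal sketch of the arithmetic in Python – the value 70 is just an illustrative round number, not a measurement:

```python
# Rough age estimate from the Hubble constant alone, assuming the
# expansion rate has always been what it is today (illustrative only).

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 in billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # convert H0 to 1/s
    return (1.0 / h0_per_second) / SECONDS_PER_GYR

print(hubble_time_gyr(70.0))  # about 14 billion years
```

The whole calculation is just one over the Hubble constant, once the units are sorted out – which is why nailing down that one number matters so much.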
So all we need is the Hubble Constant. Easy, yeah? Erm, not really. The value of the Hubble Constant was for half a century the subject of great dispute. This period was known as the Hubble Wars, which conjures up massed ranks of Welsh longbowmen cutting down the flower of French chivalry to establish dominion over the fundamental constants of the universe. In reality it was a debate about measuring the distance to far-off galaxies. Getting recession velocities of galaxies is pretty easy, but to find the Hubble Constant you’ve got to know the distance to each galaxy too. Measuring distances in astronomy is pretty hard, something we might deal with later in this series, so various novel techniques must be used.
When Edwin Hubble first worked on the recession velocities and distances of galaxies in the 1920s and 30s he used a fortuitously odd type of star as a “standard candle”. In astronomy, if you know how much light a star or galaxy puts out in total and how much we receive on Earth, you can combine these to get how far away it is. Hubble used unstable stars which have finished their main life as normal stars, known as Cepheid variables. They pulsate, and so vary in brightness, and the really lucky bit is that the pulsation rate is related to the total light emitted by the star. So the pulsation period gives the total light emitted – combine this with the apparent brightness and you get the distance. The problem is that you need a pretty powerful telescope to resolve individual stars in distant galaxies. Even with the largest telescope available in the middle of the last century, the 5m Hale Telescope on Palomar, only relatively nearby galaxies could have their Cepheids resolved from the mass of other stars. So astronomers had to get creative.
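As a sketch of how the trick works, here’s the period-to-distance calculation using one published V-band form of the period–luminosity relation (the coefficients vary between calibrations, and real measurements also correct for dust, metallicity and so on – the example star below is made up):

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Distance to a Cepheid from its pulsation period and apparent
    brightness. Coefficients are one published V-band calibration;
    treat them as illustrative, not definitive."""
    # Period-luminosity relation: period -> absolute magnitude
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 log10(d) - 5, with d in parsecs
    return 10 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# A 10-day Cepheid seen at apparent magnitude 23 (made-up numbers):
print(cepheid_distance_pc(10.0, 23.0))  # about 2.6 million parsecs
```

Note the logic matches the paragraph above: the period fixes the total light emitted (the absolute magnitude), and comparing that with the apparent brightness gives the distance.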
This doesn’t mean they went off and played guitar in Queen, Coldplay or (as rumoured in the case of one astronomy blogger) the opening act for The Velvet Underground. Science itself is a creative process, trying to dream up innovative solutions to work around the limitations of the available technology and data. The work mostly rested on calibrating a myriad of new distance indicators using local galaxies at known distances (such as M100, pictured) and applying these new estimates to more distant objects. The results fell into two broad camps: one side, led by the American astronomer Allan Sandage, claimed a value of about 50 (I won’t go into the slightly obtuse units used for this measurement) and another, led by the French astronomer Gérard de Vaucouleurs, claimed a value of about 100. For decades they fought over seemingly minor points that shifted one particular rung of the intricate astronomical distance ladder up or down. From how dust in our own Galaxy affects the measured brightnesses of distant galaxies, to subtle biases in samples of galaxies, to the brightness of exploding stars, no point in the other group’s work was too minor to pick apart.
Fast forward to the end of the last century and say hello to the now 20-year-old Hubble Space Telescope. One of its Key Projects was to pick up where its namesake left off and find the Hubble Constant using Cepheid variables in more distant galaxies. After a huge amount of effort it came out with a result of about 72, giving a Hubble Time of roughly 13.6 billion years. This fits in fairly well with the estimated ages of the oldest stars. More recent measurements, such as those from the WMAP study of ripples in the cosmic microwave background and more up-to-date supernova studies, have supported a value of roughly 70. However, they also indicate that the expansion of the universe is accelerating, meaning our simple extrapolation, which assumes constant expansion, won’t give exactly the right answer.
I didn’t say all this to the bloke on the plane – we were about to land, so I didn’t have much time – but I hope I got it across fairly well, both to him and to you.
I read 3 papers! Three beautiful, science-packed, revolutionary and mind-blowing papers! Ok, maybe not – but trust me, after being so busy with things like measuring redshifts, fixing codes and mountains of admin/conference organising, almost any science is pure beauty for the old brain.
But one of these papers did tap into something I’m quite interested in, and it’s related to the Cosmic Microwave Background (CMB). I’ve briefly mentioned the CMB before, but here’s a proper introduction: when we say CMB we are talking about radiation that was created when the Universe was very, very young – around 380,000 years old. At that time the Universe was hot, and radiation (photons) was its dominant component. Because it was so hot, the Universe was actually opaque – matter was ionized, meaning electrons were bobbling about, not really attached to any nuclei because of the high temperature. This means that photons could not get very far without bumping into something – they could not travel in a straight line for any decent length of time, and were perpetually scattered around. That’s essentially what opaque means. However, something special happened around 380,000 years into the Universe’s lifetime: the temperature dropped enough for the electrons to settle into atoms, effectively setting the photons free. We call this the time of last-scattering.
These photons were then free to travel unhindered. They were travelling then, and they are still travelling now! Straight into our telescopes, carrying information from around 13 billion years ago, virtually unaltered! This, for cosmologists, is like Christmas – almost too good to be true. Anyway, I say virtually because some things do affect these photons as they travel along. A big one is the fact that the Universe is expanding, and this causes their frequency to change. This is the reason why we see them in the microwave part of the spectrum today – they were a lot hotter 13 billion years ago. Had we lived earlier on during the lifetime of the Universe, the CMB would have peaked in the visible band and the sky would have been rather pretty! (Of course, whether a planetary system with life could even have existed then is another matter.) But the Universe, in spite of being mostly empty, still has a lot of stuff around. You know, galaxies and things. And as these photons go through expanding regions with more matter (clusters) or less matter (voids), they see their frequency slightly changed.
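To put numbers on that cooling, here’s a little sketch using the standard scaling T(z) = T0(1+z) and Wien’s displacement law; the redshift of last-scattering (~1100) is the usual textbook figure:

```python
# The CMB cools as the Universe expands: T(z) = T0 * (1 + z).
# Wien's law then gives the wavelength where the radiation peaks.

T0 = 2.725         # CMB temperature today, in kelvin
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

def peak_wavelength_m(z):
    """Wavelength (in metres) at which the CMB peaked at redshift z."""
    temperature = T0 * (1.0 + z)
    return WIEN_B / temperature

print(peak_wavelength_m(0))     # ~1.1e-3 m: microwaves today
print(peak_wavelength_m(1100))  # ~9.7e-7 m: around the red/near-infrared
```

At last-scattering the temperature was about 3000 K, so the radiation peaked just beyond the red end of the visible band – hence the “pretty sky” remark above.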
Now, what I didn’t say and should have done is that not all CMB photons are the same. They are all very similar, but depending on the density of the region of space they were in at the time of last-scattering, today they are either a little bit hotter than the average, or a little bit colder. It’s a very small difference – around 1 part in 100,000! But as I said before, a good enough experiment is able to pick up these tiny differences. When we look at the sky, we obviously only see the CMB photons travelling towards our telescope (they are travelling uniformly in every direction), but these tiny differences are projected on the sky in what is now a very familiar pattern, one that I need to show time and time again because it really is beautiful:
What you’re seeing here is the whole sky, as seen in the microwave band (after having radiation from our own galaxy cleverly removed). Red patches are slightly hotter photons, blue patches are slightly colder photons. These patches tell you about the density distribution in the early Universe. It’s these fluctuations that seeded the density fluctuations that gave birth to stars and galaxies and you and me, but I’ll leave that for another time.
Right now, I want to focus on a small patch of these fluctuations. If you look at the bottom right corner rim, around 4:30 if that map were a clock (oooohhhhhh!! That’s an idea! Can I have a CMB clock, anyone?), you’ll find a cold patch that has been nicknamed the Cold Spot (one day I’ll write a rant about how astronomers, being a rather clever and creative bunch of people, are rubbish at coming up with good names for things. Mm… maybe I just have.). Now, by eye the Cold Spot doesn’t look any different from other cold spots around the map. However, statistically, and given our model for the early Universe (which describes things pretty well, including the distribution of galaxies we see today), a spot of that shape, size and temperature has a very low probability of existing. ‘Very low’ here means something around 0.1% to 5%, depending on the estimate. This is slightly uncomfortable, and astronomers have spent a significant amount of time studying the Cold Spot.
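As a back-of-envelope aside (my numbers, not anything from the papers), those percentages translate into the usual “sigma” language like this, treating them as two-sided tail probabilities of a normal distribution:

```python
# Convert a two-sided probability into the equivalent number of
# standard deviations ("sigma") of a normal distribution.
from statistics import NormalDist

def sigma_equivalent(p_two_sided):
    return NormalDist().inv_cdf(1.0 - p_two_sided / 2.0)

print(round(sigma_equivalent(0.05), 2))   # ~1.96 sigma
print(round(sigma_equivalent(0.001), 2))  # ~3.29 sigma
```

So the range of estimates puts the Cold Spot somewhere between a roughly 2-sigma and a roughly 3.3-sigma oddity – interesting, but not the sort of significance you’d bet the house on.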
There are a few options here:
1) The Cold Spot was formed at the last-scattering.
2) The Cold Spot was formed during the photon’s path to us.
3) The Cold Spot is an instrumental artifact.
4) The Cold Spot is a data-reduction artifact.
There are papers studying all of the above. 3) and 4) seem unlikely at this stage but should not yet be discarded. Having data from another experiment like Planck, with a whole new reduction pipeline, will help. 1) is fascinating because it could potentially mean there is something amiss from our early-Universe theory. But the paper that prompted this ridiculously long post actually focuses on 2).
Bremer et al. investigate the hypothesis that the reason we see a particularly cold spot in that region of the sky is that there is a large void (a region of space with far fewer galaxies than average) along that line of sight. The trick here is to note that the Universe is expanding – i.e., the shapes and sizes of clusters and voids change with time. The frequency (or temperature) of photons is affected by this change, because the energy they lose or gain when they enter these structures is not completely recovered when they come out. There is a net effect, which is to gain a little bit of energy going through clusters, and to lose a little bit going through voids (this is known as the integrated Sachs-Wolfe effect). There’s a little animation here that may make it clearer.
So Bremer and collaborators chose 5 regions of the sky, all inside the Cold Spot, and took redshift surveys in those regions. This allowed them to see how the distribution of galaxies changed with distance between us and the surface of last-scattering, and they looked for a deficit of galaxies significant enough to imprint the Cold Spot on the observable CMB. They did this by comparing these redshift distributions with those from other regions of the sky (outside the Cold Spot), and did not find any sign of such a deficit, or void. They could only look up to a redshift of 1 (around 7.8 billion years ago) because of instrumental limitations, and they didn’t have enough galaxies below a redshift of 0.35 (around 3.8 billion years ago), but they covered a significant chunk of the time when this effect is most likely to happen and did not find what they were looking for. What this means is that they have discarded at least some of the theories that could lead to point 2), although not all of them.
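Those redshift-to-lookback-time conversions can be checked with a quick numerical integration, assuming a flat universe with standard-ish parameters (H0 = 70, matter density 0.3 – assumed round numbers, not the paper’s exact choices):

```python
import math

# Lookback time for a flat universe with matter plus a cosmological
# constant, integrated numerically with the midpoint rule.

H0_GYR = 70.0 / 978.0  # H0 in 1/Gyr (978 converts km/s/Mpc to 1/Gyr)
OMEGA_M = 0.3          # assumed matter density parameter

def lookback_time_gyr(z, steps=10000):
    """How long ago, in Gyr, light from redshift z was emitted."""
    total = 0.0
    dz = z / steps
    for i in range(steps):
        zi = (i + 0.5) * dz  # midpoint of each integration step
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + (1 - OMEGA_M))
        total += dz / ((1 + zi) * e)
    return total / H0_GYR

print(lookback_time_gyr(1.0))   # roughly 7.7-7.8 billion years
print(lookback_time_gyr(0.35))  # roughly 3.8-3.9 billion years
```

The two printed values line up with the figures quoted above for the survey’s redshift limits.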
So the jury is still out on the Cold Spot. Personally, I’m sometimes tempted to add a 5th option:
5) The Cold Spot was formed at last-scattering, but its significance is being over-estimated.
But that, I’m afraid, is another post (I’m late for work!!).
M. N. Bremer, J. Silk, L. J. M. Davies & M. D. Lehnert (2010), “A redshift survey towards the CMB Cold Spot”, submitted to MNRAS, arXiv:1004.1178v1