Counting galaxy mergers you can’t see


I’m going to pick up where I left off a while ago, when we talked about galaxy evolution. I have a staggering backlog of papers to read on my desk, most of which have the words “merger history”, “mass assembly” or “galaxy pairs”. All of these expressions are more or less equivalent, and they relate to one of the processes we believe regulates galaxy growth – merging. The other one is star formation – by which I mean the process of turning cold gas into stars – but we’ll talk about that some other time.

We know that galaxies merge. Not only do we see it (go to Galaxy Zoo’s Mergers site to revel in pretty merger images, and also to help astronomers get some real science out of them), but it is also a prediction of our current model of structure formation. I’ll cover that model and prediction in another post, but for now let me just say that measuring the rate at which galaxies of different mass or luminosity merge in the Universe is becoming very important as a way to constrain our models of galaxy and structure formation. In other words, it’s time to get quantitative.

So what we want to know is, on average, how many galaxies merge per unit volume, per unit time, as the Universe evolves. If you sit down and think about this for a moment or two, you’ll quickly come to the conclusion that simply counting galaxies that are merging (which you can identify by looking at the images) is one way to go. But this is only possible relatively nearby – as we go to higher redshift it becomes increasingly hard to get good enough images. Still, a number of people have been working hard at measuring this, and pushing this sort of analysis forward.

Another way to go is to simply count galaxy pairs that are closer than a given physical separation. You can assume that if galaxies are close enough then gravity will win at some point, and the galaxies will merge. The upshot is that you don’t need images good enough to actually show the galaxies interacting, so you can take this to higher redshift. The downside is that you need to make assumptions about what this physical separation should be and, perhaps more importantly, how long it will take the galaxies to merge – the dynamical timescale. Another disadvantage comes from the fact that you miss pairs of galaxies in which one of the two is very faint – so you are limited to counting pairs of bright galaxies. The jargon for the merging of two galaxies of similar mass or luminosity is “major merging”. Some neat pieces of work have come out of this, and have measured the major-merger rate of luminous galaxies to a respectable redshift. The last one I read (but by no means the only, nor the last!) was by Roberto de Propris et al. (2010), who did this up to a redshift of 0.55, but there are measurements of this quantity which span the last 8 Gyr or so of the lifetime of the Universe (equivalent to a redshift of 1).
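To make the bookkeeping a bit more concrete, here is a minimal sketch of how a close-pair count gets turned into a merger rate. All of the numbers, and the assumed merger timescale, are made up for illustration – they are not taken from de Propris et al. or any other paper mentioned here.

```python
# Illustrative sketch: turning a close-pair count into a merger rate.
# All numbers below are invented for the example, not from any real survey.

n_close_pairs = 120        # pairs closer than the chosen separation (e.g. ~20 kpc)
n_galaxies = 5000          # bright galaxies in the survey volume
volume_mpc3 = 1.0e6        # comoving survey volume in Mpc^3
merge_fraction = 0.6       # assumed fraction of close pairs that actually merge
t_merge_gyr = 0.5          # assumed dynamical (merger) timescale in Gyr

pair_fraction = n_close_pairs / n_galaxies
# Merger rate per unit volume: pairs that will really merge, divided by how
# long they remain visible as a pair, divided by the volume surveyed.
merger_rate = n_close_pairs * merge_fraction / t_merge_gyr / volume_mpc3

print(f"pair fraction: {pair_fraction:.3f}")
print(f"merger rate:   {merger_rate:.2e} mergers / Gyr / Mpc^3")
```

The two assumed quantities – the fraction of pairs that really merge and the merger timescale – are exactly the ones this method is sensitive to, which is why I called them a downside above.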

A few weeks back, however, I read another paper which took a different and rather interesting approach to the subject. This is the work of Sugata Kaviraj et al. (2010), and their idea is as follows. Measurements like the ones I described in the paragraph above give you the number of major mergers happening at some point in the Universe’s history. These mergers, however, leave a signature in the shapes of the galaxies for a certain time – they look disturbed (i.e., not smooth), until they finally relax into one larger, smoother, stable galaxy. This means that you should be able to predict how many galaxies of a given mass, on average, should look disturbed at any point in time by assuming a measured rate of mergers in the past.

And so they did. They took a whole load of high-resolution images from the Hubble Space Telescope and looked for signs of disturbance in elliptical galaxies. What they found (amongst other neat things that I don’t have time to go into) is that there are too many of these disturbed galaxies if we assume that the measured merger rates are correct. But hang in there for a minute – those rates are limited to major mergers, because we can’t see the minor mergers when looking at pairs. So Sugata Kaviraj and collaborators postulate that the excess is due to these minor mergers – we can’t see them happening at high redshift, but we can see their effect at lower redshift. Moreover, they also find these minor mergers to be significantly more dominant than major mergers since a redshift of one, suggesting that galaxies have been growing by accreting smaller (fainter) galaxies in the recent Universe, but that this was potentially very different at high redshift.

Other people have found this sort of behaviour in some way or another (including me!), but I was happy to see a rather neat way to.. well.. see (and measure) the unseen.

R. De Propris, S. P. Driver, M. M. Colless, M. J. Drinkwater, N. P. Ross, J. Bland-Hawthorn, D. G. York, & K. Pimbblet (2010). An upper limit to the dry merger rate at z ~ 0.55. ApJ. arXiv:1001.0566v1

Sugata Kaviraj, Kok-Meng Tan, Richard S. Ellis, & Joseph Silk (2010). The principal driver of star formation in early-type galaxies at late epochs: the case for minor mergers. MNRAS (submitted). arXiv:1001.2141v1


Dark Matter Week: Do you feel lucky today?


First things first, we owe you an apology here at we are all in the gutter. Well, Stuart and I do, for not having followed up with our posts on Wednesday and Thursday. We promised to tell you a little bit more about dark matter before today’s announcement and, alas, we sold you short. We still intend to do so, but in the meantime Friday caught up with us: the Cryogenic Dark Matter Search (CDMS) experiment has announced its results right on schedule, and we felt we should tell you what they are.

They’ve published a really good summary here, which I’ll now take the liberty of quoting because it’s rather clear.

First, a little bit about the experiment itself:

The Cryogenic Dark Matter Search (CDMS) experiment, located a half-mile underground at the Soudan mine in northern Minnesota, uses 30 detectors made of germanium and silicon in an attempt to detect such WIMP scatters. The detectors are cooled to temperatures very near absolute zero. Particle interactions in the crystalline detectors deposit energy in the form of heat, and in the form of charges that move in an applied electric field. Special sensors detect these signals, which are then amplified and recorded in computers for later study. A comparison of the size and relative timing of these two signals can allow the experimenters to distinguish whether the particle that interacted in the crystal was a WIMP or one of the numerous known particles that come from radioactive decays, or from space in the form of cosmic rays. These background particles must be highly suppressed if we are to see a WIMP signal. Layers of shielding materials, as well as the half-mile of rock above the experiment, are used to provide such suppression.

The CDMS experiment has been searching for dark matter at Soudan since 2003. Previous data have not yielded evidence for WIMPs, but have provided assurance that the backgrounds have been suppressed to the level where as few as 1 WIMP interaction per year could have been detected.

We are now reporting on a new data set taken in 2007–2008, which approximately doubles the sum of all past data sets.

One of the hardest things about such an experiment is to ensure that you know your kit well enough to tell the real signal from background noise that comes from other sources (i.e., not dark matter):

With each new data set, we must carefully evaluate the performance of each of the detectors, excluding periods when they were not operating properly. Detector operation is assessed by frequent exposure to sources of two types of radiation: gamma rays and neutrons. Gamma rays are the principal source of normal matter background in the experiment. Neutrons are the only type of normal matter particles that will interact with germanium nuclei in the billiard ball style that WIMPs would, although neutrons frequently scatter in more than one of our detectors. This calibration data is carefully studied to see how well a WIMP-like signal (produced by neutrons) can be seen over a background (produced by gamma rays). The expectation is that no more than 1 background event would be expected to be visible in the region of the data where WIMPs should appear. Since background and signal regions overlap somewhat, achievement of this background level required us to throw out roughly 2/3 of the data that might contain WIMPs, because these data would contain too many background events.

A particularly interesting aspect of the data analysis – one commonly used in this type of experiment – is that it is blind. The CDMS team explains:

All of the data analysis is done without looking at the data region that might contain WIMP events. This standard scientific technique, sometimes referred to as ‘blinding’, is used to avoid the unintentional bias that might lead one to keep events having some of the characteristics of WIMP interactions but that are really from background sources. After all of the data selection criteria have been completed, and detailed estimates of background ‘leakage’ into the WIMP signal region are made, we ‘open the box’ and see if there are any WIMP events present.

And so, what did they find? In short, they found two events that fit the bill. If these are indeed real events, this would be the first direct detection of dark matter in scientific history. But alas, it’s never that simple. There’s a non-negligible chance, of around 23%, that these two events were created by background sources – i.e., that the signal hitting the detectors was the result of an interaction of particles which have nothing to do with dark matter. Of course, this means there’s a 77% chance that these two events were real, but 77% is generally not considered high enough to make a detection statistically significant in scientific terms. Here is how the CDMS team explains it:

In this new data set there are indeed 2 events seen with characteristics consistent with those expected from WIMPs. However, there is also a chance that both events could be due to background particles. Scientists have a strict set of criteria for determining whether a new discovery has been made, in essence that the ratio of signal to background events must be large enough that there is no reasonable doubt. Typically there must be less than one chance in a thousand of the signal being due to background. In this case, a signal of about 5 events would have met those criteria. We estimate that there is about a one in four chance to have seen two backgrounds events, so we can make no claim to have discovered WIMPs. Instead we say that the rate of WIMP interactions with nuclei must be less than a particular value that depends on the mass of the WIMP. The numerical values obtained for these interaction rates from this data set are more stringent than those obtained from previous data for most WIMP masses predicted by theories. Such upper limits are still quite valuable in eliminating a number of theories that might explain dark matter.

So why is everyone so excited, if the significance is not enough to break out the champagne? Simply put, because even though none of us is going to take this as scientific proof of dark matter detection, most of us also know that real detections are often preceded by marginal detections. And that’s exciting. There’s a feeling that we’re really not that far off, and 77% can feel very encouraging. In the words of my office mate – 77% has never been more exciting.
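For the statistically minded, here is roughly where a number like that 23% can come from: if the expected number of background events is a little under one, the Poisson probability of seeing two or more of them is about a quarter. The expected background value below is one I picked simply to reproduce the quoted figure – it is not taken from the CDMS analysis.

```python
import math

# Poisson probability of seeing k or more events when mu events are expected.
def prob_at_least(k, mu):
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

mu_background = 0.9   # assumed expected background events (illustrative only)
print(f"P(>= 2 background events) = {prob_at_least(2, mu_background):.2f}")
# ~0.23, i.e. roughly the one-in-four chance quoted by the CDMS team.
```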


Good reads

Quick post to let you know of two new blogs (well, new to me) that have caught my eye. Firstly, let me welcome Duncan and Well-Bred Insolence to my RSS reader – keep an eye over there for news on planet formation, discovery and possible inhabitants, amongst other things. He beat us to telling you about HARPS’ latest addition to the list of known extra-solar planets. Glad he did too, firstly because we would have been intolerably late with that news, and secondly because he knows a fair bit more about it than we do.

And then there’s The Big Blog Theory, a blog by David Saltzberg – the science advisor to The Big Bang Theory (the American sitcom, not the beginning of the Universe) – who explains the science behind the episodes. My favourite TV sitcom just got better!


When telescopes get really, really big


In our first post exploring galaxy evolution, we saw how observing galaxies at different distances from us is crucial for our understanding of how galaxies form and evolve. It also naturally follows that the larger the range of distances we can study, the better we can constrain our theories. So it’s no surprise that astronomers have always been hunting for the most distant galaxies – it’s a sort of high-flying game in the astronomy community, and breaking the record for the most distant galaxy observed is no mean feat.

More distant objects appear fainter on average, and so are harder to detect. Traditionally, then, one looks to technological improvements in order to make advances in this area. For example, a larger telescope has a larger light-collecting area. It is therefore more sensitive, and able to detect fainter objects in a given time. One can also observe a region of sky for a longer period of time, which again increases the number of photons that we collect. Astronomers call this deep imaging.

Recently, the public release of very deep imaging from the Hubble Space Telescope’s new Wide Field Camera 3 generated a rush of papers looking precisely for very high-redshift galaxies. Look, for example, at Bunker et al., McLure et al., and Oesch et al., among others, which were mostly submitted within a few hours of each other, and just a few days after the data were publicly released – astronomy doesn’t get much more immediately competitive (or stressful?) than this! The work in these particular papers requires not only deep imaging, but also wide coverage of the electromagnetic spectrum – i.e., they need sensitive images of the same region of the sky in different colours, and the redder the better.

These papers detected galaxies at redshifts between 7 and 8.5. Or, in more common units, these galaxies are at least 12,900,000,000 light-years away. That means the light that was detected by the Hubble Space Telescope, and on which these papers and their scientific analysis are based, left those galaxies 12,900,000,000 years ago. The Universe was only around 778,000,000 years old back then. To go back to our previous analogy, it’s like looking at a snapshot of me when I was less than 2 years old. Admittedly I had bathed by then, but that’s still very, very young!
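If you want to play with these numbers yourself, the conversion from redshift to light-travel time and to the age of the Universe comes straight out of a cosmological model. Here is a quick sketch using astropy; the exact figures depend on which cosmological parameters you adopt, and the ones below are just a standard choice, not necessarily those used in the papers.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# A standard flat Lambda-CDM cosmology (illustrative parameter choice).
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

for z in (7.0, 8.5):
    lookback = cosmo.lookback_time(z)   # how long ago the light left the galaxy
    age_then = cosmo.age(z)             # age of the Universe at that redshift
    print(f"z = {z}: light left {lookback:.2f} ago, "
          f"when the Universe was {age_then.to(u.Myr):.0f} old")
```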

These papers are interesting and important in their own right, but what prompted me to come and tell you all of this was actually the work of Bradac et al., which has the same goal and uses the same basic techniques as the above, but it cheats. And it’s the way it cheats that makes it really rather neat.

Bradac and co-authors use not only human-made telescopes, but also harness the power of gravitational lensing to turn a galaxy cluster (the Bullet Cluster) into an enormous cosmic telescope. We have covered here before how mass affects space, which in turn affects the way light travels. Matter can act to focus light from distant objects – and galaxy clusters have a lot of matter. This makes them rich and exciting playgrounds for astronomers, who have long used gravitational lensing to probe the distant Universe.

Bradac and friends have additionally shown just how effective it can be at measuring the density and properties of distant galaxies, and how much there is to gain from a given image (with a given sensitivity limit) when there is a strong and appropriately focused cosmic lens in the field of view. Distant galaxies are also magnified – their angular size on the sky increases, compared to an unlensed image – and that allows us to look at them in more detail, and study their properties. As a bonus, given that cosmic telescopes often produce more than one lensed image of any one given galaxy, they can use these multiple images to help with the distance measurement and avoid some contamination.

They didn’t set any distance records, as the wavelength range of their imaging wasn’t quite right for that. But with the right imaging, the right clusters and the right analysis, the authors argue that this is the way forward for this sort of study. Galaxy evolution is hard and full of technical challenges, so using galaxy clusters as gigantic telescopes can certainly go a long, long way.

M. Bradač, T. Treu, D. Applegate, A. H. Gonzalez, D. Clowe, W. Forman, C. Jones, P. Marshall, P. Schneider, & D. Zaritsky (2009). Focusing Cosmic Telescopes: Exploring Redshift z~5-6 Galaxies with the Bullet Cluster 1E0657-56. Accepted for publication in ApJL. arXiv:0910.2708v1

R. J. McLure, J. S. Dunlop, M. Cirasuolo, A. M. Koekemoer, E. Sabbi, D. P. Stark, T. A. Targett, & R. S. Ellis (2009). Galaxies at z = 6–9 from the WFC3/IR imaging of the HUDF. Submitted to MNRAS. arXiv:0909.2437v1

P. A. Oesch, R. J. Bouwens, G. D. Illingworth, C. M. Carollo, M. Franx, I. Labbe, D. Magee, M. Stiavelli, M. Trenti, & P. G. van Dokkum (2009). z~7 Galaxies in the HUDF: First Epoch WFC3/IR Results. Submitted to ApJL. arXiv:0909.1806v1

Andrew Bunker, Stephen Wilkins, Richard Ellis, Daniel Stark, Silvio Lorenzoni, Kuenley Chiu, Mark Lacy, Matt Jarvis, & Samantha Hickey (2009). The Contribution of High Redshift Galaxies to Cosmic Reionization: New Results from Deep WFC3 Imaging of the Hubble Ultra Deep Field. Submitted to MNRAS. arXiv:0909.2255v2


300 done, 1500 to go

Last week I was in Milan doing something I traditionally don’t really do – getting my hands dirty with data, and doing my bit to make it usable for science.

By data, in this case, I mean 100,000 spectra collected by the VIMOS spectrograph – one of the instruments on the Very Large Telescope at Paranal – for a new project called VIPERS. A spectrum is simply the light of an object decomposed into a relatively large number of components. So instead of decomposing an image into, say, 3 colours, a spectrum will often have hundreds to thousands of different pixels – or colours, really – so we can see exactly how “red” or “blue” an object is. Think of it as a rainbow – which is simply the light of the Sun decomposed into a number of colours. And yes, astronomers spend a large amount of effort effectively making rainbows out of the light of galaxies.

Spectra of objects (stars, galaxies, quasars, planets, etc.) are one of the most useful observables of our Universe. Depending on whether you are a cosmologist or someone who studies galaxy evolution, you want them for different reasons. Today I’m going to put my cosmologist hat on.

As we’ve covered here before, galaxies have a very rich range of properties which we can use to trace their evolution. With our hat on today, none of that matters – the astonishing thing is that we can learn an awful lot about the evolution of the Universe as a whole simply by studying the positions of these galaxies. This is the realm of observational cosmology – the study of the birth, evolution, dynamics, composition and ultimate fate of our Universe through the spatial distribution of galaxies. This may be a somewhat narrow description of a huge field of modern-day astronomy, but to first order it’s perfectly correct.

So why is the spatial distribution of galaxies so revealing? Well, let us start by considering the components of our Universe. In our current standard model we have four main components: radiation, baryonic matter, dark matter and dark energy. The first two we are very familiar with – baryonic here just refers to the matter that makes up all we see in the Universe: ourselves, the solar system, distant galaxies, far away planets, etc. The last two are more of a mystery, and excitingly they are also the two most important components of our Universe today. I must leave a more thorough explanation for another post, but for now let us state that there is about 5 times more dark matter out there than baryonic matter, but we can’t see it. It interacts gravitationally with baryonic matter, so we can detect its presence, and because it is so much more abundant it is dark matter – not baryonic matter – that has the most influence in the master game of tug of war that is the evolution of our Universe. The other player is dark energy. Now, I can’t tell you what dark energy is (nobody can), but I can tell you that it behaves in the opposite way to gravity. Whereas one attracts, the other repels. Whereas one brings things closer together, the other takes them apart.

The amounts of dark matter and dark energy govern the dynamics of our Universe, but we can’t see either of them directly. So we turn to what we can see: baryonic matter, via the radiation produced mainly (but not exclusively) by stars. The neat thing is that baryonic and dark matter attract each other gravitationally, so baryonic matter traces dark matter. And this is why simply measuring where galaxies sit is so insightful – it gives you a tool to track dark matter and see how its spatial distribution evolves. This in turn tells you about the interplay between dark matter and dark energy – depending on which one dominates, by how much and for how long, the evolution of the spatial distribution of dark matter (and therefore of galaxies) will be different. And that we can measure! The spatial distribution (or clustering) of galaxies is one of the most promising tools to tell us about dark energy and how structure has grown in our Universe.

But before any of this can be done, we need to know the distance to each galaxy. This means measuring their redshifts, using the spectrum of each galaxy to determine how fast it is receding from us – which in turn tells us how far away it is. If the data coming out of the telescope and spectrograph are good enough, the whole process can be fully automated. Our situation, however, is slightly different: due to some instrumental artefacts, the red side of the spectra is well below standard, and the computerised pipeline often fails to assign the correct redshift.

This is where I (and around 25 other people) come in. Humans are much, much better at spotting mistakes than computers, so each of the 100,000 spectra is being looked at by at least two different people to make sure that the redshift is correct, and that when the time comes to do science we are doing it with reliable data. As I mentioned, this is not something I’d ever done before, so my time in Milan was used to get me trained and up to scratch with the software. I measured 300 redshifts this week, which is rather slow, especially when I think that I have around 1500 more to do in the next couple of months!

It’s a huge task but one that really needs to be done. And on the upside, I will never get bored on a train or airport ever again!
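In case you are wondering what “measuring a redshift” actually boils down to, here is a minimal sketch. The wavelengths below are invented for illustration; a real pipeline (or a patient human) matches many lines and features at once, which is exactly why a low-quality red end of the spectrum causes so much trouble.

```python
# A minimal sketch of a redshift measurement: compare where a known spectral
# feature is observed with where it sits at rest. Wavelengths are illustrative.

REST_OIII = 5007.0            # rest-frame wavelength of the [OIII] line, in Angstrom
observed_wavelength = 8261.6  # where the line shows up in a (made-up) spectrum

z = observed_wavelength / REST_OIII - 1.0
print(f"redshift z = {z:.3f}")
```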


Galaxy evolution 101

I think it’s about time that we start covering some ground in galaxy evolution, here in weareallinthegutter. We won’t do it in one post. We won’t do it in 100 either, simply because galaxy evolution is not yet solved. But of course, that only makes it more exciting.

Let us start with some of the basics then, and lay down our aims. The goal of galaxy evolution, in its broadest terms, is to explain how galaxies are born and how they evolve throughout cosmic history. A successful theory will give us a framework which, given an ensemble of galaxies at one point in time, can predict how these same galaxies (or another ensemble just like it) will end up in the future.

We have an advantage here, as Astronomers, in that we can look at the Universe during different stages of its past evolution. The trick is in the finite speed of light – for example we see a star which is 10 light years away as it was 10 years ago. In other words, light takes 10 years to travel from this star to us, and an observer sitting on a planet around this star would see me not sitting at my computer right now, but 10 years younger and sitting someplace else (probably a lot warmer).

So the further we look, the further back in time we’re travelling. If you’re studying galaxy evolution, then, this is incredibly advantageous: by looking at galaxies which are at different distances from us, we are looking at how galaxies looked at different stages of the cosmic evolution. Our job is to draw a coherent story line through these stages.

What, then, should our observations be? Let us start simply, with two sets of galaxies – one set near us, and one set far away from us. Each galaxy has a set of characteristics which we may want to study – for example its shape, known in the business as morphology; its colour; its brightness; its mass; its chemical composition; its dynamics (the way it moves); or even its neighbourhood, or environment. The truth is, there are many ways in which one could describe a galaxy, much in the same way as I could choose a variety of characteristics to describe a person. I could go for height, arm length, hair colour, eye colour, number of eyelashes, gender, etc. Some, you will agree, are more useful than others, depending on what I’m studying about a person or group of people. It’s the same with galaxies.

It turns out that one of the most defining characteristics of a galaxy is its colour. And not just any colour – galaxies tend to be either blue or red. The colour is related to the age of the dominant stellar component – old stars are red, young stars are blue – so the colours themselves are easily explained. But what is surprising is that galaxies tend to sit very much on either the red or the blue side of the fence. There are very, very few galaxies which sit on the fence and are, for example, green. This in itself is very revealing – it means that whatever process makes galaxies go from red to blue (or the other way around) must happen quickly. If this transition is fast, we are less likely to observe a galaxy during this period, which explains why we see so few galaxies perching on the fence.

Good. Now, remember that we have two sets of galaxies – one near, and one far from us. If we have some theory of how galaxies go from blue to red and vice-versa, we should be able to predict the fraction of red and blue galaxies in the present (those near us) by measuring it in the past (in those far from us). Our observations of the near Universe should therefore help prove or disprove our theory for galaxy evolution.

This is the mantra of many a paper in galaxy evolution. Observables get more or less complicated – for example, instead of just looking at how the number of red and blue galaxies evolves, we can look at how bright they are, how fast they make stars, how they’re distributed in space, their environment, etc. But essentially, this is what galaxy evolution is all about – and it’s hard!

Two papers have recently caught my eye on this particular matter, so let me very briefly tell you about them. Last month, Tinker et al. looked at these clouds of red and blue galaxies at different distances from us, and tried to make sense of the timescale of the process which drives the blue-to-red transition. The process itself is still unconstrained, but what they did find is that whatever dominates this evolution today is different from what dominated it in the early Universe. And a bit later in the month, Zucca et al. studied how this transition depends not only on epoch, but also on the environment of the galaxies. Interestingly, they found that in very dense regions (i.e., more packed regions of the Universe, where there are more galaxies per unit volume) most of the blue-to-red transition happened over 7 Gyr ago. However, in more sparsely populated regions of the Universe, this transition is still happening today.

So the picture is complex – galaxies appear to evolve via different processes according to the age of the Universe and according to their environment. This is not a surprise, but it is exactly this sort of observational constraint that helps test, prove and, most often, disprove ideas for galaxy evolution – such constraints are as important as they are technically and instrumentally hard to obtain.

I’ll leave you now with this very brief and basic first introduction to galaxy evolution, but I promise to come back with more observational constraints, and with some explanation of what the theorists have to offer.

Jeremy L. Tinker, & Andrew R. Wetzel (2009). What Does Clustering Tell Us About the Buildup of the Red Sequence? ApJ. arXiv:0909.1325v1

E. Zucca, S. Bardelli, M. Bolzonella, G. Zamorani, O. Ilbert, L. Pozzetti, M. Mignoli, K. Kovac, S. Lilly, L. Tresse, L. Tasca, P. Cassata, C. Halliday, D. Vergani, K. Caputi, C. M. Carollo, T. Contini, J. P. Kneib, O. LeFevre, V. Mainieri, A. Renzini, M. Scodeggio, A. Bongiorno, G. Coppa, O. Cucciati, S. delaTorre, L. deRavel, P. Franzetti, B. Garilli, A. Iovino, P. Kampczyk, C. Knobel, F. Lamareille, J. F. LeBorgne, V. LeBrun, C. Maier, R. Pello`, Y. Peng, E. Perez-Montero, E. Ricciardelli, J. D. Silverman, M. Tanaka, U. Abbas, D. Bottini, A. Cappi, A. Cimatti, L. Guzzo, A. M. Koekemoer, A. Leauthaud, D. Maccagni, C. Marinoni, H. J. McCracken, P. Memeo, B. Meneux, M. Moresco, P. Oesch, C. Porciani, R. Scaramella, S. Arnouts, H. Aussel, P. Capak, J. Kartaltepe, M. Salvato, D. Sanders, N. Scoville, Y. Taniguchi, & D. Thompson (2009). The zCOSMOS survey: the role of the environment in the evolution of the luminosity function of different galaxy types. A&A. arXiv:0909.4674v1


First light from Planck

Earlier this year, on the 14th of May, an Ariane 5 rocket launched from French Guiana with an astonishingly precious cargo on board: the Herschel and Planck satellites, two very ambitious European astronomy experiments, each with its own mission.

You may remember that a while ago Emma announced the release of the first images from Herschel, and last Thursday it was the turn of its sister mission’s first light to be set free to the public. Planck is out there to capture radiation which was created when the Universe was incredibly young, a mere 380,000 years old or so – we call it the Cosmic Microwave Background radiation. If that sounds like a lot, remember that the Universe is around 13,500,000,000 years old today (give or take a hundred million years or so)! In human terms, it’s the equivalent of looking at a picture of myself 6 hours after my birth – prior to my first bath, even.

The difference between me and the Universe – or one of them – is the fact that we can tell an awful lot about the content, geometry and evolution of the Universe from such an early picture. Our understanding of cosmology has in fact been shaped by a previous experiment that has been mapping the Cosmic Microwave Background since 2001 – NASA’s Wilkinson Microwave Anisotropy Probe. WMAP has answered many questions and raised tons more, both being the mark of a successful science experiment. Planck is like WMAP in its primary goal – which is to map the Cosmic Microwave Background with unprecedented detail and precision – but differs in just how well it can do it, which in turn broadens the scope of scientific questions it can answer. And of those it can raise!

So here you have it – a true, honest to heart picture of the Universe when it was less than 400,000 years old:

What you see is actually a combination of two pictures, so let me explain. The colourful strip that twists around the picture is the data that has come from Planck. The background is an image of the full night sky, projected into two dimensions (pretty much like you would for a world map) and with our own Milky Way running along the centre. The background is just there to give you a frame of reference – eventually Planck will map the whole sky, as that colourful strip extends to cover more and more of it. The reason the two look so different is that Planck is designed to pick up radiation in the microwave region of the electromagnetic spectrum, whereas the background picture is in visible light.

Planck will also look at each region of the sky multiple times, and each time it does it will improve the scientific value of the data. This is simply a preliminary picture, by way of an example – but the data quality is excellent and all seems to be in place for a highly successful and smooth mission. We have to wait a while yet for the first science results to come out, but rest assured that we will cover them here on weareallinthegutter as they arrive.

These are good times, exciting times for science and cosmology – exciting times indeed!


Eclipse week 5 – 1919

It was nearly 10 years ago that I had the chance to witness my first – and only – total solar eclipse. It was every bit what it’s cracked up to be, and the feeling that you’re witnessing something special comes effortlessly. You don’t have to be an astronomer to enjoy the uniqueness and transience of such a brief moment, and millions of people in Asia had that same pleasure this week – one I most certainly hope to have again in the near future.

But you probably do have to be an Astronomer to turn an eclipse into an event which will change the face of Physics. So today we’re not here to talk about the 1999 or the 2009 eclipses, but rather the total eclipse of the 29th of May, 1919. This puts us four years after Albert Einstein published his theory of General Relativity, in 1915.

General Relativity, at its most basic, is a theory of gravitation. It provides a theoretical framework which we can use to explain the behaviour of gravitationally interacting systems that we observe – like the solar system, or the Earth-Moon system, the Milky Way, etc. Most importantly, like any other scientific theory, it also allows us to make predictions about how gravity should affect these, and other, systems.

Prior to Einstein and General Relativity, our understanding of gravity was based on Newton’s theory of gravitation, often also referred to as the classical theory. In what we may call “everyday situations” both theories make essentially the same predictions, but General Relativity makes significantly different predictions, or presents very different explanations, for things like the geometry of space, the passage of time and how light propagates. Crucially, General Relativity predicts that the mass of an object affects spacetime (the merging of space and time into a single mathematical object is itself a consequence of General Relativity), and that the shape of spacetime affects the way light travels. These are stunningly counter-intuitive ideas, because they are only noticeable in regimes far detached from our everyday experience. For example, we need a very large amount of mass to notice even a very small deflection in light’s path.

But Einstein was not the first to suggest that mass affects the path of light. In 1801, Johann Georg von Soldner used Newton’s corpuscular theory of light – which states that beams of light are streams of particles of tiny mass – to calculate how mass would affect the path of a beam of light. He arrived at a result which is known as the Newtonian result for the bending of light. However, because we now know that light is in fact made of massless photons, there is no formal way to correctly treat the bending of light in Newtonian gravity. This was not known at the time, so the result stuck. Unaware of Soldner’s calculations, Einstein in 1911 calculated what the bending of light should be in his new theory of General Relativity, which at the time was a work in progress. He reached the same value that Soldner had, over 100 years previously. Crucially, once his theory was finalised in 1915, Einstein revisited this problem and realised he had made an error – the bending of light, once one takes into account the curving of space, should be twice the Newtonian result. Such a clear distinction between the predictions of two competing theories is a blessing – it gives scientists the chance to design an experiment which can show which one, if either, is correct.
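To put numbers on that factor of two: for a ray of light grazing the edge of the Sun, General Relativity predicts a deflection of about 1.75 arcseconds, while the Newtonian result is half that. Here is a quick back-of-the-envelope check of my own, using astropy’s physical constants and the standard deflection formula – it is not taken from the original papers.

```python
from astropy import constants as const
import astropy.units as u

# Deflection of a light ray grazing the solar limb.
# General Relativity: alpha = 4GM / (c^2 R); the Newtonian result is half of that.
alpha_gr = (4 * const.G * const.M_sun / (const.c**2 * const.R_sun)).decompose() * u.rad

print(f"GR deflection:        {alpha_gr.to(u.arcsec):.2f}")        # ~1.75 arcsec
print(f"Newtonian deflection: {(alpha_gr / 2).to(u.arcsec):.2f}")  # ~0.87 arcsec
```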

We couldn’t look on Earth for a suitable system in which to measure this effect, but in 1919 Astronomer Royal Sir Frank Dyson and Plumian Professor of Astronomy at Cambridge Arthur Eddington decided to look elsewhere – they turned the Sun, the Moon, and a distant cluster of stars called the Hyades into their own laboratory. The idea is that the mass of the Sun is large enough to bend light which passes nearby, such as that of distant stars which sit behind or very near the Sun from the Earth’s perspective. This is of course happening all the time, but we simply can’t see the light of distant stars near the Sun because the Sun is so much brighter than the stars we are trying to observe. Unless something really big gets in the way and blocks the light from the Sun – say the Moon (which is actually much smaller than the Sun, but sits just at the right distance – see Emma’s post). A total eclipse therefore makes the perfect opportunity to see how the positions of stars in the sky change when their light has to travel close to the Sun on its way to us.

The eclipse of 1919 was a good and timely opportunity to measure this effect, and expeditions to the island of Principe (off the west coast of Africa) and to Sobral in Brazil were planned to do precisely that. The Moon would block the light from the Sun for almost 7 minutes – an exceptionally long period of totality! What no astronomer can ever do, however, is plan the weather. And so, for over 6 minutes, Eddington waited and stared at a cloud… and prayed. The cloud did disappear, giving Eddington and his team 10 seconds – 10 precious seconds! – to take the needed photo. You can see the negative of this photo on the left, and if you look carefully you can see the horizontal lines which mark the positions of the stars. The expedition in Brazil got perfect weather, but a flaw in the telescope setup later meant that the results could not be used. So Eddington and Dyson went home to analyse the data, while the community – and Einstein – waited.

Within the margin of error, the shifts observed in the positions of the stars were more in agreement with Einstein’s predictions than with Newton’s, and Einstein was thrown into stardom once the results were announced. It is worth mentioning that there was healthy controversy at the time, and Eddington’s data analysis was challenged – as it should have been – by the scientific community. Such a leap in scientific thinking never gets an easy ride! But results from following expeditions confirmed the 1919 results, as did other experiments which measured slight departures from Newtonian predictions in different systems (the orbit of Mercury being one example). General Relativity has been proven time and time again to be accurate within our measurement errors, and it’s a deep and beautiful theory. Given another chance I will tell you why and how some scientists feel the need to adapt General Relativity to explain some recent observations of the distant Universe, and how controversial and interesting a topic that is. But not now…

Personally, I find it rather interesting that this event really catapulted Einstein into the public eye. You can see on the right one of the headlines at the time, and in fact the story was picked up by newspapers and magazines all over the world. It is particularly interesting given how complex General Relativity is, which doesn’t make it a readily accessible theory for the public. This didn’t stop the world from taking an interest, and hopefully it encouraged many people to try and understand it, even if only a small part.

I also find it amusing that two other expeditions, planned for 1912 and 1914, failed due to bad weather and the war. Had they happened before Einstein corrected his predictions (in 1915), his theory would have been proven wrong – before he had a chance to correct it!

I’ve stayed too long, but for those who want to know more let me recommend Peter Coles’ excellent exposition of this subject here.

I believe this concludes our first mini-series of posts! I hope you enjoyed it, and learnt a thing or two. If you have any ideas for more mini-series just leave us a comment – we’ll be happy to consider it.


I recently came across this comic, which nicely encapsulates part of my difficulty in blogging about Astronomy, especially to a completely unspecified audience. I’m easily excitable, and will get a kick out of most of what this physical world can throw at me. Also chocolate. However, I fear that sometimes the usual perception that the rest of the world might not share this enthusiasm for, well… everything… can hold me back. Well, not here. It might not be wireless, chocolate-empowered robots which will ultimately destroy the world, but if I think it’s cool, there’s a chance you might too. Watch this space!

