A spot of Cold

I read 3 papers! Three beautiful, science-packed, revolutionary and mind-blowing papers! Ok, maybe not – but trust me, after being so busy with things like measuring redshifts, fixing codes and mountains of admin/conference organising, almost any science is pure beauty for the old brain.

But one of these papers did tap into something I’m quite interested in, and it’s related to the Cosmic Microwave Background (CMB). I’ve briefly mentioned the CMB before, but here’s a proper introduction: when we say CMB we are talking about radiation that was created when the Universe was very, very young – around 380,000 years old. At that time the Universe was hot, and radiation (photons) was the dominant component of the Universe. Because it was so hot, the Universe was actually opaque – matter was ionized, meaning electrons were bobbling about not really attached to any nuclei because of the high temperature. What this means is that photons could not get very far without bumping into something – they could not travel in a straight line for any decent length of time, and were perpetually scattered around. That’s essentially what opaque means. However, something special happened around 380,000 years into the Universe’s lifetime: the temperature dropped enough for these electrons to settle into atoms, effectively setting the photons free. We call this the time of last-scattering.

These photons were then free to travel unhindered. They were travelling then, and they are still travelling now! Straight into our telescopes, carrying information from around 13 billion years ago, virtually unaltered! This, for cosmologists, is like Christmas – almost too good to be true. Anyway, I say virtually because some things do affect these photons as they travel along. A big one is the fact that the Universe is expanding, which changes their frequency. This is why we see them in the microwave part of the spectrum today – they were a lot hotter 13 billion years ago. Had we lived earlier in the lifetime of the Universe, the CMB would have peaked in the visible band and the sky would be rather pretty! (Of course, whether a planetary system with life could even exist then is another matter.) But the Universe, in spite of being mostly empty, still has a lot of stuff around. You know, galaxies and things. And as these photons go through expanding regions with more matter (clusters) or less matter (voids), their frequency is slightly changed.

Now, what I didn’t say and should have is that not all CMB photons are the same. They are all very similar, but depending on the density of the region of space they were in at the time of last-scattering, today they are either a little bit hotter than the average, or a little bit colder. It’s a very small little – around 1 part in 100,000! But as I said before, a good enough experiment is able to pick up these tiny differences. When we look at the sky, we obviously only see the CMB photons travelling towards our telescope (they are travelling in every direction), and these tiny differences are projected onto the sky in what is now a very familiar pattern, one that I need to show time and time again because it really is a beauty:

The CMB, as seen by the WMAP satellite

What you’re seeing here is the whole sky, as seen in the microwave band (after having radiation from our own galaxy cleverly removed). Red patches are slightly hotter photons, blue patches are slightly colder photons. These patches tell you about the density distribution in the early Universe. It’s these fluctuations that seed the density fluctuations that give birth to stars and galaxies and you and me, but I’ll leave that for another time.

Right now, I want to focus on a small patch of these fluctuations. If you look at the bottom right corner rim, around 4:30 if that map were a clock (oooohhhhhh!! That’s an idea! Can I have a CMB clock, anyone?), you’ll find a cold patch that has been nicknamed the Cold Spot (one day I’ll write a rant about how astronomers, being a rather clever and creative bunch of people, are rubbish at coming up with good names for things. mm.. maybe I just have.). Now, by eye the Cold Spot doesn’t look any different from other cold patches around the map. However, statistically, and given our model for the early Universe (which describes things pretty well, including the distribution of galaxies we see today), a spot of that shape, size and temperature has a very low probability of existing. ‘Very low’ here means somewhere around 0.1% to 5%, depending on the estimate. This is slightly uncomfortable, and astronomers have spent a significant amount of time studying the Cold Spot.

There are a few options here:

1) The Cold Spot was formed at the time of last-scattering.

2) The Cold Spot was formed during the photons’ journey to us.

3) The Cold Spot is an instrumental artifact.

4) The Cold Spot is a data-reduction artifact.

There are papers studying all of the above. 3) and 4) seem unlikely at this stage but should not yet be discarded. Having data from another experiment like Planck, with a whole new reduction pipeline, will help. 1) is fascinating because it could potentially mean there is something amiss with our early-Universe theory. But the paper that prompted this ridiculously long post actually focuses on 2).

Bremer et al. investigate the hypothesis that the reason we see a particularly cold spot in that region of the sky is that there is a large void (a region of space with far fewer galaxies than the average) along that line of sight. The trick here is to note that the Universe is expanding – i.e., the shape and size of clusters and voids change with time. The frequency (or temperature) of photons is affected by this change because the energy they lose or gain when they go into these structures is not completely recovered when they come out. There is a net effect, which is to gain a little bit of energy going through clusters, and to lose a little bit going through voids. There’s a little animation here that may make it clearer.
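If you like to see the bookkeeping, here’s a toy version of that net effect for a void. This is my own illustration with made-up numbers, not anything from the paper – just the roughest possible caricature of the integrated Sachs-Wolfe shift:

```python
# Toy integrated Sachs-Wolfe estimate (illustrative numbers only). Very
# roughly, dT/T ~ (2/c^2) * [Phi(exit) - Phi(entry)], where Phi is the
# gravitational potential the photon traverses. A void is a potential
# "hill" (Phi > 0) that decays while the photon crosses it, so the
# photon comes out slightly colder than it went in.

phi_entry = 1.0e-5   # hill height in units of c^2 (typical magnitude, assumed)
decay = 0.05         # fraction of the hill that decays during the crossing (assumed)
phi_exit = phi_entry * (1.0 - decay)

dT_over_T = 2.0 * (phi_exit - phi_entry)   # Phi already in units of c^2
print(f"net dT/T ~ {dT_over_T:.1e}")       # negative, i.e. a (tiny) cold spot
```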

So Bremer and collaborators chose 5 regions of the sky, all inside the Cold Spot, and carried out redshift surveys in these 5 regions. This allowed them to see how the distribution of galaxies changes with distance between us and the region of last-scattering, and they looked for a deficit of galaxies significant enough to imprint the Cold Spot on the observable CMB. They did this by comparing these redshift distributions with those from other regions of the sky (outside the Cold Spot), and did not find any sign of such a deficit, or void. They could only look up to a redshift of 1 (or around 7.8 billion years ago) because of instrumental limitations, and they didn’t have enough galaxies below a redshift of 0.35 (around 3.8 billion years ago), but they covered a significant chunk of the time when this effect is most likely to happen, and did not find what they were looking for. This means they have discarded at least some of the theories that could lead to point 2), although not all of them.
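Just to give a flavour of what “comparing redshift distributions” can mean in practice, here’s a sketch using a standard two-sample test. To be clear, this is not the authors’ actual analysis – the data below are random stand-ins:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-ins for real catalogues (invented): redshifts of galaxies towards
# the Cold Spot, and towards a control field elsewhere on the sky.
z_spot = rng.uniform(0.35, 1.0, 400)
z_ctrl = rng.uniform(0.35, 1.0, 400)

# Compare the two redshift distributions; a small p-value would flag a
# difference, e.g. a deficit of galaxies (a void) towards the Cold Spot.
stat, p = stats.ks_2samp(z_spot, z_ctrl)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```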

So the jury is still out on the Cold Spot. Personally, I’m sometimes tempted to add a 5th option:

5) The Cold Spot was formed at last-scattering, but its significance is being over-estimated.

But that, I’m afraid, is another post (I’m late for work!!).

M. N. Bremer, J. Silk, L. J. M. Davies & M. D. Lehnert (2010). A redshift survey towards the CMB Cold Spot. Submitted to MNRAS. arXiv: 1004.1178v1


Our Universe

I’m not going to fulfil my pledge just yet, mainly because I haven’t been doing much science recently – instead I was asked to push hard on this (and it turned out that 1500 was much too optimistic; there was a lot more that needed doing!). But I’ve got something a little different for you today.

Quite a long time ago, a few colleagues from the ICG and I decided we’d have a go at making a videocast, called Our Universe, aimed not only at telling people a little bit about our wonderful Universe but also at trying to somehow share what it’s like to be a research astronomer. So we had a go at a pilot episode.

In the meantime, it became clear that none of us had the time to take this any further, so we decided not to pursue the project. It would be a shame, however, if the pilot never saw the light of day – if only because so many wonderful people contributed their time and energy to talk to us. Unfortunately, much of the footage was being saved for future episodes, so most people’s contributions are under-represented in the pilot. That is a shame in itself, but we are genuinely thankful to everyone who spoke to us, or gave us ideas, or feedback, or money. This pilot was fully funded by the ICG, but the views represented in it are not necessarily those of the ICG.

So here it is, as an exclusive to weareallinthegutter:


A pledge

A few people have recently asked me what makes me write about one paper and not another. There seems to be an expectation that I (or most science bloggers) would write about the most significant or controversial papers in our field, which I guess is a fair first assumption to make. And whereas in some cases those papers are instantly recognisable, personally I tend to concentrate on papers that are on my desk and that are likely to have a direct impact on my work, even if they never raise a press release.

The neat thing about my job is that most papers I read have some interesting aspect that makes me want to sit down and tell you about them. And more importantly, I think we should be telling you more about day-to-day papers, which would normally go unnoticed by the press and therefore by the non-astronomer. They’re more likely to give you an unbiased view of what we (as in each individual scientist) do for a living, and perhaps even a more accurate perspective on the personal process of scientific discovery in astronomy.

Lack of time means I don’t get to read as many as I’d like. And lack of time squared means that I don’t get to blog about as many as I read. But let me make a pledge: for every three papers I read in detail from now on, I’ll blog about one. My guess is that this should result in one post every week or two (I skim through a lot of papers, but actually read very few!), and that seems doable for me and not overwhelming for you.

As it turns out, I was away all of last week, so you may have to wait a little while for the first one to come along…


Counting galaxy mergers you can’t see


I’m going to pick up where I left off a while ago, when we talked about galaxy evolution. I have a staggering backlog of papers to read on my desk, most of which have the words “merger history”, “mass assembly” or “galaxy pairs” in them. All of these expressions are more or less equivalent, and they relate to one of the processes we believe regulates galaxy growth – merging. The other one is star formation – by which I mean the process of turning cold gas into stars – but we’ll talk about that some other time.

We know that galaxies merge. Not only do we see it (go to Galaxy Zoo’s Mergers site to revel in pretty merger images, and also to help astronomers get some real science out of them), it is also a prediction of our current model of structure formation. I’ll cover that model and prediction in another post, but for now let me just say that measuring the rate at which galaxies of different mass or luminosity merge in the Universe is becoming very important as a way to constrain our models of galaxy and structure formation. In other words, it’s time to get quantitative.

So what we want to know is, on average, how many galaxies merge per unit volume, per unit time, as the Universe evolves. If you sit down and think about this for a moment or two, you’ll quickly come to the conclusion that simply counting galaxies that are visibly merging (which you can identify by looking at the images) is one way to go. But this is only possible relatively nearby – as we go to higher redshift it becomes increasingly hard to get good enough images. Still, a number of people have been working hard at measuring this, and pushing this sort of analysis forward.

Another way to go is to simply count galaxy pairs that are closer than a given physical distance. You can assume that if two galaxies are close enough then gravity will win at some point, and they will merge. The advantage is that you don’t need images good enough to actually see the galaxies interacting, so you can take this to higher redshift. The downside is that you need to make assumptions about what this physical separation should be and, perhaps more importantly, how long the galaxies will take to merge – the dynamical timescale. Another disadvantage is that you miss pairs in which one of the two galaxies is very faint – so you are limited to counting pairs of bright galaxies. The jargon for the merging of two galaxies of similar mass or luminosity is “major merging”. Some neat pieces of work have come out of this approach, measuring the major-merger rate of luminous galaxies out to a respectable redshift. The last one I read (but by no means the only one, nor the last!) was by Roberto de Propris et al. (2010), who did this up to a redshift of 0.55, but there are measurements of this quantity which span the last 8 Gyr or so of the lifetime of the Universe (equivalent to a redshift of 1).
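Mechanically, the pair-counting step is simple enough to sketch. Here’s a minimal version, assuming you already have 3D comoving positions for your galaxies; the separation threshold and merger timescale are exactly the kind of assumptions I mentioned, and the values below are illustrative, not from any of these papers:

```python
import numpy as np
from scipy.spatial import cKDTree

def count_close_pairs(xyz, r_max=0.03):
    """Count unique galaxy pairs separated by less than r_max.

    xyz is an (N, 3) array of comoving positions in Mpc; r_max ~ 0.03 Mpc
    (30 kpc) is a typical choice of "close", but it is an assumption.
    """
    tree = cKDTree(xyz)
    return len(tree.query_pairs(r_max))

def merger_rate_density(n_pairs, volume, t_merge=0.5):
    """Mergers per Mpc^3 per Gyr, assuming every close pair merges
    within a dynamical timescale t_merge (in Gyr, also an assumption)."""
    return n_pairs / (volume * t_merge)

# Hypothetical usage (the file name is invented):
# xyz = np.loadtxt("galaxy_positions.txt")
# n = count_close_pairs(xyz)
# print(merger_rate_density(n, volume=100.0**3))  # for a (100 Mpc)^3 box
```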

A few weeks back, however, I read another paper which took a different and rather interesting approach to the subject. This is the work of Sugata Kaviraj et al. (2010), and their idea is as follows. Measurements like the ones I described in the paragraph above give you the number of major mergers happening at some point in the Universe’s history. These mergers, however, leave a signature in the shapes of the galaxies involved for a certain time – they look disturbed (i.e., not smooth), until they finally relax into one larger, smoother, stable galaxy. This means that you should be able to predict how many galaxies of a given mass, on average, should look disturbed at any point in time, starting from a measured rate of mergers in the past.
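In numbers, the argument is delightfully back-of-the-envelope. Here’s the gist, with values invented purely for illustration:

```python
# If mergers occur at a rate R per galaxy per Gyr, and a remnant looks
# disturbed for t_vis Gyr afterwards, then at any moment a fraction
# ~ R * t_vis of galaxies should appear disturbed.
R_major = 0.05   # major mergers per galaxy per Gyr (assumed)
t_vis = 0.5      # Gyr over which tidal features stay visible (assumed)

f_predicted = R_major * t_vis
print(f"predicted disturbed fraction ~ {f_predicted:.1%}")
# Observing a clearly larger fraction would point to mergers that the
# pair counts miss -- e.g. minor mergers with a faint companion.
```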

And so they did. They took a whole load of high-resolution images from the Hubble Space Telescope and looked for signs of disturbed elliptical galaxies. What they found (amongst other neat things that I don’t have time to go into) is that there are too many of these disturbed galaxies if we assume that the other rates are correct. But hang in there for a minute – the other rates are limited to major mergers, because we can’t see the minor mergers when looking at pairs. So Sugata Kaviraj and collaborators postulate that the excess is due to these minor mergers – we can’t see them happening at high redshift, but we can see their effect at lower redshift. Moreover, they observe these minor mergers to dominate significantly over major mergers since a redshift of one, suggesting that galaxies have been growing by accreting smaller (fainter) galaxies in the recent Universe – though this was potentially very different at high redshift.

Other people have found this sort of behaviour in some way or another (including me!), but I was happy to see a rather neat way to… well… see (and measure) the unseen.

R. De Propris, S. P. Driver, M. M. Colless, M. J. Drinkwater, N. P. Ross, J. Bland-Hawthorn, D. G. York & K. Pimbblet (2010). An upper limit to the dry merger rate at z ~ 0.55. ApJ. arXiv: 1001.0566v1

Sugata Kaviraj, Kok-Meng Tan, Richard S. Ellis & Joseph Silk (2010). The principal driver of star formation in early-type galaxies at late epochs: the case for minor mergers. Submitted to MNRAS. arXiv: 1001.2141v1


Dark Matter Week: Do you feel lucky today?


First things first, we owe you an apology here at we are all in the gutter. Well, Stuart and I do, for not having followed up with our posts on Wednesday and Thursday. We promised to tell you a little bit more about dark matter before today’s announcement and, alas, we sold you short. We still intend to do so, but in the meantime Friday caught up with us: the Cryogenic Dark Matter Search (CDMS) experiment has announced its results right on schedule, and we felt we should tell you what they are.

They’ve published a really good summary here, which I’ll now take the liberty of quoting because it’s rather clear.

First, a little bit about the experiment itself:

The Cryogenic Dark Matter Search (CDMS) experiment, located a half-mile underground at the Soudan mine in northern Minnesota, uses 30 detectors made of germanium and silicon in an attempt to detect such WIMP scatters. The detectors are cooled to temperatures very near absolute zero. Particle interactions in the crystalline detectors deposit energy in the form of heat, and in the form of charges that move in an applied electric field. Special sensors detect these signals, which are then amplified and recorded in computers for later study. A comparison of the size and relative timing of these two signals can allow the experimenters to distinguish whether the particle that interacted in the crystal was a WIMP or one of the numerous known particles that come from radioactive decays, or from space in the form of cosmic rays. These background particles must be highly suppressed if we are to see a WIMP signal. Layers of shielding materials, as well as the half-mile of rock above the experiment, are used to provide such suppression.

The CDMS experiment has been searching for dark matter at Soudan since 2003. Previous data have not yielded evidence for WIMPs, but have provided assurance that the backgrounds have been suppressed to the level where as few as 1 WIMP interaction per year could have been detected.

We are now reporting on a new data set taken in 2007- 2008, which approximately doubles the sum of all past data sets.

One of the hardest things about such an experiment is to ensure that you know your kit well enough to tell a real signal from background noise coming from other sources (i.e., not dark matter):

With each new data set, we must carefully evaluate the performance of each of the detectors, excluding periods when they were not operating properly. Detector operation is assessed by frequent exposure to sources of two types of radiation: gamma rays and neutrons. Gamma rays are the principal source of normal matter background in the experiment. Neutrons are the only type of normal matter particles that will interact with germanium nuclei in the billiard ball style that WIMPs would, although neutrons frequently scatter in more than one of our detectors. This calibration data is carefully studied to see how well a WIMP-like signal (produced by neutrons) can be seen over a background (produced by gamma rays). The expectation is that no more than 1 background event would be expected to be visible in the region of the data where WIMPs should appear. Since background and signal regions overlap somewhat, achievement of this background level required us to throw out roughly 2/3 of the data that might contain WIMPs, because these data would contain too many background events.

A particularly interesting aspect of the data analysis, commonly used for this type of experiment, is that it is blind. The CDMS team explains:

All of the data analysis is done without looking at the data region that might contain WIMP events. This standard scientific technique, sometimes referred to as ‘blinding’, is used to avoid the unintentional bias that might lead one to keep events having some of the characteristics of WIMP interactions but that are really from background sources. After all of the data selection criteria have been completed, and detailed estimates of background ‘leakage’ into the WIMP signal region are made, we ‘open the box’ and see if there are any WIMP events present.

And so, what did they find? In short, they found two events that fit the bill. If these are indeed real WIMP events, this would be the first direct detection of dark matter in scientific history. But alas, it’s never that simple. There’s a non-negligible chance, of around 23%, that these two events were created by background sources – i.e., that the signal hitting the detectors was the result of an interaction of particles which have nothing to do with dark matter. Of course, this means there’s a 77% chance that these two events were real, but 77% is generally not considered high enough to make a detection statistically significant in scientific terms. Here is how the CDMS team explains it:

In this new data set there are indeed 2 events seen with characteristics consistent with those expected from WIMPs. However, there is also a chance that both events could be due to background particles. Scientists have a strict set of criteria for determining whether a new discovery has been made, in essence that the ratio of signal to background events must be large enough that there is no reasonable doubt. Typically there must be less than one chance in a thousand of the signal being due to background. In this case, a signal of about 5 events would have met those criteria. We estimate that there is about a one in four chance to have seen two background events, so we can make no claim to have discovered WIMPs. Instead we say that the rate of WIMP interactions with nuclei must be less than a particular value that depends on the mass of the WIMP. The numerical values obtained for these interaction rates from this data set are more stringent than those obtained from previous data for most WIMP masses predicted by theories. Such upper limits are still quite valuable in eliminating a number of theories that might explain dark matter.
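If you want to see where that “about one in four” comes from, it’s a two-line Poisson calculation. The expected background I plug in below is my assumption, chosen because it reproduces the ~23% the team quotes – it’s not their exact published number:

```python
from scipy.stats import poisson

# Probability of background alone producing 2 or more events, for an
# assumed expected background of ~0.9 events.
b = 0.9
print(f"P(>= 2 background events) = {1 - poisson.cdf(1, b):.1%}")  # ~23%

# And why ~5 events would have been convincing: the chance of background
# alone producing that many is a small fraction of a percent.
print(f"P(>= 5 background events) = {1 - poisson.cdf(4, b):.2%}")
```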

So why is everyone so excited, if the significance is not enough to break out the champagne? Simply put, because even though none of us is going to take this as scientific proof of dark matter detection, most of us also know that real detections are often preceded by marginal detections. And that’s exciting. There’s a feeling that we’re really not that far off, and 77% can feel very encouraging. In the words of my office mate – 77% has never been more exciting.


Good reads

Quick post to let you know of two new blogs (well, new to me) that have caught my eye. Firstly, let me welcome Duncan and Well-Bred Insolence to my RSS reader – keep an eye over there for news on planet formation, discovery and possible inhabitants, amongst other things. He beat us to telling you about HARPS’ latest addition to the list of known extra-solar planets. Glad he did, too: firstly because we would have been intolerably late with that news, and secondly because he knows a fair bit more about it than we do.

And then there’s The Big Blog Theory, a blog by David Saltzberg – the science advisor to The Big Bang Theory (the American sitcom, not the beginning of the Universe) – who explains the science behind the episodes. My favourite TV sitcom just got better!


When telescopes get really, really big


In our first post exploring galaxy evolution, we saw how observing galaxies at different distances from us is crucial for our understanding of how galaxies form and evolve. It also follows that the larger the range of distances we can study, the better we can constrain our theories. So it’s only natural that astronomers have always hunted for the most distant galaxies – it’s a sort of high-flying game in the astronomy community, and breaking the record for the most distant galaxy observed is no mean feat.

More distant objects appear fainter on average, and so are harder to detect. Advances in this area have therefore traditionally come from technological improvements. For example, a larger telescope has a larger light-collecting area; it is therefore more sensitive, and able to detect fainter objects in a given time. One can also observe a region of sky for a longer period of time, which again increases the number of photons we collect. Astronomers call this deep imaging.
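As a rough sketch of the scaling (my own illustration, with arbitrary reference values): the number of photons collected goes as mirror area times exposure time, and shot noise means the signal-to-noise only grows as the square root of that.

```python
import math

# Toy sensitivity scaling: photons ~ (mirror area) x (exposure time),
# and in the shot-noise limit S/N ~ sqrt(photons). Reference values
# (a 2.4 m mirror, 1 h exposure) are arbitrary choices for illustration.
def relative_snr(diameter_m, hours, ref_diameter_m=2.4, ref_hours=1.0):
    photon_ratio = (diameter_m / ref_diameter_m) ** 2 * (hours / ref_hours)
    return math.sqrt(photon_ratio)

# Quadrupling the exposure time only doubles the signal-to-noise:
print(f"{relative_snr(2.4, 4.0):.1f}x")   # 2.0x
# An 8 m mirror in the same 1 h collects (8/2.4)^2 more photons:
print(f"{relative_snr(8.0, 1.0):.1f}x")   # ~3.3x
```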

Recently, the public release of very deep imaging from the Hubble Space Telescope’s new Wide Field Camera 3 generated a rush of papers looking for precisely these very high-redshift galaxies. Look for example at Bunker et al., McLure et al., Oesch et al., among others, which were mostly submitted within a few hours of each other, and just a few days after the data was publicly released – astronomy doesn’t get much more immediately competitive (or stressful?) than this! The work in these particular papers requires not only deep imaging, but also wide wavelength coverage – i.e., sensitive images of the same region of the sky in different colours, and the redder the better.

These papers detected galaxies at redshifts between 7 and 8.5. Or, in more common units, these galaxies are at least 12,900,000,000 light-years away. That means the light that was detected by the Hubble Space Telescope, on which these papers and their scientific analysis are based, left those galaxies 12,900,000,000 years ago. The Universe was only around 778,000,000 years old at the time. To go back to our previous analogy, it’s like looking at a snapshot of me when I was less than 2 years old. Admittedly I had bathed by then, but that’s still very, very young!
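If you fancy checking figures like these yourself, a few lines with astropy will do it. The cosmological parameters below are my own WMAP-era choice rather than the papers’, so the output will only match the numbers above to within rounding:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Light travel time and the age of the Universe at emission, for a flat
# LCDM cosmology (parameter values assumed, not taken from the papers).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

for z in (7.0, 8.5):
    lookback = cosmo.lookback_time(z)    # light travel time
    age_then = cosmo.age(z).to(u.Myr)    # age of the Universe at emission
    print(f"z = {z}: light left {lookback:.1f} ago; "
          f"the Universe was {age_then:.0f} old")
```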

These papers are interesting and important in their own right, but what prompted me to come and tell you all of this was actually the work of Bradač et al., which has the same goal and uses the same basic techniques as the above, but it cheats. And it’s the way it cheats that makes it really rather neat.

Bradač and co-authors use not only human-made telescopes, but also harness the power of gravitational lensing to turn a galaxy cluster (the Bullet Cluster) into an enormous cosmic telescope. We have covered here before how mass affects space, which in turn affects the way light travels. Matter can act to focus light from distant objects – and galaxy clusters have a lot of matter. This makes them rich and exciting playgrounds for astronomers, who have long used gravitational lensing to probe the distant Universe.

Bradač and friends have additionally shown just how effective this can be at measuring the density and properties of distant galaxies, and how much there is to gain from a given image (with a given sensitivity limit) when there is a strong and appropriately focused cosmic lens in the field of view. Distant galaxies are also magnified – their angular size on the sky increases, compared to an unlensed image – and that allows us to look at them in more detail, and study their properties. As a bonus, given that cosmic telescopes often produce more than one lensed image of any given galaxy, they can use these multiple images to help with the distance measurement and avoid some contamination.
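The gain from magnification is easy to quantify: a magnification μ multiplies the flux we receive, which is worth 2.5 log₁₀(μ) magnitudes of extra depth. A quick sketch:

```python
import math

# Lensing magnification mu multiplies the received flux, equivalent to an
# effective gain in survey depth of 2.5*log10(mu) magnitudes.
def depth_gain_mag(mu):
    return 2.5 * math.log10(mu)

for mu in (2, 5, 10):
    print(f"mu = {mu:>2}: ~{depth_gain_mag(mu):.1f} mag deeper than unlensed")
```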

They didn’t set any distance records as the wavelength range of their imaging wasn’t quite right for that. But with the right imaging, the right clusters and the right analysis, the authors argue that this is the way forward for this sort of study. Galaxy evolution is hard and full of technical challenges, so using galaxy clusters as gigantic telescopes can certainly go a long long way.

M. Bradač, T. Treu, D. Applegate, A. H. Gonzalez, D. Clowe, W. Forman, C. Jones, P. Marshall, P. Schneider & D. Zaritsky (2009). Focusing Cosmic Telescopes: Exploring Redshift z~5-6 Galaxies with the Bullet Cluster 1E0657-56. Accepted for publication in ApJL. arXiv: 0910.2708v1

R. J. McLure, J. S. Dunlop, M. Cirasuolo, A. M. Koekemoer, E. Sabbi, D. P. Stark, T. A. Targett & R. S. Ellis (2009). Galaxies at z = 6 – 9 from the WFC3/IR imaging of the HUDF. Submitted to MNRAS. arXiv: 0909.2437v1

P. A. Oesch, R. J. Bouwens, G. D. Illingworth, C. M. Carollo, M. Franx, I. Labbe, D. Magee, M. Stiavelli, M. Trenti & P. G. van Dokkum (2009). z~7 Galaxies in the HUDF: First Epoch WFC3/IR Results. Submitted to ApJL. arXiv: 0909.1806v1

Andrew Bunker, Stephen Wilkins, Richard Ellis, Daniel Stark, Silvio Lorenzoni, Kuenley Chiu, Mark Lacy, Matt Jarvis & Samantha Hickey (2009). The Contribution of High Redshift Galaxies to Cosmic Reionization: New Results from Deep WFC3 Imaging of the Hubble Ultra Deep Field. Submitted to MNRAS. arXiv: 0909.2255v2

