Lasers, telescopes & aeroplanes


So this post was supposed to be about the discovery of the most distant galaxy ever found, at a redshift of about 8.2 (13.1 billion light years from us, or, to put it another way, only about 630 million years after the Big Bang), but I didn’t get round to it yesterday and I’ve now been distracted by a paper out today on telescopes, laser beams and aeroplanes instead! (For more details on the distant galaxy see this post over at the always excellent In the Dark. There’s a lot of effort being put into this field at the moment, as already discussed here by Rita, so I expect there’ll be a new ‘most distant’ one that I can catch up with soon.)

One of the main problems astronomers face when trying to observe things in the sky is the turbulence in the Earth’s atmosphere (the blurring this causes is known as ‘seeing’, and is what makes the stars twinkle). To get away from this, telescopes are normally built high up on mountains, above as much of the turbulent air (and as many of the clouds) as possible, or, like the Hubble Space Telescope, put into orbit where there’s no atmosphere to worry about.

Laser guide star at the Keck Telescope

Putting a telescope on a mountain doesn’t completely avoid atmospheric distortions, but a technique called adaptive optics can help to correct them. This needs a nice bright star in the field of view; when one isn’t available, an artificial star can be created by shooting a laser beam into the sky, as shown for the Keck Telescope in the picture above. The telescope then observes this ‘star’, and uses its fluctuations in shape (caused by the patch of atmosphere overhead at the time) to correct the astronomical images as they are observed. I should add here that this isn’t the only time lasers are used alongside telescopes – for example, they are also fired at the reflectors left on the Moon by the Apollo missions, as I’ve blogged about before.

The benefit of adaptive optics for a galaxy observed at the Canada-France-Hawaii Telescope. Left: with correction; right: without.
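To make the trick a little more concrete, here’s a toy sketch in Python of the simplest part of the correction, so-called tip-tilt: track how the atmosphere shifts the guide ‘star’ and apply the opposite shift to the image. This is entirely my own simplification, not the observatories’ software – real systems correct far subtler wavefront distortions, reshaping a deformable mirror hundreds of times a second.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(frame):
    """Flux-weighted centre of an image."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return np.array([(ys * frame).sum() / total, (xs * frame).sum() / total])

# Simulate a guide star that tonight's turbulence has nudged off pixel (32, 32).
frame = np.zeros((64, 64))
jitter = rng.integers(-3, 4, size=2)
frame[32 + jitter[0], 32 + jitter[1]] = 1.0

# Measure the apparent shift of the guide star...
measured = centroid(frame) - np.array([32.0, 32.0])
# ...and apply the opposite shift to put the image back where it belongs.
shift = tuple((-np.round(measured)).astype(int))
corrected = np.roll(frame, shift, axis=(0, 1))
print("star back at centre:", corrected[32, 32] == 1.0)
```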

The only flaw in this telescopes-plus-lasers plan (whether for crisp corrected pictures of the sky or for laser ranging) is aeroplanes. Laser beams can dazzle pilots, so understandably they can’t be used at the telescopes when planes are in the vicinity. What this has meant in practice up till now is someone sitting outside all night, ready to close the laser shutter when a plane comes too close. Not very practical, and a pretty boring job! Well, now some researchers in California have come up with a solution. Aviation regulations require all aircraft to be fitted with transponders so that they can be tracked by air-traffic control. By measuring the ratio of the transponder signal power received by two antennae aligned with the laser – one with a broad beam, one with a narrow beam – the position of the aircraft relative to the laser can be determined. An automated shutter can then turn the laser off, and the cold plane-watchman can go back inside.
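The decision logic boils down to comparing two received powers. Here’s a hedged sketch – the real system described by Coles et al. is analogue radio hardware, and the function name, units and threshold below are my inventions for illustration:

```python
def laser_safe(p_narrow_dbm: float, p_broad_dbm: float,
               threshold_db: float = 0.0) -> bool:
    """Keep the laser on only while the narrow-beam antenna (pointed along
    the laser) receives much less transponder power than the broad-beam
    antenna; a rising ratio means an aircraft is approaching the beam."""
    return (p_narrow_dbm - p_broad_dbm) < threshold_db

print(laser_safe(-70.0, -40.0))  # aircraft far off-axis: keep lasing
print(laser_safe(-38.0, -40.0))  # aircraft near the beam: close the shutter
```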

W. A. Coles, T. W. Murphy Jr., J. F. Melser, J. K. Tu, G. A. White, K. H. Kassabian, K. Bales, & B. B. Baumgartner (2009). A Radio System for Avoiding Illuminating Aircraft with a Laser Beam. Submitted to PASP. arXiv: 0910.5685v1


Carnival of Space 126


Just a quick post to point you all towards this week’s Carnival of Space over at The Gish Bar Times, the blog to go to for all things Io-related.


Good reads


Quick post to let you know of two new blogs (well, new to me) that have caught my eye. Firstly, let me welcome Duncan and Well-Bred Insolence to my RSS reader – keep an eye over there for news on planet formation, discovery and possible inhabitants, amongst other things. He beat us to telling you about HARPS’ latest addition to the list of known extra-solar planets. Glad he did too, firstly because we would have been intolerably late with that news, and secondly because he knows a fair bit more about it than we do.

And then there’s The Big Blog Theory, a blog by David Saltzberg – the science advisor to The Big Bang Theory (the American sitcom, not the beginning of the Universe) – who explains the science behind each episode. My favourite TV sitcom just got better!


When telescopes get really, really big



In our first post exploring galaxy evolution, we saw how observing galaxies at different distances from us is crucial for our understanding of how galaxies form and evolve. It also follows that the larger the range of distances we can study, the better we can constrain our theories. So it’s only natural that astronomers have always been hunting for the most distant galaxies – it’s a sort of high-flying game in the astronomy community, and breaking the record for the most distant galaxy observed is no mean feat.

More distant objects appear fainter on average, and so are harder to detect. Traditionally one looks to technological improvements to make advances in this area. For example, a larger telescope has a larger light-collecting area; it is therefore more sensitive, and able to detect fainter objects in a given time. One can also observe a region of sky for a longer period of time, which again increases the number of photons we collect. Astronomers call this deep imaging.
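As a back-of-the-envelope illustration (my own numbers, nothing from the papers below), the photons you collect scale with the mirror’s area times the exposure time:

```python
import math

def photons(diameter_m: float, exposure_s: float,
            flux_per_m2_per_s: float = 1.0) -> float:
    """Photons collected scale with mirror area times exposure time."""
    area_m2 = math.pi * (diameter_m / 2.0) ** 2
    return flux_per_m2_per_s * area_m2 * exposure_s

one_hour_4m = photons(4.0, 3600.0)
# Time an 8 m telescope needs to collect the same number of photons:
t_8m = one_hour_4m / photons(8.0, 1.0)
print(f"{t_8m:.0f} s")  # 900 s: doubling the diameter quarters the exposure
```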

Recently, the public release of very deep imaging from the Hubble Space Telescope’s new Wide Field Camera 3 generated a rush of papers looking precisely for very high-redshift galaxies. Look, for example, at Bunker et al., McLure et al., Oesch et al., among others, which were mostly submitted within a few hours of each other, and just a few days after the data were publicly released – astronomy doesn’t get much more immediately competitive (or stressful?) than this! The work in these particular papers requires not only deep imaging, but also wide coverage of the electromagnetic spectrum – i.e., they need sensitive images of the same region of the sky in different colours, and the redder the better.

These papers detected galaxies at redshifts between 7 and 8.5. Or, in more common units, these galaxies are at least 12,900,000,000 light-years away. That means the light that was detected by the Hubble Space Telescope, and on which these papers and their scientific analysis are based, left those galaxies 12,900,000,000 years ago. The Universe was at most around 778,000,000 years old by then. To go back to our previous analogy, it’s like looking at a snapshot of me when I was less than 2 years old. Admittedly I had bathed by then, but that’s still very, very young!
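If you want to play with the redshift-to-time conversion yourself, astropy’s cosmology module makes it a one-liner. A sketch assuming a flat ΛCDM model with H0 = 70 and Ωm = 0.3 – the papers’ exact parameters may differ a little:

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed parameters, for illustration
for z in (7.0, 8.5):
    lookback = cosmo.lookback_time(z)   # how long the light has travelled
    age_then = cosmo.age(z).to('Myr')   # age of the Universe at emission
    print(f"z = {z}: lookback {lookback:.2f}, Universe {age_then:.0f} old")
```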

These papers are interesting and important in their own right, but what prompted me to come and tell you all of this was actually the work of Bradac et al., which has the same goal and uses the same basic techniques as the above, but cheats. And it’s the way it cheats that makes it really rather neat.

Bradac and co-authors use not only human-made telescopes, but also harness the power of gravitational lensing to turn a galaxy cluster (the Bullet Cluster) into an enormous cosmic telescope. We have covered here before how mass affects space, which in turn affects the way light travels. Matter can act to focus light from distant objects – and galaxy clusters have a lot of matter. This makes them rich and exciting playgrounds for astronomers, who have long used gravitational lensing to probe the distant Universe.

Bradac and friends have additionally shown just how effective this can be at measuring the numbers and properties of distant galaxies, and how much there is to gain from a given image (with a given sensitivity limit) when there is a strong and appropriately positioned cosmic lens in the field of view. Distant galaxies are also magnified – their angular size on the sky increases, compared to an unlensed image – which allows us to look at them in more detail and study their properties. As a bonus, given that cosmic telescopes often produce more than one lensed image of any one galaxy, the authors can use these multiple images to help with the distance measurement and avoid some contamination.
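The gain is easy to quantify: a magnification μ brightens a background galaxy by 2.5 log10(μ) magnitudes, so behind the lens a survey effectively reaches that much deeper than its nominal flux limit. The numbers below are toy values of my own, not from Bradac et al.:

```python
import math

for mu in (2, 5, 10):
    gain_mag = 2.5 * math.log10(mu)  # extra effective depth behind the lens
    print(f"magnification {mu:>2}x -> {gain_mag:.2f} mag deeper")
```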

They didn’t set any distance records, as the wavelength range of their imaging wasn’t quite right for that. But with the right imaging, the right clusters and the right analysis, the authors argue that this is the way forward for this sort of study. Galaxy evolution is hard and full of technical challenges, so using galaxy clusters as gigantic telescopes can certainly go a long, long way.

M. Bradač, T. Treu, D. Applegate, A. H. Gonzalez, D. Clowe, W. Forman, C. Jones, P. Marshall, P. Schneider, & D. Zaritsky (2009). Focusing Cosmic Telescopes: Exploring Redshift z~5-6 Galaxies with the Bullet Cluster 1E0657-56. Accepted for publication in ApJL. arXiv: 0910.2708v1

R. J. McLure, J. S. Dunlop, M. Cirasuolo, A. M. Koekemoer, E. Sabbi, D. P. Stark, T. A. Targett, & R. S. Ellis (2009). Galaxies at z = 6 – 9 from the WFC3/IR imaging of the HUDF. Submitted to MNRAS. arXiv: 0909.2437v1

P. A. Oesch, R. J. Bouwens, G. D. Illingworth, C. M. Carollo, M. Franx, I. Labbe, D. Magee, M. Stiavelli, M. Trenti, & P. G. van Dokkum (2009). z~7 Galaxies in the HUDF: First Epoch WFC3/IR Results. Submitted to ApJL. arXiv: 0909.1806v1

Andrew Bunker, Stephen Wilkins, Richard Ellis, Daniel Stark, Silvio Lorenzoni, Kuenley Chiu, Mark Lacy, Matt Jarvis, & Samantha Hickey (2009). The Contribution of High Redshift Galaxies to Cosmic Reionization: New Results from Deep WFC3 Imaging of the Hubble Ultra Deep Field. Submitted to MNRAS. arXiv: 0909.2255v2


Carnival of Space 125


This week’s Carnival of Space is now up at the always interesting Orbiting Frog. Hot topics this week include the misunderstandings surrounding NASA’s LCROSS mission, an impressive view of the Martian landscape and a new way to see one billion dollars. Oh, and a mysterious header image which looks astronomical but turns out to be from much closer to home.


300 done, 1500 to go


Last week I was in Milan doing something I traditionally don’t really do – getting my hands dirty with data, and doing my bit to make it usable for science.

By data, in this case, I mean 100,000 spectra collected by the VIMOS spectrograph – one of the instruments on the Very Large Telescope at Paranal – for a new project called VIPERS. A spectrum is simply the light of an object decomposed into a relatively large number of components. So instead of decomposing an image into, say, 3 colours, a spectrum will often have hundreds to thousands of different pixels – or colours, really – so we can see exactly how ‘red’ or ‘blue’ an object is. Think of it as a rainbow, which is simply the light of the Sun decomposed into a number of colours. And yes, astronomers spend a large amount of effort effectively making rainbows out of the light of galaxies.
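As a toy illustration (the wavelength grid and the Gaussian emission line below are entirely made up, nothing to do with real VIMOS data), compare how a three-colour image and a thousand-bin spectrum see the same object:

```python
import numpy as np

wavelengths = np.linspace(4000, 9000, 1000)  # Angstroms: 1000 'colours'
flux = 1.0 + 5.0 * np.exp(-0.5 * ((wavelengths - 6563) / 30.0) ** 2)

# A camera with three broad filters effectively sees just three numbers...
three_colours = [band.sum() for band in np.array_split(flux, 3)]
print(np.round(three_colours, 1))

# ...while the spectrum keeps every bin, so the line leaps out at 6563 A.
peak = wavelengths[flux.argmax()]
print(f"strongest 'colour' at {peak:.0f} A")
```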

Spectra of objects (stars, galaxies, quasars, planets, etc.) are one of the most useful observables of our Universe. Depending on whether you are a cosmologist or someone who studies galaxy evolution, you want them for different reasons. Today I’m going to put my cosmologist hat on.

As we’ve covered here before, galaxies have a very rich range of properties which we can use to trace their evolution. With our cosmologist hat on today, none of that matters – the astonishing thing is that we can learn an awful lot about the evolution of the Universe as a whole simply by studying the positions of these galaxies. This is the realm of observational cosmology – the study of the birth, evolution, dynamics, composition and ultimate fate of our Universe through the spatial distribution of galaxies. This may be a somewhat narrow description of a huge field of modern-day astronomy, but to first order it’s perfectly correct.

So why is the spatial distribution of galaxies so revealing? Well, let us start by considering the components of our Universe. In our current standard model we have four main components: radiation, baryonic matter, dark matter and dark energy. The first two we are very familiar with – baryonic here just refers to the matter that makes up all we see in the Universe: ourselves, the solar system, distant galaxies, far-away planets, etc. The last two are more of a mystery, and, excitingly, also the two most important components of our Universe today. I must leave a more thorough explanation for another post, but for now let us state that there is 5 times more dark matter out there than baryonic matter, even though we can’t see it. It interacts gravitationally with baryonic matter, so we can detect its presence, and because it is so much more abundant, it is dark matter – not baryonic matter – that has the most influence in the master game of tug of war that is the evolution of our Universe. The other player is dark energy. Now, I can’t tell you what dark energy is (nobody can), but I can tell you that it behaves in the opposite way to gravity. Whereas one attracts, the other repels. Whereas one brings things closer together, the other pulls them apart.

The amounts of dark matter and dark energy govern the dynamics of our Universe, but we can’t see either of them directly. So we turn to what we can see: baryonic matter, via the radiation produced mainly (but not exclusively) by stars. The neat thing is that baryonic and dark matter attract each other gravitationally, so baryonic matter traces dark matter. And this is why simply measuring where galaxies sit is so insightful – it gives you a tool to track dark matter and see how its spatial distribution evolves. This in turn tells you about the interplay between dark matter and dark energy – depending on which one dominates, by how much and for how long, the evolution of the spatial distribution of dark matter (and therefore of galaxies) will be different. And that we can measure! The spatial distribution (or clustering) of galaxies is one of the most promising tools to tell us about dark energy and how structure has grown in our Universe.
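To give a flavour of what ‘measuring clustering’ means in practice, here is a deliberately crude 1-D toy of my own (real surveys work in 3-D, with estimators such as Landy–Szalay): count close pairs in a clumpy catalogue and compare with a random, unclustered one of the same size.

```python
import numpy as np

rng = np.random.default_rng(1)

def close_pairs(positions, max_sep):
    """Number of pairs closer than max_sep (brute force, fine for a toy)."""
    sep = np.abs(positions[:, None] - positions[None, :])
    return int(np.triu(sep < max_sep, k=1).sum())

random_cat = rng.uniform(0, 100, size=500)                   # unclustered
clustered = np.concatenate(                                  # 10 clumps of 50
    [rng.normal(centre, 1.0, size=50) for centre in rng.uniform(0, 100, 10)])

print("clustered pairs:", close_pairs(clustered, 2.0))
print("random pairs:   ", close_pairs(random_cat, 2.0))
# The clustered catalogue has far more close pairs; that excess, as a
# function of separation, is the correlation function cosmologists measure.
```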

But before any of this can be done, we need to know the distance to each galaxy. This means measuring their redshifts, using the spectrum of each galaxy to determine how fast it is receding from us – which in turn tells us how far away it is. If the data coming out of the telescope and spectrograph are good enough, the whole process can be fully automated. However, our position is slightly different: due to some instrumental artifacts, the red end of our spectra is well below standard, and the computerised pipeline often fails to assign the correct redshift.
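The measurement itself rests on a very simple relation: a line emitted at rest-frame wavelength λ_rest shows up at λ_obs = (1 + z) λ_rest. A minimal sketch, with a hypothetical detection of the [OII] line (a line often used at these redshifts):

```python
REST_OII = 3727.0  # Angstroms: rest wavelength of the [OII] emission line

def redshift(observed_angstroms: float,
             rest_angstroms: float = REST_OII) -> float:
    """z from the wavelength shift of a single identified line."""
    return observed_angstroms / rest_angstroms - 1.0

z = redshift(6335.9)   # a hypothetical [OII] detection at 6335.9 A
print(f"z = {z:.3f}")  # -> z = 0.700
```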

This is where I (and around 25 other people) come in. Humans are much, much better at spotting mistakes than computers, so each of the 100,000 spectra is being looked at by at least two different people, to make sure the redshifts are correct and that when the time comes to do science, we are doing it with reliable data. As I mentioned, this is not something I’d ever done before, so my time in Milan was spent getting trained and up to scratch with the software. I measured 300 redshifts that week, which is rather slow, especially when I think that I have to do around 1500 more in the next couple of months!

It’s a huge task but one that really needs to be done. And on the upside, I will never get bored on a train or airport ever again!


One small step for an economist…..


So today the Nobel Memorial Prize in Economics (not one of the original true Nobel Prizes) will be announced. I thought this would be a good time to write about the work of last year’s winner. And yes, you are still reading an astronomy blog.

In 1978 Paul Krugman was a bored young assistant professor at Yale. To amuse himself he let his thoughts wander onto how economics could evolve in the far future. This resulted in him writing a paper entitled The Theory of Interstellar Trade, which is quite frankly fantastic.

This is a light-hearted work, and with the claim on the front page that it was funded by the Committee to Re-elect William Proxmire it starts as it means to go on. Proxmire was a US Senator who crusaded against what he considered wasteful public spending. His cross-hairs sometimes fell on scientific projects, which were often recipients of his Golden Fleece Award – a sort of insulting Ig Nobel for spending government money on bizarre research programmes. Proxmire is also credited with killing off NASA’s space colonisation research programme. Hence Krugman’s ironic funding claim for his odd research on the economics of space trade.

What makes interstellar trade unusual is the great distances involved and the problem of the Universe’s speed limit. Stars are separated by distances of light years, and it is likely that those with habitable planets are rare and could be even further apart. As it isn’t possible to travel faster than the speed of light through empty space, trade missions could take tens or hundreds of years to travel between worlds. Additionally, as some form of propulsion that carries a craft close to the speed of light will be required for such journeys, time dilation will prove a problem.

Time dilation is an effect of travelling close to the speed of light whereby time appears to go slower for an observer moving at high speed compared to one that is stationary in a particular frame of reference. You might have heard of a consequence of this, the twin paradox, where one twin remains on Earth while the other goes off on a rocket at high speed. When the rocketeering twin returns he finds he has aged less than his brother. The same principle applies to a space trader on a long voyage, and raises the problem: whose time frame do you use for calculating money gained from compound interest? Krugman uses an argument based on trade between Earth and the fictional planet of Trantor (the Galactic capital in Asimov’s Foundation series) to show that the time elapsed in the inertial reference frame of the trading planets should be used for calculating interest costs (this is his ‘First Fundamental Theorem of Interstellar Trade’).
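To see the sizes involved, a quick toy calculation (the numbers are mine, not Krugman’s): a 10-light-year trip at 99% of the speed of light takes about 10.1 years in the planets’ frame but only about 1.4 years of the trader’s own (proper) time, and compounding interest in the wrong frame changes the answer enormously.

```python
import math

v = 0.99                  # speed as a fraction of c
distance_ly = 10.0
planet_years = distance_ly / v             # time elapsed in the planets' frame
gamma = 1.0 / math.sqrt(1.0 - v ** 2)      # Lorentz factor
ship_years = planet_years / gamma          # the trader's proper time on board

rate = 0.05  # 5% annual interest
# Krugman's First Fundamental Theorem: compound in the planets' frame.
print(f"gamma = {gamma:.2f}")
print(f"planet frame: {planet_years:.2f} yr -> x{(1 + rate) ** planet_years:.2f}")
print(f"ship frame:   {ship_years:.2f} yr -> x{(1 + rate) ** ship_years:.2f}")
```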

Krugman further uses the example of Trantorians trading with, and living on, Earth to assert his second theorem: that interest rates on the different planets will eventually equalise.

I may have come across as glossing over some of the details; this is mostly because filling them in would make for a pretty long blog post. But it’s also because I want to encourage you to read the original paper, which is one of the smartest and most amusing pieces of writing I’ve read in a long time. Krugman is clearly both a science and sci-fi geek, and uses his knowledge wonderfully to weave together a thoroughly fun read.

Paul Krugman was awarded his Nobel Memorial Prize for his terrestrial work. This is lucky because to be awarded the prize for his stellar interstellar work he’d have had to wait longer than Peter Higgs to be proved right.