Monthly Archives: January 2015

Quantum computer as detector shows space is not squeezed

 Robert Sanders

Ever since Einstein proposed his special theory of relativity in 1905, physics and cosmology have been based on the assumption that space looks the same in all directions – that it’s not squeezed in one direction relative to another.

A new experiment by UC Berkeley physicists used partially entangled atoms — identical to the qubits in a quantum computer — to demonstrate more precisely than ever before that this is true, to one part in a billion billion.

The classic experiment that inspired Albert Einstein was performed in Cleveland by Albert Michelson and Edward Morley in 1887 and disproved the existence of an “ether” permeating space through which light was thought to move like a wave through water. What it also proved, said Hartmut Häffner, a UC Berkeley assistant professor of physics, is that space is isotropic and that light travels at the same speed up, down and sideways.

“Michelson and Morley proved that space is not squeezed,” Häffner said. “This isotropy is fundamental to all physics, including the Standard Model of physics. If you take away isotropy, the whole Standard Model will collapse. That is why people are interested in testing this.”

The Standard Model of particle physics describes how all fundamental particles interact, and requires that all particles and fields be invariant under Lorentz transformations, and in particular that they behave the same no matter what direction they move.

Häffner and his team conducted an experiment analogous to the Michelson-Morley experiment, but with electrons instead of photons of light. In a vacuum chamber he and his colleagues isolated two calcium ions, partially entangled them as in a quantum computer, and then monitored the electron energies in the ions as Earth rotated over 24 hours.

As the Earth rotates every 24 hours, the orientation of the ions in the quantum computer/detector changes with respect to the Sun’s rest frame. If space were squeezed in one direction and not another, the energies of the electrons in the ions would have shifted with a 12-hour period. (Hartmut Häffner image)

If space were squeezed in one or more directions, the energy of the electrons would change with a 12-hour period: half the rotation period, because a squeezing of space cannot distinguish a direction from its opposite, so the signal repeats every half turn. It didn’t, showing that space is in fact isotropic to one part in a billion billion (10^18), 100 times better than previous experiments involving electrons, and five times better than experiments like Michelson and Morley’s that used light.
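In practice, a bound like this amounts to fitting a 12-hour sinusoid to the recorded electron energies and showing that its amplitude is consistent with zero. The sketch below illustrates such a fit on simulated data; the sampling, noise level, and normalization are invented for the example and are not the Berkeley group’s analysis.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 48.0, 400)          # hours of (simulated) data
energies = rng.normal(0.0, 1.0, t.size)  # normalized energy readings: pure noise,
                                         # i.e. what an isotropic space produces

# Least-squares fit of a 12-hour sinusoid, A*cos(wt) + B*sin(wt):
# half the 24-hour rotation period, since the sought effect repeats
# every half turn.
w = 2.0 * np.pi / 12.0
design = np.column_stack([np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(design, energies, rcond=None)
amplitude = np.hypot(coef[0], coef[1])
print(f"fitted 12-hour amplitude: {amplitude:.3f} (consistent with zero)")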

The results disprove at least one theory that extends the Standard Model by assuming some anisotropy of space, he said.

Häffner and his colleagues, including former graduate student Thaned Pruttivarasin, now at the Quantum Metrology Laboratory in Saitama, Japan, will report their findings in the Jan. 29 issue of the journal Nature.

Entangled qubits

Häffner came up with the idea of using entangled ions to test the isotropy of space while building quantum computers, which involve using ionized atoms as quantum bits, or qubits, entangling their electron wave functions, and forcing them to evolve to do calculations not possible with today’s digital computers. It occurred to him that two entangled qubits could serve as sensitive detectors of slight disturbances in space.

“I wanted to do the experiment because I thought it was elegant and that it would be a cool thing to apply our quantum computers to a completely different field of physics,” he said. “But I didn’t think we would be competitive with experiments being performed by people working in this field. That was completely out of the blue.”

He hopes to make more sensitive quantum computer detectors using other ions, such as ytterbium, to gain another 10,000-fold increase in the precision measurement of Lorentz symmetry. He is also exploring with colleagues future experiments to detect the spatial distortions caused by the effects of dark matter particles, which are a complete mystery despite comprising 27 percent of the mass of the universe.

“For the first time we have used tools from quantum information to perform a test of fundamental symmetries, that is, we engineered a quantum state which is immune to the prevalent noise but sensitive to the Lorentz-violating effects,” Häffner said. “We were surprised the experiment just worked, and now we have a fantastic new method at hand which can be used to make very precise measurements of perturbations of space.”

Other co-authors are UC Berkeley graduate student Michael Ramm, former UC Berkeley postdoc Michael Hohensee of Lawrence Livermore National Laboratory, and colleagues from the University of Delaware and Maryland and institutions in Russia. The work was supported by the National Science Foundation.

Source: UC Berkeley

The Mouth of the Beast

VLT images cometary globule CG4


Like the gaping mouth of a gigantic celestial creature, the cometary globule CG4 glows menacingly in this new image from ESO’s Very Large Telescope. Although it appears to be big and bright in this picture, this is actually a faint nebula, which makes it very hard for amateur astronomers to spot. The exact nature of CG4 remains a mystery.

Like the gaping mouth of a gigantic celestial creature, the cometary globule CG4 glows menacingly in this image from ESO’s Very Large Telescope. Although it looks huge and bright in this image it is actually a faint nebula and not easy to observe. The exact nature of CG4 remains a mystery.
Credit: ESO

In 1976 several elongated comet-like objects were discovered on pictures taken with the UK Schmidt Telescope in Australia. Because of their appearance, they became known as cometary globules even though they have nothing in common with comets. They were all located in a huge patch of glowing gas called the Gum Nebula. They had dense, dark, dusty heads and long, faint tails, which were generally pointing away from the Vela supernova remnant located at the centre of the Gum Nebula. Although these objects are relatively close by, it took astronomers a long time to find them as they glow very dimly and are therefore hard to detect.

The object shown in this new picture, CG4, which is also sometimes referred to as God’s Hand, is one of these cometary globules. It is located about 1300 light-years from Earth in the constellation of Puppis (The Poop, or Stern).

The head of CG4, the part visible in this image that resembles the head of the gigantic beast, has a diameter of 1.5 light-years. The tail of the globule — which extends downwards and is not visible in the image — is about eight light-years long. By astronomical standards this makes it a comparatively small cloud.

The relatively small size is a general feature of cometary globules. All of the cometary globules found so far are isolated, relatively small clouds of neutral gas and dust within the Milky Way, which are surrounded by hot ionised material.

The head part of CG4 is a thick cloud of gas and dust, which is only visible because it is illuminated by the light from nearby stars. The radiation emitted by these stars is gradually destroying the head of the globule and eroding away the tiny particles that scatter the starlight. However, the dusty cloud of CG4 still contains enough gas to make several Sun-sized stars and indeed, CG4 is actively forming new stars, perhaps triggered as radiation from the stars powering the Gum Nebula reached CG4.

Why CG4 and other cometary globules have their distinct form is still a matter of debate among astronomers, and two competing theories have emerged. Cometary globules, and therefore also CG4, could originally have been spherical nebulae, which were disrupted and acquired their new, unusual form because of the effects of a nearby supernova explosion. Other astronomers suggest that cometary globules are shaped by stellar winds and ionising radiation from hot, massive OB stars. These effects could first lead to the bizarrely (but appropriately!) named formations known as elephant trunks and then eventually to cometary globules.

To find out more, astronomers need to determine the mass, density, temperature, and velocities of the material in the globules. These can be derived from measurements of molecular spectral lines, which are most easily accessible at millimetre wavelengths — wavelengths at which telescopes like the Atacama Large Millimeter/submillimeter Array (ALMA) operate.

This picture comes from the ESO Cosmic Gems programme, an outreach initiative to produce images of interesting, intriguing or visually attractive objects using ESO telescopes, for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for science observations. All data collected may also be suitable for scientific purposes, and are made available to astronomers through ESO’s science archive.

Source: ESO

Timeline of the approach and departure phases — surrounding close approach on July 14, 2015 — of the New Horizons Pluto encounter.
Image Credit: NASA/JHU APL/SwRI

NASA’s New Horizons Spacecraft Begins First Stages of Pluto Encounter

NASA’s New Horizons spacecraft recently began its long-awaited, historic encounter with Pluto. The spacecraft is entering the first of several approach phases that culminate July 14 with the first close-up flyby of the dwarf planet, 4.67 billion miles (7.5 billion kilometers) from Earth.

“NASA’s first mission to distant Pluto will also be humankind’s first close-up view of this cold, unexplored world in our solar system,” said Jim Green, director of NASA’s Planetary Science Division at the agency’s Headquarters in Washington. “The New Horizons team worked very hard to prepare for this first phase, and they did it flawlessly.”

The fastest spacecraft when it was launched, New Horizons lifted off in January 2006. It awoke from its final hibernation period last month after a voyage of more than 3 billion miles, and will soon pass close to Pluto, inside the orbits of its five known moons. In preparation for the close encounter, the mission’s science, engineering and spacecraft operations teams configured the piano-sized probe for distant observations of the Pluto system that start Sunday, Jan. 25 with a long-range photo shoot.

Timeline of the approach and departure phases — surrounding close approach on July 14, 2015 — of the New Horizons Pluto encounter.
Image Credit: NASA/JHU APL/SwRI

The images captured by New Horizons’ telescopic Long-Range Reconnaissance Imager (LORRI) will give mission scientists a continually improving look at the dynamics of Pluto’s moons. The images also will play a critical role in navigating the spacecraft as it covers the remaining 135 million miles (220 million kilometers) to Pluto.

“We’ve completed the longest journey any spacecraft has flown from Earth to reach its primary target, and we are ready to begin exploring,” said Alan Stern, New Horizons principal investigator from Southwest Research Institute in Boulder, Colorado.

LORRI will take hundreds of pictures of Pluto over the next few months to refine current estimates of the distance between the spacecraft and the dwarf planet. Though the Pluto system will resemble little more than bright dots in the camera’s view until May, mission navigators will use the data to design course-correction maneuvers to aim the spacecraft toward its target point this summer. The first such maneuver could occur as early as March.

“We need to refine our knowledge of where Pluto will be when New Horizons flies past it,” said Mark Holdridge, New Horizons encounter mission manager at Johns Hopkins University’s Applied Physics Laboratory (APL) in Laurel, Maryland. “The flyby timing also has to be exact, because the computer commands that will orient the spacecraft and point the science instruments are based on precisely knowing the time we pass Pluto – which these images will help us determine.”

The “optical navigation” campaign that begins this month marks the first time pictures from New Horizons will be used to help pinpoint Pluto’s location.

Throughout the first approach phase, which runs until spring, New Horizons will conduct a significant amount of additional science. Spacecraft instruments will gather continuous data on the interplanetary environment where the planetary system orbits, including measurements of the high-energy particles streaming from the sun and dust-particle concentrations in the inner reaches of the Kuiper Belt. In addition to Pluto, this area, the unexplored outer region of the solar system, potentially includes thousands of similar icy, rocky small planets.

More intensive studies of Pluto begin in the spring, when the cameras and spectrometers aboard New Horizons will be able to provide image resolutions higher than the most powerful telescopes on Earth. Eventually, the spacecraft will obtain images good enough to map Pluto and its moons more accurately than achieved by previous planetary reconnaissance missions.

APL manages the New Horizons mission for NASA’s Science Mission Directorate in Washington. Alan Stern, of the Southwest Research Institute (SwRI), headquartered in San Antonio, is the principal investigator and leads the mission. SwRI leads the science team, payload operations, and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama. APL designed, built and operates the spacecraft.

For more information about the New Horizons mission, visit:

www.nasa.gov/newhorizons

Software that knows the risks

Planning algorithms evaluate probability of success, suggest low-risk alternatives.

By Larry Hardesty


CAMBRIDGE, Mass. – Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours. Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.

That kind of application is the goal of Brian Williams’ group at MIT’s Computer Science and Artificial Intelligence Laboratory — although the same underlying framework has led to software that both NASA and the Woods Hole Oceanographic Institution have used to plan missions.

At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, researchers in Williams’ group will present algorithms that represent significant steps toward what Williams describes as “a better Siri” — the user-assistance application found in Apple products. But they would be just as useful for any planning task — say, scheduling flights or bus routes.

Together with Williams, Peng Yu and Cheng Fang, who are graduate students in MIT’s Department of Aeronautics and Astronautics, have developed software that allows a planner to specify constraints — say, buses along a certain route should reach their destination at 10-minute intervals — and reliability thresholds, such as that the buses should be on time at least 90 percent of the time. Then, on the basis of probabilistic models — which reveal data such as that travel time along this mile of road fluctuates between two and 10 minutes — the system determines whether a solution exists: For example, perhaps the buses’ departures should be staggered by six minutes at some times of day, 12 minutes at others.

If, however, a solution doesn’t exist, the software doesn’t give up. Instead, it suggests ways in which the planner might relax the problem constraints: Could the buses reach their destinations at 12-minute intervals? If the planner rejects the proposed amendment, the software offers an alternative: Could you add a bus to the route?
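As a concrete illustration of the kind of check involved, the sketch below compounds per-segment travel-time distributions, tests the planner’s reliability threshold, and proposes a relaxed deadline when the threshold fails. It assumes independent, normally distributed segment times, which is a simplification for this example rather than a detail of the MIT system.

from scipy.stats import norm

# Per-mile travel-time model: (mean, variance) in minutes, invented numbers.
segments = [(4.0, 1.0), (6.0, 4.0), (5.0, 2.25), (3.5, 0.5)]

total_mean = sum(m for m, _ in segments)
total_std = sum(v for _, v in segments) ** 0.5  # variances add if independent

deadline = 22.0                                  # minutes allowed for the route
p_on_time = norm.cdf(deadline, loc=total_mean, scale=total_std)
print(f"P(on time) = {p_on_time:.1%}")           # compare with the 90% threshold

if p_on_time < 0.90:
    # Propose the smallest deadline that would meet the threshold.
    relaxed = norm.ppf(0.90, loc=total_mean, scale=total_std)
    print(f"suggest relaxing the deadline to {relaxed:.1f} minutes")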

Short tails

One aspect of the software that distinguishes it from previous planning systems is that it assesses risk. “It’s always hard working directly with probabilities, because they always add complexity to your computations,” Fang says. “So we added this idea of risk allocation. We say, ‘What’s your budget of risk for this entire mission? Let’s divide that up and use it as a resource.’”

The time it takes to traverse any mile of a bus route, for instance, can be represented by a probability distribution — a bell curve, plotting time against probability. Keeping track of all those probabilities and compounding them for every mile of the route would yield a huge computation. But if the system knows in advance that the planner can tolerate a certain amount of failure, it can, in effect, assign that failure to the lowest-probability outcomes in the distributions, lopping off their tails. That makes them much easier to deal with mathematically.
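A minimal sketch of the risk-allocation idea, with invented numbers: divide the mission’s risk budget evenly across segments and truncate each travel-time distribution at the matching quantile, leaving a deterministic time bound per segment. By the union bound, if every segment respects its share, the whole plan fails with probability at most the original budget.

from scipy.stats import norm

risk_budget = 0.10              # tolerate a 10% chance of missing the deadline
segments = [(4.0, 1.0), (6.0, 2.0), (5.0, 1.5)]  # (mean, std) minutes, invented

share = risk_budget / len(segments)              # naive even allocation
for mean, std in segments:
    # Quantile at which the tail beyond it carries exactly this segment's
    # share of risk; planning can treat the bound as a hard worst case.
    bound = norm.ppf(1.0 - share, loc=mean, scale=std)
    print(f"segment mean {mean:.1f} min -> plan with {bound:.1f} min "
          f"(overrun probability {share:.1%})")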

At AAAI, Williams and another of his students, Andrew Wang, have a paper describing how to evaluate those assignments efficiently, in order to find quick solutions to soluble planning problems. But the paper with Yu and Fang — which appears at the same conference session — concentrates on identifying those constraints that prevent a problem’s solution.

There’s the rub

Both procedures are rooted in graph theory. In this context, a graph is a data representation that consists of nodes, usually depicted as circles, and edges, usually depicted as line segments connecting the nodes. Any scheduling problem can be represented as a graph. Nodes represent events, and the edges indicate the sequence in which events must occur. Each edge also has an associated weight, indicating the cost of progressing from one event to the next — the time it takes a bus to travel between stops, for instance.

Yu, Williams, and Fang’s algorithm first represents a problem as a graph, then begins adding edges that represent the constraints imposed by the planner. If the problem is soluble, the weights of the edges representing constraints will everywhere be greater than the weights representing the costs of transitions between events. Existing algorithms, however, can quickly home in on loops in the graph where the weights are imbalanced. The MIT researchers’ system then calculates the lowest-cost way of rebalancing the loop, which it presents to the planner as a modification of the problem’s initial constraints.
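The loop-imbalance test described above can be modeled as negative-cycle detection over a graph of difference constraints, a standard technique in temporal planning; the encoding below is an illustrative simplification, not the paper’s exact formulation.

def has_negative_cycle(num_nodes, edges):
    """Bellman-Ford negative-cycle test. Each edge (u, v, w) encodes the
    difference constraint t_v - t_u <= w; a negative-weight cycle means
    the constraints contradict each other."""
    dist = [0.0] * num_nodes          # implicit source connected to every node
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, some cycle has negative total weight.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Events 0 -> 1 -> 2. Each leg needs at least 5 and 6 minutes (encoded as
# negative reverse edges); the planner demands arrival within 10 minutes.
edges = [(1, 0, -5.0), (2, 1, -6.0), (0, 2, 10.0)]
print(has_negative_cycle(3, edges))   # True: 10 - 6 - 5 < 0, infeasible
edges[2] = (0, 2, 12.0)               # relax the constraint to 12 minutes
print(has_negative_cycle(3, edges))   # False: now feasible

Rebalancing the offending loop at lowest cost, as in the paper, then corresponds to proposing the cheapest change of constraint weights that removes the negative cycle.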

Source: MIT News Office

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.

Credit: F. Najafi/ MIT

Toward quantum chips

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits.

By Larry Hardesty


CAMBRIDGE, Mass. – A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.
Credit: F. Najafi/ MIT

In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.
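A back-of-envelope calculation shows why monolithic fabrication scales so badly and why testing and transferring known-good detectors helps. The yield figure below is an assumption chosen to match “only a handful” out of 100, not a number from the paper.

p = 0.05     # assumed per-detector yield ("a handful" out of 100)
n = 8        # detectors wanted on one photonic chip

# Monolithic fabrication needs all n detectors on the chip to work at once.
print(f"monolithic array yield: {p**n:.1e}")
# Fabricate-test-transfer only spends detectors that already passed testing.
print(f"detectors to fabricate and test for {n} good ones: ~{n/p:.0f}")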

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
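The state-vector picture behind this paragraph can be made concrete in a few lines of linear algebra. The sketch below (plain NumPy, not any particular quantum-computing library) builds a separable two-qubit state and an entangled Bell state, and shows the exponential growth in amplitudes that underlies the “many computations in parallel” intuition.

import numpy as np

zero = np.array([1.0, 0.0])   # |0>
one = np.array([0.0, 1.0])    # |1>

# A separable two-qubit state is a tensor (Kronecker) product of
# single-qubit states.
separable = np.kron(zero, one)                       # |01>

# A Bell state: equal superposition of |00> and |11>. It is entangled:
# no pair of single-qubit states has this as its tensor product.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(bell)                                          # [0.707 0 0 0.707]

# The exponential scaling that makes qubits powerful (and hard to simulate):
for n in (2, 10, 50):
    print(f"{n} qubits -> {2**n:,} complex amplitudes")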

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

Relocation

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

Source: MIT News Office

The dark nebula LDN 483.
Credit: ESO

Where Did All the Stars Go?

Dark cloud obscures hundreds of background stars


Some of the stars appear to be missing in this intriguing new ESO image. But the black gap in this glitteringly beautiful starfield is not really a gap, but rather a region of space clogged with gas and dust. This dark cloud is called LDN 483 — for Lynds Dark Nebula 483. Such clouds are the birthplaces of future stars. The Wide Field Imager, an instrument mounted on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile, captured this image of LDN 483 and its surroundings.

The Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile snapped this image of the dark nebula LDN 483. The object is a region of space clogged with gas and dust. These materials are dense enough to effectively eclipse the light of background stars. LDN 483 is located about 700 light-years away in the constellation of Serpens (The Serpent). Credit: ESO

LDN 483 [1] is located about 700 light-years away in the constellation of Serpens (The Serpent). The cloud contains enough dusty material to completely block the visible light from background stars. Particularly dense molecular clouds, like LDN 483, qualify as dark nebulae because of this obscuring property. The starless nature of LDN 483 and its ilk would suggest that they are sites where stars cannot take root and grow. But in fact the opposite is true: dark nebulae offer the most fertile environments for eventual star formation.

Astronomers studying star formation in LDN 483 have discovered some of the youngest observable kinds of baby stars buried in its shrouded interior. These gestating stars can be thought of as still being in the womb, having not yet been born as complete, albeit immature, stars.

In this first stage of stellar development, the star-to-be is just a ball of gas and dust contracting under the force of gravity within the surrounding molecular cloud. The protostar is still quite cool — about –250 degrees Celsius — and shines only in long-wavelength submillimetre light [2]. Yet temperature and pressure are beginning to increase in the fledgling star’s core.

This earliest period of star growth lasts a mere few thousand years, an astonishingly short amount of time in astronomical terms, given that stars typically live for millions or billions of years. In the following stages, over the course of several million years, the protostar will grow warmer and denser. Its emission will increase in energy along the way, graduating from mainly cold, far-infrared light to near-infrared and finally to visible light. The once-dim protostar will have then become a fully luminous star.

As more and more stars emerge from the inky depths of LDN 483, the dark nebula will disperse further and lose its opacity. The missing background stars that are currently hidden will then come into view — but only after the passage of millions of years, and they will be outshone by the brilliant young stars born in the cloud.

Notes
[1] The Lynds Dark Nebula catalogue was compiled by the American astronomer Beverly Turner Lynds, and published in 1962. These dark nebulae were found from visual inspection of the Palomar Sky Survey photographic plates.

[2] The Atacama Large Millimeter/submillimeter Array (ALMA), operated in part by ESO, observes in submillimetre and millimetre light and is ideal for the study of such very young stars in molecular clouds.

Source: ESO

Although NASA's Hubble Space Telescope has taken many breathtaking images of the universe, one snapshot stands out from the rest: the iconic view of the so-called "Pillars of Creation." The jaw-dropping photo, taken in 1995, revealed never-before-seen details of three giant columns of cold gas bathed in the scorching ultraviolet light from a cluster of young, massive stars in a small region of the Eagle Nebula, or M16.

Credit: Hubble Site

Hubble Goes High Def to Revisit the Iconic ‘Pillars of Creation’

Although NASA’s Hubble Space Telescope has taken many breathtaking images of the universe, one snapshot stands out from the rest: the iconic view of the so-called “Pillars of Creation.” The jaw-dropping photo, taken in 1995, revealed never-before-seen details of three giant columns of cold gas bathed in the scorching ultraviolet light from a cluster of young, massive stars in a small region of the Eagle Nebula, or M16.

Though such butte-like features are common in star-forming regions, the M16 structures are by far the most photogenic and evocative. The Hubble image is so popular that it has appeared in movies and television shows, on tee-shirts and pillows, and even on a postage stamp.

And now, in celebration of its 25th anniversary, Hubble has revisited the famous pillars, providing astronomers with a sharper and wider view. As a bonus, the pillars have been photographed in near-infrared light, as well as visible light. The infrared view transforms the pillars into eerie, wispy silhouettes seen against a background of myriad stars. That’s because the infrared light penetrates much of the gas and dust, except for the densest regions of the pillars. Newborn stars can be seen hidden away inside the pillars. The new images are being unveiled at the American Astronomical Society meeting in Seattle, Washington.

Although the original image was dubbed the Pillars of Creation, the new image hints that they are also pillars of destruction. “I’m impressed by how transitory these structures are. They are actively being ablated away before our very eyes. The ghostly bluish haze around the dense edges of the pillars is material getting heated up and evaporating away into space. We have caught these pillars at a very unique and short-lived moment in their evolution,” explained Paul Scowen of Arizona State University in Tempe, who, with astronomer Jeff Hester, formerly of Arizona State University, led the original Hubble observations of the Eagle Nebula.

The infrared image shows that the pillars exist because their very ends are dense and shadow the gas below them, creating the long, pillar-like structures. The gas in between the pillars has long since been blown away by the ionizing winds from the central star cluster located above the pillars.

At the top edge of the left-hand pillar, a gaseous fragment has been heated up and is flying away from the structure, underscoring the violent nature of star-forming regions. “These pillars represent a very dynamic, active process,” Scowen said. “The gas is not being passively heated up and gently wafting away into space. The gaseous pillars are actually getting ionized (a process by which electrons are stripped off of atoms) and heated up by radiation from the massive stars. And then they are being eroded by the stars’ strong winds (barrage of charged particles), which are sandblasting away the tops of these pillars.”

When Scowen and Hester used Hubble to make the initial observations of the Eagle Nebula in 1995, astronomers had seen the pillar-like structures in ground-based images, but not in detail. They knew that the physical processes were not unique to the Eagle Nebula because star birth takes place across the universe. But at a distance of just 6,500 light-years, M16 is the most dramatic nearby example, as the team soon realized.

As Scowen was piecing together the Hubble exposures of the Eagle, he was amazed at what he saw. “I called Jeff Hester on his phone and said, ‘You need to get here now,’” Scowen recalled. “We laid the pictures out on the table, and we were just gushing because of all the incredible detail that we were seeing for the very first time.”

The first features that jumped out at the team in 1995 were the streamers of gas seemingly floating away from the columns. Astronomers had previously debated what effect nearby massive stars would have on the surrounding gas in stellar nurseries. “There is only one thing that can light up a neighborhood like this: massive stars kicking out enough horsepower in ultraviolet light to ionize the gas clouds and make them glow,” Scowen said. “Nebulous star-forming regions like M16 are the interstellar neon signs that say, ‘We just made a bunch of massive stars here.’ This was the first time we had directly seen observational evidence that the erosionary process, not only the radiation but the mechanical stripping away of the gas from the columns, was actually being seen.”

By comparing the 1995 and 2014 pictures, astronomers also noticed a lengthening of a narrow jet-like feature that may have been ejected from a newly forming star. The jet looks like a stream of water from a garden hose. Over the intervening 19 years, this jet has stretched farther into space, across an additional 60 billion miles, at an estimated speed of about 450,000 miles per hour.

Although NASA’s Hubble Space Telescope has taken many breathtaking images of the universe, one snapshot stands out from the rest: the iconic view of the so-called “Pillars of Creation.” The jaw-dropping photo, taken in 1995, revealed never-before-seen details of three giant columns of cold gas bathed in the scorching ultraviolet light from a cluster of young, massive stars in a small region of the Eagle Nebula, or M16.
Credit: Hubble Site

Our Sun probably formed in a similar turbulent star-forming region. There is evidence that the forming solar system was seasoned with radioactive shrapnel from a nearby supernova. That means that our Sun was formed as part of a cluster that included stars massive enough to produce powerful ionizing radiation, such as is seen in the Eagle Nebula. “That’s the only way the nebula from which the Sun was born could have been exposed to a supernova that quickly, in the short period of time that represents, because supernovae only come from massive stars, and those stars only live a few tens of millions of years,” Scowen explained. “What that means is when you look at the environment of the Eagle Nebula or other star-forming regions, you’re looking at exactly the kind of nascent environment that our Sun formed in.”
Source: Hubble Site

More Related Images: http://hubblesite.org/newscenter/archive/releases/2015/02/image/a/

Hubble’s High-Definition Panoramic View of the Andromeda Galaxy

The largest NASA Hubble Space Telescope image ever assembled, this sweeping view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic neighbor. Though the galaxy is over 2 million light-years away, the Hubble telescope is powerful enough to resolve individual stars in a 61,000-light-year-long section of the galaxy’s pancake-shaped disk. It’s like photographing a beach and resolving individual grains of sand. And, there are lots of stars in this sweeping view — over 100 million, with some of them in thousands of star clusters seen embedded in the disk. This ambitious photographic cartography of the Andromeda galaxy represents a new benchmark for precision studies of large spiral galaxies which dominate the universe’s population of over 100 billion galaxies. Never before have astronomers been able to see individual stars over a major portion of an external spiral galaxy. Most of the stars in the universe live inside such majestic star cities, and this is the first data that reveal populations of stars in context to their home galaxy.


DETAILS ABOUT THIS IMAGE:


Hubble traces densely packed stars extending from the innermost hub of the galaxy, seen at left. Moving out from this central galactic bulge, the panorama sweeps across lanes of stars and dust to the sparser outer disk. Large groups of young blue stars indicate the locations of star clusters and star-forming regions. The stars bunch up in the blue ring-like feature toward the right side of the image. The dark silhouettes trace out complex dust structures. Underlying the entire galaxy is a smooth distribution of cooler red stars that trace Andromeda’s evolution over billions of years.

Because the galaxy is only 2.5 million light-years from Earth, it is a much bigger target in the sky than the myriad galaxies Hubble routinely photographs that are billions of light-years away. The Hubble survey was therefore assembled into a mosaic image from 7,398 exposures taken over 411 individual pointings.

The panorama is the product of the Panchromatic Hubble Andromeda Treasury (PHAT) program. Images were obtained from viewing the galaxy in near-ultraviolet, visible, and near-infrared wavelengths, using the Advanced Camera for Surveys and the Wide Field Camera 3 aboard Hubble. This cropped view shows a 48,000-light-year-long stretch of the galaxy in its natural visible-light color, as photographed with Hubble’s Advanced Camera for Surveys in red and blue filters July 2010 through October 2013.

The panorama is being presented at the 225th Meeting of the American Astronomical Society in Seattle, Washington.

Source: Hubble Site

The enormous structure, dubbed the Fermi Bubbles, was discovered five years ago as a gamma-ray glow on the sky in the direction of the galactic center. The balloon-like features have since been observed in X-rays and radio waves. But astronomers needed NASA's Hubble Space Telescope to measure for the first time the velocity and composition of the mystery lobes. 

Credit: Hubble Site

Hubble Discovers that Milky Way Core Drives Wind at 2 Million Miles Per Hour

At a time when our earliest human ancestors had recently mastered walking upright, the heart of our Milky Way galaxy underwent a titanic eruption, driving gases and other material outward at 2 million miles per hour.

Now, at least 2 million years later, astronomers are witnessing the aftermath of the explosion: billowing clouds of gas towering about 30,000 light-years above and below the plane of our galaxy.

The enormous structure was discovered five years ago as a gamma-ray glow on the sky in the direction of the galactic center. The balloon-like features have since been observed in X-rays and radio waves. But astronomers needed NASA’s Hubble Space Telescope to measure for the first time the velocity and composition of the mystery lobes. They now seek to calculate the mass of the material being blown out of our galaxy, which could lead them to determine the outburst’s cause from several competing scenarios.

Astronomers have proposed two possible origins for the bipolar lobes: a firestorm of star birth at the Milky Way’s center or the eruption of its supermassive black hole. Although astronomers have seen gaseous winds, composed of streams of charged particles, emanating from the cores of other galaxies, they are getting a unique, close-up view of our galaxy’s own fireworks.

“When you look at the centers of other galaxies, the outflows appear much smaller because the galaxies are farther away,” said Andrew Fox of the Space Telescope Science Institute in Baltimore, Maryland, lead researcher of the study. “But the outflowing clouds we’re seeing are only 25,000 light-years away in our galaxy. We have a front-row seat. We can study the details of these structures. We can look at how big the bubbles are and can measure how much of the sky they are covering.”

Fox’s results will be published in The Astrophysical Journal Letters and will be presented at the American Astronomical Society meeting in Seattle, Washington.

The giant lobes, dubbed Fermi Bubbles, initially were spotted using NASA’s Fermi Gamma-ray Space Telescope. The detection of high-energy gamma rays suggested that a violent event in the galaxy’s core aggressively launched energized gas into space. To provide more information about the outflows, Fox used Hubble’s Cosmic Origins Spectrograph (COS) to probe the ultraviolet light from a distant quasar that lies behind the base of the northern bubble. Imprinted on that light as it travels through the lobe is information about the velocity, composition, and temperature of the expanding gas inside the bubble, which only COS can provide.

The enormous structure, dubbed the Fermi Bubbles, was discovered five years ago as a gamma-ray glow on the sky in the direction of the galactic center. The balloon-like features have since been observed in X-rays and radio waves. But astronomers needed NASA’s Hubble Space Telescope to measure for the first time the velocity and composition of the mystery lobes.
Credit: Hubble Site

Fox’s team was able to measure that the gas on the near side of the bubble is moving toward Earth and the gas on the far side is travelling away. COS spectra show that the gas is rushing from the galactic center at roughly 2 million miles an hour (3 million kilometers an hour).
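As a rough worked example of how such a speed is read off a spectrum: for outflow velocities far below the speed of light, a spectral line’s wavelength shifts by Δλ/λ ≈ v/c. The numbers below use an assumed rest line, Si III at 1206.5 angstroms (silicon is among the elements the article says COS detected), purely to illustrate the scale of the shift; they are not figures from the paper.

c_km_s = 299_792.458          # speed of light in km/s
v_km_s = 3.0e6 / 3600.0       # 3 million km/h is roughly 833 km/s

rest = 1206.5                 # angstroms; Si III, an assumed example line
shift = rest * v_km_s / c_km_s
print(f"{v_km_s:.0f} km/s -> wavelength shift of about {shift:.1f} angstroms")
# Blueshifted by this amount on the bubble's near side, redshifted on the far.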

“This is exactly the signature we knew we would get if this was a bipolar outflow,” explained Rongmon Bordoloi of the Space Telescope Science Institute, a co-author on the science paper. “This is the closest sightline we have to the galaxy’s center where we can see the bubble being blown outward and energized.”

The COS observations also measure, for the first time, the composition of the material being swept up in the gaseous cloud. COS detected silicon, carbon, and aluminum, indicating that the gas is enriched in the heavy elements produced inside stars and represents the fossil remnants of star formation.

COS measured the temperature of the gas at approximately 17,500 degrees Fahrenheit, which is much cooler than most of the super-hot gas in the outflow, thought to be at about 18 million degrees Fahrenheit. “We are seeing cooler gas, perhaps interstellar gas in our galaxy’s disk, being swept up into that hot outflow,” Fox explained.

This is the first result in a survey of 20 faraway quasars whose light passes through gas inside or just outside the Fermi Bubbles — like a needle piercing a balloon. An analysis of the full sample will yield the amount of mass being ejected. The astronomers can then compare the outflow mass with the velocities at various locations in the bubbles to determine the amount of energy needed to drive the outburst and possibly the origin of the explosive event.

One possible cause for the outflows is a star-making frenzy near the galactic center that produces supernovas, which blow out gas. Another scenario is a star or a group of stars falling onto the Milky Way’s supermassive black hole. When that happens, gas superheated by the black hole blasts deep into space. The bubbles are short-lived compared to the age of our galaxy, which suggests this may be a repeating phenomenon in the Milky Way’s history. Whatever the trigger is, it likely occurs episodically, perhaps only when the black hole gobbles up a concentration of material.

“It looks like the outflows are a hiccup,” Fox said. “There may have been repeated ejections of material that have blown up, and we’re catching the latest one. By studying the light from the other quasars in our program, we may be able to detect the fossils of previous outflows.”

Galactic winds are common in star-forming galaxies, such as M82, which is furiously making stars in its core. “It looks like there’s a link between the amount of star formation and whether or not these outflows happen,” Fox said. “Although the Milky Way overall currently produces a moderate one to two stars a year, there is a high concentration of star formation close to the core of the galaxy.”

Source: Hubble Site

Rotating night shift work can be hazardous to your health

Possible increase in cardiovascular disease and lung cancer mortality observed in nurses working rotating night shifts, according to report in the American Journal of Preventive Medicine

ELSEVIER HEALTH SCIENCES


Night shift work has been consistently associated with higher risk for cardiovascular disease (CVD) and cancer. In 2007 the World Health Organization classified night shift work as a probable carcinogen due to circadian disruption. In a study in the current issue of the American Journal of Preventive Medicine, researchers found that women working rotating night shifts for five or more years appeared to have a modest increase in all-cause and CVD mortality, and that those working 15 or more years of rotating night shift work appeared to have a modest increase in lung cancer mortality. These results add to prior evidence of a potentially detrimental effect of rotating night shift work on health and longevity.

Sleep and the circadian system play an important role in cardiovascular health and antitumor activity. There is substantial biological evidence that night shift work enhances the development of cancer and CVD, and contributes to higher mortality.

An international team of researchers investigated possible links between rotating night shift work and all-cause, CVD, and cancer mortality in a study of almost 75,000 registered U.S. nurses. Using data from the Nurses’ Health Study (NHS), the authors analyzed 22 years of follow-up and found that working rotating night shifts for more than five years was associated with an increase in all-cause and CVD mortality. Mortality from all causes appeared to be 11% higher for women with 6-14 or ≥15 years of rotating night shift work. CVD mortality appeared to be 19% and 23% higher for those groups, respectively. There was no association between rotating shift work and any cancer mortality, except for lung cancer in those who worked shift work for 15 or more years (25% higher risk).

The NHS, which is based at Brigham and Women’s Hospital, began in 1976 with 121,700 U.S. female nurses aged 30-55 years, who have been followed up with biennial questionnaires. Night shift information was collected in 1988, at which time 85,197 nurses responded. After excluding women with pre-existing CVD or cancer other than non-melanoma skin cancer, 74,862 women were included in this analysis. Defining rotating shift work as working at least three nights per month in addition to days or evenings in that month, respondents were asked how many years they had worked in this way. The prespecified categories were never, 1-2, 3-5, 6-9, 10-14, 15-19, 20-29, and ≥30 years.

According to Eva S. Schernhammer, MD, DrPH, currently Associate Professor of Medicine, Harvard Medical School, and Associate Epidemiologist, Department of Medicine, Brigham and Women’s Hospital, Boston, this study “is one of the largest prospective cohort studies worldwide with a high proportion of rotating night shift workers and long follow-up time. A single occupation (nursing) provides more internal validity than a range of different occupational groups, where the association between shift work and disease outcomes could be confounded by occupational differences.”

Comparing this work with previous studies, she continues, “These results add to prior evidence of a potentially detrimental relation of rotating night shift work and health and longevity…To derive practical implications for shift workers and their health, the role of duration and intensity of rotating night shift work and the interplay of shift schedules with individual traits (e.g., chronotype) warrant further exploration.”

Source: American Journal of Preventive Medicine via EurekAlert