Tag Archives: news

Trapping light with a twister

New understanding of how to halt photons could lead to miniature particle accelerators, improved data transmission.

By David L. Chandler


Researchers at MIT who succeeded last year in creating a material that could trap light and stop it in its tracks have now developed a more fundamental understanding of the process. The new work — which could help explain some basic physical mechanisms — reveals that this behavior is connected to a wide range of other seemingly unrelated phenomena.

The findings are reported in a paper in the journal Physical Review Letters, co-authored by MIT physics professor Marin Soljačić; postdocs Bo Zhen, Chia Wei Hsu, and Ling Lu; and Douglas Stone, a professor of applied physics at Yale University.

Light can usually be confined only with mirrors, or with specialized materials such as photonic crystals. Both of these approaches block light beams; last year’s finding demonstrated a new method in which the waves cancel out their own radiation fields. The new work shows that this light-trapping process, which involves twisting the polarization direction of the light, is based on a kind of vortex — the same phenomenon behind everything from tornadoes to water swirling down a drain.

Vortices of bound states in the continuum. The left panel shows five bound states in the continuum in a photonic crystal slab as bright spots. The right panel shows the polarization vector field in the same region, revealing five vortices at the locations of the bound states in the continuum. These vortices are characterized by topological charges of +1 or -1.
Courtesy of the researchers. Source: MIT

In addition to revealing the mechanism responsible for trapping the light, the new analysis shows that this trapped state is much more stable than had been thought, making it easier to produce and harder to disturb.

“People think of this [trapped state] as very delicate,” Zhen says, “and almost impossible to realize. But it turns out it can exist in a robust way.”

In most natural light, the direction of polarization — which can be thought of as the direction in which the light waves vibrate — remains fixed. That’s the principle that allows polarizing sunglasses to work: Light reflected from a surface is selectively polarized in one direction; that reflected light can then be blocked by polarizing filters oriented at right angles to it.

But in the case of these light-trapping crystals, light that enters the material becomes polarized in a way that forms a vortex, Zhen says, with the direction of polarization changing depending on the beam’s direction.

Because the polarization is different at every point in this vortex, it produces a singularity — also called a topological defect, Zhen says — at its center, trapping the light at that point.
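The “charge” mentioned in the figure caption can be made precise as a winding number: how many times the polarization direction rotates as one traces a small loop around the singularity. In generic notation (the symbols below are ours for illustration, not necessarily those used in the paper), it is

```latex
q \;=\; \frac{1}{2\pi} \oint_{C} \mathrm{d}\mathbf{k} \cdot \nabla_{\mathbf{k}}\, \phi(\mathbf{k})
```

where φ(k) is the angle of the polarization vector at wave vector k and C is a closed loop around the vortex centre; q = +1 or -1 means the polarization makes exactly one full turn, in one sense or the other, around the trapped state.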

Hsu says the phenomenon makes it possible to produce something called a vector beam, a special kind of laser beam that could potentially create small-scale particle accelerators. Such devices could use these vector beams to accelerate particles and smash them into each other — perhaps allowing future tabletop devices to carry out the kinds of high-energy experiments that today require miles-wide circular tunnels.

The finding, Soljačić says, could also enable easy implementation of super-resolution imaging (using a method called stimulated emission depletion microscopy) and could allow the sending of far more channels of data through a single optical fiber.

“This work is a great example of how supposedly well-studied physical systems can contain rich and undiscovered phenomena, which can be unearthed if you dig in the right spot,” says Yidong Chong, an assistant professor of physics and applied physics at Nanyang Technological University in Singapore who was not involved in this research.

Chong says it is remarkable that such surprising findings have come from relatively well-studied materials. “It deals with photonic crystal slabs of the sort that have been extensively analyzed, both theoretically and experimentally, since the 1990s,” he says. “The fact that the system is so unexotic, together with the robustness associated with topological phenomena, should give us confidence that these modes will not simply be theoretical curiosities, but can be exploited in technologies such as microlasers.”

The research was partly supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and by the Department of Energy and the National Science Foundation.

Source: MIT News Office

Quantum physics breakthrough: Scientists solve 100-year-old puzzle

Two fundamental concepts of the quantum world are actually just different manifestations of the same thing, says Waterloo researcher.

By Jenny Hogan

Centre for Quantum Technologies


A Waterloo researcher is part of an international team that has proven that two peculiar features of the quantum world – long thought to be distinct – are actually different manifestations of the same thing.

The breakthrough findings are published today in Nature Communications. The two distinct ideas in question have been fundamental concepts in quantum physics since the early 1900s. They are what is known as wave-particle duality and the uncertainty principle.

“We were guided by a gut feeling, and only a gut feeling, that there should be a connection,” says Patrick Coles, now a postdoctoral fellow at the Institute for Quantum Computing and the Department of Physics and Astronomy at the University of Waterloo.

  • Wave-particle duality is the idea that a quantum particle can behave like a wave, but that the wave behavior disappears if you try to locate the object.
  • The uncertainty principle is the idea that it’s impossible to know certain pairs of things about a quantum particle at once. For example, the more precisely you know the position of an atom, the less precisely you can know the speed with which it’s moving (the standard relation expressing this trade-off is shown just below).
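For the position and momentum example above, the trade-off is usually written as the Heisenberg uncertainty relation. This is the textbook form, given here only for orientation; it is not necessarily the exact formulation the team worked with.

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

Here σ_x and σ_p are the spreads (standard deviations) of position and momentum measurements on identically prepared particles, and ħ is the reduced Planck constant.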

Coles was part of the research team at the National University of Singapore that made the discovery that wave-particle duality is simply the quantum uncertainty principle in disguise.

Like discovering the Rosetta Stone of quantum physics

“It was like we had discovered the ‘Rosetta Stone’ that connected two different languages,” says Coles. “The literature on wave-particle duality was like hieroglyphics that we could translate into our native tongue. We had several eureka moments when we finally understood what people had done.”

The research team at Singapore’s Centre for Quantum Technologies included Jedrzej Kaniewski and Stephanie Wehner, now both researchers at the Netherlands’ Delft University of Technology.

“The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information,” says Wehner.

The wave-particle duality is perhaps most simply seen in a double slit experiment, where single particles, electrons, say, are fired one by one at a screen containing two narrow slits. The particles pile up behind the slits not in two heaps as classical objects would, but in a stripy pattern like you’d expect for waves interfering. At least this is what happens until you sneak a look at which slit a particle goes through – do that and the interference pattern vanishes.
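As a rough numerical companion to the stripy pattern described above, the sketch below evaluates the idealised two-slit interference intensity on a screen. The wavelength, slit separation and screen distance are arbitrary illustrative values, and the single-slit diffraction envelope is ignored.

```python
import numpy as np

# Illustrative (made-up) parameters.
wavelength = 50e-9   # 50 nm, arbitrary
d = 1e-6             # slit separation: 1 micrometre
L = 1.0              # distance from slits to screen: 1 m

x = np.linspace(-5e-2, 5e-2, 1001)                 # positions on the screen (m)
phase_diff = 2 * np.pi * d * x / (wavelength * L)  # phase difference between the two paths
intensity = np.cos(phase_diff / 2) ** 2            # idealised two-slit fringes (no envelope)

# Bright fringes sit where the phase difference is a whole number of cycles,
# i.e. every wavelength * L / d along the screen.
print(f"fringe spacing: {wavelength * L / d * 1e3:.1f} mm")
```

Looking at which slit a particle went through amounts to destroying the fixed phase relationship between the two paths, and with it the cos² fringes.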

The discovery deepens our understanding of quantum physics and could prompt ideas for new applications of wave-particle duality.

New protocols for quantum cryptography possible

Coles, Kaniewski and Wehner are experts in a form of mathematical equations known as ‘entropic uncertainty relations.’ They discovered that all the maths previously used to describe wave-particle duality could be reformulated in terms of these relations.
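For readers unfamiliar with the term, a standard member of this family is the Maassen-Uffink relation shown below. It illustrates what an entropic uncertainty relation looks like; whether this exact form is the one the authors reformulated is an assumption not confirmed by the text above.

```latex
H(X) + H(Z) \;\ge\; \log_2 \frac{1}{c},
\qquad c \;=\; \max_{x,z} \,\bigl|\langle x | z \rangle\bigr|^2
```

Here H(X) and H(Z) are the Shannon entropies of the outcomes of two different measurements on the same quantum state, and c measures how similar the two measurement bases are: the less compatible the bases, the smaller c and the larger the combined uncertainty must be.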

Because the entropic uncertainty relations used in their translation have also been used in proving the security of quantum cryptography – schemes for secure communication using quantum particles – the researchers suggest the work could help inspire new cryptography protocols.

How is nature itself constructed?

In earlier papers, the researchers found connections between the uncertainty principle and other physics, namely quantum ‘non-locality’ and the second law of thermodynamics. The tantalizing next goal for the researchers is to think about how these pieces fit together and what bigger picture that paints of how nature is constructed.

Source: University of Waterloo


The Hot Blue Stars of Messier 47

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Messier 47 is located approximately 1600 light-years from Earth, in the constellation of Puppis (the poop deck of the mythological ship Argo). It was first noticed some time before 1654 by Italian astronomer Giovanni Battista Hodierna and was later independently discovered by Charles Messier himself, who apparently had no knowledge of Hodierna’s earlier observation.

Although it is bright and easy to see, Messier 47 is one of the least densely populated open clusters. Only around 50 stars are visible in a region about 12 light-years across, compared to other similar objects which can contain thousands of stars.

Messier 47 has not always been so easy to identify. In fact, for years it was considered missing, as Messier had recorded the coordinates incorrectly. The cluster was later rediscovered and given another catalogue designation — NGC 2422. The nature of Messier’s mistake, and the firm conclusion that Messier 47 and NGC 2422 are indeed the same object, was only established in 1959 by Canadian astronomer T. F. Morris.

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars. Credit: ESO



The bright blue–white colours of these stars are an indication of their temperature, with hotter stars appearing bluer and cooler stars appearing redder. This relationship between colour, brightness and temperature can be visualised by use of the Planck curve. But the more detailed study of the colours of stars using spectroscopy also tells astronomers a lot more — including how fast the stars are spinning and their chemical compositions. There are also a few bright red stars in the picture — these are red giant stars that are further through their short life cycles than the less massive and longer-lived blue stars [1].
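The colour-temperature link can be made quantitative with Wien's displacement law, which gives the wavelength at which a star's (approximately blackbody) spectrum peaks. The temperatures below are generic illustrative values, not measurements of stars in Messier 47.

```python
# Wien's displacement law: a blackbody at temperature T (kelvin) emits most strongly
# at wavelength lambda_peak = b / T, with b ~= 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak emission wavelength in nanometres for a blackbody at temperature_k."""
    return WIEN_B / temperature_k * 1e9

for name, temp in [("hot blue star", 20000), ("Sun-like star", 5800), ("cool red giant", 3500)]:
    print(f"{name:15s} T = {temp:5d} K -> peak near {peak_wavelength_nm(temp):4.0f} nm")
```

The hot star peaks in the ultraviolet, so the visible part of its light is dominated by blue, while the cool giant peaks in the near infrared and appears orange-red.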

By chance Messier 47 appears close in the sky to another contrasting star cluster — Messier 46. Messier 47 is relatively close, at around 1600 light-years, but Messier 46 is located around 5500 light-years away and contains a lot more stars, with at least 500 stars present. Despite containing more stars, it appears significantly fainter due to its greater distance.

Messier 46 could be considered to be the older sister of Messier 47, with the former being approximately 300 million years old compared to the latter’s 78 million years. Consequently, many of the most massive and brilliant of the stars in Messier 46 have already run through their short lives and are no longer visible, so most stars within this older cluster appear redder and cooler.

This image of Messier 47 was produced as part of the ESO Cosmic Gems programme [2].

Notes

[1] The lifetime of a star depends primarily on its mass. Massive stars, containing many times as much material as the Sun, have short lives measured in millions of years. On the other hand much less massive stars can continue to shine for many billions of years. In a cluster, the stars all have about the same age and same initial chemical composition. So the brilliant massive stars evolve quickest, become red giants sooner, and end their lives first, leaving the less massive and cooler ones to long outlive them.
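A rough scaling lies behind this note: a star's lifetime goes roughly as its fuel divided by its burn rate, t ∝ M/L, and with the commonly quoted mass-luminosity relation L ∝ M^3.5 this gives (the exponent and the ten-billion-year solar lifetime are approximate textbook values, not ESO figures)

```latex
t \;\approx\; 10\ \text{Gyr} \times \left(\frac{M}{M_\odot}\right)^{-2.5}
```

so a star of ten solar masses lasts only of order 30 million years, while one of half a solar mass would outlive the current age of the Universe many times over.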

[2] The ESO Cosmic Gems programme is an outreach initiative to produce images of interesting, intriguing or visually attractive objects using ESO telescopes, for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for science observations. All data collected may also be suitable for scientific purposes, and are made available to astronomers through ESO’s science archive.

More information

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO 

Characteristics of a universal simulator: Study narrows the scope of research on quantum computing

Despite a lot of work by many research groups around the world, the field of quantum computing is still in its early stages. We still need to cover a lot of ground to achieve the goal of developing a working quantum computer capable of doing the tasks expected of it. Recent research by a SISSA-led team tries to give future research in the area of quantum computing some direction, based on the current state of the field.


“A quantum computer may be thought of as a ‘simulator of overall Nature’,” explains Fabio Franchini, a researcher at the International School for Advanced Studies (SISSA) of Trieste. “In other words, it’s a machine capable of simulating Nature as a quantum system, something that classical computers cannot do.” Quantum computers are machines that carry out operations by exploiting the phenomena of quantum mechanics, and they are capable of performing different functions from those of current computers. This science is still very young and the systems produced to date are still very limited. Franchini is the first author of a study just published in Physical Review X which establishes a basic characteristic that this type of machine should possess and, in doing so, guides the direction of future research in this field.

The study used analytical and numerical methods. “What we found,” explains Franchini, “is that a system that does not exhibit ‘Majorana fermions’ cannot be a universal quantum simulator.” Majorana fermions were hypothesized by Ettore Majorana in a paper published in 1937, and they display peculiar characteristics: a Majorana fermion is also its own antiparticle. “That means that if Majorana fermions meet they annihilate among themselves,” continues Franchini. “In recent years it has been suggested that these fermions could be found in states of matter useful for quantum computing, and our study confirms that they must be present, with a certain probability related to entanglement, in the material used to build the machine.”

Entanglement, or “action at a distance”, is a property of quantum systems whereby an action done on one part of the system has an effect on another part of the same system, even if the latter has been split into two parts that are located very far apart. “Entanglement is a fundamental phenomenon for quantum computers,” explains Franchini.

“Our study helps to understand what types of devices research should be focusing on to construct this universal simulator. Until now, given the lack of criteria, research has proceeded somewhat randomly, with a huge consumption of time and resources”.

The study was conducted with the participation of several other international research institutes in addition to SISSA, including the Massachusetts Institute of Technology (MIT), the University of Oxford and many others.

More in detail…

“Having a quantum computer would open up new worlds. For example, if we had one today we would be able to break into any bank account,” jokes Franchini. “But don’t worry, we’re nowhere near that goal”.

At the present time, several attempts at quantum machines exist that rely on the properties of specific materials. Depending on the technology used, these computers have sizes varying from a small box to a whole room, but so far they are only able to process a limited number of information bits, an amount far smaller than that processed by classical computers.

However, it’s not correct to say that quantum computers are, or will be, more powerful than traditional ones, points out Franchini. “There are several things that these devices are worse at. But, by exploiting quantum mechanics, they can perform operations that would be impossible for classical computers”.

Source: International School for Advanced Studies (SISSA)

 


Best Quantum Receiver

RECORD HIGH DATA ACCURACY RATES FOR PHASE-MODULATED TRANSMISSION

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

 Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this by employing not passive detection of incoming light pulses, but an active approach in which the light is split up and measured numerous times.

By passing it through a special crystal, a light wave’s phase (denoting position along the wave’s cycle) can be delayed. A delay of a certain amount can denote a piece of data. In this experiment light pulses can be delayed by a zero amount, or by ¼ of a cycle, or 2/4, or ¾ of a cycle. Credit: JQI

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals we can lessen the amount of power needed to send signals reliably,” says Migdall. “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations. Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber. The information can be coded in several ways. Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude. Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave. On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.
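As a toy illustration of the simplest of these schemes, on-off keying, the sketch below gates a carrier wave with a bit pattern and then recovers the bits by thresholding the received energy. The carrier frequency, sample rate and threshold are arbitrary choices for the example.

```python
import numpy as np

# Toy on-off keying (OOK): turn the carrier off (0) or on (1) to convey bits.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
samples_per_bit = 100
cycles_per_bit = 5                       # arbitrary: 5 carrier cycles per bit period

t = np.arange(len(bits) * samples_per_bit) / samples_per_bit
carrier = np.sin(2 * np.pi * cycles_per_bit * t)        # the bare carrier wave
gate = np.repeat(bits, samples_per_bit)                 # 1 where the wave is "on"
signal = gate * carrier                                 # transmitted OOK waveform

# Crude receiver: average energy per bit period, then threshold.
energy = (signal.reshape(len(bits), samples_per_bit) ** 2).mean(axis=1)
recovered = (energy > 0.25).astype(int)
print("sent:     ", bits)
print("recovered:", recovered.tolist())
```

Phase modulation, discussed next, instead leaves the carrier on and shifts where its crests fall.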

 Because the carrier wave is coherent—for laser light this means a predictable set of crests and troughs along the wave—a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, the waves in Migdall’s experiment are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, two-fourths of a cycle, and three-fourths of a cycle. The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2). The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment. The angle around the graph corresponds to the signal’s phase delay.
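In communications terms the four states form a QPSK-like constellation. The sketch below maps each state onto a point in the complex plane, with the radius set by the square root of the mean photon number; the particular photon number is an illustrative choice, not a value from the experiment.

```python
import numpy as np

# Map the four phase states onto a "constellation" in the complex plane.
# State k is delayed by k quarter-cycles, i.e. its phase is k * pi/2.
mean_photon_number = 4                       # illustrative pulse intensity
amplitude = np.sqrt(mean_photon_number)      # constellation radius ~ sqrt(mean photons)

constellation = {k: amplitude * np.exp(1j * k * np.pi / 2) for k in range(4)}

for k, point in constellation.items():
    delay = f"{k}/4 of a cycle" if k else "no delay"
    print(f"state {k + 1}: {delay:14s} -> {point.real:+.2f} {point.imag:+.2f}i")
```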

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse), goes up. This principle is illustrated by the figure to the right, where the clouds are farther apart than in the left panel. This means there is less chance of mistaken readings. More intense beams require more power, but the extra intensity reduces the chance that the uncertainty clouds overlap.

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1. There the waves are combined with a reference pulse. If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing. This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, the phase of the reference light provides that answer, … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires—an eventuality called a “dark count.”  Still another possible glitch: because of optics imperfections even with a correct reference–phase setting, the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better. In other words, the probability of being wrong decreases.
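A minimal simulation sketch of this kind of adaptive strategy is given below, assuming an idealised displacement-and-photon-counting receiver: the pulse is measured in segments, each segment is interfered with a reference set to cancel the current best guess, and a Bayesian posterior over the four states is updated from the (Poisson-distributed) photon counts. The segment count, detector efficiency and photon numbers are illustrative assumptions, not the parameters of the JQI experiment.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
PHASES = np.exp(1j * np.pi / 2 * np.arange(4))    # the four phase states on the unit circle

def poisson_pmf(k, mean):
    return np.exp(-mean) * mean ** k / factorial(k)

def adaptive_receiver(true_state, mean_photons, segments=10, efficiency=0.85):
    """Guess which of four phase states an incoming coherent pulse is in, by repeatedly
    nulling the current best guess with a reference pulse and counting leftover photons."""
    alpha = np.sqrt(mean_photons / segments)      # amplitude of each measured segment
    posterior = np.full(4, 0.25)                  # flat prior over the four states
    for _ in range(segments):
        guess = int(np.argmax(posterior))         # hypothesis to null this round
        # Expected detector counts for each possible true state, given we null `guess`.
        means = efficiency * np.abs(alpha * (PHASES - PHASES[guess])) ** 2
        counts = int(rng.poisson(means[true_state]))   # what the detector actually registers
        likelihood = np.array([poisson_pmf(counts, m) for m in means])
        posterior = posterior * likelihood
        posterior /= posterior.sum()
    return int(np.argmax(posterior))

# Estimate an error rate over many random pulses (toy numbers, not the published values).
trials, mean_photons = 2000, 4
sent = rng.integers(0, 4, size=trials)
errors = sum(adaptive_receiver(s, mean_photons) != s for s in sent)
print(f"error rate at mean photon number {mean_photons}: {errors / trials:.3f}")
```

Real receivers must also contend with detector inefficiency, dark counts and imperfect interference, as described above; this sketch only includes the first of those.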

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can measure the actual error rates. Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons. (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.” Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum. The difference here is that single-photon detection and an adaptive measurement strategy are used. A stable reference pulse is required; in the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

See more at: http://jqi.umd.edu/news/best-quantum-receiver

- Source: JQI 


A Colourful Gathering of Middle-aged Stars

The MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile has captured a richly colourful view of the bright star cluster NGC 3532. Some of the stars still shine with a hot bluish colour, but many of the more massive ones have become red giants and glow with a rich orange hue.

NGC 3532 is a bright open cluster located some 1300 light-years away in the constellation of Carina (The Keel of the ship Argo). It is informally known as the Wishing Well Cluster, as it resembles scattered silver coins which have been dropped into a well. It is also referred to as the Football Cluster, although how appropriate this is depends on which side of the Atlantic you live. It acquired the name because of its oval shape, which citizens of rugby-playing nations might see as resembling a rugby ball.

This very bright star cluster is easily seen with the naked eye from the southern hemisphere. It was discovered by French astronomer Nicolas Louis de Lacaille whilst observing from South Africa in 1752 and was catalogued three years later in 1755. It is one of the most spectacular open star clusters in the whole sky.

The MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile captured this richly colourful view of the bright star cluster NGC 3532. Some of the stars still shine with a hot bluish colour, but many of the more massive ones have become red giants and glow with a rich orange hue. Credit: ESO/G. Beccari

NGC 3532 covers an area of the sky that is almost twice the size of the full Moon. It was described as a binary-rich cluster by John Herschel who observed “several elegant double stars” here during his stay in southern Africa in the 1830s. Of additional, much more recent, historical relevance, NGC 3532 was the first target to be observed by the NASA/ESA Hubble Space Telescope, on 20 May 1990.

This grouping of stars is about 300 million years old. This makes it middle-aged by open star cluster standards [1]. The cluster stars that started off with moderate masses are still shining brightly with blue-white colours, but the more massive ones have already exhausted their supplies of hydrogen fuel and have become red giant stars. As a result the cluster appears rich in both blue and orange stars. The most massive stars in the original cluster will have already run through their brief but brilliant lives and exploded as supernovae long ago. There are also numerous less conspicuous fainter stars of lower mass that have longer lives and shine with yellow or red hues. NGC 3532 consists of around 400 stars in total.

The background sky here in a rich part of the Milky Way is very crowded with stars. Some glowing red gas is also apparent, as well as subtle lanes of dust that block the view of more distant stars. These are probably not connected to the cluster itself, which is old enough to have cleared away any material in its surroundings long ago.

This image of NGC 3532 was captured by the Wide Field Imager instrument at ESO’s La Silla Observatory in February 2013.

Notes

[1] Stars with masses many times greater than the Sun have lives of just a few million years; the Sun is expected to live for about ten billion years; and low-mass stars have expected lives of hundreds of billions of years — much greater than the current age of the Universe.

More information

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Links


CERN makes public first data of LHC experiments

CERN [1] today launched its Open Data Portal, where data from real collision events produced by the LHC experiments will for the first time be made openly available to all. It is expected that these data will be of high value for the research community, and also be used for education purposes.

“Launching the CERN Open Data Portal is an important step for our Organization. Data from the LHC programme are among the most precious assets of the LHC experiments, which today we start sharing openly with the world. We hope these open data will support and inspire the global research community, including students and citizen scientists,” said CERN Director General Rolf Heuer.

The principle of openness is enshrined in CERN’s founding Convention, and all LHC publications have been published Open Access, free for all to read and re-use. Widening the scope, the LHC collaborations recently approved Open Data policies and will release collision data over the coming years.

The first high-level and analysable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. This data set is now publicly available on the CERN Open Data Portal. Open source software to read and analyse the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” said CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

The mass difference spectrum: the LHCb result shows strong evidence of the existence of two new particles, the Xi_b’- (first peak) and Xi_b*- (second peak), with the very high confidence of 10 sigma. The black points are the signal sample and the hatched red histogram is a control sample. The blue curve represents a model including the two new particles, fitted to the data. Delta_m is the difference between the mass of the Xi_b0 pi- pair and the sum of the individual masses of the Xi_b0 and pi-. Inset: detail of the Xi_b’- region plotted with a finer binning. Credit: CERN

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations, which have been specifically prepared for educational purposes, such as the international masterclasses in particle physics [2] that benefit over ten thousand high-school students every year. These resources are accompanied by visualisation tools.

“Our own data policy foresees data preservation and its sharing. We have seen that students are fascinated by being able to analyse LHC data, and so we are very happy to take the first steps and make available some selected data for education,” said Silvia Amerio, data preservation coordinator of the LHCb experiment.

“The development of this Open Data Portal represents a first milestone in our mission to serve our users in preserving and sharing their research materials. It will ensure that the data and tools can be accessed and used, now and in the future,” said Tim Smith from CERN’s IT Department.

All data on OpenData.cern.ch are shared under a Creative Commons CC0 [3] public domain dedication; data and software are assigned unique DOI identifiers to make them citable in scientific articles; and software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.

Further information:

Open data portal

Open data policies

CMS Open Data

 

Footnote(s):

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.

2. http://www.physicsmasterclasses.org

3. http://creativecommons.org/publicdomain/zero/1.0/


Stanford scientists seek to map origins of mental illness and develop noninvasive treatment

An interdisciplinary team of scientists has convened to map the origins of mental illnesses in the brain and develop noninvasive technologies to treat the conditions. The collaboration could lead to improved treatments for depression, anxiety and post-traumatic stress disorder.

BY AMY ADAMS


Over the years imaging technologies have revealed a lot about what’s happening in our brains, including which parts are active in people with conditions like depression, anxiety or post-traumatic stress disorder. But here’s the secret Amit Etkin wants the world to know about those tantalizing images: they show the result of a brain state, not what caused it.

This is important because until we know how groups of neurons, called circuits, are causing these conditions – not just which are active later – scientists will never be able to treat them in a targeted way.

“You see things activated in brain images but you can’t tell just by watching what is cause and what is effect,” said Amit Etkin, an assistant professor of psychiatry and behavioral sciences. Etkin is co-leader of a new interdisciplinary initiative to understand what brain circuits underlie mental health conditions and then direct noninvasive treatments to those locations.

“Right now, if a patient with a mental illness goes to see their doctor they would likely be given a medication that goes all over the brain and body,” Etkin said. “While medications can work well, they do so for only a portion of people and often only partially.” Medications don’t specifically act on the brain circuits critically affected in that illness or individual.

The Big Idea: treat roots of mental illness

The new initiative, called NeuroCircuit, has the goal of finding the brain circuits that are responsible for mental health conditions and then developing ways of remotely stimulating those circuits and, the team hopes, potentially treating those conditions.

The initiative is part of the Stanford Neurosciences Institute‘s Big Ideas, which bring together teams of researchers from across disciplines to solve major problems in neuroscience and society. Stephen Baccus, an associate professor of neurobiology who co-leads the initiative with Etkin, said that what makes NeuroCircuit a big idea is the merging of teams trying to map circuits responsible for mental health conditions and teams developing new technologies to remotely access those circuits.

“Many psychiatric disorders, especially disorders of mood, probably involve malfunction within specific brain circuits that regulate emotion and motivation, yet our current pharmaceutical treatments affect circuits all over the brain,” said William Newsome, director of the Stanford Neurosciences Institute. “The ultimate goal of NeuroCircuit is more precise treatments, with minimal side effects, for specific psychiatric disorders.”

“The connection between the people who develop the technology and carry out research with the clinical goal, that’s what’s really come out of the Big Ideas,” Baccus said.

Brain control

Etkin has been working with a technology called transcranial magnetic stimulation, or TMS, to map and remotely stimulate parts of the brain. The device, which looks like a pair of doughnuts on a stick, generates a strong magnetic current that stimulates circuits near the surface of the brain.

TMS is currently used as a way of treating depression and anxiety, but Etkin said the brain regions being targeted are the ones available to TMS, not necessarily the ones most likely to treat a person’s condition. They are also not personalized for the individual.

Pairing TMS with another technology that shows which brain regions are active, Etkin and his team can stimulate one part of the brain with TMS and look for a reaction elsewhere. These studies can eventually help map the relationships between brain circuits and identify the circuits that underlie mental health conditions.

In parallel, the team is working to improve TMS to make it more useful as a therapy. TMS currently only reaches the surface of the brain and is not very focused. The goal is to improve the technology so that it can reach structures deeper in the brain in a more targeted way. “Right now they are hitting the only accessible target,” he said. “The parts we really want to hit for depression, anxiety or PTSD are likely deeper in the brain.”

Technology of the future

In parallel with the TMS work, Baccus and a team of engineers, radiologists and physiologists have been developing a way of using ultrasound to stimulate the brain. Ultrasound is widely used to image the body, most famously for producing images of developing babies in the womb. But in recent years scientists have learned that at the right frequency and focus, ultrasound can also stimulate nerves to fire.

In his lab, Baccus has been using ultrasound to stimulate nerve cells of the retina – the light-sensing structure at the back of the eye – as part of an effort to develop a prosthetic retina. He is also teaming up with colleagues to understand how ultrasound might be triggering that stimulation. It appears to compress the nerve cells in a way that could lead to activation, but the connection is far from clear.

Other members of the team are modifying existing ultrasound technology to direct it deep within the brain at a frequency that can stimulate nerves without harming them. If the team is successful, ultrasound could be a more targeted and focused tool than TMS for remotely stimulating circuits that underlie mental health conditions.

The group has been working together for about five years, and in 2012 got funding from Bio-X NeuroVentures, which eventually gave rise to the Stanford Neurosciences Institute, to pursue this technology. Baccus said that before merging with Etkin’s team they had been focusing on the technology without specific brain diseases in mind. “This merger really gives a target and a focus to the technology,” he said.

Etkin and Baccus said that if they are successful, they hope to have both a better understanding of how the brain functions and new tools for treating disabling mental health conditions.

Source: Stanford News


Groundwater composition as potential precursor to earthquakes

By Meres J. Weche


 

The world experiences over 13,000 earthquakes per year reaching a Richter magnitude of 4.0 or greater. But what if there was a way to predict these oft-deadly earthquakes and, through a reliable process, mitigate loss of life and damage to vital urban infrastructures?

Earthquake prediction is the “holy grail” of geophysics, says KAUST’s Dr. Sigurjón Jónsson, Associate Professor of Earth Science and Engineering and Principal Investigator of the Crustal Deformation and InSAR Group. But after some initial optimism among scientists in the 1970s about the reality of predicting earthquakes, ushered in by the successful prediction, within hours, of a major earthquake in China in 1975, several failed predictions have since moved the pendulum towards skepticism from the 1990s onwards.

This map of Turkey shows the artists’ interpretation of the North Anatolian Fault (blue line) and the possible site of an earthquake (white lines) that could strike beneath the Sea of Marmara. Image: NASA, and Christine Daniloff and Jose-Luis Olivares/MIT

In a study recently published in Nature Geoscience by a group of Icelandic and Swedish researchers, including Prof. Sigurjón Jónsson, an interesting correlation was established between two earthquakes greater than magnitude 5 in North Iceland, in 2012 and 2013, and the observed changing chemical composition of area groundwater prior to these tectonic events. The changes included variations in dissolved element concentrations and fluctuations in the proportion of stable isotopes of oxygen and hydrogen.

Can We Really Predict Earthquakes?

The basic common denominator guiding scientists and general observers investigating the predictability of earthquakes is the detection of these noticeable changes before seismic events. Some of these observable precursors are changes in groundwater level, radon gas sometimes coming out from the ground, smaller quakes called foreshocks, and even strange behavior by some animals before large earthquakes.

There are essentially three prevailing schools of thought in earthquake prediction among scientists. There’s a first group of scientists who believe that earthquake prediction is achievable but we simply don’t yet know how to do it reliably. They believe that we may, at some point in the future, be able to give short-term predictions.

Then there’s another class of scientists who believe that we will never be able to predict earthquakes. Their philosophy is that the exact start of earthquakes is simply randomly occurring and that the best thing we can do is to retrofit our houses and make probability forecasts — but no short-term warnings.

The last group, which currently represents a minority of scientists who are not often taken seriously, believes that earthquakes are indeed predictable and that we have the tools to do it.

Following the wave of optimism in the ’70s and ’80s, the interest and confidence of scientists in predicting earthquakes have generally subsided, along with the funding. Scientists now tend to focus mainly on understanding the physics behind earthquakes. As Prof. Jónsson summarizes:

“From geology and from earthquake occurrence today we can more or less see where in the world we have large earthquakes and where we have areas which are relatively safe. Although we cannot make short-term predictions we can make what we call forecasts. We can give probabilities. But short-term predictions are not achievable and may never be. We will see.”

The Message from the Earth’s Cracking Crust

Iceland was an ideal location to conduct the collaborative study undertaken by the scientists from Akureyri University, the University of Iceland, Landsvirkjun (the National Power Company of Iceland), the University of Stockholm, the University of Gothenburg and Karolinska Institutet in Stockholm, and KAUST.

“Iceland is a good testing ground because, geologically speaking, it’s very active. It has erupting volcanoes and it has large earthquakes also happening relatively often compared to many other places. And these areas that are active are relatively accessible,” said Prof. Jónsson.

The team of researchers monitored the chemistry, temperature and pressure in a few water wells in north Iceland for a period of five years more or less continuously. “They have been doing this to form an understanding of the variability of these chemical compounds in the wells; and then possibly associate significant changes to tectonic or major events,” he adds.

Through the five-year data collection period, which began in 2008, they were able to detect perceptible changes in the aquifer system as much as four to six months prior to the two recorded earthquakes: one of magnitude 5.6 in October 2012 and a second of magnitude 5.5 in April 2013. Their main observation was that the proportion of young local precipitation water in the geothermal water increased relative to water that fell as rain thousands of years ago (aquifer systems are typically a mix of these two). At the same time, alterations were evident in dissolved chemicals like sodium, calcium and silicon during that precursor period. Interestingly, the proportion went back to its previous state about three months after the quakes.
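One standard way to express "the proportion of young local precipitation water" is a two-endmember mixing calculation on a stable isotope signature. The sketch below is an illustrative calculation of that kind, with invented δ18O values; it is not the analysis method or the data of the Nature Geoscience study.

```python
# Two-endmember mixing: if well water is a blend of young local precipitation and much
# older groundwater, its isotope value is a weighted average of the two endmembers,
# so the weight (the young-water fraction) can be solved for directly.
# All delta-18O values below are invented for illustration (per mil).

delta_young = -10.0            # hypothetical young local precipitation
delta_old = -13.0              # hypothetical old (ancient recharge) groundwater
sample_baseline = -12.4        # hypothetical well sample long before the earthquake
sample_precursor = -11.8       # hypothetical well sample in the precursor period

def young_fraction(delta_mix, d_young=delta_young, d_old=delta_old):
    """Fraction of the young endmember implied by a measured mixed value."""
    return (delta_mix - d_old) / (d_young - d_old)

print(f"young-water fraction, baseline:         {young_fraction(sample_baseline):.2f}")
print(f"young-water fraction, precursor period: {young_fraction(sample_precursor):.2f}")
```

An increase in this fraction is consistent with fresh rainwater finding its way into the aquifer through newly opened microfractures, the mechanism Prof. Jónsson describes below.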

While the scientists are cautioning that this is not a confirmation that earthquake predictions are now feasible, the observations are promising and worthy of further investigation involving more exhaustive monitoring in additional locations. But, statistically speaking, it would be very difficult to disassociate these changes in the groundwater chemical composition from the two earthquakes.

The reason why a change in the ratio between old and new water in the aquifer system is important is because it points to the development of small fractures from the build-up of stress on the rocks before an earthquake. So the new rainwater seeps through the newly formed cracks, or microfracturing, in the rocky soil. Prof. Sigurjón Jónsson illustrates this as follows:

“It’s similar to when you take a piece of wood and you start to bend it. At some point before it snaps it starts to crack a little; and then poof it snaps. Something similar might be happening in the earth. Meaning that just before an earthquake happens, if you start to have a lot of micro-fracturing you will have water having an easier time to move around in the rocks.”

The team will be presenting their findings at the American Geophysical Union (AGU) meeting in San Francisco in December 2014. “It will be interesting to see the reaction there,” said Prof. Jónsson.

Source: KAUST News


New Horizons Set to Wake Up for Pluto Encounter

NASA’s New Horizons spacecraft comes out of hibernation for the last time on Dec. 6. Between now and then, while the Pluto-bound probe enjoys three more weeks of electronic slumber, work on Earth is well under way to prepare the spacecraft for a six-month encounter with the dwarf planet that begins in January.

“New Horizons is healthy and cruising quietly through deep space – nearly three billion miles from home – but its rest is nearly over,” says Alice Bowman, New Horizons mission operations manager at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md. “It’s time for New Horizons to wake up, get to work, and start making history.”

Since launching in January 2006, New Horizons has spent 1,873 days in hibernation – about two-thirds of its flight time – spread over 18 separate hibernation periods from mid-2007 to late 2014 that ranged from 36 days to 202 days long.

In hibernation mode much of the spacecraft is unpowered; the onboard flight computer monitors system health and broadcasts a weekly beacon-status tone back to Earth. On average, operators woke New Horizons just over twice each year to check out critical systems, calibrate instruments, gather science data, rehearse Pluto-encounter activities and perform course corrections when necessary.

New Horizons pioneered routine cruise-flight hibernation for NASA. Not only has hibernation reduced wear and tear on the spacecraft’s electronics, it lowered operations costs and freed up NASA Deep Space Network tracking and communication resources for other missions.

Ready to Go

Next month’s wake-up call was preprogrammed into New Horizons’ on-board computer in August, commanding it to come out of hibernation at 3 p.m. EST on Dec. 6. About 90 minutes later New Horizons will transmit word to Earth that it’s in “active” mode; those signals, even traveling at light speed, will need four hours and 25 minutes to reach home. Confirmation should reach the mission operations team at APL around 9:30 p.m. EST. At that time New Horizons will be more than 2.9 billion miles from Earth, and just 162 million miles – less than twice the distance between Earth and the sun – from Pluto.
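The quoted signal delay follows directly from the distance and the speed of light. The quick check below assumes 2.96 billion miles as a round figure for "more than 2.9 billion miles".

```python
# One-way light travel time from New Horizons to Earth.
SPEED_OF_LIGHT_KM_S = 299_792.458
MILES_TO_KM = 1.609_344

distance_miles = 2.96e9   # assumed round figure for "more than 2.9 billion miles"
seconds = distance_miles * MILES_TO_KM / SPEED_OF_LIGHT_KM_S

hours = int(seconds // 3600)
minutes = int(seconds % 3600 // 60)
print(f"one-way light time: about {hours} h {minutes} min")   # close to the quoted 4 hours 25 minutes
```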

Time to Wake Up: Artist’s impression of NASA’s New Horizons spacecraft, currently en route to Pluto. Operators at the Johns Hopkins University Applied Physics Laboratory are preparing to “wake” the spacecraft from electronic hibernation on Dec. 6, when the probe will be more than 2.9 billion miles from Earth. (Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute)

After several days of collecting navigation-tracking data, downloading and analyzing the cruise science and spacecraft housekeeping data stored on New Horizons’ digital recorders, the mission team will begin activities that include conducting final tests on the spacecraft’s science instruments and operating systems, and building and testing the computer-command sequences that will guide New Horizons through its flight to and reconnaissance of the Pluto system. Tops on the mission’s science list are characterizing the global geology and topography of Pluto and its large moon Charon, mapping their surface compositions and temperatures, examining Pluto’s atmospheric composition and structure, studying Pluto’s smaller moons and searching for new moons and rings.

New Horizons’ seven-instrument science payload, developed under direction of Southwest Research Institute, includes advanced imaging infrared and ultraviolet spectrometers, a compact multicolor camera, a high-resolution telescopic camera, two powerful particle spectrometers, a space-dust detector (designed and built by students at the University of Colorado) and two radio science experiments. The entire spacecraft, drawing electricity from a single radioisotope thermoelectric generator, operates on less power than a pair of 100-watt light bulbs.

Distant observations of the Pluto system begin Jan. 15 and will continue until late July 2015; closest approach to Pluto is July 14.

“We’ve worked years to prepare for this moment,” says Mark Holdridge, New Horizons encounter mission manager at APL. “New Horizons might have spent most of its cruise time across nearly three billion miles of space sleeping, but our team has done anything but, conducting a flawless flight past Jupiter just a year after launch, putting the spacecraft through annual workouts, plotting out each step of the Pluto flyby and even practicing the entire Pluto encounter on the spacecraft. We are ready to go.”

“The final hibernation wake up Dec. 6 signifies the end of an historic cruise across the entirety of our planetary system,” added New Horizons Principal Investigator Alan Stern, of the Southwest Research Institute. “We are almost on Pluto’s doorstep!”

The Johns Hopkins Applied Physics Laboratory manages the New Horizons mission for NASA’s Science Mission Directorate. Alan Stern, of the Southwest Research Institute (SwRI) is the principal investigator and leads the mission; SwRI leads the science team, payload operations, and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Ala. APL designed, built and operates the New Horizons spacecraft.

Source: JHUAPL