Tag Archives: light

Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


 

CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
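To make that contrast concrete, here is a small illustrative sketch (not from the paper, and with arbitrary step units): an incoherent, purely random hop spreads an excitation only as the square root of the number of hops, while an idealized coherent, wave-like process spreads it linearly, which is why the coherent route can reach a reaction center before the energy dissipates.

```python
# Illustrative toy, not from the Nature Materials paper: compare how far an
# excitation spreads after t hops when transport is an incoherent random walk
# (diffusive, RMS distance ~ sqrt(t)) versus an idealized coherent, wave-like
# walk (ballistic, RMS distance ~ t).  Step size is an arbitrary unit.
import math

def diffusive_spread(hops, step=1.0):
    """RMS displacement of a classical random walk after `hops` steps."""
    return step * math.sqrt(hops)

def ballistic_spread(hops, step=1.0):
    """RMS displacement of an idealized coherent (wave-like) walk."""
    return step * hops

for hops in (10, 100, 1000):
    print(f"hops={hops:5d}  random walk ~ {diffusive_spread(hops):7.1f}  "
          f"coherent ~ {ballistic_spread(hops):7.1f}")
```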

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

Physicists solve quantum tunneling mystery

An international team of scientists studying ultrafast physics has solved a mystery of quantum mechanics, and found that quantum tunneling is an instantaneous process.

The new theory could lead to faster and smaller electronic components, for which quantum tunneling is a significant factor. It will also lead to a better understanding of diverse areas such as electron microscopy, nuclear fusion and DNA mutations.

“Timescales this short have never been explored before. It’s an entirely new world,” said one of the international team, Professor Anatoli Kheifets, from The Australian National University (ANU).

“We have modelled the most delicate processes of nature very accurately.”

At very small scales quantum physics shows that particles such as electrons have wave-like properties – their exact position is not well defined. This means they can occasionally sneak through apparently impenetrable barriers, a phenomenon called quantum tunneling.

Quantum tunneling plays a role in a number of phenomena, such as nuclear fusion in the sun, scanning tunneling microscopy, and flash memory for computers. However, the leakage of particles also limits the miniaturisation of electronic components.

Professor Kheifets and Dr. Igor Ivanov, from the ANU Research School of Physics and Engineering, are members of a team which studied ultrafast experiments at the attosecond scale (10⁻¹⁸ seconds), a field that has developed in the last 15 years.

Until their work, a number of attosecond phenomena could not be adequately explained, such as the time delay when a photon ionised an atom.

“At that timescale the time an electron takes to quantum tunnel out of an atom was thought to be significant. But the mathematics says the time during tunneling is imaginary – a complex number – which we realised meant it must be an instantaneous process,” said Professor Kheifets.

“A very interesting paradox arises, because electron velocity during tunneling may become greater than the speed of light. However, this does not contradict the special theory of relativity, as the tunneling velocity is also imaginary,” said Dr Ivanov, who recently took up a position at the Center for Relativistic Laser Science in Korea.

The team’s calculations, which were made using the Raijin supercomputer, revealed that the delay in photoionisation originates not from quantum tunneling but from the electric field of the nucleus attracting the escaping electron.

The results give an accurate calibration for future attosecond-scale research, said Professor Kheifets.

“It’s a good reference point for future experiments, such as studying proteins unfolding, or speeding up electrons in microchips,” he said.

The research is published in Nature Physics.

Source: ANU

New kind of “tandem” solar cell developed: MIT Research

Researchers combine two types of photovoltaic material to make a cell that harnesses more sunlight.

By David Chandler


 

CAMBRIDGE, Mass.–Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight-absorbing material in order to harvest a broader range of the sun’s energy. The development could lead to photovoltaic cells that are more efficient than those currently used in solar-power installations, the researchers say.

The new cell uses a layer of silicon — which forms the basis for most of today’s solar panels — but adds a semi-transparent layer of a material called perovskite, which can absorb higher-energy particles of light. Unlike an earlier “tandem” solar cell reported by members of the same team earlier this year — in which the two layers were physically stacked, but each had its own separate electrical connections — the new version has both layers connected together as a single device that needs only one control circuit.

The new findings are reported in the journal Applied Physics Letters by MIT graduate student Jonathan Mailoa; associate professor of mechanical engineering Tonio Buonassisi; Colin Bailie and Michael McGehee at Stanford; and four others.

“Different layers absorb different portions of the sunlight,” Mailoa explains. In the earlier tandem solar cell, the two layers of photovoltaic material could be operated independently of each other and required their own wiring and control circuits, allowing each cell to be tuned independently for optimal performance.

By contrast, the new combined version should be much simpler to make and install, Mailoa says. “It has advantages in terms of simplicity, because it looks and operates just like a single silicon cell,” he says, with only a single electrical control circuit needed.

One tradeoff is that the current produced is limited by the capacity of the lesser of the two layers. Electrical current, Buonassisi explains, can be thought of as analogous to the volume of water passing through a pipe, which is limited by the diameter of the pipe: If you connect two lengths of pipe of different diameters, one after the other, “the amount of water is limited by the narrowest pipe,” he says. Combining two solar cell layers in series has the same limiting effect on current.
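A minimal sketch of that constraint follows (the function name and the example current densities are illustrative, not values from the paper): in a two-terminal tandem the same current must pass through both sub-cells, so the stack delivers only what the weaker layer can supply.

```python
# Minimal sketch of the series-connection constraint described above.
# The example current densities are illustrative, not measurements from the paper.
def tandem_current(j_top_mA_cm2, j_bottom_mA_cm2):
    """Current density (mA/cm^2) of two sub-cells wired in series:
    limited by the smaller of the two, like water through the narrower pipe."""
    return min(j_top_mA_cm2, j_bottom_mA_cm2)

# If the perovskite top cell could supply 17 mA/cm^2 but the filtered silicon
# bottom cell only 14 mA/cm^2, the tandem delivers 14 mA/cm^2 -- which is why
# the team tries to match the two photocurrents as closely as possible.
print(tandem_current(17.0, 14.0))  # -> 14.0
```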

To address that limitation, the team aims to match the current output of the two layers as precisely as possible. In this proof-of-concept solar cell, this means the total power output is about the same as that of conventional solar cells; the team is now working to optimize that output.

Perovskites have been studied for potential electronic uses including solar cells, but this is the first time they have been successfully paired with silicon cells in this configuration, a feat that posed numerous technical challenges. Now the team is focusing on increasing the power efficiency — the percentage of sunlight’s energy that gets converted to electricity — that is possible from the combined cell. In this initial version, the efficiency is 13.7 percent, but the researchers say they have identified low-cost ways of improving this to about 30 percent — a substantial improvement over today’s commercial silicon-based solar cells — and they say this technology could ultimately achieve a power efficiency of more than 35 percent.

They will also explore how to easily manufacture the new type of device, but Buonassisi says that should be relatively straightforward, since the materials lend themselves to being made through methods very similar to conventional silicon-cell manufacturing.

One hurdle is making the material durable enough to be commercially viable: The perovskite material degrades quickly in open air, so it either needs to be modified to improve its inherent durability or encapsulated to prevent exposure to air — without adding significantly to manufacturing costs and without degrading performance.

This exact formulation may not turn out to be the most advantageous for better solar cells, Buonassisi says, but is one of several pathways worth exploring. “Our job at this point is to provide options to the world,” he says. “The market will select among them.”

The research team also included Eric Johlin PhD ’14 and postdoc Austin Akey at MIT, and Eric Hoke and William Nguyen of Stanford. It was supported by the Bay Area Photovoltaic Consortium and the U.S. Department of Energy.

Source: MIT News Office


Tsunami on demand: the power to harness catastrophic events

A new study published in Nature Physics features a nano-optical chip that makes it possible to generate and control nanoscale rogue waves. The innovative chip was developed by an international team of physicists, led by Andrea Fratalocchi from KAUST (Saudi Arabia), and is expected to have significant applications for energy research and environmental safety.

Can you imagine how much energy is in a tsunami wave, or in a tornado? Energy is all around us, but mainly contained in a quiet state. But there are moments in time when large amounts of energy build up spontaneously and create rare phenomena on a potentially disastrous scale. How these events occur, in many cases, is still a mystery.

To reveal the natural mechanisms behind such high-energy phenomena, Andrea Fratalocchi, assistant professor in the Computer, Electrical and Mathematical Science and Engineering Division of King Abdullah University of Science and Technology (KAUST), led a team of researchers from Saudi Arabia and three European universities and research centers to understand the dynamics of such destructive events and control their formation in new optical chips, which can open various technological applications. The results and implications of this study are published in the journal Nature Physics.

“I have always been fascinated by the unpredictability of nature,” Fratalocchi said. “And I believe that understanding this complexity is the next frontier that will open cutting edge pathways in science and offer novel applications in a variety of areas.”

Fratalocchi’s team began their research by developing new theoretical ideas to explain the formation of rare energetic natural events such as rogue waves — large surface waves that develop spontaneously in deep water and represent a potential risk for vessels and open-ocean oil platforms.

“Our idea was something never tested before,” Fratalocchi continued. “We wanted to demonstrate that small perturbations of a chaotic sea of interacting waves could, contrary to intuition, control the formation of rare events of exceptional amplitude.”

Fully experimental image of a nanoscaled and ultrafast optical rogue wave retrieved by Near-field Scanning Optical Microscope (NSOM). The flow lines visible in the image represent the direction of light energy.
Credit: KAUST

A planar photonic crystal chip, fabricated at the University of St. Andrews and tested at the FOM institute AMOLF in the Amsterdam Science Park, was used to generate ultrafast (163 fs long) and subwavelength (203 nm wide) nanoscale rogue waves, proving that Fratalocchi’s theory was correct. The newly developed photonic chip offered an exceptional level of controllability over these rare events.

Thomas F. Krauss, head of the Photonics Group and Nanocentre Cleanroom at the University of York, UK, was involved in the development of the experiment and the analysis of the data. He shared, “By realizing a sea of interacting waves on a photonic chip, we were able to study the formation of rare high-energy events in a controlled environment. We noted that these events only happened when some sets of waves were missing, which is one of the key insights of our study.”

Kobus Kuipers, head of nanophotonics at FOM institute AMOLF, NL, who was involved in the experimental visualization of the rogue waves, was fascinated by their dynamics: “We have developed a microscope that allows us to visualize optical behavior at the nanoscale. Unlike conventional wave behavior, it was remarkable to see the rogue waves suddenly appear, seemingly out of nowhere, and then disappear again…as if they had never been there.”

Andrea Di Falco, leader of the Synthetic Optics group at the University of St. Andrews said, “The advantage of using light confined in an optical chip is that we can control very carefully how the energy in a chaotic system is dissipated, giving rise to these rare and extreme events. It is as if we were able to produce a determined amount of waves of unusual height in a small lake, just by accurately landscaping its coasts and controlling the size and number of its emissaries.”

The outcomes of this project offer leading edge technological applications in energy research, high speed communication and in disaster preparedness.

Fratalocchi and the team believe their research represents a major milestone for KAUST and for the field. “This discovery can change once and for all the way we look at catastrophic events,” concludes Fratalocchi, “opening new perspectives in preventing their destructive appearance on large scales, or using their unique power for ideating new applications at the nanoscale.”

The title of the Nature Physics paper is “Triggering extreme events at the nanoscale in photonic seas.” The paper is accessible on the Nature Physics website: http://dx.doi.org/10.1038/nphys3263

Source: KAUST News

The first ever photograph of light as both a particle and wave

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.

Light behaves both as a particle and as a wave; scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.
Credit: EPFL

Quantum mechanics tells us that light can behave simultaneously as a particle or a wave. However, there has never been an experiment able to capture both natures of light at the same time; the closest we have come is seeing either wave or particle, but always at different times. Taking a radically different experimental approach, EPFL scientists have now been able to take the first ever snapshot of light behaving both as a wave and as a particle. The breakthrough work is published in Nature Communications.

When UV light hits a metal surface, it causes an emission of electrons. Albert Einstein explained this “photoelectric” effect by proposing that light – thought to only be a wave – is also a stream of particles. Even though a variety of experiments have successfully observed both the particle- and wave-like behaviors of light, they have never been able to observe both at the same time. 


A new approach to a classic effect

A research team led by Fabrizio Carbone at EPFL has now carried out an experiment with a clever twist: using electrons to image light. The researchers have captured, for the first time ever, a single snapshot of light behaving simultaneously as both a wave and a stream of particles.

The experiment is set up like this: A pulse of laser light is fired at a tiny metallic nanowire. The laser adds energy to the charged particles in the nanowire, causing them to vibrate. Light travels along this tiny wire in two possible directions, like cars on a highway. When waves traveling in opposite directions meet each other they form a new wave that looks like it is standing in place. Here, this standing wave becomes the source of light for the experiment, radiating around the nanowire.
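The “wave that looks like it is standing in place” is the textbook superposition of two counter-propagating waves of equal amplitude; written out (a standard identity, not a formula from the paper):

```latex
% Two waves of amplitude A travelling in opposite directions along the wire
% add up to a pattern whose nodes stay fixed in space while its amplitude
% oscillates in time -- a standing wave.
\[
  E(x,t) \;=\; A\sin(kx - \omega t) + A\sin(kx + \omega t)
         \;=\; 2A\,\sin(kx)\,\cos(\omega t)
\]
```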

This is where the experiment’s trick comes in: The scientists shot a stream of electrons close to the nanowire, using them to image the standing wave of light. As the electrons interacted with the confined light on the nanowire, they either sped up or slowed down. Using the ultrafast microscope to image the position where this change in speed occurred, Carbone’s team could now visualize the standing wave, which acts as a fingerprint of the wave-nature of light.

While this phenomenon shows the wave-like nature of light, it simultaneously demonstrates its particle aspect as well. As the electrons pass close to the standing wave of light, they “hit” the light’s particles, the photons. As mentioned above, this affects their speed, making them move faster or slower. This change in speed appears as an exchange of energy “packets” (quanta) between electrons and photons. The very occurrence of these energy packets shows that the light on the nanowire behaves as a particle.

“This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly,” says Fabrizio Carbone. In addition, the importance of this pioneering work can extend beyond fundamental science and to future technologies. As Carbone explains: “Being able to image and control quantum phenomena at the nanometer scale like this opens up a new route towards quantum computing.”

This work represents a collaboration between the Laboratory for Ultrafast Microscopy and Electron Scattering of EPFL, the Department of Physics of Trinity College (US) and the Physical and Life Sciences Directorate of the Lawrence Livermore National Laboratory. The imaging was carried out at EPFL’s ultrafast energy-filtered transmission electron microscope – one of only two in the world.

Reference

Piazza L, Lummen TTA, Quiñonez E, Murooka Y, Reed BW, Barwick B, Carbone F. Simultaneous observation of the quantization and the interference pattern of a plasmonic near-field. Nature Communications, 2 March 2015. DOI: 10.1038/ncomms7407

Source: EPFL


Planck Reveals First Stars Formed Much Later Than Previously Thought

New maps from ESA’s Planck satellite uncover the ‘polarised’ light from the early Universe across the entire sky, revealing that the first stars formed much later than previously thought.

The history of our Universe is a 13.8 billion-year tale that scientists endeavour to read by studying the planets, asteroids, comets and other objects in our Solar System, and gathering light emitted by distant stars, galaxies and the matter spread between them.

Polarisation of the Cosmic Microwave Background
Credit: ESA/PLANCK

A major source of information used to piece together this story is the Cosmic Microwave Background, or CMB, the fossil light resulting from a time when the Universe was hot and dense, only 380 000 years after the Big Bang.

Thanks to the expansion of the Universe, we see this light today covering the whole sky at microwave wavelengths.

Between 2009 and 2013, Planck surveyed the sky to study this ancient light in unprecedented detail. Tiny differences in the background’s temperature trace regions of slightly different density in the early cosmos, representing the seeds of all future structure, the stars and galaxies of today.

Scientists from the Planck collaboration have published the results from the analysis of these data in a large number of scientific papers over the past two years, confirming the standard cosmological picture of our Universe with ever greater accuracy.

History of the Universe
Credit: ESA

“But there is more: the CMB carries additional clues about our cosmic history that are encoded in its ‘polarisation’,” explains Jan Tauber, ESA’s Planck project scientist.

“Planck has measured this signal for the first time at high resolution over the entire sky, producing the unique maps released today.”

Light is polarised when it vibrates in a preferred direction, something that may arise as a result of photons – the particles of light – bouncing off other particles. This is exactly what happened when the CMB originated in the early Universe.

Initially, photons were trapped in a hot, dense soup of particles that, by the time the Universe was a few seconds old, consisted mainly of electrons, protons and neutrinos. Owing to the high density, electrons and photons collided with one another so frequently that light could not travel any significant distance before bumping into another electron, making the early Universe extremely ‘foggy’.

Slowly but surely, as the cosmos expanded and cooled, photons and the other particles grew farther apart, and collisions became less frequent.

This had two consequences: electrons and protons could finally combine and form neutral atoms without them being torn apart again by an incoming photon, and photons had enough room to travel, being no longer trapped in the cosmic fog.

Once freed from the fog, the light was set on its cosmic journey that would take it all the way to the present day, where telescopes like Planck detect it as the CMB. But the light also retains a memory of its last encounter with the electrons, captured in its polarisation.

“The polarisation of the CMB also shows minuscule fluctuations from one place to another across the sky: like the temperature fluctuations, these reflect the state of the cosmos at the time when light and matter parted company,” says François Bouchet of the Institut d’Astrophysique de Paris, France.

“This provides a powerful tool to estimate in a new and independent way parameters such as the age of the Universe, its rate of expansion and its essential composition of normal matter, dark matter and dark energy.”

Planck’s polarisation data confirm the details of the standard cosmological picture determined from its measurement of the CMB temperature fluctuations, but add an important new answer to a fundamental question: when were the first stars born?

“After the CMB was released, the Universe was still very different from the one we live in today, and it took a long time until the first stars were able to form,” explains Marco Bersanelli of Università degli Studi di Milano, Italy.

“Planck’s observations of the CMB polarisation now tell us that these ‘Dark Ages’ ended some 550 million years after the Big Bang – more than 100 million years later than previously thought.

“While these 100 million years may seem negligible compared to the Universe’s age of almost 14 billion years, they make a significant difference when it comes to the formation of the first stars.”

The Dark Ages ended as the first stars began to shine. And as their light interacted with gas in the Universe, more and more of the atoms were turned back into their constituent particles: electrons and protons.

This key phase in the history of the cosmos is known as the ‘epoch of reionisation’.

The newly liberated electrons were once again able to collide with the light from the CMB, albeit much less frequently now that the Universe had significantly expanded. Nevertheless, just as they had 380 000 years after the Big Bang, these encounters between electrons and photons left a tell-tale imprint on the polarisation of the CMB.

“From our measurements of the most distant galaxies and quasars, we know that the process of reionisation was complete by the time that the Universe was about 900 million years old,” says George Efstathiou of the University of Cambridge, UK.

“But, at the moment, it is only with the CMB data that we can learn when this process began.”

Planck’s new results are critical, because previous studies of the CMB polarisation seemed to point towards an earlier dawn of the first stars, placing the beginning of reionisation about 450 million years after the Big Bang.

This posed a problem. Very deep images of the sky from the NASA–ESA Hubble Space Telescope have provided a census of the earliest known galaxies in the Universe, which started forming perhaps 300–400 million years after the Big Bang.

However, these would not have been powerful enough to succeed at ending the Dark Ages within 450 million years.

“In that case, we would have needed additional, more exotic sources of energy to explain the history of reionisation,” says Professor Efstathiou.

The new evidence from Planck significantly reduces the problem, indicating that reionisation started later than previously believed, and that the earliest stars and galaxies alone might have been enough to drive it.

This later end of the Dark Ages also implies that it might be easier to detect the very first generation of galaxies with the next generation of observatories, including the James Webb Space Telescope.

But the first stars are definitely not the limit. With the new Planck data released today, scientists are also studying the polarisation of foreground emission from gas and dust in the Milky Way to analyse the structure of the Galactic magnetic field.

The data have also enabled new important insights into the early cosmos and its components, including the intriguing dark matter and the elusive neutrinos, as described in papers also released today.

The Planck data have delved into the even earlier history of the cosmos, all the way to inflation – the brief era of accelerated expansion that the Universe underwent when it was a tiny fraction of a second old. As the ultimate probe of this epoch, astronomers are looking for a signature of gravitational waves triggered by inflation and later imprinted on the polarisation of the CMB.

No direct detection of this signal has yet been achieved, as reported last week. However, when combining the newest all-sky Planck data with those latest results, the limits on the amount of primordial gravitational waves are pushed even further down to achieve the best upper limits yet.

“These are only a few highlights from the scrutiny of Planck’s observations of the CMB polarisation, which is revealing the sky and the Universe in a brand new way,” says Jan Tauber.

“This is an incredibly rich data set and the harvest of discoveries has just begun.”

A series of scientific papers describing the new results was published on 5 February.

The new results from Planck are based on the complete surveys of the entire sky, performed between 2009 and 2013. New data, including temperature maps of the CMB at all nine frequencies observed by Planck and polarisation maps at four frequencies (30, 44, 70 and 353 GHz), are also released today.

The three principal scientific leaders of the Planck mission, Nazzareno Mandolesi, Jean-Loup Puget and Jan Tauber, were recently awarded the 2015 EPS Edison Volta Prize for “directing the development of the Planck payload and the analysis of its data, resulting in the refinement of our knowledge of the temperature fluctuations in the Cosmic Microwave Background as a vastly improved tool for doing precision cosmology at unprecedented levels of accuracy, and consolidating our understanding of the very early universe.”

More about Planck

Launched in 2009, Planck was designed to map the sky in nine frequencies using two state-of-the-art instruments: the Low Frequency Instrument (LFI), which includes three frequency bands in the range 30–70 GHz, and the High Frequency Instrument (HFI), which includes six frequency bands in the range 100–857 GHz.

HFI completed its survey in January 2012, while LFI continued to make science observations until 3 October 2013, before being switched off on 19 October 2013. Seven of Planck’s nine frequency channels were equipped with polarisation-sensitive detectors.

The Planck Scientific Collaboration consists of all the scientists who have contributed to the development of the mission, and who participate in the scientific exploitation of the data during the proprietary period.

These scientists are members of one or more of four consortia: the LFI Consortium, the HFI Consortium, the DK-Planck Consortium, and ESA’s Planck Science Office. The two European-led Planck Data Processing Centres are located in Paris, France and Trieste, Italy.

The LFI consortium is led by N. Mandolesi, Università degli Studi di Ferrara, Italy (deputy PI: M. Bersanelli, Università degli Studi di Milano, Italy), and was responsible for the development and operation of LFI. The HFI consortium is led by J.L. Puget, Institut d’Astrophysique Spatiale in Orsay (CNRS/Université Paris-Sud), France (deputy PI: F. Bouchet, Institut d’Astrophysique de Paris (CNRS/UPMC), France), and was responsible for the development and operation of HFI.

Source: ESA


Where Did All the Stars Go?

Dark cloud obscures hundreds of background stars


Some of the stars appear to be missing in this intriguing new ESO image. But the black gap in this glitteringly beautiful starfield is not really a gap, but rather a region of space clogged with gas and dust. This dark cloud is called LDN 483 — for Lynds Dark Nebula 483. Such clouds are the birthplaces of future stars. The Wide Field Imager, an instrument mounted on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile, captured this image of LDN 483 and its surroundings.

The Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile snapped this image of the dark nebula LDN 483. The object is a region of space clogged with gas and dust. These materials are dense enough to effectively eclipse the light of background stars. LDN 483 is located about 700 light-years away in the constellation of Serpens (The Serpent). Credit: ESO

LDN 483 [1] is located about 700 light-years away in the constellation of Serpens (The Serpent). The cloud contains enough dusty material to completely block the visible light from background stars. Particularly dense molecular clouds, like LDN 483, qualify as dark nebulae because of this obscuring property. The starless nature of LDN 483 and its ilk would suggest that they are sites where stars cannot take root and grow. But in fact the opposite is true: dark nebulae offer the most fertile environments for eventual star formation.

Astronomers studying star formation in LDN 483 have discovered some of the youngest observable kinds of baby stars buried in LDN 483’s shrouded interior. These gestating stars can be thought of as still being in the womb, having not yet been born as complete, albeit immature, stars.

In this first stage of stellar development, the star-to-be is just a ball of gas and dust contracting under the force of gravity within the surrounding molecular cloud. The protostar is still quite cool — about –250 degrees Celsius — and shines only in long-wavelength submillimetre light [2]. Yet temperature and pressure are beginning to increase in the fledgling star’s core.

This earliest period of star growth lasts mere thousands of years, an astonishingly short amount of time in astronomical terms, given that stars typically live for millions or billions of years. In the following stages, over the course of several million years, the protostar will grow warmer and denser. Its emission will increase in energy along the way, graduating from mainly cold, far-infrared light to near-infrared and finally to visible light. The once-dim protostar will have then become a fully luminous star.

As more and more stars emerge from the inky depths of LDN 483, the dark nebula will disperse further and lose its opacity. The missing background stars that are currently hidden will then come into view — but only after the passage of millions of years, and they will be outshone by the bright young stars born in the cloud [3].

Notes
[1] The Lynds Dark Nebula catalogue was compiled by the American astronomer Beverly Turner Lynds, and published in 1962. These dark nebulae were found from visual inspection of the Palomar Sky Survey photographic plates.

[2] The Atacama Large Millimeter/submillimeter Array (ALMA), operated in part by ESO, observes in submillimetre and millimetre light and is ideal for the study of such very young stars in molecular clouds.

[3] Such a young open star cluster can be seen here, and a more mature one here.
Source: ESO


Study Unveils New Half-Light Half-Matter Quantum Particles

Prospects of developing computing and communication technologies based on quantum properties of light and matter may have taken a major step forward thanks to research by City College of New York physicists led by Dr. Vinod Menon.

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged similarly to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.

“Besides being a fundamental breakthrough, this opens up the possibility of making devices which take the benefits of both light and matter,” said Professor Menon.  

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged similarly to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.
Credit: CCNY

For example, one can start envisioning logic gates and signal processors that take on the best of light and matter. The discovery is also expected to contribute to developing practical platforms for quantum computing.

Dr. Dirk Englund, a professor at MIT whose research focuses on quantum technologies based on semiconductor and optical systems, hailed the City College study.

“What is so remarkable and exciting in the work by Vinod and his team is how readily this strong coupling regime could actually be achieved. They have shown convincingly that by coupling a rather standard dielectric cavity to exciton–polaritons in a monolayer of molybdenum disulphide, they could actually reach this strong coupling regime with a very large binding strength,” he said. 

Professor Menon’s research team included City College PhD students Xiaoze Liu, Tal Galfsky and Zheng Sun, and scientists from Yale University, National Tsing Hua University (Taiwan) and Ecole Polytechnique de Montréal (Canada).

The study appears in the January issue of the journal “Nature Photonics.” It was funded by the U.S. Army Research Laboratory’s Army Research Office and the National Science Foundation through the Materials Research Science and Engineering Center – Center for Photonic and Multiscale Nanomaterials. 

Source: The City College of New York

Trapping light with a twister

New understanding of how to halt photons could lead to miniature particle accelerators, improved data transmission.

By David L. Chandler


Researchers at MIT who succeeded last year in creating a material that could trap light and stop it in its tracks have now developed a more fundamental understanding of the process. The new work — which could help explain some basic physical mechanisms — reveals that this behavior is connected to a wide range of other seemingly unrelated phenomena.

The findings are reported in a paper in the journal Physical Review Letters, co-authored by MIT physics professor Marin Soljačić; postdocs Bo Zhen, Chia Wei Hsu, and Ling Lu; and Douglas Stone, a professor of applied physics at Yale University.

Light can usually be confined only with mirrors, or with specialized materials such as photonic crystals. Both of these approaches block light beams; last year’s finding demonstrated a new method in which the waves cancel out their own radiation fields. The new work shows that this light-trapping process, which involves twisting the polarization direction of the light, is based on a kind of vortex — the same phenomenon behind everything from tornadoes to water swirling down a drain.

Vortices of bound states in the continuum. The left panel shows five bound states in the continuum in a photonic crystal slab as bright spots. The right panel shows the polarization vector field in the same region as the left panel, revealing five vortices at the locations of the bound states in the continuum. These vortices are characterized with topological charges +1 or -1.
Courtesy of the researchers
Source: MIT

In addition to revealing the mechanism responsible for trapping the light, the new analysis shows that this trapped state is much more stable than had been thought, making it easier to produce and harder to disturb.

“People think of this [trapped state] as very delicate,” Zhen says, “and almost impossible to realize. But it turns out it can exist in a robust way.”

In most natural light, the direction of polarization — which can be thought of as the direction in which the light waves vibrate — remains fixed. That’s the principle that allows polarizing sunglasses to work: Light reflected from a surface is selectively polarized in one direction; that reflected light can then be blocked by polarizing filters oriented at right angles to it.

But in the case of these light-trapping crystals, light that enters the material becomes polarized in a way that forms a vortex, Zhen says, with the direction of polarization changing depending on the beam’s direction.

Because the polarization is different at every point in this vortex, it produces a singularity — also called a topological defect, Zhen says — at its center, trapping the light at that point.

Hsu says the phenomenon makes it possible to produce something called a vector beam, a special kind of laser beam that could potentially create small-scale particle accelerators. Such devices could use these vector beams to accelerate particles and smash them into each other — perhaps allowing future tabletop devices to carry out the kinds of high-energy experiments that today require miles-wide circular tunnels.

The finding, Soljačić says, could also enable easy implementation of super-resolution imaging (using a method called stimulated emission depletion microscopy) and could allow the sending of far more channels of data through a single optical fiber.

“This work is a great example of how supposedly well-studied physical systems can contain rich and undiscovered phenomena, which can be unearthed if you dig in the right spot,” says Yidong Chong, an assistant professor of physics and applied physics at Nanyang Technological University in Singapore who was not involved in this research.

Chong says it is remarkable that such surprising findings have come from relatively well-studied materials. “It deals with photonic crystal slabs of the sort that have been extensively analyzed, both theoretically and experimentally, since the 1990s,” he says. “The fact that the system is so unexotic, together with the robustness associated with topological phenomena, should give us confidence that these modes will not simply be theoretical curiosities, but can be exploited in technologies such as microlasers.”

The research was partly supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and by the Department of Energy and the National Science Foundation.

Source: MIT News Office


Best Quantum Receiver

RECORD HIGH DATA ACCURACY RATES FOR PHASE-MODULATED TRANSMISSION

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

 Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this not by passively detecting the incoming light pulses, but by splitting the light up and measuring it numerous times.

By passing it through a special crystal, a light wave’s phase—denoting position along the wave’s cycle—can be delayed. A delay of a certain amount can denote a piece of data. In this experiment light pulses can be delayed by a zero amount, or by ¼ of a cycle, or 2/4, or ¾ of a cycle.
Credit: JQI

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals we can lessen the amount of power needed to send signals reliably,” says Migdall. “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations. Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber. The information can be coded in several ways. Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude. Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave. On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

 Because the carrier wave is coherent—for laser light this means a predictable set of crests and troughs along the wave—a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, the waves in Migdall’s experiment are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, two-fourths of a cycle, and three-fourths of a cycle. The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2). The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment. The angle around the graph corresponds to the signal’s phase delay.
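In other words, each of the four states is the same pulse shifted by a quarter-cycle increment, i.e. one of four points spaced 90 degrees apart on a circle whose radius is set by the pulse amplitude. A short sketch of that picture (the mapping below is an illustration consistent with the description, not code from the experiment):

```python
# Sketch of the four-phase encoding described above; the mapping is an
# illustration consistent with the article, not code from the experiment.
import cmath, math

def phase_state(k, mean_photons):
    """Complex amplitude of state k (1..4): radius sqrt(mean_photons),
    phase 0, 1/4, 2/4 or 3/4 of a cycle."""
    phase = (k - 1) * math.pi / 2
    return math.sqrt(mean_photons) * cmath.exp(1j * phase)

for k in range(1, 5):
    amp = phase_state(k, mean_photons=4)
    print(f"state {k}: phase {math.degrees(cmath.phase(amp)):6.1f} deg, "
          f"radius {abs(amp):.2f}")
```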

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

 In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse) goes up.  This principle is illustrated by the figure to the right, where now the clouds are farther apart than in the left panel.  This means there is less chance of mistaken readings.  More intense beams require more power, but this mitigates the chance of overlapping blobs.

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1. There the waves are combined with a reference pulse. If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing. This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, the phase of the reference light provides that answer … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires—an eventuality called a “dark count.”  Still another possible glitch: because of optics imperfections even with a correct reference–phase setting, the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better. In other words, the probability of being wrong decreases.
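A toy simulation of that feedback loop is sketched below. It is an idealized illustration, not the actual JQI implementation: it assumes a perfect detector (no inefficiency, dark counts or imperfect interference, all of which the real experiment must handle), splits the pulse into a fixed number of slices, and after each slice updates the probability of each of the four phase states with Bayes' rule before choosing the next reference setting.

```python
# Toy sketch of the adaptive "20 questions" receiver described above -- an
# idealized illustration, NOT the actual JQI implementation (it ignores detector
# inefficiency, dark counts and imperfect interference).  The pulse is split into
# `slices` pieces; each piece is interfered with a reference chosen to cancel the
# current best guess, the Poisson-distributed photon count is recorded, and
# Bayes' rule updates the probability of each of the four phase states.
import math
import random

PHASES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # the four signal states

def residual_mean_photons(true_phase, guess_phase, photons_per_slice):
    """Mean photon number left in a slice after interfering it with a reference
    that would exactly cancel the guessed state."""
    diff = complex(math.cos(true_phase) - math.cos(guess_phase),
                   math.sin(true_phase) - math.sin(guess_phase))
    return photons_per_slice * abs(diff) ** 2

def poisson_sample(mu):
    """Draw a photon count from a Poisson distribution with mean mu (Knuth's method)."""
    threshold, count, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

def adaptive_receive(sent_state, mean_photons=4.0, slices=10):
    """Return the receiver's decision (0..3) for one transmitted pulse."""
    per_slice = mean_photons / slices
    posterior = [0.25] * 4                      # flat prior over the four states
    for _ in range(slices):
        guess = max(range(4), key=lambda h: posterior[h])
        count = poisson_sample(
            residual_mean_photons(PHASES[sent_state], PHASES[guess], per_slice))
        # Bayes' rule: P(h) *= P(count | state h, reference cancelling `guess`)
        for h in range(4):
            mu = residual_mean_photons(PHASES[h], PHASES[guess], per_slice)
            posterior[h] *= math.exp(-mu) * mu ** count / math.factorial(count)
        total = sum(posterior)
        posterior = [p / total for p in posterior]
    return max(range(4), key=lambda h: posterior[h])

# Rough error-rate estimate for this toy model at a mean photon number of 4.
trials, errors = 2000, 0
for _ in range(trials):
    sent = random.randrange(4)
    errors += adaptive_receive(sent) != sent
print(f"toy error rate at n=4: {errors / trials:.3f}")
```

Increasing the number of slices or the mean photon number drives the toy error rate down, mirroring the qualitative trend the article describes; none of the numbers produced here correspond to the published results.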

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can measure the actual error rates.  Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons.  (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.”  Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum.  The difference here is that single-photon detection and an adaptive measurement strategy are used.  A stable reference pulse is required. In the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

See more at: http://jqi.umd.edu/news/best-quantum-receiver

Source: JQI