Physicists at Yale University have observed a new form of quantum friction that could serve as a basis for robust information storage in quantum computers in the future. The researchers are building upon decades of research, experimentally demonstrating a procedure theorized nearly 30 years ago.
The results appear in the journal Science and are based on work in the lab of Michel Devoret, the F.W. Beinecke Professor of Applied Physics.
Quantum computers, a technology still in development, would rely on the laws of quantum mechanics to solve certain problems exponentially faster than classical computers. They would store information in quantum systems, such as the spin of an electron or the energy levels of an artificial atom. Called “qubits,” these storage units are the quantum equivalent of classical “bits.” But while a bit must be either 0 or 1, a qubit can be in the 0 and 1 states simultaneously. This property is called quantum superposition; it is a powerful resource, but also a very fragile one. Ensuring the integrity of quantum information is a major challenge of the field.
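The 0-and-1 superposition can be made concrete with a minimal numerical sketch (illustrative only, and not the particular encoding used in the experiment): a qubit state is a normalized two-component complex vector, and the squared amplitudes give the probabilities of measuring 0 or 1.

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1> is a normalized 2-component complex vector.
# Equal superposition of 0 and 1 (an illustrative choice):
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = np.array([a, b], dtype=complex)

# Born rule: measurement probabilities are the squared amplitudes.
p0, p1 = np.abs(psi) ** 2
assert np.isclose(p0 + p1, 1.0)  # the state is normalized
print(p0, p1)                    # 0.5 0.5 -- equal chance of 0 and 1
```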
Zaki Leghtas, first author on the paper and a postdoctoral researcher at Yale, offered the following metaphor to explain this new form of quantum friction:
Imagine a hill surrounded by two basins. If you put a ball at the top of the hill, it will roll down the sides and settle in one of the basins. As it rolls, it loses energy due to the friction between the ball and the ground, and it slows down. This is why it stops at the bottom of the basin. But friction also causes the ball to leave a path in its wake. By looking at either side of the hill and seeing where grass is flattened and stones are pushed aside, you can tell whether the ball rolled into the right or left basin.
If you replace the ball with a quantum particle, however, you run into a problem. Quantum particles can exist in many states at the same time, so in theory, the particle could occupy both basins simultaneously. But as the particle is rolling down, the friction between the particle and the hill leaves an impact on the environment, which can be measured. The same friction that stops the particle at the bottom also carves the path. This destroys the superposition and forces the particle to exist in only one basin.
Previously, researchers had been able to take advantage of this friction to trap quantum particles in particular basins. But now, Devoret’s lab demonstrates a new type of friction — one that slows the particle as it rolls, but does not carve a path that tells which side it is choosing. This allows the particle to exist in the left and right basins simultaneously.
Each of these “basin” states is both stable and steady. While the quantum particle might move around within a basin, small perturbations won’t kick it out. Furthermore, any superposition of these two basin states is also stable and steady. This means such superpositions could be used as a basis for storing quantum information.
Technically, this is called a two-dimensional quantum steady-state manifold. Devoret and Leghtas point out that the next step is expanding this two-dimensional manifold to four dimensions — adding two more basins to the landscape. This will allow scientists to redundantly encode quantum information and to do error correction within the manifold. Error correction is one of the key components that must be developed in order to make a practical quantum computer feasible.
Additional authors are Steven Touzard, Ioan Pop, Angela Kou, Brian Vlastakis, Andrei Petrenko, Katrina Sliwa, Anirudh Narla, Shyam Shankar, Michael Hatridge, Matthew Reagor, Luigi Frunzio, Robert Schoelkopf, and Mazyar Mirrahimi of Yale. Mirrahimi also has an appointment at the Institut National de Recherche en Informatique et en Automatique Paris-Rocquencourt.
Solar storm found to produce “ultrarelativistic, killer electrons” in 60 seconds.
By Jennifer Chu
CAMBRIDGE, Mass. – On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.
NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.
Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.
“These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”
The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.
Catching a shockwave in the act
Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.
One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.
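The two figures quoted above are roughly self-consistent, as a quick check shows. The drift-shell radius assumed below (about five Earth radii, typical of the outer radiation belt) is not given in the article, so this is an order-of-magnitude sketch only:

```python
import math

R_E = 6.371e6   # Earth radius, m
r = 5 * R_E     # assumed drift-shell radius (~outer radiation belt)
v = 1.0e6       # quoted drift speed: 1,000 km/s, in m/s

circumference = 2 * math.pi * r
period_min = circumference / v / 60
# ~3 minutes with these assumptions -- the same order as the quoted five
print(f"drift period ~ {period_min:.1f} minutes")
```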
The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.
Dealing a “sledgehammer blow”
Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.
In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.
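To see why 3 to 4 MeV electrons count as ultrarelativistic, one can compute their Lorentz factor from the electron rest energy of 0.511 MeV. This back-of-the-envelope check is not from the paper itself:

```python
import math

m_e_c2 = 0.511          # electron rest energy, MeV
for E_k in (3.0, 4.0):  # kinetic energies quoted in the text, MeV
    gamma = 1 + E_k / m_e_c2          # total energy / rest energy
    beta = math.sqrt(1 - 1 / gamma**2)  # speed as a fraction of c
    print(f"{E_k} MeV: gamma = {gamma:.1f}, v = {beta:.4f} c")
```

At these energies the electrons move at more than 98% of the speed of light, which is what makes shielding satellite electronics against them so difficult.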
Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.
Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.
“This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”
The new SPHERE instrument on ESO’s Very Large Telescope has been used to search for a brown dwarf expected to be orbiting the unusual double star V471 Tauri. SPHERE has given astronomers the best look so far at the surroundings of this intriguing object and they found — nothing. The surprising absence of this confidently predicted brown dwarf means that the conventional explanation for the odd behaviour of V471 Tauri is wrong. This unexpected result is described in the first science paper based on observations from SPHERE.
Some pairs of stars consist of two normal stars with slightly different masses. When the slightly more massive star ages and expands to become a red giant, material is transferred to the other star and ends up surrounding both stars in a huge gaseous envelope. When this cloud disperses, the two stars move closer together and form a very tight pair consisting of one white dwarf and one more normal star.
One such stellar pair is called V471 Tauri. It is a member of the Hyades star cluster in the constellation of Taurus and is estimated to be around 600 million years old and about 163 light-years from Earth. The two stars are very close and orbit each other every 12 hours. Twice per orbit one star passes in front of the other — which leads to regular changes in the brightness of the pair observed from Earth as they eclipse each other.
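Kepler's third law turns the 12-hour period into a physical separation, which shows just how tight the pair is. The combined mass used below (about 1.7 solar masses for the white dwarf plus its companion) is an assumed round figure, not a value quoted in the text:

```python
import math

G = 6.674e-11        # gravitational constant, SI units
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
AU = 1.496e11        # astronomical unit, m

P = 12 * 3600        # orbital period: 12 hours, in seconds
M_total = 1.7 * M_sun  # assumed combined mass of the two stars

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"separation ~ {a/AU:.3f} AU ({a/R_sun:.1f} solar radii)")
```

The result is a few hundredths of an astronomical unit — only a few solar radii — consistent with the stars eclipsing each other every six hours as seen from Earth.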
A team of astronomers led by Adam Hardy (Universidad Valparaíso, Valparaíso, Chile) first used the ULTRACAM system on ESO’s New Technology Telescope to measure these brightness changes very precisely. The times of the eclipses were measured with an accuracy of better than two seconds — a big improvement on earlier measurements.
The eclipse timings were not regular, but could be explained well by assuming that there was a brown dwarf orbiting both stars whose gravitational pull was disturbing the orbits of the stars. They also found hints that there might be a second small companion object.
Up to now, however, it has been impossible to image such a faint brown dwarf so close to much brighter stars. The power of the newly installed SPHERE instrument on ESO’s Very Large Telescope allowed the team to look, for the first time, exactly where the brown dwarf companion was expected to be. But they saw nothing, even though the very high quality images from SPHERE should have easily revealed it.
“There are many papers suggesting the existence of such circumbinary objects, but the results here provide damaging evidence against this hypothesis,” remarks Adam Hardy.
If there is no orbiting object then what is causing the odd changes to the orbit of the binary? Several theories have been proposed, and, while some of these have already been ruled out, it is possible that the effects are caused by magnetic field variations in the larger of the two stars, somewhat similar to the smaller changes seen in the Sun.
“A study such as this has been necessary for many years, but has only become possible with the advent of powerful new instruments such as SPHERE. This is how science works: observations with new technology can either confirm, or as in this case disprove, earlier ideas. This is an excellent way to start the observational life of this amazing instrument,” concludes Adam Hardy.
This name means that the object is the 471st variable star (or, as closer analysis shows, a pair of stars) to be identified in the constellation of Taurus.
 The SPHERE images are so accurate that they would have been able to reveal a companion such as a brown dwarf that is 70 000 times fainter than the central star, and only 0.26 arcseconds away from it. The expected brown dwarf companion in this case was predicted to be much brighter.
 This effect is called the Applegate mechanism and results in regular changes in the shape of the star, which can lead to changes in the apparent brightness of the double star seen from Earth.
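The contrast figure in the notes above — a companion 70 000 times fainter than the central star — translates into an astronomical magnitude difference via the standard relation Δm = 2.5 log10(flux ratio):

```python
import math

contrast = 70_000  # faintest detectable companion relative to the star
delta_m = 2.5 * math.log10(contrast)
print(f"detection limit ~ {delta_m:.1f} magnitudes fainter")  # ~12.1 mag
```

A 12-magnitude contrast at 0.26 arcseconds is a demanding measurement, which is why only an extreme adaptive-optics instrument like SPHERE could rule the predicted brown dwarf out.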
New maps from ESA’s Planck satellite uncover the ‘polarised’ light from the early Universe across the entire sky, revealing that the first stars formed much later than previously thought.
The history of our Universe is a 13.8 billion-year tale that scientists endeavour to read by studying the planets, asteroids, comets and other objects in our Solar System, and gathering light emitted by distant stars, galaxies and the matter spread between them.
A major source of information used to piece together this story is the Cosmic Microwave Background, or CMB, the fossil light resulting from a time when the Universe was hot and dense, only 380 000 years after the Big Bang.
Thanks to the expansion of the Universe, we see this light today covering the whole sky at microwave wavelengths.
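The shift into the microwave band follows directly from the expansion: the observed temperature of the radiation scales as the emitted temperature divided by (1 + z), where z is the redshift. Using round textbook values (about 3000 K at emission and z ≈ 1090, neither of which appears in the text) recovers today's measured CMB temperature:

```python
# The CMB was emitted by a ~3000 K plasma; expansion by a factor (1 + z)
# since then stretches its wavelengths into the microwave band and cools
# the spectrum by the same factor.
T_emit = 3000.0   # approximate temperature at last scattering, K (assumed)
z = 1090          # approximate redshift of the CMB (assumed)
T_today = T_emit / (1 + z)
print(f"CMB temperature today ~ {T_today:.2f} K")  # close to the measured 2.7 K
```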
Between 2009 and 2013, Planck surveyed the sky to study this ancient light in unprecedented detail. Tiny differences in the background’s temperature trace regions of slightly different density in the early cosmos, representing the seeds of all future structure, the stars and galaxies of today.
Scientists from the Planck collaboration have published the results from the analysis of these data in a large number of scientific papers over the past two years, confirming the standard cosmological picture of our Universe with ever greater accuracy.
“But there is more: the CMB carries additional clues about our cosmic history that are encoded in its ‘polarisation’,” explains Jan Tauber, ESA’s Planck project scientist.
“Planck has measured this signal for the first time at high resolution over the entire sky, producing the unique maps released today.”
Light is polarised when it vibrates in a preferred direction, something that may arise as a result of photons – the particles of light – bouncing off other particles. This is exactly what happened when the CMB originated in the early Universe.
Initially, photons were trapped in a hot, dense soup of particles that, by the time the Universe was a few seconds old, consisted mainly of electrons, protons and neutrinos. Owing to the high density, electrons and photons collided with one another so frequently that light could not travel any significant distance before bumping into another electron, making the early Universe extremely ‘foggy’.
Slowly but surely, as the cosmos expanded and cooled, photons and the other particles grew farther apart, and collisions became less frequent.
This had two consequences: electrons and protons could finally combine and form neutral atoms without them being torn apart again by an incoming photon, and photons had enough room to travel, being no longer trapped in the cosmic fog.
Once freed from the fog, the light was set on its cosmic journey that would take it all the way to the present day, where telescopes like Planck detect it as the CMB. But the light also retains a memory of its last encounter with the electrons, captured in its polarisation.
“The polarisation of the CMB also shows minuscule fluctuations from one place to another across the sky: like the temperature fluctuations, these reflect the state of the cosmos at the time when light and matter parted company,” says François Bouchet of the Institut d’Astrophysique de Paris, France.
“This provides a powerful tool to estimate in a new and independent way parameters such as the age of the Universe, its rate of expansion and its essential composition of normal matter, dark matter and dark energy.”
Planck’s polarisation data confirm the details of the standard cosmological picture determined from its measurement of the CMB temperature fluctuations, but add an important new answer to a fundamental question: when were the first stars born?
“After the CMB was released, the Universe was still very different from the one we live in today, and it took a long time until the first stars were able to form,” explains Marco Bersanelli of Università degli Studi di Milano, Italy.
“Planck’s observations of the CMB polarisation now tell us that these ‘Dark Ages’ ended some 550 million years after the Big Bang – more than 100 million years later than previously thought.
“While these 100 million years may seem negligible compared to the Universe’s age of almost 14 billion years, they make a significant difference when it comes to the formation of the first stars.”
The Dark Ages ended as the first stars began to shine. And as their light interacted with gas in the Universe, more and more of the atoms were turned back into their constituent particles: electrons and protons.
This key phase in the history of the cosmos is known as the ‘epoch of reionisation’.
The newly liberated electrons were once again able to collide with the light from the CMB, albeit much less frequently now that the Universe had significantly expanded. Nevertheless, just as they had 380 000 years after the Big Bang, these encounters between electrons and photons left a tell-tale imprint on the polarisation of the CMB.
“From our measurements of the most distant galaxies and quasars, we know that the process of reionisation was complete by the time that the Universe was about 900 million years old,” says George Efstathiou of the University of Cambridge, UK.
“But, at the moment, it is only with the CMB data that we can learn when this process began.”
Planck’s new results are critical, because previous studies of the CMB polarisation seemed to point towards an earlier dawn of the first stars, placing the beginning of reionisation about 450 million years after the Big Bang.
This posed a problem. Very deep images of the sky from the NASA–ESA Hubble Space Telescope have provided a census of the earliest known galaxies in the Universe, which started forming perhaps 300–400 million years after the Big Bang.
However, these galaxies would not have been powerful enough to end the Dark Ages within 450 million years.
“In that case, we would have needed additional, more exotic sources of energy to explain the history of reionisation,” says Professor Efstathiou.
The new evidence from Planck significantly reduces the problem, indicating that reionisation started later than previously believed, and that the earliest stars and galaxies alone might have been enough to drive it.
This later end of the Dark Ages also implies that it might be easier to detect the very first generation of galaxies with the next generation of observatories, including the James Webb Space Telescope.
But the first stars are definitely not the limit. With the new Planck data released today, scientists are also studying the polarisation of foreground emission from gas and dust in the Milky Way to analyse the structure of the Galactic magnetic field.
The data have also enabled new important insights into the early cosmos and its components, including the intriguing dark matter and the elusive neutrinos, as described in papers also released today.
The Planck data have delved into the even earlier history of the cosmos, all the way to inflation – the brief era of accelerated expansion that the Universe underwent when it was a tiny fraction of a second old. As the ultimate probe of this epoch, astronomers are looking for a signature of gravitational waves triggered by inflation and later imprinted on the polarisation of the CMB.
No direct detection of this signal has yet been achieved, as reported last week. However, when combining the newest all-sky Planck data with those latest results, the limits on the amount of primordial gravitational waves are pushed even further down to achieve the best upper limits yet.
“These are only a few highlights from the scrutiny of Planck’s observations of the CMB polarisation, which is revealing the sky and the Universe in a brand new way,” says Jan Tauber.
“This is an incredibly rich data set and the harvest of discoveries has just begun.”
A series of scientific papers describing the new results was published on 5 February.
The new results from Planck are based on the complete surveys of the entire sky, performed between 2009 and 2013. New data, including temperature maps of the CMB at all nine frequencies observed by Planck and polarisation maps at four frequencies (30, 44, 70 and 353 GHz), are also released today.
The three principal scientific leaders of the Planck mission, Nazzareno Mandolesi, Jean-Loup Puget and Jan Tauber, were recently awarded the 2015 EPS Edison Volta Prize for “directing the development of the Planck payload and the analysis of its data, resulting in the refinement of our knowledge of the temperature fluctuations in the Cosmic Microwave Background as a vastly improved tool for doing precision cosmology at unprecedented levels of accuracy, and consolidating our understanding of the very early universe.”
More about Planck
Launched in 2009, Planck was designed to map the sky in nine frequencies using two state-of-the-art instruments: the Low Frequency Instrument (LFI), which includes three frequency bands in the range 30–70 GHz, and the High Frequency Instrument (HFI), which includes six frequency bands in the range 100–857 GHz.
HFI completed its survey in January 2012, while LFI continued to make science observations until 3 October 2013, before being switched off on 19 October 2013. Seven of Planck’s nine frequency channels were equipped with polarisation-sensitive detectors.
The Planck Scientific Collaboration consists of all the scientists who have contributed to the development of the mission, and who participate in the scientific exploitation of the data during the proprietary period.
These scientists are members of one or more of four consortia: the LFI Consortium, the HFI Consortium, the DK-Planck Consortium, and ESA’s Planck Science Office. The two European-led Planck Data Processing Centres are located in Paris, France and Trieste, Italy.
The LFI consortium is led by N. Mandolesi, Università degli Studi di Ferrara, Italy (deputy PI: M. Bersanelli, Università degli Studi di Milano, Italy), and was responsible for the development and operation of LFI. The HFI consortium is led by J.L. Puget, Institut d’Astrophysique Spatiale in Orsay (CNRS/Université Paris-Sud), France (deputy PI: F. Bouchet, Institut d’Astrophysique de Paris (CNRS/UPMC), France), and was responsible for the development and operation of HFI.
New infrared view of the Trifid Nebula reveals new variable stars far beyond
A new image taken with ESO’s VISTA survey telescope reveals the famous Trifid Nebula in a new and ghostly light. By observing in infrared light, astronomers can see right through the dust-filled central parts of the Milky Way and spot many previously hidden objects. In just this tiny part of one of the VISTA surveys, astronomers have discovered two unknown and very distant Cepheid variable stars that lie almost directly behind the Trifid. They are the first such stars found that lie in the central plane of the Milky Way beyond its central bulge.
As one of its major surveys of the southern sky, the VISTA telescope at ESO’s Paranal Observatory in Chile is mapping the central regions of the Milky Way in infrared light to search for new and hidden objects. This VVV survey (standing for VISTA Variables in the Via Lactea) is also returning to the same parts of the sky again and again to spot objects that vary in brightness as time passes.
A tiny fraction of this huge VVV dataset has been used to create this striking new picture of a famous object, the star formation region Messier 20, usually called the Trifid Nebula, because of the ghostly dark lanes that divide it into three parts when seen through a telescope.
The familiar pictures of the Trifid show it in visible light, where it glows brightly in both the pink emission from ionised hydrogen and the blue haze of scattered light from hot young stars. Huge clouds of light-absorbing dust are also prominent. But the view in the VISTA infrared picture is very different. The nebula is just a ghost of its usual visible-light self. The dust clouds are far less prominent and the bright glow from the hydrogen clouds is barely visible at all. The three-part structure is almost invisible.
In the new image, as if to compensate for the fading of the nebula, a spectacular new panorama comes into view. The thick dust clouds in the disc of our galaxy absorb visible light but let through most of the infrared light that VISTA can see. Rather than the view being blocked, VISTA can see far beyond the Trifid and detect objects on the other side of the galaxy that have never been seen before.
By chance this picture shows a perfect example of the surprises that can be revealed when imaging in the infrared. Apparently close to the Trifid in the sky, but in reality about seven times more distant, a pair of previously unknown variable stars turned up in the VISTA data. These are Cepheid variables, a type of bright, unstable star that slowly brightens and then fades with time. This pair of stars, which the astronomers think are the brightest members of a cluster of stars, are the only Cepheid variables detected so far that are close to the central plane, but on the far side of the galaxy. They brighten and fade over a period of eleven days.
The Trifid Nebula lies about 5200 light-years from Earth; the centre of the Milky Way is about 27 000 light-years away, in almost the same direction; and the newly discovered Cepheids are at a distance of about 37 000 light-years.
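The distances quoted above can be restated as distance moduli — the quantity astronomers actually infer from the apparent and intrinsic brightness of Cepheids. This sketch ignores interstellar extinction, which is substantial along lines of sight through the Galactic plane:

```python
import math

LY_PER_PC = 3.2616  # light-years per parsec

# Distance modulus: mu = m - M = 5 * log10(d / 10 pc),
# relating apparent magnitude m to absolute magnitude M
# (extinction neglected in this sketch).
for name, d_ly in [("Trifid Nebula", 5200),
                   ("Galactic centre", 27000),
                   ("new Cepheids", 37000)]:
    d_pc = d_ly / LY_PER_PC
    mu = 5 * math.log10(d_pc / 10)
    print(f"{name}: {d_pc:.0f} pc, distance modulus {mu:.1f} mag")
```

The far-side Cepheids sit about four magnitudes "deeper" than the Trifid itself, which is why only an infrared survey like VVV could pick them out through the intervening dust.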
CAMBRIDGE, Mass. – As a grape slowly dries and shrivels, its surface creases, ultimately taking on the wrinkled form of a raisin. Similar patterns can be found on the surfaces of other dried materials, as well as in human fingerprints. While these patterns have long been observed in nature, and more recently in experiments, scientists have not been able to come up with a way to predict how such patterns arise in curved systems, such as microlenses.
Now a team of MIT mathematicians and engineers has developed a mathematical theory, confirmed through experiments, that predicts how wrinkles on curved surfaces take shape. From their calculations, they determined that one main parameter — curvature — rules the type of pattern that forms: The more curved a surface is, the more its surface patterns resemble a crystal-like lattice.
The researchers say the theory, reported this week in the journal Nature Materials, may help to generally explain how fingerprints and wrinkles form.
“If you look at skin, there’s a harder layer of tissue, and underneath is a softer layer, and you see these wrinkling patterns that make fingerprints,” says Jörn Dunkel, an assistant professor of mathematics at MIT. “Could you, in principle, predict these patterns? It’s a complicated system, but there seems to be something generic going on, because you see very similar patterns over a huge range of scales.”
The group sought to develop a general theory to describe how wrinkles on curved objects form — a goal that was initially inspired by observations made by Dunkel’s collaborator, Pedro Reis, the Gilbert W. Winslow Career Development Associate Professor in Civil Engineering.
In past experiments, Reis manufactured ping pong-sized balls of polymer in order to investigate how their surface patterns may affect a sphere’s drag, or resistance to air. Reis observed a characteristic transition of surface patterns as air was slowly sucked out: As the sphere’s surface became compressed, it began to dimple, forming a pattern of regular hexagons before giving way to a more convoluted, labyrinthine configuration, similar to fingerprints.
“Existing theories could not explain why we were seeing these completely different patterns,” Reis says.
Denis Terwagne, a former postdoc in Reis’ group, mentioned this conundrum in a Department of Mathematics seminar attended by Dunkel and postdoc Norbert Stoop. The mathematicians took up the challenge, and soon contacted Reis to collaborate.
Ahead of the curve
Reis shared data from his past experiments, which Dunkel and Stoop used to formulate a generalized mathematical theory. According to Dunkel, there exists a mathematical framework for describing wrinkling, in the form of elasticity theory — a complex set of equations one could apply to Reis’ experiments to predict the resulting shapes in computer simulations. However, these equations are far too complicated to pinpoint exactly when certain patterns start to morph, let alone what causes such morphing.
Combining ideas from fluid mechanics with elasticity theory, Dunkel and Stoop derived a simplified equation that accurately predicts the wrinkling patterns found by Reis and his group.
“What type of stretching and bending is going on, and how the substrate underneath influences the pattern — all these different effects are combined in coefficients so you now have an analytically tractable equation that predicts how the patterns evolve, depending on the forces that act on that surface,” Dunkel explains.
In computer simulations, the researchers confirmed that their equation could indeed correctly reproduce the surface patterns observed in experiments. They were therefore also able to identify the main parameters that govern surface patterning.
As it turns out, curvature is one major determinant of whether a wrinkling surface becomes covered in hexagons or a more labyrinthine pattern: The more curved an object, the more regular its wrinkled surface. The thickness of an object’s shell also plays a role: If the outer layer is very thin compared to its curvature, an object’s surface will likely be convoluted, similar to a fingerprint. If the shell is a bit thicker, the surface will form a more hexagonal pattern.
Dunkel says the group’s theory, although based primarily on Reis’ work with spheres, may also apply to more complex objects. He and Stoop, together with postdoc Romain Lagrange, have used their equation to predict the morphing patterns in a donut-shaped object, which they have now challenged Reis to reproduce experimentally. If these predictions can be confirmed in future experiments, Reis says the new theory will serve as a design tool for scientists to engineer complex objects with morphable surfaces.
“This theory allows us to go and look at shapes other than spheres,” Reis says. “If you want to make a more complicated object wrinkle — say, a Pringle-shaped area with multiple curvatures — would the same equation still apply? Now we’re developing experiments to check their theory.”
This research was funded in part by the National Science Foundation, the Swiss National Science Foundation, and the MIT Solomon Buchsbaum Fund.