Tag Archives: nature

Wrinkle predictions: New mathematical theory may explain patterns in fingerprints, raisins, and microlenses.

By Jennifer Chu


CAMBRIDGE, Mass. – As a grape slowly dries and shrivels, its surface creases, ultimately taking on the wrinkled form of a raisin. Similar patterns can be found on the surfaces of other dried materials, as well as in human fingerprints. While these patterns have long been observed in nature, and more recently in experiments, scientists have not been able to come up with a way to predict how such patterns arise in curved systems, such as microlenses.

Now a team of MIT mathematicians and engineers has developed a mathematical theory, confirmed through experiments, that predicts how wrinkles on curved surfaces take shape. From their calculations, they determined that one main parameter — curvature — rules the type of pattern that forms: The more curved a surface is, the more its surface patterns resemble a crystal-like lattice.

The researchers say the theory, reported this week in the journal Nature Materials, may help to generally explain how fingerprints and wrinkles form.

“If you look at skin, there’s a harder layer of tissue, and underneath is a softer layer, and you see these wrinkling patterns that make fingerprints,” says Jörn Dunkel, an assistant professor of mathematics at MIT. “Could you, in principle, predict these patterns? It’s a complicated system, but there seems to be something generic going on, because you see very similar patterns over a huge range of scales.”

The group sought to develop a general theory to describe how wrinkles on curved objects form — a goal that was initially inspired by observations made by Dunkel’s collaborator, Pedro Reis, the Gilbert W. Winslow Career Development Associate Professor in Civil Engineering.

In past experiments, Reis manufactured ping-pong-sized balls of polymer in order to investigate how their surface patterns may affect a sphere’s drag, or resistance to air. Reis observed a characteristic transition of surface patterns as air was slowly sucked out: As the sphere’s surface became compressed, it began to dimple, forming a pattern of regular hexagons before giving way to a more convoluted, labyrinthine configuration, similar to fingerprints.

“Existing theories could not explain why we were seeing these completely different patterns,” Reis says.

Denis Terwagne, a former postdoc in Reis’ group, mentioned this conundrum in a Department of Mathematics seminar attended by Dunkel and postdoc Norbert Stoop. The mathematicians took up the challenge, and soon contacted Reis to collaborate.

Ahead of the curve

Reis shared data from his past experiments, which Dunkel and Stoop used to formulate a generalized mathematical theory. According to Dunkel, there exists a mathematical framework for describing wrinkling, in the form of elasticity theory — a complex set of equations one could apply to Reis’ experiments to predict the resulting shapes in computer simulations. However, these equations are far too complicated to pinpoint exactly when certain patterns start to morph, let alone what causes such morphing.

Combining ideas from fluid mechanics with elasticity theory, Dunkel and Stoop derived a simplified equation that accurately predicts the wrinkling patterns found by Reis and his group.

“What type of stretching and bending is going on, and how the substrate underneath influences the pattern — all these different effects are combined in coefficients so you now have an analytically tractable equation that predicts how the patterns evolve, depending on the forces that act on that surface,” Dunkel explains.
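
The paper’s own equation is not reproduced here, but pattern-forming models of this general family (Swift–Hohenberg-type equations) can be integrated in a few lines of code. The sketch below is a generic, illustrative Swift–Hohenberg simulation under assumed parameter values, not the authors’ exact generalized equation; the quadratic coefficient plays a role only loosely analogous to the curvature-dependent effects described above.

```python
import numpy as np

# Minimal 2-D Swift-Hohenberg sketch (semi-implicit spectral scheme).
# NOTE: a generic pattern-formation model chosen for illustration, not the
# exact generalized equation derived in the Nature Materials paper.
N, L = 128, 40 * np.pi          # grid points and periodic domain size
r, g = 0.2, 1.0                 # control parameter and quadratic coefficient
                                # (g > 0 favours hexagon-like cells; g = 0 gives stripes/labyrinths)
dt, steps = 0.5, 4000

kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k2 = KX**2 + KY**2
lin = r - (1 - k2)**2           # linear growth rate in Fourier space

u = 0.01 * np.random.randn(N, N)       # small random initial condition
for _ in range(steps):
    nonlin = g * u**2 - u**3
    u_hat = (np.fft.fft2(u) + dt * np.fft.fft2(nonlin)) / (1 - dt * lin)
    u = np.real(np.fft.ifft2(u_hat))

# 'u' now holds a hexagon-like (cellular) pattern; set g = 0 for labyrinthine stripes.
```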

In computer simulations, the researchers confirmed that their equation was indeed able to reproduce correctly the surface patterns observed in experiments. They were therefore also able to identify the main parameters that govern surface patterning.

As it turns out, curvature is one major determinant of whether a wrinkling surface becomes covered in hexagons or a more labyrinthine pattern: The more curved an object, the more regular its wrinkled surface. The thickness of an object’s shell also plays a role: If the outer layer is very thin compared to its curvature, an object’s surface will likely be convoluted, similar to a fingerprint. If the shell is a bit thicker, the surface will form a more hexagonal pattern.

Dunkel says the group’s theory, although based primarily on Reis’ work with spheres, may also apply to more complex objects. He and Stoop, together with postdoc Romain Lagrange, have used their equation to predict the morphing patterns in a donut-shaped object, which they have now challenged Reis to reproduce experimentally. If these predictions can be confirmed in future experiments, Reis says the new theory will serve as a design tool for scientists to engineer complex objects with morphable surfaces.

“This theory allows us to go and look at shapes other than spheres,” Reis says. “If you want to make a more complicated object wrinkle — say, a Pringle-shaped area with multiple curvatures — would the same equation still apply? Now we’re developing experiments to check their theory.”

This research was funded in part by the National Science Foundation, the Swiss National Science Foundation, and the MIT Solomon Buchsbaum Fund.

Source: MIT News Office

Quantum computer as detector shows space is not squeezed

By Robert Sanders


 

Ever since Einstein proposed his special theory of relativity in 1905, physics and cosmology have been based on the assumption that space looks the same in all directions – that it’s not squeezed in one direction relative to another.

A new experiment by UC Berkeley physicists used partially entangled atoms — identical to the qubits in a quantum computer — to demonstrate more precisely than ever before that this is true, to one part in a billion billion.

The classic experiment that inspired Albert Einstein was performed in Cleveland by Albert Michelson and Edward Morley in 1887 and disproved the existence of an “ether” permeating space through which light was thought to move like a wave through water. What it also proved, said Hartmut Häffner, a UC Berkeley assistant professor of physics, is that space is isotropic and that light travels at the same speed up, down and sideways.

“Michelson and Morley proved that space is not squeezed,” Häffner said. “This isotropy is fundamental to all physics, including the Standard Model of physics. If you take away isotropy, the whole Standard Model will collapse. That is why people are interested in testing this.”

The Standard Model of particle physics describes how all fundamental particles interact, and requires that all particles and fields be invariant under Lorentz transformations, and in particular that they behave the same no matter what direction they move.

Häffner and his team conducted an experiment analogous to the Michelson-Morley experiment, but with electrons instead of photons of light. In a vacuum chamber he and his colleagues isolated two calcium ions, partially entangled them as in a quantum computer, and then monitored the electron energies in the ions as Earth rotated over 24 hours.

As the Earth rotates every 24 hours, the orientation of the ions in the quantum computer/detector changes with respect to the Sun’s rest frame. If space were squeezed in one direction and not another, the energies of the electrons in the ions would have shifted with a 12-hour period. (Hartmut Haeffner image)

If space were squeezed in one or more directions, the energy of the electrons would change with a 12-hour period. It didn’t, showing that space is in fact isotropic to one part in a billion billion (10^18), 100 times better than previous experiments involving electrons, and five times better than experiments like Michelson and Morley’s that used light.
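
The idea behind the analysis can be illustrated with a toy calculation: look for a sinusoidal modulation with a 12-hour period in the measured energy-difference time series and bound its amplitude. The sketch below uses synthetic noise and made-up units purely for illustration; it is not the Berkeley team’s analysis code.

```python
import numpy as np

# Hypothetical illustration: bound the amplitude of a 12-hour modulation in an
# energy-difference time series, as a Lorentz-violating anisotropy would produce.
rng = np.random.default_rng(0)
t_hours = np.linspace(0, 95, 400)                 # ~4 days of monitoring (assumed)
signal = rng.normal(0.0, 1.0, t_hours.size)       # white measurement noise, arbitrary units

period = 12.0                                     # modulation period of interest, in hours
design = np.column_stack([
    np.ones_like(t_hours),
    np.cos(2 * np.pi * t_hours / period),
    np.sin(2 * np.pi * t_hours / period),
])
coef, *_ = np.linalg.lstsq(design, signal, rcond=None)
amplitude = np.hypot(coef[1], coef[2])
print(f"fitted 12 h modulation amplitude: {amplitude:.3f} (consistent with zero)")
```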

The results disprove at least one theory that extends the Standard Model by assuming some anisotropy of space, he said.

Häffner and his colleagues, including former graduate student Thaned Pruttivarasin, now at the Quantum Metrology Laboratory in Saitama, Japan, will report their findings in the Jan. 29 issue of the journal Nature.

Entangled qubits

Häffner came up with the idea of using entangled ions to test the isotropy of space while building quantum computers, which involve using ionized atoms as quantum bits, or qubits, entangling their electron wave functions, and forcing them to evolve to do calculations not possible with today’s digital computers. It occurred to him that two entangled qubits could serve as sensitive detectors of slight disturbances in space.

“I wanted to do the experiment because I thought it was elegant and that it would be a cool thing to apply our quantum computers to a completely different field of physics,” he said. “But I didn’t think we would be competitive with experiments being performed by people working in this field. That was completely out of the blue.”

He hopes to make more sensitive quantum computer detectors using other ions, such as ytterbium, to gain another 10,000-fold increase in the precision measurement of Lorentz symmetry. He is also exploring with colleagues future experiments to detect the spatial distortions caused by the effects of dark matter particles, which are a complete mystery despite comprising 27 percent of the mass of the universe.

“For the first time we have used tools from quantum information to perform a test of fundamental symmetries, that is, we engineered a quantum state which is immune to the prevalent noise but sensitive to the Lorentz-violating effects,” Häffner said. “We were surprised the experiment just worked, and now we have a fantastic new method at hand which can be used to make very precise measurements of perturbations of space.”

Other co-authors are UC Berkeley graduate student Michael Ramm, former UC Berkeley postdoc Michael Hohensee of Lawrence Livermore National Laboratory, and colleagues from the universities of Delaware and Maryland and institutions in Russia. The work was supported by the National Science Foundation.

Source: UC Berkeley

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.

Credit: F. Najafi/ MIT

Toward quantum chips

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits.

By Larry Hardesty


CAMBRIDGE, Mass. – A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.


In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
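
As a concrete, purely textbook illustration of entanglement, the short sketch below builds a two-qubit Bell state and checks that it cannot be written as a product of two single-qubit states; it is not code from the experiment described here.

```python
import numpy as np

# Toy illustration of entanglement: the two-qubit Bell state (|00> + |11>)/sqrt(2).
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# A product state of two qubits reshapes into a rank-1 2x2 matrix; the Bell state
# has rank 2, so no pair of single-qubit states reproduces it.
rank = np.linalg.matrix_rank(bell.reshape(2, 2))
print(bell, "Schmidt rank:", rank)   # rank 2 => entangled
```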

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

Relocation

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

Source: MIT News Office

The dark nebula LDN 483.
Credit: ESO

Where Did All the Stars Go?

Dark cloud obscures hundreds of background stars


Some of the stars appear to be missing in this intriguing new ESO image. But the black gap in this glitteringly beautiful starfield is not really a gap, but rather a region of space clogged with gas and dust. This dark cloud is called LDN 483 — for Lynds Dark Nebula 483. Such clouds are the birthplaces of future stars. The Wide Field Imager, an instrument mounted on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile, captured this image of LDN 483 and its surroundings.

The Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile snapped this image of the dark nebula LDN 483. The object is a region of space clogged with gas and dust. These materials are dense enough to effectively eclipse the light of background stars. LDN 483 is located about 700 light-years away in the constellation of Serpens (The Serpent). Credit: ESO

LDN 483 [1] is located about 700 light-years away in the constellation of Serpens (The Serpent). The cloud contains enough dusty material to completely block the visible light from background stars. Particularly dense molecular clouds, like LDN 483, qualify as dark nebulae because of this obscuring property. The starless nature of LDN 483 and its ilk would suggest that they are sites where stars cannot take root and grow. But in fact the opposite is true: dark nebulae offer the most fertile environments for eventual star formation.

Astronomers studying star formation in LDN 483 have discovered some of the youngest observable kinds of baby stars buried in LDN 483’s shrouded interior. These gestating stars can be thought of as still being in the womb, having not yet been born as complete, albeit immature, stars.

In this first stage of stellar development, the star-to-be is just a ball of gas and dust contracting under the force of gravity within the surrounding molecular cloud. The protostar is still quite cool — about –250 degrees Celsius — and shines only in long-wavelength submillimetre light [2]. Yet temperature and pressure are beginning to increase in the fledgling star’s core.
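
A rough back-of-the-envelope check, assuming ideal blackbody emission (which real dusty protostars only approximate): Wien’s displacement law places the emission peak of a roughly 23 K object in the far infrared, tailing into the submillimetre.

```python
# Rough blackbody check (assumes ideal thermal emission): Wien's displacement law.
b = 2.898e-3          # Wien displacement constant, m*K
T = 273.15 - 250.0    # about -250 degrees Celsius, roughly 23 K
lam_peak = b / T      # ~1.3e-4 m, i.e. ~130 micrometres
print(f"peak wavelength ~ {lam_peak * 1e6:.0f} um (far-infrared, tailing into the submillimetre)")
```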

This earliest period of star growth lasts mere thousands of years, an astonishingly short amount of time in astronomical terms, given that stars typically live for millions or billions of years. In the following stages, over the course of several million years, the protostar will grow warmer and denser. Its emission will increase in energy along the way, graduating from mainly cold, far-infrared light to near-infrared and finally to visible light. The once-dim protostar will have then become a fully luminous star.

As more and more stars emerge from the inky depths of LDN 483, the dark nebula will disperse further and lose its opacity. The missing background stars that are currently hidden will then come into view — but only after the passage of millions of years, and they will be outshone by the bright, newly born stars in the cloud [3].

Notes
[1] The Lynds Dark Nebula catalogue was compiled by the American astronomer Beverly Turner Lynds, and published in 1962. These dark nebulae were found from visual inspection of the Palomar Sky Survey photographic plates.

[2] The Atacama Large Millimeter/submillimeter Array (ALMA), operated in part by ESO, observes in submillimetre and millimetre light and is ideal for the study of such very young stars in molecular clouds.

[3] Such a young open star cluster can be seen here, and a more mature one here.
Source: ESO

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Credit:
ESO

The Hot Blue Stars of Messier 47

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Messier 47 is located approximately 1600 light-years from Earth, in the constellation of Puppis (the poop deck of the mythological ship Argo). It was first noticed some time before 1654 by Italian astronomer Giovanni Battista Hodierna and was later independently discovered by Charles Messier himself, who apparently had no knowledge of Hodierna’s earlier observation.

Although it is bright and easy to see, Messier 47 is one of the least densely populated open clusters. Only around 50 stars are visible in a region about 12 light-years across, compared to other similar objects which can contain thousands of stars.

Messier 47 has not always been so easy to identify. In fact, for years it was considered missing, as Messier had recorded the coordinates incorrectly. The cluster was later rediscovered and given another catalogue designation — NGC 2422. The nature of Messier’s mistake, and the firm conclusion that Messier 47 and NGC 2422 are indeed the same object, was only established in 1959 by Canadian astronomer T. F. Morris.

The bright blue–white colours of these stars are an indication of their temperature, with hotter stars appearing bluer and cooler stars appearing redder. This relationship between colour, brightness and temperature can be visualised by use of the Planck curve. But the more detailed study of the colours of stars using spectroscopy also tells astronomers a lot more — including how fast the stars are spinning and their chemical compositions. There are also a few bright red stars in the picture — these are red giant stars that are further through their short life cycles than the less massive and longer-lived blue stars [1].
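
For illustration, the Planck curve mentioned above can be evaluated directly. The sketch below compares two assumed temperatures (a hot blue star and a cool red giant, values chosen for the example) and shows that the hotter star’s spectrum peaks at shorter, bluer wavelengths.

```python
import numpy as np

# Illustrative Planck-curve comparison (standard physics, not ESO analysis code).
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Spectral radiance B_lambda(T) in W sr^-1 m^-3."""
    x = h * c / (wavelength_m * kB * T)
    return (2 * h * c**2 / wavelength_m**5) / np.expm1(x)

wavelengths = np.linspace(100e-9, 2000e-9, 500)
for T in (20000, 4000):        # hot blue star vs cool red giant (illustrative temperatures)
    peak = wavelengths[np.argmax(planck(wavelengths, T))]
    print(f"T = {T:>5} K  ->  spectrum peaks near {peak * 1e9:.0f} nm")
```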

By chance Messier 47 appears close in the sky to another contrasting star cluster — Messier 46. Messier 47 is relatively close, at around 1600 light-years, but Messier 46 is located around 5500 light-years away and contains a lot more stars, with at least 500 stars present. Despite containing more stars, it appears significantly fainter due to its greater distance.

Messier 46 could be considered to be the older sister of Messier 47, with the former being approximately 300 million years old compared to the latter’s 78 million years. Consequently, many of the most massive and brilliant of the stars in Messier 46 have already run through their short lives and are no longer visible, so most stars within this older cluster appear redder and cooler.
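
The age difference matters because stellar lifetime falls steeply with mass. A common textbook scaling, used here as an approximation rather than a figure from the ESO release, is t ≈ 10 Gyr × (M/M_sun)^(-2.5), sketched below.

```python
# Back-of-the-envelope stellar lifetimes (a standard textbook scaling, not from the ESO release):
# t ~ 10 Gyr * (M / M_sun)^(-2.5), following the main-sequence mass-luminosity relation.
def lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

for m in (1, 3, 10):   # solar masses (illustrative values)
    print(f"{m:>2} M_sun -> roughly {lifetime_gyr(m):.3g} Gyr")
```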

This image of Messier 47 was produced as part of the ESO Cosmic Gems programme [2].

Notes

[1] The lifetime of a star depends primarily on its mass. Massive stars, containing many times as much material as the Sun, have short lives measured in millions of years. On the other hand much less massive stars can continue to shine for many billions of years. In a cluster, the stars all have about the same age and same initial chemical composition. So the brilliant massive stars evolve quickest, become red giants sooner, and end their lives first, leaving the less massive and cooler ones to long outlive them.

[2] The ESO Cosmic Gems programme is an outreach initiative to produce images of interesting, intriguing or visually attractive objects using ESO telescopes, for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for science observations. All data collected may also be suitable for scientific purposes, and are made available to astronomers through ESO’s science archive.

More information

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO 

Characteristics of a universal simulator: Study narrows the scope of research on quantum computing

Despite a lot of work by many research groups around the world, the field of quantum computing is still in its early stages. We still need to cover a lot of ground to achieve the goal of a working quantum computer capable of performing the tasks expected of it. Recent research by a SISSA-led team tries to give future work in quantum computing some direction, based on the current state of the field.


“A quantum computer may be thought of as a ‘simulator of overall Nature,’” explains Fabio Franchini, a researcher at the International School for Advanced Studies (SISSA) of Trieste, “in other words, it’s a machine capable of simulating Nature as a quantum system, something that classical computers cannot do.” Quantum computers are machines that carry out operations by exploiting the phenomena of quantum mechanics, and they are capable of performing functions different from those of current computers. This science is still very young and the systems produced to date are still very limited. Franchini is the first author of a study just published in Physical Review X which establishes a basic characteristic that this type of machine should possess and, in doing so, guides the direction of future research in this field.

The study used analytical and numerical methods. “What we found,” explains Franchini, “is that a system that does not exhibit ‘Majorana fermions’ cannot be a universal quantum simulator.” Majorana fermions were hypothesized by Ettore Majorana in a paper published in 1937, and they display peculiar characteristics: a Majorana fermion is also its own antiparticle. “That means that if Majorana fermions meet, they annihilate among themselves,” continues Franchini. “In recent years it has been suggested that these fermions could be found in states of matter useful for quantum computing, and our study confirms that they must be present, with a certain probability related to entanglement, in the material used to build the machine.”

Entanglement, or “action at a distance”, is a property of quantum systems whereby an action done on one part of the system has an effect on another part of the same system, even if the latter has been split into two parts that are located very far apart. “Entanglement is a fundamental phenomenon for quantum computers,” explains Franchini.

“Our study helps to understand what types of devices research should be focusing on to construct this universal simulator. Until now, given the lack of criteria, research has proceeded somewhat randomly, with a huge consumption of time and resources”.

The study was conducted with the participation of many other international research institutes in addition to SISSA, including the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, and the University of Oxford.

More in detail…

“Having a quantum computer would open up new worlds. For example, if we had one today we would be able to break into any bank account,” jokes Franchini. “But don’t worry, we’re nowhere near that goal”.

At the present time, several attempts at quantum machines exist that rely on the properties of specific materials. Depending on the technology used, these computers have sizes varying from a small box to a whole room, but so far they are only able to process a limited number of information bits, an amount far smaller than that processed by classical computers.

However, it’s not correct to say that quantum computers are, or will be, more powerful than traditional ones, points out Franchini. “There are several things that these devices are worse at. But, by exploiting quantum mechanics, they can perform operations that would be impossible for classical computers”.

Source: International School of Advanced Studies (SISSA)

 

By passing it through a special crystal, a light wave’s phase (denoting position along the wave’s cycle) can be delayed. A delay of a certain amount can denote a piece of data. In this experiment light pulses can be delayed by a zero amount, or by ¼ of a cycle, or 2/4, or ¾ of a cycle.
Credit : JQI

Best Quantum Receiver

RECORD HIGH DATA ACCURACY RATES FOR PHASE-MODULATED TRANSMISSION

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

 Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this not by passively detecting the incoming light pulses, but by splitting the light up and measuring it numerous times.


 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals we can lessen the amount of power needed to send signals reliably,” says Migdall. “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations. Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber. The information can be coded in several ways. Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude. Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave. On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

 Because the carrier wave is coherent—for laser light this means a predictable set of crests and troughs along the wave—a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, the waves in Migdall’s experiment are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, two-fourths of a cycle, and three-fourths of a cycle. The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2). The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment. The angle around the graph corresponds to the signal’s phase delay.
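
The four states can be written down explicitly as points in the plane, as in a standard QPSK-style constellation. The sketch below is illustrative only; the mean photon number and symbol ordering are assumptions, not values from the JQI experiment.

```python
import numpy as np

# Sketch of the four phase-modulated states described above (a standard QPSK-style
# constellation; the amplitude and symbol labels are illustrative assumptions).
mean_photons = 4.0
amplitude = np.sqrt(mean_photons)                     # coherent-state amplitude, n = |alpha|^2
phases = np.array([0, 0.25, 0.5, 0.75]) * 2 * np.pi   # 0, 1/4, 2/4, 3/4 of a cycle
constellation = amplitude * np.exp(1j * phases)

for state, point in enumerate(constellation, start=1):
    print(f"state {state}: phase {np.angle(point) / (2 * np.pi):+.2f} cycles, "
          f"point ({point.real:+.2f}, {point.imag:+.2f})")
```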

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

 In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse) goes up.  This principle is illustrated by the figure to the right, where now the clouds are farther apart than in the left panel.  This means there is less chance of mistaken readings.  More intense beams require more power, but this mitigates the chance of overlapping blobs.

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1. There the waves are combined with a reference pulse. If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing. This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, the phase of the reference light provides that answer … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires—an eventuality called a “dark count.”  Still another possible glitch: because of optics imperfections even with a correct reference–phase setting, the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better. In other words, the probability of being wrong decreases.
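
A heavily simplified sketch of this adaptive strategy is shown below: split the pulse into slices, try to null the currently most probable phase, record a click or no click, and update a posterior over the four states. The photon statistics, the assumption of perfect interference visibility, and the update rule are simplifications, not the exact JQI implementation.

```python
import numpy as np

# Simplified "20 questions" receiver: repeatedly null the best current guess,
# observe click/no-click, and update a Bayesian posterior over the four phase states.
rng = np.random.default_rng(1)
phases = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi     # four candidate phase states
true_state, mean_photons, slices = 2, 5.0, 10        # assumed values for illustration

posterior = np.full(4, 0.25)
for _ in range(slices):
    guess = int(np.argmax(posterior))                # best current hypothesis
    # Mean photon number reaching the detector after interfering this slice with a
    # reference set to cancel the guessed phase (ideal visibility assumed):
    n_slice = 2 * (mean_photons / slices) * (1 - np.cos(phases[true_state] - phases[guess]))
    click = rng.random() < 1 - np.exp(-n_slice)      # Poissonian click probability
    # Bayesian update: likelihood of this outcome under each hypothesis.
    n_hyp = 2 * (mean_photons / slices) * (1 - np.cos(phases - phases[guess]))
    likelihood = (1 - np.exp(-n_hyp)) if click else np.exp(-n_hyp)
    posterior = posterior * likelihood
    posterior = posterior / posterior.sum()

print("decoded state:", int(np.argmax(posterior)), "posterior:", np.round(posterior, 3))
```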

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can measure the actual error rates. Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons. (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.” Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum. The difference here is that single-photon detection and an adaptive measurement strategy are used. A stable reference pulse is required. In the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

See more at: http://jqi.umd.edu/news/best-quantum-receiver

Source: JQI

This map of Turkey shows the artists' interpretation of the North Anatolian Fault (blue line) and the possible site of an earthquake (white lines) that could strike beneath the Sea of Marmara.

Image: NASA, and Christine Daniloff and Jose-Luis Olivares/MIT

Groundwater composition as potential precursor to earthquakes

By Meres J. Weche


 

The world experiences over 13,000 earthquakes per year reaching a Richter magnitude of 4.0 or greater. But what if there was a way to predict these oft-deadly earthquakes and, through a reliable process, mitigate loss of life and damage to vital urban infrastructures?

Earthquake prediction is the “holy grail” of geophysics, says KAUST’s Dr. Sigurjón Jónsson, Associate Professor of Earth Science and Engineering and Principal Investigator of the Crustal Deformation and InSAR Group. But after some initial optimism among scientists in the 1970s about the reality of predicting earthquakes, ushered in by the successful prediction within hours of a major earthquake in China in 1975, several failed predictions have since moved the pendulum towards skepticism from the 1990s onwards.


In a study recently published in Nature Geoscience by a group of Icelandic and Swedish researchers, including Prof. Sigurjón Jónsson, an interesting correlation was established between two earthquakes greater than magnitude 5 in North Iceland, in 2012 and 2013, and the observed changing chemical composition of area groundwater prior to these tectonic events. The changes included variations in dissolved element concentrations and fluctuations in the proportion of stable isotopes of oxygen and hydrogen.

Can We Really Predict Earthquakes?

The basic common denominator guiding scientists and general observers investigating the predictability of earthquakes is the detection of these noticeable changes before seismic events. Some of these observable precursors are changes in groundwater level, radon gas sometimes coming out from the ground, smaller quakes called foreshocks, and even strange behavior by some animals before large earthquakes.

There are essentially three prevailing schools of thought in earthquake prediction among scientists. There’s a first group of scientists who believe that earthquake prediction is achievable but we simply don’t yet know how to do it reliably. They believe that we may, at some point in the future, be able to give short-term predictions.

Then there’s another class of scientists who believe that we will never be able to predict earthquakes. Their philosophy is that the exact start of earthquakes is simply randomly occurring and that the best thing we can do is to retrofit our houses and make probability forecasts — but no short-term warnings.

The last group, which currently represents a minority of scientists who are not often taken seriously, believes that earthquakes are indeed predictable and that we have the tools to do it.

Following the wave of optimism in the ’70s and ’80s, the interest and confidence of scientists in predicting earthquakes have generally subsided, along with the funding. Scientists now tend to focus mainly on understanding the physics behind earthquakes. As Prof. Jónsson summarizes:

“From geology and from earthquake occurrence today we can more or less see where in the world we have large earthquakes and where we have areas which are relatively safe. Although we cannot make short-term predictions we can make what we call forecasts. We can give probabilities. But short-term predictions are not achievable and may never be. We will see.”

The Message from the Earth’s Cracking Crust

Iceland was an ideal location to conduct the collaborative study undertaken by the scientists from Akureyri University, the University of Iceland, Landsvirkjun (the National Power Company of Iceland), the University of Stockholm, the University of Gothenburg and Karolinska Institutet in Stockholm, and KAUST.

“Iceland is a good testing ground because, geologically speaking, it’s very active. It has erupting volcanoes and it has large earthquakes also happening relatively often compared to many other places. And these areas that are active are relatively accessible,” said Prof. Jónsson.

The team of researchers monitored the chemistry, temperature and pressure in a few water wells in north Iceland for a period of five years more or less continuously. “They have been doing this to form an understanding of the variability of these chemical compounds in the wells; and then possibly associate significant changes to tectonic or major events,” he adds.

Through the five-year data collection period, which began in 2008, they were able to detect perceptible changes in the aquifer system as much as four to six months prior to the two recorded earthquakes: one of magnitude 5.6 in October 2012 and a second of magnitude 5.5 in April 2013. Their main observation was that the proportion of young local precipitation water in the geothermal water increased relative to water that fell as rain thousands of years ago (aquifer systems are typically a mix of the two). At the same time, alterations were evident in dissolved chemicals like sodium, calcium and silicon during that precursor period. Interestingly, the proportion went back to its previous state about three months after the quakes.
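
The changing proportion of young versus old water can be estimated with a simple two-endmember mixing model based on stable isotopes. The sketch below is a generic illustration with made-up delta-18O values; it is not the study’s data or method.

```python
# Two-endmember mixing estimate: fraction of young precipitation in a groundwater
# sample from its oxygen-isotope ratio (delta-18O). Values below are placeholders,
# not measurements from the Iceland study.
def young_water_fraction(delta_sample, delta_old=-13.0, delta_young=-10.0):
    """Linear mixing between an 'old' and a 'young' water endmember (per-mil values)."""
    return (delta_sample - delta_old) / (delta_young - delta_old)

for delta in (-12.4, -12.0, -11.5):      # hypothetical samples drifting toward young water
    print(f"delta-18O = {delta:5.1f} per mil -> young fraction ~ {young_water_fraction(delta):.2f}")
```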

While the scientists caution that this is not a confirmation that earthquake predictions are now feasible, the observations are promising and worthy of further investigation involving more exhaustive monitoring in additional locations. Statistically speaking, it would be very difficult to dissociate these changes in the groundwater’s chemical composition from the two earthquakes.

The reason a change in the ratio between old and new water in the aquifer system is important is that it points to the development of small fractures from the build-up of stress on the rocks before an earthquake. The new rainwater then seeps through these newly formed cracks, or microfractures, in the rocky soil. Prof. Sigurjón Jónsson illustrates this as follows:

“It’s similar to when you take a piece of wood and you start to bend it. At some point before it snaps it starts to crack a little; and then poof it snaps. Something similar might be happening in the earth. Meaning that just before an earthquake happens, if you start to have a lot of micro-fracturing you will have water having an easier time to move around in the rocks.”

The team will be presenting their findings at the American Geophysical Union (AGU) meeting in San Francisco in December 2014. “It will be interesting to see the reaction there,” said Prof. Jónsson.

Source: KAUST News

Live longer? Save the planet? Better diet could nail both

New study shows healthier food choices could dramatically decrease environmental costs of agriculture


As cities and incomes increase around the world, so does consumption of refined sugars, refined fats, oils and resource- and land-intense agricultural products such as beef. A new study led by University of Minnesota ecologist David Tilman shows how a shift away from this trajectory and toward healthier traditional Mediterranean, pescatarian or vegetarian diets could not only boost human lifespan and quality of life, but also slash greenhouse gas emissions and save habitat for endangered species.

The study, published in the November 12 online edition of Nature by Tilman and graduate student Michael Clark, synthesized data on environmental costs of food production, diet trends, relationships between diet and health, and population growth. Their integrated analysis painted a striking picture of the human and environmental health costs of our current diet trajectory as well as how strategically modifying food choices could reduce not only incidence of type II diabetes, coronary heart disease and other chronic diseases, but global agricultural greenhouse gas emissions and habitat degradation, as well.

“We showed that the same dietary changes that can add about a decade to our lives can also prevent massive environmental damage,” said Tilman, a professor in the University’s College of Biological Sciences and resident fellow at the Institute on the Environment. “In particular, if the world were to adopt variations on three common diets, health would be greatly increased at the same time global greenhouse gas emissions were reduced by an amount equal to the current greenhouse gas emissions of all cars, trucks, planes, trains and ships. In addition, this dietary shift would prevent the destruction of an area of tropical forests and savannas as large as half of the United States.”

The researchers found that, as incomes increased between 1961 and 2009, people consumed more meat protein, empty calories and total calories per person. When these trends were combined with forecasts of population growth and income growth for the coming decades, the study predicted that diets in 2050 would contain fewer servings of fruits and vegetables, but about 60 percent more empty calories and 25 to 50 percent more pork, poultry, beef, dairy and eggs — a suite of changes that would increase incidence of type II diabetes, coronary heart disease and some cancers. Using life-cycle analyses of various food production systems, the study also calculated that, if current trends prevail, these 2050 diets would also lead to an 80 percent increase in global greenhouse gas emissions from food production as well as habitat destruction due to land clearing for agriculture around the world.

The study then compared health impacts of the global omnivorous diet with those reported for traditional Mediterranean, pescatarian and vegetarian diets. Adopting these alternative diets could reduce incidence of type II diabetes by about 25 percent, cancer by about 10 percent and death from heart disease by about 20 percent relative to the omnivore diet. Additionally, the adoption of these or similar alternative diets would prevent most or all of the increased greenhouse gas emissions and habitat destruction that would otherwise be caused by both current diet trends and increased global population.

The authors acknowledged that numerous factors go into diet choice, but also pointed out that the alternative diets already are part of the lives of countless people around the world. Noting that variations on the diets used in the scenario could potentially show even greater benefit, they concluded that “the evaluation and implementation of dietary solutions to the tightly linked diet-environment-health trilemma is a global challenge, and opportunity, of great environmental and public health importance.”

Tilman is a Regents Professor and McKnight Presidential Chair in Ecology in the College of Biological Sciences’ Department of Ecology, Evolution and Behavior and a resident fellow in the University of Minnesota’s Institute on the Environment, which seeks lasting solutions to Earth’s biggest challenges through research, partnerships and leadership development. Clark is currently a doctoral student in the College of Food, Agricultural and Natural Resource Sciences.

Source: University of Minnesota

The mass difference spectrum: the LHCb result shows strong evidence of the existence of two new particles, the Xi_b'- (first peak) and Xi_b*- (second peak), with a very high confidence level of 10 sigma. The black points are the signal sample and the hatched red histogram is a control sample. The blue curve represents a model including the two new particles, fitted to the data. Delta_m is the difference between the mass of the Xi_b0 pi- pair and the sum of the individual masses of the Xi_b0 and pi-. INSET: Detail of the Xi_b'- region plotted with a finer binning.
Credit: CERN

LHCb experiment observes two new baryon particles never seen before

Geneva, 19 November 2014. Today the collaboration for the LHCb experiment at CERN’s [1] Large Hadron Collider announced the discovery of two new particles in the baryon family. The particles, known as the Xi_b’- and Xi_b*-, were predicted to exist by the quark model but had never been seen before. A related particle, the Xi_b*0, was found by the CMS experiment at CERN in 2012. The LHCb collaboration submitted a paper reporting the finding to Physical Review Letters.

Like the well-known protons that the LHC accelerates, the new particles are baryons made from three quarks bound together by the strong force. The types of quarks are different, though: the new Xi_b particles both contain one beauty (b), one strange (s), and one down (d) quark. Thanks to the heavyweight b quarks, they are more than six times as massive as the proton. But the particles are more than just the sum of their parts: their mass also depends on how they are configured. Each of the quarks has an attribute called “spin”. In the Xi_b’- state, the spins of the two lighter quarks point in the opposite direction to the b quark, whereas in the Xi_b*- state they are aligned. This difference makes the Xi_b*- a little heavier.

“Nature was kind and gave us two particles for the price of one,” said Matthew Charles of the CNRS’s LPNHE laboratory at Paris VI University. “The Xi_b’- is very close in mass to the sum of its decay products: if it had been just a little lighter, we wouldn’t have seen it at all using the decay signature that we were looking for.”
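
The new states show up as peaks in the mass-difference variable Delta_m defined in the figure caption above. The sketch below illustrates the standard relativistic kinematics behind that variable, using made-up momenta; it is not LHCb analysis code.

```python
import numpy as np

# Toy illustration of the mass-difference variable:
# delta_m = m(Xi_b0 pi-) - m(Xi_b0) - m(pi-), built from relativistic four-vectors.
def four_vector(mass_gev, px, py, pz):
    energy = np.sqrt(mass_gev**2 + px**2 + py**2 + pz**2)
    return np.array([energy, px, py, pz])

def invariant_mass(p):
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

m_xib0, m_pi = 5.792, 0.1396                   # approximate masses in GeV/c^2
xib0 = four_vector(m_xib0, 12.0, 3.0, 45.0)    # hypothetical momenta in GeV/c
pion = four_vector(m_pi, 0.4, 0.1, 1.6)

delta_m = invariant_mass(xib0 + pion) - m_xib0 - m_pi
print(f"delta_m = {delta_m * 1000:.1f} MeV/c^2")   # a new state appears as a peak in this variable
```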


“This is a very exciting result. Thanks to LHCb’s excellent hadron identification, which is unique among the LHC experiments, we were able to separate a very clean and strong signal from the background,” said Steven Blusk from Syracuse University in New York. “It demonstrates once again the sensitivity and precision of the LHCb detector.”

As well as the masses of these particles, the research team studied their relative production rates, their widths – a measure of how unstable they are – and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

QCD is part of the Standard Model of particle physics, the theory that describes the fundamental particles of matter, how they interact and the forces between them. Testing QCD at high precision is key to refining our understanding of quark dynamics, models of which are tremendously difficult to calculate.

“If we want to find new physics beyond the Standard Model, we need first to have a sharp picture,” said LHCb’s physics coordinator Patrick Koppenburg from Nikhef Institute in Amsterdam. “Such high precision studies will help us to differentiate between Standard Model effects and anything new or unexpected in the future.”

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared – after its first long shutdown – to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

Further information

Link to the paper on arXiv: http://arxiv.org/abs/1411.4849
More about the result on the LHCb collaboration website: http://lhcb-public.web.cern.ch/lhcb-public/Welcome.html#StrBeaBa
Observation of a new Xi_b*0 beauty particle, on the CMS collaboration website: http://cms.web.cern.ch/news/observation-new-xib0-beauty-particle

Footnote(s)

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.

Source: CERN