
Quantum sensor’s advantages survive entanglement breakdown

Preserving the fragile quantum property known as entanglement isn’t necessary to reap benefits.

By Larry Hardesty 


CAMBRIDGE, Mass. – The extraordinary promise of quantum information processing — solving problems that classical computers can’t, perfectly secure communication — depends on a phenomenon called “entanglement,” in which the physical states of different quantum particles become interrelated. But entanglement is very fragile, and the difficulty of preserving it is a major obstacle to developing practical quantum information systems.

In a series of papers since 2008, members of the Optical and Quantum Communications Group at MIT’s Research Laboratory of Electronics have argued that optical systems that use entangled light can outperform classical optical systems — even when the entanglement breaks down.

Two years ago, they showed that systems that begin with entangled light could offer much more efficient means of securing optical communications. And now, in a paper appearing in Physical Review Letters, they demonstrate that entanglement can also improve the performance of optical sensors, even when it doesn’t survive light’s interaction with the environment.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration Credit: Jose-Luis Olivares/MIT

“That is something that has been missing in the understanding that a lot of people have in this field,” says senior research scientist Franco Wong, one of the paper’s co-authors and, together with Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, co-director of the Optical and Quantum Communications Group. “They feel that if unavoidable loss and noise make the light being measured look completely classical, then there’s no benefit to starting out with something quantum. Because how can it help? And what this experiment shows is that yes, it can still help.”

Phased in

Entanglement means that the physical state of one particle constrains the possible states of another. Electrons, for instance, have a property called spin, which describes their magnetic orientation. If two electrons are orbiting an atom’s nucleus at the same distance, they must have opposite spins. This spin entanglement can persist even if the electrons leave the atom’s orbit, but interactions with the environment break it down quickly.

In the MIT researchers’ system, two beams of light are entangled, and one of them is stored locally — racing through an optical fiber — while the other is projected into the environment. When light from the projected beam — the “probe” — is reflected back, it carries information about the objects it has encountered. But this light is also corrupted by the environmental influences that engineers call “noise.” Recombining it with the locally stored beam helps suppress the noise, recovering the information.

The local beam is useful for noise suppression because its phase is correlated with that of the probe. If you think of light as a wave, with regular crests and troughs, two beams are in phase if their crests and troughs coincide. If the crests of one are aligned with the troughs of the other, their phases are anti-correlated.
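As a quick numeric illustration of that wave picture (a toy calculation, not part of the MIT experiment), the average product of two sampled waveforms is positive when their crests line up and negative when crests meet troughs:

```python
import numpy as np

# Sample five full cycles of three ideal waves.
t = np.linspace(0, 1, 1000, endpoint=False)
reference  = np.cos(2 * np.pi * 5 * t)           # locally stored beam
in_phase   = np.cos(2 * np.pi * 5 * t)           # crests coincide
anti_phase = np.cos(2 * np.pi * 5 * t + np.pi)   # crests meet troughs

print(np.mean(reference * in_phase))    # ~ +0.5 : phases correlated
print(np.mean(reference * anti_phase))  # ~ -0.5 : phases anti-correlated
```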

But light can also be thought of as consisting of particles, or photons. And at the particle level, phase is a murkier concept.

“Classically, you can prepare beams that are completely opposite in phase, but this is only a valid concept on average,” says Zheshen Zhang, a postdoc in the Optical and Quantum Communications Group and first author on the new paper. “On average, they’re opposite in phase, but quantum mechanics does not allow you to precisely measure the phase of each individual photon.”

Improving the odds

Instead, quantum mechanics interprets phase statistically. Given particular measurements of two photons, from two separate beams of light, there’s some probability that the phases of the beams are correlated. The more photons you measure, the greater your certainty that the beams are either correlated or not. With entangled beams, that certainty increases much more rapidly than it does with classical beams.

When a probe beam interacts with the environment, the noise it accumulates also increases the uncertainty of the ensuing phase measurements. But that’s as true of classical beams as it is of entangled beams. Because entangled beams start out with stronger correlations, even when noise causes them to fall back within classical limits, they still fare better than classical beams do under the same circumstances.

“Going out to the target and reflecting and then coming back from the target attenuates the correlation between the probe and the reference beam by the same factor, regardless of whether you started out at the quantum limit or started out at the classical limit,” Shapiro says. “If you started with the quantum case that’s so many times bigger than the classical case, that relative advantage stays the same, even as both beams become classical due to the loss and the noise.”
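A purely classical toy model (sketched below in Python) can at least illustrate that scaling. It draws the stored reference and the returning probe as correlated random samples, attenuates both starting correlations by the same loss factor, and estimates the correlation from many samples, so the signal-to-noise ratio of the estimate grows with the number of samples, echoing the photon-counting argument above. The starting correlations of 0.99 and 0.50 are made-up stand-ins for the quantum and classical cases, not the actual physical limits.

```python
import numpy as np

rng = np.random.default_rng(1)

def measured_snr(initial_corr, loss, n_samples=1_000_000):
    """SNR of a correlation estimate after loss, in a toy Gaussian model."""
    corr = initial_corr * loss            # loss attenuates the correlation
    cov = [[1.0, corr], [corr, 1.0]]
    ref, probe = rng.multivariate_normal([0.0, 0.0], cov, n_samples).T
    products = ref * probe
    signal = products.mean()                        # correlation estimate
    noise = products.std() / np.sqrt(n_samples)     # statistical uncertainty
    return signal / noise

loss = 0.05                                   # heavy attenuation and noise
snr_strong = measured_snr(0.99, loss)         # "entangled-like" start (made up)
snr_weak   = measured_snr(0.50, loss)         # "classical-like" start (made up)

# The ratio stays near 0.99 / 0.50, however small the loss factor becomes.
print(f"SNR ratio after loss: {snr_strong / snr_weak:.1f}")
```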

In experiments that compared optical systems that used entangled light and classical light, the researchers found that the entangled-light systems increased the signal-to-noise ratio — a measure of how much information can be recaptured from the reflected probe — by 20 percent. That accorded very well with their theoretical predictions.

But the theory also predicts that improvements in the quality of the optical equipment used in the experiment could double or perhaps even quadruple the signal-to-noise ratio. Since detection error declines exponentially with the signal-to-noise ratio, that could translate to a million-fold increase in sensitivity.
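A back-of-the-envelope calculation shows how a four-fold SNR gain can become a roughly million-fold sensitivity gain, assuming the detection-error probability falls off as exp(-SNR); the baseline SNR value below is hypothetical and chosen only to make the arithmetic concrete.

```python
import math

# Hypothetical baseline; only the exp(-SNR) scaling and the "quadruple"
# factor come from the text above.
snr_baseline = 4.6
snr_improved = 4 * snr_baseline

err_baseline = math.exp(-snr_baseline)
err_improved = math.exp(-snr_improved)

print(f"detection error shrinks by a factor of {err_baseline / err_improved:,.0f}")
# -> roughly a million-fold gain in sensitivity
```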

Source: MIT News Office

Recommendation theory

Model for evaluating product-recommendation algorithms suggests that trial and error got it right.

By Larry Hardesty

Devavrat Shah’s group at MIT’s Laboratory for Information and Decision Systems (LIDS) specializes in analyzing how social networks process information. In 2012, the group demonstrated algorithms that could predict what topics would trend on Twitter up to five hours in advance; this year, they used the same framework to predict fluctuations in the prices of the online currency known as Bitcoin.

Next month, at the Conference on Neural Information Processing Systems, they’ll present a paper that applies their model to the recommendation engines that are familiar from websites like Amazon and Netflix — with surprising results.

“Our interest was, we have a nice model for understanding data-processing from social data,” says Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science. “It makes sense in terms of how people make decisions, exhibit preferences, or take actions. So let’s go and exploit it and design a better, simple, basic recommendation algorithm, and it will be something very different. But it turns out that under that model, the standard recommendation algorithm is the right thing to do.”

The standard algorithm is known as “collaborative filtering.” To get a sense of how it works, imagine a movie-streaming service that lets users rate movies they’ve seen. To generate recommendations specific to you, the algorithm would first assign the other users similarity scores based on the degree to which their ratings overlap with yours. Then, to predict your response to a particular movie, it would aggregate the ratings the movie received from other users, weighted according to similarity scores.
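The toy Python sketch below captures that two-step recipe: score every other user by how often their ratings agree with yours, then average their verdicts on the target movie with those scores as weights. The ratings matrix and the agreement-fraction similarity measure are hypothetical, so this is an illustration of the idea rather than the researchers' exact formulation.

```python
import numpy as np

ratings = np.array([
    # movies:   A   B   C   D   E     (+1 thumbs-up, -1 thumbs-down, 0 unrated)
    [ 1,  1, -1,  0,  1],   # user 0
    [ 1,  1, -1, -1,  0],   # user 1
    [-1,  0,  1,  1, -1],   # user 2
    [ 1, -1,  0,  1,  1],   # user 3  ("you")
])

def predict(ratings, user, movie):
    """Predict `user`'s thumbs-up/down on `movie` from other users' ratings,
    weighted by how closely each of them agrees with `user` elsewhere."""
    me = ratings[user]
    verdicts, weights = [], []
    for other in range(len(ratings)):
        if other == user or ratings[other, movie] == 0:
            continue                                  # no rating to borrow
        shared = (me != 0) & (ratings[other] != 0)    # movies both have rated
        shared[movie] = False                         # don't peek at the target
        if not shared.any():
            continue
        similarity = np.mean(me[shared] == ratings[other][shared])
        verdicts.append(ratings[other, movie])
        weights.append(similarity)
    if not weights or sum(weights) == 0:
        return 0.0                                    # no usable neighbors
    return float(np.average(verdicts, weights=weights))

print(predict(ratings, user=3, movie=2))   # > 0 means "likely thumbs-up"
```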

To simplify their analysis, Shah and his collaborators — Guy Bresler, a postdoc in LIDS, and George Chen, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS) who is co-advised by Shah and EECS associate professor Polina Golland — assumed that the ratings system had two values, thumbs-up or thumbs-down. The taste of every user could thus be described, with perfect accuracy, by a string of ones and zeroes, where the position in the string corresponds to a particular movie and the number at that location indicates the rating.

Birds of a feather

The MIT researchers’ model assumes that large groups of such strings can be clustered together, and that those clusters can be described probabilistically. Rather than ones and zeroes at each location in the string, a probabilistic cluster model would feature probabilities: an 80 percent chance that the members of the cluster will like movie “A,” a 20 percent chance that they’ll like movie “B,” and so on.
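Concretely, one such cluster might be summarized by a vector of like-probabilities, from which individual users' rating strings can be sampled; the numbers below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical cluster: per-movie probabilities of a thumbs-up.
like_probs = np.array([0.8, 0.2, 0.6, 0.9])   # movies A, B, C, D

def sample_user(like_probs):
    """Draw one member's rating string (1 = thumbs-up, 0 = thumbs-down)."""
    return (rng.random(like_probs.size) < like_probs).astype(int)

print(sample_user(like_probs))   # e.g. [1 0 1 1]
```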

The question is how many such clusters are required to characterize a population. If half the people who like “Die Hard” also like “Shakespeare in Love,” but the other half hate it, then ideally, you’d like to split “Die Hard” fans into two clusters. Otherwise, you’d lose correlations between their preferences that could be predictively useful. On the other hand, the more clusters you have, the more ratings you need to determine which of them a given user belongs to. Reliable prediction from limited data becomes impossible.

In their new paper, the MIT researchers show that so long as the number of clusters required to describe the variation in a population is low, collaborative filtering yields nearly optimal predictions. But in practice, how low is that number?

To answer that question, the researchers examined data on 10 million users of a movie-streaming site and identified 200 who had rated the same 500 movies. They found that, in fact, just five clusters — five probabilistic models — were enough to account for most of the variation in the population.

Missing links

While the researchers’ model corroborates the effectiveness of collaborative filtering, it also suggests ways to improve it. In general, the more information a collaborative-filtering algorithm has about users’ preferences, the more accurate its predictions will be. But not all additional information is created equal. If a user likes “The Godfather,” the information that he also likes “The Godfather: Part II” will probably have less predictive power than the information that he also likes “The Notebook.”

Using their analytic framework, the LIDS researchers show how to select a small number of products that carry a disproportionate amount of information about users’ tastes. If the service provider recommended those products to all its customers, then, based on the resulting ratings, it could much more efficiently sort them into probability clusters, which should improve the quality of its recommendations.
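One simple way to see the idea (a heuristic sketch, not necessarily the selection criterion used in the paper) is to favor products whose like-probability varies most across the clusters: a rating on such a product does the most to reveal which cluster a user belongs to.

```python
import numpy as np

# Hypothetical cluster model: rows are clusters, columns are movies, entries
# are the probability that a member of the cluster likes the movie.
cluster_like_probs = np.array([
    [0.9, 0.5, 0.1, 0.8],
    [0.1, 0.5, 0.9, 0.7],
    [0.9, 0.5, 0.9, 0.2],
])

def most_informative(cluster_like_probs, k=2):
    """Rank movies by how much their like-probability varies across clusters.

    The variance score is a crude stand-in for a principled information
    measure: a movie rated identically by every cluster (column B here)
    tells you nothing about which cluster a user belongs to."""
    spread = cluster_like_probs.var(axis=0)
    return np.argsort(spread)[::-1][:k]

print(most_informative(cluster_like_probs))   # movie indices to ask about first
```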

Sujay Sanghavi, an associate professor of electrical and computer engineering at the University of Texas at Austin, considers this the most interesting aspect of the research. “If you do some kind of collaborative filtering, two things are happening,” he says. “I’m getting value from it as a user, but other people are getting value, too. Potentially, there is a trade-off between these things. If there’s a popular movie, you can easily show that I’ll like it, but it won’t improve the recommendations for other people.”

That trade-off, Sanghavi says, “has been looked at in an empirical context, but there’s been nothing that’s principled. To me, what is appealing about this paper is that they have a principled look at this issue, which no other work has done. They’ve found a new kind of problem. They are looking at a new issue.”

Source: MIT News