
Researchers devise efficient power converter for internet of things


By Larry Hardesty


 

CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technology Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase in current levels.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.
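To make the trade-off concrete, here is a minimal, illustrative simulation of packet-based regulation. All component values are invented for the sketch and do not reflect the MIT chip: the load drains an output capacitor, the controller samples the voltage at a fixed clock rate, and a fixed-size packet of charge is released whenever a sample falls below the 0.9-volt threshold. Running it for different loads shows why a fixed measurement clock is wasteful — the number of packets tracks the load, but the number of measurements (and hence the quiescent energy spent on them) does not.

```python
# Minimal, illustrative simulation of packet-based regulation. All component
# values here are hypothetical, chosen only to make the idea visible; they do
# not reflect the parameters of the MIT chip.

C_OUT = 1e-6          # output capacitance in farads (assumed)
V_THRESHOLD = 0.9     # regulation threshold in volts
PACKET_CHARGE = 2e-9  # charge delivered per packet in coulombs (assumed)
CLOCK_HZ = 1e6        # fixed rate at which the controller samples the output
SIM_TIME = 0.01       # seconds of simulated operation

def simulate(load_current):
    """Return (packets_released, measurements_taken) for a constant load current."""
    dt = 1.0 / CLOCK_HZ
    v_out = V_THRESHOLD
    packets = measurements = 0
    for _ in range(int(SIM_TIME * CLOCK_HZ)):
        v_out -= load_current * dt / C_OUT   # the load drains the output capacitor
        measurements += 1                    # every clock tick costs one measurement
        if v_out < V_THRESHOLD:              # below threshold: release a packet
            v_out += PACKET_CHARGE / C_OUT
            packets += 1
    return packets, measurements

for load in (1e-9, 1e-6, 1e-3):              # 1 nA, 1 uA, and 1 mA loads
    packets, measurements = simulate(load)
    print(f"load {load:.0e} A: {packets:>5} packets, {measurements} measurements")
```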

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Corporation, through its University Shuttle Program.

Source: MIT News Office


Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler


 

CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)
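For readers who want a feel for how temperature and humidity combine into a wet-bulb reading, the sketch below uses Stull’s 2011 empirical approximation, valid near sea-level pressure over ordinary meteorological ranges. It is only a rough stand-alone illustration — not the calculation used in the study — and the example inputs are invented.

```python
import math

def wet_bulb_stull(temp_c, rh_percent):
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature and
    relative humidity, using Stull's 2011 empirical fit. Valid roughly for
    temperatures of -20 C to 50 C and humidities of 5 to 99 percent at sea level."""
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# An invented but plausible Gulf afternoon: 45 C air temperature at 50 percent
# relative humidity lands at roughly the 35 C survivability threshold above.
print(round(wet_bulb_stull(45.0, 50.0), 1))
```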

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear skies, a large body of water that increases heat absorption, and the shallowness of the Persian Gulf itself, which produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the annual Hajj, or Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


 

CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
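The contrast between particle-like hopping and wave-like spreading can be seen in a standard textbook comparison: a classical random walk spreads over a distance that grows with the square root of the number of steps, while a discrete-time quantum walk spreads linearly with the number of steps. The sketch below runs that generic comparison; it is not a model of the virus-chromophore system, only an illustration of why wave-like transport covers ground faster.

```python
import numpy as np

STEPS = 100
N = 2 * STEPS + 1                        # enough positions for STEPS hops
positions = np.arange(-STEPS, STEPS + 1)

# --- classical random walk: spread grows like sqrt(STEPS) ---
classical = np.zeros(N)
classical[STEPS] = 1.0                   # start in the middle
for _ in range(STEPS):
    classical = 0.5 * (np.roll(classical, 1) + np.roll(classical, -1))

# --- discrete-time (Hadamard-coined) quantum walk: spread grows like STEPS ---
psi = np.zeros((N, 2), dtype=complex)    # amplitude per (position, coin state)
psi[STEPS] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin flip
for _ in range(STEPS):
    psi = psi @ H.T                      # flip the coin at every site
    psi[:, 0] = np.roll(psi[:, 0], -1)   # coin-0 component steps left
    psi[:, 1] = np.roll(psi[:, 1], 1)    # coin-1 component steps right
quantum = (np.abs(psi) ** 2).sum(axis=1)

def spread(prob):
    mean = (positions * prob).sum()
    return np.sqrt(((positions - mean) ** 2 * prob).sum())

print(f"classical spread after {STEPS} steps: {spread(classical):.1f} sites")
print(f"quantum   spread after {STEPS} steps: {spread(quantum):.1f} sites")
```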

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

Longstanding problem put to rest

Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.
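For reference, the grid-filling procedure is short enough to state directly. The following is a standard textbook implementation of the Wagner-Fischer dynamic program, written in the plain quadratic-time, quadratic-space form for clarity; a production version would usually keep only two rows of the grid at a time.

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer dynamic program: minimum number of insertions,
    deletions, and substitutions needed to turn string a into string b."""
    rows, cols = len(a) + 1, len(b) + 1
    # grid[i][j] = edit distance between a[:i] and b[:j]
    grid = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        grid[i][0] = i                        # delete all i characters of a
    for j in range(cols):
        grid[0][j] = j                        # insert all j characters of b
    for i in range(1, rows):
        for j in range(1, cols):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            grid[i][j] = min(grid[i - 1][j] + 1,           # deletion
                             grid[i][j - 1] + 1,           # insertion
                             grid[i - 1][j - 1] + substitution)
    return grid[-1][-1]

print(edit_distance("kitten", "sitting"))   # 3
```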

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, while Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.
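As a rough illustration of the splitting idea — not the actual reduction in the paper, which encodes the reconciliation step as an edit-distance computation — the toy sketch below enumerates the assignments that satisfy each half’s internal constraints separately and only then checks the constraints that cross between the halves. The clauses and the variable split are invented for the example.

```python
from itertools import product

# A clause is a list of literals; a literal is (variable_index, wanted_value).
# A clause is satisfied if at least one literal holds. This toy instance and
# the variable split below are invented purely for illustration.
clauses = [
    [(0, True), (1, False)],     # x0 or not x1   (uses only group A variables)
    [(2, True), (3, True)],      # x2 or x3       (uses only group B variables)
    [(1, True), (3, False)],     # x1 or not x3   (crosses the two groups)
]
group_a, group_b = (0, 1), (2, 3)

def satisfied(clause, assignment):
    return any(assignment.get(var) == val for var, val in clause)

def local_solutions(group):
    """All assignments to one group that satisfy every clause lying fully inside it."""
    internal = [c for c in clauses if all(var in group for var, _ in c)]
    for values in product([False, True], repeat=len(group)):
        assignment = dict(zip(group, values))
        if all(satisfied(c, assignment) for c in internal):
            yield assignment

cross = [c for c in clauses
         if not all(var in group_a for var, _ in c)
         and not all(var in group_b for var, _ in c)]

def solve():
    """Reconcile the two halves: pair local solutions and check the cross clauses."""
    for a in local_solutions(group_a):
        for b in local_solutions(group_b):
            combined = {**a, **b}
            if all(satisfied(c, combined) for c in cross):
                return combined
    return None

print("satisfying assignment:", solve())
```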

Source: MIT News Office


Tsunami on demand: the power to harness catastrophic events

A new study published in Nature Physics features a nano-optical chip that makes it possible to generate and control nanoscale rogue waves. The innovative chip was developed by an international team of physicists, led by Andrea Fratalocchi from KAUST (Saudi Arabia), and is expected to have significant applications for energy research and environmental safety.

Can you imagine how much energy is in a tsunami wave, or in a tornado? Energy is all around us, but mainly contained in a quiet state. But there are moments in time when large amounts of energy build up spontaneously and create rare phenomena on a potentially disastrous scale. How these events occur, in many cases, is still a mystery.

To reveal the natural mechanisms behind such high-energy phenomena, Andrea Fratalocchi, assistant professor in the Computer, Electrical and Mathematical Science and Engineering Division of King Abdullah University of Science and Technology (KAUST), led a team of researchers from Saudi Arabia and three European universities and research centers to understand the dynamics of such destructive events and control their formation in new optical chips, which can open various technological applications. The results and implications of this study are published in the journal Nature Physics.

“I have always been fascinated by the unpredictability of nature,” Fratalocchi said. “And I believe that understanding this complexity is the next frontier that will open cutting edge pathways in science and offer novel applications in a variety of areas.”

Fratalocchi’s team began their research by developing new theoretical ideas to explain the formation of rare energetic natural events such as rogue waves — large surface waves that develop spontaneously in deep water and represent a potential risk for vessels and open-ocean oil platforms.

“Our idea was something never tested before,” Fratalocchi continued. “We wanted to demonstrate that small perturbations of a chaotic sea of interacting waves could, contrary to intuition, control the formation of rare events of exceptional amplitude.”

Fully experimental image of a nanoscaled and ultrafast optical rogue wave retrieved by Near-field Scanning Optical Microscope (NSOM). The flow lines visible in the image represent the direction of light energy.
Credit: KAUST

A planar photonic crystal chip, fabricated at the University of St. Andrews and tested at the FOM institute AMOLF in the Amsterdam Science Park, was used to generate ultrafast (163 fs long) and subwavelength (203 nm wide) nanoscale rogue waves, proving that Fratalocchi’s theory was correct. The newly developed photonic chip offered an exceptional level of controllability over these rare events.

Thomas F. Krauss, head of the Photonics Group and Nanocentre Cleanroom at the University of York, UK, was involved in the development of the experiment and the analysis of the data. He shared, “By realizing a sea of interacting waves on a photonic chip, we were able to study the formation of rare high energy events in a controlled environment. We noted that these events only happened when some sets of waves were missing, which is one of the key insights of our study.”

Kobus Kuipers, head of nanophotonics at the FOM Institute AMOLF in the Netherlands, who was involved in the experimental visualization of the rogue waves, was fascinated by their dynamics: “We have developed a microscope that allows us to visualize optical behavior at the nanoscale. Unlike conventional wave behavior, it was remarkable to see the rogue waves suddenly appear, seemingly out of nowhere, and then disappear again…as if they had never been there.”

Andrea Di Falco, leader of the Synthetic Optics group at the University of St. Andrews said, “The advantage of using light confined in an optical chip is that we can control very carefully how the energy in a chaotic system is dissipated, giving rise to these rare and extreme events. It is as if we were able to produce a determined amount of waves of unusual height in a small lake, just by accurately landscaping its coasts and controlling the size and number of its emissaries.”

The outcomes of this project offer leading edge technological applications in energy research, high speed communication and in disaster preparedness.

Fratalocchi and the team believe their research represents a major milestone for KAUST and for the field. “This discovery can change once and for all the way we look at catastrophic events,” concludes Fratalocchi, “opening new perspectives in preventing their destructive appearance on large scales, or using their unique power for ideating new applications at the nanoscale.”

The title of the Nature Physics paper is “Triggering extreme events at the nanoscale in photonic seas.” The paper is accessible on the Nature Physics website: http://dx.doi.org/10.1038/nphys3263

Source: KAUST News

Teaching programming to preschoolers: MIT Research

System that lets children program a robot using stickers embodies new theories about programming languages.

By Larry Hardesty


Researchers at the MIT Media Laboratory are developing a system that enables young children to program interactive robots by affixing stickers to laminated sheets of paper.

Not only could the system introduce children to programming principles, but it could also serve as a research tool, to help determine which computational concepts children can grasp at what ages, and how interactive robots can best be integrated into educational curricula.

Last week, at the Association for Computing Machinery and Institute of Electrical and Electronics Engineers’ International Conference on Human-Robot Interaction, the researchers presented the results of an initial study of the system, which investigated its use by children ages 4 to 8.

“We did not want to put this in the digital world but rather in the tangible world,” says Michal Gordon, a postdoc in media arts and sciences and lead author on the new paper. “It’s a sandbox for exploring computational concepts, but it’s a sandbox that comes to the children’s world.”

In their study, the MIT researchers used an interactive robot called Dragonbot, developed by the Personal Robots Group at the Media Lab, which is led by associate professor of media arts and sciences Cynthia Breazeal. Dragonbot has audio and visual sensors, a speech synthesizer, a range of expressive gestures, and a video screen for a face that can assume a variety of expressions. The programs that children created dictated how Dragonbot would react to stimuli.

“It’s programming in the context of relational interactions with the robot,” says Edith Ackermann, a developmental psychologist and visiting professor in the Personal Robots Group, who with Gordon and Breazeal is a co-author on the new paper. “This is what children do — they’re learning about social relations. So taking this expression of computational principles to the social world is very appropriate.”

Lessons that stick

The root components of the programming system are triangular and circular stickers — which represent stimuli and responses, respectively — and arrow stickers, which represent relationships between them. Children can first create computational “templates” by affixing triangles, circles, and arrows to sheets of laminated paper. They then fill in the details with stickers that represent particular stimuli — like thumbs up or down — and responses — like the narrowing or widening of Dragonbot’s eyes. There are also blank stickers on which older children can write their own verbal cues and responses.

Researchers in the Personal Robots Group are developing a computer vision system that will enable children to convey new programs to Dragonbot simply by holding pages of stickers up to its camera. But for the purposes of the new study, the system’s performance had to be perfectly reliable, so one of the researchers would manually enter the stimulus-and-response sequences devised by the children, using a tablet computer with a touch-screen interface that featured icons depicting all the available options.

To introduce a new subject to the system, the researchers would ask him or her to issue an individual command, by attaching a single response sticker to a small laminated sheet. When presented with the sheet, Dragonbot would execute the command. But when it’s presented with a program, it instead nods its head and says, “I’ve got it.” Thereafter, it will execute the specified chain of responses whenever it receives the corresponding stimulus.

Even the youngest subjects were able to distinguish between individual commands and programs, and interviews after their sessions suggested that they understood that programs, unlike commands, modified the internal state of the robot. The researchers plan additional studies to determine the extent of their understanding.

Paradigm shift

The sticker system is, in fact, designed to encourage a new way of thinking about programming, one that may be more consistent with how computation is done in the 21st century.

“The systems we’re programming today are not sequential, as they were 20 or 30 years back,” Gordon says. “A system has many inputs coming in, complex state, and many outputs.” A cellphone, for instance, might be monitoring incoming transmissions over both Wi-Fi and the cellular network while playing back a video, transmitting the audio over Bluetooth, and running a timer that’s set to go off when the rice on the stove has finished cooking.

As a graduate student in computer science at the Weizmann Institute of Science in Israel, Gordon explains, she worked with her advisor, David Harel, on a new programming paradigm called scenario-based programming. “The idea is to describe your code in little scenarios, and the engine in the back connects them,” she explains. “You could think of it as rules, with triggers and actions.” Gordon and her colleagues’ new system could be used to introduce children to the principles of conventional, sequential programming. But it’s well adapted to scenario-based programming.

“It’s actually how we think about how programs are written before we try to integrate it into a whole programming artifact,” she says. “So I was thinking, ‘Why not try it earlier?’”
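The distinction between a one-off command and a stored program can be made concrete with a tiny trigger-and-action interpreter. The sketch below is purely illustrative — the sticker names, responses, and the “I’ve got it” acknowledgment are modeled on the description above, not taken from the actual Dragonbot software.

```python
class StickerBot:
    """Toy scenario-style interpreter: a 'program' is a set of stimulus -> responses
    rules that persist, while a bare command is executed once and forgotten."""

    def __init__(self):
        self.rules = {}                       # stimulus sticker -> list of responses

    def execute(self, response):
        print(f"robot does: {response}")      # stand-in for gestures, sounds, etc.

    def run_command(self, response):
        self.execute(response)                # one-off: no internal state changes

    def load_program(self, stimulus, responses):
        self.rules[stimulus] = list(responses)
        print("I've got it.")                 # the robot acknowledges the new rule

    def perceive(self, stimulus):
        for response in self.rules.get(stimulus, []):
            self.execute(response)            # replay the stored chain of responses

bot = StickerBot()
bot.run_command("widen eyes")                         # single response sticker
bot.load_program("thumbs up", ["smile", "wag tail"])  # triangle -> circles program
bot.perceive("thumbs up")                             # stimulus triggers the chain
bot.perceive("thumbs down")                           # no rule stored: nothing happens
```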

Source: MIT News Office



Quantum sensor’s advantages survive entanglement breakdown

Preserving the fragile quantum property known as entanglement isn’t necessary to reap benefits.

By Larry Hardesty 


CAMBRIDGE, Mass. – The extraordinary promise of quantum information processing — solving problems that classical computers can’t, perfectly secure communication — depends on a phenomenon called “entanglement,” in which the physical states of different quantum particles become interrelated. But entanglement is very fragile, and the difficulty of preserving it is a major obstacle to developing practical quantum information systems.

In a series of papers since 2008, members of the Optical and Quantum Communications Group at MIT’s Research Laboratory of Electronics have argued that optical systems that use entangled light can outperform classical optical systems — even when the entanglement breaks down.

Two years ago, they showed that systems that begin with entangled light could offer much more efficient means of securing optical communications. And now, in a paper appearing in Physical Review Letters, they demonstrate that entanglement can also improve the performance of optical sensors, even when it doesn’t survive light’s interaction with the environment.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

“That is something that has been missing in the understanding that a lot of people have in this field,” says senior research scientist Franco Wong, one of the paper’s co-authors and, together with Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, co-director of the Optical and Quantum Communications Group. “They feel that if unavoidable loss and noise make the light being measured look completely classical, then there’s no benefit to starting out with something quantum. Because how can it help? And what this experiment shows is that yes, it can still help.”

Phased in

Entanglement means that the physical state of one particle constrains the possible states of another. Electrons, for instance, have a property called spin, which describes their magnetic orientation. If two electrons are orbiting an atom’s nucleus at the same distance, they must have opposite spins. This spin entanglement can persist even if the electrons leave the atom’s orbit, but interactions with the environment break it down quickly.

In the MIT researchers’ system, two beams of light are entangled, and one of them is stored locally — racing through an optical fiber — while the other is projected into the environment. When light from the projected beam — the “probe” — is reflected back, it carries information about the objects it has encountered. But this light is also corrupted by the environmental influences that engineers call “noise.” Recombining it with the locally stored beam helps suppress the noise, recovering the information.

The local beam is useful for noise suppression because its phase is correlated with that of the probe. If you think of light as a wave, with regular crests and troughs, two beams are in phase if their crests and troughs coincide. If the crests of one are aligned with the troughs of the other, their phases are anti-correlated.

But light can also be thought of as consisting of particles, or photons. And at the particle level, phase is a murkier concept.

“Classically, you can prepare beams that are completely opposite in phase, but this is only a valid concept on average,” says Zheshen Zhang, a postdoc in the Optical and Quantum Communications Group and first author on the new paper. “On average, they’re opposite in phase, but quantum mechanics does not allow you to precisely measure the phase of each individual photon.”

Improving the odds

Instead, quantum mechanics interprets phase statistically. Given particular measurements of two photons, from two separate beams of light, there’s some probability that the phases of the beams are correlated. The more photons you measure, the greater your certainty that the beams are either correlated or not. With entangled beams, that certainty increases much more rapidly than it does with classical beams.

When a probe beam interacts with the environment, the noise it accumulates also increases the uncertainty of the ensuing phase measurements. But that’s as true of classical beams as it is of entangled beams. Because entangled beams start out with stronger correlations, even when noise causes them to fall back within classical limits, they still fare better than classical beams do under the same circumstances.

“Going out to the target and reflecting and then coming back from the target attenuates the correlation between the probe and the reference beam by the same factor, regardless of whether you started out at the quantum limit or started out at the classical limit,” Shapiro says. “If you started with the quantum case that’s so many times bigger than the classical case, that relative advantage stays the same, even as both beams become classical due to the loss and the noise.”

In experiments that compared optical systems that used entangled light and classical light, the researchers found that the entangled-light systems increased the signal-to-noise ratio — a measure of how much information can be recaptured from the reflected probe — by 20 percent. That accorded very well with their theoretical predictions.

But the theory also predicts that improvements in the quality of the optical equipment used in the experiment could double or perhaps even quadruple the signal-to-noise ratio. Since detection error declines exponentially with the signal-to-noise ratio, that could translate to a million-fold increase in sensitivity.
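To see why a modest gain in signal-to-noise ratio could translate into such a large gain in sensitivity, consider a toy error model in which detection error falls off as exp(-SNR). This is an assumption made only for illustration — it is not the error exponent derived in the paper — but it shows how multiplying the SNR by four multiplies the exponent by four, shrinking the error probability by many orders of magnitude.

```python
import math

def error_probability(snr):
    """Toy model: detection error assumed to fall off as exp(-SNR).
    Illustrative assumption only, not the paper's analysis."""
    return math.exp(-snr)

BASELINE_SNR = 4.6   # hypothetical classical-system operating point (ln 100)

for factor, label in [(1.0, "classical baseline"),
                      (1.2, "20% SNR gain measured with entangled light"),
                      (4.0, "4x SNR gain projected with better optics")]:
    err = error_probability(factor * BASELINE_SNR)
    print(f"{label:45s} error ~ {err:.1e}")
```

With these invented numbers, the quadrupled SNR cuts the error probability from about one in a hundred to about one in a hundred million — the sense in which a "million-fold increase in sensitivity" should be read.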

Source: MIT News Office

More-flexible digital communication

New theory could yield more-reliable communication protocols.

By Larry Hardesty


Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.

Humans are much more flexible. Two strangers may come to a conversation with wildly differing vocabularies and frames of reference, but they will quickly assess the extent of their mutual understanding and tailor their speech accordingly.

Madhu Sudan, an adjunct professor of electrical engineering and computer science at MIT and a principal researcher at Microsoft Research New England, wants to bring that type of flexibility to computer communication. In a series of recent papers, he and his colleagues have begun to describe theoretical limits on the degree of imprecision that communicating computers can tolerate, with very real implications for the design of communication protocols.

“Our goal is not to understand how human communication works,” Sudan says. “Most of the work is really in trying to abstract, ‘What is the kind of problem that human communication tends to solve nicely, [and] designed communication doesn’t?’ — and let’s now see if we can come up with designed communication schemes that do the same thing.”

One thing that humans do well is gauging the minimum amount of information they need to convey in order to get a point across. Depending on the circumstances, for instance, one co-worker might ask another, “Who was that guy?”; “Who was that guy in your office?”; “Who was that guy in your office this morning?”; or “Who was that guy in your office this morning with the red tie and glasses?”

Similarly, the first topic Sudan and his colleagues began investigating is compression, or the minimum number of bits that one device would need to send another in order to convey all the information in a data file.

Uneven odds

In a paper presented in 2011, at the ACM Symposium on Innovations in Computer Science (now known as Innovations in Theoretical Computer Science, or ITCS), Sudan and colleagues at Harvard University, Microsoft, and the University of Pennsylvania considered a hypothetical case in which the devices shared an almost infinite codebook that assigned a random string of symbols — a kind of serial number — to every possible message that either might send.

Of course, such a codebook is entirely implausible, but it allowed the researchers to get a statistical handle on the problem of compression. Indeed, it’s an extension of one of the concepts that longtime MIT professor Claude Shannon used to determine the maximum capacity of a communication channel in the seminal 1948 paper that created the field of information theory.

In Sudan and his colleagues’ codebook, a vast number of messages might have associated strings that begin with the same symbol. But fewer messages will have strings that share their first two symbols, fewer still strings that share their first three symbols, and so on. In any given instance of communication, the question is how many symbols of the string one device needs to send the other in order to pick out a single associated message.

The answer to that question depends on the probability that any given interpretation of a string of symbols makes sense in context. By way of analogy, if your co-worker has had only one visitor all day, asking her, “Who was that guy in your office?” probably suffices. If she’s had a string of visitors, you may need to specify time of day and tie color.

Existing compression schemes do, in fact, exploit statistical regularities in data. But Sudan and his colleagues considered the case in which sender and receiver assign different probabilities to different interpretations. They were able to show that, so long as protocol designers can make reasonable assumptions about the ranges within which the probabilities might fall, good compression is still possible.

For instance, Sudan says, consider a telescope in deep-space orbit. The telescope’s designers might assume that 90 percent of what it sees will be blackness, and they can use that assumption to compress the image data it sends back to Earth. With existing protocols, anyone attempting to interpret the telescope’s transmissions would need to know the precise figure — 90 percent — that the compression scheme uses. But Sudan and his colleagues showed that the protocol could be designed to accommodate a range of assumptions — from, say, 85 percent to 95 percent — that might be just as reasonable as 90 percent.
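A minimal way to picture the shared-codebook idea is sketched below: both sides hold the same table of codewords, the sender transmits only as many leading symbols as its own probability estimate suggests are needed, and the receiver picks, among the messages whose codewords match that prefix, the one its own (slightly different) prior favors. The messages, priors, codewords, and slack parameter are all invented for the illustration; the theoretical construction uses random, effectively unbounded strings.

```python
import math

messages = ["who was that guy",
            "who was that guy in your office",
            "who was that guy in your office this morning"]

# Stand-in for the shared codebook: one long bit string per message.
# (In the theoretical model these strings are random and effectively unbounded.)
codebook = {
    messages[0]: "01101001100101101001011001101001",
    messages[1]: "10110100101101001011010010110100",
    messages[2]: "11010010110100110100101101001011",
}

# Sender and receiver hold similar but not identical priors over the messages.
sender_prior   = {messages[0]: 0.70, messages[1]: 0.20, messages[2]: 0.10}
receiver_prior = {messages[0]: 0.60, messages[1]: 0.25, messages[2]: 0.15}

SLACK = 2  # extra symbols sent to absorb the mismatch between the two priors

def encode(msg):
    """Send only a prefix of the codeword; rarer messages get longer prefixes."""
    prefix_len = math.ceil(-math.log2(sender_prior[msg])) + SLACK
    return codebook[msg][:prefix_len]

def decode(prefix):
    """Among messages whose codeword starts with the prefix, the receiver picks
    the one it considers most probable under its own prior."""
    candidates = [m for m in messages if codebook[m].startswith(prefix)]
    return max(candidates, key=receiver_prior.get)

for msg in messages:
    prefix = encode(msg)
    print(f"{len(prefix)} bits sent -> decoded correctly: {decode(prefix) == msg}")
```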

Buggy codebook

In a paper being presented at the next ITCS, in January, Sudan and colleagues at Columbia University, Carnegie Mellon University, and Microsoft add even more uncertainty to their compression model. In the new paper, not only do sender and receiver have somewhat different probability estimates, but they also have slightly different codebooks. Again, the researchers were able to devise a protocol that would still provide good compression.

They also generalized their model to new contexts. For instance, Sudan says, in the era of cloud computing, data is constantly being duplicated on servers scattered across the Internet, and data-management systems need to ensure that the copies are kept up to date. One way to do that efficiently is by performing “checksums,” or adding up a bunch of bits at corresponding locations in the original and the copy and making sure the results match.

That method, however, works only if the servers know in advance which bits to add up — and if they store the files in such a way that data locations correspond perfectly. Sudan and his colleagues’ protocol could provide a way for servers using different file-management schemes to generate consistency checks on the fly.

“I shouldn’t tell you if the number of 1’s that I see in this subset is odd or even,” Sudan says. “I should send you some coarse information saying 90 percent of the bits in this set are 1’s. And you say, ‘Well, I see 89 percent,’ but that’s close to 90 percent — that’s actually a good protocol. We prove this.”
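A toy version of that coarse consistency check is sketched below (the data, region, and tolerance are invented): an exact position-by-position comparison breaks down when two servers’ layouts are even slightly misaligned, while comparing a summary statistic — the fraction of 1 bits in an agreed-upon region — within a tolerance still distinguishes a healthy copy from a badly damaged one.

```python
import random

random.seed(1)
original = [random.randint(0, 1) for _ in range(1000)]
shifted_copy = original[5:] + original[:5]      # identical data, layout off by five positions
corrupted_copy = original[:500] + [0] * 500     # second half lost and zero-filled

# Strict, position-based comparison: brittle when the layouts are misaligned.
positions = random.sample(range(1000), 50)
mismatches = sum(original[p] != shifted_copy[p] for p in positions)
print(f"position-by-position mismatches on identical data: {mismatches} of 50")

# Coarse statistic: fraction of 1 bits, compared within a tolerance.
def ones_fraction(bits):
    return sum(bits) / len(bits)

TOLERANCE = 0.02
for name, copy in [("shifted copy", shifted_copy), ("corrupted copy", corrupted_copy)]:
    close = abs(ones_fraction(original) - ones_fraction(copy)) <= TOLERANCE
    print(f"{name}: coarse check {'passes' if close else 'fails'}")
```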

“This sequence of works puts forward a general theory of goal-oriented communication, where the focus is not on the raw data being communicated but rather on its meaning,” says Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel. “I consider this sequence a work of fundamental nature.”

“Following a dominant approach in 20th-century philosophy, the work associates the meaning of communication with the goal achieved by it and provides a mathematical framework for discussing all these natural notions,” he adds. “This framework is based on a general definition of the notion of a goal and leads to a problem that is complementary to the problem of reliable communication considered by Shannon, which established information theory.”

 

Source: MIT News Office

Recommendation theory

Model for evaluating product-recommendation algorithms suggests that trial and error get it right.

By Larry Hardesty

Devavrat Shah’s group at MIT’s Laboratory for Information and Decision Systems (LIDS) specializes in analyzing how social networks process information. In 2012, the group demonstrated algorithms that could predict what topics would trend on Twitter up to five hours in advance; this year, they used the same framework to predict fluctuations in the prices of the online currency known as Bitcoin.

Next month, at the Conference on Neural Information Processing Systems, they’ll present a paper that applies their model to the recommendation engines that are familiar from websites like Amazon and Netflix — with surprising results.

“Our interest was, we have a nice model for understanding data-processing from social data,” says Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science. “It makes sense in terms of how people make decisions, exhibit preferences, or take actions. So let’s go and exploit it and design a better, simple, basic recommendation algorithm, and it will be something very different. But it turns out that under that model, the standard recommendation algorithm is the right thing to do.”

The standard algorithm is known as “collaborative filtering.” To get a sense of how it works, imagine a movie-streaming service that lets users rate movies they’ve seen. To generate recommendations specific to you, the algorithm would first assign the other users similarity scores based on the degree to which their ratings overlap with yours. Then, to predict your response to a particular movie, it would aggregate the ratings the movie received from other users, weighted according to similarity scores.

To simplify their analysis, Shah and his collaborators — Guy Bresler, a postdoc in LIDS, and George Chen, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS) who is co-advised by Shah and EECS associate professor Polina Golland — assumed that the ratings system had two values, thumbs-up or thumbs-down. The taste of every user could thus be described, with perfect accuracy, by a string of ones and zeroes, where the position in the string corresponds to a particular movie and the number at that location indicates the rating.
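A bare-bones version of that similarity-weighted prediction, using the same thumbs-up (1) and thumbs-down (0) convention, might look like the sketch below. The users, movies, and ratings are invented, and real systems add normalization and regularization that are omitted here.

```python
def similarity(a, b):
    """Fraction of commonly rated movies on which two users agree."""
    shared = [m for m in a if m in b]
    if not shared:
        return 0.0
    return sum(a[m] == b[m] for m in shared) / len(shared)

def predict(target, others, movie):
    """Similarity-weighted vote of the other users' ratings for `movie`."""
    num = den = 0.0
    for user in others:
        if movie in user:
            weight = similarity(target, user)
            num += weight * user[movie]
            den += weight
    return num / den if den else None    # None: nobody comparable has rated it

# Hypothetical thumbs-up (1) / thumbs-down (0) ratings
me     = {"Die Hard": 1, "The Notebook": 0, "Shakespeare in Love": 0}
others = [
    {"Die Hard": 1, "The Notebook": 0, "The Godfather": 1},
    {"Die Hard": 0, "The Notebook": 1, "The Godfather": 0},
    {"Shakespeare in Love": 0, "The Godfather": 1},
]

score = predict(me, others, "The Godfather")
print(f"predicted probability I like The Godfather: {score:.2f}")
```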

Birds of a feather

The MIT researchers’ model assumes that large groups of such strings can be clustered together, and that those clusters can be described probabilistically. Rather than ones and zeroes at each location in the string, a probabilistic cluster model would feature probabilities: an 80 percent chance that the members of the cluster will like movie “A,” a 20 percent chance that they’ll like movie “B,” and so on.

The question is how many such clusters are required to characterize a population. If half the people who like “Die Hard” also like “Shakespeare in Love,” but the other half hate it, then ideally, you’d like to split “Die Hard” fans into two clusters. Otherwise, you’d lose correlations between their preferences that could be predictively useful. On the other hand, the more clusters you have, the more ratings you need to determine which of them a given user belongs to. Reliable prediction from limited data becomes impossible.

In their new paper, the MIT researchers show that so long as the number of clusters required to describe the variation in a population is low, collaborative filtering yields nearly optimal predictions. But in practice, how low is that number?

To answer that question, the researchers examined data on 10 million users of a movie-streaming site and identified 200 who had rated the same 500 movies. They found that, in fact, just five clusters — five probabilistic models — were enough to account for most of the variation in the population.

Missing links

While the researchers’ model corroborates the effectiveness of collaborative filtering, it also suggests ways to improve it. In general, the more information a collaborative-filtering algorithm has about users’ preferences, the more accurate its predictions will be. But not all additional information is created equal. If a user likes “The Godfather,” the information that he also likes “The Godfather: Part II” will probably have less predictive power than the information that he also likes “The Notebook.”

Using their analytic framework, the LIDS researchers show how to select a small number of products that carry a disproportionate amount of information about users’ tastes. If the service provider recommended those products to all its customers, then, based on the resulting ratings, it could much more efficiently sort them into probability clusters, which should improve the quality of its recommendations.

Sujay Sanghavi, an associate professor of electrical and computer engineering at the University of Texas at Austin, considers this the most interesting aspect of the research. “If you do some kind of collaborative filtering, two things are happening,” he says. “I’m getting value from it as a user, but other people are getting value, too. Potentially, there is a trade-off between these things. If there’s a popular movie, you can easily show that I’ll like it, but it won’t improve the recommendations for other people.”

That trade-off, Sanghavi says, “has been looked at in an empirical context, but there’s been nothing that’s principled. To me, what is appealing about this paper is that they have a principled look at this issue, which no other work has done. They’ve found a new kind of problem. They are looking at a new issue.”

Source: MIT News


Projecting a robot’s intentions

A new spin on virtual reality helps engineers read robots’ minds.  

By Jennifer Chu


Video/release: http://newsoffice.mit.edu/2014/system-shows-robot-intentions-1029


[MIT researchers explain their new visualization system that can project a robot's "thoughts." Video: Melanie Gonick/MIT]

CAMBRIDGE, Mass. – In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR) — a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky’s the limit as far as the types of virtual environments that the new system may project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.

Source: MIT News