
Better debugger

System to automatically find a common type of programming bug significantly outperforms its predecessors.

By Larry Hardesty

CAMBRIDGE, Mass. – Integer overflows are one of the most common bugs in computer programs — not only causing programs to crash but, even worse, potentially offering points of attack for malicious hackers. Computer scientists have devised a battery of techniques to identify them, but all have drawbacks.

This month, at the Association for Computing Machinery’s International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new algorithm for identifying integer-overflow bugs. The researchers tested the algorithm on five common open-source programs, in which previous analyses had found three bugs. The new algorithm found all three known bugs — and 11 new ones.

The variables used by computer programs come in a few standard types, such as floating-point numbers, which can contain decimals; characters, like the letters of this sentence; or integers, which are whole numbers. Every time the program creates a new variable, it assigns it a fixed amount of space in memory.

If a program tries to store too large a number at a memory address reserved for an integer, the operating system will simply lop off the bits that don’t fit. “It’s like a car odometer,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the new paper. “You go over a certain number of miles, you go back to zero.”
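The odometer effect can be sketched in a few lines. Python's native integers never overflow, so the 32-bit wraparound is simulated here with a mask; this is an illustrative model, not any particular program's code:

```python
# Simulate 32-bit unsigned wraparound by masking to 32 bits.
MASK = 0xFFFFFFFF  # 2**32 - 1

def add_u32(a, b):
    """Add two numbers the way a 32-bit unsigned integer would:
    bits that don't fit in 32 bits are simply lopped off."""
    return (a + b) & MASK

print(add_u32(4294967295, 1))  # wraps around to 0, like an odometer
```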

In itself, an integer overflow won’t crash a program; in fact, many programmers use integer overflows to perform certain types of computations more efficiently. But if a program tries to do something with an integer that has overflowed, havoc can ensue. Say, for instance, that the integer represents the number of pixels in an image the program is processing. If the program allocates memory to store the image, but its estimate of the image’s size is off by several orders of magnitude, the program will crash.
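A hypothetical sketch of that failure mode: the pixel count is computed in 32-bit arithmetic, wraps, and the program allocates a buffer sized from the wrong number. All function and variable names are invented for illustration:

```python
MASK = 0xFFFFFFFF

def alloc_image(width, height):
    # The 32-bit multiply wraps silently for large dimensions...
    size = (width * height) & MASK
    # ...so the buffer is sized from the overflowed, wildly wrong number.
    return bytearray(size)

buf = alloc_image(65536, 65536)  # true pixel count is 2**32
print(len(buf))                  # 0 bytes allocated -- havoc follows
```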

Charting a course

Any program can be represented as a flow chart — or, more technically, a graph, with boxes that represent operations connected by line segments that represent the flow of data between operations. Any given program input will trace a single route through the graph. Prior techniques for finding integer-overflow bugs would start at the top of the graph and begin working through it, operation by operation.
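That graph view can be sketched as a plain adjacency map, with a given input tracing a single path through it; the node names are invented for illustration:

```python
# A program's operations as a graph; an input traces one path through it.
graph = {
    "read_input":   ["parse_header"],
    "parse_header": ["compute_size"],
    "compute_size": ["allocate_ok", "reject_input"],
    "allocate_ok":  [],
    "reject_input": [],
}

def trace(start, choose):
    """Follow the path a particular input drives the program down,
    given a policy `choose` that picks the branch the data takes."""
    path, node = [start], start
    while graph[node]:
        node = choose(node, graph[node])
        path.append(node)
    return path

# An input that passes the size check takes the "allocate_ok" branch.
print(trace("read_input", lambda node, succ: succ[0]))
```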

For even a moderately complex program, however, that graph is enormous; exhaustive exploration of the entire thing would be prohibitively time-consuming. “What this means is that you can find a lot of errors in the early input-processing code,” says Martin Rinard, an MIT professor of computer science and engineering and a co-author on the new paper. “But you haven’t gotten past that part of the code before the whole thing poops out. And then there are all these errors deep in the program, and how do you find them?”

Rinard, Sidiroglou-Douskos, and several other members of Rinard’s group — researchers Eric Lahtinen and Paolo Piselli and graduate students Fan Long, Deokhwan Kim, and Nathan Rittenhouse — take a different approach. Their system, dubbed DIODE (for Directed Integer Overflow Detection), begins by feeding the program a single sample input. As that input is processed, however — as it traces a path through the graph — the system records each of the operations performed on it by adding new terms to what’s known as a “symbolic expression.”

“These symbolic expressions are complicated like crazy,” Rinard explains. “They’re bubbling up through the very lowest levels of the system into the program. This 32-bit integer has been built up of all these complicated bit-level operations that the lower-level parts of your system do to take this out of your input file and construct those integers for you. So if you look at them, they’re pages long.”

Trigger warning

When the program reaches a point at which an integer is involved in a potentially dangerous operation — like a memory allocation — DIODE records the current state of the symbolic expression. The initial test input won’t trigger an overflow, but DIODE can analyze the symbolic expression to calculate an input that will.

The process still isn’t over, however: Well-written programs frequently include input checks specifically designed to prevent problems like integer overflows, and the new input, unlike the initial input, might fail those checks. So DIODE seeds the program with its new input, and if it fails such a check, it imposes a new constraint on the symbolic expression and computes a new overflow-triggering input. This process continues until the system either finds an input that can pass the checks but still trigger an overflow, or it concludes that triggering an overflow is impossible.
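The refinement loop can be sketched in miniature. This toy stands in for DIODE's actual symbolic-expression solving; the size expression, the sanity check, and the candidate inputs are all invented for illustration:

```python
MAX_U32 = 2**32 - 1

def size_expr(n):
    # Stand-in for the recorded symbolic expression: n pixels, 4 bytes each.
    return n * 4

def passes_checks(n):
    # Stand-in for the program's own input-sanity checks.
    return 0 < n <= 2**31 - 1

def find_trigger(candidates):
    """Return an input that passes the checks yet overflows 32 bits,
    or None if no candidate can trigger an overflow."""
    for n in candidates:
        if passes_checks(n) and size_expr(n) > MAX_U32:
            return n
    return None

# DIODE derives candidates by solving the symbolic expression under the
# accumulated constraints; here we simply try a handful of values.
print(find_trigger([100, 2**30, 2**31 - 1]))  # 1073741824: 4*n exceeds 32 bits
```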

If DIODE does find a trigger value, it reports it, providing developers with a valuable debugging tool. Indeed, since DIODE doesn’t require access to a program’s source code but works on its “binary” — the executable version of the program — a program’s users could run it and then send developers the trigger inputs as graphic evidence that they may have missed security vulnerabilities.

Source: MIT News Office

A second minor planet may possess Saturn-like rings

Researchers detect features around Chiron that may signal rings, jets, or a shell of dust.

By Jennifer Chu

CAMBRIDGE, Mass. – There are only five bodies in our solar system that are known to bear rings. The most obvious is the planet Saturn; to a lesser extent, rings of gas and dust also encircle Jupiter, Uranus, and Neptune. The fifth member of this haloed group is Chariklo, one of a class of minor planets called centaurs: small, rocky bodies that possess qualities of both asteroids and comets.

Scientists only recently detected Chariklo’s ring system — a surprising finding, as it had been thought that centaurs are relatively dormant. Now scientists at MIT and elsewhere have detected a possible ring system around a second centaur, Chiron.

In November 2011, the group observed a stellar occultation in which Chiron passed in front of a bright star, briefly blocking its light. The researchers analyzed the star’s light emissions, and the momentary shadow created by Chiron, and identified optical features that suggest the centaur may possess a circulating disk of debris. The team believes the features may signify a ring system, a circular shell of gas and dust, or symmetric jets of material shooting out from the centaur’s surface.

“It’s interesting, because Chiron is a centaur — part of that middle section of the solar system, between Jupiter and Pluto, where we originally weren’t thinking things would be active, but it’s turning out things are quite active,” says Amanda Bosh, a lecturer in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

Bosh and her colleagues at MIT — Jessica Ruprecht, Michael Person, and Amanda Gulbis — have published their results in the journal Icarus.

Catching a shadow

Chiron, discovered in 1977, was the first planetary body categorized as a centaur, after the mythological Greek creature — a hybrid of man and beast. Like their mythological counterparts, centaurs are hybrids, embodying traits of both asteroids and comets. Today, scientists estimate there are more than 44,000 centaurs in the solar system, concentrated mainly in a band between the orbits of Jupiter and Pluto.

While most centaurs are thought to be dormant, scientists have seen glimmers of activity from Chiron. Starting in the late 1980s, astronomers observed patterns of brightening from the centaur, as well as activity similar to that of a streaking comet.

In 1993 and 1994, James Elliot, then a professor of planetary astronomy and physics at MIT, observed a stellar occultation of Chiron and made the first estimates of its size. Elliot also observed features in the optical data that looked like jets of water and dust spewing from the centaur’s surface.

Now MIT researchers — some of them former members of Elliot’s group — have obtained more precise observations of Chiron, using two large telescopes in Hawaii: NASA’s Infrared Telescope Facility, on Mauna Kea, and the Las Cumbres Observatory Global Telescope Network, at Haleakala.

In 2010, the team started to chart the orbits of Chiron and nearby stars in order to pinpoint exactly when the centaur might pass across a star bright enough to detect. The researchers determined that such a stellar occultation would occur on Nov. 29, 2011, and reserved time on the two large telescopes in hopes of catching Chiron’s shadow.

“There’s an aspect of serendipity to these observations,” Bosh says. “We need a certain amount of luck, waiting for Chiron to pass in front of a star that is bright enough. Chiron itself is small enough that the event is very short; if you blink, you might miss it.”

The team observed the stellar occultation remotely, from MIT’s Building 54. The entire event lasted just a few minutes, and the telescopes recorded the fading light as Chiron cast its shadow over the telescopes.

Rings around a theory

The group analyzed the resulting light, and detected something unexpected. A simple body, with no surrounding material, would create a straightforward pattern, blocking the star’s light entirely. But the researchers observed symmetrical, sharp features near the start and end of the stellar occultation — a sign that material such as dust might be blocking a fraction of the starlight.

The researchers observed two such features, each about 300 kilometers from the center of the centaur. Judging from the optical data, the features are 3 and 7 kilometers wide, respectively. They are similar to what Elliot observed in the 1990s.
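A back-of-envelope calculation suggests why such events are fleeting: dividing each feature's width by an assumed sky-plane shadow velocity gives the duration of each dip in starlight. The 20 km/s figure below is illustrative, not a value from the study; only the 3- and 7-kilometer widths come from the occultation data:

```python
# How briefly a narrow feature dims the star during the occultation.
shadow_velocity_km_s = 20.0          # assumed shadow speed (illustrative)
for width_km in (3.0, 7.0):          # feature widths from the optical data
    dip_s = width_km / shadow_velocity_km_s
    print(f"{width_km:.0f} km feature -> ~{dip_s:.2f} s dip in starlight")
```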

In light of these new observations, the researchers say that Chiron may still possess symmetrical jets of gas and dust, as Elliot first proposed. However, other interpretations may be equally valid, including the “intriguing possibility,” Bosh says, of a shell or ring of gas and dust.

Ruprecht, who is a researcher at MIT’s Lincoln Laboratory, says it is possible to imagine a scenario in which centaurs may form rings: For example, when a body breaks up, the resulting debris can be captured gravitationally around another body, such as Chiron. Rings can also be leftover material from the formation of Chiron itself.

“Another possibility involves the history of Chiron’s distance from the sun,” Ruprecht says. “Centaurs may have started further out in the solar system and, through gravitational interactions with giant planets, have had their orbits perturbed closer in to the sun. The frozen material that would have been stable out past Pluto is becoming less stable closer in, and can turn into gases that spray dust and material off the surface of a body.”

An independent group has since combined the MIT group’s occultation data with other light data, and has concluded that the features around Chiron most likely represent a ring system. However, Ruprecht says that researchers will have to observe more stellar occultations of Chiron to truly determine which interpretation — rings, shell, or jets — is the correct one.

“If we want to make a strong case for rings around Chiron, we’ll need observations by multiple observers, distributed over a few hundred kilometers, so that we can map the ring geometry,” Ruprecht says. “But that alone doesn’t tell us if the rings are a temporary feature of Chiron, or a more permanent one. There’s a lot of work that needs to be done.”

Nevertheless, Bosh says the possibility of a second ringed centaur in the solar system is an enticing one.

“Until Chariklo’s rings were found, it was commonly believed that these smaller bodies don’t have ring systems,” Bosh says. “If Chiron has a ring system, it will show it’s more common than previously thought.”

This research was funded in part by NASA and the National Research Foundation of South Africa.

Source: MIT News Office

Teaching programming to preschoolers: MIT Research

System that lets children program a robot using stickers embodies new theories about programming languages.

By Larry Hardesty

Researchers at the MIT Media Laboratory are developing a system that enables young children to program interactive robots by affixing stickers to laminated sheets of paper.

Not only could the system introduce children to programming principles, but it could also serve as a research tool, to help determine which computational concepts children can grasp at what ages, and how interactive robots can best be integrated into educational curricula.

Last week, at the Association for Computing Machinery and Institute of Electrical and Electronics Engineers’ International Conference on Human-Robot Interaction, the researchers presented the results of an initial study of the system, which investigated its use by children ages 4 to 8.

“We did not want to put this in the digital world but rather in the tangible world,” says Michal Gordon, a postdoc in media arts and sciences and lead author on the new paper. “It’s a sandbox for exploring computational concepts, but it’s a sandbox that comes to the children’s world.”

In their study, the MIT researchers used an interactive robot called Dragonbot, developed by the Personal Robots Group at the Media Lab, which is led by associate professor of media arts and sciences Cynthia Breazeal. Dragonbot has audio and visual sensors, a speech synthesizer, a range of expressive gestures, and a video screen for a face that can assume a variety of expressions. The programs that children created dictated how Dragonbot would react to stimuli.

“It’s programming in the context of relational interactions with the robot,” says Edith Ackermann, a developmental psychologist and visiting professor in the Personal Robots Group, who with Gordon and Breazeal is a co-author on the new paper. “This is what children do — they’re learning about social relations. So taking this expression of computational principles to the social world is very appropriate.”

Lessons that stick

The root components of the programming system are triangular and circular stickers — which represent stimuli and responses, respectively — and arrow stickers, which represent relationships between them. Children can first create computational “templates” by affixing triangles, circles, and arrows to sheets of laminated paper. They then fill in the details with stickers that represent particular stimuli — like thumbs up or down — and responses — like the narrowing or widening of Dragonbot’s eyes. There are also blank stickers on which older children can write their own verbal cues and responses.
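The sticker templates amount to trigger-action rules, which can be sketched as data: each stimulus (triangle) maps through an arrow to a chain of responses (circles). The stimulus and response names below are hypothetical, not the Media Lab's actual encoding:

```python
# A sticker "program" as stimulus -> chain-of-responses rules.
# All stimulus and response names are invented for illustration.
program = {
    "thumbs_up":   ["widen_eyes", "smile"],
    "thumbs_down": ["narrow_eyes"],
}

def respond(stimulus):
    """Return the chain of responses the program specifies for a
    stimulus, or an empty chain if the program doesn't cover it."""
    return program.get(stimulus, [])

print(respond("thumbs_up"))  # ['widen_eyes', 'smile']
```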

Researchers in the Personal Robots Group are developing a computer vision system that will enable children to convey new programs to Dragonbot simply by holding pages of stickers up to its camera. But for the purposes of the new study, the system’s performance had to be perfectly reliable, so one of the researchers would manually enter the stimulus-and-response sequences devised by the children, using a tablet computer with a touch-screen interface that featured icons depicting all the available options.

To introduce a new subject to the system, the researchers would ask him or her to issue an individual command, by attaching a single response sticker to a small laminated sheet. When presented with the sheet, Dragonbot would execute the command. But when presented with a program, it would instead nod its head and say, “I’ve got it,” and thereafter execute the specified chain of responses whenever it received the corresponding stimulus.

Even the youngest subjects were able to distinguish between individual commands and programs, and interviews after their sessions suggested that they understood that programs, unlike commands, modified the internal state of the robot. The researchers plan additional studies to determine the extent of their understanding.

Paradigm shift

The sticker system is, in fact, designed to encourage a new way of thinking about programming, one that may be more consistent with how computation is done in the 21st century.

“The systems we’re programming today are not sequential, as they were 20 or 30 years back,” Gordon says. “A system has many inputs coming in, complex state, and many outputs.” A cellphone, for instance, might be monitoring incoming transmissions over both Wi-Fi and the cellular network while playing back a video, transmitting the audio over Bluetooth, and running a timer that’s set to go off when the rice on the stove has finished cooking.

As a graduate student in computer science at the Weizmann Institute of Science in Israel, Gordon explains, she worked with her advisor, David Harel, on a new programming paradigm called scenario-based programming. “The idea is to describe your code in little scenarios, and the engine in the back connects them,” she explains. “You could think of it as rules, with triggers and actions.” Gordon and her colleagues’ new system could be used to introduce children to the principles of conventional, sequential programming. But it’s well adapted to scenario-based programming.

“It’s actually how we think about how programs are written before we try to integrate it into a whole programming artifact,” she says. “So I was thinking, ‘Why not try it earlier?’”

Source: MIT News Office

Magnetic brain stimulation: New technique could lead to long-lasting localized stimulation of brain tissue without external connections.

By David Chandler

CAMBRIDGE, Mass. – Researchers at MIT have developed a method to stimulate brain tissue using external magnetic fields and injected magnetic nanoparticles — a technique allowing direct stimulation of neurons, which could be an effective treatment for a variety of neurological diseases, without the need for implants or external connections.

The research, conducted by Polina Anikeeva, an assistant professor of materials science and engineering, graduate student Ritchie Chen, and three others, has been published in the journal Science.

Previous efforts to stimulate the brain using pulses of electricity have proven effective in reducing or eliminating tremors associated with Parkinson’s disease, but the treatment has remained a last resort because it requires highly invasive implanted wires that connect to a power source outside the brain.

“In the future, our technique may provide an implant-free means to provide brain stimulation and mapping,” Anikeeva says.

In their study, the team injected magnetic iron oxide particles just 22 nanometers in diameter into the brain. When exposed to an external alternating magnetic field — which can penetrate deep inside biological tissues — these particles rapidly heat up.

The resulting local temperature increase can then lead to neural activation by triggering heat-sensitive capsaicin receptors — the same proteins that the body uses to detect both actual heat and the “heat” of spicy foods. (Capsaicin is the chemical that gives hot peppers their searing taste.) Anikeeva’s team used viral gene delivery to induce the sensitivity to heat in selected neurons in the brain.

The particles, which have virtually no interaction with biological tissues except when heated, tend to remain where they’re placed, allowing for long-term treatment without the need for further invasive procedures.

“The nanoparticles integrate into the tissue and remain largely intact,” Anikeeva says. “Then, that region can be stimulated at will by externally applying an alternating magnetic field. The goal for us was to figure out whether we could deliver stimuli to the nervous system in a wireless and noninvasive way.”

The new work has proven that the approach is feasible, but much work remains to turn this proof-of-concept into a practical method for brain research or clinical treatment.

The use of magnetic fields and injected particles has been an active area of cancer research; the thought is that this approach could destroy cancer cells by heating them. “The new technique is derived, in part, from that research,” Anikeeva says. “By calibrating the delivered thermal dosage, we can excite neurons without killing them. The magnetic nanoparticles also have been used for decades as contrast agents in MRI scans, so they are considered relatively safe in the human body.”

The team developed ways to make the particles with precisely controlled sizes and shapes, in order to maximize their interaction with the applied alternating magnetic field. They also developed devices to deliver the applied magnetic field: Existing devices for cancer treatment — intended to produce much more intense heating — were far too big and energy-inefficient for this application.

The next step toward making this a practical technology for clinical use in humans “is to understand better how our method works through neural recordings and behavioral experiments, and assess whether there are any other side effects to tissues in the affected area,” Anikeeva says.

In addition to Anikeeva and Chen, the research team also included postdoc Gabriela Romero, graduate student Michael Christiansen, and undergraduate Alan Mohr. The work was funded by the Defense Advanced Research Projects Agency, MIT’s McGovern Institute for Brain Research, and the National Science Foundation.

Quantum sensor’s advantages survive entanglement breakdown

Preserving the fragile quantum property known as entanglement isn’t necessary to reap benefits.

By Larry Hardesty 

CAMBRIDGE, Mass. – The extraordinary promise of quantum information processing — solving problems that classical computers can’t, perfectly secure communication — depends on a phenomenon called “entanglement,” in which the physical states of different quantum particles become interrelated. But entanglement is very fragile, and the difficulty of preserving it is a major obstacle to developing practical quantum information systems.

In a series of papers since 2008, members of the Optical and Quantum Communications Group at MIT’s Research Laboratory of Electronics have argued that optical systems that use entangled light can outperform classical optical systems — even when the entanglement breaks down.

Two years ago, they showed that systems that begin with entangled light could offer much more efficient means of securing optical communications. And now, in a paper appearing in Physical Review Letters, they demonstrate that entanglement can also improve the performance of optical sensors, even when it doesn’t survive light’s interaction with the environment.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

“That is something that has been missing in the understanding that a lot of people have in this field,” says senior research scientist Franco Wong, one of the paper’s co-authors and, together with Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, co-director of the Optical and Quantum Communications Group. “They feel that if unavoidable loss and noise make the light being measured look completely classical, then there’s no benefit to starting out with something quantum. Because how can it help? And what this experiment shows is that yes, it can still help.”

Phased in

Entanglement means that the physical state of one particle constrains the possible states of another. Electrons, for instance, have a property called spin, which describes their magnetic orientation. If two electrons are orbiting an atom’s nucleus at the same distance, they must have opposite spins. This spin entanglement can persist even if the electrons leave the atom’s orbit, but interactions with the environment break it down quickly.

In the MIT researchers’ system, two beams of light are entangled, and one of them is stored locally — racing through an optical fiber — while the other is projected into the environment. When light from the projected beam — the “probe” — is reflected back, it carries information about the objects it has encountered. But this light is also corrupted by the environmental influences that engineers call “noise.” Recombining it with the locally stored beam helps suppress the noise, recovering the information.

The local beam is useful for noise suppression because its phase is correlated with that of the probe. If you think of light as a wave, with regular crests and troughs, two beams are in phase if their crests and troughs coincide. If the crests of one are aligned with the troughs of the other, their phases are anti-correlated.

But light can also be thought of as consisting of particles, or photons. And at the particle level, phase is a murkier concept.

“Classically, you can prepare beams that are completely opposite in phase, but this is only a valid concept on average,” says Zheshen Zhang, a postdoc in the Optical and Quantum Communications Group and first author on the new paper. “On average, they’re opposite in phase, but quantum mechanics does not allow you to precisely measure the phase of each individual photon.”

Improving the odds

Instead, quantum mechanics interprets phase statistically. Given particular measurements of two photons, from two separate beams of light, there’s some probability that the phases of the beams are correlated. The more photons you measure, the greater your certainty that the beams are either correlated or not. With entangled beams, that certainty increases much more rapidly than it does with classical beams.

When a probe beam interacts with the environment, the noise it accumulates also increases the uncertainty of the ensuing phase measurements. But that’s as true of classical beams as it is of entangled beams. Because entangled beams start out with stronger correlations, even when noise causes them to fall back within classical limits, they still fare better than classical beams do under the same circumstances.

“Going out to the target and reflecting and then coming back from the target attenuates the correlation between the probe and the reference beam by the same factor, regardless of whether you started out at the quantum limit or started out at the classical limit,” Shapiro says. “If you started with the quantum case that’s so many times bigger than the classical case, that relative advantage stays the same, even as both beams become classical due to the loss and the noise.”

In experiments that compared optical systems that used entangled light and classical light, the researchers found that the entangled-light systems increased the signal-to-noise ratio — a measure of how much information can be recaptured from the reflected probe — by 20 percent. That accorded very well with their theoretical predictions.

But the theory also predicts that improvements in the quality of the optical equipment used in the experiment could double or perhaps even quadruple the signal-to-noise ratio. Since detection error declines exponentially with the signal-to-noise ratio, that could translate to a million-fold increase in sensitivity.
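That arithmetic can be checked with a hedged sketch: if detection error falls roughly as exp(-SNR), then quadrupling an illustrative baseline SNR shrinks the error by millions-fold. The baseline value below is an assumption for illustration, not a figure from the experiment:

```python
import math

def error_prob(snr):
    # Simplified model: detection error declines exponentially with SNR.
    return math.exp(-snr)

baseline_snr = 5.0               # illustrative baseline, an assumption
improved_snr = 4 * baseline_snr  # "double or perhaps even quadruple"
gain = error_prob(baseline_snr) / error_prob(improved_snr)
print(f"detection error falls by a factor of {gain:,.0f}")  # millions-fold
```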

Source: MIT News Office

The rise and fall of cognitive skills: Neuroscientists find that different parts of the brain work best at different ages.

By Anne Trafton

CAMBRIDGE, Mass. – Scientists have long known that our ability to think quickly and recall information, also known as fluid intelligence, peaks around age 20 and then begins a slow decline. However, more recent findings, including a new study from neuroscientists at MIT and Massachusetts General Hospital (MGH), suggest that the real picture is much more complex.

The study, which appears in the journal Psychological Science, finds that different components of fluid intelligence peak at different ages, some as late as age 40.

“At any given age, you’re getting better at some things, you’re getting worse at some other things, and you’re at a plateau at some other things. There’s probably not one age at which you’re peak on most things, much less all of them,” says Joshua Hartshorne, a postdoc in MIT’s Department of Brain and Cognitive Sciences and one of the paper’s authors.

“It paints a different picture of the way we change over the lifespan than psychology and neuroscience have traditionally painted,” adds Laura Germine, a postdoc in psychiatric and neurodevelopmental genetics at MGH and the paper’s other author.

Measuring peaks

Until now, it has been difficult to study how cognitive skills change over time because of the challenge of getting large numbers of people older than college students and younger than 65 to come to a psychology laboratory to participate in experiments. Hartshorne and Germine were able to take a broader look at aging and cognition because they have been running large-scale experiments on the Internet, where people of any age can become research subjects.

Their websites, gameswithwords.org and testmybrain.org, feature cognitive tests designed to be completed in just a few minutes. Through these sites, the researchers have accumulated data from nearly 3 million people in the past several years.

In 2011, Germine published a study showing that the ability to recognize faces improves until the early 30s before gradually starting to decline. This finding did not fit into the theory that fluid intelligence peaks in late adolescence. Around the same time, Hartshorne found that subjects’ performance on a visual short-term memory task also peaked in the early 30s.

Intrigued by these results, the researchers, then graduate students at Harvard University, decided that they needed to explore a different source of data, in case some aspect of collecting data on the Internet was skewing the results. They dug out sets of data, collected decades ago, on adult performance at different ages on the Wechsler Adult Intelligence Scale, which is used to measure IQ, and the Wechsler Memory Scale. Together, these tests measure about 30 different subsets of intelligence, such as digit memorization, visual search, and assembling puzzles.

Hartshorne and Germine developed a new way to analyze the data that allowed them to compare the age peaks for each task. “We were mapping when these cognitive abilities were peaking, and we saw there was no single peak for all abilities. The peaks were all over the place,” Hartshorne says. “This was the smoking gun.”

However, the dataset was not as large as the researchers would have liked, so they decided to test several of the same cognitive skills with their larger pools of Internet study participants. For the Internet study, the researchers chose four tasks that peaked at different ages, based on the data from the Wechsler tests. They also included a test of the ability to perceive others’ emotional state, which is not measured by the Wechsler tests.

The researchers gathered data from nearly 50,000 subjects and found a very clear picture showing that each cognitive skill they were testing peaked at a different age. For example, raw speed in processing information appears to peak around age 18 or 19, then immediately starts to decline. Meanwhile, short-term memory continues to improve until around age 25, when it levels off and then begins to drop around age 35.

For the ability to evaluate other people’s emotional states, the peak occurred much later, in the 40s or 50s.

More work will be needed to reveal why each of these skills peaks at different times, the researchers say. However, previous studies have hinted that genetic changes or changes in brain structure may play a role.

“If you go into the data on gene expression or brain structure at different ages, you see these lifespan patterns that we don’t know what to make of. The brain seems to continue to change in dynamic ways through early adulthood and middle age,” Germine says. “The question is: What does it mean? How does it map onto the way you function in the world, or the way you think, or the way you change as you age?”

Accumulated intelligence

The researchers also included a vocabulary test, which serves as a measure of what is known as crystallized intelligence — the accumulation of facts and knowledge. These results confirmed that crystallized intelligence peaks later in life, as previously believed, but the researchers also found something unexpected: While data from the Wechsler IQ tests suggested that vocabulary peaks in the late 40s, the new data showed a later peak, in the late 60s or early 70s.

The researchers believe this may be a result of better education, more people having jobs that require a lot of reading, and more opportunities for intellectual stimulation for older people.

Hartshorne and Germine are now gathering more data from their websites and have added new cognitive tasks designed to evaluate social and emotional intelligence, language skills, and executive function. They are also working on making their data public so that other researchers can access it and perform other types of studies and analyses.

“We took the existing theories that were out there and showed that they’re all wrong. The question now is: What is the right one? To get to that answer, we’re going to need to run a lot more studies and collect a lot more data,” Hartshorne says.

The research was funded by the National Institutes of Health, the National Science Foundation, and a National Defense Science and Engineering Graduate Fellowship.

Source: MIT News Office

For the first time, spacecraft catch a solar shockwave in the act

Solar storm found to produce “ultrarelativistic, killer electrons” in 60 seconds.

By Jennifer Chu

CAMBRIDGE, Mass. – On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.

NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.

Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.

“These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”

The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

Catching a shockwave in the act

Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.

The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.

Dealing a “sledgehammer blow”

Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.
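A back-of-the-envelope check shows what "ultrarelativistic" means for a 3 to 4 MeV electron. This is standard relativistic kinematics, not a calculation from the paper:

```python
import math

M_E_C2 = 0.511  # electron rest energy in MeV

def lorentz_factor(ke_mev):
    """Lorentz factor gamma for an electron with kinetic energy ke_mev (in MeV)."""
    return 1.0 + ke_mev / M_E_C2

def beta(ke_mev):
    """Speed as a fraction of the speed of light."""
    g = lorentz_factor(ke_mev)
    return math.sqrt(1.0 - 1.0 / g**2)

# A 3.5 MeV electron, the midpoint of the 3-4 MeV range quoted above:
g = lorentz_factor(3.5)  # ~7.8: total energy is nearly eight times the rest energy
b = beta(3.5)            # ~0.992: more than 99 percent of the speed of light
```

At these energies an electron's speed is within a percent of the speed of light, which is why such particles can punch through satellite shielding.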

Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

“This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”

Source: MIT News Office

Wrinkle predictions

New mathematical theory may explain patterns in fingerprints, raisins, and microlenses.

By Jennifer Chu

CAMBRIDGE, Mass. – As a grape slowly dries and shrivels, its surface creases, ultimately taking on the wrinkled form of a raisin. Similar patterns can be found on the surfaces of other dried materials, as well as in human fingerprints. While these patterns have long been observed in nature, and more recently in experiments, scientists have not been able to come up with a way to predict how such patterns arise in curved systems, such as microlenses.

Now a team of MIT mathematicians and engineers has developed a mathematical theory, confirmed through experiments, that predicts how wrinkles on curved surfaces take shape. From their calculations, they determined that one main parameter — curvature — rules the type of pattern that forms: The more curved a surface is, the more its surface patterns resemble a crystal-like lattice.

The researchers say the theory, reported this week in the journal Nature Materials, may help to generally explain how fingerprints and wrinkles form.

“If you look at skin, there’s a harder layer of tissue, and underneath is a softer layer, and you see these wrinkling patterns that make fingerprints,” says Jörn Dunkel, an assistant professor of mathematics at MIT. “Could you, in principle, predict these patterns? It’s a complicated system, but there seems to be something generic going on, because you see very similar patterns over a huge range of scales.”

The group sought to develop a general theory to describe how wrinkles on curved objects form — a goal that was initially inspired by observations made by Dunkel’s collaborator, Pedro Reis, the Gilbert W. Winslow Career Development Associate Professor in Civil Engineering.

In past experiments, Reis manufactured polymer balls the size of ping-pong balls in order to investigate how their surface patterns may affect a sphere’s drag, or resistance to air. Reis observed a characteristic transition of surface patterns as air was slowly sucked out of the balls: As a sphere’s surface became compressed, it began to dimple, forming a pattern of regular hexagons before giving way to a more convoluted, labyrinthine configuration, similar to fingerprints.

“Existing theories could not explain why we were seeing these completely different patterns,” Reis says.

Denis Terwagne, a former postdoc in Reis’ group, mentioned this conundrum in a Department of Mathematics seminar attended by Dunkel and postdoc Norbert Stoop. The mathematicians took up the challenge, and soon contacted Reis to collaborate.

Ahead of the curve

Reis shared data from his past experiments, which Dunkel and Stoop used to formulate a generalized mathematical theory. According to Dunkel, there exists a mathematical framework for describing wrinkling, in the form of elasticity theory — a complex set of equations one could apply to Reis’ experiments to predict the resulting shapes in computer simulations. However, these equations are far too complicated to pinpoint exactly when certain patterns start to morph, let alone what causes such morphing.

Combining ideas from fluid mechanics with elasticity theory, Dunkel and Stoop derived a simplified equation that accurately predicts the wrinkling patterns found by Reis and his group.

“What type of stretching and bending is going on, and how the substrate underneath influences the pattern — all these different effects are combined in coefficients so you now have an analytically tractable equation that predicts how the patterns evolve, depending on the forces that act on that surface,” Dunkel explains.

In computer simulations, the researchers confirmed that their equation could correctly reproduce the surface patterns observed in experiments, and were thereby able to identify the main parameters that govern surface patterning.

As it turns out, curvature is one major determinant of whether a wrinkling surface becomes covered in hexagons or a more labyrinthine pattern: The more curved an object, the more regular its wrinkled surface. The thickness of an object’s shell also plays a role: If the outer layer is very thin compared to its curvature, an object’s surface will likely be convoluted, similar to a fingerprint. If the shell is a bit thicker, the surface will form a more hexagonal pattern.
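The team’s actual equation is a curvature-dependent generalization derived from elasticity theory and is not reproduced here. As a loose flat-space analogue, though, the classic Swift-Hohenberg equation exhibits the same hexagons-versus-labyrinths competition, with a quadratic coefficient `gamma` playing a role roughly analogous to the curvature and shell-thickness effects described above. A minimal sketch, using a standard semi-implicit Fourier scheme:

```python
import numpy as np

def swift_hohenberg(n=128, L=40 * np.pi, eps=0.1, gamma=0.5,
                    steps=2000, dt=0.5, seed=0):
    """Toy pattern-forming model: du/dt = eps*u - (1 + lap)^2 u + gamma*u^2 - u^3.
    gamma well above zero favors hexagonal spots; gamma near zero favors
    labyrinthine stripes."""
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal((n, n))     # small random initial condition
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k)
    lin = eps - (1 - (kx**2 + ky**2))**2       # linear operator in Fourier space
    for _ in range(steps):
        nl = gamma * u**2 - u**3               # nonlinear terms in real space
        u_hat = np.fft.fft2(u)
        # Semi-implicit step: linear part implicit, nonlinear part explicit.
        u_hat = (u_hat + dt * np.fft.fft2(nl)) / (1 - dt * lin)
        u = np.real(np.fft.ifft2(u_hat))
    return u
```

Running this with `gamma=0.5` settles into a hexagonal spot lattice, while `gamma=0.0` yields disordered stripes reminiscent of a fingerprint — a toy version of the transition the MIT group observed on shrinking spheres.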

Dunkel says the group’s theory, although based primarily on Reis’ work with spheres, may also apply to more complex objects. He and Stoop, together with postdoc Romain Lagrange, have used their equation to predict the morphing patterns in a donut-shaped object, which they have now challenged Reis to reproduce experimentally. If these predictions can be confirmed in future experiments, Reis says the new theory will serve as a design tool for scientists to engineer complex objects with morphable surfaces.

“This theory allows us to go and look at shapes other than spheres,” Reis says. “If you want to make a more complicated object wrinkle — say, a Pringle-shaped area with multiple curvatures — would the same equation still apply? Now we’re developing experiments to check their theory.”

This research was funded in part by the National Science Foundation, the Swiss National Science Foundation, and the MIT Solomon Buchsbaum Fund.

Source: MIT News Office

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.

Credit: F. Najafi/ MIT

Toward quantum chips

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits.

By Larry Hardesty

CAMBRIDGE, Mass. – A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.
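The yield arithmetic makes clear why pre-testing helps. With purely illustrative numbers — a 5 percent in-place yield, matching “a handful” out of 100, and an assumed 90 percent transfer success rate — the probability that every detector in an array works changes dramatically:

```python
# All numbers here are illustrative assumptions, not figures from the paper.
p_in_place = 0.05   # chance a detector fabricated directly on the chip works
p_transfer = 0.90   # assumed chance a pre-tested detector survives transfer
n = 10              # detectors needed in one array

# Fabricating all n in place: every one must independently come out working.
p_array_in_place = p_in_place ** n    # ~1e-13: essentially never happens

# Transferring only detectors already known to work: each site succeeds
# with the transfer yield instead of the fabrication yield.
p_array_transfer = p_transfer ** n    # ~0.35: routinely achievable
```

Because the all-working probability falls exponentially with array size, separating fabrication from assembly is what makes dense arrays feasible at all.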


In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
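To make the “many computations in parallel” intuition concrete: a register of n qubits is described by 2^n complex amplitudes, and applying a Hadamard gate to each qubit of the all-zero state puts every one of the 2^n basis states into superposition at once. A minimal state-vector sketch (standard textbook quantum computing, not specific to this paper):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

def uniform_superposition(n):
    """State vector after applying H to each of n qubits starting from |0...0>."""
    state = np.array([1.0])                 # amplitude of the empty register
    for _ in range(n):
        # Tensor in one more qubit, already rotated by H from |0>.
        state = np.kron(state, H @ np.array([1.0, 0.0]))
    return state

# 3 qubits -> 8 basis states, each with amplitude 1/sqrt(8):
s = uniform_superposition(3)
```

Note that this classical simulation itself needs memory exponential in n, which is exactly why quantum hardware is interesting.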

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

Source: MIT News Office

Trapping light with a twister

New understanding of how to halt photons could lead to miniature particle accelerators, improved data transmission.

By David L. Chandler

Researchers at MIT who succeeded last year in creating a material that could trap light and stop it in its tracks have now developed a more fundamental understanding of the process. The new work — which could help explain some basic physical mechanisms — reveals that this behavior is connected to a wide range of other seemingly unrelated phenomena.

The findings are reported in a paper in the journal Physical Review Letters, co-authored by MIT physics professor Marin Soljačić; postdocs Bo Zhen, Chia Wei Hsu, and Ling Lu; and Douglas Stone, a professor of applied physics at Yale University.

Light can usually be confined only with mirrors, or with specialized materials such as photonic crystals. Both of these approaches block light beams; last year’s finding demonstrated a new method in which the waves cancel out their own radiation fields. The new work shows that this light-trapping process, which involves twisting the polarization direction of the light, is based on a kind of vortex — the same phenomenon behind everything from tornadoes to water swirling down a drain.

Vortices of bound states in the continuum. The left panel shows five bound states in the continuum in a photonic crystal slab as bright spots. The right panel shows the polarization vector field in the same region, revealing five vortices at the locations of the bound states. These vortices are characterized by topological charges of +1 or -1.
Courtesy of the researchers
Source: MIT

In addition to revealing the mechanism responsible for trapping the light, the new analysis shows that this trapped state is much more stable than had been thought, making it easier to produce and harder to disturb.

“People think of this [trapped state] as very delicate,” Zhen says, “and almost impossible to realize. But it turns out it can exist in a robust way.”

In most natural light, the direction of polarization — which can be thought of as the direction in which the light waves vibrate — remains fixed. That’s the principle that allows polarizing sunglasses to work: Light reflected from a surface is selectively polarized in one direction; that reflected light can then be blocked by polarizing filters oriented at right angles to it.
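The sunglasses example follows Malus’s law: an ideal polarizer transmits a fraction cos²(θ) of the incident intensity, where θ is the angle between the light’s polarization and the filter axis. A quick sketch:

```python
import math

def malus(intensity, theta_deg):
    """Malus's law: intensity transmitted by an ideal polarizer oriented at
    theta_deg degrees to the light's polarization direction."""
    return intensity * math.cos(math.radians(theta_deg)) ** 2

# Glare polarized horizontally hitting a vertically oriented filter (90 degrees)
# is fully blocked; at 60 degrees, a quarter of the intensity gets through.
```

At right angles the transmitted intensity drops to zero, which is how polarizing sunglasses kill reflected glare.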

But in the case of these light-trapping crystals, light that enters the material becomes polarized in a way that forms a vortex, Zhen says, with the direction of polarization changing depending on the beam’s direction.

Because the polarization is different at every point in this vortex, it produces a singularity — also called a topological defect, Zhen says — at its center, trapping the light at that point.
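The “topological charge” of such a vortex is its winding number: how many full turns the polarization direction makes as you traverse a closed loop around the singularity. A small sketch of computing it from sampled angles — an illustration of the general concept, not the paper’s analysis:

```python
import numpy as np

def topological_charge(angles):
    """Winding number of a closed loop of polarization angles (radians),
    sampled in order around a candidate defect."""
    closed = np.concatenate([angles, angles[:1]])   # close the loop
    d = np.diff(closed)
    d = (d + np.pi) % (2 * np.pi) - np.pi           # wrap increments to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

# A +1 vortex: the angle rotates once counterclockwise around the loop;
# negating the angles gives a -1 vortex.
loop = np.linspace(0, 2 * np.pi, 100, endpoint=False)
```

A charge of +1 or -1, as in the figure above, means the polarization field cannot be smoothly combed flat at the vortex center — which is what pins the trapped state in place.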

Hsu says the phenomenon makes it possible to produce something called a vector beam, a special kind of laser beam that could potentially create small-scale particle accelerators. Such devices could use these vector beams to accelerate particles and smash them into each other — perhaps allowing future tabletop devices to carry out the kinds of high-energy experiments that today require miles-wide circular tunnels.

The finding, Soljačić says, could also enable easy implementation of super-resolution imaging (using a method called stimulated emission depletion microscopy) and could allow the sending of far more channels of data through a single optical fiber.

“This work is a great example of how supposedly well-studied physical systems can contain rich and undiscovered phenomena, which can be unearthed if you dig in the right spot,” says Yidong Chong, an assistant professor of physics and applied physics at Nanyang Technological University in Singapore who was not involved in this research.

Chong says it is remarkable that such surprising findings have come from relatively well-studied materials. “It deals with photonic crystal slabs of the sort that have been extensively analyzed, both theoretically and experimentally, since the 1990s,” he says. “The fact that the system is so unexotic, together with the robustness associated with topological phenomena, should give us confidence that these modes will not simply be theoretical curiosities, but can be exploited in technologies such as microlasers.”

The research was partly supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and by the Department of Energy and the National Science Foundation.

Source: MIT News Office