
Longstanding problem put to rest

Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty

CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
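
The grid-filling procedure is short enough to sketch in full. The following is a standard textbook implementation of the Wagner-Fischer recurrence, not the MIT authors' code:

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer: fill an (len(a)+1) x (len(b)+1) grid of edit counts."""
    m, n = len(a), len(b)
    # dist[i][j] = minimum edits to turn a[:i] into b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # delete all i symbols of a[:i]
    for j in range(n + 1):
        dist[0][j] = j          # insert all j symbols of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # substitution needed?
            dist[i][j] = min(dist[i - 1][j] + 1,      # deletion
                             dist[i][j - 1] + 1,      # insertion
                             dist[i - 1][j - 1] + cost)
    return dist[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

Each square depends only on its three already-filled neighbors (above, left, and upper-left), which is why the computation floods diagonally across the grid.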

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.
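
A back-of-the-envelope count makes the gap concrete. Counting one unit of work per grid square (quadratic) or per candidate solution (exponential), and leaving machine speed out of it:

```python
n = 100
quadratic_ops = n ** 2    # grid squares a Wagner-Fischer-style algorithm fills
exponential_ops = 2 ** n  # candidates a brute-force exponential search examines

print(quadratic_ops)                              # 10000
print(f"{exponential_ops / quadratic_ops:.1e}")   # 1.3e+26 times more work
```

At 100 elements, the exponential approach already does over 10^26 times as much work; exact wall-clock figures depend on the machine assumed.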

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
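
The dinner-party puzzle can be written down as a tiny satisfiability instance and solved by brute force. The names and constraints below are just the ones from the example above (the book-club constraint is simplified to a hypothetical stand-in requiring Ellen); with k guests there are 2^k invitation lists to try, which is exactly the exponential blowup:

```python
from itertools import product

guests = ["Alice", "Bob", "Cindy", "Dave", "Ellen", "Fred", "George"]

def satisfies(invite):
    # Either Alice or Bob stays home with the kids: not both can come.
    if invite["Alice"] and invite["Bob"]:
        return False
    # Inviting both Cindy and Dave means inviting the book club --
    # modeled here (as a stand-in) by also requiring Ellen.
    if invite["Cindy"] and invite["Dave"] and not invite["Ellen"]:
        return False
    # Ellen brings either Fred or George, but not both.
    if invite["Ellen"] and invite["Fred"] == invite["George"]:
        return False
    return True

# Try all 2^7 = 128 invitation lists.
solutions = [dict(zip(guests, bits))
             for bits in product([False, True], repeat=len(guests))
             if satisfies(dict(zip(guests, bits)))]
print(len(solutions) > 0)  # True: at least one list meets every constraint
```

Brute force is fine for seven guests; the point of the hardness result is that, for satisfiability in general, nothing fundamentally better than this exhaustive search is believed to exist.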

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, while Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.

Source: MIT News Office

Researchers unravel secrets of hidden waves

Region of world’s strongest “internal waves” is analyzed in detail; work could help refine climate models.

By David Chandler

CAMBRIDGE, Mass. – Detailed new field studies, laboratory experiments, and simulations of the largest known “internal waves” in the Earth’s oceans — phenomena that play a key role in mixing ocean waters, greatly affecting ocean temperatures — provide a comprehensive new view of how these colossal, invisible waves are born, spread, and die off.

The work, published today in the journal Nature, could contribute significantly to improving global climate models, the researchers say. The paper is co-authored by 42 researchers from 25 institutions in five countries.

“What this report presents is a complete picture, a cradle-to-grave picture of these waves,” says Thomas Peacock, an associate professor of mechanical engineering at MIT, and one of the paper’s two lead authors.

Internal waves — giant waves, below the surface, that roil stratified layers of heavier, saltier water and lighter, less-salty water — are ubiquitous throughout the world’s oceans. But by far the largest and most powerful known internal waves are those that form in one area of the South China Sea, originating from the Luzon Strait between the Philippines and Taiwan.

These subsurface waves can tower more than 500 meters high, and generate powerful turbulence. Because of their size and behavior, the rise and spread of these waves are important for marine processes, including the supply of nutrients for marine organisms; the distribution of sediments and pollutants; and the propagation of sound waves. They are also a significant factor in the mixing of ocean waters, combining warmer surface waters with cold, deep waters — a process that is essential to understanding the dynamics of global climate.

This international research effort, called IWISE (Internal Waves In Straits Experiment), was a rare undertaking in this field, Peacock says; the last such field study on internal waves on this scale, the Hawaii Ocean Mixing Experiment, concluded in 2002. The new study looked at internal waves that were much stronger, and went significantly further in determining not just how the waves originated, but how their energy dissipated.

One unexpected finding, Peacock says, was the degree of turbulence produced as the waves originate, as tides and currents pass over ridges on the seafloor. “These were unexpected field discoveries,” he says, revealing “some of the most intense mixing ever observed in the deep ocean. It’s like a giant washing machine — the mixing is much more dramatic than we ever expected.”

The new observations, Peacock says, resolve a longstanding technical question about how internal waves propagate — whether the towering waves start out full strength at their point of origin, or whether they continue to build as they spread from that site. Many attempts to answer this question have produced contradictory results over the years.

This new research, which involved placing several long mooring lines from the seafloor to buoys at the surface, with instruments at intervals all along the lines, has decisively resolved that question, Peacock says: The waves grow larger as they propagate. Prior measurements, the new work found, had been drawn from too narrow a slice of the region, resulting in conflicting results — rather like the fable of blind men describing an elephant. The new, more comprehensive data has now resolved the mystery.

The new data also contradict a long-held assumption — a “commonly held belief that was almost stated as fact,” Peacock says — that solitary internal waves are completely absent from the South China Sea during the winter months. But with equipment in place to reliably measure water movement throughout the year, the team found these waves were “carrying on quite happily throughout the entire winter,” Peacock says: Previously, their presence had been masked by the winter’s stormier weather, and by the influence of a strong boundary current that runs along the coast of Taiwan — the regional equivalent of the Gulf Stream.

The improved understanding of internal waves, Peacock says, could be useful for researchers in a number of areas. The waves are key to some ecosystems, for example — some marine creatures essentially “surf” them to move in toward shore, for feeding or breeding; in the South China Sea, this process helps sustain an extensive coral reef system. The waves also help carry heat from the ocean’s surface to its depths, an important parameter in modeling climate.

The research, which was primarily a collaboration between U.S. and Taiwanese scientists, was funded by the U.S. Office of Naval Research and the Taiwan National Science Council.

Source: MIT News Office


New tabletop detector “sees” single electrons

Magnet-based setup may help detect the elusive mass of neutrinos.

Jennifer Chu

MIT physicists have developed a new tabletop particle detector that is able to identify single electrons in a radioactive gas.
As the gas decays and gives off electrons, the detector uses a magnet to trap them in a magnetic bottle. A radio antenna then picks up very weak signals emitted by the electrons, which can be used to map the electrons’ precise activity over several milliseconds.

Shown here is “event zero,” the first detection of a trapped electron in the MIT physicists’ instrument. The color indicates the electron’s detected power as a function of frequency and time. The sudden “jumps” in frequency indicate an electron collision with the residual hydrogen gas in the cell.
Courtesy of the researchers
Source: MIT News

The team worked with researchers at Pacific Northwest National Laboratory, the University of Washington, the University of California at Santa Barbara (UCSB), and elsewhere to record the activity of more than 100,000 individual electrons in krypton gas.
The majority of electrons observed behaved in a characteristic pattern: As the radioactive krypton gas decays, it emits electrons that vibrate at a baseline frequency before petering out; this frequency spikes again whenever an electron hits an atom of radioactive gas. As an electron ping-pongs against multiple atoms in the detector, its energy appears to jump in a step-like pattern.
“We can literally image the frequency of the electron, and we see this electron suddenly pop into our radio antenna,” says Joe Formaggio, an associate professor of physics at MIT. “Over time, the frequency changes, and actually chirps up. So these electrons are chirping in radio waves.”
Formaggio says the group’s results, published in Physical Review Letters, are a big step toward a more elusive goal: measuring the mass of a neutrino.

A ghostly particle
Neutrinos are among the more mysterious elementary particles in the universe: Billions of them pass through every cell of our bodies each second, and yet these ghostly particles are incredibly difficult to detect, as they don’t appear to interact with ordinary matter. Scientists have set theoretical limits on neutrino mass, but researchers have yet to precisely detect it.
“We have [the mass] cornered, but haven’t measured it yet,” Formaggio says. “The name of the game is to measure the energy of an electron — that’s your signature that tells you about the neutrino.”
As Formaggio explains it, when a radioactive atom such as tritium decays, it turns into an isotope of helium and, in the process, also releases an electron and a neutrino. The energy of all particles released adds up to the original energy of the parent neutron. Measuring the energy of the electron, therefore, can illuminate the energy — and consequently, the mass — of the neutrino.
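
The bookkeeping behind that statement is simple: tritium's decay releases a fixed total kinetic energy (its Q-value, about 18.6 keV), so once the electron's share is measured, the neutrino's share follows by subtraction. A minimal sketch (the real analysis works with the shape of the electron energy spectrum near its endpoint, not single subtractions):

```python
Q_TRITIUM_KEV = 18.6  # total kinetic energy released in tritium beta decay

def neutrino_energy_kev(electron_energy_kev: float) -> float:
    """Energy carried away by the (anti)neutrino, neglecting nuclear recoil."""
    if not 0.0 <= electron_energy_kev <= Q_TRITIUM_KEV:
        raise ValueError("electron energy must lie between 0 and the Q-value")
    return Q_TRITIUM_KEV - electron_energy_kev

print(round(neutrino_energy_kev(18.0), 3))  # 0.6 keV left for the neutrino
```

The closer the electron's energy gets to the Q-value, the less is left over for the neutrino, which is why the spectrum's endpoint encodes the neutrino mass.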
Scientists agree that tritium, a radioactive isotope of hydrogen, is key to obtaining a precise measurement: As a gas, tritium decays at such a rate that scientists can relatively easily observe its electron byproducts.
Researchers in Karlsruhe, Germany, hope to measure electrons in tritium using a massive spectrometer as part of an experiment named KATRIN (Karlsruhe Tritium Neutrino Experiment). Electrons, produced from the decay of tritium, pass through the spectrometer, which filters them according to their different energy levels. The experiment, which is just getting under way, may obtain measurements of single electrons, but at a cost.
“In KATRIN, the electrons are detected in a silicon detector, which means the electrons smash into the crystal, and a lot of random things happen, essentially destroying the electrons,” says Daniel Furse, a graduate student in physics, and a co-author on the paper. “We still want to measure the energy of electrons, but we do it in a nondestructive way.”
The group’s setup has an additional advantage: size. The detector essentially fits on a tabletop, and the space in which electrons are detected is smaller than a postage stamp. In contrast, KATRIN’s spectrometer, when delivered to Karlsruhe, barely fit through the city’s streets.
Tuning in
Furse and Formaggio’s detector — an experiment called “Project 8” — is based on a decades-old phenomenon known as cyclotron radiation, in which charged particles such as electrons emit radio waves in a magnetic field. It turns out electrons emit this radiation at a frequency similar to that of military radio communications.
“It’s the same frequency that the military uses — 26 gigahertz,” Formaggio says. “And it turns out the baseline frequency changes very slightly if the electron has energy. So we said, ‘Why not look at the radiation [electrons] emit directly?’”
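
The quoted numbers are easy to check from the cyclotron-frequency formula f = eB / (2π γ mₑ). Here B = 1 tesla is an assumed field strength for illustration; γ grows with the electron's kinetic energy, which is why the frequency shifts slightly with energy:

```python
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
M_ELECTRON = 9.1093837015e-31   # electron mass, kg
M_ELECTRON_KEV = 510.998950     # electron rest energy, keV

def cyclotron_freq_ghz(kinetic_energy_kev: float, b_tesla: float = 1.0) -> float:
    """Relativistic cyclotron frequency f = eB / (2*pi*gamma*m_e), in GHz."""
    gamma = 1.0 + kinetic_energy_kev / M_ELECTRON_KEV
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_ELECTRON) / 1e9

# A 30 keV electron, typical of conversion electrons from krypton-83m:
print(round(cyclotron_freq_ghz(30.0), 1))  # 26.4
```

As the electron radiates and loses energy, γ falls and the frequency rises — the upward "chirp" Formaggio describes.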
Formaggio and former postdoc Benjamin Monreal, now an assistant professor of physics at UCSB, reasoned that if they could tune into this baseline frequency, they could catch electrons as they shot out of a decaying radioactive gas, and measure their energy in a magnetic field.
“If you could measure the frequency of this radio signal, you could measure the energy potentially much more accurately than you can with any other method,” Furse says. “The problem is, you’re looking at this really weak signal over a very short amount of time, and it’s tough to see, which is why no one has ever done it before.”
It took five years of fits and starts before the group was finally able to build an accurate detector. Once the researchers turned the detector on, they were able to record individual electrons within the first 100 milliseconds of the experiment — although the analysis took a bit longer.
“Our software was so slow at processing things that we could tell funny things were happening because, all of a sudden, our file size became larger, as these things started appearing,” Formaggio recalls.
He says the precision of the measurements obtained so far in krypton gas has encouraged the team to move on to tritium — a goal Formaggio says may be attainable in the next year or two — and pave a path toward measuring the mass of the neutrino.
Steven Elliott, a technical staff member at Los Alamos National Laboratory, says the group’s new detector “represents a very significant result.” In order to use the detector to measure the mass of a neutrino, Elliott adds, the group will have to make multiple improvements, including developing a bigger cell to contain a larger amount of tritium.
“This was the first step, albeit a very important step, along the way to building a next-generation experiment,” says Elliott, who did not contribute to the research. “As a result, the neutrino community is very impressed with the concept and execution of this experiment.”
This research was funded in part by the Department of Energy and the National Science Foundation.


Giant Galaxies Die from the Inside Out

VLT and Hubble observations show that star formation shuts down in the centres of elliptical galaxies first

Astronomers have shown for the first time how star formation in “dead” galaxies sputtered out billions of years ago. ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have revealed that three billion years after the Big Bang, these galaxies still made stars on their outskirts, but no longer in their interiors. The quenching of star formation seems to have started in the cores of the galaxies and then spread to the outer parts. The results will be published in the 17 April 2015 issue of the journal Science.

Star formation in what are now “dead” galaxies sputtered out billions of years ago. ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have revealed that three billion years after the Big Bang, these galaxies still made stars on their outskirts, but no longer in their interiors. The quenching of star formation seems to have started in the cores of the galaxies and then spread to the outer parts.
This diagram illustrates this process. Galaxies in the early Universe appear at the left. The blue regions are where star formation is in progress and the red regions are the “dead” regions where only older redder stars remain and there are no more young blue stars being formed. The resulting giant spheroidal galaxies in the modern Universe appear on the right.
Credit: ESO

A major astrophysical mystery has centred on how massive, quiescent elliptical galaxies, common in the modern Universe, quenched their once furious rates of star formation. Such colossal galaxies, often also called spheroids because of their shape, typically pack in stars ten times as densely in the central regions as in our home galaxy, the Milky Way, and have about ten times its mass.

Astronomers refer to these big galaxies as red and dead as they exhibit an ample abundance of ancient red stars, but lack young blue stars and show no evidence of new star formation. The estimated ages of the red stars suggest that their host galaxies ceased to make new stars about ten billion years ago. This shutdown began right at the peak of star formation in the Universe, when many galaxies were still giving birth to stars at a pace about twenty times faster than nowadays.

“Massive dead spheroids contain about half of all the stars that the Universe has produced during its entire life,” said Sandro Tacchella of ETH Zurich in Switzerland, lead author of the article. “We cannot claim to understand how the Universe evolved and became as we see it today unless we understand how these galaxies come to be.”

Tacchella and colleagues observed a total of 22 galaxies, spanning a range of masses, from an era about three billion years after the Big Bang [1]. The SINFONI instrument on ESO’s Very Large Telescope (VLT) collected light from this sample of galaxies, showing precisely where they were churning out new stars. SINFONI could make these detailed measurements of distant galaxies thanks to its adaptive optics system, which largely cancels out the blurring effects of Earth’s atmosphere.

The researchers also trained the NASA/ESA Hubble Space Telescope on the same set of galaxies, taking advantage of the telescope’s location in space above our planet’s distorting atmosphere. Hubble’s WFC3 camera snapped images in the near-infrared, revealing the spatial distribution of older stars within the actively star-forming galaxies.

“What is amazing is that SINFONI’s adaptive optics system can largely beat down atmospheric effects and gather information on where the new stars are being born, and do so with precisely the same accuracy as Hubble allows for the stellar mass distributions,” commented Marcella Carollo, also of ETH Zurich and co-author of the study.

According to the new data, the most massive galaxies in the sample kept up a steady production of new stars in their peripheries. In their bulging, densely packed centres, however, star formation had already stopped.

“The newly demonstrated inside-out nature of star formation shutdown in massive galaxies should shed light on the underlying mechanisms involved, which astronomers have long debated,” says Alvio Renzini, Padova Observatory, of the Italian National Institute of Astrophysics.

A leading theory is that star-making materials are scattered by torrents of energy released by a galaxy’s central supermassive black hole as it sloppily devours matter. Another idea is that fresh gas stops flowing into a galaxy, starving it of fuel for new stars and transforming it into a red and dead spheroid.

“There are many different theoretical suggestions for the physical mechanisms that led to the death of the massive spheroids,” said co-author Natascha Förster Schreiber, at the Max-Planck-Institut für extraterrestrische Physik in Garching, Germany. “Discovering that the quenching of star formation started from the centres and marched its way outwards is a very important step towards understanding how the Universe came to look like it does now.”

[1] The Universe’s age is about 13.8 billion years, so the galaxies studied by Tacchella and colleagues are generally seen as they were more than 10 billion years ago.

Source: ESO

Study on MOOCs provides new insights on an evolving space

Findings suggest many teachers enroll, learner intentions matter, and cost boosts completion rates.

CAMBRIDGE, Mass. – Today, a joint MIT and Harvard University research team published one of the largest investigations of massive open online courses (MOOCs) to date. Building on these researchers’ prior work — a January 2014 report describing the first year of open online courses launched on edX, a nonprofit learning platform founded by the two institutions — the latest effort incorporates another year of data, bringing the total to nearly 70 courses in subjects from programming to poetry.

“We explored 68 certificate-granting courses, 1.7 million participants, 10 million participant-hours, and 1.1 billion participant-logged events,” says Andrew Ho, a professor at the Harvard Graduate School of Education. The research team also used surveys to gain additional information about participants’ backgrounds and their intentions.

Ho and Isaac Chuang, a professor of electrical engineering and computer science and senior associate dean of digital learning at MIT, led a group effort that delved into the demographics of MOOC learners, analyzed participant intent, and looked at patterns that “serial MOOCers,” or those taking more than one course, tend to pursue.

“What jumped out for me was the survey that revealed that in some cases as many as 39 percent of our learners are teachers,” Chuang says. “This finding forces us to broaden our conceptions of who MOOCs serve and how they might make a difference in improving learning.”

Key findings

The researchers conducted a trend analysis that showed a rising share of female, U.S.-based, and older participants, as well as a survey analysis of intent, revealing that almost half of registrants were not interested in or unsure about certification. In this study, the researchers redefined their population of learners from those who simply registered for courses (and took no subsequent action) — a metric used in prior findings and often cited by MOOC providers — to those who participated (such as by logging into the course at least once).

1. Participation in HarvardX and MITx open online courses has grown steadily, while participation in repeated courses has declined and then stabilized.

From July 24, 2012, through Sept. 21, 2014, an average of 1,300 new participants joined a HarvardX or MITx course each day, for a total of 1 million unique participants and 1.7 million total participants. With the increase in second and third versions of courses, the researchers found that participation in second versions declined by 43 percent, while there was stable participation between versions two and three. There were outliers, such as the HarvardX course CS50x (Introduction to Computer Science), which doubled in size, perhaps due to increased student flexibility: Students in this course could participate over a yearlong period at their own pace, and complete at any time.

2. A slight majority of MOOC takers are seeking certification, and many participants are teachers.

Among the one-third of participants who responded to a survey about their intentions, 57 percent stated their desire to earn a certificate; nearly a quarter of those respondents went on to earn certificates. Further, among participants who were unsure or did not intend to earn a certificate, 8 percent ultimately did so. These learners appear to have been inspired to finish a MOOC even after initially stating that they had no intention of doing so.

Among 200,000 participants who responded to a survey about teaching, 39 percent self-identified as a past or present teacher; 21 percent of those teachers reported teaching in the course topic area. The strong participation by teachers suggests that even participants who are uninterested in certification may still make productive use of MOOCs.

3. Academic areas matter when it comes to participation, certification, and course networks.

Participants were drawn to computer science courses in particular, with per-course participation numbers nearly four times higher than courses in the humanities, sciences, and social sciences. That said, certificate rates in computer science and other science- and technology-based offerings (7 percent and 6 percent, respectively) were about half of those in the humanities and social sciences.

The larger data sets also allowed the researchers to study those participating in more than one course, revealing that computer science courses serve as hubs for students, who naturally move to and from related courses. Intentional sequencing, as was done for the 10-part HarvardX Chinese history course “ChinaX,” led to some of the highest certification rates in the study. Other courses with high certification rates were “Introduction to Computer Science” from MITx and “Justice” and “Health in Numbers” from HarvardX.

4. Those opting for fee-based ID-verified certificates certify at higher rates.

Across 12 courses, participants who paid for “ID-verified” certificates (with costs ranging from $50 to $250) earned certifications at a higher rate than other participants: 59 percent, on average, compared with 5 percent. Students opting for the ID-verified track appear to have stronger intentions to complete courses, and the monetary stake may add an extra form of motivation.

Questions and implications

Based upon these findings, Chuang and Ho identified questions that might “reset and reorient expectations” around MOOCs.

First, while many MOOC creators and providers have increased access to learning opportunities, those who are accessing MOOCs are disproportionately those who already have college and graduate degrees. The researchers do not necessarily see this as a problem, as academic experience may be a requirement in advanced courses. However, to serve underrepresented and traditionally underserved groups, the data suggest that proactive strategies may be necessary.

“These free, open courses are phenomenal opportunities for millions of learners,” Ho emphasizes, “but equity cannot be increased just by opening doors. We hope that our data help teachers and institutions to think about their intended audiences, and serve as a baseline for charting progress.”

Second, if improving online and on-campus learning is a priority, then “the flow of pedagogical innovations needs to be formalized,” Chuang says. For example, many of the MOOCs in the study used innovations from their campus counterparts, like physics assessments from MIT and close-reading practices from Harvard’s classics courses. Likewise, residential faculty are using MOOC content, such as videos and assessment scoring algorithms, in smaller, traditional lecture courses.

“The real potential is in the fostering of feedback loops between the two realms,” Chuang says. “In particular, the high number of teacher participants signals great potential for impact beyond Harvard and MIT, especially if deliberate steps could be taken to share best practices.”

Third, advancing research through MOOCs may require a more nuanced definition of audience. Much of the research to date has done little to differentiate among the diverse participants in these free, self-paced learning environments.

“While increasing completion has been a subject of interest, given that many participants have limited, uncertain, or zero interest in completing MOOCs, exerting research muscle to indiscriminately increase completion may not be productive,” Ho explains. “Researchers might want to focus more specifically on well-surveyed or paying subpopulations, where we have a better sense of their expectations and motivations.”

More broadly, Ho and Chuang hope to showcase the potential and diversity of MOOCs and MOOC data by developing “Top 5” lists based upon course attributes, such as scale (an MIT computer science course clocked in with 900,000 participant hours); demographics (the MOOC with the most female representation is a museum course from HarvardX called “Tangible Things,” while MITx’s computing courses attracted the largest global audience); and type and level of interaction (those in ChinaX most frequently posted in online forums, while those in an introduction to computer science course from MITx most frequently played videos).

“These courses reflect the breadth of our university curricula, and we felt the need to highlight their diverse designs, philosophies, audiences, and learning outcomes in our analyses,” Chuang says. “Which course is right for you? It depends, and these lists might help learners decide what qualities in a given MOOC are most important to them.”

Additional authors on the report included Justin Reich, Jacob Whitehill, Joseph Williams, Glenn Lopez, John Hansen, and Rebecca Petersen from Harvard, and Cody Coleman and Curtis Northcutt from MIT.


Related links

Paper: “HarvardX and MITx: Two years of open online courses fall 2012-summer 2014”

Office of Digital Learning

MITx working papers

HarvardX working papers

Related MIT News

ARCHIVE: MIT and Harvard release working papers on open online courses

ARCHIVE: Reviewing online homework at scale

ARCHIVE: Study: Online classes really do work

ARCHIVE: The future of MIT education looks more global, modular, and flexible

Source: MIT News Office

New kind of “tandem” solar cell developed: MIT Research

Researchers combine two types of photovoltaic material to make a cell that harnesses more sunlight.

By David Chandler


CAMBRIDGE, Mass. – Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight-absorbing material in order to harvest a broader range of the sun’s energy. The development could lead to photovoltaic cells that are more efficient than those currently used in solar-power installations, the researchers say.

The new cell uses a layer of silicon — which forms the basis for most of today’s solar panels — but adds a semi-transparent layer of a material called perovskite, which can absorb higher-energy particles of light. Unlike an earlier “tandem” solar cell reported by members of the same team earlier this year — in which the two layers were physically stacked, but each had its own separate electrical connections — the new version has both layers connected together as a single device that needs only one control circuit.

The new findings are reported in the journal Applied Physics Letters by MIT graduate student Jonathan Mailoa; associate professor of mechanical engineering Tonio Buonassisi; Colin Bailie and Michael McGehee at Stanford; and four others.

“Different layers absorb different portions of the sunlight,” Mailoa explains. In the earlier tandem solar cell, the two layers of photovoltaic material could be operated independently of each other and required their own wiring and control circuits, allowing each cell to be tuned independently for optimal performance.

By contrast, the new combined version should be much simpler to make and install, Mailoa says. “It has advantages in terms of simplicity, because it looks and operates just like a single silicon cell,” he says, with only a single electrical control circuit needed.

One tradeoff is that the current produced is limited by the capacity of the lesser of the two layers. Electrical current, Buonassisi explains, can be thought of as analogous to the volume of water passing through a pipe, which is limited by the diameter of the pipe: If you connect two lengths of pipe of different diameters, one after the other, “the amount of water is limited by the narrowest pipe,” he says. Combining two solar cell layers in series has the same limiting effect on current.
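The pipe analogy can be put into numbers with a short sketch. The current densities below are illustrative, not figures from the paper; the point is only that in a series-connected tandem the weaker layer caps the output, which is why the team works to match the two currents.

```python
# Toy model of current matching in a two-terminal (series) tandem cell.
# The current-density values are made up for illustration.

def series_tandem_current(j_top_ma_cm2, j_bottom_ma_cm2):
    """In a series-connected tandem, the same current must flow through
    both junctions, so output is capped by the weaker layer."""
    return min(j_top_ma_cm2, j_bottom_ma_cm2)

# A top cell producing 14 mA/cm^2 over a bottom cell producing
# 18 mA/cm^2 delivers only 14 mA/cm^2 overall:
print(series_tandem_current(14.0, 18.0))  # 14.0

# Tuning the layers so both produce ~16 mA/cm^2 recovers the loss:
print(series_tandem_current(16.0, 16.0))  # 16.0
```

This is the sense in which matching the two layers’ currents “as precisely as possible” directly determines the combined cell’s output.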

To address that limitation, the team aims to match the current output of the two layers as precisely as possible. In this proof-of-concept solar cell, this means the total power output is about the same as that of conventional solar cells; the team is now working to optimize that output.

Perovskites have been studied for potential electronic uses including solar cells, but this is the first time they have been successfully paired with silicon cells in this configuration, a feat that posed numerous technical challenges. Now the team is focusing on increasing the power efficiency — the percentage of sunlight’s energy that gets converted to electricity — that is possible from the combined cell. In this initial version, the efficiency is 13.7 percent, but the researchers say they have identified low-cost ways of improving this to about 30 percent — a substantial improvement over today’s commercial silicon-based solar cells — and they say this technology could ultimately achieve a power efficiency of more than 35 percent.

They will also explore how to easily manufacture the new type of device, but Buonassisi says that should be relatively straightforward, since the materials lend themselves to being made through methods very similar to conventional silicon-cell manufacturing.

One hurdle is making the material durable enough to be commercially viable: The perovskite material degrades quickly in open air, so it either needs to be modified to improve its inherent durability or encapsulated to prevent exposure to air — without adding significantly to manufacturing costs and without degrading performance.

This exact formulation may not turn out to be the most advantageous for better solar cells, Buonassisi says, but is one of several pathways worth exploring. “Our job at this point is to provide options to the world,” he says. “The market will select among them.”

The research team also included Eric Johlin PhD ’14 and postdoc Austin Akey at MIT, and Eric Hoke and William Nguyen of Stanford. It was supported by the Bay Area Photovoltaic Consortium and the U.S. Department of Energy.

Source: News Office

Better debugger

System to automatically find a common type of programming bug significantly outperforms its predecessors.

By Larry Hardesty

CAMBRIDGE, Mass. – Integer overflows are one of the most common bugs in computer programs — not only causing programs to crash but, even worse, potentially offering points of attack for malicious hackers. Computer scientists have devised a battery of techniques to identify them, but all have drawbacks.

This month, at the Association for Computing Machinery’s International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new algorithm for identifying integer-overflow bugs. The researchers tested the algorithm on five common open-source programs, in which previous analyses had found three bugs. The new algorithm found all three known bugs — and 11 new ones.

The variables used by computer programs come in a few standard types, such as floating-point numbers, which can contain decimals; characters, like the letters of this sentence; or integers, which are whole numbers. Every time the program creates a new variable, it assigns it a fixed amount of space in memory.

If a program tries to store too large a number at a memory address reserved for an integer, the operating system will simply lop off the bits that don’t fit. “It’s like a car odometer,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the new paper. “You go over a certain number of miles, you go back to zero.”

In itself, an integer overflow won’t crash a program; in fact, many programmers use integer overflows to perform certain types of computations more efficiently. But if a program tries to do something with an integer that has overflowed, havoc can ensue. Say, for instance, that the integer represents the number of pixels in an image the program is processing. If the program allocates memory to store the image, but its estimate of the image’s size is off by several orders of magnitude, the program will crash.
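The odometer effect is easy to reproduce in a few lines. Python integers never overflow, so this sketch emulates a C program’s 32-bit unsigned arithmetic with a mask; the image-size computation is a hypothetical example, not code from the study.

```python
# Sketch of the odometer effect: a 32-bit unsigned integer wraps
# around to a small value when a computation exceeds 2**32 - 1.
# Python integers are arbitrary-precision, so we emulate the wrap
# with a mask, the way a C uint32_t would behave.

UINT32_MASK = 0xFFFFFFFF

def alloc_size_uint32(width, height, bytes_per_pixel=4):
    """Compute an image buffer size the way 32-bit C code would."""
    return (width * height * bytes_per_pixel) & UINT32_MASK

# A plausible image: no overflow, the size is what you expect.
print(alloc_size_uint32(1920, 1080))    # 8294400

# A maliciously large "image": the multiplication wraps to zero, so
# the program would allocate a buffer far smaller than the data it is
# about to write into it -- the classic exploitable pattern.
print(alloc_size_uint32(65536, 65536))  # 0
```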

Charting a course

Any program can be represented as a flow chart — or, more technically, a graph, with boxes that represent operations connected by line segments that represent the flow of data between operations. Any given program input will trace a single route through the graph. Prior techniques for finding integer-overflow bugs would start at the top of the graph and begin working through it, operation by operation.

For even a moderately complex program, however, that graph is enormous; exhaustive exploration of the entire thing would be prohibitively time-consuming. “What this means is that you can find a lot of errors in the early input-processing code,” says Martin Rinard, an MIT professor of computer science and engineering and a co-author on the new paper. “But you haven’t gotten past that part of the code before the whole thing poops out. And then there are all these errors deep in the program, and how do you find them?”

Rinard, Sidiroglou-Douskos, and several other members of Rinard’s group — researchers Eric Lahtinen and Paolo Piselli and graduate students Fan Long, Doekhwan Kim, and Nathan Rittenhouse — take a different approach. Their system, dubbed DIODE (for Directed Integer Overflow Detection), begins by feeding the program a single sample input. As that input is processed, however — as it traces a path through the graph — the system records each of the operations performed on it by adding new terms to what’s known as a “symbolic expression.”

“These symbolic expressions are complicated like crazy,” Rinard explains. “They’re bubbling up through the very lowest levels of the system into the program. This 32-bit integer has been built up of all these complicated bit-level operations that the lower-level parts of your system do to take this out of your input file and construct those integers for you. So if you look at them, they’re pages long.”

Trigger warning

When the program reaches a point at which an integer is involved in a potentially dangerous operation — like a memory allocation — DIODE records the current state of the symbolic expression. The initial test input won’t trigger an overflow, but DIODE can analyze the symbolic expression to calculate an input that will.

The process still isn’t over, however: Well-written programs frequently include input checks specifically designed to prevent problems like integer overflows, and the new input, unlike the initial input, might fail those checks. So DIODE seeds the program with its new input, and if it fails such a check, it imposes a new constraint on the symbolic expression and computes a new overflow-triggering input. This process continues until the system either finds an input that can pass the checks but still trigger an overflow, or it concludes that triggering an overflow is impossible.
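That refinement loop can be caricatured in miniature. This is not the actual system — DIODE operates on program binaries and hands pages-long symbolic expressions to a constraint solver — but a toy with a hand-written “program,” a single input check, and brute-force search shows the shape of the search for an input that passes the checks yet still overflows.

```python
# Toy version of DIODE's loop: find an input that satisfies the
# program's own sanity checks but still overflows a 32-bit value
# at an allocation site. All names here are illustrative.

UINT32_MAX = 2**32 - 1

def program_alloc_expr(n):
    """The expression at the allocation site: the program allocates
    n * 4 bytes to hold n four-byte records."""
    return n * 4

def input_check(n):
    """A sanity check the program applies to its input first."""
    return 0 < n < 2**31  # rejects non-positive and huge counts

def find_overflow_trigger(expr, checks, limit=2**31):
    """Search for an input that passes every check yet pushes the
    allocation expression past the 32-bit range (an overflow).
    Real DIODE solves constraints instead of enumerating inputs."""
    for n in range(limit - 8, limit):  # probe near the check boundary
        if all(check(n) for check in checks) and expr(n) > UINT32_MAX:
            return n
    return None

trigger = find_overflow_trigger(program_alloc_expr, [input_check])
print(trigger)  # 2147483640: passes the check, but 4*n exceeds 2**32 - 1
```

The trigger value sits just under the check’s boundary: small enough to look legitimate, large enough that multiplying by four wraps a 32-bit counter.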

If DIODE does find a trigger value, it reports it, providing developers with a valuable debugging tool. Indeed, since DIODE doesn’t require access to a program’s source code but works on its “binary” — the executable version of the program — a program’s users could run it and then send developers the trigger inputs as graphic evidence that they may have missed security vulnerabilities.

Source: News Office

A cartoon illustration of a levitated drop of superfluid helium. A single photon circulating inside the drop (red arrow) will be used to produce the superposition. The drop's gravitational field (illustrated schematically in the background) may play a role in limiting the lifetime of such a superposition.

Credit: Yale News

Opening a window on quantum gravity

Yale University has received a grant from the W. M. Keck Foundation to fund experiments that researchers hope will provide new insights into quantum gravity. Jack Harris, associate professor of physics, will lead a Yale team that aims to address a long-standing question in physics — how the classical behavior of macroscopic objects emerges from microscopic constituents that obey the laws of quantum mechanics.

Very small objects like photons and electrons are known for their odd behavior. Thanks to the laws of quantum mechanics, they can act as particles or waves, appear in multiple places at once, and mysteriously interact over great distances. The question is why these behaviors are not observed in larger objects.


Scientists know that friction plays an important part in producing classical behavior in macroscopic objects, but many suspect that gravity also suppresses quantum effects. Unfortunately, there has been no practical way to test this possibility, and in the absence of a full quantum theory of gravity, it is difficult even to make any quantitative predictions.

To address this problem, Harris will create a novel instrument that will enable a drop of liquid helium to exhibit quantum mechanical effects. “A millimeter across,” Harris said, “our droplet will be five orders of magnitude more massive than any other object in which quantum effects have been observed. It will enable us to explore quantum behavior on unprecedentedly macroscopic scales and to provide the first experimental tests of leading models of gravity at the quantum level.”

Game-changing research

The W.M. Keck Foundation grant will fund five years of activity at the Harris lab, which is part of Yale’s Department of Physics. In the first year, Harris and his team will construct their apparatus, and in subsequent years they will use it to perform increasingly sophisticated experiments.

“We are extremely grateful to the W.M. Keck Foundation for this generous support,” said Steven Girvin, the Eugene Higgins Professor of Physics and deputy provost for research. “This is a forward-looking grant that will advance truly ground-breaking research.”

Girvin, whose own research interests include quantum computing, described the Harris project as a possible game-changer. “Truly quantum mechanical behaviors have been observed in the flight of molecules through a vacuum and in the flow of electrons through superconductive circuits, but nothing has been accomplished on this scale. If Jack succeeds, this would be the first time that an object visible to the naked eye has bulk motion that exhibits genuine quantum mechanical effects.”

Into the whispering gallery

To explain his project, Harris invokes an architectural quirk of St. Paul’s cathedral, a London landmark with a famous “whispering gallery.” High up in its main dome, a whisper uttered against one wall is easily audible at great distances, as the sound waves skim along the dome’s interior. Harris plans to create his own whispering gallery, albeit on a smaller scale, using a droplet of liquid helium suspended in a powerful magnetic field. Rather than sound waves, Harris’ gallery will bounce a single photon.

This approach is closely related to an idea proposed by Albert Einstein in the 1920s, but until now, it has remained beyond the technical capabilities of experimentalists. To complete the experiment, Harris will need to combine recent advances in three different areas of physics: the study of optical cavities (objects that can capture photons), magnetic levitation, and the strange, frictionless world of superfluid helium.

“Superfluid liquid helium has particular properties, like absence of viscosity and near-absence of optical absorption,” Harris explained. “In our device, a drop of liquid helium will be made to capture a single photon, which will bounce around inside. We expect to see the drop respond to the photon.”

“A photon always behaves quantum mechanically,” he added. “If you have a macroscopic object — our helium drop — that responds appreciably to a photon, the quantum mechanical behavior can be transferred to the large object. Our device will be ideally suited to studying quantum effects in the drop’s motion.”

Potential applications for Harris’ research include new approaches to computing, cryptography, and communications. But Harris is most excited about the implications for fundamental physics: “Finding a theory of quantum gravity has been an outstanding challenge in physics for several decades, and it has proceeded largely without input from experiments. We hope that our research can provide some empirical data in this arena.”

About the W.M. Keck Foundation

The W.M. Keck Foundation was established in 1954 by William Myron Keck, founder of the Superior Oil Company. The foundation supports pioneering research in science, engineering, and medicine and has provided generous funding for numerous research initiatives at Yale University. In 2014, the Keck Foundation awarded a separate grant to a team of scientists led by Corey O’Hern, associate professor of mechanical engineering at Yale, to explore the physics of systems composed of macro-sized particles.

Source: Yale News

Fully experimental image of a nanoscaled and ultrafast optical rogue wave retrieved by Near-field Scanning Optical Microscope (NSOM). The flow lines visible in the image represent the direction of light energy. 
Credit: KAUST

Tsunami on demand: the power to harness catastrophic events

A new study published in Nature Physics features a nano-optical chip that makes it possible to generate and control nanoscale rogue waves. The innovative chip was developed by an international team of physicists, led by Andrea Fratalocchi from KAUST (Saudi Arabia), and is expected to have significant applications for energy research and environmental safety.

Can you imagine how much energy is in a tsunami wave, or in a tornado? Energy is all around us, but it is mostly held in a quiet state. Yet there are moments when large amounts of energy build up spontaneously, creating rare phenomena on a potentially disastrous scale. How these events occur is, in many cases, still a mystery.

To reveal the natural mechanisms behind such high-energy phenomena, Andrea Fratalocchi, assistant professor in the Computer, Electrical and Mathematical Science and Engineering Division of King Abdullah University of Science and Technology (KAUST), led a team of researchers from Saudi Arabia and three European universities and research centers in an effort to understand the dynamics of such destructive events and to control their formation in new optical chips, which could open up a range of technological applications. The results and implications of this study are published in the journal Nature Physics.

“I have always been fascinated by the unpredictability of nature,” Fratalocchi said. “And I believe that understanding this complexity is the next frontier that will open cutting edge pathways in science and offer novel applications in a variety of areas.”

Fratalocchi’s team began their research by developing new theoretical ideas to explain the formation of rare energetic natural events such as rogue waves — large surface waves that develop spontaneously in deep water and represent a potential risk for vessels and open-ocean oil platforms.

“Our idea was something never tested before,” Fratalocchi continued. “We wanted to demonstrate that small perturbations of a chaotic sea of interacting waves could, contrary to intuition, control the formation of rare events of exceptional amplitude.”


A planar photonic crystal chip, fabricated at the University of St. Andrews and tested at the FOM institute AMOLF in the Amsterdam Science Park, was used to generate ultrafast (163 fs long) and subwavelength (203 nm wide) nanoscale rogue waves, proving that Fratalocchi’s theory was correct. The newly developed photonic chip offered an exceptional level of controllability over these rare events.

Thomas F. Krauss, head of the Photonics Group and Nanocentre Cleanroom at the University of York, UK, was involved in the development of the experiment and the analysis of the data. He shared, “By realizing a sea of interacting waves on a photonic chip, we were able to study the formation of rare high-energy events in a controlled environment. We noted that these events only happened when some sets of waves were missing, which is one of the key insights of our study.”

Kobus Kuipers, head of nanophotonics at FOM institute AMOLF, NL, who was involved in the experimental visualization of the rogue waves, was fascinated by their dynamics: “We have developed a microscope that allows us to visualize optical behavior at the nanoscale. Unlike conventional wave behavior, it was remarkable to see the rogue waves suddenly appear, seemingly out of nowhere, and then disappear again… as if they had never been there.”

Andrea Di Falco, leader of the Synthetic Optics group at the University of St. Andrews said, “The advantage of using light confined in an optical chip is that we can control very carefully how the energy in a chaotic system is dissipated, giving rise to these rare and extreme events. It is as if we were able to produce a determined amount of waves of unusual height in a small lake, just by accurately landscaping its coasts and controlling the size and number of its emissaries.”

The outcomes of this project point to leading-edge technological applications in energy research, high-speed communications, and disaster preparedness.

Fratalocchi and the team believe their research represents a major milestone for KAUST and for the field. “This discovery can change once and for all the way we look at catastrophic events,” concludes Fratalocchi, “opening new perspectives in preventing their destructive appearance on large scales, or using their unique power for ideating new applications at the nanoscale.”

The title of the Nature Physics paper is “Triggering extreme events at the nanoscale in photonic seas.” The paper is accessible on the Nature Physics website: http://dx.doi.org/10.1038/nphys3263

Source: KAUST News

A second minor planet may possess Saturn-like rings

Researchers detect features around Chiron that may signal rings, jets, or a shell of dust.

By Jennifer Chu

CAMBRIDGE, Mass. – There are only five bodies in our solar system that are known to bear rings. The most obvious is the planet Saturn; to a lesser extent, rings of gas and dust also encircle Jupiter, Uranus, and Neptune. The fifth member of this haloed group is Chariklo, one of a class of minor planets called centaurs: small, rocky bodies that possess qualities of both asteroids and comets.

Scientists only recently detected Chariklo’s ring system — a surprising finding, as it had been thought that centaurs are relatively dormant. Now scientists at MIT and elsewhere have detected a possible ring system around a second centaur, Chiron.

In November 2011, the group observed a stellar occultation in which Chiron passed in front of a bright star, briefly blocking its light. The researchers analyzed the star’s light emissions, and the momentary shadow created by Chiron, and identified optical features that suggest the centaur may possess a circulating disk of debris. The team believes the features may signify a ring system, a circular shell of gas and dust, or symmetric jets of material shooting out from the centaur’s surface.

“It’s interesting, because Chiron is a centaur — part of that middle section of the solar system, between Jupiter and Pluto, where we originally weren’t thinking things would be active, but it’s turning out things are quite active,” says Amanda Bosh, a lecturer in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

Bosh and her colleagues at MIT — Jessica Ruprecht, Michael Person, and Amanda Gulbis — have published their results in the journal Icarus.

Catching a shadow

Chiron, discovered in 1977, was the first planetary body categorized as a centaur, after the mythological Greek creature — a hybrid of man and beast. Like their mythological counterparts, centaurs are hybrids, embodying traits of both asteroids and comets. Today, scientists estimate there are more than 44,000 centaurs in the solar system, concentrated mainly in a band between the orbits of Jupiter and Pluto.

While most centaurs are thought to be dormant, scientists have seen glimmers of activity from Chiron. Starting in the late 1980s, astronomers observed patterns of brightening from the centaur, as well as activity similar to that of a streaking comet.

In 1993 and 1994, James Elliot, then a professor of planetary astronomy and physics at MIT, observed a stellar occultation of Chiron and made the first estimates of its size. Elliot also observed features in the optical data that looked like jets of water and dust spewing from the centaur’s surface.

Now MIT researchers — some of them former members of Elliot’s group — have obtained more precise observations of Chiron, using two large telescopes in Hawaii: NASA’s Infrared Telescope Facility, on Mauna Kea, and the Las Cumbres Observatory Global Telescope Network, at Haleakala.

In 2010, the team started to chart the orbits of Chiron and nearby stars in order to pinpoint exactly when the centaur might pass across a star bright enough to detect. The researchers determined that such a stellar occultation would occur on Nov. 29, 2011, and reserved time on the two large telescopes in hopes of catching Chiron’s shadow.

“There’s an aspect of serendipity to these observations,” Bosh says. “We need a certain amount of luck, waiting for Chiron to pass in front of a star that is bright enough. Chiron itself is small enough that the event is very short; if you blink, you might miss it.”

The team observed the stellar occultation remotely, from MIT’s Building 54. The entire event lasted just a few minutes, and the telescopes recorded the fading light as Chiron cast its shadow over the telescopes.

Rings around a theory

The group analyzed the resulting light, and detected something unexpected. A simple body, with no surrounding material, would create a straightforward pattern, blocking the star’s light entirely. But the researchers observed symmetrical, sharp features near the start and end of the stellar occultation — a sign that material such as dust might be blocking a fraction of the starlight.

The researchers observed two such features, each about 300 kilometers from the center of the centaur. Judging from the optical data, the features are 3 and 7 kilometers wide, respectively. The features are similar to what Elliot observed in the 1990s.

In light of these new observations, the researchers say that Chiron may still possess symmetrical jets of gas and dust, as Elliot first proposed. However, other interpretations may be equally valid, including the “intriguing possibility,” Bosh says, of a shell or ring of gas and dust.

Ruprecht, who is a researcher at MIT’s Lincoln Laboratory, says it is possible to imagine a scenario in which centaurs may form rings: For example, when a body breaks up, the resulting debris can be captured gravitationally around another body, such as Chiron. Rings can also be leftover material from the formation of Chiron itself.

“Another possibility involves the history of Chiron’s distance from the sun,” Ruprecht says. “Centaurs may have started further out in the solar system and, through gravitational interactions with giant planets, have had their orbits perturbed closer in to the sun. The frozen material that would have been stable out past Pluto is becoming less stable closer in, and can turn into gases that spray dust and material off the surface of a body.”

An independent group has since combined the MIT group’s occultation data with other light data, and has concluded that the features around Chiron most likely represent a ring system. However, Ruprecht says that researchers will have to observe more stellar occultations of Chiron to truly determine which interpretation — rings, shell, or jets — is the correct one.

“If we want to make a strong case for rings around Chiron, we’ll need observations by multiple observers, distributed over a few hundred kilometers, so that we can map the ring geometry,” Ruprecht says. “But that alone doesn’t tell us if the rings are a temporary feature of Chiron, or a more permanent one. There’s a lot of work that needs to be done.”

Nevertheless, Bosh says the possibility of a second ringed centaur in the solar system is an enticing one.

“Until Chariklo’s rings were found, it was commonly believed that these smaller bodies don’t have ring systems,” Bosh says. “If Chiron has a ring system, it will show that rings are more common than previously thought.”

This research was funded in part by NASA and the National Research Foundation of South Africa.

Source: MIT News Office