
Income inequality linked to export “complexity”

The mix of products that countries export is a good predictor of income distribution, study finds.

By Larry Hardesty


 

CAMBRIDGE, Mass. – In a series of papers over the past 10 years, MIT Professor César Hidalgo and his collaborators have argued that the complexity of a country’s exports — not just their diversity but the expertise and technological infrastructure required to produce them — is a better predictor of future economic growth than factors economists have historically focused on, such as capital and education.

Now, a new paper by Hidalgo and his colleagues, appearing in the journal World Development, argues that everything else being equal, the complexity of a country’s exports also correlates with its degree of economic equality: The more complex a country’s products, the greater equality it enjoys relative to similar-sized countries with similar-sized economies.

“When people talk about the role of policy in inequality, there is an implicit assumption that you can always reduce inequality using only redistributive policies,” says Hidalgo, the Asahi Broadcasting Corporation Associate Professor of Media Arts and Sciences at the MIT Media Lab. “What these new results are telling us is that the effectiveness of policy is limited because inequality lives within a range of values that are determined by your underlying industrial structure.

“So if you’re a country like Venezuela, no matter how much money Chavez or Maduro gives out, you’re not going to be able to reduce inequality, because, well, all the money is coming in from one industry, and the 30,000 people involved in that industry of course are going to have an advantage in the economy. While if you’re in a country like Germany or Switzerland, where the economy is very diversified, and there are many people who are generating money in many different industries, firms are going to be under much more pressure to be more inclusive and redistributive.”

Joining Hidalgo on the paper are first author Dominik Hartmann, who was a postdoc in Hidalgo’s group when the work was done and is now a research fellow at the Fraunhofer Center for International Management and Knowledge Economy in Leipzig, Germany; Cristian Jara-Figueroa and Manuel Aristarán, MIT graduate students in media arts and sciences; and Miguel Guevara, a professor of computer science at Playa Ancha University in Valparaíso, Chile, who earned his PhD at the MIT Media Lab.

Quantifying complexity

For Hidalgo and his colleagues, the complexity of a product is related to the breadth of knowledge required to produce it. The PhDs who operate a billion-dollar chip-fabrication facility are repositories of knowledge, and the facility itself is the embodiment of knowledge. But complexity also factors in the infrastructure and institutions that facilitate the aggregation of knowledge, such as reliable transportation and communication systems, and a culture of trust that enables productive collaboration.

In the new study, rather than try to itemize and quantify all such factors — probably an impossible task — the researchers made a simplifying assumption: Complex products are rare products exported by countries with diverse export portfolios. For instance, both chromium ore and nonoptical microscopes are rare exports, but the Czech Republic, which is the second-leading exporter of nonoptical microscopes, has a more diverse export portfolio than South Africa, the leading exporter of chromium ore.
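
As a rough illustration of how diversity and ubiquity can be combined into a complexity score (the "method of reflections" used in Hidalgo's earlier work), consider the sketch below. The toy export matrix is invented, and the published Economic Complexity Index is derived from an eigenvector formulation of the same idea rather than this literal iteration.

```python
# Rough sketch of the diversity/ubiquity recursion ("method of reflections")
# behind economic-complexity measures. The tiny export matrix is invented.
import numpy as np

# Rows = countries, columns = products; 1 means the country is a significant
# exporter of that product.
M = np.array([
    [1, 1, 1, 1, 1],   # diversified exporter
    [1, 1, 0, 0, 1],   # moderately diversified exporter
    [0, 0, 0, 0, 1],   # exports a single, widely exported product
], dtype=float)

diversity = M.sum(axis=1)   # k_c,0: how many products each country exports
ubiquity = M.sum(axis=0)    # k_p,0: how many countries export each product

k_c, k_p = diversity.copy(), ubiquity.copy()
for _ in range(4):          # a few rounds of mutual refinement
    k_c, k_p = (M @ k_p) / diversity, (M.T @ k_c) / ubiquity

# After an even number of reflections, higher values go to countries whose
# products are mostly exported by other diversified economies.
print(np.round(k_c, 2))
```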

The researchers compared each country’s complexity measure to its Gini coefficient, the most widely used measure of income inequality. They also compared Gini coefficients to countries’ per-capita gross domestic products (GDPs) and to standard measures of institutional development and education.

Predictive power

According to the researchers’ analysis of economic data from 1996 to 2008, per-capita GDP predicts only 36 percent of the variation in Gini coefficients, but product complexity predicts 58 percent. Combining per-capita GDP, export complexity, education levels, and population predicts 69 percent of variation. However, whereas leaving out any of the other three factors lowers that figure to about 68 percent, leaving out complexity lowers it to 61 percent, indicating that the complexity measure captures something crucial that the other factors leave out.
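
The kind of variance-explained comparison described above can be sketched with ordinary least squares on synthetic placeholder data, as below; the variable names and coefficients are illustrative and are not the study's dataset.

```python
# Sketch of comparing variance explained (R^2) with and without one predictor,
# using synthetic placeholder data (not the study's data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 150
gdp = rng.normal(size=n)
education = rng.normal(size=n)
population = rng.normal(size=n)
complexity = rng.normal(size=n)
# Synthetic Gini-like outcome that loads most heavily on "complexity".
gini = (0.3 * gdp + 0.2 * education + 0.1 * population
        + 0.6 * complexity + rng.normal(scale=0.5, size=n))

full = np.column_stack([gdp, education, population, complexity])
reduced = np.column_stack([gdp, education, population])

r2_full = LinearRegression().fit(full, gini).score(full, gini)
r2_reduced = LinearRegression().fit(reduced, gini).score(reduced, gini)
print(f"R^2 with complexity: {r2_full:.2f}, without: {r2_reduced:.2f}")
```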

Using trade data from 1963 to 2008, the researchers also showed that countries whose economic complexity increased, such as South Korea, saw reductions in income inequality, while countries whose economic complexity decreased, such as Norway, saw income inequality increase.

Source: MIT News Office


New device could provide electrical power source from walking and other ambient motions: MIT Research

Harnessing the energy of small bending motions
New device could provide electrical power source from walking and other ambient motions.

By David Chandler


 

CAMBRIDGE, Mass.–For many applications such as biomedical, mechanical, or environmental monitoring devices, harnessing the energy of small motions could provide a small but virtually unlimited power supply. While a number of approaches have been attempted, researchers at MIT have now developed a completely new method based on electrochemical principles, which could be capable of harvesting energy from a broader range of natural motions and activities, including walking.

The new system, based on the slight bending of a sandwich of metal and polymer sheets, is described in the journal Nature Communications, in a paper by MIT professor Ju Li, graduate students Sangtae Kim and Soon Ju Choi, and four others.

Most previously designed devices for harnessing small motions have been based on the triboelectric effect (essentially friction, like rubbing a balloon against a wool sweater) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These work well for high-frequency sources of motion such as those produced by the vibrations of machinery. But for typical human-scale motions such as walking or exercising, such systems have limits.

“When you put in an impulse” to such traditional materials, “they respond very well, in microseconds. But this doesn’t match the timescale of most human activities,” says Li, who is the Battelle Energy Alliance Professor in Nuclear Science and Engineering and professor of materials science and engineering. “Also, these devices have high electrical impedance and bending rigidity and can be quite expensive,” he says.

Simple and flexible

By contrast, the new system uses technology similar to that in lithium ion batteries, so it could likely be produced inexpensively at large scale, Li says. In addition, these devices would be inherently flexible, making them more compatible with wearable technology and less likely to break under mechanical stress.

While piezoelectric materials are based on a purely physical process, the new system is electrochemical, like a battery or a fuel cell. It uses two thin sheets of lithium alloys as electrodes, separated by a layer of porous polymer soaked with liquid electrolyte that is efficient at transporting lithium ions between the metal plates. But unlike a rechargeable battery, which takes in electricity, stores it, and then releases it, this system takes in mechanical energy and puts out electricity.

When bent even a slight amount, the layered composite produces a pressure difference that squeezes lithium ions through the polymer (like the reverse osmosis process used in water desalination). It also produces a counteracting voltage and an electrical current in the external circuit between the two electrodes, which can then be used directly to power other devices.

Because it requires only a small amount of bending to produce a voltage, such a device could simply have a tiny weight attached to one end to cause the metal to bend as a result of ordinary movements, when strapped to an arm or leg during everyday activities. Unlike batteries and solar cells, the output from the new system comes in the form of alternating current (AC), with the flow moving first in one direction and then the other as the material bends first one way and then back.

This device converts mechanical to electrical energy; therefore, “it is not limited by the second law of thermodynamics,” Li says, which sets an upper limit on the theoretically possible efficiency. “So in principle, [the efficiency] could be 100 percent,” he says. In this first-generation device developed to demonstrate the electrochemomechanical working principle, he says, “the best we can hope for is about 15 percent” efficiency. But the system could easily be manufactured in any desired size and is amenable to industrial manufacturing processes.

Test of time

The test devices maintain their properties through many cycles of bending and unbending, Li reports, with little reduction in performance after 1,500 cycles. “It’s a very stable system,” he says.

Previously, the phenomenon underlying the new device “was considered a parasitic effect in the battery community,” according to Li, and voltage put into the battery could sometimes induce bending. “We do just the opposite,” Li says, putting in the stress and getting a voltage as output. Besides being a potential energy source, he says, this could also be a complementary diagnostic tool in electrochemistry. “It’s a good way to evaluate damage mechanisms in batteries, a way to understand battery materials better,” he says.

In addition to harnessing daily motion to power wearable devices, the new system might also be useful as an actuator with biomedical applications, or used for embedded stress sensors in settings such as roads, bridges, keyboards, or other structures, the researchers suggest.

The team also included postdoc Kejie Zhao (now assistant professor at Purdue University) and visiting graduate student Giorgia Gobbi, and Hui Yang and Sulin Zhang at Penn State. The work was supported by the National Science Foundation, the MIT MADMEC Contest, the Samsung Scholarship Foundation, and the Kwanjeong Educational Foundation.

Source: MIT News Office

Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler


 

CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)
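
For readers who want to experiment with the quantity involved, here is a small sketch using one widely cited empirical approximation of wet-bulb temperature (Stull, 2011), which is reasonable near sea-level pressure. The inputs are illustrative values, not the study's data.

```python
# Sketch: an empirical wet-bulb temperature approximation (Stull, 2011),
# valid near standard sea-level pressure. Inputs below are illustrative only.
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature and relative humidity."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Illustrative hot, humid coastal conditions: 46 C air temperature at 45
# percent relative humidity gives a wet-bulb temperature of roughly 34.8 C,
# approaching the 35 C survivability threshold discussed above.
print(round(wet_bulb_stull(46.0, 45.0), 1))
```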

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear skies, and a large, shallow body of water that absorbs heat: The shallowness of the Persian Gulf produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the annual Hajj, or Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Automating big-data analysis: MIT Research

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

By Larry Hardesty


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
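
A small sketch of that kind of derived feature, using pandas on an invented promotions table (the column names and figures are placeholders):

```python
# Sketch: deriving "span" and "average over span" features from raw dates and
# profits, as described above. The tiny table and column names are invented.
import pandas as pd

promos = pd.DataFrame({
    "promo_start": pd.to_datetime(["2015-01-05", "2015-02-10", "2015-03-20"]),
    "promo_end":   pd.to_datetime(["2015-01-19", "2015-03-03", "2015-03-27"]),
    "total_profit": [14000.0, 33000.0, 9000.0],
})

# The raw dates may matter less than the spans between them...
promos["promo_days"] = (promos["promo_end"] - promos["promo_start"]).dt.days
# ...and total profit less than the average profit per day of the promotion.
promos["profit_per_day"] = promos["total_profit"] / promos["promo_days"]
print(promos[["promo_days", "profit_per_day"]])
```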

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
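
In plain pandas, the first round of that process might look like the sketch below. The toy tables are invented, and the actual Data Science Machine automates and stacks many such operations across linked tables.

```python
# Sketch of the relational feature-generation step described above. The toy
# tables are invented; the Data Science Machine automates and layers many such
# operations across linked tables.
import pandas as pd

items = pd.DataFrame({"item_id": [1, 2, 3], "cost": [5.0, 12.5, 3.0]})
orders = pd.DataFrame({"order_id": [10, 10, 11, 11, 11],
                       "item_id":  [1, 2, 2, 3, 1]})

# Import costs from the items table into the orders table via the identifier...
orders = orders.merge(items, on="item_id")
# ...then aggregate over each order to generate candidate features.
features = orders.groupby("order_id")["cost"].agg(
    total_cost="sum", average_cost="mean", minimum_cost="min")
print(features)
```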

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Source: MIT News Office

 

Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


 

CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
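
For readers who want to see the procedure concretely, here is a compact implementation of that grid-filling algorithm, with one string indexing the rows and the other the columns:

```python
# The Wagner-Fischer dynamic program described above: cell (i, j) holds the
# edit distance between the first i symbols of a and the first j symbols of b.
def edit_distance(a: str, b: str) -> int:
    grid = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        grid[i][0] = i                      # delete all i symbols of a
    for j in range(len(b) + 1):
        grid[0][j] = j                      # insert all j symbols of b
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            grid[i][j] = min(grid[i - 1][j] + 1,             # deletion
                             grid[i][j - 1] + 1,             # insertion
                             grid[i - 1][j - 1] + substitution)
    return grid[len(a)][len(b)]

print(edit_distance("kitten", "sitting"))   # 3 edits
```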

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.
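
The toy sketch below illustrates that split-and-enumerate structure; it is not the paper's actual gadget construction. Each half of the variables is handled separately, and satisfiability then hinges on finding a compatible pair of partial assignments, and it is this pairing step that Indyk and Bačkurs recast as an edit-distance computation.

```python
# Toy sketch of the split-and-reconcile idea (not the paper's gadget
# construction): solve each half of a SAT instance separately, then look for a
# pair of half-assignments that together satisfy every clause.
from itertools import product

# Clauses over variables 0..3; each literal is (variable, required_value).
clauses = [[(0, True), (2, False)],            # x0 OR not x2
           [(1, False), (3, True)],            # not x1 OR x3
           [(0, False), (1, True), (3, False)]]

left_vars, right_vars = [0, 1], [2, 3]         # split the variables in half

def coverage(assignment):
    """Bitmask of clauses already satisfied by a partial assignment."""
    mask = 0
    for i, clause in enumerate(clauses):
        if any(assignment.get(v) == val for v, val in clause):
            mask |= 1 << i
    return mask

def half_assignments(variables):
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))]

left = [coverage(a) for a in half_assignments(left_vars)]
right = [coverage(a) for a in half_assignments(right_vars)]
all_clauses = (1 << len(clauses)) - 1

# Reconciliation: does some pair of halves cover every clause between them?
satisfiable = any(lm | rm == all_clauses for lm in left for rm in right)
print("satisfiable:", satisfiable)
```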

Source: MIT News Office

Researchers unravel secrets of hidden waves

Region of world’s strongest “internal waves” is analyzed in detail; work could help refine climate models.

By David Chandler


CAMBRIDGE, Mass.–Detailed new field studies, laboratory experiments, and simulations of the largest known “internal waves” in the Earth’s oceans — phenomena that play a key role in mixing ocean waters, greatly affecting ocean temperatures — provide a comprehensive new view of how these colossal, invisible waves are born, spread, and die off.

The work, published today in the journal Nature, could add significantly to the improvement of global climate models, the researchers say. The paper is co-authored by 42 researchers from 25 institutions in five countries.

“What this report presents is a complete picture, a cradle-to-grave picture of these waves,” says Thomas Peacock, an associate professor of mechanical engineering at MIT, and one of the paper’s two lead authors.

Internal waves — giant waves, below the surface, that roil stratified layers of heavier, saltier water and lighter, less-salty water — are ubiquitous throughout the world’s oceans. But by far the largest and most powerful known internal waves are those that form in one area of the South China Sea, originating from the Luzon Strait between the Philippines and Taiwan.

These subsurface waves can tower more than 500 meters high, and generate powerful turbulence. Because of their size and behavior, the rise and spread of these waves are important for marine processes, including the supply of nutrients for marine organisms; the distribution of sediments and pollutants; and the propagation of sound waves. They are also a significant factor in the mixing of ocean waters, combining warmer surface waters with cold, deep waters — a process that is essential to understanding the dynamics of global climate.

This international research effort, called IWISE (Internal Waves In Straits Experiment), was a rare undertaking in this field, Peacock says; the last such field study on internal waves on this scale, the Hawaii Ocean Mixing Experiment, concluded in 2002. The new study looked at internal waves that were much stronger, and went significantly further in determining not just how the waves originated, but how their energy dissipated.

One unexpected finding, Peacock says, was the degree of turbulence produced as the waves originate, as tides and currents pass over ridges on the seafloor. “These were unexpected field discoveries,” he says, revealing “some of the most intense mixing ever observed in the deep ocean. It’s like a giant washing machine — the mixing is much more dramatic than we ever expected.”

The new observations, Peacock says, resolve a longstanding technical question about how internal waves propagate — whether the towering waves start out full strength at their point of origin, or whether they continue to build as they spread from that site. Many attempts to answer this question have produced contradictory results over the years.

This new research, which involved placing several long mooring lines from the seafloor to buoys at the surface, with instruments at intervals all along the lines, has decisively resolved that question, Peacock says: The waves grow larger as they propagate. Prior measurements, the new work found, had been drawn from too narrow a slice of the region, resulting in conflicting results — rather like the fable of blind men describing an elephant. The new, more comprehensive data has now resolved the mystery.

The new data also contradict a long-held assumption — a “commonly held belief that was almost stated as fact,” Peacock says — that solitary internal waves are completely absent from the South China Sea during the winter months. But with equipment in place to reliably measure water movement throughout the year, the team found these waves were “carrying on quite happily throughout the entire winter,” Peacock says: Previously, their presence had been masked by the winter’s stormier weather, and by the influence of a strong boundary current that runs along the coast of Taiwan — the regional equivalent of the Gulf Stream.

The improved understanding of internal waves, Peacock says, could be useful for researchers in a number of areas. The waves are key to some ecosystems, for example — some marine creatures essentially “surf” them to move in toward shore, for feeding or breeding; in the South China Sea, this process helps sustain an extensive coral reef system. The waves also help carry heat from the ocean’s surface to its depths, an important parameter in modeling climate.

The research, which was primarily a collaboration between U.S. and Taiwanese scientists, was funded by the U.S. Office of Naval Research and the Taiwan National Science Council.

Source: MIT News Office


New tabletop detector “sees” single electrons

Magnet-based setup may help detect the elusive mass of neutrinos.

By Jennifer Chu


MIT physicists have developed a new tabletop particle detector that is able to identify single electrons in a radioactive gas.
As the gas decays and gives off electrons, the detector uses a magnet to trap them in a magnetic bottle. A radio antenna then picks up very weak signals emitted by the electrons, which can be used to map the electrons’ precise activity over several milliseconds.

Shown here is “event zero,” the first detection of a trapped electron in the MIT physicists’ instrument. The color indicates the electron’s detected power as a function of frequency and time. The sudden “jumps” in frequency indicate an electron collision with the residual hydrogen gas in the cell.
Courtesy of the researchers
Source: MIT News

The team worked with researchers at Pacific Northwest National Laboratory, the University of Washington, the University of California at Santa Barbara (UCSB), and elsewhere to record the activity of more than 100,000 individual electrons in krypton gas.
The majority of electrons observed behaved in a characteristic pattern: As the radioactive krypton gas decays, it emits electrons that vibrate at a baseline frequency before petering out; this frequency spikes again whenever an electron hits an atom of radioactive gas. As an electron ping-pongs against multiple atoms in the detector, its energy appears to jump in a step-like pattern.
“We can literally image the frequency of the electron, and we see this electron suddenly pop into our radio antenna,” says Joe Formaggio, an associate professor of physics at MIT. “Over time, the frequency changes, and actually chirps up. So these electrons are chirping in radio waves.”
Formaggio says the group’s results, published in Physical Review Letters, are a big step toward a more elusive goal: measuring the mass of a neutrino.

A ghostly particle
Neutrinos are among the more mysterious elementary particles in the universe: Billions of them pass through every cell of our bodies each second, and yet these ghostly particles are incredibly difficult to detect, as they don’t appear to interact with ordinary matter. Scientists have set theoretical limits on neutrino mass, but researchers have yet to precisely detect it.
“We have [the mass] cornered, but haven’t measured it yet,” Formaggio says. “The name of the game is to measure the energy of an electron — that’s your signature that tells you about the neutrino.”
As Formaggio explains it, when a radioactive atom such as tritium decays, it turns into an isotope of helium and, in the process, also releases an electron and a neutrino. The energy of all particles released adds up to the original energy of the parent neutron. Measuring the energy of the electron, therefore, can illuminate the energy — and consequently, the mass — of the neutrino.
Scientists agree that tritium, a radioactive isotope of hydrogen, is key to obtaining a precise measurement: As a gas, tritium decays at such a rate that scientists can relatively easily observe its electron byproducts.
Researchers in Karlsruhe, Germany, hope to measure electrons in tritium using a massive spectrometer as part of an experiment named KATRIN (Karlsruhe Tritium Neutrino Experiment). Electrons, produced from the decay of tritium, pass through the spectrometer, which filters them according to their different energy levels. The experiment, which is just getting under way, may obtain measurements of single electrons, but at a cost.
“In KATRIN, the electrons are detected in a silicon detector, which means the electrons smash into the crystal, and a lot of random things happen, essentially destroying the electrons,” says Daniel Furse, a graduate student in physics, and a co-author on the paper. “We still want to measure the energy of electrons, but we do it in a nondestructive way.”
The group’s setup has an additional advantage: size. The detector essentially fits on a tabletop, and the space in which electrons are detected is smaller than a postage stamp. In contrast, KATRIN’s spectrometer, when delivered to Karlsruhe, barely fit through the city’s streets.
Tuning in
Furse and Formaggio’s detector — an experiment called “Project 8” — is based on a decades-old phenomenon known as cyclotron radiation, in which charged particles such as electrons emit radio waves in a magnetic field. It turns out electrons emit this radiation at a frequency similar to that of military radio communications.
“It’s the same frequency that the military uses — 26 gigahertz,” Formaggio says. “And it turns out the baseline frequency changes very slightly if the electron has energy. So we said, ‘Why not look at the radiation [electrons] emit directly?’”
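
A back-of-the-envelope sketch of that relationship appears below, assuming a roughly 1-tesla trapping field and an electron kinetic energy of about 30 keV; both are illustrative values, not specifications taken from the paper.

```python
# Back-of-the-envelope sketch of the cyclotron frequency the article refers to.
# The ~1 tesla field and ~30 keV electron energy are illustrative assumptions.
import math

E_CHARGE = 1.602176634e-19      # electron charge, C
M_ELECTRON = 9.1093837015e-31   # electron mass, kg
ELECTRON_REST_KEV = 511.0       # electron rest energy, keV

def cyclotron_frequency_ghz(b_tesla: float, kinetic_energy_kev: float) -> float:
    gamma = 1.0 + kinetic_energy_kev / ELECTRON_REST_KEV
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_ELECTRON) / 1e9

print(cyclotron_frequency_ghz(1.0, 30.0))   # ~26 GHz baseline, as in the text
print(cyclotron_frequency_ghz(1.0, 0.0))    # frequency rises as energy is lost,
                                            # which is the "chirp" described above
```
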
Formaggio and former postdoc Benjamin Monreal, now an assistant professor of physics at UCSB, reasoned that if they could tune into this baseline frequency, they could catch electrons as they shot out of a decaying radioactive gas, and measure their energy in a magnetic field.
“If you could measure the frequency of this radio signal, you could measure the energy potentially much more accurately than you can with any other method,” Furse says. “The problem is, you’re looking at this really weak signal over a very short amount of time, and it’s tough to see, which is why no one has ever done it before.”
It took five years of fits and starts before the group was finally able to build an accurate detector. Once the researchers turned the detector on, they were able to record individual electrons within the first 100 milliseconds of the experiment — although the analysis took a bit longer.
“Our software was so slow at processing things that we could tell funny things were happening because, all of a sudden, our file size became larger, as these things started appearing,” Formaggio recalls.
He says the precision of the measurements obtained so far in krypton gas has encouraged the team to move on to tritium — a goal Formaggio says may be attainable in the next year or two — and pave a path toward measuring the mass of the neutrino.
Steven Elliott, a technical staff member at Los Alamos National Laboratory, says the group’s new detector “represents a very significant result.” In order to use the detector to measure the mass of a neutrino, Elliott adds, the group will have to make multiple improvements, including developing a bigger cell to contain a larger amount of tritium.
“This was the first step, albeit a very important step, along the way to building a next-generation experiment,” says Elliott, who did not contribute to the research. “As a result, the neutrino community is very impressed with the concept and execution of this experiment.”
This research was funded in part by the Department of Energy and the National Science Foundation.

Study on MOOCs provides new insights on an evolving space

Findings suggest many teachers enroll, learner intentions matter, and cost boosts completion rates.


CAMBRIDGE, Mass. – Today, a joint MIT and Harvard University research team published one of the largest investigations of massive open online courses (MOOCs) to date. Building on these researchers’ prior work — a January 2014 report describing the first year of open online courses launched on edX, a nonprofit learning platform founded by the two institutions — the latest effort incorporates another year of data, bringing the total to nearly 70 courses in subjects from programming to poetry.

“We explored 68 certificate-granting courses, 1.7 million participants, 10 million participant-hours, and 1.1 billion participant-logged events,” says Andrew Ho, a professor at the Harvard Graduate School of Education. The research team also used surveys to gain additional information about participants’ backgrounds and their intentions.

Ho and Isaac Chuang, a professor of electrical engineering and computer science and senior associate dean of digital learning at MIT, led a group effort that delved into the demographics of MOOC learners, analyzed participant intent, and looked at patterns that “serial MOOCers,” or those taking more than one course, tend to pursue.

“What jumped out for me was the survey that revealed that in some cases as many as 39 percent of our learners are teachers,” Chuang says. “This finding forces us to broaden our conceptions of who MOOCs serve and how they might make a difference in improving learning.”

Key findings

The researchers conducted a trend analysis that showed a rising share of female, U.S.-based, and older participants, as well as a survey analysis of intent, revealing that almost half of registrants were not interested in or unsure about certification. In this study, the researchers redefined their population of learners from those who simply registered for courses (and took no subsequent action) — a metric used in prior findings and often cited by MOOC providers — to those who participated (such as by logging into the course at least once).

1. Participation in HarvardX and MITx open online courses has grown steadily, while participation in repeated courses has declined and then stabilized.

From July 24, 2012, through Sept. 21, 2014, an average of 1,300 new participants joined a HarvardX or MITx course each day, for a total of 1 million unique participants and 1.7 million total participants. With the increase in second and third versions of courses, the researchers found that participation in second versions declined by 43 percent, while there was stable participation between versions two and three. There were outliers, such as the HarvardX course CS50x (Introduction to Computer Science), which doubled in size, perhaps due to increased student flexibility: Students in this course could participate over a yearlong period at their own pace, and complete at any time.

2. A slight majority of MOOC takers are seeking certification, and many participants are teachers.

Among the one-third of participants who responded to a survey about their intentions, 57 percent stated their desire to earn a certificate; nearly a quarter of those respondents went on to earn certificates. Further, among participants who were unsure or did not intend to earn a certificate, 8 percent ultimately did so. These learners appear to have been inspired to finish a MOOC even after initially stating that they had no intention of doing so.

Among 200,000 participants who responded to a survey about teaching, 39 percent self-identified as a past or present teacher; 21 percent of those teachers reported teaching in the course topic area. The strong participation by teachers suggests that even participants who are uninterested in certification may still make productive use of MOOCs.

3. Academic areas matter when it comes to participation, certification, and course networks.

Participants were drawn to computer science courses in particular, with per-course participation numbers nearly four times higher than courses in the humanities, sciences, and social sciences. That said, certificate rates in computer science and other science- and technology-based offerings (7 percent and 6 percent, respectively) were about half of those in the humanities and social sciences.

The larger data sets also allowed the researchers to study those participating in more than one course, revealing that computer science courses serve as hubs for students, who naturally move to and from related courses. Intentional sequencing, as was done for the 10-part HarvardX Chinese history course “ChinaX,” led to some of the highest certification rates in the study. Other courses with high certification rates were “Introduction to Computer Science” from MITx and “Justice” and “Health in Numbers” from HarvardX.

4. Those opting for fee-based ID-verified certificates certify at higher rates.

Across 12 courses, participants who paid for “ID-verified” certificates (with costs ranging from $50 to $250) earned certifications at a higher rate than other participants: 59 percent, on average, compared with 5 percent. Students opting for the ID-verified track appear to have stronger intentions to complete courses, and the monetary stake may add an extra form of motivation.

Questions and implications

Based upon these findings, Chuang and Ho identified questions that might “reset and reorient expectations” around MOOCs.

First, while many MOOC creators and providers have increased access to learning opportunities, those who are accessing MOOCs are disproportionately those who already have college and graduate degrees. The researchers do not necessarily see this as a problem, as academic experience may be a requirement in advanced courses. However, to serve underrepresented and traditionally underserved groups, the data suggest that proactive strategies may be necessary.

“These free, open courses are phenomenal opportunities for millions of learners,” Ho emphasizes, “but equity cannot be increased just by opening doors. We hope that our data help teachers and institutions to think about their intended audiences, and serve as a baseline for charting progress.”

Second, if improving online and on-campus learning is a priority, then “the flow of pedagogical innovations needs to be formalized,” Chuang says. For example, many of the MOOCs in the study used innovations from their campus counterparts, like physics assessments from MIT and close-reading practices from Harvard’s classics courses. Likewise, residential faculty are using MOOC content, such as videos and assessment scoring algorithms, in smaller, traditional lecture courses.

“The real potential is in the fostering of feedback loops between the two realms,” Chuang says. “In particular, the high number of teacher participants signals great potential for impact beyond Harvard and MIT, especially if deliberate steps could be taken to share best practices.”

Third, advancing research through MOOCs may require a more nuanced definition of audience. Much of the research to date has done little to differentiate among the diverse participants in these free, self-paced learning environments.

“While increasing completion has been a subject of interest, given that many participants have limited, uncertain, or zero interest in completing MOOCs, exerting research muscle to indiscriminately increase completion may not be productive,” Ho explains. “Researchers might want to focus more specifically on well-surveyed or paying subpopulations, where we have a better sense of their expectations and motivations.”

More broadly, Ho and Chuang hope to showcase the potential and diversity of MOOCs and MOOC data by developing “Top 5” lists based upon course attributes, such as scale (an MIT computer science course clocked in with 900,000 participant hours); demographics (the MOOC with the most female representation is a museum course from HarvardX called “Tangible Things,” while MITx’s computing courses attracted the largest global audience); and type and level of interaction (those in ChinaX most frequently posted in online forums, while those in an introduction to computer science course from MITx most frequently played videos).

“These courses reflect the breadth of our university curricula, and we felt the need to highlight their diverse designs, philosophies, audiences, and learning outcomes in our analyses,” Chuang says. “Which course is right for you? It depends, and these lists might help learners decide what qualities in a given MOOC are most important to them.”

Additional authors on the report included Justin Reich, Jacob Whitehill, Joseph Williams, Glenn Lopez, John Hansen, and Rebecca Petersen from Harvard, and Cody Coleman and Curtis Northcutt from MIT.

###

Related links

Paper: “HarvardX and MITx: Two years of open online courses fall 2012-summer 2014”
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586847

Office of Digital Learning
http://odl.mit.edu

MITx working papers
http://odl.mit.edu/mitx-working-papers/

HarvardX working papers
http://harvardx.harvard.edu/harvardx-working-papers

Related MIT News

ARCHIVE: MIT and Harvard release working papers on open online courses
https://newsoffice.mit.edu/2014/mit-and-harvard-release-working-papers-on-open-online-courses-0121

ARCHIVE: Reviewing online homework at scale
https://newsoffice.mit.edu/2015/reviewing-mooc-homework-0330

ARCHIVE: Study: Online classes really do work
https://newsoffice.mit.edu/2014/study-shows-online-courses-effective-0924

ARCHIVE: The future of MIT education looks more global, modular, and flexible
https://newsoffice.mit.edu/2014/future-of-mit-education-0804

 Source: MIT News Office

New kind of “tandem” solar cell developed: MIT Research

Researchers combine two types of photovoltaic material to make a cell that harnesses more sunlight.

By David Chandler


 

CAMBRIDGE, Mass.–Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight-absorbing material in order to harvest a broader range of the sun’s energy. The development could lead to photovoltaic cells that are more efficient than those currently used in solar-power installations, the researchers say.

The new cell uses a layer of silicon — which forms the basis for most of today’s solar panels — but adds a semi-transparent layer of a material called perovskite, which can absorb higher-energy particles of light. Unlike an earlier “tandem” solar cell reported by members of the same team earlier this year — in which the two layers were physically stacked, but each had its own separate electrical connections — the new version has both layers connected together as a single device that needs only one control circuit.

The new findings are reported in the journal Applied Physics Letters by MIT graduate student Jonathan Mailoa; associate professor of mechanical engineering Tonio Buonassisi; Colin Bailie and Michael McGehee at Stanford; and four others.

“Different layers absorb different portions of the sunlight,” Mailoa explains. In the earlier tandem solar cell, the two layers of photovoltaic material could be operated independently of each other and required their own wiring and control circuits, allowing each cell to be tuned independently for optimal performance.

By contrast, the new combined version should be much simpler to make and install, Mailoa says. “It has advantages in terms of simplicity, because it looks and operates just like a single silicon cell,” he says, with only a single electrical control circuit needed.

One tradeoff is that the current produced is limited by the capacity of the lesser of the two layers. Electrical current, Buonassisi explains, can be thought of as analogous to the volume of water passing through a pipe, which is limited by the diameter of the pipe: If you connect two lengths of pipe of different diameters, one after the other, “the amount of water is limited by the narrowest pipe,” he says. Combining two solar cell layers in series has the same limiting effect on current.
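
A toy numerical illustration of that series-connection constraint, with made-up current and voltage figures:

```python
# Toy illustration of the series-connection constraint described above: the
# stacked cell's current is set by the weaker layer, while voltages add.
# All numbers are made up for illustration.
def series_tandem(current_top_ma, voltage_top, current_bottom_ma, voltage_bottom):
    current = min(current_top_ma, current_bottom_ma)   # the "narrowest pipe"
    voltage = voltage_top + voltage_bottom             # voltages add in series
    return current, voltage, current * voltage         # current, voltage, power

# Mismatched layers waste the extra current the stronger layer could supply...
print(series_tandem(18.0, 1.0, 24.0, 0.55))
# ...so the design goal is to tune the layers until their currents match.
print(series_tandem(21.0, 1.0, 21.0, 0.55))
```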

To address that limitation, the team aims to match the current output of the two layers as precisely as possible. In this proof-of-concept solar cell, this means the total power output is about the same as that of conventional solar cells; the team is now working to optimize that output.

Perovskites have been studied for potential electronic uses including solar cells, but this is the first time they have been successfully paired with silicon cells in this configuration, a feat that posed numerous technical challenges. Now the team is focusing on increasing the power efficiency — the percentage of sunlight’s energy that gets converted to electricity — that is possible from the combined cell. In this initial version, the efficiency is 13.7 percent, but the researchers say they have identified low-cost ways of improving this to about 30 percent — a substantial improvement over today’s commercial silicon-based solar cells — and they say this technology could ultimately achieve a power efficiency of more than 35 percent.

They will also explore how to easily manufacture the new type of device, but Buonassisi says that should be relatively straightforward, since the materials lend themselves to being made through methods very similar to conventional silicon-cell manufacturing.

One hurdle is making the material durable enough to be commercially viable: The perovskite material degrades quickly in open air, so it either needs to be modified to improve its inherent durability or encapsulated to prevent exposure to air — without adding significantly to manufacturing costs and without degrading performance.

This exact formulation may not turn out to be the most advantageous for better solar cells, Buonassisi says, but is one of several pathways worth exploring. “Our job at this point is to provide options to the world,” he says. “The market will select among them.”

The research team also included Eric Johlin PhD ’14 and postdoc Austin Akey at MIT, and Eric Hoke and William Nguyen of Stanford. It was supported by the Bay Area Photovoltaic Consortium and the U.S. Department of Energy.

Source: MIT News Office