Category Archives: Engineering and Technology

Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler


 

CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.
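To make the threshold concrete, here is a minimal, illustrative Python sketch (not code from the study) that flags stretches where hourly wet-bulb readings stay at or above the 35 C survivability limit for six or more consecutive hours, the combination of level and duration described above. The example data are invented.

```python
# Illustrative sketch (not code from the study): flag runs of hourly
# wet-bulb temperature that stay at or above the 35 C survivability
# threshold for at least six consecutive hours.

def deadly_heat_events(hourly_wet_bulb_c, threshold_c=35.0, min_hours=6):
    """Return (start_hour, duration_hours) for each qualifying run."""
    events, run_start = [], None
    for i, tw in enumerate(hourly_wet_bulb_c):
        if tw >= threshold_c:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_hours:
                events.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(hourly_wet_bulb_c) - run_start >= min_hours:
        events.append((run_start, len(hourly_wet_bulb_c) - run_start))
    return events

# Invented 12-hour trace with a 7-hour exceedance starting at hour 3.
trace = [31, 33, 34, 35.2, 35.5, 36.0, 35.8, 35.4, 35.1, 35.0, 34.2, 33.0]
print(deadly_heat_events(trace))  # -> [(3, 7)]
```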

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear skies, a body of water that increases heat absorption, and the shallowness of the Persian Gulf itself, which produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the Hajj, the annual Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Automating big-data analysis: MIT Research

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

By Larry Hardesty


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
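To illustrate the kind of feature synthesis described above, here is a small Python sketch using pandas. The table and column names are invented for the example, and this is not the Data Science Machine’s actual code; it simply shows how following a shared identifier and aggregating over it yields candidate features such as total, average, and minimum cost per order.

```python
# Illustrative sketch of automatic feature synthesis (invented tables,
# not the Data Science Machine's actual code).
import pandas as pd

items = pd.DataFrame({
    "item_id": [1, 2, 3],
    "cost":    [9.99, 4.50, 20.00],
})
purchases = pd.DataFrame({
    "order_id": [100, 100, 101, 101, 101],
    "item_id":  [1, 2, 1, 2, 3],
})

# Step 1: follow the shared numerical identifier (item_id) to import costs.
purchases = purchases.merge(items, on="item_id")

# Step 2: aggregate over each order to generate candidate features.
order_features = purchases.groupby("order_id")["cost"].agg(
    total_cost="sum", average_cost="mean", min_cost="min")
print(order_features)
```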

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
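A minimal sketch of that pruning step, continuing the example above: candidate features that are nearly duplicates of ones already kept are dropped. The 0.95 cutoff is an arbitrary choice for illustration; the article does not specify how the Data Science Machine measures redundancy.

```python
# Continuing the sketch above: drop candidate features that are nearly
# duplicates of ones already kept. The 0.95 cutoff is arbitrary.
import pandas as pd

def prune_correlated(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    corr = features.corr().abs()
    kept = []
    for col in features.columns:
        if all(corr.loc[col, k] < threshold for k in kept):
            kept.append(col)
    return features[kept]

candidates = pd.DataFrame({
    "total_cost":      [10, 20, 30, 40, 50],
    "total_cost_copy": [10, 20, 30, 40, 50],  # redundant duplicate
    "min_cost":        [5, 3, 8, 2, 7],
})
print(list(prune_correlated(candidates).columns))  # ['total_cost', 'min_cost']
```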

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Source: MIT News Office

 

Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


 

CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
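A toy comparison illustrates why random hopping is so much slower than wave-like transport: a classical random walk covers a distance that grows only as the square root of the number of hops, while coherent, ballistic motion covers distance in proportion to the number of hops. The Python sketch below is a generic physics illustration, not the team’s quantum model.

```python
# Toy comparison (not the researchers' model): the root-mean-square distance
# covered by random hopping grows like sqrt(N), while coherent, wave-like
# transport covers distance proportional to N.
import math
import random

def rms_random_walk_distance(n_hops, trials=10_000):
    total = 0.0
    for _ in range(trials):
        position = sum(random.choice((-1, 1)) for _ in range(n_hops))
        total += position * position
    return math.sqrt(total / trials)

n = 100
print("random hopping, RMS distance:", round(rms_random_walk_distance(n), 1))  # about 10
print("ballistic (wave-like) distance:", n)                                    # 100
```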

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

This artist’s impression shows how Mars may have looked about four billion years ago. The young planet Mars would have had enough water to cover its entire surface in a liquid layer about 140 metres deep, but it is more likely that the liquid would have pooled to form an ocean occupying almost half of Mars’s northern hemisphere, and in some regions reaching depths greater than 1.6 kilometres.

Credit:
ESO/M. Kornmesser

Real Martians: How to Protect Astronauts from Space Radiation on Mars

On Aug. 7, 1972, in the heart of the Apollo era, an enormous solar flare exploded from the sun’s atmosphere. Along with a gigantic burst of light in nearly all wavelengths, this event accelerated a wave of energetic particles. Mostly protons, with a few electrons and heavier elements mixed in, this wash of quick-moving particles would have been dangerous to anyone outside Earth’s protective magnetic bubble. Luckily, the Apollo 16 crew had returned to Earth just a few months earlier, narrowly escaping this powerful event.

In the early days of human space flight, scientists were only just beginning to understand how events on the sun could affect space, and in turn how that radiation could affect humans and technology. Today, as a result of extensive space radiation research, we have a much better understanding of our space environment, its effects, and the best ways to protect astronauts—all crucial parts of NASA’s mission to send humans to Mars.

“The Martian” film highlights the radiation dangers that could occur on a round trip to Mars. While the mission in the film is fictional, NASA has already started working on the technology to enable an actual trip to Mars in the 2030s. In the film, the astronauts’ habitat on Mars shields them from radiation, and indeed, radiation shielding will be a crucial technology for the voyage. From better shielding to advanced biomedical countermeasures, NASA currently studies how to protect astronauts and electronics from radiation – efforts that will have to be incorporated into every aspect of Mars mission planning, from spacecraft and habitat design to spacewalk protocols.


“The space radiation environment will be a critical consideration for everything in the astronauts’ daily lives, both on the journeys between Earth and Mars and on the surface,” said Ruthan Lewis, an architect and engineer with the human spaceflight program at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “You’re constantly being bombarded by some amount of radiation.”

Radiation, at its most basic, is simply waves or sub-atomic particles that transport energy to another entity – whether that is an astronaut or a spacecraft component. The main concern in space is particle radiation. Energetic particles can be dangerous to humans because they pass right through the skin, depositing energy and damaging cells or DNA along the way. This damage can mean an increased risk for cancer later in life or, at its worst, acute radiation sickness during the mission if the dose of energetic particles is large enough.

Fortunately for us, Earth’s natural protections block all but the most energetic of these particles from reaching the surface. A huge magnetic bubble, called the magnetosphere, which deflects the vast majority of these particles, protects our planet. And our atmosphere subsequently absorbs the majority of particles that do make it through this bubble. Importantly, since the International Space Station (ISS) is in low-Earth orbit within the magnetosphere, it also provides a large measure of protection for our astronauts.

“We have instruments that measure the radiation environment inside the ISS, where the crew are, and even outside the station,” said Kerry Lee, a scientist at NASA’s Johnson Space Center in Houston.

This ISS crew monitoring also includes tracking of the short-term and lifetime radiation doses for each astronaut to assess the risk for radiation-related diseases. Although NASA’s conservative radiation limits are higher than those allowed for radiation workers on Earth, astronauts are able to stay well under NASA’s limit while living and working on the ISS, within Earth’s magnetosphere.

But a journey to Mars requires astronauts to move out much further, beyond the protection of Earth’s magnetic bubble.

“There’s a lot of good science to be done on Mars, but a trip to interplanetary space carries more radiation risk than working in low-Earth orbit,” said Jonathan Pellish, a space radiation engineer at Goddard.

A human mission to Mars means sending astronauts into interplanetary space for a minimum of a year, even with a very short stay on the Red Planet. Nearly all of that time, they will be outside the magnetosphere, exposed to the harsh radiation environment of space. Mars has no global magnetic field to deflect energetic particles, and its atmosphere is much thinner than Earth’s, so they’ll get only minimal protection even on the surface of Mars.

 

Throughout the entire trip, astronauts must be protected from two sources of radiation. The first comes from the sun, which regularly releases a steady stream of solar particles, as well as occasional larger bursts in the wake of giant explosions, such as solar flares and coronal mass ejections, on the sun. These energetic particles are almost all protons, and, though the sun releases an unfathomably large number of them, the proton energy is low enough that they can almost all be physically shielded by the structure of the spacecraft.

 

Since solar activity strongly contributes to the deep-space radiation environment, a better understanding of the sun’s modulation of this radiation environment will allow mission planners to make better decisions for a future Mars mission. NASA currently operates a fleet of spacecraft studying the sun and the space environment throughout the solar system. Observations from this area of research, known as heliophysics, help us better understand the origin of solar eruptions and what effects these events have on the overall space radiation environment.

 

“If we know precisely what’s going on, we don’t have to be as conservative with our estimates, which gives us more flexibility when planning the mission,” said Pellish.

 

The second source of energetic particles is harder to shield against. These particles come from galactic cosmic rays, often known as GCRs. They’re particles accelerated to near the speed of light that shoot into our solar system from other stars in the Milky Way or even other galaxies. Like solar particles, galactic cosmic rays are mostly protons. However, some of them are heavier elements, ranging from helium up to the heaviest elements. These more energetic particles can knock apart atoms in the material they strike, such as an astronaut’s body or the metal walls of a spacecraft, habitat, or vehicle, causing sub-atomic particles to shower into the structure. This secondary radiation, as it is known, can reach a dangerous level.

 

There are two ways to shield from these higher-energy particles and their secondary radiation: use a lot more mass of traditional spacecraft materials, or use more efficient shielding materials.

 

The sheer volume of material surrounding a structure would absorb the energetic particles and their associated secondary particle radiation before they could reach the astronauts. However, using sheer bulk to protect astronauts would be prohibitively expensive, since more mass means more fuel required to launch.
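A rough back-of-the-envelope illustration of that cost, using the standard Tsiolkovsky rocket equation with generic, assumed numbers (an approximately 3.9 km/s trans-Mars injection burn and a chemical engine with a specific impulse of about 450 seconds — these are not NASA mission figures):

```python
# Rough illustration only: the Tsiolkovsky rocket equation with assumed,
# generic numbers (not NASA figures) shows how every kilogram of shielding
# drags additional propellant along with it.
import math

def propellant_per_kg_payload(delta_v_m_s, isp_s, g0=9.81):
    """Propellant mass (kg) needed per kg of payload for one burn."""
    return math.exp(delta_v_m_s / (isp_s * g0)) - 1.0

# Assumed ~3.9 km/s trans-Mars injection burn, chemical engine Isp ~450 s:
# roughly 1.4 kg of extra propellant per kg of shielding for that burn alone,
# before counting the far larger cost of lifting both into orbit first.
print(round(propellant_per_kg_payload(3900, 450), 2), "kg propellant per kg of shielding")
```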

 

Using materials that shield more efficiently would cut down on weight and cost, but finding the right material takes research and ingenuity. NASA is currently investigating a handful of possibilities that could be used in anything from the spacecraft to the Martian habitat to space suits.

 

“The best way to stop particle radiation is by running that energetic particle into something that’s a similar size,” said Pellish. “Otherwise, it can be like you’re bouncing a tricycle off a tractor-trailer.”

 

Because protons and neutrons are similar in size, one element blocks both extremely well—hydrogen, which most commonly exists as just a single proton and an electron. Conveniently, hydrogen is the most abundant element in the universe, and makes up substantial parts of some common compounds, such as water and plastics like polyethylene. Engineers could take advantage of already-required mass by processing the astronauts’ trash into plastic-filled tiles used to bolster radiation protection. Water, already required for the crew, could be stored strategically to create a kind of radiation storm shelter in the spacecraft or habitat. However, this strategy comes with some challenges—the crew would need to use the water and then replace it with recycled water from the advanced life support systems.

 

Polyethylene, the same plastic commonly found in water bottles and grocery bags, also has potential as a candidate for radiation shielding. It is very high in hydrogen and fairly cheap to produce—however, it’s not strong enough to build a large structure, especially a spacecraft, which goes through high heat and strong forces during launch. And adding polyethylene to a metal structure would add quite a bit of mass, meaning that more fuel would be required for launch.

 

“We’ve made progress on reducing and shielding against these energetic particles, but we’re still working on finding a material that is a good shield and can act as the primary structure of the spacecraft,” said Sheila Thibeault, a materials researcher at NASA’s Langley Research Center in Hampton, Virginia.

 

One material in development at NASA has the potential to do both jobs: Hydrogenated boron nitride nanotubes—known as hydrogenated BNNTs—are tiny nanotubes made of carbon, boron, and nitrogen, with hydrogen interspersed throughout the empty spaces left between the tubes. Boron is also an excellent absorber of secondary neutrons, making hydrogenated BNNTs an ideal shielding material.

“This material is really strong—even at high heat—meaning that it’s great for structure,” said Thibeault.

Remarkably, researchers have successfully made yarn out of BNNTs, so it’s flexible enough to be woven into the fabric of space suits, providing astronauts with significant radiation protection even while they’re performing spacewalks in transit or out on the harsh Martian surface. Though hydrogenated BNNTs are still in development and testing, they have the potential to be one of our key structural and shielding materials in spacecraft, habitats, vehicles, and space suits that will be used on Mars.

Physical shields aren’t the only option for stopping particle radiation from reaching astronauts: Scientists are also exploring the possibility of building force fields. Force fields aren’t just the realm of science fiction: Just like Earth’s magnetic field protects us from energetic particles, a relatively small, localized electric or magnetic field would—if strong enough and in the right configuration—create a protective bubble around a spacecraft or habitat. Currently, these fields would take a prohibitive amount of power and structural material to create on a large scale, so more work is needed for them to be feasible.

The risk of health effects can also be reduced in operational ways, such as having a special area of the spacecraft or Mars habitat that could be a radiation storm shelter; preparing spacewalk and research protocols to minimize time outside the more heavily-shielded spacecraft or habitat; and ensuring that astronauts can quickly return indoors in the event of a radiation storm.

Radiation risk mitigation can also be approached from the human body level. Though far off, a medication that would counteract some or all of the health effects of radiation exposure would make it much easier to plan for a safe journey to Mars and back.

“Ultimately, the solution to radiation will have to be a combination of things,” said Pellish. “Some of the solutions are technology we have already, like hydrogen-rich materials, but some of it will necessarily be cutting edge concepts that we haven’t even thought of yet.”

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
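For reference, here is a minimal Python rendering of that grid-filling procedure (a textbook implementation, not the researchers’ code):

```python
# Textbook Wagner-Fischer dynamic programming (not the researchers' code):
# each grid cell holds the edit distance between the prefixes ending at
# that row and column.
def edit_distance(a: str, b: str) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    grid = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        grid[i][0] = i              # delete everything to reach ""
    for j in range(cols):
        grid[0][j] = j              # insert everything to build b
    for i in range(1, rows):
        for j in range(1, cols):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            grid[i][j] = min(grid[i - 1][j] + 1,               # deletion
                             grid[i][j - 1] + 1,               # insertion
                             grid[i - 1][j - 1] + substitution)
    return grid[rows - 1][cols - 1]

print(edit_distance("kitten", "sitting"))  # 3
```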

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
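To make the problem concrete, the sketch below encodes a simplified version of those dinner-party constraints and checks every possible invitation list by brute force. The guest set and encoding are paraphrased for illustration, and exhaustive search of this kind is exactly what becomes hopeless as the number of variables grows.

```python
# Brute-force illustration of the dinner-party constraints (paraphrased for
# the example; this is not the paper's construction). Every one of the
# 2**7 = 128 possible invitation lists is checked explicitly.
from itertools import product

guests = ["Alice", "Bob", "Cindy", "Dave", "BookClub", "Fred", "George"]

def satisfies(invite):
    kids_covered = not (invite["Alice"] and invite["Bob"])       # one stays home
    book_club_ok = not (invite["Cindy"] and invite["Dave"]) or invite["BookClub"]
    ellen_ok     = invite["Fred"] != invite["George"]            # exactly one of the two
    return kids_covered and book_club_ok and ellen_ok

solutions = [dict(zip(guests, choice))
             for choice in product([True, False], repeat=len(guests))
             if satisfies(dict(zip(guests, choice)))]
print(len(solutions), "invitation lists satisfy every constraint")
```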

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.

Source: MIT News Office

Physicists solve quantum tunneling mystery

An international team of scientists studying ultrafast physics has solved a mystery of quantum mechanics, finding that quantum tunneling is an instantaneous process.

The new theory could lead to faster and smaller electronic components, for which quantum tunneling is a significant factor. It will also lead to a better understanding of diverse areas such as electron microscopy, nuclear fusion and DNA mutations.

“Timescales this short have never been explored before. It’s an entirely new world,” said one of the international team, Professor Anatoli Kheifets, from The Australian National University (ANU).

“We have modelled the most delicate processes of nature very accurately.”

At very small scales quantum physics shows that particles such as electrons have wave-like properties – their exact position is not well defined. This means they can occasionally sneak through apparently impenetrable barriers, a phenomenon called quantum tunneling.

Quantum tunneling plays a role in a number of phenomena, such as nuclear fusion in the sun, scanning tunneling microscopy, and flash memory for computers. However, the leakage of particles also limits the miniaturisation of electronic components.

Professor Kheifets and Dr. Igor Ivanov, from the ANU Research School of Physics and Engineering, are members of a team which studied ultrafast experiments at the attosecond scale (10⁻¹⁸ seconds), a field that has developed in the last 15 years.

Until their work, a number of attosecond phenomena could not be adequately explained, such as the time delay when a photon ionised an atom.

“At that timescale the time an electron takes to quantum tunnel out of an atom was thought to be significant. But the mathematics says the time during tunneling is imaginary – a complex number – which we realised meant it must be an instantaneous process,” said Professor Kheifets.

“A very interesting paradox arises, because electron velocity during tunneling may become greater than the speed of light. However, this does not contradict the special theory of relativity, as the tunneling velocity is also imaginary,” said Dr Ivanov, who recently took up a position at the Center for Relativistic Laser Science in Korea.

The team’s calculations, which were made using the Raijin supercomputer, revealed that the delay in photoionisation originates not from quantum tunneling but from the electric field of the nucleus attracting the escaping electron.

The results give an accurate calibration for future attosecond-scale research, said Professor Kheifets.

“It’s a good reference point for future experiments, such as studying proteins unfolding, or speeding up electrons in microchips,” he said.

The research is published in Nature Physics.

Source: ANU


UAE’s Al-Amal Mars Mission: A Great Initiative with Even Greater Intent

The mission will be launched in 2020, with arrival at Mars expected in 2021

By Syed Faisal ur Rahman


Recently, the UAE announced details of its Mars mission, named ‘Al-Amal’. Amal is an Arabic word and name meaning ‘hope’ or ‘aspiration’, and the program truly represents the desire of many in the Arab world, and indeed the wider Muslim world, to contribute something significant to humanity’s endeavors to explore the universe.

There was a time when Muslim, and especially Arab, astronomers contributed to or even led many areas of science. From algebra to astronomy and medicine, history offers a rich record of the contributions of Muslim scientists and engineers.

If you look at star charts and astronomy catalogues, you will find many celestial objects with Arabic names, because some of the early discoveries in astronomy were made by Muslim scientists at a time when Europe was passing through the Dark Ages.

Unfortunately, Muslims lost their way into darkness seven or eight centuries ago, and intellectual leadership was taken over by people who pushed us away from the path of learning the physical sciences, reasoning, and exploring uncharted territories.

According to details provided by the Mohammed Bin Rashid Space Centre (MBRSC), the mission will be launched in 2020 and is expected to arrive at Mars in 2021. The mission will not only survey the entire Martian atmosphere for the first time, but will also acquire critical data that will help in understanding the climate and atmosphere of our own planet, Earth.

Data from the probe will also help us learn more about exoplanets, and so will aid in assessing the prospects of life beyond Earth. Sheikh Mohammed of the UAE rightly said: “The Emirates Mars Mission will be a great contribution to human knowledge, a milestone for Arab civilization, and a real investment for future generations.” It is a good thing that, after the USA, Europe, and Russia, Asian countries such as India, China, Japan, and now the UAE are also excelling in the space sector.

It would be good if Pakistan could also accelerate its space program and put more focus on the civilian aspects of space technology. The right path for us is to bring more scientists into our decision-making structures and, like India, to make science and technology collaboration, especially in civilian and academic areas, an important part of our foreign policy goals. Currently, our foreign policy goals revolve mainly around security, energy, and aid-related issues. We need to be proactive if we want to be among the successful nations of the world.

In the end, I would like to wish the best of luck to our brothers and sisters in the UAE for their great initiative, and I hope that their mission will contribute greatly towards humanity’s goal of exploring worlds beyond our own.


The article is also published in Daily Times Pakistan.

Shown here is "event zero," the first detection of a trapped electron in the MIT physicists' instrument. The color indicates the electron's detected power as a function of frequency and time. The sudden “jumps” in frequency indicate an electron collision with the residual hydrogen gas in the cell.

Courtesy of the researchers

Source: MIT News

New tabletop detector “sees” single electrons

Magnet-based setup may help detect the elusive mass of neutrinos.

Jennifer Chu


MIT physicists have developed a new tabletop particle detector that is able to identify single electrons in a radioactive gas.
As the gas decays and gives off electrons, the detector uses a magnet to trap them in a magnetic bottle. A radio antenna then picks up very weak signals emitted by the electrons, which can be used to map the electrons’ precise activity over several milliseconds.


The team worked with researchers at Pacific Northwest National Laboratory, the University of Washington, the University of California at Santa Barbara (UCSB), and elsewhere to record the activity of more than 100,000 individual electrons in krypton gas.
The majority of electrons observed behaved in a characteristic pattern: As the radioactive krypton gas decays, it emits electrons that vibrate at a baseline frequency before petering out; this frequency spikes again whenever an electron hits an atom of radioactive gas. As an electron ping-pongs against multiple atoms in the detector, its energy appears to jump in a step-like pattern.
“We can literally image the frequency of the electron, and we see this electron suddenly pop into our radio antenna,” says Joe Formaggio, an associate professor of physics at MIT. “Over time, the frequency changes, and actually chirps up. So these electrons are chirping in radio waves.”
Formaggio says the group’s results, published in Physical Review Letters, are a big step toward a more elusive goal: measuring the mass of a neutrino.

A ghostly particle
Neutrinos are among the more mysterious elementary particles in the universe: Billions of them pass through every cell of our bodies each second, and yet these ghostly particles are incredibly difficult to detect, as they don’t appear to interact with ordinary matter. Scientists have set theoretical limits on neutrino mass, but researchers have yet to precisely detect it.
“We have [the mass] cornered, but haven’t measured it yet,” Formaggio says. “The name of the game is to measure the energy of an electron — that’s your signature that tells you about the neutrino.”
As Formaggio explains it, when a radioactive atom such as tritium decays, it turns into an isotope of helium and, in the process, also releases an electron and a neutrino. The energy of all particles released adds up to the original energy of the parent neutron. Measuring the energy of the electron, therefore, can illuminate the energy — and consequently, the mass — of the neutrino.
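A rough numerical illustration of the endpoint idea (standard beta-decay kinematics with tritium’s roughly 18.6 keV energy release, not the group’s analysis): the heaviest possible electron energy falls short of the total decay energy by the neutrino’s rest-mass energy, so pinning down the electron spectrum near its endpoint constrains the neutrino mass.

```python
# Rough illustration of the endpoint idea (generic kinematics, not the
# group's analysis): the maximum electron energy is the decay energy minus
# the neutrino's rest-mass energy, with the tiny nuclear recoil neglected.
TRITIUM_Q_KEV = 18.6  # approximate energy released in tritium beta decay

def electron_endpoint_kev(neutrino_mass_ev):
    return TRITIUM_Q_KEV - neutrino_mass_ev / 1000.0  # eV -> keV

for m_nu in (0.0, 0.2, 2.0):  # hypothetical neutrino masses, in eV
    print(f"m_nu = {m_nu} eV -> endpoint = {electron_endpoint_kev(m_nu):.4f} keV")
```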
Scientists agree that tritium, a radioactive isotope of hydrogen, is key to obtaining a precise measurement: As a gas, tritium decays at such a rate that scientists can relatively easily observe its electron byproducts.
Researchers in Karlsruhe, Germany, hope to measure electrons in tritium using a massive spectrometer as part of an experiment named KATRIN (Karlsruhe Tritium Neutrino Experiment). Electrons, produced from the decay of tritium, pass through the spectrometer, which filters them according to their different energy levels. The experiment, which is just getting under way, may obtain measurements of single electrons, but at a cost.
“In KATRIN, the electrons are detected in a silicon detector, which means the electrons smash into the crystal, and a lot of random things happen, essentially destroying the electrons,” says Daniel Furse, a graduate student in physics, and a co-author on the paper. “We still want to measure the energy of electrons, but we do it in a nondestructive way.”
The group’s setup has an additional advantage: size. The detector essentially fits on a tabletop, and the space in which electrons are detected is smaller than a postage stamp. In contrast, KATRIN’s spectrometer, when delivered to Karlsruhe, barely fit through the city’s streets.
Tuning in
Furse and Formaggio’s detector — an experiment called “Project 8” — is based on a decades-old phenomenon known as cyclotron radiation, in which charged particles such as electrons emit radio waves in a magnetic field. It turns out electrons emit this radiation at a frequency similar to that of military radio communications.
“It’s the same frequency that the military uses — 26 gigahertz,” Formaggio says. “And it turns out the baseline frequency changes very slightly if the electron has energy. So we said, ‘Why not look at the radiation [electrons] emit directly?’”
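The underlying relationship is textbook physics: an electron’s cyclotron frequency in a magnetic field drops slightly as its kinetic energy grows, which is what lets a frequency measurement encode energy. The sketch below assumes a 1-tesla field purely for illustration — the article does not give Project 8’s field strength — and is not the experiment’s code.

```python
# Textbook relativistic cyclotron frequency (not Project 8's code): the
# frequency falls slightly as the electron's kinetic energy rises, so a
# radio-frequency measurement encodes the electron's energy.
import math

E_CHARGE = 1.602176634e-19   # electron charge, coulombs
M_E      = 9.1093837015e-31  # electron mass, kg
M_E_KEV  = 511.0             # electron rest energy, keV

def cyclotron_freq_ghz(kinetic_energy_kev, b_field_tesla):
    gamma = 1.0 + kinetic_energy_kev / M_E_KEV   # relativistic factor
    return E_CHARGE * b_field_tesla / (2 * math.pi * gamma * M_E) / 1e9

B = 1.0  # tesla -- an assumed field strength, not a value from the article
for ke_kev in (0.0, 18.6, 30.0):
    print(f"{ke_kev:5.1f} keV -> {cyclotron_freq_ghz(ke_kev, B):.2f} GHz")
```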
Formaggio and former postdoc Benjamin Monreal, now an assistant professor of physics at UCSB, reasoned that if they could tune into this baseline frequency, they could catch electrons as they shot out of a decaying radioactive gas, and measure their energy in a magnetic field.
“If you could measure the frequency of this radio signal, you could measure the energy potentially much more accurately than you can with any other method,” Furse says. “The problem is, you’re looking at this really weak signal over a very short amount of time, and it’s tough to see, which is why no one has ever done it before.”
It took five years of fits and starts before the group was finally able to build an accurate detector. Once the researchers turned the detector on, they were able to record individual electrons within the first 100 milliseconds of the experiment — although the analysis took a bit longer.
“Our software was so slow at processing things that we could tell funny things were happening because, all of a sudden, our file size became larger, as these things started appearing,” Formaggio recalls.
He says the precision of the measurements obtained so far in krypton gas has encouraged the team to move on to tritium — a goal Formaggio says may be attainable in the next year or two — and pave a path toward measuring the mass of the neutrino.
Steven Elliott, a technical staff member at Los Alamos National Laboratory, says the group’s new detector “represents a very significant result.” In order to use the detector to measure the mass of a neutrino, Elliott adds, the group will have to make multiple improvements, including developing a bigger cell to contain a larger amount of tritium.
“This was the first step, albeit a very important step, along the way to building a next-generation experiment,” says Elliott, who did not contribute to the research. “As a result, the neutrino community is very impressed with the concept and execution of this experiment.”
This research was funded in part by the Department of Energy and the National Science Foundation.

Study on MOOCs provides new insights on an evolving space

Findings suggest many teachers enroll, learner intentions matter, and cost boosts completion rates.


CAMBRIDGE, Mass. – Today, a joint MIT and Harvard University research team published one of the largest investigations of massive open online courses (MOOCs) to date. Building on these researchers’ prior work — a January 2014 report describing the first year of open online courses launched on edX, a nonprofit learning platform founded by the two institutions — the latest effort incorporates another year of data, bringing the total to nearly 70 courses in subjects from programming to poetry.

“We explored 68 certificate-granting courses, 1.7 million participants, 10 million participant-hours, and 1.1 billion participant-logged events,” says Andrew Ho, a professor at the Harvard Graduate School of Education. The research team also used surveys to gain additional information about participants’ backgrounds and their intentions.

Ho and Isaac Chuang, a professor of electrical engineering and computer science and senior associate dean of digital learning at MIT, led a group effort that delved into the demographics of MOOC learners, analyzed participant intent, and looked at patterns that “serial MOOCers,” or those taking more than one course, tend to pursue.

“What jumped out for me was the survey that revealed that in some cases as many as 39 percent of our learners are teachers,” Chuang says. “This finding forces us to broaden our conceptions of who MOOCs serve and how they might make a difference in improving learning.”

Key findings

The researchers conducted a trend analysis that showed a rising share of female, U.S.-based, and older participants, as well as a survey analysis of intent, revealing that almost half of registrants were not interested in or unsure about certification. In this study, the researchers redefined their population of learners from those who simply registered for courses (and took no subsequent action) — a metric used in prior findings and often cited by MOOC providers — to those who participated (such as by logging into the course at least once).

1. Participation in HarvardX and MITx open online courses has grown steadily, while participation in repeated courses has declined and then stabilized.

From July 24, 2012, through Sept. 21, 2014, an average of 1,300 new participants joined a HarvardX or MITx course each day, for a total of 1 million unique participants and 1.7 million total participants. With the increase in second and third versions of courses, the researchers found that participation in second versions declined by 43 percent, while there was stable participation between versions two and three. There were outliers, such as the HarvardX course CS50x (Introduction to Computer Science), which doubled in size, perhaps due to increased student flexibility: Students in this course could participate over a yearlong period at their own pace, and complete at any time.

2. A slight majority of MOOC takers are seeking certification, and many participants are teachers.

Among the one-third of participants who responded to a survey about their intentions, 57 percent stated their desire to earn a certificate; nearly a quarter of those respondents went on to earn certificates. Further, among participants who were unsure or did not intend to earn a certificate, 8 percent ultimately did so. These learners appear to have been inspired to finish a MOOC even after initially stating that they had no intention of doing so.

Among 200,000 participants who responded to a survey about teaching, 39 percent self-identified as a past or present teacher; 21 percent of those teachers reported teaching in the course topic area. The strong participation by teachers suggests that even participants who are uninterested in certification may still make productive use of MOOCs.

3. Academic areas matter when it comes to participation, certification, and course networks.

Participants were drawn to computer science courses in particular, with per-course participation numbers nearly four times higher than courses in the humanities, sciences, and social sciences. That said, certificate rates in computer science and other science- and technology-based offerings (7 percent and 6 percent, respectively) were about half of those in the humanities and social sciences.

The larger data sets also allowed the researchers to study those participating in more than one course, revealing that computer science courses serve as hubs for students, who naturally move to and from related courses. Intentional sequencing, as was done for the 10-part HarvardX Chinese history course “ChinaX,” led to some of the highest certification rates in the study. Other courses with high certification rates were “Introduction to Computer Science” from MITx and “Justice” and “Health in Numbers” from HarvardX.

4. Those opting for fee-based ID-verified certificates certify at higher rates.

Across 12 courses, participants who paid for “ID-verified” certificates (with costs ranging from $50 to $250) earned certifications at a higher rate than other participants: 59 percent, on average, compared with 5 percent. Students opting for the ID-verified track appear to have stronger intentions to complete courses, and the monetary stake may add an extra form of motivation.

Questions and implications

Based upon these findings, Chuang and Ho identified questions that might “reset and reorient expectations” around MOOCs.

First, while many MOOC creators and providers have increased access to learning opportunities, those who are accessing MOOCs are disproportionately those who already have college and graduate degrees. The researchers do not necessarily see this as a problem, as academic experience may be a requirement in advanced courses. However, to serve underrepresented and traditionally underserved groups, the data suggest that proactive strategies may be necessary.

“These free, open courses are phenomenal opportunities for millions of learners,” Ho emphasizes, “but equity cannot be increased just by opening doors. We hope that our data help teachers and institutions to think about their intended audiences, and serve as a baseline for charting progress.”

Second, if improving online and on-campus learning is a priority, then “the flow of pedagogical innovations needs to be formalized,” Chuang says. For example, many of the MOOCs in the study used innovations from their campus counterparts, like physics assessments from MIT and close-reading practices from Harvard’s classics courses. Likewise, residential faculty are using MOOC content, such as videos and assessment scoring algorithms, in smaller, traditional lecture courses.

“The real potential is in the fostering of feedback loops between the two realms,” Chuang says. “In particular, the high number of teacher participants signals great potential for impact beyond Harvard and MIT, especially if deliberate steps could be taken to share best practices.”

Third, advancing research through MOOCs may require a more nuanced definition of audience. Much of the research to date has done little to differentiate among the diverse participants in these free, self-paced learning environments.

“While increasing completion has been a subject of interest, given that many participants have limited, uncertain, or zero interest in completing MOOCs, exerting research muscle to indiscriminately increase completion may not be productive,” Ho explains. “Researchers might want to focus more specifically on well-surveyed or paying subpopulations, where we have a better sense of their expectations and motivations.”

More broadly, Ho and Chuang hope to showcase the potential and diversity of MOOCs and MOOC data by developing “Top 5” lists based upon course attributes, such as scale (an MIT computer science course clocked in with 900,000 participant hours); demographics (the MOOC with the most female representation is a museum course from HarvardX called “Tangible Things,” while MITx’s computing courses attracted the largest global audience); and type and level of interaction (those in ChinaX most frequently posted in online forums, while those in an introduction to computer science course from MITx most frequently played videos).

“These courses reflect the breadth of our university curricula, and we felt the need to highlight their diverse designs, philosophies, audiences, and learning outcomes in our analyses,” Chuang says. “Which course is right for you? It depends, and these lists might help learners decide what qualities in a given MOOC are most important to them.”

Additional authors on the report included Justin Reich, Jacob Whitehill, Joseph Williams, Glenn Lopez, John Hansen, and Rebecca Petersen from Harvard, and Cody Coleman and Curtis Northcutt from MIT.

###

Related links

Paper: “HarvardX and MITx: Two years of open online courses fall 2012-summer 2014”
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586847

Office of Digital Learning
http://odl.mit.edu

MITx working papers
http://odl.mit.edu/mitx-working-papers/

HarvardX working papers
http://harvardx.harvard.edu/harvardx-working-papers

Related MIT News

ARCHIVE: MIT and Harvard release working papers on open online courses
https://newsoffice.mit.edu/2014/mit-and-harvard-release-working-papers-on-open-online-courses-0121

ARCHIVE: Reviewing online homework at scale
https://newsoffice.mit.edu/2015/reviewing-mooc-homework-0330

ARCHIVE: Study: Online classes really do work
https://newsoffice.mit.edu/2014/study-shows-online-courses-effective-0924

ARCHIVE: The future of MIT education looks more global, modular, and flexible
https://newsoffice.mit.edu/2014/future-of-mit-education-0804

 Source: MIT News Office

New kind of “tandem” solar cell developed: MIT Research

Researchers combine two types of photovoltaic material to make a cell that harnesses more sunlight.

By David Chandler


 

CAMBRIDGE, Mass.–Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight-absorbing material in order to harvest a broader range of the sun’s energy. The development could lead to photovoltaic cells that are more efficient than those currently used in solar-power installations, the researchers say.

The new cell uses a layer of silicon — which forms the basis for most of today’s solar panels — but adds a semi-transparent layer of a material called perovskite, which can absorb higher-energy particles of light. Unlike an earlier “tandem” solar cell reported by members of the same team earlier this year — in which the two layers were physically stacked, but each had its own separate electrical connections — the new version has both layers connected together as a single device that needs only one control circuit.

The new findings are reported in the journal Applied Physics Letters by MIT graduate student Jonathan Mailoa; associate professor of mechanical engineering Tonio Buonassisi; Colin Bailie and Michael McGehee at Stanford; and four others.

“Different layers absorb different portions of the sunlight,” Mailoa explains. In the earlier tandem solar cell, the two layers of photovoltaic material could be operated independently of each other and required their own wiring and control circuits, allowing each cell to be tuned independently for optimal performance.

By contrast, the new combined version should be much simpler to make and install, Mailoa says. “It has advantages in terms of simplicity, because it looks and operates just like a single silicon cell,” he says, with only a single electrical control circuit needed.

One tradeoff is that the current produced is limited by the capacity of the lesser of the two layers. Electrical current, Buonassisi explains, can be thought of as analogous to the volume of water passing through a pipe, which is limited by the diameter of the pipe: If you connect two lengths of pipe of different diameters, one after the other, “the amount of water is limited by the narrowest pipe,” he says. Combining two solar cell layers in series has the same limiting effect on current.
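A toy illustration of that series-connection constraint, with made-up current values rather than measurements from the MIT/Stanford device:

```python
# Toy illustration of the series constraint (made-up numbers, not
# measurements from the device): the tandem's current is set by whichever
# layer generates less, so a mismatch wastes part of the stronger layer.
def tandem_current(perovskite_ma_cm2, silicon_ma_cm2):
    """Current density of two sub-cells wired in series, in mA/cm^2."""
    return min(perovskite_ma_cm2, silicon_ma_cm2)

print(tandem_current(17.5, 14.0))  # mismatched pair -> limited to 14.0
print(tandem_current(16.0, 16.0))  # matched pair -> full 16.0
```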

To address that limitation, the team aims to match the current output of the two layers as precisely as possible. In this proof-of-concept solar cell, this means the total power output is about the same as that of conventional solar cells; the team is now working to optimize that output.

Perovskites have been studied for potential electronic uses including solar cells, but this is the first time they have been successfully paired with silicon cells in this configuration, a feat that posed numerous technical challenges. Now the team is focusing on increasing the power efficiency — the percentage of sunlight’s energy that gets converted to electricity — that is possible from the combined cell. In this initial version, the efficiency is 13.7 percent, but the researchers say they have identified low-cost ways of improving this to about 30 percent — a substantial improvement over today’s commercial silicon-based solar cells — and they say this technology could ultimately achieve a power efficiency of more than 35 percent.

They will also explore how to easily manufacture the new type of device, but Buonassisi says that should be relatively straightforward, since the materials lend themselves to being made through methods very similar to conventional silicon-cell manufacturing.

One hurdle is making the material durable enough to be commercially viable: The perovskite material degrades quickly in open air, so it either needs to be modified to improve its inherent durability or encapsulated to prevent exposure to air — without adding significantly to manufacturing costs and without degrading performance.

This exact formulation may not turn out to be the most advantageous for better solar cells, Buonassisi says, but is one of several pathways worth exploring. “Our job at this point is to provide options to the world,” he says. “The market will select among them.”

The research team also included Eric Johlin PhD ’14 and postdoc Austin Akey at MIT, and Eric Hoke and William Nguyen of Stanford. It was supported by the Bay Area Photovoltaic Consortium and the U.S. Department of Energy.

Source: MIT News Office