
Ancient Stardust Sheds Light on the First Stars

Most distant object ever observed by ALMA

Astronomers have used ALMA to detect a huge mass of glowing stardust in a galaxy seen when the Universe was only four percent of its present age. This galaxy was observed shortly after its formation and is the most distant galaxy in which dust has been detected. This observation is also the most distant detection of oxygen in the Universe. These new results provide brand-new insights into the birth and explosive deaths of the very first stars.

This image is dominated by a spectacular view of the rich galaxy cluster Abell 2744 from the NASA/ESA Hubble Space Telescope. But, far beyond this cluster, and seen when the Universe was only about 600 million years old, is a very faint galaxy called A2744_YD4. New observations of this galaxy with ALMA, shown in red, have demonstrated that it is rich in dust. Credit: ALMA (ESO/NAOJ/NRAO), NASA, ESA, ESO and D. Coe (STScI)/J. Merten (Heidelberg/Bologna)

An international team of astronomers, led by Nicolas Laporte of University College London, have used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe A2744_YD4, the youngest and most remote galaxy ever seen by ALMA. They were surprised to find that this youthful galaxy contained an abundance of interstellar dust — dust formed by the deaths of an earlier generation of stars.

Follow-up observations using the X-shooter instrument on ESO’s Very Large Telescope confirmed the enormous distance to A2744_YD4. The galaxy appears to us as it was when the Universe was only 600 million years old, during the period when the first stars and galaxies were forming [1].

“Not only is A2744_YD4 the most distant galaxy yet observed by ALMA,” comments Nicolas Laporte, “but the detection of so much dust indicates early supernovae must have already polluted this galaxy.”

Cosmic dust is mainly composed of silicon, carbon and aluminium, in grains as small as a millionth of a centimetre across. The chemical elements in these grains are forged inside stars and are scattered across the cosmos when the stars die, most spectacularly in supernova explosions, the final fate of short-lived, massive stars. Today, this dust is plentiful and is a key building block in the formation of stars, planets and complex molecules; but in the early Universe — before the first generations of stars died out — it was scarce.

The observations of the dusty galaxy A2744_YD4 were made possible because this galaxy lies behind a massive galaxy cluster called Abell 2744 [2]. Because of a phenomenon called gravitational lensing, the cluster acted like a giant cosmic “telescope” to magnify the more distant A2744_YD4 by about 1.8 times, allowing the team to peer far back into the early Universe.

The ALMA observations also detected the glowing emission of ionised oxygen from A2744_YD4. This is the most distant, and hence earliest, detection of oxygen in the Universe, surpassing another ALMA result from 2016.

The detection of dust in the early Universe provides new information on when the first supernovae exploded and hence the time when the first hot stars bathed the Universe in light. Determining the timing of this “cosmic dawn” is one of the holy grails of modern astronomy, and it can be indirectly probed through the study of early interstellar dust.

The team estimates that A2744_YD4 contained an amount of dust equivalent to 6 million times the mass of our Sun, while the galaxy’s total stellar mass — the mass of all its stars — was 2 billion times the mass of our Sun. The team also measured the rate of star formation in A2744_YD4 and found that stars are forming at a rate of 20 solar masses per year — compared to just one solar mass per year in the Milky Way [3].
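The figures in the paragraph above allow a quick sanity check. Dividing the quoted stellar mass by the quoted star-formation rate gives a build-up time of the same order of magnitude as the roughly 200 million years the team derives; this back-of-envelope version assumes a constant star-formation rate, which the team's actual modelling does not.

```python
# Order-of-magnitude check only, assuming a constant star-formation rate
# (the team's modelling is more detailed and arrives at ~200 million years).
stellar_mass_msun = 2e9   # total stellar mass quoted in the text, in solar masses
sfr_msun_per_yr = 20      # star-formation rate quoted in the text

formation_time_yr = stellar_mass_msun / sfr_msun_per_yr
print(f"~{formation_time_yr / 1e6:.0f} million years")  # ~100 million years
```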

“This rate is not unusual for such a distant galaxy, but it does shed light on how quickly the dust in A2744_YD4 formed,” explains Richard Ellis (ESO and University College London), a co-author of the study. “Remarkably, the required time is only about 200 million years — so we are witnessing this galaxy shortly after its formation.”

This means that significant star formation began approximately 200 million years before the epoch at which the galaxy is being observed. This provides a great opportunity for ALMA to help study the era when the first stars and galaxies “switched on” — the earliest epoch yet probed. Our Sun, our planet and our existence are the products — 13 billion years later — of this first generation of stars. By studying their formation, lives and deaths, we are exploring our origins.

“With ALMA, the prospects for performing deeper and more extensive observations of similar galaxies at these early times are very promising,” says Ellis.

And Laporte concludes: “Further measurements of this kind offer the exciting prospect of tracing early star formation and the creation of the heavier chemical elements even further back into the early Universe.”


[1] This time corresponds to a redshift of z=8.38, during the epoch of reionisation.
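For readers who want to see where a figure like “600 million years” comes from, the age of the Universe at a given redshift can be sketched by numerically integrating the Friedmann equation for a flat ΛCDM cosmology. The Planck-like parameter values below are assumed for illustration, not taken from the paper.

```python
import math

# Hedged sketch: age of a flat Lambda-CDM universe at redshift z, from
# t(z) = integral over z' > z of dz' / ((1 + z') * H(z')).
# The cosmological parameters are assumed Planck-like values.
H0_KM_S_MPC = 67.7           # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.31, 0.69
MPC_KM = 3.0857e19           # kilometres per megaparsec
SEC_PER_GYR = 3.156e16       # seconds per gigayear

def age_at_z(z, z_max=5000.0, steps=200_000):
    """Age of the universe at redshift z, in Gyr (midpoint-rule integration)."""
    h0 = H0_KM_S_MPC / MPC_KM                 # H0 in 1/s
    dz = (z_max - z) / steps
    t = 0.0
    for i in range(steps):
        zi = z + (i + 0.5) * dz
        hz = h0 * math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
        t += dz / ((1 + zi) * hz)
    return t / SEC_PER_GYR

print(f"{age_at_z(8.38) * 1000:.0f} Myr")     # roughly 600 million years
```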

[2] Abell 2744 is a massive object, lying 3.5 billion light-years away (redshift 0.308), that is thought to be the result of four smaller galaxy clusters colliding. It has been nicknamed Pandora’s Cluster because of the many strange and different phenomena that were unleashed by the huge collision that occurred over a period of about 350 million years. The galaxies only make up five percent of the cluster’s mass, while dark matter makes up seventy-five percent, providing the massive gravitational influence necessary to bend and magnify the light of background galaxies. The remaining twenty percent of the total mass is thought to be in the form of hot gas.

[3] This rate means that the total mass of the stars formed every year is equivalent to 20 times the mass of the Sun.

More information

This research was presented in a paper entitled “Dust in the Reionization Era: ALMA Observations of a z = 8.38 Gravitationally-Lensed Galaxy” by Laporte et al., to appear in The Astrophysical Journal Letters.

The team is composed of N. Laporte (University College London, UK), R. S. Ellis (University College London, UK; ESO, Garching, Germany), F. Boone (Institut de Recherche en Astrophysique et Planétologie (IRAP), Toulouse, France), F. E. Bauer (Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Santiago, Chile), D. Quénard (Queen Mary University of London, London, UK), G. Roberts-Borsani (University College London, UK), R. Pelló (Institut de Recherche en Astrophysique et Planétologie (IRAP), Toulouse, France), I. Pérez-Fournon (Instituto de Astrofísica de Canarias, Tenerife, Spain; Universidad de La Laguna, Tenerife, Spain), and A. Streblyanska (Instituto de Astrofísica de Canarias, Tenerife, Spain; Universidad de La Laguna, Tenerife, Spain).

The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of ESO, the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI).

ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes: VISTA, which works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope, the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO

Income inequality linked to export “complexity”

The mix of products that countries export is a good predictor of income distribution, study finds.

By Larry Hardesty


CAMBRIDGE, Mass. – In a series of papers over the past 10 years, MIT Professor César Hidalgo and his collaborators have argued that the complexity of a country’s exports — not just their diversity but the expertise and technological infrastructure required to produce them — is a better predictor of future economic growth than factors economists have historically focused on, such as capital and education.

Now, a new paper by Hidalgo and his colleagues, appearing in the journal World Development, argues that everything else being equal, the complexity of a country’s exports also correlates with its degree of economic equality: The more complex a country’s products, the greater equality it enjoys relative to similar-sized countries with similar-sized economies.

“When people talk about the role of policy in inequality, there is an implicit assumption that you can always reduce inequality using only redistributive policies,” says Hidalgo, the Asahi Broadcasting Corporation Associate Professor of Media Arts and Sciences at the MIT Media Lab. “What these new results are telling us is that the effectiveness of policy is limited because inequality lives within a range of values that are determined by your underlying industrial structure.

“So if you’re a country like Venezuela, no matter how much money Chavez or Maduro gives out, you’re not going to be able to reduce inequality, because, well, all the money is coming in from one industry, and the 30,000 people involved in that industry of course are going to have an advantage in the economy. While if you’re in a country like Germany or Switzerland, where the economy is very diversified, and there are many people who are generating money in many different industries, firms are going to be under much more pressure to be more inclusive and redistributive.”

Joining Hidalgo on the paper are first author Dominik Hartmann, who was a postdoc in Hidalgo’s group when the work was done and is now a research fellow at the Fraunhofer Center for International Management and Knowledge Economy in Leipzig, Germany; Cristian Jara-Figueroa and Manuel Aristarán, MIT graduate students in media arts and sciences; and Miguel Guevara, a professor of computer science at Playa Ancha University in Valparaíso, Chile, who earned his PhD at the MIT Media Lab.

Quantifying complexity

For Hidalgo and his colleagues, the complexity of a product is related to the breadth of knowledge required to produce it. The PhDs who operate a billion-dollar chip-fabrication facility are repositories of knowledge, and the facility itself is the embodiment of knowledge. But complexity also factors in the infrastructure and institutions that facilitate the aggregation of knowledge, such as reliable transportation and communication systems, and a culture of trust that enables productive collaboration.

In the new study, rather than try to itemize and quantify all such factors — probably an impossible task — the researchers made a simplifying assumption: Complex products are rare products exported by countries with diverse export portfolios. For instance, both chromium ore and nonoptical microscopes are rare exports, but the Czech Republic, which is the second-leading exporter of nonoptical microscopes, has a more diverse export portfolio than South Africa, the leading exporter of chromium ore.
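That simplifying assumption can be illustrated with the “method of reflections” from Hidalgo and Hausmann's earlier work on economic complexity: country scores are repeatedly averaged over the products they export, and product scores over the countries that export them, starting from simple diversity and ubiquity counts. The miniature export matrix below is invented purely for illustration and is not data from the study.

```python
# Toy sketch of the method-of-reflections idea behind economic complexity.
# The export sets here are made up to echo the article's example.
exports = {                      # country -> set of exported products
    "Czech Republic": {"microscopes", "cars", "machinery", "glass"},
    "South Africa": {"chromium", "gold"},
    "Germany": {"microscopes", "cars", "machinery", "chemicals", "glass"},
}
products = {p for ps in exports.values() for p in ps}

# Starting point: k_c,0 = diversity (products per country),
#                 k_p,0 = ubiquity (exporters per product).
kc = {c: float(len(ps)) for c, ps in exports.items()}
kp = {p: float(sum(p in ps for ps in exports.values())) for p in products}

for _ in range(10):              # a few reflection steps
    kc_new = {c: sum(kp[p] for p in ps) / len(ps) for c, ps in exports.items()}
    kp_new = {p: sum(kc[c] for c, ps in exports.items() if p in ps)
                 / sum(p in ps for ps in exports.values()) for p in products}
    kc, kp = kc_new, kp_new

ranked = sorted(exports, key=kc.get, reverse=True)
print(ranked)  # diversified exporters of widely-shared products rank highest
```

In this toy matrix the single-commodity exporter ends up with the lowest complexity score, which is the qualitative pattern the measure is designed to capture.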

The researchers compared each country’s complexity measure to its Gini coefficient, the most widely used measure of income inequality. They also compared Gini coefficients to countries’ per-capita gross domestic products (GDPs) and to standard measures of institutional development and education.

Predictive power

According to the researchers’ analysis of economic data from 1996 to 2008, per-capita GDP predicts only 36 percent of the variation in Gini coefficients, but product complexity predicts 58 percent. Combining per-capita GDP, export complexity, education levels, and population predicts 69 percent of variation. However, whereas leaving out any of the other three factors lowers that figure to about 68 percent, leaving out complexity lowers it to 61 percent, indicating that the complexity measure captures something crucial that the other factors leave out.

Using trade data from 1963 to 2008, the researchers also showed that countries whose economic complexity increased, such as South Korea, saw reductions in income inequality, while countries whose economic complexity decreased, such as Norway, saw income inequality increase.

Source: MIT News Office


Researchers devise efficient power converter for internet of things


By Larry Hardesty


CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technologies Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase in current levels.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.
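The on-demand packet scheme described above can be caricatured in a few lines of simulation: each control cycle, the controller measures the output voltage and releases one fixed packet of charge only if the output has sagged below the threshold. Every component value below is invented for illustration; this is a toy model, not the MIT design.

```python
# Toy simulation of packet-based (on-demand) converter control.
# All values are assumed purely for illustration.
V_THRESHOLD = 0.9        # output-voltage threshold from the article, volts
PACKET_CHARGE = 2e-9     # charge delivered per packet, coulombs (assumed)
C_OUT = 1e-6             # output capacitance, farads (assumed)
DT = 1e-6                # one control cycle = 1 microsecond (assumed)

def simulate(load_current, cycles=200_000):
    """Return packets released per second for a given load current (amps)."""
    v_out, packets = V_THRESHOLD, 0
    for _ in range(cycles):
        v_out -= load_current * DT / C_OUT      # load drains the capacitor
        if v_out < V_THRESHOLD:                 # measure, compare to threshold
            v_out += PACKET_CHARGE / C_OUT      # release one energy packet
            packets += 1
    return packets / (cycles * DT)

# A light sensor-like load needs only a few packets per second, while a
# radio-sized load drives the rate up to around a million per second.
print(simulate(1e-9), simulate(2e-3))
```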

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Company, through its University Shuttle Program.

Source: MIT News Office


Academic and research collaboration to improve people-to-people contacts for peace and progress

Syed Faisal ur Rahman

The Muslim world, especially the Middle East and surrounding regions where we live, is facing some of the worst political turmoil in its history. We are seeing wars, terrorism, a refugee crisis and the resulting economic hardship. The toughest calamities fall on ordinary people, who have little or no control over the policies that produced the current mess. Worst of all is the exploitation of sectarianism as a tool to advance foreign-policy and strategic agendas. Muslims in many parts of the world criticize Western powers for this situation, but we also need to do some serious soul-searching.

We need to ask: why are we in this mess?

For me, one major reason is that OIC members have failed to find enough common constructive goals to bring their people together.

After the Second World War, Europe realized the importance of academic and economic cooperation for promoting peace and stability. CERN is a prime example of how formal foes can join hands for the purpose of discovery and innovation.

France and Germany have established joint institutes, and their universities regularly conduct joint research projects. The UK and USA, despite the enormous bloodshed of the American War of Independence, enjoy exemplary people-to-people relationships, and academic collaboration is a major part of them. It is this attitude of thinking big, finding common constructive goals and building strong academic collaboration that has put these countries at the forefront of science and technology.

Over the last few decades, humanity has sent probes like Voyager to the edge of our solar system, countries are planning to colonize Mars, satellites like Planck and WMAP have tracked radiation from the earliest stages of our universe, quantum computing now looks like a real possibility, and plans are being made for hypersonic flight. But in most of the so-called Muslim world, we remain stuck in centuries-old, fruitless sectarian disputes.

Despite some efforts in the defense sector, OIC member countries largely lack the technology base to independently produce jets, automobiles, advanced electronics, precision instruments and many other things that are produced by public or private companies in the USA, China, Russia, Japan and Europe. Most of what OIC countries produce indigenously relies heavily on foreign core components, such as engines or high-precision electronics. This is due to our lack of investment in fundamental research, especially physics.

OIC countries like Turkey, Pakistan, Malaysia, Iran, Saudi Arabia and some others have basic infrastructure on which they can build to conduct research projects and joint ventures in areas like space probes, ground-based optical and radio astronomy, particle physics, climate change and the development of a strong industrial technology base. All we need is the will to start joint projects and to promote knowledge sharing through exchanges of researchers and joint academic and industrial research projects.

These joint projects would not only enhance people-to-people contacts and improve academic research standards; they would also contribute positively to the overall progress of humanity. It is a great loss for humanity as a whole that a civilization which once led the development of astronomy, medicine and other key areas of science now makes little or no contribution to advancing our understanding of the universe.

The situation is bad, and if we look at Syria, Afghanistan, Iraq, Yemen or Libya, it seems we have hit rock bottom. It is we who must find the way out of this mess; no one else is going to solve our problems, especially the current sectarian strife, which is the result of narrow minds making weak decisions. To escape this dire state, we need broad minds with big vision and a desire to move forward through mutual respect and understanding.


Stanford study finds promise in expanding renewables based on results in three major economies

A new Stanford study found that renewable energy can make a major and increasingly cost-effective contribution to alleviating climate change.


Stanford energy experts have released a study that compares the experiences of three large economies in ramping up renewable energy deployment and concludes that renewables can make a major and increasingly cost-effective contribution to climate change mitigation.

The report from Stanford’s Steyer-Taylor Center for Energy Policy and Finance analyzes the experiences of Germany, California and Texas, the world’s fourth, eighth and 12th largest economies, respectively. It found, among other things, that Germany, which gets about half as much sunshine as California and Texas, nevertheless generates electricity from solar installations at a cost comparable to that of Texas and only slightly higher than in California.

The report was released in time for the United Nations Climate Change Conference that started this week, where international leaders are gathering to discuss strategies to deal with global warming, including massive scale-ups of renewable energy.

“As policymakers from around the world gather for the climate negotiations in Paris, our report draws on the experiences of three leaders in renewable-energy deployment to shed light on some of the most prominent and controversial themes in the global renewables debate,” said Dan Reicher, executive director of the Steyer-Taylor Center, which is a joint center between Stanford Law School and Stanford Graduate School of Business. Reicher also is interim president and chief executive officer of the American Council on Renewable Energy.

“Our findings suggest that renewable energy has entered the mainstream and is ready to play a leading role in mitigating global climate change,” said Felix Mormann, associate professor of law at the University of Miami, faculty fellow at the Steyer-Taylor Center and lead author of the report.

Other conclusions of the report, “A Tale of Three Markets: Comparing the Solar and Wind Deployment Experiences of California, Texas, and Germany,” include:

  • Germany’s success in deploying renewable energy at scale is due largely to favorable treatment of “soft cost” factors such as financing, permitting, installation and grid access. This approach has allowed the renewable energy policies of some countries to deliver up to four times the average deployment of other countries, despite offering only half the financial incentives.
  • Contrary to widespread concern, a higher share of renewables does not automatically translate to higher electricity bills for ratepayers. While Germany’s residential electric rates are two to three times those of California and Texas, this price differential is only partly due to Germany’s subsidies for renewables. The average German household’s electricity bill is, in fact, lower than in Texas and only slightly higher than in California, partly as a result of energy-efficiency efforts in German homes.
  • An increase in the share of intermittent solar and wind power need not jeopardize the stability of the electric grid. From 2006 to 2013, Germany tripled the amount of electricity generated from solar and wind to a market share of 26 percent, while managing to reduce average annual outage times for electricity customers in its grid from an already impressive 22 minutes to just 15 minutes. During that same period, California tripled the amount of electricity produced from solar and wind to a joint market share of 8 percent and reduced its outage times from more than 100 minutes to less than 90 minutes. However, Texas increased its outage times from 92 minutes to 128 minutes after ramping up its wind-generated electricity sixfold to a market share of 10 percent.

The study may inform the energy debate in the United States, where expanding the nation’s renewable energy infrastructure is a top priority of the Obama administration and the subject of debate among presidential candidates.

The current share of renewables in U.S. electricity generation is 14 percent – half that of Germany. Germany’s ambitious – and controversial – Energiewende (Energy Transition) initiative commits the country to meeting 80 percent of its electricity needs with renewables by 2050. In the United States, 29 states, including California and Texas, have set mandatory targets for renewable energy.

In California, Gov. Jerry Brown recently signed legislation committing the state to producing 50 percent of its electricity from renewables by 2030. Texas, the leading U.S. state for wind development, set a mandate of 10,000 megawatts of renewable energy capacity by 2025, but reached this target 15 years ahead of schedule and now generates over 10 percent of the state’s electricity from wind alone.

Source: Stanford News

Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler


CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)
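To see how temperature and humidity combine into a single survivability number, one widely used empirical approximation is Stull's 2011 fit for wet-bulb temperature at sea-level pressure, valid roughly for relative humidities between 5 and 99 percent. This is purely illustrative and is not the calculation used in the study.

```python
import math

# Stull (2011) empirical fit: wet-bulb temperature in deg C from air
# temperature t_c (deg C) and relative humidity rh (percent), at
# sea-level pressure. Illustrative only; not the study's method.
def wet_bulb(t_c, rh):
    return (t_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t_c + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# A 46 C day at 50 percent humidity already crosses the 35 C threshold.
print(round(wet_bulb(46, 50), 1))
```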

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear skies, a large water body that increases heat absorption, and the shallowness of the Persian Gulf itself, which produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the annual Hajj, or annual Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Automating big-data analysis: MIT Research

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

By Larry Hardesty

Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
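The aggregation step described above can be sketched in a few lines. The code below is a minimal illustration of the idea, not the Data Science Machine itself; the tables, column names, and values are hypothetical:

```python
from collections import defaultdict

# Hypothetical tables: item costs, and items belonging to customer orders.
items = {"apple": 1.50, "bread": 3.00, "milk": 2.25}
orders = [
    {"order_id": 1, "item": "apple"},
    {"order_id": 1, "item": "bread"},
    {"order_id": 2, "item": "milk"},
    {"order_id": 2, "item": "bread"},
    {"order_id": 2, "item": "apple"},
]

def order_features(items, orders):
    """Import costs from the items table into the orders table, then
    aggregate per order_id to generate candidate features."""
    costs = defaultdict(list)
    for row in orders:
        costs[row["order_id"]].append(items[row["item"]])
    return {
        oid: {
            "total_cost": sum(c),
            "avg_cost": sum(c) / len(c),
            "min_cost": min(c),
        }
        for oid, c in costs.items()
    }

features = order_features(items, orders)
print(features[1]["total_cost"])  # apple + bread
```

Layering such operations across more tables (minima of averages, averages of sums) produces the deep stacks of candidate features the article describes.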

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
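The pruning step can likewise be sketched with a simple Pearson-correlation filter. Again, this is a hypothetical illustration of the general technique, not the system’s actual selection code:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def prune_correlated(features, threshold=0.95):
    """Keep each feature only if it is not highly correlated with one
    already kept.  `features` maps names to equal-length value lists."""
    kept = {}
    for name, values in features.items():
        if all(abs(pearson(values, v)) < threshold for v in kept.values()):
            kept[name] = values
    return kept

candidates = {
    "total_cost": [10.0, 20.0, 30.0, 40.0],
    "cost_in_cents": [1000.0, 2000.0, 3000.0, 4000.0],  # redundant rescaling
    "item_count": [1.0, 4.0, 2.0, 3.0],
}
print(sorted(prune_correlated(candidates)))
```

Here the rescaled duplicate is dropped while the genuinely distinct feature survives; the surviving set would then be tested and recombined against sample data as the article describes.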

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Source: MIT News Office


Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double the excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

New research shows how to make effective political arguments, Stanford sociologist says

Stanford sociologist Robb Willer finds that an effective way to persuade people in politics is to reframe arguments to appeal to the moral values of those holding opposing positions.


In today’s American politics, it might seem impossible to craft effective political messages that reach across the aisle on hot-button issues like same-sex marriage, national health insurance and military spending. But, based on new research by Stanford sociologist Robb Willer, there’s a way to craft messages that could lead to politicians finding common ground.

“We found the most effective arguments are ones in which you find a new way to connect a political position to your target audience’s moral values,” Willer said.

While most people’s natural inclination is to make political arguments grounded in their own moral values, Willer said, these arguments are less persuasive than “reframed” moral arguments.

To be persuasive, reframe political arguments to appeal to the moral values of those holding the opposing political positions, said Matthew Feinberg, assistant professor of organizational behavior at the University of Toronto, who co-authored the study with Willer. Their work was published recently online in the Personality and Social Psychology Bulletin.

Such reframed moral appeals are persuasive because they increase the apparent agreement between a political position and the target audience’s moral values, according to the research, Feinberg said.

In fact, Willer pointed out, the research shows a “potential effective path for building popular support in our highly polarized political world.” Creating bipartisan success on legislative issues – whether in Congress or in state legislatures – requires such a sophisticated approach to building coalitions among groups not always in agreement with each other, he added.

Different moral values

Feinberg and Willer drew upon past research showing that American liberals and conservatives tend to endorse different moral values to different extents. For example, liberals tend to be more concerned with values like care and equality, whereas conservatives are more concerned with values like group loyalty, respect for authority and purity.

They then conducted four studies testing the idea that moral arguments reframed to fit a target audience’s moral values could be persuasive on even deeply entrenched political issues. In one study, conservative participants recruited via the Internet were presented with passages that supported legalizing same-sex marriage.

Conservative participants were ultimately persuaded by a patriotism-based argument that “same-sex couples are proud and patriotic Americans … [who] contribute to the American economy and society.”

On the other hand, they were significantly less persuaded by a passage that argued for legalized same-sex marriage in terms of fairness and equality.

Feinberg and Willer found similar results for studies targeting conservatives with a pro-national health insurance message and liberals with arguments for high levels of military spending and making English the official language of the United States. In all cases, messages were significantly more persuasive when they fit the values endorsed more by the target audience.

“Morality can be a source of political division, a barrier to building bi-partisan support for policies,” Willer said. “But it can also be a bridge if you can connect your position to your audience’s deeply held moral convictions.”

Values and framing messages

“Moral reframing is not intuitive to people,” Willer said. “When asked to make moral political arguments, people tend to make the ones they believe in and not those of an opposing audience – but the research finds this type of argument unpersuasive.”

To test this, the researchers conducted two additional studies examining the moral arguments people typically make. They asked a panel of self-reported liberals to make arguments that would convince a conservative to support same-sex marriage, and a panel of conservatives to convince liberals to support English being the official language of the United States.

They found that, in both studies, most participants crafted messages with significant moral content, and most of that moral content reflected their own moral values, precisely the sort of arguments their other studies showed were ineffective.

“Our natural tendency is to make political arguments in terms of our own morality,” Feinberg said. “But the most effective arguments are based on the values of whomever you are trying to persuade.”

In all, Willer and Feinberg conducted six online studies involving 1,322 participants.

Source: Stanford News 

Penn Vet-Temple team characterizes genetic mutations linked to a form of blindness

Achromatopsia is a rare, inherited vision disorder that affects the eye’s cone cells, resulting in problems with daytime vision, clarity and color perception. It often strikes people early in life, and currently there is no cure for the condition.

One of the most promising avenues for developing a cure, however, is through gene therapy, and to create those therapies requires animal models of disease that closely replicate the human condition.

In a new study, a collaboration between University of Pennsylvania and Temple University scientists has identified two naturally occurring genetic mutations in dogs that result in achromatopsia. Having identified the mutations responsible, they used structural modeling and molecular dynamics on the Titan supercomputer at Oak Ridge National Laboratory and the Stampede supercomputer at the Texas Advanced Computing Center to simulate how the mutations would impact the resulting protein, showing that the mutations destabilized a molecular channel essential to light signal transduction.

The findings provide new insights into the molecular cause of this form of blindness and also present new opportunities for conducting preclinical assessments of curative gene therapy for achromatopsia in both dogs and humans.

“Our work in the dogs, in vitro and in silico shows us the consequences of these mutations in disrupting the function of these crucial channels,” said Karina Guziewicz, senior author on the study and a senior research investigator at Penn’s School of Veterinary Medicine. “Everything we found suggests that gene therapy will be the best approach to treating this disease, and we are looking forward to taking that next step.”

The study was published in the journal PLOS ONE and coauthored by Penn Vet’s Emily V. Dutrow and Temple’s Naoto Tanaka. Additional coauthors from Penn Vet included Gustavo D. Aguirre, Keiko Miyadera, Shelby L. Reinstein, William R. Crumley and Margret L. Casal. Temple’s team, all from the College of Science and Technology, included Lucie Delemotte, Christopher M. MacDermaid, Michael L. Klein and Jacqueline C. Tanaka. Christopher J. Dixon of Veterinary Vision in the United Kingdom also contributed.

The research began with a German shepherd that was brought to Penn Vet’s Ryan Hospital. The owners were worried about its vision.

“This dog displayed a classical loss of cone vision; it could not see well in daylight but had no problem in dim light conditions,” said Aguirre, professor of medical genetics and ophthalmology at Penn Vet.

The Penn Vet researchers wanted to identify the genetic cause, but the dog had none of the “usual suspects,” the known gene mutations responsible for achromatopsia in dogs. To find the new mutation, the scientists looked at five key genes that play a role in phototransduction, or the process by which light signals are transmitted through the eye to the brain.

They found what they were looking for on the CNGA3 gene, which encodes a cyclic nucleotide channel and plays a key role in transducing visual signals. The change was a “missense” mutation, meaning that the mutation results in the production of a different amino acid. Meanwhile, they heard from colleague Dixon that he had examined Labrador retrievers with similar symptoms. When the Penn team performed the same genetic analysis, they found a different mutation on the same part of the same gene where the shepherd’s mutation was found. Neither mutation had ever been characterized previously in dogs.
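A missense mutation can be made concrete with a tiny sketch. The example below uses the textbook sickle-cell substitution (GAG to GTG, glutamic acid to valine) purely as an illustration; it is not one of the CNGA3 mutations identified in this study:

```python
# A small excerpt of the standard genetic code: codon -> amino acid.
CODON_TABLE = {
    "GAG": "Glu",  # glutamic acid
    "GTG": "Val",  # valine
    "GAT": "Asp",  # aspartic acid
}

def point_mutation(codon: str, position: int, base: str) -> str:
    """Return the codon with a single nucleotide substituted."""
    return codon[:position] + base + codon[position + 1:]

original = "GAG"                             # encodes Glu
mutated = point_mutation(original, 1, "T")   # A -> T at the second position
print(CODON_TABLE[original], "->", CODON_TABLE[mutated])
```

A single changed nucleotide yields a different amino acid in the protein, which, as in the channels studied here, can be enough to destabilize or misroute it.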

“The next step was to take this further and look at the consequences of these particular mutations,” Guziewicz said.

The group had the advantage of using the Titan and Stampede supercomputers, which can simulate models of the atomic structure of proteins and thereby elucidate how the protein might function. That work revealed that both mutations disrupted the function of the channel, making it unstable.

“The computational approach allows us to model, right down to the atomic level, how small changes in protein sequence can have a major impact on signaling,” said MacDermaid, assistant professor of research at Temple’s Institute for Computational Molecular Science. “We can then use these insights to help us understand and refine our experimental and clinical work.”

The Temple researchers recreated these mutated channels and showed that one resulted in a loss of channel function. Further in vitro experiments showed that the second mutation caused the channels to be routed improperly within the cell.

Penn Vet researchers have had success in treating various forms of blindness in dogs with gene therapy, setting the stage to treat human blindness. In human achromatopsia, nearly 100 different mutations have been identified in the CNGA3 gene, including the very same one identified in the German shepherd in this study.

The results, therefore, lay the groundwork for designing gene therapy constructs that can target this form of blindness with the same approach.

The study was supported by the Foundation Fighting Blindness, the National Eye Institute, the National Science Foundation, the European Union Seventh Framework Program, Hope for Vision, the Macula Vision Research Foundation and the Van Sloun Fund for Canine Genetic Research.

Source: University of Pennsylvania