Tag Archives: science

Academic and research collaboration to improve people-to-people contacts for peace and progress

Syed Faisal ur Rahman

The Muslim world, especially the Middle East and surrounding regions where we live, is facing some of the worst political turmoil in its history. We are seeing wars, terrorism, a refugee crisis and the resulting economic crises. The toughest calamities are faced by common people, who have little or no control over the policies that created the current mess. The worst part is the exploitation of sectarianism as a tool to advance foreign policy and strategic agendas. Muslims in many parts of the world criticize Western powers for this situation, but we also need to do some serious soul searching.

We need to ask: why are we in this mess?

For me, one major reason is that OIC member states have failed to find enough common constructive goals to bring their people together.

After the Second World War, Europe realized the importance of academic and economic cooperation for promoting peace and stability. CERN is a prime example of how former foes can join hands for the purpose of discovery and innovation.

France and Germany have established joint institutes, and their universities regularly conduct joint research projects. The UK and the USA, despite the enormous bloodshed of the American War of Independence, enjoy an exemplary people-to-people relationship, and academic collaboration is a major part of it. It is this attitude of thinking big, finding common constructive goals and building strong academic collaborations that has put these countries at the forefront of science and technology.

Over the last few decades, humanity has sent probes like Voyager that are pushing past the limits of our solar system, countries are thinking about colonizing Mars, satellites like Planck and WMAP are tracking radiation from the earliest stages of our universe, quantum computing now looks like a real possibility and plans are being drawn up for hypersonic flight. But in most of the so-called Muslim world, we are stuck with centuries-old, good-for-nothing sectarian disputes.

Despite some efforts in the defense sector, OIC member countries largely lack the technology base to independently produce jets, automobiles, advanced electronics, precision instruments and many other things that are produced by public or private sector companies in the USA, China, Russia, Japan and Europe. Most of what is produced indigenously by OIC countries relies heavily on foreign core components such as engines or high-precision electronics. This is due to our lack of investment in fundamental research, especially physics.

OIC countries like Turkey, Pakistan, Malaysia, Iran, Saudi Arabia and some others have basic infrastructure they can build upon to conduct research projects and joint ventures in areas like space probes, ground-based optical and radio astronomy, particle physics, climate change and the development of a strong industrial technology base. All we need is the will to start joint projects and to promote knowledge sharing through exchanges of researchers and joint academic and industrial research projects.

Such joint projects would not only help enhance people-to-people contacts and improve academic research standards, they would also contribute positively to the overall progress of humanity. It is a great loss for humanity as a whole that a civilization which once led the development of astronomy, medicine and other key areas of science now makes little or no contribution to advancing our understanding of the universe.

The situation is bad, and if we look at Syria, Afghanistan, Iraq, Yemen or Libya it seems we have hit rock bottom. It is we who need to find the way out of this mess, as no one else is going to solve our problems, especially the current sectarian mess, which is the result of narrow mindsets making weak decisions. To come out of this dire state, we need broad minds with big vision and a desire to move forward through mutual respect and understanding.

 


Entering 2016 with new hope

Syed Faisal ur Rahman


 

The year 2015 left many good and bad memories for many of us. On one hand we saw more wars, terrorist attacks and political confrontations, and on the other we saw humanity raising its voice for peace, sheltering refugees and joining hands to confront climate change.

In science, we saw the first-ever photograph of light behaving as both a wave and a particle. We also saw serious developments in machine learning, data science and artificial intelligence, with some voices urging caution about AI overtaking humanity and about issues related to privacy. The big questions of energy and climate change remained key points of discussion in scientific and political circles. The biggest breakthrough came near the end of the year with the Paris deal at COP21.

The deal, involving around 200 countries, represents a true spirit of common humanity: a commitment to keep global warming well below 2C above pre-industrial levels and to strive to limit the rise to 1.5C. This truly global commitment also brought rival countries to sit together for a common cause, saving humanity from self-destruction. I hope the spirit will continue in other areas of common interest as well.

This spectacular view from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 1689. The huge concentration of mass bends light coming from more distant objects and can increase their total apparent brightness and make them visible. One such object, A1689-zD1, is located in the box — although it is still so faint that it is barely seen in this picture.
New observations with ALMA and ESO’s VLT have revealed that this object is a dusty galaxy seen when the Universe was just 700 million years old.
Credit: NASA; ESA; L. Bradley (Johns Hopkins University); R. Bouwens (University of California, Santa Cruz); H. Ford (Johns Hopkins University); and G. Illingworth (University of California, Santa Cruz)

The space sciences also saw enormous advances, with New Horizons sending back photographs of Pluto and SpaceX landing the first stage of its reusable Falcon 9 rocket after a successful launch. We also saw the discovery of the largest regular formation in the Universe, by Prof. Lajos Balazs and colleagues: a ring of nine galaxies seven billion light-years away and five billion light-years across, covering a third of our sky. We also learnt this year that Mars once had more water than Earth’s Arctic Ocean, and NASA later confirmed evidence that water flows on the surface of Mars. The announcement led to interesting insights into the atmosphere and history of the red planet.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

We also saw encouraging advances in neuroscience, where MIT researchers developed a technique that allows direct stimulation of neurons, which could be an effective treatment for a variety of neurological diseases, without the need for implants or external connections. Researchers also reactivated neuroplasticity in older mice, restoring their brains to a younger state, and there was good progress in combating Alzheimer’s disease.

Quantum physics again remained a key area of scientific advancement. Quantum computing is getting closer to becoming a viable alternative to current computing architectures; the packing of single-photon detectors onto an optical chip is a crucial step toward quantum-computational circuits. Researchers at the Australian National University (ANU) also performed an experiment showing that, at the quantum level, reality does not exist until it is measured.

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.
Credit: EPFL

There are many other areas where science and technology reached new heights and will hopefully continue to do so in 2016. I hope these advancements will not only help us grow economically but also help us become better human beings and a better society.

Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler


 

CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)
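
For readers who want a feel for the numbers, here is a minimal Python sketch that estimates wet-bulb temperature from air temperature and relative humidity using Stull's empirical approximation; the function and the example inputs are illustrative assumptions, and the MIT study itself works from full climate-model output rather than this shortcut.

import math

def wet_bulb_stull(temp_c, rh_percent):
    """Approximate wet-bulb temperature (C) from air temperature (C) and
    relative humidity (%), using Stull's 2011 empirical fit (valid roughly
    for RH 5-99% and temperatures from -20 C to 50 C at sea-level pressure)."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Hypothetical hot, humid Gulf afternoon: 45 C air temperature at 50% relative humidity
print(round(wet_bulb_stull(45.0, 50.0), 1))  # about 35 C, right at the survivability threshold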

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear sky, a water body that increases heat absorption, and the shallowness of the Persian Gulf itself, which produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the annual Hajj, or annual Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Automating big-data analysis: MIT Research

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

By Larry Hardesty


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
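
As a concrete illustration of that kind of hand-crafted feature engineering, the short pandas sketch below derives a promotion-duration feature and an average-weekly-profit feature from raw start dates, end dates and profits; the table, column names and numbers are hypothetical, invented only for this example.

import pandas as pd

# Hypothetical raw table: the columns a human analyst might start from
promos = pd.DataFrame({
    "promo_id": [1, 2, 3],
    "start_date": pd.to_datetime(["2015-01-05", "2015-02-02", "2015-03-09"]),
    "end_date": pd.to_datetime(["2015-01-19", "2015-02-09", "2015-03-30"]),
    "total_profit": [8400.0, 3100.0, 12600.0],
})

# The informative features are often not the raw dates or totals themselves but
# the span between the dates and the average profit across that span.
promos["duration_weeks"] = (promos["end_date"] - promos["start_date"]).dt.days / 7
promos["avg_weekly_profit"] = promos["total_profit"] / promos["duration_weeks"]

print(promos[["promo_id", "duration_weeks", "avg_weekly_profit"]])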

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
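
In the same spirit (this is an illustrative pandas sketch, not the Data Science Machine's actual code), the snippet below imports item costs into an order-lines table and then layers aggregation operations per order; every table and column name here is a made-up example.

import pandas as pd

# Hypothetical tables linked by numerical identifiers
items = pd.DataFrame({"item_id": [10, 11, 12],
                      "cost": [4.99, 12.50, 7.25]})
order_lines = pd.DataFrame({"order_id": [1, 1, 2, 2, 2],
                            "item_id": [10, 11, 10, 12, 12]})

# Step 1: import costs from the items table into the order-lines table
lines = order_lines.merge(items, on="item_id")

# Step 2: use the shared order_id to generate candidate features per order
order_features = lines.groupby("order_id")["cost"].agg(
    total_cost="sum", average_cost="mean", minimum_cost="min")

# Step 3: if another identifier (say, a customer_id) linked these orders, the same
# aggregations could be stacked again to get averages of sums, minima of averages, and so on.
print(order_features)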

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
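
A minimal sketch of that pruning step, under the assumption of a simple pairwise-correlation threshold (the threshold value and the toy data are my own, not details from the paper), could look like this:

import pandas as pd

def drop_correlated_features(features, threshold=0.95):
    """Drop one feature from each pair whose absolute correlation exceeds the threshold."""
    corr = features.corr().abs()
    cols = corr.columns
    to_drop = set()
    for i, col_a in enumerate(cols):
        for col_b in cols[i + 1:]:
            if col_b not in to_drop and corr.loc[col_a, col_b] > threshold:
                to_drop.add(col_b)  # keep col_a, drop its near-duplicate
    return features.drop(columns=sorted(to_drop))

# Hypothetical candidate features: total_cost and average_cost are nearly collinear here,
# so average_cost is dropped as a near-duplicate while minimum_cost survives.
candidates = pd.DataFrame({
    "total_cost": [10.0, 20.0, 30.0, 40.0],
    "average_cost": [5.1, 10.2, 14.9, 20.3],
    "minimum_cost": [1.0, 7.0, 2.0, 9.0],
})
print(drop_correlated_features(candidates).columns.tolist())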

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Source: MIT News Office

 

Climate change requires new conservation models, Stanford scientists say

In a world transformed by climate change and human activity, Stanford scientists say that conserving biodiversity and protecting species will require an interdisciplinary combination of ecological and social research methods.

By Ker Than

A threatened tree species in Alaska could serve as a model for integrating ecological and social research methods in efforts to safeguard species that are vulnerable to climate change effects and human activity.

In a new Stanford-led study, published online this week in the journal Biological Conservation, scientists assessed the health of yellow cedar, a culturally and commercially valuable tree throughout coastal Alaska that is experiencing climate change-induced dieback.

In an era when climate change touches every part of the globe, the traditional conservation approach of setting aside lands to protect biodiversity is no longer sufficient to protect species, said the study’s first author, Lauren Oakes, a research associate at Stanford University.

“A lot of that kind of conservation planning was intended to preserve historic conditions, which, for example, might be defined by the population of a species 50 years ago or specific ecological characteristics when a park was established,” said Oakes, who is a recent PhD graduate of the Emmett Interdisciplinary Program in Environment and Resources (E-IPER) at Stanford’s School of Earth, Energy, & Environmental Sciences.

But as the effects of climate change become increasingly apparent around the world, resource managers are beginning to recognize that “adaptive management” strategies are needed that account for how climate change affects species now and in the future.

Similarly, because climate change effects will vary across regions, new management interventions must consider not only local laws, policies and regulations, but also local peoples’ knowledge about climate change impacts and their perceptions about new management strategies. For yellow cedar, new strategies could include assisting migration of the species to places where it may be more likely to survive or increasing protection of the tree from direct uses, such as harvesting.

Gathering these perspectives requires an interdisciplinary social-ecological approach, said study leader Eric Lambin, the George and Setsuko Ishiyama Provostial Professor in the School of Earth, Energy, & Environmental Sciences.

“The impact of climate change on ecosystems is not just a biophysical issue. Various actors depend on these ecosystems and on the services they provide for their livelihoods,” said Lambin, who is also a senior fellow at the Stanford Woods Institute for the Environment.

“Moreover, as the geographic distribution of species is shifting due to climate change, new areas that are currently under human use will need to be managed for biodiversity conservation. Any feasible management solution needs to integrate the ecological and social dimensions of this challenge.”

Gauging yellow cedar health

The scientists used aerial surveys to map the distribution of yellow cedar in Alaska’s Glacier Bay National Park and Preserve (GLBA) and collected data about the trees’ health and environmental conditions from 18 randomly selected plots inside the park and just south of the park on designated wilderness lands.

“Some of the plots were really challenging to access,” Oakes said. “We would get dropped off by boat for 10 to 15 days at a time, travel by kayak on the outer coast, and hike each day through thick forests to reach the sites. We’d wake up at 6 a.m. and it wouldn’t be until 11 a.m. that we reached the sites and actually started the day’s work of measuring trees.”

The field surveys revealed that yellow cedars inside of GLBA were relatively healthy and unstressed compared to trees outside the park, to the south. Results also showed reduced crowns and browned foliage in yellow cedar trees at sites outside the park, indicating early signs of the dieback progressing toward the park.

Additionally, modeling by study co-authors Paul Hennon, David D’Amore, and Dustin Wittwer at the USDA Forest Service suggested the dieback is expected to emerge inside GLBA in the future. As the region warms, reductions in snow cover, which helps insulate the tree’s shallow roots, leave the roots vulnerable to sudden springtime cold events.

Merging disciplines

In addition to collecting data about the trees themselves with a team of research assistants, Oakes conducted interviews with 45 local residents and land managers to understand their perceptions about climate change-induced yellow cedar dieback; whether or not they thought humans should intervene to protect the species in GLBA; and what forms those interventions should take.

One unexpected and interesting pattern that emerged from the interviews is that those participants who perceived protected areas as “separate” from nature commonly expressed strong opposition to intervention inside protected areas, like GLBA. In contrast, those who thought of humans as being “a part of” protected areas viewed intervention more favorably.

“Native Alaskans told me stories of going to yellow cedar trees to walk with their ancestors,” Oakes said. “There were other interview participants who said they’d go to a yellow cedar tree every day just to be in the presence of one.”

These people tended to support new kinds of interventions because they believed humans were inherently part of the system and they derived many intangible values, like spiritual or recreational values, from the trees. In contrast, those who perceived protected areas as “natural” and separate from humans were more likely to oppose new interventions in the protected areas.

Lambin said he was not surprised to see this pattern for individuals because people’s choices are informed by their values. “It was less expected for land managers who occupy an official role,” he added. “We often think about an organization and its missions, but forget that day-to-day decisions are made by people who carry their own value systems and perceptions of risks.”

The insights provided by combining ecological and social techniques could inform decisions about when, where, and how to adapt conservation practices in a changing climate, said study co-author Nicole Ardoin, an assistant professor at Stanford’s Graduate School of Education and a center fellow at the Woods Institute.

“Some initial steps in southeast Alaska might include improving tree monitoring in protected areas and increasing collaboration among the agencies that oversee managed and protected lands, as well as working with local community members to better understand how they value these species,” Ardoin said.

The team members said they believe their interdisciplinary approach is applicable to other climate-sensitive ecosystems and species, ranging from redwood forests in California to wild herbivore species in African savannas, and especially those that are currently surrounded by human activities.

“In a human-dominated planet, such studies will have to become the norm,” Lambin said. “Humans are part of these land systems that are rapidly transforming.”

This study was done in partnership with the U.S. Forest Service Pacific Northwest Research Station. It was funded with support from the George W. Wright Climate Change Fellowship; the Morrison Institute for Population and Resource Studies and the School of Earth, Energy & Environmental Sciences at Stanford University; the Wilderness Society Gloria Barron Fellowship; the National Forest Foundation; and U.S. Forest Service Pacific Northwest Research Station and Forest Health Protection.

For more Stanford experts on climate change and other topics, visit Stanford Experts.

Source : Stanford News


Science, politics, news agenda and our priorities

By Syed Faisal ur Rahman


 

The recent postponement by the government of Pakistan, due to security concerns, of the first Organisation of Islamic Cooperation (OIC) summit on science and technology and the 15th COMSTECH general assembly tells a lot about our national priorities.

The summit was to be the first meeting of its kind, bringing together heads of state and dignitaries from the Muslim world on the issue of science and technology.

Today most Muslim countries are seen in other parts of the world as backward, narrow-minded and violent places. Recent wars in the Middle East, sectarian rifts and totalitarian regimes do not present a great picture either. While the rest of the world is sending probes toward the edge of our solar system, sending missions to Mars and exploring the moons of Saturn, we are busy failing to sight the moon on the right dates of the Islamic calendar.

Any average person can figure out that we need something drastic to change this situation. This summit was exactly the kind of step we needed for a jump start. Serious efforts were made by the COMSTECH staff under the leadership of Dr. Shaukat Hameed Khan, and even the secretary general of the OIC was pushing hard for the summit. According to reports, the OIC secretary general personally visited more than a dozen OIC member countries to convince their heads of state to attend.

The summit would also have provided an opportunity to bring harmony and peace to the Muslim world, as many Muslim countries are at odds with each other over regional issues such as those in Syria, Iraq, Yemen and Afghanistan.

The last century saw enormous developments in the fields of fundamental science, which also helped countries rapidly develop their potential in industry, medical sciences, defense, space and many other sectors. Countries that made science and technology research and education priority areas emerged as stronger nations compared to those that merely relied on agriculture and the abundance of natural resources. We now live in an era in which humanity is reaching the edge of our solar system through probes like Voyager 1, sent decades ago by NASA with messages from our civilization; quantum computing is well on its way to becoming a reality; humanity is endeavoring to colonize other planets through multi-national projects; and we are looking deeper into space for new stars and galaxies, and even back to some of the earliest times after the creation of our universe, through cosmic microwave background probes like Planck.

Unfortunately, in Pakistan, anti-science and anti-research attitudes are getting stronger. This attitude is not limited to religious zealots; the so-called liberals of Pakistan also pay little heed to what is going on in the world of science.

If you regularly follow the political arena and daily news coverage in the media, and keep your ears open to what is going on in the country, you can easily get an idea of our priorities as a nation. How many talk shows did we see on the mainstream media over the cancellation of the summit? How many questions were raised in parliament?

The absence, or barely noticeable presence, of such issues is conspicuous; apart from one senator, Sehar Kamran, who wrote a piece in a newspaper, no politician even bothered to raise the relevant questions.

Forget the mainstream media and politicians. On social media and in drawing-room discussions, did you hear anyone debating the issue, while we make a fuss about things like what dress some model was wearing at her court hearing in a money-laundering case, which politician’s marriage is supposedly in trouble, or whose hand Junaid Jamshed was holding in a group photo?

We boast about our success in reducing terrorism through successful military operations, and we use that success to attract investors, sports teams and tourists, but on the other hand we are using security concerns as an excuse to cancel an important summit on the development of science and technology. This shows that either we are confused, or we are hypocrites, or we are simply not ready for any kind of intellectual growth.

There is a need to do some serious brainstorming and soul searching about our priorities. One thing I have learned as a student of astronomy is that we are insignificant compared to the vastness of our universe; the only thing that can make us somewhat special, compared to other species on Earth or a lifeless rock on Pluto, is that we can use our ability to think in order to learn, to explore and to discover. Unfortunately, in our country we are losing this special capacity day by day.

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
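
For readers who want to see that grid-filling concretely, here is a compact, textbook Python rendering of the Wagner-Fischer dynamic program (not code from the MIT paper):

def edit_distance(a, b):
    """Wagner-Fischer: minimum number of insertions, deletions and
    substitutions needed to turn string a into string b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all i characters of a
    for j in range(n + 1):
        dp[0][j] = j          # insert all j characters of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution (or match)
    return dp[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # 4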

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
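
As a tiny illustration of what satisfying all the constraints means computationally, and why naive search scales exponentially, the sketch below brute-forces an invitation list against a few constraints in the spirit of the example above; the encoding is invented for illustration and has nothing to do with the reduction used in the paper.

from itertools import product

guests = ["Alice", "Bob", "Cindy", "Dave", "Ellen", "Fred", "George"]

# Each constraint takes an invite/stay-home assignment (a dict of booleans)
# and returns True if the constraint is satisfied.
constraints = [
    lambda s: not (s["Alice"] and s["Bob"]),                  # one of them stays home with the kids
    lambda s: not (s["Cindy"] and s["Dave"]) or s["Ellen"],   # stand-in for "invite the rest of the book club"
    lambda s: not (s["Fred"] and s["George"]),                # Ellen brings Fred or George, but not both
]

def satisfiable():
    # 2 ** len(guests) possible invitation lists: this exhaustive loop is
    # exactly the exponential blow-up the article is talking about.
    for values in product([False, True], repeat=len(guests)):
        assignment = dict(zip(guests, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

print(satisfiable())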

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.

Source: MIT News Office

Researchers unravel secrets of hidden waves

Region of world’s strongest “internal waves” is analyzed in detail; work could help refine climate models.

By David Chandler


CAMBRIDGE, Mass–Detailed new field studies, laboratory experiments, and simulations of the largest known “internal waves” in the Earth’s oceans — phenomena that play a key role in mixing ocean waters, greatly affecting ocean temperatures — provide a comprehensive new view of how these colossal, invisible waves are born, spread, and die off.

The work, published today in the journal Nature, could add significantly to the improvement of global climate models, the researchers say. The paper is co-authored by 42 researchers from 25 institutions in five countries.

“What this report presents is a complete picture, a cradle-to-grave picture of these waves,” says Thomas Peacock, an associate professor of mechanical engineering at MIT, and one of the paper’s two lead authors.

Internal waves — giant waves, below the surface, that roil stratified layers of heavier, saltier water and lighter, less-salty water — are ubiquitous throughout the world’s oceans. But by far the largest and most powerful known internal waves are those that form in one area of the South China Sea, originating from the Luzon Strait between the Philippines and Taiwan.

These subsurface waves can tower more than 500 meters high, and generate powerful turbulence. Because of their size and behavior, the rise and spread of these waves are important for marine processes, including the supply of nutrients for marine organisms; the distribution of sediments and pollutants; and the propagation of sound waves. They are also a significant factor in the mixing of ocean waters, combining warmer surface waters with cold, deep waters — a process that is essential to understanding the dynamics of global climate.

This international research effort, called IWISE (Internal Waves In Straits Experiment), was a rare undertaking in this field, Peacock says; the last such field study on internal waves on this scale, the Hawaii Ocean Mixing Experiment, concluded in 2002. The new study looked at internal waves that were much stronger, and went significantly further in determining not just how the waves originated, but how their energy dissipated.

One unexpected finding, Peacock says, was the degree of turbulence produced as the waves originate, as tides and currents pass over ridges on the seafloor. “These were unexpected field discoveries,” he says, revealing “some of the most intense mixing ever observed in the deep ocean. It’s like a giant washing machine — the mixing is much more dramatic than we ever expected.”

The new observations, Peacock says, resolve a longstanding technical question about how internal waves propagate — whether the towering waves start out full strength at their point of origin, or whether they continue to build as they spread from that site. Many attempts to answer this question have produced contradictory results over the years.

This new research, which involved placing several long mooring lines from the seafloor to buoys at the surface, with instruments at intervals all along the lines, has decisively resolved that question, Peacock says: The waves grow larger as they propagate. Prior measurements, the new work found, had been drawn from too narrow a slice of the region, resulting in conflicting results — rather like the fable of blind men describing an elephant. The new, more comprehensive data has now resolved the mystery.

The new data also contradict a long-held assumption — a “commonly held belief that was almost stated as fact,” Peacock says — that solitary internal waves are completely absent from the South China Sea during the winter months. But with equipment in place to reliably measure water movement throughout the year, the team found these waves were “carrying on quite happily throughout the entire winter,” Peacock says: Previously, their presence had been masked by the winter’s stormier weather, and by the influence of a strong boundary current that runs along the coast of Taiwan — the regional equivalent of the Gulf Stream.

The improved understanding of internal waves, Peacock says, could be useful for researchers in a number of areas. The waves are key to some ecosystems, for example — some marine creatures essentially “surf” them to move in toward shore, for feeding or breeding; in the South China Sea, this process helps sustain an extensive coral reef system. The waves also help carry heat from the ocean’s surface to its depths, an important parameter in modeling climate.

The research, which was primarily a collaboration between U.S. and Taiwanese scientists, was funded by the U.S. Office of Naval Research and the Taiwan National Science Council.

Source: MIT News Office


Giant Galaxies Die from the Inside Out

VLT and Hubble observations show that star formation shuts down in the centres of elliptical galaxies first


Astronomers have shown for the first time how star formation in “dead” galaxies sputtered out billions of years ago. ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have revealed that three billion years after the Big Bang, these galaxies still made stars on their outskirts, but no longer in their interiors. The quenching of star formation seems to have started in the cores of the galaxies and then spread to the outer parts. The results will be published in the 17 April 2015 issue of the journal Science.

Star formation in what are now “dead” galaxies sputtered out billions of years ago. ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have revealed that three billion years after the Big Bang, these galaxies still made stars on their outskirts, but no longer in their interiors. The quenching of star formation seems to have started in the cores of the galaxies and then spread to the outer parts.
This diagram illustrates this process. Galaxies in the early Universe appear at the left. The blue regions are where star formation is in progress and the red regions are the “dead” regions where only older redder stars remain and there are no more young blue stars being formed. The resulting giant spheroidal galaxies in the modern Universe appear on the right.
Credit: ESO

A major astrophysical mystery has centred on how massive, quiescent elliptical galaxies, common in the modern Universe, quenched their once furious rates of star formation. Such colossal galaxies, often also called spheroids because of their shape, typically pack in stars ten times as densely in the central regions as in our home galaxy, the Milky Way, and have about ten times its mass.

Astronomers refer to these big galaxies as red and dead as they exhibit an ample abundance of ancient red stars, but lack young blue stars and show no evidence of new star formation. The estimated ages of the red stars suggest that their host galaxies ceased to make new stars about ten billion years ago. This shutdown began right at the peak of star formation in the Universe, when many galaxies were still giving birth to stars at a pace about twenty times faster than nowadays.

“Massive dead spheroids contain about half of all the stars that the Universe has produced during its entire life,” said Sandro Tacchella of ETH Zurich in Switzerland, lead author of the article. “We cannot claim to understand how the Universe evolved and became as we see it today unless we understand how these galaxies come to be.”

Tacchella and colleagues observed a total of 22 galaxies, spanning a range of masses, from an era about three billion years after the Big Bang [1]. The SINFONI instrument on ESO’s Very Large Telescope (VLT) collected light from this sample of galaxies, showing precisely where they were churning out new stars. SINFONI could make these detailed measurements of distant galaxies thanks to its adaptive optics system, which largely cancels out the blurring effects of Earth’s atmosphere.

The researchers also trained the NASA/ESA Hubble Space Telescope on the same set of galaxies, taking advantage of the telescope’s location in space above our planet’s distorting atmosphere. Hubble’s WFC3 camera snapped images in the near-infrared, revealing the spatial distribution of older stars within the actively star-forming galaxies.

“What is amazing is that SINFONI’s adaptive optics system can largely beat down atmospheric effects and gather information on where the new stars are being born, and do so with precisely the same accuracy as Hubble allows for the stellar mass distributions,” commented Marcella Carollo, also of ETH Zurich and co-author of the study.

According to the new data, the most massive galaxies in the sample kept up a steady production of new stars in their peripheries. In their bulging, densely packed centres, however, star formation had already stopped.

“The newly demonstrated inside-out nature of star formation shutdown in massive galaxies should shed light on the underlying mechanisms involved, which astronomers have long debated,” says Alvio Renzini, Padova Observatory, of the Italian National Institute of Astrophysics.

A leading theory is that star-making materials are scattered by torrents of energy released by a galaxy’s central supermassive black hole as it sloppily devours matter. Another idea is that fresh gas stops flowing into a galaxy, starving it of fuel for new stars and transforming it into a red and dead spheroid.

“There are many different theoretical suggestions for the physical mechanisms that led to the death of the massive spheroids,” said co-author Natascha Förster Schreiber, at the Max-Planck-Institut für extraterrestrische Physik in Garching, Germany. “Discovering that the quenching of star formation started from the centres and marched its way outwards is a very important step towards understanding how the Universe came to look like it does now.”

Notes
[1] The Universe’s age is about 13.8 billion years, so the galaxies studied by Tacchella and colleagues are generally seen as they were more than 10 billion years ago.

Source: ESO


First Signs of Self-interacting Dark Matter?

Dark matter may not be completely dark after all


Based on our current scientific understanding of the universe and various surveys, such as the cosmic microwave background observations by Planck and WMAP, we know that only about 4-5% of the universe is ordinary visible or baryonic matter. The remaining 95-96% is still a mystery. This huge unknown portion of the dark universe is thought to comprise dark energy (the source of the accelerating expansion of the universe) and dark matter (the unexplained extra mass of galaxies). Despite indirect signatures suggesting their presence, we are still not able to observe these phenomena directly.

For the first time dark matter may have been observed interacting with other dark matter in a way other than through the force of gravity. Observations of colliding galaxies made with ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have picked up the first intriguing hints about the nature of this mysterious component of the Universe.

This image from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 3827. The strange pale blue structures surrounding the central galaxies are gravitationally lensed views of a much more distant galaxy behind the cluster.
The distribution of dark matter in the cluster is shown with blue contour lines. The dark matter clump for the galaxy at the left is significantly displaced from the position of the galaxy itself, possibly implying that dark matter-dark matter interactions of an unknown nature are occurring.
Credit: ESO/R. Massey

Using the MUSE instrument on ESO’s VLT in Chile, along with images from Hubble in orbit, a team of astronomers studied the simultaneous collision of four galaxies in the galaxy cluster Abell 3827. The team could trace out where the mass lies within the system and compare the distribution of the dark matter with the positions of the luminous galaxies.

Although dark matter cannot be seen, the team could deduce its location using a technique called gravitational lensing. The collision happened to take place directly in front of a much more distant, unrelated source. The mass of dark matter around the colliding galaxies severely distorted spacetime, deviating the path of light rays coming from the distant background galaxy — and distorting its image into characteristic arc shapes.

Our current understanding is that all galaxies exist inside clumps of dark matter. Without the constraining effect of dark matter’s gravity, galaxies like the Milky Way would fling themselves apart as they rotate. In order to prevent this, 85 percent of the Universe’s mass [1] must exist as dark matter, and yet its true nature remains a mystery.

In this study, the researchers observed the four colliding galaxies and found that one dark matter clump appeared to be lagging behind the galaxy it surrounds. The dark matter is currently 5000 light-years (50 000 million million kilometres) behind the galaxy — it would take NASA’s Voyager spacecraft 90 million years to travel that far.
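
That travel time is easy to sanity-check, assuming Voyager 1's cruise speed of roughly 17 km/s (the speed is general background knowledge, not a figure from the release):

\[
t \approx \frac{5 \times 10^{16}\,\mathrm{km}}{17\,\mathrm{km/s}} \approx 2.9 \times 10^{15}\,\mathrm{s} \approx 9 \times 10^{7}\,\mathrm{years},
\]

which is indeed on the order of 90 million years.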

A lag between dark matter and its associated galaxy is predicted during collisions if dark matter interacts with itself, even very slightly, through forces other than gravity [2]. Dark matter has never before been observed interacting in any way other than through the force of gravity.

Lead author Richard Massey at Durham University, explains: “We used to think that dark matter just sits around, minding its own business, except for its gravitational pull. But if dark matter were being slowed down during this collision, it could be the first evidence for rich physics in the dark sector — the hidden Universe all around us.”

The researchers note that more investigation will be needed into other effects that could also produce a lag. Similar observations of more galaxies, and computer simulations of galaxy collisions will need to be made.

Team member Liliya Williams of the University of Minnesota adds: “We know that dark matter exists because of the way that it interacts gravitationally, helping to shape the Universe, but we still know embarrassingly little about what dark matter actually is. Our observation suggests that dark matter might interact with forces other than gravity, meaning we could rule out some key theories about what dark matter might be.”

This result follows on from a recent result from the team which observed 72 collisions between galaxy clusters [3] and found that dark matter interacts very little with itself. The new work however concerns the motion of individual galaxies, rather than clusters of galaxies. Researchers say that the collision between these galaxies could have lasted longer than the collisions observed in the previous study — allowing the effects of even a tiny frictional force to build up over time and create a measurable lag [4].

Taken together, the two results bracket the behaviour of dark matter for the first time. Dark matter interacts more than this, but less than that. Massey added: “We are finally homing in on dark matter from above and below — squeezing our knowledge from two directions.”

Notes
[1] Astronomers have found that the total mass/energy content of the Universe is split in the proportions 68% dark energy, 27% dark matter and 5% “normal” matter. So the 85% figure relates to the fraction of “matter” that is dark.
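
The 85 percent figure quoted in the main text follows from these proportions by simple arithmetic (the percentages are from the release; the division is mine):

\[
\frac{27\%}{27\% + 5\%} \approx 0.84,
\]

i.e. roughly 85 percent of all matter, as opposed to matter plus energy, is dark.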

[2] Computer simulations show that the extra friction from the collision would make the dark matter slow down. The nature of that interaction is unknown; it could be caused by well-known effects or some exotic unknown force. All that can be said at this point is that it is not gravity.

All four galaxies might have been separated from their dark matter. But we happen to have a very good measurement from only one galaxy, because it is by chance aligned so well with the background, gravitationally lensed object. With the other three galaxies, the lensed images are further away, so the constraints on the location of their dark matter are too loose to draw statistically significant conclusions.

[3] Galaxy clusters contain up to a thousand individual galaxies.

[4] The main uncertainty in the result is the timespan for the collision: the friction that slowed the dark matter could have been a very weak force acting over about a billion years, or a relatively stronger force acting for “only” 100 million years.

Source: ESO