Category Archives: Life Sciences


Latest news from around the world!


NASA Telescope Reveals Largest Batch of Earth-Size, Habitable-Zone Planets Around Single Star

NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.

The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.

“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”

At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.
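
As a quick check on the quoted distance, one light-year is roughly 5.88 trillion miles, so 40 light-years comes out to about 235 trillion miles. A minimal sketch of the arithmetic:

```python
# Rough conversion check: one light-year is approximately 5.879 trillion miles.
LIGHT_YEAR_MILES = 5.879e12

distance_miles = 40 * LIGHT_YEAR_MILES
print(f"{distance_miles:.3g} miles")  # ~2.35e+14, i.e. about 235 trillion miles
```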

This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory’s Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.

The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.

Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.
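
For context, a bulk density follows directly from a measured mass and radius. The sketch below runs that arithmetic with Earth's values rather than the TRAPPIST-1 measurements (those are reported in the paper), simply to show what a rocky-planet density looks like: roughly 5.5 g/cm3, versus about 1 g/cm3 for an icy body.

```python
import math

# Illustrative values for Earth, not the TRAPPIST-1 measurements themselves.
mass_kg = 5.97e24     # Earth's mass
radius_m = 6.371e6    # Earth's mean radius

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0   # kg/m^3 -> g/cm^3

print(f"bulk density ~ {density_g_cm3:.1f} g/cm^3")   # ~5.5, typical of a rocky world
```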

Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, “snowball-like” world, but further observations are needed.

“The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”

In contrast to our sun, the TRAPPIST-1 star – classified as an ultra-cool dwarf – is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun. The planets also are very close to each other. If a person were standing on one of the planets’ surfaces, they could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth’s sky.

The planets may also be tidally locked to their star, which means the same side of the planet is always facing the star, so each side is in either perpetual day or perpetual night. This could mean they have weather patterns totally unlike those on Earth, such as strong winds blowing from the day side to the night side, and extreme temperature changes.

Spitzer, an infrared telescope that trails Earth as it orbits the sun, was well-suited for studying TRAPPIST-1 because the star glows brightest in infrared light, whose wavelengths are longer than the eye can see. In the fall of 2016, Spitzer observed TRAPPIST-1 nearly continuously for 500 hours. Spitzer is uniquely positioned in its orbit to observe enough crossings – transits – of the planets in front of the host star to reveal the complex architecture of the system. Engineers optimized Spitzer’s ability to observe transiting planets during Spitzer’s “warm mission,” which began after the spacecraft’s coolant ran out as planned after the first five years of operations.
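
A transit-based size measurement rests on a simple relation: the fraction of starlight blocked is roughly (planet radius / star radius) squared, which is why Earth-size planets are far easier to measure around a small star. The sketch below assumes a stellar radius of about 0.12 solar radii for an ultra-cool dwarf like TRAPPIST-1 (an approximate literature value, not a figure from this article) and an Earth-size planet:

```python
# Transit depth ~ (planet radius / star radius)^2: the fraction of starlight blocked.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371

r_star_km = 0.12 * R_SUN_KM      # assumed radius of an ultra-cool dwarf (approximate)
r_planet_km = 1.0 * R_EARTH_KM   # an Earth-size planet

depth = (r_planet_km / r_star_km) ** 2
print(f"transit depth ~ {depth:.2%}")   # ~0.6% dip in brightness during each transit
```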

“This is the most exciting result I have seen in the 14 years of Spitzer operations,” said Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC in Pasadena, California. “Spitzer will follow up in the fall to further refine our understanding of these planets so that the James Webb Space Telescope can follow up. More observations of the system are sure to reveal more secrets.”

Following up on the Spitzer discovery, NASA’s Hubble Space Telescope has initiated the screening of four of the planets, including the three inside the habitable zone. These observations aim at assessing the presence of puffy, hydrogen-dominated atmospheres, typical for gaseous worlds like Neptune, around these planets.

In May 2016, the Hubble team observed the two innermost planets, and found no evidence for such puffy atmospheres. This strengthened the case that the planets closest to the star are rocky in nature.

“The TRAPPIST-1 system provides one of the best opportunities in the next decade to study the atmospheres around Earth-size planets,” said Nikole Lewis, co-leader of the Hubble study and astronomer at the Space Telescope Science Institute in Baltimore, Maryland. NASA’s planet-hunting Kepler space telescope also is studying the TRAPPIST-1 system, making measurements of the star’s minuscule changes in brightness due to transiting planets. Operating as the K2 mission, the spacecraft’s observations will allow astronomers to refine the properties of the known planets, as well as search for additional planets in the system. The K2 observations conclude in early March and will be made available on the public archive.

Spitzer, Hubble, and Kepler will help astronomers plan for follow-up studies using NASA’s upcoming James Webb Space Telescope, launching in 2018. With much greater sensitivity, Webb will be able to detect the chemical fingerprints of water, methane, oxygen, ozone, and other components of a planet’s atmosphere. Webb also will analyze planets’ temperatures and surface pressures – key factors in assessing their habitability.

NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate. Science operations are conducted at the Spitzer Science Center, at Caltech, in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at Caltech/IPAC. Caltech manages JPL for NASA.

For more information about Spitzer, visit:

https://www.nasa.gov/spitzer

For more information on the TRAPPIST-1 system, visit:

https://exoplanets.nasa.gov/trappist1

For more information on exoplanets, visit:

https://www.nasa.gov/exoplanets

Credits
Source: NASA Solar System
Felicia Chou / Sean Potter
Headquarters, Washington
202-358-1726 / 202-358-1536
felicia.chou@nasa.gov / sean.potter@nasa.gov

Elizabeth Landau
Jet Propulsion Laboratory, Pasadena, Calif.
818-354-6425
elizabeth.landau@jpl.nasa.gov

NASA Satellite Finds Unreported Sources of Toxic Air Pollution

Using a new satellite-based method, scientists at NASA, Environment and Climate Change Canada, and two universities have located 39 unreported and major human-made sources of toxic sulfur dioxide emissions.

A known health hazard and contributor to acid rain, sulfur dioxide (SO2) is one of six air pollutants regulated by the U.S. Environmental Protection Agency. Current sulfur dioxide monitoring activities include the use of emission inventories that are derived from ground-based measurements and factors such as fuel usage. The inventories are used to evaluate regulatory policies for air quality improvements and to anticipate future emission scenarios that may occur with economic and population growth.
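
For a sense of how such bottom-up inventories are built, an entry is typically an activity measure (for example, fuel burned) multiplied by an emission factor. The sketch below is a generic, hypothetical illustration of that bookkeeping, not the EPA's or the study's actual methodology; the factor of 2 converting sulfur to SO2 comes from the molar masses (64 g/mol for SO2 versus 32 g/mol for S).

```python
# Generic bottom-up SO2 estimate for one facility (hypothetical numbers, illustration only).
coal_burned_tonnes = 1_000_000   # annual fuel use reported by the facility
sulfur_fraction = 0.02           # assumed 2% sulfur content by mass
so2_per_s = 64.0 / 32.0          # each tonne of sulfur oxidizes to ~2 tonnes of SO2
capture_efficiency = 0.90        # assumed fraction removed by flue-gas desulfurization

so2_tonnes = coal_burned_tonnes * sulfur_fraction * so2_per_s * (1 - capture_efficiency)
print(f"estimated SO2 emissions: {so2_tonnes:,.0f} tonnes per year")   # 4,000
```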


But, to develop comprehensive and accurate inventories, industries, government agencies and scientists first must know the location of pollution sources.

“We now have an independent measurement of these emission sources that does not rely on what was known or thought known,” said Chris McLinden, an atmospheric scientist with Environment and Climate Change Canada in Toronto and lead author of the study published this week in Nature Geosciences. “When you look at a satellite picture of sulfur dioxide, you end up with it appearing as hotspots – bull’s-eyes, in effect — which makes the estimates of emissions easier.”

The 39 unreported emission sources, found in the analysis of satellite data from 2005 to 2014, are clusters of coal-burning power plants, smelters, oil and gas operations found notably in the Middle East, but also in Mexico and parts of Russia. In addition, reported emissions from known sources in these regions were — in some cases — two to three times lower than satellite-based estimates.

Altogether, the unreported and underreported sources account for about 12 percent of all human-made emissions of sulfur dioxide – a discrepancy that can have a large impact on regional air quality, said McLinden.

The research team also located 75 natural sources of sulfur dioxide — non-erupting volcanoes slowly leaking the toxic gas throughout the year. While not necessarily unknown, many volcanoes are in remote locations and not monitored, so this satellite-based data set is the first to provide regular annual information on these passive volcanic emissions.

“Quantifying the sulfur dioxide bull’s-eyes is a two-step process that would not have been possible without two innovations in working with the satellite data,” said co-author Nickolay Krotkov, an atmospheric scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

First was an improvement in the computer processing that transforms raw satellite observations from the Dutch-Finnish Ozone Monitoring Instrument aboard NASA’s Aura spacecraft into precise estimates of sulfur dioxide concentrations. Krotkov and his team now are able to more accurately detect smaller sulfur dioxide concentrations, including those emitted by human-made sources such as oil-related activities and medium-size power plants.

Being able to detect smaller concentrations led to the second innovation. McLinden and his colleagues used a new computer program to more precisely detect sulfur dioxide that had been dispersed and diluted by winds. They then used accurate estimates of wind strength and direction derived from a satellite data-driven model to trace the pollutant back to the location of the source, and also to estimate how much sulfur dioxide was emitted from the smoke stack.
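
Conceptually, once a hotspot is isolated, an emission rate can be approximated with a mass-balance argument: the excess SO2 mass observed around the source, divided by the time the wind takes to sweep it out of the analysis box. The sketch below is a deliberately simplified toy version of that idea with made-up numbers; it is not the algorithm used in the study.

```python
# Toy mass-balance estimate of a point source's SO2 emission rate (illustration only).
def emission_rate_kg_per_s(excess_columns, cell_area_m2, box_length_m, wind_speed_m_s):
    """excess_columns: grid of observed SO2 column enhancements, in kg per m^2."""
    total_excess_kg = sum(sum(row) for row in excess_columns) * cell_area_m2
    residence_time_s = box_length_m / wind_speed_m_s   # time for the wind to flush the box
    return total_excess_kg / residence_time_s

# Hypothetical scene: a 100 km box of 10 km cells with a uniform enhancement, 5 m/s wind.
grid = [[2e-5] * 10 for _ in range(10)]
rate = emission_rate_kg_per_s(grid, cell_area_m2=10_000 ** 2,
                              box_length_m=100_000, wind_speed_m_s=5.0)
print(f"~{rate * 86400 / 1000:.0f} tonnes of SO2 per day")
```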

“The unique advantage of satellite data is spatial coverage,” said Bryan Duncan, an atmospheric scientist at Goddard. “This paper is the perfect demonstration of how new and improved satellite datasets, coupled with new and improved data analysis techniques, allow us to identify even smaller pollutant sources and to quantify these emissions over the globe.”

The University of Maryland, College Park, and Dalhousie University in Halifax, Nova Scotia, contributed to this study.

For more information about, and access to, NASA’s air quality data, visit:

http://so2.gsfc.nasa.gov/

NASA uses the vantage point of space to increase our understanding of our home planet, improve lives, and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.

For more information about NASA Earth science research, visit:

http://www.nasa.gov/earth


Entering 2016 with new hope

Syed Faisal ur Rahman



The year 2015 left many good and bad memories for many of us. On one hand we saw more wars, terrorist attacks and political confrontations; on the other, we saw humanity raising its voice for peace, sheltering refugees and joining hands to confront climate change.

In science, we saw the first-ever photograph of light behaving as both a wave and a particle. We also saw serious developments in machine learning, data science and artificial intelligence, along with voices urging caution about AI overtaking humanity and about issues related to privacy. The big questions of energy and climate change remained key points of discussion in scientific and political circles. The biggest breakthrough came near the end of the year with the Paris agreement at COP21.

The deal, involving around 200 countries, represents a true spirit of humanity: a commitment to keep global warming well below 2°C above pre-industrial levels and to strive to limit it to 1.5°C. This truly global commitment also brought rival countries to the same table for a common cause, saving humanity from self-destruction. I hope the spirit will continue in other areas of common interest as well.

This spectacular view from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 1689. The huge concentration of mass bends light coming from more distant objects and can increase their total apparent brightness and make them visible. One such object, A1689-zD1, is located in the box — although it is still so faint that it is barely seen in this picture.
New observations with ALMA and ESO’s VLT have revealed that this object is a dusty galaxy seen when the Universe was just 700 million years old.
Credit:
NASA; ESA; L. Bradley (Johns Hopkins University); R. Bouwens (University of California, Santa Cruz); H. Ford (Johns Hopkins University); and G. Illingworth (University of California, Santa Cruz)

Space sciences also saw enormous advancements. New Horizons sent back photographs from Pluto, SpaceX successfully landed its reusable Falcon 9 rocket after launch, and Prof. Lajos Balazs and colleagues discovered the largest regular formation in the Universe, a ring of nine galaxies seven billion light years away and five billion light years wide, covering a third of our sky. We also learnt this year that Mars once had more water than Earth’s Arctic Ocean, and NASA later confirmed evidence that water flows on the surface of Mars. The announcement led to some interesting insights into the atmosphere and history of the Red Planet.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

We also saw encouraging advancements in neuroscience. MIT researchers developed a technique that allows direct stimulation of neurons without the need for implants or external connections, which could become an effective treatment for a variety of neurological diseases. Researchers also reactivated neuroplasticity in older mice, restoring their brains to a younger state, and good progress was made in combating Alzheimer’s disease.

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.
Credit: EPFL

Quantum physics again remained a key area of scientific advancement. Quantum computing is getting closer to becoming a viable alternative to current computing architectures; the packing of single-photon detectors onto an optical chip is a crucial step toward quantum-computational circuits. Researchers at the Australian National University (ANU) also performed an experiment showing that, at the quantum scale, reality does not exist until it is measured.

There are many other areas where science and technology reached new heights and will hopefully continue to do so in 2016. I hope these advancements will not only help us grow economically but also help us become better human beings and build a better society.

Persian Gulf could experience deadly heat: MIT Study

Detailed climate simulation shows a threshold of survivability could be crossed without mitigation measures.

By David Chandler



CAMBRIDGE, Mass.–Within this century, parts of the Persian Gulf region could be hit with unprecedented events of deadly heat as a result of climate change, according to a study of high-resolution climate models.

The research reveals details of a business-as-usual scenario for greenhouse gas emissions, but also shows that curbing emissions could forestall these deadly temperature extremes.

The study, published today in the journal Nature Climate Change, was carried out by Elfatih Eltahir, a professor of civil and environmental engineering at MIT, and Jeremy Pal PhD ’01 at Loyola Marymount University. They conclude that conditions in the Persian Gulf region, including its shallow water and intense sun, make it “a specific regional hotspot where climate change, in absence of significant mitigation, is likely to severely impact human habitability in the future.”

Running high-resolution versions of standard climate models, Eltahir and Pal found that many major cities in the region could exceed a tipping point for human survival, even in shaded and well-ventilated spaces. Eltahir says this threshold “has, as far as we know … never been reported for any location on Earth.”

That tipping point involves a measurement called the “wet-bulb temperature” that combines temperature and humidity, reflecting conditions the human body could maintain without artificial cooling. That threshold for survival for more than six unprotected hours is 35 degrees Celsius, or about 95 degrees Fahrenheit, according to recently published research. (The equivalent number in the National Weather Service’s more commonly used “heat index” would be about 165 F.)
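
The quoted numbers convert directly (°F = °C × 9/5 + 32, so 35 °C is 95 °F), and the hazard criterion is about sustained exposure rather than a momentary peak. A minimal sketch of that check, using hypothetical hourly readings:

```python
def to_fahrenheit(celsius: float) -> float:
    return celsius * 9.0 / 5.0 + 32.0

print(to_fahrenheit(35.0))   # 95.0, the survivability threshold quoted above

def exceeds_survival_threshold(hourly_wet_bulb_c, threshold_c=35.0, hours=6):
    """True if wet-bulb temperature holds at or above the threshold for `hours` consecutive readings."""
    run = 0
    for reading in hourly_wet_bulb_c:
        run = run + 1 if reading >= threshold_c else 0
        if run >= hours:
            return True
    return False

# Hypothetical day: a brief spike near 34.6 C does not satisfy the sustained six-hour criterion.
print(exceeds_survival_threshold([33.0, 34.6, 34.2, 33.5, 33.0, 32.8, 32.0]))   # False
```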

This limit was almost reached this summer, at the end of an extreme, weeklong heat wave in the region: On July 31, the wet-bulb temperature in Bandar Mahshahr, Iran, hit 34.6 C — just a fraction below the threshold, for an hour or less.

But the severe danger to human health and life occurs when such temperatures are sustained for several hours, Eltahir says — which the models show would occur several times in a 30-year period toward the end of the century under the business-as-usual scenario used as a benchmark by the Intergovernmental Panel on Climate Change.

The Persian Gulf region is especially vulnerable, the researchers say, because of a combination of low elevations, clear skies, a water body that increases heat absorption, and the shallowness of the Persian Gulf itself, which produces high water temperatures that lead to strong evaporation and very high humidity.

The models show that by the latter part of this century, major cities such as Doha, Qatar, Abu Dhabi, and Dubai in the United Arab Emirates, and Bandar Abbas, Iran, could exceed the 35 C threshold several times over a 30-year period. What’s more, Eltahir says, hot summer conditions that now occur once every 20 days or so “will characterize the usual summer day in the future.”

While the other side of the Arabian Peninsula, adjacent to the Red Sea, would see less extreme heat, the projections show that dangerous extremes are also likely there, reaching wet-bulb temperatures of 32 to 34 C. This could be a particular concern, the authors note, because the annual Hajj, or annual Islamic pilgrimage to Mecca — when as many as 2 million pilgrims take part in rituals that include standing outdoors for a full day of prayer — sometimes occurs during these hot months.

While many in the Persian Gulf’s wealthier states might be able to adapt to new climate extremes, poorer areas, such as Yemen, might be less able to cope with such extremes, the authors say.

The research was supported by the Kuwait Foundation for the Advancement of Science.

Source: MIT News Office

Climate change requires new conservation models, Stanford scientists say

In a world transformed by climate change and human activity, Stanford scientists say that conserving biodiversity and protecting species will require an interdisciplinary combination of ecological and social research methods.

By Ker Than

A threatened tree species in Alaska could serve as a model for integrating ecological and social research methods in efforts to safeguard species that are vulnerable to climate change effects and human activity.

In a new Stanford-led study, published online this week in the journal Biological Conservation, scientists assessed the health of yellow cedar, a culturally and commercially valuable tree throughout coastal Alaska that is experiencing climate change-induced dieback.

In an era when climate change touches every part of the globe, the traditional conservation approach of setting aside lands to protect biodiversity is no longer sufficient to protect species, said the study’s first author, Lauren Oakes, a research associate at Stanford University.

“A lot of that kind of conservation planning was intended to preserve historic conditions, which, for example, might be defined by the population of a species 50 years ago or specific ecological characteristics when a park was established,” said Oakes, who is a recent PhD graduate of the Emmett Interdisciplinary Program in Environment and Resources (E-IPER) at Stanford’s School of Earth, Energy, & Environmental Sciences.

But as the effects of climate change become increasingly apparent around the world, resource managers are beginning to recognize that “adaptive management” strategies are needed that account for how climate change affects species now and in the future.

Similarly, because climate change effects will vary across regions, new management interventions must consider not only local laws, policies and regulations, but also local peoples’ knowledge about climate change impacts and their perceptions about new management strategies. For yellow cedar, new strategies could include assisting migration of the species to places where it may be more likely to survive or increasing protection of the tree from direct uses, such as harvesting.

Gathering these perspectives requires an interdisciplinary social-ecological approach, said study leader Eric Lambin, the George and Setsuko Ishiyama Provostial Professor in the School of Earth, Energy, & Environmental Sciences.

“The impact of climate change on ecosystems is not just a biophysical issue. Various actors depend on these ecosystems and on the services they provide for their livelihoods,” said Lambin, who is also  a senior fellow at the Stanford Woods Institute for the Environment.

“Moreover, as the geographic distribution of species is shifting due to climate change, new areas that are currently under human use will need to be managed for biodiversity conservation. Any feasible management solution needs to integrate the ecological and social dimensions of this challenge.”

Gauging yellow cedar health

The scientists used aerial surveys to map the distribution of yellow cedar in Alaska’s Glacier Bay National Park and Preserve (GLBA) and collected data about the trees’ health and environmental conditions from 18 randomly selected plots inside the park and just south of the park on designated wilderness lands.

“Some of the plots were really challenging to access,” Oakes said. “We would get dropped off by boat for 10 to 15 days at a time, travel by kayak on the outer coast, and hike each day through thick forests to reach the sites. We’d wake up at 6 a.m. and it wouldn’t be until 11 a.m. that we reached the sites and actually started the day’s work of measuring trees.”

The field surveys revealed that yellow cedars inside of GLBA were relatively healthy and unstressed compared to trees outside the park, to the south. Results also showed reduced crowns and browned foliage in yellow cedar trees at sites outside the park, indicating early signs of the dieback progressing toward the park.

Additionally, modeling by study co-authors Paul Hennon, David D’Amore, and Dustin Wittwer at the USDA Forest Service suggested the dieback is expected to emerge inside GLBA in the future. As the region warms, reductions in snow cover, which helps insulate the tree’s shallow roots, leave the roots vulnerable to sudden springtime cold events.

Merging disciplines

In addition to collecting data about the trees themselves with a team of research assistants, Oakes conducted interviews with 45 local residents and land managers to understand their perceptions about climate change-induced yellow cedar dieback; whether or not they thought humans should intervene to protect the species in GLBA; and what forms those interventions should take.

One unexpected and interesting pattern that emerged from the interviews is that those participants who perceived protected areas as “separate” from nature commonly expressed strong opposition to intervention inside protected areas, like GLBA. In contrast, those who thought of humans as being “a part of” protected areas viewed intervention more favorably.

“Native Alaskans told me stories of going to yellow cedar trees to walk with their ancestors,” Oakes said. “There were other interview participants who said they’d go to a yellow cedar tree every day just to be in the presence of one.”

These people tended to support new kinds of interventions because they believed humans were inherently part of the system and they derived many intangible values, like spiritual or recreational values, from the trees. In contrast, those who perceived protected areas as “natural” and separate from humans were more likely to oppose new interventions in the protected areas.

Lambin said he was not surprised to see this pattern for individuals because people’s choices are informed by their values. “It was less expected for land managers who occupy an official role,” he added. “We often think about an organization and its missions, but forget that day-to-day decisions are made by people who carry their own value systems and perceptions of risks.”

The insights provided by combining ecological and social techniques could inform decisions about when, where, and how to adapt conservation practices in a changing climate, said study co-author Nicole Ardoin, an assistant professor at Stanford’s Graduate School of Education and a center fellow at the Woods Institute.

“Some initial steps in southeast Alaska might include improving tree monitoring in protected areas and increasing collaboration among the agencies that oversee managed and protected lands, as well as working with local community members to better understand how they value these species,” Ardoin said.

The team members said they believe their interdisciplinary approach is applicable to other climate-sensitive ecosystems and species, ranging from redwood forests in California to wild herbivore species in African savannas, and especially those that are currently surrounded by human activities.

“In a human-dominated planet, such studies will have to become the norm,” Lambin said. “Humans are part of these land systems that are rapidly transforming.”

This study was done in partnership with the U.S. Forest Service Pacific Northwest Research Station. It was funded with support from the George W. Wright Climate Change Fellowship; the Morrison Institute for Population and Resource Studies and the School of Earth, Energy & Environmental Sciences at Stanford University; the Wilderness Society Gloria Barron Fellowship; the National Forest Foundation; and U.S. Forest Service Pacific Northwest Research Station and Forest Health Protection.

For more Stanford experts on climate change and other topics, visit Stanford Experts.

Source: Stanford News


Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler



CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
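
To get intuition for why purely random hopping is wasteful, the toy model below simulates a classical exciton taking unbiased steps between evenly spaced chromophores and dissipating with a fixed probability per hop, then reports the average net distance covered before the energy is lost. This is only an illustrative classical baseline with assumed parameters; the wave-like, multiple-pathway transport described above is precisely what it leaves out.

```python
import random

def mean_distance_before_dissipation(spacing_nm=3.0, p_dissipate=0.05, trials=10_000):
    """Classical baseline: unbiased hops between neighboring chromophores until the exciton is lost."""
    total_nm = 0.0
    for _ in range(trials):
        position = 0
        while random.random() > p_dissipate:      # exciton survives to make another hop
            position += random.choice((-1, 1))    # hop to a random neighboring chromophore
        total_nm += abs(position) * spacing_nm    # net displacement when dissipation occurs
    return total_nm / trials

print(f"~{mean_distance_before_dissipation():.1f} nm covered on average (toy parameters)")
```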

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office


Real Martians: How to Protect Astronauts from Space Radiation on Mars

On Aug. 7, 1972, in the heart of the Apollo era, an enormous solar flare exploded from the sun’s atmosphere. Along with a gigantic burst of light in nearly all wavelengths, this event accelerated a wave of energetic particles. Mostly protons, with a few electrons and heavier elements mixed in, this wash of quick-moving particles would have been dangerous to anyone outside Earth’s protective magnetic bubble. Luckily, the Apollo 16 crew had returned to Earth just five months earlier, narrowly escaping this powerful event.

In the early days of human space flight, scientists were only just beginning to understand how events on the sun could affect space, and in turn how that radiation could affect humans and technology. Today, as a result of extensive space radiation research, we have a much better understanding of our space environment, its effects, and the best ways to protect astronauts—all crucial parts of NASA’s mission to send humans to Mars.

“The Martian” film highlights the radiation dangers that could occur on a round trip to Mars. While the mission in the film is fictional, NASA has already started working on the technology to enable an actual trip to Mars in the 2030s. In the film, the astronauts’ habitat on Mars shields them from radiation, and indeed, radiation shielding will be a crucial technology for the voyage. From better shielding to advanced biomedical countermeasures, NASA currently studies how to protect astronauts and electronics from radiation – efforts that will have to be incorporated into every aspect of Mars mission planning, from spacecraft and habitat design to spacewalk protocols.

This artist’s impression shows how Mars may have looked about four billion years ago. The young planet Mars would have had enough water to cover its entire surface in a liquid layer about 140 metres deep, but it is more likely that the liquid would have pooled to form an ocean occupying almost half of Mars’s northern hemisphere, and in some regions reaching depths greater than 1.6 kilometres.
Credit:
ESO/M. Kornmesser

“The space radiation environment will be a critical consideration for everything in the astronauts’ daily lives, both on the journeys between Earth and Mars and on the surface,” said Ruthan Lewis, an architect and engineer with the human spaceflight program at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “You’re constantly being bombarded by some amount of radiation.”

Radiation, at its most basic, is simply waves or sub-atomic particles that transport energy to another entity – whether it is an astronaut or spacecraft component. The main concern in space is particle radiation. Energetic particles can be dangerous to humans because they pass right through the skin, depositing energy and damaging cells or DNA along the way. This damage can mean an increased risk for cancer later in life or, at its worst, acute radiation sickness during the mission if the dose of energetic particles is large enough.

Fortunately for us, Earth’s natural protections block all but the most energetic of these particles from reaching the surface. A huge magnetic bubble, called the magnetosphere, which deflects the vast majority of these particles, protects our planet. And our atmosphere subsequently absorbs the majority of particles that do make it through this bubble. Importantly, since the International Space Station (ISS) is in low-Earth orbit within the magnetosphere, it also provides a large measure of protection for our astronauts.

“We have instruments that measure the radiation environment inside the ISS, where the crew are, and even outside the station,” said Kerry Lee, a scientist at NASA’s Johnson Space Center in Houston.

This ISS crew monitoring also includes tracking of the short-term and lifetime radiation doses for each astronaut to assess the risk for radiation-related diseases. Although NASA’s conservative radiation limits are greater than those allowed for radiation workers on Earth, the astronauts are able to stay well under NASA’s limit while living and working on the ISS, within Earth’s magnetosphere.

But a journey to Mars requires astronauts to move out much further, beyond the protection of Earth’s magnetic bubble.

“There’s a lot of good science to be done on Mars, but a trip to interplanetary space carries more radiation risk than working in low-Earth orbit,” said Jonathan Pellish, a space radiation engineer at Goddard.

A human mission to Mars means sending astronauts into interplanetary space for a minimum of a year, even with a very short stay on the Red Planet. Nearly all of that time, they will be outside the magnetosphere, exposed to the harsh radiation environment of space. Mars has no global magnetic field to deflect energetic particles, and its atmosphere is much thinner than Earth’s, so they’ll get only minimal protection even on the surface of Mars.

Throughout the entire trip, astronauts must be protected from two sources of radiation. The first comes from the sun, which regularly releases a steady stream of solar particles, as well as occasional larger bursts in the wake of giant explosions, such as solar flares and coronal mass ejections, on the sun. These energetic particles are almost all protons, and, though the sun releases an unfathomably large number of them, the proton energy is low enough that they can almost all be physically shielded by the structure of the spacecraft.

Since solar activity strongly contributes to the deep-space radiation environment, a better understanding of the sun’s modulation of this radiation environment will allow mission planners to make better decisions for a future Mars mission. NASA currently operates a fleet of spacecraft studying the sun and the space environment throughout the solar system. Observations from this area of research, known as heliophysics, help us better understand the origin of solar eruptions and what effects these events have on the overall space radiation environment.

“If we know precisely what’s going on, we don’t have to be as conservative with our estimates, which gives us more flexibility when planning the mission,” said Pellish.

The second source of energetic particles is harder to shield. These particles come from galactic cosmic rays, often known as GCRs. They’re particles accelerated to near the speed of light that shoot into our solar system from other stars in the Milky Way or even other galaxies. Like solar particles, galactic cosmic rays are mostly protons. However, some of them are heavier elements, ranging from helium up to the heaviest elements. These more energetic particles can knock apart atoms in the material they strike, such as in the astronaut, the metal walls of a spacecraft, habitat, or vehicle, causing sub-atomic particles to shower into the structure. This secondary radiation, as it is known, can reach a dangerous level.

There are two ways to shield from these higher-energy particles and their secondary radiation: use a lot more mass of traditional spacecraft materials, or use more efficient shielding materials.

The sheer volume of material surrounding a structure would absorb the energetic particles and their associated secondary particle radiation before they could reach the astronauts. However, using sheer bulk to protect astronauts would be prohibitively expensive, since more mass means more fuel required to launch.

Using materials that shield more efficiently would cut down on weight and cost, but finding the right material takes research and ingenuity. NASA is currently investigating a handful of possibilities that could be used in anything from the spacecraft to the Martian habitat to space suits.

“The best way to stop particle radiation is by running that energetic particle into something that’s a similar size,” said Pellish. “Otherwise, it can be like you’re bouncing a tricycle off a tractor-trailer.”

Because protons and neutrons are similar in size, one element blocks both extremely well—hydrogen, which most commonly exists as just a single proton and an electron. Conveniently, hydrogen is the most abundant element in the universe, and makes up substantial parts of some common compounds, such as water and plastics like polyethylene. Engineers could take advantage of already-required mass by processing the astronauts’ trash into plastic-filled tiles used to bolster radiation protection. Water, already required for the crew, could be stored strategically to create a kind of radiation storm shelter in the spacecraft or habitat. However, this strategy comes with some challenges—the crew would need to use the water and then replace it with recycled water from the advanced life support systems.

Polyethylene, the same plastic commonly found in water bottles and grocery bags, also has potential as a candidate for radiation shielding. It is very high in hydrogen and fairly cheap to produce—however, it’s not strong enough to build a large structure, especially a spacecraft, which goes through high heat and strong forces during launch. And adding polyethylene to a metal structure would add quite a bit of mass, meaning that more fuel would be required for launch.

“We’ve made progress on reducing and shielding against these energetic particles, but we’re still working on finding a material that is a good shield and can act as the primary structure of the spacecraft,” said Sheila Thibeault, a materials researcher at NASA’s Langley Research Center in Hampton, Virginia.

One material in development at NASA has the potential to do both jobs: Hydrogenated boron nitride nanotubes—known as hydrogenated BNNTs—are tiny nanotubes made of carbon, boron, and nitrogen, with hydrogen interspersed throughout the empty spaces left in between the tubes. Boron is also an excellent absorber of secondary neutrons, making hydrogenated BNNTs an ideal shielding material.

“This material is really strong—even at high heat—meaning that it’s great for structure,” said Thibeault.

Remarkably, researchers have successfully made yarn out of BNNTs, so it’s flexible enough to be woven into the fabric of space suits, providing astronauts with significant radiation protection even while they’re performing spacewalks in transit or out on the harsh Martian surface. Though hydrogenated BNNTs are still in development and testing, they have the potential to be one of our key structural and shielding materials in spacecraft, habitats, vehicles, and space suits that will be used on Mars.

Physical shields aren’t the only option for stopping particle radiation from reaching astronauts: Scientists are also exploring the possibility of building force fields. Force fields aren’t just the realm of science fiction: Just like Earth’s magnetic field protects us from energetic particles, a relatively small, localized electric or magnetic field would—if strong enough and in the right configuration—create a protective bubble around a spacecraft or habitat. Currently, these fields would take a prohibitive amount of power and structural material to create on a large scale, so more work is needed for them to be feasible.

The risk of health effects can also be reduced in operational ways, such as having a special area of the spacecraft or Mars habitat that could be a radiation storm shelter; preparing spacewalk and research protocols to minimize time outside the more heavily-shielded spacecraft or habitat; and ensuring that astronauts can quickly return indoors in the event of a radiation storm.

Radiation risk mitigation can also be approached from the human body level. Though far off, a medication that would counteract some or all of the health effects of radiation exposure would make it much easier to plan for a safe journey to Mars and back.

“Ultimately, the solution to radiation will have to be a combination of things,” said Pellish. “Some of the solutions are technology we have already, like hydrogen-rich materials, but some of it will necessarily be cutting edge concepts that we haven’t even thought of yet.”

Penn Vet-Temple team characterizes genetic mutations linked to a form of blindness

Achromatopsia is a rare, inherited vision disorder that affects the eye’s cone cells, resulting in problems with daytime vision, clarity and color perception. It often strikes people early in life, and currently there is no cure for the condition.

One of the most promising avenues for developing a cure, however, is through gene therapy, and to create those therapies requires animal models of disease that closely replicate the human condition.

In a new study, a collaboration between University of Pennsylvania and Temple University scientists has identified two naturally occurring genetic mutations in dogs that result in achromatopsia. Having identified the mutations responsible, they used structural modeling and molecular dynamics on the Titan supercomputer at Oak Ridge National Laboratory and the Stampede supercomputer at the Texas Advanced Computing Center to simulate how the mutations would impact the resulting protein, showing that the mutations destabilized a molecular channel essential to light signal transduction.

The findings provide new insights into the molecular cause of this form of blindness and also present new opportunities for conducting preclinical assessments of curative gene therapy for achromatopsia in both dogs and humans.

“Our work in the dogs, in vitro and in silico shows us the consequences of these mutations in disrupting the function of these crucial channels,” said Karina Guziewicz, senior author on the study and a senior research investigator at Penn’s School of Veterinary Medicine. “Everything we found suggests that gene therapy will be the best approach to treating this disease, and we are looking forward to taking that next step.”

The study was published in the journal PLOS ONE and coauthored by Penn Vet’s Emily V. Dutrow and Temple’s Naoto Tanaka. Additional coauthors from Penn Vet included Gustavo D. Aguirre, Keiko Miyadera, Shelby L. Reinstein, William R. Crumley and Margret L. Casal. Temple’s team, all from the College of Science and Technology, included Lucie Delemotte, Christopher M. MacDermaid, Michael L. Klein and Jacqueline C. Tanaka. Christopher J. Dixon of Veterinary Vision in the United Kingdom also contributed.

The research began with a German shepherd that was brought to Penn Vet’s Ryan Hospital. The owners were worried about its vision.

“This dog displayed a classical loss of cone vision; it could not see well in daylight but had no problem in dim light conditions,” said Aguirre, professor of medical genetics and ophthalmology at Penn Vet.

The Penn Vet researchers wanted to identify the genetic cause, but the dog had none of the “usual suspects,” the known gene mutations responsible for achromatopsia in dogs. To find the new mutation, the scientists looked at five key genes that play a role in phototransduction, or the process by which light signals are transmitted through the eye to the brain.

They found what they were looking for on the CNGA3 gene, which encodes a cyclic nucleotide channel and plays a key role in transducing visual signals. The change was a “missense” mutation, meaning that the mutation results in the production of a different amino acid. Meanwhile, they heard from colleague Dixon that he had examined Labrador retrievers with similar symptoms. When the Penn team performed the same genetic analysis, they found a different mutation on the same part of the same gene where the shepherd’s mutation was found. Neither mutation had ever been characterized previously in dogs.
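
A missense change is one in which a single altered DNA base swaps the encoded amino acid. The sketch below illustrates that classification with a small slice of the standard codon table and made-up codons; the actual CNGA3 variants are specified in the paper, not here.

```python
# A small slice of the standard DNA codon table (codon -> amino acid).
CODON_TABLE = {
    "CGT": "Arg", "TGT": "Cys", "GAT": "Asp",
    "GAC": "Asp", "GGT": "Gly", "AAA": "Lys",
}

def classify_variant(ref_codon: str, alt_codon: str) -> str:
    """Label a single-codon change as silent (same amino acid) or missense (different amino acid)."""
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    return "silent" if ref_aa == alt_aa else f"missense ({ref_aa} -> {alt_aa})"

# Hypothetical example codons, not the actual CNGA3 mutations:
print(classify_variant("CGT", "TGT"))   # missense (Arg -> Cys)
print(classify_variant("GAT", "GAC"))   # silent
```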

“The next step was to take this further and look at the consequences of these particular mutations,” Guziewicz said.

The group had the advantage of using the Titan and Stampede supercomputers, which can simulate models of the atomic structure of proteins and thereby elucidate how the protein might function. That work revealed that both mutations disrupted the function of the channel, making it unstable.

“The computational approach allows us to model, right down to the atomic level, how small changes in protein sequence can have a major impact on signaling,” said MacDermaid, assistant professor of research at Temple’s Institute for Computational Molecular Science. “We can then use these insights to help us understand and refine our experimental and clinical work.”

The Temple researchers recreated these mutated channels and showed that one resulted in a loss of channel function. Further in vitro experiments showed that the second mutation caused the channels to be routed improperly within the cell.

Penn Vet researchers have had success in treating various forms of blindness in dogs with gene therapy, setting the stage to treat human blindness. In human achromatopsia, nearly 100 different mutations have been identified in the CNGA3 gene, including the very same one identified in the German shepherd in this study.

The results, therefore, lay the groundwork for designing gene therapy constructs that can target this form of blindness with the same approach.


The study was supported by the Foundation Fighting Blindness, the National Eye Institute, the National Science Foundation, the European Union Seventh Framework Program, Hope for Vision, the Macula Vision Research Foundation and the Van Sloun Fund for Canine Genetic Research.

Source: University of Pennsylvania 

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
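
For reference, a compact version of the Wagner-Fischer grid described above (a standard textbook implementation, not code from the MIT paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer dynamic programming: O(len(a) * len(b)) time and space."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                 # delete everything in a[:i]
    for j in range(n + 1):
        dp[0][j] = j                 # insert everything in b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,                  # deletion
                           dp[i][j - 1] + 1,                  # insertion
                           dp[i - 1][j - 1] + substitution)   # substitution (or free match)
    return dp[m][n]

print(edit_distance("kitten", "sitting"))   # 3
```

Filling every cell of the (m+1)-by-(n+1) grid is what makes the running time quadratic, the behavior discussed next.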

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
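
Phrased in code, the dinner-party puzzle is a tiny satisfiability instance, and the obvious way to answer it is to try every subset of guests, which is exactly the exponential blow-up (2^n candidate invitation lists) that makes the general problem hard. The sketch below encodes one reading of the constraints above, with the book club simplified to a single extra guest; it is an illustration, not the construction from the paper.

```python
from itertools import product

GUESTS = ["Alice", "Bob", "Cindy", "Dave", "Ellen", "Fred", "George"]

def satisfies(invited: set) -> bool:
    if "Alice" in invited and "Bob" in invited:       # one of them must stay home with the kids
        return False
    if {"Cindy", "Dave"} <= invited and "Ellen" not in invited:
        return False                                  # book club (simplified here to Ellen) must come too
    if "Ellen" in invited and ("Fred" in invited) == ("George" in invited):
        return False                                  # Ellen brings Fred or George, but not both
    return True

# Exhaustive search over all 2^7 = 128 possible invitation lists.
valid = [bits for bits in product((0, 1), repeat=len(GUESTS))
         if satisfies({g for g, on in zip(GUESTS, bits) if on})]
print(f"{len(valid)} of {2 ** len(GUESTS)} invitation lists satisfy every constraint")
```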

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.

Source: MIT News Office