Tag Archives: energy

Researchers devise efficient power converter for internet of things


By Larry Hardesty


 

CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technology Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase in current levels.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.
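To make the control scheme concrete, here is a minimal Python sketch of a packet-based regulation loop of the kind described above, with a measurement clock that speeds up or slows down with the load and a divider that is notionally powered only during each sample. The component values, the charge delivered per packet, and the clock-adaptation rule are invented for illustration; they are not taken from the MIT chip.

# Illustrative sketch of packet-based regulation with an adaptive measurement
# clock; all values and the adaptation rule are hypothetical, not the MIT design.

V_REF = 0.9               # regulation threshold (V)
C_OUT = 100e-6            # output capacitor (F)
Q_PKT = 2e-6              # charge delivered per energy packet (C)
F_MIN, F_MAX = 1.0, 1e6   # measurement-clock limits (Hz)

def regulate(v_out, f_clk, i_load, cycles):
    for _ in range(cycles):
        dt = 1.0 / f_clk
        # The load discharges the output capacitor between measurements.
        v_out = max(v_out - i_load * dt / C_OUT, 0.0)
        # The voltage divider is powered only for this brief measurement.
        if v_out < V_REF:
            v_out += Q_PKT / C_OUT          # release one packet of energy
            f_clk = min(f_clk * 2, F_MAX)   # busy load: measure more often
        else:
            f_clk = max(f_clk / 2, F_MIN)   # idle load: measure rarely
    return v_out, f_clk

# Idle sensing (nanoamp load) keeps the clock near its floor; a radio burst
# (milliamp load) drives it toward megahertz rates, then it relaxes again.
v, f = regulate(0.85, 1.0, 1e-9, 1000)
v, f = regulate(v, f, 1e-3, 1000)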

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Company, through its University Shuttle Program.

Source: MIT News Office


New device could provide electrical power source from walking and other ambient motions: MIT Research

Harnessing the energy of small bending motions
New device could provide electrical power source from walking and other ambient motions.

By David Chandler


 

CAMBRIDGE, Mass.–For many applications such as biomedical, mechanical, or environmental monitoring devices, harnessing the energy of small motions could provide a small but virtually unlimited power supply. While a number of approaches have been attempted, researchers at MIT have now developed a completely new method based on electrochemical principles, which could be capable of harvesting energy from a broader range of natural motions and activities, including walking.

The new system, based on the slight bending of a sandwich of metal and polymer sheets, is described in the journal Nature Communications, in a paper by MIT professor Ju Li, graduate students Sangtae Kim and Soon Ju Choi, and four others.

Most previously designed devices for harnessing small motions have been based on the triboelectric effect (essentially friction, like rubbing a balloon against a wool sweater) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These work well for high-frequency sources of motion such as those produced by the vibrations of machinery. But for typical human-scale motions such as walking or exercising, such systems have limits.

“When you put in an impulse” to such traditional materials, “they respond very well, in microseconds. But this doesn’t match the timescale of most human activities,” says Li, who is the Battelle Energy Alliance Professor in Nuclear Science and Engineering and professor of materials science and engineering. “Also, these devices have high electrical impedance and bending rigidity and can be quite expensive,” he says.

Simple and flexible

By contrast, the new system uses technology similar to that in lithium ion batteries, so it could likely be produced inexpensively at large scale, Li says. In addition, these devices would be inherently flexible, making them more compatible with wearable technology and less likely to break under mechanical stress.

While piezoelectric materials are based on a purely physical process, the new system is electrochemical, like a battery or a fuel cell. It uses two thin sheets of lithium alloys as electrodes, separated by a layer of porous polymer soaked with liquid electrolyte that is efficient at transporting lithium ions between the metal plates. But unlike a rechargeable battery, which takes in electricity, stores it, and then releases it, this system takes in mechanical energy and puts out electricity.

When bent even a slight amount, the layered composite produces a pressure difference that squeezes lithium ions through the polymer (like the reverse osmosis process used in water desalination). It also produces a counteracting voltage and an electrical current in the external circuit between the two electrodes, which can then be used directly to power other devices.

Because it requires only a small amount of bending to produce a voltage, such a device could simply have a tiny weight attached to one end to cause the metal to bend as a result of ordinary movements, when strapped to an arm or leg during everyday activities. Unlike batteries and solar cells, the output from the new system comes in the form of alternating current (AC), with the flow moving first in one direction and then the other as the material bends first one way and then back.
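As a crude caricature of that alternating output (and not the paper’s electrochemical model), one can imagine the ion current simply tracking the rate of bending, so that slow, walking-like flexing yields a current whose sign flips each time the motion reverses. The coupling constant and flexing frequency below are made up for illustration.

# Toy illustration of AC output from periodic bending; the linear coupling
# between bending rate and current is an assumption, not the device physics.
import math

K = 1.0        # hypothetical coupling constant (arbitrary units)
F_BEND = 0.5   # flexing frequency in Hz, roughly a walking cadence

for i in range(9):
    t = i * 0.25
    bend = math.sin(2 * math.pi * F_BEND * t)
    current = K * 2 * math.pi * F_BEND * math.cos(2 * math.pi * F_BEND * t)
    print(f"t = {t:.2f} s   bend = {bend:+.2f}   current = {current:+.2f}")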

This device converts mechanical to electrical energy; therefore, “it is not limited by the second law of thermodynamics,” Li says, which sets an upper limit on the theoretically possible efficiency. “So in principle, [the efficiency] could be 100 percent,” he says. In this first-generation device developed to demonstrate the electrochemomechanical working principle, he says, “the best we can hope for is about 15 percent” efficiency. But the system could easily be manufactured in any desired size and is amenable to industrial manufacturing processes.

Test of time

The test devices maintain their properties through many cycles of bending and unbending, Li reports, with little reduction in performance after 1,500 cycles. “It’s a very stable system,” he says.

Previously, the phenomenon underlying the new device “was considered a parasitic effect in the battery community,” according to Li, and voltage put into the battery could sometimes induce bending. “We do just the opposite,” Li says, putting in the stress and getting a voltage as output. Besides being a potential energy source, he says, this could also be a complementary diagnostic tool in electrochemistry. “It’s a good way to evaluate damage mechanisms in batteries, a way to understand battery materials better,” he says.

In addition to harnessing daily motion to power wearable devices, the new system might also be useful as an actuator with biomedical applications, or used for embedded stress sensors in settings such as roads, bridges, keyboards, or other structures, the researchers suggest.

The team also included postdoc Kejie Zhao (now assistant professor at Purdue University) and visiting graduate student Giorgia Gobbi, and Hui Yang and Sulin Zhang at Penn State. The work was supported by the National Science Foundation, the MIT MADMEC Contest, the Samsung Scholarship Foundation, and the Kwanjeong Educational Foundation.

Source: MIT News Office

Stanford study finds promise in expanding renewables based on results in three major economies

A new Stanford study found that renewable energy can make a major and increasingly cost-effective contribution to alleviating climate change.

BY TERRY NAGEL


Stanford energy experts have released a study that compares the experiences of three large economies in ramping up renewable energy deployment and concludes that renewables can make a major and increasingly cost-effective contribution to climate change mitigation.

The report from Stanford’s Steyer-Taylor Center for Energy Policy and Finance analyzes the experiences of Germany, California and Texas, the world’s fourth, eighth and 12th largest economies, respectively. It found, among other things, that Germany, which gets about half as much sunshine as California and Texas, nevertheless generates electricity from solar installations at a cost comparable to that of Texas and only slightly higher than in California.

The report was released in time for the United Nations Climate Change Conference that started this week, where international leaders are gathering to discuss strategies to deal with global warming, including massive scale-ups of renewable energy.

“As policymakers from around the world gather for the climate negotiations in Paris, our report draws on the experiences of three leaders in renewable-energy deployment to shed light on some of the most prominent and controversial themes in the global renewables debate,” said Dan Reicher, executive director of the Steyer-Taylor Center, which is a joint center between Stanford Law School and Stanford Graduate School of Business. Reicher also is interim president and chief executive officer of the American Council on Renewable Energy.

“Our findings suggest that renewable energy has entered the mainstream and is ready to play a leading role in mitigating global climate change,” said Felix Mormann, associate professor of law at the University of Miami, faculty fellow at the Steyer-Taylor Center and lead author of the report.

Other conclusions of the report, “A Tale of Three Markets: Comparing the Solar and Wind Deployment Experiences of California, Texas, and Germany,” include:

  • Germany’s success in deploying renewable energy at scale is due largely to favorable treatment of “soft cost” factors such as financing, permitting, installation and grid access. This approach has allowed the renewable energy policies of some countries to deliver up to four times the average deployment of other countries, despite offering only half the financial incentives.
  • Contrary to widespread concern, a higher share of renewables does not automatically translate to higher electricity bills for ratepayers. While Germany’s residential electric rates are two to three times those of California and Texas, this price differential is only partly due to Germany’s subsidies for renewables. The average German household’s electricity bill is, in fact, lower than in Texas and only slightly higher than in California, partly as a result of energy-efficiency efforts in German homes.
  • An increase in the share of intermittent solar and wind power need not jeopardize the stability of the electric grid. From 2006 to 2013, Germany tripled the amount of electricity generated from solar and wind to a market share of 26 percent, while managing to reduce average annual outage times for electricity customers in its grid from an already impressive 22 minutes to just 15 minutes. During that same period, California tripled the amount of electricity produced from solar and wind to a joint market share of 8 percent and reduced its outage times from more than 100 minutes to less than 90 minutes. However, Texas increased its outage times from 92 minutes to 128 minutes after ramping up its wind-generated electricity sixfold to a market share of 10 percent.

The study may inform the energy debate in the United States, where expanding the nation’s renewable energy infrastructure is a top priority of the Obama administration and the subject of debate among presidential candidates.

The current share of renewables in U.S. electricity generation is 14 percent – half that of Germany. Germany’s ambitious – and controversial – Energiewende (Energy Transition) initiative commits the country to meeting 80 percent of its electricity needs with renewables by 2050. In the United States, 29 states, including California and Texas, have set mandatory targets for renewable energy.

In California, Gov. Jerry Brown recently signed legislation committing the state to producing 50 percent of its electricity from renewables by 2030. Texas, the leading U.S. state for wind development, set a mandate of 10,000 megawatts of renewable energy capacity by 2025, but reached this target 15 years ahead of schedule and now generates over 10 percent of the state’s electricity from wind alone.

Source: Stanford News

Climate change requires new conservation models, Stanford scientists say

In a world transformed by climate change and human activity, Stanford scientists say that conserving biodiversity and protecting species will require an interdisciplinary combination of ecological and social research methods.

By Ker Than

A threatened tree species in Alaska could serve as a model for integrating ecological and social research methods in efforts to safeguard species that are vulnerable to climate change effects and human activity.

In a new Stanford-led study, published online this week in the journal Biological Conservation, scientists assessed the health of yellow cedar, a culturally and commercially valuable tree throughout coastal Alaska that is experiencing climate change-induced dieback.

In an era when climate change touches every part of the globe, the traditional conservation approach of setting aside lands to protect biodiversity is no longer sufficient to protect species, said the study’s first author, Lauren Oakes, a research associate at Stanford University.

“A lot of that kind of conservation planning was intended to preserve historic conditions, which, for example, might be defined by the population of a species 50 years ago or specific ecological characteristics when a park was established,” said Oakes, who is a recent PhD graduate of the Emmett Interdisciplinary Program in Environment and Resources (E-IPER) at Stanford’s School of Earth, Energy, & Environmental Sciences.

But as the effects of climate change become increasingly apparent around the world, resource managers are beginning to recognize that “adaptive management” strategies are needed that account for how climate change affects species now and in the future.

Similarly, because climate change effects will vary across regions, new management interventions must consider not only local laws, policies and regulations, but also local peoples’ knowledge about climate change impacts and their perceptions about new management strategies. For yellow cedar, new strategies could include assisting migration of the species to places where it may be more likely to survive or increasing protection of the tree from direct uses, such as harvesting.

Gathering these perspectives requires an interdisciplinary social-ecological approach, said study leader Eric Lambin, the George and Setsuko Ishiyama Provostial Professor in the School of Earth, Energy, & Environmental Sciences.

“The impact of climate change on ecosystems is not just a biophysical issue. Various actors depend on these ecosystems and on the services they provide for their livelihoods,” said Lambin, who is also a senior fellow at the Stanford Woods Institute for the Environment.

“Moreover, as the geographic distribution of species is shifting due to climate change, new areas that are currently under human use will need to be managed for biodiversity conservation. Any feasible management solution needs to integrate the ecological and social dimensions of this challenge.”

Gauging yellow cedar health

The scientists used aerial surveys to map the distribution of yellow cedar in Alaska’s Glacier Bay National Park and Preserve (GLBA) and collected data about the trees’ health and environmental conditions from 18 randomly selected plots inside the park and just south of the park on designated wilderness lands.

“Some of the plots were really challenging to access,” Oakes said. “We would get dropped off by boat for 10 to 15 days at a time, travel by kayak on the outer coast, and hike each day through thick forests to reach the sites. We’d wake up at 6 a.m. and it wouldn’t be until 11 a.m. that we reached the sites and actually started the day’s work of measuring trees.”

The field surveys revealed that yellow cedars inside of GLBA were relatively healthy and unstressed compared to trees outside the park, to the south. Results also showed reduced crowns and browned foliage in yellow cedar trees at sites outside the park, indicating early signs of the dieback progressing toward the park.

Additionally, modeling by study co-authors Paul Hennon, David D’Amore, and Dustin Wittwer at the USDA Forest Service suggested the dieback is expected to emerge inside GLBA in the future. As the region warms, reductions in snow cover, which helps insulate the tree’s shallow roots, leave the roots vulnerable to sudden springtime cold events.

Merging disciplines

In addition to collecting data about the trees themselves with a team of research assistants, Oakes conducted interviews with 45 local residents and land managers to understand their perceptions about climate change-induced yellow cedar dieback; whether or not they thought humans should intervene to protect the species in GLBA; and what forms those interventions should take.

One unexpected and interesting pattern that emerged from the interviews is that those participants who perceived protected areas as “separate” from nature commonly expressed strong opposition to intervention inside protected areas, like GLBA. In contrast, those who thought of humans as being “a part of” protected areas viewed intervention more favorably.

“Native Alaskans told me stories of going to yellow cedar trees to walk with their ancestors,” Oakes said. “There were other interview participants who said they’d go to a yellow cedar tree every day just to be in the presence of one.”

These people tended to support new kinds of interventions because they believed humans were inherently part of the system and they derived many intangible values, like spiritual or recreational values, from the trees. In contrast, those who perceived protected areas as “natural” and separate from humans were more likely to oppose new interventions in the protected areas.

Lambin said he was not surprised to see this pattern for individuals because people’s choices are informed by their values. “It was less expected for land managers who occupy an official role,” he added. “We often think about an organization and its missions, but forget that day-to-day decisions are made by people who carry their own value systems and perceptions of risks.”

The insights provided by combining ecological and social techniques could inform decisions about when, where, and how to adapt conservation practices in a changing climate, said study co-author Nicole Ardoin, an assistant professor at Stanford’s Graduate School of Education and a center fellow at the Woods Institute.

“Some initial steps in southeast Alaska might include improving tree monitoring in protected areas and increasing collaboration among the agencies that oversee managed and protected lands, as well as working with local community members to better understand how they value these species,” Ardoin said.

The team members said they believe their interdisciplinary approach is applicable to other climate-sensitive ecosystems and species, ranging from redwood forests in California to wild herbivore species in African savannas, and especially those that are currently surrounded by human activities.

“In a human-dominated planet, such studies will have to become the norm,” Lambin said. “Humans are part of these land systems that are rapidly transforming.”

This study was done in partnership with the U.S. Forest Service Pacific Northwest Research Station. It was funded with support from the George W. Wright Climate Change Fellowship; the Morrison Institute for Population and Resource Studies and the School of Earth, Energy & Environmental Sciences at Stanford University; the Wilderness Society Gloria Barron Fellowship; the National Forest Foundation; and U.S. Forest Service Pacific Northwest Research Station and Forest Health Protection.

For more Stanford experts on climate change and other topics, visit Stanford Experts.

Source : Stanford News


Researchers use engineered viruses to provide quantum-based enhancement of energy transport: MIT Research

Quantum physics meets genetic engineering

Researchers use engineered viruses to provide quantum-based enhancement of energy transport.

By David Chandler


 

CAMBRIDGE, Mass.–Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the MIT researchers achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT and in Italy.

Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
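A toy comparison, not the team’s model, makes the payoff concrete: a purely particle-like exciton that hops at random spreads diffusively, so its typical distance grows only like the square root of the number of hops, while coherent, wave-like transport can spread ballistically, with distance growing in proportion to the number of hops. The hop counts and trial numbers below are arbitrary.

# Toy random walk vs. ballistic spread; illustrates scaling only, not the
# actual chromophore network studied in the paper.
import random

def mean_random_walk_distance(steps, trials=2000):
    total = 0.0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += random.choice((-1, 1))
        total += abs(pos)
    return total / trials

for steps in (25, 100, 400):
    diffusive = mean_random_walk_distance(steps)   # grows like sqrt(steps)
    ballistic = steps                              # grows like steps
    print(f"{steps:4d} hops: diffusive ~ {diffusive:5.1f}, ballistic ~ {ballistic}")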

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

The project started from a chance meeting at a conference in Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

“I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

“It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

The research was supported by the Italian energy company Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

Source: MIT News Office

This artist’s impression shows how Mars may have looked about four billion years ago. The young planet Mars would have had enough water to cover its entire surface in a liquid layer about 140 metres deep, but it is more likely that the liquid would have pooled to form an ocean occupying almost half of Mars’s northern hemisphere, and in some regions reaching depths greater than 1.6 kilometres.

Credit:
ESO/M. Kornmesser

Real Martians: How to Protect Astronauts from Space Radiation on Mars

On Aug. 7, 1972, in the heart of the Apollo era, an enormous solar flare exploded from the sun’s atmosphere. Along with a gigantic burst of light in nearly all wavelengths, this event accelerated a wave of energetic particles. Mostly protons, with a few electrons and heavier elements mixed in, this wash of quick-moving particles would have been dangerous to anyone outside Earth’s protective magnetic bubble. Luckily, the Apollo 16 crew had returned to Earth just five months earlier, narrowly escaping this powerful event.

In the early days of human space flight, scientists were only just beginning to understand how events on the sun could affect space, and in turn how that radiation could affect humans and technology. Today, as a result of extensive space radiation research, we have a much better understanding of our space environment, its effects, and the best ways to protect astronauts—all crucial parts of NASA’s mission to send humans to Mars.

“The Martian” film highlights the radiation dangers that could occur on a round trip to Mars. While the mission in the film is fictional, NASA has already started working on the technology to enable an actual trip to Mars in the 2030s. In the film, the astronauts’ habitat on Mars shields them from radiation, and indeed, radiation shielding will be a crucial technology for the voyage. From better shielding to advanced biomedical countermeasures, NASA currently studies how to protect astronauts and electronics from radiation – efforts that will have to be incorporated into every aspect of Mars mission planning, from spacecraft and habitat design to spacewalk protocols.


“The space radiation environment will be a critical consideration for everything in the astronauts’ daily lives, both on the journeys between Earth and Mars and on the surface,” said Ruthan Lewis, an architect and engineer with the human spaceflight program at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “You’re constantly being bombarded by some amount of radiation.”

Radiation, at its most basic, is simply waves or sub-atomic particles that transport energy to another entity – whether that is an astronaut or a spacecraft component. The main concern in space is particle radiation. Energetic particles can be dangerous to humans because they pass right through the skin, depositing energy and damaging cells or DNA along the way. This damage can mean an increased risk for cancer later in life or, at its worst, acute radiation sickness during the mission if the dose of energetic particles is large enough.

Fortunately for us, Earth’s natural protections block all but the most energetic of these particles from reaching the surface. A huge magnetic bubble, called the magnetosphere, which deflects the vast majority of these particles, protects our planet. And our atmosphere subsequently absorbs the majority of particles that do make it through this bubble. Importantly, since the International Space Station (ISS) is in low-Earth orbit within the magnetosphere, it also provides a large measure of protection for our astronauts.

“We have instruments that measure the radiation environment inside the ISS, where the crew are, and even outside the station,” said Kerry Lee, a scientist at NASA’s Johnson Space Center in Houston.

This ISS crew monitoring also includes tracking of the short-term and lifetime radiation doses for each astronaut to assess the risk for radiation-related diseases. Although NASA has conservative radiation limits that are greater than those allowed for radiation workers on Earth, the astronauts are able to stay well under NASA’s limit while living and working on the ISS, within Earth’s magnetosphere.

But a journey to Mars requires astronauts to move out much further, beyond the protection of Earth’s magnetic bubble.

“There’s a lot of good science to be done on Mars, but a trip to interplanetary space carries more radiation risk than working in low-Earth orbit,” said Jonathan Pellish, a space radiation engineer at Goddard.

A human mission to Mars means sending astronauts into interplanetary space for a minimum of a year, even with a very short stay on the Red Planet. Nearly all of that time, they will be outside the magnetosphere, exposed to the harsh radiation environment of space. Mars has no global magnetic field to deflect energetic particles, and its atmosphere is much thinner than Earth’s, so they’ll get only minimal protection even on the surface of Mars.

 

Throughout the entire trip, astronauts must be protected from two sources of radiation. The first comes from the sun, which regularly releases a steady stream of solar particles, as well as occasional larger bursts in the wake of giant explosions, such as solar flares and coronal mass ejections, on the sun. These energetic particles are almost all protons, and, though the sun releases an unfathomably large number of them, the proton energy is low enough that they can almost all be physically shielded by the structure of the spacecraft.

 

Since solar activity strongly contributes to the deep-space radiation environment, a better understanding of the sun’s modulation of this radiation environment will allow mission planners to make better decisions for a future Mars mission. NASA currently operates a fleet of spacecraft studying the sun and the space environment throughout the solar system. Observations from this area of research, known as heliophysics, help us better understand the origin of solar eruptions and what effects these events have on the overall space radiation environment.

 

“If we know precisely what’s going on, we don’t have to be as conservative with our estimates, which gives us more flexibility when planning the mission,” said Pellish.

 

The second source of energetic particles is harder to shield. These particles come from galactic cosmic rays, often known as GCRs. They’re particles accelerated to near the speed of light that shoot into our solar system from other stars in the Milky Way or even other galaxies. Like solar particles, galactic cosmic rays are mostly protons. However, some of them are heavier elements, ranging from helium up to the heaviest elements. These more energetic particles can knock apart atoms in the material they strike, such as in the astronaut, the metal walls of a spacecraft, habitat, or vehicle, causing sub-atomic particles to shower into the structure. This secondary radiation, as it is known, can reach a dangerous level.

 

There are two ways to shield from these higher-energy particles and their secondary radiation: use a lot more mass of traditional spacecraft materials, or use more efficient shielding materials.

 

The sheer volume of material surrounding a structure would absorb the energetic particles and their associated secondary particle radiation before they could reach the astronauts. However, using sheer bulk to protect astronauts would be prohibitively expensive, since more mass means more fuel required to launch.

 

Using materials that shield more efficiently would cut down on weight and cost, but finding the right material takes research and ingenuity. NASA is currently investigating a handful of possibilities that could be used in anything from the spacecraft to the Martian habitat to space suits.

 

“The best way to stop particle radiation is by running that energetic particle into something that’s a similar size,” said Pellish. “Otherwise, it can be like you’re bouncing a tricycle off a tractor-trailer.”

 

Because protons and neutrons are similar in size, one element blocks both extremely well—hydrogen, which most commonly exists as just a single proton and an electron. Conveniently, hydrogen is the most abundant element in the universe, and makes up substantial parts of some common compounds, such as water and plastics like polyethylene. Engineers could take advantage of already-required mass by processing the astronauts’ trash into plastic-filled tiles used to bolster radiation protection. Water, already required for the crew, could be stored strategically to create a kind of radiation storm shelter in the spacecraft or habitat. However, this strategy comes with some challenges—the crew would need to use the water and then replace it with recycled water from the advanced life support systems.

 

Polyethylene, the same plastic commonly found in water bottles and grocery bags, also has potential as a candidate for radiation shielding. It is very high in hydrogen and fairly cheap to produce—however, it’s not strong enough to build a large structure, especially a spacecraft, which goes through high heat and strong forces during launch. And adding polyethylene to a metal structure would add quite a bit of mass, meaning that more fuel would be required for launch.
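A quick back-of-the-envelope check using standard atomic masses shows why these particular compounds keep coming up: both are unusually rich in hydrogen by mass, which a metal such as aluminum is not. The short sketch below only computes hydrogen mass fractions; it says nothing about structural suitability.

# Hydrogen mass fraction of two candidate shielding compounds
# (standard atomic masses; simple stoichiometry only).
H, C, O = 1.008, 12.011, 15.999

compounds = {
    "water (H2O)":           (2 * H, 2 * H + O),
    "polyethylene ((CH2)n)": (2 * H, 2 * H + C),
}

for name, (h_mass, total_mass) in compounds.items():
    print(f"{name}: {100 * h_mass / total_mass:.1f}% hydrogen by mass")
# Roughly 11% for water and 14% for polyethylene; aluminum contains none.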

 

“We’ve made progress on reducing and shielding against these energetic particles, but we’re still working on finding a material that is a good shield and can act as the primary structure of the spacecraft,” said Sheila Thibeault, a materials researcher at NASA’s Langley Research Center in Hampton, Virginia.

 

One material in development at NASA has the potential to do both jobs: Hydrogenated boron nitride nanotubes—known as hydrogenated BNNTs—are tiny nanotubes made of carbon, boron, and nitrogen, with hydrogen interspersed throughout the empty spaces left between the tubes. Boron is also an excellent absorber of secondary neutrons, making hydrogenated BNNTs an ideal shielding material.

“This material is really strong—even at high heat—meaning that it’s great for structure,” said Thibeault.

Remarkably, researchers have successfully made yarn out of BNNTs, so it’s flexible enough to be woven into the fabric of space suits, providing astronauts with significant radiation protection even while they’re performing spacewalks in transit or out on the harsh Martian surface. Though hydrogenated BNNTs are still in development and testing, they have the potential to be one of our key structural and shielding materials in spacecraft, habitats, vehicles, and space suits that will be used on Mars.

Physical shields aren’t the only option for stopping particle radiation from reaching astronauts: Scientists are also exploring the possibility of building force fields. Force fields aren’t just the realm of science fiction: Just like Earth’s magnetic field protects us from energetic particles, a relatively small, localized electric or magnetic field would—if strong enough and in the right configuration—create a protective bubble around a spacecraft or habitat. Currently, these fields would take a prohibitive amount of power and structural material to create on a large scale, so more work is needed for them to be feasible.

The risk of health effects can also be reduced in operational ways, such as having a special area of the spacecraft or Mars habitat that could be a radiation storm shelter; preparing spacewalk and research protocols to minimize time outside the more heavily-shielded spacecraft or habitat; and ensuring that astronauts can quickly return indoors in the event of a radiation storm.

Radiation risk mitigation can also be approached from the human body level. Though far off, a medication that would counteract some or all of the health effects of radiation exposure would make it much easier to plan for a safe journey to Mars and back.

“Ultimately, the solution to radiation will have to be a combination of things,” said Pellish. “Some of the solutions are technology we have already, like hydrogen-rich materials, but some of it will necessarily be cutting edge concepts that we haven’t even thought of yet.”

First Signs of Self-interacting Dark Matter?

Dark matter may not be completely dark after all


Based on our current scientific understanding of the universe and surveys such as the Cosmic Microwave Background observations by Planck and WMAP, visible or baryonic matter accounts for only about 4-5% of the universe. The remaining 95-96% is still a mystery. This huge unknown portion of the dark universe is thought to consist of dark energy (the source of the accelerating expansion of the universe) and dark matter (the unexplained extra mass of galaxies). Despite indirect signatures suggesting their presence, we are still unable to observe these phenomena directly.

For the first time dark matter may have been observed interacting with other dark matter in a way other than through the force of gravity. Observations of colliding galaxies made with ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope have picked up the first intriguing hints about the nature of this mysterious component of the Universe.

This image from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 3827. The strange pale blue structures surrounding the central galaxies are gravitationally lensed views of a much more distant galaxy behind the cluster.
The distribution of dark matter in the cluster is shown with blue contour lines. The dark matter clump for the galaxy at the left is significantly displaced from the position of the galaxy itself, possibly implying that dark matter-dark matter interactions of an unknown nature are occurring.
Credit:
ESO/R. Massey

Using the MUSE instrument on ESO’s VLT in Chile, along with images from Hubble in orbit, a team of astronomers studied the simultaneous collision of four galaxies in the galaxy cluster Abell 3827. The team could trace out where the mass lies within the system and compare the distribution of the dark matter with the positions of the luminous galaxies.

Although dark matter cannot be seen, the team could deduce its location using a technique called gravitational lensing. The collision happened to take place directly in front of a much more distant, unrelated source. The mass of dark matter around the colliding galaxies severely distorted spacetime, deviating the path of light rays coming from the distant background galaxy — and distorting its image into characteristic arc shapes.

Our current understanding is that all galaxies exist inside clumps of dark matter. Without the constraining effect of dark matter’s gravity, galaxies like the Milky Way would fling themselves apart as they rotate. In order to prevent this, 85 percent of the Universe’s mass [1] must exist as dark matter, and yet its true nature remains a mystery.

In this study, the researchers observed the four colliding galaxies and found that one dark matter clump appeared to be lagging behind the galaxy it surrounds. The dark matter is currently 5000 light-years (50 000 million million kilometres) behind the galaxy — it would take NASA’s Voyager spacecraft 90 million years to travel that far.
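Those figures are easy to sanity-check. Assuming Voyager 1’s cruise speed of roughly 17 kilometres per second (an assumption on our part; the article does not say which spacecraft or speed is meant), the arithmetic comes out close to the quoted numbers.

# Rough check of the quoted distance and travel time (Voyager speed assumed).
LY_IN_KM = 9.461e12                  # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = 5000 * LY_IN_KM        # ~4.7e16 km, i.e. ~50 000 million million km
voyager_speed_km_s = 17.0            # assumed cruise speed of Voyager 1
travel_years = distance_km / voyager_speed_km_s / SECONDS_PER_YEAR

print(f"distance ~ {distance_km:.1e} km, travel time ~ {travel_years / 1e6:.0f} million years")
# Prints roughly 88 million years, consistent with the article's ~90 million.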

A lag between dark matter and its associated galaxy is predicted during collisions if dark matter interacts with itself, even very slightly, through forces other than gravity [2]. Dark matter has never before been observed interacting in any way other than through the force of gravity.

Lead author Richard Massey at Durham University, explains: “We used to think that dark matter just sits around, minding its own business, except for its gravitational pull. But if dark matter were being slowed down during this collision, it could be the first evidence for rich physics in the dark sector — the hidden Universe all around us.”

The researchers note that more investigation will be needed into other effects that could also produce a lag. Similar observations of more galaxies, and computer simulations of galaxy collisions will need to be made.

Team member Liliya Williams of the University of Minnesota adds: “We know that dark matter exists because of the way that it interacts gravitationally, helping to shape the Universe, but we still know embarrassingly little about what dark matter actually is. Our observation suggests that dark matter might interact with forces other than gravity, meaning we could rule out some key theories about what dark matter might be.”

This result follows on from a recent result from the team which observed 72 collisions between galaxy clusters [3] and found that dark matter interacts very little with itself. The new work however concerns the motion of individual galaxies, rather than clusters of galaxies. Researchers say that the collision between these galaxies could have lasted longer than the collisions observed in the previous study — allowing the effects of even a tiny frictional force to build up over time and create a measurable lag [4].

Taken together, the two results bracket the behaviour of dark matter for the first time: it appears to interact with itself at least slightly, as suggested by this study, but not by much, as shown by the earlier cluster study. Massey added: “We are finally homing in on dark matter from above and below — squeezing our knowledge from two directions.”

Notes
[1] Astronomers have found that the total mass/energy content of the Universe is split in the proportions 68% dark energy, 27% dark matter and 5% “normal” matter. So the 85% figure relates to the fraction of “matter” that is dark: 27 / (27 + 5) ≈ 0.84, or roughly 85%.

[2] Computer simulations show that the extra friction from the collision would make the dark matter slow down. The nature of that interaction is unknown; it could be caused by well-known effects or some exotic unknown force. All that can be said at this point is that it is not gravity.

All four galaxies might have been separated from their dark matter. But we happen to have a very good measurement from only one galaxy, because it is by chance aligned so well with the background, gravitationally lensed object. With the other three galaxies, the lensed images are further away, so the constraints on the location of their dark matter are too loose to draw statistically significant conclusions.

[3] Galaxy clusters contain up to a thousand individual galaxies.

[4] The main uncertainty in the result is the timespan for the collision: the friction that slowed the dark matter could have been a very weak force acting over about a billion years, or a relatively stronger force acting for “only” 100 million years.

Source: ESO

New kind of “tandem” solar cell developed: MIT Research

Researchers combine two types of photovoltaic material to make a cell that harnesses more sunlight.

By David Chandler


 

CAMBRIDGE, Mass.–Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight-absorbing material in order to harvest a broader range of the sun’s energy. The development could lead to photovoltaic cells that are more efficient than those currently used in solar-power installations, the researchers say.

The new cell uses a layer of silicon — which forms the basis for most of today’s solar panels — but adds a semi-transparent layer of a material called perovskite, which can absorb higher-energy particles of light. Unlike an earlier “tandem” solar cell reported by members of the same team earlier this year — in which the two layers were physically stacked, but each had its own separate electrical connections — the new version has both layers connected together as a single device that needs only one control circuit.

The new findings are reported in the journal Applied Physics Letters by MIT graduate student Jonathan Mailoa; associate professor of mechanical engineering Tonio Buonassisi; Colin Bailie and Michael McGehee at Stanford; and four others.

“Different layers absorb different portions of the sunlight,” Mailoa explains. In the earlier tandem solar cell, the two layers of photovoltaic material could be operated independently of each other and required their own wiring and control circuits, allowing each cell to be tuned independently for optimal performance.

By contrast, the new combined version should be much simpler to make and install, Mailoa says. “It has advantages in terms of simplicity, because it looks and operates just like a single silicon cell,” he says, with only a single electrical control circuit needed.

One tradeoff is that the current produced is limited by the capacity of the lesser of the two layers. Electrical current, Buonassisi explains, can be thought of as analogous to the volume of water passing through a pipe, which is limited by the diameter of the pipe: If you connect two lengths of pipe of different diameters, one after the other, “the amount of water is limited by the narrowest pipe,” he says. Combining two solar cell layers in series has the same limiting effect on current.
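A minimal sketch of that series constraint, treating each layer as an idealized current source with invented photocurrent values, shows why the weaker layer sets the output:

# Idealized series connection: both layers must carry the same current,
# so the weaker layer's photocurrent limits the whole stack.
def tandem_current(j_perovskite, j_silicon):
    return min(j_perovskite, j_silicon)

print(tandem_current(18.0, 14.0))   # mismatched layers: limited to 14.0 (units arbitrary)
print(tandem_current(16.0, 16.0))   # matched layers: the full 16.0 is delivered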

To address that limitation, the team aims to match the current output of the two layers as precisely as possible. In this proof-of-concept solar cell, this means the total power output is about the same as that of conventional solar cells; the team is now working to optimize that output.

Perovskites have been studied for potential electronic uses including solar cells, but this is the first time they have been successfully paired with silicon cells in this configuration, a feat that posed numerous technical challenges. Now the team is focusing on increasing the power efficiency — the percentage of sunlight’s energy that gets converted to electricity — that is possible from the combined cell. In this initial version, the efficiency is 13.7 percent, but the researchers say they have identified low-cost ways of improving this to about 30 percent — a substantial improvement over today’s commercial silicon-based solar cells — and they say this technology could ultimately achieve a power efficiency of more than 35 percent.

They will also explore how to easily manufacture the new type of device, but Buonassisi says that should be relatively straightforward, since the materials lend themselves to being made through methods very similar to conventional silicon-cell manufacturing.

One hurdle is making the material durable enough to be commercially viable: The perovskite material degrades quickly in open air, so it either needs to be modified to improve its inherent durability or encapsulated to prevent exposure to air — without adding significantly to manufacturing costs and without degrading performance.

This exact formulation may not turn out to be the most advantageous for better solar cells, Buonassisi says, but is one of several pathways worth exploring. “Our job at this point is to provide options to the world,” he says. “The market will select among them.”

The research team also included Eric Johlin PhD ’14 and postdoc Austin Akey at MIT, and Eric Hoke and William Nguyen of Stanford. It was supported by the Bay Area Photovoltaic Consortium and the U.S. Department of Energy.

Source: MIT News Office

Fully experimental image of a nanoscaled and ultrafast optical rogue wave retrieved by Near-field Scanning Optical Microscope (NSOM). The flow lines visible in the image represent the direction of light energy. 
Credit: KAUST

Tsunami on demand: the power to harness catastrophic events

A new study published in Nature Physics features a nano-optical chip that makes it possible to generate and control nanoscale rogue waves. The innovative chip was developed by an international team of physicists, led by Andrea Fratalocchi from KAUST (Saudi Arabia), and is expected to have significant applications for energy research and environmental safety.

Can you imagine how much energy is in a tsunami wave, or in a tornado? Energy is all around us, but mainly contained in a quiet state. But there are moments in time when large amounts of energy build up spontaneously and create rare phenomena on a potentially disastrous scale. How these events occur, in many cases, is still a mystery.

To reveal the natural mechanisms behind such high-energy phenomena, Andrea Fratalocchi, assistant professor in the Computer, Electrical and Mathematical Science and Engineering Division of King Abdullah University of Science and Technology (KAUST), led a team of researchers from Saudi Arabia and three European universities and research centers to understand the dynamics of such destructive events and control their formation in new optical chips, which can open various technological applications. The results and implications of this study are published in the journal Nature Physics.

“I have always been fascinated by the unpredictability of nature,” Fratalocchi said. “And I believe that understanding this complexity is the next frontier that will open cutting edge pathways in science and offer novel applications in a variety of areas.”

Fratalocchi’s team began their research by developing new theoretical ideas to explain the formation of rare energetic natural events such as rogue waves — large surface waves that develop spontaneously in deep water and represent a potential risk for vessels and open-ocean oil platforms.

“Our idea was something never tested before,” Fratalocchi continued. “We wanted to demonstrate that small perturbations of a chaotic sea of interacting waves could, contrary to intuition, control the formation of rare events of exceptional amplitude.”
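A crude, purely linear caricature of such a “sea” of waves (far simpler than the chaotic, interacting system realized on the chip) already shows how rare, outsized peaks can emerge from an otherwise unremarkable superposition; the wave and sample counts below are arbitrary.

# Sum many waves with random phases and look for rare, outsized peaks.
# This is only a statistical caricature, not the photonic-chip physics.
import math, random

N_WAVES, N_SAMPLES = 200, 20000
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N_WAVES)]
freqs = [random.uniform(0.5, 1.5) for _ in range(N_WAVES)]

heights = []
for i in range(N_SAMPLES):
    t = i * 0.05
    heights.append(abs(sum(math.cos(f * t + p) for f, p in zip(freqs, phases))))

mean_height = sum(heights) / len(heights)
print(f"mean |height| = {mean_height:.1f}, largest peak = {max(heights):.1f}")
# The largest peak is typically several times the mean height.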


A planar photonic crystal chip, fabricated at the University of St. Andrews and tested at the FOM institute AMOLF in the Amsterdam Science Park, was used to generate ultrafast (163 fs long) and subwavelength (203 nm wide) nanoscale rogue waves, proving that Fratalocchi’s theory was correct. The newly developed photonic chip offered an exceptional level of controllability over these rare events.

Thomas F. Krauss, head of the Photonics Group and Nanocentre Cleanroom at the University of York, UK, was involved in the development of the experiment and the analysis of the data. He shared, “By realizing a sea of interacting waves on a photonic chip, we were able to study the formation of rare high energy events in a controlled environment. We noted that these events only happened when some sets of waves were missing, which is one of the key insights of our study.”

Kobus Kuipers, head of nanophotonics at FOM institute AMOLF, NL, who was involved in the experimental visualization of the rogue waves, was fascinated by their dynamics: “We have developed a microscope that allows us to visualize optical behavior at the nanoscale. Unlike conventional wave behavior, it was remarkable to see the rogue waves suddenly appear, seemingly out of nowhere, and then disappear again…as if they had never been there.”

Andrea Di Falco, leader of the Synthetic Optics group at the University of St. Andrews said, “The advantage of using light confined in an optical chip is that we can control very carefully how the energy in a chaotic system is dissipated, giving rise to these rare and extreme events. It is as if we were able to produce a determined amount of waves of unusual height in a small lake, just by accurately landscaping its coasts and controlling the size and number of its emissaries.”

The outcomes of this project offer leading edge technological applications in energy research, high speed communication and in disaster preparedness.

Fratalocchi and the team believe their research represents a major milestone for KAUST and for the field. “This discovery can change once and for all the way we look at catastrophic events,” concludes Fratalocchi, “opening new perspectives in preventing their destructive appearance on large scales, or using their unique power for ideating new applications at the nanoscale.”

The title of the Nature Physics paper is “Triggering extreme events at the nanoscale in photonic seas.” The paper is accessible on the Nature Physics website: http://dx.doi.org/10.1038/nphys3263

Source : KAUST News

This artist’s impression depicts the formation of a galaxy cluster in the early Universe. The galaxies are vigorously forming new stars and interacting with each other. Such a scene closely resembles the Spiderweb Galaxy (formally known as MRC 1138-262) and its surroundings, which is one of the best-studied protoclusters.

Credit:

ESO/M. Kornmesser

Universe may face a darker future

Since the discovery of the accelerated expansion of the universe in 1998 by the High-Z Supernova Search Team led by Prof. Brian Schmidt and Adam Riess, and by the Supernova Cosmology Project team led by Prof. Saul Perlmutter, the question of the nature of this expansion and the role of the mysterious dark energy has puzzled many theoretical and observational physicists and astrophysicists.

Another puzzling question in astronomy comes from the unusual behavior of stars revolving around galaxies with higher velocities than expected from the apparent baryonic matter in those galaxies. This has led to many new questions related to something we call dark matter, another unexplained phenomenon.


New research offers a novel insight into the nature of dark matter and dark energy and what the future of our Universe might be.

Researchers in Portsmouth and Rome have found hints that dark matter, the cosmic scaffolding on which our Universe is built, is being slowly erased, swallowed up by dark energy.

The findings appear in the journal Physical Review Letters, published by the American Physical Society. In the paper, cosmologists at the Universities of Portsmouth and Rome argue that the latest astronomical data favour a dark energy that grows as it interacts with dark matter, and that this appears to be slowing the growth of structure in the cosmos.

Professor David Wands, Director of Portsmouth’s Institute of Cosmology and Gravitation, is one of the research team.

He said: “This study is about the fundamental properties of space-time. On a cosmic scale, this is about our Universe and its fate.

“If the dark energy is growing and dark matter is evaporating we will end up with a big, empty, boring Universe with almost nothing in it.

 

“Dark matter provides a framework for structures to grow in the Universe. The galaxies we see are built on that scaffolding and what we are seeing here, in these findings, suggests that dark matter is evaporating, slowing that growth of structure.”

Cosmology underwent a paradigm shift in 1998 when researchers announced that the rate at which the Universe was expanding was accelerating. The idea of a constant dark energy throughout space-time (the “cosmological constant”) became the standard model of cosmology, but now the Portsmouth and Rome researchers believe they have found a better description, including energy transfer between dark energy and dark matter.
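To see schematically what an energy transfer between the two dark components means, here is a toy parameterization of the kind often used in the literature (not necessarily the specific model tested in the paper), in which dark matter loses energy to dark energy at a rate proportional to the expansion rate. The coupling strength and starting densities are arbitrary.

# Toy interacting dark sector: dark matter transfers energy to dark energy
# at a rate Q = xi * H * rho_dm (a common parameterization; values arbitrary).
import math

def evolve(xi, n_steps=1000):
    rho_dm, rho_de = 1.0, 2.2              # arbitrary starting densities
    dlna = math.log(10.0) / n_steps        # evolve over one decade of expansion
    for _ in range(n_steps):
        # Continuity equations per e-fold of expansion (dark energy w = -1 assumed).
        drho_dm = -3.0 * rho_dm - xi * rho_dm
        drho_de = xi * rho_dm
        rho_dm += drho_dm * dlna
        rho_de += drho_de * dlna
    return rho_dm, rho_de

print(evolve(0.0))   # no coupling: dark matter dilutes as a^-3, dark energy stays constant
print(evolve(0.3))   # with coupling: dark matter is depleted faster and dark energy grows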

Research students Valentina Salvatelli and Najla Said from the University of Rome worked in Portsmouth with Dr Marco Bruni and Professor Wands, and with Professor Alessandro Melchiorri in Rome. They examined data from a number of astronomical surveys, including the Sloan Digital Sky Survey, and used the growth of structure revealed by these surveys to test different models of dark energy.

Professor Wands said: “Valentina and Najla spent several months here over the summer looking at the consequences of the latest observations. Much more data is available now than was available in 1998 and it appears that the standard model is no longer sufficient to describe all of the data. We think we’ve found a better model of dark energy.

“Since the late 1990s astronomers have been convinced that something is causing the expansion of our Universe to accelerate. The simplest explanation was that empty space – the vacuum – had an energy density that was a cosmological constant. However there is growing evidence that this simple model cannot explain the full range of astronomical data researchers now have access to; in particular the growth of cosmic structure, galaxies and clusters of galaxies, seems to be slower than expected.”

Professor Dragan Huterer, of the University of Michigan, has read the research and said scientists need to take notice of the findings.

He said: “The paper does look very interesting. Any time there is a new development in the dark energy sector we need to take notice since so little is understood about it. I would not say, however, that I am surprised at the results, that they come out different than in the simplest model with no interactions. We’ve known for some months now that there is some problem in all data fitting perfectly to the standard simplest model.”

Source: Materials taken from UoP News