Tag Archives: energy

KAUST team synthesizes novel metal-organic framework for efficient CO2 removal

By Caitlin Clark

“In Professor Mohamed Eddaoudi’s research group, we are always on the quest to find novel nanostructured functionalized materials for specific applications,” explained KAUST Research Scientist Dr. Youssef Belmabkhout, a member of Prof. Eddaoudi’s Functional Materials Design, Discovery, and Development (FMD3) group, part of KAUST’s Advanced Membranes and Porous Materials (AMPM) research center.

Dr. Osama Shekhah, Senior Research Scientist in the FMD3 group added that the group searches “for materials that will be highly suitable for trace and low CO2 concentration removal using purely physical adsorption. These will help in energy saving and in the reduction of the cost of the production, purification, and enrichment of highly valuable commodities such as CH4, H2, O2, N2, and others.”

Drs. Shekhah and Belmabkhout and a team of researchers from Prof. Eddaoudi’s group recently discovered and synthesized a new porous, moisture-resistant, inexpensive and reusable copper-based metal-organic framework (MOF) called SIFSIX-3-Cu that can selectively adsorb and remove trace CO2 from mixtures of various gases. Their findings were published in the June 25 edition of Nature Communications (DOI: 10.1038/ncomms5228).

MOFs are a promising new class of hybrid solid-state materials for CO2 removal. “Their uniqueness,” explained Prof. Eddaoudi, “resides in the ability to control their assembly and introduce functionality on demand. This feature is not readily available in other solid-state materials.”

The researchers showed for the first time that MOF crystal chemistry permits the assembly of a new isostructural hexafluorosilicate MOF (SIFSIX-3-Cu) based on copper instead of zinc.

“This technology is anticipated to outperform the existing mature technologies for CO2 physical adsorption in terms of energy efficiency,” says Dr. Shekhah. “The key factors for this finding are the combination of suitable pore size and high, uniform charge density in the pores of the MOF.”

Using their newly synthesized MOF, the researchers examined conditions relevant to direct air capture (DAC), a way of removing CO2 directly from ambient air and thus of addressing greenhouse gas emissions wherever they occur.

DAC is more challenging than post-combustion capture, but it may be practical if an alternative “suitable adsorbent combining optimum uptake, kinetics, energetics and CO2 selectivity is available at trace CO2 concentration,” the researchers stated.

The team discovered that contracting SIFSIX-3-Cu’s pore system to 3.5 Å enhanced the material’s efficiency, enabling it to adsorb 10 to 15 times more CO2 than zinc-based metal-organic adsorbents such as SIFSIX-3-Zn, whose pore size is 3.84 Å.

“We attribute this property to enhanced physical sorption through the favorable electrostatic interactions between CO2 molecules and fluorine atoms present on the surface of the adsorbent,” explained Zhijie Chen, a Ph.D. student in the FMD3 group and a co-author of the paper.

Dr. Vincent Guillerm, a post-doctoral fellow in the FMD3 group and a co-author of the paper, also noted that “the pore contraction gives CO2 uptake and selectivity at very low partial pressures. This is relevant to DAC and trace carbon dioxide removal.”

“SIFSIX-3-Cu gives enhanced CO2 physical adsorption properties, uptake, and selectivity in highly diluted gas streams, and this performance is unachievable with other classes of porous materials,” added Dr. Karim Adil, a co-author of the paper and Research Scientist in the FMD3 group.

The researchers are excited about their finding as it offers the potential to be used not only for DAC but also for other applications related to energy, the environment, and the healthcare field. For example, SIFSIX-3-Cu could be used to remove and recycle CO2 in confined spaces, such as in submarines or space shuttles, and could also be used in anesthesia machines, which require efficient CO2 sorbents.

“Our work paves the way for scientists to develop new separation agents suitable for challenging endeavor pertaining to CO2 ultra-purification processing,” said Dr. Shekhah. “Our study is also part of a greater critical effort to develop economical and practical pathways to reduce cumulative CO2 emissions provoking the undesirable greenhouse gas effect.”

Prof. Eddaoudi reiterated that “MOFs offer remarkable CO2 physical adsorption attributes in highly diluted gas streams thanks to their ability for rational pore size modification and inorganic-organics moieties substitution. Other classes of plain materials are unable to attain this.”

In the future, Prof. Eddaoudi’s FMD3 group will continue to develop topologically and chemically different MOFs. “We aim to target novel MOFs with suitable pore size and high charge density,” explained Prof. Eddaoudi. “We will then use these for the important task of removing trace and low and high concentration CO2.”

Source: KAUST


 

Physicists from Japan and the USA share the 2014 Nobel Prize in Physics

The Nobel Prize in Physics 2014 was awarded jointly to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura “for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”.

The following is the press release from NobelPrize.org regarding the announcement.


The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics for 2014 to

Isamu Akasaki
Meijo University, Nagoya, Japan and Nagoya University, Japan

Hiroshi Amano
Nagoya University, Japan

and

Shuji Nakamura
University of California, Santa Barbara, CA, USA

“for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”

New light to illuminate the world

This year’s Nobel Laureates are rewarded for having invented a new energy-efficient and environment-friendly light source – the blue light-emitting diode (LED). In the spirit of Alfred Nobel the Prize rewards an invention of greatest benefit to mankind; using blue LEDs, white light can be created in a new way. With the advent of LED lamps we now have more long-lasting and more efficient alternatives to older light sources.

When Isamu Akasaki, Hiroshi Amano and Shuji Nakamura produced bright blue light beams from their semiconductors in the early 1990s, they triggered a fundamental transformation of lighting technology. Red and green diodes had been around for a long time but without blue light, white lamps could not be created. Despite considerable efforts, both in the scientific community and in industry, the blue LED had remained a challenge for three decades.

They succeeded where everyone else had failed. Akasaki worked together with Amano at the University of Nagoya, while Nakamura was employed at Nichia Chemicals, a small company in Tokushima. Their inventions were revolutionary. Incandescent light bulbs lit the 20th century; the 21st century will be lit by LED lamps.

White LED lamps emit a bright white light, are long-lasting and energy-efficient. They are constantly improved, getting more efficient with higher luminous flux (measured in lumen) per unit electrical input power (measured in watt). The most recent record is just over 300 lm/W, which can be compared to 16 for regular light bulbs and close to 70 for fluorescent lamps. As about one fourth of world electricity consumption is used for lighting purposes, the LEDs contribute to saving the Earth’s resources. Materials consumption is also diminished as LEDs last up to 100,000 hours, compared to 1,000 for incandescent bulbs and 10,000 hours for fluorescent lights.
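
To get a feel for what these efficacy figures mean in practice, here is a rough back-of-envelope comparison; the 800-lumen output target and 1,000 hours of annual use are illustrative assumptions, not figures from the press release.

```python
# Back-of-envelope comparison using the efficacy figures quoted above.
# The lumen target and annual hours are assumptions for illustration only.
LUMENS_NEEDED = 800      # roughly the output of a traditional 60 W incandescent bulb
HOURS_PER_YEAR = 1000    # about three hours of use per day

sources = {
    "incandescent": 16,  # lumens per watt, as quoted in the press release
    "fluorescent": 70,
    "LED (record)": 300,
}

for name, lm_per_w in sources.items():
    watts = LUMENS_NEEDED / lm_per_w
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    print(f"{name}: {watts:.1f} W, {kwh_per_year:.1f} kWh per year")

# Roughly: incandescent ~50 W (50 kWh/yr), fluorescent ~11 W (11 kWh/yr),
# LED at the record efficacy ~2.7 W (2.7 kWh/yr).
```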

The LED lamp holds great promise for increasing the quality of life for over 1.5 billion people around the world who lack access to electricity grids: due to low power requirements it can be powered by cheap local solar power.

The invention of the blue LED is just twenty years old, but it has already contributed to create white light in an entirely new manner to the benefit of us all.


Isamu Akasaki, Japanese citizen. Born 1929 in Chiran, Japan. Ph.D. 1964 from Nagoya University, Japan. Professor at Meijo University, Nagoya, and Distinguished Professor at Nagoya University, Japan.
http://en.nagoya-u.ac.jp/people/distinguished_award_recipients/nagoya_university_distinguished_professor_isamu_akasaki.html

Hiroshi Amano, Japanese citizen. Born 1960 in Hamamatsu, Japan. Ph.D. 1989 from Nagoya University, Japan. Professor at Nagoya University, Japan.
http://profs.provost.nagoya-u.ac.jp/view/html/100001778_en.html

Shuji Nakamura, American citizen. Born 1954 in Ikata, Japan. Ph.D. 1994 from University of Tokushima, Japan. Professor at University of California, Santa Barbara, CA, USA.
www.sslec.ucsb.edu/nakamura/

Prize amount: SEK 8 million, to be shared equally between the Laureates.

Source: NobelPrize.Org

Hide & Seek: Sterile Neutrinos Remain Elusive

Daya Bay neutrino experiment publishes a new result on its first search for a “sterile” neutrino

BEIJING; BERKELEY, CA; and UPTON, NY—The Daya Bay Collaboration, an international group of scientists studying the subtle transformations of subatomic particles called neutrinos, is publishing its first results on the search for a so-called sterile neutrino, a possible new type of neutrino beyond the three known neutrino “flavors,” or types. The existence of this elusive particle, if proven, would have a profound impact on our understanding of the universe, and could impact the design of future neutrino experiments. The new results, appearing in the journal Physical Review Letters, show no evidence for sterile neutrinos in a previously unexplored mass range.

There is strong theoretical motivation for sterile neutrinos. Yet the experimental landscape is unsettled—several experiments have hinted that sterile neutrinos may exist, but others have yielded null results. Having amassed one of the largest samples of neutrinos in the world, the Daya Bay Experiment is poised to shed light on the existence of sterile neutrinos.

The Daya Bay Experiment is situated close to the Daya Bay and Ling Ao nuclear power plants in China, 55 kilometers northeast of Hong Kong. These reactors produce a steady flux of antineutrinos that the Daya Bay Collaboration scientists use for research at detectors located at varying distances from the reactors. The collaboration includes more than 200 scientists from six regions and countries.

The Daya Bay experiment began its operation on December 24, 2011. Soon after, in March 2012, the collaboration announced its first results: the observation of a new type of neutrino oscillation—evidence that these particles mix and change flavors from one type to others—and a precise determination of a neutrino “mixing angle,” called θ13, which is a definitive measure of the mixing of at least three mass states of neutrinos.

The fact that neutrinos have mass at all is a relatively new discovery, as is the observation at Daya Bay that the electron neutrino is a mixture of at least three mass states.  While scientists don’t know the exact values of the neutrino masses, they are able to measure the differences between them, or “mass splittings.” They also know that these particles are dramatically less massive than the well-known electron, though both are members of the family of particles called “leptons.”

These unexpected observations have led to the possibility that the electrically neutral, almost undetectable neutrino could be a special type of matter and a very important component of the mass of the universe. Given that the nature of matter and in particular the property of mass is one of the fundamental questions in science, these new revelations about the neutrino make it clear that it is important to search for other light neutral particles that might be partners of the active neutrinos, and may contribute to the dark matter of the universe.

Search for a light sterile neutrino

The new Daya Bay paper describes the search for such a light neutral particle, the “sterile neutrino,” by looking for evidence that it mixes with the three known neutrino types—electron, muon, and tau. If, like the known flavors, the sterile neutrino also exists as a mixture of different masses, it would lead to mixing of neutrinos from known flavors to the sterile flavor, thus giving scientists proof of its existence. That proof would show up as a disappearance of neutrinos of known flavors.

Measuring disappearing neutrinos isn’t as strange as it seems. In fact, that’s how Daya Bay scientists detect neutrino oscillations. The scientists count how many of the millions of quadrillions of electron antineutrinos produced every second by the six China General Nuclear Power Group reactors are captured by the detectors located in three experimental halls built at varying distances from the reactors. The detectors are only sensitive to electron antineutrinos. Calculations based on the number that disappear along the way to the farthest detectors give them information about how many have changed flavors.

The rate at which they transform is the basis for measuring the mixing angles (for example, θ13), and the mass splitting is determined by how the rate of transformation depends on the neutrino energy and the distance between the reactor and the detector.
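
For orientation, the dependence described here is usually written with the standard two-flavor survival probability for reactor antineutrinos. This is the textbook form, sketched as context rather than taken from the Daya Bay paper:

```latex
% Standard two-flavor approximation for reactor antineutrino survival
% (textbook form, shown for context; not reproduced from the Daya Bay paper)
P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1 - \sin^2(2\theta_{13})\,
  \sin^2\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right)
```

The depth of the deficit fixes the mixing angle, while its dependence on L/E fixes the mass splitting; a sterile neutrino mixing with the known flavors would add a further oscillation term governed by a fourth mass splitting.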

That distance is also referred to as the “baseline.” With six detectors strategically positioned at three separate locations to catch antineutrinos generated from the three pairs of reactors, Daya Bay provides a unique opportunity to search for a light sterile neutrino with baselines ranging from 360 meters to 1.8 kilometers.

Daya Bay performed its first search for a light sterile neutrino using the energy dependence of detected electron antineutrinos from the reactors. Within the searched mass range for a fourth possible mass state, Daya Bay found no evidence for the existence of a sterile neutrino.

These data represent the best limit in the world on sterile neutrinos over a wide range of masses and so far support the standard three-flavor neutrino picture. Given the importance of clarifying the existence of the sterile neutrino, many scientists and experiments continue the search, and Daya Bay’s new result has significantly narrowed the unexplored territory.

Source: IHEP Chinese Academy of Sciences 

Innovation in the desert! KAUST’s NOMADD sets sights on solar energy future

The NOMADD technology represents KAUST’s first royalty-bearing license agreement.

By Meres J. Weche


The United Nations estimates the Saudi population will grow to 45 million by 2050, and as the population increases, domestic energy demand is anticipated to double by 2030. In recognition of the growing importance of developing sustainable and renewable energy sources for the Kingdom, the Saudi government has set the ambitious goal of generating a third of the country’s electricity (41,000 megawatts) from solar power by 2032. Towards this goal, the King Abdullah City for Atomic and Renewable Energy (KACARE) aims to build a $109 billion solar industry in Saudi Arabia, which would represent about 20,000 football fields’ worth of solar panels.

“We hope to be the industry standard solution to clean all those panels,” said Georg Eitelhuber, Founder and Chief Executive Officer of NOMADD. The startup company, developed three years ago at KAUST and originally supported and funded by the Entrepreneurship Center and the Seed Fund program, offers a waterless and remotely operated system to clean solar panels. The acronym NOMADD stands for NO-water Mechanical Automated Dusting Device.

The NOMADD technology represents KAUST’s first royalty-bearing license agreement. Credit: KAUST News

Describing the challenges facing Saudi Arabia’s burgeoning solar energy industry, the NOMADD founder says: “The big challenge is dust. Desert winds pick up the dust and push it onto the solar panels, all day every day. Sometimes you can have dust storms which put so much dust on the solar panel surface, you can lose 60% of your output in a single day.” In fact, solar panels lose 0.4-0.8% of their efficiency per day just from desert sand and dust.
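
A quick sketch of how that daily soiling loss compounds if panels go uncleaned; the 30-day window is an illustrative assumption, not a NOMADD figure.

```python
# Rough illustration of how a 0.4-0.8% per-day soiling loss compounds if panels
# are left uncleaned. Real soiling rates vary with weather, tilt and dust events.
def output_remaining(days, daily_loss):
    """Fraction of original output left after `days` without cleaning."""
    return (1 - daily_loss) ** days

for daily_loss in (0.004, 0.008):
    lost = 1 - output_remaining(30, daily_loss)
    print(f"{daily_loss:.1%} per day -> {lost:.1%} of output lost after 30 days")

# 0.4% per day -> 11.3% of output lost after 30 days
# 0.8% per day -> 21.4% of output lost after 30 days
```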

A mechanical engineer by training, Eitelhuber was working as a physics teacher at the KAUST School when he started experimenting with Lego blocks and paper to find a solution to clean solar panels exposed to the rough dusty environment of Saudi Arabia. His innovation has since been recognized with the 2014 Solar Pioneer Award and he has been working on further testing and developing the solution with world-leading companies in solar energy such as First Solar Inc. and SunPower Corp.

Eitelhuber is grateful for the backing of KAUST, with all of its resources, in assisting inventors like himself. As the NOMADD team works with various industrial testing partners on improving the technology, KAUST Tech Transfer is there to maintain control of patentable technology which may emerge in the process. A milestone was achieved last month when KAUST signed its first royalty-bearing license agreement for the NOMADD desert solar solution system.

A Continuous Drive for Improvement

Demonstrating the newly devised fifth version of the NOMADD system in its three years of development, Georg Eitelhuber explains that it’s now “70% lighter than previous versions and uses less than half of the power.” In addition to that, it’s much cheaper to manufacture.

“Every time we do a new version it’s simpler, cheaper and faster,” he adds. For example, the rail system supporting the brushes cleaning the solar panels from top to bottom is not only lighter and cheaper but it also now just clips on – whereas previous versions required many nuts and bolts. The mounting system moreover features an inbuilt self-adjustment process tailored to determine the optimal gravity-adjusted angle as the solar panels are cleaned.

It’s important for the cleaning system to be both economically and functionally optimized since some panel rows can be 400 meters long. “That’s a lot of rail,” said Eitelhuber. “The old version had literally hundreds of nuts and bolts, little fasteners and washers, and it worked great, but it also weighed as much as a tank.”

Compared to some earlier models, which had around 120 manufacturing pieces, the latest NOMADD system has narrowed it down to 10 to 15 pieces. This means that it’s now easier to manufacture and assemble. “The key thing is that it has to be cheaper than sending out a worker with a squeegee and more economical than anything else in the market,” he adds.

One objective, now achieved, has been to make NOMADD desert-proof, as the arid environment causes equipment to break down more frequently. The device is essentially machined aluminum and stainless steel.

It’s also noteworthy that the brushes used to non-abrasively clean the solar panels can easily be slid out and replaced; changing all of them takes around five minutes.

In addition, one of the major advantages of the NOMADD system is that it’s remotely operated. The cleaning functions can be monitored and operated online from around the world.

A Saudi-Specific Innovation with a Global Footprint

“The advantage that we’ve got is that we’ve basically been three years in development and we’ve been developing this solution for the desert while being in the desert. We’ve got a real understanding of the issues involved in cleaning solar panels in the desert,” said Georg Eitelhuber.

Unlike some other solar panel cleaning solutions from North American and European companies, designed for mild climates, that use water and require manual labor, the NOMADD system really has an edge by being a waterless model ideally suited for these arid conditions. “We understand that having someone standing outside at 45 degrees Celsius cleaning solar panels eight hours a day isn’t feasible,” he adds.

As they keep an eye out for the competition, the NOMADD team is confident that, once they make it through the final development process, they will have every chance of being a huge commercial success.

KAUST’s director of New Ventures and Entrepreneurship, Gordon McConnell, says NOMADD’s local presence in the Kingdom will help contribute to building a knowledge-based economy in Saudi Arabia. “The local incorporation is not just of bureaucratic significance, but will now enable NOMADD to develop its business which in turn will help to create high level jobs in sales, marketing and technical areas, while also offering an opportunity to build up local manufacturing capacity and it will make it easier for fund raising within the Kingdom,” said McConnell.

The NOMADD project has greatly benefited from the collaborative efforts of several key team members, such as Guodong Li, Chief Electrical Engineer, and Elizabeth Cassell, the project’s Chief Administrator, both from the KAUST Solar Center, as well as Head Mechanical Design Engineer Steven Schneider, who has been instrumental in producing technical drawings for manufacturing. Andres Pablo, a Ph.D. student, and Razeen Stoffberg, one of Georg’s former students from the KAUST School, have been assisting with technical setups and product testing and evaluation.

Also, as much of the manufacturing work is done in Asia, the NOMADD team has set up an office in Singapore, headed by Chief Development Officer Cliff Barrett. As a next step, the team has been actively recruiting a new CEO to help the project achieve critical mass and reach their ambitious future milestones.

“Thanks to some great mentorship from the KAUST New Ventures and Entrepreneurship team, I’ve done my best as a CEO, but I’m an engineer and an inventor by nature,” said Georg Eitelhuber. “It’s been one of my dreams from the very beginning to try and start something which will have a net positive environmental and social impact.”

Source: KAUST News

Particle detector finds hints of dark matter in space

Alpha Magnetic Spectrometer detects positrons in cosmic ray flux that hint at dark matter’s origin.

By Jennifer Chu


Researchers at MIT’s Laboratory for Nuclear Science have released new measurements that promise to shed light on the origin of dark matter.

Computer-generated drawing of the Alpha Magnetic Spectrometer (AMS). Credit: NASA

The MIT group leads an international collaboration of scientists that analyzed two and a half years’ worth of data taken by the Alpha Magnetic Spectrometer (AMS) — a large particle detector mounted on the exterior of the International Space Station — that captures incoming cosmic rays from all over the galaxy.

Among 41 billion cosmic ray events — instances of cosmic particles entering the detector — the researchers identified 10 million electrons and positrons, the stable antiparticles of electrons. Positrons occur in relatively small numbers within the cosmic ray flux.

An excess of these particles has been observed by previous experiments — suggesting that they may not originate from cosmic rays, but come instead from a new source. In 2013, the AMS collaboration, for the first time, accurately measured the onset of this excess.

The new AMS results may ultimately help scientists narrow in on the origin and features of dark matter — whose collisions may give rise to positrons.

The team reports the observed positron fraction — the ratio of the number of positrons to the combined number of positrons and electrons — within a wider energy range than previously reported. From the data, the researchers observed that this positron fraction increases quickly at low energies, after which it slows and eventually levels off at much higher energies.
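
For reference, the quantity the collaboration reports is simply the ratio defined above, binned in energy; in standard notation (added here for orientation):

```latex
% Positron fraction as a function of energy E
f(E) = \frac{N_{e^{+}}(E)}{N_{e^{+}}(E) + N_{e^{-}}(E)}
```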

The team reports that this is the first experimental observation of the positron fraction maximum — at 243 to 307 gigaelectronvolts (GeV) — after half a century of cosmic ray experiments.

“The new AMS results show unambiguously that a new source of positrons is active in the galaxy,” says Paolo Zuccon, an assistant professor of physics at MIT. “We do not know yet if these positrons are coming from dark matter collisions, or from astrophysical sources such as pulsars. But measurements are underway by AMS that may discriminate between the two hypotheses.”

The new measurements, Zuccon adds, are compatible with a dark matter particle with mass on the order of 1 teraelectronvolt (TeV) — about 1,000 times the mass of a proton.

Zuccon and his colleagues, including AMS’s principal investigator, Samuel Ting, the Thomas D. Cabot Professor of Physics at MIT, detail their results in two papers published today in the journal Physical Review Letters and in a third, forthcoming publication.

Catching a galactic stream

Nearly 85 percent of the matter in the universe is dark matter — matter that somehow does not emit or reflect light, and is therefore invisible to modern telescopes. For decades, astronomers have observed only the effects of dark matter, in the form of mysterious gravitational forces that seem to hold together clusters of galaxies that would otherwise fly apart. Such observations eventually led to the theory of an invisible, stabilizing source of gravitational mass, or dark matter.

The AMS experiment aboard the International Space Station aims to identify the origins of dark matter. The detector takes in a constant flux of cosmic rays, which Zuccon describes as “streams of the universe that bring with them everything they can catch around the galaxy.”

Presumably, this cosmic stream includes leftovers from the violent collisions between dark matter particles.

According to theoretical predictions, when two dark matter particles collide, they annihilate, releasing a certain amount of energy that depends on the mass of the original particles. When the particles annihilate, they produce ordinary particles that eventually decay into stable particles, including electrons, protons, antiprotons, and positrons.

As the visible matter in the universe consists of protons and electrons, the researchers reasoned that the contribution of these same particles from dark matter collisions would be negligible. However, positrons and antiprotons are much rarer in the universe; any detection of these particles above the very small expected background would likely come from a new source. The features of this excess — and in particular its onset, maximum position, and offset — will help scientists determine whether positrons arise from astrophysical sources such as pulsars, or from dark matter.

After continuously collecting data since 2011, the AMS team analyzed 41 billion incoming particles and identified 10 million positrons and electrons with energies ranging from 0.5 to 500 GeV — a wider energy range than previously measured.

The researchers studied the positron fraction versus energy, and found an excess of positrons starting at lower energies (8 GeV), suggesting a source for the particles other than the cosmic rays themselves. The positron fraction then slowed and peaked at 275 GeV, indicating that the data may be compatible with a dark matter source of positrons.

“Dark matter is there,” Zuccon says. “We just don’t know what it is. AMS has the possibility to shine a light on its features. We see some hint now, and it is within our possibility to say if that hint is true.”

If it turns out that the AMS results are due to dark matter, the experiment could establish that dark matter is a new kind of particle, says Barry Barish, a professor emeritus of physics and high-energy physics at the California Institute of Technology.

“The new phenomena could be evidence for the long-sought dark matter in the universe, or it could be due to some other equally exciting new science,” says Barish, who was not involved in the experiments. “In either case, the observation in itself is what is exciting; the scientific explanation will come with further experimentation.”

This research was funded in part by the U.S. Department of Energy.

Source: MIT News Office

 


Stanford scientists develop water splitter that runs on ordinary AAA battery

Hongjie Dai and colleagues have developed a cheap, emissions-free device that uses a 1.5-volt battery to split water into hydrogen and oxygen. The hydrogen gas could be used to power fuel cells in zero-emissions vehicles.

BY MARK SHWARTZ


In 2015, American consumers will finally be able to purchase fuel cell cars from Toyota and other manufacturers. Although touted as zero-emissions vehicles, most of the cars will run on hydrogen made from natural gas, a fossil fuel that contributes to global warming.

Stanford graduate student Ming Gong, left, and Professor Hongjie Dai have developed a low-cost electrolytic device that splits water into hydrogen and oxygen at room temperature. The device is powered by an ordinary AAA battery. (Mark Shwartz / Stanford Precourt Institute for Energy)

Now scientists at Stanford University have developed a low-cost, emissions-free device that uses an ordinary AAA battery to produce hydrogen by water electrolysis.  The battery sends an electric current through two electrodes that split liquid water into hydrogen and oxygen gas. Unlike other water splitters that use precious-metal catalysts, the electrodes in the Stanford device are made of inexpensive and abundant nickel and iron.

“Using nickel and iron, which are cheap materials, we were able to make the electrocatalysts active enough to split water at room temperature with a single 1.5-volt battery,” said Hongjie Dai, a professor of chemistry at Stanford. “This is the first time anyone has used non-precious metal catalysts to split water at a voltage that low. It’s quite remarkable, because normally you need expensive metals, like platinum or iridium, to achieve that voltage.”
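
For context on why 1.5 volts is remarkable, textbook electrochemistry (general background, not taken from the Stanford paper) sets the floor for the reaction:

```latex
% Overall water-splitting reaction and its thermodynamic minimum cell voltage
% at room temperature (standard textbook values, quoted for context)
2\,\mathrm{H_2O(l)} \;\longrightarrow\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)},
\qquad E^{\circ}_{\mathrm{cell}} \approx 1.23\ \mathrm{V}
```

Since no cell can split water below about 1.23 V, a single 1.5 V battery leaves only roughly 0.27 V of combined overpotential for the two electrodes, which is why catalysts this active are normally made from precious metals.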

In addition to producing hydrogen, the novel water splitter could be used to make chlorine gas and sodium hydroxide, an important industrial chemical, according to Dai. He and his colleagues describe the new device in a study published in the Aug. 22 issue of the journal Nature Communications.

The promise of hydrogen

Automakers have long considered the hydrogen fuel cell a promising alternative to the gasoline engine.  Fuel cell technology is essentially water splitting in reverse. A fuel cell combines stored hydrogen gas with oxygen from the air to produce electricity, which powers the car. The only byproduct is water – unlike gasoline combustion, which emits carbon dioxide, a greenhouse gas.

Earlier this year, Hyundai began leasing fuel cell vehicles in Southern California. Toyota and Honda will begin selling fuel cell cars in 2015. Most of these vehicles will run on fuel manufactured at large industrial plants that produce hydrogen by combining very hot steam and natural gas, an energy-intensive process that releases carbon dioxide as a byproduct.

Splitting water to make hydrogen requires no fossil fuels and emits no greenhouse gases. But scientists have yet to develop an affordable, active water splitter with catalysts capable of working at industrial scales.

“It’s been a constant pursuit for decades to make low-cost electrocatalysts with high activity and long durability,” Dai said. “When we found out that a nickel-based catalyst is as effective as platinum, it came as a complete surprise.”

Saving energy and money

The discovery was made by Stanford graduate student Ming Gong, co-lead author of the study. “Ming discovered a nickel-metal/nickel-oxide structure that turns out to be more active than pure nickel metal or pure nickel oxide alone,” Dai said.  “This novel structure favors hydrogen electrocatalysis, but we still don’t fully understand the science behind it.”

The nickel/nickel-oxide catalyst significantly lowers the voltage required to split water, which could eventually save hydrogen producers billions of dollars in electricity costs, according to Gong. His next goal is to improve the durability of the device.
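
Why a lower operating voltage matters economically: the electrical energy needed per kilogram of hydrogen scales linearly with the cell voltage. The comparison voltages below are illustrative assumptions, not figures from the Stanford study.

```python
# Electrical energy required per kilogram of hydrogen at a given cell voltage.
# Illustrative calculation; the comparison voltages are assumptions.
F = 96485.0        # Faraday constant, C/mol
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def kwh_per_kg_h2(cell_voltage):
    """Electrical energy (kWh) to make 1 kg of H2 by electrolysis at this voltage."""
    joules_per_mol = 2 * F * cell_voltage   # two electrons per H2 molecule
    return joules_per_mol / M_H2 / 3.6e6    # J/kg -> kWh/kg

for v in (1.5, 1.8, 2.0):
    print(f"{v:.1f} V -> {kwh_per_kg_h2(v):.1f} kWh per kg of H2")

# 1.5 V -> ~39.9 kWh per kg of H2
# 1.8 V -> ~47.9 kWh per kg of H2
# 2.0 V -> ~53.2 kWh per kg of H2
```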

“The electrodes are fairly stable, but they do slowly decay over time,” he said. “The current device would probably run for days, but weeks or months would be preferable. That goal is achievable based on my most recent results.”

The researchers also plan to develop a water splitter that runs on electricity produced by solar energy.

“Hydrogen is an ideal fuel for powering vehicles, buildings and storing renewable energy on the grid,” said Dai. “We’re very glad that we were able to make a catalyst that’s very active and low cost. This shows that through nanoscale engineering of materials we can really make a difference in how we make fuels and consume energy.”

Other authors of the study are Wu Zhou, Oak Ridge National Laboratory (co-lead author); Mingyun Guan, Meng-Chang Lin, Bo Zhang, Di-Yan Wang and Jiang Yang, Stanford; Mon-Che Tsai and Bing-Joe Wang, National Taiwan University of Science and Technology; Jiang Zhou and Yongfeng Hu, Canadian Light Source Inc.; and Stephen J. Pennycook, University of Tennessee.

Principal funding was provided by the Global Climate and Energy Project (GCEP) and the Precourt Institute for Energy at Stanford and by the U.S. Department of Energy.

Mark Shwartz writes about energy technology at the Precourt Institute for Energy at Stanford University.

On-chip Topological Light

FIRST MEASUREMENTS OF TRANSMISSION AND DELAY

Topological transport of light is the photonic analog of topological electron flow in certain semiconductors. In the electron case, the current flows around the edge of the material but not through the bulk. It is “topological” in that even if electrons encounter impurities in the material the electrons will continue to flow without losing energy.

Light enters a two-dimensional ring-resonator array from the lower left and exits at the lower right. Light that follows the edge of the array (blue) does not suffer energy loss and exits after a consistent amount of delay. Light that travels into the interior of the array (green) suffers energy loss. Credit: Sean Kelley/JQI

In the photonic equivalent, light flows not through and around a regular material but in a meta-material consisting of an array of tiny glass loops fabricated on a silicon substrate. If the loops are engineered just right, the topological feature appears: light sent into the array easily circulates around the edge with very little energy loss (even if some of the loops aren’t working) while light taking an interior route suffers loss.

Mohammad Hafezi and his colleagues at the Joint Quantum Institute have published a series of papers on the subject of topological light. The first pointed out the potential application of robustness in delay lines and conceived a scheme to implement quantum Hall models in arrays of photonic loops. In photonics, signals sometimes need to be delayed, usually by sending light into a kilometers-long loop of optical fiber. In an on-chip scheme, such delays could be accomplished on the microscale; this is in addition to the energy-loss reduction made possible by topological robustness.

The 2D array consists of resonator rings, where light spends more time, and link rings, where light spends little time. Undergoing a circuit around a complete unit cell of rings, light will return to the starting point with a slight change in phase, phi. Credit: Sean Kelley/JQI

The next paper reported on results from an actual experiment. Since the tiny loops aren’t perfect, they do allow a bit of light to escape vertically out of the plane of the array. This faint light allowed the JQI experimenters to image the course of light. This confirmed the expectation that light persists when it goes around the edge of the array but suffers energy loss when traveling through the bulk.

The third paper, appearing now in Physical Review Letters and highlighted in a Viewpoint, delivers detailed measurements of the transmission (how much energy is lost) and delay for edge-state light and for bulk-route light. The paper is notable enough to have received an “editor’s suggestion” designation. “Apart from the potential photonic-chip applications of this scheme,” said Hafezi, “this photonic platform could allow us to investigate fundamental quantum transport properties.”

Another measured quality is consistency. Sunil Mittal, a graduate student at the University of Maryland and first author on the paper, points out that microchip manufacturing is not a perfect process. “Irregularities in integrated photonic device fabrication usually result in device-to-device performance variations,” he said. And this usually undercuts the microchip performance. But with topological protection (photons traveling at the edge of the array are practically invulnerable to impurities) at work, consistency is greatly strengthened.

Indeed, the authors, reporting trials with numerous array samples, reveal that for light taking the bulk (interior) route in the array, the delay and transmission of light can vary a lot, whereas for light making the edge route, the amount of energy loss is regularly less and the time delay for signals more consistent. Robustness and consistency are vital if you want to integrate such arrays into photonic schemes for processing quantum information.

How does the topological property emerge at the microscopic level? First, look at the electron topological behavior, which is an offshoot of the quantum Hall effect. Electrons, under the influence of an applied magnetic field, can execute tiny cyclonic orbits. In some materials, called topological insulators, no external magnetic field is needed since the necessary field is supplied by spin-orbit interactions — that is, the coupling between the orbital motion of electrons and their spins. In these materials the conduction regime is topological: the material is conductive around the edge but is an insulator in the interior.

And now for the photonic equivalent. Light waves do not usually feel magnetic fields, and when they do, the effect is very weak. In the photonic case, the equivalent of a magnetic field is supplied by a subtle phase shift imposed on the light as it circulates around the loops. The loops in the array are actually of two kinds: resonator loops, designed to exactly accommodate light at a certain frequency, allowing the waves to circle the loop many times; and link loops, which are not exactly suited to the waves and are designed chiefly to pass the light on to the neighboring resonator loop.

Light that circulates around one unit cell of the loop array will undergo a slight phase change, an amount signified by the letter phi. That is, the light signal, in coming around the unit cell, re-arrives where it started advanced or retarded just a bit from its original condition. Just this amount of change imparts the topological robustness to the global transmission of the light in the array.
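
For readers familiar with tight-binding models, this kind of synthetic gauge field is often summarized as photons hopping between lattice sites with a direction-dependent phase. The following is a minimal sketch of that standard form, not the specific Hamiltonian of the JQI device:

```latex
% Sketch of an effective photonic hopping model with a synthetic gauge field
% (standard Harper-Hofstadter-type form; device details may differ)
H = -J \sum_{x,y} \left( \hat{a}^{\dagger}_{x+1,y}\,\hat{a}_{x,y}\,
      e^{-i y \phi} + \hat{a}^{\dagger}_{x,y+1}\,\hat{a}_{x,y} + \mathrm{h.c.} \right)
```

A photon hopping around one unit cell accumulates the net phase phi, much as an electron in a magnetic field accumulates an Aharonov-Bohm phase, and it is this phase that opens up the protected edge channels.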

In summary, on-chip light delay with robust, consistent, low-loss transport of light has now been demonstrated. The transport of light is tunable over a range of frequencies, and the chip can be manufactured using standard microfabrication techniques.


Source: Joint Quantum Institute

The power of salt

MIT study investigates power generation from the meeting of river water and seawater.

By Jennifer Chu


Where the river meets the sea, there is the potential to harness a significant amount of renewable energy, according to a team of mechanical engineers at MIT.

The researchers evaluated an emerging method of power generation called pressure retarded osmosis (PRO), in which two streams of different salinity are mixed to produce energy. In principle, a PRO system would take in river water and seawater on either side of a semi-permeable membrane. Through osmosis, water from the less-salty stream would cross the membrane to a pre-pressurized saltier side, creating a flow that can be sent through a turbine to recover power.
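
As a rough guide to the quantities involved, the idealized membrane-level power density used throughout the PRO literature (a general relation, not the MIT system-scale model described below) is:

```latex
% Idealized PRO membrane power density: water permeability A, osmotic pressure
% difference \Delta\pi, applied hydraulic pressure \Delta P (general relation,
% not the MIT model)
W = A\,(\Delta\pi - \Delta P)\,\Delta P,
\qquad W_{\max} = \frac{A\,\Delta\pi^{2}}{4}\ \text{ at }\ \Delta P = \frac{\Delta\pi}{2}
```

Because the achievable power grows with the square of the osmotic pressure difference, pairing streams with a large salinity contrast pays off strongly, which is consistent with the brine-and-wastewater comparison discussed below.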

The MIT team has now developed a model to evaluate the performance and optimal dimensions of large PRO systems. In general, the researchers found that the larger a system’s membrane, the more power can be produced — but only up to a point. Interestingly, 95 percent of a system’s maximum power output can be generated using only half or less of the maximum membrane area.

Leonardo Banchik, a graduate student in MIT’s Department of Mechanical Engineering, says reducing the size of the membrane needed to generate power would, in turn, lower much of the upfront cost of building a PRO plant.

“People have been trying to figure out whether these systems would be viable at the intersection between the river and the sea,” Banchik says. “You can save money if you identify the membrane area beyond which there are rapidly diminishing returns.”

Banchik and his colleagues were also able to estimate the maximum amount of power produced, given the salt concentrations of two streams: The greater the ratio of salinities, the more power can be generated. For example, they found that a mix of brine, a byproduct of desalination, and treated wastewater can produce twice as much power as a combination of seawater and river water.

Based on his calculations, Banchik says that a PRO system could potentially power a coastal wastewater-treatment plant by taking in seawater and combining it with treated wastewater to produce renewable energy.

“Here in Boston Harbor, at the Deer Island Waste Water Treatment Plant, where wastewater meets the sea … PRO could theoretically supply all of the power required for treatment,” Banchik says.

He and John Lienhard, the Abdul Latif Jameel Professor of Water and Food at MIT, along with Mostafa Sharqawy of King Fahd University of Petroleum and Minerals in Saudi Arabia, report their results in the Journal of Membrane Science.

Finding equilibrium in nature

The team based its model on a simplified PRO system in which a large semi-permeable membrane divides a long rectangular tank. One side of the tank takes in pressurized salty seawater, while the other side takes in river water or wastewater. Through osmosis, the membrane lets through water, but not salt. As a result, freshwater is drawn through the membrane to balance the saltier side.

“Nature wants to find an equilibrium between these two streams,” Banchik explains.

As the freshwater enters the saltier side, it becomes pressurized while increasing the flow rate of the stream on the salty side of the membrane. This pressurized mixture exits the tank, and a turbine recovers energy from this flow.

Banchik says that while others have modeled the power potential of PRO systems, these models are mostly valid for laboratory-scale systems that incorporate “coupon-sized” membranes. Such models assume that the salinity and flow of the incoming streams are constant along the membrane. Given such stable conditions, these models predict a linear relationship: the bigger the membrane, the more power generated.

But in flowing through a system as large as a power plant, Banchik says, the streams’ salinity and flux will naturally change. To account for this variability, he and his colleagues developed a model based on an analogy with heat exchangers.

“Just as the radiator in your car exchanges heat between the air and a coolant, this system exchanges mass, or water, across a membrane,” Banchik says. “There’s a method in literature used for sizing heat exchangers, and we borrowed from that idea.”

The researchers came up with a model with which they could analyze a wide range of values for membrane size, permeability, and flow rate. With this model, they observed a nonlinear relationship between power and membrane size for large systems: as the area of a membrane increases, the power generated rises up to a point, after which it gradually levels off. While a system may be able to produce the maximum amount of power at a certain membrane size, it could also produce 95 percent of that power with a membrane half as large.
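
The diminishing-returns behavior can be illustrated with a deliberately simplified, parallel-flow toy model: as freshwater permeates along the module, the salinity difference driving it shrinks, so each additional increment of membrane area contributes less power. The sketch below uses normalized units and made-up stream properties; it is not the MIT model and only reproduces the qualitative saturation.

```python
# Toy parallel-flow PRO module: cumulative power saturates with membrane area
# because the salinity difference (the driving force) shrinks along the module.
import numpy as np

n_slices = 20000
area_total = 4.0                    # normalized module area, long enough for the
                                    # streams to approach osmotic equilibrium
dA = area_total / n_slices

q_draw, q_feed = 1.0, 1.0           # normalized flow rates (salty draw / fresh feed)
salt_draw, salt_feed = 1.0, 0.02    # salt carried by each stream (normalized)

cumulative_power = np.zeros(n_slices)
power = 0.0
for i in range(n_slices):
    dpi = salt_draw / q_draw - salt_feed / q_feed  # driving force ~ concentration gap
    flux = 0.5 * dpi * dA           # water flux when operating at dP = dpi / 2
    power += flux * dpi / 2.0       # ideal power = flux * dP
    q_draw += flux                  # permeate joins the pressurized draw stream
    q_feed -= flux                  # and leaves the feed stream
    cumulative_power[i] = power

areas = np.linspace(dA, area_total, n_slices)
idx_95 = np.searchsorted(cumulative_power, 0.95 * power)
print(f"95% of the full-module power comes from the first "
      f"{areas[idx_95] / area_total:.0%} of the membrane area")
```

By construction, the remaining membrane area in this toy module contributes only the last five percent of the power; the MIT model quantifies the same trade-off for realistic membranes, salinities, and flow configurations.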

Still, if PRO systems were to supply power to Boston’s Deer Island treatment plant, the size of a plant’s membrane would be substantial — at least 2.5 million square meters, which Banchik notes is the membrane area of the largest operating reverse osmosis plant in the world.

“Even though this seems like a lot, clever people are figuring out how to pack a lot of membrane into a small volume,” Banchik says. “For example, some configurations are spiral-wound, with flat sheets rolled up like paper towels around a central tube. It’s still an active area of research to figure out what the modules would look like.”

“Say we’re in a place that could really use desalinated water, like California, which is going through a terrible drought,” Banchik adds. “They’re building a desalination plant that would sit right at the sea, which would take in seawater and give Californians water to drink. It would also produce a saltier brine, which you could mix with wastewater to produce power. More research needs to be done to see whether it can be economically viable, but the science is sound.”

This work was funded by the King Fahd University of Petroleum and Minerals through the Center for Clean Water and Clean Energy and by the National Science Foundation.

Source: MIT News Office