Monthly Archives: August 2014

Global water scarcity will intensify the privatization of water resources

Local collectives should control the world’s shrinking water supplies rather than multinational companies, according to University of Leicester expert Dr Georgios Patsiaouras.

 


The world’s water reserves will increasingly fail to meet our needs over the coming decades, leaving a third of the global population without adequate drinking water by 2025, according to University of Leicester experts Dr Georgios Patsiaouras, Professor Michael Saren and Professor James Fitchett.

Local communities should be given control over the water in their area in order to stop private companies profiteering from shrinking global water supplies, says Dr Patsiaouras, Lecturer in Marketing and Consumption at the University of Leicester’s School of Management.

In a new research paper published ahead of World Water Week 2014, Dr Patsiaouras argues that increased competition for water from both the public and industry makes it likely that a privatised, market-based water system, controlled by private companies, will develop.

He predicts that nations will begin to sell key water sources – such as lakes, rivers and groundwater reserves – to companies. This will mean the supply of water around the world will soon resemble the market for oil and minerals.

Dr Patsiaouras said: “Increased competition between nations and institutions for access to clean water will create a global marketplace for buying, selling and trading water resources.”

“There will be an increase in phenomena such as water transfer, water banking and mega-engineering desalination plants emerging as alternative and competing means of managing water supply.”

This new water economy will only work in the favour of countries and communities that can afford to bid the highest amounts for water – while poorer and drought-stricken countries might see water supplies becoming even more scarce, Dr Patsiaouras warns.

To avoid this, he argues that control over water should be localised, with communities taking control over lakes and other water sources in their area, giving priority to public health over profit.

Dr Patsiaouras says the potential for community-based and cooperative alternatives for handling water supply needs to be closely examined.

He said: “Community-based water management offers an alternative solution to market-based and state-based failures.

“Although the majority of governments around the world have chosen hybrid water supply delivery models – where water supplies are controlled by both the state and private companies – the role and importance of culture and community in sustainable market development has been woefully under-examined.”

Transforming water into a global commodity is a dangerous move since water is essential for human survival, he adds.

“Cooperative alternatives have offered and will continue to offer viable solutions for the Global South, especially in light of the fact that conventional delivery systems have tended to favour the interests of wealthy citizens and affluent neighbourhoods,” he said.

Source: University of Leicester

Neuroscientists reverse memories’ emotional associations

MIT study also identifies the brain circuit that links feelings to memories.

By Anne Trafton

Most memories have some kind of emotion associated with them: Recalling the week you just spent at the beach probably makes you feel happy, while reflecting on being bullied provokes more negative feelings.

A new study from MIT neuroscientists reveals the brain circuit that controls how memories become linked with positive or negative emotions. Furthermore, the researchers found that they could reverse the emotional association of specific memories by manipulating brain cells with optogenetics — a technique that uses light to control neuron activity.

The findings, described in the Aug. 27 issue of Nature, demonstrated that a neuronal circuit connecting the hippocampus and the amygdala plays a critical role in associating emotion with memory. This circuit could offer a target for new drugs to help treat conditions such as post-traumatic stress disorder, the researchers say.

“In the future, one may be able to develop methods that help people to remember positive memories more strongly than negative ones,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory, and senior author of the paper.

This image depicts the injection sites and the expression of the viral constructs in the two areas of the brain studied: the Dentate Gyrus of the hippocampus (middle) and the Basolateral Amygdala (bottom corners).
Credits: Image courtesy of the researchers/MIT

The paper’s lead authors are Roger Redondo, a Howard Hughes Medical Institute postdoc at MIT, and Joshua Kim, a graduate student in MIT’s Department of Biology.

Shifting memories

Memories are made of many elements, which are stored in different parts of the brain. A memory’s context, including information about the location where the event took place, is stored in cells of the hippocampus, while emotions linked to that memory are found in the amygdala.

Previous research has shown that many aspects of memory, including emotional associations, are malleable. Psychotherapists have taken advantage of this to help patients suffering from depression and post-traumatic stress disorder, but the neural circuitry underlying such malleability is not known.

In this study, the researchers set out to explore that malleability with an experimental technique they recently devised that allows them to tag neurons that encode a specific memory, or engram. To achieve this, they label hippocampal cells that are turned on during memory formation with a light-sensitive protein called channelrhodopsin. From that point on, any time those cells are activated with light, the mice recall the memory encoded by that group of cells.

Last year, Tonegawa’s lab used this technique to implant, or “incept,” false memories in mice by reactivating engrams while the mice were undergoing a different experience. In the new study, the researchers wanted to investigate how the context of a memory becomes linked to a particular emotion. First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock). In this first set of experiments, the researchers labeled memory cells in a part of the hippocampus called the dentate gyrus.

Two days later, the mice were placed into a large rectangular arena. For three minutes, the researchers recorded which half of the arena the mice naturally preferred. Then, for mice that had received the fear conditioning, the researchers stimulated the labeled cells in the dentate gyrus with light whenever the mice went into the preferred side. The mice soon began avoiding that area, showing that the reactivation of the fear memory had been successful.

The reward memory could also be reactivated: For mice that were reward-conditioned, the researchers stimulated them with light whenever they went into the less-preferred side, and they soon began to spend more time there, recalling the pleasant memory.

A couple of days later, the researchers tried to reverse the mice’s emotional responses. For male mice that had originally received the fear conditioning, they activated the memory cells involved in the fear memory with light for 12 minutes while the mice spent time with female mice. For mice that had initially received the reward conditioning, memory cells were activated while they received mild electric shocks.

Next, the researchers again put the mice in the large two-zone arena. This time, the mice that had originally been conditioned with fear and had avoided the side of the chamber where their hippocampal cells were activated by the laser now began to spend more time in that side when their hippocampal cells were activated, showing that a pleasant association had replaced the fearful one. This reversal also took place in mice that went from reward to fear conditioning.

Altered connections

The researchers then performed the same set of experiments but labeled memory cells in the basolateral amygdala, a region involved in processing emotions. This time, they could not induce a switch by reactivating those cells — the mice continued to behave as they had been conditioned when the memory cells were first labeled.

This suggests that emotional associations, also called valences, are encoded somewhere in the neural circuitry that connects the dentate gyrus to the amygdala, the researchers say. A fearful experience strengthens the connections between the hippocampal engram and fear-encoding cells in the amygdala, but that connection can be weakened later on as new connections are formed between the hippocampus and amygdala cells that encode positive associations.

“That plasticity of the connection between the hippocampus and the amygdala plays a crucial role in the switching of the valence of the memory,” Tonegawa says.

These results indicate that while dentate gyrus cells are neutral with respect to emotion, individual amygdala cells are precommitted to encode fear or reward memory. The researchers are now trying to discover molecular signatures of these two types of amygdala cells. They are also investigating whether reactivating pleasant memories has any effect on depression, in hopes of identifying new targets for drugs to treat depression and post-traumatic stress disorder.

David Anderson, a professor of biology at the California Institute of Technology, says the study makes an important contribution to neuroscientists’ fundamental understanding of the brain and also has potential implications for treating mental illness.

“This is a tour de force of modern molecular-biology-based methods for analyzing processes, such as learning and memory, at the neural-circuitry level. It’s one of the most sophisticated studies of this type that I’ve seen,” he says.

The research was funded by the RIKEN Brain Science Institute, Howard Hughes Medical Institute, and the JPB Foundation.

 

 


Eye implant developed at Stanford could lead to better glaucoma treatments

Lowering internal eye pressure is currently the only way to treat glaucoma. A tiny eye implant developed by Stephen Quake’s lab could pair with a smartphone to improve the way doctors measure and lower a patient’s eye pressure.

BY BJORN CAREY


For the 2.2 million Americans battling glaucoma, the main course of action for staving off blindness involves weekly visits to eye specialists who monitor – and control – increasing pressure within the eye.

Now, a tiny eye implant developed at Stanford could enable patients to take more frequent readings from the comfort of home. Daily or hourly measurements of eye pressure could help doctors tailor more effective treatment plans.

Intraocular pressure (IOP) is the main risk factor associated with glaucoma, which is characterized by a continuous loss of specific retinal cells and degradation of the optic nerve fibers. The mechanism linking IOP and the damage is not clear, but in most patients IOP levels correlate with the rate of damage.

Reducing IOP to normal or below-normal levels is currently the only treatment available for glaucoma. This requires repeated measurements of the patient’s IOP until the levels stabilize. The trick with this, though, is that the readings do not always tell the truth.

Like blood pressure, IOP can vary day-to-day and hour-to-hour; it can be affected by other medications, body posture or even a neck-tie that is knotted too tightly. If patients are tested on a low IOP day, the test can give a false impression of the severity of the disease and affect their treatment in a way that can ultimately lead to worse vision.

The new implant was developed as part of a collaboration between Stephen Quake, a professor of bioengineering and of applied physics at Stanford, and ophthalmologist Yossi Mandel of Bar-Ilan University in Israel. It consists of a small tube – one end is open to the fluids that fill the eye; the other end is capped with a small bulb filled with gas. As the IOP increases, intraocular fluid is pushed into the tube; the gas pushes back against this flow.

As IOP fluctuates, the meniscus – the barrier between the fluid and the gas – moves back and forth in the tube. Patients could use a custom smartphone app or a wearable technology, such as Google Glass, to snap a photo of the instrument at any time, providing a critical wealth of data that could steer treatment. For instance, in one previous study, researchers found that 24-hour IOP monitoring resulted in a change in treatment in up to 80 percent of patients.
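The implant is, in effect, a tiny gas-spring manometer: the trapped gas compresses until it balances the pressure of the intraocular fluid, so the meniscus position encodes IOP. As a rough illustration of the readout step (a Boyle's-law sketch with hypothetical dimensions, not the published device's calibration):

    # Rough Boyle's-law sketch of mapping a meniscus position to intraocular
    # pressure (IOP). All dimensions and pressures are hypothetical
    # illustrations, not the calibration of the actual implant.

    ATM_MMHG = 760.0  # ambient pressure, mmHg

    def iop_from_meniscus(x_mm, tube_len_mm=2.0, p0_mmhg=ATM_MMHG):
        """Estimate IOP (mmHg above ambient) from meniscus displacement x_mm.

        Assumes an ideal gas sealed in a tube of length tube_len_mm at ambient
        pressure; pushing the meniscus in by x_mm compresses the gas
        isothermally (Boyle's law: p0 * L = p * (L - x)).
        """
        if not 0 <= x_mm < tube_len_mm:
            raise ValueError("meniscus position out of range")
        p_gas = p0_mmhg * tube_len_mm / (tube_len_mm - x_mm)
        return p_gas - p0_mmhg  # gauge pressure = IOP above ambient

    # Example: a 0.05 mm displacement in a 2 mm tube corresponds to ~19.5 mmHg,
    # i.e. within the range an ophthalmologist would care about.
    print(round(iop_from_meniscus(0.05), 1))

In the actual device, the photo taken with the smartphone app would supply the meniscus position; the point of the sketch is only that a single geometric measurement suffices to recover the pressure.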

The implant is currently designed to fit inside a standard intraocular lens prosthetic, which many glaucoma patients receive when they have cataract surgery, but the scientists are investigating ways to implant it on its own.

“For me, the charm of this is the simplicity of the device,” Quake said. “Glaucoma is a substantial issue in human health. It’s critical to catch things before they go off the rails, because once you go off, you can go blind. If patients could monitor themselves frequently, you might see an improvement in treatments.”

Remarkably, the implant won’t distort vision. When subjected to the vision test used by the U.S. Air Force, the device caused nearly no optical distortion, the researchers said.

Before they can test the device in humans, however, the scientists say they need to re-engineer it with materials that will extend its lifespan inside the human eye. Because of the implant’s simple design, they expect this will be relatively achievable.

“I believe that only a few years are needed before clinical trials can be conducted,” said Mandel, head of the Ophthalmic Science and Engineering Laboratory at Bar-Ilan University, who collaborated on developing the implant.

The work, published in the current issue of Nature Medicine, was co-authored by Ismail E. Araci, a postdoctoral scholar in Quake’s lab, and Baolong Su, a technician in Quake’s lab and currently an undergraduate student at the University of California, Los Angeles.

Source: Stanford News


Stanford scientists develop water splitter that runs on ordinary AAA battery

Hongjie Dai and colleagues have developed a cheap, emissions-free device that uses a 1.5-volt battery to split water into hydrogen and oxygen. The hydrogen gas could be used to power fuel cells in zero-emissions vehicles.

BY MARK SHWARTZ


In 2015, American consumers will finally be able to purchase fuel cell cars from Toyota and other manufacturers. Although touted as zero-emissions vehicles, most of the cars will run on hydrogen made from natural gas, a fossil fuel that contributes to global warming.

Stanford graduate student Ming Gong, left, and Professor Hongjie Dai have developed a low-cost electrolytic device that splits water into hydrogen and oxygen at room temperature. The device is powered by an ordinary AAA battery. (Mark Shwartz / Stanford Precourt Institute for Energy)

Now scientists at Stanford University have developed a low-cost, emissions-free device that uses an ordinary AAA battery to produce hydrogen by water electrolysis.  The battery sends an electric current through two electrodes that split liquid water into hydrogen and oxygen gas. Unlike other water splitters that use precious-metal catalysts, the electrodes in the Stanford device are made of inexpensive and abundant nickel and iron.

“Using nickel and iron, which are cheap materials, we were able to make the electrocatalysts active enough to split water at room temperature with a single 1.5-volt battery,” said Hongjie Dai, a professor of chemistry at Stanford. “This is the first time anyone has used non-precious metal catalysts to split water at a voltage that low. It’s quite remarkable, because normally you need expensive metals, like platinum or iridium, to achieve that voltage.”
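For context, a bit of textbook electrochemistry (standard values, not figures from the paper): splitting water requires a minimum of about 1.23 volts at room temperature, so driving the reaction from a single 1.5-volt cell leaves only roughly 0.27 volts to cover the combined overpotentials of the hydrogen- and oxygen-evolution reactions plus resistive losses, a margin that has normally demanded precious-metal catalysts.

    # Back-of-envelope headroom estimate using standard textbook values,
    # not numbers taken from the Nature Communications paper.
    E_THERMO = 1.23   # V, reversible potential for water splitting at 25 C
    V_BATTERY = 1.50  # V, nominal voltage of an alkaline AAA cell

    headroom = V_BATTERY - E_THERMO
    print(f"Budget for overpotentials and ohmic losses: {headroom:.2f} V")  # ~0.27 V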

In addition to producing hydrogen, the novel water splitter could be used to make chlorine gas and sodium hydroxide, an important industrial chemical, according to Dai. He and his colleagues describe the new device in a study published in the Aug. 22 issue of the journal Nature Communications.

The promise of hydrogen

Automakers have long considered the hydrogen fuel cell a promising alternative to the gasoline engine.  Fuel cell technology is essentially water splitting in reverse. A fuel cell combines stored hydrogen gas with oxygen from the air to produce electricity, which powers the car. The only byproduct is water – unlike gasoline combustion, which emits carbon dioxide, a greenhouse gas.

Earlier this year, Hyundai began leasing fuel cell vehicles in Southern California. Toyota and Honda will begin selling fuel cell cars in 2015. Most of these vehicles will run on fuel manufactured at large industrial plants that produce hydrogen by combining very hot steam and natural gas, an energy-intensive process that releases carbon dioxide as a byproduct.

Splitting water to make hydrogen requires no fossil fuels and emits no greenhouse gases. But scientists have yet to develop an affordable, active water splitter with catalysts capable of working at industrial scales.

“It’s been a constant pursuit for decades to make low-cost electrocatalysts with high activity and long durability,” Dai said. “When we found out that a nickel-based catalyst is as effective as platinum, it came as a complete surprise.”

Saving energy and money

The discovery was made by Stanford graduate student Ming Gong, co-lead author of the study. “Ming discovered a nickel-metal/nickel-oxide structure that turns out to be more active than pure nickel metal or pure nickel oxide alone,” Dai said.  “This novel structure favors hydrogen electrocatalysis, but we still don’t fully understand the science behind it.”

The nickel/nickel-oxide catalyst significantly lowers the voltage required to split water, which could eventually save hydrogen producers billions of dollars in electricity costs, according to Gong. His next goal is to improve the durability of the device.

“The electrodes are fairly stable, but they do slowly decay over time,” he said. “The current device would probably run for days, but weeks or months would be preferable. That goal is achievable based on my most recent results.”

The researchers also plan to develop a water splitter that runs on electricity produced by solar energy.

“Hydrogen is an ideal fuel for powering vehicles, buildings and storing renewable energy on the grid,” said Dai. “We’re very glad that we were able to make a catalyst that’s very active and low cost. This shows that through nanoscale engineering of materials we can really make a difference in how we make fuels and consume energy.”

Other authors of the study are Wu Zhou, Oak Ridge National Laboratory (co-lead author); Mingyun Guan, Meng-Chang Lin, Bo Zhang, Di-Yan Wang and Jiang Yang, Stanford; Mon-Che Tsai and Bing-Joe Wang, National Taiwan University of Science and Technology; Jiang Zhou and Yongfeng Hu, Canadian Light Source Inc.; and Stephen J. Pennycook, University of Tennessee.

Principal funding was provided by the Global Climate and Energy Project (GCEP) and the Precourt Institute for Energy at Stanford and by the U.S. Department of Energy.

Mark Shwartz writes about energy technology at the Precourt Institute for Energy at Stanford University.


On-chip Topological Light

FIRST MEASUREMENTS OF TRANSMISSION AND DELAY

Topological transport of light is the photonic analog of topological electron flow in certain semiconductors. In the electron case, the current flows around the edge of the material but not through the bulk. It is “topological” in that even if electrons encounter impurities in the material the electrons will continue to flow without losing energy.

Light enters a two-dimensional ring-resonator array from the lower left and exits at the lower right. Light that follows the edge of the array (blue) does not suffer energy loss and exits after a consistent amount of delay. Light that travels into the interior of the array (green) suffers energy loss.
Credit: Sean Kelley/JQI

In the photonic equivalent, light flows not through and around a regular material but in a meta-material consisting of an array of tiny glass loops fabricated on a silicon substrate. If the loops are engineered just right, the topological feature appears: light sent into the array easily circulates around the edge with very little energy loss (even if some of the loops aren’t working) while light taking an interior route suffers loss.

Mohammad Hafezi and his colleagues at the Joint Quantum Institute have published a series of papers on the subject of topological light. The first pointed out the potential application of robustness in delay lines and conceived a scheme to implement quantum Hall models in arrays of photonic loops. In photonics, signals sometimes need to be delayed, usually by sending light into a kilometers-long loop of optical fiber. In an on-chip scheme, such delays could be accomplished on the microscale; this is in addition to the energy-loss reduction made possible by topological robustness.

The 2D array consists of resonator rings, where light spends more time, and link rings, where light spends little time. Undergoing a circuit around a complete unit cell of rings, light will return to the starting point with a slight change in phase, phi.
Credit: Sean Kelley/JQI

The next paper reported on results from an actual experiment. Since the tiny loops aren’t perfect, they do allow a bit of light to escape vertically out of the plane of the array. This faint light allowed the JQI experimenters to image the course of light. The images confirmed that light persists when it goes around the edge of the array but suffers energy loss when traveling through the bulk.

The third paper, appearing now in Physical Review Letters, and highlighted in a Viewpoint, delivers detailed measurements of the transmission (how much energy is lost) and delay for edge-state light and for bulk-route light. The paper is notable enough to have received an “editor’s suggestion” designation. “Apart from the potential photonic-chip applications of this scheme,” said Hafezi, “this photonic platform could allow us to investigate fundamental quantum transport properties.”

Another measured quality is consistency. Sunil Mittal, a graduate student at the University of Maryland and first author on the paper, points out that microchip manufacturing is not a perfect process. “Irregularities in integrated photonic device fabrication usually result in device-to-device performance variations,” he said. And this usually undercuts the microchip performance. But with topological protection (photons traveling at the edge of the array are practically invulnerable to impurities) at work, consistency is greatly strengthened.

Indeed, the authors, reporting trials with numerous array samples, reveal that for light taking the bulk (interior) route in the array, the delay and transmission of light can vary a lot, whereas for light making the edge route, the amount of energy loss is regularly less and the time delay for signals more consistent. Robustness and consistency are vital if you want to integrate such arrays into photonic schemes for processing quantum information.

How does the topological property emerge at the microscopic level? First, look at the electron topological behavior, which is an offshoot of the quantum Hall effect. Electrons, under the influence of an applied magnetic field, can execute tiny cyclotron orbits. In some materials, called topological insulators, no external magnetic field is needed since the necessary field is supplied by spin-orbit interactions — that is, the coupling between the orbital motion of electrons and their spins. In these materials the conduction regime is topological: the material is conductive around the edge but is an insulator in the interior.

And now for the photonic equivalent. Light waves do not usually respond to magnetic fields, and when they do, the effect is very weak. In the photonic case, the equivalent of a magnetic field is supplied by a subtle phase shift imposed on the light as it circulates around the loops. The loops in the array are of two kinds: resonator loops, designed to exactly accommodate light at a certain frequency so that the waves circle the loop many times, and link loops, which are not exactly suited to the waves and are designed chiefly to pass the light on to the neighboring resonator loop.

Light that circulates around one unit cell of the loop array will undergo a slight phase change, an amount signified by the letter phi. That is, the light signal, in coming around the unit cell, re-arrives where it started advanced or retarded just a bit from its original condition. Just this amount of change imparts the topological robustness to the global transmission of the light in the array.
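One way to see why such a phase acts like a magnetic field is the textbook Harper-Hofstadter tight-binding picture, in which hopping between lattice sites picks up a position-dependent phase and the total phase accumulated around a plaquette plays the role of magnetic flux. The sketch below is that generic analogue with illustrative parameters, not the authors' coupled-ring transfer-matrix model, but it produces the same qualitative feature: states in the bulk band gaps that live on the edge of the array.

    # Minimal Harper-Hofstadter sketch: a synthetic flux phi per plaquette on an
    # N x N lattice. Illustrative only; the real device is an array of coupled
    # ring resonators rather than a literal tight-binding lattice.
    import numpy as np

    N = 10                    # sites per side (hypothetical size)
    phi = 2 * np.pi * 0.15    # synthetic flux per unit cell (illustrative value)
    J = 1.0                   # coupling strength between neighboring sites

    def idx(x, y):
        return x * N + y

    H = np.zeros((N * N, N * N), dtype=complex)
    for x in range(N):
        for y in range(N):
            if x + 1 < N:  # hopping along x picks up a y-dependent phase
                H[idx(x + 1, y), idx(x, y)] = -J * np.exp(-1j * phi * y)
                H[idx(x, y), idx(x + 1, y)] = -J * np.exp(1j * phi * y)
            if y + 1 < N:  # hopping along y carries no phase in this gauge
                H[idx(x, y + 1), idx(x, y)] = -J
                H[idx(x, y), idx(x, y + 1)] = -J

    energies, states = np.linalg.eigh(H)
    edge_sites = [idx(x, y) for x in range(N) for y in range(N)
                  if x in (0, N - 1) or y in (0, N - 1)]
    edge_weight = (np.abs(states[edge_sites, :]) ** 2).sum(axis=0)
    # The most edge-localized eigenstates carry most of their weight on the
    # boundary ring of sites; list the five strongest.
    top = np.argsort(edge_weight)[-5:]
    for i in sorted(top):
        print(f"E = {energies[i]:+.3f}   edge weight = {edge_weight[i]:.2f}")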

In summary, on-chip light delay and robust, consistent, low-loss transport of light have now been demonstrated. The transport of light is tunable over a range of frequencies, and the chip can be manufactured using standard microfabrication techniques.

Research contacts: Mohammad Hafezi, Sunil Mittal
Media contact: Phillip F. Schewe, (301) 405-0989


Source: Joint Quantum Institute

In-house or Outsource? A case in favor of developing long-term partnerships.

By Syed Faisal ur Rahman


Audit and management consultancy firms, and related service-oriented businesses, often struggle to manage heavy client workloads at certain times of the year, such as the financial year end or appraisal and hiring season.

This creates a tricky situation for companies with limited budgets. They must either hire more people to handle the workload or push their current workforce to its limit. Hiring for the short term is difficult, since few well-qualified people will accept short-term positions, while hiring new permanent staff to cover short-term demand is costly. The situation is even more challenging in developed markets, where hiring costs are considerably higher than in developing countries.

A much more cost-effective and efficient solution is to find the right outsourcing partners, whether in the same region or elsewhere. Business process outsourcing (BPO) firms in developing markets are capable of providing quality services in IT, data entry, and recruitment, and even in data analysis and market research. In-house product development in a developed market, by contrast, is a major financial and managerial commitment involving substantial costs and risks.

For small tech firms, on the other hand, developing products on a small scale involves limited risk and cost; it is often just a decision to commit to a particular set of technologies for building particular tools. Their drawbacks are a lack of exposure to the corporate sector, limited capacity to scale their products, and a limited market presence. Long-term service-based and business process outsourcing partnerships can address both problems: the high costs facing large business-services firms in developed markets, and the lack of market exposure facing smaller tech firms.

I have worked in both developed and developing markets, in roles spanning management, product development, data analysis, and research. In my view, many companies in both kinds of market fail to achieve their full potential simply because they are not ready to cooperate and their management structures are conservative.

The major hurdles are trust and a lack of understanding of potential partners’ capabilities. The solution is to take smaller steps and gradually move towards bigger products and services. At the same time, managers on both sides need to develop an understanding of their partners’ capacity, talent pool, and management culture.

In my humble view, we can achieve far more than we usually do if we simply try to understand the capabilities of our potential partners and actively look for good long-term partnerships based on trust and mutual respect.

The power of salt

MIT study investigates power generation from the meeting of river water and seawater.

By Jennifer Chu


Where the river meets the sea, there is the potential to harness a significant amount of renewable energy, according to a team of mechanical engineers at MIT.

The researchers evaluated an emerging method of power generation called pressure retarded osmosis (PRO), in which two streams of different salinity are mixed to produce energy. In principle, a PRO system would take in river water and seawater on either side of a semi-permeable membrane. Through osmosis, water from the less-salty stream would cross the membrane to a pre-pressurized saltier side, creating a flow that can be sent through a turbine to recover power.

The MIT team has now developed a model to evaluate the performance and optimal dimensions of large PRO systems. In general, the researchers found that the larger a system’s membrane, the more power can be produced — but only up to a point. Interestingly, 95 percent of a system’s maximum power output can be generated using only half or less of the maximum membrane area.

Leonardo Banchik, a graduate student in MIT’s Department of Mechanical Engineering, says reducing the size of the membrane needed to generate power would, in turn, lower much of the upfront cost of building a PRO plant.

“People have been trying to figure out whether these systems would be viable at the intersection between the river and the sea,” Banchik says. “You can save money if you identify the membrane area beyond which there are rapidly diminishing returns.”

Banchik and his colleagues were also able to estimate the maximum amount of power produced, given the salt concentrations of two streams: The greater the ratio of salinities, the more power can be generated. For example, they found that a mix of brine, a byproduct of desalination, and treated wastewater can produce twice as much power as a combination of seawater and river water.

Based on his calculations, Banchik says that a PRO system could potentially power a coastal wastewater-treatment plant by taking in seawater and combining it with treated wastewater to produce renewable energy.

“Here in Boston Harbor, at the Deer Island Waste Water Treatment Plant, where wastewater meets the sea … PRO could theoretically supply all of the power required for treatment,” Banchik says.

He and John Lienhard, the Abdul Latif Jameel Professor of Water and Food at MIT, along with Mostafa Sharqawy of King Fahd University of Petroleum and Minerals in Saudi Arabia, report their results in the Journal of Membrane Science.

Finding equilibrium in nature

The team based its model on a simplified PRO system in which a large semi-permeable membrane divides a long rectangular tank. One side of the tank takes in pressurized salty seawater, while the other side takes in river water or wastewater. Through osmosis, the membrane lets through water, but not salt. As a result, freshwater is drawn through the membrane to balance the saltier side.

“Nature wants to find an equilibrium between these two streams,” Banchik explains.

As the freshwater enters the saltier side, it becomes pressurized while increasing the flow rate of the stream on the salty side of the membrane. This pressurized mixture exits the tank, and a turbine recovers energy from this flow.

Banchik says that while others have modeled the power potential of PRO systems, these models are mostly valid for laboratory-scale systems that incorporate “coupon-sized” membranes. Such models assume that the salinity and flow of incoming streams are constant along a membrane. Given such stable conditions, these models predict a linear relationship: the bigger the membrane, the more power generated.

But in flowing through a system as large as a power plant, Banchik says, the streams’ salinity and flux will naturally change. To account for this variability, he and his colleagues developed a model based on an analogy with heat exchangers.

“Just as the radiator in your car exchanges heat between the air and a coolant, this system exchanges mass, or water, across a membrane,” Banchik says. “There’s a method in literature used for sizing heat exchangers, and we borrowed from that idea.”

The researchers came up with a model with which they could analyze a wide range of values for membrane size, permeability, and flow rate. With this model, they observed a nonlinear relationship between power and membrane size for large systems. Instead, as the area of a membrane increases, the power generated increases to a point, after which it gradually levels off. While a system may be able to produce the maximum amount of power at a certain membrane size, it could also produce 95 percent of the power with a membrane half as large.
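A rough way to see that saturation is to treat the membrane as a mass exchanger, in the spirit of the heat-exchanger analogy: march along the module in small area steps, let water cross at a rate set by the local osmotic-minus-hydraulic pressure difference, and update the salinities as one stream is diluted and the other concentrated. The toy co-current model below uses illustrative permeability, pressure, and salinity values and an ideal turbine; it is not the MIT team's model, but it reproduces the qualitative behavior of power rising quickly with membrane area and then leveling off.

    # Toy co-current PRO "mass exchanger": march along the membrane in small
    # area steps and watch total power saturate as the area grows.
    # All parameter values are illustrative, not taken from the published model.
    def pro_power(area_m2, n_seg=1000):
        A_perm = 1e-12       # membrane water permeability, m3/(m2*s*Pa)
        dP = 10e5            # applied hydraulic pressure on the salty side, Pa
        pi_coeff = 8e4       # osmotic pressure per unit salinity, Pa/(kg/m3)
        Qd, cd = 1e-3, 35.0  # draw stream (seawater-like): m3/s, kg/m3 salt
        Qf, cf = 1e-3, 0.5   # feed stream (river-like): m3/s, kg/m3 salt
        salt_d, salt_f = Qd * cd, Qf * cf  # salt flow is conserved on each side
        dA = area_m2 / n_seg
        permeate = 0.0
        for _ in range(n_seg):
            d_pi = pi_coeff * (cd - cf)          # osmotic pressure difference
            Jw = max(A_perm * (d_pi - dP), 0.0)  # local water flux, m3/(m2*s)
            dV = Jw * dA                         # water crossing this segment
            Qd += dV
            Qf = max(Qf - dV, 1e-9)
            cd, cf = salt_d / Qd, salt_f / Qf    # dilution / concentration
            permeate += dV
        return dP * permeate  # W, ideal turbine recovering dP from the permeate

    for area in (250, 500, 1000, 2000, 4000):
        print(f"{area:>5d} m2 of membrane -> {pro_power(area):6.0f} W")

Doubling the membrane area beyond the knee of this curve buys very little extra power, which is the effect the researchers quantify for full-scale systems.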

Still, if PRO systems were to supply power to Boston’s Deer Island treatment plant, the size of a plant’s membrane would be substantial — at least 2.5 million square meters, which Banchik notes is the membrane area of the largest operating reverse osmosis plant in the world.

“Even though this seems like a lot, clever people are figuring out how to pack a lot of membrane into a small volume,” Banchik says. “For example, some configurations are spiral-wound, with flat sheets rolled up like paper towels around a central tube. It’s still an active area of research to figure out what the modules would look like.”

“Say we’re in a place that could really use desalinated water, like California, which is going through a terrible drought,” Banchik adds. “They’re building a desalination plant that would sit right at the sea, which would take in seawater and give Californians water to drink. It would also produce a saltier brine, which you could mix with wastewater to produce power. More research needs to be done to see whether it can be economically viable, but the science is sound.”

This work was funded by the King Fahd University of Petroleum and Minerals through the Center for Clean Water and Clean Energy and by the National Science Foundation.

Source: MIT News Office


Delivery by drone

New algorithm lets drones monitor their own health during long package-delivery missions.

By Jennifer Chu


CAMBRIDGE, MA — In the near future, the package that you ordered online may be deposited at your doorstep by a drone: Last December, online retailer Amazon announced plans to explore drone-based delivery, suggesting that fleets of flying robots might serve as autonomous messengers that shuttle packages to customers within 30 minutes of an order.

To ensure safe, timely, and accurate delivery, drones would need to deal with a degree of uncertainty in responding to factors such as high winds, sensor measurement errors, or drops in fuel. But such “what-if” planning typically requires massive computation, which can be difficult to perform on the fly.

Now MIT researchers have come up with a two-pronged approach that significantly reduces the computation associated with lengthy delivery missions. The team first developed an algorithm that enables a drone to monitor aspects of its “health” in real time. With the algorithm, a drone can predict its fuel level and the condition of its propellers, cameras, and other sensors throughout a mission, and take proactive measures — for example, rerouting to a charging station — if needed.

The researchers also devised a method for a drone to efficiently compute its possible future locations offline, before it takes off. The method simplifies all potential routes a drone may take to reach a destination without colliding with obstacles.

In simulations involving multiple deliveries under various environmental conditions, the researchers found that their drones delivered as many packages as those that lacked health-monitoring algorithms — but with far fewer failures or breakdowns.

Amazon’s delivery drones. Credit: Amazon

“With something like package delivery, which needs to be done persistently over hours, you need to take into account the health of the system,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Department of Aeronautics and Astronautics. “Interestingly, in our simulations, we found that, even in harsh environments, out of 100 drones, we only had a few failures.”

Agha-mohammadi will present details of the group’s approach in September at the IEEE/RSJ International Conference on Intelligent Robots and Systems, in Chicago. His co-authors are MIT graduate student Kemal Ure; Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics; and John Vian of Boeing.

Tree of possibilities

Planning an autonomous vehicle’s course often involves an approach called Markov Decision Process (MDP), a sequential decision-making framework that resembles a “tree” of possible actions. Each node along a tree can branch into several potential actions — each of which, if taken, may result in even more possibilities. As Agha-mohammadi explains it, MDP is “the process of reasoning about the future” to determine the best sequence of policies to minimize risk.

MDP, he says, works reasonably well in environments with perfect measurements, where the result of one action will be observed perfectly. But in real-life scenarios, where there is uncertainty in measurements, such sequential reasoning is less reliable. For example, even if a command is given to turn 90 degrees, a strong wind may prevent that command from being carried out.

Instead, the researchers chose to work with a more general framework of Partially Observable Markov Decision Processes (POMDP). This approach generates a similar tree of possibilities, although each node represents a probability distribution, or the likelihood of a given outcome. Planning a vehicle’s route over any length of time, therefore, can result in an exponential growth of probable outcomes, which can be a monumental task in computing.

Agha-mohammadi chose to simplify the problem by splitting the computation into two parts: vehicle-level planning, such as a vehicle’s location at any given time; and mission-level, or health planning, such as the condition of a vehicle’s propellers, cameras, and fuel levels.

For vehicle-level planning, he developed a computational approach to POMDP that essentially funnels multiple possible outcomes into a few most-likely outcomes.

“Imagine a huge tree of possibilities, and a large chunk of leaves collapses to one leaf, and you end up with maybe 10 leaves instead of a million leaves,” Agha-mohammadi says. “Then you can … let this run offline for say, half an hour, and map a large environment, and accurately predict the collision and failure probabilities on different routes.”
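As a cartoon of that funneling step (not the planner used in the study, and with made-up numbers), the sketch below grows a belief tree in which each node is a Gaussian over the vehicle's position and then merges leaves whose beliefs are nearly identical, collapsing hundreds of branches into a handful of representatives:

    # Toy illustration of collapsing similar belief nodes in a POMDP-style
    # search tree. Beliefs are 1-D Gaussians (mean, variance); all numbers are
    # hypothetical and this is not the authors' planner.
    from itertools import product

    ACTIONS = (-1.0, 0.0, +1.0)  # move left, hover, move right
    MOTION_NOISE = 0.2           # variance added per step (illustrative)

    def step(belief, action):
        mean, var = belief
        return (mean + action, var + MOTION_NOISE)

    def expand(beliefs, depth):
        """Naive expansion: the tree grows as len(ACTIONS) ** depth leaves."""
        for _ in range(depth):
            beliefs = [step(b, a) for b, a in product(beliefs, ACTIONS)]
        return beliefs

    def collapse(beliefs, tol=0.5):
        """Merge leaves whose means fall in the same tol-sized bin."""
        merged = {}
        for mean, var in beliefs:
            merged[round(mean / tol)] = (mean, var)  # one representative per bin
        return list(merged.values())

    leaves = expand([(0.0, 0.0)], depth=6)           # 3**6 = 729 leaves
    print(len(leaves), "->", len(collapse(leaves)))  # collapses to 13 representatives

The real planner reasons over far richer beliefs (position, fuel, sensor health) and chooses actions to minimize risk, but the payoff is the same: pruning near-duplicate futures keeps the tree small enough to evaluate offline.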

He says that planning out a vehicle’s possible positions ahead of time frees up a significant amount of computational energy, which can then be spent on mission-level planning in real time. In this regard, he and his colleagues used POMDP to generate a tree of possible health outcomes, including fuel levels and the status of sensors and propellers.

Proactive delivery

The researchers combined the two computational approaches, and ran simulations in which drones were tasked with delivering multiple packages to different addresses under various wind conditions and with limited fuel. They found that drones operating under the two-pronged approach were more proactive in preserving their health, rerouting to a recharge station midmission to keep from running out of fuel. Even with these interruptions, the team found that these drones were able to deliver just as many packages as those that were programmed to simply make deliveries without considering health.

Going forward, the team plans to test the route-planning approach in actual experiments. The researchers have attached electromagnets to small drones, or quadrotors, enabling them to pick up and drop off small parcels. The team has also programmed the drones to land on custom-engineered recharge stations.

“We believe in the near future, in a lab setting, we can show what we’re gaining with this framework by delivering as many packages as we can while preserving health,” Agha-mohammadi says. “Not only the drone, but the package might be important, and if you fail, it could be a big loss.”

This work was supported by Boeing.

Source: MIT News office

Our connection to content

Using neuroscience tools, Innerscope Research explores the connections between consumers and media.

By Rob Matheson


It’s often said that humans are wired to connect: The neural wiring that helps us read the emotions and actions of other people may be a foundation for human empathy.

But for the past eight years, MIT Media Lab spinout Innerscope Research has been using neuroscience technologies that gauge subconscious emotions by monitoring brain and body activity to show just how powerfully we also connect to media and marketing communications.

“We are wired to connect, but that connection system is not very discriminating. So while we connect with each other in powerful ways, we also connect with characters on screens and in books, and, we found, we also connect with brands, products, and services,” says Innerscope’s chief science officer, Carl Marci, a social neuroscientist and former Media Lab researcher.

With this core philosophy, Innerscope — co-founded at MIT by Marci and Brian Levine MBA ’05 — aims to offer market research that’s more advanced than traditional methods, such as surveys and focus groups, to help content-makers shape authentic relationships with their target consumers.

“There’s so much out there, it’s hard to make something people will notice or connect to,” Levine says. “In a way, we aim to be the good matchmaker between content and people.”

So far, it’s drawn some attention. The company has conducted hundreds of studies and more than 100,000 content evaluations with its host of Fortune 500 clients, which include Campbell’s Soup, Yahoo, and Fox Television, among others.

And Innerscope’s studies are beginning to provide valuable insights into the way consumers connect with media and advertising. Take, for instance, its recent project to measure audience engagement with television ads that aired during the Super Bowl.

Innerscope first used biometric sensors to capture fluctuations in heart rate, skin conductance, breathing, and motion among 80 participants who watched select ads and sorted them into “winning” and “losing” commercials (in terms of emotional responses). Then their collaborators at Temple University’s Center for Neural Decision Making used functional magnetic resonance imaging (fMRI) brain scans to further measure engagement.

Ads that performed well elicited increased neural activity in the amygdala (which drives emotions), superior temporal gyrus (sensory processing), hippocampus (memory formation), and lateral prefrontal cortex (behavioral control).

“But what was really interesting was the high levels of activity in the area known as the precuneus — involved in feelings of self-consciousness — where it is believed that we keep our identity. The really powerful ads generated a heightened sense of personal identification,” Marci says.

Using neuroscience to understand marketing communications and, ultimately, consumers’ purchasing decisions is still at a very early stage, Marci admits — but the Super Bowl study and others like it represent real progress. “We’re right at the cusp of coherent, neuroscience-informed measures of how ad engagement works,” he says.

Capturing “biometric synchrony”

Innerscope’s arsenal consists of 10 tools: Electroencephalography and fMRI technologies measure brain waves and structures. Biometric tools — such as wristbands and attachable sensors — track heart rate, skin conductance, motion, and respiration, which reflect emotional processing. And then there’s eye-tracking, voice-analysis, and facial-coding software, as well as other tests to complement these measures.

Such technologies were used for market research long before the rise of Innerscope. But, starting at MIT, Marci and Levine began developing novel algorithms, informed by neuroscience, that find trends among audiences pointing to exact moments when an audience is engaged together — in other words, in “biometric synchrony.”

Traditional algorithms for such market research would average the responses of entire audiences, Levine explains. “What you get is an overall level of arousal — basically, did they love or hate the content?” he says. “But how is that emotion going to be useful? That’s where the hole was.”

Innerscope’s algorithms tease out real-time detail from individual reactions — comprising anywhere from 500 million to 1 billion data points — to locate instances when groups’ responses (such as surprise, excitement, or disappointment) collectively match.

As an example, Levine references an early test conducted using an episode of the television show “Lost,” where a group of strangers are stranded on a tropical island.

Levine and Marci attached biometric sensors to six separate groups of five participants. At the long-anticipated moment when the show’s “monster” is finally revealed, nearly everyone held their breath for about 10 to 15 seconds.

“What our algorithms are looking for is this group response. The more similar the group response, the more likely the stimuli is creating that response,” Levine explains. “That allows us to understand if people are paying attention and if they’re going on a journey together.”
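Conceptually, biometric synchrony asks how similar viewers' physiological traces are from moment to moment. A generic proxy for the idea is a sliding-window, pairwise-correlation score; the sketch below runs it on synthetic data with a shared response inserted partway through, purely to show the mechanics. It is not Innerscope's proprietary algorithm.

    # Generic sliding-window synchrony score: mean pairwise correlation of
    # viewers' physiological traces in each time window. Synthetic data only;
    # this is a stand-in for, not a reconstruction of, Innerscope's algorithm.
    import numpy as np

    def synchrony(traces, window, step):
        """traces: array of shape (n_viewers, n_samples), e.g. heart rate."""
        n, t = traces.shape
        scores = []
        for start in range(0, t - window + 1, step):
            w = traces[:, start:start + window]
            c = np.corrcoef(w)                  # n x n correlation matrix
            pairs = c[np.triu_indices(n, k=1)]  # one value per viewer pair
            scores.append(float(np.nanmean(pairs)))
        return np.array(scores)

    rng = np.random.default_rng(0)
    traces = rng.normal(size=(5, 300))           # 5 viewers, 300 samples of noise
    traces[:, 140:160] += np.linspace(0, 3, 20)  # shared response for everyone
    s = synchrony(traces, window=30, step=10)
    print("most synchronous window starts near sample", int(s.argmax()) * 10)

Windows where the five traces rise and fall together score high, while windows of uncorrelated noise score near zero, which is how moments like the collective held breath stand out from the rest of the program.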

Getting on the map

Before MIT, Marci was a neuroscientist studying empathy, using biometric sensors and other means to explore how empathy between patient and doctor can improve patient health.

“I was lugging around boxes of equipment, with wires coming out and videotaping patients and doctors. Then someone said, ‘Hey, why don’t you just go to the MIT Media Lab,’” Marci says. “And I realized it had the resources I needed.”

At the Media Lab, Marci met behavioral analytics expert and collaborator Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who helped him set up Bluetooth sensors around Massachusetts General Hospital to track emotions and empathy between doctors and patients with depression.

During this time, Levine, a former Web developer, had enrolled at MIT, splitting his time between the MIT Sloan School of Management and the Media Lab. “I wanted to merge an idea to understand customers better with being able to prototype anything,” he says.

After meeting Marci through a digital anthropology class, Levine proposed that they use this emotion-tracking technology to measure the connections of audiences to media. Using prototype sensor vests equipped with heart-rate monitors, stretch receptors, accelerometers, and skin-conductivity sensors, they trialed the technology with students around the Media Lab.

All the while, Levine pieced together Innerscope’s business plan in his classes at MIT Sloan, with help from other students and professors. “The business-strategy classes were phenomenal for that,” Levine says. “Right after finishing MIT, I had a complete and detailed business plan in my hands.”

Innerscope launched in 2006. But a 2008 study really accelerated the company’s growth. “NBC Universal had a big concern at the time: DVR,” Marci says. “Were people who were watching the prerecorded program still remembering the ads, even though they were clearly skipping them?”

Innerscope compared facial cues and biometrics from people who fast-forwarded ads against those who didn’t. The results were unexpected: While fast-forwarding, people stared at the screen blankly, but their eyes actually caught relevant brands, characters, and text. Because they didn’t want to miss their show, while fast-forwarding, they also had a heightened sense of engagement, signaled by leaning forward and staring fixedly.

“What we concluded was that people don’t skip ads,” Marci says. “They’re processing them in a different way, but they’re still processing those ads. That was one of those insights you couldn’t get from a survey. That put us on the map.”

Today, Innerscope is looking to expand. One project is bringing kiosks to malls and movie theaters, where the company recruits passersby for fast and cost-effective results. (Wristbands monitor emotional response, while cameras capture facial cues and eye motion.) The company is also aiming to try applications in mobile devices, wearables, and at-home sensors.

“We’re rewiring a generation of Americans in novel ways and moving toward a world of ubiquitous sensing,” Marci says. “We’ll need data science and algorithms and experts that can make sense of all that data.”

 

Source : MIT News Office

 

Engineering new bone growth

Coated tissue scaffolds help the body grow new bone to repair injuries or congenital defects.

By Anne Trafton


 

CAMBRIDGE, MA — MIT chemical engineers have devised a new implantable tissue scaffold coated with bone growth factors that are released slowly over a few weeks. When applied to bone injuries or defects, this coated scaffold induces the body to rapidly form new bone that looks and behaves just like the original tissue.

This type of coated scaffold could offer a dramatic improvement over the current standard for treating bone injuries, which involves transplanting bone from another part of the patient’s body — a painful process that does not always supply enough bone. Patients with severe bone injuries, such as soldiers wounded in battle; people who suffer from congenital bone defects, such as craniomaxillofacial disorders; and patients in need of bone augmentation prior to insertion of dental implants could benefit from the new tissue scaffold, the researchers say.

“It’s been a truly challenging medical problem, and we have tried to provide one way to address that problem,” says Nisarg Shah, a recent PhD recipient and lead author of the paper, which appears in the Proceedings of the National Academy of Sciences this week.

Paula Hammond, the David H. Koch Professor in Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Department of Chemical Engineering, is the paper’s senior author. Other authors are postdocs M. Nasim Hyder and Mohiuddin Quadir, graduate student Noémie-Manuelle Dorval Courchesne, Howard Seeherman of Restituo, Myron Nevins of the Harvard School of Dental Medicine, and Myron Spector of Brigham and Women’s Hospital.

Stimulating bone growth

Two of the most important bone growth factors are platelet-derived growth factor (PDGF) and bone morphogenetic protein 2 (BMP-2). As part of the natural wound-healing cascade, PDGF is one of the first factors released immediately following a bone injury, such as a fracture. After PDGF appears, other factors, including BMP-2, help to create the right environment for bone regeneration by recruiting cells that can produce bone and forming a supportive structure, including blood vessels.

Efforts to treat bone injury with these growth factors have been hindered by the inability to effectively deliver them in a controlled manner. When very large quantities of growth factors are delivered too quickly, they are rapidly cleared from the treatment site — so they have reduced impact on tissue repair, and can also induce unwanted side effects.

“You want the growth factor to be released very slowly and with nanogram or microgram quantities, not milligram quantities,” Hammond says. “You want to recruit these native adult stem cells we have in our bone marrow to go to the site of injury and then generate bone around the scaffold, and you want to generate a vascular system to go with it.”

This process takes time, so ideally the growth factors would be released slowly over several days or weeks. To achieve this, the MIT team created a very thin, porous scaffold sheet coated with layers of PDGF and BMP. Using a technique called layer-by-layer assembly, they first coated the sheet with about 40 layers of BMP-2, then added another 40 layers of PDGF on top. This allowed PDGF to be released more quickly, along with a more sustained BMP-2 release, mimicking aspects of natural healing.

The scaffold sheet is about 0.1 millimeter thick; once the growth-factor coatings are applied, scaffolds can be cut from the sheet on demand, and in the appropriate size for implantation into a bone injury or defect.

Effective repair

The researchers tested the scaffold in rats with a skull defect large enough — 8 millimeters in diameter — that it could not heal on its own. After the scaffold was implanted, growth factors were released at different rates. PDGF, released during the first few days after implantation, helped initiate the wound-healing cascade and mobilize different precursor cells to the site of the wound. These cells are responsible for forming new tissue, including blood vessels, supportive vascular structures, and bone.

BMP, released more slowly, then induced some of these immature cells to become osteoblasts, which produce bone. When both growth factors were used together, these cells generated a layer of bone, as soon as two weeks after surgery, that was indistinguishable from natural bone in its appearance and mechanical properties, the researchers say.

“Using this combination allows us to not only have accelerated proliferation first, but also facilitates laying down some vascular tissue, which provides a route for both the stem cells and the precursor osteoblasts and other players to get in and do their jobs. You end up with a very uniform healed system,” Hammond says.

Another advantage of this approach is that the scaffold is biodegradable and breaks down inside the body within a few weeks. The scaffold material, a polymer called PLGA, is widely used in medical treatment and can be tuned to disintegrate at a specific rate so the researchers can design it to last only as long as needed.

Hammond’s team has filed a patent based on this work and now aims to begin testing the system in larger animals in hopes of eventually moving it into clinical trials.

This study was funded by the National Institutes of Health.

Source: MIT News Office