Tag Archives: stanford

Stanford’s social robot ‘Jackrabbot’ seeks to understand pedestrian behavior


The Computational Vision and Geometry Lab has developed a robot prototype that could soon autonomously move among us, following normal human social etiquette. It’s named ‘Jackrabbot’ after the springy hares that bounce around campus.

BY VIGNESH RAMACHANDRAN


In order for robots to circulate on sidewalks and mingle with humans in other crowded places, they’ll have to understand the unwritten rules of pedestrian behavior. Stanford researchers have created a short, non-humanoid prototype of just such a moving, self-navigating machine.

The robot is nicknamed “Jackrabbot” – after the jackrabbits often seen darting across the Stanford campus – and looks like a ball on wheels. Jackrabbot is equipped with sensors that let it understand its surroundings and navigate streets and hallways according to normal human etiquette.

The idea behind the work is that by observing how Jackrabbot navigates among students in the halls and on the sidewalks of Stanford’s School of Engineering, and how it learns the unwritten conventions of these social behaviors over time, the researchers will gain critical insight into how to design the next generation of everyday robots so that they operate smoothly alongside humans in crowded open spaces like shopping malls or train stations.

“By learning social conventions, the robot can be part of ecosystems where humans and robots coexist,” said Silvio Savarese, an assistant professor of computer science and director of the Stanford Computational Vision and Geometry Lab.

The researchers will present their system for predicting human trajectories in crowded spaces at the Computer Vision and Pattern Recognition conference in Las Vegas on June 27.

As robotic devices become more common in human environments, it becomes increasingly important that they understand and respect human social norms, Savarese said. How should they behave in crowds? How do they share public resources, like sidewalks or parking spots? When should a robot take its turn? What are the ways people signal each other to coordinate movements and negotiate other spontaneous activities, like forming a line?

Unlike the traffic rules that govern autonomous cars – written down and reinforced by lane markings and traffic lights – these human social conventions are rarely explicit or recorded anywhere.

So Savarese’s lab is using machine learning techniques to create algorithms that will, in turn, allow the robot to recognize and react appropriately to unwritten rules of pedestrian traffic. The team’s computer scientists have been collecting images and video of people moving around the Stanford campus and transforming those images into coordinates. From those coordinates, they can train an algorithm.

“Our goal in this project is to actually learn those (pedestrian) rules automatically from observations – by seeing how humans behave in these kinds of social spaces,” Savarese said. “The idea is to transfer those rules into robots.”

Jackrabbot already moves autonomously and can navigate indoors without human assistance, and the team members are fine-tuning the robot’s self-navigation capabilities outdoors. The next step in their research is the implementation of “social aspects” of pedestrian navigation, such as deciding rights of way on the sidewalk. This work, described in their newest conference paper, has been demonstrated in computer simulations.

“We have developed a new algorithm that is able to automatically move the robot with social awareness, and we’re currently integrating that in Jackrabbot,” said Alexandre Alahi, a postdoctoral researcher in the lab.

Even though social robots may someday roam among humans, Savarese said he believes they don’t necessarily need to look like humans. Instead they should be designed to look as lovable and friendly as possible. In demos, the roughly three-foot-tall Jackrabbot roams around campus wearing a Stanford tie and sun hat, drawing hugs and curiosity from passersby.

Today, Jackrabbot is an expensive prototype. But Savarese estimates that in five or six years social robots like this could become as cheap as $500, making it possible for companies to release them to the mass market.

“It’s possible to make these robots affordable for on-campus delivery, or for aiding impaired people to navigate in a public space like a train station or for guiding people to find their way through an airport,” Savarese said.

The conference paper is titled “Social LSTM: Human Trajectory Prediction in Crowded Spaces.” See conference program for details.

Source: Stanford University News Service

Stanford study finds promise in expanding renewables based on results in three major economies

A new Stanford study found that renewable energy can make a major and increasingly cost-effective contribution to alleviating climate change.

BY TERRY NAGEL


Stanford energy experts have released a study that compares the experiences of three large economies in ramping up renewable energy deployment and concludes that renewables can make a major and increasingly cost-effective contribution to climate change mitigation.

The report from Stanford’s Steyer-Taylor Center for Energy Policy and Finance analyzes the experiences of Germany, California and Texas, the world’s fourth, eighth and 12th largest economies, respectively. It found, among other things, that Germany, which gets about half as much sunshine as California and Texas, nevertheless generates electricity from solar installations at a cost comparable to that of Texas and only slightly higher than in California.

The report was released in time for the United Nations Climate Change Conference that started this week, where international leaders are gathering to discuss strategies to deal with global warming, including massive scale-ups of renewable energy.

“As policymakers from around the world gather for the climate negotiations in Paris, our report draws on the experiences of three leaders in renewable-energy deployment to shed light on some of the most prominent and controversial themes in the global renewables debate,” said Dan Reicher, executive director of the Steyer-Taylor Center, which is a joint center between Stanford Law School and Stanford Graduate School of Business. Reicher also is interim president and chief executive officer of the American Council on Renewable Energy.

“Our findings suggest that renewable energy has entered the mainstream and is ready to play a leading role in mitigating global climate change,” said Felix Mormann, associate professor of law at the University of Miami, faculty fellow at the Steyer-Taylor Center and lead author of the report.

Other conclusions of the report, “A Tale of Three Markets: Comparing the Solar and Wind Deployment Experiences of California, Texas, and Germany,” include:

  • Germany’s success in deploying renewable energy at scale is due largely to favorable treatment of “soft cost” factors such as financing, permitting, installation and grid access. This approach has allowed the renewable energy policies of some countries to deliver up to four times the average deployment of other countries, despite offering only half the financial incentives.
  • Contrary to widespread concern, a higher share of renewables does not automatically translate to higher electricity bills for ratepayers. While Germany’s residential electric rates are two to three times those of California and Texas, this price differential is only partly due to Germany’s subsidies for renewables. The average German household’s electricity bill is, in fact, lower than in Texas and only slightly higher than in California, partly as a result of energy-efficiency efforts in German homes.
  • An increase in the share of intermittent solar and wind power need not jeopardize the stability of the electric grid. From 2006 to 2013, Germany tripled the amount of electricity generated from solar and wind to a market share of 26 percent, while managing to reduce average annual outage times for electricity customers in its grid from an already impressive 22 minutes to just 15 minutes. During that same period, California tripled the amount of electricity produced from solar and wind to a joint market share of 8 percent and reduced its outage times from more than 100 minutes to less than 90 minutes. However, Texas increased its outage times from 92 minutes to 128 minutes after ramping up its wind-generated electricity sixfold to a market share of 10 percent.

The study may inform the energy debate in the United States, where expanding the nation’s renewable energy infrastructure is a top priority of the Obama administration and the subject of debate among presidential candidates.

The current share of renewables in U.S. electricity generation is 14 percent – half that of Germany. Germany’s ambitious – and controversial – Energiewende (Energy Transition) initiative commits the country to meeting 80 percent of its electricity needs with renewables by 2050. In the United States, 29 states, including California and Texas, have set mandatory targets for renewable energy.

In California, Gov. Jerry Brown recently signed legislation committing the state to producing 50 percent of its electricity from renewables by 2030. Texas, the leading U.S. state for wind development, set a mandate of 10,000 megawatts of renewable energy capacity by 2025, but reached this target 15 years ahead of schedule and now generates over 10 percent of the state’s electricity from wind alone.

Source: Stanford News

Climate change requires new conservation models, Stanford scientists say

In a world transformed by climate change and human activity, Stanford scientists say that conserving biodiversity and protecting species will require an interdisciplinary combination of ecological and social research methods.

By Ker Than

A threatened tree species in Alaska could serve as a model for integrating ecological and social research methods in efforts to safeguard species that are vulnerable to climate change effects and human activity.

In a new Stanford-led study, published online this week in the journal Biological Conservation, scientists assessed the health of yellow cedar, a culturally and commercially valuable tree throughout coastal Alaska that is experiencing climate change-induced dieback.

In an era when climate change touches every part of the globe, the traditional conservation approach of setting aside lands to protect biodiversity is no longer sufficient to protect species, said the study’s first author, Lauren Oakes, a research associate at Stanford University.

“A lot of that kind of conservation planning was intended to preserve historic conditions, which, for example, might be defined by the population of a species 50 years ago or specific ecological characteristics when a park was established,” said Oakes, who is a recent PhD graduate of the Emmett Interdisciplinary Program in Environment and Resources (E-IPER) at Stanford’s School of Earth, Energy, & Environmental Sciences.

But as the effects of climate change become increasingly apparent around the world, resource managers are beginning to recognize that “adaptive management” strategies are needed that account for how climate change affects species now and in the future.

Similarly, because climate change effects will vary across regions, new management interventions must consider not only local laws, policies and regulations, but also local peoples’ knowledge about climate change impacts and their perceptions about new management strategies. For yellow cedar, new strategies could include assisting migration of the species to places where it may be more likely to survive or increasing protection of the tree from direct uses, such as harvesting.

Gathering these perspectives requires an interdisciplinary social-ecological approach, said study leader Eric Lambin, the George and Setsuko Ishiyama Provostial Professor in the School of Earth, Energy, & Environmental Sciences.

“The impact of climate change on ecosystems is not just a biophysical issue. Various actors depend on these ecosystems and on the services they provide for their livelihoods,” said Lambin, who is also a senior fellow at the Stanford Woods Institute for the Environment.

“Moreover, as the geographic distribution of species is shifting due to climate change, new areas that are currently under human use will need to be managed for biodiversity conservation. Any feasible management solution needs to integrate the ecological and social dimensions of this challenge.”

Gauging yellow cedar health

The scientists used aerial surveys to map the distribution of yellow cedar in Alaska’s Glacier Bay National Park and Preserve (GLBA) and collected data about the trees’ health and environmental conditions from 18 randomly selected plots inside the park and just south of the park on designated wilderness lands.

“Some of the plots were really challenging to access,” Oakes said. “We would get dropped off by boat for 10 to 15 days at a time, travel by kayak on the outer coast, and hike each day through thick forests to reach the sites. We’d wake up at 6 a.m. and it wouldn’t be until 11 a.m. that we reached the sites and actually started the day’s work of measuring trees.”

The field surveys revealed that yellow cedars inside of GLBA were relatively healthy and unstressed compared to trees outside the park, to the south. Results also showed reduced crowns and browned foliage in yellow cedar trees at sites outside the park, indicating early signs of the dieback progressing toward the park.

Additionally, modeling by study co-authors Paul Hennon, David D’Amore, and Dustin Wittwer at the USDA Forest Service suggested the dieback is expected to emerge inside GLBA in the future. As the region warms, reductions in snow cover, which helps insulate the tree’s shallow roots, leave the roots vulnerable to sudden springtime cold events.

Merging disciplines

In addition to collecting data about the trees themselves with a team of research assistants, Oakes conducted interviews with 45 local residents and land managers to understand their perceptions about climate change-induced yellow cedar dieback; whether or not they thought humans should intervene to protect the species in GLBA; and what forms those interventions should take.

One unexpected and interesting pattern that emerged from the interviews is that those participants who perceived protected areas as “separate” from nature commonly expressed strong opposition to intervention inside protected areas, like GLBA. In contrast, those who thought of humans as being “a part of” protected areas viewed intervention more favorably.

“Native Alaskans told me stories of going to yellow cedar trees to walk with their ancestors,” Oakes said. “There were other interview participants who said they’d go to a yellow cedar tree every day just to be in the presence of one.”

These people tended to support new kinds of interventions because they believed humans were inherently part of the system and they derived many intangible values, like spiritual or recreational values, from the trees. In contrast, those who perceived protected areas as “natural” and separate from humans were more likely to oppose new interventions in the protected areas.

Lambin said he was not surprised to see this pattern for individuals because people’s choices are informed by their values. “It was less expected for land managers who occupy an official role,” he added. “We often think about an organization and its missions, but forget that day-to-day decisions are made by people who carry their own value systems and perceptions of risks.”

The insights provided by combining ecological and social techniques could inform decisions about when, where, and how to adapt conservation practices in a changing climate, said study co-author Nicole Ardoin, an assistant professor at Stanford’s Graduate School of Education and a center fellow at the Woods Institute.

“Some initial steps in southeast Alaska might include improving tree monitoring in protected areas and increasing collaboration among the agencies that oversee managed and protected lands, as well as working with local community members to better understand how they value these species,” Ardoin said.

The team members said they believe their interdisciplinary approach is applicable to other climate-sensitive ecosystems and species, ranging from redwood forests in California to wild herbivore species in African savannas, and especially those that are currently surrounded by human activities.

“In a human-dominated planet, such studies will have to become the norm,” Lambin said. “Humans are part of these land systems that are rapidly transforming.”

This study was done in partnership with the U.S. Forest Service Pacific Northwest Research Station. It was funded with support from the George W. Wright Climate Change Fellowship; the Morrison Institute for Population and Resource Studies and the School of Earth, Energy & Environmental Sciences at Stanford University; the Wilderness Society Gloria Barron Fellowship; the National Forest Foundation; and U.S. Forest Service Pacific Northwest Research Station and Forest Health Protection.

For more Stanford experts on climate change and other topics, visit Stanford Experts.

Source: Stanford News


New research shows how to make effective political arguments, Stanford sociologist says

Stanford sociologist Robb Willer finds that an effective way to persuade people in politics is to reframe arguments to appeal to the moral values of those holding opposing positions.

BY CLIFTON B. PARKER


In today’s American politics, it might seem impossible to craft effective political messages that reach across the aisle on hot-button issues like same-sex marriage, national health insurance and military spending. But, based on new research by Stanford sociologist Robb Willer, there’s a way to craft messages that could lead to politicians finding common ground.

“We found the most effective arguments are ones in which you find a new way to connect a political position to your target audience’s moral values,” Willer said.

While most people’s natural inclination is to make political arguments grounded in their own moral values, Willer said, these arguments are less persuasive than “reframed” moral arguments.

To be persuasive, reframe political arguments to appeal to the moral values of those holding the opposing political positions, said Matthew Feinberg, assistant professor of organizational behavior at the University of Toronto, who co-authored the study with Willer. Their work was published recently online in the Personality and Social Psychology Bulletin.

Such reframed moral appeals are persuasive because they increase the apparent agreement between a political position and the target audience’s moral values, according to the research, Feinberg said.

In fact, Willer pointed out, the research shows a “potential effective path for building popular support in our highly polarized political world.” Creating bipartisan success on legislative issues – whether in Congress or in state legislatures – requires such a sophisticated approach to building coalitions among groups not always in agreement with each other, he added.

Different moral values

Feinberg and Willer drew upon past research showing that American liberals and conservatives tend to endorse different moral values to different extents. For example, liberals tend to be more concerned with care and equality, whereas conservatives are more concerned with values like group loyalty, respect for authority and purity.

They then conducted four studies testing the idea that moral arguments reframed to fit a target audience’s moral values could be persuasive on even deeply entrenched political issues. In one study, conservative participants recruited via the Internet were presented with passages that supported legalizing same-sex marriage.

Conservative participants were ultimately persuaded by a patriotism-based argument that “same-sex couples are proud and patriotic Americans … [who] contribute to the American economy and society.”

On the other hand, they were significantly less persuaded by a passage that argued for legalized same-sex marriage in terms of fairness and equality.

Feinberg and Willer found similar results for studies targeting conservatives with a pro-national health insurance message and liberals with arguments for high levels of military spending and making English the official language of the United States. In all cases, messages were significantly more persuasive when they fit the values endorsed more by the target audience.

“Morality can be a source of political division, a barrier to building bi-partisan support for policies,” Willer said. “But it can also be a bridge if you can connect your position to your audience’s deeply held moral convictions.”

Values and framing messages

“Moral reframing is not intuitive to people,” Willer said. “When asked to make moral political arguments, people tend to make the ones they believe in, and not those of an opposing audience – but the research finds this type of argument unpersuasive.”

To test this, the researchers conducted two additional studies examining the moral arguments people typically make. They asked a panel of self-reported liberals to make arguments that would convince a conservative to support same-sex marriage, and a panel of conservatives to convince liberals to support English being the official language of the United States.

They found that, in both studies, most participants crafted messages with significant moral content, and most of that moral content reflected their own moral values, precisely the sort of arguments their other studies showed were ineffective.

“Our natural tendency is to make political arguments in terms of our own morality,” Feinberg said. “But the most effective arguments are based on the values of whomever you are trying to persuade.”

In all, Willer and Feinberg conducted six online studies involving 1,322 participants.

Source: Stanford News 

Stanford researchers solve the mystery of the dancing droplets

Years of research satisfy a graduate student’s curiosity about the molecular minuet he observed among drops of ordinary food coloring.

BY TOM ABATE


A puzzling observation, pursued through hundreds of experiments, has led Stanford researchers to a simple yet profound discovery: Under certain circumstances, droplets of fluid will move like performers in a dance choreographed by molecular physics.

“These droplets sense one another, they move and interact, almost like living cells,” said Manu Prakash, an assistant professor of bioengineering and senior author of an article published in Nature.

The unexpected findings may prove useful in semiconductor manufacturing and self-cleaning solar panels, but what truly excites Prakash is that the discovery resulted from years of persistent effort to satisfy a scientific curiosity.

Video: Stanford researchers solve the mystery of the dancing droplets

The research began in 2009 when Nate Cira, then an undergraduate at the University of Wisconsin, was doing an unrelated experiment. In the course of that experiment Cira deposited several droplets of food coloring onto a sterilized glass slide and was astonished when they began to move.

Cira replicated and studied this phenomenon alone for two years until he became a graduate student at Stanford, where he shared this curious observation with Prakash. The professor soon became hooked by the puzzle, and recruited a third member to the team: Adrien Benusiglio, a postdoctoral scholar in the Prakash Lab.

Together they spent three years performing increasingly refined experiments to learn how these tiny droplets of food coloring sense one another and move. In living cells these processes of sensing and motility are known as chemotaxis.

“We’ve discovered how droplets can exhibit behaviors akin to artificial chemotaxis,” Prakash said.

As the Nature article explains, the critical fact was that food coloring is a two-component fluid. In such fluids, two different chemical compounds coexist while retaining separate molecular identities.

The droplets in this experiment consisted of two molecular compounds found naturally in food coloring: water and propylene glycol.

The researchers discovered how the dynamic interactions of these two molecular components enabled inanimate droplets to mimic some of the behaviors of living cells.

Surface tension and evaporation

Essentially, the droplets danced because of a delicate balance between surface tension and evaporation.

Evaporation is easily understood. On the surface of any liquid, some molecules convert to a gaseous state and float away.

Surface tension is what causes liquids to bead up. It arises from how tightly the molecules in a liquid bind together.

Water evaporates more quickly than propylene glycol. Water also has a higher surface tension.  These differences create a tornado-like flow inside the droplets, which not only allows them to move but also allows a single droplet to sense its neighbors.

To understand the molecular forces involved, imagine shrinking down to size and diving inside a droplet.

There, water and propylene glycol molecules try to remain evenly distributed but differences in evaporation and surface tension create turmoil within the droplet.

On the curved dome of each droplet, water molecules become gaseous and float away faster than their evaporation-averse propylene glycol neighbors.

This evaporation happens more readily on the thin lower edges of the domed droplet, leaving an excess of propylene glycol there. Meanwhile, the peak of the dome has a higher concentration of water.

The water at the top exerts its higher surface tension to pull the droplet tight so it doesn’t flatten out. This tugging causes a tumbling molecular motion inside the droplet. Thus surface tension gets the droplet ready to roll.

Evaporation determines the direction of that motion. Each droplet sends aloft gaseous molecules of water like a radially emanating signal announcing the exact location of any given droplet. The droplets converge where the signal is strongest.

So evaporation provided the sensing mechanism and surface tension the pull to move droplets together in what seemed to the eye to be a careful dance.

Rule for two-component fluids

The researchers experimented with varied proportions of water and propylene glycol. Droplets that were 1 percent propylene glycol (PG) to 99 percent water exhibited much the same behavior as droplets that were two-thirds PG to just one-third water.

Based on these experiments the paper describes a “universal rule” to identify any two-component fluids that will demonstrate sensing and motility.

Adding colors to the mixtures made it easier to tell how the droplets of different concentrations behaved and created some visually striking results.

In one experiment, a droplet with more propylene glycol seems to chase a droplet with more water. In actuality, the droplet with more water exerts a higher surface tension tug, pulling the propylene glycol droplet along.

In another experiment, researchers showed how physically separated droplets could align themselves using ever-so-slight signals of evaporation.

In a third experiment they used Sharpie pens to draw black lines on glass slides. The lines changed the surface of the slide and created a series of catch basins. The researchers filled each basin with fluids of different concentrations to create a self-sorting mechanism. Droplets bounced from reservoir to reservoir until they sensed the fluid that matched their concentration and merged with that pool.

What started as a curiosity-driven project may also have many practical implications.

The deep physical understanding of two-component fluids allows the researchers to predict which fluids and surfaces will show these unusual effects. The effect is present on a large number of common surfaces and can be replicated with a number of chemical compounds.

“If necessity is the mother of invention, then curiosity is the father,” Prakash observed.

Source: Stanford News


Stanford scientists seek to map origins of mental illness and develop noninvasive treatment

An interdisciplinary team of scientists has convened to map the origins of mental illnesses in the brain and develop noninvasive technologies to treat the conditions. The collaboration could lead to improved treatments for depression, anxiety and post-traumatic stress disorder.

BY AMY ADAMS


Over the years imaging technologies have revealed a lot about what’s happening in our brains, including which parts are active in people with conditions like depression, anxiety or post-traumatic stress disorder. But here’s the secret Amit Etkin wants the world to know about those tantalizing images: they show the result of a brain state, not what caused it.

This is important because until we know how groups of neurons, called circuits, cause these conditions – not just which ones are active afterward – scientists will never be able to treat them in a targeted way.

“You see things activated in brain images but you can’t tell just by watching what is cause and what is effect,” said Amit Etkin, an assistant professor of psychiatry and behavioral sciences. Etkin is co-leader of a new interdisciplinary initiative to understand what brain circuits underlie mental health conditions and then direct noninvasive treatments to those locations.

“Right now, if a patient with a mental illness goes to see their doctor they would likely be given a medication that goes all over the brain and body,” Etkin said. “While medications can work well, they do so for only a portion of people and often only partially.” Medications don’t specifically act on the brain circuits critically affected in that illness or individual.

The Big Idea: treat roots of mental illness

The new initiative, called NeuroCircuit, has the goal of finding the brain circuits that are responsible for mental health conditions and then developing ways of remotely stimulating those circuits and, the team hopes, potentially treating those conditions.

The initiative is part of the Stanford Neurosciences Institute‘s Big Ideas, which bring together teams of researchers from across disciplines to solve major problems in neuroscience and society. Stephen Baccus, an associate professor of neurobiology who co-leads the initiative with Etkin, said that what makes NeuroCircuit a big idea is the merging of teams trying to map circuits responsible for mental health conditions and teams developing new technologies to remotely access those circuits.

“Many psychiatric disorders, especially disorders of mood, probably involve malfunction within specific brain circuits that regulate emotion and motivation, yet our current pharmaceutical treatments affect circuits all over the brain,” said William Newsome, director of the Stanford Neurosciences Institute. “The ultimate goal of NeuroCircuit is more precise treatments, with minimal side effects, for specific psychiatric disorders.”

“The connection between the people who develop the technology and carry out research with the clinical goal, that’s what’s really come out of the Big Ideas,” Baccus said.

Brain control

Etkin has been working with a technology called transcranial magnetic stimulation, or TMS, to map and remotely stimulate parts of the brain. The device, which looks like a pair of doughnuts on a stick, generates a strong magnetic field that induces electric currents in circuits near the surface of the brain.

TMS is currently used as a way of treating depression and anxiety, but Etkin said the brain regions being targeted are the ones available to TMS, not necessarily the ones most likely to treat a person’s condition. They are also not personalized for the individual.

Pairing TMS with another technology that shows which brain regions are active, Etkin and his team can stimulate one part of the brain with TMS and look for a reaction elsewhere. These studies can eventually help map the relationships between brain circuits and identify the circuits that underlie mental health conditions.

In parallel, the team is working to improve TMS to make it more useful as a therapy. TMS currently only reaches the surface of the brain and is not very focused. The goal is to improve the technology so that it can reach structures deeper in the brain in a more targeted way. “Right now they are hitting the only accessible target,” he said. “The parts we really want to hit for depression, anxiety or PTSD are likely deeper in the brain.”

Technology of the future

In parallel with the TMS work, Baccus and a team of engineers, radiologists and physiologists have been developing a way of using ultrasound to stimulate the brain. Ultrasound is widely used to image the body, most famously for producing images of developing babies in the womb. But in recent years scientists have learned that at the right frequency and focus, ultrasound can also stimulate nerves to fire.

In his lab, Baccus has been using ultrasound to stimulate nerve cells of the retina – the light-sensing structure at the back of the eye – as part of an effort to develop a prosthetic retina. He is also teaming up with colleagues to understand how ultrasound might be triggering that stimulation. It appears to compress the nerve cells in a way that could lead to activation, but the connection is far from clear.

Other members of the team are modifying existing ultrasound technology to direct it deep within the brain at a frequency that can stimulate nerves without harming them. If the team is successful, ultrasound could be a more targeted and focused tool than TMS for remotely stimulating circuits that underlie mental health conditions.

The group has been working together for about five years, and in 2012 got funding from Bio-X NeuroVentures, which eventually gave rise to the Stanford Neurosciences Institute, to pursue this technology. Baccus said that before merging with Etkin’s team they had been focusing on the technology without specific brain diseases in mind. “This merger really gives a target and a focus to the technology,” he said.

Etkin and Baccus said that if they are successful, they hope to have both a better understanding of how the brain functions and new tools for treating disabling mental health conditions.

Source: Stanford News

Eye implant developed at Stanford could lead to better glaucoma treatments

Lowering internal eye pressure is currently the only way to treat glaucoma. A tiny eye implant developed by Stephen Quake’s lab could pair with a smartphone to improve the way doctors measure and lower a patient’s eye pressure.

BY BJORN CAREY


For the 2.2 million Americans battling glaucoma, the main course of action for staving off blindness involves weekly visits to eye specialists who monitor – and control – increasing pressure within the eye.

Now, a tiny eye implant developed at Stanford could enable patients to take more frequent readings from the comfort of home. Daily or hourly measurements of eye pressure could help doctors tailor more effective treatment plans.

Intraocular pressure (IOP) is the main risk factor associated with glaucoma, which is characterized by a continuous loss of specific retinal cells and degradation of the optic nerve fiber. The mechanism linking IOP and the damage is not clear, but in most patients IOP levels correlate with the rate of damage.

Reducing IOP to normal or below-normal levels is currently the only treatment available for glaucoma. This requires repeated measurements of the patient’s IOP until the levels stabilize. The trick with this, though, is that the readings do not always tell the truth.

Like blood pressure, IOP can vary day-to-day and hour-to-hour; it can be affected by other medications, body posture or even a neck-tie that is knotted too tightly. If patients are tested on a low IOP day, the test can give a false impression of the severity of the disease and affect their treatment in a way that can ultimately lead to worse vision.

The new implant was developed as part of a collaboration between Stephen Quake, a professor of bioengineering and of applied physics at Stanford, and ophthalmologist Yossi Mandel of Bar-Ilan University in Israel. It consists of a small tube – one end is open to the fluids that fill the eye; the other end is capped with a small bulb filled with gas. As the IOP increases, intraocular fluid is pushed into the tube; the gas pushes back against this flow.

As IOP fluctuates, the meniscus – the barrier between the fluid and the gas – moves back and forth in the tube. Patients could use a custom smartphone app or a wearable technology, such as Google Glass, to snap a photo of the instrument at any time, providing a critical wealth of data that could steer treatment. For instance, in one previous study, researchers found that 24-hour IOP monitoring resulted in a change in treatment in up to 80 percent of patients.

The implant is currently designed to fit inside a standard intraocular lens prosthetic, which many glaucoma patients receive when they have cataract surgery, but the scientists are investigating ways to implant it on its own.

“For me, the charm of this is the simplicity of the device,” Quake said. “Glaucoma is a substantial issue in human health. It’s critical to catch things before they go off the rails, because once you go off, you can go blind. If patients could monitor themselves frequently, you might see an improvement in treatments.”

Remarkably, the implant won’t distort vision. When subjected to the vision test used by the U.S. Air Force, the device caused nearly no optical distortion, the researchers said.

Before they can test the device in humans, however, the scientists say they need to re-engineer it with materials that will extend its life inside the human eye. Because of the implant’s simple design, they expect this will be relatively straightforward.

“I believe that only a few years are needed before clinical trials can be conducted,” said Mandel, head of the Ophthalmic Science and Engineering Laboratory at Bar-Ilan University, who collaborated on developing the implant.

The work, published in the current issue of Nature Medicine, was co-authored by Ismail E. Araci, a postdoctoral scholar in Quake’s lab, and Baolong Su, a technician in Quake’s lab and currently an undergraduate student at the University of California, Los Angeles.

Source: Stanford News

Stanford graduate student Ming Gong, left, and Professor Hongjie Dai have developed a low-cost electrolytic device that splits water into hydrogen and oxygen at room temperature. The device is powered by an ordinary AAA battery. (Mark Shwartz / Stanford Precourt Institute for Energy)

Stanford scientists develop water splitter that runs on ordinary AAA battery

Hongjie Dai and colleagues have developed a cheap, emissions-free device that uses a 1.5-volt battery to split water into hydrogen and oxygen. The hydrogen gas could be used to power fuel cells in zero-emissions vehicles.

BY MARK SHWARTZ


In 2015, American consumers will finally be able to purchase fuel cell cars from Toyota and other manufacturers. Although touted as zero-emissions vehicles, most of the cars will run on hydrogen made from natural gas, a fossil fuel that contributes to global warming.


Now scientists at Stanford University have developed a low-cost, emissions-free device that uses an ordinary AAA battery to produce hydrogen by water electrolysis.  The battery sends an electric current through two electrodes that split liquid water into hydrogen and oxygen gas. Unlike other water splitters that use precious-metal catalysts, the electrodes in the Stanford device are made of inexpensive and abundant nickel and iron.

“Using nickel and iron, which are cheap materials, we were able to make the electrocatalysts active enough to split water at room temperature with a single 1.5-volt battery,” said Hongjie Dai, a professor of chemistry at Stanford. “This is the first time anyone has used non-precious metal catalysts to split water at a voltage that low. It’s quite remarkable, because normally you need expensive metals, like platinum or iridium, to achieve that voltage.”

In addition to producing hydrogen, the novel water splitter could be used to make chlorine gas and sodium hydroxide, an important industrial chemical, according to Dai. He and his colleagues describe the new device in a study published in the Aug. 22 issue of the journal Nature Communications.

The promise of hydrogen

Automakers have long considered the hydrogen fuel cell a promising alternative to the gasoline engine.  Fuel cell technology is essentially water splitting in reverse. A fuel cell combines stored hydrogen gas with oxygen from the air to produce electricity, which powers the car. The only byproduct is water – unlike gasoline combustion, which emits carbon dioxide, a greenhouse gas.

Earlier this year, Hyundai began leasing fuel cell vehicles in Southern California. Toyota and Honda will begin selling fuel cell cars in 2015. Most of these vehicles will run on fuel manufactured at large industrial plants that produce hydrogen by combining very hot steam and natural gas, an energy-intensive process that releases carbon dioxide as a byproduct.

Splitting water to make hydrogen requires no fossil fuels and emits no greenhouse gases. But scientists have yet to develop an affordable, active water splitter with catalysts capable of working at industrial scales.

“It’s been a constant pursuit for decades to make low-cost electrocatalysts with high activity and long durability,” Dai said. “When we found out that a nickel-based catalyst is as effective as platinum, it came as a complete surprise.”

Saving energy and money

The discovery was made by Stanford graduate student Ming Gong, co-lead author of the study. “Ming discovered a nickel-metal/nickel-oxide structure that turns out to be more active than pure nickel metal or pure nickel oxide alone,” Dai said.  “This novel structure favors hydrogen electrocatalysis, but we still don’t fully understand the science behind it.”

The nickel/nickel-oxide catalyst significantly lowers the voltage required to split water, which could eventually save hydrogen producers billions of dollars in electricity costs, according to Gong. His next goal is to improve the durability of the device.

“The electrodes are fairly stable, but they do slowly decay over time,” he said. “The current device would probably run for days, but weeks or months would be preferable. That goal is achievable based on my most recent results.”

The researchers also plan to develop a water splitter that runs on electricity produced by solar energy.

“Hydrogen is an ideal fuel for powering vehicles, buildings and storing renewable energy on the grid,” said Dai. “We’re very glad that we were able to make a catalyst that’s very active and low cost. This shows that through nanoscale engineering of materials we can really make a difference in how we make fuels and consume energy.”

Other authors of the study are Wu Zhou, Oak Ridge National Laboratory (co-lead author); Mingyun Guan, Meng-Chang Lin, Bo Zhang, Di-Yan Wang and Jiang Yang, Stanford; Mon-Che Tsai and Bing-Joe Hwang, National Taiwan University of Science and Technology; Jiang Zhou and Yongfeng Hu, Canadian Light Source Inc.; and Stephen J. Pennycook, University of Tennessee.

Principal funding was provided by the Global Climate and Energy Project (GCEP) and the Precourt Institute for Energy at Stanford and by the U.S. Department of Energy.

Mark Shwartz writes about energy technology at the Precourt Institute for Energy at Stanford University.

In her long career teaching writing and rhetoric, Andrea Lunsford became increasingly intrigued by the many forms in which students write. (Photo: Linda A. Cicero / Stanford News Service)

From Twitter to Kickstarter, Stanford English professor says the digital revolution is changing what it means to be an author

Stanford English Professor Andrea Lunsford says today’s writing instruction should teach students how to become better writers for social media and other interactive online environments.

BY ANGELA BECERRA VIDERGAR


 

Between LOLs, emoticons and 140-character rants, it may seem like digital communication has only served to stunt young people’s writing abilities.


But according to Stanford English professor and rhetorician Andrea Lunsford, students today are writing more than ever before – just in forms unseen or unacknowledged in writing instruction.

The former director of Stanford’s Program in Writing and Rhetoric, Lunsford said that in today’s world of instant online publication, anyone can potentially have their written work distributed to a wide audience.

“Turn on your computer, write a blog post – and you’re an author,” said Lunsford, the Louise Hewlett Nixon Professor, Emerita.

The co-author of the digital-age writing guide Everyone’s an Author, Lunsford said that students are “writing more today than they ever have in the history of the world, and it’s because of social media.” Students themselves “may think it’s not writing, but it is writing, and it’s important writing,” said Lunsford.

Everyone’s an Author includes samples and instruction on how to write online reviews, project proposals, articles on health policy and even Wikipedia articles. Lunsford wants to teach students about writing that makes things happen and is “growing and living.”

Lunsford has authored several widely used writing guides. But with Everyone’s an Author, Lunsford and her co-authors Lisa Ede, Beverly J. Moss, Carole Clark Papper and Keith Walters wanted to provide an alternative to typical writing instruction that “seemed to assume an audience of middle-class, white students that are monolingual and who were writing still on paper.”

The co-authors – professors of English, writing and linguistics – developed a textbook that assumed “multilingual, multimodal, multimedia discourse and students of extraordinarily varied linguistic and cultural backgrounds.”

In addition to the more traditional writing forms addressed in other writing texts, such as the academic research essay, the co-authors placed emphasis on areas often overlooked in writing instruction. Their text shows students how to write effectively on media-sharing and crowd-sourcing platforms by integrating words and images and other types of multimedia communication.

For example, a chapter on “Designing What You Write” helps students think about their genre, audience, context, medium and other concerns before making stylistic choices like color schemes, infographics and video and other such narrative tools that are not typically discussed in a writing class.

Lunsford wants today’s writing instruction to challenge assumptions about who is authorized to communicate and in what ways.

“With web 2.0 came participatory experiences by the billions. Young people today are not content to sit back and just consume – swallow – what’s been thought and written in the past 2,500 years. They want to produce things themselves,” said Lunsford.

Teaching the hybrid medium

Lunsford, who has taught writing and rhetoric courses for four decades, became increasingly intrigued by the many forms in which students write while examining the results of the Stanford Study of Writing. The study centered on more than 15,000 pieces of writing produced from 2001 to 2005 by a random sampling of undergraduates for assignments in and outside of class, and it showed that students were writing across a wide variety of genres.

According to Lunsford, the students preferred to talk about the writing they were doing outside of class.

“They would wax eloquent about a newsletter they were putting out for temporary workers at Stanford, for instance,” Lunsford said, “but only spend a couple of minutes talking about their IHUM [Introduction to the Humanities] assignments,” which were formal academic papers.

Lunsford saw some of this enthusiastic, socially engaged student writing when she recently taught in a Semester at Sea. In the study abroad program sponsored by the University of Virginia, more than 600 students from around the world took college-level courses while sailing around the globe.

Lunsford said her students got excited about writing for purposes they initiated, like using web-based programs to address issues in countries they visited: One group of students built a website to raise funds for an inexpensive homemade water-purifying device, while another group started a Kickstarter campaign that within a month raised enough money for a young man in Ghana to attend college for a year.

“He’s now in his junior year at the university funded by this group of kids who just got online and started writing,” Lunsford said. “So that’s what I mean when I say there are profound ways in which authorship is happening on the web through social media and other things like Kickstarter campaigns.”

The authors give students examples of how to represent themselves or their causes on platforms like Facebook or Twitter. They suggest amplifying a status update with a “rhetorically arranged” photo, or with special attention to audience and tone – elements that can carry over to academic writing. The authors explain how a convincing Yelp review reflects research skills such as observing and collecting evidence.

Students are now more adept at using different media.

“Students today are capable of producing those forms,” Lunsford said. “Students want to make little films for an assignment. They want to draw a comic.”

Lunsford advocates including graphic narratives in pedagogy and is fascinated by their innovative combinations of image and text. Comics are represented in Everyone’s an Author, as well as in Lunsford’s courses. She finds the hybrid medium works to “engage with our brains in different ways.”

Quality control

The often collaborative nature of multimedia texts and the digital writing environment reinforces Lunsford’s belief that no writing is truly solitary. The rhetorician points out that even the comments we write in response to online articles are miniature, interactive, published compositions.

The authors point out that students today “write and research not just to report or analyze but to join conversations. With the click of a mouse they can respond to a Washington Post blog, publishing their views alongside those of the Post writer.”

One might question where quality control and ethical responsibility come into play. Internet “trolling” and various forms of aggressive, hateful speech are all too common these days.

Lunsford explains that in the Program in Writing and Rhetoric, the instructors define rhetoric as “the art, theory and practice of ethical communication,” with emphasis on the word ethical. “We are responsible for what we write and say,” Lunsford said.

She added that when it comes to quality, “the buck stops with you in some ways. If you’re not trying to control your own quality, you’re expecting other people to do it for you. I think that’s a big cop-out.”

That shift in editorial control is part of the contemporary reality of writing.

“Young people today,” she said, “want not just their voices to be heard. They want some control and some authority – some authorship.”

Angela Becerra Vidergar, who received her doctorate in comparative literature from Stanford in 2013, writes about the Humanities at Stanford.

Source: Stanford University

Simple isn’t better when talking about science, Stanford philosopher suggests

Taking a philosophical approach to the assumptions that surround the study of human behavior, Stanford philosophy Professor Helen Longino suggests that no single research method is capable of answering the question of nature vs. nurture.


 

By Barbara Wilcox

Studies of the origins of human sexuality and aggression are typically in the domain of the sciences, where researchers examine genetic, neurobiological, social and environmental factors.

Behavioral research findings draw intense interest from other researchers, policymakers and the general public. But Stanford’s Helen E. Longino, the Clarence Irving Lewis Professor of Philosophy, says there’s more to the story.

Longino, who specializes in the philosophy of science, asserts in her latest book that the limitations of behavioral research are not clearly communicated in academic or popular discourse. As a result, the scope of current behavioral research is routinely misrepresented.

In her book Studying Human Behavior: How Scientists Investigate Aggression and Sexuality, Longino examines five common scientific approaches to the study of behavior – quantitative behavioral genetics, molecular behavioral genetics, developmental psychology, neurophysiology and anatomy, and social/environmental methods.

Applying the analytical tools of philosophy, Longino defines what is – and is not – measured by each of these approaches. She also reflects on how this research is depicted in academic and popular media.

In her analysis of citations of behavioral research, Longino found that the demands of journalism and of the culture at large favor science with a very simple storyline. Research that looks for a single “warrior gene” or a “gay gene,” for example, receives more attention in both popular and scholarly media than research that integrates multiple scientific approaches or disciplines.

Longino spoke with the Stanford News Service about why it is important for scientists and the public to understand the parameters of behavioral research:

 

Your research suggests that social-science researchers are not adequately considering the limitations of their processes and findings. To what do you attribute this phenomenon?

The sciences have become hyper-specialized. Scientists rarely have the opportunity or support to step back from their research and ask how it connects with other work on similar topics. I see one role of philosophers of science as the provision of that larger, interpretive picture. This is not to say that there is one correct interpretation, rather that as philosophers we can show that the interpretive questions are askable.

 

Why study behavioral research through a philosophic lens?

Philosophy deals, in part, with the study of how things are known. A philosopher can ask, “What are the grounds for believing any of the claims here? What are the relationships between these approaches? The differences? What can we learn? What can this way of thinking not tell us?”

These are the questions I asked of each article I read. I developed a grid system for analyzing and recording the way the behavior under study was defined and measured, the correlational or observational data developed – including the size and character of the sample population – and the hypotheses evaluated.

 

What about your findings do you think would surprise people most?

I went into the project thinking that what would differentiate each approach was its definition of behavior. As the patterns emerged, I saw that what differentiated each approach was how it characterized the range of possible causal factors.

Because each approach characterized this range differently, the measurements of different research approaches were not congruent. Thus, their results could not be combined or integrated or treated as empirical competitors. But this is what is required if the nature vs. nurture – or nature and nurture – question is to be meaningful.

I also investigated the representation of this research in public media. I found that research that locates the roots of behavior in the individual is cited far more often than population-based studies, and that research that cites genetic or neurobiological factors is cited more frequently than research into social or environmental influences on behavior. Interestingly, science journalists fairly consistently described biological studies as being more fruitful and promising than studies into social factors of behavior.

Social research was always treated as “terminally inconclusive,” using terms that amount to “we’ll never get an answer.” Biological research was always treated as being a step “on the road to knowledge.”

 

What prompted you to begin the research that became Studying Human Behavior?

In 1992, an East Coast conference on “genetic factors and crime” was derailed under pressure from activists and the Congressional Black Caucus, which feared that the findings being presented might be misused to find a racial basis for crime or links between race and intelligence. I became interested in the conceptual and theoretical foundations of the conference – the voiced and unvoiced assumptions made by both the conference participants and by the activists, policymakers and other users of the research.

 

Why did you pair human aggression and sexuality as a subject for a book?

While I started with the research on aggression, research on sexual orientation started popping up in the news and I wanted to include research on at least two behaviors or families of behavior in order to avoid being misled by potential sample bias. Of course, these behaviors are central to social life, so how we try to understand them is intrinsically interesting.

 

What could science writers be doing better?

Articles in the popular media, such as the science sections of newspapers, rarely discuss the methodology of studies that they cover as news. Yet methodology and the disciplinary approach of the scientists doing the research are critical because they frame the question.

For example, quantitative behavioral genetics research will consider a putatively shared genome against social factors such as birth order, parental environment and socioeconomic status. Molecular genetics research seeks to associate specific traits with specific alleles or combinations within the genome, but the social factors examined by quantitative behavioral genetics lie outside its purview. Neurobiological research might occupy a middle ground. But no single approach or even a combination of approaches can measure all the factors that bear on a behavior.

It’s also important to know that often, behavior is not what’s being studied. It’s a tool, not the subject. The process of serotonin re-uptake, for example, may be of primary interest to the researcher, not the behavior that it yields. Yet behavior is what’s being reported.

 

What advice do you have for people who might be concerned about potential political ramifications of research into sexuality or aggression?

I see political ramifications in what is not studied.

In studying sexual orientation, the 7-point Kinsey scale was an improvement over a previous binary measure of orientation. Researchers employing the Kinsey scale still tend to find greater concentrations at the extremes. Middle points still get dropped out of the analysis. In addition to more attention to intermediates on the scale, there could be focus on other dimensions of erotic orientation in addition to, or instead of, the sex of the individual to which one is attracted.

Similarly, there are a number of standard ways to measure aggressive response, but they are all focused on the individual. Collective action is not incorporated. If the interest in studying aggression is to shed light on crime, there’s a whole lot of behavior that falls outside that intersection, including white-collar crime and state- or military-sponsored crime.

 

What other fields of inquiry could benefit from your findings?

Climate study is as complex as behavioral study. We’d have a much better debate about climate change if we were not looking for a single answer or silver bullet. The public should understand the complexities that the IPCC [Intergovernmental Panel on Climate Change] must cope with in producing its findings.

Source: Stanford News Service