
Electrical and computer engineering Professor Barry Van Veen wears an electrode net used to monitor brain activity via EEG signals. His research could help untangle what happens in the brain during sleep and dreaming.

Photo Credit: Nick Berard/UW-Madison

Imagination, reality flow in opposite directions in the brain

By Scott Gordon


As real as that daydream may seem, its path through your brain runs opposite reality.

Aiming to discern discrete neural circuits, researchers at the University of Wisconsin–Madison have tracked electrical activity in the brains of people who alternately imagined scenes or watched videos.

“A really important problem in brain research is understanding how different parts of the brain are functionally connected. What areas are interacting? What is the direction of communication?” says Barry Van Veen, a UW-Madison professor of electrical and computer engineering. “We know that the brain does not function as a set of independent areas, but as a network of specialized areas that collaborate.”

Van Veen, along with Giulio Tononi, a UW-Madison psychiatry professor and neuroscientist, Daniela Dentico, a scientist at UW–Madison’s Waisman Center, and collaborators from the University of Liege in Belgium, published results recently in the journal NeuroImage. Their work could lead to the development of new tools to help Tononi untangle what happens in the brain during sleep and dreaming, while Van Veen hopes to apply the study’s new methods to understand how the brain uses networks to encode short-term memory.

During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe — from a higher-order region that combines inputs from several of the senses out to a lower-order region.


In contrast, visual information taken in by the eyes tends to flow from the occipital lobe — which makes up much of the brain’s visual cortex — “up” to the parietal lobe.

“There seems to be a lot in our brains and animal brains that is directional, that neural signals move in a particular direction, then stop, and start somewhere else,” says Van Veen. “I think this is really a new theme that had not been explored.”

The researchers approached the study as an opportunity to test the power of electroencephalography (EEG) — which uses sensors on the scalp to measure underlying electrical activity — to discriminate between different parts of the brain’s network.

Brains are rarely quiet, though, and EEG tends to record plenty of activity not necessarily related to a particular process researchers want to study.

To zero in on a set of target circuits, the researchers asked some subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle — focusing on the details of shapes, colors and textures — before watching a short video of silent nature scenes.

Using an algorithm Van Veen developed to parse the detailed EEG data, the researchers were able to compile strong evidence of the directional flow of information.
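The article doesn’t spell out the algorithm itself, but a standard way to ask “in which direction does information flow?” between paired signals is Granger causality: if the past of one channel improves prediction of another channel beyond that channel’s own past, information is flowing from the first to the second. The sketch below illustrates the idea on synthetic data; the channel names, lag order, and least-squares fitting are illustrative assumptions, not Van Veen’s actual method.

```python
# A minimal sketch of directed-connectivity estimation between two EEG
# channels using pairwise Granger causality. This is one standard approach,
# not the algorithm described in the paper.
import numpy as np

def ar_residual_var(target, predictors, order=5):
    """Fit a least-squares autoregressive model predicting `target` from
    `order` past samples of each signal in `predictors`; return the
    residual variance of the fit."""
    n = len(target)
    rows = []
    for t in range(order, n):
        rows.append(np.concatenate([p[t - order:t] for p in predictors]))
    X = np.asarray(rows)
    y = target[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ coeffs).var()

def granger_strength(source, target, order=5):
    """How much knowing the source's past improves prediction of the target
    beyond the target's own past. Larger = stronger source -> target flow."""
    var_self = ar_residual_var(target, [target], order)
    var_both = ar_residual_var(target, [target, source], order)
    return np.log(var_self / var_both)

# Toy usage: a synthetic "occipital" signal drives a "parietal" signal with
# a short delay, mimicking the bottom-up flow seen while watching video.
rng = np.random.default_rng(0)
occipital = rng.standard_normal(2000)
parietal = 0.8 * np.roll(occipital, 3) + 0.3 * rng.standard_normal(2000)
print("occipital -> parietal:", granger_strength(occipital, parietal))
print("parietal -> occipital:", granger_strength(parietal, occipital))
```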

“We were very interested in seeing if our signal-processing methods were sensitive enough to discriminate between these conditions,” says Van Veen, whose work is supported by the National Institute of Biomedical Imaging and Bioengineering. “These types of demonstrations are important for gaining confidence in new tools.”

Source: UW-Madison News

Recommendation theory

Model for evaluating product-recommendation algorithms suggests that trial and error get it right.

By Larry Hardesty

Devavrat Shah’s group at MIT’s Laboratory for Information and Decision Systems (LIDS) specializes in analyzing how social networks process information. In 2012, the group demonstrated algorithms that could predict what topics would trend on Twitter up to five hours in advance; this year, they used the same framework to predict fluctuations in the prices of the online currency known as Bitcoin.

Next month, at the Conference on Neural Information Processing Systems, they’ll present a paper that applies their model to the recommendation engines that are familiar from websites like Amazon and Netflix — with surprising results.

“Our interest was, we have a nice model for understanding data-processing from social data,” says Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science. “It makes sense in terms of how people make decisions, exhibit preferences, or take actions. So let’s go and exploit it and design a better, simple, basic recommendation algorithm, and it will be something very different. But it turns out that under that model, the standard recommendation algorithm is the right thing to do.”

The standard algorithm is known as “collaborative filtering.” To get a sense of how it works, imagine a movie-streaming service that lets users rate movies they’ve seen. To generate recommendations specific to you, the algorithm would first assign the other users similarity scores based on the degree to which their ratings overlap with yours. Then, to predict your response to a particular movie, it would aggregate the ratings the movie received from other users, weighted according to similarity scores.

To simplify their analysis, Shah and his collaborators — Guy Bresler, a postdoc in LIDS, and George Chen, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS) who is co-advised by Shah and EECS associate professor Polina Golland — assumed that the ratings system had two values, thumbs-up or thumbs-down. The taste of every user could thus be described, with perfect accuracy, by a string of ones and zeroes, where the position in the string corresponds to a particular movie and the number at that location indicates the rating.
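To make those two paragraphs concrete, here is a minimal sketch of user-based collaborative filtering on thumbs-up/thumbs-down ratings: each other user gets a similarity score based on agreement with you over commonly rated movies, and your predicted response to an unseen movie is a similarity-weighted average of their votes. The tiny rating matrix is invented for illustration.

```python
import numpy as np

ratings = np.array([   # rows = users, cols = movies; 1 = thumbs-up, 0 = thumbs-down
    [1, 1, 0, np.nan], # NaN marks a movie the user has not rated
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
])

def similarity(u, v):
    """Fraction of commonly rated movies on which users u and v agree."""
    both = ~np.isnan(u) & ~np.isnan(v)
    if not both.any():
        return 0.0
    return (u[both] == v[both]).mean()

def predict(user_idx, movie_idx):
    """Similarity-weighted average of other users' ratings for one movie."""
    me = ratings[user_idx]
    num = den = 0.0
    for other_idx, other in enumerate(ratings):
        if other_idx == user_idx or np.isnan(other[movie_idx]):
            continue
        w = similarity(me, other)
        num += w * other[movie_idx]
        den += w
    return num / den if den else np.nan

# Will user 0 like movie 3? A score near 1 predicts thumbs-up.
print(predict(0, 3))
```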

Birds of a feather

The MIT researchers’ model assumes that large groups of such strings can be clustered together, and that those clusters can be described probabilistically. Rather than ones and zeroes at each location in the string, a probabilistic cluster model would feature probabilities: an 80 percent chance that the members of the cluster will like movie “A,” a 20 percent chance that they’ll like movie “B,” and so on.

The question is how many such clusters are required to characterize a population. If half the people who like “Die Hard” also like “Shakespeare in Love,” but the other half hate it, then ideally, you’d like to split “Die Hard” fans into two clusters. Otherwise, you’d lose correlations between their preferences that could be predictively useful. On the other hand, the more clusters you have, the more ratings you need to determine which of them a given user belongs to. Reliable prediction from limited data becomes impossible.
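In code, the cluster model amounts to a table of per-movie like-probabilities, one row per cluster, and classifying a user amounts to asking which row makes their rating string most likely. A minimal sketch, with two invented clusters:

```python
import numpy as np

# Per-movie like-probabilities for two hypothetical clusters
# (say, action fans vs. romance fans); values are made up.
clusters = np.array([
    [0.9, 0.8, 0.2, 0.1],   # cluster 0
    [0.2, 0.3, 0.9, 0.8],   # cluster 1
])

def log_likelihood(user_ratings, probs):
    """Log-probability of a 0/1 rating string under one cluster's
    independent per-movie like-probabilities."""
    return np.sum(user_ratings * np.log(probs)
                  + (1 - user_ratings) * np.log(1 - probs))

def assign_cluster(user_ratings):
    """Index of the maximum-likelihood cluster for a user."""
    scores = [log_likelihood(user_ratings, c) for c in clusters]
    return int(np.argmax(scores))

user = np.array([1, 1, 0, 0])   # thumbs-up on the first two movies
print(assign_cluster(user))     # -> 0
```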

In their new paper, the MIT researchers show that so long as the number of clusters required to describe the variation in a population is low, collaborative filtering yields nearly optimal predictions. But in practice, how low is that number?

To answer that question, the researchers examined data on 10 million users of a movie-streaming site and identified 200 who had rated the same 500 movies. They found that, in fact, just five clusters — five probabilistic models — were enough to account for most of the variation in the population.

Missing links

While the researchers’ model corroborates the effectiveness of collaborative filtering, it also suggests ways to improve it. In general, the more information a collaborative-filtering algorithm has about users’ preferences, the more accurate its predictions will be. But not all additional information is created equal. If a user likes “The Godfather,” the information that he also likes “The Godfather: Part II” will probably have less predictive power than the information that he also likes “The Notebook.”

Using their analytic framework, the LIDS researchers show how to select a small number of products that carry a disproportionate amount of information about users’ tastes. If the service provider recommended those products to all its customers, then, based on the resulting ratings, it could much more efficiently sort them into probability clusters, which should improve the quality of its recommendations.
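The article doesn’t describe the researchers’ actual selection criterion, but one natural proxy for “a product that carries a disproportionate amount of information” is a product the clusters disagree about most, since a rating on it does the most to reveal which cluster a user belongs to. A sketch under that assumption, scoring each movie by the spread of its like-probability across clusters:

```python
import numpy as np

# Per-movie like-probabilities for three hypothetical clusters.
clusters = np.array([
    [0.9, 0.5, 0.2, 0.5],
    [0.2, 0.5, 0.9, 0.5],
    [0.9, 0.5, 0.9, 0.5],
])

def informativeness(probs_by_cluster):
    """Score each product by how much the clusters disagree about it:
    the standard deviation of its like-probability across clusters."""
    return probs_by_cluster.std(axis=0)

def pick_probe_products(probs_by_cluster, k=2):
    """Indices of the k products to recommend to everyone, so the
    resulting ratings sort users into clusters most efficiently."""
    scores = informativeness(probs_by_cluster)
    return np.argsort(scores)[::-1][:k]

print(pick_probe_products(clusters))  # movies 0 and 2: the clusters split on them
```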

Sujay Sanghavi, an associate professor of electrical and computer engineering at the University of Texas at Austin, considers this the most interesting aspect of the research. “If you do some kind of collaborative filtering, two things are happening,” he says. “I’m getting value from it as a user, but other people are getting value, too. Potentially, there is a trade-off between these things. If there’s a popular movie, you can easily show that I’ll like it, but it won’t improve the recommendations for other people.”

That trade-off, Sanghavi says, “has been looked at in an empirical context, but there’s been nothing that’s principled. To me, what is appealing about this paper is that they have a principled look at this issue, which no other work has done. They’ve found a new kind of problem. They are looking at a new issue.”

Source: MIT News


Neuroscientists reverse memories’ emotional associations

MIT study also identifies the brain circuit that links feelings to memories.

By Anne Trafton

Most memories have some kind of emotion associated with them: Recalling the week you just spent at the beach probably makes you feel happy, while reflecting on being bullied provokes more negative feelings.

A new study from MIT neuroscientists reveals the brain circuit that controls how memories become linked with positive or negative emotions. Furthermore, the researchers found that they could reverse the emotional association of specific memories by manipulating brain cells with optogenetics — a technique that uses light to control neuron activity.

The findings, described in the Aug. 27 issue of Nature, demonstrated that a neuronal circuit connecting the hippocampus and the amygdala plays a critical role in associating emotion with memory. This circuit could offer a target for new drugs to help treat conditions such as post-traumatic stress disorder, the researchers say.

“In the future, one may be able to develop methods that help people to remember positive memories more strongly than negative ones,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory, and senior author of the paper.

This image depicts the injection sites and the expression of the viral constructs in the two areas of the brain studied: the Dentate Gyrus of the hippocampus (middle) and the Basolateral Amygdala (bottom corners).
Credits: Image courtesy of the researchers/MIT

The paper’s lead authors are Roger Redondo, a Howard Hughes Medical Institute postdoc at MIT, and Joshua Kim, a graduate student in MIT’s Department of Biology.

Shifting memories

Memories are made of many elements, which are stored in different parts of the brain. A memory’s context, including information about the location where the event took place, is stored in cells of the hippocampus, while emotions linked to that memory are found in the amygdala.

Previous research has shown that many aspects of memory, including emotional associations, are malleable. Psychotherapists have taken advantage of this to help patients suffering from depression and post-traumatic stress disorder, but the neural circuitry underlying such malleability is not known.

In this study, the researchers set out to explore that malleability with an experimental technique they recently devised that allows them to tag neurons that encode a specific memory, or engram. To achieve this, they label hippocampal cells that are turned on during memory formation with a light-sensitive protein called channelrhodopsin. From that point on, any time those cells are activated with light, the mice recall the memory encoded by that group of cells.

Last year, Tonegawa’s lab used this technique to implant, or “incept,” false memories in mice by reactivating engrams while the mice were undergoing a different experience. In the new study, the researchers wanted to investigate how the context of a memory becomes linked to a particular emotion. First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock). In this first set of experiments, the researchers labeled memory cells in a part of the hippocampus called the dentate gyrus.

Two days later, the mice were placed into a large rectangular arena. For three minutes, the researchers recorded which half of the arena the mice naturally preferred. Then, for mice that had received the fear conditioning, the researchers stimulated the labeled cells in the dentate gyrus with light whenever the mice went into the preferred side. The mice soon began avoiding that area, showing that the reactivation of the fear memory had been successful.

The reward memory could also be reactivated: For mice that were reward-conditioned, the researchers stimulated them with light whenever they went into the less-preferred side, and they soon began to spend more time there, recalling the pleasant memory.

A couple of days later, the researchers tried to reverse the mice’s emotional responses. For male mice that had originally received the fear conditioning, they activated the memory cells involved in the fear memory with light for 12 minutes while the mice spent time with female mice. For mice that had initially received the reward conditioning, memory cells were activated while they received mild electric shocks.

Next, the researchers again put the mice in the large two-zone arena. This time, the mice that had originally been conditioned with fear and had avoided the side of the chamber where their hippocampal cells were activated by the laser now began to spend more time in that side when their hippocampal cells were activated, showing that a pleasant association had replaced the fearful one. This reversal also took place in mice that went from reward to fear conditioning.

Altered connections

The researchers then performed the same set of experiments but labeled memory cells in the basolateral amygdala, a region involved in processing emotions. This time, they could not induce a switch by reactivating those cells — the mice continued to behave as they had been conditioned when the memory cells were first labeled.

This suggests that emotional associations, also called valences, are encoded somewhere in the neural circuitry that connects the dentate gyrus to the amygdala, the researchers say. A fearful experience strengthens the connections between the hippocampal engram and fear-encoding cells in the amygdala, but that connection can be weakened later on as new connections are formed between the hippocampus and amygdala cells that encode positive associations.

“That plasticity of the connection between the hippocampus and the amygdala plays a crucial role in the switching of the valence of the memory,” Tonegawa says.

These results indicate that while dentate gyrus cells are neutral with respect to emotion, individual amygdala cells are precommitted to encode fear or reward memory. The researchers are now trying to discover molecular signatures of these two types of amygdala cells. They are also investigating whether reactivating pleasant memories has any effect on depression, in hopes of identifying new targets for drugs to treat depression and post-traumatic stress disorder.

David Anderson, a professor of biology at the California Institute of Technology, says the study makes an important contribution to neuroscientists’ fundamental understanding of the brain and also has potential implications for treating mental illness.

“This is a tour de force of modern molecular-biology-based methods for analyzing processes, such as learning and memory, at the neural-circuitry level. It’s one of the most sophisticated studies of this type that I’ve seen,” he says.

The research was funded by the RIKEN Brain Science Institute, Howard Hughes Medical Institute, and the JPB Foundation.


Our connection to content

Using neuroscience tools, Innerscope Research explores the connections between consumers and media.

By Rob Matheson


It’s often said that humans are wired to connect: The neural wiring that helps us read the emotions and actions of other people may be a foundation for human empathy.

But for the past eight years, MIT Media Lab spinout Innerscope Research has been using neuroscience technologies that gauge subconscious emotions by monitoring brain and body activity to show just how powerfully we also connect to media and marketing communications.

“We are wired to connect, but that connection system is not very discriminating. So while we connect with each other in powerful ways, we also connect with characters on screens and in books, and, we found, we also connect with brands, products, and services,” says Innerscope’s chief science officer, Carl Marci, a social neuroscientist and former Media Lab researcher.

With this core philosophy, Innerscope — co-founded at MIT by Marci and Brian Levine MBA ’05 — aims to offer market research that’s more advanced than traditional methods, such as surveys and focus groups, to help content-makers shape authentic relationships with their target consumers.

“There’s so much out there, it’s hard to make something people will notice or connect to,” Levine says. “In a way, we aim to be the good matchmaker between content and people.”

So far, it’s drawn some attention. The company has conducted hundreds of studies and more than 100,000 content evaluations for its host of Fortune 500 clients, including Campbell’s Soup, Yahoo, and Fox Television.

And Innerscope’s studies are beginning to provide valuable insights into the way consumers connect with media and advertising. Take, for instance, its recent project to measure audience engagement with television ads that aired during the Super Bowl.

Innerscope first used biometric sensors to capture fluctuations in heart rate, skin conductance, breathing, and motion among 80 participants who watched select ads, sorting the ads into “winning” and “losing” commercials based on the emotional responses they provoked. Then their collaborators at Temple University’s Center for Neural Decision Making used functional magnetic resonance imaging (fMRI) brain scans to further measure engagement.

Ads that performed well elicited increased neural activity in the amygdala (which drives emotions), superior temporal gyrus (sensory processing), hippocampus (memory formation), and lateral prefrontal cortex (behavioral control).

“But what was really interesting was the high levels of activity in the area known as the precuneus — involved in feelings of self-consciousness — where it is believed that we keep our identity. The really powerful ads generated a heightened sense of personal identification,” Marci says.

Using neuroscience to understand marketing communications and, ultimately, consumers’ purchasing decisions is still at a very early stage, Marci admits — but the Super Bowl study and others like it represent real progress. “We’re right at the cusp of coherent, neuroscience-informed measures of how ad engagement works,” he says.

Capturing “biometric synchrony”

Innerscope’s arsenal consists of 10 tools: Electroencephalography and fMRI technologies measure brain waves and structures. Biometric tools — such as wristbands and attachable sensors — track heart rate, skin conductance, motion, and respiration, which reflect emotional processing. And then there’s eye-tracking, voice-analysis, and facial-coding software, as well as other tests to complement these measures.

Such technologies were used for market research long before the rise of Innerscope. But, starting at MIT, Marci and Levine began developing novel algorithms, informed by neuroscience, that find trends among audiences pointing to exact moments when an audience is engaged together — in other words, in “biometric synchrony.”

Traditional algorithms for such market research would average the responses of entire audiences, Levine explains. “What you get is an overall level of arousal — basically, did they love or hate the content?” he says. “But how is that emotion going to be useful? That’s where the hole was.”

Innerscope’s algorithms tease out real-time detail from individual reactions — comprising anywhere from 500 million to 1 billion data points — to locate instances when groups’ responses (such as surprise, excitement, or disappointment) collectively match.
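Innerscope’s actual algorithms aren’t public, but one simple way to score this kind of group agreement is to slide a window along each participant’s signal and compute the mean pairwise correlation within the window, so that moments when everyone’s physiology moves together stand out as spikes. A toy sketch under that assumption:

```python
import numpy as np
from itertools import combinations

def synchrony(signals, window=50):
    """signals: (participants, samples) array, e.g. skin conductance.
    For each non-overlapping window, return the mean pairwise Pearson
    correlation across participants: high values = shared response."""
    n_people, n_samples = signals.shape
    scores = []
    for start in range(0, n_samples - window + 1, window):
        chunk = signals[:, start:start + window]
        corrs = [np.corrcoef(chunk[i], chunk[j])[0, 1]
                 for i, j in combinations(range(n_people), 2)]
        scores.append(np.mean(corrs))
    return np.array(scores)

# Toy usage: five viewers with independent noise, plus a shared "surprise"
# response injected midway through the clip.
rng = np.random.default_rng(1)
signals = rng.standard_normal((5, 500))
shared = np.zeros(500)
shared[250:300] = 3 * np.sin(np.linspace(0, np.pi, 50))
signals += shared
print(np.round(synchrony(signals), 2))  # spikes at window 5, where the shared response occurs
```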

As an example, Levine references an early test conducted using an episode of the television show “Lost,” where a group of strangers are stranded on a tropical island.

Levine and Marci attached biometric sensors to six separate groups of five participants. At the long-anticipated moment when the show’s “monster” is finally revealed, nearly everyone held their breath for about 10 to 15 seconds.

“What our algorithms are looking for is this group response. The more similar the group response, the more likely the stimuli is creating that response,” Levine explains. “That allows us to understand if people are paying attention and if they’re going on a journey together.”

Getting on the map

Before MIT, Marci was a neuroscientist studying empathy, using biometric sensors and other means to explore how empathy between patient and doctor can improve patient health.

“I was lugging around boxes of equipment, with wires coming out and videotaping patients and doctors. Then someone said, ‘Hey, why don’t you just go to the MIT Media Lab,’” Marci says. “And I realized it had the resources I needed.”

At the Media Lab, Marci met behavioral analytics expert and collaborator Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who helped him set up Bluetooth sensors around Massachusetts General Hospital to track emotions and empathy between doctors and patients with depression.

During this time, Levine, a former Web developer, had enrolled at MIT, splitting his time between the MIT Sloan School of Management and the Media Lab. “I wanted to merge an idea to understand customers better with being able to prototype anything,” he says.

After meeting Marci through a digital anthropology class, Levine proposed that they use this emotion-tracking technology to measure the connections of audiences to media. Using prototype sensor vests equipped with heart-rate monitors, stretch receptors, accelerometers, and skin-conductivity sensors, they trialed the technology with students around the Media Lab.

All the while, Levine pieced together Innerscope’s business plan in his classes at MIT Sloan, with help from other students and professors. “The business-strategy classes were phenomenal for that,” Levine says. “Right after finishing MIT, I had a complete and detailed business plan in my hands.”

Innerscope launched in 2006. But a 2008 study really accelerated the company’s growth. “NBC Universal had a big concern at the time: DVR,” Marci says. “Were people who were watching the prerecorded program still remembering the ads, even though they were clearly skipping them?”

Innerscope compared facial cues and biometrics from people who fast-forwarded ads against those who didn’t. The results were unexpected: While fast-forwarding, people stared at the screen blankly, but their eyes actually caught relevant brands, characters, and text. And because they didn’t want to miss their show, they were even more engaged while fast-forwarding, signaled by leaning forward and staring fixedly.

“What we concluded was that people don’t skip ads,” Marci says. “They’re processing them in a different way, but they’re still processing those ads. That was one of those insights you couldn’t get from a survey. That put us on the map.”

Today, Innerscope is looking to expand. One project is bringing kiosks to malls and movie theaters, where the company recruits passersby for fast and cost-effective results. (Wristbands monitor emotional response, while cameras capture facial cues and eye motion.) The company is also aiming to try applications in mobile devices, wearables, and at-home sensors.

“We’re rewiring a generation of Americans in novel ways and moving toward a world of ubiquitous sensing,” Marci says. “We’ll need data science and algorithms and experts that can make sense of all that data.”

Source: MIT News Office