Monthly Archives: July 2014

Simple isn’t better when talking about science, Stanford philosopher suggests

Taking a philosophical approach to the assumptions that surround the study of human behavior, Stanford philosophy Professor Helen Longino suggests that no single research method is capable of answering the question of nature vs. nurture.


 

By Barbara Wilcox

Studies of the origins of human sexuality and aggression are typically in the domain of the sciences, where researchers examine genetic, neurobiological, social and environmental factors.

Behavioral research findings draw intense interest from other researchers, policymakers and the general public. But Stanford’s Helen E. Longino, the Clarence Irving Lewis Professor of Philosophy, says there’s more to the story.

Longino, who specializes in the philosophy of science, asserts in her latest book that the limitations of behavioral research are not clearly communicated in academic or popular discourse, and that this silence distorts how the scope of current behavioral research is understood.

In her book Studying Human Behavior: How Scientists Investigate Aggression and Sexuality, Longino examines five common scientific approaches to the study of behavior – quantitative behavioral genetics, molecular behavioral genetics, developmental psychology, neurophysiology and anatomy, and social/environmental methods.

Applying the analytical tools of philosophy, Longino defines what is – and is not – measured by each of these approaches. She also reflects on how this research is depicted in academic and popular media.

In her analysis of citations of behavioral research, Longino found that the demands of journalism and of the culture at large favor science with a very simple storyline. Research that looks for a single “warrior gene” or a “gay gene,” for example, receives more attention in both popular and scholarly media than research that integrates several approaches or disciplines.

Longino spoke with the Stanford News Service about why it is important for scientists and the public to understand the parameters of behavioral research:

 

Your research suggests that social-science researchers are not adequately considering the limitations of their processes and findings. To what do you attribute this phenomenon?

The sciences have become hyper-specialized. Scientists rarely have the opportunity or support to step back from their research and ask how it connects with other work on similar topics. I see one role of philosophers of science as providing that larger, interpretive picture. This is not to say that there is one correct interpretation; rather, as philosophers we can show that the interpretive questions are askable.

 

Why study behavioral research through a philosophic lens?

Philosophy deals, in part, with the study of how things are known. A philosopher can ask, “What are the grounds for believing any of the claims here? What are the relationships between these approaches? The differences? What can we learn? What can this way of thinking not tell us?”

These are the questions I asked of each article I read. I developed a grid system for analyzing and recording how the behavior under study was defined and measured, what correlational or observational data – including the size and character of the sample population – were developed, and which hypotheses were evaluated.

 

What about your findings do you think would surprise people most?

I went into the project thinking that what would differentiate each approach was its definition of behavior. As the patterns emerged, I saw that what differentiated each approach was instead how it characterized the range of possible causal factors.

Because each approach characterized this range differently, the measurements of different research approaches were not congruent. Thus, their results could not be combined or integrated or treated as empirical competitors. But this is what is required if the nature vs. nurture – or nature and nurture – question is to be meaningful.

I also investigated the representation of this research in public media. I found that research that locates the roots of behavior in the individual is cited far more often than population-based studies, and that research that cites genetic or neurobiological factors is cited more frequently than research into social or environmental influences on behavior. Interestingly, science journalists fairly consistently described biological studies as being more fruitful and promising than studies into social factors of behavior.

Social research was always treated as “terminally inconclusive,” using terms that amount to “we’ll never get an answer.” Biological research was always treated as being a step “on the road to knowledge.”

 

What prompted you to begin the research that became Studying Human Behavior?

In 1992, an East Coast conference on “genetic factors and crime” was derailed under pressure from activists and the Congressional Black Caucus, which feared that the findings being presented might be misused to find a racial basis for crime or links between race and intelligence. I became interested in the conceptual and theoretical foundations of the conference – the voiced and unvoiced assumptions made by both the conference participants and by the activists, policymakers and other users of the research.

 

Why did you pair human aggression and sexuality as a subject for a book?

While I started with the research on aggression, research on sexual orientation started popping up in the news and I wanted to include research on at least two behaviors or families of behavior in order to avoid being misled by potential sample bias. Of course, these behaviors are central to social life, so how we try to understand them is intrinsically interesting.

 

What could science writers be doing better?

Articles in the popular media, such as the science sections of newspapers, rarely discuss the methodology of studies that they cover as news. Yet methodology and the disciplinary approach of the scientists doing the research are critical because they frame the question.

For example, quantitative behavioral genetics research will consider a putatively shared genome against social factors such as birth order, parental environment and socioeconomic status. Molecular genetics research seeks to associate specific traits with specific alleles or combinations within the genome, but the social factors examined by quantitative behavioral genetics lie outside its purview. Neurobiological research might occupy a middle ground. But no single approach or even a combination of approaches can measure all the factors that bear on a behavior.

It’s also important to know that often, behavior is not what’s being studied. It’s a tool, not the subject. The process of serotonin re-uptake, for example, may be of primary interest to the researcher, not the behavior that it yields. Yet behavior is what’s being reported.

 

What advice do you have for people who might be concerned about potential political ramifications of research into sexuality or aggression?

I see political ramifications in what is not studied.

In studying sexual orientation, the 7-point Kinsey scale was an improvement over a previous binary measure of orientation. Researchers employing the Kinsey scale still tend to find greater concentrations at the extremes, and the middle points still get dropped out of the analysis. Beyond paying more attention to intermediates on the scale, researchers could focus on other dimensions of erotic orientation in addition to, or instead of, the sex of the individual to whom one is attracted.

Similarly, there are a number of standard ways to measure aggressive response, but they are all focused on the individual. Collective action is not incorporated. If the interest in studying aggression is to shed light on crime, there’s a whole lot of behavior that falls outside that intersection, including white-collar crime and state- or military-sponsored crime.

 

What other fields of inquiry could benefit from your findings?

Climate study is as complex as behavioral study. We’d have a much better debate about climate change if we were not looking for a single answer or silver bullet. The public should understand the complexities that the IPCC [Intergovernmental Panel on Climate Change] must cope with in producing its findings.

Source: Stanford News Service

Collecting just the right data: MIT Research

When you can’t collect all the data you need, a new algorithm tells you which to target.

Larry Hardesty | MIT News Office


Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix.

But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications — such as trying to predict the path of a storm — there may just not be enough time to crunch all the available data.

Dan Levine, an MIT graduate student in aeronautics and astronautics, and his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, have developed a new technique that could help with both problems. For a range of common applications in which data is either difficult to collect or too time-consuming to process, the technique can identify the subset of data items that will yield the most reliable predictions. So geologists trying to assess the extent of underground petroleum deposits, or meteorologists trying to forecast the weather, can make do with just a few, targeted measurements, saving time and money.

Levine and How, who presented their work at the Uncertainty in Artificial Intelligence conference this week, consider the special case in which something about the relationships between data items is known in advance. Weather prediction provides an intuitive example: Measurements of temperature, pressure, and wind velocity at one location tend to be good indicators of measurements at adjacent locations, or of measurements at the same location a short time later, but the correlation grows weaker the farther out you move either geographically or chronologically.

Graphic content

Such correlations can be represented by something called a probabilistic graphical model. In this context, a graph is a mathematical abstraction consisting of nodes — typically depicted as circles — and edges — typically depicted as line segments connecting nodes. A network diagram is one example of a graph; a family tree is another. In a probabilistic graphical model, the nodes represent variables, and the edges represent the strength of the correlations between them.
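
To make the abstraction concrete, here is a minimal sketch of such a graph in Python with networkx. The station names and correlation weights are invented for illustration; they are not from Levine and How's paper.

```python
import networkx as nx

# Toy probabilistic graphical model: nodes are weather variables at three
# stations; edge weights stand in for the strength of their correlation.
# All names and numbers are illustrative.
G = nx.Graph()
G.add_edge("temp_A", "temp_B", weight=0.9)  # adjacent stations: strong link
G.add_edge("temp_B", "temp_C", weight=0.8)
G.add_edge("temp_A", "temp_C", weight=0.5)  # weaker link that closes a loop

# The triangle temp_A, temp_B, temp_C is a "loop": two distinct paths
# connect temp_A and temp_C, the complication discussed just below.
print(nx.cycle_basis(G))  # one cycle containing all three nodes
```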

Levine and How developed an algorithm that can efficiently calculate just how much information any node in the graph gives you about any other — what in information theory is called “mutual information.” As Levine explains, one of the obstacles to performing that calculation efficiently is the presence of “loops” in the graph, or nodes that are connected by more than one path.
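
For jointly Gaussian variables, the setting described later in the article, the mutual information between a pair of nodes has a closed form in terms of their covariance. The sketch below is my own illustration of that textbook formula, not the authors' algorithm.

```python
import numpy as np

def gaussian_mutual_information(cov):
    """Mutual information (in nats) between two jointly Gaussian variables,
    given their 2x2 covariance matrix:
        I = 0.5 * ln(var_x * var_y / det(cov)),
    which equals -0.5 * ln(1 - rho**2) for correlation rho."""
    return 0.5 * np.log(cov[0, 0] * cov[1, 1] / np.linalg.det(cov))

# A strong correlation carries far more information than a weak one:
print(gaussian_mutual_information(np.array([[1.0, 0.9], [0.9, 1.0]])))  # ~0.83
print(gaussian_mutual_information(np.array([[1.0, 0.1], [0.1, 1.0]])))  # ~0.005
```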

Calculating mutual information between nodes, Levine says, is kind of like injecting blue dye into one of them and then measuring the concentration of blue at the other. “It’s typically going to fall off as we go further out in the graph,” Levine says. “If there’s a unique path between them, then we can compute it pretty easily, because we know what path the blue dye will take. But if there are loops in the graph, then it’s harder for us to compute how blue other nodes are because there are many different paths.”

So the first step in the researchers’ technique is to calculate “spanning trees” for the graph. A tree is just a graph with no loops: In a family tree, for instance, a loop might mean that someone was both parent and sibling to the same person. A spanning tree is a tree that touches all of a graph’s nodes but dispenses with the edges that create loops.
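
networkx can extract such a tree directly. Continuing the toy graph from the earlier sketch: the article does not detail how the researchers choose their spanning trees, so taking the maximum-weight tree is just one illustrative choice. It keeps the strongest correlations and drops the loop-closing edge.

```python
import networkx as nx

G = nx.Graph()
G.add_edge("temp_A", "temp_B", weight=0.9)
G.add_edge("temp_B", "temp_C", weight=0.8)
G.add_edge("temp_A", "temp_C", weight=0.5)  # the edge that closes the loop

# A spanning tree touches every node but contains no loops; maximizing
# total weight discards the weakest loop-closing edge, temp_A to temp_C.
T = nx.maximum_spanning_tree(G)
print(sorted(T.edges()))  # the two strongest edges remain, the loop is gone
```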

Betting the spread

Most of the nodes that remain in the graph, however, are “nuisances,” meaning that they don’t contain much useful information about the node of interest. The key to Levine and How’s technique is a way to use those nodes to navigate the graph without letting their short-range influence distort the long-range calculation of mutual information.

That’s possible, Levine explains, because the probabilities represented by the graph are Gaussian, meaning that they follow the bell curve familiar as the model of, for instance, the dispersion of characteristics in a population. A Gaussian distribution is exhaustively characterized by just two measurements: the average value — say, the average height in a population — and the variance — the rate at which the bell spreads out.

“The uncertainty in the problem is really a function of the spread of the distribution,” Levine says. “It doesn’t really depend on where the distribution is centered in space.” As a consequence, it’s often possible to calculate variance across a probabilistic graphical model without relying on the specific values of the nodes. “The usefulness of data can be assessed before the data itself becomes available,” Levine says.
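
That property follows from the standard Gaussian conditioning formula: after observing a set of variables b, the posterior covariance of the remaining variables a is S_aa - S_ab * S_bb^{-1} * S_ba, which involves only covariances, never the measured values. Here is a minimal numpy sketch with invented numbers, illustrating the principle rather than the authors' code.

```python
import numpy as np

# Joint covariance of [target, candidate measurement]; numbers are invented.
cov = np.array([[2.0, 1.2],
                [1.2, 1.0]])

# Posterior variance of the target after observing the candidate:
#   var_post = S_aa - S_ab * S_bb^{-1} * S_ba
# No measured value appears in the formula, so the payoff of a
# measurement can be scored before the measurement is ever taken.
var_prior = cov[0, 0]
var_post = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]
print(var_prior, var_post)  # 2.0 -> 0.56: uncertainty would shrink by ~72%
```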

Reprinted with permission of MIT News (http://newsoffice.mit.edu/)

 

New observations reveal how stardust forms around a supernova

A group of astronomers has been able to follow stardust being made in real time — during the aftermath of a supernova explosion. For the first time they show that these cosmic dust factories make their grains in a two-stage process, starting soon after the explosion, but continuing for years afterwards. The team used ESO’s Very Large Telescope (VLT) in northern Chile to analyse the light from the supernova SN2010jl as it slowly faded. The new results are published online in the journal Nature on 9 July 2014.

The origin of cosmic dust in galaxies is still a mystery [1]. Astronomers know that supernovae may be the primary source of dust, especially in the early Universe, but it is still unclear how and where dust grains condense and grow. It is also unclear how they avoid destruction in the harsh environment of a star-forming galaxy. But now, observations using ESO’s VLT at the Paranal Observatory in northern Chile are lifting the veil for the first time.

An international team used the X-shooter spectrograph to observe a supernova — known as SN2010jl — nine times in the months following the explosion, and for a tenth time 2.5 years after the explosion, at both visible and near-infrared wavelengths [2]. This unusually bright supernova, the result of the death of a massive star, exploded in the small galaxy UGC 5189A.

“By combining the data from the nine early sets of observations we were able to make the first direct measurements of how the dust around a supernova absorbs the different colours of light,” said lead author Christa Gall from Aarhus University, Denmark. “This allowed us to find out more about the dust than had been possible before.”

The team found that dust formation starts soon after the explosion and continues over a long time period. The new measurements also revealed how big the dust grains are and what they are made of. These discoveries are a step beyond recent results obtained using the Atacama Large Millimeter/submillimeter Array (ALMA), which first detected the remains of a recent supernova brimming with freshly formed dust from the famous supernova 1987A (SN 1987A; eso1401).

Artist’s impression of dust formation around a supernova explosion. Credit: ESO

The team found that dust grains larger than one thousandth of a millimetre in diameter formed rapidly in the dense material surrounding the star. Although still tiny by human standards, this is large for a grain of cosmic dust and the surprisingly large size makes them resistant to destructive processes. How dust grains could survive the violent and destructive environment found in the remnants of supernovae was one of the main open questions of the ALMA paper, which this result has now answered — the grains are larger than expected.

“Our detection of large grains soon after the supernova explosion means that there must be a fast and efficient way to create them,” said co-author Jens Hjorth from the Niels Bohr Institute of the University of Copenhagen, Denmark. He continued: “We really don’t know exactly how this happens.”

But the astronomers think they know where the new dust must have formed: in material that the star expelled into space even before it exploded. As the supernova’s shockwave expanded outwards, it created a cool, dense shell of gas — just the sort of environment where dust grains could seed and grow.

Results from the observations indicate that in a second stage — after several hundred days — an accelerated dust formation process occurs involving ejected material from the supernova. If the dust production in SN2010jl continues to follow the observed trend, by 25 years after the supernova the total mass of dust will be about half the mass of the Sun, similar to the dust mass observed in other supernovae such as SN 1987A.

“Previously, astronomers have seen plenty of dust in supernova remnants left over after the explosions. But they have also only found evidence for small amounts of dust actually being created in the supernova explosions. These remarkable new observations explain how this apparent contradiction can be resolved,” concludes Christa Gall.

Notes

[1] Cosmic dust consists of silicate and amorphous carbon grains — minerals also abundant on Earth. The soot from a candle is very similar to cosmic carbon dust, although the grains in soot are ten or more times bigger than typical cosmic grains.

[2] Light from this supernova was first seen in 2010, as is reflected in the name, SN 2010jl. It is classed as a Type IIn supernova. Supernovae classified as Type II result from the violent explosion of a massive star with at least eight times the mass of the Sun. The subtype of a Type IIn supernova — “n” denotes narrow — shows narrow hydrogen lines in its spectrum. These lines result from the interaction between the material ejected by the supernova and the material already surrounding the star.

More information

This research was presented in a paper “Rapid formation of large dust grains in the luminous supernova SN 2010jl”, by C. Gall et al., to appear online in the journal Nature on 9 July 2014.

The team is composed of Christa Gall (Department of Physics and Astronomy, Aarhus University, Denmark; Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark; Observational Cosmology Lab, NASA Goddard Space Flight Center, USA), Jens Hjorth (Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark), Darach Watson (Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark), Eli Dwek (Observational Cosmology Lab, NASA Goddard Space Flight Center, USA), Justyn R. Maund (Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, UK; Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark; Department of Physics and Astronomy, University of Sheffield, UK), Ori Fox (Department of Astronomy, University of California, Berkeley, USA), Giorgos Leloudas (The Oskar Klein Centre, Department of Physics, Stockholm University, Sweden; Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark), Daniele Malesani (Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark) and Avril C. Day-Jones (Departamento de Astronomia, Universidad de Chile, Chile).

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes: VISTA, which works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope, the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO