
Study on MOOCs provides new insights on an evolving space

Findings suggest many teachers enroll, learner intentions matter, and cost boosts completion rates.


CAMBRIDGE, Mass. – Today, a joint MIT and Harvard University research team published one of the largest investigations of massive open online courses (MOOCs) to date. Building on these researchers’ prior work — a January 2014 report describing the first year of open online courses launched on edX, a nonprofit learning platform founded by the two institutions — the latest effort incorporates another year of data, bringing the total to nearly 70 courses in subjects from programming to poetry.

“We explored 68 certificate-granting courses, 1.7 million participants, 10 million participant-hours, and 1.1 billion participant-logged events,” says Andrew Ho, a professor at the Harvard Graduate School of Education. The research team also used surveys to gain additional information about participants’ backgrounds and their intentions.

Ho and Isaac Chuang, a professor of electrical engineering and computer science and senior associate dean of digital learning at MIT, led a group effort that delved into the demographics of MOOC learners, analyzed participant intent, and looked at patterns that “serial MOOCers,” or those taking more than one course, tend to pursue.

“What jumped out for me was the survey that revealed that in some cases as many as 39 percent of our learners are teachers,” Chuang says. “This finding forces us to broaden our conceptions of who MOOCs serve and how they might make a difference in improving learning.”

Key findings

The researchers conducted a trend analysis that showed a rising share of female, U.S.-based, and older participants, as well as a survey analysis of intent, revealing that almost half of registrants were either uninterested in certification or unsure about it. In this study, the researchers redefined their population of learners from those who simply registered for courses (and took no subsequent action) — a metric used in prior findings and often cited by MOOC providers — to those who participated (such as by logging into the course at least once).

1. Participation in HarvardX and MITx open online courses has grown steadily, while participation in repeated courses has declined and then stabilized.

From July 24, 2012, through Sept. 21, 2014, an average of 1,300 new participants joined a HarvardX or MITx course each day, for a total of 1 million unique participants and 1.7 million total participants. With the increase in second and third versions of courses, the researchers found that participation in second versions declined by 43 percent, while there was stable participation between versions two and three. There were outliers, such as the HarvardX course CS50x (Introduction to Computer Science), which doubled in size, perhaps due to increased student flexibility: Students in this course could participate over a yearlong period at their own pace, and complete at any time.

2. A slight majority of MOOC takers are seeking certification, and many participants are teachers.

Among the one-third of participants who responded to a survey about their intentions, 57 percent stated their desire to earn a certificate; nearly a quarter of those respondents went on to earn certificates. Further, among participants who were unsure or did not intend to earn a certificate, 8 percent ultimately did so. These learners appear to have been inspired to finish a MOOC even after initially stating that they had no intention of doing so.

Among 200,000 participants who responded to a survey about teaching, 39 percent self-identified as a past or present teacher; 21 percent of those teachers reported teaching in the course topic area. The strong participation by teachers suggests that even participants who are uninterested in certification may still make productive use of MOOCs.

3. Academic areas matter when it comes to participation, certification, and course networks.

Participants were drawn to computer science courses in particular, with per-course participation numbers nearly four times higher than courses in the humanities, sciences, and social sciences. That said, certificate rates in computer science and other science- and technology-based offerings (7 percent and 6 percent, respectively) were about half of those in the humanities and social sciences.

The larger data sets also allowed the researchers to study those participating in more than one course, revealing that computer science courses serve as hubs for students, who naturally move to and from related courses. Intentional sequencing, as was done for the 10-part HarvardX Chinese history course “ChinaX,” led to some of the highest certification rates in the study. Other courses with high certification rates were “Introduction to Computer Science” from MITx and “Justice” and “Health in Numbers” from HarvardX.

4. Those opting for fee-based ID-verified certificates certify at higher rates.

Across 12 courses, participants who paid for “ID-verified” certificates (with costs ranging from $50 to $250) earned certifications at a higher rate than other participants: 59 percent, on average, compared with 5 percent. Students opting for the ID-verified track appear to have stronger intentions to complete courses, and the monetary stake may add an extra form of motivation.

Questions and implications

Based upon these findings, Chuang and Ho identified questions that might “reset and reorient expectations” around MOOCs.

First, while many MOOC creators and providers have increased access to learning opportunities, those who are accessing MOOCs are disproportionately those who already have college and graduate degrees. The researchers do not necessarily see this as a problem, as academic experience may be a requirement in advanced courses. However, to serve underrepresented and traditionally underserved groups, the data suggest that proactive strategies may be necessary.

“These free, open courses are phenomenal opportunities for millions of learners,” Ho emphasizes, “but equity cannot be increased just by opening doors. We hope that our data help teachers and institutions to think about their intended audiences, and serve as a baseline for charting progress.”

Second, if improving online and on-campus learning is a priority, then “the flow of pedagogical innovations needs to be formalized,” Chuang says. For example, many of the MOOCs in the study used innovations from their campus counterparts, like physics assessments from MIT and close-reading practices from Harvard’s classics courses. Likewise, residential faculty are using MOOC content, such as videos and assessment scoring algorithms, in smaller, traditional lecture courses.

“The real potential is in the fostering of feedback loops between the two realms,” Chuang says. “In particular, the high number of teacher participants signals great potential for impact beyond Harvard and MIT, especially if deliberate steps could be taken to share best practices.”

Third, advancing research through MOOCs may require a more nuanced definition of audience. Much of the research to date has done little to differentiate among the diverse participants in these free, self-paced learning environments.

“While increasing completion has been a subject of interest, given that many participants have limited, uncertain, or zero interest in completing MOOCs, exerting research muscle to indiscriminately increase completion may not be productive,” Ho explains. “Researchers might want to focus more specifically on well-surveyed or paying subpopulations, where we have a better sense of their expectations and motivations.”

More broadly, Ho and Chuang hope to showcase the potential and diversity of MOOCs and MOOC data by developing “Top 5” lists based upon course attributes, such as scale (an MIT computer science course clocked in with 900,000 participant-hours); demographics (the MOOC with the most female representation is a museum course from HarvardX called “Tangible Things,” while MITx’s computing courses attracted the largest global audience); and type and level of interaction (those in ChinaX most frequently posted in online forums, while those in an introduction to computer science course from MITx most frequently played videos).

“These courses reflect the breadth of our university curricula, and we felt the need to highlight their diverse designs, philosophies, audiences, and learning outcomes in our analyses,” Chuang says. “Which course is right for you? It depends, and these lists might help learners decide what qualities in a given MOOC are most important to them.”

Additional authors on the report included Justin Reich, Jacob Whitehill, Joseph Williams, Glenn Lopez, John Hansen, and Rebecca Petersen from Harvard, and Cody Coleman and Curtis Northcutt from MIT.

###

Related links

Paper: “HarvardX and MITx: Two years of open online courses fall 2012-summer 2014”
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586847

Office of Digital Learning
http://odl.mit.edu

MITx working papers
http://odl.mit.edu/mitx-working-papers/

HarvardX working papers
http://harvardx.harvard.edu/harvardx-working-papers

Related MIT News

ARCHIVE: MIT and Harvard release working papers on open online courses
https://newsoffice.mit.edu/2014/mit-and-harvard-release-working-papers-on-open-online-courses-0121

ARCHIVE: Reviewing online homework at scale
https://newsoffice.mit.edu/2015/reviewing-mooc-homework-0330

ARCHIVE: Study: Online classes really do work
https://newsoffice.mit.edu/2014/study-shows-online-courses-effective-0924

ARCHIVE: The future of MIT education looks more global, modular, and flexible
https://newsoffice.mit.edu/2014/future-of-mit-education-0804

 Source: MIT News Office

Better debugger

System to automatically find a common type of programming bug significantly outperforms its predecessors.

By Larry Hardesty


CAMBRIDGE, Mass. – Integer overflows are one of the most common bugs in computer programs — not only causing programs to crash but, even worse, potentially offering points of attack for malicious hackers. Computer scientists have devised a battery of techniques to identify them, but all have drawbacks.

This month, at the Association for Computing Machinery’s International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new algorithm for identifying integer-overflow bugs. The researchers tested the algorithm on five common open-source programs, in which previous analyses had found three bugs. The new algorithm found all three known bugs — and 11 new ones.

The variables used by computer programs come in a few standard types, such as floating-point numbers, which can contain decimals; characters, like the letters of this sentence; or integers, which are whole numbers. Every time the program creates a new variable, it assigns it a fixed amount of space in memory.

If a program tries to store too large a number at a memory address reserved for an integer, the computer will simply lop off the bits that don’t fit. “It’s like a car odometer,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the new paper. “You go over a certain number of miles, you go back to zero.”

In itself, an integer overflow won’t crash a program; in fact, many programmers use integer overflows to perform certain types of computations more efficiently. But if a program tries to do something with an integer that has overflowed, havoc can ensue. Say, for instance, that the integer represents the number of pixels in an image the program is processing. If the program allocates memory to store the image, but its estimate of the image’s size is off by several orders of magnitude, the program will crash.
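The wrap-around described above is easy to reproduce in a few lines of C. In the minimal sketch below, the image dimensions, variable names, and allocation pattern are illustrative inventions, not code from the programs the researchers analyzed:

    /* Illustrative only: an overflowed size calculation produces a bad allocation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical values, as if read from an untrusted image header. */
        uint32_t width = 65536;
        uint32_t height = 65536;
        uint32_t bytes_per_pixel = 4;

        /* Mathematically this is 2^34 bytes, but 32-bit arithmetic wraps it to 0. */
        uint32_t size = width * height * bytes_per_pixel;
        printf("size after overflow: %u bytes\n", (unsigned)size);

        unsigned char *pixels = malloc(size);   /* allocates far too little, possibly zero bytes */
        if (pixels == NULL)
            return 1;
        /* Any later write of width * height pixels now runs far past the buffer. */
        free(pixels);
        return 0;
    }

In a real program the oversized values would arrive in an input file or network message rather than being hard-coded, which is why such bugs can become points of attack.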

Charting a course

Any program can be represented as a flow chart — or, more technically, a graph, with boxes that represent operations connected by line segments that represent the flow of data between operations. Any given program input will trace a single route through the graph. Prior techniques for finding integer-overflow bugs would start at the top of the graph and begin working through it, operation by operation.

For even a moderately complex program, however, that graph is enormous; exhaustive exploration of the entire thing would be prohibitively time-consuming. “What this means is that you can find a lot of errors in the early input-processing code,” says Martin Rinard, an MIT professor of computer science and engineering and a co-author on the new paper. “But you haven’t gotten past that part of the code before the whole thing poops out. And then there are all these errors deep in the program, and how do you find them?”

Rinard, Sidiroglou-Douskos, and several other members of Rinard’s group — researchers Eric Lahtinen and Paolo Piselli and graduate students Fan Long, Doekhwan Kim, and Nathan Rittenhouse — take a different approach. Their system, dubbed DIODE (for Directed Integer Overflow Detection), begins by feeding the program a single sample input. As that input is processed, however — as it traces a path through the graph — the system records each of the operations performed on it by adding new terms to what’s known as a “symbolic expression.”

“These symbolic expressions are complicated like crazy,” Rinard explains. “They’re bubbling up through the very lowest levels of the system into the program. This 32-bit integer has been built up of all these complicated bit-level operations that the lower-level parts of your system do to take this out of your input file and construct those integers for you. So if you look at them, they’re pages long.”

Trigger warning

When the program reaches a point at which an integer is involved in a potentially dangerous operation — like a memory allocation — DIODE records the current state of the symbolic expression. The initial test input won’t trigger an overflow, but DIODE can analyze the symbolic expression to calculate an input that will.

The process still isn’t over, however: Well-written programs frequently include input checks specifically designed to prevent problems like integer overflows, and the new input, unlike the initial input, might fail those checks. So DIODE seeds the program with its new input, and if it fails such a check, it imposes a new constraint on the symbolic expression and computes a new overflow-triggering input. This process continues until the system either finds an input that can pass the checks but still trigger an overflow, or it concludes that triggering an overflow is impossible.
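The following toy C program illustrates the shape of that loop; the stand-in “program” (a single input cap and a 32-bit multiplication), the candidate scan, and the constants are invented for illustration and replace the symbolic-expression machinery that DIODE actually uses:

    /* Toy refinement loop: propose an overflow-triggering input, and when the
     * target's input check rejects it, fold that check into the constraints
     * and try again. Illustrative only; not the DIODE implementation. */
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in target: it computes n * 4 in 32-bit arithmetic before an
     * allocation, so any n > UINT32_MAX / 4 overflows. Its input check
     * rejects values above an arbitrary cap. */
    #define INPUT_CAP 2000000000u
    static int passes_input_check(uint32_t n) { return n <= INPUT_CAP; }
    static int triggers_overflow(uint32_t n)  { return n > UINT32_MAX / 4; }

    int main(void)
    {
        uint32_t upper = UINT32_MAX;   /* the only "constraint" in this toy */

        for (int round = 1; round <= 64; round++) {
            uint32_t candidate = upper;              /* largest value allowed so far */
            if (!triggers_overflow(candidate)) {
                printf("no overflow reachable under the current constraints\n");
                return 0;
            }
            if (passes_input_check(candidate)) {
                printf("round %d: input %u passes the check and overflows n * 4\n",
                       round, (unsigned)candidate);
                return 0;                            /* report the trigger input */
            }
            upper = INPUT_CAP;   /* the rejected check becomes a new constraint */
        }
        return 0;
    }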

If DIODE does find a trigger value, it reports it, providing developers with a valuable debugging tool. Indeed, since DIODE doesn’t require access to a program’s source code but works on its “binary” — the executable version of the program — a program’s users could run it and then send developers the trigger inputs as graphic evidence that they may have missed security vulnerabilities.

Source: News Office

Magnetic brain stimulation

New technique could lead to long-lasting localized stimulation of brain tissue without external connections.

By David Chandler


CAMBRIDGE, Mass. – Researchers at MIT have developed a method to stimulate brain tissue using external magnetic fields and injected magnetic nanoparticles — a technique allowing direct stimulation of neurons, which could be an effective treatment for a variety of neurological diseases, without the need for implants or external connections.

The research, conducted by Polina Anikeeva, an assistant professor of materials science and engineering, graduate student Ritchie Chen, and three others, has been published in the journal Science.

Previous efforts to stimulate the brain using pulses of electricity have proven effective in reducing or eliminating tremors associated with Parkinson’s disease, but the treatment has remained a last resort because it requires highly invasive implanted wires that connect to a power source outside the brain.

“In the future, our technique may provide an implant-free means to provide brain stimulation and mapping,” Anikeeva says.

In their study, the team injected magnetic iron oxide particles just 22 nanometers in diameter into the brain. When exposed to an external alternating magnetic field — which can penetrate deep inside biological tissues — these particles rapidly heat up.

The resulting local temperature increase can then lead to neural activation by triggering heat-sensitive capsaicin receptors — the same proteins that the body uses to detect both actual heat and the “heat” of spicy foods. (Capsaicin is the chemical that gives hot peppers their searing taste.) Anikeeva’s team used viral gene delivery to induce the sensitivity to heat in selected neurons in the brain.

The particles, which have virtually no interaction with biological tissues except when heated, tend to remain where they’re placed, allowing for long-term treatment without the need for further invasive procedures.

“The nanoparticles integrate into the tissue and remain largely intact,” Anikeeva says. “Then, that region can be stimulated at will by externally applying an alternating magnetic field. The goal for us was to figure out whether we could deliver stimuli to the nervous system in a wireless and noninvasive way.”

The new work has proven that the approach is feasible, but much work remains to turn this proof-of-concept into a practical method for brain research or clinical treatment.

The use of magnetic fields and injected particles has been an active area of cancer research; the thought is that this approach could destroy cancer cells by heating them. “The new technique is derived, in part, from that research,” Anikeeva says. “By calibrating the delivered thermal dosage, we can excite neurons without killing them. The magnetic nanoparticles also have been used for decades as contrast agents in MRI scans, so they are considered relatively safe in the human body.”

The team developed ways to make the particles with precisely controlled sizes and shapes, in order to maximize their interaction with the applied alternating magnetic field. They also developed devices to deliver the applied magnetic field: Existing devices for cancer treatment — intended to produce much more intense heating — were far too big and energy-inefficient for this application.

The next step toward making this a practical technology for clinical use in humans “is to understand better how our method works through neural recordings and behavioral experiments, and assess whether there are any other side effects to tissues in the affected area,” Anikeeva says.

In addition to Anikeeva and Chen, the research team also included postdoc Gabriela Romero, graduate student Michael Christiansen, and undergraduate Alan Mohr. The work was funded by the Defense Advanced Research Projects Agency, MIT’s McGovern Institute for Brain Research, and the National Science Foundation.

Trapping light with a twister

New understanding of how to halt photons could lead to miniature particle accelerators, improved data transmission.

By David L. Chandler


Researchers at MIT who succeeded last year in creating a material that could trap light and stop it in its tracks have now developed a more fundamental understanding of the process. The new work — which could help explain some basic physical mechanisms — reveals that this behavior is connected to a wide range of other seemingly unrelated phenomena.

The findings are reported in a paper in the journal Physical Review Letters, co-authored by MIT physics professor Marin Soljačić; postdocs Bo Zhen, Chia Wei Hsu, and Ling Lu; and Douglas Stone, a professor of applied physics at Yale University.

Light can usually be confined only with mirrors, or with specialized materials such as photonic crystals. Both of these approaches block light beams; last year’s finding demonstrated a new method in which the waves cancel out their own radiation fields. The new work shows that this light-trapping process, which involves twisting the polarization direction of the light, is based on a kind of vortex — the same phenomenon behind everything from tornadoes to water swirling down a drain.

Vortices of bound states in the continuum. The left panel shows five bound states in the continuum in a photonic crystal slab as bright spots. The right panel shows the polarization vector field in the same region as the left panel, revealing five vortices at the locations of the bound states in the continuum. These vortices are characterized with topological charges +1 or -1.
Courtesy of the researchers
Source: MIT

In addition to revealing the mechanism responsible for trapping the light, the new analysis shows that this trapped state is much more stable than had been thought, making it easier to produce and harder to disturb.

“People think of this [trapped state] as very delicate,” Zhen says, “and almost impossible to realize. But it turns out it can exist in a robust way.”

In most natural light, the direction of polarization — which can be thought of as the direction in which the light waves vibrate — remains fixed. That’s the principle that allows polarizing sunglasses to work: Light reflected from a surface is selectively polarized in one direction; that reflected light can then be blocked by polarizing filters oriented at right angles to it.

But in the case of these light-trapping crystals, light that enters the material becomes polarized in a way that forms a vortex, Zhen says, with the direction of polarization changing depending on the beam’s direction.

Because the polarization is different at every point in this vortex, it produces a singularity — also called a topological defect, Zhen says — at its center, trapping the light at that point.
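In the standard notation of the field (this is the conventional winding-number definition, included here for context rather than quoted from the paper), such a polarization vortex carries an integer topological charge

    q = \frac{1}{2\pi} \oint_C d\mathbf{k} \cdot \nabla_{\mathbf{k}} \phi(\mathbf{k}),

where \phi(\mathbf{k}) is the orientation angle of the polarization at in-plane wavevector \mathbf{k} and C is a small loop around the trapping point. Because q can only change in integer jumps, smooth perturbations of the crystal cannot remove the vortex, which is one way to understand the robustness Zhen describes.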

Hsu says the phenomenon makes it possible to produce something called a vector beam, a special kind of laser beam that could potentially create small-scale particle accelerators. Such devices could use these vector beams to accelerate particles and smash them into each other — perhaps allowing future tabletop devices to carry out the kinds of high-energy experiments that today require miles-wide circular tunnels.

The finding, Soljačić says, could also enable easy implementation of super-resolution imaging (using a method called stimulated emission depletion microscopy) and could allow the sending of far more channels of data through a single optical fiber.

“This work is a great example of how supposedly well-studied physical systems can contain rich and undiscovered phenomena, which can be unearthed if you dig in the right spot,” says Yidong Chong, an assistant professor of physics and applied physics at Nanyang Technological University in Singapore who was not involved in this research.

Chong says it is remarkable that such surprising findings have come from relatively well-studied materials. “It deals with photonic crystal slabs of the sort that have been extensively analyzed, both theoretically and experimentally, since the 1990s,” he says. “The fact that the system is so unexotic, together with the robustness associated with topological phenomena, should give us confidence that these modes will not simply be theoretical curiosities, but can be exploited in technologies such as microlasers.”

The research was partly supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and by the Department of Energy and the National Science Foundation.

Source: MIT News Office

Islamic Republic of Pakistan to become Associate Member State of CERN: CERN Press Release

Geneva, 19 December 2014. CERN [1] Director General, Rolf Heuer, and the Chairman of the Pakistan Atomic Energy Commission, Ansar Parvez, signed today in Islamabad, in the presence of Prime Minister Nawaz Sharif, a document admitting the Islamic Republic of Pakistan to CERN Associate Membership, subject to ratification by the Government of Pakistan.

“Pakistan has been a strong participant in CERN’s endeavours in science and technology since the 1990s,” said Rolf Heuer. “Bringing nations together in a peaceful quest for knowledge and education is one of the most important missions of CERN. Welcoming Pakistan as a new Associate Member State is therefore for our Organization a very significant event and I’m looking forward to enhanced cooperation with Pakistan in the near future.”

“It is indeed a historic day for science in Pakistan. Today’s signing of the agreement is a reward for the collaboration of our scientists, engineers and technicians with CERN over the past two decades,” said Ansar Parvez. “This Membership will bring in its wake multiple opportunities for our young students and for industry to learn and benefit from CERN. To us in Pakistan, science is not just pursuit of knowledge, it is also the basic requirement to help us build our nation.”

The Islamic Republic of Pakistan and CERN signed a Co-operation Agreement in 1994. The signature of several protocols followed this agreement, and Pakistan contributed to building the CMS and ATLAS experiments. Pakistan contributes today to the ALICE, ATLAS, CMS and LHCb experiments and operates a Tier-2 computing centre in the Worldwide LHC Computing Grid that helps to process and analyse the massive amounts of data the experiments generate. Pakistan is also involved in accelerator developments, making it an important partner for CERN.

The Associate Membership of Pakistan will open a new era of cooperation that will strengthen the long-term partnership between CERN and the Pakistani scientific community. Associate Membership will allow Pakistan to participate in the governance of CERN, through attending the meetings of the CERN Council. Moreover, it will allow Pakistani scientists to become members of the CERN staff, and to participate in CERN’s training and career-development programmes. Finally, it will allow Pakistani industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

Footnote(s)

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Union, JINR and UNESCO have Observer Status.

Source : CERN


The Hot Blue Stars of Messier 47

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Messier 47 is located approximately 1600 light-years from Earth, in the constellation of Puppis (the poop deck of the mythological ship Argo). It was first noticed some time before 1654 by Italian astronomer Giovanni Battista Hodierna and was later independently discovered by Charles Messier himself, who apparently had no knowledge of Hodierna’s earlier observation.

Although it is bright and easy to see, Messier 47 is one of the least densely populated open clusters. Only around 50 stars are visible in a region about 12 light-years across, compared to other similar objects which can contain thousands of stars.

Messier 47 has not always been so easy to identify. In fact, for years it was considered missing, as Messier had recorded the coordinates incorrectly. The cluster was later rediscovered and given another catalogue designation — NGC 2422. The nature of Messier’s mistake, and the firm conclusion that Messier 47 and NGC 2422 are indeed the same object, was only established in 1959 by Canadian astronomer T. F. Morris.

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.
Credit:
ESO



The bright blue–white colours of these stars are an indication of their temperature, with hotter stars appearing bluer and cooler stars appearing redder. This relationship between colour, brightness and temperature can be visualised by use of the Planck curve. But the more detailed study of the colours of stars using spectroscopy also tells astronomers a lot more — including how fast the stars are spinning and their chemical compositions. There are also a few bright red stars in the picture — these are red giant stars that are further through their short life cycles than the less massive and longer-lived blue stars [1].
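For reference, the Planck curve mentioned here is the blackbody spectrum (a textbook formula, added for context rather than taken from the ESO release)

    B_\lambda(\lambda, T) = \frac{2hc^2}{\lambda^5} \cdot \frac{1}{e^{hc/(\lambda k_B T)} - 1},

whose peak shifts to shorter wavelengths as the temperature T rises. Wien’s displacement law, \lambda_{\mathrm{max}} \approx (2.9 \times 10^{-3}\ \mathrm{m\,K}) / T, makes the link explicit: the hottest stars in the image peak in the blue, while the cooler giants peak in the red.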

By chance Messier 47 appears close in the sky to another contrasting star cluster — Messier 46. Messier 47 is relatively close, at around 1600 light-years, but Messier 46 is located around 5500 light-years away and contains a lot more stars, with at least 500 stars present. Despite containing more stars, it appears significantly fainter due to its greater distance.

Messier 46 could be considered to be the older sister of Messier 47, with the former being approximately 300 million years old compared to the latter’s 78 million years. Consequently, many of the most massive and brilliant of the stars in Messier 46 have already run through their short lives and are no longer visible, so most stars within this older cluster appear redder and cooler.

This image of Messier 47 was produced as part of the ESO Cosmic Gems programme [2].

Notes

[1] The lifetime of a star depends primarily on its mass. Massive stars, containing many times as much material as the Sun, have short lives measured in millions of years. On the other hand much less massive stars can continue to shine for many billions of years. In a cluster, the stars all have about the same age and same initial chemical composition. So the brilliant massive stars evolve quickest, become red giants sooner, and end their lives first, leaving the less massive and cooler ones to long outlive them.

[2] The ESO Cosmic Gems programme is an outreach initiative to produce images of interesting, intriguing or visually attractive objects using ESO telescopes, for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for science observations. All data collected may also be suitable for scientific purposes, and are made available to astronomers through ESO’s science archive.

More information

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO 


Proteins drive cancer cells to change states

When RNA-binding proteins are turned on, cancer cells get locked in a proliferative state.

 By Anne Trafton


 

A new study from MIT implicates a family of RNA-binding proteins in the regulation of cancer, particularly in a subtype of breast cancer. These proteins, known as Musashi proteins, can force cells into a state associated with increased proliferation.

Biologists have previously found that this kind of transformation, which often occurs in cancer cells as well as during embryonic development, is controlled by transcription factors — proteins that turn genes on and off. However, the new MIT research reveals that RNA-binding proteins also play an important role. Human cells have about 500 different RNA-binding proteins, which influence gene expression by regulating messenger RNA, the molecule that carries DNA’s instructions to the rest of the cell.

“Recent discoveries show that there’s a lot of RNA-processing that happens in human cells and mammalian cells in general,” says Yarden Katz, a recent MIT PhD recipient and one of the lead authors of the new paper. “RNA is processed at several points within the cell, and this gives opportunities for RNA-binding proteins to regulate RNA at each point. We’re very interested in trying to understand this unexplored class of RNA-binding proteins and how they regulate cell-state transitions.”

Feifei Li of China Agricultural University is also a lead author of the paper, which appears in the journal eLife on Dec. 15. Senior authors of the paper are MIT biology professors Christopher Burge and Rudolf Jaenisch, and Zhengquan Yu of China Agricultural University.

Controlling cell states

Until this study, scientists knew very little about the functions of Musashi proteins. These RNA-binding proteins have traditionally been used to identify neural stem cells, in which they are very abundant. They have also been found in tumors, including in glioblastoma, a very aggressive form of brain cancer.

“Normally they’re marking stem and progenitor cells, but they get turned on in cancers. That was intriguing to us because it suggested they might impose a more undifferentiated state on cancer cells,” Katz says.

To study this possibility, Katz manipulated the levels of Musashi proteins in neural stem cells and measured the effects on other genes. He found that genes affected by Musashi proteins were related to the epithelial-to-mesenchymal transition (EMT), a process by which cells lose their ability to stick together and begin invading other tissues.

EMT has been shown to be important in breast cancer, prompting the team to look into Musashi proteins in cancers of non-neural tissue. They found that Musashi proteins are most highly expressed in a type of breast tumors called luminal B tumors, which are not metastatic but are aggressive and fast-growing.

When the researchers knocked down Musashi proteins in breast cancer cells grown in the lab, the cells were forced out of the epithelial state. Also, if the proteins were artificially boosted in mesenchymal cells, the cells transitioned to an epithelial state. This suggests that Musashi proteins are responsible for maintaining cancer cells in a proliferative, epithelial state.

“These proteins seem to really be regulating this cell-state transition, which we know from other studies is very important, especially in breast cancer,” Katz says.

Musashi proteins, stained red, appear in the cell cytoplasm, outside the nucleus. At right, the cell nucleus is stained blue.
Image Credit: Yarden Katz/MIT

 

The researchers found that Musashi proteins repress a gene called Jagged1, which in turn regulates the Notch signaling pathway. Notch signaling promotes cell division in neurons during embryonic development and also plays a major role in cancer.

When Jagged1 is repressed, cells are locked in an epithelial state and are much less motile. The researchers found that Musashi proteins also repress Jagged1 during normal mammary-gland development, not just in cancer. When these proteins were overexpressed in normal mammary glands, cells were less able to undergo the type of healthy EMT required for mammary tissue development.

Brenton Graveley, a professor of genetics and developmental biology at the University of Connecticut, says he was surprised to see how much influence Musashi proteins can have by controlling a relatively small number of genes in a cell. “Musashi proteins have been known to be interesting for many years, but until now nobody has really figured out exactly what they’re doing, especially on a genome-wide scale,” he says.

The researchers are now trying to figure out how Musashi proteins, which are normally turned off after embryonic development, get turned back on in cancer cells. “We’ve studied what this protein does, but we know very little about how it’s regulated,” Katz says.

He says it is too early to know if the Musashi proteins might make good targets for cancer drugs, but they could make a good diagnostic marker for what state a cancer cell is in. “It’s more about understanding the cell states of cancer at this stage, and diagnosing them, rather than treating them,” he says.

The research was funded by the National Institutes of Health.

Source : MIT News Office

More-flexible digital communication

New theory could yield more-reliable communication protocols.

By Larry Hardesty


Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.

Humans are much more flexible. Two strangers may come to a conversation with wildly differing vocabularies and frames of reference, but they will quickly assess the extent of their mutual understanding and tailor their speech accordingly.

Madhu Sudan, an adjunct professor of electrical engineering and computer science at MIT and a principal researcher at Microsoft Research New England, wants to bring that type of flexibility to computer communication. In a series of recent papers, he and his colleagues have begun to describe theoretical limits on the degree of imprecision that communicating computers can tolerate, with very real implications for the design of communication protocols.

“Our goal is not to understand how human communication works,” Sudan says. “Most of the work is really in trying to abstract, ‘What is the kind of problem that human communication tends to solve nicely, [and] designed communication doesn’t?’ — and let’s now see if we can come up with designed communication schemes that do the same thing.”

One thing that humans do well is gauging the minimum amount of information they need to convey in order to get a point across. Depending on the circumstances, for instance, one co-worker might ask another, “Who was that guy?”; “Who was that guy in your office?”; “Who was that guy in your office this morning?”; or “Who was that guy in your office this morning with the red tie and glasses?”

Similarly, the first topic Sudan and his colleagues began investigating is compression, or the minimum number of bits that one device would need to send another in order to convey all the information in a data file.

Uneven odds

In a paper presented in 2011, at the ACM Symposium on Innovations in Computer Science (now known as Innovations in Theoretical Computer Science, or ITCS), Sudan and colleagues at Harvard University, Microsoft, and the University of Pennsylvania considered a hypothetical case in which the devices shared an almost infinite codebook that assigned a random string of symbols — a kind of serial number — to every possible message that either might send.

Of course, such a codebook is entirely implausible, but it allowed the researchers to get a statistical handle on the problem of compression. Indeed, it’s an extension of one of the concepts that longtime MIT professor Claude Shannon used to determine the maximum capacity of a communication channel in the seminal 1948 paper that created the field of information theory.

In Sudan and his colleagues’ codebook, a vast number of messages might have associated strings that begin with the same symbol. But fewer messages will have strings that share their first two symbols, fewer still strings that share their first three symbols, and so on. In any given instance of communication, the question is how many symbols of the string one device needs to send the other in order to pick out a single associated message.

The answer to that question depends on the probability that any given interpretation of a string of symbols makes sense in context. By way of analogy, if your co-worker has had only one visitor all day, asking her, “Who was that guy in your office?” probably suffices. If she’s had a string of visitors, you may need to specify time of day and tie color.
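The rough arithmetic behind this (a standard information-theoretic estimate, not a formula taken from the papers themselves) is that a message which, in context, has probability p of being the intended one can be singled out with roughly

    \ell \approx \log_2 \frac{1}{p}

bits’ worth of its codebook string: the lone-visitor case corresponds to a large p and a very short prefix, while a crowded office drives p down and the required prefix length up.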

Existing compression schemes do, in fact, exploit statistical regularities in data. But Sudan and his colleagues considered the case in which sender and receiver assign different probabilities to different interpretations. They were able to show that, so long as protocol designers can make reasonable assumptions about the ranges within which the probabilities might fall, good compression is still possible.

For instance, Sudan says, consider a telescope in deep-space orbit. The telescope’s designers might assume that 90 percent of what it sees will be blackness, and they can use that assumption to compress the image data it sends back to Earth. With existing protocols, anyone attempting to interpret the telescope’s transmissions would need to know the precise figure — 90 percent — that the compression scheme uses. But Sudan and his colleagues showed that the protocol could be designed to accommodate a range of assumptions — from, say, 85 percent to 95 percent — that might be just as reasonable as 90 percent.

Buggy codebook

In a paper being presented at the next ITCS, in January, Sudan and colleagues at Columbia University, Carnegie Mellon University, and Microsoft add even more uncertainty to their compression model. In the new paper, not only do sender and receiver have somewhat different probability estimates, but they also have slightly different codebooks. Again, the researchers were able to devise a protocol that would still provide good compression.

They also generalized their model to new contexts. For instance, Sudan says, in the era of cloud computing, data is constantly being duplicated on servers scattered across the Internet, and data-management systems need to ensure that the copies are kept up to date. One way to do that efficiently is by performing “checksums,” or adding up a bunch of bits at corresponding locations in the original and the copy and making sure the results match.

That method, however, works only if the servers know in advance which bits to add up — and if they store the files in such a way that data locations correspond perfectly. Sudan and his colleagues’ protocol could provide a way for servers using different file-management schemes to generate consistency checks on the fly.

“I shouldn’t tell you if the number of 1’s that I see in this subset is odd or even,” Sudan says. “I should send you some coarse information saying 90 percent of the bits in this set are 1’s. And you say, ‘Well, I see 89 percent,’ but that’s close to 90 percent — that’s actually a good protocol. We prove this.”
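The toy C sketch below captures the flavour of that kind of coarse check; the one_bit_fraction helper, the sample buffers, and the 5 percent tolerance are illustrative choices, not the researchers’ protocol:

    /* Toy consistency check: compare two copies of some data using only a
     * coarse statistic (the fraction of 1-bits) and tolerate small disagreement. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static double one_bit_fraction(const unsigned char *data, size_t len)
    {
        size_t ones = 0;
        for (size_t i = 0; i < len; i++)
            for (int b = 0; b < 8; b++)
                ones += (data[i] >> b) & 1u;
        return (double)ones / (double)(8 * len);
    }

    int main(void)
    {
        unsigned char original[4] = {0xFF, 0xF0, 0x0F, 0x00};  /* 16 of 32 bits set */
        unsigned char copy[4]     = {0xFF, 0xF0, 0x0F, 0x01};  /* one extra bit set */

        double a = one_bit_fraction(original, sizeof original);
        double b = one_bit_fraction(copy, sizeof copy);

        /* Accept the copy if the coarse statistics agree to within 5 percent. */
        if (fabs(a - b) < 0.05)
            printf("copies look consistent (%.3f vs %.3f)\n", a, b);
        else
            printf("copies disagree (%.3f vs %.3f)\n", a, b);
        return 0;
    }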

“This sequence of works puts forward a general theory of goal-oriented communication, where the focus is not on the raw data being communicated but rather on its meaning,” says Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel. “I consider this sequence a work of fundamental nature.”

“Following a dominant approach in 20th-century philosophy, the work associates the meaning of communication with the goal achieved by it and provides a mathematical framework for discussing all these natural notions,” he adds. “This framework is based on a general definition of the notion of a goal and leads to a problem that is complementary to the problem of reliable communication considered by Shannon, which established information theory.”

 

Source: MIT News Office


CERN makes public first data of LHC experiments

CERN [1] launched today its Open Data Portal where data from real collision events, produced by the LHC experiments, will for the first time be made openly available to all. It is expected that these data will be of high value for the research community, and also be used for education purposes.

“Launching the CERN Open Data Portal is an important step for our Organization. Data from the LHC programme are among the most precious assets of the LHC experiments, that today we start sharing openly with the world. We hope these open data will support and inspire the global research community, including students and citizen scientists,” said CERN Director General Rolf Heuer.

The principle of openness is enshrined in CERN’s founding Convention, and all LHC publications have been published Open Access, free for all to read and re-use. Widening the scope, the LHC collaborations recently approved Open Data policies and will release collision data over the coming years.

The first high-level and analysable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. This data set is now publicly available on the CERN Open Data Portal. Open source software to read and analyse the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” said CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

The mass difference spectrum: the LHCb result shows strong evidence of the existence of two new particles, the Xi_b’- (first peak) and Xi_b*- (second peak), with a very high confidence level of 10 sigma. The black points are the signal sample and the hatched red histogram is a control sample. The blue curve represents a model including the two new particles, fitted to the data. Delta_m is the difference between the mass of the Xi_b0 pi- pair and the sum of the individual masses of the Xi_b0 and pi-. INSET: Detail of the Xi_b’- region plotted with a finer binning.
Credit: CERN

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations, which have been specifically prepared for educational purposes, such as the international masterclasses in particle physics [2] benefiting over ten thousand high-school students every year. These resources are accompanied by visualisation tools.

“Our own data policy foresees data preservation and its sharing. We have seen that students are fascinated by being able to analyse LHC data in the past, and so we are very happy to take the first steps and make available some selected data for education,” said Silvia Amerio, data preservation coordinator of the LHCb experiment.

“The development of this Open Data Portal represents a first milestone in our mission to serve our users in preserving and sharing their research materials. It will ensure that the data and tools can be accessed and used, now and in the future,” said Tim Smith from CERN’s IT Department.

All data on OpenData.cern.ch are shared under a Creative Commons CC0 [3] public domain dedication; data and software are assigned unique DOI identifiers to make them citable in scientific articles; and software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.

Further information:

Open data portal

Open data policies

CMS Open Data

 

Footnote(s):

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.

2. http://www.physicsmasterclasses.org

3. http://creativecommons.org/publicdomain/zero/1.0/


NASA Airborne Campaigns Tackle Climate Questions from Africa to Arctic

Five new NASA airborne field campaigns will take to the skies starting in 2015 to investigate how long-range air pollution, warming ocean waters, and fires in Africa affect our climate.

These studies into several incompletely understood Earth system processes were competitively selected as part of NASA’s Earth Venture-class projects. Each project is funded at a total cost of no more than $30 million over five years. This funding includes initial development, field campaigns and analysis of data.

This is NASA’s second series of Earth Venture suborbital investigations — regularly solicited, quick-turnaround projects recommended by the National Research Council in 2007. The first series of five projects was selected in 2010.

“These new investigations address a variety of key scientific questions critical to advancing our understanding of how Earth works,” said Jack Kaye, associate director for research in NASA’s Earth Science Division in Washington. “These innovative airborne experiments will let us probe inside processes and locations in unprecedented detail that complements what we can do with our fleet of Earth-observing satellites.”

The DC-8 airborne laboratory is one of several NASA aircraft that will fly in support of five new investigations into how different aspects of the interconnected Earth system influence climate change.
Image Credit: NASA

The five selected Earth Venture investigations are:

  • Atmospheric chemistry and air pollution – Steven Wofsy of Harvard University in Cambridge, Massachusetts, will lead the Atmospheric Tomography project to study the impact of human-produced air pollution on certain greenhouse gases. Airborne instruments will look at how atmospheric chemistry is transformed by various air pollutants and at the impact on methane and ozone which affect climate. Flights aboard NASA’s DC-8 will originate from the Armstrong Flight Research Center in Palmdale, California, fly north to the western Arctic, south to the South Pacific, east to the Atlantic, north to Greenland, and return to California across central North America.
  • Ecosystem changes in a warming ocean – Michael Behrenfeld of Oregon State University in Corvallis, Oregon, will lead the North Atlantic Aerosols and Marine Ecosystems Study, which seeks to improve predictions of how ocean ecosystems would change with ocean warming. The mission will study the annual life cycle of phytoplankton and the impact small airborne particles derived from marine organisms have on climate in the North Atlantic. The large annual phytoplankton bloom in this region may influence the Earth’s energy budget. Research flights by NASA’s C-130 aircraft from Wallops Flight Facility, Virginia, will be coordinated with a University-National Oceanographic Laboratory System (UNOLS) research vessel. UNOLS, located at the University of Rhode Island’s Graduate School of Oceanography in Narragansett, Rhode Island, is an organization of 62 academic institutions and national laboratories involved in oceanographic research.
  • Greenhouse gas sources – Kenneth Davis of Pennsylvania State University in University Park, will lead the Atmospheric Carbon and Transport-America project to quantify the sources of regional carbon dioxide, methane and other gases, and document how weather systems transport these gases in the atmosphere. The research goal is to improve identification and predictions of carbon dioxide and methane sources and sinks using spaceborne, airborne and ground-based data over the eastern United States. Research flights will use NASA’s C-130 from Wallops and the UC-12 from Langley Research Center in Hampton, Virginia.
  • African fires and Atlantic clouds – Jens Redemann of NASA’s Ames Research Center in Mountain View, California, will lead the Observations of Aerosols above Clouds and their Interactions project to probe how smoke particles from massive biomass burning in Africa influence cloud cover over the Atlantic. Particles from this seasonal burning that are lofted into the mid-troposphere and transported westward over the southeast Atlantic interact with permanent stratocumulus “climate radiators,” which are critical to the regional and global climate system. NASA aircraft, including a Wallops P-3 and an Armstrong ER-2, will be used to conduct the investigation flying out of Walvis Bay, Namibia.
  • Melting Greenland glaciers – Josh Willis of NASA’s Jet Propulsion Laboratory in Pasadena, California, will lead the Oceans Melting Greenland mission to investigate the role of warmer saltier Atlantic subsurface waters in Greenland glacier melting. The study will help pave the way for improved estimates of future sea level rise by observing changes in glacier melting where ice contacts seawater. Measurements of the ocean bottom as well as seawater properties around Greenland will be taken from ships and the air using several aircraft including a NASA S-3 from Glenn Research Center in Cleveland, Ohio, and Gulfstream III from Armstrong.

Seven NASA centers, 25 educational institutions, three U.S. government agencies and two industry partners are involved in these Earth Venture projects. The five investigations were selected from 33 proposals.

Earth Venture investigations are part of NASA’s Earth System Science Pathfinder program managed at Langley for NASA’s Science Mission Directorate in Washington. The missions in this program provide an innovative approach to address Earth science research with periodic windows of opportunity to accommodate new scientific priorities.

NASA monitors Earth’s vital signs from land, sea, air and space with a fleet of satellites and ambitious airborne and surface-based observation campaigns. With this information and computer analysis tools, NASA studies Earth’s interconnected systems to better see how our planet is changing. The agency shares this unique knowledge with the global community and works with institutions in the United States and around the world that contribute to understanding and protecting our home planet.

For more information about NASA’s Earth science activities, visit:

http://www.nasa.gov/earthrightnow

Source: NASA