Monthly Archives: December 2014

Study Unveils New Half-Light Half-Matter Quantum Particles

Prospects of developing computing and communication technologies based on quantum properties of light and matter may have taken a major step forward thanks to research by City College of New York physicists led by Dr. Vinod Menon.

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged in a structure similar to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.

“Besides being a fundamental breakthrough, this opens up the possibility of making devices which take the benefits of both light and matter,” said Professor Menon.  

Credit: CCNY

For example, one can start envisioning logic gates and signal processors that take on the best of light and matter. The discovery is also expected to contribute to developing practical platforms for quantum computing.

Dr. Dirk Englund, a professor at MIT whose research focuses on quantum technologies based on semiconductor and optical systems, hailed the City College study.

“What is so remarkable and exciting in the work by Vinod and his team is how readily this strong coupling regime could actually be achieved. They have shown convincingly that by coupling a rather standard dielectric cavity to exciton–polaritons in a monolayer of molybdenum disulphide, they could actually reach this strong coupling regime with a very large binding strength,” he said. 

Professor Menon’s research team included City College PhD students Xiaoze Liu, Tal Galfsky and Zheng Sun, and scientists from Yale University, National Tsing Hua University (Taiwan) and École Polytechnique de Montréal (Canada).

The study appears in the January issue of the journal “Nature Photonics.” It was funded by the U.S. Army Research Laboratory’s Army Research Office and the National Science Foundation through the Materials Research Science and Engineering Center – Center for Photonic and Multiscale Nanomaterials. 

Source: The City College of New York

Trapping light with a twister

New understanding of how to halt photons could lead to miniature particle accelerators, improved data transmission.

By David L. Chandler


Researchers at MIT who succeeded last year in creating a material that could trap light and stop it in its tracks have now developed a more fundamental understanding of the process. The new work — which could help explain some basic physical mechanisms — reveals that this behavior is connected to a wide range of other seemingly unrelated phenomena.

The findings are reported in a paper in the journal Physical Review Letters, co-authored by MIT physics professor Marin Soljačić; postdocs Bo Zhen, Chia Wei Hsu, and Ling Lu; and Douglas Stone, a professor of applied physics at Yale University.

Light can usually be confined only with mirrors, or with specialized materials such as photonic crystals. Both of these approaches block light beams; last year’s finding demonstrated a new method in which the waves cancel out their own radiation fields. The new work shows that this light-trapping process, which involves twisting the polarization direction of the light, is based on a kind of vortex — the same phenomenon behind everything from tornadoes to water swirling down a drain.

Vortices of bound states in the continuum. The left panel shows five bound states in the continuum in a photonic crystal slab as bright spots. The right panel shows the polarization vector field in the same region as the left panel, revealing five vortices at the locations of the bound states in the continuum. These vortices are characterized by topological charges of +1 or -1.
Courtesy of the researchers. Source: MIT

In addition to revealing the mechanism responsible for trapping the light, the new analysis shows that this trapped state is much more stable than had been thought, making it easier to produce and harder to disturb.

“People think of this [trapped state] as very delicate,” Zhen says, “and almost impossible to realize. But it turns out it can exist in a robust way.”

In most natural light, the direction of polarization — which can be thought of as the direction in which the light waves vibrate — remains fixed. That’s the principle that allows polarizing sunglasses to work: Light reflected from a surface is selectively polarized in one direction; that reflected light can then be blocked by polarizing filters oriented at right angles to it.
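
As a concrete illustration of the sunglasses example, here is a minimal sketch of Malus's law, the textbook rule for how much polarized light passes a filter oriented at a given angle to the light's polarization. The formula is standard optics, not something specific to this study, and the angles are arbitrary illustrative values.

```python
import math

def transmitted_fraction(theta_deg: float) -> float:
    """Malus's law: fraction of polarized light that passes a filter
    oriented at angle theta (degrees) to the light's polarization."""
    return math.cos(math.radians(theta_deg)) ** 2

# Glare reflected off a horizontal surface is mostly polarized in one direction.
# A filter aligned with that polarization passes it; one at right angles blocks it.
for angle in (0, 30, 45, 60, 90):
    print(f"filter at {angle:2d} deg -> {transmitted_fraction(angle):.2f} of the glare transmitted")
```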

But in the case of these light-trapping crystals, light that enters the material becomes polarized in a way that forms a vortex, Zhen says, with the direction of polarization changing depending on the beam’s direction.

Because the polarization is different at every point in this vortex, it produces a singularity — also called a topological defect, Zhen says — at its center, trapping the light at that point.
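
To make "topological defect" concrete, the sketch below computes the winding number (the topological charge) of a toy 2D vector field around its central singularity. The fields are synthetic examples chosen for illustration, not data from the study, but the +1 and -1 charges mirror those in the figure caption above.

```python
import math

def winding_number(vectors):
    """Topological charge: net number of full turns the vector direction
    makes as we walk once around a closed loop of 2D vectors."""
    angles = [math.atan2(vy, vx) for vx, vy in vectors]
    total = 0.0
    for a1, a2 in zip(angles, angles[1:] + angles[:1]):
        step = a2 - a1
        # unwrap so each step is the smallest rotation between samples
        while step > math.pi:
            step -= 2 * math.pi
        while step < -math.pi:
            step += 2 * math.pi
        total += step
    return round(total / (2 * math.pi))

# Toy polarization fields sampled on a small circle around the defect.
thetas = [2 * math.pi * k / 100 for k in range(100)]
radial = [(math.cos(t), math.sin(t)) for t in thetas]    # "hedgehog" vortex
mirror = [(math.cos(t), -math.sin(t)) for t in thetas]   # mirror-image vortex

print(winding_number(radial))   # prints +1
print(winding_number(mirror))   # prints -1
```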

Hsu says the phenomenon makes it possible to produce something called a vector beam, a special kind of laser beam that could potentially create small-scale particle accelerators. Such devices could use these vector beams to accelerate particles and smash them into each other — perhaps allowing future tabletop devices to carry out the kinds of high-energy experiments that today require miles-wide circular tunnels.

The finding, Soljačić says, could also enable easy implementation of super-resolution imaging (using a method called stimulated emission depletion microscopy) and could allow the sending of far more channels of data through a single optical fiber.

“This work is a great example of how supposedly well-studied physical systems can contain rich and undiscovered phenomena, which can be unearthed if you dig in the right spot,” says Yidong Chong, an assistant professor of physics and applied physics at Nanyang Technological University in Singapore who was not involved in this research.

Chong says it is remarkable that such surprising findings have come from relatively well-studied materials. “It deals with photonic crystal slabs of the sort that have been extensively analyzed, both theoretically and experimentally, since the 1990s,” he says. “The fact that the system is so unexotic, together with the robustness associated with topological phenomena, should give us confidence that these modes will not simply be theoretical curiosities, but can be exploited in technologies such as microlasers.”

The research was partly supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and by the Department of Energy and the National Science Foundation.

Source: MIT News Office

Quantum physics breakthrough: Scientists solve 100-year-old puzzle

Two fundamental concepts of the quantum world are actually just different manifestations of the same thing, says Waterloo researcher.

By Jenny Hogan

Centre for Quantum Technologies


A Waterloo researcher is part of an international team that has proven that two peculiar features of the quantum world – long thought to be distinct – are actually different manifestations of the same thing.

The breakthrough findings are published today in Nature Communications. The two distinct ideas in question have been fundamental concepts in quantum physics since the early 1900s. They are known as wave-particle duality and the uncertainty principle.

“We were guided by a gut feeling, and only a gut feeling, that there should be a connection,” says Patrick Coles, now a postdoctoral fellow at the Institute for Quantum Computing and the Department of Physics and Astronomy at the University of Waterloo.

  • Wave-particle duality is the idea that a quantum particle can behave like a wave, but that the wave behavior disappears if you try to locate the object.
  • The uncertainty principle is the idea that it’s impossible to know certain pairs of things about a quantum particle at once. For example, the more precisely you know the position of an atom, the less precisely you can know the speed with which it’s moving.
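
A minimal numeric sketch of the second bullet, using the standard position-momentum bound Δx·Δp ≥ ħ/2 (a textbook result, not taken from the paper). The choice of a rubidium-87 atom is purely illustrative.

```python
hbar = 1.054571817e-34         # reduced Planck constant, J*s
m_rb = 87 * 1.66053906660e-27  # mass of a rubidium-87 atom, kg (illustrative choice)

def min_speed_uncertainty(delta_x_m: float) -> float:
    """Heisenberg bound: delta_x * delta_p >= hbar / 2, so the minimum
    speed uncertainty is hbar / (2 * m * delta_x)."""
    return hbar / (2 * m_rb * delta_x_m)

for dx in (1e-6, 1e-9):  # confine the atom to 1 micrometre, then 1 nanometre
    print(f"delta_x = {dx:.0e} m  ->  delta_v >= {min_speed_uncertainty(dx):.2e} m/s")
```

Tightening the position by a factor of a thousand forces the speed uncertainty up by the same factor, which is the trade-off the bullet describes.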

Coles was part of the research team at the National University of Singapore that made the discovery that wave-particle duality is simply the quantum uncertainty principle in disguise.

Like discovering the Rosetta Stone of quantum physics

“It was like we had discovered the ‘Rosetta Stone’ that connected two different languages,” says Coles. “The literature on wave-particle duality was like hieroglyphics that we could translate into our native tongue. We had several eureka moments when we finally understood what people had done.”

The research team at Singapore’s Centre for Quantum Technologies included Jedrzej Kaniewski and Stephanie Wehner, both now researchers at Delft University of Technology in the Netherlands.

“The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information,” says Wehner.

The wave-particle duality is perhaps most simply seen in a double slit experiment, where single particles, electrons, say, are fired one by one at a screen containing two narrow slits. The particles pile up behind the slits not in two heaps as classical objects would, but in a stripy pattern like you’d expect for waves interfering. At least this is what happens until you sneak a look at which slit a particle goes through – do that and the interference pattern vanishes.
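
To make the "stripy pattern" concrete, here is a toy sketch of the textbook two-slit intensity formula, I ∝ cos²(πd·sinθ/λ), ignoring the single-slit envelope. The wavelength, slit separation and screen distance are arbitrary illustrative values, not parameters of any experiment described here.

```python
import math

wavelength  = 50e-12   # de Broglie wavelength of the particles, m (illustrative)
slit_sep    = 1e-6     # slit separation d, m (illustrative)
screen_dist = 1.0      # slit-to-screen distance, m (illustrative)

def two_slit_intensity(x: float) -> float:
    """Relative arrival rate at screen position x for two narrow slits."""
    theta = math.atan2(x, screen_dist)
    return math.cos(math.pi * slit_sep * math.sin(theta) / wavelength) ** 2

# Crude text plot of the stripy interference pattern across the screen.
for i in range(-8, 9):
    x = i * 25e-6
    bar = "#" * int(30 * two_slit_intensity(x))
    print(f"{x*1e6:+7.1f} um |{bar}")
```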

The discovery deepens our understanding of quantum physics and could prompt ideas for new applications of wave-particle duality.

New protocols for quantum cryptography possible

Coles, Kaniewski and Wehner are experts in a form of mathematical equations known as ‘entropic uncertainty relations.’ They discovered that all the maths previously used to describe wave-particle duality could be reformulated in terms of these relations.

Because the entropic uncertainty relations used in their translation have also been used in proving the security of quantum cryptography – schemes for secure communication using quantum particles – the researchers suggest the work could help inspire new cryptography protocols.

How is nature itself constructed?

In earlier papers, the researchers found connections between the uncertainty principle and other physics, namely quantum ‘non-locality’ and the second law of thermodynamics. The tantalizing next goal for the researchers is to think about how these pieces fit together and what bigger picture that paints of how nature is constructed.

Source: University of Waterloo

Islamic Republic of Pakistan to become Associate Member State of CERN: CERN Press Release

Geneva, 19 December 2014. CERN[1] Director General, Rolf Heuer, and the Chairman of the Pakistan Atomic Energy Commission, Ansar Parvez, signed today in Islamabad, in the presence of Prime Minister Nawaz Sharif, a document admitting the Islamic Republic of Pakistan to CERN Associate Membership, subject to ratification by the Government of Pakistan.

“Pakistan has been a strong participant in CERN’s endeavours in science and technology since the 1990s,” said Rolf Heuer. “Bringing nations together in a peaceful quest for knowledge and education is one of the most important missions of CERN. Welcoming Pakistan as a new Associate Member State is therefore for our Organization a very significant event and I’m looking forward to enhanced cooperation with Pakistan in the near future.”

“It is indeed a historic day for science in Pakistan. Today’s signing of the agreement is a reward for the collaboration of our scientists, engineers and technicians with CERN over the past two decades,” said Ansar Parvez. “This Membership will bring in its wake multiple opportunities for our young students and for industry to learn and benefit from CERN. To us in Pakistan, science is not just pursuit of knowledge, it is also the basic requirement to help us build our nation.”

The Islamic Republic of Pakistan and CERN signed a Co-operation Agreement in 1994. The signature of several protocols followed this agreement, and Pakistan contributed to building the CMS and ATLAS experiments. Pakistan contributes today to the ALICE, ATLAS, CMS and LHCb experiments and operates a Tier-2 computing centre in the Worldwide LHC Computing Grid that helps to process and analyse the massive amounts of data the experiments generate. Pakistan is also involved in accelerator developments, making it an important partner for CERN.

The Associate Membership of Pakistan will open a new era of cooperation that will strengthen the long-term partnership between CERN and the Pakistani scientific community. Associate Membership will allow Pakistan to participate in the governance of CERN, through attending the meetings of the CERN Council. Moreover, it will allow Pakistani scientists to become members of the CERN staff, and to participate in CERN’s training and career-development programmes. Finally, it will allow Pakistani industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

Footnote(s)

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Union, JINR and UNESCO have Observer Status.

Source : CERN

Credit: X-ray: NASA/CXC/INAF/P.Tozzi, et al; Optical: NAOJ/Subaru and ESO/VLT; Infrared: ESA/Herschel

NASA’s Chandra Weighs Most Massive Galaxy Cluster in Distant Universe

Using NASA’s Chandra X-ray Observatory, astronomers have made the first determination of the mass and other properties of a very young, distant galaxy cluster.

The Chandra study shows that the galaxy cluster, seen at the comparatively young age of about 800 million years, is the most massive known cluster with that age or younger. As the largest gravitationally bound structures known, galaxy clusters can act as crucial gauges for how the Universe itself has evolved over time.

The galaxy cluster was originally discovered using ESA’s XMM-Newton observatory and is located about 9.6 billion light years from Earth. Astronomers used X-ray data from Chandra that, when combined with scientific models, provides an accurate weight of the cluster, which comes in at a whopping 400 trillion times the mass of the Sun. Scientists believe the cluster formed about 3.3 billion years after the Big Bang.
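
As a rough sanity check on these numbers, here is a back-of-envelope sketch. It assumes a present age of the Universe of about 13.8 billion years (an assumption, not stated in the article) and loosely reads the quoted 9.6 billion light-years as a light-travel time.

```python
age_universe_gyr    = 13.8   # assumed present age of the Universe, Gyr
light_travel_gyr    = 9.6    # quoted distance read loosely as light-travel time, Gyr
formed_after_bb_gyr = 3.3    # quoted formation time after the Big Bang, Gyr

cosmic_time_at_emission = age_universe_gyr - light_travel_gyr
cluster_age_when_seen   = cosmic_time_at_emission - formed_after_bb_gyr

print(f"Light left the cluster ~{cosmic_time_at_emission:.1f} Gyr after the Big Bang,")
print(f"so the cluster is roughly {cluster_age_when_seen:.1f} Gyr old in the image.")
```

The ~0.9 billion years that comes out is in the same ballpark as the ~800 million years quoted above; the residual difference reflects the crudeness of treating the quoted distance as a light-travel time.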

Credit: X-ray: NASA/CXC/INAF/P.Tozzi, et al; Optical: NAOJ/Subaru and ESO/VLT; Infrared: ESA/Herschel

The cluster is officially named XDCP J0044.0-2033, but the researchers have nicknamed it “Gioiello”, which is Italian for “jewel”. They chose this name because an image of the cluster contains many sparkling colors from the hot, X-ray emitting gas and various star-forming galaxies within the cluster. Also, the research team met to discuss the Chandra data for the first time at Villa il Gioiello, a 15th century villa near the Observatory of Arcetri, which was the last residence of prominent Italian astronomer Galileo Galilei.

“Finding this enormous galaxy cluster at this early epoch means that there could be more out there,” said Paolo Tozzi of the National Institute for Astrophysics (INAF) in Florence, Italy, who led the new study. “This kind of information could have an impact on our understanding of how the large scale structure of the Universe formed and evolved.”

Previously, astronomers had found an enormous galaxy cluster, known as “El Gordo,” at a distance of 7 billion light years away and a few other large, distant clusters. According to the best current model for how the Universe evolved, there is a low chance of finding clusters as massive as the Gioiello Cluster and El Gordo. The new findings suggest that there might be problems with the theory, and are enticing astronomers to look for other distant and massive clusters.

“The hint that there might be problems with the standard model of cosmology is interesting,” said co-author James Jee of the University of California in Davis, “but we need bigger and deeper samples of clusters before we can tell if there’s a real problem.”

The Chandra observation of the Gioiello Cluster lasted over 4 days and is the deepest X-ray observation yet made on a cluster beyond a distance of about 8 billion light years.

“Unlike the galaxy clusters that are close to us, this cluster still has lots of stars forming within its galaxies,” said co-author Joana Santos, also from INAF in Florence. “This gives us a unique window into what galaxy clusters are like when they are very young.”

 

In the past, astronomers have reported finding several galaxy cluster candidates that are located more than 9.5 billion light years away. However, some of these objects turned out to be protoclusters, that is, precursors to fully developed galaxy clusters.

The researchers also note that there are hints of uneven structure in the hot gas. These may be large clumps caused by collisions and mergers with smaller clusters of galaxies, which would provide clues to how the cluster became so hefty at such an early age. The authors expect that the cluster is still young enough to be undergoing many such interactions.

A paper describing these results will appear in an upcoming issue of The Astrophysical Journal and is available online. NASA’s Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Mass., controls Chandra’s science and flight operations.

An interactive image, a podcast, and a video about these findings are available at:
http://chandra.si.edu

For Chandra images, multimedia and related materials, visit:
http://www.nasa.gov/chandra

 

Source: Chandra X-Ray Observatory

A combined Hubble/ALMA image of NGC 1266. The zoom-in section shows the molecular gas being propelled by the black hole's jets (red and blue), the central ALMA data (yellow) indicate the dense molecular gas. Credit: NASA/ESA Hubble; ALMA (NRAO/ESO/NAOJ)

‘Perfect Storm’ Suffocating Star Formation around a Supermassive Black Hole

High-energy jets powered by supermassive black holes can blast away a galaxy’s star-forming fuel — resulting in so-called “red and dead” galaxies: those brimming with ancient red stars yet little or no hydrogen gas available to create new ones.

Now astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have discovered that black holes don’t have to be nearly so powerful to shut down star formation. By observing the dust and gas at the center of NGC 1266, a nearby lenticular galaxy with a relatively modest central black hole, the astronomers have detected a “perfect storm” of turbulence that is squelching star formation in a region that would otherwise be an ideal star factory.

This turbulence is stirred up by jets from the galaxy’s central black hole slamming into an incredibly dense envelope of gas. This dense region, which may be the result of a recent merger with another smaller galaxy, blocks nearly 98 percent of material propelled by the jets from escaping the galactic center.

Artist illustration of the central region of NGC 1266 near its central black hole with jet and gas motions indicated (yellow and white arrows, respectively). The large-scale gas motions induce turbulence on smaller scales, preventing star formation. Credit: B. Saxton (NRAO/AUI/NSF)

“Like an unstoppable force meeting an immovable object, the molecules in these jets meet so much resistance when they hit the surrounding dense gas that they are almost completely stopped in their tracks,” said Katherine Alatalo, an astronomer with the California Institute of Technology in Pasadena and lead author on a paper published in the Astrophysical Journal. This energetic collision produces powerful turbulence in the surrounding gas, disrupting the first critical stage of star formation. “So what we see is the most intense suppression of star formation ever observed,” noted Alatalo.

Previous observations of NGC 1266 revealed a broad outflow of gas from the galactic center traveling up to 400 kilometers per second. Alatalo and her colleagues estimate that this outflow is as forceful as the simultaneous supernova explosion of 10,000 stars. The jets, though powerful enough to stir the gas, are not powerful enough to give it the velocity it needs to escape from the system.

“Another way of looking at it is that the jets are injecting turbulence into the gas, preventing it from settling down, collapsing, and forming stars,” said National Radio Astronomy Observatory astronomer and co-author Mark Lacy.

The region observed by ALMA contains about 400 million times the mass of our Sun in star-forming gas, which is 100 times more than is found in giant star-forming molecular clouds in our own Milky Way. Normally, gas this concentrated should be producing stars at a rate at least 50 times faster than the astronomers observed in this galaxy.

Previously, astronomers believed that only extremely powerful quasars and radio galaxies contained black holes that were powerful enough to serve as a star-forming “on/off” switch.

A combined Hubble/ALMA image of NGC 1266. The zoom-in section shows the molecular gas being propelled by the black hole’s jets (red and blue); the central ALMA data (yellow) indicate the dense molecular gas. Credit: NASA/ESA Hubble; ALMA (NRAO/ESO/NAOJ)

“The usual assumption in the past has been that the jets needed to be powerful enough to eject the gas from the galaxy completely in order to be effective at stopping star formation,” said Lacy.

To make this discovery, the astronomers first pinpointed the location of the far-infrared light being emitted by the galaxy. Normally, this light is associated with star formation and enables astronomers to detect regions where new stars are forming. In the case of NGC 1266, however, this light was coming from an extremely confined region of the galaxy. “This very small area was almost too small for the infrared light to be coming from star formation,” noted Alatalo.

With ALMA’s exquisite sensitivity and resolution, and along with observations from CARMA (the Combined Array for Research in Millimeter-wave Astronomy), the astronomers were then able to trace the location of the very dense molecular gas at the galactic center. They found that the gas is surrounding this compact source of the far-infrared light.

Under normal conditions, gas this dense would be forming stars at a very high rate. The dust embedded within this gas would then be heated by young stars and seen as a bright and extended source of infrared light. The small size and faintness of the infrared source in this galaxy suggests that NGC 1266 is instead choking on its own fuel, seemingly in defiance of the rules of star formation.

The astronomers also speculate that there is a feedback mechanism at work in this region. Eventually, the black hole will calm down and the turbulence will subside so star formation can begin anew. With this renewed star formation, however, comes greater motion in the dense gas, which then falls in on the black hole and reestablishes the jets, shutting down star formation once again.

NGC 1266 is located approximately 100 million light-years away in the constellation Eridanus. Lenticular galaxies are disk galaxies, like our own Milky Way, but they have little interstellar gas available to form new stars.

More Information

The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of the European Organisation for Astronomical Research in the Southern Hemisphere (ESO), the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI).

ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.

 

Source: ALMA Observatory

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Credit: ESO

The Hot Blue Stars of Messier 47

This spectacular image of the star cluster Messier 47 was taken using the Wide Field Imager camera, installed on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This young open cluster is dominated by a sprinkling of brilliant blue stars but also contains a few contrasting red giant stars.

Messier 47 is located approximately 1600 light-years from Earth, in the constellation of Puppis (the poop deck of the mythological ship Argo). It was first noticed some time before 1654 by Italian astronomer Giovanni Battista Hodierna and was later independently discovered by Charles Messier himself, who apparently had no knowledge of Hodierna’s earlier observation.

Although it is bright and easy to see, Messier 47 is one of the least densely populated open clusters. Only around 50 stars are visible in a region about 12 light-years across, compared to other similar objects which can contain thousands of stars.

Messier 47 has not always been so easy to identify. In fact, for years it was considered missing, as Messier had recorded the coordinates incorrectly. The cluster was later rediscovered and given another catalogue designation — NGC 2422. The nature of Messier’s mistake, and the firm conclusion that Messier 47 and NGC 2422 are indeed the same object, was only established in 1959 by Canadian astronomer T. F. Morris.

The bright blue–white colours of these stars are an indication of their temperature, with hotter stars appearing bluer and cooler stars appearing redder. This relationship between colour, brightness and temperature can be visualised by use of the Planck curve. But the more detailed study of the colours of stars using spectroscopy also tells astronomers a lot more — including how fast the stars are spinning and their chemical compositions. There are also a few bright red stars in the picture — these are red giant stars that are further through their short life cycles than the less massive and longer-lived blue stars [1].
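
As a concrete illustration of the colour-temperature relationship, here is a minimal sketch using Wien's displacement law, which gives the wavelength at which the Planck curve peaks. The formula is standard physics, and the temperatures are typical textbook values chosen for illustration, not measurements of stars in Messier 47.

```python
def peak_wavelength_nm(temperature_k: float) -> float:
    """Wien's displacement law: the blackbody (Planck) spectrum peaks at
    lambda_max = b / T, with b ~ 2.898e-3 m*K."""
    return 2.897771955e-3 / temperature_k * 1e9

# Illustrative temperatures: a hot blue star, a Sun-like star, a cool red giant.
for name, temp in [("hot blue star", 20000), ("Sun-like star", 5800), ("cool red giant", 3500)]:
    print(f"{name:14s} T = {temp:5d} K  ->  spectrum peaks near {peak_wavelength_nm(temp):4.0f} nm")
```

Hotter stars peak at shorter (bluer) wavelengths, cooler stars at longer (redder) ones, which is exactly the colour coding visible in the image.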

By chance Messier 47 appears close in the sky to another contrasting star cluster — Messier 46. Messier 47 is relatively close, at around 1600 light-years, but Messier 46 is located around 5500 light-years away and contains a lot more stars, with at least 500 stars present. Despite containing more stars, it appears significantly fainter due to its greater distance.
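
A quick inverse-square sketch makes the distance penalty concrete; it assumes, purely for the comparison, stars of similar intrinsic luminosity in the two clusters.

```python
d_m47_ly = 1600.0   # quoted distance to Messier 47, light-years
d_m46_ly = 5500.0   # quoted distance to Messier 46, light-years

# Inverse-square law: the same star appears dimmer by the square of the distance ratio.
dimming = (d_m46_ly / d_m47_ly) ** 2
print(f"A star in Messier 46 appears ~{dimming:.0f}x fainter than an identical star in Messier 47")
```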

Messier 46 could be considered to be the older sister of Messier 47, with the former being approximately 300 million years old compared to the latter’s 78 million years. Consequently, many of the most massive and brilliant of the stars in Messier 46 have already run through their short lives and are no longer visible, so most stars within this older cluster appear redder and cooler.

This image of Messier 47 was produced as part of the ESO Cosmic Gems programme [2].

Notes

[1] The lifetime of a star depends primarily on its mass. Massive stars, containing many times as much material as the Sun, have short lives measured in millions of years. On the other hand much less massive stars can continue to shine for many billions of years. In a cluster, the stars all have about the same age and same initial chemical composition. So the brilliant massive stars evolve quickest, become red giants sooner, and end their lives first, leaving the less massive and cooler ones to long outlive them.

[2] The ESO Cosmic Gems programme is an outreach initiative to produce images of interesting, intriguing or visually attractive objects using ESO telescopes, for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for science observations. All data collected may also be suitable for scientific purposes, and are made available to astronomers through ESO’s science archive.

More information

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Source: ESO 

Musashi proteins, stained red, appear in the cell cytoplasm, outside the nucleus. At right, the cell nucleus is stained blue.
Image Credit: Yarden Katz/MIT

Proteins drive cancer cells to change states

When RNA-binding proteins are turned on, cancer cells get locked in a proliferative state.

 By Anne Trafton


 

A new study from MIT implicates a family of RNA-binding proteins in the regulation of cancer, particularly in a subtype of breast cancer. These proteins, known as Musashi proteins, can force cells into a state associated with increased proliferation.

Biologists have previously found that this kind of transformation, which often occurs in cancer cells as well as during embryonic development, is controlled by transcription factors — proteins that turn genes on and off. However, the new MIT research reveals that RNA-binding proteins also play an important role. Human cells have about 500 different RNA-binding proteins, which influence gene expression by regulating messenger RNA, the molecule that carries DNA’s instructions to the rest of the cell.

“Recent discoveries show that there’s a lot of RNA-processing that happens in human cells and mammalian cells in general,” says Yarden Katz, a recent MIT PhD recipient and one of the lead authors of the new paper. “RNA is processed at several points within the cell, and this gives opportunities for RNA-binding proteins to regulate RNA at each point. We’re very interested in trying to understand this unexplored class of RNA-binding proteins and how they regulate cell-state transitions.”

Feifei Li of China Agricultural University is also a lead author of the paper, which appears in the journal eLife on Dec. 15. Senior authors of the paper are MIT biology professors Christopher Burge and Rudolf Jaenisch, and Zhengquan Yu of China Agricultural University.

Controlling cell states

Until this study, scientists knew very little about the functions of Musashi proteins. These RNA-binding proteins have traditionally been used to identify neural stem cells, in which they are very abundant. They have also been found in tumors, including in glioblastoma, a very aggressive form of brain cancer.

“Normally they’re marking stem and progenitor cells, but they get turned on in cancers. That was intriguing to us because it suggested they might impose a more undifferentiated state on cancer cells,” Katz says.

To study this possibility, Katz manipulated the levels of Musashi proteins in neural stem cells and measured the effects on other genes. He found that genes affected by Musashi proteins were related to the epithelial-to-mesenchymal transition (EMT), a process by which cells lose their ability to stick together and begin invading other tissues.

EMT has been shown to be important in breast cancer, prompting the team to look into Musashi proteins in cancers of non-neural tissue. They found that Musashi proteins are most highly expressed in a type of breast tumors called luminal B tumors, which are not metastatic but are aggressive and fast-growing.

When the researchers knocked down Musashi proteins in breast cancer cells grown in the lab, the cells were forced out of the epithelial state. Also, if the proteins were artificially boosted in mesenchymal cells, the cells transitioned to an epithelial state. This suggests that Musashi proteins are responsible for maintaining cancer cells in a proliferative, epithelial state.

“These proteins seem to really be regulating this cell-state transition, which we know from other studies is very important, especially in breast cancer,” Katz says.

Musashi proteins, stained red, appear in the cell cytoplasm, outside the nucleus. At right, the cell nucleus is stained blue.
Image Credit: Yarden Katz/MIT

 

The researchers found that Musashi proteins repress a gene called Jagged1, which in turn regulates the Notch signaling pathway. Notch signaling promotes cell division in neurons during embryonic development and also plays a major role in cancer.

When Jagged1 is repressed, cells are locked in an epithelial state and are much less motile. The researchers found that Musashi proteins also repress Jagged1 during normal mammary-gland development, not just in cancer. When these proteins were overexpressed in normal mammary glands, cells were less able to undergo the type of healthy EMT required for mammary tissue development.

Brenton Graveley, a professor of genetics and developmental biology at the University of Connecticut, says he was surprised to see how much influence Musashi proteins can have by controlling a relatively small number of genes in a cell. “Musashi proteins have been known to be interesting for many years, but until now nobody has really figured out exactly what they’re doing, especially on a genome-wide scale,” he says.

The researchers are now trying to figure out how Musashi proteins, which are normally turned off after embryonic development, get turned back on in cancer cells. “We’ve studied what this protein does, but we know very little about how it’s regulated,” Katz says.

He says it is too early to know if the Musashi proteins might make good targets for cancer drugs, but they could make a good diagnostic marker for what state a cancer cell is in. “It’s more about understanding the cell states of cancer at this stage, and diagnosing them, rather than treating them,” he says.

The research was funded by the National Institutes of Health.

Source : MIT News Office

More-flexible digital communication

New theory could yield more-reliable communication protocols.

By Larry Hardesty


Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.

Humans are much more flexible. Two strangers may come to a conversation with wildly differing vocabularies and frames of reference, but they will quickly assess the extent of their mutual understanding and tailor their speech accordingly.

Madhu Sudan, an adjunct professor of electrical engineering and computer science at MIT and a principal researcher at Microsoft Research New England, wants to bring that type of flexibility to computer communication. In a series of recent papers, he and his colleagues have begun to describe theoretical limits on the degree of imprecision that communicating computers can tolerate, with very real implications for the design of communication protocols.

“Our goal is not to understand how human communication works,” Sudan says. “Most of the work is really in trying to abstract, ‘What is the kind of problem that human communication tends to solve nicely, [and] designed communication doesn’t?’ — and let’s now see if we can come up with designed communication schemes that do the same thing.”

One thing that humans do well is gauging the minimum amount of information they need to convey in order to get a point across. Depending on the circumstances, for instance, one co-worker might ask another, “Who was that guy?”; “Who was that guy in your office?”; “Who was that guy in your office this morning?”; or “Who was that guy in your office this morning with the red tie and glasses?”

Similarly, the first topic Sudan and his colleagues began investigating is compression, or the minimum number of bits that one device would need to send another in order to convey all the information in a data file.

Uneven odds

In a paper presented in 2011, at the ACM Symposium on Innovations in Computer Science (now known as Innovations in Theoretical Computer Science, or ITCS), Sudan and colleagues at Harvard University, Microsoft, and the University of Pennsylvania considered a hypothetical case in which the devices shared an almost infinite codebook that assigned a random string of symbols — a kind of serial number — to every possible message that either might send.

Of course, such a codebook is entirely implausible, but it allowed the researchers to get a statistical handle on the problem of compression. Indeed, it’s an extension of one of the concepts that longtime MIT professor Claude Shannon used to determine the maximum capacity of a communication channel in the seminal 1948 paper that created the field of information theory.

In Sudan and his colleagues’ codebook, a vast number of messages might have associated strings that begin with the same symbol. But fewer messages will have strings that share their first two symbols, fewer still strings that share their first three symbols, and so on. In any given instance of communication, the question is how many symbols of the string one device needs to send the other in order to pick out a single associated message.

The answer to that question depends on the probability that any given interpretation of a string of symbols makes sense in context. By way of analogy, if your co-worker has had only one visitor all day, asking her, “Who was that guy in your office?” probably suffices. If she’s had a string of visitors, you may need to specify time of day and tie color.
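
The following toy sketch, written for illustration only, mimics the flavour of such a shared codebook: a hash stands in for the random string assigned to each message, and the sender transmits the shortest prefix that is unambiguous among the messages plausible in context. The message strings are hypothetical, and the scheme is a simplification, not the protocol from the paper.

```python
import hashlib

def codeword(message, length=16):
    """Stand-in for the shared 'random codebook': a deterministic
    pseudo-random hex string assigned to every possible message."""
    return hashlib.sha256(message.encode()).hexdigest()[:length]

def shortest_unique_prefix(intended, plausible_messages):
    """Send only as many codeword symbols as are needed to single out the
    intended message among the messages that are plausible in context."""
    target = codeword(intended)
    others = {codeword(m) for m in plausible_messages if m != intended}
    for k in range(1, len(target) + 1):
        prefix = target[:k]
        if not any(other.startswith(prefix) for other in others):
            return prefix
    return target

# A context with one plausible visitor vs. a context with many visitors.
one_visitor   = {"the delivery person"}
many_visitors = {"the delivery person", "the new intern", "the auditor",
                 "the IT technician", "the recruiter", "the building manager"}

print(shortest_unique_prefix("the delivery person", one_visitor))
print(shortest_unique_prefix("the auditor", many_visitors))
# Each call prints the shortest prefix that is unambiguous in its context;
# richer contexts generally force longer prefixes.
```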

Existing compression schemes do, in fact, exploit statistical regularities in data. But Sudan and his colleagues considered the case in which sender and receiver assign different probabilities to different interpretations. They were able to show that, so long as protocol designers can make reasonable assumptions about the ranges within which the probabilities might fall, good compression is still possible.

For instance, Sudan says, consider a telescope in deep-space orbit. The telescope’s designers might assume that 90 percent of what it sees will be blackness, and they can use that assumption to compress the image data it sends back to Earth. With existing protocols, anyone attempting to interpret the telescope’s transmissions would need to know the precise figure — 90 percent — that the compression scheme uses. But Sudan and his colleagues showed that the protocol could be designed to accommodate a range of assumptions — from, say, 85 percent to 95 percent — that might be just as reasonable as 90 percent.
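
A standard information-theory sketch (not the authors' protocol) makes this point concrete: an ideal entropy coder designed for an assumed probability q spends, on average, the cross-entropy -p·log2(q) - (1-p)·log2(1-q) bits per binary pixel when the true probability is p, so a modest mismatch around 90 percent costs very little.

```python
import math

def expected_bits_per_pixel(p_true: float, p_assumed: float) -> float:
    """Cross-entropy: average bits an ideal entropy coder spends per binary
    pixel when the true fraction of 'black' pixels is p_true but the code
    was designed for an assumed fraction p_assumed."""
    return -(p_true * math.log2(p_assumed) +
             (1 - p_true) * math.log2(1 - p_assumed))

p_true = 0.90  # suppose 90 percent of the pixels really are blackness
for p_assumed in (0.85, 0.90, 0.95):
    bits = expected_bits_per_pixel(p_true, p_assumed)
    print(f"code designed for {p_assumed:.2f}: {bits:.3f} bits/pixel on average")
```

The designed-for-0.90 code is optimal, and the 0.85 and 0.95 codes pay only a few hundredths of a bit per pixel more, which is the kind of robustness to a range of assumptions the paragraph describes.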

Buggy codebook

In a paper being presented at the next ITCS, in January, Sudan and colleagues at Columbia University, Carnegie Mellon University, and Microsoft add even more uncertainty to their compression model. In the new paper, not only do sender and receiver have somewhat different probability estimates, but they also have slightly different codebooks. Again, the researchers were able to devise a protocol that would still provide good compression.

They also generalized their model to new contexts. For instance, Sudan says, in the era of cloud computing, data is constantly being duplicated on servers scattered across the Internet, and data-management systems need to ensure that the copies are kept up to date. One way to do that efficiently is by performing “checksums,” or adding up a bunch of bits at corresponding locations in the original and the copy and making sure the results match.

That method, however, works only if the servers know in advance which bits to add up — and if they store the files in such a way that data locations correspond perfectly. Sudan and his colleagues’ protocol could provide a way for servers using different file-management schemes to generate consistency checks on the fly.

“I shouldn’t tell you if the number of 1’s that I see in this subset is odd or even,” Sudan says. “I should send you some coarse information saying 90 percent of the bits in this set are 1’s. And you say, ‘Well, I see 89 percent,’ but that’s close to 90 percent — that’s actually a good protocol. We prove this.”
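
The quoted idea can be sketched as a toy consistency check, shown below for illustration only: instead of demanding that exact checksums match, each side reports the fraction of 1-bits it sees in an agreed sample, and the copies are accepted as consistent if the fractions are close. The sample size, tolerance, and data here are arbitrary, not values from the papers.

```python
import random

def ones_fraction(bits, sample_positions):
    """Coarse summary of a replica: fraction of 1-bits at the sampled positions."""
    return sum(bits[i] for i in sample_positions) / len(sample_positions)

def roughly_consistent(frac_a, frac_b, tolerance=0.03):
    """Accept the replicas as consistent if their coarse summaries are close,
    instead of demanding an exact bit-for-bit checksum match."""
    return abs(frac_a - frac_b) <= tolerance

# Two replicas of the same data that have drifted apart in a handful of positions.
random.seed(0)
original = [random.random() < 0.9 for _ in range(10_000)]
replica = list(original)
for i in random.sample(range(10_000), 20):   # a few bits out of sync
    replica[i] = not replica[i]

positions = random.sample(range(10_000), 1_000)
fa = ones_fraction(original, positions)
fb = ones_fraction(replica, positions)
print(f"{fa:.3f} vs {fb:.3f} -> consistent: {roughly_consistent(fa, fb)}")
```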

“This sequence of works puts forward a general theory of goal-oriented communication, where the focus is not on the raw data being communicated but rather on its meaning,” says Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel. “I consider this sequence a work of fundamental nature.”

“Following a dominant approach in 20th-century philosophy, the work associates the meaning of communication with the goal achieved by it and provides a mathematical framework for discussing all these natural notions,” he adds. “This framework is based on a general definition of the notion of a goal and leads to a problem that is complementary to the problem of reliable communication considered by Shannon, which established information theory.”

 

Source: MIT News Office