Astronomers have used ALMA to detect a huge mass of glowing stardust in a galaxy seen when the Universe was only four percent of its present age. This galaxy was observed shortly after its formation and is the most distant galaxy in which dust has been detected. This observation is also the most distant detection of oxygen in the Universe. These new results provide brand-new insights into the birth and explosive deaths of the very first stars.
An international team of astronomers, led by Nicolas Laporte of University College London, have used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe A2744_YD4, the youngest and most remote galaxy ever seen by ALMA. They were surprised to find that this youthful galaxy contained an abundance of interstellar dust — dust formed by the deaths of an earlier generation of stars.
Follow-up observations using the X-shooter instrument on ESO’s Very Large Telescope confirmed the enormous distance to A2744_YD4. The galaxy appears to us as it was when the Universe was only 600 million years old, during the period when the first stars and galaxies were forming.
“Not only is A2744_YD4 the most distant galaxy yet observed by ALMA,” comments Nicolas Laporte, “but the detection of so much dust indicates early supernovae must have already polluted this galaxy.”
Cosmic dust is mainly composed of silicon, carbon and aluminium, in grains as small as a millionth of a centimetre across. The chemical elements in these grains are forged inside stars and are scattered across the cosmos when the stars die, most spectacularly in supernova explosions, the final fate of short-lived, massive stars. Today, this dust is plentiful and is a key building block in the formation of stars, planets and complex molecules; but in the early Universe — before the first generations of stars died out — it was scarce.
The observations of the dusty galaxy A2744_YD4 were made possible because this galaxy lies behind a massive galaxy cluster called Abell 2744. Because of a phenomenon called gravitational lensing, the cluster acted like a giant cosmic “telescope” to magnify the more distant A2744_YD4 by about 1.8 times, allowing the team to peer far back into the early Universe.
The ALMA observations also detected the glowing emission of ionised oxygen from A2744_YD4. This is the most distant, and hence earliest, detection of oxygen in the Universe, surpassing another ALMA result from 2016.
The detection of dust in the early Universe provides new information on when the first supernovae exploded and hence the time when the first hot stars bathed the Universe in light. Determining the timing of this “cosmic dawn” is one of the holy grails of modern astronomy, and it can be indirectly probed through the study of early interstellar dust.
The team estimates that A2744_YD4 contained an amount of dust equivalent to 6 million times the mass of our Sun, while the galaxy’s total stellar mass — the mass of all its stars — was 2 billion times the mass of our Sun. The team also measured the rate of star formation in A2744_YD4 and found that stars are forming at a rate of 20 solar masses per year — compared to just one solar mass per year in the Milky Way.
“This rate is not unusual for such a distant galaxy, but it does shed light on how quickly the dust in A2744_YD4 formed,” explains Richard Ellis (ESO and University College London), a co-author of the study. “Remarkably, the required time is only about 200 million years — so we are witnessing this galaxy shortly after its formation.”
This means that significant star formation began approximately 200 million years before the epoch at which the galaxy is being observed. This provides a great opportunity for ALMA to help study the era when the first stars and galaxies “switched on” — the earliest epoch yet probed. Our Sun, our planet and our existence are the products — 13 billion years later — of this first generation of stars. By studying their formation, lives and deaths, we are exploring our origins.
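As a back-of-envelope consistency check (the study’s actual modelling is more detailed), the stellar mass and star-formation rate quoted above imply a formation timescale of the same order as the ~200 million years the team derives:

```python
# Naive formation timescale for A2744_YD4, using only the
# figures quoted in the text and assuming a constant rate.
stellar_mass_msun = 2e9    # total stellar mass, solar masses
sfr_msun_per_yr = 20       # star-formation rate, solar masses per year

timescale_yr = stellar_mass_msun / sfr_msun_per_yr
print(f"Naive formation timescale: {timescale_yr / 1e6:.0f} million years")
# prints "Naive formation timescale: 100 million years" -- the same
# order of magnitude as the ~200 Myr from the detailed modelling.
```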
“With ALMA, the prospects for performing deeper and more extensive observations of similar galaxies at these early times are very promising,” says Ellis.
And Laporte concludes: “Further measurements of this kind offer the exciting prospect of tracing early star formation and the creation of the heavier chemical elements even further back into the early Universe.”
 Abell 2744 is a massive object, lying 3.5 billion light-years away (redshift 0.308), that is thought to be the result of four smaller galaxy clusters colliding. It has been nicknamed Pandora’s Cluster because of the many strange and different phenomena that were unleashed by the huge collision that occurred over a period of about 350 million years. The galaxies only make up five percent of the cluster’s mass, while dark matter makes up seventy-five percent, providing the massive gravitational influence necessary to bend and magnify the light of background galaxies. The remaining twenty percent of the total mass is thought to be in the form of hot gas.
 This rate means that the total mass of the stars formed every year is equivalent to 20 times the mass of the Sun.
This research was presented in a paper entitled “Dust in the Reionization Era: ALMA Observations of a z = 8.38 Gravitationally-Lensed Galaxy” by Laporte et al., to appear in The Astrophysical Journal Letters.
The team is composed of N. Laporte (University College London, UK), R. S. Ellis (University College London, UK; ESO, Garching, Germany), F. Boone (Institut de Recherche en Astrophysique et Planétologie (IRAP), Toulouse, France), F. E. Bauer (Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Santiago, Chile), D. Quénard (Queen Mary University of London, London, UK), G. Roberts-Borsani (University College London, UK), R. Pelló (Institut de Recherche en Astrophysique et Planétologie (IRAP), Toulouse, France), I. Pérez-Fournon (Instituto de Astrofísica de Canarias, Tenerife, Spain; Universidad de La Laguna, Tenerife, Spain), and A. Streblyanska (Instituto de Astrofísica de Canarias, Tenerife, Spain; Universidad de La Laguna, Tenerife, Spain).
The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of ESO, the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI).
ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.
ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope, while the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.
The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”
At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.
This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory’s Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.
The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.
Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.
Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, “snowball-like” world, but further observations are needed.
“The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”
In contrast to our sun, the TRAPPIST-1 star – classified as an ultra-cool dwarf – is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun. The planets also are very close to each other. If a person were standing on the surface of one of the planets, they could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth’s sky.
The planets may also be tidally locked to their star, which means the same side of the planet always faces the star, so that one side experiences perpetual day and the other perpetual night. This could mean they have weather patterns totally unlike those on Earth, such as strong winds blowing from the day side to the night side, and extreme temperature changes.
Spitzer, an infrared telescope that trails Earth as it orbits the sun, was well-suited for studying TRAPPIST-1 because the star glows brightest in infrared light, at wavelengths longer than the eye can see. In the fall of 2016, Spitzer observed TRAPPIST-1 nearly continuously for 500 hours. Spitzer is uniquely positioned in its orbit to observe enough crossings – transits – of the planets in front of the host star to reveal the complex architecture of the system. Engineers optimized Spitzer’s ability to observe transiting planets during Spitzer’s “warm mission,” which began after the spacecraft’s coolant ran out as planned after the first five years of operations.
“This is the most exciting result I have seen in the 14 years of Spitzer operations,” said Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC in Pasadena, California. “Spitzer will follow up in the fall to further refine our understanding of these planets so that the James Webb Space Telescope can follow up. More observations of the system are sure to reveal more secrets.”
Following up on the Spitzer discovery, NASA’s Hubble Space Telescope has initiated the screening of four of the planets, including the three inside the habitable zone. These observations aim to assess the presence of puffy, hydrogen-dominated atmospheres, typical of gaseous worlds like Neptune, around these planets.
In May 2016, the Hubble team observed the two innermost planets, and found no evidence for such puffy atmospheres. This strengthened the case that the planets closest to the star are rocky in nature.
“The TRAPPIST-1 system provides one of the best opportunities in the next decade to study the atmospheres around Earth-size planets,” said Nikole Lewis, co-leader of the Hubble study and astronomer at the Space Telescope Science Institute in Baltimore, Maryland. NASA’s planet-hunting Kepler space telescope also is studying the TRAPPIST-1 system, making measurements of the star’s minuscule changes in brightness due to transiting planets. Operating as the K2 mission, the spacecraft’s observations will allow astronomers to refine the properties of the known planets, as well as search for additional planets in the system. The K2 observations conclude in early March and will be made available on the public archive.
Spitzer, Hubble, and Kepler will help astronomers plan for follow-up studies using NASA’s upcoming James Webb Space Telescope, launching in 2018. With much greater sensitivity, Webb will be able to detect the chemical fingerprints of water, methane, oxygen, ozone, and other components of a planet’s atmosphere. Webb also will analyze planets’ temperatures and surface pressures – key factors in assessing their habitability.
NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate. Science operations are conducted at the Spitzer Science Center, at Caltech, in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at Caltech/IPAC. Caltech manages JPL for NASA.
Researchers devise efficient power converter for internet of things
By Larry Hardesty
CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.
Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.
Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technologies Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a two-million-fold increase in current levels.
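Dividing the two quoted current levels gives the size of that span directly:

```python
# Dynamic range of the converter's quoted operating currents.
i_min = 500e-12   # 500 picoamps
i_max = 1e-3      # 1 milliamp

span = i_max / i_min
print(f"Dynamic range: {span:,.0f}x")  # prints "Dynamic range: 2,000,000x"
```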
“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”
Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.
The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.
“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”
The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.
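The measure-and-release loop just described can be caricatured in a few lines. This is a toy discrete-time sketch with invented component values (capacitance, packet size, time step, loads), not the actual controller:

```python
# Toy simulation of packet-based step-down control: measure the
# output voltage; if it is below threshold, dump one fixed packet
# of charge onto the output capacitor. All values are illustrative.

V_THRESHOLD = 0.9        # volts: release a packet below this
C_OUT = 1e-6             # output capacitance, farads
PACKET_CHARGE = 50e-9    # charge per packet, coulombs
DT = 1e-6                # time step, seconds

def simulate(load_current, steps=10000, v_out=0.9):
    """Return (final output voltage, number of packets released)."""
    packets = 0
    for _ in range(steps):
        if v_out < V_THRESHOLD:              # measurement + decision
            v_out += PACKET_CHARGE / C_OUT   # release one energy packet
            packets += 1
        v_out -= load_current * DT / C_OUT   # load drains the capacitor
    return v_out, packets

# A light load needs only occasional packets; a heavy load needs many.
_, light = simulate(load_current=1e-6)   # e.g. a sleeping sensor
_, heavy = simulate(load_current=1e-3)   # e.g. a radio transmission
print(f"light load: {light} packets, heavy load: {heavy} packets")
```

In the same spirit as the chip, the packet rate tracks the load: the heavy load triggers roughly a couple of hundred packets over the simulated window while the light load needs only one.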
If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.
To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.
Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.
But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.
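The payoff of gating the divider is plain duty-cycle arithmetic. The divider current, measurement window, and measurement rate below are invented for illustration, not the chip’s actual figures:

```python
# Average current drawn by a voltage divider that is connected only
# for the brief window each measurement needs. Illustrative values.
divider_current = 100e-9   # current while the divider is connected (A)
window = 1e-6              # duration of one measurement window (s)
rate = 100                 # measurements per second at light load

always_on = divider_current                     # divider never gated
duty_cycled = divider_current * window * rate   # time-averaged draw
print(f"always on:   {always_on * 1e9:.1f} nA")
print(f"duty-cycled: {duty_cycled * 1e12:.3f} pA")
```

With these numbers the gated divider draws four orders of magnitude less average current, which is why duty-cycling the measurement path pays off at light loads.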
“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.
The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Corporation, through its University Shuttle Program.
A team of astronomers have used the SPHERE instrument on ESO’s Very Large Telescope to image the first planet ever found in a wide orbit inside a triple-star system. The orbit of such a planet had been expected to be unstable, probably resulting in the planet being quickly ejected from the system. But somehow this one survives. This unexpected observation suggests that such systems may actually be more common than previously thought. The results will be published online in the journal Science on 7 July 2016.
Luke Skywalker’s home planet, Tatooine, in the Star Wars saga, was a strange world with two suns in the sky, but astronomers have now found a planet in an even more exotic system, where an observer would either experience constant daylight or enjoy triple sunrises and sunsets each day, depending on the seasons, which last longer than human lifetimes.
This world has been discovered by a team of astronomers led by the University of Arizona, USA, using direct imaging at ESO’s Very Large Telescope (VLT) in Chile. The planet, HD 131399Ab, is unlike any other known world — its orbit around the brightest of the three stars is by far the widest known within a multi-star system. Such orbits are often unstable, because of the complex and changing gravitational attraction from the other two stars in the system, and planets in stable orbits were thought to be very unlikely.
Located about 320 light-years from Earth in the constellation of Centaurus (The Centaur), HD 131399Ab is about 16 million years old, making it also one of the youngest exoplanets discovered to date, and one of very few directly imaged planets. With a temperature of around 580 degrees Celsius and an estimated mass of four Jupiter masses, it is also one of the coldest and least massive directly-imaged exoplanets.
“HD 131399Ab is one of the few exoplanets that have been directly imaged, and it’s the first one in such an interesting dynamical configuration,” said Daniel Apai, from the University of Arizona, USA, and one of the co-authors of the new paper.
“For about half of the planet’s orbit, which lasts 550 Earth-years, three stars are visible in the sky; the fainter two are always much closer together, and change in apparent separation from the brightest star throughout the year,” adds Kevin Wagner, the paper’s first author and discoverer of HD 131399Ab.
Kevin Wagner, who is a PhD student at the University of Arizona, identified the planet among hundreds of candidate planets and led the follow-up observations to verify its nature.
The planet also marks the first discovery of an exoplanet made with the SPHERE instrument on the VLT. SPHERE is sensitive to infrared light, allowing it to detect the heat signatures of young planets, and it includes sophisticated features that correct for atmospheric disturbances and block out the otherwise blinding light of the host stars.
Although repeated and long-term observations will be needed to precisely determine the planet’s trajectory among its host stars, observations and simulations seem to suggest the following scenario: the brightest star, dubbed HD 131399A, is estimated to be eighty percent more massive than the Sun; it is orbited by the less massive stars, B and C, at about 300 au (one au, or astronomical unit, equals the average distance between the Earth and the Sun). All the while, B and C twirl around each other like a spinning dumbbell, separated by a distance roughly equal to that between the Sun and Saturn (10 au).
In this scenario, planet HD 131399Ab travels around star A in an orbit with a radius of about 80 au, about twice as large as Pluto’s orbit in the Solar System; this brings the planet to about one third of the separation between star A and the B/C star pair. The authors point out that a range of orbital scenarios is possible, and the verdict on the long-term stability of the system will have to wait for planned follow-up observations that will better constrain the planet’s orbit.
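The “about one third” figure follows directly from the two quoted distances:

```python
# Ratio of the planet's orbital radius around star A to the
# separation between star A and the B/C pair, as quoted in the text.
planet_orbit_au = 80     # planet's orbital radius around star A
bc_separation_au = 300   # distance from star A to the B/C pair

ratio = planet_orbit_au / bc_separation_au
print(f"Planet sits at {ratio:.2f} of the A-to-B/C separation")
# prints "Planet sits at 0.27 of the A-to-B/C separation",
# i.e. about one third -- uncomfortably close for a stable orbit.
```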
“If the planet was further away from the most massive star in the system, it would be kicked out of the system,” Apai explained. “Our computer simulations have shown that this type of orbit can be stable, but if you change things around just a little bit, it can become unstable very quickly.”
Planets in multi-star systems are of special interest to astronomers and planetary scientists because they show how the mechanism of planetary formation operates in these more extreme scenarios. While such systems seem exotic to us in our orbit around a solitary star, multi-star systems are in fact just as common as single stars.
“It is not clear how this planet ended up on its wide orbit in this extreme system, and we can’t say yet what this means for our broader understanding of the types of planetary systems, but it shows that there is more variety out there than many would have deemed possible,” concludes Kevin Wagner. “What we do know is that planets in multi-star systems have been studied far less often, but are potentially just as numerous as planets in single-star systems.”
 The three components of the triple star are named HD 131399A, HD 131399B and HD 131399C respectively, in decreasing order of brightness. The planet orbits the brightest star and hence is named HD 131399Ab.
 For much of the planet’s year the stars would appear close together in the sky, giving it a familiar night-side and day-side with a unique triple sunset and sunrise each day. As the planet moves along its orbit the stars grow further apart each day, until they reach a point where the setting of one coincides with the rising of the other — at which point the planet is in near-constant daytime for about one-quarter of its orbit, or roughly 140 Earth-years.
On December 26, 2015 at 03:38:53 UTC, scientists observed gravitational waves–ripples in the fabric of spacetime–for the second time.
The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA.
The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors.
Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained, and physicists have concluded that these gravitational waves were produced during the final moments of the merger of two black holes – 14 and 8 times the mass of the sun – into a single, more massive spinning black hole that is 21 times the mass of the sun.
“It is very significant that these black holes were much less massive than those observed in the first detection,” says Gabriela González, LIGO Scientific Collaboration (LSC) spokesperson and professor of physics and astronomy at Louisiana State University. “Because of their lighter masses compared to the first detection, they spent more time–about one second–in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”
During the merger, which occurred approximately 1.4 billion years ago, a quantity of energy roughly equivalent to the mass of the sun was converted into gravitational waves. The detected signal comes from the last 27 orbits of the black holes before their merger. Based on the arrival time of the signals–with the Livingston detector measuring the waves 1.1 milliseconds before the Hanford detector–the position of the source in the sky can be roughly determined.
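Both headline numbers in the paragraph above can be checked with textbook physics: the radiated energy is roughly one solar mass times c², and the 1.1-millisecond arrival-time difference constrains the angle between the source direction and the line joining the two detectors. The ~3000 km detector baseline is an approximate figure, not taken from the text:

```python
import math

C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

# Energy radiated: roughly one solar mass converted via E = m c^2.
energy_j = M_SUN * C**2
print(f"Radiated energy: {energy_j:.2e} J")

# Direction constraint from the 1.1 ms arrival-time difference.
# The maximum possible delay is baseline / c (~10 ms); the measured
# delay fixes the angle between the source and the detector baseline.
baseline_m = 3.0e6            # Hanford-Livingston separation, ~3000 km
max_delay = baseline_m / C    # light travel time along the baseline
measured_delay = 1.1e-3       # seconds, from the text
angle = math.degrees(math.acos(measured_delay / max_delay))
print(f"Angle from baseline: {angle:.0f} degrees")
```

A single time delay only pins the source to a ring on the sky at this angle, which is why the text notes that adding Virgo as a third detector permits far better localization.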
“In the near future, Virgo, the European interferometer, will join a growing network of gravitational wave detectors, which work together with ground-based telescopes that follow-up on the signals,” notes Fulvio Ricci, the Virgo Collaboration spokesperson, a physicist at Istituto Nazionale di Fisica Nucleare (INFN) and professor at Sapienza University of Rome. “The three interferometers together will permit a far better localization in the sky of the signals.”
The first detection of gravitational waves, announced on February 11, 2016, was a milestone in physics and astronomy; it confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity, and marked the beginning of the new field of gravitational-wave astronomy.
The second discovery “has truly put the ‘O’ for Observatory in LIGO,” says Caltech’s Albert Lazzarini, deputy director of the LIGO Laboratory. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future. LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”
“We are starting to get a glimpse of the kind of new astrophysical information that can only come from gravitational wave detectors,” says MIT’s David Shoemaker, who led the Advanced LIGO detector construction program.
Both discoveries were made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first-generation LIGO detectors, enabling a large increase in the volume of the universe probed.
“With the advent of Advanced LIGO, we anticipated researchers would eventually succeed at detecting unexpected phenomena, but these two detections thus far have surpassed our expectations,” says NSF Director France A. Córdova. “NSF’s 40-year investment in this foundational research is already yielding new information about the nature of the dark universe.”
Advanced LIGO’s next data-taking run will begin this fall. By then, further improvements in detector sensitivity are expected to allow LIGO to reach as much as 1.5 to 2 times more of the volume of the universe. The Virgo detector is expected to join in the latter half of the upcoming observing run.
LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector.
Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: 6 from Centre National de la Recherche Scientifique (CNRS) in France; 8 from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; 2 in The Netherlands with Nikhef; the MTA Wigner RCP in Hungary; the POLGRAW group in Poland and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy.
The NSF leads in financial support for Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project.
Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German UK GEO collaboration. Significant computer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, the ARCCA cluster at Cardiff University, the University of Wisconsin-Milwaukee, and the Open Science Grid. Several universities designed, built, and tested key components and techniques for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Western Australia, the University of Florida, Stanford University, Columbia University in the City of New York, and Louisiana State University. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom and Germany, and the University of the Balearic Islands in Spain.
Using a new satellite-based method, scientists at NASA, Environment and Climate Change Canada, and two universities have located 39 major, previously unreported human-made sources of toxic sulfur dioxide emissions.
A known health hazard and contributor to acid rain, sulfur dioxide (SO2) is one of six air pollutants regulated by the U.S. Environmental Protection Agency. Currently, sulfur dioxide monitoring relies on emission inventories derived from ground-based measurements and factors such as fuel usage. These inventories are used to evaluate regulatory policies for air-quality improvements and to anticipate future emission scenarios that may accompany economic and population growth.
But to develop comprehensive and accurate inventories, industries, government agencies, and scientists first must know the locations of pollution sources.
“We now have an independent measurement of these emission sources that does not rely on what was known or thought known,” said Chris McLinden, an atmospheric scientist with Environment and Climate Change Canada in Toronto and lead author of the study published this week in Nature Geoscience. “When you look at a satellite picture of sulfur dioxide, you end up with it appearing as hotspots, bull’s-eyes in effect, which makes the estimates of emissions easier.”
The 39 unreported emission sources, found in the analysis of satellite data from 2005 to 2014, are clusters of coal-burning power plants, smelters, oil and gas operations found notably in the Middle East, but also in Mexico and parts of Russia. In addition, reported emissions from known sources in these regions were — in some cases — two to three times lower than satellite-based estimates.
Altogether, the unreported and underreported sources account for about 12 percent of all human-made emissions of sulfur dioxide – a discrepancy that can have a large impact on regional air quality, said McLinden.
The research team also located 75 natural sources of sulfur dioxide — non-erupting volcanoes slowly leaking the toxic gas throughout the year. While not necessarily unknown, many volcanoes are in remote locations and not monitored, so this satellite-based data set is the first to provide regular annual information on these passive volcanic emissions.
“Quantifying the sulfur dioxide bull’s-eyes is a two-step process that would not have been possible without two innovations in working with the satellite data,” said co-author Nickolay Krotkov, an atmospheric scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
First was an improvement in the computer processing that transforms raw satellite observations from the Dutch-Finnish Ozone Monitoring Instrument aboard NASA’s Aura spacecraft into precise estimates of sulfur dioxide concentrations. Krotkov and his team now are able to more accurately detect smaller sulfur dioxide concentrations, including those emitted by human-made sources such as oil-related activities and medium-size power plants.
Being able to detect smaller concentrations led to the second innovation. McLinden and his colleagues used a new computer program to more precisely detect sulfur dioxide that had been dispersed and diluted by winds. They then used accurate estimates of wind strength and direction derived from a satellite data-driven model to trace the pollutant back to the location of the source, and also to estimate how much sulfur dioxide was emitted from the smoke stack.
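As a rough illustration of that back-tracing step, one can advect a plume's observed centroid upwind over the gas's effective atmospheric lifetime, and estimate the emission rate from the observed burden. This is only a hypothetical sketch assuming simple linear transport and illustrative round numbers; the study's actual wind-based retrieval is far more sophisticated.

```python
# Hypothetical sketch of wind-based source attribution for an SO2 plume.
# The linear-transport assumption and all numbers here are illustrative,
# not the algorithm or values used in the study.

def trace_source(plume_centroid_km, wind_kmh, lifetime_h):
    """Estimate the source location by moving the plume centroid upwind
    over the gas's effective lifetime."""
    x, y = plume_centroid_km
    u, v = wind_kmh
    return (x - u * lifetime_h, y - v * lifetime_h)

def estimate_emission_rate(observed_mass_kt, lifetime_h):
    """In steady state, emission rate ~ observed burden / effective lifetime."""
    return observed_mass_kt / lifetime_h

# A plume centred 120 km east and 40 km north of the map origin,
# drifting on a 20 km/h easterly / 5 km/h northerly wind for ~6 hours:
src = trace_source((120.0, 40.0), wind_kmh=(20.0, 5.0), lifetime_h=6.0)
rate = estimate_emission_rate(observed_mass_kt=3.0, lifetime_h=6.0)
print(src, rate)  # source near the origin; ~0.5 kt of SO2 per hour
```

The same steady-state logic is what lets a persistent "bull's-eye" in multi-year averaged data be converted into an annual emission estimate for the underlying smokestack.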
“The unique advantage of satellite data is spatial coverage,” said Bryan Duncan, an atmospheric scientist at Goddard. “This paper is the perfect demonstration of how new and improved satellite datasets, coupled with new and improved data analysis techniques, allow us to identify even smaller pollutant sources and to quantify these emissions over the globe.”
The University of Maryland, College Park, and Dalhousie University in Halifax, Nova Scotia, contributed to this study.
For more information about, and access to, NASA’s air quality data, visit:
NASA uses the vantage point of space to increase our understanding of our home planet, improve lives, and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.
For more information about NASA Earth science research, visit:
Astronomers have uncovered one of the biggest supermassive black holes yet found, with a mass of 17 billion Suns, in an unlikely place: the centre of a galaxy that lies in a quiet backwater of the Universe. The observations, made with the NASA/ESA Hubble Space Telescope and the Gemini Telescope in Hawaii, indicate that these monster objects may be more common than once thought. The results of this study are released in the journal Nature.
Until now, the biggest supermassive black holes — those having more than 10 billion times the mass of our Sun — have only been found at the cores of very large galaxies in the centres of massive galaxy clusters. Now, an international team of astronomers using the NASA/ESA Hubble Space Telescope has discovered a supersized black hole with a mass of 17 billion Suns in the centre of the rather isolated galaxy NGC 1600.
NGC 1600 is an elliptical galaxy which is located not in a cluster of galaxies, but in a small group of about twenty. The group is located 200 million light-years away in the constellation Eridanus. While finding a gigantic supermassive black hole in a massive galaxy within a cluster of galaxies is to be expected, finding one in an average-sized galaxy group like the one surrounding NGC 1600 is much more surprising.
“Even though we already had hints that the galaxy might host an extreme object in the centre, we were surprised that the black hole in NGC 1600 is ten times more massive than predicted by the mass of the galaxy,” explains lead author of the study Jens Thomas from the Max Planck Institute for Extraterrestrial Physics, Germany.
Based on previous Hubble surveys of supermassive black holes, astronomers had discovered a correlation between a black hole’s mass and the mass of its host galaxy’s central bulge of stars: the larger the galaxy bulge, the more massive the black hole is expected to be. “It appears from our finding that this relation does not work so well with extremely massive black holes,” says Thomas. “These monster black holes account for a much larger fraction of the host galaxy’s mass than the previous correlations would suggest.”
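To see the scale of the discrepancy, a back-of-the-envelope version of such a bulge-mass relation can be sketched as a simple proportionality. The 0.5 percent black-hole-to-bulge mass ratio and the bulge mass used here are assumed round numbers for illustration, not the paper's calibration:

```python
# Illustrative proportional scaling relation between a galaxy's bulge mass
# and its central black hole mass. The 0.5 percent ratio and the bulge mass
# below are assumed round numbers, not values from the study.

def predicted_bh_mass(bulge_mass_suns, ratio=0.005):
    """Black hole mass predicted by a simple bulge-mass proportionality."""
    return ratio * bulge_mass_suns

observed_bh = 1.7e10                      # ~17 billion solar masses (NGC 1600)
predicted_bh = predicted_bh_mass(3.4e11)  # hypothetical bulge mass
excess = observed_bh / predicted_bh
print(excess)  # roughly the factor-of-ten excess quoted by Thomas
```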
Finding this extremely massive black hole in NGC 1600 leads astronomers to ask whether these objects are more common than previously thought. “There are quite a few galaxies the size of NGC 1600 that reside in average-size galaxy groups,” explains co-author Chung-Pei Ma, an astronomer from the University of California, Berkeley, USA, and head of the MASSIVE Survey. “We estimate that these smaller groups are about fifty times more abundant than large, dense galaxy clusters. So the question now is: is this the tip of an iceberg? Maybe there are a lot more monster black holes out there.”
It is assumed that this black hole grew by merging with another supermassive black hole from another galaxy. It may then have continued to grow by gobbling up gas funneled to the core of the galaxy by further galaxy collisions. This may also explain why NGC 1600 resides in a sparsely populated region of the Universe and why it is at least three times brighter than its neighbours.
As the supermassive black hole is currently dormant, astronomers were only able to find it and estimate its mass by measuring the velocities of stars close to it, using the Gemini North 8-metre telescope on Mauna Kea, Hawaii. Using these data the team discovered that stars lying about 3000 light-years from the core are moving as if there had been many more stars in the core in the distant past. This indicates that most of the stars in this region have been kicked out from the centre of the galaxy.
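The mass estimate from stellar velocities follows, in spirit, from a virial order-of-magnitude relation, M ~ σ²r/G: the faster stars move at a given distance from the centre, the more enclosed mass is needed to hold them. The velocity dispersion in this sketch is an assumed illustrative value, not the team's measurement:

```python
# Order-of-magnitude virial mass estimate from stellar velocities.
# The 300 km/s dispersion is an assumed illustrative value, not the
# measured one; the 3000 light-year radius comes from the article.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres per light-year

def virial_mass_suns(sigma_km_s, radius_ly):
    """Enclosed mass implied by a stellar velocity dispersion sigma at a
    given radius, via M ~ sigma^2 * r / G (ignoring geometry factors)."""
    sigma_m_s = sigma_km_s * 1e3
    radius_m = radius_ly * LIGHT_YEAR
    return sigma_m_s**2 * radius_m / (G * M_SUN)

# An assumed dispersion of ~300 km/s at 3000 light-years lands in the
# tens-of-billions-of-Suns range, the same ballpark as the quoted
# 17-billion-solar-mass black hole.
mass = virial_mass_suns(300.0, 3000.0)
```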
Archival Hubble images, taken with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), support the idea that the two merging supermassive black holes in the distant past gave stars the boot. The NICMOS images revealed that the galaxy’s core is unusually faint, indicating a lack of stars close to the galactic centre. “We estimate that the mass of stars tossed out of the central region of NGC 1600 is equal to 40 billion Suns,” concludes Thomas. “This is comparable to ejecting the entire disc of our Milky Way galaxy.”
 The MASSIVE Survey, which began in 2014, measures the mass of stars, dark matter, and the central black hole of the 100 most massive, nearby galaxies, those larger than 300 billion solar masses and within 350 million light-years of Earth. Among its goals are to find the descendants of luminous quasars that may be sleeping unsuspected in large nearby galaxies and to understand how galaxies form and grow supermassive black holes.
The Hubble Space Telescope is a project of international cooperation between ESA and NASA.
The study “A 17-billion-solar-mass black hole in a group galaxy with a diffuse core” appeared in the journal Nature.
The international team of astronomers in this study consists of J. Thomas (Max Planck Institute for Extraterrestrial Physics, Germany), C.-P. Ma (University of California, Berkeley, USA), N. McConnell (Dominion Astrophysical Observatory, Canada), J. Greene (Princeton University, USA), J. Blakeslee (Dominion Astrophysical Observatory, Canada), and R. Janish (University of California, Berkeley, USA).
The Muslim world, especially the Middle East and surrounding regions where we live, is facing some of the worst political turmoil in its history. We are seeing wars, terrorism, a refugee crisis, and the resulting economic fallout. The toughest calamities fall on ordinary people, who have little or no control over the policies that have produced the current mess. Worst of all is the exploitation of sectarianism as a tool to advance foreign-policy and strategic agendas. Muslims in many parts of the world criticize Western powers for this situation, but we also need to do some serious soul-searching.
We need to ask: why are we in this mess?
For me, one major reason is that OIC members have failed to find enough common constructive goals to bring their people together.
After the Second World War, Europe realized the importance of academic and economic cooperation for promoting peace and stability. CERN is a prime example of how formal foes can join hands for the purpose of discovery and innovation.
France and Germany have established joint institutes, and their universities regularly conduct joint research projects. The UK and USA, despite the enormous bloodshed of the American War of Independence, enjoy exemplary people-to-people relations, with academic collaboration a major part of them. It is this attitude of thinking big, finding common constructive goals, and building strong academic collaboration that has put these countries at the forefront of science and technology.
Over the last few decades, humanity has sent probes like Voyager to the very edge of our solar system, countries are planning to colonize Mars, satellites like Planck and WMAP have tracked radiation from the earliest stages of our universe, quantum computing now looks like a real possibility, and plans are being drawn up for hypersonic flight. But in most of the so-called Muslim world, we remain stuck on centuries-old, good-for-nothing sectarian disputes.
Despite some efforts in the defense sector, OIC member countries largely lack the technology base to independently produce jets, automobiles, advanced electronics, precision instruments, and many other things that are produced by public or independent private-sector companies in the USA, China, Russia, Japan, and Europe. Most of what is indigenously produced by OIC countries relies heavily on foreign core components, such as engines or high-precision electronics. This is due to our lack of investment in fundamental research, especially physics.
OIC countries such as Turkey, Pakistan, Malaysia, Iran, and Saudi Arabia, among others, have basic infrastructure they can build upon to conduct research projects and joint ventures in areas like space probes, ground-based optical and radio astronomy, particle physics, climate change, and the development of a strong industrial technology base. All we need is the will to start joint projects and to promote knowledge sharing through exchanges of researchers and joint academic and industrial research projects.
These joint projects would not only enhance people-to-people contacts and improve academic research standards; they would also contribute positively to the overall progress of humanity. It is a great loss for humanity as a whole that a civilization which once led the development of astronomy, medicine, and other key areas of science now makes little or no contribution to advancing our understanding of the universe.
The situation is bad, and if we look at Syria, Afghanistan, Iraq, Yemen, or Libya, it seems we have hit rock bottom. It is we who must find the way out of this mess, as no one else is going to solve our problems, especially the current sectarian strife, which is the result of narrow mindsets making weak decisions. To escape this dire state, we need broad minds with big vision and a desire to move forward through mutual respect and understanding.
Harnessing the energy of small bending motions
New device could provide electrical power source from walking and other ambient motions.
By David Chandler
CAMBRIDGE, Mass.–For many applications such as biomedical, mechanical, or environmental monitoring devices, harnessing the energy of small motions could provide a small but virtually unlimited power supply. While a number of approaches have been attempted, researchers at MIT have now developed a completely new method based on electrochemical principles, which could be capable of harvesting energy from a broader range of natural motions and activities, including walking.
The new system, based on the slight bending of a sandwich of metal and polymer sheets, is described in the journal Nature Communications, in a paper by MIT professor Ju Li, graduate students Sangtae Kim and Soon Ju Choi, and four others.
Most previously designed devices for harnessing small motions have been based on the triboelectric effect (essentially friction, like rubbing a balloon against a wool sweater) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These work well for high-frequency sources of motion such as those produced by the vibrations of machinery. But for typical human-scale motions such as walking or exercising, such systems have limits.
“When you put in an impulse” to such traditional materials, “they respond very well, in microseconds. But this doesn’t match the timescale of most human activities,” says Li, who is the Battelle Energy Alliance Professor in Nuclear Science and Engineering and professor of materials science and engineering. “Also, these devices have high electrical impedance and bending rigidity and can be quite expensive,” he says.
Simple and flexible
By contrast, the new system uses technology similar to that in lithium ion batteries, so it could likely be produced inexpensively at large scale, Li says. In addition, these devices would be inherently flexible, making them more compatible with wearable technology and less likely to break under mechanical stress.
While piezoelectric materials are based on a purely physical process, the new system is electrochemical, like a battery or a fuel cell. It uses two thin sheets of lithium alloys as electrodes, separated by a layer of porous polymer soaked with liquid electrolyte that is efficient at transporting lithium ions between the metal plates. But unlike a rechargeable battery, which takes in electricity, stores it, and then releases it, this system takes in mechanical energy and puts out electricity.
When bent even a slight amount, the layered composite produces a pressure difference that squeezes lithium ions through the polymer (like the reverse osmosis process used in water desalination). It also produces a counteracting voltage and an electrical current in the external circuit between the two electrodes, which can be then used directly to power other devices.
Because it requires only a small amount of bending to produce a voltage, such a device could simply have a tiny weight attached to one end to cause the metal to bend as a result of ordinary movements, when strapped to an arm or leg during everyday activities. Unlike batteries and solar cells, the output from the new system comes in the form of alternating current (AC), with the flow moving first in one direction and then the other as the material bends first one way and then back.
This device converts mechanical to electrical energy; therefore, “it is not limited by the second law of thermodynamics,” Li says, which sets an upper limit on the theoretically possible efficiency. “So in principle, [the efficiency] could be 100 percent,” he says. In this first-generation device, developed to demonstrate the electrochemomechanical working principle, he says, “the best we can hope for is about 15 percent” efficiency. But the system could easily be manufactured in any desired size and is amenable to industrial manufacturing processes.
Test of time
The test devices maintain their properties through many cycles of bending and unbending, Li reports, with little reduction in performance after 1,500 cycles. “It’s a very stable system,” he says.
Previously, the phenomenon underlying the new device “was considered a parasitic effect in the battery community,” according to Li, and voltage put into the battery could sometimes induce bending. “We do just the opposite,” Li says, putting in the stress and getting a voltage as output. Besides being a potential energy source, he says, this could also be a complementary diagnostic tool in electrochemistry. “It’s a good way to evaluate damage mechanisms in batteries, a way to understand battery materials better,” he says.
In addition to harnessing daily motion to power wearable devices, the new system might also be useful as an actuator with biomedical applications, or used for embedded stress sensors in settings such as roads, bridges, keyboards, or other structures, the researchers suggest.
The team also included postdoc Kejie Zhao (now assistant professor at Purdue University) and visiting graduate student Giorgia Gobbi, and Hui Yang and Sulin Zhang at Penn State. The work was supported by the National Science Foundation, the MIT MADMEC Contest, the Samsung Scholarship Foundation, and the Kwanjeong Educational Foundation.