Monthly Archives: February 2017

NASA Telescope Reveals Largest Batch of Earth-Size, Habitable-Zone Planets Around Single Star

NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.

The discovery sets a new record for the greatest number of habitable-zone planets found around a single star outside our solar system. All seven of these planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest for the three in the habitable zone.

“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”

At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.

This exoplanet system is called TRAPPIST-1, named for the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory’s Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.

The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.

Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.

Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, “snowball-like” world, but further observations are needed.
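Bulk density follows directly from a planet’s measured mass and radius. A minimal sketch of that calculation, using approximate Earth values and one hypothetical lower-mass, larger-radius planet for comparison (none of these numbers are TRAPPIST-1 measurements):

```python
import math

def bulk_density(mass_kg, radius_m):
    """Bulk density = mass / volume of a sphere."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume

# Earth as a reference point (values are approximate)
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

rho_earth = bulk_density(M_EARTH, R_EARTH)   # ~5,500 kg/m^3, rocky
# A hypothetical planet at 0.9 Earth masses and 1.05 Earth radii comes out
# noticeably less dense -- the kind of signature that hints at water or ice.
rho_example = bulk_density(0.9 * M_EARTH, 1.05 * R_EARTH)
print(round(rho_earth), round(rho_example))
```

A density well below Earth’s, in other words, is the first quantitative clue that a planet may be volatile-rich rather than bare rock.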

“The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”

In contrast to our sun, the TRAPPIST-1 star – classified as an ultra-cool dwarf – is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun. The planets also are very close to each other. If a person were standing on one of the planets’ surfaces, they could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth’s sky.

The planets may also be tidally locked to their star, which means the same side of each planet always faces the star, leaving one side in perpetual day and the other in perpetual night. This could mean they have weather patterns totally unlike those on Earth, such as strong winds blowing from the day side to the night side, and extreme temperature changes.

Spitzer, an infrared telescope that trails Earth as it orbits the sun, was well-suited for studying TRAPPIST-1 because the star glows brightest in infrared light, whose wavelengths are longer than the eye can see. In the fall of 2016, Spitzer observed TRAPPIST-1 nearly continuously for 500 hours. Spitzer is uniquely positioned in its orbit to observe enough crossings – transits – of the planets in front of the host star to reveal the complex architecture of the system. Engineers optimized Spitzer’s ability to observe transiting planets during Spitzer’s “warm mission,” which began after the spacecraft’s coolant ran out as planned after the first five years of operations.
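The transit technique behind these observations measures the tiny dip in starlight as a planet crosses its star’s disk, and the dip’s depth scales as the square of the planet-to-star radius ratio. That is why a small ultra-cool dwarf makes Earth-size planets far easier to detect; a rough sketch, where the 0.12-solar-radius star size is an assumed illustrative value, not a measured TRAPPIST-1 parameter:

```python
# Transit depth: the fractional dip in starlight when a planet crosses its
# star, approximately (R_planet / R_star)^2, ignoring limb darkening.
R_SUN = 6.957e8      # m
R_EARTH = 6.371e6    # m

def transit_depth(r_planet_m, r_star_m):
    return (r_planet_m / r_star_m) ** 2

# An Earth-size planet crossing a Sun-size star: a dip of roughly 0.01%.
depth_sunlike = transit_depth(R_EARTH, R_SUN)
# The same planet crossing an ultra-cool dwarf ~0.12 solar radii across
# produces a dip dozens of times deeper, hence a much stronger signal.
depth_dwarf = transit_depth(R_EARTH, 0.12 * R_SUN)
print(depth_sunlike, depth_dwarf)
```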

“This is the most exciting result I have seen in the 14 years of Spitzer operations,” said Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC in Pasadena, California. “Spitzer will follow up in the fall to further refine our understanding of these planets so that the James Webb Space Telescope can follow up. More observations of the system are sure to reveal more secrets.”

Following up on the Spitzer discovery, NASA’s Hubble Space Telescope has begun screening four of the planets, including the three inside the habitable zone. These observations aim to assess whether the planets have puffy, hydrogen-dominated atmospheres, typical of gaseous worlds like Neptune.

In May 2016, the Hubble team observed the two innermost planets, and found no evidence for such puffy atmospheres. This strengthened the case that the planets closest to the star are rocky in nature.

“The TRAPPIST-1 system provides one of the best opportunities in the next decade to study the atmospheres around Earth-size planets,” said Nikole Lewis, co-leader of the Hubble study and astronomer at the Space Telescope Science Institute in Baltimore, Maryland.

NASA’s planet-hunting Kepler space telescope also is studying the TRAPPIST-1 system, making measurements of the star’s minuscule changes in brightness due to transiting planets. Operating as the K2 mission, the spacecraft’s observations will allow astronomers to refine the properties of the known planets, as well as search for additional planets in the system. The K2 observations conclude in early March and will be made available on the public archive.

Spitzer, Hubble, and Kepler will help astronomers plan for follow-up studies using NASA’s upcoming James Webb Space Telescope, launching in 2018. With much greater sensitivity, Webb will be able to detect the chemical fingerprints of water, methane, oxygen, ozone, and other components of a planet’s atmosphere. Webb also will analyze planets’ temperatures and surface pressures – key factors in assessing their habitability.

NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate. Science operations are conducted at the Spitzer Science Center, at Caltech, in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at Caltech/IPAC. Caltech manages JPL for NASA.

For more information about Spitzer, visit:

https://www.nasa.gov/spitzer

For more information on the TRAPPIST-1 system, visit:

https://exoplanets.nasa.gov/trappist1

For more information on exoplanets, visit:

https://www.nasa.gov/exoplanets

Credits
Source: NASA Solar System

Felicia Chou / Sean Potter
Headquarters, Washington
202-358-1726 / 202-358-1536
felicia.chou@nasa.gov / sean.potter@nasa.gov

Elizabeth Landau
Jet Propulsion Laboratory, Pasadena, Calif.
818-354-6425
elizabeth.landau@jpl.nasa.gov

Income inequality linked to export “complexity”

The mix of products that countries export is a good predictor of income distribution, study finds.

By Larry Hardesty



CAMBRIDGE, Mass. – In a series of papers over the past 10 years, MIT Professor César Hidalgo and his collaborators have argued that the complexity of a country’s exports — not just their diversity but the expertise and technological infrastructure required to produce them — is a better predictor of future economic growth than factors economists have historically focused on, such as capital and education.

Now, a new paper by Hidalgo and his colleagues, appearing in the journal World Development, argues that, all else being equal, the complexity of a country’s exports also correlates with its degree of economic equality: The more complex a country’s products, the greater the equality it enjoys relative to similar-sized countries with similar-sized economies.

“When people talk about the role of policy in inequality, there is an implicit assumption that you can always reduce inequality using only redistributive policies,” says Hidalgo, the Asahi Broadcasting Corporation Associate Professor of Media Arts and Sciences at the MIT Media Lab. “What these new results are telling us is that the effectiveness of policy is limited because inequality lives within a range of values that are determined by your underlying industrial structure.

“So if you’re a country like Venezuela, no matter how much money Chavez or Maduro gives out, you’re not going to be able to reduce inequality, because, well, all the money is coming in from one industry, and the 30,000 people involved in that industry of course are going to have an advantage in the economy. While if you’re in a country like Germany or Switzerland, where the economy is very diversified, and there are many people who are generating money in many different industries, firms are going to be under much more pressure to be more inclusive and redistributive.”

Joining Hidalgo on the paper are first author Dominik Hartmann, who was a postdoc in Hidalgo’s group when the work was done and is now a research fellow at the Fraunhofer Center for International Management and Knowledge Economy in Leipzig, Germany; Cristian Jara-Figueroa and Manuel Aristarán, MIT graduate students in media arts and sciences; and Miguel Guevara, a professor of computer science at Playa Ancha University in Valparaíso, Chile, who earned his PhD at the MIT Media Lab.

Quantifying complexity

For Hidalgo and his colleagues, the complexity of a product is related to the breadth of knowledge required to produce it. The PhDs who operate a billion-dollar chip-fabrication facility are repositories of knowledge, and the facility itself is the embodiment of knowledge. But complexity also factors in the infrastructure and institutions that facilitate the aggregation of knowledge, such as reliable transportation and communication systems, and a culture of trust that enables productive collaboration.

In the new study, rather than try to itemize and quantify all such factors — probably an impossible task — the researchers made a simplifying assumption: Complex products are rare products exported by countries with diverse export portfolios. For instance, both chromium ore and nonoptical microscopes are rare exports, but the Czech Republic, which is the second-leading exporter of nonoptical microscopes, has a more diverse export portfolio than South Africa, the leading exporter of chromium ore.
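Under that simplifying assumption, a rough complexity score can be derived from a binary country-product export matrix. The sketch below uses a tiny hypothetical matrix and only the first step of the iterative “method of reflections” from the economic complexity literature; the countries and products are illustrative, not the study’s data:

```python
import numpy as np

# Toy binary matrix M[c, p] = 1 if country c exports product p with
# revealed comparative advantage. All rows and columns are hypothetical.
countries = ["A", "B", "C"]
products = ["microscopes", "chips", "ore", "textiles"]
M = np.array([
    [1, 1, 0, 1],   # A: diverse exporter
    [0, 1, 0, 1],   # B
    [0, 0, 1, 1],   # C: mostly raw materials
])

diversity = M.sum(axis=1)   # how many products each country exports
ubiquity = M.sum(axis=0)    # how many countries export each product

# First-order proxy: a product is "complex" if the countries exporting it
# are, on average, diverse. Both microscopes and ore are rare (ubiquity 1),
# but microscopes score higher because their exporter is more diversified.
product_complexity = (M * diversity[:, None]).sum(axis=0) / ubiquity
print(dict(zip(products, product_complexity)))
```

In this toy example the rare product exported by the diverse country outranks the rare product exported by the narrow one, mirroring the microscopes-versus-chromium-ore comparison above.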

The researchers compared each country’s complexity measure to its Gini coefficient, the most widely used measure of income inequality. They also compared Gini coefficients to countries’ per-capita gross domestic products (GDPs) and to standard measures of institutional development and education.
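For reference, the Gini coefficient can be computed directly from a list of incomes. A minimal sketch (0 means perfect equality; values approaching 1 mean income is concentrated in very few hands):

```python
def gini(incomes):
    """Gini coefficient via the sorted-values closed form."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))     # 0.0 -- perfect equality
print(gini([0, 0, 0, 100]))   # 0.75 -- all income held by one of four
```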

Predictive power

According to the researchers’ analysis of economic data from 1996 to 2008, per-capita GDP predicts only 36 percent of the variation in Gini coefficients, but product complexity predicts 58 percent. Combining per-capita GDP, export complexity, education levels, and population predicts 69 percent of variation. However, whereas leaving out any of the other three factors lowers that figure to about 68 percent, leaving out complexity lowers it to 61 percent, indicating that the complexity measure captures something crucial that the other factors leave out.
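The leave-one-out comparison described above can be illustrated with a small regression sketch. The data here are synthetic, generated so that one predictor carries most of the signal; it is not the study’s dataset, only a demonstration of how dropping the dominant predictor costs more explained variance than dropping the others:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic illustration: inequality driven mostly by "complexity",
# with GDP and education contributing weaker signals.
complexity = rng.normal(size=n)
gdp = rng.normal(size=n)
education = rng.normal(size=n)
inequality = (-0.7 * complexity - 0.3 * gdp - 0.2 * education
              + rng.normal(scale=0.5, size=n))

def r_squared(y, *predictors):
    """Fraction of variance in y explained by an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

full = r_squared(inequality, complexity, gdp, education)
no_complexity = r_squared(inequality, gdp, education)
print(round(full, 2), round(no_complexity, 2))
```

As in the study’s numbers, omitting the strongest predictor produces a much larger drop in explained variance than omitting any of the weaker ones.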

Using trade data from 1963 to 2008, the researchers also showed that countries whose economic complexity increased, such as South Korea, saw reductions in income inequality, while countries whose economic complexity decreased, such as Norway, saw income inequality increase.

Source: MIT News Office


Researchers devise efficient power converter for internet of things


By Larry Hardesty



CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technologies Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase in current levels.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.
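That measure-and-release loop can be sketched as a toy discrete-time simulation. The component values below – packet charge, output capacitance, threshold – are assumptions chosen for illustration, not the chip’s actual parameters:

```python
# Toy sketch of packet-based control: drain the output capacitor with the
# load current, and release a fixed packet of charge whenever the measured
# output voltage sags below the regulation threshold.
V_THRESHOLD = 0.9        # volts (regulation target)
PACKET_CHARGE = 1e-9     # coulombs per energy packet (assumed)
C_OUT = 1e-7             # output capacitor, farads (assumed)

def simulate(load_current_a, steps=1000, dt=1e-6):
    v_out = V_THRESHOLD
    packets = 0
    for _ in range(steps):
        v_out -= load_current_a * dt / C_OUT    # load drains the capacitor
        if v_out < V_THRESHOLD:                 # controller's measurement
            v_out += PACKET_CHARGE / C_OUT      # release one packet
            packets += 1
    return packets

# A light load needs far fewer packets than a heavy one over the same window:
print(simulate(1e-9), simulate(1e-4))
```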

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.
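The cost of that fixed sampling rate is easy to see with rough numbers; the per-measurement energy below is an assumed illustrative figure, not a measured value:

```python
# Measurement power scales with how often the controller samples the output.
E_MEAS = 1e-12          # joules per voltage measurement (assumed)

fixed_rate = 1_000_000              # samples/s, regardless of load
light_load_packets = 100            # packets/s actually needed at light load
heavy_load_packets = 1_000_000      # packets/s needed when feeding a radio

# Fixed-clock converter: measurement power is constant at every load.
p_fixed = fixed_rate * E_MEAS
# Variable-clock converter: sample only about as often as packets are needed.
p_variable_light = light_load_packets * E_MEAS
p_variable_heavy = heavy_load_packets * E_MEAS  # matches fixed clock at peak
print(p_fixed, p_variable_light, p_variable_heavy)
```

At full load the two schemes sample equally often, but at light load the variable clock spends orders of magnitude less energy on measurement – exactly the regime where an internet-of-things sensor lives most of the time.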

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Corporation, through its University Shuttle Program.

Source: MIT News Office
