Category Archives: Technology


Latest news from around the world!

NASA Telescope Reveals Largest Batch of Earth-Size, Habitable-Zone Planets Around Single Star

NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.

The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.

“This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of the agency’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.”

At about 40 light-years (235 trillion miles) from Earth, the system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside of our solar system, these planets are scientifically known as exoplanets.

This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory’s Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven.

The new results were published Wednesday in the journal Nature, and announced at a news briefing at NASA Headquarters in Washington.

Using Spitzer data, the team precisely measured the sizes of the seven planets and developed first estimates of the masses of six of them, allowing their density to be estimated.

Based on their densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will not only help determine whether they are rich in water, but also possibly reveal whether any could have liquid water on their surfaces. The mass of the seventh and farthest exoplanet has not yet been estimated – scientists believe it could be an icy, “snowball-like” world, but further observations are needed.
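As a rough illustration of the reasoning above (a planet’s bulk density follows from its measured mass and radius), here is a short Python sketch; the helper function and its test values are our own placeholders, not the published TRAPPIST-1 measurements:

```python
import math

# Reference values for Earth, used to express planet properties in Earth units.
EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_earths, radius_earths):
    """Mean density (kg/m^3) of a planet given its mass and radius in Earth units."""
    mass = mass_earths * EARTH_MASS_KG
    radius = radius_earths * EARTH_RADIUS_M
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return mass / volume

# Sanity check: a 1-Earth-mass, 1-Earth-radius planet recovers Earth's
# mean density of roughly 5,500 kg/m^3, squarely in the "rocky" regime.
print(round(bulk_density(1.0, 1.0)))  # → 5513
```

With measured masses and radii in hand, the same comparison against much lower densities (water ice is near 920 kg/m^3) is what separates rocky candidates from icy, “snowball-like” ones.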

“The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”

In contrast to our sun, the TRAPPIST-1 star – classified as an ultra-cool dwarf – is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun. The planets also are very close to each other. A person standing on one of the planets’ surfaces could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth’s sky.

The planets may also be tidally locked to their star, which means the same side of the planet always faces the star, so that each side is in either perpetual day or perpetual night. This could mean they have weather patterns totally unlike those on Earth, such as strong winds blowing from the day side to the night side, and extreme temperature changes.

Spitzer, an infrared telescope that trails Earth as it orbits the sun, was well-suited for studying TRAPPIST-1 because the star glows brightest in infrared light, whose wavelengths are longer than the eye can see. In the fall of 2016, Spitzer observed TRAPPIST-1 nearly continuously for 500 hours. Spitzer is uniquely positioned in its orbit to observe enough crossings – transits – of the planets in front of the host star to reveal the complex architecture of the system. Engineers optimized Spitzer’s ability to observe transiting planets during Spitzer’s “warm mission,” which began after the spacecraft’s coolant ran out as planned after the first five years of operations.

“This is the most exciting result I have seen in the 14 years of Spitzer operations,” said Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC in Pasadena, California. “Spitzer will follow up in the fall to further refine our understanding of these planets so that the James Webb Space Telescope can follow up. More observations of the system are sure to reveal more secrets.”

Following up on the Spitzer discovery, NASA’s Hubble Space Telescope has initiated the screening of four of the planets, including the three inside the habitable zone. These observations aim to assess the presence of puffy, hydrogen-dominated atmospheres, typical of gaseous worlds like Neptune, around these planets.

In May 2016, the Hubble team observed the two innermost planets, and found no evidence for such puffy atmospheres. This strengthened the case that the planets closest to the star are rocky in nature.

“The TRAPPIST-1 system provides one of the best opportunities in the next decade to study the atmospheres around Earth-size planets,” said Nikole Lewis, co-leader of the Hubble study and astronomer at the Space Telescope Science Institute in Baltimore, Maryland.

NASA’s planet-hunting Kepler space telescope also is studying the TRAPPIST-1 system, making measurements of the star’s minuscule changes in brightness due to transiting planets. Operating as the K2 mission, the spacecraft’s observations will allow astronomers to refine the properties of the known planets, as well as search for additional planets in the system. The K2 observations conclude in early March and will be made available on the public archive.

Spitzer, Hubble, and Kepler will help astronomers plan for follow-up studies using NASA’s upcoming James Webb Space Telescope, launching in 2018. With much greater sensitivity, Webb will be able to detect the chemical fingerprints of water, methane, oxygen, ozone, and other components of a planet’s atmosphere. Webb also will analyze planets’ temperatures and surface pressures – key factors in assessing their habitability.

NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate. Science operations are conducted at the Spitzer Science Center, at Caltech, in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at Caltech/IPAC. Caltech manages JPL for NASA.


Source: NASA Solar System

Felicia Chou / Sean Potter
Headquarters, Washington
202-358-1726 / 202-358-1536

Elizabeth Landau
Jet Propulsion Laboratory, Pasadena, Calif.

Researchers devise efficient power converter for internet of things


By Larry Hardesty


CAMBRIDGE, Mass. – The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT’s Microsystems Technology Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2-million-fold increase in current levels.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.
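The overhead described above can be made concrete with a little arithmetic (our illustration, not figures from the MIT paper): a controller that samples the output voltage at a fixed 1 MHz wastes almost all of its measurements when the load only needs a few packets per second.

```python
# Fixed measurement rate of a conventional low-power converter (per the
# article: 1 million voltage measurements per second).
FIXED_RATE_HZ = 1_000_000

def measurements_per_packet(packets_per_second):
    """Voltage measurements the fixed-rate scheme spends per energy
    packet actually delivered to the load."""
    return FIXED_RATE_HZ / packets_per_second

# At full load (a radio drawing ~1 million packets/s) the bookkeeping is
# one measurement per packet; near idle, it dwarfs the useful work.
print(measurements_per_packet(1_000_000))  # → 1.0
print(measurements_per_packet(100))        # → 10000.0
```

A variable clock, as in the MIT design described below, instead scales the measurement rate with demand, which is where the quiescent-power savings come from.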

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Company, through its University Shuttle Program.

Source: MIT News Office


The LIGO Scientific Collaboration and the Virgo Collaboration identify a second gravitational wave event from another pair of black holes in the data from Advanced LIGO detectors

Gravitational waves detected from second pair of colliding black holes




On December 26, 2015 at 03:38:53 UTC, scientists observed gravitational waves–ripples in the fabric of spacetime–for the second time.

The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA.

The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors.

Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained, and physicists have concluded that these gravitational waves were produced during the final moments of the merger of two black holes–14 and 8 times the mass of the sun–to produce a single, more massive spinning black hole that is 21 times the mass of the sun.

“It is very significant that these black holes were much less massive than those observed in the first detection,” says Gabriela González, LIGO Scientific Collaboration (LSC) spokesperson and professor of physics and astronomy at Louisiana State University. “Because of their lighter masses compared to the first detection, they spent more time–about one second–in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”

During the merger, which occurred approximately 1.4 billion years ago, a quantity of energy roughly equivalent to the mass of the sun was converted into gravitational waves. The detected signal comes from the last 27 orbits of the black holes before their merger. Based on the arrival time of the signals–with the Livingston detector measuring the waves 1.1 milliseconds before the Hanford detector–the position of the source in the sky can be roughly determined.
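The figures in this paragraph can be checked in a few lines of Python (a back-of-the-envelope sketch, not the collaboration’s analysis; the roughly 3,000 km Hanford–Livingston baseline is our added assumption):

```python
C = 2.998e8              # speed of light, m/s
SOLAR_MASS_KG = 1.989e30

# 14 + 8 solar masses merge into 21: about one solar mass is radiated
# away as gravitational waves, E = mc^2.
radiated_mass_kg = (14 + 8 - 21) * SOLAR_MASS_KG
energy_joules = radiated_mass_kg * C ** 2
print(f"{energy_joules:.1e} J")  # → 1.8e+47 J

# Light-travel time across the ~3,000 km between the two detectors caps
# the arrival-time difference at ~10 ms, so the measured 1.1 ms offset
# constrains the direction of the source on the sky.
baseline_m = 3.0e6
print(f"{baseline_m / C * 1e3:.1f} ms")  # → 10.0 ms
```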

“In the near future, Virgo, the European interferometer, will join a growing network of gravitational wave detectors, which work together with ground-based telescopes that follow up on the signals,” notes Fulvio Ricci, the Virgo Collaboration spokesperson, a physicist at the Istituto Nazionale di Fisica Nucleare (INFN) and professor at Sapienza University of Rome. “The three interferometers together will permit a far better localization in the sky of the signals.”

The first detection of gravitational waves, announced on February 11, 2016, was a milestone in physics and astronomy; it confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity, and marked the beginning of the new field of gravitational-wave astronomy.

The second discovery “has truly put the ‘O’ for Observatory in LIGO,” says Caltech’s Albert Lazzarini, deputy director of the LIGO Laboratory. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future. LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”

“We are starting to get a glimpse of the kind of new astrophysical information that can only come from gravitational wave detectors,” says MIT’s David Shoemaker, who led the Advanced LIGO detector construction program.

Both discoveries were made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed.

“With the advent of Advanced LIGO, we anticipated researchers would eventually succeed at detecting unexpected phenomena, but these two detections thus far have surpassed our expectations,” says NSF Director France A. Córdova. “NSF’s 40-year investment in this foundational research is already yielding new information about the nature of the dark universe.”

Advanced LIGO’s next data-taking run will begin this fall. By then, further improvements in detector sensitivity are expected to allow LIGO to reach as much as 1.5 to 2 times more of the volume of the universe. The Virgo detector is expected to join in the latter half of the upcoming observing run.

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector.

Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: 6 from Centre National de la Recherche Scientifique (CNRS) in France; 8 from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; 2 in The Netherlands with Nikhef; the MTA Wigner RCP in Hungary; the POLGRAW group in Poland; and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy.

The NSF leads in financial support for Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project.

Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German-UK GEO collaboration. Significant computer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, the ARCCA cluster at Cardiff University, the University of Wisconsin-Milwaukee, and the Open Science Grid. Several universities designed, built, and tested key components and techniques for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Western Australia, the University of Florida, Stanford University, Columbia University in the City of New York, and Louisiana State University. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom and Germany, and the University of the Balearic Islands in Spain.



For more information and interview requests, please contact:

Kimberly Allen
Director of Media Relations
Deputy Director, MIT News Office
617-253-2702 (office)
617-852-6094 (cell)

Whitney Clavin
Senior Content and Media Strategist
626-390-9601 (cell)

Ivy Kupec
Media Officer
703-292-8796 (Office)
703-225-8216 (Cell)

LIGO Scientific Collaboration
Mimi LaValle
External Relations Manager
Louisiana State University
225-439-5633 (Cell)

EGO-European Gravitational Observatory
Séverine Perus
Media Contact
Tel +39 050752325

Stanford’s social robot ‘Jackrabbot’ seeks to understand pedestrian behavior


The Computational Vision and Geometry Lab has developed a robot prototype that could soon autonomously move among us, following normal human social etiquette. It’s named ‘Jackrabbot’ after the springy hares that bounce around campus.


In order for robots to circulate on sidewalks and mingle with humans in other crowded places, they’ll have to understand the unwritten rules of pedestrian behavior. Stanford researchers have created a short, non-humanoid prototype of just such a moving, self-navigating machine.

The robot is nicknamed “Jackrabbot” – after the jackrabbits often seen darting across the Stanford campus – and looks like a ball on wheels. Jackrabbot is equipped with sensors to be able to understand its surroundings and navigate streets and hallways according to normal human etiquette.

The idea behind the work is that by observing how Jackrabbot navigates among students in the halls and on the sidewalks of Stanford’s School of Engineering, and how it learns the unwritten conventions of these social behaviors over time, the researchers will gain critical insight into how to design the next generation of everyday robots so that they operate smoothly alongside humans in crowded open spaces like shopping malls or train stations.

“By learning social conventions, the robot can be part of ecosystems where humans and robots coexist,” said Silvio Savarese, an assistant professor of computer science and director of the Stanford Computational Vision and Geometry Lab.

The researchers will present their system for predicting human trajectories in crowded spaces at the Computer Vision and Pattern Recognition conference in Las Vegas on June 27.

As robotic devices become more common in human environments, it becomes increasingly important that they understand and respect human social norms, Savarese said. How should they behave in crowds? How do they share public resources, like sidewalks or parking spots? When should a robot take its turn? What are the ways people signal each other to coordinate movements and negotiate other spontaneous activities, like forming a line?

These human social conventions aren’t necessarily explicit nor are they written down complete with lane markings and traffic lights, like the traffic rules that govern the behavior of autonomous cars.

So Savarese’s lab is using machine learning techniques to create algorithms that will, in turn, allow the robot to recognize and react appropriately to unwritten rules of pedestrian traffic. The team’s computer scientists have been collecting images and video of people moving around the Stanford campus and transforming those images into coordinates. From those coordinates, they can train an algorithm.
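To make that pipeline concrete, here is a deliberately simplified sketch (ours, not the lab’s code): once tracked pedestrians are reduced to (x, y) coordinate sequences, even a constant-velocity baseline can extrapolate a path. The Social LSTM model the team actually trains learns far richer, interaction-aware behavior from the same kind of data.

```python
def predict_next(track):
    """Extrapolate the next (x, y) position from the last two points,
    assuming the pedestrian keeps a constant velocity."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

# A pedestrian drifting up and to the right, one point per video frame
# (in the real setup the coordinates come from campus footage).
walk = [(0.0, 0.0), (0.5, 0.25), (1.0, 0.5)]
print(predict_next(walk))  # → (1.5, 0.75)
```

A learned model replaces the last line of reasoning: instead of assuming constant velocity, it predicts the next position from the history of every nearby pedestrian.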

“Our goal in this project is to actually learn those (pedestrian) rules automatically from observations – by seeing how humans behave in these kinds of social spaces,” Savarese said. “The idea is to transfer those rules into robots.”

Jackrabbot already moves automatically and can navigate without human assistance indoors, and the team members are fine-tuning the robot’s self-navigation capabilities outdoors. The next step in their research is the implementation of “social aspects” of pedestrian navigation such as deciding rights of way on the sidewalk. This work, described in their newest conference papers, has been demonstrated in computer simulations.

“We have developed a new algorithm that is able to automatically move the robot with social awareness, and we’re currently integrating that in Jackrabbot,” said Alexandre Alahi, a postdoctoral researcher in the lab.

Even though social robots may someday roam among humans, Savarese said he believes they don’t necessarily need to look like humans. Instead they should be designed to look as lovable and friendly as possible. In demos, the roughly three-foot-tall Jackrabbot roams around campus wearing a Stanford tie and sun-hat, generating hugs and curiosity from passersby.

Today, Jackrabbot is an expensive prototype. But Savarese estimates that in five or six years social robots like this could become as cheap as $500, making it possible for companies to release them to the mass market.

“It’s possible to make these robots affordable for on-campus delivery, or for aiding impaired people to navigate in a public space like a train station or for guiding people to find their way through an airport,” Savarese said.

The conference paper is titled “Social LSTM: Human Trajectory Prediction in Crowded Spaces.” See conference program for details.

Source: Stanford University News Service

Academic and research collaboration to improve people to people contacts for peace and progress

Syed Faisal ur Rahman

The Muslim world, especially the Middle East and the surrounding regions where we live, is facing some of the worst political turmoil in its history. We are seeing wars, terrorism, refugee crises, and the resulting economic decline. The toughest calamities are faced by common people, who have very little or no control over the policies that created the current mess. The worst part is the exploitation of sectarianism as a tool to advance foreign policy and strategic agendas. Muslims in many parts of the world criticize Western powers for this situation, but we also need to do some serious soul-searching.

We need to ask ourselves: why are we in this mess?

For me, one major reason is that OIC member states have failed to find enough common constructive goals to bring their people together.

After the Second World War, Europe realized the importance of academic and economic cooperation for promoting peace and stability. CERN is a prime example of how formal foes can join hands for the purpose of discovery and innovation.

France and Germany have established common institutes, and their universities regularly conduct joint research projects. The UK and USA, despite the enormous bloodshed of the American War of Independence, enjoy exemplary people-to-people relationships, and academic collaboration is a major part of them. It is this attitude of thinking big, finding common constructive goals, and building strong academic collaboration that has put these countries at the forefront of science and technology.

Over the last few decades, humanity has sent out probes like Voyager, which are crossing the limits of our solar system; countries are thinking about colonizing Mars; satellites like Planck and WMAP have tracked radiation from the early stages of our universe; quantum computing now looks like a real possibility; and plans are being drawn up for hypersonic flight. But in most of the so-called Muslim world, we are stuck with centuries-old, good-for-nothing sectarian issues.

Despite some efforts in the defense sector, OIC member countries largely lack the technology base to independently produce jets, automobiles, advanced electronics, precision instruments, and many other things that are produced by public or private companies in the USA, China, Russia, Japan, and Europe. Most of the products made indigenously by OIC countries rely heavily on foreign core components such as engines or high-precision electronics. This is due to our lack of investment in fundamental research, especially physics.

OIC countries like Turkey, Pakistan, Malaysia, Iran, Saudi Arabia, and some others have basic infrastructure on which they can build to conduct research projects and joint ventures in areas like space probes, ground-based optical and radio astronomy, particle physics, climate change, and the development of a strong industrial technology base. All we need is the will to start joint projects and to promote knowledge sharing through exchanges of researchers and joint academic and industrial research projects.

These joint projects would not only enhance people-to-people contacts and improve academic research standards; they would also contribute positively to the overall progress of humanity. It is a great loss for humanity as a whole that a civilization which once led the efforts to develop astronomy, medicine, and other key areas of science is now making little or no contribution to advancing our understanding of the universe.

The situation is bad, and if we look at Syria, Afghanistan, Iraq, Yemen, or Libya, it seems we have hit rock bottom. It is we who need to find the way out of this mess, as no one else is going to solve our problems, especially the current sectarian strife, which is the result of narrow mindsets making weak decisions. To come out of this dire state, we need broad minds with big vision and a desire to move forward through mutual respect and understanding.


New device could provide electrical power source from walking and other ambient motions: MIT Research

Harnessing the energy of small bending motions
New device could provide electrical power source from walking and other ambient motions.

By David Chandler


CAMBRIDGE, Mass.–For many applications such as biomedical, mechanical, or environmental monitoring devices, harnessing the energy of small motions could provide a small but virtually unlimited power supply. While a number of approaches have been attempted, researchers at MIT have now developed a completely new method based on electrochemical principles, which could be capable of harvesting energy from a broader range of natural motions and activities, including walking.

The new system, based on the slight bending of a sandwich of metal and polymer sheets, is described in the journal Nature Communications, in a paper by MIT professor Ju Li, graduate students Sangtae Kim and Soon Ju Choi, and four others.

Most previously designed devices for harnessing small motions have been based on the triboelectric effect (essentially friction, like rubbing a balloon against a wool sweater) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These work well for high-frequency sources of motion such as those produced by the vibrations of machinery. But for typical human-scale motions such as walking or exercising, such systems have limits.

“When you put in an impulse” to such traditional materials, “they respond very well, in microseconds. But this doesn’t match the timescale of most human activities,” says Li, who is the Battelle Energy Alliance Professor in Nuclear Science and Engineering and professor of materials science and engineering. “Also, these devices have high electrical impedance and bending rigidity and can be quite expensive,” he says.

Simple and flexible

By contrast, the new system uses technology similar to that in lithium ion batteries, so it could likely be produced inexpensively at large scale, Li says. In addition, these devices would be inherently flexible, making them more compatible with wearable technology and less likely to break under mechanical stress.

While piezoelectric materials are based on a purely physical process, the new system is electrochemical, like a battery or a fuel cell. It uses two thin sheets of lithium alloys as electrodes, separated by a layer of porous polymer soaked with liquid electrolyte that is efficient at transporting lithium ions between the metal plates. But unlike a rechargeable battery, which takes in electricity, stores it, and then releases it, this system takes in mechanical energy and puts out electricity.

When bent even a slight amount, the layered composite produces a pressure difference that squeezes lithium ions through the polymer (like the reverse osmosis process used in water desalination). It also produces a counteracting voltage and an electrical current in the external circuit between the two electrodes, which can be then used directly to power other devices.

Because it requires only a small amount of bending to produce a voltage, such a device could simply have a tiny weight attached to one end to cause the metal to bend as a result of ordinary movements, when strapped to an arm or leg during everyday activities. Unlike batteries and solar cells, the output from the new system comes in the form of alternating current (AC), with the flow moving first in one direction and then the other as the material bends first one way and then back.
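A minimal sketch of that AC behavior, assuming (purely for illustration, not as the authors' model) that the open-circuit voltage tracks the bending strain linearly as the device flexes back and forth:

```python
import math

def bending_voltage(t, freq_hz=1.0, v_peak=0.1):
    """Illustrative open-circuit voltage (V) for sinusoidal bending.

    Assumes voltage is proportional to instantaneous bending strain, so a
    back-and-forth flex at freq_hz yields an AC output with amplitude
    v_peak. Both parameter values are placeholders, not measured data.
    """
    return v_peak * math.sin(2 * math.pi * freq_hz * t)

# Sampling one bending cycle shows the polarity reversing as the
# material bends one way and then the other:
samples = [bending_voltage(t / 10) for t in range(10)]
has_positive = any(v > 0 for v in samples)
has_negative = any(v < 0 for v in samples)
```

A real harvester would rectify this AC output before charging a battery or powering a DC load.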

This device converts mechanical to electrical energy; therefore, “it is not limited by the second law of thermodynamics,” Li says, which sets an upper limit on the theoretically possible efficiency. “So in principle, [the efficiency] could be 100 percent,” he says. In this first-generation device developed to demonstrate the electrochemomechanical working principle, he says, “the best we can hope for is about 15 percent” efficiency. But the system could easily be manufactured in any desired size and is amenable to industrial manufacturing processes.

Test of time

The test devices maintain their properties through many cycles of bending and unbending, Li reports, with little reduction in performance after 1,500 cycles. “It’s a very stable system,” he says.

Previously, the phenomenon underlying the new device “was considered a parasitic effect in the battery community,” according to Li, and voltage put into the battery could sometimes induce bending. “We do just the opposite,” Li says, putting in the stress and getting a voltage as output. Besides being a potential energy source, he says, this could also be a complementary diagnostic tool in electrochemistry. “It’s a good way to evaluate damage mechanisms in batteries, a way to understand battery materials better,” he says.

In addition to harnessing daily motion to power wearable devices, the new system might also be useful as an actuator with biomedical applications, or used for embedded stress sensors in settings such as roads, bridges, keyboards, or other structures, the researchers suggest.

The team also included postdoc Kejie Zhao (now assistant professor at Purdue University) and visiting graduate student Giorgia Gobbi, and Hui Yang and Sulin Zhang at Penn State. The work was supported by the National Science Foundation, the MIT MADMEC Contest, the Samsung Scholarship Foundation, and the Kwanjeong Educational Foundation.

Source: MIT News Office

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.

Entering 2016 with new hope

Syed Faisal ur Rahman


Year 2015 left many good and bad memories for many of us. On one hand we saw more wars, terrorist attacks and political confrontations; on the other, we saw humanity raising its voice for peace, sheltering refugees and joining hands to confront climate change.

In science, we saw the first-ever photograph of light behaving as both a wave and a particle. We also saw serious developments in machine learning, data science and artificial intelligence, with some voices raising caution about AI overtaking humanity and about issues related to privacy. The big question of energy and climate change remained a key point of discussion in scientific and political circles. The biggest breakthrough came near the end of the year with the Paris deal during COP21.

The deal, involving around 200 countries, represents a true spirit of humanity: a commitment to limit global warming to below 2C, and to strive to keep temperatures within 1.5C above pre-industrial levels. This truly global commitment also brought rival countries to sit together for a common cause, to save humanity from self-destruction. I hope the spirit will continue in other areas of common interest as well.

This spectacular view from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 1689. The huge concentration of mass bends light coming from more distant objects and can increase their total apparent brightness and make them visible. One such object, A1689-zD1, is located in the box — although it is still so faint that it is barely seen in this picture. New observations with ALMA and ESO’s VLT have revealed that this object is a dusty galaxy seen when the Universe was just 700 million years old. Credit: NASA; ESA; L. Bradley (Johns Hopkins University); R. Bouwens (University of California, Santa Cruz); H. Ford (Johns Hopkins University); and G. Illingworth (University of California, Santa Cruz)

Space sciences also saw enormous advancements, with New Horizons sending photographs from Pluto and SpaceX successfully landing its reusable Falcon 9 rocket back after a successful launch. We also saw the discovery, by Prof Lajos Balazs, of the largest regular formation in the Universe: a ring of nine galaxies 7 billion light-years away and 5 billion light-years wide, covering a third of our sky. We also learnt this year that Mars once had more water than Earth’s Arctic Ocean, and NASA later confirmed evidence that water flows on the surface of Mars. The announcement led to some interesting insights into the atmosphere and history of the red planet.

In the researchers' new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment. Illustration: Jose-Luis Olivares/MIT

We also saw encouraging advancements in neuroscience, where MIT researchers developed a technique allowing direct stimulation of neurons without the need for implants or external connections, which could be an effective treatment for a variety of neurological diseases. Researchers also reactivated neuroplasticity in older mice, restoring their brains to a younger state, and we saw good progress in combating Alzheimer’s disease.

Quantum physics again remained a key area of scientific advancement. Quantum computing is getting closer to becoming a viable alternative to current computing architectures; the packing of single-photon detectors onto an optical chip is a crucial step toward quantum-computational circuits. Researchers at the Australian National University (ANU) also performed an experiment supporting the view that, at the quantum scale, reality does not exist until it is measured.

There are many other areas where science and technology reached new heights, and they will hopefully continue to do so in 2016. I hope these advancements will not only help us grow economically but also help us become better human beings and a better society.

SpaceX successfully landed its Falcon 9 rocket after launching it into space

SpaceX, founded by Elon Musk, has landed its Falcon 9 rocket after launching it into space. The landing is part of an attempt to develop a credible reusable launch platform for sending satellites into space.


According to SpaceX’s YouTube page:


“With this mission, SpaceX’s Falcon 9 rocket will deliver 11 satellites to low-Earth orbit for ORBCOMM, a leading global provider of Machine-to-Machine communication and Internet of Things solutions. The ORBCOMM launch is targeted for an evening launch from Space Launch Complex 40 at Cape Canaveral Air Force Station, Fla. If all goes as planned, the 11 satellites will be deployed approximately 20 minutes after liftoff, completing a 17-satellite, low Earth orbit constellation for ORBCOMM. This mission also marks SpaceX’s return-to-flight as well as its first attempt to land a first stage on land. The landing of the first stage is a secondary test objective.”

The YouTube video is linked below:
ORBCOMM-2 Full Launch Webcast


Stanford study finds promise in expanding renewables based on results in three major economies

A new Stanford study found that renewable energy can make a major and increasingly cost-effective contribution to alleviating climate change.


Stanford energy experts have released a study that compares the experiences of three large economies in ramping up renewable energy deployment and concludes that renewables can make a major and increasingly cost-effective contribution to climate change mitigation.

The report from Stanford’s Steyer-Taylor Center for Energy Policy and Finance analyzes the experiences of Germany, California and Texas, the world’s fourth, eighth and 12th largest economies, respectively. It found, among other things, that Germany, which gets about half as much sunshine as California and Texas, nevertheless generates electricity from solar installations at a cost comparable to that of Texas and only slightly higher than in California.

The report was released in time for the United Nations Climate Change Conference that started this week, where international leaders are gathering to discuss strategies to deal with global warming, including massive scale-ups of renewable energy.

“As policymakers from around the world gather for the climate negotiations in Paris, our report draws on the experiences of three leaders in renewable-energy deployment to shed light on some of the most prominent and controversial themes in the global renewables debate,” said Dan Reicher, executive director of the Steyer-Taylor Center, which is a joint center between Stanford Law School and Stanford Graduate School of Business. Reicher also is interim president and chief executive officer of the American Council on Renewable Energy.

“Our findings suggest that renewable energy has entered the mainstream and is ready to play a leading role in mitigating global climate change,” said Felix Mormann, associate professor of law at the University of Miami, faculty fellow at the Steyer-Taylor Center and lead author of the report.

Other conclusions of the report, “A Tale of Three Markets: Comparing the Solar and Wind Deployment Experiences of California, Texas, and Germany,” include:

  • Germany’s success in deploying renewable energy at scale is due largely to favorable treatment of “soft cost” factors such as financing, permitting, installation and grid access. This approach has allowed the renewable energy policies of some countries to deliver up to four times the average deployment of other countries, despite offering only half the financial incentives.
  • Contrary to widespread concern, a higher share of renewables does not automatically translate to higher electricity bills for ratepayers. While Germany’s residential electric rates are two to three times those of California and Texas, this price differential is only partly due to Germany’s subsidies for renewables. The average German household’s electricity bill is, in fact, lower than in Texas and only slightly higher than in California, partly as a result of energy-efficiency efforts in German homes.
  • An increase in the share of intermittent solar and wind power need not jeopardize the stability of the electric grid. From 2006 to 2013, Germany tripled the amount of electricity generated from solar and wind to a market share of 26 percent, while managing to reduce average annual outage times for electricity customers in its grid from an already impressive 22 minutes to just 15 minutes. During that same period, California tripled the amount of electricity produced from solar and wind to a joint market share of 8 percent and reduced its outage times from more than 100 minutes to less than 90 minutes. However, Texas increased its outage times from 92 minutes to 128 minutes after ramping up its wind-generated electricity sixfold to a market share of 10 percent.
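The ratepayer point in the second bullet comes down to the product of price and consumption. The round numbers below are illustrative assumptions, not the report's data, chosen only to show how a higher rate can still yield a lower annual bill:

```python
# Annual bill = rate (USD/kWh) x consumption (kWh/year).
# All four numbers are illustrative assumptions, not figures from the report.
germany_bill = 0.30 * 3200   # high rate, but energy-efficient homes
texas_bill = 0.11 * 14000    # low rate, but much higher consumption
```

With these assumed figures the German household pays less overall despite a rate nearly three times higher, which is the mechanism the report describes.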

The study may inform the energy debate in the United States, where expanding the nation’s renewable energy infrastructure is a top priority of the Obama administration and the subject of debate among presidential candidates.

The current share of renewables in U.S. electricity generation is 14 percent – half that of Germany. Germany’s ambitious – and controversial – Energiewende (Energy Transition) initiative commits the country to meeting 80 percent of its electricity needs with renewables by 2050. In the United States, 29 states, including California and Texas, have set mandatory targets for renewable energy.

In California, Gov. Jerry Brown recently signed legislation committing the state to producing 50 percent of its electricity from renewables by 2030. Texas, the leading U.S. state for wind development, set a mandate of 10,000 megawatts of renewable energy capacity by 2025, but reached this target 15 years ahead of schedule and now generates over 10 percent of the state’s electricity from wind alone.

Source: Stanford News