Tag Archives: news


Latest news from around the world!


New device could provide electrical power source from walking and other ambient motions: MIT Research

Harnessing the energy of small bending motions
New device could provide electrical power source from walking and other ambient motions.

By David Chandler


 

CAMBRIDGE, Mass.–For many applications such as biomedical, mechanical, or environmental monitoring devices, harnessing the energy of small motions could provide a small but virtually unlimited power supply. While a number of approaches have been attempted, researchers at MIT have now developed a completely new method based on electrochemical principles, which could be capable of harvesting energy from a broader range of natural motions and activities, including walking.

The new system, based on the slight bending of a sandwich of metal and polymer sheets, is described in the journal Nature Communications, in a paper by MIT professor Ju Li, graduate students Sangtae Kim and Soon Ju Choi, and four others.

Most previously designed devices for harnessing small motions have been based on the triboelectric effect (essentially friction, like rubbing a balloon against a wool sweater) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These work well for high-frequency sources of motion such as those produced by the vibrations of machinery. But for typical human-scale motions such as walking or exercising, such systems have limits.

“When you put in an impulse” to such traditional materials, “they respond very well, in microseconds. But this doesn’t match the timescale of most human activities,” says Li, who is the Battelle Energy Alliance Professor in Nuclear Science and Engineering and professor of materials science and engineering. “Also, these devices have high electrical impedance and bending rigidity and can be quite expensive,” he says.

Simple and flexible

By contrast, the new system uses technology similar to that in lithium ion batteries, so it could likely be produced inexpensively at large scale, Li says. In addition, these devices would be inherently flexible, making them more compatible with wearable technology and less likely to break under mechanical stress.

While piezoelectric materials are based on a purely physical process, the new system is electrochemical, like a battery or a fuel cell. It uses two thin sheets of lithium alloys as electrodes, separated by a layer of porous polymer soaked with liquid electrolyte that is efficient at transporting lithium ions between the metal plates. But unlike a rechargeable battery, which takes in electricity, stores it, and then releases it, this system takes in mechanical energy and puts out electricity.

When bent even a slight amount, the layered composite produces a pressure difference that squeezes lithium ions through the polymer (like the reverse osmosis process used in water desalination). It also produces a counteracting voltage and an electrical current in the external circuit between the two electrodes, which can then be used directly to power other devices.

Because it requires only a small amount of bending to produce a voltage, such a device could simply have a tiny weight attached to one end to cause the metal to bend as a result of ordinary movements, when strapped to an arm or leg during everyday activities. Unlike batteries and solar cells, the output from the new system comes in the form of alternating current (AC), with the flow moving first in one direction and then the other as the material bends first one way and then back.
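The alternating character of the output can be sketched with a toy model (our own illustration, not the authors' model): if the output voltage tracks the bending rate, sinusoidal bending yields a voltage that swings between positive and negative once per cycle.

```python
import math

# Toy model of the device's AC output (illustrative only, not from the paper):
# assume the output voltage is proportional to the bending rate, so a
# sinusoidal bending angle sin(2*pi*f*t) gives a cosine-shaped voltage.

def output_voltage(t, freq_hz=1.0, v_peak=0.1):
    """Illustrative output (volts) for sinusoidal bending at freq_hz."""
    omega = 2.0 * math.pi * freq_hz
    return v_peak * math.cos(omega * t)  # d/dt of sin is cos

# Sample one full bending cycle: the polarity flips as the strip bends one
# way and then back, i.e. the output is alternating current.
samples = [output_voltage(t / 100.0) for t in range(100)]
assert max(samples) > 0 and min(samples) < 0  # both polarities appear
```

In the real device the bending waveform from walking would be irregular rather than sinusoidal, but the polarity reversal with bending direction is the point.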

This device converts mechanical to electrical energy; therefore, “it is not limited by the second law of thermodynamics,” Li says, which sets an upper limit on the theoretically possible efficiency. “So in principle, [the efficiency] could be 100 percent,” he says. In this first-generation device developed to demonstrate the electrochemomechanical working principle, he says, “the best we can hope for is about 15 percent” efficiency. But the system could easily be manufactured in any desired size and is amenable to industrial manufacturing processes.

Test of time

The test devices maintain their properties through many cycles of bending and unbending, Li reports, with little reduction in performance after 1,500 cycles. “It’s a very stable system,” he says.

Previously, the phenomenon underlying the new device “was considered a parasitic effect in the battery community,” according to Li, and voltage put into the battery could sometimes induce bending. “We do just the opposite,” Li says, putting in the stress and getting a voltage as output. Besides being a potential energy source, he says, this could also be a complementary diagnostic tool in electrochemistry. “It’s a good way to evaluate damage mechanisms in batteries, a way to understand battery materials better,” he says.

In addition to harnessing daily motion to power wearable devices, the new system might also be useful as an actuator with biomedical applications, or used for embedded stress sensors in settings such as roads, bridges, keyboards, or other structures, the researchers suggest.

The team also included postdoc Kejie Zhao (now an assistant professor at Purdue University), visiting graduate student Giorgia Gobbi, and Hui Yang and Sulin Zhang at Penn State. The work was supported by the National Science Foundation, the MIT MADMEC Contest, the Samsung Scholarship Foundation, and the Kwanjeong Educational Foundation.

Source: MIT News Office

Science, politics, news agenda and our priorities

By Syed Faisal ur Rahman


 

The recent postponement by the government of Pakistan, citing security reasons, of the first Organization of Islamic Countries (OIC) summit on Science and Technology and the 15th COMSTECH general assembly meeting tells a lot about our national priorities.

The summit was a first-of-its-kind meeting of heads of state and dignitaries from the Muslim world on the issue of science and technology.

Today most Muslim countries are seen in other parts of the world as backward, narrow-minded and violent regions. Recent wars in the Middle East, sectarian rifts and totalitarian regimes do not present a great picture either. While the rest of the world is sending probes toward the edge of our solar system, sending missions to Mars and exploring the moons of Saturn, we are busy, and failing, at sighting the moon on the right dates of the Islamic calendar.

Any average person can figure out that we need something drastic to change this situation. This summit was exactly the kind of step we needed for a jump start. Serious efforts were made by the COMSTECH staff under the leadership of Dr. Shaukat Hameed Khan, and even the secretary general of the OIC was pushing hard for the summit. According to reports, the OIC secretary general personally visited more than a dozen OIC member countries to convince their heads of state to attend.

This summit would also have provided an opportunity to bring harmony and peace to the Muslim world, as many Muslim countries are at odds with each other on regional issues in Syria, Iraq, Yemen and Afghanistan.

Last century saw enormous developments in the fields of fundamental science, which also helped countries rapidly develop their potential in industry, medical sciences, defense, space and many other sectors. Countries which made science and technology research and education priority areas emerged as stronger nations compared to those which merely relied on agriculture and the abundance of natural resources. We are now living in an era where humanity is reaching the edge of our solar system through probes like Voyager 1, sent decades ago by NASA with messages from our civilization. Quantum computing is well on its way to becoming a reality. Humanity is endeavoring to colonize other planets through multi-national projects. We are also looking deeper into space for new stars and galaxies, and even at some of the earliest times after the creation of our universe, through cosmic microwave background probes like Planck.

Unfortunately, in Pakistan, anti-science and anti-research attitudes are getting stronger. These attitudes are not limited to the religious zealots; the so-called liberals of Pakistan simply do not pay much heed to what is going on in the world of science.

If you regularly follow the political arena and daily news coverage in the media, and keep your ears open to what is going on in the country, then you can easily get an idea of our priorities as a nation. How many talk shows did we see on the mainstream media about the cancellation of the summit? How many questions were raised in the parliament?

The absence, or barely noticeable presence, of such issues is conspicuous. Apart from one senator, Senator Sehar Kamran, who wrote a piece in a newspaper, no politician even bothered to raise the relevant questions.

Forget about the mainstream media or politicians. On social media and in drawing room discussions, did you hear anyone debating the issue? Yet we make a fuss about what dress some model wore to her court hearing in a money laundering case, which politician’s marriage is supposedly in trouble, or whose hand Junaid Jamshed was holding in a group photo.

We boast about our success in reducing terrorism through military operations and use that success to attract investors, sports teams and tourists, but on the other hand we use security concerns as an excuse to cancel an important summit on the development of science and technology. This shows that we are either confused, or hypocrites, or simply not ready for any kind of intellectual growth.

There is a need to do some serious brainstorming and soul searching about our priorities. One thing I have learned as a student of astronomy is that we are insignificant compared to the vastness of our universe. The only thing that can make us somewhat special, compared to other species on Earth or a lifeless rock on Pluto, is that we can use our thinking ability to learn, to explore and to discover. Unfortunately, in our country we are losing this special capacity day by day.

In the researchers' new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.

Illustration: Jose-Luis Olivares/MIT

Quantum sensor’s advantages survive entanglement breakdown

Preserving the fragile quantum property known as entanglement isn’t necessary to reap benefits.

By Larry Hardesty 


CAMBRIDGE, Mass. – The extraordinary promise of quantum information processing — solving problems that classical computers can’t, perfectly secure communication — depends on a phenomenon called “entanglement,” in which the physical states of different quantum particles become interrelated. But entanglement is very fragile, and the difficulty of preserving it is a major obstacle to developing practical quantum information systems.

In a series of papers since 2008, members of the Optical and Quantum Communications Group at MIT’s Research Laboratory of Electronics have argued that optical systems that use entangled light can outperform classical optical systems — even when the entanglement breaks down.

Two years ago, they showed that systems that begin with entangled light could offer much more efficient means of securing optical communications. And now, in a paper appearing in Physical Review Letters, they demonstrate that entanglement can also improve the performance of optical sensors, even when it doesn’t survive light’s interaction with the environment.


“That is something that has been missing in the understanding that a lot of people have in this field,” says senior research scientist Franco Wong, one of the paper’s co-authors and, together with Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, co-director of the Optical and Quantum Communications Group. “They feel that if unavoidable loss and noise make the light being measured look completely classical, then there’s no benefit to starting out with something quantum. Because how can it help? And what this experiment shows is that yes, it can still help.”

Phased in

Entanglement means that the physical state of one particle constrains the possible states of another. Electrons, for instance, have a property called spin, which describes their magnetic orientation. If two electrons are orbiting an atom’s nucleus at the same distance, they must have opposite spins. This spin entanglement can persist even if the electrons leave the atom’s orbit, but interactions with the environment break it down quickly.

In the MIT researchers’ system, two beams of light are entangled, and one of them is stored locally — racing through an optical fiber — while the other is projected into the environment. When light from the projected beam — the “probe” — is reflected back, it carries information about the objects it has encountered. But this light is also corrupted by the environmental influences that engineers call “noise.” Recombining it with the locally stored beam helps suppress the noise, recovering the information.

The local beam is useful for noise suppression because its phase is correlated with that of the probe. If you think of light as a wave, with regular crests and troughs, two beams are in phase if their crests and troughs coincide. If the crests of one are aligned with the troughs of the other, their phases are anti-correlated.

But light can also be thought of as consisting of particles, or photons. And at the particle level, phase is a murkier concept.

“Classically, you can prepare beams that are completely opposite in phase, but this is only a valid concept on average,” says Zheshen Zhang, a postdoc in the Optical and Quantum Communications Group and first author on the new paper. “On average, they’re opposite in phase, but quantum mechanics does not allow you to precisely measure the phase of each individual photon.”

Improving the odds

Instead, quantum mechanics interprets phase statistically. Given particular measurements of two photons, from two separate beams of light, there’s some probability that the phases of the beams are correlated. The more photons you measure, the greater your certainty that the beams are either correlated or not. With entangled beams, that certainty increases much more rapidly than it does with classical beams.

When a probe beam interacts with the environment, the noise it accumulates also increases the uncertainty of the ensuing phase measurements. But that’s as true of classical beams as it is of entangled beams. Because entangled beams start out with stronger correlations, even when noise causes them to fall back within classical limits, they still fare better than classical beams do under the same circumstances.

“Going out to the target and reflecting and then coming back from the target attenuates the correlation between the probe and the reference beam by the same factor, regardless of whether you started out at the quantum limit or started out at the classical limit,” Shapiro says. “If you started with the quantum case that’s so many times bigger than the classical case, that relative advantage stays the same, even as both beams become classical due to the loss and the noise.”

In experiments that compared optical systems that used entangled light and classical light, the researchers found that the entangled-light systems increased the signal-to-noise ratio — a measure of how much information can be recaptured from the reflected probe — by 20 percent. That accorded very well with their theoretical predictions.

But the theory also predicts that improvements in the quality of the optical equipment used in the experiment could double or perhaps even quadruple the signal-to-noise ratio. Since detection error declines exponentially with the signal-to-noise ratio, that could translate to a million-fold increase in sensitivity.
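The leverage of an exponential error law can be seen with a quick back-of-the-envelope sketch (the simple exponential model and the baseline SNR of 4.6 below are our assumptions for illustration, not figures from the paper):

```python
import math

# If detection error falls off exponentially with signal-to-noise ratio R,
#   error(R) ~ exp(-R),
# then scaling R by a factor g shrinks the error by exp((g - 1) * R):
# a modest multiplicative SNR gain becomes a huge error reduction.

def error_ratio(r_base, gain):
    """Factor by which the error shrinks when SNR goes from r_base to gain * r_base."""
    return math.exp(-r_base) / math.exp(-gain * r_base)

r = 4.6  # assumed baseline SNR, chosen so the baseline error is about 1 percent
print(error_ratio(r, 2.0))  # doubling the SNR: roughly a hundredfold fewer errors
print(error_ratio(r, 4.0))  # quadrupling: roughly a millionfold, as quoted above
```

The exact numbers depend on the assumed baseline, but the qualitative point holds: because the error sits in an exponent, a factor-of-a-few improvement in SNR can translate into orders of magnitude in sensitivity.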

Source: MIT News Office

The rise and fall of cognitive skills: Neuroscientists find that different parts of the brain work best at different ages.

By Anne Trafton


CAMBRIDGE, Mass. – Scientists have long known that our ability to think quickly and recall information, also known as fluid intelligence, peaks around age 20 and then begins a slow decline. However, more recent findings, including a new study from neuroscientists at MIT and Massachusetts General Hospital (MGH), suggest that the real picture is much more complex.

The study, which appears in the XX issue of the journal Psychological Science, finds that different components of fluid intelligence peak at different ages, some as late as age 40.

“At any given age, you’re getting better at some things, you’re getting worse at some other things, and you’re at a plateau at some other things. There’s probably not one age at which you’re peak on most things, much less all of them,” says Joshua Hartshorne, a postdoc in MIT’s Department of Brain and Cognitive Sciences and one of the paper’s authors.

“It paints a different picture of the way we change over the lifespan than psychology and neuroscience have traditionally painted,” adds Laura Germine, a postdoc in psychiatric and neurodevelopmental genetics at MGH and the paper’s other author.

Measuring peaks

Until now, it has been difficult to study how cognitive skills change over time because of the challenge of getting large numbers of people older than college students and younger than 65 to come to a psychology laboratory to participate in experiments. Hartshorne and Germine were able to take a broader look at aging and cognition because they have been running large-scale experiments on the Internet, where people of any age can become research subjects.

Their websites, gameswithwords.org and testmybrain.org, feature cognitive tests designed to be completed in just a few minutes. Through these sites, the researchers have accumulated data from nearly 3 million people in the past several years.

In 2011, Germine published a study showing that the ability to recognize faces improves until the early 30s before gradually starting to decline. This finding did not fit into the theory that fluid intelligence peaks in late adolescence. Around the same time, Hartshorne found that subjects’ performance on a visual short-term memory task also peaked in the early 30s.

Intrigued by these results, the researchers, then graduate students at Harvard University, decided that they needed to explore a different source of data, in case some aspect of collecting data on the Internet was skewing the results. They dug out sets of data, collected decades ago, on adult performance at different ages on the Wechsler Adult Intelligence Scale, which is used to measure IQ, and the Wechsler Memory Scale. Together, these tests measure about 30 different subsets of intelligence, such as digit memorization, visual search, and assembling puzzles.

Hartshorne and Germine developed a new way to analyze the data that allowed them to compare the age peaks for each task. “We were mapping when these cognitive abilities were peaking, and we saw there was no single peak for all abilities. The peaks were all over the place,” Hartshorne says. “This was the smoking gun.”

However, the dataset was not as large as the researchers would have liked, so they decided to test several of the same cognitive skills with their larger pools of Internet study participants. For the Internet study, the researchers chose four tasks that peaked at different ages, based on the data from the Wechsler tests. They also included a test of the ability to perceive others’ emotional state, which is not measured by the Wechsler tests.

The researchers gathered data from nearly 50,000 subjects and found a very clear picture showing that each cognitive skill they were testing peaked at a different age. For example, raw speed in processing information appears to peak around age 18 or 19, then immediately starts to decline. Meanwhile, short-term memory continues to improve until around age 25, when it levels off and then begins to drop around age 35.
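The core of the analysis, locating the age at which each skill peaks, can be sketched as follows (the data and the smoothing-and-argmax method here are our invented illustration, much simpler than the study's actual statistics):

```python
# Hypothetical sketch: given mean scores by age for two skills, estimate
# each skill's peak age by smoothing with a moving average and taking the
# argmax -- the simplest version of "mapping when abilities peak".

def peak_age(scores_by_age, window=3):
    """Return the age whose moving-average score is highest."""
    ages = sorted(scores_by_age)
    half = window // 2
    best_age, best_val = None, float("-inf")
    for i, age in enumerate(ages):
        lo, hi = max(0, i - half), min(len(ages), i + half + 1)
        avg = sum(scores_by_age[ages[j]] for j in range(lo, hi)) / (hi - lo)
        if avg > best_val:
            best_age, best_val = age, avg
    return best_age

# Invented cross-sectional data: processing speed peaks early, vocabulary late.
speed = {18: 95, 25: 92, 35: 88, 45: 84, 55: 80, 65: 75}
vocab = {18: 60, 25: 70, 35: 78, 45: 84, 55: 88, 65: 90}
print(peak_age(speed), peak_age(vocab))  # different skills, different peak ages
```

With richer data one would fit a smooth curve to each skill and put confidence intervals on the peak, but the qualitative pattern the study reports, different skills peaking at different ages, already shows up in this toy version.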

For the ability to evaluate other people’s emotional states, the peak occurred much later, in the 40s or 50s.

More work will be needed to reveal why each of these skills peaks at different times, the researchers say. However, previous studies have hinted that genetic changes or changes in brain structure may play a role.

“If you go into the data on gene expression or brain structure at different ages, you see these lifespan patterns that we don’t know what to make of. The brain seems to continue to change in dynamic ways through early adulthood and middle age,” Germine says. “The question is: What does it mean? How does it map onto the way you function in the world, or the way you think, or the way you change as you age?”

Accumulated intelligence

The researchers also included a vocabulary test, which serves as a measure of what is known as crystallized intelligence — the accumulation of facts and knowledge. These results confirmed that crystallized intelligence peaks later in life, as previously believed, but the researchers also found something unexpected: While data from the Wechsler IQ tests suggested that vocabulary peaks in the late 40s, the new data showed a later peak, in the late 60s or early 70s.

The researchers believe this may be a result of better education, more people having jobs that require a lot of reading, and more opportunities for intellectual stimulation for older people.

Hartshorne and Germine are now gathering more data from their websites and have added new cognitive tasks designed to evaluate social and emotional intelligence, language skills, and executive function. They are also working on making their data public so that other researchers can access it and perform other types of studies and analyses.

“We took the existing theories that were out there and showed that they’re all wrong. The question now is: What is the right one? To get to that answer, we’re going to need to run a lot more studies and collect a lot more data,” Hartshorne says.

The research was funded by the National Institutes of Health, the National Science Foundation, and a National Defense Science and Engineering Graduate Fellowship.

Source: MIT News Office

For the first time, spacecraft catch a solar shockwave in the act

Solar storm found to produce “ultrarelativistic, killer electrons” in 60 seconds.

By Jennifer Chu


CAMBRIDGE, Mass. – On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.

NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.

Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.

“These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”

The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

Catching a shockwave in the act

Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.
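Those two figures, 1,000 kilometers per second and a five-minute circuit, are mutually consistent, as a quick check shows (the arithmetic below is ours, not from the article):

```python
import math

# Recover the drift-orbit radius from the quoted speed and period:
# distance per orbit = v * T = 2 * pi * r.
v_km_s = 1000.0      # electron drift speed quoted above
T_s = 5 * 60         # five-minute circuit, in seconds
radius_km = v_km_s * T_s / (2.0 * math.pi)
earth_radius_km = 6371.0

print(radius_km / earth_radius_km)  # roughly 7.5 Earth radii: the outer reaches of the belts
```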

The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.

Dealing a “sledgehammer blow”

Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.

Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

“This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”

Source: MIT News

Timeline of the approach and departure phases — surrounding close approach on July 14, 2015 — of the New Horizons Pluto encounter.
Image Credit: NASA/JHU APL/SwRI

NASA’s New Horizons Spacecraft Begins First Stages of Pluto Encounter

NASA’s New Horizons spacecraft recently began its long-awaited, historic encounter with Pluto. The spacecraft is entering the first of several approach phases that culminate July 14 with the first close-up flyby of the dwarf planet, 4.67 billion miles (7.5 billion kilometers) from Earth.

“NASA’s first mission to distant Pluto will also be humankind’s first close-up view of this cold, unexplored world in our solar system,” said Jim Green, director of NASA’s Planetary Science Division at the agency’s Headquarters in Washington. “The New Horizons team worked very hard to prepare for this first phase, and they did it flawlessly.”

The fastest spacecraft when it was launched, New Horizons lifted off in January 2006. It awoke from its final hibernation period last month after a voyage of more than 3 billion miles, and will soon pass close to Pluto, inside the orbits of its five known moons. In preparation for the close encounter, the mission’s science, engineering and spacecraft operations teams configured the piano-sized probe for distant observations of the Pluto system that start Sunday, Jan. 25 with a long-range photo shoot.

 

 


The images captured by New Horizons’ telescopic Long-Range Reconnaissance Imager (LORRI) will give mission scientists a continually improving look at the dynamics of Pluto’s moons. The images also will play a critical role in navigating the spacecraft as it covers the remaining 135 million miles (220 million kilometers) to Pluto.

“We’ve completed the longest journey any spacecraft has flown from Earth to reach its primary target, and we are ready to begin exploring,” said Alan Stern, New Horizons principal investigator from Southwest Research Institute in Boulder, Colorado.

LORRI will take hundreds of pictures of Pluto over the next few months to refine current estimates of the distance between the spacecraft and the dwarf planet. Though the Pluto system will resemble little more than bright dots in the camera’s view until May, mission navigators will use the data to design course-correction maneuvers to aim the spacecraft toward its target point this summer. The first such maneuver could occur as early as March.

“We need to refine our knowledge of where Pluto will be when New Horizons flies past it,” said Mark Holdridge, New Horizons encounter mission manager at Johns Hopkins University’s Applied Physics Laboratory (APL) in Laurel, Maryland. “The flyby timing also has to be exact, because the computer commands that will orient the spacecraft and point the science instruments are based on precisely knowing the time we pass Pluto – which these images will help us determine.”

The “optical navigation” campaign that begins this month marks the first time pictures from New Horizons will be used to help pinpoint Pluto’s location.

Throughout the first approach phase, which runs until spring, New Horizons will conduct a significant amount of additional science. Spacecraft instruments will gather continuous data on the interplanetary environment where the planetary system orbits, including measurements of the high-energy particles streaming from the sun and dust-particle concentrations in the inner reaches of the Kuiper Belt. In addition to Pluto, this area, the unexplored outer region of the solar system, potentially includes thousands of similar icy, rocky small planets.

More intensive studies of Pluto begin in the spring, when the cameras and spectrometers aboard New Horizons will be able to provide image resolutions higher than the most powerful telescopes on Earth. Eventually, the spacecraft will obtain images good enough to map Pluto and its moons more accurately than achieved by previous planetary reconnaissance missions.

APL manages the New Horizons mission for NASA’s Science Mission Directorate in Washington. Alan Stern, of the Southwest Research Institute (SwRI), headquartered in San Antonio, is the principal investigator and leads the mission. SwRI leads the science team, payload operations, and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama. APL designed, built and operates the spacecraft.

For more information about the New Horizons mission, visit:

www.nasa.gov/newhorizons

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.

Credit: F. Najafi/ MIT

Toward quantum chips

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits.

By Larry Hardesty


CAMBRIDGE, Mass. – A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.


In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
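The superposition-and-entanglement picture above can be made concrete with plain state vectors. The following is a minimal, illustrative sketch in Python with NumPy (not part of the research described here): a single qubit is a 2-component complex vector, and an entangled pair is a 4-component vector whose measurement outcomes are perfectly correlated even though neither qubit has a definite value on its own.

```python
import numpy as np

# Single-qubit basis states |0> and |1> as 2-component state vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A superposition: the qubit represents "zero and one simultaneously"
# until measured. The Born rule gives the measurement probabilities.
plus = (ket0 + ket1) / np.sqrt(2)
print(np.abs(plus) ** 2)   # [0.5, 0.5]: equal chance of 0 or 1

# Two entangled qubits (a Bell state): the joint state does not factor
# into independent single-qubit states, and a measurement yields
# either 00 or 11, never 01 or 10.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)   # [0.5, 0.0, 0.0, 0.5]
```

Each added qubit doubles the length of the state vector, which is one way to see why a modest number of entangled qubits can, in some sense, carry out many computations in parallel.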

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

Relocation

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

Source: MIT News Office

The dark nebula LDN 483.
Credit: ESO

Where Did All the Stars Go?

Dark cloud obscures hundreds of background stars


Some of the stars appear to be missing in this intriguing new ESO image. But the black gap in this glitteringly beautiful starfield is not really a gap, but rather a region of space clogged with gas and dust. This dark cloud is called LDN 483 — for Lynds Dark Nebula 483. Such clouds are the birthplaces of future stars. The Wide Field Imager, an instrument mounted on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile, captured this image of LDN 483 and its surroundings.

The Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile snapped this image of the dark nebula LDN 483. The object is a region of space clogged with gas and dust. These materials are dense enough to effectively eclipse the light of background stars. LDN 483 is located about 700 light-years away in the constellation of Serpens (The Serpent). Credit: ESO

LDN 483 [1] is located about 700 light-years away in the constellation of Serpens (The Serpent). The cloud contains enough dusty material to completely block the visible light from background stars. Particularly dense molecular clouds, like LDN 483, qualify as dark nebulae because of this obscuring property. The starless nature of LDN 483 and its ilk would suggest that they are sites where stars cannot take root and grow. But in fact the opposite is true: dark nebulae offer the most fertile environments for eventual star formation.

Astronomers studying star formation in LDN 483 have discovered some of the youngest observable kinds of baby stars buried in LDN 483’s shrouded interior. These gestating stars can be thought of as still being in the womb, having not yet been born as complete, albeit immature, stars.

In this first stage of stellar development, the star-to-be is just a ball of gas and dust contracting under the force of gravity within the surrounding molecular cloud. The protostar is still quite cool — about –250 degrees Celsius — and shines only in long-wavelength submillimetre light [2]. Yet temperature and pressure are beginning to increase in the fledgling star’s core.
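The link between the quoted temperature and the long-wavelength light can be checked with Wien's displacement law, which gives the wavelength at which a blackbody of a given temperature emits most strongly. A minimal check in Python (illustrative only; real protostellar dust emission is more complex than a pure blackbody):

```python
# Wien's displacement law: lambda_peak = b / T, with b ≈ 2.898e-3 m·K.
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_m(temp_celsius):
    """Peak blackbody emission wavelength in metres for a given temperature."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin

# A protostar at about -250 degrees Celsius (~23 K) peaks near
# 125 micrometres, in the far-infrared, with substantial emission
# extending into the submillimetre band.
print(peak_wavelength_m(-250) * 1e6)  # ~125 (micrometres)
```

As the protostar warms over the following stages, the peak shifts to shorter wavelengths, matching the progression described below from far-infrared to near-infrared and finally visible light.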

This earliest period of star growth lasts a mere few thousand years, an astonishingly short time in astronomical terms, given that stars typically live for millions or billions of years. In the following stages, over the course of several million years, the protostar will grow warmer and denser. Its emission will increase in energy along the way, graduating from mainly cold, far-infrared light to near-infrared and finally to visible light. The once-dim protostar will have then become a fully luminous star.

As more and more stars emerge from the inky depths of LDN 483, the dark nebula will disperse further and lose its opacity. The missing background stars that are currently hidden will then come into view, but only after the passage of millions of years, and they will be outshone by the bright newborn stars in the cloud [3].

Notes
[1] The Lynds Dark Nebula catalogue was compiled by the American astronomer Beverly Turner Lynds, and published in 1962. These dark nebulae were found from visual inspection of the Palomar Sky Survey photographic plates.

[2] The Atacama Large Millimeter/submillimeter Array (ALMA), operated in part by ESO, observes in submillimetre and millimetre light and is ideal for the study of such very young stars in molecular clouds.

[3] Such a young open star cluster can be seen here, and a more mature one here.
Source: ESO

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged in a structure similar to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.

Credit: CCNY

Study Unveils New Half-Light Half-Matter Quantum Particles

Prospects of developing computing and communication technologies based on quantum properties of light and matter may have taken a major step forward thanks to research by City College of New York physicists led by Dr. Vinod Menon.

In a pioneering study, Professor Menon and his team were able to discover half-light, half-matter particles in atomically thin semiconductors (thickness ~ a millionth of a single sheet of paper) consisting of a two-dimensional (2D) layer of molybdenum and sulfur atoms arranged in a structure similar to graphene. They sandwiched this 2D material in a light-trapping structure to realize these composite quantum particles.

“Besides being a fundamental breakthrough, this opens up the possibility of making devices which take the benefits of both light and matter,” said Professor Menon.  

For example, one can envision logic gates and signal processors that take on the best of both light and matter. The discovery is also expected to contribute to the development of practical platforms for quantum computing.

Dr. Dirk Englund, a professor at MIT whose research focuses on quantum technologies based on semiconductor and optical systems, hailed the City College study.

“What is so remarkable and exciting in the work by Vinod and his team is how readily this strong coupling regime could actually be achieved. They have shown convincingly that by coupling a rather standard dielectric cavity to exciton–polaritons in a monolayer of molybdenum disulphide, they could actually reach this strong coupling regime with a very large binding strength,” he said. 

Professor Menon’s research team included City College PhD students Xiaoze Liu, Tal Galfsky and Zheng Sun, and scientists from Yale University, National Tsing Hua University (Taiwan) and École Polytechnique de Montréal (Canada).

The study appears in the January issue of the journal “Nature Photonics.” It was funded by the U.S. Army Research Laboratory’s Army Research Office and the National Science Foundation through the Materials Research Science and Engineering Center – Center for Photonic and Multiscale Nanomaterials. 

Source: The City College of New York