Tag Archives: detection

Ionospheric effects not detrimental to EoR detection from ground : CAASTRO Research

The Epoch of Reionisation (EoR) is the time in the early Universe when the first stars and galaxies formed and re-ionised the neutral hydrogen. Indirect information about the EoR has been obtained from the Cosmic Microwave Background and spectra of the distant quasars. However, the bulk of information about the physical parameters of the EoR is encoded in the 21cm line (1420 MHz) from neutral hydrogen redshifted into the low radio frequency range 200 – 50 MHz, for redshifts of 6 < z < 30.
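For readers who want to check the quoted band, the observed frequency of the redshifted line is simply the 1420 MHz rest frequency divided by (1 + z). The short Python sketch below is only an illustration, not part of the research described here, and reproduces the roughly 200 – 50 MHz range:

```python
# Illustration only: observed frequency of the redshifted 21 cm line.
REST_FREQ_MHZ = 1420.4  # rest frequency of the neutral-hydrogen 21 cm line

def observed_frequency_mhz(z: float) -> float:
    """Frequency at which the 21 cm line is seen for emission at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

for z in (6, 30):
    print(f"z = {z:2d}  ->  {observed_frequency_mhz(z):6.1f} MHz")
# z = 6 gives ~203 MHz and z = 30 gives ~46 MHz, i.e. roughly the 200-50 MHz band.
```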

The observational approaches range from large interferometer arrays to single antenna experiments. The latter, so-called global EoR experiments, spatially average the signal from the entire visible sky and try to identify the tiny signature of the EoR (of order 100 milliKelvin, which is a few orders of magnitude smaller than the Galactic foregrounds) in the sky-averaged spectrum. This extremely challenging precision requires very long observations (hundreds of hours) to achieve a sufficiently high signal-to-noise ratio. Moreover, ground-based global EoR experiments are affected by frequency-dependent effects (i.e. absorption and refraction) due to the propagation of radio-waves in the Earth’s ionosphere. The amplitude of these effects changes in time. There has therefore been an ongoing discussion in the literature on the importance of ionospheric effects and whether the global EoR signature can feasibly be detected from the ground.

The team of CAASTRO researchers at Curtin University, led by Dr Marcin Sokolowski, used three months’ worth of 2014/2015 data collected with the BIGHORNS system with a conical log-spiral antenna deployed at the Murchison Radio-astronomy Observatory to study the impact of the ionosphere on its capability to detect the global EoR signal. Comparison of data collected on different days at the same sidereal time enabled the researchers to infer some properties of the ionosphere, such as electron temperature (Te≈470 K at night-time) and the amplitude and variability of ionospheric absorption of radio waves. Furthermore, the data sample shows that the sky-averaged spectrum indeed varies in time due to fluctuations of these ionospheric properties. Nevertheless, the data analysis indicates that averaging over very long observations (several days or even several weeks) suppresses the noise and leads to an improved signal-to-noise ratio. Therefore, the ionospheric effects and fluctuations are not fundamental impediments that prevent ground-based instruments, such as BIGHORNS, from integrating down to the precision required for global EoR experiments, provided that the ionospheric contribution is properly accounted for in the data analysis.
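The integration argument is essentially the radiometer equation: the thermal noise of an averaged spectrum falls as one over the square root of bandwidth times integration time. The sketch below illustrates that scaling only; the system temperature, channel width, and integration times are assumed example values, not BIGHORNS specifications:

```python
import math

def radiometer_noise_mk(t_sys_k: float, bandwidth_hz: float, integration_s: float) -> float:
    """Thermal noise (in milliKelvin) of an averaged spectrum: T_sys / sqrt(B * tau)."""
    return 1e3 * t_sys_k / math.sqrt(bandwidth_hz * integration_s)

# Assumed example values, not BIGHORNS specifications:
T_SYS_K = 4000.0    # sky-dominated system temperature at low radio frequencies
CHANNEL_HZ = 100e3  # width of one spectral channel

for hours in (1, 100, 1000):
    sigma = radiometer_noise_mk(T_SYS_K, CHANNEL_HZ, hours * 3600)
    print(f"{hours:5d} h of integration -> ~{sigma:5.1f} mK noise per channel")
# Noise falls as 1/sqrt(time), so hundreds of hours bring it below the ~100 mK
# level of the expected global EoR signature.
```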

 

Publication details:

The impact of the ionosphere on ground-based detection of the global Epoch of Reionisation signal

Source: CAASTRO

Toward quantum chips

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits.

By Larry Hardesty


CAMBRIDGE, Mass. – A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.

Illustration of superconducting detectors on arrayed waveguides on a photonic integrated circuit for detection of single photons.
Credit: F. Najafi/ MIT

In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
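As a loose numerical illustration of superposition and entanglement (not tied to the photonic hardware discussed in this article), a qubit can be written as a two-component state vector, and an entangled pair such as a Bell state cannot be factored into two independent single-qubit states:

```python
import numpy as np

# A single qubit in an equal superposition of |0> and |1>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Two independent qubits: the joint state is a tensor (Kronecker) product.
product_state = np.kron(plus, plus)

# A Bell state (|00> + |11>) / sqrt(2): the textbook example of entanglement.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)

# Measurement probabilities are squared amplitudes.
print("independent qubits:", np.round(product_state**2, 3))  # 0.25 for each of 00, 01, 10, 11
print("Bell state:        ", np.round(bell**2, 3))            # 0.5 for 00 and 11, zero otherwise
```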

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

Relocation

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

Source: MIT News Office

Best Quantum Receiver

RECORD HIGH DATA ACCURACY RATES FOR PHASE-MODULATED TRANSMISSION

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

 Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall, Elohim Becerra, and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this not by passive detection of incoming light pulses; instead, the light is split up and measured numerous times.

By passing it through a special crystal, a light wave’s phase (its position along the wave’s cycle) can be delayed. A delay of a certain amount can denote a piece of data. In this experiment light pulses can be delayed by a zero amount, or by ¼ of a cycle, or 2/4, or ¾ of a cycle.
Credit: JQI

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals we can lessen the amount of power needed to send signals reliably,” says Migdall.  “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations.  Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber.  The information can be coded in several ways.  Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude.  Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave.  On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

 Because the carrier wave is coherent—for laser light this means a predictable set of crests and troughs along the wave—a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, the waves in Migdall’s experiment are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, two-fourths of a cycle, and three-fourths of a cycle.  The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2).  The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment.  The angle around the graph corresponds to the signal’s phase delay.
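Concretely, the four states form what communications engineers would call a quadrature phase-shift keying constellation. A minimal sketch (illustrative only; the mean photon number is an assumed value) represents each state as a complex amplitude whose squared magnitude is the mean photon number and whose angle is the phase delay:

```python
import numpy as np

MEAN_PHOTONS = 4.0              # assumed example intensity
radius = np.sqrt(MEAN_PHOTONS)  # amplitude scales as the square root of photon number

# Phase delays of 0, 1/4, 2/4 and 3/4 of a cycle.
phases = np.array([0.0, 0.25, 0.5, 0.75]) * 2 * np.pi
constellation = radius * np.exp(1j * phases)

for state, amp in enumerate(constellation, start=1):
    print(f"state {state}: I = {amp.real:+.2f}, Q = {amp.imag:+.2f}, "
          f"phase = {np.degrees(np.angle(amp)):6.1f} deg")
```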

The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts the uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud, more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse), goes up.  This principle is illustrated by the figure to the right, where the clouds are farther apart than in the left panel, meaning there is less chance of mistaken readings.  More intense beams require more power, but the added intensity reduces the chance of the clouds overlapping.
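One rough way to see why higher photon numbers help is to model each received pulse as its ideal constellation point plus a noise cloud of fixed width and classify it to the nearest state. The Monte Carlo sketch below uses an assumed Gaussian cloud and only illustrates the overlap argument; it is not the quantum-limited error calculation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
PHASES = np.exp(1j * np.array([0.0, 0.25, 0.5, 0.75]) * 2 * np.pi)
NOISE_SIGMA = 0.5   # assumed cloud width, kept the same for every intensity
TRIALS = 200_000

def error_rate(mean_photons: float) -> float:
    """Fraction of pulses classified as the wrong phase state."""
    ideal = np.sqrt(mean_photons) * PHASES
    sent = rng.integers(0, 4, TRIALS)  # which of the four states was transmitted
    noise = NOISE_SIGMA * (rng.normal(size=TRIALS) + 1j * rng.normal(size=TRIALS))
    received = ideal[sent] + noise
    decoded = np.argmin(np.abs(received[:, None] - ideal[None, :]), axis=1)
    return float(np.mean(decoded != sent))

for n in (1, 4, 20):
    print(f"mean photon number {n:2d}: error rate ~ {error_rate(n):.4f}")
# The clouds separate as sqrt(n), so the chance of a mistaken reading drops
# quickly as the pulses become more intense.
```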

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1.  There the waves are combined with a reference pulse.  If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing.  This answers the question “what state was that incoming light pulse in?”  When the detector registers nothing, the phase of the reference light provides that answer… probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires, an eventuality called a “dark count.”  Still another possible glitch: because of imperfections in the optics, even with a correct reference-phase setting the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better.  In other words, the probability of being wrong decreases.
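A highly simplified sketch of this adaptive, “20 questions” style strategy is given below. It illustrates the general idea rather than the actual JQI receiver: the pulse is split into slices, each slice is null-tested against the current best guess, and the guess probabilities are updated from whether or not the detector clicked. The efficiency, dark-count probability, number of stages, and mean photon number are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
PHASES = np.exp(1j * np.array([0.0, 0.25, 0.5, 0.75]) * 2 * np.pi)

# Assumed illustration parameters, not the experimental values:
MEAN_PHOTONS = 4.0  # mean photons in the whole incoming pulse
STAGES = 10         # number of slices the pulse is split into
EFFICIENCY = 0.8    # detector efficiency
DARK_PROB = 1e-4    # chance of a spurious click per slice

def adaptive_receive(true_state: int) -> int:
    """Guess which of the four phase states was sent, one slice at a time."""
    slice_amps = np.sqrt(MEAN_PHOTONS / STAGES) * PHASES  # possible slice amplitudes
    probs = np.full(4, 0.25)                              # prior over the four states
    for _ in range(STAGES):
        guess = int(np.argmax(probs))                     # current best hypothesis
        # Nulling the guess leaves residual intensity |alpha_k - alpha_guess|^2.
        residual = np.abs(slice_amps - slice_amps[guess]) ** 2
        p_click = 1 - (1 - DARK_PROB) * np.exp(-EFFICIENCY * residual)
        clicked = rng.random() < p_click[true_state]      # simulate the detector
        likelihood = p_click if clicked else 1 - p_click
        probs = probs * likelihood
        probs /= probs.sum()                              # Bayesian update
    return int(np.argmax(probs))

trials = 5_000
sent = rng.integers(0, 4, trials)
errors = sum(adaptive_receive(int(s)) != int(s) for s in sent)
print(f"adaptive receiver error rate ~ {errors / trials:.4f}")
```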

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can determine the actual error rates.  Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons.  (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.”  Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum.  The difference here is that single-photon detection and an adaptive measurement strategy are used.  A stable reference pulse is required; in the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

See more at: http://jqi.umd.edu/news/best-quantum-receiver

Source: JQI

NASA X-ray Telescopes Find Black Hole May Be a Neutrino Factory

The giant black hole at the center of the Milky Way may be producing mysterious particles called neutrinos. If confirmed, this would be the first time that scientists have traced neutrinos back to a black hole.

The evidence for this came from three NASA satellites that observe in X-ray light: the Chandra X-ray Observatory, the Swift gamma-ray mission, and the Nuclear Spectroscopic Telescope Array (NuSTAR).

Neutrinos are tiny particles that carry no charge and interact very weakly with electrons and protons. Unlike light or charged particles, neutrinos can emerge from deep within their cosmic sources and travel across the universe without being absorbed by intervening matter or, in the case of charged particles, deflected by magnetic fields.

The Earth is constantly bombarded with neutrinos from the sun. However, neutrinos from beyond the solar system can be millions or billions of times more energetic. Scientists have long been searching for the origin of ultra-high energy and very high-energy neutrinos.

“Figuring out where high-energy neutrinos come from is one of the biggest problems in astrophysics today,” said Yang Bai of the University of Wisconsin in Madison, who co-authored a study about these results published in Physical Review D. “We now have the first evidence that an astronomical source – the Milky Way’s supermassive black hole – may be producing these very energetic neutrinos.”

Because neutrinos pass through material very easily, it is extremely difficult to build detectors that reveal exactly where the neutrino came from. The IceCube Neutrino Observatory, located under the South Pole, has detected 36 high-energy neutrinos since the facility became operational in 2010.

By pairing IceCube’s capabilities with the data from the three X-ray telescopes, scientists were able to look for violent events in space that corresponded with the arrival of a high-energy neutrino here on Earth.

Credit: NASA/CXC/Univ. of Wisconsin/Y.Bai. et al.

“We checked to see what happened after Chandra witnessed the biggest outburst ever detected from Sagittarius A*, the Milky Way’s supermassive black hole,” said co-author Andrea Peterson, also of the University of Wisconsin. “And less than three hours later, there was a neutrino detection at IceCube.”

In addition, several neutrino detections appeared within a few days of flares from the supermassive black hole that were observed with Swift and NuSTAR.
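The pairing described above amounts to a time-coincidence search: for each neutrino arrival time, ask whether an X-ray flare preceded it within some window. The toy sketch below uses made-up times and an assumed three-day window purely for illustration; it is not the actual Chandra, Swift, NuSTAR, or IceCube analysis:

```python
from datetime import datetime, timedelta

# Hypothetical example times, not the actual X-ray or IceCube data.
flare_times = [
    datetime(2013, 9, 14, 9, 0),
    datetime(2014, 3, 2, 17, 30),
]
neutrino_times = [
    datetime(2013, 9, 14, 11, 45),  # a few hours after the first flare
    datetime(2014, 7, 1, 4, 10),    # no nearby flare
]

WINDOW = timedelta(days=3)          # assumed coincidence window

for nu in neutrino_times:
    matches = [f for f in flare_times if timedelta(0) <= nu - f <= WINDOW]
    status = f"coincident with flare at {matches[0]}" if matches else "no flare match"
    print(f"neutrino at {nu}: {status}")
```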

“It would be a very big deal if we find out that Sagittarius A* produces neutrinos,” said co-author Amy Barger of the University of Wisconsin. “It’s a very promising lead for scientists to follow.”

Scientists think that the highest energy neutrinos were created in the most powerful events in the Universe like galaxy mergers, material falling onto supermassive black holes, and the winds around dense rotating stars called pulsars.

The team of researchers is still trying to develop a case for how Sagittarius A* might produce neutrinos. One idea is that it could happen when particles around the black hole are accelerated by a shock wave, like a sonic boom, that produces charged particles that decay to neutrinos.

This latest result may also contribute to the understanding of another major puzzle in astrophysics: the source of high-energy cosmic rays. Since the charged particles that make up cosmic rays are deflected by magnetic fields in our Galaxy, scientists have been unable to pinpoint their origin. The charged particles accelerated by a shock wave near Sgr A* may be a significant source of very energetic cosmic rays.

The paper describing these results is available online. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.

An interactive image, a podcast, and a video about these findings are available at:

http://chandra.si.edu

For Chandra images, multimedia and related materials, visit:

http://www.nasa.gov/chandra

Source: Chandra Harvard