Tag Archives: data

Entering 2016 with new hope

Syed Faisal ur Rahman


 

The year 2015 left many good and bad memories for many of us. On one hand we saw more wars, terrorist attacks and political confrontations; on the other, we saw humanity raising its voice for peace, sheltering refugees and joining hands to confront climate change.

In science, we saw the first-ever photograph of light behaving as both a wave and a particle. We also saw serious progress in machine learning, data science and artificial intelligence, along with voices urging caution about AI overtaking humanity and about issues related to privacy. The big questions of energy and climate change remained key points of discussion in scientific and political circles. The biggest breakthrough came near the end of the year with the Paris agreement at COP21.

The deal, involving around 200 countries, represents a true spirit of humanity: a commitment to keep global warming well below 2°C above pre-industrial levels, and to strive to limit the increase to 1.5°C. This truly global commitment also brought rival countries to sit together for a common cause, to save humanity from self-destruction. I hope the spirit will continue in other areas of common interest as well.

This spectacular view from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 1689. The huge concentration of mass bends light coming from more distant objects and can increase their total apparent brightness and make them visible. One such object, A1689-zD1, is located in the box — although it is still so faint that it is barely seen in this picture.
New observations with ALMA and ESO’s VLT have revealed that this object is a dusty galaxy seen when the Universe was just 700 million years old.
Credit:
NASA; ESA; L. Bradley (Johns Hopkins University); R. Bouwens (University of California, Santa Cruz); H. Ford (Johns Hopkins University); and G. Illingworth (University of California, Santa Cruz)

Space science also saw enormous advances, with New Horizons sending back photographs of Pluto and SpaceX landing the first stage of its Falcon 9 rocket after a successful orbital launch. We also saw the discovery, by Prof. Lajos Balazs and colleagues, of the largest regular formation yet seen in the Universe: a ring of nine galaxies 7 billion light years away and 5 billion light years wide, covering a third of our sky. We learnt this year that Mars once had more water than Earth’s Arctic Ocean, and NASA later confirmed evidence that water flows on the surface of Mars. The announcement led to some interesting insights into the atmosphere and history of the red planet.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

We also saw encouraging advances in neuroscience, where MIT researchers developed a technique allowing direct stimulation of neurons without implants or external connections, which could become an effective treatment for a variety of neurological diseases. Researchers also reactivated neuroplasticity in older mice, restoring their brains to a younger state, and there was good progress in combating Alzheimer’s disease.

Quantum physics again remained a key area of scientific advancement. Quantum computing is getting closer to becoming a viable alternative to current computing architectures; the packing of single-photon detectors onto an optical chip is a crucial step toward quantum-computational circuits. Researchers at the Australian National University (ANU) also performed an experiment suggesting that reality does not exist until it is measured.

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.
Credit: EPFL

There are many other areas where science and technology reached new heights, and they will hopefully continue to do so in 2016. I hope these advancements will not only help us grow economically but also help us become better human beings and a better society.

Automating big-data analysis: MIT Research

System that replaces human intuition with algorithms outperforms 615 of 906 human teams.

By Larry Hardesty


Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.

Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.

“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

Featured composition

Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.

For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
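
The cross-table aggregation step described above is easy to illustrate. The sketch below is a minimal, hypothetical example (the table names, items and operations are invented for illustration and are not the Data Science Machine’s actual code): it imports item costs into purchase records and then applies a small suite of aggregations per order to produce candidate features.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical toy tables: item costs, and items appearing in customer orders.
items = {"apple": 1.20, "bread": 2.50, "cheese": 4.75}          # item -> cost
purchases = [("order1", "apple"), ("order1", "bread"),
             ("order2", "cheese"), ("order2", "bread"), ("order2", "apple")]

# Step 1: "import" costs from the items table into the purchases table.
costs_per_order = defaultdict(list)
for order_id, item in purchases:
    costs_per_order[order_id].append(items[item])

# Step 2: apply a suite of aggregation operations per order to create
# candidate features (total, average, minimum cost per order, ...).
features = {
    order_id: {"total_cost": sum(costs),
               "avg_cost": mean(costs),
               "min_cost": min(costs),
               "n_items": len(costs)}
    for order_id, costs in costs_per_order.items()
}
print(features)
```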

It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.

Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
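
The pruning step can be sketched in a similar spirit. The snippet below is an illustrative stand-in rather than the researchers’ method: it simply drops candidate features that are nearly duplicates of one already kept, using an arbitrary Pearson-correlation threshold.

```python
import numpy as np

def prune_correlated(feature_matrix, names, threshold=0.95):
    """Greedily drop candidate features whose |Pearson r| with an
    already-kept feature exceeds `threshold`."""
    corr = np.corrcoef(feature_matrix, rowvar=False)
    kept = []
    for j in range(feature_matrix.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return [names[k] for k in kept], feature_matrix[:, kept]

# Hypothetical candidates: a feature, a rescaled copy of it, and unrelated noise.
rng = np.random.default_rng(0)
total = rng.normal(10, 2, size=200)
candidates = np.column_stack([total, 1.8 * total + 0.1, rng.normal(0, 1, size=200)])
names = ["total_cost", "scaled_total", "noise_feature"]
print(prune_correlated(candidates, names)[0])   # e.g. ['total_cost', 'noise_feature']
```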

“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”

Source: MIT News Office

 


Mysterious Ripples Found Racing Through Planet-forming Disc: ESO

Unique structures spotted around nearby star


 

Using images from ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope, astronomers have discovered fast-moving wave-like features in the dusty disc around the nearby star AU Microscopii. These odd structures are unlike anything ever observed, or even predicted, before now.
The top row shows a Hubble image of the AU Mic disc from 2010, the middle row Hubble from 2011 and the bottom row VLT/SPHERE data from 2014. The black central circles show where the brilliant light of the central star has been blocked off to reveal the much fainter disc, and the position of the star is indicated schematically.
The scale bar at the top of the picture indicates the diameter of the orbit of the planet Neptune in the Solar System (60 AU).
Note that the brightness of the outer parts of the disc has been artificially brightened to reveal the faint structure.
Credit:
ESO, NASA & ESA

Using images from ESO’s Very Large Telescope and the NASA/ESA Hubble Space Telescope, astronomers have discovered never-before-seen structures within a dusty disc surrounding a nearby star. The fast-moving wave-like features in the disc of the star AU Microscopii are unlike anything ever observed, or even predicted, before now. The origin and nature of these features present a new mystery for astronomers to explore. The results are published in the journal Nature on 8 October 2015.

AU Microscopii, or AU Mic for short, is a young, nearby star surrounded by a large disc of dust [1]. Studies of such debris discs can provide valuable clues about how planets, which form from these discs, are created.

Astronomers have been searching AU Mic’s disc for any signs of clumpy or warped features, as such signs might give away the location of possible planets. And in 2014 they used the more powerful high-contrast imaging capabilities of ESO’s newly installed SPHERE instrument, mounted on the Very Large Telescope for their search — and discovered something very unusual.

“Our observations have shown something unexpected,” explains Anthony Boccaletti, LESIA (Observatoire de Paris/CNRS/UPMC/Paris-Diderot), France, and lead author on the paper. “The images from SPHERE show a set of unexplained features in the disc which have an arch-like, or wave-like, structure, unlike anything that has ever been observed before.”

Five wave-like arches at different distances from the star show up in the new images, reminiscent of ripples in water. After spotting the features in the SPHERE data the team turned to earlier images of the disc taken by the NASA/ESA Hubble Space Telescope in 2010 and 2011 to see whether the features were also visible in these [2]. They were not only able to identify the features on the earlier Hubble images — but they also discovered that they had changed over time. It turns out that these ripples are moving — and very fast!

“We reprocessed images from the Hubble data and ended up with enough information to track the movement of these strange features over a four-year period,” explains team member Christian Thalmann (ETH Zürich, Switzerland). “By doing this, we found that the arches are racing away from the star at speeds of up to about 40 000 kilometres/hour!”

The features further away from the star seem to be moving faster than those closer to it. At least three of the features are moving so fast that they could well be escaping from the gravitational attraction of the star. Such high speeds rule out the possibility that these are conventional disc features caused by objects — like planets — disturbing material in the disc while orbiting the star. There must have been something else involved to speed up the ripples and make them move so quickly, meaning that they are a sign of something truly unusual [3].

“Everything about this find was pretty surprising!” comments co-author Carol Grady of Eureka Scientific, USA. “And because nothing like this has been observed or predicted in theory we can only hypothesise when it comes to what we are seeing and how it came about.”

The team cannot say for sure what caused these mysterious ripples around the star. But they have considered and ruled out a series of phenomena as explanations, including the collision of two massive and rare asteroid-like objects releasing large quantities of dust, and spiral waves triggered by instabilities in the system’s gravity.

But other ideas that they have considered look more promising.

“One explanation for the strange structures links them to the star’s flares. AU Mic is a star with high flaring activity — it often lets off huge and sudden bursts of energy from on or near its surface,” explains co-author Glenn Schneider of Steward Observatory, USA. “One of these flares could perhaps have triggered something on one of the planets — if there are planets — like a violent stripping of material which could now be propagating through the disc, propelled by the flare’s force.”

“It is very satisfying that SPHERE has proved to be very capable at studying discs like this in its first year of operation,” adds Jean-Luc Beuzit, who is both a co-author of the new study and also led the development of SPHERE itself.

The team plans to continue to observe the AU Mic system with SPHERE and other facilities, including ALMA, to try to understand what is happening. But, for now, these curious features remain an unsolved mystery.

Notes

[1] AU Microscopii lies just 32 light-years away from Earth. The disc essentially comprises asteroids that have collided with such vigour that they have been ground to dust.

[2] The data were gathered by Hubble’s Space Telescope Imaging Spectrograph (STIS).

[3] The edge-on view of the disc complicates the interpretation of its three-dimensional structure.

More information

This research was presented in a paper entitled “Fast-Moving Structures in the Debris Disk Around AU Microscopii”, to appear in the journal Nature on 8 October 2015.

Source: ESO

The rise and fall of cognitive skills: Neuroscientists find that different parts of the brain work best at different ages.

By Anne Trafton


CAMBRIDGE, Mass. – Scientists have long known that our ability to think quickly and recall information, also known as fluid intelligence, peaks around age 20 and then begins a slow decline. However, more recent findings, including a new study from neuroscientists at MIT and Massachusetts General Hospital (MGH), suggest that the real picture is much more complex.

The study, which appears in the XX issue of the journal Psychological Science, finds that different components of fluid intelligence peak at different ages, some as late as age 40.

“At any given age, you’re getting better at some things, you’re getting worse at some other things, and you’re at a plateau at some other things. There’s probably not one age at which you’re peak on most things, much less all of them,” says Joshua Hartshorne, a postdoc in MIT’s Department of Brain and Cognitive Sciences and one of the paper’s authors.

“It paints a different picture of the way we change over the lifespan than psychology and neuroscience have traditionally painted,” adds Laura Germine, a postdoc in psychiatric and neurodevelopmental genetics at MGH and the paper’s other author.

Measuring peaks

Until now, it has been difficult to study how cognitive skills change over time because of the challenge of getting large numbers of people older than college students and younger than 65 to come to a psychology laboratory to participate in experiments. Hartshorne and Germine were able to take a broader look at aging and cognition because they have been running large-scale experiments on the Internet, where people of any age can become research subjects.

Their web sites, gameswithwords.org and testmybrain.org, feature cognitive tests designed to be completed in just a few minutes. Through these sites, the researchers have accumulated data from nearly 3 million people in the past several years.

In 2011, Germine published a study showing that the ability to recognize faces improves until the early 30s before gradually starting to decline. This finding did not fit into the theory that fluid intelligence peaks in late adolescence. Around the same time, Hartshorne found that subjects’ performance on a visual short-term memory task also peaked in the early 30s.

Intrigued by these results, the researchers, then graduate students at Harvard University, decided that they needed to explore a different source of data, in case some aspect of collecting data on the Internet was skewing the results. They dug out sets of data, collected decades ago, on adult performance at different ages on the Wechsler Adult Intelligence Scale, which is used to measure IQ, and the Wechsler Memory Scale. Together, these tests measure about 30 different subsets of intelligence, such as digit memorization, visual search, and assembling puzzles.

Hartshorne and Germine developed a new way to analyze the data that allowed them to compare the age peaks for each task. “We were mapping when these cognitive abilities were peaking, and we saw there was no single peak for all abilities. The peaks were all over the place,” Hartshorne says. “This was the smoking gun.”

However, the dataset was not as large as the researchers would have liked, so they decided to test several of the same cognitive skills with their larger pools of Internet study participants. For the Internet study, the researchers chose four tasks that peaked at different ages, based on the data from the Wechsler tests. They also included a test of the ability to perceive others’ emotional state, which is not measured by the Wechsler tests.

The researchers gathered data from nearly 50,000 subjects and found a very clear picture showing that each cognitive skill they were testing peaked at a different age. For example, raw speed in processing information appears to peak around age 18 or 19, then immediately starts to decline. Meanwhile, short-term memory continues to improve until around age 25, when it levels off and then begins to drop around age 35.

For the ability to evaluate other people’s emotional states, the peak occurred much later, in the 40s or 50s.

More work will be needed to reveal why each of these skills peaks at different times, the researchers say. However, previous studies have hinted that genetic changes or changes in brain structure may play a role.

“If you go into the data on gene expression or brain structure at different ages, you see these lifespan patterns that we don’t know what to make of. The brain seems to continue to change in dynamic ways through early adulthood and middle age,” Germine says. “The question is: What does it mean? How does it map onto the way you function in the world, or the way you think, or the way you change as you age?”

Accumulated intelligence

The researchers also included a vocabulary test, which serves as a measure of what is known as crystallized intelligence — the accumulation of facts and knowledge. These results confirmed that crystallized intelligence peaks later in life, as previously believed, but the researchers also found something unexpected: While data from the Wechsler IQ tests suggested that vocabulary peaks in the late 40s, the new data showed a later peak, in the late 60s or early 70s.

The researchers believe this may be a result of better education, more people having jobs that require a lot of reading, and more opportunities for intellectual stimulation for older people.

Hartshorne and Germine are now gathering more data from their websites and have added new cognitive tasks designed to evaluate social and emotional intelligence, language skills, and executive function. They are also working on making their data public so that other researchers can access it and perform other types of studies and analyses.

“We took the existing theories that were out there and showed that they’re all wrong. The question now is: What is the right one? To get to that answer, we’re going to need to run a lot more studies and collect a lot more data,” Hartshorne says.

The research was funded by the National Institutes of Health, the National Science Foundation, and a National Defense Science and Engineering Graduate Fellowship.

Source: MIT News Office

Software that knows the risks

Planning algorithms evaluate probability of success, suggest low-risk alternatives.

By Larry Hardesty


CAMBRIDGE, Mass. – Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours. Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.

That kind of application is the goal of Brian Williams’ group at MIT’s Computer Science and Artificial Intelligence Laboratory — although the same underlying framework has led to software that both NASA and the Woods Hole Oceanographic Institution have used to plan missions.

At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, researchers in Williams’ group will present algorithms that represent significant steps toward what Williams describes as “a better Siri” — the user-assistance application found in Apple products. But they would be just as useful for any planning task — say, scheduling flights or bus routes.

Together with Williams, Peng Yu and Cheng Fang, who are graduate students in MIT’s Department of Aeronautics and Astronautics, have developed software that allows a planner to specify constraints — say, buses along a certain route should reach their destination at 10-minute intervals — and reliability thresholds, such as that the buses should be on time at least 90 percent of the time. Then, on the basis of probabilistic models — which reveal data such as that travel time along this mile of road fluctuates between two and 10 minutes — the system determines whether a solution exists: For example, perhaps the buses’ departures should be staggered by six minutes at some times of day, 12 minutes at others.

If, however, a solution doesn’t exist, the software doesn’t give up. Instead, it suggests ways in which the planner might relax the problem constraints: Could the buses reach their destinations at 12-minute intervals? If the planner rejects the proposed amendment, the software offers an alternative: Could you add a bus to the route?

Short tails

One aspect of the software that distinguishes it from previous planning systems is that it assesses risk. “It’s always hard working directly with probabilities, because they always add complexity to your computations,” Fang says. “So we added this idea of risk allocation. We say, ‘What’s your budget of risk for this entire mission? Let’s divide that up and use it as a resource.’”

The time it takes to traverse any mile of a bus route, for instance, can be represented by a probability distribution — a bell curve, plotting time against probability. Keeping track of all those probabilities and compounding them for every mile of the route would yield a huge computation. But if the system knows in advance that the planner can tolerate a certain amount of failure, it can, in effect, assign that failure to the lowest-probability outcomes in the distributions, lopping off their tails. That makes them much easier to deal with mathematically.
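
A minimal sketch of that tail-lopping idea follows, assuming (purely for illustration) Gaussian travel-time models per mile and an even split of the risk budget; the group’s actual risk-allocation algorithm is more sophisticated.

```python
from statistics import NormalDist

# Hypothetical per-mile travel-time models (minutes): (mean, standard deviation).
miles = [(4.0, 1.0), (6.0, 2.0), (3.0, 0.5), (5.0, 1.5)]

def deterministic_bounds(segments, risk_budget=0.10):
    """Split a total risk budget evenly across segments and replace each
    probabilistic travel time with a bound that holds with probability
    1 - allocated_risk, i.e. lop off the distribution's upper tail."""
    risk_per_segment = risk_budget / len(segments)
    return [NormalDist(mu, sigma).inv_cdf(1.0 - risk_per_segment)
            for mu, sigma in segments]

bounds = deterministic_bounds(miles)
print([round(b, 2) for b in bounds])
print("worst-case total within the risk budget:", round(sum(bounds), 2))
```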

At AAAI, Williams and another of his students, Andrew Wang, have a paper describing how to evaluate those assignments efficiently, in order to find quick solutions to soluble planning problems. But the paper with Yu and Fang — which appears at the same conference session — concentrates on identifying those constraints that prevent a problem’s solution.

There’s the rub

Both procedures are rooted in graph theory. In this context, a graph is a data representation that consists of nodes, usually depicted as circles, and edges, usually depicted as line segments connecting the nodes. Any scheduling problem can be represented as a graph. Nodes represent events, and the edges indicate the sequence in which events must occur. Each edge also has an associated weight, indicating the cost of progressing from one event to the next — the time it takes a bus to travel between stops, for instance.

Yu, Williams, and Fang’s algorithm first represents a problem as a graph, then begins adding edges that represent the constraints imposed by the planner. If the problem is soluble, the weights of the edges representing constraints will everywhere be greater than the weights representing the costs of transitions between events. Existing algorithms, however, can quickly home in on loops in the graph where the weights are imbalanced. The MIT researchers’ system then calculates the lowest-cost way of rebalancing the loop, which it presents to the planner as a modification of the problem’s initial constraints.
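
The cycle check itself can be sketched with standard tools. In the toy example below, each requirement of the form “event B happens at most w minutes after event A” becomes a weighted edge, and an over-constrained schedule shows up as a negative-weight cycle that Bellman-Ford detects; this only illustrates the graph formulation, not the MIT researchers’ rebalancing algorithm.

```python
# Events as nodes; each requirement "t_v - t_u <= w" becomes an edge u -> v
# with weight w.  An upper bound of 10 minutes between stops A and B is
# (A, B, 10); a minimum transit time of 12 minutes is (B, A, -12).
# Together they are infeasible, which appears as a negative-weight cycle.
edges = [("A", "B", 10), ("B", "A", -12)]
nodes = {"A", "B"}

def has_negative_cycle(nodes, edges):
    """Bellman-Ford from a virtual source: returns True iff the constraint
    graph contains a negative cycle, i.e. the schedule is over-constrained."""
    dist = {v: 0 for v in nodes}               # virtual source reaches all nodes
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)

print(has_negative_cycle(nodes, edges))        # True -> a constraint must be relaxed
```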

Source: MIT News Office

Islamic Republic of Pakistan to become Associate Member State of CERN: CERN Press Release

Geneva 19 December 2014. CERN1 Director General, Rolf Heuer, and the Chairman of the Pakistan Atomic Energy Commission, Ansar Parvez, signed today in Islamabad, in presence of Prime Minister Nawaz Sharif, a document admitting the Islamic Republic of Pakistan to CERN Associate Membership, subject to ratification by the Government of Pakistan.

“Pakistan has been a strong participant in CERN’s endeavours in science and technology since the 1990s,” said Rolf Heuer. “Bringing nations together in a peaceful quest for knowledge and education is one of the most important missions of CERN. Welcoming Pakistan as a new Associate Member State is therefore for our Organization a very significant event and I’m looking forward to enhanced cooperation with Pakistan in the near future.”

“It is indeed a historic day for science in Pakistan. Today’s signing of the agreement is a reward for the collaboration of our scientists, engineers and technicians with CERN over the past two decades,” said Ansar Parvez. “This Membership will bring in its wake multiple opportunities for our young students and for industry to learn and benefit from CERN. To us in Pakistan, science is not just pursuit of knowledge, it is also the basic requirement to help us build our nation.”

The Islamic Republic of Pakistan and CERN signed a Co-operation Agreement in 1994. The signature of several protocols followed this agreement, and Pakistan contributed to building the CMS and ATLAS experiments. Pakistan contributes today to the ALICE, ATLAS, CMS and LHCb experiments and operates a Tier-2 computing centre in the Worldwide LHC Computing Grid that helps to process and analyse the massive amounts of data the experiments generate. Pakistan is also involved in accelerator developments, making it an important partner for CERN.

The Associate Membership of Pakistan will open a new era of cooperation that will strengthen the long-term partnership between CERN and the Pakistani scientific community. Associate Membership will allow Pakistan to participate in the governance of CERN, through attending the meetings of the CERN Council. Moreover, it will allow Pakistani scientists to become members of the CERN staff, and to participate in CERN’s training and career-development programmes. Finally, it will allow Pakistani industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

Footnote(s)

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Union, JINR and UNESCO have Observer Status.

Source: CERN

Researchers use real data rather than theory to measure the cosmos

For the first time researchers have measured large distances in the Universe using data, rather than calculations related to general relativity.

A research team from Imperial College London and the University of Barcelona has used data from astronomical surveys to measure a standard distance that is central to our understanding of the expansion of the universe.

Previously the size of this ‘standard ruler’ has only been predicted from theoretical models that rely on general relativity to explain gravity at large scales. The new study is the first to measure it using observed data. A standard ruler is an object which consistently has the same physical size so that a comparison of its actual size to its size in the sky will provide a measurement of its distance to earth.

“Our research suggests that current methods for measuring distance in the Universe are more complicated than they need to be,” said Professor Alan Heavens from the Department of Physics, Imperial College London who led the study. “Traditionally in cosmology, general relativity plays a central role in most models and interpretations. We have demonstrated that current data are powerful enough to measure the geometry and expansion history of the Universe without relying on calculations relating to general relativity.

“We hope this more data-driven approach, combined with an ever increasing wealth of observational data, could provide more precise measurements that will be useful for future projects that are planning to answer major questions around the acceleration of the Universe and dark energy.”

The standard ruler measured in the research is the baryon acoustic oscillation scale. This is a pattern of a specific length which is imprinted in the clustering of matter created by small variations in density in the very early Universe (about 400,000 years after the Big Bang). The length of this pattern, which is the same today as it was then, is the baryon acoustic oscillation scale.

The team calculated the length to be 143 Megaparsecs (nearly 480 million light years) which is similar to accepted predictions for this distance from models based on general relativity.
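
The geometry behind a standard ruler is simple enough to sketch. In the toy calculation below the 143 Megaparsec scale is taken from the study, while the angular size is an invented example value; the small-angle relation then converts the pair into a distance.

```python
import math

BAO_SCALE_MPC = 143.0          # physical size of the standard ruler (from the study)

def angular_diameter_distance(physical_size_mpc, angular_size_deg):
    """Small-angle relation: distance = physical size / angular size (in radians)."""
    return physical_size_mpc / math.radians(angular_size_deg)

# Hypothetical example: the ruler subtending about 4.5 degrees on the sky.
print(round(angular_diameter_distance(BAO_SCALE_MPC, 4.5), 1), "Mpc")
```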

Published in Physical Review Letters, the findings of the research suggest it is possible to measure cosmological distances independently from models that rely on general relativity.

Einstein’s theory of general relativity replaced Newton’s law to become the accepted explanation of how gravity behaves at large scales. Many important astrophysics models are based on general relativity, including those dealing with the expansion of the Universe and black holes. However some unresolved issues surround general relativity. These include its lack of reconciliation with the laws of quantum physics and the need for it to be extrapolated many orders of magnitude in scales in order to apply it in cosmological settings. No other physical law has been extrapolated that far without needing any adjustment, so its assumptions are still open to question.

Co-author of the study, Professor Raul Jimenez from the University of Barcelona said: “The uncertainties around general relativity have motivated us to develop methods to derive more direct measurements of the cosmos, rather than relying so heavily on inferences from models. For our study we only made some minimal theoretical assumptions such as the symmetry of the Universe and a smooth expansion history.”

Co-author Professor Licia Verde from the University of Barcelona added: “There is a big difference between measuring distance and inferring its value indirectly. Usually in cosmology we can only do the latter and this is one of these rare and precious cases where we can directly measure distance. Most statements in cosmology assume general relativity works and does so on extremely large scales, which means we are often extrapolating figures out of our comfort zone. So it is reassuring to discover that we can make strong and important statements without depending on general relativity and which match previous statements. It gives one confidence that the observations we have of the Universe, as strange and puzzling as they might be, are realistic and sound!”

The research used current data from astronomical surveys on the brightness of exploding stars (supernovae) and on the regular pattern in the clustering of matter (baryonic acoustic oscillations) to measure the size of this ‘standard ruler’. The matter that created this standard ruler formed about 400,000 years after the Big Bang. This period was a time when the physics of the Universe was still relatively simple so the researchers did not need to consider more ‘exotic’ concepts such as dark energy in their measurements.

“In this study we have used measurements that are very clean,” Professor Heavens explained, “And the theory that we do apply comes from a time relatively soon after the Big Bang when the physics was also clean. This means we have what we believe to be a precise method of measurement based on observations of the cosmos. Astrophysics is an incredibly active but changeable field and the support for the different models is liable to change. Even when models are abandoned, measurements of the cosmos will survive. If we can rely on direct measurements based on real observations rather than theoretical models then this is good news for cosmology and astrophysics.”

The research was supported by the Royal Society and the European Research Council.

Source : Imperial College


Best Quantum Receiver

RECORD HIGH DATA ACCURACY RATES FOR PHASE-MODULATED TRANSMISSION

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

 Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this not by employing passive detection of incoming light pulses; instead, the light is split up and measured numerous times.

By passing it through a special crystal, a light wave’s phase—denoting position along the wave’s cycle—can be delayed. A delay of a certain amount can denote a piece of data. In this experiment light pulses can be delayed by a zero amount, or by ¼ of a cycle, or 2/4, or ¾ of a cycle.
Credit: JQI

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals we can lessen the amount of power needed to send signals reliably,” says Migdall. “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations. Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber. The information can be coded in several ways. Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude. Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave. On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

 Because the carrier wave is coherent—for laser light this means a predictable set of crests and troughs along the wave—a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, the waves in Migdall’s experiment are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, two-fourths of a cycle, and three-fourths of a cycle. The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2). The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment. The angle around the graph corresponds to the signal’s phase delay.
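
A small sketch of this four-state phase encoding follows, with the circle radius set by the square root of an assumed mean photon number; the numbers are illustrative, not the experiment’s parameters.

```python
import cmath
import math

MEAN_PHOTON_NUMBER = 4          # sets the pulse amplitude (radius of the circle)

# The four states: un-delayed, and delayed by 1/4, 2/4, 3/4 of a cycle.
PHASES = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75}   # fraction of a cycle

def encode(state, n_mean=MEAN_PHOTON_NUMBER):
    """Return the complex amplitude of the pulse for a given data state:
    radius ~ sqrt(mean photon number), angle = phase delay."""
    amplitude = math.sqrt(n_mean)
    return cmath.rect(amplitude, 2 * math.pi * PHASES[state])

for s in PHASES:
    z = encode(s)
    print(s, f"({z.real:+.2f}, {z.imag:+.2f})")   # four points around the circle
```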

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

 In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse) goes up.  This principle is illustrated by the figure to the right, where now the clouds are farther apart than in the left panel.  This means there is less chance of mistaken readings.  More intense beams require more power, but this mitigates the chance of overlapping blobs.

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1. There the waves are combined with a reference pulse. If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing. This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, then the phase of the reference light provides that answer … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires—an eventuality called a “dark count.”  Still another possible glitch: because of optics imperfections even with a correct reference–phase setting, the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better. In other words, the probability of being wrong decreases.
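
The sketch below mimics that measure-and-update loop for an idealized receiver: the pulse is split into equal slices, each slice is compared against the current best guess, and Bayes’ rule updates the probabilities of the four states. The slice count, photon number and the assumption of a perfect detector (no dark counts, full efficiency) are simplifications for illustration, not the JQI implementation.

```python
import cmath, math, random

PHASES = [0.0, 0.25, 0.5, 0.75]        # the four possible phase delays (cycles)

def amplitude(state, n_slice):
    return cmath.rect(math.sqrt(n_slice), 2 * math.pi * PHASES[state])

def adaptive_receive(true_state, n_mean=4.0, n_slices=10, seed=1):
    """Toy adaptive receiver: for each slice, displace by the currently most
    probable hypothesis and look for a click.  No click favours that
    hypothesis; a click counts against it (Bayesian update)."""
    rng = random.Random(seed)
    n_slice = n_mean / n_slices
    posterior = [0.25] * 4                          # flat prior over the 4 states
    for _ in range(n_slices):
        guess = max(range(4), key=lambda s: posterior[s])
        residual = abs(amplitude(true_state, n_slice) - amplitude(guess, n_slice)) ** 2
        click = rng.random() > math.exp(-residual)  # P(no click) = exp(-|residual|^2)
        likelihoods = []
        for s in range(4):
            r = abs(amplitude(s, n_slice) - amplitude(guess, n_slice)) ** 2
            p_no_click = math.exp(-r)
            likelihoods.append(1 - p_no_click if click else p_no_click)
        total = sum(l * p for l, p in zip(likelihoods, posterior))
        posterior = [l * p / total for l, p in zip(likelihoods, posterior)]
    return max(range(4), key=lambda s: posterior[s]), posterior

print(adaptive_receive(true_state=2))   # best guess plus final probabilities
```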

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can measure the actual error rates. Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons. (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.” Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum. The difference here is that single-photon detection and an adaptive measurement strategy are used. A stable reference pulse is required. In the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

See more at: http://jqi.umd.edu/news/best-quantum-receiver

Source: JQI


CERN makes public first data of LHC experiments

CERN1 launched today its Open Data Portal, where data from real collision events produced by the LHC experiments will for the first time be made openly available to all. It is expected that these data will be of high value for the research community, and that they will also be used for education purposes.

“Launching the CERN Open Data Portal is an important step for our Organization. Data from the LHC programme are among the most precious assets of the LHC experiments, that today we start sharing openly with the world. We hope these open data will support and inspire the global research community, including students and citizen scientists,” said CERN Director General Rolf Heuer.

The principle of openness is enshrined in CERN’s founding Convention, and all LHC publications have been published Open Access, free for all to read and re-use. Widening the scope, the LHC collaborations recently approved Open Data policies and will release collision data over the coming years.

The first high-level and analysable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. This data set is now publicly available on the CERN Open Data Portal. Open source software to read and analyse the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” said CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

The mass difference spectrum: the LHCb result shows strong evidence of the existence of two new particles the Xi_b’- (first peak) and Xi_b*- (second peak), with the very high-level confidence of 10 sigma. The black points are the signal sample and the hatched red histogram is a control sample. The blue curve represents a model including the two new particles, fitted to the data. Delta_m is the difference between the mass of the Xi_b0 pi- pair and the sum of the individual masses of the Xi_b0 and pi-. INSET: Detail of the Xi_b’- region plotted with a finer binning.
Credit: CERN

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations, which have been specifically prepared for educational purposes, such as the international masterclasses in particle physics2 benefiting over ten thousand high-school students every year. These resources are accompanied by visualisation tools.

“Our own data policy foresees data preservation and its sharing. We have seen that students are fascinated by being able to analyse LHC data in the past and so, we are very happy to take the first steps and make available some selected data for education,” said Silvia Amerio, data preservation coordinator of the LHCb experiment.

“The development of this Open Data Portal represents a first milestone in our mission to serve our users in preserving and sharing their research materials. It will ensure that the data and tools can be accessed and used, now and in the future,” said Tim Smith from CERN’s IT Department.

All data on OpenData.cern.ch are shared under a Creative Commons CC03 public domain dedication; data and software are assigned unique DOI identifiers to make them citable in scientific articles; and software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.

Further information:

Open data portal

Open data policies

CMS Open Data

 

Footnote(s):

1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.

2. http://www.physicsmasterclasses.org

3. http://creativecommons.org/publicdomain/zero/1.0/

Recommendation theory

Model for evaluating product-recommendation algorithms suggests that trial and error get it right.

By Larry Hardesty

Devavrat Shah’s group at MIT’s Laboratory for Information and Decision Systems (LIDS) specializes in analyzing how social networks process information. In 2012, the group demonstrated algorithms that could predict what topics would trend on Twitter up to five hours in advance; this year, they used the same framework to predict fluctuations in the prices of the online currency known as Bitcoin.

Next month, at the Conference on Neural Information Processing Systems, they’ll present a paper that applies their model to the recommendation engines that are familiar from websites like Amazon and Netflix — with surprising results.

“Our interest was, we have a nice model for understanding data-processing from social data,” says Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science. “It makes sense in terms of how people make decisions, exhibit preferences, or take actions. So let’s go and exploit it and design a better, simple, basic recommendation algorithm, and it will be something very different. But it turns out that under that model, the standard recommendation algorithm is the right thing to do.”

The standard algorithm is known as “collaborative filtering.” To get a sense of how it works, imagine a movie-streaming service that lets users rate movies they’ve seen. To generate recommendations specific to you, the algorithm would first assign the other users similarity scores based on the degree to which their ratings overlap with yours. Then, to predict your response to a particular movie, it would aggregate the ratings the movie received from other users, weighted according to similarity scores.

To simplify their analysis, Shah and his collaborators — Guy Bresler, a postdoc in LIDS, and George Chen, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS) who is co-advised by Shah and EECS associate professor Polina Golland — assumed that the ratings system had two values, thumbs-up or thumbs-down. The taste of every user could thus be described, with perfect accuracy, by a string of ones and zeroes, where the position in the string corresponds to a particular movie and the number at that location indicates the rating.
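
A toy version of that similarity-weighted vote, using the thumbs-up/thumbs-down simplification (here +1, -1, with 0 for unrated), is sketched below; the ratings matrix and the agreement-count similarity are invented for illustration.

```python
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = movies,
# +1 = thumbs-up, -1 = thumbs-down, 0 = unrated.
R = np.array([[ 1,  1, -1,  0],
              [ 1,  1, -1, -1],
              [-1,  0,  1,  1],
              [ 1, -1,  0,  1]])

def predict(R, user, movie):
    """Similarity-weighted vote: weight each other user's rating of `movie`
    by how many co-rated movies they agree on with `user`."""
    score = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, movie] == 0:
            continue
        co_rated = (R[user] != 0) & (R[other] != 0)
        similarity = float(np.sum(R[user, co_rated] == R[other, co_rated]))
        score += similarity * R[other, movie]
    return 1 if score >= 0 else -1            # predicted thumbs-up / thumbs-down

print(predict(R, user=0, movie=3))            # -1: user 0 will likely dislike movie 3
```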

Birds of a feather

The MIT researchers’ model assumes that large groups of such strings can be clustered together, and that those clusters can be described probabilistically. Rather than ones and zeroes at each location in the string, a probabilistic cluster model would feature probabilities: an 80 percent chance that the members of the cluster will like movie “A,” a 20 percent chance that they’ll like movie “B,” and so on.

The question is how many such clusters are required to characterize a population. If half the people who like “Die Hard” also like “Shakespeare in Love,” but the other half hate it, then ideally, you’d like to split “Die Hard” fans into two clusters. Otherwise, you’d lose correlations between their preferences that could be predictively useful. On the other hand, the more clusters you have, the more ratings you need to determine which of them a given user belongs to. Reliable prediction from limited data becomes impossible.
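
A probabilistic cluster of the kind described above can be sketched as a vector of thumbs-up probabilities per movie, from which synthetic users are drawn; the cluster names and probabilities below are invented for illustration.

```python
import random

# Hypothetical probabilistic clusters: per-movie probability of a thumbs-up.
clusters = {
    "action_fans":  {"Die Hard": 0.9, "Shakespeare in Love": 0.2, "The Notebook": 0.1},
    "romance_fans": {"Die Hard": 0.3, "Shakespeare in Love": 0.8, "The Notebook": 0.9},
}

def sample_user(cluster_name, rng=random.Random(42)):
    """Draw one user's thumbs-up (1) / thumbs-down (0) string from a cluster."""
    profile = clusters[cluster_name]
    return {movie: int(rng.random() < p) for movie, p in profile.items()}

print(sample_user("action_fans"))
print(sample_user("romance_fans"))
```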

In their new paper, the MIT researchers show that so long as the number of clusters required to describe the variation in a population is low, collaborative filtering yields nearly optimal predictions. But in practice, how low is that number?

To answer that question, the researchers examined data on 10 million users of a movie-streaming site and identified 200 who had rated the same 500 movies. They found that, in fact, just five clusters — five probabilistic models — were enough to account for most of the variation in the population.

Missing links

While the researchers’ model corroborates the effectiveness of collaborative filtering, it also suggests ways to improve it. In general, the more information a collaborative-filtering algorithm has about users’ preferences, the more accurate its predictions will be. But not all additional information is created equal. If a user likes “The Godfather,” the information that he also likes “The Godfather: Part II” will probably have less predictive power than the information that he also likes “The Notebook.”

Using their analytic framework, the LIDS researchers show how to select a small number of products that carry a disproportionate amount of information about users’ tastes. If the service provider recommended those products to all its customers, then, based on the resulting ratings, it could much more efficiently sort them into probability clusters, which should improve the quality of its recommendations.

Sujay Sanghavi, an associate professor of electrical and computer engineering at the University of Texas at Austin, considers this the most interesting aspect of the research. “If you do some kind of collaborative filtering, two things are happening,” he says. “I’m getting value from it as a user, but other people are getting value, too. Potentially, there is a trade-off between these things. If there’s a popular movie, you can easily show that I’ll like it, but it won’t improve the recommendations for other people.”

That trade-off, Sanghavi says, “has been looked at in an empirical context, but there’s been nothing that’s principled. To me, what is appealing about this paper is that they have a principled look at this issue, which no other work has done. They’ve found a new kind of problem. They are looking at a new issue.”

Source: MIT News