Entering 2016 with new hope

Syed Faisal ur Rahman


The year 2015 left many good and bad memories for many of us. On one hand we saw more wars, terrorist attacks and political confrontations, and on the other hand we saw humanity raising its voice for peace, sheltering refugees and joining hands to confront climate change.

In science, we saw the first-ever photograph of light behaving as both a wave and a particle. We also saw some serious developments in machine learning, data science and artificial intelligence, along with voices raising caution about AI overtaking humanity and about issues related to privacy. The big questions of energy and climate change remained key points of discussion in scientific and political circles. The biggest breakthrough came near the end of the year, with the Paris agreement reached during COP21.

The deal, involving around 200 countries, represents a true spirit of humanity: a commitment to limit global warming to below 2°C, and to strive to keep the temperature rise to 1.5°C above pre-industrial levels. This truly global commitment also brought rival countries to sit together for a common cause, to save humanity from self-destruction. I hope the spirit will continue in other areas of common interest as well.

This spectacular view from the NASA/ESA Hubble Space Telescope shows the rich galaxy cluster Abell 1689. The huge concentration of mass bends light coming from more distant objects and can increase their total apparent brightness and make them visible. One such object, A1689-zD1, is located in the box — although it is still so faint that it is barely seen in this picture. New observations with ALMA and ESO’s VLT have revealed that this object is a dusty galaxy seen when the Universe was just 700 million years old.
Credit: NASA; ESA; L. Bradley (Johns Hopkins University); R. Bouwens (University of California, Santa Cruz); H. Ford (Johns Hopkins University); and G. Illingworth (University of California, Santa Cruz)

Space sciences also saw some enormous advancements, with New Horizons sending photographs from Pluto and SpaceX successfully landing its reusable Falcon 9 rocket after a successful launch. We also saw the discovery, by Prof Lajos Balazs, of the largest regular formation in the Universe: a ring of nine galaxies 7 billion light years away and 5 billion light years wide, covering a third of our sky. We also learnt this year that Mars once had more water than Earth’s Arctic Ocean, and NASA later confirmed evidence that water flows on the surface of Mars. The announcement led to some interesting insights into the atmosphere and history of the red planet.

In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment.
Illustration: Jose-Luis Olivares/MIT

We also saw some encouraging advancements in neuroscience, where MIT researchers developed a technique allowing direct stimulation of neurons, which could be an effective treatment for a variety of neurological diseases, without the need for implants or external connections. We also saw researchers reactivate neuroplasticity in older mice, restoring their brains to a younger state, and we saw some good progress in combating Alzheimer’s disease.

Quantum physics again remained a key area of scientific advancement. Quantum computing is getting closer to becoming a viable alternative to current computing architectures. The packing of single-photon detectors onto an optical chip is a crucial step toward quantum-computational circuits. Researchers at the Australian National University (ANU) also performed an experiment to prove that reality does not exist until it is measured.

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL have succeeded in capturing the first-ever snapshot of this dual behavior.
Credit: EPFL

There are many other areas where science and technology reached new heights and will hopefully continue to do so in the year 2016. I hope these advancements will not only help us in growing economically but also help us in becoming better human beings and a better society.


Software that knows the risks

Planning algorithms evaluate probability of success, suggest low-risk alternatives.

By Larry Hardesty


CAMBRIDGE, Mass. – Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours. Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.

That kind of application is the goal of Brian Williams’ group at MIT’s Computer Science and Artificial Intelligence Laboratory — although the same underlying framework has led to software that both NASA and the Woods Hole Oceanographic Institution have used to plan missions.

At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, researchers in Williams’ group will present algorithms that represent significant steps toward what Williams describes as “a better Siri” — the user-assistance application found in Apple products. But they would be just as useful for any planning task — say, scheduling flights or bus routes.

Together with Williams, Peng Yu and Cheng Fang, who are graduate students in MIT’s Department of Aeronautics and Astronautics, have developed software that allows a planner to specify constraints — say, buses along a certain route should reach their destination at 10-minute intervals — and reliability thresholds, such as that the buses should be on time at least 90 percent of the time. Then, on the basis of probabilistic models — which reveal data such as that travel time along this mile of road fluctuates between two and 10 minutes — the system determines whether a solution exists: For example, perhaps the buses’ departures should be staggered by six minutes at some times of day, 12 minutes at others.

If, however, a solution doesn’t exist, the software doesn’t give up. Instead, it suggests ways in which the planner might relax the problem constraints: Could the buses reach their destinations at 12-minute intervals? If the planner rejects the proposed amendment, the software offers an alternative: Could you add a bus to the route?

Short tails

One aspect of the software that distinguishes it from previous planning systems is that it assesses risk. “It’s always hard working directly with probabilities, because they always add complexity to your computations,” Fang says. “So we added this idea of risk allocation. We say, ‘What’s your budget of risk for this entire mission? Let’s divide that up and use it as a resource.’”

The time it takes to traverse any mile of a bus route, for instance, can be represented by a probability distribution — a bell curve, plotting time against probability. Keeping track of all those probabilities and compounding them for every mile of the route would yield a huge computation. But if the system knows in advance that the planner can tolerate a certain amount of failure, it can, in effect, assign that failure to the lowest-probability outcomes in the distributions, lopping off their tails. That makes them much easier to deal with mathematically.
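The tail-lopping idea can be made concrete with a small sketch. This is not the researchers' actual algorithm — the travel-time numbers are hypothetical, and the risk budget is split uniformly across segments for simplicity, whereas the paper treats the allocation itself as something to optimize:

```python
from statistics import NormalDist

# Hypothetical per-mile travel times (minutes), each modeled as a Gaussian.
segments = [(4.0, 1.5), (6.0, 2.0), (5.0, 1.0)]  # (mean, std dev) per mile

total_risk = 0.10                              # tolerate a 10% chance of being late
risk_per_segment = total_risk / len(segments)  # naive uniform risk allocation

# "Lop off the tail": replace each distribution with its (1 - risk) quantile,
# a deterministic worst-case time exceeded with probability risk_per_segment.
worst_case = sum(
    NormalDist(mu, sigma).inv_cdf(1.0 - risk_per_segment)
    for mu, sigma in segments
)
print(f"Worst-case route time at 90% reliability: {worst_case:.1f} min")
```

Because each segment individually overruns its truncated bound with probability at most 0.10/3, the union bound guarantees the whole route fails with probability at most 0.10 — a conservative but computationally simple guarantee, which is exactly what makes the truncated distributions "easier to deal with mathematically."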

At AAAI, Williams and another of his students, Andrew Wang, have a paper describing how to evaluate those assignments efficiently, in order to find quick solutions to soluble planning problems. But the paper with Yu and Fang — which appears at the same conference session — concentrates on identifying those constraints that prevent a problem’s solution.

There’s the rub

Both procedures are rooted in graph theory. In this context, a graph is a data representation that consists of nodes, usually depicted as circles, and edges, usually depicted as line segments connecting the nodes. Any scheduling problem can be represented as a graph. Nodes represent events, and the edges indicate the sequence in which events must occur. Each edge also has an associated weight, indicating the cost of progressing from one event to the next — the time it takes a bus to travel between stops, for instance.

Yu, Williams, and Fang’s algorithm first represents a problem as a graph, then begins adding edges that represent the constraints imposed by the planner. If the problem is soluble, the weights of the edges representing constraints will everywhere be greater than the weights representing the costs of transitions between events. Existing algorithms, however, can quickly home in on loops in the graph where the weights are imbalanced. The MIT researchers’ system then calculates the lowest-cost way of rebalancing the loop, which it presents to the planner as a modification of the problem’s initial constraints.
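The "imbalanced loop" check the article describes matches a standard technique from temporal reasoning: encode each timing constraint as a weighted edge, so that an edge (u, v, w) means event v must happen no more than w time units after event u, and then test for negative-weight cycles with Bellman-Ford relaxation. A minimal sketch, with hypothetical bus constraints rather than the researchers' actual data:

```python
# A scheduling problem as a weighted graph: edge (u, v, w) encodes the
# constraint time[v] - time[u] <= w. The constraint set is consistent
# iff the graph contains no negative-weight cycle (an "imbalanced loop").

def has_negative_cycle(num_nodes, edges):
    """Bellman-Ford relaxation; returns True if some loop is imbalanced."""
    dist = [0.0] * num_nodes  # implicit source with a zero-weight edge to every node
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, a negative cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Hypothetical example: depart (0), reach a stop (1), reach the terminus (2).
edges = [
    (0, 1, 6),    # leg 1 takes at most 6 minutes
    (1, 2, 7),    # leg 2 takes at most 7 minutes
    (2, 0, -15),  # terminus must be reached at least 15 minutes after departure
]
print(has_negative_cycle(3, edges))  # prints True: the three constraints conflict
```

The cycle 0 → 1 → 2 → 0 has total weight 6 + 7 − 15 = −2, so no schedule satisfies all three constraints; relaxing the last constraint to 13 minutes or less (weight −13) would rebalance the loop, which is the kind of amendment the system proposes to the planner.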

Source: MIT News Office

Collecting just the right data: MIT Research

When you can’t collect all the data you need, a new algorithm tells you which to target.

Larry Hardesty | MIT News Office


Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix.

But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications — such as trying to predict the path of a storm — there may just not be enough time to crunch all the available data.

Dan Levine, an MIT graduate student in aeronautics and astronautics, and his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, have developed a new technique that could help with both problems. For a range of common applications in which data is either difficult to collect or too time-consuming to process, the technique can identify the subset of data items that will yield the most reliable predictions. So geologists trying to assess the extent of underground petroleum deposits, or meteorologists trying to forecast the weather, can make do with just a few, targeted measurements, saving time and money.

Levine and How, who presented their work at the Uncertainty in Artificial Intelligence conference this week, consider the special case in which something about the relationships between data items is known in advance. Weather prediction provides an intuitive example: Measurements of temperature, pressure, and wind velocity at one location tend to be good indicators of measurements at adjacent locations, or of measurements at the same location a short time later, but the correlation grows weaker the farther out you move either geographically or chronologically.

Graphic content

Such correlations can be represented by something called a probabilistic graphical model. In this context, a graph is a mathematical abstraction consisting of nodes — typically depicted as circles — and edges — typically depicted as line segments connecting nodes. A network diagram is one example of a graph; a family tree is another. In a probabilistic graphical model, the nodes represent variables, and the edges represent the strength of the correlations between them.

Levine and How developed an algorithm that can efficiently calculate just how much information any node in the graph gives you about any other — what in information theory is called “mutual information.” As Levine explains, one of the obstacles to performing that calculation efficiently is the presence of “loops” in the graph, or nodes that are connected by more than one path.

Calculating mutual information between nodes, Levine says, is kind of like injecting blue dye into one of them and then measuring the concentration of blue at the other. “It’s typically going to fall off as we go further out in the graph,” Levine says. “If there’s a unique path between them, then we can compute it pretty easily, because we know what path the blue dye will take. But if there are loops in the graph, then it’s harder for us to compute how blue other nodes are because there are many different paths.”
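The "blue dye" falloff can be made concrete in the Gaussian case, where mutual information has a closed form: for two jointly Gaussian variables with correlation coefficient ρ, I(X; Y) = −½ ln(1 − ρ²). Along a loop-free Markov chain of Gaussian variables, correlations multiply hop by hop, so the dye fades geometrically. A small illustration with a hypothetical neighbor correlation of 0.8 (not a figure from the paper):

```python
from math import log

def gaussian_mutual_info(rho):
    """Mutual information (in nats) between two jointly Gaussian variables
    with correlation coefficient rho: I(X; Y) = -0.5 * ln(1 - rho**2)."""
    return -0.5 * log(1.0 - rho ** 2)

# Along a chain whose neighboring nodes have correlation 0.8, the effective
# correlation after k hops is 0.8**k, so the shared information falls off.
for hops in (1, 2, 3):
    rho = 0.8 ** hops
    print(f"{hops} hop(s): rho = {rho:.3f}, MI = {gaussian_mutual_info(rho):.3f} nats")
```

When loops are present there are many such chains between two nodes, and the contributions cannot simply be multiplied along a single path — which is the difficulty the researchers' algorithm addresses.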

So the first step in the researchers’ technique is to calculate “spanning trees” for the graph. A tree is just a graph with no loops: In a family tree, for instance, a loop might mean that someone was both parent and sibling to the same person. A spanning tree is a tree that touches all of a graph’s nodes but dispenses with the edges that create loops.
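One standard way to extract a spanning tree — keeping every node but dispensing with exactly the loop-closing edges — is a union-find scan over the edges. This sketch uses a hypothetical four-node graph; the paper's own spanning-tree construction may differ:

```python
def spanning_tree(nodes, edges):
    """Keep each edge that joins two separate components; drop loop-closers."""
    parent = {n: n for n in nodes}

    def find(n):
        # Follow parent pointers to the component's representative,
        # compressing the path along the way.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge connects two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree               # the discarded edges are exactly those closing loops

# A four-node graph with one loop: A-B, B-C, C-A, plus a branch C-D.
print(spanning_tree("ABCD", [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]))
# → [('A', 'B'), ('B', 'C'), ('C', 'D')]  (the loop-closing edge C-A is dropped)
```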

Betting the spread

Most of the nodes that remain in the graph, however, are “nuisances,” meaning that they don’t contain much useful information about the node of interest. The key to Levine and How’s technique is a way to use those nodes to navigate the graph without letting their short-range influence distort the long-range calculation of mutual information.

That’s possible, Levine explains, because the probabilities represented by the graph are Gaussian, meaning that they follow the bell curve familiar as the model of, for instance, the dispersion of characteristics in a population. A Gaussian distribution is exhaustively characterized by just two measurements: the average value — say, the average height in a population — and the variance — the rate at which the bell spreads out.

“The uncertainty in the problem is really a function of the spread of the distribution,” Levine says. “It doesn’t really depend on where the distribution is centered in space.” As a consequence, it’s often possible to calculate variance across a probabilistic graphical model without relying on the specific values of the nodes. “The usefulness of data can be assessed before the data itself becomes available,” Levine says.
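That observation can be checked directly against the textbook Gaussian conditioning formula: the posterior variance of a variable X after observing a correlated variable Y depends only on the variances and covariance, never on the value actually observed. A sketch with hypothetical numbers (not taken from the paper):

```python
# For jointly Gaussian X and Y, the variance of X after observing Y = y is
#   Var(X | Y = y) = var_x - cov_xy**2 / var_y
# Note that y itself never appears: the uncertainty reduction is known
# before any measurement is taken.

def posterior_variance(var_x, var_y, cov_xy):
    return var_x - cov_xy ** 2 / var_y

# Hypothetical weather-style numbers: temperature at two nearby stations.
print(posterior_variance(var_x=4.0, var_y=4.0, cov_xy=3.0))  # → 1.75
```

This is why "the usefulness of data can be assessed before the data itself becomes available": whichever value the measurement returns, the spread of the remaining uncertainty is the same.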

Reprinted with permission of MIT News (http://newsoffice.mit.edu/)