Monthly Archives: June 2015

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
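
In code, the grid-filling procedure looks roughly like this (a minimal Python sketch of the Wagner-Fischer dynamic program; the example strings are illustrative only):

```python
def edit_distance(a, b):
    """Wagner-Fischer dynamic program: cell (i, j) of the grid holds the
    minimum number of edits needed to turn the first i symbols of `a`
    into the first j symbols of `b`."""
    rows, cols = len(a) + 1, len(b) + 1
    grid = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        grid[i][0] = i                      # delete all i leading symbols
    for j in range(cols):
        grid[0][j] = j                      # insert all j leading symbols
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            grid[i][j] = min(grid[i - 1][j] + 1,         # deletion
                             grid[i][j - 1] + 1,         # insertion
                             grid[i - 1][j - 1] + cost)  # substitution or match
    return grid[-1][-1]

print(edit_distance("kitten", "sitting"))  # 3
```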

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, but Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.
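
What follows is only a brute-force illustration of that split-and-reconcile idea, not the construction in the STOC paper: enumerate the partial assignments for each half of the variables, record which constraints each one satisfies, and check whether some pair of halves covers everything. The pairwise reconciliation step at the end is what the proof relates to edit distance. The function and clause encoding below are invented for the sketch.

```python
from itertools import product

def split_sat(clauses, n):
    """Brute-force satisfiability check that splits the n variables into two
    halves, enumerates partial assignments for each half, and then reconciles
    the two lists. Clauses are tuples of signed variable indices, e.g.
    (1, -3) means (x1 OR NOT x3)."""
    half = n // 2
    left_vars = list(range(1, half + 1))
    right_vars = list(range(half + 1, n + 1))

    def satisfied(assignment, variables):
        """Indices of the clauses already satisfied by a partial assignment."""
        value = dict(zip(variables, assignment))
        return {i for i, clause in enumerate(clauses)
                if any(abs(lit) in value and value[abs(lit)] == (lit > 0)
                       for lit in clause)}

    left = [satisfied(a, left_vars)
            for a in product([False, True], repeat=len(left_vars))]
    right = [satisfied(a, right_vars)
             for a in product([False, True], repeat=len(right_vars))]

    # Reconciliation: the formula is satisfiable exactly when some pair of
    # half-assignments jointly covers every clause. Comparing all pairs is
    # the step the proof connects to computing an edit distance.
    every_clause = set(range(len(clauses)))
    return any(l | r == every_clause for l in left for r in right)

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3) -- satisfiable
print(split_sat([(1, 2), (-1, 3), (-2, -3)], 3))  # True
```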

Source: MIT News Office


A Celestial Butterfly Emerges from its Dusty Cocoon

SPHERE reveals earliest stage of planetary nebula formation


Some of the sharpest images ever made with ESO’s Very Large Telescope (VLT) have, for the first time, revealed what appears to be an ageing star giving birth to a butterfly-like planetary nebula. These observations of the red giant star L2 Puppis, made with the ZIMPOL mode of the newly installed SPHERE instrument, also clearly showed a close companion. The dying stages of stars continue to pose many riddles for astronomers, and the origin of such bipolar nebulae, with their complex and alluring hourglass figures, doubly so. This new imaging mode means that the VLT is currently the sharpest astronomical direct-imaging instrument in existence.

At about 200 light-years away, L2 Puppis is one of the closest red giants to Earth known to be entering its final stages of life. The new observations with the ZIMPOL mode of SPHERE were made in visible light using extreme adaptive optics, which corrects images to a much higher degree than standard adaptive optics, allowing faint objects and structures close to bright sources of light to be seen in greater detail. They are the first published results from this mode and the most detailed images yet obtained of such a star.

ZIMPOL can produce images that are three times sharper than those from the NASA/ESA Hubble Space Telescope, and the new observations show the dust that surrounds L2 Puppis in exquisite detail [1]. They confirm earlier findings, made using NACO, of the dust being arranged in a disc, which from Earth is seen almost completely edge-on, but provide a much more detailed view. The polarisation information from ZIMPOL also allowed the team to construct a three-dimensional model of the dust structures [2].

The astronomers found the dust disc to begin about 900 million kilometres from the star — slightly farther than the distance from the Sun to Jupiter — and discovered that it flares outwards, creating a symmetrical, funnel-like shape surrounding the star. The team also observed a second source of light about 300 million kilometres — twice the distance from Earth to the Sun — from L2 Puppis. This very close companion star is likely to be another red giant of slightly lower mass, but less evolved.

The combination of a large amount of dust surrounding a slowly dying star and the presence of a companion star means that this is exactly the type of system expected to create a bipolar planetary nebula. These three elements seem to be necessary, but a considerable amount of good fortune is also still required if they are to lead to the subsequent emergence of a celestial butterfly from this dusty chrysalis.

Lead author of the paper, Pierre Kervella, explains: “The origin of bipolar planetary nebulae is one of the great classic problems of modern astrophysics, especially the question of how, exactly, stars return their valuable payload of metals back into space — an important process, because it is this material that will be used to produce later generations of planetary systems.”

In addition to L2 Puppis’s flared disc, the team found two cones of material rising perpendicularly from the disc. Importantly, within these cones, they found two long, slowly curving plumes of material. From the origin points of these plumes, the team deduces that one is likely to be the product of the interaction between the material from L2 Puppis and the companion star’s wind and radiation pressure, while the other is likely to have arisen from a collision between the stellar winds from the two stars, or to be the result of an accretion disc around the companion star.

Although much is still to be understood, there are two leading theories of bipolar planetary nebulae, both relying on the existence of a binary star system [3]. The new observations suggest that both of these processes are in action around L2 Puppis, making it appear very probable that the pair of stars will, in time, give birth to a butterfly.

Pierre Kervella concludes: “With the companion star orbiting L2 Puppis only every few years, we expect to see how the companion star shapes the red giant’s disc. It will be possible to follow the evolution of the dust features around the star in real time — an extremely rare and exciting prospect.”

Notes
[1] SPHERE/ZIMPOL use extreme adaptive optics to create diffraction-limited images, which come a lot closer than previous adaptive optics instruments to achieving the theoretical limit of the telescope if there were no atmosphere. Extreme adaptive optics also allows much fainter objects to be seen very close to a bright star. These images are also taken in visible light — shorter wavelengths than the near-infrared regime, where most earlier adaptive optics imaging was performed. These two factors result in significantly sharper images than earlier VLT images. Even higher spatial resolution has been achieved with VLTI, but the interferometer does not create images directly.

[2] The dust in the disc was very efficient at scattering the star’s light towards Earth and polarising it, a feature that the team could use to create a three-dimensional map of the envelope using both ZIMPOL and NACO data and a disc model based on the RADMC-3D radiative transfer modelling tool, which uses a given set of parameters for the dust to simulate photons propagating through it.

[3] The first theory is that the dust produced by the primary, dying star’s stellar wind is confined to a ring-like orbit about the star by the stellar winds and radiation pressure produced by the companion star. Any further mass lost from the main star is then funneled, or collimated, by this disc, forcing the material to move outwards in two opposing columns perpendicular to the disc.

The second holds that most of the material being ejected by the dying star is accreted by its nearby companion, which begins to form an accretion disc and a pair of powerful jets. Any remaining material is pushed away by the dying star’s stellar winds, forming an encompassing cloud of gas and dust, as would normally occur in a single star system. The companion star’s newly created bipolar jets, moving with much greater force than the stellar winds of the dying star, then carve dual cavities through the surrounding dust, resulting in the characteristic appearance of a bipolar planetary nebula.

Source: ESO

The system Kepler-444 formed when the Milky Way galaxy was a youthful two billion years old. The planets were detected from the dimming that occurs when they transit the disc of their parent star, as shown in this artist's conception.

Image courtesy of NASA

Circular orbits identified for 74 small exoplanets

Observations of 74 Earth-sized planets around distant stars may narrow field of habitable candidates.

By Jennifer Chu


CAMBRIDGE, Mass. – Viewed from above, our solar system’s planetary orbits around the sun resemble rings around a bull’s-eye. Each planet, including Earth, keeps to a roughly circular path, always maintaining the same distance from the sun.

For decades, astronomers have wondered whether the solar system’s circular orbits might be a rarity in our universe. Now a new analysis suggests that such orbital regularity is instead the norm, at least for systems with planets as small as Earth.

In a paper published in the Astrophysical Journal, researchers from MIT and Aarhus University in Denmark report that 74 exoplanets, located hundreds of light-years away, orbit their respective stars in circular patterns, much like the planets of our solar system.

These 74 exoplanets, which orbit 28 stars, are about the size of Earth, and their circular trajectories stand in stark contrast to those of more massive exoplanets, some of which come extremely close to their stars before hurtling far out in highly eccentric, elongated orbits.

“Twenty years ago, we only knew about our solar system, and everything was circular and so everyone expected circular orbits everywhere,” says Vincent Van Eylen, a visiting graduate student in MIT’s Department of Physics. “Then we started finding giant exoplanets, and we found suddenly a whole range of eccentricities, so there was an open question about whether this would also hold for smaller planets. We find that for small planets, circular is probably the norm.”

Ultimately, Van Eylen says that’s good news in the search for life elsewhere. Among other requirements, for a planet to be habitable, it would have to be about the size of Earth — small and compact enough to be made of rock, not gas. If a small planet also maintained a circular orbit, it would be even more hospitable to life, as it would support a stable climate year-round. (In contrast, a planet with a more eccentric orbit might experience dramatic swings in climate as it orbited close in, then far out from its star.)

“If eccentric orbits are common for habitable planets, that would be quite a worry for life, because they would have such a large range of climate properties,” Van Eylen says. “But what we find is, probably we don’t have to worry too much because circular cases are fairly common.”

Star-crossed numbers

In the past, researchers have calculated the orbital eccentricities of large, “gas giant” exoplanets using radial velocity — a technique that measures a star’s movement. As a planet orbits a star, its gravitational force will tug on the star, causing it to move in a pattern that reflects the planet’s orbit. However, the technique is most successful for larger planets, as they exert enough gravitational pull to influence their stars.

Researchers commonly find smaller planets by using a transit-detecting method, in which they study the light given off by a star, in search of dips in starlight that signify when a planet crosses, or “transits,” in front of that star, momentarily diminishing its light. Ordinarily, this method only illuminates a planet’s existence, not its orbit. But Van Eylen and his colleague Simon Albrecht, of Aarhus University, devised a way to glean orbital information from stellar transit data.
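
In its simplest form, the dip-hunting idea can be sketched in a few lines of Python; the light curve and threshold below are made up for illustration, and real Kepler searches are far more sophisticated:

```python
import numpy as np

def transit_candidates(time, flux, depth=0.001):
    """Toy version of a transit search: flag the times at which a
    normalised light curve dips more than `depth` below its median."""
    baseline = np.median(flux)
    return time[flux < baseline * (1 - depth)]

# Fake light curve: a constant star with a 0.5 percent dip near t = 50.
t = np.linspace(0, 100, 1001)
f = np.ones_like(t)
f[(t > 49) & (t < 51)] -= 0.005
print(transit_candidates(t, f))  # the times inside the simulated dip
```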

They first reasoned that if they knew the mass and radius of a planet’s star, they could calculate how long the planet would take to orbit that star if its orbit were circular. The mass and radius of a star determine its gravitational pull, which in turn influences how fast a planet travels around the star.

By calculating a planet’s orbital velocity in a circular orbit, they could then estimate a transit’s duration — how long a planet would take to cross in front of a star. If the calculated transit matched an actual transit, the researchers reasoned that the planet’s orbit must be circular. If the transit were longer or shorter, the orbit must be more elongated, or eccentric.
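
That reasoning can be sketched as follows, assuming a circular orbit, a central transit, and a planet much smaller than its star; the stellar and orbital numbers below are illustrative, not values from the study:

```python
import math

# Physical constants and convenient units (SI).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
DAY = 86400.0        # seconds per day

def circular_transit_duration(m_star, r_star, period_days):
    """Expected transit duration, in hours, for a planet on a circular orbit,
    assuming a central transit and a planet much smaller than its star.
    Stellar mass and radius are given in solar units."""
    p = period_days * DAY
    # Kepler's third law fixes the orbital radius of a circular orbit.
    a = (G * m_star * M_SUN * p**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    # Fraction of the orbital period spent crossing the stellar disc.
    return (p / math.pi) * math.asin(r_star * R_SUN / a) / 3600.0

# Illustrative numbers only: a Sun-like star and a 10-day orbit give a
# transit lasting a few hours; an observed transit much shorter or longer
# than this prediction would point to an eccentric orbit.
print(round(circular_transit_duration(1.0, 1.0, 10.0), 1), "hours")
```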

Not so eccentric

To obtain actual transit data, the team looked through data collected over the past four years by NASA’s Kepler telescope — a space observatory that surveys a slice of the sky in search of habitable planets. The telescope has monitored the brightness of over 145,000 stars, only a fraction of which have been characterized in any detail.

The team chose to concentrate on 28 stars for which mass and radius have previously been measured, using asteroseismology — a technique that measures stellar pulsations, which reflect a star’s mass and radius.

These 28 stars host multiplanet systems — 74 exoplanets in all. The researchers obtained Kepler data for each exoplanet, looking not only for the occurrence of transits, but also their duration. Given the mass and radius of the host stars, the team calculated each planet’s transit duration if its orbit were circular, then compared the estimated transit durations with actual transit durations from Kepler data.

Across the board, Van Eylen and Albrecht found the calculated and actual transit durations matched, suggesting that all 74 exoplanets maintain circular, not eccentric, orbits.

“We found that most of them matched pretty closely, which means they’re pretty close to being circular,” Van Eylen says. “We are very certain that if very high eccentricities were common, we would’ve seen that, which we don’t.”

Van Eylen says the orbital results for these smaller planets may eventually help to explain why larger planets have more extreme orbits.

“We want to understand why some exoplanets have extremely eccentric orbits, while in other cases, such as the solar system, planets orbit mostly circularly,” Van Eylen says. “This is one of the first times we’ve reliably measured the eccentricities of small planets, and it’s exciting to see they are different from the giant planets, but similar to the solar system.”

This research was funded in part by the European Research Council.

 

Related links

ARCHIVE: New technique allows analysis of clouds around exoplanets
http://newsoffice.mit.edu/2015/clouds-around-exoplanets-0303

ARCHIVE: New technique measures mass of exoplanets
http://newsoffice.mit.edu/2013/new-technique-measures-mass-of-exoplanets-1219

ARCHIVE: Researchers discover that an exoplanet is Earth-like in mass and size
http://newsoffice.mit.edu/2013/kepler-78b-earth-like-in-mass-and-size-1030

 

Source: MIT News Office

Physicists solve quantum tunneling mystery

An international team of scientists studying ultrafast physics have solved a mystery of quantum mechanics, and found that quantum tunneling is an instantaneous process.

The new theory could lead to faster and smaller electronic components, for which quantum tunneling is a significant factor. It will also lead to a better understanding of diverse areas such as electron microscopy, nuclear fusion and DNA mutations.

“Timescales this short have never been explored before. It’s an entirely new world,” said one of the international team, Professor Anatoli Kheifets, from The Australian National University (ANU).

“We have modelled the most delicate processes of nature very accurately.”

At very small scales quantum physics shows that particles such as electrons have wave-like properties – their exact position is not well defined. This means they can occasionally sneak through apparently impenetrable barriers, a phenomenon called quantum tunneling.

Quantum tunneling plays a role in a number of phenomena, such as nuclear fusion in the sun, scanning tunneling microscopy, and flash memory for computers. However, the leakage of particles also limits the miniaturisation of electronic components.

Professor Kheifets and Dr. Igor Ivanov, from the ANU Research School of Physics and Engineering, are members of a team which studied ultrafast experiments at the attosecond scale (10^-18 seconds), a field that has developed in the last 15 years.

Until their work, a number of attosecond phenomena could not be adequately explained, such as the time delay when a photon ionised an atom.

“At that timescale the time an electron takes to quantum tunnel out of an atom was thought to be significant. But the mathematics says the time during tunneling is imaginary – a complex number – which we realised meant it must be an instantaneous process,” said Professor Kheifets.

“A very interesting paradox arises, because electron velocity during tunneling may become greater than the speed of light. However, this does not contradict the special theory of relativity, as the tunneling velocity is also imaginary,” said Dr Ivanov, who recently took up a position at the Center for Relativistic Laser Science in Korea.

The team’s calculations, which were made using the Raijin supercomputer, revealed that the delay in photoionisation originates not from quantum tunneling but from the electric field of the nucleus attracting the escaping electron.

The results give an accurate calibration for future attosecond-scale research, said Professor Kheifets.

“It’s a good reference point for future experiments, such as studying proteins unfolding, or speeding up electrons in microchips,” he said.

The research is published in Nature Physics.

Source: ANU

Experiment confirms quantum theory weirdness

The bizarre nature of reality as laid out by quantum theory has survived another test, with scientists performing a famous experiment and proving that reality does not exist until it is measured.

Physicists at The Australian National University (ANU) have conducted John Wheeler’s delayed-choice thought experiment, which involves a moving object that is given the choice to act like a particle or a wave. Wheeler’s experiment then asks – at which point does the object decide?

Common sense says the object is either wave-like or particle-like, independent of how we measure it. But quantum physics predicts that whether you observe wave-like behavior (interference) or particle-like behavior (no interference) depends only on how the object is actually measured at the end of its journey. This is exactly what the ANU team found.

“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said Associate Professor Andrew Truscott from the ANU Research School of Physics and Engineering.

Despite the apparent weirdness, the results confirm the validity of quantum theory, which governs the world of the very small, and has enabled the development of many technologies such as LEDs, lasers and computer chips.

The ANU team not only succeeded in building the experiment, which seemed nearly impossible when it was proposed in 1978, but reversed Wheeler’s original concept of light beams being bounced by mirrors, and instead used atoms scattered by laser light.

“Quantum physics’ predictions about interference seem odd enough when applied to light, which seems more like a wave, but to have done the experiment with atoms, which are complicated things that have mass and interact with electric fields and so on, adds to the weirdness,” said Roman Khakimov, PhD student at the Research School of Physics and Engineering.

Professor Truscott’s team first trapped a collection of helium atoms in a suspended state known as a Bose-Einstein condensate, and then ejected them until there was only a single atom left.

The single atom was then dropped through a pair of counter-propagating laser beams, which formed a grating pattern that acted as a crossroads in the same way a solid grating would scatter light.

A second light grating to recombine the paths was randomly added, which led to constructive or destructive interference as if the atom had travelled both paths. When the second light grating was not added, no interference was observed as if the atom chose only one path.

However, the random number determining whether the grating was added was only generated after the atom had passed through the crossroads.

If one chooses to believe that the atom really did take a particular path or paths then one has to accept that a future measurement is affecting the atom’s past, said Truscott.

“The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behavior was brought into existence,” he said.

The research is published in Nature Physics.

Source: ANU