Tag Archives: theory

Longstanding problem put to rest: Proof that a 40-year-old algorithm is the best possible will come as a relief to computer scientists.

By Larry Hardesty


CAMBRIDGE, Mass. – Comparing the genomes of different species — or different members of the same species — is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common — the “edit distance” between them — is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes — or texts, or speech samples, or anything else that can be represented as a string of symbols — can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that — without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off

Edit distance is the minimum number of edits — deletions, insertions, and substitutions — required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
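The grid-filling procedure is compact enough to state in code. Here is a minimal Python sketch of the Wagner-Fischer dynamic program described above; the function name and test strings are illustrative, not taken from the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer dynamic program: d[i][j] is the edit distance
    between the first i symbols of a and the first j symbols of b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # turn a prefix of a into the empty string: i deletions
    for j in range(n + 1):
        d[0][j] = j  # turn the empty string into a prefix of b: j insertions
    # Fill the grid; every cell depends only on its three upper-left neighbors.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                 # deletion
                          d[i][j - 1] + 1,                 # insertion
                          d[i - 1][j - 1] + substitution)  # substitute or match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # prints 3
```

Because all (m+1)(n+1) cells of the grid must be filled, the running time grows with the product of the two string lengths, which is exactly the quadratic behavior discussed next.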

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.
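As a quick check of the doubling claim, write the running time as T(N) = c·N² for some machine-dependent constant c:

```latex
% Doubling the input length quadruples a quadratic running time:
\frac{T(2N)}{T(N)} = \frac{c\,(2N)^2}{c\,N^2} = 4
```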

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
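Before the splitting trick, it helps to see the brute-force baseline: simply try all 2^n possible invitation lists. The Python sketch below does exactly that; the guest names and rules are illustrative stand-ins for the constraints in the text.

```python
from itertools import product

# Brute-force satisfiability check over all 2^n invitation lists.
# Guests and rules are illustrative stand-ins for the dinner-party story.
guests = ["Alice", "Bob", "Cindy", "Dave", "Ellen", "Fred", "George"]
rules = [
    lambda g: not (g["Alice"] and g["Bob"]),                 # not both Alice and Bob
    lambda g: not g["Ellen"] or (g["Fred"] != g["George"]),  # Ellen brings Fred xor George
]

def find_invitation():
    for bits in product([False, True], repeat=len(guests)):
        g = dict(zip(guests, bits))
        if all(rule(g) for rule in rules):
            return g  # a satisfying invitation list
    return None       # the constraints cannot all be met

print(find_invitation())
```

With n guests this loop runs 2^n times, which is exactly the exponential cost that the splitting idea below attacks.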

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, while Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.
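A minimal Python sketch of this splitting step, again with illustrative names: each half's assignments are enumerated independently, at a cost of 2^(n/2) per half rather than 2^n overall.

```python
from itertools import product

# Enumerate each half independently, keeping assignments that satisfy
# the constraints internal to that half. Names and rules are illustrative.
half1, half2 = ["Alice", "Bob", "Cindy"], ["Walt", "Yvonne", "Zack"]
rules1 = [lambda a: not (a["Alice"] and a["Bob"])]  # lives entirely in half 1
rules2 = [lambda a: a["Walt"] or a["Zack"]]         # lives entirely in half 2

def local_solutions(names, rules):
    solutions = []
    for bits in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, bits))
        if all(rule(assignment) for rule in rules):
            solutions.append(assignment)
    return solutions

# Cross-half constraints (such as Alice's restraining order on Zack) are
# deliberately not checked here; reconciling the two solution lists against
# them is the step that the paper recasts as an edit-distance computation.
print(len(local_solutions(half1, rules1)), len(local_solutions(half2, rules2)))
```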

At this point, the problem of reconciling the solutions for the two subgroups — factoring in constraints like Alice’s restraining order — becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.

Source: MIT News Office

Wrinkle predictions: New mathematical theory may explain patterns in fingerprints, raisins, and microlenses.

By Jennifer Chu


CAMBRIDGE, Mass. – As a grape slowly dries and shrivels, its surface creases, ultimately taking on the wrinkled form of a raisin. Similar patterns can be found on the surfaces of other dried materials, as well as in human fingerprints. While these patterns have long been observed in nature, and more recently in experiments, scientists have not been able to come up with a way to predict how such patterns arise in curved systems, such as microlenses.

Now a team of MIT mathematicians and engineers has developed a mathematical theory, confirmed through experiments, that predicts how wrinkles on curved surfaces take shape. From their calculations, they determined that one main parameter — curvature — rules the type of pattern that forms: The more curved a surface is, the more its surface patterns resemble a crystal-like lattice.

The researchers say the theory, reported this week in the journal Nature Materials, may help to generally explain how fingerprints and wrinkles form.

“If you look at skin, there’s a harder layer of tissue, and underneath is a softer layer, and you see these wrinkling patterns that make fingerprints,” says Jörn Dunkel, an assistant professor of mathematics at MIT. “Could you, in principle, predict these patterns? It’s a complicated system, but there seems to be something generic going on, because you see very similar patterns over a huge range of scales.”

The group sought to develop a general theory to describe how wrinkles on curved objects form — a goal that was initially inspired by observations made by Dunkel’s collaborator, Pedro Reis, the Gilbert W. Winslow Career Development Associate Professor in Civil Engineering.

In past experiments, Reis manufactured ping-pong-sized balls of polymer in order to investigate how their surface patterns may affect a sphere’s drag, or resistance to air. Reis observed a characteristic transition of surface patterns as air was slowly sucked out: As the sphere’s surface became compressed, it began to dimple, forming a pattern of regular hexagons before giving way to a more convoluted, labyrinthine configuration, similar to fingerprints.

“Existing theories could not explain why we were seeing these completely different patterns,” Reis says.

Denis Terwagne, a former postdoc in Reis’ group, mentioned this conundrum in a Department of Mathematics seminar attended by Dunkel and postdoc Norbert Stoop. The mathematicians took up the challenge, and soon contacted Reis to collaborate.

Ahead of the curve

Reis shared data from his past experiments, which Dunkel and Stoop used to formulate a generalized mathematical theory. According to Dunkel, there exists a mathematical framework for describing wrinkling, in the form of elasticity theory — a complex set of equations one could apply to Reis’ experiments to predict the resulting shapes in computer simulations. However, these equations are far too complicated to pinpoint exactly when certain patterns start to morph, let alone what causes such morphing.

Combining ideas from fluid mechanics with elasticity theory, Dunkel and Stoop derived a simplified equation that accurately predicts the wrinkling patterns found by Reis and his group.

“What type of stretching and bending is going on, and how the substrate underneath influences the pattern — all these different effects are combined in coefficients so you now have an analytically tractable equation that predicts how the patterns evolve, depending on the forces that act on that surface,” Dunkel explains.
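The article does not reproduce the group's equation itself. Purely for orientation, the classic Swift-Hohenberg model from pattern-formation textbooks exhibits the same competition between hexagonal and labyrinthine states, with a single symmetry-breaking coefficient doing the selecting; this is a generic sketch, not necessarily the equation derived in the Nature Materials paper.

```latex
% Generic Swift-Hohenberg model (textbook sketch, not necessarily the
% authors' equation): u is the surface displacement, k_c the preferred
% wrinkle wavenumber, and b a symmetry-breaking coefficient
% (b sufficiently large -> hexagons; b near zero -> labyrinthine stripes).
\partial_t u = \varepsilon\, u - \left(\nabla^2 + k_c^2\right)^2 u + b\, u^2 - u^3
```

If the group's simplified equation is of this general kind, curvature and shell thickness would enter through coefficients playing the role of b, consistent with the curvature-controlled transition described below.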

In computer simulations, the researchers confirmed that their equation was indeed able to correctly reproduce the surface patterns observed in experiments. This also allowed them to identify the main parameters that govern surface patterning.

As it turns out, curvature is one major determinant of whether a wrinkling surface becomes covered in hexagons or a more labyrinthine pattern: The more curved an object, the more regular its wrinkled surface. The thickness of an object’s shell also plays a role: If the outer layer is very thin compared to its curvature, an object’s surface will likely be convoluted, similar to a fingerprint. If the shell is a bit thicker, the surface will form a more hexagonal pattern.

Dunkel says the group’s theory, although based primarily on Reis’ work with spheres, may also apply to more complex objects. He and Stoop, together with postdoc Romain Lagrange, have used their equation to predict the morphing patterns in a donut-shaped object, which they have now challenged Reis to reproduce experimentally. If these predictions can be confirmed in future experiments, Reis says the new theory will serve as a design tool for scientists to engineer complex objects with morphable surfaces.

“This theory allows us to go and look at shapes other than spheres,” Reis says. “If you want to make a more complicated object wrinkle — say, a Pringle-shaped area with multiple curvatures — would the same equation still apply? Now we’re developing experiments to check their theory.”

This research was funded in part by the National Science Foundation, the Swiss National Science Foundation, and the MIT Solomon Buchsbaum Fund.

Source: MIT News Office

Image: Simulated view of a black hole (center) in front of the Large Magellanic Cloud; across the top, the Milky Way disk appears distorted into an arc. The ratio between the black hole's Schwarzschild radius and the observer's distance to it is 1:9. Note the gravitational lensing effect known as an Einstein ring, which produces two fairly bright but highly distorted images of the Cloud compared with its actual angular size. Image source: Wikimedia Commons, User:Alain r (own work), CC BY-SA 2.5 (http://creativecommons.org/licenses/by-sa/2.5).

Hawking Radiation Reportedly Observed in a Lab Experiment

A recently published paper in Nature Physics claims to have observed Hawking radiation, as predicted by one of the most respected theoretical physicists of all time, Stephen Hawking.

The paper's author, Dr. Jeff Steinhauer, a physicist at the Technion-Israel Institute of Technology in Haifa, mimicked a charged black hole by creating a narrow, low-density, very-low-temperature atomic Bose–Einstein condensate (BEC) containing an analogue black-hole horizon and an inner horizon.

This model black hole produced the kind of emission predicted by Stephen Hawking. The results may not fully confirm the existence of Hawking radiation, but they narrow down the possibilities and give a substantial boost to the search for the phenomenon. The finding could also play an important role in research at the intersection of quantum field theory and general relativity.

It will be interesting to see how the scientific community, especially other experimental physicists and observational astronomers, receives the findings.

The full abstract and paper are available at:

http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys3104.html

Image: Laser system.

Physical constant is constant even in strong gravitational fields

An international team of physicists has shown that the mass ratio between protons and electrons is the same in weak and in very strong gravitational fields. Their study, which was partly funded by the FOM Foundation, was published online on 18 September 2014 in Physical Review Letters.


The idea that the laws of physics and their fundamental constants do not depend on local circumstances is called the equivalence principle. This principle is a cornerstone of Einstein’s theory of general relativity. To put the principle to the test, FOM physicists working at the LaserLaB at VU University Amsterdam determined whether one fundamental constant, the mass ratio between protons and electrons, depends on the strength of the gravitational field that the particles are in.

Laboratories on earth and in space 
The researchers compared the proton-electron mass ratio near the surface of a white dwarf star to the mass ratio in a laboratory on Earth. White dwarf stars, which are in a late stage of their life cycle, have collapsed to less than one percent of their original size. The gravitational field at the surface of these stars is therefore much stronger than that on Earth, by a factor of 10,000. The physicists concluded that even under these strong gravitational conditions, the proton-electron mass ratio is the same within a margin of 0.005 percent. In both cases, the proton mass is 1836.152672 times as big as the electron mass.
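In symbols, the quoted numbers amount to the following bound on the proton-to-electron mass ratio (0.005 percent written as a fraction):

```latex
% Bound implied by the numbers quoted above:
\mu \equiv \frac{m_p}{m_e} = 1836.152672,
\qquad
\left|\frac{\Delta\mu}{\mu}\right| < 5 \times 10^{-5}
\quad \text{at a surface gravity} \sim 10^{4} \text{ times Earth's.}
```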

Absorption spectra 
To reach their conclusion, the Dutch physicists collaborated with astronomers of the University of Leicester, the University of Cambridge and the Swinburne University of Technology in Melbourne. The team analysed absorption spectra of hydrogen molecules in white dwarf photospheres (the outer shell of a star from which light is radiated). The spectra were then compared to spectra obtained with a laser at LaserLaB in Amsterdam. 

Absorption spectra reveal which radiation frequencies are absorbed by a particle. A small deviation of the proton-electron mass ratio would affect the structure of the molecule, and therefore the absorption spectrum as well. However, the comparison revealed that the spectra were very similar, which proves that the value of the proton-electron mass ratio is indeed independent of the strength of the gravitational field. 

Rock-solid 
FOM PhD student Julija Bagdonaite: “Previously, we confirmed the constancy of this fundamental constant on a cosmological time scale with the Very Large Telescope in Chile. Now we searched for a dependence on strong gravitational fields using the Hubble Space Telescope. Gradually we find that the fundamental constants seem to be rock-solid and eternal.”

Contact information: Prof. dr. Wim Ubachs, LaserLaB, VU University Amsterdam, +31 (0)20 598 79 48

Images: The astronomical spectra were recorded with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope. For a picture of the COS, please visit the NASA website.

Reference: Limits on a Gravitational-Field Dependence of the Proton-to-Electron Mass Ratio from H2 in White Dwarf Stars, Physical Review Letters, 18 September 2014.
Paper on ArXiv.  

Source: FOM