A few weeks ago I was listening to one of my favorite radio shows, BBC Radio 4's In Our Time. It's about as adult-contemporary as a podcast gets: a roundtable of British academics talking about one subject every week, everything from the Lancashire Cotton Famine to Beowulf to Dark Matter.
Last month they covered a famous math question, the P vs. NP problem. It's an arcane problem, though famous enough that it turns up as a Simpsons joke. And its implications are genuinely immense.
What is P vs. NP? While it's one of the hardest unsolved problems in math, it's conceptually very simple. Start with Wikipedia's definition of the problem: "Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer."
Why is that a big deal? Crudely speaking, computer security as we know it rests on problems whose solutions computers can verify quickly but can't find quickly. Or, in short, NP (easily verified) does not equal P (easily solved).
But if P does equal NP, computer security goes boink (in very broad terms). It's such a big deal in math that anyone who solves the P vs. NP problem gets one million dollars.
Not exactly light listening for the commute, but I figured it was as far away from thinking about my job as possible. But during the middle of the episode, one of the guests broke some news: A big breakthrough in the P vs. NP problem was rumored to have been made, and was about to be presented at the University of Chicago.
Fast forward a couple weeks, and I get a message from a colleague—her friend had tweeted a math lecture, and it had gone viral: a "livetweet of Babai's first Graph Isomorphism talk." Would I be interested? The tweets were incomprehensible to me, but recognizable as the big P vs. NP breakthrough I'd randomly heard about on In Our Time.
My math education basically stopped somewhere within calculus. I literally failed the math entrance exam at the university where László Babai—the computer scientist and mathematician behind the breakthrough—teaches. But his discovery is a really, really big deal:
“My first thought was that this was a joke. I checked the month to make sure it wasn’t April,” says Ryan Williams, a Stanford University computer scientist. “It’s a huge jump in our understanding of the problem.” He says the discovery is potentially the most important theoretical computer science advance in more than a decade.
Caveat: theoretical computer science. It might not change anything for us mortals. But neither do a lot of amazing things that people do, like freeclimbing a mountain no one thought possible. So I thought I'd at least figure out what the big deal was, with some help from the guy who live-tweeted it, Gabriel Gaster, a data scientist at Datascope (I'd previously talked to him about his map of Divvy traffic patterns).
Babai's breakthrough is extraordinarily complex. But the problem is not. "There are plenty of questions that would take you a year of study to understand just what the question is," Gaster says. "This is not one of those."
Basically: how long does it take a computer to tell whether two graphs are the same ("isomorphic," in the jargon)?
This is a graph in a mathematical sense: points joined by edges. "Network" is a better way of thinking about it. A street map is a graph. Facebook is a graph. Our air traffic system is a graph.
Or even more picayune: seating people at a wedding party. You might have done this. You group some people (points) because they went to college together (edges). You group other people (points) because they're related (edges).
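To make that concrete, here's a minimal sketch in Python (my own toy example, not anything from Babai's work) of two small graphs written down differently. Relabel the points and they turn out to be the exact same network; deciding that, in general, is the problem.

```python
# Two tiny graphs, each written as a set of points (nodes) and edges.
# The labels are arbitrary; all that matters is who is connected to whom.

# Graph 1: a triangle with one extra point hanging off of it
g1_nodes = {"a", "b", "c", "d"}
g1_edges = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}

# Graph 2: the same shape, written with different labels
g2_nodes = {1, 2, 3, 4}
g2_edges = {(1, 2), (2, 3), (3, 1), (1, 4)}

# "Are these the same graph?" asks: is there a relabeling of graph 1's
# points that turns its edge set into graph 2's edge set exactly?
```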
But graphs get really complex really fast. If you looked at every possible pairing of people who might sit next to each other, to determine the optimum seating? With 100 people, there are more possibilities than there are hydrogen atoms in the universe. The amount of time required to solve the problem increases exponentially. So you have to figure out a smarter way of sorting—a smarter algorithm. On the other end of the spectrum from exponential is polynomial time; you add numbers in polynomial time. And that's what you want from an algorithm.
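The brute-force way of answering the two-graphs question shows the blowup directly: try every possible relabeling and check whether one of them lines the edges up. A rough sketch of that idea (again my own toy code, fine for four points, hopeless for a hundred):

```python
from itertools import permutations

def same_graph_brute_force(nodes1, edges1, nodes2, edges2):
    """Decide whether two graphs are the same by trying every relabeling.

    With n points there are n! relabelings: about 3.6 million at n = 10,
    and far more than the atoms in the universe at n = 100.
    """
    if len(nodes1) != len(nodes2):
        return False
    # Treat every label as a string and every edge as an unordered pair.
    names1, names2 = sorted(map(str, nodes1)), sorted(map(str, nodes2))
    norm = lambda edges: {frozenset(map(str, e)) for e in edges}
    e1, e2 = norm(edges1), norm(edges2)
    for perm in permutations(names2):           # every possible relabeling
        mapping = dict(zip(names1, perm))
        relabeled = {frozenset(mapping[v] for v in edge) for edge in e1}
        if relabeled == e2:                     # the connections line up
            return True
    return False

# The two "triangle with a tail" graphs from the sketch above:
print(same_graph_brute_force({"a", "b", "c", "d"},
                             {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")},
                             {1, 2, 3, 4},
                             {(1, 2), (2, 3), (3, 1), (1, 4)}))  # True
```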
Obviously, we have lots of smart algorithms that deal with street maps and social media in polynomial time. "There are practical algorithms that work really well for most problems," Gaster says. "But they're still an exponential amount of work in the worst case."
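One widely used example of those practical algorithms (my pick for illustration, not Gaster's): the Python library networkx ships an is_isomorphic() check based on the VF2 algorithm, which is quick on most graphs you'd actually meet even though its worst case is still exponential.

```python
import networkx as nx

# The same two "triangle with a tail" graphs, built from their edge lists.
g1 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
g2 = nx.Graph([(1, 2), (2, 3), (3, 1), (1, 4)])

# Fast in practice on graphs like street maps or social networks,
# but still exponential on carefully constructed worst cases.
print(nx.is_isomorphic(g1, g2))  # True
```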
And that's what Babai has tackled: the worst case. He has proven that any two networks, no matter how complex, can be compared in "quasi-polynomial time"—not quite polynomial time, but not bad.
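"Quasi-polynomial" sits between the two extremes: roughly n raised to a power of log n, rather than n raised to a fixed power (polynomial) or something raised to the power n (exponential). A back-of-the-envelope comparison (my own illustrative numbers, not figures from Babai's paper):

```python
from math import log2

# Illustrative growth rates only: n^3 (polynomial), n^(log n) (one simple
# flavor of quasi-polynomial), and 2^n (exponential).
for n in (10, 100, 1000):
    print(f"n={n:>4}   n^3={n**3:.1e}   "
          f"n^log(n)={n ** log2(n):.1e}   2^n={2.0 ** n:.1e}")
```

Polynomial stays tiny, exponential explodes, and quasi-polynomial grows uncomfortably but survivably in between.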
Here's a good thumbnail sketch of what an achievement it is, and how it could be an even bigger one than we think right now:
If true, then his algorithm could render analysis of any network feasible in a reasonable amount of time.
[snip]
There is some doubt as to whether the algorithm will be practical. Some think that the algorithms currently in use are good enough. I remain hopeful that Babai’s algorithm, if functional, will prove useful. I imagine that as little as 40 years ago, people would scoff at the idea of ever needing to analyze networks with nodes among the millions. Now, of course, we know different. Perhaps in the near future, we will find a use for Babai’s work, if there is not one already.
What sort of use? Maybe this:
While the problem may seem abstract, it’s a prominent example of a strange class of puzzles that computers have trouble solving despite being able to quickly verify a solution if one is provided. The result could also reverberate beyond computer science, such as allowing chemists to determine whether complex molecules have the same bonding structure.
Not everyone agrees that Babai's algorithm will be practical to use, or that we have any practical reason to do the kind of work it makes possible. Even Babai says it's impractical. Think of it this way: Say Babai announced "I have a miraculous new mountain-climbing technique, allowing you to climb any mountain in the universe, but it's way more expensive and takes much longer than current techniques." Mountain climbers would pass on it, since the current techniques work better for all the mountains on earth. But they'd surely be impressed by the technical achievement, and maybe there'd be something in it to apply to the here and now, once they start messing with it. Or maybe we'll have a reason to go mountain climbing on Mars someday; humans are funny that way.
So just start here: Something we couldn't conceive to be possible is now conceivably possible. And in its field it's just a huge achievement, one that Babai has been building towards for decades. "In math, it's really wonderful that there are very basic problems that still exist that nobody knows the answer to," Gaster says. "And it's amazing when they get solved."
More amazing still is how it relates to P vs. NP—why mathematicians across the Atlantic got so giddy about Babai's work on In Our Time. Recall that if P = NP, things get crazy. Now: Within the set of problems called "NP," there are problems called "NP-complete." If you can solve one NP-complete problem quickly, you can solve all NP problems quickly. And if you can solve all NP problems quickly, then P = NP. Which equals crazy.
Computers would acquire mind-boggling powers such as near-perfect translation, speech recognition and object identification; the hardest questions in mathematics would melt like butter under computation’s power; and current computer security methods would be as easy to crack as a TSA-approved suitcase lock.
The problem Babai has solved, Gaster says, "could turn out to be NP complete, maybe, and if that were to happen, the hierarchy of complexity would collapse."
Babai's proof doesn't get us there, nor does Gaster or anyone else think it's a likely scenario. (And it's worth noting that it hasn't been peer reviewed, but the consensus is that his track record all but guarantees the proof works.) But it might get us closer, or give us better tools for tackling P vs. NP.
Maybe Babai's proof is simply a great achievement in the abstract realm of math and theoretical computer science. But even such an abstract achievement could be a solid foundation. For example, the problem of factoring is closely related to the computer-security-goes-boink scenario mentioned above, and it's considered to be just as difficult as the problem Babai answered. His breakthrough makes a lot of other seemingly impossible things seem possible now.
That's not mathematical logic, but it's where mathematical discoveries begin.