Theoretical computer scientists measure algorithms' running times not in seconds or hours, but in the number of operations required, relative to the number of elements being manipulated. With cutting-plane methods, the number of elements is the number of variables in the cost function—the weight of the car, the cost of its materials, drag, legroom, and so on. That's also the dimension of the hyper-sphere.

With the best general-purpose cutting-plane method, the time required to select each new point to test was proportional to the number of elements raised to the power 3.373. Lee, Sidford, and Wong get that down to 3.
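To get a rough feel for what shaving the exponent from 3.373 to 3 buys, here is a back-of-the-envelope comparison. This is illustrative arithmetic only, treating the stated exponents as the whole story and ignoring constant factors:

```python
# Illustrative arithmetic only: compare the growth of the old n^3.373
# per-step bound against the new n^3 bound for a general-purpose
# cutting-plane method. Constant factors are ignored.

def step_cost_ratio(n: int) -> float:
    """Ratio of the old bound (n^3.373) to the new bound (n^3)."""
    return n ** 3.373 / n ** 3  # algebraically, this is n^0.373

# For a problem with 1,000 variables, the old bound is roughly
# 13x larger than the new one, and the gap widens as n grows.
print(round(step_cost_ratio(1_000), 1))
```

Because the gap is itself a power of n, the savings compound: at a million variables the same ratio is in the hundreds.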

But they also describe a new way to adapt cutting-plane methods to particular types of optimization problems, with names like submodular minimization, submodular flow, matroid intersection, and semidefinite programming. And in many of those cases, they report dramatic improvements in efficiency, from running times that scale with the fifth or sixth power of the number of variables (n⁵ or n⁶, in computer science parlance) down to the second or third power (n² or n³).
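To make one of those problem names concrete, here is a minimal sketch of what submodular minimization asks, using a hypothetical toy example. A set function f is submodular if f(A) + f(B) ≥ f(A ∪ B) ≥ f(A ∩ B) holds in the combined form f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B); the goal is the subset minimizing f. The brute-force search below is not the paper's method; it only states the problem, and its exponential cost is exactly what cutting-plane methods avoid:

```python
from itertools import combinations

def brute_force_min(f, ground_set):
    """Exhaustively minimize a set function f over all 2^n subsets.
    Exponential time; shown only to define the problem, not to solve
    it efficiently."""
    best_set, best_val = frozenset(), f(frozenset())
    for r in range(1, len(ground_set) + 1):
        for subset in combinations(ground_set, r):
            s = frozenset(subset)
            if f(s) < best_val:
                best_set, best_val = s, f(s)
    return best_set, best_val

# Hypothetical example: the cut function of a graph (number of edges
# crossing a subset) is a classic submodular function.
edges = [(1, 2), (2, 3), (3, 4), (1, 4)]

def cut_value(s):
    return sum(1 for u, v in edges if (u in s) != (v in s))

print(brute_force_min(cut_value, [1, 2, 3, 4]))
```

On this 4-node cycle the minimum cut value is 0 (cutting nothing), but with n elements the loop visits 2ⁿ subsets, which is why polynomial-time algorithms for submodular minimization are a significant result.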

"This is indeed an astonishing paper, " says Satoru Iwata, a professor of mathematical informative at the University of Tokyo, who has published widely on the problem of sub modular minimization. "For this problem, " he says, "the running time bounds derived with the aid of discrete geometry and combination techniques are by far better than what I could imagine."

Source: Phys.org