However, classical complexity theory fails to adequately address why there is such a large difference in complexity even among problems with an expected superpolynomial running time. For example, why do there exist small instances, containing only a handful of variables, that bring even the most sophisticated solvers to a grinding halt, when the same solvers manage to handle real-world instances with millions of variables?
To answer such questions, the standard algebraic approach is not applicable, since the presence of a non-trivial polymorphism would result in a polynomial-time solvable problem. One alternative is to instead look at weaker symmetries than polymorphisms, so-called partial polymorphisms. Intuitively, while a partial polymorphism does not give a complete recipe for how solutions can be combined to solve a problem efficiently, even a partial description might be sufficient to avoid the need to try all possible combinations. With the help of this approach I have, for example, studied the borderline between tractability and intractability, concentrating on hard problems suspected to be very close to this invisible barrier, which led to the identification of the easiest NP-complete satisfiability problem. More recently, this approach has also been applied to study kernelization properties of constraint satisfaction problems, and we have shown that the existence of certain types of partial polymorphisms is indeed sufficient to obtain significantly faster, although still exponential, algorithms. However, this algebraic approach is still in its infancy, and many questions remain unresolved.
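To make the idea of combining solutions concrete, here is a minimal sketch of my own (an illustration, not taken from any particular paper): the Boolean majority operation is a polymorphism of 2-SAT, meaning that applying it coordinate-wise to three satisfying assignments always yields another satisfying assignment. A partial polymorphism works the same way, except that it is only defined on some of the possible input combinations. The helper names and the example formula below are hypothetical.

```python
# A minimal illustration: majority is a polymorphism of 2-SAT, so combining
# three solutions coordinate-wise produces a new solution "for free".

def maj(a, b, c):
    # Boolean majority: returns the value held by at least two of the inputs.
    return (a and b) or (a and c) or (b and c)

def satisfies(clauses, assignment):
    # A clause is a tuple of literals: positive i means x_i, negative -i means not x_i.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Example 2-SAT formula: (x1 or x2) and (not x1 or x3).
clauses = [(1, 2), (-1, 3)]

s1 = {1: True,  2: False, 3: True}
s2 = {1: False, 2: True,  3: False}
s3 = {1: True,  2: True,  3: True}
assert all(satisfies(clauses, s) for s in (s1, s2, s3))

# Apply majority coordinate-wise; the result is again a satisfying assignment.
combined = {v: maj(s1[v], s2[v], s3[v]) for v in s1}
assert satisfies(clauses, combined)
```

This is exactly the structure that makes 2-SAT tractable; for NP-hard problems no such total operation exists, which is why one instead studies operations defined on only part of the inputs.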
Explain like I’m five summary
There are good and bad methods to solve problems. Some problems appear to be really hard to solve, and only bad methods are known. With the help of algebra I try to make bad methods slightly better. Essentially, I'm trying to draw a picture which explains how the hard problems are related to each other.

See my external web page for more information on my research.