However, classical complexity theory fails to adequately address why there are such large differences in complexity even among problems with an expected superpolynomial running time. For example, why do there exist small instances, containing only a handful of variables, that make even the most sophisticated solvers grind to a halt, while the same solvers manage to handle real-world instances with millions of variables?
To answer such questions the standard algebraic approach is not applicable, since the presence of a non-trivial polymorphism would imply a polynomial-time solvable problem. One alternative is instead to consider weaker symmetries than polymorphisms, so-called partial polymorphisms (a precise definition is sketched below). Intuitively, while a partial polymorphism does not give a complete recipe for how solutions can be combined to solve a problem efficiently, even a partial description might be sufficient to avoid trying all possible combinations. With the help of this approach I have, for example, studied the borderline between tractability and intractability, concentrating on hard problems suspected to lie very close to this invisible barrier, which led to the identification of the easiest NP-complete satisfiability problem. More recently, this approach has also been applied to study kernelization properties of constraint satisfaction problems, and we have shown that the existence of certain types of partial polymorphisms is indeed sufficient to obtain significantly faster, although still exponential, algorithms. However, this algebraic approach is still in its infancy, and many questions remain unresolved.
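To make the central notion precise, the following is a minimal sketch of the standard definition, with notation chosen here for concreteness rather than taken from any particular reference. An operation $f \colon D^k \to D$ is a polymorphism of a relation $R \subseteq D^n$ if it preserves $R$, i.e., for all tuples $t_1, \ldots, t_k \in R$,
\[
  f(t_1, \ldots, t_k) \;=\; \bigl( f(t_1[1], \ldots, t_k[1]), \;\ldots,\; f(t_1[n], \ldots, t_k[n]) \bigr) \in R,
\]
where $t_i[j]$ denotes the $j$th coordinate of $t_i$. A partial polymorphism is defined in the same way, except that $f$ may be a partial operation, defined only on a subset of $D^k$, and the resulting tuple is required to lie in $R$ only when all of its coordinates are defined. In particular, every restriction of a polymorphism is a partial polymorphism, but a relation can admit non-trivial partial polymorphisms even when all of its total polymorphisms are trivial, which is precisely what makes them a useful tool for studying NP-hard problems.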