Tags: algorithm, computer-science, complexity-theory, np-complete, np-hard

If Y is reducible to X in polynomial time, then how is it true that X is at least as hard as Y?


I am having difficulty understanding the relationship between the complexity of two classes of problems, say NP-hard and NP-complete problems.

The answer at https://stackoverflow.com/a/1857342/ states:

NP Hard

Intuitively, these are the problems that are at least as hard as the NP-complete problems. Note that NP-hard problems do not have to be in NP, and they do not have to be decision problems.

The precise definition here is that a problem X is NP-hard, if there is an NP-complete problem Y, such that Y is reducible to X in polynomial time.

If a problem Y can be reduced to X in polynomial time, should we not say that Y is at least as hard as X? If a problem Y is reducible to X in polynomial time, then the time required to solve Y is polynomial time + the time required to solve X. So it appears to me that problem Y is at least as hard as X.

But the quoted text above says just the opposite. It says, if an NP-complete problem Y is reducible to an NP-hard problem X, then the NP-hard problem is at least as hard as the NP-complete problem.

How does this make sense? Where am I making an error in thinking?


Solution

  • Your error is in supposing that you have to solve X in order to solve Y. Y might actually be much easier, but one way to solve it is to transform it into an instance of X. And since we are working in big-O notation, and NP-class problems are well past linear algorithms, you can always safely discard any linear parts of an algorithm (and, unless P = NP is resolved, effectively any polynomial parts too). That is, O(f(n) + n) = O(f(n)) whenever n = O(f(n)).

    Example (which obviously involves neither NP-hard nor NP-complete problems, but is a mere illustration): you are to find the lowest number in an unsorted array of n numbers. There is an obvious solution: iterate over the whole list and remember the lowest number found so far — straightforward and a solid O(n).

    Someone else comes along and says: OK, let's change it to sorting the array; then we can just take the first element, and it will be the lowest. Note that this conversion of the problem was O(1), but we could, for example, pretend some preprocessing of the array was required that would make it O(n). The overall solution is O(n + n·log(n)) = O(n·log(n)).

    Here, too, you converted an easy problem into a harder one, thereby proving that the harder problem is at least as hard as the easy one.
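    The reduction above can be sketched in Python (the function names are mine, purely for illustration):

    ```python
    def find_min_direct(nums):
        """The obvious O(n) solution: one pass, tracking the smallest seen."""
        lowest = nums[0]
        for x in nums[1:]:
            if x < lowest:
                lowest = x
        return lowest

    def find_min_by_sorting(nums):
        """The reduction: turn 'find the minimum' into an instance of 'sort'.
        The conversion itself is cheap; the O(n log n) sort dominates the cost."""
        return sorted(nums)[0]
    ```

    Both return the same answer; the second one merely shows that an O(n log n) sorting routine is *sufficient* to find the minimum — it says nothing about the minimum problem being that hard.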

    Basically, what the definition of NP-hard means is that X is at least as hard as an NP-complete problem Y. If you find an NP-complete problem Y that you can solve by solving X, it means either that X is as hard as or harder than Y — and then X is indeed NP-hard — or, if X turns out to be easier, that you have found an algorithm solving Y faster than any known before, potentially even moving it out of the NP-complete class.

    Another example: let's pretend convolution is in my set of "complete" problems and normally takes O(n²). Then you come up with the Fast Fourier Transform, which runs in O(n·log(n)), and you find out you can solve convolution by transforming it into an FFT problem. Now you have a solution for convolution that is o(n²) — more specifically, O(n·log(n)).
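    This reduction is real and can be sketched in plain Python — a minimal radix-2 FFT (not production code) next to the naive O(n²) convolution it replaces:

    ```python
    import cmath

    def fft(a, invert=False):
        """Recursive radix-2 FFT, O(n log n); len(a) must be a power of two."""
        n = len(a)
        if n == 1:
            return a[:]
        sign = 1 if invert else -1
        even = fft(a[0::2], invert)
        odd = fft(a[1::2], invert)
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(sign * 2j * cmath.pi * k / n)
            out[k] = even[k] + w * odd[k]
            out[k + n // 2] = even[k] - w * odd[k]
        return out

    def convolve_naive(a, b):
        """Direct O(n^2) convolution: every term of a meets every term of b."""
        out = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                out[i + j] += x * y
        return out

    def convolve_fft(a, b):
        """Convolution reduced to FFT: transform both inputs, multiply
        pointwise, transform back — O(n log n) overall."""
        size = len(a) + len(b) - 1
        n = 1
        while n < size:
            n *= 2
        fa = fft([complex(x) for x in a] + [0j] * (n - len(a)))
        fb = fft([complex(x) for x in b] + [0j] * (n - len(b)))
        prod = [x * y for x, y in zip(fa, fb)]
        res = fft(prod, invert=True)
        # Inverse FFT needs a final division by n; round away float noise.
        return [round((x / n).real) for x in res[:size]]
    ```

    The transformation of the problem (padding and converting to complex) is linear, so the FFT's O(n·log(n)) dominates — exactly the pattern of discarding the cheap reduction step described above.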