# Do hard problems really exist?

My conclusion is that the general question of whether P equals NP can never be solved. Since we like axioms so much (personally, I don't), it can be defined as an axiom to be either true or false. It depends on what we prefer. If we prefer to be able to solve any hard problem quickly, we can take "P equals NP" as an axiom (and posit a corresponding Turing machine). On the other hand, if we prefer to keep using our encryption algorithms without the risk of other people being able to reveal all our secrets, we can take "P is not equal to NP" as an axiom (and thereby rule out a Turing machine that would reveal all our secrets). Neither choice leads to a contradiction.

It seems that deciding whether a given decision problem is hard is itself a hard problem. We should remember that even if a general decision problem is proved to be hard, it may still be possible to solve a less general version of the problem quickly. So the question of whether a general problem is hard is not that important. What is important to know is whether the problem can be solved in practice.

For example, let's define a decision problem which I assume to be hard. I will define a sequence a(n) recursively: a(0) = 0; and a(n+1) = 2^a(n). Therefore,

a(0)= 0;
a(1)= 1;
a(2)= 2;
a(3)= 4; // seems reasonable so far.
a(4)= 16;
a(5)= 65,536; // Things are starting to get complicated.
a(6)= 2^65,536; // Very complicated.

(and so on).
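The recursion is easy to state in code. A minimal sketch in Python (my own illustration; Python's big integers can still hold a(6) = 2^65,536 exactly, but anything beyond that is hopeless):

```python
def a(n):
    """The tower-of-twos sequence: a(0) = 0, a(n+1) = 2**a(n)."""
    value = 0
    for _ in range(n):
        value = 2 ** value
    return value

# The first few terms match the list above.
print([a(n) for n in range(6)])  # [0, 1, 2, 4, 16, 65536]
# a(6) is a 65,537-bit number; a(7) could never be stored, let alone computed.
```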

Now, let's take a known irrational number, such as the square root of 2 (any computable irrational number would do). We know an algorithm that can produce its binary digits, so we can define d(n) as its nth binary digit. Now let's define the sequence s: s(n) = d(a(n)). We get a sequence of bits, 0 or 1 (which can also be seen as a function, a decision problem, etc.), which is computable. But is it computable in reasonable time?
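One way to make d and s concrete (the digit-producing algorithm here is my assumption, since the original isn't shown) is exact integer square roots: the nth binary digit of the square root of 2 is isqrt(2 * 4^n) mod 2.

```python
from math import isqrt

def d(n):
    """nth binary digit of sqrt(2): d(0) is the leading 1, then the fraction bits."""
    return isqrt(2 * 4 ** n) % 2

def a(n):
    """a(0) = 0, a(n+1) = 2**a(n)."""
    value = 0
    for _ in range(n):
        value = 2 ** value
    return value

def s(n):
    """s(n) = d(a(n)): binary digit number a(n) of sqrt(2)."""
    return d(a(n))

# sqrt(2) = 1.0110101000001..._2
print([d(n) for n in range(8)])   # [1, 0, 1, 1, 0, 1, 0, 1]
print([s(n) for n in range(5)])   # [1, 0, 1, 0, 1]
# s(5) already asks for digit number 65,536 of sqrt(2).
```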

Of course not. At least not in the way we implemented it. Calculating the nth bit of the square root of 2 takes something in the order of O(n*log2(n)) with the algorithm I described; but s(n) is bit number a(n) of the square root of 2, so computing it this way takes time that grows with a(n), which explodes. Is there an algorithm that can do it more efficiently? Maybe there is, but we don't know of one (it might be possible to prove whether such an algorithm exists). However, for any given n, we can produce an algorithm that returns the first n bits of s in the order of O(n) time. All we have to do is use a Turing machine to calculate the first n bits of s, and then produce an algorithm that outputs those bits as a hard-coded sequence. Even though the Turing machine might take a huge amount of time to produce this algorithm, the algorithm itself is in the order of O(n) in both memory space and time. Such an algorithm exists, and it is computable.
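The precompute-then-hard-code trick can be sketched like this (a toy illustration; `compile_prefix` is my name for the expensive offline step):

```python
from math import isqrt

def s(n):
    """s(n) = binary digit number a(n) of sqrt(2), where a(0)=0, a(n+1)=2**a(n)."""
    a = 0
    for _ in range(n):
        a = 2 ** a
    return isqrt(2 * 4 ** a) % 2

def compile_prefix(n):
    """The offline, possibly enormous, precomputation: tabulate s(0)..s(n-1)."""
    table = [s(k) for k in range(n)]
    # The "compiled" algorithm: O(n) space, and listing all n bits takes O(n) time.
    def fast_s(k):
        return table[k]
    return fast_s

fast = compile_prefix(5)            # the compilation step does all the hard work
print([fast(k) for k in range(5)])  # same bits as [s(k) for k in range(5)]
```

The compiled lookup is cheap by construction; all the hardness has been pushed into `compile_prefix`, which for large n would never finish.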

To generalize a little: if we have a decision problem known to be hard, and its function is computable, then for each n there is a computable algorithm that produces the first n results of the function in O(n) time. Any hard problem can be represented in a way that is not hard. Of course, producing that easy representation of a hard problem is itself a hard problem. But it can be done. And it is computable.

But it is computable only for a given, finite n. It might not be feasible in the general case. A given decision problem might be really hard in the general case: even if it is computable, we might not be able to compute it within reasonable time, or memory space (or both). We might use other methods, such as nondeterministic algorithms, and we might succeed sometimes (and sometimes not). But in the general case we are really doomed. Any hard problem can be made harder; not every hard problem can be made easier.

I will rephrase my last sentence in terms of Turing machines. Suppose there were a Turing machine p which takes any given decision function f (a Turing machine) as input and returns another algorithm that computes the same function in a shorter time, if such an algorithm exists; otherwise, it returns f itself. Suppose we limit ourselves only to algorithms that halt. And suppose f and g are two decision functions that are identical (they always return the same result), where g is known to be more efficient than f. Then we would be able to create an algorithm a that does the following:

1. Calculate p(a).
2. If p(a) is equal to a, run f on the input and return its result.
3. Otherwise, run p(a) on the input and return the complement of its result.
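These three steps can be imitated in Python, where a closure provides the self-reference in step 1. Here p stands for the hypothetical optimizer, and program equality is approximated by object identity (both are my assumptions for the sketch):

```python
def make_adversary(p, f):
    """Build the program a from the three steps above.

    p: a claimed optimizer mapping a program to a faster equivalent (or to itself).
    f: some decision function (returns 0 or 1).
    """
    def a(x):
        better = p(a)           # step 1: ask p about a itself (closure self-reference)
        if better is a:         # step 2: p claims a is already optimal
            return f(x)
        return 1 - better(x)    # step 3: contradict p's suggested replacement
    return a

f = lambda x: x % 2             # a toy decision function

# If p says "a is already optimal", a just computes f...
honest = make_adversary(lambda prog: prog, f)
print(honest(3), honest(4))     # 1 0 -- identical to f

# ...but if p returns any other program, a computes that program's complement,
# so p's output never matches a. Either way, p has answered incorrectly.
zero = lambda x: 0
fooled = make_adversary(lambda prog: zero, f)
print(fooled(3), fooled(4))     # 1 1 -- the complement of zero, never equal to zero
```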

It can be seen that p(a) can never be correct. If p(a) is equal to a, then it should not be possible to compute a's function more efficiently than a itself does. Yet in that case a simply runs f, and g computes the same function more efficiently than f. If p(a) is not equal to a, then a and p(a) are not identical, because a returns the complement of whatever p(a) returns. The conclusion is that no algorithm can determine whether a given function is computed efficiently.