The halting problem (2)

I have previously claimed that it is possible (theoretically, although not practically) to solve the halting problem on real computers. But I forgot to mention something important about real computers – no real computer is completely deterministic. This is due to the uncertainty principle in quantum mechanics, which implies that any quantum event carries some uncertainty. Therefore, computers can make mistakes – no hardware is completely reliable.

This doesn’t mean we can’t rely on computers. We can. The probability of a real computer making a mistake is very low, and it can be reduced even further by running the same algorithm on more than one computer, or more than once on the same computer. But this only works if the number of steps we need to run is reasonable – something polynomial in the number of memory bits we are using. If the number of steps is exponential – on the order of 2^n, where n is the number of memory bits – then it can’t be done. That is, even if we had enough time and patience to let a computer run 2^n steps, the number of errors would be very high. And of course, we don’t have enough time. Eventually we will either lose patience and turn off the computer, or die, or the entire universe may die. But even if there existed a supernatural being with enough patience and time, it would find that an algorithm run more than once returns different results each time: in terms of the halting problem – sometimes it will halt, and sometimes not, at random.
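To make the point about repetition a bit more concrete, here is a small Python sketch. It is only a toy model – the names unreliable_run and majority_vote and the flip probability are my own inventions, and a real hardware fault is of course not a single random bit flip – but it shows why a handful of repetitions makes mistakes astronomically unlikely, as long as each individual run is cheap enough to repeat.

```python
import random

def unreliable_run(algorithm, flip_probability=1e-6):
    """Run a yes/no algorithm, but flip its answer with a tiny probability,
    as a stand-in for a rare hardware error (a toy model, not real physics)."""
    answer = algorithm()
    if random.random() < flip_probability:
        return not answer
    return answer

def majority_vote(algorithm, repetitions=5):
    """Repeat the unreliable run several times and take the majority answer.
    If one run errs with probability q, the majority of k independent runs
    errs with probability on the order of q to the power of about k/2."""
    yes_votes = sum(1 for _ in range(repetitions) if unreliable_run(algorithm))
    return yes_votes > repetitions // 2

# Usage: a toy decision procedure whose correct answer is always True.
print(majority_vote(lambda: True))
```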

This is true even for the most reliable hardware. On any hardware, with any number of bits in memory, any algorithm will eventually halt – but if it returns an answer, it will not always return the correct one. If we allow a computer to run on the order of 2^n steps and return an answer, we will not always get the same answer. But in this sense, for real computers there is no halting problem – any algorithm will eventually halt.

But if we restrict ourselves to a polynomial number of steps (in the number of memory bits we are using), then we can get reliable answers to most problems. So the interesting question is not whether a given algorithm will halt, but whether it will halt within a reasonable time (a polynomial number of steps) and return a correct answer.
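As a sketch of what “halting within a reasonable time” means operationally, here is a toy step-bounded runner in Python. All the names here are mine, and the “machine” is just a function that advances a state by one step; the point is only that a question of the form “does it finish within this budget?” is always answerable, unlike the unbounded halting question.

```python
def run_with_budget(step, initial_state, max_steps):
    """Advance a machine one step at a time, giving up after max_steps.

    `step` takes a state and returns either ('halt', answer) or
    ('continue', next_state).  Instead of the unanswerable "does this ever
    halt?", we ask the answerable "does it halt within max_steps?"."""
    state = initial_state
    for _ in range(max_steps):
        status, value = step(state)
        if status == 'halt':
            return ('halted', value)
        state = value
    return ('did not finish', None)

# Usage: a toy machine that counts down and answers "yes" at zero.
def countdown(state):
    return ('halt', 'yes') if state == 0 else ('continue', state - 1)

n = 10  # think of n as the number of memory bits
print(run_with_budget(countdown, 100, n**3))    # halts within the budget
print(run_with_budget(countdown, 10**9, n**3))  # budget exceeded
```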

But the word “polynomial” is not sufficient. n^1000 steps is also polynomial, yet it is too big even for small n. We don’t really know whether P and NP, as defined on Turing machines, are equal or not – there is currently no proof that they are equal and no proof that they are not. I’m not even sure whether the classes P and NP are well defined on Turing machines, or more generally whether they can be defined at all. But even if they are well defined, it’s possible that there is no proof either way. It’s like Gödel’s incompleteness theorem – there are statements which can neither be proved nor disproved in terms of our ordinary logic.

When I defined real numbers as computable numbers, I used a constructive approach. My intuition said we should be able to calculate computable numbers to any desired precision (to any number of digits or bits after the decimal point), and therefore I insisted on having an algorithm that defines an increasing sequence of rational numbers – not just ANY converging sequence. It turns out I was right. Theoretical mathematics has another definition of convergence, but that definition is not computable. It’s not enough to claim that “there is a natural number N …” without stating what N is. If we want to compute a number, the function (or algorithm) that gives N for a required precision must be computable too.
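Here is a small sketch of what that constructive approach looks like in practice, using sqrt(2) as a convenient example (the function name and the choice of example are mine). Each term of the sequence is an exact rational, the sequence is non-decreasing, and – crucially – we know in advance that the k-th term is within 2^-k of the limit, so the required precision tells us directly how far to go.

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approximation(k):
    """k-th term of an increasing sequence of rationals converging to sqrt(2):
    a_k = floor(2**k * sqrt(2)) / 2**k, computed with exact integer arithmetic.
    The sequence is non-decreasing and sqrt(2) - a_k <= 2**-k, so the modulus
    of convergence is itself computable - which is the whole point."""
    return Fraction(isqrt(2 * 4**k), 2**k)

# Usage: ten bits after the binary point.
a = sqrt2_approximation(10)
print(a, float(a))
```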

It turns out that if we allow a number to be defined by any converging sequence in the pure mathematical sense, then the binary representation of the halting problem can also be defined. This is because, for any given n, we can run the first n Turing machines for n steps each (or until they halt) and set the i-th bit to 1 if machine i has halted, and to 0 otherwise. It can be proved that this sequence of approximations converges, but its limit can’t be approximated by any computable function – there is no computable way to tell how large n must be for a required precision. Therefore, it can be claimed that such a number can be defined in the mathematical sense, although it can’t be computed. But a Turing machine can’t understand such a number, in the sense that it can’t use it for arithmetic operations or other practical purposes. So in this sense, I can claim that such a number can’t be expressed in the language of Turing machines.
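A sketch of that construction, with the step-bounded simulator left abstract: halts_within below is a hypothetical stand-in for “run the i-th Turing machine (in some fixed enumeration) for a given number of steps and report whether it has halted by then”. Each individual approximation is computable; it is only the limit of the sequence that is not, because no computable function can tell us how large n must be for a required precision.

```python
from fractions import Fraction

def halting_number_approximation(n, halts_within):
    """n-th approximation of the 'halting number': bit i of the binary
    expansion is set to 1 if machine i has halted within n steps.

    `halts_within(i, steps)` is an assumed step-bounded simulator.  Each bit
    can only go from 0 to 1 as n grows and eventually stabilizes at its true
    value, so the sequence of approximations converges."""
    value = Fraction(0)
    for i in range(1, n + 1):
        if halts_within(i, n):
            value += Fraction(1, 2**i)
    return value

# Usage with a toy stand-in: pretend machine i halts if and only if i is even.
print(halting_number_approximation(8, lambda i, steps: i % 2 == 0))
```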

A Turing machine is not able to determine whether a given algorithm will output a computable number or not (for any definition of computable numbers we can give), since this problem is as hard as the halting problem. Therefore, the binary representation of the computable set (1 for each Turing machine that returns a computable number; 0 for each Turing machine that does not) is itself noncomputable. In other words, the question of whether a given algorithm (or Turing machine) defines a computable number is an undecidable problem. So my question is – are complexity classes such as P and NP well defined? Are they computable and decidable in the sense we have been using? Is there a Turing machine which can determine whether a given decision problem belongs to a complexity class such as P or NP and return a correct answer for each input? I think not.

If there is no such Turing machine, then in what sense do P and NP exist? They exist in our language as intuitive ideas, just as the words love and friendship exist. Asking whether P and NP are equal is like asking whether love and friendship are equal. There is no formal answer. Sometimes they are similar, sometimes they are not. If we want to ask whether they are mathematically equal, we first need to check whether they are mathematically well defined.

I was thinking about how to prove this, since just relying on my intuition would not be enough, and I arrived at the following argument. Suppose there were a Turing machine that defined the set P – one that returns yes for any decision problem which is in P, and no for any decision problem which is not in P. Any decision problem can be defined in terms of an algorithm (or function, or Turing machine) that returns yes or no for any natural number. We limit ourselves to algorithms that halt – algorithms that don’t halt can be excluded.

So this Turing machine – let’s call it p – would return a yes or no answer for any Turing machine which represents a decision problem (if the input doesn’t represent a decision problem, it doesn’t matter much what p does). Then we would be able to create an algorithm a that does the following:

1. Use p to check whether a itself is in P.
2. If p says a is in P, behave like a decision problem which is not in P.
3. Otherwise (if p says a is not in P), behave like a decision problem which is in P.

Therefore, a will always do the opposite of what p expects, which leads to the conclusion that there is no algorithm that can define P. P is not computable, and therefore can’t be defined in terms of a Turing machine.
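Here is a loose Python sketch of that diagonal construction. Everything in it is hypothetical: p is the assumed decider for membership in P, the program is simply handed its own source (a quine-style construction would let it obtain that source itself), and the two placeholder problems only hint at “easy” versus “hard” behaviour.

```python
def easy_decision(n):
    """A problem that is certainly in P: is n even?"""
    return n % 2 == 0

def hard_decision(n):
    """A stand-in for a problem believed to need exponential time
    (a pointless brute-force search over all numbers below 2**n)."""
    return any(bin(k).count('1') == n for k in range(2**n))

def diagonal_program(p, my_own_source, n):
    """Sketch of the diagonal argument against a hypothetical decider `p`.

    `p(source)` is assumed to answer True if the decision problem described
    by `source` is in P and False otherwise.  Whatever p predicts about this
    very program, the program does the opposite, so no such p can be right."""
    if p(my_own_source):
        return hard_decision(n)   # p said "in P": act like a hard problem
    else:
        return easy_decision(n)   # p said "not in P": act like an easy problem

# Usage with a toy, obviously wrong decider that answers True for everything.
print(diagonal_program(lambda source: True, "<source of diagonal_program>", 5))
```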

Is there a way to define P without relying on Turing machines? Well, it all depends on the language we’re using. If we’re using our intuition, we can define P intuitively, in the same sense that we can define friendship and love. But if we want to define something concrete – a real set of decision problems – we have to use the language of deterministic algorithms. Some people think that we are smarter than computers – that we can do what a computer can’t do. But we are not. Defining P is as hard as defining the halting problem – it can’t be done. No computer can do it, and no human can do it either. We can ask whether a given algorithm will halt, but we have to accept that there are cases where there is no answer – or where there is an answer which we are not able to know.

We can claim that even if we don’t know the answer, at least we can know that we don’t know the answer. But there are cases where we don’t know that we don’t know the answer. Gödel’s incompleteness theorem can be extended ad infinitum. If something can’t be computed, it can’t be defined. Such definitions, in terms of an unambiguous language, don’t exist.

I would conclude that no complexity class can be computed; this can be shown in a similar way. So if you ask me whether the complexity classes P and NP are equal, my answer is that they are neither equal nor unequal. Neither of them can be defined.