Are we really smarter than computers?

Are we really smarter than computers? I don’t think we are, but we’re probably smarter than Turing machines. We are smarter because we allow ourselves to make assumptions, calculate probabilities, and occasionally make mistakes. Turing machines are not allowed to make mistakes. If a Turing machine can be proved to make even one mistake, it is doomed. Everything has to be completely deterministic. But does determinism really exist? Is it consistent? Gödel proved that there is no proof that it is consistent. I suspect he is right. A deterministic approach might prove itself to be inconsistent.

Does God play dice? I don’t think so. God defines anything that can be defined. But God doesn’t know what can and cannot be defined (this question in itself is not definable). So God defines everything, and whatever can’t be defined contradicts itself.

My intuition about the number of algorithms was wrong. I thought that the number of algorithms is a smaller infinity than the number of problems we can define. But no infinity can be proved to be less than another infinity. Any Turing machine has to take itself into account, too. A Turing machine can’t decide whether a proof represented by another Turing machine is valid. If it could, we could prove it is wrong; we could prove a contradiction. Why are we able to do what Turing machines cannot? Because we allow ourselves to make mistakes.
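To make the contradiction concrete, here is a minimal Python sketch of the classic diagonal argument this paragraph gestures at. The names decides_halting and diagonal are hypothetical, invented here for illustration; the sketch only shows why no real decides_halting can exist.

```python
# A sketch of the diagonal argument behind the paragraph above.
# Suppose, hypothetically, that some function could decide whether
# running program(argument) ever halts. (No such function can exist;
# the names below are illustrative only.)

def decides_halting(program, argument):
    """Hypothetical oracle: return True if program(argument) halts."""
    raise NotImplementedError("No algorithm can implement this for all inputs.")

def diagonal(program):
    """Do the opposite of what the oracle predicts about program(program)."""
    if decides_halting(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    return "halted"          # predicted to loop -> halt immediately

# Asking the oracle about diagonal(diagonal) leads to a contradiction:
# if it says "halts", diagonal loops forever; if it says "loops", diagonal halts.
```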

So how do we know it is not a mistake? Maybe a Turing machine can decide everything that is decidable? Or, more generally speaking, maybe a Turing machine can decide everything? No proof can be proved ad infinitum, my proofs included. They are just assumptions; they might be wrong. I suppose every question that can be defined can be answered. If it can be defined by a Turing machine, it can be answered by a Turing machine too. Maybe defining a question and answering it are really the same thing? I think they are.

So, I guess that the number of algorithms we have is unlimited. We don’t have to limit ourselves to deterministic algorithms. But we at least try to define our questions in an unambiguous way. Is that even possible?

Probably not. If two Turing machines are not represented by the same number, they are not identical. No Turing machine can prove that they are identical. If two people ask the same question, it’s not the same question. A question such as “how old am I?” has different answers, depending on who’s asking the question.

So why are we allowed to do what Turing machines are not? Is a Turing machine not allowed to have a personality? Isn’t it allowed to ask a question about the number it represents? If two Turing machines are represented by different numbers, some answers will not be the same. That is the definition of different numbers. They are different; they are not the same.
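A toy Python sketch of that last point (the class Machine and its two questions are invented here purely for illustration): two machines that are identical except for the number representing them already disagree on at least one question, namely the question about that number.

```python
# Two "machines" that differ only in the number that represents them.
# On the impersonal question they agree; on the personal question about
# their own number they necessarily differ.

class Machine:
    def __init__(self, index):
        self.index = index          # the number this machine is represented by

    def answer(self, question):
        if question == "is 7 prime?":
            return True
        if question == "what is your own number?":
            return self.index
        return None

m1, m2 = Machine(42), Machine(43)
print(m1.answer("is 7 prime?") == m2.answer("is 7 prime?"))                              # True
print(m1.answer("what is your own number?") == m2.answer("what is your own number?"))    # False
```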

I had the intuitive idea that a problem that can be solved “in P” (according to the definition) is easier than a problem whose definition is “not in P”. But this is not always the case. As I said, if we prove that a given problem is “not in P”, it leads to a contradiction. I’m starting to understand why. A function such as c*n (or, more generally speaking, a polynomial of n) is thought to grow more slowly than a function like n^m (when c and m are considered to be constants). This might be right. But who said c and m have to be constant? Who said they are not allowed to increase as well? Does it matter so much whether c is a constant? And remember the sequence a(n) I defined – would you consider a number such as a(1000) to be a constant? And what about a(a(1000))? Is it a constant? Can anybody calculate it in any amount of time?
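As a rough numeric illustration (the value huge_constant below is an arbitrary stand-in, since the sequence a(n) is defined elsewhere in this text and is not reproduced here): a function that is formally linear, and therefore “in P”, can dominate a cubic function for every input size we could ever actually run.

```python
# A small numeric illustration of the point about "constants".
# huge_constant plays the role of something like a(1000): an enormous
# number that we are nevertheless told to treat as a constant.

huge_constant = 10 ** 1000   # arbitrary stand-in for an astronomically large "constant" c

def linear(n, c=huge_constant):
    """c*n -- formally linear in n, hence comfortably 'in P'."""
    return c * n

def polynomial(n, m=3):
    """n**m -- a higher-degree polynomial, for a fixed exponent m."""
    return n ** m

# For every input size we could ever run in practice, the "easy" linear
# function is astronomically larger than the cubic one:
for n in (10, 10**6, 10**12):
    print(n, linear(n) > polynomial(n))   # True, True, True

# Asymptotically n**m overtakes c*n (roughly once n > c**(1/(m-1))),
# but that crossover point is itself far beyond any feasible input size.
```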

But the number of algorithms is far greater than a(a(1000)). There are more algorithms than any number we are able to represent. Do you really think we are so smart that we are able to prove something that cannot be contradicted by any of them? We have to put ourselves inside the equation. If something can be proved, it can be contradicted. Whether the proof and the contradiction are valid, nobody knows. And what matters is what we can do in reality: whether we can solve a problem in real time or not. It doesn’t matter whether we can prove something like “any function that is executed twice will return the same answer in both cases” (and if there is such a proof, it can be contradicted). It can be taken as another axiom (and the axiom can also be contradicted). Anything can be contradicted. And the contradiction can be contradicted, too (ad infinitum).

So I guess the class P, defined as the class of decision problems which we are able to solve “in a short time”, is merely intuitive. Nobody really cares if a problem that can be solved in a(1000) time is proved to be in P. But when we define the problem, we define the solution. For example, consider the definition of prime numbers. The definition defines a function. Another function may calculate the same function more efficiently, as in the sketch below. But if it can be proved, it can be contradicted. Anything can be contradicted.
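A small Python sketch of that point about prime numbers (the function names are mine, chosen for illustration): the first function is nothing but the definition turned into code, and the second computes the same yes/no function with less work.

```python
# The definition of "prime" already is an algorithm: check every possible
# divisor. A second function computes the same function, but more
# efficiently, by stopping at the square root.

def is_prime_by_definition(n):
    """n is prime iff n > 1 and no k with 1 < k < n divides n."""
    return n > 1 and all(n % k != 0 for k in range(2, n))

def is_prime_faster(n):
    """Same function, but only trial-divides up to sqrt(n)."""
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

# Both agree on every input (here checked only for a small range):
assert all(is_prime_by_definition(n) == is_prime_faster(n) for n in range(200))
```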

I came to the conclusion that the square root of 2 may not be an irrational number. The proof can be contradicted. It is possible that an algorithm that calculates the bit sequence of the square root of 2 may do something like not halting, or returning an infinite run of zeros or ones. I don’t think it can ever be proved that two algorithms that calculate the square root of 2 will return the same sequence of bits ad infinitum.

I thought about it; maybe the definition of the square root of 2 depends on the algorithm? Maybe two algorithms will return different numbers, or at least different bit sequences of the same number? I estimated the amount of time it will take to calculate the first n bits of the square root of 2 at O(n*log2(n)), but maybe the algorithm is more efficient? Maybe it converges to something in the order of O(n)? If n bits can be calculated in O(n), does it mean that each bit can be calculated in O(1)? That sounds like something quite constant, or at least periodic. The conclusion is probably that the infinite sequence of the bits of the square root of 2 is not something that can be proved to be calculated in any nondeterministic way.
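Here is a small Python sketch of exactly that experiment, under the assumption that “the first n bits of the square root of 2” means the integer floor(sqrt(2) * 2^n) written in binary: two quite different algorithms, one by bisection and one by Newton’s method on integers, which can be compared bit for bit as far as we care to run them.

```python
# Two different algorithms for the first n bits of sqrt(2), written to
# test the question above: do they return the same bit sequence?
# Both compute floor(sqrt(2) * 2**n) as an integer, whose binary digits
# are the leading 1 of sqrt(2) followed by its first n fractional bits.

def sqrt2_bits_bisection(n):
    """Bisection: narrow down the integer square root of 2 * 4**n."""
    target = 1 << (2 * n + 1)          # 2 * 4**n
    lo, hi = 0, target
    while lo + 1 < hi:                 # invariant: lo**2 <= target < hi**2
        mid = (lo + hi) // 2
        if mid * mid <= target:
            lo = mid
        else:
            hi = mid
    return bin(lo)[2:]

def sqrt2_bits_newton(n):
    """Newton's method on integers, converging to the same square root."""
    target = 1 << (2 * n + 1)
    x = target                         # any starting point >= isqrt(target) works
    while True:
        y = (x + target // x) // 2
        if y >= x:                     # stopped improving -> x is the floor
            return bin(x)[2:]
        x = y

# For every n we can actually afford to run, the two sequences agree;
# whether they agree "ad infinitum" is exactly the question in the text.
assert sqrt2_bits_bisection(64) == sqrt2_bits_newton(64)
print(sqrt2_bits_newton(16))   # 10110101000001001, i.e. 1.0110101000001001...
```

Any finite run of this comparison can only ever confirm agreement up to some n; it cannot settle whether the two algorithms agree on the whole infinite sequence, which is the point of the paragraph above.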

And if it can’t be calculated in any nondeterministic way, in what sense does it exist? It seems to me that God does not only play dice in physics; he does so in mathematics too. Whether all valid algorithms that calculate the square root of 2 will return the same answer, only God knows. Whether any algorithm that is presumed to calculate the square root of 2 is valid, only God knows. Maybe there is no square root of 2, or maybe there are many of them. Only God knows.