### The halting problem (2)

I have previously claimed that it is possible (theoretically, although not practically) to solve the halting problem on real computers. But I forgot to mention something important about real computers – no real computer is completely deterministic. This is due to the uncertainty principle in quantum mechanics, which implies that any quantum event has some inherent uncertainty. Therefore, computers can make mistakes – no hardware is completely reliable.

This doesn’t mean we can’t rely on computers. We can. The probability of a real computer making a mistake is very low, and can be reduced even further by running the same algorithm on more than one computer, or more than once on the same computer. Or rather, this can be done if the number of steps we need to run is reasonable – something polynomial in the number of memory bits we are using. If the number of steps is exponential – on the order of 2^n (where n is the number of memory bits) – then it can’t be done. That is, even if we had enough time and patience to let a computer run 2^n steps, the number of errors would be very high. And of course, we don’t have enough time. Eventually we will either lose patience and turn off the computer, or die, or the entire universe may die. But even if there existed a supernatural being with enough patience and time, it would find that an algorithm run more than once returns different results each time: in terms of the halting problem – sometimes it will halt, and sometimes not – at random.
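The repetition idea can be made concrete. Assuming each independent run errs with some small probability p (an illustrative figure, not a measured one), taking the majority answer of k runs shrinks the overall error rapidly:

```python
from math import comb

def majority_error(p, k):
    """Probability that the majority of k independent runs is wrong,
    when each run is independently wrong with probability p (k odd)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k // 2) + 1, k + 1))

p = 1e-6  # assumed per-run error rate, purely illustrative
for k in (1, 3, 5):
    print(k, majority_error(p, k))
```

With p around 10^-6, the majority of three runs already errs with probability around 3p², far below the single-run error.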

This is true even for the most reliable hardware. For any hardware and any number of bits in memory, any algorithm will eventually halt – but when it returns an answer, it will not always be the correct answer. If we allow a computer to run on the order of 2^n steps and return an answer, we will not always get the same answer. But for real computers there is no halting problem – any algorithm will eventually halt.

But if we restrict ourselves to a polynomial number of steps (in terms of the number of memory bits we are using) – then we are able to achieve reliable answers to most problems. So the interesting question is not whether a given algorithm will halt – but if it will halt within a reasonable time (after a polynomial number of steps) and return a correct answer.

But the word “polynomial” is not sufficient. n^1000 steps is also polynomial, but is far too many even for the smallest n. Whether P and NP, as defined on Turing machines, are equal or not, we don’t really know – there is currently no proof that they are equal and no proof that they are not. I’m not even sure whether the classes P and NP are well defined on Turing machines – or, more generally speaking, whether they can be defined at all. But even if they are well defined, it’s possible that there is no proof either way. It’s like Gödel’s incompleteness theorem – there are statements which can neither be proved nor disproved in terms of our ordinary logic.

When I defined real numbers as computable numbers, I used a constructive approach. My intuition said we should be able to calculate computable numbers to any desired precision (or to any number of digits or bits after the decimal point), and therefore I insisted on having an algorithm that defines an increasing sequence of rational numbers – not just ANY converging sequence. It turns out I was right. Although theoretical mathematics has another definition of convergence, it’s not computable. It’s not enough to claim that “there is a natural number N …” without stating what N is. If we want to compute a number, the function (or algorithm) that produces N (for a given required precision) must be computable too.
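As an illustration of the constructive approach, here is a sketch (in Python; the function name is mine) that produces an increasing sequence of rational lower bounds converging to the square root of 2, one term per bit of precision:

```python
from math import isqrt

def sqrt2_lower(bits):
    """Best rational lower bound for sqrt(2) with denominator 2**bits.
    As `bits` grows, these bounds form a nondecreasing sequence of
    rationals converging to sqrt(2), accurate to within 2**-bits."""
    num = isqrt(2 << (2 * bits))   # floor(sqrt(2) * 2**bits)
    return num, 1 << bits          # numerator, denominator

for bits in (1, 5, 10, 20):
    num, den = sqrt2_lower(bits)
    print(f"{num}/{den} = {num / den}")
```

Here the “N for a given precision” is explicit: to get within 2^-bits of the limit, take the term with that many bits.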

It turns out that if we allow a number to be defined by any converging sequence in the pure mathematical sense, then the binary representation of the halting problem can also be defined. This is because for any given n we can run each of the first n Turing machines for n steps (or until it halts), and set bit i to 1 if machine i has halted, and to 0 otherwise. It can be proved that this sequence of approximations converges, but its limit can’t be approximated by any computable function. Therefore, it can be claimed that such a number can be defined in the mathematical sense, although it can’t be computed. But a Turing machine can’t understand such a number, in the sense that it can’t use it for arithmetic operations or other practical purposes. So in this sense, I can claim that such a number can’t be told in the language of Turing machines.
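The construction in this paragraph can be sketched in code. Bit i of the n-th approximation is 1 if machine i has halted within n steps. `simulate` below is a stand-in stub of my own, chosen only so the sketch runs; nothing here actually simulates Turing machines:

```python
def simulate(i, steps):
    """Stub standing in for 'run Turing machine number i for `steps`
    steps and report whether it has halted'. Here machine i is
    (hypothetically) taken to halt after exactly i steps, just so
    the sketch is executable."""
    return steps >= i

def approximation(n):
    """n-th term of the sequence: bit i is 1 if machine i halts
    within n steps. Each term is a rational number; the sequence is
    nondecreasing and bounded, so it converges. For a real
    `simulate`, the limit would not be computable."""
    return sum(2.0 ** -(i + 1) for i in range(n) if simulate(i, n))

print([approximation(n) for n in (1, 2, 4)])  # → [0.5, 0.75, 0.9375]
```

Each individual approximation is computable; it is only the limit that is not.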

A Turing machine is not able to specify whether a given algorithm will output a computable number or not (for any definition of computable numbers we might choose), since this problem is as hard as the halting problem. And therefore, the binary representation of the computable set (1 for each Turing machine that returns a computable number; 0 for each Turing machine that does not) is itself noncomputable. In other words, the question of whether a given algorithm (or Turing machine) defines a computable number is an undecidable problem. So my question is – are complexity classes such as P and NP well defined? Are they computable and decidable in the sense we usually mean? Is there a Turing machine which can specify whether a given decision problem belongs to a complexity class such as P or NP, and return a correct answer for each input? I think they are not.

If there is no such Turing machine, then in what sense do P and NP exist? They exist in our language as intuitive ideas, just like the words love and friendship exist. Asking whether P and NP are equal is similar to asking whether love and friendship are equal. There is no formal answer. Sometimes they are similar, sometimes they are not. If we want to ask whether they are mathematically equal, we need to check whether they are mathematically well defined.

I was thinking how to prove this, since just counting on my intuition would not be enough. But I came to a conclusion. Suppose there was such a Turing machine that would define the set P – return yes for any decision problem which is in P, and no for any decision problem which is not in P. Any decision problem can be defined in terms of an algorithm (or function, or Turing machine) that returns yes or no for any natural number. We limit ourselves to algorithms that halt – algorithms that don’t halt can be excluded.

So this Turing machine – let’s call it p – would return a yes or no answer for any Turing machine which represents a decision problem (if it doesn’t represent a decision problem, it doesn’t matter much what it does). Then we would be able to create an algorithm a that does the following:

1. Use p to calculate whether a is in P.
2. If a is in P, define a decision problem which is not in P.
3. Otherwise (if a is not in P), define a decision problem which is in P.

Therefore, a will always do the opposite of what p expects, which leads to a conclusion that there is no algorithm that can define P. P is not computable, and therefore can’t be defined in terms of a Turing machine.
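The diagonal step above has the same shape as the classic halting-problem proof. The sketch below is not the author’s construction but the standard self-reference pattern, with a hypothetical decider stubbed in; any concrete stub is defeated by the diagonal program:

```python
def halts(program, arg):
    """Hypothetical decider, assumed to exist for the sake of the
    argument. Any concrete implementation (here, a stub that always
    answers True) is necessarily wrong on some input."""
    return True

def diagonal(program):
    """Do the opposite of what `halts` predicts about running
    `program` on itself."""
    if halts(program, program):
        while True:      # halts says "halts", so loop forever
            pass
    else:
        return           # halts says "loops", so halt at once

# Feeding `diagonal` to itself is the contradiction: whatever
# `halts(diagonal, diagonal)` answers, `diagonal(diagonal)` does
# the opposite, so no correct `halts` can exist.
print(halts(diagonal, diagonal))
```

The same template works with the hypothetical p in place of `halts`: a asks p about itself and behaves so as to contradict p’s answer.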

Is there a way to define P without relying on Turing machines? Well, it all depends on the language we’re using. If we’re using our intuition, we can define P intuitively, in the same sense that we can define friendship and love. But if we want to define something concrete – a real set of decision problems – we have to use the language of deterministic algorithms. Some people think that we are smarter than computers – that we can do what a computer can’t do. But we are not. Defining P is as hard as defining the halting problem – it can’t be done. No computer can do it, and no human can do it either. We can ask the question whether a given algorithm will halt. But we have to accept the fact that there are cases where there is no answer. Or alternatively, there is an answer which we are not able to know.

We can claim that even if we don’t know the answer, at least we can know that we don’t know the answer. But there are cases where we don’t know that we don’t know the answer. Gödel’s incompleteness theorem can be extended ad infinitum. If something can’t be computed, it can’t be defined. Such definitions, in terms of an unambiguous language, don’t exist.

I would conclude that any complexity class can’t be computed. It can be shown in a similar way. So if you ask me whether complexity classes P and NP are equal, my answer is that they are neither equal nor not equal. Both of them can’t be defined.

### The halting problem

I have previously mentioned the halting problem – a well-known problem in computer science and mathematics. I claimed that there is a language in which an algorithm that solves the halting problem can be constructed. If we assume that any given algorithm either halts or will run to infinity, then we can construct this simple algorithm:

1. Take an algorithm from input.
2. If it halts, return yes.
3. Otherwise, return no.

It’s a simple algorithm that returns “yes” if a given algorithm halts, and “no” if it doesn’t. But in what language is it written? In English. It requires knowing whether a given algorithm will halt. As we assumed, such knowledge exists, but it has been proved that there is no deterministic algorithm (for Turing machines) that can contain such knowledge. We can conclude that if this knowledge indeed exists, it is not computable, and therefore can’t be expressed in a finite number of bits (or in computer files).

Suppose we try another approach:

1. Take an algorithm from input.
2. Run it one step at a time – either until it halts, or for infinitely many steps.
3. If it halts, return yes.
4. Otherwise, return no.

This algorithm would seem to work and return a correct answer for algorithms that halt, but it might get stuck in an infinite loop for algorithms that don’t halt. But if we allow it to run on a computer that can run infinitely many steps in finite time – it will always stop and return a correct answer. The problem is – there is no such computer.

But there are no Turing machines, either. A Turing machine has infinite memory. Since it has infinite memory, some programs might run for infinite time. In reality, no computer has infinite memory, and no computer program will run for infinite time (somebody will eventually turn off the computer). So let’s forget about Turing machines, and check whether we can solve the halting problem for real computers.

Real computers have a finite amount of memory. If we are given a real computer and an algorithm that runs on it – is it possible to determine whether this algorithm, when run on the real computer, will ever halt?

Of course it is! Let n be the number of bits in this real computer’s memory – so the total number of different states the computer can be in is 2^n. If a computer program hasn’t stopped after 2^n steps, then it is stuck in an endless loop and will run forever. So I will revise my algorithm a little:

1. Take an algorithm from input.
2. Run it one step at a time – either until it halts, or until it has run 2^n steps.
3. If it halts, return yes.
4. Otherwise, return no.

This program will always stop and return a correct answer, although it might take a long time. It will also need to use a different computer (or virtual computer) to count the number of steps it is running, and this might take another n bits of memory as well. But I’m not trying to be efficient here. I’m trying to prove that this problem can be solved. And it can be solved.
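The revised algorithm can be sketched for a toy machine whose whole state fits in n bits. The step functions below are my own toy examples, not a model of a real computer:

```python
def halts_on_finite_machine(step, state0, n_bits):
    """Decide halting for a deterministic machine whose entire state
    fits in n_bits bits. `step` maps a state to the next state, or to
    None when the machine halts. With only 2**n_bits distinct states,
    a run longer than 2**n_bits steps must revisit a state and is
    therefore stuck in an endless loop."""
    state = state0
    for _ in range(2 ** n_bits):
        if state is None:
            return True
        state = step(state)
    return state is None

# toy 4-bit machine: increment until the register wraps to 0, then halt
inc = lambda s: None if s == 0 else (s + 1) % 16
print(halts_on_finite_machine(inc, 5, 4))   # True: halts within 2**4 steps
# toy 4-bit machine: even states cycle forever
loop = lambda s: (s + 2) % 16
print(halts_on_finite_machine(loop, 2, 4))  # False: stuck in an endless loop
```

The extra bookkeeping the text mentions (counting steps) is the `range(2 ** n_bits)` counter, which indeed needs about n more bits.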

So what can’t be solved? The question of whether algorithms halt on a Turing machine can’t be solved. The halting function (return yes for any algorithm that halts; no for any algorithm that doesn’t) can’t be computed. The corresponding number can’t be defined in the computer’s machine language. If such knowledge exists, it can’t be expressed. Some things we are just not able to know.

But remember – we also don’t know whether the googolth bit of the square root of 2 is 0 or 1 – and it is computable (at least in theory, on Turing machines). Even if something is theoretically computable, it doesn’t mean it can be computed in reality, within reasonable time. We just have to accept that some things we are not able to know. It would be boring if we knew everything. If we did, we would have nothing to learn.

### Are the real numbers really uncountable?

I have already demonstrated that the number of ideas that can be expressed is countable. So how come the number of real numbers is uncountable? In what sense are the real numbers real? Each real number can be considered an idea. Can we express an infinite number of ideas in a finite number of words?

But I already said that it all depends on the language we use. Let’s start by checking our definition of numbers. Since we have the inductive definition of natural numbers, and we can define rational numbers as a ratio of two natural numbers – let’s check our definition of irrational numbers. There are a few ways to define irrational numbers. Let’s define irrational numbers (or, more generally speaking, real numbers) as the limit of a known sequence of rational numbers which is increasing and has an upper bound.

It can be argued that a limit of such a sequence might not be regarded as a number, if it’s not in itself a rational number. But this is just terminology. It doesn’t matter if we call it a limit or a number, as long as we know it exists. It exists in the sense that it is computable – we can calculate it to any desired precision by a finite, terminating algorithm. Or to be more accurate – it is computable if the original sequence of rational numbers can be generated by a known algorithm.

So in the language of computers and deterministic algorithms, we can define any computable number in such a way. Can we define numbers which are not computable? It all depends on our definition of “definition”. While there might exist languages in which such numbers can be defined, my view is that there also exist languages in which such numbers cannot be defined – for example, the language I’m using now. It can be claimed that an infinite definition is also a definition, and that these numbers can be defined by the infinite sequence of rational numbers itself. But in reality, it would take us infinite time to express such a definition, and we would need infinite memory to remember it. Therefore, my view is that anything that requires an infinite definition is not real. So let’s limit ourselves to definitions of finite length. If there exists a finite algorithm that can define a number (by defining a sequence of rational numbers that converges to it), then this number is computable and therefore definable. If there doesn’t exist such an algorithm – then we cannot define such a number.

So, if I conclude that numbers that can’t be defined don’t exist (because we are not able to express them in the language I’m using), then we come to the conclusion that the set of real numbers is countable. How does this square with arguments such as Cantor’s diagonal argument, which claim that the set of real numbers is uncountable? Well, while Cantor’s diagonal argument claims that there exists a sequence of rational numbers which converges to a real number and is not computable (because the set of computable numbers is countable) – we are not able to express such a sequence in a finite way in the language I’m using. And since it can’t be expressed, I can claim that it doesn’t exist.

Why doesn’t it exist? Consider this – I can claim that there is a computer language in which there is an algorithm that is able to solve the halting problem (decide whether any given computer program will halt). Indeed, there is such a language and algorithm, in the same sense that there are numbers which can’t be computed. But there is no computer that runs such an algorithm – it can’t be compiled into known computer languages. So in what sense does such a computer language exist? It doesn’t exist in reality – it exists only in our minds. And therefore, numbers which can’t be computed exist only in our minds, too. They are not real numbers, in the sense that they are not real. They are real in the same sense that a computer that solves any problem is real.

### The limits of knowledge

Is there a limit to human knowledge? I would rather rephrase this question: is there a limit to the knowledge that can be expressed in human language? While some people might think that the potential of our knowledge and wisdom is unlimited, I will demonstrate that it is limited.

It is well known that many aspects of human communication can be expressed in computer files – including written language, spoken language including music and songs, visual pictures, movies and books. Is there a limit to the amount of information that can be expressed in files? We all know that computer files can be represented as a sequence of binary digits (bits). Each sequence of binary digits can also be viewed as a positive integer number (a natural number). While some files might contain millions or even billions of bits – their size is always finite. A computer file cannot contain an infinite number of bits.

So it seems that any idea, any piece of knowledge or information that can be expressed in words (or any other form of communication that can be represented in files) can be represented as a sequence of binary digits, or a natural number. But the number of natural numbers is countable. Even more – the number of natural numbers which can actually be represented in reality is bounded by the number of particles in our known universe, which is finite. We come to the conclusion that the number of ideas that can be expressed in words is finite, or at most countable (if we don’t put a limit on the number of words we use to express one idea).
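The file-as-number correspondence is easy to make explicit. A minimal sketch (the leading marker byte is my own device, there to keep leading zero bytes from being lost):

```python
def file_to_number(data: bytes) -> int:
    """Map a file's contents to a natural number, showing that every
    finite file is just one natural number."""
    # prefix a 0x01 marker byte so leading zero bytes survive the trip
    return int.from_bytes(b"\x01" + data, "big")

def number_to_file(n: int) -> bytes:
    """Inverse of file_to_number: recover the original bytes."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big")[1:]

msg = b"All the books have already been written."
n = file_to_number(msg)
assert number_to_file(n) == msg
print(n)
```

The mapping is a bijection between byte strings and a subset of the natural numbers, which is all the countability argument needs.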

Even more – since the sequence of natural numbers is already known to us, and we can produce a simple computer program or algorithm* that will express all of them (if allowed to run to infinity) – all the knowledge and ideas that can be expressed in words are already known as well. All the words have already been said, all the books have already been written, all the movies have already been created and seen – by a simple algorithm that counts the natural numbers to infinity. Nothing is new – everything is already known to this algorithm, and therefore to us. Just like nothing is new with any natural number – nothing is new with any sequence of binary digits, or with any computer file.

This also eliminates our concepts of authorship and copyright. Can a number have an author? Can it be copyrighted? I can prove by induction that no number has an author and no number is copyrighted. Since we all know that 0 and 1 are not copyrighted, and since nobody can claim to be the author of either of them – then if we have a sequence of bits which has no author and is not copyrighted, and we add to it another bit (either 0 or 1) – it’s easy to conclude that the new sequence, too, has no author and is not copyrighted. Can one claim to be the author of a sequence of bits of which he himself wrote only one bit? Compare it to taking a book someone else wrote, and adding one letter. Can you claim that you wrote the entire book? Of course not. And if nobody wrote the original book, and you added one letter – then nobody wrote the new book, either. Or compare it to numbers. If you add one to a well-known natural number, can you claim that the new number is yours? Can you claim that you wrote it, and nobody has the right to write the same number for the next 50 years? Of course not.

If we were able to claim copyrights on natural numbers, then I would be able to claim that the algorithm that outputs the entire sequence of natural numbers is mine, I wrote it, and therefore the entire sequence of natural numbers is mine. Nobody is allowed to write any number for the next 50 years. Would you allow something like this? Of course not. Then we should conclude that no knowledge is new, everything is already known, no book has an author and nothing is copyrighted.

* Here’s my algorithm in the awk language, in case you’re interested:
BEGIN { for (i = 0; ; i++) print i }

### The axiom of choice (2)

After reading what I wrote about the axiom of choice about two years ago, it appears to me that I forgot to mention something important. I claimed that there are numbers which we are not able to tell. But it all depends on the language. It can be proved that for each real or complex number, there exists a language (or actually, there is an infinite number of languages) in which this number can be told. It’s very easy to prove – we can just define a language in which this number has a symbol, such as 1, 0 or o. We tend to look at some symbols as universal, but they are not. For example, the digit 0 means zero in English, but five in Arabic. The dot is used for zero in Arabic. So defining numbers depends on the language we use.

But, since there are infinitely many possible languages, the language itself has to be defined, or told, for other people to understand it. We come to the conclusion that the ability to tell numbers depends on some universal language, in which we can tell the number either directly, or by defining a language and then telling the number in that language (any finite chain of languages can be used to define each other). But in order to communicate and understand each other, we need a universal language in which we can start our communication.

It still means we are not able to communicate more than a countable number of numbers – or a finite number of numbers in any given finite number of bits or amount of time – but the set of the countably (or finitely) many numbers that we can communicate depends on the language we use. For example, if we represent numbers as rational numbers (as a ratio of two integers), then we can represent any rational number, but we can’t represent irrational numbers such as the square root of 2. But if we include the option of writing “the square root of (a rational number)”, then we can also represent numbers which are square roots of rational numbers. In this way we can extend our language, but it’s hard to define which numbers we are able to define unambiguously. An example of a set of numbers we can define in such a way is the set of computable numbers.

In any case, for any language the number of numbers that can be defined in it is countable, and we can conclude that any uncountable set has an (uncountable) subset of numbers which can’t be defined in this language. If we subtract the set of numbers that can be defined from the original uncountable set, we get an infinite set of numbers, none of which we are able to define or express. If there are languages in which these numbers can be expressed – those languages, too, cannot be expressed in the original language.

It’s similar to what we have in natural languages. Some expressions (or maybe even any expression) can’t be translated from one language to another. For example, in Hebrew there is no word for tact. The word tact is sometimes used as it is literally, but this is not Hebrew. There are many words in Hebrew, and any language, from other languages. But the Hebrew language itself does not have a word for tact.

### Is the speed of light constant?

The theory of relativity predicts that the speed of light in empty space is constant. Physical experiments, such as the famous Michelson–Morley experiment, confirm this. However, consider a theoretical experiment such as the double-slit experiment, built in such a way that there is a difference in distance between the two possible paths. According to Richard Feynman’s path integral formulation, or sum-over-paths, a quantum of light (a photon) can’t be seen as passing through one of the slits or the other, but both of them at once. Therefore, it will be considered as passing through two different distances at the same time, inevitably leading to a conclusion that the speed of light is not constant. This is what I call the paradox of the speed of light.

Why haven’t we seen this in experiments? I think because of the nature of light and the way we perceive it, its speed is always perceived as very close to a constant value, due to the probabilistic nature of waves in general, and of the electromagnetic wave in particular. One of the reasons is that we always measure the speed of light over long distances, while what I describe is relevant to the tiny distances of quantum mechanics. At the macroscopic level, light does seem to travel at a constant speed and in more or less straight lines – according to light’s wavelength and general relativity. But at the microscopic level (the quantum level) there is no constant speed. The speed of light, in this case, varies according to probability.

This means that spacetime itself might not be a constant thing at the microscopic level. I compare it to the surface of water in the sea. If electromagnetic waves can be compared to waves of sea water, then the spacetime itself can be compared to the water itself. The surface of water is never flat, when looking from close enough. But from a distance it looks flat enough (or actually a sphere around earth).

Does it mean a material object can reach or pass beyond the speed of light? I think it does. If a particle can reach a speed close to the speed of light when given enough energy, then, considering the wave features of particles and light, it appears that a particle with high energy is able to reach or pass the speed of light under some circumstances. This might mean a particle can go back in time. For a massive body with many particles I think the chances of this are very low – almost zero. But for a tiny subatomic particle I think it is possible, and the probability of it reaching the speed of light is high enough to actually allow this to happen once in a while.

### Light and the theory of relativity (2)

The theory of relativity says that information can’t travel at a speed greater than that of light. The reason is that various events in the universe can be considered simultaneous, at least in some frames of reference (which vary from one observer to another). If two events can be considered simultaneous, information about one event can’t be known to the people at the other event – otherwise they would know that the other event has already happened, which would contradict the possibility of the two events being simultaneous. Therefore, the speed limit of information in the physical world is the speed of light. The theory of relativity also says that matter can never reach the speed of light.

So there are generally three speeds in the universe – the speed of matter, which is always below the speed of light; the speed of light; and the speed of thoughts. Our physical body, as being built of physical matter, can never travel at a speed equal or greater than the speed of light. But information, including all forms of communication, is already travelling at the speed of light between us, using technologies such as radio waves, electronic communication and the world wide web.

But what about our thoughts? Are they also limited by physical boundaries? Look at the stars. Some of them are millions of light years away from us. Yet we can see them right now. It can be said that we see them as they were in the past – we see their ancient history and not their state in the present. If a star that is one million light years away from us exploded – say right now, or last year – we would not know about it for the next million years. But we can still think about this star in the present and in the future. We can think about it right now. We can think about the entire universe in less than a second. Our thoughts can transcend the speed of light. The speed of our thoughts is infinite.

Now, I have already demonstrated that the speed of light, according to its own time scale, is infinite. For light, the entire universe is a small instant in space and time. If we consider the speed of light as the speed of information, knowledge, and wisdom – it’s not surprising that the speed of light, in its own time scale, is the speed of thoughts. We are used to referring to ourselves as a physical body – and tend to regard the limits of our physical body as our own limits. We consume food, water and air, we don’t live forever, we are bound to a small physical location in space. But if we regard ourselves as our knowledge, wisdom and awareness – we can see that these are not limited by our physical body. They are not even limited by our physical universe. They are without boundaries and eternal.

If we consider our physical body as our hardware, and our knowledge and wisdom as our software – then while our hardware has limitations, our software does not. Our software can travel at the speed of light, and therefore – transcend all our limits of time and space. Our physical body, as a manifestation of us, is just an illusion. It’s not separate from the rest of the universe. The molecules, atoms and particles it is composed of change all the time. Our own self, or ego, as a separate entity from the rest of the universe is just an illusion as well. Space and time are a manifestation of our thoughts and are also illusions. The only thing which is not an illusion is our own existence in terms of eternal awareness or consciousness – what we sometimes refer to as Buddha, or Yehova.

### Light and the theory of relativity

According to modern physics, light is an electromagnetic wave in spacetime. Suppose that there is a star 20 million light years away from us, and a person is travelling there at a speed very close to the speed of light. Then, according to the theory of relativity, this person will get there in a very short time according to his own personal time. If his speed is close enough to the speed of light, he might get there in less than a second. This also means that the distance between this star and our planet is a very short distance for him. It can’t be more than the distance light itself can travel in one second – one light second.

Now, consider that light itself has a consciousness. When generalizing the laws of relativity to light itself, it appears that both the time to get there and the distance, in light’s time and light’s space, are zero. This means that if we look at the universe from the perspective of light itself, everything in the universe happens right here, right now. There is no space and no time from light’s perspective. The entire universe with all its galaxies is a small dot in space; its entire history and future are a small dot in time. The Big Bang is not an event in the past – it happens right here and right now. The end of the universe, whatever it is, is also happening now. It seems that light itself is very close to how we perceive Yehova – the one that exists everywhere, beyond the limits of space and time.

If light itself is able to perceive space and time – if it has a life beyond a small dot in space and time – then it’s possible that our universe is just a tiny event in light’s life. Each second, each nanosecond in light’s time may be a new universe. Each nanometer in light’s space as well. It might live in a universe in which each particle or quantum event is a universe in itself. Each particle or quantum event in our own universe might be a universe in itself as well. Each particle or quantum event in our universe might have a consciousness.

### What are black holes?

Black holes are separate universes within our universe, in which time goes backwards and order increases with time. The second law of thermodynamics is reversed in black holes. What we perceive as the future is the past in black holes – a singularity such as the Big Bang in our own universe is perceived in black holes as belonging to the past. However, from our perspective, this singularity in black holes belongs to the future. Because of this difference in the direction of time, we are not able to perceive what’s going on in black holes.

When the density of intelligence in an area of spacetime passes beyond some limit, time is reversed and a black hole is formed. Each black hole is a separate universe. When a civilization is advanced, it knows how to create new black holes (universes) in infinite numbers. These universes may be created to solve a problem, answer a question, or just out of curiosity. Within these universes life may be formed, and new black holes may be created ad infinitum. Our own universe may be a black hole in another universe, possibly created by intelligent life or civilization.

In each of these universes there exists an awareness or consciousness which is beyond space, beyond time. This awareness is what we call Yehova.

### The Arrow of Time

The concept of time machines is paradoxical. If we were able to go into the past, we might change things which affect our past and our present, thereby creating a paradoxical endless loop. If we could predict the future, we would be able to gamble and win the lottery, or any casino game. Therefore, it's not possible to change the past nor to predict the future. It is possible to travel into the remote future, using relativity for example – but it's not possible to come back. Time, as we perceive it, goes in one direction only – past, present, future.

I read two good books related to the issues of time:
1. A Brief History of Time (Hawking)
2. The Arrow of Time (Coveney and Highfield)
(You can also read about it on Wikipedia.)

I believe that the arrow of time as we perceive it, or what is called the thermodynamic arrow of time (the second law of thermodynamics), is not inherent in the physical world itself, but only in the way we perceive it. Therefore, it is possible to “go to the past”. But although it's physically possible to go to the past, we will never be able to perceive it. This is because of the way we perceive time. Our common sense just can't handle such things, due to our biological limitations. But that doesn't mean it's impossible.

Uri

### Generating random numbers

Is there an algorithm that can generate random numbers? Of course, the question is whether there is such an algorithm which is deterministic. But a deterministic algorithm always produces deterministic results. So the question is – can we provide an algorithm with a random seed, in such a way that it will produce an endless series of random numbers in an unpredictable way? The answer is obviously no. I can prove it!

First, I already showed that it's not possible to select a random number in the range 0 to 1 if all the possible numbers have the same probability. Since we can treat an infinite series of random bits (each 0 or 1 with the same probability) as a real number between 0 and 1, it seems that I have already proved it. But to be precise, I only proved that we can't tell such a selection – I didn't prove we can't make one. But if we can define an algorithm that can create such a series, this is equivalent to defining the number or telling it. Therefore, there exist series of 0s and 1s which no deterministic algorithm can create!

Second, consider the number of bits necessary to write down both the algorithm and the random seed, and call it n. There are only 2^n possibilities for strings of length n, and therefore the number of different series we can produce with n bits (both the algorithm and the random seed) is no more than 2^n. And for any known algorithm, only the number of bits in the random seed counts: with an n-bit seed, the algorithm will create no more than 2^n different series. This means that, in theory, if we see the beginning of a series given by this algorithm (the first m bits for some m), then we can tell the rest.
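This counting argument can be illustrated with a minimal Python sketch (the function name `bit_series` is mine, not a standard API): a deterministic generator with a fixed seed always reproduces exactly the same series of bits, so the algorithm plus the seed determines every future bit.

```python
import random

def bit_series(seed, length):
    # random.Random is a deterministic pseudo-random generator
    # (a Mersenne Twister): the seed fully determines its output.
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(length)]

a = bit_series(42, 20)
b = bit_series(42, 20)
print(a == b)  # True – same algorithm + same seed = the very same series
```

There are at most 2^n distinct seeds of n bits, hence at most 2^n distinct series this generator can ever produce.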

But we can still create series of numbers (or series of bits) which look random to us – that is, we can't tell any order in them, we can't compress them and so on. I like to describe this with the word entropy (taken from physics). I was about to define it, but it seems that there is already a good definition in Wikipedia. In other words – although we can't generate a true random series, we can generate a series which will look random to any observer. That's good enough.

I like to consider irrational numbers, like the square root of 2 for example, as series of pseudo-random bits (when written in binary form). That is – there is no order in them, we can't compress them and so on. While I will not try to prove it, I think it's quite obvious. If you don't believe me, then I'd like to give you a challenge – prove whether the googolth (10^100th) bit is 0 or 1 (that is, either prove it's 0, or prove it's 1). I bet you can't do it. And if for some strange reason you do succeed, try googol! (the factorial of a googol) or a googolplex (10^googol).

If you need a way to calculate the first n digits of the binary form of the square root of 2, here's a simple algorithm. Of course, you need to implement it yourself or use software which lets each number have n bits of precision (after the point).

1. Let x be any seed number, such as 1.
2. Let x2 = 2/x.
3. Let x be (x+x2)/2.
4. If the first n digits of x2 and x are not equal, go to step 2.
5. The result is x.
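Here is one way to run this loop exactly, using Python's unbounded integers as fixed-point numbers with n bits after the point (the function name `sqrt2_bits` is mine, and I seed with 2 instead of 1 – starting above the root makes the integer iteration provably terminate):

```python
def sqrt2_bits(n):
    """First n binary digits of sqrt(2), computed with the averaging
    loop above in fixed-point arithmetic (integers scaled by 2**n)."""
    two = 2 << (2 * n)         # the number 2, scaled for the division below
    x = 2 << n                 # seed: 2.0 in fixed point, above sqrt(2)
    while True:
        x2 = two // x          # step 2: x2 = 2 / x
        new_x = (x + x2) // 2  # step 3: x = (x + x2) / 2
        if new_x >= x:         # step 4: no more progress – converged
            return x           # step 5: x = floor(sqrt(2) * 2**n)
        x = new_x

print(bin(sqrt2_bits(12)))     # 0b1011010100000 = 1.011010100000 binary
```

Since the result is the floor of sqrt(2)·2^n and sqrt(2) is irrational, the n digits after the leading 1 are exactly the first n binary digits of the square root of 2.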

It can be proved that this algorithm converges very quickly – in about O(log2(n)) iterations. That is, calculating the first googol bits will take less than 500 iterations of this loop (actually around 332, which is the log of a googol in base 2). But the problem is, in order to make calculations with n bits of precision you will need to store n bits, and each iteration will take about O(n) steps in machine language. So the total amount of time to calculate the first n bits of the square root of 2 is about O(n*log2(n)). I doubt you will ever get to implement it for a googol!

Therefore, I can claim that there exists an integer n such that for every bit of the square root of 2 beyond the nth bit, we can't prove that it is 0 and we can't prove that it is 1. It's not that we can't calculate it – we can calculate it in theory, but it will take practically infinite time (that is, the calculation will never end during our lifetime). This is also related to the halting problem – the algorithm to calculate the first n bits of the square root of 2 will halt for every small n, but for big values of n it will never halt. Of course you can claim that in theory it will halt, but in reality there exists an integer n for which it will never halt. Even for relatively small numbers it will not halt – such as a googol, and you need only 333 bits to represent a googol in binary form.

Since we don’t know if the googolth bit of the square root of 2 (and any bit beyond it) is 0 or 1, we can define its probability to be 50% for either 0 or 1. I’m not going to prove this mathematically, but to me it makes sense. Although every bit is either definitely 0 or definitely 1 (according to mathematical logic), for us it has a 50%/50% probability of being either 0 or 1. This is like the uncertainty principle in quantum physics – we don’t know something, and we know we don’t know it, but we can define a probability for every possible value. The probability is not inherent in the value itself; it reflects our knowledge of it. Although the value of every bit of the square root of 2 is deterministic, we can look at it in a probabilistic way.
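One can at least check empirically that the known bits behave like fair coin flips. A small Python sketch, using the standard library's `math.isqrt` to obtain the digits and simply counting ones (the roughly-50/50 balance here is an empirical observation, not a proof):

```python
from math import isqrt

n = 1000
# floor(sqrt(2) * 2**n) written in binary is '1' followed by the
# first n digits of sqrt(2) after the binary point
digits = bin(isqrt(2 << (2 * n)))[3:]  # strip the leading '0b1'
ones = digits.count("1")
print(ones, n - ones)                  # roughly balanced counts
```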

### The axiom of choice

A few days ago I came across the Clay Mathematics Institute’s website, and the one million dollar prizes they set for the people who succeed in solving a few math problems. One of these problems is the famous P vs NP problem, which I came across during my studies of computer science. In the last few days I have read some information about related issues – mostly mathematics and computer science. The main source I used was Wikipedia (in English), which is a very good source of information about almost anything.

Among the articles I read in Wikipedia are articles about the halting problem, the axiom of choice, P vs NP and other interesting issues. I decided to comment about these issues, and that’s the main reason for starting my own weblog.

The first subject I would like to comment on is the axiom of choice, because I believe this axiom is not true. Here is a short explanation of why I think it is not true:

Suppose we have to select a random real number in a given range, say from 0 to 1. This can be achieved in various ways in real life, such as spinning a roulette wheel, throwing a dart, using a computer’s random number generator, etc. And suppose that each number in the range has an equal probability of being selected. Then I claim that this is already a paradox! And even if we limit ourselves only to rational numbers, it is still a paradox. It’s not possible to make such a selection. Only if we limit ourselves to rational numbers with a certain precision – that is, only if the set of possible numbers is finite – are we able to make such a selection. Or, if we don’t require that each number have an equal probability of being selected (this I will show later).

Why is it a paradox? Because the sum of all the probabilities of selecting each number must be exactly 1. If there are infinitely many possible numbers, and all the probabilities are equal, then each probability must be exactly 0 (this is easy to prove). And if a probability is 0, then it cannot occur! Of course this still has to be proved, but I think it’s already intuitive. Things with probability 0 just don’t happen!

I want to tackle this issue from a different point of view. The issue is not selecting a number, but telling which number was selected, or writing it down. Of course we can still claim that we can select a number even if we can’t tell it or write it down, but this is like claiming about anything else that we know it but can’t tell it. For example, I could claim that I know whether P and NP are equal but can’t tell you, or that I know whether any given algorithm halts but can’t tell you, and so on. Therefore, I would assume that if we can’t tell what number we selected and we can’t write it down, then we can’t make the selection. This is true for all practical purposes.

Now, let’s consider the ways we write down numbers. The most common way is decimal (or binary) representation (either fixed point or scientific notation). We can represent any real number in binary (or in any integer base). The problem is: for some numbers, this representation is infinite. For rational numbers it’s periodic, but for irrational numbers it isn’t. Therefore, if we wanted to write down an irrational real number in binary form, it would take us an infinite number of digits. Which leads to the question of whether such numbers really exist. This is a philosophical question. But if we assume they exist, we are able to write them down in alternative ways (not binary representation) – for example: e, pi, and the square root of 2.

Let’s consider all the ways to tell a number or write it down. We can encode each of them in binary form and save it to a binary file. Everything can be encoded into binary files: text, graphics, voice, etc. But with n binary bits, it’s possible to write down only 2^n different files. Therefore, no more than 2^n different numbers can be represented with n bits. The number 2^n grows exponentially, but it’s still finite. It’s not possible to represent an infinite number of different values with a finite number of bits.

How is this related to selecting a real number in a given range, or to the axiom of choice? Well, there is only a finite number of different numbers we can write down or tell with a finite number of bits (or in a finite time). Therefore, the set of numbers we can ever write down or tell is countable (I think this is what we call the definable numbers). So if a set is uncountable (for philosophical reasons, I don’t think any uncountable set really exists, but this is another story) – then it must contain numbers which we will never be able to tell! Or in other words, we can take the set of all the definable numbers (numbers we can tell) and subtract it from the set of real numbers, and the result will be a set of numbers none of which we will ever be able to tell. And therefore, we are not able to select an arbitrary number from that set.

Now, what about rational numbers? Can we randomly select a rational number in a given range? I say we can, but we can’t give all the numbers the same probability. We can give each number a positive probability (more than 0), but if we insist on giving each number the same probability – we will still not be able to tell which number we have selected. Or to be more accurate, the probability that we will be able to tell which number we have selected is 0. This is because for each number of bits n, there are only finitely many numbers that can be represented with up to n bits, but infinitely many which require more than n bits.

On the other hand, if we don’t require all the numbers to have the same probability, then we can definitely select a number within the given range. This is trivial and very easy to prove. We can just select bits at random, and after selecting each bit, randomly decide whether to stop or keep selecting more bits. We can show that each number has a positive probability of being selected. This can be shown for any countable set, not just for rational numbers.
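This stopping construction is easy to sketch in Python (the function name `select_rational` and the stopping probability of 1/2 are my choices; any fixed stopping probability strictly between 0 and 1 works):

```python
import random

def select_rational(p_stop=0.5, rng=random):
    """Select a random binary fraction in [0, 1): draw bits one at a
    time, and after each bit decide at random whether to stop.
    A specific k-bit string comes up with probability
    (1/2)**k * (1 - p_stop)**(k - 1) * p_stop, which is positive for
    every k – so every such number has a chance of being selected."""
    bits = []
    while True:
        bits.append(rng.randint(0, 1))  # select the next bit
        if rng.random() < p_stop:       # randomly decide whether to stop
            break
    value = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
    return value, bits

value, bits = select_rational()
print(bits, value)  # e.g. [1, 0, 1] -> 0.625
```

The stopping probabilities over all lengths sum to 1, so the procedure halts with probability 1 and the probabilities of all the finite bit strings also sum to 1 – no equal-probability paradox arises.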