Adding Randomisation To Complexity Classes

This is an introduction and takes a forest view: the big picture rather than the formal details.

Effect of Randomisation

Let’s discuss the superficial effects of adding randomisation to an algorithm.

If I am tasked with verifying a claim, and randomness is used, there is a chance that the claim is declared true only due to the randomness.

An example. Suppose we want to find out whether a number is prime, and instead of deterministically checking, we use randomness to help. In particular, we choose a random number and check if it divides our supposed prime.

fn is_prime(n: u64, random_number: u64) -> bool {
    // The candidate divisor must lie in 2..=sqrt(n); every composite n has
    // a divisor in this range. (Can ignore this, if you've never seen it before)
    assert!(random_number >= 2 && random_number * random_number <= n);
    // If the random divisor does not divide n, we (optimistically) report "prime"
    n % random_number != 0
}

Imagine we call it with the following input: is_prime(20, 3). 20 is not prime; however, because the random number chosen was 3, which does not divide 20, the function returns true.
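For instance, calling it on both a lucky and an unlucky divisor (a hypothetical main, using the is_prime sketch above):

fn main() {
    assert!(is_prime(20, 3));  // 3 does not divide 20: wrongly reports "prime"
    assert!(!is_prime(20, 4)); // 4 divides 20: correctly reports "not prime"
}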

We can run the algorithm many times to get better confidence. However, the purpose of the example is to show that randomised algorithms are probabilistic in nature and can pass when they should not.

This is different from the complexity classes P and NP, which are deterministic in nature.

Does the probability matter?

Although I did not explicitly mention it, in our algorithm there was some probability that is_prime outputs true for 20. Let’s say that this probability is $1/k$. If I repeat the algorithm again with fresh randomness, the chance that both runs are fooled is $(1/k)^2$. If I repeat it $n$ times, it is $(1/k)^n$.
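A minimal sketch of this repetition, assuming the rand crate and reusing the divisor-check idea from above (is_probably_prime and repetitions are illustrative names, not standard ones):

use rand::Rng;

// Run the one-shot divisor check `repetitions` times with fresh randomness.
// A composite n fools every run with probability roughly (1/k)^repetitions.
fn is_probably_prime(n: u64, repetitions: u32) -> bool {
    let limit = (n as f64).sqrt() as u64;
    if limit < 2 {
        return true; // no candidate divisors in 2..=sqrt(n); the sketch just accepts
    }
    let mut rng = rand::thread_rng();
    (0..repetitions).all(|_| n % rng.gen_range(2..=limit) != 0)
}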

In practice, we care greatly about what the value of $k$ is, but in theory we just want to know whether the algorithm is randomised at all. So whether you put $1/k$ or $(1/k)^2$ is irrelevant when defining the probability threshold.

Repeatedly running a randomised algorithm a polynomial number of times does not make it “more” randomised or “more” deterministic.

Polynomial number of times?

This is wording that comes up in every other paragraph of a cryptography paper. Complexity theory can be seen as the study of how efficient algorithms are, but in order to compare the efficiency of algorithms, we need to define what we mean by efficient. The notion of efficiency that we use is polynomial time in the size of the input: the class P. For example, an algorithm taking $n^2$ steps on inputs of size $n$ is efficient in this sense, while one taking $2^n$ steps is not. This choice is not arbitrary; however, it is also not set in stone. You can choose a different class to associate with efficient algorithms, but the theory may become more complicated than it needs to be. Without going into the details, polynomial time seems to be the most reasonable, natural and general choice of efficiency.

IP

As a passing note, in crypto we regularly use another complexity class called IP, which, in addition to randomness, also adds interaction. As a consequence of using randomness, when we define this complexity class, we will see some notion of probability.
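To give a rough flavour of where that probability appears (the exact constants vary by presentation): a language $L$ is in IP if there is an efficient randomised verifier $V$ such that, for every $x \in L$, some prover makes $V$ accept with probability at least $2/3$, while for every $x \notin L$, no prover can make $V$ accept with probability more than $1/3$.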

Ways To Add Randomisation

In the algorithm above, the randomness was supplied as an input, meaning it was computed elsewhere and passed to the algorithm. This is called offline randomised computation.

Another way to add randomisation is to compute it inside the algorithm itself. This is called online randomised computation. The sketch below contrasts the two styles.
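A minimal sketch of the two styles, again assuming the rand crate (the function names are my own):

use rand::Rng;

// Offline: the randomness is computed elsewhere and supplied as an input.
fn check_offline(n: u64, random_divisor: u64) -> bool {
    n % random_divisor != 0
}

// Online: the algorithm samples its own randomness internally.
fn check_online(n: u64) -> bool {
    let limit = ((n as f64).sqrt() as u64).max(2);
    let random_divisor = rand::thread_rng().gen_range(2..=limit);
    n % random_divisor != 0
}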

Why Add Randomisation In The First Place?

Adding randomisation gives rise to a few new complexity classes. This in turn means that there may be problems that deterministic algorithms cannot efficiently solve, but randomised algorithms can. Unless, that is, we show that these new complexity classes are equal to their deterministic counterparts!