Interesting Computational Complexity Question

In my recent article on a new, robust coefficient of correlation and R Squared, I mentioned an algorithm to generate random permutations:

Rudimentary algorithm to generate a random permutation (p(0), p(1), ... , p(n-1))

For k=0 to n-1, do:

  • Step 1. Generate p(k) as a random number on {0, ... , n-1}
  • Step 2. Repeat Step 1 until p(k) is different from p(0), p(1), ... , p(k-1)

End
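
Here is a minimal Python sketch of this rudimentary algorithm (the function name is my choice; the boolean array used for the membership test plays the role of the auxiliary array A discussed further down):

import random

def random_permutation_rejection(n):
    # Rudimentary algorithm: draw p(k) uniformly on {0, ..., n-1},
    # retrying until it differs from p(0), ..., p(k-1).
    p = [0] * n
    used = [False] * n                         # auxiliary array A, initialized to 0
    for k in range(n):
        while True:
            candidate = random.randrange(n)    # Step 1: random number on {0, ..., n-1}
            if not used[candidate]:            # Step 2: retry on any collision
                break
        p[k] = candidate
        used[candidate] = True
    return p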

I also asked what the computational complexity of this algorithm is, as I was comparing it with two alternate algorithms:

  • Create n random numbers on [0, 1]; then p(k) is simply the rank of the k-th deviate (k = 0, ..., n-1). From a computational complexity point of view, this is equivalent to sorting, and thus it is O(n log n).
  • Use the one-to-one mapping between n-permutations and the numbers 1, ..., n!. However, transforming a number into a permutation using the factorial number representation is O(n^2). A sketch of both alternatives follows this list.
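
Both alternatives are easy to sketch in Python (the function names are mine; the second function decodes the factorial representation, also known as the Lehmer code):

import math
import random

def random_permutation_by_ranking(n):
    # Alternate 1: rank n uniform deviates; equivalent to sorting, hence O(n log n).
    u = [random.random() for _ in range(n)]
    order = sorted(range(n), key=lambda k: u[k])   # indices sorted by their deviate
    p = [0] * n
    for rank, k in enumerate(order):
        p[k] = rank                                # p(k) = rank of the k-th deviate
    return p

def permutation_from_index(m, n):
    # Alternate 2: map an integer m in {0, ..., n!-1} to a permutation via the
    # factorial number system; the repeated list deletions make this O(n^2).
    digits = []
    for radix in range(1, n + 1):
        m, d = divmod(m, radix)                    # peel off the factorial digits
        digits.append(d)
    digits.reverse()                               # most significant digit first
    pool = list(range(n))
    return [pool.pop(d) for d in digits]           # each pop is O(n), so O(n^2) overall

print(permutation_from_index(random.randrange(math.factorial(5)), 5))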

You would expect my rudimentary algorithm to be terribly slow. In principle, it might never end, running indefinitely even just to produce p(1), let alone p(n-1), which is considerably harder to produce.

Yet it turns out that my rudimentary algorithm is also O(n log n). Proving this fact would actually be a great job interview question. Here's the outline:

For positive k strictly smaller than n, you can create p(k) either

  • on the first shot with probability L(k,1) = 1 - k/n.
  • on the second shot with probability L(k,2) = (k/n) * (1 - k/n)
  • on the third shot with probability L(k,3) = (k/n)^2 * (1 - k/n)
  • on the fourth shot with probability L(k,4) = (k/n)^3 * (1 - k/n)

etc. In general, L(k,j) = (k/n)^(j-1) * (1 - k/n): the probability of j-1 collisions followed by one success. Note that the number of shots required might be infinite (though only with probability 0).

Thus, on average, you need M(k) = SUM{ j * L(k,j) } = n/(n-k) shots to create p(k), where the sum is over all positive integers j = 1, 2, 3, ... (this is just the mean of a geometric distribution with success probability (n-k)/n). The total number of operations is thus, on average, SUM{ M(k) } = n * {1 + 1/2 + 1/3 + ... + 1/n}, where the sum is over k = 0, 1, ..., n-1. Since the harmonic sum 1 + 1/2 + ... + 1/n grows like log n, this is O(n log n).

Note that in terms of data structure, we need an auxiliary array A of size n, initialized to 0. When p(k) is created (in the last, successful shot at iteration k), we update A as follows: A[p(k)] = 1. This array is used to check whether p(k) is different from p(0), p(1), ..., p(k-1), or in other words, whether A[p(k)] = 0.
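
As a quick sanity check of this count (my own simulation sketch, not code from the original article), the empirical average number of shots should track n times the n-th harmonic number:

import random

def count_shots(n):
    # Total number of random draws used by the rudimentary algorithm.
    used = [False] * n
    shots = 0
    for k in range(n):
        while True:
            shots += 1
            candidate = random.randrange(n)
            if not used[candidate]:
                used[candidate] = True
                break
    return shots

n, trials = 1000, 200
average = sum(count_shots(n) for _ in range(trials)) / trials
harmonic = sum(1 / j for j in range(1, n + 1))
print(average, n * harmonic)   # both should be close to about 7485 for n = 1000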

Is O(n log n) the best that we can achieve? No, the algorithm below is O(n):

Fast Algorithm (Fisher–Yates shuffle)

p(k) ← k, k=0,...,n-1

For k=n-1 down to 1, do:

  • j ← random integer with 0 ≤ j ≤ k
  • exchange p(j) and p(k)

End
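
A direct Python translation of this shuffle (the function name is mine):

import random

def fisher_yates(n):
    # Fisher–Yates shuffle: a uniform random permutation in O(n) time.
    p = list(range(n))
    for k in range(n - 1, 0, -1):
        j = random.randrange(k + 1)    # random integer with 0 <= j <= k
        p[j], p[k] = p[k], p[j]        # exchange p(j) and p(k)
    return p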

By the way, what is the name of my algorithm? It must have been invented long ago, although it takes less time to reinvent it and assess its computational complexity than it takes to find it in the literature.
