Analysis of Time Complexity

This set of exercises accompanies the Week 1 lecture of the course Algorithm Design and Analysis. In order to do the exercises, you need to read the corresponding lecture notes.

Short Summary

For the running time of an algorithm we can determine an upper bound (Big-O), a lower bound (Big-Omega) and a tight bound (Big-Theta).

Determining the asymptotic complexity of recursive algorithms can be difficult. First you have to identify the variable (or expression!) that acts as the "size of the problem"; call it n. Then you have to find T(n), the execution time as a function of the problem size. For this, you will have to solve the recurrence relation that describes the recursive algorithm.

General methods for solving recurrence relations are the substitution method and the recursion-tree method. For a particular class of recurrences (divide-and-conquer recurrences) the result is also given directly by the Master Theorem.

Sometimes solving a recurrence relation exactly is mathematically difficult and we cannot calculate the tight bound. In this case we can introduce approximations, which let us determine only lower and upper bounds. Sometimes intuition may help to guess the complexity function, but it must then be proven that the guess satisfies the recurrence relation.

Simple exercises

Determine the asymptotic complexity (tight bounds) of the following algorithms, using both the substitution method and the recursion-tree method, and, where applicable, verify your result by applying the Master Theorem.

Towers of Hanoi

HANOI(n, src, dst, tmp):
    if n > 0
          HANOI(n-1, src, tmp, dst)
          move disk n from src to dst
          HANOI(n-1, tmp, dst, src)
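
The recurrence here is T(n) = 2T(n-1) + 1 with T(0) = 0, which solves to T(n) = 2^n - 1 moves. A minimal Python transcription of the pseudocode (the `moves` list is an added instrumentation helper, not part of the original algorithm, used only to count the work):

```python
def hanoi(n, src, dst, tmp, moves):
    # T(n) = 2*T(n-1) + 1, T(0) = 0  =>  T(n) = 2^n - 1 moves
    if n > 0:
        hanoi(n - 1, src, tmp, dst, moves)
        moves.append((n, src, dst))  # move disk n from src to dst
        hanoi(n - 1, tmp, dst, src, moves)

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))  # 2^3 - 1 = 7
```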

Recursive Power function

POWER(a, n):
    if n = 1
        return a
    else
        x = POWER(a, n/2)
        if n is even
            return x * x
        else
            return a * x * x
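
Since each call makes only one recursive call on a problem of half the size, the recurrence is T(n) = T(n/2) + O(1). A direct Python sketch, assuming n >= 1 and that n/2 in the pseudocode means integer (floor) division:

```python
def power(a, n):
    # T(n) = T(n/2) + O(1)  =>  T(n) = O(log n); assumes n >= 1
    if n == 1:
        return a
    x = power(a, n // 2)  # n/2 is integer (floor) division
    if n % 2 == 0:
        return x * x
    else:
        return a * x * x

print(power(2, 10))  # 1024
```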

Recursive MAX

MAX(A[1..n]) is:
	if n=1 
		return A[1]
	else 
		m1=MAX(A[1 .. n/2])
		m2=MAX(A[n/2+1 .. n])
		if (m1 is bigger than m2)  return m1
		else return m2

Recursive MIN

The recursive MIN algorithm has the same recurrence as the recursive MAX. Pay attention to the detail of identifying the "size of the problem" argument in this case! The size of the problem is the size of the subarray, and it is given by the difference of the two arguments of MIN.
A is array[1..n]

MIN(left, right) is:
        if left == right
                return A[left]
        else
                middle = (left+right)/2
                m1 = MIN(left, middle)
                m2 = MIN(middle+1, right)
                if (m1 is smaller than m2) return m1
                else return m2

...

Function MIN is initially called as MIN(1, n).
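
With the problem size n = right - left + 1, the recurrence is T(n) = 2T(n/2) + O(1), which gives T(n) = O(n). A 0-based Python transcription (the sample array `A` is an illustrative assumption):

```python
A = [7, 3, 9, 1, 4, 6, 2, 8]  # sample data; the 1-based pseudocode becomes 0-based here

def rec_min(left, right):
    # size of the problem: right - left + 1
    # T(n) = 2*T(n/2) + O(1)  =>  T(n) = O(n)
    if left == right:
        return A[left]
    middle = (left + right) // 2
    m1 = rec_min(left, middle)
    m2 = rec_min(middle + 1, right)
    return m1 if m1 < m2 else m2

print(rec_min(0, len(A) - 1))  # 1
```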

Difficult exercises

Divide-and-Conquer Matrix Multiplication

Determine the asymptotic complexity (tight bounds) of the RECURSIVE_SQUARE_MATRIX_MULTIPLY algorithm, using either the substitution method or the recursion-tree method. You can check your result with the Master Theorem.

Given two square matrices A and B of size N x N each, where N is a power of 2, the algorithm finds their matrix product.


RECURSIVE_SQUARE_MATRIX_MULTIPLY (A, B: SQUARE_MATRIX[1..N, 1..N])
    Returns SQUARE_MATRIX[1..N, 1..N] is:

    if (N == 1)
        c11 = a11 * b11
        Return C
    else
        * Divide A into quarters (A11, A12, A21, A22)
        * Divide B into quarters (B11, B12, B21, B22)
        P1 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A11, B11)
        P2 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A12, B21)
        P3 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A11, B12)
        P4 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A12, B22)
        P5 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A21, B11)
        P6 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A22, B21)
        P7 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A21, B12)
        P8 = RECURSIVE_SQUARE_MATRIX_MULTIPLY(A22, B22)
        C11 = MATRIX_ADD(P1, P2)
        C12 = MATRIX_ADD(P3, P4)
        C21 = MATRIX_ADD(P5, P6)
        C22 = MATRIX_ADD(P7, P8)
        * Build C from quarters
        Return C

The algorithm works in the following way:

If the size of the matrices is N == 1, the result is obtained by a single scalar multiplication.

If N > 1, the matrices A and B are each divided into 4 sub-matrices of size N/2 x N/2: A11, A12, A21, A22 and B11, B12, B21, B22.

We compute:

C11 = A11 x B11 + A12 x B21

C12 = A11 x B12 + A12 x B22

C21 = A21 x B11 + A22 x B21

C22 = A21 x B12 + A22 x B22

The result matrix is assembled from sub-matrices C11, C12, C21, C22.

We do 8 multiplications of matrices of size N/2 x N/2 (applying the same strategy recursively) and 4 additions of matrices of size N/2 x N/2.

We know that adding two matrices (MATRIX_ADD) can be done by the standard algorithm in quadratic time.

Dividing matrices into quarters can be done in constant time by avoiding copying elements: use index calculations instead, identifying submatrices by a range of row indices and a range of column indices.
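
The index-calculation idea can be sketched in Python as follows. This is a variant, not the literal pseudocode: it passes row/column offsets instead of materializing the quarters (so the divide step is O(1)), and it folds the MATRIX_ADD calls into in-place accumulation, which does not change the recurrence T(N) = 8T(N/2) + Theta(N^2), i.e. T(N) = Theta(N^3) by the Master Theorem:

```python
def rsmm(A, B, C, ar, ac, br, bc, cr, cc, n):
    # Multiply the n x n blocks of A at (ar, ac) and B at (br, bc),
    # accumulating the product into the n x n block of C at (cr, cc).
    # Offsets replace copying, so the "divide" step costs O(1).
    # T(N) = 8*T(N/2) + Theta(N^2)  =>  T(N) = Theta(N^3)
    if n == 1:
        C[cr][cc] += A[ar][ac] * B[br][bc]
        return
    h = n // 2
    # C11 += A11*B11 + A12*B21, C12 += A11*B12 + A12*B22, etc.
    rsmm(A, B, C, ar,     ac,     br,     bc,     cr,     cc,     h)
    rsmm(A, B, C, ar,     ac + h, br + h, bc,     cr,     cc,     h)
    rsmm(A, B, C, ar,     ac,     br,     bc + h, cr,     cc + h, h)
    rsmm(A, B, C, ar,     ac + h, br + h, bc + h, cr,     cc + h, h)
    rsmm(A, B, C, ar + h, ac,     br,     bc,     cr + h, cc,     h)
    rsmm(A, B, C, ar + h, ac + h, br + h, bc,     cr + h, cc,     h)
    rsmm(A, B, C, ar + h, ac,     br,     bc + h, cr + h, cc + h, h)
    rsmm(A, B, C, ar + h, ac + h, br + h, bc + h, cr + h, cc + h, h)

def matmul(A, B):
    n = len(A)  # n must be a power of 2
    C = [[0] * n for _ in range(n)]
    rsmm(A, B, C, 0, 0, 0, 0, 0, 0, n)
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```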

Cruel and Unusual

Thanks to Jeff Erickson for this cruel and unusual exercise:

Determine the asymptotic complexity of the CRUEL algorithm. Of course, you will have to determine the complexity of UNUSUAL first! You can use either the substitution method or the recursion-tree method and verify the results with the Master Theorem.

The functions take as argument an array of n elements. A precondition for CRUEL and UNUSUAL is that the input size n is a power of 2. (You may try to find out what this CRUEL and UNUSUAL algorithm is actually doing ;-) You can even try its Java version code.)

CRUEL(A[1 .. n]) is :
   if n > 1
      CRUEL(A[1 .. n/2])   // recursive call on left half of array
      CRUEL(A[n/2+1 .. n]) // recursive call on right half of array
      UNUSUAL(A[1 .. n])   // call of other function UNUSUAL


UNUSUAL(A[1 .. n]) is :
   if n = 2
        if A[1] > A[2]
            swap (A[1],A[2])
   else
         for i=1 to n/4 
               swap (A[i + n/4], A[i + n/2])  // swap 2nd and 3rd quarters
         UNUSUAL(A[1 .. n/2])           // recursive call on left half 
         UNUSUAL(A[n/2+1 .. n])         // recursive call on right half 
         UNUSUAL(A[n/4+1 .. 3n/4])      // recursive call on middle half 
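
For experimenting, here is a 0-based Python transcription of the pseudocode, operating in place on a slice described by a start index `lo` and a length `n` (both power-of-2 preconditions assumed, as stated above). The recurrences to analyze are U(n) = 3U(n/2) + Theta(n) for UNUSUAL and C(n) = 2C(n/2) + U(n) for CRUEL:

```python
def cruel(A, lo, n):
    # C(n) = 2*C(n/2) + U(n); n must be a power of 2
    if n > 1:
        cruel(A, lo, n // 2)            # recurse on left half
        cruel(A, lo + n // 2, n // 2)   # recurse on right half
        unusual(A, lo, n)

def unusual(A, lo, n):
    # U(n) = 3*U(n/2) + Theta(n)
    if n == 2:
        if A[lo] > A[lo + 1]:
            A[lo], A[lo + 1] = A[lo + 1], A[lo]
    else:
        q = n // 4
        for i in range(q):  # swap 2nd and 3rd quarters
            A[lo + q + i], A[lo + 2 * q + i] = A[lo + 2 * q + i], A[lo + q + i]
        unusual(A, lo, n // 2)           # left half
        unusual(A, lo + n // 2, n // 2)  # right half
        unusual(A, lo + q, n // 2)       # middle half

arr = [3, 1, 4, 1, 5, 9, 2, 6]
cruel(arr, 0, len(arr))
print(arr)  # the array is now sorted
```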

(*) Analysis by approximation or other methods

Recursive binomial coefficients

A binomial coefficient C(n, k) is the number of ways you can pick k elements out of n.

It can be computed by the following recursive algorithm:

C(n, k) = 1, if k == n or k == 0
C(n, k) = C(n-1, k) + C(n-1, k-1), otherwise

What is the time complexity of the recursive binomial coefficients algorithm? (It is something really big, as you can see by running its implementation.) Try to determine the asymptotic complexity (tight bounds) of this algorithm. If you cannot solve the recurrences, try approximations.