Performance Analysis

When we compare algorithms, we would like a way to tell whether one is faster than another, takes less space, or uses less of some other resource. It is hard to answer those questions in detail, because the time and space an algorithm uses depend on the implementation of the algorithm, the particular problem being solved, and the hardware the program runs on.

The objective of this section is to develop a way of talking about performance that is independent of all of those things, and only depends on the algorithm itself. To start, we will focus on run time; later we will talk about other resources.

Our decisions are guided by a series of constraints:

  1. First, the performance of an algorithm depends on the hardware it runs on, so we usually don't talk about run time in absolute terms like seconds. Instead, we count the number of abstract operations the algorithm performs (see the sketch just after this list).
  2. Second, performance often depends on the particular problem we are trying to solve -- some problems are easier than others. To compare algorithms, we usually focus on either the worst-case scenario or an average (or common) case.
  3. Third, performance depends on the size of the problem (usually, but not always, the number of elements in a collection). We address this dependence explicitly by expressing run time as a function of problem size.
  4. Finally, performance depends on details of the implementation like object allocation overhead and method invocation overhead. We usually ignore these details because they don't affect the rate at which the number of abstract operations increases with problem size.

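To illustrate the first point, here is a small sketch of our own (the class and method names are ours, not from the book): instead of timing a linear search in seconds, we count the comparisons it performs, a number that does not depend on the machine.

    public class CountingSearch {
        // the number of abstract operations (comparisons) performed
        static int comparisons = 0;

        // return the index of target in array, or -1 if it is absent
        static int linearSearch(int[] array, int target) {
            for (int i = 0; i < array.length; i++) {
                comparisons++;              // count one comparison
                if (array[i] == target) {
                    return i;
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] array = {3, 1, 4, 1, 5, 9, 2, 6};
            linearSearch(array, 6);
            // in the worst case the count equals array.length
            System.out.println("comparisons = " + comparisons);
        }
    }

Running this on faster or slower hardware changes the run time in seconds, but the printed count stays the same.
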
To make this process more concrete, consider two algorithms we have already seen for sorting an array of integers. The first is selection sort, which we saw in Section 12.2. Here is the pseudocode we used there, filled in as runnable Java:

    public static void selectionSort(int[] array) {
        for (int i = 0; i < array.length - 1; i++) {
            // find the index of the lowest item at or to the right of i
            int lowest = findLowest(array, i);
            // swap the ith item and the lowest item
            swap(array, i, lowest);
        }
    }

To perform the operations specified in the comments, we wrote helper methods named findLowest and swap. In Java, findLowest looks like this:

    // find the index of the lowest item between
    // i and the end of the array

    public static int findLowest(int[] array, int i) {
        // lowest contains the index of the lowest item so far
        int lowest = i;
        for (int j = i + 1; j < array.length; j++) {
            // if the jth item is lower, replace lowest with j
            if (array[j] < array[lowest]) {
                lowest = j;
            }
        }
        return lowest;
    }

And swap, which needs the array as well as the two indices, looks like this:

    public static void swap(int[] array, int i, int j) {
        // store the ith element in temp
        int temp = array[i];
        // make the ith element equal to the jth
        array[i] = array[j];
        // make the jth element equal to temp
        array[j] = temp;
    }

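To check that the pieces fit together, here is a small test of our own (not from the book); it assumes the three methods above live in the same class:

    public static void main(String[] args) {
        int[] array = {5, 2, 9, 1, 7};
        selectionSort(array);
        // prints [1, 2, 5, 7, 9]
        System.out.println(java.util.Arrays.toString(array));
    }
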
To analyze the performance of this algorithm, the first step is to decide what operations to count. Obviously, the program does a lot of things: it increments i, compares i to the length of the array, searches for the lowest element of the array, and so on. It is not obvious which of these is the right thing to count.

It turns out that a good choice is the number of times we compare two items. Many other choices would yield the same result in the end, but comparisons are easy to count, and counting them will make it easy to compare selection sort with other sorting algorithms.

The next step is to define the "problem size." In this case it is natural to choose the size of the array, which we'll call n.

Finally, we would like to derive an expression that tells us how many abstract operations (specifically, comparisons) we have to do, as a function of n.

We start by analyzing the helper methods. swap copies a few elements, but it doesn't perform any comparisons, so we ignore the time spent performing swaps. findLowest starts at i and traverses the rest of the array, comparing each item to the lowest so far. It examines the n-i items from i to the end of the array, so it performs n-i-1 comparisons.

Next we consider how many times findLowest gets invoked and what the value of i is each time. The last time it is invoked, i is n-2, so it performs 1 comparison. The previous invocation performs 2 comparisons, and so on. During the first invocation, i is 0 and the number of comparisons is n-1. For example, with n = 5 the successive invocations perform 4, 3, 2, and 1 comparisons.

So the total number of comparisons is 1 + 2 + ··· + (n-1). This sum, the sum of the first n-1 integers, is n(n-1)/2, which equals n²/2 - n/2. To describe this algorithm, we would typically ignore the lower-order term (n/2) and say that the total amount of work is proportional to n². Since the leading-order term is quadratic, we might also say that this algorithm takes quadratic time.
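
As a sanity check, we can instrument the sort with a counter and confirm the formula empirically. The following sketch is our own (the class name and use of java.util.Random are assumptions, not from the book); it sorts random arrays of doubling sizes and prints the measured count next to n(n-1)/2:

    import java.util.Random;

    public class SortCounter {
        static long comparisons = 0;

        static int findLowest(int[] array, int i) {
            int lowest = i;
            for (int j = i + 1; j < array.length; j++) {
                comparisons++;                  // count every comparison
                if (array[j] < array[lowest]) {
                    lowest = j;
                }
            }
            return lowest;
        }

        static void swap(int[] array, int i, int j) {
            int temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

        static void selectionSort(int[] array) {
            for (int i = 0; i < array.length - 1; i++) {
                swap(array, i, findLowest(array, i));
            }
        }

        public static void main(String[] args) {
            Random random = new Random();
            for (int n = 1000; n <= 8000; n *= 2) {
                int[] array = new int[n];
                for (int k = 0; k < n; k++) {
                    array[k] = random.nextInt();
                }
                comparisons = 0;
                selectionSort(array);
                System.out.println(n + ": measured " + comparisons
                                   + ", formula " + (long) n * (n - 1) / 2);
            }
        }
    }

Each time n doubles, the count roughly quadruples, which is the signature of quadratic growth.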


