The C++ Course provides a general introduction to programming in C++. It is based on A. B. Downey's book, How to Think Like a Computer Scientist.
See also: Overhead  
Performance Analysis
When we compare algorithms, we would like a way to tell when one is faster than another, takes less space, or uses less of some other resource. It is hard to answer those questions in detail, because the time and space used by an algorithm depend on the implementation of the algorithm, the particular problem being solved, and the hardware the program runs on. The objective of this section is to develop a way of talking about performance that is independent of all of those things and depends only on the algorithm itself. To start, we will focus on run time; later we will talk about other resources.
To make this process more concrete, consider two algorithms we have already seen for sorting an array of integers. The first is selection sort, which we saw in Section 13.7. Here is the pseudocode we used there:

    selectionSort (array) {
        for (int i = 0; i < array.length(); i++) {
            // find the lowest item at or to the right of i
            // swap the ith item and the lowest item
        }
    }

To perform the operations specified in the pseudocode, we wrote helper methods named findLowest and swap. In pseudocode, findLowest looks like this:

    // find the index of the lowest item between
    // i and the end of the array
    findLowest (array, i) {
        // lowest contains the index of the lowest item so far
        lowest = i;
        for (int j = i+1; j < array.length(); j++) {
            // compare the jth item to the lowest item so far
            // if the jth item is lower, replace lowest with j
        }
        return lowest;
    }

And swap looks like this:

    swap (i, j) {
        // store a reference to the ith item in temp
        // make the ith element of the array refer to the jth item
        // make the jth element of the array refer to temp
    }

To analyze the performance of this algorithm, the first step is to decide what operations to count. Obviously, the program does a lot of things: it increments i, compares it to the length of the array, searches for the lowest element, and so on. It is not obvious what the right thing to count is. It turns out that a good choice is the number of times we compare two items. Many other choices would yield the same result in the end, but comparisons are easy to count, and counting them will make it easy to compare this algorithm with other sorting algorithms.

The next step is to define the "problem size." In this case it is natural to choose the size of the array, which we will call n. Finally, we would like to derive an expression that tells us how many abstract operations (specifically, comparisons) we have to perform, as a function of n. We start by analyzing the helper methods.
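The pseudocode above can be fleshed out into working C++. This is a minimal sketch, not the course's official implementation; it assumes the array is a std::vector<int> and keeps the names from the pseudocode:

```cpp
#include <vector>

// Find the index of the lowest item between i and the end of the array.
int findLowest(const std::vector<int>& array, int i) {
    int lowest = i;                        // index of the lowest item so far
    for (int j = i + 1; j < (int)array.size(); j++) {
        if (array[j] < array[lowest]) {    // if the jth item is lower,
            lowest = j;                    // replace lowest with j
        }
    }
    return lowest;
}

// Swap the ith and jth elements of the array.
void swap(std::vector<int>& array, int i, int j) {
    int temp = array[i];    // store the ith item in temp
    array[i] = array[j];    // make the ith element the jth item
    array[j] = temp;        // make the jth element temp
}

void selectionSort(std::vector<int>& array) {
    for (int i = 0; i < (int)array.size(); i++) {
        // find the lowest item at or to the right of i
        // and swap it with the ith item
        swap(array, i, findLowest(array, i));
    }
}
```

Note that findLowest returns an index into the array, not the value stored there; selectionSort passes that index straight to swap.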
swap copies several references, but it doesn't perform any comparisons, so we ignore the time spent performing swaps. findLowest starts at i and traverses the rest of the array, comparing each item to lowest. The number of items it looks at is n−i, so the total number of comparisons is n−i−1. Next we consider how many times findLowest gets invoked and what the value of i is each time. The last time it is invoked, i is n−2, so the number of comparisons is 1. The previous iteration performs 2 comparisons, and so on. During the first iteration, i is 0 and the number of comparisons is n−1. So the total number of comparisons is

    1 + 2 + ··· + (n−1)

This sum is equal to n^2/2 − n/2. To describe this algorithm, we would typically ignore the lower-order term (n/2) and say that the total amount of work is proportional to n^2. Since the leading-order term is quadratic, we might also say that this algorithm takes quadratic time.

