Problem : Why do we need to be able to measure the efficiency of an algorithm?

Knowing how efficient an algorithm is helps us estimate how efficient an application that uses it will be. It also helps us choose between multiple algorithms that accomplish the same task.

Problem : Two algorithms are both run on the same machine. The first takes 10 seconds to accomplish its task while the second takes 20 seconds. In general, which algorithm is more efficient?

We don't know! Most importantly, we don't know the size of the input given to each algorithm. The input may have been extremely small, and on larger inputs the second algorithm could be much more efficient. More on this when we cover Big-O notation in later sections.
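As a sketch of why a single timing proves little, suppose (hypothetically) the first algorithm performs about n*n basic steps and the second about 100*n steps. The specific step counts are illustrative assumptions, not taken from the problem, but they show how the "slower" algorithm on a small input wins on a large one:

```python
# Hypothetical step counts (assumed for illustration, not from the problem):
# algorithm one takes roughly n*n steps, algorithm two roughly 100*n steps.
def steps_one(n):
    return n * n

def steps_two(n):
    return 100 * n

# On a small input, algorithm one is faster (matching the 10s vs 20s result)...
assert steps_one(5) < steps_two(5)        # 25 < 500
# ...but on a larger input, algorithm two is far more efficient.
assert steps_two(1000) < steps_one(1000)  # 100_000 < 1_000_000
```

The crossover here happens at n = 100; which side of such a crossover a benchmark sits on depends entirely on the input size used.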

Problem : Define the "cost" of an algorithm.

The cost of an algorithm is how much of the system's resources it uses: memory, disk space, running time, and so on. This cost can and often will change with the size of the input. Stated more simply, the cost of an algorithm is how much computing power and time it takes to run.
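One way to see cost growing with input size is to time the same task on inputs of different sizes. This is a minimal sketch, assuming a toy `work` function that simply sums the first n integers; the function name and sizes are illustrative:

```python
import time

def work(n):
    # A simple task whose cost grows with the input size n.
    total = 0
    for i in range(n):
        total += i
    return total

for n in (1_000, 100_000):
    start = time.perf_counter()
    work(n)
    elapsed = time.perf_counter() - start
    # The measured time is one component of the algorithm's cost;
    # it generally increases as n increases.
    print(f"n={n}: {elapsed:.6f} seconds")
```

Memory usage could be measured in a similar spirit, but as the next problem shows, raw timings like these come with caveats.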

Problem : What are three drawbacks to using the "real running time" of an algorithm to measure its efficiency?

The real running time depends on the input size, on the particular implementation of the algorithm, and on the platform it runs on. Two measurements that differ in any of these (a larger input, a different programming language, a faster machine) can give very different times for the same algorithm.

Problem : How else could we measure the efficiency of an algorithm such that these concerns about real running time are inconsequential?

Abstract time: instead of measuring seconds, we count the number of basic operations the algorithm performs as a function of its input size. Go on to the next section.
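As a sketch of abstract time, we can count basic operations (here, comparisons in a linear search) rather than measure the clock. The helper below is illustrative, not part of the original text; the count it returns is the same on any machine and in any language, which is exactly why abstract time sidesteps the concerns above:

```python
def linear_search_steps(items, target):
    # Abstract cost: count comparisons instead of measuring seconds,
    # so the result does not depend on hardware or implementation speed.
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return steps
    return steps

# Worst case: the target is absent, so every element is compared once.
assert linear_search_steps([3, 1, 4, 1, 5], 9) == 5
# Best case: the target is the first element.
assert linear_search_steps([3, 1, 4], 3) == 1
```

Saying "linear search makes at most n comparisons on a list of n items" is a statement in abstract time, and it holds on every platform.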