In an asymptotic analysis, we care more about the order of magnitude of a function than about its actual value. In terms of the abstract time of an algorithm, this should make some intuitive sense. After all, we call it "abstract time" because *f*(*n*) counts "abstract" operations, such as the number of math operations or the number of comparisons. In addition, we don't know exactly how long each operation might actually take on a particular computer. Intuitively, the order of magnitude appeals to our sense that *n*^2 is a faster-growing function than a linear function like *n*.

To describe the order of magnitude of a function, we use big-O notation. If we had an algorithm that did 7*n*^4 + 35*n*^3 - 19*n*^2 + 3 operations, its big-O notation would be *O*(*n*^4). If we had an algorithm that did 2*n* + 5 operations, the big-O notation would be *O*(*n*). Pretty simple, right?
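
The claim that only the leading term matters can be sanity-checked numerically. A minimal sketch (the helper name `ops` is ours, not from the text): dividing the full operation count by *n*^4 approaches the leading coefficient 7 as *n* grows, so in relative terms the lower-order terms vanish.

```python
# Hypothetical helper: the operation count 7n^4 + 35n^3 - 19n^2 + 3
# from the example above.
def ops(n):
    return 7 * n**4 + 35 * n**3 - 19 * n**2 + 3

# The ratio to n^4 settles near the leading coefficient 7 as n grows,
# which is why big-O keeps only the n^4 term.
for n in (10, 100, 1000):
    print(n, ops(n) / n**4)
```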

We can formalize what it means for a function to be the big-O of another: *g*(*n*) ∈ *O*(*f*(*n*)) if and only if there is some constant *c* > 0 and some *n*_0 ≥ 1 such that *g*(*n*) ≤ *c* · *f*(*n*) for all *n* > *n*_0.

Now in English: a function *g*(*n*) is in the class of functions of the order *f*(*n*) if, and only if, we can multiply *f*(*n*) by some constant *c*, ignore all the *n* below some threshold *n*_0, and have the function *c* · *f*(*n*) be greater than (or equal to) *g*(*n*) for each *n* > *n*_0.
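
As a concrete instance of the definition (the constants here are our own choice, not from the text): take *g*(*n*) = 2*n* + 5, the earlier example. The pair *c* = 3, *n*_0 = 5 witnesses that 2*n* + 5 is in *O*(*n*), since 2*n* + 5 ≤ 3*n* exactly when *n* ≥ 5.

```python
def g(n):
    # g(n) = 2n + 5, the example operation count from earlier.
    return 2 * n + 5

# Witnesses chosen by hand: c = 3 and n0 = 5 satisfy the definition,
# because 2n + 5 <= 3n holds whenever n >= 5.
c, n0 = 3, 5
holds = all(g(n) <= c * n for n in range(n0 + 1, 10_000))
print(holds)  # True
```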

That might sound very confusing, but it is actually pretty straightforward and you'll get the hang of it soon enough. In practice we run into a few basic big-Os (there are of course an infinite number of others, but you will see these most frequently):

1. *O*(1) - constant time
2. *O*(log *n*) - logarithmic time
3. *O*(*n*) - linear time
4. *O*(*n* log *n*) - linearithmic time
5. *O*(*n*^c) - polynomial time
6. *O*(*c*^n) - exponential time
7. *O*(*n*!) - factorial time
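
To get a feel for how differently these classes grow, here is a small illustration of our own (with *c* = 2 standing in for the polynomial and exponential rows):

```python
import math

# Evaluate each common complexity class at a few input sizes to show
# how quickly they diverge from one another.
for n in (10, 20, 30):
    print(f"n={n:>2}: log n = {math.log2(n):5.2f}, "
          f"n log n = {n * math.log2(n):6.1f}, "
          f"n^2 = {n**2:4d}, "
          f"2^n = {2**n:10d}, "
          f"n! = {math.factorial(n):.2e}")
```

By *n* = 30, the logarithm is still under 5 while the factorial has passed 10^32, which is why the class an algorithm falls into matters far more than any constant factor.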

When comparing functions using big-O notation, think about very large *n*. For example, *O*(*n*^2) > *O*(*n*) and *O*(*c*^n) > *O*(*n*^c).
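
The exponential-beats-polynomial claim only shows up at large *n*; for small *n* the polynomial can be far bigger. A quick sketch (base 2 and exponent 10 are our arbitrary picks, not from the text) finds where 2^*n* finally overtakes *n*^10:

```python
# For small n, n^10 dwarfs 2^n, but the exponential overtakes it
# permanently once n is large enough.
crossover = next(n for n in range(2, 200) if 2**n > n**10)
print(crossover)  # 59
```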

When discussing the efficiency of an algorithm, there are three cases that need to be considered: the best case, the worst case, and the average case. The best case is how the algorithm performs on the most favorable possible input. The worst case is how it performs on the least favorable possible input. And the average case is its expected performance over typical inputs. When comparing algorithms we very rarely use the best case, often use the average case, and sometimes use the worst case.
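
A linear search makes the three cases concrete (the `linear_search` helper below is our own sketch, not from the text): the best case finds the target in the first slot with one comparison, the worst case scans all *n* slots, and a randomly placed target averages about *n*/2 comparisons.

```python
def linear_search(items, target):
    """Return (index, comparisons made); (-1, n) if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
print(linear_search(data, 0))    # best case: found immediately, 1 comparison
print(linear_search(data, 99))   # worst case: 100 comparisons
print(linear_search(data, -1))   # absent target: also 100 comparisons
```

Note that all three cases are still *O*(*n*) here except the best case, which is *O*(1); the case being analyzed and the big-O class are independent choices.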

This is how we measure the efficiency of an algorithm. Such measurements will be used heavily in the rest of the guide.