Searches: Efficiency

Problems

Efficiency and Big-O notation

Problem: Define "Big-O notation".

Big-O notation is a theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items in the input. Informally, f(n) = O(g(n)) means that f(n) is bounded above by some constant multiple of g(n) for all sufficiently large n. More formally, it means there are positive constants c and k such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
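
The definition can be spot-checked numerically. The short Python sketch below is not part of the original problem; the helper name, the example functions, and the sampled range are only illustrative. It tests whether 0 ≤ f(n) ≤ c·g(n) holds at every n from k up to some limit; passing such a check is evidence that the constants were chosen sensibly, not a proof, since Big-O is a claim about all n ≥ k.

    def looks_big_o(f, g, c, k, n_max):
        # Check the defining inequality 0 <= f(n) <= c*g(n) for k <= n <= n_max.
        # A finite check like this cannot prove f(n) = O(g(n)); it only catches
        # choices of c and k that are clearly wrong.
        return all(0 <= f(n) <= c * g(n) for n in range(k, n_max + 1))

    # Example: 5n + 20 is O(n), witnessed by c = 6 and k = 20,
    # since 5n + 20 <= 6n exactly when n >= 20.
    print(looks_big_o(lambda n: 5 * n + 20, lambda n: n, c=6, k=20, n_max=10_000))  # True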

Problem: Prove that the function f(n) = n² + 3n + 1 is O(n²).

Take g(n) = n² with the constants c = 2 and k = 4. For n ≥ 4 we have 3n + 1 ≤ n², so f(n) = n² + 3n + 1 ≤ 2n² = c·g(n). Hence 0 ≤ f(n) ≤ 2n² for all n ≥ 4, and therefore f(n) = O(g(n)), i.e. n² + 3n + 1 is O(n²).
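
As a quick sanity check of the constants chosen above (a numeric spot-check in Python, not a substitute for the algebraic argument), the loop below confirms the inequality over a sample range:

    # Spot-check the bound n^2 + 3n + 1 <= 2n^2 for n = 4 .. 100000.
    for n in range(4, 100_001):
        assert n**2 + 3 * n + 1 <= 2 * n**2
    print("bound holds on the sampled range")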

Problem: You are given two functions, one with an average-case running time of O(n²) and the other with an average-case running time of O(n log n). In general, which would you choose?

You would most likely choose the algorithm with an efficiency of O(n log n). For a large enough input size, an algorithm that is O(n log n) will run faster than an algorithm that is O(n²).
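
To see why, it can help to tabulate the two growth rates side by side. The Python sketch below is purely illustrative (the sample sizes are arbitrary); it prints n·log₂(n) next to n² for a few input sizes:

    import math

    # Compare n*log2(n) with n^2 at a few input sizes.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}  n*log2(n)={n * math.log2(n):>12.0f}  n^2={n**2:>12}")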

Problem: True or false: a function with O(n) efficiency will always run faster than a function with O(n²) efficiency.

False. Remember that we only care about the dominant term, not constant factors, when determining the big-O of a function. For example, function 1 could be 1000n and function 2 could be n² + 1. Note that for small n, the first function will actually take longer than the second, but for sufficiently large n the first function will be faster.
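
The crossover point for this particular pair can be found directly. The small Python sketch below (the expressions 1000n and n² + 1 come from the example above) searches for the first n at which the O(n) function becomes cheaper:

    # Find the first n at which 1000n drops below n^2 + 1.
    n = 1
    while 1000 * n >= n**2 + 1:
        n += 1
    print(n)  # prints 1000: below this point, the n^2 + 1 function is actually cheaper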

Problem: Draw a graph showing how n, log n, n², and 2ⁿ compare as n increases.

[Figure: Graph of the rates of growth of log n, n, n², and 2ⁿ]
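
One way to reproduce such a graph is the Python sketch below (assuming NumPy and matplotlib are available; the plotted range and the log-scaled y-axis are choices made only so that all four curves fit on one figure):

    import numpy as np
    import matplotlib.pyplot as plt

    n = np.arange(2, 31)
    plt.plot(n, np.log2(n), label="log n")
    plt.plot(n, n, label="n")
    plt.plot(n, n**2, label="n^2")
    plt.plot(n, 2.0**n, label="2^n")
    plt.yscale("log")  # without a log scale, 2^n dwarfs the other curves
    plt.xlabel("n")
    plt.ylabel("growth")
    plt.legend()
    plt.show()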
