**Problem : **
What is the base case for quicksort? How would combining quicksort
with another sorting algorithm like selection sort change the base case?
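One common hybrid, sketched below under assumptions of our own (the `CUTOFF` value and function names are illustrative, not a prescribed implementation): instead of recursing all the way down to one-element partitions, quicksort hands any partition at or below a small cutoff to selection sort, which has low overhead on tiny inputs. The base case changes from "one element" to "at most `CUTOFF` elements."

```python
CUTOFF = 8  # assumed tuning parameter: partitions this small go to selection sort

def selection_sort(a, lo, hi):
    # Sort a[lo..hi] in place by repeatedly selecting the minimum.
    for i in range(lo, hi):
        m = i
        for j in range(i + 1, hi + 1):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    # Hybrid base case: small partitions are finished by selection sort
    # instead of recursing down to single elements.
    if hi - lo + 1 <= CUTOFF:
        selection_sort(a, lo, hi)
        return
    # Hoare-style partition around the middle element's value.
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quicksort(a, lo, j)
    quicksort(a, i, hi)
```

A variant of this idea leaves small partitions unsorted and finishes with one pass of insertion sort at the end; either way, the recursion stops earlier than the textbook base case.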

**Problem : **
How does mergesort achieve its *O*(*n* log *n*) efficiency?

Merging two sorted halves requires examining all *O*(*n*) elements. Since the data set can be split in half *O*(log *n*) times, the total work for mergesort is *O*(*n* log *n*).
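The split-and-merge argument can be sketched as a recursive mergesort. This is a minimal illustration in Python (one possible implementation, not the only one): each call splits the list in half, sorts the halves recursively, and then does *O*(*n*) work merging them back together.

```python
def merge_sort(a):
    # Base case: 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # O(log n) levels of splitting...
    right = merge_sort(a[mid:])
    # ...and an O(n) merge at each level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])       # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged
```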

**Problem : **
While mergesort and quicksort are two "smart" and efficient sorts,
there are plenty of inefficient sorts out there, none of which you
would ever want to use in a program. One such sort is the permutation
sort. A permutation of a data set is one configuration, one ordering
of the data. If there are *n* data elements in a data set, then there
are *n*! permutations (you have *n* choices for which element goes
first, then *n* - 1 choices for which element goes second, *n* - 2 choices
for which element goes third, etc., so *n*!). The permutation sort
algorithm computes every permutation of the data set, and for each
one checks to see if it is in order. If it is, the algorithm ends. If
not, it continues on to the next permutation. Write permutation sort
recursively (the easiest way to do it). Note that a recursive
algorithm can still have loops.
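One possible recursive solution is sketched below (the function names are our own). It builds each permutation in place by choosing, with a loop inside the recursion, which remaining element goes in the next position, and stops as soon as a sorted permutation appears:

```python
def is_sorted(a):
    # True if every adjacent pair is in order.
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def permutation_sort(a, k=0):
    # Recursively try every permutation of a[k:]; return True once
    # the whole list is in order, leaving a sorted in place.
    if k == len(a):
        return is_sorted(a)
    for i in range(k, len(a)):       # the loop inside the recursion
        a[k], a[i] = a[i], a[k]      # choose element i for position k
        if permutation_sort(a, k + 1):
            return True
        a[k], a[i] = a[i], a[k]      # undo the swap; try the next choice
    return False
```

Since there are *n*! permutations and checking each takes *O*(*n*) time, this runs in *O*(*n* · *n*!) in the worst case, which is why no one would ever use it.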

**Problem : **
Your friend Jane proposes the following algorithm for a sort:

**Problem : **
Your friend John claims that quicksort has a worst case running time of
*O*(*n*^{2}). Is he right?

Yes, quicksort does have a worst case running time of *O*(*n*^{2}). If the value of the pivot causes the split at every recursive step to create two sets, one with only 1 element and one with the rest of the elements, then there will be *O*(*n*) recursive calls, each one doing *O*(*n*) work. Thus it is an *O*(*n*^{2}) algorithm in the worst case. However, with a good implementation of quicksort that uses pivot-picking methods such as random selection or median-of-three, the chances of this happening are minimal. Quicksort is often the best sort to use, and it is used in many commercial and academic programs.
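As one illustration of pivot picking, here is a sketch of quicksort with a randomly chosen pivot (the function name is our own; median-of-three selection could be substituted at the same spot). With a random pivot, no fixed input ordering reliably triggers the one-element split, so the *O*(*n*^{2}) case becomes vanishingly unlikely:

```python
import random

def randomized_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Random pivot selection: swap a random element into the last
    # position, then partition around it (Lomuto scheme).
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]    # pivot lands in its final position i
    randomized_quicksort(a, lo, i - 1)
    randomized_quicksort(a, i + 1, hi)
```

With a fixed pivot choice (say, always the first element), an already-sorted input produces exactly the bad split described above; randomization removes that dependence on input order.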