**Problem:**
What is the base case for quicksort? How would combining quicksort
with another sorting algorithm like selection sort change the base case?

The base case for quicksort is a partition of one element (or none).
Such a partition is in order by definition, so there is nothing more
to do and the recursion can stop. If we were to combine quicksort
with another sort like selection sort, the base case for quicksort
becomes the partition size at which we switch sorts; this is often
around five or six elements.
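
For illustration, a hybrid might look like the sketch below. The cutoff constant, function names, and the simple last-element pivot are our own assumptions, not from the text:

```c
#define CUTOFF 6  /* assumed switch-over size; tuned empirically in practice */

/* Selection sort on arr[lo..hi] -- handles the small partitions */
static void selection_sort(int arr[], int lo, int hi)
{
    int i, j, min, tmp;
    for (i = lo; i < hi; i++) {
        min = i;
        for (j = i + 1; j <= hi; j++)
            if (arr[j] < arr[min])
                min = j;
        tmp = arr[i]; arr[i] = arr[min]; arr[min] = tmp;
    }
}

/* Quicksort whose base case is "partition is small", not "partition is 1" */
void hybrid_quicksort(int arr[], int lo, int hi)
{
    int pivot, i, j, tmp;
    if (hi - lo + 1 <= CUTOFF) {       /* new base case: switch sorts */
        selection_sort(arr, lo, hi);
        return;
    }
    pivot = arr[hi];                   /* simple last-element pivot */
    i = lo - 1;
    for (j = lo; j < hi; j++) {
        if (arr[j] < pivot) {
            i++;
            tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    tmp = arr[i+1]; arr[i+1] = arr[hi]; arr[hi] = tmp;
    hybrid_quicksort(arr, lo, i);      /* elements below the pivot */
    hybrid_quicksort(arr, i + 2, hi);  /* elements above the pivot */
}
```

The win comes from selection sort's low overhead: on a handful of elements, avoiding further recursive calls outweighs its *O*(*n*^{2}) behavior.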

**Problem:**
How does mergesort achieve its *O*(*n* log *n*) efficiency?

Mergesort continually splits the data set in half, and at each step it
works on *O*(*n*) elements. Since the data set can be split in half
*O*(log *n*) times, the total work for mergesort is *O*(*n* log *n*).
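
Both ingredients, the *O*(log *n*) levels of halving and the *O*(*n*) merge work per level, are visible in a standard C sketch of mergesort (our illustration; the caller supplies a scratch buffer):

```c
#include <string.h>

/* Merge the sorted halves arr[lo..mid] and arr[mid+1..hi] through a
 * temporary buffer; across a whole level this is the O(n) work. */
static void merge(int arr[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi)  tmp[k++] = arr[j++];
    memcpy(arr + lo, tmp + lo, (hi - lo + 1) * sizeof(int));
}

/* Each call splits its range in half, so the recursion is
 * O(log n) levels deep. */
void mergesort_range(int arr[], int tmp[], int lo, int hi)
{
    int mid;
    if (lo >= hi) return;              /* 0 or 1 elements: sorted */
    mid = lo + (hi - lo) / 2;
    mergesort_range(arr, tmp, lo, mid);
    mergesort_range(arr, tmp, mid + 1, hi);
    merge(arr, tmp, lo, mid, hi);
}
```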

**Problem:**
While mergesort and quicksort are two "smart" and efficient sorts,
there are plenty of inefficient sorts out there, none of which you
would ever want to use in a program. One such sort is the permutation
sort. A permutation of a data set is one configuration, one ordering
of the data. If there are *n* data elements in a data set, then there
are *n*! permutations (you have *n* choices for which element goes
first, then *n* - 1 choices for which element goes second, *n* - 2 choices
for which element goes third, etc., so *n*! in all). The permutation
sort algorithm computes every permutation of the data set, and for each
one checks to see if it is in order. If it is, the algorithm ends. If
not, it continues on to the next permutation. Write permutation sort
recursively (the easiest way to do it). Note that a recursive
algorithm can still have loops.

int sort(int arr[], int n, int i)
{
    int j, flag, swap;

    /* Check to see if the list is sorted */
    flag = 1;
    for (j = 0; j < n - 1; j++) {
        if (arr[j] > arr[j+1]) {
            flag = 0;
            break;
        }
    }
    if (flag) return 1;

    /* Try each remaining element in position i (including arr[i]
       itself), then recursively permute positions i+1 onward */
    for (j = i; j < n; j++) {
        swap = arr[i];
        arr[i] = arr[j];
        arr[j] = swap;
        if (sort(arr, n, i+1)) return 1;
        /* Undo the swap before trying the next choice */
        swap = arr[i];
        arr[i] = arr[j];
        arr[j] = swap;
    }
    return 0;
}

void permutationsort(int arr[], int n)
{
    sort(arr, n, 0);
}
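
To see just how hopeless this algorithm is in practice, it helps to look at how fast *n*! grows; the helper below is our own illustration, not part of the answer:

```c
/* n! grows so fast that permutation sort is infeasible for even
 * modest n: 12! is already 479,001,600 permutations to try.
 * (unsigned long long holds n! exactly only up to n = 20.) */
unsigned long long factorial(unsigned n)
{
    unsigned long long f = 1;
    while (n > 1)
        f *= n--;
    return f;
}
```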

**Problem:**
Your friend Jane proposes the following algorithm for a sort:

random_sort(data set) {
    - randomly swap two elements
    - check to see if the data is in order
    - if it is, return, as we're done
    - otherwise, call random_sort
}

Jane claims that although this algorithm is incredibly inefficient, it
will work. You claim that even if you lucked out and got good random
swaps, in most cases it would cause your computer program to crash. Why?

After every swap, the function makes another recursive call to itself,
and each call consumes a stack frame that is not released until the
recursion unwinds. Due to the incredible number of calls needed to
stumble into a sorted order, the space on the call stack will be
exhausted (a stack overflow) far earlier than a solution could be found.
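
To make the point concrete, here is one way Jane's pseudocode could be written in C (our sketch, not hers), with a depth parameter that makes the stack cost visible: the value returned is exactly the number of frames the recursion needed.

```c
#include <stdlib.h>

/* Returns 1 when arr[0..n-1] is in nondecreasing order */
static int is_sorted(const int arr[], int n)
{
    int i;
    for (i = 0; i < n - 1; i++)
        if (arr[i] > arr[i+1])
            return 0;
    return 1;
}

/* Jane's algorithm, written literally: every failed attempt is
 * another recursive call, i.e. another frame on the call stack.
 * Returns the recursion depth at which a sorted order appeared. */
long random_sort(int arr[], int n, long depth)
{
    int i, j, tmp;

    i = rand() % n;                    /* randomly swap two elements */
    j = rand() % n;
    tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;

    if (is_sorted(arr, n))             /* in order? then we're done */
        return depth;
    return random_sort(arr, n, depth + 1);
}
```

For tiny arrays this terminates quickly, but the expected number of frames blows up with *n*, which is exactly why the stack runs out long before large inputs get sorted.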

**Problem:**
Your friend John claims that quicksort has a worst case running time of
*O*(*n*^{2}). Is he right?

Yes, quicksort does have a worst case running time of *O*(*n*^{2}).
If the value of the pivot causes every recursive step to split the set
into two pieces, one with only 1 element and one with all the rest,
then there will be *O*(*n*) recursive calls, each one doing *O*(*n*)
work: an *O*(*n*^{2}) algorithm. However, with a good implementation
of quicksort that uses pivot-picking methods such as random selection
or median-of-three, the chances of this happening are minimal.
Quicksort is often the best sort to use, and is used in many
commercial and academic programs.
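
As an illustration of median-of-three pivot picking (our sketch; production implementations add further refinements), sorting the first, middle, and last elements before partitioning means an already-sorted input no longer produces the degenerate 1-versus-rest split:

```c
static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Quicksort using the median of arr[lo], arr[mid], arr[hi] as pivot */
void quicksort_m3(int arr[], int lo, int hi)
{
    int mid, pivot, i, j;
    if (lo >= hi) return;

    /* Order the three samples so the median ends up in arr[hi] */
    mid = lo + (hi - lo) / 2;
    if (arr[mid] < arr[lo]) swap_ints(&arr[mid], &arr[lo]);
    if (arr[hi]  < arr[lo]) swap_ints(&arr[hi],  &arr[lo]);
    if (arr[mid] < arr[hi]) swap_ints(&arr[mid], &arr[hi]);

    /* Partition around the median sample */
    pivot = arr[hi];
    i = lo - 1;
    for (j = lo; j < hi; j++)
        if (arr[j] < pivot)
            swap_ints(&arr[++i], &arr[j]);
    swap_ints(&arr[i+1], &arr[hi]);

    quicksort_m3(arr, lo, i);
    quicksort_m3(arr, i + 2, hi);
}
```

On a sorted or reverse-sorted array the middle sample is the true median, so each step splits the set roughly in half rather than peeling off one element at a time.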