Problem : Consider the case where every pivot quick sort picks divides the list into one empty list and one list containing all elements greater than the pivot. This degenerate case of quick sort is equivalent to which other sorting algorithm?

Selection sort, which iteratively takes the smallest element of the remaining unsorted portion of the list and places it at the front of that portion.
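The behavior described above can be sketched directly. This is a minimal selection sort (the function name and in-place style are illustrative choices, not from the original text):

```python
def selection_sort(items):
    """Sort a list in place by repeatedly selecting the minimum of the
    unsorted suffix and swapping it to the front of that suffix."""
    n = len(items)
    for i in range(n):
        # Find the index of the smallest remaining element.
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # Place it at the front of the unsorted portion.
        items[i], items[smallest] = items[smallest], items[i]
    return items
```

Like the degenerate quick sort, each pass does a full scan of the remaining elements to isolate one of them, giving O(n^2) comparisons overall.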

Problem : Explain why a sophisticated pivot selection method can be helpful to the efficiency of quick sort.

Quick sort relies on picking a good pivot. If every pivot chosen divides its list approximately in half, the sort will run in close to O(nlog(n)) time. The small amount of time spent picking a good pivot is repaid by the efficiency gained from making bad pivots unlikely.
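One common cheap heuristic (not described in the original text, sketched here as an example) is "median of three": take the median of the first, middle, and last elements as the pivot. It costs only a constant amount of work but avoids the worst case on already-sorted or reverse-sorted input:

```python
def median_of_three(items, lo, hi):
    """Return the index of the median of items[lo], items[mid], items[hi].
    Constant-time work that makes a pathological split unlikely on
    sorted or reverse-sorted data."""
    mid = (lo + hi) // 2
    # Pair each candidate value with its index, sort the three pairs,
    # and take the index of the middle value.
    candidates = sorted([(items[lo], lo), (items[mid], mid), (items[hi], hi)])
    return candidates[1][1]
```

On the reverse-sorted list (9, 8, 7, 6, 5, 4, 3, 2, 1) this picks the middle element 5, which splits the list evenly instead of producing an empty partition.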

Problem : Consider the implementation of quick sort given in the summary. Why is that implementation inefficient for the list: (9, 8, 7, 6, 5, 4, 3, 2, 1)?

Since the given implementation chooses the first element of the list as the pivot, and the first element of this reverse-sorted list is always the largest remaining element, every partition produces one empty list. This is the degenerate case, and the sort takes O(n^2) time.
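The summary's implementation is not reproduced here, but a first-element-pivot quick sort in the same spirit might look like this sketch (names are illustrative):

```python
def quick_sort(items):
    """Quick sort that always takes the first element as the pivot."""
    if len(items) <= 1:
        return items
    pivot = items[0]
    # Partition the rest of the list around the pivot.
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    # On (9, 8, ..., 1) the pivot is always the maximum, so `larger`
    # is always empty: n levels of recursion, O(n^2) total work.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```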

Problem : Why don't we just search through the data set each time we need to find a pivot and select one that will divide the data set roughly in half?

This strategy requires an O(n) search through an n-element list at every partitioning step, which erases the savings from the good split. Quick sort's efficiency relies on being able to pick a pivot quickly (which generally means in constant time).
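A standard constant-time alternative (an example, not mentioned in the original text) is to pick the pivot index at random, which gives an expected O(nlog(n)) running time regardless of the input order:

```python
import random

def choose_pivot(items, lo, hi):
    """Constant-time pivot choice: a uniformly random index in [lo, hi].
    No scan of the data is needed, yet a very uneven split at every
    level of recursion becomes vanishingly unlikely."""
    return random.randint(lo, hi)
```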

Problem : In terms of big-O notation, is quick sort's best case any better than merge sort's?

No, they are both O(nlog(n)).