
Consider the following consecutive configurations of a list while it is being
sorted:

(4, 5, 3, 1)

(4, 5, 3, 1)

(4, 3, 5, 1)

(4, 3, 1, 5)

What sorting algorithm is being used?

(A)
Quick Sort

(B)
Insertion Sort

(C)
Bubble Sort

(D)
Merge Sort
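
The trace above can be reproduced by recording the list after every comparison a bubble sort makes. A minimal Python sketch (the function name and trace format are illustrative, not part of the quiz):

```python
def bubble_sort_trace(items):
    """Bubble sort, recording the list after every comparison."""
    a = list(items)
    configs = [tuple(a)]
    for end in range(len(a) - 1, 0, -1):   # each pass bubbles one
        for i in range(end):               # element to the end
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
            configs.append(tuple(a))
    return configs
```

For (4, 5, 3, 1) the first four recorded configurations match the trace in the question: 4 and 5 compare without a swap, then 5 swaps past 3 and past 1.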

What is the best case running time of Bubble Sort?

(A)
O(n^{2})

(B)
O(1)

(C)
O(n)

(D)
O(log(n))
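
The best case arises when bubble sort is given an already sorted list and can short-circuit after a single pass. A sketch with the early-exit optimization (names are illustrative):

```python
def bubble_sort(items):
    """Bubble sort with an early exit: if a pass makes no swaps,
    the list is already sorted and the sort stops. Returns the
    sorted list and the number of comparisons performed."""
    a = list(items)
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
    return a, comparisons

# On an already sorted list of n elements this makes only n - 1
# comparisons: the O(n) best case.
```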

Why is log(n) often a term in the efficiency expressions for divide and
conquer algorithms?

(A)
Because such algorithms divide up the problems they are given, and it takes
log(n) steps to divide a list of n elements into n lists of one element.

(B)
Because such algorithms don't need to look at every element in the list.

(C)
Because such algorithms use a faster comparison function.

(D)
Because such algorithms can only be implemented on quantum computers.
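
The log(n) term can be seen by counting how many times a list of n elements must be halved before every sublist holds a single element. A rough counting sketch (the function name is illustrative):

```python
def halving_levels(n):
    """Count how many times a list of n elements must be split in
    half before every sublist holds a single element."""
    levels = 0
    while n > 1:
        n = (n + 1) // 2   # one level of dividing all sublists
        levels += 1
    return levels
```

For n = 1000 this gives 10 levels, i.e. about log2(n) rounds of dividing.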

What sort might you use if you know that your data will be pretty much in order
to begin with and why would you use that sort?

(A)
Selection sort; because its efficiency is independent of the data being sorted.

(B)
Bubble sort; because it can "short circuit" its operation when it detects that
the list
is in order.

(C)
Quick sort; because divide and conquer algorithms work well on nearly sorted
data.

(D)
Insertion sort; because inserting into a nearly ordered list is highly
efficient.
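
Inserting into a nearly ordered list is cheap because almost no elements need to move. A sketch of insertion sort that counts shifts (names are illustrative):

```python
def insertion_sort(items):
    """Insertion sort, counting how many elements are shifted.
    On a nearly sorted list very few shifts occur, so the running
    time approaches O(n)."""
    a = list(items)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]   # shift the larger element right
            shifts += 1
            i -= 1
        a[i + 1] = key
    return a, shifts
```

Sorting the nearly ordered list (1, 2, 3, 6, 5, 9) takes just one shift.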

Imagine that we run quick sort on an already ordered list, picking the pivot by
taking
the first element. What problem do we run into?

(A)
The sort fails because quick sort cannot realize that it has an already sorted
list.

(B)
The sort runs in O(n) time because quick sort detects that the list is
ordered after one pass.

(C)
The sort runs in O(log(n)) time because quick sort is a divide and conquer
algorithm.

(D)
The sort runs inefficiently because the pivots always divide the lists into
an empty list and a large list.
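
The unbalanced-partition behavior can be demonstrated by measuring recursion depth with a first-element pivot: on an already sorted list, every partition puts nothing on the left. A hypothetical sketch:

```python
def quicksort_depth(items, depth=0):
    """Quick sort using the first element as pivot, returning the
    maximum recursion depth reached. On an already sorted list the
    left partition is always empty, so the depth grows linearly
    and the sort degrades to O(n^2)."""
    if len(items) <= 1:
        return depth
    pivot = items[0]
    left = [x for x in items[1:] if x < pivot]    # empty when sorted
    right = [x for x in items[1:] if x >= pivot]  # everything else
    return max(quicksort_depth(left, depth + 1),
               quicksort_depth(right, depth + 1))
```

On the sorted list 0..7 the recursion reaches depth 7 (n - 1 levels), instead of the roughly log2(n) levels a balanced pivot would give.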

Why are merge sort and quick sort known as "divide and conquer" algorithms?

(A)
Because they don't sort the entire list they're given.

(B)
Because they sort the entire list at once.

(C)
Because they divide the list they're given into smaller lists and then sort
those smaller lists.

(D)
Because they use arithmetic division to speed the sorting process.

Why might it seem counterintuitive that heap sort can run so efficiently?

(A)
Because a heap cannot be easily represented as an array.

(B)
Because re-heaping often increases the number of inversions (out of order
elements) in the array.

(C)
Because it takes O(n^{2}) time to re-heap after removing the root node.

(D)
Because heaps are not random access when implemented as arrays.

Merge sort is O(n log(n)). Where does the n term come from?

(A)
From the dividing of the list.

(B)
From the initial pass through the list to see if it's sorted.

(C)
From the merging of the sorted sublists.

(D)
From the costly precomputation merge sort does.

Which of the following is a proper heap?

(A)
(1, 2, 3, 4, 5)

(B)
(5, 3, 4, 1, 2)

(C)
(5, 1, 2, 3, 4)

(D)
(5, 3, 2, 4, 1)
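
The heap property for an array-stored max-heap can be checked directly: the element at index i must be greater than or equal to its children at indices 2i+1 and 2i+2. A sketch (the function name is illustrative):

```python
def is_max_heap(a):
    """Check the max-heap property for a heap stored in an array:
    a[i] must be >= its children at indices 2i+1 and 2i+2."""
    return all(a[i] >= a[c]
               for i in range(len(a))
               for c in (2 * i + 1, 2 * i + 2)
               if c < len(a))
```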

For merge sort to merge the following two arrays: (1, 4, 5, 8) and (3, 7, 9,
13), what comparisons have to take place?

(A)
1 and 4, 5 and 8, 3 and 7, 9 and 13

(B)
1 and 3, 4 and 7, 5 and 9, 8 and 13

(C)
1 and 9, 13 and 7, 4 and 5, 8 and 5, 3 and 1, 7 and 4, 9 and 13

(D)
1 and 3, 3 and 4, 4 and 7, 5 and 7, 7 and 8, 8 and 9
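
The comparisons can be reproduced by instrumenting the merge step to log each pair it compares. A Python sketch (names and the log format are illustrative):

```python
def merge(left, right):
    """Merge two sorted lists, logging each comparison as an
    ordered pair."""
    merged, comparisons = [], []
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons.append((min(left[i], right[j]),
                            max(left[i], right[j])))
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged += left[i:] + right[j:]   # copy whatever remains
    return merged, comparisons
```

Merging (1, 4, 5, 8) and (3, 7, 9, 13) logs exactly the six comparisons 1 and 3, 3 and 4, 4 and 7, 5 and 7, 7 and 8, 8 and 9; once the left list is exhausted, 9 and 13 are copied without being compared.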

Consider a sorting algorithm that checks all possible configurations of a list
until it finds one that is in order. This algorithm will sort a list correctly,
but is very inefficient. What is its big-O notation?

(A)
O(n^{2})

(B)
O(n^{3})

(C)
O(n^{5}log(n))

(D)
O(n!)
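
Such a configuration-checking sort can be sketched with itertools.permutations: there are n! arrangements of the list to check, hence the O(n!) bound. Illustrative only:

```python
from itertools import permutations

def permutation_sort(items):
    """Try every arrangement of the list until one is in order.
    With n! arrangements to check, this is O(n!)."""
    for perm in permutations(items):
        if all(perm[k] <= perm[k + 1] for k in range(len(perm) - 1)):
            return list(perm)
```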

In what case is the algorithm described above as efficient as bubble sort?

(A)
When the data is in nearly sorted order.

(B)
When the data is in reverse sorted order.

(C)
When the data is in random order.

(D)
When the data is already in sorted order.

In what case is the algorithm described above more efficient than
selection sort?

(A)
When the data is already in sorted order.

(B)
When the data is randomly distributed.

(C)
When the data is in reverse sorted order.

(D)
In no case; selection sort is always at least as efficient.

Consider the intermediate configurations of an array being sorted below. What
sort is being used?

(4, 5, 2, 1, 7)

(1, 5, 2, 4, 7)

(1, 2, 5, 4, 7)

(A)
Selection sort

(B)
Merge sort

(C)
Quick sort

(D)
Not enough information to tell.
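
Selection sort repeatedly swaps the minimum of the unsorted tail into place, which reproduces the configurations above. A sketch (names are illustrative):

```python
def selection_sort_trace(items):
    """Selection sort, recording the list after each pass swaps
    the minimum of the unsorted portion into place."""
    a = list(items)
    configs = [tuple(a)]
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
        configs.append(tuple(a))
    return configs
```

For (4, 5, 2, 1, 7): the first pass swaps 1 to the front, the second swaps 2 into the second position, matching the trace in the question.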

What sorting algorithm might you choose for the following list? Why?
(1, 2, 3, 6, 5, 9)

(A)
Quick sort, because its efficiency is O(n log(n)).

(B)
Merge sort, because divide and conquer sorts are always faster.

(C)
Bubble sort, because the list is in nearly sorted order.

(D)
Heap sort, because the data is already in a heap.

True or false: merge sort and quick sort can only be used on lists whose length
is a power of 2.

(A)
TRUE

(B)
FALSE

How many comparisons would it take merge sort to merge the following lists: (1,
2, 3, 4, 5) and (6, 7, 8, 9, 10)?

(A)
5

(B)
10

(C)
20

(D)
7
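
The count follows from how merging works: a comparison happens only while both lists still have elements, and here the left list is exhausted after five comparisons. A counting sketch (the function name is illustrative):

```python
def merge_comparisons(left, right):
    """Count comparisons made while merging two sorted lists.
    Comparisons stop once either list is exhausted."""
    count = i = j = 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            i += 1
        else:
            j += 1
    return count
```

For (1, 2, 3, 4, 5) and (6, 7, 8, 9, 10) the left list empties after five comparisons, and the rest of the right list is copied over without further comparisons.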

True or false: selection sort can sometimes run as fast as O(n).

(A)
TRUE

(B)
FALSE

The intermediate configurations below are characteristic of which sorting
algorithm?

(5, 1, 4, 8, 2)

(1, 5, 4, 8, 2)

(1, 4, 5, 8, 2)

(1, 4, 5, 2, 8)

(A)
Selection sort.

(B)
Insertion sort.

(C)
Bubble sort.

(D)
Heap sort.

Why would it be a bad idea to implement heap sort using a heap data structure
that didn't support random access?

(A)
A data structure without random access can't be re-heaped.

(B)
Accessing elements in the heap would take O(n) time, making the sort less
efficient.

(C)
Non random access data structures will always cause memory violations, even when
used properly.

(D)
Non random access data structures cannot hold the same data elements as arrays.

Imagine the following strategy for picking a pivot in quick sort: scan through
half the data set, and use the median value as the pivot. Why is this a bad
strategy?

(A)
Because it will often pick out a bad pivot.

(B)
Because this makes choosing a pivot O(n); quick sort's efficiency relies on
being able to choose a pivot in constant time.

(C)
Picking a good pivot is not necessary for the efficient operation of quick sort.

Imagine the following specification of a comparison function. If the two
numbers passed in have an equal number of digits, they are equal. Otherwise,
the one with the larger number of digits is greater. Which of the following
lists are sorted with respect to this comparison function?

Why are we so concerned with the efficiencies of sorting algorithms?

(A)
Since sorting is a very common operation in computer science it is important
that it be performed in an efficient manner; many processes rely on sorting.

(B)
It is not particularly important but is an interesting intellectual exercise and
a good demonstration of big-O notation.

Imagine a comparison function for complicated objects. Why might efficiency
calculations for a sort using this comparison function be misleading?

(A)
They won't be misleading; they're just as accurate as always.

(B)
Because sorting doesn't work for things other than numbers.

(C)
Efficiency calculations generally consider one comparison to be one step, but a
complicated comparison function might take numerous steps.

(D)
Efficiency calculations are usually misleading.

Consider the following intermediate configurations of a list being sorted. What
sorting algorithm is being used?

(5, 2, 8, 1, 9)

(1, 5, 2, 8, 9)

(1, 2, 5, 8, 9)

(A)
Merge sort

(B)
Bubble sort

(C)
Selection sort

(D)
Insertion sort

What is the first swap insertion sort would make on the following list?
(5, 3, 4, 9, 1)

(A)
1 and 5

(B)
5 and 3

(C)
4 and 9

(D)
None of the above; insertion sort doesn't make swaps, it does shifts.

What is the first swap selection sort would make on the following list?
(5, 3, 4, 9, 1)

(A)
5 and 1

(B)
5 and 3

(C)
4 and 9

(D)
None of the above; selection sort doesn't make swaps.

What kind of data structure would make insertion sort particularly
inefficient?

(A)
A memory intensive data structure.

(B)
A linear data structure.

(C)
A data structure for which shifts are inefficient.

Imagine a situation where most of the data to be sorted starts in roughly
reverse order. Why would this not be a good situation to use bubble sort?

(A)
If the data is in roughly reverse order then bubble sort will almost always be
O(n^{2}).

(B)
Bubble sort is always an inefficient algorithm.

(C)
It's not a bad idea to use bubble sort in this case; this is when it performs
best.

(D)
Because comparisons take longer in situations like this.

What simple modification could be made to bubble sort to make it efficient in
the situation described above?

(A)
Have it look ahead two elements before making each comparison.

(B)
Start at the end of the list and bubble the small elements down.

(C)
Reverse the list before applying bubble sort to it.

(D)
There is no way to make bubble sort more efficient in this case.

Why would bubble sort be more efficient on the list (1, 2, 3, 4, 5, 6, 7) than
selection sort?

(A)
Bubble sort will detect that the list is in order after one pass, while
selection sort always takes O(n^{2}) steps to sort a list.

(B)
Selection sort can't sort in-order lists.

(C)
Bubble sort is always more efficient than selection sort.

(D)
Bubble sort can sort ordered lists in constant time.

When using quick sort, why is it common to switch to another sort when the lists
being sorted are small?

(A)
This is a common error made by programmers which does not make the code run any
faster.

(B)
Quick sort's efficiency is very bad, so the sooner we switch to a different
algorithm, the faster the data will be sorted.

(C)
Quick sort is often implemented recursively and when the lists being sorted
become small the overhead of making a recursive function call outweighs quick
sort's efficiency.

(D)
Quick sort's big-O notation becomes worse for small lists.

In what situation might heap sort be useful?

(A)
In a situation where most of the data were already in sorted order.

(B)
In a situation where ease of implementation was key.

(C)
In a situation where it was crucial not to use any extra memory other than the
array of elements being sorted.

(D)
In a situation where memory was not a concern.

What makes heap sort attractive as opposed to quick sort?

(A)
Heap sort's average case is better.

(B)
Heap sort always sorts a list in O(n log(n)) time.

(C)
Heap sort can detect in-order lists in O(n) time.

(D)
Heap sort is never better than quick sort.

Why is quick sort's name misleading?

(A)
Quick sort's big-O notation is no better than merge sort's or heap sort's, and
can even be worse if bad pivots are chosen.

(B)
Because it is actually a very slow sort.

(C)
Because it gains speed by sorting the data incompletely.

(D)
Because its big-O notation is O(n^{3}).

True or false: bubble sort gains efficiency by splitting the data in half.

(A)
FALSE

(B)
TRUE

Why is sorting important to the process of searching?

(A)
Because the two are often performed in parallel.

(B)
Because a data set must be sorted after it is searched.

(C)
Because it is much faster to search a sorted data set.

(D)
Because algorithms for sorting can be used for searching without modification.

True or false: for some data sets, quick sort will be slower than bubble sort.

(A)
TRUE

(B)
FALSE

How long does re-heaping take?

(A)
O(n)

(B)
O(log(n))

(C)
O(1)

(D)
O(n^{2})
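
Re-heaping moves an element down at most one tree level per step, and an n-element heap has about log2(n) levels, hence O(log(n)). A sift-down sketch for a max-heap stored in an array (names are illustrative):

```python
def sift_down(heap, i, size):
    """Restore the max-heap property below index i by repeatedly
    swapping with the larger child. Each iteration descends one
    level of the tree, so re-heaping is O(log(n))."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < size and heap[left] > heap[largest]:
            largest = left
        if right < size and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest
```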

True or false: selection sort's efficiency is independent of the data being
sorted.