When I was new to teaching, a mentor asked me to consider the career ahead of me with the following challenge: once a year, I should try to revitalize one unit in my curriculum – whichever unit was least engaging for students. Even if I spent all of my creative effort on that one unit, he said, if I could manage to do that once a year, in time I would have some pretty exciting curricula!

Whether or not you teach computer science, you can take my word: teaching sorting algorithms is not very exciting.

Last year, I began working with a colleague, Nate Levin, who pulled off a creative twist to his sorting algorithms unit: he turned it into a sorting algorithm *competition*. The idea was to encourage students to research the effectiveness of various sorting algorithms under a variety of conditions. These were the five types of sorts he came up with:

- Sort a randomized **int** array of length 10,000 and return the median
- Sort a **String** array of length 10,000 (randomly generated **Strings**) and return the index of a given **String** in the array
- Sort an **int** array of length 10,000 that is *mostly* sorted and return the median
- Sort a 2D **int** array of 1,000 x 1,000: sort each array in the 2D array, sort the rows by *their medians*, and finally, return the *median of the medians*
- Sort a **Comparable** array of length 10,000 and return the index of a given **Comparable** in the array
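
To make the tasks concrete, here is a minimal sketch of the first one in Java – sort an **int** array and return its median. The class and method names are mine, not from the competition code, and for an even-length array I take the lower of the two middle values:

```java
import java.util.Arrays;

// Illustrative sketch of the first task: sort a randomized int array
// and return its median. Names are mine, not from the competition code.
public class MedianSort {

    // Sorts arr in place and returns its median.
    // For an even-length array, this returns the lower of the two middle values.
    public static int sortAndMedian(int[] arr) {
        Arrays.sort(arr); // students would substitute their own sort here
        return arr[(arr.length - 1) / 2];
    }
}
```

In the competition itself, students would replace the `Arrays.sort` call with whichever algorithm their research suggested was best suited to the task.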

This year, Nate invited my classes to do the competition alongside his. I thought it was a great idea, but there were a few pitfalls. At that time, the competition was run by benchmarking each student *one at a time*. The output was delivered as a set of numbers printed to the console. Despite the fact that students had worked hard on the assignment, they were less engaged watching the results come in; many students didn't even watch. It seemed to me the only thing missing was an interface to put a face on the effectiveness of their algorithms *while* the algorithms were running. Furthermore, if every student's algorithm could run concurrently, it would minimize the lag time between the delivery of results. With this end in mind, I contributed a GUI (Graphical User Interface).
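
The concurrency idea can be sketched with a thread pool: each contestant's sort is submitted as its own task, so benchmarks run in parallel instead of one after another. This is only an illustrative sketch under my own naming, not the actual competition code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: benchmark every contestant's sort concurrently
// in a thread pool so nobody waits for the previous run to finish.
public class ConcurrentBenchmark {

    // Times a single sort in nanoseconds.
    public static long timeSort(Runnable sort) {
        long start = System.nanoTime();
        sort.run();
        return System.nanoTime() - start;
    }

    // Submits each contestant's sort as its own task and collects the times.
    public static List<Long> runAll(List<Runnable> contestantSorts) {
        ExecutorService pool = Executors.newFixedThreadPool(contestantSorts.size());
        List<Future<Long>> futures = new ArrayList<>();
        for (Runnable sort : contestantSorts) {
            futures.add(pool.submit(() -> timeSort(sort)));
        }
        List<Long> times = new ArrayList<>();
        for (Future<Long> f : futures) {
            try {
                times.add(f.get());
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return times;
    }
}
```

One thread per contestant is reasonable at classroom scale; a real benchmark harness would also want warm-up runs and repeated trials to smooth out JIT and scheduling noise.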

The GUI enables students to enter as a "Contestant" and select a color and an avatar (taken from Street Fighter sprite sheets I found online). When the competition runs, each contestant is paired up with another to go head-to-head in a race to complete at least 250 sorts – 50 each of Nate's original five tasks. Each time a contestant successfully sorts an array, that contestant "inflicts damage" on its opponent. Students can observe the relative success of their avatars in real time by watching their opponents' status bars. Furthermore, an incorrect sort or mistake inflicts damage on oneself and costs valuable time.
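
The damage rule is simple enough to sketch in a few lines of Java. The class, method names, and starting health value here are my own illustrative assumptions, not the actual GUI code:

```java
// Illustrative sketch of the damage rule: a correct sort damages the
// opponent, a mistake damages oneself. Names and the starting health
// value are assumptions, not taken from the real GUI.
public class Contestant {
    private int health = 1000; // assumed starting health

    public void takeDamage(int amount) {
        health = Math.max(0, health - amount); // health never goes negative
    }

    public int getHealth() {
        return health;
    }

    // Applies the rule after one sorting attempt.
    public void recordSort(boolean correct, Contestant opponent, int damage) {
        if (correct) {
            opponent.takeDamage(damage);
        } else {
            this.takeDamage(damage);
        }
    }
}
```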

After each round, scores are tallied and contestants get paired up for the next round until only one contestant is left standing: our champion!

I really liked this adaptation of Sort-O-Mania. Students enjoyed choosing their avatars and were glued to the screen watching the scores come in. Furthermore, as my colleague Nate witnessed, the competition aspect provided a boost in motivation behind student research. Finally, the fact that the results were programmatically recorded in a CSV made the process of grading this project trivial (C.S. teachers know how rare that is!). If you would like to try it in your classroom, everything you need, including more detailed instructions and the student work samples from my 2017 class, can be found in this repository: https://github.com/bnockles/DuelNockles.git