Thesis Advisor Allocation Problem

Link to module

Evaluated December 2021

This module is a straightforward exploration of the Gale-Shapley algorithm for bipartite matching. It covers the asymmetry of the algorithm and the fact that the disadvantaged side has some incentive to misrepresent its preferences strategically, whereas the other side does not. This module is designed for a Computer Science course on Algorithms, should take minimal preparation, and fits naturally into such a course.

It directly covers material in Algorithms and Complexity/Fundamental Data Structures and Algorithms and in Algorithms and Complexity/Advanced Data Structures Algorithms and Analysis.

The instructor needs to be prepared for technical questions about the algorithm and about the proofs requested in the assignment. Familiarity with the background and basic papers (e.g., David Gale and Marilda Sotomayor. “Ms. Machiavelli and the stable matching problem.” The American Mathematical Monthly 92.4 (1985): 261-268) would make the technical material easier to teach. The assignment demonstrates for students that the Gale-Shapley algorithm can be run so that its output is optimal for either side (and pessimal for the other) and that it runs in polynomial time. However, finding a median stable matching is NP-hard; it may be useful for the instructor to know this in case students ask.
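For instructors who want a concrete reference point, the proposer-optimal behavior can be sketched in a few lines of Python. This is an illustrative implementation, not code from the module; the student and advisor names are invented, and complete preference lists are assumed:

```python
def gale_shapley(student_prefs, advisor_prefs):
    """Deferred acceptance with students proposing to advisors.

    student_prefs: dict mapping each student to a list of advisors, best first.
    advisor_prefs: dict mapping each advisor to a list of students, best first.
    Assumes complete preference lists on both sides.
    Returns a stable matching {student: advisor} that is optimal for the
    proposing side (students) and pessimal for the receiving side (advisors).
    """
    # rank[a][s] = position of student s in advisor a's list (lower = better)
    rank = {a: {s: i for i, s in enumerate(prefs)}
            for a, prefs in advisor_prefs.items()}
    free = list(student_prefs)                    # students not yet held
    next_choice = {s: 0 for s in student_prefs}   # next advisor to propose to
    match = {}                                    # advisor -> student held

    while free:
        s = free.pop()
        a = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if a not in match:
            match[a] = s                          # advisor was free: hold s
        elif rank[a][s] < rank[a][match[a]]:
            free.append(match[a])                 # advisor trades up; old student freed
            match[a] = s
        else:
            free.append(s)                        # advisor rejects; s proposes again

    return {s: a for a, s in match.items()}


# Hypothetical instance: each student's first choice differs from the
# advisor who ranks that student first.
students = {"ana": ["x", "y"], "bo": ["y", "x"]}
advisors = {"x": ["bo", "ana"], "y": ["ana", "bo"]}
print(gale_shapley(students, advisors))
```

In this instance both students receive their first choice while both advisors receive their last, a small demonstration of the optimality/pessimality asymmetry; swapping which side proposes reverses the outcome.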

To be well-prepared to use this assignment to advance responsible computer science, the instructor needs to be ready to lead students in a discussion of algorithmic fairness and social issues. This module is interesting in that once the choice is made to use the Gale-Shapley algorithm, there is a built-in bias and a method for an outsider to attack that bias and leverage it to their own advantage. It provides a basis for identifying challenges that computer scientists face in doing responsible development. Making explicit that algorithms can be declared “optimal” according to some criterion, or for some subset of stakeholders, and still cause significant harm to other stakeholders is eye-opening. This important idea widens the scope of awareness for analysis of algorithms and can be tied to notions of utilitarianism that encompass all stakeholders and to notions from value-centered design. Since these lessons are not explicitly spelled out in the module, the instructor needs both knowledge of them and skill in leading students to these ideas rather than bluntly declaring them.

Since this is a standard algorithm and the analysis (both the proof of optimality/pessimality and the social analysis) is important, it can easily be incorporated into a standard algorithms class. However, the instructor will need to develop assignment grading guidelines and determine whether and how to assess students’ analysis of the social and ethical issues.


The evaluation of this module was led by Jaye Nias and Judy Goldsmith as part of the Mozilla Foundation Responsible Computer Science Challenge. Patrick Anderson, Emanuelle Burton, Colleen Greer, Darakhshan Mir, Evan Peck, and Marty J. Wolf also made contributions. These works are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.