Link to module

Evaluated December 2021

This module evolves across lab, lecture and reflection activities. It starts as a classic media-manipulation lab (changing RGB values in pixels). Then, in the last portion of the lab, students are given a series of face images and write code to generate the average face of those images. In the associated lecture, students are given an opportunity to reflect on what happens when we analyze the demographics of the data underlying our face-averaging algorithm. The exercise can serve as an introductory analogy for the shortcomings of training data in machine learning, and as an entry point for discussing facial recognition.
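To make the technical core of the lab concrete, here is a minimal sketch of the face-averaging step in Python, the module's stated prerequisite language. This is an illustration under assumptions, not the module's own starter code: it assumes the face images are same-sized RGB files, and the faces/ directory and filename pattern are hypothetical.

```python
# A minimal sketch of the face-averaging exercise: load same-sized RGB
# images, average their pixel values, and save the result. The faces/
# directory and the *.jpg pattern are hypothetical placeholders.
from pathlib import Path

import numpy as np
from PIL import Image

def average_face(image_dir):
    """Return the pixel-wise average of all images in image_dir."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    # Convert each image to a float array so summing many 8-bit
    # channel values does not overflow.
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
        for p in paths
    ])
    mean = stack.mean(axis=0)  # average each pixel across all faces
    return Image.fromarray(mean.astype(np.uint8))

if __name__ == "__main__":
    average_face("faces").save("average_face.png")
```

The averaging itself is the pedagogical hook: whatever demographic skew exists in the input set is reproduced, pixel by pixel, in the "average" face, which sets up the lecture's reflection on training data.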

It directly covers material in Software Development Fundamentals/Fundamental Programming Concepts and Software Development Fundamentals/Fundamental Data Structures.

Instructors adopting this module may find it a good fit for Computer Science I because it integrates ethical reflection into a technical assignment. It is also suitable for an interdisciplinary tech-and-society course with a Python or basic coding prerequisite. The module includes a well-curated packet of external resources that will help the instructor prepare to teach the basic justice issues at stake in facial recognition technology; these resources include a video interview, an official ACM statement and other substantial materials that help frame core concerns for students to consider. One option would be to connect with faculty from criminal justice, critical race studies or sociology to enhance the delivery, especially around issues associated with implicit bias. The module could work well as a standalone, but it could also be effective in a course with 2-5 other standalone modules of a similar kind. Its real power comes from its dual structure, which combines a practical exercise with an opportunity for students to learn about and reflect upon its broader social, political and ethical stakes. There is considerable room in this module to incorporate a discussion of the structural racism that has created the present conditions in tech culture, as well as a consideration of how facial recognition is deployed in the world.

This module is designed to help students discover some of the problems implicit in facial recognition for themselves. Students will need to be guided toward perceptual sensitivity and social awareness for this module to be effective. While some students may have developed deeper awareness through coverage of AI ethics in the news, or through lived experience of patterns of policing in some locations, instructors should be prepared to draw on students' familiarity with platforms such as YouTube or TikTok to bring out what is at stake in image-manipulation technologies related to facial recognition. In addition, instructors need to be aware that the engagement itself may “trigger” negative experiences associated with these kinds of technologies.

Instructors using this module will find a “TA/instructor check” at four different points in the first, technical part of the assignment; these checks are designed to keep students on track rather than to assign grades. The second part of the module is an at-home reflection assignment, designed to invite and encourage critical consideration of the workings and stakes of facial recognition so that students come prepared to engage thoughtfully in discussion at the next meeting. These elements provide a ready framework for instructors who wish to develop specific assessment documents.


The evaluation of this module was led by Emanuelle Burton and Darakhshan Mir as part of the Mozilla Foundation Responsible Computer Science Challenge. Patrick Anderson, Judy Goldsmith, Colleen Greer, Jaye Nias, Evan Peck and Marty J. Wolf also made contributions. These works are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.