Link to module

Evaluated December 2021

This module, developed for middle school students, explores complex issues of data, representation, bias, and fairness in machine learning models, and can be adapted for college audiences. Compared with other analyses of these topics, the module would need to be developed more fully for use with introductory computer science students. However, among introductory materials meant for students without a technical background or much prior familiarity, it offers significant depth and rigor and is a good match for low-level technical courses.

This module covers material in Social Issues and Professional Practice.

Instructors choosing to adopt this module will find a full array of materials that enable its immediate use, including slides, learning goals, and detailed activities with teacher guidance. The material as written also provides enough background on the relevant issues that an instructor could successfully teach this module without any outside reading, if need be. Nevertheless, an instructor would be better equipped with some additional background reading, particularly if the activities prompt additional student questions.

As a fully developed curriculum, the module assumes no prerequisites other than some familiarity with technology. Students engaged in using this module will have a chance to explore artificial intelligence (AI) and some of the messy ethical implications associated with it. Although the material is thoroughly scaffolded, with extensive resources for classroom integration, the reading level will need to be adjusted to appropriately challenge students in an introductory college course. In addition, instructors who intend to address core ethical problems will need to adjust the way the implications of AI are discussed. The current training set uses images of cats and dogs rather than multiple types of humans; it lays the groundwork for discussing the sociopolitical implications of algorithms and classifiers, but some modifications will be necessary to connect the material to human and social issues. Connecting with colleagues in sociology, anthropology, or ethnic studies could assist in making these changes. Using the full sequence of activities would require significant modification to a course, but a portion of the curriculum and its associated learning goals could be worked into a number of computer science courses.

Instructors choosing to adopt this module will find clearly identified points in the lesson sequence where students respond and where instructors can collect those responses. The module's specific learning goals will substantially help with assessment, but no specific assessment tools are provided.


The evaluation of this module was led by Evan Peck and Emanuelle Burton as part of the Mozilla Foundation Responsible Computer Science Challenge. Patrick Anderson, Judy Goldsmith, Colleen Greer, Darakhshan Mir, Jaye Nias, and Marty J. Wolf also made contributions. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.