Systems for Scaling Accessibility Efforts in Large Computing Courses

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

Making computing courses accessible for disabled students is critically important, and it is particularly challenging in large computing courses, which face unique difficulties due to the sheer scale of course content and staff. In this experience report, we share our attempts to scale accessibility efforts for a large university-level introductory programming course sequence, with over 3500 enrolled students and 100 teaching assistants (TAs) per year. First, we introduce our approach to auditing and remediating course materials by systematically identifying and resolving accessibility issues. However, remediating content post-hoc is purely reactive and scales poorly. We then discuss two approaches to systems that enable proactive accessibility work. We developed technical systems to manage remediation complexity at scale: redesigning course content to be web-first and accessible by default, providing alternate accessible views for existing course content, and writing automated tests to receive instant feedback on a subset of accessibility issues. Separately, we established human systems to empower both course staff and students in accessibility best practices: developing and running various TA-targeted accessibility trainings, establishing course-wide accessibility norms, and integrating accessibility topics into the core course curriculum. Preliminary qualitative feedback from both staff and students shows increased engagement in accessibility work and accessible technologies. We close by discussing limitations and lessons learned from our work, with advice for others developing similar auditing, remediation, technical, or human systems.


💡 Research Summary

The paper presents a comprehensive experience report on scaling accessibility efforts for a large introductory programming course sequence at the University of Washington, serving over 3,500 students and more than 100 teaching assistants (TAs) each year. The authors begin by highlighting the stark disparity between the prevalence of disability among U.S. undergraduates (21%) and the low proportion of computer‑science students receiving accommodations (4.6%). They argue that large computing courses face unique scalability challenges because of the volume of digital artifacts and the limited resources of institutional disability services.

The core contribution is a four‑system framework: (1) an Auditing and Remediation system, (2) Technical systems, (3) Human systems, and (4) an integration/maintenance workflow.

Auditing and Remediation – The first author conducted a detailed audit of two courses (CSE 122 and CSE 123) covering lecture slides, recitation handouts, assignment specifications, videos, and PDFs. Over 900 accessibility issues were catalogued, each annotated with platform, content element, format, location, instructional necessity, and a suggested fix. Issues were classified as “trivial” (e.g., missing alt text on a simple image) or “non‑trivial” (e.g., re‑recording a video). The remediation phase involved systematic rewriting of text to proper semantic structure, adding descriptive alt text, improving color contrast, providing textual equivalents for visual diagrams, and manually correcting or re‑creating captions for videos.
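One remediation step above, improving color contrast, has a precise target: WCAG 2.1 level AA requires a contrast ratio of at least 4.5:1 for normal-size text. As a minimal sketch of how such fixes can be verified (function names are illustrative, not from the paper's tooling), the ratio can be computed from the WCAG relative-luminance formula:

```python
def channel(c8: int) -> float:
    """Linearize an 8-bit sRGB channel per the WCAG 2.1 definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance of an sRGB color (0.0 for black, 1.0 for white)."""
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum ratio of 21:1;
# the gray #767676 on white sits just above the 4.5:1 AA threshold.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)
```

A check like this makes "improve color contrast" an objective pass/fail decision rather than a visual judgment call.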

Technical Systems – To move from a reactive audit‑then‑fix model to a proactive one, the team implemented three technical strategies:

  1. Web‑First Materials – PDFs, which dominate course delivery, were replaced with HTML/Markdown pages compiled to static sites. This allowed native support for headings, ARIA landmarks, and MathML via MathJax. The new pages could be printed to PDF on demand, preserving the familiar distribution format while retaining accessibility.

  2. Alternate Views for PDF‑Only Content – For legacy PDFs that could not be immediately rewritten, an HTML “alternate view” was generated and linked from the course site, giving screen‑reader users an accessible representation without discarding the original file.

  3. Automated Accessibility Tests – The authors built a CI pipeline that runs a suite of automated checks (e.g., axe‑core, pa11y) on every commit to the course repository. The tests flag missing alt attributes, insufficient color contrast, and other WCAG 2.1 violations, providing instant feedback to instructors and TAs. While acknowledging that automation cannot capture every nuance (especially in multimedia), the system reduces the manual burden and catches low‑hanging fruit early.
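To make the web-first idea (item 1 above) concrete, pages compiled from Markdown can be wrapped in a skeleton that is accessible by default: a `lang` attribute, a real document title, landmark elements, and a MathJax script for accessible math rendering. A hedged sketch, not the authors' actual templates (the helper name and sample title are hypothetical):

```python
def render_page(title: str, body_html: str) -> str:
    """Wrap converted body HTML in an accessible page skeleton:
    lang attribute, <title>, header/main landmarks, and MathJax."""
    return f"""<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>{title}</title>
  <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
</head>
<body>
  <header><h1>{title}</h1></header>
  <main>
{body_html}
  </main>
</body>
</html>"""

# Hypothetical example: every handout gets landmarks and headings for free.
page = render_page("Sample Assignment Specification", "<h2>Overview</h2>")
```

Because the structure lives in one template, headings and landmarks cannot be accidentally omitted from individual documents, which is the sense in which the pages are "accessible by default."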

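The CI pipeline described in item 3 relies on established checkers such as axe-core and pa11y. As a rough, stdlib-only illustration of the kind of rule those tools automate, flagging `<img>` elements that lack an `alt` attribute (a WCAG 1.1.1 failure), one might write (names hypothetical, not the authors' implementation):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.violations.append(attr_map.get("src", "<unknown>"))

def missing_alt(html: str) -> list:
    """Return the src of each image missing alt text in an HTML fragment."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations

print(missing_alt('<img src="stack.png"><img src="heap.png" alt="Heap diagram">'))
# → ['stack.png']
```

Note that `alt=""` passes the check: an explicitly empty alt marks a decorative image, whereas a missing attribute leaves screen readers with nothing. This distinction is exactly the sort of nuance the authors caution that automation catches only partially, since no tool can judge whether the alt text that is present is actually descriptive.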
Human Systems – Recognizing that technology alone is insufficient, the authors designed a comprehensive TA‑focused training program. Workshops covered WCAG fundamentals, practical checklist usage during grading, and strategies for communicating with disabled students. The training was repeated each semester and supplemented with a publicly available guide. Additionally, the course adopted explicit “accessibility norms” (e.g., always write descriptive link text, never rely on color alone) that were embedded in the syllabus and reinforced during recitation planning. Finally, accessibility concepts were woven into the core curriculum: a dedicated module on accessible coding practices appeared in the second half of the semester, giving all students exposure to the topic.

Results and Feedback – Preliminary qualitative feedback from both TAs and students indicated increased engagement with accessibility tools and higher satisfaction with course materials. Students reported that web‑first reference sheets were easier to navigate with screen readers, and TAs appreciated the automated test alerts that saved time during content updates.

Limitations and Lessons Learned – The authors note several constraints: automated tools cannot fully evaluate complex multimedia (e.g., interactive simulations), and the current evaluation relies on short‑term, self‑reported data rather than longitudinal performance metrics. They also observed that scaling the TA training to all faculty and department staff would further embed accessibility culture.

In conclusion, the paper demonstrates that a combination of systematic auditing, proactive technical redesign, and targeted human training can make large‑scale computing courses more accessible. By publishing their tooling, audit templates, and training materials, the authors provide a reusable blueprint for other institutions seeking to address accessibility at scale.

