Mudslide: A Spatially Anchored Census of Student Confusion for Online Lecture Videos

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Educators have developed an effective technique to get feedback after in-person lectures, called “muddy card.” Students are given time to reflect and write the “muddiest” (least clear) point on an index card, to hand in as they leave class. This practice of assigning end-of-lecture reflection tasks to generate explicit student feedback is well suited for adaptation to the challenge of supporting feedback in online video lectures. We describe the design and evaluation of Mudslide, a prototype system that translates the practice of muddy cards into the realm of online lecture videos. Based on an in-lab study of students and teachers, we find that spatially contextualizing students’ muddy point feedback with respect to particular lecture slides is advantageous to both students and teachers. We also reflect on further opportunities for enhancing this feedback method based on teachers’ and students’ experiences with our prototype.


💡 Research Summary

The paper presents Mudslide, a web‑based system that adapts the well‑known “muddy card” technique to the context of online lecture videos. In traditional classrooms, students write down the point they found least clear on an index card at the end of a lecture, providing instructors with immediate, concrete feedback. Translating this practice to video‑based instruction poses two challenges: (1) capturing the exact moment or slide where confusion occurs, and (2) presenting that feedback to instructors in a way that is quickly interpretable. Mudslide addresses both by allowing students to pause a video, click directly on the slide image, and type a short comment attached to that spatial coordinate. The system stores the (x, y) location together with the text, aggregates multiple submissions per slide, and visualizes the results as a heat map overlaid on the slide deck. Instructors can inspect the heat map, drill down into individual comments, and export summary statistics for further analysis.
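The paper does not publish Mudslide's source, but the data model it describes — a short comment attached to an (x, y) location on a slide, aggregated per slide into a heat map — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `MuddyPoint` shape, the `GRID` resolution, and the `heatMap` function are all assumptions.

```typescript
// Illustrative sketch of spatially anchored feedback storage and
// per-slide aggregation into coarse heat-map bins (not the authors' code).

interface MuddyPoint {
  slideId: number;
  x: number;        // normalized [0, 1] horizontal position on the slide
  y: number;        // normalized [0, 1] vertical position
  comment: string;
}

// Resolution of the heat map: each slide is divided into a GRID x GRID
// array of bins, and each submission increments the bin it falls in.
const GRID = 4;

function heatMap(points: MuddyPoint[], slideId: number): number[][] {
  const bins: number[][] = Array.from({ length: GRID }, () =>
    new Array<number>(GRID).fill(0)
  );
  for (const p of points) {
    if (p.slideId !== slideId) continue;
    const col = Math.min(GRID - 1, Math.floor(p.x * GRID));
    const row = Math.min(GRID - 1, Math.floor(p.y * GRID));
    bins[row][col] += 1;
  }
  return bins;
}
```

A dashboard could render `heatMap` output directly (e.g. as D3 color-scaled rectangles), while the raw `comment` strings remain available for the drill-down view the paper describes.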

The authors implemented Mudslide using an HTML5 video player with a transparent canvas layer for click capture, a lightweight back‑end for persisting feedback, and a dashboard that renders heat maps with D3.js. To evaluate the design, they conducted a two‑phase user study in a university‑level computer‑science course. Thirty students were split into a control group (text‑only feedback) and an experimental group (Mudslide). Over two weeks, the experimental group submitted spatially anchored feedback, while the control group answered a standard questionnaire after each lecture. Three instructors also used Mudslide to collect feedback for their own lectures and to create supplemental material based on the results.
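For the transparent click-capture layer, one practical detail the summary glosses over is that students watch the video at different window sizes, so raw pixel coordinates are not directly comparable. A plausible sketch (an assumption, not detailed in the paper) is to normalize each click's offset by the overlay canvas's rendered dimensions before storing it:

```typescript
// Plausible sketch (assumption): convert a click's pixel offset within
// the transparent canvas overlay into resolution-independent slide
// coordinates, so submissions from different window sizes aggregate
// onto the same heat map.

interface SlidePoint {
  x: number; // [0, 1]
  y: number; // [0, 1]
}

function normalizeClick(
  offsetX: number,
  offsetY: number,
  canvasWidth: number,
  canvasHeight: number
): SlidePoint {
  const clamp = (v: number) => Math.max(0, Math.min(1, v));
  return {
    x: clamp(offsetX / canvasWidth),
    y: clamp(offsetY / canvasHeight),
  };
}
```

In the browser this would typically be called from the canvas's click handler with `event.offsetX` / `event.offsetY` and the canvas's client dimensions.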

Quantitative findings show that students using Mudslide spent about a third less time writing feedback (1 min 25 s vs. 2 min 15 s) and reported higher confidence that their confusion was accurately communicated. Instructors reported that the heat map made it trivial to identify “hot spots” – slides that repeatedly generated confusion – and that they could intervene promptly by adding clarifying videos, examples, or live Q&A sessions. Qualitative comments highlighted that spatial anchoring helped students reflect more deeply on their misunderstandings, turning a vague feeling of confusion into a concrete, visual artifact.

The paper also discusses limitations. Rapid slide transitions can cause timing mismatches, leading to imprecise click locations. A single click may not capture multi‑faceted questions, and the current system only supports typed text, excluding voice or handwritten input. The authors propose future extensions such as multi‑tagging, audio recordings with automatic transcription, AI‑driven summarization of comments, and integration with learning analytics pipelines to predict at‑risk students.

In conclusion, Mudslide demonstrates that embedding student feedback within the spatial context of lecture slides significantly improves both the quality of the feedback and the instructor’s ability to act on it. By preserving the simplicity of the original muddy‑card practice while leveraging digital affordances, Mudslide offers a scalable solution for enhancing interactivity and metacognitive awareness in asynchronous video learning environments. The approach is readily extensible to other digital learning artifacts, suggesting a broader role for spatially anchored feedback in future educational technology design.

