Boosting Point-supervised Temporal Action Localization via Text Refinement and Alignment

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv paper.

Recently, point-supervised temporal action localization has gained significant attention for its effective balance between labeling cost and localization accuracy. However, current methods consider only features from visual inputs, neglecting helpful semantic information from the text side. To address this issue, we propose a Text Refinement and Alignment (TRA) framework that effectively exploits semantically rich textual features derived from visual descriptions to complement the visual features. This is achieved by adding two new modules to the original point-supervised framework: a Point-based Text Refinement (PTR) module and a Point-based Multimodal Alignment (PMA) module. Specifically, we first generate descriptions for video frames using a pre-trained multimodal model. Next, PTR refines the initial descriptions by leveraging point annotations together with multiple pre-trained models. PMA then projects all features into a unified semantic space and applies point-level multimodal contrastive learning to reduce the gap between the visual and linguistic modalities. Finally, the enhanced multimodal features are fed into the action detector for precise localization. Extensive experimental results on five widely used benchmarks demonstrate the favorable performance of our proposed framework compared to several state-of-the-art methods. Moreover, our computational overhead analysis shows that the framework can run on a single 24 GB RTX 3090 GPU, indicating its practicality and scalability.


💡 Research Summary

The paper tackles the limitation of point‑supervised temporal action localization (PT‑AL) methods that rely solely on visual cues. While PT‑AL reduces annotation cost by requiring only a single timestamp per action instance, existing approaches ignore the rich semantic information that can be extracted from textual descriptions of video content. To bridge this gap, the authors propose the Text Refinement and Alignment (TRA) framework, which augments a standard PT‑AL pipeline with two novel modules: Point‑based Text Refinement (PTR) and Point‑based Multimodal Alignment (PMA).

The workflow begins by segmenting an untrimmed video into short snippets. Visual features (RGB and optical flow) are extracted using a pretrained I3D encoder. Simultaneously, a state‑of‑the‑art vision‑language model (BLIP‑2) generates a caption for each snippet. Because automatic captioning can produce hallucinations—especially for entity‑dependent actions such as “throw a hammer” versus “throw a discus”—the raw captions are not directly usable.
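As an illustration of the first step, the sketch below splits a video's frame indices into fixed-length snippets before per-snippet feature extraction and captioning. The snippet length and stride are hypothetical parameters, not values stated in the paper.

```python
def segment_into_snippets(num_frames, snippet_len=16, stride=16):
    """Split frame indices into fixed-length snippets.

    snippet_len and stride are assumed values for illustration; the last
    snippet is truncated at the video boundary.
    """
    snippets = []
    for start in range(0, num_frames, stride):
        end = min(start + snippet_len, num_frames)
        snippets.append((start, end))
    return snippets


# Each (start, end) range would then be passed to the visual encoder
# (e.g., I3D) and the captioner (e.g., BLIP-2) in the described pipeline.
print(segment_into_snippets(40))
```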

PTR addresses caption errors by exploiting the sparse point annotations. First, actions are classified into entity‑dependent and entity‑independent categories, and an action‑to‑entity mapping is built (e.g., “hammer throw” → “hammer”). A textual parser extracts entities from each caption. Using the point‑level descriptions collected from annotated frames, a reliable memory M(y) is constructed for each action class y. Incorrect entities that belong to a set of irrelevant entities (Eₓ) are either replaced with the correct entity from the mapping or removed entirely. This refinement yields a set of high‑quality, entity‑accurate captions for each snippet.
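The entity-correction rule above can be sketched as follows. The mapping, the irrelevant-entity sets, and the whitespace "parser" are all hypothetical stand-ins: the paper uses a proper textual parser and class-specific sets Eₓ, whose contents are not given here.

```python
# Assumed action-to-entity mapping and irrelevant-entity sets (illustrative only).
ACTION_TO_ENTITY = {"hammer throw": "hammer", "discus throw": "discus"}
IRRELEVANT = {"hammer throw": {"discus", "javelin", "ball"}}


def simple_parser(caption):
    """Toy entity parser: treats each lowercase token as a candidate entity."""
    return caption.lower().split()


def refine_caption(caption, action_class, parse_entities=simple_parser):
    """Replace entities from the irrelevant set with the class's correct entity.

    Entity-independent actions (no mapping entry) leave the caption untouched,
    mirroring the entity-dependent / entity-independent split described above.
    """
    correct = ACTION_TO_ENTITY.get(action_class)
    if correct is None:
        return caption
    for entity in parse_entities(caption):
        if entity in IRRELEVANT.get(action_class, set()):
            caption = caption.replace(entity, correct)
    return caption
```

For example, a hallucinated caption for a hammer-throw snippet would be corrected: `refine_caption("an athlete throws a discus", "hammer throw")` yields `"an athlete throws a hammer"`.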

The refined captions are encoded with a pretrained text encoder (X‑CLIP) to produce textual embeddings. PMA then projects both visual and textual embeddings into a shared semantic space via linear layers. Crucially, the model leverages the point annotations to generate pseudo‑action and pseudo‑background points during training. A point‑level multimodal contrastive loss (similar to InfoNCE) pulls together embeddings of the same action class across modalities while pushing apart embeddings of different classes. This alignment reduces the modality gap and enriches the visual representation with linguistic cues, especially beneficial for actions that are visually ambiguous but semantically distinct.
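The point-level contrastive objective can be sketched as an InfoNCE-style loss: for an anchor embedding, same-class embeddings from the other modality act as positives and different-class embeddings as negatives. This is a minimal plain-Python sketch, not the paper's implementation; the temperature value and cosine similarity are assumptions.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def info_nce(anchor, positives, negatives, sim=cosine, tau=0.07):
    """InfoNCE-style loss: -log( sum_pos / (sum_pos + sum_neg) ).

    tau is an assumed temperature; positives/negatives would come from
    pseudo-action and pseudo-background points across modalities.
    """
    pos = sum(math.exp(sim(anchor, p) / tau) for p in positives)
    neg = sum(math.exp(sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))


# An anchor aligned with its positive incurs a much smaller loss than one
# aligned with its negative, which is the pull-together/push-apart behavior.
good = info_nce([1.0, 0.0], [[1.0, 0.0]], [[0.0, 1.0]])
bad = info_nce([1.0, 0.0], [[0.0, 1.0]], [[1.0, 0.0]])
print(good, bad)
```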

The aligned multimodal features are concatenated and fed into an existing PT‑AL detector (the authors adopt HR‑Pro as the backbone). The detector predicts start and end times, class labels, and confidence scores for each action instance.

Extensive experiments on five benchmarks—THUMOS’14, GTEA, BEOID, ActivityNet‑1.2, and ActivityNet‑1.3—demonstrate that TRA consistently outperforms recent state‑of‑the‑art point‑supervised methods. Gains range from 2% to 4% absolute mAP at IoU 0.5, with particularly large improvements (up to 8% absolute) on entity‑dependent actions. An overhead analysis shows that the entire pipeline, including caption generation and multimodal alignment, runs on a single RTX 3090 (24 GB) GPU at near‑real‑time speed, confirming its practicality.

In summary, the contributions are: (1) identifying and addressing the neglect of textual semantics in PT‑AL; (2) introducing a point‑guided caption refinement module that corrects entity errors; (3) designing a point‑level multimodal contrastive alignment that fuses visual and linguistic features; and (4) achieving state‑of‑the‑art performance with modest computational resources. The work opens avenues for future research on richer multimodal supervision, self‑supervised text refinement, and extension to fully zero‑shot settings where no point annotations are available.

