DExTeR: Weakly Semi-Supervised Object Detection with Class and Instance Experts for Medical Imaging
Detecting anatomical landmarks in medical imaging is essential for diagnosis and intervention guidance. However, object detection models rely on costly bounding box annotations, limiting scalability. Weakly Semi-Supervised Object Detection (WSSOD) with point annotations proposes annotating each instance with a single point, minimizing annotation time while preserving localization signals. A Point-to-Box teacher model, trained on a small box-labeled subset, converts these point annotations into pseudo-box labels to train a student detector. Yet medical imagery presents unique challenges, including overlapping anatomy, variable object sizes, and elusive structures, which hinder accurate bounding box inference. To overcome these challenges, we introduce DExTeR (DETR with Experts), a transformer-based Point-to-Box regressor tailored for medical imaging. Built upon Point-DETR, DExTeR encodes single-point annotations as object queries, refining feature extraction with the proposed class-guided deformable attention, which guides attention sampling using point coordinates and class labels to capture class-specific characteristics. To improve discrimination in complex structures, it introduces CLICK-MoE (CLass, Instance, and Common Knowledge Mixture of Experts), decoupling class and instance representations to reduce confusion among adjacent or overlapping instances. Finally, we implement a multi-point training strategy that promotes prediction consistency across different point placements, improving robustness to annotation variability. DExTeR achieves state-of-the-art performance across three datasets spanning different medical domains (endoscopy, chest X-rays, and endoscopic ultrasound), highlighting its potential to reduce annotation costs while maintaining high detection accuracy.
💡 Research Summary
The paper introduces DExTeR, a transformer‑based Point‑to‑Box regressor designed for weakly semi‑supervised object detection (WSSOD‑P) in medical imaging. The motivation stems from the high cost of bounding‑box annotations in clinical datasets, where a single point per instance can convey both class and coarse location information. Existing point‑based teachers suffer from slow convergence, sensitivity to point placement, and confusion among overlapping anatomical structures. DExTeR tackles these issues through three novel components.
First, Class‑guided Multi‑Scale Deformable Attention (Class‑guided MSDA) extends Deformable DETR’s multi‑scale attention by using the point query’s class embedding as an additional reference. This allows the attention offsets and sampling weights to be conditioned on class‑specific priors such as typical size and shape, improving feature extraction for small or densely packed objects.
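The idea of conditioning attention sampling on the class can be illustrated with a minimal sketch: the point query is concatenated with its class embedding before predicting per-head sampling offsets and weights around the annotated point. This is a simplified, single-scale illustration; the class and module names (`ClassGuidedOffsets`, `offset_head`, `weight_head`) and sizes are assumptions, not the paper's exact design, and real multi-scale deformable attention also spans several feature levels.

```python
import torch
import torch.nn as nn

class ClassGuidedOffsets(nn.Module):
    """Sketch of class-guided offset prediction for deformable attention.

    Hypothetical simplification: the query vector is concatenated with its
    class embedding so that sampling offsets and attention weights can be
    conditioned on class-specific priors (e.g. typical size and shape).
    """
    def __init__(self, d_model=256, n_classes=7, n_heads=8, n_points=4):
        super().__init__()
        self.class_embed = nn.Embedding(n_classes, d_model)
        self.offset_head = nn.Linear(2 * d_model, n_heads * n_points * 2)
        self.weight_head = nn.Linear(2 * d_model, n_heads * n_points)

    def forward(self, query, class_idx, ref_point):
        # query: (B, Q, d_model); class_idx: (B, Q); ref_point: (B, Q, 2) in [0, 1]
        z = torch.cat([query, self.class_embed(class_idx)], dim=-1)
        offsets = self.offset_head(z).view(*query.shape[:2], -1, 2)  # (B, Q, H*P, 2)
        weights = self.weight_head(z).softmax(dim=-1)                # (B, Q, H*P)
        # sampling locations are placed around the annotated point,
        # shifted by the class-conditioned offsets
        locations = ref_point.unsqueeze(2) + offsets
        return locations, weights

m = ClassGuidedOffsets()
q = torch.randn(2, 5, 256)
cls_idx = torch.randint(0, 7, (2, 5))
ref = torch.rand(2, 5, 2)
loc, w = m(q, cls_idx, ref)
```

Because the offsets depend on the class embedding, two queries at the same point but with different class labels sample different image regions.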
Second, the paper proposes CLICK‑MoE (Class, Instance, and Common Knowledge Mixture of Experts). Instead of a single feed‑forward network applied uniformly to all queries, CLICK‑MoE contains three experts: a common expert (standard FFN), a class‑specific expert that processes queries conditioned on their class embedding, and an instance expert that generates dynamic parameters based on the current query’s spatial context. A gating mechanism blends the three outputs, yielding representations that are simultaneously class‑aware, instance‑discriminative, and globally informed. This design markedly reduces mis‑classification of adjacent or overlapping structures.
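The three-expert structure with a learned gate can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the expert and gate layer names and sizes are hypothetical, and the instance expert is reduced here to a single dynamically generated linear map per query.

```python
import torch
import torch.nn as nn

class ClickMoESketch(nn.Module):
    """Sketch of the CLICK-MoE idea: blend a common FFN, a class-conditioned
    expert, and an instance expert whose weights are generated per query.
    Layer names and dimensions are illustrative, not the paper's design."""
    def __init__(self, d=256, n_classes=7, hidden=512):
        super().__init__()
        # common expert: a standard FFN shared by all queries
        self.common = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d))
        # class expert: conditioned on the query's class embedding
        self.class_embed = nn.Embedding(n_classes, d)
        self.class_expert = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, d))
        # instance expert: generate a per-query linear layer (dynamic weights)
        self.param_gen = nn.Linear(d, d * d)
        # gate: per-query mixing weights over the three experts
        self.gate = nn.Linear(d, 3)

    def forward(self, q, cls_idx):
        # q: (B, Q, d); cls_idx: (B, Q)
        out_common = self.common(q)
        out_class = self.class_expert(torch.cat([q, self.class_embed(cls_idx)], -1))
        W = self.param_gen(q).view(*q.shape[:2], q.shape[-1], q.shape[-1])
        out_inst = torch.einsum('bqd,bqde->bqe', q, W)
        g = self.gate(q).softmax(-1)
        return g[..., 0:1] * out_common + g[..., 1:2] * out_class + g[..., 2:3] * out_inst
```

The gate lets each query draw mostly on the class expert when class identity is the bottleneck, or on the instance expert when two same-class instances overlap.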
Third, a multi‑point training strategy is introduced. During each training iteration, N random points are sampled per object, forming N independent query groups that are decoded in parallel without inter‑group interaction. This forces the model to produce consistent box predictions regardless of where the point lies on the object, thereby mitigating the notorious point‑location dependence observed in prior work.
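The sampling step of this strategy amounts to drawing N independent points inside each ground-truth box and treating each draw as a separate query group. A minimal sketch, assuming normalized (cx, cy, w, h) boxes; the function name is hypothetical:

```python
import torch

def sample_point_groups(boxes, n_groups=3):
    """Draw n_groups random points inside each ground-truth box.

    boxes: (num_obj, 4) as (cx, cy, w, h) in [0, 1].
    Returns (n_groups, num_obj, 2) point coordinates. Each group is later
    decoded independently, so the model must predict the same box no
    matter where the point falls on the object.
    """
    cx, cy, w, h = boxes.unbind(-1)               # each (num_obj,)
    u = torch.rand(n_groups, *cx.shape, 2)        # uniform samples in the unit square
    px = cx - w / 2 + u[..., 0] * w               # map into the box extent
    py = cy - h / 2 + u[..., 1] * h
    return torch.stack([px, py], dim=-1)

boxes = torch.tensor([[0.5, 0.5, 0.2, 0.2]])
points = sample_point_groups(boxes, n_groups=4)   # (4, 1, 2), all inside the box
```

Since the groups never attend to each other, consistency across them comes purely from sharing the same box target, not from information leaking between groups.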
The overall architecture follows the classic DETR pipeline: a ResNet‑50 backbone extracts multi‑scale features, a transformer encoder refines them with class‑guided MSDA, point (or box) encoders embed coordinates and class indices into query vectors, self‑attention mixes queries, cross‑attention with class‑guided MSDA gathers visual cues, CLICK‑MoE refines the queries, and a regression head predicts box deltas. After each decoder layer, the predicted box is re‑encoded as a new query for the next layer, enabling iterative refinement.
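The iterative refinement at the end of this pipeline can be sketched in isolation: each decoder layer regresses box deltas, and the refined box is re-encoded as the next layer's query. The encoder, attention, and CLICK-MoE stages are stubbed out with plain layers here, so this is a structural sketch only, with assumed names and sizes.

```python
import torch
import torch.nn as nn

class IterativeRefineSketch(nn.Module):
    """Sketch of per-layer box refinement: predict deltas, add them to the
    current box, then re-encode the refined box as the next layer's query.
    Attention and CLICK-MoE are replaced by simple linear layers."""
    def __init__(self, d=128, n_layers=3):
        super().__init__()
        self.box_encoder = nn.Linear(4, d)    # embed (cx, cy, w, h) as a query
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))
        self.delta_head = nn.Linear(d, 4)     # regress box deltas per layer

    def forward(self, init_box):
        box = init_box                        # (B, Q, 4)
        for layer in self.layers:
            q = torch.relu(layer(self.box_encoder(box)))
            box = box + self.delta_head(q)    # refine; re-encoded next iteration
        return box
```

Each pass through the loop corresponds to one decoder layer, so deeper decoders get more refinement steps for free.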
Experiments were conducted on three heterogeneous medical datasets: Endoscapes (surgical endoscopy), VinDr‑CXR (chest X‑ray), and EUS‑D130 (endoscopic ultrasound). In each case, only 10–20% of the images carried full box annotations; the remainder were annotated with single points. Compared against baselines such as Point‑DETR, Group R‑CNN, PBC, and PSL‑Net, DExTeR achieved state‑of‑the‑art mean average precision (mAP), with improvements of 4–6 percentage points. Ablation studies confirmed that each component contributed positively: class‑guided MSDA (+1.5 pp), CLICK‑MoE (+2 pp), and multi‑point training (+1 pp). Moreover, DExTeR converged roughly 30% faster than vanilla DETR‑based teachers, a critical advantage when training on limited data.
In summary, DExTeR demonstrates that carefully integrating class information, instance‑specific adaptation, and robust training strategies can bridge the performance gap between weakly annotated and fully supervised medical object detectors. The method reduces annotation cost dramatically while preserving, and even enhancing, detection accuracy across diverse imaging modalities, paving the way for scalable, annotation‑efficient AI assistance in clinical workflows.