AdaFuse: Adaptive Multimodal Fusion for Lung Cancer Risk Prediction via Reinforcement Learning
Multimodal fusion has emerged as a promising paradigm for disease diagnosis and prognosis, integrating complementary information from heterogeneous data sources such as medical images, clinical records, and radiology reports. However, existing fusion methods process all available modalities through the network, either treating them equally or learning to assign different contribution weights, leaving a fundamental question unaddressed: for a given patient, should certain modalities be used at all? We present AdaFuse, an adaptive multimodal fusion framework that leverages reinforcement learning (RL) to learn patient-specific modality selection and fusion strategies for lung cancer risk prediction. AdaFuse formulates multimodal fusion as a sequential decision process, where the policy network iteratively decides whether to incorporate an additional modality or proceed to prediction based on the information already acquired. This sequential formulation enables the model to condition each selection on previously observed modalities and terminate early when sufficient information is available, rather than committing to a fixed subset upfront. We evaluate AdaFuse on the National Lung Screening Trial (NLST) dataset. Experimental results demonstrate that AdaFuse achieves the highest AUC (0.762) compared to the best single-modality baseline (0.732), the best fixed fusion strategy (0.759), and adaptive baselines including DynMM (0.754) and MoE (0.742), while using fewer FLOPs than all triple-modality methods. Our work demonstrates the potential of reinforcement learning for personalized multimodal fusion in medical imaging, representing a shift from uniform fusion strategies toward adaptive diagnostic pipelines that learn when to consult additional modalities and when existing information suffices for accurate prediction.
💡 Research Summary
AdaFuse introduces a novel adaptive multimodal fusion framework for lung cancer risk prediction that leverages reinforcement learning (RL) to make patient‑specific decisions about which data modalities to use and how to fuse them. The authors identify a critical gap in existing multimodal approaches: they either process all available modalities uniformly or assign soft contribution weights, but they never decide whether a particular modality should be used at all for a given patient. To address this, AdaFuse formulates modality selection as a sequential decision‑making problem modeled as a Markov Decision Process (MDP). A policy network interacts with the environment in up to three steps. First, it selects a primary modality from CT images (A), clinical variables (B), or radiology text reports (C). Second, based on the current state—comprising the encoded features of already selected modalities and a binary mask indicating selection—it decides either to stop and predict or to add one of the remaining modalities. Third, if two modalities have been selected, the policy may either stop or incorporate the third modality while simultaneously choosing a fusion operator among concatenation, mean pooling, or tensor fusion. This design mirrors clinical practice where physicians order additional tests only when needed.
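The three-step rollout described above can be sketched in plain Python. Everything here is illustrative: the names (`run_episode`, `RandomPolicy`, `decide_next`) are stand-ins for the learned policy network, not the paper's implementation.

```python
import random

MODALITIES = ["CT", "clinical", "text"]       # modalities A, B, C in the paper
FUSION_OPS = ["concat", "mean", "tensor"]     # fusion operators named in the summary

def run_episode(policy):
    """Roll out up to three decision steps; `policy` maps the current
    selection state to an action."""
    selected = []
    # Step 1: always pick a primary modality.
    selected.append(policy.select_primary(selected))
    # Steps 2-3: either stop and predict, or add one remaining modality.
    for _ in range(2):
        remaining = [m for m in MODALITIES if m not in selected]
        action = policy.decide_next(selected, remaining)  # "stop" or a modality
        if action == "stop":
            break
        selected.append(action)
    # With two or more modalities, a fusion operator must also be chosen.
    fusion = policy.choose_fusion(selected) if len(selected) > 1 else None
    return selected, fusion

class RandomPolicy:
    """Stand-in policy for illustration; AdaFuse learns these decisions."""
    def select_primary(self, selected):
        return random.choice(MODALITIES)
    def decide_next(self, selected, remaining):
        return random.choice(["stop"] + remaining)
    def choose_fusion(self, selected):
        return random.choice(FUSION_OPS)
```

An episode always yields between one and three distinct modalities, mirroring the clinical analogy of ordering further tests only when the current evidence is insufficient.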
Each modality is processed by a dedicated pretrained encoder (Sybil for CT, a PLCO‑based model for clinical data, and CORe BERT for text) and projected into a shared 32‑dimensional space. The state encoder concatenates the masked modality embeddings with the selection mask and passes them through a two‑layer MLP to produce a 64‑dimensional state vector. The policy heads output logits for each decision step; actions are sampled via a softmax with a temperature that anneals during training and becomes greedy at inference.
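The state encoding and action sampling can be sketched as follows, using the 32-dimensional projections and 64-dimensional state from the summary. The ReLU activation and the random weight initializations are assumptions; in AdaFuse all weights are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, S = 32, 64, 64  # projection dim, hidden dim, state dim (32/64 per summary)

# Hypothetical MLP weights for illustration; learned in the actual model.
W1 = rng.normal(0, 0.1, size=(3 * D + 3, H))
W2 = rng.normal(0, 0.1, size=(H, S))

def encode_state(embeddings, mask):
    """Concatenate the masked 32-d modality embeddings with the binary
    selection mask, then apply a two-layer MLP to get the 64-d state."""
    masked = embeddings * mask[:, None]          # zero out unselected modalities
    x = np.concatenate([masked.ravel(), mask])   # shape (3*32 + 3,)
    h = np.maximum(0.0, x @ W1)                  # ReLU hidden layer (assumed)
    return h @ W2

def sample_action(logits, temperature, greedy=False):
    """Temperature-scaled softmax sampling during training;
    greedy argmax at inference, as described in the summary."""
    if greedy:
        return int(np.argmax(logits))
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))
```

Annealing the temperature toward zero during training moves the sampling distribution from exploratory to near-deterministic, so the greedy inference-time behavior matches what the policy converged to.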
The reward combines two components: (1) the negative binary cross‑entropy (BCE) of the prediction, providing a smooth signal based on confidence, and (2) a mini‑batch AUC term normalized to
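The two reward components can be sketched as below. The negative BCE term is standard; the mini-batch AUC is estimated here with the Mann-Whitney statistic, and the normalization to [-1, 1] and the weighting coefficient `alpha` are assumptions for illustration, not values specified in the summary.

```python
import numpy as np

def neg_bce_reward(probs, labels, eps=1e-7):
    """Negative binary cross-entropy per sample: a smooth reward that
    grows with confident, correct predictions (always <= 0)."""
    probs = np.clip(probs, eps, 1 - eps)
    return labels * np.log(probs) + (1 - labels) * np.log(1 - probs)

def batch_auc(probs, labels):
    """Mann-Whitney estimate of AUC over a mini-batch."""
    pos = probs[labels == 1]
    neg = probs[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return 0.5  # AUC undefined for a single-class batch; neutral value
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def reward(probs, labels, alpha=1.0):
    """Per-sample reward: negative BCE plus a shared mini-batch AUC term.
    The [-1, 1] normalization and `alpha` weight are assumptions."""
    auc_term = 2.0 * batch_auc(probs, labels) - 1.0
    return neg_bce_reward(probs, labels) + alpha * auc_term
```

Because every sample in the batch shares the same AUC term, that component rewards the policy for selections that improve ranking quality across patients, while the BCE term gives a dense per-patient signal.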