LLM-based Fusion of Multi-modal Features for Commercial Memorability Prediction

Reading time: 1 minute

📝 Original Info

  • Title: LLM-based Fusion of Multi-modal Features for Commercial Memorability Prediction
  • ArXiv ID: 2510.22829
  • Date: 2025-10-26
  • Authors: Not specified in the paper (no author information provided)

📝 Abstract

This paper addresses the prediction of commercial (brand) memorability as part of "Subtask 2: Commercial/Ad Memorability" within the "Memorability: Predicting movie and commercial memorability" task at the MediaEval 2025 workshop competition. We propose a multimodal fusion system with a Gemma-3 LLM backbone that integrates pre-computed visual (ViT) and textual (E5) features via multi-modal projections. The model is adapted using Low-Rank Adaptation (LoRA). A heavily tuned ensemble of gradient-boosted trees serves as a baseline. A key contribution is the use of LLM-generated rationale prompts, grounded in expert-derived aspects of memorability, to guide the fusion model. The results demonstrate that the LLM-based system exhibits greater robustness and better generalization on the final test set than the baseline. The paper's codebase can be found at https://github.com/dsgt-arc/mediaeval-2025-memorability
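
The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of the fusion idea: pre-computed ViT (visual) and E5 (textual) features are each projected into the backbone's hidden space and prepended to the prompt embeddings as "soft tokens" before a regression head predicts a memorability score. The feature dimensions, hidden size, class names, and the stand-in Transformer backbone are assumptions for illustration; the actual system uses a Gemma-3 backbone adapted with LoRA, and its exact fusion details are in the linked repository.

```python
import torch
import torch.nn as nn

class MultimodalFusionRegressor(nn.Module):
    """Illustrative fusion model (not the paper's implementation)."""

    def __init__(self, vit_dim=768, e5_dim=1024, hidden_dim=512, n_layers=2):
        super().__init__()
        # Modality-specific projections into the backbone's hidden space.
        self.visual_proj = nn.Linear(vit_dim, hidden_dim)
        self.text_proj = nn.Linear(e5_dim, hidden_dim)
        # Stand-in for the (LoRA-adapted) Gemma-3 backbone.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Scalar memorability-score head.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, vit_feats, e5_feats, prompt_embeds):
        # vit_feats: (B, vit_dim), e5_feats: (B, e5_dim)
        # prompt_embeds: (B, T, hidden_dim) -- embedded rationale-prompt tokens
        v = self.visual_proj(vit_feats).unsqueeze(1)   # (B, 1, hidden)
        t = self.text_proj(e5_feats).unsqueeze(1)      # (B, 1, hidden)
        seq = torch.cat([v, t, prompt_embeds], dim=1)  # prepend soft tokens
        out = self.backbone(seq)
        return self.head(out[:, 0]).squeeze(-1)        # score from first token

# Toy usage with random tensors (batch of 4, prompt length 16).
model = MultimodalFusionRegressor()
score = model(torch.randn(4, 768), torch.randn(4, 1024), torch.randn(4, 16, 512))
print(score.shape)  # torch.Size([4])
```

In the described system, the backbone would instead be a pretrained Gemma-3 model with LoRA adapters on its attention and MLP weights, so only the projections, the low-rank adapter matrices, and the head are trained while the base LLM stays frozen.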

