LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification


📝 Original Info

  • Title: LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification
  • ArXiv ID: 2512.10793
  • Date: 2025-12-11
  • Authors: Michael Schlee, Christoph Weisser, Timo Kivimäki, Melchizedek Mashiku, Benjamin Saefken

📝 Abstract

LabelFusion is a fusion ensemble for text classification that learns to combine a traditional transformer-based classifier (e.g., RoBERTa) with one or more Large Language Models (LLMs such as OpenAI GPT, Google Gemini, or DeepSeek) to deliver accurate and cost-aware predictions across multi-class and multi-label tasks. The package provides a simple high-level interface (AutoFusionClassifier) that trains the full pipeline end-to-end with minimal configuration, as well as a flexible API for advanced users. Under the hood, LabelFusion integrates vector signals from both sources by concatenating the ML backbone's embeddings with LLM-derived per-class scores (obtained through structured prompt-engineering strategies) and feeding this joint representation into a compact multi-layer perceptron (FusionMLP) that produces the final prediction. This learned fusion captures the complementary strengths of LLM reasoning and traditional transformer-based classifiers, yielding robust performance across domains (92.4% accuracy on AG News and 92.3% on 10-class Reuters-21578 topic classification) while enabling practical trade-offs between accuracy, latency, and cost.
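The fusion step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not LabelFusion's actual API: the function name, dimensions, and weight shapes are assumptions; it only shows the idea of concatenating a backbone embedding with LLM per-class scores and passing the joint vector through a small MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_mlp_forward(embedding, llm_scores, W1, b1, W2, b2):
    # Hypothetical sketch of the FusionMLP idea: concatenate the
    # transformer embedding with LLM per-class scores, then apply a
    # two-layer MLP to produce final class logits.
    joint = np.concatenate([embedding, llm_scores])   # joint representation
    hidden = np.maximum(0.0, W1 @ joint + b1)         # ReLU hidden layer
    return W2 @ hidden + b2                           # class logits

# Toy dimensions (assumed, not from the paper): 8-dim embedding, 4 classes.
emb_dim, n_classes, hidden_dim = 8, 4, 16
W1 = rng.standard_normal((hidden_dim, emb_dim + n_classes))
b1 = np.zeros(hidden_dim)
W2 = rng.standard_normal((n_classes, hidden_dim))
b2 = np.zeros(n_classes)

embedding = rng.standard_normal(emb_dim)       # e.g., a RoBERTa [CLS] embedding
llm_scores = np.array([0.1, 0.7, 0.1, 0.1])    # LLM-derived per-class scores
logits = fusion_mlp_forward(embedding, llm_scores, W1, b1, W2, b2)
print(logits.shape)  # one logit per class: (4,)
```

In practice the MLP weights would be trained end-to-end on labeled data, so the network learns when to trust the backbone versus the LLM signals.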

📄 Full Content

...(Full text omitted for length; see the source site for the complete article.)
