TabDistill: Distilling Transformers into Neural Nets for Few-Shot Tabular Classification


📝 Original Info

  • Title: TabDistill: Distilling Transformers into Neural Nets for Few-Shot Tabular Classification
  • ArXiv ID: 2511.05704
  • Date: 2025-11-07
  • Authors: Not available (author information was not included in the provided text)

📝 Abstract

Transformer-based models have shown promising performance on tabular data compared to their classical counterparts such as neural networks and Gradient Boosted Decision Trees (GBDTs) in scenarios with limited training data. They utilize their pre-trained knowledge to adapt to new domains, achieving commendable performance with only a few training examples, also called the few-shot regime. However, the performance gain in the few-shot regime comes at the expense of significantly increased complexity and number of parameters. To circumvent this trade-off, we introduce TabDistill, a new strategy to distill the pre-trained knowledge in complex transformer-based models into simpler neural networks for effectively classifying tabular data. Our framework yields the best of both worlds: being parameter-efficient while performing well with limited training data. The distilled neural networks surpass classical baselines such as regular neural networks, XGBoost and logistic regression under equal training data, and in some cases, even the original transformer-based models that they were distilled from.
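
To make the distillation idea in the abstract concrete, the sketch below trains a small MLP student to match a teacher's predicted class probabilities on unlabeled tabular rows while also fitting the few labeled examples. This is an illustrative sketch, not the paper's implementation: the logistic-regression stand-in teacher (standing in for a pre-trained transformer-based tabular model), the synthetic data, the loss weight `alpha`, and the network sizes are all assumptions.

```python
# Minimal knowledge-distillation sketch for few-shot tabular classification.
# NOTE: illustrative only; the teacher here is a stand-in for the paper's
# pre-trained transformer-based model, and all hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic tabular data: a handful of labeled rows (few-shot set) plus
# unlabeled rows used only to query the teacher for soft targets.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_few, y_few = X[:16], y[:16]      # labeled few-shot examples
X_unlab = X[16:]                   # unlabeled rows for distillation

# Teacher: any model exposing predict_proba; a pre-trained tabular
# transformer would take this role in the paper's setting.
teacher = LogisticRegression(max_iter=1000).fit(X_few, y_few)
soft_targets = torch.tensor(teacher.predict_proba(X_unlab), dtype=torch.float32)

# Student: a small, parameter-efficient MLP we want to keep after distillation.
student = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

X_few_t = torch.tensor(X_few, dtype=torch.float32)
y_few_t = torch.tensor(y_few, dtype=torch.long)
X_unlab_t = torch.tensor(X_unlab, dtype=torch.float32)

alpha = 0.5  # assumed weighting between hard-label and distillation losses
for step in range(500):
    opt.zero_grad()
    # Hard-label cross-entropy on the few labeled examples.
    ce = F.cross_entropy(student(X_few_t), y_few_t)
    # Distillation loss: KL divergence between the student's predicted
    # distribution and the teacher's probabilities on unlabeled rows.
    log_p_student = F.log_softmax(student(X_unlab_t), dim=1)
    kd = F.kl_div(log_p_student, soft_targets, reduction="batchmean")
    loss = alpha * ce + (1 - alpha) * kd
    loss.backward()
    opt.step()
```

After training, only the small MLP is kept for inference, which is the "best of both worlds" the abstract refers to: few-shot behavior guided by the teacher's pre-trained knowledge, at the cost of a far smaller parameter count.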


