TabClustPFN: A Prior-Fitted Network for Tabular Data Clustering
Clustering tabular data is a fundamental yet challenging problem due to heterogeneous feature types, diverse data-generating mechanisms, and the absence of transferable inductive biases across datasets. Prior-fitted networks (PFNs) have recently demonstrated strong generalization in supervised tabular learning by amortizing Bayesian inference under a broad synthetic prior. Extending this paradigm to clustering is nontrivial: clustering is unsupervised, admits a combinatorial and permutation-invariant output space, and requires inferring the number of clusters. We introduce TabClustPFN, a prior-fitted network for tabular data clustering that performs amortized Bayesian inference over both cluster assignments and cluster cardinality. Pretrained on synthetic datasets drawn from a flexible clustering prior, TabClustPFN clusters unseen datasets in a single forward pass, without dataset-specific retraining or hyperparameter tuning. The model naturally handles heterogeneous numerical and categorical features and adapts to a wide range of clustering structures. Experiments on synthetic data and curated real-world tabular benchmarks show that TabClustPFN outperforms classical, deep, and amortized clustering baselines, while exhibiting strong robustness in out-of-the-box exploratory settings. Code is available at https://github.com/Tianqi-Zhao/TabClustPFN.
💡 Research Summary
TabClustPFN introduces a foundation‑model style approach to clustering tabular data, extending the Prior‑Fitted Network (PFN) paradigm that has proven effective for supervised tabular learning. The core idea is to amortize Bayesian inference over both the number of clusters (cardinality) and the assignment of each observation, enabling zero‑shot clustering without any per‑dataset fine‑tuning.
The method consists of two cooperating modules. The Partition Inference Network (PIN) receives a dataset X and a candidate cluster count K, and outputs a soft assignment matrix P ∈ [0,1]^{N×K}, in which each row is a probability distribution over the K candidate clusters for the corresponding observation. The second module performs inference over the cluster cardinality itself, so that the number of clusters need not be specified in advance.
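The shape of such a soft assignment matrix can be illustrated with a minimal NumPy sketch. The logits below are random stand-ins for the network's actual output, and all variable names are illustrative rather than taken from the TabClustPFN repository:

```python
import numpy as np

# Hypothetical per-observation, per-cluster scores; in the real model these
# would come from a forward pass of the Partition Inference Network.
rng = np.random.default_rng(0)
N, K = 6, 3
logits = rng.normal(size=(N, K))

# Soft assignment matrix P in [0,1]^{N x K}: a row-wise softmax turns the
# scores into probability distributions over the K candidate clusters.
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)

# Hard cluster labels are recovered as the argmax of each row.
labels = P.argmax(axis=1)
```

Each row of `P` sums to one, so the matrix can be read directly as assignment probabilities.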