Pimp My LLM: Leveraging Variability Modeling to Tune Inference Hyperparameters
Large Language Models (LLMs) are being increasingly used across a wide range of tasks. However, their substantial computational demands raise concerns about the energy efficiency and sustainability of both training and inference. Inference, in particular, dominates total compute usage, making its optimization crucial. Recent research has explored optimization techniques and analyzed how configuration choices influence energy consumption. Yet, the vast configuration space of inference servers makes exhaustive empirical evaluation infeasible due to combinatorial explosion. In this paper, we introduce a new perspective on this problem by treating LLMs as configurable systems and applying variability management techniques to systematically analyze inference-time configuration choices. We evaluate our approach on the Hugging Face Transformers library by representing generation hyperparameters and their constraints using a feature-based variability model, sampling representative configurations, measuring their energy consumption, latency, and accuracy, and learning predictive models from the collected data. Our results show that variability modeling effectively manages the complexity of LLM inference configurations. It enables systematic analysis of hyperparameter effects and interactions, reveals trade-offs, and supports accurate prediction of inference behavior from a limited number of measurements. Overall, this work opens a new research direction that bridges software engineering and machine learning by leveraging variability modeling for the efficient and sustainable configuration of LLMs.
💡 Research Summary
The paper addresses the growing concern of energy consumption and latency in Large Language Model (LLM) inference by importing variability modeling techniques from software product line engineering. The authors treat the Hugging Face Transformers library as a highly configurable system and construct a feature model (FM) that captures 96 generation‑time hyperparameters, of which 67 are concrete features. Each feature represents either a Boolean switch or a discretized numeric value (e.g., temperature_0.7). Cross‑tree constraints encode dependencies such as “sampling requires temperature to be set” or “beam size cannot exceed max_length”. Translating the FM into propositional logic yields an estimated 9.37 × 10¹² valid configurations, illustrating the infeasibility of exhaustive evaluation.
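The encoding idea can be sketched in a few lines of pure Python. The feature names, discretization levels, and constraint below are illustrative stand-ins, not the paper's actual 96-feature FM; they only mirror its style (Boolean switches, discretized numerics such as `temperature_0.7`, and a cross-tree constraint like "sampling requires temperature to be set").

```python
from itertools import product

# Hypothetical mini feature model mirroring the paper's style;
# None means "feature not selected" for a discretized numeric.
features = {
    "do_sample": [True, False],
    "temperature": [None, 0.7, 1.0],
    "num_beams": [1, 4],
}

def is_valid(cfg):
    # Illustrative cross-tree constraint: sampling requires a temperature.
    if cfg["do_sample"] and cfg["temperature"] is None:
        return False
    return True

def enumerate_valid():
    keys = list(features)
    for values in product(*(features[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if is_valid(cfg):
            yield cfg

valid = list(enumerate_valid())
print(len(valid))  # 10 of the 12 raw combinations satisfy the constraint
```

At the paper's scale, enumerating configurations this way is exactly what becomes infeasible (hence the propositional-logic encoding and model counting); the toy version only illustrates how constraints prune the raw combinatorial space.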
To explore this massive space, the authors adopt three sampling strategies: two interaction‑aware 2‑wise techniques (YASA and ICPL) and a baseline random sampler. The t‑wise samplers guarantee that every valid pair of feature values appears together in at least one sampled configuration, thereby capturing non‑linear interactions that are common in LLM generation settings. The random sampler generates a number of configurations equal to the feature count (≈96) for comparison. All sampled configurations are automatically validated against the FM to ensure executability.
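The 2-wise coverage property can be checked mechanically: every pair of feature values must co-occur in at least one sampled configuration. A minimal sketch, using hypothetical binary-valued features and a hand-built sample (real tools like YASA construct such covering samples automatically, and must additionally respect the FM's constraints):

```python
from itertools import combinations, product

# Hypothetical discretized domains, in the style of the paper's FM.
domains = {
    "do_sample": [True, False],
    "top_p": [0.9, 1.0],
    "num_beams": [1, 4],
}

def covered_pairs(samples):
    """All (feature, value, feature, value) pairs seen in the sample set."""
    pairs = set()
    for cfg in samples:
        items = sorted(cfg.items(), key=lambda kv: kv[0])
        for (f1, v1), (f2, v2) in combinations(items, 2):
            pairs.add((f1, v1, f2, v2))
    return pairs

def all_pairs():
    """Every feature-value pair the domains admit."""
    out = set()
    for f1, f2 in combinations(sorted(domains), 2):
        for v1, v2 in product(domains[f1], domains[f2]):
            out.add((f1, v1, f2, v2))
    return out

# Four configurations suffice to cover all pairs of these three features.
samples = [
    {"do_sample": True,  "top_p": 0.9, "num_beams": 1},
    {"do_sample": True,  "top_p": 1.0, "num_beams": 4},
    {"do_sample": False, "top_p": 0.9, "num_beams": 4},
    {"do_sample": False, "top_p": 1.0, "num_beams": 1},
]
coverage = len(covered_pairs(samples)) / len(all_pairs())
print(coverage)  # 1.0: full pairwise coverage with only 4 of 8 configurations
```

The pay-off is the same at full scale: pairwise coverage needs a sample that grows far more slowly than the full configuration space, while still exposing every two-way interaction.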
Each configuration is run on a standard inference platform (Intel Xeon CPU, NVIDIA RTX 3090 GPU). Energy consumption is measured using the Running Average Power Limit (RAPL) interface for the CPU and nvidia‑smi for the GPU. Latency is recorded as tokens‑per‑second, and generation quality (accuracy) is evaluated with multiple metrics such as BLEU, ROUGE, and exact‑match on a code‑generation benchmark. The resulting dataset contains several hundred measured points covering a diverse set of hyperparameter combinations.
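A simplified sketch of the latency side of such a harness, with a stub standing in for `model.generate(...)` (the stub and function names are illustrative; the actual pipeline additionally reads CPU energy via the RAPL powercap interface and GPU power via `nvidia-smi` around the same measurement window):

```python
import time

def tokens_per_second(generate_fn, prompt, n_runs=3):
    """Time a generation call and report best throughput in tokens/second.
    `generate_fn` is assumed to return the generated token ids."""
    best = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate_fn(prompt)
        elapsed = max(time.perf_counter() - start, 1e-9)  # guard against ~0
        best = max(best, len(tokens) / elapsed)
    return best

# Stub in place of a real model.generate call, so the sketch is runnable.
def fake_generate(prompt):
    return list(range(128))  # pretend we produced 128 tokens

tps = tokens_per_second(fake_generate, "def add(a, b):")
print(tps > 0)
```

In a real setup the per-configuration loop would apply one sampled hyperparameter configuration, run the benchmark prompts, and log (energy, tokens/sec, accuracy) as one row of the dataset.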
From this dataset the authors train predictive models for three target variables: energy, latency, and accuracy. They experiment with tree‑based regressors (Random Forest, XGBoost) and a multilayer perceptron, employing 5‑fold cross‑validation to assess generalization. The best models achieve mean absolute percentage errors below 5 % for energy, 3 % for latency, and 2 % for accuracy, demonstrating that a modest number of measurements suffices to predict the behavior of unseen configurations with high fidelity.
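The evaluation protocol (5-fold cross-validation scored with mean absolute percentage error) can be sketched in pure Python. The predictor below is a trivial train-mean baseline and the data is synthetic; the paper's actual models are Random Forest, XGBoost, and an MLP over measured configurations.

```python
import random

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(t - p) / abs(t)
                     for t, p in zip(y_true, y_pred)) / len(y_true)

def kfold_mape(xs, ys, fit, k=5, seed=0):
    """k-fold CV: `fit(train_x, train_y)` returns a predict function."""
    idx = list(range(len(ys)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = set(folds[i])
        train = [j for j in idx if j not in held_out]
        predict = fit([xs[j] for j in train], [ys[j] for j in train])
        scores.append(mape([ys[j] for j in folds[i]],
                           [predict(xs[j]) for j in folds[i]]))
    return sum(scores) / k

# Trivial baseline model: always predict the training mean.
def mean_fit(train_x, train_y):
    m = sum(train_y) / len(train_y)
    return lambda x: m

xs = list(range(40))
ys = [100 + (i % 5) for i in xs]  # synthetic "energy" measurements
score = kfold_mape(xs, ys, mean_fit)
print(round(score, 2))  # small MAPE on this near-constant synthetic data
```

The reported sub-5 % errors mean the learned regressors beat such naive baselines by enough margin to rank unseen configurations reliably, which is what the later Pareto analysis depends on.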
Using the learned models, the authors perform a Pareto‑front analysis to identify configurations that balance the three objectives. For instance, a configuration with temperature 0.7, top‑p 0.9, and beam width 4 reduces energy consumption by roughly 12 % and latency by 8 % while incurring only a 1 % drop in code‑generation accuracy. Such trade‑offs are uncovered systematically thanks to the FM’s ability to enumerate valid configurations and the sampling strategy’s coverage of feature interactions.
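Pareto-front extraction over the three objectives is a standard non-dominated filter. A minimal sketch with hypothetical, made-up objective values (energy and latency are minimized; accuracy is negated so that all three objectives are minimized uniformly):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one; all objectives are minimized."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (energy J, latency s, -accuracy) triples per configuration.
configs = {
    "greedy":       (50.0, 1.00, -0.80),
    "temp0.7_topp": (44.0, 0.92, -0.79),  # cheaper and faster, tiny acc. drop
    "beam8":        (70.0, 1.50, -0.81),  # best accuracy, costly
    "wasteful":     (80.0, 2.00, -0.70),  # dominated on all objectives
}
front = pareto_front(list(configs.values()))
names = sorted(n for n, p in configs.items() if p in front)
print(names)  # the dominated "wasteful" configuration is filtered out
```

Applied to model predictions over all valid FM configurations, the same filter surfaces exactly the kind of trade-off the paper reports, e.g. a sampling configuration that trades a small accuracy drop for double-digit energy savings.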
The paper’s contributions are threefold: (1) a publicly released feature model of Hugging Face Transformers inference hyperparameters, providing a reusable artifact for the community; (2) empirical evidence that t‑wise sampling effectively captures interaction effects in LLM inference settings; (3) a complete “model‑sample‑measure‑learn” pipeline that yields accurate performance and energy predictions from a limited experimental budget. Limitations include the focus on a single inference engine, omission of GPU memory usage and multi‑GPU scaling, and lack of validation on alternative back‑ends such as vLLM or Text Generation Inference. Future work is outlined to extend the FM to other libraries, incorporate additional resource metrics, and integrate the predictive models into automated configuration optimizers that can adapt at runtime.