A Metamorphic Testing Perspective on Knowledge Distillation for Language Models of Code: Does the Student Deeply Mimic the Teacher?

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Transformer-based language models of code have achieved state-of-the-art performance across a wide range of software analytics tasks, but their practical deployment remains limited by high computational costs, slow inference, and significant environmental impact. To address these challenges, recent research has increasingly explored knowledge distillation as a method for compressing a large language model of code (the teacher) into a smaller model (the student) while maintaining performance. However, the degree to which a student model deeply mimics the predictive behavior and internal representations of its teacher remains largely unexplored: current accuracy-based evaluation provides only a surface-level view of model quality and often fails to capture deeper discrepancies in behavioral fidelity between the teacher and student models. To address this gap, we empirically show that student models often fail to deeply mimic their teachers, suffering a performance drop up to 285% greater under adversarial attacks, which traditional accuracy-based evaluation does not capture. We therefore propose MetaCompress, a metamorphic testing framework that systematically evaluates behavioral fidelity by comparing the outputs of teacher and student models under a set of behavior-preserving metamorphic relations. We evaluate MetaCompress on two widely studied tasks, using compressed versions of popular language models of code obtained via three knowledge distillation techniques: Compressor, AVATAR, and MORPH. The results show that MetaCompress detects behavioral discrepancies in up to 62% of test cases, underscoring the need for behavioral fidelity evaluation within the knowledge distillation pipeline and establishing MetaCompress as a practical framework for testing compressed language models of code derived through knowledge distillation.


💡 Research Summary

The paper investigates whether student models obtained by knowledge distillation (KD) of large code‑specific language models truly replicate the predictive behavior and internal representations of their teacher models. While transformer‑based models such as CodeBERT and GraphCodeBERT achieve state‑of‑the‑art results on tasks like clone detection and vulnerability prediction, their deployment is hampered by high computational cost and environmental impact. KD offers a promising route to compress these models, but existing evaluations rely almost exclusively on accuracy against ground‑truth labels, which provides only a surface‑level view of model quality.

To expose deeper discrepancies, the authors first conduct simple yet effective adversarial attacks that rename identifiers and apply semantics-preserving code transformations. They find that, although distilled student models match teacher accuracy on clean data, they suffer up to 285% greater performance loss under adversarial perturbations, indicating a lack of behavioral fidelity.
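The idea of a semantics-preserving identifier rename can be sketched as follows. This is an illustrative example using Python's `ast` module, not the paper's attack implementation (the paper's exact transformation operators and target languages are not reproduced here):

```python
import ast

class RenameIdentifier(ast.NodeTransformer):
    """Rename every occurrence of one identifier -- a transformation
    that preserves program semantics but changes surface tokens."""

    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        # Variable reads/writes
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node):
        # Function parameters
        if node.arg == self.old:
            node.arg = self.new
        return node

def rename(source, old, new):
    tree = RenameIdentifier(old, new).visit(ast.parse(source))
    return ast.unparse(tree)

original = "def add(a, b):\n    return a + b"
perturbed = rename(original, "a", "x")
# Both variants compute the same function; a behaviorally faithful
# student should classify the perturbed code as its teacher does.
```

Because the rename leaves behavior unchanged, any divergence between teacher and student predictions on the perturbed input signals a fidelity gap that clean-data accuracy would miss.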

Motivated by this gap, the authors propose MetaCompress, a metamorphic testing (MT) framework that evaluates behavioral fidelity by comparing teacher and student outputs across a set of behavior‑preserving metamorphic relations (MRs). Unlike traditional MT, which transforms inputs, MetaCompress defines MRs between the two models’ outputs for the same input. Four MRs are designed: (1) label agreement, (2) similarity of probability distributions measured by KL‑divergence, (3) preservation of class‑ranking order, and (4) calibration consistency of confidence scores. These relations capture different facets of fidelity, from coarse label matches to fine‑grained distributional alignment.
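A minimal sketch of how the four output-based MRs could be checked for a single input is shown below. The function names, thresholds, and exact formulations are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability vectors."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def check_mrs(teacher_probs, student_probs,
              kl_threshold=0.1, conf_tolerance=0.1):
    """Check illustrative versions of the four MRs for one input.
    Returns a dict mapping MR name -> True if the relation holds.
    Thresholds are assumed values, not those used in the paper."""
    t = np.asarray(teacher_probs, float)
    s = np.asarray(student_probs, float)
    return {
        # MR1: coarse label agreement
        "label_agreement": int(t.argmax()) == int(s.argmax()),
        # MR2: fine-grained distributional alignment via KL-divergence
        "distribution_similarity": kl_divergence(t, s) <= kl_threshold,
        # MR3: preservation of the class-ranking order
        "ranking_preserved": bool(np.array_equal(np.argsort(-t),
                                                 np.argsort(-s))),
        # MR4: calibration consistency of top confidence scores
        "calibration_consistent": bool(abs(t.max() - s.max())
                                       <= conf_tolerance),
    }

# Teacher and student agree on the label and ranking, yet the KL and
# calibration checks fail -- a discrepancy accuracy alone would hide.
result = check_mrs([0.9, 0.1], [0.6, 0.4])
```

This illustrates the framework's key property: the relations are oracle-free, since each MR compares the two models' outputs against each other rather than against ground-truth labels.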

The empirical study uses two widely studied tasks (clone detection and vulnerability prediction) and two teacher models (CodeBERT, GraphCodeBERT). Student models are generated via three recent KD techniques: Compressor, AVATAR, and MORPH. Across all settings, traditional accuracy metrics show negligible gaps between teacher and student. However, MetaCompress reveals that up to 62% of test cases violate at least one MR, with the KL-divergence MR accounting for the majority of failures. The analysis also uncovers technique-specific patterns: Compressor achieves the highest compression (down to 3 MB) but exhibits the greatest behavioral discrepancy, while MORPH maintains the most stable fidelity while still providing meaningful size reduction.

Key contributions are:

  1. Insight – Demonstrating that accuracy alone cannot capture the behavioral fidelity of distilled models, and that adversarial robustness is a practical proxy for deep mimicry.
  2. Technique – Introducing MetaCompress, a novel output‑based metamorphic testing framework with four rigorously defined MRs for code‑model evaluation.
  3. Evaluation – Providing extensive experiments that show substantial hidden discrepancies in student models across multiple tasks and KD methods.
  4. Open Science – Releasing all code, data, and replication scripts to enable reproducibility.

The study concludes that while KD dramatically reduces resource consumption, ensuring that student models faithfully emulate teacher behavior requires systematic testing beyond accuracy. MetaCompress offers a scalable, oracle‑free solution that can be integrated into KD pipelines, guiding future research toward loss functions and training strategies that explicitly preserve behavioral fidelity (e.g., weighted KL‑divergence, internal representation alignment).

