Antidistillation Fingerprinting
Model distillation enables efficient emulation of frontier large language models (LLMs), creating a need for robust mechanisms to detect when a third-party student model has trained on a teacher model’s outputs. However, existing fingerprinting techniques that could be used to detect such distillation rely on heuristic perturbations that impose a steep trade-off between generation quality and fingerprinting strength, often requiring significant degradation of utility to ensure the fingerprint is effectively internalized by the student. We introduce antidistillation fingerprinting (ADFP), a principled approach that aligns the fingerprinting objective with the student’s learning dynamics. Building upon the gradient-based framework of antidistillation sampling, ADFP utilizes a proxy model to identify and sample tokens that directly maximize the expected detectability of the fingerprint in the student after fine-tuning, rather than relying on the incidental absorption of the untargeted biases of a more naive watermark. Experiments on GSM8K and OASST1 benchmarks demonstrate that ADFP achieves a significant Pareto improvement over state-of-the-art baselines, yielding stronger detection confidence with minimal impact on utility, even when the student model’s architecture is unknown.
💡 Research Summary
The paper addresses the problem of detecting whether a third‑party “student” language model has been fine‑tuned on the outputs of a proprietary “teacher” model. Existing watermark‑based fingerprinting methods (e.g., red‑and‑green‑list schemes) inject a uniform bias into the teacher’s logits, hoping that the student will inadvertently absorb this bias during distillation. However, achieving a reliable fingerprint typically requires a strong bias (large δ), which degrades the teacher’s generation quality and creates an unfavorable trade‑off between utility and detectability.
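The red-and-green-list baseline described above can be sketched in a few lines. This is a minimal illustration of the standard scheme (partition the vocabulary pseudo-randomly by seeding on the previous token, then add a uniform bias δ to the "green" logits); the function name, γ (green-list fraction), and vocabulary size are illustrative choices, not the paper's code.

```python
import numpy as np

def greenlist_bias(logits, prev_token, delta=2.0, gamma=0.25):
    """Red/green-list watermark: add a uniform bias `delta` to a
    pseudo-random 'green' subset (fraction `gamma`) of the vocabulary,
    seeded by the previous token so the partition is reproducible."""
    vocab_size = logits.shape[0]
    rng = np.random.default_rng(prev_token)            # context-dependent seed
    green = rng.permutation(vocab_size)[: int(gamma * vocab_size)]
    biased = logits.copy()
    biased[green] += delta                             # larger delta -> stronger fingerprint, worse quality
    return biased, green

logits = np.zeros(50257)                               # toy uniform logits
biased, green = greenlist_bias(logits, prev_token=42)
```

The trade-off the paper criticizes is visible directly in `delta`: detection power grows with the bias, but so does the distortion of the teacher's output distribution.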
Antidistillation Fingerprinting (ADFP) proposes a fundamentally different approach: it aligns the fingerprinting objective with the student’s learning dynamics. A proxy model (θₚ), which approximates the unknown student, is used to evaluate, for each possible next token, how much that token would increase the expected green‑list probability after the student’s fine‑tuning. The method computes a token‑wise logit perturbation Δ_ADS(t) that scores each candidate token by the first‑order (single‑gradient‑step) change it would induce in the proxy’s green‑list probability, so that sampling favors tokens which most effectively teach the fingerprint to a student trained on the output.
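The first-order scoring idea above can be illustrated with a toy model. This is a hedged sketch, not the paper's implementation: the linear "proxy" head, context embedding, green list, and scaling constant λ are all made-up stand-ins. The key step is the dot product between (i) the gradient of the proxy's training loss on a candidate token and (ii) the gradient of the detectability objective, which approximates the change in detectability after one SGD fine-tuning step on that token (ΔD ≈ −η ∇L(t)·∇D).

```python
import torch

torch.manual_seed(0)
V, H = 16, 8                        # toy vocabulary and hidden size
proxy = torch.nn.Linear(H, V)       # stand-in "proxy student" output head
ctx = torch.randn(H)                # fixed toy context embedding
green = torch.tensor([1, 3, 5, 7])  # hypothetical green-list token ids

def flat_grad(scalar):
    """Gradient of a scalar w.r.t. all proxy parameters, flattened."""
    g = torch.autograd.grad(scalar, list(proxy.parameters()), retain_graph=True)
    return torch.cat([x.reshape(-1) for x in g])

# Detectability D(theta): total green-list log-probability under the proxy.
logp = torch.log_softmax(proxy(ctx), dim=-1)
d_grad = flat_grad(torch.logsumexp(logp[green], dim=0))

# Score each candidate token t by the first-order change in detectability
# after one fine-tuning step on t: Delta D ~ -eta * grad L(t) . grad D.
scores = torch.empty(V)
for t in range(V):
    logp = torch.log_softmax(proxy(ctx), dim=-1)
    loss_grad = flat_grad(-logp[t])             # grad of NLL of token t
    scores[t] = -torch.dot(loss_grad, d_grad)   # higher -> token teaches the fingerprint

lam = 1.0                                       # illustrative strength knob
perturbed_logits = proxy(ctx).detach() + lam * scores
```

Sampling from `perturbed_logits` instead of the raw logits preferentially emits tokens whose fine-tuning gradients push the proxy toward the fingerprint, which is how the method targets the student's learning dynamics rather than uniformly biasing every green token.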