Refinement Provenance Inference: Detecting LLM-Refined Training Prompts from Model Behavior


📝 Original Info

  • Title: Refinement Provenance Inference: Detecting LLM-Refined Training Prompts from Model Behavior
  • ArXiv ID: 2601.01966
  • Date: 2026-01-05
  • Authors: Bo Yin, Qi Li, Runpeng Yu, Xinchao Wang

📝 Abstract

Instruction tuning increasingly relies on LLM-based prompt refinement, where prompts in the training corpus are selectively rewritten by an external refiner to improve clarity and instruction alignment. This motivates an instance-level audit problem: for a fine-tuned model and a training prompt-response pair, can we infer whether the model was trained on the original prompt or its LLM-refined version within a mixed corpus? This matters for dataset governance and dispute resolution when training data are contested. However, it is nontrivial in practice: refined and raw instances are interleaved in the training corpus with unknown, source-dependent mixture ratios, making it harder to develop provenance methods that generalize across models and training setups. In this paper, we formalize this audit task as Refinement Provenance Inference (RPI) and show that prompt refinement yields stable, detectable shifts in teacher-forced token distributions, even when semantic differences are not obvious. Building on this phenomenon, we propose RePro, a logit-based provenance framework that fuses teacher-forced likelihood features with logit-ranking signals. During training, RePro learns a transferable representation via shadow fine-tuning...
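The abstract refers to teacher-forced likelihood and logit-ranking features without spelling them out. As a rough illustration of what such per-instance features could look like (not the paper's actual RePro pipeline), the sketch below scores a (prompt, response) pair under a causal LM with teacher forcing and summarizes the gold-token log-probabilities and logit ranks over the response. The model name, feature set, and tokenization handling are assumptions for demonstration only.

```python
# Hypothetical sketch: teacher-forced likelihood and logit-rank features for a
# (prompt, response) pair. Model choice and feature set are illustrative
# assumptions, not the RePro implementation from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; in practice the audited fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def teacher_forced_features(prompt: str, response: str) -> dict:
    # Concatenate prompt and response; only the response tokens are scored.
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)

    # Shift: logits at position t predict the token at position t + 1.
    shift_logits = logits[0, :-1, :]
    shift_labels = full_ids[0, 1:]
    resp_start = prompt_ids.shape[1] - 1  # first predicted response token

    log_probs = torch.log_softmax(shift_logits, dim=-1)
    idx = torch.arange(shift_labels.numel())
    gold_logps = log_probs[idx, shift_labels]

    # Rank of the gold token among all vocabulary logits (0 = top-1).
    gold_logits = shift_logits.gather(1, shift_labels.unsqueeze(1))
    ranks = (shift_logits > gold_logits).sum(dim=1)

    resp_logps = gold_logps[resp_start:]
    resp_ranks = ranks[resp_start:].float()
    return {
        "mean_logp": resp_logps.mean().item(),
        "min_logp": resp_logps.min().item(),
        "mean_rank": resp_ranks.mean().item(),
        "top1_frac": (resp_ranks == 0).float().mean().item(),
    }

features = teacher_forced_features(
    "Summarize this article:", " The paper studies provenance inference."
)
print(features)
```

In this reading, a provenance classifier would be trained on such feature vectors extracted from shadow fine-tuned models, exploiting the distributional shifts that refinement induces; the exact features RePro fuses are described in the full paper.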

📄 Full Content

...(The full text has been omitted due to its length. Please see the complete article on the site.)
