AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees
We study the problem of determining whether a piece of text has been authored by a human or by a large language model (LLM). Existing state-of-the-art logits-based detectors use statistics derived from the log-probability of the observed text, evaluated under the distribution of a given source LLM. However, relying solely on log-probabilities can be sub-optimal. In response, we introduce AdaDetectGPT, a novel classifier that adaptively learns a witness function from training data to enhance the performance of logits-based detectors. We provide statistical guarantees on its true positive rate, false positive rate, true negative rate, and false negative rate. Extensive numerical studies show that AdaDetectGPT nearly uniformly improves the state-of-the-art method across various combinations of datasets and LLMs, with improvements of up to 37%. A Python implementation of our method is available at https://github.com/Mamba413/AdaDetectGPT.
💡 Research Summary
AdaDetectGPT addresses the increasingly critical task of distinguishing human‑written text from that generated by large language models (LLMs). While recent detectors such as DetectGPT and Fast‑DetectGPT rely solely on statistics derived from the log‑probabilities of a source LLM, they suffer from two main drawbacks: (1) the raw log‑probability often does not fully capture the distributional gap between human and machine text, and (2) threshold selection for controlling false‑negative rates (FNR) is typically heuristic, lacking rigorous statistical guarantees.
The proposed method introduces a one‑dimensional “witness function” w that non‑linearly transforms token‑level log‑probabilities before they are aggregated into a detection statistic. Formally, for a passage X the statistic is
$$
W(X) \;=\; \frac{1}{n}\sum_{j=1}^{n} w\big(\log p_\theta(x_j \mid x_{<j})\big),
$$

where $p_\theta(\cdot \mid x_{<j})$ is the source LLM's conditional token distribution, $n$ is the number of tokens in $X$, and $X$ is flagged as LLM-generated when $W(X)$ exceeds a threshold chosen to control the error rates above.
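To make the aggregation concrete, here is a minimal sketch of how a witness-transformed detection statistic could be computed. The witness function `w` below (a shifted `tanh`) is purely a hypothetical placeholder, as is the threshold of `0.0`; in AdaDetectGPT, `w` is learned from training data and the threshold is set to satisfy the statistical guarantees.

```python
import numpy as np

def witness_statistic(log_probs, w):
    """Average the witness-transformed token log-probabilities.

    log_probs : sequence of per-token log p(x_j | x_<j) under the source LLM
    w         : one-dimensional witness function (any callable here)
    """
    return float(np.mean([w(lp) for lp in log_probs]))

# Hypothetical witness function (the actual w is learned from data).
w = lambda lp: np.tanh(lp + 3.0)

# Illustrative token log-probabilities for a short passage.
log_probs = [-2.1, -0.4, -3.7, -1.2]

score = witness_statistic(log_probs, w)
# Classify as LLM-generated if the statistic exceeds a threshold
# (here 0.0 for illustration; in practice chosen to control FNR/FPR).
is_llm_generated = score > 0.0
```

Note that with the identity witness `w(lp) = lp`, the statistic reduces to the plain average log-probability used by earlier logits-based detectors, which is what the learned non-linear transform is meant to improve upon.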