Neural network for excess noise estimation in continuous-variable quantum key distribution under composable finite-size security
Parameter estimation is a critical step in continuous-variable quantum key distribution (CV-QKD), especially in the finite-size regime, where worst-case confidence intervals can significantly reduce the achievable secret-key rate. We provide a finite-size security analysis demonstrating that neural networks can be reliably employed for parameter estimation in CV-QKD, with quantifiable failure probabilities $ε_{PE}$ that carry an operational interpretation and composable security guarantees. Using a protocol that is operationally equivalent to standard approaches, our method produces significantly tighter confidence intervals, leading to a quantifiable increase in the secret-key rate even under collective Gaussian attacks. These results open new perspectives for integrating modern machine learning techniques into quantum cryptographic protocols, particularly in practical, resource-constrained scenarios.
💡 Research Summary
This paper addresses a critical bottleneck in continuous‑variable quantum key distribution (CV‑QKD): the finite‑size estimation of the channel’s excess noise ξ, which directly limits the achievable secret‑key rate. While maximum‑likelihood estimation (MLE) has been the standard method because its confidence intervals can be analytically linked to the composable security parameter ε_PE, MLE yields overly conservative bounds, especially at long distances where excess noise dominates.
The authors propose a neural‑network‑based estimator for ξ and, crucially, derive a worst‑case confidence interval for the network’s output using the delta‑method. They model the relationship between the measured data (Alice’s modulation x_i and Bob’s measurement y_i) as y_i = t x_i + z_i, with t = √T and z_i a Gaussian noise of variance σ² = μ + t² ξ. The neural network learns a mapping f(X,θ) ≈ y, where θ are the trainable weights.

After training on a representative dataset, the parameter vector θ̂ converges to the optimal θ*; a first‑order Taylor expansion around θ* together with the delta‑method yields an analytic expression for the variance of the prediction error ε₀. By inflating the point estimate ξ̂ by the margin z_{ε_PE/2} · σ_{ξ̂}, the authors construct a high‑confidence upper bound ξ_max that satisfies P(ξ ≤ ξ_max) ≥ 1 − ε_PE/2. This bound is then inserted into the covariance matrix Γ_{ε_PE} used in the composable security analysis, ensuring that the overall protocol failure probability ε = ε_PE + ε_cor + ε_sec respects the required security definitions.
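As a rough illustration of how such a worst‑case bound is assembled, the sketch below simulates the channel model y_i = t x_i + z_i and inflates a point estimate of ξ by z_{ε_PE/2} · σ_{ξ̂}. Simple least‑squares/moment estimators stand in for the neural network’s prediction, and the simplified delta‑method variance (keeping only the σ² term) plus all numeric values are assumptions for illustration, not the paper’s implementation:

```python
import numpy as np
from statistics import NormalDist

# Simulate the channel model y_i = t*x_i + z_i, with t = sqrt(T)
# and Var(z_i) = mu + t^2 * xi (symbols as defined above).
rng = np.random.default_rng(0)
N = 100_000                        # block size (illustrative)
T, xi, mu = 0.5, 0.02, 1.0         # transmissivity, excess noise, noise floor (assumed units)
t = np.sqrt(T)
x = rng.normal(0.0, np.sqrt(10.0), N)   # Alice's Gaussian modulation, variance 10 (assumed)
y = t * x + rng.normal(0.0, np.sqrt(mu + T * xi), N)

# Point estimates; these simple estimators stand in for the network's f(X, theta).
t_hat = np.dot(x, y) / np.dot(x, x)
sigma2_hat = np.mean((y - t_hat * x) ** 2)
xi_hat = (sigma2_hat - mu) / t_hat**2

# Worst-case upper bound xi_max = xi_hat + z_{eps_PE/2} * sigma_xi_hat.
# Simplified delta-method spread: only the variance of sigma2_hat is propagated.
eps_PE = 1e-10
z = NormalDist().inv_cdf(1.0 - eps_PE / 2.0)   # Gaussian quantile for the eps_PE/2 tail
sigma_xi_hat = sigma2_hat * np.sqrt(2.0 / N) / t_hat**2
xi_max = xi_hat + z * sigma_xi_hat

print(f"xi_hat = {xi_hat:.4f}, xi_max = {xi_max:.4f}")
```

The bound ξ_max then replaces ξ in the worst‑case covariance matrix Γ_{ε_PE}; a tighter σ_{ξ̂} from the neural network is exactly what shrinks this inflation and raises the key rate.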
A detailed finite‑size secret‑key formula is employed:

k_ε = (n p_EC / N) [ β I(x:y) − χ_{ε_PE}(y:E) − Δ_AEP/√n + Θ/n ],

where n of the N exchanged symbols are devoted to key generation, p_EC is the error‑correction success probability, β the reconciliation efficiency, I(x:y) the Alice–Bob mutual information, χ_{ε_PE}(y:E) the Holevo bound evaluated on the worst‑case covariance matrix Γ_{ε_PE}, and Δ_AEP, Θ finite‑size correction terms.
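A sketch of how such a composable finite‑size rate is typically evaluated is given below; the structure follows the standard composable framework, and every numeric value is an assumed placeholder, not a result from the paper. The point is that the worst‑case Holevo term, evaluated at ξ_max, directly drives the achievable rate:

```python
import math

# All numbers are illustrative assumptions, not values from the paper.
N = 1_000_000          # total exchanged symbols
n = 500_000            # symbols kept for key generation
p_EC = 0.95            # error-correction success probability
beta_I = 0.475         # beta * I(x:y): reconciliation efficiency x mutual information (bits/use)
chi_wc = 0.30          # chi_{eps_PE}(y:E): Holevo bound at the worst-case xi_max (bits/use)
Delta_AEP = 30.0       # AEP finite-size correction (assumed magnitude)
Theta = -10.0          # residual finite-size term (assumed magnitude)

# Composable finite-size secret-key rate (bits per channel use)
k_eps = (n * p_EC / N) * (beta_I - chi_wc - Delta_AEP / math.sqrt(n) + Theta / n)
print(f"secret-key rate ~ {k_eps:.4f} bits per channel use")
```

A tighter bound ξ_max lowers chi_wc, which enters the rate linearly, so the improvement from the neural‑network estimator translates one‑for‑one into extra key.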