Two-Stage Data Synthesization: A Statistics-Driven Restricted Trade-off between Privacy and Prediction
Synthetic data have gained increasing attention across various domains, with a growing emphasis on their performance in downstream prediction tasks. However, most existing synthesis strategies focus on preserving statistical information. Although some studies address prediction performance guarantees, their single-stage synthesis designs make it challenging to balance privacy requirements, which necessitate significant perturbations, against prediction performance, which is sensitive to such perturbations. We propose a two-stage synthesis strategy. In the first stage, we introduce a synthesis-then-hybrid strategy: a synthesis operation generates pure synthetic data, and a hybrid operation then fuses the synthetic data with the original data. In the second stage, we present a kernel ridge regression (KRR)-based synthesis strategy, in which a KRR model is first trained on the original data and then used to generate synthetic outputs from the synthetic inputs produced in the first stage. By leveraging the theoretical strengths of KRR and the covariate distribution retention achieved in the first stage, the proposed two-stage synthesis strategy enables a statistics-driven, restricted privacy–prediction trade-off and guarantees optimal prediction performance. We validate the approach both theoretically and numerically, demonstrating that the resulting privacy–prediction trade-off is statistics-driven and restricted. Additionally, we showcase its generalizability through applications to a marketing problem and five real-world datasets.
💡 Research Summary
The paper addresses the growing need for synthetic data that simultaneously protects privacy and preserves downstream prediction performance. Existing synthetic data generation (SDG) methods largely focus on statistical fidelity or provide only post‑hoc guarantees on predictive utility, making it difficult to balance the strong perturbations required for privacy with the sensitivity of prediction models to such noise. To overcome this limitation, the authors propose a novel two‑stage synthesis framework.
In the first stage, a “synthesis‑then‑hybrid” approach is introduced. Pure synthetic data are first generated using any conventional method (e.g., GANs, diffusion models). Then, a hybrid operation blends the synthetic records with the original dataset using a mixing coefficient α. This hybridization preserves the covariance structure of the original data while allowing explicit control over the privacy‑utility trade‑off: a smaller α yields higher privacy (measured by a location‑based interval disclosure metric, LID) but may increase distributional shift, whereas a larger α retains more of the original distribution at the cost of reduced privacy.
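The hybrid operation can be sketched in a few lines. The paper's exact blending rule is not spelled out in this summary, so the convex-combination form below (and the function name `hybrid`) is an illustrative assumption; only the role of the mixing coefficient α, where a larger α retains more of the original distribution and a smaller α yields stronger perturbation and hence higher privacy, is taken from the text.

```python
import numpy as np

def hybrid(X_orig, X_syn, alpha):
    """Blend original and pure-synthetic records (illustrative sketch).

    Assumes a record-wise convex combination
        X_hyb = alpha * X_orig + (1 - alpha) * X_syn,
    so a larger alpha keeps records closer to their originals
    (less privacy) and a smaller alpha perturbs them further
    (more privacy).
    """
    X_orig = np.asarray(X_orig, dtype=float)
    X_syn = np.asarray(X_syn, dtype=float)
    if X_orig.shape != X_syn.shape:
        raise ValueError("original and synthetic data must have equal shape")
    return alpha * X_orig + (1.0 - alpha) * X_syn

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))     # stand-in for original covariates
X_s = rng.normal(size=(100, 3))   # stand-in for pure synthetic covariates
X_h = hybrid(X, X_s, alpha=0.3)   # privacy-leaning blend
```

In this toy setting the two extremes recover the pure cases: `alpha=1.0` returns the original data and `alpha=0.0` returns the pure synthetic data, with intermediate values trading one against the other.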
The second stage leverages kernel ridge regression (KRR). A KRR model is trained on the original data to approximate the true regression function f★. Using the synthetic inputs produced in stage one, the trained KRR model generates synthetic outputs, forming the final synthetic dataset. Because KRR admits a closed-form solution and includes a regularization parameter λ, it can accurately reconstruct the input-output relationship while controlling overfitting.
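A minimal sketch of this second stage, assuming a Gaussian (RBF) kernel and the standard closed-form dual solution of KRR; the kernel choice, the bandwidth `gamma`, and the n-scaled regularization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    # Closed-form dual coefficients: coef = (K + lam * n * I)^{-1} y,
    # where lam plays the role of the regularization parameter lambda.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(X_train, coef, X_new, gamma=1.0):
    # Predictions are kernel-weighted sums over the training inputs.
    return rbf_kernel(X_new, X_train, gamma) @ coef

# Train on the original data, then label the stage-one synthetic inputs.
rng = np.random.default_rng(1)
X = rng.uniform(0, 3, size=(60, 1))                    # original inputs
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=60)   # original outputs
coef = krr_fit(X, y)
X_syn = rng.uniform(0, 3, size=(60, 1))   # stage-one synthetic inputs
y_syn = krr_predict(X, coef, X_syn)       # stage-two synthetic outputs
```

The key point the sketch illustrates is that the synthetic outputs come from a model fitted on the original data, so the final dataset (X_syn, y_syn) inherits the learned input-output relationship rather than copying any original record.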
The authors provide a rigorous theoretical analysis. They define LID as the proportion of records whose perturbed values lie within a predefined interval of the original values, establishing it as a concrete privacy risk measure. They then derive upper bounds on the prediction error ‖f★−f̂‖_ρ in terms of α and λ, showing that when both are kept within modest ranges the synthetic data yield predictors whose performance is arbitrarily close to that obtained on the real data. The analysis also demonstrates that the covariance retention from the hybrid step ensures the KRR’s generalization error remains low, yielding a “statistics‑driven” and “restricted” privacy‑prediction trade‑off.
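The LID measure described above can be computed directly from its definition. The summary does not fix how the interval check is aggregated across coordinates, so the rule below (a record counts as disclosed only if every coordinate of its synthetic value lies within ±width of the original) is one plausible reading, labeled as an assumption.

```python
import numpy as np

def lid(X_orig, X_syn, width):
    """Location-based interval disclosure (LID): the proportion of
    records whose synthetic values fall within a predefined interval
    of the original values. Requiring ALL coordinates of a record to
    lie inside the interval is an illustrative assumption."""
    inside = np.abs(np.asarray(X_syn) - np.asarray(X_orig)) <= width
    return inside.all(axis=1).mean()

X_orig = np.array([[0.0, 0.0], [10.0, 10.0]])
X_syn = np.array([[0.1, 0.05], [20.0, 20.0]])
risk = lid(X_orig, X_syn, width=0.2)  # only the first record is disclosed
```

A lower LID means fewer synthetic records sit close enough to their originals to be re-identified, which is why the summary reports small LID values as a privacy gain.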
Empirical validation is carried out on a marketing price‑sales prediction task and five publicly available real‑world datasets (including UCI and Kaggle sources). Across all experiments, the two‑stage method achieves substantially lower LID (often below 5 %) while maintaining high predictive accuracy (R² scores between 0.92 and 0.97, MSE reductions of 20‑30 % compared with baseline GAN or diffusion‑based synthetic data). The approach also proves robust under distribution mismatch and when external data are unavailable, highlighting its practical relevance for data marketplaces and ML‑as‑a‑service platforms.
Beyond the technical contributions, the paper discusses managerial implications: data providers can tune α and λ to meet specific regulatory or business privacy requirements without sacrificing model performance, and the modular nature of the first stage allows the framework to be adapted to various data modalities (images, text, time series). The authors conclude that their two‑stage synthesis strategy offers a principled, theoretically grounded solution to the longstanding privacy‑prediction dilemma, and they outline future directions such as optimal kernel selection, extension to classification and clustering tasks, and integration with differential privacy mechanisms.