NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification
We introduce NeuralSurv, the first deep survival model to incorporate Bayesian uncertainty quantification. Our non-parametric, architecture-agnostic framework captures time-varying covariate-risk relationships in continuous time via a novel two-stage data-augmentation scheme, for which we establish theoretical guarantees. For efficient posterior inference, we introduce a mean-field variational algorithm with coordinate-ascent updates that scale linearly in model size. By locally linearizing the Bayesian neural network, we obtain full conjugacy and derive all coordinate updates in closed form. In experiments, NeuralSurv delivers superior calibration compared to state-of-the-art deep survival models, while matching or exceeding their discriminative performance across both synthetic benchmarks and real-world datasets. Our results demonstrate the value of Bayesian principles in data-scarce regimes: they improve calibration and yield robust uncertainty estimates for the survival function.
💡 Research Summary
NeuralSurv is presented as the first deep survival analysis framework that integrates full Bayesian uncertainty quantification. The model expresses the hazard function λ(t|x) as the product of a baseline hazard λ₀(t) and a sigmoidal modulation σ(g(t,x;θ)), where g is a Bayesian neural network (BNN). The baseline hazard follows a Weibull‑type form with a Gamma prior on its scale parameter, providing a non‑parametric yet interpretable prior “best‑guess” of the risk trajectory.
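The hazard factorization above is easy to state in code. The sketch below is ours, not the authors' implementation: `g_toy` stands in for the BNN, and the Weibull-type baseline's shape and scale names (`k`, `beta`) are illustrative placeholders for the parameters that receive the Gamma prior.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, the modulation applied to the BNN output."""
    return 1.0 / (1.0 + np.exp(-z))

def weibull_baseline(t, beta=1.0, k=1.5):
    """Illustrative Weibull-type baseline hazard lambda_0(t) = beta*k*t^(k-1).
    In the paper a Gamma prior is placed on the scale parameter."""
    return beta * k * t ** (k - 1)

def hazard(t, x, g, lam0):
    """Sigmoid-modulated hazard: lambda(t|x) = lambda_0(t) * sigmoid(g(t, x))."""
    return lam0(t) * sigmoid(g(t, x))

# Toy stand-in for the BNN g(t, x; theta): a fixed linear function.
g_toy = lambda t, x: 0.5 * t - 0.2 * np.sum(x)

rate = hazard(2.0, np.array([1.0, 0.5]), g_toy, weibull_baseline)
```

Because the sigmoid lies in (0, 1), the modulated hazard is always bounded above by the baseline hazard, which is what makes the Poisson-thinning view of the second augmentation stage possible.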
The main computational obstacle stems from two sources: the sigmoid non-linearity and the continuous-time integral required by the likelihood. NeuralSurv overcomes both via a novel two-stage data-augmentation scheme. First, Polya-Gamma auxiliary variables ω transform the sigmoid into an exponential-family representation, rendering the likelihood conditionally Gaussian in the network output. Second, a marked Poisson process Ψ is introduced so that the integral over time can be expressed as an expectation over point-process paths, leveraging Campbell's theorem. This yields an exact, discretization-free likelihood that is tractable in the augmented space.
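For the reader's reference, the two augmentations rest on two standard identities (stated here in our notation, not quoted from the paper): the Polya-Gamma integral representation of the sigmoid, and Campbell's theorem for Poisson processes.

```latex
% Polya-Gamma identity (Polson, Scott & Windle, 2013): for omega ~ PG(1, 0),
\sigma(z) \;=\; \frac{e^{z}}{1+e^{z}}
         \;=\; \tfrac{1}{2}\, e^{z/2}
               \int_0^{\infty} e^{-\omega z^{2}/2}\,
               p_{\mathrm{PG}}(\omega \mid 1, 0)\,\mathrm{d}\omega ,
% so that, conditional on omega, the sigmoid term is Gaussian (quadratic) in z.

% Campbell's theorem: for a Poisson process Psi with intensity measure Lambda,
\mathbb{E}\Big[\textstyle\sum_{\tau \in \Psi} f(\tau)\Big]
  \;=\; \int f(\tau)\,\Lambda(\mathrm{d}\tau),
% which lets the time integral in the likelihood be traded for an
% expectation over point-process paths.
```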
Posterior inference is performed via a mean‑field variational approximation that factorizes over the baseline parameters ϕ, the BNN weights θ, the Polya‑Gamma variables ω, and the Poisson process paths Ψ. For Ψ, a reference measure on path space is defined, and the variational distribution is expressed through its Radon‑Nikodym derivative. The resulting evidence lower bound (ELBO) admits closed‑form coordinate‑ascent updates: the BNN is locally linearized, producing full conjugacy between Gaussian weights and Polya‑Gamma variables, while the Gamma‑distributed baseline parameters update analytically. All updates scale linearly with the number of network parameters, making the algorithm suitable for large‑scale deep models.
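The local-linearization step can be sketched as a first-order Taylor expansion of the network output around the current variational mean, after which the Gaussian/Polya-Gamma conjugacy gives a closed-form Gaussian weight update. The code below is a minimal illustration under our own assumptions (finite-difference Jacobian, a toy `tanh` network, placeholder values for the Polya-Gamma expectations); it is not the paper's algorithm verbatim.

```python
import numpy as np

def linearize(g, theta_bar, inputs, eps=1e-6):
    """Return (g(theta_bar), Jacobian dg/dtheta) via finite differences,
    so that g(theta) ~ g(theta_bar) + J @ (theta - theta_bar)."""
    g0 = g(theta_bar, inputs)
    jac = np.zeros((g0.size, theta_bar.size))
    for j in range(theta_bar.size):
        pert = theta_bar.copy()
        pert[j] += eps
        jac[:, j] = (g(pert, inputs) - g0) / eps
    return g0, jac

# Toy "network": g(theta, X) = tanh(X @ theta)
g_toy = lambda theta, X: np.tanh(X @ theta)
X = np.array([[1.0, 0.5], [0.2, -1.0]])
theta_bar = np.array([0.3, -0.1])
g0, J = linearize(g_toy, theta_bar, X)

# Linearized prediction at a nearby theta (valid locally):
theta_new = theta_bar + np.array([0.01, -0.02])
g_lin = g0 + J @ (theta_new - theta_bar)

# With the network linear in theta, the conjugate Gaussian update takes the
# usual Polya-Gamma logistic-regression form (omega, kappa are placeholders
# for the variational expectations, not values from the paper):
prior_prec = 1.0
omega = np.array([0.25, 0.25])
kappa = np.array([0.5, -0.5])
Sigma = np.linalg.inv(J.T @ np.diag(omega) @ J + prior_prec * np.eye(2))
mu = Sigma @ (J.T @ kappa)
```

Note that each coordinate update touches the Jacobian only through matrix-vector products, which is consistent with the linear-in-model-size scaling claimed for the algorithm.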
Extensive experiments on synthetic benchmarks and real‑world datasets (including cancer registries, intensive‑care mortality, and engineering failure data) demonstrate that NeuralSurv achieves calibration metrics (Brier score, Integrated Brier Score) substantially better than state‑of‑the‑art deep survival models such as DeepSurv, DeepHit, and Cox‑PH neural networks. Discriminative performance measured by the concordance index is comparable or superior. Notably, in data‑scarce regimes the Bayesian priors act as regularizers, preventing over‑fitting and providing well‑calibrated credible intervals for individual survival curves. The framework is architecture‑agnostic, allowing integration with CNNs, RNNs, or Transformers as the underlying feature extractor.
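As a reminder of what the calibration metrics measure, here is a simplified Brier score sketch. For clarity it ignores censoring (real survival benchmarks use inverse-probability-of-censoring weighting) and approximates the Integrated Brier Score by a plain average over a uniform grid of horizons; both simplifications are ours.

```python
import numpy as np

def brier_score(surv_pred, event_times, t):
    """Brier score at horizon t, ignoring censoring for simplicity:
    mean squared gap between the predicted survival S(t|x_i) and the
    indicator of actually surviving past t."""
    alive = (event_times > t).astype(float)
    return float(np.mean((alive - surv_pred) ** 2))

def integrated_brier(surv_fn, event_times, grid):
    """Crude Integrated Brier Score: average over a uniform time grid.
    `surv_fn(t)` returns the predicted survival probabilities at t."""
    scores = [brier_score(surv_fn(t), event_times, t) for t in grid]
    return float(np.mean(scores))
```

Lower is better for both; a perfectly calibrated and perfectly sharp model attains zero, which is why these scores reward the well-calibrated credible intervals the summary highlights.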
In summary, NeuralSurv combines (1) a sigmoidal hazard formulation, (2) Polya‑Gamma and marked Poisson data augmentation, and (3) a scalable mean‑field variational inference scheme to deliver deep survival analysis with principled uncertainty estimates. The authors release the full code under an MIT license, facilitating reproducibility and future extensions.