DerivKit: stable numerical derivatives bridging Fisher forecasts and MCMC


DerivKit is a Python package for derivative-based statistical inference. It implements stable numerical differentiation and derivative assembly utilities for Fisher-matrix forecasting and higher-order likelihood approximations in scientific applications, supporting scalar- and vector-valued models including black-box or tabulated functions where automatic differentiation is impractical or unavailable. These derivatives are used to construct Fisher forecasts, Fisher bias estimates, and non-Gaussian likelihood expansions based on the Derivative Approximation for Likelihoods (DALI). By extending derivative-based inference beyond the Gaussian approximation, DerivKit forms a practical bridge between fast Fisher forecasts and more computationally intensive sampling-based methods such as Markov chain Monte Carlo (MCMC).
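To make the idea of "stable numerical differentiation" concrete, here is a minimal, self-contained sketch of Richardson-extrapolated central finite differences, one of the core techniques the package builds on. This is a generic illustration, not DerivKit's actual implementation; all function names are ours.

```python
import numpy as np

def central_diff(f, x, h):
    """Three-point central finite-difference estimate of f'(x), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_derivative(f, x, h=0.1, levels=4):
    """Richardson extrapolation: halve the step at each level and combine
    estimates in a triangular table to cancel successive even powers of h."""
    T = np.zeros((levels, levels))
    for i in range(levels):
        T[i, 0] = central_diff(f, x, h / 2**i)
        for k in range(1, i + 1):
            # Each column removes one more O(h^{2k}) error term.
            T[i, k] = (4**k * T[i, k - 1] - T[i - 1, k - 1]) / (4**k - 1)
    return T[levels - 1, levels - 1]

# Example: d/dx sin(x) at x = 1 converges to cos(1) far faster than a
# single central difference with the same smallest step would.
approx = richardson_derivative(np.sin, 1.0)
```

The same cancellation idea underlies Ridders' method, which additionally monitors the extrapolation table to pick the most reliable entry.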


💡 Research Summary

DerivKit is an open‑source Python library that provides a complete workflow for derivative‑based statistical inference, targeting problems where analytic gradients or automatic differentiation are unavailable or unreliable. The package is organized into four modular components.

The core DerivativeKit implements high‑order central finite‑difference stencils (3, 5, 7, and 9 points) supporting derivatives up to fourth order, and improves accuracy and robustness through extrapolation techniques such as Richardson, Ridders, and noise‑robust Gauss‑Richardson schemes. For noisy or stiff functions, the PolynomialFit engine offers local polynomial fitting in two variants: a fixed‑window version, and an adaptive version that automatically builds Chebyshev sampling grids, scales the data, optionally applies ridge regularization, and selects the polynomial degree dynamically from conditioning diagnostics.

CalculusKit builds on these engines to assemble gradients, Jacobians, Hessians, and higher‑order derivative tensors behind a consistent API, handling both scalar‑ and vector‑valued models. ForecastKit consumes the derivative tensors to construct Fisher information matrices, Fisher bias estimates, and non‑Gaussian likelihood expansions using the Derivative Approximation for Likelihoods (DALI). DALI incorporates second‑, third‑, and fourth‑order derivatives of the log‑likelihood, capturing the leading non‑Gaussian features and producing contour approximations that closely match full MCMC results (e.g., from emcee) while remaining computationally cheap.

The library is diagnostics‑driven: it records metadata about sampling geometry, fit quality, and internal consistency, emits warnings when tolerance criteria are violated, and can automatically fall back to more stable methods. Benchmark experiments demonstrate that adaptive polynomial fitting roughly halves the derivative error relative to standard finite‑difference schemes in the presence of Gaussian noise (σ = 0.2).
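The noise-robustness claim is easy to illustrate. The sketch below estimates a derivative by least-squares fitting a polynomial to samples taken on a Chebyshev grid and then reading off the linear coefficient, which is the basic mechanism behind local polynomial fitting; it is a simplified stand-in for DerivKit's PolynomialFit engine, not its API, and all names here are illustrative.

```python
import numpy as np

def polyfit_derivative(f, x0, half_width=0.5, n_samples=25, degree=3):
    """Estimate f'(x0) by fitting a degree-`degree` polynomial to samples of f
    on a Chebyshev grid around x0. Least-squares fitting averages pointwise
    noise that a finite difference would amplify by 1/h."""
    k = np.arange(n_samples)
    # Chebyshev nodes cluster toward the interval ends, which keeps the
    # Vandermonde system of the fit well conditioned.
    nodes = x0 + half_width * np.cos((2 * k + 1) * np.pi / (2 * n_samples))
    y = f(nodes)
    # Fit in the centred variable (x - x0): the linear coefficient is f'(x0).
    coeffs = np.polynomial.polynomial.polyfit(nodes - x0, y, degree)
    return coeffs[1]

# Noisy test function matching the benchmark setup: Gaussian noise, sigma = 0.2.
rng = np.random.default_rng(0)
noisy_exp = lambda x: np.exp(x) + rng.normal(0.0, 0.2, size=np.shape(x))
est = polyfit_derivative(noisy_exp, 1.0)  # true value is e = exp(1)
```

A plain central difference on the same noisy function with a small step would have its error dominated by the noise term of order σ/h; the fit spreads that noise over all samples instead.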
Fisher forecasts generated with DerivKit include extensions to X–Y Fisher analysis (accounting for uncertainties in both inputs and outputs) and bias calculations for perturbed data vectors. DALI‑based non‑Gaussian contours are shown to agree with MCMC posterior samples, validating the higher‑order expansion. Use cases span Fisher forecasting, higher‑order likelihood corrections, derivative estimation from tabulated or pre‑computed models, sensitivity studies for black‑box simulators, and differentiation of parameter‑dependent covariance matrices. DerivKit is released under an OSI‑approved MIT‑style license, installable via pip, and accompanied by extensive unit tests, documentation, and example notebooks hosted on GitHub. Ongoing work applies DerivKit to standard cosmological probes (weak lensing, galaxy clustering, CMB, supernovae), illustrating its potential to bridge the speed of Fisher forecasts with the accuracy of sampling‑based inference.
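For the forecasting side, the standard construction is the Gaussian-likelihood Fisher matrix F = Jᵀ C⁻¹ J, where J is the Jacobian of the model vector with respect to the parameters and C is the data covariance. The following minimal sketch assembles it from numerical derivatives; it is a generic illustration of the calculation ForecastKit-style code performs, not DerivKit's interface.

```python
import numpy as np

def numerical_jacobian(model, theta, h=1e-5):
    """Central-difference Jacobian: J[i, a] = d mu_i / d theta_a."""
    theta = np.asarray(theta, dtype=float)
    mu0 = np.asarray(model(theta))
    J = np.zeros((mu0.size, theta.size))
    for a in range(theta.size):
        step = np.zeros_like(theta)
        step[a] = h
        J[:, a] = (np.asarray(model(theta + step))
                   - np.asarray(model(theta - step))) / (2.0 * h)
    return J

def fisher_matrix(model, theta, cov):
    """Fisher matrix F = J^T C^{-1} J for a Gaussian likelihood with
    parameter-independent covariance C."""
    J = numerical_jacobian(model, theta)
    return J.T @ np.linalg.solve(cov, J)

# Toy model mu(theta) = A * x^n on a grid, parameters theta = (A, n).
x = np.linspace(1.0, 2.0, 50)
model = lambda th: th[0] * x ** th[1]
F = fisher_matrix(model, np.array([1.5, 0.8]), 0.01 * np.eye(50))
```

The inverse of F approximates the parameter covariance at the fiducial point; DALI augments exactly this expansion with third- and fourth-order derivative tensors to recover non-Gaussian contour shapes.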

