Neural Network Machine Regression (NNMR): A Deep Learning Framework for Uncovering High-order Synergistic Effects

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

We propose a new neural network framework, termed Neural Network Machine Regression (NNMR), which integrates trainable input gating and adaptive depth regularization to jointly perform feature selection and function estimation in an end-to-end manner. By penalizing both gating parameters and redundant layers, NNMR yields sparse and interpretable architectures while capturing complex nonlinear relationships driven by high-order synergistic effects. We further develop a post-selection inference procedure based on split-sample, permutation-based hypothesis testing, enabling valid inference without restrictive parametric assumptions. Compared with existing methods, including Bayesian kernel machine regression and widely used post hoc attribution techniques, NNMR scales efficiently to high-dimensional feature spaces while rigorously controlling type I error. Simulation studies demonstrate its superior selection accuracy and inference reliability. Finally, an empirical application reveals sparse, biologically meaningful food group predictors associated with somatic growth among adolescents living in Mexico City.


💡 Research Summary

The paper introduces Neural Network Machine Regression (NNMR), a deep learning framework designed to simultaneously perform variable selection and non‑linear function estimation in high‑dimensional settings where high‑order synergistic effects are of interest. The core of NNMR is a trainable gating vector α applied to the input layer; each predictor is multiplied by its gate value, and an ℓ₁ penalty (λ₁‖α‖₁) forces many gates to zero, thereby selecting a sparse subset of variables during model training rather than as a preprocessing step.
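The gating mechanism can be illustrated with a minimal NumPy sketch. The function names and values below are purely illustrative assumptions, not code from the paper: each predictor is multiplied by its trainable gate αⱼ, and the ℓ₁ penalty λ₁‖α‖₁ rewards driving gates to exactly zero.

```python
import numpy as np

# Hypothetical sketch of NNMR's trainable input gating (names are
# illustrative, not taken from the authors' implementation). Each
# feature x_j is scaled by its gate alpha_j; features with alpha_j = 0
# are effectively deselected.

def gate_inputs(x, alpha):
    """Elementwise gating of a feature batch by the gate vector alpha."""
    return x * alpha

def l1_gate_penalty(alpha, lam1):
    """Sparsity penalty lambda1 * ||alpha||_1 on the gating vector."""
    return lam1 * np.sum(np.abs(alpha))

x = np.array([[1.0, 2.0, 3.0]])
alpha = np.array([0.8, 0.0, 0.5])   # the second feature is gated out
print(gate_inputs(x, alpha))        # [[0.8, 0.0, 1.5]]
print(l1_gate_penalty(alpha, 0.1))  # ~ 0.1 * (0.8 + 0.0 + 0.5) = 0.13
```

In training, α would be updated jointly with the network weights, so selection happens during fitting rather than as a preprocessing step, exactly as the summary describes.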

In addition to input gating, NNMR incorporates adaptive depth regularization. For each hidden layer l, a penalty term λ₂(‖Wₗ−I‖₁+|cₗ|) encourages the weight matrix Wₗ to collapse toward the identity matrix and the bias cₗ toward zero. When a layer’s contribution becomes negligible, it can be pruned after training, yielding a compact network whose depth automatically matches the complexity of the selected feature set. The overall objective is

\[
\min_{\theta,\,\alpha}\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(\alpha \odot x_i;\,\theta)\bigr)^2 \;+\; \lambda_1\|\alpha\|_1 \;+\; \lambda_2\sum_{l}\bigl(\|W_l - I\|_1 + |c_l|\bigr).
\]
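The depth-regularization term can likewise be sketched in a few lines. This is an illustrative assumption of how the per-layer penalty λ₂(‖Wₗ−I‖₁+|cₗ|) might be computed, not the authors' code: the penalty is zero exactly when a layer reduces to the identity map, which is the condition under which it can be pruned after training.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation) of the adaptive
# depth penalty lambda2 * (||W - I||_1 + |c|) for one hidden layer. A
# layer whose weights equal the identity and whose bias is zero
# contributes nothing to the network and incurs zero penalty, so it is
# a candidate for pruning.

def depth_penalty(W, c, lam2):
    """Penalty pushing W toward the identity and the bias c toward 0."""
    d = W.shape[0]
    return lam2 * (np.sum(np.abs(W - np.eye(d))) + np.sum(np.abs(c)))

W_identity = np.eye(3)
c_zero = np.zeros(3)
print(depth_penalty(W_identity, c_zero, 0.5))  # 0.0 -> layer is prunable

W_active = np.eye(3) + 0.2   # every entry perturbed by 0.2
print(depth_penalty(W_active, c_zero, 0.5))    # ~ 0.5 * 9 * 0.2 = 0.9
```

Summing the gated-input loss, the ℓ₁ gate penalty, and this per-layer term over all hidden layers recovers the objective displayed above.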

