FreDN: Spectral Disentanglement for Time Series Forecasting via Learnable Frequency Decomposition
Time series forecasting is essential in a wide range of real-world applications. Recently, frequency-domain methods have attracted increasing interest for their ability to capture global dependencies. However, when applied to non-stationary time series, these methods face two obstacles: $\textit{spectral entanglement}$ and the computational burden of complex-valued learning. Spectral entanglement refers to the overlap of trends, periodicities, and noise across the spectrum, caused by $\textit{spectral leakage}$ and non-stationarity, and existing decomposition techniques are ill-suited to resolving it. To address this, we propose the Frequency Decomposition Network (FreDN), which introduces a learnable Frequency Disentangler module to separate trend and periodic components directly in the frequency domain. Furthermore, we propose a theoretically supported ReIm Block that reduces the complexity of complex-valued operations while maintaining performance. We also re-examine the frequency-domain loss function and provide new theoretical insights into its effectiveness. Extensive experiments on seven long-term forecasting benchmarks demonstrate that FreDN outperforms state-of-the-art methods by up to 10%. Compared with standard complex-valued architectures, our real-imaginary shared-parameter design reduces the parameter count and computational cost by at least 50%.
💡 Research Summary
The paper introduces FreDN, a novel architecture for long‑term time‑series forecasting that directly tackles two persistent challenges of frequency‑domain models: spectral entanglement and the computational overhead of complex‑valued operations. Spectral entanglement arises because a finite look‑back window causes spectral leakage, spreading the energy of trends, seasonality, and noise across many frequencies. Moreover, non‑stationary trends are not well represented by a small set of Fourier bases, leading to overlap of trend and periodic components throughout the spectrum.
FreDN addresses these issues with two key modules. First, the learnable Frequency Disentangler operates on the full FFT of the embedded input. A sigmoid‑activated mask M∈ℝ^{L_freq×d} is learned per frequency and channel, separating the spectrum into a trend part (˜X_trend = ˜X ⊙ σ(M)) and a seasonal part (˜X_season = ˜X ⊙ (1−σ(M))). The mask is initialized according to the theoretical spectral decay of Sobolev‑smooth functions (e.g., w(k) = −log(1+|k|)), assigning higher weights to the low frequencies where smooth trends concentrate their energy. The trend component is transformed back to the time domain via the inverse FFT and processed by a standard residual MLP (TimeMLP) that captures slowly varying patterns.
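The masking step described above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation: the shapes, the use of `rfft`, and the broadcast of the −log(1+|k|) prior across channels are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical shapes: look-back length L, embedding dimension d.
L, d = 96, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((L, d))                # embedded input (time domain)

X = np.fft.rfft(x, axis=0)                     # spectrum: (L//2 + 1, d), complex
L_freq = X.shape[0]

# Mask logits initialized with the decay prior w(k) = -log(1 + |k|), so the
# sigmoid gives larger weights at low frequencies where smooth trends live.
k = np.arange(L_freq)[:, None]                 # frequency indices, (L_freq, 1)
M = np.repeat(-np.log1p(k).astype(float), d, axis=1)   # (L_freq, d) logits

X_trend = X * sigmoid(M)                       # low-frequency-weighted part
X_season = X * (1.0 - sigmoid(M))              # complementary periodic part

x_trend = np.fft.irfft(X_trend, n=L, axis=0)   # back to time domain for TimeMLP
```

Note that the split is lossless by construction: σ(M) and 1−σ(M) sum to one at every frequency, so the two spectra always reconstruct the original exactly.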
Second, the ReIm Block replaces conventional complex‑valued linear layers. Instead of performing (W_r + jW_i)(˜X_r + j˜X_i), the model shares a real‑valued weight matrix across the real and imaginary branches: ˜Y_season = MLP(˜X_r) + j·MLP(˜X_i). The authors prove (Theorem 2) that as long as the input contains at least two frequency components with linearly independent phases, this real‑only projection can represent any complex number, preserving the essential interaction between magnitude and phase while cutting parameters and FLOPs by more than 50%.
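A minimal sketch of the shared-weight idea, with a single linear layer standing in for the MLP (the layer width and the parameter-count comparison are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
L_freq, d = 49, 8
X = rng.standard_normal((L_freq, d)) + 1j * rng.standard_normal((L_freq, d))

# Hypothetical shared real-valued weights (one linear layer as a stand-in MLP).
W = rng.standard_normal((d, d))
b = rng.standard_normal(d)

def real_mlp(z):
    # A purely real map applied separately to each branch.
    return z @ W + b

# ReIm Block: the same real weights process the real and imaginary parts.
Y = real_mlp(X.real) + 1j * real_mlp(X.imag)

# A full complex layer (W_r + jW_i) would need two weight matrices and two
# bias vectors; sharing one real set halves the parameter count.
params_shared = W.size + b.size
params_complex = 2 * (W.size + b.size)
```

The saving compounds at the FLOP level as well: a complex matrix product expands into four real products plus additions, whereas the shared design needs only two.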
A theoretical analysis of loss functions shows that frequency‑domain MAE yields structured, uniform gradients across frequencies, mitigating the adverse effects of spectral leakage compared with time‑domain MSE. This insight explains why the proposed loss improves robustness to high‑frequency noise.
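The gradient argument above can be made concrete with a small NumPy experiment. This is a hedged illustration of the general MAE-vs-MSE gradient behavior, not the paper's derivation; the signal, noise level, and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
H = 48
t = np.arange(H)
y_true = np.sin(2 * np.pi * t / 12.0)            # a clean periodic target
y_pred = y_true + 0.1 * rng.standard_normal(H)   # a noisy forecast

E = np.fft.rfft(y_pred - y_true)                 # per-frequency complex error
nz = np.abs(E) > 1e-12                           # guard against exact zeros

# Frequency-domain MAE: the gradient magnitude w.r.t. each bin's error is 1
# wherever the error is non-zero, so every frequency is weighted equally.
grad_mae = np.abs(E[nz]) / np.abs(E[nz])

# A squared-error loss instead scales the gradient with |e_k|, so bins
# inflated by spectral leakage dominate the update.
grad_mse = 2 * np.abs(E[nz])
```

In other words, MAE equalizes the per-frequency gradient scale, which is one way to read the claimed robustness to leakage-induced high-frequency noise.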
Extensive experiments on seven benchmark datasets (ETTh1, ETTh2, ECL, Traffic, etc.) demonstrate that FreDN consistently outperforms state‑of‑the‑art models such as FreTS, FITS, and DLinear, achieving up to 10% lower MSE/MAE. Ablation studies confirm that removing the Frequency Disentangler or reverting the ReIm Block to a full complex layer degrades performance and dramatically increases computational cost.
In summary, FreDN makes three major contributions: (1) it formalizes spectral entanglement and offers a learnable frequency‑wise disentanglement mechanism grounded in Sobolev smoothness theory; (2) it introduces the ReIm Block, a real‑valued shared‑weight design that eliminates complex arithmetic while retaining expressive power; (3) it provides a gradient‑based justification for frequency‑domain MAE loss. Together, these advances enable efficient, accurate forecasting on non‑stationary time series and open a practical pathway for broader adoption of frequency‑domain deep learning.