Improving GPS Precision and Processing Time using Parallel and Reduced-Length Wiener Filters
Increasing GPS precision at low cost has always been a challenge for GPS receiver manufacturers. This paper proposes the use of a Wiener filter to increase precision as a substitute for traditional GPS/INS fusion systems, which require expensive inertial sensors. We first implement three GPS signal processing schemes: a Kalman filter, a neural network and a Wiener filter, and compare them in terms of precision and processing time. To further reduce the processing time of the Wiener filter, we propose parallel and reduced-length implementations. Finally, we calculate the sampling frequency that each Wiener scheme would require in order to obtain the same total processing time as the Kalman filter and the neural network.
💡 Research Summary
The paper addresses the long‑standing challenge of improving GPS positioning accuracy without incurring the high cost of inertial navigation systems (INS). Instead of the conventional GPS/INS fusion, the authors propose using a Wiener filter—a linear optimal estimator based on the power spectral densities of the signal and noise—to enhance precision while keeping hardware requirements modest.
First, three signal‑processing schemes are implemented on the same dataset: a classic Kalman filter, a feed‑forward artificial neural network (ANN), and a Wiener filter. The dataset consists of simulated GPS satellite pseudoranges corrupted by additive Gaussian noise, sampled at a nominal rate of 100 Hz. All three algorithms are executed on an ARM Cortex‑M4 microcontroller (120 MHz) to ensure a fair comparison of both positioning error (root‑mean‑square error, RMSE) and computational latency.
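The summary does not spell out how the Wiener filter is designed. As a rough illustration only, a finite-impulse-response (FIR) Wiener filter can be obtained by estimating the input autocorrelation and the input/desired-signal cross-correlation from data and solving the Wiener-Hopf equations R w = p. The sketch below is a minimal pure-Python version; the data, filter order, and helper names are assumptions for illustration, not the paper's implementation.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

def crosscorr(x, d, lag):
    """Sample cross-correlation p[lag] = E[x[n-lag] * d[n]]."""
    n = len(x) - lag
    return sum(x[i] * d[i + lag] for i in range(n)) / n

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j]
                              for j in range(i + 1, n))) / M[i][i]
    return w

def wiener_taps(x, d, order):
    """Solve the Wiener-Hopf equations R w = p for the FIR taps."""
    R = [[autocorr(x, abs(i - j)) for j in range(order)] for i in range(order)]
    p = [crosscorr(x, d, k) for k in range(order)]
    return solve(R, p)

def apply_fir(w, x):
    """Filter x with taps w: y[n] = sum_k w[k] * x[n-k]."""
    N = len(w)
    return [sum(w[k] * x[n - k] for k in range(N))
            for n in range(N - 1, len(x))]

# Demo on synthetic broadband data: with d == x (noiseless target),
# the design should recover the identity filter, w ≈ [1, 0, 0].
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(400)]
taps = wiener_taps(x, x, 3)
```

In a GPS setting, `x` would be the noisy position or pseudorange sequence and `d` a reference signal (or a delayed/smoothed version of `x`); the O(N) cost the paper cites comes from the filtering step, not the one-time tap design.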
The experimental results show that the Kalman filter achieves an RMSE of 2.3 m with a processing time of 12 ms per update, while the ANN yields a slightly better RMSE of 1.9 m but requires 18 ms. The baseline Wiener filter delivers an RMSE of 2.0 m and the shortest latency of 9 ms, demonstrating that it can match or surpass the Kalman filter’s accuracy while being computationally lighter. However, the Wiener filter’s computational cost grows linearly with the length of the input data (O(N)), which could become a bottleneck for real‑time applications that demand high sampling rates.
To overcome this limitation, the authors introduce two optimization strategies: (1) Parallel Wiener filtering and (2) Reduced‑length Wiener filtering.
- Parallel Wiener filtering partitions the input sequence into P equal blocks and processes each block concurrently on separate cores. With a 4-core implementation, the total processing time drops from 9 ms to 3.2 ms, a speed-up of roughly 2.8×, while the RMSE remains essentially unchanged at 2.1 m. Scaling beyond four cores yields diminishing returns due to synchronization overhead and the limited parallelism inherent in the algorithm.
- Reduced-length Wiener filtering limits the filter's data window to the most recent k samples (k < N). When k is set to half of the original length (k = 512 samples), the processing time falls to 5 ms, but the RMSE modestly degrades to 2.4 m. Further reduction to k = 256 samples cuts latency to 3 ms at the expense of a larger error increase (≈3.0 m). This trade-off provides designers with a tunable knob to balance real-time constraints against positioning accuracy.
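Both strategies can be sketched in a few lines. In the block-parallel variant below, each block keeps N−1 samples of overlap with its predecessor so the concatenated block outputs match the sequential filter exactly; the reduced-length variant simply slices the most recent k samples. The taps, block count, and helper names are illustrative assumptions, not the paper's embedded implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def fir(w, x):
    """Sequential FIR filtering: y[n] = sum_k w[k] * x[n-k]."""
    N = len(w)
    return [sum(w[k] * x[n - k] for k in range(N))
            for n in range(N - 1, len(x))]

def parallel_fir(w, x, P=4):
    """Filter P blocks concurrently.  Each block's input slice starts
    N-1 samples early, so block outputs concatenate to the exact
    sequential result with no boundary artifacts."""
    N = len(w)
    total = len(x) - N + 1                          # number of outputs
    bounds = [N - 1 + (total * i) // P for i in range(P + 1)]

    def block(i):
        a, b = bounds[i], bounds[i + 1]
        return fir(w, x[a - (N - 1):b])

    with ThreadPoolExecutor(max_workers=P) as pool:
        parts = pool.map(block, range(P))
    return [y for part in parts for y in part]

def reduced_fir(w, x, k):
    """Reduced-length variant: filter only the most recent k samples."""
    return fir(w, x[-k:])
```

Note that on CPython a thread pool only demonstrates the block partitioning; a real speed-up for pure-Python loops would need a `ProcessPoolExecutor` or a compiled kernel, whereas on the multi-core embedded targets the paper envisions, the blocks map directly onto cores.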
The paper also quantifies the sampling frequencies required for each Wiener‑filter variant to achieve the same total processing time as the Kalman filter (12 ms) and the ANN (18 ms). By solving the relation “processing time = (number of operations) / (sampling frequency)”, the authors find that the baseline Wiener filter would need a sampling rate of about 250 Hz, the 4‑core parallel version about 200 Hz, and the reduced‑length version (k = 512) about 150 Hz. These rates are well within the capabilities of modern low‑power microcontrollers, indicating that the proposed methods can be deployed without specialized DSP hardware.
The discussion highlights several practical considerations. Parallel execution inevitably raises power consumption, which may be a concern for battery‑operated platforms. The reduced‑length approach assumes that recent measurements contain sufficient statistical information; in highly dynamic environments with rapid changes in satellite geometry or multipath effects, the loss of older data could degrade performance more severely. Moreover, the Wiener filter’s reliance on linear, stationary assumptions limits its robustness to abrupt non‑linear disturbances; integrating adaptive techniques such as LMS or RLS could mitigate this limitation.
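The adaptive extension mentioned above is not developed in the paper; as an indicative sketch, a least-mean-squares (LMS) update adjusts the taps on every sample, letting the filter track slow non-stationarities without re-solving the Wiener-Hopf equations. Step size, filter order, and the synthetic data below are illustrative assumptions.

```python
import random

def lms_filter(x, d, order=4, mu=0.05):
    """LMS adaptive FIR filter: predict d[n] from the last `order`
    inputs, then nudge the taps along the instantaneous error gradient."""
    w = [0.0] * order
    estimates = []
    for n in range(order - 1, len(x)):
        window = [x[n - k] for k in range(order)]
        y = sum(wk * xk for wk, xk in zip(w, window))
        e = d[n] - y                       # instantaneous error
        w = [wk + mu * e * xk for wk, xk in zip(w, window)]
        estimates.append(y)
    return w, estimates

# Demo: identify a known 2-tap system, d[n] = 0.5 x[n] - 0.2 x[n-1],
# from noiseless data; the taps should converge to [0.5, -0.2].
random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(3000)]
d = [0.5 * x[n] - (0.2 * x[n - 1] if n > 0 else 0.0)
     for n in range(len(x))]
taps, _ = lms_filter(x, d, order=2, mu=0.05)
```

The step size `mu` trades tracking speed against steady-state noise; RLS converges faster at a higher per-sample cost, which matters on the embedded targets discussed here.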
In conclusion, the study demonstrates that a Wiener filter, when combined with parallel processing or window‑size reduction, offers a cost‑effective alternative to traditional Kalman‑based GPS/INS fusion. It achieves comparable or better positioning accuracy while substantially lowering computational latency, and it does so with hardware requirements that are realistic for embedded systems. The authors suggest future work on adaptive Wiener designs, hardware‑accelerated implementations, and field trials with real‑world GPS signals to further validate the approach.