Exploring Spatial-Temporal Representation via Star Graph for mmWave Radar-based Human Activity Recognition


Human activity recognition (HAR) requires extracting accurate spatial-temporal features from human movements. mmWave radar point cloud-based HAR systems suffer from sparsity and variable point cloud sizes due to the physical characteristics of the mmWave signal. Existing works usually borrow preprocessing algorithms designed for vision-based systems with dense point clouds, which may not be optimal for mmWave radar. In this work, we propose a graph representation with a discrete dynamic graph neural network (DDGNN) to explore the spatial-temporal representation of movement-related features. Specifically, we design a star graph that describes the high-dimensional relative relationship between a manually added static center point and the dynamic mmWave radar points within the same frame and across consecutive frames. We then adopt DDGNN to learn the features residing in star graphs of variable sizes. Experimental results on real-world HAR datasets demonstrate that our approach outperforms other baseline methods. Our system achieves an overall classification accuracy of 94.27%, close to the 97.25% accuracy of a vision-based system using skeleton data. We also conduct an inference test on a Raspberry Pi 4 to demonstrate its effectiveness on resource-constrained platforms. We provide a comprehensive ablation study of DDGNN structural variants to validate our model design. Our system also outperforms three recent radar-specific methods without requiring resampling or frame aggregators.


💡 Research Summary

This paper presents a novel approach to Human Activity Recognition (HAR) using millimeter-wave (mmWave) radar, addressing the fundamental challenges of sparsity and variable point cloud size inherent to this sensor modality. Unlike vision-based systems that generate dense point clouds, mmWave radar produces sparse and fluctuating data, making traditional preprocessing and modeling techniques suboptimal.

The core innovation is a two-part solution: a novel graph representation and a tailored neural network architecture. First, the authors propose a “Star Graph” representation for each radar point cloud frame. Instead of connecting radar points to each other—which is computationally expensive and unreliable due to sparsity—a single, manually defined static center point is connected to every dynamic radar point in the frame. This structure focuses the model on learning the high-dimensional relative relationships between the human-generated points and a common reference, effectively capturing spatial patterns of movement.
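The construction described above can be sketched in a few lines. This is a hypothetical illustration (the function name, the choice of center coordinates, and the relative-feature definition are assumptions, not the paper's code): for each frame, a static center node is prepended and connected to every radar point, so the edge count grows linearly with the number of points instead of quadratically as in a fully connected graph.

```python
import numpy as np

def build_star_graph(points, center=(0.0, 0.0, 0.0)):
    """Build a star graph for one radar frame (illustrative sketch).

    points : (N, 3) array of radar point coordinates; N varies per frame.
    Returns node coordinates (center first, then the N points), an
    undirected edge list linking the static center (node 0) to every
    dynamic point, and relative point-minus-center edge features.
    """
    points = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    nodes = np.vstack([c, points])
    n = len(points)
    # Undirected star: center <-> each point (point nodes are 1..N).
    edges = [(0, i) for i in range(1, n + 1)] + \
            [(i, 0) for i in range(1, n + 1)]
    rel = points - c  # relative relationship to the common reference
    return nodes, np.array(edges), rel
```

For a frame with N points this yields N+1 nodes and 2N directed edge entries, whatever N happens to be, which is exactly what makes the representation robust to frame-to-frame fluctuation in point count.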

Second, to handle sequences of these variable-sized star graphs, the authors design a Discrete Dynamic Graph Neural Network (DDGNN). The DDGNN processes each graph in the sequence independently using graph convolution operations to extract spatial features. These per-frame features are then fed into a Bidirectional LSTM (Bi-LSTM) to model temporal dependencies across frames. A critical advantage of the DDGNN is its ability to accept graphs of different sizes (different numbers of radar points per frame) and output fixed-length feature vectors, eliminating the need for resampling, zero-padding, or frame aggregation techniques that can introduce noise or computational overhead.
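The key mechanism here, mapping a variable number of points to a fixed-length per-frame vector via permutation-invariant aggregation, can be sketched as follows. This is a simplified stand-in for the DDGNN's graph convolution (the function names and the single tanh layer are assumptions for illustration; the actual model is deeper and feeds the stacked embeddings into a Bi-LSTM):

```python
import numpy as np

def frame_embedding(rel_feats, W):
    """Embed one frame's relative point features into a fixed-length vector.

    rel_feats : (N, 3) point-minus-center features; N varies per frame.
    W         : (3, d) shared point-wise weight matrix.
    Mean aggregation over points makes the output size d independent of N,
    so no resampling or zero-padding is needed.
    """
    h = np.tanh(rel_feats @ W)   # (N, d) point-wise transform
    return h.mean(axis=0)        # (d,)  permutation-invariant pooling

def sequence_features(frames, W):
    """Stack per-frame embeddings into a (T, d) sequence for a Bi-LSTM."""
    return np.stack([frame_embedding(f, W) for f in frames])
```

Because every frame, regardless of its point count, collapses to the same d-dimensional vector, the downstream recurrent model sees a clean fixed-width sequence of length T.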

The proposed system was evaluated on a real-world HAR dataset containing various human activities. Experimental results demonstrated superior performance over several strong baselines, including general point cloud networks (PointNet++, PointLSTM, MeteorNet) and recent radar-specific methods (Tesla-Rapture, MMPointGNN). The star graph with DDGNN achieved an overall classification accuracy of 94.27%, which is notably close to the 97.25% accuracy achieved by a vision-based skeleton data system, highlighting the efficacy of the radar-only approach. The model’s practicality was further validated through an inference speed test on a resource-constrained Raspberry Pi 4 platform. Comprehensive ablation studies provided insights into the contribution of different model components, confirming the design choices.

In summary, this work offers a principled and effective framework for mmWave radar-based HAR. By introducing a purpose-built star graph representation and a DDGNN that handles variable-sized inputs natively, it bypasses the limitations of existing methods and sets a new state of the art, paving the way for privacy-preserving activity recognition in real-world applications.

