ASD is a complex neurodevelopmental disorder marked by variability in symptom presentation and neurological underpinnings, making early, objective diagnosis particularly challenging. This paper presents a Graph Convolutional Network (GCN) model incorporating Chebyshev Spectral Graph Convolution and Graph Attention Networks (GAT) to improve the classification accuracy of ASD using multimodal neuroimaging and phenotypic data. Built on the ABIDE I dataset, which contains resting-state functional MRI (rs-fMRI), structural MRI (sMRI), and phenotypic variables from 870 subjects, the model uses a multi-branch architecture that processes each modality individually before merging them via concatenation. Graph structure is encoded through site-based similarity to generate a population graph that captures relationships across individuals. Chebyshev polynomial filters provide localized spectral learning at reduced computational cost, while GAT layers enrich node representations through attention-weighted aggregation of neighborhood information. The proposed model is trained using stratified five-fold cross-validation with a total input dimension of 5,206 features per subject. Extensive experiments demonstrate the enhanced model's superiority, achieving a test accuracy of 74.82% and an AUC of 0.82 on the full dataset, surpassing multiple state-of-the-art baselines, including conventional GCNs, autoencoder-based deep neural networks, and multimodal CNNs.
Deep Dive into Enhanced Graph Convolutional Network with Chebyshev Spectral Graph and Graph Attention for Autism Spectrum Disorder Classification
Autism Spectrum Disorder (ASD) is recognized as a complex neurodevelopmental disorder characterized by persistent deficits in social interaction and communication, together with restricted, repetitive behaviors and interests. Currently, the diagnosis of ASD is largely based on behavioral observations and clinical interviews performed by experts in neurodevelopmental disorders. However, these methods are considered subjective, and there is no laboratory-based diagnostic test, making the identification and diagnosis of ASD a challenging task. Early detection is therefore crucial, as prompt diagnosis and specialized support enable access to essential services and interventions that can improve quality of life.
Observation and interview methodologies are two prevalent manual strategies for the identification and diagnosis of ASD. The Childhood Autism Rating Scale (CARS) [1] comprises 15 items designed to evaluate ASD. CARS provides a spectrum of scores to denote ASD levels; for instance, a score of 30-37 indicates moderate ASD, while a score of 38-60 signifies severe ASD. Conversely, interview-based detection and diagnostic systems [2,3,4,5] rely on discussions with parents or caregivers. However, both manual approaches depend on behavioral symptoms and the reports of parents or caregivers, and require a physician's expertise for accurate assessment. Consequently, they cannot capture behavior under authentic, everyday conditions. Furthermore, these methodologies are expensive and labor-intensive [6,7]. In recent years, non-invasive brain imaging techniques, such as magnetic resonance imaging (MRI), have been explored to identify structural and functional differences between individuals with ASD and typical controls [8], aiding objective diagnosis and the identification of neural correlates.
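The CARS score bands above can be expressed as a simple lookup. The helper below is purely illustrative (its name and the sub-30 label are our own assumptions, not part of the scale's clinical definition); it only encodes the thresholds quoted in the text, using 15 as the minimum possible total for the 15-item scale.

```python
def cars_severity(score: float) -> str:
    """Map a CARS total score to the severity bands quoted above.

    Hypothetical helper: thresholds follow the text (30-37 moderate,
    38-60 severe); sub-30 labeling is an assumption for illustration.
    """
    if not 15 <= score <= 60:
        raise ValueError("CARS totals range from 15 (15 items x 1) to 60")
    if score < 30:
        return "below the moderate-ASD cutoff"
    if score <= 37:
        return "moderate ASD"
    return "severe ASD"

print(cars_severity(33))  # moderate ASD
print(cars_severity(45))  # severe ASD
```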
However, utilizing MRI data for ASD detection faces several challenges [9]. Issues include variations in brain connectivity patterns, small sample sizes, and data heterogeneity, particularly from multi-site collections like the Autism Brain Imaging Data Exchange (ABIDE) [10] dataset, which introduces variability due to different imaging procedures and scanners. Conventional image classification methods have often failed to accurately identify the disorder [11], frequently focusing solely on imaging features and overlooking important non-imaging data. Furthermore, traditional techniques often rely on pairwise comparisons and may not fully capture complex interactions or individual characteristics. The inherent biological heterogeneity of autism also poses a significant limitation to classification performance [12]. These challenges underscore the need for improved analysis techniques.
This study contributes a novel multi-branch Graph Convolutional Network (GCN) architecture that combines Chebyshev Spectral Graph Convolution and Graph Attention Networks (GAT) to classify ASD using multimodal neuroimaging and phenotypic data. By constructing a site-based population graph and processing each modality through specialized branches, the model effectively captures complex inter-subject relationships. Extensive evaluation on the ABIDE I dataset demonstrates state-of-the-art performance, achieving 74.81% accuracy and 0.82 AUC, with ablation studies confirming the value of graph attention and hyperparameter tuning.
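To make the two graph ingredients concrete, the sketch below builds a toy site-based population graph (subjects connected when scanned at the same site) and applies an untrained Chebyshev spectral filter to node features. The site labels, feature dimensions, and the usual eigenvalue bound lambda_max = 2 are illustrative assumptions; a real layer would additionally mix the stacked polynomial terms with trainable weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population graph: nodes are subjects; an edge connects subjects
# scanned at the same acquisition site (site-based similarity).
sites = np.array([0, 0, 1, 1, 1, 2, 2])        # hypothetical site labels
n = len(sites)
A = (sites[:, None] == sites[None, :]).astype(float)
np.fill_diagonal(A, 0.0)                       # no self-loops

# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
d_inv_sqrt = d ** -0.5                         # every site has >= 2 subjects here
L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Rescale so eigenvalues lie in [-1, 1]; lambda_max = 2 is the standard
# upper bound for the normalized Laplacian.
lam_max = 2.0
L_tilde = (2.0 / lam_max) * L - np.eye(n)

def cheb_filter(X, K):
    """Stack T_0(L~)X ... T_{K-1}(L~)X using the Chebyshev recurrence
    T_k = 2 L~ T_{k-1} - T_{k-2}. Each term aggregates information from
    k-hop neighborhoods, giving the localized spectral filtering."""
    Tx = [X, L_tilde @ X]
    for _ in range(2, K):
        Tx.append(2.0 * L_tilde @ Tx[-1] - Tx[-2])
    return np.concatenate(Tx[:K], axis=1)

X = rng.standard_normal((n, 4))                # toy per-subject features
H = cheb_filter(X, K=3)
print(H.shape)                                 # (7, 12): K * 4 features per node
```

The concatenated output would normally be projected by a learned weight matrix; K controls the neighborhood radius of the filter without requiring an eigendecomposition, which is the source of the reduced computational cost mentioned above.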
Recent advancements have seen deep learning models applied extensively in the medical imaging field, aiming for improved accuracy compared to classical machine learning methods. These models have achieved performance levels comparable to human experts in various domains, including natural language processing and computer vision.
One of the first successful applications of deep learning in ASD detection utilized deep neural networks combining stacked denoising autoencoders [13] with a fully connected network (FCN) on fMRI data from the ABIDE dataset, achieving a 70% classification accuracy. Later, an autoencoder followed by a fully connected network (AE-FCN) [14] was combined with Ensembles of Multiple Models and Architectures (EMMA), incorporating both functional MRI (fMRI) and structural MRI (sMRI) data, to reach 85% accuracy.
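The denoising-autoencoder idea behind these works can be sketched in a few lines: corrupt the input, encode it to a low-dimensional code, and reconstruct the clean input. The sketch below is a single untrained layer with tied weights and toy dimensions; it is not the cited architecture, only the forward pass of the pretraining objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for vectorized fMRI connectivity features: 8 subjects x 20 features.
X = rng.standard_normal((8, 20))

# Untrained denoising autoencoder with tied weights (decoder = W.T).
W = rng.standard_normal((20, 6)) * 0.1
b_enc, b_dec = np.zeros(6), np.zeros(20)

def dae_forward(X, corruption=0.3):
    """One forward pass of a denoising autoencoder (sketch only)."""
    mask = rng.random(X.shape) > corruption    # randomly zero out inputs
    X_noisy = X * mask
    H = np.tanh(X_noisy @ W + b_enc)           # low-dimensional code
    X_hat = H @ W.T + b_dec                    # reconstruction
    loss = np.mean((X_hat - X) ** 2)           # target is the *clean* input
    return H, X_hat, loss

H, X_hat, loss = dae_forward(X)
print(H.shape, X_hat.shape)                    # (8, 6) (8, 20)
```

In the stacked setting, each layer is pretrained this way and the resulting codes are fed to the FCN classifier.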
A graph convolutional neural network (GCN) [15] that integrates phenotypic information obtained a best accuracy of 70.4% on ABIDE fMRI data. Building on the GCN, an edge-variational graph convolutional neural network (EV-GCN) [16] was proposed, which reportedly achieved 81% classification accuracy. Furthermore, a GCN and an Ensemble GCN [17] on the ABIDE I dataset obtained 68.6% and 71.3% accuracy, respectively. Later, a multi-modal GCN (MMGCN) [18] was introduced that deployed multimodal data from the ABIDE I dataset and obtained an accuracy of 72.37%. A weight-learning deep GCN [19] was proposed that incorporated pairwise associations of non-imaging data into the existing DeepGCN classification model; the authors achieved an accuracy of 77.27% on a partial testing dataset.
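For context, the layer-wise propagation rule shared by these GCN variants, H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W), can be sketched on a toy population graph. Weights are random (untrained) and the graph is illustrative; the cited models differ in how they build the adjacency and stack layers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 4-subject adjacency (symmetric, unweighted).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))    # D^{-1/2} A_hat D^{-1/2}

H = rng.standard_normal((4, 5))             # per-subject input features
W = rng.standard_normal((5, 2)) * 0.1       # layer weights (random here)

H_next = np.maximum(A_norm @ H @ W, 0.0)    # ReLU(A_norm H W)
print(H_next.shape)                         # (4, 2)
```

Each such layer averages a node's features with its neighbors' before the linear map, which is what lets population-graph models pool evidence across similar subjects.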