arXiv 2512.24628

Reading time: 5 minutes

📝 Original Info

  • Title: arXiv 2512.24628
  • arXiv ID: 2512.24628
  • Date: 2025-12-31
  • Authors: Mohsen Annabestani, Samira Aghadoost, Anais Rameau, Olivier Elemento, Gloria Chia-Yi Chiang

📝 Abstract

Benign laryngeal voice disorders affect nearly one in five individuals and often manifest as dysphonia, while also serving as non-invasive indicators of broader physiological dysfunction. We introduce a clinically inspired hierarchical machine learning framework for automated classification of eight benign voice disorders alongside healthy controls, using acoustic features extracted from short, sustained vowel phonations. Experiments utilized 15,132 recordings from 1,261 speakers in the Saarbruecken Voice Database, covering vowels /a/, /i/, and /u/ at neutral, high, low, and gliding pitches. Mirroring clinical triage workflows, the framework operates in three sequential stages: Stage 1 performs binary screening of pathological versus non-pathological voices by integrating convolutional neural network-derived mel-spectrogram features with 21 interpretable acoustic biomarkers; Stage 2 stratifies voices into Healthy, Functional or Psychogenic, and Structural or Inflammatory groups using a cubic support vector machine; Stage 3 achieves fine-grained classification by incorporating probabilistic outputs from prior stages, improving discrimination of structural and inflammatory disorders relative to functional conditions. The proposed system consistently outperformed flat multi-class classifiers and pretrained self-supervised models, including META HuBERT and Google HeAR, whose generic objectives are not optimized for sustained clinical phonation. By combining deep spectral representations with interpretable acoustic features, the framework enhances transparency and clinical alignment. These results highlight the potential of quantitative voice biomarkers as scalable, non-invasive tools for early screening, diagnostic triage, and longitudinal monitoring of vocal health.
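The three-stage triage design described above can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' implementation: the features, labels, and class groupings are synthetic placeholders standing in for the fused CNN mel-spectrogram embeddings and 21 acoustic biomarkers, and the Stage 1 model is an assumed logistic screener. Only the cubic-kernel SVM at Stage 2 is named in the abstract; everything else here is a hedged approximation of the staged, probability-passing structure.

```python
# Sketch of a three-stage hierarchical triage classifier, loosely
# following the staged design in the paper. Data and most model
# choices are synthetic/assumed, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for fused features: CNN-derived mel-spectrogram
# embeddings concatenated with 21 interpretable acoustic biomarkers.
n, d = 300, 32
X = rng.normal(size=(n, d))
y_fine = rng.integers(0, 9, size=n)        # 8 disorders + healthy (class 0)
y_path = (y_fine != 0).astype(int)         # Stage 1 target: pathological or not
y_group = np.where(y_fine == 0, 0,         # Stage 2 target: Healthy /
          np.where(y_fine <= 4, 1, 2))     # Functional-Psychogenic / Structural-Inflammatory

# Stage 1: binary screening (logistic model assumed here for brevity).
stage1 = LogisticRegression(max_iter=1000).fit(X, y_path)
p1 = stage1.predict_proba(X)

# Stage 2: three-way stratification with a cubic-kernel SVM,
# conditioned on Stage 1 probabilities (as the abstract describes).
X2 = np.hstack([X, p1])
stage2 = SVC(kernel="poly", degree=3, probability=True).fit(X2, y_group)
p2 = stage2.predict_proba(X2)

# Stage 3: fine-grained classification incorporating probabilistic
# outputs from both prior stages.
X3 = np.hstack([X, p1, p2])
stage3 = LogisticRegression(max_iter=1000).fit(X3, y_fine)
pred = stage3.predict(X3)
print(pred.shape)
```

The key design point is that each stage's class probabilities are appended to the feature vector of the next stage, so downstream classifiers can exploit upstream uncertainty rather than a hard decision.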

📄 Full Content

Voice is one of the most fundamental and complex instruments of human communication. It supports linguistic exchange while also transmitting emotion, identity, and social intent [1,2]. Maintaining a healthy and stable voice is essential for effective interpersonal interaction and for professional performance, particularly in vocally intensive fields such as teaching, acting, singing, and broadcasting [3][4][5]. The ability to produce a clear and fatigue-resistant voice enables individuals to communicate efficiently, sustain social relationships, and meet the demands of their professional roles [6,7]. In contrast, disruptions in vocal function, collectively described as voice disorders, can substantially hinder social participation, emotional expression, and occupational capabilities [8,9].

Voice disorders affect an estimated 7-10% of individuals worldwide at some point in their lives, highlighting their clinical and public health significance [10]. Their etiologies are diverse and include functional and psychogenic origins such as hyperfunctional dysphonia, muscle tension dysphonia (MTD), and vocal misuse. They also include structural and pathological conditions such as vocal fold nodules, polyps, Reinke's edema, laryngitis, and pachydermia [11,12]. Environmental exposures including air pollution and occupational vocal load, lifestyle factors including smoking and caffeine intake, and systemic diseases such as gastroesophageal reflux and hypothyroidism further contribute to the development and persistence of these disorders [12,13].

Among their manifestations, dysphonia is the most prevalent and clinically quantifiable symptom. It is characterized by abnormalities in pitch, loudness, and timbre, and is perceived as hoarseness, breathiness, or roughness [14,15]. Persistent dysphonia reduces vocal endurance, compromises professional performance, and negatively affects overall quality of life [16,17].
It may also obscure early signs of more severe conditions such as vocal fold paralysis or laryngeal carcinoma. For these reasons, timely and accurate differentiation of underlying etiologies is essential for selecting appropriate behavioral, medical, or surgical treatment strategies [18,19].

Conventional diagnostic approaches combine auditory-perceptual evaluation, acoustic analysis, laryngoscopic imaging, palpation, and patient-reported measures [20,21]. Some methods, such as stroboscopy, require specialized equipment, trained clinicians, and in-person assessment. These requirements introduce cost, invasiveness, and logistical barriers that restrict accessibility, particularly in remote or resource-limited settings. During global health crises such as the COVID-19 pandemic, the risks associated with in-person evaluations further emphasized the need for scalable, non-invasive, and remote diagnostic alternatives [22].

Acoustic analysis provides an objective and reproducible option by quantifying features such as jitter, shimmer, fundamental frequency (F0), and harmonic-to-noise ratio (HNR) [23,24]. Sustained vowel phonation tasks, including /a/, /i/, and /u/, are especially advantageous because they are language independent, easy to elicit, and sensitive to subtle perturbations in vocal fold vibration [25]. However, manual acoustic assessment is time-consuming and reliant on expert interpretation, which may result in inter-rater variability and diagnostic inconsistency.

Recent machine and deep learning advances have greatly improved automated classification of voice disorders. CNNs, LSTMs, BiLSTMs, CNN-RNN hybrids, and ensemble methods use acoustic inputs such as MFCCs, spectrograms, Mel-spectrograms, TQWT features, and glottal parameters to capture key spectral and temporal cues linked to dysphonia [26][27][28][29].
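To make the acoustic measures mentioned above concrete, the sketch below computes local jitter, local shimmer, and mean F0 from a synthetic sustained-vowel-like waveform. This is an illustration only, not the paper's feature-extraction code: the signal is artificial, cycle marking uses crude zero crossings rather than a clinical pitch tracker, and the formulas follow the common Praat-style local definitions.

```python
# Illustrative sketch (not the paper's extraction code): local jitter,
# local shimmer, and mean F0 from glottal-cycle periods and peak
# amplitudes of a synthetic sustained-vowel-like waveform.
import numpy as np

sr = 16000
f0 = 120.0                                # target fundamental frequency, Hz
t = np.arange(0, 1.0, 1 / sr)
# Synthetic "voice": sine carrier with a mild 3 Hz amplitude modulation.
signal = np.sin(2 * np.pi * f0 * t) * (1 + 0.02 * np.sin(2 * np.pi * 3 * t))

# Crude cycle marking: upward zero crossings delimit glottal cycles.
zc = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
periods = np.diff(zc) / sr                # cycle lengths in seconds
amps = np.array([signal[a:b].max() for a, b in zip(zc[:-1], zc[1:])])

# Local jitter: mean absolute difference of consecutive cycle periods,
# relative to the mean period.
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
# Local shimmer: the same ratio applied to consecutive peak amplitudes.
shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(amps)
mean_f0 = 1.0 / np.mean(periods)

print(round(float(mean_f0)), jitter, shimmer)
```

For this clean synthetic signal, the recovered mean F0 sits near the 120 Hz target and both perturbation measures are small; on real pathological phonation, elevated jitter and shimmer are exactly the cues such features are meant to capture.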
Pretrained and custom CNNs applied to spectrograms from sustained vowels often achieve strong results for binary pathology detection, while BiLSTMs trained on MFCC sequences from continuous speech provide better modeling of temporal patterns [26]. Hybrid systems that pair CNN feature extraction with LSTM sequence modeling, as well as ensembles based on EfficientNet, ResNet, or DenseNet, further improve performance. Gradient boosting models such as XGBoost combined with handcrafted features like jitter, shimmer, and HNR also remain competitive [26,28,29]. Model performance is strongly influenced by the dataset used. The Saarbruecken Voice Database (SVD) [30] is a common benchmark for sustained-vowel classification, while clinical datasets and resources such as VOICED or Mandarin continuous-speech corpora support evaluations with more complex speech [26,27,31]. Binary healthy-versus-pathology classification often reports accuracies above 90%. In contrast, multi-class subtype classification shows lower unweighted average recalls, typically in the range of 50% to 70% [26,28,31], even with large speech models such as HuBERT [32]. Although some CNN-based models achieve up to 95% accuracy or F1 scores near

…(Full text truncated)…

Reference

This content is AI-processed based on ArXiv data.
