Emotion Recognition of the Singing Voice: Toward a Real-Time Analysis Tool for Singers

Current computational-emotion research has focused on applying acoustic properties to analyze how emotions are perceived mathematically, or on using them in natural language processing machine learning models. While recent interest has centered on recognizing emotions in the spoken voice, little experimentation has been performed to discover how emotions are recognized in the singing voice, in both noiseless and noisy data (i.e., data that is inaccurate, difficult to interpret, corrupted or distorted by extraneous content such as actual acoustic noise, or that has a low ratio of usable to unusable information). Not only does this ignore the challenges of training machine learning models on more subjective data and testing them with much noisier data, but there is also a clear disconnect between progress in developing convolutional neural networks and the goal of emotionally cognizant artificial intelligence. By training a new model on this type of information with a rich comprehension of psycho-acoustic properties, models can not only be trained to recognize information within extremely noisy data, but advances can also be made toward more complex biofeedback applications, including a model that could recognize emotions from any human signal (language, breath, voice, body, posture) and be used in any performance medium (music, speech, acting) or in psychological assistance for patients with disorders such as borderline personality disorder (BPD), alexithymia, and autism, among others. This paper seeks to reflect and expand upon the findings of related research and to present a stepping-stone toward this end goal.
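The abstract's notion of noisy data, a low ratio of usable to unusable information, can be made concrete with a signal-to-noise ratio (SNR) measurement. The sketch below is illustrative only and is not taken from the paper: it uses NumPy alone, with a pure 440 Hz tone standing in for a sung note, and compares a lightly corrupted copy against a heavily corrupted one. The function name `snr_db` and the noise levels are hypothetical choices.

```python
import numpy as np

def snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels: power of the clean signal
    relative to the power of the residual (noisy - clean)."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
sr = 16_000                            # sample rate in Hz
t = np.arange(sr) / sr                 # one second of audio
clean = np.sin(2 * np.pi * 440.0 * t)  # pure tone standing in for a sung note

# Two corrupted versions: a little additive noise vs. a lot.
light_noise = clean + 0.05 * rng.standard_normal(sr)
heavy_noise = clean + 0.80 * rng.standard_normal(sr)

print(f"light noise SNR: {snr_db(clean, light_noise):5.1f} dB")
print(f"heavy noise SNR: {snr_db(clean, heavy_noise):5.1f} dB")
```

A model trained only on high-SNR recordings would see inputs like `light_noise`; the challenge the abstract describes is generalizing to inputs closer to `heavy_noise`, where the noise power approaches or exceeds the signal power.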


💡 Research Summary

This paper highlights the gap in current emotion recognition research, which has largely focused on analyzing emotions from spoken voice rather than singing voice. It argues that existing studies overlook the challenges of training machine learning models with more subjective data and testing them under noisy conditions. The authors emphasize the importance of developing models capable of recognizing emotions even within inaccurate or difficult-to-interpret data, including those with corrupted or distorted information.

The paper suggests that by incorporating a rich understanding of psycho-acoustic properties into new models, researchers can advance not only in training models to recognize information from extremely noisy datasets but also toward more sophisticated biofeedback applications. These applications could include the development of a model capable of recognizing emotions based on various human inputs such as language, breath, voice, and body posture.

Moreover, the research points out potential uses for these advancements beyond performance mediums like music or speech; it envisions applications in psychological assistance for patients with disorders including Borderline Personality Disorder (BPD), alexithymia, and autism, among others. This paper serves to reflect on related studies and presents a stepping-stone toward more comprehensive emotion recognition systems that can be applied across different domains and contexts.

