Automatic Detection of Auditory, Visual and Physiological Parameters for the Diagnosis
of Affective Disorders

EU Funded Project Affective Mind

Project Justification

Mental illnesses are among the most common diseases worldwide: in Germany, roughly one in two to one in three adults develops a mental illness during their lifetime. According to data from the statutory health insurance funds, mental illnesses account for the longest periods of incapacity to work of any illness type, and almost half of these diagnoses are affective disorders. Previous research suggests that people with affective disorders (e.g., depression) differ from healthy individuals not only in their health status but also in various features of their voice, facial expressions, and physiological parameters. None of these parameters is currently used in diagnosing mental disorders, but they could become a promising diagnostic tool in the future.

Automatic speech feature extraction. Input: real-time PC speaker audio. Processing: (a) extraction of Mel-frequency cepstral coefficients (MFCCs). Output: (b) BDI depression score.
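The MFCC extraction step named above can be sketched in plain NumPy/SciPy. This is a minimal illustration, not the project's actual implementation; the frame length, hop size, filter count, and the synthetic test tone are all assumptions:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    # Frame the signal and apply a Hamming window
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies -> log -> DCT (cepstral coefficients)
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    energies = np.where(energies == 0, np.finfo(float).eps, energies)
    return dct(np.log(energies), type=2, axis=1, norm='ortho')[:, :n_coeffs]

# Example: 1 s synthetic vowel-like tone at 16 kHz
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
coeffs = mfcc(tone)
print(coeffs.shape)  # (98, 13): one 13-coefficient vector per 25 ms frame
```

In a real-time setting, the same computation would be applied to a sliding buffer of microphone audio; how such frame-level coefficients are mapped to a BDI score is a modelling question not covered by this sketch.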

Depression symptoms: (a) low-energy vs. (b) high-energy sample

Our Approach

Therefore, the aim of the EU-funded project Affective Mind is to develop a system for the automatic detection of auditory, visual, and physiological parameters for the diagnosis of affective disorders. The system will be developed by analyzing these parameters in patients with affective disorders and comparing them to those of healthy individuals. It will then be tested on a new set of samples to verify that it predicts disorders correctly.
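The develop-then-validate workflow described above can be illustrated with a minimal sketch: a nearest-centroid classifier fitted to synthetic "patient" and "control" feature vectors and evaluated on a held-out test set. The data, feature dimensionality, and classifier choice are all assumptions for illustration; the project's actual models are not specified here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-speaker feature vectors (e.g., averaged MFCCs);
# synthetic data stands in for the project's clinical recordings.
patients = rng.normal(loc=1.0, scale=1.0, size=(40, 13))
controls = rng.normal(loc=-1.0, scale=1.0, size=(40, 13))

X = np.vstack([patients, controls])
y = np.array([1] * 40 + [0] * 40)

# Hold out a test set to check that the model generalises to new samples
idx = rng.permutation(len(X))
train, test = idx[:60], idx[60:]

# Nearest-centroid classifier: one prototype vector per group
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(
    np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2), axis=1)
accuracy = (pred == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the held-out split is exactly the validation step named above: a system that merely memorises the development data would fail on unseen samples.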

Example of the extraction of key figures from time-frequency diagrams of the speech signal. The horizontal bars show the resonance frequencies of the vocal tract, which are associated with jaw opening, horizontal tongue position, and vocal-tract tension. Among other things, vocal-tract parameters provide information on depression-associated speaker states such as sadness, fatigue, depressive states, and comorbid anxiety states.
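Vocal-tract resonance frequencies of the kind shown in such diagrams are commonly estimated via linear predictive coding (LPC): poles of the prediction polynomial near the unit circle correspond to formants. A minimal sketch, assuming a 16 kHz sampling rate and a synthetic two-resonance frame (not the project's actual method):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, sr=16000, order=12):
    """Estimate vocal-tract resonance frequencies of one frame via LPC."""
    frame = frame * np.hamming(len(frame))
    # Autocorrelation sequence up to the LPC order
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    # Solve the Toeplitz normal equations for the prediction coefficients
    a = solve_toeplitz(ac[:order], ac[1:order + 1])
    # Roots of A(z) = 1 - sum_k a_k z^-k; keep one root per conjugate pair
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]
    # Convert pole angles to frequencies in Hz
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    return freqs[freqs > 90]  # discard near-DC artefacts

# Example: synthetic 30 ms frame with resonances at 700 Hz and 1200 Hz
sr = 16000
rng = np.random.default_rng(0)
t = np.arange(0, 0.03, 1 / sr)
frame = (np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1200 * t)
         + 0.01 * rng.standard_normal(len(t)))
print(np.round(lpc_formants(frame, sr)))  # estimates near 700 and 1200 appear
```

Tracking such estimates frame by frame yields the horizontal resonance bars described in the caption; real speech additionally requires voicing detection and formant tracking, which this sketch omits.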

Work Packages, Insights and Outcomes

Responsibilities of the ixp within this project include the specification of features relevant to the diagnostic system, the preparation and analysis of auditory data, and the application of state-of-the-art feature extraction procedures. The resulting prototype has been tested for user acceptability.

Overview of the Multi-Modal Depression Classification Framework

Demonstrator Prototype: Sadness Detection via Speech

Related Projects