Multi-modal Biomarker Extraction Framework for Therapy Monitoring of Social Anxiety and Depression Using Audio and Video

Research output: Contribution to conferences › Paper › Contributed › Peer-reviewed

Contributors

  • Tobias Weise, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Paula Andrea Pérez-Toro, Friedrich-Alexander University Erlangen-Nürnberg, Universidad de Antioquia (Author)
  • Andrea Deitermann, Bayreuth District Hospital (Author)
  • Bettina Hoffmann, Bayreuth District Hospital (Author)
  • Kubilay Can Demir, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Theresa Straetz, Bayreuth District Hospital (Author)
  • Elmar Nöth, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Andreas Maier, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Seung Hee Yang, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Thomas Kallert, Bayreuth District Hospital (Author)

Abstract

This paper introduces a framework for extracting features relevant to monitoring the speech therapy progress of individuals suffering from social anxiety or depression. It operates multi-modally (via decision fusion), incorporating audio and video recordings of a patient and the corresponding interviewer at two separate assessment sessions. The data is provided by an ongoing project in a day-hospital and outpatient setting in Germany, whose goal is to investigate whether an established speech therapy group program for adolescents, currently implemented in inpatient and semi-inpatient settings, can be carried out successfully via telemedicine. The features proposed in this multi-modal approach could form the basis for interpretation and analysis by medical experts and therapists, complementing data acquired through questionnaires. The extracted audio features focus on prosody (intonation, stress, rhythm, and timing), as well as predictions from a deep neural network model inspired by the Pleasure, Arousal, Dominance (PAD) emotional model space. The video features are based on a pipeline designed to visualize the interaction between the patient and the interviewer in terms of Facial Emotion Recognition (FER), utilizing the mini-Xception network architecture.
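For illustration, the sketch below shows how prosodic descriptors of the kind named in the abstract (intonation, stress, rhythm, and timing) could be computed from a single speaker recording. The choice of librosa, the 16 kHz sampling rate, and the specific statistics are assumptions on our part; the paper does not specify its audio toolchain or exact feature set.

```python
# Minimal prosody-feature sketch (illustrative; not the authors' exact pipeline).
# Assumes one mono WAV file per speaker turn.
import numpy as np
import librosa

def prosody_features(wav_path: str) -> dict:
    y, sr = librosa.load(wav_path, sr=16000)

    # Intonation: fundamental-frequency (F0) contour statistics.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only

    # Stress: short-time energy (RMS) statistics.
    rms = librosa.feature.rms(y=y)[0]

    # Rhythm/timing: crude speech-rate proxy from onset density,
    # plus a pause ratio from low-energy frames.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    duration = len(y) / sr
    pause_ratio = float(np.mean(rms < 0.5 * np.median(rms)))

    return {
        "f0_mean": float(np.mean(f0)) if f0.size else 0.0,
        "f0_std": float(np.std(f0)) if f0.size else 0.0,
        "rms_mean": float(np.mean(rms)),
        "rms_std": float(np.std(rms)),
        "onset_rate": len(onsets) / duration,
        "pause_ratio": pause_ratio,
    }
```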
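The video side could be approximated per frame as below: detect a face, then classify its emotion with a pre-trained mini-Xception classifier. The Haar-cascade detector, the weight file name, the 64x64 grayscale input, and the seven FER2013-style emotion labels are all assumptions chosen for the sketch, not details taken from the paper.

```python
# Illustrative per-frame FER sketch around a mini-Xception classifier.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
fer_model = load_model("mini_xception_fer.h5")  # hypothetical weight file

def emotions_per_frame(video_path: str):
    """Yield (frame_index, emotion_probabilities) for the largest detected face."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
            roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64)) / 255.0
            probs = fer_model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)[0]
            yield idx, dict(zip(EMOTIONS, probs.tolist()))
        idx += 1
    cap.release()
```

Running this separately on the patient and interviewer streams would yield two emotion time series whose interplay can then be visualized, in the spirit of the interaction analysis the abstract describes.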
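Finally, "decision fusion" means the modalities are combined at the score level rather than the feature level. A toy weighted-average rule is sketched below; the weights and the shared label set are illustrative assumptions, as the abstract does not state the fusion rule.

```python
# Toy late (decision-level) fusion: weighted average over shared label scores.
from typing import Dict

def fuse_decisions(audio_scores: Dict[str, float],
                   video_scores: Dict[str, float],
                   w_audio: float = 0.5) -> Dict[str, float]:
    labels = audio_scores.keys() & video_scores.keys()
    return {lbl: w_audio * audio_scores[lbl] + (1.0 - w_audio) * video_scores[lbl]
            for lbl in labels}

# Example: fuse per-modality emotion scores for one assessment segment.
fused = fuse_decisions({"sad": 0.7, "neutral": 0.3},
                       {"sad": 0.4, "neutral": 0.6})
```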

Details

Original language: English
Pages: 26-42
Number of pages: 17
Publication status: Published - 26 Nov 2023
Peer-reviewed: Yes
Externally published: Yes

External IDs

Scopus: 85178583539
Mendeley: c47dca0c-c35b-3d70-a8f9-cf96b6131c06

Keywords

  • biomarkers, depression, emotion recognition, multi-modal, prosody, social anxiety, telemedicine