Next generation AI: Emotional Artificial Intelligence based on audio | Dagmar Schuller

A person’s speech reveals much more than just the content of what they say. With the help of intelligent speech analysis, artificial intelligence can understand how humans communicate. Thanks to machine learning methods, the audio signal can be used to identify demographic features such as gender or age, as well as emotions, personality traits and health conditions.
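To make the idea concrete, here is a toy sketch of the first step in such a pipeline: cutting the audio signal into short frames and computing simple acoustic descriptors per frame. This is an illustrative assumption, not audEERING’s actual method; production systems (e.g. the openSMILE toolkit) extract thousands of such low-level descriptors before a trained classifier maps them to emotion or speaker attributes.

```python
import math

def short_time_energy(frame):
    """Mean squared amplitude of one frame (loudness correlate)."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign
    (a rough correlate of pitch and noisiness)."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def frame_features(signal, frame_len=400):
    """Split a signal into non-overlapping frames and compute
    (energy, zero-crossing rate) for each frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        feats.append((short_time_energy(frame), zero_crossing_rate(frame)))
    return feats

# Synthetic one-second "voiced" signal: a 200 Hz tone sampled at 16 kHz.
sr = 16000
signal = [math.sin(2 * math.pi * 200 * n / sr) for n in range(sr)]
feats = frame_features(signal)  # 40 frames of 25 ms each
```

In a real system these per-frame descriptors would be aggregated over an utterance and fed to a model that predicts labels such as emotion categories; the feature names and frame length above are illustrative choices.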

Going even further, Emotion AI does not stop at demographic features: AI’s engagement with human emotions is especially essential because emotions influence our associations, our capacity for abstraction and our intuition. Emotions affect our well-being, direct our attention and shape our decision-making. Multidimensional emotion models can recognize more than 50 emotions in real time, and via acoustic scene recognition the models also incorporate social context and behavioural patterns into their evaluation.

This provides the basis for a deeply meaningful interaction between humans and machines. In her lecture, Dagmar Schuller will shed light on how exactly the analysis works and what potential the technology holds for a wide variety of industries such as health, mobility and devices. The question is not: Will intelligent machines understand emotions? The question is: Can you call a machine intelligent if it can’t understand emotions?

Dagmar Schuller (audEERING GmbH)