
Emotion-recognition-in-conversations

This repo contains my work on emotion recognition and classification in conversations using multimodal deep learning.

Dataset

We use two publicly available datasets, which serve as benchmarks for evaluating our model.

MELD

The Multimodal EmotionLines Dataset (MELD) was created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances as EmotionLines, but adds the audio and visual modalities alongside the text. It comprises more than 1400 dialogues and 13000 utterances from the Friends TV series, with multiple speakers participating in each dialogue. Every utterance is labeled with one of seven emotions: anger, disgust, sadness, joy, neutral, surprise, or fear. MELD also provides a sentiment annotation (positive, negative, or neutral) for each utterance.

For more details about the dataset, please visit https://github.com/declare-lab/MELD#note
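A minimal sketch of how a MELD split can be loaded (assuming the train_sent_emo.csv layout shipped in the MELD repository, with Utterance, Speaker, Emotion, Sentiment, Dialogue_ID, and Utterance_ID columns; the path below is a placeholder):

```python
import pandas as pd

# Placeholder path -- MELD distributes each split as a CSV
# (e.g. train_sent_emo.csv); the column names below follow that layout.
df = pd.read_csv("MELD/train_sent_emo.csv")

# Group utterances into dialogues so a context-aware model sees each
# conversation as an ordered sequence of (speaker, text, emotion) turns.
dialogues = [
    list(zip(g["Speaker"], g["Utterance"], g["Emotion"]))
    for _, g in df.sort_values("Utterance_ID").groupby("Dialogue_ID")
]

print(f"{len(dialogues)} dialogues, {len(df)} utterances")
print(df["Emotion"].value_counts())  # seven-way emotion distribution
```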

IEMOCAP

The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal, multi-speaker database collected at the SAIL lab at USC. It contains approximately 12 hours of audiovisual data, including video, speech, facial motion capture, and text transcriptions. It consists of dyadic sessions in which actors perform improvisations or scripted scenarios specifically selected to elicit emotional expressions. The database is annotated by multiple annotators with categorical labels, such as anger, happiness, sadness, and neutrality, as well as dimensional labels such as valence, activation, and dominance. The detailed motion-capture information, the interactive setting used to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases for the study and modeling of multimodal, expressive human communication.

For more details about the dataset, please visit https://sail.usc.edu/iemocap/
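A minimal parsing sketch for the categorical and dimensional labels, assuming the tab-separated layout of the EmoEvaluation *.txt files (one line per utterance: a [start - end] time span, the utterance ID, the categorical label, and a [valence, activation, dominance] triple); verify the exact format against your copy of the corpus:

```python
import re

# Assumed per-utterance line format, e.g.:
# [6.2901 - 8.2357]\tSes01F_impro01_F000\tneu\t[2.5000, 2.5000, 2.5000]
LINE = re.compile(
    r"\[(?P<start>[\d.]+) - (?P<end>[\d.]+)\]\t"
    r"(?P<utt>\S+)\t(?P<emotion>\w+)\t"
    r"\[(?P<val>[\d.]+), (?P<act>[\d.]+), (?P<dom>[\d.]+)\]"
)

def parse_emo_evaluation(path):
    """Yield (utterance_id, categorical_label, (valence, activation, dominance))."""
    with open(path) as f:
        for line in f:
            m = LINE.match(line)
            if m:  # skip header and comment lines that don't match
                yield (
                    m["utt"],
                    m["emotion"],
                    (float(m["val"]), float(m["act"]), float(m["dom"])),
                )
```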

Papers referenced

  • COSMIC
  • DialogueRNN
  • bc-LSTM (see the sketch below)
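
For orientation, here is a minimal PyTorch sketch of the bc-LSTM idea from the list above: a bidirectional LSTM runs over per-utterance feature vectors so that each utterance is classified in its conversational context. The feature dimension, hidden size, and the use of precomputed fused multimodal features are illustrative assumptions, not this repository's actual configuration.

```python
import torch
import torch.nn as nn

class ContextLSTM(nn.Module):
    """bc-LSTM-style baseline: a bidirectional LSTM over the sequence of
    utterance feature vectors in a dialogue, followed by a per-utterance
    emotion classifier. Sizes are illustrative placeholders."""

    def __init__(self, feat_dim=100, hidden=64, n_emotions=7):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_emotions)

    def forward(self, utterance_feats):  # (batch, n_utterances, feat_dim)
        context, _ = self.lstm(utterance_feats)   # contextualized utterances
        return self.classifier(context)           # per-utterance emotion logits

# Toy usage: one dialogue of 5 utterances with 100-d fused features.
logits = ContextLSTM()(torch.randn(1, 5, 100))
print(logits.shape)  # torch.Size([1, 5, 7])
```

DialogueRNN and COSMIC refine this basic context model with speaker-aware recurrent states and commonsense-augmented states, respectively.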
