
1
PROGRESS ON EMOTION RECOGNITION
  • J G Taylor
  • N Fragopanagos
  • King's College
  • London

2
KCL WORK IN ERMIS
  • Analysis of emotion vs. cognition in the human brain
    (→ simulations of emotion/attention paradigms)
  • → emotion recognition architecture ANNA
  • ANNA hidden layer carries the emotion state, with feedback
    control for attention (IMC)
  • Learning laws for ANNA developed
  • ANNA fuses all modalities or uses only one
  • HUMAINE WP3 & WP4

3
BASIC BRAIN EMOTION CIRCUIT
  • Valence in amygdala & OBFC
  • Attention in parietal & PFC
  • Interaction in ACG

4
  • SIMPLIFIED ARCHITECTURE OF EMOTIONAL/COGNITIVE
    PROCESSING IN THE BRAIN


5
DETAILED ARCHITECTURE FOR FACES CLASSIFICATION
(diagram; output label: gender)
6
BASIC ERMIS EMOTION RECOGNITION ARCHITECTURE
(diagram: feature vector inputs → emotion state as hidden layer → output as recognised emotional state, with an attention control system providing feedback)
7
ANNA
  • Assume linear output
  • Hidden layer response (y)
  • IMC node response (z)
  • Then solve self-consistent equations for (y, z) for each
    training input by relaxation (a sketch follows below)
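The equations on the original slide are not preserved in this transcript. As an illustration only, here is a minimal Python sketch of what a relaxation (fixed-point) solve for the coupled hidden-layer activity y and IMC-node activity z could look like; the weight matrices, sigmoid nonlinearity, and update rules are assumptions, not the published ANNA equations.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def relax_state(x, W_in, W_imc, W_fb, n_iter=200, tol=1e-6):
    """Illustrative relaxation solve for coupled activities (y, z).

    y: hidden-layer (emotion-state) activity
    z: IMC (attention-control) node activity
    All weights and update equations are hypothetical placeholders.
    """
    y = np.zeros(W_in.shape[0])
    z = np.zeros(W_imc.shape[0])
    for _ in range(n_iter):
        # Hidden-layer response depends on the input and on IMC feedback.
        y_new = sigmoid(W_in @ x + W_fb @ z)
        # IMC node response depends on the current hidden-layer state.
        z_new = sigmoid(W_imc @ y_new)
        if np.max(np.abs(y_new - y)) < tol and np.max(np.abs(z_new - z)) < tol:
            return y_new, z_new
        y, z = y_new, z_new
    return y, z

# Example usage with random placeholder weights and a 10-D feature vector.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
W_in = rng.normal(size=(5, 10))   # input -> hidden
W_imc = rng.normal(size=(3, 5))   # hidden -> IMC
W_fb = rng.normal(size=(5, 3))    # IMC -> hidden (attention feedback)
y, z = relax_state(x, W_in, W_imc, W_fb)
```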
8
NATURE OF ANNA
  • Handles both unimodal and multi-modal data (input
    vector x of arbitrary dimension, not too large)
  • Needs consistent input and output data x(t), OUT(t), with t
    specified for both x and OUT (activation, evaluation); see the
    pairing sketch after this list
  • Uses the SALAS database (450 tunes)
  • from QUB (Roddie/Ellie/Cate)
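As a minimal sketch only of the time-stamped pairing described above: each feature vector x(t) is aligned with a target OUT(t) in (activation, evaluation) space. The class name, field names, and nearest-timestamp alignment rule are assumptions for illustration, not part of ANNA.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrainingPair:
    t: float                  # shared timestamp (seconds)
    x: List[float]            # feature vector (prosody/face/lexical)
    out: Tuple[float, float]  # target (activation, evaluation), e.g. FEELTRACE

def align(features, targets, max_gap=0.5):
    """Pair each feature frame (t, x) with the nearest target (t, out).

    Frames with no target within max_gap seconds are dropped, so every
    training pair has t specified for both x and OUT.
    """
    pairs = []
    for t_x, x in features:
        t_out, out = min(targets, key=lambda item: abs(item[0] - t_x))
        if abs(t_out - t_x) <= max_gap:
            pairs.append(TrainingPair(t=t_x, x=x, out=out))
    return pairs

# Tiny usage example with made-up frames and targets.
features = [(0.0, [1.0, 2.0]), (0.5, [1.5, 2.5])]
targets = [(0.1, (0.2, -0.3)), (0.6, (0.4, 0.1))]
print(align(features, targets))
```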

9
UNIMODAL RESULTS
  • Can use numerous representations of emotion: extremes, or
    continuous in n dimensions
  • ANNA → FEELTRACE output (continuous 2-D)
  • Trained on unimodal for prosody
  • First look at word content

10
Text Post-Processing Module
  • Prof. Whissell compiled the Dictionary of Affect in
    Language (DAL)
  • Mapping of 9000 words → (activation, evaluation),
    based on students' assessments
  • Take words from meaningful segments obtained by
    pause detection → (activation, evaluation) space (see the
    sketch after this list)
  • But humans use context to assign emotional
    content to words
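A minimal sketch of a DAL-style lookup, assuming a toy dictionary: each word of a pause-delimited segment is looked up and the (activation, evaluation) values are averaged. The entries and function names below are invented placeholders; the real DAL covers roughly 9000 words.

```python
# Hypothetical miniature affect dictionary: word -> (activation, evaluation).
DAL = {
    "happy": (0.7, 0.8),
    "sad": (-0.4, -0.7),
    "angry": (0.8, -0.6),
    "calm": (-0.5, 0.5),
}

def segment_affect(segment):
    """Map a pause-delimited word segment to a point in
    (activation, evaluation) space by averaging dictionary hits."""
    hits = [DAL[w] for w in segment.lower().split() if w in DAL]
    if not hits:
        return None  # no emotionally rated words spotted
    activation = sum(a for a, _ in hits) / len(hits)
    evaluation = sum(e for _, e in hits) / len(hits)
    return activation, evaluation

print(segment_affect("I feel happy but a bit sad"))  # ≈ (0.15, 0.05)
```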

11
Text Post-Processing Module (SALAS data)
  • Table 1. Quadrant match for normal text (full DAL).
    Participant:        P1    P2    P3    P4    P9    P12   All
    Quadrant match (%): 21.4  12.5  21.4  30.4  25.0  19.6  16.1
  • Table 2. Quadrant match for scrambled text (full DAL).
    Participant:        P5    P6    P7    P8    P10   P11   All
    Quadrant match (%): 07.1  23.2  25.0  32.1  23.2  21.4  21.4
  • Table 3. Standard deviation of participants' assessments for
    normal and scrambled text (average over all passages assessed).
                Normal  Scrambled
    Evaluation  1.24    1.45
    Activation  1.55    1.73
  • Table 4. Quadrant match (%) averaged over participant groups
    for normal text and scrambled text when the threshold for DAL
    range is varied.
    Threshold       0.0   0.25  0.5   0.75
    Normal text     16.1  16.0  12.5  16.4
    Scrambled text  21.4  21.4  19.6  21.8
  • The higher the threshold, the more strongly emotionally rated a
    word must be before it is spotted (a sketch of the quadrant-match
    scoring follows below)

Conclusion: further context/semantics needed
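The quadrant-match scores in the tables above compare where the text-derived point and the human (FEELTRACE) point fall in activation-evaluation space. Below is a minimal sketch of such scoring, assuming a simple same-quadrant rule (the exact ERMIS scoring procedure is not given in this transcript); the half-plane variant used later for activation-only output is also included. Function names are illustrative only.

```python
def quadrant(activation, evaluation):
    """Quadrant of activation-evaluation space, encoded by the signs."""
    return (activation >= 0, evaluation >= 0)

def quadrant_match(predicted, target):
    """Percentage of items whose predicted (activation, evaluation)
    point lies in the same quadrant as the target point."""
    hits = sum(quadrant(*p) == quadrant(*t) for p, t in zip(predicted, target))
    return 100.0 * hits / len(target)

def half_plane_match(predicted, target):
    """Percentage of items where the sign of predicted activation
    agrees with the sign of the target activation."""
    hits = sum((p[0] >= 0) == (t[0] >= 0) for p, t in zip(predicted, target))
    return 100.0 * hits / len(target)

# Tiny usage example with made-up points.
pred = [(0.3, 0.5), (-0.2, 0.1), (0.4, -0.6)]
true = [(0.1, 0.2), (0.3, 0.4), (0.5, -0.5)]
print(quadrant_match(pred, true), half_plane_match(pred, true))
```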
12
Correlational analysis of ASSESS features
  • Correlational analysis between 450 ASSESS features and
    FeelTrace → (see the feature-ranking sketch after this list)
  • ASSESS features correlate more highly with
    activation
  • Similar top ranking features for 3 out of 4
    FeelTracers (but still differences)
  • Different top ranking features for different
    SALAS subjects
  • → Is there a male/female trend? Difficult to say
    - insufficient data
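A minimal sketch of this kind of correlational ranking, assuming plain Pearson correlation per feature column (the actual ASSESS features and FeelTrace traces are not reproduced here); the data below are random placeholders.

```python
import numpy as np

def rank_features_by_correlation(features, trace, top_k=10):
    """Rank feature columns by |Pearson correlation| with a 1-D trace
    (e.g. FeelTrace activation); return the top_k column indices.

    features: array of shape (n_samples, n_features)
    trace:    array of shape (n_samples,)
    """
    corrs = np.array([np.corrcoef(features[:, j], trace)[0, 1]
                      for j in range(features.shape[1])])
    order = np.argsort(-np.abs(corrs))
    return order[:top_k], corrs[order[:top_k]]

# Example with random placeholder data standing in for 450 ASSESS features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 450))
activation = X[:, 7] * 0.8 + rng.normal(scale=0.5, size=200)
top_idx, top_corr = rank_features_by_correlation(X, activation)
print(top_idx[:3], np.round(top_corr[:3], 2))
```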

13
ANNA on top correlated ASSESS features
  • Quadrant match using the top 10 activation features plus the
    top 10 evaluation features, with an (activation, evaluation)
    output space

14
ANNA on top correlated ASSESS features
  • Half-plane match using the top 10 activation features
    and an activation-only output space

15
PRESENT SITUATION OF ANNA MULTIMODAL
  • Time-stamped data now becoming available for
    lexical (ILSP) and face (NTUA) streams
  • Expect to have results in about 1 month for
    recognition for fused modalities
    (faces/prosody/words)

16
CONCLUSIONS
  • UNIMODAL: ANNA on prosody OK (especially
    activation)
  • MULTIMODAL: soon to be done
  • On semi-realistic data (SALAS, QUB)
  • Future work
  • 1) analysis of detailed results
  • 2) insert temporality in ANNA

17
QUESTIONS
  • How to handle variations across experiencers and
    across FEELTRACERS?
  • How to incorporate expert knowledge?
  • How to combine recognition across models?
  • Coding of emotions as dimensional representations or as
    dissociated states (sad in AMYG vs. angry in OBFC)?
  • Nature of emotions as goal/reward assessment
    (frustration → anger, impossible → sadness, etc.;
    brain-based)?