1
Part 6 HMM in Practice
  • CSE717, SPRING 2008
  • CUBS, Univ at Buffalo

2
Practical Problems with HMMs
  • Computation with Probabilities
  • Configuration of HMM
  • Robust Parameter Estimation (Feature
    Optimization, Tying)
  • Efficient Model Evaluation (Beam Search, Pruning)

3
Dimension Reduction
  • Curse of Dimensionality
  • Principal Component Analysis (PCA)
  • Linear Discriminant Analysis (LDA)
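A minimal sketch of both projections with scikit-learn, assuming frame-level
feature vectors X and (for LDA) per-frame character labels y; the shapes,
component counts, and random data below are illustrative:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))        # frame-level feature vectors (illustrative)
y = rng.integers(0, 26, size=1000)     # hypothetical character labels for LDA

# PCA: unsupervised, keeps the directions of largest variance
X_pca = PCA(n_components=20).fit_transform(X)

# LDA: supervised, keeps directions that separate the classes
# (at most n_classes - 1 components)
X_lda = LinearDiscriminantAnalysis(n_components=20).fit_transform(X, y)

In practice the projection is estimated on the training frames only and then
applied to every frame before HMM training and decoding.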

4
Tying
  • Model Tying
  • State Tying
  • Mixture Tying

(Figure: the words "dog" and "bag" built from character models, with the shared
"g" model illustrating tying)
  • Mixture tying: cluster the training feature vectors into a shared Gaussian
    codebook, assign equal weight to every cluster in each state, then fix the
    Gaussians and re-estimate the mixture weights using EM
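The mixture-tying recipe above comes down to building one shared Gaussian
codebook and re-estimating only the per-state weights. A minimal sketch, where
the pooled frames X_all, the per-state frames, and the codebook size K are
illustrative assumptions:

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_all = rng.normal(size=(2000, 12))        # all training frames (illustrative)
state_frames = rng.normal(size=(300, 12))  # frames aligned to one HMM state

# 1. Cluster the feature vectors into a shared (tied) Gaussian codebook
K = 16
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X_all)
means = km.cluster_centers_
covs = [np.cov(X_all[km.labels_ == k].T) + 1e-3 * np.eye(12) for k in range(K)]

# 2. Start the state with equal weight on every codebook Gaussian
w = np.full(K, 1.0 / K)

# 3. Fix the Gaussians; re-estimate only this state's weights with EM
dens = np.column_stack(
    [multivariate_normal(means[k], covs[k]).pdf(state_frames) for k in range(K)]
)                                          # (n_frames, K) component densities
for _ in range(10):
    resp = dens * w                        # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                  # M-step: updated mixture weights

Every state shares the same means and covariances; only w differs from state to
state, which is what makes the mixtures "tied".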
5
Beam Search
  • Although the Viterbi algorithm has linear complexity in the length of the
    signal, it still has quadratic complexity in the number of model states
  • Generic Viterbi: delta_t(j) = max_i [delta_{t-1}(i) a_ij] b_j(o_t), where
    the maximization runs over all states i
  • Beam search: after each frame t, keep only the set of active states at time
    t, A_t = {j : delta_t(j) >= theta * max_k delta_t(k)}, and expand only those
    states at frame t+1 (see the sketch below)
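A minimal log-domain sketch of Viterbi with beam pruning; the function name,
dense matrix layout, and beam width are illustrative assumptions, not the
lecture's implementation:

import numpy as np

def viterbi_beam(log_pi, log_A, log_B, beam=10.0):
    """log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (T, S) per-frame emission log-likelihoods.
    At every frame, only states scoring within `beam` of the best survive."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    psi = np.zeros((T, S), dtype=int)
    active = np.flatnonzero(delta >= delta.max() - beam)   # active set at t = 0
    for t in range(1, T):
        # expand only the active states; pruned states stay at -inf
        scores = np.full((S, S), -np.inf)
        scores[active] = delta[active, None] + log_A[active]
        psi[t] = scores.argmax(axis=0)                      # best predecessors
        delta = scores.max(axis=0) + log_B[t]
        active = np.flatnonzero(delta >= delta.max() - beam)
    path = [int(delta.argmax())]                            # backtrack best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(delta.max())

With the beam, the per-frame cost drops from O(S^2) to roughly O(|active| * S),
at the risk of pruning the globally best path when the beam is too tight.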
6
Part 7 System
  • CSE717, SPRING 2008
  • CUBS, Univ at Buffalo

7
A Handwriting Recognition System Using HMM
(Diagram: the system pipeline)
  • Training: word images → Feature Extraction (Sliding Window Features, PCA/LDA
    optional) → Embedded Training → Char HMMs → Word HMMs
  • Recognition: Feature Extraction → Viterbi Decoding against the Word HMMs,
    with an n-gram language model
8
Feature Extraction
  • Divide the word image into vertical frames
  • Each frame is further divided into a number of
    cells
  • A fixed number of features are computed per cell
  • Frame level feature vectors are treated as a
    multivariate time series

(Figure: frame feature vectors f1, f2, ..., ft read left to right across the
word image)
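A minimal sketch of such a sliding-window extractor; the per-cell
foreground-pixel density used as the cell feature, and the frame and cell
counts, are illustrative assumptions:

import numpy as np

def sliding_window_features(image, frame_width=4, n_cells=8):
    """Split a binary word image (rows x cols, foreground = 1) into vertical
    frames of `frame_width` columns, split each frame into `n_cells` cells,
    and use the foreground density of each cell as its feature.
    Returns (n_frames, n_cells): one feature vector per frame, left to right."""
    h, w = image.shape
    edges = np.linspace(0, h, n_cells + 1).astype(int)
    frames = []
    for x in range(0, w - frame_width + 1, frame_width):
        frame = image[:, x:x + frame_width]
        frames.append([frame[edges[i]:edges[i + 1]].mean()
                       for i in range(n_cells)])
    return np.array(frames)

word_image = (np.random.default_rng(0).random((32, 120)) > 0.7).astype(float)
F = sliding_window_features(word_image)    # shape (30, 8): f1 ... ft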
9
Embedded Training in the HMM
(Diagram: Features → Embedded Training → Char HMMs → Word HMMs)
  • Feature vectors are represented by Gaussian Mixture Models (GMMs)
  • Embedded training with the Baum-Welch algorithm: shared character HMMs are
    trained directly from word images
  1. Initialize character-level HMMs
  2. Build word-level composite HMMs by concatenating character HMMs (a sketch
     follows this list)
  3. Align the training word images to the word models (Baum-Welch)
  4. Re-estimate the GMMs and state transitions of each character HMM; go to
     step 3
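A minimal sketch of step 2: concatenating character HMMs into a word HMM. The
three-state left-to-right topology and the self-loop probability are
illustrative assumptions; emission GMMs are omitted, since in the full system
each word state reuses the GMM of its character state, which is what shares the
re-estimated parameters across all words containing that character:

import numpy as np

def make_char_hmm(n_states=3, stay=0.6):
    """A tiny left-to-right character HMM: every state loops with prob `stay`
    and moves forward with 1 - `stay`; the last state's forward mass is the
    probability of leaving the character model."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = stay
        if i + 1 < n_states:
            A[i, i + 1] = 1.0 - stay
    return A

def concat_word_hmm(char_hmms):
    """Concatenate character transition matrices into one word-level matrix:
    the exit mass of each character's last state feeds the next character's
    entry state."""
    total = sum(A.shape[0] for A in char_hmms)
    W = np.zeros((total, total))
    offset = 0
    for idx, A in enumerate(char_hmms):
        n = A.shape[0]
        W[offset:offset + n, offset:offset + n] = A
        if idx + 1 < len(char_hmms):
            W[offset + n - 1, offset + n] = 1.0 - A[n - 1].sum()
        offset += n
    return W

char_models = {c: make_char_hmm() for c in "dog"}          # shared char models
W_dog = concat_word_hmm([char_models[c] for c in "dog"])   # 9 x 9 word matrix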

10
Composite Word Model
(Figure: the word model for "dog" is the left-to-right concatenation of the
"d", "o", and "g" character HMMs)