1
Mixture of Experts Networks Applied to
Narrowband SONAR Spectral Estimation
  • David L. Edgerton
  • 22 April 1999

2
Overview
  • SONAR
  • Spectral Estimation
  • Mixture of Experts Architecture
  • Application
  • Current Status

3
SONAR
  • SONAR - Sound Navigation and Ranging
  • An array of hydrophones is used to record
    acoustic data from the surrounding water.
  • A target can oftentimes be identified by the
    frequencies it generates.

4
SONAR (cont.)
  • Unfortunately, the ocean is very noisy due to
    biologics, waves, etc. This noise is acceptably
    modeled as Gaussian white noise.
  • The acoustic data is analyzed via an FFT to
    estimate which frequencies are present in the
    water.
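  • A rough sketch of this FFT analysis (MATLAB; the
    sampling rate fs, the snapshot x, and an even record
    length N are assumptions):

    % One-sided periodogram of a hydrophone snapshot via the FFT
    N   = length(x);                        % x: real-valued acoustic samples
    X   = fft(x);
    Pxx = abs(X(1:N/2+1)).^2 / (fs*N);      % power spectral density estimate
    f   = (0:N/2)' * fs/N;                  % frequency axis in Hz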

5
Spectral Estimation
  • While the FFT works well, there are several
    alternative spectral estimators that may work
    better at times, such as
  • Bartlett's Method
  • Welch's Method
  • Multitaper Method
  • Maximum Entropy Method
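  • A hedged illustration of these estimators using
    standard Signal Processing Toolbox calls (the segment
    length, FFT size, AR order, and time-bandwidth product
    are assumptions; Bartlett's method is approximated by
    Welch's method with a rectangular window and no
    overlap, and pyulear stands in for the maximum
    entropy estimate):

    L = 256;  nfft = 1024;                            % assumed parameters
    [Pb, f] = pwelch(x, rectwin(L), 0,   nfft, fs);   % Bartlett: non-overlapping segments
    [Pw, f] = pwelch(x, hamming(L), L/2, nfft, fs);   % Welch: overlapping, windowed segments
    [Pm, f] = pmtm(x, 4, nfft, fs);                   % multitaper, time-bandwidth product 4
    [Pe, f] = pyulear(x, 30, nfft, fs);               % AR (maximum entropy type), order 30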

6
Goal
  • Given several spectral estimators to choose from,
    apply each only when it is the most effective.
  • While on-line, automatically switch estimators
    when necessary.
  • This task may be accomplished via a Mixture of
    Experts (MOE) framework regulated by a gating
    network.

7
Mixture of Experts
  • The MOE framework consists of several expert
    systems and a gating network.
  • Each expert is given an input vector and produces
    an output vector.
  • A gating network is also given the input vector
    and mediates the outputs of each expert.
  • The final output vector is the sum of the
    weighted outputs from the experts.
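  • A minimal sketch of the combination step (MATLAB; the
    variable names are assumptions):

    % Y: (outputDim x nExperts) expert outputs for one input
    % g: (nExperts x 1) gating weights, g >= 0, sum(g) == 1
    y = Y * g;                              % weighted sum of the expert outputs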

8
A MOE Network
9
A Typical Gating Network
10
Gating Networks (cont.)
  • The gating network is a single-layer network with
    a softmax output non-linearity
  • g_i = exp(u_i) / sum_j exp(u_j)
  • where u_i = v_i' z is the i-th linear output of the
    gating layer for gating input z and weight vector v_i.

11
Gating Networks (cont.)
  • The softmax nonlinearity ensures that the gating
    weights are valid mixing proportions
  • 0 <= g_i <= 1 and sum_i g_i = 1
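  • A minimal sketch of such a gate (V is an assumed
    gating weight matrix, z the gating input):

    u = V * z;                              % linear layer: one output per expert
    g = exp(u - max(u));                    % subtract max(u) for numerical stability
    g = g / sum(g);                         % softmax: 0 <= g_i <= 1, sum(g) == 1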

12
Localized Gating Networks
  • Instead of using the softmax function, a
    localized gating network can divide the input
    space into soft hyper-ellipsoids.
  • This limits the influence of each expert to a
    localized region of the input space.

13
Localized Gating Networks (cont.)
  • The localized gating network uses the following
    form
  • g_i(z) = alpha_i P(z | i) / sum_j alpha_j P(z | j)
  • where P(z | i) is a Gaussian density with mean mu_i
    and covariance Sigma_i, and the alpha_i are prior
    weights on the experts.
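  • A sketch of this gate with diagonal covariances (the
    means mu, variances sig2, and priors alpha are assumed
    parameters learned during training):

    nExp = size(mu, 2);                     % mu, sig2: (d x nExperts), alpha: (nExperts x 1)
    p = zeros(nExp, 1);
    for i = 1:nExp
        d2   = sum(((z - mu(:,i)).^2) ./ sig2(:,i));             % squared Mahalanobis distance
        p(i) = alpha(i) * exp(-0.5*d2) / sqrt(prod(2*pi*sig2(:,i)));
    end
    g = p / sum(p);                         % soft hyper-ellipsoidal gating weights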

14
Other MOE Applications
  • Function approximation
  • Adaptive Kalman Filtering
  • Each expert is a Kalman filter with a different
    set of parameters.
  • Used for interplanetary orbit determination.

15
Transforming the Input for the Gating Network
  • Typically, the same input vector is used by both
    the estimators and the gating network.
  • In this case, the input vector is an acoustic
    signal, so the gating network cannot readily
    extract information from it to weight the outputs.
  • Instead of the actual input vector, the gating
    network is given the FFT of the input vector.
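  • One possible reduction (a sketch; the band of interest
    and the averaging factor are assumptions; the 50-input
    size matches the implementation slide below):

    X    = abs(fft(x));                     % magnitude spectrum of the raw snapshot
    band = X(2:501);                        % assumed band of interest (skip the DC bin)
    z    = mean(reshape(band, 10, 50), 1)'; % average 10 bins at a time -> 50 gating inputs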

16
Application
  • Each expert system is a spectral estimator.
  • The input vector is a series of samples from the
    hydrophone array.
  • Each estimator produces an estimate of the power
    spectral density.
  • The gating network regulates the weights on the
    estimator outputs.

17
Simulated Signal
  • Data was simulated in MATLAB by adding several
    sine waves of varying amplitudes to random noise.
  • Several combinations of signals were simulated,
    such as a single strong signal, a single weak
    signal, or weak signals near strong signals.

18
Simulated Signals (cont.)
  • First set  single strong tone (F = 70, A = 9)
  • Second set  single weak tone (F = 70, A = 0.9)
  • Third set  masking (F = 70, 70.5  A = 9, 1)
  • Fourth set  double masking (F = 70, 70.5, 71  A = 9, 1, 9)
  • Noise at about 10 dB was also added.
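  • A sketch of this simulation (MATLAB; the sampling rate
    and record length are assumptions, and the noise here
    is unit-variance rather than calibrated to the 10 dB
    figure; the fourth set is shown):

    fs = 200;  t = (0:fs*60-1)'/fs;             % assumed: 200 Hz sampling, 60 s record
    F  = [70 70.5 71];  A = [9 1 9];            % double-masking tone set
    x  = sin(2*pi*t*F) * A' + randn(size(t));   % sum of tones plus Gaussian white noise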

19
Data Slides Here
20
Training
  • For training, the length of the output vectors has
    to be severely reduced.
  • The outputs must each be modified to the same,
    reasonable length, and all must cover the same
    frequency range.
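  • One way to do this (a sketch, not the author's code):
    interpolate each estimator's PSD onto a common, short
    frequency grid (the grid choice is an assumption):

    fgrid = linspace(69, 72, 50);           % assumed common band, 50 points
    y1 = interp1(f1, P1, fgrid);            % (f1,P1), (f2,P2), ... from the estimators
    y2 = interp1(f2, P2, fgrid);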

21
Training (cont.)
  • During training/testing, the desired output
    vector is simulated to be the ideal power
    spectral density.
  • The ideal output vector will also be the same
    length as the modified expert output vectors.

22
Implementation
  • A program provided by Dr. Ghosh will be used to
    implement the MOE network.
  • The program reads training/validation inputs from
    a data file.
  • The program will be modified to include each of
    the estimators, and will generate outputs from
    each.

23
Implementation (cont.)
  • A localized gating network will be used with a
    single layer network of 50 (?) inputs and 4
    softmax outputs.
  • As previously explained, the 50 inputs will come
    from the FFT of the actual input vector.

24
Interpretation of Results
  • A gating weight history plot will show if the MOE
    network switches estimators when the frequency
    content changes.
  • The gating weight history plot should also
    clarify which estimator works best under certain
    circumstances.

25
Current Status
  • All data has been simulated.
  • Model has been designed.
  • The training/validation phase has not yet begun,
    as the MOE program is still being modified.

26
Biggest Problems (so far)
  • The input space for the gating network.
  • Quantitatively determining error of some sort
    during training.
  • Several estimators appear to produce very good
    results under the same circumstances.

27
References
  • Chaer, W. S., Bishop, R. H., and Ghosh, J. (1997).
    A Mixture-of-Experts Framework for Adaptive Kalman
    Filtering. IEEE Transactions on Systems, Man, and
    Cybernetics, Part B, 27 (June 1997), 452-464.
  • Chaer, W. S., Bishop, R. H., and Ghosh, J. (1998).
    Hierarchical Adaptive Kalman Filtering for
    Interplanetary Orbit Determination. IEEE Transactions
    on Aerospace and Electronic Systems, Vol. 34, No. 3,
    July 1998.
  • Ramamurti, V. and Ghosh, J. (1998). On the Use of
    Localized Gating in Mixture of Experts Networks.
    In Proc. SPIE Conf. on Applications and Science of
    Computational Intelligence, SPIE Proc., Orlando,
    April 1998.
  • Ramamurti, V. (1997). Structural Adaptation and
    Generalization in Modular Neural Networks.
    Dissertation, University of Texas at Austin.

28
Questions or Suggestions?