1
Neural Networks
  • Lecture 3
  • By Rob Buxton

2
Unsupervised Neural Networks
  • Last lecture
  • What are unsupervised neural networks?
  • How do they work?
  • What sort of problems are they used on?

3
Last lecture
  • We looked at artificial neurons
  • Supervised training
  • Perceptrons

4
Supervised neural networks
  • Supervised networks train with the aid of a
    teacher
  • An input vector, e.g. (0.5, 0.4, ...), has an
    associated target value
  • Weight updates are made according to the
    perceived error

5
Unsupervised neural networks
  • In real life it is not always possible to get
    training data with target values
  • Sometimes we need a system that can learn in
    different ways, grouping items together using
    their underlying similarities

6
Self Organising Map
  • When we considered supervised neural networks we
    studied the perceptron
  • The most popular type of unsupervised network is
    called the Self Organising Map (SOM)
  • We are going to spend the rest of this lecture
    studying this type of network in detail

7
SOM
[Diagram: a lattice of output nodes connected to the
input nodes (2-d input)]
8
Vector Quantisation
  • SOMs were developed by Teuvo Kohonen in 1981
  • They provide a way of representing very complex
    multi-dimensional data in much lower-dimensional
    spaces (usually two dimensions, hence 'map')

9
Feature Maps
  • The output map is created during the training
    process
  • It automatically organises itself to represent
    the training data in such a way that like items
    are placed together

10
Training process
  1. Initialise the weights
  2. Select a vector at random from the training set
     and present it to the lattice
  3. Test every node's weight vector to see which
     one is closest to the input vector; a winner
     is chosen
  4. Update the winning node's weights, as well as
     its neighbours'
  5. Repeat from step 2 for N iterations (a code
     sketch follows below)
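The slides contain no code, but a minimal NumPy sketch
of this loop might look as follows; the lattice size,
learning rate, decay schedule, and iteration count are
illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
map_h, map_w, dim = 10, 10, 2                     # lattice and input sizes (assumed)
weights = rng.random((map_h, map_w, dim)) * 0.1   # 1. small random initial weights
data = rng.random((500, dim))                     # stand-in training set
n_iters = 1000
lr0 = 0.5                                         # assumed starting learning rate
radius0 = max(map_h, map_w) / 2                   # assumed starting neighbourhood radius

ii, jj = np.indices((map_h, map_w))               # lattice coordinates of every node
for t in range(n_iters):
    x = data[rng.integers(len(data))]             # 2. pick a training vector at random
    dists = np.linalg.norm(weights - x, axis=2)   # 3. distance from x to every node
    wi, wj = np.unravel_index(dists.argmin(), dists.shape)  # the winner
    lr = lr0 * np.exp(-t / n_iters)               # learning rate decays over time
    radius = radius0 * np.exp(-t / n_iters)       # neighbourhood shrinks over time
    lattice_d2 = (ii - wi) ** 2 + (jj - wj) ** 2
    theta = np.exp(-lattice_d2 / (2 * radius ** 2))    # neighbourhood function
    weights += lr * theta[..., None] * (x - weights)   # 4. update winner and neighbours
                                                       # 5. loop repeats for N iterations
```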

11
In Detail
  • It's important that we understand how and why
    things are done in a certain way in the training
    algorithm
  • Let's look at each stage in some detail

12
Initialisation
  • At the start of the training process it is normal
    to initialise all the weights at small random
    values
  • This is done so that the map develops naturally
    from scratch
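As a concrete illustration (the 10x10 lattice of 2-d
weight vectors is an assumed shape, not one from the
lecture):

```python
import numpy as np

# Small random values in [0, 0.1); no structure is imposed, so any
# ordering that appears in the final map emerges purely from training.
weights = np.random.default_rng(0).uniform(0.0, 0.1, size=(10, 10, 2))
```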

13
Winning node
  • The best, or winning, node is selected
    according to its weight vector's similarity to
    the input vector
  • How is the similarity measured?
  • Euclidean distance; we've seen this before

14
Euclidean Distance in SOM
The distance between the input vector X and a weight
vector Y is

d(X, Y) = sqrt( (X1 - Y1)^2 + (X2 - Y2)^2 + ... + (Xn - Yn)^2 )

where n is the number of components; every pair of
components is compared and the total is taken.
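In code, the distance and the winner search might look
like this sketch (the vectors shown are made-up
examples):

```python
import numpy as np

def euclidean_distance(x, y):
    # d(X, Y): every pair of components is compared and the total taken
    return np.sqrt(np.sum((x - y) ** 2))

x = np.array([0.5, 0.4])                    # input vector X
w = np.array([[0.1, 0.9], [0.6, 0.3]])      # weight vectors Y, one per node
winner = min(range(len(w)), key=lambda i: euclidean_distance(x, w[i]))
```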
15
Winner takes all!
  • The SOM training algorithm is sometimes described
    as a 'winner takes all' process
  • This refers to the fact that the closest or
    winning node is rewarded by having its weights
    updated
  • Its closest neighbours also get some benefit

16
Neighbourhood
  • The neighbourhood defines the nodes that are
    affected by the weight update

[Diagram: the winning node at the centre of its
neighbourhood on the lattice]
17
Neighbourhood
  • It is important to understand that the
    neighbourhood shrinks as the training progresses
  • At the start of the process the neighbourhood may
    cover most of the map
  • At the end only the winner and the closest nodes
    will be affected at all!
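One common way to model this shrinking neighbourhood
is a Gaussian with a decaying radius; the form and the
starting radius below are assumptions, not values given
in the lecture:

```python
import numpy as np

def neighbourhood(lattice_dist, t, n_iters, radius0=5.0):
    # The radius decays each iteration, so early on most of the map sits
    # inside the neighbourhood, while at the end only the winner and its
    # closest nodes receive a noticeable update.
    radius = radius0 * np.exp(-t / n_iters)
    return np.exp(-lattice_dist ** 2 / (2 * radius ** 2))
```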

18
Learning rate
  • Another important variable that we need to
    consider is the learning rate
  • We have seen this before!
  • The purpose of the learning rate is to regulate
    the amount by which weights can be changed

19
Let's start to put all this together
[Flow diagram: the training data and the weight vectors
feed a loop: pick a vector at random, compare the
training vector with all weight vectors, pick the
winner, update the winner, and update the neighbours]
20
Updates
  • The equation describing the weight updates is
    W(n+1) = W(n) + LR × N × Diff

    where W(n) is the previous weight, LR is the
    learning rate, N is the neighbourhood function,
    and Diff is the difference between the input
    vector and the weight vector
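As a one-line sketch of that equation in code:

```python
def update(w, x, lr, theta):
    # W(n+1) = W(n) + LR * N * (X - W(n))
    return w + lr * theta * (x - w)
```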
21
Uses of SOMs
  • As with most successful types of neural networks,
    SOMs have been applied to many problem domains
  • Optics
  • Acoustic preprocessing
  • Process monitoring
  • Continuous speech recognition
  • Etc.

22
Other types of unsupervised neural networks
  • Although the SOM is the most popular type, there
    are others
  • The main challenge comes from a type of network
    using Adaptive Resonance Theory (ART), put forward
    by Grossberg
  • MIrMax (O'Connor, Walley), a new unsupervised
    method, uses information theory to create SOM-like
    maps

23
Next Lecture
  • Revision!