Artificial Neural Networks: Unsupervised ANNs (transcript of a PowerPoint presentation, 35 slides)
Artificial Neural Networks
  • Unsupervised ANNs

  • Unsupervised ANNs
  • Kohonen Self-Organising Map (SOM)
  • Structure
  • Processing units
  • Learning
  • Applications
  • Further Topics: Spiking ANNs Application
  • Adaptive Resonance Theory (ART)
  • Structure
  • Processing units
  • Learning
  • Applications
  • Further Topics: ARTMAP

Unsupervised ANNs
  • Usually 2-layer ANN
  • Only input data are given
  • ANN must self-organise output
  • Two main models: Kohonen's SOM and Grossberg's ART
  • Clustering applications

(Diagram: feature layer fully connected to output layer)
Learning Rules
  • Instar learning rule: the incoming weights of a neuron
    converge to the input pattern (previous layer)
  • Convergence speed is determined by the learning rate
  • Step size is proportional to the node's output value
  • The neuron learns the association between input vectors
    and their outputs
  • Outstar learning rule: the outgoing weights of a neuron
    converge to the output pattern (next layer)
  • Learning is proportional to neuron activation
  • Step size is proportional to the node's input value
  • The neuron learns to recall a pattern when stimulated
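The two rules above can be sketched as simple weight updates. This is an illustrative sketch only; the function names and the learning-rate parameter `lr` are not from the presentation.

```python
def instar_update(w, x, y, lr=0.1):
    """Instar rule: move a neuron's incoming weights w toward the input
    pattern x; the step size is proportional to the node's output y."""
    return [wi + lr * y * (xi - wi) for wi, xi in zip(w, x)]

def outstar_update(v, target, x, lr=0.1):
    """Outstar rule: move a neuron's outgoing weights v toward the desired
    output pattern; the step size is proportional to the node's input x."""
    return [vi + lr * x * (ti - vi) for vi, ti in zip(v, target)]
```

Note that when the output y (instar) or input x (outstar) is zero, no learning takes place, which is what "step size proportional to node output/input value" implies.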

Self-Organising Map (SOM)
  • T. Kohonen (1984)
  • 2D map of output neurons
  • Input layer and output layer fully connected
  • Lateral inhibitory synapses
  • Model of biological topographic maps, e.g.
    primary auditory cortex in animal brains (cats
    and monkeys)
  • Hebbian learning
  • Akin to K-means
  • Data clustering applications

SOM Clustering
  • Each output neuron is the prototype for a cluster
  • Its weights form the reference vector (prototype features)
  • Euclidean distance between the reference vector and the
    input pattern
  • Competitive output layer (winner takes all)
  • In biological systems, winner-take-all is realised via
    inhibitory synapses
  • The neuron whose reference vector is closest to the input wins
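The competitive step can be sketched as follows (an illustration; `find_winner` is a hypothetical helper name):

```python
import math

def find_winner(reference_vectors, x):
    """Competitive step: return the index of the neuron whose reference
    vector has the smallest Euclidean distance to input pattern x."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    return min(range(len(reference_vectors)),
               key=lambda i: dist(reference_vectors[i]))
```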

SOM Learning Algorithm
  • Only the weights of the winning neuron and its neighbours
    are updated
  • The weights of the winning neuron are brought closer to the
    input pattern (instar rule)
  • The reference vector is usually normalised
  • The neighbourhood function corresponds, in biological
    systems, to short-range excitatory synapses
  • Decreasing the width of the neighbourhood ensures that
    increasingly finer differences are encoded
  • Global convergence is not guaranteed
  • Gradually lowering the learning rate ensures stability
    (otherwise reference vectors may oscillate between clusters)
  • At the end, neurons are tagged; similar ones become
    sub-clusters of a larger cluster

N(t): neighbourhood function (its width decreases over time t)
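Putting the pieces together, one SOM training step might look like the sketch below. The names are mine, and a Gaussian kernel stands in for the neighbourhood function N(t); in a full implementation both `lr` and the neighbourhood width `sigma` would be decayed over time, as the slides note.

```python
import math

def som_step(weights, positions, x, lr, sigma):
    """One SOM training step: find the winner, then apply an instar-style
    update to every neuron, scaled by a Gaussian neighbourhood of the
    winner's position on the 2D map. `positions` holds each output
    neuron's grid coordinates. Returns the winner's index."""
    # Winner: neuron whose reference vector is closest to the input
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    win = dists.index(min(dists))
    for i, w in enumerate(weights):
        # Neighbourhood factor: Gaussian of grid distance to the winner
        grid_d2 = sum((p - q) ** 2 for p, q in zip(positions[i], positions[win]))
        h = math.exp(-grid_d2 / (2 * sigma ** 2))
        weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return win
```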
SOM Mapping
  • Adaptive Vector Quantisation
  • Reference vectors iteratively moved towards
    centres of (sub)clusters
  • Performs best on Gaussian distributions (the distance
    measure is radial)

SOM Topology
  • The surface of the map reflects the frequency distribution
    of the input set, i.e. the probability of each input class
  • More common vector types occupy
    proportionally more of output map.
  • The more frequent the pattern type, the finer
    grained the mapping.
  • Biological correspondence in brain cortex
  • Map allows dimension reduction and visualisation
    of input data

Some Issues about SOM
  • SOM can be used on-line (adaptation)
  • Neurons need to be labelled
  • Manually
  • By an automatic algorithm
  • Sometimes may not converge
  • Precision not optimal
  • Some neurons may be difficult to label
  • Results sensitive to the choice of input features
  • Results sensitive to the order of presentation of data
  • Epoch learning (mitigates order sensitivity)

SOM Applications
  • Natural language processing
  • Document clustering
  • Document retrieval
  • Automatic query
  • Image segmentation
  • Data mining
  • Fuzzy partitioning
  • Condition-action association

Further Topics: Spiking ANNs
  • Image segmentation task
  • SOM of spiking units
  • Lateral connections
  • Short range excitatory
  • Long range inhibitory
  • Train using Hebbian Learning
  • Train showing one pattern at a time

Spiking SOM Training
  • Hebbian learning
  • Different learning coefficients:
  • afferent weights: l_a
  • lateral inhibitory weights: l_i
  • lateral excitatory weights: l_e
  • First learn long-term correlations, then learn activity
    modulation for segmentation

N: normalisation factor; l_i, l_e: lateral learning coefficients
Spiking Neuron Dynamics
Spiking SOM Recall
  • Show different shapes together
  • Bursts of neuron activity
  • Each cluster fires alternately

Adaptive Resonance Theory (ART)
  • Carpenter and Grossberg (1976)
  • Inspired by studies on biological feature
  • On-line clustering algorithm
  • Leader-follower algorithm
  • Recurrent ANN
  • Competitive output layer
  • Data clustering applications
  • Stability-plasticity dilemma

ART Types
  • ART1: binary patterns
  • ART2: binary or analog patterns
  • ART3: hierarchical ART structure
  • ARTMAP: supervised ART

Stability-Plasticity Dilemma
  • Plasticity: the system adapts its behaviour in response to
    significant events
  • Stability: the system's behaviour doesn't change after
    irrelevant events
  • Dilemma: how to achieve stability without rigidity, and
    plasticity without chaos?
  • Ongoing learning capability
  • Preservation of learned knowledge

ART Architecture
  • Bottom-up weights w_ij: normalised copy of v_ij, used for
    forward matching
  • Top-down weights v_ij: store the class template
  • Input nodes: input normalisation and vigilance test
  • Output nodes: forward matching (competitive layer)
  • Long-term memory: the ANN weights
  • Short-term memory: the ANN activation pattern

ART Algorithm
  • The incoming pattern is matched against the stored cluster
    templates
  • If it is close enough to a stored template, it joins the
    best-matching cluster, whose weights are adapted according
    to the outstar rule
  • If not, a new cluster is initialised with the pattern as
    its template
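The leader-follower loop can be sketched for ART1-style binary patterns. This is a simplified illustration under stated assumptions: no gain control, and no reset search over next-best nodes; all names are mine.

```python
def art_cluster(patterns, rho):
    """Leader-follower clustering in the spirit of ART1 (binary patterns).
    For each pattern: score committed templates, keep only those passing
    the vigilance test rho, and join the best one (updating the template
    by intersection, per ART1); otherwise commit a new cluster with the
    pattern as its template. Returns (labels, templates)."""
    templates, labels = [], []
    for p in patterns:
        best, best_score = None, -1.0
        for i, t in enumerate(templates):
            if sum(t) == 0:
                continue  # degenerate template, skip
            overlap = sum(pi & ti for pi, ti in zip(p, t))
            score = overlap / sum(t)      # fraction of template bits in input
            vigilance = overlap / sum(p)  # fraction of input bits in template
            if vigilance >= rho and score > best_score:
                best, best_score = i, score
        if best is None:
            templates.append(list(p))     # new cluster, pattern as template
            best = len(templates) - 1
        else:
            # ART1 update: keep only features common to template and pattern
            templates[best] = [pi & ti for pi, ti in zip(p, templates[best])]
        labels.append(best)
    return labels, templates
```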

Recognition Phase
  • Forward transmission via bottom-up weights
  • Input pattern matched against the bottom-up weights
    (normalised templates) of the output nodes
  • Match score: inner product x · w_i
  • Hypothesis formulation: the best-matching node fires
    (winner-take-all layer)
  • Similar to Kohonen's SOM algorithm: the pattern is
    associated with the closest-matching template
  • ART1: fraction of the template's bits also present in the
    input

Inner product: x · w_i, where x = input pattern, w_i = bottom-up weights of neuron i, N = number of input features
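The recognition step, as a sketch (the helper name is hypothetical):

```python
def recognise(x, bottom_up):
    """Recognition phase: score each output node by the inner product of
    the input pattern x with its bottom-up weights, then pick the
    best-matching node (winner-take-all)."""
    scores = [sum(xi * wi for xi, wi in zip(x, w)) for w in bottom_up]
    return max(range(len(scores)), key=scores.__getitem__)
```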
Comparison Phase
  • Backward transmission via top-down weights
  • Vigilance test: the class template is matched against the
    input
  • Hypothesis validation: if the pattern is close enough to the
    template, categorisation was successful and resonance is
    achieved
  • If not close enough: reset the winner neuron and try the
    next best match
  • Repeat until
  • Either the vigilance test is passed
  • Or the hypotheses (committed neurons) are exhausted
  • ART1: fraction of the input pattern's bits also present in
    the template

Vigilance test (ART1): |x ∧ v_i| / |x| ≥ ρ, where x = input pattern, v_i = top-down weights of neuron i, ρ = vigilance threshold
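The ART1 vigilance test described above, as a sketch for binary patterns (the function name is mine):

```python
def vigilance_test(x, template, rho):
    """Comparison phase (ART1): the fraction of the input's active bits
    that are also active in the template must reach the vigilance
    threshold rho for resonance to occur."""
    overlap = sum(xi & ti for xi, ti in zip(x, template))
    return overlap / sum(x) >= rho
```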
Vigilance Threshold
  • The vigilance threshold sets the granularity of clustering
  • It defines the basin of attraction of each prototype
  • Low threshold:
  • Large mismatches accepted
  • Few, large clusters
  • Misclassifications more likely
  • High threshold:
  • Only small mismatches accepted
  • Many, small clusters
  • Higher precision

  • Only the weights of the winner node are updated
  • ART1: only features common to all members of the cluster are
    kept
  • ART1: the prototype is the intersection set of its members
  • ART2: the prototype is brought closer to the last example
  • ART2: b determines the amount of modification
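The two update rules can be sketched as follows (illustrative; `beta` corresponds to the ART2 modification parameter b mentioned above, and the function names are mine):

```python
def art1_update(template, x):
    """ART1 update: keep only the features common to the template and the
    new member (intersection), so the prototype converges to the
    intersection set of all cluster members."""
    return [ti & xi for ti, xi in zip(template, x)]

def art2_update(prototype, x, beta=0.5):
    """ART2-style update: move the prototype toward the latest example;
    beta sets the amount of modification."""
    return [(1 - beta) * pi + beta * xi for pi, xi in zip(prototype, x)]
```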

Additional Modules
(Diagram: input pattern → input layer → output layer → categorisation result, with gain control and reset modules attached)
Reset Module
  • Fixed connection weights
  • Implements the vigilance test
  • Excitatory connection from the input lines
  • Inhibitory connection from the input layer
  • Output of the reset module is inhibitory to the output layer
  • Disables the firing output node if its match with the
    pattern is not close enough
  • The reset signal lasts until the pattern is removed
  1. New pattern p is presented
  2. Reset module receives excitatory signal E from the input
    lines
  3. All active nodes are reset
  4. Input layer is activated
  5. Reset module receives inhibitory signal I from the input
    layer
  6. I > E, so no reset occurs while the input is active
  7. If p·v < ρ, the inhibition weakens and the reset signal is
    sent

Gain Module
  • Fixed connection weights
  • Controls the activation cycle of the input layer
  • Excitatory connection from the input lines
  • Inhibitory connection from the output layer
  • Output of the gain module is excitatory to the input layer
  • Shuts the system down if noise produces oscillations
  • Implements the 2/3 rule for the input layer
  1. New pattern p is presented
  2. Gain module receives excitatory signal E from the input
    lines
  3. Input layer is allowed to fire
  4. Input layer is activated
  5. Output layer is activated
  6. Gain module is turned down
  7. Now it is feedback from the output layer that keeps the
    input layer active
  8. If p·v < ρ, the output layer is switched off and the gain
    module allows the input layer to keep firing for another
    match

2/3 Rule
  • 2 inputs out of 3 are needed for input layer to
    be active

Step | Input signal | Gain module | Output layer | Input layer
  1  |      1       |      1      |      0       |      1
  2  |      1       |      1      |      0       |      1
  3  |      1       |      0      |      1       |      1
  4  |      1       |      1      |      0       |      1
  5  |      1       |      0      |      1       |      1
  6  |      1       |      0      |      1       |      1
  7  |      0       |      0      |      1       |      0
  1. New pattern p is presented
  2. Input layer is activated
  3. Output layer is activated
  4. Reset signal is sent
  5. New match
  6. Resonance
  7. Input off
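The 2/3 rule itself reduces to a simple majority check over a node's three input sources (a sketch; the function name is mine):

```python
def input_node_active(input_signal, gain, top_down_feedback):
    """2/3 rule: an input-layer node fires only when at least two of its
    three input sources (input line, gain module, top-down feedback from
    the output layer) are active."""
    return (input_signal + gain + top_down_feedback) >= 2
```

This reproduces the table above: with the input line and gain active the node fires, with the input line and top-down feedback active it fires, but output-layer feedback alone (step 7) is not enough.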

Issues about ART
  • Learned knowledge can be retrieved
  • Fast learning algorithm
  • Difficult to tune the vigilance threshold
  • Noise tends to lead to category proliferation
  • New noisy patterns tend to erode templates
  • ART is sensitive to the order of presentation of data
  • Accuracy sometimes not optimal
  • Assumes the sample distribution to be Gaussian
  • Only the winner neuron is updated: more a point-to-point
    mapping than SOM

SOM Plasticity vs. ART Plasticity
(Diagram: response to a new pattern under SOM mapping vs. ART mapping)
Given a new pattern, SOM moves a previously committed node and
rearranges its neighbours, so prior learning is partly forgotten;
ART instead assigns the pattern to a new node, leaving previously
learned templates intact.
ART Applications
  • Natural language processing
  • Document clustering
  • Document retrieval
  • Automatic query
  • Image segmentation
  • Character recognition
  • Data mining
  • Data set partitioning
  • Detection of emerging clusters
  • Fuzzy partitioning
  • Condition-action association

Further Topics: ARTMAP
  • Composed of two ART ANNs and a mapping field
  • On-line, supervised, self-organising ANN
  • Mapping field connects the output nodes of ART 1 to the
    output nodes of ART 2
  • Mapping field is trained using Hebbian learning
  • ART 1 partitions the input space
  • ART 2 partitions the output space
  • Mapping field learns the stimulus-response association

(Diagram: input pattern → ART 1 (input layer, output layer) → mapping field → ART 2 (output layer, input layer) ← desired output)
Conclusions - ANNs
  • ANNs can learn where knowledge is not available
  • ANNs can generalise from learned knowledge
  • There are several different ANN models with
    different capabilities
  • ANNs are robust, flexible and accurate systems
  • Parallel distributed processing allows fast
    computations and fault tolerance
  • ANNs require a set of parameters to be defined
  • Architecture
  • Learning rate
  • Training is crucial to ANN performance
  • Learned knowledge often not available (black box)

Further Readings
  • Mitchell, T. (1997), Machine Learning, McGraw-Hill.
  • Duda, R. O., Hart, P. E., and Stork, D. G. (2000), Pattern
    Classification, 2nd Edition. New York: Wiley.
  • ANN Glossary