1
Associative Learning in Hierarchical Self
Organizing Learning Arrays
  • Janusz A. Starzyk, Zhen Zhu, and Yue Li
  • School of Electrical Engineering and Computer
    Science
  • Ohio University, Athens, OH 45701, U.S.A.

2
Organization
  • Introduction
  • Network structure
  • Associative learning
  • Simulation results
  • Conclusions and future work

3
Introduction - SOLAR
  • SOLAR: Self-Organizing Learning Array
  • A concept inspired by the structure of biological
    neural networks
  • Regular, two or three-dimensional array of
    identical processing cells, connected to
    programmable routing channels
  • Self-organization both in individual cells and
    in the network structure

4
Introduction - SOLAR
  • SOLAR vs. ANN
  • Deep multi-layer structure with sparse
    connections
  • Self-organized neuron functions
  • Dynamic selection of interconnections
  • Hardware efficiency
  • Online learning

A 15 × 7 SOLAR
A 15 × 3 ANN
5
Introduction - Associative Learning
  • Hetero-associative (HA)
  • To associate different types of input signals
  • e.g. associating a verbal command with an image
  • Auto-associative (AA)
  • To recall a pattern from a fractional part
  • e.g. recalling an image with a missing part
  • The proposed approach
  • An associative learning network in a hierarchical
    SOLAR structure performing both HA and AA

www.idi.ntnu.no/keithd/classes/advai/lectures/assocnet.ppt
6
  • Introduction
  • Network structure
  • Associative learning
  • Simulation results
  • Conclusions and future work

7
Network Structure
  • Two- or three-dimensional multi-layer regular
    structure
  • 2-D networks: inputs span the rows and network
    depth spans the columns
  • 3-D networks, better suited to image applications
  • Small-world network connectivity
  • Mostly local connections with short Euclidean
    distance (as in biological neural networks)
  • Few distant connections, as in the sketch below
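To make this wiring rule concrete, here is a minimal Python sketch that draws mostly local, Gaussian-distributed links plus a few distant uniform ones. The function name and parameter defaults are illustrative assumptions (the 80/20 split and small STD echo the Iris experiment later in the deck), not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_connections(n_neurons, n_rows, n_inputs=2,
                     p_local=0.8, local_std=2.0):
    """Assign input connections for one layer of a 2-D SOLAR grid.

    Most links are local (Gaussian-distributed row offsets), a few
    are distant (uniform over all rows), giving the small-world
    pattern described above.
    """
    sources = np.empty((n_neurons, n_inputs), dtype=int)
    for i in range(n_neurons):
        row = i % n_rows
        for j in range(n_inputs):
            if rng.random() < p_local:              # local connection
                src = int(round(rng.normal(row, local_std)))
                src = min(max(src, 0), n_rows - 1)
            else:                                   # distant connection
                src = int(rng.integers(0, n_rows))
            sources[i, j] = src
    return sources  # row indices in the preceding layer

print(draw_connections(n_neurons=6, n_rows=15))
```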

8
Network Structure
  • Hierarchical network connection
  • Each neuron connects only to the preceding layer
  • Neuron connections
  • Redundant initial inputs, to be refined in
    learning
  • 2 inputs (I1, I2) and 1 output (O)
  • Feed-forward and feedback links (a minimal cell
    structure is sketched below)
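A minimal sketch of such a cell as a data structure, assuming nothing beyond what this slide states; all field names are ours.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Neuron:
    """One SOLAR cell, as described on this slide.

    The cell starts with redundant candidate inputs that learning
    later prunes down to the two used inputs I1 and I2; the output
    O feeds forward, and Of carries feedback arriving on that output.
    """
    candidate_inputs: List[int] = field(default_factory=list)
    i1: Optional[int] = None   # selected input I1 (index into previous layer)
    i2: Optional[int] = None   # selected input I2
    o: float = 0.5             # output O; 0.5 means unknown
    of: float = 0.5            # feedback Of received on the output
```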

9
  • Introduction
  • Network structure
  • Associative learning
  • Simulation results
  • Conclusions and future work

10
Associative learning - feed-forward
  • Semi-logic inputs and internal signals
  • Scaled from 0 to 1, with 0.5 meaning unknown
  • 0 determinate low, 1 determinate high
  • > 0.5 weak high, < 0.5 weak low
  • The I1/I2 relationship is found from
  • P(I1 is low), P(I1 is high), P(I2 is low), P(I2
    is high)
  • The joint probabilities, e.g. P(I2 is low, I1 is
    low)
  • The conditional probabilities, e.g. P(I2 is low |
    I1 is low), estimated below
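A short sketch of how these statistics might be estimated from training samples; thresholding weak values at 0.5 and dropping unknowns from the counts are our simplifying assumptions.

```python
import numpy as np

def input_stats(i1, i2):
    """Estimate low/high statistics of two semi-logic input streams.

    A sample counts as low below 0.5 and high above it; exactly 0.5
    (unknown) is excluded from the counts.
    """
    i1, i2 = np.asarray(i1, float), np.asarray(i2, float)
    known = (i1 != 0.5) & (i2 != 0.5)
    lo1, lo2 = i1[known] < 0.5, i2[known] < 0.5
    n = int(known.sum())
    p1_low, p2_low = lo1.mean(), lo2.mean()
    joint_low = (lo1 & lo2).mean()             # P(I2 is low, I1 is low)
    cond_low = joint_low / max(p1_low, 1e-12)  # P(I2 is low | I1 is low)
    return n, p1_low, p2_low, joint_low, cond_low
```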

11
Associative learning - feed-forward
  • Compare the conditional probabilities against a
    confidence interval CI computed from the number
    of samples N
  • If P(I2 | I1) - CI > threshold, I2 can be implied
    from I1 (see the sketch below)
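The CI formula itself is not legible in the transcript; a plausible stand-in consistent with "computed from the number of samples N" is the normal-approximation binomial half-width, assumed here along with an illustrative threshold.

```python
import math

def implies(p_cond, n, threshold=0.75, z=1.96):
    """Decide whether one input can be implied from the other.

    Assumption: CI = z * sqrt(p * (1 - p) / N), the standard
    binomial half-width; the implication holds when even the
    pessimistic estimate P - CI clears the threshold. Both the CI
    form and the 0.75 threshold are illustrative, not quoted.
    """
    ci = z * math.sqrt(p_cond * (1.0 - p_cond) / max(n, 1))
    return p_cond - ci > threshold
```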

12
Associative learning - feed-forward
  • A neuron is an associative neuron if I1 can be
    implied from I2 and I2 can be implied from I1;
    otherwise it is a transmitting neuron, as in the
    sketch below

Six possible I1/I2 distributions for associative
neurons. A semi-logic function is designed for
each one.
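Combining the two implication tests gives the neuron-type decision; this sketch reuses the hedged implies() helper above.

```python
def classify_neuron(p_i2_given_i1, p_i1_given_i2, n):
    """Associative if each input can be implied from the other,
    otherwise transmitting (the rule stated on this slide)."""
    if implies(p_i2_given_i1, n) and implies(p_i1_given_i2, n):
        return "associative"
    return "transmitting"
```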
13
Associative learning - feed-forward
  • In an associative neuron
  • Functions are designed for data transformation
    and feedback calculation
  • f1 to f4 handle data centered in one dominant
    quadrant
  • f5 and f6 handle data lying mainly in two
    quadrants
  • The neuron outputs 0.5 when an input is unknown

14
Associative learning - feed-forward
  • In a transmitting neuron
  • The input with the higher entropy (the dominant
    input) is transmitted to O, while the other is
    ignored
  • I1 is the dominant input if its entropy exceeds
    that of I2 (an entropy comparison is sketched
    below)
  • O may be an input to other neurons
  • O receives feedback Of from the connected neurons,
    and the neuron in turn generates feedback to its
    own inputs
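The dominance inequality on the original slide is a figure, but the text says the input with the higher entropy wins; here is that comparison, treating each semi-logic signal as a binary low/high source.

```python
import math

def binary_entropy(p_high):
    """Shannon entropy (bits) of a signal's low/high distribution."""
    p = min(max(p_high, 1e-12), 1.0 - 1e-12)
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def dominant_input(p1_high, p2_high):
    """I1 dominates when H(I1) >= H(I2); ties broken toward I1."""
    return "I1" if binary_entropy(p1_high) >= binary_entropy(p2_high) else "I2"
```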

15
Associative learning - feedback
  • The network generates feedback to the unknown
    inputs through association.

16
Associative learning - feedback
  • N1 → transmitting neurons
  • Of is passed straight back to the input
  • N2 → associative neurons with determinate inputs
  • Feedback has no effect and information passes
    forward
  • N3 → associative neurons with active feedback
    and inactive input(s)
  • Of creates feedbacks I1f and I2f through the
    function f
  • These neurons only pass information backwards
  • N4 → actively associating neurons with inactive
    feedback
  • If one of their inputs is inactive, it is
    overwritten based on its association with the
    other input and the neuron's function f (all four
    cases are sketched below)
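A compact sketch of the four cases. Here f is the neuron's semi-logic function (with 0.5 marking an unknown input), and f_inv1/f_inv2 are hypothetical partial inverses that reconstruct one input from an output value; the names and the inversion mechanism are our assumptions.

```python
def propagate_feedback(kind, of, i1, i2, f, f_inv1, f_inv2):
    """Route output feedback Of per the four neuron cases above.

    Returns the feedback pair (I1f, I2f) sent back toward the inputs.
    """
    UNKNOWN = 0.5
    if kind == "N1":                    # transmitting: Of passes straight back
        return of, of
    if kind == "N2":                    # determinate inputs: feedback ignored
        return i1, i2
    if kind == "N3":                    # active Of, inactive input(s):
        return f_inv1(of), f_inv2(of)   # I1f and I2f derived through f
    # N4: actively associating, inactive feedback; the unknown input
    # is overwritten from its association with the known one
    if i1 == UNKNOWN:
        return f_inv1(f(UNKNOWN, i2)), i2
    return i1, f_inv2(f(i1, UNKNOWN))
```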

17
Associative learning - feedback
  • Calculation of the feedback (using f5)

With an active output feedback, I1f is determined
based on f5 and weighted using w5.
w5 measures the quality of learning.
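One plausible reading of "weighted using w5", assumed rather than quoted from the slides: blend the f5-derived feedback toward the unknown value 0.5 in proportion to the learning quality.

```python
def weighted_feedback(of, w5, f5_inv):
    """Feedback through f5, attenuated by the quality weight w5.

    f5_inv is a hypothetical inverse of f5; with w5 near 0 (poor
    learning) the feedback stays close to 0.5, i.e. unknown.
    """
    return w5 * f5_inv(of) + (1.0 - w5) * 0.5
```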
18
  • Introduction
  • Network structure
  • Associative learning
  • Simulation results
  • Conclusions and future work

19
Simulation results
  • The TAE database (from the University of
    Wisconsin-Madison)
  • 151 instances, 5 features and 3 classes
  • The Iris plants database
  • 150 instances, 4 features and 3 classes
  • The Glass Identification database
  • 214 instances, 9 features and 6 classes
  • Image recovery
  • Two letters, B and J

20
Simulation results - TAE database
Non-hierarchical; connection distribution Gaussian
vertically (STD 30) and horizontally (STD 5);
correct rate 62.33%
Features coded into binary format with sliding bars
and classified using orthogonal codes (an encoding
sketch follows)
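A minimal sketch of sliding-bar coding as we read it: the feature value picks the position of a short run of 1s inside a fixed-width code. The bit count and bar width are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def sliding_bar(value, vmin, vmax, n_bits=8, bar=3):
    """Encode a scalar feature as a bar of 1s sliding across n_bits."""
    pos = int(round((value - vmin) / (vmax - vmin) * (n_bits - 1)))
    lo = max(0, min(pos - bar // 2, n_bits - bar))  # clamp bar to the code
    code = np.zeros(n_bits, dtype=int)
    code[lo:lo + bar] = 1
    return code

print(sliding_bar(5.1, vmin=4.3, vmax=7.9))  # -> [0 1 1 1 0 0 0 0]
```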
21
Simulation results - Iris database
Non-hierarchical; connection distribution Gaussian
vertically (STD 30) and horizontally (STD 5);
correct rate 73.74%
22
Simulation results - Iris database
Hierarchical; vertical connections 80% Gaussian
(STD 2) and 20% uniform; correct rate improved
to 75.33%
23
Simulation results - Iris database
  • The hierarchical structure appears advantageous
  • Using an equal number of bits for features and
    class IDs gives a better rate
  • Performance further improved to 86% with mixed
    feature/classification bits

24
Simulation results - Glass ID database
  • The depth of learning is related to the
    complexity of the target problem.
  • With more classes, more actively associating
    neurons and more layers are needed.

Average number of actively associating neurons
per layer, with 3 / 6 classes
25
Simulation results - Image Recovery
  • A 2-D image recovery task
  • 200 patterns were generated by adding random noise
    to two black-and-white images of the letters B and
    J. The network was trained with 190 patterns and
    tested on the remaining 10.
  • Mean correct classification rate: 100%

Training patterns
An average image of the training patterns
26
Simulation results - Image Recovery
Testing result and recovered images
Testing result and recovered image using input
redundancy
27
  • Introduction
  • Network structure
  • Associative learning
  • Simulation results
  • Conclusions and future work

28
Conclusion and Future Work
  • SOLAR has a flexible and sparse interconnect
    structure designed to emulate the organization of
    the human cortex
  • It handles a wide variety of machine learning
    tasks, including image recognition, classification
    and data recovery, and is suitable for online
    learning
  • The associative learning SOLAR is an adaptive
    network with feedback and inhibitory links
  • It discovers correlations between inputs and
    establishes associations inside the neurons
  • It can perform auto-associative and
    hetero-associative learning
  • It can be modified to perform value-driven
    interaction with the environment