CS621: Artificial Intelligence Lecture 18: Feedforward network contd
Learn more at: http://www.cse.iitb.ac.in
1
CS621: Artificial Intelligence
Lecture 18: Feedforward network (contd)
  • Pushpak Bhattacharyya
  • Computer Science and Engineering Department
  • IIT Bombay

2
Pocket Algorithm
  • Algorithm evolved in 1985; essentially uses the Perceptron Training
    Algorithm (PTA).
  • Basic idea:
  • Always preserve the best weights obtained so far "in the pocket".
  • Update the pocket weights only when better weights are found (i.e. the
    changed weights produce fewer classification errors).
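The idea above can be sketched in a few lines; the AND dataset, the fixed number of epochs, and the threshold convention are illustrative assumptions, not from the slides:

```python
def pocket_train(samples, epochs=20):
    """Pocket algorithm: run the ordinary perceptron rule, but keep
    ('pocket') the weights that misclassify the fewest samples so far."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    def errors(w, b):
        return sum(1 for x, y in samples if predict(w, b, x) != y)

    pocket, pocket_err = (list(w), b), errors(w, b)
    for _ in range(epochs):
        for x, y in samples:
            delta = y - predict(w, b, x)
            if delta:                        # perceptron update on a mistake
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
                e = errors(w, b)
                if e < pocket_err:           # strictly better: swap into pocket
                    pocket, pocket_err = (list(w), b), e
    return pocket, pocket_err

# AND is linearly separable, so the pocketed weights reach zero errors
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
(w, b), err = pocket_train(data)
```

On non-separable data the plain PTA oscillates forever; the pocket still returns the best weight vector seen during training.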

3
XOR using 2 layers
  • A non-linearly-separable (non-LS) function expressed as a linearly
    separable function of individual linearly separable functions.

4
Example - XOR
  • Calculation of XOR: an output unit with threshold 0.5 and weights
    w2 = 1, w1 = 1 on the two hidden-unit outputs x1'x2 and x1x2'
    (x' denotes complement), i.e. XOR = x1'x2 OR x1x2'.
  • Calculation of x1'x2: a unit with threshold 1, weights w2 = 1.5
    (from x2) and w1 = -1 (from x1).

    x1  x2  x1'x2
    0   0   0
    0   1   1
    1   0   0
    1   1   0
5
Example - XOR
  • The complete network: the output unit (threshold 0.5) receives the two
    hidden-unit outputs x1'x2 and x1x2' with weights w2 = 1 and w1 = 1.
  • Hidden unit computing x1'x2: threshold 1, weights -1 from x1 and 1.5
    from x2.
  • Hidden unit computing x1x2': threshold 1, weights 1.5 from x1 and -1
    from x2.
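The two-layer network can be checked directly in code; the weights and thresholds below are the ones from the example, with a step unit assumed to fire when the weighted sum reaches its threshold:

```python
def step(s, theta):
    """Threshold (step) unit: fires iff the weighted sum reaches theta."""
    return 1 if s >= theta else 0

def xor_net(x1, x2):
    # Hidden unit computing x1'x2: threshold 1, weights -1 (x1) and 1.5 (x2)
    h1 = step(-1 * x1 + 1.5 * x2, 1)
    # Hidden unit computing x1x2': threshold 1, weights 1.5 (x1) and -1 (x2)
    h2 = step(1.5 * x1 - 1 * x2, 1)
    # Output unit, an OR of h1 and h2: threshold 0.5, weights 1 and 1
    return step(1 * h1 + 1 * h2, 0.5)

outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs is [0, 1, 1, 0], the XOR truth table
```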
6
Some Terminology
  • A multilayer feedforward neural network has:
  • an input layer
  • an output layer
  • one or more hidden layers (which assist computation)
  • Output units and hidden units are called computation units.

7
Training of the MLP
  • Multilayer Perceptron (MLP)
  • Question: how to find weights for the hidden layers, when no target
    output is available for them?
  • This credit assignment problem is solved by gradient descent.
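The gradient-descent idea can be sketched on a single computation unit, where a target output is available; the sigmoid activation, squared-error loss, learning rate, and AND dataset here are assumptions for illustration (the slides defer the full multilayer derivation):

```python
import math

def sigmoid(s):
    """Smooth, differentiable squashing function (a common non-linearity)."""
    return 1.0 / (1.0 + math.exp(-s))

def train(samples, lr=0.5, epochs=5000):
    """Gradient descent on E = (t - o)^2 / 2 for one sigmoid unit."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:
            o = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # dE/d(net input) by the chain rule: -(t - o) * o * (1 - o)
            g = -(t - o) * o * (1 - o)
            w[0] -= lr * g * x[0]          # move each weight down the gradient
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

# AND has a target at the output, so plain gradient descent suffices;
# hidden layers need the credit-assignment machinery (backpropagation).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
preds = [1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0
         for x, _ in data]
```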

8
Discussion on Linear Neurons
9
(Figure: a feedforward network of linear neurons, with inputs x1, x2, hidden units h1, h2, and output unit Out.)
10
  • Note: the whole structure shown in the earlier slide is reducible to a
    single neuron with the given behavior.
  • Claim: a neuron with linear I-O behavior can't compute XOR.
  • Proof: consider all possible cases, assuming 0.1 and 0.9 as the lower
    and upper thresholds, with out = w1x1 + w2x2.
  • For (0,0), zero class: w1·0 + w2·0 ≤ 0.1
  • For (0,1), one class: w2 ≥ 0.9

11
  • For (1,0), one class: w1 ≥ 0.9
  • For (1,1), zero class: w1 + w2 ≤ 0.1
  • These equations are inconsistent: w1 ≥ 0.9 and w2 ≥ 0.9 give
    w1 + w2 ≥ 1.8, contradicting w1 + w2 ≤ 0.1. Hence XOR can't be computed.
  • Observations:
  • A linear neuron can't compute XOR.
  • A multilayer FFN with linear neurons is collapsible to a single linear
    neuron, hence the hidden layer adds no additional power.
  • Non-linearity is essential for power.
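The collapsibility observation is just the associativity of linear maps: composing two weight matrices gives a single weight matrix. A minimal check (the particular weight values below are assumptions for illustration):

```python
# Two linear layers (no non-linearity) compose into one linear layer:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so a linear hidden layer adds no power.

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def matmul(A, B):
    """Multiply matrices A and B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[-1.0, 1.5],   # input -> hidden weights (illustrative values)
      [1.5, -1.0]]
W2 = [[1.0, 1.0]]    # hidden -> output weights

x = [1.0, 1.0]
two_layer = matvec(W2, matvec(W1, x))      # run the two-layer network
collapsed = matvec(matmul(W2, W1), x)      # equivalent single linear neuron
```

Only with a non-linear activation between the layers does this collapse fail, which is why non-linearity is essential.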