1
BP - Review
  • CS/CMPE 333 Neural Networks

2
Notation
  • Consider an MLP with P input, Q hidden, and M
    output neurons
  • The network has two layers, each with its own
    inputs and outputs: two single-layer networks
    connected in series, where the output of the
    first becomes the input to the second
  • For convenience, each layer can be considered
    separately
  • If both layers must be tracked, a superscript
    index may be used to indicate the layer number,
    e.g. $w^2_{12}$

3
Identifying Parameters
  • Letter indices i, j, k, m, n, etc. are used to
    identify parameters
  • If two or more indices are used, the alphabetical
    order of the indices indicates the relative
    position of the parameters. E.g. $x_i y_j$
    indicates that the variable x belongs to a layer
    that precedes the layer of the variable y
    (i → j)
  • $w_{ji}$: the synaptic weight connecting neuron i
    to neuron j

4
[Figure: a two-layer MLP. Inputs $x_1, \ldots, x_P$
and a bias input $x_0 = -1$ feed Layer 1 (weights
$W^1$, Q hidden neurons); the hidden outputs, with a
bias of -1, feed Layer 2 (weights $W^2$), producing
outputs $y_1, \ldots, y_M$.]
5
BP Equations (1)
  • Delta rule
  • $w_{ji}(n+1) = w_{ji}(n) + \Delta w_{ji}(n)$
  • where
  • $\Delta w_{ji}(n) = \eta \, \delta_j(n) \, y_i(n)$
  • $\delta_j(n)$ is given by
  • If neuron j lies in the output layer
  • $\delta_j(n) = f_j'(v_j(n)) \, e_j(n)$
  • If neuron j lies in a hidden layer
  • $\delta_j(n) = f_j'(v_j(n)) \sum_k \delta_k(n) \, w_{kj}(n)$
    (see the sketch below)
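A minimal sketch of these scalar rules in
Python/NumPy, assuming the derivative values
$f_j'(v_j(n))$ are already computed; all names are
illustrative, not from the course:

    import numpy as np

    def delta_rule_update(w_ji, eta, delta_j, y_i):
        # w_ji(n+1) = w_ji(n) + eta * delta_j(n) * y_i(n)
        return w_ji + eta * delta_j * y_i

    def delta_output(fprime_vj, e_j):
        # Output layer: delta_j = f'_j(v_j) * e_j
        return fprime_vj * e_j

    def delta_hidden(fprime_vj, deltas_k, w_kj):
        # Hidden layer: delta_j = f'_j(v_j) * sum_k delta_k * w_kj
        return fprime_vj * np.dot(deltas_k, w_kj)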

6
BP Equations (2)
  • When logistic sigmoidal activation functions are
    used, $\delta_j(n)$ is given by
  • If neuron j lies in the output layer
  • $\delta_j(n) = y_j(n) \, [1 - y_j(n)] \, e_j(n)$
  •   $= y_j(n) \, [1 - y_j(n)] \, [d_j(n) - y_j(n)]$,
    since $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$
    is the desired response
  • If neuron j lies in a hidden layer
  • $\delta_j(n) = y_j(n) \, [1 - y_j(n)] \sum_k \delta_k(n) \, w_{kj}(n)$
    (see the sketch below)
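Because the logistic sigmoid satisfies
$f'(v) = y(1 - y)$, the deltas can be computed from
the neuron outputs alone. A minimal sketch, with
illustrative names:

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def delta_output_sigmoid(y_j, d_j):
        # delta_j = y_j * (1 - y_j) * (d_j - y_j), using e_j = d_j - y_j
        return y_j * (1.0 - y_j) * (d_j - y_j)

    def delta_hidden_sigmoid(y_j, deltas_k, w_kj):
        # delta_j = y_j * (1 - y_j) * sum_k delta_k * w_kj
        return y_j * (1.0 - y_j) * np.dot(deltas_k, w_kj)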

7
Matrix/Vector Notation (1)
  • $w_{ji}$: the synaptic weight from the ith neuron
    to the jth neuron (where neuron i precedes
    neuron j)
  • $w_{ji}$: the element in the jth row and ith
    column of the weight matrix $W$
  • Consider a feedforward network with P inputs, Q
    hidden neurons, and M outputs
  • What should the dimensions of $W$ from the hidden
    to the output layer be?
  • $W$ will have M rows and Q+1 columns. The first
    column is for the bias inputs (see the sketch
    below)
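For instance, in NumPy with illustrative sizes, the
hidden-to-output weight matrix would be allocated as:

    import numpy as np

    Q, M = 4, 3                # hidden and output neuron counts (example values)
    W2 = np.zeros((M, Q + 1))  # M rows, Q+1 columns; column 0 multiplies the bias input
    print(W2.shape)            # (3, 5)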

8
Vector/Matrix Notation (2)
  • $y_j$: the output of the jth neuron (in a layer)
  • $y$: the vector whose jth element is $y_j$
  • What should the dimension of $y$ be for the
    hidden layer?
  • $y$ is a vector of length Q+1, where the first
    element is the bias input of -1
  • What should the dimension of $y$ be for the
    output layer?
  • $y$ is a vector of length M. No bias input is
    needed since this is the last layer of the
    network (see the sketch below)
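Continuing the same illustrative NumPy setup, the two
output vectors would be shaped as follows:

    import numpy as np

    Q, M = 4, 3
    y_hidden = np.concatenate(([-1.0], np.zeros(Q)))  # length Q+1; element 0 is the bias input -1
    y_output = np.zeros(M)                            # length M; no bias after the last layer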

9
BP Equation in Vector/Matrix Form
  • Delta rule
  • $W_j(n+1) = W_j(n) + \Delta W_j(n)$
  • where
  • $\Delta W_j(n) = \eta \, \delta_j(n) \, y_i(n)^T$
    (an outer product)
  • When logistic sigmoidal activation functions are
    used, $\delta_j(n)$ is given by (in the
    following, omit the bias elements from the
    vectors and matrices)
  • If j is the output layer
  • $\delta_j(n) = y_j(n) \odot [1 - y_j(n)] \odot [d_j(n) - y_j(n)]$
  • If j is a hidden layer
  • $\delta_j(n) = y_j(n) \odot [1 - y_j(n)] \odot [W_k(n)^T \delta_k(n)]$,
    where layer k follows layer j and $\odot$ denotes
    the elementwise product (see the sketch below)
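Putting the vector/matrix form together, here is a
minimal NumPy sketch of one BP step for a P-Q-M
sigmoidal network; it assumes the bias weights are
stored in column 0 of each matrix, and all names are
illustrative:

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def bp_step(W1, W2, x, d, eta):
        # W1: (Q, P+1) input-to-hidden weights; W2: (M, Q+1)
        # hidden-to-output weights. Bias inputs are -1.
        xb  = np.concatenate(([-1.0], x))    # input with bias element, length P+1
        y1  = sigmoid(W1 @ xb)               # hidden outputs, length Q
        y1b = np.concatenate(([-1.0], y1))   # hidden outputs with bias, length Q+1
        y2  = sigmoid(W2 @ y1b)              # network outputs, length M

        delta2 = y2 * (1 - y2) * (d - y2)    # output-layer deltas
        # Back-propagate through W2, dropping its bias column as noted above
        delta1 = y1 * (1 - y1) * (W2[:, 1:].T @ delta2)

        W2 = W2 + eta * np.outer(delta2, y1b)  # outer-product updates
        W1 = W1 + eta * np.outer(delta1, xb)
        return W1, W2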

10
[Figure: the same two-layer MLP diagram as on
slide 4, repeated for reference.]