FeedForward Neural Networks - PowerPoint PPT Presentation

1
Feed-Forward Neural Networks

2
Content
  • Introduction
  • Single-Layer Perceptron Networks
  • Learning Rules for Single-Layer Perceptron
    Networks
  • Perceptron Learning Rule
  • Adaline Learning Rule

3
Feed-Forward Neural Networks
  • Introduction

4
Historical Background
  • 1943: McCulloch and Pitts proposed the first
    computational model of the neuron.
  • 1949: Hebb proposed the first learning rule.
  • 1958: Rosenblatt's work on perceptrons.
  • 1969: Minsky and Papert exposed limitations of
    the theory.
  • 1970s: Decade of dormancy for neural networks.
  • 1980s-90s: Neural networks return (self-organization,
    back-propagation algorithms, etc.)

5
Nervous Systems
  • The human brain contains about 10^11 neurons.
  • Each neuron is connected to about 10^4 others.
  • Some scientists have compared the brain to a
    complex, nonlinear, parallel computer.
  • The largest modern neural networks achieve a
    complexity comparable to the nervous system of a
    fly.

6
Neurons
  • The main purpose of neurons is to receive,
    analyze, and transmit information in the form of
    signals (electric pulses).
  • When a neuron sends information, we say that the
    neuron fires.

7
Neurons
Acting through specialized projections known as
dendrites and axons, neurons carry information
throughout the neural network.
This animation demonstrates the firing of a
synapse from the pre-synaptic terminal of one
neuron to the soma (cell body) of another neuron.
8
A Model of Artificial Neuron
9
A Model of Artificial Neuron
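In the usual formulation (the symbols below are assumptions, not taken from the slide's figure), an artificial neuron computes a weighted sum of its inputs and passes it through an activation function:

    net = w1 x1 + w2 x2 + ... + wm xm - θ
    y = f(net)

where the wi are synaptic weights, θ is the threshold (bias), and f is an activation function such as sgn for a linear threshold unit.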
10
Feed-Forward Neural Networks
  • Graph representation:
  • nodes: neurons
  • arrows: signal flow directions
  • A neural network that does not contain cycles
    (feedback loops) is called a feed-forward network
    (or perceptron).

11
Layered Structure
Hidden Layer(s)
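As a concrete illustration of the layered, cycle-free signal flow, here is a minimal forward-pass sketch in Python (the layer sizes, the sgn activation, and all names are illustrative assumptions, not code from the course):

    import numpy as np

    def sgn(v):
        # sign activation used by linear threshold units
        return np.where(v >= 0.0, 1.0, -1.0)

    def forward(x, layers):
        # layers: list of (W, b) pairs, one per layer;
        # signals flow strictly forward, layer by layer (no feedback loops)
        a = x
        for W, b in layers:
            a = sgn(W @ a + b)   # weighted sum, then activation
        return a

    # example: 3 inputs -> 2 hidden units -> 1 output
    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(2, 3)), rng.normal(size=2)),
              (rng.normal(size=(1, 2)), rng.normal(size=1))]
    print(forward(np.array([1.0, -1.0, 0.5]), layers))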
12
Knowledge and Memory
  • The output behavior of a network is determined by
    the weights.
  • Weights act as the memory of an NN.
  • Knowledge is distributed across the network.
  • A large number of nodes
  • increases the storage capacity,
  • ensures that the knowledge is robust,
  • provides fault tolerance.
  • Store new information by changing the weights.

13
Pattern Classification
output pattern y
  • Function x → y
  • The NN's output is used to distinguish between
    and recognize different input patterns.
  • Different output patterns correspond to
    particular classes of input patterns.
  • Networks with hidden layers can be used for
    solving more complex problems than just linear
    pattern classification.

input pattern x
14
Training
Training Set
Goal
15
Generalization
  • A properly trained neural network may produce
    reasonable answers for input patterns not seen
    during training (generalization).
  • Generalization is particularly useful for the
    analysis of noisy data (e.g., time series).

16
Generalization
  • A properly trained neural network may produce
    reasonable answers for input patterns not seen
    during training (generalization).
  • Generalization is particularly useful for the
    analysis of noisy data (e.g., time series).

17
Applications
  • Pattern classification
  • Object recognition
  • Function approximation
  • Data compression
  • Time series analysis and forecast
  • . . .

18
Feed-Forward Neural Networks
  • Single-Layer Perceptron Networks

19
The Single-Layered Perceptron
20
The Single-Layered Perceptron
21
Training a Single-Layered Perceptron
Training Set
Goal
22
Learning Rules
  • Linear Threshold Units (LTUs): Perceptron
    Learning Rule
  • Linearly Graded Units (LGUs): Widrow-Hoff
    Learning Rule

Training Set
Goal
23
Feed-Forward Neural Networks
  • Learning Rules for
  • Single-Layered Perceptron Networks
  • Perceptron Learning Rule
  • Adaline Learning Rule

24
Perceptron
Linear Threshold Unit
sgn
25
Perceptron
Goal
Linear Threshold Unit
sgn
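In the usual notation (assumed here, since the slide's formulas are figures), the linear threshold unit computes y = sgn(w^T x), and the goal is to find a weight vector w such that sgn(w^T x(k)) = d(k) for every training pair (x(k), d(k)), with desired outputs d(k) ∈ {+1, -1}.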
26
Example
Goal
Class 1
g(x) = -2x1 + 2x2 - 20
Class 2
27
Augmented input vector
Goal
Class 1 (1)
Class 2 (-1)
28
Augmented input vector
Goal
29
Augmented input vector
Goal
A plane passes through the origin in the
augmented input space.
30
Linearly Separable vs. Linearly Non-Separable
AND
OR
XOR
Linearly Separable
Linearly Separable
Linearly Non-Separable
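Why XOR is not linearly separable (a short argument, assuming the usual 0/1 input encoding): a separating line w1 x1 + w2 x2 = θ would require 0 < θ for input (0,0), and w1 > θ, w2 > θ for inputs (1,0) and (0,1); adding the last two gives w1 + w2 > 2θ > θ, which contradicts the requirement w1 + w2 < θ for input (1,1).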
31
Goal
  • Given training sets T1 ⊂ C1 and T2 ⊂ C2 with
    elements in the form x = (x1, x2, ..., xm-1, xm)^T,
    where x1, x2, ..., xm-1 ∈ R and xm = -1.
  • Assume T1 and T2 are linearly separable.
  • Find w = (w1, w2, ..., wm)^T such that
    w^T x > 0 for x ∈ T1 and w^T x < 0 for x ∈ T2.
32
Goal
w^T x = 0 is a hyperplane that passes through the
origin of the augmented input space.
  • Given training sets T1 ⊂ C1 and T2 ⊂ C2 with
    elements in the form x = (x1, x2, ..., xm-1, xm)^T,
    where x1, x2, ..., xm-1 ∈ R and xm = -1.
  • Assume T1 and T2 are linearly separable.
  • Find w = (w1, w2, ..., wm)^T such that
    w^T x > 0 for x ∈ T1 and w^T x < 0 for x ∈ T2.
33
Observation
Which w's correctly classify x?

What trick can be used?
34
Observation
Is this w ok?

w1x1 + w2x2 = 0
35
Observation
w1x1 + w2x2 = 0
Is this w ok?

36
Observation
w1x1 + w2x2 = 0
Is this w ok?

How to adjust w?
Δw = ?
37
Observation
Is this w ok?

How to adjust w?
Δw = -ηx
reasonable?
> 0
< 0
38
Observation
Is this w ok?

reasonable?
How to adjust w?
Δw = ηx
> 0
< 0
39
Observation
Is this w ok?
Δw = +ηx  or  Δw = -ηx
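A one-line check of why this adjustment is reasonable (η > 0 is the learning rate): after the update w ← w + ηx we get (w + ηx)^T x = w^T x + η‖x‖^2 > w^T x, so w^T x is pushed toward the positive side; symmetrically, w ← w - ηx decreases w^T x. Repeated corrections therefore move a misclassified pattern toward the correct side of the decision boundary.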
40
Perceptron Learning Rule
Upon misclassification of a training pattern:
Define the error:
41
Perceptron Learning Rule
Define error
42
Perceptron Learning Rule
43
Summary: Perceptron Learning Rule
Based on the general weight learning rule:
  no weight change for a correctly classified pattern
  error-correction update for a misclassified pattern
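A minimal sketch of the rule in Python (it assumes augmented input vectors, targets in {+1, -1}, and a hypothetical learning rate eta; names and structure are illustrative, not the course's own code):

    import numpy as np

    def sgn(v):
        return 1.0 if v >= 0.0 else -1.0

    def perceptron_train(X, d, eta=1.0, max_epochs=100):
        # X: (n, m) augmented input vectors; d: desired outputs in {+1, -1}
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            errors = 0
            for x, target in zip(X, d):
                y = sgn(w @ x)
                if y != target:                        # misclassification
                    w += eta * (target - y) / 2 * x    # i.e. +eta*x or -eta*x
                    errors += 1
            if errors == 0:                            # all patterns correct: stop
                break
        return w

    # example: the AND function, with last component -1 absorbing the threshold
    X = np.array([[0, 0, -1], [0, 1, -1], [1, 0, -1], [1, 1, -1]], dtype=float)
    d = np.array([-1, -1, -1, 1], dtype=float)
    print(perceptron_train(X, d))

For a linearly separable training set, the convergence theorem on the following slides guarantees that the inner loop eventually makes a full pass with no errors.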
44
Summary: Perceptron Learning Rule
Does it converge?
45
Perceptron Convergence Theorem
  • Exercise: Refer to papers or textbooks to
    prove the theorem.

If the given training set is linearly separable,
the learning process will converge in a finite
number of steps.
46
The Learning Scenario
Linearly Separable.
47
The Learning Scenario
48
The Learning Scenario
49
The Learning Scenario
50
The Learning Scenario
51
The Learning Scenario
(figure: successive weight vectors w3 and w4)
52
The Learning Scenario
53
The Learning Scenario
The demonstration is in augmented space.
Conceptually, in augmented space, we adjust the
weight vector to fit the data.
54
Weight Space
A weight in the shaded area will give correct
classification for the positive example.
55
Weight Space
A weight in the shaded area will give correct
classification for the positive example.
Δw = ηx
56
Weight Space
A weight not in the shaded area will give correct
classification for the negative example.
57
Weight Space
A weight not in the shaded area will give correct
classification for the negative example.
Δw = -ηx
58
The Learning Scenario in Weight Space
59
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight space with axes w1 and w2)
60
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → w1 in weight space)
61
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → w1 → w2)
62
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w3)
63
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w4)
64
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w5)
65
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w6)
66
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w7)
67
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w8)
68
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w9)
69
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w10)
70
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: weight trajectory w0 → ... → w11)
71
The Learning Scenario in Weight Space
To correctly classify the training set, the
weight must move into the shaded area.
(figure: initial weight w0 and final weight w11 inside the feasible region)
Conceptually, in weight space, we move the weight
into the feasible region.
72
Feed-Forward Neural Networks
  • Learning Rules for
  • Single-Layered Perceptron Networks
  • Perceptron Learning Rule
  • Adaline Learning Rule

73
Adaline (Adaptive Linear Element)
Widrow, 1962
74
Adaline (Adaptive Linear Element)
Under what condition is the goal reachable?
Goal
Widrow, 1962
75
LMS (Least Mean Square)
Minimize the cost function (error function)
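The cost being minimized is the usual sum of squared errors (the notation is assumed, since the slide's formula is a figure):

    E(w) = (1/2) Σ_k ( d(k) - w^T x(k) )^2

where d(k) is the desired output and w^T x(k) is the Adaline's linear output for the k-th training pattern.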
76
Gradient Descent Algorithm
Our goal is to go downhill.
Contour Map
Δw
(w1, w2)
77
Gradient Descent Algorithm
Our goal is to go downhill.
How do we find the steepest descent direction?
Contour Map
Δw
(w1, w2)
78
Gradient Operator
Let f(w) = f(w1, w2, ..., wm) be a function over R^m.
Define the gradient of f as ∇f = (∂f/∂w1, ∂f/∂w2, ..., ∂f/∂wm)^T.
79
Gradient Operator
df > 0 : go uphill
df = 0 : stay on the plain
df < 0 : go downhill
80
The Steepest Descent Direction
To minimize f, we choose Δw = -η∇f.
df > 0 : go uphill
df = 0 : stay on the plain
df < 0 : go downhill
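With this choice, df = (∇f)^T Δw = -η ‖∇f‖^2 ≤ 0, so every step goes downhill (or stays put at a stationary point). Applied to the LMS cost above, the standard batch update is

    Δw = -η ∇E(w) = η Σ_k ( d(k) - w^T x(k) ) x(k),

with η > 0 a learning rate (notation assumed, as before).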
81
LMS (Least Mean Square)
Minimize the cost function (error function)
82
Adaline Learning Rule
Minimize the cost function (error function)
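A minimal Python sketch of the resulting Widrow-Hoff (LMS) rule in its per-pattern form (all names are illustrative assumptions, not the course's code):

    import numpy as np

    def adaline_train(X, d, eta=0.01, epochs=50):
        # X: (n, m) augmented input vectors; d: desired outputs
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, d):
                error = target - w @ x    # d(k) - w^T x(k), using the linear output
                w += eta * error * x      # gradient-descent step on (1/2) * error^2
        return w

Unlike the perceptron rule, the error here is computed from the linear output w^T x rather than the thresholded output, which is what makes the cost differentiable and the descent well defined.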
83
Learning Modes
  • Batch Learning Mode
  • Incremental Learning Mode
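In the usual terminology (a gloss on the two bullets, not text from the slides): batch mode accumulates the error gradient over the whole training set before making one update, Δw = η Σ_k e(k) x(k); incremental (online) mode updates the weights after every single pattern, Δw = η e(k) x(k), as in the sketch above, where e(k) = d(k) - w^T x(k).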

84
Comparisons
                 Perceptron Learning Rule     Adaline (Widrow-Hoff) Learning Rule
Fundamental      Hebbian assumption           Gradient descent
Convergence      In finite steps              Converges asymptotically
Constraint       Linearly separable           Linear independence