# CSE 473 Introduction to Artificial Intelligence Neural Networks - PowerPoint PPT Presentation

Author: Henry Kautz · Created: 4/4/2001 · Slides: 30
Transcript and Presenter's Notes


1
CSE 473 Introduction to Artificial Intelligence
Neural Networks
• Henry Kautz
• Spring 2006

2–10
(No Transcript)
11
Training a Single Neuron
• Idea: adjust the weights to reduce the sum of squared errors over the training set
• Error: the difference between the actual and intended output
• Calculate the derivative (slope) of the error function
• Take a small step in the downward direction
• The step size is the training rate
• In a single-layer network, each unit can be trained separately
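The steps above can be sketched in code: a single linear unit trained by gradient descent on the sum of squared errors. The data, training rate, and epoch count below are illustrative assumptions, not values from the slides.

```python
# A minimal sketch of training a single unit by gradient descent.
# Hypothetical helper; the slides do not give a concrete implementation.

def train_unit(examples, rate=0.1, epochs=100):
    """examples: list of (inputs, target) pairs; inputs is a list of floats."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)          # weights, with w[0] acting as the bias
    for _ in range(epochs):
        for x, t in examples:
            xb = [1.0] + list(x)                        # prepend bias input
            o = sum(wi * xi for wi, xi in zip(w, xb))   # unit's actual output
            err = t - o                                 # intended minus actual
            # take a small step down the slope of the squared error
            for i in range(len(w)):
                w[i] += rate * err * xb[i]
    return w

# learn the linear function o = 2*x; weights should approach [0, 2]
w = train_unit([([0.0], 0.0), ([1.0], 2.0), ([2.0], 4.0)], rate=0.05, epochs=500)
```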

12
(No Transcript)
13
Computing Partial Derivatives
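The slide itself has no transcript; a hedged reconstruction of the standard derivation, for a unit with output $o = f(net)$ where $net = \sum_i w_i x_i$:

```latex
% Squared error for one example with target t:
E = \tfrac{1}{2}(t - o)^2

% Chain rule for the partial derivative with respect to weight i:
\frac{\partial E}{\partial w_i}
  = \frac{\partial E}{\partial o}
    \cdot \frac{\partial o}{\partial net}
    \cdot \frac{\partial net}{\partial w_i}
  = -(t - o)\, f'(net)\, x_i
```

Stepping opposite this slope gives the weight update $\Delta w_i = \eta\,(t - o)\, f'(net)\, x_i$, whose four factors are exactly those listed on the next slide.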
14
Single Unit Training Rule
• Adjust weight i in proportion to the product of:
• Training rate
• Error
• Derivative of the squashing function
• Degree to which input i was active
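A minimal sketch of this four-factor rule: the weight change is the product of training rate, error, squashing-function derivative, and input activity. The function name and sample numbers are illustrative assumptions; the squashing derivative is passed in, since the slide does not fix a particular squashing function.

```python
# Hypothetical helper illustrating the single-unit training rule.

def weight_update(rate, target, output, f_prime_at_net, x_i):
    """Delta for weight i: rate * error * f'(net) * input_i."""
    return rate * (target - output) * f_prime_at_net * x_i

# e.g. a linear unit (f'(net) = 1), target 1.0, output 0.25, input 0.5:
dw = weight_update(0.1, 1.0, 0.25, 1.0, 0.5)   # 0.1 * 0.75 * 1.0 * 0.5
```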

15
Sigmoid Units
16
Sigmoid Unit Training Rule
• Adjust weight i in proportion to the product of:
• Training rate
• Error
• Degree to which the output is ambiguous
• Degree to which input i was active
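For a sigmoid unit the "ambiguity" factor is o(1 − o), the sigmoid's own derivative, which is largest when the output is near 0.5 and shrinks toward 0 as the output saturates. A sketch with illustrative numbers:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_weight_update(rate, target, output, x_i):
    ambiguity = output * (1.0 - output)   # sigmoid derivative o(1 - o)
    return rate * (target - output) * ambiguity * x_i

o = sigmoid(0.0)                               # 0.5: maximally ambiguous
dw = sigmoid_weight_update(0.5, 1.0, o, 1.0)   # 0.5 * 0.5 * 0.25 * 1.0
```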

17
Expressivity of Neural Networks
• Single units can learn any linear function
• A single layer of units can learn any set of linear inequalities (a convex region)
• Two layers can approximate any continuous function
• Three layers can approximate any computable function

18–21
(No Transcript)
22
Character Recognition Demo
23
BackProp Demo 1
• http://www.neuro.sfc.keio.ac.jp/masato/jv/sl/BP.html
• Local version: BP.html

24
Backprop Demo 2
• http://www.williewheeler.com/software/bnn.html
• Local version: bnn.html

25
Modeling the Brain
• Backpropagation is the most commonly used algorithm for supervised learning with feed-forward neural networks
• But most neuroscientists believe that the brain does not implement backprop
• Many other learning rules have been studied
• Many other learning rules have been studied

26
Hebbian Learning
• Alternative to backprop, used for unsupervised learning
• Increase the weight between two connected neurons whenever both fire simultaneously
• Neurologically plausible (Hebb, 1949)
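A minimal sketch of the Hebbian rule described above: strengthen the connection between two neurons when both are active at once. The binary-activity encoding, matrix layout, and rate are illustrative assumptions.

```python
# Hypothetical helper illustrating Hebbian weight updates.

def hebbian_update(w, pre, post, rate=0.1):
    """w: weight matrix w[i][j]; pre/post: lists of 0/1 firing activities."""
    for i, a_pre in enumerate(pre):
        for j, a_post in enumerate(post):
            # the weight grows only when pre- and post-neuron fire together
            w[i][j] += rate * a_pre * a_post
    return w

w = [[0.0, 0.0], [0.0, 0.0]]
hebbian_update(w, pre=[1, 0], post=[1, 1])
# only weights from the firing pre-neuron (index 0) increase
```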

27
Self-Organizing Maps
• Unsupervised method for clustering data
• Learns a winner-take-all network where just one output neuron is on for each cluster
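A hedged sketch of the winner-take-all idea: each output neuron holds a weight vector; only the neuron nearest the input ("the winner") turns on, and only its weights move toward the input. This omits the neighborhood function of a full self-organizing map for brevity; the data and rate are illustrative assumptions.

```python
# Hypothetical winner-take-all competitive learning, not the full SOM.

def winner(units, x):
    """Index of the unit whose weight vector is nearest to input x."""
    return min(range(len(units)),
               key=lambda k: sum((w - xi) ** 2 for w, xi in zip(units[k], x)))

def train_wta(data, units, rate=0.3, epochs=50):
    for _ in range(epochs):
        for x in data:
            k = winner(units, x)
            # pull only the winning unit's weights toward the input
            units[k] = [w + rate * (xi - w) for w, xi in zip(units[k], x)]
    return units

# two well-separated clusters; each unit should settle on one of them
data = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
units = train_wta(data, [[0.2, 0.2], [0.8, 0.8]])
```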

28
Why Self-Organizing?
29
Recurrent Neural Networks
• Include time-delayed feedback loops
• Can handle tasks over temporal data, such as sequence prediction
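The feedback idea can be sketched with a single recurrent sigmoid unit whose output at time t feeds back, with a one-step delay, as an extra input at time t+1. The weights and input sequence are illustrative assumptions.

```python
import math

# Hypothetical one-unit recurrent network with a self-feedback loop.

def run_recurrent(inputs, w_in=1.0, w_back=0.5, bias=0.0):
    """Feed a sequence through one sigmoid unit with delayed feedback."""
    outputs, prev = [], 0.0            # prev holds the time-delayed output
    for x in inputs:
        net = w_in * x + w_back * prev + bias
        prev = 1.0 / (1.0 + math.exp(-net))
        outputs.append(prev)
    return outputs

# identical inputs give different outputs because of the feedback state
ys = run_recurrent([1.0, 1.0, 1.0])
```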