CAP6938 Neuroevolution and Artificial Embryogeny Basic Concepts - PowerPoint PPT Presentation

Provided by: KennethO1
Learn more at: http://www.cs.ucf.edu
Transcript and Presenter's Notes
1
CAP6938 Neuroevolution and Artificial Embryogeny: Basic Concepts
  • Dr. Kenneth Stanley
  • January 11, 2006

2
We Care About Evolving ComplexitySo Why Neural
Networks?
  • Historical origin of ideas in evolving complexity
  • Representative of a broad class of structures
  • Illustrative of general challenges
  • Clear beneficiary of high complexity

3
How Do NNs Work?
(Diagram: two example network topologies, each mapping an input layer up to an output layer)
4
How Do NNs Work? Example
(Diagram: a robot-control network. Outputs, i.e. effectors/controls: Forward, Left, Right. Inputs, i.e. sensors: Front, Left, Right, Back.)
5
What Exactly Happens Inside the Network?
  • Network Activation
  • Neuron j's activation: H_j = σ(Σ_i w_ij · x_i), where σ is typically a sigmoid

(Diagram: a two-layer network with inputs X1, X2, hidden neurons H1, H2, outputs out1, out2, and connection weights w11, w12, w21, w22.)
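The layer activation above can be sketched in a few lines. This is a minimal illustration, not code from the course; the input and weight values are hypothetical stand-ins for the X1, X2, H1, H2, w11–w22 labels in the diagram.

```python
import math

def sigmoid(z):
    # The squashing function sigma applied to a neuron's summed input
    return 1.0 / (1.0 + math.exp(-z))

def activate_layer(inputs, weights_per_neuron):
    # Neuron j's activation: sigma of the sum over i of w_ij * x_i
    return [sigmoid(sum(w * x for x, w in zip(inputs, incoming)))
            for incoming in weights_per_neuron]

# Hypothetical values for the slide's X1, X2 -> H1, H2 layer
x = [1.0, 0.5]                       # X1, X2
w_hidden = [[0.8, -0.3],             # w11, w21: weights into H1
            [0.4, 0.9]]              # w12, w22: weights into H2
h = activate_layer(x, w_hidden)      # [H1, H2]
```

The output layer would be computed the same way, with the hidden activations as its inputs.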
6
Recurrent Connections
  • Recurrent connections are backward connections in
    the network
  • They allow feedback
  • Recurrence is a type of memory

7
Activating Networks of Arbitrary Topology
  • Standard method makes no distinction between
    feedforward and recurrent connections
  • The network is then usually activated once per
    time tick
  • The number of activations per tick can be
    thought of as the speed of thought
  • Thinking fast is expensive

(Diagram: a recurrent network. Inputs X1, X2 feed hidden neuron H via w11 and w21; H feeds the output via wH-out; a recurrent connection wout-H feeds the output back to H.)
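The once-per-tick scheme can be sketched for a net like the one in the diagram. This is an illustrative sketch only; the weight values are hypothetical, and the dictionary keys merely echo the diagram's labels.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tick(x1, x2, out_prev, w):
    # One activation per time tick: the recurrent link out -> H reads
    # the output's value from the PREVIOUS tick, so feedforward and
    # recurrent connections need no special-case handling.
    h = sigmoid(w["w11"] * x1 + w["w21"] * x2 + w["w_out_h"] * out_prev)
    out = sigmoid(w["w_h_out"] * h)
    return out

# Hypothetical weights for the slide's diagram
w = {"w11": 0.5, "w21": -0.4, "w_h_out": 1.2, "w_out_h": 0.7}
out = 0.0
history = []
for _ in range(5):          # "thinking" for five ticks on the same input
    out = tick(1.0, 0.0, out, w)
    history.append(out)
```

Running more ticks per input is "thinking faster", at the cost of more activations.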
8
Arbitrary Topology Activation Controversy
  • The standard method is not necessarily the best
  • It allows delay-line memory and a very simple
    activation algorithm with no special case for
    recurrence
  • However, all-at-once activation utilizes the
    entire net in each tick with no extra cost
  • This issue is unsettled
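The two schemes can be contrasted on a minimal one-hidden-neuron net. This is a sketch of the trade-off as described above, not any particular system's implementation; the weights and names are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights: x -> h ("xh"), h -> out ("ho"),
# and a recurrent link out -> h ("oh")
W = {"xh": 1.0, "ho": 1.0, "oh": 0.5}

def once_per_tick(x, prev):
    # Standard method: every connection reads last tick's value, so a
    # signal crosses one connection per tick (delay-line memory).
    h = sigmoid(W["xh"] * x + W["oh"] * prev["out"])
    out = sigmoid(W["ho"] * prev["h"])
    return {"h": h, "out": out}

def all_at_once(x, prev):
    # All-at-once: feedforward links read values computed THIS tick, so
    # the input traverses the whole net within one tick; only the
    # recurrent link still looks back one tick.
    h = sigmoid(W["xh"] * x + W["oh"] * prev["out"])
    out = sigmoid(W["ho"] * h)       # uses this tick's h, not last tick's
    return {"h": h, "out": out}

start = {"h": 0.0, "out": 0.0}
a = once_per_tick(1.0, start)
b = all_at_once(1.0, start)
```

On the first tick the standard method's output has not yet "seen" the input, while all-at-once already reflects it.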

9
The Big Questions
  • What is the topology that works?
  • What are the weights that work?

(Diagram: a network whose topology and weights are all unknown, each marked with a question mark.)
10
Problem Dimensionality
  • Each connection (weight) in the network is a
    dimension in a search space
  • The space you're in matters: optimization is not
    the only issue!
  • Topology defines the space

(Diagram: two topologies, one defining a 21-dimensional search space and one a 3-dimensional search space.)
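The point that topology alone fixes the dimensionality can be made concrete by counting connections. The 3-4-3 topology below is a hypothetical example, not one from the slides.

```python
# Each connection (weight) is one axis of the search space, so the
# topology alone determines the dimensionality of weight optimization.
def search_space_dims(connections):
    return len(connections)

# Hypothetical fully connected 3-4-3 feedforward topology: every input
# feeds every hidden neuron, every hidden neuron feeds every output.
conns = [("x%d" % i, "h%d" % j) for i in range(3) for j in range(4)]
conns += [("h%d" % j, "o%d" % k) for j in range(4) for k in range(3)]
dims = search_space_dims(conns)      # 3*4 + 4*3 = 24 dimensions
```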
11
High Dimensional Space is Hard to Search
  • 3-dimensional: easy
  • 100-dimensional: need a good optimization method
  • 10,000-dimensional: very hard
  • 1,000,000-dimensional: very, very hard
  • 100,000,000,000,000-dimensional: forget it

12
Bad News
  • Most interesting solutions are high-D:
  • Robotic Maid
  • World Champion Go Player
  • Autonomous Automobile
  • Human-level AI
  • Great Composer
  • We need to get into high-D space

13
A Solution (preview)
  • Complexification: instead of searching directly
    in the space of the solution, start in a smaller,
    related space, and build up to the solution
  • Complexification is evident in many examples of
    social and biological progress

14
So how do computers optimize those weights anyway?
  • Depends on the type of problem
  • Supervised: learn from input/output examples
  • Reinforcement Learning: sparse feedback
  • Self-Organization: no teacher
  • In general, the more feedback you get, the easier
    the learning problem
  • Humans learn language without supervision

15
Significant Weight Optimization Techniques
  • Backpropagation: change weights based on their
    contribution to error
  • Hebbian learning: change weights based on firing
    correlations between connected neurons
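The two update rules can be sketched for a single weight. These are minimal illustrations of the ideas named above, with a hypothetical learning rate; the gradient rule is shown in its simplest (delta-rule) form rather than full backpropagation through a multilayer net.

```python
def hebbian_update(w, pre, post, lr=0.1):
    # Hebbian rule: strengthen the weight in proportion to the
    # correlation (product) of pre- and post-synaptic firing.
    return w + lr * pre * post

def error_gradient_update(w, x, target, out, lr=0.1):
    # Backpropagation's core idea for one output weight: nudge the
    # weight against its contribution to the error (delta-rule form,
    # omitting the activation-derivative factor for simplicity).
    error = target - out
    return w + lr * error * x
```

Note the key difference: Hebbian learning needs no error signal, only the activity on each side of the connection.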

Homework:
  • Fausett pp. 39-80 (in Chapter 2)
  • Fausett pp. 289-316 (in Chapter 6)
  • Online intro chapter on RL
  • Optional: RL survey