CS 4700: Foundations of Artificial Intelligence

Transcript and Presenter's Notes



1
CS 4700: Foundations of Artificial Intelligence
  • Prof. Carla P. Gomes
  • gomes@cs.cornell.edu
  • Module: Intro to Neural Networks
  • (Reading: Chapter 20.5)

2
Neural Networks
  • Rich history, starting in the early forties with
    McCulloch and Pitts's model of artificial neurons
    (McCulloch and Pitts 1943).
  • Two views:
  • Modeling the brain
  • Just a representation of complex functions
    (continuous; contrast with decision trees)
  • Much progress on both fronts.
  • Has drawn interest from neuroscience, cognitive
    science, AI, physics, statistics, and CS/EE.

3
Computer vs. Brain
[Chart: computer processor speed (MIPS) vs.
information storage (megabytes), circa 1997]
4
Increasing Compute Power: Moore's Law
In 1965, Gordon Moore, Intel co-founder, predicted
that the number of transistors on a chip would
double about every two years (popularly known as
Moore's Law). Intel has kept that pace for nearly
40 years.
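As a rough illustration (not from the slides), the doubling rule can be written as a one-line compound-growth formula. The starting count and time span in the example are hypothetical:

```python
# Illustrative sketch of Moore's Law as compound growth:
# the transistor count doubles once per doubling period.
def projected_transistors(initial, years, doubling_period=2):
    """Project a transistor count forward by `years`."""
    return initial * 2 ** (years / doubling_period)

# Hypothetical example: 40 years of doubling every two years
# multiplies the count by 2**20 (about a million-fold).
print(projected_transistors(2300, 40))  # 2300 * 2**20
```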
5
Computer Power / Cost
[Chart: computer processor speed (MIPS) per unit
cost, circa 1997]
6
Neural Networks
  • Computational model inspired by the brain,
    based on the interaction of multiple connected
    processing elements.
  • (Also known as connectionism, parallel
    distributed processing, or neural computation.)

Brain
The brain's information and processing power
emerges from a highly interconnected network of
neurons.
  • When the inputs to a neuron reach some
    threshold, an action potential (electric pulse)
    is sent along the axon to the outputs.
  • Connections between cells are excitatory or
    inhibitory and may change over time.
  • Around 10^11 neurons and 10^14 synapses, with a
    cycle time of 1-10 ms.
7
Biological Neurons
  • The brain is made up of neurons, which have
  • A cell body (soma)
  • Dendrites (inputs)
  • An axon (outputs)
  • Synapses (connections between cells)
  • Synapses can be excitatory or inhibitory and may
    change over time
  • When the inputs reach some threshold, an action
    potential (electric pulse) is sent along the
    axon to the outputs
  • There are around 10^11 neurons and 10^14
    synapses, with a cycle time of 1-10 ms.
  • Signals are noisy "spike trains" of electrical
    potential
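The threshold behaviour described above is what McCulloch and Pitts abstracted into their model neuron. A minimal sketch, with weights and threshold hand-picked rather than learned:

```python
# A McCulloch-Pitts-style threshold unit: weighted inputs
# are summed and compared against a threshold, loosely
# mirroring how a neuron "fires" an action potential.
def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input unit computing logical AND
# (weights and threshold chosen by hand, not learned).
assert threshold_unit([1, 1], [1, 1], 2) == 1
assert threshold_unit([1, 0], [1, 1], 2) == 0
```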

8
Issue: The Hardware
  • The brain
  • a neuron, or nerve cell, is the basic
    information-processing unit (10^11 neurons)
  • many more synapses (10^14) connect the neurons
  • cycle time: 10^-3 seconds (1 millisecond)
  • How complex can we make computers?
  • 10^8 or more transistors per CPU
  • supercomputer: hundreds of CPUs, 10^10 bits of
    RAM
  • cycle times on the order of 10^-9 seconds
    (1 nanosecond)

9
Compute Power vs. Brain Power
  • In the near future we can have computers with
    as many processing elements as our brain, but
  • far fewer interconnections (wires or synapses)
  • much faster updates (1 nanosecond, 10^-9 s, vs.
    the brain's 1 millisecond, 10^-3 s)
  • Fundamentally different hardware may require
    fundamentally different algorithms!
  • Very much an open question.
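A back-of-envelope calculation with the figures above (10^14 synapses updating every ~10^-3 s, vs. one operation per 10^-9 s CPU cycle) shows the scale of the gap. This is purely illustrative and ignores everything except the slide's numbers:

```python
# Rough throughput comparison, using only the slide's figures.
synapses = 1e14          # brain: number of connections
brain_cycle = 1e-3       # seconds per neural update
cpu_cycle = 1e-9         # seconds per CPU cycle

brain_updates_per_s = synapses / brain_cycle   # ~1e17
cpu_ops_per_s = 1 / cpu_cycle                  # ~1e9, one op per cycle
print(brain_updates_per_s / cpu_ops_per_s)     # ~1e8
```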

10
Why Neural Nets?
  • Motivation: solving problems under constraints
    similar to those of the brain may lead to
    solutions to AI problems that would otherwise
    be overlooked.
  • Individual neurons operate very slowly
    → massively parallel algorithms
  • Neurons are failure-prone devices
    → distributed and redundant representations
  • Neurons promote approximate matching
    → less brittle

11
Connectionist Models of Learning
  • Characterized by
  • A large number of very simple neuron-like
    processing elements.
  • A large number of weighted connections between
    the elements.
  • Highly parallel, distributed control.
  • An emphasis on learning internal representations
    automatically.

But of course the interconnectivity is not really
at brain scale.
12
Autonomous Land Vehicle In a Neural Network
(ALVINN)
  • ALVINN learns to drive an autonomous vehicle at
    normal speeds on public highways.

ALVINN is a perception system that learns to
control the NAVLAB vehicles by watching a person
drive.
(Pomerleau et al., 1993)
13
ALVINN drives at 70 mph on highways.
Input: a 30 x 32 grid of pixel intensities from a
camera.
Each output unit corresponds to a particular
steering direction. The most highly activated one
gives the direction to steer.
14
What kinds of problems are suitable for neural
networks?
  • Sufficient training data is available
  • Long training times are acceptable
  • It is not necessary for humans to understand
    the learned target function or hypothesis

→ neural networks act as "magic" black boxes
15
Tasks
  • Function approximation, or regression analysis,
    including time series prediction and modeling.
  • Classification, including pattern and sequence
    recognition, novelty detection and sequential
    decision making.
  • Data processing, including filtering, clustering,
    blind signal separation and compression.

16
Examples of Application Areas
  • Application areas include
  • System identification and control (vehicle
    control, process control),
  • Game-playing and decision making (backgammon,
    chess, racing),
  • Pattern recognition (radar systems, face
    identification, object recognition, etc.)
  • Sequence recognition (gesture, speech,
    handwritten text recognition),
  • Medical diagnosis
  • Financial applications
  • Data mining (or knowledge discovery in databases,
    "KDD"),
  • Visualization
  • E-mail spam filtering.