Chapter 1: Supplementary ANS Overview / Machine Learning Overview

1
Chapter 1 Supplementary ANS Overview
Machine Learning Overview
  • C.S.Chen
  • 2007.03.08

2
Appendix
  • Machine Learning A Case Study
  • ANN Overview
  • Machine Learning Overview

3
Machine Learning A Case Study
  • Malfunctioning gearboxes have caused US Navy
    CH-46 helicopters to crash.
  • Although a gearbox malfunction can be diagnosed by
    a mechanic prior to a helicopter's takeoff, what
    if a malfunction occurs in flight, where it is
    impossible for a human to detect?
  • Machine Learning was shown to be useful in this
    domain and thus to have the potential of saving
    human lives!

4
How did it Work?
  • Consider the following common situation:
  • You are in your car, speeding along, when you
    suddenly hear a funny noise.
  • To prevent an accident, you slow down and either
    stop the car or bring it to the nearest garage.
  • The in-flight helicopter gearbox fault monitoring
    system was designed following the same idea. The
    difference, however, is that many gearbox
    malfunctions cannot be heard by humans and must be
    monitored by a machine.

5
So, Where's the Learning?
  • Imagine that, instead of driving your good old
    battered car, you were asked to drive a truck.
  • Would you know a funny noise from a normal
    one?
  • Well, probably not, since you've never driven a
    truck before!
  • While you drove your car all these years,
    you effectively learned what your car sounds like,
    and this is why you were able to identify that
    funny noise.

6
What did the Computer Learn?
  • Obviously, a computer cannot hear and can
    certainly not distinguish between a normal and an
    abnormal sound.
  • Sounds, however, can be represented as wave
    patterns, which are in fact series of real
    numbers.
  • And computers can deal with strings of numbers!
  • For example, a computer can easily be programmed
    to distinguish between strings of numbers that
    contain a 3 and those that don't.
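The slide's toy example can be written directly, since here the rule is known in advance and no learning is needed. A minimal sketch (the function name and sample strings are illustrative, not from the slides):

```python
# Hand-written rule, no learning required: separate number strings that
# contain a 3 from those that do not.

def contains_three(numbers):
    """Return True if the sequence contains the value 3."""
    return 3 in numbers

signals = [[1, 4, 3, 7], [2, 5, 8], [9, 3], [6, 6, 6]]
labels = [contains_three(s) for s in signals]
print(labels)  # [True, False, True, False]
```

The point of the slides that follow is precisely that, for gearbox noise, no such rule can be written down by hand.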

7
What did the Computer Learn? (Contd)
  • In the helicopter gearbox monitoring problem, the
    assumption is that functioning and malfunctioning
    gearboxes emit different noises. Thus, the
    strings of numbers that represent these noises
    have different characteristics.
  • The exact characteristics of these different
    categories, however, are unknown and/or are too
    difficult to describe.
  • Therefore, they cannot be programmed, but rather,
    they need to be learned by the computer.
  • There are many ways in which a computer can learn
    to distinguish between two patterns (e.g.,
    decision trees, neural networks, Bayesian
    networks, etc.), and that is the topic of this
    chapter.
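As a hedged sketch of what "learning the characteristics" might look like, here is a nearest-centroid classifier that infers the two categories purely from labeled examples; the numeric "noise signatures" and labels are invented for illustration and are not the Navy system's actual method:

```python
# Learn two noise categories from labeled samples by computing one
# centroid per class, then classify new signatures by nearest centroid.

def centroid(samples):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def classify(x, c_ok, c_bad):
    """Assign x to whichever centroid is closer (squared distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return "ok" if dist(c_ok) <= dist(c_bad) else "malfunction"

ok_noise  = [[0.1, 0.2, 0.1], [0.0, 0.3, 0.2]]   # hypothetical healthy gearboxes
bad_noise = [[0.9, 0.8, 0.7], [1.0, 0.7, 0.9]]   # hypothetical faulty gearboxes
c_ok, c_bad = centroid(ok_noise), centroid(bad_noise)
print(classify([0.05, 0.25, 0.2], c_ok, c_bad))  # ok
```

The "learning" is exactly the part the slides describe: the category characteristics (the centroids) are computed from data rather than programmed by hand.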

8
The Real and Artificial Neurons
9
  • ANNs are systems constructed to make use of some
    organizational principles resembling those of the
    human brain.
  • ANNs are good at tasks such as:
  • pattern matching and classification
  • function approximation
  • optimization
  • vector quantization
  • data clustering

10
The Neuron (Processing Element)
11
  • Models of ANNs are specified by three basic
    entities:
  • models of the neurons themselves,
  • models of synaptic interconnections and
    structures, and
  • training or learning rules for updating the
    connection weights.

12
  • There are three important parameters of a
    neuron:
  • I. An integration function associated with the
    inputs of a neuron to calculate the net input
    (for the M-P neuron): f_i = net_i
  • II. A set of links, describing the neuron's
    inputs, with weights W1, W2, ..., Wm
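In the simplest case, the integration function computes the net input as the weighted sum of the inputs over the links, net = Σ_j w_j x_j. A minimal sketch (names are mine):

```python
# Net input of a simple neuron: weighted sum of the inputs.

def net_input(weights, inputs):
    """net = sum_j w_j * x_j"""
    return sum(w * x for w, x in zip(weights, inputs))

print(net_input([0.5, -1.0, 2.0], [1, 1, 1]))  # 1.5
```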

13
III. Activation Function
  • An activation function a(f) outputs an activation
    value as a function of the net input f. Some
    commonly used activation functions are:
  • Step function
  • Hard limiter (threshold function)
  • Ramp function
  • Unipolar sigmoid function
  • Bipolar sigmoid function
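The listed functions can be sketched as follows; the threshold, clipping range, and steepness parameter lambda are illustrative choices, not values from the slides:

```python
import math

# Common activation functions applied to a net input f.

def step(f):
    return 1 if f >= 0 else 0                    # step / hard limiter at 0

def ramp(f):
    return max(0.0, min(1.0, f))                 # linear, clipped to [0, 1]

def unipolar_sigmoid(f, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * f))      # output in (0, 1)

def bipolar_sigmoid(f, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * f)) - 1.0  # output in (-1, 1)

print(step(-0.3), ramp(0.4), unipolar_sigmoid(0.0))  # 0 0.4 0.5
```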

14
Learning Rules
  • Generally, we can classify learning in ANNs into
    two broad classes:
  • Parameter learning, which is concerned with
    updating the connection weights, and
  • Structure learning, which focuses on changes
    in the network structure, including the number of
    PEs and their connection types.
  • These two kinds of learning can be performed
    simultaneously or separately.

15
  • In weight learning, we have to develop learning
    rules that efficiently guide the weight matrix W
    toward a desired matrix that yields the desired
    network performance. In general, learning rules
    are classified into three categories:
  • Supervised Learning (learning with a teacher)
  • Reinforcement Learning (learning with a critic)
  • Unsupervised learning

16
In supervised learning, when an input is applied to
an ANN, the corresponding desired response of the
system is given. The ANN is supplied with a
sequence of examples (x1, d1), (x2, d2), ..., (xk,
dk) of desired input-output pairs. In
reinforcement learning, less detailed information
is available than in supervised learning: there is
only a single bit of feedback indicating whether
the output is right or wrong. That is, the critic
just says how good or how bad a particular output
is and provides no hint as to what the right
answer should be.
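Supervised learning on a sequence of pairs (x_k, d_k) can be sketched with the classic perceptron weight-update rule, one instance of the parameter learning described above; the AND data set, learning rate, and epoch count are illustrative choices:

```python
# Perceptron learning: for each (x, d) pair, the teacher's desired output
# d drives the weight update w <- w + eta * (d - y) * x.

def train_perceptron(pairs, eta=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, d in pairs:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = d - y                     # teacher signal: desired - actual
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

pairs = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b = train_perceptron(pairs)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
print([predict(x) for x, _ in pairs])  # [0, 0, 0, 1]
```

A reinforcement-learning variant would replace `err = d - y` with a reward signal saying only whether `y` was right or wrong, never what `d` was.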
17
  • In unsupervised learning, there is no teacher to
    provide any feedback information. The network must
    discover for itself patterns, features,
    regularities, correlations, or categories in the
    input data and code for them in the output.
  • While discovering these features, the network
    undergoes changes in its parameters; this process
    is called self-organization.
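A minimal sketch of such self-organizing behavior, assuming simple competitive learning: two prototype units move toward the inputs they win and thereby discover two clusters with no teacher. The data, learning rate, and initialization are my own illustrative choices:

```python
# Competitive (winner-take-all) learning: only the nearest prototype is
# updated for each input, so the prototypes self-organize into cluster
# centers without any desired outputs being supplied.

def competitive_learning(data, protos, eta=0.5, epochs=10):
    for _ in range(epochs):
        for x in data:
            # winner = index of the prototype nearest to the input
            k = min(range(len(protos)), key=lambda i: abs(protos[i] - x))
            protos[k] += eta * (x - protos[k])   # move winner toward input
    return protos

data = [0.9, 1.1, 1.0, 4.9, 5.1, 5.0]            # two obvious clusters
protos = competitive_learning(data, [0.0, 6.0])
print([round(p, 1) for p in protos])  # [1.0, 5.0]
```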

18
Three Categories of Learning
Supervised Learning
Reinforcement Learning
Unsupervised Learning
19
What is Machine Learning?
  • Machine Learning allows computers to learn from
    their experiences and from gathered data.
  • Machines are good at gathering data and
    performing complex analysis.
  • "The goal of machine learning is to build
    computer systems that can adapt and learn from
    their experience." (Tom Dietterich)

20
Machine Learning
  • Learning:
  • Acquiring a function from inputs to values, based
    on past input-value pairs, that generalizes to new
    inputs.
  • Learn concepts, classifications, values.
  • Identify regularities in data.
  • Learning as Search
  • A hypothesis is a guess at a function that can be
    used to account for the inputs.
  • A hypothesis space is the space of all possible
    candidate hypotheses.
  • Learning is a search through the hypothesis space
    for a good hypothesis.
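The "learning as search" view can be sketched by enumerating a small hypothesis space of threshold rules h_t(x) = (x >= t) and keeping the hypothesis that best accounts for the observed input-value pairs; the data and the space of thresholds are illustrative:

```python
# Learning as search: score every candidate hypothesis in the space
# against the data and return the best one.

data = [(1, 0), (2, 0), (3, 1), (4, 1)]          # (input, value) pairs

def errors(t):
    """Number of training pairs that hypothesis h_t(x) = (x >= t) gets wrong."""
    return sum((x >= t) != bool(v) for x, v in data)

hypothesis_space = [0, 1, 2, 3, 4, 5]            # candidate thresholds
best = min(hypothesis_space, key=errors)
print(best, errors(best))  # 3 0
```

Real learners search vastly larger spaces (all weight matrices, all decision trees), so they use guided search rather than exhaustive enumeration, but the picture is the same.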

21
Another Definition of Machine Learning
  • Machine Learning algorithms discover the
    relationships between the variables of a system
    (input, output, and hidden) from direct samples of
    the system.
  • These algorithms originate from many fields:
  • statistics, mathematics, theoretical computer
    science, physics, neuroscience, etc.

22
A Generic System
Input Variables
Hidden Variables
Output Variables
23
Inductive Learning
  • Suppose the underlying problem domain is
    described by a function f.
  • Given pairs ⟨x, f(x)⟩,
  • compute a hypothesis h that approximates f as
    well as possible given the presented data.
  • In general the inputs under-constrain the
    function h, so we have to choose; the way that
    choice is made is called the bias.

24
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting
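The curve-fitting example can be sketched with a straight-line hypothesis fit by least squares; the sample points are mine and lie exactly on f(x) = 2x + 1, so here h is consistent with f on every example:

```python
# Construct h (a line, fit by closed-form least squares) to agree with f
# on the training set, then use h to predict the value at a new input.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                 # f(x) = 2x + 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
h = lambda x: slope * x + intercept

print(slope, intercept, h(10))  # 2.0 1.0 21.0
```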

25
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting

26
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting

27
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting

28
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting

29
Inductive learning method
  • Construct/adjust h to agree with f on training
    set
  • (h is consistent if it agrees with f on all
    examples)
  • E.g., curve fitting
  • Ockham's razor: prefer the simplest hypothesis
    consistent with the data.
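Ockham's razor acts as a tie-breaker among hypotheses that all fit the data: prefer the simplest. A hedged sketch with an invented data set and an invented complexity measure; note that memorizing the data is always consistent but maximally complex, so the razor rejects it:

```python
# Among the hypotheses consistent with the data, return the simplest.

data = [(2, True), (3, False), (4, True), (7, False)]  # secretly "x is even"

# (name, complexity, function); complexity values are illustrative.
hypotheses = [
    ("always-true", 1, lambda x: True),
    ("x is even",   2, lambda x: x % 2 == 0),
    ("lookup",      len(data), lambda x: dict(data).get(x, False)),
]

consistent = [(name, c) for name, c, h in hypotheses
              if all(h(x) == y for x, y in data)]
name, _ = min(consistent, key=lambda nc: nc[1])
print(name)  # x is even
```

"always-true" is simpler still but inconsistent, so it never enters the comparison; the razor only arbitrates between "x is even" and the memorized lookup table.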