1
Overview of Minicon Project
  • Condition monitoring and diagnostics for
    elevators
  • Dale Addison
  • CENTRE FOR ADAPTIVE SYSTEMS
  • University of Sunderland
  • School of Computing Technology

2
Project overview
  • MINICON
  • Minimum Cost Maximum Benefit Condition Monitoring
  • Framework 5, Competitive and Sustainable Growth
    programme (project value €3 million)
  • Two aspects
  • Condition monitoring of elevators
  • Condition monitoring of high-speed machine tools
    (>15,000 rpm)

3
Project partners
  • Kone (4th largest supplier of elevators in the
    world) (Finland)
  • VTT (Finland)
  • Goratu (Bilbao, Spain)
  • Tekniker (Bilbao, Spain)
  • Rockwell Manufacturing (Belgium, Czech Republic)
  • Technical University of Tallinn (Estonia)
  • IB Krates (Estonia)
  • Monitran Ltd (UK)
  • University of Sunderland (UK)
  • Entek (UK)
  • Truth (Athens, Greece)
  • ICCS/NTUA (Athens, Greece)

4
(Diagram: the plant or machine to be monitored, e.g. an elevator or
machine tool, feeds application software and a database with a second
level of intelligence, supported by a prior-knowledge intelligence
system; a maintenance management system adds the third, human, level
of intelligence, providing decision support; a service engineer is
notified via hand-held PC, email or paging device, etc.)
5
(No Transcript)
6
Neural networks
  • Adaptive technology based upon the neurons found
    in the human brain
  • Neurons are connected together into networks and
    send signals to each other
  • Incoming signals are summed and, when they exceed
    a certain threshold, the neuron fires (sends a
    signal to other neurons)
  • Networks can be trained using algorithms which
    adjust the network in response to the data.

7
A Neural network
  • Multi-layer perceptron
  • Each neuron performs a biased weighted sum of its
    inputs.
  • This activation level is passed through a
    transfer function (usually sigmoidal) to produce
    the neuron's output.
  • Neurons are arranged in a layered feed-forward
    topology (a sketch of a single neuron follows
    below).
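To make the biased weighted sum concrete, here is a minimal sketch of a single sigmoidal neuron in Python/NumPy (illustrative code, not from the original deck; the weights and bias are arbitrary):

```python
import numpy as np

def sigmoid(a):
    """Sigmoidal transfer function."""
    return 1.0 / (1.0 + np.exp(-a))

def neuron_output(x, w, b):
    """Biased weighted sum of the inputs, passed through the transfer function."""
    activation = np.dot(w, x) + b   # biased weighted sum
    return sigmoid(activation)

# Example: a neuron with three inputs
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w, b=0.2))
```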

8
Multi-layer perceptrons
  • Network weights and thresholds are adjusted by a
    training algorithm which alters the weights
    according to the training data.
  • This minimises the difference between the
    network's outputs and the target outputs (see the
    training sketch below)
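The deck does not name the training algorithm, so the sketch below assumes plain gradient descent with backpropagation of the squared error through one hidden layer; the data is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Stand-in data: 4 inputs, 1 binary target
X = rng.normal(size=(100, 4))
T = (X[:, :1] > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for epoch in range(500):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: gradients of the mean squared error
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # Adjust weights and thresholds according to the training data
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

print("final MSE:", float(((Y - T) ** 2).mean()))
```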

9
Dimensionality reduction techniques
  • Principal Components Analysis
  • Non-linear PCA (auto-associative networks)
  • Weight regularisation techniques (Weigend method)
  • Genetic algorithms

10
Non-linear principal components analysis (auto-associative network)
  • A neural network which uses its inputs as its
    outputs
  • Has at least one hidden layer with fewer neurons
    than the input and output layers, which have the
    same number of neurons
  • Data is effectively squeezed through a
    lower-dimensional representation

11
Non-linear Principal components analysis
12
Auto-associative training
  • Produce an auto-associative training set (inputs
    map to outputs)
  • Create an auto-associative MLP
  • 5 layers
  • The middle hidden layer has fewer units than the
    input and output layers
  • The other two hidden layers have a relatively
    large number of neurons, and both should have the
    same number
  • Train the network on the data set
  • Delete the last two layers
  • Collect the reduced-dimensionality input data,
    replace the original input data with it, and
    retain the original output variables
  • Create a second neural network and train it on
    the reduced data set (a sketch of this recipe
    follows below).
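A minimal sketch of this recipe, written with PyTorch purely for brevity (an assumption, the project itself predates it; layer sizes and data are illustrative):

```python
import torch
import torch.nn as nn

n_in, n_hidden, n_bottleneck = 12, 20, 3    # illustrative sizes

# 5 layers: input, large hidden, small middle, large hidden, output
# (input and output layers have the same number of neurons)
encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid(),
                        nn.Linear(n_hidden, n_bottleneck))
decoder = nn.Sequential(nn.Sigmoid(), nn.Linear(n_bottleneck, n_hidden),
                        nn.Sigmoid(), nn.Linear(n_hidden, n_in))
net = nn.Sequential(encoder, decoder)

X = torch.randn(500, n_in)                  # stand-in for the sensor data
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), X)  # inputs are also the targets
    loss.backward()
    opt.step()

# "Delete the last two layers": keep only the encoder, and collect the
# reduced-dimensionality inputs for training the second network
X_reduced = encoder(X).detach()
print(X_reduced.shape)  # torch.Size([500, 3])
```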

13
Use of Genetic Algorithms
  • GAs are an optimisation technique which uses
    Darwin's concept of survival of the fittest to
    breed successively better strings according to an
    objective function
  • In this problem, that function helps to determine
    subsets of inter-related bits (correlated or
    mutually required inputs); each bit flags whether
    one input variable is used (a toy sketch follows
    below)
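A toy sketch of GA-based input selection; the `fitness` function here is a stand-in, since the deck does not give the project's actual objective function:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, pop_size, n_gens = 10, 30, 40

def fitness(mask, X, y):
    """Stand-in objective: reward input subsets that correlate with the
    target, with a small penalty per input retained."""
    if mask.sum() == 0:
        return -np.inf
    score = abs(np.corrcoef(X[:, mask.astype(bool)].mean(axis=1), y)[0, 1])
    return score - 0.01 * mask.sum()

# Stand-in data: only inputs 0 and 3 matter
X = rng.normal(size=(200, n_inputs))
y = X[:, 0] + X[:, 3] + 0.1 * rng.normal(size=200)

pop = rng.integers(0, 2, size=(pop_size, n_inputs))  # one bit per input
for gen in range(n_gens):
    scores = np.array([fitness(ind, X, y) for ind in pop])
    # Survival of the fittest: the top half become parents
    parents = pop[np.argsort(scores)[-pop_size // 2:]]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_inputs)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(n_inputs) < 0.05           # mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))
```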

14
Sensitivity analysis
  • The sensitivity of a particular variable is
    determined by running the network on a set of
    test cases and accumulating the network error.
  • The network is then run on the same cases, but
    with a specific input variable withheld and a
    substitute value used in its place, and the
    network error is accumulated again.
  • The sensitivity is the ratio of the error with
    missing-value substitution to the original error
    (see the sketch below).
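A sketch of this ratio computation, assuming a trained network exposed through a hypothetical `model.predict` interface and mean substitution for the withheld variable:

```python
import numpy as np

def sensitivities(model, X, y):
    """Ratio of the error with each input replaced by its mean to the
    original error; ratios well above 1 mark important inputs."""
    base_error = np.mean((model.predict(X) - y) ** 2)
    ratios = []
    for j in range(X.shape[1]):
        X_sub = X.copy()
        X_sub[:, j] = X[:, j].mean()   # missing-value substitution
        err = np.mean((model.predict(X_sub) - y) ** 2)
        ratios.append(err / base_error)
    return np.array(ratios)
```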

15
Neural Network weight regularisation
  • Promotes low curvature by encouraging small
    weights to model the feature surface
  • Adds an extra term to the error function which
    penalises gratuitously large weights
  • Prefers to tolerate a mixture of large and small
    weights rather than medium-sized weights (see the
    penalty sketch below)
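The behaviour described matches Weigend's weight-elimination penalty, so the sketch below assumes that form (the deck does not give the formula; `w0` and `lam` are illustrative constants):

```python
import numpy as np

def weight_elimination_penalty(weights, w0=1.0, lam=1e-3):
    """Extra term added to the error function. Each term
    w^2 / (w0^2 + w^2) saturates near 1 for large weights, so a mixture
    of large and near-zero weights costs less than many medium-sized
    weights, while gratuitously large weights are still penalised."""
    w = np.asarray(weights)
    return lam * np.sum(w ** 2 / (w0 ** 2 + w ** 2))

# total_error = data_error + weight_elimination_penalty(all_weights)
```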

16
Neural networks used
  • Multi-Layer Perceptrons (MLP)
  • Radial Basis Functions (RBF)
  • Self-Organising Feature Maps (Kohonen)
  • Experiments were run on several different data
    sets, recorded over a number of time periods
    (one day, two days, one week)

17
Radial basis function nets
  • Feature space is divided up using circles
    (hyperspheres)
  • Each is characterised by a centre and a radius
  • The response surface is a Gaussian (bell-shaped
    curve)

18
Radial basis function networks
  • An RBF network consists of a single hidden layer
    of radial units, each modelling a Gaussian
    response surface
  • Training of RBFs:
  • Centres and deviations of the radial units are
    set
  • The linear output layer is optimised
  • Centres are assigned to reflect the clustering of
    the data

19
Radial basis function networks
  • Centre assignment methods
  • Sub-sampling: a random subset of training points
    is copied to the radial units
  • K-means algorithm: a set of points is selected
    and placed at the centres of clusters of the
    training data

20
Radial basis function networks
  • Deviation assignment (determines how sharply
    peaked the Gaussian functions are)
  • Explicit: chosen by hand
  • Isotropic: heuristic method using the number of
    centres and the volume of space occupied
  • K-NN: each unit's deviation is individually set
    to the mean distance to its K nearest neighbours
  • Small in tightly packed areas (preserves detail)
  • Larger in sparse areas of the space (a combined
    RBF sketch follows below)
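Pulling slides 17-20 together, a compact sketch of an RBF network with sub-sampled centres, K-NN deviations and a least-squares linear output layer (sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_rbf(X, y, n_centres=10, k=3):
    # Centre assignment by sub-sampling: copy random training points
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    # Deviation assignment by K-NN: mean distance to the k nearest centres
    d = np.linalg.norm(centres[:, None] - centres[None, :], axis=2)
    d.sort(axis=1)
    sigmas = d[:, 1:k + 1].mean(axis=1)     # skip the zero self-distance
    # Gaussian (bell-shaped) response of each radial unit
    dist = np.linalg.norm(X[:, None] - centres[None, :], axis=2)
    Phi = np.exp(-dist ** 2 / (2 * sigmas ** 2))
    # Linear output layer optimised by least squares
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, sigmas, w

X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
centres, sigmas, w = train_rbf(X, y)
```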

21
Artificial Neuron and the Kohonen SOFM
22
Application to MINICON project
  • The picture shows the test elevator at the KONE
    site; sensors were mounted at various points and
    the readings used as inputs to the neural
    networks
  • Self-Organising Feature Maps and Multi-Layer
    Perceptrons were trained on a variety of elevator
    data.

23
Results
24
Results for Genetic Algorithm
25
Results for Sensitivity Analysis
26
Results of weight regularisation techniques
27
Results for auto association
28
Best performing technique per data set
29
Final Networks
  • Two types of neural networks were used in the
    final product
  • Multi-Layer Perceptrons (with weight
    regularisation applied)
  • SOFMs
  • Both networks require different numbers of inputs
    (5-15) depending on the data set

30
Alternative methods
  • Use of statistical techniques
  • Mean, Kurtosis, Standard Deviation
  • For example, the mean of one parameter shows a
    significant rise for data set number 5; since all
    the other data sets show a consistent mean, this
    rise appears highly significant (see the sketch
    below).
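A sketch of these summary statistics, with synthetic data standing in for the monitored parameter (scipy supplies the kurtosis):

```python
import numpy as np
from scipy.stats import kurtosis

def summarise(signal):
    """Summary statistics of one monitored parameter."""
    return {"mean": np.mean(signal),
            "std": np.std(signal),
            "kurtosis": kurtosis(signal)}   # excess kurtosis by default

# Five consistent data sets plus one (like data set 5) with a shifted mean
datasets = [np.random.default_rng(i).normal(loc=0.0, size=500) for i in range(5)]
datasets.append(np.random.default_rng(9).normal(loc=1.0, size=500))
print([round(summarise(d)["mean"], 2) for d in datasets])
```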

31
Use of mean value
32
Conclusions
  • Removing input data does not improve
    classification performance
  • Statistical techniques are not consistent enough
    to make reliable estimates
  • MLPs and SOFMs are the best-performing neural
    network techniques
  • MLP performance can be improved by applying
    weight regularisation techniques.