Transcript and Presenter's Notes

Title: Two Examples Of Indoor And Outdoor Surveillance Systems: Motivation, Design, And Testing


1
Two Examples Of Indoor And Outdoor Surveillance
Systems: Motivation, Design, And Testing
  • Ioannis Pavlidis
  • Vassilios Morellas
  • Honeywell Laboratories

2
Graduate Seminar in CIS: Video Processing and
Mining, CIS 750, Spring 2003
  • Presented by
  • Ken Gorman

3
Agenda
  • CCN - Cooperative Camera Network
  • DETER - Detection of Events for Threat Evaluation
    and Recognition

4
Cooperative Camera Network (CCN)
  • Network of cooperating cameras
  • Controlled by computer vision software
  • Features
  • A mechanism for counting the people present in
    various parts of the building
  • An automatic or semi-automatic mechanism for
    tagging people
  • Reporting of tagged individuals' whereabouts
    whenever they are within the field of view

5
Major Components
  • COTS Hardware & Software Set-Up
  • Change Detection
  • Counting People
  • Tracking People

6
State of the Art
  • Active badges
  • small, electronic devices worn by people
  • transmit an ID signal to receivers placed around
    the building
  • ID signal corresponds to the identity of the
    badge's wearer
  • received signals are used to compute the wearer's
    location

7
Badge Examples
  • Infrared-transmitting badges at Olivetti Research
    and Xerox PARC
  • Olivetti ultrasonic badges at AT&T Laboratories
    in Cambridge, UK
  • Radio frequency tags from PinPoint
  • Wired and unwired motion trackers from
  • Ascension Technology
  • Polhemus

8
Disadvantages
  • Consumers unwilling to wear badges
  • Cumbersome

9
Alternatives
  • ????

10
Cameras
  • Pro: Leaves users unencumbered
  • Con: Not as reliable as badge methods

11
Camera Arrangement
  • Overlapping Fields of View

12
Fundamentals
  • Imaging Technologies for Surveillance Systems
  • Image Segmentation
  • Tracking Mechanism
  • Multi-Camera Fusion
  • Threat Assessment

13
Multi-Normal Pixel Representation
14
Initialization
  • Goal - provide statistically valid values for the
    pixels corresponding to the scene.
  • Starting point for the dynamic process of
    foreground and background awareness

15
Initialization
  • Methods Used
  • K-Means: better for plazas and malls
  • Expectation-Maximization: better for changing
    weather conditions [1] (see the sketch below)
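
A minimal sketch of what a K-Means initialization of one pixel's components could look like. This is not the authors' code; the use of scikit-learn, the `samples` layout, and the fallback variance are assumptions for illustration only.

```python
# Sketch: initialize one pixel's mixture components with K-Means (assumption-based).
import numpy as np
from sklearn.cluster import KMeans

K = 5  # number of Normal components per pixel (slide 16)

def init_pixel_model(samples, k=K):
    """Estimate initial weights, means, and variances for one pixel.

    samples: (T, 3) array of RGB observations of a single pixel over an
    initial training sequence. Each component keeps a single variance
    shared by R, G, and B (diagonal covariance, slide 19).
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(samples)
    weights = np.bincount(km.labels_, minlength=k) / len(samples)
    means = km.cluster_centers_
    variances = np.array([
        samples[km.labels_ == i].var() if np.any(km.labels_ == i) else 1.0
        for i in range(k)
    ])
    return weights, means, variances
```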

16
Image Segmentation
  • Each pixel is considered as a mixture of five
    time-varying trivariate Normal distributions
  • P(x_t) = Σ_{i=1}^{5} w_{i,t} N(x_t | μ_{i,t}, Σ_{i,t})

17
Image Segmentation
  • The term N(x_t | μ_{i,t}, Σ_{i,t}) represents a
    trivariate Normal distribution with mean vector
    μ_{i,t} and variance-covariance matrix Σ_{i,t}

18
Image Segmentation
  • The distributions are trivariate to account for
    the three component colors (red, green, and blue)
    of each pixel in the general case of a color
    camera.

19
Image Segmentation
  • For simplification, the variance-covariance
    matrix Σ_{i,t} is assumed to be diagonal, with the
    x_R, x_G, x_B components having identical variance
    within each Normal component, but not across all
    components (see the sketch below)
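
Under this simplification each component's covariance reduces to σ_i² I, so the per-pixel density is cheap to evaluate. The sketch below is an illustrative evaluation of that density, not the authors' code; the array shapes are assumptions.

```python
# Sketch: per-pixel mixture density with spherical (sigma_i^2 * I) components.
import numpy as np

def pixel_mixture_density(x, weights, means, variances):
    """Evaluate P(x) = sum_i w_i * N(x | mu_i, sigma_i^2 * I) for one RGB pixel.

    x: (3,) RGB value; weights: (K,); means: (K, 3); variances: (K,) with one
    sigma_i^2 shared by the three colour channels of component i.
    """
    d = np.asarray(x, dtype=float) - means       # (K, 3) deviations from each mean
    sq_mahal = (d ** 2).sum(axis=1) / variances  # squared Mahalanobis distance, diagonal case
    norm = (2.0 * np.pi * variances) ** 1.5      # 3-D normalising constant (2*pi*sigma^2)^(3/2)
    return float(np.sum(weights * np.exp(-0.5 * sq_mahal) / norm))
```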

20
Update Cycle
  • Distributions are ordered based upon their
    weights.
  • Member of a Distribution
  • Distribution is in background or foreground
  • Jeffreys [2] algorithm used for matching pixel
    to distribution
  • Distributions are Updated

21
Matching
  • We use the Jeffreys divergence measure (J) to
    determine whether the incoming data point belongs
    to one of the existing distributions
  • The Jeffreys number measures how unlikely it is
    that one distribution (g) was drawn from the
    population represented by the other (f)
  • K is a prespecified cut-off value: a match is
    declared when J falls below K (see the sketch
    below)
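
For two Normals with spherical covariances, the Jeffreys divergence (the symmetrised Kullback-Leibler divergence) has a closed form. The sketch below is an assumed concrete version of the matching test under that spherical-covariance simplification, not necessarily the exact formulation used in the system; the parameter names are illustrative.

```python
# Sketch: Jeffreys divergence J(f, g) = KL(f || g) + KL(g || f) between two
# trivariate Normals with spherical covariances var_f * I and var_g * I.
import numpy as np

def jeffreys_divergence(mu_f, var_f, mu_g, var_g, dim=3):
    diff_sq = float(np.sum((np.asarray(mu_f) - np.asarray(mu_g)) ** 2))
    return 0.5 * (dim * (var_f / var_g + var_g / var_f) - 2 * dim
                  + diff_sq * (1.0 / var_f + 1.0 / var_g))

def matches(mu_f, var_f, mu_g, var_g, cutoff_K):
    """Incoming data (modelled as f) matches component g if J falls below K."""
    return jeffreys_divergence(mu_f, var_f, mu_g, var_g) < cutoff_K
```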

22
Update - Match Found
  • Incoming pixel state is labeled either background
    or foreground
  • All the parameters of the matched distribution
    are updated according to the method of moments
  • Only the weights of the other distributions are
    updated

23
Update - No Match
  • Incoming pixel state is labeled foreground
  • Last distribution in the ordered list is replaced
  • All the parameters of the new distribution are
    updated
  • Only the weights of the other distributions are
    updated (see the sketch below)
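
The update rules of the two preceding slides can be outlined roughly as follows. This is a sketch in the spirit of the adaptive mixture update of reference [3], not the authors' exact method-of-moments equations; the learning rate ALPHA and the initial variance given to a replaced component are assumed values.

```python
# Sketch: one update of a pixel's components after the matching step.
import numpy as np

ALPHA = 0.01  # assumed learning rate

def update_pixel(x, weights, means, variances, matched):
    """x: (3,) incoming RGB value; matched: index of the matched component,
    or None when no match was found."""
    x = np.asarray(x, dtype=float)
    if matched is None:
        # No match: the pixel state is foreground; the last distribution in
        # the weight-ordered list is replaced by one centred on x.
        matched = int(np.argmin(weights))
        means[matched] = x
        variances[matched] = variances.max()  # start wide (an assumed choice)
        weights[matched] = ALPHA
    else:
        # Match found: pull the matched component's parameters towards x.
        means[matched] = (1 - ALPHA) * means[matched] + ALPHA * x
        variances[matched] = ((1 - ALPHA) * variances[matched]
                              + ALPHA * float(((x - means[matched]) ** 2).mean()))
        weights[matched] = (1 - ALPHA) * weights[matched] + ALPHA
    # Only the weights of the other components are updated: they decay, and
    # the weight vector is then renormalised.
    others = np.arange(len(weights)) != matched
    weights[others] *= (1 - ALPHA)
    weights /= weights.sum()
    return weights, means, variances
```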

24
Jeffreys Algorithm
  • Jeffreys number measures how unlikely it is that
    one distribution (g) was drawn from the
    population represented by the other (f).

25
Broken Clouds
  • [Figure: "Preferential, in order" vs. "No Preference"]

26
Segmentation of Moving Objects
  • The form of some of the distributions could
    change
  • Some of the foreground states could revert to
    background and vice versa.
  • One of the existing distributions could be
    dropped and replaced with a new distribution.

27
Predictive Tracking
  • On-line segmentation of foreground pixels
  • Calculation of blob centroids
  • Multiple-Hypotheses Tracking Algorithm

28
Multiple-Hypotheses Tracking
  • Recursive Bayesian probabilistic procedure
  • Does NOT commit early to a trajectory

29
Multiple Hypothesis Tracking
30
Multiple Hypothesis Tracking
  • Kalman filtering prediction based on constant
    velocity models (see the sketch below)
  • K-best hypothesis trajectory tree generation,
    pruning, and merging
  • Bayesian probability calculations for matching
    input data to track hypotheses
  • See references [5] and [6] for the exact algorithm
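
A minimal sketch of the constant-velocity Kalman prediction and correction applied to each track hypothesis. The hypothesis-tree bookkeeping of [5] is omitted, and dt and the noise levels are assumed tuning values, not figures from the slides.

```python
# Sketch: constant-velocity Kalman predict/update on blob centroids.
import numpy as np

dt, q, r = 1.0, 1e-2, 1.0                      # frame interval, process & measurement noise (assumed)
F = np.array([[1, 0, dt, 0],                   # constant-velocity state transition, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                    # only the blob centroid (x, y) is observed
              [0, 1, 0, 0]], dtype=float)
Q = q * np.eye(4)
R = r * np.eye(2)

def predict(state, cov):
    """Propagate a track hypothesis one frame ahead."""
    return F @ state, F @ cov @ F.T + Q

def update(state, cov, centroid):
    """Correct a predicted track with an observed blob centroid."""
    innov = centroid - H @ state
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    return state + K @ innov, (np.eye(4) - K @ H) @ cov
```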

31
Multi-Camera Fusion
  • Monitoring of large areas can only be
    accomplished using multiple cameras
  • Panoramic View is created by fusing individual
    camera FOVs
  • Object motion registered against a global
    coordinate system

32
Multi-Camera Fusion
  • Optimal Coverage Scheme is created
  • Minimal use of cameras to minimize cost

33
Multi-Camera Fusion
  • Compute HOMOGRAPHY matrix H between two cameras
    based on CoG of moving objects appearing in the
    overlapping areas of the two fields of view
  • Requirement: Information exchange between
    respective computers (e.g., pixel intensity data
    and CoG of moving objects in pixel coordinates)

34
Homography Matrices Computation
  • Least Squares Method (see the sketch below)
  • Very popular
  • Relatively simple
  • Defined in Reference [6]
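
A generic least-squares (direct linear transform) estimate of H from matched CoG points could look like the sketch below. Reference [6] gives the formulation actually cited on the slide, so treat this as an illustrative stand-in rather than the slide's exact method.

```python
# Sketch: least-squares (DLT) homography from point correspondences.
import numpy as np

def estimate_homography(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 4.

    Solves A h = 0 in the least-squares sense and returns the 3x3 matrix H
    such that dst ~ H * src in homogeneous coordinates.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)        # the right singular vector with the
    H = vt[-1].reshape(3, 3)           # smallest singular value minimises ||A h||
    return H / H[2, 2]
```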

35
Homography Matrix
  • Used Kanatani Method
  • Based on a statistical optimization theory for
    geometric computer vision
  • Cures the deficiencies exhibited by Least-Squares

36
Kanatani Method
  • Epipolar constraint may be violated by various
    noise sources due to the statistical nature of
    the imaging problem

37
Multi-Camera Fusion
  • O1 and O2 are Optical Centers
  • P(x,y,z) is a point in the scene that falls in
    the common area between the two cameras
  • Vectors O1p, O2q, and O1O2 are co-planar (see the
    sketch below)
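
As a concrete check of the co-planarity condition: the scalar triple product of the two viewing rays and the baseline vanishes for a noise-free point, and noise makes it only approximately zero, which is what motivates the statistical optimization of the Kanatani method. The function below is an illustrative sketch; expressing all three vectors in camera-1 coordinates is an assumption.

```python
# Sketch: co-planarity (epipolar) residual via the scalar triple product.
import numpy as np

def coplanarity_residual(ray1, ray2_in_cam1, baseline):
    """ray1: direction O1p; ray2_in_cam1: direction O2q rotated into camera-1
    coordinates; baseline: vector O1O2. All (3,) arrays. Returns a value that
    is zero exactly when the three vectors are co-planar."""
    return float(np.dot(ray1, np.cross(baseline, ray2_in_cam1)))
```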

38
References
  • Paolo Remagnino, et al. (Editors). Video-Based
    Surveillance Systems: Computer Vision and
    Distributed Processing. Kluwer Academic
    Publishers, 2002.
  • http://www.htc.honeywell.com/projects/deter/
  • [1] A. P. Dempster, N. M. Laird, and D. B. Rubin,
    "Maximum likelihood from incomplete data via the
    EM algorithm (with discussion)," J. Roy. Stat.
    Soc. B, vol. 39, pp. 1-38, 1977.
  • [2] J. Lin, "Divergence measures based on the
    Shannon entropy," IEEE Trans. Inform. Theory,
    vol. 37, pp. 145-151, Jan. 1991.
  • [3] C. Stauffer and W.E.L. Grimson, "Adaptive
    background mixture models for real-time
    tracking," in Proceedings 1999 IEEE Conference on
    Computer Vision and Pattern Recognition, Fort
    Collins, CO, June 23-25, 1999, vol. 2, pp.
    246-252.

39
References (cont.)
  • [4] D. B. Reid, "An algorithm for tracking
    multiple targets," IEEE Transactions on Automatic
    Control, vol. 24, pp. 843-854, 1979.
  • [5] I. J. Cox and S. L. Hingorani, "An efficient
    implementation of Reid's multiple hypothesis
    tracking algorithm and its evaluation for the
    purpose of visual tracking," IEEE Transactions on
    Pattern Analysis and Machine Intelligence, vol.
    18, no. 2, pp. 138-150, 1996.
  • [6] L. Lee, R. Romano, and G. Stein, "Monitoring
    activities from multiple video streams:
    Establishing a common coordinate frame," IEEE
    Transactions on Pattern Analysis and Machine
    Intelligence, vol. 22, no. 8, pp. 758-767, 2000.