Title: Motivation
Provided by: sreenivasr

Transcript and Presenter's Notes
1
Motivation
Applications
Security at parking lots
Safety in hazardous environments
Driver safety
Aircraft / passenger safety
  • 3D mapping for safety and security using mobile
    robotic platforms

Our results
Vehicle simulators
Terrain mapping
Under vehicle inspection
Runway / Pavement inspection
These projects were funded by grants from the
Department of Energy under the URPR program and
from the Department of Defense under the SAFER
and ARC (NAC) programs.
2
Our approach
Multi-modality sensors
  • Position and orientation: Leica GPS, Xsens IMU
  • Visual: JVC camera
  • Thermal: Indigo camera
  • 3D range: IVP, SICK, RIEGL
Data visualization
State estimation
Data Fusion
Sensor modeling
System integration
Robotic inspection
Large-scale terrain mapping
Urban mapping
Runway inspection
System prototypes
3
Our approach
Multi-modality sensors
  • Position and orientation: Leica GPS, Xsens IMU
  • Visual: JVC camera
  • Thermal: Indigo camera
  • 3D range: IVP, SICK, RIEGL
Data visualization
System integration
Data Fusion
Sensor modeling
State estimation
Robotic inspection
Large-scale terrain mapping
Urban mapping
Runway inspection
Trajectory and orientation estimation
4
Our approach
Multi-modality sensors
  • Position and orientation: Leica GPS, Xsens IMU
  • Visual: JVC camera
  • Thermal: Indigo camera
  • 3D range: IVP, SICK, RIEGL
Data visualization
System integration
State estimation
Data Fusion
Sensor modeling
Runway inspection
Robotic inspection
Large-scale terrain mapping
Urban mapping
Output 3D models
Major contribution
Minor contribution
5
Improvements with our approach
Method used before 1998
Multi-sensor visualization
  • Improvements
  • 3D imaging for under-vehicle inspection
  • Ease of automation and remote visualization.
  • Large-scale surveillance made easy

3D model for visualization purposes
Improvements by 2003 at IRIS (Ng, 2004)
Automatic change detection
6
Improvements with our approach
The need for a 3D system
Multi-sensor integrated dataset
Runway / Pavement Distress
Images from www.faa.gov
Our results
  • Improvements
  • Crack depth is critical to crack-repair
    decisions
  • Textured dataset visualization reveals foreign
    objects and debris
  • Repeatability, coverage and reliability of
    detection and classification

7
Improvements with our approach
Cornerstone Drive, off Lovell Road, I-40 Exit
374, Knoxville, Tennessee. Each loop is 1.1 miles
long; 2.2 miles were covered per scanning pass,
and two passes that day produced 4.4 miles of
data over the same road.
Digitized 3D models introduce realism into
vehicle dynamic simulations
  • Improvements
  • Large-scale micro accuracy 3D mapping
  • Autonomous navigation capability
  • Realism of real-world introduced to simulations

8
Contributions
  • System contributions: deployable prototypes,
    hardware characterization, software

Robotic inspection
Large-scale terrain mapping
Urban mapping
Runway inspection
  • Technical contributions
  • Data fusion framework for uncertainty management
  • Information theoretic framework for integrating
    sensor data amidst conflict, performance
    degradation and failure.
  • A statistical inference scheme that tries to
    resolve the fusion versus selection dilemma.
  • Reliable pose recovery from images
  • Making the fundamental matrix estimation
    process less dependent on the scene and the type
    of motion.
  • Treating pose recovery as a random process and
    reducing its uncertainty by generating
    statistics over space in place of statistics
    over time.
  • Reliability in 3D scene mapping
  • Multi-modality data fusion for autonomous
    navigation and 3D mapping in large scale
    unstructured dynamic environments.

9
Evaluation: Geometry
Scene of interest
Static scan
Mobile scan
Our system
Result up to scale
Result up to scale
3D from an image sequence
3D from a single image
Saxena, NIPS 2006
Pollefeys, IJCV 2004
Geometric fidelity, Texture fidelity,
Reproducibility, Robustness
10
Evaluation: Visual
Scene of interest
Static scan
Mobile scan
Our system
Result up to scale
Result up to scale
3D from an image sequence
3D from a single image
Saxena, NIPS 2006
Pollefeys, IJCV 2004
Geometric fidelity, Texture fidelity,
Reproducibility, Robustness
11
Multi-sensor localization: Outdoors
After integration
A 300 m path driven at approximately 20 mph
produces a drift of 2 m or more if the sensor
data are not integrated intelligently.
Kalman fusion vs. sensor selection error
Our method
Kalman fusion
Sensor selection result
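The Kalman-fusion versus sensor-selection comparison above can be illustrated with a minimal scalar sketch. All numbers here are invented for illustration (the slide does not give the sensor variances): fusion weights two estimates by their variances, while selection keeps only the lower-variance one.

```python
def fuse(x1, var1, x2, var2):
    """Variance-weighted (Kalman-style) fusion of two scalar estimates."""
    w1 = var2 / (var1 + var2)          # less weight for the noisier sensor
    w2 = var1 / (var1 + var2)
    x = w1 * x1 + w2 * x2
    var = (var1 * var2) / (var1 + var2)
    return x, var

def select(x1, var1, x2, var2):
    """Sensor selection: keep only the estimate with the lower variance."""
    return (x1, var1) if var1 < var2 else (x2, var2)

# Illustrative numbers (not from the slides): two position estimates in metres.
gps = (301.5, 4.0)   # estimate, variance
imu = (299.0, 1.0)

x_fused, var_fused = fuse(*gps, *imu)
x_sel, var_sel = select(*gps, *imu)
```

Variance-weighted fusion always yields a variance no larger than the better sensor's, which is why smart integration drifts less than plain sensor selection whenever both sensors carry information.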
12
Zoomed-in views
After
Before
13
Imaging: Background
Real-world coordinates
Transformation to global axes
Camera calibration matrix
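The three labels above are the ingredients of the pinhole imaging model, x ~ K [R | t] X: world coordinates are transformed to the camera axes and then mapped to pixels by the calibration matrix. A minimal numeric sketch (all parameter values are made up for illustration):

```python
import numpy as np

# Camera calibration matrix K (illustrative focal lengths / principal point).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Transformation to camera axes: rotation R and translation t.
R = np.eye(3)                     # identity rotation for this sketch
t = np.array([0.0, 0.0, 5.0])     # scene origin 5 m in front of the camera

def project(X_world):
    """Map a 3D world point to pixel coordinates via x ~ K (R X + t)."""
    X_cam = R @ X_world + t
    x = K @ X_cam
    return x[:2] / x[2]           # perspective division

u, v = project(np.array([0.0, 0.0, 0.0]))
```

With the identity rotation and the point at the origin, the projection lands on the principal point (cx, cy), which makes a quick sanity check for the matrices.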
14
The inverse process
Camera calibration
Feature detection
Feature matching
Geometry estimation
Dense matching
Z. Zhang, "A flexible new technique for camera
calibration," IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 22,
No. 11, pp. 1330-1334, 2000.
15
The inverse process
Camera calibration
Feature detection
Feature matching
Geometry estimation
Dense matching
Frame 1
Frame 2
Feature detection methods
Edge map curvature
Fast radial symmetry
Intensity Auto-correlation
Multi-resolution SIFT
Curvature corners
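One of the detectors listed, intensity auto-correlation, can be sketched as a Harris-style corner response in plain NumPy (the 3x3 box smoothing and k = 0.04 are conventional choices, not taken from the slides):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Corner response from the intensity auto-correlation (structure) matrix."""
    Iy, Ix = np.gradient(img.astype(float))   # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 box smoothing via shifted sums (keeps the sketch dependency-free)
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy               # product of eigenvalues
    trace = Sxx + Syy                         # sum of eigenvalues
    return det - k * trace ** 2

# Synthetic image with one bright square; its corners should respond most.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

On the synthetic square, corners yield positive responses, edges negative ones, and flat regions zero — exactly the behaviour an auto-correlation detector exploits.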
16
The inverse process
Camera calibration
Feature detection
Feature matching
Geometry estimation
Dense matching
Choice of the feature matching method
  • Normalized cross-correlation
  • Monogenic phase
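Normalized cross-correlation, the first matching score listed, is short enough to state exactly; the patches below are synthetic:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-sized patches.

    Subtracting the mean and dividing by the norm makes the score
    invariant to affine brightness changes (gain and offset).
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    return float(np.dot(a.ravel(), b.ravel()) / denom)

rng = np.random.default_rng(0)
patch = rng.random((7, 7))
brighter = 2.0 * patch + 0.3        # same patch under a brightness change
other = rng.random((7, 7))          # an unrelated patch

score_same = ncc(patch, brighter)   # ~1.0: a correct match
score_other = ncc(patch, other)     # much lower: a wrong match
```

The gain/offset invariance is the reason NCC is preferred over raw intensity differences when exposure changes between frames.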

17
Step-by-step
Feature matching by cross correlation and
proximity
Frame 1
Frame 2
Motion estimation using n-point correspondence
Model fitting / Outlier rejection using RANSAC
Putative matches
Inliers
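The RANSAC loop in the diagram — hypothesize from a minimal sample, count inliers, keep the best hypothesis — can be sketched on a toy line-fitting problem. The slides apply the same loop to n-point motion hypotheses, but a 2D line keeps the example short; all data below are synthetic:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=0.1, rng=None):
    """Toy RANSAC: fit y = m*x + c to points while rejecting outliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: two points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Score the hypothesis by its inlier count.
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 20 points on y = 2x + 1 plus 5 gross outliers.
xs = np.linspace(0, 1, 20)
good = np.column_stack([xs, 2 * xs + 1])
bad = np.array([[0.1, 9.0], [0.3, -4.0], [0.5, 7.0], [0.7, -2.0], [0.9, 6.0]])
points = np.vstack([good, bad])
mask = ransac_line(points)
```

The loop isolates the collinear points and rejects the gross outliers; in the motion-estimation setting the "model" is the n-point pose hypothesis and the residual is an epipolar distance.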
18
Dense matching
Camera calibration
Feature detection
Feature matching
Geometry estimation
Dense matching
Rectified frame 1
Rectified frame 2
Disparity map
3D Reconstruction
V. Kolmogorov and R. Zabih, "Multi-camera Scene
Reconstruction via Graph Cuts," European
Conference on Computer Vision (ECCV), 2002.
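Once the pair is rectified and a disparity map is computed, depth follows from Z = f * B / d, with focal length f in pixels, baseline B in metres, and disparity d in pixels. A tiny sketch with invented camera numbers:

```python
import numpy as np

f = 800.0    # focal length in pixels (illustrative)
B = 0.12     # stereo baseline in metres (illustrative)

# A tiny illustrative disparity "map", as dense matching might return.
disparity = np.array([[40.0, 20.0],
                      [10.0,  8.0]])

depth = f * B / disparity   # metres; larger disparity means a closer point
```

The inverse relation is why depth resolution degrades quadratically with distance: a one-pixel disparity error matters far more for distant points than for near ones.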
19
Influence of motion model
Left
Right
Affine model
The importance of selecting the model of the
fundamental matrix.
Generic perspective model
Homography
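For the generic perspective model, the fundamental matrix can be estimated with the normalized eight-point algorithm. The sketch below runs it on noiseless synthetic correspondences (scene, calibration, and motion are all invented) and checks that the algebraic epipolar residuals vanish:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: zero mean, average distance sqrt(2)."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    T = np.array([[scale, 0.0, -scale * mean[0]],
                  [0.0, scale, -scale * mean[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of F, with x2^T F x1 = 0."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each row encodes the epipolar constraint for one correspondence.
    A = np.column_stack([n2[:, 0:1] * n1, n2[:, 1:2] * n1, n1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # project to rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                  # undo the normalization

# Synthetic scene: a grid of 3D points at varying depths, two cameras.
X = np.array([[i - 1.0, j - 1.5, 5.0 + 0.4 * ((i + 2 * j) % 3)]
              for i in range(3) for j in range(4)])
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1                                  # small rotation about the y axis
Ry = np.array([[np.cos(th), 0.0, np.sin(th)],
               [0.0, 1.0, 0.0],
               [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.3, 0.0, 0.0])             # sideways camera translation

def project(Rm, tv, pts):
    x = (K @ (Rm @ pts.T + tv[:, None])).T
    return x[:, :2] / x[:, 2:3]

x1 = project(np.eye(3), np.zeros(3), X)
x2 = project(Ry, t, X)
F = eight_point(x1, x2)

# Algebraic epipolar residuals |x2^T F x1| should vanish on noiseless data.
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.sum(h2 * (F @ h1.T).T, axis=1))
```

Degenerate cases are exactly the slide's point: if the scene is planar or the motion is a pure rotation, the correspondences also satisfy a homography and the linear system above loses rank, so the model must be chosen to match the scene and the motion.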
20
The error in parameter estimation
Results within a specified threshold
A priori threshold
The importance of the model-fitting error in
fundamental matrix estimation.
21
Experiments
Vegetation is a challenge
Nice features to track
Transition from linear perspective to aerial
perspective
With vegetation in the scene
When tracking features on buildings
22
The effect of the feature detector
23
Vision-based localization
24
Conclusions
  • Sampled scene structure

Error in range measurement
System integration
State estimation
  • How to minimize localization error?
  • How to improve the reliability of localization?
  • What sensors to use for a given application?

Error in 3D model
Measurement Error

Localization Error

Data Fusion
  • How to minimize error in the maps?
  • Better sensors? More sensors? Adaptive fault
    detection?
  • How best can we perform in unknown,
    unstructured, dynamic environments?

Major contribution
Minor contribution
25
Contributions
  • System contributions: deployable prototypes,
    hardware characterization, software

Vehicle simulators
Terrain mapping
Under vehicle inspection
Runway/Pavement inspection
  • Technical contributions
  • Data fusion framework for uncertainty management
  • Information theoretic framework for integrating
    sensor data amidst conflict, performance
    degradation and failure.
  • A statistical inference scheme that tries to
    resolve the fusion versus selection dilemma.
  • Reliable pose recovery from images
  • Making the fundamental matrix estimation
    process less dependent on the scene and the type
    of motion.
  • Treating pose recovery as a random process and
    reducing its uncertainty by generating
    statistics over space in place of statistics
    over time.
  • Reliability in 3D scene mapping
  • Multi-modality data fusion for autonomous
    navigation and 3D mapping in large scale
    unstructured dynamic environments.

26
Future Work: Target localization
Cheng walks around the table
Room SERF 410 in common perspective
Region for hand-off
Camera 2
Camera 1
Region for hand-off
Motion segmentation
80 frames around hand-off
Camera 1
Camera 2
F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua,
"Multi-camera people tracking with a
probabilistic occupancy map," IEEE Transactions
on Pattern Analysis and Machine Intelligence,
Vol. 30, No. 2, pp. 267-282, 2008.
Fusion-selection in target tracking reduces
uncertainty on the target by 50%.
27
More results
28
More results
BI-LO shopping center in Knoxville, TN
150 m
29
Large scale terrain
30
Real data with simulated noise
After pose recovery from range (R)
Ground truth dataset (G)
Integrated based on noisy measurements (N)
Terrain crack scanned using an IVP Ranger
calibrated to an accuracy of 0.1 mm.
Average absolute errors: 5.5 mm and 4 mm;
standard deviations in error: 6.5 mm and 3.7 mm.
31
The best we can do with belief propagation
Urban Scanning Project
Zoomed in view
Brads West Town Mall
Modular Robotics
Real-time data
SICK scanner on the conveyor
Raw Range Profiles aligned based on motion alone
Road Profiling
IRIS West Road Terrain
32
MuFeSaC: Learning geometry from feature points
and n-point hypothesis generators