1
Real-Time Vision on a Mobile Robot Platform
  • Mohan Sridharan
  • Joint work with Peter Stone
  • The University of Texas at Austin
  • smohan@ece.utexas.edu

2
Motivation
  • Computer vision is challenging.
  • State-of-the-art approaches are often not
    applicable to real systems.
  • Computational and/or memory constraints.
  • Focus: efficient algorithms that run in real
    time on mobile robots.

3
Overview
  • Complete vision system developed on a mobile
    robot.
  • Challenges to address:
  • Color segmentation.
  • Object recognition.
  • Line detection.
  • Illumination invariance.
  • On-board processing: computational and memory
    constraints.

4
Test Platform: Sony ERS-7
  • 20 degrees of freedom.
  • Primary sensor: CMOS camera.
  • IR, touch sensors, accelerometers.
  • Wireless LAN.
  • Soccer on a 4.5 x 3 m field; goal: play humans
    by 2050!

5
The Aibo Vision System: I/O
  • Input: image pixels in YCbCr color space.
  • Frame rate: 30 fps.
  • Resolution: 208 x 160.
  • Output: distances and angles to objects.
  • Constraints:
  • On-board processing: 576 MHz processor.
  • Rapidly varying camera positions.

6
Robot's view of the world
7
Vision System Flowchart
8
Vision System Phase 1: Segmentation.
  • Color segmentation:
  • Hand-label discrete colors.
  • Intermediate color maps.
  • NNr weighted average yields the master color
    cube.
  • 128 x 128 x 128 color map: 2 MB.

9
Vision System Phase 1: Segmentation.
  • Use a perceptually motivated color space: LAB.
  • Offline training in LAB; generate an equivalent
    YCbCr cube.

10
Vision System Phase 1: Segmentation.
11
Vision System Phase 1: Segmentation.
  • Use a perceptually motivated color space: LAB.
  • Offline training in LAB; generate an equivalent
    YCbCr cube.
  • Reduce the problem to a table lookup (see the
    sketch below).
  • Robust performance with shadows, highlights.
  • Accuracy: YCbCr 82%, LAB 91%.
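The segmentation then runs as one table lookup per pixel. A minimal sketch of that step, assuming NumPy; the random cube and label count below are placeholders standing in for the actual trained master color map:

```python
import numpy as np

# Placeholder for the trained master color map: each cell of the
# 128x128x128 cube holds a discrete color label (trained offline in LAB).
NUM_COLORS = 9  # illustrative number of hand-labeled colors
color_map = np.random.randint(0, NUM_COLORS, size=(128, 128, 128),
                              dtype=np.uint8)

def segment(image_ycbcr):
    """Label every pixel by table lookup: the 8-bit Y, Cb, Cr values
    are right-shifted by 1 to index the half-resolution (128^3) cube."""
    y  = image_ycbcr[..., 0] >> 1
    cb = image_ycbcr[..., 1] >> 1
    cr = image_ycbcr[..., 2] >> 1
    return color_map[y, cb, cr]

# One 208 x 160 frame, matching the Aibo camera resolution:
frame = np.random.randint(0, 256, size=(160, 208, 3), dtype=np.uint8)
labels = segment(frame)  # (160, 208) array of color labels
```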

12
Sample Images: Color Segmentation.
13
Sample Video: Color Segmentation.
14
Some Problems
  • Sensitive to illumination.
  • Frequent re-training.
  • Robot needs to detect and adapt to change.
  • Off-board color labeling is time consuming.
  • Autonomous color learning is possible.

15
Vision System Phase 2: Blobs.
  • Run-length encoding:
  • Starting point, length in pixels.
  • Region merging:
  • Combine run-lengths of the same color.
  • Maintain properties: pixels, runs.
  • Bounding boxes:
  • Abstract representation: four corners.
  • Maintains properties for further analysis (see
    the sketch below).
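A minimal sketch of this phase, assuming the label image from the segmentation sketch above. The merge step is deliberately simplified: one greedy pass that folds each run into the first compatible box, where a production system would use union-find bookkeeping:

```python
def run_length_encode(labels):
    """Encode each image row as (row, start_col, length, color) runs."""
    runs = []
    for r, row in enumerate(labels):
        c, width = 0, len(row)
        while c < width:
            start, color = c, row[c]
            while c < width and row[c] == color:
                c += 1
            runs.append((r, start, c - start, color))
    return runs

def merge_runs(runs):
    """Merge vertically adjacent, overlapping runs of the same color
    into bounding boxes, keeping the properties (pixel count, extents)
    that the object-recognition phase needs."""
    boxes = []
    for r, start, length, color in runs:
        end = start + length - 1
        for b in boxes:
            if (b["color"] == color and b["y_max"] >= r - 1
                    and start <= b["x_max"] and end >= b["x_min"]):
                b["x_min"] = min(b["x_min"], start)
                b["x_max"] = max(b["x_max"], end)
                b["y_max"] = r
                b["pixels"] += length
                break
        else:
            boxes.append(dict(x_min=start, x_max=end, y_min=r, y_max=r,
                              color=color, pixels=length))
    return boxes
```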

16
Sample Images: Blob Detection.
17
Vision System Phase 2: Objects.
  • Object recognition:
  • Heuristics on size, shape, and color.
  • Previously stored bounding box properties.
  • Domain knowledge.
  • Remove spurious blobs.
  • Distances and angles from known geometry (see
    the sketch below).
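The slides do not spell out the geometry, so the sketch below assumes a standard pinhole-camera model (the focal length and image-center values are illustrative, not the Aibo's actual calibration): distance follows from the known physical height of an object versus its height in pixels, and bearing from its horizontal image offset.

```python
import math

FOCAL_LENGTH_PX = 200.0  # assumed focal length, in pixels
IMAGE_CENTER_X = 104.0   # half of the 208-pixel image width

def object_range_and_bearing(box_height_px, box_center_x_px, real_height_m):
    """Range from apparent vs. real object height; bearing from the
    horizontal offset of the bounding-box centre."""
    distance = FOCAL_LENGTH_PX * real_height_m / box_height_px
    bearing = math.atan2(box_center_x_px - IMAGE_CENTER_X, FOCAL_LENGTH_PX)
    return distance, bearing

# e.g. a 0.4 m beacon appearing 40 px tall, centred at column 150:
print(object_range_and_bearing(40.0, 150.0, 0.4))
```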

18
Sample Images: Objects.
19
(No Transcript)
20
Vision System Phase 3: Lines.
  • Popular approaches (Hough transform, convolution
    kernels) are computationally expensive.
  • Domain knowledge.
  • Scan lines: green-white transitions give
    candidate edge pixels (sketched below).
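A minimal sketch of the scan-line step, assuming the segmented label image and vertical scan lines every few columns (the GREEN/WHITE label values, the scan direction, and the column spacing are all illustrative):

```python
GREEN, WHITE = 1, 2  # illustrative label values from segmentation

def candidate_edge_pixels(labels, step=4):
    """Walk every `step`-th column from the bottom of the image upward
    and record pixels where the color switches from green (field) to
    white (field line): candidate edge pixels for the line fit."""
    height, width = len(labels), len(labels[0])
    edges = []
    for x in range(0, width, step):
        for y in range(height - 1, 0, -1):
            if labels[y][x] == GREEN and labels[y - 1][x] == WHITE:
                edges.append((x, y))
    return edges
```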

21
Vision System Phase 3: Lines.
  • Incremental least-squares fit for lines (see the
    sketch below).
  • Efficient and easy to implement.
  • Reasonably robust to noise.
  • Lines provide orientation information.
  • Line intersections can be used as markers.
  • Inputs to localization.
  • Ambiguity removed through prior position
    knowledge.
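A minimal sketch of an incremental least-squares line fit: only five running sums are stored, so each new edge pixel is an O(1) update and no point list is kept. Vertical lines break the y = m*x + b form and are reported as degenerate here:

```python
class IncrementalLineFit:
    """Least-squares fit of y = m*x + b maintained from running sums."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        """O(1) update of the running sums with one edge pixel."""
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def line(self):
        """Return (m, b), or None if the fit is degenerate/vertical."""
        denom = self.n * self.sxx - self.sx ** 2
        if self.n < 2 or abs(denom) < 1e-9:
            return None
        m = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - m * self.sx) / self.n
        return m, b

fit = IncrementalLineFit()
for x, y in [(0, 1), (1, 3), (2, 5), (3, 7)]:
    fit.add(x, y)
print(fit.line())  # approximately (2.0, 1.0)
```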

22
Sample Images: Objects and Lines.
23
Some Problems
  • System needs to be re-calibrated:
  • Illumination changes.
  • Natural light variations (day/night).
  • Re-calibration is very time consuming.
  • More than an hour spent each time.
  • Cannot achieve the overall goal: play humans.
  • That is not happening anytime soon, but still...

24
Illumination Sensitivity: Samples.
  • Trained under one illumination:
  • Under a different illumination:

25
Illumination Sensitivity: Movie
26
Illumination Invariance: Approach.
  • Three discrete illuminations: bright,
    intermediate, dark.
  • Training:
  • Performed offline.
  • Color map for each illumination.
  • Normalized RGB (rgb; only r and g are used):
    sample distributions for each illumination (see
    the sketch below).
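A minimal sketch of building one such distribution, assuming NumPy (the bin count is an illustrative choice). Normalization divides out intensity, and since b = 1 - r - g carries no extra information, only the (r, g) pair is binned:

```python
import numpy as np

def rg_histogram(image_rgb, bins=64):
    """2-D sample distribution over normalized (r, g) chromaticity;
    one histogram is stored per training illumination."""
    rgb = image_rgb.reshape(-1, 3).astype(np.float64)
    total = rgb.sum(axis=1) + 1e-9          # avoid division by zero
    r = rgb[:, 0] / total                   # r = R / (R + G + B)
    g = rgb[:, 1] / total                   # g = G / (R + G + B)
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()                # normalize to a distribution
```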

27
Illumination Invariance: Training.
  • Illumination 'bright': color map.

28
Illumination Invariance: Training.
  • Illumination 'bright': map and distributions.

29
Illumination Invariance: Training.
30
Illumination Invariance: Testing.
31
Illumination Invariance: Testing.
32
Illumination Invariance: Testing.
33
Illumination Invariance: Testing.
34
Illumination Invariance: Testing.
  • Testing: KL divergence as a distance measure
    (see the sketch below).
  • Robust to artifacts.
  • Performed on-board the robot, about once a
    second.
  • Parameter estimation is described in the paper.
  • Works for conditions not trained for.
  • The paper has numerical results.
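A minimal sketch of the test, reusing histograms shaped like the training sketch above (the dictionary layout and the epsilon smoothing are illustrative choices):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two (r, g) histograms; eps avoids log(0)."""
    p = p.flatten() + eps
    q = q.flatten() + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def classify_illumination(test_hist, trained_hists):
    """Pick the training illumination whose stored distribution is
    closest (smallest KL divergence) to the current image's histogram."""
    return min(trained_hists,
               key=lambda name: kl_divergence(test_hist, trained_hists[name]))

# e.g. trained_hists = {"bright": h1, "intermediate": h2, "dark": h3}
```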

35
Adapting to Illumination Changes: Video
36
Some Related Work
  • CMU vision system: basic implementation.
  • (Bruce et al., IROS 2000)
  • German Team vision system: scan lines.
  • (Röfer et al., RoboCup 2003)
  • Mean-shift color segmentation.
  • (Comaniciu and Meer, PAMI 2002)

37
Conclusions
  • A complete real-time vision system with on-board
    processing.
  • Implemented new/modified versions of vision
    algorithms.
  • Good performance on challenging problems:
    segmentation, object recognition, and
    illumination invariance.

38
Future Work
  • Autonomous color learning.
  • AAAI-05 paper available online.
  • Working in more general environments, outside the
    lab.
  • Automatic detection of and adaptation to
    illumination changes.
  • Still a long way to go to play humans :-)

39
Autonomous Color Learning: Video
  • More videos online:
  • www.cs.utexas.edu/AustinVilla/

40
THAT'S ALL, FOLKS :-)
www.cs.utexas.edu/AustinVilla/
41
(No Transcript)
42
Question 1: So, what is new?
  • Robust color space for segmentation.
  • Domain-specific object recognition and line
    detection.
  • Towards illumination invariance.
  • Complete vision system: closed loop.
  • Admittedly, we cannot compare directly with
    other teams, but overall performance has been
    good at competitions.

43
Vision 1: Why LAB?
  • Robust color space for segmentation.
  • Perceptually motivated.
  • Tackles minor changes: shadows, highlights.
  • Used in robot rescue.

44
Vision 2: Edge Pixels and Least Squares?
  • Conventional approaches are time consuming.
  • Scan lines are faster.
  • Reduces the number of colors needing bounding
    boxes.
  • Least squares is easier to implement and fast
    too.
  • Admittedly, we have not compared with any other
    method.

45
Vision 3: Why Normalized RGB?
  • YCbCr separates luminance, but does not work
    well in practice on the Aibo.
  • Normalized RGB (rgb):
  • Reduces the number of dimensions, hence storage.
  • More robust to minor variations.
  • Admittedly, we have compared only with YCbCr;
    LAB also works, but needs more storage and
    computation.

46
Illumination Invariance: Training.
47
Illumination Invariance: Testing.