Transcript and Presenter's Notes

Title: General ideas to communicate


1
General ideas to communicate
  • Show one particular example of localization based
    on vertical lines.
  • Camera projections
  • Example of using a Jacobian to find the solution of
    a system of nonlinear equations

2
Localization for Mobile Robot Using Monocular
Vision
3
Vision based Self-localization methods
  • Self-localization methods of mobile robots
  • Position tracking: encoders, ultrasonic sensors,
    local sensors
  • Global localization: laser-range scanner,
    vision-based methods
  • Vision-based methods for indoor applications
  • Stereo vision
  • Directly detects geometric information, but needs
    complicated hardware and much processing time
  • Omni-directional view
  • Uses a conic mirror; low resolution
  • Mono view using landmarks
  • Uses artificial landmarks

4
Background on Monocular methods
  • Related work in monocular method
  • Sugihara (1988) did pioneering work in
    localization using vertical edges.
  • Atiya and Hager (1993) used geometric tolerance to
    describe observation error.
  • Kosaka and Kak (1992) proposed a model-based
    monocular vision system with a 3D geometric
    model.
  • Munoz and Gonzalez (1998) added an optimization
    procedure.

5
Previous work on monocular methods
  • Related work in monocular method
  • Talluri and Aggarwal (1996) considered
    correspondence problem between a stored 3D model
    and 2D image in an outdoor urban environment.
  • Aider et al. (2005) proposed an incremental
    model-based localization using view-invariant
    regions.
  • Another approach adopts the SIFT (Scale-Invariant
    Feature Transform) algorithm to compute
    correspondences between saved SIFT features and
    images acquired during navigation.

6
Self-localization based on Vertical Lines
  • A self-localization method using vertical lines
    with mono view will be presented here.
  • Indoor environment: use horizontal and vertical
    line features (doors, furniture)
  • Find vertical lines, compute pattern vectors
  • Match the lines with the corners of the map
  • Find the position (x, y, θ) with the matched
    information

7
2. Localization algorithm
Fig. 1 The flowchart of self-localization: map-making
and path planning; detect line segments (at least three
required); match the lines with the map; localization
(x, y, θ); check whether the uncertainty exceeds the
threshold T; check whether the destination is reached;
end.
8
Line feature detection
  • Vertical Sobel operation
  • Vertically projected histogram
  • One-dimensional averaging and thresholding
  • Local maxima are indexed as feature points
    (sketched below)

Fig. 2 Projected histogram and a local maximum
  • This method does not use edge detection followed
    by a Hough transform.
  • Our method uses Canny edge detection and the Hough
    transform.
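A minimal runnable sketch of the projected-histogram detection outlined above, assuming a grayscale NumPy image; the Sobel kernel, the smoothing window, and the threshold ratio are illustrative choices rather than the parameters used in the paper.

import numpy as np

def detect_vertical_lines(gray, smooth=5, thresh_ratio=0.5):
    # Vertical-edge response with a Sobel kernel sensitive to horizontal gradients.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    pad = np.pad(gray.astype(float), 1, mode="edge")
    edges = np.zeros(gray.shape, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            edges[i, j] = abs(np.sum(kx * pad[i:i + 3, j:j + 3]))
    # Vertically project the edge magnitudes: one histogram bin per image column.
    hist = edges.sum(axis=0)
    # One-dimensional averaging (smoothing) followed by thresholding.
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    threshold = thresh_ratio * hist.max()
    # Local maxima above the threshold are indexed as vertical-line feature points.
    return [j for j in range(1, len(hist) - 1)
            if hist[j] > threshold and hist[j] >= hist[j - 1] and hist[j] > hist[j + 1]]

# Example: a synthetic image with two bright vertical stripes.
img = np.zeros((60, 80))
img[:, 20] = 255.0
img[:, 55] = 255.0
print(detect_vertical_lines(img))  # feature points near columns 20 and 55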

9
Experimental results using histograms
Fig. 6 Mobile robot
Fig. 7 The procedure of detecting vertical lines:
(a) original image, (b) vertical edges, (c) projected
histogram, (d) vertical lines
10
Experimental results: sequence of images
Sequence of input images in an experiment with the
robot
11
How it worked: experimental results
Fig. 9 The result of localization in the given map
(predicted vs. real positions)
Fig. 10 Errors along the Y axis
12
Correspondence of feature vectors
13
Correspondence of feature vectors
  • The method uses geometric information of the line
    features of the map
  • Feature vectors are defined with hue (H) and
    saturation (S)
  • Feature vectors of the right and left regions of
    each line are defined (sketched below)
  • Check whether a line meets the floor region
  • Connected line, non-connected line
  • Define the visibility of the regions of a connected
    line
  • Visible region, occluded region

(1)
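Equation (1) is not reproduced in this transcript, so the following is only a plausible sketch of how a hue/saturation feature vector for the left and right regions of a detected line could be formed; the region width and the use of mean H and S values are assumptions.

import numpy as np

def hs_feature_vector(hsv, x, width=10):
    # hsv: image with H in channel 0 and S in channel 1; x: column of a detected line.
    h, s = hsv[..., 0], hsv[..., 1]
    left = slice(max(0, x - width), x)
    right = slice(x + 1, min(hsv.shape[1], x + 1 + width))
    # (mean H left, mean S left, mean H right, mean S right)
    return np.array([h[:, left].mean(), s[:, left].mean(),
                     h[:, right].mean(), s[:, right].mean()])

# Example with a synthetic HSV image whose colour changes at column 40.
hsv = np.zeros((60, 80, 3))
hsv[:, :40, 0], hsv[:, :40, 1] = 0.10, 0.8   # left region: one hue/saturation
hsv[:, 40:, 0], hsv[:, 40:, 1] = 0.55, 0.3   # right region: another
print(hs_feature_vector(hsv, 40))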
14
2.2 Correspondence using feature vectors (2)
  • Match the feature vectors of the lines with the
    map (see the sketch below).
  • Lines with both regions visible, lines with one
    visible region, and non-contacted lines
  • The correspondence of neighboring lines is
    investigated using lines that have a geometrical
    relationship.

Fig. 3 Floor-contacted lines and visible regions.
Contacted lines: x1, x2, x3; non-contacted line: x4;
visible regions: l1, l2, r2, l3, r3, r4; occluded
regions: r1, l4.
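A small sketch of the matching step described above: observed feature vectors are assigned to map lines by the smallest distance over the visible components only. The masking of occluded regions and the gating threshold are illustrative assumptions standing in for the geometric consistency checks of the slides.

import numpy as np

def match_lines(observed, map_features, visible_mask, gate=0.2):
    matches = {}
    for i, obs in enumerate(observed):
        # Distance over the visible (non-occluded) components of each feature vector.
        d = [np.linalg.norm((obs - m)[visible_mask[i]]) for m in map_features]
        j = int(np.argmin(d))
        if d[j] < gate:              # accept only sufficiently close matches
            matches[i] = j
    return matches

observed = np.array([[0.10, 0.8, 0.55, 0.3],
                     [0.55, 0.3, 0.90, 0.5]])
map_features = np.array([[0.12, 0.78, 0.56, 0.31],
                         [0.54, 0.31, 0.88, 0.52]])
visible_mask = np.array([[True, True, True, True],     # both regions visible
                         [True, True, False, False]])  # right region occluded
print(match_lines(observed, map_features, visible_mask))  # {0: 0, 1: 1}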
15
2.3 Self-localization using vertical lines (1)
  • The coordinates of the feature points are matched
    to the camera coordinates of the map.

Fig. 4 Global coordinates and camera coordinates
16
2.3 Self-localization using vertical lines (2)
Fig. 5 Perspective transformation of camera
coordinates: feature points in the camera coordinates
are projected onto the image-plane coordinates through
the focal length of the camera.
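The projection equations of Fig. 5 are not reproduced in the transcript; the standard pinhole perspective model they correspond to, with assumed notation, is

\[
  u = f \, \frac{X_c}{Z_c}, \qquad v = f \, \frac{Y_c}{Z_c}
\]

where (X_c, Y_c, Z_c) is a feature point in camera coordinates, f is the focal length, and (u, v) are the image-plane coordinates. Since each matched vertical line contributes only its horizontal coordinate u, at least three lines are needed to recover the three pose parameters (x, y, θ), consistent with the check in Fig. 1.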
17
2.3 Self-localization using vertical lines (3)
What is the relation between the camera coordinates and
the world coordinates?
  • Camera coordinates can be transformed to world
    coordinates by a rigid body transformation T.

(2)
Use a transformation!
  • The camera coordinates and the world coordinates
    are related by a translation and a rotation. The
    transformation T can be defined as

(3)
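Equations (2) and (3) are not reproduced in the transcript. Under the assumption of planar robot motion with pose (x, y, θ) and rotation about the vertical axis, a plausible form of the transformation is

\[
  P_w = T \, P_c, \qquad
  T(x, y, \theta) =
  \begin{bmatrix}
    \cos\theta & -\sin\theta & 0 & x \\
    \sin\theta & \cos\theta  & 0 & y \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1
  \end{bmatrix},
\]

which maps a point P_c in camera coordinates to P_w in world coordinates; this is a sketch, not necessarily the slide's exact formula.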
18
2.3 Self-localization using vertical lines (4)
We derive a system of nonlinear equations for the
camera view.
  • Global coordinates are mapped to camera
    coordinates (4).
  • The perspective transformation is given by (5).
  • The perspective transformation and the rigid
    transformation of the coordinates induce a system
    of nonlinear equations.
  • Equation (6) is induced from (4) and (5).

(4)
(5)
(6)
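As a hedged reconstruction of how (4) and (5) combine into the system (6), using the pinhole and planar-transformation sketches above (the symbols u_i, f_i and the residual form are illustrative):

\[
  \begin{bmatrix} X_c^{(i)} \\ Y_c^{(i)} \\ Z_c^{(i)} \\ 1 \end{bmatrix}
  = T^{-1}(x, y, \theta)
  \begin{bmatrix} X_w^{(i)} \\ Y_w^{(i)} \\ Z_w^{(i)} \\ 1 \end{bmatrix},
  \qquad
  f_i(x, y, \theta) = u_i - f \, \frac{X_c^{(i)}}{Z_c^{(i)}} = 0,
  \quad i = 1, \dots, n,
\]

so that F(x, y, θ) = (f_1, ..., f_n)^T = 0 is the nonlinear system whose solution is the robot pose.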
19
2.3 Self-localization using vertical lines (5)
  • The Jacobian matrix is (7).
  • Newton's method to find the solution of the
    nonlinear equations is (8), when an initial value
    is given,
  • where the f_i are the components of F from the
    previous slide.

(7)
(8)
We obtain an update of the camera (robot) position and
orientation.
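Since the Jacobian (7) and the update (8) are not reproduced, the following is a minimal runnable sketch of a Newton (Gauss-Newton) update of the pose (x, y, θ) from the image columns of matched vertical lines; the pinhole model, the coordinate conventions, the focal-length value, and the numerical (rather than analytic) Jacobian are all assumptions.

import numpy as np

F_LEN = 500.0  # focal length in pixel units (assumed value)

def project_u(pose, landmarks):
    # Image column of each vertical line with floor position (X_w, Y_w),
    # seen from pose (x, y, theta); theta measured from the world +Y axis (assumed).
    x, y, th = pose
    dx = landmarks[:, 0] - x
    dy = landmarks[:, 1] - y
    depth = -np.sin(th) * dx + np.cos(th) * dy    # along the optical axis
    lateral = np.cos(th) * dx + np.sin(th) * dy   # across the image plane
    return F_LEN * lateral / depth

def newton_pose(u_obs, landmarks, pose0, iters=20, step=1e-5):
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = project_u(pose, landmarks) - u_obs    # residuals f_i(pose)
        J = np.zeros((len(u_obs), 3))             # numerical Jacobian dF/d(x, y, theta)
        for k in range(3):
            d = np.zeros(3)
            d[k] = step
            J[:, k] = (project_u(pose + d, landmarks) -
                       project_u(pose - d, landmarks)) / (2 * step)
        # Newton update J * delta = -r (least squares if more than three lines).
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        pose = pose + delta
        if np.linalg.norm(delta) < 1e-8:
            break
    return pose

# Example: three vertical lines at known floor positions (mm) and a known true pose.
landmarks = np.array([[0.0, 1000.0], [500.0, 1500.0], [-400.0, 1200.0]])
true_pose = np.array([-160.0, 100.0, 0.05])
u_obs = project_u(true_pose, landmarks)
print(newton_pose(u_obs, landmarks, pose0=(0.0, 0.0, 0.0)))  # recovers true_pose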
20
4. Conclusions
  1. A self-localization method using vertical line
    segments with mono view was presented.
  2. Line features are detected from the projected
    histogram of the edge image.
  3. Pattern vectors and their geometrical properties
    are used to match the lines with the points of the
    map.
  4. A system of nonlinear equations with perspective
    and rigid transformations of the matched points is
    derived.
  5. Newton's method was used to solve the equations.
  6. The proposed algorithm using mono view is simple
    and applicable to indoor environments.

21
(No Transcript)
22
Localization for Mobile Robot Using Monocular
Vision
  • Hyunsik Ahn
  • Jan. 2006
  • Tongmyong University

23
3. Experimental results (1)
Table 1 Real positions and errors

        Real position (mm, °)      Measured position (mm, °)
No.        X        Y   Angle           X        Y   Angle
 1         0        0       0       23.79    46.13    0.04
 2      -160      100       0       32.51    41.54    1.53
 3      -160      200       0       49.90    58.28    1.24
 4      -160      400       0       34.54    74.82    1.39
 5      -160      600       0       37.67    61.41    1.20
 6      -160      800       0       29.35    43.86    1.31
 7      -160     1000       0       26.57   100.37    1.37
 8      -160     1200       0       30.46    18.47    1.38
 9      -160     1400       0       18.59    96.39    1.49
10      -160     1600       0       14.18    93.74    1.31
11      -160     1800       0        9.38     9.46    2.00
12         0      660       0       20.21     7.84    3.32
13         0     1150       0       34.61    72.98    2.29
14         0     1555       0       24.36    44.78    1.83
15         0     2005       0       15.66    35.60    0.26
16      -650      380       0       50.08    88.55    0.61
17      -630      660       0       15.84    32.29    0.89
18      -560     1045     -48        4.15    27.56    1.78
19      -465     1300     -45       36.01    59.97    0.84
20      -375     1465     -47       28.03    41.17    2.82
21      -285     1740     -37       11.27    25.81    0.83
22      -190     2125     -32        7.33    73.29    1.69
23       -75     2435     -22       18.21    70.05    1.15
24       -25     2715     -10       19.25    44.07    3.24
25       165     3034     -23       10.03    70.43    1.97
26       325     3455     -27       80.63    66.94    1.45
27       370     3915     -15       9.045    11.04    2.5
2σ of errors                        32.83    53.80    1.58