Object Detection and Avoidance for Autonomous Lunar and Martian Operations
1
Object Detection and Avoidance for Autonomous
Lunar and Martian Operations
  • 2006 NASA Exploration Systems Summer Research Opportunities (ESSRO) (under NASA's Exploration Systems Mission Directorate)
  • Marshall Space Flight Center
  • Huntsville, Alabama
  • Sunil David, Bethune-Cookman College
  • davids@cookman.edu
  • Kamesh Namuduri, Wichita State University
  • kamesh.namuduri@wichita.edu
  • Ernest Wong, United States Military Academy
  • ernest.wong@usma.edu
  • Zach Zaccagni, Wichita State University
  • zjzaccagni@wichita.edu
  • Greg Carson, University of Southern Mississippi
  • gregory.carson@usm.edu
  • Jon Patterson, MSFC Investigator
  • Tom Bryan, MSFC Co-Investigator

2
NASA Project Goals
  • Autonomous navigation and operations on future space flights to the Moon, Mars, and beyond
  • Autonomous navigation and operations of numerous types of vehicles (e.g., landers, rovers, and other robotic agents)

Our Group's Goals
  • Evaluate and determine the specific needs of an Object Detection and Avoidance (ODA) system
  • Assess the techniques and suite of instrumentation that would be appropriate for real-time obstacle avoidance during landing and other operations
  • Develop strategies addressing the constraints identified
  • If time permits, develop algorithms to implement ODA

3
NASA's 2006 Strategic Goals
  1. Fly the Shuttle as safely as possible until its retirement, not later than 2010.
  2. Complete the International Space Station in a manner consistent with NASA's International Partner commitments and the needs of human exploration.
  3. Develop a balanced overall program of science, exploration, and aeronautics consistent with the redirection of the human spaceflight program to focus on exploration.
  4. Bring a new Crew Exploration Vehicle into service as soon as possible after Shuttle retirement.
  5. Encourage the pursuit of appropriate partnerships with the emerging commercial space sector.
  6. Establish a lunar return program having the maximum possible utility for later missions to Mars and other destinations.
5
Major goals and major functions of this system:
  3. Develop a balanced overall program of science, exploration, and aeronautics consistent with the redirection of the human spaceflight program to focus on exploration.
  6. Establish a lunar return program having the maximum possible utility for later missions to Mars and other destinations.
6
Autonomous Hazard Avoidance (AHA)
7
Positive and Negative Object Detection
What are positive and negative obstacles?
  • Positive obstacles: rocks, trees, fences, buildings, steep inclines (relative to capabilities), etc.
  • Negative obstacles: ditches, holes, depressions, sudden drop-offs, steep downgrades (relative to capabilities), etc.

8
Pictures of positive obstacles on Mars and a
negative obstacle on the Moon
9
(No Transcript)
10
  • Precision Landing
  • Assumptions during the first mission:
  • The Solid Rocket Motor will be fired for de-orbiting and terminal descent.
  • A circular landing area of 100 to 300 meters radius can be assumed.
  • The general landing site is pre-determined.

11
  • 3 Regions
  • 2400 m
  • 1000-1400 m
  • 100-200 m

12
  • Precision Landing
  • Assumptions during the follow-up missions:
  • Several beacons that run on atomic (beta) batteries will be available.
  • Beacons provide range, range rate, and bearing information.
  • Beacons can be interrogated by node address, and they are capable of pinging, transmitting, and receiving data.
  • Beacons can also provide regular updates (say, 5 to 30 updates per second); a minimal data sketch follows this list.
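The beacon capabilities listed above map naturally onto a small data record. Below is a minimal Python sketch; the field names, the node-address value, and the example numbers are hypothetical illustrations, not part of any actual beacon design.

from dataclasses import dataclass

@dataclass
class BeaconUpdate:
    node_address: int      # hypothetical address used to interrogate a specific beacon
    range_m: float         # range to the beacon, meters
    range_rate_mps: float  # range rate, meters per second (negative = closing)
    bearing_deg: float     # bearing to the beacon, degrees

# One update out of the assumed 5-30 updates-per-second stream.
update = BeaconUpdate(node_address=0x2A, range_m=1250.0,
                      range_rate_mps=-14.2, bearing_deg=97.5)
print(update)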

13
Sensors that need to be investigated: Radar, LiDAR / Flash LiDAR, Thermal Infrared, Automated Video Guidance Systems (AVGS), and others.
14
Positive Obstacle Detection
  • A few current ways that positive obstacles are detected:
  • Stereoscopic vision (aka binocular vision)
  • LIDAR (can also be used for negative obstacle detection)
  • 3D imaging from LIDAR, and others

15
Positive Object Detection Using Stereo Vision
What is Stereo Vision?
  • Simply put, it is the way humans see the world.
  • Stereo vision is the primary method that the human visual system uses to perceive depth. [13]
  • It is effective at judging distance. [13]
  • There is a discrepancy between what the left eye sees and what the right eye sees.
  • Your eyes actually measure this disparity of corresponding images on the two retinas. [13]

The brain must match points between the two separate images seen by the two eyes. [12]
[12] Dr. Dave Pape, Virtual Reality 1, Department of Media Study, University at Buffalo, Fall 2003
[13] David Wood, 3D Imagery Introduction, NV News, nvnews.net, February 24, 2000
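The disparity idea above leads directly to the standard pinhole-stereo depth relation, depth = focal length x baseline / disparity. The sketch below is a minimal illustration with made-up camera parameters; it is not taken from the cited sources.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        return float('inf')  # zero disparity: the point is effectively at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 1000-pixel focal length, 0.3 m baseline, 12-pixel disparity.
print(depth_from_disparity(1000.0, 0.3, 12.0))  # -> 25.0 m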
16
Positive Object Detection Using Stereo Vision
There are numerous ways to set up the cameras: parallel, angled, two separate cameras (as shown [15]), one device containing two cameras, or one camera using mirrors or a prism.
Using two cameras to calculate the disparity or distance map of a circuit board [14]. This leads to a 3D image.
[14] image source: http://www.mvtec.com/halcon/applications/application.pl?name=3dmetro, July 2006
[15] image source: http://www.indiana.edu/roboclub/projects/stereoIntro/index.html, July 2006
17
Positive Object Detection Using LIDAR
What is LIDAR?
  • LIght Detection And Ranging
  • Uses the same principle as RADAR. [2]
  • The lidar instrument transmits light out to a target. The transmitted light interacts with and is changed by the target. [2]
  • Some of this light is reflected / scattered back to the instrument, where it is analyzed. The change in the properties of the light enables some property of the target to be determined. [2]
  • The time for the light to travel out to the target and back to the lidar is used to determine the range to the target (a small ranging sketch follows the references below). [2]

[2] Dr. Michael J. Kavaya, What is LIDAR?, www.ghcc.msfc.nasa.gov/sparcle/sparcle_tutorial.html, Aug. 1999
[3] image source: http://www.aeromap.com/lidar_basics.htm
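The round-trip timing described in the last bullet reduces to range = (speed of light x round-trip time) / 2. A minimal Python sketch, with the round-trip time chosen only for illustration:

SPEED_OF_LIGHT_M_S = 299_792_458.0

def lidar_range_m(round_trip_time_s):
    """Range from round-trip time: the pulse travels out and back, so divide by two."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# An assumed 16-microsecond round trip corresponds to roughly 2.4 km,
# about the first sensing region mentioned later in this presentation.
print(lidar_range_m(16e-6))  # ~2398 m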
18
Positive Object Detection Using LIDAR
  • LIDAR can be used to create a digital surface model (DSM), a digital terrain model (DTM), or, in conjunction with other sensors and cameras, to gather LIDAR and image data simultaneously.
  • A DSM is sometimes referred to as a digital elevation model (DEM).
  • Range: 0.5 m to 3000 m

DEM/DSM of the same area [5]
Aerial photo [5]
DTM of the same area [5]
[5] Teng-To Yu, Ming Yang, Chao-Shi Chen, Automatic Feature Extraction and Stereo Image Processing with Genetic Algorithms for LiDAR Data, Proceedings of Computer Graphics, Imaging and Vision: New Trends (CGIV'05)
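One common way to turn LIDAR returns into a simple DSM is to grid the points and keep the highest elevation in each cell. The sketch below illustrates that idea only; it is not the genetic-algorithm method of the cited paper, and the point values are made up.

import math
from collections import defaultdict

def lidar_points_to_dsm(points, cell_size_m=1.0):
    """Grid (x, y, z) LIDAR returns and keep the highest z per cell (a crude DSM)."""
    dsm = defaultdict(lambda: float('-inf'))
    for x, y, z in points:
        cell = (math.floor(x / cell_size_m), math.floor(y / cell_size_m))
        dsm[cell] = max(dsm[cell], z)
    return dict(dsm)

# Made-up returns: a flat patch with one ~0.8 m rock near (2, 2).
pts = [(0.2, 0.3, 0.0), (1.1, 0.4, 0.05), (2.1, 2.2, 0.8), (2.4, 2.3, 0.75)]
print(lidar_points_to_dsm(pts))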
19
Positive Object Detection Using LIDAR
LIDAR vs. RADAR
  • The primary difference is that LIDAR uses much shorter wavelengths of the electromagnetic spectrum (typically in the ultraviolet, visible, or near infrared), whereas RADAR uses radio waves [6], which are 10,000 to 100,000 times longer.
  • A LIDAR system can offer much higher resolution than radar. A laser has a very narrow beam, which allows the mapping of features at very high resolution compared with radar [6]. A beam-footprint sketch follows the reference below.

[6] LIDAR, http://www.answers.com/topic/lidar, July 2006
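The resolution difference can be illustrated with the ground footprint of each beam, which for small angles is roughly range x beam divergence. The divergence values below are rough assumptions for illustration, not measured instrument figures.

def footprint_m(range_m, beam_divergence_rad):
    """Small-angle approximation: ground spot diameter = range * divergence."""
    return range_m * beam_divergence_rad

# Assumed divergences: ~1 mrad for a lidar beam, ~20 mrad for a small radar beam.
for name, divergence in [("lidar", 1e-3), ("radar", 20e-3)]:
    print(name, footprint_m(2400.0, divergence), "m spot at 2.4 km range")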
20
Positive Object Detection Using LIDAR
  • Combining LIDAR with other imaging can allow for 3D images to be generated. Some LIDAR companies (like SICK and Aeromap) offer multi-sensor systems, instrument integration, services, and applications that will aid in gathering LIDAR and other image data simultaneously to create this.

Healy, Alaska, USA: colored shaded-relief map [3]
LIDAR data overlay map [7]
[3] image source: http://www.aeromap.com/lidar_basics.htm, July 2006
[7] image source: http://www.aerometric.com/gallery, July 2006
21
Positive Object Detection Using LIDAR
For future Martian and Lunar rovers, integrated multiple sensors (including LIDAR) would most likely be placed higher than the rover body, so that they could detect obstacles at a greater distance (a simple geometry sketch follows the image sources below). The rover needs to determine the safest path of navigation.
SICK LMS laser range finder, used by the Robotics Laboratory at UCF [8]
One type of LIDAR model [9]
[8] image source: http://robotics.ucf.edu/calculon/mechanical/cad0LARGE.jpg, July 19, 2006
[9] image source: photo taken by Zachary J. Zaccagni at MSFC, NASA, for summer research, July 2006
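One reason mounting height matters, particularly for negative obstacles, follows from similar triangles: a sensor at height h looking over the near edge of a trench of width w at ground range r can see the far wall only down to a depth of about h*w/r. The sketch below is a geometric illustration with made-up numbers; it is not drawn from the presentation.

def visible_far_wall_depth(sensor_height_m, ground_range_m, trench_width_m, trench_depth_m):
    """Depth of a trench's far wall visible over the near edge (similar triangles)."""
    grazing_depth = sensor_height_m * trench_width_m / ground_range_m
    return min(trench_depth_m, grazing_depth)

# Illustrative case: a 1 m wide, 1 m deep trench seen from 10 m away.
for mast_height in (0.5, 1.5):  # body-mounted sensor vs. raised on a mast
    visible = visible_far_wall_depth(mast_height, 10.0, 1.0, 1.0)
    print(mast_height, "m mast ->", visible, "m of the far wall visible")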
22
Positive Object Detection Using LIDAR
LIDAR and stereo working in conjunction
The 3-D imaging abilities of a lidar could also be used in conjunction with the stereo cameras for active and autonomous rover guidance. [10] In this mode of operation the lidar has a considerable advantage over the passive cameras (digital cameras), since it has considerably greater range and distance resolution capabilities. [10] In addition, since the lidar carries its own laser light source, it operates equally well in either sunlight or shadow. [10] A simple fusion sketch follows the references below.
Stereo vision cameras are mounted at the front of the robot. The SICK LIDAR is mounted behind the cameras so that the laser beam plane passes directly over the cameras. [11]
[10] A. I. Carswell, A. Ulitsky, Surface-Based 3-D Lidar Measurements of the Martian Atmosphere
[11] Brian Yamauchi, The Wayfarer Modular Navigation Payload for Intelligent Robot Infrastructure, iRobot Research
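A very simple way to combine the two sensors is to prefer a lidar return when one exists and fall back to stereo only when its disparity is large enough to be useful. This is a hypothetical fusion rule for illustration, not the scheme used in the cited work.

def fuse_range(lidar_range_m, stereo_range_m, stereo_disparity_px, min_disparity_px=2.0):
    """Prefer the lidar return when present; otherwise trust stereo only if the
    disparity is large enough to give a usable range estimate."""
    if lidar_range_m is not None:
        return lidar_range_m, "lidar"
    if stereo_disparity_px >= min_disparity_px:
        return stereo_range_m, "stereo"
    return None, "no reliable range"

print(fuse_range(42.0, 40.5, 8.0))   # lidar return available -> use it
print(fuse_range(None, 40.5, 8.0))   # no lidar return -> fall back to stereo
print(fuse_range(None, 400.0, 0.5))  # disparity too small (far away or in shadow)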
23
Negative Obstacle Detection
  • A few current ways that negative obstacles are detected:
  • Stereoscopic vision (aka binocular vision)
  • LIDAR (dependent upon device location; e.g., it works best from above)
  • Thermal imaging

Negative obstacles are considered more difficult to detect than positive obstacles.
24
Negative Obstacle detection using thermal imaging
  • Negative obstacles are cavities that we might expect to retain heat (e.g., ditches, holes, and depressions). [1]
  • Negative obstacles tend to be warmer than the surrounding terrain for most of the night. [1]
  • Using thermal imaging, you can detect these negative obstacles in conditions for which other approaches fail (e.g., stereo vision-based range data). [1]

Left: thermal image of a trench 0.6 m wide viewed from 5.5 m away at a camera height of 1.0 m. Right: false-color range image from stereo vision; yellow is closest, violet furthest, and black represents no data. Cross-hairs in both images are for reference. The red overlay on the intensity image shows detection of the leading edge of the trench.
[1] L. Matthies and A. Rankin, Negative Obstacle Detection by Thermal Signature, International Conference on Intelligent Robots and Systems, Oct. 2003
25
Negative Obstacle detection using thermal imaging
  • Detecting negative obstacles is much more difficult than detecting positive ones. [1]
  • Negative obstacle detection algorithms have in the past relied primarily on geometric analysis of range data, which is considered highly dependent on illumination conditions. [1]
  • Ground-based sensors have a particularly difficult time detecting or measuring these negative obstacles, leading to false alarms and missed detections. Aerial-based sensors are more proficient, but the stereo vision-based algorithms still rely on the exploitation of gaps in the data (a range-gap sketch follows the caption below). [1]

Elevation plot of the range data, seen from above. The camera was on the left, looking right. Magenta overlay shows detection of the leading edge of the trench.
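The "gaps in the data" cue can be sketched as scanning a range profile for a sudden jump between neighbouring samples, which often marks a drop-off or trench lip. The threshold and the profile below are made-up illustrations, not the algorithm from the cited paper.

def find_range_gaps(ranges_m, jump_threshold_m=0.5):
    """Flag indices where the range jumps sharply between neighbouring samples
    along a scanline -- a crude cue for a drop-off or trench edge."""
    gaps = []
    for i in range(1, len(ranges_m)):
        if abs(ranges_m[i] - ranges_m[i - 1]) > jump_threshold_m:
            gaps.append(i)
    return gaps

# Made-up scanline: flat ground, then the sensor suddenly sees past a trench lip.
profile = [4.0, 4.1, 4.2, 4.3, 6.1, 6.2, 6.3]
print(find_range_gaps(profile))  # -> [4], the sample where the lip is crossed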
26
Negative Obstacle detection using thermal imaging
  • Convection tends to cool open terrain faster than the interior of negative obstacles; the rate of heat transfer depends on the rate of air motion. [1]
  • Following some transitional period after sunset, the interior should be warmer than the surrounding terrain throughout the night. [1]
  • Weather and the width of the obstacle affect how long negative obstacles remain warmer (e.g., rain reduces temperature differences; the larger the negative obstacle, the smaller the divergence in temperatures). [1]

LEFT: color imagery and RIGHT: 3-5 µm thermal infrared imagery of a pothole dug in soil at a construction site, taken at midnight.
27
Negative Obstacle detection using thermal imaging
  • An algorithm is needed to look for bright spots in thermal imagery that could be negative obstacles and to apply simple geometric checks (possibly using a stereo vision-based system) to rule out gross false alarms; the authors of the referenced paper have developed a simple algorithm that does this (a toy version follows the reference below). [1]
  • Out to 6.1 m, thermal alone could reliably detect a negative obstacle, but this doesn't exclude warm buildings or other false negative obstacles (like recently treaded tire tracks). [1]

Panels at 15.2 m, 12.2 m, 9.1 m, and 6.1 m: trench detection results at 9 pm. There was reliable detection to 6.1 m (based on thermal alone).
[1] L. Matthies and A. Rankin, Negative Obstacle Detection by Thermal Signature, International Conference on Intelligent Robots and Systems, Oct. 2003
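A toy version of the two-stage idea described above: flag pixels that are both warmer than a threshold and geometrically suspicious (missing or implausibly long range). All names, thresholds, and numbers are invented for illustration; this is not the Matthies and Rankin algorithm.

def detect_negative_obstacles(thermal_c, ranges_m, warm_threshold_c, max_flat_ground_range_m):
    """Flag pixel indices that are warm (thermal cue) and whose range data is
    missing or longer than flat ground would allow (crude geometric check)."""
    detections = []
    for i, (temp, rng) in enumerate(zip(thermal_c, ranges_m)):
        warm = temp > warm_threshold_c
        geometry_suspicious = rng is None or rng > max_flat_ground_range_m
        if warm and geometry_suspicious:
            detections.append(i)
    return detections

# Made-up per-pixel values: pixel 2 is warm and has no stereo range behind it.
thermal = [12.0, 12.5, 16.0, 12.2]  # degrees C
ranges = [5.0, 5.1, None, 5.3]      # meters; None marks a gap in the stereo data
print(detect_negative_obstacles(thermal, ranges, warm_threshold_c=14.0,
                                max_flat_ground_range_m=6.0))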
28
Negative Obstacle detection using thermal imaging
  • Combining thermal and geometric cues achieves greater success in negative obstacle detection than using range data alone. [1]
  • Further study needs to be done under various weather conditions; the current research has been tested only under clear weather and light rain.
  • Currently, this system is designed for night-time (after sunset) observations and detections. Modeling the solar illumination during the day might allow thermal signatures to be applied to daytime negative obstacle detection. [1]

At 7 am at a distance of 2.8 m. LEFT: results using range data alone (no detection). RIGHT: results with thermal and geometric cues (detection). The upper left panel is a false-color range image and the upper right panel is a false-color height image. The upper middle panel is thermal. The bottom is an elevation plot via range data.
29
Summary, Conclusions
A suite of instrumentation should be used to obtain the most accurate data for ODA (a staged-suite sketch follows this list).
  • The lander can acquire data from the surface (at about 2.4 km) using LIDAR.
  • At about 1-1.4 km, an integrated suite would use both LIDAR and stereo vision for ODA to narrow down an ideal landing zone (an area free from ridges and very large rocks).
  • LIDAR can be used to probe the surface density as well as range, to ensure a stable surface (no soft sand).
  • At 100-200 m, thermal imaging can join the other two instruments in detecting negative and positive obstacles.
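The staged plan above can also be written down as a simple lookup from descent region to the sensors proposed for it. The regions and sensor names come from this slide; the dictionary structure itself is just an illustration.

# Descent regions named on this slide, mapped to the sensors proposed for each.
SENSOR_PLAN = {
    "~2.4 km (surface acquisition)": ("LIDAR",),
    "1-1.4 km (narrow down landing zone)": ("LIDAR", "stereo vision"),
    "100-200 m (final obstacle check)": ("LIDAR", "stereo vision", "thermal imaging"),
}

for region, sensors in SENSOR_PLAN.items():
    print(region, "->", ", ".join(sensors))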

30
Summary, Conclusions
Some things to consider:
  • Stereo vision is limited by its need for good light (not usable for landing at night).
  • Thermal imaging, for negative obstacle detection, is best used within a few hours of dusk or dawn.
  • Fortunately, LIDAR can be used to detect negative obstacles and does not have lighting requirements.
  • On rovers, this triad suite can be used similarly; a shorter-range LIDAR would be needed, and the thermal imager could be used to support the LIDAR data.