1
  • ROBOTICS
  • COE 584
  • Robot Control Architectures

2
Review
  • Control Architectures
  • Languages for robot control
  • Computability
  • Organizing principles
  • Architecture comparison criteria

3
Robot Control Architectures
  • There are infinitely many ways to program a
    robot, but there are only a few types of robot
    control
  • Deliberative control (no longer in use)
  • Reactive control
  • Hybrid control
  • Behavior-based control
  • Numerous architectures have been developed, each
    designed for a particular control problem
  • However, they all fit into one of the categories
    above

4
Comparing Architectures
  • Architectures can be classified by the way in
    which they treat
  • Time-scale (looking ahead)
  • Modularity
  • Representation

5
Time-Scale and Looking Ahead
  • How fast does the system react? Does it look into
    the future?
  • Deliberative control
  • Look into the future (plan), then execute → long
    time scale
  • Reactive control
  • Do not look ahead, simply react → short time
    scale
  • Hybrid control
  • Look ahead (deliberative layer) but also react
    quickly (reactive layer)
  • Behavior-based
  • Look ahead while acting

6
Modularity
  • Refers to the way the control system is broken
    into components
  • Deliberative control
  • Sensing (perception), planning and acting
  • Reactive control
  • Multiple modules running in parallel
  • Hybrid control
  • Deliberative, reactive, middle layer
  • Behavior-based
  • Multiple modules running in parallel

7
Representation
  • Representation is the form in which the control
    system internally stores information
  • Internal state
  • Internal representations
  • Internal models
  • History
  • What is represented and how it is represented has
    a major impact on robot control
  • State refers to the "status" of the system
    itself, whereas "representation" refers to
    arbitrary information that the robot stores

8
An Example
  • Consider a robot that moves in a maze: what does
    the robot need to know to navigate and get out?
  • Store the path taken to the end of the maze
  • Straight 1m, left 90 degrees, straight 2m, right
    45 degrees
  • Odometric path
  • Store a sequence of moves it has made at
    particular landmarks in the environment
  • Left at first junction, right at the second, left
    at the third
  • Landmark-based path
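
As an illustration (not from the slides), the two path representations could be stored as simple data structures; the values below are hypothetical:

    # Hypothetical sketch: storing the two maze-path representations.
    # Odometric path: a sequence of (action, amount) pairs from odometry.
    odometric_path = [
        ("straight", 1.0),    # metres
        ("left", 90),         # degrees
        ("straight", 2.0),
        ("right", 45),
    ]
    # Landmark-based path: the move made at each junction, in order.
    landmark_path = ["left", "right", "left"]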

9
Topological Map
  • Store what to do at each landmark in the maze
  • Landmark-based map
  • The map can be stored (represented) in different
    forms
  • Store all possible paths and use the shortest one
  • Topological map describes the connections among
    the landmarks
  • Metric map: a global map of the maze with exact
    lengths of corridors and distances between walls,
    and free and blocked paths; very general!
  • The robot can use this map to find new paths
    through the maze
  • Such a map is a world model, a representation of
    the environment

10
World Models
  • Numerous aspects of the world can be represented
  • self/ego: stored proprioception, self-limits,
    goals, intentions, plans
  • space: metric or topological (maps, navigable
    spaces, structures)
  • objects, people, other robots: detectable things
    in the world
  • actions: outcomes of specific actions in the
    environment
  • tasks: what needs to be done, in what order, by
    when
  • Ways of representation
  • Abstractions of a robot's state and other
    information

11
Model Complexity
  • Some models are very elaborate
  • They take a long time to construct
  • These are kept around for a long time throughout
    the lifetime of the robot
  • E.g. a detailed metric map
  • Other models are simple
  • Can be quickly constructed
  • In general they are transient and can be
    discarded after use
  • E.g. information related to the immediate goals
    of the robot (avoiding an obstacle, opening a
    door, etc.)

12
Models and Computation
  • Using models requires a significant amount of
    computation
  • Construction: the more complex the model, the
    more computation is needed to construct it
  • Maintenance: models need to be updated and kept
    up-to-date, or they become useless
  • Use: the complexity of a representation directly
    affects the type and amount of computation
    required for using the model
  • Different architectures have different ways of
    handling representations

13
An Example
  • Consider a metric map
  • Construction
  • Requires exploring and measuring the environment
    and intense computation
  • Maintenance
  • Continuously update the map if doors are open or
    closed
  • Using the map
  • Finding a path to a goal involves planning: find
    free/navigable spaces, then search through them
    for the shortest or easiest path

14
Simultaneous Mapping and Localization
15
Cooperative Mapping and Localization
16
Reactive Control
  • Reactive control is based on tight (feedback)
    loops connecting a robot's sensors with its
    effectors
  • Purely reactive systems do not use any internal
    representations of the environment, and do not
    look ahead
  • They work on a short time-scale and react to the
    current sensory information
  • Reactive systems use minimal, if any, state
    information

17
Collections of Rules
  • Reactive systems consist of collections of
    reactive rules that map specific situations to
    specific actions
  • Analogous to stimulus-response reflexes
  • Bypassing the brain allows reflexes to be very
    fast
  • Rules are running concurrently and in parallel
  • Situations
  • Are extracted directly from sensory input
  • Actions
  • Are the responses of the system (behaviors)

18
Mutually Exclusive Situations
  • If the set of situations is mutually exclusive
  • → only one situation can be met at a given time
  • → only one action can be activated
  • It is often difficult to split up the situations
    this way
  • To have mutually exclusive situations the
    controller must encode rules for all possible
    sensory combinations, from all sensors
  • This space grows exponentially with the number of
    sensors

19
Complete Control Space
  • The entire state space of the robot consists of
    all possible combinations of the internal and
    external states
  • A complete mapping from these states to actions
    is needed such that the robot can respond to all
    possibilities
  • This would be a tedious job and would result
    in a very large look-up table that takes a long
    time to search
  • Reactive systems use parallel, concurrent reactive
    rules → parallel architecture, multi-tasking

20
Incomplete Mappings
  • In general, complete mappings are not used in
    hand-designed reactive systems
  • The most important situations trigger the
    appropriate reactions
  • Default responses are used to cover all other
    cases
  • E.g. a reactive safe-navigation controller
    (sketched in code below)
  • If left whisker bent then turn right
  • If right whisker bent then turn left
  • If both whiskers bent then back up and turn left
  • Otherwise, keep going
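
A minimal Python sketch of these whisker rules (the command strings are illustrative, not from the slides):

    def whisker_controller(left_bent, right_bent):
        """Reactive safe-navigation rules: map the current situation to an action."""
        if left_bent and right_bent:
            return "back_up_and_turn_left"
        if left_bent:
            return "turn_right"
        if right_bent:
            return "turn_left"
        return "keep_going"        # default response covers all other cases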

21
Example Safe Navigation
  • A robot with 12 sonar sensors, all around the
    robot
  • Divide the sonar range into two zones
  • Danger zone: things too close
  • Safe zone: reasonable distance to objects
  • if minimum of sonars 1, 2, 3, 12 < danger-zone and
    not-stopped
  • then stop
  • if minimum of sonars 1, 2, 3, 12 < danger-zone and
    stopped
  • then move backward
  • otherwise
  • move forward
  • This controller does not look at the side sonars
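
A Python sketch of this controller, assuming sonar readings indexed 1-12 in metres and an assumed danger-zone threshold:

    DANGER_ZONE = 0.3    # metres; assumed value, not given on the slide

    def front_guard(sonar, stopped):
        """sonar: dict mapping sonar index (1..12) to measured range in metres."""
        front = min(sonar[i] for i in (1, 2, 3, 12))   # front-facing sonars only
        if front < DANGER_ZONE:
            return "move_backward" if stopped else "stop"
        return "move_forward"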

22
Example Safe Navigation
  • For dynamic environments, add another layer
  • if sonar 11 or 12 < safe-zone and
    sonar 1 or 2 < safe-zone
  • then turn right
  • if sonar 3 or 4 < safe-zone
  • then turn left
  • The robot turns away from the obstacles before
    getting too close
  • The combination of the two controllers above →
    collision-free wandering behavior (sketched below)
  • Above, we had mutually exclusive conditions
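
A sketch of the added turning layer combined with the front guard from the previous sketch (the safe-zone value is assumed):

    SAFE_ZONE = 1.0      # metres; assumed value

    def turn_layer(sonar):
        """Turn away from obstacles that are near but not yet dangerously close."""
        if (sonar[11] < SAFE_ZONE or sonar[12] < SAFE_ZONE) and \
           (sonar[1] < SAFE_ZONE or sonar[2] < SAFE_ZONE):
            return "turn_right"
        if sonar[3] < SAFE_ZONE or sonar[4] < SAFE_ZONE:
            return "turn_left"
        return None          # no turn needed

    def wander(sonar, stopped):
        """Combining the two controllers gives collision-free wandering."""
        return turn_layer(sonar) or front_guard(sonar, stopped)  # front_guard from the sketch above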

23
Action Selection
  • In most cases the rules are not triggered by
    unique mutually-exclusive conditions
  • More than one rule can be triggered at the same
    time
  • Two or more different commands are sent to the
    actuators!!
  • Deciding which action to take is called action
    selection
  • Arbitration: decide among multiple actions or
    behaviors
  • Fusion: combine multiple actions to produce a
    single command
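
A toy Python sketch contrasting the two (the data shapes are illustrative, not from the slides):

    def arbitrate(proposals, priority):
        """Arbitration: pick the single command from the highest-priority active rule."""
        active = [(priority[name], cmd) for name, cmd in proposals.items()
                  if cmd is not None]
        return max(active)[1] if active else None

    def fuse(velocity_proposals):
        """Fusion: combine all proposed velocities into one command (plain average here)."""
        return sum(velocity_proposals) / len(velocity_proposals)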

24
Arbitration
  • There are many different types of arbitration
  • Arbitration can be done based on
  • a fixed priority hierarchy
  • rules have pre-assigned priorities
  • a dynamic hierarchy
    rule priorities change at run-time
  • learning
  • rule priorities may be initialized and are
    learned at run-time, once or continuously

25
Multi-Tasking
  • Arbitration decides which one action to execute
  • To respond to any rule that might become
    triggered, all rules have to be monitored in
    parallel and concurrently
  • If no obstacle in front → move forward
  • If obstacle in front → stop and turn away
  • Wait for 30 seconds, then turn in a random
    direction
  • Monitoring sensors in sequence may lead to
    missing important events, or failing to react in
    real time
  • Reactive systems must support parallelism
  • The underlying programming language must have
    multi-tasking abilities
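
One way to get this parallelism in Python is to run each rule in its own thread, each writing its latest proposal into a shared table (a sketch with assumed rule and sensor functions):

    import threading, time

    proposals = {}                   # latest command proposed by each rule
    lock = threading.Lock()

    def monitor_rule(name, rule, read_sensors, period=0.05):
        """Run one reactive rule continuously so its trigger is never missed."""
        while True:
            command = rule(read_sensors())
            with lock:
                proposals[name] = command
            time.sleep(period)

    # Hypothetical usage: one thread per rule, plus an arbiter reading `proposals`.
    # threading.Thread(target=monitor_rule,
    #                  args=("guard", front_guard_rule, read_sonars),
    #                  daemon=True).start()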

26
Designing Reactive Systems
  • How can we put together a large number of rules
    to produce effective, reliable, and goal-directed
    behavior?
  • How do we organize a reactive controller in a
    principled way?
  • The best known reactive architecture is the
    Subsumption Architecture (Rod Brooks, MIT, 1985)

27
Vertical v. Horizontal Systems
Traditional (SPA): sense → plan → act
Subsumption
28
Biological Inspiration
  • The inspiration behind the Subsumption
    Architecture is the evolutionary process
  • New competencies are introduced based on existing
    ones
  • Complete creatures are not thrown out and new
    ones created from scratch
  • Instead, solid, useful substrates are used to
    build up to more complex capabilities

29
The Subsumption Architecture
  • Principles of design
  • systems are built from
  • the bottom up
  • components are task-achieving
  • actions/behaviors (avoid-obstacles, find-doors,
    visit-rooms)
  • components are organized in layers, from the
    bottom up
  • lowest layers handle most basic tasks
  • all rules can be executed in parallel, not in a
    sequence
  • newly added components and layers exploit the
    existing ones

30
Subsumption Layers
  • First, we design, implement and debug layer 0
  • Next, we design layer 1
  • When layer 1 is designed, layer 0 is taken into
    consideration and utilized; its existence is
    subsumed (hence the name of the architecture)
  • As layer 1 is added, layer 0 continues to
    function
  • Continue designing layers, until the desired task
    is achieved

(diagram: subsumption layers connecting sensors to actuators)
31
Suppression and Inhibition
  • Higher layers can disable the ones below
  • Avoid-obstacles can stop the robot from moving
    around
  • Layer 2 can either
  • Inhibit the output of level 1, nothing gets
    through
  • Suppress the input of level 1, signal is replaced
  • The process is continued all the way to the top
    level

(diagram: higher layers suppressing/inhibiting lower layers, between sensors and actuators)
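
A minimal sketch of how one control cycle could implement this (the layer functions are hypothetical placeholders, not from the slides):

    def control_cycle(sensors, lower_layer, higher_layer):
        """Higher layer may rewrite (suppress) the input or block (inhibit) the output."""
        sensors, inhibit, high_cmd = higher_layer(sensors)   # suppression happens here
        low_cmd = lower_layer(sensors)                        # lower layer keeps running
        return high_cmd if inhibit else low_cmd               # inhibition happens here
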
32
Subsumption Language and AFSMs
  • The original Subsumption Architecture was
    implemented using the Subsumption Language
  • It was based on finite state machines (FSMs)
    augmented with a very small amount of state
    (AFSMs)
  • AFSMs were implemented in Lisp

33
Subsumption Language and AFSMs
  • Each behavior is represented as an augmented
    finite state machine (AFSM)
  • Stimulus (input) or response (output) can be
    inhibited or suppressed by other active behaviors
  • An AFSM can be in one state at a time, can
    receive one or more inputs, and send one or more
    outputs
  • AFSMs are connected by communication wires, which
    pass input and output messages between them; only
    the last message is kept
  • AFSMs run asynchronously

(diagram: example AFSM wiring, a sonar input to a collide machine that outputs halt)
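
A rough Python sketch of an AFSM along these lines (the original Subsumption Language was Lisp-based; this is only an illustration):

    class AFSM:
        """Augmented finite state machine: one current state, a little local state,
        and input wires that keep only the most recent message."""
        def __init__(self, name, start_state):
            self.name = name
            self.state = start_state
            self.inputs = {}                 # wire name -> latest message only

        def receive(self, wire, message):
            self.inputs[wire] = message      # a newer message overwrites the old one

        def step(self):
            """Subclasses override: read inputs, update state, return output messages."""
            return {}
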
34
Networks of AFSMs
  • Layers represent task achieving behaviors
  • Wandering, avoidance, goal seeking
  • Layers work concurrently and asynchronously
  • A Subsumption Architecture controller, using the
    AFSM-based programming language, is a network of
    AFSMs divided into layers
  • Convenient for incremental system design

35
Wandering in Subsumption
  • Brooks 87

The lowest level is our familiar avoid-objects
layer of control, which includes avoidance and
halting behaviors. The next layer of control lets
the robot explore large areas. The third
level gives some extra heuristics for backing out
of tight situations.
36
Layering in AFSM Networks
  • Layers modularize the reactive system
  • Bad design
  • putting a lot of behaviors within a single layer
  • putting a large number of connections between the
    layers, so that they are strongly coupled
  • Strong coupling implies dependence between
    modules, which violates the modularity of the
    system
  • If modules are interdependent, they are not as
    robust to failure
  • In Subsumption, if higher layers fail, the lower
    ones remain unaffected

37
Module Independence
  • Subsumption has one-way independence between
    layers
  • With upward independence, a higher layer can
    always use a lower one by using suppression and
    inhibition
  • Two-way independence is not practical
  • No communication between layers would be possible
  • Do we always have to use these wires to
    communicate between parts of the system?

38
Using the World
  • How can you sequence activities in Subsumption?
  • Coupling between layers need not be through the
    system itself (i.e., not through explicit
    communication wires)
  • It could be through the world. How?

39
A Robust Layered Control System for a Mobile
Robot, by Rodney A. Brooks.
  • Layer Zero
  • Takes a vector of sonar readings providing a
    robot-centered map of surrounding obstacles.
  • Collide: monitors the sonar map to determine
    whether there is an object ahead. If, in the
    course of moving, the robot is about to collide
    with something, it halts the robot.
  • Feelforce: generates a resultant repulsive force
    (from multiple directions). If something
    approaches, it forwards the force to Runaway so
    the robot can move away in the right direction.
  • Runaway: sends a command to Motor, if the force
    is significant, to move away from the force
    direction.
  • Motor: controlled using two degrees of freedom,
    the speed and the steering angle.

40
  • Layer One
  • Objective: wander around aimlessly (depends on
    Layer Zero)
  • Wander: generates a new heading for the robot
    every 10 seconds.
  • Avoid: takes the force vector from Feelforce and
    combines it with the heading to produce a
    modified heading. Avoid subsumes the Runaway
    module, as it may suppress its output when there
    is a new heading to account for.

41
  • Layer Two
  • Objective: exploratory mode, using visual
    observation to select interesting places to
    visit. Uses position servoing to drive the robot
    to a desired position over a travel path that
    includes some local obstacles.
  • Grabber: reads a global goal. It ensures control
    by sending a halt to Motor; this is done by
    inhibiting the lower level (see the many
    inhibiting lines of control) to give time to plan
    a detailed motion. A goal is sent to the Pathplan
    module.
  • An inhibiting connection to Level 1 ensures that
    no action can be initiated; the robot cannot
    approach objects for 2 seconds.
  • Monitor: tracks each motion by monitoring the
    status of Motor. When Motor is inactive, it
    queries the robot's shaft encoders to find how
    far it traveled and whether it terminated as
    expected or halted early due to obstacles.
  • Integrate: accumulates reports of motion from
    Monitor and sends its most recent result out on
    its integral line. It is restarted by a reset
    signal.
  • Pathplan: takes a goal (an angle to turn, a
    distance to travel, and a final orientation) and
    attempts to reach it. It sends headings to Avoid,
    which may perturb them to avoid local obstacles,
    and monitors the integration of actual motion. It
    may suppress the random wanderings at the input
    of Avoid as long as the higher-level planner
    remains active. When the robot is at the goal, it
    outputs the goal to Straighten.
  • Straighten: adjusts the final orientation of the
    robot. It performs fine-grained control of robot
    orientation without being exposed to Feelforce;
    for this, it sends its control directly to Motor
    and monitors Integrate for completeness. It also
    inhibits Collide because there is no chance of
    collision from forward motion.

Level 2 controls the lower layers using four lines (I
and S). A planned stereo system produces depth data
for corridor space. Extra control is needed to stop
the robot and take additional views when the robot
wanders outside some limits.
42
We wire finite state machines together into
layers of control. Each layer is built on top of
existing layers. Lower level layers never rely on
the existence of higher level layers.
("Intelligence without Representation," Rodney A.
Brooks)
The last level is meant to add an exploratory
mode of behavior to the robot, using visual
observations to select interesting places to
visit. A vision module finds corridors of free
space. Additional modules provide a means of
position servoing the robot along the corridor
despite the presence of local obstacles on its
path (as detected with the sonar sensing system).
The two lower-level layers still play an active
role during normal operation of the second layer.
(In practice we have so far only reused the sonar
data for the corridor finder, rather than use
stereo vision.)
The next layer of control, when combined with the
lowest, imbues the robot with the ability to
wander around without hitting obstacles. This
controller level relies to a large degree on the
zeroth level's aversion to hitting obstacles. In
addition, it uses a simple heuristic to plan ahead
a little in order to avoid potential collisions
which would need to be handled by the zeroth level.
In the lowest level layer, the robot does not come
into contact with other objects. If something
approaches the robot it will move away. If in the
course of moving itself it is about to collide
with an object it will halt. These two tactics
are sufficient for the robot to flee from moving
obstacles, perhaps requiring many motions,
without colliding with stationary obstacles. The
combination of the tactics allows the robot to
operate with very coarsely calibrated sonars and
a wide range of repulsive force functions.
43
Collecting Soda Cans
  • Herbert collected empty soda cans and took them
    home
  • Herbert's capabilities
  • Move around without running into obstacles
  • Detect soda cans using a camera and a laser
  • An arm that could extend, sense if there is a
    can in the gripper, close the gripper, tuck the
    arm in

44
Herbert
  • Look for soda cans; when seeing one, approach it
  • When close, extend the arm toward the soda can
  • If the gripper sensors detect something, close
    the gripper
  • If the can is heavy, put it down; otherwise pick
    it up
  • If the gripper was closed, tuck the arm in and
    head home
  • The robot did not keep internal state about what
    it had just done and what it should do next; it
    just sensed!

45
Tom and Jerry
Embodied Intelligence Tom and Jerry Gleb
Chuvpilo, Jessica Howe, MIT, 2002
  • Tom and Jerry, as they are usually called, are a
    robotic cat and mouse pair.
  • Both are implemented as robotic vehicles that
    are able to move around within their
    environments, as well as interact with each other
    and modify behavior based on their current
    surroundings.
  • The mouse acts as a passive agent, simply moving
    and wandering through the environment with no
    sensory feedback, to give the cat something to
    chase.
  • The cat is the active agent who tracks and
    chases this mouse around the environment, with a
    layering of multiple behaviors that are able to
    take effect at appropriate times.

46
  • The Cat (Tom) has
  • Two bump sensors, activated by front-left and
    front-right whiskers, which give a Boolean
    pressed / not-pressed value
  • Three light sensors mounted at the front of the
    vehicle, facing left, right, and forwards, which
    return integer values between 0 and 255
  • Tom
  • The mouse has a light bulb on top of its head,
    and the cat has light sensors to track down the
    mouse.
  • The cat's behavior is to wander around in the
    darkness while exploring as much space as
    possible, and go towards the light if there is
    one. If the cat finds the mouse, it begins to
    play with it by sitting still in the same place
    for a while, pouncing on it and batting it a bit,
    then letting the mouse go away. The mouse can't
    see the cat, so it just wanders around.

47
  • The Mouse (Jerry)
  • Is strictly passive and has no feedback in the
    form of sensors.
  • Light is the means by which the cat is able to
    track and chase the mouse.
  • The mouse has a halogen bulb mounted on top of it
    which emanates light in all directions
  • During the experiment the lights are turned off
    in the room.
  • Light is used rather than infrared for mouse
    tracking because light-tracking mechanisms are
    easy to debug: the light is either shining or not
    shining.
  • Using IR, on the other hand, is more difficult
    to debug simply because we cannot see with the
    naked eye what the robots see.

Jerry the mouse
The mouse is much smaller than Tom. The mouse also
has three wheels, two active and one passive.
However, a gearbox would be too bulky for the
robot of that size, so we decided to do without
it and use smaller wheels instead. The mouse has
no sensors, and its behavior is completely
deterministic. There is no randomness in the time
in which it goes forward or turns, as opposed to
the cat. On top of the mouse there is a source of
bright light (a halogen lamp) so that the cat
would be able to see the mouse from far away. It
is worth noting that we had to decouple the
system and add another source of power.
48
  • CAT Behaviors
  • Four behaviors, each acting in a layered fashion
    one upon another, so that the appropriate action
    is invoked at the appropriate time, subsuming the
    behavior of lower levels (see the sketch below)
  • Layer 0: the basic action is to wander around
    the environment searching for the mouse, i.e.
    waiting for the light from the mouse to be seen
    so that a higher subsumption level may be called
    in.
  • Layer 1: follow light, activated by the three
    light sensors, which allow the cat to turn
    towards the mouse and move towards it once the
    two side light sensors read roughly the same
    levels.
  • Layer 2: obstacle avoidance, activated by the
    bump sensors. When an obstacle is hit, the cat
    backs up and turns away from it, after which the
    lower levels of wandering or light-following
    resume.
  • Layer 3: playing with the mouse (complex),
    invoked when the cat is within a certain
    threshold range of the mouse; it involves
    waiting, stalking, pouncing, and freeing the
    mouse.
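
A sketch of this four-layer priority ordering (sensor names, thresholds, and the behavior functions are illustrative, not from the report):

    PLAY_THRESHOLD = 200     # assumed sensor value meaning "mouse within play range"
    LIGHT_THRESHOLD = 100    # assumed sensor value meaning "mouse light visible"

    def cat_controller(sensors, wander, follow_light, avoid, play):
        """Highest triggered layer wins; the four behaviors are passed in as functions."""
        if sensors["front_light"] > PLAY_THRESHOLD:          # Layer 3: play with the mouse
            return play(sensors)
        if sensors["left_bump"] or sensors["right_bump"]:    # Layer 2: obstacle avoidance
            return avoid(sensors)
        if max(sensors["left_light"], sensors["right_light"],
               sensors["front_light"]) > LIGHT_THRESHOLD:    # Layer 1: follow the light
            return follow_light(sensors)
        return wander(sensors)                               # Layer 0: wander and search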

49
  • Wander (CAT)
  • Wandering aims at exploring the environment (in
    the absence of stimuli that trigger other
    actions) in search of the mouse
  • Repeatedly move forward for a random amount of
    time and then turn for a random amount of time,
    pointing it in a new direction.
  • The random length of time is spread between a
    defined minimum and maximum amount of time.
  • If in the process of wandering some stimulus is
    encountered the resulting behavior will take
    higher precedence over the wander action.
  • When that action is complete, wandering will
    resume.

50
  • Light Following (CAT)
  • When the cat is not pointing directly at the
    light the left and right light sensors will read
    different values. When these values are more than
    some defined delta apart the motors will spin to
    turn the cat in place so that the light is being
    faced head on.
  • If both side sensors read roughly the same
    values and the forward sensor reads above some
    defined threshold (when it actually sees the
    mouse rather than ambient light levels) it will
    move forward, correcting its direction if need
    be.

51
  • Obstacle Avoidance (bump level of CAT)
  • The obstacle avoidance is activated by the
    whiskers of the cat, or the left and right bump
    sensors.
  • When an obstacle is hit by either of these bump
    sensors the cat will slightly back up and turn
    away from the object that was hit.
  • An object may be hit either by the side of the
    cat (as when it is turning) or in a poor location
    along the front that does not allow the bump
    sensors to be compressed. In a case like this,
    the wheels of the cat will keep spinning until
    either its body shifts enough to activate one of
    the bump sensors or to free the body from the
    obstacle, or until an outside influence such as
    the mouse approaching causes another action to
    take over (like turning towards the mouse).
  • There is also a tendency to turn away from
    objects based on the input from the light
    sensors. The tendency of the cat to turn towards
    the light also causes the cat to turn away from
    dark objects before they are reached.
  • For example, if the cat is approaching a dark
    object at an angle, one side light sensor will
    begin to read a light level lower than the other
    side. This will cause the turn towards the
    light action to take over and the cat will in
    effect turn away from the object before the
    object is reached.
  • This works best when the lights are on, and the
    behavior would be very visible if we ever changed
    mouse tracking based on light. When run with the
    lights off this behavior is not always apparent.

52
  • Play (CAT)
  • Play is a multi-stage behavior in which the cat
    stalks, pounces upon, and then lets the mouse
    escape. When the cat gets within a certain
    distance of the mouse the front light sensor hits
    a threshold level and the cat begins stalking the
    mouse. In this stage the cat sits still for a
    fixed length of time and just watches the mouse,
    rotating if necessary, but not moving towards it.
  • If the mouse has stayed within the threshold
    distance during this entire stalking time, the
    cat then pounces upon the mouse, moving towards
    it at full speed until it hits it.
    Once it has hit the mouse it briefly backs up and
    then moves forward to hit it again. This is
    repeated until the mouse has been hit 3 times.
    The cat then sits still for a fixed amount of
    time, ignoring all sensory input, to allow the
    mouse time to escape. Once this fixed amount of
    time runs out playing is completed and standard
    subsumption style behavior is resumed.
  • Play has the highest priority within the
    subsumption architecture, so while playing all
    other actions that would be performed by lower
    subsumption levels are ignored.
  • One problem with the play mode is that there is
    no distinction between hitting the mouse and
    hitting a wall or obstacle while pouncing.
  • While the cat is pouncing on the mouse it just
    moves at full speed towards the light until it
    hits something, which may be an obstacle rather
    than the mouse.
  • As it is, the play mode subsumes all other
    subsumption-level behaviors, so there is no way
    to use obstacle avoidance or mouse chasing as
    written in those subsumption levels without
    writing that code directly into the play
    functionality. Perhaps one modification we would
    make if we were to redesign the subsumption
    architecture would be to allow lower subsumption
    levels to be accessed while playing.

53
  • Basic Architecture Design
  • The main debate was between using a subsumption
    architecture and finite state automata, as in the
    ants problem set. The trouble with FSAs is that
    the code becomes complex.
  • The FSA model would be much more complicated to
    implement. Instead, the subsumption architecture
    is implemented simply by having a global variable
    for each output value passed from one block of
    the diagram to another, possibly subsuming
    another signal. When each of the functions
    representing diagram blocks is called, its
    functionality can depend on the current values of
    these global signals. In other words, the
    previous signals are maintained during each time
    loop, when each subsumption block calculates its
    desired output based on these input signals. It
    seemed to be a reasonable approach (see the
    sketch below).
  • Another advantage of subsumption is that we could
    start easy and build up to a more complex design
    as time allowed.
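
A rough sketch of the global-signal scheme described above (block names, signals, and thresholds are made up for illustration):

    import random

    LIGHT_THRESHOLD = 100                     # assumed "mouse light visible" value

    signals = {"wander_heading": 0.0,         # one global slot per diagram wire
               "light_heading": None,
               "motor_heading": 0.0}

    def wander_block():
        signals["wander_heading"] = random.uniform(-180, 180)

    def follow_light_block(sensors):
        # Writes a heading only when the light is seen, subsuming the wander signal.
        if sensors["front_light"] > LIGHT_THRESHOLD:
            signals["light_heading"] = sensors["light_bearing"]   # hypothetical sensor value
        else:
            signals["light_heading"] = None

    def motor_block():
        # Each loop, the motor block reads whichever signal currently subsumes the other.
        heading = signals["light_heading"]
        signals["motor_heading"] = heading if heading is not None else signals["wander_heading"]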

54
More on Herbert
  • There is no internal wire between the layers that
    achieve can finding, grabbing, arm tucking, and
    going home
  • However, the events are all executed in proper
    sequence. Why?
  • Because the relevant parts of the control system
    interact and activate each other through sensing
    the world

55
World as the Best Model
  • This is a key principle of reactive systems and
    of the Subsumption Architecture
  • Use the world as its own best model!
  • If the world can provide the information
    directly (through sensing), it is better to get
    it that way than to store it internally in a
    representation (which may be large, slow,
    expensive, and outdated)

56
Subsumption System Design
  • What makes a Subsumption Layer, what should go
    where?
  • There is no strict recipe, but some solutions are
    better than others, and most are derived
    empirically
  • How exactly layers are split up depends on the
    specifics of the robot, the environment, and the
    task

57
Designing in Subsumption
  • Qualitatively specify the overall behavior needed
    for the task
  • Decompose that into specific and independent
    behaviors (layers)
  • Determine behavior granularity
  • Ground low-level behaviors in the robot's sensors
    and effectors
  • Incrementally build, test, and add

58
Genghis (MIT)
  • Walk over rough terrain and follow a human
    (Brooks 89)
  • Standup
  • Controls each leg's swing position and lift
  • Simple walk
  • Force balancing
  • Force sensors provide information about the
    ground profile
  • Leg lifting: step over obstacles
  • Obstacle avoidance (whiskers)
  • Pitch stabilization
  • Prowling
  • Steered prowling

59
The Nerd Herd (MIT)
  • Foraging example (Mataric 93)
  • R1 robots (IS robotics)
  • Behaviors involved
  • Wandering
  • Avoiding
  • Pickup
  • Homing

60
Tom and Jerry (MIT)
Tom

Jerry
61
Pros and Cons
  • Some critics consider the lack of detail about
    designing layers to be a weakness of the approach
  • Others feel it is a strength, allowing for
    innovation and creativity
  • Subsumption has been used in a vast variety of
    effective, implemented robotic systems
  • It was the first architecture to demonstrate many
    working robots

62
Benefits of Subsumption
  • Systems are designed incrementally
  • Avoid design problems due to the complexity of
    the task
  • Helps the design and debugging process
  • Robustness
  • If higher levels fail, the lower ones continue
    unaffected
  • Modularity
  • Each competency is included in a separate layer,
    thus making the system manageable to design and
    maintain
  • Rules and layers can be reused on different
    robots and for different tasks

63
Behavior-Based Control
  • Reactive systems
  • too inflexible, use no representation, no
    adaptation or learning
  • Deliberative systems
  • Too slow and cumbersome
  • Hybrid systems
  • Complex interactions among the hybrid components
  • Behavior-based control involves the use of
    behaviors as modules for control

64
What Is a Behavior?
  • Behavior-achieving modules
  • Rules of implementation
  • Behaviors achieve or maintain particular goals
    (homing, wall-following)
  • Behaviors are time-extended processes
  • Behaviors take inputs from sensors and from other
    behaviors and send outputs to actuators and other
    behaviors
  • Behaviors are more complex than actions (stop,
    turn-right vs. follow-target, hide-from-light,
    find-mate etc.)

65
Principles of BBC Design
  • Behaviors are executed in parallel, concurrently
  • Ability to react in real-time
  • Networks of behaviors can store state (history),
    construct world models/representation and look
    into the future
  • Use representations to generate efficient
    behavior
  • Behaviors operate on compatible time-scales
  • Ability to use a uniform structure and
    representation throughout the system

66
Internal vs. Observable Behavior
  • Observable behaviors do not always have a
    matching internal behavior
  • Why not?
  • Emergent behavior: interesting behavior can be
    produced from the interaction of multiple
    internal behaviors
  • Start by listing the desired observable behaviors
  • Program those behaviors with internal behaviors
  • Behavior-based controllers are networks of
    internal behaviors which interact in order to
    produce the desired, external, observable behavior

67
An Example
  • A robot that waters plants around a building when
    they get dry
  • Behaviors: avoid-collision, find-plant,
    check-if-dry, water, refill-reservoir,
    recharge-batteries
  • Complex behaviors may consist of internal
    behaviors themselves
  • Find-plant: wander-around, detect-green,
    approach-green, etc.
  • Multiple behaviors may share the same underlying
    component behavior
  • Refill-reservoir may also use wander-around
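
A sketch of how composite behaviors could share a component behavior (all function names are illustrative placeholders):

    def find_plant(sensors, wander_around, detect_green, approach_green):
        """Composite behavior built from internal component behaviors."""
        if detect_green(sensors):
            return approach_green(sensors)
        return wander_around(sensors)          # shared low-level component

    def refill_reservoir(sensors, wander_around, at_station, fill):
        if at_station(sensors):
            return fill()
        return wander_around(sensors)          # the same component behavior is reused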

68
An Example Task Mapping
  • Design a robot that is capable of
  • Moving around safely
  • Making a map of the environment
  • Using the map to find the shortest paths to
    particular places
  • Navigation and mapping are the most common mobile
    robot tasks

69
Map Representation
  • The map is distributed over different behaviors
  • We connect parts of the map that are adjacent in
    the environment by connecting the behaviors that
    represent them
  • The network of behaviors represents a network of
    locations in the environment
  • Topological map Toto (Mataric 90)

70
Toto's Behaviors
  • Toto the robot
  • Ring of 12 sonars, low-resolution compass
  • Lowest level
  • move around safely, without collisions
  • Next level
  • following boundaries, a behavior that keeps the
    robot near walls and other objects

71
Landmark Detection
  • Keep track of what was sensed and how it was
    moving
  • meandering → cluttered area
  • constant compass direction, go straight → left,
    right walls
  • moving straight, both walls → corridor
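
A sketch of these heuristics as a simple classifier (the boolean inputs are assumed to be derived from sonar and compass readings):

    def classify_landmark(moving_straight, compass_steady, left_wall, right_wall):
        """Map what was sensed while moving into a landmark type."""
        if not moving_straight:
            return "cluttered-area"            # meandering
        if compass_steady and left_wall and right_wall:
            return "corridor"
        if compass_steady and left_wall:
            return "left-wall"
        if compass_steady and right_wall:
            return "right-wall"
        return "unknown"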