1
CS451/CS551/EE565 ARTIFICIAL INTELLIGENCE
  • Agent Types
  • 8-29-2008
  • Prof. Janice T. Searleman
  • jets@clarkson.edu, jetsza

2
Recap: Rational Agents
  • What is rational at a given time depends on four things:
  • Performance Measure
  • Prior environment knowledge
  • Actions
  • Percept sequence to date (sensors).
  • Defn: A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge (formalized below).
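A hedged formalization of this definition (the notation is mine, not from the slides): writing the percept sequence to date as e_{1:t}, the prior knowledge as K, and the performance measure as U, the rational action is

    a^{*} = \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}\big[\, U \mid e_{1:t},\, K,\, a \,\big]

i.e., the action with the highest expected performance given everything the agent has perceived and knows so far.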

3
Recap: Agent functions
  • An agent is completely specified by the agent
    function mapping percept sequences to actions
  • One agent function (or a small equivalence class)
    is rational
  • Aim: find a way to implement the rational agent function concisely

4
Recap: Structure of Intelligent Agents
  • Agent = architecture + program
  • Agent program: the implementation of f : P → A, the agent's perception-action mapping function (a runnable Python sketch follows)
      function Skeleton-Agent(percept) returns action
        memory ← UpdateMemory(memory, percept)
        action ← ChooseBestAction(memory)
        memory ← UpdateMemory(memory, action)
        return action
  • Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, bot, etc.)
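A minimal runnable sketch of this skeleton in Python; the memory representation and the placeholder choose_best_action are my own assumptions, not part of the slides.

    # Minimal skeleton agent: remembers every percept and action it has seen.
    class SkeletonAgent:
        def __init__(self):
            self.memory = []                            # percept/action history (assumed representation)

        def __call__(self, percept):
            self.memory.append(("percept", percept))    # UpdateMemory(memory, percept)
            action = self.choose_best_action()          # ChooseBestAction(memory)
            self.memory.append(("action", action))      # UpdateMemory(memory, action)
            return action

        def choose_best_action(self):
            # Placeholder policy: a real agent program would reason over self.memory here.
            return "NoOp"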

5
Recap: Environments
  • To design a rational agent, we must specify its
    task environment.
  • PEAS: a way to describe the environment
  • Performance
  • Environment
  • Actuators
  • Sensors

6
Recap: Environment types
The environment types largely determine the agent
design.
7
Agent types
  • Four basic kinds of agent programs will be discussed:
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • All these can be turned into learning agents.

8
Agent types
  • function TABLE-DRIVEN-AGENT(percept) returns an action
      static: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequence
      append percept to the end of percepts
      action ← LOOKUP(percepts, table)
      return action

This approach is doomed to failure: the table needs an entry for every possible percept sequence, so it grows exponentially with the agent's lifetime. A runnable sketch of the idea follows.
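A minimal Python sketch of the table-driven idea; the percept encoding and the tiny example table are assumptions made here for illustration.

    # Table-driven agent: looks up the entire percept sequence seen so far.
    class TableDrivenAgent:
        def __init__(self, table):
            self.table = table            # maps percept-sequence tuples to actions
            self.percepts = []            # sequence, initially empty

        def __call__(self, percept):
            self.percepts.append(percept)
            return self.table.get(tuple(self.percepts), "NoOp")   # LOOKUP(percepts, table)

    # Illustrative table for the vacuum world (length-1 sequences only; a complete
    # table would need an entry for every possible sequence, hence the blow-up).
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
    }
    agent = TableDrivenAgent(table)
    print(agent(("A", "Dirty")))          # -> Suck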
9
Agent types: simple reflex
  • Select action on the basis of only the current
    percept.
  • e.g. the vacuum-agent
  • Large reduction in possible percept/action situations (next page).
  • Implemented through condition-action rules
  • If dirty then suck

10
Vacuum-cleaner world
  • Percepts: location and contents, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, NoOp

11
The vacuum-cleaner world
  • function REFLEX-VACUUM-AGENT(location, status) returns an action
      if status = Dirty then return Suck
      else if location = A then return Right
      else if location = B then return Left
  • Reduction from 4^T to 4 entries (a Python version follows)
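A direct Python transcription of this reflex agent (a sketch; the string constants simply mirror the pseudocode):

    # Simple reflex agent for the two-square vacuum world.
    def reflex_vacuum_agent(location, status):
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent("A", "Dirty"))   # -> Suck
    print(reflex_vacuum_agent("B", "Clean"))   # -> Left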

12
Agent types: simple reflex
  • function SIMPLE-REFLEX-AGENT(percept) returns an action
      static: rules, a set of condition-action rules
      state ← INTERPRET-INPUT(percept)
      rule ← RULE-MATCH(state, rules)
      action ← RULE-ACTION[rule]
      return action
  • Will only work if the environment is fully observable; otherwise infinite loops may occur. (Sketched in Python below.)
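A minimal Python sketch of this structure; representing the rules as (condition, action) pairs is my assumption, not something the slides specify.

    # Simple reflex agent driven by condition-action rules.
    def make_simple_reflex_agent(rules, interpret_input):
        def agent(percept):
            state = interpret_input(percept)        # INTERPRET-INPUT
            for condition, action in rules:         # RULE-MATCH
                if condition(state):
                    return action                   # RULE-ACTION[rule]
            return "NoOp"
        return agent

    # Example rules for the vacuum world, where the percept already is the state.
    rules = [
        (lambda s: s[1] == "Dirty", "Suck"),
        (lambda s: s[0] == "A", "Right"),
        (lambda s: s[0] == "B", "Left"),
    ]
    agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
    print(agent(("A", "Clean")))                    # -> Right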

13
Agent types: reflex and state
  • To tackle partially observable environments:
  • Maintain internal state
  • Over time, update the state using world knowledge:
  • How does the world change?
  • How do actions affect the world?
  • ⇒ Model of the World

14
Agent types: reflex and state
  • function REFLEX-AGENT-WITH-STATE(percept) returns an action
      static: rules, a set of condition-action rules
              state, a description of the current world state
              action, the most recent action
      state ← UPDATE-STATE(state, action, percept)
      rule ← RULE-MATCH(state, rules)
      action ← RULE-ACTION[rule]
      return action
    (A Python version of this program follows.)
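A Python sketch of the model-based (state-keeping) version; the update_state world model is assumed here, not given in the slides.

    # Model-based reflex agent: keeps an internal state maintained by a world model.
    class ReflexAgentWithState:
        def __init__(self, rules, update_state):
            self.rules = rules                 # condition-action rules: (condition, action) pairs
            self.update_state = update_state   # world model: (state, action, percept) -> new state
            self.state = None
            self.action = None                 # the most recent action

        def __call__(self, percept):
            self.state = self.update_state(self.state, self.action, percept)  # UPDATE-STATE
            for condition, action in self.rules:                              # RULE-MATCH
                if condition(self.state):
                    self.action = action                                      # RULE-ACTION[rule]
                    return action
            self.action = "NoOp"
            return self.action

With a suitable update_state, such an agent can, for example, remember which squares it has already cleaned even when it cannot currently perceive them.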

15
Agent types: goal-based
  • The agent needs a goal to know which situations
    are desirable.
  • Things become difficult when long sequences of
    actions are required to find the goal.
  • Typically investigated in search and planning
    research.
  • Major difference: the future is taken into account
  • More flexible, since knowledge is represented explicitly and can be manipulated.

16
Agent types: utility-based
  • Certain goals can be reached in different ways.
  • Some are better, have a higher utility.
  • Utility function: maps a (sequence of) state(s) onto a real number.
  • Improves on goals:
  • Selecting between conflicting goals
  • Selecting appropriately between several goals based on likelihood of success. (See the sketch below.)
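A tiny sketch of utility-based action selection in Python; the transition model and utility values are invented purely for illustration.

    # Utility-based selection: pick the action whose predicted outcome has the highest utility.
    def utility_based_choice(state, actions, predict, utility):
        return max(actions, key=lambda a: utility(predict(state, a)))

    # Illustrative example: the utility rewards reaching position 3 with fuel to spare.
    def predict(state, action):                   # simple deterministic world model (assumed)
        pos, fuel = state
        return (pos + action, fuel - abs(action))

    def utility(state):
        pos, fuel = state
        return (10 if pos == 3 else 0) + fuel

    print(utility_based_choice((0, 5), actions=[1, 2, 3], predict=predict, utility=utility))   # -> 3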

17
Performance Measure vs. Utility Function
  • performance measure
  • - used by outside observer to evaluate success
  • - function from histories to a real number
  • utility function
  • - used by the agent itself to evaluate how
    desirable states or histories are
  • - function from state(s) to a real number

18
Agent types: learning
  • All previous agent-programs describe methods for
    selecting actions.
  • Yet this does not explain the origin of these programs.
  • Learning mechanisms can be used to perform this
    task.
  • Teach them instead of instructing them.
  • Advantage is the robustness of the program toward
    initially unknown environments.

19
Agent types: learning
  • Learning element: introduces improvements in the performance element.
  • Critic: provides feedback on the agent's performance based on a fixed performance standard.
  • Performance element: selects actions based on percepts.
  • Corresponds to the previous agent programs
  • Problem generator: suggests actions that will lead to new and informative experiences.
  • Exploration vs. exploitation (a structural sketch follows)
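A structural sketch of these four components in Python; the interfaces and the way they are wired together here are my assumptions, intended only to show how the pieces fit.

    # Learning agent skeleton: the performance element acts, the critic scores,
    # the learning element improves, the problem generator proposes exploration.
    class LearningAgent:
        def __init__(self, performance_element, critic, learning_element, problem_generator):
            self.performance_element = performance_element   # percept -> action (one of the agent programs above)
            self.critic = critic                             # percept -> feedback vs. a fixed standard
            self.learning_element = learning_element         # (performance element, feedback) -> improvements
            self.problem_generator = problem_generator       # () -> exploratory action or None

        def __call__(self, percept):
            feedback = self.critic(percept)
            self.learning_element(self.performance_element, feedback)
            exploratory = self.problem_generator()           # exploration vs. exploitation
            return exploratory if exploratory is not None else self.performance_element(percept)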

20
Behavior and performance of IAs
  • Perception (sequence) to Action Mapping
  • f : P → A
  • Ideal mapping: specifies which actions an agent ought to take at any point in time
  • Description: look-up table, closed form, ...

21
A driving example: Beobots
  • Goal: build robots that can operate in unconstrained environments and that can solve a wide variety of tasks.

Beowulf + Robot = Beobot, http://iLab.usc.edu/beobots/
22
Beowulf robot Beobot
23
Look-up table
  • Compare to a closed form:
  • Output (degree of rotation) = F(distance)
  • Example: F(d) = 10/d (both encodings are compared in the sketch below)
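A small Python sketch contrasting the two encodings, using the slide's F(d) = 10/d; the table granularity (distances 1..10) is an assumption.

    # Closed form: rotation as a function of obstacle distance.
    def rotation_closed_form(d):
        return 10 / d                                     # F(d) = 10/d

    # Look-up table: the same mapping, precomputed at fixed distances.
    rotation_table = {d: 10 / d for d in range(1, 11)}    # distances 1..10

    print(rotation_closed_form(4))                        # -> 2.5
    print(rotation_table[4])                              # -> 2.5 (same answer, but only at tabulated points)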

24
Using a look-up table to encode f : P → A
  • Example: Collision Avoidance
  • Sensors: 3 proximity sensors
  • Effectors: Steering Wheel, Brakes
  • How to generate?
  • How large?
  • How to select action?

25
Using a look-up table to encode f : P → A
  • Example: Collision Avoidance
  • Sensors: 3 proximity sensors
  • Effectors: Steering Wheel, Brakes
  • How to generate? For each p ∈ Pl × Pm × Pr, generate an appropriate action a ∈ S × B.
  • How large? Size of table = possible percepts × possible actions = |Pl| × |Pm| × |Pr| × |S| × |B|. E.g., P = {close, medium, far}^3 and A = {left, straight, right} × {on, off}, so size of table = 27 × 3 × 2 = 162. (Reproduced in the sketch below.)
  • How to select an action? Search.
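A Python sketch that enumerates exactly this space and reproduces the count; the placeholder choose_action policy is an assumption for illustration.

    from itertools import product

    # Percept space: three proximity sensors (left, middle, right), each reporting close/medium/far.
    P = list(product(["close", "medium", "far"], repeat=3))           # |P| = 27
    # Action space: steering x brake.
    A = list(product(["left", "straight", "right"], ["on", "off"]))   # |A| = 3 x 2 = 6

    print(len(P), len(A), len(P) * len(A))    # -> 27 6 162  (the slide's 27 x 3 x 2)

    # One way to generate the table: map every percept to an action.
    def choose_action(p):
        # Placeholder policy (assumed, not from the slides): brake and steer away when close.
        if "close" in p:
            return ("left" if p[2] == "close" else "right", "on")
        return ("straight", "off")

    table = {p: choose_action(p) for p in P}
    print(table[("far", "far", "close")])     # -> ('left', 'on')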

26
Behavior and performance of IAs
  • Perception (sequence) to Action Mapping
  • f : P → A
  • Ideal mapping: specifies which actions an agent ought to take at any point in time
  • Description: look-up table, closed form, ...

27
Interacting Agents
  • Collision Avoidance Agent (CAA)
  • Goals: Avoid running into obstacles
  • Percepts: ?
  • Sensors: ?
  • Effectors: ?
  • Actions: ?
  • Environment: Freeway
  • Lane Keeping Agent (LKA)
  • Goals: Stay in current lane
  • Percepts: ?
  • Sensors: ?
  • Effectors: ?
  • Actions: ?
  • Environment: Freeway

28
Interacting Agents
  • Collision Avoidance Agent (CAA)
  • Goals: Avoid running into obstacles
  • Percepts: Obstacle distance, velocity, trajectory
  • Sensors: Vision, proximity sensing
  • Effectors: Steering Wheel, Accelerator, Brakes, Horn, Headlights
  • Actions: Steer, speed up, brake, blow horn, signal (headlights)
  • Environment: Freeway
  • Lane Keeping Agent (LKA)
  • Goals: Stay in current lane
  • Percepts: Lane center, lane boundaries
  • Sensors: Vision
  • Effectors: Steering Wheel, Accelerator, Brakes
  • Actions: Steer, speed up, brake
  • Environment: Freeway

29
Conflict Resolution by Action Selection Agents
  • Override: CAA overrides LKA
  • Arbitrate: if obstacle is close then CAA, else LKA (sketched below)
  • Compromise: choose an action that satisfies both agents
  • Any combination of the above
  • Challenge: doing the right thing
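A minimal sketch of the "arbitrate" strategy in Python; the agent stubs, the percept field, and the closeness threshold are assumptions for illustration.

    # Arbitration between two agents: the collision-avoidance agent (CAA) wins
    # whenever an obstacle is close; otherwise the lane-keeping agent (LKA) acts.
    def arbitrate(percept, caa, lka, close_threshold=5.0):
        if percept["obstacle_distance"] < close_threshold:    # "obstacle is close"
            return caa(percept)
        return lka(percept)

    # Stub agents, just for demonstration.
    caa = lambda p: "brake"
    lka = lambda p: "steer toward lane center"

    print(arbitrate({"obstacle_distance": 2.0}, caa, lka))    # -> brake
    print(arbitrate({"obstacle_distance": 30.0}, caa, lka))   # -> steer toward lane center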

30
Summary
  • Intelligent Agents
  • Anything that can be viewed as perceiving its
    environment through sensors and acting upon that
    environment through its effectors to maximize
    progress towards its goals.
  • PAGE (Percepts, Actions, Goals, Environment)
  • Described as a Perception (sequence) to Action Mapping: f : P → A
  • Using a look-up table, closed form, etc.
  • Agent Types: reflex, state, goal-based, utility-based
  • Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date