1
Introduction to AI and Intelligent Agents
  • Foundations of Artificial Intelligence

2
Some Definitions of AI
  • Building systems that think like humans
  • "The exciting new effort to make computers think
    ... machines with minds, in the full and literal
    sense" -- Haugeland, 1985
  • "The automation of activities that we associate
    with human thinking, such as decision-making,
    problem solving, learning ..." -- Bellman, 1978
  • Building systems that act like humans
  • "The art of creating machines that perform
    functions that require intelligence when
    performed by people" -- Kurzweil, 1990
  • "The study of how to make computers do things at
    which, at the moment, people are better" -- Rich
    and Knight, 1991

3
Some Definitions of AI
  • Building systems that think rationally
  • "The study of mental faculties through the use of
    computational models" -- Charniak and
    McDermott, 1985
  • "The study of the computations that make it
    possible to perceive, reason, and act" --
    Winston, 1992
  • Building systems that act rationally
  • "A field of study that seeks to explain and
    emulate intelligent behavior in terms of
    computational processes" -- Schalkoff, 1990
  • "The branch of computer science that is concerned
    with the automation of intelligent behavior" --
    Luger and Stubblefield, 1993

4
Thinking and Acting Humanly
  • Thinking humanly: cognitive modeling
  • Develop a precise theory of mind, through
    experimentation and introspection, then write a
    computer program that implements it
  • Example: GPS - General Problem Solver (Newell and
    Simon, 1961)
  • trying to model the human process of problem
    solving in general
  • Acting humanly
  • "If it looks, walks, and quacks like a duck, then
    it is a duck"
  • The Turing Test
  • the interrogator communicates by typing at a
    terminal with TWO other agents. The human can say
    and ask whatever s/he likes, in natural language.
    If the human cannot decide which of the two
    agents is a human and which is a computer, then
    the computer has achieved AI
  • this is an OPERATIONAL definition of
    intelligence, i.e., one that gives an algorithm
    for testing objectively whether the definition is
    satisfied

5
Thinking and Acting Rationally
  • Thinking Rationally
  • Capture "correct" reasoning processes
  • A loose definition of rational thinking: an
    irrefutable reasoning process
  • How do we do this?
  • Develop a formal model of reasoning (formal
    logic) that always leads to the right answer
  • Implement this model
  • How do we know when we've got it right?
  • when we can prove that the results of the
    programmed reasoning are correct
  • soundness and completeness of first-order logic
  • Acting Rationally
  • Act so that desired goals are achieved
  • The rational agent approach (this is what we'll
    focus on in this course)
  • Figure out how to make correct decisions, which
    sometimes means thinking rationally and other
    times means having rational reflexes
  • correct inference versus rationality
  • reasoning versus acting: limited rationality

6
Turing's Goal
  • Alan Turing, "Computing Machinery and
    Intelligence", 1950
  • Can machines think?
  • How could we tell?

"I propose to consider the question, 'Can
machines think?' This should begin with
definitions of the meaning of the terms 'machine'
and 'think'. The definitions might be framed so
as to reflect so far as possible the normal use
of the words, but this attitude is dangerous. If
the meaning of the words 'machine' and 'think'
are to be found by examining how they are
commonly used it is difficult to escape the
conclusion that the meaning and the answer to the
question, 'Can machines think?' is to be sought
in a statistical survey such as a Gallup poll.
But this is absurd. Instead of attempting such a
definition I shall replace the question by
another, which is closely related to it and is
expressed in relatively unambiguous words."
-- Alan Turing, "Computing Machinery and
Intelligence", 1950
7
Turing's Imitation Game
8
Necessary versus Sufficient Conditions
  • Is the ability to pass a Turing Test a necessary
    condition of intelligence?
  • "May not machines carry out something which ought
    to be described as thinking but which is very
    different from what a man does? This objection is
    a very strong one, but at least we can say that
    if, nevertheless, a machine can be constructed to
    play the imitation game satisfactorily, we need
    not be troubled by this objection." -- Turing,
    1950
  • Is the ability to pass a Turing Test a sufficient
    condition of intelligence?

9
The Turing Syllogism
  • If an agent passes a Turing Test, then it
    produces a sensible sequence of verbal responses
    to a sequence of verbal stimuli.
  • If an agent produces a sensible sequence of
    verbal responses to a sequence of verbal stimuli,
    then it is intelligent.
  • Therefore, if an agent passes a Turing Test, then
    it is intelligent.

The Capacity Conception: If an agent has the
capacity to produce a sensible sequence of verbal
responses to a sequence of verbal stimuli,
whatever they may be, then it is intelligent.
10
Memorizing all possible answers? (Bertha's
Machine)
11
Exponential Growth
  • Assume each time the judge asks a question, she
    picks between two questions based on what has
    happened so far
  • Questions Asked    Possible Responses
  •        1                    2
  •        2                    4
  •        3                    8
  •        4                   16
  •        5                   32
  •        6                   64
  •        n                   2^n
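
A quick way to see this blow-up in Python (a throwaway sketch; the final
value of n anticipates the critical test length derived on slide 16):

    # number of distinct question sequences after n binary choices
    for n in [1, 2, 3, 4, 5, 6, 670]:
        print(n, 2 ** n)    # 2^670 has 202 digits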

12
Storage versus Length
(chart: required storage grows exponentially with
the length of the test)
13
Polynomial vs. exponential time complexity
(one algorithm step = 1 microsecond)
(Garey and Johnson, 1979)
14
The Compact Conception
  • If an agent has the capacity to produce a
    sensible sequence of verbal responses to an
    arbitrary sequence of verbal stimuli without
    requiring exponential storage, then it is
    intelligent.

15
Size of the Universe
(chart: horizontal axis labeled Time)
16
Storage Capacity of the Universe
  • Volume ≈ (15 × 10^9 light-years)^3 ≈ (15 × 10^9 ×
    10^16 meters)^3
  • Density: 1 bit per (10^-35 meters)^3
  • Total storage capacity ≈ 10^184 bits < 10^200
    bits < 2^670 bits
  • Critical Turing Test length: 670 bits < 670
    characters < 140 words < 1 minute

The universe is not big enough to hold a Bertha
machine
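
This arithmetic checks out in a few lines of Python (all figures are the
slide's own order-of-magnitude assumptions, not precise cosmology):

    import math

    side_m = 15e9 * 1e16              # 15 billion light-years, ~1e16 m each
    volume_m3 = side_m ** 3           # treat the universe as a cube
    bits = volume_m3 / (1e-35 ** 3)   # 1 bit per Planck-scale cell
    print(f"capacity ~ 10^{math.log10(bits):.0f} bits")    # ~ 10^184
    print(f"2^670    ~ 10^{670 * math.log10(2):.0f} bits") # ~ 10^202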
17
Some Sub-fields of AI
  • Problem solving
  • Lots of early success here
  • Solving puzzles
  • Playing chess
  • Mathematics (integration)
  • Uses techniques like search and problem reduction
  • Logical reasoning
  • Prove things by manipulating a database of facts
  • Theorem proving
  • Automatic Programming
  • Writing computer programs given some sort of
    description
  • Some success with semi-automated methods
  • Some error detection systems
  • Automatic program verification

18
Some Sub-fields of AI
  • Language understanding and semantic modeling
  • One of the earliest problems
  • Some success within limited domains
  • How can we understand written/spoken language?
  • Includes answering questions, translating between
    languages, learning from written text, and speech
    recognition
  • Some aspects of language understanding
  • Associating spoken words with actual words
  • Understanding language forms, such as
    prefixes/suffixes/roots
  • Syntax: how to form grammatically correct
    sentences
  • Semantics: understanding the meaning of words,
    phrases, sentences
  • Context
  • Conversation

19
Some Sub-fields of AI
  • Pattern Recognition
  • Computer-aided identification of
    objects/shapes/sounds
  • Needed for speech and picture understanding
  • Requires signal acquisition, feature extraction,
    ...
  • Data mining and Information Retrieval
  • Expert Systems and Knowledge-based Systems
  • Designers often called knowledge engineers
  • Translate things that an expert knows and rules
    that an expert uses to make decisions into a
    computer program
  • Problems include
  • Knowledge acquisition (or how do we get the
    information)
  • Explanation (of the answers)
  • Knowledge models (what do we do with info)
  • Handling uncertainty

20
Some Sub-fields of AI
  • Planning, Robotics and Vision
  • Planning how to perform actions
  • Manipulating devices
  • Recognizing objects in pictures
  • Machine Learning and Neural Networks
  • Can we remember solutions, rather than
    recalculating them?
  • Can we learn additional facts from present data?
  • Can we model the physical aspects of the brain?
  • Classification and clustering
  • Non-monotonic Reasoning
  • Truth maintenance systems

21
Fundamental Techniques of AI
  • Knowledge Representation
  • Intelligence/intelligent behavior requires
    knowledge, which is
  • Voluminous
  • Hard to characterize
  • Constantly changing
  • How can one capture formally (i.e., computerize)
    everything needed for intelligent behavior? Some
    questions...
  • How do you store all of that data in a useful
    way?
  • Can you get rid of some?
  • How can you store decision making steps?
  • Characteristics of good data representation
    techniques
  • Captures general situation rather than being
    overly specific
  • Understandable by the people who provide it
  • Easily modified to handle errors, changes in
    data, and changes in perception
  • Of general use

22
Fundamental Techniques of AI
  • Search
  • How can we model the problem search space?
  • How can we move between steps in a decision
    making process?
  • How can you find the info you need in a large
    data set?
  • Given a choice of possible decision sequences,
    how do you pick a good one?
  • Heuristic functions
  • Given a goal, how do you figure out what to do
    (planning)?
  • Base-level versus meta-level reasoning
  • How can we reason about what step to take next
    (in reaching the goal)?
  • How much do we reason before acting?

23
AI in Everyday Life?
  • AI techniques are used in many common
    applications
  • Intelligent user interfaces
  • Search Engines
  • Spell/grammar checkers
  • Context sensitive help systems
  • Medical diagnosis systems
  • Regulating/Controlling hardware devices and
    processes (e.g., in automobiles)
  • Voice/image recognition (more generally, pattern
    recognition)
  • Scheduling systems (airlines, hotels,
    manufacturing)
  • Error detection/correction in electronic
    communication
  • Program verification / compiler and programming
    language design
  • Web search engines / Web spiders
  • Web personalization and Recommender systems
    (collaborative/content filtering)
  • Personal agents
  • Customer relationship management
  • Credit card verification in e-commerce / fraud
    detection
  • Data mining and knowledge discovery in databases
  • Computer games

24
AI Spin-Offs
  • Many technologies widely used today were the
    direct or indirect results of research in AI
  • The mouse
  • Time-sharing
  • Graphical user interfaces
  • Object-oriented programming
  • Computer games
  • Hypertext
  • Information Retrieval
  • The World Wide Web
  • Symbolic mathematical systems (e.g., Mathematica,
    Maple, etc.)
  • Very high-level programming languages
  • Web agents
  • Data Mining

25
What is an Intelligent Agent?
  • An agent is anything that can
  • perceive its environment through sensors, and
  • act upon that environment through actuators (or
    effectors)
  • Goal: Design rational agents that do a good job
    of acting in their environments
  • success determined by some objective performance
    measure
26
Example Vacuum Cleaner Agent
  • Percepts: location and contents, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, NoOp

27
What is an Intelligent Agent?
  • Rational Agents
  • An agent should strive to "do the right thing",
    based on what it can perceive and the actions it
    can perform. The right action is the one that
    will cause the agent to be most successful.
  • Performance measure: An objective criterion for
    success of an agent's behavior.
  • E.g., performance measure of a vacuum-cleaner
    agent could be amount of dirt cleaned up, amount
    of time taken, amount of electricity consumed,
    amount of noise generated, etc.
  • Definition of Rational Agent
  • For each possible percept sequence, a rational
    agent should select an action that is expected to
    maximize its performance measure, given the
    evidence provided by the percept sequence and
    whatever built-in knowledge the agent has.
  • Omniscience, learning, autonomy
  • Rationality is distinct from omniscience
    (all-knowing with infinite knowledge)
  • Choose the action that maximizes the expected
    value of the performance measure given the
    percept sequence to date
  • Agents can perform actions in order to modify
    future percepts so as to obtain useful
    information (information gathering, exploration)
  • An agent is autonomous if its behavior is
    determined by its own experience (with ability to
    learn and adapt)

28
What is an Intelligent Agent?
  • Rationality depends on
  • the performance measure that defines degree of
    success
  • the percept sequence - everything the agent has
    perceived so far
  • what the agent knows about its environment
  • the actions that the agent can perform
  • Agent Function (percepts -> actions)
  • Maps from percept histories to actions: f: P* -> A
  • The agent program runs on the physical
    architecture to produce the function f
  • agent = architecture + program
  • Action = Function(Percept Sequence)
  • If (Percept Sequence) then do Action
  • Example: A Simple Agent Function for Vacuum World
    (sketched below)
  • If (current square is dirty) then suck
  • Else move to adjacent square
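
A rough Python rendering of this agent function (the percept encoding, a
(location, status) pair, is a hypothetical choice, not from the slides):

    def vacuum_agent(percept):
        location, status = percept           # e.g., ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(vacuum_agent(("A", "Dirty")))      # Suck
    print(vacuum_agent(("B", "Clean")))      # Left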

29
What is an Intelligent Agent?
  • Limited Rationality
  • Optimal (i.e., best possible) rationality is NOT
    perfect success: limited sensors, actuators, and
    computing power may make this impossible
  • Theory of NP-completeness: some problems are
    likely impossible to solve quickly on ANY
    computer
  • Both natural and artificial intelligence are
    always limited
  • Degree of Rationality: the degree to which the
    agent's internal "thinking" maximizes its
    performance measure, given
  • the available sensors
  • the available actuators
  • the available computing power
  • the available built-in knowledge

30
PEAS Analysis
  • To design a rational agent, we must specify the
    task environment
  • PEAS Analysis
  • Specify Performance Measure, Environment,
    Actuators, Sensors
  • Example: Consider the task of designing an
    automated taxi driver
  • Performance measure: Safe, fast, legal,
    comfortable trip, maximize profits
  • Environment: Roads, other traffic, pedestrians,
    customers
  • Actuators: Steering wheel, accelerator, brake,
    signal, horn
  • Sensors: Cameras, sonar, speedometer, GPS,
    odometer, engine sensors, keyboard
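
One convenient way to write a PEAS description down as data (a
hypothetical helper structure, not from the slides or any library):

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable", "profit"],
        environment=["roads", "traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
    )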

31
PEAS Analysis More Examples
  • Agent: Medical diagnosis system
  • Performance measure: Healthy patient, minimize
    costs, lawsuits
  • Environment: Patient, hospital, staff
  • Actuators: Screen display (questions, tests,
    diagnoses, treatments, referrals)
  • Sensors: Keyboard (entry of symptoms, findings,
    patient's answers)
  • Agent: Part-picking robot
  • Performance measure: Percentage of parts in
    correct bins
  • Environment: Conveyor belt with parts, bins
  • Actuators: Jointed arm and hand
  • Sensors: Camera, joint angle sensors

32
PEAS Analysis More Examples
  • Agent: Internet Shopping Agent
  • Performance measure: ??
  • Environment: ??
  • Actuators: ??
  • Sensors: ??

33
Environment Types
  • Fully observable (vs. partially observable)
  • An agent's sensors give it access to the complete
    state of the environment at each point in time.
  • Deterministic (vs. stochastic)
  • The next state of the environment is completely
    determined by the current state and the action
    executed by the agent. (If the environment is
    deterministic except for the actions of other
    agents, then the environment is strategic).
  • Episodic (vs. sequential)
  • The agent's experience is divided into atomic
    "episodes" (each episode consists of the agent
    perceiving and then performing a single action),
    and the choice of action in each episode depends
    only on the episode itself.

34
Environment Types (cont.)
  • Static (vs. dynamic)
  • The environment is unchanged while an agent is
    deliberating (the environment is semi-dynamic if
    the environment itself does not change with the
    passage of time but the agent's performance score
    does).
  • Discrete (vs. continuous)
  • A limited number of distinct, clearly defined
    percepts and actions.
  • Single agent (vs. multi-agent)
  • An agent operating by itself in an environment.

35
Environment Types (cont.)
The environment type largely determines the agent
design. The real world is (of course) partially
observable, stochastic, sequential, dynamic,
continuous, and multi-agent.
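
One could record such a characterization as plain data, e.g. for the
taxi-driving task from the earlier PEAS example (the dictionary encoding
is just an illustrative convention):

    taxi_environment = {
        "observable":    "partially",
        "deterministic": False,   # stochastic
        "episodic":      False,   # sequential
        "static":        False,   # dynamic
        "discrete":      False,   # continuous
        "single_agent":  False,   # multi-agent
    }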
36
Structure of an Intelligent Agent
  • All agents have the same basic structure
  • accept percepts from environment
  • generate actions
  • A Skeleton Agent (below)
  • Observations
  • the agent may or may not build the percept
    sequence in memory (depends on domain)
  • the performance measure is not part of the agent;
    it is applied externally to judge the success of
    the agent

function Skeleton-Agent(percept) returns action
  static: memory, the agent's memory of the world
  memory <- Update-Memory(memory, percept)
  action <- Choose-Best-Action(memory)
  memory <- Update-Memory(memory, action)
  return action
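
A rough Python rendering of this skeleton (Update-Memory and
Choose-Best-Action are stand-ins to be filled in per domain):

    class SkeletonAgent:
        def __init__(self):
            self.memory = {}                    # the agent's model of the world

        def step(self, percept):
            self.update_memory(percept)         # fold the percept into memory
            action = self.choose_best_action()  # decide using memory alone
            self.update_memory(action)          # remember what was done
            return action

        def update_memory(self, event):
            pass                                # domain-specific

        def choose_best_action(self):
            return "NoOp"                       # domain-specific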
37
Looking Up the Answer?
  • A Template for a Table-Driven Agent
  • Why can't we just look up the answers?
  • The disadvantages of this architecture
  • infeasibility (excessive size)
  • lack of adaptiveness
  • How big would the table have to be?
  • Could the agent ever learn from its mistakes?
  • Where should the table come from in the first
    place?

function Table-Driven-Agent(percept) returns action
  static: percepts, a sequence, initially empty
          table, a table indexed by percept
          sequences, initially fully specified
  append percept to the end of percepts
  action <- LookUp(percepts, table)
  return action
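
The same template in Python (the one-entry table is a hypothetical
fragment; a complete table would need a row per possible percept history):

    class TableDrivenAgent:
        def __init__(self, table):
            self.percepts = []      # percept history so far
            self.table = table      # maps percept-history tuples to actions

        def step(self, percept):
            self.percepts.append(percept)
            return self.table.get(tuple(self.percepts), "NoOp")

    agent = TableDrivenAgent({(("A", "Dirty"),): "Suck"})
    print(agent.step(("A", "Dirty")))   # Suck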
38
Agent Types
  • Simple reflex agents
  • are based on condition-action rules and
    implemented with an appropriate production
    system. They are stateless devices which do not
    have memory of past world states.
  • Reflex Agents with memory (Model-Based)
  • have internal state which is used to keep track
    of past states of the world.
  • Agents with goals
  • are agents which in addition to state information
    have a kind of goal information which describes
    desirable situations. Agents of this kind take
    future events into consideration.
  • Utility-based agents
  • base their decision on classic axiomatic
    utility-theory in order to act rationally.

Note: All of these can be turned into learning
agents
39
A Simple Reflex Agent
  • We can summarize part of the table by formulating
    commonly occurring patterns as condition-action
    rules
  • Example
  • if car-in-front-brakes
  • then initiate braking
  • Agent works by finding a rule whose condition
    matches the current situation
  • rule-based systems
  • But, this only works if the current percept is
    sufficient for making the correct decision

function Simple-Reflex-Agent(percept) returns action
  static: rules, a set of condition-action rules
  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
40
Example: Simple Reflex Vacuum Agent
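
A plausible Python sketch of such a reflex vacuum agent, written as
condition-action rules (the rule encoding is a hypothetical choice):

    RULES = [
        (lambda loc, st: st == "Dirty", "Suck"),
        (lambda loc, st: loc == "A",    "Right"),
        (lambda loc, st: loc == "B",    "Left"),
    ]

    def reflex_vacuum_agent(percept):
        location, status = percept              # Interpret-Input
        for condition, action in RULES:         # Rule-Match
            if condition(location, status):
                return action                   # Rule-Action[rule]
        return "NoOp"

    print(reflex_vacuum_agent(("B", "Clean")))  # Left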
41
Agents that Keep Track of the World
  • Updating internal state requires two kinds of
    encoded knowledge
  • knowledge about how the world changes
    (independent of the agent's actions)
  • knowledge about how the agent's actions affect
    the world
  • But, knowledge of the internal state is not
    always enough
  • how to choose among alternative decision paths
    (e.g., where should the car go at an
    intersection)?
  • Requires knowledge of the goal to be achieved

function Reflex-Agent-With-State(percept) returns action
  static: rules, a set of condition-action rules
          state, a description of the current world state
  state <- Update-State(state, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  state <- Update-State(state, action)
  return action
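
A Python sketch of this state-keeping agent; the internal model (a dirt
map for the two-square vacuum world) is a hypothetical example:

    class ReflexAgentWithState:
        def __init__(self):
            self.state = {"A": "Unknown", "B": "Unknown"}  # believed status

        def step(self, percept):
            location, status = percept
            self.state[location] = status         # Update-State(percept)
            if status == "Dirty":
                action = "Suck"
                self.state[location] = "Clean"    # Update-State(action)
            else:
                action = "Right" if location == "A" else "Left"
            return action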
42
Agents with Explicit Goals
  • Reasoning about actions
  • reflex agents act only on pre-computed knowledge
    (rules)
  • goal-based (planning) agents act by reasoning
    about which actions achieve the goal
  • less efficient, but more adaptive and flexible

43
Agents with Explicit Goals
  • Knowing current state is not always enough.
  • State allows an agent to keep track of unseen
    parts of the world, but the agent must update
    state based on knowledge of changes in the world
    and of effects of own actions.
  • Goal: a description of the desired situation
  • Examples
  • Decision to change lanes depends on a goal to go
    somewhere (and other factors)
  • Decision to put an item in shopping basket
    depends on a shopping list, map of store,
    knowledge of menu
  • Notes
  • Search (Russell Chapters 3-5) and Planning
    (Chapters 11-13) are concerned with finding
    sequences of actions to satisfy a goal.
  • Reflexive agent: concerned with one action at a
    time.
  • Classical Planning: finding a sequence of actions
    that achieves a goal.
  • Contrast with condition-action rules: involves
    consideration of the future, "what will happen if
    I do ..." (a fundamental difference).

44
A Complete Utility-Based Agent
  • Utility Function
  • a mapping of states onto real numbers
  • allows rational decisions in two kinds of
    situations where goals alone are inadequate
  • evaluation of the tradeoffs among conflicting
    goals
  • evaluation of competing goals, none of which can
    be achieved with certainty

45
Utility-Based Agents (Cont.)
  • Preferred world state has higher utility for the
    agent (utility = the quality of being useful)
  • Examples
  • quicker, safer, more reliable ways to get where
    you're going
  • price comparison shopping
  • bidding on items in an auction
  • evaluating bids in an auction
  • Utility function: state -> U(state), a measure of
    "happiness"
  • Search (goal-based) vs. games (utilities).
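
A toy illustration of utility-based choice between conflicting goals,
with made-up numbers and weights:

    def utility(state):
        # trade speed off against safety (hypothetical weights)
        return -state["minutes"] - 150.0 * state["risk"]

    routes = {
        "highway":  {"minutes": 20, "risk": 0.3},
        "backroad": {"minutes": 30, "risk": 0.1},
    }
    best = max(routes, key=lambda r: utility(routes[r]))
    print(best)   # backroad: slower, but the safety weight dominates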

46
Shopping Agent Example
  • Navigating: move around the store; avoid
    obstacles
  • Reflex agent: store map precompiled.
  • Goal-based agent: create an internal map, reason
    explicitly about it, use signs and adapt to
    changes (e.g., specials at the ends of aisles).
  • Gathering: find and put into the cart the
    groceries it wants; needs to induce objects from
    percepts.
  • Reflex agent: wander and grab items that look
    good.
  • Goal-based agent: shopping list.
  • Menu-planning: generate the shopping list; modify
    the list if the store is out of some item.
  • Goal-based agent required: what happens when a
    needed item is not there? Achieve the goal some
    other way, e.g., no milk cartons -> get canned
    milk or powdered milk.
  • Choosing among alternative brands
  • utility-based agent: trade off quality for price.

47
General Architecture for Goal-Based Agents
input: percept
state <- Update-State(state, percept)
goal <- Formulate-Goal(state, perf-measure)
search-space <- Formulate-Problem(state, goal)
plan <- Search(search-space, goal)
while (plan not empty) do
    action <- Recommendation(plan, state)
    plan <- Remainder(plan, state)
    output action
end
  • Simple agents do not have access to their own
    performance measure
  • In this case the designer will "hard wire" a goal
    for the agent, i.e., the designer will choose the
    goal and build it into the agent
  • Similarly, unintelligent agents cannot formulate
    their own problem
  • this formulation must also be built in
  • The while loop above is the "execution phase" of
    this agent's behavior
  • Note that this architecture assumes that the
    execution phase does not require monitoring of
    the environment
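
A Python sketch of this architecture with stub helpers (all names mirror
the pseudocode; the stubs and the canned two-step plan are hypothetical):

    def update_state(state, percept):   return {**state, "last": percept}
    def formulate_goal(state, perf):    return "clean-floor"
    def formulate_problem(state, goal): return (state, goal)
    def search(problem, goal):          return ["Suck", "Right"]   # canned

    def run(percept, state, perf_measure):
        state = update_state(state, percept)
        goal = formulate_goal(state, perf_measure)   # often hard-wired
        plan = search(formulate_problem(state, goal), goal)
        while plan:                  # execution phase: the plan runs
            action = plan.pop(0)     # open-loop, without re-sensing
            print("do", action)

    run(("A", "Dirty"), {}, "amount of dirt cleaned")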

48
Learning Agents
  • Four main components
  • Performance element: the agent function itself
  • Learning element: responsible for making
    improvements by observing performance
  • Critic: gives feedback to the learning element by
    measuring the agent's performance
  • Problem generator: suggests other possible
    courses of action (exploration)

49
Search and Knowledge Representation
  • Goal-based and utility-based agents require
    representation of
  • states within the environment
  • actions and effects (effect of an action is
    transition from the current state to another
    state)
  • goals
  • utilities
  • Problems can often be formulated as search
    problems
  • to satisfy a goal, an agent must find a sequence
    of actions (a path in the state-space graph) from
    the starting state to a goal state
  • To do this efficiently, agents must be able to
    reason with their knowledge about the world and
    the problem domain
  • which path to follow (which action to choose)
    next
  • how to determine if a goal state is reached, or
    how to decide whether a satisfactory state has
    been reached
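
A minimal breadth-first search over a state-space graph, the kind of
procedure these agents rely on (the four-node graph here is made up):

    from collections import deque

    def bfs(start, goal, successors):
        frontier = deque([[start]])            # queue of partial paths
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path                    # states from start to goal
            for nxt in successors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None                            # goal unreachable

    graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
    print(bfs("S", "G", lambda s: graph[s]))   # ['S', 'A', 'G']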

50
Intelligent Agent Summary
  • An agent perceives and acts in an environment. It
    has an architecture and is implemented by a
    program.
  • An ideal agent always chooses the action which
    maximizes its expected performance, given the
    percept sequence received so far.
  • An autonomous agent uses its own experience
    rather than built-in knowledge of the environment
    by the designer.
  • An agent program maps from a percept to an action
    and updates its internal state.
  • Reflex agents respond immediately to percepts.
  • Goal-based agents act in order to achieve their
    goal(s).
  • Utility-based agents maximize their own utility
    function.

51
Exercise
  • Do Exercise 1.3, on Page 30
  • You can find out about the Loebner Prize at
  • http://www.loebner.net/Prizef/loebner-prize.html
  • Also (for discussion) look at Exercise 1.2 and
    read the material on the Turing Test at
  • http://plato.stanford.edu/entries/turing-test/
  • Read the article by Jennings and Wooldridge
    ("Applications of Intelligent Agents"). Compare
    and contrast the definitions of agents and
    intelligent agents as given by Russell and Norvig
    (in the textbook) and in the article.

52
Exercise
  • News Filtering Internet Agent
  • uses a static user profile (e.g., a set of
    keywords specified by the user)
  • on a regular basis, searches a specified news
    site (e.g., Reuters or AP) for news stories that
    match the user profile
  • can search through the site by following links
    from page to page
  • presents a set of links to the matching stories
    that have not been read before (matching based on
    the number of words from the profile occurring in
    the news story)
  • (1) Give a detailed PEAS description for the news
    filtering agent
  • (2) Characterize the environment type (as being
    observable, deterministic, episodic, static,
    etc.)