1
INSIGHT, COMPLEXITY, AND VALIDITY IN HUMAN
BEHAVIOR REPRESENTATION
11 Sep 06
  • Kevin A. Gluck, PhD
  • Senior Research Psychologist
  • Human Effectiveness Directorate
  • Air Force Research Laboratory

2
Acknowledgments
  • Thanks to Christy Caballero (research assistant)
    and Deborah Russell (graduate student intern) for
    their contributions to portions of this
    presentation
  • I will borrow lightly from Bob Foster's BRIMS
    2003 Keynote Address
  • I will borrow heavily from the theorizing of Herb
    Simon

3
A Call for Insight
Achieving an "ahh..ha" in behavior
representations: why DoD is interested
Slide adapted from Dr. Bob Foster's keynote
address at the 2003 Conference on Behavior
Representation in Modeling and Simulation (BRIMS)
in Scottsdale, Arizona.
4
Alternative Interpretations: The Locus of the
Insight
  • In the humans
  • In the models

5
Overview
  • Insight
    • Insight as problem solving
  • Complexity
    • Provides both challenge and guidance
  • Validation
    • Crucial component of the process
    • Propose a change to standard practice

6
Sample Insight Problems
  • Verbal
  • One morning a woman's earring fell into a cup
    that was filled with coffee, yet her earring did
    not get wet. How could this be?

From http://www.indiana.edu/bobweb/Handout/insightproblems.doc
7
Insight as Problem Solving
  • Newell & Simon (1972), Human Problem Solving
  • All goal-directed activity is problem solving
  • Solving problems involves traversing a problem
    space

[Diagram: a problem space, in which operators link the
initial state through intermediate states to the goal state]
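
To make the traversal concrete, here is a minimal Python sketch
(the function and state names are illustrative, not anything from
the presentation): operators generate successor states, and a
breadth-first walk carries a path from the initial state through
intermediate states to a goal state.

from collections import deque

def traverse_problem_space(initial_state, is_goal, operators):
    # Breadth-first traversal: applying operators generates successor
    # states, building a path from the initial state through
    # intermediate states to a goal state.
    frontier = deque([(initial_state, [initial_state])])
    visited = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for apply_operator in operators:
            successor = apply_operator(state)
            if successor is not None and successor not in visited:
                visited.add(successor)
                frontier.append((successor, path + [successor]))
    return None  # the goal is unreachable in this problem space

# Toy usage: reach 10 from 0 with "add 3" and "add 5" operators.
print(traverse_problem_space(0, lambda s: s == 10,
                             [lambda s: s + 3, lambda s: s + 5]))
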
8
A Process Theory for Insight
The Mutilated Checkerboard Problem (Kaplan &
Simon, 1990)
Cover the remaining 62 squares with 31 dominoes.
Each domino covers two adjacent squares.
Or: prove logically why such a covering is
impossible.
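
The logical proof turns on an invariant, anticipating the heuristic
on the next slide: every domino covers one light and one dark square,
yet the two removed squares share a color. A minimal Python sketch,
assuming the standard version of the problem in which two diagonally
opposite corner squares are removed:

# Color a square (row, col) by parity: 0 = light, 1 = dark.
def color(row, col):
    return (row + col) % 2

# Assumed standard mutilation: two diagonally opposite corners removed.
removed = {(0, 0), (7, 7)}   # note that both corners have the same color
squares = [(r, c) for r in range(8) for c in range(8)
           if (r, c) not in removed]

light = sum(1 for r, c in squares if color(r, c) == 0)
dark = sum(1 for r, c in squares if color(r, c) == 1)

# Each domino must cover one light and one dark square, so 31 dominoes
# need 31 of each color; the 30-light vs. 32-dark imbalance rules the
# covering out.
print(light, dark)   # 30 32
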
9
Conclusions from Insight Research
  • The key to achieving insight is to arrive at an
    appropriate representation (have to be in an
    appropriate problem space, with all necessary
    operators).
  • Solvers typically do not have a sense of progress
    toward the goal until they arrive at the solution.
  • An important heuristic in insight problems:
    • Attend to the features of the problem that remain
      invariant

10
The Connection to Complexity
  • We will achieve HBRs capable of having insight
    when they become an appropriate (necessary and
    sufficient) replication of the human cognitive
    system
  • This goal (computational replicates of humans)
    involves nested search across multiple, infinite,
    interacting problem spaces
    • The space of all possible HBRs
    • The space of all possible insight contexts
  • The use of heuristics constrains search and
    allows progress in very large (or infinite)
    problem spaces (see the sketch after this list)
  • The complexity of this goal is crippling, unless
    we find a way to draw heuristics out of the
    complexity
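
To picture how heuristics constrain search, here is a greedy
best-first variant of the earlier traversal sketch; the heuristic
orders the frontier so that only a small fraction of an effectively
infinite space is ever expanded. The names and the toy heuristic are
illustrative assumptions, not anything prescribed in the presentation.

import heapq

def heuristic_search(initial_state, is_goal, operators, heuristic):
    # Greedy best-first search: the heuristic orders the frontier, so
    # only a small fraction of a very large (or infinite) problem space
    # is expanded.
    frontier = [(heuristic(initial_state), initial_state)]
    visited = {initial_state}
    expanded = 0
    while frontier:
        _, state = heapq.heappop(frontier)
        expanded += 1
        if is_goal(state):
            return state, expanded
        for apply_operator in operators:
            successor = apply_operator(state)
            if successor is not None and successor not in visited:
                visited.add(successor)
                heapq.heappush(frontier, (heuristic(successor), successor))
    return None, expanded

# Toy usage: the same "add 3" / "add 5" space, now with a distant goal.
goal = 10_000
state, expanded = heuristic_search(
    0, lambda s: s == goal,
    [lambda s: s + 3, lambda s: s + 5],
    heuristic=lambda s: abs(goal - s))
print(state, expanded)   # reaches the goal after roughly 2,000 expansions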

11
Complexity
  • The Architecture of Complexity: Hierarchic
    Systems
  • Chapter 8 in Herb Simon's book The Sciences of
    the Artificial

4 Key Points
  • Complex systems are nearly always hierarchic
  • Hierarchy facilitates evolution
  • Near decomposability
  • Nearly decomposable, hierarchic structures
    facilitate comprehension
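
Near decomposability can be pictured with a small interaction matrix:
strong interactions within each subsystem, weak but nonzero
interactions between subsystems. The numbers below are an invented
illustration, not taken from Simon's chapter.

# Interaction strengths among six components grouped into two
# subsystems (A1..A3 and B1..B3). Within-subsystem links are strong;
# cross-subsystem links are weak but nonzero: the signature of a
# nearly decomposable system.
components = ["A1", "A2", "A3", "B1", "B2", "B3"]
interaction = [
    # A1    A2    A3    B1    B2    B3
    [1.00, 0.80, 0.70, 0.01, 0.00, 0.01],  # A1
    [0.80, 1.00, 0.90, 0.00, 0.01, 0.00],  # A2
    [0.70, 0.90, 1.00, 0.01, 0.00, 0.00],  # A3
    [0.01, 0.00, 0.01, 1.00, 0.60, 0.70],  # B1
    [0.00, 0.01, 0.00, 0.60, 1.00, 0.80],  # B2
    [0.01, 0.00, 0.00, 0.70, 0.80, 1.00],  # B3
]
# In the short run each subsystem behaves roughly independently of the
# other; in the long run the weak cross-links shape aggregate behavior.
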
12
A Vector to Validity
  • HBRs are complex systems that are evolving
  • Modular, hierarchic designs will increase the
    pace of that evolution
  • Evolution is a process that evaluates the
    goodness of a system for some purpose

Validity: we must have the evaluation function of
validation in order to make decisions about which
sub-features of the HBRs should remain stable and
which should change.
13
A Call for Validation
WHAT WE NEED
  • Affordability
  • Adaptability
  • Robustness
  • Scalability
  • Composability
  • Interoperability -- Jointness
  • Common research testbeds
  • Validity---validity---validity!!

Slide adapted from Dr. Bob Foster's keynote
address at the 2003 Conference on Behavior
Representation in Modeling and Simulation (BRIMS)
in Scottsdale, Arizona.
14
DoD Position on Validation
  • DoD Instruction 5000.61 (May 13, 2003)
  • 4.5 The DoD Components shall establish VV&A
    policies and procedures for models and
    simulations they develop, use, or manage.
  • 6.1. Verification and validation (V&V) shall be:
  • 6.1.1. Incorporated into the development and
    life-cycle management processes of all M&S.

15
DMSO Position on Validation
  • From DMSO's VV&A Recommended Practices Guide
  • http://vva.dmso.mil/Default.htm
  • Why is VV&A performed?
  • To determine whether a model or simulation or
    federation should be used in a given situation,
    its credibility should be established by
    evaluating fitness for the intended use.
  • The decision to use the simulation will depend
    on the simulation's capabilities and correctness,
    the accuracy of its results, and its usability in
    the specified application.

16
The Situation
  • Despite appropriate policy guidance, there is
    little or no standard practice (or expectation,
    apparently!) in our research community for
    validating HBRs
  • Often there is no validity evaluation at all
  • When there is, it is often reported in vague,
    qualitative terms
    • "Results were very satisfying" - an actual BRIMS
      paper quote, with no supporting quantitative data

17
The Evidence: Data from BRIMS 2005
What sort of papers were presented/published at
BRIMS 2005?
Papers:
  • Tool/Capability: 13
  • Cognitive Model: 12
  • Position/Review: 9
  • Human Beh. Repr.: 6
Without the position/review papers, that's 31
papers in the sample. Of those 31, how many
report any form of validity evaluation at all?
12
18
More Data from BRIMS 2005
When validity is evaluated, how is it evaluated?
Type:
  • Application Validity: 6
  • Construct Validity: 2
  • SME Face Validity: 3
  • Developer Face Validity: 1
How many of these involved a quantitative
evaluation of validity?
5
19
A Little Closer to Home: Data from Spring 2006 SIW
  • 7 Papers related to HBR
  • 029 Aha & Christman: testbed for analysis of SAF
    planning behaviors
  • 045 Goerger et al.: agents for traffic flow
    analysis
  • 055 McKenzie & Piland: crowd federate GUI
  • 073 Nanda & Weeks: path prediction of dynamic
    entities
  • 076 Tegner, Huang, & Pakucs: visual rep & comm
    standards for HBRs
  • 085 Weisel et al.: crowd modeling
  • 127 Rowe et al.: improving human interfaces
  • (but one of these is a position paper)

Of the 6 papers on HBR-related models, tools, or
methods, how many report any form of validity
evaluation at all?
2
20
Why This is a Problem
  • We need objective, quantitative evaluations of
    validity in order to establish credibility, track
    S&T progress, and increase the probability that
    our HBRs and associated tools are evolving in a
    direction that actually increases their utility.
  • How can we take ourselves seriously if we're
    investing the effort to establish our credibility
    (through quantitative evaluation of validity) in
    less than one-third of our published research
    efforts?
  • Why should anyone take us seriously?

21
You Want a Standard?
I propose a new standard practice:
  • ALL government contracts and grants for
    developing cognitive models, human behavior
    representations, or new CM/HBR-related tools and
    capabilities must include an explicitly defined
    and budgeted plan for objective, quantitative
    validation.
  • Validation must be a technical deliverable, even
    in the early stages of the acquisition process

22
Validation Must Be Done
  • It is not enough to simply implement code and
    describe it.
  • Step 1 in any new HBR R&D effort should be to
    identify or collect the empirical
    data/phenomena/functionality we are trying to
    replicate, understand, or achieve
    • Provides the foundation for assessing accuracy
      and correctness, thereby establishing credibility
      (a minimal fit-statistic sketch follows this list)
  • Excuses such as "it's expensive / difficult /
    time-consuming" are cop-outs that reduce
    credibility, slow progress, and are not
    sufficient justification for avoiding validation
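
As one concrete form such a quantitative evaluation might take (a
minimal sketch with invented numbers, not data from any cited paper),
model predictions can be compared against the human data gathered in
Step 1 using standard fit statistics such as Pearson r and
root-mean-squared deviation:

import math

def goodness_of_fit(human, model):
    # Quantitative fit between observed human data and HBR predictions:
    # Pearson r (does the model reproduce the trend?) and RMSD (how far
    # off are the predictions, in the units of the data?).
    n = len(human)
    mean_h, mean_m = sum(human) / n, sum(model) / n
    cov = sum((h - mean_h) * (m - mean_m) for h, m in zip(human, model))
    sd_h = math.sqrt(sum((h - mean_h) ** 2 for h in human))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in model))
    r = cov / (sd_h * sd_m)
    rmsd = math.sqrt(sum((h - m) ** 2 for h, m in zip(human, model)) / n)
    return r, rmsd

# Invented example: mean response times (seconds) in five task conditions.
human_rt = [1.21, 1.48, 1.95, 2.40, 3.10]
model_rt = [1.30, 1.42, 2.05, 2.55, 2.90]
r, rmsd = goodness_of_fit(human_rt, model_rt)
print(f"r = {r:.3f}, RMSD = {rmsd:.3f} s")   # numbers a reviewer can evaluate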

23
Implications
  • If we want HBR systems that are understandable
    and that evolve as quickly as possible in
    demonstrably useful directions, we should:
    • Design them hierarchically
    • Validate the sub-components at each level, so we
      know where we have stable utility and where we
      need adaptation/mutation
  • The probability that we will achieve the goal of
    HBRs that have a general capacity for achieving
    insights of their own, any time in the next 50
    years, depends on our adopting appropriate design
    and search heuristics such as these.

24
Thank You