Title: UAST and Evolving Systems of Systems in the Age of the Black Swan
Author: Rick Dove. Created 12/2/2008. Document presentation format: Letter Paper (8.5 x 11 in). Slides: 34.
Learn more at: http://www.parshift.com

Transcript and Presenter's Notes

Title: UAST and Evolving Systems of Systems in the Age of the Black Swan


1
UAST and Evolving Systems of Systems in the Age of the Black Swan
Part 2: On Detecting Aberrant Behavior
www.parshift.com/Files/PsiDocs/Pap090901IteaJ-PathsForPeerBehaviorMonitoringAmongUAS.pdf
www.parshift.com/Files/PsiDocs/Pap091201IteaJ-MethodsForPeerBehaviorMonitoringAmongUas.pdf
There is no difficulty, in principle, in developing synthetic organisms as complex and as intelligent as we please. But we must notice two fundamental qualifications: first, their intelligence will be an adaptation to, and a specialization towards, their particular environment, with no implication of validity for any other environment such as ours; and secondly, their intelligence will be directed towards keeping their own essential variables within limits. They will be fundamentally selfish.
Principles of the Self-Organizing System, W. Ross Ashby, 1962
Based on a presentation at the UAST Tutorial Session, ITEA LVC Conference, 12 Jan 2009, El Paso, TX. UAST = Unmanned Autonomous Systems Test. (Also: Art Brooks of L3 did a Master's paper here.)
2
Domain Independent Principles Can Inform UAST ConOps
  • Systems in Context
(Diagram: Class 1 testing system(s), the UAST, and the UASoS systems under test, nested within a Class 2 (federated?) testing enterprise and Class 2 systems under test, all embedded in an environment (an ecology): politics, technology, government procedures, military procedures, military reality, competitors, enemies.)

3
Problem and Observation
  • Self Organizing Systems of Systems are too
    complex to test beyond minimal functionality
    and apparent rationality.
  • Autonomous self organizing entities have a
    willful mind of their own.
  • Unpredictable emergent behavior will occur in
    unpredictable situations.
  • Emergent behavior is necessary and desirable
    (when appropriate).
  • Inevitable sub-system failure, command failure,
    enemy possession.
  • UAS will work together as flocks, swarms, packs,
    and teams.
  • Even human social systems exhibit unintended
    lethal consequences.
  • --------
  • In biological social systems, members
    monitor/enforce behavior bounds.
  • Could UAS have built-in socially attentive
    monitoring (SAM) on mission?
  • Could UAST employ SAM proxies for monitoring
    antisocial UAS?
  • Challenges
  • 1) Learning the behavior patterns to monitor.
  • 2) Technology for monitoring complex dynamic
    patterns in real time.
  • 3) Decisive counter-consequence action.

4
Survey on Lethality and Autonomous Systems
Responsibility for Lethal Errors by Responsible
Party. The soldier was found to be the most
responsible party, and robots the least.
Lethality and Autonomous Systems: Survey Design and Results, Lilia Moshkina, Ronald C. Arkin, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 30, 2007
5
www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf
  • Applicability of ethical categories is ranked
    from more concrete and specific to more general
    and subjective.
  • Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 29, 2007

6
0) A robot may not harm humanity, or, by
inaction, allow humanity to come to harm (added
later). 1) A robot may not injure a human being
or, through inaction, allow a human being to come
to harm. 2) A robot must obey orders given it by
human beings except where such orders would
conflict with the First Law. 3) A robot must
protect its own existence as long as such
protection does not conflict with the First or
Second Law.
This cover of I, Robot illustrates the story
"Runaround", the first to list all Three Laws of
Robotics (Asimov 1942)
7
Self-Organizing Inevitability
  • Isaac Asimov's three laws of robotics were developed to allow UxVs to coexist with humans, under values held dear by humans (imposed on robots).
  • These were not weapon systems.
  • Asimov's robots existed in a peaceful social environment. Ours are birthing into a community of warfighters, with enemies, cyber warfare, great destructive capabilities, human confusion, and a code of war.
  • Ashby notes that a self-organizing system by definition behaves selfishly, and warns that its behaviors may be at odds with its creators'.
  • So can we afford to build truly self-organizing systems?
  • A foolish question. We will do that regardless of the possible dangers, just as we opened the door to atomic energy, bio hazards, organism creation, nanotechnology, and financial meltdown.
  • Can a cruise missile on a mission be hacked and turned to the enemy's bidding? Perhaps we can say that it hasn't occurred yet. Can a cruise missile get sick or confused, and hit something it shouldn't? That's already happened.
  • The issue is not "has it happened." The issue is "can it happen."
  • We cannot test away bad things from happening, so we had better be vigilant for signs of imminence, and have actionable options when the time has come.

8
Four Selfish (Potential) Guiding Principles (for synthetics)
  • Protection of permission to exist (civilians, public assets)
  • Protection of mission
  • Protection of self
  • Protection of others of like kind
  • A safety mechanism based on principles, for we can never itemize all of the situational patterns and the appropriate response to each

9
(No Transcript)
10
(No Transcript)
11
and here's the Cat's Cradle
12
  • Aberrant behavior arising in a stable social system is detected and opposed
  • Example: A female penguin attempting to steal a replacement egg for the one she lost is prevented from doing so by others.

13
Ganging Up on Aberrant Behavior
T. Monnin, F.L.W. Ratnieks, G.R. Jones, R. Beard, "Pretender punishment induced by chemical signaling in a queenless ant," Nature, Vol. 419, 5 Sep 2002
  • Queenless ponerine ants have no queen caste. All
    females are workers who can potentially mate and
    reproduce. A single gamergate emerges, by
    virtue of alpha rank in a near-linear dominance
    hierarchy of about 3–5 high-ranking workers.
    Usually the beta replaces the gamergate if she
    dies. A high-ranker can enhance her inclusive
    fitness by overthrowing the gamergate, rather
    than waiting for her to die naturally.
  • (a) To end coup behavior, the gamergate (left)
    approaches the pretender, usually from behind or
    from the side, briefly rubs her sting against the
    pretender depositing a chemical signal, then runs
    away, leaving subsequent discipline to others.
  • (b) One to six low-ranking workers bite and hold the appendages of the pretender for up to 3–4 days, with workers taking turns. Immobilization can last several days, and typically results in the pretender losing her high rank. It is not clear why punishment causes loss of rank, but it is probably a combination of the stress caused by immobilization and being prevented from performing dominance behaviours. Occasionally the immobilized individual is killed outright.

http://lasi.group.shef.ac.uk/pdf/mrjbnature2002.pdf
14
Promising Things to Leverage
  • Social pattern monitoring
  • Relationships (Gal Kaminka, Ph.D. dissertation)
  • Trajectories (Stephen Intille, Ph.D. dissertation)
  • Emergence (Sviatoslav Braynov, repurposed
    algorithm concepts)
  • Technology and Knowledge
  • Human expertise (Gary Klein, Philip Ross, Herb Simon)
  • Biological feedforward hierarchies (Thomas Serre,
    Ph.D. dissertation)
  • Parallel pattern processor (Curt Harris, VLSI
    architecture)

15
Accuracy: Decentralized Beats Centralized Monitoring
From Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, p. 6. www.isi.edu/soar/galk/Publications/diss-final.ps.gz
  • We explore socially-attentive algorithms for detecting teamwork failures under various conditions of uncertainty, resulting from the necessity of selectivity.
  • We analytically show that despite the presence of uncertainty about the actual state of monitored agents, a centralized active monitoring scheme can guarantee failure detection that is either sound and incomplete, or complete and unsound.
  • Centralized: no false positives (sound) or no false negatives (complete), not both.
  • However, this requires monitoring all agents in a team, and reasoning about multiple hypotheses as to their actual state.
  • We then show that active distributed teamwork monitoring results in sound and complete detection capabilities, despite using a much simpler algorithm. By exploring the agents' local states, which are not available to the centralized algorithm, the distributed algorithm (a) uses only a single, possibly incorrect hypothesis of the actual state of monitored agents, and (b) involves monitoring only key agents in a team, not necessarily all team-members (thus allowing even greater selectivity).
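The following is a toy Python sketch, not Kaminka's algorithm; it only illustrates the trade-off quoted above. The "plan step" state, agent names, and the broadcast of a key agent's step are assumptions for illustration: the centralized monitor reasons over multiple remote hypotheses (sound here, but incomplete), while the distributed check compares an agent's exact local state against one key agent.

from itertools import product

def centralized_detect(observed_hypotheses):
    """observed_hypotheses: {agent: set of plausible plan steps} inferred remotely.
    Declares failure only if NO joint assignment puts all agents on the same step
    (sound: no false alarms, but incomplete -- real failures can hide in the ambiguity)."""
    agents = list(observed_hypotheses)
    for combo in product(*(observed_hypotheses[a] for a in agents)):
        if len(set(combo)) == 1:          # some consistent joint hypothesis exists
            return None
    return "failure: no consistent joint plan step"

def distributed_detect(my_step, key_agent_step):
    """Each agent knows its OWN step exactly and compares it with the broadcast
    step of a single key agent -- one hypothesis, and only key agents are monitored."""
    if my_step != key_agent_step:
        return f"failure: I am at '{my_step}' but key agent is at '{key_agent_step}'"
    return None

if __name__ == "__main__":
    # Remote observation is ambiguous, so the centralized monitor stays silent...
    print(centralized_detect({"uav1": {"sweep"}, "uav2": {"sweep", "form_up"}}))
    # ...while the agent itself, using its local state, flags the divergence.
    print(distributed_detect(my_step="form_up", key_agent_step="sweep"))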

16
Execution Monitoring in Multi-Agent Environments
Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, www.isi.edu/soar/galk/Publications/diss-final.ps.gz
  • A key goal of monitoring other agents:
  • Detect violations of the relationships that agent is involved in
  • Compare expected relationships to those actually maintained
  • Diagnose violations, leading to recovery
  • Motivation for relationship failure-detection:
  • Covers a large class of failures
  • Critical for robust performance of the entire team
  • Relationship models specify how agents' states are related
  • Formation model specifies relative velocities, distances (a toy check is sketched after this list)
  • Teamwork model specifies that team plans are jointly executed
  • Many others: coordination, mutual exclusion, etc.
  • Agent Modeling:
  • Infer an agent's state from observed actions via plan-recognition
  • Monitor agents' attributes specified by relationship models
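As a concrete illustration of a relationship model, the sketch below encodes a formation relationship as an expected separation and speed match with tolerances, and flags violations for diagnosis. The agent names, units, and tolerances are invented; this is not code from the dissertation.

import math

# A formation relationship model: expected separation and speed matching
# between two agents, with tolerances (all values hypothetical).
FORMATION = {
    ("lead", "wing"): {"distance": 50.0, "dist_tol": 10.0, "speed_tol": 3.0},
}

def violates_formation(states, pair, model):
    """states: {agent: {'pos': (x, y), 'speed': v}}; returns a violation string or None."""
    a, b = pair
    (ax, ay), (bx, by) = states[a]["pos"], states[b]["pos"]
    separation = math.hypot(ax - bx, ay - by)
    if abs(separation - model["distance"]) > model["dist_tol"]:
        return f"{a}-{b} separation {separation:.1f} m outside expected envelope"
    if abs(states[a]["speed"] - states[b]["speed"]) > model["speed_tol"]:
        return f"{a}-{b} speed mismatch"
    return None

states = {"lead": {"pos": (0.0, 0.0), "speed": 30.0},
          "wing": {"pos": (80.0, 5.0), "speed": 29.0}}
for pair, model in FORMATION.items():
    violation = violates_formation(states, pair, model)
    if violation:
        print("relationship violation ->", violation)   # hand off to diagnosis / recovery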

17
Identifying Football Play Patterns from Real Game
Films
Visual Recognition of Multi-Agent Action, Stephen Sean Intille, Ph.D. Thesis, MIT, 1999. http://web.media.mit.edu/intille/papers-files/thesis.pdf
A p51curl play. It doesn't happen like the chalkboard, but is still recognizable.
Chalkboard patterns a receiver can run.
  • The task of recognizing American football plays
    was selected to investigate the general problem
    of multi-agent action recognition.

This work indicates one method for monitoring multi-agent performance according to plan.
18
Maybe Even... Detecting Emergent Behaviors in Process
Sviatoslav Braynov, Murtuza Jadliwala, Detecting
Malicious Groups of Agents. The First IEEE
Symposium on Multi-Agent Security and
Survivability, 2004.
  • In this paper, we studied coordinated attacks and the problem of detecting malicious networks of attackers. The paper proposed a formal method and an algorithm for detecting action interference between users. The output of the algorithm is a coordination graph which includes the maximal malicious group of attackers, including not only the executors of an attack but also their assistants. The paper also proposed a formal metric on coordination graphs that helps differentiate central from peripheral attackers.
  • Because the methods proposed in the paper allow for detecting interference between perfectly legal actions, they can be used for detecting attacks at their early stages of preparation. For example, coordination graphs can show all agents and activities directly or indirectly related to suspicious users.
  • ------------------------- conjecture begging
    investigation -------------------------
  • This work focused on identifying the members of a group of perpetrators among a group of benigns, based on their cooperative behaviors in causing an event. It applies to both forensic analysis and predictive trend spotting.
  • It may be a methodology for identifying the conditions of specific emergent behavior after the fact, for learning new patterns for future use.
  • It may also provide an early warning mechanism for detecting emergent aberrant team behavior, rather than aberrant UAS behavior. (A toy coordination-graph sketch follows.)
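The toy sketch below illustrates the coordination-graph idea only; the interference test, agent names, and the use of connected components and node degree are simplifying assumptions, not Braynov and Jadliwala's formal method. Pairs of agents whose individually legal actions interfere toward a common effect get an edge; the connected group around a suspicious seed approximates the maximal group, and degree separates central from peripheral members.

from collections import defaultdict

def build_coordination_graph(actions, interferes):
    """actions: {agent: [action, ...]}; interferes(x, y) -> bool (placeholder test).
    Returns adjacency sets over agents whose actions interfere."""
    graph = defaultdict(set)
    agents = list(actions)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if any(interferes(x, y) for x in actions[a] for y in actions[b]):
                graph[a].add(b)
                graph[b].add(a)
    return graph

def connected_group(graph, seed):
    """The group reachable from a suspicious seed agent (executors plus assistants)."""
    seen, stack = set(), [seed]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen

# Toy interference test: two actions interfere if they touch the same resource.
interferes = lambda x, y: x["resource"] == y["resource"]
actions = {
    "u1": [{"resource": "gate"}], "u2": [{"resource": "gate"}],
    "u3": [{"resource": "gate"}], "u4": [{"resource": "depot"}],
}
graph = build_coordination_graph(actions, interferes)
group = connected_group(graph, "u1")
central = max(group, key=lambda a: len(graph[a]))   # highest degree ~ most central
print("suspect group:", group, "| most central:", central)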

19
  • The RPD (Recognition Primed Decision) model
    offers an account of situation awareness. It
    presents several aspects of situation awareness
    that emerge once a person recognizes a situation.
    These are the relevant cues that need to be
    monitored, the plausible goals to pursue and
    actions to consider, and the expectancies.
    Another aspect of situation awareness is the
    leverage points. When an expert describes a
    situation to someone else, he or she may
    highlight these leverage points as the central
    aspects of the dynamics of the situation.
  • Experts see inside events and objects. They have
    mental models of how tasks are supposed to be
    performed, teams are supposed to coordinate,
    equipment is supposed to function. This model
    lets them know what to expect and lets them
    notice when the expectancies are violated. These two aspects of expertise are based, in part, on the expert's mental models.

Gary Klein (1998), Sources of Power: How People Make Decisions, 2nd MIT Press paperback edition, Cambridge, MA, p. 152
20
'Field Sense,' Gretzky-Style
Five seconds of the 1984 hockey game between the Edmonton Oilers and the Minnesota North Stars: the star of this sequence is Wayne Gretzky, widely considered the greatest hockey player of all time. In the footage, Gretzky, barreling down the ice at full speed, draws the attention of two defenders. As they converge on what everyone assumes will be a shot on goal, Gretzky abruptly fires the puck backward, without looking, to a teammate racing up the opposite wing. The pass is timed so perfectly that the receiver doesn't even break stride. "Magic," Vint says reverently. A researcher with the US Olympic Committee, he collects moments like this. Vint is a connoisseur of what coaches call field sense or "vision," and he makes a habit of deconstructing psychic plays: analyzing the steals of Larry Bird and parsing Joe Montana's uncanny ability to calculate the movements of every person on the field.
  • Jennifer Kahn, Wayne Gretzky-Style 'Field Sense' May Be Teachable, Wired Magazine, May 22, 2007.
  • www.wired.com/science/discoveries/magazine/15-06/ff_mindgames

21
The Stuff of Expertise
  • Research indicates that human expertise (extreme domain-specific sense-making) is primarily a matter of meaningful pattern quantity, not better genes.
  • According to an interview with Nobel Prize winner Herb Simon (Ross 1998), people considered truly expert in a domain (e.g., chess masters, medical diagnosticians) are thought unable to achieve that level until they've accumulated some 200,000 to a million meaningful patterns, requiring some 20,000 hours of purposeful, focused pattern development.
  • The accuracy of their sense-making is a function of the breadth and depth of their pattern catalog.
  • In biological entities, the accumulation of large expert-level pattern quantities does not manifest as slower recognition time.
  • All patterns seem to be considered simultaneously for decisive action. There is no search-and-evaluation activity evident.
  • By contrast, automated systems, regardless of how they obtain and represent learned reference patterns, execute time-consuming sequential steps to sort through pattern libraries and perform statistical feature mathematics.
  • This is the nature of the computing mechanisms and recognition algorithms employed in this service.

Philip Ross (1998), Flash of Genius, an interview with Herbert Simon, Forbes, November 16, pp. 98–104, www.forbes.com/forbes/1998/1116/6211098a.html. Also Philip Ross, The Expert Mind, Scientific American, July 2006
22
  • Rapid visual categorization
  • Visual input can be classified very rapidly, around 120 msec following image onset. At this speed, it is no surprise that subjects often respond without having consciously seen the image; consciousness of the image may come later or not at all.
  • Dual-task and dual-presentation paradigms support the idea that such discriminations can occur in the near-absence of focal, spatial attention, implying that purely feed-forward networks can support complex visual decision-making in the absence of both attention and consciousness.
  • This has now been formally shown in the context of a purely feed-forward computational model of the primate's ventral visual system (Serre et al., 2007).

Reverse Engineering the Brain
www.scholarpedia.org/article/Attention_and_consciousness/processing_without_attention_and_consciousness
www.technologyreview.com/printer_friendly_article.aspx?id=17111
23
Explaining Rapid Categorization. Thomas Serre, Aude Oliva, Tomaso Poggio. http://cbcl.mit.edu/seminars-workshops/workshops/serre-slides.pdf
24
The Monitoring Selectivity Problem: Unacceptable Accuracy Compromise
From Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, pp. 3-4. www.isi.edu/soar/galk/Publications/diss-final.ps.gz
  • A key problem emerges when monitoring multiple agents: a monitoring agent must be selective in its monitoring activities (both raw observations and processing), since bandwidth and computational limitations prohibit the agent from monitoring all other agents to full extent, all the time.
  • However, selectivity in monitoring activities leads to uncertainty about monitored agents' states, which can lead to degraded monitoring performance. We call this challenging problem the Monitoring Selectivity Problem: Monitoring multiple agents requires overhead that hurts performance; but at the same time, minimization of the monitoring overhead can lead to monitoring uncertainty that also hurts performance.
  • Key questions remain open:
  • What are the bounds of selectivity that still facilitate effective monitoring?
  • How can monitoring accuracy be maintained in the face of limited knowledge of other agents' states?
  • How can monitoring be carried out efficiently for on-line deployment?
  • This dissertation begins to address the monitoring selectivity problem in teams by investigating requirements for effective monitoring in two monitoring tasks: detecting failures in maintaining relationships, and determining the state of a distributed team (for both failure detection and visualization).

25
Processor Recognition Speed Independent of
Pattern Quantity and Complexity
Snort chart source: Alok Tongaonkar, Sreenaath Vasudevan, R. Sekar, Fast Packet Classification for Snort by Native Compilation of Rules, Proceedings of the 22nd Large Installation System Administration Conference (LISA '08), USENIX, Nov 9–14, 2008. www.usenix.org/events/lisa08/tech/full_papers/tongaonkar/tongaonkar_html/index.html
Processor info source: Rick Dove, Pattern Recognition without Tradeoffs: Scalable Accuracy with No Impact on Speed, to appear in Proceedings of the Cybersecurity Applications and Technology Conference for Homeland Security, IEEE, April 2009. www.kennentech.com/Pubs/2009-PatternRecognitionWithoutTradeoffs-6Page.pdf
8 million real packets run on a 3.06 GHz Intel Xeon processor
Pattern processor comparative speed (unbounded)
  • Comparison shows the pattern processor's flat, constant-speed recognition vs. a typical computational alternative. Example chosen for ready availability.

26
Reconfigurable Pattern Processor: Reusable Cells Reconfigurable in a Scalable Architecture
  • Cell-satisfaction output pointers
  • Independent detection cell content addressable by the current input byte: if active, and satisfied with the current byte, it can activate other designated cells, including itself
  • Up to 256 possible features can be satisfied by all so-designated byte values
  • Cell-satisfaction activation pointers
  • Individual detection cells are configured into feature-cell machines by linking activation pointers (adjacent-cell pointers not depicted here)
  • An unbounded number of feature cells configured as feature-cell machines can extend indefinitely across multiple processors
  • All active cells have simultaneous access to the current data-stream byte (a software sketch of this cell model follows)
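Below is a software sketch of one way to read the cell description above; it is not the VLSI design. Each detection cell holds the byte values that satisfy it plus activation pointers; all currently active cells see each stream byte, and a terminal cell emits a feature label. The example pattern and labels are hypothetical.

class DetectionCell:
    """One reusable detection cell: the byte values that satisfy it, activation
    pointers to other cells (possibly itself), and an optional feature label
    emitted when the cell is satisfied."""
    def __init__(self, satisfied_by, activates=(), emits=None):
        self.satisfied_by = set(satisfied_by)   # byte values this cell accepts
        self.activates = list(activates)        # indexes of cells to wake next
        self.emits = emits                      # feature label, if terminal

def run_feature_cell_machine(cells, start, stream):
    """All active cells see each stream byte 'simultaneously' (here: one loop pass)."""
    active, matches = set(start), []
    for byte in stream:
        next_active = set()
        for idx in active:
            cell = cells[idx]
            if byte in cell.satisfied_by:       # cell satisfied by the current byte
                if cell.emits:
                    matches.append(cell.emits)
                next_active.update(cell.activates)
        active = next_active | set(start)       # start cells stay armed for new instances
    return matches

# Feature-cell machine for the byte pattern b"BAD" (a hypothetical reference pattern).
cells = [
    DetectionCell({ord("B")}, activates=[1]),
    DetectionCell({ord("A")}, activates=[2]),
    DetectionCell({ord("D")}, emits="feature:BAD"),
]
print(run_feature_cell_machine(cells, start=[0], stream=b"xxBADxxBAD"))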
27
Simple Example: Pattern Classification Method Suitable for Many Syntactic, Attributed Grammar, and Statistical Approaches
(Diagram: partial conceptual architecture stack. ½ million detection cells, configured as feature-cell machines FCM-1 through FCM-n, feed FCM activation pointers and output transform pointers into a layered architecture stack of transforms: reinitialization transforms (output register R), logical intersection transforms (output register S), logical union transforms (output register P), and threshold counter transforms (output register T) with multiple threshold down counters. In the very simple weighted-feature example, FCM outputs carry weights (e.g., weight 2, weight 3) into class down counters for Class-1 through Class-4; a classification output occurs for any down counter reaching zero.)
Additional transforms provide sub-pattern combination logic. Feature-cell machines, as depicted, could represent sub-patterns or chunked features shared by multiple pattern classes. Padded FCM-7 and FCM-n increase feature weight with multiple down counts. (A minimal sketch of the down-counter classification follows the citation below.)
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
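As a minimal sketch of the threshold down-counter layer (an assumed reading of the diagram, not the actual transform hardware): each class starts at a threshold; features reported by feature-cell machines decrement the counters of the classes they point to by the feature's weight, and any counter reaching zero emits a classification. All feature names, weights, and thresholds below are made up.

# Minimal sketch of weighted-feature classification with threshold down counters.
# Feature -> (weight, classes it supports); each class carries a down counter (threshold).
FEATURES = {                       # hypothetical configuration
    "fcm1": (3, ["Class-1"]),
    "fcm2": (2, ["Class-1", "Class-2"]),
    "fcm3": (1, ["Class-2", "Class-3"]),
}
THRESHOLDS = {"Class-1": 5, "Class-2": 3, "Class-3": 4}

def classify(detected_features):
    counters = dict(THRESHOLDS)            # fresh down counters for this input
    fired = []
    for feat in detected_features:         # features reported by feature-cell machines
        weight, classes = FEATURES[feat]
        for cls in classes:
            counters[cls] = max(0, counters[cls] - weight)
            if counters[cls] == 0 and cls not in fired:
                fired.append(cls)          # classification output: counter hit zero
    return fired

print(classify(["fcm2", "fcm1"]))          # -> ['Class-1']  (2 + 3 reaches threshold 5)
print(classify(["fcm3", "fcm2"]))          # -> ['Class-2']  (1 + 2 reaches threshold 3)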
28
Value-Based Feature Example
A reference pattern example for behavior-verification of a mobile object. Is it traveling within the planned space/time envelope? Using GPS position data: latitude, longitude, altitude. (A minimal sketch follows below.)
Output: F = failure, S = success
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
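A minimal sketch of the space/time envelope check described above, with made-up waypoints and tolerances (not the configured reference pattern itself): each GPS fix is compared against the planned waypoint governing its time window, and the output is S (success) or F (failure).

from bisect import bisect_right

# Planned space/time envelope: (time_s, lat, lon, alt_m, horizontal_tol_deg, alt_tol_m).
PLAN = [
    (  0, 31.80, -106.40, 1200.0, 0.010, 60.0),
    (120, 31.85, -106.35, 1500.0, 0.010, 60.0),
    (240, 31.90, -106.30, 1500.0, 0.010, 60.0),
]

def verify_fix(t, lat, lon, alt):
    """Compare a GPS fix against the planned waypoint governing its time window."""
    idx = max(0, bisect_right([p[0] for p in PLAN], t) - 1)
    _, plat, plon, palt, htol, atol = PLAN[idx]
    in_envelope = (abs(lat - plat) <= htol and
                   abs(lon - plon) <= htol and
                   abs(alt - palt) <= atol)
    return "S" if in_envelope else "F"

print(verify_fix( 60, 31.803, -106.398, 1230.0))   # S: within the planned envelope
print(verify_fix(150, 31.700, -106.350, 1500.0))   # F: latitude outside the envelope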
29
Example: Monitoring Complex Multi-Agent Behaviors
Packetized data can use multi-part headers to activate appropriate reference pattern sets for different times. (A sketch follows below.)
Output: F = failure, S = success
On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
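The sketch below illustrates the multi-part-header idea under stated assumptions: the header names a mission phase (standing in for a time window), which selects the reference pattern set used to judge the packet body. Phase names, fields, and limits are hypothetical.

# Header-selected reference pattern sets: different behavior envelopes for different phases.
PATTERN_SETS = {
    "ingress": {"max_speed": 40.0, "allowed_zone": "corridor-A"},
    "loiter":  {"max_speed": 15.0, "allowed_zone": "box-3"},
    "egress":  {"max_speed": 45.0, "allowed_zone": "corridor-B"},
}

def check_packet(packet):
    """packet: {'header': {'phase': ..., 'uas_id': ...}, 'body': {'speed': ..., 'zone': ...}}"""
    ref = PATTERN_SETS.get(packet["header"]["phase"])
    if ref is None:
        return "F"                                      # unknown phase is itself aberrant
    body = packet["body"]
    ok = body["speed"] <= ref["max_speed"] and body["zone"] == ref["allowed_zone"]
    return "S" if ok else "F"

print(check_packet({"header": {"phase": "loiter", "uas_id": 7},
                    "body": {"speed": 12.0, "zone": "box-3"}}))     # S
print(check_packet({"header": {"phase": "loiter", "uas_id": 7},
                    "body": {"speed": 33.0, "zone": "box-3"}}))     # F: too fast for loiter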
30
Hybrid Adaptation Could Improve on Natural Systems
Nature has sufficient, but not necessarily optimal, systems. One example:
Grisogono, A.M., The Implications of Complex Adaptive Systems Theory for C2. Proceedings of the 2006 Command and Control Research and Technology Symposium, 2006. www.dodccrp.org/events/2006_CCRTS/html/papers/202.pdf
31
Related Implications and Points
  • T&E cannot be limited to pre-deployment; it must be an ongoing, never-ending activity built into the SoS operating methods.
  • LVC: Put the tester into the environment; total VR immersion as a player with intervention capability (the ultimate driving machine). Humans will see experientially and recognize things in real time that forensics and remote data analysis will not recognize.
  • These things we build are not children that we can watch and guide and correct. They need to have a sense of ethics and principles that informs unforeseen situational response.
  • The biological expertise pattern-recognition capability needs to exist in both the testing environment and on board. We are building intelligent, willful entities that carry weapons.

32
Status Q1 2010
  • Kaminka's Socially Attentive Monitoring examples are modeled.
  • Intille's trajectory recognition modeling was started; another approach is work in progress.
  • Serre's feedforward-hierarchy image recognition, Level 1, is modeled.
  • These algorithm models reside with others in a wiki investigating collaborative parallel-algorithm development.
  • A processor emulator/compiler exists for algorithm modeling.
  • One defense contractor is already working on a classified project.
  • VLSI availability ETA: Q1 2012.
  • 128,000 feature cells expected for first-generation modules.
  • Chips can be combined for unbounded scalability.
  • Pursuits of interesting problems to attack with this new capability:
  • x Inc: Collision avoidance in cluttered airspace.
  • PSI Inc: Distributed anomaly detection and hierarchical sensemaking.
  • OntoLogic LLC: Secure software code verification.
  • This work was supported in part by the U.S. Department of Homeland Security award NBCHC070016.

33
Aberrant behavior will not be tolerated!