Title: Synthetic Phenomenology: Exploiting Embodiment to Specify the Non-Conceptual Content of Visual Experience

1. Synthetic Phenomenology: Exploiting Embodiment to Specify the Non-Conceptual Content of Visual Experience
- Ron Chrisley
- Centre for Research in Cognitive Science and Department of Informatics
- University of Sussex, Brighton, UK
- E-Intentionality Seminar
- October 5th, 2006
2. Experience specification: A kind of phenomenology
- Science requires an ability to refer to or specify explananda and explanantia
- So consciousness studies needs a way to refer to or specify the content of conscious experiences
- Constraints: the specification must be
  - Canonical
  - Communicable
3. Standard method: That-clause specification
- "Joel believes that today is Thursday"
- Uses the content to be specified
- Specifications inherit the properties of the contents specified
- Constraints on specifications also restrict the set of contents that can be specified
- Same for specifications of experience: "Mary is having a visual experience of a red bike leaning against a white fence"
4. That-clause specification: Problems
- Conceptual, so can't handle:
  - Fine-grained content (e.g., perception)
  - Cognitively impenetrable content (e.g., optical illusions)
  - Pre-objective content (e.g., animals and infants)
  - Transitional content (mediating conceptualisations)
  - Hot content (with constitutive motivational implications for action)
  - I.e., can't specify non-conceptual content
- Disembodied: no reference is made to the kinds of abilities necessary/sufficient for having the experience
5. Do I have to draw you a picture?
- An obvious way to deal with some of the problems is to use non-symbolic specifications
- E.g., for the case of visual experiences, use visual images
- Can't just take a picture of the scene the subject is seeing (literalism)
- Even in the case of a robot model of experience, can't just use the raw video camera output
- For example, the current "output" of a human retina contains gaps or blind spots that are not part of experience
- Furthermore, our visual experience, as opposed to our retinal output, at any given time is stable, encompasses more than the current region of foveation, and is coloured to the periphery
- So what alternatives are there?
6. Method: Synthetic Phenomenology
- Use a working robot, and try to specify the visual experience it models
- NOT saying that the robot has experience!
- But if there were a sufficiently similar organism that did have experience, what would that experience be like?
- Not all of experience: only the lowest-level (non-conceptual) component of visual experience
- Apply it to the case of an expectation-based explanation of various perceptual phenomena, such as change blindness
7. The Grand Illusion?
- For example, some argue:
  - Change blindness data show that only foveal information has an effect on our perceptual state
  - Thus, our perceptual experience is only of the foveated world
  - Any appearance that anything else is experienced is incorrect
8"Actualist" computationalism
- Grand Illusion view is thought to be implied by
computationalism - Typically, computationalist (or functionalist)
theories attempt to map - A perceptual phenomenal content
- To a computational (functional) state
- By virtue of the latter's actual causal origins
(and perhaps its actual causal effects)
9. Being counterfactual
- But a computationalist theory that places explicit emphasis on the role of counterfactual states can avoid the Grand Illusion result
- E.g., the phenomenological state corresponding to a given computational state includes not just current foveal input
- But also the foveal input the computational system would expect to have if it were to engage in certain kinds of movement
- "Expectation-based architecture" (EBA)
10. More on EBA
- These expectations can be realized in, e.g., a forward model, such as a feed-forward neural network
- The model is updated only in response to foveal information
- E.g., it learns "If I were to move my eyes back there, I would see that (the current foveal content)"
11. The EBA explanation
- Thus, change blindness can be explained without denying peripheral experience
- Consider the system after an element of the scene has changed, but before the system foveates on that part of the scene
- The expectations of the forward model for what would be seen if one were to, say, foveate on that area have not been updated
12. No Grand Illusion
- According to EBA, the (outdated) expectation is a part of current experience
- Thus no change is detected or experienced
- So our experience is just what it seems
13. Detailed description of the model
- In normal cases of perceived change:
- 1) An expectation that if you were to look at L, you would see X
- 2) The world at L changes from X to Y
- 3) Your periphery detects change at L; a "change flag" is put up for L
14. Detailed description of the model
- 4) Foveal attention is drawn to any region that has a change flag up
  - If a few to choose from, one is selected
  - If many to choose from (in the case of a global flash, say), change flags are reset and ignored
- 5) Current foveal input at L (Y), past expected input for L (X), and the change flag together constitute our normal experience of change at L
- 6) The current contents of foveal perception (Y) are used to calculate new expectations for L as usual
15. Detailed description of the model
- "Did you see any change?"
  - Answered with "yes" only if a change flag remained up (there may have been several)
- "What was the change?"
  - Answered, for a location that was flagged, using the past expectation and the actual perception (X and Y)
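The six steps and the two report questions above can be put together in one toy implementation. This is a sketch under my own naming assumptions (`ChangeModel`, `foveate`, `attend`, etc.); the talk specifies only the steps themselves.

```python
# Toy implementation of the six-step change-perception model, plus the
# two report questions. Illustrative only; all names are assumptions.
import random

class ChangeModel:
    def __init__(self, world):
        self.world = dict(world)   # location -> actual content
        self.expected = {}         # step 1: location -> expected content
        self.flags = set()         # step 3: locations with a change flag up
        self.experienced = []      # step 5: (location, expected, actual)

    def foveate(self, loc):
        actual = self.world[loc]
        if loc in self.flags:
            # Step 5: current foveal input (Y), past expectation (X),
            # and the flag together constitute the experience of change.
            self.experienced.append((loc, self.expected.get(loc), actual))
            self.flags.discard(loc)
        # Step 6 (and step 1): foveal content becomes the new expectation.
        self.expected[loc] = actual

    def change_world(self, loc, new_content, detected_by_periphery=True):
        # Step 2: the world changes; step 3: if the periphery detects
        # the change, a change flag is put up for loc.
        self.world[loc] = new_content
        if detected_by_periphery:
            self.flags.add(loc)

    def attend(self):
        # Step 4: attention is drawn to a flagged region; with a few to
        # choose from, one is selected (here, at random).
        if self.flags:
            self.foveate(random.choice(sorted(self.flags)))

    # The report questions:
    def saw_any_change(self):
        return bool(self.experienced or self.flags)

    def what_changed(self):
        # Answered using the past expectation and the actual perception.
        return list(self.experienced)

# Normal case of perceived change:
m = ChangeModel({"L": "X"})
m.foveate("L")                 # step 1: expectation "X" for L
m.change_world("L", "Y")       # steps 2-3: world changes, flag goes up
m.attend()                     # steps 4-6: foveate L, experience change
print(m.saw_any_change())      # True
print(m.what_changed())        # [('L', 'X', 'Y')]
```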
16. Modelling change blindness
- Steps 1 and 2 occur, but at the same time as step 2, you have a global flash or blink
- Thus step 3 puts up a change flag for all locations
- Whatever mechanism is used for step 4, attention will not, typically, be redirected to L
- But even if it is, by accident, there will be no change flag active to indicate change (step 5)
- The model will be updated as usual (step 6), but it will not be experienced as change
- "Change of experience is not necessarily experience of change"
17. Possible mechanisms for step 4
- All change flags are reset because of the global activity
- Flag activation for a location only redirects attention to the extent that it is different from other locations
- Flag activity competes in a winner-take-all fashion
- One is selected at random
18. About the robot and display
- Work being done with Joel Parthemore
- "Filled-in" areas give the content of what the robot would expect to see if it moved its head so that it is looking at that location
- Cf. Aleksander's "depictions": depictions of depictions
- Black areas do not indicate an expectation to see black; they indicate to you the absence of any expectation for that location
- You can imagine alternative architectures (e.g., generalising neural networks) that have no undefined regions of state space
- "Absence of expectation is not an expectation of absence"
- Thus, think of the AIBO's head as a big eyeball, and its head movements as saccades
19. Fertile ground
- Many possibilities here for empirical predictions in the human case
- Predicts that if a few changes occur at once, one will be aware of several changes, but will only be able to give details of what changed for those regions to which one could saccade
- Blind spot
- Value is not (yet!) in being a model of human vision, but in providing a method and framework for specifying experience
20. Elaborations to EBA
- Only a simplistic version of EBA is presented here
- It can be elaborated to include change not instigated by the system itself
- E.g., expectations of what foveal information one would receive if the world were to change in a particular way
21. More elaborations to EBA
- Weighted contributions to experience:
  - Current foveal info strongest of all
  - Expected foveal input after a simple movement a little less strong
  - Contribution strength of the expected results of complex movements/sequences inversely proportional to their complexity
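The weighting scheme above can be made concrete in one small function. Everything here beyond the slide's ordering constraint is an assumption: the 1/(1 + complexity) form and the use of sequence length as the complexity measure are mine.

```python
# Sketch of the weighted-contribution elaboration: current foveal input
# contributes most strongly, and expectations contribute with strength
# inversely proportional to the complexity of the movement sequence.
# The exact weight formula and complexity measure are assumptions.

def contribution_weight(movement_sequence):
    """Weight of a contribution to current experience.

    An empty sequence stands for current foveal input (weight 1.0);
    otherwise the weight falls off as 1 / (1 + complexity), taking the
    number of movements as a crude complexity measure.
    """
    complexity = len(movement_sequence)
    return 1.0 / (1.0 + complexity)

print(contribution_weight([]))                      # 1.0:  current foveal input
print(contribution_weight(["saccade-left"]))        # 0.5:  simple movement
print(contribution_weight(["up", "left", "down"]))  # 0.25: complex sequence
```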
22. Open questions for EBA
- E.g., what is the experience at a non-foveated part of the visual field if one has different expectations for what one would see depending on the motor "route" one takes to foveate there?
  - Some "average" of the different expectations?
  - Winner-take-all?
  - A Necker-like shift between the top n winners?
  - No experience at that part of the field at all, as coherence (systematicity, agreement) at a time is a requirement for perceptual experience?