1
Perceptual Categories: Old and gradient, young and sparse.
Bob McMurray, University of Iowa, Dept. of Psychology
2
Collaborators
Richard Aslin, Michael Tanenhaus, David Gow, Joe Toscano, Cheyenne Munson, Meghan Clayards, Dana Subik, Julie Markant, Jennifer Williams
The students of the MACLab
3

Categorization
Categorization occurs when
1) discriminably different stimuli
2) are treated equivalently for some purposes
3) and stimuli in other categories are treated
differently.
4

Categorization
Perceptual Categorization
  • Continuous input maps to discrete categories.
  • Semantic knowledge plays minor role.
  • Bottom-up learning processes important.

5

Categorization
Perceptual Categorization
  • Continuous inputs map to discrete categories.
  • Semantic knowledge plays less of a role.
  • Categories include
  • Faces
  • Shapes
  • Words
  • Colors
  • Exemplars include
  • A specific view of a specific face
  • A variant of a shape.
  • A particular word in a particular utterance
  • Variation in hue, saturation, lightness

6
Categorization occurs when
1) Discriminably different stimuli
2) are treated equivalently for some purposes
3) and stimuli in other categories are treated
differently.
Approach: Walk through work on speech and category development; assess this definition along the way.
Premise: For perceptual categories this definition largely falls short, and this may be a good thing.
7
Overview
  1. Speech perception: discriminably different and categorical perception.
  2. Word recognition: exemplars of the same word are not treated equivalently. (Benefits)
  3. Speech development: phonemes are not treated equivalently.
  4. Speech development (model): challenging "stimuli in other categories are treated differently." (Benefits)
  5. Development of visual categories: challenging "stimuli in other categories are treated differently."
8
Categorical Perception
Subphonemic variation in VOT is discarded in
favor of a discrete symbol (phoneme).
9
Categorical Perception
Categorical Perception has been demonstrated across wide swaths of perceptual categorization:
  • Line orientation (Quinn, 2005)
  • Basic-level objects (Newell & Bulthoff, 2002)
  • Facial identity (Beale & Keil, 1995)
  • Musical chords (Howard, Rosen & Broad, 1992)
  • Signs (Emmorey, McCollough & Brentari, 2003)
  • Color (Bornstein & Korda, 1984)
  • Vocal emotion (Laukka, 2005)
  • Facial emotion (Pollak & Kistler, 2002)
What's going on?
10
Categorical Perception
  • Across a category boundary, CP
  • enhances contrast.
  • Within a category, CP yields
  • a loss of sensitivity
  • a down-weighting of the importance of
    within-category variation.
  • discarding continuous detail.

11
Categorical Perception
  • Across a category boundary, CP
  • enhances contrast.
  • Within a category, CP yields
  • a loss of sensitivity
  • a downweighting of the importance of
    within-category variation.
  • discarding continuous detail.

Categorization occurs when 1) discriminably
different stimuli 2) are treated equivalently
for some purposes 3) and stimuli in other
categories are treated differently
Stimuli are not discriminably different. CP: Categorization affects perception. Definition: Categorization is independent of perception. We need a more integrated view.
12
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
3) and stimuli in other categories are treated differently.
13
Categorical Perception
  • Across a category boundary, CP
  • enhances contrast.
  • Within a category, CP yields
  • a loss of sensitivity
  • a downweighting of the importance of
    within-category variation.
  • discarding continuous detail.

Is continuous detail really discarded?
14
Is continuous detail really discarded?
Evidence against the strong form of Categorical Perception from psychophysical-type tasks.
Sidebar: This has never been examined with non-speech stimuli.
Goodness ratings: Miller (1994, 1997); Massaro & Cohen (1983).
Discrimination tasks: Pisoni & Tash (1974); Pisoni & Lazarus (1974); Carney, Widin & Viemeister (1977).
Training: Samuel (1977); Pisoni, Aslin, Perey & Hennessy (1982).
15
Is continuous detail really discarded? No.
?
Why not? Is it useful?
16
  • Online Word Recognition
  • Information arrives sequentially
  • At early points in time, signal is temporarily
    ambiguous.
  • Later arriving information disambiguates the word.

17
[Illustration: the input "b...utter" unfolds over time; early on, candidates such as beach, butter, bump, and putter are all consistent with it, while dog is not; later-arriving material narrows the set to butter.]
18
These processes have been well defined for a
phonemic representation of the input.
But there is considerably less ambiguity if we consider within-category (subphonemic) information.
Example: subphonemic effects of motor processes.
19
Coarticulation
Any action reflects future actions as it unfolds.
Example: Coarticulation. Articulation (lips, tongue) reflects current, future, and past events. Subtle subphonemic variation in speech reflects temporal organization.
Sensitivity to these perceptual details might yield earlier disambiguation.
20
Experiment 1
?
What does sensitivity to within-category detail
do? Does within-category acoustic detail
systematically affect higher level language? Is
there a gradient effect of subphonemic detail on
lexical activation?
21
Experiment 1
Gradient relationship: systematic effects of subphonemic information on lexical activation.
If this gradiency is used, it must be preserved over time. We need a design sensitive to both systematic acoustic detail and the detailed temporal dynamics of lexical activation.
McMurray, Tanenhaus & Aslin (2002)
22
Acoustic Detail
Use a speech continuum: more steps yield a better picture of the acoustic mapping.
KlattWorks: generate synthetic continua from natural speech.
9-step VOT continua (0-40 ms), 6 pairs of words: beach/peach, bale/pale, bear/pear, bump/pump, bomb/palm, butter/putter.
6 fillers: lamp, leg, lock, ladder, lip, leaf, shark, shell, shoe, ship, sheep, shirt.
23
(No Transcript)
24
Temporal Dynamics
How do we tap on-line recognition? With an on-line task: eye movements.
Subjects hear spoken language and manipulate objects in a visual world. The visual world includes a set of objects with interesting linguistic properties: a beach, a peach, and some unrelated items. Eye movements to each object are monitored throughout the task.
Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995
25
Why use eye-movements and visual world paradigm?
  • Relatively natural task.
  • Eye movements are generated very fast (within 200 ms of the first bit of information).
  • Eye movements are time-locked to speech.
  • Subjects aren't aware of their eye movements.
  • Fixation probability maps onto lexical activation.

26
Task
A moment to view the items
27
(No Transcript)
28
Task
Bear
Repeat 1080 times
29
Identification Results
High agreement across subjects and items for the category boundary.
[Figure: proportion of /p/ responses as a function of VOT (ms), from B to P.]
Boundary: by subject 17.25 ± 1.33 ms; by item 17.24 ± 1.24 ms.
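The boundary values above come from the identification curve, but the slides do not show the fitting procedure. Below is only a minimal sketch of one common approach, a logistic fit to the proportion of /p/ responses with the boundary taken as the 50% crossover; the data values are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(vot, boundary, slope):
        # Proportion of /p/ responses as a function of VOT (ms).
        return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

    # Hypothetical identification data for a 9-step, 0-40 ms VOT continuum.
    vot_steps = np.arange(0, 45, 5)
    prop_p = np.array([0.02, 0.03, 0.05, 0.20, 0.70, 0.93, 0.97, 0.99, 0.99])

    # The fitted 'boundary' parameter is the 50% crossover point.
    (boundary, slope), _ = curve_fit(logistic, vot_steps, prop_p, p0=[20.0, 0.5])
    print(f"Estimated category boundary: {boundary:.2f} ms (slope {slope:.2f})")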
30
Task
Target: bear. Competitor: pear. Unrelated: lamp, ship.
31
Task
32
Task
  • Given that
  • the subject heard bear
  • clicked on bear

How often was the subject looking at the pear?
[Predicted patterns: two hypothesized outcomes, Categorical Results vs. a Gradient Effect, plotting looks to the target and the competitor as a function of VOT.]
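The analysis behind the next slides conditions on trials where the listener clicked the intended target and then measures looks to the competitor as a function of VOT. Below is a minimal sketch of that kind of aggregation; the column names and data are invented, and this is not the authors' analysis code.

    import pandas as pd

    # Hypothetical trial-level data from the visual-world task.
    trials = pd.DataFrame({
        "vot_ms": [0, 0, 5, 5, 10, 10],
        "target": ["b", "b", "b", "b", "b", "b"],        # intended category
        "response": ["b", "b", "b", "p", "b", "b"],      # what was clicked
        "fixated_competitor": [0, 1, 0, 1, 1, 0],        # any look to the competitor?
    })

    # Keep only trials where the listener identified the target correctly.
    correct = trials[trials["response"] == trials["target"]]

    # Proportion of correct-response trials with a competitor fixation, by VOT.
    # A gradient (rather than flat) pattern within a category is the signature
    # of within-category sensitivity.
    print(correct.groupby("vot_ms")["fixated_competitor"].mean())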
33
Results
[Figure: competitor fixations over time since word onset (ms), split by response and by VOT (e.g., 0 ms vs. 5 ms).]
Long-lasting gradient effect seen throughout the
timecourse of processing.
34
[Figure: competitor fixations as a function of VOT (ms) for each response, with the category boundary marked.]
35
[Figure: competitor fixations as a function of VOT (ms) for each response, with the category boundary marked.]
36
Summary
Subphonemic acoustic differences in VOT have
gradient effect on lexical activation.
  • Gradient effect of VOT on looks to the
    competitor.
  • Effect holds even for unambiguous stimuli.
  • Seems to be long-lasting.

Consistent with a growing body of work using priming (Andruski, Blumstein & Burton, 1994; Utman, Blumstein & Burton, 2000; Gow, 2001, 2002).
Variants from the same category are not treated equivalently: gradations in interpretation are related to gradations in the stimulus.
37
Extensions
Word recognition is systematically sensitive to
subphonemic acoustic detail.
  • Voicing
  • Laterality, Manner, Place
  • Natural Speech
  • Vowel Quality

38
Extensions
Word recognition is systematically sensitive to
subphonemic acoustic detail.
  • Voicing
  • Laterality, Manner, Place
  • Natural Speech
  • Vowel Quality
  • ? Metalinguistic Tasks

39
Extensions
Word recognition is systematically sensitive to
subphonemic acoustic detail.
  • Voicing
  • Laterality, Manner, Place
  • Natural Speech
  • Vowel Quality
  • ? Metalinguistic Tasks

40
Extensions
Word recognition is systematically sensitive to
subphonemic acoustic detail.
  • Voicing
  • Laterality, Manner, Place
  • Natural Speech
  • Vowel Quality
  • ? Metalinguistic Tasks

[Figure: competitor fixations ("Response P, looks to B" and "Response B, looks to P") as a function of VOT (0-40 ms), with the category boundary marked.]
41
Categorical Perception
Within-category detail survives to the lexical level. Abnormally sharp categories may be due to meta-linguistic tasks.
There is a middle ground: warping of perceptual space (e.g., Goldstone, 2002). Retain the non-independence of perception and categorization.
42
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
3) and stimuli in other categories are treated differently.
43
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes (WHY?)
   Exp 1: Lexical variants are not treated equivalently (gradiency).
3) and stimuli in other categories are treated differently.
44
Progressive Expectation Formation
Any action reflects future actions as it unfolds.
  • Can within-category detail be used to predict future acoustic/phonetic events?
  • Yes: phonological regularities create systematic within-category variation.
  • This variation predicts future events.
45
Experiment 3 Anticipation
Word-final coronal consonants (n, t, d)
assimilate the place of the following segment.
Maroong Goose
Maroon Duck
Place assimilation → ambiguous segments anticipate upcoming material.
46
Subject hears: "select the maroon duck," "select the maroon goose," "select the maroong goose," "select the maroong duck."
47
Results
Anticipatory effect on looks to non-coronal.
48
[Figure: fixation proportion to "duck" as a function of time (ms), assimilated vs. non-assimilated, with a marker at the onset of "goose" plus oculomotor delay.]
Inhibitory effect on looks to the coronal ("duck"), p = .024.
49
Experiment 3 Extensions
Possible lexical locus
Green/m Boat
Eight/Ape Babies
Assimilation creates competition
50
  • Sensitivity to subphonemic detail
  • Increase priors on likely upcoming events.
  • Decrease priors on unlikely upcoming events.
  • Active Temporal Integration Process.

Possible lexical mechanism: NOT treating stimuli equivalently allows within-category detail to be used for temporal integration (a toy sketch of this prior-updating idea follows below).
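As a toy illustration of the prior-updating idea above (not the authors' model), the sketch below assumes invented likelihood values: a partially assimilated ending such as "maroong" is much more likely if the next word begins with a non-coronal (goose) than with a coronal (duck), so hearing it shifts the posterior toward "goose."

    # Toy Bayesian update over the upcoming word; all numbers are invented.
    priors = {"goose": 0.5, "duck": 0.5}

    # Assumed likelihood of hearing an assimilated "maroong"-style ending,
    # given each possible next word (assimilation is licensed by non-coronals).
    likelihood = {"goose": 0.8, "duck": 0.1}

    unnormalized = {w: priors[w] * likelihood[w] for w in priors}
    total = sum(unnormalized.values())
    posterior = {w: p / total for w, p in unnormalized.items()}
    print(posterior)  # "goose" is now strongly expected; "duck" is suppressed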
51
Adult Summary
  • Lexical activation is exquisitely sensitive to
    within-category detail Gradiency.
  • This sensitivity is useful to integrate material
    over time.
  • Progressive Facilitation
  • Regressive Ambiguity resolution (ask me about
    this)

52
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
3) and stimuli in other categories are treated differently.
53
Development
Historically, work in speech perception has been
linked to development. Sensitivity to
subphonemic detail must revise our view of
development.
Use: Infants face additional temporal integration problems. No lexicon is available to clean up noisy input, so they must rely on acoustic regularities while extracting a phonology from the series of utterances.
54
Sensitivity to subphonemic detail: For 30 years, virtually all attempts to address this question have yielded categorical discrimination (e.g., Eimas, Siqueland, Jusczyk & Vigorito, 1971).
  • Exception: Miller & Eimas (1996).
  • Only at extreme VOTs.
  • Only when habituated to a non-prototypical token.

55
Use?
Nonetheless, infants possess abilities that would
require within-category sensitivity.
  • Infants can use allophonic differences at word boundaries for segmentation (Jusczyk, Hohne & Bauman, 1999; Hohne & Jusczyk, 1994).
  • Infants can learn phonetic categories from distributional statistics (Maye, Werker & Gerken, 2002; Maye & Weiss, 2004).

56
Statistical Category Learning
Speech production causes clustering along
contrastive phonetic dimensions.
E.g., voicing and Voice Onset Time (VOT): /b/ has VOT near 0 ms; /p/ has VOT near 40-50 ms.
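A minimal simulation of that clustering, using the rough category means from the slide (/b/ near 0 ms, /p/ near 45 ms); the standard deviations and token counts are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tokens drawn from two Gaussian-shaped categories along the VOT dimension.
    b_tokens = rng.normal(loc=0.0, scale=5.0, size=500)    # /b/-like productions
    p_tokens = rng.normal(loc=45.0, scale=8.0, size=500)   # /p/-like productions
    vot_tokens = np.concatenate([b_tokens, p_tokens])

    # A coarse histogram of the pooled tokens shows the bimodal clustering
    # that a distributional learner could exploit.
    counts, edges = np.histogram(vot_tokens, bins=np.arange(-20, 75, 5))
    for lo, hi, c in zip(edges[:-1], edges[1:], counts):
        print(f"{lo:6.1f} to {hi:6.1f} ms: {'#' * (c // 10)}")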
57
To statistically learn speech categories, infants must track the distribution of tokens along the relevant dimension.
  • This requires the ability to track specific VOTs.

58
Experiment 4
Why no demonstrations of sensitivity?
  • Habituation
  • Discrimination not ID.
  • Possible selective adaptation.
  • Possible attenuation of sensitivity.
  • Synthetic speech
  • Not ideal for infants.
  • Single exemplar/continuum
  • Not necessarily a category representation

Experiment 3: Reassess the issue with improved methods.
59
HTPP
  • Head-Turn Preference Procedure
  • (Jusczyk & Aslin, 1995)
  • Infants exposed to a chunk of language
  • Words in running speech.
  • Stream of continuous speech (a la the statistical learning paradigm).
  • Word list.
  • Memory for exposed items (or abstractions)
    assessed
  • Compare listening time between consistent and
    inconsistent items.

60
Test trials start with all lights off.
61
Center Light blinks.
62
Brings the infant's attention to center.
63
One of the side-lights blinks.
64
When the infant looks at the side-light, he hears a word
65
as long as he keeps looking.
66
Methods
7.5-month-old infants were exposed to either 4 b-words or 4 p-words (80 repetitions total), to form a category of the exposed class of words.
67
Stimuli constructed by cross-splicing naturally
produced tokens of each end point.
68
Novelty or Familiarity?
Novelty/Familiarity preference varies across
infants and experiments.
We're only interested in the middle stimuli (b, p). Infants were classified as novelty- or familiarity-preferring by their performance on the endpoints.
69

After being exposed to "bear," "beach," "bail," "bomb," infants who show a novelty effect will look longer for "pear" than "bear."
What about in between?
[Figure: predicted listening time for the exposed endpoint (bear), an intermediate token, and the opposite endpoint (pear).]
70
Experiment 3 Results
Novelty infants (B: 36, P: 21).
[Figure: listening time (ms) to the target, the intermediate token, and the competitor, separately for infants exposed to B vs. P.]
Target vs. intermediate: p < .001. Competitor vs. intermediate: p = .017.
71
Familiarity infants (B: 16, P: 12).
[Figure: listening time (ms) to the target, the intermediate token, and the competitor.]
Target vs. intermediate: p = .003. Competitor vs. intermediate: p = .012.
72
Infants exposed to /p/
Novelty, N = 21
73
Infants exposed to /b/
74
Experiment 3 Conclusions
Contrary to all previous work
  • 7.5 month old infants show gradient sensitivity
    to subphonemic detail.
  • Clear effect for /p/
  • Effect attenuated for /b/.

75
Reduced effect for /b/. But...
76
  • Bear → Pear
  • The category boundary may lie between the two Bear tokens (between 3 ms and 11 ms)?
  • Within-category sensitivity in a different range?

77
Experiment 4
Same design as Experiment 3, with VOTs shifted away from the hypothesized boundary.
Train: Bomb, Bear, Beach, Bale at -9.7 ms.
Test:
  -9.7 ms: Bomb, Bear, Beach, Bale
  3.6 ms: Bomb, Bear, Beach, Bale
  40.7 ms: Palm, Pear, Peach, Pail
78
Familiarity infants (34 infants).
[Figure: listening time (ms) for the B-, B, and P test items; marked comparisons p = .01 and p = .05.]
79
Novelty infants (25 infants).
[Figure: listening time (ms) for the B-, B, and P test items; marked comparisons p = .002 and p = .02.]
80
Experiment 4 Conclusions
  • Within-category sensitivity in /b/ as well as /p/.

Infants do NOT treat stimuli from the same category equivalently: the pattern is gradient.
81
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
   Exp 3/4: Infants do not treat category members equivalently.
3) and stimuli in other categories are treated differently.
82
Experiment 4 Conclusions
  • Within-category sensitivity in /b/ as well as /p/.

Infants do NOT treat stimuli from the same category equivalently: the pattern is gradient.
  • Remaining questions
  • Why the strange category boundary?
  • Where does this gradiency come from?

83
Experiment 4 Conclusions
Remaining questions 2) Where does this gradiency
come from?
[Figure: listening time across the tested VOT steps, from B- to P.]
84
Remaining questions 2) Where does this gradiency
come from?
Results resemble half a Gaussian
[Figure: listening time across the tested VOT steps, from B- to P.]
85
Remaining questions 2) Where does this gradiency
come from?
Results resemble half a Gaussian
And the distribution of VOTs is Gaussian
Lisker & Abramson (1964)
Statistical Learning Mechanisms?
86
Remaining questions 1) Why the strange category
boundary?
/b/ results consistent with (at least) two
mappings.
1) Shifted boundary
  • Inconsistent with prior literature.

87
HTPP is a one-alternative task: it asks "B or not-B," not "B or P."
Hypothesis: Sparse categories are a by-product of efficient learning.
88
  • Remaining questions
  • Why the strange category boundary?
  • Where does this gradiency come from?

?
Are both a by-product of statistical
learning? Can a computational approach
contribute?
89
Computational Model
Mixture of Gaussians model of speech categories:
1) Models the distribution of tokens as a mixture of Gaussian distributions over a phonetic dimension (e.g., VOT).
2) Each Gaussian represents a category. The posterior probability of a category given a VOT acts as its activation.
90
Statistical Category Learning
1) Start with a set of randomly selected Gaussians.
2) After each input, adjust each parameter to find the best description of the input.
  • Start with more Gaussians than necessary; the model doesn't innately know how many categories there are.
  • The mixing weight → 0 for unneeded categories.
(A minimal sketch of this kind of update appears below.)
91
(No Transcript)
92
  • Overgeneralization (large σ): costly; phonetic distinctions are lost.
93
  • Undergeneralization (small σ): not as costly; distinctiveness is maintained.
94
  • To increase the likelihood of successful learning, err on the side of caution: start with small σ.
95
Sparseness coefficient: the share of the space not strongly mapped to any category.
[Figure: regions of the VOT dimension not strongly mapped to any category.]
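A minimal sketch of one way to compute such a coefficient over a learned mixture. The slide does not give the exact criterion for "strongly mapped," so the threshold on each component's activation (relative to its own peak) is an assumption, as are the category parameters used in the example.

    import numpy as np

    def rel_activation(x, mu, sigma):
        # A component's Gaussian activation scaled so its peak equals 1.
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def sparseness(mus, sigmas, grid, threshold=0.2):
        # Fraction of the grid where no category's relative activation
        # exceeds the (assumed) threshold, i.e., not strongly mapped.
        acts = np.array([rel_activation(grid, m, s) for m, s in zip(mus, sigmas)])
        covered = (acts > threshold).any(axis=0)
        return 1.0 - covered.mean()

    grid = np.linspace(-20, 70, 901)  # the VOT axis (ms)
    # Narrow, well-separated categories leave most of the dimension unmapped...
    print(sparseness([0.0, 45.0], [3.0, 3.0], grid))    # high sparseness
    # ...while wider categories cover much more of it.
    print(sparseness([0.0, 45.0], [12.0, 12.0], grid))  # low sparseness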
96
Start with large σ.
[Figure: average sparsity coefficient over training epochs, by starting σ.]
97
Intermediate starting σ.
[Figure: average sparsity coefficient over training epochs, by starting σ.]
98
Model Conclusions
Continuous sensitivity required for statistical
learning. Statistical learning enhances gradient
category structure.
To avoid overgeneralization, it is better to start with small estimates for σ.
A small or even medium starting σ → sparse category structure during infancy; much of phonetic space is unmapped.
Tokens that are treated differently may not be in
different categories.
99
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
   Exp 3/4: Infants do not treat category members equivalently.
   Model: Gradiency arises from statistical learning.
3) and stimuli in other categories are treated differently.
   Model: Tokens treated differently may not be in different categories (sparseness).
   Model: Sparseness is a by-product of optimal learning.
100
AEM Paradigm
Examination of the sparseness/completeness of categories needs a two-alternative task: AEM.
101
AEM Paradigm
Exception: Conditioned Head-Turn (Kuhl, 1979).
  • After training, generalization can be assessed.
  • Approximates a Go/No-Go task.
102
AEM Paradigm
  • When detection occurs this could be because
  • Stimulus is perceptually equivalent to target.
  • Stimulus is perceptually different but member of
    same category as target.
  • When no detection, this could be because
  • Stimuli are perceptually different.
  • Stimuli are in different categories.

A solution: the multiple exemplar approach.
103
AEM Paradigm
  • Multiple exemplar methods (Kuhl, 1979; 1983)
  • Training: a single distinction (i/a).
  • Irrelevant variation gradually added (speaker, pitch).
  • Good generalization.
  • This exposure may mask natural biases
  • Infants trained on irrelevant dimension(s).
  • Infants exposed to expected variation along
    irrelevant dimension.

Infants trained on a single exemplar did not
generalize.
104
AEM Paradigm
HTPP, habituation, and Conditioned Head-Turn methods all rely on a single response, so results are subject to criterion effects.
  • Yes
  • Both dogs
  • Both mammals
  • Both 4-legged animals
  • No
  • Different breeds
  • Different physical properties

How does the experimenter establish the decision criterion?
105
AEM Paradigm
Multiple responses
Is [this dog] a member of [the pug category] or [the poodle category]?
Pug vs. poodle: decision criteria will be based on species-specific properties (hair type, body shape).
  • Two-alternative tasks specify criteria without
    explicitly teaching
  • What the irrelevant cues are
  • Their statistical properties (expected
    variance).

106
AEM Paradigm
  • Conditioned Head-Turn provides the right sort of response, but cannot be adapted to two alternatives (Aslin & Pisoni, 1980).
  • Large metabolic cost in making head-movement.
  • Requires 180º shift in attention.
  • Could we use a different behavioral response in a
    similar conditioning paradigm?

107
AEM Paradigm
  • Smaller angular displacements are detectable with computer-based eye-tracking.
  • Metabolically cheap: quick and easy to generate.

How can we train infants to make eye movements to target locations?
108
AEM Paradigm
Infants readily make anticipatory eye movements
to regularly occurring visual events
109
AEM Paradigm
Anticipatory Eye-Movements (AEM) Train infants
to use anticipatory eye movements as a behavioral
label for category identity.
  • Two alternative response (left-right)
  • Arbitrary, identification response.
  • Response to a single stimulus.
  • Many repeated measures.

110
AEM Paradigm
Each category is associated with the left or
right side of the screen. Categorization stimuli
followed by visual reinforcer.
111
AEM Paradigm
Delay between stimulus and reward gradually
increases throughout experiment.
[Diagram: on trial 1 the reinforcer follows the stimulus immediately; by trial 30 a delay separates the stimulus from the reinforcer.]
Delay provides opportunity for infants to make
anticipatory eye-movements to expected location.
112
AEM Paradigm
113
AEM Paradigm
114
AEM Paradigm
After training on original stimuli, infants are
tested on a mixture of
  • new, generalization stimuli (unreinforced)
  • Examine category structure/similarity relative
    to trained stimuli.
  • original, trained stimuli (reinforced)
  • Maintain interest in experiment.
  • Provide objective criterion for inclusion

115
AEM Paradigm
Gaze position assessed with automated, remote
eye-tracker.
Gaze position recorded on standard video for
analysis.
116
Experiment 5
Multidimensional visual categories
Can infants learn to make anticipatory eye
movements in response to visual category identity?
?
  • What is the relationship between basic visual
    features in forming perceptual categories?
  • Shape
  • Color
  • Orientation

117
Experiment 5
Train: Shape (a yellow square and a yellow cross).
Test: Variation in color and orientation. Yellow, 0º (training values); orange, 10º; red, 20º.
If infants ignore irrelevant variation in color or orientation, performance should be good for generalization stimuli. If infants' shape categories are sensitive to this variation, performance will degrade.
118
Experiment 5 Results
9/10 infants scored better than chance on the original stimuli; M = 68.7% correct.
[Figure: percent correct for the training stimuli.]
119
Some stimuli are uncategorized (despite very reasonable responses): sparseness.
Sparse regions of the input space.
120
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
   Exp 3/4: Infants do not treat category members equivalently.
   Model: Gradiency arises from statistical learning.
3) and stimuli in other categories are treated differently.
   Model: Tokens treated differently may not be in different categories (sparseness).
   Model: Sparseness is a by-product of optimal learning.
   Exp 5: Shape categories show similar sparse structure.
121
Occlusion-Based AEM
  • AEM is based on an arbitrary mapping.
  • Unnatural mechanism drives anticipation.
  • Requires slowly changing duration of
    delay-period.

Can infants associate anticipated trajectories
(under the occluder) with target identity?
122
Red Square
123
Yellow Cross
124
Yellow Square
To faces
To end
125
Experiment 6
Can AEM assess auditory categorization? Can infants normalize for variations in pitch and duration? Or: are infants sensitive to acoustic detail during a lexical identification task?
126
Training: "Teak" → rightward trajectory; "Lamb" → leftward trajectory.
Test: "Lamb" and "Teak" with changes in duration (33% and 66% longer) and pitch (20% and 40% higher).
If infants ignore irrelevant variation in pitch or duration, performance should be good for generalization stimuli. If infants' lexical representations are sensitive to this variation, performance will degrade.
127
Training stimulus (lamb)
128
Experiment 6 Results
20 training trials. 11 of 29 infants performed better than chance.
[Figure: proportion of correct trials for the training stimuli and for the duration (D1, D2) and pitch (P1, P2) test steps.]
129
Variation in pitch is tolerated for word categories; variation in duration is not, and the effect takes a gradient form.
Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.
130
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
   Exp 3/4: Infants do not treat category members equivalently.
   Model: Gradiency arises from statistical learning.
   Exp 6: Gradiency in infant responses to duration.
3) and stimuli in other categories are treated differently.
   Model: Tokens treated differently may not be in different categories (sparseness).
   Model: Sparseness is a by-product of optimal learning.
   Exp 5, 6: Shape and word categories show similar sparse structure.
131
Exp 7 Face Categorization
Can AEM help us understand face categorization? Are facial variants treated equivalently?
Train: two arbitrary faces. Test: the same faces at 0º, 45º, 90º, and 180º.
Facial inversion effect.
132
(No Transcript)
133
Experiment 7 Results
[Figure: percent correct at each orientation.]
  • 90º vs. vertical: p < .001.
  • 90º vs. 45º and 180º: p < .001.
  • 45º and 180º: at chance (p > .2).
  • 90º: p = .111.

134
Experiment 7
AEM is useful with faces. The facial inversion effect is replicated. Generalization is not simple similarity (90º vs. 45º); infants' own category knowledge is reflected. This resembles the VOT (b/p) results: within a dimension, some portions are categorized and others are not.
Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.
135
Perceptual Categorization
Categorization occurs when:
1) discriminably different stimuli
   CP: perception is not independent of categorization.
2) are treated equivalently for some purposes
   Exp 1: Lexical variants are not treated equivalently (gradiency).
   Exp 2: Non-equivalence enables temporal integration.
   Exp 3/4: Infants do not treat category members equivalently.
   Model: Gradiency arises from statistical learning.
   Exp 6: Gradiency in infant responses to duration.
3) and stimuli in other categories are treated differently.
   Model: Tokens treated differently may not be in different categories (sparseness).
   Model: Sparseness is a by-product of optimal learning.
   Exp 5, 6, 7: Shape, word, and face categories show similar sparse structure.
136
Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.
Variation tolerated vs. not tolerated:
  Exp 5, Shapes: color tolerated; orientation not tolerated.
  Exp 6, Words: pitch tolerated; duration not tolerated.
  Exp 7, Faces: 90º orientation tolerated; other orientations (45º, 180º) not tolerated.
Evidence for complex but sparse categories: some dimensions (or regions of a dimension) are included in the category, others are not.
137
Infant Summary
  • Infants show graded sensitivity to continuous speech cues.
  • /b/ results: regions of unmapped phonetic space.
  • A statistical approach provides support for sparseness.
  • Given current learning theories, sparseness results from optimal starting parameters.
  • An empirical test will require a two-alternative task: AEM.
  • A test of the AEM paradigm also shows evidence for sparseness in shapes, words, and faces.

138
Audience Specific Conclusions
For speech people: Gradiency: continuous information in the signal is not discarded and is useful during recognition. Gradiency: infant speech categories are also gradient, a result of statistical learning.
For infant people: Methodology: AEM is a useful technique for measuring categorization in infants (bonus: it works with undergrads too). Sparseness: through the lens of a 2AFC task (or of interactions between categories), categories look more complex.
139
Perceptual Categorization
1) Discriminably different stimuli
   CP: Discrimination is not distinct from categorization; there is a continuous feedback relationship between perception and categorization.
2) Are treated equivalently for some purposes
   Gradiency: Infants and adults do not treat stimuli equivalently. This property arises from learning processes as well as the demands of the task.
3) And stimuli in other categories are treated differently
   Sparseness: Infants' categories do not fully encompass the input; many tokens are not categorized at all.
140
Conclusions
Categorization is an approximation of an underlyingly continuous system: clumps of similarity in stimulus space. These reflect underlying learning processes and the demands of online processing. During development, categorization is not common (across the complete perceptual space); small, specific clusters may grow into larger representations. This is useful: it avoids overgeneralization.
141
Take Home Message
Early, sparse regions of graded similarity space grow and gain structure, but retain their fundamental gradiency.
142
Perceptual Categories: Old and gradient, young and sparse.
Bob McMurray, University of Iowa, Dept. of Psychology
143
(No Transcript)
144
Misperception Additional Results
145
  • 10 pairs of b/p items.
  • 0-35 ms VOT continua.
  • 20 filler items (lemonade, restaurant, saxophone).
  • Option to click an X (mispronounced).
  • 26 subjects; 1240 trials over two days.
146
Identification Results
Significant target responses even at the extremes. Graded effects of VOT on the correct response rate.
[Figure: response rate for voiced, voiceless, and nonword (NW) responses as a function of VOT (0-35 ms) for the barricade/parricade continuum.]
147
Phonetic Garden-Path
Garden-path effect: the difference between looks to each target (b vs. p) at the same VOT.
148
GP effect: a gradient effect of VOT. Target: p < .0001; competitor: p < .0001.
[Figure: target and competitor fixations.]
149
Assimilation Additional Results
150
runm picks runm takes
151
Exp 3 & 4 Conclusions
  • Within-category detail is used in recovering from assimilation: temporal integration.
  • Anticipate upcoming material.
  • Bias activations based on context.
  • Like Exp 2, within-category detail is retained to resolve ambiguity.
  • Phonological variation is a source of information.
152
Subject hears: "select the mud drinker," "select the mudg gear," "select the mudg drinker."
Critical pair.
153
[Figure: fixation proportions over time (ms), with markers at the onset of "gear" and at the average offset of "gear" (402 ms).]
Mudg Gear is initially ambiguous with a late bias
towards Mud.
154
Mudg Drinker is also ambiguous with a late bias
towards Mug (the /g/ has to come from
somewhere).
155
(No Transcript)
156
Non-parametric approach?
  • Not constrained by a particular equation; it can fill the space better.
  • Similar properties in terms of starting σ and sparseness.