Modelling Language Evolution Lecture/practical: The Talking Heads


Modelling Language Evolution Lecture/practical:
The Talking Heads
  • Simon Kirby
  • University of Edinburgh
  • Language Evolution & Computation Research Unit

Meanings in models
  • Most models either:
  • ignore meanings entirely (e.g., Elman,
    Christiansen, etc.)
  • or treat meanings as pre-given (e.g., Kirby, etc.)
  • Why have meanings at all?
  • Imagine an iterated learning model without them:
  • what would the best language be like?
  • Probably "say anything" or "say nothing"
  • Need something to drive agents to be expressive

Problems with meanings
  • Where do meanings come from?
  • If they are predefined, then the model must
    assume they are innate
  • Is this justified?
  • How different are different languages' semantics?
  • How do we learn the meanings of words?
  • In models, agents are simply given the
    meaning-form pairs on a plate
  • What do meanings correspond to in the real world?
  • In models, they are symbolic representations;
    there is no real world

Enter the robots
  • Luc Steels and others
  • Get round these problems by using robots

Communication grounded in sensory-motor behaviour
  • Quinn (2001): use evolving robots
  • Khepera robot
  • Two motors and several proximity sensors
  • Controlled by a neural network
  • No learning
  • Weights evolved by a genetic algorithm

Cooperative evolutionary task
  • Kheperas inhabit the environment in pairs
  • Put into the environment close together but at
    random orientations
  • Task (fixed time limit):
  • Move the pair's final average position as far away from
    its initial position as possible
  • How to maximise fitness?
  • Need to coordinate so that both go in the same
    direction
  • How can the robots coordinate?
  • One must lead, the other follow

The evolution of "after you"
  • Early evolutionary solution:
  • Two genotypes, leaders and followers
  • Problem if two leaders or two followers get
    paired
  • Communicative solution:
  • Each agent rotates until it faces the other
  • The first to face the other moves forward to close range
    (this will be the follower)
  • Once in range, it oscillates back and forth
  • The second agent starts reversing, and the other
    follows it

Scales up to a number of robots
Grounding in a learned system
  • Assume there is a real world out there and agents
    want to discriminate objects in the world and
    name them
  • Talking heads experiment (e.g., Paul Vogt)

  • The world is a collection of objects (shapes on
    a white board)
  • Represented as features: Red, Green, Blue, Area,
    X, Y
  • Context: a set of objects on the white board
  • Topic: one particular object
  • Robots want to build a set of meanings
  • A meaning is a region of feature space, represented by a prototype:
  • A particular colour, area and location
  • The category of every object is the region
    represented by its nearest prototype
  • An object is discriminated if its category is
    different from that of every other object in the context

Simplified example
CONTEXT: A(0.1, 0.3), B(0.3, 0.3), C(0.25, …)
ROBOT'S PROTOTYPES: a(0.15, 0.25), b(0.35, …)
A is categorised as a; B is categorised as b; C is categorised as b
A is discriminated; B and C are not
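The nearest-prototype categorisation behind this example is easy to sketch in code. Since the slide's feature values for C and b are cut off, the coordinates below are illustrative stand-ins, and the `categorise`/`discriminated` names are ours, not from the experiment:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def categorise(obj, prototypes):
    """Return the name of the nearest prototype: the object's category."""
    return min(prototypes, key=lambda name: dist(obj, prototypes[name]))

def discriminated(topic, context, prototypes):
    """True if no other object in the context shares the topic's category."""
    topic_cat = categorise(topic, prototypes)
    return all(categorise(obj, prototypes) != topic_cat
               for obj in context if obj != topic)

# Illustrative stand-ins for the slide's truncated feature values
A, B, C = (0.1, 0.3), (0.3, 0.3), (0.25, 0.35)
context = [A, B, C]
prototypes = {"a": (0.15, 0.25), "b": (0.35, 0.3)}

print(categorise(A, prototypes))              # a
print(categorise(B, prototypes))              # b
print(categorise(C, prototypes))              # b
print(discriminated(A, context, prototypes))  # True
print(discriminated(B, context, prototypes))  # False: B and C share b
```

With these values A sits alone in region a, so it is discriminated, while B and C both fall to prototype b and block each other.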
After discrimination
  • If discrimination of the topic fails:
  • Add a new prototype that corresponds to exactly
    that topic
  • If discrimination of the topic succeeds:
  • Shift the prototype slightly so that it moves closer
    to the topic
  • If discrimination succeeds, the distinctive
    category is used to play a language game
  • In some sense, the distinctive category is the
    meaning that will be communicated
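The two update rules can be sketched as one small routine. This is a sketch under assumptions: the `rate` step size and the prototype-naming scheme are ours; the slides only say the prototype shifts "slightly".

```python
from math import dist

def nearest(obj, prototypes):
    """Name of the prototype closest to the object's feature vector."""
    return min(prototypes, key=lambda name: dist(obj, prototypes[name]))

def adapt(topic, prototypes, success, rate=0.1):
    """Update the prototype set after a discrimination game.
    Failure: add a new prototype at exactly the topic's features.
    Success: shift the topic's nearest prototype slightly towards it.
    (`rate` is an assumed step size, not a value from the slides.)"""
    if success:
        name = nearest(topic, prototypes)
        p = prototypes[name]
        prototypes[name] = tuple(pi + rate * (ti - pi)
                                 for pi, ti in zip(p, topic))
    else:
        prototypes["p%d" % len(prototypes)] = tuple(topic)

protos = {"a": (0.15, 0.25)}
adapt((0.1, 0.3), protos, success=True)    # "a" drifts towards the topic
adapt((0.8, 0.8), protos, success=False)   # new prototype for the novel topic
```

Over many games this moves prototypes towards the objects they actually distinguish, and grows new ones only where the current set fails.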

The language game
  • The lexicon is a set of meaning-word pairs, each with
    an association score (between 0 and 1)
  • Language game: speakers use a word to refer to
    the topic (possibly by inventing one) and hearers try
    to interpret that word
  • Different game types:
  • Guessing game
  • Observational game

The guessing game: step 1
  • Hearers try to guess the topic
  • Hearers look at all objects in context but only
    consider discriminated objects as possible topics
  • (BUT hearers categorise (discriminate) each
    object in context anyway)
  • Look for an association between each possible topic
    and the speaker's word
  • Select the one with the highest score
  • Speakers provide corrective feedback:
  • "Yes, that's the topic"
  • "No, that's not the topic", and indicate the
    correct one
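A minimal sketch of the hearer's guess, assuming a lexicon keyed by (category, word) pairs; the object ids, category names, and scores are all hypothetical:

```python
def guess_topic(word, candidates, lexicon):
    """Hearer's guess in step 1. `candidates` maps each discriminated
    object to its distinctive category; `lexicon` maps (category, word)
    pairs to association scores in [0, 1]. Returns None when the word
    has no association with any candidate."""
    scored = [(lexicon.get((cat, word), 0.0), obj)
              for obj, cat in candidates.items()]
    if not scored:
        return None            # nothing discriminable in this context
    score, obj = max(scored)
    return obj if score > 0 else None   # unknown word: no guess

lexicon = {("cat1", "wabaku"): 0.7, ("cat2", "wabaku"): 0.2}
hearer_view = {"A": "cat1", "B": "cat2"}
print(guess_topic("wabaku", hearer_view, lexicon))  # A
print(guess_topic("zorp", hearer_view, lexicon))    # None
```

The speaker's feedback then tells the hearer whether the returned object really was the topic, which drives the score adjustments of step 2.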

The guessing game: step 2
  • Both speaker and hearer adjust the lexicon
  • Speaker:
  • Success: increase the association, and laterally
    inhibit competing associations
  • Failure: decrease the association
  • Hearer:
  • Success: same as speaker
  • Failure to guess the correct topic: decrease the
    association, and increase the association with the
    correct topic
  • Failure to understand the word: add the word
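The speaker's adjustments (including lateral inhibition) might look like the sketch below; the step size `DELTA` and the clamping to [0, 1] are assumptions, and the hearer's updates would follow the same pattern with the extra failure cases listed above:

```python
DELTA = 0.1  # assumed step size; the slides only say increase/decrease

def clamp(x):
    """Keep association scores inside [0, 1]."""
    return max(0.0, min(1.0, x))

def bump(lexicon, meaning, word, amount):
    """Adjust one (meaning, word) score, creating it at 0 if absent."""
    lexicon[(meaning, word)] = clamp(lexicon.get((meaning, word), 0.0) + amount)

def speaker_update(lexicon, meaning, word, success):
    """Step 2 for the speaker: strengthen on success (and laterally
    inhibit rival words for the same meaning), weaken on failure."""
    if success:
        bump(lexicon, meaning, word, DELTA)
        for (m, w) in list(lexicon):
            if m == meaning and w != word:
                bump(lexicon, m, w, -DELTA)   # lateral inhibition
    else:
        bump(lexicon, meaning, word, -DELTA)

lex = {("M", "bola"): 0.5, ("M", "tiki"): 0.4}
speaker_update(lex, "M", "bola", success=True)
# ("M", "bola") strengthened; rival ("M", "tiki") inhibited
```

Lateral inhibition is what eventually leaves one dominant word per meaning, rather than a stable cloud of synonyms.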

What decides communicative success?
  • Number of games
  • Just colours
  • Noise
  • Context size
  • Number of agents

Look into their brains: what determines the lexicon?
  • Number of symbols
  • Number of meanings

The observational game
  • The guessing game relies on corrective feedback
  • Instead, let's assume that learners can figure
    out what the topic is (joint attention? ToM?)
  • Hearers are given the topic
  • Hearers play the discrimination game only on the topic
  • The hearer looks for an association between the topic and the
    speaker's word
  • The speaker knows if the hearer has the correct association
  • Both speaker and hearer adjust the lexicon just as
    in the guessing game
What difference does this make?
Repeat selected experiments