MiCRAM: Midbrain Computational and Robotic Auditory Model for focussed hearing

1
MiCRAM (Midbrain Computational and Robotic Auditory Model for focussed hearing)
  • Harry R. Erwin, PhD
  • University of Sunderland
  • School of Computing and Technology

2
Who, What, When, Where, Why and How
  • Who? Harry Erwin, Stefan Wermter, and Adrian
    Rees.
  • What? An EPSRC grant to develop a computational
    model of the inferior colliculus and use it with
    a robot.
  • When? July 2006-June 2009.
  • Where? Universities of Sunderland and Newcastle.
  • Why? Because it's time.
  • How? As a collaborative interdisciplinary project.

3
Purpose
  • This research involves the collaborative
    development of a biologically plausible model of
    auditory processing at the level of the inferior
    colliculus (IC).
  • This approach potentially clarifies the roles of
    the multiple spectral and temporal
    representations that are present at the level of
    the IC, and investigates how representations of
    sounds interact with auditory processing at that
    level to focus attention and select sound sources
    for deeper analysis.

4
Concept
  • Hubel and Wiesel worked out how the retina
    operated. They were successful because the retina
    was accessible. The IC isn't (very).
  • Barry Richmond could then begin the mapping of
    cortical regions of visual processing.
  • The data now exist to do the same for the
    auditory system using computational modelling
    techniques.
  • This is expected to show that the IC presents
    multiple spectral representations to the cortex
    for processing.

5
Where do the robots fit in?
  • The IC model will provide spectral
    representations of the auditory scene.
  • The robot will use those to drive behaviour.
  • The robot is situated: it experiences the same
    environmental constraints as an animal would.
  • The cues the robot uses allow us to assess their
    role in the animal's behaviour.

6
The Role of the IC
  • The IC plays a strategic role in the processing
    of auditory information.
  • It is the main midbrain nucleus in the auditory
    pathway: the centre of convergence for parallel
    pathways that diverge from the cochlear nucleus.
  • Studies have shown that information necessary for
    fundamental aspects of auditory processing is
    extracted before the thalamo-cortical level.
  • We predict that the emergent properties in the
    outputs of the IC are sufficient to control
    sound-guided behaviour.

7
Practical Applications
  • Speech recognition technology makes use of a
    single smoothed sound spectrum for input. The IC
    appears to use multiple, parallel spectra. Why?
  • The IC seems to participate in an attentional
    match/mismatch process that may be useful in
    speech and sound processing.
  • The length of many sounds is long enough that
    cortical processing can take place to adapt the
    response of the IC and change the spectral
    representation being attended to. This adaptive
    approach may be useful for hearing aids.
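  A minimal Python sketch of what "multiple, parallel
  spectra" could mean in practice: the same signal
  analysed with several window lengths, trading
  temporal against spectral resolution. The signal,
  window lengths, and hop sizes below are illustrative
  assumptions, not values from the MiCRAM model.

    # Sketch: several parallel spectral views of one signal, computed with
    # different analysis window lengths (illustrative values only).
    import numpy as np

    fs = 16000                                    # sample rate (Hz), assumed
    t = np.arange(0, 0.5, 1.0 / fs)
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

    def stft_magnitude(x, win_len, hop):
        """Magnitude short-time Fourier transform with a Hann window."""
        window = np.hanning(win_len)
        frames = [np.abs(np.fft.rfft(x[i:i + win_len] * window))
                  for i in range(0, len(x) - win_len, hop)]
        return np.array(frames)                   # shape: (n_frames, n_bins)

    # Three "parallel" representations: fine time detail vs. fine frequency detail.
    for win in (64, 256, 1024):
        spec = stft_magnitude(signal, win, win // 2)
        print(f"window {win:5d} samples -> {spec.shape[0]} frames x {spec.shape[1]} bins")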

8
Approach
  • We will use an interdisciplinary collaboration
    between experimental neuroscientists and
    computational modellers to study this.
  • The experimental neuroscientists will be the
    domain experts: in particular, assessing
    experimental data to determine their reliability
    and how they should be used.
  • The computational neuro-modellers will develop
    the model of the neural system, and perform
    computational experiments to model the results
    found by the biologists.

9
Data Mining
  • We will maximise the use of existing data from
    our own and other laboratories. Much of the
    existing body of data exists in isolation and has
    not been formally synthesised.
  • The goal of building a model with specific
    outcomes and measurable performance will provide
    a formal framework to underpin the data synthesis
    we propose.
  • Our approach of mining existing data will also
    reduce the number of animals used in experiments.

10
Databases
  • We will use object-oriented databases to store
    and process our models, but we will document them
    on the web in the form of a wiki.
  • (See http://scat-he-g4.sunderland.ac.uk/harryerw/phpwiki/index.php/AuditoryResearch)
  • The modelling will use PGENESIS running on the
    Beowulf cluster.
  • The robotics work will use Khepera or Koala
    robots.

11
Background to the Work
  • Auditory system description
  • Rules of organization
  • Connectivity
  • Role of the IC

12
The auditory system is a typical mammalian
sensory system
  • The auditory signal is processed by brainstem
    modules before the information arrives at the
    cortex.
  • Extensive cortical and somatic reafference is
    used to tune the brainstem processing.
  • Supports a series of functions
  • Reflexive movements (e.g., startle reflex)
  • Orientation towards stimuli (attention)
  • Localization (where is it?)
  • Classification (what is it?)
  • Multisensory integration (especially with vision
    and touch)

13
Components of the auditory system
  • Neurotransmitters and receptors
  • Cell Types
  • Neural Circuits
  • Overall organization

14
Neurotransmitters
  • Glutamate (Glu)
  • AMPA receptors: excitatory, fast
  • NMDA receptors: excitatory, learning, much slower
  • Aspartate: excitatory, fast, found in the cochlea.
  • GABA: standard inhibitory, very slow.
  • Glycine: inhibitory, fast, common in audition,
    mandatory co-agonist at NMDA receptors (?)
  • Acetylcholine: excitatory
  • Various neuromodulators
  • Remember the Cl- reversal potential!
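  Why the Cl- reversal potential matters: glycine and
  GABA-A receptors gate chloride channels, so whether
  they hyperpolarize or merely shunt depends on where
  the membrane potential sits relative to E_Cl. Below
  is a minimal sketch of the standard conductance-based
  synaptic current, I = g * (V - E_rev); the reversal
  potentials and conductance are typical textbook
  values, assumed for illustration, not parameters of
  the MiCRAM model.

    # Sketch: conductance-based synaptic currents, I = g * (V - E_rev).
    # Reversal potentials are typical textbook values (assumptions).
    E_REV_V = {
        "AMPA (Glu)": 0.0,      # excitatory, reversal near 0 mV
        "NMDA (Glu)": 0.0,      # excitatory (Mg2+ block ignored in this sketch)
        "GABA-A":    -0.070,    # Cl- mediated; sits near the Cl- reversal potential
        "Glycine":   -0.070,    # also Cl- mediated
    }

    def synaptic_current(g_syn, v_mem, e_rev):
        """Current (A) through conductance g_syn (S) at membrane potential v_mem (V)."""
        return g_syn * (v_mem - e_rev)

    v_mem = -0.065              # membrane potential near rest (V)
    g_syn = 1.0e-9              # 1 nS of open synaptic conductance

    for name, e_rev in E_REV_V.items():
        i_pa = synaptic_current(g_syn, v_mem, e_rev) * 1e12
        kind = "inward (depolarizing)" if i_pa < 0 else "outward (hyperpolarizing/shunting)"
        print(f"{name:10s}: {i_pa:7.1f} pA, {kind}")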

15
Some basic cell types of the auditory brainstem
  • Primary-like (PL)
  • Primary-like, notch (PL-N)
  • Phase-lock (onset)
  • Onset, lock (O-L)
  • Chopper

16
Auditory Midbrain Rules of Organization
  • Many specialized nuclei, organized into parallel
    paths.
  • Convergence at the inferior colliculus (IC),
    much of it inhibitory or shunting. Left-to-right
    reversal at the IC (like vision). Does the IC
    function like the basal ganglia? We may know in 3
    yrs.
  • Glycine (Gly) is the most common inhibitory
    neurotransmitter, probably due to a faster time
    constant (1 msec) than GABA (5 msec).
    Inhibitory rebound is extensively exploited to
    produce delayed responses: a cell depolarizing
    enough to spike after being hyperpolarized.
  • Glutamate (Glu) is the usual excitatory
    neurotransmitter. AMPA receptors are fast
    subtypes, so a time constant of 200 µsec
    (200 × 10⁻⁶ sec!) is typical. (Brand et al., 2002,
    in Nature indicate 100 µsec for both Gly and Glu,
    which is probably too low.)
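  The time constants quoted above (roughly 0.2 msec for
  AMPA, 1 msec for glycine, 5 msec for GABA) can be
  compared directly with a single-exponential
  conductance decay, g(t) = g_peak * exp(-t / tau). A
  minimal sketch follows; the single-exponential shape
  (no rise time) is a simplifying assumption.

    # Sketch: single-exponential synaptic conductance decay,
    # g(t) = g_peak * exp(-t / tau), using the time constants quoted above.
    # Ignoring the rise time is a simplifying assumption.
    import numpy as np

    TAU_MS = {"AMPA (Glu)": 0.2, "Glycine": 1.0, "GABA": 5.0}

    t_ms = np.arange(0.0, 25.0, 0.05)        # 25 ms at 50-microsecond steps

    for name, tau in TAU_MS.items():
        g = np.exp(-t_ms / tau)              # normalised conductance (g_peak = 1)
        t_5pct = t_ms[np.argmax(g < 0.05)]   # first time the conductance falls below 5%
        print(f"{name:10s}: tau = {tau:3.1f} ms, below 5% of peak after ~{t_5pct:.1f} ms")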

17
The Principal Connections of the Mammalian
Auditory System
[Figure: diagram of the principal auditory pathways
(including the planum temporale); corrected from
http://earlab.bu.edu/intro/auditorypathways.html]
18
Central Nucleus of the Inferior Colliculus
(Mesencephalon)
  • Largest auditory structure of the brainstem on
    the roof of the midbrain. A tectal structure
    behind the superior colliculus (SC). There is a
    spatial mapping from the IC to the SC (that
    triggers visual orientation to sounds in barn owl
    and possibly in mammals).
  • Primary point of convergence in the auditory
    brainstem.
  • Bidirectional connectivity with the auditory
    cortex. Excitatory inputs are received from the
    part of the AC (layer V) that then receives the
    outputs. This is fast enough to support
    cortically-controlled analysis of current sound
    afference.

19
IC components
  • Small multipolar fusiform cells with tufted
    dendrites. Cochleotopic (tonotopic) laminar
    organization, uniting inputs from all lower
    nuclei and the contralateral IC.
  • The anterior portion of the laminae receives
    cortical inputs, while the posterior portion
    receives brainstem and IC inputs.
  • Stellate cells that cross the laminae are also
    present.
  • Recently it has been found that the signal at the
    IC is normalized in intensity. There are several
    possible mechanisms (one is sketched below).
  • Partly cerebellar-like (Curtis Bell).
  • Match/mismatch processing? Sparsification? Motion
    processing?
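  One candidate mechanism for the intensity
  normalization mentioned above is divisive
  normalization, in which each channel's response is
  divided by the pooled activity of the whole array.
  The sketch below is purely illustrative; the choice
  of mechanism, the constant sigma, and the toy inputs
  are assumptions, not claims about the IC.

    # Sketch: divisive normalization across a tonotopic array of channels.
    # One illustrative candidate only; `sigma` and the inputs are assumptions.
    import numpy as np

    def divisive_normalization(responses, sigma=1.0):
        """Divide each channel's drive by the pooled activity of all channels."""
        return responses / (sigma + np.sum(responses))

    quiet = np.array([0.2, 1.0, 0.4, 0.1])   # arbitrary channel drives
    loud = 10.0 * quiet                      # same spectral shape, 20 dB louder

    print(np.round(divisive_normalization(quiet), 3))
    print(np.round(divisive_normalization(loud), 3))
    # The tenfold input difference is compressed to a much smaller output
    # difference, while the relative spectral profile is preserved.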

20
Where do things happen?
  • Azimuth: binaural, measured in the SOC (MSO, LSO,
    and MNTB). (See the sketch after this list.)
  • Elevation: monaural, probably based on DCN notch
    detection.
  • Range, timing, and intervals: monaural, measured
    by the LL, using inhibitory mechanisms.
  • Line spectrum: monaural, measured by the LL.
  • Sensory integration: for individual sounds,
    binaurally in the IC, using evidence developed by
    lower nuclei.
  • Comparisons between sounds: auditory cortex.
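  For the azimuth cue, binaural processing in the MSO
  is commonly modelled as interaural cross-correlation
  (a Jeffress-style coincidence scheme). The sketch
  below estimates an interaural time difference from
  two ear signals; the noise burst, sample rate, and
  delay are made-up values for illustration, not the
  project's SOC model.

    # Sketch: estimate the interaural time difference (ITD) by cross-correlating
    # left- and right-ear signals (a generic Jeffress-style illustration).
    import numpy as np

    fs = 48000
    true_itd = 12                                  # samples (~250 microseconds)
    source = np.random.default_rng(0).standard_normal(4800)   # 100 ms of noise

    left = source
    right = np.concatenate([np.zeros(true_itd), source[:-true_itd]])  # delayed copy

    max_lag = int(0.0007 * fs)                     # search roughly +/- 0.7 ms
    lags = np.arange(-max_lag, max_lag + 1)
    mid = left[max_lag:-max_lag]
    xcorr = [np.dot(mid, right[max_lag + lag:len(right) - max_lag + lag])
             for lag in lags]

    best = lags[int(np.argmax(xcorr))]
    print(f"estimated ITD: {best / fs * 1e6:.0f} us (true: {true_itd / fs * 1e6:.0f} us)")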

21
Reconstructing the acoustic scene
  • How separate sound sources are distinguished,
    assigned to sound streams, and localized is not
    understood.
  • Attention probably chooses sounds out of
    background. Otherwise, the first sound has
    preference. Ray Meddis thinks sounds are
    disambiguated by ignoring ambiguous cues.
  • Intervals between sounds are very important in
    disambiguating them. Auditory neuroscientists are
    dubious about the binding problem.
  • Distinct sound characteristics are also important
    in assignment to sound streams. Harmonics are
    important, as are spectral segments of about 1 kHz.

22
Some lessons to draw
  • Dense representations are found throughout the
    auditory brainstem. The sparse representations
    needed for associative learning and retrieval
    seem to be cortical.
  • The auditory brainstem has solved the problem of
    handling (and modulating) duration tuning. This
    is currently a hard problem in cortical modeling,
    probably because the role of inhibition and
    inhibitory rebound is not well-understood. Recent
    results on persistent activity are important.
  • There is no evidence for a spatial map anywhere
    in the auditory brainstem. This probably means
    space is represented in spectral form. (Think
    spatial Fourier transform and Gabor functions.)
  • Timing, not synchronization, probably solves the
    binding problem in the auditory system.
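  For reference on the Gabor-function remark above: a
  Gabor function is simply a sinusoidal carrier under a
  Gaussian envelope, which is why it can stand in for a
  localized spectral (or spatial-frequency) feature. A
  minimal sketch with arbitrary parameter values:

    # Sketch: a 1-D Gabor function, a sinusoid under a Gaussian envelope.
    # Parameter values are arbitrary and purely illustrative.
    import numpy as np

    def gabor(x, centre=0.0, sigma=1.0, freq=1.0, phase=0.0):
        """exp(-(x - c)^2 / (2 sigma^2)) * cos(2 pi f (x - c) + phase)"""
        envelope = np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * freq * (x - centre) + phase)

    x = np.linspace(-3.0, 3.0, 7)
    print(np.round(gabor(x, sigma=1.0, freq=0.5), 3))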

23
Job Description
  • 3-year position at St. Peter's, B-scale.
  • Develop and validate:
  • biomimetic robots,
  • computational neural networks,
  • PGENESIS models, and
  • a neuroscience database for the MiCRAM Project.
  • There will be an experimental neuroscience
    position at Newcastle that you will have to work
    with; hence travel between the campuses is
    required.

24
Job Requirements
  • Essential
  • Higher degree or extensive experience in
    computing
  • PhD or equivalent research experience
  • Desirable
  • A knowledge of biomimetic robotics
  • Experience with GENESIS or similar neural
    modelling
  • Knowledge of the auditory system in mammals
  • Knowledge of bioacoustics

25
Work Now Underway
  • A computational model of high-frequency CNIC disk
    cells
  • The initial question is whether the CNIC might
    function to "visualise" the sound in multiple
    ways, with the cortex selecting the "image" most
    useful to the context.
  • We're beginning by investigating how thoroughly
    CNIC afferents are mixed.

26
Conclusions
  • Come and see!