1
State Estimation and Prediction Based on Dynamic
Spike Train Decoding: Noise, Adaptation, and
Multimodality
Ron Meir
Department of Electrical Engineering, Technion, Israel
With Omer Bobrowski and Yonina Eldar
2
The Problem
Noise is ubiquitous
[Diagram: the environment and world state, observed through noisy sensory
processing, feeding a state estimator]
Objective: based on partial, noisy sensory input,
estimate the world state
3
Desiderata
  • Compute posterior distribution
  • P(current state | sensory input) or
  • P(next state | sensory input)
  • Continuous time
  • Online - process each spike upon arrival
  • Real time - fixed computational load
  • Implementation by a recurrent neural network

4
Summary of Main Results
  • Main assumption: the state is a Markov process
  • Mathematical foundations developed in the 70s
  • Suggest an implementation by a simple bi-linear
    neural network
  • Extend the original framework:
  • Noisy input
  • History-dependent spike trains (exploring
    adaptation)
  • Multimodal inputs; prediction
  • Demonstrate known results from the static case

5
Comparative Dimensions
Main restrictions: (1) finite state, (2) Markovian
6
Problem Setup
  • X_t - a process representing the world state
  • N_t^1, ..., N_t^M - sensory cell
    responses
  • Spike trains
  • Partial, noisy, delayed, redundant, ...

[Figure: spike rasters from the sensory cells feeding a decoding network]
7
Problem Formulation
  • Main assumptions
  • World state: a finite-state continuous-time Markov
    process X_t with generator matrix Q
  • Sensory activity: Poisson processes with rates
    \lambda_m(X_t) - the tuning curves (simulated in the
    sketch below)
  • Objective
  • Compute the posterior probabilities
    P(X_t = i | spikes in [0, t])
  • Requirements
  • Online, real-time computation
  • Neural network implementation
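To make this setup concrete, here is a minimal simulation sketch of the
generative model just described: a finite-state continuous-time Markov chain
with generator Q, observed through independent Poisson spike trains with
state-dependent rates. All numbers and names are illustrative assumptions,
not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state world; rows of the generator matrix Q sum to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])

# Illustrative tuning curves: lam[m, i] = rate of cell m in state i (Hz).
lam = np.array([[20.0,  2.0,  2.0],
                [ 2.0, 20.0,  2.0],
                [ 2.0,  2.0, 20.0]])

def simulate(Q, lam, T, dt=1e-3):
    """Simulate the state trajectory and spike trains on a fine time grid."""
    n_states, n_cells = Q.shape[0], lam.shape[0]
    x = 0
    states, spikes = [], []
    for _ in range(int(T / dt)):
        # Markov transition: leave state x with probability ~ -Q[x, x] * dt.
        if rng.random() < -Q[x, x] * dt:
            p = Q[x].clip(min=0.0)           # off-diagonal transition rates
            x = rng.choice(n_states, p=p / p.sum())
        # Poisson spiking: cell m fires with probability ~ lam[m, x] * dt.
        spikes.append(rng.random(n_cells) < lam[:, x] * dt)
        states.append(x)
    return np.array(states), np.array(spikes)

states, spikes = simulate(Q, lam, T=2.0)
```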

8
Mathematical Framework: Nonlinear Filtering
  • History
  • Formulated in the sixties (Kushner, Zakai, Wonham)
  • Extended in the seventies to point-process
    observations (Snyder, Segal, Kailath)
  • Our work
  • Relies on this rigorous theory
  • Demonstrates a real-time neural implementation
  • Extensions and generalizations: multimodality,
    noise, non-Poisson spike trains, ...

9
Key Concept
  • Zakai Equation
  • A simple filter for the non-normalized
    probabilities
  • Meaning
  • There is a special set of functions \rho_i(t) such
    that normalizing them yields the posterior
    (spelled out below)
  • Computing probabilities: hard. Computing
    non-normalized probabilities: easy.
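In symbols (notation ours, used consistently below): the posterior follows
from the non-normalized probabilities by normalization alone,

```latex
p_i(t) \;=\; P\bigl(X_t = i \mid \text{spikes in } [0,t]\bigr)
       \;=\; \frac{\rho_i(t)}{\sum_j \rho_j(t)}
```

and, unlike the p_i(t) themselves, the \rho_i(t) evolve according to a linear
(bilinear in state and input) equation, which is what makes them cheap to
compute.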

10
Filtering Equation
Stochastic differential equation, with
\Lambda_m = \mathrm{diag}(\lambda_m(1), \ldots, \lambda_m(N)) and M sensory cells:

d\rho(t) = \underbrace{Q^\top \rho(t)\,dt}_{\text{prior}}
  + \sum_{m=1}^{M} (\Lambda_m - I)\,\rho(t^-)\,
    (\underbrace{dN_t^m}_{\text{sensory data}} - \underbrace{dt}_{\text{no-spikes bias}})
11
Neural Implementation
[Diagram: sensory layer feeding a recurrent posterior network]
12
Example: Visual Tracking
No input leads to increased uncertainty
[Figure: the object's trajectory overlaid on the decoded posterior;
probability color scale from 0 to 1]
13
System Behavior
  • Between spikes: d\rho/dt = (Q^\top - \sum_m \Lambda_m)\,\rho
  • Spike arrival from the mth sensory cell: \rho \to \Lambda_m \rho
  • Closed-form solution available between spikes:
    \rho(t) = \exp[(Q^\top - \sum_m \Lambda_m)(t - t_k)]\,\rho(t_k)
    (implemented in the sketch below)
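A minimal decoder sketch under the equations above: between spikes, propagate
\rho with the matrix exponential of (Q^\top - \sum_m \Lambda_m); on a spike from
cell m, multiply componentwise by that cell's tuning curve. The function names
and the time discretization are our assumptions.

```python
import numpy as np
from scipy.linalg import expm

def decode(spike_times, spike_cells, Q, lam, T, dt=1e-3):
    """Online filter: posterior over states on a time grid.

    spike_times : sorted array of spike times in [0, T]
    spike_cells : spike_cells[k] = index of the cell firing at spike_times[k]
    Q           : (n, n) generator matrix of the state process
    lam         : (M, n) tuning curves, lam[m, i] = rate of cell m in state i
    """
    n = Q.shape[0]
    # Between-spike dynamics d(rho)/dt = (Q^T - sum_m Lambda_m) rho,
    # solved in closed form over one grid step by a matrix exponential.
    P_dt = expm((Q.T - np.diag(lam.sum(axis=0))) * dt)

    rho = np.ones(n) / n            # flat prior over states
    out, k = [], 0
    for step in range(int(T / dt)):
        rho = P_dt @ rho            # no-spike propagation
        t = (step + 1) * dt
        while k < len(spike_times) and spike_times[k] <= t:
            rho = lam[spike_cells[k]] * rho   # jump: rho -> Lambda_m rho
            k += 1
        rho /= rho.sum()            # normalize for numerical stability
        out.append(rho.copy())
    return np.array(out)
```

Note the bilinear structure: every update is linear in \rho and linear in the
input spikes, which is what permits the simple recurrent-network
implementation of the previous slide.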

14
The Static Case
  • Setup
  • Constant input, selected from a finite set
  • Assumptions
  • Gaussian prior
  • Gaussian tuning curves
  • Posterior
  • Gaussian; with a flat prior its mean reduces to
    the population vector (Georgopoulos '82; a
    standard form is given below)
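A standard form of this static-case result, under the slide's assumptions
(flat prior, common tuning width \sigma_{tc}, densely and uniformly placed
preferred stimuli \theta_m so that \sum_m \lambda_m(x) is roughly constant):

```latex
\hat{x} \;=\; \frac{\sum_m n_m\,\theta_m}{\sum_m n_m}
\quad\text{(the population vector)},
\qquad
\operatorname{Var}\bigl(x \mid \text{spikes}\bigr) \;=\; \frac{\sigma_{tc}^2}{\sum_m n_m}
```

where n_m is the spike count of cell m over the observation window.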
15
Generalization and Extensions
16
Environmental Noise
  • Previous setup
  • Limitation: the sensory response is a direct
    function of the state, \lambda_m(X_t)
  • Noisy setup
  • Rates are functions of the noisy state,
    \lambda_m(X_t + \eta_t)
17
Environmental Noise
  • Assumptions
  • The noise \eta_t is a finite-state Markov process
  • X_t and \eta_t are independent
  • Tuning curves depend on both
  • E.g., additive noise - \lambda_m(X_t + \eta_t)
  • Solution
  • Key observation: the pair (X_t, \eta_t) is itself a
    Markov process
  • Write a recursive equation computing the joint
    non-normalized probabilities
  • Compute the marginal non-normalized probabilities
    by summing over the noise (written out below)
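One standard way to write this key observation (our notation; the
Kronecker-sum form assumes the independence stated above): the pair is Markov
on the product space, and the marginal is read out by summing over the noise
index:

```latex
Q_{(X,\eta)} \;=\; Q \otimes I \;+\; I \otimes Q_\eta,
\qquad
\rho_i(t) \;=\; \sum_k \rho_{(i,k)}(t)
```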

18
Environmental Noise
  • Result: the filtering equation keeps its form,
    with each tuning curve replaced by the average
    sensory response (with regard to the noise;
    written out below)
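For the additive-noise example of the previous slide, this averaged sensory
response would read (our notation):

```latex
\bar{\lambda}_m(x) \;=\; \mathbb{E}_{\eta}\bigl[\lambda_m(x+\eta)\bigr]
```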
19
Environmental Noise
20
Optimal Tuning Curve Width
  • Motivation
  • Narrow tuning curves: (+) high precision,
    (-) poor coverage, (-) few spikes
  • Wide tuning curves: (-) low precision,
    (+) good coverage, (+) many spikes
  • Hence an optimal tuning-curve width is expected

21
Optimal Tuning Curve Width
  • Setup
  • Static stimulus
  • Additive noise
  • Assumptions
  • Gaussian prior
  • Gaussian noise
  • Gaussian tuning curves
  • Optimality criterion
  • MSE of the optimal estimator, the posterior mean
    (estimated numerically in the sketch below)
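A Monte-Carlo sketch of this criterion under the stated Gaussian assumptions
(noise omitted for brevity; all parameter values are illustrative): sample a
stimulus from the prior, draw Poisson spike counts through Gaussian tuning
curves, compute the posterior mean on a grid, and average the squared error
while sweeping the tuning width.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_for_width(sigma_tc, n_trials=2000, T=0.1, A=50.0, sigma_prior=1.0):
    """Monte-Carlo MSE of the posterior-mean estimator for one tuning width."""
    theta = np.linspace(-4, 4, 41)           # preferred stimuli of the cells
    grid = np.linspace(-4, 4, 201)           # numerical grid for the posterior
    log_prior = -grid**2 / (2 * sigma_prior**2)
    # lam[m, j] = rate of cell m at stimulus grid[j] (Gaussian tuning curves)
    lam = A * np.exp(-(grid - theta[:, None])**2 / (2 * sigma_tc**2))
    errs = []
    for _ in range(n_trials):
        x = rng.normal(0.0, sigma_prior)
        rates = A * np.exp(-(x - theta)**2 / (2 * sigma_tc**2))
        counts = rng.poisson(T * rates)      # spike counts in the window
        # Log posterior on the grid: prior + sum_m [n_m log lam_m - T lam_m].
        log_post = log_prior + counts @ np.log(lam + 1e-12) - T * lam.sum(axis=0)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        errs.append((post @ grid - x) ** 2)  # squared error of posterior mean
    return float(np.mean(errs))

for width in [0.1, 0.2, 0.5, 1.0, 2.0]:
    print(f"tuning width {width:4.1f}: MSE = {mse_for_width(width):.4f}")
```

Sweeping the width traces out the precision/coverage/spike-count trade-off of
the previous slide.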

22
History-Dependent Spike Trains
  • Why not Poisson?
  • Poisson processes lack memory
  • Physiological phenomena:
  • Refractory period - firing is exhausting
  • Adaptation - neurons get bored

23
History-Dependent Spike Trains
  • Self-exciting point processes
  • The rate depends on the history:
    \lambda_m(t) = \lambda_m(X_t, N^m_{[0,t)})
    (one concrete instance is sketched below)
  • Includes Poisson, renewal processes, and more
(Synaptic depression?)
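The slides leave the equation generic; one concrete self-exciting instance,
in the spirit of the synaptic-depression remark (the gain dynamics and
constants are our assumptions), multiplies each tuning curve by a per-cell
gain that drops at the cell's own spikes and recovers exponentially:

```python
import numpy as np

def adaptive_gain_step(s, spiked, dt, tau=0.2, beta=0.6):
    """One Euler step of a depression-style gain s_m in (0, 1].

    Between spikes:        ds/dt = (1 - s) / tau   (recovery toward 1)
    At a spike of cell m:  s_m <- beta * s_m       (depression)
    The effective, history-dependent rate is lam_m(x_t) * s_m(t).
    """
    s = s + dt * (1.0 - s) / tau
    return np.where(spiked, beta * s, s)
```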
24
History-Dependent Spike Trains
  • Using spikes efficiently

25
History-Dependent Spike Trains
  • Self-inhibition term
  • No adaptation vs. with adaptation

With adaptation, cells near the true state are the least inhibited
26
Multisensory Integration
  • Setup
  • Two or more input modalities (e.g. visual and
    auditory)
  • Goal
  • Compute the posterior based on both modalities

[Illustration: a duck both seen (V) and heard quacking (A)]
27
Multisensory Integration
  • Reminder - the unimodal equation:
    d\rho(t) = Q^\top \rho(t)\,dt + \sum_{m=1}^{M} (\Lambda_m - I)\,\rho(t^-)\,(dN_t^m - dt)
  • Multimodal equation - one sensory-data sum per
    modality (the multi-sensory data terms):
    d\rho(t) = Q^\top \rho(t)\,dt
      + \sum_{m=1}^{M_V} (\Lambda_m^V - I)\,\rho(t^-)\,(dN_t^{V,m} - dt)
      + \sum_{m=1}^{M_A} (\Lambda_m^A - I)\,\rho(t^-)\,(dN_t^{A,m} - dt)
  • where
  • M_V, M_A - number of visual/auditory sensory cells
  • N_t^{V,m}, N_t^{A,m} - mth visual/auditory input activity
  • \Lambda_m^V, \Lambda_m^A - the mth visual/auditory cell's tuning curve
28
Neural Implementation
[Diagram: visual and auditory sensory layers both feeding the recurrent
posterior network]
29
Multisensory Integration
  • Possible question: is this the same as taking all
    inputs from a single modality?
  • Answer: no
  • Biologically
  • Different information is conveyed by different
    tuning-curve properties (shape, latency, ...)
  • Mathematically
  • Different noise processes
  • Benefit - two noisy observations instead of one

30
Multisensory Integration
  • Example
  • Auditory input compensates for the lack of visual
    input
  • And vice versa

31
Multisensory Integration
  • Comparing multimodal vs. unimodal computation
  • M_V + M_A - constant
  • M_V, M_A - varying
  • Multimodal: M_V inputs of one modality and M_A of
    another
  • Unimodal: M_V + M_A inputs of the same modality

32
Multisensory Integration: The Static Case
  • Setup: constant input, multimodal observations
  • Well-studied case: Deneve, Latham, Pouget, and
    others
  • Assumption: Gaussian unimodal posteriors
  • Result: Gaussian multimodal posterior

[Diagram: image and sound feeding separate visual and auditory posterior
networks, vs. both feeding a single multimodal posterior network]
33
Multisensory Integration: The Static Case
  • In our framework
  • Assumptions: Gaussian tuning curves and prior
    (as in the unimodal case)
  • Result: Gaussian multimodal posterior; with a
    flat prior its parameters are given below
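With a flat prior, the multimodal Gaussian posterior combines the unimodal
posteriors (means \mu_V, \mu_A and variances \sigma_V^2, \sigma_A^2) by the
standard precision-weighted rule, a reconstruction consistent with the
well-studied case cited on the previous slide:

```latex
\frac{1}{\sigma^2} \;=\; \frac{1}{\sigma_V^2} + \frac{1}{\sigma_A^2},
\qquad
\mu \;=\; \sigma^2\!\left(\frac{\mu_V}{\sigma_V^2} + \frac{\mu_A}{\sigma_A^2}\right)
```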
34
Prediction
  • Objective
  • Compute the posterior distribution of future
    states
  • Solution
  • Define \tilde{\rho}(t) - the non-normalized
    probabilities of the future state X_{t+\Delta}
  • Then \tilde{\rho}(t) = e^{Q^\top \Delta}\,\rho(t)
    (when the transition matrix is regular)
  • Substitute into the original equation to get a
    filtering equation directly for \tilde{\rho},
    where the tuning matrices are conjugated
    accordingly (derivation sketch below)
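A short derivation sketch, assuming the filtering equation reconstructed on
slide 10 (notation ours): since e^{Q^\top \Delta} commutes with Q^\top,
substituting \rho(t) = e^{-Q^\top \Delta}\,\tilde{\rho}(t) yields an equation
of the same form for \tilde{\rho}, with conjugated tuning matrices:

```latex
\tilde{\rho}(t) := e^{Q^\top\!\Delta}\rho(t)
\;\Longrightarrow\;
d\tilde{\rho}(t) = Q^\top\tilde{\rho}(t)\,dt
  + \sum_m\bigl(\tilde{\Lambda}_m - I\bigr)\tilde{\rho}(t^-)\bigl(dN_t^m - dt\bigr),
\qquad
\tilde{\Lambda}_m := e^{Q^\top\!\Delta}\Lambda_m e^{-Q^\top\!\Delta}
```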
35
Summary
  • Spike-based filtering of a continuous-time Markov
    process
  • Online implementation using bi-linear neural
    networks
  • Mathematically rigorous treatment of continuous
    time
  • No temporal information lost
  • Multimodal effects easily accounted for
  • Many extensions:
  • noise, history-dependent spike trains,
    prediction, and more
  • Recovers known static-case results

36
Extensions
  • Learning and adaptation
  • Distributed and robust representation
  • Continuous state-space
  • Physiological interpretation and implementation
  • Biological experiments

37
Thanks!
38
Prediction
[Diagram: sensory layer feeding the posterior network, which feeds a
prediction layer]
39
Recent Work Overview
  • Pitkow, Sompolinsky & Meister (PLoS Biology,
    2007)
  • Setup
  • Static stimulus: horizontal/vertical bar
  • Dynamic noise: fixational eye movements (2D
    random walk, continuous time, discrete space)
  • Sensory activity: independent Poisson spike
    trains
  • Firing rate represents probability
  • Purpose
  • Determine the object's retinal position
  • Distinguish between horizontal/vertical bars
  • Computation by a rate-based recurrent neural
    network

40
Recent Work Overview
  • Pitkow, Sompolinsky & Meister (PLoS Biology,
    2007)
  • Dynamic posterior update; the terms of the update
    equation (as labeled on the slide):
  • posterior of stimulus S at position x, given
    spiking activity up to time t
  • spike train generated by retinal neuron y
  • tuning-curve related functions
  • discrete-space differential
41
Recent Work Overview
  • Pitkow, Sompolinsky & Meister (PLoS Biology, 2007)

[Figure reproduced from Pitkow et al. (PLoS Biology, 2007)]
42
Recent Work Overview
  • Pitkow, Sompolinsky & Meister (PLoS Biology,
    2007)
  • Summary
  • Equation and neural implementation similar to our
    work
  • Provides evidence for biological plausibility in
    V1
  • Compares to psychophysical experiments
  • Compared to this work:
  • Handles 2D random walks only
  • Does not go beyond the basic equation (e.g.
    noise, prediction, multisensory, adaptation)

43
Recent Work Overview
  • Deneve (Neural Computation, 2008)
  • Setup
  • Dynamic stimulus: a continuous-time binary
    Markov process
  • Sensory activity: Poisson spike trains
  • Neurons encode probability as an inner state
  • Purpose
  • Compute the log-likelihood ratio
  • Represent the ratio in a single cell's activity

44
Recent Work Overview
  • Deneve (Neural Computation, 2008)
  • Summary
  • A differential equation for computing the
    log-ratio
  • A single-cell mechanism to propagate probability
  • Provides learning mechanisms
  • Compared to this work:
  • Binary stimulus only
  • Nonlinear equation
  • The mechanism computing the inner state (the
    posterior distribution) is not studied
  • The neural implementation might lose information
  • The log-ratio equation is a special case of the
    framework suggested in our work

45
Recent Work Overview
  • Beck & Pouget (Neural Computation, 2007)
  • Setup
  • Dynamic stimulus: a continuous-time Markov chain
  • Sensory activity: abstract
  • Firing rate represents probability
  • Purpose
  • Determine the stimulus state from sensory input
  • Computation by a rate-based recurrent neural
    network

46
Recent Work Overview
  • Beck & Pouget (Neural Computation, 2007)
  • Posterior update equation; the terms (as labeled
    on the slide):
  • posterior state distribution, given spiking
    activity up to time t
  • stimulus generator matrix
  • unspecified functions of the input spikes
47
Recent Work Overview
  • Beck & Pouget (Neural Computation, 2007)
  • Summary
  • Posterior calculation by a recurrent neural
    network
  • Compared to this work:
  • Approximate derivation
  • Quadratic equation
  • Unspecified input terms

48
Recent Work Overview
  • Rao (Neural Computation 2004, NIPS 2005)
  • Setup
  • Dynamic stimulus: a discrete-time Markov chain
  • Sensory activity: abstract
  • Firing rate represents probability
  • Purpose
  • Determine the stimulus state from sensory input
  • Computation by a rate-based recurrent neural
    network

49
Recent Work Overview
  • Rao (Neural Computation 2004, NIPS 2005)
  • Summary
  • Posterior calculation by a discrete-time
    recurrent neural network
  • General structure similar to previous models
  • Compared to this work:
  • Discrete time
  • Approximate derivations (log of sum, random
    spikes)
  • Nonlinear interactions
  • Unspecified input terms

50
Recent Work Overview
  • Brown, Barbieri, Solo, and colleagues
  • Setup
  • Dynamic stimulus: discrete-time Gaussian AR
    models
  • Sensory activity: independent Poisson spike
    trains
  • Assumed receptive fields: Gaussian/Zernike
    polynomials
  • Purpose
  • Computerized decoding of information from neural
    activity (Kalman-based approach)
  • A very different purpose from our work, implying
    different requirements/assumptions/approximations

51
Recent Work Overview
  • Brown, Barbieri, Solo, and colleagues
  • Main differences with our work
  • Implementation:
  • computer vs. neural network
  • discrete vs. continuous time
  • nonlinear equations requiring iterative solutions
    vs. simple bi-linear equations
  • Model assumptions:
  • discrete-time, continuous-space (Gaussian AR) vs.
    continuous-time, discrete-space (finite-state
    Markov)
  • special vs. generic receptive fields
  • Gaussian approximation for the posterior vs. no
    approximation

52
Example: Multi-target Tracking
  • Combined state-space
  • Marginal probabilities (see the sketch below)
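A minimal reading of these two bullets in our notation: track both targets as
a single Markov process on the product state space, run the same filter
there, and read out per-target marginals by summing the non-normalized
probabilities:

```latex
X_t = (X_t^1, X_t^2),
\qquad
P\bigl(X_t^1 = i \mid \text{spikes}\bigr)
  \;=\; \frac{\sum_j \rho_{(i,j)}(t)}{\sum_{i',j'} \rho_{(i',j')}(t)}
```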