Mixture Models, Monte Carlo, Bayesian Updating and Dynamic Models


1
Mixture Models, Monte Carlo, Bayesian Updating
and Dynamic Models
  • Mike West
  • Computing Science and Statistics, Vol. 24,
  • pp. 325-333, 1993

2
Abstract
  • The development of discrete mixture distributions
    as approximations to priors and posteriors in
    Bayesian analysis
  • Adaptive density estimation

3
Adaptive mixture modeling
  • p(θ): the continuous posterior density function
    for a continuous parameter vector θ.
  • g(θ): approximating density used as the importance
    sampling function.
  • g(θ) is taken to be a (multivariate) T distribution.
  • Θ = {θj, j = 1, ..., n}: random sample from g(θ).
  • Ω = {wj, j = 1, ..., n}: associated weights,
    wj = p(θj) / (k g(θj)),
  • where k = Σj p(θj)/g(θj) normalizes the weights to
    sum to one (a minimal sketch follows).
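As a concrete illustration (not taken from the paper), here is a minimal Python sketch of these weights, assuming a standard normal stand-in for p(θ) and a heavier-tailed T density for g(θ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a normal "posterior" density p and a
# heavier-tailed T importance density g (both assumptions, not
# from the paper's examples).
p = stats.norm(loc=0.0, scale=1.0).pdf
g = stats.t(df=5, loc=0.0, scale=1.2)

n = 5000
theta = g.rvs(size=n, random_state=rng)   # Theta = {theta_j}: sample from g
raw = p(theta) / g.pdf(theta)             # p(theta_j) / g(theta_j)
k = raw.sum()                             # normalizing constant k
w = raw / k                               # w_j = p(theta_j) / (k g(theta_j))

# Weighted Monte Carlo estimates of the posterior mean and variance.
m = np.sum(w * theta)
V = np.sum(w * (theta - m) ** 2)
print(m, V)
```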

4
Importance sampling and mixtures
  • Univariate random sampling
  • Direct Bayesian interpretations (based on
    mixtures of Dirichlet processes)
  • Multivariate kernel estimation
  • Weighted kernel estimator

5
Adaptive methods of posterior approximation
  • Possible patterns of local dependence exhibited
    by p(θ):
  • easy when the dependence structure is similar
    throughout the parameter space;
  • harder when different regions of parameter space
    are associated with rather different patterns of
    dependence.
  • Then the kernel variance matrix V varies locally
    with j, depending more heavily on θj.

6
Adaptive importance sampling
  • The importance sampling distribution is sequentially
    revised based on information derived from
    successive Monte Carlo samples.

7
AIS algorithm
  1. Choose an initial importance sampling
    distribution with density g0(θ), draw a small
    sample of size n0 and compute weights, deducing the
    summary Δ0 = {g0, n0, Θ0, Ω0}. Compute the Monte
    Carlo estimates m0 and V0 of the mean and
    variance of p(θ).
  2. Construct a revised importance function g1(θ)
    using (1) with sample size n0, points θ0,j,
    weights w0,j, and variance matrix V0.
  3. Draw a larger sample of size n1 from g1(θ), and
    replace Δ0 with Δ1.
  4. Either stop, and base inferences on Δ1, or
    proceed, if desired, to a further revised version
    g2(θ), constructed similarly (a sketch follows).
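A schematic Python sketch of this refinement loop, under illustrative assumptions: univariate θ, a toy two-component target standing in for p(θ), and normal kernels where the paper uses T kernels:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy target density standing in for the posterior p(theta).
def p(theta):
    return (0.7 * stats.norm.pdf(theta, -1.0, 0.5)
            + 0.3 * stats.norm.pdf(theta, 2.0, 1.0))

def normalized_weights(theta, g_pdf):
    raw = p(theta) / g_pdf(theta)
    return raw / raw.sum()

# Step 1: initial importance density g0 (deliberately crude), small sample.
g0 = stats.t(df=5, loc=0.0, scale=3.0)
n0 = 500
theta0 = g0.rvs(size=n0, random_state=rng)
w0 = normalized_weights(theta0, g0.pdf)
m0 = np.sum(w0 * theta0)
V0 = np.sum(w0 * (theta0 - m0) ** 2)

# Step 2: revised importance function g1 -- a weighted kernel mixture
# located at the theta0 points with spread governed by V0.
h = n0 ** (-1 / 5)                        # illustrative window width
def g1_pdf(x):
    x = np.atleast_1d(x)[:, None]
    return np.sum(w0 * stats.norm.pdf(x, theta0, h * np.sqrt(V0)), axis=1)

def g1_rvs(size):
    idx = rng.choice(n0, size=size, p=w0)            # pick a component...
    return rng.normal(theta0[idx], h * np.sqrt(V0))  # ...then jitter around it

# Step 3: draw a larger sample from g1 and recompute the summary.
n1 = 5000
theta1 = g1_rvs(n1)
w1 = normalized_weights(theta1, g1_pdf)

# Step 4: base inferences on (theta1, w1), or iterate once more.
print(np.sum(w1 * theta1))   # refined posterior mean estimate
```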

8
Approximating mixtures by mixtures
  • The computational burden increases under further
    refinement with larger sample sizes, since the
    importance function becomes a mixture of several
    thousand T components.
  • Solution: reduce the number of components by
    replacing nearest-neighboring components with some
    form of average.

9
Clustering routine
  1. Set r = n; starting with the r = n component
    mixture, choose k < n as the number of components
    for the final, reduced mixture.
  2. Sort the r values θj in Θ in order of increasing
    values of the weights wj in Ω.
  3. Find the index i such that θi is the nearest
    neighbor of θ1, and reduce the sets Θ and Ω to
    sets of size r − 1 by removing components 1 and i
    and inserting average values.

10
  4. Return to step (2), stopping only when r = k.
  5. The resulting mixture has locations given by the
    final k averaged values, with associated combined
    weights, the same scale matrix V, but a new,
    larger window width h based on the current,
    reduced sample size r rather than n (a sketch of
    the reduction follows).
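A minimal Python sketch of this reduction, assuming univariate component locations and a weight-proportional average when two components merge (selecting the lowest-weight component directly rather than maintaining a sorted list):

```python
import numpy as np

def reduce_mixture(theta, w, k):
    """Collapse an n-component (univariate) mixture to k components by
    repeatedly merging the lowest-weight component with its nearest
    neighbor, replacing the pair by a weight-averaged component."""
    theta = list(map(float, theta))
    w = list(map(float, w))
    while len(theta) > k:
        j = int(np.argmin(w))                              # smallest weight
        i = min((m for m in range(len(theta)) if m != j),
                key=lambda m: abs(theta[m] - theta[j]))    # nearest neighbor
        wc = w[i] + w[j]                                   # combined weight
        tc = (w[i] * theta[i] + w[j] * theta[j]) / wc      # averaged location
        theta = [t for m, t in enumerate(theta) if m not in (i, j)] + [tc]
        w = [x for m, x in enumerate(w) if m not in (i, j)] + [wc]
    return np.array(theta), np.array(w)

# Usage: collapse 2000 equally weighted points to a 200-component mixture.
rng = np.random.default_rng(4)
pts = rng.normal(size=2000)
locs, wts = reduce_mixture(pts, np.full(2000, 1 / 2000), 200)
```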

11
Sequential updating and dynamic models
  • Updating a prior to a posterior distribution for a
    random quantity or parameter vector, based on
    data received and summarized through a likelihood
    function for the parameter.

12
Dynamic models
  • Observation model: Yt ~ p(Yt | θt).
  • Evolution model: θt ~ p(θt | θt-1)
    (a minimal sketch follows).
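For concreteness, a minimal simulation sketch of the simplest such pair, the normal, linear, first-order polynomial model that reappears in Example 1 (the noise variances here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
v, w = 1.0, 0.1      # illustrative observation and evolution variances

theta = np.empty(T)  # latent states
y = np.empty(T)      # observations
prev = 0.0
for t in range(T):
    theta[t] = prev + rng.normal(0.0, np.sqrt(w))  # evolution: p(theta_t | theta_{t-1})
    y[t] = theta[t] + rng.normal(0.0, np.sqrt(v))  # observation: p(Y_t | theta_t)
    prev = theta[t]
```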

13
Computations
  • Evolution step:
  • compute the current prior p(θt | Dt-1) for θt.
  • Updating step:
  • observing Yt, compute the current posterior
    p(θt | Dt).

14
Computations: evolution step
  1. Various features of the prior p(θt | Dt-1) of
    interest can be computed directly using the Monte
    Carlo structure.
  2. The prior density function can be evaluated by
    Monte Carlo integration at any point.

15
  1. The initial Monte Carlo samples Θt (drawn as θt
    from p(θt | θt-1,i)) provide starting values for
    the evaluation of the prior.
  2. Θt may be used with weights Ωt-1 to construct a
    generalized kernel density estimate of the prior.
  3. Monte Carlo computations can be performed to
    approximate forecast moments and probabilities
    (see the sketch below).
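A schematic sketch of this evolution step for the first-order polynomial model above, assuming a weighted Monte Carlo sample carried over from time t-1 and normal kernels for the generalized density estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Assumed carry-over from time t-1: a weighted Monte Carlo sample
# (theta_prev, w_prev) representing p(theta_{t-1} | D_{t-1}).
n = 2000
theta_prev = rng.normal(0.0, 1.0, size=n)
w_prev = np.full(n, 1.0 / n)

w_evo, v_obs = 0.1, 1.0                        # illustrative variances

# Draw theta_t from the evolution model p(theta_t | theta_{t-1,i}):
# these samples start off the evaluation of the prior.
theta_t = rng.normal(theta_prev, np.sqrt(w_evo))

# Generalized kernel estimate of the prior p(theta_t | D_{t-1}):
# the new points theta_t carry the *previous* weights w_prev.
m = np.sum(w_prev * theta_t)
V = np.sum(w_prev * (theta_t - m) ** 2)
h = n ** (-1 / 5)                              # illustrative window width
def prior_pdf(x):
    x = np.atleast_1d(x)[:, None]
    return np.sum(w_prev * stats.norm.pdf(x, theta_t, h * np.sqrt(V)), axis=1)

# Monte Carlo approximations of one-step forecast moments for
# Y_t = theta_t + nu_t, nu_t ~ N(0, v_obs).
forecast_mean = m
forecast_var = V + v_obs
```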

16
Computations: updating step
  • Adaptive Monte Carlo density estimation is applied
    to compute the posterior; a minimal reweighting
    sketch follows.
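Continuing the evolution-step sketch (and reusing its variables theta_t, w_prev, and v_obs), the core of the update is a reweighting by the observation likelihood; the paper's full method would then refresh the sample by adaptive importance sampling:

```python
# Observe y_t and update: posterior weights are proportional to
# prior weights times the likelihood p(y_t | theta_t) under the
# observation model.
y_t = 0.8                                      # hypothetical observation
lik = stats.norm.pdf(y_t, theta_t, np.sqrt(v_obs))
w_t = w_prev * lik
w_t = w_t / w_t.sum()

# (theta_t, w_t) now summarize p(theta_t | D_t); adaptive importance
# sampling can then refresh the sample before the next evolution step.
posterior_mean = np.sum(w_t * theta_t)
```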

17
Examples
  • Example 1:
  • a normal, linear, first-order polynomial model.
  • Example 2:
  • a non-normal model, handled using T distributions.
  • Example 3:
  • a bifurcating model.

18
Examples
  • Example 4
  • Television advertising