1
Bayesian Methods with Monte Carlo Markov Chains III
  • Henry Horng-Shing Lu
  • Institute of Statistics
  • National Chiao Tung University
  • hslu@stat.nctu.edu.tw
  • http://tigpbp.iis.sinica.edu.tw/courses.htm

2
Part 8: More Examples of Gibbs Sampling
3
An Example with Three Random Variables (1)
  • To sample (X,Y,N) as follows

4
An Example with Three Random Variables (2)
  • One can see that

5
An Example with Three Random Variables (3)
  • Gibbs sampling algorithm:
  • Step 1 (initial setting): t = 0, choose starting values.
  • Step 2: Sample a value (x_{t+1}, y_{t+1}) from the full conditional distributions.
  • Step 3: Set t = t + 1 and repeat step 2 until convergence.

6
An Example with Three Random Variables by R
10,000 samples with α = 2, β = 7 and λ = 16
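A minimal R sketch, assuming the slide implements the standard three-variable Gibbs example (cf. Casella and George, 1992) whose full conditionals are listed in the comments; the function name gibbs3 and the starting values are illustrative choices.

# Gibbs sampler for (X, Y, N), assuming the full conditionals
#   X | Y, N      ~ Binomial(N, Y)
#   Y | X, N      ~ Beta(X + alpha, N - X + beta)
#   N - X | X, Y  ~ Poisson(lambda * (1 - Y))
gibbs3 <- function(n_iter = 10000, alpha = 2, beta = 7, lambda = 16,
                   x0 = 1, y0 = 0.5, n0 = 2) {
  x <- numeric(n_iter); y <- numeric(n_iter); n <- numeric(n_iter)
  x[1] <- x0; y[1] <- y0; n[1] <- n0
  for (t in 2:n_iter) {
    x[t] <- rbinom(1, n[t - 1], y[t - 1])                   # draw X | Y, N
    y[t] <- rbeta(1, x[t] + alpha, n[t - 1] - x[t] + beta)  # draw Y | X, N
    n[t] <- x[t] + rpois(1, lambda * (1 - y[t]))            # draw N | X, Y
  }
  data.frame(x = x, y = y, n = n)
}

samples <- gibbs3()  # 10,000 samples with alpha = 2, beta = 7, lambda = 16
colMeans(samples)    # quick check of the marginal means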
7
An Example with Three Random Variables by C (1)
8
An Example with Three Random Variables by C (2)
9
An Example with Three Random Variables by C (3)
10
Example 1 in Genetics (1)
  • Two linked loci with alleles A and a, and B and b
  • A, B dominant
  • a, b recessive
  • A double heterozygote AaBb will produce gametes of four types: AB, Ab, aB, ab.

(Diagram: the female parent forms parental gametes with probability 1 - r' and recombinant gametes with probability r', the female recombination fraction; the male parent likewise with probabilities 1 - r and r, the male recombination fraction.)
11
Example 1 in Genetics (2)
  • r and r' are the recombination rates for male and female, respectively.
  • Suppose the parental origin of these heterozygotes is the mating AABB × aabb, so the double heterozygotes are in coupling phase (AB/ab). The problem is to estimate r and r' from the offspring of selfed heterozygotes.
  • Fisher, R. A. and Balmukand, B. (1928). The estimation of linkage from the offspring of selfed heterozygotes. Journal of Genetics, 20, 79–92.
  • http://en.wikipedia.org/wiki/Genetics
  • http://www2.isye.gatech.edu/brani/isyebayes/bank/handout12.pdf

12
Example 1 in Genetics (3)
Female \ Male | AB (1-r)/2 | ab (1-r)/2 | aB r/2 | Ab r/2
AB (1-r')/2 | AABB (1-r)(1-r')/4 | aABb (1-r)(1-r')/4 | aABB r(1-r')/4 | AABb r(1-r')/4
ab (1-r')/2 | AaBb (1-r)(1-r')/4 | aabb (1-r)(1-r')/4 | aaBb r(1-r')/4 | Aabb r(1-r')/4
aB r'/2 | AaBB (1-r)r'/4 | aabB (1-r)r'/4 | aaBB rr'/4 | AabB rr'/4
Ab r'/2 | AABb (1-r)r'/4 | aAbb (1-r)r'/4 | aABb rr'/4 | AAbb rr'/4
13
Example 1 in Genetics (4)
  • Four distinct phenotypes: AB, Ab, aB and ab.
  • A: the dominant phenotype from (Aa, AA, aA).
  • a: the recessive phenotype from aa.
  • B: the dominant phenotype from (Bb, BB, bB).
  • b: the recessive phenotype from bb.
  • AB: 9 gametic combinations.
  • Ab: 3 gametic combinations.
  • aB: 3 gametic combinations.
  • ab: 1 gametic combination.
  • Total: 16 combinations.

14
Example 1 in Genetics (5)
15
Example 1 in Genetics (6)
  • Hence, a random sample of size n from the offspring of selfed heterozygotes will follow a multinomial distribution:

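Assuming the standard parametrization θ = (1 − r)(1 − r'), summing the mating-table cells by phenotype gives

$$P(AB) = \frac{2+\theta}{4}, \qquad P(Ab) = P(aB) = \frac{1-\theta}{4}, \qquad P(ab) = \frac{\theta}{4},$$

so that

$$(y_1, y_2, y_3, y_4) \sim \mathrm{Multinomial}\left(n;\ \tfrac{2+\theta}{4},\ \tfrac{1-\theta}{4},\ \tfrac{1-\theta}{4},\ \tfrac{\theta}{4}\right).$$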
16
Example 1 in Genetics (7)
  • Suppose that we observe the data y = (y1, y2, y3, y4) = (125, 18, 20, 24), which is a random sample from the multinomial distribution above.
  • Then the probability mass function is:

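Under the same parametrization, the pmf can be written as

$$P(y \mid \theta) = \frac{n!}{y_1!\,y_2!\,y_3!\,y_4!} \left(\frac{2+\theta}{4}\right)^{y_1} \left(\frac{1-\theta}{4}\right)^{y_2+y_3} \left(\frac{\theta}{4}\right)^{y_4}, \qquad n = y_1+y_2+y_3+y_4 = 187.$$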
17
Example 1 in Genetics (8)
  • How to estimate θ?
  • MME (shown last week): http://en.wikipedia.org/wiki/Method_of_moments_(statistics)
  • MLE (shown last week): http://en.wikipedia.org/wiki/Maximum_likelihood
  • Bayesian method: http://en.wikipedia.org/wiki/Bayesian_method

18
Example 1 in Genetics (9)
  • As the value of θ is between ¼ and 1, we can assume that the prior distribution of θ is Uniform(¼, 1).
  • The posterior distribution is then proportional to the likelihood times the prior.
  • The integration in the denominator of the posterior (its normalizing constant) does not have a closed form.

19
Example 1 in Genetics (10)
  • We will consider the mean of the posterior distribution (the posterior mean), shown below.
  • The Monte Carlo Markov Chains method is a good way to estimate the posterior mean even when the posterior distribution and the posterior mean do not have closed forms.

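With a prior π(θ) supported on (¼, 1), these quantities take the form

$$\pi(\theta \mid y) = \frac{P(y \mid \theta)\,\pi(\theta)}{\int_{1/4}^{1} P(y \mid u)\,\pi(u)\,du}, \qquad E(\theta \mid y) = \int_{1/4}^{1} \theta\,\pi(\theta \mid y)\,d\theta.$$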
20
Example 1 by R
  • Direct numerical integration of the posterior mean, as in the sketch below.
  • We can assume other prior distributions and compare the resulting posterior means: Beta(1,1), Beta(2,2), Beta(2,3), Beta(3,2), Beta(0.5,0.5), Beta(10^-5,10^-5).

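A minimal R sketch of the direct numerical integration, assuming the multinomial likelihood reconstructed above (multiplicative constants cancel in the posterior-mean ratio); the helper names are illustrative.

# Unnormalized likelihood for y = (125, 18, 20, 24)
y <- c(125, 18, 20, 24)
lik <- function(th) (2 + th)^y[1] * (1 - th)^(y[2] + y[3]) * th^y[4]

# Posterior mean under the Uniform(1/4, 1) prior (its constant density cancels)
post_mean_unif <- integrate(function(th) th * lik(th), 1/4, 1)$value /
  integrate(lik, 1/4, 1)$value

# Posterior mean under a Beta(a, b) prior on (0, 1)
post_mean_beta <- function(a, b) {
  w <- function(th) lik(th) * dbeta(th, a, b)
  integrate(function(th) th * w(th), 0, 1)$value / integrate(w, 0, 1)$value
}

post_mean_unif        # should be close to the 0.573931 reported below
post_mean_beta(1, 1)  # likewise for Beta(1,1), Beta(2,2), ...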
21
Example 1 by C/C++
Replace with other prior distributions, such as Beta(1,1), …, Beta(1e-5,1e-5).
22
Beta Prior
23
Comparison for Example 1 (1)
Method | Estimate | Method | Estimate
MME | 0.683616 | Bayesian Beta(2,3) | 0.564731
MLE | 0.663165 | Bayesian Beta(3,2) | 0.577575
Bayesian U(¼,1) | 0.573931 | Bayesian Beta(½,½) | 0.574928
Bayesian Beta(1,1) | 0.573918 | Bayesian Beta(10^-5,10^-5) | 0.588925
Bayesian Beta(2,2) | 0.572103 | Bayesian Beta(10^-7,10^-7) | shown below
24
Comparison for Example 1 (2)
Method | Estimate | Method | Estimate
Bayesian Beta(10,10) | 0.559905 | Bayesian Beta(10^-7,10^-7) | 0.193891
Bayesian Beta(10^2,10^2) | 0.520366 | Bayesian Beta(10^-7,10^-7) | 0.400567
Bayesian Beta(10^4,10^4) | 0.500273 | Bayesian Beta(10^-7,10^-7) | 0.737646
Bayesian Beta(10^5,10^5) | 0.500027 | Bayesian Beta(10^-7,10^-7) | 0.641388
Bayesian Beta(10^n,10^n) | approaches 0.5 as n grows | Bayesian Beta(10^-7,10^-7) | not stationary
(The repeated Beta(10^-7,10^-7) rows are separate runs of the same chain; their disagreement indicates the chain is not stationary.)
25
Part 9: Gibbs Sampling Strategy
26
Sampling Strategy (1)
  • Strategy I
  • Run one chain for a long time.
  • After some Burn-in period, sample points every
    some fixed number of steps.
  • The code example of Gibbs sampling in the
    previous lecture use sampling strategy I.
  • http//www.cs.technion.ac.il/cs236372/tirgul09.ps

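A minimal sketch of Strategy I, reusing the gibbs3() sampler sketched earlier; the burn-in length and thinning interval are illustrative choices.

chain  <- gibbs3(n_iter = 60000)  # one long run
burnin <- 10000                   # discard the burn-in period
k      <- 10                      # keep every 10th step afterwards
samples <- chain[seq(burnin + 1, nrow(chain), by = k), ]
nrow(samples)                     # 5000 thinned, weakly correlated draws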
27
Sampling Strategy (2)
  • Strategy II:
  • Run the chain N times, each run for M steps.
  • Each run starts from a different initial state.
  • Return the last state of each run.

28
Sampling Strategy (3)
  • Strategy II by R

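A minimal sketch of Strategy II, again reusing gibbs3(); the dispersed random starting states are an assumed initialization scheme.

M <- 1000  # steps per run
N <- 500   # number of independent runs
last_states <- t(replicate(N, {
  x0 <- rpois(1, 8)  # a different starting state for each run
  chain <- gibbs3(n_iter = M, x0 = x0, y0 = runif(1), n0 = x0 + rpois(1, 8))
  unlist(chain[M, ]) # return only the last state of the run
}))
colMeans(last_states)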
29
Sampling Strategy (4)
  • Strategy II by C/C++

30
Strategy Comparison
  • Strategy I:
  • Performs burn-in only once, which saves time.
  • Samples might be correlated (although only weakly).
  • Strategy II:
  • Better chance of covering the state space, especially if the chain is slow to reach stationarity.
  • Burn-in must be performed for each chain, which takes more time.

31
Hybrid Strategies (1)
  • Run several chains and take a few samples from each.
  • This combines the benefits of both strategies.

32
Hybrid Strategies (2)
  • Hybrid Strategy by R

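A minimal sketch of the hybrid strategy: several chains, each contributing many thinned draws (all tuning constants are illustrative).

n_chains <- 10; n_steps <- 20000; burnin <- 5000; k <- 50
samples <- do.call(rbind, lapply(seq_len(n_chains), function(i) {
  x0 <- rpois(1, 8)  # disperse the starting states across chains
  chain <- gibbs3(n_iter = n_steps, x0 = x0, y0 = runif(1), n0 = x0 + rpois(1, 8))
  chain[seq(burnin + 1, n_steps, by = k), ]  # thinned draws from this chain
}))
nrow(samples)  # 10 chains x 300 draws each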
33
Hybrid Strategies (3)
  • Hybrid Strategy by C/C++

34
Part 10: Metropolis-Hastings Algorithm
35
Metropolis-Hastings Algorithm (1)
  • Another kind of the MCMC methods.
  • The Metropolis-Hastings algorithm can draw
    samples from any probability distribution p(x),
    requiring only that a function proportional to
    the density can be calculated at x.
  • Process in three steps
  • Set up a Markov chain
  • Run the chain until stationary
  • Estimate with Monte Carlo methods.
  • http//en.wikipedia.org/wiki/Metropolis-Hasti
    ngs_algorithm

36
Metropolis-Hastings Algorithm (2)
  • Let π = (π_1, …, π_n) be a probability density (or mass) function (pdf or pmf).
  • f(·) is any function, and we want to estimate E_π[f(X)] = Σ_i f(i) π_i.
  • Construct P = {P_ij}, the transition matrix of an irreducible Markov chain with states 1, 2, …, n, where P_ij = Pr(X_{t+1} = j | X_t = i) and π is its unique stationary distribution.

37
Metropolis-Hastings Algorithm (3)
  • Run this Markov chain for times t = 1, …, N and calculate the Monte Carlo sum (1/N) Σ_{t=1}^{N} f(X_t);
  • then (1/N) Σ_{t=1}^{N} f(X_t) → E_π[f(X)] as N → ∞.
  • Sheldon M. Ross (1997). Proposition 4.3, Introduction to Probability Models. 7th ed.
  • http://nlp.stanford.edu/local/talks/mcmc_2004_07_01.ppt

38
Metropolis-Hastings Algorithm (4)
  • In order to perform this method for a given distribution π, we must construct a Markov chain transition matrix P with π as its stationary distribution, i.e. πP = π.
  • Consider a matrix P made to satisfy the reversibility condition: for all i and j, π_i P_ij = π_j P_ji.
  • This property ensures that Σ_i π_i P_ij = Σ_i π_j P_ji = π_j for all j, and hence π is a stationary distribution of P.

39
Metropolis-Hastings Algorithm (5)
  • Let a proposal Q = {Q_ij} be irreducible, where Q_ij = Pr(X_{t+1} = j | X_t = i), and let the range of Q equal the range of π.
  • But π does not have to be a stationary distribution of Q.
  • Process: tweak Q_ij to yield π.

40
Metropolis-Hastings Algorithm (6)
  • We assume that P_ij has the form P_ij = Q_ij α(i, j) for j ≠ i, where α(i, j) is called the acceptance probability; i.e., given X_t = i, a proposed move to j is accepted with probability α(i, j).

41
Metropolis-Hastings Algorithm (7)
  • WLOG, suppose that for some (i, j), π_i Q_ij > π_j Q_ji.
  • In order to achieve equality (the reversibility condition), one can introduce a probability α(i, j) < 1 on the left-hand side and set α(j, i) = 1 on the right-hand side.

42
Metropolis-Hastings Algorithm (8)
  • Then π_i Q_ij α(i, j) = π_j Q_ji α(j, i) = π_j Q_ji.
  • These arguments imply that the acceptance probability must be α(i, j) = min(1, (π_j Q_ji) / (π_i Q_ij)).

43
Metropolis-Hastings Algorithm (9)
  • M-H Algorithm:
  • Step 1: Choose an irreducible Markov chain transition matrix Q with transition probabilities Q_ij.
  • Step 2: Let t = 0 and initialize X_0 from the states of Q.
  • Step 3 (Proposal step): Given X_t = i, sample Y = j from the i-th row of Q (probabilities Q_ij).

44
Metropolis-Hastings Algorithm (10)
  • M-H Algorithm (cont.)
  • Step 4 (Acceptance step): Generate a random number U from Uniform(0, 1). If U ≤ α(X_t, Y), set X_{t+1} = Y; otherwise set X_{t+1} = X_t.
  • Step 5: Set t = t + 1 and repeat Steps 3–5 until convergence.

45
Metropolis-Hastings Algorithm (11)
  • An Example of Steps 3–5

46
Metropolis-Hastings Algorithm (12)
  • We may define a rejection rate as the proportion of times t for which X_{t+1} = X_t. Clearly, in choosing Q, high rejection rates are to be avoided.
  • Example:
  • Example

47
Example (1)
  • Simulate a bivariate normal distribution

48
Example (2)
  • Metropolis-Hastings Algorithm

49
Example of M-H Algorithm by R
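A minimal R sketch, assuming the target is a zero-mean, unit-variance bivariate normal with correlation ρ = 0.5 (the slide's exact parameters are not recoverable) and a symmetric random-walk proposal, so the acceptance probability reduces to min(1, π(Y)/π(X_t)).

# Log of the target density, up to an additive constant
log_target <- function(z, rho = 0.5) {
  -(z[1]^2 - 2 * rho * z[1] * z[2] + z[2]^2) / (2 * (1 - rho^2))
}

mh_bvn <- function(n_iter = 10000, step = 1) {
  out <- matrix(0, n_iter, 2)
  z <- c(0, 0)                         # initial state
  for (t in 1:n_iter) {
    prop <- z + runif(2, -step, step)  # symmetric proposal: Q_ij = Q_ji
    # accept with probability min(1, pi(prop) / pi(z))
    if (log(runif(1)) < log_target(prop) - log_target(z)) z <- prop
    out[t, ] <- z
  }
  out
}

sim <- mh_bvn()
cor(sim[, 1], sim[, 2])  # should be near 0.5 after convergence
plot(sim, pch = ".", main = "M-H samples, bivariate normal target")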
50
Example of M-H Algorithm by C (1)
51
Example of M-H Algorithm by C (2)
52
Example of M-H Algorithm by C (3)
53
A Figure to Check Simulation Results
  • Black points are the simulated samples; colored points show the probability density.

54
Exercises
  • Write your own programs similar to the examples presented in this talk, including Example 1 in Genetics and the other examples.
  • Write programs for the examples mentioned on the referenced web pages.
  • Write programs for other examples that you know.
