Lecture 3: Markov processes, master equation

Description: Lecture 3: Markov processes, master equation. Outline: Preliminaries and definitions, Chapman-Kolmogorov equation, Wiener process, Markov chains, eigenvectors and eigenvalues.

Provided by: JohnHe171
Learn more at: http://www.nbi.dk

Transcript and Presenter's Notes

Lecture 3: Markov processes, master equation
Outline:
  • Preliminaries and definitions
  • Chapman-Kolmogorov equation
  • Wiener process
  • Markov chains
  • eigenvectors and eigenvalues
  • detailed balance
  • Monte Carlo
  • master equation

Stochastic processes
A stochastic process is a random function x(t). It is defined by a distribution functional P[x], equivalently by all of its moments, or by its characteristic functional.
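The defining formulas were images on the original slides; in standard notation they presumably read

\[ \langle x(t_1)\,x(t_2)\cdots x(t_n)\rangle \;=\; \int \mathcal{D}x\; x(t_1)\cdots x(t_n)\, P[x], \qquad
\Phi[k] \;=\; \Big\langle \exp\Big( i\int dt\, k(t)\, x(t) \Big) \Big\rangle , \]

so that either the full set of moments or the characteristic functional \(\Phi[k]\) specifies the process.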
Stochastic processes (2)
The cumulant generating functional is the logarithm of the characteristic functional; its second cumulant is the correlation function, and so on for the higher cumulants.
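The missing slide formulas, in one common convention (assumed here), are

\[ W[k] \;=\; \ln \Phi[k] \;=\; \sum_{n\ge 1} \frac{i^n}{n!} \int dt_1\cdots dt_n\; k(t_1)\cdots k(t_n)\,
\langle\langle x(t_1)\cdots x(t_n)\rangle\rangle , \]

where the second cumulant is the correlation function,

\[ \langle\langle x(t)\,x(t')\rangle\rangle \;=\; \langle x(t)\,x(t')\rangle - \langle x(t)\rangle\,\langle x(t')\rangle . \]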
Stochastic processes (3)
Gaussian process: no higher-order cumulants (all cumulants beyond the second vanish), so the process is fully specified by its mean and correlation function.
Conditional probabilities: the probability of x(t_1), ..., x(t_k), given x(t_{k+1}), ..., x(t_m).
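The conditional probability referred to here is the ratio of joint distributions,

\[ P\big(x(t_1),\dots,x(t_k)\,\big|\,x(t_{k+1}),\dots,x(t_m)\big)
 \;=\; \frac{P\big(x(t_1),\dots,x(t_m)\big)}{P\big(x(t_{k+1}),\dots,x(t_m)\big)} , \]

which for a Gaussian process is itself Gaussian.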
Wiener-Khinchin theorem
Fourier analyze x(t) and form its power spectrum. The Wiener-Khinchin theorem states that the power spectrum is the Fourier transform of the correlation function.
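The equations were images on the slides; for a zero-mean stationary process the statement presumably takes the standard form

\[ \tilde{x}_T(\omega) = \int_0^T dt\; e^{i\omega t}\, x(t), \qquad
S(\omega) = \lim_{T\to\infty} \frac{1}{T}\,\big\langle |\tilde{x}_T(\omega)|^2 \big\rangle
 \;=\; \int_{-\infty}^{\infty} d\tau\; e^{i\omega\tau}\, C(\tau), \qquad
C(\tau) = \langle x(t)\, x(t+\tau)\rangle . \]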
Markov processes
A Markov process carries no information about the future from past values earlier than the latest available one.
The general joint distribution is therefore obtained by iterating the single-step transition probability Q, where P(x(t_0)) is the initial distribution. Integrating this over x(t_{n-1}), ..., x(t_1) gives a relation involving only the endpoints; the case n = 2 is the Chapman-Kolmogorov equation.
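In standard notation (the slide equations are not reproduced in the transcript), the Markov property and the iterated joint distribution read

\[ P\big(x_n,t_n \,\big|\, x_{n-1},t_{n-1};\dots;x_0,t_0\big) \;=\; P\big(x_n,t_n \,\big|\, x_{n-1},t_{n-1}\big)
 \;\equiv\; Q\big(x_n,t_n \,\big|\, x_{n-1},t_{n-1}\big), \]
\[ P\big(x_n,t_n;\dots;x_0,t_0\big) \;=\; \prod_{j=1}^{n} Q\big(x_j,t_j \,\big|\, x_{j-1},t_{j-1}\big)\; P\big(x_0,t_0\big). \]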
Chapman-Kolmogorov equation
(holds for any intermediate time t)
Examples (propagators sketched below):
  • Wiener process (Brownian motion/random walk)
  • (cumulative) Poisson process
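In explicit form (assumed notation: diffusion constant D, Poisson rate λ), the equation and the two example propagators, each of which satisfies it, are

\[ P(x_3,t_3\,|\,x_1,t_1) \;=\; \int dx_2\; P(x_3,t_3\,|\,x_2,t_2)\, P(x_2,t_2\,|\,x_1,t_1), \qquad t_1 < t_2 < t_3 , \]
\[ \text{Wiener:}\quad P(x,t\,|\,x_0,t_0) \;=\; \frac{1}{\sqrt{4\pi D (t-t_0)}}\,
   \exp\!\left[ -\frac{(x-x_0)^2}{4D(t-t_0)} \right] , \]
\[ \text{Poisson:}\quad P(n,t\,|\,n_0,t_0) \;=\; e^{-\lambda (t-t_0)}\,
   \frac{[\lambda (t-t_0)]^{\,n-n_0}}{(n-n_0)!}, \qquad n \ge n_0 . \]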

Markov chains
Both t and x are discrete, and we assume stationarity (a time-independent transition matrix T). The elements of T lie between 0 and 1, and its columns sum to one (because they are probabilities). This gives an equation of motion for the distribution, whose formal solution is obtained by iterating T.
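Written out in the column-stochastic convention used later in the lecture (T_nm is the probability of a step m → n):

\[ 0 \le T_{nm} \le 1, \qquad \sum_n T_{nm} = 1, \]
\[ p_n(t+1) \;=\; \sum_m T_{nm}\, p_m(t) \quad\Longleftrightarrow\quad \mathbf{p}(t+1) = T\,\mathbf{p}(t),
   \qquad \mathbf{p}(t) = T^{\,t}\,\mathbf{p}(0). \]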
Markov chains (2): properties of T
T has a left eigenvector with all components equal to 1 (because the columns of T sum to 1). Its eigenvalue is 1.
The corresponding right eigenvector is the stationary distribution (the stationary state, because the eigenvalue is 1, so it is left unchanged by T).
All other right eigenvectors have components that sum to zero (because they must be orthogonal to the all-ones left eigenvector).
All other eigenvalues are less than 1 in magnitude.
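These properties can be checked numerically; a minimal sketch (the matrix entries are arbitrary, not from the slides):

    import numpy as np

    # Column-stochastic transition matrix: T[n, m] = probability of a step m -> n.
    T = np.array([[0.5, 0.2, 0.3],
                  [0.3, 0.7, 0.1],
                  [0.2, 0.1, 0.6]])
    assert np.allclose(T.sum(axis=0), 1.0)   # columns sum to 1 (they are probabilities)

    ones = np.ones(3)
    print(ones @ T)                          # all-ones left eigenvector: returns (1, 1, 1)

    vals, vecs = np.linalg.eig(T)            # right eigenvectors
    order = np.argsort(-np.abs(vals))
    vals, vecs = vals[order], vecs[:, order]

    p0 = np.real(vecs[:, 0]) / np.real(vecs[:, 0]).sum()
    print("stationary distribution:", p0)    # right eigenvector with eigenvalue 1
    print("other eigenvalues:", vals[1:])    # magnitudes < 1
    print("their component sums:", vecs[:, 1:].sum(axis=0))  # ~0: orthogonal to the all-ones vector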
Detailed balance
If there is a stationary distribution P0 whose components satisfy the detailed balance condition with T, one can prove convergence to P0 from any initial state, given ergodicity (any state can be reached from any other, and there are no cycles).
Define a rescaling by the square roots of the stationary components and make a similarity transformation of T. The resulting matrix R is symmetric, so it has a complete set of eigenvectors, and its eigenvalues λ_j are the same as those of T.
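In explicit (assumed) notation, writing p0_n for the stationary components:

\[ \text{detailed balance:}\qquad T_{nm}\, p^0_m \;=\; T_{mn}\, p^0_n \quad\text{for all } n, m, \]
\[ R_{nm} \;\equiv\; (p^0_n)^{-1/2}\, T_{nm}\, (p^0_m)^{1/2} \;=\; R_{mn} . \]

A similarity transformation preserves the spectrum, so the symmetric matrix R and T share the eigenvalues λ_j.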
Detailed balance (2)
The right eigenvectors of T follow from the eigenvectors of R by undoing the similarity transformation.
Now look at the evolution of an arbitrary initial distribution, expanded in these eigenvectors: since every eigenvalue other than the one equal to 1 has magnitude less than 1, all terms except the stationary one decay, and the distribution converges to P0.
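Spelled out in the same (assumed) notation: if R ψ^(j) = λ_j ψ^(j) with orthonormal ψ^(j), the right eigenvectors of T are φ^(j)_n = (p^0_n)^{1/2} ψ^(j)_n, and in particular φ^(0)_n = p^0_n (since ψ^(0)_n = (p^0_n)^{1/2} and λ_0 = 1). Expanding an arbitrary initial distribution then gives

\[ p_n(t) \;=\; \big(T^{\,t}\,\mathbf{p}(0)\big)_n \;=\; \sum_j c_j\, \lambda_j^{\,t}\, \phi^{(j)}_n
 \;\xrightarrow[\;t\to\infty\;]{}\; c_0\, p^0_n \;=\; p^0_n , \]

since |λ_j| < 1 for j ≠ 0 and normalization fixes c_0 = 1.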
Monte Carlo
An example of detailed balance.
Ising model: binary spins S_i(t) = ±1.
Dynamics: at every time step,
  (1) choose a spin i at random;
  (2) compute the field of its neighbors, h_i(t) = Σ_j J_ij S_j(t), with J_ij = J_ji;
  (3) set S_i(t + Δt) = ±1 with the heat-bath probability (this equilibrates S_i, given the current values of the other S's); see the sketch below.
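A minimal sketch of this heat-bath dynamics, assuming the standard acceptance probability P(S_i = +1) = 1/(1 + e^{-2βh_i}) (the slide's own formula is not in the transcript) and an arbitrary symmetric coupling matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    def heat_bath_sweep(S, J, beta):
        """One sweep of heat-bath updates on Ising spins S_i = +/-1."""
        N = len(S)
        for _ in range(N):
            i = rng.integers(N)                            # (1) choose a spin at random
            h = J[i] @ S                                   # (2) field of neighbors, h_i = sum_j J_ij S_j
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # (3) heat-bath probability of S_i = +1
            S[i] = 1 if rng.random() < p_up else -1
        return S

    # Example: small random symmetric couplings with no self-coupling.
    N = 10
    J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    S = rng.choice([-1, 1], size=N)
    for _ in range(100):
        S = heat_bath_sweep(S, J, beta=1.0)
    print(S)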
Monte Carlo (2)
In the language of Markov chains, the states n are the spin configurations {S_i}, i.e. the corners of an N-dimensional hypercube.
Single-spin flips: transitions occur only between neighboring points on the hypercube.
The nonzero T matrix elements connect configurations differing by a single spin flip (plus the diagonal, no-flip elements); all other T_mn = 0. Note that each column of T still sums to one.
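For the heat-bath dynamics above, the nonzero elements presumably take the form (assumed notation; m denotes configuration n with spin i flipped):

\[ T_{mn} \;=\; \frac{1}{N}\,\frac{1}{1 + e^{\,2\beta h_i S_i}} \quad (m \ne n), \qquad
   T_{nn} \;=\; 1 - \sum_{m \ne n} T_{mn}, \qquad \sum_m T_{mn} = 1 , \]

with S_i and h_i = Σ_j J_ij S_j evaluated in configuration n.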
Monte Carlo (3)
T satisfies detailed balance, where p0 is the Gibbs distribution.
After many Monte Carlo steps the chain converges to p0: the S's sample the Gibbs distribution.
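In formulas (assumed notation, with configuration energy E_n = -½ Σ_ij J_ij S_i S_j):

\[ T_{mn}\, p^0_n \;=\; T_{nm}\, p^0_m, \qquad
   p^0_n \;=\; \frac{e^{-\beta E_n}}{Z}, \qquad Z \;=\; \sum_n e^{-\beta E_n} . \]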
Monte Carlo (3): Metropolis version
The foregoing was for heat-bath MC. Another possibility is the Metropolis algorithm:
  • If h_i S_i < 0, flip: S_i(t + Δt) = -S_i(t).
  • If h_i S_i > 0, flip to S_i(t + Δt) = -S_i(t) with probability exp(-h_i S_i).
Thus the two cases combine into a single acceptance rule, and in either case T satisfies detailed balance with the Gibbs distribution (a sketch follows below).
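A minimal sketch of the Metropolis update, assuming the standard acceptance rule min(1, e^{-βΔE}) with ΔE = 2 h_i S_i for a single-spin flip (the slides' exponent may absorb β and the factor of 2 into their units):

    import numpy as np

    rng = np.random.default_rng(1)

    def metropolis_sweep(S, J, beta):
        """One sweep of single-spin-flip Metropolis updates on Ising spins S_i = +/-1."""
        N = len(S)
        for _ in range(N):
            i = rng.integers(N)          # choose a spin at random
            h = J[i] @ S                 # field of neighbors, h_i = sum_j J_ij S_j
            dE = 2.0 * h * S[i]          # energy change if S_i were flipped
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                S[i] = -S[i]             # accept the flip
        return S

This drops in for heat_bath_sweep in the earlier example; both dynamics satisfy detailed balance with the same Gibbs distribution and converge to it.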
Continuous-time limit: master equation
For a Markov chain, let the time step become infinitesimal. The equation of motion then becomes a differential equation, the master equation. In components, and using the normalization of the columns of T, it takes a gain-loss form whose coefficients (expected to be non-negative for m ≠ n) make up the transition rate matrix.
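Written out (assumed notation, writing W for the rate matrix): for a small time step Δt, T = 1 + Δt W + O(Δt²), so

\[ \frac{d\mathbf{p}}{dt} \;=\; W\,\mathbf{p}, \qquad
   \frac{dp_n}{dt} \;=\; \sum_{m \ne n} \big[\, W_{nm}\, p_m \;-\; W_{mn}\, p_n \,\big] , \]

using the normalization of the columns of T, Σ_n T_nm = 1 ⟹ Σ_n W_nm = 0 ⟹ W_mm = -Σ_{n≠m} W_nm, and with W_nm ≥ 0 expected for m ≠ n.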