1
EE 551/451, Fall 2006: Communication Systems
  • Zhu Han
  • Department of Electrical and Computer Engineering
  • Class 15
  • Oct. 10th, 2006

2
Outline
  • Homework
  • Exam format
  • Second half schedule
  • Chapter 7
  • Chapter 16
  • Chapter 8
  • Chapter 9
  • Standards
  • Estimation and detection (this class): Chapter 14, not required
  • Estimation theory, methods, and examples
  • Detection theory, methods, and examples
  • Information theory (next Tuesday): Chapter 15, not required

3
Estimation Theory
  • Consider a linear process: y = Hq + n (see the sketch after this list)
  • y: observed data
  • q: sent information
  • n: additive noise
  • If q is known but H is unknown, estimation is the problem of finding the statistically optimal H, given y, q, and knowledge of the noise properties.
  • If H is known, detection is the problem of finding the most likely sent information q, given y, H, and knowledge of the noise properties.
  • In practical systems, the two steps are conducted iteratively: track the channel changes, then transmit data.
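A minimal numpy sketch of the model above; the dimensions, parameter values, and noise level are assumptions chosen for illustration, not from the slides.

    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.standard_normal((100, 3))    # known model matrix (assumed 100 x 3)
    q = np.array([1.0, -2.0, 0.5])       # sent information to be estimated
    n = 0.1 * rng.standard_normal(100)   # additive white Gaussian noise
    y = H @ q + n                        # observed data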

4
Different Approaches for Estimation
  • Minimum variance unbiased estimators
  • Subspace estimators
  • Least Squares (has no statistical basis)
  • Maximum-likelihood (uses knowledge of the noise PDF)
  • Maximum a posteriori (uses prior information about q)
5
Least Squares Estimator
  • Least Squares: q_LS = argmin_q ||y - Hq||^2
  • Natural estimator: we want the solution to match the observation
  • Does not use any information about the noise
  • There is a simple closed-form solution (a.k.a. the pseudo-inverse): q_LS = (H^T H)^-1 H^T y, as computed in the sketch below
  • What if we know something about the noise? Say we know Pr(n)
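Continuing the variables from the slide 3 sketch, the pseudo-inverse solution in one line (np.linalg.lstsq is the numerically safer route in practice):

    q_ls = np.linalg.solve(H.T @ H, H.T @ y)            # (H^T H)^-1 H^T y
    # better conditioned alternative: np.linalg.lstsq(H, y, rcond=None)[0]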

6
Maximum Likelihood Estimator
  • Simple idea: we want to maximize Pr(y|q)
  • Write Pr(n) ∝ e^-L(n) with n = y - Hq, so Pr(n) = Pr(y|q) ∝ e^-L(y, q)
  • L(y, q) is called the likelihood function
  • For white Gaussian n, Pr(n) ∝ e^(-||n||^2 / 2σ^2) and L(y, q) = ||y - Hq||^2 / 2σ^2
  • q_ML = argmax Pr(y|q) = argmin L(y, q) = argmin ||y - Hq||^2 / 2σ^2
  • This is the same as Least Squares! (checked in the snippet below)
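Because the 1/2σ^2 factor does not change the minimizer, the white-Gaussian ML estimate coincides with q_LS; a quick check, continuing the snippets above:

    q_ml_white, *_ = np.linalg.lstsq(H, y, rcond=None)  # argmin ||y - Hq||^2
    print(np.allclose(q_ml_white, q_ls))                # True: ML == LS here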

7
Maximum Likelihood Estimator
  • But if the noise is jointly Gaussian with covariance matrix C
  • Recall C = E(nn^T). Then Pr(n) ∝ exp(-1/2 n^T C^-1 n) and L(y|q) = 1/2 (y - Hq)^T C^-1 (y - Hq)
  • q_ML = argmin 1/2 (y - Hq)^T C^-1 (y - Hq)
  • This also has a closed-form solution: q_ML = (H^T C^-1 H)^-1 H^T C^-1 y (see the sketch below)
  • If n is not Gaussian, ML estimators become complicated and non-linear
  • Fortunately, channel noise is Gaussian in most systems
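A sketch of the colored-noise (generalized least squares) estimate, continuing the earlier variables; the AR(1)-style covariance C is an illustrative assumption.

    idx = np.arange(100)
    C = 0.9 ** np.abs(idx[:, None] - idx[None, :])        # assumed noise covariance
    q_ml = np.linalg.solve(H.T @ np.linalg.solve(C, H),   # (H^T C^-1 H)^-1 ...
                           H.T @ np.linalg.solve(C, y))   # ... H^T C^-1 y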

8
Estimation example - Denoising
  • Suppose we have a noisy signal y and wish to obtain the noiseless signal x, where y = x + n
  • Can we use estimation theory to find x?
  • Try H = I, q = x in the linear model
  • Both the LS and ML estimators simply give x = y!
  • ⇒ we need a more powerful model
  • Suppose x can be approximated by a polynomial, i.e. a mixture of the first p powers of r: x = Σ_{i=0}^p a_i r^i

9
Example: Denoising
  • Least Squares estimate: q_LS = (H^T H)^-1 H^T y, with the polynomial model x = Σ_{i=0}^p a_i r^i (see the sketch below)
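A self-contained sketch of the polynomial least-squares fit; the clean signal, the degree p, and the noise level are illustrative assumptions. H is the Vandermonde matrix whose columns are the powers of r, and the unknowns are the coefficients a_i.

    import numpy as np

    rng = np.random.default_rng(1)
    r = np.linspace(0.0, 1.0, 200)
    x_true = 1.0 + 2.0 * r - 3.0 * r**2                  # hypothetical clean signal
    y_noisy = x_true + 0.1 * rng.standard_normal(r.size)
    p = 3                                                # assumed polynomial degree
    H_poly = np.vander(r, p + 1, increasing=True)        # columns: 1, r, ..., r^p
    a_ls = np.linalg.solve(H_poly.T @ H_poly, H_poly.T @ y_noisy)
    x_hat = H_poly @ a_ls                                # denoised estimate of x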

10
Maximum a Posteriori (MAP) Estimate
  • This is an example of using prior information about the signal
  • Priors are generally expressed in the form of a PDF Pr(x)
  • Once the likelihood L(x) and the prior are known, we have complete statistical knowledge
  • LS/ML are suboptimal in the presence of a prior
  • MAP (aka Bayesian) estimates are optimal
  • Bayes' rule: posterior ∝ likelihood × prior
11
Maximum a Posteriori (Bayesian) Estimate
  • Consider the class of linear systems y = Hx + n
  • Bayesian methods maximize the posterior probability: Pr(x|y) ∝ Pr(y|x) · Pr(x)
  • Pr(y|x) (likelihood function) ∝ exp(-||y - Hx||^2)
  • Pr(x) (prior PDF) ∝ exp(-G(x))
  • Non-Bayesian: maximize only the likelihood, x_est = argmin ||y - Hx||^2
  • Bayesian: x_est = argmin ||y - Hx||^2 + G(x), where G(x) is obtained from the prior distribution of x
  • If G(x) = ||Γx||^2 ⇒ Tikhonov regularization (see the sketch below)
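A minimal Tikhonov sketch for the denoising model of slide 8 (y = x + n, H = I), continuing y_noisy from the polynomial snippet; G(x) = λ||Dx||^2 with a first-difference D and an assumed weight λ.

    N = y_noisy.size
    D = np.diff(np.eye(N), axis=0)     # (N-1) x N first-difference operator
    lam = 10.0                         # assumed regularization weight lambda
    # argmin ||y - x||^2 + lam * ||D x||^2  =>  x = (I + lam D^T D)^-1 y
    x_map = np.linalg.solve(np.eye(N) + lam * D.T @ D, y_noisy)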

12
Expectation and Maximization (EM)
  • The Expectation-Maximization (EM) algorithm alternates between an expectation (E) step, which computes the expected likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes maximum-likelihood estimates of the parameters by maximizing the expected likelihood found in the E step. The parameters found in the M step are then used to begin another E step, and the process repeats.
  • E-step: estimate the unobserved event (which Gaussian generated each sample), conditioned on the observations, using the parameter values from the last maximization step.
  • M-step: maximize the expected log-likelihood of the joint event (see the sketch below).
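A minimal, self-contained EM sketch for a two-component 1-D Gaussian mixture with unit variances; the sample sizes, true means, and initial guesses are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
    mu = np.array([-1.0, 1.0])   # initial mean guesses
    w = 0.5                      # mixing weight of component 0
    for _ in range(50):
        # E-step: responsibility of component 0 for each sample (unit variances)
        p0 = w * np.exp(-0.5 * (data - mu[0]) ** 2)
        p1 = (1.0 - w) * np.exp(-0.5 * (data - mu[1]) ** 2)
        r0 = p0 / (p0 + p1)
        # M-step: re-estimate means and weight from the expected memberships
        mu = np.array([np.sum(r0 * data) / np.sum(r0),
                       np.sum((1.0 - r0) * data) / np.sum(1.0 - r0)])
        w = r0.mean()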

13
Minimum-variance unbiased estimator
  • Biased and unbiased estimators
  • A minimum-variance unbiased estimator is an unbiased estimator whose variance is minimized for all values of the parameters.
  • The Cramer-Rao Lower Bound (CRLB) sets a lower bound on the variance of any unbiased estimator (checked numerically below).
  • A biased estimator can still outperform an unbiased one in terms of variance.
  • Subspace methods
  • MUSIC
  • ESPRIT
  • Widely used in radar
  • Helicopter and weapon detection (from features)
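A quick Monte Carlo check of the CRLB for the textbook problem of estimating a DC level A in white Gaussian noise, where the bound is σ^2/N and the sample mean (unbiased) attains it; all values are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    A, sigma, N, trials = 2.0, 1.0, 50, 20000
    estimates = rng.normal(A, sigma, (trials, N)).mean(axis=1)  # sample means
    print(estimates.var(), sigma**2 / N)    # empirical variance vs. CRLB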

14
What is Detection
  • Deciding whether, and when, an event occurs
  • a.k.a. decision theory, hypothesis testing
  • Presence/absence of a signal
  • Radar
  • Received signal is 0 or 1
  • Stock goes high or not
  • Criminal is convicted or set free
  • Measures whether a statistically significant change has occurred

15
Detection
  • Spot the Money

16
Hypothesis Testing with Matched Filter
  • Let the received signal be y(t) and the signal model be h(t)
  • Hypothesis testing:
  • H0: y(t) = n(t) (no signal)
  • H1: y(t) = h(t) + n(t) (signal)
  • The optimal decision is given by the likelihood ratio test (Neyman-Pearson theorem):
  • Select H1 if L(y) = Pr(y|H1)/Pr(y|H0) > γ; otherwise select H0 (see the sketch below)
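A self-contained sketch for a known pulse h in white Gaussian noise, where the log-likelihood ratio is monotone in the correlation y^T h, so the test reduces to thresholding the correlator output; the pulse shape, noise level, and threshold are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    h = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 100))  # hypothetical pulse
    sigma_n = 1.0
    y_rx = h + sigma_n * rng.standard_normal(h.size)        # one H1 trial
    stat = y_rx @ h                                         # correlator statistic
    gamma = 0.5 * (h @ h)                                   # example threshold
    decision = 'H1' if stat > gamma else 'H0'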

17
Signal detection paradigm
  • Signal trials
  • Noise trials

18
Signal Detection
19
Receiver operating characteristic (ROC) curve
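Continuing the correlator example, the statistic is N(0, σ^2 ||h||^2) under H0 and N(||h||^2, σ^2 ||h||^2) under H1, so each threshold maps to one (Pfa, Pd) point on the ROC; a sketch using the stdlib Q-function.

    import math

    def Q(x):                                   # Gaussian tail probability
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    d = math.sqrt(h @ h) / sigma_n              # detection index ||h|| / sigma
    for z in [-2.0, 0.0, 1.0, 2.0, 4.0, 6.0]:   # normalized thresholds
        print(f'Pfa={Q(z):.3f}  Pd={Q(z - d):.3f}')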
20
Matched Filters
  • The optimal linear filter for maximizing the signal-to-noise ratio (SNR) at the sampling time in the presence of additive stochastic noise
  • Given a transmitter pulse shape g(t) of duration T, the matched filter is h_opt(t) = k g(T - t) for any constant k (see the sketch below)
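A self-contained matched-filter sketch: time-reversing the pulse implements h_opt(t) = k g(T - t) with k = 1, and the convolution output peaks at the pulse end time; the rectangular pulse, padding, and noise level are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    g = np.ones(20)                               # hypothetical pulse, duration T
    rx = np.concatenate([np.zeros(30), g, np.zeros(30)])
    rx += 0.5 * rng.standard_normal(rx.size)      # received pulse in noise
    h_mf = g[::-1]                                # h_opt(t) = k g(T - t), k = 1
    out = np.convolve(rx, h_mf)                   # matched-filter output
    t_peak = int(np.argmax(out))                  # sample of maximum SNR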

21
Questions?