Digital Communication Systems 2012
1
CHAPTER 5 SIGNAL SPACE ANALYSIS
2
Outline
  • 5.1 Introduction
  • 5.2 Geometric Representation of Signals
  • Gram-Schmidt Orthogonalization Procedure
  • 5.3 Conversion of the AWGN into a Vector Channel
  • 5.4 Maximum Likelihood Decoding
  • 5.5 Correlation Receiver
  • 5.6 Probability of Error

3
Introduction: the Model
  • We consider the following model of a generic
    digital transmission system
  • A message source transmits 1 symbol every T sec
  • Symbols belong to an alphabet of M symbols:
    m1, m2, ..., mM
  • Binary symbols are 0 and 1
  • Quaternary PCM symbols are 00, 01, 10, 11

4
Transmitter Side
  • Symbol generation (message) is probabilistic,
    with a priori probabilities p1, p2, ..., pM, or
  • Symbols are equally likely
  • So, the probability that symbol mi will be emitted is
    pi = P(mi) = 1/M,  i = 1, 2, ..., M

5
  • Transmitter takes the symbol (data) mi (digital
    message source output) and encodes it into a
    distinct signal si(t).
  • The signal si(t) occupies the whole slot T
    allotted to symbol mi.
  • si(t) is a real-valued energy signal (???)

6
  • si(t) is a real-valued energy signal, that is, a
    signal with finite energy:
    Ei = ∫0T si²(t) dt < ∞
7
Channel Assumptions
  • Linear, with bandwidth wide enough to accommodate
    the signal si(t) with no or negligible distortion
  • Channel noise w(t) is a zero-mean white Gaussian
    noise process (AWGN)
  • Noise is additive
  • The received signal may be expressed as
    x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, ..., M

8
Receiver Side
  • Observes the received signal x(t) for a duration
    of T sec
  • Makes an estimate of the transmitted signal si(t)
    (equivalently, of the symbol mi)
  • The process is statistical: the presence of noise
    causes errors
  • So, the receiver has to be designed to minimize
    the average probability of error:
    Pe = Σi pi P(error | mi sent)
  where pi is the a priori probability of the ith
  symbol and P(error | mi sent) is the conditional
  error probability given the ith symbol was sent
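As a numerical illustration of this definition, a minimal Python sketch; the probabilities are hypothetical values, not taken from the slides.

```python
# Hypothetical numbers: average error probability for M = 4 symbols.
p = [0.25, 0.25, 0.25, 0.25]                # a priori probabilities p_i
p_err = [1e-3, 2e-3, 2e-3, 1e-3]            # assumed P(error | m_i sent)

Pe = sum(pi * pe for pi, pe in zip(p, p_err))
print(f"Average probability of error Pe = {Pe:.2e}")   # 1.50e-03
```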
9
Outline
  • 5.1 Introduction
  • 5.2 Geometric Representation of Signals
  • Gram-Schmidt Orthogonalization Procedure
  • 5.3 Conversion of the AWGN into a Vector Channel
  • 5.4 Maximum Likelihood Decoding
  • 5.5 Correlation Receiver
  • 5.6 Probability of Error

10
5.2. Geometric Representation of Signals
  • Objective: to represent any set of M energy
    signals si(t) as linear combinations of N
    orthonormal basis functions, where N ≤ M
  • Real-valued energy signals s1(t), s2(t), ..., sM(t),
    each of duration T sec, are expanded as (5.5):
    si(t) = Σj=1..N sij φj(t),  0 ≤ t ≤ T,  i = 1, 2, ..., M
  where si(t) is the energy signal, φj(t) are the
  orthonormal basis functions, and sij are the
  coefficients
11
  • Coefficients:
    sij = ∫0T si(t) φj(t) dt,  j = 1, 2, ..., N
  • Real-valued basis functions, orthonormal:
    ∫0T φi(t) φj(t) dt = 1 if i = j, 0 otherwise

12
  • The set of coefficients can be viewed as an
    N-dimensional vector, denoted by
    si = [si1, si2, ..., siN]T
  • It bears a one-to-one relationship with the
    transmitted signal si(t)

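To make the synthesizer/analyzer pair of Fig. 5.3 concrete, a minimal discrete-time Python sketch; the sine/cosine basis, T, fs, and the coefficient values are assumptions for illustration, not taken from the slides.

```python
import numpy as np

# Synthesize s_i(t) from its coefficient vector, then recover the
# coefficients by correlation (the analyzer of Fig. 5.3b).
T, fs = 1.0, 1000                      # symbol duration and sample rate (assumed)
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

phi1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # unit-energy basis function
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)   # unit energy, orthogonal to phi1

s_i = np.array([3.0, -1.0])                 # coefficient vector (si1, si2)
si_t = s_i[0] * phi1 + s_i[1] * phi2        # synthesizer: si(t) = sum_j sij phij(t)

# Analyzer: sij = integral over [0, T] of si(t) phij(t) dt
s_hat = np.array([np.sum(si_t * phi1), np.sum(si_t * phi2)]) * dt
print(s_hat)                                # ~ [ 3. -1.]
```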
13
Figure 5.3 (a) Synthesizer for generating the
signal si(t). (b) Analyzer for generating the set
of signal vectors {si}.
14
So,
  • Each signal in the set {si(t)} is completely
    determined by the vector of its coefficients si

15
Finally,
  • The signal vector si concept can be extended to an
    N-dimensional Euclidean space (2D, 3D, etc.)
  • Provides the mathematical basis for the geometric
    representation of energy signals that is used in
    noise analysis
  • Allows definition of
  • Length (norm) of vectors
  • Angles between vectors
  • Squared length (inner product of si with itself):
    ||si||² = siT si = Σj=1..N sij²
  where T denotes matrix transposition
16
Figure 5.4 Illustrating the geometric
representation of signals for the case when N = 2
and M = 3 (two-dimensional space, three signals).
17
Also,
What is the relation between the vector
representation of a signal and its energy value?
  • Start with the definition of the energy of a
    signal (5.10):
    Ei = ∫0T si²(t) dt
  • where si(t) is expanded as in (5.5)

18
  • After substituting (5.5):
    Ei = ∫0T (Σj sij φj(t)) (Σk sik φk(t)) dt
  • After regrouping:
    Ei = Σj Σk sij sik ∫0T φj(t) φk(t) dt
  • φj(t) are orthonormal, so finally we have
    Ei = Σj=1..N sij² = ||si||²

The energy of a signal is equal to the squared
length of its vector
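A quick numerical check of this result, reusing the assumed sine/cosine basis from the earlier sketch:

```python
import numpy as np

# Check that the waveform energy equals the squared vector length.
T, fs = 1.0, 1000
t = np.arange(0, T, 1 / fs); dt = 1 / fs
phi = np.vstack([np.sqrt(2 / T) * np.sin(2 * np.pi * t / T),
                 np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)])  # orthonormal rows

s_i = np.array([3.0, -1.0])
si_t = s_i @ phi                       # si(t) = sum_j sij phij(t)

print(np.sum(si_t ** 2) * dt)          # waveform energy: ~10.0
print(np.sum(s_i ** 2))                # squared vector length: 10.0
```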
19
Formulas for two signals
  • Assume we have a pair of signals si(t) and
    sk(t), each represented by its vector
  • Then the inner product over [0, T] satisfies
    ∫0T si(t) sk(t) dt = siT sk

The inner product of the signals is equal to the
inner product of their vector representations;
it is invariant to the selection of basis functions
20
Euclidean Distance
  • The Euclidean distance between two points
    represented by the signal vectors si and sk is
    ||si − sk||, and its squared value is given by
    ||si − sk||² = Σj=1..N (sij − skj)²
              = ∫0T (si(t) − sk(t))² dt

21
Angle between two signals
  • The cosine of the angle θik between two signal
    vectors si and sk is equal to the inner product
    of these two vectors, divided by the product of
    their norms:
    cos θik = siT sk / (||si|| ||sk||)
  • So the two signal vectors are orthogonal if their
    inner product siT sk is zero (cos θik = 0)

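A short Python sketch of both definitions; the two signal vectors are assumed example values:

```python
import numpy as np

# Euclidean distance and angle between two signal vectors (N = 2, assumed).
s_i = np.array([3.0, -1.0])
s_k = np.array([1.0, 2.0])

d_ik = np.linalg.norm(s_i - s_k)                  # ||s_i - s_k||
cos_theta = (s_i @ s_k) / (np.linalg.norm(s_i) * np.linalg.norm(s_k))

print(d_ik)        # sqrt(4 + 9) ≈ 3.606
print(cos_theta)   # (3 - 2) / (sqrt(10) * sqrt(5)) ≈ 0.1414
```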
22
Schwarz Inequality
  • Defined as
    (∫ s1(t) s2(t) dt)² ≤ (∫ s1²(t) dt)(∫ s2²(t) dt)
    (integrals taken over (−∞, ∞)), with equality if
    and only if s2(t) = c s1(t) for a constant c
  • accepted here without proof

23
Outline
  • 5.1 Introduction
  • 5.2 Geometric Representation of Signals
  • Gram-Schmidt Orthogonalization Procedure
  • 5.3 Conversion of the AWGN into a Vector Channel
  • 5.4 Maximum Likelihood Decoding
  • 5.5 Correlation Receiver
  • 5.6 Probability of Error

24
Gram-Schmidt Orthogonalization Procedure
Assume a set of M energy signals denoted by
s1(t), s2(t), ..., sM(t).
  1. Define the first basis function starting with s1
    as (where E1 is the energy of the signal) (based
    on 5.12):
    φ1(t) = s1(t) / √E1
  2. Then express s1(t) using the basis function and
    an energy-related coefficient s11 as
    s1(t) = √E1 φ1(t) = s11 φ1(t)
  3. Later, using s2, define the coefficient s21 as
    s21 = ∫0T s2(t) φ1(t) dt

25
  • If we introduce the intermediate function g2 as
    g2(t) = s2(t) − s21 φ1(t)        (orthogonal to φ1(t))
  • We can define the second basis function φ2(t) as
    φ2(t) = g2(t) / √(∫0T g2²(t) dt)
  • Which, after substitution of g2(t) using s1(t) and
    s2(t), becomes (look at 5.23)
    φ2(t) = (s2(t) − s21 φ1(t)) / √(E2 − s21²)
  • Note that φ1(t) and φ2(t) are orthogonal, which
    means
    ∫0T φ1(t) φ2(t) dt = 0
26
And so on for N-dimensional space,
  • In general a basis function can be defined using
    the following formula:
    gi(t) = si(t) − Σj=1..i−1 sij φj(t)
  • where the coefficients can be defined using
    sij = ∫0T si(t) φj(t) dt,  j = 1, 2, ..., i−1

27
Special case
  • For the special case of i = 1, gi(t) reduces to
    si(t).

General case
  • Given a function gi(t), we can define a set of
    basis functions, which form an orthonormal set, as
    φi(t) = gi(t) / √(∫0T gi²(t) dt),  i = 1, 2, ..., N

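A minimal Python sketch of the procedure on sampled waveforms; the three test signals are assumptions, chosen so that M = 3 signals span only N = 2 dimensions:

```python
import numpy as np

# Gram-Schmidt on sampled waveforms: each row of `signals` is one s_i(t).
def gram_schmidt(signals, dt, tol=1e-9):
    """Return orthonormal basis functions phi_j(t) as rows."""
    basis = []
    for s in signals:
        g = s.copy()
        for phi in basis:
            g -= (np.sum(s * phi) * dt) * phi   # subtract s_ij * phi_j(t)
        energy = np.sum(g ** 2) * dt
        if energy > tol:                        # skip linearly dependent signals
            basis.append(g / np.sqrt(energy))   # normalize to unit energy
    return np.array(basis)

# Hypothetical test: three signals that span only two dimensions.
T, fs = 1.0, 1000
t = np.arange(0, T, 1 / fs); dt = 1 / fs
s1 = np.where(t < 0.5, 1.0, 0.0)
s2 = np.ones_like(t)
s3 = s1 - 2 * s2                       # linear combination -> no new basis function
phi = gram_schmidt(np.vstack([s1, s2, s3]), dt)
print(phi.shape[0])                    # N = 2 basis functions for M = 3 signals
print(np.round(phi @ phi.T * dt, 6))   # Gram matrix ~ identity (orthonormal)
```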
28
Outline
  • 5.1 Introduction
  • 5.2 Geometric Representation of Signals
  • Gram-Schmidt Orthogonalization Procedure
  • 5.3 Conversion of the AWGN into a Vector Channel
  • 5.4 Maximum Likelihood Decoding
  • 5.5 Correlation Receiver
  • 5.6 Probability of Error

29
Conversion of the Continuous AWGN Channel into a
Vector Channel
  • Suppose the input is not just any signal si(t), but
    specifically the signal at the receiver side,
    defined in accordance with an AWGN channel:
    x(t) = si(t) + w(t),  0 ≤ t ≤ T
  • So the output of the jth correlator (Fig. 5.3b) can
    be defined as
    xj = ∫0T x(t) φj(t) dt = sij + wj,  j = 1, 2, ..., N

30
  where sij is the deterministic quantity contributed
  by the transmitted signal si(t), and wj is the sample
  value of the random variable Wj due to the noise
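A Python sketch of one correlator branch; the basis function, N0, and the discrete white-noise model (sample variance (N0/2)·fs) are illustrative assumptions:

```python
import numpy as np

# One correlator branch: x_j = s_ij + w_j. The deterministic part is the
# signal coefficient; the random part is the noise projected onto phi_j.
rng = np.random.default_rng(0)
T, fs = 1.0, 1000
t = np.arange(0, T, 1 / fs); dt = 1 / fs
phi1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # assumed unit-energy basis

s_i1 = 3.0                                   # transmitted coefficient (assumed)
N0 = 0.1                                     # noise PSD parameter (assumed)
w = rng.normal(0.0, np.sqrt(N0 / 2 * fs), t.size)   # white Gaussian noise samples

x_t = s_i1 * phi1 + w                        # received waveform x(t) = si(t) + w(t)
x1 = np.sum(x_t * phi1) * dt                 # correlator: x1 = ∫ x(t) phi1(t) dt
print(x1)                                    # ≈ 3.0 plus zero-mean noise w1
```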
31
Now,
  • Consider a random process X′(t), with sample
    function x′(t), which is related to the received
    signal x(t) as follows:
    x′(t) = x(t) − Σj=1..N xj φj(t)
  • Using 5.28, 5.29 and 5.30 and the expansion 5.5
    we get
    x′(t) = w(t) − Σj=1..N wj φj(t) = w′(t)

which means that the sample function x′(t)
depends only on the channel noise!
32
  • The received signal can be expressed as
    x(t) = Σj=1..N xj φj(t) + x′(t),  0 ≤ t ≤ T

NOTE This is an expansion similar to the one in
5.5, but it is random, due to the additive noise.
33
Statistical Characterization
  • The received signal (output of the correlator of
    Fig. 5.3b) is a random signal. To describe it we
    need to use statistical methods: mean and
    variance.
  • The assumptions are
  • X(t) denotes a random process, a sample function
    of which is represented by the received signal
    x(t).
  • Xj denotes a random variable whose sample value
    is represented by the correlator output xj,
    j = 1, 2, ..., N.
  • We have assumed AWGN, so the noise is Gaussian,
    so X(t) is a Gaussian process and, being a
    Gaussian RV, Xj is described fully by its mean
    value and variance.

34
Mean Value
  • Let Wj denote a random variable, represented by
    its sample value wj, produced by the jth
    correlator in response to the Gaussian noise
    component w(t).
  • It has zero mean (by definition of the AWGN
    model):
    E[Wj] = 0
  • Then the mean of Xj depends only on sij:
    μXj = E[Xj] = E[sij + Wj] = sij + E[Wj] = sij

35
Variance
  • Starting from the definition, we substitute using
    5.29 and 5.31:
    σ²Xj = Var[Xj] = E[(Xj − sij)²] = E[Wj²]
        = ∫0T ∫0T φj(t) φj(u) RW(t, u) dt du
  where RW(t, u) is the autocorrelation function of
  the noise process
36
  • Because the noise is stationary, with a constant
    power spectral density, it can be expressed as
    RW(t, u) = (N0/2) δ(t − u)
  • After substitution, for the variance we get
    σ²Xj = (N0/2) ∫0T ∫0T φj(t) φj(u) δ(t − u) dt du
        = (N0/2) ∫0T φj²(t) dt
  • And since φj(t) has unit energy, for the variance
    we finally have
    σ²Xj = N0/2 for all j
  • Correlator outputs, denoted by Xj, have variance
    equal to the power spectral density N0/2 of the
    noise process W(t).

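A Monte Carlo sketch of this result, under the same assumed discrete white-noise model as before; it estimates the mean and variance of the projected noise w1:

```python
import numpy as np

# Monte Carlo estimate of the mean and variance of the projected noise
# w1 = ∫ w(t) phi1(t) dt, under the assumed discrete white-noise model.
rng = np.random.default_rng(1)
T, fs, N0 = 1.0, 1000, 0.1
t = np.arange(0, T, 1 / fs); dt = 1 / fs
phi1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # unit-energy basis function

trials = 5000
w = rng.normal(0.0, np.sqrt(N0 / 2 * fs), (trials, t.size))
w1 = (w * phi1).sum(axis=1) * dt             # one correlator output per trial
print(w1.mean())                             # ≈ 0        (zero mean)
print(w1.var())                              # ≈ N0/2 = 0.05
```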
37
Properties (without proof)
  • Xj are mutually uncorrelated:
    Cov[Xj Xk] = 0,  j ≠ k
  • Xj are statistically independent (follows from
    the above because the Xj are Gaussian)
  • and for a memoryless channel the following
    equation is true:
    fX(x | mi) = Πj=1..N fXj(xj | mi),  i = 1, 2, ..., M

38
  • Define (construct) a vector X of N random
    variables, X1, X2, ..., XN, whose elements are
    independent Gaussian RVs with mean values sij
    (the deterministic part of the correlator output,
    defined by the transmitted signal) and variance
    equal to N0/2 (the random part, the noise added
    by the channel).
  • Then X1, X2, ..., XN, the elements of X, are
    statistically independent.
  • So, we can express the conditional probability of
    X, given si(t) (correspondingly symbol mi), as a
    product of the conditional density functions fXj
    of its individual elements.
  • NOTE This is equal to finding an expression of
    the probability of a received symbol given that a
    specific symbol was sent, assuming a memoryless
    channel

39
  • That is (5.44):
    fX(x | mi) = Πj=1..N fXj(xj | mi),  i = 1, 2, ..., M
  • where the vector x and the scalar xj are sample
    values of the random vector X and the random
    variable Xj.

40
Vector x is called the observation vector.
Scalar xj is called an observable element.
Vector x and scalar xj are sample values of the
random vector X and the random variable Xj.
41
  • Since each Xj is Gaussian with mean sij and
    variance N0/2,
    fXj(xj | mi) = (1/√(πN0)) exp(−(xj − sij)²/N0)
  • we can substitute in 5.44 to get 5.46:
    fX(x | mi) = (πN0)^(−N/2) exp(−(1/N0) Σj=1..N (xj − sij)²)

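A Python sketch evaluating 5.46 in log form for two hypothetical candidate signal vectors; all numerical values are assumptions:

```python
import numpy as np

# Conditional density f_X(x | m_i): product of independent Gaussians with
# means s_ij and common variance N0/2, evaluated in log form (eq. 5.46).
def log_likelihood(x, s_i, N0):
    # log of prod_j (1/sqrt(pi*N0)) exp(-(x_j - s_ij)^2 / N0)
    return -0.5 * x.size * np.log(np.pi * N0) - np.sum((x - s_i) ** 2) / N0

x   = np.array([2.7, -1.2])          # hypothetical observation vector
s_1 = np.array([3.0, -1.0])          # candidate signal vectors (assumed)
s_2 = np.array([-3.0, 1.0])
N0  = 0.1

print(log_likelihood(x, s_1, N0))    # larger -> s_1 is the more likely signal
print(log_likelihood(x, s_2, N0))
```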
42
  • If we go back to the formulation of the received
    signal through an AWGN channel (5.34):
    x(t) = Σj=1..N xj φj(t) + x′(t)

Only the projections of the noise onto the basis
functions of the signal set {si(t)}, i = 1, ..., M,
affect the significant statistics of the detection
problem; the vector that we have constructed fully
defines this part
43
Finally,
  • The AWGN channel is equivalent to an
    N-dimensional vector channel, described by the
    observation vector
    x = si + w,  i = 1, 2, ..., M

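A sketch of this equivalence: instead of simulating waveforms and correlators, the observation vector can be drawn directly; s_i and N0 are assumed values:

```python
import numpy as np

# Vector-channel view: draw the observation vector directly as
# x = s_i + w, with w ~ N(0, (N0/2) I).
rng = np.random.default_rng(2)
N0 = 0.1
s_i = np.array([3.0, -1.0])                      # transmitted vector (assumed)

w = rng.normal(0.0, np.sqrt(N0 / 2), s_i.size)   # projected noise, variance N0/2
x = s_i + w                                      # equivalent N-dimensional channel
print(x)                                         # ≈ s_i perturbed by noise
```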
44
Outline
  • 5.1 Introduction
  • 5.2 Geometric Representation of Signals
  • Gram-Schmidt Orthogonalization Procedure
  • 5.3 Conversion of the AWGN into a Vector Channel
  • 5.4 Maximum Likelihood Decoding
  • 5.5 Correlation Receiver
  • 5.6 Probability of Error

45
Maximum Likelihood Decoding
  • to be continued.