Module-3 : Transmission Lecture-4 (27/4/00)
Marc Moonen
Dept. E.E./ESAT, K.U.Leuven
marc.moonen@esat.kuleuven.ac.be
www.esat.kuleuven.ac.be/sista/~moonen/
Lecture-4: Receiver Design
Given p(t) and h(t), reconstruct the transmitted symbols a_0, a_1, a_2, ..., a_K from r(t).

[Figure: transmitter (constellation + transmit filter p(t)) produces
s(t) = \sqrt{E_s}\cdot\sum_k a_k\, p(t-kT_s);
the signal passes through the channel h(t), AWGN n(t) is added, and the
receiver (to be defined, "???") produces the estimates \hat a_k]
Lecture-4: Receiver Design
Lecture-3 : 1st attempt at a receiver structure
* matched filter front-end
* symbol-rate sampler
* memory-less decision device (=`slicer')
Optimal ? (e.g. below Nyquist rate sampling ?)

[Figure: transmitter (transmit pulse p(t), input \sqrt{E_s}\cdot a_k), channel h(t), AWGN n(t) added; receiver (see Lecture-3) = front-end filter f(t), sampler at rate 1/T_s, slicer, output \hat a_k]
Lecture-4: Receiver Design - Overview
• Optimal Receiver Structure
  - minimum Bit-Error-Rate (BER) receiver
  - maximum a-posteriori probability (MAP) receiver
  - maximum likelihood (ML) receiver
  - minimum distance (MD) receiver
• Transmission of 1 symbol
  Matched Filter (MF) front-end revisited
• Transmission of a symbol sequence
  Whitened Matched Filter (WMF) front-end
  MLSE/Viterbi Algorithm
Optimal Receiver Structure : Minimum BER
• The aim of a digital communication system is to transmit bits with as few bit errors as possible
• Hence the optimal receiver minimizes the Bit-Error-Probability (BEP or BER)
• Procedure :
  - from minimum BER to MAP receiver (Maximum-A-Posteriori probability)
  - from MAP to ML receiver (Maximum-Likelihood)
  - from ML to MD receiver (Minimum-Distance)
Optimal Receiver Structure : MAP
• Transmitted sequence is a_0, a_1, a_2, ..., a_K
• The MAP receiver sees r(t), and picks the sequence \hat a_0, \hat a_1, \hat a_2, ..., \hat a_K with maximum a-posteriori probability (probability that the sent sequence is \hat a_0, \hat a_1, ..., \hat a_K, after observing r(t)):

$$\max_{\hat a_0,\hat a_1,\dots,\hat a_K} P_{X|Y}\big(\hat a_0,\hat a_1,\dots,\hat a_K \mid r(t)\big)$$
Assignment 2.3
• MAP Receiver:

$$\max_{\hat a_0,\hat a_1,\dots,\hat a_K} P_{X|Y}\big(\hat a_0,\hat a_1,\dots,\hat a_K \mid r(t)\big)$$

• With

$$1 - P_{X|Y}\big(\hat a_0,\hat a_1,\dots,\hat a_K \mid r(t)\big)$$

being the probability that the sent sequence was not \hat a_0, \hat a_1, ..., \hat a_K (after observing r(t)), establish the link/difference between minimum-BER and MAP ....
Optimal Receiver Structure : MAP
• MAP detector:

$$\max_{\hat a_0,\hat a_1,\dots,\hat a_K} P_{X|Y}\big(\hat a_0,\dots,\hat a_K \mid r(t)\big)$$

• Bayes' rule says

$$P_{X|Y}\big(\hat a_0,\dots,\hat a_K \mid r(t)\big) = \frac{P_{Y|X}\big(r(t)\mid \hat a_0,\dots,\hat a_K\big)\cdot P_X\big(\hat a_0,\dots,\hat a_K\big)}{P_Y\big(r(t)\big)}$$

with P_Y(r(t)) independent of the transmitted sequence, hence

$$\max_{\hat a_0,\dots,\hat a_K} P_{Y|X}\big(r(t)\mid \hat a_0,\dots,\hat a_K\big)\cdot P_X\big(\hat a_0,\dots,\hat a_K\big)$$

• MAP-detection a.k.a. `Bayesian detection'
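The role of the prior P_X is what separates MAP from ML: with unequal priors the two detectors can disagree. A toy single-symbol illustration in Python (the prior values, noise level, and observed value are invented for this example, not taken from the slides):

```python
import math

# Toy MAP vs ML detection of one symbol a in {-1, +1}, observed as
# r = a + n with n ~ N(0, sigma^2).  Priors and observation are
# illustrative assumptions.
sigma = 1.0
priors = {-1: 0.9, +1: 0.1}          # strongly unequal priors
r = 0.3                              # observed value, slightly positive

def likelihood(r, a, sigma):
    """Gaussian likelihood P(r | a) (up to a constant factor)."""
    return math.exp(-(r - a) ** 2 / (2 * sigma ** 2))

ml_est = max(priors, key=lambda a: likelihood(r, a, sigma))
map_est = max(priors, key=lambda a: likelihood(r, a, sigma) * priors[a])

print("ML :", ml_est)   # -> +1 (closest to r)
print("MAP:", map_est)  # -> -1 (the prior pulls the decision)
```

With equal priors the P_X factor is the same for every candidate and MAP reduces to ML, as stated on the next slide.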
Optimal Receiver Structure : ML
• MAP detector:

$$\max_{\hat a_0,\dots,\hat a_K} P_{Y|X}\big(r(t)\mid \hat a_0,\dots,\hat a_K\big)\cdot P_X\big(\hat a_0,\dots,\hat a_K\big)$$

• If all symbol sequences \hat a_0, \hat a_1, \hat a_2, ..., \hat a_K are equally likely (= often the case, or often a good approximation), this can be reduced to

$$\max_{\hat a_0,\dots,\hat a_K} P_{Y|X}\big(r(t)\mid \hat a_0,\dots,\hat a_K\big)$$

= `Maximum Likelihood (ML) Detection'
= choose \hat a_0, \hat a_1, ..., \hat a_K to maximize the likelihood that the observed channel output is r(t)
Optimal Receiver Structure : ML/MD
• ML detector:

$$\max_{\hat a_0,\dots,\hat a_K} P_{Y|X}\big(r(t)\mid \hat a_0,\dots,\hat a_K\big)$$

• Transmitted signal is

$$s(t) = \sqrt{E_s}\cdot\sum_k a_k\, p(t-kT_s)$$

Received signal is

$$r(t) = \sqrt{E_s}\cdot\sum_k a_k\, p'(t-kT_s) + n(t)$$

If n(t) is AWGN, then ML is (easily) proven to be equivalent with

$$\min_{\hat a_0,\hat a_1,\dots,\hat a_K} \int \Big| r(t) - \sqrt{E_s}\cdot\sum_k \hat a_k\, p'(t-kT_s) \Big|^2 dt$$

= `Minimum distance (MD) Detection'
Optimal Receiver Structure : ML/MD
• ML detector for linear channel with AWGN
• p’(t)=p(t)*h(t)=transmitted pulse, filtered by channel
• `Continuous-time’ minimum distance criterion
• Check all possible (M^K !!) sequences -> Need alternative/cheap procedure….
• PS: consider AWGN in the sequel, but can be generalized for non-white Gaussian noise
$$\min_{\hat a_0,\hat a_1,\dots,\hat a_K} \int \Big| r(t) - \sqrt{E_s}\cdot\sum_k \hat a_k\, p'(t-kT_s) \Big|^2 dt$$
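A discretized sketch of this M^K brute-force search in Python (the pulse, channel, noise level, and symbol values are illustrative assumptions; the integral becomes a sum over samples):

```python
import itertools
import numpy as np

# Brute-force minimum-distance sequence detection: check all M^K
# candidate sequences, keep the one closest to the received signal.
rng = np.random.default_rng(0)
Ts = 4                                   # samples per symbol period
p = np.array([1.0, 1.0, 1.0, 1.0])       # rectangular transmit pulse
h = np.array([1.0, 0.5])                 # short channel impulse response
pp = np.convolve(p, h)                   # p'(t) = p(t) * h(t)

alphabet = [-1.0, +1.0]                  # M = 2
K = 4                                    # symbols per sequence -> M^K = 16

def synthesize(a):
    """sum_k a_k * p'(t - k.Ts), sampled."""
    s = np.zeros((K - 1) * Ts + len(pp))
    for k, ak in enumerate(a):
        s[k * Ts : k * Ts + len(pp)] += ak * pp
    return s

a_true = (1.0, -1.0, -1.0, 1.0)
r = synthesize(a_true) + 0.1 * rng.standard_normal((K - 1) * Ts + len(pp))

# check all M^K candidate sequences, keep the minimum-distance one
a_hat = min(itertools.product(alphabet, repeat=K),
            key=lambda a: np.sum((r - synthesize(a)) ** 2))
# at this SNR the detected sequence equals the transmitted one
```

Already at M = 2, K = 4 the search visits 16 sequences; the exponential growth in K is exactly why the cheaper WMF/Viterbi procedure of the later slides is needed.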
Optimal Receiver Structure : ML/MD
• ML detector for linear channel with AWGN
• Strategy:
-Transmission of 1 symbol
-> restatement of matched filter receiver of Lecture 3
-Transmission of a symbol sequence
-> matched filter front-end + `ML Sequence Estimation’
$$\min_{\hat a_0,\hat a_1,\dots,\hat a_K} \int \Big| r(t) - \sqrt{E_s}\cdot\sum_k \hat a_k\, p'(t-kT_s) \Big|^2 dt$$
Transmission of 1 symbol (I)
• ML detection for transmission of 1 symbol over a linear channel with AWGN:
check all (M) possible symbols \hat a_0 in the alphabet, pick the one that minimizes the distance

$$\min_{\hat a_0} \int \big| r(t) - \sqrt{E_s}\cdot p'(t)\cdot \hat a_0 \big|^2 dt$$

• This is equivalent to

$$\min_{\hat a_0}\ \Big( \sqrt{E_s}\cdot|\hat a_0|^2 \int p'(t)\,[p'(t)]^*\,dt \;-\; 2\,\mathrm{Re}\Big\{ \hat a_0^* \int r(t)\,[p'(t)]^*\,dt \Big\} \Big)$$

(* = complex conjugate)
Assignment 2.4
• Prove the equivalence of the 2 criteria :

1:
$$\min_{\hat a_0} \int \big| r(t) - \sqrt{E_s}\cdot p'(t)\cdot \hat a_0 \big|^2 dt$$

2:
$$\min_{\hat a_0}\ \Big( \sqrt{E_s}\cdot|\hat a_0|^2 \int p'(t)\,[p'(t)]^*\,dt \;-\; 2\,\mathrm{Re}\Big\{ \hat a_0^* \int r(t)\,[p'(t)]^*\,dt \Big\} \Big)$$
Transmission of 1 symbol (II)
• ML detection for transmission of 1 symbol over a linear channel with AWGN:

$$\min_{\hat a_0}\ \Big( \sqrt{E_s}\cdot|\hat a_0|^2 \int p'(t)\,[p'(t)]^*\,dt \;-\; 2\,\mathrm{Re}\Big\{ \hat a_0^* \int r(t)\,[p'(t)]^*\,dt \Big\} \Big)$$

Interpretation :
(*) $$u_0 = \int r(t)\,[p'(t)]^*\,dt$$ : r(t) filtered by [p'(-t)]*, and evaluated at t=0 (cfr. convolution integral)
(*) $$g_0 = \int p'(t)\,[p'(t)]^*\,dt$$ : normalization constant

`Discrete-time' criterion instead of continuous-time criterion:

$$\min_{\hat a_0}\ \big( \sqrt{E_s}\cdot g_0\cdot|\hat a_0|^2 \;-\; 2\,\mathrm{Re}\{\hat a_0^*\, u_0\} \big)$$
Transmission of 1 symbol (III)
• Realization :

$$\min_{\hat a_0}\ \big( \sqrt{E_s}\cdot g_0\cdot|\hat a_0|^2 \;-\; 2\,\mathrm{Re}\{\hat a_0^*\, u_0\} \big)$$

[Figure: transmitter (transmit pulse p(t), input \sqrt{E_s}\cdot a_0), channel h(t) with p'(t)=p(t)*h(t), AWGN n(t) added; receiver = front-end filter p'(-t)*, sampled at t=0 giving u_0, decision device giving \hat a_0; = receiver of Lecture-3 (p23) !!]
Transmission of 1 symbol (IV)
Conclusion :
• For transmission of 1 symbol over a linear channel (p'(t)=p(t)*h(t)) with AWGN, the optimal receiver consists of
  - matched filter front-end
  - sampling at t=0
  - memory-less decision device
• = restatement of the optimality of the matched filter receiver of Lecture-3
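The equivalence of the distance criterion and the matched-filter (correlator) form of Assignment 2.4 can be checked numerically. In the sketch below the sampled pulse p', the 4-PAM alphabet, and the noise level are illustrative assumptions:

```python
import numpy as np

# One-symbol detection: the full distance criterion and the discrete-time
# matched-filter criterion pick the same symbol (they differ only by a
# constant and a positive scale factor).
rng = np.random.default_rng(1)
pp = np.array([0.2, 0.9, 0.4, -0.1])      # p'(t) = p(t)*h(t), sampled
Es = 1.0
alphabet = [-3, -1, 1, 3]                 # 4-PAM (real-valued)

a0 = 3                                    # transmitted symbol
r = np.sqrt(Es) * a0 * pp + 0.2 * rng.standard_normal(len(pp))

u0 = np.sum(r * pp)                       # matched-filter output at t = 0
g0 = np.sum(pp * pp)                      # pulse energy

def crit1(a):                             # continuous-time distance
    return np.sum((r - np.sqrt(Es) * a * pp) ** 2)

def crit2(a):                             # discrete-time (correlator) form
    return np.sqrt(Es) * g0 * a ** 2 - 2 * a * u0

a_hat1 = min(alphabet, key=crit1)
a_hat2 = min(alphabet, key=crit2)
assert a_hat1 == a_hat2 == a0             # identical decisions
```

Note that crit2 only needs the single number u_0 (plus the constant g_0), which is the whole point: the matched filter compresses r(t) into a sufficient statistic.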
Transmission of a symbol sequence (I)
• ML detection for transmission of a symbol sequence over a linear channel with AWGN:
check all (M^K) possible symbol sequences \hat a_0, \hat a_1, ..., \hat a_K, pick the one that minimizes the distance
• This is the `continuous-time' distance criterion

$$\min_{\hat a_0,\hat a_1,\dots,\hat a_K} \int \Big| r(t) - \sqrt{E_s}\cdot\sum_k \hat a_k\, p'(t-kT_s) \Big|^2 dt$$
Transmission of a symbol sequence (II)
• Equivalent to the discrete-time minimum distance criterion

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big( \sqrt{E_s}\sum_{k}\sum_{l}\hat a_k^*\, g_{k-l}\,\hat a_l \;-\; 2\,\mathrm{Re}\Big\{\sum_{k}\hat a_k^*\, u_k\Big\} \Big)$$

With :
(*) $$u_k = \int r(t)\,[p'(t-kT_s)]^*\,dt$$ : r(t) filtered by [p'(-t)]*, and sampled at t=k.Ts (cfr. convolution integral)
(*) $$g_k = \int p'(t)\,[p'(t-kT_s)]^*\,dt$$ : autocorrelation of p'(t), sampled at t=k.Ts
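A numerical check of this equivalence (it is Assignment 2.5 in miniature): both criteria rank candidate sequences identically, since they differ only by a constant. The sampled pulse, symbol period, and noise level below are illustrative assumptions, with integrals replaced by sums over samples:

```python
import itertools
import numpy as np

# Continuous-time distance criterion vs. discrete-time criterion built
# from u_k and g_k: both pick the same sequence.
rng = np.random.default_rng(2)
Ts = 2                                    # samples per symbol period
pp = np.array([1.0, 0.6, 0.2])            # p'(t) sampled; spans > 1 period
Es, K = 1.0, 3
alphabet = [-1.0, 1.0]

def synthesize(a):
    """sqrt(Es) * sum_k a_k p'(t - k.Ts), sampled."""
    s = np.zeros((K - 1) * Ts + len(pp))
    for k, ak in enumerate(a):
        s[k * Ts : k * Ts + len(pp)] += np.sqrt(Es) * ak * pp
    return s

a_true = (1.0, 1.0, -1.0)
r = synthesize(a_true) + 0.1 * rng.standard_normal((K - 1) * Ts + len(pp))

# u_k: matched-filter output sampled at t = k.Ts
u = [np.sum(r[k * Ts : k * Ts + len(pp)] * pp) for k in range(K)]

def g(k):
    """autocorrelation of p', sampled at t = k.Ts"""
    s = abs(k) * Ts
    return np.sum(pp[s:] * pp[: len(pp) - s]) if s < len(pp) else 0.0

def crit_cont(a):
    return np.sum((r - synthesize(a)) ** 2)

def crit_disc(a):
    quad = sum(a[k] * g(k - l) * a[l] for k in range(K) for l in range(K))
    return np.sqrt(Es) * quad - 2 * sum(a[k] * u[k] for k in range(K))

cands = list(itertools.product(alphabet, repeat=K))
a_hat_c = min(cands, key=crit_cont)
a_hat_d = min(cands, key=crit_disc)
assert a_hat_c == a_hat_d == a_true       # both criteria agree
```

The discrete criterion only touches the K+1 samples u_k and the autocorrelation g_k, never the waveform r(t) itself.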
Assignment 2.5
• Prove the equivalence of the 2 criteria :

1:
$$\min_{\hat a_0,\dots,\hat a_K} \int \Big| r(t) - \sqrt{E_s}\cdot\sum_k \hat a_k\, p'(t-kT_s) \Big|^2 dt$$

2:
$$\min_{\hat a_0,\dots,\hat a_K}\ \Big( \sqrt{E_s}\sum_{k}\sum_{l}\hat a_k^*\, g_{k-l}\,\hat a_l \;-\; 2\,\mathrm{Re}\Big\{\sum_{k}\hat a_k^*\, u_k\Big\} \Big)$$

(= generalization of assignment 2.4)
Transmission of a symbol sequence (III)
• Realization :

[Figure: transmitter (transmit pulse p(t), input \sqrt{E_s}\cdot a_k), channel h(t), AWGN n(t) added; receiver = front-end filter p'(-t)*, sampled at t=k.Ts giving u_k, followed by a decision device based on]

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big( \sqrt{E_s}\sum_{k}\sum_{l}\hat a_k^*\, g_{k-l}\,\hat a_l \;-\; 2\,\mathrm{Re}\Big\{\sum_{k}\hat a_k^*\, u_k\Big\} \Big)$$
Transmission of a symbol sequence (IV)
Conclusion :
• For transmission of a symbol sequence over a linear channel (p'(t)=p(t)*h(t)) with AWGN, the optimal receiver consists of
  - matched filter front-end
  - sampling at t=k.Ts
  - decision device based on the discrete-time minimum distance criterion
• = different from the zero-ISI-forcing receiver of Lecture-3
Transmission of a symbol sequence (V)
PS: The matched filter front-end with symbol-rate sampling is remarkable :
- This is generally `below Nyquist-rate' sampling. Hence aliasing does not compromise performance, as long as the front-end filter is matched!
- The matched filter output sampled at t=k.Ts provides `sufficient statistics' (= summarizes r(t) into a finite set of samples, that can be used instead of r(t) in any criterion of optimality (not just ML))
- Clock synchronization (`timing') is crucial (i.e. sampling at t=k.Ts+x does not provide sufficient statistics)
Transmission of a symbol sequence (VI)
• Complexity:
  - check M^K possible sequences
  - over K^2 multiplications for each sequence:

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big( \sqrt{E_s}\sum_{k}\sum_{l}\hat a_k^*\, g_{k-l}\,\hat a_l \;-\; 2\,\mathrm{Re}\Big\{\sum_{k}\hat a_k^*\, u_k\Big\} \Big)$$

• Need for a cheaper algorithm:
  - whitened matched filter (WMF) implementation
  - MLSE/Viterbi Algorithm
--- This is the hard part --- (VII)
• Same criterion, but in matrix-vector form:

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big( \sqrt{E_s}\,\hat{\mathbf a}^H G\,\hat{\mathbf a} \;-\; 2\,\mathrm{Re}\{\hat{\mathbf a}^H \mathbf u\} \Big), \qquad \hat{\mathbf a} = \begin{bmatrix} \hat a_0 \\ \vdots \\ \hat a_K \end{bmatrix}, \quad \mathbf u = \begin{bmatrix} u_0 \\ \vdots \\ u_K \end{bmatrix}$$

$$G = \begin{bmatrix} g_0 & g_1^* & \cdots & g_K^* \\ g_1 & g_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & g_1^* \\ g_K & \cdots & g_1 & g_0 \end{bmatrix}$$

• Enabling result is the `Cholesky factorization' of G:

$$G = L^H \cdot L = (\text{upper triangular})\cdot(\text{lower triangular})$$

(^H = `complex conjugate transpose')
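The factorization can be tried out directly with numpy. One caveat: `np.linalg.cholesky` returns the G = C·C^H convention (lower times upper), while the slide uses G = L^H·L (upper times lower); flipping the matrix with an exchange matrix converts one into the other. The autocorrelation values g_k below are illustrative assumptions:

```python
import numpy as np

# Build a Hermitian Toeplitz G from illustrative autocorrelation values,
# then obtain the G = L^H . L factorization used on this slide.
g = np.array([1.4, 0.5 + 0.2j, 0.1j])         # g_0, g_1, g_2 (illustrative)
n = 5
G = np.zeros((n, n), dtype=complex)
for k in range(n):
    for l in range(n):
        d = k - l
        if abs(d) < len(g):
            G[k, l] = g[d] if d >= 0 else np.conj(g[-d])

J = np.eye(n)[::-1]                           # exchange (flip) matrix
C = np.linalg.cholesky(J @ G @ J)             # J.G.J = C.C^H, C lower tri.
L = (J @ C @ J).conj().T                      # lower triangular again

assert np.allclose(L.conj().T @ L, G)         # G = L^H . L
assert np.allclose(L, np.tril(L))             # L is lower triangular
```

(The chosen g_k make G diagonally dominant, hence positive definite, so the Cholesky factorization exists.)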
--- This is the hard part --- (VIII)
• With this, the criterion becomes

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big\| \sqrt[4]{E_s}\cdot L\cdot \begin{bmatrix} \hat a_0 \\ \vdots \\ \hat a_K \end{bmatrix} \;-\; \frac{1}{\sqrt[4]{E_s}}\cdot L^{-H}\cdot \begin{bmatrix} u_0 \\ \vdots \\ u_K \end{bmatrix} \Big\|_2^2$$

(vector 2-norm)

• For K → ∞, all linear algebra operations (matrix-vector multiplications) may be interpreted as signal processing operations (filtering) ......
• For K → ∞, the `Cholesky factorization' of G corresponds to a `spectral factorization' of

$$G(z) = \sum_k g_k\, z^{-k} = L(z)\cdot L^*(1/z^*)$$
--- This is the hard part --- (IX)
•

$$\min_{\hat a_0,\dots,\hat a_K}\ \Big\| \sqrt[4]{E_s}\cdot L\cdot \begin{bmatrix} \hat a_0 \\ \vdots \\ \hat a_K \end{bmatrix} \;-\; \frac{1}{\sqrt[4]{E_s}}\cdot L^{-H}\cdot \begin{bmatrix} u_0 \\ \vdots \\ u_K \end{bmatrix} \Big\|_2^2, \qquad G(z) = \sum_k g_k\, z^{-k} = L(z)\cdot L^*(1/z^*)$$

• For K → ∞:
  - Multiplication with L (lower triangular) corresponds to filtering with the causal & minimum-phase filter L(z)
  - Multiplication with L^(-H) (upper triangular) corresponds to filtering with the anti-causal filter 1/L*(1/z*)
Transmission of a symbol sequence (X)
Procedure:
• Matched filter front-end [p’(-t)]* ( p’(t)=p(t)*h(t) )
• Symbol-rate sampling
• Anti-causal `pre-cursor equalizer’ 1/L*(1/z*)
• It is shown that

$$y_k = \sqrt{E_s}\cdot L(z)\cdot a_k + w_k = \sqrt{E_s}\cdot\big(h_0 + h_1 z^{-1} + h_2 z^{-2} + \dots + h_N z^{-N}\big)\cdot a_k + w_k$$

where w_k is shown to be discrete-time AWGN (!)
= `Whitened matched filter front-end'
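The whitening claim can be verified in matrix form: the noise at the matched-filter output has covariance proportional to G, and applying L^{-H} (the matrix counterpart of the anti-causal pre-cursor equalizer) gives L^{-H}·G·L^{-1} = I when G = L^H·L. A sketch with illustrative real-valued g_k:

```python
import numpy as np

# The pre-cursor equalizer whitens the matched-filter noise:
# cov(noise) ~ G, and after L^{-H} the covariance becomes the identity.
g = np.array([1.4, 0.5, 0.1])        # illustrative autocorrelation values
n = 6
G = np.zeros((n, n))
for k in range(n):
    for l in range(n):
        if abs(k - l) < len(g):
            G[k, l] = g[abs(k - l)]

# Factor G = L^H . L via a flipped numpy Cholesky
# (numpy's convention is G = C . C^H, hence the flips)
J = np.eye(n)[::-1]
C = np.linalg.cholesky(J @ G @ J)
L = (J @ C @ J).T                    # real case: conjugation is trivial

Linv = np.linalg.inv(L)
W = Linv.T @ G @ Linv                # noise covariance after whitening
assert np.allclose(W, np.eye(n))     # discrete-time white noise
```

This is the matrix-sized analogue of the statement that w_k above is discrete-time AWGN.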
Transmission of a symbol sequence (XI)
• The resulting optimization problem is

$$\min_{\hat a_0,\dots,\hat a_K}\ \sum_k \Big| y_k - \sqrt{E_s}\cdot\sum_m \hat a_m\, h_{k-m} \Big|^2, \qquad y_k = \sqrt{E_s}\cdot L(z)\cdot a_k + w_k$$

= given the channel impulse response & the channel outputs, reconstruct the channel inputs
= Maximum-Likelihood Sequence Estimation (MLSE)
= equivalent with the criterion on page 26
Transmission of a symbol sequence (XII)
• Realization :
[Figure: transmitter (transmit pulse p(t), input \sqrt{E_s}\cdot a_k), channel h(t), AWGN n(t) added; receiver = front-end filter p'(-t)*, sampled at rate 1/T_s giving u_k, pre-cursor equalizer 1/L*(1/z*) giving y_k, followed by MLSE, output \hat a_k]

$$\min_{\hat a_0,\dots,\hat a_K}\ \sum_k \Big| y_k - \sqrt{E_s}\cdot\sum_m \hat a_m\, h_{k-m} \Big|^2$$
Transmission of a symbol sequence (XIII)
Viterbi Algorithm (A. Viterbi 1967)
• = recursive (dynamic programming) algorithm to solve

$$\min_{\hat a_0,\dots,\hat a_K}\ \sum_k \Big| y_k - \sqrt{E_s}\cdot\sum_m \hat a_m\, h_{k-m} \Big|^2$$

• if the channel length is N (h_k = 0 for k > N), the complexity is of order M^N per sample (i.e. independent of K !)
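A minimal Viterbi sketch for this criterion, verified against the exhaustive M^K search it replaces. The channel taps, alphabet, and noise level are illustrative assumptions, and √E_s is taken as 1:

```python
import itertools
import numpy as np

# Viterbi MLSE: per received sample, keep only the best path into each of
# the M^N trellis states (here 2^2 = 4), independent of K.
h = [1.0, 0.6, -0.3]                      # h_0..h_N, channel memory N = 2
alphabet = [-1.0, 1.0]                    # M = 2
K = 8
rng = np.random.default_rng(3)
a_true = list(rng.choice(alphabet, size=K))
y = np.convolve(a_true, h)[:K] + 0.1 * rng.standard_normal(K)

N = len(h) - 1

def viterbi(y):
    # state = the previous N symbols; start from an all-zero (idle) past
    paths = {(0.0,) * N: (0.0, [])}       # state -> (cost, survivor path)
    for yk in y:
        new_paths = {}
        for state, (cost, seq) in paths.items():
            for a in alphabet:
                pred = h[0] * a + sum(h[m] * state[-m] for m in range(1, N + 1))
                c = cost + (yk - pred) ** 2
                ns = (state + (a,))[1:]   # shift the new symbol in
                if ns not in new_paths or c < new_paths[ns][0]:
                    new_paths[ns] = (c, seq + [a])
        paths = new_paths
    return min(paths.values(), key=lambda t: t[0])[1]

def brute_force(y):
    # exhaustive M^K search over the same metric, for verification only
    return list(min(itertools.product(alphabet, repeat=K),
                    key=lambda a: np.sum((y - np.convolve(a, h)[:K]) ** 2)))

assert viterbi(y) == brute_force(y)       # Viterbi finds the global optimum
```

The additive structure of the metric (each term depends only on the current symbol and the N previous ones) is what makes the per-state pruning lossless.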
Conclusions
• Optimal receiver structures :
  minimum-BER -> MAP -> ML -> minimum-distance
• 1 symbol over a linear-filter AWGN channel :
  - matched filter front-end
  - symbol-rate sampling
  - memoryless decision device
• Symbol sequence over a linear-filter AWGN channel :
  - matched filter front-end
  - symbol-rate sampling
  - pre-cursor equalization filter (-> `whitened matched filter')
  - MLSE/Viterbi
Assignment 2
• 2.1 : See Lecture-3 : passband demodulation
• 2.2 : See Lecture-3 : zero-ISI-forcing design
• 2.3 : See Lecture-4 : minimum-BER vs. MAP
• 2.4 : See Lecture-4 : equivalence of 2 criteria for detection of 1 symbol over linear channel with AWGN
• 2.5 : See Lecture-4 : equivalence of 2 criteria for detection of symbol sequence over linear channel with AWGN
Assignment 2
• 2.6 : Self-study : Pick your favorite digital communications textbook, and study the chapter on the Viterbi algorithm
(e.g. Lee & Messerschmitt, section 9.6)
ps: the Viterbi algorithm is also used in channel coding, speech recognition, ...
hence worth additional study... !