Module-3 : Transmission Lecture-4 (27/4/00)

(1)

Module-3 : Transmission Lecture-4 (27/4/00)

Marc Moonen

Dept. E.E./ESAT, K.U.Leuven

marc.moonen@esat.kuleuven.ac.be

www.esat.kuleuven.ac.be/sista/~moonen/

(2)

Lecture-4: Receiver Design

Given p(t) and h(t), reconstruct the symbol sequence a_0, a_1, a_2, ..., a_K from r(t)

[Block diagram: constellation symbols √Es·a_k → transmit filter / transmit pulse p(t) → s(t) → channel h(t) → + n(t) (AWGN) → r(t) → receiver (to be defined, `???')]

(3)

Lecture-4: Receiver Design

Lecture-3 : 1st attempt at a receiver structure
* matched filter front-end
* symbol-rate sampler
* memory-less decision device (= `slicer')

Optimal ? (e.g. below Nyquist-rate sampling ?)

[Block diagram: transmitter (√Es·a_k → transmit pulse p(t)) → channel h(t) → + n(t) (AWGN) → front-end filter f(t) → sampler at rate 1/Ts → slicer → â_k ; receiver as in Lecture-3]

(4)

Lecture-4: Receiver Design - Overview

• Optimal Receiver Structure
  - minimum Bit-Error-Rate (BER) receiver
  - maximum a-posteriori probability (MAP) receiver
  - maximum likelihood (ML) receiver
  - minimum distance (MD) receiver

• Transmission of 1 symbol
  Matched Filter (MF) front-end revisited

• Transmission of a symbol sequence
  Whitened Matched Filter (WMF) front-end
  MLSE/Viterbi Algorithm

(5)

Optimal Receiver Structure : Minimum BER

• Aim of a digital communication system is to transmit bits with as few bit errors as possible

• Hence the optimal receiver minimizes the Bit-Error-Probability (BEP or BER)

• Procedure :
  - from minimum BER to MAP receiver (Maximum-A-Posteriori probability)
  - from MAP to ML receiver (Maximum-Likelihood)
  - from ML to MD receiver (Minimum-Distance)

(6)

Optimal Receiver Structure : MAP

• Transmitted sequence is a_0, a_1, a_2, ..., a_K

• MAP Receiver sees r(t), and picks the sequence â_0, â_1, â_2, ..., â_K with maximum a-posteriori probability (probability that the sent sequence is â_0, â_1, â_2, ..., â_K after observing r(t)) :

   max over â_0, â_1, ..., â_K of   P_X|Y( â_0, â_1, â_2, ..., â_K | r(t) )

(7)

Assignment 2.3

• MAP Receiver :

   max over â_0, â_1, ..., â_K of   P_X|Y( â_0, â_1, â_2, ..., â_K | r(t) )

• With 1 − P_X|Y( â_0, â_1, â_2, ..., â_K | r(t) ) being the probability that the sent sequence was not â_0, â_1, â_2, ..., â_K (after observing r(t)), establish the link/difference between minimum-BER and MAP ….

(8)

Optimal Receiver Structure : MAP

• MAP detector :

   max over â_0, â_1, ..., â_K of   P_X|Y( â_0, â_1, ..., â_K | r(t) )

• Bayes' rule says

   P_X|Y( â_0, ..., â_K | r(t) ) = P_Y|X( r(t) | â_0, ..., â_K ) · P_X( â_0, ..., â_K ) / P_Y( r(t) )

  with P_Y( r(t) ) independent of the transmitted sequence, hence the MAP detector is

   max over â_0, â_1, ..., â_K of   P_Y|X( r(t) | â_0, ..., â_K ) · P_X( â_0, ..., â_K )

  MAP-detection a.k.a. `Bayesian detection'

(9)

Optimal Receiver Structure : ML

• MAP detector :

   max over â_0, â_1, ..., â_K of   P_Y|X( r(t) | â_0, ..., â_K ) · P_X( â_0, ..., â_K )

• If all symbol sequences â_0, â_1, â_2, ..., â_K are equally likely (= often the case, or often a good approximation), this can be reduced to

   max over â_0, â_1, ..., â_K of   P_Y|X( r(t) | â_0, ..., â_K )

  = `Maximum Likelihood (ML) Detection'
  = choose â_0, â_1, ..., â_K to maximize the likelihood that the observed channel output is r(t)
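As a small numeric illustration of this reduction (a toy setup of my own, not from the slides): for a single 4-PAM symbol observed in Gaussian noise, the MAP decision argmax_â p(r|â)·P(â) coincides with the ML decision argmax_â p(r|â) whenever the priors P(â) are equal.

```python
import numpy as np

def gaussian_likelihood(r, a, sigma=1.0):
    """p(r | a) for r = a + n, with n ~ N(0, sigma^2)."""
    return np.exp(-(r - a) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

alphabet = np.array([-3.0, -1.0, 1.0, 3.0])   # 4-PAM symbols (toy example)
priors = np.full(4, 0.25)                      # equally likely symbols

r = 0.7                                        # received sample
map_decision = alphabet[np.argmax(gaussian_likelihood(r, alphabet) * priors)]
ml_decision = alphabet[np.argmax(gaussian_likelihood(r, alphabet))]
assert map_decision == ml_decision             # equal priors: MAP == ML
```

With unequal priors the two rules can of course differ, which is exactly why the equal-likelihood assumption is needed for the reduction.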

(10)

Optimal Receiver Structure : ML/MD

• ML detector :

   max over â_0, â_1, ..., â_K of   P_Y|X( r(t) | â_0, ..., â_K )

• Transmitted signal is

   s(t) = √Es · Σ_k a_k · p(t − k·Ts)

  Received signal is

   r(t) = √Es · Σ_k a_k · p'(t − k·Ts) + n(t)

  If n(t) is AWGN, then ML is (easily) proven to be equivalent with

   min over â_0, â_1, ..., â_K of   ∫ | r(t) − √Es · Σ_k â_k · p'(t − k·Ts) |² dt

  = `Minimum distance (MD) Detection'

(11)

Optimal Receiver Structure : ML/MD

• ML detector for linear channel with AWGN

• p’(t)=p(t)*h(t)=transmitted pulse, filtered by channel

• `Continuous-time’ minimum distance criterion

• Check all possible (M^K !!) sequences -> Need alternative/cheap procedure….

• PS: consider AWGN in the sequel, but can be generalized for non-white Gaussian noise

   min over â_0, â_1, ..., â_K of   ∫ | r(t) − √Es · Σ_k â_k · p'(t − k·Ts) |² dt
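The exhaustive-search interpretation of the MD criterion can be sketched in a few lines (a toy discretized setup of my own, with hypothetical names; the M^K enumeration is exactly what makes this impractical):

```python
import itertools
import numpy as np

def md_detect(r, pulse, alphabet, K, Ts, dt, Es=1.0):
    """Brute-force MD: candidate minimizing ∫|r(t) - √Es·Σ â_k·p'(t-kTs)|² dt."""
    n = len(r)
    step = int(round(Ts / dt))
    best, best_cost = None, np.inf
    for cand in itertools.product(alphabet, repeat=K):    # all M^K sequences (!)
        s = np.zeros(n)
        for k, a in enumerate(cand):                      # √Es · Σ â_k p'(t-kTs)
            i = k * step
            s[i:i + len(pulse)] += np.sqrt(Es) * a * pulse[:max(0, n - i)]
        cost = np.sum(np.abs(r - s) ** 2) * dt            # Riemann sum ≈ integral
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# noiseless sanity check: the transmitted sequence is recovered exactly
dt, Ts = 0.1, 1.0
pulse = np.exp(-np.arange(0, 2, dt))      # toy p'(t) with ISI between symbols
sent = (1, -1, -1, 1)
r = np.zeros(60)
for k, a in enumerate(sent):
    i = int(k * Ts / dt)
    r[i:i + len(pulse)] += a * pulse
print(md_detect(r, pulse, (-1, 1), 4, Ts, dt))   # → (1, -1, -1, 1)
```

For K symbols from an M-ary alphabet this loop runs M^K times, which motivates the cheaper WMF/Viterbi machinery developed below.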

(12)

Optimal Receiver Structure : ML/MD

• ML detector for linear channel with AWGN

• Strategy:

-Transmission of 1 symbol

-> restatement of matched filter receiver of Lecture 3

-Transmission of a symbol sequence

-> matched filter front-end + `ML Sequence Estimation’

   min over â_0, â_1, ..., â_K of   ∫ | r(t) − √Es · Σ_k â_k · p'(t − k·Ts) |² dt

(13)

Transmission of 1 symbol (I)

• ML detection for transmission of 1 symbol over a linear channel with AWGN :

   min over â_0 of   ∫ | r(t) − √Es · p'(t) · â_0 |² dt

  check all (M) possible symbols â_0 in the alphabet, pick the one that minimizes the distance

• This is equivalent to

   min over â_0 of   | ∫ r(t) · [p'(t)]* dt − √Es · â_0 · ∫ p'(t) · [p'(t)]* dt |²

  ( * = complex conjugate )

(14)

Assignment 2.4

• Prove the equivalence of 2 criteria :

  1:   min over â_0 of   ∫ | r(t) − √Es · p'(t) · â_0 |² dt

  2:   min over â_0 of   | ∫ r(t) · [p'(t)]* dt − √Es · â_0 · ∫ p'(t) · [p'(t)]* dt |²

(15)

Transmission of 1 symbol (II)

• ML detection for transmission of 1 symbol over a linear channel with AWGN :

   min over â_0 of   | u_0 − √Es · â_0 · g_0 |²

  with

   u_0 = ∫ r(t) · [p'(t)]* dt
   g_0 = ∫ p'(t) · [p'(t)]* dt

  Interpretation :
  (*) u_0 = r(t) filtered by [p'(−t)]*, and evaluated at t=0 (cfr. convolution integral)
  (*) g_0 = normalization constant

  `Discrete-time' criterion instead of continuous-time criterion

(16)

Transmission of 1 symbol (III)

• Realization :

   min over â_0 of   | u_0 − (√Es · g_0) · â_0 |²

  [Block diagram: transmitter (√Es·a_0 → transmit pulse p(t)) → channel h(t) ( p'(t)=p(t)*h(t) ) → + n(t) (AWGN) → front-end filter [p'(−t)]* → sample at t=0 → u_0 → decision device → â_0]

  = receiver of Lecture-3 (p23) !!

(17)

Transmission of 1 symbol (IV)

Conclusion :

• For transmission of 1 symbol over a linear channel (p'(t)=p(t)*h(t)) with AWGN, the optimal receiver consists of
  - matched filter front-end
  - sampling at t=0
  - memory-less decision device

• = restatement of the optimality of the matched filter receiver of Lecture-3
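This single-symbol receiver is short enough to write out end to end (a discretized toy sketch of my own: a hypothetical pulse and noise level, correlate-sample-slice):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 1, dt)
p_prime = np.sin(np.pi * t)              # toy p'(t) = p(t)*h(t), finite support
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
Es = 1.0

a0 = 3.0                                 # transmitted symbol
rng = np.random.default_rng(0)
r = np.sqrt(Es) * a0 * p_prime + 0.1 * rng.standard_normal(len(t))   # AWGN

u0 = np.sum(r * np.conj(p_prime)) * dt           # ∫ r(t)·[p'(t)]* dt  (matched filter at t=0)
g0 = np.sum(p_prime * np.conj(p_prime)) * dt     # ∫ p'(t)·[p'(t)]* dt (normalization)
a0_hat = alphabet[np.argmin(np.abs(u0 - np.sqrt(Es) * g0 * alphabet))]   # slicer
print(a0_hat)                                    # → 3.0
```

The correlation with [p'(t)]* is the matched filter output evaluated at t=0; the slicer then implements the memory-less minimum-distance decision.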

(18)

Transmission of a symbol sequence (I)

• ML detection for transmission of a symbol sequence over a linear channel with AWGN :

   min over â_0, â_1, ..., â_K of   ∫ | r(t) − √Es · Σ_k â_k · p'(t − k·Ts) |² dt

  check all (M^K) possible symbol sequences â_0, â_1, ..., â_K, pick the one that minimizes the distance

• This is the `continuous-time' distance criterion

(19)

Transmission of a symbol sequence (II)

• Equivalent to the discrete-time minimum distance criterion

   min over â_0, ..., â_K of   Es · Σ_k Σ_l â_k* · g_{l−k} · â_l − 2·√Es · Re{ Σ_k â_k* · u_k }

  With :

   u_k = ∫ r(t) · [p'(t − k·Ts)]* dt
   g_k = ∫ p'(t) · [p'(t − k·Ts)]* dt

  (*) u_k = r(t) filtered by [p'(−t)]*, and sampled at t=k·Ts (cfr. convolution integral)
  (*) g_k = autocorrelation of p'(t), sampled at t=k·Ts
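The equivalence (up to the candidate-independent term ∫|r(t)|²dt) can be checked numerically; this is a toy discretized setup of my own, in the spirit of assignment 2.5, with real-valued signals for simplicity:

```python
import numpy as np

# check: ∫|r - √Es·Σ â_k p'(t-kTs)|² dt
#        = ∫|r|² dt + Es·ΣΣ â_k·g_{l-k}·â_l − 2√Es·Σ â_k·u_k   (real case)
dt, Ts, Es, K = 0.01, 1.0, 2.0, 4
t = np.arange(0, 8, dt)
p_prime = np.exp(-t) * (t < 3)                    # toy p'(t) with ISI

rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=K)               # true symbols
shifted = [np.interp(t - k * Ts, t, p_prime, left=0, right=0) for k in range(K)]
r = np.sqrt(Es) * sum(a[k] * shifted[k] for k in range(K))
r += 0.2 * rng.standard_normal(len(t))            # AWGN

a_hat = rng.choice([-1.0, 1.0], size=K)           # arbitrary candidate sequence

# continuous-time cost
s_hat = np.sqrt(Es) * sum(a_hat[k] * shifted[k] for k in range(K))
lhs = np.sum(np.abs(r - s_hat) ** 2) * dt

# discrete-time cost via u_k and g_{l-k}
u = np.array([np.sum(r * shifted[k]) * dt for k in range(K)])
G = np.array([[np.sum(shifted[k] * shifted[l]) * dt for l in range(K)]
              for k in range(K)])                  # G[k,l] = g_{l-k}
rhs = (np.sum(np.abs(r) ** 2) * dt
       + Es * a_hat @ G @ a_hat
       - 2 * np.sqrt(Es) * (a_hat @ u))
print(abs(lhs - rhs) < 1e-9)                       # → True
```

Since ∫|r(t)|²dt does not depend on the candidate, minimizing either cost over â_0,...,â_K selects the same sequence.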

(20)

Assignment 2.5

• Prove the equivalence of 2 criteria :

  1:   min over â_0, ..., â_K of   ∫ | r(t) − √Es · Σ_k â_k · p'(t − k·Ts) |² dt

  2:   min over â_0, ..., â_K of   Es · Σ_k Σ_l â_k* · g_{l−k} · â_l − 2·√Es · Re{ Σ_k â_k* · u_k }

  (= generalization of assignment 2.4, but with `early stop')

(21)

Transmission of a symbol sequence (III)

• Realization :

   min over â_0, ..., â_K of   Es · Σ_k Σ_l â_k* · g_{l−k} · â_l − 2·√Es · Re{ Σ_k â_k* · u_k }

  [Block diagram: transmitter (√Es·a_k → transmit pulse p(t)) → channel h(t) → + n(t) (AWGN) → front-end filter [p'(−t)]* → sample at t=k·Ts → u_k → receiver decision device]

(22)

Transmission of a symbol sequence (IV)

Conclusion :

• For transmission of a symbol sequence over a linear channel (p'(t)=p(t)*h(t)) with AWGN, the optimal receiver consists of
  - matched filter front-end
  - sampling at t=k·Ts
  - decision device based on the discrete-time minimum distance criterion

• = different from the zero-ISI-forcing receiver of Lecture-3

(23)

Transmission of a symbol sequence (V)

PS: The matched filter front-end with symbol-rate sampling is remarkable :

- This is generally `below Nyquist-rate' sampling. Hence aliasing does not compromise performance, as long as the front-end filter is matched!

- The matched filter output sampled at t=k·Ts provides `sufficient statistics' (= summarizes r(t) into a finite set of samples that can be used instead of r(t) in any criterion of optimality, not just ML)

- Clock synchronization (`timing') is crucial (i.e. sampling at t=k·Ts+x does not provide sufficient statistics)

(24)

Transmission of a symbol sequence (VI)

• Complexity :
  - check M^K possible sequences
  - order K² multiplications for each sequence

   min over â_0, ..., â_K of   Es · Σ_k Σ_l â_k* · g_{l−k} · â_l − 2·√Es · Re{ Σ_k â_k* · u_k }

• Need for a cheaper algorithm :
  - whitened matched filter (WMF) implementation
  - MLSE/Viterbi Algorithm

(25)

--- This is the hard part --- (VII)

• Same criterion, but in matrix-vector form :

   min over â_0, ..., â_K of   Es · â^H · G · â − 2·√Es · Re{ â^H · u }

  with â = [â_0 â_1 ... â_K]^T , u = [u_0 u_1 ... u_K]^T and G the Hermitian Toeplitz matrix with entries G_{k,l} = g_{l−k} (where g_{−m} = g_m*) :

        [ g_0   g_1*  ...  g_K* ]
   G =  [ g_1   g_0   ...  :    ]
        [ :          ...   g_1* ]
        [ g_K   ...   g_1  g_0  ]

  ( ^H = `complex conjugate transpose' )

• Enabling result is the `Cholesky factorization' of G :

   G = L^H · L = (upper triangular) · (lower triangular)
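A small numeric sketch of this factorization (toy numbers of my own). numpy's `cholesky` returns the usual `lower·upper' form G = C·C^H, so to get the `upper·lower' form used here we factor the index-reversed matrix: with C = cholesky(G[::-1, ::-1]), U = C[::-1, ::-1] is upper triangular and G = U·U^H = L^H·L with L = U^H lower triangular.

```python
import numpy as np

g = np.array([2.0, 0.8, 0.3, 0.1])                 # toy autocorrelation g_0..g_3
G = np.array([[g[abs(k - l)] for l in range(4)] for k in range(4)])  # Toeplitz

C = np.linalg.cholesky(G[::-1, ::-1])               # standard lower·upper Cholesky
U = C[::-1, ::-1]                                   # upper triangular factor
L = U.conj().T                                      # lower triangular
assert np.allclose(L.conj().T @ L, G)               # G = L^H · L

# why this factorization helps: the criterion becomes a least-squares form,
#   Es·â^H·G·â − 2√Es·Re(â^H·u) = ‖√Es·L·â − L^(-H)·u‖² − u^H·G^(-1)·u
Es = 2.0
a_hat = np.array([1.0, -1.0, 1.0, -1.0])            # arbitrary candidate
u = np.array([0.5, -0.2, 0.1, 0.3])                 # toy matched-filter samples
lhs = Es * a_hat @ G @ a_hat - 2 * np.sqrt(Es) * (a_hat @ u)
v = np.linalg.solve(L.conj().T, u)                  # L^(-H)·u
rhs = np.sum((np.sqrt(Es) * L @ a_hat - v) ** 2) - u @ np.linalg.solve(G, u)
assert np.isclose(lhs, rhs)
```

The last term u^H·G^(-1)·u does not depend on the candidate â, which is why minimizing the 2-norm expression on the next slide is equivalent.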

(26)

--- This is the hard part --- (VIII)

• With this :

   min over â_0, ..., â_K of   ‖ √Es · L · [â_0 â_1 ... â_K]^T − L^(-H) · [u_0 u_1 ... u_K]^T ‖²   ( ‖·‖ = vector 2-norm )

• For K → ∞, all linear algebra operations (matrix-vector multiplications) may be interpreted as signal processing operations (filtering) …...

• For K → ∞, the `Cholesky factorization' of G corresponds to the `spectral factorization' of

   G(z) = Σ_k g_k · z^(−k) = L(z) · L*(1/z*)

(27)

--- This is the hard part --- (IX)

• For K → ∞ :

   min over â_0, ..., â_K of   ‖ √Es · L · â − L^(-H) · u ‖²  ,   G(z) = Σ_k g_k · z^(−k) = L(z) · L*(1/z*)

  - Multiplication with L (lower triangular) corresponds to filtering with the causal & minimum-phase filter L(z)
  - Multiplication with L^(-H) (upper triangular) corresponds to filtering with the anti-causal filter 1/L*(1/z*)

(28)

Transmission of a symbol sequence (X)

Procedure :

• Matched filter front-end [p'(−t)]*  ( p'(t)=p(t)*h(t) )
• Symbol-rate sampling → u_k
• Anti-causal `pre-cursor equalizer' 1/L*(1/z*) → y_k
• It is shown that

   y_k = √Es · L(z) · a_k + w_k = √Es · ( h_0 + h_1·z^(−1) + h_2·z^(−2) + ... + h_N·z^(−N) ) · a_k + w_k

  where w_k is shown to be discrete-time AWGN (!)
  = `Whitened matched filter front-end'

(29)

Transmission of a symbol sequence (XI)

• Resulting optimization problem is

   y_k = √Es · ( h_0 + h_1·z^(−1) + h_2·z^(−2) + h_3·z^(−3) + ... ) · a_k + w_k

   min over â_0, ..., â_K of   Σ_k | y_k − √Es · Σ_m h_m · â_{k−m} |²

  = given the channel impulse response & channel outputs, reconstruct the channel inputs
  = Maximum-Likelihood Sequence Estimation (MLSE)
  = equivalent with the criterion on page 26

(30)

Transmission of a symbol sequence (XII)

• Realization :

  [Block diagram: transmitter (√Es·a_k → transmit pulse p(t)) → channel h(t) → + n(t) (AWGN) → front-end filter [p'(−t)]* → sample at t=k·Ts → u_k → 1/L*(1/z*) → y_k → MLSE]

   min over â_0, ..., â_K of   Σ_k | y_k − √Es · Σ_m h_m · â_{k−m} |²

(31)

Transmission of a symbol sequence (XIII)

Viterbi Algorithm (A. Viterbi 1967)

• = recursive (dynamic programming) algorithm to solve

   min over â_0, ..., â_K of   Σ_k | y_k − √Es · Σ_m h_m · â_{k−m} |²

• if the channel length is N ( h_k = 0 for k > N ), complexity is of order M^N per sample (i.e. independent of K !)
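A compact Viterbi/MLSE sketch (a minimal implementation of my own, assuming a real alphabet, real channel taps h_0..h_N with memory N ≥ 1, and an unknown initial state): states are the N most recent symbols, the branch metric is |y_k − √Es·Σ_m h_m·â_{k−m}|², and the per-sample work is of order M^N, independent of K.

```python
import itertools
import numpy as np

def viterbi_mlse(y, h, alphabet, Es=1.0):
    N = len(h) - 1                                        # channel memory (N >= 1)
    states = list(itertools.product(alphabet, repeat=N))  # (a_k, ..., a_{k-N+1})
    cost = {s: 0.0 for s in states}                       # unknown initial state
    path = {s: [] for s in states}                        # survivor sequences
    for yk in y:
        new_cost, new_path = {}, {}
        for s in states:                                  # state after emitting a_k
            best, best_prev = np.inf, None
            for a_old in alphabet:                        # symbol leaving the memory
                prev = s[1:] + (a_old,)                   # predecessor state
                full = s + (a_old,)                       # (a_k, ..., a_{k-N})
                pred = np.sqrt(Es) * sum(h[m] * full[m] for m in range(N + 1))
                c = cost[prev] + (yk - pred) ** 2         # branch + path metric
                if c < best:
                    best, best_prev = c, prev
            new_cost[s], new_path[s] = best, path[best_prev] + [s[0]]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]                  # survivor of best state

# noiseless sanity check on a toy ISI channel h(z) = 1 + 0.5·z^(-1)
h = [1.0, 0.5]
a = [1, -1, -1, 1]
y = [sum(h[m] * (a[k - m] if k - m >= 0 else 0) for m in range(2)) for k in range(4)]
print(viterbi_mlse(y, h, (-1, 1)))                        # → [1, -1, -1, 1]
```

For M = 2 and N = 1 this trellis has only 2 states; the exhaustive search over M^K sequences is replaced by K stages of M^(N+1) metric evaluations.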

(32)

Conclusions

• Optimal receiver structures :
  minimum-BER -> MAP -> ML -> minimum-distance

• 1 symbol over linear filter AWGN channel :
  - matched filter front-end
  - symbol-rate sampling
  - memoryless decision device

• Symbol sequence over linear filter AWGN channel :
  - matched filter front-end
  - symbol-rate sampling
  - pre-cursor equalization filter (-> `whitened matched filter')
  - MLSE/Viterbi

(33)

Assignment 2

• 2.1 : See Lecture-3 : passband demodulation

• 2.2 : See Lecture-3 : zero-ISI-forcing design

• 2.3 : See Lecture-4 : minimum-BER vs. MAP

• 2.4 : See Lecture-4 : equivalence of 2 criteria for detection of 1 symbol over linear channel with AWGN

• 2.5 : See Lecture-4 : equivalence of 2 criteria for detection of symbol sequence over linear channel with AWGN

(34)

Assignment 2

• 2.6 : Self-study : Pick your favorite digital communications textbook, and study the chapter on the Viterbi algorithm

(e.g. Lee & Messerschmitt, section 9.6)

ps: Viterbi algorithm also used in channel coding, speech recognition, …

hence worth additional study... !
