Katholieke Universiteit Leuven

Departement Elektrotechniek ESAT-SISTA/TR 1999-52

A Comparison between Iterative block-LMS and Partial Rank Algorithm¹

Koen Eneman, Simon Doclo, Marc Moonen²

March 1999

Internal report ESAT-SISTA

¹ This report is available by anonymous ftp from ftp.esat.kuleuven.ac.be in the directory pub/SISTA/eneman/reports/99-52.ps.gz

² ESAT (SISTA) - Katholieke Universiteit Leuven, Kardinaal Mercierlaan 94, 3001 Leuven (Heverlee), Belgium, Tel. 32/16/321809, Fax 32/16/321970, WWW: http://www.esat.kuleuven.ac.be/sista. E-mail: koen.eneman@esat.kuleuven.ac.be, simon.doclo@esat.kuleuven.ac.be. Simon Doclo is a Research Assistant supported by the I.W.T. Marc Moonen is a Research Associate with the F.W.O. Vlaanderen. This research work was carried out at the ESAT laboratory of the Katholieke Universiteit Leuven, in the frame of the concerted research action MIPS ('Model-based Information Processing Systems') of the Flemish Government, the F.W.O. research project nr. G.0295.97, the IUAP program P4-02 (1997-2001) ('Modeling, Identification, Simulation and Control of Complex Systems'), the IT-project Multimicrophone Signal Enhancement Techniques for Hands-free Telephony and Voice Controlled Systems (AUT/970517/Philips ITCL) of the I.W.T., and was partially sponsored by Philips-ITCL. The scientific responsibility is assumed by its authors.


A Comparison between Iterative block-LMS and Partial Rank Algorithm

Koen Eneman, Simon Doclo, Marc Moonen

March 17, 1999


1 Introduction

In this report, we examine the relation between two adaptive filtering techniques: iterative block-LMS [1] and PRA (Partial Rank Algorithm) [2][3]. It is shown that iterative block-LMS approaches PRA with stepsize 1 if the number of iterations goes to infinity. This is a simple block extension of the fact that the normalised LMS algorithm can be considered as the limiting result of reiterating the LMS algorithm [4]. In each iteration of block-LMS, the a-priori (and a-posteriori) error decreases, and the difference between the actual and the calculated filter coefficients decreases.

The computational complexity for both algorithms is compared.

2 Notation



• The size of each block is $L$ and the filter length is $N$.

• The input vector at time $k$ is denoted by $\mathbf{x}_k = [x_k \; x_{k-1} \; \ldots \; x_{k-N+1}]^T$.

• In block-LMS as well as in PRA, we consider $L$ consecutive input vectors, such that the input data matrix $\mathbf{X}_n \in \mathbb{R}^{N \times L}$ for block $n$ is a Toeplitz matrix,

$$\mathbf{X}_n = [\mathbf{x}_{nL} \; \mathbf{x}_{nL+1} \; \ldots \; \mathbf{x}_{nL+L-1}] =
\begin{bmatrix}
x_{nL} & x_{nL+1} & \ldots & x_{nL+L-1} \\
x_{nL-1} & x_{nL} & \ldots & x_{nL+L-2} \\
\vdots & \vdots & & \vdots \\
x_{nL-N+1} & x_{nL-N+2} & \ldots & x_{nL-N+L}
\end{bmatrix}. \tag{1}$$

• For block-LMS, the adaptive filter of length $N$ for block $n$ at iteration step $r$ is denoted by $\mathbf{w}_n^{(r)}$.

• For PRA, the adaptive filter of length $N$ for block $n$ is denoted by $\mathbf{w}_n$.

• The desired signal for block $n$ is denoted by $\mathbf{d}_n = [d_{nL} \; d_{nL+1} \; \ldots \; d_{nL+L-1}]^T$.

• The error signal for block $n$ is denoted by $\mathbf{e}_n = [e_{nL} \; e_{nL+1} \; \ldots \; e_{nL+L-1}]^T$.
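As a side note, the Toeplitz data matrix of equation (1) is straightforward to build in code. The following is a minimal numpy sketch; the function name and the zero-padding assumption are ours, not part of the report:

```python
import numpy as np

def data_matrix(x, n, N, L):
    # Build the N x L Toeplitz input data matrix X_n of eq. (1).
    # Column j holds [x_{nL+j}, x_{nL+j-1}, ..., x_{nL+j-N+1}]^T.
    # The signal x is assumed long enough (e.g. zero-padded) so that
    # all indices nL - N + 1 ... nL + L - 1 are valid.
    cols = [x[n*L + j - N + 1 : n*L + j + 1][::-1] for j in range(L)]
    return np.stack(cols, axis=1)
```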

3 Block-LMS and PRA

Block-LMS and PRA (like many adaptive filtering algorithms) consist of two parts:

1. calculation of the a-priori error

2. weight update using this a-priori error


3.1 Calculation of the a-priori error

The calculation of the a-priori error is the same for both algorithms. For each element in $\mathbf{e}_n^{pri}$, we can write

$$e_k^{pri} = d_k - \mathbf{x}_k^T \mathbf{w}_n, \qquad k = nL, \ldots, nL+L-1,$$

such that

$$\mathbf{e}_n^{pri} = \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n. \tag{2}$$

3.2 Weight update

The weight update formula for block-LMS has the same form as the (well-known) weight update formula for LMS, except for the fact that the mean gradient over a block of size $L$ is calculated and that the filters are updated every $L$ samples:

$$\mathbf{w}_{n+1} = \mathbf{w}_n + \mu \sum_{k=nL}^{nL+L-1} \mathbf{x}_k e_k^{pri} = \mathbf{w}_n + \mu \mathbf{X}_n \mathbf{e}_n^{pri} = \mathbf{w}_n + \mu \mathbf{X}_n \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n \right). \tag{3}$$

For the same block of $L$ samples, we can as well iterate this formula:

$$\mathbf{w}_n^{(0)} = \mathbf{w}_n \tag{4}$$

$$\mathbf{w}_n^{(r+1)} = \mathbf{w}_n^{(r)} + \mu \mathbf{X}_n \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(r)} \right) \tag{5}$$

Block-LMS for which equation (5) is iterated is called iterative block-LMS or block row action projection (BRAP) [5]. The different steps of the algorithm are summarised in table 1.
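For concreteness, the iteration (4)-(5) maps directly onto a few lines of code. This is a minimal numpy sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def brap_block_update(w, X, d, mu, R):
    # Iterative block-LMS (BRAP): reiterate the block-LMS update (5)
    # R times on the same data block. X is the N x L data matrix of
    # eq. (1), d the desired-signal block, mu the stepsize.
    for _ in range(R):
        e_pri = d - X.T @ w      # a-priori error, eq. (2)
        w = w + mu * X @ e_pri   # mean-gradient step, eq. (5)
    return w
```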

The partial rank algorithm (PRA) is a block version of the affine projection algorithm (APA) [5]. The weight update equation is an extension of equation (3). A normalisation step is added, resulting in the following weight update equation:

$$\mathbf{w}_{n+1} = \mathbf{w}_n + \alpha \mathbf{X}_n \left( \mathbf{X}_n^T \mathbf{X}_n + \delta \mathbf{I}_L \right)^{-1} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n \right). \tag{6}$$

If $L = 1$, PRA reduces to NLMS. The steps of the PRA algorithm are summarised in table 2.
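A corresponding sketch of the PRA update (6), again under hypothetical naming of ours; note that in practice the inverse is best replaced by a linear solve (see section 5.2):

```python
import numpy as np

def pra_block_update(w, X, d, alpha=1.0, delta=0.0):
    # PRA weight update, eq. (6). For L = 1, X is a single column
    # and the update reduces to NLMS.
    L = X.shape[1]
    e_pri = d - X.T @ w
    # solve (X^T X + delta*I) p = e instead of forming the inverse
    p = np.linalg.solve(X.T @ X + delta * np.eye(L), e_pri)
    return w + alpha * X @ p
```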

4 The Relation between iterative block-LMS and PRA

4.1 Weight update

Theorem 1. For the $r$th iteration, the weights $\mathbf{w}_n^{(r)}$ can be written as a function of $\mathbf{w}_n^{(0)}$ as

$$\mathbf{w}_n^{(r)} = \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right). \tag{7}$$

Proof: For $r = 1$, the proof is trivial and follows from equation (3). Suppose equation (7) holds for $\mathbf{w}_n^{(r)}$; then we can prove that

$$\begin{aligned}
\mathbf{w}_n^{(r+1)} &= \mathbf{w}_n^{(r)} + \mu \mathbf{X}_n \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(r)} \right) \\
&= \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \\
&\quad + \mu \mathbf{X}_n \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} - \mu \mathbf{X}_n^T \mathbf{X}_n \left[ \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \right) \\
&= \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l + \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \\
&= \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \mathbf{I}_L + \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right) \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \\
&= \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \mathbf{I}_L + \sum_{l=1}^{r} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \\
&= \mathbf{w}_n^{(0)} + \mu \mathbf{X}_n \left[ \sum_{l=0}^{r} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right).
\end{aligned}$$

Theorem 2. The iteration described in equations (4) and (5) converges to

$$\mathbf{w}_n^{(\infty)} = \mathbf{w}_n^{(0)} + \mathbf{X}_n \left( \mathbf{X}_n^T \mathbf{X}_n \right)^{-1} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right), \tag{8}$$

if $\mu$ is chosen such that $\mu < \frac{2}{\lambda_{max}}$, with $\lambda_{max}$ the maximum eigenvalue of $\mathbf{X}_n^T \mathbf{X}_n$.

Proof: If we take the eigendecomposition $\mathbf{X}_n^T \mathbf{X}_n = \mathbf{Q}^T \mathbf{\Lambda} \mathbf{Q}$, then we can write

$$\sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{X}_n^T \mathbf{X}_n \right)^l = \sum_{l=0}^{r-1} \left( \mathbf{Q}^T \mathbf{Q} - \mu \mathbf{Q}^T \mathbf{\Lambda} \mathbf{Q} \right)^l = \mathbf{Q}^T \left[ \sum_{l=0}^{r-1} \left( \mathbf{I}_L - \mu \mathbf{\Lambda} \right)^l \right] \mathbf{Q} = \mathbf{Q}^T \operatorname{diag}\!\left( \sum_{l=0}^{r-1} (1 - \mu \lambda_j)^l \right) \mathbf{Q} \overset{\lambda_j \neq 0}{=} \mathbf{Q}^T \operatorname{diag}\!\left( \frac{1 - (1 - \mu \lambda_j)^r}{\mu \lambda_j} \right) \mathbf{Q}, \tag{9}$$

such that equation (7) becomes

$$\mathbf{w}_n^{(r)} = \mathbf{w}_n^{(0)} + \mathbf{X}_n \mathbf{Q}^T \operatorname{diag}\!\left( \frac{1 - (1 - \mu \lambda_j)^r}{\lambda_j} \right) \mathbf{Q} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right). \tag{10}$$

For $r$ going to $\infty$, equation (10) becomes

$$\lim_{r \to \infty} \mathbf{w}_n^{(r)} = \mathbf{w}_n^{(0)} + \mathbf{X}_n \mathbf{Q}^T \operatorname{diag}\!\left( \lim_{r \to \infty} \frac{1 - (1 - \mu \lambda_j)^r}{\lambda_j} \right) \mathbf{Q} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) = \mathbf{w}_n^{(0)} + \mathbf{X}_n \mathbf{Q}^T \operatorname{diag}\!\left( \frac{1}{\lambda_j} \right) \mathbf{Q} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \tag{11}$$

under the condition that $|1 - \mu \lambda_j| < 1 \;\; \forall j$, which is satisfied if $(1 - \mu \lambda_{max}) > -1$ (assuming $\mu > 0$), i.e. $\mu < \frac{2}{\lambda_{max}}$. We can rewrite equation (11) as

$$\mathbf{w}_n^{(\infty)} = \mathbf{w}_n^{(0)} + \mathbf{X}_n \mathbf{Q}^T \mathbf{\Lambda}^{-1} \mathbf{Q} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) = \mathbf{w}_n^{(0)} + \mathbf{X}_n \left( \mathbf{X}_n^T \mathbf{X}_n \right)^{-1} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right), \tag{12}$$

which is the same weight update formula as for PRA with $\alpha = 1$ and $\delta = 0$ (see equation (6)).
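This limit is easy to check numerically. The sketch below (test sizes and seed are arbitrary choices of ours) iterates equation (5) and compares the result with the projection update (8); the last line anticipates the a-posteriori error result of section 4.2:

```python
import numpy as np

# Numerical check of Theorem 2: reiterating the block-LMS update with
# mu < 2/lambda_max converges to the projection update (8), i.e. PRA
# with alpha = 1 and delta = 0.
rng = np.random.default_rng(0)
N, L = 8, 3
X = rng.standard_normal((N, L))
d = rng.standard_normal(L)
w0 = rng.standard_normal(N)

lam_max = np.linalg.eigvalsh(X.T @ X).max()
mu = 1.0 / lam_max                      # satisfies mu < 2/lambda_max

w = w0.copy()
for _ in range(1000):                   # iterate eq. (5)
    w = w + mu * X @ (d - X.T @ w)

w_inf = w0 + X @ np.linalg.solve(X.T @ X, d - X.T @ w0)  # eq. (8)
print(np.allclose(w, w_inf))            # True
print(np.allclose(d - X.T @ w, 0))      # a-posteriori error -> 0, eq. (16)
```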

4.2 A-posteriori error

The a-posteriori error is defined as

$$\mathbf{e}_n^{post} = \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_{n+1} = \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(R)}. \tag{13}$$

After $R$ iterations with block-LMS the a-posteriori error is

$$\begin{aligned}
\mathbf{e}_n^{post} &= \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(R)} \\
&= \mathbf{d}_n - \mathbf{X}_n^T \left[ \mathbf{w}_n^{(0)} + \mathbf{X}_n \mathbf{Q}^T \operatorname{diag}\!\left( \frac{1 - (1 - \mu \lambda_j)^R}{\lambda_j} \right) \mathbf{Q} \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \right] \\
&= \left[ \mathbf{I}_L - \mathbf{Q}^T \mathbf{\Lambda} \mathbf{Q} \mathbf{Q}^T \operatorname{diag}\!\left( \frac{1 - (1 - \mu \lambda_j)^R}{\lambda_j} \right) \mathbf{Q} \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) \tag{14} \\
&= \left[ \mathbf{I}_L - \mathbf{Q}^T \operatorname{diag}\!\left( 1 - (1 - \mu \lambda_j)^R \right) \mathbf{Q} \right] \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right). \tag{15}
\end{aligned}$$

For $R$ going to $\infty$, the a-posteriori error becomes zero:

$$\lim_{R \to \infty} \mathbf{e}_n^{post} = \left( \mathbf{I}_L - \mathbf{Q}^T \mathbf{Q} \right) \left( \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^{(0)} \right) = \mathbf{0}, \tag{16}$$

as could be expected: iterative block-LMS approaches PRA with $\alpha = 1$ and $\delta = 0$, and PRA is defined such that the a-posteriori error is zero.

5 Computational complexity

5.1 Iterative block-LMS

The formulas defining iterative block-LMS (BRAP) and the corresponding cost are given in table 1.


for each block of $L$ samples do
    $\mathbf{X}_n = \begin{bmatrix} x_{nL} & x_{nL+1} & \ldots & x_{nL+L-1} \\ x_{nL-1} & x_{nL} & \ldots & x_{nL+L-2} \\ \vdots & \vdots & & \vdots \\ x_{nL-N+1} & x_{nL-N+2} & \ldots & x_{nL-N+L} \end{bmatrix}$
    $\mathbf{w}_n^{(0)} = \mathbf{w}_n$
    $\mathbf{d}_n = [d_{nL} \; \ldots \; d_{nL+L-1}]^T$
    for $r = 0$ to $R-1$ do
        $\mathbf{y}_n^{(r)} = \mathbf{X}_n^T \mathbf{w}_n^{(r)}$ ................ cost: $2LN - L$
        $\mathbf{e}_n^{(r)} = \mathbf{d}_n - \mathbf{y}_n^{(r)}$ ................ cost: $L$
        $\mathbf{w}_n^{(r+1)} = \mathbf{w}_n^{(r)} + \mu \mathbf{X}_n \mathbf{e}_n^{(r)}$ ... cost: $2LN + L$
    end
    $\mathbf{w}_{n+1} = \mathbf{w}_n^{(R)}$
end

Table 1: Block Row Action Projection (BRAP)

The total cost per sample for iterative block-LMS (BRAP) (see table 1) is

$$4NR + R. \tag{17}$$

5.2 Partial Rank Algorithm

The formulas defining PRA are given in table 2. In order to obtain the vector $\mathbf{p}_n$, the inverse of a regularised symmetric matrix $\mathbf{M}_n + \delta \mathbf{I}_L = \mathbf{X}_n^T \mathbf{X}_n + \delta \mathbf{I}_L$ has to be computed. As data is shifted block by block, efficient QR-based updating of $\mathbf{X}_n^T \mathbf{X}_n$ is not possible for $L > 1$. However, $\mathbf{X}_n^T \mathbf{X}_n$ can be updated recursively¹:

$$\mathbf{M}_n = \mathbf{X}_n^T \mathbf{X}_n = \left[ \mathbf{X}_n^{0\,T} \;\; \mathbf{X}_n^{1\,T} \;\; \ldots \;\; \mathbf{X}_n^{N/L-1\,T} \right] \begin{bmatrix} \mathbf{X}_n^0 \\ \mathbf{X}_n^1 \\ \vdots \\ \mathbf{X}_n^{N/L-1} \end{bmatrix} \tag{18}$$

$$= \mathbf{M}_{n-1} + \mathbf{X}_n^{0\,T} \mathbf{X}_n^0 - \mathbf{X}_n^{N/L\,T} \mathbf{X}_n^{N/L}. \tag{19}$$

¹ assuming $N$ is a multiple of $L$ and defining $\mathbf{X}_n^k = \begin{bmatrix} x_{nL-kL} & \ldots & x_{nL+L-1-kL} \\ \vdots & & \vdots \\ x_{nL-L+1-kL} & \ldots & x_{nL-kL} \end{bmatrix}$

for each block of $L$ samples do
    $\mathbf{X}_n^k = \begin{bmatrix} x_{nL-kL} & \ldots & x_{nL+L-1-kL} \\ \vdots & & \vdots \\ x_{nL-L+1-kL} & \ldots & x_{nL-kL} \end{bmatrix}, \quad k = 0, \ldots, \frac{N}{L}$
    $\mathbf{X}_n = \begin{bmatrix} \mathbf{X}_n^0 \\ \mathbf{X}_n^1 \\ \vdots \\ \mathbf{X}_n^{N/L-1} \end{bmatrix}$
    $\mathbf{d}_n = [d_{nL} \; \ldots \; d_{nL+L-1}]^T$
    $\mathbf{y}_n = \mathbf{X}_n^T \mathbf{w}_n$ ................................. cost: $2LN - L$
    $\mathbf{e}_n = \mathbf{d}_n - \mathbf{y}_n$ ................................. cost: $L$
    $\mathbf{M}_n = \mathbf{M}_{n-1} + \mathbf{X}_n^{0\,T} \mathbf{X}_n^0 - \mathbf{X}_n^{N/L\,T} \mathbf{X}_n^{N/L}$ ... cost: $(2L+1)L(L+1)$
    $\mathbf{p}_n = (\mathbf{M}_n + \delta \mathbf{I}_L)^{-1} \mathbf{e}_n$ .................... cost: $\frac{2L^3}{3} + L^2 + L$
    $\mathbf{w}_{n+1} = \mathbf{w}_n + \mathbf{X}_n \mathbf{p}_n$ ........................ cost: $2LN + L$
end

Table 2: Partial Rank Algorithm (PRA)

This reduces the cost substantially if $N/L$ is large. $\mathbf{p}_n = (\mathbf{M}_n + \delta \mathbf{I}_L)^{-1} \mathbf{e}_n$ can be computed efficiently by solving the linear set of equations $(\mathbf{M}_n + \delta \mathbf{I}_L)\, \mathbf{p}_n = \mathbf{e}_n$ using Gauss elimination (cost: $\frac{2L^3}{3}$) and back substitution (cost: $L^2$). The total cost per sample for PRA (see table 2) is

$$4N + \frac{8L^2}{3} + 4L + 3. \tag{20}$$
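The per-block computations of table 2 can be sketched as follows (a hypothetical implementation of ours, with an arbitrary regularisation value; `X_blocks` holds the $L \times L$ blocks $\mathbf{X}_n^0, \ldots, \mathbf{X}_n^{N/L}$, the last of which is the block that drops out of $\mathbf{X}_n$):

```python
import numpy as np

def pra_step(w, M_prev, X_blocks, d, delta=1e-8):
    # One PRA block step (table 2). M_n is updated recursively as in
    # eq. (19) instead of being recomputed, and p_n is obtained by
    # solving a linear system rather than by explicit inversion.
    L = d.shape[0]
    X = np.vstack(X_blocks[:-1])                    # X_n, N x L
    e = d - X.T @ w                                 # a-priori error
    M = (M_prev + X_blocks[0].T @ X_blocks[0]
                - X_blocks[-1].T @ X_blocks[-1])    # eq. (19)
    p = np.linalg.solve(M + delta * np.eye(L), e)   # Gauss elim. + back subst.
    return w + X @ p, M
```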

5.3 BRAP vs. PRA

Whereas the cost of BRAP is independent of the block length, PRA becomes more expensive for large values of $L$. For small values of $R$, BRAP will be cheaper than PRA. The more iterations $R$ are performed with BRAP, the more expensive it becomes, but the better it approaches the convergence behaviour of PRA (see section 6). We now compute $R_{crit}$, the value of $R$ for which BRAP is as expensive as PRA. By combining equations (17) and (20), we obtain

$$R_{crit} = \frac{4N + \frac{8L^2}{3} + 4L + 3}{4N + 1}. \tag{21}$$
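Equation (21) is cheap to evaluate; the following snippet reproduces the $R_{crit}$ values quoted in section 6:

```python
def r_crit(N, L):
    # Critical number of BRAP iterations, eq. (21): for R above this
    # value, BRAP costs more per sample than PRA.
    return (4*N + 8*L**2/3 + 4*L + 3) / (4*N + 1)

print(round(r_crit(100, 20), 2))  # 3.86
print(round(r_crit(100, 3), 2))   # 1.09
```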

For long adaptive filters, $N \gg L$. By taking the limit for $N \to \infty$ we find

$$R_{crit}^{\infty} = 1. \tag{22}$$

As could be expected, for small block lengths $L$, $R_{crit}$ is small and PRA outperforms BRAP both in terms of convergence and cost.

The effect of the parameter $L$ on the convergence of PRA is shown in figures 1 and 2. Both coloured noise and a music signal are filtered with an artificial path and identified with PRA. The block length $L$ was varied from 1 (= NLMS) to 100. Large values of $L$ lead to better steady-state suppression and faster initial convergence. For large values of $L$, $R_{crit}$ will also be considerable, and BRAP with $R < R_{crit}$ might be employed to obtain a cheap adaptive algorithm with a performance comparable to that of PRA.

Figure 1: Error between the filter coefficients and the true path as a function of time for PRA, for block lengths $L$ = 1 (= NLMS), 2, 3, 10, 20, 50, 100. Coloured stationary noise was filtered with a 150-taps FIR filter. The adaptive filter length $N$ is 150.

As ever, there exists a trade-off between good performance at a high cost (PRA) and reduced performance at a lower cost (BRAP with $R < R_{crit}$). The optimal $R$ can be determined using the following heuristic:

• increase $R$ by making a trade-off between the expected improvement in performance and the extra cost, taking into account

  - the available processor power

  - the additional cost and improvement in performance when choosing PRA

• if $R \geq R_{crit}$, choose PRA

Some performance tests are needed to estimate the expected improvement in performance for BRAP when $R$ is increased.

Figure 2: Error between the filter coefficients and the true path as a function of time for PRA, for block lengths $L$ = 1 (= NLMS), 2, 3, 10, 20, 50, 100. A music signal was filtered with a 1200-taps FIR filter. The adaptive filter length is 1000.

6 Simulations

Coloured stationary noise was filtered with a 100-taps FIR filter. PRA and BRAP with $R = 1, 2, 3, 10, 20, 50, 100$ are compared. The length of the adaptive filters $N$ is 100. In a first experiment the block length $L$ was chosen to be 20. For this value of $L$, $R_{crit} = 3.86$. Figure 3 shows the evolution of the filter coefficients. Figures 4 and 5 show the evolution of the a-priori and a-posteriori errors, respectively. In a second experiment the block length $L$ was chosen to be 3. Figure 6 shows the evolution of the filter coefficients. For this value of $L$, $R_{crit} = 1.09$. Figures 7 and 8 show the evolution of the a-priori and a-posteriori errors, respectively.

7 Conclusion

In this report we have shown that the partial rank algorithm (PRA) can be considered as the limiting result of reiterating the block-LMS algorithm (BRAP) with the same data. As a consequence, for BRAP, the convergence of the adaptive weights and the a-priori error is inferior to that of PRA. For a small number of iterations the computational cost of BRAP is lower than that of PRA. This means that for large block sizes and limited processing power iterative block-LMS can be a viable alternative to PRA.

Figure 3: Error between the filter coefficients and the true path as a function of time for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 20. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.

8 Acknowledgements

Simon Doclo is a Research Assistant supported by the I.W.T. Marc Moonen is a Research Associate with the F.W.O. Vlaanderen. This research work was carried out at the ESAT laboratory of the Katholieke Universiteit Leuven, in the frame of the concerted research action MIPS ('Model-based Information Processing Systems') of the Flemish Government, the F.W.O. research project nr. G.0295.97, the IUAP program P4-02 (1997-2001) ('Modeling, Identification, Simulation and Control of Complex Systems'), the IT-project Multimicrophone Signal Enhancement Techniques for Hands-free Telephony and Voice Controlled Systems (AUT/970517/Philips ITCL) of the I.W.T., and was partially sponsored by Philips-ITCL. The scientific responsibility is assumed by its authors.

References

[1] S. Haykin, Adaptive Filter Theory, Information and System Sciences Series, Prentice Hall, 3rd edition, 1996.

[2] K. Ozeki and T. Umeda, "An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties", Electron. Commun. Japan, vol. 67-A, pp. 19-27, 1984.

Figure 4: Evolution of the a-priori errors (in dB) for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 20. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.

[3] S. G. Kratzer and D. R. Morgan, "The partial-rank algorithm for adaptive beamforming", in Proc. SPIE Int. Soc. Optic. Eng., vol. 564, pp. 9-14, 1985.

[4] D. R. Morgan and S. G. Kratzer, "On a Class of Computationally Efficient, Rapidly Converging, Generalized NLMS Algorithms", IEEE Signal Processing Letters, vol. 3, no. 8, pp. 245-247, Aug. 1996.

[5] S. Gay, Fast Projection Algorithms with Application to Voice Echo Cancellation, PhD thesis, Rutgers, The State University of New Jersey, New Brunswick, New Jersey, USA, October 1994.

Figure 5: Evolution of the a-posteriori errors (in dB) for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 20. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.

Figure 6: Error between the filter coefficients and the true path as a function of time for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 3. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.

Figure 7: Evolution of the a-priori errors (in dB) for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 3. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.

Figure 8: Evolution of the a-posteriori errors (in dB) for BRAP ($R$ = 1 (= block-LMS), 2, 3, 10, 50, 100) and PRA, $L$ = 3. Coloured stationary noise was filtered with a 100-taps FIR filter. The adaptive filter length $N$ is 100.
