Coding for a computer network

Citation for published version (APA):

Schalkwijk, J. P. M. (1974). Coding for a computer network. (EUT report. E, Fac. of Electrical Engineering; Vol. 74-E-52). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1974

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)



VAKGROEP TELECOMMUNICATIE GROUP TELECOMMUNICATIONS

Coding for a computer network

by

J.P.M. Schalkwijk

TH-Report 74-E-52 October 1974


II. THE DUPLEX STRATEGY
III. CONVOLUTIONAL CODES
IV. RECURSIVE CODING
APPENDIX A. DUPLEX CODING
APPENDIX B. WEIGHT ENUMERATION


INTRODUCTION

Four previous papers [1,2,3,4] were concerned with the theoretical development of a basic feedback strategy, multiple repetition coding (MRC), that was first described by Schalkwijk in the IEEE Transactions on Information Theory, May 1971. The multiple repetition strategy can be used in a block coding fashion or, alternatively, as a recursive code where codeword digits are estimated D (the coding delay) time units after their respective arrivals at the receiver. The block coding version of MRC has been extensively analysed [1,2]. The recursive coding version of MRC has been analysed from a probability of error point of view [3,4], but not as far as the maximum number of errors permissible is concerned.

The objectives of the present report are two-fold. First, to study the application of the block coding version of MRC in a duplex (noisy feedback) situation. Second, to finish our theoretical analysis of the recursive coding version of MRC referred to above, i.e. to determine the error correcting capabilities of the binary recursive signalling scheme. Chapter II and appendix A deal with the duplex strategy in detail. Chapter II, in fact, was presented at the NATO Advanced Institute on New Directions in Signal Processing in Communication & Control, August 5-17, 1974, Darlington, Durham, England. In the remainder of this introductory chapter we will emphasize the practical significance of our duplex strategy. Chapter III gives some results on convolutional codes that are relevant to chapter II. Chapter IV and appendix B, finally, deal with the error correcting capabilities of the recursive signalling version of MRC.


The basic idea of our duplex strategy, see also chapter II, is the following. Fig. 1 shows two stations A and B connected

a. Channel BA inoperative   b. Channel BA partially operative

Fig. 1. Two stations A and B connected by a duplex channel; A is active.

by a duplex channel, A being the active station, i.e. momentarily the information flow is from A to B. In Fig. 1a the check digits C1 form a significant part of the total number of digits on the AB channel, while the BA channel is inoperative. In Fig. 1b the check digits C1 are formed at the passive station B, instead, and are subsequently returned to the active station A via the previously inoperative BA channel. At the active station A we can now determine the transmission errors in the information I1 transmitted over the AB channel. These errors can then be corrected with fewer (than C1) redundant digits RC1, using MRC. We have thus used the previously inoperative BA channel to increase the information rate from the active station A to the passive station B.

Aside from increasing the transmission rate from the active station to the passive station without suffering an increase in error rate, our duplex strategy has another, even more important aspect. In one-way error control the decoder complexity is at the passive station. By returning the check digits C1 to the active station, the decoder complexity moves from the passive station to the active station. Now consider the situation represented in Fig. 2. Assume that the central

Fig. 2. Starnet with central computer and satellites.

computer has an elaborate decoder. For information flowing from a satellite computer to the central facility we use normal one-way error control with the complex decoder at the central location. However, for information flowing from the central facility to a satellite computer we use our duplex strategy and again the complexity is at the central facility. Whereas previously we had a complex decoder at each satellite, we now use the decoder in the central computer once as a regular one-way decoder and once as an integral part of the duplex scheme. In this way we save as many decoders as there are satellite computers!

One final comment pertains to the situation where information is

simultaneously flowing from A to B and from B to A, see Fig. 3. It is obvious that in this case double one-way


a. double one-way operation   b. duplex operation

Fig. 3. Simultaneous information flows A→B and B→A.

operation, Fig. 3a, is more efficient than duplex operation as in Fig. 3b. Whereas in Fig. 3a we send I+C digits in each direction, in Fig. 3b we need I+C+RC digits in each direction, i.e. more digits for the same information throughput and the same error rate. After these preliminary remarks we will now proceed to the more detailed results on the duplex strategy as presented in chapter II and appendix A.


1. J.P.M. Schalkwijk, "A class of simple and optimal strategies for block coding on the binary symmetric channel with noiseless feedback", IEEE Trans. Inform. Theory, vol. IT-17, pp. 283-287, May 1971.

2. D.W. Becker and J.P.M. Schalkwijk, "A simple class of asymptotically optimum block coding strategies for the m-ary symmetric channel", IEEE Trans. Inform. Theory, to be published.

3. J.P.M. Schalkwijk and K.A. Post, "On the error probability for a class of binary recursive feedback strategies", IEEE Trans. Inform. Theory, vol. IT-19, pp. 498-511, July 1973.

4. J.P.M. Schalkwijk and K.A. Post, "Correction to, On the error probability for a class of binary recursive feedback strategies", IEEE Trans. Inform. Theory, vol. IT-20, p. 284, March 1974.


CODING FOR DUPLEX CHANNELS

ABSTRACT

J.P.M. Schalkwijk

The author is with the Department of Electrical Engineering, Technological University of Eindhoven, Eindhoven, The Netherlands.

In this paper we describe a coding strategy for memoryless duplex channels that allows each station in turn to transmit at a rate approaching channel capacity, and with a performance corresponding to that of an R=1/2 systematic (convolutional) code.

I. INTRODUCTION

Consider two stations A and B connected by a duplex channel, see Fig. 1. At each arrow tail one finds a transmitter. The receivers are at the arrow heads. One can now distinguish between two situations.

Fig. 1. Two stations A and B connected by a duplex channel.

In Fig. 2a A is active and B passive, while in Fig. 2b B is active and A is passive. In each case the communication link corresponding

to the dashed arrow is inoperative.

Fig. 2. The solid station active and the dashed station passive.

Fig. 3. Situation with twice the information throughput.

The situation represented in Fig. 3 offers twice the information throughput of Fig. 2. However,

switching from the situation of Fig. 3a to the situation of Fig. 3b would require one to physically transport a transmitter from A to B and a receiver from B to A, which is, of course, not feasible. With our duplex coding strategy [1] it is, however, possible to approximate the information throughput of Fig. 3 while leaving transmitters and receivers at their respective positions as indicated in Fig. 2. We will now explain how this can be done.

Assume that on the active link in Fig. 2a (and vice versa for Fig. 2b) a systematic R=1/2 code is used to combat transmission errors. Then half of the transmitted digits are check digits and one could increase the information throughput by a factor 2 by only sending the information digits. The check digits are formed at the receiver and are, subsequently, sent back to the active station A (B for Fig. 2b) via the previously inoperative channel B → A. At station A (B for Fig. 2b) one can now determine the transmission errors on the A → B and the B → A channel. However, knowledge of the transmission errors on the A → B channel is required at the passive station B (or A in Fig. 2b) and not at the active station A. Fortunately, multiple repetition coding (MRC) [2,3,4,5] provides a way of exploiting knowledge of the transmission errors at the transmitter side of a communication link to enable the receiver to do efficient error correction. For binary channels one cannot quite achieve a rate of R=1 bit per transmission on the A → B channel (or on the B → A channel when B is active as in Fig. 2b), as would be the case in Fig. 3. However, since the MR-codes are asymptotically sphere packed one obtains, for example, on a binary symmetric channel (BSC) with transition probability p=0.01 a transmission rate of R=1-H(p)=0.92 bits per transmission.


In the sequel we restrict ourselves to binary duplex channels (BDC's), i.e. duplex channels where both the A → B and the B → A channel, see Fig. 1, are BSC's with transition probability p.
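The rate figure quoted above can be checked directly from the binary entropy function; a minimal sketch (the function name H is ours):

```python
import math

def H(p):
    """Binary entropy function in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# For a BSC with transition probability p = 0.01 the quoted rate is R = 1 - H(p)
R = 1.0 - H(0.01)
assert abs(R - 0.92) < 1e-3   # R = 0.9192... bits per transmission
```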

II. MULTIPLE REPETITION CODING

In MRC the information is precoded [2] into a binary sequence of length L that does not allow subsequences of the form 01^k and 10^k, and that does not terminate in either 01^(k-1) or 10^(k-1), k≥3, where i^k, i∈{0,1}, stands for a sequence of i's of length k. In order to correct e or fewer errors a tail of ke reversals is concatenated to the precoded information sequence to form the initial codeword. Then, k repetitions of each erroneously received digit are concatenated to the left of the sequence of digits that remains to be transmitted and k digits are dropped off the right hand side of the current codeword in order to maintain a fixed block length N. Transmission is resumed with the first repeat. Correction is done by repeated right-to-left scanning of the received sequence replacing 01^k and 10^k by 1 and 0, respectively.

An example of this coding strategy for k=3 lists the precoded information, the successive codewords formed as repeats are inserted, the received sequence (hats indicating erroneously received digits), and the estimated information.

Let successive transmissions be spaced by a transmission interval of Δ seconds each. It will then be clear from the previous discussion that in MRC the transmitter needs a τ seconds, 0<τ<Δ, delayed version of the forward noise n(α) = n_0 + n_1 α + n_2 α^2 + ... as side information. The transmitter can then start a correction sequence immediately after the occurrence of a transmission error. However, if the side information suffers a delay τ, where Δ<τ<2Δ, it is still possible to use MRC without having to sacrifice in transmission rate R by operating two MRC schemes A and B in time division. In other words, if we indicate the transmissions pertaining to scheme A by a_0, a_1, a_2, ..., and those pertaining to scheme B by b_0, b_1, b_2, ..., then the order of transmission when operating schemes A and B in time division will be a_0, b_0, a_1, b_1, a_2, b_2, .... In general, to cope with a delay in the side information of (D-1)Δ < τ < DΔ, one operates D MRC schemes in time division.


Fig. 4. Performance of MRC.

The performance [2] of MRC in correctable error fraction f versus transmission rate R is given in Fig. 4. For the k-fold repetition code, k≥3, the correctable error fraction f versus transmission rate R is given by a straight line through the point (R,f) = (0,1/k) and tangent to the Hamming bound H(f) = 1-R, where H(f) = -f log2(f) - (1-f) log2(1-f). To achieve the performance indicated in Fig. 4 one needs an optimum precoder [5] for conversion from the original binary information sequence to the precoded information sequence that does not allow subsequences of the form 01^k and 10^k. This optimum precoder is quite complicated from an implementation point of view. Hence, in the next paragraph we discuss a more practical suboptimum precoder. The performance of the corresponding suboptimum MRC schemes is indicated in Fig. 5. Comparison of Figs. 4 and 5 shows that the price in performance one has to pay for using the practical suboptimum precoder to be described is very small indeed.

The precoder converts the original binary information sequence of length K into a precoded information sequence of length L, such that subsequences of the form 01^k and 10^k, k≥3, do no longer appear.


Fig. 5. Performance of suboptimum MRC.

A very simple way of excluding the subsequences 01^k and 10^k is to add one dummy reversal after each k-2 information digits. For example, for k=3, the original binary information sequence 01110... is converted into the precoded information sequence 0 (1) 1 (0) 1 (0) 1 (0) 0 (1) ..., where the dummy digits are shown in parentheses. For this suboptimal precoder we have

L = (k-1)/(k-2) K,  k≥3.   (1)

With an error fraction f, i.e. for e = fN errors, one needs ke = kfN tail digits to form the initial codeword, as we saw before. Hence, N = L+kfN and with (1) one obtains for the transmission rate

R = K/N = (k-2)(1-kf)/(k-1),  k≥3.   (2)

In other words, the performance curves for the suboptimum MRC schemes are straight lines through the points (R,f) = (0,1/k) and


(R,f) = ((k-2)/(k-1), 0) as indicated in Fig. 5. It is easy to show that these lines are tangent to the ellipse

2f = 1 - [R(2-R)]^(1/2).   (3)
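Both the dummy-reversal precoder and the tangency claim lend themselves to a quick numerical check; `precode`, `ellipse_f` and `line_f` are our illustrative names:

```python
import math
import random

def precode(info, k=3):
    """Suboptimum precoder: insert one dummy reversal after each k-2 info digits."""
    out = []
    for i, bit in enumerate(info, 1):
        out.append(bit)
        if i % (k - 2) == 0:
            out.append(1 - out[-1])   # dummy digit reverses the preceding digit
    return out

# the k=3 example from the text: 01110... -> 0110101001...; note L = (k-1)/(k-2) K
assert precode([0, 1, 1, 1, 0], 3) == [0, 1, 1, 0, 1, 0, 1, 0, 0, 1]

# for k=3 the precoded stream never contains 0111 or 1000 (i.e. 01^k or 10^k)
random.seed(0)
s = ''.join(map(str, precode([random.randint(0, 1) for _ in range(50)], 3)))
assert '0111' not in s and '1000' not in s

def ellipse_f(R):
    """Ellipse (3): 2f = 1 - sqrt(R(2-R))."""
    return (1.0 - math.sqrt(R * (2.0 - R))) / 2.0

def line_f(R, k):
    """Performance line (2) through (R,f) = (0,1/k) and ((k-2)/(k-1),0)."""
    return (1.0 / k) * (1.0 - R * (k - 1) / (k - 2))

for k in range(3, 8):
    gaps = [ellipse_f(i / 10000) - line_f(i / 10000, k) for i in range(10001)]
    assert min(gaps) > -1e-9   # the line never rises above the ellipse...
    assert min(gaps) < 1e-4    # ...yet touches it: tangency
```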

III. PERFORMANCE TRADEOFFS

MRC (in time division) requires a (D-1)Δ+τ seconds delayed estimate of the forward noise n(α) = n_0 + n_1 α + n_2 α^2 + ... as side information at the active station. In order to perform this estimation we form check digits at the passive station and return these check digits to the active station via the previously inoperative member of the duplex channel pair. The check digits u(α) = u_0 + u_1 α + u_2 α^2 + ... at the passive station are obtained from the received data y(α) = y_0 + y_1 α + y_2 α^2 + ... using a convolutional scrambler, see Fig. 6. For a convolutional scrambler with connection polynomial c(α) the relation between u(α) and y(α) is given by

u(α) = y(α) c(α).   (4)

At the active station we now form the binary signal w(α), as indicated in Fig. 7. Note that

w(α) = [x(α)+n(α)] c(α) + z(α) + x(α) c(α) = n(α) c(α) + z(α)   (5)

is independent of the transmitted sequence x(α). In fact, one could think of w(α) = w_0 + w_1 α + w_2 α^2 + ... as being generated by the systematic R=1/2 convolutional encoder of Fig. 8. A Viterbi decoder [6] can now be used on the binary sequence 0, w_0, 0, w_1, 0, w_2, ... to form the required estimate of n(α) at the active station.
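That w(α) is independent of the data can be illustrated with a short simulation over GF(2); `conv_gf2` is an illustrative helper and the connection polynomial is an arbitrary choice:

```python
import random

def conv_gf2(seq, c):
    """Modulo-2 convolution of a binary sequence with connection polynomial c."""
    out = []
    for i in range(len(seq)):
        bit = 0
        for j, cj in enumerate(c):
            if cj and i - j >= 0:
                bit ^= seq[i - j]
        out.append(bit)
    return out

random.seed(1)
c = [1, 0, 1, 1]                                  # arbitrary connection polynomial
n = [random.randint(0, 1) for _ in range(20)]     # forward noise n(alpha)
z = [random.randint(0, 1) for _ in range(20)]     # feedback noise z(alpha)
for _ in range(5):                                # any data sequence x(alpha)...
    x = [random.randint(0, 1) for _ in range(20)]
    y = [xi ^ ni for xi, ni in zip(x, n)]         # received at the passive station
    u = conv_gf2(y, c)                            # check digits u = y*c fed back
    w = [ui ^ zi ^ vi for ui, zi, vi in zip(u, z, conv_gf2(x, c))]
    # ...always yields the same w = n*c + z, cf. (5)
    assert w == [a ^ b for a, b in zip(conv_gf2(n, c), z)]
```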

Fig. 6. Convolutional scrambler forming u(α) from the received data y(α).

Fig. 7. Coding strategy for duplex channels.

Fig. 8. Equivalent systematic R=1/2 convolutional encoder generating w(α) from n(α).

Let P_b be the bit error probability of the Viterbi decoder. The resulting forward word error probability due to estimation errors at the active station, i.e. due to the feedback noise z(α),

is then upper bounded by B_1 = 1-(1-P_b)^N, where N is the common block length of the MR-codes operating in time division. If the MR-codes are designed to correct an error fraction f, the word error probability due to the forward noise n(α) can be upperbounded by B_2 = 2^(-N X_p(f)), [7, p.102], where X_p(f) = T_p(f) - H(f), and T_p(f) = -f log2(p) - (1-f) log2(1-p). The total word error probability P of the duplex strategy can now be upperbounded by the sum of B_1 and B_2, i.e. by

P ≤ B_1 + B_2.   (6)

Note that the B_1 term in (6) increases with increasing N, whereas the B_2 term decreases as N increases. Hence, the value of N that approximately minimizes the right hand side of (6) can be found by solving the equation B_1 = B_2 for N. For given f>p the maximum transmission rate R at the active station (for suboptimum MRC) can be found to a good approximation by solving (3) for R. The exact value of R for a given f can be found from Fig. 5. From Fig. 5 we can also determine the optimum constraint parameter k≥3 on the precoded sequences. The smaller we choose f>p, the higher the transmission rate R at the active station. However, the term B_2 of (6) increases as f decreases. This can be compensated for by increasing the common block length N of the MR-codes. However, increasing N increases the term B_1 of (6). The only way to compensate for this increase in B_1 is to lower the bit error probability P_b of the Viterbi decoder at the active station. For a given channel transition probability p the bit error probability P_b can be lowered by increasing the length of the convolutional scrambler, see Fig. 6, at the passive station. The coding delay D of the Viterbi decoder at the active station should be roughly five times the length of the convolutional scrambler at the passive station in order to approximately achieve the minimum P_b consistent with the given convolutional scrambler. Since a larger value of D implies a higher order of time division of the MR-codes operating in the forward direction, i.e. a complicated system, one should always choose the smallest value of D, i.e. the highest value of P_b, that is consistent with the requirement on the overall word error probability of the system. Summarizing, we start out choosing P_b and, hence, the length of the convolutional scrambler and the order D of time division of the MR-codes. Then we find the largest common block length N such that B_1 is equal to half the required overall word error probability. Finally, we find the f>p such that B_2 is also equal to half the required overall word error probability. If the resulting forward transmission rate R, given by (3), is too small, restart the design procedure using a smaller value for P_b. More details on the available tradeoffs are given in reference [1].
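The first step of the design procedure summarized above, choosing the largest N for which B_1 = 1-(1-P_b)^N equals half the required word error probability, is a one-line computation; `design_N` and the numerical values below are illustrative, not taken from the report:

```python
import math

def design_N(Pb, P_word):
    """Largest block length N with B1 = 1-(1-Pb)^N at most half of P_word."""
    return int(math.log(1.0 - P_word / 2.0) / math.log(1.0 - Pb))

# illustrative Viterbi bit error rate and overall word error target
N = design_N(1e-6, 1e-3)
assert 1.0 - (1.0 - 1e-6)**N <= 0.5e-3          # B1 stays within its half-budget
assert 1.0 - (1.0 - 1e-6)**(N + 1) > 0.5e-3     # ...and N is the largest such N
```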

One final comment pertains to the complexity of the Viterbi decoder operating on the signal w(α) to form an estimate of n(α). Note that the input signal n(α) of the encoder in Fig. 8 has Pr(n_i=1) = p < 1/2, i=0,1,2,.... A scrambler with ν memory stages has 2^ν possible states, and to each of these states corresponds a metric register and a path register in the Viterbi decoder. However, for small p and large ν, states with Hamming weight much greater than pν are extremely improbable and can, hence, be deleted from the decoder without significantly increasing the bit error probability P_b. It is thus possible to use a greatly simplified Viterbi decoder for this particular application.
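The saving from deleting improbable states can be made concrete with a quick count; the memory ν=20 and the weight cutoff 4 are our illustrative choices, not values from the report:

```python
from math import comb

v = 20                       # scrambler memory stages -> 2^v Viterbi states
cutoff = 4                   # keep only states of Hamming weight <= cutoff
kept = sum(comb(v, w) for w in range(cutoff + 1))
assert kept == 6196          # versus the full 2^20 = 1048576 states
assert kept / 2**v < 0.01    # under 1% of the state set survives the pruning
```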

IV. CONCLUSIONS

The coding strategy for BDC's presented in this paper combines the ideas of MRC and Viterbi decoding. The word error probability P is essentially equal to the bit error probability P_b of a rate-1/2 systematic convolutional code with ML decoding. For A being active and B passive the rate pair (C,0) in Fig. 9 can be achieved, while for A passive and B active one can achieve (0,C). By time-sharing [8] any point on the line connecting (C,0) and (0,C) can be achieved. However, the region of achievable rate pairs in Fig. 9 includes the shaded part of the square. To achieve rate pairs in the shaded part of Fig. 9 one can proceed as follows. Instead of returning each bit u_0, u_1, u_2, ... generated by the convolutional scrambler, see Fig. 6, to the active station, feed back only each n-th bit u_0, u_n, u_2n, ..., n>1. The error probability P_b of the estimator will thus increase, however, a capacity of (n-1)C/n will still be available in the reverse direction, i.e. in the direction from the passive to the active station, for the transmission of information. Hence, the rate pairs (C,(n-1)C/n) and ((n-1)C/n,C) in the shaded region of

Fig. 9. Achievable rate region.

Fig. 9 are achievable in this way. Finally, note that symmetric operation of the duplex channel, reserving an equal part of the transmission capacity in each direction for returning check digits, makes no sense. Operating each link independently yields a higher total information throughput for the same coding effort. Our duplex strategy can thus be used to facilitate error correction in one direction at the expense of transmission capacity in the other direction!

REFERENCES

1. Schalkwijk, J.P.M., A coding scheme for duplex channels, IEEE Trans. on Communications, to be published.

2. Schalkwijk, J.P.M., A class of simple and optimal strategies for block coding on the binary symmetric channel with noiseless feedback, IEEE Trans. Inform. Theory, IT-17, 283, 1971.

3. Schalkwijk, J.P.M. and Post, K.A., On the error probability for a class of binary recursive feedback strategies, IEEE Trans. Inform. Theory, IT-19, 498, 1973.

4. Schalkwijk, J.P.M. and Post, K.A., Correction to, On the error probability for a class of binary recursive feedback strategies, IEEE Trans. Inform. Theory, IT-20, 284, 1974.

5. Becker, D.W. and Schalkwijk, J.P.M., A simple class of asymptotically optimum block coding strategies for the m-ary symmetric channel, IEEE Trans. Inform. Theory, to be published.

6. Viterbi, A.J., Convolutional codes and their performance in communication systems, IEEE Trans. Communication Technology, COM-19, 751, 1971.

7. Wozencraft, J.M. and Jacobs, I.M., Principles of Communication Engineering, Wiley, New York, 1965, 102.

8. Wyner, A.D., Recent results in the Shannon theory, IEEE Trans. Inform. Theory, IT-20, 2, 1974.


The duplex strategy described in chapter II and appendix A uses a systematic rate 1/2 convolutional code with Viterbi decoding [1] at the active station. This leads us naturally to the question, what are the best systematic rate 1/2 convolutional codes? In this chapter we will address a slightly more general question, i.e. what are the best rate 1/2 convolutional codes, systematic and nonsystematic, of a given constraint length L? Different authors have used different definitions for the constraint length. Our definition of constraint length L is the span of the nonzero code sequence that results from a single data 1, where this length is measured in data-intervals. In other words, our constraint length L is the length of the encoder.

As there is as yet no simple algorithm for constructing "good" convolutional codes of a given constraint length, various authors [2,3,4] have resorted to computer search techniques for finding convolutional codes that are optimum according to a particular criterion. In the next paragraph we derive the new criterion of optimality on which our own computer search for good convolutional codes is based. Use of this latter criterion gives rise to a new set of optimal codes.

Viterbi [1] derives an enumerator

T(D,N) = sum_{d} sum_{n} t_{d,n} D^d N^n   (1)

for the "error events", where the coefficient t_{d,n} equals the number of error events of Hamming weight d that give rise to n bit errors in the decoded data sequence. Let the minimum distance d_0 be defined by

d_0 = max {d : t_{k,n} = 0 for k<d, n≥1}.   (2)

The classical criterion of optimality is, 1) for the given constraint length L find codes with maximum d_0, and 2) among these codes find the ones with minimum t_{d_0}.


Note that t_{d_0} is the first nonzero coefficient of the generating function

[∂T(D,N)/∂N]_{N=1} = sum_{d=1}^{∞} t_d D^d,  with t_d = sum_{n} n t_{d,n}.

Let P_d be the error probability between two binary codewords at Hamming distance d. On a BSC with crossover probability p, we have

P_d = sum_{k=(d+1)/2}^{d} (d over k) p^k (1-p)^(d-k),   d odd,   (4)

P_d = (1/2)(d over d/2) [p(1-p)]^(d/2) + sum_{k=d/2+1}^{d} (d over k) p^k (1-p)^(d-k),   d even.   (5)

Now upperbound P_d by

P_d ≤ [2 sqrt(p(1-p))]^d;   (6)

the bit error probability P_b of the Viterbi decoder [1] can then be upperbounded by

P_b < sum_{d=d_0}^{∞} t_d P_d ≤ [∂T(D,N)/∂N]_{N=1, D=2 sqrt(p(1-p))}.   (7)

The classical criterion of optimality now minimizes the first nonzero term t_{d_0} P_{d_0} of the bound (7) on the bit error probability. We made the following observation. A simple calculation [5] shows that P_{2d} = P_{2d-1} in (5). This leads to the following bound on the bit error probability

P_b < sum_{d=1}^{∞} (t_{2d-1}+t_{2d}) P_{2d-1} < [∂/∂N {(1/2)[T(D,N)+T(-D,N)] + (D/2)[T(D,N)-T(-D,N)]}]_{N=1, D=2 sqrt(p(1-p))},   (8)

which is significantly tighter than (7). Our simulation results are, in fact, in very close agreement with (8). Define

d_0' = max {d : t_{2k-1,n} + t_{2k,n} = 0 for k<d, n≥1}.   (9)

The new criterion of optimality is, 1) for the given constraint length L find codes with maximum d_0', and 2) among these codes find the ones with minimum

t_{2d_0'-1} + t_{2d_0'}.   (10)

The new criterion of optimality thus minimizes the first nonzero term (t_{2d_0'-1} + t_{2d_0'}) P_{2d_0'-1} of the tighter bound (8) on the bit error probability!
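The tightening of (8) over (7) can be illustrated with the constraint-length-3 code with connection polynomials 101, 111, whose enumerator has the well-known closed form T(D,N) = D^5 N/(1-2DN), so that [∂T/∂N]_{N=1} = D^5/(1-2D)^2; the crossover probability p=0.01 below is an arbitrary illustrative choice:

```python
import math

def tprime(D):
    """[dT/dN] at N=1 for the (101,111) code: D^5 / (1-2D)^2."""
    return D**5 / (1.0 - 2.0 * D)**2

p = 0.01                                   # illustrative BSC crossover probability
D = 2.0 * math.sqrt(p * (1.0 - p))
bound7 = tprime(D)                         # classical bound (7)
bound8 = 0.5 * (tprime(D) + tprime(-D)) + 0.5 * D * (tprime(D) - tprime(-D))
assert 0 < bound8 < bound7                 # the tightened bound (8) is smaller
```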

Aside from the criterion of optimality our search algorithm for optimum convolutional codes is not significantly different from those used previously.

The results of our search are given in the following tables, i.e. table 1 for nonsystematic and table 2 for systematic rate 1/2 convolutional codes. For each constraint length L = 3,...,11, table 1 lists the pair of connection polynomials together with 2d_0'-1, 2d_0' and the coefficient sums (t_{2d_0'-1}+t_{2d_0'})_old and (t_{2d_0'-1}+t_{2d_0'})_new.

TABLE 1. NONSYSTEMATIC CODES


For each constraint length L = 4,...,10, table 2 lists the connection polynomial together with 2d_0'-1, 2d_0' and the corresponding coefficient sum.

TABLE 2. SYSTEMATIC CODES

The column (t_{2d_0'-1} + t_{2d_0'})_old in table 1 is for codes [4] that are optimum w.r.t. the classical criterion. Note by comparing the columns (t_{2d_0'-1} + t_{2d_0'})_old and (t_{2d_0'-1} + t_{2d_0'})_new in table 1 that in many cases the newly found codes are significantly better. For constraint lengths 3 through 8 our search has been exhaustive. For constraint lengths 9 through 11 one of the connection polynomials has been constrained to be primitive. For constraint length L=10 the search was terminated after the code with t_{2d_0'-1} + t_{2d_0'} = 1 was found. For constraint length L=11 only 10% of all primitive connection polynomials were investigated. For table 2 all searches were exhaustive.


1. A.J. Viterbi, "Convolutional codes and their performance in communication systems", IEEE Trans. Communication Technology, vol. COM-19, pp. 751-772, October 1971.

2. L.R. Bahl and F. Jelinek, "Rate 1/2 convolutional codes with complementary generators", IEEE Trans. Inform. Theory, vol. IT-17, pp. 718-727, November 1971.

3. J.P. Odenwalder, "Optimal decoding of convolutional codes", Ph.D. dissertation, Dep. Syst. Sci., Sch. Eng. Appl. Sci., Univ. California, Los Angeles, 1970.

4. K.J. Larsen, "Short convolutional codes with maximal free distance for rates 1/2, 1/3, and 1/4", IEEE Trans. Inform. Theory, vol. IT-19, pp. 371-372, May 1973.

5. L. v.d. Meeberg, "A tightened upper bound on the error probability of binary convolutional codes with Viterbi decoding", IEEE Trans. Inform. Theory, vol. IT-20, pp. 389-391, May 1974.


In chapter II we discuss a suboptimum precoding technique for excluding the subsequences 01^k and 10^k, i.e. add a dummy reversal after each k-2 information digits, where the precoding parameter k≥3. It was then shown that the performance curves in correctable error fraction f versus rate R form a set, parameterized by k, of straight lines through the points (R,f)=(0,1/k) and (R,f)=((k-2)/(k-1),0), see Fig. 5 of chapter II. According to chapter II, eqn. (3), the above set of straight lines is tangent to the ellipse

2f = 1 - [R(2-R)]^(1/2),   (1)

see Fig. 1. Compare this ellipse with the Hamming bound, H(f) = 1-R, that is dashed in Fig. 1. In recursive coding [1,2] a code digit is estimated D-1 time units after its reception, D being the coding delay. In the next paragraph we will show that with suboptimum precoding, as discussed above, the estimation procedure gives the correct result as long as the D successive code digits that start with the code digit to be estimated contain fewer than D/k errors, i.e. the correctable error fraction is f = 1/k. If there are no transmission errors the rate R of the recursive coding scheme with suboptimum precoding is R = (k-2)/(k-1).

The points (R,f) = ((k-2)/(k-1), 1/k) for recursive coding are located on the hyperbola

R = (1-2f)/(1-f)   (2)

in Fig. 1. This hyperbola for recursive coding can be constructed using the ellipse for block coding. This construction is identical to the construction of the error exponent for convolutional codes given the error exponent for block coding.
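A quick check that the recursive-coding points indeed lie on (2):

```python
# the points (R,f) = ((k-2)/(k-1), 1/k), k >= 3, satisfy R = (1-2f)/(1-f)
for k in range(3, 12):
    R, f = (k - 2) / (k - 1), 1 / k
    assert abs(R - (1 - 2 * f) / (1 - f)) < 1e-12
```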

Fig. 1. Correctable error fraction f versus rate R: the Hamming bound H(f) = 1-R (dashed), the ellipse 2f = 1-[R(2-R)]^(1/2), and the locus of points (R,f) for the balanced precoder.

For each code digit to be estimated a random walk of D steps is initiated. This random walk starts at zero. Each time a data 0 is received, 1 or (k-1) is subtracted from the current value of the random walk, and each time a data 1 is received, 1 or (k-1) is added to the current value of the random walk. Steps away from zero have size (k-1) and steps towards zero have size 1. After D steps a code digit 0 or 1 is estimated according to whether the value of the random walk is negative or positive, respectively. With suboptimum precoding the average slope of the random walk is at least 1/(k-1) in the absence of transmission errors, as every (k-1)-st precoded digit is a dummy reversal, see Fig. 2. The critical number of errors, e, can be found by solving the equation

(D-e)/(k-1) = e,  or  f = e/D = 1/k.   (3)

Fig. 2. The critical number of errors.
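The D-step walk can be sketched as follows; `estimate_digit` is our illustrative name, and the window is assumed to hold the D received data digits starting with the digit to be estimated:

```python
def estimate_digit(window, k=3):
    """Estimate a code digit as 0 or 1 from the sign of a D-step random walk."""
    walk = 0
    for d in window:
        step = 1 if d == 1 else -1       # data 1 adds, data 0 subtracts
        if walk == 0 or (walk > 0) == (step > 0):
            walk += step * (k - 1)       # step away from zero: size k-1
        else:
            walk += step                 # step towards zero: size 1
    return 1 if walk > 0 else 0

# a window dominated by 1's (resp. 0's) drives the walk positive (resp. negative)
assert estimate_digit([1, 1, 1, 0, 1, 0], 3) == 1
assert estimate_digit([0, 0, 0, 1, 0, 1], 3) == 0
```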

For optimum precoding [3] some random walks have an average drift away from level zero that is smaller than 1/(k-1), see Fig. 2. In fact, the precoded sequence 1 0^(k-1) 1 0^(k-1) ... has average slope zero and can thus not be estimated reliably in the presence of errors.


In appendix B it is shown, however, that for k=3 (note that the k used in appendix B is our present k minus 1) a modified precoder that only generates balanced sequences, i.e. sequences with roughly the same number of O's and l's, has the same rate as the optimum precoder without this restriction. Our conjecture is, that the restriction to precoded sequences that are balanced does not lower the rate for any k~3. With this balanced precoder one can again correct an error fraction f = 11k as can be seen from Fig. 3. The error fraction f

[Fig. 3: random walks with slopes −(k−1)x and x(k−2)/2, k ≥ 3.]

Fig. 3. Random walks for balanced precoder.

versus rate R curve for the balanced precoder can be found as the locus of the points (R, f) such that (R, 0) and (0, f) are two points on a tangent to the Hamming bound in Fig. 1.


1. J.P.M. Schalkwijk and K.A. Post, "On the error probability for a class of binary recursive feedback strategies", IEEE Trans. Inform. Theory, vol. IT-19, pp. 498-511, July 1973.

2. J.P.M. Schalkwijk and K.A. Post, "Correction to 'On the error probability for a class of binary recursive feedback strategies'", IEEE Trans. Inform. Theory, vol. IT-20, p. 284, March 1974.

3. D.W. Becker and J.P.M. Schalkwijk, "A simple class of asymptotically optimum block coding strategies for the m-ary symmetric channel", IEEE Trans. Inform. Theory, to be published.


November 15, 1973

The author is with the Department of Electrical Engineering, Technological University of Eindhoven, Eindhoven, The Netherlands.

ABSTRACT

In this paper we describe a coding strategy for binary duplex channels that essentially matches the performance of a system with two identical binary forward channels using a systematic rate-½ convolutional code on each of these channels. In other words, our coding strategy achieves the same effect as would the turning around of the feedback channel.


Fig. 1 is a model of the binary duplex channel (BDC) considered in this paper, where each of the variables x_i, y_i, u_i, v_i, n_i, and z_i, i = 0, 1, 2, ..., can take on the values 0 and 1. The noise

Fig. 1. Binary duplex channel.

variables n0, z0, n1, z1, ... are statistically independent and are equal to 1 with probability p < ½. The addition in Fig. 1 is modulo-2. Transmissions (of a binary digit) are spaced by a time unit of Δ seconds each. The channel delay τ is assumed small compared to this time unit.

Our coding strategy for the BDC of Fig. 1 uses multiple repetition coding [1] in the forward direction. Multiple repetition coding (MRC), to be described in section II, requires at the transmitter as side information a recent copy of the forward noise n0, n1, n2, ..., see Fig. 2, where the delay τ is small compared to the time unit Δ.


Fig. 2. Recent side information as required by MRC.

The requirement that the side information be recent can be weakened by operating D independent MRC schemes in time division. The side information can now tolerate a delay of T = (D − 1)Δ + τ seconds, see Fig. 3.

Fig. 3. Delayed side information as required by D independent MRC schemes in time division; T = (D − 1)Δ + τ.

The main idea of this paper then concerns a method of obtaining a reliable estimate of the T seconds delayed side information of Fig. 3 using the noisy feedback channel of Fig. 1. The solution of this problem is indicated in Fig. 4, where each binary sequence b0, b1, b2, ... has been replaced by a formal power series b(α) = b0 + b1α + b2α² + ⋯.


Fig. 4. Coding strategy for duplex channels.

Multiplication by c(α) is implemented using the circuit of Fig. 5. By adding x(α)c(α) to the feedback sequence v(α) one obtains at the transmitter

   w(α) = [x(α) + n(α)]c(α) + z(α) + x(α)c(α) = n(α)c(α) + z(α) .   (1)

Fig. 5. Convolutional scrambler with c(α) = α⁷ + α⁵ + α⁴ + α² + α + 1.

Note that w(α) is independent of the transmitted sequence x(α). From w(α) one can obtain the required estimate of α^(D−1)n(α), i.e. of the forward noise n(α) delayed by D − 1 time units. The larger D, the more reliable an estimate can be obtained. However, a large value of D implies a high order of time division for the MRC schemes operating in the forward direction.
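The cancellation in (1) can be checked mechanically with polynomial arithmetic over GF(2). A minimal sketch; the particular x, n, z below are arbitrary made-up sequences, and a short c(α) = 1 + α + α² is used in place of the degree-7 scrambler:

```python
def poly_add(a, b):
    """Add two GF(2) polynomials (coefficient lists, lowest order first)."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [u ^ v for u, v in zip(a, b)]

def poly_mul(a, b):
    """Multiply two GF(2) polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

c = [1, 1, 1]       # c(alpha) = 1 + alpha + alpha^2 (illustrative choice)
x = [1, 0, 1, 1]    # transmitted sequence (arbitrary)
n = [0, 1, 0, 0]    # forward noise (arbitrary)
z = [1, 1, 0, 1]    # feedback noise (arbitrary)

# what the transmitter forms: [x(a) + n(a)]c(a) + z(a) + x(a)c(a)
w_transmitter = poly_add(poly_add(poly_mul(poly_add(x, n), c), z), poly_mul(x, c))
# equation (1): this equals n(a)c(a) + z(a), independent of x(a)
w_direct = poly_add(poly_mul(n, c), z)
```

Changing x leaves w_transmitter unchanged, which is exactly the independence claimed after (1).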


The duplex strategy can be so dimensioned that the MRC schemes in time division add little to the overall error probability. The overall system error rate will then be largely determined by the estimation procedure of α^(D−1)n(α) referred to above. As will be shown in section III, the performance of our estimator is the same as that of a rate-½ systematic convolutional code with maximum likelihood (ML) decoding. However, the transmission rate R in the forward direction is the transmission rate of the MRC schemes in time division, which amounts to almost twice the rate of the systematic convolutional code.


II. MULTIPLE REPETITION CODING

In MRC the information is precoded [1] into a binary sequence of length L that does not allow subsequences of the form 01^k and 10^k, and that does not terminate in either 01^(k−1) or 10^(k−1), k ≥ 3, where i^k, i ∈ {0,1}, stands for a sequence of i's of length k. In order to correct e or fewer errors, a tail of ke reversals is concatenated to the precoded information sequence to form the initial codeword. Then, k repetitions of each erroneously received digit are concatenated to the left of the sequence of digits that remains to be transmitted, and k digits are dropped off the right hand side of the current codeword in order to maintain a fixed block length N. Transmission is resumed with the first repeat. Correction is done by repeated right-to-left scanning of the received sequence, replacing 01^k and 10^k by 1 and 0, respectively. An example of this coding strategy for k = 3 is the following (hats indicate erroneously received digits):

Precoded information:   0 0 0
Initial codeword:       0 0 0 0 0 0
Second codeword:        0 (0 0 0) 0 0 0
Transmitted codeword:   0 (0 0 (0 0 0) 0) 0 0
Received sequence:      0 1 1 0 0 0 0 1 0 0 0
Estimated information:  0 0 0
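The right-to-left correction scan at the receiver can be sketched as follows (a sketch; sequences are given as 0/1 strings):

```python
def mrc_correct(received, k):
    """Repeated right-to-left scan, replacing 01^k by 1 and 10^k by 0."""
    s = received
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - k - 1, -1, -1):   # scan from the right
            seg = s[i:i + k + 1]
            if seg == "0" + "1" * k:              # 01^k -> 1
                s = s[:i] + "1" + s[i + k + 1:]
                changed = True
                break
            if seg == "1" + "0" * k:              # 10^k -> 0
                s = s[:i] + "0" + s[i + k + 1:]
                changed = True
                break
    return s
```

For k = 3, a digit 0 received as 1 and followed by its three repeats arrives as 1000, which the scan collapses back to 0; nested error patterns are undone by the repeated passes.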

It will be clear from the previous discussion that in MRC the transmitter needs a τ seconds, 0 < τ < Δ, delayed version of the forward noise as side information. If the transmitter only has access to side information with a larger


delay T, where Δ < T < 2Δ, it is still possible to use MRC without having to sacrifice transmission rate R by operating two MRC schemes A and B in time division. In other words, if we indicate the transmissions pertaining to scheme A by a0, a1, a2, ... and those pertaining to scheme B by b0, b1, b2, ..., then the order of transmission when operating schemes A and B in time division will be a0, b0, a1, b1, a2, b2, ... . In general, to cope with a delay in the side information of T = (D − 1)Δ + τ seconds, with 0 < τ < Δ, we need D-th order time division, as was already mentioned in section I.

The performance [1] of MRC in correctable error fraction f versus transmission rate R is given in Fig. 6. For the k-fold repetition

Fig. 6. Performance of MRC.


code, k ≥ 3, the correctable error fraction f versus transmission rate R is given by a straight line through the point (R, f) = (0, 1/k) and tangent to the Hamming bound H(f) = 1 − R, where H(f) = −f log₂ f − (1 − f) log₂(1 − f). To achieve the performance indicated in Fig. 6 one needs an optimum precoder [2] for conversion from the original binary information sequence to the precoded information sequence that does not allow subsequences of the form 01^k and 10^k.

This optimum precoder is quite complicated from an implementation point of view. Hence, in the next paragraph we discuss a more practical suboptimum precoder. The performance of the corresponding suboptimum MRC schemes is indicated in Fig. 7.

Fig. 7. Performance of suboptimum MRC.

Comparison of Figs. 6 and 7 shows that the price in performance one has to pay for using the suboptimum precoder is modest.


The suboptimum precoder must likewise guarantee that subsequences of the form 01^k and 10^k, k ≥ 3, no longer appear. A very simple way of excluding the subsequences 01^k and 10^k is to add one dummy reversal after each k − 2 information digits. For example, for k = 3, the original binary information sequence 01110... is converted into the precoded information sequence 0 1 1 0 1 0 1 0 0 1 ..., where every second digit is a dummy reversal. For this suboptimal precoder we have

   L = (k − 1)K/(k − 2) ,   k ≥ 3 .   (2)

With an error fraction f one needs ke = kfN tail digits to form the initial codeword, as we saw before. Hence, N = L + kfN, and with (2) one obtains for the transmission rate

   R = K/N = (k − 2)(1 − kf)/(k − 1) ,   k ≥ 3 .   (3)

In other words, the performance curves for the suboptimum MRC schemes are straight lines through the points (R, f) = (0, 1/k) and (R, f) = ((k − 2)/(k − 1), 0), as indicated in Fig. 7.
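The dummy-reversal precoder and the length relation (2) can be sketched as follows:

```python
def suboptimum_precode(info, k):
    """Insert a dummy reversal (complement of the preceding digit)
    after every k-2 information digits."""
    out, count = [], 0
    for bit in info:
        out.append(bit)
        count += 1
        if count == k - 2:
            out.append(1 - out[-1])   # dummy reversal
            count = 0
    return out
```

For k = 3 and information 01110 this reproduces the precoded sequence 0110101001 from the text, and the output length satisfies L = (k−1)K/(k−2) whenever K is a multiple of k−2.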

At the transmitter an estimate of the forward noise n(α) is formed using the signal w(α) of (1). Assume that the estimate α^(D−1)n̂(α) suffers a bit error probability P_b. The resulting forward word error probability due to estimation errors at the transmitter is then upper bounded by B1 = 1 − (1 − P_b)^N. If the MRC schemes in time division operating in the forward direction are designed to correct an error fraction f, the word error probability due to forward noise can be upperbounded by B2 = 2^(−N·X_p(f)) [3, p. 102], where X_p(f) = T_p(f) − H(f) and T_p(f) = −f log₂ p − (1 − f) log₂(1 − p). The total word error probability P_e of the BDC coding strategy can now be upperbounded by the sum of these two bounds:


   P_e ≤ B1 + B2 .   (4)

In Fig. 8 the bounds B1 and B2 have been plotted as a function of the common block length N for transition probability p = 0.01, with the estimation error probability P_b and the correctable error fraction f as parameters.

[Fig. 8: B1 and B2 versus N, for P_b = 5·10⁻⁶ and f₆ = 0.01703, f₄ = 0.08035, f₃ = 0.19098.]

The values f_k, k = 3, 4, 5, 6, chosen for the parameter f are the ordinates of the points of tangency of the performance lines for the k-fold repetition codes and the Hamming bound in Fig. 6. In these points of tangency the MRC's are asymptotically sphere packed, i.e. they furnish the highest possible transmission rate R for the given correctable error fraction f_k. Fig. 8 is used to find the optimum common block length N for the MRC's given certain values of the parameters P_b and f. The value of N for which the two terms B1 and B2 of (4) are equal minimizes the right hand side of (4). This value of N corresponds to the abscissa of the intersection of the curves corresponding to the pair of parameter values P_b and f in Fig. 8. With a constraint set on the right hand side of (4) it is advantageous to choose the parameter f > p as small as possible, as this results in the highest forward transmission rate R. Note, however, that for fixed p a smaller f leads to a larger common block length N for the MRC's in time division. The remaining unknown, the parameter P_b, depends on the transition probability p of the BDC and on the order D of time division, as will be shown in the next section. For a larger value of D one can obtain a smaller estimation error probability P_b; however, a higher order of time division means a more complex system to build.
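The trade-off between B1 and B2 can be explored numerically. This is a sketch that assumes the Chernoff-bound form B2 = 2^(−N·X_p(f)) for the forward-noise term (as cited from [3]); the parameter values follow the text:

```python
import math

def B1(Pb, N):
    """Word errors caused by noise-estimation errors at the transmitter."""
    return 1.0 - (1.0 - Pb) ** N

def B2(p, f, N):
    """Forward-noise bound, assuming B2 = 2^(-N * X_p(f)) with
    X_p(f) = T_p(f) - H(f); meaningful for f > p."""
    Tp = -f * math.log2(p) - (1 - f) * math.log2(1 - p)
    Hf = -f * math.log2(f) - (1 - f) * math.log2(1 - f)
    return 2.0 ** (-N * (Tp - Hf))

# parameters as in the text: p = 0.01, f = f_3 = 0.19098, Pb = 5e-6
p, f, Pb = 0.01, 0.19098, 5e-6
# B1 grows with N while B2 shrinks, so the best block length N
# sits near the crossing of the two curves.
```

Scanning N for the crossing point reproduces the graphical procedure described for Fig. 8.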


Fig. 9, where c(α) = 1 + α + α², represents the functional dependence of w(α) on n(α) and z(α) as given by (1).

Fig. 9. Dependence of w(α) on n(α) and z(α); c(α) = 1 + α + α².

At the transmitter, see Fig. 4, one is now presented with the problem of estimating the input signal n(α) of Fig. 9 given the signal w(α) = w0 + w1α + w2α² + ⋯. This estimation problem can be looked at as the decoding of a rate-½ systematic convolutional code, see Fig. 10. A Viterbi decoder [4] can now be used on the signal 0, w0, 0, w1, 0, w2, ... to form the estimate α^(D−1)n̂(α).

[Fig. 10: the estimation problem viewed as decoding a rate-½ systematic convolutional code with inputs n(α) and z(α).]


Fig. 11. Estimation error probability P_b versus the transition probability p of the BDC; c(α) = 1 + α + α² + α⁴ + α⁵ + α⁷.


The points in Fig. 11 represent simulation results; the solid line corresponds to the first nonzero term in the upper bound (8) on the bit error probability P_b that will be derived shortly. If ν = 7 is the number of memory cells in Fig. 5, then a coding delay D of about 5(ν + 1) = 40 was necessary for the bit error probability P_b to stabilize at the value given in Fig. 11. The optimum rate-½ systematic convolutional codes c(α) have been found by a computer search.

A final comment pertains to our search program for optimal convolutional codes. Viterbi [4] upperbounds the bit error probability P_b by first observing that the error probability P_k for two codewords at Hamming distance k is given by

   P_k = Σ_{e=(k+1)/2}^{k} C(k,e) p^e q^(k−e)                                     (k odd),
   P_k = ½ C(k,k/2) p^(k/2) q^(k/2) + Σ_{e=k/2+1}^{k} C(k,e) p^e q^(k−e)           (k even),   (5)

where q = 1 − p. Then P_k of (5) is upperbounded by

   P_k < [2√(p(1−p))]^k .   (6)

The bound (6) is then substituted into the derivative of the generating function T(D,N), giving the following bound on the bit error probability:

   P_b < dT(D,N)/dN |_{N=1, D=2√(p(1−p))} = Σ_{k=1}^∞ a_k [2√(p(1−p))]^k .   (7)

Let a_{d₀} be the first nonzero coefficient in (7). The quantity d₀ is referred to as the minimum free distance of the code. Odenwalder [5] and Larsen [6] found codes for constraint lengths ν + 1 up to 14 such that d₀ is maximum and a_{d₀} is minimum. In other words, their codes minimize the first nonzero term in the bound (7) on the bit error probability. It is easy to show using (5) that P_{2n} = P_{2n−1}, n = 1, 2, ..., which leads to the following bound on the bit error probability:

   P_b < { ½[dT(D,N)/dN + dT(−D,N)/dN] + ½D[dT(D,N)/dN − dT(−D,N)/dN] } |_{N=1, D=2√(p(1−p))}
       = Σ_{n=1}^∞ (a_{2n} + a_{2n−1}) [2√(p(1−p))]^(2n) .   (8)

Let n₀ be the first index value in (8) for which a_{2n} + a_{2n−1} is nonzero; then, for a given constraint length ν + 1, our search program looks for codes with maximum n₀ and, among these, for codes for which a_{2n₀} + a_{2n₀−1} is minimum. The rate-½ systematic convolutional code c(α) = 1 + α + α² + α⁴ + α⁵ + α⁷ used as an example throughout this paper was found in this fashion. A complete list of optimal (in the above sense) convolutional codes up to constraint length ν + 1 = 11 will be published shortly.


IV. CONCLUSIONS

The coding strategy for BDC's presented in this paper combines the ideas of MRC and Viterbi decoding. The word error probability P_e is essentially equal to the bit error probability P_b of a rate-½ systematic convolutional code with ML decoding. For a BDC with transition probability p this bit error probability P_b can be driven to zero by using a longer convolutional scrambler. As a rule of thumb, the coding delay D should be roughly five times the constraint length of the convolutional scrambler. A larger coding delay D will not result in much further improvement in the bit error probability P_b. Since a larger value of D implies a high order of time division of the MRC's operating in the forward direction, i.e. a complicated system, one should always choose the smallest value of D that is consistent with the requirement on the overall word error probability of the system. A large value of the common block length N does not significantly add to the system complexity. For large N and a BDC with transition probability p < 0.2, see Fig. 6, the forward transmission rate R approaches 1 − H(p), which amounts to 0.92 bits per transmission for p = 0.01.

One final comment pertains to the complexity of the Viterbi decoder operating on the signal 0, w0, 0, w1, 0, w2, ... to form the estimate α^(D−1)n̂(α). Note that the input signal n(α) of the encoder in Fig. 10 has Pr(n_i = 1) = p, i = 0, 1, 2, ... . A scrambler with ν memory stages has 2^ν possible states, and to each of these states corresponds a metric register and a path register in the Viterbi decoder. However, for small p and large ν, states with Hamming weight much greater than pν are extremely improbable and can hence be deleted from the decoder without significantly increasing the bit error probability P_b. It is thus possible to use a greatly simplified Viterbi decoder for this particular application.


The author wants to thank H. Hoeve, L. v.d. Waals and F. Loots for their assistance in the preparation of this manuscript.

REFERENCES

1. J.P.M. Schalkwijk, "A class of simple and optimal strategies for block coding on the binary symmetric channel with noiseless feedback", IEEE Trans. Inform. Theory, vol. IT-17, pp. 283-287, May 1971.

2. D.W. Becker and J.P.M. Schalkwijk, "A simple class of asymptotically optimum block coding strategies for the m-ary symmetric channel", IEEE Trans. Inform. Theory, to be published.

3. J.M. Wozencraft and I.M. Jacobs, Principles of Communication Engineering. New York: Wiley, 1965, p. 102.

4. A.J. Viterbi, "Convolutional codes and their performance in communication systems", IEEE Trans. Communication Technology, vol. COM-19, pp. 751-772, October 1971.

5. J.P. Odenwalder, "Optimal decoding of convolutional codes", Ph.D. dissertation, Dep. Syst. Sci., Sch. Eng. Appl. Sci., Univ. California, Los Angeles, 1970.

6. K.J. Larsen, "Short convolutional codes with maximum free distance for rates 1/2, 1/3, and 1/4", IEEE Trans. Inform. Theory, vol. IT-19, pp. 371-372, May 1973.


Binary sequences with restricted repetitions by

K.A. Post

T.H.-Report 74-WSK-02 May 1974


A recurrence is given for the number of binary sequences consisting of r zeros and s ones, no bit occurring more than k times in succession. For k = 2 a function theoretic analysis is given for the number of sequences containing as many zeros as ones.


Recently, in feedback communication theory the following coding scheme was considered. Let k be a fixed integer ≥ 2. A message sequence is supposed to be a binary sequence in which no k + 1 successive bits are all of the same parity. This sequence is to be transmitted across a binary symmetric channel with a noiseless, delayless feedback link. The received digits are sent back via the feedback link, so that the transmitter is aware of the transmission errors. Every time a transmission error occurs, a block of k + 1 repetitions of the correct bit is inserted in the message sequence immediately after the symbol that was wrongly received. Transmission of message sequence plus inserted correction bits is continued until a given part of the original message sequence is transmitted. The receiver has various decoding procedures at his disposal (cf. [1], [2]). Different message sequences turn out to have different sensitivities with respect to channel errors, sequences with (almost) as many zeros as ones being the least sensitive (balanced sequences). In this paper a recurrence is given for the number of message sequences with prescribed (0,1) inventory. For k = 2 a function theoretic analysis is given for the number of balanced sequences.


II. Mathematical formulation. Elementary results.

Let k be a fixed positive integer. Let S = S_k be the set of all finite binary sequences that contain no k + 1 zeros in succession and no k + 1 ones in succession. More specifically, let A = A_k and B = B_k denote the (complementary) subsets of S consisting of those binary sequences that start with a zero and with a one, respectively. Finally, for all r ≥ 0, s ≥ 0 ((r,s) ≠ (0,0)), a_{r,s} and b_{r,s} are defined to be the number of sequences in A and B, respectively, that contain r zeros and s ones. It is useful to define a_{0,0} := b_{0,0} := 1, and a_{r,s} := b_{r,s} := 0 if r < 0 or s < 0.

Every sequence in A can be split up uniquely into a starting block of, say, j (1 ≤ j ≤ k) zeros and a (possibly empty) sequence in B. A similar argument holds for sequences in B, so that

(1)   a_{r,s} = b_{r−1,s} + b_{r−2,s} + ⋯ + b_{r−k,s} ,
      b_{r,s} = a_{r,s−1} + a_{r,s−2} + ⋯ + a_{r,s−k}    (r ≥ 0, s ≥ 0, (r,s) ≠ (0,0)).

Define the generating functions α and β by

      α(x,y) := Σ_{r=0}^∞ Σ_{s=0}^∞ a_{r,s} x^r y^s ,
      β(x,y) := Σ_{r=0}^∞ Σ_{s=0}^∞ b_{r,s} x^r y^s .

Then (1) can be restated in the form

(2)   α(x,y) − 1 = (x + x² + ⋯ + x^k) β(x,y) ,
      β(x,y) − 1 = (y + y² + ⋯ + y^k) α(x,y) ,

so that

(3)   α(x,y) = (1 + x + x² + ⋯ + x^k) / [1 − (x + x² + ⋯ + x^k)(y + y² + ⋯ + y^k)] ,
      β(x,y) = (1 + y + y² + ⋯ + y^k) / [1 − (x + x² + ⋯ + x^k)(y + y² + ⋯ + y^k)] .

Observe that replacing zeros by ones and ones by zeros transforms every sequence of A into a unique sequence of B and conversely, so that a_{r,s} = b_{s,r}. This argument also enables us to give an explicit construction of the array (a_{r,s})_{r,s} in a recurrent way, viz.

(4)   a_{r,s} := 0   (r < 0 or s < 0),
      a_{0,0} := 1,
      a_{r,s} := a_{s,r−1} + a_{s,r−2} + ⋯ + a_{s,r−k}   (r ≥ 0, s ≥ 0, (r,s) ≠ (0,0)).

A more symmetric recurrence, which also directly follows from (3), may be obtained by applying (4) twice, i.e.

(5)   a_{r,s} := 0   (r < 0 or s < 0),
      a_{r,0} := 1   (0 ≤ r ≤ k),
      a_{r,0} := 0   (r > k),
      a_{0,s} := 0   (s ≥ 1),
      a_{r,s} = Σ_{i=1}^{k} Σ_{j=1}^{k} a_{r−i,s−j}   (r ≥ 1, s ≥ 1).

For k = 2, e.g., the array (a_{r,s})_{r,s} reads as follows (rows r = 0, ..., 5, columns s = 0, ..., 6):

   1   0   0   0   0   0   0
   1   1   1   0   0   0   0
   1   2   3   2   1   0   0
   0   2   5   7   6   3   1
   0   1   5  12  17  16  10
   0   0   3  13  29  42  42

The array (a_{r,s})_{r,s} for k = 2.
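The symmetric recurrence (5) is easy to tabulate; the following sketch reproduces the array above for k = 2:

```python
def build_array(k, rmax, smax):
    """a[r][s] computed from the symmetric recurrence (5)."""
    a = [[0] * (smax + 1) for _ in range(rmax + 1)]
    for r in range(min(k, rmax) + 1):
        a[r][0] = 1                      # a_{r,0} = 1 for 0 <= r <= k
    for r in range(1, rmax + 1):
        for s in range(1, smax + 1):
            a[r][s] = sum(a[r - i][s - j]
                          for i in range(1, k + 1) for j in range(1, k + 1)
                          if r >= i and s >= j)
    return a

a = build_array(2, 5, 6)
for row in a:
    print(row)
```

The diagonal entries a_{s,s} of this array are the balanced-sequence counts d_s studied below.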


Remark. The number of sequences in A of length n = r + s equals F_n, the n-th Fibonacci number of order k. This is easily illustrated by the generating function α(t,t) = (1 − t − t² − ⋯ − t^k)^{−1}.

An interesting subset of A is formed by the balanced sequences, i.e. sequences for which r = s. Their number corresponds with the number of paths in an s × s square from the left bottom vertex to the right top vertex that have minimal length, consist of only horizontal and vertical segments of integer length ≤ k, and start in horizontal direction. For arbitrary k the analysis of these numbers is hard. For k = 2, however, a generating function and a recurrence relation can be found explicitly. For k = 2 the numbers are found on the diagonal (d_s)_{s=0}^∞ of the array (a_{r,s})_{r,s} and read as follows: 1, 1, 3, 7, 17, 42, ... .


The double series

      Σ_{r=0}^∞ Σ_{s=0}^∞ a_{r,s} x^r y^s   (cf. (5))

is absolutely and uniformly convergent for complex x and y with |x| = 1, |y| ≤ ½(√3 − 1); hence the integral

      (1/2πi) ∮_{|w|=1} w^{−1} α(w, z/w) dw = (1/2πi) ∮_{|w|=1} Σ_{r=0}^∞ Σ_{s=0}^∞ a_{r,s} w^(r−s−1) z^s dw

may be calculated by term-by-term integration in arbitrary order of summation, and has the value Σ_{s=0}^∞ a_{s,s} z^s for complex z of sufficiently small absolute value.

On the other hand (cf. (3)),

      (1/2πi) ∮_{|w|=1} w^{−1} α(w, z/w) dw = (1/2πi) ∮_{|w|=1} (1 + w + w²) / [w − (1 + w)(zw + z²)] dw
                                            = (1/2πi) ∮_{|w|=1} (1 + w + w²) / [−z(w − w₁)(w − w₂)] dw ,

where w₁ and w₂ are the roots of the quadratic equation

      z w² + (z² + z − 1) w + z² = 0 ,

so

      w₁ = (2z)^{−1} [1 − z − z² + (1 − 2z − z² − 2z³ + z⁴)^{½}] .

For small z the root w₁ is outside and w₂ is inside the unit circle, so that by the residue theorem our integral has the value


      Σ_{s=0}^∞ d_s z^s = (1 + w₂ + w₂²) / [−z(w₂ − w₁)]
                        = −(1 − z²)/(2z²) + (1/(2z²)) (1 − z)² (1 + z + z²)^{½} (1 − 3z + z²)^{−½} ,   (6)

where d_s = a_{s,s}. The branch points of this function are z_{1,2} = e^{±2πi/3} and z_{3,4} = ½(3 ± √5).

Corollary. Since this function has z₄ = ½(3 − √5) as branch point of smallest absolute value, it follows that d_s asymptotically behaves as the Taylor coefficients of (1 − z/z₄)^{−½}. By Stirling's formula this yields d_s ~ D s^{−½} F_{2s}, D being a constant and F_{2s} a Fibonacci number.
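As a numerical cross-check, the closed form (6) can be compared with the series Σ d_s z^s, the d_s being taken from the diagonal of the array of section II (a sketch for k = 2, evaluated at z = 0.1):

```python
import math

def diagonal(smax, k=2):
    """d_s = a_{s,s}, with the array built from the symmetric recurrence (5)."""
    a = [[0] * (smax + 1) for _ in range(smax + 1)]
    for r in range(min(k, smax) + 1):
        a[r][0] = 1
    for r in range(1, smax + 1):
        for s in range(1, smax + 1):
            a[r][s] = sum(a[r - i][s - j]
                          for i in range(1, k + 1) for j in range(1, k + 1)
                          if r >= i and s >= j)
    return [a[s][s] for s in range(smax + 1)]

def closed_form(z):
    """Right-hand side of (6), for k = 2."""
    return ((1 - z) ** 2 * math.sqrt((1 + z + z * z) / (1 - 3 * z + z * z))
            - (1 - z * z)) / (2 * z * z)

d = diagonal(30)
z = 0.1
series = sum(ds * z ** s for s, ds in enumerate(d))
```

At z = 0.1 the truncated series and the closed form agree to well beyond the truncation error.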

It is also possible to obtain a recurrence relation for d_s from (6). For this purpose we write

      G(z) := −(1 − z²)/(2z²) + (1/(2z²)) (1 − z)² (1 + z + z²)^{½} (1 − 3z + z²)^{−½} ,

so that

      2z² G(z) + 1 − z² = (1 − z)² (1 + z + z²)^{½} (1 − 3z + z²)^{−½} ,

and, by logarithmic differentiation,

      [4z G + 2z² G′ − 2z] / [2z² G + 1 − z²]
         = −2/(1 − z) + (1 + 2z)/(2(1 + z + z²)) + (3 − 2z)/(2(1 − 3z + z²))
         = (2z + 6z³ − 2z⁴)/(1 − 3z + z² − z³ + 3z⁴ − z⁵) .

Substitution of G(z) = Σ_{s=0}^∞ d_s z^s yields, by identification of coefficients,

      d_0 = 1, d_1 = 1, d_2 = 3, d_3 = 7, d_4 = 17,

and

      (n + 2)d_n − 3(n + 1)d_{n−1} + (n − 2)d_{n−2} − (n − 1)d_{n−3} + 3(n − 4)d_{n−4} − (n − 5)d_{n−5} = 0   (n ≥ 5).

References.

[1] J.P.M. Schalkwijk, "A class of simple and optimal strategies for block coding on the binary symmetric channel with noiseless feedback", IEEE Trans. Inform. Theory, vol. IT-17, pp. 283-287, May 1971.

[2] J.P.M. Schalkwijk and K.A. Post, "On the error probability for a class of binary recursive feedback strategies", IEEE Trans. Inform. Theory, vol. IT-19, pp. 498-511, July 1973.

Note added in proof:

In a recent paper (The Fibonacci Quarterly, Vol. 12, No. 1, 1974, pp. 1-10) L. Carlitz gives generating functions like (3) for the slightly more general situation where no k + 1 successive ones and no ℓ + 1 successive zeros are allowed.
