
Semantic Coding: Partial Transmission

Citation for published version (APA):

Willems, F. M. J., & Kalker, A. A. C. M. (2009). Semantic Coding: Partial Transmission. In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2008), Toronto, Ontario, Canada, 6-11 July 2008 (pp. 1617-1621). Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ISIT.2008.4595261

DOI: 10.1109/ISIT.2008.4595261

Document status and date: Published: 01/01/2009


Semantic Coding: Partial Transmission

Frans M.J. Willems
Eindhoven University of Technology, Electrical Engineering Department
Eindhoven, The Netherlands

Ton Kalker
Hewlett-Packard Research, Mobile and Media Systems Lab
Palo Alto, CA, USA

Abstract—Shannon wrote in 1948: "The semantic aspects of communication are irrelevant to the engineering problem." He indeed demonstrated that the information generated by a source depends only on its statistics and not on the meaning of the source output. The authors recently derived the fundamental limits for semantic compaction, transmission, and compression systems. These systems have the property that the codewords are semantic, i.e. close to the source sequences. In the present article we determine the minimum distortion for semantic partial transmission systems, in which only a quantized version of each source symbol is transmitted to the receiver. It should be noted that our achievability proof is based on weak instead of strong typicality. This is unusual for Gelfand-Pinsker [1980] related setups such as semantic coding and embedding.

I. INTRODUCTION

In [1] Shannon wrote: "The semantic aspects of communication are irrelevant to the engineering problem." Indeed, Shannon demonstrated that the information that is generated by a source depends only on the statistics of the source, and not on the meaning of the source output. In contrast with this, we have investigated in [2] whether in a compaction system the codewords can be (almost) as meaningful as the source output sequences. We required the codewords to be close to the source sequences for some given distortion measure. Moreover, we considered semantic transmission. There the encoded source output sequence is transmitted over a memoryless noisy channel to a decoder. Semantic transmission again requires the codewords, i.e. the channel input sequences, to be close to the source sequences. Finally, we investigated semantic compression. A semantic compression system is a vector quantizer for which the codeword, i.e. the index of the reproduction vector, resembles the source sequence. For semantic compaction, transmission, and compression we determined the fundamental limits for the i.i.d. case in [2].

Here we consider semantic partial transmission: the transmission over a memoryless noisy channel of a "quantized" version of the source sequence, in such a way that the channel input sequence is semantic, i.e. close to the source sequence. The quantized symbols could e.g. represent the most significant bits of the source symbols. Since the receiver is only interested in the quantized source outputs, we speak of partial transmission. For this model we determine the fundamental limit, i.e. the set of achievable distortions. The obtained result is an extension of the semantic transmission problem, but it can also be considered as a solution to the problem that combines semantic transmission and semantic compression, although only for a special distortion measure. In this combined problem the decoder at the output of the noisy channel has to produce a sequence whose distortion to the source sequence is small. In addition, the channel input sequence has to be semantic. For the case where the distortion measure is such that for each source output only its quantized value is allowed as receiver output, we found the solution. It should be noted that for arbitrary distortion measures the problem is still unsolved.

[Figure 1 shows the system: the source $P_s(x)$ emits $X_1^N$, the encoder forms the channel input $Y_1^N = e(X_1^N)$, the channel $P_c(z|y)$ outputs $Z_1^N$, and the decoder produces $\hat{M}_1^N = d(Z_1^N)$.]

Fig. 1. A semantic partial transmission system.

The achievability parts in [2] are based on the Gelfand-Pinsker proof for the side-information channel [3]. For semantic partial transmission we need a more general achievability proof than the one in [2]. It is also important that the proof presented here is not based on strong typicality (developed by Wolfowitz [4], Berger [5], etc.) as in [3], but rather on weak typicality (developed by Forney [6], Cover [7], etc.). Semantic coding is closely related to reversible embedding. In fact, semantic compaction is identical to zero-rate reversible embedding [8], semantic transmission is the same as robust reversible embedding [9], and semantic compression is zero-rate partially reversible embedding [10]. For an overview see [11]. Reversible embedding work has the same flavor as the work of Sutivong et al. [12], [13]; however, there semantic distortion is absent. We conclude this article by demonstrating how it relates to a result of Yang and Sun on embedding correlated watermarks [14].

II. DEFINITIONS, STATEMENT OF RESULT

A. Definitions

In Figure 1 a model of a semantic partial transmission system is shown. The source is assumed to be independent and identically distributed (i.i.d.). It produces the sequence $x_1^N = (x_1, x_2, \cdots, x_N)$ with probability

$$\Pr\{X_1^N = x_1^N\} = \prod_{n=1}^{N} P_s(x_n) \quad (1)$$

for all $x_1^N \in \mathcal{X}^N$. Here $\mathcal{X}$ is the finite source alphabet and $\{P_s(x), x \in \mathcal{X}\}$ is the probability distribution of the source; $N$ is the block length. We are now interested in transmitting a quantized version $m_1^N = (m_1, m_2, \cdots, m_N)$ of the source sequence $x_1^N$. The mapping $\mu(\cdot)$ from $\mathcal{X}$ onto the finite alphabet $\mathcal{M}$ determines the partial source sequence $m_1^N = (m_1, m_2, \cdots, m_N)$ of $x_1^N$ component by component, as follows:

$$m_n = \mu(x_n), \quad \text{for } n = 1, \cdots, N. \quad (2)$$

The mapping $\mu(\cdot)$ defines the joint probability of a source symbol $x \in \mathcal{X}$ and its quantized version $m \in \mathcal{M}$ as follows:

$$P_s(x, m) = P_s(x)\,\delta_{m,\mu(x)}, \quad (3)$$

where $\delta_{i,j} = 1$ if $i = j$ and zero otherwise (Kronecker delta).

An encoder $e(\cdot)$ transforms the source sequence $x_1^N$ into a channel input sequence $y_1^N = (y_1, y_2, \cdots, y_N) \in \mathcal{Y}^N$. The modified sequence $y_1^N$ is close to the original sequence $x_1^N$ in the sense that the so-called average semantic distortion $D_{xy}$ between $X_1^N$ and $Y_1^N$ is not too large. Here

$$D_{xy} \triangleq \sum_{x_1^N} \Pr\{X_1^N = x_1^N\}\, D(x_1^N, e(x_1^N)) \quad \text{with} \quad D(x_1^N, y_1^N) \triangleq \frac{1}{N} \sum_{n=1}^{N} D_{xy}(x_n, y_n), \quad (4)$$

where $D_{xy}(\cdot,\cdot)$ is a matrix consisting of $|\mathcal{X}||\mathcal{Y}|$ non-negative values.

The semantic sequence $y_1^N$ is now transmitted over a discrete memoryless channel with input alphabet $\mathcal{Y}$, output alphabet $\mathcal{Z}$, and transition probability matrix $\{P_c(z|y), y \in \mathcal{Y}, z \in \mathcal{Z}\}$. The probability that output sequence $z_1^N = (z_1, z_2, \cdots, z_N)$ occurs when $y_1^N$ is the channel input sequence is

$$\Pr\{Z_1^N = z_1^N \mid Y_1^N = y_1^N\} = \prod_{n=1}^{N} P_c(z_n|y_n). \quad (5)$$

From the channel output sequence $z_1^N$ a decoder $d(\cdot)$ constructs an estimate $\hat{M}_1^N$ of the partial source sequence $m_1^N$. The error probability $P_E$ is defined as

$$P_E \triangleq \Pr\{\hat{M}_1^N \neq M_1^N\}. \quad (6)$$
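To make the definitions (1)-(6) concrete, the following minimal Python sketch runs one pass through the system of Figure 1. The particular source, quantizer, distortion matrix, and channel are illustrative assumptions, not taken from the paper, and the identity encoder and symbol-wise decoder are placeholders rather than the code constructions analyzed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (an assumption): quaternary source alphabet
# X = {0,1,2,3}, quantizer mu(x) = most significant bit, Hamming semantic
# distortion, and a channel that resends a uniform symbol with prob. 0.05.
N = 10_000
Ps = np.array([0.4, 0.3, 0.2, 0.1])       # source distribution P_s(x), eq. (1)
mu = lambda x: x >> 1                     # quantizer m_n = mu(x_n), eq. (2)
Dxy = 1.0 - np.eye(4)                     # semantic distortion matrix D_xy(x,y)

x = rng.choice(4, size=N, p=Ps)           # i.i.d. source sequence x_1^N
m = mu(x)                                 # partial source sequence m_1^N

y = x.copy()                              # placeholder encoder e(x) = x
hit = rng.random(N) < 0.05                # memoryless channel P_c(z|y), eq. (5)
z = np.where(hit, rng.choice(4, size=N), y)

m_hat = mu(z)                             # placeholder symbol-wise decoder d(z)

D_avg = Dxy[x, y].mean()                  # average semantic distortion, eq. (4)
P_E = bool((m_hat != m).any())            # block error event {M_hat != M}, eq. (6)
print(f"distortion {D_avg:.3f}, symbol errors {(m_hat != m).mean():.3f}, "
      f"block error {P_E}")
```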

B. Statement of result

An $(N, D_{xy}, P_E)$-code consists of an encoding function $e(\cdot)$ and a decoding function $d(\cdot)$, both operating on sequences of length $N$, resulting in an average semantic distortion $D_{xy}$ and error probability $P_E$. Distortion level $\Delta_{xy}$ is now said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, codes $(N, D_{xy}, P_E)$ such that

$$D_{xy} \leq \Delta_{xy} + \epsilon, \quad \text{and} \quad P_E \leq \epsilon. \quad (7)$$

In Sections III and IV we will prove the following theorem.

Theorem 1: For semantic partial transmission the set of achievable distortions is equal to $\mathcal{D}$, which is defined as

$$\mathcal{D} \triangleq \Big\{ \Delta_{xy} : \Delta_{xy} \geq \sum_{x,y} P(x,y)\, D_{xy}(x,y), \text{ for}$$
$$P(x, m, u, y, z) = P_s(x, m)\, P_t(u, y|x)\, P_c(z|y) \text{ for some auxiliary } U \text{ with } |\mathcal{U}| \leq |\mathcal{X}||\mathcal{Y}|$$
$$\text{and test channel } P_t(u, y|x) \text{ such that } I(M, U; Z) \geq I(M, U; X) \Big\}. \quad (8)$$
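For small alphabets, the region $\mathcal{D}$ can be explored numerically. Below is a hedged sketch, not an exact characterization of $\Delta_{\min}$: it randomly samples test channels $P_t(u,y|x)$ at the cardinality bound $|\mathcal{U}| = |\mathcal{X}||\mathcal{Y}|$, keeps the joints satisfying $I(M,U;Z) \geq I(M,U;X)$, and records the smallest expected distortion found. Function names and the random-search strategy are assumptions.

```python
import numpy as np

def mutual_info(pab):
    """I(A;B) in bits for a 2-D joint distribution array."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float((pab[mask] * np.log2(pab[mask] / (pa * pb)[mask])).sum())

def min_distortion(Ps_xm, Pc, Dxy, trials=20000, seed=1):
    """Random search over test channels P_t(u,y|x): keep the smallest
    expected distortion whose joint P(x,m,u,y,z) = Ps(x,m) Pt(u,y|x) Pc(z|y)
    satisfies the Theorem 1 constraint I(M,U;Z) >= I(M,U;X)."""
    nX, nM = Ps_xm.shape
    nY, nZ = Pc.shape
    nU = nX * nY                               # cardinality bound |U| <= |X||Y|
    rng, best = np.random.default_rng(seed), np.inf
    for _ in range(trials):
        Pt = rng.dirichlet(np.ones(nU * nY), size=nX).reshape(nX, nU, nY)
        P = np.einsum('xm,xuy,yz->xmuyz', Ps_xm, Pt, Pc)
        I_mux = mutual_info(P.sum(axis=(3, 4)).reshape(nX, nM * nU))  # I(M,U;X)
        I_muz = mutual_info(P.sum(axis=(0, 3)).reshape(nM * nU, nZ))  # I(M,U;Z)
        if I_muz >= I_mux:
            Pxy = P.sum(axis=(1, 2, 4))                               # P(x,y)
            best = min(best, float((Pxy * Dxy).sum()))
    return best

# Example (a hypothetical instance): binary source, mu = identity, BSC(0.1),
# Hamming distortion. The search upper-bounds Delta_min by an achievable point.
Ps_xm = np.diag([0.5, 0.5])
Pc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(min_distortion(Ps_xm, Pc, 1.0 - np.eye(2), trials=5000))
```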

[Figure 2 shows the underlying model: the encoder forms $(V_1^N, Y_1^N) = e(X_1^N)$, the channel $P_c(z|y)$ maps $Y_1^N$ to $Z_1^N$, and the decoder produces $\hat{V}_1^N = d(Z_1^N)$.]

Fig. 2. An underlying model for semantic partial transmission.

The smallest possible distortion is $\Delta_{\min} = \min_{\Delta_{xy} \in \mathcal{D}} \Delta_{xy}$. Semantic transmission turns out to be impossible if $H(M)$ is larger than the capacity of the channel.

III. ACHIEVABILITY PROOF

A. Introduction

We will first consider a slightly different problem and prove a lemma concerning a joint distribution $P(x, v, y, z) = P_s(x) P_t(v, y|x) P_c(z|y)$, where $P_s(\cdot)$ is the source distribution, $\mathcal{V}$ a finite auxiliary alphabet, $P_t(\cdot,\cdot|\cdot)$ some fixed test channel between $\mathcal{X}$ and $\mathcal{V} \times \mathcal{Y}$, and $P_c(\cdot|\cdot)$ the channel. We claim the existence of a set of sequences $v_1^N \in \mathcal{V}^N$ having certain properties within the following scenario. Consider Figure 2. The memoryless source produces a sequence $x_1^N$. An encoder $e(\cdot)$ observing $x_1^N$ transmits over the channel $\{P_c(z|y), y \in \mathcal{Y}, z \in \mathcal{Z}\}$ a sequence $v_1^N \in \mathcal{V}^N$ that results in a channel input sequence $y_1^N$ and whose (semantic) distortion to $x_1^N$ should be acceptable. Moreover, the sequence $v_1^N$ should be the unique sequence jointly typical with $z_1^N$, such that a decoder $d(\cdot)$ can find $v_1^N$ using typicality. An error occurs if the semantic distortion is too large or if $v_1^N$ is not decoded.

Lemma 1: In the scenario described above, for each $\epsilon > 0$, for all $N$ large enough, there exists a set of $M$ sequences $v_1^N \in \mathcal{V}^N$ such that the error probability is not larger than $4\epsilon$ if

$$I(V; X) + 4\epsilon \;\leq\; \frac{1}{N} \log_2 M \;\leq\; I(V; Z) - 4\epsilon. \quad (9)$$

B. Definition and properties of typical sets $A_\epsilon^N$ and $B_\epsilon^N$

First let $K$ be a positive integer and fix an $0 < \epsilon < 1$.

Definition 1: The set $A_\epsilon^N(V_1 V_2 \cdots V_K)$ of $\epsilon$-typical $N$-sequences $(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_K)$ with respect to the joint distribution $P(v_1, v_2, \cdots, v_K)$ is defined by

$$A_\epsilon^N(V_1 V_2 \cdots V_K) \triangleq \Big\{ (\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_K) : \Big| \frac{1}{N} \log_2 \frac{1}{P(\mathbf{w})} - H(W) \Big| \leq \epsilon, \; \forall W \subseteq \{V_1, V_2, \cdots, V_K\} \Big\}, \quad (10)$$

where $P(\mathbf{w}) = \prod_{n=1}^{N} P(w_n)$. For the properties of $A_\epsilon^N$ we refer to Cover and Thomas [15].

Definition 2: For given $\mathbf{v}_1, \cdots, \mathbf{v}_{K-1}$ we define

$$A_\epsilon^N(V_K | \mathbf{v}_1, \cdots, \mathbf{v}_{K-1}) \triangleq \{\mathbf{v}_K : (\mathbf{v}_1, \cdots, \mathbf{v}_{K-1}, \mathbf{v}_K) \in A_\epsilon^N(V_1 V_2 \cdots V_K)\}, \quad (11)$$

which is the set of sequences $\mathbf{v}_K$ that are conditionally $\epsilon$-typical on $\mathbf{v}_1, \cdots, \mathbf{v}_{K-1}$ with respect to $P(v_1, v_2, \cdots, v_K)$.
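Definition 1 asks that every non-empty sub-collection $W$ of the components be weakly $\epsilon$-typical simultaneously. A small Python membership test might look as follows (a sketch; the function name and interface are assumptions):

```python
import itertools
import numpy as np

def is_weakly_typical(seqs, P, eps):
    """Membership test for A_eps^N(V_1...V_K), Definition 1: for every
    non-empty subset W of the components, |(1/N) log2 1/P(w) - H(W)| <= eps.

    seqs : list of K equal-length integer sequences (v_1, ..., v_K)
    P    : K-dimensional array holding the joint distribution P(v_1,...,v_K)
    """
    K, N = len(seqs), len(seqs[0])
    for r in range(1, K + 1):
        for subset in itertools.combinations(range(K), r):
            drop = tuple(a for a in range(K) if a not in subset)
            Pw = P.sum(axis=drop)                        # marginal of W
            mask = Pw > 0
            H_W = -(Pw[mask] * np.log2(Pw[mask])).sum()  # entropy H(W)
            p_n = Pw[tuple(np.asarray(seqs[a]) for a in subset)]
            if np.any(p_n == 0):
                return False                             # P(w) = 0: not typical
            rate = -np.log2(p_n).sum() / N               # (1/N) log2 1/P(w)
            if abs(rate - H_W) > eps:
                return False
    return True
```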


Note that the test channel $P_t(v, y|x)$ determines the joint probability distribution $P(x, v, y, z) = P_s(x) P_t(v, y|x) P_c(z|y)$. The following definition is crucial; it allows us to avoid using strong typicality.

Definition 3: Consider the set $B_\epsilon^N(XV)$ defined as

$$B_\epsilon^N(XV) \triangleq \big\{ (\mathbf{x}, \mathbf{v}) : \Pr\{\mathbf{Z} \in A_\epsilon^N(Z|\mathbf{x}, \mathbf{v}) \,\wedge\, d(\mathbf{Y}, \mathbf{x}) \leq D_{\exp} + \epsilon \mid (\mathbf{X}, \mathbf{V}) = (\mathbf{x}, \mathbf{v})\} \geq 1 - \epsilon \big\}, \quad (12)$$

where $\mathbf{Z}$ is the output of a "channel" $P(z|x, v) = P(x, v, z)/\sum_z P(x, v, z)$ with fixed inputs $\mathbf{x}$ and $\mathbf{v}$. Moreover, $D_{\exp} = \sum_{x,y} P(x, y) D_{xy}(x, y)$.

Property 1: For $(\mathbf{x}, \mathbf{v}) \in B_\epsilon^N(XV)$ there is at least one $\mathbf{z}$ such that $(\mathbf{x}, \mathbf{v}, \mathbf{z}) \in A_\epsilon^N(XVZ)$, and therefore $(\mathbf{x}, \mathbf{v}) \in A_\epsilon^N(XV)$.

Property 2: Let $\mathbf{X}, \mathbf{V}, \mathbf{Y}, \mathbf{Z}$ be i.i.d. with respect to $P(x, v, y, z)$. Then observe that

$$Q \triangleq \Pr\{(\mathbf{X}, \mathbf{V}, \mathbf{Z}) \in A_\epsilon^N(XVZ) \wedge d(\mathbf{X}, \mathbf{Y}) \leq D_{\exp} + \epsilon\}$$
$$\leq \sum_{(\mathbf{x},\mathbf{v}) \in B_\epsilon^N(XV)} p(\mathbf{x}, \mathbf{v}) + \sum_{(\mathbf{x},\mathbf{v}) \notin B_\epsilon^N(XV)} p(\mathbf{x}, \mathbf{v})(1 - \epsilon) = 1 - \epsilon \Pr\{(\mathbf{X}, \mathbf{V}) \notin B_\epsilon^N(XV)\}, \quad (13)$$

or

$$\Pr\{(\mathbf{X}, \mathbf{V}) \notin B_\epsilon^N(XV)\} \leq \frac{1 - Q}{\epsilon}. \quad (14)$$

The weak law of large numbers implies that $Q \geq 1 - \epsilon^2$ for large enough $N$. Using (14) this leads to the statement that for large enough $N$

$$\sum_{(\mathbf{x},\mathbf{v}) \in B_\epsilon^N(XV)} p(\mathbf{x}, \mathbf{v}) \geq 1 - \epsilon. \quad (15)$$

C. Random code construction

Random coding: Generate $M$ sequences $\mathbf{v}(w)$, $w \in \{1, 2, \cdots, M\}$, at random according to $p(v) = \sum_{x,y} P_s(x) P_t(v, y|x)$.

Encoding: The encoder chooses an index $w$ such that $(\mathbf{x}, \mathbf{v}(w)) \in B_\epsilon^N(XV)$. If such an index cannot be found, an error is declared. The channel input sequence $\mathbf{y}$ now results from applying the "channel" $p(y|x, v) = p(v, y|x)/\sum_y p(v, y|x)$ to $(\mathbf{x}, \mathbf{v}(w))$.

Decoding: The decoder, upon receiving $\mathbf{z}$, looks for the index $\hat{w}$ such that $(\mathbf{v}(\hat{w}), \mathbf{z}) \in A_\epsilon^N(VZ)$. If a unique index does not exist, an error is declared.
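A Python sketch of this construction, reusing is_weakly_typical from the earlier sketch, is given below. One simplification is flagged in the comments: the encoder here tests joint typicality of $(\mathbf{x}, \mathbf{v}(w))$ instead of membership in $B_\epsilon^N(XV)$, which is harder to verify directly, so this illustrates the flow of the scheme rather than the exact construction.

```python
import numpy as np

def random_code(M, N, p_v, rng):
    """Generate M codewords v(1),...,v(M), i.i.d. per symbol from p(v)."""
    return rng.choice(len(p_v), size=(M, N), p=p_v)

def encode(x, code, P_xv, P_y_given_xv, eps, rng):
    """Pick a w with (x, v(w)) jointly typical -- a stand-in for the stronger
    requirement (x, v(w)) in B_eps^N(XV) -- then draw the channel input y
    through the 'channel' p(y|x,v). Returns (w, y) or None on error."""
    for w, v in enumerate(code):
        if is_weakly_typical([x, v], P_xv, eps):
            p = P_y_given_xv[x, v]              # shape (N, |Y|): p(y|x_n, v_n)
            y = np.array([rng.choice(p.shape[1], p=pn / pn.sum()) for pn in p])
            return w, y
    return None                                 # error declared: no index found

def decode(z, code, P_vz, eps):
    """Return the unique w with (v(w), z) jointly typical, else None."""
    hits = [w for w, v in enumerate(code)
            if is_weakly_typical([v, z], P_vz, eps)]
    return hits[0] if len(hits) == 1 else None  # error if not unique
```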

D. Error probability

Let $\mathbf{X}$ be the source sequence, $W$ the index to $\mathbf{V}$, and $\mathbf{Z}$ the result of $\mathbf{X}$ and $\mathbf{V}(W)$. Then for $w \in \{1, 2, \cdots, M\}$ we define the events:

$$B_w \triangleq \{(\mathbf{X}, \mathbf{V}(w)) \in B_\epsilon^N(XV)\},$$
$$A_w \triangleq \{(\mathbf{V}(w), \mathbf{Z}) \in A_\epsilon^N(VZ)\},$$
$$C_w \triangleq \{(\mathbf{X}, \mathbf{V}(w), \mathbf{Z}) \in A_\epsilon^N(XVZ)\}. \quad (16)$$

The error probability (averaged over the ensemble of codes) is now

$$P_E = \Pr\{(\cap_w B_w^c) \cup A_W^c \cup (\cup_{w \neq W} A_w)\} \leq \Pr\{\cap_w B_w^c\} + \Pr\{(\cup_w B_w) \cap C_W^c\} + \sum_{w \neq W} \Pr\{A_w\}, \quad (17)$$

where we used the fact that $C_w \Rightarrow A_w$. We will investigate these three terms now.

First, for all $\mathbf{x}$ let $B_\epsilon^N(V|\mathbf{x}) \triangleq \{\mathbf{v} : (\mathbf{x}, \mathbf{v}) \in B_\epsilon^N(XV)\}$, and note that $M \geq 2^{N(I(X;V)+4\epsilon)}$. Then, see Gallager [16], p. 454,

$$\Pr\{\cap_{w=1}^{M} B_w^c\} = \sum_{\mathbf{x} \in \mathcal{X}^N} p(\mathbf{x}) \prod_{w=1}^{M} \Big(1 - \sum_{\mathbf{v} \in B_\epsilon^N(V|\mathbf{x})} p(\mathbf{v})\Big) \quad (18)$$
$$\overset{(a)}{\leq} \sum_{\mathbf{x} \in \mathcal{X}^N} p(\mathbf{x}) \Big(1 - 2^{-N(I(X;V)+3\epsilon)} \sum_{\mathbf{v} \in B_\epsilon^N(V|\mathbf{x})} p(\mathbf{v}|\mathbf{x})\Big)^M$$
$$\overset{(b)}{\leq} \sum_{\mathbf{x} \in \mathcal{X}^N} p(\mathbf{x}) \Big(1 - \sum_{\mathbf{v} \in B_\epsilon^N(V|\mathbf{x})} p(\mathbf{v}|\mathbf{x}) + \exp(-M 2^{-N(I(X;V)+3\epsilon)})\Big)$$
$$\leq \sum_{(\mathbf{x},\mathbf{v}) \notin B_\epsilon^N(XV)} p(\mathbf{x}, \mathbf{v}) + \sum_{\mathbf{x} \in \mathcal{X}^N} p(\mathbf{x}) \exp(-2^{N\epsilon}) \overset{(c)}{\leq} 2\epsilon,$$

for $N$ large enough. Here (a) follows from the fact that for $(\mathbf{x}, \mathbf{v}) \in B_\epsilon^N(XV)$, using Property 1,

$$p(\mathbf{v}) = p(\mathbf{v}|\mathbf{x}) \frac{p(\mathbf{x})\, p(\mathbf{v})}{p(\mathbf{x}, \mathbf{v})} \geq p(\mathbf{v}|\mathbf{x})\, 2^{-N(I(X;V)+3\epsilon)}, \quad (19)$$

(b) from the inequality $(1 - \alpha\beta)^M \leq 1 - \alpha + \exp(-M\beta)$, which holds for $0 \leq \alpha, \beta \leq 1$ and $M > 0$, and (c) from Property 2.

Secondly, we consider

$$\Pr\{(\cup_{w=1}^{M} B_w) \cap C_W^c\} \leq \max_{(\mathbf{x},\mathbf{v}) \in B_\epsilon^N(XV)} \Pr\{\mathbf{Z} \notin A_\epsilon^N(Z|\mathbf{x}, \mathbf{v}) \mid (\mathbf{V}, \mathbf{X}) = (\mathbf{x}, \mathbf{v})\} \overset{(d)}{\leq} \epsilon. \quad (20)$$

Here (d) follows directly from Definition 3 of the set $B_\epsilon^N(XV)$.

Thirdly, for a fixed $\mathbf{z}$,

$$\Pr\{\mathbf{V} \in A_\epsilon^N(V|\mathbf{z})\} = \sum_{\mathbf{v} \in A_\epsilon^N(V|\mathbf{z})} p(\mathbf{v}|\mathbf{z}) \frac{p(\mathbf{v})\, p(\mathbf{z})}{p(\mathbf{v}, \mathbf{z})} \leq 2^{-N(I(V;Z)-3\epsilon)} \sum_{\mathbf{v} \in A_\epsilon^N(V|\mathbf{z})} p(\mathbf{v}|\mathbf{z}) \leq 2^{-N(I(V;Z)-3\epsilon)}. \quad (21)$$

From $M \leq 2^{N(I(V;Z)-4\epsilon)}$ we now get, for $N$ large enough,

$$\sum_{w \neq W} \Pr\{A_w\} \leq \sum_{w \neq W} \max_{\mathbf{z}} \Pr\{\mathbf{V} \in A_\epsilon^N(V|\mathbf{z})\} \leq M\, 2^{-N(I(V;Z)-3\epsilon)} \leq 2^{-N\epsilon} \leq \epsilon. \quad (22)$$


E. Last part of the achievability proof

In the ensemble of codes, for all $N$ large enough, there now exists a set of $M$ sequences such that $P_E \leq 2\epsilon + \epsilon + \epsilon = 4\epsilon$, as long as $I(V;X) + 4\epsilon \leq \frac{1}{N}\log_2 M \leq I(V;Z) - 4\epsilon$, for our fixed $0 < \epsilon < 1$. This follows from combining (18), (20), and (22). This finishes the proof of the lemma.

Let $D_{\max} \triangleq \max_{x,y} D_{xy}(x, y)$. Then we obtain for the average semantic distortion of this code

$$D_{xy} \leq (1 - P_E)(D_{\exp} + \epsilon) + P_E D_{\max} \leq D_{\exp} + \epsilon + 4\epsilon D_{\max}. \quad (23)$$

Now returning to the achievability proof, we assume that $V = (M, U)$. If $I(M, U; X) < I(M, U; Z)$ there must be an $\epsilon > 0$ such that $I(M, U; X) + 4\epsilon \leq I(M, U; Z) - 4\epsilon$. The lemma then implies, for large enough $N$, the existence of a code $(N, D_{xy}, P_E)$ with $P_E \leq 4\epsilon$ and $D_{xy} \leq D_{\exp} + \epsilon + 4\epsilon D_{\max}$. Letting $\epsilon \downarrow 0$ proves the achievability part of the theorem.

Observe that we did not consider test channels for which $I(M, U; X) = I(M, U; Z)$. Also for such a test channel achievability can be proved, using the idea that the test channel can be adapted a little bit without increasing the distortion too much. We will not work out this idea here.

IV. CONVERSE

A. Mutual information part

Consider an $(N, D_{xy}, P_E)$-code. From $H(M_1^N|X_1^N) = 0$ and $H(M_1^N|Z_1^N) \leq 1 + P_E \log_2(|\mathcal{M}|^N)$ we obtain

$$0 \leq I(M_1^N; Z_1^N) - I(M_1^N; X_1^N) + 1 + P_E \log_2(|\mathcal{M}|^N). \quad (24)$$

For the difference $I(M_1^N; Z_1^N) - I(M_1^N; X_1^N)$ we find

$$I(M_1^N; Z_1^N) - I(M_1^N; X_1^N)$$
$$= \sum_{n=1}^{N} \big[ H(Z_n|Z_1^{n-1}) - H(Z_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) - I(Z_n; X_{n+1}^N|M_1^N, Z_1^{n-1}) - I(M_1^N; X_n|X_{n+1}^N) \big]$$
$$\overset{(a)}{=} \sum_{n=1}^{N} \big[ H(Z_n|Z_1^{n-1}) - H(Z_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) - I(X_n; Z_1^{n-1}|M_1^N, X_{n+1}^N) - I(M_1^N; X_n|X_{n+1}^N) \big]$$
$$= \sum_{n=1}^{N} \big[ H(Z_n|Z_1^{n-1}) - H(Z_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) - H(X_n|X_{n+1}^N) + H(X_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) \big]$$
$$\overset{(b)}{\leq} \sum_{n=1}^{N} \big[ H(Z_n) - H(Z_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) - H(X_n) + H(X_n|M_1^N, Z_1^{n-1}, X_{n+1}^N) \big]$$
$$\overset{(c)}{=} \sum_{n=1}^{N} \big[ I(Z_n; M_n, V_n) - I(X_n; M_n, V_n) \big], \quad (25)$$

with probability distribution

$$P(x_n, m_n, v_n, y_n, z_n) = P_s(x_n, m_n)\, P(v_n, y_n|x_n)\, P_c(z_n|y_n) \quad (26)$$

for some $P(v_n, y_n|x_n)$, for $n = 1, \cdots, N$. Here (a) follows from the "summation by parts" lemma in Csiszár and Körner [17], (b) from the fact that $H(X_n|X_{n+1}^N) = H(X_n)$ since the source symbols are i.i.d., and (c) from the substitution

$$V_n \triangleq (M_1^{n-1}, Z_1^{n-1}, X_{n+1}^N) \quad \text{for } n = 1, \cdots, N, \quad (27)$$

where we should note that $M_{n+1}^N$ is contained in $X_{n+1}^N$. We continue with

$$\sum_{n=1}^{N} \big[ I(Z_n; M_n, V_n) - I(X_n; M_n, V_n) \big] \overset{(d)}{=} N \big[ I(Z; M, V|T) - I(X; M, V|T) \big]$$
$$\overset{(e)}{\leq} N \big[ H(Z) - H(Z|M, V, T) - H(X) + H(X|M, V, T) \big]$$
$$\overset{(f)}{=} N \big[ I(Z; M, U) - I(X; M, U) \big], \quad (28)$$

with $P(x, m, u, y, z) = P_s(x, m)\, P(u, y|x)\, P_c(z|y)$ for some $P(u, y|x)$. Here (d) follows from defining a time-sharing variable $T$, independent of all the other variables, assuming value $n \in \{1, 2, \cdots, N\}$ with probability $1/N$, and $X = X_T$, $M = M_T$, $V = V_T$, $Y = Y_T$ and $Z = Z_T$, (e) from the fact that $H(X|T) = H(X)$ since the symbols $X_n$ are assumed to be i.i.d., and (f) from the substitution $U = (V, T)$.
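Step (a) above is the "summation by parts" (Csiszár sum) identity $\sum_n I(Z_n; X_{n+1}^N | M_1^N, Z_1^{n-1}) = \sum_n I(X_n; Z_1^{n-1} | M_1^N, X_{n+1}^N)$, which holds for every joint distribution. A quick numerical sanity check for $N = 3$ and binary alphabets might look as follows (a sketch; helper names are assumptions):

```python
import numpy as np

def entropy(P, keep):
    """H of the marginal of P on the axes in `keep` (bits)."""
    drop = tuple(a for a in range(P.ndim) if a not in keep)
    Pm = P.sum(axis=drop).ravel()
    Pm = Pm[Pm > 0]
    return -(Pm * np.log2(Pm)).sum()

def cond_mi(P, A, B, C=()):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C); axes given as tuples."""
    H = lambda *G: entropy(P, set().union(*G))
    return H(A, C) + H(B, C) - H(A, B, C) - H(C)

rng = np.random.default_rng(0)
P = rng.random((2,) * 7)
P /= P.sum()                 # random joint on axes (M, X1, X2, X3, Z1, Z2, Z3)

# left side:  I(Z1; X2 X3 | M) + I(Z2; X3 | M, Z1)   (the n = 3 term vanishes)
lhs = cond_mi(P, (4,), (2, 3), (0,)) + cond_mi(P, (5,), (3,), (0, 4))
# right side: I(X2; Z1 | M, X3) + I(X3; Z1 Z2 | M)   (the n = 1 term vanishes)
rhs = cond_mi(P, (2,), (4,), (0, 3)) + cond_mi(P, (3,), (4, 5), (0,))
assert np.isclose(lhs, rhs)  # Csiszar sum identity verified numerically
```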

B. Distortion part

Next we study the average semantic distortion:

$$D_{xy} = \sum_{x_1^N, y_1^N} P(x_1^N, y_1^N) \frac{1}{N} \sum_{n=1}^{N} D_{xy}(x_n, y_n) = \sum_{n=1}^{N} \frac{1}{N} \sum_{x_n, y_n} P(x_n, y_n) D_{xy}(x_n, y_n) = \sum_{x,y} P(x, y) D_{xy}(x, y), \quad (29)$$

where $P(x, y) = \sum_{m,u,z} P_s(x, m)\, P(u, y|x)\, P_c(z|y)$ for the same $P(u, y|x)$ that satisfies (28).

C. Last part of the converse

We now conclude that our $(N, D_{xy}, P_E)$-code satisfies

$$0 \leq I(Z; M, U) - I(X; M, U) + \frac{1}{N} + P_E \log_2(|\mathcal{M}|),$$
$$D_{xy} = \sum_{x,y} P(x, y) D_{xy}(x, y), \quad (30)$$

for some

$$P(x, m, u, y, z) = P_s(x, m)\, P(u, y|x)\, P_c(z|y). \quad (31)$$

This implies that for an achievable $\Delta_{xy}$, for any $\epsilon > 0$ and $N$ large enough,

$$0 \leq I(Z; M, U) - I(X; M, U) + \frac{1}{N} + \epsilon \log_2(|\mathcal{M}|),$$
$$\Delta_{xy} \geq D_{xy} - \epsilon = \sum_{x,y} P(x, y) D_{xy}(x, y) - \epsilon,$$

for some $P(x, m, u, y, z) = P_s(x, m)\, P(u, y|x)\, P_c(z|y)$. Letting $\epsilon \downarrow 0$ and $N \to \infty$ proves the converse part of the theorem.


D. Cardinality bounds

To find a bound on the cardinality of the auxiliary variable $U$, let $\mathcal{D}$ be the set of probability distributions on $\mathcal{X} \times \mathcal{Y}$ and consider the $|\mathcal{X}||\mathcal{Y}|$ continuous functions of $P \in \mathcal{D}$ defined as

$$\phi_{xy}(P) = P(x, y) \text{ for all but one pair } (x, y),$$
$$\phi_h(P) = H_P(X|M) - H_P(Z|M). \quad (32)$$

By the Fenchel-Eggleston strengthening of the Carathéodory lemma (see Wyner and Ziv [18]) there are $|\mathcal{X}||\mathcal{Y}|$ elements $P_u \in \mathcal{D}$ and $\alpha_u$ that sum to one, such that

$$P(x, y) = \sum_{u=1}^{|\mathcal{X}||\mathcal{Y}|} \alpha_u \phi_{xy}(P_u) \text{ for all but one pair } (x, y),$$
$$H(X|M, U) - H(Z|M, U) = \sum_{u=1}^{|\mathcal{X}||\mathcal{Y}|} \alpha_u \phi_h(P_u). \quad (33)$$

The entire probability distribution $\{P(x, y), x \in \mathcal{X}, y \in \mathcal{Y}\}$, and consequently the entropy $H(X)$, is now specified. Observe that now also the distribution $\{P(y), y \in \mathcal{Y}\}$, and therefore also $H(Z)$ and $\sum_{x,y} P(x, y) D_{xy}(x, y)$, are specified, and hence also $I(X; M, U) - I(Z; M, U)$. This implies that $|\mathcal{U}| = |\mathcal{X}||\mathcal{Y}|$ suffices.

V. EMBEDDING WATERMARKS CORRELATED TO THE COVERTEXT

Suppose that we have an i.i.d. source $S$ to which $M$ is correlated (as in Yang and Sun [14]). The correlated sequence $m_1^N$ now has to be embedded into $s_1^N$. The sequence $y_1^N$ (semantic to $s_1^N$) is conveyed via a memoryless channel $\{P_c(z|y), \mathcal{Y}, \mathcal{Z}\}$. The decoder has to reconstruct $m_1^N$. The fundamental limit for this problem was determined by Yang and Sun [14]. We show here that this problem can also be cast into our "semantic partial transmission" setup. This leads to the characterization of the set of achievable distortions.

Take $X = (S, M)$ and let $\mu(X) = M$. The set of achievable distortions is now

$$\mathcal{D} = \Big\{ \Delta_{sy} : \Delta_{sy} \geq \sum_{s,y} P(s, y) D_{sy}(s, y), \text{ for } P(s, m, u, y, z) = P_s(s, m)\, P_t(u, y|s, m)\, P_c(z|y)$$
$$\text{for test channel } P_t(u, y|s, m) \text{ such that } I(M, U; Z) \geq I(M, U; S, M) \Big\}. \quad (34)$$

It turns out that this result can also be obtained from [14] if we take an auxiliary random variable $U$ that includes $M$. Inspection of the converse in [14] shows that this is justified, and consequently our characterization is more specific. For the $U$-cardinality bound we obtain $|\mathcal{U}| \leq |\mathcal{S}||\mathcal{M}||\mathcal{Y}|$.
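In code, this reduction is just a relabeling. A sketch, assuming the hypothetical min_distortion helper from the Theorem 1 sketch above: flatten $x = (s, m)$, make $\mu$ the projection onto $m$, and lift the distortion matrix, which depends on the $S$-component only.

```python
import numpy as np

def watermark_min_distortion(Ps_sm, Pc, Dsy, **kw):
    """Cast the correlated-watermark problem as partial transmission:
    X = (S, M), mu(s, m) = m. Reuses the hypothetical min_distortion
    helper from the Theorem 1 sketch; all names are assumptions."""
    nS, nM = Ps_sm.shape
    # Ps(x, m) with x = s*nM + m flattened; mu is deterministic as in eq. (3)
    Ps_xm = np.zeros((nS * nM, nM))
    for s in range(nS):
        for m in range(nM):
            Ps_xm[s * nM + m, m] = Ps_sm[s, m]
    # distortion depends only on the S-component of x = (s, m)
    Dxy = np.repeat(Dsy, nM, axis=0)   # row s*nM + m equals row s of Dsy
    return min_distortion(Ps_xm, Pc, Dxy, **kw)
```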

VI. CONCLUSION

In this article we have proposed the semantic partial transmission setup and determined the set of achievable distortions for this situation. By introducing a typical set with some additional properties we were able to formulate an achievability proof based on weak instead of strong typicality. Strong typicality proofs, which build on the (strong typicality) Gelfand-Pinsker proof, are standard in semantic-coding and embedding problems. We have also investigated the connection between semantic partial transmission and embedding a watermark that is correlated to the covertext. Yang and Sun [14] determined the fundamental limit for this embedding setup; we could refine their formulation using our results for semantic partial transmission.

REFERENCES

[1] C.E. Shannon, "A Mathematical Theory of Communication," Bell Syst. Tech. J., vol. 27, pt. I, pp. 379-423, pt. II, pp. 623-656, 1948.
[2] F.M.J. Willems and T. Kalker, "Semantic Compaction, Transmission, and Compression Codes," Proc. IEEE ISIT, Adelaide, South Australia, Australia, 4-9 September 2005, pp. 214-218.
[3] S. Gelfand and M. Pinsker, "Coding for a Channel with Random Parameters," Probl. of Control and Inform. Theory, vol. 9, pp. 19-31, 1980.
[4] J. Wolfowitz, Coding Theorems of Information Theory. Berlin: Springer-Verlag, 1961; third edition 1978.
[5] T. Berger, "Multiterminal Source Coding," in The Information Theory Approach to Communications, CISM Courses and Lectures, vol. 229, G. Longo, Ed. Springer-Verlag, 1978, pp. 171-231.
[6] G.D. Forney, Jr., Information Theory, course notes (unpublished), Stanford University, 1972.
[7] T.M. Cover, "A Proof of the Data Compression Theorem of Slepian and Wolf for Ergodic Sources," IEEE Trans. Inform. Theory, vol. IT-21, pp. 226-228, March 1975.
[8] T. Kalker and F.M.J. Willems, "Capacity Bounds and Constructions for Reversible Data-hiding," Proc. Int. Conf. DSP, Santorini, Greece, July 1-3, 2002.
[9] T. Kalker and F.M.J. Willems, "Capacity Bounds and Code Constructions for Reversible Data-hiding," IS&T/SPIE 15th Ann. Symp. Electronic Imaging, Santa Clara, California, January 20-24, 2003.
[10] F.M.J. Willems and T. Kalker, "Methods for Reversible Embedding," Proc. 40th Ann. Allerton Conf. Communication, Control, and Computing, Allerton House, Monticello, Illinois, Oct. 2-4, 2002.
[11] F.M.J. Willems and T. Kalker, "Coding Theorems for Reversible Embedding," DIMACS Series in Discrete Math. and Theoretical Comp. Sc., vol. 66, American Mathematical Society, pp. 61-76, 2004.
[12] A. Sutivong, T.M. Cover, and M. Chiang, "Trade-off Between Message and State Information Rates," Proc. ISIT 2001, Washington, DC, June 24-29, 2001, p. 303.
[13] A. Sutivong, T.M. Cover, M. Chiang, and Young-Han Kim, "Rate vs. Distortion Trade-off for Channels with State Information," Proc. ISIT 2002, Lausanne, Switzerland, June 30 - July 5, 2002, p. 226.
[14] En-hui Yang and Wei Sun, "On Information Embedding When Watermark and Covertext are Correlated," Proc. ISIT 2006, Seattle, USA, July 9-14, 2006, pp. 346-350.
[15] T.M. Cover and J.A. Thomas, Elements of Information Theory. New York: Wiley & Sons, 1991; second edition 2006.
[16] R.G. Gallager, Information Theory and Reliable Communication. Wiley, 1968.
[17] I. Csiszár and J. Körner, "Broadcast Channels with Confidential Messages," IEEE Trans. Inform. Theory, vol. IT-24, pp. 339-348, May 1978.
[18] A.D. Wyner and J. Ziv, "The Rate-Distortion Function for Source Coding with Side Information at the Decoder," IEEE Trans. Inform. Theory, vol. IT-22, no. 1, pp. 1-10, Jan. 1976.
