Estimation of mutual information from limited experimental data

Citation for published version (APA):
Houtsma, A. J. M. (1983). Estimation of mutual information from limited experimental data. Journal of the Acoustical Society of America, 74(5), 1626-1629. https://doi.org/10.1121/1.390125

DOI: 10.1121/1.390125

Document status and date: Published: 01/01/1983



the $f_r/f_a$ ratios (the most sensitive ratio) lie approximately two-thirds of the way to the $f_r/f_a$ ratio for constant B. Using this piece of experimental data, and assuming as a first-order approximation that one can linearly interpolate between Eqs. (1) and (3), one arrives at Eq. (7). This approximation can also be arrived at graphically by performing the same interpolation on the graph on page 80 of Ref. 8:

$$k_{33}^2 = 1.064\left[\,1 - (f_r/f_a)^2\,\right] \qquad (7)$$

when $k_{\mathrm{eff}} > 0.75$. The application of Eq. (7) yields $k_{33}^{\max} = 0.875$ for the 2605SC alloy. The authors of Ref. 1 used Eq. (6) to compute their $k_{33}$ for the 2605SC alloy. When modified by using Eq. (7), their maximum $k_{33}$ becomes 0.924 instead of 0.965.
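As a quick numerical check, a minimal sketch in Python, assuming the form of Eq. (7) reconstructed above; the function name is illustrative, not from the original paper:

```python
import math

def k33_from_resonances(fr, fa):
    """Coupling coefficient from the resonance (fr) and antiresonance
    (fa) frequencies via Eq. (7): k33^2 = 1.064 * [1 - (fr/fa)^2].
    Intended only for the high-coupling regime, k_eff > 0.75."""
    return math.sqrt(1.064 * (1.0 - (fr / fa) ** 2))

# A frequency ratio fr/fa near 0.53 corresponds to k33 of about 0.875,
# the maximum value reported for the 2605SC specimens.
print(k33_from_resonances(0.53, 1.0))   # ~0.875
```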

IV. CONCLUSION

It has been empirically shown that metallic-glass piezomagnetic ribbons of the 2605CO composition are in a constant-$H$ boundary condition due to the transverse magnetization. The 2605SC alloy is in a "2/3 constant-$B$ condition" when $k_{\mathrm{eff}} > 0.75$. This is due to partial rotation of the magnetization of this alloy at very low fields because of the low anisotropy constant. An empirical relation between $f_r$, $f_a$, and $k_{\mathrm{eff}}$ was derived for the 2605SC alloy. The maximum coupling coefficient of the 2605SC specimens tested was determined to be $k_{33} \approx 0.875$ (uncorrected for leakage flux).

The application of this empirical relation to the data of Ref. 1 yields $k_{33} \approx 0.924$, which is lower than the value of 0.965 reported there.

ACKNOWLEDGMENTS

The authors wish to thank Dr. A. E. Clark and Dr. L. T. Kabacoff for annealing the metallic-glass samples used in this study. Thanks are due to Clementina M. Ruggiero for assistance in the numerical solution of two equations, to Dr. R. W. Timme and Dr. R. Y. Ting for the critical review of this paper, and to L. C. Colquirt for technical assistance in collecting the data.

1. C. Modzelewski, H. T. Savage, L. T. Kabacoff, and A. E. Clark, IEEE Trans. Magn. MAG-17, 2837-2839 (1981).

2. J. C. Hill and S. W. Meeks, J. Acoust. Soc. Am. 74, 348-350 (1983).

3. IEEE Standard on Magnetostrictive Materials: Piezomagnetic Nomenclature, Std. No. 319-1971 (1971).

4. W. J. Marshall, USL Report No. 622, U.S. Navy Underwater Sound Lab., New London, CT (16 Oct. 1964).

5. D. A. Berlincourt, D. R. Curran, and H. Jaffe, in Physical Acoustics, edited by W. P. Mason (Academic, New York, 1964), Vol. I, Part A, Chap. 3.

6. National Defense Research Committee, The Design and Construction of Magnetostriction Transducers, Summary Technical Report of Division 6 (Columbia U.P., Washington, DC, 1946), Vol. 13, Chap. 3.

7. Solid State Magnetic and Dielectric Devices, edited by H. W. Katz (Wiley, New York, 1959), Chap. 3.

8. D. A. Berlincourt, in Ultrasonic Transducer Materials, edited by O. E. Mattiat (Plenum, New York, 1971), p. 79.

9. M. Brouha and J. van der Borst, J. Appl. Phys. 50, 7594 (1979).

Estimation of mutual information from limited experimental data

Adrianus J. M. Houtsma

Institute for Perception Research, Den Dolech 2, Eindhoven, The Netherlands

(Received 25 April 1983; accepted for publication 26 July 1983)

To obtain an unbiased estimate of mutual information from an experimental confusion matrix, one needs a minimum number of trials of about five times the number of cells in the matrix. This study presents a computer-simulation approach to derive unbiased estimates of mutual information from samples of considerably fewer data.

PACS numbers: 43.66.Yw, 43.85.Ta, 43.60.Cg [JH]

This letter discusses a problem recently encountered while performing an absolute identification experiment with a set of many stimuli which differed along three physical dimensions. The purpose of that experiment was to examine the independence of the perceptual correlates of the three dimensions by trying to find out whether or not the information conveyed through a three-dimensional set of stimuli equals the sum of the amounts of information conveyed through three separate sets of stimuli that differ along only one dimension. The problem encountered was how to obtain a reliable estimate of mutual information from identification data for a large set of alternative stimuli, while keeping the number of required experimental trials within practical limits.

As it has been several decades since information theory first found widespread use in psychophysics, it may be worthwhile reviewing some fundamental ideas. If an event $X$ has $k$ possible outcomes $(x_1, x_2, \ldots, x_k)$, and the $i$th outcome occurs with probability $p(x_i)$, then the average uncertainty or entropy, according to the Shannon-Wiener theory, is

$$H(X) = -\sum_{i=1}^{k} p(x_i)\,\log_2 p(x_i). \qquad (1)$$

If successive events are observed through a noisy transmission channel, an observation $Y$ results, also with $k$ possible outcomes. An entropy measure similar to Eq. (1) can be defined for $Y$ as well. A useful measure of how much information is received by the observer through the transmission channel is the mutual information between $X$ and $Y$,

$$T(X;Y) = \sum_{i=1}^{k}\sum_{j=1}^{k} p(x_i,y_j)\,\log_2\frac{p(x_i,y_j)}{p(x_i)\,p(y_j)}, \qquad (2)$$

where $p(x_i,y_j)$ is the joint probability of the $i$th transmitted and the $j$th observed message. Entropy and mutual information are both expressed in bits. In practice they cannot be computed from Eqs. (1) and (2), however, because the probabilities $p(x_i)$, $p(y_j)$, and $p(x_i,y_j)$ are not known a priori. They must be estimated from frequencies of occurrence in empirical data. The maximum likelihood estimate of $H(X)$ is

$$\hat{H}(X) = -\sum_{i=1}^{k} \frac{n_i}{n}\,\log_2\frac{n_i}{n}, \qquad (3)$$

where $n_i$ is the actual number of times the outcome $x_i$ occurred in a total of $n$ successive events. Similarly, there is a maximum likelihood estimate for $T(X;Y)$:

$$\hat{T}(X;Y) = \sum_{i=1}^{k}\sum_{j=1}^{k} \frac{n_{ij}}{n}\,\log_2\frac{n\,n_{ij}}{n_i\,n_j}, \qquad (4)$$

where $n_{ij}$ is the frequency of the joint event $(x_i,y_j)$ in a sample of $n$ events, and $n_i = \sum_{j=1}^{k} n_{ij}$ and $n_j = \sum_{i=1}^{k} n_{ij}$. The frequencies $n_i$, $n_j$, and $n_{ij}$ can all be derived from an empirical confusion matrix, which is the typical form in which data from absolute identification experiments are cast.
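For concreteness, Eqs. (3) and (4) can be evaluated directly from a matrix of joint counts, as in the following sketch (Python with NumPy; the code and function names are illustrative, not part of the original letter):

```python
import numpy as np

def entropy_hat(counts):
    """Maximum likelihood entropy estimate of Eq. (3), in bits."""
    p = counts / counts.sum()
    p = p[p > 0]                    # terms with n_i = 0 contribute nothing
    return float(-np.sum(p * np.log2(p)))

def mutual_info_hat(confusion):
    """Maximum likelihood estimate of T(X;Y), Eq. (4), in bits.
    confusion[i, j] holds n_ij, the number of trials on which stimulus
    x_i was presented and response y_j was given."""
    p = confusion / confusion.sum()
    pi = p.sum(axis=1, keepdims=True)    # row marginals, n_i / n
    pj = p.sum(axis=0, keepdims=True)    # column marginals, n_j / n
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / (pi * pj)[m])))

# A noiseless four-alternative channel transmits log2(4) = 2 bits:
print(mutual_info_hat(np.eye(4) * 10))   # -> 2.0
```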

Neither $\hat{H}$ nor $\hat{T}$ is an unbiased estimate of $H$ and $T$. It can be shown (Miller, 1954) that $\hat{H}$ is an underestimate and $\hat{T}$ is an overestimate. Since entropy increases when outcomes of events are more uniformly distributed, one would expect the estimated entropy $\hat{H}$, derived from a small data sample, always to be on the low side, since such uniformity in data distribution can only be reached asymptotically. On the other hand, since mutual information $T$ is a measure of response consistency, i.e., whether or not the same observation is consistently made for a given input event, one expects a mutual information estimate $\hat{T}$ always to be on the high side. Especially when there is little basis for consistency, e.g., when the transmission channel is very noisy and observations are rather random, a small sample of observations may nevertheless look reasonably consistent, since the observer did not have sufficient opportunity to be inconsistent. An extreme example, of course, is a sample of one single observation, which is always consistent with itself, no matter whether it is a correct or an incorrect one. Miller (1954) demonstrated a method for computing the bias in $\hat{T}$ for data samples in which the number of trials is at least five times the number of cells in the confusion matrix. The actual number of alternative stimuli in an absolute identification experiment therefore does not have to be very large before the required number of trials becomes impractically large (e.g., 50 000 trials for 100 alternative stimuli).
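For reference, the first-order correction usually attributed to Miller (1954), quoted here as an assumption since the letter does not reproduce the formula, estimates the bias of $\hat{T}$ for a $k \times k$ confusion matrix based on $n$ trials as

$$E[\hat{T}] \approx T + \frac{(k-1)^2}{2\,n\,\ln 2} \ \text{bits},$$

so that the correction subtracts this term from $\hat{T}$. Written this way, the sample-size problem is explicit: the bias grows with the square of the number of alternatives, but shrinks only linearly in the number of trials.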

In most of the older experiments on absolute identification of relatively large stimulus sets, investigators often collected many trials by running groups of subjects simultaneously and pooling their data. Pollack (1953) measured about 2.5 bits of mutual information with a set of 25 pure tones differing only in frequency by presenting the entire set five times to ten subjects. The total number of trials thus obtained was 1250, twice the number of cells in the confusion matrix. Hake and Garner (1951) measured 3.25 bits for a stimulus set of 50 different points on a line by presenting 200 trials to 16 subjects. The total number of trials obtained was 1.28 times the number of matrix cells. Klemmer and Frick (1953), who measured recognition of the position of a dot in a square, presented for 400 alternative dot positions all 400 possible stimuli once to 80 subjects, obtaining a number of trials only 0.2 times the number of matrix cells. The most remarkable case is perhaps the study by Pollack and Ficks (1954), who measured information transfer for an auditory stimulus which could take on five values in each of six dimensions, i.e., a set of 15 625 different stimuli. They presented an average of about 100 trials to 36 subjects, but did not pool the data. It is obvious that, even if the data were pooled, the amount would be far short of the number of trials required for obtaining an unbiased estimate of mutual information from an overall confusion matrix. Such a matrix would have more than 200 million cells, requiring by Miller's criterion at least one billion trials! Instead, they transformed their data into six 5 × 5 confusion matrices, one for each physical dimension, estimated mutual information in each matrix by means of the 100 trials, and added the results under the assumption of independence for a grand total of 7.2 bits of mutual information. Finally, the author recently performed an absolute identification experiment with a set of vibrotactile stimuli which could assume five different values in each of three dimensions. On each trial three responses were given, one corresponding to each physical dimension. A total of 5000 trials was obtained on one subject. Data were processed (1) by the method of Pollack and Ficks, with results of 0.89, 0.88, and 1.37 bits of mutual information along the respective dimensions, and (2) by a direct estimate of $\hat{T}$ from the overall (125 × 125) confusion matrix having a trial/cell ratio of 0.32, resulting in 3.94 bits of mutual information. All results are summarized in Table I.

TABLE I. Summary of stimulus set and observation sample sizes in selected absolute identification experiments.

  Author                      Stimuli in set    Trials    Trials/matrix cells    Mut. inf. (bits)
  Pollack (1953)                          25     1 250            2.0                   2.5
  Hake and Garner (1951)                  50     3 200            1.28                  3.25
  Klemmer and Frick (1953)               400    32 000            0.2                   4.6
  Pollack and Ficks (1954)            15 625       100            ...                   7.2
  Houtsma (current rep.)                 125     5 000            0.32                  3.94

One sees that in the first three examples the number of trials taken falls progressively short of the minimum stipulated by Miller for obtaining an unbiased estimate of mutual information.
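The two processing methods compared above translate directly into code. The following sketch (Python/NumPy; an illustration rather than the author's program, and the array layout is an assumption) collapses a three-dimensional identification data set either into the overall 125 × 125 matrix of method (2) or into the three 5 × 5 per-dimension matrices of method (1), whose estimates are then added under the independence assumption:

```python
import numpy as np

def t_hat(counts):
    """ML estimate of T in bits, Eq. (4), from a matrix of joint counts."""
    p = counts / counts.sum()
    pi = p.sum(axis=1, keepdims=True)
    pj = p.sum(axis=0, keepdims=True)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / (pi * pj)[m])))

rng = np.random.default_rng(0)

# counts[s1, s2, s3, r1, r2, r3]: trials on which stimulus (s1, s2, s3)
# received response (r1, r2, r3); five values along each dimension.
# (Random placeholder data; a real experiment would supply the counts.)
counts = rng.poisson(0.3, size=(5, 5, 5, 5, 5, 5))

# Method (2): a direct estimate from the overall 125 x 125 matrix.
overall = t_hat(counts.reshape(125, 125))

# Method (1), after Pollack and Ficks: collapse onto each dimension in
# turn (summing over the other two stimulus and response dimensions),
# estimate T per 5 x 5 matrix, and add under the independence assumption.
per_dim = [t_hat(counts.sum(axis=ax)) for ax in
           [(1, 2, 4, 5), (0, 2, 3, 5), (0, 1, 3, 4)]]
print(overall, sum(per_dim))
```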


            R11   R12   R21   R22
  S11        1     0     0     0
  S12        0     0     1     0
  S21        0     1     0     0
  S22        0     0     0     1

            R1l   R2l                 Rk1   Rk2
  S1j       0.5   0.5         Si1     0.5   0.5
  S2j       0.5   0.5         Si2     0.5   0.5

FIG. 1. Hypothetical confusion matrix for a two-dimensional stimulus $S_{ij}$ and two-dimensional response $R_{kl}$, with only two possible values along each dimension. Two-by-two matrices are computed for the first dimension averaged over the second, and for the second dimension averaged over the first, respectively. Letters in subscripts indicate the dimension which was averaged.

Pollack and Ficks explicitly assumed independence of stimulus-response relations between the various dimensions. If this were not the case, their simple addition scheme would not work, as shown in the following example of a two-dimensional stimulus $S_{ij}$ and response $R_{kl}$, where both dimensions of the stimulus (and response) have only two possible values. A hypothetical confusion matrix, showing the conditional probabilities $p(R_{kl}\,|\,S_{ij})$, is shown in Fig. 1. Since every stimulus has a unique response, mutual information equals two bits if stimuli $S_{ij}$ are presented with equal a priori probabilities. If, however, one computes from this matrix the two confusion matrices of each separate dimension, as shown in the same figure, one obtains uniform distributions of conditional response probabilities (0.5) with zero bits of mutual information along each of the two dimensions. Information is clearly not additive here. In fact, one can show that for this simple two-dimensional case:

(a) If stimulus-response combinations for the two dimensions are independent, mutual information is additive, i.e., the sum of the amounts of information conveyed through each dimension equals the total amount of information received.

(b) If the occurrence of a particular stimulus-response combination along one dimension makes a particular combination in the other dimension more likely, the total amount of information received is larger than the sum of the amounts of information measured along each dimension.

(c) If the occurrence of a particular stimulus-response combination along one dimension makes a particular combination in the other dimension less likely, total mutual information is less than the sum of the amounts measured in each dimension.

The author's data from the three-dimensional tactile experiment suggest that case (b) applied, and if the same were true for the Pollack and Ficks experiment, their result of 7.2 bits of total mutual information for a six-dimensional stimulus is an underestimate.
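The Fig. 1 example can be verified numerically. The following sketch (Python/NumPy, not part of the original letter) computes mutual information for the full 4 × 4 matrix and for the two collapsed 2 × 2 matrices:

```python
import numpy as np

def mutual_info_bits(joint):
    """Mutual information, in bits, of a matrix of joint probabilities."""
    pi = joint.sum(axis=1, keepdims=True)
    pj = joint.sum(axis=0, keepdims=True)
    m = joint > 0
    return float(np.sum(joint[m] * np.log2(joint[m] / (pi * pj)[m])))

# Fig. 1, with stimuli in row order S11, S12, S21, S22 and responses in
# column order R11, R12, R21, R22; equiprobable stimuli, each mapped to
# a unique response, give a joint probability of 1/4 per occupied cell.
joint = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]]) / 4.0

print(mutual_info_bits(joint))        # 2.0 bits for the full matrix

# Collapse onto each dimension: reshape to indices (i, j, k, l) and sum
# out the other stimulus and response dimension.
J = joint.reshape(2, 2, 2, 2)
print(mutual_info_bits(J.sum(axis=(1, 3))))   # first dimension:  0.0 bits
print(mutual_info_bits(J.sum(axis=(0, 2))))   # second dimension: 0.0 bits
```

This reproduces case (b)'s premise in its extreme form: all of the transmitted information resides in the dependence between the two dimensions, and none of it survives the per-dimension collapse.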

A practical approach to the problem of how to obtain an unbiased estimate of $T$ from a limited sample of absolute identification data is to simulate an identification experiment with varying numbers of trials and varying amounts of response noise (to simulate "good" and "bad" performance). On each trial, an integer $X$ was chosen with equal probability in the range $1 \le X \le 125$. A response $Y = X + R$ was generated as well, where $R$ is a uniformly distributed random integer in the range $-S \le R \le S$. Trials in which $Y$ came out larger than 125 or smaller than 1 were repeated to keep responses within the proper range. Figure 2 shows plots of $\hat{T}$, the maximum likelihood estimate of $T$, as a function of the data sample size $L$. Each of these dashed curves, corresponding to a particular value of $S$, represents about ten computed points (not visible because they all fall nearly exactly on the curves).
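This simulation is easy to reproduce. The sketch below (Python/NumPy; a reconstruction under the stated assumptions rather than the author's original program) generates $L$ trials for a given noise half-width $S$ and evaluates Eq. (4):

```python
import numpy as np

def simulate_T_hat(L, S, K=125, seed=1):
    """Simulate L identification trials among K stimuli with response
    noise half-width S, and return the maximum likelihood estimate of
    T(X;Y), Eq. (4), in bits."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((K, K), dtype=np.int64)
    made = 0
    while made < L:
        x = int(rng.integers(1, K + 1))          # stimulus, uniform on 1..K
        y = x + int(rng.integers(-S, S + 1))     # response = stimulus + noise
        if 1 <= y <= K:                          # out-of-range trials repeated
            counts[x - 1, y - 1] += 1
            made += 1
    p = counts / counts.sum()
    pi = p.sum(axis=1, keepdims=True)
    pj = p.sum(axis=0, keepdims=True)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / (pi * pj)[m])))

# The estimate decreases toward its asymptote as the sample size L grows:
for L in (500, 2000, 10000, 50000):
    print(L, round(simulate_T_hat(L, S=8), 2))
```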

On the same set of coordinates, empirical values of $\hat{T}$ from the author's experiment are shown as a solid curve, derived from the first $L$ empirical data points of the total of 5000 trials. The dotted curve shows these same empirical values of $\hat{T}$, but bias-corrected with Miller's (1954) formula. One can easily see how much mutual information estimates are overcorrected if that formula is applied to data samples that are too small.

FIG. 2. Computer-simulated estimates of mutual information (dashed curves) for a set of $K = 125$ alternative stimuli. The curve parameter is the amount of response noise $S$ in the simulation model ($S$ = 1, 2, 4, 8, 16, 50, and 125). The solid curve represents empirical results from an absolute identification experiment with 125 possible different stimuli in which 5000 trials were taken. The dotted curve shows the same empirical results after application of Miller's (1954) bias correction. (Horizontal axis: number of trials $L$, in thousands.)


The curves shown in Fig. 2 demonstrate that $\hat{T}$ decreases monotonically with the number of trials $L$ to an asymptotic value $T$. They show clearly how much mutual information is overestimated when the number of experimental trials taken is insufficiently large. Curves such as these provide at the same time a reasonably good unbiased estimate of $T$ from data samples considerably smaller than those required when Miller's bias correction is to be used.

One can fit an empirically determined function $\hat{T}(L)$ to the nearest simulated function and read off the corresponding asymptotic value of $T$, although it must be said that for very small data samples this may be a difficult task too. Second, one sees that far fewer trials are needed to estimate a relatively high value of $T$ than are needed for a low value of $T$. That is because in the latter case all or nearly all cells of the confusion matrix are used, whereas in the case of large information transfer far fewer cells are used, yielding a much smaller "effective" matrix. It takes more trials to estimate a particular kind of distribution over a large number of possible outcomes by empirical means than it takes to estimate a distribution over only a few possible outcomes. Finally, the computed data points that determine each of the curves of Fig. 2 show, by repeated computation, extremely small variance. Although an analytic expression for the variance of $\hat{T}$ is not simple to obtain, Miller and Madow (1954) and Rogers and Green (1954) have computed approximate expressions for the first two moments of the entropy estimate, $E[\hat{H}]$ and $E[\hat{H}^2]$. Their results show that, even if the number of trials is smaller than the number of possible input events ($n < k$), the variance of $\hat{H}$ is small compared to its mean. For the same reason the variance of $\hat{T}$ should be small compared to its mean, even for relatively small trial samples, which is supported by our simulation results. Therefore, the main problem of using insufficiently large numbers of trials to estimate mutual information in an absolute identification paradigm is not the variance of the estimate, but its bias.
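The curve-matching procedure just described might be sketched as follows (again in Python; this reuses the simulate_T_hat sketch given earlier, and the least-squares matching rule is an assumption about how the fit would be done, not a procedure specified in the letter):

```python
import numpy as np

def asymptotic_T(empirical_curve, Ls, S_grid=(1, 2, 4, 8, 16, 50, 125)):
    """Match an empirical T_hat(L) curve to the nearest simulated curve,
    in least squares over the sample sizes Ls, then read off the matched
    curve's asymptote from one long simulation run."""
    err = [sum((simulate_T_hat(int(L), S) - e) ** 2
               for L, e in zip(Ls, empirical_curve)) for S in S_grid]
    best_S = S_grid[int(np.argmin(err))]
    return asymptote := simulate_T_hat(200_000, best_S)
```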

ACKNOWLEDGMENTS

The author is indebted to L. D. Braida, N. I. Durlach, I. Pollack, J. Roufs, and P. Zurek for their encouragement and helpful comments. Work was supported by the National Institutes of Health, Grant 2R01 NS 11680-05, and by the Karmazin Foundation, while the author was at the Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA.

Hake, H. W., and Garner, W. R. (1951). "The effect of presenting various numbers of discrete steps on scale reading accuracy," J. Exp. Psychol. 42, 358-366.

Klemmer, E. T., and Frick, F. C. (1953). "Assimilation of information from dot and matrix patterns," J. Exp. Psychol. 45, 15-19.

Miller, G. A. (1954). "Note on the bias of information estimates," in Information Theory in Psychology, edited by H. Quastler (The Free Press, Glencoe, IL).

Miller, G. A., and Madow, W. G. (1954). "On the maximum likelihood estimate of the Shannon-Wiener measure of information," AFCRC-TR-54-75.

Pollack, I. (1953). "The information of elementary auditory displays. II," J. Acoust. Soc. Am. 25, 765-769.

Pollack, I., and Ficks, L. (1954). "Information of elementary multidimensional auditory displays," J. Acoust. Soc. Am. 26, 155-158.

Rogers, M. S., and Green, B. F. (1954). "The moments of sample information when the alternatives are equally likely," in Information Theory in Psychology, edited by H. Quastler (The Free Press, Glencoe, IL).
