
Evaluating and Improving Local Hyperspectral Anomaly Detectors

Leonardo R. Bachega
School of Electrical and Computer Engineering
Purdue University, West Lafayette, IN 47907
lbachega@purdue.edu

James Theiler
Space and Remote Sensing Group
Los Alamos National Laboratory, Los Alamos, NM 87545
jt@lanl.gov

Charles A. Bouman
School of Electrical and Computer Engineering
Purdue University, West Lafayette, IN 47907
bouman@purdue.edu

Abstract—This paper addresses two issues related to the detection of hyperspectral anomalies. The first issue is the evaluation of anomaly detector performance even when labeled data is not available. The second issue is the estimation of the covariance structure of the data in local detection methods, such as the RX detector, when the number of available training pixels n is not much larger than (and may even be smaller than) the data dimensionality p.

Our first contribution is to formulate and employ a mean-log-volume approach for evaluating local anomaly detectors. Traditionally, the evaluation of a detector’s accuracy has been problematic. Anomalies are loosely defined as pixels that are unusual with respect to the other pixels in a local or global context. This loose definition makes it easy to develop anomaly detection algorithms – and many have been proposed – but more difficult to evaluate or compare them. Our mean-log-volume approach allows for an effective evaluation of a detector’s accuracy without requiring labeled testing data or an overly-specific definition of an anomaly.

The second contribution is to investigate the use of the Sparse Matrix Transform (SMT) to model the local covariance structure of hyperspectral images. The SMT has been previously shown to provide full rank estimates of large covariance matrices even in the n < p scenario.

Traditionally, a good estimate of the covariance requires a number of training pixels at least as large as the data dimensionality (and preferably several times larger). Therefore, when one deploys the RX detector in a sliding window, the choice of small window sizes is limited by the n > p restriction associated with the covariance estimation. Our results suggest that RX-style detectors using the SMT covariance estimates perform favorably compared to other methods even (indeed, especially) in the regime of very small window sizes.

I. INTRODUCTION

Anomaly detection promises the impossible: it is target detection without knowing anything about the target.

In the context of hyperspectral imagery, the anomalous pixels are those that are unusual with respect to the other pixels in a local or global context. A number of anomaly detectors have been developed for hyperspectral datasets, many of which are surveyed by Stein et al. [1] and, more recently, by Matteoli et al. [2].

Local detectors form an important class of algorithms. They work using a statistical model of the background pixels in the local neighborhood of the pixel under test. In general, only the pixels within a sliding window are used to estimate properties of the local context. To the extent that the background statistical properties are non-stationary across the image, this local statistical characterization has the potential to improve the detection accuracy. One problem with these local methods is that the number of training samples (pixels), n, needed for a good estimate of the covariance must be at least as large as the data dimensionality (number of spectral bands), p, and preferably should be several times larger than p [3], [4]. This n ≫ p requirement rules out small window sizes. The potential increase in detection accuracy due to the local characterization of the background (in a small window) is compromised by the lack of adequate training samples needed to estimate the covariance.

Another way to address the covariance estimation problem is to use the Sparse Matrix Transform (SMT).

The SMT provides full-rank estimates of large covariance matrices even when the number of training samples n is smaller than the data dimensionality p [5]. We have recently shown that the SMT improves the accuracy of “global” anomaly detectors [6]. In this paper, we suggest that RX-style detectors using the SMT covariance estimates perform favorably compared to other methods, even in the regime of very small window sizes.

The rest of this paper is organized as follows: Section II formulates the anomaly detection task and reviews the covariance estimation methods most commonly used in anomaly detection; Section III describes the SMT covariance estimation and how the SMT estimates yield highly accurate detectors even when small window sizes are used; Section IV introduces the mean log-volume as a measure of detection accuracy and shows how it can be used to select the window size that maximizes the detection accuracy; Section V presents our main experimental results. Finally, Section VI presents the main conclusions.

II. HYPERSPECTRAL ANOMALY DETECTION

Hyperspectral anomaly detection consists of finding pixel regions (objects) in the hyperspectral image whose pixels differ substantially from the background, i.e., from the pixels in the regions surrounding these objects.

In general, there is no precise definition of what constitutes an anomaly. A common way of defining anomalies is to say that anomalies are not concentrated [7].

Here we assume that anomalous samples are drawn from a broad, uniform distribution with a much larger support than the distribution of typical (i.e., not anomalous) samples. This assumption allows us to describe anomaly detection in terms of a binary classification problem.

A. Anomaly Detection as Binary Classification

Let x be a p-dimensional random vector. We want to classify x as typical if it is drawn from a multivariate Gaussian distribution N(µ, R), or as anomalous if it is drawn from a uniform distribution U(x) = c, where c is some constant. Formally, we have the following hypotheses:

H0: x ∼ N(µ, R)
H1: x ∼ U,    (1)

where H0 and H1 are referred to as the null and alternative hypotheses, respectively. According to the Neyman-Pearson lemma [8], the optimal classifier has the form of a log-likelihood ratio test

l(x) = log [ p(x; H1) / p(x; H0) ] ≷ l0,    (2)

that maximizes the probability of detection, p(H1; H1), for a fixed probability of false alarm, p(H1; H0), which is controlled by the threshold l0.

The log-likelihood ratio test in (2) can be written as

l(x) = log [ p(x; H1) / p(x; H0) ]
     = log c − log p(x; H0)
     = log c + (p/2) log 2π + (1/2) log |R| + (1/2)(x − µ)^t R^{−1} (x − µ) ≷ l0.    (3)

We can incorporate the constant terms in (3), together with l0, into a new threshold, η, such that the significance test in (3) is equivalent to the test

D_R(x) = √[ (x − µ)^t R^{−1} (x − µ) ] ≷ η.    (4)

The statistic D_R(x) is interpreted as the Mahalanobis distance between the sample x and the mean µ of the background distribution. If this distance exceeds the threshold η, we label x as an anomaly.

In practice, one does not know the true parameters µ and R of the background pixel distribution N(µ, R). In order to compute the statistic D_R(x) in (4), the practitioner first needs to compute good estimates µ̂ and R̂ of µ and R, respectively, from the available samples (pixels).
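For concreteness, here is a minimal numerical sketch of the statistic in (4); it assumes plug-in estimates µ̂ and R̂ have already been computed from background pixels, and all names and numbers are illustrative rather than taken from any reference implementation.

```python
import numpy as np

def mahalanobis_distance(x, mu_hat, R_hat):
    """Compute D_R(x) = sqrt((x - mu)^t R^{-1} (x - mu)) as in (4)."""
    d = x - mu_hat
    # Solve R_hat z = d instead of forming the explicit inverse (better conditioned).
    z = np.linalg.solve(R_hat, d)
    return np.sqrt(d @ z)

# Toy example: p = 5 bands, background estimated from n = 100 synthetic pixels.
rng = np.random.default_rng(0)
background = rng.normal(size=(100, 5))
mu_hat = background.mean(axis=0)
R_hat = np.cov(background, rowvar=False)

x = rng.normal(size=5)        # pixel under test
eta = 3.0                     # threshold, chosen here arbitrarily for illustration
print(mahalanobis_distance(x, mu_hat, R_hat) > eta)
```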

B. Sliding Window-based Detection

The RX detection algorithm [9], [10] uses a sliding window centered at the pixel x, as illustrated in Fig. 1.

The window pixels are used to compute the covariance estimate R̂ of the background. As argued in [2], the pixels closest to x, within the guard window, are left out of the estimation to avoid contaminating the estimate with potentially anomalous pixels. The dimension of the guard window is chosen according to the expected maximum size of an anomalous object. An interesting variation of the RX detector (not investigated here) uses a third window around x, larger than the guard window but smaller than the outer window, to estimate the mean µ [2]. The motivation is that a good estimate of the mean requires fewer pixels than a good estimate of the covariance.

The pixels within the outer window are used as the training pixels in the estimation of the covariance matrix R. The choice of the window size is a compromise between two factors: (i) the window should be small enough that it covers a homogeneous region of the background, so that it is accurately modeled by the multivariate Gaussian N(µ, R); (ii) the window should be large enough that the number of pixels within the outer window suffices to produce reliable estimates of the covariance R. At least p + 1 pixels are required for non-singular sample covariance estimates.
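The double-window geometry of Fig. 1 can be sketched as follows. This is our own illustrative implementation, not the authors' code; the function name and half-width parameters are hypothetical.

```python
import numpy as np

def outer_window_pixels(image, row, col, outer_hw, guard_hw):
    """Collect training pixels for the pixel at (row, col).

    image    : (H, W, p) hyperspectral cube
    outer_hw : half-width of the outer window
    guard_hw : half-width of the guard window (excluded from training)
    Returns an (n, p) array of background pixels from the outer window,
    with the guard window removed.
    """
    H, W, p = image.shape
    r0, r1 = max(0, row - outer_hw), min(H, row + outer_hw + 1)
    c0, c1 = max(0, col - outer_hw), min(W, col + outer_hw + 1)
    block = image[r0:r1, c0:c1, :]

    rr, cc = np.meshgrid(np.arange(r0, r1), np.arange(c0, c1), indexing="ij")
    in_guard = (np.abs(rr - row) <= guard_hw) & (np.abs(cc - col) <= guard_hw)
    return block[~in_guard]          # boolean indexing flattens to (n, p)

# Example: 31x31 outer window (half-width 15) and 7x7 guard window (half-width 3).
cube = np.random.default_rng(1).normal(size=(64, 64, 126))
train = outer_window_pixels(cube, row=32, col=32, outer_hw=15, guard_hw=3)
print(train.shape)   # (31*31 - 7*7, 126) = (912, 126)
```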

C. Covariance Estimation Methods

In this section, we discuss some of the methods used to estimate the covariance matrix R.

Fig. 1. Square sliding window used in the RX detection algorithm. The pixels in the outer window are used to compute the covariance estimate R̂ of the background surrounding the pixel x. The pixels within the inner window (referred to as the guard window) are not used in the covariance computation, to prevent potentially anomalous pixels from contaminating the estimate R̂.

1) Sample Covariance: Let X = [x1, · · · , xn] be the set of n i.i.d. p-dimensional Gaussian random vectors drawn from N(0, R). The sample covariance S is given by

S = (1/n) X X^t,

which is the unconstrained maximum likelihood estimate of R [8].

When n < p, the sample covariance S is singular, with rank n, and it overfits the data. As argued in [3], [2], in the case of hyperspectral data, it is usually desirable to have n ≥ 10p so that S is a reliable estimate of R.

But even when n is small and S is by itself unreliable, the sample covariance is still useful as a starting point for the regularized shrinkage estimates reviewed below as well as the SMT introduced in Section III.
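As a quick numerical illustration of this rank deficiency (synthetic data, not the Blindrad image; the shapes follow the X = [x1, · · · , xn] convention used above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 126                      # fewer samples than bands, as in a small window
X = rng.normal(size=(p, n))         # columns are the n pixel spectra
S = X @ X.T / n                     # sample covariance, p x p

print(np.linalg.matrix_rank(S))     # 50: rank is at most n, so S is singular
print(np.linalg.eigvalsh(S).min())  # ~0: S has (near-)zero eigenvalues and cannot be inverted
```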

2) Diagonal: Because it is the inverse of R that is used in (4), it is important that the estimate of R be full-rank. A simple way to obtain a full-rank estimate of R with a small number of samples n (especially when n < p) is to treat all the p dimensions as uncorrelated and simply estimate the variances for each of the p coordinates. This results in the estimator

D = diag(S),

which is generally of full rank and can be well estimated even with small n. However, D tends to underfit the data, since the assumption that the coordinates are uncorrelated is typically unrealistic.

3) Shrinkage: Shrinkage estimation is a very popular method of regularizing estimates of large covariance matrices [11], [12], [13]. It is based on the combination of the sample covariance matrix S, which overfits the data, with another estimator T (called the shrinkage target) that underfits the data:

R̂ = (1 − α)S + αT,    (5)

where α ∈ [0, 1]. The choice of the value of α that maximizes the likelihood of the estimate R̂ is typically made through a cross-validation procedure.

The most common variation of the shrinkage method [11], [12] uses σ²I as the shrinkage target, where σ² is the average variance across all the p dimensions and I is the p × p identity matrix. The covariance estimator is given by

R̂ = (1 − α)S + ασ²I.    (6)

A variation of (5) proposed by Hoffbeck and Landgrebe [13] uses D = diag(S) as the shrinkage target, resulting in the following shrinkage estimator:

R̂ = (1 − α)S + αD.    (7)

The authors of [13] also propose a computationally efficient leave-one-out cross-validation (LOOC) scheme to estimate α in (7).
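Below is a minimal sketch of the shrinkage estimator in (7), with α chosen by maximizing a held-out Gaussian log-likelihood. Note that this simple K-fold search stands in for the efficient leave-one-out scheme of [13]; the grid of α values, fold count, and function names are illustrative assumptions.

```python
import numpy as np

def gaussian_loglik(X, R):
    """Average zero-mean Gaussian log-likelihood of the rows of X under N(0, R)."""
    p = X.shape[1]
    _, logdet = np.linalg.slogdet(R)
    quad = np.einsum("ij,ji->i", X, np.linalg.solve(R, X.T))
    return -0.5 * (p * np.log(2 * np.pi) + logdet + quad).mean()

def shrinkage_diag(X, alphas=np.linspace(0.05, 0.95, 19), n_folds=5):
    """R_hat = (1 - alpha) S + alpha diag(S), with alpha selected by cross-validation."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(0).permutation(n), n_folds)
    scores = []
    for alpha in alphas:
        ll = 0.0
        for k in range(n_folds):
            test = X[folds[k]]
            train = np.delete(X, folds[k], axis=0)
            S = train.T @ train / len(train)
            R = (1 - alpha) * S + alpha * np.diag(np.diag(S))
            ll += gaussian_loglik(test, R)
        scores.append(ll)
    best = alphas[int(np.argmax(scores))]
    S = X.T @ X / n
    return (1 - best) * S + best * np.diag(np.diag(S)), best

# Example with n < p, where the plain sample covariance would be singular.
X = np.random.default_rng(3).normal(size=(80, 126))
R_hat, alpha = shrinkage_diag(X)
print(alpha, np.linalg.matrix_rank(R_hat))   # full rank, p = 126
```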

4) Quasilocal Covariance: This method, proposed by Caefer et al. [14], considers the eigen-decomposition of the covariance matrix R = EΛE^t and makes the observation that the eigenvalues in the matrix Λ are more likely to change across different image locations, while the eigenvectors in E remain mostly pointed in the same directions across the entire image.

This observation suggests that one can obtain a global estimate of the eigenvector matrix E using all the pixels in the image, and then adjust the eigenvalues in Λ locally by computing the variances independently in each direction using only pixels that are within the sliding window. Since the number of pixels in the entire image is large, we typically have n ≫ p, and so the sample covariance S provides a full-rank global estimate; its eigenvectors, Ê_global, can be used as the estimates of E across all positions of the sliding window. Finally, the estimate of the matrix Λ is computed locally at each position of the sliding window, by computing variances in each of the global eigenvector directions.

This approach results in the quasilocal estimator of covariance:

R̂ = Ê_global Λ̂_local Ê_global^t.
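An illustrative sketch of the quasilocal construction (global eigenvectors, locally re-estimated variances). The variable names are ours, and the global and local pixel sets here are synthetic stand-ins for the whole image and an outer window, respectively.

```python
import numpy as np

def quasilocal_covariance(global_pixels, local_pixels):
    """R_hat = E_global Lambda_local E_global^t.

    global_pixels : (N, p) pixels from the whole image (N >> p)
    local_pixels  : (n, p) pixels from the current outer window
    """
    S_global = np.cov(global_pixels, rowvar=False)
    _, E_global = np.linalg.eigh(S_global)          # columns are global eigenvectors

    # Project local pixels onto the global eigenvector directions and
    # re-estimate the variance independently along each direction.
    Y = (local_pixels - local_pixels.mean(axis=0)) @ E_global
    lam_local = Y.var(axis=0)

    return E_global @ np.diag(lam_local) @ E_global.T

rng = np.random.default_rng(4)
global_pixels = rng.normal(size=(5000, 126))
local_pixels = rng.normal(size=(60, 126))           # small window, n < p
R_hat = quasilocal_covariance(global_pixels, local_pixels)
print(np.linalg.matrix_rank(R_hat))                 # 126: full rank despite n < p
```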

III. THE SPARSE MATRIX TRANSFORM (SMT)

The Sparse Matrix Transform (SMT) [5], [6] can be used to provide full-rank estimates of the covariance matrix R used in the detection framework of Section II. The method decomposes the true covariance R into the product R = EΛE^t, where E is the orthonormal matrix containing the eigenvectors of R and Λ is a diagonal matrix containing the eigenvalues of R. The SMT then provides the estimates Ê and Λ̂, with the diagonal elements of Λ̂ being strictly positive.

A. SMT Covariance Estimation

Let the training set consist of n i.i.d. p-dimensional random vectors drawn from the multivariate Gaussian N(0, R) and organized into the data matrix X = [x1, · · · , xn]. The Gaussian likelihood of observing the data X is given by

l(X; R) = [ |R|^{−n/2} / (2π)^{np/2} ] exp{ −(n/2) trace(R^{−1} S) },    (8)

where S = (1/n) X X^t is the sample covariance, a sufficient statistic for the likelihood of the data X. The joint maximization of (8) with respect to E and Λ results in the maximum likelihood (ML) estimates

Ê = arg min_{E ∈ Ω_K} | diag(E^t S E) |,    (9)
Λ̂ = diag(Ê^t S Ê),    (10)

where Ω_K is the set of allowed orthonormal transforms.

If n > p and the set Ω_K includes all orthonormal transforms, then the solution to (9) and (10) is given by the sample covariance, i.e., Ê Λ̂ Ê^t = S. However, as discussed in Section II, when n < p the sample covariance S overfits the data and is a poor estimate of the true covariance R.

In order to regularize the covariance estimate, we impose the constraint that Ω_K be the set of sparse matrix transforms (SMTs) of order K. More specifically, we will assume that the eigen-transformation has the form

E_K = ∏_{k=1}^{K} G_k = G_1 · · · G_K ∈ Ω_K,    (11)

for a model order K. Each G_k is a Givens rotation [5] over some (i_k, j_k) coordinate pair by an angle θ_k,

G_k = I + Θ(i_k, j_k, θ_k),

where

[Θ]_{ij} = { cos(θ_k) − 1,  if i = j = i_k or i = j = j_k;
             sin(θ_k),       if i = i_k and j = j_k;
             −sin(θ_k),      if i = j_k and j = i_k;
             0,              otherwise, }    (12)

and K is the model order parameter.

The optimization of (9) is non-convex, so we use a greedy optimization approach to design each rotation, G_k, in sequence to minimize the cost [5]. Let S_{k−1} = G_{k−1}^t S_{k−2} G_{k−1}. At the kth step of the greedy optimization, we select the pair of coordinates (i_k, j_k) such that

(i_k, j_k) = arg max_{i,j}  (S_{k−1})_{ij}² / [ (S_{k−1})_{ii} (S_{k−1})_{jj} ],

i.e., the most correlated pair of coordinates, and choose the angle

θ_k = (1/2) tan^{−1}( −2(S_{k−1})_{i_k j_k} / [ (S_{k−1})_{i_k i_k} − (S_{k−1})_{j_k j_k} ] )

that completely decorrelates the i_k and j_k dimensions.

This greedy optimization procedure can be done quickly if a graphical constraint can be imposed on the data [15].

Finally, for an SMT of order K, we have the estimates

Ê_K = G_1 · · · G_K,    (13)
Λ̂_K = diag(Ê_K^t S Ê_K),    (14)

with the covariance estimate given by

R̂_SMT = Ê_K Λ̂_K Ê_K^t.    (15)

B. SMT Model Order

The model order parameter K can be estimated using cross-validation [5], [15], a Wishart criterion [6], or the minimum description length (MDL) approach derived in [6]. We used the MDL criterion for the experiments in this paper. According to the MDL criterion, we select the smallest value of K such that the following inequality is satisfied:

max_{ij}  [S_K]_{ij}² / ( [S_K]_{ii} [S_K]_{jj} )  ≤  1 − exp( −(log n + 5 log p) / n ),

where S_K = Ê_K^t S Ê_K.

It is often useful to express the order of the SMT as K = rp, where r is the average number of rotations per coordinate; r is typically very small (r < 5) for several previously studied datasets [5].
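To make the construction concrete, here is a minimal sketch of the greedy SMT covariance estimate of Section III-A combined with an MDL-style stopping rule. It is our own illustrative implementation, not the authors' code; in particular, the stopping threshold reflects our reading of the criterion above and the max_rotations cap (motivated by r < 5) should be treated as assumptions.

```python
import numpy as np

def smt_covariance(X, max_rotations=None):
    """Greedy SMT covariance estimate R_hat = E_K Lambda_K E_K^t, as in (13)-(15).

    X : (n, p) data matrix with rows as (zero-mean) pixel spectra.
    """
    n, p = X.shape
    S = X.T @ X / n
    E = np.eye(p)
    # MDL-style stopping level (an assumption; see Section III-B).
    threshold = 1.0 - np.exp(-(np.log(n) + 5 * np.log(p)) / n)
    if max_rotations is None:
        max_rotations = 5 * p            # safety cap, K = r*p with r < 5

    for _ in range(max_rotations):
        # Most correlated coordinate pair (ignore the diagonal).
        corr2 = S**2 / np.outer(np.diag(S), np.diag(S))
        np.fill_diagonal(corr2, 0.0)
        i, j = np.unravel_index(np.argmax(corr2), corr2.shape)
        if corr2[i, j] <= threshold:
            break

        # Givens rotation angle that decorrelates coordinates i and j.
        theta = 0.5 * np.arctan2(-2.0 * S[i, j], S[i, i] - S[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s

        # A practical implementation would update only rows/columns i and j;
        # the dense products below keep the sketch short and readable.
        S = G.T @ S @ G                  # S_k = G_k^t S_{k-1} G_k
        E = E @ G                        # E_K = G_1 ... G_K

    lam = np.diag(S)                     # Lambda_K = diag(E_K^t S E_K)
    return E @ np.diag(lam) @ E.T

# Example in the n < p regime, with strongly correlated synthetic bands.
rng = np.random.default_rng(5)
Z = rng.normal(size=(80, 10))
B = rng.normal(size=(10, 126))
X = Z @ B + 0.1 * rng.normal(size=(80, 126))
R_smt = smt_covariance(X)
print(np.linalg.matrix_rank(R_smt))      # 126: full-rank estimate even though n < p
```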

C. Shrinkage SMT

The SMT covariance estimate in (15) can be used as a shrinkage target, as an alternative to the ones described in Section II-C3, resulting in the following Shrinkage-SMT estimate:

R̂ = (1 − α)S + α R̂_SMT.

IV. ELLIPSOID MEAN LOG-VOLUME

In this section, we develop the Ellipsoid Mean Log-Volume, a novel metric to evaluate the accuracy of anomaly detection algorithms that make detection decisions based on a Mahalanobis statistic such as D_R in (4). Different versions of these detectors use different techniques to estimate the covariance, yielding different detection accuracies depending on how well the covariance estimate R̂ approximates the true background covariance R.

Traditionally, receiver operating characteristic (ROC) curves have been widely used to evaluate anomaly detectors. The ROC approach requires both samples labeled as typical and samples labeled as anomalous in order to estimate both the probability of detection and the probability of false alarm used in the ROC analysis.

Unfortunately, anomalies are rare events and it is often difficult to have enough data labeled as anomalous in order to estimate the probability of detection required in the ROC analysis.

The approach developed here seeks to characterize how well the estimates of the background model (i.e., µ̂ and R̂) fit the training (typical) pixel data, overcoming the limitation of the ROC analysis described above.

More specifically, we evaluate the volume of the hyper-ellipsoid within the region

(x − µ̂)^t R̂^{−1} (x − µ̂) ≤ η²,    (16)

where η controls the probability of false alarm, as described previously. This volume is given by the following expression:

V(R, η) = [ π^{p/2} √|R| / Γ(1 + p/2) ] η^p.    (17)

Smaller values of V(R, η) indicate smaller probabilities that an anomalous data point would fall within the hyper-ellipsoid region of (16). Based on this observation, the core idea in our approach is to use the value of V(R, η) as a proxy for the probability of missed detection. Therefore, for a fixed probability of false alarm, smaller values of V(R, η) indicate more accurate detection. Because the direct computation of V(R, η) tends to be numerically unstable, often leading to numerical overflow for large values of p, in practice we work with log V(R, η) as our measure of detection accuracy.
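In practice, log V(R̂, η) in (17) can be evaluated stably with a log-determinant and the log-gamma function; a minimal sketch follows (function name and example values are ours).

```python
import numpy as np
from scipy.special import gammaln

def ellipsoid_log_volume(R_hat, eta):
    """log V(R, eta) from (17), computed without forming |R| or eta**p directly."""
    p = R_hat.shape[0]
    _, logdet = np.linalg.slogdet(R_hat)
    return (0.5 * p * np.log(np.pi)      # log pi^{p/2}
            + 0.5 * logdet               # log sqrt(|R|)
            - gammaln(1 + 0.5 * p)       # -log Gamma(1 + p/2)
            + p * np.log(eta))           # log eta^p

# Example: p = 126 bands; the direct formula would overflow, the log form does not.
R_hat = np.diag(np.full(126, 2.0))
print(ellipsoid_log_volume(R_hat, eta=15.0))
```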

This approach has been used before in global anomaly detection [6], [16], [17], but we are extending it here to local, sliding window-based anomaly detection. These detectors produce a different local estimate of the background covariance at each location of the sliding window across the image. We suggest measuring detection accuracy in terms of the expected log-volume of the hyper-ellipsoid, E[log V(R̂, η)], across the whole hyperspectral image, where each estimate R̂ is computed for each position of the sliding window using local training data pixels.

V. EXPERIMENTS

All experiments in this section were performed using the Blindrad hyperspectral dataset [18], a HyMap image of Cooke City, MT, of 800 × 280 pixels, each with 126 hyperspectral bands. Fig. 2 displays an RGB rendering of this dataset.

In all experiments, a sliding window like the one described in Fig. 1 moves across the image and, at each position, it estimates the covariance R from the samples in the outer window using the several covariance estimation methods previously discussed. This covariance is used to compute D_R in (4) for each pixel within the guard window. The radius η is adjusted globally so that a fraction of the points corresponding to a fixed probability of false alarm is left outside the ellipsoid region. Finally, we compute the expected value E[log V(R̂, η)] over all window positions and take that as the measure of anomaly detection performance.
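The evaluation loop just described can be sketched schematically as follows. This is a self-contained toy version of our understanding of the procedure, not the code used for the reported results: the per-window pixels are synthetic stand-ins, and a fixed shrinkage weight replaces the cross-validated estimators.

```python
import numpy as np
from scipy.special import gammaln

def log_volume(R, eta):
    p = R.shape[0]
    return (0.5 * p * np.log(np.pi) + 0.5 * np.linalg.slogdet(R)[1]
            - gammaln(1 + 0.5 * p) + p * np.log(eta))

rng = np.random.default_rng(6)
p, n_windows, n_train = 126, 200, 80        # synthetic stand-in for the sliding windows
scores, models = [], []

for _ in range(n_windows):
    train = rng.normal(size=(n_train, p))   # outer-window pixels (guard window removed)
    center = rng.normal(size=p)             # pixel under test
    mu = train.mean(axis=0)
    S = np.cov(train, rowvar=False)
    R = 0.5 * S + 0.5 * np.diag(np.diag(S)) # any full-rank estimator, e.g. shrinkage (7)
    d = center - mu
    scores.append(np.sqrt(d @ np.linalg.solve(R, d)))
    models.append(R)

p_fa = 1e-3
eta = np.quantile(scores, 1.0 - p_fa)       # global radius leaving out a fraction p_fa
print(np.mean([log_volume(R, eta) for R in models]))   # mean ellipsoid log-volume
```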

Fig. 3 shows the coverage plots of the expected ellipsoid log-volume vs. the probability of false alarm for different window sizes. The hyperspectral bands of the dataset were rotated to the quasilocal coordinate system by the matrix Ê_global^t (see Sec. II-C4). These “ROC-like” curves suggest that the regularized methods are more accurate, especially when small window sizes are used. When large window sizes are used, the performance of the unregularized sample covariance is similar to that of the regularized methods.

Fig. 4 compares the performance of several detectors in both the original and the quasilocal coordinate systems at two different fixed false alarm rates. The diagonal covariance estimate performs poorly in the original coordinates (Figs. 4(a) and 4(b)), but remains a competitive method in the quasilocal coordinates (Figs. 4(c) and 4(d)); in fact, the diagonal estimator in quasilocal coordinates is just the quasilocal covariance estimator suggested by Caefer et al. [14]. The Shrinkage-SMT estimates are among the best methods in both spaces, though in the quasilocal space, Shrinkage-Diagonal detectors perform just as well. When the window size used to estimate the covariance matrix grows large, we observe an increase in the expected ellipsoid log-volume, i.e., a degradation of the detection accuracy for all the methods. This degradation is due to the distribution of the background pixels being non-stationary across the image; covariance estimates computed from large windows therefore tend to be poor. When small window sizes are used, the training pixels are more likely to come from a homogeneous region with a Gaussian distribution. Nevertheless, this is a regime where poor estimates of the covariance are due to the limited number of training samples, as observed in the curves for detectors using the sample covariance. On the other hand, the results suggest that the regularized methods perform best with smaller window sizes.


Fig. 2. RGB rendering of the 800 × 280 pixel Blindrad hyperspectral dataset, captured using a HyMap sensor with 126 channels.

Fig. 3. Coverage plots of the expected ellipsoid log-volume vs. probability of false alarm for various outer window sizes (13×13, 15×15, 21×21, and 31×31), comparing the Diagonal, Sample, Shrk-D, Shrk-I, SMT, and Shrk-SMT covariance estimators on the Blindrad data in quasilocal coordinates.


Fig. 4. Expected ellipsoid log-volume vs. the size of the sliding window at fixed probabilities of false alarm (P_fa = 0.0001 and P_fa = 0.001) in both the original, (a) and (b), and the quasilocal, (c) and (d), coordinate systems, for the Diagonal, Sample, Shrk-D, Shrk-I, SMT, and Shrk-SMT covariance estimators.

Finally, the practitioner can use the curves in Fig. 4 as a criterion to select the window size that produces the most accurate detector for a chosen covariance estimation method.

VI. CONCLUSIONS

In this paper we have shown how to use the expected ellipsoid log-volume to measure local detector accuracy. This measure was used to compare different detectors as well as to provide a criterion for selecting the optimal size of the sliding window. We have also shown how to use the SMT to produce regularized covariance estimates for detection. While Shrinkage-SMT often produces good results, our results show that Shrinkage-Diagonal performs just as well when combined with the quasilocal method proposed in [14]. In the future, we plan to address how to push the covariance methods to work with even smaller window sizes.

REFERENCES

[1] D. W. J. Stein, S. G. Beaven, L. E. Hoff, E. M. Winter, A. P. Schaum, and A. D. Stocker, “Anomaly detection from hyperspectral imagery,” IEEE Signal Processing Magazine, vol. 19, pp. 58–69, Jan. 2002.
[2] S. Matteoli, M. Diani, and G. Corsini, “A tutorial overview of anomaly detection in hyperspectral images,” IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 5–28, Jul. 2010.
[3] I. S. Reed, J. D. Mallett, and L. E. Brennan, “Rapid convergence rate in adaptive arrays,” IEEE Trans. Aerospace and Electronic Systems, vol. 10, pp. 853–863, 1974.
[4] A. Ben-David and C. E. Davidson, “Estimation of hyperspectral covariance matrices,” in Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Jul. 2011, pp. 4324–4327.
[5] G. Cao, L. R. Bachega, and C. A. Bouman, “The sparse matrix transform for covariance estimation and analysis of high dimensional signals,” IEEE Trans. Image Processing, vol. 20, no. 3, pp. 625–640, 2011.
[6] J. Theiler, G. Cao, L. R. Bachega, and C. A. Bouman, “Sparse matrix transform for hyperspectral image processing,” IEEE J. Selected Topics in Signal Processing, vol. 5, no. 1, pp. 424–437, 2011.
[7] I. Steinwart, D. Hush, and C. Scovel, “A classification framework for anomaly detection,” J. Machine Learning Research, vol. 6, pp. 211–232, 2005.
[8] S. M. Kay, Fundamentals of Statistical Signal Processing: Detection Theory. New Jersey: Prentice Hall, 1998, vol. II.
[9] J. Y. Chen and I. Reed, “A detection algorithm for optical targets in clutter,” IEEE Trans. Aerospace and Electronic Systems, vol. AES-23, no. 1, pp. 46–59, Jan. 1987.
[10] I. S. Reed and X. Yu, “Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,” IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 38, pp. 1760–1770, 1990.
[11] J. H. Friedman, “Regularized discriminant analysis,” J. American Statistical Association, vol. 84, no. 405, pp. 165–175, 1989.
[12] O. Ledoit and M. Wolf, “A well-conditioned estimator for large dimensional covariance matrices,” J. Multivariate Analysis, vol. 88, pp. 365–411, Feb. 2004.
[13] J. P. Hoffbeck and D. A. Landgrebe, “Covariance matrix estimation and classification with limited training data,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 763–767, 1996.
[14] C. E. Caefer, J. Silverman, O. Orthal, D. Antonelli, Y. Sharoni, and S. R. Rotman, “Improved covariance matrices for point target detection in hyperspectral data,” Optical Engineering, vol. 47, no. 7, p. 076402, 2008.
[15] L. R. Bachega, G. Cao, and C. A. Bouman, “Fast signal analysis and decomposition on graphs using the sparse matrix transform,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010, pp. 5426–5429.
[16] J. Theiler and D. R. Hush, “Statistics for characterizing data on the periphery,” in Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2010, pp. 4764–4767.
[17] J. Theiler, “Ellipsoid-simplex hybrid for hyperspectral anomaly detection,” in Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2011.
[18] D. Snyder, J. Kerekes, I. Fairweather, R. Crabtree, J. Shive, and S. Hager, “Development of a web-based application to evaluate target finding algorithms,” in Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2008, vol. 2, pp. 915–918.
