
Automatic red-eye effect removal using combined intensity and colour information

T. Ali*ᵃ, S. Khattakᵃ and I. Kimᵇ

ᵃDepartment of Electrical Engineering, COMSATS Institute of Information Technology, Abbottabad, Pakistan

ᵇDepartment of Communication Engineering, Myongji University, Korea

Abstract: In this paper, we describe a robust and adaptive method to automatically detect and correct the red-eye effect in digital photographs. It improves the existing iris pair detection approaches by introducing a novel process of tuning eye candidate points, followed by robust iris pair selection among the tuned candidates. Finally, a novel and highly effective red-eye correction process is applied to the detected iris regions. The red-eye correction scheme is adaptive to the severity of redness and results in a high correction rate and improved visual appearance. The performance of the proposed method is compared with two existing automatic red-eye correction methods and exhibits considerable performance gains. Additionally, the eye detection part of the algorithm is evaluated separately on three well-known image databases. The results show that the method is extremely robust in the detection and correction of the red-eye artefact. The proposed method is designed to correct images without human intervention, as the entire process from face detection to red-eye correction is fully automated.

Keywords: iris pair, grey-scale image, iris region

The MS was accepted for publication on 23 June 2010.

* Corresponding author: Tauseef Ali, Department of Electrical Engineering, COMSATS Institute of Information Technology, Abbottabad, Pakistan; email: tauseefali@ciit.net.pk

1 INTRODUCTION

The red-eye artefact is one of the most common problems in digital photographs. It is caused by the flash used to illuminate the subject under insufficient natural light. The iris contracts and expands in response to the incident light in order to admit an appropriate amount of light into the eye. When a flash is used under low illumination, the pupil is dilated and more light enters the eye, where it is reflected off the blood-rich retina. If the angle between the flash and the lens is small, the eye appears red in the photograph. This phenomenon is called the red-eye artefact in digital photography.

Camera manufacturers utilize hardware-based approaches to solve the red-eye problem. In one such approach, the distance between the flash and the optical axis of the camera lens is increased.1,2 Another hardware-based approach, commonly used nowadays and described by Hara et al.3 and Teremy4, uses pre-flashes: these cause the pupils of the subject to contract before the final flash fires and the photograph is captured. However, this scheme not only reduces battery life but also fails to completely eliminate the red-eye problem.

The second category comprises software-based approaches, e.g. Adobe Photoshop. These tools, however, only allow manual correction of red-eye and are not easy to use.

Recently, several methods have been proposed for automatic red-eye effect removal. In Ref. 5, the author proposes a method in which skin-like regions are first located using thresholding and then red eyes are detected within these regions. Huang et al.6 propose a method in which a series of heuristic filters are used to find candidate regions, which are then classified into red eyes and other regions. Willamowski and Csurka7 propose forming a pixel-wise probability map based on colour measures; the eyes are then detected among the high-probability regions. Miao and Sim8 propose an approach in which the red-eye regions are detected based on the difference image between flash and non-flash images taken by a digital camera. In the method of Zhang et al.,9 red pixels are first grouped into regions; colour, shade and highlight are then used as features, and the regions are classified using the AdaBoost algorithm. Luo et al.10 also use an AdaBoost approach to classify regions that are first detected by square concentric templates.

There are several excellent approaches that use face detection as the first step towards automatic red-eye detection and correction. Gaubatz and Ulichney11 first employ multi-stage classifiers to detect faces in the image; several refining masks are then computed over the facial region to detect red-eye pixels. In Ref. 12, the authors present an automatic red-eye detection method which detects candidate regions but selects only those located within face areas. A red-eye outline detector is also utilized in this approach, and the final decision is made using a boosting algorithm. Volken et al.13 propose a red-eye correction method based on the shape of the eye. In their approach, additional information for red-eye detection comes from recognition of the white colour of the sclera and the colour of the surrounding skin region.

In this paper, we present a robust and accurate method to correct the red-eye artefact automatically in an input image. The existing iris pair detection approaches are enhanced by introducing a novel process of tuning eye candidate points and selecting the iris pair among the tuned candidates. Hence, the accuracy and robustness of the detection algorithm are considerably improved. Finally, a novel and adaptive red-eye correction process is applied to the detected iris regions. Here, the term 'adaptive' means that the desaturation of the red colour is proportional to the severity of redness. The only restriction on the input image is that the subject should have an almost frontal face, with a maximum head rotation not exceeding 30°. The proposed method produces perceptually good results and is highly robust to factors such as illumination conditions, size of the red-eye effect and severity of redness.

In Fig. 1, the steps of the proposed method are illustrated using a test image. First, the face is detected in the input image. It is then converted to a grey-scale image in a way that facilitates the detection of the iris pair and their corresponding radii. In the next step, the iris pair and the corresponding radii are found and used to delineate the iris regions. Finally, the iris regions are processed to desaturate the redness inside the irises.

The rest of the paper is organized as follows. Section 2 briefly explains the face detection step. Section 3 describes the proposed scheme for converting the RGB image into grey-scale for eye detection. In Section 4, we present how to detect the iris pair and the corresponding radius of each iris. Section 5 describes the red-eye correction process. Section 6 presents detailed experimental results, and Section 7 concludes the paper.

2 FACE DETECTION

As a first step, the face in the input image is detected in order to restrict the search area to the face region. This saves search time and improves accuracy. The face detection is based on the popular method of Viola and Jones,14 which obtains a robust face classifier through supervised AdaBoost learning. Given a sample set of training data in the form {x_i, y_i}, the AdaBoost algorithm selects a set of weak classifiers {h_j(x)} from a set of Haar-like rectangle features and combines them into a strong classifier g(x) defined as

g(x) = \begin{cases} 1, & \sum_{k=1}^{k_{\max}} \alpha_k h_k(x) \ge \theta \\ 0, & \text{otherwise} \end{cases}   (1)

where θ is the threshold that is adjusted to meet the detection rate goal and the α_k are the weighting factors. All of the face images used for training are first converted to grey-scale and normalized to an appropriate size (e.g. 20×20 pixels). A detailed description of this face detection process can be found in Ref. 14.
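For illustration, the sketch below shows what this detection step can look like in practice. OpenCV's pretrained frontal-face Haar cascade and the parameters shown are stand-ins for the classifier trained in this work, not the authors' actual model.

```python
import cv2

# Minimal sketch of the Viola-Jones face detection step. The pretrained
# cascade shipped with OpenCV stands in for the classifier trained by the
# authors on 20x20 grey-scale faces; the parameters below are illustrative.
def detect_faces(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is an (x, y, w, h) rectangle in pixel coordinates.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```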

3 GREY-SCALE CONVERSION

A human eye has a black iris surrounded by the white sclera and skin. This unique characteristic of the eye is used to locate it within the face. However, if the original RGB image is converted to grey-scale using the conventional technique, it loses hue and saturation information and red-eye pixels appear white. This would make it difficult to detect the centres of the eyes. To avoid this problem, we convert the colour RGB image to grey-scale in such a way that the intensities of red pixels in the grey-scale image are decreased in proportion to their redness in the RGB image. The following equation is proposed to convert the RGB image into grey-scale

Gray(x,y) = \frac{G(x,y) \, B(x,y)}{R(x,y)}   (2)

where G(x,y), B(x,y) and R(x,y) are the green, blue and red channel values in the colour RGB image, respectively, and Gray(x,y) is the resultant grey-scale image, as shown in Fig. 1c, where (x,y) indicates the location of the pixel. This conversion facilitates eye detection in the next step and can be considered a pre-processing step for automatic eye detection.
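As an illustration, a minimal sketch of this conversion in Python, assuming a floating-point RGB array with channels in [0, 1]; the epsilon guard and the final clipping are our additions to keep the output well defined.

```python
import numpy as np

# Sketch of the proposed grey-scale conversion (equation (2)): intensities
# are attenuated in proportion to a pixel's redness, so red-eye pixels stay
# dark instead of turning white as in conventional grey-scale conversion.
def redness_aware_gray(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = g * b / np.maximum(r, 1e-6)  # guard against division by zero
    return np.clip(gray, 0.0, 1.0)
```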

4 IRIS PAIR DETECTION

The performance of the proposed red-eye correction method depends greatly on the iris pair detection. Figure 2 shows the different steps of iris pair detection: eye candidate detection, tuning of the candidate points and, finally, iris pair selection. The input is the grey-scale image obtained in Section 3, as shown in Fig. 2a.

4.1 Eye candidate detection

The AdaBoost algorithm described in Section 2 is used to detect several candidate points for eyes. An eye candidate detector is built by changing the training data and increasing the false-positive rate of the AdaBoost algorithm. A total of 7000 eye samples are used for training, with the eye centre at the centre of each image, resized to 16×8 pixels. Because the face region has already been detected at this stage, the negative samples are taken only from face images. By setting a low threshold in equation (1), more false positives are accepted. By properly adjusting the training parameters, the eye candidate points are restricted to a small number (e.g. 15). Figure 2b shows the candidate points obtained on a test image.

4.2 Tuning candidate points

The candidate points generated in Section 4.1 contain two points which represent the eyes, but these candidate points are not necessarily at the centres of the irises. In this step, the separability filter proposed by Fukui and Yamaguchi15 is utilized in an efficient and novel way to fine-tune the positions of the candidate points within a small neighbourhood. This enables us to move the two candidate points representing the iris pair to the centres of the irises. Considering a neighbourhood of size m×n, we have mn new candidate points around each original candidate point. Using the template shown in Fig. 3, the separability value g is computed for each point in the neighbourhood by the following equation

g = \frac{B}{A}, \qquad B = \sum_{k=1}^{2} n_k \left( \bar{P}_k - \bar{P}_m \right)^2, \qquad A = \sum_{i=1}^{N} \left( I(x_i, y_i) - \bar{P}_m \right)^2   (3)

where n_k and \bar{P}_k are the number of pixels and the average intensity in region R_k, respectively (Fig. 3), \bar{P}_m is the average intensity of the N pixels in the union of R_1 and R_2, and I(x_i, y_i) is the intensity of pixel (x_i, y_i) in the union of R_1 and R_2.

Using equation (3) and the separability filter shown in Fig. 3, the separability value g is computed for each point in the neighbourhood. Since the exact radius of the iris is not known, we vary the radius r of the template such that r_L ≤ r ≤ r_U. When the radius r of the template matches the radius of the iris and the template is exactly centred on the iris, the separability value is maximal. In our experiments we used r_L = 3 and r_U = 7. This range depends on the resolution of the face region: the resolution of the face images obtained in our face detection step varies from 100×100 to 200×200 pixels. The point in the neighbourhood which gives maximum separability is taken as the new candidate point.

The above process of selecting a new candidate from its neighbourhood is repeated for each eye candidate. We call the new candidate point a tuned candidate point. In our experiments, we tested several neighbourhood sizes and concluded that a 9×9 neighbourhood is the most appropriate. Each new candidate point selected from its neighbourhood has an associated separability value g and a corresponding optimal radius value r_opt, where r_L ≤ r_opt ≤ r_U. Figure 2c shows the tuned candidate points.
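A minimal sketch of the separability computation at a single candidate point is given below, assuming a 2D grey-scale array. The template geometry (inner disc R1 of radius r, ring R2 extending to twice that radius) is our reading of Fig. 3 and may differ from the authors' exact template.

```python
import numpy as np

# Separability filter of equation (3) evaluated at candidate (cx, cy)
# with template radius r: between-region variance B over total variance A.
def separability(img, cx, cy, r):
    ys, xs = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    r1 = d2 <= r ** 2                           # inner disc R1
    r2 = (d2 > r ** 2) & (d2 <= (2 * r) ** 2)   # surrounding ring R2 (assumed extent)
    union = img[r1 | r2]
    pm = union.mean()                           # P̄_m over R1 ∪ R2
    b = sum(m.sum() * (img[m].mean() - pm) ** 2 for m in (r1, r2))
    a = ((union - pm) ** 2).sum()
    return b / a if a > 0 else 0.0
```

In the tuning step, this value would be maximized over the 9×9 neighbourhood and over r in [r_L, r_U] to obtain the tuned candidate point and its r_opt.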

4.3 Iris pair selection

Three metrics are combined to measure the fitness of each tuned candidate point as an iris. The first metric is based on the separability values determined in the previous section. The other two metrics, based on a mean crossing function and a convolution template, are proposed in this section.

4.3.1 Mean crossing function

A mean crossing function measures the intensity transitions from a low level to a high level and vice versa at any point. It is evaluated by forming a rectangular subregion around each tuned candidate point. The size of the subregion is depicted in Fig. 4, where r_opt is the radius of the tuned candidate point as determined in Section 4.2. This subregion is scanned horizontally, and the mean crossing function for pixel (x,y) is computed as

mC(x,y) = \begin{cases} 1, & \left| I(x,y) - I(x+1,y) \right| \ge 2K \\ 0, & \text{otherwise} \end{cases}   (4)

where K is a constant found empirically and m is the mean of the subregion. The horizontal mean crossing value for the subregion is determined as

mC_{\text{subregion}} = \sum_{x=1}^{M} \sum_{y=1}^{N} mC(x,y)   (5)

where M and N are the horizontal and vertical sizes of the subregion, respectively.

Similarly, the vertical mean crossing value for the subregion is evaluated by scanning the subregion vertically. Finally, the mean crossing value for the entire subregion is evaluated by adding its horizontal and vertical mean crossing values.
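A minimal sketch of equations (4) and (5) over one subregion, assuming a float grey-scale array; the value of K shown is an arbitrary placeholder for the empirically found constant.

```python
import numpy as np

# Mean crossing value of a rectangular subregion around a tuned candidate
# point: count of strong horizontal transitions plus strong vertical ones.
def mean_crossing_value(sub, K=0.1):
    horiz = (np.abs(np.diff(sub, axis=1)) >= 2 * K).sum()  # equation (4)/(5)
    vert = (np.abs(np.diff(sub, axis=0)) >= 2 * K).sum()   # vertical scan
    return horiz + vert
```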

3 An eye template (R1 is the inside region of the smaller circle and R2 is the region between the two concentric circles)


4.3.2 Convolution template

In this subsection, a circular convolution template is used to measure the correspondence between the iris and the region around a candidate point (Fig. 5). The radius of the template is equal to the radius associated with the tuned candidate point as determined in Section 4.2. As a first step, an edge image of the subregion around the tuned candidate point is computed, where the size of the subregion is equal to that of the template. This subregion and the template are then convolved by placing the centre of the template on the tuned candidate point. The process is repeated for each of the tuned candidate points. The resultant signal from the convolution is summed to obtain a single value.
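The sketch below illustrates one plausible reading of this metric; the gradient-magnitude edge image and the one-pixel-wide ring template are our assumptions, as the exact edge detector is not specified here.

```python
import numpy as np

# Convolution metric (Section 4.3.2): correlate an edge image of the
# subregion with a circular ring template of radius r centred on the
# tuned candidate point. Assumes the window lies fully inside the image.
def convolution_metric(gray, cx, cy, r):
    size = 2 * r + 1
    sub = gray[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    gy, gx = np.gradient(sub)
    edges = np.hypot(gx, gy)                      # edge-magnitude image
    ys, xs = np.ogrid[:size, :size]
    ring = np.abs(np.hypot(xs - r, ys - r) - r) <= 0.5  # circular template
    # Template-weighted edge response summed to a single value.
    return float((edges * ring).sum())
```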

Now, we define the fitness F of an iris candidate C_j by combining the three metrics evaluated in Sections 4.2 and 4.3 as

F(C_j) = \frac{mC(C_j)}{\sum_{i=1}^{L} mC(C_i)} + \frac{Conv(C_j)}{\sum_{i=1}^{L} Conv(C_i)} + \frac{g(C_j)}{\sum_{i=1}^{L} g(C_i)}   (6)

where L is the total number of candidate points, mC(C_j) and Conv(C_j) are the mean crossing value and the convolution result for candidate C_j, respectively, and g(C_j) is the separability value for candidate C_j.

Finally, the candidate pair with the maximum fitness is taken as the iris pair according to the following equation

F_{\text{Pair}}(C_i, C_j) = F(C_i) + F(C_j)   (7)
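In sketch form, equations (6) and (7) amount to normalizing each metric over the L candidates and picking the two highest-fitness candidates; any additional geometric constraints on the pair are our assumption and are omitted here.

```python
import numpy as np

# Fitness combination of equation (6) and pair selection of equation (7).
# `mc`, `conv` and `sep` are length-L arrays of per-candidate metric values.
def select_iris_pair(mc, conv, sep):
    f = mc / mc.sum() + conv / conv.sum() + sep / sep.sum()
    # The pair fitness F(C_i) + F(C_j) is maximized by the two
    # highest-fitness candidates.
    i, j = np.argsort(f)[-2:]
    return int(i), int(j)
```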

5 RED-EYE CORRECTION

In Section 4.2, we determined the radius of each tuned candidate point. Once the iris pair is selected, we use the corresponding radius value of each iris and its centre to determine the iris region. These regions are shown in Fig. 6c for a few sample images. The correction process is applied only to these regions, in the YCbCr colour space. First, the found iris region is converted to a monochrome image in which pixels are highlighted in proportion to their redness in the RGB image. In Ref. 5, the author uses the following expression to convert an RGB image to monochrome in order to highlight red pixels

T(x,y) = \frac{\left\{ R(x,y) - \max\left[ G(x,y), B(x,y) \right] \right\}^2}{R(x,y)}   (8)

where T(x,y) is the monochrome image. The redness of a pixel in the colour RGB image is proportional to the brightness in the monochrome image T(x,y). In Ref. 5, this conversion is applied to the whole image to detect red areas. In our approach, however, we utilize this technique only within the already-found iris region. The objective is to obtain a numerical value that represents the severity of redness in order to avoid 'hard fixing' of the red-eye effect inside the iris region. Using the above procedure, the monochrome image T(x,y) of each iris region is found.
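A minimal sketch of this redness map over a detected iris crop, assuming a float RGB array in [0, 1]; clamping negative values and clipping T into [0, 1] are our additions, since the correction step below treats T as a proportion.

```python
import numpy as np

# Redness map T(x, y) in the spirit of Ref. 5 (equation (8)), applied
# only to the detected iris region.
def redness_map(iris_rgb):
    r = iris_rgb[..., 0]
    gb_max = iris_rgb[..., 1:].max(axis=-1)
    t = np.maximum(r - gb_max, 0.0) ** 2 / np.maximum(r, 1e-6)
    return np.clip(t, 0.0, 1.0)
```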

Now, in the YCbCr colour space, the monochrome value of each pixel in the iris region is used to correct its redness level as

Cr(x,y)_{\text{Corrected}} = \left[ 1 - T(x,y) \right] Cr(x,y)_{\text{Original}}   (9)

where Cr(x,y) and T(x,y) are the red chrominance value and the monochrome value of pixel (x,y), respectively. The Cr channel is reduced in proportion to the value of T(x,y). This correction is adaptive and does not exhibit hard correction boundaries. The value of T is the proportional brightness in the range [0, 1] for a pixel in the colour RGB image. Given a severely affected red eye, the corresponding value of T will be close to 1 (e.g. 0.8 or 0.9), resulting in a greater reduction of the Cr channel. On the other hand, for a pixel that is only lightly affected, the corresponding value of T will be close to 0 (e.g. 0.1 or 0.2), resulting in a smaller reduction of the Cr channel. When there is no red-eye artefact, the value of T will ideally be 0 for pixels in the iris region, resulting in no change to the colour of the iris.

5 Convolution template

6 Examples of red-eye correction by the proposed method: (a) test image of the red-eye region; (b) detected iris centre; (c) iris region detected using iris centre and iris radius; (d) corrected red-eye region
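A minimal sketch of the adaptive correction, assuming an 8-bit BGR iris crop and the redness map T from equation (8). Centring the attenuation of Cr around the neutral chrominance value 128 used by 8-bit YCrCb is our assumption, as equation (9) is written for a generic Cr scale.

```python
import numpy as np
import cv2

# Adaptive correction of equation (9): the Cr channel of each iris-region
# pixel is attenuated in proportion to its redness T, so heavily red pixels
# are desaturated strongly and unaffected pixels are left alone.
def correct_red_eye(iris_bgr, t):
    ycrcb = cv2.cvtColor(iris_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    # Scale the chrominance deviation from neutral (128), not the raw value
    # (our assumption; the paper applies (1 - T) to Cr directly).
    ycrcb[..., 1] = 128 + (1.0 - t) * (ycrcb[..., 1] - 128)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```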

6 EXPERIMENTAL RESULTS

We tested our approach on 200 images with subjects of different poses, ethnicities and facial expressions. The image database was purposely designed to include red-eye artefacts of different sizes and severity levels. Some images in the database have a red-eye artefact covering only two or three pixels, while in others the whole iris is affected. Figure 6 shows some examples of the correction procedure. As can be seen in Fig. 6d, the corrected red eye has an improved visual appearance and a close-to-natural colour for the subject.

6.1 Quantitative analysis of proposed red-eye correction scheme

We compare the efficiency of the proposed algorithm with two available automatic methods: Hewlett-Packard RedBot16 and STOIK RedEye AutoFix.17 Table 1 shows the quantitative results of each method when tested on the same 200 images. As can be seen in Table 1, our proposed algorithm performs better than these existing methods.

The performance of the red-eye correction algorithm is highly dependent on the robustness and accuracy of automatic eye detection in the input image. For this reason, we also present results for the automatic eye detection stage of our proposed algorithm.

Two criteria are used for the evaluation of an automatic eye detection algorithm. The first is the detection rate, which refers to the fraction of the total number of images for which both eyes are correctly detected. The second is the localisation accuracy, which refers to the disparity between the manually marked eye position and the automatically detected eye position. Usually the larger of the two eye disparities in a face image is adopted as the accuracy measure of eye detection. Jesorsky et al.18 proposed the following relative error to judge the quality of eye detection

d_{\text{eye}} = \frac{\max\left( \| C_l - \tilde{C}_l \|, \| C_r - \tilde{C}_r \| \right)}{\| C_l - C_r \|}   (10)

where C and \tilde{C} are the manual and automatic eye centres, respectively, and the subscripts l and r denote the left and right eye. Obviously, this metric does not depend on image resolution. For most applications, d_eye < 0.25 is considered correct eye localisation (to claim eye detection). This precision roughly corresponds to a distance smaller than the eye width. However, this accuracy level may not be sufficient when the localized positions are used for the initialisation of subsequent techniques such as red-eye correction.

In our experiments, we use d_eye < 0.125 as the criterion for correct iris localisation, as also adopted by Song et al.19 Because the radius of an iris, denoted by R, is about one-quarter of an eye width, our criterion corresponds to

\max\left( \| C_l - \tilde{C}_l \|, \| C_r - \tilde{C}_r \| \right) < R   (11)

The criterion in equation (11) also means that if both the detected left and right eye positions hit the irises, the eye detection is considered a success. This standard is very close to that of Kawaguchi and Rizon.20
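For concreteness, the relative error of equation (10) and the acceptance test of equation (11) can be computed as follows, where points are (x, y) coordinates.

```python
import numpy as np

# Relative eye localisation error d_eye (equation (10)): worst per-eye
# error normalized by the inter-ocular distance of the manual centres.
def d_eye(manual_l, manual_r, auto_l, auto_r):
    err_l = np.linalg.norm(np.subtract(manual_l, auto_l))
    err_r = np.linalg.norm(np.subtract(manual_r, auto_r))
    return max(err_l, err_r) / np.linalg.norm(np.subtract(manual_l, manual_r))

# Detection counts as correct iris localisation when d_eye < 0.125,
# i.e. both detected centres fall within the iris radius R (equation (11)).
```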

To evaluate the performance of the eye detection algorithm, we use three publicly available databases: Bern,21 Yale22 and BioID.23

Table 1 Comparison of the proposed method with other automatic methods using 200 test images

Result/method        No red-eye corrected   One red-eye corrected   False positives   Both red-eyes corrected
Proposed algorithm   31                     8                       16                161
HP                   35                     48                      32                127


In Fig. 7, we show some of the Bern face images for which the proposed eye detection algorithm correctly locates the irises of both eyes.

Figure 8 shows the single face image, out of 150, for which the eye detection algorithm cannot correctly detect the iris pair.

Table 2 shows the performance of the proposed method when tested on Bern, Yale and BioID databases.

As can be seen from Table 2, the method is extremely robust in the automatic eye detection stage. Note that Yale and BioID are very complex databases: they include images with closed eyes, partially closed eyes, and subjects wearing glasses. Images of different ethnicities and with a variety of facial expressions, such as anger, smiling and winking, exist in these databases. For red-eye correction, it is reasonable to assume that the eyes in the input image are open or partially open. The eye detection rates obtained when closed-eye images are excluded from these databases are also shown in Table 2.

6.2 Discussion on quality of correction

The quality of red-eye correction is very subjective. The available literature on automatic red-eye correction provides little or no discussion of how the proposed schemes perform in terms of correction quality. To evaluate our proposed method in terms of quality, we counted the number of images automatically corrected by the algorithm that do not require further manual editing to achieve a better result. For this analysis, third-party observers were asked to evaluate the results. In 92% of the cases, the result could not be improved visually by manual editing; in fact, manual editing of the algorithmic result could sometimes degrade the visual appearance. In Fig. 9, we show a few red-eye correction results of the proposed method along with the results of HP RedBot and STOIK RedEye AutoFix. The important conclusions that can be drawn from these results are as follows:

- As our proposed method is based on detecting the iris pair in the first stage, the error of correcting one red eye while leaving the other is much lower than in the other methods (see subjects 1, 3 and 7). Some techniques, such as Ref. 5, try to find red colour directly in the image and then refine those red-colour areas based on certain conditions to find the red-eye artefact. However, this kind of approach involves the risk of missing one eye or both eyes (if the red-eye effect is small), or can detect other eye-like red regions as eyes, which results in false positives. Our proposed method detects iris regions and applies correction only to the found iris regions.

- Small red-eye artefacts are better corrected by our proposed algorithm, as seen for subject 4.

- The proposed algorithm gives a more natural and visually appealing result. It is not based on 'hard' replacement of red colour by grey-level values. The approach of Ref. 5 replaces red-eye pixels with the mean value of the green and blue channels. However, this kind of replacement gives a perceptually less attractive result and produces hard boundaries around the corrected red-eye regions. In fact, in some cases, STOIK RedEye AutoFix degrades the overall look of the whole image; this effect is visible for subject 2 in Fig. 9.

- HP RedBot is a comparatively robust method, but it usually replaces the red colour with almost perfect black, which may not be the true colour of the subject's eyes. This effect is easily visible for subjects 5 and 6.

7 Some of the successful eye detection results from the Bern database

8 The only image from 150 Bern images for which the algorithm failed to localize the iris pair correctly

Table 2 Performance of eye detection when tested on three public databases

Database   Total error (%)   Total error ignoring closed-eye images (%)
Bern       0.67              NA
Yale       7.88              3.64

6.3 Computational cost

To find the average processing time, the algorithm is tested on 200 red-eye images and the average time of each step is computed. The average execution time of the proposed algorithm is about 713.7 ms on a computer with a 2 GHz dual-core CPU. Table 3 shows the execution time (in milliseconds) of each step of the proposed algorithm.

The tuning process takes a variable amount of time, depending on the interval r_L ≤ r ≤ r_U and the size of the neighbourhood being considered. Since the exact radius of the iris is not known, we vary the radius of the template in the range r_L ≤ r ≤ r_U, so that when the radius r of the template matches the radius of the iris and the template is exactly centred on the iris, the separability value is maximal. In our experiments, we used r_L = 3 and r_U = 7. This range depends on the resolution of the face image: the resolution of the face image obtained in the face detection step varies from 100×100 to 200×200 pixels, which in turn depends on the resolution of the input image. Given some prior information about the resolution of the input image, this range can be reduced, and hence also the processing time required by the tuning process.

7 CONCLUSION

We have proposed an algorithm which corrects red-eye images with a high correction rate as well as a natural visual appearance. The algorithm is based on robust iris pair localisation. After the iris pair is detected, the iris radii and centre points are used to correct the redness inside the irises. The correction is proportional to the severity of redness, and cases such as small red-eye artefacts can be handled. The algorithm is tested against available state-of-the-art techniques and shows an improved correction rate as well as better quality recovery of red eyes.

REFERENCES

1 Benati, P. J., Gray, R. T. and Cosgrove, P. A. ‘Automated detection and correction of eye color defects due to flash illumination’, US Patent 5,748,764, 1998.

2 Dobbs, C. M. and Goodwin, R. M. ‘Localized image recoloring using ellipsoid boundary function’, US Patent 5,130,789, 1992.

3 Hara, M., Yokonuma, N., Miyamoto, H., Inoue, H. and Sosa, T. ‘Control device for preventing red-eye effect on camera’, US Patent 5,950,023, 1999.

Table 3 Average processing time of each step in the proposed algorithm

Steps of proposed algorithm          Average time (ms)
Face and eye candidates detection    277.03
Tuning candidate points              394.11
Iris pair selection                  26.23
Red-eye correction                   16.33
Total time                           713.70

9 Quality comparison of the proposed method with HP RedBot and STOIK RedEye AutoFix


4 Teremy, P. ‘Camera with control to prevent pre-flash light emission for reducing red-eye effect’, US Patent 6,047,138, 2000.

5 Smolka, B., Czubin, K., Hardeberg, J. Y., Plataniotis, K. N., Szczepanski, M. and Wojciechowski, K. Towards automatic redeye effect removal. Patt. Recogn. Lett., 2003, 24, 1767–1785.

6 Huang, P.-H., Chien, Y.-C. and Lai, S.-H. Automatic multilayer red-eye detection, Proc. 13th Int. Conf. on Image processing: ICIP 2006, Atlanta, GA, USA, October 2006, IEEE Computer Society, pp. 2013–2016.

7 Willamowski, J. and Csurka, G. Probabilistic automatic red eye detection and correction. Proc. 18th IEEE Int. Conf. on Pattern recognition: ICPR 2006, Hong Kong, China, August 2006, IEEE Computer Society, Vol. 3, pp. 762–765.

8 Miao, X.-P. and Sim, T. Automatic red-eye detection and removal, Proc. IEEE Int. Conf. on Multimedia and Expo: ICME 2004, Toronto, Ont., Canada, July 2004, IEEE Computer Society, pp. 1195–1198.

9 Zhang, L., Sun, Y., Li, M. and Zhang, H. Automated red eye detection and correction in digital photographs, Proc. IEEE Int. Conf. on Image processing: ICIP 2004, Singapore, October 2004, IEEE Computer Society, pp. 2363–2366.

10 Luo, H., Yen, J. and Tretter, D. An efficient automatic redeye detection and correction algorithm, Proc. 17th Int. Conf. on Pattern recognition: ICPR 2004, Cambridge, UK, August 2004, IEEE Computer Society, pp. 883–886.

11 Gaubatz, M. and Ulichney, R. Automatic red eye detection and correction, Proc. IEEE Int. Conf. on Image processing: ICIP 2002, Rochester, NY, USA, September 2002, IEEE Computer Society, pp. 804–807.

12 Ioffe, S. Red eye detection with machine learning, Proc. IEEE Int. Conf. on Image processing: ICIP 2003, Barcelona, Spain, September 2003, IEEE Computer Society, pp. 871–874.

13 Volken, F., Terrier, J. and Vandewalle, P. Automatic red eye removal based on sclera and skin tone detection, Proc. 3rd European Conf. on Colour in graphics, imaging, and vision: CGIV 2006, Leeds, UK, June 2006, University of Leeds, pp. 359–364.

14 Viola, P. and Jones, M. Rapid object detection using a boosted cascade of simple features, Proc. Int. Conf. on Computer vision and pattern recognition: CVPR 2001, Kauai, HI, USA, December 2001, IEEE Computer Society, pp. 511–518.

15 Fukui, K. and Yamaguchi, O. Facial feature point extraction method based on combination of shape extraction and pattern matching. Denshi Joho Tsushin Gakkai Ronbunshi, 1997, J80-D-II, 2170–2177.

16 Hewlett-Packard Labs. ‘RedBot automatic red eye correction’, http://redbot.net/

17 STOIK Imaging. 'STOIK RedEye AutoFix', http://www.stoik.com/stoik_red_eye/

18 Jesorsky, O., Kirchberg, K. J. and Frischholz, R. W. Robust face detection using the Hausdorff distance, Proc. 3rd Int. Conf. on Audio and video-based biometric person authentication: AVBPA 2001, Halmstad, Sweden, June 2001, Springer, pp. 90–95.

19 Song, J., Chi, Z. and Liu, J. A robust eye detection method using combined binary edge and intensity information. Patt. Recogn., 2006, 39, 1110–1125.

20 Kawaguchi, T. and Rizon, M. Iris detection using intensity and edge information. Patt. Recogn., 2003, 36, 549–562.

21 Bern face database, http://iamwww.unibe.ch/ykiwww/staff/achermann.html

22 Yale face database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html

23 BioID face database, http://www.bioid.com/downloads/facedb/index.php
