
Invariant color descriptors for efficient object recognition - Bibliography



UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl)

Invariant color descriptors for efficient object recognition

van de Sande, K.E.A.

Publication date 2011

Link to publication

Citation for published version (APA):

van de Sande, K. E. A. (2011). Invariant color descriptors for efficient object recognition.

General rights

It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).

Disclaimer/Complaints regulations

If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: https://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.


Bibliography

[1] A. E. Abdel-Hakim and A. A. Farag, “CSIFT: A SIFT descriptor with color invariant characteristics,” in IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 1978–1983.

[2] H. Akaike, “A new look at the statistical model identification,” IEEE Transactions on Automatic Control, vol. 19, no. 6, pp. 716–723, 1974.

[3] B. Alexe, T. Deselaers, and V. Ferrari, “What is an object?” in IEEE Conference on Computer Vision and Pattern Recognition, 2010.

[4] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.

[5] A. Asuncion and D. Newman, “UCI machine learning repository,” 2007. [Online]. Available: http://archive.ics.uci.edu/ml

[6] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[7] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.

[8] R. Bordawekar, U. Bondhugula, and R. Rao, “Believe it or not! Multi-core CPUs can match GPU performance for FLOP-intensive application!” IBM Thomas J. Watson Research Center, Tech. Rep. IBM-RC24982, 2010.

[9] A. Bosch, A. Zisserman, and X. Muñoz, “Scene classification using a hybrid generative/discriminative approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 712–727, 2008.

[10] G. J. Burghouts and J.-M. Geusebroek, “Performance evaluation of local color invariants,” Computer Vision and Image Understanding, vol. 113, pp. 48–62, 2009.


[11] K. P. Burnham and D. R. Anderson, “Multimodel Inference: Understanding AIC and BIC in model selection,” Sociological Methods and Research, vol. 33, no. 2, pp. 261–304, 2004.

[12] D. Cai, X. He, and J. Han, “Efficient kernel discriminant analysis via spectral regression,” in IEEE International Conference on Data Mining, 2007, pp. 427–432.

[13] J. Carreira and C. Sminchisescu, “Constrained parametric min-cuts for automatic object segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010.

[14] B. Catanzaro, N. Sundaram, and K. Keutzer, “Fast support vector machine training and classification on graphics processors,” in International Conference on Machine Learning, 2008, pp. 104–111.

[15] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, 2001, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

[16] C.-C. Chang, Y.-C. Li, and J.-B. Yeh, “Fast codebook search algorithms based on tree-structured vector quantization,” Pattern Recognition Letters, vol. 27, no. 10, pp. 1077–1086, 2006.

[17] D. Chang, N. A. Jones, D. Li, and M. Ouyang, “Compute pairwise Euclidean distances of data points with GPUs,” in Intelligent Systems and Control, 2008, pp. 278–283.

[18] S.-F. Chang, D. Ellis, W. Jiang, K. Lee, A. Yanagawa, A. C. Loui, and J. Luo, “Large-Scale Multimodal Semantic Concept Detection for Consumer Video,” in ACM International Workshop on Multimedia Information Retrieval, 2007, pp. 255–264.

[19] S.-F. Chang, J. He, Y.-G. Jiang, E. E. Khoury, C.-W. Ngo, A. Yanagawa, and E. Zavesky, “Columbia university/VIREO-CityU/IRIT TRECVID2008 high-level feature extraction and interactive video search,” in Proceedings of the TRECVID Workshop, 2008.

[20] O. Chum and A. Zisserman, “An exemplar model for learning object classes,” in IEEE Conference on Computer Vision and Pattern Recognition, 2007.

[21] R. T. Collins, Y. Liu, and M. Leordeanu, “Online selection of discriminative tracking features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1631–1643, 2005.

[22] N. Cornelis and L. Van Gool, “Fast scale invariant feature detection and matching on programmable graphics hardware,” in IEEE Computer Vision and Pattern Recognition Workshops, 2008.

[23] G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray, “Visual categorization with bags of keypoints,” in ECCV Statistical Learning in Computer Vision, 2004.


[24] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886–893.

[25] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image retrieval: Ideas, influences, and trends of the new age,” ACM Computing Surveys, vol. 40, no. 2, pp. 1–60, 2008.

[26] G. Diamos, “The design and implementation of Ocelot’s dynamic binary translator from PTX to multi-core x86,” Center for Experimental Research in Computer Systems, Tech. Rep., 2009.

[27] G. Diamos, A. Kerr, and M. Kesavan, “Translating GPU binaries to tiered SIMD architectures with Ocelot,” Center for Experimental Research in Computer Systems, Tech. Rep., 2009.

[28] T.-N. Do, V.-H. Nguyen, and F. Poulet, “Speed up SVM algorithm for massive classification tasks,” in Advanced Data Mining and Applications, 2008, pp. 147–157.

[29] M. D’Zmura, “Color in visual search,” Vision Research, vol. 31, no. 6, pp. 951–966, 1991.

[30] B. Efron, “Bootstrap methods: Another look at the jackknife,” Annals of Statistics, vol. 7, pp. 1–26, 1979.

[31] I. Endres and D. Hoiem, “Category independent object proposals,” in IEEE European Conference on Computer Vision, 2010.

[32] M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection: Survey and experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179–2195, 2009.

[33] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.” [Online]. Available: http://www.pascal-network.org/challenges/VOC/voc2007/

[34] ——, “The PASCAL Visual Object Classes Challenge 2008 (VOC2008) Results.” [Online]. Available: http://www.pascal-network.org/challenges/VOC/voc2008/

[35] ——, “The PASCAL Visual Object Classes Challenge 2009 (VOC2009) Results,” 2009. [Online]. Available: http://www.pascal-network.org/challenges/VOC/voc2009/

[36] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes (VOC) Challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.

[37] R. Farivar, “Dorsal-ventral integration in object recognition,” Brain Research Reviews, vol. 61, no. 2, pp. 144–153, 2009.


[38] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 1627–1645, 2010.

[39] P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient Graph-Based Image Segmentation,” International Journal of Computer Vision, vol. 59, pp. 167–181, 2004.

[40] R. Fergus, P. Perona, and A. Zisserman, “Object class recognition by unsupervised scale-invariant learning,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, pp. 264–271.

[41] R. Fergus, F.-F. Li, P. Perona, and A. Zisserman, “Learning object categories from Google’s image search,” in IEEE International Conference on Computer Vision, 2005, pp. 1816–1823.

[42] G. D. Finlayson, M. S. Drew, and B. V. Funt, “Spectral sharpening: sensor transformations for improved color constancy,” Journal of the Optical Society of America A, vol. 11, no. 5, p. 1553, 1994.

[43] G. D. Finlayson, S. D. Hordley, and R. Xu, “Convex programming colour constancy with a diagonal-offset model,” in IEEE International Conference on Image Processing, 2005, pp. 948–951.

[44] P.-E. Forssén, “Maximally stable colour regions for recognition and matching,” in IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, 2007.

[45] A. Gaidon, M. Marszałek, and C. Schmid, “The PASCAL Visual Object Classes Challenge 2008 submission,” INRIA-LEAR, Tech. Rep., 2008.

[46] M. Garland, S. L. Grand, J. Nickolls, J. Anderson, J. Hardwick, S. Morton, E. Phillips, Y. Zhang, and V. Volkov, “Parallel computing experiences with CUDA,” IEEE Micro, vol. 28, no. 4, pp. 13–27, 2008.

[47] P. V. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” in IEEE International Conference on Computer Vision, 2009.

[48] J.-M. Geusebroek, G. J. Burghouts, and A. W. M. Smeulders, “The Amsterdam library of object images,” International Journal of Computer Vision, vol. 61, no. 1, pp. 103–112, 2005.

[49] J.-M. Geusebroek, A. W. M. Smeulders, and J. van de Weijer, “Fast anisotropic Gauss filtering,” IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 938–943, 2003.

[50] J.-M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts, “Color invariance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 12, pp. 1338–1350, 2001.


[51] T. Gevers and H. M. G. Stokman, “Classification of color edges in video into shadow-geometry, highlight, or material transitions,” IEEE Transactions on Multimedia, vol. 5, no. 2, pp. 237–243, 2003.

[52] T. Gevers, J. van de Weijer, and H. Stokman, Color image processing: methods and applications: color feature detection: an overview. CRC Press, 2006, ch. 9, pp. 203–226.

[53] N. Goda and M. Fujii, “Sensitivity to modulation of color distribution in multicolored textures,” Vision Research, vol. 41, no. 19, pp. 2475–2485, 2001.

[54] M. A. Goodale and A. D. Milner, “Separate visual pathways for perception and action,” Trends in Neurosciences, vol. 15, no. 1, pp. 20–25, 1992.

[55] K. Grauman and T. Darrell, “The pyramid match kernel: Efficient learning with sets of features,” Journal of Machine Learning Research, vol. 8, pp. 725–760, 2007.

[56] C. Gu, J. J. Lim, P. Arbeláez, and J. Malik, “Recognition using regions,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009.

[57] T. Hansen and K. R. Gegenfurtner, “Higher level chromatic mechanisms for image segmentation,” Journal of Vision, vol. 6, no. 3, pp. 239–259, 2006.

[58] H. Harzallah, F. Jurie, and C. Schmid, “Combining efficient object localization and image classification,” in IEEE International Conference on Computer Vision, 2009.

[59] M. Hassaballah, S. Omran, and Y. B. Mahdy, “A review of SIMD multimedia extensions and their usage in scientific and engineering applications,” Computer Journal, vol. 51, no. 6, pp. 630–649, 2008.

[60] D. Hoiem, A. A. Efros, and M. Hebert, “Recovering surface layout from an image,” International Journal of Computer Vision, 2007.

[61] B. Huurnink, L. Hollink, W. van den Heuvel, and M. de Rijke, “Search behavior of media professionals at an audiovisual archive: A transaction log analysis,” Journal of the American Society for Information Science and Technology, vol. 61, no. 6, pp. 1180–1197, June 2010.

[62] H. Jégou, M. Douze, and C. Schmid, “Packing bag-of-features,” in IEEE International Conference on Computer Vision, 2009.

[63] Y.-G. Jiang, C.-W. Ngo, and J. Yang, “Towards optimal bag-of-features for object categorization and semantic video retrieval,” in ACM International Conference on Image and Video Retrieval, Amsterdam, The Netherlands, 2007, pp. 494–501.

[64] Y.-G. Jiang, J. Yang, C.-W. Ngo, and A. Hauptmann, “Representations of keypoint-based semantic concept detection: A comprehensive study,” IEEE Transactions on Multimedia, vol. 12, no. 1, pp. 42–53, 2010.


[65] F. Jurie and B. Triggs, “Creating efficient codebooks for visual recognition,” in IEEE International Conference on Computer Vision, 2005, pp. 604–610.

[66] W. Kahan, “Pracniques: further remarks on reducing truncation errors,” Communications of the ACM, vol. 8, no. 1, p. 40, 1965.

[67] Khronos Group, OpenCL website, 2010, available at http://www.khronos.org/opencl/.

[68] C. H. Lampert, M. B. Blaschko, and T. Hofmann, “Efficient subwindow search: A branch and bound framework for object localization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 2129–2142, 2009.

[69] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2006, pp. 2169–2178.

[70] V. W. Lee, C. Kim, J. Chhugani, M. Deisher, D. Kim, A. D. Nguyen, N. Satish, M. Smelyanskiy, S. Chennupaty, P. Hammarlund, R. Singhal, and P. Dubey, “Debunking the 100x GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU,” SIGARCH Computer Architecture News, vol. 38, no. 3, pp. 451–460, 2010.

[71] B. Leibe and B. Schiele, “Interleaved object categorization and segmentation,” in British Machine Vision Conference, 2003, pp. 759–768.

[72] T. K. Leung and J. Malik, “Representing and recognizing the visual appearance of materials using three-dimensional textons,” International Journal of Computer Vision, vol. 43, no. 1, pp. 29–44, 2001.

[73] F. Li, J. Carreira, and C. Sminchisescu, “Object recognition as ranking holistic figure-ground hypotheses,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010.

[74] E. Lindholm, J. Nickolls, S. Oberman, and J. Montrym, “Nvidia Tesla: A unified graphics and computing architecture,” IEEE Micro, vol. 28, no. 2, pp. 39–55, 2008.

[75] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] S. Maji, A. C. Berg, and J. Malik, “Classification using intersection kernel support vector machines is efficient,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008.

[77] S. Maji and J. Malik, “Object detection using a max-margin Hough transform,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009.

[78] T. Malisiewicz and A. A. Efros, “Improving spatial support for objects via multiple segmentations,” in British Machine Vision Conference, 2007.


[79] M. Marszałek, C. Schmid, H. Harzallah, and J. van de Weijer, “Learning object representations for visual object class recognition,” 2007, Visual Recognition Challenge workshop, in conjunction with IEEE ICCV.

[80] J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide-baseline stereo from maximally stable extremal regions,” Image and Vision Computing, vol. 22, no. 10, pp. 761–767, 2004.

[81] K. Mikolajczyk et al., “A comparison of affine region detectors,” International Journal of Computer Vision, vol. 65, no. 1-2, pp. 43–72, 2005.

[82] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.

[83] F. Mindru, T. Tuytelaars, L. Van Gool, and T. Moons, “Moment invariants for recognition under changing viewpoint and illumination,” Computer Vision and Image Understanding, vol. 94, no. 1-3, pp. 3–27, 2004.

[84] M. Mishkin and L. G. Ungerleider, “Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys,” Behavioural Brain Research, vol. 6, no. 1, pp. 57–77, 1982.

[85] F. Moosmann, B. Triggs, and F. Jurie, “Fast discriminative visual codebooks using randomized clustering forests,” in Neural Information Processing Systems, 2006, pp. 985–992.

[86] M. Naphade, J. R. Smith, J. Tesic, S.-F. Chang, W. Hsu, L. Kennedy, A. Hauptmann, and J. Curtis, “Large-scale concept ontology for multimedia,” IEEE Multimedia, vol. 13, no. 3, pp. 86–91, 2006.

[87] J. Nickolls, I. Buck, M. Garland, and K. Skadron, “Scalable parallel programming with CUDA,” Queue, vol. 6, no. 2, pp. 40–53, 2008.

[88] Nvidia, CUDA Programming Guide, 2010, available at http://www.nvidia.com/CUDA.

[89] J. D. Owens, M. Houston, D. Luebke, S. Green, J. E. Stone, and J. C. Phillips, “GPU computing,” Proceedings of the IEEE, vol. 96, no. 5, pp. 879–899, 2008.

[90] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman, “Using multiple segmentations to discover objects and their extent in image collections,” in IEEE Conference on Computer Vision and Pattern Recognition, 2006.

[91] F. J. Seinstra, J.-M. Geusebroek, D. Koelma, C. G. M. Snoek, M. Worring, and A. W. M. Smeulders, “High-performance distributed video content analysis with Parallel-Horus,” IEEE Multimedia, vol. 14, no. 4, pp. 64–75, 2007.


[92] S. A. Shafer, “Using color to separate reflection components,” Color Research and Applications, vol. 10, no. 4, pp. 210–218, 1985.

[93] T. Sharp, “Implementing decision trees and forests on a GPU,” in IEEE European Confer-ence on Computer Vision, 2008, pp. 595–608.

[94] S. N. Sinha, J.-M. Frahm, M. Pollefeys, and Y. Genc, “Feature tracking and matching in video using programmable graphics hardware,” Machine Vision and Applications, 2007.

[95] J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” in IEEE International Conference on Computer Vision, 2003, pp. 1470–1477.

[96] A. F. Smeaton, P. Over, and W. Kraaij, “Evaluation campaigns and TRECVid,” in ACM International Workshop on Multimedia Information Retrieval, 2006, pp. 321–330.

[97] C. G. M. Snoek, K. E. A. van de Sande, O. de Rooij, B. Huurnink, J. R. R. Uijlings, M. van Liempt, M. Bugalho, I. Trancoso, F. Yan, M. A. Tahir, K. Mikolajczyk, J. Kittler, M. de Rijke, J.-M. Geusebroek, T. Gevers, M. Worring, D. C. Koelma, and A. W. M. Smeulders, “The MediaMill TRECVID 2009 semantic video search engine,” in Proceedings of the TRECVID Workshop, 2009.

[98] C. G. M. Snoek, K. E. A. van de Sande, O. de Rooij, B. Huurnink, J. C. van Gemert, J. R. R. Uijlings, et al., “The MediaMill TRECVID 2008 semantic video search engine,” in Proceedings of the TRECVID Workshop, 2008.

[99] C. G. M. Snoek, M. Worring, O. de Rooij, K. E. A. van de Sande, R. Yan, and A. G. Hauptmann, “VideOlympics: Real-time evaluation of multimedia retrieval systems,” IEEE Multimedia, vol. 15, no. 1, pp. 86–91, 2008.

[100] C. G. M. Snoek, M. Worring, J.-M. Geusebroek, D. C. Koelma, and F. J. Seinstra, “On the surplus value of semantic video analysis beyond the key frame,” in IEEE International Conference on Multimedia & Expo, 2005.

[101] C. G. M. Snoek, M. Worring, J. C. van Gemert, J.-M. Geusebroek, and A. W. M. Smeulders, “The challenge problem for automated detection of 101 semantic concepts in multimedia,” in ACM International Conference on Multimedia, 2006, pp. 421–430.

[102] S. Sonnenburg, G. Raetsch, S. Henschel, C. Widmer, J. Behr, A. Zien, F. de Bona, A. Binder, C. Gehl, and V. Franc, “The Shogun machine learning toolbox,” Journal of Machine Learning Research, vol. 11, pp. 1799–1802, 2010.

[103] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf, “Large scale multiple kernel learning,” Journal of Machine Learning Research, vol. 7, pp. 1531–1565, 2006.

[104] J. Stratton, S. Stone, and W.-m. Hwu, “MCUDA: An efficient implementation of CUDA kernels for multi-core CPUs,” in Workshop on Languages and Compilers for Parallel Computing, 2008.


[105] M. A. Tahir, J. Kittler, K. Mikolajczyk, F. Yan, K. E. A. van de Sande, and T. Gevers, “Visual category recognition using spectral regression and kernel discriminant analysis,” in Workshop on Subspace Methods, in conjunction with IEEE ICCV, 2009.

[106] M. A. Tahir, K. E. A. van de Sande, J. R. R. Uijlings, et al., “University of Amsterdam and University of Surrey at PASCAL VOC 2008,” 2008, PASCAL Visual Object Classes Challenge Workshop, in conjunction with IEEE European Conference on Computer Vision. [Online]. Available: http://koen.me/research/pub/vandesande-pascalvoc2008.pdf

[107] F. Tang, S. H. Lim, N. Chang, and H. Tao, “A novel feature descriptor invariant to complex brightness changes,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 2631–2638, 2009.

[108] T. Tuytelaars and K. Mikolajczyk, “Local invariant feature detectors: A survey,” Foundations and Trends in Computer Graphics and Vision, vol. 3, no. 3, pp. 177–280, 2008.

[109] T. Tuytelaars and C. Schmid, “Vector quantizing feature space with a regular lattice,” in IEEE International Conference on Computer Vision, 2007, pp. 1–8.

[110] J. R. R. Uijlings, A. W. M. Smeulders, and R. J. H. Scha, “Real-time bag-of-words, approximately,” in ACM International Conference on Image and Video Retrieval, 2009.

[111] ——, “What is the spatial extent of an object?” in IEEE Conference on Computer Vision and Pattern Recognition, 2009.

[112] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek, “Evaluating color descriptors for object and scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582–1596, 2010.

[113] K. E. A. van de Sande and T. Gevers, “Illumination-invariant descriptors for discriminative visual object categorization,” International Journal of Computer Vision, submitted, 2011.

[114] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek, “Evaluation of color descriptors for object and scene recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Alaska, USA, 2008.

[115] ——, “Empowering visual categorization with the GPU,” IEEE Transactions on Multimedia, vol. 13, no. 1, pp. 60–70, 2011.

[116] K. E. A. van de Sande, J. R. R. Uijlings, T. Gevers, and A. W. M. Smeulders, “Segmentation as selective search for object recognition,” in IEEE International Conference on Computer Vision, 2011.

[117] J. van de Weijer, T. Gevers, and A. Bagdanov, “Boosting color saliency in image feature detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 150–156, 2006.


[118] J. van de Weijer and C. Schmid, “Coloring local feature extraction,” in IEEE European Conference on Computer Vision, vol. 2, 2006, pp. 334–348.

[119] J. C. van Gemert, J.-M. Geusebroek, C. J. Veenman, C. G. M. Snoek, and A. W. M. Smeulders, “Robust scene categorization by learning image statistics in context,” in CVPR Workshop on Semantic Learning Applications in Multimedia (SLAM), 2006.

[120] J. C. van Gemert, C. J. Veenman, A. W. M. Smeulders, and J.-M. Geusebroek, “Visual word ambiguity,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1271–1283, 2010.

[121] V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. New York, USA: Springer-Verlag, 2000.

[122] M. Varma and D. Ray, “Learning the discriminative power-invariance trade-off,” in IEEE International Conference on Computer Vision, 2007.

[123] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman, “Multiple kernels for object detection,” in IEEE International Conference on Computer Vision, 2009.

[124] P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, pp. 137–154, 2004.

[125] J. Vogel and B. Schiele, “Semantic modeling of natural scenes for content-based image retrieval,” International Journal of Computer Vision, vol. 72, no. 2, pp. 133–157, 2007.

[126] J. von Kries, “Influence of adaptation on the effects produced by luminous stimuli,” in MacAdam, D. L. (Ed.), Sources of Color Vision. MIT Press, 1970.

[127] R. Vuduc, A. Chandramowlishwaran, J. W. Choi, M. E. Guney, and A. Shringarpure, “On the limits of GPU acceleration,” in USENIX Workshop on Hot Topics in Parallelism, 2010.

[128] D. Wang, X. Liu, L. Luo, J. Li, and B. Zhang, “Video diver: generic video indexing with diverse features,” in ACM International Workshop on Multimedia Information Retrieval, 2007, pp. 61–70.

[129] A. R. Webb, Statistical Pattern Recognition, 2nd Edition. John Wiley & Sons, 2002.

[130] J. Zhang, M. Marszałek, S. Lazebnik, and C. Schmid, “Local features and kernels for classification of texture and object categories: A comprehensive study,” International Journal of Computer Vision, vol. 73, no. 2, pp. 213–238, 2007.

[131] L. Zhu, Y. Chen, A. Yuille, and W. Freeman, “Latent hierarchical structural learning for object detection,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010.
