
No interaction between background complexity and congruency found for object detection

Ramon Bussing

University of Amsterdam

Humans are able to recognise objects presented in isolation with ease, regardless of orientation or other appearance variations. This ability is referred to as core object recognition. When an object is embedded in a complicated scene, however, additional processing might be required: object recognition then relies not only on perception of the object itself, but also on more peripheral information from the visual scene. Brief global scene information precedes or accompanies the use of more specific features. These features play a role in core object recognition, since an object in a scene where it would normally occur (congruent) is recognised better than an object in a scene that would not normally contain it (incongruent). Furthermore, the amount of information in the scene seems to play a role: natural scene complexity modulates object recognition performance, with the best performance for scenes of medium complexity. This study combines an object recognition task for human participants with a computational approach to investigate how background congruency and complexity interact in object recognition. Participants are briefly shown stimuli that contain an object on backgrounds of varying complexity and congruency, as well as a no-background condition to isolate the background effect. To compare network depths and more texture-driven processing, deep residual networks (ResNets) perform object detection on the same stimuli shown to the participants. In conclusion: no interaction effect between background congruence and background complexity is found, object detection performance decreases as complexity increases, and congruent backgrounds lead to better object detection performance than incongruent backgrounds.


Introduction

Humans are able to recognise objects presented in isolation with ease, regardless of orientation or other appearance variations. Rajaei, Mohsenzadeh, Ebrahimpour, and Khaligh-Razavi (2019) refer to this ability as core object recognition. A feedforward sweep extracting basic features is believed to be enough to facilitate this form of object recognition (DiCarlo & Cox, 2007; Julesz, 1981; Lamme & Roelfsema, 2000; Marr, 1982; Serre, Oliva, & Poggio, 2007; Treisman & Gelade, 1980). However, when the object is embedded in a complicated scene, this feedforward sweep might not suffice: recognition then relies not only on perception of the object itself, but also on more peripheral information from the visual scene. Support for this idea comes from studies showing that recognition signals for the same object emerged later in the recurrently wired inferior temporal cortex under challenging than under non-challenging conditions (Rajaei et al., 2019). Moreover, brief global scene information precedes (Hochstein & Ahissar, 2002) or accompanies (Rousselet, Joubert, & Fabre-Thorpe, 2005; Wolfe, Võ, Evans, & Greene, 2011) the use of more specific features. These features play a role in core object recognition, since an object in a scene where it would normally occur (congruent) is recognised better than an object in a scene that would not normally contain it (incongruent) (Davenport & Potter, 2004; Seijdel, Tsakmakidis, de Haan, Bohte, & Scholte, 2019). Performance is best, however, for segmented objects presented without any background.

An explanation could be that scene context contributes to object recognition. This contextual information likely affects object detection through top-down reinforcement of the semantic representation of the object (Bar, 2003); figure-ground segmentation has been proposed as a possible mechanism for improving core object recognition.

Deep convolutional neural networks (DCNNs), inspired by biological neurons, are a valuable modelling tool for computational neuroscience, since they are currently the best models of core object recognition (Kietzmann, McClure, & Kriegeskorte, 2019) and allow concepts like recurrence and network depth to be modelled and investigated within the constraints of such a model. Furthermore, the amount of information in the scene seems to play a role: natural scene complexity modulates object recognition performance, with the best performance for scenes of medium complexity (Groen, Ghebreab, Prins, Lamme, & Scholte, 2013). Scene complexity is expressed in two parameters, contrast energy (CE) and spatial coherence (SC). CE represents the amount and intensity of local edges, whereas SC represents the spatial coherence of the contrast. These two parameters are provided by a computational model of responses from receptive fields in the lateral geniculate nucleus (LGN) that reflect contrast distributions (Groen, Ghebreab, Lamme, & Scholte, 2012; Groen et al., 2013). How background congruency and complexity interact in core object detection, however, remains unclear.
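For intuition, the sketch below approximates CE and SC from a grayscale image. It is only a rough stand-in, assuming, following the Weibull characterization in Groen et al. (2012, 2013), that CE corresponds to the scale and SC to the shape of a Weibull distribution fitted to local contrast values; the published model derives local contrast from LGN-like filters, whereas plain Sobel gradients are used here.

```python
import numpy as np
from scipy import ndimage, stats

def ce_sc(gray_image):
    """Rough proxies for contrast energy (CE) and spatial coherence (SC).

    Assumption: CE is taken as the scale and SC as the shape of a Weibull
    distribution fitted to local contrast values; Sobel gradient magnitude
    stands in for the LGN-filter contrast of the published model.
    """
    gray = np.asarray(gray_image, dtype=float)
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    contrast = np.hypot(gx, gy).ravel()
    contrast = contrast[contrast > 0]  # Weibull support is x > 0
    shape, _, scale = stats.weibull_min.fit(contrast, floc=0)
    return scale, shape  # (CE proxy, SC proxy)
```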

Scenes with low complexity generally contain very little information, whereas very complex scenes contain cues so cluttered that presumably no context, but only very global scene information, can be used. The object detection performance difference between congruent and incongruent backgrounds is therefore hypothesized to increase as background complexity increases, but to decrease again for very complex backgrounds. For purely texture-driven networks without deeper contextual knowledge, the expectation is that the performance difference between congruent and incongruent backgrounds will not increase, but only decrease. As network depth increases, the difference between congruent and incongruent performance across complexity should become smaller.

To test this hypothesis, an object recognition task with 23 object categories will be used with human participants. Stimuli containing an object on backgrounds of different complexity and congruency will be shown briefly, as well as a no-background condition to isolate the background effect. Masks follow each stimulus to make the task more challenging. To compare network depths and more texture-driven processing, deep residual networks (ResNets) (He, Zhang, Ren, & Sun, 2016) will perform object detection on the same stimuli shown to the participants.
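A minimal sketch of how the network comparison could be run, assuming ImageNet-pretrained torchvision ResNets; the mapping `category_to_imagenet_ids` from the 23 categories to ImageNet class indices is hypothetical and would need to be defined by hand.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# ImageNet-pretrained ResNets of increasing depth (He et al., 2016).
nets = {name: getattr(models, name)(pretrained=True).eval()
        for name in ("resnet18", "resnet34", "resnet50", "resnet101")}

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_correct(net, image_path, category, category_to_imagenet_ids):
    """Score one stimulus: the trial counts as correct when the top-1
    ImageNet class belongs to the stimulus category's class set."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        top1 = net(x).argmax(dim=1).item()
    return top1 in category_to_imagenet_ids[category]
```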

Materials and Methods

Subjects

A total of thirty subjects, aged between 18 and 23 years (25 females, M = 19.63, SD = 1.39), participated in this study, which was approved by the ethics committee of the University of Amsterdam. All subjects were students at the University of Amsterdam, free from neurological or psychiatric disorders (e.g. epileptic episodes, concussions), with normal or corrected-to-normal vision. Furthermore, all participants were familiar with the English language. As compensation, participants each received one 'Research Credit'; these credits are a mandatory component of their degree programme.

Stimuli

The object images are from the ILSVRC2012 database (Russakovsky et al., 2015), since those images provide much variation in object appearance, thus randomizing all factors except the type of object. The objects were segmented from the rest of the image using the Mask R-CNN DCNN (He, Gkioxari, Dollár, & Girshick, 2018), which was trained on the Microsoft COCO database (Lin et al., 2014) containing segmentation masks for common objects in context. From the object categories shared by the SUN2012 (Xiao, Hays, Ehinger, Oliva, & Torralba, 2010), ILSVRC2012 and Microsoft COCO databases, the segmented images with the best Mask R-CNN results were then selected by hand, resulting in 819 images across the 23 object categories. At least 18 images per object category were present to ensure enough variation in object appearance during the trials. The following objects were used: airplane, apple, backpack, bear, bird, boat, bottle, bus, car, cat, clock, couch, dog, elephant, keyboard, knife, orange, pizza, remote, sandwich, sheep, train and vase.
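A sketch of the segmentation step, assuming the COCO-pretrained Mask R-CNN shipped with torchvision; the exact weights and score threshold used in the study may differ.

```python
import torch
from PIL import Image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO-pretrained Mask R-CNN (He et al., 2018) as available in torchvision.
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

def segment_object(image_path, score_threshold=0.9):
    """Return a boolean mask (H x W) for the highest-scoring detection,
    or None when nothing exceeds the score threshold."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]  # dict with boxes, labels, scores, masks
    if len(out["scores"]) == 0 or out["scores"].max() < score_threshold:
        return None
    best = out["scores"].argmax()
    return out["masks"][best, 0] > 0.5
```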

The SUN2012 database contains a large number of images, sorted by scene category, with annotated objects. Congruent object and background scene category combinations were those with at least one annotated instance of the object in that scene category; incongruent combinations were those with no annotated instances. Because the SUN2012 database does not provide images of sufficiently high quality, backgrounds for the selected scene categories were scraped from Google Images using the Selenium WebDriver (Gojare, Joshi, & Gaigaware, 2015), ensuring a resolution of at least 1080x1080 pixels. The automatically assigned backgrounds per object and congruency were then screened by hand to remove anomalies such as congruent combinations in the incongruent selection and vice versa. In addition, congruent backgrounds that already contained the corresponding object were removed, resulting in 2880 unique background images.
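A minimal sketch of the scraping step with Selenium. The query string and the bare `img` tag are placeholders: Google Images markup changes often and the exact selectors used in the study are not documented; filtering for at least 1080x1080 px would happen after downloading each candidate.

```python
from selenium import webdriver

# Open a Google Images search for one (hypothetical) scene category.
driver = webdriver.Firefox()
driver.get("https://www.google.com/search?tbm=isch&q=beach")

# Collect the source URLs of all image elements on the result page.
urls = [img.get_attribute("src")
        for img in driver.find_elements_by_tag_name("img")
        if img.get_attribute("src")]
driver.quit()
```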

In order to isolate the effect of the background variable, there is a no-background condition containing only the segmented object placed on a 50% grey background.

Background complexity is defined as a combination of contrast energy (CE) and spatial coherence (SC), as defined by Groen et al. (2013). CE was first normalized. A regression between the normalized CE and SC showed a significant correlation (r² = 0.692, p < 0.001, SE = 0.015). To create a single complexity variable, the normalized CE and SC were combined by rotating the axes so that the regression line became the x-axis; the values on this x-axis were used as the single complexity variable. Ten complexity categories were created to allow a uniform complexity distribution per congruence condition and object: the lowest category contained everything below the 5th percentile, the highest category everything above the 95th percentile, and the remaining eight categories were uniformly spaced between those limits.
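A minimal sketch of this axis rotation and binning, assuming per-image arrays of normalized CE and SC values (variable names are illustrative):

```python
import numpy as np

def complexity_scores(ce_norm, sc):
    """Collapse (normalized CE, SC) pairs into one complexity value each:
    fit the regression line of SC on normalized CE, rotate the plane so
    that line becomes the x-axis, and keep the x-coordinate."""
    ce_norm, sc = np.asarray(ce_norm, float), np.asarray(sc, float)
    slope, intercept = np.polyfit(ce_norm, sc, deg=1)
    theta = np.arctan(slope)            # angle of the regression line
    y = sc - intercept                  # shift the line through the origin
    return ce_norm * np.cos(theta) + y * np.sin(theta)

def complexity_categories(scores):
    """Ten bins: below the 5th percentile, above the 95th, and eight
    uniform-width bins in between."""
    lo, hi = np.percentile(scores, [5, 95])
    edges = np.linspace(lo, hi, 9)      # 9 boundaries -> 8 inner bins
    return np.digitize(scores, edges)   # labels 0..9
```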


For each participant, a unique set of 600 stimuli was created by selecting a random object and background per congruence condition for two-thirds of the trials, and a random object for the remaining one-third of segmented (no-background) trials, while keeping the complexity categories uniform per object and congruence condition. Object categories were also uniform per background type (congruent, incongruent and segmented/no background), and all objects were located in the centre of the image with an area of 60,000 pixels.
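A sketch of this per-participant stimulus selection. The data structures (`objects` mapping category to segmented images, `backgrounds` keyed by category, congruence and complexity bin) and the exact balancing bookkeeping are illustrative assumptions; only the 2/3 background versus 1/3 segmented split and the uniform cycling over categories and bins follow the text.

```python
import random

def build_trial_list(objects, backgrounds, n_trials=600, seed=None):
    """Assemble one participant's shuffled trial list."""
    rng = random.Random(seed)
    categories = sorted(objects)
    per_condition = n_trials // 3  # congruent / incongruent / segmented
    trials = []
    for congruence in ("congruent", "incongruent"):
        for i in range(per_condition):
            cat = categories[i % len(categories)]  # uniform categories
            cbin = i % 10                          # uniform complexity bins
            trials.append((rng.choice(objects[cat]),
                           rng.choice(backgrounds[(cat, congruence, cbin)]),
                           congruence, cbin))
    for i in range(per_condition):                 # segmented trials
        cat = categories[i % len(categories)]
        trials.append((rng.choice(objects[cat]), None, "segmented", None))
    rng.shuffle(trials)
    return trials
```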

Figure 2: Complexity and congruency exemplars. Exemplars of low- and high-complexity stimuli, as well as exemplars of congruent and incongruent stimuli.

Object recognition task

Beforehand, participants were instructed to categorize the object shown at the location of the cross. Each trial started with a cross shown for two seconds, indicating the position of the object to be categorised. Thereafter, the object and corresponding background type were shown for 1/30 s (~33 ms), followed by 4 masks shown for 1/20 s (50 ms) each to make the task more difficult and prevent a ceiling effect. The masks contained randomly placed circles of random colour and diameter, as can be seen in figure xx. To familiarize participants with the object categories and minimize practice effects, each participant completed 15 practice trials before the actual experiment started. Each participant then completed 600 trials, corresponding to a duration of approximately 50 minutes. To maintain focus on the task, participants had a break after 150, 300 and 450 trials.
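A sketch of how such a mask frame could be generated. The number of circles and the radius range are assumptions; the text only specifies random placement, colour and diameter.

```python
import random
from PIL import Image, ImageDraw

def make_mask(size=(1080, 1080), n_circles=400, seed=None):
    """One mask frame: randomly placed circles of random colour and
    diameter on a grey canvas."""
    rng = random.Random(seed)
    img = Image.new("RGB", size, (128, 128, 128))
    draw = ImageDraw.Draw(img)
    for _ in range(n_circles):
        r = rng.randint(10, 120)                        # assumed radius range
        x, y = rng.randint(0, size[0]), rng.randint(0, size[1])
        colour = tuple(rng.randint(0, 255) for _ in range(3))
        draw.ellipse((x - r, y - r, x + r, y + r), fill=colour)
    return img
```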

The task was programmed in Python 3 using the PsychoPy 3.2.4 framework and conducted on a 23-inch Full HD 60 Hz IPS monitor. The distance between the participant and the monitor was 60–70 cm.
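A minimal PsychoPy sketch of the trial timeline described above. File names are hypothetical, and `core.wait` is used for brevity; production code would typically lock presentation to screen refreshes (1/30 s is 2 frames and 1/20 s is 3 frames at 60 Hz).

```python
from psychopy import core, visual

# Window colour 0 in PsychoPy's rgb space [-1, 1] is 50% grey.
win = visual.Window(fullscr=True, color=0, units="pix")
fixation = visual.TextStim(win, text="+")
stimulus = visual.ImageStim(win, image="stimulus.png")  # hypothetical file
masks = [visual.ImageStim(win, image=f"mask_{i}.png") for i in range(4)]

fixation.draw(); win.flip(); core.wait(2.0)    # 2 s position cue
stimulus.draw(); win.flip(); core.wait(1/30)   # ~33 ms stimulus
for mask in masks:
    mask.draw(); win.flip(); core.wait(1/20)   # 4 x 50 ms masks
win.flip()                                     # clear screen for response
```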

Figure 3: Object recognition task. An indicator was shown for 2 seconds at the location where the object would be placed. Thereafter, the stimulus was shown for 1/30 s (~33 ms), followed by 4 masks of 1/20 s (50 ms) each. Participants then selected the object with the mouse cursor from a list of all 23 objects.


Results

Object recognition task

Figure 4: Object detection performance per background congruence category. Mean object detection performance for the object detection task per congruence condition. Error bars indicate standard deviation. A main effect of background congruence was found. The following significance indicators are used: * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

A one-way repeated measures ANOVA (n = 30) was conducted to investigate a main effect of congruence category (segmented, congruent and incongruent). No outliers deviating more than three times the SD were present, so none were removed. The assumption of normality was met, but the assumption of sphericity was not (p < 0.001); the Greenhouse-Geisser estimate of sphericity (ε = 0.6514) was therefore applied to correct the degrees of freedom. The ANOVA showed a main effect across categories (F = 216.14, p < 0.001). Post-hoc pairwise t-tests showed that the segmented condition had better object detection performance than the congruent condition (t = 13.396, p < 0.001), and that the congruent condition had better object detection performance than the incongruent condition (t = 6.0807, p < 0.001).
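A minimal sketch of this analysis with the pingouin package, assuming a hypothetical long-format results file with one row per participant per condition:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format file: columns 'participant', 'congruence'
# (segmented / congruent / incongruent) and 'accuracy'.
df = pd.read_csv("results_long.csv")

# One-way repeated-measures ANOVA; with correction=True pingouin reports
# the Greenhouse-Geisser-corrected p-value when sphericity is violated.
aov = pg.rm_anova(data=df, dv="accuracy", within="congruence",
                  subject="participant", correction=True)

# Post-hoc pairwise t-tests between the three congruence conditions.
posthoc = pg.pairwise_ttests(data=df, dv="accuracy", within="congruence",
                             subject="participant")
print(aov)
print(posthoc)
```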

Figure 5: Object detection performance per background congruence and complexity. Mean object detection performance for the object detection task per congruence and complexity condition. Error bars indicate standard deviation. Main effects of congruence and complexity were found, but no interaction effect between congruency and complexity. The following significance indicators are used: * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

A two-way repeated measures ANOVA (n = 30) was conducted to investigate the interaction between congruence and complexity. No outliers deviating more than three times the SD were present, so none were removed. The assumptions of sphericity (p = 0.2481) and normality were met. The ANOVA showed a main effect of background complexity (F = 67.3889, p < 0.001) and of background congruence (F = 32.3709, p < 0.001), but no interaction effect between the two factors (F = 0.0146, p = 0.9855). Post-hoc pairwise t-tests for congruence showed that congruent backgrounds (low: M = 0.7930, SD = 0.0744; mid: M = 0.7070, SD = 0.1211; high: M = 0.6618, SD = 0.1013) led to better object detection performance than incongruent backgrounds (low: M = 0.7571, SD = 0.0744; mid: M = 0.6745, SD = 0.1072; high: M = 0.6268, SD = 0.1219) for low complexity (t = 2.9106, p = 0.0069), medium complexity (t = 2.2538, p = 0.0319) and high complexity (t = 2.7618, p = 0.0099).

Post-hoc pairwise t-tests for complexity showed that for congruent backgrounds, low-complexity backgrounds led to better object detection performance than medium-complexity backgrounds (t = 5.9823, p < 0.001), and medium-complexity backgrounds led to better performance than high-complexity backgrounds (t = 3.6716, p < 0.001). For incongruent backgrounds, low-complexity backgrounds led to better performance than medium-complexity backgrounds (t = 5.1692, p < 0.001), and medium-complexity backgrounds led to better performance than high-complexity backgrounds (t = 2.7975, p = 0.0091).
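The two-factor analysis could be sketched the same way, assuming the hypothetical long-format file now also carries a 'complexity' column (low / mid / high):

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("results_long.csv")  # hypothetical file, see above

# Two-way repeated-measures ANOVA: main effects of congruence and
# complexity plus their interaction term.
aov2 = pg.rm_anova(data=df, dv="accuracy",
                   within=["congruence", "complexity"],
                   subject="participant")

# Post-hoc pairwise t-tests for congruence within each complexity level.
posthoc2 = pg.pairwise_ttests(data=df, dv="accuracy",
                              within=["complexity", "congruence"],
                              subject="participant")
print(aov2)
print(posthoc2)
```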

DCNNs

Figure 6: DCNN performance per background congruence. Object detection performance of ResNet 18, 34, 50 and 101 for segmented, congruent and incongruent backgrounds.

Within each background congruence category, performance improved for all conditions as network depth increased. Segmented objects yielded the best object detection performance, followed by congruent and then incongruent backgrounds.

Figure 7: DCNN performance per complexity and congruence. Object detection performance of ResNet 18, 34, 50 and 101 per complexity and congruence condition.

Across background complexity and congruency, network performance increased for all groups as network depth increased. Congruent backgrounds showed the best object detection performance, followed by incongruent backgrounds. Furthermore, low-complexity backgrounds showed the best object detection performance, followed by medium- and then high-complexity backgrounds.


Figure 8: DCNN congruent/incongruent performance difference per background complexity. Object detection performance difference between congruent and incongruent backgrounds for ResNet 18, 34, 50 and 101 per complexity. As complexity increased, the performance difference between congruent and incongruent backgrounds became slightly smaller. No systematic pattern across network depths was observed.

Exploratory analysis

Indoor/outdoor comparison

Figure 9: Object detection performance per indoor/outdoor background and no background. Object detection performance for the object detection task per indoor/outdoor background and the no-background condition. Error bars indicate standard deviation. The no-background condition showed better object detection performance than either background type, while indoor and outdoor backgrounds did not differ. The following significance indicators are used: n.s. = p > 0.05, * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

A one-way repeated measures ANOVA (n = 30) was conducted to investigate a main effect between indoor backgrounds, outdoor backgrounds and no background (indoor: M = 0.7047, SD = 0.0948; outdoor: M = 0.7009, SD = 0.0915; no background: M = 0.8838, SD = 0.0476). No outliers deviating more than three times the SD were present, so none were removed. The assumptions of sphericity (p = 0.0386) and normality were met. The ANOVA showed a main effect (F = 171.51, p < 0.001). Post-hoc t-tests showed better performance for no background than for indoor backgrounds (t = 14.08, p < 0.001), whereas no difference between indoor and outdoor backgrounds was found (t = 0.4507, p = 0.6556).


Object/animal comparison

Figure 10: Object detection performance per animal and object. Mean object detection performance for animals and objects. Error bars indicate standard deviation. Object detection performance is better for objects than for animals. The following significance indicators are used: n.s. = p > 0.05, * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

No outliers deviating more than three times the SD were present, so none were removed, and the assumption of normality was met. A t-test (n = 30) showed that object detection performance was better for objects (M = 0.7891, SD = 0.0701) than for animals (M = 0.6926, SD = 0.0973) (t = -7.6664, p < 0.001).

Bayesian interaction analysis for background congruence and complexity

Table 1: Bayesian two-factor repeated measures ANOVA for congruence and complexity. Bayesian model comparison for congruence and complexity; P(M), P(M|data), BF_M, BF_10 and the error are reported for the different models.

A Bayesian two-factor repeated measures ANOVA (n = 30) was conducted to investigate the interaction between congruence and complexity. All assumptions were met. A Bayes factor of 0.1014 for the interaction between congruence and complexity can be derived from Table 1, indicating substantial evidence against an interaction.

Discussion

In this study the effect of background information on object detection performance was investigated; more specifically, whether background congruence and background complexity show an interaction effect. In line with previous findings (Davenport & Potter, 2004; Seijdel et al., 2019), segmented objects showed the best object detection performance, followed by congruent and then incongruent backgrounds in the object recognition task. Contrary to the predicted increase in the performance difference between congruent and incongruent backgrounds for medium-complexity backgrounds, no interaction between complexity and congruence was found. Nor was the performance increase for medium-complexity backgrounds reported by Groen et al. (2013) replicated, although that finding concerned only natural scenes. In addition, the exploratory Bayesian statistics provide evidence against an interaction between complexity and congruency. Furthermore, the DCNNs even showed a decrease in the performance difference between congruent and incongruent backgrounds as complexity increased.

In conclusion: no interaction effect between background congruence and background complexity was found, object detection performance decreased as complexity increased, and congruent backgrounds led to better object detection performance than incongruent backgrounds.

An explanation could be that as complexity increases, the proposed figure-ground segmentation becomes harder to perform because the difference between object features and background features decreases. Given the current findings, the briefly perceived scene information is likely used in a low-level, feature-associated manner rather than a high-level, content-driven manner: more relevant background content does not increase the performance difference between congruent and incongruent scenes as complexity increases, yet a performance difference between congruent and incongruent backgrounds is present at all complexity levels. This is consistent with the hypothesized segmentation by additional processing for more complex images, although the additional processing in this study is likely not used in a content-driven way; this remains speculation. The DCNN results seem to support this explanation: as network depth increases, the change in the congruent/incongruent performance difference over complexity does not differ. In DCNNs, greater depth, and thus more advanced feature extraction, does not alter this pattern; the difference between congruent and incongruent scenes thus likely emerges from something other than advanced feature extraction.

In contrast to the hypothesized increase in the object detection performance difference between congruent and incongruent scenes as complexity increases, the DCNNs showed a slight decrease. Since advanced feature extraction likely does not account for this difference, an explanation could be that as complexity increases, the background information becomes so cluttered that only cues from the object itself remain useful.

The speculated lack of high-level processing could be investigated in a similar experiment with backgrounds rated for how congruent or incongruent they are, giving more insight into how much congruent or incongruent information they actually contain, rather than taking complexity as a measure of the amount of information; this is, however, beyond the scope of this paper. In addition, the masks shown briefly after the stimulus are thought to prevent recurrent processing, which has been suggested to underlie high-level feedback (Groen et al., 2018); this could also account for the discrepancy with the hypothesized results. Future research could also take into account the influence of object size relative to the background, since the stimulus set used contained objects that were unnaturally large or small relative to their backgrounds compared with their real-world sizes.

Although these results are informative, more research is needed to clarify how exactly background information is used for object detection.


References

Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience. https://doi.org/10.1162/089892903321662976

Davenport, J. L., & Potter, M. C. (2004). Scene consistency in object and background perception. Psychological Science. https://doi.org/10.1111/j.0956-7976.2004.00719.x

DiCarlo, J. J., & Cox, D. D. (2007). Untangling invariant object recognition. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2007.06.010

Gojare, S., Joshi, R., & Gaigaware, D. (2015). Analysis and design of Selenium WebDriver automation testing framework. Procedia Computer Science. https://doi.org/10.1016/j.procs.2015.04.038

Groen, I. I. A., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2012). Spatially pooled contrast responses predict neural and perceptual similarity of naturalistic image categories. PLoS Computational Biology, 8(10). https://doi.org/10.1371/journal.pcbi.1002726

Groen, I. I. A., Ghebreab, S., Prins, H., Lamme, V. A. F., & Scholte, H. S. (2013). From image statistics to scene gist: Evoked neural activity reveals transition from low-level natural image structure to scene category. Journal of Neuroscience, 33(48), 18814–18824. https://doi.org/10.1523/JNEUROSCI.3128-13.2013

He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2018). Mask R-CNN. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2018.2844175

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2016.90

Hochstein, S., & Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron. https://doi.org/10.1016/S0896-6273(02)01091-7

Julesz, B. (1981). Textons, the elements of texture perception, and their interactions. Nature. https://doi.org/10.1038/290091a0

Kietzmann, T. C., McClure, P., & Kriegeskorte, N. (2019). Deep neural networks in computational neuroscience. In Oxford Research Encyclopedia of Neuroscience. https://doi.org/10.1093/acrefore/9780190264086.013.46

Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences. https://doi.org/10.1016/S0166-2236(00)01657-X

Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., … Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. Lecture Notes in Computer Science, 8693, 740–755. https://doi.org/10.1007/978-3-319-10602-1_48

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. https://doi.org/10.1016/0022-2496(83)90030-5

Rajaei, K., Mohsenzadeh, Y., Ebrahimpour, R., & Khaligh-Razavi, S. M. (2019). Beyond core object recognition: Recurrent processes account for object recognition under occlusion. PLoS Computational Biology, 15(5), 1–30. https://doi.org/10.1371/journal.pcbi.1007001

Rousselet, G. A., Joubert, O. R., & Fabre-Thorpe, M. (2005). How long to get to the "gist" of real-world natural scenes? Visual Cognition. https://doi.org/10.1080/13506280444000553

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., … Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211–252. https://doi.org/10.1007/s11263-015-0816-y

Seijdel, N., Tsakmakidis, N., de Haan, E., Bohte, S., & Scholte, S. (2019). Implicit scene segmentation in deeper convolutional neural networks. https://doi.org/10.32470/ccn.2019.1149-0

Serre, T., Oliva, A., & Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences of the United States of America. https://doi.org/10.1073/pnas.0700622104

Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology. https://doi.org/10.1016/0010-0285(80)90005-5

Wolfe, J. M., Võ, M. L. H., Evans, K. K., & Greene, M. R. (2011). Visual search in scenes involves selective and nonselective pathways. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2010.12.001

Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN database: Large-scale scene recognition from abbey to zoo. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3485–3492.
