University of Groningen

Computer vision techniques for calibration, localization and recognition Lopez Antequera, Manuel

DOI: 10.33612/diss.112968625

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.

Document Version

Publisher's PDF, also known as Version of record

Publication date: 2020

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Lopez Antequera, M. (2020). Computer vision techniques for calibration, localization and recognition. University of Groningen. https://doi.org/10.33612/diss.112968625

Copyright

Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).

Take-down policy

If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. For technical reasons the number of authors shown on this cover page is limited to 10 maximum.

Propositions

accompanying the dissertation

Computer Vision Techniques for Calibration, Localization and Recognition

by

Manuel López Antequera

First

Learning-based methods can exploit subtle cues to predict a camera's intrinsic parameters as well as its orientation with respect to the local gravity vector from a single image, outperforming traditional two-stage approaches that rely on detecting explicit geometric patterns such as lines or vanishing points.
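As an illustrative sketch only (not the implementation developed in the thesis), the snippet below shows the general shape of such a learning-based approach: a generic convolutional backbone regresses a focal length together with tilt and roll angles from a single image. The ResNet-18 backbone and the three-parameter output are assumptions made for this example.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class SingleImageCalibrationNet(nn.Module):
    """Regress camera parameters directly from one image (illustrative only)."""
    def __init__(self):
        super().__init__()
        backbone = resnet18()  # generic, untrained feature extractor
        # Replace the classification head with a 3-value regression head;
        # assumed outputs are [focal length, tilt angle, roll angle].
        backbone.fc = nn.Linear(backbone.fc.in_features, 3)
        self.net = backbone

    def forward(self, image):
        return self.net(image)

model = SingleImageCalibrationNet()
dummy = torch.randn(1, 3, 224, 224)        # a single RGB image
focal, tilt, roll = model(dummy)[0]        # untrained, so values are arbitrary
print(float(focal), float(tilt), float(roll))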

Second

Learning-based techniques can be used to train low-dimensional representations for whole images that are discriminative with respect to the location of the camera, while being invariant to unrelated effects such as different illumination and weather conditions.
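A minimal sketch of how such a compact, appearance-invariant descriptor can be trained is shown below, using a triplet loss on images from the same and from different places; the 128-dimensional embedding, the backbone and the margin value are assumptions made for illustration, not the architecture used in the thesis.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class PlaceEmbeddingNet(nn.Module):
    """Map a whole image to a compact, unit-length descriptor (illustrative only)."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = resnet18()
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.net = backbone

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

model = PlaceEmbeddingNet()
loss_fn = nn.TripletMarginLoss(margin=0.1)
# anchor/positive: the same place under different conditions; negative: another place.
anchor, positive, negative = (torch.randn(4, 3, 224, 224) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()  # pulls same-place descriptors together, pushes others apart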

Third

The response of image-level representations with respect to changes in the pose of a camera can be modeled using a Gaussian Process. It can then be used as an observation model for a particle filter, enabling robust online visual localization using image-level descriptors.
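The sketch below illustrates the general idea with off-the-shelf components: a Gaussian Process is fitted to pairs of known poses and descriptor values, and its predictive distribution is used to weight particle hypotheses. The two-dimensional pose, the one-dimensional descriptor summary and the RBF kernel are simplifying assumptions for this example, not the model used in the thesis.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.stats import norm

rng = np.random.default_rng(0)

# Training data: known 2D poses and a (here 1D) summary of the image descriptor.
train_poses = rng.uniform(0.0, 10.0, size=(50, 2))
train_descr = np.sin(train_poses[:, 0]) + 0.1 * rng.standard_normal(50)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(train_poses, train_descr)

# Observation model for a particle filter: the likelihood of the currently
# observed descriptor value under the GP prediction at each particle's pose.
particles = rng.uniform(0.0, 10.0, size=(200, 2))   # hypothesized poses
observed = 0.3                                      # descriptor value seen now
mean, std = gp.predict(particles, return_std=True)
weights = norm.pdf(observed, loc=mean, scale=std + 1e-6)
weights /= weights.sum()                            # normalized particle weights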

Fourth

Contemporary machine learning techniques might achieve surprisingly good results when used as black boxes; however, proper use of domain knowledge will make the difference between good results and excellent results.

Fifth

Don’t learn what you already know: It’s a waste of time and energy to use learning-based techniques to model a system if we can already do so effectively and efficiently with other techniques.

Sixth

Excellence rarely occurs in isolation. An association of competent individuals will produce far greater results than what could be accomplished separately.
