Scenario based SLAM

N/A
N/A
Protected

Academic year: 2021

Share "Scenario based SLAM"

Copied!
75
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst


M.Sc. Thesis Kasper van Zon

University of Twente
Department of Electrical Engineering, Mathematics & Computer Science (EEMCS)
Signals & Systems Group (SAS)
P.O. Box 217, 7500 AE Enschede, The Netherlands

Report Number: SAS2011-13
Report Date: August 15, 2011
Period of Work: 01/09/2011 -- 25/08/2011

Thesis Committee:
Prof. Dr. ir. C.H. Slump
Dr. ir. F. van der Heijden
Ir. G. Subramanian


Contents

1 Introduction 3
1.1 Background 3
1.2 The goal of this research 3
1.3 Previous work 3
1.4 Thesis outline 5

2 Implementation 6
2.1 Simultaneous Localization and Mapping 6
2.2 Choosing the proper SLAM implementation 9
2.3 The infrared sensors 12
2.4 The scenario based approach 13

3 Realization 17
3.1 Motion and observation models 17
3.2 Feature initialization 22
3.3 Improving the consistency of EKF-SLAM 33
3.4 Realization of Scenario based SLAM 36
3.5 Object Oriented approach 39

4 Experiments 43
4.1 Objectives 43
4.2 Setup 43
4.3 Results 45
4.4 Discussion 57

5 Conclusions & Recommendations 59
5.1 Conclusions 59
5.2 Recommendations 59

A Theoretical backgrounds 61
A.1 State estimation 61
A.2 The Kalman filter 63
A.3 The Extended Kalman filter 65
A.4 Particle filters 66

B Class descriptions 70
B.1 The cRobot class 70
B.2 The cWall class 70
B.3 The cSensor class 70
B.4 The cSLAM class 71
B.5 The cFeature class 72


1 Introduction

1.1 Background

At Philips Consumer Lifestyle in Drachten, research is being performed on autonomous vacuum cleaning robots. The efficiency of these mobile robots will improve if the system is able to navigate through an unknown indoor environment. This ability requires a type of robot learning known as SLAM (Simultaneous Localization and Mapping). SLAM looks like a chicken-and-egg problem: for localization, a map of the environment is needed, and for creating a map, localization is needed. Fortunately, the problem can be solved by recursive Bayesian estimation. This is a method which iteratively updates the estimates of the unknown parameters each time a new observation becomes available.

SLAM relies on the use of accurate sensory systems. In many applications, scanning laser range finders are used, but these devices are bulky and very expensive. For this reason these sensors are not suitable for small robots meant for the consumer market. An alternative is the infrared (IR) range finder. This sensor is often more than a hundred times cheaper, but this comes at the cost of a very limited measurement range and much more measurement noise.

1.2 The goal of this research

Philips Consumer Lifestyle would like to know if a small mobile robot is able to perform SLAM when it can only observe its environment using IR sensors. As an extra challenge, the number of infrared range finders is limited to two. It can be seen from Figure 1 that the use of this very limited number of range finding sensors leads to the following problem: where a scanning laser range finder is able to see a complete 270 degree view of its environment, an IR range finder can only measure the distance to a single point. As a result, the data from the infrared sensory system will be very sparse and it will give ambiguous information about the environment. For this reason the classical point feature based SLAM approach will not suffice.

The goal of this research was to develop a new type of SLAM, called Scenario Based SLAM. In this approach the robot's environment is modeled as a collection of wall and corner features, and the ambiguities in the identification of these map features are resolved by statistical hypothesis testing. The concept of SLAM with line segment features is not completely new but, as far as we know, we are the first to perform SLAM with statistical probability tests of different scenarios. Moreover, we are also the first to implement SLAM using only two IR sensors. In this thesis we present a proof of concept of this method.

1.3 Previous work

The simultaneous localization and mapping problem is one of the most fundamental problems in robotics [30]. For this reason, in the past two decades, this subject has received immense attention from the robotics community. The number of papers written about this subject is vast, and there is a variety of excellent books and tutorials available on this topic [30, 4, 13].

In the classical point feature based range-bearing SLAM approach a scanning laser range finder is used, which measures the distance (range) and orientation (bearing) of a landmark relative to the vehicle's location. The map of the vehicle's surroundings is represented as a discrete set of landmark points.

In the past decade SLAM has been applied using other types of sensing systems. In most of these cases the laser range finder is replaced by a high-end camera. This results in bearing-only SLAM, because a single camera cannot observe depth.


Figure 1: Using a single scan from a scanning laser range finder, a robot is able to estimate the dimensions of an object in its proximity. In contrast, a robot equipped with two infrared sensors can only detect an object if it is in the line of sight of at least one of its sensors. Moreover, the data given by the sensors is too sparse to enable the robot to distinguish between different types of objects.

1.3.1 Related Work

Vision based SLAM and laser range finder based SLAM techniques have been proven to be successful and they are now used in many real applications. The problem with these techniques, however, is that they heavily depend on the use of an expensive sensory system. Recently, a number of researchers have proposed SLAM solutions using relatively low cost cameras, sonar devices [20, 15, 27] and infrared range finders [8, 1, 11, 16, 23]. In contrast to laser range finders, sonar sensors and infrared range finders can only measure a limited range. Sonar sensors can measure a larger range than infrared sensors but, due to the width of the sonar beam, the angular uncertainty of these devices is very high (up to 30 degrees). The application of sonar will therefore result in range-only SLAM.

Sparse sensing

In the past five years, several researchers have studied the problems of SLAM with sparse sensing. In most of this work the spatial resolution of the sensory system is increased by constructing an array of sonar [32] or infrared sensors [8, 1, 11]. In all of these methods the minimum required number of distance measuring sensors was at least 7.

Recently Gifford et al. [16] implemented a SLAM algorithm which uses an IR scanning sensor. This sensor consists of an array of IR sensors mounted on a servo motor. In a similar manner Magnenat et al. [23] mounted 24 IR sensors on a rotating disk to make a 360 degree scanning device.

To further improve the sparse sensing a method called Multiscanning was proposed [8, 32]. This method combines consecutive measurements to achieve a higher data density. The downside of this method, however, is that it comes at the cost of greater measurement uncertainty due to higher odometric errors.


Map representation

Most SLAM algorithms make few assumptions about the environment. The classical point feature based approach does not take advantage of prior information when the environment is known to have specific structural characteristics. It can, however, be assumed that the environment of a robot designed to operate indoors is roughly rectilinear. Using this knowledge, a number of different line based SLAM algorithms have been implemented [33, 14, 25]. Although these line based SLAM implementations were successful, their main goal was to lower the computational load by reducing the number of map features. Moreover, these implementations still relied on the use of accurate sensing systems.

Except for the work of Gifford et al. [16] and Magnenat et al. [23], where occupancy grids were used, in practically all other studies of SLAM with limited, sparse and noisy sensing, line features were used as the basic obstacle representation [8, 7, 11, 32]. These studies provide supportive evidence for the potential viability of SLAM with sparse sensing but, in contrast to our work, they required a relatively large number of sensors and Multiscanning.

1.4 Thesis outline

In this thesis a proof of concept of Scenario Based SLAM is presented. The following chapter provides information on how the SLAM problem is solved in practice. It explains why, in our implementation, we prefer the use of the Extended Kalman filter, and it presents the outline of the scenario based SLAM approach.

The third chapter of this thesis contains information on how we have realized Scenario Based SLAM. It presents the models we used, the choices we made and the methods we used to generate our proof of concept.

In chapter four a number of experiments and their results are presented. These experiments are used to validate the concept of Scenario Based SLAM using simulated data. The fifth and final chapter of this thesis provides the conclusions and gives a number of recommendations for further research.


2 Implementation

In this chapter we will present an overview of the Simultaneous Localization and Mapping problem, along with the two most common feature based SLAM approaches from the literature. In the second section of this chapter we will explain why, in our approach, we prefer the SLAM algorithm based on the Extended Kalman Filter. Afterwards we will go into detail on the problems which arise when performing SLAM with infrared sensors, and finally we will present our scenario based SLAM approach.

2.1 Simultaneous Localization and Mapping

Simultaneous Localization and Mapping enables a mobile robot to build a map of an unexplored environment while simultaneously using this map to localize itself. There are numerous approaches to solve the SLAM problem [30]. The most common approaches can be classified into grid based and feature based SLAM approaches.

In the grid based approaches the environment is divided into a grid of cells of fixed size. Each cell contains the probability of being occupied by an object. The major advantage of grid based SLAM is that it is intuitively simple and easy to implement. The drawbacks are that these approaches have a high computational cost and a high storage requirement. Moreover, they have a weak theoretical foundation. For these reasons we did not consider grid based SLAM in our research.

Feature based SLAM approaches use easily identifiable elements in the environment, such as planes, corners and edges in an indoor environment, and build an internal representation (map) with the locations of these landmarks. The sensors used in this approach produce dense information, from which these simple features are extracted.

In the feature based approach both the robot pose and the feature coordinates are constantly updated using recursive Bayesian estimation. This is a probabilistic approach where the robot estimates the posterior probability of its location and the map, given the information it observes.

2.1.1 The SLAM principle

The basic principle of SLAM is that the robot vehicle is able to make a prediction about its location, using the information from its wheel encoders (odometry) and a kinematic model of the vehicle's movement. If it knows its starting point, it can iteratively estimate the vehicle's trajectory by integrating the wheel encoder information. At the same time, due to wheel slippage, modeling errors and other disturbances, the uncertainty of the estimated path will grow as time proceeds.

While driving around, the robot vehicle will observe its surroundings. As mentioned in section 1.3, in the classical feature based SLAM approach the map of the surrounding environment is modeled as a set of point landmarks, so the map consists of a collection of x- and y-coordinates. When the vehicle observes a landmark for the first time, it registers this landmark relative to its own location.

When using a scanning laser range finder the vehicle measures the relative distance (range) and the relative orientation (bearing) to this landmark, using a model which relates the measurements to the robot pose. Due to the occurrence of measurement noise, errors in the observation model and the fact that the vehicle has an uncertain estimate of its location, the landmark location will also have an uncertainty.

After observing a landmark the vehicle will proceed to explore the rest of the environment. During this phase the uncertainty of the vehicle's estimated location will grow. If, after a while, the vehicle closes a loop and re-observes a known landmark, the vehicle can update the uncertain estimate of its own location with the more accurate knowledge about the landmark location. This is in fact a two way process: if the estimate of the vehicle location is more accurate than the estimated landmark location, the SLAM process updates the map.


Figure 2: Point feature based SLAM. The robot has a pose [x_r, y_r, θ_r] and measures the relative range and bearing to a point beacon. The ellipses represent the uncertainty over the robot pose and the map feature location; the robot's orientation uncertainty is indicated as well.


A graphical representation of the point feature based SLAM approach is given in Figure 2.

2.1.2 SLAM algorithms

Currently there are two main algorithms to solve the feature based SLAM problem [13, 4]. The first approach is based on the Extended Kalman filter. The second approach is called FastSLAM [29, 30, 24] and it uses Rao-Blackwellized Particle filtering (RBPF), which is a combination of Extended Kalman filtering and Particle filtering. The theoretical backgrounds of these recursive state estimation methods are presented in Appendix A.

In both methods the robot vehicle's position (x_r, y_r) and its orientation θ_r are represented as a pose state vector x_r = [x_r, y_r, θ_r]^T. In addition, x_m is the map state vector, containing the coordinates [x_1, y_1, ..., x_n, y_n]^T of the n observed landmarks. The vector z_k contains the information about the map observed by the robot at time k. The control vector u_k is applied at time k to drive the robot to its next vehicle state at time k+1. The vehicle state transition is assumed to be a Markov process and it is independent of both the observations and the map.

The EKF SLAM algorithm and FastSLAM are both recursive Bayesian estimators. As explained in Appendix A, recursive Bayesian estimation consists of two stages: the prediction stage and the update stage. In the prediction stage a transition function x̂_{r,k+1} = f(x_{r,k}, u_k) is used to predict the next vehicle pose, given the current pose x_{r,k} and the control signal u_k. In the update stage an observation function h(x_k) is used to update the current vehicle pose estimate and map estimate with the information gained while measuring the vehicle's environment.

EKF SLAM

The EKF SLAM algorithm is the easiest SLAM algorithm to implement [30, 13]. As any EKF algorithm, EKF SLAM makes a Gaussian noise assumption for robot motion and perception. At each time instance k it estimates the posterior density p(x_k | z_{1:k}, u_{k−1}), where x_k = [x_{r,k}^T, x_{m,k}^T]^T is a combined state vector containing the robot pose and the parameters of all observed landmarks. Due to the Gaussian assumption, the estimated posterior density can be completely represented by its mean x̂_{k|k} = E[x_k | z_{1:k}, u_{k−1}] and its covariance matrix P_{k|k}. This covariance matrix represents the uncertainty of the estimate.

The basis of the EKF-SLAM approach is to describe the robot's motion in the following form:

p(x_{r,k} | x_{r,k−1}, u_{k−1}) ⟺ f(x_{r,k−1}, u_{k−1}) + v_k, (1)

where f() models the vehicle kinematics and v_k are additive, zero mean, uncorrelated Gaussian motion disturbances with covariance Q_k. This means that in the prediction phase the robot uses a motion model f() and the control signal u_k to predict its next pose.

The observation model describes the probability of making an observation z_k when the vehicle pose and the landmark locations are known. This observation model is described by

p(z_k | x_k) ⟺ h(x_k) + w_k, (2)

where h() describes the geometry of the observation and where w_k are additive, zero mean, uncorrelated Gaussian observation errors with covariance R_k.

With these definitions the standard EKF algorithm can be applied. A detailed description of this algorithm is given in Appendix A.3.
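As an illustration of these definitions, the fragment below sketches one prediction/update cycle of the EKF over the combined state. It is a minimal Python sketch for illustration only (the implementation of this thesis is written in Matlab); the function and parameter names are ours, not the thesis's:

```python
import numpy as np

def ekf_slam_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle over the joint state, following Eqs. (1)-(2).

    x, P         : combined state estimate [pose; map] and its covariance
    u, z         : control input and measurement
    f, h         : motion and observation functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R         : motion and observation noise covariances
    """
    # Prediction stage: propagate the estimate through the motion model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update stage: correct the prediction with the observation.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```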

FastSLAM

FastSLAM is a SLAM algorithm that integrates particle filters and extended Kalman filters. The algorithm is based on the fact that, assuming the environment is static, the correlations in the uncertainty among different map features arise only through the robot pose uncertainty. This means that if the robot knew its correct path x_{r,0:k} = {x_{r,0}, x_{r,1}, ..., x_{r,k}}, the errors in its feature estimates would be independent of each other. The result of this property is that the posterior density p(x_{r,0:k}, x_m | z_{0:k}, u_{0:k−1}) can be factored into a term that estimates the probability of a path and N terms that estimate the parameters of the map features, conditional on each (hypothetical) path:

p(x_{r,0:k}, x_{m,k} | z_{0:k}, u_{0:k−1}) = p(x_{r,0:k} | z_{0:k}, u_{0:k−1}) ∏_{i=1}^{N} p(x_{m,i} | x_{r,0:k}, z_{0:k}) (3)

In this equation, the posterior p(x_{r,0:k} | z_{0:k}, u_{0:k−1}) is similar to the localization problem, since only the path of the vehicle needs to be estimated. FastSLAM samples the path using a particle filter. In addition, each particle has its own map, consisting of N Extended Kalman filters. The term p(x_{m,i} | x_{r,0:k}, z_{0:k}) can be computed efficiently since the poses x_{r,0:k} of the robot are known when estimating the map x_m.

The original FastSLAM algorithm is called FastSLAM 1.0. It is conceptually simple and easy to implement, because the samples can be generated directly from the motion model. In certain situations, however, the particle filter component of FastSLAM 1.0 generates these samples inefficiently [30, 24]. When the odometric noise is larger than the sensor noise, the diversity of the particles gets lost due to the frequent resampling. This means that a great part of the samples become identical for most of the robot's trajectory. Due to the resulting low variance in the particles, the filter will underestimate its own uncertainty and can become inconsistent.

A solution to this problem is to incorporate the current sensor measurement z_k into the proposal distribution, as explained in the section about Particle filtering. This means that we sample from p(x_{r,k} | x_{r,0:k−1}, u_{k−1}, z_k) instead of only from p(x_{r,k} | x_{r,0:k−1}, u_{k−1}). This idea is the basis of FastSLAM 2.0.
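To make the factorization of Equation 3 concrete, the following fragment sketches the resulting data layout and a FastSLAM 1.0 style weighting/resampling step. This is a schematic Python illustration under our own naming and noise assumptions, not the full algorithm of [29, 30, 24]:

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

class Particle:
    """One path hypothesis carrying its own map of independent feature EKFs."""
    def __init__(self, pose, features):
        self.pose = np.asarray(pose, float)   # [x, y, theta]
        self.features = features              # list of (mean, cov) pairs
        self.weight = 1.0

def fastslam1_step(particles, u, z, sample_motion, h, H_jac, R, feat_idx=0):
    """Sample poses from the motion model, weight each particle by the
    measurement likelihood, then resample (FastSLAM 1.0 style)."""
    for p in particles:
        p.pose = sample_motion(p.pose, u)     # draw from p(x_k | x_{k-1}, u)
        m, C = p.features[feat_idx]           # the feature being observed
        H = H_jac(p.pose, m)
        S = H @ C @ H.T + R                   # innovation covariance
        v = z - h(p.pose, m)                  # innovation
        p.weight = np.exp(-0.5 * v @ np.linalg.solve(S, v)) \
                   / np.sqrt(np.linalg.det(2 * np.pi * S))
        # (the per-feature EKF update of (m, C) would follow here)
    w = np.array([p.weight for p in particles])
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    # Resampling duplicates likely particles and discards the rest; this is
    # exactly the mechanism that erodes particle diversity over time.
    return [copy.deepcopy(particles[i]) for i in idx]
```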

2.2 Choosing the proper SLAM implementation

In the previous section, we presented two methods to solve the simultaneous localization and mapping problem. Although EKF-SLAM and FastSLAM have been proven to be very successful, both methods have a number of drawbacks. In this section we will evaluate both methods to determine which one is most appropriate to use in our scenario based SLAM approach.

2.2.1 Consistency

Consistency is one of the primary criteria for evaluating the performance of any estimator. Under the linear-Gaussian assumption the pair (x̂, P̂) is a consistent estimate of the first two statistical moments of the random variable x if the following two conditions are satisfied:

1. the estimation errors are zero-mean (unbiased): E[x − x̂] = 0

2. the estimation errors have a covariance matrix smaller than or equal to the one calculated by the filter: E[(x − x̂)(x − x̂)^T] ≤ P̂

In SLAM inconsistent es mates can lead to big problems. An example of this is given in Figure 3. In this ex-

ample the es mate of the robot pose is inconsistent. A er a measurement update the map feature es mate

will also get inconsistent. When the robot then con nues to drive around and updates each subsequent map

feature es mate, the map will converge to a wrong representa on of the robots environment. Moreover, the

robot localiza on will also fail. In the next two subsec ons we will discuss the consistency problems of both EKF

SLAM and FastSLAM.


Figure 3: The estimate of the robot pose is overconfident: the estimated pose uncertainty is smaller than the estimation error. After the measurement update the estimate of the map feature will also become inconsistent. (Left panel: before the update; right panel: after the update.)

Consistency of EKF SLAM

In the past decade a number of researchers have examined the consistency of EKF SLAM. In empirical studies they have shown that EKF SLAM will always lead to inconsistent estimates [22]. The degree of inconsistency does, however, depend on a number of factors. These factors are explained in the following paragraphs.

One of the causes of inconsistency in EKF SLAM can be easily explained: the Extended Kalman filter presumes that both the measurement noise and the system noise are Gaussian. In addition it linearizes the non-linear motion and observation models so that it can approximate the mean and variance of the estimates of the map and of the robot pose. In reality these approximations may not accurately match the true first and second moments. Moreover, the true probability density is non-Gaussian, so even the true mean and variance will not fully describe the true density. As a result, over a sequence of motions and measurements, all these approximation errors accumulate and the SLAM estimates become inconsistent.

The inconsistencies caused by the effects explained above can be reduced by using better approximation methods such as the iterated EKF or the Unscented Kalman filter. Another method to reduce the effects of approximation errors is to artificially inflate the estimated covariance [26, 5]. A number of researchers [2, 10, 5, 19, 17, 18] have however shown that, even if the approximation errors are minimized, the results of EKF SLAM are still inconsistent.

In an experiment where a stationary robot measures the relative position of a new landmark multiple times, it was shown that the estimated variance of the robot orientation σ_θ² decreases unexpectedly [17, 28]. Since the sensor always measures the relative position of the feature with respect to the robot, the state vector of the robot pose and its covariance should remain unchanged. Bailey et al. [5] show that the degree of this inconsistency depends on the uncertainty of the robot orientation. If the orientation variance is small this effect is less notable. For this reason they suggest that, in order to improve consistency, the uncertainty of the robot orientation should be kept small. This can for instance be accomplished by using a gyroscope.

In [28] Tamjidi shows that the level of inconsistency also depends on the choice of observation model. Bearing-only models lead to a more consistent result than range-bearing or range-only models. In another paper Hui-Ping et al. [22] prove that the measurement noise and the observation time also play an important role in the consistency of EKF SLAM. They show that with an increase in the value of the measurement noise, the degree of inconsistency will decrease. In addition they show that, by increasing the observation time, the algorithm will inevitably become inconsistent. This inconsistency will however reach an upper bound. They also conclude that the degree of improvement to consistency by increasing the measurement noise is greater than the degree of damage to consistency by increasing the observation time.

Huang et al. [19, 17, 18] further analyzed the occurrence of a sudden decrease in the estimated orientation variance. They analyzed the observability of the standard EKF-SLAM and that of the ideal EKF SLAM, where all Jacobians are calculated at the true state. They showed that the unobservable subspace of the standard EKF SLAM does not match that of the ideal EKF SLAM and, as a consequence, the estimated orientation covariance is mistakenly reduced when actually no information is injected into the system. To deal with this problem they presented two improvements to the standard EKF SLAM method. One of these improved methods will be discussed in the next chapter.

Consistency of FastSLAM

Like the EKF SLAM method, FastSLAM also has problems with consistency. In [6] Bailey et al. showed that in the short term FastSLAM may produce consistent uncertainty estimates, but in the long term the state estimates become overconfident. This inconsistency is caused by the frequent resampling [21]. Every time a particle is lost due to resampling, an entire pose history and map hypothesis is lost forever.

In practice FastSLAM can however still produce quite accurate results in terms of deviation from the true state. The final quality of this result is dependent on the sensor precision. As with EKF SLAM, the rate at which FastSLAM becomes inconsistent depends on the amount of sensor noise and the observation frequency. Accurate sensors and more frequent observations will improve the accuracy, but they will also speed up particle depletion.

2.2.2 Computational complexity

EKF SLAM

The computational cost of EKF-SLAM grows quadratically with the number of landmarks. This is because the update step requires that all landmarks and the covariance matrix are updated at every observation. In point feature based SLAM, the number of landmarks will often be in the range of hundreds or even thousands, and this will lead to computational problems. In scenario based SLAM, however, we expect that the number of landmarks will be relatively small. We therefore do not expect to have to cope with this problem of computational complexity.

FastSLAM

Because all map features in FastSLAM are conditionally independent, each particle contains N independent low dimensional EKFs, where N is the number of map features. As a consequence, the algorithm has linear time complexity [30, 21]. This complexity scales, however, with the number of particles that the algorithm uses.

As explained in the previous subsection, FastSLAM suffers from inconsistency due to particle depletion. In the scenario based approach we will most likely make very frequent observations of one single map feature. As a result we expect that the inconsistency of FastSLAM will likely occur. One way to temporarily compensate for this effect is to use a large number of particles. This will however have a very negative effect on the computational complexity of the algorithm.

2.2.3 Data association

In SLAM the loop closing stage is very important, because this is the part where the robot updates its location estimate by using the map. To make this loop closure possible, the robot has to estimate the correspondence of the measured features to the previously observed landmarks in the map. The problem of estimating this correspondence is known as the data association problem, and it is one of the most difficult problems in SLAM.

Figure 4: The principle of triangulation.

In EKF SLAM this data association is usually performed using maximum likelihood estimation [30]. However, once a bad association is made, it cannot be undone. This can cause the filter to fail catastrophically. FastSLAM allows each particle to perform its own data association, which implements multi-hypothesis data association. This ability to simultaneously pursue multiple data associations makes FastSLAM significantly more robust to data association errors than EKF SLAM.

As will be shown in section 2.4, our scenario based SLAM approach also implements multi-hypothesis data association. Therefore, even if we use EKF SLAM, a wrong data association is less likely to get us into trouble. A sketch of the maximum likelihood association step is given below.
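In practice, maximum likelihood data association computes, for each known landmark, the squared Mahalanobis distance of the innovation and accepts the nearest landmark inside a chi-square validation gate. The sketch below is our own Python illustration of this standard gating step (the thesis implementation is in Matlab); the predictions list and the gate value are assumptions of the example:

```python
import numpy as np

def ml_associate(z, predictions, gate=9.21):
    """Pick the landmark whose predicted measurement best explains z.

    predictions : list of (z_hat, S) pairs, one per known landmark,
                  where S is the innovation covariance.
    gate        : chi-square threshold (9.21 ~ 99% for 2 dof).
    Returns the index of the best landmark, or None (treat as new landmark).
    """
    best, best_d2 = None, gate
    for i, (z_hat, S) in enumerate(predictions):
        v = z - z_hat
        d2 = v @ np.linalg.solve(S, v)   # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best
```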

2.2.4 Conclusions

Based on the information above, consistency seems to be the most important criterion for selecting the proper SLAM method. Over time, both SLAM methods will inevitably lead to inconsistent estimates. In FastSLAM, the inconsistency due to particle depletion will speed up when a map feature is observed frequently. As will be shown in the last section of this chapter, the map features used in scenario based SLAM are relatively large and they will therefore be observed very frequently. This would have dramatic consequences for the consistency of the map feature estimates and the robot pose. For this reason we have chosen to implement scenario based SLAM using an Extended Kalman filter.

2.3 The infrared sensors

Infrared range finders are small, low-cost, low-power, and low-weight devices. For these reasons they seem very suitable to use as range finding sensors on small and low-cost robotic devices. Compared to a scanning laser range finder, an IR range finder however has a number of limitations. These limitations are mainly caused by the way these sensors operate.


                        Laser range finder     IR range finder
Maximum range           > 5 m                  < 0.8 m
Maximum field of view   > 180 degrees          -
Angular resolution      < 0.5 degrees          -
Accuracy                < 2% of measurement    > 10% of measurement

Table 1: IR range sensors versus scanning laser range finders.

2.3.1 Operation

The infrared range measuring sensors work on the principle of triangulation. The sensor emits a modulated infrared pulse. A portion of this light is reflected by an object in the sensor's field of view. The reflected light is captured by a position sensitive device (PSD). Before the light hits the PSD, it passes through a lens. The location at which the ray of infrared light hits the PSD depends on the angle of incidence on the lens, which in turn depends on the distance between the object and the lens. A graphical representation of this principle is given in Figure 4.
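For an idealized triangulation sensor this geometry reduces to similar triangles: with emitter-lens baseline B, lens focal length f, and spot displacement d on the PSD, the range is approximately z = f·B/d. The following minimal Python sketch illustrates this inverse relation; the baseline and focal length values are illustrative assumptions, not specifications of any particular sensor:

```python
def ir_range_from_psd(d_spot, baseline=0.02, focal=0.01):
    """Ideal triangulation: range z = f * B / d (similar triangles).

    d_spot   : displacement of the light spot on the PSD [m]
    baseline : emitter-to-lens distance B [m] (illustrative value)
    focal    : lens focal length f [m]        (illustrative value)
    """
    return focal * baseline / d_spot

# The inverse relation explains the limited range: at large distances the
# spot displacement changes very little, so a small PSD error translates
# into a large range error.
print(ir_range_from_psd(0.0010))  # 0.2 m
print(ir_range_from_psd(0.0005))  # 0.4 m: half the displacement, twice the range
```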

2.3.2 The performance of IR range sensors

Range

Due to its limited field of view, the range of the infrared range finder is relatively small. The infrared sensors which are commercially available come in different ranges. Examples of these ranges are 4 - 30 cm, 10 - 80 cm and 20 - 150 cm. It is possible to cover a larger range by coupling multiple sensors with different ranges. This comes at the cost of reduced sensor speed, needed to avoid the occurrence of crosstalk.

Spatial resolution

A scanning laser range finder offers dense coverage of the environment. Typically, it is able to make a 270 degree scan with a spatial resolution of less than one degree. In contrast, an IR range finder can only measure the distance to one single spot. As a result, the information provided by the IR sensor is very sparse and it is not possible to reconstruct an unknown object from a single measurement. An example of this was already given in Figure 1.

One approach to compensate for low-resolution sensing is to sweep the sensor over the environment by mounting it on some actuated mechanism. This approach introduces extra mechanical complexity and costs, and it will therefore not be applied in this research.

Accuracy

The underlying principle of the IR range sensors makes them susceptible to different types of noise. These include interference from ambient light, reflection, absorption, crosstalk, etc. As the measured range of the sensor increases, the sensor readings become less accurate.

A comparison between the performance of the IR range finder and the scanning laser range finder is presented in Table 1.

2.4 The scenario based approach

The sensors which are used in most SLAM applications, such as a high-end camera or a scanning laser range finder, provide a high resolution measurement of the robot's environment. From this dense information a number of distinctive features are extracted. These features act as point landmarks, and range-bearing or bearing-only SLAM is applied.

Figure 5: Two sensors provide ambiguous information. From one measurement, the robot cannot distinguish between the types of objects it sees (straight wall, concave corner, or convex corner).

When using only two infrared sensors, the robot gets very limited information about its surroundings. First of all, it can only sense an object which is in the field of view and within the range of a sensor. Secondly, the information it gets is mostly ambiguous. An example of this is given in Figure 5, where the robot gets two sensor readings. Despite the fact that these readings provide both range and bearing information, the robot is only able to detect that there is an object in its proximity. It cannot distinguish which type of object it sees.

Another problem is that the range measurements from the IR sensors are very noisy, especially when measuring distant objects. As a result, when working with point features, the feature locations will have a large uncertainty. In places where the density of point features is high, the robot will have difficulties distinguishing between the different map features.

Given these limitations, we can conclude that in our situation normal point feature range-bearing SLAM is impossible. We therefore propose a method in which we do not use point features to model the robot's environment, and we will use hypothesis testing to solve the ambiguity problems. We will describe this method by first explaining how we represent the map. Afterwards, we will explain how we build the map.

2.4.1 Map representation

Because a commercial vacuum cleaning robot will most likely operate in man-made indoor environments, we can incorporate prior knowledge of the robot's surroundings. We will assume that an indoor environment is mostly rectilinear. With this assumption we define a set of feature types which act as the building blocks of our map. These building blocks must fit a number of requirements:

1. The map features must be large, so that the sparse sensory system is able to detect them.

2. The map feature types must be discriminative: the robot has to be able to see the difference between them.

3. The number of parameters which describe these map features must be small.

Given these requirements, we have defined three map feature types: straight walls, straight convex corners and straight concave corners.

Wall features

In our approach we model a wall as a straight line. One way to represent a line is by using the slope-intercept form y = ax + b, where a is the slope and b is the y-intercept. This representation is however not very stable, because when a line becomes vertical, the values of a and b go to infinity.


Figure 6: The corner and wall map features and their parameters.

A better representation of a line is therefore the normal form r_w = y cos(φ_w) − x sin(φ_w), where r_w is the distance from the origin of the line segment which is perpendicular to the wall, and φ_w is the orientation of this line with respect to the positive y-axis. For each point (x, y) on the line, the parameters r_w and φ_w are constant. A graphical representation of the parameters is given in Figure 6.

Corner features

The second and third map feature types are convex and concave straight corners. Both of these feature types have three parameters: the first two parameters x_c and y_c describe the coordinates of the corner point. The third parameter φ_c represents the orientation of the corner. This orientation is similar to φ_w, the orientation of a wall. The corner parameters can also be seen in Figure 6. The difference between a convex and a concave corner is visible in Figure 5.

2.4.2 Map building by hypothesis testing

In the previous subsection we introduced the map feature types from which the robot can build the map of its environment. Each of these feature types can occur multiple times on the map. We will call each single representation of a feature on the map a feature instance or map object. Such a feature instance can present itself in different ways to the robot's sensory system. We define a scenario as the way a feature instance manifests itself in the robot's observational system.

There can exist different scenarios for the same feature instance. For example, a straight corner consists of two wall segments. A robot equipped with two range sensors can observe such a corner in different ways: two sensor measurements on the same wall segment, two measurements on different wall segments, etc. Each of these possibilities is considered a different scenario. The presumption of a certain scenario is called a hypothesis.

Because the robot receives ambiguous data about its environment, most of the time it will not know which feature instance it observes. The robot deals with this problem by means of hypothesis testing. Each time a measurement is available, the robot generates a number of hypotheses. Examples of these hypotheses are:


• The robot observes a new wall instance.
• The robot observes a known wall instance.
• The robot observes the left wall segment of a known corner instance.
• The robot observes both wall segments of a known corner instance.
• The robot observes the right wall segment of a known corner instance.

When the robot receives the next measurement, it estimates, for each hypothesis, the likelihood of this measurement given that hypothesis. In this way, the robot checks if a scenario is consistent with the current observation. If not, the hypothesis is rejected. After a number of subsequent measurements, only the most likely scenarios will survive. These surviving scenarios are used to build the map. How this method works in practice is explained in the next chapter; a sketch of the scoring loop is given below.
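The fragment below sketches this generate/score/prune loop in Python (the thesis implementation is in Matlab). The Hypothesis class and its predict() callback are hypothetical names introduced for this example; the actual realization is described in the next chapter:

```python
import numpy as np

class Hypothesis:
    """One scenario, e.g. 'this measurement is the left wall of corner 3'."""
    def __init__(self, predict):
        self.predict = predict   # () -> (z_hat, S): expected measurement + covariance
        self.log_lik = 0.0

def score_and_prune(hypotheses, z, gate=9.21):
    """Accumulate each scenario's measurement log-likelihood and reject the
    scenarios that are statistically inconsistent with the observation z."""
    survivors = []
    for hyp in hypotheses:
        z_hat, S = hyp.predict()
        v = np.atleast_1d(z - z_hat)
        S = np.atleast_2d(S)
        d2 = v @ np.linalg.solve(S, v)        # squared Mahalanobis distance
        if d2 > gate:                         # observation contradicts scenario
            continue                          # -> hypothesis rejected
        hyp.log_lik += -0.5 * (d2 + np.log(np.linalg.det(2 * np.pi * S)))
        survivors.append(hyp)                 # survivors are used to build the map
    return survivors
```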


3 Realization

In this chapter we will describe how we have realized the Scenario based SLAM approach. First we will describe the kinematic motion model and the observation models that are used by the Extended Kalman filter. In the following section we will describe how we can initialize new map features. We will show that initializing new corner features using only two sensors is not an easy task.

The third section of this chapter is about First Estimate Jacobian EKF SLAM, a method which improves the consistency of the estimates of the robot pose and the map features. To make the realization of the scenario based approach more convenient, we used an object-oriented approach in Matlab. This is explained in the fourth section of this chapter. In the fifth and final section we show how we have realized the Scenario based SLAM approach.

3.1 Motion and observation models

As described in Section 2.1.2, the EKF SLAM method consists of a prediction and an update phase. In the prediction phase, the odometric data and a kinematic motion model x_{r,k+1} = f(x_{r,k}, u_k) + v_k are used to make an estimate of the robot pose. In the update phase, the EKF uses sensor data and a measurement model z_k = h(x_k) + w_k to update the estimated robot pose and map features. In this section both of these models will be explained.

3.1.1 The motion model

The literature [30] describes two general robot motion models. The first model assumes that the motion data u_k specifies the velocity commands given to the robot's motors. The second model assumes that one has access to odometry information.

For two reasons we prefer to use the odometric model. The first reason is that, for the sake of computational complexity, we have to keep the state space as small as possible. The odometric information from the wheel encoders actually consists of sensor measurements and not control signals. If we want to model odometry as measurements, however, we have to include both the translational and the rotational velocity as state variables.

The second reason for using the odometric model is that odometric models tend to be more accurate than velocity models [30]. The reason for this is that most commercial robots do not execute velocity commands with the level of accuracy that can be obtained by measuring the revolutions of the robot's wheels.

The odometry is only available after the robot has already moved. Hence it cannot be used for motion planning. As a result, odometry models are usually applied for estimation, whereas velocity models are used for probabilistic motion planning [30].

The odometric model

A vacuum cleaning robot usually consists of two differentially driven wheels, which are mounted in the middle of the robot. In addition it has two small caster wheels at its front and back side. Using this configuration, the odometry can be modeled by a simplified differential steering system [9, 7]. This model does not incorporate acceleration, so it is only valid for relatively light robots.

We will explain the model using Figure 7, where we see a robot changing its pose from x_{r,k} to x_{r,k+1}. By independently rotating both wheels, the robot follows a curved trajectory. By equipping the robot with two wheel encoders, it can measure the number of revolutions of each wheel during a time interval T. Combined with information about the length of the robot's wheel axle b_r and its wheel radius r_r, the robot can compute its wheel displacement Θ_{L,R,k}.


Figure 7: The odometric model: by independently rotating both wheels, the robot changes its pose from x_{r,k} to x_{r,k+1}.


Figure 8: The odometric model: the robot motion in the time interval T is approximated by a rotation (1/2)ω_r T, followed by a translation υ_r T and a second rotation (1/2)ω_r T.

The wheel displacement is

Θ_{L,R,k} = (2π r_r / C) · u_{L,R,k}, (4)

where u_{L,R,k} is the number of pulses generated by the left (L) or right (R) wheel encoder during the time interval T, and C is the wheel encoder resolution, meaning that during one revolution the wheel encoder generates C pulses.

If the time interval T is small, the robot motion can be approximated by a rotation (1/2)ω_r T, followed by a translation υ_r T and a second rotation (1/2)ω_r T [30]. A graphical representation of this approximation is given in Figure 8, where we see the same change in robot pose as in Figure 7.

The translation of the robot is computed using the robot's translational velocity υ_{r,k}. This velocity is approximated by the average wheel displacement of both wheels over the time interval T:

υ_{r,k} = (Θ_{L,k} + Θ_{R,k}) / (2T). (5)

The two rotations are determined from the robot's angular rate ω_{r,k}, which is also obtained from the left and right wheel displacements:

ω_{r,k} = (Θ_{L,k} − Θ_{R,k}) / (b_r T). (6)

It can be seen from Figure 8 that the current pose x_{r,k+1} is obtained from the previous pose x_{r,k} and the pose displacement Δx_r = [Δx_r, Δy_r, Δθ_r]^T during the time interval T. This can be written as

\begin{bmatrix} x_{r,k+1} \\ y_{r,k+1} \\ \theta_{r,k+1} \end{bmatrix}
= \begin{bmatrix} x_{r,k} \\ y_{r,k} \\ \theta_{r,k} \end{bmatrix}
+ \begin{bmatrix} \Delta x_r \\ \Delta y_r \\ \Delta\theta_r \end{bmatrix}
= \begin{bmatrix} x_{r,k} \\ y_{r,k} \\ \theta_{r,k} \end{bmatrix}
+ \begin{bmatrix} \upsilon_{r,k+1} T \cos(\theta_{r,k} + \Delta\theta_r/2) \\ \upsilon_{r,k+1} T \sin(\theta_{r,k} + \Delta\theta_r/2) \\ \omega_{r,k+1} T \end{bmatrix} \quad (7)

By combining Equations 4 to 7 we can now write the complete odometric function x_{r,k+1} = f(x_{r,k}, u_{k+1}) as

\begin{bmatrix} x_{r,k+1} \\ y_{r,k+1} \\ \theta_{r,k+1} \end{bmatrix}
= \begin{bmatrix} x_{r,k} \\ y_{r,k} \\ \theta_{r,k} \end{bmatrix}
+ \begin{bmatrix}
(u_{L,k+1} + u_{R,k+1}) \frac{\pi r_r}{C} \cos\left(\theta_{r,k} + (u_{R,k+1} - u_{L,k+1}) \frac{\pi r_r}{b_r C}\right) \\
(u_{L,k+1} + u_{R,k+1}) \frac{\pi r_r}{C} \sin\left(\theta_{r,k} + (u_{R,k+1} - u_{L,k+1}) \frac{\pi r_r}{b_r C}\right) \\
(u_{R,k+1} - u_{L,k+1}) \frac{2\pi r_r}{b_r C}
\end{bmatrix} \quad (8)

The odometric error model

In the previous section we explained how the robot iteratively estimates its pose, using its previous pose and the wheel encoder outputs u_L and u_R. This principle is called dead-reckoning and has the disadvantage that in each new pose estimate, all small errors made in previous pose estimates will accumulate. This means that, after a while, the error in the pose estimate can become very large. Fortunately, SLAM can deal with this accumulation of pose estimation errors and it will not lead to problems as long as the robot has a good estimate of these errors. To obtain this estimate, the robot needs an odometric error model.

We can define two main categories of odometric errors: intrinsic and extrinsic errors. The intrinsic errors are those related to the hardware configuration. In this category we can name: encoder noise, unequal wheel diameters, wheel misalignment, and uncertainty in the track (the distance between the wheels). The extrinsic errors are those generated by the wheel-terrain interaction, such as skid or slippage.

The effects of some intrinsic errors, such as track and wheel diameter uncertainty, can be reduced by accurate estimation of these parameters. All other errors, such as those due to noise in the encoder measurements and the extrinsic errors, need to be modeled by the odometric error model.

We will model the odometric errors v_k as additional white Gaussian noise on the left and right encoder counts u_L and u_R. The errors in both encoder counts are assumed to be independent, with a variance proportional to the magnitude of the count: σ_u = a_u u + b_u. The parameters a_u and b_u can be determined experimentally [8].

The odometric noise v_k is assumed to have zero mean and a covariance matrix Q_k. With this information we can iteratively estimate the covariance matrix P_{x,k}, representing the estimated pose uncertainty:

P_{x,k+1} = F_{x,k} P_{x,k} F_{x,k}^T + F_{u,k} Q_k F_{u,k}^T, (9)

where F_{x,k} and F_{u,k} are the Jacobian matrices

F_{x,k} = ∂f/∂x_{r,k} |_{x̂_{r,k}, u_{k+1}} (10)

and

F_{u,k} = ∂f/∂u_{k+1} |_{x̂_{r,k}, u_{k+1}}. (11)
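For prototyping, the Jacobians of Equations 10 and 11 can be approximated by finite differences around the current estimate, after which Equation 9 is a few matrix products. This is our own simplification for illustration; an analytic Jacobian is cheaper and is what a real-time implementation would normally use:

```python
import numpy as np

def numerical_jacobian(fun, x0, eps=1e-6):
    """Finite-difference Jacobian of fun at x0 (one column per input dim)."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(fun(x0))
    J = np.zeros((f0.size, len(x0)))
    for j in range(len(x0)):
        dx = np.zeros(len(x0))
        dx[j] = eps
        J[:, j] = (np.asarray(fun(x0 + dx)) - f0) / eps
    return J

def propagate_pose_covariance(f, x_hat, u, P_x, Q):
    """Eq. (9): P_{x,k+1} = F_x P_x F_x^T + F_u Q F_u^T."""
    F_x = numerical_jacobian(lambda x: f(x, u), x_hat)
    F_u = numerical_jacobian(lambda v: f(x_hat, v), u)
    return F_x @ P_x @ F_x.T + F_u @ Q @ F_u.T
```

Combined with the odometry sketch above, f would be the odometric function of Equation 8, e.g. f = lambda x, u: odometry_step(x, u[0], u[1]).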

3.1.2 The measurement models

The measurement model z_k = h(x_k) + w_k describes the relation between the robot's observations z_k at time k, its pose x_r and the map features x_m. The vector w_k describes the measurement noise, which is assumed to be additive, zero mean, white and Gaussian. This noise has a covariance matrix R_k.

In the classical feature based SLAM approach, one uses only point features. As a result, the robot only needs one single observation function h(). In our approach, we have different environment features and each feature has its own observation function. Moreover, for the corner feature the robot even needs multiple observation functions.

Figure 9: A convex corner observed in three different ways.

The observation functions are deduced from the coordinates (x_{z,i}, y_{z,i}) of the point where the infrared beam is projected onto an object. As can be seen in Figure 10, we can use a translation and a rotation over the robot pose x_r to express the observed coordinates as

x_{z,i} = x_r + z_i cos(θ_r + ψ_i) (12)

y_{z,i} = y_r + z_i sin(θ_r + ψ_i) (13)

where the index i corresponds to the i-th IR sensor, z_i is the measured distance and ψ_i is the orientation of the sensor.

Walls

In Section 2.4 we modeled a wall feature with the parameters r_w and φ_w. As shown in Figure 10, we can draw a straight line through two observed points on a wall. Using Equations 12 and 13 we can write the observation function of a wall as

z_i = (r_w − y_r cos(φ_w) + x_r sin(φ_w)) / sin(θ_r + ψ_i − φ_w) (14)

Corners

A corner is defined as a point where two walls come together. The observation function of a corner depends on which of these walls the robot observes. This is shown in Figure 9, where we observe three different situations:

1. The robot observes points on two different walls
2. The robot observes points on the left wall of the concave corner
3. The robot observes points on the right wall of the concave corner

In each of these situations different combinations of observation functions are required. When the i-th sensor observes the left wall of a concave corner feature, the observation function belonging to this sensor equals

z_{i,L} = ((y_c − y_r) cos(φ_c) − (x_c − x_r) sin(φ_c)) / sin(θ_r + ψ_i − φ_c). (15)

When the i-th sensor observes the right wall of a concave corner feature, the observation function equals

z_{i,R} = ((x_c − x_r) cos(φ_c) + (y_c − y_r) sin(φ_c)) / cos(θ_r + ψ_i − φ_c). (16)

Convex corner features have the same observation functions, but for these features left and right in Equations 15 and 16 have to be interchanged.
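For reference, Equations 14-16 in code form, under the conventions above (φ measured with respect to the positive y-axis, ψ_i the mounting angle of sensor i). A Python sketch for illustration only (the thesis implementation is in Matlab):

```python
import numpy as np

def h_wall(pose, wall, psi):
    """Eq. (14): expected IR range to a wall feature (r_w, phi_w)."""
    x_r, y_r, th = pose
    r_w, phi_w = wall
    return (r_w - y_r * np.cos(phi_w) + x_r * np.sin(phi_w)) / np.sin(th + psi - phi_w)

def h_corner_left(pose, corner, psi):
    """Eq. (15): expected range when hitting the left wall of a concave corner."""
    x_r, y_r, th = pose
    x_c, y_c, phi_c = corner
    return ((y_c - y_r) * np.cos(phi_c) - (x_c - x_r) * np.sin(phi_c)) / np.sin(th + psi - phi_c)

def h_corner_right(pose, corner, psi):
    """Eq. (16): expected range when hitting the right wall of a concave corner."""
    x_r, y_r, th = pose
    x_c, y_c, phi_c = corner
    return ((x_c - x_r) * np.cos(phi_c) + (y_c - y_r) * np.sin(phi_c)) / np.cos(th + psi - phi_c)
```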


Figure 10: Estimating r_w and φ_w from the robot pose x_{r,k} and the measurements z_k.

3.2 Feature initialization

While the robot is exploring its environment and observes a new feature, it has to register this new landmark on the map. During this registration process the state vector x_k of the Extended Kalman filter is extended with the parameters of the observed object. To do this, the robot needs to make an initial estimate of the new map parameters. In this section we will describe how this feature initialization takes place. As we will show, in the case of a wall, this initialization is relatively straightforward. The initialization of a corner is however more complex, and we will therefore describe a number of approaches to do this.

3.2.1 Walls

For the initialization of the wall parameters r_w and φ_w, we need a function g_w(), which deduces this information from the knowledge of the current robot pose x_{r,k} and the measurements z_k:

[r_w, φ_w]^T = g_w(x_r, z_k) (17)

According to Equations 12 and 13, the measurements and the robot pose provide the information needed to calculate the coordinates of two points (x_{z,1}, y_{z,1}) and (x_{z,2}, y_{z,2}). As shown in Figure 10, we can draw a straight line through these points. This line can be expressed as the function y = ax + b.¹ If we substitute the two coordinates of the observed points into this line equation, we get the values of the line parameters a and b:

a = (y_{z,1} − y_{z,2}) / (x_{z,1} − x_{z,2}) (18)

b = (x_{z,1} y_{z,2} − x_{z,2} y_{z,1}) / (x_{z,1} − x_{z,2}). (19)

By substituting Equations 12 and 13 for the variables x_{z,i} and y_{z,i} we get the following expressions:

a = ((y_r + z_1 sin(ψ_1 + θ_r)) − (y_r + z_2 sin(ψ_2 + θ_r))) / ((x_r + z_1 cos(ψ_1 + θ_r)) − (x_r + z_2 cos(ψ_2 + θ_r))) (20)

b = ((x_r + z_1 cos(ψ_1 + θ_r))(y_r + z_2 sin(ψ_2 + θ_r)) − (x_r + z_2 cos(ψ_2 + θ_r))(y_r + z_1 sin(ψ_1 + θ_r))) / (z_1 cos(ψ_1 + θ_r) − z_2 cos(ψ_2 + θ_r)). (21)

¹ We use the slope-intercept form y = ax + b of a line only to derive the formulas for the wall parameters. The problems with the stability of this line representation are therefore not an issue.

Now that we have expressed a and b as functions of the robot pose and the measurements, we can write the wall parameters as

φ_w = arctan(a) (22)

and

r_w = b cos(φ_w). (23)
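Chaining Equations 12, 13, 18, 19, 22 and 23 gives the initialization function g_w. The following Python sketch illustrates it on a toy case (the thesis implementation is in Matlab; the example numbers are ours):

```python
import numpy as np

def init_wall(pose, z, psi):
    """g_w: initial (r_w, phi_w) from the robot pose and two IR readings.

    pose : [x_r, y_r, theta_r]
    z    : the two measured ranges [z1, z2]
    psi  : the two sensor mounting angles [psi1, psi2]
    """
    x_r, y_r, th = pose
    z, psi = np.asarray(z, float), np.asarray(psi, float)
    # Eqs. (12)-(13): beam endpoints in world coordinates.
    xz = x_r + z * np.cos(th + psi)
    yz = y_r + z * np.sin(th + psi)
    # Eqs. (18)-(19): slope-intercept line through the two points.
    a = (yz[0] - yz[1]) / (xz[0] - xz[1])
    b = (xz[0] * yz[1] - xz[1] * yz[0]) / (xz[0] - xz[1])
    # Eqs. (22)-(23): convert to the normal form.
    phi_w = np.arctan(a)
    r_w = b * np.cos(phi_w)
    return r_w, phi_w

# Example: robot at the origin facing +y, observing the wall y = 1.
# Expected result: approximately (r_w, phi_w) = (1.0, 0.0).
print(init_wall([0.0, 0.0, np.pi / 2], [1.0, np.sqrt(2)], [0.0, np.pi / 4]))
```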

When performing SLAM we do not have exact knowledge of the robot pose x_{r,k}. Instead we have an estimate x̂_{r,k} of the pose and an estimate P̂_{r,k} of the pose uncertainty. Due to the sensor noise, the measurements z_k are also inaccurate and they have an estimated uncertainty R_k. As a result the estimates of the wall parameters are also uncertain, so we also need to make an estimate of the wall covariance matrix P̂_{w,k}. This matrix is initialized by

P̂_{w,k} = G_{r,k} P̂_{r,k} G_{r,k}^T + G_{z,k} R_k G_{z,k}^T, (24)

where G_{r,k} is the Jacobian matrix

G_{r,k} = ∂g_w/∂x_{r,k} |_{x̂_{r,k}, z_k} (25)

and G_{z,k} is

G_{z,k} = ∂g_w/∂z_k |_{x̂_{r,k}, z_k}. (26)

This method can also be used when more than two measurements are available. In that case one has to apply a fitting method, such as least squares estimation, to get the values of a and b. Using this approach, it is however not possible to calculate the Jacobians G_{r,k} and G_{z,k}. As an alternative, one could initialize the covariance matrix as a diagonal matrix with very large values, indicating that the uncertainty of the initial estimate is high. This is a very common approach in EKF SLAM [30].

3.2.2 Corners

As described in Section 2.4, a corner feature has three parameters: x_c, y_c and φ_c. If we want to make an initial estimate of a corner from two measurements, we have to solve an under-determined problem, because we have more unknown parameters than (measurement) equations. To solve this problem we tried four different approaches. In the next subsections we will present these approaches. At the end of this section we will discuss which of these methods is most suitable for scenario based SLAM.

Multi-scanning

To solve this problem we could try to use the multi-scanning approach, where we use a number of consecutive measurements to make a first guess of the corner parameters. When using IR range finding sensors, this approach will only work if we use a large number of distance measurements. Assuming that the measurements are updated at 20 Hz and the robot moves at 0.2 m/s, the robot has to measure for a period of several seconds before it is able to make a good initial estimate. Due to the high variation in the sensor noise, the variation in the distance measurements is larger than the path the robot travels. This can be seen in Figure 11, where the red marks indicate the sensor's output. After one second, the variance in the direction of the measurements is still too large to solve the ambiguous corner estimation problem.

For the simulation in Figure 11, we modeled the sensor noise as a Gaussian noise process with a standard deviation of 10 percent of the measured distance. At this moment we do not know if this is a realistic value. We need to analyze the behavior of a real IR sensor to see if our sensor noise model is correct.


Figure 11: After driving forward for a period of 1 second, the measurements (20 in total) do not provide sufficient information to make an initial estimate of a corner.

Figure 12: Estimating new corner feature parameters from a known wall feature and one additional measurement.


From a known wall

A better way to solve the corner initialization problem is to incorporate prior knowledge, by first estimating the parameters of one of the wall features connected to this corner. In this way, assuming the corner has a right angle, we only need one additional observation to estimate the three parameters of the newly observed corner feature. To do this, the robot needs a function g_c(), which converts the prior knowledge of the wall feature and the knowledge gained by the measurement into an initial corner feature estimate:

[x_c, y_c, φ_c]^T = g_c(x_r, [r_w, φ_w]^T, z_k) (27)

As can be seen from Figure 12, where we assume to already have knowledge about line 1, the orientation φ_c of the new corner feature equals the orientation of the wall feature to the left of this corner:

φ_c = φ_{w,L} (28)

The coordinates (x_c, y_c) of the corner are at the point of intersection of line 1 and line 2. If we express each of these lines as a function y = ax + b, we know that, at the point of intersection, the following relation holds:

y_c = a_1 x_c + b_1 = a_2 x_c + b_2. (29)

If we have prior knowledge of the left wall feature, and we assume that the two corner walls are perpendicular, we can estimate a_1, b_1 and a_2:

a_1 = tan(φ_{w,L}) (30)

b_1 = r_{w,L} / cos(φ_{w,L}) (31)

a_2 = −1 / tan(φ_{w,L}) (32)

With one measurement of a point (x_{z2}, y_{z2}) on the second wall, we can obtain the unknown line parameter b_2:

b_2 = y_{z2} − a_2 x_{z2} (34)
    = y_r + z_2 sin(θ_r + ψ_2) + (x_r + z_2 cos(θ_r + ψ_2)) / tan(φ_{w,L})

After substituting the knowledge of b_2, we can solve Equation 29 for the remaining corner feature parameters x_c and y_c:

x_c = (y_r − r_{w,L}/cos(φ_{w,L}) + z_2 sin(ψ_2 + θ_r) + (x_r + z_2 cos(ψ_2 + θ_r))/tan(φ_{w,L})) / (tan(φ_{w,L}) + 1/tan(φ_{w,L})) (35)

y_c = r_{w,L}/cos(φ_{w,L}) + tan(φ_{w,L}) · (y_r − r_{w,L}/cos(φ_{w,L}) + z_2 sin(ψ_2 + θ_r) + (x_r + z_2 cos(ψ_2 + θ_r))/tan(φ_{w,L})) / (tan(φ_{w,L}) + 1/tan(φ_{w,L})). (36)

Equations 28, 35 and 36 define the initialization function g_c() from a measurement z_2 and the knowledge of the wall feature to the left of the corner.
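The derivation above translates into a short routine: Equation 28 fixes the orientation, Equations 30-34 build the two lines, and Equation 29 intersects them. A Python sketch under the same right-angle assumption (the names are ours; the thesis implementation is in Matlab):

```python
import numpy as np

def init_corner_from_left_wall(pose, wall_left, z2, psi2):
    """g_c: initial corner (x_c, y_c, phi_c) from a known left wall feature
    and one measurement on the second (perpendicular) wall, Eqs. (28)-(36)."""
    x_r, y_r, th = pose
    r_wL, phi_wL = wall_left
    # Eq. (28): the corner inherits the orientation of the left wall.
    phi_c = phi_wL
    # Eqs. (30)-(32): line 1 is the known wall, line 2 is perpendicular to it.
    # NB: the slope form degenerates for axis-aligned walls (tan -> 0 or inf),
    # which is why the text uses it for derivation only.
    a1 = np.tan(phi_wL)
    b1 = r_wL / np.cos(phi_wL)
    a2 = -1.0 / np.tan(phi_wL)
    # Eq. (34): one observed point on the second wall fixes its intercept.
    xz2 = x_r + z2 * np.cos(th + psi2)
    yz2 = y_r + z2 * np.sin(th + psi2)
    b2 = yz2 - a2 * xz2
    # Eq. (29): intersect the two lines to obtain the corner point.
    x_c = (b2 - b1) / (a1 - a2)
    y_c = a1 * x_c + b1
    return x_c, y_c, phi_c
```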

In the same manner as explained above, we could also estimate the corner feature parameters from a known wall feature to the right of the corner. In this case we need an additional measurement of a point (x_{z1}, y_{z1}). Using the prior knowledge of φ_{w,R}, r_{w,R} and z_1 we get

φ_c = φ_{w,R} + π/2, (37)
