
Development of a centering algorithm for the structured light sensor on the PIRATE pipe inspection robot

C.C.N. (Chantal) Schneider

BSc Report


Dr.ir. J.B.C. Engelen
N. Botteghi, MSc
Dr.ir. W.W. Wits

June 2017
015RAM2017
Robotics and Mechatronics
EE-Math-CS
University of Twente

P.O. Box 217

7500 AE Enschede

The Netherlands


Summary

The goal of the PIRATE (pipe inspection robot for autonomous exploration) is to inspect small diameter gas pipes and petrochemical industrial pipelines. The PIRATE uses a non-destructive inspection approach by sending a robot through the pipe and measuring the state of the pipe from the inside. To detect obstacles, bends and damages, a structured laser light sensor is used. The sensor projects a laser cone into the pipe and a camera captures this projection.

In order to extract useful information from this projection, it is important that the projection is circular in shape, so the sensor needs to be centered inside the pipe. This bachelor project aims to build a robot arm that can control the position of the sensor and to develop a centering algorithm that finds the center of the pipe and moves the sensor to this position. The algorithm uses image-processing techniques to find the center of the pipe, and a kinematics model is used to control the position of the robot arm. The project is implemented in MATLAB and Arduino.

The resulting algorithm works in straight pipes without obstacles.


Contents

Summary

1 Introduction
1.1 Context
1.2 Problem statement and goal
1.3 Suggested approach
1.4 Report Outline

2 State of the Art
2.1 Pipe inspection methods
2.2 The structured light sensor
2.3 Image Processing

3 Design Requirements
3.1 Problems with previous works
3.2 Requirements

4 Setup

5 Implementation
5.1 Capturing images using the camera
5.2 Image processing
5.3 Connecting algorithm

6 Evaluation
6.1 Evaluation of the design requirements
6.2 Number of iterations needed
6.3 Evaluation of the white LED algorithm
6.4 Evaluation of the red laser algorithm
6.5 Cases for which the algorithm will work
6.6 Overall evaluation of the accuracy of centering the sensor

7 Conclusion and Recommendations
7.1 Conclusion
7.2 Recommendations

A Design of the Robot Arm
B Arduino Connection to MATLAB
C MATLAB Code
C.1 White LED algorithm
C.2 Red Laser algorithm

Bibliography


1 Introduction

1.1 Context

The Dutch gas network consists of over 20,000 km of high-mid pressure main transmission pipes and 100,000 km low-pressure pipes, distributing gas to all urban areas (Dertien, 2014).

Leakages and damages to the pipes can cause risks for public health and safety (Gas Networks Ireland, 2017). Thus, it is important to constantly monitor the state of the pipes and to replace them if necessary.

Traditional ways of pipe inspection involve inspection from the surface or an endoscope approach. Another way is to use a manually controlled device with a camera mounted on it, whose footage is then evaluated. These methods are expensive and time-consuming. Fully autonomous pipe inspection methods do not exist yet (Dertien, 2014).

The research group Robotics and Mechatronics (RAM) at the University of Twente created the PIRATE (pipe inspection robot for autonomous exploration) project, with the aim to develop an autonomous pipe inspection robot which will be able to detect defects inside a small diameter, low-pressure gas pipe network in a non-destructive way.

In addition to this original goal, another goal has recently been added to the PIRATE project: inspecting petrochemical pipelines. This is relevant because it is safer, cheaper and cleaner to perform maintenance in a non-destructive way. Petrochemical pipes are, unlike gas pipes, made out of metal and carry liquids. Adjustments have to be made for the PIRATE to operate in this new environment.

Figure 1.1: Current Design of the PIRATE inside a pipe

Figure 1.2: Alternative design of the PIRATE. Image retrieved from Dertien (2014).

Past work in this project has led to the design of the pipe robot (see figures 1.1, 1.2). Currently the robot can move through the pipe while being controlled manually. Further work has to be done on the autonomous operation of the robot. To detect irregularities or defects in the pipe (such as deformations of the pipe, dents, bends or tears) work has been done on developing a vision sensor (Drost, 2009; Mennink, 2010; Brilman, 2011; Reiling, 2014). This approach uses a camera and a laser. The sensor projects a laser cone into the pipe and a camera captures this projection (see figure 1.3). By using image-processing techniques, defects can be found. In order to accurately find irregularities, the sensor needs to be attached to the PIRATE robot and it needs to be centered in the middle of the pipe with the same orientation as the pipe axis.

Figure 1.3: Laser projection of the circle in the pipe. The laser can be clearly distinguished from the pipe.

1.2 Problem statement and goal

Previous work (Kamermans, 2017; Brilman, 2011; Meer, 2016) on mapping the pipes by using the laser sensor assumed that the axis of the sensor was aligned with the axis of the pipe and that the sensor was centered in the middle of the pipe. Furthermore, they did not attach the sensor to the PIRATE, but rather used an independent sensor vehicle. This vehicle can move through the pipe independently from the PIRATE. The gathered sensor data was saved and processed afterwards.

The goal of this bachelor project is to develop an image-processing algorithm that finds the center of the pipe and to build a device that positions the sensor at this center.

This means that the processing of the image data and the adjustment of the camera position will need to be done in real time.

1.3 Suggested approach

To find the center of the pipe, the structured light vision system developed by Reiling will be used. This system consists of a laser cone projector, a set of LEDs to light up the pipe and a camera. In a first step, the LEDs will be switched on and a picture will be taken. Via image-processing techniques, the position of the center of the pipe will be determined. Then the sensor will be moved to this position. In a next step, the laser projection will be used. By applying a polar transform to the captured image it can be determined if the sensor is centered. The polar transform can also be used to gather information on how the tilt of the sensor needs to be changed to center the projection.

To move the sensor, a robot arm needs to be built to which the sensor can be attached. This will be done by mounting two servos on top of each other, giving the arm two degrees of freedom. One servo will control the vertical position of the sensor and the other will control the tilt of the camera. The robot arm will be mounted in a fixed position in the pipe.

1.3.1 Assumptions and simplifications in project

This project will serve as a proof of concept rather than a full working product. To simplify the design process and due to time constraints some assumptions and simplifications were made.

This project will only look at straight pipes, so it will not consider bends, obstacles or damages to the pipes. Furthermore, it will be assumed that the camera is already in the correct horizontal position, because the PIRATE clamps itself across the full diameter of the pipe. This means that it can be assumed that the sensor only needs to be aligned along the vertical axis.

Another simplification is that the robot arm that will be built will not be attached to the PIRATE, but will be fixed to the pipe. This can be done because the centering algorithm should work independently from the movement of the PIRATE. The algorithm will be developed in steps. First, the easiest case will be considered (the sensor already being in a near ideal position) and then the algorithm will be improved if time allows.

1.4 Report Outline

This thesis will first look at the state of the art of pipe inspection methods and then focus on the structured laser light technique (see chapter 2). After that, the design requirements for the project will be set up (see chapter 3). Chapter 4 will describe the setup that was used. Chapter 5 describes the implementation of the algorithm. After that, there will be a chapter on the evaluation of the algorithm (see chapter 6), followed by conclusions and recommendations for future works (see chapter 7).

2 State of the Art

2.1 Pipe inspection methods

There are several possible methods of pipe inspection, which can be categorized into destructive and non-destructive pipe inspection. Destructive pipe inspection involves digging up the pipes and inspecting them above ground; however, this is both time-consuming and expensive. The preferred way is thus non-destructive in-pipe inspection. One possible way is to send a robot into the pipe. This robot can then investigate damages from the inside (Shukla and Karki, 2016).

To achieve this, a suitable sensor has to be used. Previous research has looked into a wide range of possible sensing techniques (Feng et al., 2016; Liu and Kleiner, 2013; Shukla and Karki, 2016). Possible in-pipe sensing techniques include electromagnetic, acoustic, ultrasound, radiographic, tentacle sensing and visual inspection methods. Electromagnetic methods, such as magnetic flux leakage and remote field eddy current, only work for metal pipes; thus they are not suitable for the PIRATE, since this robot also aims to inspect PVC and PE pipes (Dertien, 2014). Ultrasound sensing methods require a coupling medium (a liquid) in order to work and are thus not applicable for the inspection of gas pipelines. Radiographic inspection methods are expensive and can be unsafe. Tentacle sensing methods make use of sensors that are in direct contact with the wall and are therefore more applicable for larger diameter pipes. Visual sensing techniques include CCTV (closed-circuit television) scans and laser scan methods. CCTV scans make use of a camera to film the inside of a (lighted) pipe. Inspection is often done manually by a worker. The absence of reference points in the video makes it hard to perform quantitative measurements (Dertien, 2014). Stereo cameras could improve this; however, it is not practical to use separate cameras in small diameter pipes. To easily analyze images, a clear point of reference has to be added, as is done in laser scan methods. Here a laser pattern is projected onto the inside of the pipe wall. Image processing can be used to extract information from the videos and to map the inside of the pipe.

Several laser projections exist (e.g. circles, point grids, cross-hair patterns) (Dertien, 2014). One approach uses line patterns that are projected into the pipe (Lee et al., 2011). This approach can be used for navigation purposes; however, it is less suited for defect detection. For defect detection, it is desirable that the whole pipe is mapped in one frame, while using a line pattern requires several frames, because the pattern needs to be rotated. Another approach is the single spot scanning method (Duran et al., 2002). Here an individual light point that is rotated continuously is projected into the pipe. To detect all defects inside the pipe, one frame needs to be taken for every position of the laser dot. This results in a higher inspection time.

The method chosen for the PIRATE is the whole circle image method (Duran et al., 2002; Dertien, 2014; Reiling, 2014). Here a full laser circle is projected into the pipe. The advantage of this is that the cross-section of the pipe can be measured in one frame. Thus, it has faster inspection times compared to the other described laser projection methods.

2.2 The structured light sensor

2.2.1 Build-up of sensor

Previous work on the PIRATE on creating and using a sensor which uses laser projections and a camera to find defects in pipes and to enable autonomous movement through the pipes has been done by Drost (2009), Mennink (2010) and Brilman (2011). This sensor has been further developed by Reiling (2014). A cross-section of the sensor can be seen in figure 2.1. The sensor is small enough (80 mm x 31 mm) to fit into small diameter pipes.

Figure 2.1: Cross section of the sensor. Figure reproduced from Reiling (2014)

The structured light sensor can project a cone of laser light into the pipe. Assuming this is done in a dark enough environment, such as an underground pipe, this creates a circular projection inside the pipe. This projection is captured by a small camera mounted in such a way that the centers of the laser projection and of the webcam image align. The camera is used at a resolution of 640x480 pixels and has an accuracy of 0.35 mm (Reiling, 2014). From the projection it can be seen if there are any damages inside the pipe: these show up as imperfections in the circle. This requires that the projection is a circle, which can only be achieved if the sensor is centered in the pipe and if the axes of the sensor and the pipe align. If the sensor is not centered or aligned with the pipe, the projection is not circular but elliptical (see figure 2.2). This is a problem, because no information about the pipe can be extracted in this way.

Figure 2.2: Effect of a non-centered and aligned sensor. The projection ends up being elliptical and no information about the pipe can be extracted. Figure reproduced from Kamermans (2017)

Furthermore, the sensor has three LEDs mounted to the front which can illuminate the inside of the pipe. This can be used to gather additional information about the inside of the pipe.

2.2.2 Work done with the sensor

Several previous works use a structured light sensor to gather information about the pipe.


Brilman (2011) used a previous version of the structured light sensor and put it on a vehicle independent from the PIRATE. He filmed the inside of the pipe with the sensor and created an image-processing algorithm to analyze the image data.

Meer (2016) used the sensor developed by Reiling (2014) to create a 3D reconstruction of the pipe to detect anomalies. He used the image data every 3 mm and calculated 3D coordinates of the pipe. Based on these coordinates he reconstructed the inside of the pipe. The research shows that the sensor is able to detect obstacles inside the pipe. In this project, the sensor was mounted onto a sensor vessel which was manually controlled and independent from the PIRATE. It was assumed that the axis of the sensor aligns with the axis of the pipe. The data was stored and processed later in MATLAB.

Kamermans (2017) used Reiling's sensor to create a setup to investigate water pipes. He uses a pipe inspection gauge which is pushed through the pipe by water pressure. He concludes that the sensor can be used to create 3D reconstructions of water pipes.

2.3 Image Processing

Previous projects have developed algorithms to extract data about the pipe from the captured laser projections. Brilman (2011) uses an algorithm from Mennink (2010) and further improves it. His algorithm uses the OpenCV library and is implemented in C++. It works as follows: first a webcam image of the laser projection is captured. This picture has a resolution of 640x480 pixels and is a 3-channel 8-bit image. It is converted to a one-channel 8-bit greyscale image. Then thresholding is applied to reduce noise: all pixels below a certain fixed threshold are made black. In the next step, dilation is applied. A 3x3 window is used and the algorithm assigns the value of the highest pixel inside the window to all the pixels inside it. This is done to close unwanted gaps in the circle; however, it also blurs the circle edge. Then an 8 to 32-bit transform is applied in order to be able to apply the polar coordinate transform. To apply the polar transform, the center of the circle needs to be known. For this he uses a calibration technique developed by Mennink (2010). By applying the polar transform, information about the pipe can be extracted from the picture: the laser is mapped from a (near) circle to a (near) straight line. To detect the laser he uses the averaged minimum value in each column. This algorithm is an improvement over Mennink's in terms of computational load and accuracy, partly due to removing the Gaussian blur filter and blob detection filter.

Reiling (2014) uses an image-processing algorithm similar to Brilman's. The algorithm consists of color space conversion, undistortion to compensate for irregularities in the lens, polar mapping, thresholding and dilation, and weighted averaging. He suggests further improvements which could be implemented in this algorithm; however, he does not test them. He suggests using adaptive thresholding dependent on the conditions (e.g. pipe size and material) instead of a fixed thresholding value as Brilman does. A second suggestion he makes is to not use dilation to fill the gaps in the circle, because this adds blurring and decreases resolution, thus leading to less accuracy. Instead, he suggests filling gaps by using interpolation.

3 Design Requirements

3.1 Problems with previous works

Previous works on mapping the pipe using the structured light sensor assume that the axis of the sensor is aligned with the axis of the pipe and that the camera is centered in the pipe. Furthermore, the sensor is placed onto a sensor vehicle in a fixed position (Kamermans, 2017; Meer, 2016; Brilman, 2011). However, the final goal should be to place the sensor on the front of the PIRATE. The sensor should be able to adjust its position to center itself in the pipe. This is necessary to accurately find flaws in the pipe by using the polar transform. Thus, a robot arm needs to be built which holds the sensor and allows the vertical position and the tilt of the camera to be changed.

This also implies that the image processing, which is up until now done after gathering all the image data, needs to be done in real time.

3.2 Requirements

Dertien (2014) set the basic performance criteria for the PIRATE. Further requirements for the robot can be derived from these basic requirements. Other requirements are derived from the problems of previous works. This section will list the requirements for the final product. Requirements for the prototype will be indicated.

3.2.1 Setup

Requirement 1: The sensor should be small enough to fit into small diameter pipes and be lightweight.

Based on the criteria by Dertien (2014) the sensor should be small and lightweight. The robot should eventually be able to navigate through pipes with an inner diameter of 57-118 mm. The structured light sensor has been greatly improved by Reiling (2014) and will be used in this project.

Requirement 2: The camera should take good quality pictures at the specified resolution.

The camera should be as accurate as possible to detect defects in the pipe. The camera will be operated at a resolution of 640x480 pixels. The sensor developed by Reiling has an accuracy of 0.35 mm.

Requirement 3: The field of view should be big enough to capture the whole circle.

To capture the whole laser projection in the pipe it is necessary to use a fish eye lens on the sensor camera.

Requirement 4: The project should work for straight pipes of certain diameters.

In the real product, the robot would need to recognize the center of pipes with an inner diameter of 57-118 mm, also in corners and bends. To simplify the project, in this case a straight piece of pipe of 125 mm outer diameter will be used. The results will also be applicable to pipes of other sizes. No focus will be placed on pipes with corners or other obstacles.

Requirement 5: The robot arm should have two degrees of freedom.

The sensor will be mounted on a robot arm with two degrees of freedom (DOF), one to position the sensor vertically and the other one to tilt the camera. It is assumed that the PIRATE centers itself horizontally in the pipe automatically. In a final product, more degrees of freedom might be added, but in this project only 2 DOF will be used. This is done for simplification.

Requirement 6: The robot arm should be small and lightweight.

In a final product, the robot arm would need to be small and lightweight. In this project no focus is placed on reducing the arm's weight or size. Instead, a prototype is used as a proof of concept. Two servos will be used to control the robot arm.

Requirement 7: The robot arm has to be fixed in place.

In a final product, the robot arm will be fixed onto the PIRATE. However, in order to simplify the setup, in this project the arm will be mounted in a fixed position in the pipe. The sensor will already need to be centered horizontally.

3.2.2 Software

Requirement 8: The processing will be done in real time.

Since the PIRATE will move through the pipe, it is necessary that the sensor is continuously adjusted to the center of the pipe. This means that real-time processing of the sensor data is required.

Requirement 9: The image-processing results need to be as accurate as possible.

The results of the image processing should be accurate. This means the algorithm should lead to the same results when starting from the same position. A deviation of at most 5 pixels will be considered accurate.

3.2.3 Others

Requirement 10: The processing time and the speed of the sensor position adjustment should be optimized.

In a real product, the speed of centering the sensor will be important. This means that the data processing and adjustment of the sensor position will need to be done quickly. For this a low complexity algorithm would be necessary. Since this project is a proof of concept, no focus will be placed on speed.

4 Setup

For this project, it was chosen to use a straight PVC pipe with an inner diameter of 118 mm. This diameter was chosen because currently the PIRATE works in pipes from 63 mm to 125 mm external diameter (Dertien, 2014). The 118 mm inner diameter, the biggest size possible, was chosen for ease of use and because currently the robot arm and sensor are too large to operate in smaller pipes.

The pipe is kept in place by a PVC bend on one side and a PVC T-bend on the other side (see figure 4.1).

Figure 4.1: The pipe used had an inner diameter of 118 mm. The robot arm was placed inside the T- bend.

The pipe should be completely dark inside in order to get consistent results independent of the outside light conditions, so both ends of the pipe are covered. In real life, pipe inspection would be done underground, thus also in complete darkness.

The robot arm is placed inside the T-bend on a raised platform to keep it at a constant height. This platform has been laser cut and made in such a way that it fits tightly into the pipe. It is fixed in the horizontal position, meaning that the robot arm cannot turn sideways. This is a realistic assumption because of the way the PIRATE clamps itself into the pipe. The most likely position to clamp itself is along the biggest possible distance, thus over the complete inner diameter of the pipe. This would mean that the robot centers itself along one axis automatically.

The robot arm itself consists of two servomotors that are connected together. The arm has 2 degrees of freedom: one degree of freedom moves the whole arm and the second adjusts the tilt of the camera. The servos are controlled via an Arduino and powered by a 12 V power supply.

Figure 4.2: Assembled robot arm. It is controlled by two servos.

The structured light sensor is attached to the top part of the arm. The sensor is held in place by a 3D printed holder. The sensor was originally designed by Reiling, but the design was slightly adjusted when the camera needed to be replaced. Another change had to be made for the lens, because the original lens did not have a wide enough field of view for the new camera. Instead of one lens, two lenses in series have been used in the new setup of the sensor. They are held together by a 3D printed ring. The camera is connected via USB and the white LEDs and the laser are connected to an Arduino.

5 Implementation

The code to center the sensor in the pipe is written in MATLAB. MATLAB was chosen for its ease of use: it has many image-processing functions implemented already and allows real-time processing as well as controlling the camera and servos. The servos are controlled from MATLAB via a connection to the Arduino.

Figure 5.1: White LEDs inside the pipe. It can clearly be seen where the center of the pipe is approxi- mately located.

The algorithm consists of two main parts: in the first part the white LEDs are used to position the sensor (point B) in the center of the pipe independently of the tilt, and in the second part the red laser is used to adjust the tilt of the camera while keeping the y-position of B fixed.

The white LED image is good to use as a first step, because from this image it is easy to get the camera in the approximately correct position. From the images, it can easily be seen where the center of the pipe is located, because this part will be in the dark (see figure 5.1). It is not possible to use the red laser image if the sensor position is far from the ideal position, because it could be possible that the camera will not detect a circular shape (see figure 5.2a). The red laser image can be used once the sensor is in a position close to the centered position. Once the circle shape can be seen (see figure 5.2b), the polar transform can be applied and information about the tilt and positioning can be extracted.

Three basic functions need to be implemented in the code. This involves controlling the cam- era, controlling the servos and image processing. Every part will be described separately first and then it will be shown how the different parts are interconnected.

5.1 Capturing images using the camera

To capture images the USB Webcam support toolbox in MATLAB is used. This toolbox gives a real-time preview of the camera and it allows snapshots of this preview to be taken. It also al- lows all the camera parameters (e.g. resolution, exposure, brightness, white balance) to be set manually. In this case, two sets of camera parameters were chosen: one set for the optimal con- ditions to capture the white LED image and another set of parameters to capture the red laser image. Setting the parameters ensures that all pictures are being taken in the same conditions and that we can optimally extract information.
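A minimal MATLAB sketch of this capture step, assuming the USB Webcam support package named above; the exposure value shown is a placeholder, not one of the report's two calibrated parameter sets:

```matlab
% Capture one frame at the resolution used throughout the report.
cam = webcam(1);                 % first connected camera
cam.Resolution = '640x480';      % resolution used in this project
cam.Exposure = -6;               % example of a manually fixed parameter
img = snapshot(cam);             % grab one frame from the live preview
```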

In order to have a field of view which is big enough to see the laser circle, two lenses in series have been used in this setup. The wide-angle and fish-eye lenses introduce a lot of distortion into the picture, which is problematic if useful measurements need to be extracted from the images. Thus, an undistortion algorithm needs to be applied.

(a) Incomplete circle (b) Complete circle

Figure 5.2: The left figure shows an incomplete circle. This happens if the tilt of the sensor is too far off the ideal position. No information can be extracted from the polar transform of this image. The right image shows a complete circle. The polar transform can be applied and information extracted.

This algorithm is implemented in a MATLAB function (MathWorks, 2014). It requires a set of images from different angles that include a checkerboard pattern with squares of a known size. The algorithm can use these pictures to calculate how they can be mapped so that they are undistorted. The result of the undistortion procedure can be seen in figure 5.3. It is obvious in the first image that the lens introduces a lot of distortion; in the second image the distortion is removed.

(a) Distorted image (b) Undistorted image

Figure 5.3: The left figure shows the distorted image taken by the camera. The right figure shows the effect of the undistortion algorithm. It can clearly be seen that the checkerboard lines are now straight, so the distortion of the lens is removed.
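A minimal sketch of such a checkerboard calibration and undistortion, assuming the Computer Vision Toolbox functions behind the MathWorks reference; the folder name and the 25 mm square size are examples, not the report's values:

```matlab
% Calibrate from checkerboard images, then undistort a captured frame.
files = dir('calibration/*.png');
names = fullfile({files.folder}, {files.name});
[imagePoints, boardSize] = detectCheckerboardPoints(names);
worldPoints = generateCheckerboardPoints(boardSize, 25);    % 25 mm squares
params = estimateCameraParameters(imagePoints, worldPoints);
undistorted = undistortImage(imread(names{1}), params);     % remove distortion
```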

Another problem of the two lenses used is that they are not perfectly aligned and thus the exact view axis cannot be determined. In a real situation, the camera would have to be calibrated better. Furthermore, a small part of the lens is visible in some corners of the image. This is also due to the lenses. Image processing will be used to remove this error.

Modelling the robot arm

The robot arm has 2 degrees of freedom (DOF), which are both along the x-y plane. There is no degree of freedom to turn the robot arm around the x- and y-axis, thus the robot can turn along the z-axis but is constrained along the x-y plane.

Figure 5.4: Schematic of the robot arm with 2 DOF (MathWorks, 2017)

Figure 5.5: Schematic of the 2 DOF robot arm applied to the real robot arm

The robot arm can be modelled in the following way. The bottom of the first arm is fixed in the origin:

$$x_0 = 0, \qquad y_0 = 0$$

The position of point A can be expressed as

$$x_A = L_1 \cos\theta_1, \qquad y_A = L_1 \sin\theta_1$$

Position B can be modelled as

$$x_B = L_1 \cos\theta_1 + L_2 \cos(\theta_1 + \theta_2), \qquad y_B = L_1 \sin\theta_1 + L_2 \sin(\theta_1 + \theta_2)$$

To get the inverse kinematics the following equations can be used:

$$\theta_2 = \pm \operatorname{arctan2}\!\left(\sqrt{1 - c^2},\, c\right), \qquad c = \frac{x_B^2 + y_B^2 - L_1^2 - L_2^2}{2 L_1 L_2}$$

$$\theta_1 = \operatorname{arctan2}(y_B,\, x_B) - \operatorname{arctan2}\!\left(L_2 \sin\theta_2,\, L_1 + L_2 \cos\theta_2\right)$$

With this model the correct angles to reach a certain point B can be calculated. In figure 5.6 an example of the model can be seen. The model uses an input of two different positions for point B and calculates the correct angles to reach this point.

Figure 5.6: Schematic model of the robot arm. The image shows two possible positions that can be reached by the robot arm. It can be seen that every point B can be reached in two possible ways.

In the written algorithms and in the remainder of this report, θ2 will not be used directly, because it gives less intuitive insight. Instead a variable "tilt" will be used, which gives the angle between the second arm and the x-axis:

$$\mathrm{tilt} = \theta_1 + \theta_2$$

Controlling the robot arm in MATLAB

Setting boundaries for movement

Because the space inside the pipe and the construction of the robot arm itself restrict certain positions, boundaries need to be set for the positions the servos may move to. This is necessary because otherwise the arm could crash into the pipe wall or into itself, which could damage the robot arm. The boundaries that prevent the robot from crashing into itself were found and set manually.

The above-described model of the robot arm can be used to set the boundaries so that the arm does not crash into the pipe. In order to do this, the pipe walls have to be modelled. For this the orientation and inner diameter of the pipe, as well as the position of the robot relative to the pipe, need to be known. In a real-life situation these values would not be known, so they would need to be measured. Since in this case a fixed setup is used, these values are known.

Because the robot arm has a certain thickness, limits on how close the arm can come to the pipe walls have been set. Before the motors are moved, it is checked whether the angles are within the allowed limits and whether neither point A nor B crosses the limit towards the pipe wall. Only if all the conditions are fulfilled can the motors be moved. Figure 5.7 shows one allowed position and one forbidden position.

Figure 5.7: The figure shows one allowed and one forbidden position for the robot arm. The green lines are the outlines of the inner pipe walls. The yellow lines indicate the allowed limits towards the pipe walls. It can be seen that the lower point B (175 mm, 20 mm) is allowed, because points A and B do not cross the yellow limits. The upper point B (175 mm, 60 mm) is not allowed, because it crosses the yellow boundary.
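A minimal sketch of such a position check, assuming a horizontal pipe whose inner walls lie at heights yLow and yHigh in the arm's coordinate frame; all names and the margin handling are illustrative, and the manually found self-collision limits are not modelled here:

```matlab
% Check that both arm joints stay clear of the pipe walls by a margin.
function ok = positionAllowed(yA, yB, yLow, yHigh, margin)
    withinWalls = @(y) (y > yLow + margin) && (y < yHigh - margin);
    ok = withinWalls(yA) && withinWalls(yB);   % points A and B both clear
end
```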

Moving the arm

To move the servos, the angles need to be sent to the Arduino. The above-described model can be used to calculate the angles for a given position.

Before the arm can be moved to a certain position, it needs to be checked whether that position is allowed (see the paragraph above). If that is the case, the angles of θ1 and θ2 (in degrees) need to be mapped to the correct servo input values (between 1 and 1023 servo units).

To send the values, MATLAB needs to set up a connection to the Arduino, send the values for the respective servos and then close the Arduino communication.


Figure 5.8: Schematic plot of the current position of the robot arm inside the pipe

Whenever the servos are being moved, MATLAB can plot a schematic of the robot arm at that moment (see figure 5.8).

5.2 Image processing

The goal of the image processing is to be able to determine the positions to which the servos need to move so that the sensor will be centered in the pipe.

Finding the center of the pipe by using the white LEDs image

This part of the algorithm makes use of the fact that when white LEDs are lit up inside the pipe, the direct surrounding part of the pipe is lit up, while the parts which are further away stay in the dark. The center of the pipe is located in the middle of the dark area.

In this case it is assumed that the camera can look into the pipe. If the camera is too far away from the ideal position (i.e. the pipe is not visible), this algorithm cannot be applied (this will be discussed in chapter 6).

It is assumed that the current value of θ1 and the tilt of the camera are known at all times. In this case, the angles are set to a starting position; in a real-world situation a function would be needed to read out the current values of the servo positions and map them back to degrees.

The first step is to take a snapshot of the pipe inside with the white LEDs turned on. This picture is then undistorted. Then image processing is applied to extract the center of the pipe. The image is turned into a grayscale image, because the colors are not needed and this reduces the computational load. To smooth the image a median filter is applied. Then this image is binarized. The upper and lower limits of this binarized shape can be obtained and then the average between the upper and lower bounds can be computed. An approximately straight line is obtained. The mean value of this line gives an approximate value for the center of the pipe. Figure 5.9 shows the different steps of the algorithm.

The shape of the binarized image is not circular, but rather triangular. This is because the sensor has three white LEDs placed in a regular circular pattern around the camera. Even though the shape is not circular, it can be used to find the center of the pipe, because the LEDs are arranged regularly. In an ideal situation, a ring of more LEDs could be used to light up the pipe in a more regular way, so that a circular pattern could be obtained. Then it would also be possible to directly apply circle-finding algorithms. A sketch of the processing chain is shown below.
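A minimal MATLAB sketch of the chain just described; the filter size and the default imbinarize threshold are illustrative rather than the report's exact values:

```matlab
% Estimate the pipe center row from an undistorted white-LED snapshot img.
gray = rgb2gray(img);                    % colors are not needed
smoothed = medfilt2(gray, [5 5]);        % median filter against noise
bw = imbinarize(smoothed);               % segment the lit region
cols = size(bw, 2);
centerline = nan(1, cols);
for c = 1:cols                           % midpoint of upper and lower bound
    idx = find(bw(:, c));
    if ~isempty(idx)
        centerline(c) = (min(idx) + max(idx)) / 2;
    end
end
pipeCenterY = mean(centerline, 'omitnan');   % approximate pipe center row
```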


(a) Original image (b) Median filtered (c) Binarized (d) Outlines of binarized shape (e) Averaged line of outlines (f) Center marked in pipe

Figure 5.9: Figures (a)-(f) show what happens inside the algorithm. The first step is to take a picture of the LEDs inside the pipe (a), to undistort it and then to turn it to greyscale. A median filter is applied (b) before the image is binarized (c). Then the outlines of the binarized shape are traced (d) and the average value of this outline is calculated. This average is thresholded to remove noise (e). The median value of these averages is the middle of the pipe (f).


Extracting information from the polar transform of the red laser circle image

This part of the algorithm will be applied after the white LED algorithm. It uses the red laser circle which is projected into the pipe. The underlying idea is that the projected circle will be thicker at either the top or the bottom if the sensor is not perfectly aligned. From this, it can be calculated in which direction the tilt of the camera needs to be changed.

To calculate the thickness of the circle, a polar transform can be used. This maps the image from Cartesian coordinates to polar coordinates, so an approximately circular shape in Cartesian coordinates becomes an approximately straight line in polar coordinates. From this the thickness of the line can be extracted. In order to get a useful result from applying the polar transform, the center of the circle needs to be known. If the polar transform is applied at any point other than the center of the circle, the resulting line will look sinusoidal rather than straight and it is not possible to extract useful information from it. Figure 5.10 shows the effects of applying a polar transform. It also shows how a non-centered circle affects the result.

(a) Centered circle (b) Polar transform

(c) Non-centered circle (d) Polar transform

Figure 5.10: Images (a)-(d) show the effect of applying a polar transform. Image (a) is a perfectly centered circle. If the polar transform is applied around the center, the result is a perfectly straight line in polar coordinates (b). Image (c) shows a perfect, non-centered circle. Applying a polar transform at the center of the image results in a sinusoidal wave (d).
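For illustration, a minimal Cartesian-to-polar resampling around a known center (cx, cy); the report instead uses the ready-made File Exchange function by Gutierrez and Montoya (2007):

```matlab
% Resample an image on a polar grid: rows are radii, columns are angles.
function polarImg = toPolar(img, cx, cy, nR, nTheta)
    thetas = linspace(0, 2*pi, nTheta);        % sampled angles
    radii = (0:nR-1)';                         % sampled radii in pixels
    X = cx + radii * cos(thetas);              % nR x nTheta sample grid
    Y = cy + radii * sin(thetas);
    polarImg = interp2(double(img), X, Y, 'linear', 0);  % 0 outside image
end
```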

The first step in this algorithm is to take a picture of the laser circle inside the pipe and to undistort it. Then the image is turned into grayscale, because the colors are not needed to analyze the image. The image is then thresholded to reduce noise (for example from reflections of the laser in the pipe). To close gaps in the circle shape, dilation is applied. From this image, the outer outline of the circle can be detected. Then the average of the upper and lower bounds of the outlines is taken. The median of this average line is the y-position of the middle of the circle. In an ideal case with an ideally calibrated camera and setup, the x-position of the middle of the circle should correspond with the x-center position of the image. However, it was not possible to calibrate the setup in this way, so the x-value of the center is set manually. The steps of the algorithm up to here can be seen in figure 5.11.

(a) Gray-scaled, average filtered and thresholded laser image (b) Outlines of laser projection (c) Average of outlines (d) Marked center position of the laser circle

Figure 5.11: Finding the center of the laser circle. The algorithm first takes a snapshot of the laser projection in the pipe. Then the image is gray-scaled, thresholded and average filtered. After that, the outlines of the laser are traced and averaged. This average corresponds to the y-position of the center of the pipe.

Once the center of the circle is known, the polar transform can be applied. For this a function written by Gutierrez and Montoya (2007) was used. This function applies the polar transform around the center of the image. Because we only get useful results if the polar transform is applied at the center of the circle, it is necessary to shift the center of the circle to the center of the image before applying the polar transform.

The resulting image shows an approximately straight line. To smooth the line and remove potential gaps in the laser line, the image can be blurred a bit by applying an averaging filter; other smoothing filters would also work. After this the image is binarized. Then the upper and lower outline of the laser line can be found (see figure 5.12). From this the thickness of the laser line can be calculated. The relevant parts of the laser circle in this case are the top and bottom part of the circle, because the horizontal position is assumed to be fixed, so only the vertical thickness can change. To extract the thickness of the top and bottom part of the circle, the mean value over a small interval is taken from the straightened laser line at the points corresponding to the top and bottom of the circle.

Figure 5.12: Once the center of the laser circle has been found (see figure 5.11), the image is shifted so that it is centered around the found middle of the pipe. Then the polar transform is applied. This image is then average filtered and binarized. The outlines of the laser are then traced. From this the thickness of the laser line can be extracted.
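A minimal sketch of the thickness measurement on the polar image; polarImg comes from the polar transform, and topIdx/botIdx (the columns corresponding to the top and bottom of the circle) are assumed given by the previous steps:

```matlab
% Measure laser line thickness at the top and bottom of the circle.
bwPolar = imbinarize(imboxfilt(polarImg, 5));  % smooth, then binarize
thickness = sum(bwPolar, 1);                   % white pixels per angle column
topThickness = mean(thickness(topIdx-5:topIdx+5));  % mean over small interval
botThickness = mean(thickness(botIdx-5:botIdx+5));
```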

5.3 Connecting algorithm

The above parts described the steps of taking and undistorting images, moving the motors and applying image processing to extract data from the images separately. This part will describe how all the components interact. The algorithms which use the white LEDs and the red laser circle will be considered separately.

Figure 5.13: Schematic drawings of possible positions that the robot arm can be in. The red lines represent the pipe walls. The blue lines represent the robot arm. θ1 and the tilt can be adjusted.


Currently the robot arm is set to an arbitrary starting position. This is done because there are limits on the positions for which the algorithm works. In a real-world situation, the current angles would need to be read out.

Figure 5.13 shows the possible positions the robot arm can be in. θ1 and the tilt can each be too high, correct or too low. The goal of the algorithm is to get both θ1 and the tilt correct (see the middle subfigure). The interesting positions are the corner positions, because there neither θ1 nor the tilt is correct.

White LED algorithm

The goal of using the white LED image is to adjust the tilt of the sensor in such a way that the center of the pipe becomes centered in the image. Figure 5.14 shows the situations of the corners of figure 5.13. The figure also shows the wanted results of the white LED algorithm. It can be seen that only the tilt will be adjusted, while θ1 stays constant.

Figure 5.14: Schematic drawings of the possible starting positions of the robot arm and the positions the arm will be in after applying the white LED algorithm. The dotted blue line shows that the sensor’s view axis goes exactly to the center of the pipe.

Section 5.2 described how image processing is used to find the middle of the pipe in pixels. The next necessary step is to adjust the position of the camera based on the extracted information.

In an ideal case, the center of the image should correspond with the middle of the pipe after adjusting the sensor. Because there are misalignments in this case (e.g. due to an uncalibrated camera or small offsets in the position of the robot arm) there will be an offset between the found center of the pipe and the center of the image (even if the sensor is centered). Since this offset cannot be assumed to be known in a real-world situation, all calculations have been done with the center of the image instead.

The aim of this algorithm is to adjust the tilt of the sensor, thus to move point B of the robot arm, in such a way that the center of the pipe aligns with the center of the image.

The algorithm works iteratively. At the beginning the upper and lower limits for θ2 are set. Then the algorithm takes a picture of the white LEDs in the current position of the robot. In the next step the algorithm calculates the position of the center of the pipe inside the image. If the found middle of the pipe is above the center of the image, the tilt of the camera is too high, so the current position becomes the new upper limit. If the found middle is below the center of the image, the current position becomes the new lower limit. The sensor is then moved to a position halfway between the upper and lower limits, and the process is repeated. The algorithm stops once the interval between the upper and lower limits for θ2 gets smaller than 1 degree or once the difference in pixels between the found middle of the pipe and the center of the image is smaller than 5 pixels. It also stops once six iterations have been reached.

The result can be seen in figure 5.15. It shows the found middle of the pipe (red line) before and after applying the above-described algorithm. The yellow line is the position to which the red line should be moved.

(a) Before applying the algorithm (b) After applying the algorithm

Figure 5.15: The figures show the found center of the pipe (red line) before (a) and after (b) applying the written algorithm. The yellow line shows the y-center of the image. It can clearly be seen that our algorithm moves the motors correctly to center the middle of the pipe.

Red laser circle algorithm

The goal of using the red laser image is to move the robot arm so that the tilt of the sensor is parallel to the pipe axis. By applying the white LED algorithm, the y-position for B has been determined, thus this value will be kept constant while applying the laser algorithm.

From the image-processing algorithm the thicknesses of the top and bottom part of the laser circle have been found. Figure 5.16 shows how different tilts of the sensor affect the laser circle projection in the pipe. It can be seen that in the case of a near perfect tilt (see figure 5.16b), the thickness of the top and bottom part of the circle are about the same. If the tilt is too low (see figure 5.16a) the bottom part of the circle will be much thicker than the top. The opposite happens if the tilt is too high (see figure 5.16c).


(a) Tilt is too low (b) Correct tilt (c) Tilt is too high

Figure 5.16: The figures show the effect different tilts have on the laser projection (θ1 is fixed). It can clearly be seen that the tilt of the sensor influences how thick the line is at the top and bottom of the circle.

The thickness can thus be used to create an iterative algorithm to center the circle. The idea is that the top and bottom part should be equally thick in the end. In the algorithm θ1 will be adjusted, while keeping y_B constant.

To start with, certain limits for the θ1 angles have to be set. It has to be made sure that the robot arm does not crash into the pipe walls. In this case, the θ1 limits are -10 and 35 degrees with respect to the x-axis.

The algorithm then calculates the thickness of the bottom and top part of the circle and determines if the tilt needs to be moved up or down. If the tilt is too high, this means that θ1 has to be increased. The current value for θ1 will be set as the new upper θ1 limit, and if the tilt is too low, the current value for θ1 becomes the new lower θ1 limit. The algorithm then calculates the new value for θ1 by setting it to the middle value between the upper and lower limits. Because y_B has to be kept constant, a new value for θ2 has to be calculated. This is done by using the kinematics model described in section 5.1.1. The arm will then be moved to these newly calculated positions.

This process of setting the new θ1 limits and moving the sensor is repeated iteratively, so θ1 keeps approaching the ideal value. The iterations are stopped once a certain accuracy is reached. In this case, this is determined by the maximum precision of the servos, which can be set to one servo unit, equal to 0.29 degrees. The algorithm also stops once the difference between the upper and lower thickness of the circle is small enough (in this case less than 1 pixel). The iterations will also be stopped if a certain maximum number of iterations has been reached.
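A minimal sketch of this loop, mirroring the limit-update rule stated above; measureThickness and moveArmKeepingYB are hypothetical helpers for the polar-image measurement and the constant-y_B kinematics step:

```matlab
% Bisect theta1 between its limits until top and bottom thickness match.
th1Low = -10; th1High = 35;                  % theta1 limits in degrees
th1 = th1Start;                              % current theta1 (placeholder)
for it = 1:6                                 % maximum number of iterations
    [topT, botT] = measureThickness();       % top/bottom thickness in pixels
    if abs(topT - botT) < 1, break; end      % thicknesses equal enough
    if topT > botT                           % top thicker: tilt too high
        th1High = th1;                       % current value = new upper limit
    else                                     % bottom thicker: tilt too low
        th1Low = th1;                        % current value = new lower limit
    end
    if th1High - th1Low < 1, break; end      % interval below 1 degree
    th1 = (th1Low + th1High) / 2;            % halve the interval
    moveArmKeepingYB(th1);                   % recompute theta2 so yB is fixed
end
```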


6 Evaluation

6.1 Evaluation of the design requirements

The project can be evaluated by using the design requirements set up at the beginning of the project (see chapter 3).

The requirement of a small and lightweight sensor (see requirement 1) has been partially met. The sensor designed by Reiling has been optimized in size and weight and could thus be implemented. However, the sensor is too large to fit into pipes with diameters of less than 80-90 mm. The design of the sensor was not a focus of this project, so no work has been done to reduce its size.

The requirement of a good camera quality (see requirement 2) is partially met. The camera takes pictures at the specified resolution of 640x480 pixels, however due to the two lenses in series it is not possible to take a sharp image at every point. The project would be improved by using a camera that takes images of better quality.

The requirement of a wide enough field of view (see requirement 3) has not been met. It is not possible to detect a full circle with the camera that is used. This can be solved easily by replacing the current camera with a new camera with a bigger CCD. This could not be implemented due to time constraints.

As specified, the algorithm works for straight pipes of certain diameters (see requirement 4). The algorithm works for a pipe of 118 mm inner diameter, but it would also work for other diameters.

The robot arm that has been designed has two degrees of freedom, so requirement 5 has been reached.

Requirement 6 stated that the robot arm should be small and lightweight in a final product, however this will not be taken into account for the prototype. The current robot arm serves as a proof of concept. In a real product it would need to be reduced in size and weight.

Furthermore, the robot arm is fixed in place inside the pipe and cannot move along the z-axis. Thus, requirement 7 is reached.

Requirement 8, doing all processing in real time, has also been reached. The algorithm takes images and directly processes them and moves the motors. The images are not saved to analyze them at a later point.

The requirement of having an accurate image-processing algorithm (see requirement 9) will be discussed in sections 6.3 and 6.4.

The last requirement, optimizing the time to run the algorithm and center the sensor (see re- quirement 10), has, as specified, not been taken into account for the prototype. However, for a final product the algorithm has to be optimized in processing time. It would also be desirable to have an algorithm that could reach the final position in as few iterations as possible.

6.2 Number of iterations needed

Both the white LED and the red laser algorithm work iteratively. In both cases upper and lower limits are set for the possible positions (for the tilt in the case of the white LED algorithm and for θ1 in the case of the red laser algorithm).

The white LED algorithm ends once the difference between the found center of the pipe and the center of the image is small enough, when a certain number of iterations is reached, or once the interval between the upper and lower limits for the tilt is smaller than 1 degree. The starting limits for the tilt are chosen such that the camera can still capture the top and bottom part of the circle for this interval. Since the algorithm halves this interval in every iteration, the maximum number of iterations can be calculated. The interval is 13 degrees at the beginning. After one iteration this interval is half as big, measuring 6.5 degrees. Repeating this process means that after four iterations the interval is less than 1 degree. This means the maximum number of iterations with these initial angles for the upper and lower limit is four.

The red laser algorithm ends when the difference between the upper and lower thickness is small enough, when a certain number of iterations has been reached, or when the interval between the limits of θ1 is smaller than 1 degree. The limits for θ1 are set to -10 to 35 degrees, creating an interval of 45 degrees. The arm does not crash into the pipe for these values. Applying the same reasoning as above, the algorithm will stop after a maximum of six iterations.

In a real situation it would be desirable to reduce the number of iterations needed as far as possible. To do this it would for example be possible to not halve the interval in every iteration, but rather to use information from the images to make an initial guess about the correct position. For the white LED algorithm the difference in pixels could for example be converted into millimetres and then into angles for the tilt. This was tested in this project; however, due to the distortion introduced by the lenses no accurate conversion factor from pixels to millimetres could be found. For the red laser algorithm it might be possible to use the thicknesses of the circle to decide by how much θ1 needs to be adjusted.

6.3 Evaluation of the white LED algorithm

Figure 6.1 shows the results of the algorithm. At the beginning it can be seen that the center of the pipe and of the image do not overlap. After applying the algorithm, the centers overlap, as intended.

(a) Before applying the white LED algorithm (b) After applying the white LED algorithm

Figure 6.1: Figure (a) shows the starting position of the algorithm. The red line shows the found center of the pipe and the yellow line shows the center of the image. It can be seen that the center of the pipe and the image do not overlap. After applying the algorithm (b) the centers overlap. Thus the algorithm works successfully.

Tests have been done with the white LED algorithm to determine how accurately the algorithm can find the center of the pipe. The results can be seen in figure 6.2. It can be concluded that the algorithm leads to nearly the same results when the starting position is the same. It can also be seen from the results that if the starting angle for θ1 is the same but the starting tilts are different, the algorithm still leads to nearly the same results for the new tilt. This is as expected. Thus it can be concluded that the algorithm works. The slight differences in angles can be explained by slight changes in the light conditions or in the internal camera parameters.

Figure 6.2: Results of the testing of the white LED algorithm. The algorithm has been tested for certain starting configurations of θ1 and the tilt. It can be seen that for the same starting configurations the algorithm finds the same result for θ1 within +-1 degree. It can also be seen that for the same starting position of θ1 the tilt ends up at nearly the same values.

It has to be noted that the final position of the robot arm does not correspond to the ideally centered position. This is because there are offsets in the camera: since two lenses in series are used, the view axis cannot be determined exactly. For this algorithm to give accurate results, it is important that the view axis of the camera and the projection axis of the laser and LEDs are the same. If this were the case, the algorithm would lead to correct results.

6.4 Evaluation of the red laser algorithm

The red laser algorithm can position the robot arm in such a way that the y-position of B stays constant and the top and bottom part of the circle have the same thickness. The algorithm could accurately find these positions.

The red laser algorithm was tested for three starting configurations to determine whether the

algorithm leads to the same results when starting in the same initial position. The results can

be seen in figure 6.3. It can be seen that when the starting position is the same, the algorithm

leads to the same result. This means that the algorithm works accurately.


Figure 6.3: Results of the testing of the red laser algorithm. It can be seen that for the same starting positions of θ1 and the tilt, the same final positions for θ1 and the tilt are found.

6.5 Cases for which the algorithm will work

The implemented algorithm will not work in all cases. First of all, due to the simplifications made at the beginning of the project, the algorithm was only developed for straight pipes without bends, damages or obstacles.

Furthermore, there are restrictions on the angles for both θ1 and the tilt. This section will describe the cases for which the algorithm will work and the cases for which it will give no or wrong results.

Figure 6.4: Different cases for the white LED positions. In the first case, the middle of the pipe can be extracted perfectly. In the second case, the irregular lighting caused by the LEDs influences the result; however, the algorithm can still detect the center. In the third case, the influence of the irregular lighting becomes so big that the algorithm can no longer detect the center of the pipe accurately. In all cases, however, the general direction of the center of the pipe can be extracted.


The white LED algorithm works perfectly if the camera can see straight into the pipe (see the first case in figure 6.4). The image-processing algorithm can then extract a clean binarized shape and easily find the middle of this shape. If the camera view axis deviates further from the pipe axis, for example when the tilt becomes too high, the results become more inaccurate.

Because the white LED ring consists of 3 LEDs, the lighting of the inside of the pipe is non-homogeneous. This can be seen in figure 6.4 for the second and third case. While the algorithm can still detect the middle of the pipe, the dark parts caused by the non-homogeneous lighting influence the result. The result would be better if a full ring of LEDs had been used. However, the result of the algorithm (see the red lines in figure 6.4) still accurately shows the direction in which the sensor has to move, so it would still be possible to center the sensor iteratively. A minimal sketch of this step is given below.
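The centroid extraction described above can be illustrated with a short MATLAB fragment. This is only a minimal sketch under assumed conditions: the example file name and variable names are illustrative, and the actual implementation is listed in appendix C.1.

% Minimal sketch of the white LED centering step: binarize the frame and
% take the centroid of the bright pipe interior. Names and the example
% frame are assumptions; see appendix C.1 for the real implementation.
img  = imread('pipe_frame.png');        % example frame, assumed
gray = rgb2gray(img);                   % work on intensity only
bw   = imbinarize(gray);                % bright pipe interior -> white blob
bw   = bwareafilt(bw, 1);               % keep only the largest blob
stats = regionprops(bw, 'Centroid');    % centroid of the binarized shape
pipeCenter  = stats.Centroid;           % [x y] of the pipe center in pixels
imageCenter = [size(img,2) size(img,1)] / 2;
offset = pipeCenter - imageCenter;      % direction in which to move the sensor

The sign of offset indicates the direction of the corrective move; its magnitude shrinks as the sensor approaches the center.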

Figure 6.5: Different cases for the red laser positions. If a closed circle or a circle with only small gaps can be seen, the algorithm will accurately detect the y-position of the center. The algorithm will not work if big parts of the circle are missing (when the circle is not closed), or when either the top or the bottom part of the circle is not visible.


The red laser algorithm relies on detecting both the bottom and the top part of the circle. If the circle can be seen perfectly (see figure 6.5, case 1), the center of the circle can be extracted. The algorithm will also work if there are small gaps in the circle (see case 2). This is because the algorithm uses the mean of the averaged outlines. However, the algorithm will no longer work when big parts of the circle are missing (see case 3). The algorithm will also not work if either the top or the bottom part of the circle is missing (see case 4). The sketch below illustrates this averaging step.
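The following MATLAB fragment sketches how the vertical center could be computed from the averaged outlines of the two laser arcs. The color threshold, example file name and variable names are assumptions; the project's actual code is listed in appendix C.2.

% Sketch of the red laser step: estimate the vertical (y) center from the
% averaged outlines of the top and bottom arcs of the laser circle.
img = imread('laser_frame.png');                               % example frame, assumed
red = img(:,:,1) > 200 & img(:,:,2) < 100 & img(:,:,3) < 100;  % strongly red pixels
[rows, ~] = find(red);                                         % y-coordinates of laser pixels
mid = mean(rows);                                              % rough split between the arcs
topY    = mean(rows(rows <  mid));                             % average y of the top arc
bottomY = mean(rows(rows >= mid));                             % average y of the bottom arc
yCenter = (topY + bottomY) / 2;                                % mean of the averaged outlines
yError  = yCenter - size(img,1)/2;                             % offset from the image center

Because each arc is averaged before the two values are combined, small gaps in the circle barely shift yCenter, which matches the behavior seen in case 2.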

So in order to ensure that the algorithm will work correctly, limits on the possible tilt angles have been set. The tilt should stay between -3 and 10 degrees; these are the limits for which the camera can see the full circle. If the camera were replaced with a camera with a wider field of view, the limits could be widened.

6.6 Overall evaluation of the accuracy of centering the sensor

The goal of this project was to build a device that can control the position of the sensor and to write an algorithm that uses image-processing techniques to center the sensor in the pipe. It is not possible to quantify how accurately the created algorithm can center the sensor.

This is because there are too many uncertainties in the project, introduced by the setup used. The robot arm was not centered exactly in the horizontal position in the pipe. Also, the camera was not calibrated, which leads to offsets in the result. Especially problematic is that the view axis of the camera does not overlap with the axis of the white LED and red laser projections. All these uncertainties make it impossible to center the sensor exactly.

However, the tests showed that the algorithm would work if these uncertainties were removed or at least greatly reduced. This would require a good calibration of the camera and centering of the robot in the horizontal position in the pipe. A calibration sketch is given below.
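As an illustration of the calibration recommended here, MATLAB's Computer Vision Toolbox can estimate the camera parameters from checkerboard images and remove the lens distortion. This is a sketch under assumed inputs; the image folder and square size are placeholders.

% Sketch of a checkerboard camera calibration followed by undistortion.
% The image folder and the 10 mm square size are assumptions.
files = dir(fullfile('calibration', '*.png'));
names = fullfile({files.folder}, {files.name});
[imagePoints, boardSize] = detectCheckerboardPoints(names);
squareSize  = 10;                                          % square size in mm, assumed
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
params      = estimateCameraParameters(imagePoints, worldPoints);
undistorted = undistortImage(imread(names{1}), params);    % distortion-free frame

Running the centering algorithm on undistorted frames would remove one of the main sources of offset identified above.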


7 Conclusion and Recommendations

7.1 Conclusion

This bachelor project led to the design and implementation of a centering algorithm for the structured laser sensor of the PIRATE. Furthermore, a robot arm has been built that can control the position of the sensor and center it in real time based on the results of the algorithm. Thus the overall goal of the project has been reached, and most of the design requirements have been met.

7.2 Recommendations

Future work based on this project could look into evaluating the built algorithm with a better setup. As mentioned in previous chapters, the uncertainties and offsets in the setup due to misalignments and an uncalibrated camera can lead to big offsets in the result. Furthermore, the small field of view of the camera restricted the number of cases the algorithm could handle. Any future work on centering algorithms for the structured laser sensor should thus first aim to improve the setup to increase the accuracy. This would first of all mean replacing the current camera setup with a calibrated camera with a wider field of view. The robot arm should also be reduced in size.

Another aspect that has to be looked into is how to connect the robot arm to the PIRATE robot. The centering of the sensor needs to happen while the PIRATE is moving; in this project, no tests have been done on a moving PIRATE.

Future work also needs to focus on improving the algorithm. The current algorithm was written without focusing on efficiency and processing time. In the final PIRATE the power supply will be limited, so centering the sensor should be as efficient as possible. The current algorithm works iteratively (a sketch of this loop is given below); an ideal algorithm would center the sensor in one move.
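The iterative structure of the current algorithm can be summarized by the following MATLAB sketch. computeOffset and moveServos are hypothetical placeholders for the routines in appendices B and C, and the tolerance and iteration cap are assumed values.

% Sketch of the iterative centering loop. computeOffset and moveServos
% are placeholders; tol and maxIterations are assumed values.
cam = webcam();                    % USB webcam of the sensor
tol = 5;                           % pixel tolerance for "centered", assumed
maxIterations = 20;                % upper bound on corrective moves, assumed
for k = 1:maxIterations
    img    = snapshot(cam);        % grab the current frame
    offset = computeOffset(img);   % white LED or red laser step
    if norm(offset) < tol
        break                      % sensor is considered centered
    end
    moveServos(offset);            % small corrective move of the arm
end

An ideal one-move algorithm would instead compute the final arm pose directly from a single frame, saving both time and power.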

The algorithm also needs to be improved so that it can work in all possible configurations. The current algorithm only works for certain angle configurations (in this project, big limitations were the small field of view of the camera and the size of the robot arm). Until now only straight pipes were considered, so the algorithm needs to be extended to also work for bends, T-bends etc.


A Design of the Robot Arm

The robot arm consists of several parts: a platform on which the arm stands, two servo motors that are connected together, a sensor holder and the structured laser light sensor, which includes a ring of white LEDs, a laser cone projector and a USB webcam with two lenses connected in series (see figure A.1). The LEDs and laser are controlled by an Arduino. The servos are powered by a 12 V power supply.

Figure A.1: Setup of the robot arm. It consists of a platform, two servos, a sensor holder and the structured light sensor.

Platform

The platform has been 3D modeled in FreeCAD, an open source CAD software. The platform has been designed in such a way that the separate parts can be clicked into each other, which locks the parts in place. The platform fits perfectly into the T-bend, so that it is fixed in place. The modeled parts can be seen in figure A.2.


Figure A.2: All laser cut parts of the platform. The parts connect to each other by screws or by locking into each other.

Servos

The robot arm is made up of two Dynamixel AX-12A servos. They are connected together by using the connector supports that come with the Dynamixel servos. These parts are screwed together. The arm is connected to the platform.

Sensor holder

The structured laser sensor needs to be connected to the robot arm, but the sensor itself has no connection pieces. Thus, a sensor holder had to be designed. The holder was designed in FreeCAD (see figure A.3) and then 3D printed. The holder is a cylindrical part that fits perfectly around the structured laser sensor; by turning the sensor, it locks in place inside the holder. The holder is connected to the arm by screws and bolts.

Structured Laser Light sensor

The structured light sensor has been designed by Reiling. During the project, the webcam of the sensor broke and had to be replaced. Because this new camera was slightly different in shape (a square instead of a circular base), the top part of the sensor had to be redesigned. The original SolidWorks files have been slightly adjusted (see figure A.4). The part is designed such that the laser cone is not interrupted. The part has been 3D printed and placed onto the original sensor.


Figure A.3: The 3D model of the sensor holder in FreeCAD. The bottom part of the sensor fits perfectly into the holder.

Figure A.4: The left image shows the original design and the right image the new design. The top part of the sensor had to be redesigned to fit the new camera. The original design (Reiling, 2014) has only been slightly adjusted.

Because the CCD of the new camera is significantly smaller than that of the original camera, the field of view of the camera was significantly smaller as well. This led to the problem that the laser circle could not be seen anymore. To solve this, another fish-eye lens was placed in series with the already present lens; the two lenses are connected by a small 3D printed ring. It is not possible to align both lenses perfectly, so there will always be offsets in the camera image. Because two lenses are now used, more distortion is added to the images than before.

Arduino

The servos, the white LEDs and the laser are controlled via an Arduino Mega 2560.
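For reference, the LEDs and laser can be switched from MATLAB through the MATLAB Support Package for Arduino hardware. The serial port and pin numbers below are assumptions; the actual connection is described in appendix B.

% Illustration of toggling the LEDs and laser from MATLAB via the
% Arduino support package. Port and pin numbers are assumptions.
a = arduino('COM3', 'Mega2560');   % connect to the Arduino Mega 2560
writeDigitalPin(a, 'D22', 1);      % switch the white LED ring on
writeDigitalPin(a, 'D24', 1);      % switch the laser on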
