
Numerically Controlled Machining From Three

Dimensional Machine Vision Data

ACCEPTED
FACULTY OF GRADUATE STUDIES

by

Colin Bradley

B.A.Sc., University of British Columbia, 1985

M.Sc. Heriot-Watt University, 1987

A Dissertation Submitted in Partial Fulfillment of the

Requirements for the Degree of

DOCTOR OF PHILOSOPHY


in the Department of Mechanical Engineering

We accept this dissertation as conforming

to the required standard.

Dr. G. W. Vickers, Supervisor (Mechanical Engineering)

Dr. J. Wesner, Member (Mechanical Engineering)

Dr. G. F. McLean, Member (Mechanical Engineering)

Dr. R. L. Kirlin, Outside Member (Electrical and Computer Engineering)

Dr. W. El-Maraghy, External Examiner (Mechanical Engineering)

© Colin Bradley, 1992, University of Victoria

All rights reserved. The dissertation may not be reproduced in whole or in part, by mimeograph or other means, without permission of the author.


Supervisor: Dr. G. W. Vickers

Abstract

Prototyping is an essential step in the manufacture of many objects, both consumer and industrial. A fundamental step in this process is the definition of the three dimensional form of the object shape; for example, a designer's model created in clay or wood. A three dimensional vision system (range sensor) offers the advantage of speed in defining shapes compared to traditional tactile sensing. In this thesis, the viability of using range sensors is demonstrated by developing a rapid prototyping system comprised of a laser-based range sensor and software that creates a computer model of the object. One particularly important application of the computer model is the generation of a control program, or toolpath, for a computer-numerically-controlled (CNC) machine tool. This is an important application in die and mold manufacture, for example the manufacture of molds for automobile components from full scale models. The computer model can also be incorporated into computer aided design and analysis programs.

The most suitable vision system for rapid prototyping applications has been selected from a group of available sensors and integrated with a coordinate measuring machine that acts as a translation system. The range data produced have been utilised in a multi-patch surface modelling approach in order to model objects where many types of surface patches, such as quadric and free form, are blended together on one object. This technique has been demonstrated to provide accurate and smooth surface reconstructions that are suitable for generating CNC toolpaths. The viability of machining from multiple surface patch models has been demonstrated and, in addition, a new technique for the machining of free form surfaces has been developed. An alternative method for fully defining complex three dimensional shapes, employing a rotary sensing of the object, is also presented that permits the efficient generation of CNC machine toolpaths.

Examiners:

Dr. G. W. Vickers, Supervisor (Mechanical Engineering)

Dr. J. Wesner, Member (Mechanical Engineering)

Dr. G. F. McLean, Member (Mechanical Engineering)

Dr. R. L. Kirlin, Outside Member (Electrical and Computer Engineering)


Table of Contents

Abstract ii

Table of Contents iii

List of Tables vi

List of Figures vii

Acknowledgements xi

Dedication xii

Chapter 1 Introduction 1

1.1 Surface Measurement Techniques 3

1.1.1 Fundamental Measurement Technologies 3

1.1.2 Commercial Range Sensors 6

1.1.3 Comparison of Commercial Range Sensors 10

1.2 The Hyscan 45C Vision System 12

1.2.1 Interfacing with a Coordinate Measuring Machine 12

1.2.2 Installation of the Vision System 12

1.2.3 Calibration and Operation of the Vision System 13
1.2.4 Operating Characteristics of the Vision System 14

1.3 Software Design 15


1.3.2 Surface Patch Boundary Identification 18

1.3.3 Software Engineering Issues 19

1.4 Layout of the Thesis 19

Chapter 2 Quadric Surface Reconstruction 29

2.1 Fitting Planes to 3-D Data 31

2.2 Fitting Cylinders to 3-D Data 35

2.3 Fitting Spheres to 3-D Data 38

2.4 Robust Estimation of Quadric Surfaces 41

2.4.1 Robust Plane Fitting 42

2.4.2 M-estimation 45

2.4.3 Robust Quadric Surface Modelling 47

2.4.4 Performance of the Robust Quadric Surface Modelling Scheme 51

Chapter 3 Free Form Surface Reconstruction 60

3.1 Modelling Large, Scattered Range Data Sets 62

3.2 Description of Free Form Surface Modelling Techniques 63

3.2.1 Spatial Binning 64

3.2.2 Shepard's Interpolation 65

3.2.3 Thin Plate Splines 67

3.2.4 Hardy's Multiquadric Interpolation 68

3.2.5 Two-Stage Surface Fitting 70

3.3 Local Radial Basis Function Modelling 71

3.4 Evaluation of the Free Form Surface Modelling Techniques 73

3.4.1 Spatial Binning 74


3.4.3 Thin Plate Splines 75

3.4.4 Hardy's Multiquadric Interpolation 76

3.4.5 Comparison of the Methods 77

3.5 Testing of the Local Modelling Method 79

Chapter 4 CNC Machining of Reconstructed Surfaces 93

4.1 Toolpath Formation for Quadric Surfaces 95

4.1.1 Part Programs for Planar Patches 95

4.1.2 Part Programs for Cylindrical Patches 96

4.1.3 Part Programs for Spherical Patches 96

4.2 Toolpath Formation for Free Form Surfaces 96

4.3 Laser Scanning and CNC Machining Results 98

Chapter 5 Free-Form Surface Machining Through Circular Arc Interpolation 108

5.1 Experimental Evaluation of Cutter Motion 109

5.1.1 Cutter Motion in Linear Interpolation Mode 110
5.1.2 Cutter Motion in Circular Interpolation Mode 112
5.2 Line-Arc Segmentation for Toolpath Formation 113

5.3 Examples of the Segmentation Algorithm 115

Chapter 6 Quasi-Helical Toolpath Definition 131

6.1 Rotary Laser Scanning System 133

6.2 Surface Model Generation 134

6.3 Quasi-Helical Toolpath Formation 136


Chapter 7 Conclusions and Future Research 151


List of Tables

1.1 Spatial and speed performance factors of range sensors. 11
2.1 Effect of Gaussian noise on estimating planar parameters. 35
2.2 Effect of Gaussian noise on estimating cylindrical parameters. 38
2.3 Effect of Gaussian noise on estimating spherical parameters. 40
2.4 Effect of number of outliers on robust planar surface fitting. 44
2.5 Performance of the robust estimator in modelling quadric surface entities. 52

3.1 Accuracy of spatial binning method. 74

3.2 Accuracy of Shepard's interpolation method. 75

3.3 Accuracy of the local thin plate spline method. 76

3.4 Accuracy of the local multiquadric method. 77

3.5 Comparison of the free form surface reconstruction methods. 78
5.1 Summary of experimental velocity measurements for a total distance moved of 15.24 mm at a feedrate of 42.4 mm/s. 110

5.2 Summary of results of segmentation algorithm. 116


List of Figures

1.1 (a) Triangulation principle for a point range sensor. 21
    (b) Range measurement equation for triangulation techniques. 21

1.2 Principle of operation of the White Scanner. 22

1.3 Three-dimensional range sensor employing synchronised scanning. 23
1.4 Camera and projector geometry for the EOIS range sensor. 24
1.5 Interface of the laser scanner with a CNC coordinate measuring machine. 24
1.6 Installation error of the scanner head with respect to the CMM's axes. 25

1.7 Laser scanner standoff and depth of field. 26

1.8 Accuracy evaluation from a scan line of a high-precision ball bearing. 27

1.9 Flowchart of interactive surface fitting. 28

2.1 Photograph of planar patch fitted to cloud data. 53

2.2 Error norm and derivative for L2, L1 and Huber estimators. 54

2.3 Redescending psi-function. 55

2.4 Cutoff selection for Huber psi-function. 56

2.5 Breakdown point of Huber robust estimator. 57

2.6 Breakdown point of redescending robust estimator. 58

2.7 (a) Data on sphere surface contaminated with outliers. 59
    (b) Modelled surface after outliers have been removed. 59
3.1 Formation of local region modelling cells for the multiquadric method. 80

3.2 Flowchart of the local region modelling algorithm. 81

3.3 Test surfaces for evaluating free form modelling schemes. 82

3.4 Test surfaces modelled using spatial binning. 83

3.5 Effect of bin size on accuracy of spatial binning. 85


3.7 Effect of local region radius on accuracy of Shepard's method. 87
3.8 Test surfaces modelled using the multiquadric method. 88
3.9 Effect of the parameter b on reconstruction accuracy for the multiquadric method. 89
3.10 Effect of the parameter b on reconstruction smoothness for the multiquadric method. 90

3.11 Test surfaces modelled using the thin plate spline method. 91

3.12 Photograph of honeycomb material. 92

4.1 Calculation of cutter offset for generalised shaped end mill. 100

4.2 (a) CNC toolpath formation for a planar patch. 100

(b) CNC toolpath formation for a cylindrical patch. 100
(c) CNC toolpath formation for a spherical patch. 100
4.3 Reconstructed surface patch prior to toolpath generation. 102
4.4 Comparison of analytic and re-machined free-form surfaces. 103
4.5 CNC toolpath for the free-form surface patch of Figure 4.3. 104

4.6 CNC toolpaths for quadric surface patches. 105

4.7 Photographs of the three machined quadric surface patches. 106
4.8 Telephone handset reconstructed from five surface patches. 107
5.1 Experimental arrangement for Y-axis velocity measurements. 119
5.2 (a) Velocity profile for Y-axis linear interpolation. 120
    (b) Velocity profile for Y-axis linear interpolation. 120
    (c) Velocity profile for Y-axis linear interpolation. 121
    (d) Velocity profile for Y-axis linear interpolation. 121
5.3 (a) Velocity profile for Y-axis circular interpolation. 122
    (b) Velocity profile for Y-axis circular interpolation. 122
5.4 Flowchart of the line-arc segmentation algorithm. 123
5.5 Tolerance between a circular arc and surface data points. 124
5.6 Plot of data and segmented toolpath for a cross-section through a Francis turbine blade. 125
5.7 Plot of data and segmented toolpath for a cross-section through a torso. 126
5.8 Plot of data and segmented toolpath for a cross-section through a gear. 127


5.9 In-plane adjustment of cutter offset points. 128
5.10 Photograph of a toolpath created using the line-arc segmentation algorithm. 129
5.11 Machining time comparison of point-to-point and arc-line methods for a 50 x 50 test surface. 130

6.1 Orientation of the scanner head and object with respect to the CNC machine. 139
6.2 Overlapping scan line sections on an object's surface. 140
6.3 Spline control points and helical toolpath for a compound curvature object. 141

6.4 Spline control points on the perfume bottle. 142

6.5 Wireframe representation of the small perfume bottle. 143

6.6 The quasi-helical machining process. 144

6.7 Photograph of quasi-helical mill turning. 145

6.8 Relation of cusp height to tool radius and feedrate per revolution. 146
6.9 Part toolpath on workstation prior to CNC machining. 147
6.10 Photograph of small bottle machined using the quasi-helical method. 148

6.11 Geometry of tool and workpiece for interference avoidance. 149
6.12 Calculation of longitudinal interference avoidance. 150

Acknowledgements

The author would like to thank Professor G. W. Vickers for his support, encouragement, and guidance throughout the course of this work; without his assistance it would not have been possible. Many other people have had input into this work; however, some are worthy of a special mention. Minh Ly and Arthur Makosinski provided technical assistance; Jamie Weir, Chris Jones, and Brian Corrie worked on many aspects of the computer programming; my peers Peter Wild and Rolf Oetter gave freely of advice and ideas. Thank you all.


I dedicate this thesis to Judith and to Norman and Joan who have supported me throughout this endeavour.


Chapter 1

Introduction

In engineering design, where style, ergonomic and/or aerodynamic factors are important, prototype sculptured clay models are often created by hand in design studios [1]. Rapid prototyping of such models is then needed in order to manufacture the product in minimum time. Ideally, this entails automatic surface digitization, conversion of data to computer based models (compatible with current CAD/CAM systems) and computer-numerically-controlled (CNC) manufacture in the most appropriate form - a process which is collectively termed "reverse engineering". Currently, this prototyping is done through measurements taken manually by a coordinate measuring machine (CMM). This process, while accurate, is laborious, slow, and not ideally suited for defining intricate surface patch boundaries on an object. The surface shape in the majority of these applications (from automobile body panels through to household appliances) usually contains many separate and distinct features, including free-form surface patches as well as planes, cylinders and other quadric surfaces, all blending together to create a stylish effect.

Computer vision systems, capable of measuring three-dimensional points on a surface, can be applied to reverse engineering. They have many beneficial features which, if incorporated into rapid prototyping tasks, would increase the efficiency of the


laser-based. The vision system employs a laser source, optics and a light sensitive detector to measure the distance between the sensor and the surface. The most common laser technique employs the triangular geometry formed by the locations of the laser source, diffusely reflected laser beam, focussing lens and imaged laser spot on the light sensitive detector. Compared to CMM touch probes, laser-based surface measurement has the advantages of speed, non-contact sensing and immediate storage of the measured data into a computer file. The primary limitations of computer vision systems are cost and the tradeoff between the accuracy and the depth of field of the range sensor. In this work, several types of computer vision technology were evaluated for their suitability as range sensors within a computer integrated rapid prototyping system.

In rapid prototyping applications, the data generated by a laser-based range sensor will be used either for controlling CNC machine tools or for incorporating into a CAD system. Direct CNC machining of laser scanned data is not possible because the set of digitised surface points does not necessarily also define a suitable path for the cutting tool to follow. If the surface data generated is not smooth or makes sudden jumps in height between regions on a surface, then the cutter path will be erratic and unsuitable for creating an accurate reproduction of the object's surface. Therefore, it is essential to intelligently process the data generated by the range sensor in order that smooth and accurate reconstructions of the constituent surface patches of an object are attained.

In this thesis an investigation into new approaches for using laser-based range sensors (also called laser scanners) for defining and machining prototype models is presented. The integration of a laser-based vision system with a CNC machining centre, or CMM, is discussed. Methods for segmenting the raw scanned data into patches, reconstructing geometric features such as planes, cylinders and spheres, or any other general quadric surface, and approximating compound curved surfaces within patch boundaries are described. A procedure for suitably offsetting the cutting tool from the modelled patch surfaces and generating CNC cutter paths, for generalised shaped end-milling cutters, is given. An additional approach to defining and machining curved surface models using a quasi-helical definition and machining technique is outlined that employs a rotary stepping table. The rapid prototyping of a number of components was undertaken to demonstrate the feasibility of the developed techniques.

1.1 Surface Measurement Techniques

The majority of machine vision applications in manufacturing employ two-dimensional, video camera-based sensors that use the reflected light energy from a scene in order to obtain shape information or control processes. These methods and the associated image processing software are becoming widely used, particularly for inspection and quality control tasks. Comparatively, three-dimensional (3-D) range sensors are only just emerging from the research stage, and the spectrum of applications is still narrow. For example, laser range finders have found a small niche in the automobile industry; the General Motors Design Centre (Warren, Michigan) uses laser range sensors to define prototype model cross-sectional profiles [2]. The wood products industry in British Columbia is actively using laser range sensors for real time log profile measurement.

1.1.1 Fundamental Measurement Technologies

Detailed explanations and comparisons of the fundamental options available for constructing range sensors are given by Strand [3] and by Besl [4]. A summary of the techniques suitable for defining surfaces for reverse engineering applications is given below.


Coordinate Measuring Machines

Coordinate measuring machines employ a touch sensitive probe mounted on the vertical arm of a three-axis, gantry style translation system. The touch probe records the precise 3-D location of the contact point with the surface. The probe can be manually moved or it can be controlled via a joystick and servo motor system. The most recent CMMs are integrated with a computer and software dedicated to locating the coordinates of the contact points. A typical accuracy specification of a CMM is 4 microns in a working volume of 350 x 450 x 550 mm. Common industrial applications of a CMM are measurement for quality control and part inspection. Two disadvantages of touch probe measurements are the slow data collection rate (1 point per second maximum) and the tendency of the probe to deform the surface of clay models at the contact point.

Moire Contourography

Moire contourography is a technique that uses the projected image of a grid on a surface to produce a low spatial frequency pattern by viewing the original surface pattern through a second similar grid (see Pirodda [5] and Reid, Rixon and Messer [6] for comprehensive reviews of this technology). An underlying smooth surface modulates the moire pattern in a manner that enables depth, relative to the unmodulated pattern, to be extracted by registering the low frequency moire pattern to the surface. The spacings between the patterns (analogous to contours on a map) determine depth increments. A major advantage of the current video camera-based techniques is the ability to acquire a "range snapshot" without resorting to mechanical scanning methods. However, moire contourography methods are restricted to defining smoothly varying curved surfaces. The accuracy of moire systems is dependent on the field of view, the resolution of the video camera and the spacing of the structured light pattern projected onto the surface. A typical laboratory setup with a 640 x 480 pixel camera, positioned 1.0 metre above the surface, and a one line/millimetre projection grating has an accuracy of approximately +/- 0.05 mm.

Triangulation

Triangulation can be subdivided into passive and active methods. A passive scheme utilises the ambient light reflected off an object; whereas in active triangulation, a structured light pattern is projected into the scene. Stereo-disparity based approaches to range finding are passive [7]. They use the disparity between the same feature in two separate images to form the baseline distance for a triangulation calculation. Stereo photogrammetry is similar to the more modern stereo disparity technique; however, it uses two photographic plates instead of digital images. Skilled operators digitise a pair of photographs taken from nearby positions and calculate the range at a point from the parallax revealed in the two photographic images. Duncan [8] used photogrammetry for digitising and machining the surface profile of a human face. A photogrammetry system developed by the Renault Seri automobile company for the digitisation of car body panels had a working volume of 4 m x 4 m x 1 m and an accuracy of 0.5 mm [9].

In active triangulation the structured light can be a spot of laser light or one or more lines of laser light [10]. As the light structure form is known in advance, the triangulation problem reduces to one of locating the reflected light pattern on a suitable detector. The method of calculating the range value for a spot of laser light projected onto a surface is illustrated in Figure 1.1 (a). Given the baseline separation (b) between the laser source and the detector array and the image triangle height (h), the range (z) to the surface can be calculated from the deflection on the detector array (dx) by

z = bh / dx    (1.1)

It is apparent that uniform steps in dx yield non-uniform steps in z. Therefore, highest accuracy is obtained for small values of z (largest deflection dx). The range resolution tradeoff is common to all triangulation-based range sensors. As detailed in Besl [4], triangulation-based range sensors can have accuracies as good as 25 microns over depths of field of 6.25 mm or, alternatively, have accuracies of several millimetres over depths of field of several meters.
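To make the resolution tradeoff concrete, the short sketch below evaluates equation (1.1) and its sensitivity dz/d(dx) = -bh/dx² for a hypothetical sensor geometry. The baseline, image triangle height and deflection values are assumed for illustration only; they are not the specifications of any of the sensors discussed in this chapter.

```python
# Illustrative sketch of the triangulation relation z = b*h/dx (equation 1.1).
# The baseline b, image-triangle height h and detector deflections dx are
# assumed values, not the geometry of the Hyscan 45C or any other sensor.

def triangulation_range(b_mm: float, h_mm: float, dx_mm: float) -> float:
    """Range to the surface for a detector deflection dx."""
    return b_mm * h_mm / dx_mm

def range_resolution(b_mm: float, h_mm: float, dx_mm: float) -> float:
    """Sensitivity dz/d(dx) = -b*h/dx**2: equal detector steps give
    increasingly coarse range steps as z grows (dx shrinks)."""
    return -b_mm * h_mm / dx_mm ** 2

b, h = 50.0, 20.0                       # assumed baseline and triangle height (mm)
for dx in (10.0, 5.0, 2.0, 1.0):        # assumed detector deflections (mm)
    z = triangulation_range(b, h, dx)
    dz = range_resolution(b, h, dx)
    print(f"dx = {dx:4.1f} mm  ->  z = {z:7.1f} mm,  dz/d(dx) = {dz:9.1f}")
```

The printout makes the point of the paragraph above explicit: uniform steps in dx map to non-uniform, rapidly coarsening steps in z as the range increases.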

Other techniques, such as interferometry, laser radar, focussing and diffraction, can be employed to determine range to an object. However, as highlighted in the reviews of Besl and Strand, none are suitable for application as a general purpose range sensor.

1.1.2 Commercial Range Sensors

In determining the computer vision system most suitable for use in reverse engineering applications, a group of commercially available 3-D shape sensing systems was evaluated. A description and typical applications of each 3-D range sensor are given below:

Vigitiser

Manufacturer: Okada Machinery Corporation, Japan
Principle of Operation: Laser triangulation


The Vigitiser is a laser-based, triangulation point sensor that is housed in a cylindrical container mounted in the tool holder of a CNC machining centre. The laser beam is focussed onto the object's surface, and the diffuse reflection is collected by an annular lens mounted on the front of the housing. The diffuse spot is imaged onto three linear CCD arrays, and the average deflection on the arrays is used to compute the range. The laser point sensor, mounted in the tool holder of the CNC machine, gathers surface range data points as it is translated over the object. A personal computer is used to integrate the range data from the sensor with the positional data gathered from the CNC machine tool's servo motors. This sensor is described by Saito and Miyoshi [11]. The sensor is accurate (+/- 0.05 mm) but limited with regard to its data collection rate and fixed viewing angle.

White Scanner

Manufacturer: Technical Arts Inc., Seattle, WA, U.S.A.
Principle of Operation: Laser triangulation

Typical Applications: Printed circuit board inspection, tubing inspection

The White Scanner generates a plane of laser light by passing the laser beam through a cylindrical lens mounted on the front of the laser source, as illustrated in Figure 1.2. A line of laser light is formed as the plane of light strikes the object's surface, and this line is imaged onto the rectangular CCD array of a video camera. As the line is traversed over and deformed by the shape of the object, a triangulation calculation is made for each pixel on the camera's CCD array that is illuminated by the deformed laser line. In this manner, cross-sectional range profiles of the object are constructed. The sensor has a good accuracy specification (+/- 0.05 mm) but is inflexible due to the fixed geometry between the laser source and camera.

Hyscan 45C

Manufacturer: Hymarc Ltd., Ottawa, Ontario
Principle of Operation: Laser triangulation

Typical Applications: Shape sensing, inspection, research in computer vision

A focussed laser spot is linearly scanned across a surface, by means of a two-sided mirror, and imaged onto a linear photosensitive array, as shown in Figure 1.3. The angular position of the two-sided mirror and the location of the imaged spot on the photosensitive array are used by the data collection computer to calculate range values. The arrangement of the source and detector mirrors and the rotating two-sided mirror enables a synchronised scanning technique [12] to be employed in the Hyscan 45C range sensor. This arrangement is unique in that it allows high accuracy data to be collected with a physically small triangulation base (although an effectively wide optical base is maintained), thus enabling the scanner head to be kept physically compact, which is particularly suitable for CMM or CNC installations. Another advantage is the improved ambient light noise immunity due to the small instantaneous field of view. The scanner head images a spot of laser light, as compared to a laser line as in the Technical Arts White Scanner. Also, the range sensor's viewing angle (with respect to the Z-axis) can be changed while scanning an object. The scanner computer will integrate data collected from numerous viewing angles with respect to one reference point. The Hyscan 45C laser scanner provides a dense surface digitization of an object by combining the range sensor data and the positional data gathered from the servo motors of the translation system.


RVSI

Manufacturer: Robotic Vision System Inc., New York, N.Y., U.S.A.
Principle of Operation: Laser triangulation

Typical Applications: Custom system for measuring propeller surface form

This sensor is very similar to the Hyscan 45C in its principle of operation, method of construction, physical size, and operating specifications. It is more accurate than the Hyscan sensor, although it does have a shorter depth of field, and has the ability to adjust the viewing angle while maintaining a data set referenced to one point. It also requires a three-axis translation system to move the range sensor in a 3-D work volume.

Cyberware 4020/PS

Manufacturer: Cyberware Laboratory Inc., Monterey, C.A., U.S.A.
Principle of Operation: Laser triangulation

Typical Applications: Human face measurement for reconstructive and cosmetic surgery

The Cyberware system uses practically the same technique as the Technical Arts White Scanner. However, a slight improvement is evident as the Cyberware sensor optically combines two views of the laser line, on the object's surface, prior to imaging on the CCD camera. This optical design helps to alleviate shadowing and maximise the capture of complex shapes.

EOIS MK VII

Manufacturer: Electro-Optical Information Systems, Santa Monica, C.A., U.S.A.
Principle of Operation: Moire interferometry

The EOIS sensor comprises a projection unit and a CCD video camera, as illustrated in Figure 1.4. The illumination unit projects a sinusoidal fringe pattern of light-to-dark intensities onto the surface, which is then viewed by the camera from a fixed location. The fringe pattern has distortions corresponding to the surface form. The phase shift of the distorted pattern from the undistorted case can be used to calculate range information at each pixel location. This technique is a variation of moire contourography and also has the benefit of capturing a "range snapshot". However, there is no facility for integrating data captured from multiple images. This range sensor is also limited to measuring surfaces that have no sudden changes in height, as these can lead to problems in the software that decodes the phase information from the intensity patterns.

1.1.3 Comparison of Commercial Range Sensors

The accuracy, data collection speed, depth of field, and number of data points collected are summarised in Table 1.1. Accuracy specifications are comparable for all systems; however, the better accuracy of the RVSI sensor can be attributed to its decreased depth of field. The Vigitiser sensor also has a higher accuracy specification, which is due to the configuration of the three linear CCD arrays used for determining the reflected spot location. Data collection rate is markedly slower for the Vigitiser sensor because it is dependent on the translation speed of the machining centre with which it is integrated. The Hyscan and RVSI sensors translate an entire scan line across the surface as opposed to a single point. The quoted data collection speed of each sensor is given as the time interval necessary to collect the number of data points tabulated in the adjacent column. The EOIS, Technical Arts and Cyberware range sensors were considered unsuitable for rapid prototyping applications due to the fixed locations of the light source and camera. This limitation makes full-surface digitisation of an object impractical in many cases. The RVSI sensor is very expensive as each system is custom designed for each application. The Vigitiser range sensor is limited to work with certain CNC machine tools and is also inflexible with regards to an adjustable viewing angle. After careful consideration of each system's specifications and a visit to the site of each manufacturer (with the exception of RVSI), the Hyscan 45C range sensor was selected as the most appropriate system for use in rapid prototyping.

Table 1.1. Spatial and speed performance factors of range sensors.

Manufacturer       Absolute Accuracy (mm)   Data Collection Rate (sec.)   Number of Points Collected   Depth of Field (mm)
Hymarc                   0.05                        1                         10 000                        100
EOIS MK VII              0.025                      10                         512 x 512                     100
Vigitiser                0.025                       0.5                            1                        100
RVSI                     0.0125                      1.0                        14 400                         50
Cyberware                0.20                        1.0                        15 000                        400
Technical Arts           0.05                        1                          512 x 512                     100


1.2 The Hyscan 45C Vision System

In this section, the method of integrating the Hyscan vision system with a CNC machine tool or CMM is described, typical steps in using the range sensor are outlined, and a closer examination of the vision system's characteristics is presented.

1.2.1 Interfacing with a Coordinate Measuring Machine

The 3-D laser scanner system has been integrated with both a CNC machining centre and a programmable CMM, as illustrated in Figure 1.5. The scanner operation is controlled from a terminal through a hierarchical menu and data are displayed, as collected, on a high-resolution graphics monitor. The menu system and keyboard are used to control all operating and file management tasks during the scanner operation. The scanner head is moved within a 3-D work envelope by a CMM or CNC machine tool, and multiple passes may be made at any depth across the prototype surface until complete digitization is achieved. The mounting attachment to the CMM or CNC machine allows the viewing angle of the scanner head to be altered according to the surface form being digitized. The servo motor position encoders of the CNC mill or CMM update the scanner computer as data are collected.

1.2.2 Installation of the Vision System

Misalignment of the laser scan line with the mechanical axes of motion of the CMM results in an error between the measured data position and the true data position, as illustrated in Figure 1.6. This is termed the installation error, and its magnitude is proportional to the angle of misalignment and the offset distance between the CMM Z-axis and the plane of the laser beam. If the position of the laser scanner head is maintained over time, this error will not change. Therefore, incorrect installation will only affect the accuracy, and not the repeatability, of the collected data. The calculation of the installation error for a misalignment angle θ between the X-axis of the CMM and the axis of the trunnion mount of the scanner is derived. The offset distance is L0, the true position in the XY-plane is (Xt, Yt), and the measured position in the XY-plane is (Xm, Ym). It is apparent that the effect of the misalignment angle θ is amplified by the magnitude of the offset distance L0 from the Z-axis to the scan line plane. The error magnitudes are given by equations (1.2) and (1.3).
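Equations (1.2) and (1.3) are not legible in the available copy of the text. Purely as an illustrative sketch of the geometry just described, and not as a reconstruction of the thesis's own expressions, consider a scan-line point nominally at offset L0 from the Z-axis along the X-axis; a small misalignment rotation θ about the Z-axis moves its measured position to (L0 cos θ, L0 sin θ), giving error components of roughly:

```latex
% Illustrative sketch only -- not equations (1.2)-(1.3) from the thesis.
\begin{aligned}
  X_m - X_t &= L_0(\cos\theta - 1) \;\approx\; -\tfrac{1}{2} L_0 \theta^{2}, \\
  Y_m - Y_t &= L_0 \sin\theta \;\approx\; L_0 \theta .
\end{aligned}
```

Both components scale with the offset L0, consistent with the statement above that the effect of the misalignment angle is amplified by the offset distance.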

This error can be eliminated by correctly orienting the laser beam parallel to the true axis of the CNC machine tool (either X or Y-axis). A gauge block and software program are currently under development to assist in the minimisation of the installation error.

1.2.3 Calibration and Operation of the Vision System

Prior to collecting data on an object, the vision system must be calibrated. Calibration is performed by scanning the horizontal deck of the CMM and a surface patch on a high-precision ball bearing. From these two separate measurements, the scanner controller calculates the inclination of the scanner head with respect to the CMM's Z-axis and also the coordinates of the ball bearing centre, which are used as the collected data reference point. After calibration, the sequence of steps in scanning an object is as follows:

1. A data file for the new object is opened on the system hard disk.

2. Object scanning is commenced by traversal of the laser scan line over the object's surface by means of the CMM's motion control system. The range information from the scanner head and the positional information from the CMM's servo motors are integrated within the scanner controller to give a set of (x,y,z) data points referenced to the ball bearing centre.

3. If the object's surface form requires the scanner head viewing angle to be altered, a new data record is opened within the same file by adjusting the scanner head orientation, re-scanning the horizontal deck, and re-scanning the ball bearing. All subsequent data points will be integrated with respect to the reference location.

4. Data files are uploaded from the scanner computer hard disk to a workstation over an Ethernet link.

The data files are well structured, making the data very accessible, with headers containing information on the scanner calibration parameters. Data file size can be large even for relatively small curved surface patches; however, the number of points collected on each line can be varied during the scanning process. This feature provides for direct control of data resolution over separate regions of an object.

1.2.4 Operating Characteristics of the Vision System

Provided the scanner is moved within its standoff range, it will gather data points on a surface, as illustrated in Figure 1.7. If the scanner head is moved out of its standoff range, no data points will be collected. The quoted accuracy of +/- 0.05 mm could theoretically be increased by reducing the depth of field from the current 80.0 mm to 40.0 mm. The scan width is 100.0 mm when operated at the top of the standoff distance. Raw data collected by the scanner are termed "cloud data" due to their unformatted or unstructured spatial arrangement. Although the data points are collected on lines, the final data file is not arranged in an ordered manner as it would be if it were collected by a rectangular array CCD camera.

The validity of the scanner accuracy specifications was verified by collecting a scan line on a high-precision ball bearing. The line of data passing through the ball bearing centre was used to calculate the error between the true ball bearing radius and the radius calculated at each cloud point location. The graph of Figure 1.8 shows the error at each point along half of the scan line. The best fit straight line through the Gaussian distributed data has an error range of 0.05 mm to 0.07 mm, which is in keeping with the manufacturer's specification. The user can control the extent along the scan line to which data are collected. By limiting this extent to 80.0 mm of the full 100.0 mm, the smaller window reduces the occurrence of larger errors that tend to occur at the ends of the laser scan lines. Data collection does not continue to the edge of the ball bearing as the diffuse laser spot is reflected away from the imaging lens in the scanner head at too great an angle. This is a general limitation on triangulation based laser measurements. However, the Hyscan scanner can avoid this limitation by readjustment of the scanner head inclination with respect to the Z-axis and integrating data from two views.
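The radius-error check described above can be sketched in a few lines; the ball-bearing centre, radius, noise level and sample points below are assumed values for illustration, not the measurements reported in the thesis.

```python
# Sketch of the accuracy check: for each cloud point on a scan line over a
# high-precision ball bearing, compare its distance from the known centre
# with the true radius. All values here are synthetic/assumed.
import numpy as np

def radius_errors(points: np.ndarray, centre: np.ndarray, true_radius: float) -> np.ndarray:
    """Absolute error between each point's distance to the centre and the true radius."""
    distances = np.linalg.norm(points - centre, axis=1)
    return np.abs(distances - true_radius)

centre = np.array([0.0, 0.0, 0.0])           # assumed ball centre (mm)
true_radius = 12.70                           # assumed ball radius (mm)
rng = np.random.default_rng(0)
angles = np.linspace(-1.0, 1.0, 50)           # synthetic scan line over the ball
points = np.stack([true_radius * np.sin(angles),
                   np.zeros_like(angles),
                   true_radius * np.cos(angles)], axis=1)
points += rng.normal(scale=0.05, size=points.shape)   # ~0.05 mm Gaussian noise

errors = radius_errors(points, centre, true_radius)
print(f"mean error = {errors.mean():.3f} mm, max error = {errors.max():.3f} mm")
```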

1.3 Software Design

Two software packages have been implemented to incorporate the surface modelling algorithms and the algorithms for controlling a CNC machine tool developed in this work. The first program, called LaserCAM, is an interactive computer-aided-manufacturing (CAM) package for use with objects that have been scanned with the data collected in the normal Cartesian sense. The second program developed is for objects that have been scanned in a rotary sense. The following sections focus on LaserCAM; however, some issues, such as software engineering, are common to both programs. Details specific to the rotary scanning program are covered in Chapter 6.

1.3.1 Functions of the LaserCAM Software

The LaserCAM software package developed to process the scanner cloud data has the following major functions:

1. Display the 3-D cloud data collected by the laser scanner.

2. Visualise distinct surface patches, separated from one another by boundary lines, which are suitable for fitting with geometric entities such as planes and free-form surfaces.

3. Provide a means of separating cloud data points lying within these surface patches from the overall cloud data set.

4. Generate surface models for the segmented patches.

5. Generate programs for controlling CNC machine tools for rapid prototyping.

6. Translate the surface patch entity data into a standard graphical data exchange format, specifically DXF or IGES.

Methods of implementing these functions have been developed in the course of this research work and are embodied in LaserCAM. The software currently runs on a Sun SparcStation and utilises the PHIGS (Programmer's Hierarchical Interactive Graphics System [13]) 3-D graphics libraries to display the object, interactively select surface patches, and display CNC toolpaths. PHIGS is designed to let programmers define and display 3-D graphical objects without having to write code to perform graphical operations, such as translations and rotations. These, and many other, graphical operations can be performed by incorporating into application code calls to the PHIGS graphics libraries. It is an ANSI/ISO standard that provides portability between various workstation platforms running the UNIX operating system.

When the research project was initiated, it became clear that it was impossible to fit one surface entity over an entire set of cloud data points, primarily because there was too much data but also because it was spatially arranged in a scattered form. A "toolbox" approach was deemed most suitable for modelling an entire object. This approach allows the designer to identify distinct surface patches and model each one with the best surface modelling "tool" available, a plane or a cylinder for example. To accomplish this, a sophisticated, interactive program was designed with sufficient flexibility to allow the user to improve the fit of the surface model by selecting another surface entity, if desired.

A flowchart of the steps used in the LaserCAM program is shown in Figure 1.9. The input cloud data file is displayed on the workstation screen, using four standard viewing angles. Surface patches are interactively defined and cloud data points within the patch boundary are separated from the overall data file and stored for subsequent modelling or editing. An appropriate modelling entity, such as a plane or sphere, is selected from a menu and used to reconstruct an individual surface patch through the maximum likelihood estimation (MLE) theory. If the accuracy of fit is below a predefined level, other modelling entities may be attempted. Once a satisfactory fit is obtained, the relevant patch parameters (radii, centres, etc.) are stored. With free-form patches a multiquadric basis function technique is adopted to model the surface. When all patches are complete, cutter offsets for any generalised shaped milling cutter are calculated, and tool paths for direct CNC machining are generated. Alternatively, surface data may be exported to an external CAD program using the DXF or IGES graphics file transfer protocols for further design.
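The "toolbox" loop of Figure 1.9 might be summarised in code roughly as follows. This is a schematic illustration only: the tolerance value, the candidate-entity list and the stand-in plane fit are assumptions, not LaserCAM's actual implementation.

```python
# Schematic sketch of the interactive fitting loop of Figure 1.9: each
# segmented patch is tried against candidate surface entities and the first
# fit meeting the error tolerance is kept; otherwise the patch falls back to
# a free-form model. The fit routine here is an illustrative stand-in.
from typing import Callable, List, Tuple
import numpy as np

FitResult = Tuple[dict, float]              # (fitted parameters, RMS error in mm)

def fit_plane(points: np.ndarray) -> FitResult:
    centroid = points.mean(axis=0)
    # smallest-eigenvalue eigenvector of the covariance matrix = plane normal
    _, vecs = np.linalg.eigh(np.cov((points - centroid).T))
    normal = vecs[:, 0]
    rms = float(np.sqrt(np.mean(((points - centroid) @ normal) ** 2)))
    return {"point": centroid, "normal": normal}, rms

def model_patch(points: np.ndarray,
                candidates: List[Tuple[str, Callable[[np.ndarray], FitResult]]],
                tol_mm: float = 0.05):
    for name, fit in candidates:
        params, rms = fit(points)
        if rms <= tol_mm:                   # acceptable fit: keep this entity
            return name, params
    return "free-form", {}                  # fall back to a free-form surface model

patch = np.random.default_rng(1).normal(size=(100, 3)) * [10, 10, 0.01]  # nearly planar
print(model_patch(patch, [("plane", fit_plane)]))
```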

1.3.2 Surface Patch Boundary Identification

In the computer vision field, much work has been performed on range image segmentation [14]. Segmentation is a procedure that breaks a range image down into its constituent parts, where each part is a bounded and cohesive surface. One method for accomplishing this is by applying mathematical operators to the range data to determine where these boundaries lie by detecting abrupt changes in depth. However, segmentation algorithms require the range data to be arranged on a regular grid and not in the unstructured manner of the scanner's cloud data.

Segmenting individual patch data from a cloud data set is accomplished by first using the workstation mouse to define a patch boundary. The PHIGS stroke mechanism is used to locate the vertices of the desired patch boundary. A PHIGS stroke mechanism is defined as a series of mouse button presses in a single view. This mechanism provides a means of drawing a stroke (line) between user-defined mouse clicks and a means of connecting the start and end points of the stroke to get a closed polygon boundary. The individual vertices in the stroke are defined by a left mouse button click; to close the polygon, the user presses the CTRL-D key sequence. The data gathered by the stroke mechanism are then returned by the PHIGS libraries to the LaserCAM program. The operation of adding cloud data points to a surface patch data list is performed by a module that takes a list of cloud data points and determines which of those cloud points lie within the boundary of the patch. It uses a separate module to determine if each individual point is contained in the patch. This is accomplished by transforming the patch so that the origin is at the data point being tested and creating a line that starts at the origin along the positive X-axis. It then tests this line against each edge in the boundary of the patch to determine if the line and the edge intersect. If there are an odd number of line-edge intersections for the patch, then the data point lies within the patch boundary (see the algorithm in [15] for details).
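The odd-even intersection test described above is the classical ray-casting point-in-polygon test; a minimal 2-D sketch (not the LaserCAM module itself, and with an assumed rectangular boundary) is given below.

```python
# Sketch of the point-in-patch test: cast a ray from the test point along the
# positive X-axis and count crossings with the boundary edges; an odd count
# means the point lies inside the patch boundary.
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_patch(p: Point, boundary: List[Point]) -> bool:
    x, y = p
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]       # closing edge back to the first vertex
        if (y1 > y) != (y2 > y):             # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                  # crossing lies on the positive X side
                inside = not inside
    return inside

# usage: keep only the cloud points that fall inside a mouse-drawn boundary
boundary = [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)]   # assumed outline
cloud_xy = [(10.0, 10.0), (55.0, 5.0), (39.0, 24.0)]
print([p for p in cloud_xy if point_in_patch(p, boundary)])       # keeps 1st and 3rd
```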

If a patch boundary is incorrectly segmented, a data point (or points) from a different surface class might be included in the patch. For example, a point lying on a planar patch might be erroneously included in a segmented patch of data that defines a cylindrical surface region. These erroneous points, termed outliers, can produce incorrect results in any subsequent mathematical modelling of the surface patch. In Chapter 2, methods of modelling quadric surface patches even in the presence of outlier points are described. Outliers may also be caused by incorrect adjustment of the laser scanner's minimum light detection level or by particularly strong specular reflections from a bright spot on an object's surface.

1.3.3 Software Engineering Issues

The LaserCAM program has been designed according to basic software engineering principles to ensure a well structured, easily modifiable and easy to understand program. Specifically, the concepts of modularity and modular interfaces have been adhered to. A module is a program building block that performs a specific task and generally is kept to a small size and simple function. A modular approach to software design has some major benefits such as ease of understanding and ease of modification. For example, the process of segmenting the patch data from the cloud data is accomplished by one module. This module has a well-defined task; however, if modifications to this algorithm become necessary, they can be made without altering other parts of the code. The module communicates with other modules in the program by means of the module interface. A module interface allows a module to be "plugged in or out" without knowledge of the details of its operation, provided the module interface is identical.


1.4 Layout of the Thesis

Methods for reconstructing planar, cylindrical, and spherical surface patches are described in Chapter 2. The technique employed to model the scanner cloud data is based on maximum likelihood theory. Methods of fitting general quadric surfaces to range data that are contaminated with Gaussian noise and outlier noise are also developed. Several techniques for reconstructing free form surface patches to range data are examined in Chapter 3. The most suitable method for use with the Hyscan range sensor is selected, and an algorithm for applying this method to large cloud data sets is developed. The accuracy of the reconstructed quadric and free form patches as compared to the input cloud data is examined.

The CNC machining of reconstructed quadric and free form surface patches is detailed in Chapter 4. Accuracy results are presented for quadric and free form surface test objects machined using the methods outlined in Chapter 4. Improvements to the standard approach for machining free form surfaces are given in Chapter 5. A new technique, employing recursive straight line and circular arc fitting, demonstrates considerable savings in machining time over conventional methods.

An additional technique for digitising arbitrary curved surfaces is outlined in Chapter 6. The objects are rotated in angular increments and lines of surface data points are acquired at each location. CNC machining of the objects digitised using this technique is possible by moving the cutting tool in a quasi-helical manner.

Figure 1.1. (a) Triangulation principle for a point range sensor. (b) Range measurement equation, z = bh/dx, for triangulation techniques.

Figure 1.2. Principle of operation of the White Scanner.

Figure 1.3. Three-dimensional range sensor employing synchronised scanning.

Figure 1.4. Camera and projector geometry for the EOIS range sensor.

Figure 1.5. Interface of the laser scanner with a CNC coordinate measuring machine.

Figure 1.6. Installation error of the scanner head with respect to the CMM's axes.

Figure 1.7. Laser scanner standoff and depth of field.

Figure 1.8. Accuracy evaluation from a scan line of a high-precision ball bearing.

Figure 1.9. Flowchart of interactive surface fitting.

Chapter 2

Quadric Surface Reconstruction

It is reported by Hakala [16] that 85 percent of manufactured parts are well approximated by quadric surfaces, such as planes, cylinders and spheres. This is not surprising given the mode of manufacture: part facing by a mill or part turning by a lathe. As the focus of this research is on the definition of object surfaces for subsequent manufacturing applications, quadric surfaces provide a relevant tool for surface reconstruction. Numerically controlled machining of a quadric surface, such as a plane, can be performed more quickly if the surface is machined as a plane by facing, rather than machined as a general surface defined by a mesh of points. If the object is intended for analysis or design in an external finite element or CAD package, it is necessary for the surface modelling system to provide all necessary information. For example, a bounded planar patch needs to be specified by a surface normal direction vector, a point on the planar patch, and the polyline of points defining the boundary. In this chapter, a system for modelling surfaces that employs quadrics to represent the range data generated by the scanner head is presented.

Quadrics are a subset of the family of algebraic polynomial surfaces; specifically, they constitute the general second degree algebraic surfaces and are implicitly described by the equation

f(x, y, z) = a_1 x² + a_2 y² + a_3 z² + a_4 xy + a_5 xz + a_6 yz + a_7 x + a_8 y + a_9 z + a_10 = 0    (2.1)
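As a small illustration (not from the thesis), the implicit form (2.1) can be evaluated directly from its ten coefficients; a point lies on the quadric when f(x, y, z) vanishes to within the noise. The coefficient values below are assumed and describe a unit sphere.

```python
# Evaluate the general quadric of equation (2.1) from its ten coefficients
# a1..a10. The coefficients below are assumed for illustration and encode
# the unit sphere x^2 + y^2 + z^2 - 1 = 0.

def quadric(a, x, y, z):
    a1, a2, a3, a4, a5, a6, a7, a8, a9, a10 = a
    return (a1 * x * x + a2 * y * y + a3 * z * z
            + a4 * x * y + a5 * x * z + a6 * y * z
            + a7 * x + a8 * y + a9 * z + a10)

sphere = (1, 1, 1, 0, 0, 0, 0, 0, 0, -1)
print(quadric(sphere, 1.0, 0.0, 0.0))   # 0.0: the point lies on the surface
print(quadric(sphere, 0.5, 0.5, 0.0))   # -0.5: the point lies inside the sphere
```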

Quadric surfaces have been applied in the interpretation of range data by several researchers. Flynn and Jain [17] investigated the use of quadric surfaces for modelling objects in order to classify objects after segmentation had been performed. Ultimately, the classified surface types are used to identify objects for the purpose of robot guidance. Hall, Tio, McPherson and Sadjadi [18] used quadric surface entities as geometric modelling tools for robot guidance. Faugeras [19] developed a region-based segmentation technique using quadric surface fitting. Instead of detecting patch edges, regions of range data that share a common quadric function description are grown until the boundaries of the patch are reached. The Hough Transform is a technique developed in two-dimensional image processing for identifying image edges comprised of straight lines or circular arcs. Muller and Mohr [20] applied the Hough Transform to detect planes and other quadric elements in range images. Although the technique performs well in two dimensions, the method is limited for use with range data due to the computational expense. Bolle [21] proposed a technique for estimating parameters of complex objects by examining relatively small sub-regions and optimally combining the locally extracted information; for example, estimating the position and pose of a three-dimensional object in order that a robot hand could grasp the object. It was shown that a maximum likelihood approach to parameter estimation of object surfaces was viable. A major benefit of this statistical approach to fitting quadric surfaces is that it does not require the data to be on a uniform grid.

In this work, a similar approach to Bolle's is applied to estimating the parameters of quadric surface patches so that relevant information for the subsequent machining operations can be extracted. Patches of surface data are segmented from the overall object using the method outlined in Chapter 1. Shape and orientation parameters of planes, cylinders, and spheres are then extracted. It is assumed that only a Gaussian noise component is added to the range data. As described in Chapter 1, another noise population, termed outlier points, can occur due to incorrect segmentation of the initial cloud data set. This results in data points from one surface patch being incorrectly included in data from a neighbouring one. If outlier points are present, then modifications of the maximum likelihood approach can be used that fit planes or implicit quadric equations to the cloud data. These techniques are based on robust estimation methods.

Robust statistics was introduced in the 1950s, and the effect of outliers on the classical least squares estimator can be traced back to the 1930s. Forstner [22] applied robust estimation techniques to computer vision in an examination of the reliability of parameter estimation in linear measurement models. The Hough Transform is inherently a robust estimator. Median filtering [23] is also a robust operator commonly used in machine vision and image processing. It is widely used in computer vision as a smoothing operator that preserves step edges but not roof edges. However, it does not concurrently provide estimates of modelling function parameters as does the Hough Transform.

2.1 Fitting Planes to 3-D Data

A statistical framework for estimating the position of a plane from a collection of cloud data points is described. A set of range points on a surface is given by Y = {r_n}, n = 1, ..., N, where r_n = (x_n, y_n, z_n)^t, and the noise corrupting the cloud data is assumed to be Gaussian with zero mean and variance σ². The objective is to estimate the parameters of the planar surface that most likely correspond to the range data set Y. A maximum a posteriori (MAP) classifier is a decision algorithm that decides a planar surface is in a certain position if the conditional probability of that model being present given the data, p(Model | Y), is maximum over all possibilities. The a posteriori probability can be related to the a priori probabilities p(Model) by Bayes rule

p(Model | Y) = p(Y | Model) p(Model) / p(Y)    (2.2)

where p(Y | Model) is the conditional probability density function of the data given the model and p(Y) is the probability density function of the range data. Assuming that p(Y) is independent of the planar parameters and that all models are equally likely, which also makes p(Model) independent of the planar parameters, then a maximum likelihood decision classifier maximises the conditional probability density function p(Y | Model). The likelihood p(Y | Model) is the product of the conditional likelihoods of the individual range points.

For a set of N range data points embedded in 3-D space and assumed to be lying on a planar patch, the parameters of interest are the location vector q of a point on the plane and the normal vector w. As specified by Bolle [24], the N range points Y = {r_n} on this plane can be considered as random variables with the plane as the mean-value function and perturbation vectors perpendicular to the plane. Given that the noise terms, w^t(r_n − q), are zero-mean and have a Gaussian distribution, the likelihood of the range data is found from the product of the conditional likelihoods of each range point

p(Y | Model) = Π_{n=1..N} (2πσ²)^(−1/2) exp{ −[w^t(r_n − q)]² / (2σ²) }    (2.3)

where the model parameters are the position vector q and normal vector w


Therefore, the maximum likelihood estimates of the location vector q and the normal vector w are found by maximising equation (2.4), which leads to the conditions

∂/∂q { Σ_{n=1..N} [w^t(r_n − q)]² + β(‖w‖² − 1) } = 0    (2.5)

and

∂/∂w { Σ_{n=1..N} [w^t(r_n − q)]² + β(‖w‖² − 1) } = 0    (2.6)

obtained by setting the partial derivatives with respect to q and w equal to zero. In this minimisation, a Lagrange multiplier (β) has been used to include the constraint ‖w‖² − 1 = 0, which implies that w is a unit vector. The maximum likelihood estimate of the

location vector q from equation (2.5) becomes

w^t Σ_{n=1..N} r_n = N w^t q    (2.7)

On rearranging this expression, q becomes

q = (1/N) Σ_{n=1..N} r_n = r̄    (2.8)

which is the mean of the data vectors r_n. Equation (2.6) may be reduced to the expression for the maximum likelihood estimate of the normal vector

[ Σ_{n=1..N} (r_n − q)(r_n − q)^t ] w + β w = 0    (2.9)

The normal vector w is an eigenvector of the covariance (scatter) matrix

Cov = Σ_{n=1..N} (r_n − r̄)(r_n − r̄)^t    (2.10)

To calculate the parameters q and w of the plane, the mean vector r̄ and the covariance matrix, Cov, of the data set are calculated. The mean vector is calculated from (2.8), and the Jacobi method for finding the eigenvalues and eigenvectors of a real symmetric matrix is used to determine the eigenvalues λ1 ≥ λ2 ≥ λ3 and the eigenvectors v1, v2 and v3 of the covariance matrix (2.10). The eigenvectors v1 and v2 define a plane parallel to and passing through the data centroid, whereas v3 = (v31, v32, v33) is the vector normal to that plane. Given the equation of a plane

ax + by + cz + d = 0    (2.11)

where the vector (a,b,c) is the normal to the plane, the following equalities can be made

a = v31,   b = v32,   c = v33,   d = −(a x̄ + b ȳ + c z̄)    (2.12)

where (x̄, ȳ, z̄) is the data centroid.

Geometrically, this method minimises the sum of the squared perpendicular distances between the cloud data points and the fitted plane. As an illustration of the approach, a plane is fitted to a cloud data set, collected by the laser scanner from a surface assumed to be planar, as shown in the photograph of Figure 2.1. A mean and maximum error of fit between the cloud data and the modelled plane of 0.013 mm and 0.063 mm respectively was obtained. The effect of various levels of Gaussian noise on the planar parameters estimated from a set of fifty synthetic data points is illustrated in Table 2.1; the parameter estimates and the percent error from the true values are shown. The true parameters are a = 0.707, b = 0.00, c = 0.707 and d = 1.414, and the zero mean noise variance is increased from 0.05 to 1.0.

Table 2.1. Effect of Gaussian noise on estimating planar parameters.

              a         b          c         d
σ² = 0.05     0.712     -0.003     0.702     1.416
  % error     0.7       0.3        0.7       0.1
σ² = 0.1      0.718     -0.006     0.696     1.417
  % error     1.5       0.6        1.5       0.2
σ² = 0.5      0.744     -0.017     0.668     1.416
  % error     5         1.7        5.5       0.1
σ² = 1.0      0.804     -0.048     0.593     1.387
  % error     13        4.8        16        2
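As a usage sketch, the snippet below reproduces an experiment in the spirit of Table 2.1: fifty synthetic points on the plane with a = 0.707, b = 0.00, c = 0.707, d = 1.414 are perturbed along the normal by zero-mean Gaussian noise and passed to the fit_plane routine above. The point layout, random seed and noise level are assumptions; the thesis does not state the exact synthetic set-up.

```python
import numpy as np

# Uses the illustrative fit_plane routine sketched in Section 2.1 above.
rng = np.random.default_rng(0)

normal_true = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # (a, b, c) = (0.707, 0, 0.707)
point_on_plane = np.array([-1.0, 0.0, -1.0])              # gives d = +1.414 in (2.11)

# Two in-plane directions used to scatter 50 points over the plane.
u = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
v = np.array([0.0, 1.0, 0.0])
params = rng.uniform(-25.0, 25.0, size=(50, 2))
points = point_on_plane + params[:, :1] * u + params[:, 1:] * v

# Perturb each point along the normal with zero-mean Gaussian noise.
sigma2 = 0.05
points = points + rng.normal(0.0, np.sqrt(sigma2), size=(50, 1)) * normal_true

print(fit_plane(points))   # estimates of (a, b, c, d); the normal's sign may flip
```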

2.2 Fitting Cylinders to 3-D Data

If the cloud data are assumed to lie on a cylindrical surface with an axis specified by the vector w, a point on the axis specified by vector q = (x_0, y_0, z_0) and a radius Q, then the


likelihood of a set of cloud data Y = \{r_i\}_{i=1}^{N} is formulated in a similar manner to Section 2.1. The distance from any cloud data point to the true cylinder axis is given by Bolle [18]

\left((r_n - q)^t\left[I - ww^t\right](r_n - q)\right)^{1/2} = \left\|r_n - q - \left((r_n - q)^t w\right)w\right\| \qquad (2.13)

where the residuals \|r_n - q - ((r_n - q)^t w)w\| - Q are assumed to be independently Gaussian distributed with zero mean. Therefore, the likelihood of the data is given by

p(Y|Model) = \prod_{n=1}^{N}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2\sigma^2}\left(\left\|r_n - q - \left((r_n - q)^t w\right)w\right\| - Q\right)^2\right\} \qquad (2.14)

In a similar fashion to equation (2.3), the model comprises the position vector q, the normal vector w, and the radius Q. On simplifying, equation (2.14) becomes

p(Y|w,q,Q) = \left(2\pi\sigma^2\right)^{-N/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}\left(\left\|r_n - q - \left((r_n - q)^t w\right)w\right\| - Q\right)^2\right\} \qquad (2.15)

or

p(Y|w,q,Q) = \left(2\pi\sigma^2\right)^{-N/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}\left[\left((r_n - q)^t\left[I - ww^t\right](r_n - q)\right)^{1/2} - Q\right]^2\right\} \qquad (2.16)

The maximum likelihood estimates of the cylinder parameters w, q and Q are found from minimising

\sum_{n=1}^{N}\left[\left((r_n - q)^t\left[I - ww^t\right](r_n - q)\right)^{1/2} - Q\right]^2 \qquad (2.17)


The method of Levenberg and Marquardt [25] is used to perform the non-linear least squares minimisation. The problem is formulated by rewriting equation (2.17) in the form

F(x) = f^t(x)f(x) = \sum_{i=1}^{N} f_i^2(x) \qquad (2.18)

where F(x) is the objective function to be minimised and f(x) = (f_1, \ldots, f_N)^t is given by

f_i(x) = \left[\left((x_i - x_0)\cos\varphi + (y_i - y_0)\sin\varphi\right)^2 + z_i^2\right]^{1/2} - Q \qquad (2.19)

In equation (2.19), the vector w lies in the XY-plane, rotated at an angle \varphi around the Z-axis as measured from the X-axis. The point q on the cylinder axis is now designated by the point (x_0, y_0) lying in the XY-plane.
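A sketch of the non-linear fit, assuming the parameterisation of equation (2.19) as reconstructed above, is given below. It uses SciPy's least_squares solver with its Levenberg-Marquardt option as a stand-in for the implementation described here; the function names and the initial guess are illustrative, not the thesis code.

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, pts):
    """Residuals f_i(x) of equation (2.19): perpendicular distance of each
    point from the candidate axis, minus the candidate radius Q."""
    x0, y0, phi, Q = params
    return np.sqrt(((pts[:, 0] - x0) * np.cos(phi)
                    + (pts[:, 1] - y0) * np.sin(phi)) ** 2 + pts[:, 2] ** 2) - Q

def fit_cylinder(pts, initial_guess):
    """Levenberg-Marquardt estimate of (x0, y0, phi, Q) from an (N, 3) array."""
    result = least_squares(cylinder_residuals, initial_guess, args=(pts,), method="lm")
    return result.x

# e.g. fit_cylinder(pts, initial_guess=(19.0, 4.0, 0.3, 18.0)) for data
# resembling the synthetic set of Table 2.2.
```

SciPy's "lm" option requires at least as many residuals as parameters and no bound constraints, both of which are satisfied here; a rough starting point taken from the spread of the data would normally be adequate.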

A cloud data set from a perfect half cylinder, of length 100.0 mm and radius 24.94 mm, lying on the coordinate measuring machine deck and oriented at an approximate angle of 90 degrees to the X-axis, was obtained. The precise values of the cylinder's radius and orientation angle were found to be 25.01 mm and 89.82 degrees respectively. In Table 2.2 the effect of Gaussian noise levels on the estimation of cylindrical parameters is shown. Here, a synthetic data set of 30 points corrupted by a Gaussian noise process is generated from a cylinder having true parameters R = 20.0 mm, (x_0, y_0) = (20.0, 5.0) and \varphi = 0.245 radians. The cylinder has an extent in the Y-axis of 0 \le y \le 60.0. As illustrated in Table 2.2, the cylinder estimates are accurate for low Gaussian noise levels but quite poor for noise levels at which the plane estimates were still accurate.


Table 2.2. Effect of Gaussian noise on estimating cylindrical parameters.

              R         x_0       y_0      φ
σ² = 0.01     20.02     20.05     5.05     0.248
  % error     0.1       0.2       0.9      1.2
σ² = 0.05     20.12     20.65     5.40     0.25
  % error     0.6       3.3       7.4      2.0
σ² = 0.1      20.26     21.47     5.85     0.334
  % error     1.3       6.8       13.6     26.0

2.3 Fitting Spheres to 3-D Data

If the cloud data are assumed to lie on a spherical surface with a centre q and radius Q, then the likelihood of the cloud data set is formulated in a similar manner to Section 2.1. Given the data Y = \{r_i\}_{i=1}^{N}, and assuming the noise is Gaussian distributed in a direction radially outward from the patch surface, the noise is given by (\|r_i - q\| - Q). The likelihood of the data, given that the variance is \sigma^2, is found from the conditional probability density function

p(Y|Model) = \prod_{n=1}^{N}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2\sigma^2}\left(\|r_n - q\| - Q\right)^2\right\} \qquad (2.20)

Here again, the model comprises the radius Q and the centre of the sphere q; the likelihood may be written compactly as

p(Y|q,Q) = \left(2\pi\sigma^2\right)^{-N/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}\left(\|r_n - q\| - Q\right)^2\right\} \qquad (2.21)


The maximum likelihood estimates of the sphere centre q and radius Q are found from minimising

\sum_{n=1}^{N}\left(\|r_n - q\| - Q\right)^2 \qquad (2.22)

Again, as in Section 2.2, the Levenberg-Marquardt method is used to perform the non-linear minimisation by rewriting equation (2.22) in the form of equation (2.18), where f_i(x) is given by

f_i(x) = \|r_i - q\| - Q \qquad (2.23)

The maximum likelihood estimates of the sphere centre q = (x_0, y_0, z_0)^t and radius Q are obtained by minimising the objective function F(x).

A spherical test object was scanned, and its parameters estimated from the cloud data set. Using the coordinate measuring machine touch probe, the sphere was found to have a radius of 25.4 mm and a location of (x_0, y_0, z_0) = (0.0, 0.0, 25.4). Using the above method, the radius was calculated to be 25.34 mm and the location (x_0, y_0, z_0) = (0.02, 0.00, 25.41). As in Sections 2.1 and 2.2, the effect of Gaussian noise on the sphere parameter estimator is illustrated in Table 2.3. True parameters of x_0 = 0.5, y_0 = 0.5, z_0 = 1.00 and Q = 10.0 are compared against the estimated parameter values using twenty synthetic data points corrupted by Gaussian noise with increasing values of variance \sigma^2. Initial estimates of the sphere parameters are made by taking four of the data points and forming a set of four linear equations from the general expression of a sphere for each point. The set of equations is solved using Gauss's method to get initial starting points for the Levenberg-Marquardt method. This initial estimation is very fast to perform


and reduces the number of iterations required to obtain the final estimates. The initial estimates are shown in Table 2.3. It is apparent that the sphere parameter estimation technique performs very well for low and moderate noise levels but quite poorly for high noise levels. Table 2.3 also illustrates the improvement in estimating the parameters of the sphere using this method over a basic fit of the data to the sphere equation. The initial estimates for \sigma^2 = 0.5 are greatly in error as compared to the results of the iterative method. This illustrates the benefits of surface modelling, in the presence of Gaussian noise, using the maximum likelihood formulation.

Table 2.3. Effect of Gaussian noise on estimating spherical parameters.

                     x_0       y_0       z_0       Q
σ² = 0.1   initial   0.660     0.579     1.165     9.802
           final     0.500     0.500     1.000     9.999
σ² = 0.2   initial   0.884     0.691     1.394     9.528
           final     0.500     0.500     1.000     9.999
σ² = 0.5   initial   2.849     1.671     3.414     7.366
           final     0.499     0.499     0.999     10.0
σ² = 0.7   initial   95        48        98        136
           final     257       53        10        141
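The two-stage sphere estimate described above, a linear solve over four points followed by Levenberg-Marquardt refinement, can be sketched as follows. NumPy's linear solver stands in for Gauss's method and SciPy's least_squares for the Levenberg-Marquardt routine; the function names are illustrative, not those of the thesis code.

```python
import numpy as np
from scipy.optimize import least_squares

def initial_sphere_estimate(pts):
    """Algebraic start point from the first four points, in the spirit of the
    linear initialisation described above.  Expanding (x-x0)^2 + (y-y0)^2 +
    (z-z0)^2 = Q^2 gives x^2 + y^2 + z^2 + D x + E y + F z + G = 0, a linear
    system in (D, E, F, G).  Assumes the four points are not coplanar."""
    A = np.hstack([pts[:4], np.ones((4, 1))])
    b = -np.sum(pts[:4] ** 2, axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    centre = -0.5 * np.array([D, E, F])
    radius = np.sqrt(centre @ centre - G)
    return np.append(centre, radius)                 # (x0, y0, z0, Q)

def sphere_residuals(params, pts):
    """Residuals of equation (2.23): radial distance to the candidate centre
    minus the candidate radius."""
    centre, Q = params[:3], params[3]
    return np.linalg.norm(pts - centre, axis=1) - Q

def fit_sphere(pts):
    """Sphere fit: linear initial estimate refined by Levenberg-Marquardt."""
    start = initial_sphere_estimate(pts)
    return least_squares(sphere_residuals, start, args=(pts,), method="lm").x
```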


2.4 Robust Estimation of Quadric Surfaces

In the conventional estimation algorithms of the previous section, the presence of even a single outlier point would severely degrade the algorithm performance. Other parameter estimation methods, such as least squares, are also very susceptible to outliers. In least squares, the residuals between the true data points and the modelled points are minimised; and, if an outlier point is present, the magnitude of the corresponding residual becomes very large thereby distorting the parameter estimates. The inefficiency of conventional estimators in the presence of outliers leads to the notion of robust estimation.

In statistical terms, efficiency is the ability of an algorithm to yield optimal estimates under ideal conditions. On the other hand, robustness is the ability of an algorithm to provide reasonable estimates under less than ideal conditions. Obviously there exists a tradeoff between robustness and efficiency. One cannot have a fully robust and totally efficient algorithm.

Robustness is quantified by a parameter termed the breakdown point. A definition for the multivariate case is provided by Rousseeuw and Leroy [26]. Applying a regression estimator T() to a sample of N points z = \{x_1, \ldots, x_N\} results in a vector of regression coefficients

T(z) = \hat{\theta} \qquad (2.24)

A contaminated sample z' is obtained by replacing any m original points by arbitrary values. The maximum bias, bias(m; T, z), that can be caused by such a contamination is

\mathrm{bias}(m; T, z) = \sup_{z'}\left\|T(z') - T(z)\right\| \qquad (2.25)


where the supremum (sup) is over all possible z' and T(z') are the regression coefficients calculated from the contaminated sample. If bias(m; T, z) = \infty, then the m outliers had an arbitrarily large effect on T() and caused the estimator to break down. The finite sample breakdown point of an estimator T() at the sample z is defined as

\varepsilon_N^{*}(T, z) = \min\left\{\frac{m}{N} : \mathrm{bias}(m; T, z)\ \text{is infinite}\right\} \qquad (2.26)

It is the smallest fraction of contamination that can cause T() to break down. As an example, one outlier point can cause the least squares estimator to break down; therefore, its breakdown point is

\varepsilon_N^{*}(T, z) = \frac{1}{N} \qquad (2.27)

A robust estimator is nearly optimal at the nominal noise distribution and is only slightly affected by the presence of outliers.
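A small numerical illustration of the breakdown idea (not taken from the thesis) is given below: a line fitted to ten points by least squares is ruined by a single gross outlier, whereas a simple median-of-pairwise-slopes estimate is barely affected. The data, seed and comparison estimator are assumptions introduced here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten points on the line y = 2x + 1 with mild Gaussian noise.
x = np.arange(10.0)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=10)

def slope_least_squares(x, y):
    return np.polyfit(x, y, 1)[0]

def slope_median_pairs(x, y):
    """Median of all pairwise slopes: a simple robust alternative."""
    i, j = np.triu_indices(len(x), k=1)
    return np.median((y[j] - y[i]) / (x[j] - x[i]))

print(slope_least_squares(x, y), slope_median_pairs(x, y))   # both close to 2

y[9] = 200.0                                                  # one gross outlier
print(slope_least_squares(x, y), slope_median_pairs(x, y))   # LS slope breaks down,
                                                              # median-based slope stays near 2
```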

2.4.1 Robust Plane Fitting

Robust techniques can be applied to the determination of location and covariance, which are both used in estimating planar parameters. A robust algorithm designed for fitting planes to a cloud data set, Y = \{r_i\}_{i=1}^{N}, in the presence of outlier points is described. In a similar manner to Section 2.1, the plane location vector q is determined from the mean vector

q = \bar{r} = \frac{1}{N}\sum_{n=1}^{N} r_n \qquad (2.28)

The mean, however, is definitely not robust, as one outlier point can severely distort the mean from its best estimate. The breakdown point of the mean estimator is 1/N (or a 0% breakdown). The covariance matrix

\mathrm{Cov} = \frac{1}{N}\sum_{n=1}^{N}(r_n - \bar{r})(r_n - \bar{r})^t \qquad (2.29)

is also susceptible to outlier points. The robust algorithm presented here for planar surface fitting utilises a function, termed the Mahalanobis distance, to identify outlier points so that they can be removed. The squared Mahalanobis distance is given by

\mathrm{MD}^2(r_i, Y) = (r_i - \bar{r})^t\,\mathrm{Cov}^{-1}(r_i - \bar{r}) \qquad (2.30)

The robust algorithm proceeds in the following sequence:

1. Examine all data and determine the minimum and maximum MD^2(r_i, Y) values. Set the threshold value for determining outliers to 1.5 · min(MD^2(r_i, Y)).

2. Calculate the MD^2(r_i, Y) value for all cloud data points.

3. If MD^2(r_i, Y) is larger than the threshold, the point is eliminated from the cloud data set.

4. Check the remaining cloud data set for MD^2(r_i, Y) values larger than the threshold and remove any outlier points.

5. Calculate the mean and covariance values when all outliers have been removed, in order to determine the planar parameters.
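One possible reading of this sequence is sketched below in Python. The threshold rule follows the steps as stated (1.5 times the minimum squared Mahalanobis distance), and the loop simply repeats steps 2 to 4 on the surviving points until none exceeds the threshold; the function name, iteration cap and stopping rule are assumptions introduced here.

```python
import numpy as np

def robust_plane_fit(points, factor=1.5, max_iter=20):
    """Outlier-rejecting plane fit following the five steps above.

    points : (N, 3) array of cloud data.
    Returns (a, b, c, d) of the plane fitted to the retained inliers.
    """
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        if len(pts) < 4:
            break
        mean = pts.mean(axis=0)
        centred = pts - mean
        cov = centred.T @ centred / len(pts)
        # Squared Mahalanobis distance of each point, eq. (2.30).
        md2 = np.einsum('ij,jk,ik->i', centred, np.linalg.inv(cov), centred)
        threshold = factor * md2.min()          # step 1, as stated in the text
        keep = md2 <= threshold                 # steps 2-4
        if keep.all():
            break
        pts = pts[keep]
    # Step 5: planar parameters from the cleaned set, as in Section 2.1.
    mean = pts.mean(axis=0)
    centred = pts - mean
    cov = centred.T @ centred / len(pts)
    _, eigvecs = np.linalg.eigh(cov)
    a, b, c = eigvecs[:, 0]
    d = -eigvecs[:, 0] @ mean
    return a, b, c, d
```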

