AN EVALUATION OF THE LIGHTING CONDITIONS FOR ROBOT VISION

A THESIS PRESENTED TO
THE DEPARTMENT OF ELECTRICAL AND ELECTRONIC ENGINEERING
FACULTY OF ENGINEERING
UNIVERSITY OF STELLENBOSCH

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE

by

DIRK WOUTER ACKERMANN

OCTOBER 1987

DECLARATION

I hereby declare that the work for this thesis was done by myself, that I wrote it myself and that it has not been submitted to any other university in order to obtain a degree.

D. W. Ackermann.
October 1987.

ACKNOWLEDGEMENTS

My sincere appreciation is extended to:

Prof M. J. Case, my supervisor, for his support and encouragement.

The Council for Scientific and Industrial Research for their financial support.

Mr. L. P. du Toit for his help with the design of the robot.

Mr. F. G. F. Byleveld, Mr. W. F. Conradie and their staff for their help in the laboratory.

BUCB, University of Stellenbosch, for the use of their television camera and monitor.

SYNOPSIS

A vision robot, with characteristics comparable to those currently being used, was designed and built. The response of the robot is evaluated in terms of the lighting conditions it is subjected to. The robot is treated as a transfer function with a visual display as input and a decision made as output. The sensitivity for luminance, contrast and detail of the display are given.

Successful classification of certain displays was accomplished. The limitations of each part of the robot are evaluated and the result of these limitations on the total response of the robot is pointed out.

CONTENTS

CHAPTER ONE INTRODUCTION
CHAPTER TWO THE SENSOR
CHAPTER THREE THE INTERFACE
CHAPTER FOUR PATTERN RECOGNITION
CHAPTER FIVE HUMAN VISION
CHAPTER SIX ROBOT VISION
CHAPTER SEVEN CONCLUSION

Appendix A The Television Camera
Appendix B User's Manual for Interface
Appendix C Data Sheets for Electronic Components
Appendix D Robot Vision Program
Appendix E Test Objects for Robot
Appendix F Results for Experiment 1
Appendix G Results for Experiment 2
Appendix H Results for Experiment 3
Appendix I Instrumentation

CHAPTER ONE
INTRODUCTION

In a modern world where repetitive, strenuous and dangerous labour is replaced by robots, there is a need for developing and improving the abilities of these machines. Robots are regarded as machines with some intelligence, able to do tasks usually done by man.

Many of the senses and abilities of man are copied in machines, thus making it possible for machines to perform equivalent tasks. Examples of abilities of machines copied from man are voice recognition and visual perception (vision).

Visual machine perception (vision) is an ability already used in fields such as cartography, medicine, production lines and handwriting recognition. The input to a vision system is a light signal, and the output is a decision made.

Figure 1.1 A vision system. (Block diagram: input, sensor, interface, processor, output.)

A vision system consists of three parts: a sensor, an interface and a data processor. A block diagram of a robot vision system is shown in Figure 1.1. The sensor converts the light signal to an electrical signal. An interface has the purpose of converting the signal obtained from the sensor to a form suitable for data processing. The data processor analyses the data in order to classify or identify the objects in the field of vision. Usually this is done with a computer under program control. The intelligence of ...

The aim of this thesis is to evaluate the performance of a vision system in terms of the lighting conditions. The performance of human vision in terms of the lighting conditions has been defined in the field of Illumination Engineering. The vision system is evaluated in a similar way as human vision, using existing methods of evaluation in the field of Illumination Engineering.

Several factors such as the type of sensor, the recognition algorithm and the lighting conditions influence the performance of a vision system. By optimizing the lighting conditions, a vision system can be more efficiently utilized. Having satisfied the lighting conditions, simpler data processing can often be used to solve the same vision problem. Thus by optimizing the lighting conditions, the software requirements for robot vision can be reduced.

Controlled lighting conditions can only be achieved in a controlled environment, therefore the field of application for this thesis has to be in a controlled environment. The production line was chosen as the field of application.

CHAPTER TWO
THE SENSOR

The sensor converts the light signal to an electrical signal. Two types of devices are available for this purpose. For many years scanning devices such as optical scanners and television cameras have been used. Lately, however, charge transfer devices have been developed. This type of device converts light signals to a digital electronic form. The main considerations when selecting a sensor are accuracy and speed. The accuracy and speed depend on the application.

2.1 SCANNING DEVICES

Several scanning devices are used for vision systems. These are optical scanners, mechanical scanners, television cameras, light striping and ranging.

Optical scanners, also called flying spot scanners, have a stationary film and sensor. A light beam is deflected to the appropriate coordinate points on the film. A sample is taken at each coordinate point and made available to the computer through interfacing electronics.

With mechanical scanners, the light beam and sensor are stationary, and the film is transported in a raster fashion. At each point of the raster a sample is taken. The sampled data is made available to the computer through interfacing electronics.

Using a television camera is a popular way of converting a light signal to an electrical signal. The advantages of a television camera are the optical compatibility and high conversion speed. Digitizing of an entire frame can be performed in 40 ms.

Light striping is a sensing technique for image formation using structured light. The object is successively illuminated with planes of light at regular intervals across

the object. These planes of light result in stripes of light on the object. After digitizing the image, three-dimensional information can be derived using a priori information of the striping mechanism.

Spot ranging and ultrasonic ranging can be used to produce three-dimensional images of objects. With both ranging techniques a signal is transmitted to the object, and a reflected signal is received by the sensor. The time difference between the transmitted signal and the returned signal is proportional to the distance to the object. An entire object is scanned, producing an image with three-dimensional information of the object. A laser is used for spot ranging, whereas ultrasonic sound is used for ultrasonic ranging. [11]
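The time-of-flight relation described above can be sketched numerically. This is illustrative arithmetic only; the propagation speeds are standard physical constants and the round-trip times are invented examples, not figures from the thesis:

```python
# Time-of-flight ranging: the signal travels to the object and back,
# so the one-way distance is (propagation speed * time difference) / 2.

def tof_distance(speed_m_per_s: float, round_trip_s: float) -> float:
    """Distance to the object from a measured round-trip time."""
    return speed_m_per_s * round_trip_s / 2.0

# Laser spot ranging: light travels at about 3.0e8 m/s.
laser_d = tof_distance(3.0e8, 20e-9)   # 20 ns round trip -> 3 m

# Ultrasonic ranging: sound in air travels at about 343 m/s.
sonic_d = tof_distance(343.0, 0.02)    # 20 ms round trip -> 3.43 m
```

The halving of the measured time is the essential step: the time difference covers the path to the object and back.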

2.2 CHARGE TRANSFER DEVICES

Charge transfer devices (CTD) convert an optical image to a digital array, directly accessible for readout. Two types of charge transfer devices are available, namely charge coupled devices (CCD) and charge injection devices (CID).

The CCD resembles an array consisting of MOSFETs on which an image is focussed. The photons incident on the semiconductors generate a series of charges on the CCD array. Charges in the depletion region are transferred to the output by applying a series of clocking pulses to a row of electrodes between the source and drain.

The CID is similar to the CCD except that during sensing the charge is confined to the image site where it was generated. The data can thus be read in a similar way to computer memory.

2.3 TELEVISION CAMERAS

A television camera was chosen for the sensor due to its optical compatibility, availability and high conversion speed. A wide variety of television cameras are available. Only monochrome cameras, being much less expensive than colour cameras, are considered.

Small lightweight cameras, known as closed circuit television cameras, are commonly available. These cameras can be obtained for as little as R1500. Closed circuit television cameras are commonly used for surveillance of security, shops etc. Usually these have vidicon tubes, but the more expensive types, of higher quality, have image orthicon tubes. The vidicon tubes are lighter, smaller, more rugged and cheaper than the image orthicon.

Figure 2.1. The Vidicon Tube. (Courtesy [11]).

A closed circuit television camera with a vidicon tube was obtained for this vision system. The technical data of the camera used are given in Appendix A. Several factors concerning the television camera are discussed here.

2.3.1 The Vidicon

The vidicon tube is shown in Figure 2.1. The target is coated with a transparent conducting film which forms the video signal electrode. This conducting film is coated with a photosensitive layer which consists of tiny resistive globules. The resistance of the globules decreases on illumination. As the target is scanned in a raster fashion with a low velocity electron beam, the surface potential is reduced. The two surfaces of the target form a capacitor,


and the scanning action of the beam produces a capacitive current in the video signal.

2.3.2 Sensitivity

The electrical signal obtained from a vidicon tube consists of two components. The photo-electric current is proportional to the light input and target voltage. The dark current is proportional to the fifth power of the target voltage, and flows even in the absence of light. The dark current is not constant over the entire surface of the target. This results in shading in reproduced pictures, and can be seen as darkened edges on a picture [1]. The light versus current relationship of a vidicon tube is shown in Figure 2.2.

Figure 2.2. Vidicon Characteristics. (Courtesy [1]). (Signal current versus target illumination, lumens per sq ft.)

Most closed circuit television cameras have electronic circuitry that adjusts the target voltage automatically as the illuminance of the target changes. The target voltage adjusts so that the average output of the vidicon tube (taken over several frames) stays constant. This ensures the average output level of the vidicon tube to have a 50% grey value.

This automatic adjustment has disadvantages. Since the average value of the signal is taken over several frames, the target voltage has a considerable time lag before it adjusts as the luminance of the scene changes. Furthermore, the output voltage of the vidicon tube is not proportional to the luminance of a surface in the scene. To overcome these disadvantages, the target voltage was set at a fixed value as described in Appendix A.

2.3.3 The video signal

The output of the television camera is an unmodulated, monochrome, twin interlaced video signal with a field frequency of 50 Hz. The output impedance is 75 Ohm. In Figure 2.3 a video signal is shown, indicating the voltage levels and time base of the signal.

Figure 2.3. The Video Signal. (Courtesy [1]).

The video signal has 625 lines in a frame, resulting in either 312 or 313 lines in a field. Only 300 lines of a field have video information since the first 12 or 13 lines are used to designate either the odd or even field. The ratio of the vertical to the horizontal of a displayed picture is about 3:4.

CHAPTER THREE
THE INTERFACE

The abilities of a vision system are directly influenced by the properties of the interface used. This interface was designed to meet certain constraints such as cost and practicability. The interface was designed according to certain global specifications such as memory size, resolution and conversion time.

The interface designed for this vision system digitizes a video signal in real time. The IBM PC (or equivalent) has become very popular due to its low cost, compatibility and software availability, and was therefore chosen as the data processor. The interface has one video signal input, a parallel port for communication and a computer bus. The data of the digitized video signal, called the image, is transferred to dedicated memory on the interface. The user's manual for the interface is given in Appendix B. This manual comprises the user's operating instructions, circuit diagram and component layout.

3.1 RESOLUTION

Pixel resolution is the number of grey levels at each sampled point, called a pixel. Spatial resolution is the number of samples taken to form the image, and determines the size of the memory matrix in which the image is stored.

3.1.1 Pixel resolution

Since this vision system was designed for robot application, the pixel resolution was chosen to meet certain constraints. One bit systems can be used for simple two-dimensional object recognition such as a character reader [19,21]. Such a simple system was not regarded suitable for the evaluation of lighting conditions. Vision systems with higher pixel resolutions can be used for more complex

recognition problems such as three-dimensional object recognition or general scene analysis. Pixel resolutions of four, six or eight bits are commonly used in vision systems [14,20].

Since integrating type analog to digital converters for up to 8 bits are fairly inexpensive, they need to be considered first. The conversion speed of this type of converter is at best 1,5 µs. At this rate only about 25 conversions can be done in one line of the video signal. Such a low conversion speed produces an image with poor spatial resolution.

Flash converters have high conversion speeds, and signals can be digitized at a rate of 20 Megasamples per second (MPS) and higher. Flash converters are expensive. For example, eight bit converters are in the price range of R500, whereas six bit converters are available for about R200. Due to these high prices of flash converters, a four bit converter was designed and built using comparators and latches.

3.1.2 Spatial resolution

The sampling theorem states that the maximum frequency of a signal must be less than half the sampling frequency so that the sampled signal can represent the original signal unambiguously [4]. For a CCIR standard video signal bandwidth of 7,5 MHz, the sampling rate must be more than 15 MHz. At this sampling rate about 800 samples are taken in a single line. For a frame with 600 lines containing video information, a memory size of 480 kbytes is required to store the image.

Using a lower sampling rate, resulting in a smaller memory size, sufficient data can still be obtained for correct recognition. More efficient recognition is thus done due to a reduction in processing time with a smaller memory. For ease of handling data with a personal computer, the memory size is usually limited to 64 kilobytes. This results in a maximum image size of 256 by 256. Image sizes of 32 by 32 to

256 by 256 are commonly used in machine perception systems [21,22].
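The storage arithmetic above can be checked with a short sketch. One byte per pixel is assumed; the 480 kbyte figure counts 1 kbyte as 1000 bytes, while the 64 kilobyte limit counts 1 kilobyte as 1024 bytes:

```python
# Full-rate sampling: about 800 samples per line over the 600 lines of a
# frame that carry video information, at one byte per pixel (assumption).
full_image_bytes = 800 * 600
assert full_image_bytes == 480_000     # the "480 kbytes" figure

# A 64 kilobyte memory holds at most a 256 by 256 image at one byte per pixel.
assert 256 * 256 == 64 * 1024
```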

CMOS memory devices are less expensive than bipolar memory devices. These devices are available at high levels of integration of up to 8 kilobytes per device. Such devices are available at about R10 each. The maximum access rate of these devices is about 10 MHz.

For ease of handling data the memory size was limited to 16 kilobytes, capable of storing an image size of 128 by 128. Sampling can only be done on integer multiples of lines of the video signal. The video signal has 600 lines with information in a frame, resulting in 300 lines in a field. By sampling every third line in a field, the image size is reduced to 100 by 128. Sampling every sixth line in a frame or every third line in a field results in equivalent image information, the latter at twice the frame frequency.

Figure 3.1 The Video Signal. (Levels indicated: 100% white, 90% white, black, blanking and sync; frame and line sync timing shown, line sync period about 64 µs.)

3.2 SYNCHRONIZATION

In Figure 3.1 it can be seen that the synchronization pulses in a video signal consist of the frame synchronization pulses (vertical sync), each followed by a series of line synchronization pulses (horizontal sync). Operations on the interface are done synchronously with the video signal. The synchronization pulses are separated from the video signal with a level detector.

The frame synchronization pulses are separated from the combined synchronization pulses using a low pass filter and a level detector. Since only every third line is to be sampled, the synchronization pulses following the frame synchronization pulse are divided by three using digital circuitry.

3.3 THE PARALLEL PORT

Communication between the computer and the interface is accomplished by the use of a parallel port on the interface card. By using this port, the interface can be switched to either of the two modes available. Mode one is the sampling mode during which the video signal is continuously digitized and the sampled data are stored in the interface memory. Mode two is the access mode during which the interface memory can be accessed by the computer. Simple handshaking is used to inform the computer of the interface's mode.

To ensure that a complete field has been sampled and stored in the interface memory, switching from one mode to the other is done synchronously with the frame synchronization pulses.

Buffers are used for the address and data buses on the interface to allow access to the memory in either of the two modes.

The addresses of the parallel port are I/O port mapped. Addresses of the parallel port can be selected with switches as described in Appendix B. The programming procedure for the parallel port is described in Appendix B.

3.4 ANALOG TO DIGITAL CONVERSION

Analog to digital converters are specified by temporal resolution (conversion rate) and accuracy of output.

Referring to the video signal, it can be seen that picture information of a single line is shown in 52 µs of the video signal. With 128 samples to be taken in a line, the conversion rate is 2,5 MHz.
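The 2,5 MHz figure can be checked against the 52 µs of picture information per line; this sketch is illustrative arithmetic only:

```python
# 128 samples must fit into the ~52 us of picture information in one line.
active_line_s = 52e-6
samples_per_line = 128

required_rate_hz = samples_per_line / active_line_s   # about 2,46 MHz
# A 2,5 MHz conversion rate therefore just accommodates 128 samples.
assert required_rate_hz <= 2.5e6
assert 2.5e6 * active_line_s >= samples_per_line
```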

For this 4 bit converter, 15 comparators are used for digitizing. The outputs of the comparators are encoded to obtain a 4 bit binary output. The 4 bit output is latched synchronized with the write and address signals to ensure stable data. This data output is buffered and connected to the interface data bus.
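The comparator-and-encoder scheme can be sketched as follows. The evenly spaced thresholds and the reference voltage are assumptions for illustration, not values from the thesis, and the function name is invented:

```python
# A 4-bit flash converter: 15 comparators produce a "thermometer" code
# (every comparator whose threshold lies below the input fires), which
# is then encoded to a 4-bit binary value.

def flash_convert(v: float, v_ref: float = 1.0, bits: int = 4) -> int:
    n_comparators = 2 ** bits - 1                       # 15 for 4 bits
    thresholds = [(i + 1) * v_ref / 2 ** bits for i in range(n_comparators)]
    thermometer = [v > t for t in thresholds]           # comparator outputs
    return sum(thermometer)                             # encode to binary

assert flash_convert(0.0) == 0     # below the lowest threshold
assert flash_convert(0.99) == 15   # above the highest threshold
```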

The 5 MHz clock determines the conversion rate of the converter. The clock frequency is divided by two, resulting in a conversion rate of 2,5 MHz. The write signal for the memory is derived from the clock output.

Address selection is achieved with two seven bit counters. The seven least significant bits are generated by counting the clock pulses. After 128 samples in a line have been taken, the clock is inhibited by the overflow bit of the counter. The line synchronization pulses, separated from the video signal and divided by three, reset the counter. The seven most significant bits of the address are generated by counting the divided line synchronization pulses. This counter is reset by the separated frame synchronization pulse.
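The two 7-bit counters effectively concatenate into a 14-bit address; a sketch (the function name is illustrative):

```python
# The seven least significant address bits count samples within a line;
# the seven most significant bits count the divided line sync pulses.

def pixel_address(line: int, sample: int) -> int:
    assert 0 <= line < 128 and 0 <= sample < 128
    return (line << 7) | sample      # equivalent to line * 128 + sample

assert pixel_address(1, 0) == 128               # first sample of the second line
assert pixel_address(99, 127) == 99 * 128 + 127  # last pixel of a 100-line field
```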

3.5 MEMORY ACCESS

In mode two the memory can be accessed. The memory is memory mapped in this mode and can be accessed directly by the computer. The starting address of the 16 kilobytes memory can be selected using switches as described in Appendix B. The image is stored in memory in such a fashion that the seven most significant bits designate the line number, and the seven least significant bits designate the sampled point in a line.

CHAPTER FOUR
PATTERN RECOGNITION

Pattern recognition is part of the field of Artificial Intelligence, and concerns the recognition or classification of patterns. Patterns are representations of the real world, for instance photographs, odours, sound or speech patterns. A pattern class is a group of patterns with certain common properties. Pattern recognition is the process of assigning an unknown pattern to a known pattern class based on certain criteria.

Figure 4.1. A Representation of an Image.

In a vision system pattern recognition concerns the recognition or classification of the pattern obtained from the light signal input to the system. Two- or three-dimensional chromatic patterns can be obtained using certain pickup devices and interfaces.

An image is a digital pattern, and is represented mathematically by the function

F(X) = {F_red(X), F_blue(X), F_green(X)}

with

X = (x, y, z, t)

Only two-dimensional, time invariant monochromatic images, represented by the function F(x,y), will be considered here. A representation of such a function is shown in Figure 4.1.

Image processing for a pattern recognition device is executed at different levels. These levels are Preprocessing, Segmentation and Classification.

4.1 PREPROCESSING

Preprocessing is done on digital images to enhance certain detail or to transform the image to a form suitable for higher levels of processing. Frequently more than one preprocessing operation is done on an image to obtain the desired result.

4.1.1 Filtering

Several filtering techniques have been developed. Filtering changes the grey level appearance of an image.

Median filtering can reduce noise while preserving edges [23]. This operation is done on an image using an n by n window. The entire image is scanned with the filter.
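A minimal sketch of the n by n median filter described above; leaving the border pixels unchanged is a simplification of mine, and the thesis program itself was written in Turbo Pascal, so this Python version is illustrative only:

```python
# Median filtering: each pixel is replaced by the median of its n-by-n
# window, which suppresses impulse noise while preserving edges.

def median_filter(img, n=3):
    h, w, r = len(img), len(img[0]), n // 2
    out = [row[:] for row in img]          # borders stay unchanged
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[yy][xx]
                            for yy in range(y - r, y + r + 1)
                            for xx in range(x - r, x + r + 1))
            out[y][x] = window[len(window) // 2]
    return out

# A single bright impulse in a flat region is removed:
img = [[0] * 5 for _ in range(5)]
img[2][2] = 15
assert median_filter(img)[2][2] == 0
```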

Background subtraction attempts to remove the slowly varying grey levels of an image [11]. The slowly varying background is removed either by using an analytical model for the background or by using splines, which are piecewise polynomial approximation functions.

High frequency filtering passes high frequency components. This enhances high gradient changes in the image.

A histogram of a two-dimensional digital image is the function of the frequency of occurrence of each grey level. The histogram of the image represented in Figure 4.1 is shown in Figure 4.2. Histogram equalization transforms the image so that the frequencies of occurrence of all grey levels are equal. This transform expands the range of grey levels near histogram maxima and compresses the range of grey levels near histogram minima.

Figure 4.2. A Histogram of an Image.
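Histogram equalization can be sketched via the cumulative histogram. The 16 grey levels match the interface's 4-bit pixel resolution; the particular rounding rule is an assumption of this sketch:

```python
# Histogram equalization: map each grey level through the (scaled)
# cumulative histogram, so crowded parts of the histogram are spread
# out and sparse parts are compressed.

def equalize(pixels, levels=16):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total, mapping = 0, len(pixels), []
    for h in hist:
        cdf += h
        mapping.append(round((levels - 1) * cdf / total))
    return [mapping[p] for p in pixels]

# Grey levels bunched at the dark end get stretched toward the full range:
out = equalize([0, 0, 1, 1, 2, 2, 3, 3])
assert max(out) == 15
```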

4.1.2 Finding local edges

Edge detection is an important part of the recognition process. Research in the field of Lighting Engineering has shown that humans see because of the difference in light intensity of objects. Similarly, objects can only be perceived by a computing device when there exists a difference in grey levels of the image. To enhance or find local edges in an image, several types of operators have been developed.

Edge operators output a direction aligned with the maximum grey level change across the edge, and a magnitude indicating the severity of the change. The magnitude S and the direction theta of an edge operator are given by

S = sqrt(D1^2 + D2^2)    theta = arctan(D2/D1)

The gradient operator is the approximation of the first derivatives. Roberts used a 2 by 2 neighbourhood as shown in Figure 4.3. The difference operators are given by

D1 = p(i,j) - p(i+1,j+1)
D2 = p(i+1,j) - p(i,j+1)

where p(i,j) is the value of the pixel at (i,j).

Figure 4.3. Roberts' 2 by 2 Neighbourhood. (The pixels i,j; i+1,j; i,j+1 and i+1,j+1.)

Sobel and Prewitt have used a 3 by 3 neighbourhood [14] as shown in Figure 4.4. The difference operators are given by

D1 = (a2 + c*a3 + a4) - (a0 + c*a7 + a6)
D2 = (a6 + c*a5 + a4) - (a0 + c*a1 + a2)

where a is the value of the pixels and c is a chosen constant.

Figure 4.4. The 3 by 3 Neighbourhood:

a0  a1  a2
a7  i,j a3
a6  a5  a4
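The difference operators can be sketched as follows. The clockwise a0..a7 labelling (a0 at the top-left corner, a3 mid-right) is the standard convention and an assumption here, as is the use of Python rather than the thesis's Turbo Pascal:

```python
import math

# Difference operators on the 3-by-3 neighbourhood a0..a7 around (i,j);
# c = 1 gives the Prewitt operator, c = 2 the Sobel operator.
# Magnitude S and direction theta follow from the two differences.

def edge(a, c=2):
    a0, a1, a2, a3, a4, a5, a6, a7 = a
    d1 = (a2 + c * a3 + a4) - (a0 + c * a7 + a6)   # horizontal difference
    d2 = (a6 + c * a5 + a4) - (a0 + c * a1 + a2)   # vertical difference
    s = math.hypot(d1, d2)                          # magnitude S
    theta = math.atan2(d2, d1)                      # direction
    return s, theta

# A vertical step edge (dark left and centre columns, bright right column)
# gives a purely horizontal response:
s, theta = edge([0, 0, 9, 9, 9, 0, 0, 0])
assert s == 36.0 and theta == 0.0
```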

The Laplacian operator is the approximation of the mathematical Laplacian. It has fallen into disuse because no directional information is available and, being the second derivative, it enhances noise [14].

Template matching operators use multiple templates or masks at different orientations to find local edges. These templates are fitted at each pixel; the mask with the highest output determines the magnitude of the edge, and the orientation of this mask gives the direction of the edge. Edge masks have been developed for 3 by 3 and 5 by 5 neighbourhoods. Bar masks tend to detect line edges, and a combination of bar and edge masks is used for more complex templates. Schematic masks are shown in Figure 4.5.

Figure 4.5. Schematic Bar and Edge Masks. (Courtesy [14])

Local intensities are fitted with parametric edge models to find local edges. Hueckel [22] has developed a simplified model of an ideal edge as shown in Figure 4.6. An edge is at an angle theta and distance r from the center, and separates two regions with brightness b and b+h. The desire is to compute the ideal step function that matches best with the picture function by varying the parameters theta, r, b and h.

It is very difficult to compare and evaluate edge operators. Some operators find all the edges but are sensitive to noise, while others are insensitive to noise but might miss some crucial edges. Pratt [11] has developed a measure for comparing edge operators.

... measurements by adjusting its threshold based on measurements of neighbouring edges. The edges are adjusted using local information and parallel-iterative techniques.

Figure 4.6. Hueckel's Model of an Ideal Edge. (Courtesy [14]) (Regions of brightness b and b+h.)

4.1.3 Resolution Pyramids

A resolution pyramid is a set of images created from the original image, each having a different resolution. The resolution is reduced either spatially or in grey level. These pyramids are created to speed up higher levels of processing. Processing is first done on low resolution images, and then refined at a higher resolution.

4.2 SEGMENTATION

Segmentation is the process of dissecting the image into meaningful units for further processing. Part of the segmentation process is the coding or labelling of the segmented image [24]. The type of labelling depends on the type of classification to be used.

4.2.1 Boundary Detection

Boundary detection segments an image on the grounds of the dissimilarities between the regions in an image.

... approximate location and refining the boundary. This is done after an approximate boundary has been determined somehow. Analytical curves are fitted when the boundary has been refined.

The Hough method for boundary detection [25] is applicable if little is known about the location of the boundary, but its shape can be described by an analytical curve such as a straight line or conic. The main advantage of this method is that it is relatively unaffected by gaps in curves and by noise.

A graph is a general object that consists of a set of nodes and arcs between nodes. Each arc has a certain weight or cost associated with it. Graph searching is a technique for determining the lowest cost path between nodes. This technique is used to refine boundaries after some edges have been detected in an image.

Dynamic programming is a technique that can be used for finding boundaries by optimizing the path on the basis of the strength and direction of local edges detected in an image.

Contour following is used to find the boundary of a shape in an image. This technique is a simple edge-following operation.

4.2.2 Segmentation into Regions

Segmentation of an image into regions is done on the ground of the similarity of the pixels in a region. Three techniques exist for the segmentation of images into regions.

Blob coloring is a local technique for image segmentation into regions. The image is scanned from left to right and top to bottom while labelling each region in the image. The image is assumed to consist of two parts, namely the object and the background. A threshold value is used to determine which pixels lie on the background or the object.

Splitting-and-merging [26] is a segmentation technique where a region or subregion is split when the pixels in the region are not homogeneous, and two regions are joined or merged when the pixels in the two regions are homogeneous.
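Thresholding plus a raster-scan labelling pass can be sketched as follows; the simple label-merge bookkeeping and all the example values are illustrative assumptions, not details from the thesis:

```python
# Blob coloring: pixels above the threshold are object pixels; a
# left-to-right, top-to-bottom scan labels 4-connected object regions,
# merging labels when two runs turn out to touch.

def blob_color(img, threshold):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}
    next_label = 1
    for y in range(h):
        for x in range(w):
            if img[y][x] <= threshold:
                continue                       # background pixel
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = min(up, left)
                parent[max(up, left)] = min(up, left)   # merge labels
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = next_label      # start a new region
                parent[next_label] = next_label
                next_label += 1

    def root(l):                               # follow merges to the root label
        while parent[l] != l:
            l = parent[l]
        return l

    return len({root(l) for row in labels for l in row if l})

# Two separate bright blobs on a dark background:
img = [[9, 0, 0],
       [0, 0, 9],
       [0, 0, 9]]
assert blob_color(img, threshold=4) == 2
```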

4.2.3 Texture

Texture is treated under the subject of segmentation since texture in a region is a common property of a region, just as the absolute pixel value is in a smooth region [27].

Texture primitives are the smallest basic units of which a textured surface consists. These basic units, called texels, are repeated in different orientations and with deformations in a region to form a textured region.

Structural models are used to define the relationships or orientations of the texels [28]. Using these models texture can be recognized or classified.

Some textures such as landscapes do not have regular formations of texels, and such textures are classified using the frequency domain of the texture.

When a texture has been classified, useful information concerning the orientation of the region in space can be derived.

4.3 PATTERN CLASSIFICATION APPROACHES

Several methods or approaches can be used for the classification of patterns. The approach chosen depends on the application. These approaches are template matching, classification in feature space, cluster analysis and syntactical pattern classification.

4.3.1 Template Matching

Template matching is a direct method of classifying patterns. The image or subimage is compared with stored models of known patterns. The image is classified to the model with the closest match.

4.3.2 Classification in Feature Space

A feature of a pattern is a parameter or measurement taken from the pattern. The feature vector is a set of features taken from a pattern. Pattern classification in feature space is the assigning of the feature vector to the proper pattern class. In Figure 4.7 an example of three classes separated in a two-dimensional feature space is shown.

Figure 4.7. Three Classes Separated in Feature Space. (Courtesy [14])

The process of classification can also be seen as the mapping of feature space into decision space. This mapping is done using classifiers or decision functions. Some classifiers that have been developed are the deterministic classifier, linear discriminant functions, the minimum distance classifier, nearest neighbour classification, polynomial discriminant functions and Bayes (parametric) classifiers.
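One of the listed classifiers, the minimum distance classifier, can be sketched directly; the prototype values and class names below are illustrative inventions, not data from the thesis:

```python
import math

# Minimum distance classifier: each class is represented by a prototype
# (mean) feature vector, and an unknown feature vector is assigned to
# the class whose prototype is nearest in feature space.

def classify(x, prototypes):
    return min(prototypes, key=lambda c: math.dist(x, prototypes[c]))

# A one-dimensional feature space (e.g. a circumference measurement):
prototypes = {"small disk": (40.0,), "large disk": (90.0,)}
assert classify((45.0,), prototypes) == "small disk"
assert classify((80.0,), prototypes) == "large disk"
```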

4.3.3 Cluster Analysis

This pattern classification method is characterized by the use of resemblance or dissemblance measures between the patterns to be classified.

4.3.4 Syntactical Pattern Classification

In syntactical pattern classification the pattern must satisfy relational properties of the model [15].

4.4 THE PROGRAM

This program was written in Turbo Pascal, using some of the techniques outlined in the previous sections. The objects to be recognized are round disks of different sizes. The vision system must classify these disks according to size.

4.4.1 Preprocessing

No preprocessing was done on the image except finding an edge on the object. This is done by scanning the image from left to right using a template until an edge of sufficient strength has been found. The template used for finding the edge as well as the boundary is shown in Figure 4.8.

Figure 4.8. Template for Edge Detection.

4.4.2 Segmentation

Segmentation of the image is done using a contour following algorithm. This contour follower is a search algorithm using the template of Figure 4.8 at each step to find an optimum edge. The algorithm tends to produce an eight-connected pattern of an image with a smooth outline. In an eight-connected pattern every pixel on the outline is connected to only two pixels in any of the eight possible

(28)

23

directions.
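The contour follower itself relies on the thesis template of Figure 4.8, which is not reproduced here; a minimal sketch of the same idea on a plain binary image (object pixels = 1) is the classic clockwise Moore-neighbour trace, which likewise yields an eight-connected outline:

```python
# Clockwise 8-neighbour offsets (row, col), starting east.
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_boundary(image, start):
    """Follow the 8-connected outline of the object containing `start`.

    image: 2-D list of 0/1 values (1 = object pixel).
    start: (row, col) of the first object pixel met when scanning the
           image left to right, top to bottom.
    """
    rows, cols = len(image), len(image[0])
    boundary = [start]
    pixel, search_dir = start, 6          # the background lies to the north of the start pixel
    while True:
        for step in range(8):             # clockwise search for the next outline pixel
            d = (search_dir + step) % 8
            r, c = pixel[0] + OFFSETS[d][0], pixel[1] + OFFSETS[d][1]
            if 0 <= r < rows and 0 <= c < cols and image[r][c]:
                pixel = (r, c)
                search_dir = (d + 6) % 8  # turn back anticlockwise so the trace hugs the outline
                break
        else:
            return boundary               # isolated pixel: nothing to follow
        if pixel == start:
            return boundary
        boundary.append(pixel)

# A 2 x 2 object in a 4 x 4 image: the outline is all four object pixels.
image = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
print(trace_boundary(image, (1, 1)))  # [(1, 1), (1, 2), (2, 2), (2, 1)]
```

Starting from the first object pixel found in a left-to-right, top-to-bottom scan guarantees that the background lies above the start, so the clockwise search begins pointing north.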

4.4.3 Classification

Only one feature, the circumference, is extracted from a pattern. The circumference of a pattern is measured with the distance between two pixels in a rectangular direction as unit. The distance between two pixels in the 45° direction is √2. The detected boundary of the image of Figure 4.1 is shown in Figure 4.9.

Figure 4.9. Detected Boundary of the Image of Figure 4.1.
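Given a traced boundary, the circumference measure described above can be sketched as follows: one unit per rectangular step and √2 per diagonal step, with the contour closed back to its first pixel (the diamond-shaped boundary in the example is hypothetical):

```python
import math

def circumference(boundary):
    """Sum the step lengths along a closed boundary of pixel coordinates.

    A step between horizontally or vertically adjacent pixels counts as
    one unit; a step in a 45-degree (diagonal) direction counts as sqrt(2).
    """
    total = 0.0
    for i in range(len(boundary)):
        r1, c1 = boundary[i]
        r2, c2 = boundary[(i + 1) % len(boundary)]  # wrap around to close the contour
        total += math.sqrt(2) if (r1 != r2 and c1 != c2) else 1.0
    return total

# A small diamond-shaped boundary: four diagonal steps.
print(circumference([(0, 1), (1, 2), (2, 1), (1, 0)]))  # 4 * sqrt(2), about 5.657
```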

CHAPTER FIVE

HUMAN VISION

Human vision is our perception of the physical world through the use of light. The purpose of this chapter is to give an overview of human vision and the lighting conditions influencing its abilities.

5.1 THE ANATOMY OF HUMAN VISION

The anatomy of human vision consists of the eye, optical nerve and brain. Light entering the eye is transformed to neurological impulses which are interpreted by the brain.

Distinction can be made between physiological and

psychological aspects of human vision.

5.1.1 Physiological Aspects

The human eye is one of the most complex organs of the human body. It has the function of converting light to neurological stimuli, suitable for interpretation by the brain. The eye incorporates feedback mechanisms for environmental adaptation.

A horizontal cross-section of the human eye is shown in Figure 5.1. The optical components of the eye are the cornea, aqueous humor, iris, lens and vitreous humor. All these components have filtering characteristics which limit the eye's visual response to between 380 nm and 950 nm.

Light is refracted by the cornea and crystalline lens to focus the retinal image. The retina has two types of receptors which convert light to a neurological signal. Rods are sensitive to low light conditions, and are responsible for scotopic or night vision. Cones are responsible for photopic or daylight vision, and are active at high light conditions. Adaptation to scotopic vision takes about thirty minutes, and about ten minutes are needed for photopic adaptation.


Figure 5.1. A Horizontal Cross-section of the Human Eye. (Courtesy [7])

5.1.2 Psychological Aspects

The human brain interprets the neurological stimuli received via the optical nerve. The interpretation is based on the specific knowledge the observer has of the task, or how well the observer has been trained for the task. The quality of the interpretation is based on factors such as fatigue, age and alertness of the observer.

5.2 ABILITIES OF HUMAN VISION

The abilities of human vision depend on all the components of a human vision system and the properties of a visual display. Internationally adopted CIE standard observers are used for the evaluation of the abilities of human vision. The ability of a specific observer can change due to factors such as fatigue, stress, age or disease.

For the evaluation of visual ability, the observer is subjected to a series of tests. The observer has to detect certain aspects of visual displays with different properties. A visual display is described by its luminance, contrast, spectral distribution, size, relative position, duration, movement and temporal frequency.

As a measure of comparison, some of the abilities of human vision for stationary displays in the primary line of sight are discussed.

5.2.1 Luminance

The human vision system can only distinguish detail in a display if the luminance of the display falls between certain limits. Above about 10 000 cd/m² and below about 0,003 cd/m² the human fails to distinguish any detail.

5.2.2 Contrast

The detection of contrast is the basic task on which all other visual abilities are based. The visual system is highly specialized to inform about luminous discontinuities in the visual field.

The luminance contrast C is a measure of how much the luminance of an object differs from its background. The luminance contrast C is given by

C = (Lg - Ll)/Lg

where Lg denotes the greater luminance and Ll denotes the lesser luminance. This equation of contrast results in a contrast value of between 0 and 1.
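A minimal sketch of this contrast computation (the luminance values in the example are illustrative only):

```python
def luminance_contrast(l1, l2):
    """Luminance contrast C = (greater - lesser) / greater, in the range 0..1."""
    greater, lesser = max(l1, l2), min(l1, l2)
    return (greater - lesser) / greater

# Example: an object at 165 cd/m^2 on a background of 275 cd/m^2.
print(luminance_contrast(275.0, 165.0))  # 0.4
```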

The simplest visual task, the detection of a small round disk on a uniform background, has been studied in great detail. The probability of detecting the disk versus the relative target contrast is shown in Figure 5.2. The contrast at which an object can be detected 50% of the time is called the threshold contrast.

Figure 5.2. Probability of Seeing versus Contrast. (Courtesy [7])

The visibility reference function, shown in Figure 5.3, is the plot of the threshold contrast versus background luminance.

Figure 5.3. Visibility Reference Function. (Courtesy [7])

Similar relationships have been determined with variations in size, duration, colour etc.


5.2.3 Visual acuity

The visibility of small detail of an object depends on the visual acuity. Resolution acuity is the ability to detect that there are two stimuli in the visual field. This is determined by measuring the smallest angle at the eye when two stimuli are detected.

The Landolt ring, shown in Figure 5.4, is often used for determining the resolution acuity. The acuity θ is determined with the formula θ = arctan(d/a), where d is the critical distance and a is the distance from the display to the eye.

Figure 5.4. Landolt Ring with Critical Distance d.

Acuity has been determined with variations in duration, background luminance etc. [7]
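The acuity formula can be evaluated directly; a minimal sketch, with an illustrative critical distance and viewing distance:

```python
import math

def acuity_degrees(critical_distance, viewing_distance):
    """Visual acuity angle theta = arctan(d / a), returned in degrees."""
    return math.degrees(math.atan(critical_distance / viewing_distance))

# A 1.5 mm Landolt ring opening viewed from 600 mm subtends about 0.14 degrees.
print(acuity_degrees(1.5, 600.0))
```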

Recognition acuity is the ability to correctly identify a visual task. This can involve complex perceptual processing, depending on the context, such as to distinguish between the characters l and 1.


CHAPTER SIX

ROBOT VISION

The vision system described in Chapters 2, 3 and 4 was evaluated in a similar way as for human vision. Three experiments with forty-six objects were conducted to evaluate the performance of the robot.

6.1 EXPERIMENTAL PROCEDURE

In order to minimize the effect of specularities of the surfaces of the objects and the possibility of shadows, these experiments were conducted in diffuse lighting conditions. Perfectly diffuse lighting conditions are difficult to create. It was decided that the experimental setup shown in Figure 6.1 would suffice. The setup consists of a plate steel cube, a table, a light source, and a camera mounted above a hole in the center of the cube. The inside of the cube is painted with matt white paint.

Figure 6.1. Experimental Setup.

The light source consists of household 220 V filament type bulbs. Three mountings were fitted, and the bulbs were fed via a variable supply, thus making it possible to change the illuminance.

For the measuring of luminance of the surfaces of the objects, a luminance meter was mounted above the cube, replacing the camera. The objects, described in Appendix E, were placed on the surface of the table. The size of the objects is 258 mm by 335 mm. For each experiment the values of the voltage limits (see Appendix B), the f-stop of the camera lens or the illumination of the light source were changed. These values were selected to utilize the dynamic range of the robot to its fullest extent.

6.1.1 Experiment One

For this experiment the following settings apply:

Luminance of background: 510 cd/m²
Camera target voltage: 30 V
F-stop of camera: 4
Upper voltage limit: -100 mV
Lower voltage limit: -1100 mV
Fitted bulbs: 100 W + 100 W + 60 W
Bulb voltage: 250 V

The results of this experiment are given in Appendix F.

Objects 0 to 30 were tested. Of each object a representation of the image, a plot of a single line of the video signal, the luminance of the object in the center of the image, and the histogram are documented.

6.1.2 Experiment Two

For this experiment the following settings apply:

Luminance of background: 275 cd/m²
Camera target voltage: 30 V
F-stop of camera:
Upper voltage limit: -200 mV
Lower voltage limit: -1000 mV
Fitted bulbs: 100 W + 100 W + 60 W
Bulb voltage: 235 V

The results for this experiment are given in Appendix G. Objects 0 to 36 were tested. Of each object a representation of the image, the luminance of the object in the center of the image, the histogram, the detected boundary, and the circumference are documented.

6.1.3 Experiment Three

For this experiment the following settings apply:

Luminance of background: 230 cd/m²
Luminance of object: 13 cd/m²
Camera target voltage: 30 V
F-stop of camera: 2,8
Upper voltage limit: -200 mV
Lower voltage limit: -1000 mV
Fitted bulbs: 100 W + 100 W + 60 W
Bulb voltage: 220 V

The results for this experiment are given in Appendix H. Objects 37 to 46 were tested. Of each object a representation of the image and the detected boundary are documented.

6.2 ABILITIES OF ROBOT VISION

The performance of the robot was evaluated by making use of the experimental results. The abilities of the robot depend on all the different components, i.e. the sensor, the interface and the processor, each having its limits. The sensitivity of the robot for luminance, contrast and critical detail are given.


6.2.1 Luminance

The sensitivity of the closed circuit television camera versus the level of luminance of the object is given in Figure 6.2. This figure was drawn from Table F.1, of which the voltage levels were obtained from Figure F.3. The voltage versus digital output of the digitizer of the interface, for Experiment One, is shown in Figure 6.3.

Figure 6.2. Transfer Function of Camera.

From Figure 6.2 it can be seen that the linear range of the camera transfer function is between about -200 mV and -1000 mV with a background luminance of about 275 cd/m². These values were used for Experiment Two.

The dark edges near the border of an image representation are a result of shading, discussed in Section 2.3.2. This reduces the usable image size, and influences the shape of the histogram.

Note the high noise level of the video signal, as can be seen in Figure F.3. The maximum peak value of the superimposed noise is about 75 mV, and this results in about one bit uncertainty in the digitized signal. Figure D.1 shows the comparison with the distribution of samples taken from a smooth surface. The influence of noise on the appearance of the image can be seen when comparing Figure D.1 and Figure F.1.

Figure 6.3. Transfer Characteristics of the Digitizer.

6.2.2 Contrast

For the purpose of this discussion, an object is recognized when the circumference (in units) falls within 5% of the expected value. The patterns are thus assigned to the right pattern class if the extracted feature falls within the limits of a deterministic classifier.
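This recognition rule amounts to a one-feature deterministic classifier; a minimal sketch, in which the expected circumferences are hypothetical stand-ins for the values of Table G.1:

```python
def classify(measured, expected_values, tolerance=0.05):
    """Return the class whose expected circumference is within `tolerance`
    (a fraction) of the measured value, or None if no class matches."""
    for label, expected in expected_values.items():
        if abs(measured - expected) <= tolerance * expected:
            return label
    return None

# Hypothetical expected circumferences (in units) for three disk sizes.
expected = {"small": 120.0, "medium": 240.0, "large": 480.0}
print(classify(247.0, expected))  # medium: within 5% of 240
print(classify(300.0, expected))  # None: outside every class limit
```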

Although the boundaries are not necessarily smooth, as for instance for Object 15 shown in Figure G.3, all the features extracted from patterns of which boundaries have been found fell in their respective pattern classes. The relationship of the circumference (in units) of recognized patterns for Experiment Two (see Table G.1) versus the circumference (in millimeter) of the objects, calculated from their descriptions in Appendix E, is given in Figure 6.4.

To know whether an object has been detected or not does not yield practical information such as which object has been detected or whether this detection will result in recognition. In order to overcome this problem, the sensitivity of the robot for contrast is expressed in terms of the probability of recognition versus contrast rather than the probability of detection versus contrast as for human vision. Table G.1 is used for this probability function, shown in Figure 6.5.

Figure 6.4. Classification Characteristics of Robot.

Due to the high noise level of the video signal, edges in the image are not well defined, as can be seen when comparing Figure D.1 and Figure G.1. This results in a considerable reduction in the ability of the robot to recognize patterns. For a synthesized image in Figure D.1, the robot has the ability of finding the boundary when the difference in digital value between the object and background is only two. For real images, it can be seen from the histograms (Figure G.2) that the average distribution of the digital value must differ by about five for recognition.

The sensitivity of the robot for contrast can be expressed in another way. Referring to Table G.1, it can be seen that an object was recognized if there is a difference in luminance of at least 110 cd/m² between the object and background, in a range of 0-275 cd/m². For a practical situation these results imply that only three different shades can be distinguished.

Figure 6.5. Probability of Recognition versus Contrast.

6.2.3 Acuity

In Figure 6.4, the classification capability of the robot when sufficient contrast exists can be seen. The determination of the ability of the robot to detect fine detail was done on tasks with sufficient contrast. The acuity θ = arctan(d/a), where d is the size of the critical detail and a is the distance from the camera to the task.

Figure 6.6. Probability of Detection versus Acuity.

From the results of Experiment Three (Appendix H), it can be observed that the robot was capable of detecting a detail size of 4 mm, but failed to detect a 3 mm opening in the Landolt ring. The probability of detection versus acuity is shown in Figure 6.6.

Due to the specific nature of the search algorithm for boundary detection, searching clockwise at each step, the robot failed to trace the boundary on the inside of the Landolt ring. To extend the ability in this regard, certain adjustments can be made to the starting orientation of the search.


CHAPTER SEVEN

CONCLUSION

The robot operated successfully under certain lighting conditions. Other vision robots can now be evaluated in a similar way, and compared in terms of their visual response.

It has been pointed out which part of the vision system contributes to limiting its response. The program that has been used is simple, giving a fast and accurate response. The accuracy of the interface exceeds that of the sensor, and does not limit the response of the robot by itself. Noise superimposed on the video signal, the accuracy by which the electron beam of the camera is deflected, and shading are the greatest limiting factors on the response of the robot. A higher quality camera will hopefully greatly improve the ability of this robot.


APPENDIX A


CONTENTS

I TECHNICAL DATA

II INSTALLATION AND OPERATION


TECHNICAL DATA

Scanning standard
Type LDH 0050/00: CCIR 625 lines, 50 fields/s, 2:1 interlaced
Type LDH 0050/10: EIA 525 lines, 60 fields/s, 2:1 interlaced

Current supply
a. Mains: Type LDH 0050/00: 220 V ± 10%, operation 50 Hz; Type LDH 0050/01: 110 V ± 10%, 60 Hz. By changing the transformer connections, both types are suitable for operation on 110, 117, 220 and 234 V mains. Power consumption: approx. 18 VA.
b. Battery: Type LDH 0050/00: nom. 12 V (11.2-16 V); Type LDH 0050/01: nom. 12 V (11.2-16 V). Load current approx. 0.7 A. The battery voltage is applied to a separate socket, and should not be connected with the camera housing. The battery connection is electronically protected against polarity reversal.

Camera lens
Schneider "Xenoplan" f/1.9; 25 mm; exchangeable for all types of 16 mm film and Vidicon lenses with C-mount.

Pick-up tube
1" Vidicon, type XQ 1030, with magnetic deflection and focussing, and 6.3 V/90 mA filament power; exchangeable for other equivalent types.

Required illumination
150 Lux reflected light, for a Vidicon of average sensitivity, lens iris to f/1.9, and a signal-to-noise ratio of -38 dB;
5 Lux with a lens of f/0.95, with increased signal amplification (see "Amplifier sensitivity") and acceptable picture quality;
The values may deviate from -50% to +100%, dependent on the sensitivity tolerances of the Vidicon tubes.

Output signals
a. VBS signal, 1.4 Vpp across 75 Ohm, positive going picture signal, negative going sync signal.
b. HF carrier wave, 15 mV (r.m.s.) across 75 Ohm, negatively modulated with the VBS signal; modulation depth approx. 80%. Nominal carrier frequency: for LDH 0050/00, European TV channel 4; for LDH 0050/01, American TV channel 4. HF oscillator tuning to channels 2 and 3 is also possible.

Amplifier sensitivity
Nominal 1 Vpp VBS signal, at a signal current of 0.35 µA; by preset potentiometer, the sensitivity can be doubled, with simultaneous decrease in bandwidth and signal-to-noise ratio.

Frequency response
Flat to 5 MHz, measured with a high-ohmic current source replacing the Vidicon; -3 dB at 5 MHz, at maximum amplifier sensitivity.

Resolution
600 lines visible on a good picture monitor.

Signal-to-noise ratio
-38 dB (r.m.s.) noise voltage against 1 Vpp signal voltage, measured with a 5 MHz weighting filter; -30 dB at maximum amplifier sensitivity.

Light range control
Automatic Va control for an illumination range of approximately 1:60 with Vidicon XQ 1030. For other types of Vidicon, the Va control range can be re-adjusted. Control speed from bright to dark: approx. 6 sec; from dark to bright: approx. 3.5 sec.

White level
White compression between 1.0 and 1.2 Vpp of the VBS signal amplitude; level limitation at 1.2 Vpp.

Black level


Timing
a. With mains locking: pull-in range ± 2 Hz; retaining zone ± 4 Hz mains frequency deviation from the nominal value (50 Hz and 60 Hz, respectively).
b. With battery supply and free-running from mains: ± 2 Hz deviation from the nominal field frequency (50 Hz and 60 Hz, respectively).

Level control
Flat-profile instrument at the camera rear with red-green-red scale to indicate when the automatic control range is exceeded, due to an incorrect iris setting or a too strong respectively too weak scene illumination.

Mounting facility
3/8" W.W. thread in the base plate for tripod mounting;
Type LDH 0050/01 includes a 5/16" W.W. adapter screw;
Two additional threaded holes, 5 mm metric thread, in the base plate are available for a different way of mounting.

Connections
a. Mains connection: fixed mains lead, length 5 m, with fitted plug.
b. Battery connection: a connection lead, length 2 m, with ROKA plug can be supplied separately under type number LDH 8113/00.
c. VBS output: BNC coaxial socket, matching plug EL 8498/11; 75 Ohm termination plug is delivered with the camera.
d. HF output: BNC coaxial socket; a coaxial cable, length 10 m, with BNC plug and with a 75/300 Ohm adapting transformer is delivered with the camera (392240650380).

Permissible ambient temperature
From -10° to +45°C; tropicalised design.

Dimensions
See dimensioned sketch, Fig. 1.

Weight
3.9 kg including Vidicon tube and standard lens.

Fig. 1. Dimensioned sketch of the camera.

II INSTALLATION AND OPERATION

After unpacking, proceed as follows:
- Connect the video output to the input of a video monitor (BNC connectors are used as output sockets). Terminate the monitor with 75 Ohm.
- Connect camera and monitor to the mains and switch on both.
- Open the aperture of the camera so far that the pointer of the level indicator is in the green sector of the scale, and a proper picture is obtained.

If a T.V. receiver is used as picture monitor, proceed as follows:
- Connect the H.F. output of the camera to the aerial input of the receiver by means of the cable with matching transformer supplied.
- Set the channel selector of the receiver to channel 4.
- Connect the camera and the receiver to the mains and switch on the instruments.
- Open the aperture so far that the pointer of the level indicator is in the green sector of the scale and a proper picture is obtained.

III SPECIAL APPLICATIONS

1 FIXED TARGET VOLTAGE ADJUSTMENT
For some applications (document viewers, slide projectors etc.) it is desired to switch off the automatic target voltage circuit, so that the target voltage will not become too high if no image is projected onto the target.
If a fixed target voltage is required, proceed as follows:
- Turn potentiometer R41 fully counter clockwise (cursor at earth potential).
- Adjust the target voltage to the correct value with potentiometer R68. (The voltage value can be measured by connecting a valve voltmeter to the cursor of potentiometer R68.)
The control range of potentiometer R68 is approx. 19 V - 75 V. If a lower target voltage is desired this can be realised by connecting a resistor in parallel with R68. It is not advisable to adjust the target voltage to a value lower than 15 V, because this will give rise to shading on account of landing errors.

2 CONTROL RANGE LIMITATION OF THE AUTOMATIC TARGET VOLTAGE CIRCUIT
If a two-layer vidicon is employed it may be important, in view of the life of the tube, to limit the target voltage at a certain value (e.g. 60 V or 70 V). This can be effected as follows:
- Adjust the camera as indicated in the relevant instructions.
- Cover the lens and adjust the target voltage to the desired value with potentiometer R68. (The target voltage can be measured by connecting a valve voltmeter to the cursor of potentiometer R68.)
Indicator M1 will not show the correct value.


3 ELIMINATING THE MAINS LOCK
The mains lock can be eliminated as follows:
- Remove the two plugs from points 11 and 12 of the time base p.c. board.
- Cut off both plugs and interconnect the plugs by means of a short wire.
- Refit the two interconnected plugs to points 11 and 12.
- Insulate the ends of the cut-off wires.

4 CONNECTING THE LDH 0050/01/04/07 (60 Hz) TO A 50 Hz MAINS
- Remove the mains lock (see point 3).
- Connect an electrolytic capacitor of 320 µF 6 V in parallel with R78.
It is advisable not to use fluorescent tubes or incandescent lamps with thin filaments for the illumination, in order to avoid 10 Hz flicker. Preferably use low voltage lamps with thick filaments (e.g. car lamps).

5 CONVERTING AN LDH 0050/00/03/06 (50 Hz) INTO AN LDH 0050/01/04/07 (60 Hz)
- Adapt the mains transformer to the correct mains voltage.
- Replace capacitors C143 and C145 by capacitors of 1K8 pF and 3K pF respectively.
- Adjust the camera (see chapter IV).

6 TURNING THE DEFLECTION COILS 90°
For certain applications (such as document viewing) it is necessary that the deflection coils are turned 90°; proceed as follows:
- Disconnect the video p.c. board.
- Remove the vidicon (see chapter IV "Replacing the vidicon").
- Remove ornamental plate 3 Fig. 7. The ornamental plate is glued in place and can be prized off carefully with the aid of a knife.
- Remove the three screws 31 Fig. 7 by means of which the deflection unit is secured.
- Take the deflection unit carefully out of the camera. Remove the two screws 1 Fig. 2 and pull the coils approx. 1 cm backwards.
- Warning: The coils should not be pulled back more than approx. 1 cm as otherwise the connection wire of the target may break.
- Turn the coils 90° clockwise and push them forward again, so that the 5 connection wires of the deflection coils fit into the second recess.
- Refit the two screws 1 Fig. 5; however, do not yet tighten them completely.
- Refit the deflection unit in the camera and glue the ornamental plate in position.
- Refit the vidicon and the objective.
- Check that the image is vertical. If this is not the case, this may be corrected by slightly turning the deflection coils.


- Tighten the two screws 1 Fig. 5 and fit the video p.c. board (make sure the correct rings are fitted).

7 HORIZONTAL MOUNTING OF THE CAMERA
With the aid of the bracket shown in Fig. 4, the camera can be mounted in a horizontal position.

8 SWITCHING OFF THE H.F. OSCILLATOR
For certain applications it may be required to make the H.F. oscillator inoperative to avoid radiation. The simplest way to do this is to disconnect resistor R72 on one side, thus interrupting the supply voltage of the oscillator.

9 ADJUSTING TO A HIGHER SENSITIVITY
When the amount of light is very small, e.g. in the case of microscopy, it is possible to increase the sensitivity by a factor 1-2 with potentiometer R13.

10 BATTERY OPERATION
The camera can also operate on a 12 V battery. With special cable LDH 8113/00 the camera can be connected to the battery.
- Warning: If the camera is connected to a car battery the camera housing must be insulated completely from the tripod, or the battery must be free from earth (floating). Usually one of the two poles is connected to chassis (earth).


APPENDIX B

THE VIDEO INTERFACE USER'S MANUAL


CONTENTS

1. General description

2. Installing the interface card

3. Adjustments
4. Programming procedure
5. Component layout
6. Circuit diagram
7. Component list


1. GENERAL DESCRIPTION

This interface digitizes a monochrome television signal for video processing. It was designed for IBM compatible personal computers and fits into the standard expansion slot of these computers. The interface was used for this thesis for interfacing between the sensor and data processor as described in Chapter Three.

Several features have been added for ease of operation by the user. Data are automatically updated in the 16 kbyte memory every 20 ms, and can be randomly accessed just after a frame has been stored. A parallel port enables the user to communicate synchronously with the video signal and other external devices.

Four bit digitizing is performed on the video signal. Adjustments to the digitizing range, digitizing rate and an offset can be made.


2. INSTALLING THE INTERFACE CARD

Unpack the interface. Refer to your computer's DOS manual and find unused I/O mapped and memory mapped address areas. Select the addresses according to the tables below.

When the respective bit of the selected address is 1, the switch for that address must be ON. As an example the switches are set at addresses C0000 (16 k memory mapped) and 300 (parallel port) as shown below.

16 k MEMORY MAPPED ADDRESS SELECTION (SW1)
Address bits A19 to A14 are selected with switches SW1 1 to 8.

PARALLEL PORT ADDRESS SELECTION (SW2)
Address bits A9 to A2 are selected with switches SW2 1 to 8.

With the computer's power switched off, open the top cover.

Remove one of the expansion slot covers in the rear panel of

the computer. Insert the interface card into the

corresponding expansion slot of the computer.

Close the cover of the computer. Connect the output lead of a closed circuit television camera to the video input of the interface card with the BNC connector. The interface card has now been installed and can be utilized through software.


3. ADJUSTMENTS

Certain adjustments can be made to the interface card to suit a specific need. These adjustments are the upper and lower voltage limits of the digitizer, the digitizing frequency and the offset from left of a frame before digitizing is performed.

Voltage limits

The upper and lower limits of the digitizer can each be set to a level of between -2 and 2 volts. The lower limit is set with P1. The upper limit is set with P2. Input voltages to the digitizer above or below these values result in a digital value of 0 or 0F (hex).
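The quantization between the two limits can be sketched as follows. The limit values used here are those of Experiment Two in Chapter Six; which extreme code corresponds to which limit is an assumption, since the manual only states that out-of-range inputs clip to 0 or 0F (hex):

```python
def digitize(voltage_mv, lower_mv=-1000.0, upper_mv=-200.0):
    """Quantize a voltage to a 4-bit value between two limits.

    Voltages outside the limits clip to the extreme codes 0 and 15 (0F hex);
    which extreme corresponds to which limit is assumed here.
    """
    if voltage_mv <= lower_mv:
        return 0
    if voltage_mv >= upper_mv:
        return 15
    step = (upper_mv - lower_mv) / 16.0  # 50 mV per level for these limits
    return int((voltage_mv - lower_mv) / step)

print(digitize(-600.0))  # mid-range voltage -> level 8
```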

Clock frequency

The digitizing rate of the video signal is determined by the frequency of the oscillator and can be set with P3. This potentiometer is usually set for an oscillator frequency of 5KHz. This frequency setting results in a digitized image of which a unit length in the horizontal or vertical direction of the viewed object results in the same amount of memory allocations in either direction.

Left offset

Digitizing of a line of a video signal is started after each line synchronization pulse. The offset from the end of the line synchronization pulse to the start of digitizing can be set with P4. This setting represents the offset from left on a monitor.


4. PROGRAMMING PROCEDURE

The programming procedure for the interface can be divided into two parts. The first part is to program the parallel interface for the application. The second part is to continuously digitize and read images with the interface.

Parallel interface

The 8255 parallel interface has three output ports, A, B and C. The mode of the parallel interface determines which of these ports are selected as input or output. The mode that is selected must utilize port C so that the lower four bits are output, and the upper four bits are input. The other two ports can be utilized in any manner. An example is given below:

Write the value 9A (hex) to I/O address xxxx11 (binary).
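The value 9A (hex) can be checked against the Intel 8255 mode-set control word format, in which bit 7 is the mode-set flag and bits 4, 3, 1 and 0 give the directions of port A, port C upper, port B and port C lower (1 = input). A small sketch decoding it:

```python
def decode_8255_control(word):
    """Decode an Intel 8255 mode-set control word into port directions."""
    assert word & 0x80, "bit 7 must be set for a mode-set control word"
    def direction(bit):
        return "input" if word & (1 << bit) else "output"
    return {
        "port A": direction(4),
        "port C upper": direction(3),
        "port B": direction(1),
        "port C lower": direction(0),
    }

print(decode_8255_control(0x9A))
```

Decoding 0x9A gives port C upper as input and port C lower as output, exactly the split the interface requires.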

Memory outlay and access

The video interface has two modes. In mode one the video signals are continuously digitized and the data stored in memory. In mode two the memory can be randomly accessed.

The digitized video signal, called the image, is stored and refreshed in memory every 20 ms while in mode one. Of every line of the picture signal 128 samples are taken. 104 lines in a field of the video signal are sampled. To enter the interface into mode one, the least significant bit of port C must be zero. An example is given below:

Write xxxxxxx0 (binary) to I/O address xxxx10 (binary).

While in mode two, the 16 kbyte memory can be accessed randomly by the user. To enter mode two, the least significant bit of port C must be set. An example is given below:
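The mode-selection logic can be sketched as follows, using a Python variable in place of the real output port (the actual I/O address and the machine's port-write instruction are hardware-specific and are not shown):

```python
# Software image of port C; in the real system this value would be
# written to the 8255's port C at its I/O address.
port_c = 0x00

def set_mode(continuous):
    """Enter mode one (continuous digitizing) by clearing bit 0 of
    port C, or mode two (random memory access) by setting it."""
    global port_c
    if continuous:
        port_c &= ~0x01   # bit 0 = 0 -> mode one
    else:
        port_c |= 0x01    # bit 0 = 1 -> mode two
    return port_c
```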


Within 20 ms, bit 4 of port C will be set by the interface synchronously with the video signal. The memory can be accessed after bit 4 has been set, or at least 20 ms after bit 0 of port C has been set.

The seven most significant bits of the memory address index the line number of the image. Line numbers 5 to 104 (binary addresses xxxxxx0000100xxxxxxx to xxxxxx1101000xxxxxxx) should be used, since the first four lines contain no video information. The seven least significant bits of the memory address index the sample in a line. These samples can be accessed with binary addresses xxxx0000000 to xxxx1111111. An example where a memory allocation near the centre of the image is read from memory is given below:


5. COMPONENT LAYOUT

[Component layout diagram showing the placement of the ICs (U6 to U27) and the potentiometers P1 to P4 on the interface board.]


6. CIRCUIT DIAGRAM

[Figure: circuit diagram of the video interface.]
