High speed stereo visualization of hair-skin dynamics

Citation for published version (APA): Sridhar, V. (2010). High speed stereo visualization of hair-skin dynamics. Eindhoven: Technische Universiteit Eindhoven, Stan Ackermans Instituut, Design and Technology of Instrumentation (DTI). Published: 01/01/2010. Document version: Publisher's PDF, also known as Version of Record.

HIGH SPEED STEREO VISUALIZATION OF HAIR-SKIN DYNAMICS

by Vijayalakshmi Sridhar

-Confidential-

A one year project presented to Eindhoven University of Technology towards the degree of Professional Doctorate in Engineering in Design and Technology of Instrumentation. Appendices.

November 2010
Care and Health Applications, Philips Research, Eindhoven, The Netherlands

High Speed Stereo Visualization of Hair-Skin Dynamics / by Vijayalakshmi Sridhar. Eindhoven, 2010. (Design and Technology of Instrumentation). A catalogue record is available from the Eindhoven University of Technology Library. ISBN: 978-90-444-1003-7 (Eindverslagen Stan Ackermans Institute; 2010/083).

Design and Technology of Instrumentation
Author: Vijayalakshmi Sridhar
Date: 30 November 2010
High Speed Stereo Visualization of Hair-Skin Dynamics: Appendices
Supervisors: N. Uzunbajakava, C. Ciuhu, J. Botman


CONTENTS

Appendix A   Stereo Imaging - Mathematical Proof
Appendix B   3D Optical Setup - Camera Selection and Optical Elements
Appendix C   3D Optical Setup - Illumination
   C.1 Cross-polarization for specular reflection blocking
   C.2 Wavelength characterization
   C.3 Light sources tested for the setup
   C.4 Summary of illumination source selection
Appendix D   3D Optical Setup - Mechanics
   D.1 Mechanical components in the setup
   D.2 Degrees of freedom - static and dynamic
   D.3 Force sensor measurements
   D.4 Integrating cameras and mechanics with LabView
Appendix E   3D Optical Setup - Video Processing Tool
Appendix F   System Calibration
   F.1 Verification of resolution and contrast
   F.2 Verification of field of view
   F.3 Whole system calibration
Appendix G   Error Analysis of 3D Setup
   G.1 Sources of error
   G.2 Error model
Appendix H   HSM Principles and Implementation
   H.1 Hair-Skin Manipulation (HSM) modules
   H.2 Laser markings and software update
   H.3 Manipulated hair definition
Appendix I   Experimental Pre-Conditions
   I.1 Module geometry parameters
   I.2 Subject dependent parameters
   I.3 Physical parameters
   I.4 Environmental parameters
Appendix J   Experiments Performed on Each Hair
Appendix K   Results in Detail
   K.1 Overall performance of modules
   K.2 Module performance
   K.3 Region-wise performance
   K.4 Most efficient manipulation
   K.5 Effect of force on hair manipulation
References

APPENDIX A
STEREO IMAGING - MATHEMATICAL PROOF

To prove the concept of stereo imaging, a mathematical derivation is given in this section. Fig. A.1 shows the vector representation of the proposed setup. Two high speed cameras are placed at an angle θ on either side of the z axis. The skin surface is assumed to lie in the x-y plane, with the hair on top of it. As the hair and skin are manipulated, the two cameras capture the same field of view from different angles.

In the vector diagram, C1 and C2 are the cameras, with viewing vectors

c1 = (-sin θ, 0, cos θ) and c2 = (sin θ, 0, cos θ)

and the hair is the vector v = (vx, vy, vz).

Fig. A.1: Vector representation of stereo system

Each camera records the projection of v onto its image plane. The projection onto the x-y plane, orthogonal to the z axis, is given by the projection matrix

P = | 1 0 0 |
    | 0 1 0 |
    | 0 0 0 |

The projections of the hair on the horizontal and vertical axes of the image, as viewed by cameras C1 and C2, are denoted (a1, b1) and (a2, b2). The cameras are rotated by the angle ±θ about the y axis; the rotation matrix is

R(θ) = |  cos θ  0  sin θ |
       |    0    1    0   |
       | -sin θ  0  cos θ |

With m the magnification, the projections of the hair recorded while the cameras are rotated about the y axis are

(a1, b1, 0) = m · P · R(θ) · v
(a2, b2, 0) = m · P · R(-θ) · v

so that a1 = m(vx cos θ + vz sin θ), a2 = m(vx cos θ - vz sin θ) and b1 = b2 = m·vy. Solving these equations gives the dimensions of the hair:

vx = (a1 + a2) / (2m cos θ)
vy = (b1 + b2) / (2m)
vz = (a1 - a2) / (2m sin θ)

Thus, knowing the projections of the hair in the two image planes and the placement angle of the cameras gives the three-dimensional information of the hair. Fig. A.2 shows such a sample stereo image.

Fig. A.2: Stereo image of dummy hair sample

Here a1 = 41 mm, b1 = 26 mm and a2 = 30 mm, b2 = 26 mm, measured on the two images. Based on the calculations above, the length of the hair marked in Fig. A.2 is

l = sqrt(vx² + vy² + vz²) = 1.67 mm
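A minimal numeric sketch of this reconstruction (Python; the function name and the sample projection values are illustrative, not taken from the report):

import math

def hair_vector(a1, b1, a2, b2, theta_deg, m):
    """Reconstruct the 3D hair vector from the two camera projections.
    (a1, b1) and (a2, b2) are the projections seen by cameras C1 and C2,
    theta_deg the camera angle from the z axis, m the magnification."""
    th = math.radians(theta_deg)
    vx = (a1 + a2) / (2 * m * math.cos(th))
    vy = (b1 + b2) / (2 * m)
    vz = (a1 - a2) / (2 * m * math.sin(th))
    return vx, vy, vz, math.sqrt(vx**2 + vy**2 + vz**2)

# Illustrative projections in micrometres, camera angle 20 deg, magnification 2X:
print(hair_vector(a1=120, b1=260, a2=95, b2=260, theta_deg=20, m=2))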

APPENDIX B
3D OPTICAL SETUP - CAMERA SELECTION AND OPTICAL ELEMENTS

Fig. B.1: Photron Fastcam camera system

As mentioned in Chapter 1, successful visualization of the hair-skin manipulation process should meet the following core specifications:
1. Resolution: 10 µm
2. Depth of focus: 300 µm
3. Imaging speed: 1000-2000 fps
4. Field of view: 1 mm² - 3 mm²

After considering several camera options against these specifications, the camera system chosen was the Photron Fastcam MC2 (Fig. B.1). These cube-shaped C-mount cameras have a maximum frame rate of 2000 fps at a resolution of 512 x 512 pixels; they can reach 10,000 fps, but at a reduced resolution of 512 (horizontal) x 96 (vertical) pixels. The light-sensitive CMOS sensor has a pixel pitch of 10 µm, and each camera weighs 90 grams.

The design aspects to be considered while creating the 3D optical setup are:
1. Numerical aperture
2. Resolution
3. Depth of focus
4. Frame rate
5. Field of view and magnification
6. Angle between the cameras
7. Working distance

Numerical aperture: before any further discussion it is important to introduce the parameter that impacts resolution, depth of focus and frame rate alike: the numerical aperture. It limits the amount of light entering the camera and thus determines the achievable frame rate. Taking movies at the high frame rates our requirements dictate demands enough light to form bright images, so the numerical aperture should be as large as possible. Keep in mind, however, that the numerical aperture also influences the resolution and the depth of focus, and a large numerical aperture is not beneficial for obtaining a large depth of focus. The numerical aperture is a measure of the light gathering power of the system and is given as [1]:

NA = n sin θ

where n is the refractive index (we image in air, so n = 1) and θ is the half angle of the acceptance cone.

Fig. B.2: Numerical aperture definition (top); calculation of focal length (bottom)

In Fig. B.2, D is the diameter of the entrance pupil of the lens and f is the focal length of the lens. The entrance pupil diameter can be changed by placing a diaphragm of a specific diameter in front of the lens; this diameter was chosen carefully, as it determines the numerical aperture.

By the lens equation we know:

1/f = 1/u + 1/v

where u is the object-to-lens distance and v is the lens-to-image distance. With f = 30 mm and a diaphragm diameter D = 2.5 mm:

tan θ = (D/2)/f = (2.5/2)/30, so θ = 2.39°

NA = 1 · sin 2.39° = 0.04

It is worth mentioning that lenses can be attached directly to the camera sensor without extension tubes (Fig. B.2, bottom), but this gives less freedom in the working distance and the magnification of the object. Specific lenses were therefore combined with specific extension tubes to obtain the desired working distance, and thus the desired magnification. The following sections show how the numerical aperture affects these parameters.

B.1 RESOLUTION
Optical resolution is the ability of the system to resolve detail in the imaged object [1]. The calculated resolution is

R = 1.22 λ / (2 NA) = 1.22 · 600 nm / (2 · 0.04) ≈ 9 µm

where λ is the central wavelength of the spectrum of the camera sensor. This is the optical resolution when the camera faces the object normally; the equation makes clear that the resolution improves with higher numerical aperture.

The spatial sampling rate of the camera sensor is

sampling rate = field of view / number of pixels = 2250 µm / 512 pixels ≈ 4.4 µm/pixel

This means that 4.4 µm of the object under view is sampled into one pixel (10 µm).

B.2 DEPTH OF FOCUS (DOF)
Depth of focus is the range of distance within which an image looks acceptably sharp. It is given as [2]:

DOF = λ/NA² + e/(M·NA) = 600 nm/0.04² + 10 µm/(2.3 · 0.04) ≈ 485 µm

where e is the resolution of the camera sensor (10 µm) and M the magnification. The depth of focus decreases as the numerical aperture increases; there is always a tradeoff between resolution and depth of focus, and the numerical aperture has to be fixed at a value acceptable for the system.
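These optics calculations as a short sketch (Python; the variable names are mine, the input values are those of the text):

import math

n = 1.0                  # refractive index of air
f_mm, D_mm = 30.0, 2.5   # focal length and diaphragm (entrance pupil) diameter
lam_um = 0.6             # central wavelength, 600 nm
pixel_um, M = 10.0, 2.3  # sensor pixel pitch and magnification

theta = math.atan((D_mm / 2) / f_mm)           # half angle of the acceptance cone, ~2.39 deg
na = n * math.sin(theta)                       # numerical aperture, ~0.04
res_um = 1.22 * lam_um / (2 * na)              # optical resolution, ~9 um
dof_um = lam_um / na**2 + pixel_um / (M * na)  # ~450 um here; ~485 um with NA rounded to 0.04
print(f"NA = {na:.3f}, resolution = {res_um:.1f} um, DOF = {dof_um:.0f} um")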

B.3 FRAME RATE
Frame rate is the frequency at which the camera captures frames. The maximum frame rate possible with this camera system is 2000 fps (at 512 x 512 pixels). Depending on the power of the illumination system, the numerical aperture, and the possible use of components such as filters and polarizers, the achievable frame rate decreases, as part of the light is blocked by the diaphragm or absorbed by these components.

B.4 MAGNIFICATION AND FIELD OF VIEW (FOV)
The camera sensor size is 5.12 mm x 5.12 mm. To achieve a field of view of 1-2 mm², the required magnification is about

m = sensor size / object size = 5.12 / (1 to 2) ≈ 5 to 2.5

With an average 24 h beard stubble length of 300 µm, a 1 mm field of view should in principle be sufficient. However, we found experimentally that with this FOV some longer hairs, and especially hairs positioned at an angle, fall outside the image. To capture hairs more efficiently, a magnification of 2X was used, which theoretically gives a field of view of

FOV = 5.12 / 2 ≈ 2.56 mm

B.5 ANGLE BETWEEN THE CAMERAS
The angle between the cameras needs to be optimal in two respects: it must be large enough to capture sufficient disparity to calculate 3D information from the two stereo images, and small enough that there are sufficient points of correspondence to recognize that the two images indeed show the same object. While it would be convenient to place the cameras at 90 deg, one camera normal to the skin surface and the other parallel to it, this arrangement is not practical: no camera or lens component may touch the hair or skin, and it becomes difficult to correlate and identify common points between the images when the angle between the cameras is large. Reducing the angle strongly (say to 10 deg, similar to human eyes) is not possible either, since the lenses have a finite size and would collide at such small angles, and focusing both cameras on the same field of view becomes increasingly difficult.

After brief research [3], experiments, and calculations, the optimal angle between the cameras was found to lie between 30 and 40 degrees. During all experiments the angle between the cameras was fixed at 40 deg, with each camera 20 deg from the normal.

B.6 WORKING DISTANCE
Working distance is the distance between the optical lens system and the object in focus of the imaging system. It should leave sufficient space to place additional optics (filters, polarizers) in front of the lens system. As shown in the numerical aperture section, the working distance also determines where the object is placed in front of the lens, and hence the magnification obtained. Based on these criteria we selected the lens system shown in Fig. B.3.

Fig. B.3: Camera and lens system (camera, InfiniStix lens, doubler)

The doubler and the InfiniStix lens (Edmund Optics) each have a magnification of 2X, giving a total magnification of 4X. The InfiniStix lens has a working distance of 44 mm. For our system the doubler is removed, leaving a magnification of 2X.


APPENDIX C
3D OPTICAL SETUP - ILLUMINATION

It is crucial to have an illumination source powerful enough to take bright images at high frame rates. With the angle between the cameras fixed at 40 deg, the light source must also not be too bulky: it has to fit within the space between the two cameras. The illumination source must furthermore give maximum contrast between the hairs and the skin. As can be seen from Fig. C.1, the image shows two typical problems: specular reflections (mirror-like reflections obeying the law of reflection) and hairs visible under the skin. Both can mislead the software, or the manual inspection, used to calculate hair length: 1) information originating from the part of the hair below the skin level will be interpreted by the software as hair above the skin level, and 2) specular reflections on hair and skin make it difficult to analyze which part is skin and which part is hair; a reflection spot on the hair would be mistaken by the software for skin, leading to errors.

Fig. C.1: Problems in illumination: specular reflections; hair visible under skin

The following topics, relevant for selecting the illumination system, are discussed below:
1. Use of light polarization: how to avoid specular reflections.
2. Wavelength: which wavelength is the most suitable for our application.
3. Source of illumination (laser, LED, lamps, etc.): several sources are explored based on the requirements of power, signal-to-noise ratio, wavelength and polarization.

C.1 CROSS-POLARIZATION FOR SPECULAR REFLECTION BLOCKING

It is well known that the problem of specular reflections (bright spots), as shown in Fig. C.1, can in principle be solved by using cross-polarized detection [4].

Fig. C.2: Polarization to block specular reflections

In cross-polarized detection (Fig. C.2), one polarizer is placed at the output of the light source to create illumination with a single polarization state. A second polarizer, called the analyzer, is oriented at 90° with respect to the first. The analyzer then blocks most of the specular reflections and passes the components of diffusely reflected light that are parallel to its orientation. The light source used here was provided by Schott (KL 1500 LCD), with a 150 W halogen lamp. Fig. C.3 shows an image taken with this technique; no specular reflections are visible. When polarizers are used, however, the light intensity drops significantly, 1) due to intrinsic absorption of light in the polarizers and 2) due to rejection of the polarization states blocked by polarizer and analyzer. Since the light source was already at maximum power, the frame rate of the data acquisition had to be decreased to maintain acceptable image quality; in our case the image was taken at 125 fps.

Fig. C.3: Linearly polarized image without specular reflections

Since this low frame rate was not sufficient for our application, no polarizers were used in the follow-up experiments.

C.2 WAVELENGTH CHARACTERIZATION

To discuss the choice of illumination wavelength it is important to refer to the structure and composition of human skin; Fig. C.4 shows its cross-section. The skin contains different layers and components that absorb and scatter light [5]. At the junction between the top layer of the skin (the epidermis) and the layer below (the dermis) lie cells called melanocytes, which produce the pigment melanin, capable of absorbing UV light. The dermal layer contains haemoglobin, another light-absorbing component. Furthermore, melanin, haemoglobin and the collagen in the dermal layer also scatter light. Thus, as light propagates through the skin it is absorbed and scattered. It is known from the literature that shorter wavelengths are absorbed and scattered more strongly, and therefore penetrate less deeply into the skin [6]. We wanted to check whether, by using a shorter wavelength that penetrates less into the skin, we could achieve significant rejection of the signal originating from hair under the skin. Using the filters of the KL 1500 LCD Schott white light source, images at different wavelengths were acquired; the results are shown in Fig. C.5. A hair was chosen such that part of it was visible under the skin in white light. As can be seen, the hair under the skin remains visible at all wavelengths. Even blue light, which has the shallowest penetration (about 100 µm), penetrates deeply enough to scatter and reflect around the hair, so that hair just under the skin can still be visualized.

Fig. C.4: Cross-section of human skin [7]

Fig. C.5: Visualization of hair under different illumination wavelengths, with the corresponding frame rates: white 1000 fps, blue 250 fps, green 500 fps, yellow 1000 fps, red 1000 fps, daylight filter 500 fps.

Fig. C.6 gives additional data on the color filters used in these experiments. In terms of contrast, white light and green light both seemed satisfactory; blue light had the worst contrast. Taking into account the two issues of hair visibility under the skin and best possible contrast, it was decided to continue with white light.

Fig. C.6: Irradiance (relative units) versus wavelength (350-750 nm) at the light guide exit of the KL 1500 LCD Schott lamp, for no filter and for the blue, red, green, yellow and daylight filters.

POWER LEFT WITH USE OF COLOR FILTER AND POLARIZER

Fig. C.7 illustrates what would happen if, for the best visualization results, blue light and polarizers were used together. From data provided by Schott, the power at the tip of the optical fiber is 25 W.

Fig. C.7: Linear polarized image without specular reflections

If imaging is done without any other optical components between the fiber and the object, with the object placed no more than 2 cm from the light source, the target frame rate of 2000 fps is achieved. A polarizer ideally transmits 1/2 of the light; in our measurement only about 1/4 was transmitted, presumably due to additional absorption in the optical component. The blue filter (which absorbs the most light of all the color filters) transmits only about 1/6. So, starting with 25 W at the fiber tip, about 1 W is left at the object. At this power, imaging can only be done at 80 fps; light sources with 20 to 25 times more power would be required.
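The arithmetic of this power budget as a small sketch (Python; the linear scaling of frame rate with power is the report's own rule of thumb):

p_fiber_w = 25.0       # power at the fiber tip
t_polarizer = 1 / 4    # measured transmission of the polarizer (ideally 1/2)
t_blue_filter = 1 / 6  # approximate transmission of the blue filter

p_object_w = p_fiber_w * t_polarizer * t_blue_filter  # ~1 W left at the object
fps_at_1w, target_fps = 80, 2000
print(f"power at object: {p_object_w:.2f} W")
print(f"extra power needed for {target_fps} fps: ~{target_fps / fps_at_1w:.0f}x")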

C.3 LIGHT SOURCES TESTED FOR THE SETUP

HALOGEN LAMPS: The power of the light source is essential to achieve acceptable hair-to-skin contrast at the required frame rate. The image in Fig. 2.11 was made using a white light source, the KL 1500 LCD provided by Schott, with a 150 W halogen lamp. The light from this source is guided through a flexible, single-branch light guide with a 5 mm diameter.

POWER LEDS: Power LEDs were chosen as an exploratory option to illuminate the hair-skin field of view. LEDs are light emitting diodes, essentially a diode made of a specific semiconductor material (for blue emission, indium gallium nitride). The material is chosen such that when electrons recombine with holes near the p-n junction of the diode, the emitted photons have energies in the visible spectrum of light.

Fig. C.8: LUXEON Rebel color LED and its coupling optical element

Philips Lumileds Lighting Company is a leading manufacturer of high power LEDs (~0.5 W). The chosen LED was the LUXEON Rebel Color, specifically the Royal-Blue (460 nm wavelength), with a drive current of 500 mA, a forward voltage of 3.4 V and a power of 525 mW. The LED was attached to an aluminum heat sink (Fig. C.8), as heat dissipation is one of the most critical issues with LEDs in general. Two contact pads were attached to the heat sink so that the electrical connection to the LED can be made easily. The LED has a dome-shaped plastic cap above the diode through which most of the light exits; this spreads the emitted light, so to focus it to some extent an optical element was obtained from Carclo Optics. This element uses total internal reflection to force the light into an angle between 12 and 35 deg from the normal, up to a distance of 20 mm.

Fig. C.9: Illumination by LUXEON Rebel Color LED (Royal-Blue)

Fig. C.9 shows an image of hair and skin obtained under royal-blue LED illumination at a frame rate of 250 fps. To obtain a frame rate of 2000 fps, 8 such LEDs would be required, and placing 8 LEDs with their bulky optical elements would take a lot of space. If a polarizer and analyzer combination were added, the frame rate would be reduced even further. It was therefore decided that LEDs were not a practical solution for our system, and this option was eliminated.

LASER: A laser was another exploratory option considered for illuminating the object. A laser is attractive because it offers a free choice of wavelength, the beam is generally already polarized, and the power can be chosen to suit our needs. The biggest challenge with a laser is its coherence: the light waves arrive in phase with one another. When a laser beam hits a rough object it is scattered; some components arrive exactly in phase and interfere constructively, creating bright spots, while others arrive completely out of phase and interfere destructively, creating dark spots. This pattern of bright and dark spots is called speckle, and it deteriorates the quality of the image formed. There are techniques to reduce or eliminate speckle, such as illuminating the object at an angle, using two lasers of slightly different wavelength, passing the beam through a rotating glass wedge, or combining different polarization states [8]. We decided to pass the laser through an optical fiber: due to the multiple reflections along the length of the fiber, the beam loses its coherence, at least to some extent, by the time it reaches the object. Fig. C.10 shows the results; the test was done with a simple laser pointer of the kind used for presentations.

Fig. C.10: Image formed with (left) and without (right) the optical fiber

The image taken without the fiber shows strong speckle, as expected, while the image taken through the fiber still shows minor speckle. This was still not an acceptable image quality for our experiments, so lasers were also eliminated as an illumination source.

C.4 SUMMARY OF ILLUMINATION SOURCE SELECTION

To summarize the considerations leading to the choice of the illumination source:
1. Source of illumination: halogen lamp (KL 1500 LCD, Schott).
2. Power: 150 W.
3. Diffused or focused light source: focused beam, obtained by keeping the source close to the object (2 cm).
4. Polarization: not used, as it reduced the frame rate enormously.

5. Wavelength for best hair-skin contrast: white light, since selecting a single wavelength reduces the intensity of the light, and high intensity is required to create movies at high frame rate.
6. Rejection of specular reflections on hair/skin: could not be avoided directly.
7. Rejection of the visualization of hair under the skin: could not be avoided directly.
8. Avoiding shadows: not a problem, as the hairs chosen were short.

APPENDIX D
3D OPTICAL SETUP - MECHANICS

The optical setup is combined with mechanics so that strokes similar to actual shaving strokes can be performed on a test subject. This appendix describes the components used, the degrees of freedom allowed, and their operation.

D.1 MECHANICAL COMPONENTS IN THE SETUP
The opto-mechanical design of the 3D setup is shown in Fig. D.1. The setup consists of a detachable face holder, where the person positions his face during measurements, and an angular stage on which the two cameras are placed. The HSM module is located next to the face holder plates, and the illumination fiber is positioned between the two cameras. The whole assembly, including the cameras, the light fiber and the HSM module, is attached to a linear translation stage.

Fig. D.1: Opto-mechanical design of the setup

D.2 DEGREES OF FREEDOM - STATIC AND DYNAMIC
Fig. D.2 shows the actual setup without the face holder; all possible motions of the mechanical system are drawn as arrows. There are several ways to orient the setup depending on the volunteer's height, the position on the face where the measurement is done (cheek, chin or neck), the direction of attack based on the hair growth pattern, etc. These are classified as static degrees of freedom (shown as red arrows).

Fig. D.2: Actual opto-mechanical setup

The whole setup is attached to a stand, making it possible to adjust its height to the height of a volunteer. The setup can be rotated to allow measurements on the neck, and it can be turned towards the left and right so that the shaving-like stroke can be made from any direction, with or against the grain of the hair. The angular stages allow free motion of the cameras, with adjustable angles up to a maximum of 90 deg between them. Each camera has an xyz stage for minor adjustments of the focus. The module has one degree of freedom, allowing it to move linearly towards or away from the cameras.

The dynamic degrees of freedom are the motions allowed while the shaving stroke is made (shown as blue arrows). The linear translation stage (Newport Corporation, M-ILS50CC), driven by a DC servo motor, moves the whole setup (cameras, lighting and HSM module) up and down incrementally to perform a shaving-like stroke, with or against the grain of the hair. The stage moves with a maximum stroke of 50 mm at speeds up to 10 cm/s. Springs are added to the section where the modules are attached, making the module mounting less stiff; this helps the module follow the skin contour while the linear stage makes the shaving stroke.

D.3 FORCE SENSOR MEASUREMENTS
To measure the forces exerted on the HSM module during hair-skin manipulation, a force sensor (a Burster load cell with a maximum load capacity of 5 N) was mounted on a specially designed holder attached to the HSM module (Fig. D.3).

Fig. D.3: 5N Burster load cell illustration (top) and its implementation in the setup (bottom)

These sensors have two interior stabilizing membranes that reduce lateral forces and torques to a minimum. The sensor has a passive side, solidly connected to the housing and carrying a constant load through its threaded head, and an active side to which the actual load is applied. The sensing element is an elastic steel spring implemented as a bending beam; on top of this beam is a strain gauge foil shaped as a meandering structure. As the steel beam bends, so does the strain gauge, and its resistance changes: a mechanical input is thus read out as an electrical output. Using this sensor, the normal forces acting on the cheek, chin and neck can be measured while the movies are being taken. For simplicity of the setup, only one sensor was attached, at the middle of the module; its passive side is attached to the module holder, which in turn is attached to the angular stage of the setup.
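The report does not detail the read-out electronics; as a generic, hypothetical illustration of how such a bending-beam strain gauge turns a resistance change into a voltage, a Wheatstone quarter-bridge sketch (Python; all values illustrative):

def quarter_bridge_output_v(strain, v_excitation=5.0, gauge_factor=2.0):
    """Output of a Wheatstone bridge with one active strain gauge:
    dR/R = GF * strain, Vout = Vex * (dR/R) / (4 + 2*dR/R)."""
    dr_over_r = gauge_factor * strain
    return v_excitation * dr_over_r / (4 + 2 * dr_over_r)

# 500 microstrain on the bending beam gives ~1.25 mV at 5 V excitation:
print(f"{quarter_bridge_output_v(500e-6) * 1e3:.2f} mV")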

D.4 INTEGRATING CAMERAS AND MECHANICS WITH LABVIEW
A multifunction data acquisition module (National Instruments, USB 6259) is used to control and synchronize the movement of the motorized stage, the camera operation, and the recording of the force sensor data using LabView. This is done as follows. First, the motorized stage is set up to operate at the required speed and acceleration over a chosen distance (not more than 50 mm). Second, while the motorized stage moves, it triggers the camera to start capturing the movie, and the force data are recorded simultaneously with the acquisition of the movies. For safety, the entire setup is electrically grounded, and a hard stop button attached to the motorized stage allows the stage motion to be stopped mechanically, without relying on the software.

APPENDIX E
3D OPTICAL SETUP - VIDEO PROCESSING TOOL

Video processing is the final, essential part of the setup: all the movies made with the cameras are analyzed, using the mathematical proof described before, to obtain the required data. The initial idea was to create fully automated software that would accept the two movies from the two cameras for a single hair, process them, and output data on the hair length, angle and hair catching efficiency.

Fig. E.1: Screenshot of a movie being processed by the automated software

Fig. E.1 shows a screenshot of this automated software. The software assumes bright regions (a specific grey scale value range) to be skin, coloring them yellow, and dark regions (another grey scale value range) to be hair, coloring them blue. The yellow-blue contrast makes it easy to inspect the hair motion pattern visually. Crosses are placed at the beginning and end of the hair and tracked continuously until the hair leaves the field of view; the crosses give the x-y coordinate information of the hair, from which the length and angle can be calculated. The software is also capable of associating hairs between the two movies, which is especially useful when there is more than one hair in the field of view.

The challenges associated with such software are manifold. Since the specular reflection problem, creating bright spots in the image, and the hair-under-the-skin problem were not solved, it is very difficult for the software to choose properly where the hair begins and ends. At times, hairs have some loose skin at their base, and here again it is difficult for the software to judge accurately; the same holds for slightly blurry and dark movies.

Fig. E.2: Screenshot of a movie being processed by the semi-automated software

At this point of the project it was decided that our eyes are better judges of the beginning and end of a hair, and semi-automated software was created. Fig. E.2 gives a screenshot of this software. It shows the images from both cameras; the movie can be run frame by frame, and in each frame the top and the bottom of the hair can be marked with green and red crosses respectively. As soon as the crosses are marked, the software automatically calculates the length and the vertical and horizontal angles associated with the hair and the skin. It is also possible to save the data marked on these movies for further reference.
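A sketch of the geometry behind this tool (Python; it assumes the projection formulas of Appendix A and the angle definitions of Appendix G, and the pixel pitch, magnification and cross coordinates are illustrative):

import math

PIXEL_UM = 10.0                   # sensor pixel pitch
M, THETA = 2.0, math.radians(20)  # magnification and camera angle from the normal

def projection_um(top_px, bottom_px):
    """Image-plane projection (a, b) in micrometres from the two marked crosses."""
    return ((bottom_px[0] - top_px[0]) * PIXEL_UM,
            (bottom_px[1] - top_px[1]) * PIXEL_UM)

def hair_metrics(top1, bot1, top2, bot2):
    """Length and vertical/horizontal angles from the crosses marked in both views."""
    a1, b1 = projection_um(top1, bot1)
    a2, b2 = projection_um(top2, bot2)
    vx = (a1 + a2) / (2 * M * math.cos(THETA))
    vy = (b1 + b2) / (2 * M)
    vz = (a1 - a2) / (2 * M * math.sin(THETA))
    length = math.sqrt(vx**2 + vy**2 + vz**2)
    return length, math.degrees(math.atan2(vz, vy)), math.degrees(math.atan2(vx, vy))

# Illustrative crosses (pixel coordinates) for one frame in each camera:
print(hair_metrics((250, 200), (262, 260), (248, 200), (255, 260)))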

APPENDIX F
SYSTEM CALIBRATION

Calculated parameters of the 3D setup, such as the optical resolution and field of view, were described in the previous sections. Here we focus on their experimental verification.

F.1 VERIFICATION OF RESOLUTION AND CONTRAST
The theoretical optical resolution of the camera system was found to be close to 10 µm. This resolution holds when the object is placed in the line of sight of the camera. As we visualize the object with two cameras at an angle of 40 deg between them, the actual resolution at 2X magnification was found to be close to 15 µm. The calculation of this new resolution is given below.

The modulation transfer function (MTF) is the spatial frequency response of an imaging system: the contrast at a given spatial frequency relative to the contrast at low frequencies. Spatial frequency is measured in cycles or line pairs per millimeter, as available on the standard resolution target (1951 USAF):

MTF(f) = C(f) / C(0)

with

C(f) = (Vmax - Vmin) / (Vmax + Vmin) and C(0) = (Vw - Vb) / (Vw + Vb)

where Vmax and Vmin are the maximum and minimum intensity levels measured across a grating at spatial frequency f, and Vw and Vb are the maximum and minimum intensity levels measured across a low-frequency grating (Fig. F.1).

Fig. F.1: 1951 USAF resolution test target

Table F.1 shows the measured line pairs per millimeter, the contrast, and the calculated MTF value. MTF values between 0.4 and 0.6 are considered an acceptable balance between resolution and contrast.

Cycles/mm   Contrast   MTF
4.49        0.82       1.00
5.04        0.82       1.00
5.66        0.82       1.00
8.00        0.82       1.00
8.98        0.82       1.00
10.1        0.82       1.00
11.3        0.82       1.00
12.7        0.75       0.92
16.0        0.67       0.82
17.9        0.67       0.82
20.1        0.67       0.82
22.6        0.63       0.77
25.4        0.60       0.73
32.0        0.42       0.51
35.9        0.38       0.46
40.3        0.25       0.31
45.3        0.19       0.23
50.8        0.13       0.16
57.0        0.07       0.09
64.0        0.03       0.04
71.8        0.03       0.04
80.6        0.02       0.02

Table F.1: MTF measurements
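The MTF computation as a small sketch (Python; Vw and Vb are the low-frequency reference levels, the intensity readings are illustrative):

def contrast(v_max, v_min):
    """Michelson contrast across a grating."""
    return (v_max - v_min) / (v_max + v_min)

def mtf(v_max, v_min, v_w, v_b):
    """Contrast at a given spatial frequency, normalized by the low-frequency contrast."""
    return contrast(v_max, v_min) / contrast(v_w, v_b)

# Illustrative 8-bit intensity readings for the 35.9 cycles/mm grating:
print(round(mtf(v_max=178, v_min=80, v_w=230, v_b=23), 2))  # ~0.46, cf. Table F.1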

Fig. F.2 shows the MTF chart, from which it can be said that a minimal spatial frequency of 35.9 (~36) cycles per millimeter is acceptable. This corresponds to 64 to 72 lines per millimeter, or a resolution of about 15 µm.

Fig. F.2: MTF chart

F.2 VERIFICATION OF FIELD OF VIEW
As explained before, the object does not lie along the line of sight of the cameras: both cameras are placed at an angle of 20 deg with respect to the normal. This means that while the camera sensor is square, the actual imaged area is a rectangle. The y axis dimension remains the same, but the x axis dimension changes as the viewing angle between the cameras changes. To verify this, a resolution target with 40 lp/mm was placed in the field of view (Fig. F.3); counting the lines in the image along both the x and y axes gives the actual field of view when the angle between the cameras is 40 deg:

FOV (actual) = 2.275 mm x 2.225 mm

Fig. F.3: Grating lines showing the horizontal extent, 2.275 mm (left), and the vertical extent, 2.225 mm (right)

F.3 WHOLE SYSTEM CALIBRATION
For this purpose a calibration standard was created, chosen to be close to the actual hair-skin target object. It was made as follows: a white plastic material (polyoxymethylene, POM) was used as a substrate, and dark-colored fibers were inserted and glued into holes made in the substrate at predetermined angles, to mimic hairs. Fig. F.4 (top) shows the calibration standard positioned in the focal plane of the 3D setup.

Fig. F.4: Hair calibration standard (phantom) in front of the camera (top); its top view illustration (bottom)

By knowing the absolute lengths of the hairs and their angles, measured using an independent and accurate method, and by comparing these values to those acquired with the 3D setup, the experimental accuracy of the 3D setup can be verified against the calculated values. To select an accurate reference method, it was decided to compare three techniques (Fig. F.5): a Tesa Visio microscope, a Vivascope 1000 confocal microscope, and an optical coherence tomography microscope from Thorlabs. All these methods are expected to have a spatial resolution of ~1 to 8 µm, comparable to or better than that of the 3D setup. Four hairs were considered, marked in Fig. F.4 (bottom).

Fig. F.5: Tesa Visio microscope (left); Vivascope 1000 confocal microscope (middle); optical coherence tomography, Thorlabs (right)

Table F.2 gives the lengths and angles of the hairs measured with the various techniques. The results from optical coherence tomography (OCT) were the most consistent, and its measurement technique was more convincing than the Tesa Visio and confocal microscopes. It was therefore decided to use the OCT measurements as the standard.

Table F.2: Lengths measured with the reference techniques compared with the 3D setup

Table F.3: Comparison of the OCT and 3D setup measurements

Table F.3 compares the OCT measurements with the values obtained from the 3D setup. The error in length reaches a maximum of 3.3% (the 16.4% value appears to be an anomaly), and the error in angle reaches a maximum of 8%.


APPENDIX G
ERROR ANALYSIS OF 3D SETUP

Fig. G.1: Actual setup (top); vector diagram of the 3D setup including the quantities of interest (bottom)

It is essential to be aware of, and account for, the various errors associated with the system in order to be as accurate as possible. A vector diagram of the 3D setup is drawn first, to make it easier to build a mathematical error model. Fig. G.1 (top) shows the actual setup; the xyz coordinates assumed for it are shown in the top left corner of the diagram. This is translated into a vector diagram in which C1 and C2 are the two cameras with their associated viewing vectors, both placed at an angle θ with respect to the z axis in the x-z plane. The blade moves up and down along the y axis on top of the skin, which is assumed to be flat in the x-y plane. The hair on top of the skin is the vector v with its three projections vx, vy and vz.

From the movies made by the cameras, the projections of the hair vector v can be obtained. The parameters calculated from them are:

vx = (a1 + a2) / (2m cos θ)
vy = (b1 + b2) / (2m)
vz = (a1 - a2) / (2m sin θ)

where a1 and b1 are the horizontal and vertical projections obtained from camera C1, a2 and b2 are those obtained from camera C2, θ is the angle between each camera and the normal along the z axis, and m is the lens magnification.

With these, the x, y and z projections of the hair v are obtained. The parameters of interest in our study are:

Length of the hair: l = sqrt(vx² + vy² + vz²)
Vertical angle: Θ = tan⁻¹(vz / vy)
Horizontal angle: Φ = tan⁻¹(vx / vy)

The directly derived values from the cameras are vx, vy and vz. These quantities in turn depend on the projections a1, a2 and b1, b2 from the two cameras and on the angle θ. From this we can deduce that errors in our calculations arise from errors in the a-b projections and in the angle θ. In all cases the error along the x and y projections is taken to be the same, ±ε (why this choice is made is shown in section G.2). This leads to the following expressions for the errors in the x, y and z coordinates:

ε(vx) = (±ε ± ε) / (2m cos θ)   (1a)
ε(vy) = (±ε ± ε) / (2m)         (1b)
ε(vz) = (±ε ± ε) / (2m sin θ)   (1c)

G.1 SOURCES OF ERROR
We investigate the various sources of error under three categories:
1. Resolution limitation
2. Visual and clicking errors
3. Error in camera angle

G.1.1 RESOLUTION LIMITATION
Optical resolution is the ability of the system to resolve detail in the imaged object. It is the smallest resolvable physical space, and any point chosen within the unresolved space can lead to an error. The resolution of the system is approximately 15 µm, so if the x projection is noted as 300 µm, the true value could lie anywhere between 292.5 µm and 307.5 µm.

It is interesting to note (see formula 1b) that the resolution calculated before holds true only along the y axis: all information along the object's y axis keeps the same scale whether the camera faces the object at 90 deg or views it at an angle. This is not true for the x axis, and hence not for the z axis either, since the z information is derived from the x information (see the vz derivation). When the object is viewed with the camera at θ = 0, the field of view is square; when the camera views the object at an angle, as in our setup, the field of view becomes a rectangle, and the x axis information becomes larger than it actually is: more information is captured from the x direction on the same sensor size. This is why the resolution along the x, y and z axes varies with the angle at which the camera views the object. So, if the resolution along the y axis is 15 µm:

ε(vy) = 15 µm
ε(vx) = 15 µm / cos 20° ≈ 16 µm
ε(vz) = 15 µm / sin 20° ≈ 44 µm

for a camera angle of 20 deg between each camera and the normal. Table G.1 shows the calculations for various other camera angles.
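A quick numeric check of these projection errors (Python; a sketch of equations (1a)-(1c) with the base error taken as the 15 µm y-axis resolution; the function name is mine):

import math

def resolution_errors_um(eps_um=15.0, theta_deg=20.0):
    """Object-plane errors of the x, y and z projections for a camera
    placed at angle theta from the normal."""
    th = math.radians(theta_deg)
    return eps_um / math.cos(th), eps_um, eps_um / math.sin(th)

ex, ey, ez = resolution_errors_um()
print(f"eps(vx) = {ex:.0f} um, eps(vy) = {ey:.0f} um, eps(vz) = {ez:.0f} um")  # ~16, 15, 44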

Table G.1: Error in hair length due to resolution limitation

When the angle between the cameras is 90 deg, this error reaches an optimum balance between the x and z errors, as expected. However, through research, experiments and calculations it was found that the optimal angle between the cameras lies between 30 and 40 deg: larger angles lead to such disparity between the images that it becomes difficult to recognize common points, while very small angles are not possible because the lenses would touch each other and it eventually becomes impossible to focus both cameras on the same point.

G.1.2 VISUAL AND CLICKING ERRORS
To measure the hair length accurately it is critical that the top and the bottom of the hair are clearly visible. While the goal is to make quality movies with acceptable hair-skin contrast and good illumination, it does happen that a movie is blurred or dark, or that the hair has a piece of skin sticking to its bottom. Fig. G.2 illustrates these problems.

Fig. G.2: Hair visualization problems: hair under skin and non-uniform hair tip (left); dark and blurry (middle); loose skin around hair (right)

Each person's interpretation of where the hair begins and ends may therefore vary. Since the manual software requires the top and bottom of the hair to be marked to obtain the length, errors are prone to occur. Another issue is that the hair does not have a uniform length, as can be seen in Fig. G.2 (left) and illustrated further in Fig. G.3.

Fig. G.3: Illustration of hair with non-uniform length

Under these conditions it becomes unclear what the actual length of the hair is. It was decided that the hair length should be taken along the central axis of the hair: while placing crosses on the top and bottom of the hair, ellipses are (mentally) drawn touching the edges of the hair's cross-section (Fig. G.4), which is important because not all hairs are circular in cross-section. The green cross is placed so that its centre coincides with the centre of the top ellipse; similarly, a red cross is placed at the centre of the bottom ellipse. The line joining the two crosses, drawn parallel to the edges of the hair, is then the length of the hair.

Fig. G.4: Placement of crosses

Clicking errors occur because the software requires clicking on the hair's top and bottom. Even for a perfectly filmed hair, with good hair-skin contrast and no hair visible under the skin, it is still possible to place the top and bottom crosses at slightly different positions. Fig. G.5 illustrates this issue by placing crosses on a magnified image.

Fig. G.5: Clicking errors

Placing all four crosses one pixel away from the desired point, along both the x and y axes, can change the calculated hair length by about 10 µm.

G.1.3 ERROR IN CAMERA ANGLE
While we assume that the angle between the two cameras is 40 deg, there are two possible sources of error in this assumption: the angular markings may have a slight error, and there may be a manual error in placing the camera at the exact position. A maximum error of ±1 deg is assumed in the angle between the two cameras.

Table G.2: Error in the camera angle

Table G.2 shows error calculations for hairs of three different lengths, chosen at random close to the length of 24 h stubble. These lengths were recalculated assuming an angle error of 19.5 deg for one camera and 20.5 deg for the other. An error of roughly ±7 µm is found for a typical 24 h stubble length of 300 µm.

The error in camera angle is a systematic error: once the cameras are fixed, all experiments are performed with the same camera setup, and every measurement carries the same systematic error. It is possible to measure this error and correct for it: if a resolution target (say, 40 lp/mm) is placed in front of the cameras in both vertical and horizontal orientation, a difference in the number of lines visible in the field of view can be observed, and from this the angle between the cameras can be calculated back and the error corrected.

G.2 ERROR MODEL
To assess the various errors involved in the 3D setup, an error model* was built. The factors that affect the errors in hair length and hair angle were found using a mathematical model.

First of all, when addressing the errors caused by the projection variables (a1, a2 and b1, b2), we need to investigate whether any dependency exists between them. An experiment was performed to see the variation in the x (a1, a2) and y (b1, b2) projections while measuring a single hair: the same hair was imaged in the same position over a period of time, and its length information was obtained by clicking on the hair in the software repeatedly. The graph in Fig. G.6 shows the spread of the data: the x axis is the spread of the x projection (a1) from camera C1 and the y axis the spread of the y projection (b1) from camera C1, both in microns. The scatter plot shows a standard deviation of about 2.1 pixels, or 21 µm (1 pixel = 10 µm), along both axes, and shows that the two variables a1 and b1 are uncorrelated. Note that these clicking errors also include the errors caused by the resolution limitation and by visual imperfections (if any) of this hair. Based on the scatter plot we keep the error along the x and y axes at 21 µm in all calculations.**

*Acknowledgement: Mr. Jan Engel from CQM helped immensely in building the mathematical error model and with the statistical analysis for finalizing our error analysis.
**So far it is not clear to us how to combine pixel units with micron units, due to the optical resolution; for the moment we simply take 1 pixel = 10 µm.

Fig. G.6: Scatter plot of the x and y coordinates of a single hair

Before proceeding with the analysis, a quick look at some basics needed for the calculations. Variance is the mean squared deviation of a variable from its expected value. For simplicity, we assume that the projections a1, a2, b1 and b2 all have the same variance σ²(u):

σ²(a1) = σ²(a2) = σ²(b1) = σ²(b2) = σ²(u)

The variances of the three measured quantities then follow from the projection formulas:

vx = (a1 + a2)/(2m cos θ):  σ²(vx) = [σ²(a1) + σ²(a2)] / (2m cos θ)² = σ²(u) / (2m² cos² θ)
vy = (b1 + b2)/(2m):        σ²(vy) = [σ²(b1) + σ²(b2)] / (2m)² = σ²(u) / (2m²)
vz = (a1 - a2)/(2m sin θ):  σ²(vz) = [σ²(a1) + σ²(a2)] / (2m sin θ)² = σ²(u) / (2m² sin² θ)

For further calculations we also need:

σ²(a + b) = var(a) + var(b) + 2 cov(a, b)

where var is the variance and cov is the covariance, a measure of the extent to which two variables move in the same direction. The analysis is subdivided into the two topics below. In all our derivations the covariance term is zero, as the two variables a1 and b1 were found to be uncorrelated.

G.2.1 ESTIMATION OF ERROR IN HAIR LENGTH
There was a rising suspicion that the hair orientation could be influencing the error in hair length: a hair at 90 deg with respect to the skin appears like a dot in the camera, making it difficult to say where its top and bottom are, while a hair at a small angle with respect to the skin appears clearly, with a distinct top and bottom. Based on these observations, a mathematical explanation was attempted. We know the length of the hair:

l = sqrt(vx² + vy² + vz²)

Propagating the variances of the components:

σ²(l) = (1/l²) [vx² σ²(vx) + vy² σ²(vy) + vz² σ²(vz)]
      = (σ²(u)/l²) [vx²/(2m² cos² θ) + vy²/(2m²) + vz²/(2m² sin² θ)]

Taking θ = 20 deg and m = 2:

σ²(l) = (σ²(u)/l²) [0.14 vx² + 0.13 vy² + 1.08 vz² + 0.39 vx·vz]

(the last, cross term appears because vx and vz are derived from the same projections a1 and a2). Dividing and multiplying the right-hand side by vy²:

σ²(l) = σ²(u) (vy²/l²) [0.14 (vx/vy)² + 0.13 + 1.08 (vz/vy)² + 0.39 vx·vz/vy²]

If all hairs are assumed to be against the grain, then vy >> vx and all the terms containing vx/vy disappear, as they are too small. Also:

vz/vy = tan Θ and vy/l = cos Θ

Thus:

σ²(l) = σ²(u) cos² Θ (0.13 + 1.08 tan² Θ) = σ²(u) (0.13 cos² Θ + 1.08 sin² Θ)

ε(l) = sqrt(σ²(l)) = σ(u) sqrt(0.13 cos² Θ + 1.08 sin² Θ)

Table G.3 uses this equation to calculate the error in length for different hair orientations; σ(u) is taken as the value obtained from the scatter plot, ~21 µm. Hairs at 90 deg with respect to the skin have a maximum error of ±22 µm, and hairs at 0 deg with respect to the skin have a minimum error of ±4 µm. Fig. G.7 shows this trend.

Table G.3: Error in hair length due to hair orientation

Based on this result it can be said that the error in hair length indeed depends on the orientation of the hair. A similar effect is observed for the horizontal angle Φ: as Φ increases towards 90 deg the error in hair length increases, and the error is minimal as Φ tends to zero.
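A small sketch evaluating this closed-form error (Python; σ(u) = 21 µm as in the text, with θ = 20 deg and m = 2 already folded into the 0.13 and 1.08 coefficients):

import math

def hair_length_error_um(hair_angle_deg, sigma_u_um=21.0):
    """eps(l) = sigma(u) * sqrt(0.13*cos^2(Theta) + 1.08*sin^2(Theta))"""
    t = math.radians(hair_angle_deg)
    return sigma_u_um * math.sqrt(0.13 * math.cos(t)**2 + 1.08 * math.sin(t)**2)

for angle in (30, 60, 90):
    print(angle, round(hair_length_error_um(angle), 1))  # ~12.7, 19.3, 21.8 um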

Fig. G.7: Error in hair length with respect to hair orientation Θ

G.2.2 ESTIMATION OF ERROR IN VERTICAL HAIR ANGLE Θ

The hair length itself determines the possible error in the hair angle. To illustrate this, Fig. G.8 shows two graphs. On the left is a hair AB in the y-z plane making an angle θ with the z axis. The hair has an allowed error margin along both the y and z axes, shown as a box around point A and a box around point B. Connecting the various corners of the box at A to the various corners of the box at B gives an estimate of the maximum (θmax) and minimum (θmin) hair angles, and thus of the error margin in terms of hair angle. If the same analysis is done on a much shorter hair (right graph in Fig. G.8), the minimum and maximum angles are much further apart, and hence the error in hair angle is so large that the hair angle result may no longer be reliable.

Fig. G.8: Illustration of effect of hair length on hair angle error
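The box argument of Fig. G.8 can be made concrete in a few lines: place an error box of ±δ around each endpoint, enumerate all corner-to-corner connections, and compare the angular spread for a long and a short hair. The numbers here are illustrative only.

import numpy as np
from itertools import product

def angle_spread(length_um, theta0_deg=20.0, delta_um=21.0):
    """Min/max apparent hair angle for +/- delta error boxes at both endpoints."""
    t0 = np.radians(theta0_deg)
    A = np.array([0.0, 0.0])                                        # root (y, z)
    B = np.array([length_um * np.cos(t0), length_um * np.sin(t0)])  # tip (y, z)
    corners = list(product((-delta_um, delta_um), repeat=2))
    angles = [np.degrees(np.arctan2(B[1] + cb[1] - A[1] - ca[1],
                                    B[0] + cb[0] - A[0] - ca[0]))
              for ca in corners for cb in corners]
    return min(angles), max(angles)

for length in (500.0, 50.0):
    lo, hi = angle_spread(length)
    print(f"length {length:5.0f} um: apparent angle between {lo:5.1f} and {hi:5.1f} deg")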

The hypothesis is that the shorter the hair, the larger the error in its vertical angle Θ. This hypothesis is now proved mathematically. We know that:

\[ \Theta = \tan^{-1}\!\left(\frac{v_z}{v_y}\right) \]

\[ d\Theta = \frac{v_y\,dv_z - v_z\,dv_y}{v_y^2 + v_z^2} \]

The variance of dΘ is given as:

\[ \sigma^2(d\Theta) = \frac{v_y^2\,\mathrm{var}(v_z) + v_z^2\,\mathrm{var}(v_y)}{\left(v_y^2 + v_z^2\right)^2} \]

Taking m = 2 and θ = 20 deg:

\[ \sigma^2(d\Theta) = \sigma^2(u)\,\frac{1.1\,v_y^2 + 0.1\,v_z^2}{\left(v_y^2 + v_z^2\right)^2} \]

These hairs are oriented close to the skin, so we assume v_y >> v_z. The hair is also assumed to be perfectly against the grain, hence v_y >> v_x and v_z >> v_x. The length of the hair is thus:

\[ l = \sqrt{v_x^2 + v_y^2 + v_z^2} \approx v_y \]

Since v_y >> v_z,

\[ \sigma^2(d\Theta) \approx \frac{1.1\,\sigma^2(u)}{v_y^2} = \frac{1.1\,\sigma^2(u)}{l^2} \]

\[ \varepsilon(\Theta) = \sqrt{\sigma^2(d\Theta)} = \sqrt{1.1}\,\frac{\sigma(u)}{l} \approx \frac{1.05\,\sigma(u)}{l} \]

Using the equation above, the error in hair angle was calculated for different hair lengths (~24h stubble lengths), with σ(u) again taken as 21μm. Table G.4 gives the angle errors for hair lengths from 50μm to 500μm.

Table G.4: Error in vertical hair angle Θ (0 to 44 deg) with change in hair length

From the table it is clear that the hypothesis is true. The error in the hair's vertical angle Θ is at its maximum of 25 deg when the hair length is 50μm, and it reaches a minimum of 3 deg when the hair length is 500μm. The error in the vertical hair angle Θ gets worse as the hair becomes shorter (<150μm). For a typical 24h stubble (~300μm), the hair angle error is at most ±4 deg.
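As a numerical check on Table G.4, the sketch below evaluates ε(Θ) = 1.05 σ(u)/l (in radians, converted to degrees) for a few stubble lengths, with σ(u) = 21μm; the outputs closely match the values quoted above.

import numpy as np

sigma_u = 21.0  # microns
for length in (50, 100, 150, 300, 500):      # hair lengths in microns
    eps_theta = 1.05 * sigma_u / length      # error in radians
    print(f"l = {length:3d} um -> eps(Theta) ~ {np.degrees(eps_theta):4.1f} deg")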


APPENDIX H

HSM PRINCIPLES AND IMPLEMENTATION

The first section of this chapter begins with the module developed in house to test various concepts of hair-skin manipulation.

H.1 HAIR-SKIN MANIPULATION (HSM) MODULES

Fig. H.1: HSM module front view (top); illustration of HSM module side view (bottom)

Fig. H.1 shows the HSM module designed in house. The goal of the module is to manipulate hair and skin such that a close and smooth shave occurs. This particular module has been specifically designed to manipulate "against the grain" hairs: if the hair grows from north to south (assuming the root of the hair is anchored at north; blue arrow in Fig. H.2), the module attacks the hair from south to north (black arrow in Fig. H.2); that is, it attacks the bottom of the hair before reaching its root. The module is made from a Gillette shaving head with its blades removed, re-adjusted to suit our needs. The yellow square in the front view is the slit that is simultaneously visualized by the two cameras; hence it is the field of view of interest to us. The blue strip is the lubra-strip, which is supposed to release a thin lubricating layer over the skin during each shaving stroke, making the shaving process smoother and less irritating. The other parts of the module are an orange rubber stretcher and a plastic blade (poly-oxymethylene, POM), which are supposed to have the maximum effect in hair and skin manipulation. The distance from the edge of the blade to the edge of the rubber is termed the 'exposure' of the blade. Two parameters can be changed in each module: the slit size, i.e. the opening between the rubber and the blade at the front of the module, and the blade exposure. Several configurations of the blade were made (Table H.1). The blade exposures were chosen based on how far the skin needed to be pushed in, and the slit size was chosen such that the rubber part and the blade can build on each other's manipulation.

Fig. H.2: Top view of against the grain manipulation (top); side view of against the grain manipulation of a single hair at different positions (bottom)

Table H.1: Various HSM module geometries

Fig. H.3 (top) shows the various concepts implemented in the current module and (bottom) concepts that also occur sometimes, depending on the module geometry and the size of the hair.

Fig. H.3: Implemented major concepts in current HSM module (top); possible concepts also occurring in current HSM module (bottom)

The rubber stretcher stretches the skin against the grain, as shown in the top left corner of Fig. H.3. Both the rubber part (especially for long hairs) and the blade push against the hair parallel to the skin (Fig. H.3, top middle). The blade, depending on its 'exposure' setting, pushes on the skin against the grain (Fig. H.3, top right corner). Two other possible effects occur depending on the blade geometry and the hair length and diameter. If the hair has a large diameter and the slit size is small, then to some extent the skin on either side of the hair is pushed inside by the blade and the rubber (Fig. H.3, bottom left corner). The other effect is observed when the hairs are longer (>~500μm): the rubber retracts the hair as it goes over it and then flips it back into position.

This cannot be viewed as direct retraction, but it is believed that the rubber, which has quite a large friction with the skin, can pull the hair outward while it glides over it. The bottom concepts are hypotheses; it would be interesting to see whether they are observed in the experiments.

H.2 LASER MARKINGS AND SOFTWARE UPDATE

Fig. H.4: Laser markings on the wedge of the blade and its dimensions (top); visualization of markings through the camera (bottom)

While the software could find the length and the associated angles of the hair, it became apparent that some kind of reference point was needed to determine how far the hairs are from the blade. This is important because, if at some point in the future a shaving blade is inserted instead of the dummy blade, it is critical to know how close the blade is to the root of the hair, as this determines how close the shave would be. To solve this issue, three laser markings were made at a specific height (900μm) from the bottom tip of the blade. The markings were made using a CO2 laser ablation technique (Fig. H.4, bottom). Since the ablations left white crosses on a white background, black ink was

injected into the crosses to improve their contrast. For each blade, three crosses were made instead of one, for greater accuracy. The software was updated accordingly to accommodate the new features of the module. It can now automatically detect the center point of each of these crosses; it then takes the average of the centers and marks it with a yellow mark on the center cross in both camera fields of view. The software thus has the coordinates of the middle point of the laser markings. After placing the green and red crosses on the top and bottom of the hair respectively, the coordinates associated with the hair are also known. At this stage the software has an estimate of the distance between the laser markings (their center point) and the hair. Since the laser markings were placed at a height of 900μm from the edge of the blade, the coordinates of the edge of the blade are also known.

Fig. H.5: Updated software

Three specific customer requirements were adopted into the software. First, assuming a point 100μm above the edge of the blade, called the Optical Baseline Distance or OBD (Fig. H.4, top), the software automatically calculates the coordinates of this point and translates it into a blue cross on top of the hair (Fig. H.5). This means that the hair would be virtually cut at the position of the blue cross. The length of the cut-off hair is given as 'dCut' in the software, i.e. the length of hair between the green and the blue cross. Another requirement was to keep track of the hair's vertical angle, denoted 'ver.angle': if this angle is anywhere between 75 deg and 105 deg, the 'accep.cone' value is 1; otherwise it is 0. The final requirement was to draw a dotted line in front of the blade such that the distance between the blade's edge and the dotted line (called the Working Distance, WD) is 100μm. When the center point of the blue cross coincides with the dotted line, the 'focus' value is set to 1; otherwise it remains 0.

Force data obtained from the force sensor is also shown in the left corner of the screen, with a force reading corresponding to every frame.

H.3 MANIPULATED HAIR DEFINITION

Fig. H.6: Definition of manipulated hair

The hair is said to be manipulated when: 1) its vertical angle is within the acceptance cone (75 < Θ < 105), 2) the OBD point lies between the top and bottom of the hair, and 3) the hair lies within the working distance of 100μm. At this position the hair is said to be in focus, or at the focal point. Manipulation efficiency is thus the ratio of the number of manipulated hairs to the total number of hairs; a sketch of these checks follows below.
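A minimal sketch of how these three checks and the resulting efficiency could be computed. All names here (Hair, is_manipulated, obd_z, ...) are hypothetical illustrations, not the actual LabView implementation used in the setup.

from dataclasses import dataclass

@dataclass
class Hair:
    top_z: float          # z coordinate of hair top (green cross), microns
    bottom_z: float       # z coordinate of hair bottom (red cross), microns
    ver_angle: float      # vertical angle Theta, degrees
    dist_to_blade: float  # distance from blade edge to hair, microns

def is_manipulated(h: Hair, obd_z: float, wd: float = 100.0) -> bool:
    in_cone = 75.0 < h.ver_angle < 105.0          # acceptance cone
    obd_on_hair = h.bottom_z <= obd_z <= h.top_z  # OBD point between top and bottom
    in_focus = h.dist_to_blade <= wd              # within the working distance
    return in_cone and obd_on_hair and in_focus

def manipulation_efficiency(hairs, obd_z):
    # Ratio of manipulated hairs to the total number of hairs
    return sum(is_manipulated(h, obd_z) for h in hairs) / len(hairs)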

APPENDIX I

EXPERIMENTAL PRE-CONDITIONS

Before beginning these experiments it is essential to know which parameters directly or indirectly affect hair and skin manipulation. These parameters are categorized into 4 groups:
1. Module geometry
2. Subject dependent
3. Physical
4. Environmental

I.1 MODULE GEOMETRY PARAMETERS

The entire module geometry is shown in Fig. I.1. The most important parameters include the already discussed:
1. Slit size: 0.5mm, 1.0mm and 1.5mm
2. Blade exposure: 0μm and 220μm; size of the blade (radius of curvature, width of blade, angle of blade w.r.t. skin)
3. Presence and size of the rubber stretcher
4. Presence and size of the lubra-strip and its distance from the stretcher
5. Placement of the module on the face, with the rubber stretcher placed first and the blade below it, or the other way around

Fig. I.1: Module Geometry

The parameters varied here are slit size and blade exposure. Usually, for 'against the grain' measurements on the hair, the rubber stretcher is placed above the blade. The latest version of the module had a blade wedge angle of 20 deg instead of 7 deg.

I.2 SUBJECT DEPENDENT PARAMETERS

The most important parameters in this case are:
1. Hair-skin contrast
2. Skin stiffness
3. Hair stiffness
4. Face area, contour, etc.
5. Grain direction of hair
6. Growth rate of hair, which varies from person to person
7. Diameter of hair
8. Age of the beard
9. Hydration of skin
10. Each individual's shaving habits, including the razor used

For the experiments, volunteers with good hair-skin contrast and about 24h stubble were chosen, so that it was easier to identify the top and bottom of the hairs. 'Against the grain' hairs were the target of choice. The hydration of the skin is managed by applying 'Nivea' shaving gel to all volunteers, so that dry-skin artifacts are not seen through the cameras. Care must be taken to apply only minimal gel, as too much gel can form watery blobs that obstruct proper visualization. All the other parameters were too complicated to be monitored and controlled.

I.3 PHYSICAL PARAMETERS

The parameters in this category are:
1. Speed and acceleration of the stage containing the module
2. Normal force applied by the skin on the module
3. Shear (frictional) forces between skin and module

The speed of the stage attached to the module was maintained at 30mm/s, which is close to the normal shaving speed. The acceleration was about 100mm/s². The normal force applied by the skin on the module was monitored using the force sensors implemented in the setup. The shear forces were not measured at this stage.

I.4 ENVIRONMENTAL PARAMETERS

Some external conditions which affect the experiments are:
1. Time of the day
2. Weather/seasonal conditions

While experiments were conducted during the daytime for all volunteers, it was difficult to control the exact time of the tests. The weather conditions were controlled in the sense that all experiments were performed at normal room temperature. It is hard to predict whether external weather conditions affect beard growth patterns, though.
