Conference Paper · August 2014 · DOI: 10.1615/IHTC15.min.008605

IHTC15-8605

COMBINED THREE-DIMENSIONAL FLOW- AND TEMPERATURE FIELD

MEASUREMENTS USING DIGITAL LIGHT FIELD PHOTOGRAPHY

M. Rietz,1 O. Garbrecht,1 W. Rohlfs,1 R. Kneer1

1Institute of Heat and Mass Transfer, RWTH Aachen University, Aachen, Germany, 52056

ABSTRACT

The recent developments in the field of light field imaging allow for new approaches to the experimental characterization of three-dimensional flow phenomena. With the light field imaging technique, a single camera is able to capture an object's position in all three spatial dimensions. This possibly simplifies the setup and calibration of particle image velocimetry (PIV) experiments in comparison to state-of-the-art methods like tomographic PIV or holographic PIV. In order to explore the capabilities of light field PIV, a three-dimensional convection problem is investigated as a test case. The three-dimensional flow and temperature field, essential for the characterization of complex heat transfer mechanisms, can be measured simultaneously by using thermometric liquid crystals as tracer particles. The present work shows first results of the application of the introduced measurement technique, combining particle image velocimetry and liquid crystal thermometry (LCT) on the basis of light field imaging.

KEY WORDS: Light Field Imaging, Particle Image Velocimetry, Liquid Crystal Thermometry

1. INTRODUCTION

Flow phenomena of industrial or scientific interest are often characterized by unsteadiness and three-dimensionality. Thus, experimental methods able to resolve complex flows are needed, either for direct diagnosis or for the validation of related numerical simulations. For the measurement of flow velocity fields in a plane, particle image velocimetry (PIV) has been state of the art for several decades. More recently, strong efforts have been made to extend PIV to the third spatial dimension. For the characterization of heat transfer mechanisms, not only the 3D velocity field of a flow but also the temperature distribution is needed.

In this study, a new approach to the simultaneous measurement of 3D velocity and temperature fields is presented, combining light field imaging, particle image velocimetry and liquid crystal thermometry (LCT). The performance of the experimental method is analyzed by investigating a simple convection problem in a semi-confined cell. The setup of the experiment is significantly simplified by using one high performance light field camera to capture and reconstruct the 3D distribution of thermometric liquid crystals within the measurement volume.

An overview of alternative 3D particle image velocimetry techniques is presented in the next section, followed by a description of the employed temperature measurement method, liquid crystal thermometry. After an introduction to light field imaging, recent developments of light field PIV are outlined and the experimental setup and methods are presented. The study concludes with first results of the application of the introduced methods to the test case.


1.1 3D Particle Image Velocimetry

A common approach to resolve 3D flow fields is the application of multiple viewpoints, i.e. multiple cameras which observe tracer particles from different perspectives. The differing images are used to reconstruct depth information.

3D particle tracking velocimetry (PTV), introduced by Maas et al. [16] and Malik et al. [17], is an early application of the multiple viewpoint approach. Multiple cameras in a stereoscopic configuration are used to resolve a 3D flow field. However, this method is limited to relatively low seeding densities that still allow identifying and following single particles in the flow.

Defocusing digital particle image velocimetry (DDPIV) [22, 23] utilizes a mask of three pinhole apertures forming an equilateral triangle. Objects are imaged onto different points of the camera sensor through these respective pinholes. Algorithms searching for equilateral triangles in the image calculate the depth from the triangle's location and size. Since depth estimation requires particles to be resolved by all three apertures, the seeding density is again a limiting factor.

Tomographic particle image velocimetry is another technique using multiple viewpoints. Optical tomography reconstructs the particle distribution in the measurement volume as a 3D intensity distribution from the images obtained with multiple cameras; the velocity field is subsequently determined by cross-correlation of consecutive reconstructed intensity fields. The reconstruction is an inverse and under-determined problem, because the projections on the camera chips can result from various different 3D configurations. Therefore, the most likely distribution has to be determined. The technique was first introduced and thoroughly evaluated through a parametric study by Elsinga et al. [7].

Scanning particle image velocimetry [4] is a method that does not rely on the multiple viewpoint approach. A laser sequentially illuminates parallel sheets throughout the measurement volume while a high speed camera records the particle distribution in the different planes. Under the premise that the scanning process is significantly faster than the smallest time scales in the investigated flow, a quasi-instantaneous velocity field can be reconstructed. Drawbacks of the described method are the limitation to lower speed flows and the complexity of the experimental setup due to the need for a scanning mechanism.

Holographic particle image velocimetry [12] is a technique which differs significantly from the other approaches. In this method, information about the position of the tracer particles in the measurement volume is encoded on a sensor plane (hologram) as the interference pattern of a reference beam and light which is scattered by the particles. For reconstruction, the hologram is illuminated by the reference beam, which reproduces the encoded instantaneous 3D intensity distribution and allows it to be recorded by a sensor, e.g. a CCD chip. A comprehensive analysis of inherent drawbacks and challenges in holographic PIV is given by Meng et al. [18].

The various approaches to 3D PIV have their own strengths and weaknesses. Since they differ in accessible measurement volumes, complexity of the experimental setup, maximum seeding densities and precision, the technique most suitable for a given application has to be chosen. The advances in light field imaging have enabled a new approach to PIV. Without the need for a multiple camera setup or additional equipment as in scanning PIV or holographic PIV, the respective experimental setups promise to be radically simplified. Meanwhile, the performance of light field PIV setups regarding precision and seeding densities is a subject of current research and still changing with the improvement of available light field cameras.

1.2 Liquid Crystal Thermometry

Thermotropic liquid crystals acting as tracer particles for PIV are well established for the simultaneous measurement of velocity and temperature fields. Local temperatures can be indicated by the selective reflection of incoming light by the molecules which build the liquid crystal's structure. Temperature-induced structural changes of the crystals cause certain wavelengths of incoming light to be reflected with highest intensity, which changes the apparent color of the particle. In addition to the local temperature, the observed color also varies with the viewing angle. Consequently, calibrations have to be conducted which include the temperature and the viewing angle in order to obtain valid measurement results.
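To make the calibration requirement concrete, the following minimal sketch (in Python, and not part of the original study) fits a simple polynomial surface mapping hue and viewing angle to temperature from calibration measurements; the functional form, the function names and all numerical values are illustrative assumptions rather than the calibration actually used for the liquid crystals described here.

```python
# Hedged sketch of a hue/viewing-angle calibration for liquid crystal thermometry.
# The quadratic-in-hue model and the synthetic data are assumptions for illustration.
import numpy as np

def fit_hue_temperature_calibration(hue, angle, temperature):
    """Least-squares fit of T ~ c0 + c1*h + c2*h^2 + c3*a + c4*h*a from calibration
    measurements taken at known temperatures and viewing angles."""
    A = np.column_stack([np.ones_like(hue), hue, hue**2, angle, hue * angle])
    coeffs, *_ = np.linalg.lstsq(A, temperature, rcond=None)
    return coeffs

def hue_to_temperature(hue, angle, coeffs):
    """Evaluate the calibration for measured hue values and viewing angles."""
    A = np.column_stack([np.ones_like(hue), hue, hue**2, angle, hue * angle])
    return A @ coeffs

# Synthetic calibration data (placeholders, not measured values):
hue = np.random.uniform(0.0, 0.7, 200)          # normalized hue of the particles
angle = np.random.uniform(-10.0, 10.0, 200)     # viewing angle [deg]
temperature = 28.0 + 2.0 * hue + 0.01 * angle   # assumed monotonic color response
coeffs = fit_hue_temperature_calibration(hue, angle, temperature)
print(hue_to_temperature(np.array([0.3]), np.array([5.0]), coeffs))
```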


Since the first pioneering works by Hiller and Kowalewski [10], liquid crystals have been utilized in combination with many different PIV setups and have proved their suitability for the respective applications. Being coupled to the performance of the employed PIV method, LCT, just as PIV, was at first limited to planar cross sections of the investigated flow. Hereby, in-plane PIV in combination with LCT was applied to many different flow phenomena, such as the thermal flow inside a cavity [11], thermo-capillary flows [27] or the flow around a heated circular cylinder [21]. Advances in 3D PIV enabled the characterization of 3D temperature/velocity fields. Amongst other applications, several researchers have succeeded in characterizing a 3D Rayleigh-Bénard convection problem by combining 3D PIV and liquid crystal thermometry in the last decade. Ciofalo et al. [6] analyze the Rayleigh-Bénard problem using tomographic PIV and LCT. Their setup is limited to the analysis of steady-state flows. Fujisawa et al. [8] later extend the field of application to turbulent Rayleigh-Bénard convection employing a scanning PIV setup.

Several advantages qualify LCT as a suitable technique to be combined with light field imaging and PIV to obtain a 3D velocity/temperature distribution. Not specific to the presented application, and an explanation for the numerous previous applications, are the non-invasiveness of the measurement method and the ability of the liquid crystals to act simultaneously as local thermometers and as tracer particles. As for light field imaging, the use of a single, fixed plenoptic camera simplifies the calibration, which is inherently more complex for the multi-camera setups employed by other PIV techniques. Recent soft- and hardware used with light field imaging enable the computation of color images in combination with depth information in real time. This allows a simultaneous characterization of tracer particles regarding position and color, both needed to gather temperature information, from a single image frame.

2. LIGHT FIELD IMAGING

The term light field was first introduced by Gershun in 1937 [9] to describe the properties of light in three-dimensional space. Adelson and Bergen [1] later formalized the full optical information of light as the plenoptic function, where a ray of light is fully parameterized in space through five dimensions, e.g. three spatial coordinates and two directional properties. Without distortions, light rays follow their original trajectory, reducing the 5D plenoptic function to 4D. A possible parameterization is given by the coordinates of the intersection points with two consecutive planes (x, y, u, v) (Fig. 1). Under this premise, a traditional camera only captures two dimensions of this light field, disregarding information about the angle under which a ray falls onto the sensor. The intensity measured by a single pixel is the average of all light rays whose trajectories intersect with that pixel. In contrast, a plenoptic camera or light field camera captures the entire four-dimensional light field. By obtaining the directional information of the light, the three-dimensional properties of the imaged scene can be reconstructed.
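As an illustration of this 4D parameterization, the sketch below assumes a hypothetical discretized light field stored as an array L[v, u, y, x] of angular and spatial samples: a conventional image discards the directional coordinates by averaging over (u, v), whereas fixing (u, v) yields the view through one small part of the aperture.

```python
# Hedged sketch: a discretized 4D light field as an array L[v, u, y, x] (assumed layout).
import numpy as np

def conventional_image(lightfield):
    """A traditional sensor integrates over all ray directions (u, v) for each pixel,
    which is why the directional information is lost."""
    return lightfield.mean(axis=(0, 1))

def sub_aperture_view(lightfield, u_idx, v_idx):
    """Fixing one direction (u, v) yields the image seen through a small part
    of the aperture, i.e. one perspective of the scene."""
    return lightfield[v_idx, u_idx]

L = np.random.rand(9, 9, 64, 64)   # 9x9 angular samples, 64x64 spatial samples
img = conventional_image(L)        # shape (64, 64): a standard 2D photograph
view = sub_aperture_view(L, 0, 0)  # shape (64, 64): one of 81 perspectives
```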

The underlying basic principles of capturing the 4D light field of a scene are described by Adelson et al. [2]. Two pinhole cameras observing the same scene will produce two different images as shown in Fig. 2.

Fig. 1 Parameterization of a light ray by its intersection points (x, y) and (u, v) with two parallel planes


Fig. 2 Basic principle of capturing the light field

Fig. 3 Basic design of a plenoptic camera

The differences between the images captured, together with the positions of the two cameras, yield depth information about the observed scene. This equals the well-known stereoscopic approach, or simply human vision. The directional information is gained through the use of multiple viewpoints. In early applications, the 4D light field was captured through either moving cameras or a multi-camera setup. With the obtained information of the light field, the appearance of an object from different viewpoints can be reconstructed. This reconstruction method was first proposed by Levoy and Hanrahan [14] and was later called synthetic aperture photography [13, 26]. The complexity of an experimental setup with moving or multiple cameras, due to the required calibration procedure, has encouraged a revolutionary redesign of light field capturing by Adelson et al. [2]. This design, illustrated in Fig. 3, was later improved by Ng et al. [19] and is used in all recent commercial light field cameras. Ng et al. point out that the design can be thought of as a combination of human vision and insect vision, which interestingly is not found in nature. This plenoptic camera consists of a main lens (human eye) and an array of micro lenses (insect eye), positioned close to the photo sensor. Each micro lens projects a sub-image of the main lens image onto the pixels which are positioned beneath the specific micro lens. Hereby, views of neighboring micro lenses overlap in a way that every sub-image is seen by multiple micro lenses and consequently by different pixels on the sensor. This way, directional information is provided, which allows for reconstructing the 4D light field.

Ng et al. present several applications of the light field technique using the described single camera setup. Fig. 4 illustrates these typical applications through a scene observed during first tests of the experimental setup used in this study. The raw data generated by a light field camera is a projection of the main lens image through the micro lens array (Fig. 4, left). The micro lens images are clearly seen in the close-up. Matching the aperture sizes and focal lengths of the micro lenses and the main lens in a way that the respective quotients (f-numbers) are equal ensures that the images of neighboring micro lenses just touch. Overlapping or too small micro images reduce the light field camera resolution. Through post-processing, the reconstructed light field allows the observed scene to be synthetically refocused at different depths. By adding the sharpest parts of images focused at different positions throughout the depth of field (DoF), the DoF is extended, allowing the reconstruction of a total focus image (Fig. 4, middle). However, the most important step of post-processing the light field information in the present study is the extraction of depth information (Fig. 4, right). This information can be used to investigate a 3D flow field by identifying the 3D position of tracer particles.
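A minimal sketch of the synthetic refocusing step is given below: sub-aperture views are shifted in proportion to their angular offset and averaged, and a crude sharpness measure selects the best-focused slice of the resulting focal stack. The array layout and the focus metric are illustrative assumptions and not the processing chain of the commercial software used later in this study.

```python
# Hedged sketch: shift-and-sum synthetic refocusing from sub-aperture views.
import numpy as np

def refocus(lightfield, shift):
    """Shift each sub-aperture view in proportion to its angular offset and average.
    The parameter 'shift' (pixels per angular step) selects the synthetic focal plane."""
    n_v, n_u, n_y, n_x = lightfield.shape
    cv, cu = (n_v - 1) / 2, (n_u - 1) / 2
    out = np.zeros((n_y, n_x))
    for v in range(n_v):
        for u in range(n_u):
            dy = int(round(shift * (v - cv)))
            dx = int(round(shift * (u - cu)))
            out += np.roll(lightfield[v, u], (dy, dx), axis=(0, 1))
    return out / (n_v * n_u)

L = np.random.rand(9, 9, 64, 64)                           # assumed light field data
stack = [refocus(L, s) for s in np.linspace(-2, 2, 11)]    # synthetic focal stack
sharpness = [np.var(np.gradient(im)[0]) for im in stack]   # crude per-image focus measure
best = stack[int(np.argmax(sharpness))]
# A total focus image would instead keep the sharpest slice per pixel rather than per image.
```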

Since the demonstration of the possibilities of light field imaging by Ng et al. [19], significant progress has been made in terms of light field camera performance. The fact that the main lens is focused on the micro lens plane while the micro lenses are focused on infinity in Ng's concept design intrinsically sets the maximum spatial resolution of the camera to the number of micro lenses, regardless of the image sensor properties (see [24] for a detailed analysis). The lateral resolution of a light field camera is always lower than the resolution of the imaging sensor due to the need for multiple lenses observing the same point of the image.


Fig. 4 Application of the 4D light field information: Raw data (left); digitally refocused images and total focus image with extended depth of field (middle); color coded depth information with increasing distance from green to blue (right) (Camera: Raytrix R29)

Recent high-end light field cameras are able to obtain a resolution of up to 25% of the sensor resolution, while allowing for an up to six-fold increase in the DoF compared to traditional digital cameras.

2.1 Theoretical Background of Light Field Imaging for PIV

Both the depth of field and the resolution are major criteria for the suitability of light field imaging for PIV. In order to calculate the depth of an object with tolerable uncertainty, the object has to be in focus and consequently positioned in the DoF of the camera. The maximum depth of the measurement volume in PIV experiments is thus limited by the respective DoF of the employed optical setup. The maximum effective resolution of the camera influences the precision with which particle positions can be detected and the overall quality of recorded images. Hence, higher resolutions enable a more precise resolution of the respective flow phenomena. Based on these considerations, the concepts and performance limits of recent light field cameras regarding effective resolution and DoF are analyzed in the following. The presented concepts are based on the work of Perwaß and Wietzke [24], who describe the functionality of recent light field cameras and the relation between resolution and DoF in detail.

In order to clarify the special features of light field cameras with respect to the DoF and the resolution, Fig. 5 illustrates the image formation of a traditional camera. Hereby, the image formation of an object point P at distance a to the main lens is shown. Two further object points P′ and P″ exist, whose images are projected to the image distances b′ and b″, respectively. These image distances are larger (b′) and shorter (b″) than the distance B between the main lens and the sensor plane, so that blurred images of finite diameter |s| occur on the sensor.

Fig. 5 Depth of field of a traditional camera

Fig. 6 Comparison of the effective resolution ratio (ERR) of a traditional camera and a plenoptic camera: (a) ERR of a standard camera, (b) ERR of a plenoptic camera

These two object distances limit the DoF of the given setup. A useful quantity to characterize the performance of a camera in terms of effective resolution and DoF is the effective resolution ratio (ERR), as described by Perwaß and Wietzke [24]. Disregarding the wave character of light, the ERR can be thought of as the quotient of the pixel size p and the maximum of p and the diameter of the image on the sensor, |s|:

ERR = p / max(|s|, p)   (1)

For an object outside the DoF, the diameter of the image will be larger than one pixel (Fig. 5), resulting in a reduced resolution. This is illustrated in Fig. 6 (a), where the DoF is characterized by an ERR equal to one. In a plenoptic camera, the installation of a micro lens array in front of the sensor plane influences the ERR in object space, as seen in Fig. 6 (b). While displaying the same basic shape as in the standard case, the ERR in the DoF is not equal to unity for a light field camera but shows a proportionality to 1/a_micro, with a_micro being the distance between the micro lens plane and the image of the main lens. As a_micro increases, more micro lenses see the same point. Fig. 7 illustrates this relationship. Plane 1 indicates the so-called "total covering plane" [24], being the plane closest to the micro lens array where the entire image plane is seen by at least one lens.

Fig. 7 Inherent reduction in resolution of a light field camera: at the total covering plane (plane 1, a_micro = B_micro) the maximum ERR is 1 but no directional information is available; at plane 2 (a_micro = 2 B_micro) the maximum ERR is 0.5 and reconstruction of the light field becomes possible

Only at plane 2 is every image point seen by at least two micro lenses, which allows for the acquisition of directional information and the calculation of the object's depth. Therefore, the maximum resolution of a plenoptic camera is half the resolution of the sensor, following the equation

ERR = (B_micro / a_micro) · p / max(|s|, p)   (2)

Note that this relation describes the one-dimensional case. In two dimensions the ERR is approximately squared, resulting in a maximum resolution of 25% of the sensor resolution, which is the value achieved by recent light field cameras.
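To illustrate Eq. (1), the sketch below evaluates the ERR of a conventional camera over object distance using thin-lens geometry and the standard depth-of-field formulas; focal length, f-number and pixel pitch are illustrative assumptions, and the additional plenoptic factor B_micro/a_micro of Eq. (2) is only noted in a closing comment.

```python
# Hedged sketch: ERR of a conventional camera, Eq. (1), with thin-lens geometry.
# All parameters are illustrative assumptions, not those of the camera used in the paper.
import numpy as np

f = 0.1            # main lens focal length [m]
N = 2.8            # f-number (assumed)
D = f / N          # aperture diameter [m]
p = 5.5e-6         # pixel pitch [m]
a0 = 1.0           # focus distance [m]

b0 = 1.0 / (1.0 / f - 1.0 / a0)     # sensor position for an object in focus at a0
a = np.linspace(0.5, 5.0, 500)      # object distances [m]
b = 1.0 / (1.0 / f - 1.0 / a)       # image distance for each object distance
blur = D * np.abs(b0 - b) / b       # blur-spot diameter |s| on the sensor [m]
err = p / np.maximum(blur, p)       # Eq. (1): ERR = 1 inside the DoF, < 1 outside

# Conventional DoF limits for a circle of confusion of one pixel:
H = f * f / (N * p) + f                     # hyperfocal distance [m]
a_near = a0 * (H - f) / (H + a0 - 2 * f)
a_far = a0 * (H - f) / (H - a0)
print(f"ERR at 0.9 m: {err[np.argmin(np.abs(a - 0.9))]:.3f}")
print(f"DoF: {(a_far - a_near) * 1e3:.1f} mm around {a0} m")
# A plenoptic camera adds the factor B_micro / a_micro of Eq. (2), so its ERR stays
# below 0.5 (1D) even in focus, while the usable DoF is extended (cf. Figs. 6 and 9).
```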

Fig. 8 shows the entire imaging process of a plenoptic camera regarding the DoF. The DoF of the micro lenses is hereby positioned on the image side of the main lens. As the maximum effective resolution of the micro lenses is half the sensor resolution (1D) and decreases with distance from the micro lens array, a minimum ERR has to be chosen to calculate a respective DoF. Therefore, the declaration of the DoF for a light field camera is only appropriate in combination with a corresponding effective resolution. The projection through the main lens gives the object-side DoF. Note that the main lens should be positioned in a way that the focused point in object space is projected to the maximum ERR of the micro lenses for DoF maximization. Due to the characteristics of the ERR in object space, the effective resolution varies with distance to the camera. Note that up to this point only real main lens images have been assumed and illustrated. For virtual main lens images, projected behind the image plane (the object distance of the micro lenses is negative), the ERR increases with object distance, in contrast to Fig. 8. Based on the concepts described above, recent light field cameras are able to combine the need for a high effective resolution and a large DoF by combining micro lenses with different focal lengths into a hexagonal multi lens array (MLA). This allows for the aforementioned maximum resolution of 25% of the sensor resolution (2D) with a six-fold increase in the DoF in comparison to standard digital cameras (Fig. 9). The relative depth of field extension in comparison to a traditional camera is maximal for close objects and increases with the focal length of the main lens.

In order to provide deeper insights into the resolution limits of a light field camera as described by Perwaß and Wietzke [24], Fig. 10 illustrates the relationship between achievable DoF and lateral/depth resolution for an exemplary light field camera. The sensor resolution used in the calculations was 6576 x 4384 pixels with a pixel pitch of 5.5 μm, as present in the employed light field camera. The main lens focal length is assumed to be 100 mm and the focus distance varies from 0.5 to 5 m in the depicted range. Since the ERR varies within […]

Fig. 8 DoF of a plenoptic camera

Fig. 9 Extension of the DoF by the use of a multi lens array (MLA)

The depth resolution can be understood as quantized distinguishable depth layers in object space. Consequently, this resolution is strongly linked to the precision of detecting the position of a particle on the image sensor. Since the depth is estimated by comparing the positions of corresponding particle images under neighboring micro lenses, the resolution can be improved if the center position of particle images is determined with sub-pixel accuracy. Provided good contrast in the observed scene in combination with well resolved particles, whose images should only cover a few pixels on the sensor, a determination of the particle center with up to a tenth of a pixel accuracy is reasonable. The depth resolution is correspondingly improved ten-fold. For a DoF of about 2 cm, lateral and depth resolution are estimated to be 4.6 μm and 410 μm, respectively (see Fig. 10). These values indicate that the inherent difference between the resolutions in lateral and axial direction can be several orders of magnitude. Note that these values are based on a simplified model of the light field camera that was used. However, first experimental results show similar trends for the spatial resolutions.

Summarizing, the presented concepts of single camera light field imaging show the suitability of this technique for PIV. The computation of depth information using only one camera instead of three to four, as in other PIV methods, promises a simplification of the experimental setup and calibration process. The depth of the investigated measurement volume is limited to the DoF of a given optical setup. However, an up to six-fold extended DoF allows for measurement volumes sufficient for many applications. The inherent resolution reduction of a plenoptic camera can partly be compensated by the implementation of higher resolution sensors.


Fig. 10 Depth of field versus lateral and depth resolution for an exemplary simplified model of a light field camera. Main lens parameters: focal length = 100 mm and focus distance = 0.5 to 5 m. The pixel values indicate the precision of particle center detection.

This allows for a further development of light field cameras regarding performance and application. This is particularly true for the depth resolution and, respectively, for the number of distinguishable depth layers within the measurement range. Tracer particle centers should be resolved with sub-pixel accuracy to obtain a sufficient depth estimation. This requires well illuminated, high contrast images of the particles. A possible drawback exists in the limited baseline of viewpoints of a light field camera in comparison to a multi-camera setup, where the cameras are much farther apart than the single micro lenses observing the same object point. Thus, closer particles can conceal particles behind them, limiting the seeding density of the tracer particles.

2.2 Recent Applications of Light Field Particle Image Velocimetry

In the past few years, various researchers have succeeded in applying principles of light field imaging to 3D PIV. Instead of using a light field camera as proposed and built by Ng et al. [19] and Adelson and Bergen [2], Belden et al. [3] and Truscott et al. [25] utilize a multiple camera setup to reconstruct the light field. They termed their approach synthetic aperture PIV, referring to methods previously applied in synthetic aperture photography. The approach allows for higher seeding densities in the measurement volume, using multiple viewpoints to overcome occlusion. By focusing to different depths through the measurement volume and discarding blurred particles not positioned in the plane of focus, the 3D particle distribution is reconstructed.

Lynch [15] developed computational procedures for reconstructing particle positions captured with a plenoptic camera by using synthetic plenoptic images. By simulations, the performance of light field PIV was theoretically analyzed.

Examples of recent applications of single camera light field PIV were presented by Cenedese et al. [5] and Nonn et al. [20]. The work presented here adds a new component to the application of light field PIV, including temperature field measurements by means of thermometric liquid crystals as tracer particles. Furthermore, it aims to provide insights regarding the setup of light field PIV experiments.

Fig. 11 Experimental setup: (a) schematic with xenon light source, cubic cell and light field camera (cell dimensions: 22.5 mm | 49.5 mm, 12.5 mm | 39 mm, 45 mm | 100 mm); (b) photograph

3. EXPERIMENTAL SETUP

The combined 3D flow- and temperature field measurements using light field photography and liquid crystal thermometry are applied to a simple convection problem as a first test case. The experimental setup used in the present study is illustrated in Fig. 11. Two different cubic cells with the given dimensions (Fig. 11 (a)) are employed and subjected to a bottom side heat flux. The optical setup of the light field camera is adjusted in a way that the DoF and the respective depth of the measurement volume match. Thermotropic liquid crystals with a mean diameter of 29 μm (R30C20W LCR from Hallcrest) are seeded in a glycerol-water mixture at low seeding densities in order to allow for particle tracking. Image analysis and particle tracking were performed with a commercial light field PIV software provided by Raytrix. The measurement volume is illuminated volumetrically by a xenon light source, which is positioned perpendicular to the viewing direction of the light field camera. The employed light field camera (Raytrix R29) uses a 28 megapixel image sensor and has a maximum effective resolution of 7 megapixel. In addition to the described components, a neutral, uniform background is installed in the line of sight of the light field camera behind the measurement volume. Thus, a defined color value is measured in between observed tracer particles, which simplifies particle detection.

Note that the use of liquid crystals as local thermometers results in limitations and certain requirements with respect to the choice of illumination sources. As the color of the crystals relies on the selective reflection of incoming light, the high intensity illumination provided by lasers cannot be employed. Thus, current measurements are limited to low speed flows: exposure times of about 50 ms are needed for sufficient illumination by the xenon light, and particle tracking algorithms are used for data analysis. The use of LED spotlights in addition to the xenon light source has been tested to allow for shorter exposures (Fig. 11), but only at the expense of image quality.

4. RESULTS

The present study aims at exploring the performance of combined velocimetry and thermometry by light field imaging. Thus, exemplary results of recent experiments (see Fig. 11) are shown and discussed in order to identify the current limits of this technique.

4.1 Velocimetry

The particle positions and velocities measured in recent experiments are illustrated in Fig. 12.


Fig. 12 Front view of 3D particle traces in a stationary convection cell (top) and resolution of a single trace in x,y and z (bottom)

Fig. 13 3D view of particles, detected over a timespan of 9 s (top); thin slices (a–c) through the volume at z = 412–413 mm, 415–416 mm and 422–423 mm (bottom)

A cubic […] center of the image sensor. A front view of particle traces recorded over a time span of 20 s is presented at the top. The frame rate is 5 fps with an exposure time of 50 ms. Only traces of particles which were tracked over more than 10 frames are shown in order to suppress noise.

A single particle trace, marked red in the insert, is investigated with respect to the accuracy of the measured position (bottom left plots) and consequently the velocity (bottom right plots). The particle position in x and y direction is determined with low variance. The particle trajectory is well described by a polynomial fit with a standard deviation below σ = 10 μm. This allows for a reasonable calculation of the respective velocity. In contrast, the depth information is of much higher variance (σ = 428 μm), such that the velocity calculation is highly noisy (two orders of magnitude larger than the noise in x- and y-direction). Note that the depicted velocities are evaluated for consecutive frames. The error can be reduced by evaluating the position of the particle over a few frames and fitting a most probable trajectory to the data, as indicated by the polynomial fits in Fig. 12 and sketched below. In addition, measurements could be improved by further optimizing experimental conditions. In particular, the use of liquid crystals as tracer particles is found to complicate exact particle center detection due to reflections on the surface of the crystals' micro-encapsulation and inhomogeneities in the reflected light. A setup with illumination parallel to the viewing direction could surpass the resolutions achievable with the perpendicular setup employed for the presented results.
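The smoothing step mentioned above can be illustrated as follows: a low-order polynomial is fitted to a synthetic depth trace with the noise level quoted in Fig. 12, and its analytical derivative yields a much smoother velocity estimate than frame-to-frame differencing. Frame rate, noise level and trace length follow the values given in the text; the trajectory itself is an assumption.

```python
# Hedged sketch: trajectory smoothing by a polynomial fit before differentiation.
import numpy as np

fps = 5.0                                      # frame rate used in the experiment
t = np.arange(100) / fps                       # 100 frames, i.e. 20 s of tracking
z_true = 425.0 + 0.15 * t + 0.01 * t ** 2      # assumed smooth depth trajectory [mm]
v_true = 0.15 + 0.02 * t                       # its analytical derivative [mm/s]
z_meas = z_true + np.random.normal(0.0, 0.428, t.size)   # sigma ~ 428 um depth noise

# Frame-to-frame differencing amplifies the noise (std ~ sqrt(2)*sigma*fps ~ 3 mm/s) ...
v_raw = np.diff(z_meas) * fps

# ... whereas differentiating a low-order polynomial fit suppresses most of it.
coeffs = np.polyfit(t, z_meas, deg=3)
v_fit = np.polyval(np.polyder(coeffs), t)

print(f"velocity error, frame-to-frame: {np.std(v_raw - v_true[:-1]):.2f} mm/s, "
      f"polynomial fit: {np.std(v_fit - v_true):.3f} mm/s")
```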

Concluding, current results suggest high uncertainties in the direct measurement of the z-component of the velocity field with the employed setup. However, the suitability of light field imaging for the 3D positioning of particles is apparent, and high-resolution in-plane velocity fields (two velocity components) can be obtained within consecutive layers throughout the entire measurement volume.

This is illustrated in Fig. 13. Particle traces in a cubic cell of 39 mm depth are shown in a 3D view. In addition, thin slices through the volume at the indicated z-positions are depicted (Fig. 13, bottom).

Fig. 14 Evaluation of the particle color: the distribution of detected particles in combination with their color in the RGB color space is illustrated for one instant of time. Particle traces are shown for flow visualization. The volumetric distribution of particles and respective colors is illustrated for exemplary regions of differing temperature (low T ≈ 28 °C, medium T ≈ 29 °C, high T ≈ 30 °C).

4.2 Thermometry

For the identification of the temperature field of the investigated flow, the apparent color of the seeded liquid crystals has been evaluated. In this context, the total focus image of the observed scene was employed, which displays all particles within the depth of field in focus. Superimposing this image with the positions of particles combines the position information with color information. Fig. 14 illustrates the distribution of detected particles in combination with their color in the RGB color space for one instant of time. In addition, all particle positions detected over a time span of 10 s are plotted to indicate the flow. For particle color detection, thresholds for the brightness value of the image have been applied in order to suppress background noise and bright reflections. Subsequently, the detected color has been averaged over a 10x10 pixel interrogation window around the detected particle position, accounting for the size of the particle (a sketch of this step is given at the end of this section). In Fig. 14, the marker size has been increased for better visibility. The volumetric distribution of particles and respective colors is illustrated for exemplary regions of different temperatures. A blue dominance is found at the bottom of the cell, indicating high temperatures due to the imposed bottom side heat flux. Red and green dominant volumes indicate low and medium local temperatures within the working range of the liquid crystals of approximately 28 to 30 °C. An instantaneous temperature distribution can be achieved by correlating apparent particle colors with temperature using well-established calibration procedures, e.g. calibration over particle hue values. The accuracy of the temperature measurement has to be evaluated with regard to two aspects: the spatial resolution of the temperature and the accuracy of the temperature value itself. While the spatial resolution is linked to the precision of the detection of particle positions, the accuracy of the temperature value depends on the homogeneity of the particles' color responses.
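The color evaluation described above can be sketched as follows; window size and brightness thresholds are illustrative, the image is synthetic, and the resulting hue value would feed a calibration of the kind discussed in Section 1.2.

```python
# Hedged sketch: averaging the RGB values around a detected particle position and
# converting them to a hue value for a subsequent temperature calibration.
import numpy as np

def mean_particle_rgb(image, center, half=5, v_min=0.15, v_max=0.95):
    """Average the RGB values in a ~10x10 window, ignoring dark background pixels
    and bright specular reflections via brightness thresholds (v_min, v_max)."""
    y, x = center
    win = image[y - half:y + half, x - half:x + half].reshape(-1, 3)
    v = win.max(axis=1)                        # simple per-pixel brightness
    keep = (v > v_min) & (v < v_max)
    return win[keep].mean(axis=0) if keep.any() else np.full(3, np.nan)

def rgb_to_hue(rgb):
    """Hue in [0, 1) computed from an RGB triplet with components in [0, 1]."""
    r, g, b = rgb
    mx, mn = max(rgb), min(rgb)
    if mx == mn:
        return 0.0
    if mx == r:
        h = ((g - b) / (mx - mn)) % 6.0
    elif mx == g:
        h = (b - r) / (mx - mn) + 2.0
    else:
        h = (r - g) / (mx - mn) + 4.0
    return h / 6.0

img = np.random.rand(128, 128, 3) * 0.05       # dark, noisy background
img[60:70, 60:70] = (0.2, 0.6, 0.4)            # greenish particle patch (illustrative)
rgb = mean_particle_rgb(img, (65, 65))
print(rgb, rgb_to_hue(rgb))                    # hue then maps to temperature via calibration
```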

5. CONCLUSION

The suitability of single camera light field imaging for the simultaneous determination of 3D velocity and temperature fields within flows has been analyzed through application to a simple convection problem in a semi-confined cell. Thermotropic liquid crystals have been used as tracer particles, and the technique's ability to resolve their position as well as their apparent color has been evaluated. Following theoretical considerations, an inherent and significant discrepancy between the lateral and axial resolution of the particle position has been observed. Typical values for the uncertainty in the position of a particle in the experiments shown are a few micrometers in the lateral direction and a few hundred micrometers in depth. The total depth of field was in the range of centimeters. Particle color and particle position have been combined by superimposing detected particle positions and the color image. In a next step, fluctuations in the response of individual liquid crystals, as described in associated studies [21], could be addressed by volumetric averaging within a defined interrogation volume.

ACKNOWLEDGMENTS

This work was financially supported by the Deutsche Forschungsgemeinschaft (DFG) (Grant No. DFG KN 764/3-2).

REFERENCES

[1] Adelson, E. H. and Bergen, J. R., “The plenoptic function and the elements of early vision,” In Computational Models of Visual Processing, edited by Michael S. Landy and J. Anthony Movshon. Cambridge, Mass.: MIT Press, (1991).

[2] Adelson, E. H. and Wang, J. Y., “Single Lens Stereo with a Plenoptic Camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, (1992).

[3] Belden, J., Truscott, T. T., Axiak, M. C., and Techet, A. H., “Three-dimensional synthetic aperture particle image velocimetry,” Measurement Science and Technology, 21, (2010).

[4] Brücker, C., "Digital-Particle-Image-Velocimetry (DPIV) in a scanning light-sheet: 3D starting flow around a short cylinder," Experiments in Fluids, 19, pp. 255–263, (1995).

[5] Cenedese, A., Cenedese, C., Furia, F., Marchetti, M., Moroni, M., and Shindler, L., "3D particle reconstruction using light field imaging," 16th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal, (2012).

[6] Ciofalo, M., Signorino, M., and Simiano, M., "Tomographic particle-image velocimetry and thermography in Rayleigh-Bénard convection using suspended thermochromic liquid crystals and digital image processing," Experiments in Fluids, 34, pp. 156–172, (2003).

[7] Elsinga, G. E., Scarano, F., Wieneke, B., and van Oudheusden, B. W., “Tomographic particle image velocimetry,” Experiments in Fluids, 41, pp. 933–947, (2006).

[8] Fujisawa, N., Funatani, S., and Katoh, N., "Scanning liquid-crystal thermometry and stereo velocimetry for simultaneous three-dimensional measurement of temperature and velocity field in a turbulent Rayleigh-Bénard convection," Experiments in Fluids, 38, pp. 291–303, (2005).

[9] Gershun, A., "The Light Field," Moscow, 1936; English translation by P. Moon and G. Timoshenko, Journal of Mathematics and Physics, 18, pp. 51–151, (1939).

[10] Hiller, W. J. and Kowalewski, T. A., "Simultaneous measurement of temperature and velocity fields in thermal convective flows," In Flow Visualization IV, Ed. Claude Veret, Hemisphere, Paris, pp. 617–622, (1987).

[11] Hiller, W. J., Koch, S., and Kowalewski, T. A., “Onset of Natural Convection in a Cube,” International Journal of Heat and Mass Transfer, 13, pp. 3251–3263, (1993).

[12] Hinsch, K. D., “Holographic particle image velocimetry,” Measurement Science and Technology, 13, pp. R61–R72, (2002).

[13] Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., and Bolas, M., "Synthetic Aperture Confocal Imaging," Proceedings of the 31st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '04, (2004).

[14] Levoy, M. and Hanrahan, P., “Light field rendering,” Proceedings of the 23rd annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, pp. 31-42, (1996).

[15] Lynch, K., "Development of a 3-D Fluid Velocimetry Technique Based on Light Field Imaging," Master's thesis, Auburn University, (2011).

[16] Maas, H. G., Gruen, A., and Papantoniou, D., “Particle tracking velocimetry in three-dimensional flows Part 1. Photogrammetric determination of particle coordinates,” Experiments in Fluids, 15, pp. 133–146, (1993).

[17] Malik, N. A., Dracos, T., and Papantoniou, D. A., “Particle tracking velocimetry in three-dimensional flows Part II: Particle tracking,” Experiments in Fluids, 15, pp. 279–294, (1993).

[18] Meng, H., Pan, G., Pu, Y., and Woodward, S. H., “Holographic particle image velocimetry: from film to digital recording,” Measurement Science and Technology, 15, pp. 673–685, (2004).

[19] Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., and Hanrahan, P., "Light Field Photography with a Hand-held Plenoptic Camera," Tech. rep., Stanford Tech Report CTSR, (2005).

[20] Nonn, T., Kitzhofer, J., Hess, D., and Brücker, C., "Measurements in an IC-engine Flow using Light-field Volumetric Velocimetry," 16th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal, (2012).

[21] Park, H. G., Dabiri, D., and Gharib, M., “Digital particle image velocimetry/thermometry and application to the wake of a heated circular cylinder,” Experiments in Fluids, 30, pp. 327–338, (2001).

[22] Pereira, F. and Gharib, M., “Defocusing digital particle image velocimetry and the three-dimensional characteriza-tion of two-phase flows,” Measurement Science and Technology, 13, pp. 683–694, (2002).

[23] Pereira, F., Gharib, M., Dabiri, D., and Modarress, D., “Defocusing digital particle image velocimetry: a 3-component 3-dimensional measurement technique. Application to bubbly flows,” Experiments in Fluids, 29, pp. 78–84, (2000).

[24] Perwaß, C. and Wietzke, L., “Single Lens 3D-Camera with Extended Depth-of-Field,” Proc. SPIE 8291, Human Vision and Electronic Imaging, 17, (2012).

[25] Truscott, T. T., Belden, J., Nielson, J. R., Daily, D. J., and Thomson, S. L., “Determining 3D Flow Fields via Multi-camera Light Field Imaging,” Journal of Visualized Experiments, 73, (2013).

[26] Vaish, V., Wilburn, B., Joshi, N., and Levoy, M., “Using Plane + Parallax for Calibrating Dense Camera Arrays,” IEEE Transactions on Computer Vision and Pattern Recognition, (2004).

[27] Wozniak, G. and Wozniak, K., “Buoyancy and thermocapillary flow analysis by the combined use of liquid crystals and PIV,” Experiments in Fluids, 17, (1993).
