Modelling atmospheric turbulence and its influence on face recognition performance
H.G. Sikkema
Abstract—Atmospheric turbulence heavily affects image quality, and with it face recognition performance. As light travels through the complex air flows it is redirected, creating blurs and geometric distortions in the final image. A model proposed for atmospheric turbulence in free-space optical communication applications is recreated for face recognition applications. It models atmospheric turbulence by recreating the complex air flows as bubbles with varying refractive indices. In two separate implementations, the edges of these bubbles are shown to cause artefacts that distort the image differently from atmospheric turbulence. The effects of atmospheric turbulence on the performance of face recognition have also been tested, using a model based on Zernike coefficients. This showed that at mid to long ranges atmospheric turbulence can cause a significant decrease in face recognition performance. Depending on the atmosphere, face recognition can either be less accurate or fail to detect any faces at all.
I. INTRODUCTION
In the last decade, the accuracy of facial recognition methods has greatly improved. Some methods have even reached 100% accuracy [1]; however, that was achieved on a data set that varied only pose and illumination. The images in the data set were taken at close range to the subject, removing an important possible cause of distortions. When images are taken at larger distances, atmospheric turbulence will start to distort the image.
Atmospheric turbulence describes the turbulent flow of air in the atmosphere. Like water, air can exhibit laminar or turbulent flow. With laminar flow, all the air flows in the same direction without mixing, similar to water flowing through a hose. With turbulent flow, the air flows chaotically, similar to water flowing in a river: instead of the air moving in one direction, many smaller air flows move in different directions with varying velocities.
Each small airflow has its own humidity and temperature, which causes the light to change velocity slightly every time it transitions from one airflow to the next. The light refracts due to this change in velocity, as described by Snell's law.
An example of this transition can be seen in Figure 1. The strength of the refraction depends on the variations between air flows: in hot, humid weather these variations are much larger than in cold, dry weather.
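For reference, the refraction at such a transition follows Snell's law; this is the standard textbook form, not a formula specific to any of the models discussed here:

```latex
% Snell's law: refraction at the interface between two media,
% where n_1 and n_2 are the refractive indices of the two media and
% \theta_1, \theta_2 the angles between the ray and the surface normal.
n_1 \sin\theta_1 = n_2 \sin\theta_2
```

With $n_1 < n_2$, as in Figure 1, the ray bends towards the normal; the larger the difference between the two indices, the stronger the refraction.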
When an image is taken through atmospheric turbulence, the image becomes distorted. These distortions can be severe enough that facial recognition methods can no longer detect and recognise the subjects at all, as shown by Leonard et al. [3]. In the same paper, it is also shown that distorted images can be partially restored, which improves the performance of face recognition.
Fig. 1: The refraction of light at the transition from one medium to another, with $n_1 < n_2$ [2].
Due to the chaotic nature of atmospheric turbulence, neural networks are seen as a promising method to restore distorted images. In order to create these networks, large training sets of turbulence-distorted images are required, but these are not readily available. Multiple methods to model atmospheric turbulence using undistorted data sets have already been proposed.
Fig. 2: A comparison between the approach of most models and the model proposed by Yuksel and Davis.
These methods use mathematical approaches to recreate the geometric distortions and blur created by atmospheric turbulence. In 2006, Yuksel and Davis [4] proposed a method for modelling atmospheric turbulence in Free-Space Optical (FSO) communication that is more closely related to the cause of atmospheric turbulence. The model tries to recreate the random air flows with varying refractive indices. It places bubbles with varying sizes and varying refractive indices in a 3D space between the emitter and the receiver, as can be seen in Figure 3. The model is intended to model atmospheric turbulence for a single beam of light, but could also form the basis of a more accurate model for turbulence-distorted images. This is a unique approach, since it recreates the cause of atmospheric turbulence distortion instead of just recreating the distortion itself, as can be seen in Figure 2.
Fig. 3: Atmospheric turbulence model as proposed by Yuksel and Davis [4].
To test whether such a model is feasible, two implementations will be tested. One is created in Java and works similarly to a ray tracer, except that it traces the beams of light from the camera back to their origin. This allows it to consider only those beams that can be seen by the camera. The second implementation is created in three.js, a JavaScript library for creating 3D computer graphics. In a 3D scene, the bubbles are placed between the original image and the camera.
The first aim of this paper is to study the feasibility of the model proposed by Yuksel and Davis, by recreating it with two different implementations and comparing the results to those of existing models. The second aim of this paper is to analyse the effect of variations in atmospheric turbulence on the performance of face recognition.
II. RELATED WORK
1) Modelling atmospheric turbulence: Various approaches to modelling atmospheric turbulence distortions in images have already been proposed. As shown in Figure 2, these focus on modelling the resulting distortion of the images.
Deledalle et al. [5] used the Fried kernel with white Gaussian noise added on top, in order to model atmospheric turbulence. Chak et al. [6] recreate atmospheric turbulence by geometrically distorting a set area around a randomly selected pixel; this process is repeated for numerous pixels, after which the entire image is blurred. Yasarla et al. [7] use Gaussian blur kernels. Chimitt and Chan [8] model atmospheric turbulence using Zernike coefficients; this model is explained in more detail in Section III-B.
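To make the simplest of these approaches concrete, the sketch below blurs an image with a normalised Gaussian kernel in Java, using only the standard library. It is a generic illustration of blur-kernel-based distortion models, not the exact method of [7]; the kernel size heuristic is an assumption.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

// Generic Gaussian blur over an image, as used by the simplest class of
// turbulence models; not the exact method of [7].
public class GaussianBlur {
    static BufferedImage blur(BufferedImage src, double sigma) {
        // Kernel width: roughly 6 sigma, forced odd (an assumed heuristic)
        int size = (int) Math.ceil(6 * sigma) | 1;
        float[] weights = new float[size * size];
        int c = size / 2;
        float sum = 0f;
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                double r2 = (x - c) * (x - c) + (y - c) * (y - c);
                weights[y * size + x] = (float) Math.exp(-r2 / (2 * sigma * sigma));
                sum += weights[y * size + x];
            }
        }
        for (int i = 0; i < weights.length; i++) {
            weights[i] /= sum; // normalise so image brightness is preserved
        }
        return new ConvolveOp(new Kernel(size, size, weights),
                              ConvolveOp.EDGE_NO_OP, null).filter(src, null);
    }
}
```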
2) The effect of atmospheric turbulence: Evaluating the effects of atmospheric turbulence on the performance of facial recognition has also been done before. Yasarla et al. [7] and Deledalle et al. [5] both use distorted images in their comparisons, where they discuss the results of their turbulence mitigation methods. Espinola et al. [9] use images distorted by real atmospheric turbulence, taken at a distance of 300 m, to discuss its effects on face recognition performance. Leonard et al. [3] show the effects of atmospheric turbulence at ranges up to 400 m; they also show the improvement in face recognition performance obtained by partially restoring the distorted images.
III. METHODS
A. A geometrical optics approach
The method proposed by Yuksel and Davis was created for FSO communication applications, where signals are sent as EM waves from an emitter to a receiver. The main focus of that study was to estimate the distance from the centre of the receiver at which the signal arrives. Using these estimates, an appropriate size for the receiving aperture can be chosen in order to receive the required amount of power. To model atmospheric turbulence distortion in an image using a similar approach, some changes need to be made to the model.
The model places bubbles with varying refractive indices in a 3D space. The image is placed on one side and the camera on the other. As the light passes through the bubbles it is distorted, similar to how light is distorted after passing through the air flows in atmospheric turbulence. In order to recreate the chaotic nature of atmospheric turbulence, the model uses multiple random distributions. The index of refraction is determined using a normal distribution with a mean of 1.0001 and a variance of $10^{-5}$, and the radius of the bubbles is determined using a uniform distribution ranging from 1 mm to 1 m.
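A minimal sketch of this sampling step is given below, in Java since that is the language of the first implementation. It assumes the stated variance of $10^{-5}$ corresponds to a standard deviation of $\sqrt{10^{-5}}$, and that all lengths are in metres.

```java
import java.util.Random;

// Minimal sketch of the bubble parameter sampling described above
// (all lengths in metres). Assumes the variance of 1e-5 corresponds
// to a standard deviation of sqrt(1e-5).
public class BubbleSampler {
    private static final Random RNG = new Random();

    // Index of refraction ~ Normal(mean = 1.0001, variance = 1e-5)
    static double sampleRefractiveIndex() {
        return 1.0001 + Math.sqrt(1e-5) * RNG.nextGaussian();
    }

    // Radius ~ Uniform(1 mm, 1 m)
    static double sampleRadius() {
        return 0.001 + (1.0 - 0.001) * RNG.nextDouble();
    }
}
```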
In the original model [4], 14000 bubbles were placed in the 3D space; modelling this many bubbles is computationally very demanding. In the original model, one run was enough to generate the required data for a given set of parameters, whereas generating accurate data on the image distortion caused by a given set of parameters requires hundreds of distorted images. The implementations therefore need to be created with efficiency in mind. Since the only bubbles of interest are those that influence the light coming from the image, these are the only bubbles that are placed.
Two different implementations of the model were created.
The first one functions similarly to a ray tracer, but it only calculates the rays of light that eventually reach the camera. The second implementation creates the bubbles as 3D computer graphics; while this allows for more complex light refraction and reflection, it is also more demanding on the hardware, so this second implementation can render far fewer bubbles than the first.
1) Java implementation: The first implementation was created in Java; the pipeline for this implementation was created by Zeinstra. As mentioned before, it functions similarly to a ray tracer. It starts by determining the size of the distorted image it wants to create, and for each pixel it traces the path of the light back. If the light passed through a bubble, it determines how it passed through the bubble: depending on the angle between the light beam and the surface of the bubble, it either refracts or reflects. If the light can be traced back to a pixel of the original image, the new pixel is given the same RGB value. If the light cannot be traced back to the original image, the pixel is white.
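The refract-or-reflect decision at a bubble surface can be sketched as follows. This is a generic vector form of Snell's law with a total internal reflection check, not the exact code of the pipeline by Zeinstra; the Vec3 type is a hypothetical helper.

```java
// Hedged sketch of the per-ray decision at a bubble surface; the
// actual pipeline by Zeinstra may differ.
public class Refraction {
    // A tiny 3D vector type for illustration.
    record Vec3(double x, double y, double z) {
        Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
        Vec3 add(Vec3 o) { return new Vec3(x + o.x, y + o.y, z + o.z); }
        double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    // dir and normal are unit vectors, with normal pointing towards the
    // incident side; n1 and n2 are the refractive indices on the incident
    // and transmitted side of the surface.
    static Vec3 refractOrReflect(Vec3 dir, Vec3 normal, double n1, double n2) {
        double cosI = -normal.dot(dir);
        double eta = n1 / n2;
        double sinT2 = eta * eta * (1.0 - cosI * cosI);
        if (sinT2 > 1.0) {
            // Beyond the critical angle: total internal reflection
            return dir.add(normal.scale(2.0 * cosI));
        }
        double cosT = Math.sqrt(1.0 - sinT2);
        // Vector form of Snell's law
        return dir.scale(eta).add(normal.scale(eta * cosI - cosT));
    }
}
```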
In this model, 'the camera' is as large as the image that it wants to create, which is set to be as large as the original image that is distorted. So without bubbles, the light would travel in a straight line from the original image to the camera. With the bubbles, the light can be redirected slightly to the outside and still be directed back and reach the camera.
The radius of the bubbles is chosen from a uniform distribution ranging from 1 mm to 1 m, so the average bubble has a radius of 0.5 m. The area where the centres of the bubbles can be placed extends two average radii beyond the image in each direction. A schematic overview of the placement of the bubbles can be seen in Figure 4. This implementation takes approximately 30 seconds to simulate the distortion of one image with 100 bubbles.
Fig. 4: A schematic overview of the placement of the bubbles in the Java implementation.
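The placement region can be sketched as below; imageWidth and imageHeight are hypothetical parameters, and the margin of two average radii follows the description above.

```java
// Sketch of the bubble placement bounds (all lengths in metres).
// Bubble centres may lie up to two average radii outside the image.
static double[] placementBounds(double imageWidth, double imageHeight) {
    double avgRadius = (0.001 + 1.0) / 2.0; // ~0.5 m for Uniform(1 mm, 1 m)
    double margin = 2.0 * avgRadius;        // ~1 m beyond the image edge
    // {xMin, xMax, yMin, yMax} for the bubble centres
    return new double[] { -margin, imageWidth + margin,
                          -margin, imageHeight + margin };
}
```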
2) Three.js implementation: The second implementation uses three.js, a JavaScript library for creating 3D computer graphics [10]. It renders these graphics with WebGL using the GPU. In order to render the bubbles with a certain index of refraction, the material property refractionRatio in three.js is used. The refractionRatio is equal to $n_1/n_2$.