The use of photo response non-uniformity patterns for the comparison of online videos



Student: Rick Cents, 10628347
Supervisor: Zeno Geradts (NFI/UvA)
Examiner: Marcel Worring (UvA)
MSc Forensic Science, University of Amsterdam
Nederlands Forensisch Instituut
Period: 01-04-2015 until 21-10-2015
EC: 36
Journal: Digital Investigation

ABSTRACT

Many videos are uploaded to the internet to be shared between different users. These videos can contain illegal content and can be part of a forensic investigation. This research focused on the possibility of using PRNU patterns for camera identification in online videos uploaded to social media. The videos were first analyzed before uploading, and there was a clear separation between videos recorded with the same camera and videos recorded with a different camera. However, this separation disappeared after uploading the videos to YouTube and Facebook. Different types of frames were considered and the Coiflet filter and 2nd order filter were used, but no distinction could be made between videos recorded with the same camera and videos recorded with a different camera. Analysis of individual frames showed that less PRNU noise is present after uploading the videos to the internet. Much of the noise is probably filtered out by the compression applied to the videos, which means that the PRNU pattern cannot be used to determine whether videos were recorded by the same camera.

Keywords: Digital Forensics, Camera identification, PRNU patterns, Social media

1. INTRODUCTION

Much digital evidence is acquired during an investigation. Digital devices contain a lot of useful information, such as videos and pictures. Cameras and mobile devices are used to take pictures and videos, which are easily shared between persons using social media and messaging applications. In 2014 more than 700 million cameras were shipped worldwide [1]. The content of videos taken with those cameras can provide useful information, but in many cases it is important to determine who created the images or videos. The question of which camera was used to create a certain video or picture can be crucial, for example in child pornography or movie piracy cases. Information about the camera or mobile device can be found in the metadata [2], [3], [4], which contains, for example, the date when the picture or movie was taken and the camera model. However, metadata can easily be changed or removed by the user, making it an unreliable source in a forensic investigation. The use of dead pixels has also been suggested as a feature for camera identification [5]. To determine which camera created a certain picture or movie, the differences in sensitivity to light between adjacent pixels are often used. All pixels should report the same values under uniform lighting conditions. However, this is not the case due to imperfections created during the manufacturing of the camera sensors [6]. Some pixels report slightly higher or lower values than their adjacent pixels. This is called the Photo Response Non-Uniformity (PRNU) pattern, a type of noise present in a picture or video that can be seen as the fingerprint of a camera. This pattern is unique for every camera and is stable over time [7].

PRNU patterns are used for camera identification by comparing the PRNU pattern of a questioned image with patterns from reference images. Previous research has been performed on the use of PRNU patterns in offline videos and pictures, but only limited research has been conducted on their use in online videos. In previous research it was assumed that the camera is available to create reference pictures and videos and that the camera type and model are known. However, it can also be important to link online videos to each other to determine how many cameras were used to record the questioned videos. This can add tactical information to the investigation. No reference videos can be recorded for the comparison, and the extra compression used by online platforms can make it more difficult to determine if videos have a common origin. The main research question addressed in this research is: is it possible to use PRNU patterns as a reliable feature to determine if videos are recorded by the same camera after the videos are uploaded to the internet?

PREVIOUS WORK

There are some important aspects of PRNU patterns for forensic use: a pattern is stable over time and under many physical conditions, and it is present in every picture and video [8]. Research has been performed on the use of PRNU patterns in videos [7], [9], [10]. Geradts and van Houten performed research on camera identification for videos uploaded to YouTube [9]. Flat field reference videos were uploaded to YouTube as reference videos, and in most cases PRNU patterns could be used to determine the source camera. Scheelen and van der Lelie also examined PRNU patterns in videos uploaded to YouTube, but they first encoded the videos offline with an extra encoding and determined that the reliability of PRNU patterns is brand specific if videos receive an extra encoding before they are uploaded to YouTube [11]. One of the problems which could influence the PRNU patterns is the compression used by online platforms. Analysis has been performed on heavily compressed JPEG files, and it was shown that averaging the pixels in a square of 8 x 8 pixels into one block improved the separation between matches and mismatches [12]. A discrete cosine transform, operating on blocks of 8 x 8 pixels, is used to compress a JPEG file [13]. Lines along the edges of the 8 x 8 blocks generated by the discrete cosine transform can be visible in the PRNU pattern and distort it; by averaging the pixels inside a block, the noise generated by the discrete cosine transform is filtered out of the PRNU pattern. Different filters are used to extract the PRNU pattern from an image or video. The two most common are wavelet filters and total variation based filters [7], [14]. Extra filters are used to remove noise residue which is present due to color interpolation and compression. Research has shown that the Wiener filter in combination with the zero mean filter performs best in enhancing the PRNU pattern [15].
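The block-averaging idea from Alles et al. [12] can be sketched as follows. This is an illustrative reimplementation in Python, not the authors' code; it assumes the PRNU pattern is held in a NumPy array:

```python
import numpy as np

def block_average(pattern: np.ndarray, block: int = 8) -> np.ndarray:
    """Average a PRNU pattern inside non-overlapping block x block squares.

    Sketch of the idea from Alles et al. [12]: JPEG's 8x8 DCT blocks leave
    artifacts in the noise residual, and replacing every pixel in a block by
    the block mean suppresses them. The pattern is cropped to a whole number
    of blocks for simplicity.
    """
    h, w = pattern.shape
    h, w = h - h % block, w - w % block          # crop to whole blocks
    p = pattern[:h, :w]
    # Reshape so each block becomes one axis pair, then average over it.
    blocks = p.reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, blocks.shape).reshape(h, w)
```

Applying this to both the questioned and the reference pattern before correlating them is what improved the match/mismatch separation in the cited work.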


These filters will also be used in this research to extract the most optimal PRNU pattern. It can be important to determine if two videos posted on the internet share a common origin, as this can give information about the number of cameras involved in the recording of the content. The main challenge is that no flat field reference videos are available for the comparison. The higher amount of detail present in videos could make it more difficult to give an accurate estimation of the PRNU pattern; this is the main reason why flat field references are normally used for the comparison of PRNU patterns from photos and videos. The extra compression used by online platforms could make it even harder to extract the PRNU pattern. Chuang et al. investigated the reliability of different types of frames [10] and found that I-frames were more reliable than P-frames in offline videos. It could be useful to only use a certain type of frame when longer videos are present. This could limit the time needed to analyze the PRNU patterns and could make the storage of frames in a database more efficient.

2. MATERIALS AND METHODS

A total of three different camera models were used in this research; they are presented in Table 1. Five devices of each model were used to determine the difference in correlation between videos recorded with the same camera and videos recorded with a different camera of the same type and brand. The cameras were numbered from one to five. Camera one was used to determine the correlation of movies created with the same camera, and cameras two to five were used to determine the correlation of videos recorded with a different camera. One movie recorded with camera one was used as the questioned video, and all the reference videos were compared to this video. Five reference movies were recorded with each of the five cameras: two inside a room in the same setting as the questioned movie, and three outside, to create a different setting and lighting conditions and determine the impact of different conditions on the comparison of the PRNU patterns. The movies recorded with camera one will be referred to as matches and the movies recorded with cameras two to five as mismatches.

First, the frames were extracted from a video V. This resulted in a set of frames F = 1…N, where N is the total number of frames in a certain interval. The intervals used were 0-30 seconds, 0-60 seconds, and so on up to 0-150 seconds. Three different extractions were performed: the I-frames and the P-frames were analyzed separately, and all the frames in a certain interval were also analyzed together. Different types of frames were analyzed to determine the most optimal setting for the use of PRNU patterns in online videos. I-frames are also known as key frames or intra-coded frames. A video can consist of I-frames followed by P-frames [16]. An I-frame contains all the data needed to decode a specific frame and can be compared with a picture. An I-frame is often followed by P-frames, also known as predicted frames. A P-frame only contains the changes relative to the previous frame. For example, if only a small portion of the frame moves, but the rest of the frame stays the same compared to the previous frame, then only the change is needed to decode the frame; the stable part can be decoded using the previous frame. Therefore, less data is needed for the storage of the frames. This is also illustrated in Figure 1. The different types of frames were extracted using FFMPEG [17], and the P-frames were entirely decoded using FFMPEG.

TABLE 1: CAMERA MODELS USED IN THIS RESEARCH

Device Resolution Frames per second Compression
Nikon Coolpix 1280x720 30 Motion JPEG
Canon PowerShot SX 210 IS 1280x720 29 H.264
Samsung Digimax L70 640x480 30 MPEG-4
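One way to script the FFMPEG frame-type extraction is via its select filter, which exposes the picture type of each frame. The sketch below only builds the command; the file names are placeholders:

```python
import subprocess  # only needed if the command is actually executed

def frame_extract_command(video: str, out_pattern: str, ptype: str = "I") -> list[str]:
    """Build an ffmpeg command that writes only frames of the given picture
    type (I or P) as images, e.g. out_pattern = "frames/i_%04d.png"."""
    return [
        "ffmpeg", "-i", video,
        "-vf", rf"select=eq(pict_type\,{ptype})",  # keep only I- or P-frames
        "-vsync", "vfr",                           # one image per selected frame
        out_pattern,
    ]

cmd = frame_extract_command("questioned.mp4", "frames/i_%04d.png")
# subprocess.run(cmd, check=True)  # run when ffmpeg and the video are available
```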

The next step took all the frames in a specific set F. F was added to PRNUCompare, a tool developed by the Netherlands Forensic Institute for the analysis of PRNU patterns in pictures and videos [18]. This tool is able to extract the average PRNU pattern from all the frames. The first step performed by PRNUCompare is the extraction of the PRNU pattern from each individual frame Fn. A filter was used to obtain the noise present in the individual frames, resulting in a PRNU pattern Pn. The patterns extracted from all the individual frames, Pn, were averaged to create the final PRNU pattern. This step was performed for all the different settings described before.
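The residual-then-average pipeline can be summarized in the following simplified sketch. The simple high-pass residual here is only a stand-in for the real 2nd order and Coiflet filters used by PRNUCompare:

```python
import numpy as np

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Very simplified noise extraction: subtract a 3x3 local average so only
    the high-frequency content, which carries the PRNU noise, remains. The
    actual 2nd order and Coiflet filters are more sophisticated; this is a
    placeholder for the general residual-then-average idea."""
    f = frame.astype(np.float64)
    pad = np.pad(f, 1, mode="edge")
    smooth = sum(pad[i:i + f.shape[0], j:j + f.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return f - smooth

def average_pattern(frames: list[np.ndarray]) -> np.ndarray:
    """Average the per-frame residuals Pn into the final PRNU pattern."""
    residuals = [noise_residual(fr) for fr in frames]
    return np.mean(residuals, axis=0)
```

Averaging suppresses the scene content, which differs from frame to frame, while the camera-specific noise component reinforces itself.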

The extraction of the PRNU pattern can be performed with different filters. Subsequent pixels should have almost identical values, and any variation could be caused by noise; these variations were used to determine the noise in a specific frame. The 2nd order filter was used, as proposed by Gisolf et al. [14]. Another filter that was used is the Coiflet filter; the main difference is that the Coiflet filter extracts the PRNU noise in the wavelet domain [19]. The Coiflet filter was applied to the frames extracted with a total length of 150 seconds to determine if it gives a better result than the 2nd order filter. PRNUCompare can also perform a comparison between different PRNU patterns.

The normalized cross correlation was used for the comparison of the different movies and frames; its values lie between -1 and 1. The correlations of matches and mismatches were compared to determine if all the matches have a higher correlation than the mismatches. For a good separation between matches and mismatches, all the videos recorded with camera one should have a higher correlation than the videos recorded with cameras two to five.
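A minimal version of the normalized cross correlation between two patterns looks as follows; this is the standard textbook definition, not necessarily PRNUCompare's exact implementation:

```python
import numpy as np

def ncc(p1: np.ndarray, p2: np.ndarray) -> float:
    """Normalized cross correlation between two equal-sized PRNU patterns.
    The result lies between -1 and 1; a matching camera pair should score
    higher than a mismatching pair."""
    a = p1.astype(np.float64) - p1.mean()
    b = p2.astype(np.float64) - p2.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```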

The videos were analyzed before they were uploaded to an online platform to determine the impact of the online platforms on the correlation values and the separation between matches and mismatches. Subsequently, they were uploaded to YouTube and Facebook and downloaded again using the Video Downloadhelper plugin installed in Mozilla Firefox [20]. The downloaded videos were analyzed in the same way as the offline videos, and the correlation values of the matches and mismatches were compared to investigate if it is still possible to determine whether two videos share a common origin. The lowest correlation value of a movie recorded with the same camera as the questioned movie (camera one) should be higher than the highest correlation of the movies recorded with the different cameras (two to five).


3. RESULTS

The videos were analyzed before uploading them to the internet to determine if there was a clear separation between matches and mismatches. The types of frames present in those movies differed per camera. The movies recorded with the Nikon Coolpix L27 did not contain different types of frames because of the Motion JPEG codec, but the movies recorded with the Canon PowerShot SX210 IS and the Samsung Digimax L70 contained I-frames and P-frames. The videos were subsequently uploaded to YouTube, and due to the compression used by YouTube the movies recorded with the Nikon camera then also contained I-frames and P-frames. H.264 was used by Facebook and YouTube for the encoding of the videos. The size of the videos decreased after uploading them to the online platforms: for the Nikon camera, the YouTube videos were approximately ten times smaller and the Facebook videos approximately fifteen times smaller; for the Canon camera, the size decreased eight times for YouTube and thirteen times for Facebook; the videos from the Samsung camera decreased four times on YouTube, but only two times on Facebook. It should be noted, however, that YouTube decreased the resolution to 480x360, because the initial resolution used by the Samsung camera was not supported by YouTube.

3.1 NIKON COOLPIX L27

The Motion JPEG codec was used in the movies recorded with the Nikon Coolpix L27, and therefore no different types of frames were present before uploading them to the internet. All frames were entered into PRNUCompare and analyzed using the 2nd order filter. The results showed a good separation between the videos recorded with the same camera and the videos recorded with different cameras. The separation was already visible at 90 seconds and was still growing at 150 seconds, while the correlations of the mismatches were almost stable. The movies recorded in the same setting had higher correlations than the movies created in a different setting; this applied to the movies recorded with camera one, but also to the movies recorded with the mismatching cameras. There was still a good separation between matches and mismatches when the 2nd order filter was used, but this was not the case with the Coiflet filter. The videos were uploaded to YouTube, downloaded again, and the presence of different types of frames was analyzed again. After uploading, the movies contained I-frames and P-frames; the average number of I-frames per 30 seconds was 17 with a standard deviation of 3.8. At first, all frames were analyzed to determine if there was a big difference between before and after uploading the movies to YouTube. The separation between matches and mismatches was no longer present: the lowest correlation of the matches was lower than the highest correlation of the mismatches, even when the first 150 seconds were used. All the mismatch correlations were higher after uploading to YouTube, but the matching movies had a lower correlation. The lowest correlation of a match at 150 seconds was 0.024611 and the highest correlation of a mismatch was 0.079833. The correlation of the matches dropped approximately 1.5 times, while the average correlations of the mismatches increased 3.2 times compared to the correlations before uploading.


The next step was the analysis of only the I-frames and only the P-frames. The graph of the P-frames showed a similar result as when all the frames were used; the graph of the I-frames showed a slightly better separation between matches and mismatches, but still not all the movies created with the matching camera had a higher correlation than the movies recorded with the other cameras. There was a difference in the height of the correlation: the average correlation of the matches at 150 seconds was 0.046215 when only the P-frames were taken into account and 0.01896 for the I-frames. This difference can also be observed in the graphs presented in Appendix A, which represent the average correlation at 30 and 150 seconds. Figure 2 shows the lowest correlation of the matches and the highest correlation of the mismatches for the videos before they were uploaded, and Figure 3 after they were uploaded to YouTube. It can be seen that uploading had a huge impact on the use of the PRNU patterns. Another observation from the graphs is that all the correlation values increased as more frames were taken into account; this applied to the matching movies but also to the mismatches. The analysis of the videos downloaded from Facebook showed similar results as the movies downloaded from YouTube. However, the average number of I-frames was higher, namely 29 per 30 seconds with a standard deviation of 25. The correlation values of the matches decreased by an average of 1.7 times, while the mismatches increased 2.8 times compared to the correlations before uploading to the internet. No distinction could be made between matches and mismatches. There was also no separation between matches and mismatches when the Coiflet filter was used for the comparison of the videos uploaded to YouTube and Facebook. Table 2-3 presents the correlation values of the videos with a length of 150 seconds which were uploaded to Facebook. A clear difference between the two filters was observed, but neither filter showed a good separation between matches and mismatches. The standard deviation of the Coiflet filter was larger than that of the 2nd order filter.

FIGURE 2: LOWEST CORRELATION CAMERA 1 (MATCH) AND HIGHEST CORRELATION CAMERAS 2-5 (MISMATCH), NIKON COOLPIX L27 BEFORE UPLOADING

FIGURE 3: LOWEST CORRELATION CAMERA 1 (MATCH) AND HIGHEST CORRELATION CAMERAS 2-5 (MISMATCH), NIKON COOLPIX L27 YOUTUBE VIDEOS


TABLE 2-3: CORRELATIONS AT 150 SECONDS FACEBOOK VIDEOS NIKON CAMERA

3.2 CANON POWERSHOT SX210 IS

The movies recorded with the Canon cameras showed a clear separation between matches and mismatches. The separation was already visible at 30 seconds when the lowest correlations of the matches and the highest correlations of the mismatches were compared. The movies contained both I-frames and P-frames, and these were also analyzed separately. The reliability was almost equal, but there were fewer I-frames per 30 seconds, which could make them more valuable because less time is needed to create a PRNU pattern from those frames. The movies recorded in the same setting had a higher correlation than the movies recorded in a different setting. The correlation of the matches was approximately ten times higher than the correlation of the mismatches; this was the case for all the frames, the P-frames, and the I-frames. The number of I-frames and P-frames was stable in the movies: the first frame was an I-frame followed by fourteen P-frames, and this pattern was observed throughout the whole movie, so every 30 seconds contained 60 I-frames and 840 P-frames. The number of I-frames decreased after the videos were uploaded to YouTube: the average number of I-frames per 30 seconds was 17 with a standard deviation of 2.6, and in the Facebook videos 26 with a standard deviation of 17.3. The correlation was lower after the videos were uploaded to YouTube. The correlation of the matches decreased by an average of 7.6 times, but the mismatches only decreased 3.2 times compared to the correlation before uploading. However, there was still a separation between movies recorded with the same camera and with different cameras: at 90 seconds, the lowest correlation of the same camera was higher than the highest correlation of a movie recorded with a different camera. The I-frames and P-frames were also analyzed, and the results showed that this separation was no longer present when only the I-frames were used, but it was still present when only the P-frames were used. The results also showed that the movies recorded in the same room had a higher correlation than the movies recorded outside. The separation between matches and mismatches was not present in the movies downloaded from Facebook. The difference can also be seen in Figures 4 and 5: there was a separation in the videos downloaded from YouTube, but it was no longer present in the Facebook videos. The correlation of the matches decreased by an average of 19.6 times, but the mismatches only by 3.9 times compared with the correlation of the movies which were not uploaded to the internet. The Coiflet filter was used for the comparison of the frames present in the first 150 seconds of the movie. The separation between matches and mismatches was smaller when the Coiflet filter was used, but a separation between matches and mismatches was present both before the videos were uploaded and in the YouTube videos.

2nd order Movie 1 Movie 2 Movie 3 Movie 4 Movie 5

Camera 1 0.041475 0.052892 0.046368 0.021763 0.039241

Camera 2 0.049940 0.050147 0.024340 0.028069 0.042189

Camera 3 0.056750 0.054218 0.007566 0.012594 0.027475

Camera 4 0.064625 0.053718 0.018816 0.035767 0.043218

Camera 5 0.067043 0.048937 0.040446 0.020583 0.043005

Coiflet Movie 1 Movie 2 Movie 3 Movie 4 Movie 5

Camera 1 0.167164 0.031108 0.094194 -0.02696 -0.01197

Camera 2 -0.01297 0.090713 0.023713 0.032110 0.016872

Camera 3 -0.06459 0.024731 -0.010460 0.00844 0.028841

Camera 4 0.078214 0.068235 -0.027310 0.012032 0.013665

FIGURE 4: LOWEST CORRELATION CAMERA 1 (MATCH) AND HIGHEST CORRELATION CAMERAS 2-5 (MISMATCH), CANON SX210 IS YOUTUBE VIDEOS

However, no separation was present in the Facebook videos when the Coiflet filter was used. Therefore, this filter did not improve the results compared to the 2nd order filter.

3.3 SAMSUNG DIGIMAX L70

The movies recorded with the Samsung Digimax L70 were analyzed before uploading, and a separation between all the matches and mismatches was visible when the first 90 seconds of the movies were used. The number of I-frames was not stable throughout the movie; the average number of I-frames was 92 with a standard deviation of 5.1. No separation was present when only the I-frames were analyzed, while the separation was present at 90 seconds when only the P-frames were used in the comparison of the PRNU patterns. The movies were uploaded to YouTube and analyzed again. The resolution of the videos changed, because YouTube did not support the initial resolution and downgraded it to 480x360. The average number of I-frames decreased to 19 per 30 seconds with a standard deviation of 7. There was no separation between matches and mismatches: the lowest correlation of the matching videos was lower than the highest correlation of the mismatches. The lowest correlation of a matching movie was 0.006752 and the highest correlation of a mismatch was 0.178807. The difference in correlation between the different types of frames can also be seen in Table 4-6. The correlations of the matches were 2.9 times lower, while the correlations of the mismatches were 2.6 times higher compared to the correlations of the videos before uploading. The use of the Coiflet filter did not improve the results, and still no separation between matches and mismatches was present. The videos uploaded to Facebook were not resized. There, the average number of I-frames decreased to 10 per 30 seconds with a standard deviation of 0.4. There was also no separation between matches and mismatches in the Facebook videos: the correlations of the matches were 4.6 times lower, while the mismatches were 1.6 times higher than the correlations before uploading.

FIGURE 5: LOWEST CORRELATION CAMERA 1 (MATCH) AND HIGHEST CORRELATION CAMERAS 2-5 (MISMATCH), CANON SX210 IS FACEBOOK VIDEOS


TABLE 4-6: CORRELATIONS AT 150 SECONDS YOUTUBE VIDEOS SAMSUNG CAMERA

3.4 EFFECTS OF COMPRESSION ON PRNU PATTERNS

The results showed that uploading the videos to the internet had a huge impact on the use of PRNU patterns. The correlation values decreased for all the matches after uploading, but this was not the case for the mismatches. The PRNU patterns of individual frames were extracted to examine whether, for example, extra artifacts were present after uploading, and whether these could be removed to improve the use of PRNU patterns in online videos. The patterns of individual frames from before uploading, after uploading to YouTube, and after uploading to Facebook were compared. A lot of PRNU noise was present in the frames before uploading; however, the amount of PRNU noise decreased after the videos were uploaded to YouTube and Facebook. The difference in the PRNU pattern can be seen in Figure 6: more noise is present in the top pattern than in the bottom pattern. Multiple frames from all the cameras were examined, and less PRNU noise was present in all the frames extracted from the online videos. An implementation was made of the method suggested by Alles et al. to filter out compression artifacts [12]. However, this did not improve the results; the average correlation values of the matches and mismatches were still almost equal.

All frames
Movie 1 Movie 2 Movie 3 Movie 4 Movie 5
Camera 1 0.178687 0.186094 0.029712 0.006752 0.128688
Camera 2 0.169806 0.16562 0.040031 0.081588 0.074418
Camera 3 0.168875 0.178807 -0.00187 0.013833 0.016616
Camera 4 0.137632 0.151885 0.020625 0.015894 0.021112
Camera 5 0.138966 0.130703 0.020299 0.024445 0.018595

Only I-frames
Movie 1 Movie 2 Movie 3 Movie 4 Movie 5
Camera 1 0.072774 0.073375 0.012899 -0.00160 0.035252
Camera 2 0.058856 0.055284 0.016987 0.030724 0.034390
Camera 3 0.063663 0.072018 0.003414 0.012687 0.009605
Camera 4 0.040497 0.071933 0.018389 0.017381 0.001129
Camera 5 0.036359 0.036938 -0.00013 0.020001 -0.00380

Only P-frames
Movie 1 Movie 2 Movie 3 Movie 4 Movie 5
Camera 1 0.171113 0.176100 0.028334 0.006253 0.120329
Camera 2 0.162555 0.157417 0.039694 0.078921 0.068773
Camera 3 0.163977 0.171366 -0.003870 0.011528 0.013395
Camera 4 0.135339 0.114834 0.018838 0.013879 0.015910
Camera 5 0.132805 0.125203 0.020402 0.023102 0.018037

FIGURE 6: PRNU PATTERN; TOP: BEFORE UPLOADING, BOTTOM: AFTER UPLOADING TO FACEBOOK


4. DISCUSSION

The compression used by the online platforms has a huge impact on the videos. A high level of compression is applied to decrease the storage space needed, and the size of the movies decreases on both YouTube and Facebook. However, this also has an impact on the use of the PRNU patterns, and the effects are different for each camera. For the Canon camera a separation is present at 30 seconds between videos recorded with the same camera and videos recorded with a different camera before uploading, and only the I-frames can be used for the comparison before the videos are uploaded. This is not the case for the Samsung camera: the separation is present at 90 seconds when all the frames are used, but no separation is present when only the I-frames are used. Therefore, it is advised to use all the frames present. The more frames used, the better the separation between matches and mismatches; this is probably caused by the averaging of the frames, as more frames can create a better PRNU pattern. The Coiflet filter was applied to the frames present in the first 150 seconds of the video, but for the Nikon camera no separation is present between the matches and mismatches before the videos are uploaded to the internet, whereas the 2nd order filter gives a good separation between matches and mismatches. The correlation values drop in all the matching movies after uploading the videos to YouTube, but this is not the case for the mismatches. The amount of change varies per camera, which could be due to the height of the correlation before uploading the videos to the internet. The analysis of the different types of frames shows that no single type of frame can be used to create a separation between matches and mismatches in the YouTube videos.

The correlations of the P-frames are higher, but this is caused by the larger number of P-frames present in the videos.

The Canon camera is the only camera which shows a separation between matches and mismatches in the YouTube videos. This could be caused by the initial encoding settings used by the Canon camera, which could be more similar to the YouTube encoding than the encoding used by the other cameras. The separation between matches and mismatches is also larger before the videos are uploaded to the internet. The analysis of individual frames shows that there is a decrease in the PRNU noise present in the frames, and the PRNU noise which remains is not in the same position as before the videos were uploaded.

None of the cameras show a good separation between matches and mismatches in the Facebook videos. The correlation of both matching and mismatching videos increases when a longer part of the movie is used. This is expected for the matching videos, but the correlations of the mismatches should stay stable or decrease; this is the case before uploading, but not after uploading to YouTube and Facebook, where the mismatches also increase when a longer part of the video is taken into account. The PRNU patterns of individual frames show that there is a change in the amount of noise present in the frames after uploading to the internet. The patterns consist mainly of compression artifacts. More artifacts are present in the extracted pattern when more frames are used, which would explain why all the correlations are increasing.


The compression leaves the same artifacts in the movies created with all the cameras, which could explain why the average correlations are approximately the same for the matches and the mismatches.

There are many settings which can be configured in the H.264 encoding. Facebook and YouTube can also apply filters on the videos to obtain the optimal quality, with the consequence that the PRNU noise is filtered out. A deblocking filter is integrated in the H.264 standard to filter out the blocks generated by the discrete integer transform. This filter can be configured when the standard is implemented, and a high deblocking strength could cause the PRNU noise to be filtered out.
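To experiment with the influence of the deblocking strength, a video could be re-encoded with libx264 at a high deblocking setting. The command below is a hypothetical illustration: the file names, CRF value, and deblock strengths are arbitrary choices for experimentation, not the platforms' actual settings.

```python
import subprocess  # only needed if the command is actually executed

def reencode_command(src: str, dst: str, deblock: str = "6,6") -> list[str]:
    """Build an ffmpeg command that re-encodes a video with H.264 using a
    strong in-loop deblocking filter, to mimic platform-style compression."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-crf", "28",                          # heavy lossy compression
        "-x264-params", f"deblock={deblock}",  # aggressive deblocking (assumed syntax)
        dst,
    ]

cmd = reencode_command("original.mp4", "reencoded.mp4")
# subprocess.run(cmd, check=True)  # requires ffmpeg built with libx264
```

Comparing PRNU correlations before and after such a re-encode would give an indication of how much of the effect is attributable to deblocking alone.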

More research should be performed to determine the exact settings used by Facebook and YouTube when processing the videos, and whether other measures could be used for camera identification purposes. A study has been performed on JPEG images and the effect of uploading them to Facebook [21]. It would be useful to perform a similar study on videos to determine whether other data is present that can be used for camera identification.

5. CONCLUSION

This research focused on whether PRNU patterns are a reliable feature for determining the number of cameras used in the recording of videos after the videos are uploaded to the internet. The results show a clear separation between videos recorded with the same camera and videos recorded with a different camera of the same type and brand before they are uploaded to the internet, except when the Coiflet filter is used for the Nikon camera. The Canon camera was the only camera that showed a separation between matches and mismatches in the YouTube videos. None of the cameras showed a good result after the videos were uploaded to the internet.

Different types of frames were tested and different filters were used. Examination of individual frames showed that less PRNU noise is present after the videos are uploaded, and that the extracted pattern mainly consists of sharp edges in the pictures, such as lines from objects, and compression artifacts. As a result, no reliable PRNU pattern can be extracted to determine whether videos were recorded with the same camera or with a different camera after the videos are uploaded to social media. Further research should therefore be performed to determine whether other features remain that can distinguish videos recorded with the same camera from videos recorded with a different camera after uploading.

ACKNOWLEDGEMENTS

I would like to thank the Netherlands Forensic Institute for giving me the opportunity to conduct this research. A special thanks to Zeno Geradts for the supervision during the project and the department of Digital Technology and Biometrics for their help during this project.


REFERENCES

[1] [Online]. Available: http://www.cipa.jp/index_e.html. [Accessed 24-8-2015].

[2] "Using extended file information (EXIF) file headers in digital evidence analysis," International Journal, vol. 2, no. 3, pp. 1-5, 2004.

[3] M. Boutell and J. Luo, "Photo Classification by Integrating Image Content and Camera Metadata," Proceedings of the 17th International Conference on Pattern Recognition, vol. 4, pp. 901-904, 2004.

[4] J. Tešić, "Metadata practices for consumer photos," IEEE Multimedia, vol. 12, no. 3, pp. 86-92, 2005.

[5] Z. Geradts, J. Bijhold, M. Kieft, K. Kurosawa and K. Kuroki, "Methods for Identification of images acquired with digital cameras," Proc. of SPIE, Enabling Technologies for Law Enforcement and Security, vol. 5232, pp. 505-512, 2001.

[6] J. Lukáš, J. Fridrich and M. Goljan, "Determining digital image origin using sensor imperfections," Electronic Imaging, pp. 249-260, 2005.

[7] J. Fridrich, M. Chen, M. Goljan and J. Lukáš, "Source digital camcorder identification using sensor photo response non-uniformity," Electronic Imaging, 2007.

[8] M. Goljan and J. Fridrich, "Camera Identification from Cropped and Scaled Images," Proc. SPIE, Electronic Imaging, Forensics, Security, Steganography, and Watermarking of Multimedia Contents X, pp. OE-1-OE-30, 2008.

[9] Z. Geradts and W. van Houten, "Using sensor noise to identify low resolution compressed videos," International Workshop on Computational Forensics, pp. 104-115, 2009.

[10] W.-H. Chuang, H. Su and M. Wu, "Exploring compression effects for improved source camera identification using strongly compressed video," 2011 18th IEEE International Conference on Image Processing, pp. 1953-1956, 2011.

[11] Y. Scheelen and J. v. d. Lelie, "Camera Identification on YouTube," 2012.

[12] E. J. Alles, Z. Geradts and C. Veenman, "Source camera identification for heavily JPEG compressed low resolution still images," Journal of Forensic Sciences, vol. 54, no. 3, pp. 628-638, 2009.

[13] Y. Wiseman, "The still image lossy compression standard - JPEG," Encyclopedia of Information Science and Technology, 2014.

[14] F. Gisolf, A. Malgoezar, T. Baar and Z. Geradts, "Improving source camera identification using a simplified total variation based noise removal algorithm," Digital Investigation, vol. 10, pp. 207-214, 2013.

[15] B.-B. Liu, X. Wei and J. Yan, "Enhancing sensor pattern noise for source camera identification: an empirical evaluation," ACM Workshop on Information Hiding and Multimedia Security, 2015.

[16] B. Juurlink, M. Alvarez-Mesa, C. C. Chi, A. Azevedo, C. Meenderinck and A. Ramirez, Scalable Parallel Programming Applied to H.264/AVC Decoding, New York: Springer-Verlag, 2012.

[17] FFMPEG, "FFMPEG," [Online]. Available: https://www.ffmpeg.org/. [Accessed 18-06-2015].

[18] Netherlands Forensic Institute, "PRNU Compare LE," 08-2012. [Online]. Available: http://academy.forensicinstitute.nl/uploads/PRNU%20Compare%20LE%20factsheet.pdf. [Accessed 07-07-2015].

[19] J. Lukáš, J. Fridrich and M. Goljan, "Digital Camera Identification from Sensor Pattern Noise," IEEE Transactions on Information Forensics and Security, pp. 205-214, 2006.

[20] DownloadHelper, "Welcome to DownloadHelper," [Online]. Available: http://www.downloadhelper.net/index.php. [Accessed 07-07-2015].

[21] M. Moltisanti, A. Paratore, S. Battiato and L. Saravo, "Image manipulation on Facebook for forensic evidence," International Conference on Image Analysis and Processing, 2015.

[22] C.-C. Wang, C.-W. Tung, H.-C. Wang and R.-H. Chiou, "Efficient algorithm for early detection all-zero DCT blocks in h.264 video encoding," Proceedings IEEE Circuits and Systems Society, vol. 15, no. 6, pp. 748-788, 2005.

[23] Trudef, "TRUDEF Intra Frame," 2015. [Online]. Available: http://www.tmmi.us/trudef-intraframe. [Accessed 15-9-2015].


APPENDIX A: CORRELATIONS NIKON COOLPIX L27

[Figure: Normalized cross-correlation, 30 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only I-Frames. Series: Match and Mismatch for YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds Only I-Frames. Series: Match and Mismatch for YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only P-Frames. Series: Match and Mismatch for YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds Only P-Frames. Series: Match and Mismatch for YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only I-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only P-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

APPENDIX B: CORRELATIONS CANON POWERSHOT SX210 IS

[Figure: Normalized cross-correlation, 150 Seconds Only I-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds Only P-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds Only P-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only P-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

APPENDIX C: CORRELATION SAMSUNG DIGIMAX L70

[Figure: Normalized cross-correlation, 150 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds All Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 30 Seconds Only I-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]

[Figure: Normalized cross-correlation, 150 Seconds Only I-Frames. Series: Match and Mismatch for before uploading, YouTube and Facebook]
