
Forensic Evidencing for Green Forensic Studies: Stable Isotope Analysis – a review



Faculty of Science

MSc Chemistry

Analytical Sciences

Literature Thesis

Forensic Evidencing for Green Forensic Studies

Stable Isotope Analysis – a review

by

Liz Leenders

Supervisors:

dr. E. de Rijke
dr. L.J. de Koning


Summary

Consumers are more concerned with the authenticity of their food than they were a decade ago. For this reason, the determination of the geographical origin of food products has received growing attention over the past years, as can be seen from the rising number of published research articles that aim to determine the geographical origin of food products using methods such as chemical profiling and isotope ratio analysis combined with chemometrics. In stable isotope analysis, the parameters most used are the isotope ratios of light elements such as hydrogen, carbon, nitrogen, oxygen and sulphur, mostly in combination with isotope ratios of heavier elements and with elemental analysis. Isotope ratio mass spectrometry is the most commonly used technique in stable isotope analysis, but other spectrometric and spectroscopic methods, such as nuclear magnetic resonance, are being used increasingly in the food industry.

This review article focuses on applications of stable isotope analysis and the chemometric methods used to determine the geographical origin of food products worldwide in the period 2010 to 2016. Isotope fractionation processes are described, and an overview of the most frequently used analytical techniques and chemometric methods is given, including validation tools for these methods. In total, 45 publications are reviewed, divided into five categories: wine, honey, oil and vinegar, fruits and vegetables, and dairy. Detailed information on each publication is provided, including the number of samples analysed, the parameters measured, and the analytical techniques and statistical methods used.

Samenvatting

Determining the geographical origin of a wide range of food products has grown in popularity in countries all over the world in recent years, because consumers show more interest in the authenticity of the food products they consume. The number of publications on the origin of food products has risen over the last years, using methods such as chemical profiling and isotope ratio analysis combined with chemometrics. The indicators most used in stable isotope ratio analysis are the isotopes of the light elements of the periodic table, such as hydrogen, carbon, nitrogen, oxygen and sulphur. Analysis of these isotopes is often combined with analysis of isotopes of heavier elements and with elemental analysis. Isotope ratio mass spectrometry is the most commonly used technique in stable isotope analysis, but other spectrometric and spectroscopic methods, such as nuclear magnetic resonance, are on the rise in the food industry.

This literature thesis focuses on the use of stable isotope analysis and various statistical methods to determine the geographical origin of food products. The publication period considered is 2010 to 2016, with publications originating from countries all over the world. Several isotope fractionation processes are described, and information on the most frequently used analytical and statistical techniques is given, including methods to validate these techniques. In total, 45 publications are discussed, divided into five categories: wine, honey, oil and vinegar, fruit and vegetables, and dairy products. A detailed overview is given, including the number of samples, the parameters measured, and the analytical techniques and statistical methods used.


List of abbreviations

MS mass spectrometry

NMR nuclear magnetic resonance

CRDS cavity ring down spectroscopy

IRMS isotope ratio mass spectrometry

SNIF-NMR site-specific natural isotope fractionation nuclear magnetic resonance

PCA principal component analysis

LDA linear discriminant analysis

(M)ANOVA (multivariate) analysis of variance

(H)CA (hierarchical) cluster analysis

IAEA International Atomic Energy Agency

VSMOW Vienna Standard Mean Ocean Water

SLAP standard light Antarctic precipitation

VPDB Vienna Pee Dee Belemnite

CDT Cañon Diablo Troilite

TIMS thermal ionization mass spectrometry

ICP-MS inductively coupled plasma mass spectrometry

CSIA compound-specific isotope analysis

BSIA bulk stable isotope analysis

EA elemental analyser

CF continuous flow

TC thermal conversion

GC gas chromatography

FC Faraday Cup

AES atomic emission spectroscopy/Auger electron spectroscopy

XRF X-ray fluorescence

XPS X-ray photoelectron spectroscopy

PIXE particle-induced X-ray emission

AAS atomic absorption spectroscopy

(N)IR (near) infrared spectroscopy

AOAC Association of Official Analytical Chemists


LOD limit of detection

LOQ limit of quantitation

PLS(-DA) partial least squares (discriminant analysis)

FDA flexible discriminant analysis

(M)CDA (multi-)criteria decision analysis

C&RT/CART classification and regression tree

BP-ANN back propagation artificial neural networks

CPANN counter propagation artificial neural networks

kNN k-nearest neighbours

DFA discriminant function analysis

SIMCA soft independent modelling of class analogies

CCA canonical correlation analysis


Table of Contents

Summary

List of abbreviations

Introduction

1. Isotope Ratios
1.1 Natural abundance isotope ratios
1.2 Delta notation

2. Isotope Fractionation
2.1 Equilibrium isotope fractionation
2.2 Kinetic isotope fractionation
2.3 Mass-independent and transient kinetic isotope fractionation

3. Stable isotope analysis techniques
3.1 Mass spectrometric techniques for stable isotope analysis
3.1.1 Bulk stable isotope analysis (BSIA) and compound-specific isotope analysis (CSIA)
3.1.2 Isotope ratio mass spectrometry (IRMS)
3.2 Elemental analysis
3.3 Spectroscopic techniques for stable isotope analysis
3.3.1 Site-specific natural isotope fractionation-nuclear magnetic resonance (SNIF-NMR)
3.3.2 Cavity ring down spectroscopy (CRDS)
3.4 Validation of the analytical method

4. Multivariate statistics
4.1 Principal component analysis (PCA)
4.2 Cluster analysis (CA)
4.3 Linear discriminant analysis (LDA)
4.4 Analysis of variance (ANOVA)
4.5 Hypothesis testing
4.6 Validation of the chemometric method

5. Food products
5.1 Wine
5.2 Honey
5.3 Oil and vinegar
5.4 Fruit and vegetables
5.5 Dairy products

Conclusions


Introduction

The aim of this thesis is to present an overview of stable isotope analysis methods and statistical methods applied in studies involving the geographical origin of plant-based products. Topics of special interest include processes responsible for isotope fractionation in nature, statistical classification methods and validation of the analytical and statistical model.

Food authenticity

In the food retail business worldwide, the competition between brands is increasing. Consumers want high-quality products, preferably at prices as low as possible. Monitoring food quality in combination with protection of the consumer can be summarized in one keyword, authenticity, which can be defined as ‘having the origin supported by unquestionable evidence’ 1. In other words, food authenticity means that a food product matches its description. Fraud in the food retail business appears everywhere and in many forms, such as the addition of flavours to a product without declaring it on the package, dilution of the product with water to sell more volume for the same production price, or the addition of additives to increase the bulk of the product. All of these practices aim to maximize profit while providing the consumer with a false description, so food control systems must be used to protect food authenticity. Food authenticity is also necessary because non-authentic products can cause health issues or affect the confidence of the consumer if the food product is sold as a legitimate product 2,3.

In order to gain more insight into the authenticity, the geographical origin or the ingredients of food products, analytical techniques should be developed that are able to assess plant-based products for their authenticity. An emerging theme in authentication studies is the establishment of a database of specific original products, against which a test sample can be compared to verify its authenticity. Such a database contains isotope ratios of plant-based products, expressed in the delta notation; the isotopic variations captured in this notation, together with elemental concentrations, can be used to verify the geographical origin of the product 4. Stable isotopes and their ratios expressed in the delta notation are explained in Chapter 1.

Stable isotopes

All plant-based products consist of chemical compounds, which contain elements composed of different isotopes. Isotopes are forms of an element that have the same atomic number but a different atomic mass. The elements hydrogen, carbon, nitrogen, oxygen and sulphur have two or more stable isotopes as they occur in nature. These isotopes have the same electronic structure and therefore similar chemical reactivity. Despite these similarities, isotopes show different natural abundances because of fractionation processes that occur in nature 2. These fractionation processes take place, for example, during photosynthesis and other biosynthetic processes in plant-based products, and can be used in food authenticity studies. Processes responsible for isotope fractionation in plant-based products are discussed in Chapter 2.

The stable isotope ratio of food products varies from place to place worldwide, but within a single location the isotopic composition varies over such a limited range that it can be considered constant. It is therefore referred to as the isotopic fingerprint of plant-based material 1. This isotopic fingerprint can be used to trace the geographical origin of food products, and it can be measured using analytical techniques such as nuclear magnetic resonance (NMR), cavity ring down spectroscopy (CRDS) or isotope ratio mass spectrometry (IRMS). Isotope ratio analysis of food products is often combined with multi-elemental analysis in order to obtain as much information as possible.


Of these techniques, IRMS has been widely used because of its high precision, its ability to measure small samples, and its ability to measure both liquid and gas samples when coupled to liquid or gas chromatography 2. IRMS is a specialization of mass spectrometry in which the relative abundances of isotopes in a sample are measured. These relative abundances can be used to create the isotopic fingerprint, which in turn can be linked to the geographical origin of the sample. Both compound-specific (IRMS coupled to gas or liquid chromatography) and bulk (IRMS coupled to elemental analysis) methods will be discussed. Other new and promising techniques, such as cavity ring down spectroscopy (CRDS) and site-specific natural isotope fractionation nuclear magnetic resonance (SNIF-NMR), will be discussed briefly. To check whether an analytical method approaches reality, the method needs to be validated; in the last part of that chapter, some validation methods are described.

The isotopic fingerprint of a food product needs to be correlated to its geographical origin, which is done by multivariate statistics, such as principal component analysis (PCA), linear discriminant analysis (LDA), analysis of variance (ANOVA) and cluster analysis (CA), often in combination with hypothesis testing techniques. PCA is a technique in which the dimensionality of a dataset is reduced. This is done by creating matrices called scores and loadings; one pair of scores and loadings is called a principal component, and each principal component describes part of the variation in the data. In this way the model becomes less complex while still retaining as much of the information present in the original dataset as possible. PCA is closely related to LDA, which is a method that characterizes or separates two or more datasets by taking the relationships between variables in each dataset into account. This is done by calculating distances in the variable space in such a way that the variance within datasets is small, but the variance between datasets is large. ANOVA is a technique closely related to LDA; this method differentiates between the means of datasets and the variation between them. The last one, CA, is a technique which groups variables based on their (dis)similarities 5. In Chapter 4, a more detailed description of each technique is given, including their applications. As in the case of an analytical method, a multivariate statistical method must approach reality; in the last part of that chapter, some validation methods are described.

Finally, in Chapter 5 a detailed overview is given of 45 recent studies on food authenticity from the period 2010-2016, including the analytical techniques and multivariate statistics used, based on the information given in Chapters 1 to 4.


1. Isotope Ratios

Isotopes are atoms of the same element with a different mass number, which means they have a different number of neutrons in their nucleus. In the periodic table, 34 chemical elements have no stable isotopes, 20 elements have one stable isotope and 61 elements have two or more stable isotopes 3.

1.1 Natural abundance isotope ratios

The abundance of an isotope as it occurs in nature is called its natural abundance. A chemical element with two or more isotopes has a dominant light isotope and one or more heavy ones, each with a natural abundance of at most a few percent. Table 1 shows the relative abundances of naturally occurring isotopes commonly analysed by spectrometric or spectroscopic methods 2,3. As shown in the table, for the elements hydrogen, carbon, nitrogen, oxygen and sulphur the light isotopes are by far the most abundant; for heavier elements the ratio between light and heavy isotopes is more balanced 3.

Table 1: Relative abundances of naturally occurring isotopes. Reproduced from ref. [2,3].

| Element | Stable isotope | Mean natural abundance (%) |
|---|---|---|
| Hydrogen | 1H | 99.99885 |
| Hydrogen | 2H | 0.0115 |
| Carbon | 12C | 98.893 |
| Carbon | 13C | 1.107 |
| Nitrogen | 14N | 99.636 |
| Nitrogen | 15N | 0.364 |
| Oxygen | 16O | 99.757 |
| Oxygen | 17O | 0.038 |
| Oxygen | 18O | 0.205 |
| Sulphur | 32S | 94.99 |
| Sulphur | 33S | 0.75 |
| Sulphur | 34S | 4.25 |
| Sulphur | 36S | 0.01 |

The mean isotopic natural abundances were approximately fixed at the time the Earth was formed 6. The natural abundances of isotopes vary throughout the universe, and also from place to place on Earth. On short timescales, however, these abundances remain more or less constant, which is the reason why natural abundance isotope ratios can be used for the geographical origin determination of plant-based products 3.

During transport or transformation processes of elements, and during phase transfer, isotopic fractionation takes place. These fractionation effects are caused by small mass differences due to isotope substitution 3. A detailed overview of these isotopic variations is given in Chapter 2. In order to explore these small variations in the natural abundance of isotopes, which are shown in Table 2, very precise measurement techniques are necessary 7.


Table 2: Observed ranges of natural abundance variations on earth. Reproduced from ref. [3].

| Element | Stable isotope | Observed range of natural variation (isotope-amount fraction) |
|---|---|---|
| Hydrogen | 1H | 0.999816-0.999974 |
| Hydrogen | 2H | 0.000026-0.000184 |
| Carbon | 12C | 0.98853-0.99037 |
| Carbon | 13C | 0.00963-0.01147 |
| Nitrogen | 14N | 0.99579-0.99654 |
| Nitrogen | 15N | 0.00346-0.00421 |
| Oxygen | 16O | 0.99738-0.99776 |
| Oxygen | 18O | 0.00188-0.00222 |
| Sulphur | 32S | 0.94454-0.95281 |
| Sulphur | 34S | 0.03976-0.04734 |

Natural variations in the isotopic composition of a plant-based product are used to trace the geographical origin of the product, and to understand environmental processes which change these variations. The natural variation is expressed in a delta notation, which is applied worldwide in stable isotope ratio analysis.

1.2 Delta notation

Stable isotope ratio analysis requires a very accurate way to measure differences in isotope ratios, because these differences are normally in the order of a few per mill. Measuring these differences is accomplished by calculating the ratio between the heavy and light isotopes of the product, followed by a comparison of this ratio to a standard. This calculation is called the delta notation, which can be expressed by the formula:

\[ \delta = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000 \]

in which δ is the isotope ratio of the plant-based product (in ‰) compared to the standard, and $R_{\mathrm{sample}}$ and $R_{\mathrm{standard}}$ are the isotope ratios of the product and the standard, respectively 4.
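As a brief worked example of this formula (with illustrative numbers rather than measured values), consider a plant-derived sample whose 13C/12C ratio lies slightly below that of the VPDB standard:

```latex
% Worked example of the delta notation; the ratios below are illustrative,
% not measured values. R_standard is taken as approximately the 13C/12C
% ratio of the VPDB reference.
\[
  \delta^{13}\mathrm{C}
    = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right) \times 1000
    = \left(\frac{0.010920}{0.011200} - 1\right) \times 1000
    = -25\ \text{per mill}
\]
% A value around -25 per mill is typical of material from C3 plants, which are
% depleted in 13C relative to the VPDB standard.
```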

Differences in isotope ratios are very small, which is why delta values are given in parts per mill; this allows comparison of the isotope ratios of products at the natural abundance level. For heavier elements, fractionation processes are insignificant compared to the original isotope ratios, but the stable isotope ratios can still be used to identify the geographical origin of a product 8.

The isotope ratio of a product is calculated relative to a standard. For each element a different standard is selected 9. The International Atomic Energy Agency (IAEA) has made recommendations for these standards, in order to avoid confusion. Their recommendations are as follows 10:

(I) Hydrogen isotopic ratios are measured relative to Vienna Standard Mean Ocean Water (VSMOW). These ratios need to be normalized in such a way that the 2H/1H ratio of Standard Light Antarctic Precipitation (SLAP) is 0.572 times that of VSMOW.

(II) Carbon isotopic ratios are measured relative to Vienna Pee Dee Belemnite (VPDB). These ratios need to be normalized by assigning the agreed δ13C value of +1.95 ‰ to the carbonate reference material NBS 19.

(III) Oxygen isotopic ratios are measured relative to either VSMOW or VPDB. These ratios need to be normalized in such a way that the 18O/16O ratio of SLAP is 0.9445 times that of VSMOW.

Hydrogen, carbon and oxygen are the most common elements in nature; other elements are measured against other standards. Nitrogen and sulphur are the next two most commonly analysed elements, and the standards for these two elements are therefore also listed:

(I) Nitrogen isotopic ratios are measured relative to nitrogen in ambient air (AIR) 11, and (II) sulphur isotopic ratios are measured relative to Cañon Diablo Troilite (CDT).

2. Isotope Fractionation

Any process in which the relative natural abundance of the stable isotopes of an element changes is called isotope fractionation. Isotope fractionation can be described by the isotopic fractionation factor α 11:

\[ \alpha_{A\text{-}B} = \frac{R_A}{R_B} \]

In this formula, the distribution of the stable isotopes between two substances A and B is expressed. R is the ratio between the heavy and light isotope, for instance 2H/1H, and the values of α are generally very close to 1 11.
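Because every R can be written as the standard ratio multiplied by (1 + δ/1000), the fractionation factor can also be expressed directly in terms of delta values. This standard relation (added here for clarity; it is not written out in the text) reproduces the condensation example given later in paragraph 2.1:

```latex
% Relation between the fractionation factor and delta values (standard result,
% shown here for clarity; the numerical delta values are illustrative).
\[
  \alpha_{A\text{-}B} = \frac{R_A}{R_B} = \frac{1000 + \delta_A}{1000 + \delta_B}
\]
% Example: liquid water with delta-18O = 0 per mill in equilibrium with vapour
% of delta-18O = -9.7 per mill gives alpha = 1000/990.3 = 1.0098, the value
% quoted in paragraph 2.1 for condensation at 20 degrees Celsius.
\[
  \alpha_{\mathrm{liquid\text{-}vapour}} = \frac{1000 + 0}{1000 - 9.7} \approx 1.0098
\]
```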

Table 3 shows an overview of fractionation processes occurring in nature that affect the isotope ratios of compounds of light elements 4. These fractionation processes can be roughly divided into four categories, which are discussed in this chapter.

Table 3: Overview of fractionation processes in the environment. Reproduced from ref. [4].

| Isotope ratio | Fractionation | Information |
|---|---|---|
| 2H/1H | Evaporation, condensation, precipitation | Geographical |
| 13C/12C | C3 and C4 plants | Diet (geographical proxy) |
| 15N/14N | Trophic level, marine and terrestrial plants, agricultural practice | Diet (geographical proxy) |
| 18O/16O | Evaporation, condensation, precipitation | Geographical |
| 34S/32S | Bacterial | Geographical (marine) |

2.1 Equilibrium isotope fractionation

If isotopes are distributed between two or more substances that are in chemical equilibrium, this fractionation process is called equilibrium isotope fractionation. This type of fractionation is strongest at low temperatures, and together with kinetic isotope fractionation it forms the basis of the most widely used isotope ratios: 2H/1H and 18O/16O 13. Equilibrium fractionation can be observed for many elements, from hydrogen (2H/1H) to uranium (238U/235U), but overall the lighter elements are more sensitive to equilibrium fractionation, due to the large relative mass differences between their heavy and light isotopes 3. These elements, such as hydrogen, carbon, nitrogen, oxygen and sulphur, have isotopes that can be separated better than those of heavier elements, as explained in Chapter 1.

Equilibrium fractionation can be caused by a reduction in vibrational energies, which occurs when a light isotope is substituted by a heavier one. Compounds with a higher bond force constant between their elements are less prone to isotope substitution; in other words, the heavy isotope concentration in a compound with small vibrational energy will be higher 14.

Figure 1 gives an overview of hydrogen isotope fractionation processes in the hydrological cycle. In this figure, equilibrium fractionation processes listed in Table 3 are shown, such as evaporation or condensation.


Figure 1: Hydrogen isotope fractionation in the hydrological cycle. Isotope fractionations are calculated as difference between product and source water in each process, at typical temperature and humidity conditions. Reproduced from ref. [15].

Evaporation leads to differences in the concentration of the heavy oxygen isotope (18O) in liquid water compared to water vapour 16. Condensation of water vapour is an equilibrium fractionation process, in which the heavy isotopologues (¹H₂¹⁸O) appear preferentially in the liquid phase, while the light isotopologues (¹H₂¹⁶O) appear preferentially in the vapour phase, as shown in this equilibrium equation:

¹H₂¹⁶O (l) + ¹H₂¹⁸O (g) ⇌ ¹H₂¹⁸O (l) + ¹H₂¹⁶O (g)

If the outside temperature is 20 °C, α for this reaction is 1.0098, which is close to 1 11.

Equilibrium isotope fractionation, which is a fractionation process that depends on the mass of the compounds, is, together with kinetic isotope fractionation, the most important isotope fractionation process.

2.2 Kinetic isotope fractionation

If stable isotopes are separated from each other by their mass in a process which is not in equilibrium, this is called kinetic fractionation. Kinetic fractionation is caused by differences in bond strength between the heavy and light isotopes, which lead to a difference in reaction rate for the bonds. Examples of kinetic isotope fractionation are all biological processes, because these processes are generally not in equilibrium. Organisms in nature prefer lighter isotopes; the energy cost is lower, because the elements form weaker bonds. This leads to a fractionation between the substrate, which becomes enriched in the heavier isotope, and the biologically formed product, which becomes enriched in the lighter isotope 17.

An example of a kinetic fractionation process is photosynthesis. Plants prefer to take up the light isotope of carbon (12C) during photosynthesis, while assimilating carbon dioxide (CO2) molecules from the atmosphere. This process explains why plant-based products and fossil fuels, which are derivatives of plants, are typically depleted in the heavy carbon isotope (13C) relative to atmospheric carbon 13.


Evaporation can also be an example of a kinetic isotope fractionation process. If seawater evaporates to form a cloud and parts of the transport are not in equilibrium, such as evaporation into dry air, this is an example of kinetic fractionation. Light molecules (¹H₂¹⁶O) evaporate more easily than heavier molecules (¹H₂¹⁸O, ²H₂O). The difference between light and heavy molecules will be greater in kinetic fractionation than it would be in equilibrium fractionation 16. During this kinetic fractionation process the isotopes of oxygen are fractionated: the 16O isotopes preferentially end up in the clouds, while the 18O isotopes remain in the seawater, as shown in Figure 2. As explained in paragraph 2.1, equilibrium fractionation reduces the amount of the heavy oxygen isotope (18O) in vapour by about 1 percent compared to liquid water, while kinetic fractionation reduces the heavy oxygen isotope in vapour by about 1.5 percent. Condensation is a process which almost always occurs as an equilibrium process. Cloud droplets are less enriched in 18O by condensation than vapour is depleted in 18O by evaporation 18. This is one of the reasons why rainwater is normally isotopically lighter than seawater.

Figure 2: Oxygen (16O and 18O) kinetic isotope fractionation. Reproduced from ref. [18].

In kinetic fractionation, the heavy hydrogen isotope in water (2H) is less sensitive to fractionation than the heavy oxygen isotope, when compared to their behaviour in equilibrium fractionation. This explains why kinetic fractionation does not deplete 2H as strongly as 18O, which leads to a relatively large amount of 2H in vapour and rainwater when compared to seawater 19.

2.3 Mass-independent and transient kinetic isotope fractionation

There are two other fractionation processes that are less common: mass-independent and transient kinetic isotope fractionation. Mass-independent fractionation can be any chemical or physical process in which isotopes are separated to an extent that is not in proportion to the differences in their masses 20. These processes are not very common; they only occur in photochemical or spin-forbidden reactions. This type of fractionation can be used to study and trace photochemical reactions in nature, and it is usually found in the isotopes of oxygen or sulphur 20.

Transient kinetic isotope fractionation is a process in which the fractionation does not follow first-order kinetics. The isotope fractionation effects can therefore not be described using the equilibrium fractionation equations used in classical equilibrium cases, or using steady-state kinetics 21. In the case of transient kinetic isotope fractionation, the processes can be described using general equations for biochemical isotope kinetics or fractionation 19.


3. Stable isotope analysis techniques

The isotope analysis of plant-based products is generally done using mass spectrometric or spectroscopic methods. These methods can detect large numbers of compounds with low molecular mass in complex samples, and therefore they play an important role in the chemical analysis of food products 22. This chapter covers these two families of techniques, mass spectrometry (MS) and spectroscopic methods, used to characterize isotope ratios in plant-based products. These techniques are often combined with elemental analysis, which is also covered in this chapter.

3.1 Mass spectrometric techniques for stable isotope analysis

The most commonly used technique for stable isotope analysis, for both heavy and light elements, is mass spectrometry (MS). Measuring isotope ratios at natural abundance level is a very demanding process, and therefore an extremely precise mass spectrometer is necessary 22. High precision in mass spectrometry can be achieved by making use of a magnetic sector field instrument containing Faraday Cups. Faraday Cups are needed in isotope ratio analysis because they are able to detect ion currents from different mass-to-charge ratios at the same time. In this review we mainly focus on the light elements hydrogen (2H/1H), carbon (13C/12C), nitrogen (15N/14N), oxygen (18O/16O) and sulphur (34S/32S), which represent the most common elements in plant-based products 3. Isotopes of these elements can be measured using isotope ratio mass spectrometry (IRMS). For isotope ratio analysis of heavy isotopes, it is more advisable to use other techniques, such as inductively coupled plasma mass spectrometry (ICP-MS) and thermal ionization mass spectrometry (TIMS). ICP-MS uses an inductively coupled plasma to ionize a sample before a mass spectrometer separates and detects the isotopes. TIMS starts with thermal ionization of a solid sample, before the ions are transferred to the mass spectrometer to separate and detect the isotopes. Describing all techniques in detail is beyond the scope of this review; therefore the focus will be on IRMS. IRMS measures stable isotope abundances, and stable isotope analysis can be divided into bulk and compound-specific stable isotope analysis (BSIA and CSIA), which is explained in paragraph 3.1.1. In paragraph 3.1.2 a detailed overview of IRMS is given.

3.1.1 Bulk stable isotope analysis (BSIA) and compound-specific isotope analysis (CSIA)

In Figure 3, an overview of stable isotope analysis techniques is given. A classification can be made according to the number of compounds analysed: either bulk samples or individual molecules. Bulk stable isotope analysis (BSIA) is a technique in which the mixture is homogenized first, before the average isotope ratio of the bulk sample is obtained. In the case of compound-specific isotope analysis (CSIA), individual molecules are separated by chromatography in order to obtain their isotope ratios.


Figure 3: Differentiation between different techniques used for stable isotope analysis by IRMS. Reproduced from ref. [3].

Figure 4 shows the difference between BSIA (analysis by an elemental analyser coupled to isotope ratio mass spectrometry (EA-IRMS)) and CSIA (using gas chromatography or flow injection analysis coupled to isotope ratio mass spectrometry (GC- or FIA-IRMS)). In BSIA, an elemental analyser (EA) transforms the molecules into gases with low molecular weights, such as CO2, N2, CO, SO2 and H2. The elemental analyser uses either continuous flow (CF/EA) or thermal conversion (TC/EA). The transformation is then followed by a separation of the gases on a wide-bore GC column before detection by IRMS. BSIA allows isotope analysis of multiple elements, but only one analysis of the whole sample at a time. CSIA is able to differentiate the isotope ratios of individual molecules, by separating the molecules in complex mixtures by GC, followed by introduction of the separated molecules into the IRMS. The disadvantage of CSIA is the fact that only one element can be measured at a time for the separated molecules 3.

Figure 4: Comparison of bulk and compound-specific isotope ratio analysis. Reproduced from ref. [3].

CSIA is the most commonly used method for stable isotope ratio analysis, because it is an easy and efficient method, with the possibility of automated on-line sample preparation and injection. It is also capable of obtaining data for all molecules in a sample in one run. The last advantage is the fact that the sample sizes in CSIA are much smaller than in BSIA 3.


3.1.2 Isotope ratio mass spectrometry (IRMS)

Sector mass spectrometers used for IRMS are instruments able to determine isotopic abundances in a very precise way. The first sector MS was developed in the 1940s by Alfred Nier 23. He created an instrument that was able to isolate uranium-235, and with this isolation he demonstrated that uranium-235 undergoes nuclear fission. Though the mass spectrometer has been upgraded since Nier's invention, its fundamental design has not changed 24. In stable isotope ratio analysis, the sample is transformed into a gas, as explained in paragraph 3.1.1. This gas has a specific isotope pattern for every original sample. After the transformation to the gas phase, the sample is injected into the ion source of the IRMS instrument 25. The isotopic natural abundance of the gas is determined, which is then compared to a reference sample or a standard, as explained in paragraph 1.2.

IRMS can give very precise compound-specific stable isotope analysis at natural abundance level when coupled to a gas chromatography (GC) system. This coupling is done by a combustion interface (Figure 5), because GC cannot be directly coupled to IRMS.

Figure 5: Set-up of an IRMS coupled to a GC via a combustion interface to measure 13C/12C and 15N/14N. Reproduced from ref. [25].

The different ions generated in the MS are collected in Faraday Cups. For example, with CO2 as analyte gas there are three commonly occurring isotope combinations, namely 12C16O2, 13C16O2 and 12C18O16O. These isotope combinations have mass-to-charge ratios of 44, 45 and 46, which are obtained in one run by a multiple Faraday Cup (FC) collector 26. Each Faraday Cup is able to collect one ion trace. All this ion trace information is then transferred to a computer, which integrates the peaks corresponding to the different ion traces. This integration is used to calculate the isotope ratios. A detailed scheme of the multiple Faraday Cup collector is shown in Figure 6.
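The calculation that follows the peak integration can be sketched as follows. This is a simplified illustration with hypothetical peak areas; it omits the 17O correction that real IRMS software applies to the m/z 45 signal and the subsequent conversion to the VPDB scale.

```python
# Minimal sketch of how a delta value follows from integrated Faraday-cup ion
# traces in a CO2 measurement. Peak areas are hypothetical; the 17O correction
# applied by real IRMS data systems is deliberately omitted.

def delta_per_mil(r_sample: float, r_reference: float) -> float:
    """Delta notation: relative deviation of a ratio from a reference ratio, in per mill."""
    return (r_sample / r_reference - 1.0) * 1000.0

# Integrated peak areas (arbitrary units) for the m/z 44 and 45 ion traces,
# measured for the sample gas and for a reference gas of known composition.
sample_area_44, sample_area_45 = 8.2e5, 9.05e3
reference_area_44, reference_area_45 = 8.0e5, 8.9e3

r45_sample = sample_area_45 / sample_area_44          # approx. (13C + 17O) / 12C ratio
r45_reference = reference_area_45 / reference_area_44

# Delta of the sample versus the working reference gas; a further step (not
# shown) would place this value on the VPDB scale via the reference gas.
print(f"delta(45/44) vs reference gas: {delta_per_mil(r45_sample, r45_reference):.2f} per mill")
```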


Figure 6: Schematic of isotope detection in IRMS by Faraday Cup collectors. Reproduced from ref. [26].

Though IRMS is a good measurement technique for isotope ratios, it also has some disadvantages. IRMS is not suitable for water or gases that could condense, and therefore sample preparation is a necessary step, for example removing the water by (solid-phase) extraction or evaporation. These extra steps in the analysis cost time and may result in loss of sample. Other disadvantages are that IRMS instruments are very costly and voluminous, and that IRMS requires experienced operators 27. In paragraph 3.3 some spectroscopic methods for stable isotope analysis are described; these optical measurement techniques could offer a solution to these issues.

3.2 Elemental analysis

Isotope ratio mass spectrometry is often combined with elemental analysis in order to get as much information as needed, especially in the determination of geographical origin of wine, because wine is a complex mixture of compounds.

Elemental analysis is often combined with isotope analysis in order to detect both trace and rare-earth elements. A trace element can be any chemical element, as long as it is present at a trace concentration; in analytical chemistry an element is called a trace element if its concentration is less than 100 micrograms per gram 28. Trace elements can give information about the geographical origin of a food product, because their availability depends on several natural factors, for example the pH of the soil and the nutrients available in the soil, which differ from place to place on Earth.

In the periodic table, seventeen chemical elements are classified as rare-earth elements: the fifteen lanthanides together with scandium and yttrium. Rare-earth elements are not extremely rare; they are present in the Earth's crust, but in nature they occur together and are therefore hard to separate 28. Because their distribution patterns in soils differ from region to region, rare-earth element analysis can give information about the geographical origin of food products.

If a sample is analysed for its elemental composition, the process is called elemental analysis. It is mostly used for qualitative measurements: it determines which elements are present in a sample. The techniques used for the elemental analysis of food products are mostly those with multi-element detection capability, such as inductively coupled plasma mass spectrometry (ICP-MS), which examines the mass of atoms, and ICP coupled to atomic emission spectroscopy (ICP-AES), which determines metals and non-metals at the same time. The main disadvantage of these ICP techniques is the fact that they are costly and require trained operators 28. This is why other spectroscopic methods, which probe the inner electron structure of an atom, are also used. Techniques such as X-ray fluorescence (XRF), particle-induced X-ray emission (PIXE), X-ray photoelectron spectroscopy (XPS), atomic absorption spectroscopy (AAS) and Auger electron spectroscopy (AES) are commonly used.

3.3 Spectroscopic techniques for stable isotope analysis

Isotopic fractionation causes differences in the vibrations of molecules in the near- and mid-infrared spectral regions; this phenomenon is the reason why spectroscopic methods can be used to analyse the isotopic composition of food products. Spectroscopic methods such as infrared spectroscopy (IR), Raman spectroscopy, atomic absorption/emission spectroscopy (AAS/AES) and near-infrared spectroscopy (NIR) can be used in combination with IRMS, to give extra information about these vibrations. Spectroscopic methods are also emerging as alternatives to IRMS. Nuclear magnetic resonance (NMR) is such an alternative: NMR is a selective technique that can detect a large number of compounds at once without damaging the sample. The sample preparation step is less time-consuming than in IRMS; these advantages make NMR a promising alternative to IRMS 27.

In this paragraph, spectroscopic techniques for measuring stable isotope ratios and other chemical compounds as an alternative to IRMS are described. There are many spectroscopic techniques available and describing them all is beyond the scope of this review; therefore only some new, promising techniques are described.

3.3.1 Site-specific natural isotope fractionation-nuclear magnetic resonance (SNIF-NMR)

An upcoming technique for the analysis of isotope ratios of samples in the liquid phase, for example ethanol in a wine sample, is site-specific natural isotope fractionation-nuclear magnetic resonance, or SNIF-NMR. SNIF-NMR was invented in 1981 by Professor G.J. Martin, who used it to detect the (over-)sugaring of wine and the enrichment of grape musts 29. In 1990 the SNIF-NMR method was registered by the European Union as an official method for the analysis of wines 30, and in the period 1996-2001 the Association of Official Analytical Chemists (AOAC) registered the method for the analysis of fruit juices, maple syrup and vanillin 31.

SNIF-NMR is a type of NMR specialized in measuring the 2H/1H ratio in organic compounds at natural abundance. It is also used to measure the 13C/12C ratio, but this is less common. SNIF-NMR is a powerful tool in stable isotope analysis, because it can discriminate between hydrogen atoms from different chemical environments. SNIF-NMR requires an instrument with a strong magnetic field, because it is hard to obtain 2H peaks with reasonable resolution 32.

SNIF-NMR has a low sensitivity, in the range of 1 mmol of sample per measurement 33. The precision of SNIF-NMR is also low, especially when compared to mass spectrometric instruments. The error of a SNIF-NMR instrument depends on the type of compound analysed, but at present SNIF-NMR is only able to detect large 2H/1H differences for certain compounds at high concentrations in food products, due to the relatively large error of the instrument 34.

An advantage of SNIF-NMR, when compared to mass spectrometry, is the fact that the samples are analysed in a non-destructive way. A disadvantage of SNIF-NMR is the fact that it is not yet possible to couple it on-line with chromatographic techniques 3.


3.3.2 Cavity ring down spectroscopy (CRDS)

Another promising spectroscopic technique is cavity ring down spectroscopy (CRDS). CRDS can determine light isotope ratios by making use of the shifts in the rotational-vibrational transitions of a sample that result from the different masses of isotopologues. The isotope ratio most commonly measured using this technique is 13C/12C; however, it is also possible to measure the isotopes of other light elements such as hydrogen, nitrogen, oxygen and sulphur. The advantage of CRDS over IRMS is the fact that CRDS can differentiate between isobars with identical m/z values 35.

The prototype for CRDS was invented by O’Keefe and Deacon in 1988 36. CRDS is a technique which is immune to variations in the intensity of the laser. The path length in CRDS is extremely long, because the instrument contains a cavity with mirrors that reflect the laser light many times, which leads to a very high sensitivity 35. CRDS can be used for molecular spectroscopy studies because of the high resolution of its laser; the only requirement is that the sample must absorb the electromagnetic radiation the laser generates 37.

Figure 7: Schematic diagram of the apparatus used for CRDS studies. Reproduced from ref. [35].

Figure 7 shows a schematic diagram of a CRDS instrument. The ring-down cavity is the unique part of a CRDS system. The cavity contains an absorption cell bounded by a few mirrors, which each act as vacuum chamber windows or are located in a vacuum chamber 35. A disadvantage of CRDS is the fact that its precision is relatively low when compared to IRMS, but CRDS is a much cheaper technique, able to detect molecules at very low concentrations or weak transitions, so CRDS could be a promising technique in the future 37.

3.4 Validation of the analytical method

In order to demonstrate that an analytical method is useful, the method needs to be validated. Typical validation characteristics are accuracy, precision, specificity, linearity, range, and the detection and quantitation limits 38. Many organizations have written guidelines for these validation characteristics; this paragraph focuses on the most important ones.

Specificity means that the analytical method is able to correctly detect the analyte in the presence of other components that might be present in the mixture 38. Other components could be impurities or other constituents of the sample matrix. Sometimes more than one analytical procedure is necessary in order to achieve the required level of specificity.

Linearity means that the analytical method is able to obtain test results that are proportional to the concentration of the analyte in the mixture, and it can be visualized by plotting the results for the analyte analysed at different concentrations 38. The linearity can be shown in a graph by applying linear regression to the results of an analysis. The regression line should have an intercept close to zero 39. If the intercept is not at zero, the method needs to be validated in order to show that this deviation has no effect on the accuracy of the technique. Linearity is shown in a graphical representation in Figure 8.

Accuracy means that the analytical method is able to obtain test results that agree with the reference values, in other words that the test results and the true values agree 39. The true values for this validation characteristic can be obtained by comparing the results with a reference method, or by comparing them with known concentrations 40.

Precision means that the analytical method is able to obtain similar results for a series of measurements under the same conditions 40. Precision can be divided into three categories: (I) repeatability, (II) intermediate precision and (III) reproducibility. Repeatability is the similarity of results obtained under the same operating conditions over a short period of time; this is tested by performing the analysis several times in succession. Intermediate precision is the similarity of results under varying conditions within one laboratory, such as different days or analysts. This is usually tested by performing the same analysis within a laboratory over a number of days, with different analysts, in order to verify that the same laboratory will provide the same results regardless of the day or analyst, once the development phase is over. Reproducibility is the similarity of results obtained under the same operating conditions in different laboratories, to verify that the method gives the same results in different laboratories 38.

The range of an analytical method is the interval of analyte concentrations in the sample for which the analytical technique is suitable 38. It is usually expressed in the same units as the test results (for example percentage or parts per million) obtained by the analytical method 40. Figure 8 shows the range of an analytical technique.

Figure 8: Definition for linearity, range, limit of detection and limit of quantitation. Reproduced from ref. [40].

The detection limit of an analytical method is the lowest amount of analyte in a sample that can be detected 38. The limit of detection (LOD) is the point at which the measured amount of analyte is larger than the uncertainty of the measurement. The detection limit is determined by performing the analysis with samples that have known concentrations of analyte, and by establishing the minimum level at which the analyte can be detected 41.

The quantitation limit of an analytical method is the lowest amount of analyte in a sample that can be measured quantitatively with good precision and accuracy. The limit of quantitation (LOQ) is the point at which the sample can be measured and still be quantitated; it is often used for determining the impurities in a sample 41. The LOQ is determined by performing the analysis with known concentrations and establishing the minimum level at which the analyte can be quantified. The LOQ is larger than the LOD, by up to 20 times 42.

Figure 9: Limit of detection and limit of quantitation via signal-to-noise. Reproduced from ref. [40].

Figure 8 shows the LOD and the LOQ; in Figure 9 the LOD and LOQ are illustrated in terms of the signal-to-noise ratio, which is a measure that compares the level of a desired signal to the level of the background noise 40. The LOD and LOQ must be validated by performing several experimental tests.
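One common way to estimate the LOD and LOQ is from the scatter around a calibration line, using the 3.3σ/slope and 10σ/slope convention. The sketch below uses hypothetical calibration data and this convention, which is one of several acceptable approaches rather than the specific procedure of the cited references.

```python
# Minimal sketch: estimate LOD and LOQ from a calibration line using the
# common 3.3*sigma/slope and 10*sigma/slope convention. Data are hypothetical.
import numpy as np

# Hypothetical calibration data: analyte concentration (e.g. mg/L) vs. signal.
concentration = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
signal = np.array([0.052, 0.101, 0.208, 0.498, 1.003, 1.985])

# Ordinary least-squares fit of signal = slope * concentration + intercept.
slope, intercept = np.polyfit(concentration, signal, deg=1)
residuals = signal - (slope * concentration + intercept)
sigma = residuals.std(ddof=2)          # standard deviation of the residuals

lod = 3.3 * sigma / slope              # lowest detectable concentration
loq = 10.0 * sigma / slope             # lowest reliably quantifiable concentration
print(f"slope={slope:.4f}, intercept={intercept:.4f}, LOD={lod:.3f}, LOQ={loq:.3f}")
```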


4. Multivariate statistics

In this chapter a description is given of the multivariate statistical techniques, or chemometrics, used in authentication studies of food products. Chemometric methods classify multivariate data by using statistical tools 5. Chemometrics is crucial in stable isotope analysis of food products, because the data consist of a large number of variables and only the significant information needs to be extracted 28. In Table 4, the statistical methods used for the identification or authentication of food products in the studies reviewed in Chapter 5 are given.

Table 4: Summary of the statistical methods applied to the identification or authentication of food products.

| Statistical method | Food products | References |
|---|---|---|
| Principal component analysis (PCA) | wine, honey, potatoes, apples, cheese, milk | 57,58,59,60,61,62,63,64,65,66,67,71,72,73,87,88,89,93,97,99 |
| Partial least squares analysis (PLS, PLS-DA, OPLS-DA) | honey, milk | 42,99 |
| Discriminant analysis (LDA, DA, FDA, CDA, MCDA) | wine, honey, olive oil, pumpkin oil, tomatoes, lettuce, red onions, potatoes, apples, cheese, milk | 57,58,59,61,63,66,67,68,69,70,76,77,78,79,83,84,85,87,90,91,92,93,96,97,99 |
| Cluster analysis (CA, HCA, CARTs, C&RT, kNN) | wine, honey | 57,58,61,63,65,66,67,70,73,84 |
| Analysis of variance (ANOVA, MANOVA, Kruskal-Wallis test, multiple bilateral comparison) | wine, honey, olive oil, vinegar, balsamic/wine vinegar, potatoes, tomatoes, lettuce, apples, cheese, milk | 56,58,59,60,61,67,68,70,74,75,76,78,80,81,82,87,88,90,92,95,96,98,100 |
| Artificial neural network analysis (BP-ANN, CPANN) | wine, tomatoes, red onions | 62,83 |
| Discriminant function analysis (DFA) | potatoes | 88 |
| Soft independent modelling of class analogies (SIMCA) | tomatoes, red onions | 83,84 |
| Correlation analysis (Pearson's correlation, CCA) | wine, honey, olive oil, tomatoes, milk, cheese | 58,60,64,70,74,76,86,92,96 |
| Hypothesis testing (Bonferroni test, Tukey's test, Grubbs test, Mann-Whitney U test, t-test, likelihood ratio) | wine, honey, pumpkin oil, balsamic/wine vinegar, potatoes, tomatoes, lettuce, apples, cheese | 56,60,69,76,79,81,82,86,87,90,91,92,94,96 |
| Generalized Procrustes analysis (GPA) | | |


As can be seen in the table, many chemometric methods are available and used in publications, and describing them all is beyond the scope of this review. The focus will be on some of the most commonly used techniques: principal component analysis, cluster analysis, linear discriminant analysis, analysis of variance and some hypothesis testing methods.

4.1 Principal component analysis (PCA)

Principal component analysis, or PCA, reduces the dimensionality of a dataset while retaining as much of the information present in the original dataset as possible. This reduction is done by a linear transformation to a new set of variables, the principal components (PCs) 28. The first PC (PC1) explains most of the total variance in the dataset, and the second PC (PC2) explains the maximum residual variance. This continues until the total variance is explained 43. The total number of PCs is less than or equal to the number of original variables.

Figure 10: Structure of a PCA model. PCA is a form of variable reduction. Reproduced from ref. [44].

Figure 10 shows the structure of a PCA model. The results of a PCA are usually given in terms of scores and loadings: the scores are the coordinates of the data points in the space of the principal components, and the loadings are the weights by which each original variable is multiplied to obtain those scores 44.
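A minimal sketch of how scores, loadings and explained variance are obtained in practice is given below; the data matrix is randomly generated and purely hypothetical, and scikit-learn is assumed to be available.

```python
# Minimal sketch of PCA applied to a hypothetical table of isotope ratios and
# element concentrations (rows = samples, columns = variables); not code from
# any of the reviewed studies.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))                   # 30 samples, 6 measured variables

X_scaled = StandardScaler().fit_transform(X)   # centre and scale each variable

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # sample coordinates on PC1 and PC2
loadings = pca.components_.T                   # weight of each variable in each PC

print("explained variance ratio:", pca.explained_variance_ratio_)
print("scores shape:", scores.shape, "loadings shape:", loadings.shape)
```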

4.2 Cluster analysis (CA)

Cluster analysis, or CA, is a grouping technique able to discriminate between variables according to the similarities between them. It tries to group variables into so-called clusters, dividing the variables in such a way that variables in one cluster are more similar to each other than to variables in other clusters 28. CA is a chemometric tool used to explore the data and is commonly used in data analysis, especially in combination with other chemometric methods.

The two main classes of cluster analysis are hierarchical and non-hierarchical clustering. As shown in Figure 11, hierarchical clustering is a method which first produces a large cluster of variables with similarities. This cluster is then divided into smaller clusters of closer similarity between variables; in other words, it builds a hierarchy of clusters. Results are usually shown in a dendrogram, as displayed in the figure. Non-hierarchical clustering is a method which produces clusters in a large dataset; the variables in one cluster are similar, but the clusters have no similarities between them 45.
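The hierarchical variant can be sketched as follows, assuming SciPy is available; the two artificial groups of samples stand in for, for example, two production regions.

```python
# Minimal sketch of hierarchical (agglomerative) clustering on hypothetical
# sample data; illustrates the dendrogram-building step described above.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Two artificial groups of samples, e.g. two production regions.
X = np.vstack([rng.normal(0.0, 0.3, size=(10, 4)),
               rng.normal(2.0, 0.3, size=(10, 4))])

Z = linkage(X, method="ward")                    # build the hierarchy of clusters
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)
```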


Figure 11: Types of clustering analysis. Left: non-hierarchical clustering and right: hierarchical clustering. Reproduced from ref. [46].

4.3 Linear discriminant analysis (LDA)

Linear discriminant analysis, or LDA, is a chemometric method used in statistics to group variables by finding linear combinations of features in such a way that two or more classes of objects are characterized or separated. It makes use of linear discriminant functions in order to maximize the between-class variance and minimize the within-class variance 28. LDA makes the assumption that the data follow a normal distribution and that the classes are linearly separable. LDA is a projection technique that, similar to PCA, also tries to reduce the dimensionality of the dataset in order to classify the data 47. The difference between PCA and LDA lies in the fact that LDA makes a distinction between independent and dependent variables, whereas PCA does not. LDA is also similar to ANOVA, because it also tries to express one variable as a linear combination of others. The difference between them is related to the continuous variable: ANOVA uses a continuous dependent variable, whilst LDA uses continuous independent variables in combination with a categorical dependent variable 47.
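A minimal sketch of LDA used as a classifier follows, with hypothetical samples from two known origin classes and scikit-learn assumed to be available.

```python
# Minimal sketch of LDA used to classify samples into two known origin classes;
# data and class labels are hypothetical, not taken from the reviewed studies.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(20, 5)),
               rng.normal(1.5, 1.0, size=(20, 5))])   # 40 samples, 5 variables
y = np.array([0] * 20 + [1] * 20)                     # known class (e.g. region A / B)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("training accuracy:", lda.score(X, y))
print("predicted class of a new sample:", lda.predict(rng.normal(1.5, 1.0, size=(1, 5))))
```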

4.4 Analysis of variance (ANOVA)

Analysis of variance, or ANOVA, is used to develop and confirm exploratory data analysis. It partitions the variance observed in the variables into components attributable to different sources of variation. ANOVA tests whether the means of groups of variables are equal, using a statistical test similar to a t-test, the difference being that ANOVA performs this comparison for more than two groups 28. ANOVA is therefore a very useful tool in multivariate statistics, and this is why it is the most commonly used technique in the authentication or identification of food products 48.

The Kruskal-Wallis test is a type of one-way analysis of variance. It is a non-parametric method, often used to test whether samples originate from the same distribution. Unlike regular ANOVA, it does not assume that the data are normally distributed. It is used for comparing two or more independent samples of equal or different sizes, and it extends the Mann-Whitney U test (see paragraph 4.5) to more than two groups of variables. A significant Kruskal-Wallis result indicates that at least one sample stochastically dominates another sample, but the test does not identify where this dominance occurs or for how many pairs of groups it occurs 49.
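Both tests can be illustrated on the same hypothetical data, for example δ13C values from three regions (the values below are invented purely for illustration, and SciPy is assumed to be available).

```python
# Minimal sketch of one-way ANOVA and the Kruskal-Wallis test on hypothetical
# delta-13C values from three regions; illustrates the comparison of group means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
region_a = rng.normal(-27.0, 0.5, size=15)   # hypothetical delta-13C values (per mill)
region_b = rng.normal(-26.5, 0.5, size=15)
region_c = rng.normal(-25.0, 0.5, size=15)

f_stat, p_anova = stats.f_oneway(region_a, region_b, region_c)
h_stat, p_kw = stats.kruskal(region_a, region_b, region_c)

print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```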


4.5 Hypothesis testing

The chemometric methods described in this chapter are often combined with hypothesis testing methods in order to obtain as much statistical information as possible. Two of the most frequently used methods from Table 4 are described here.

Tukey’s test is a statistical test often used in combination with ANOVA; it is one of several tests used to compare means within a dataset, based on a distribution similar to that of a t-test 50. Tukey’s test assumes that the data follow a normal distribution and that the standard deviations of the groups are equal.

The Mann-Whitney U test is a null-hypothesis test used to compare two groups of observations, or to check whether two group means are equal or not. Under the null hypothesis, the chance that a randomly selected value from one sample is greater than a randomly selected value from the other sample is equal to the chance that it is smaller. The test does not assume that the variables follow a normal distribution, yet it is almost as efficient as a t-test 51.
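A minimal sketch of the Mann-Whitney U test on two hypothetical groups of δ18O values, assuming SciPy is available:

```python
# Minimal sketch of the Mann-Whitney U test comparing two hypothetical groups
# of delta-18O values; group sizes and values are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
group_1 = rng.normal(-7.0, 0.8, size=12)   # e.g. wines from region 1
group_2 = rng.normal(-5.5, 0.8, size=12)   # e.g. wines from region 2

u_stat, p_value = stats.mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```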

4.6 Validation of the chemometric method

In chemometrics, a method needs to be validated in order to estimate its error, which is measured using an independent validation set; the method leading to the minimum of this error is optimal. There are some common validation strategies, which are explained briefly in this paragraph. First a validation set should be selected. This validation set should be representative of the situations that the method is going to face when it goes into operation, and it should be different enough from the calibration set that the noise structure of the calibration set is not reproduced in the validation set. The strategy used to select the validation set depends on the amount of data that is available.

In a data-rich situation, the best approach is to randomly divide the dataset into two sets, the calibration set and the validation set. A typical split is to leave 25% for validation and 75% for calibration, but this may vary 52.

An alternative to a random split, again in a data-rich situation, is to make use of the Kennard-Stone algorithm. The Kennard-Stone algorithm selects the validation set in such a way that it is representative of the calibration range. Kennard and Stone proposed a selection method that covers the data in a uniform way: the algorithm repeatedly selects the next sample, called the validation object, as the one most distant from the objects already selected, which form the calibration set. Kennard and Stone called their invention a uniform mapping algorithm, because it preferably follows a flat distribution for a regression model 52.
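A sketch of the Kennard-Stone selection is given below. It keeps the space-covering selected subset as one set and the remaining samples as the other; whether the selected subset is then used for validation or calibration differs between descriptions, so the roles assigned below are one possible choice, and the data are hypothetical.

```python
# Sketch of the Kennard-Stone algorithm: iteratively select the sample that is
# farthest (in Euclidean distance) from all samples selected so far, so that
# the selected subset covers the data space uniformly. Data are hypothetical.
import numpy as np

def kennard_stone(X: np.ndarray, n_select: int) -> list:
    """Return indices of n_select samples chosen by the Kennard-Stone algorithm."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(dist), dist.shape)  # start with the two most distant samples
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Distance of each remaining sample to its closest already-selected sample...
        min_dist = dist[np.ix_(remaining, selected)].min(axis=1)
        # ...and add the remaining sample for which this distance is largest.
        selected.append(remaining[int(np.argmax(min_dist))])
    return selected

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 3))                      # 40 hypothetical samples, 3 variables
validation_idx = kennard_stone(X, n_select=10)    # uniformly spread subset (about 25%)
calibration_idx = [k for k in range(len(X)) if k not in validation_idx]
print(len(calibration_idx), "calibration samples,", len(validation_idx), "validation samples")
```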

If there is not much data available, then k-fold cross-validation is the preferred option. In k-fold cross-validation the original data set is divided in k subsets, randomly. Subset 1 is withdrawn, and the model is fitted with the rest. The model is then tested with subset 1, the same is done for the other k sections. The error of the model in all validation sets is summed up, as shown in Figure 12.

When k becomes equal to the sample size, the method is called leave-one-out cross-validation. Selecting an adequate value for k is a trade-off: a high value of k makes the model quite stable (no lack of samples to calibrate), but there is the danger that the validation samples are almost a ‘repetition’ of the calibration samples, which leads to overfitting the model. A low value of k makes the model difficult to calibrate (lack of samples), and the evaluation is unstable 54.
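A sketch of k-fold cross-validation with k = 5 using scikit-learn is given below; the PLS regression model and the simulated data are hypothetical choices for illustration only:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_score

# hypothetical data: 60 samples x 10 measured variables and one response
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = X[:, 0] + rng.normal(scale=0.1, size=60)

model = PLSRegression(n_components=3)
cv = KFold(n_splits=5, shuffle=True, random_state=0)   # k = 5 random subsets
mse = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")

print("MSE per validation subset:", mse)
print("summed cross-validation error:", mse.sum())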


Figure 12: Data splitting in k-fold cross-validation. Reproduced from ref. [53].

An alternative to the cross-validation discussed above consists of selecting the validation samples contiguously (eliminating the random part of the algorithm). In that case one should make sure that there is no structure in the way the data are sorted.

When sufficient computational power is available, Monte-Carlo k-fold cross-validation is the preferred option. It consists of repeating the k-fold cross-validation procedure (selecting the subsets randomly each time) in order to obtain a more stable (accurate) value of the cross-validation prediction error. This method randomly splits the dataset into calibration and validation data; for each split, the model is fitted to the calibration data and the predictive accuracy is assessed on the validation data. The results are then averaged over the splits 55. Data splitting in Monte-Carlo cross-validation is shown in Figure 13.

Figure 13: Data splitting in Monte-Carlo cross-validation. Reproduced from ref. [53].

The advantage of this method over normal k-fold cross-validation is that the proportion of the calibration/validation split does not depend on the number of iterations. A disadvantage is that some observations may never be selected for the validation subsample, whereas others may be selected more than once; in other words, validation subsets may overlap.
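Monte-Carlo cross-validation can be sketched with scikit-learn’s ShuffleSplit, which repeatedly draws a random calibration/validation split; the data and the PLS model below are again hypothetical:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

# hypothetical data: 60 samples x 10 measured variables and one response
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = X[:, 0] + rng.normal(scale=0.1, size=60)

# 50 random splits, each time leaving 25% of the samples out for validation;
# some samples may appear in several validation sets, others in none
cv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)
mse = -cross_val_score(PLSRegression(n_components=3), X, y,
                       cv=cv, scoring="neg_mean_squared_error")
print("averaged cross-validation MSE:", mse.mean())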


5. Food products

In this chapter, 45 studies published in the period 2010-2016 are discussed, categorized into 5 food product groups: wine, honey, oil and vinegar, fruit and vegetables, and dairy. Detailed information about the number of samples, the parameters measured, and the analytical techniques and statistical methods used is given.

5.1 Wine

One of the most widely exported food products worldwide is wine; countries therefore need to provide companies with strict guidelines concerning the quality and origin of wine. Nowadays many techniques are combined in order to discriminate between different kinds of wine and their geographical origins. The most commonly used analytical technique in the analysis of wine is elemental analysis, followed by stable isotope analysis. Often a combination of these two techniques is used, in order to obtain as much information as possible about the complex mixture of compounds in wine. Table 5 gives a summary of the reviewed articles investigating the geographical origin of wine.

Table 5: Discrimination of wines based on elemental composition and isotope ratios. ME, Multi-element composition.

Number of samples | Parameters | Origin | Analytical technique | Statistical method | Ref.
13 | ME | Brazil | PIXE | ANOVA, Tukey’s test | 56
47 | ME | Spain | AAS, AES | CA, PCA, LDA | 57
85 | ME | Portugal | ICP-MS | ANOVA, Pearson’s correlation, CA, PCA, LDA | 58
64 | ME | Portugal | AAS, AES | ANOVA, PCA, LDA | 59
26 | ME | Croatia | HR-ICPMS | ANOVA, Pearson’s correlation, Bonferroni test, PCA | 60
75 | ME | Croatia | ICP-MS | PCA, CA, LDA, ANOVA | 61
272 | ME | Slovenia | ICP-MS, ICP-OES | PCA, CPANN | 62
120 | ME | South Africa | ICP-MS | PCA, CA, DA | 63
3948 | 1H/2H, 13C/12C, 18O/16O | Italy | SNIF-NMR, IRMS | PCA, correlation analysis | 64

Wines originating from four different regions of Brazil were investigated using PIXE. Elements such as Al, Cu, K and Mg were analysed to determine differences in these elements among the regions. The publication showed that climatic conditions correlated with the concentrations of these elements in wine, and that the elemental composition was also correlated with the wine-making process 56.

The mineral composition of Spanish wines was analysed using AAS and AES. The Spanish wines originated from Cordoba, and analysis showed that magnesium was the variable best able to discriminate between wines of different ages. Figure 14 shows the PCA plot of the analysis; PCA and CA were able to group the wine samples according to their geographical origin and age, with a precision of 100% for young wines and 90% for aged wines 57.


Figure 14: Principal component scatter plot of young (J) and aged (C) wine samples of Montilla-Moriles (MM) and Villaviciosa (VV). Reproduced from ref. [57].

AAS and AES were also used, in combination with ICP-MS, to analyse wines originating from Portugal and some of its islands. This study showed that red wine samples differed significantly from white wine samples, with a p-value of less than 0.05. Elemental analysis with ICP-MS showed that red wines had significantly higher levels of several elements, such as B, Ba, K, Mg and Ni 58. The wine samples could be assigned to their geographical origin in terms of the island or archipelago they originated from 59.

ICP-MS was also used as the elemental analysis technique to discriminate wines originating from Croatia. The environment of the grapes used to produce the wine samples was related to the presence of several elements in the sample, such as Zn, Fe and Pb 60. ICP-MS in combination with PCA and ANOVA gave a good discrimination between wines originating from three different regions in Croatia, making use of seven elements: Al, As, Be, Li, Sr, Ti and Tl. ANOVA was the most successful chemometric method, as it was able to discriminate all sources of variability in a single wine sample 61.

Slovenian and South African wines were also investigated using ICP-MS. In the case of the Slovenian wine samples, PCA was not able to classify the white wine samples; therefore a less common chemometric tool, counter-propagation artificial neural networks (CPANNs), was used, which successfully grouped the data even for Slovenian regions close to each other (300 km2) 62. The distance between the regions in South Africa was also small (1000 km2), but ICP-MS in combination with CA and DA successfully classified the wine samples 63.

Isotope ratio analysis is less common in the analysis of wine samples in the reviewed articles, but it was used to determine the geographical origin of Italian wine samples. IRMS and SNIF-NMR were used to analyse the isotope ratios 1H/2H, 13C/12C and 18O/16O. In this study a very large number of samples (~4000) was used, and not only the isotopic parameters were studied, but also the climatic and geographic contributors to differences in these isotope ratios. The isotopic parameters studied by SNIF-NMR were (2H/1H)I, the hydrogen isotope ratio of the methyl site of ethanol in wine, and (2H/1H)II, the hydrogen isotope ratio of the methylene site. IRMS was used to study the other two isotope ratios, 13C/12C of ethanol and 18O/16O of water. The oxygen isotope ratio showed the strongest relationship with climatic and geographical origin, followed by the hydrogen isotope ratio. Temperature and precipitation rate caused large differences in stable isotope ratios, a relationship that can be used to predict the geographical origin of wine samples 64.


5.2 Honey

Honey is a product consumed worldwide, valued especially for its nutritional and medicinal qualities. The geographical origin of a honey product influences the composition of the sample, and this composition is related to the quality of the product. Table 6 summarizes the recent literature concerning discrimination of honey samples from various origins. As can be seen in the table, stable isotope analysis is the most used analytical technique in the authentication of honey, followed by elemental analysis.

Table 6: Discrimination of honey based on elemental composition and isotope ratios. ME, Multi-element composition.

Number of samples | Parameters | Origin | Analytical technique | Statistical method | Ref.
55 | ME | Poland | ORS-ICPMS | PCA, CA | 65
140 | ME | Poland | DRC-ICPMS | CA, PCA, LDA, CARTs | 66
122 | 13C/12C, 15N/14N, ME | Slovenia | CF-IRMS, XRF | ANOVA, Kruskal-Wallis test, CA, PCA, LDA | 67
271 | 13C/12C, 15N/14N | Slovenia | CF-IRMS | ANOVA, Kruskal-Wallis test, LDA | 68
617 | 13C/12C, 15N/14N, 18O/16O, 34S/32S | Europe | TC/EA-IRMS | Tukey’s test, CDA | 69
79 | 13C/12C, 87Sr/86Sr, ME | Argentina | QICP-MS, IRMS, TIMS | ANOVA, DA, GPA, CCA, C&RT | 70
40 | 1H/2H, 13C/12C, 18O/16O | Romania | CF-IRMS, SNIF-NMR | PCA | 71
83 | 1H/2H, 13C/12C, ME | New Zealand | IRMS, ICP-MS, Raman, FT-IR, NIR | PCA, OPLS-DA | 72
54 | 13C/12C | Turkey | CM-CRDS | PCA, HCA | 73

Honey samples originating from Poland were investigated by elemental analysis. The discrimination was performed with an ICP-MS instrument equipped with an octopole reaction system (ORS) or a dynamic reaction cell (DRC), in combination with several statistical methods such as PCA and CA. These two publications showed that the mineral ratios in a honey sample could predict the geographical origin of the sample 65,66.

Several honey types originating from different regions in Slovenia, such as black locust, lime, multifloral, spruce and chestnut, were analysed on the basis of physico-chemical parameters. The elemental analysis was performed using XRF 67 and the stable isotope analysis of the 13C/12C and 15N/14N ratios by IRMS 68. Both studies were performed in combination with several statistical methods. The elemental analysis showed that samples of different honey types could be coupled to their geographical origin; the LDA plot is shown in Figure 15. The samples were successfully classified, with a precision of 100% for lime, 98.2% for black locust and 94.6% for chestnut honey samples, respectively. Analysis with IRMS showed that the method was capable of identifying the samples according to their botanical and geographical origin. Honey adulteration could also be identified, with only 2.2% of the samples found to be adulterated. The botanical origin showed a higher 13C/12C ratio for black locust, but for the other samples there was no influence on
