
Aberration corrected light modulation for the selective addressing of atom clouds on a chip

by

Rosanne Jacintha Nusselder

Supervisor: Dr. Robert Spreeuw

Daily supervisor: Julian Naber

Second reviewer: Dr. Rudolf Sprik

August 2016

MSc Physics

Track: Advanced Matter and Energy Physics

Master thesis


Abstract

We present the generation of spatial intensity distributions containing multiple spots in arbitrary geometries. The distributions are optimized to match 10 µm period microtrap lattices on a magnetic atom chip to allow for selective addressing of 87Rb atom clouds. The distributions are created by means of a Spatial Light Modulator (SLM). The optical setup built for this purpose is designed according to the 4f-configuration, which dictates the spacing between optical elements and ensures maximal power at the trapping sites. We demonstrate an SLM-based wavefront correction technique for the compensation of optical aberrations in our setup. We observe improvements in the flatness of the wavefront and in the shape and intensity of the spots. To match the requirements set by two-photon atomic excitation schemes, the SLM is programmed to control both halves of the SLM display independently for simultaneous tailoring of two laser beams at 480 and 780 nm. The SLM is successfully multiplexed by applying an empirically optimized conversion factor to the computations employed for the 480 nm laser beam. In addition, we have accounted for chromatic dispersion by imprinting a lens phase onto the left half of the SLM display to focus the 780 nm beam. We present overlapped images of two spot geometries matching microtrap lattices on the magnetic atom chip with both laser beams simultaneously, where we correct for wavefront aberrations and chromatic dispersion.


Samenvatting

Quantum physics has interesting applications in the field of computer science. A computer based on quantum-physical entities offers promising prospects for the speed with which computations can be completed: some algorithms are predicted to run exponentially faster on a computer based on quantum-mechanical laws. In the Quantum Gases and Quantum Information group at the University of Amsterdam, work is being done on a platform on which these quantum-physical applications can be studied. The qubits, the quantum counterparts of classical bits, are represented in this system by small clouds of 87Rb atoms. The atoms are held on an atom chip in a lattice of microtraps with a period of 10 µm. Before an 87Rb cloud can serve as a qubit, it has to be excited to a highly energetic state. For this we use two lasers of different wavelengths: 480 nm and 780 nm. To excite the 87Rb clouds in a controllable manner, the spatial profile of the lasers has to be modulated, so that instead of a whole region of the atom chip only specific microtraps are addressed. To this end we want to implement a Spatial Light Modulator (SLM) in the experimental setup. An SLM consists of a computer-controlled pixelated chip that modulates an incident laser beam by retarding the phase of the beam slightly differently at each pixel. In this way an interference pattern is created that consists of several spots, where each spot can be directed at a specific microtrap. In this thesis we work on optimizing and programming the SLM, so that it can be used for the modulation of the two lasers mentioned above. A temporary optical setup has been built, on which the results presented in this work are based.

We use a single SLM to modulate two lasers of different wavelengths. We do this by dividing the chip into two independent halves, such that one half modulates only the 480 nm laser beam, and the other half only the 780 nm laser beam. Because the SLM can be calibrated for only a single wavelength, the computations for one half of the chip have to be converted using an empirically optimized formula. In addition, we compensate for chromatic aberrations caused by the use of different wavelengths. We do this by placing a lens phase on the SLM. There are further aberrations in the system, which result from imperfections in the optical setup. These aberrations cause the wavefront of the laser beams, which in the ideal case is flat, to become deformed. We compensate for this by using the SLM to determine the deformations of the wavefront, and subsequently placing a correction pattern on the SLM chip. As a result of this correction procedure, the wavefront is 62% flatter after applying the corrections. In addition, the quality of the interference patterns is improved, mainly in terms of the shape and intensity of the spots. With the aforementioned corrections, we finally present interference patterns in which both lasers are modulated and overlapped simultaneously. The patterns match the microtrap lattices on the atom chip.


Contents

1 Introduction
2 Theory
  2.1 The Spatial Light Modulator
    2.1.1 Liquid crystal on silicon technology
    2.1.2 SLM driving scheme and gamma functions
  2.2 Phase pattern synthesis
    2.2.1 Fourier description of the imaging system
    2.2.2 The Iterative Fourier Transform Algorithm
    2.2.3 Phase diffraction gratings
  2.3 Wavefront correction
    2.3.1 The Shack-Hartmann wavefront sensor
    2.3.2 SLM for wavefront sensing
    2.3.3 Wavefront analysis: the zonal method
    2.3.4 Wavefront analysis: the modal method
3 Experimental setup
  3.1 Laser setup and outcoupling optics
  3.2 Beam waist measurements
  3.3 Illumination of the SLM
  3.4 The 4f-configuration
  3.5 Imaging
4 Results Part I: Wavefront correction
  4.1 Calculation of the correction phase pattern
  4.2 Impact of wavefront correction
  4.3 Wavefront shape after correction
  4.4 Uniformity of intensity after correction
  4.5 Sensitivity to setup adjustments
5 Results Part II: Spot pattern generation
  5.1 Image analysis
  5.2 Determination of spot position
  5.3 Intensity fluctuations
  5.4 Multiplexing the SLM for two wavelengths
  5.5 Spot patterns on a grid
  5.6 Overlapping two beams to match the atom chip
6 Discussion
7 Conclusion
8 Acknowledgements


Introduction

Although it has long been agreed upon that the physical world is governed by the laws of quantum theory and that Newtonian mechanics is merely an approximation to this description, digital devices nowadays are still based on the laws of classical physics [1]. Especially in the context of computer technology this appears limiting. As chip fabricators are in a constant race for ever smaller computer processors, they inherently move towards a regime in which quantum phenomena such as superposition and entanglement could play an exciting role. The science of quantum computing is based on two-state quantum-mechanical systems that define units of quantum information, analogous to the binary digit (bit) in classical computers. The advantage of quantum information processing over its classical counterpart lies in the manner in which it can store information: whereas classical bits are described by either the |0⟩ or the |1⟩ state, quantum bits can be in a superposition of the two. The opportunities offered by quantum-mechanical systems have been studied extensively, both theoretically and experimentally. Large-scale quantum computers have been predicted to solve certain problems faster than classical computers; in some cases calculation times are even predicted to reach exponential speedups [2]. Furthermore, as a direct result of their quantum nature, quantum computers offer a promising avenue to the study of quantum many-body systems [3].

Quantum information with Rydberg atoms

Analogously to the classical computer, quantum computers need single-qubit operations as well as two-qubit logical gates for computation purposes. An efficient quantum information platform needs a sufficiently large number of qubits, on which high-fidelity one-qubit and two-qubit operations can be performed [1]. Several implementations of quantum information platforms have been realised, such as trapped ions [4, 5] and superconducting qubits [6]. Another approach is the use of neutral atoms. The suitability of neutral atoms for quantum computing is related to their interaction properties: while insensitive to their environment in the ground state, neutral atoms exhibit two-atom interactions exceeding ground-state interactions by 12 orders of magnitude when excited to the highly energetic Rydberg state [7]. The Rydberg state is an excited state of high principal quantum number n with beneficial characteristics such as a large dipole moment and polarisability [8]. As a result of the strong dipolar interaction, the presence of a single atom in the Rydberg state shifts the energy levels of atoms in its vicinity. Excitation by the same resonance frequency is inhibited for all atoms within a certain "blockade" radius rb [9]. This effect is called the dipole blockade: the central ingredient for Rydberg-based quantum computation platforms [10]. Because of the dipole blockade, excitation to the Rydberg state is delocalised over all atoms within the blockade radius rb.
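The scale of rb can be estimated with a commonly used textbook relation (not derived in this thesis), assuming van der Waals interactions V(r) = C6/r^6 and an excitation bandwidth set by the Rabi frequency Ω:

```latex
% Blockade condition: the interaction shift at distance r_b equals the
% excitation bandwidth, here taken as the Rabi frequency \Omega:
\[
  \frac{C_6}{r_b^{6}} = \hbar\,\Omega
  \qquad\Longrightarrow\qquad
  r_b = \left( \frac{C_6}{\hbar\,\Omega} \right)^{1/6} .
\]
% Since C_6 scales roughly as n^{11}, the blockade radius grows rapidly with
% the principal quantum number n of the Rydberg state.
```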

In the Quantum Gases and Quantum Information group at the University of Amsterdam, a scalable quantum platform is realised based on neutral 87Rb atoms trapped on a magnetic atom chip. Heavy alkali-metal atoms such as 87Rb are valuable candidates because they are easily laser cooled and, in addition, possess large hyperfine-Zeeman splittings of the excited state, which facilitates state initialisation by optical pumping [11]. The atom chip consists of a patterned magnetised layer of FePt. The two-dimensional structure of the FePt layer, in conjunction with a homogeneous external magnetic field, creates a landscape of microtraps for 87Rb clouds. Two different types of lattice geometries exist in the chip, which are depicted in Figure 1.1. Qubit encoding is realised in two hyperfine-Zeeman levels of the atomic ground state. An entire ensemble of 87Rb atoms in a



Fig. 1.1: Two lattice geometries on the magnetic chip: (a) the square geometry, and (b) the hexagonal geometry. Dark regions indicate the position of microtraps. The axes represent length scales in units of µm. Image adapted from [8].

single microtrap represents a single qubit when in the collectively excited Rydberg state. Transitions between the |0⟩ and |1⟩ state can be achieved by the two-photon Raman transition scheme, which consists of two individual lasers at 780 nm [12]. Excitation to the Rydberg state is achieved by two lasers at 780 nm and 480 nm. The 780 nm wavelength corresponds to the transition frequency between the 5S1/2 ground state and the 5P3/2 excited state of 87Rb with a detuning of ∆d. The 480 nm beam subsequently excites the atoms to an nS1/2 Rydberg state. In the magnetic chip experiment, a frequency-doubled Toptica laser system can produce a wavelength in the 479-488 nm range, which in theory allows for excitation to any Rydberg level with n > 17 [8]. Stable laser power of both lasers is needed for Rydberg excitation, as fluctuations in intensity induce shifts in the atomic energy levels [12].
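The sensitivity to laser power can be made explicit with a standard textbook result for far-detuned two-photon transitions (the symbols Ωr and Ωb for the single-photon Rabi frequencies of the 780 nm and 480 nm beams are our own notation, not the thesis's):

```latex
% Effective two-photon Rabi frequency for 5S_{1/2} -> nS_{1/2} via the
% intermediate 5P_{3/2} level, detuned by \Delta_d (|\Delta_d| \gg \Omega_r, \Omega_b):
\[
  \Omega_{\mathrm{eff}} = \frac{\Omega_r\,\Omega_b}{2\,\Delta_d} .
\]
% Each beam also shifts the coupled levels by an AC-Stark (light) shift of order
\[
  \delta E_{r,b} \approx \frac{\hbar\,\Omega_{r,b}^{2}}{4\,\Delta_d} ,
\]
% and since \Omega^2 is proportional to the laser intensity, intensity
% fluctuations translate directly into fluctuating level shifts.
```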

Spatial light modulator

Up until now, experiments on 87Rb atoms in our setup have consisted of experimental cycles addressing regions of the atom chip containing multiple microtraps. For one-qubit as well as two-qubit operations, individual addressing of (multiple) lattice sites is indispensable. Spatial intensity distributions are required that match the magnetic trapping geometries. To achieve spatial shaping of the relevant optical fields, a spatial light modulator (SLM) will be implemented in the setup. The term spatial light modulator refers to a wide variety of devices that are used to tailor the spatial distribution of light. Especially versatile tools in the field of experimental physics are SLMs that allow for dynamic modulation of the spatial profile, such as digital micro-mirror devices (DMDs), which consist of arrays of around 10^6 mirrors that can be switched between two reflective alignments with refresh rates of up to 50 kHz [13]. A different type of SLM makes use of liquid crystal (LC) technology: by exploiting the controlled birefringence of LC materials, the phase, amplitude or polarisation of a diffracted beam can be modulated. They can thus be considered a sophisticated way of implementing diffractive optical elements (DOEs) in an optical system. Typical examples of DOEs, such as glass or silicon masks with binary relief profiles, shape a beam statically. LC SLMs, however, do the same dynamically, imitating the physical relief of regular DOEs with a computer-controlled birefringence and resolutions of up to 10 megapixels.

Both binary and multilevel LC SLMs are available, all with their own characteristics in terms of refresh rate, efficiency and spatial noise. Multilevel SLMs typically have an 8-bit modulation depth with high dynamic ranges of more than 2π. They are characterized by relatively low refresh rates of a couple of tens of hertz, compared to several kHz for the binary LC type. However, multilevel SLMs have the advantage of allowing for high efficiencies, while binary LCs as well as DMDs encounter large spatial quantization noise [14, 15]. This can be reduced by using multiple subframes with a high refresh rate, thereby exploiting temporal averaging, but this comes at the cost of the contrast ratio [16]. Therefore, as long as fast update rates are not of crucial importance, multilevel modulation depth is preferable over high refresh rates with binary modulation. For the purpose of the controllable addressability of atom clouds on a chip, intensity patterns are not required to change more than once per experimental cycle of ∼20 s. Meanwhile, beam shaping has to be performed with high precision in order to match magnetic trapping geometries at the micrometre scale. In addition to high-precision spatial tailoring, two-photon excitation of 87Rb atoms to the Rydberg level entails high-power laser beams. With a high dynamic range and high efficiency compared to both DMDs and binary LCs, we conclude that the multilevel LC SLM best meets our requirements.

Although much of the liquid crystal technology has been developed for commercial purposes, such as telecommunications and the video industry, LC SLMs are proving themselves in a variety of research fields. In experimental physics, they have been deployed for the purpose of spatial filtering for enhanced contrast in microscopy [17, 18], reconfigurable holographic interconnections with magneto-optic SLMs [19] and acousto-optic SLMs [20], and adaptive optics [21], amongst others. In addition, recent publications on the topic of ultracold atomic physics show the implementation of SLMs for the purpose of optical trapping [22–24] and Rydberg excitation of cold atoms [25].

In the magnetic chip experiment, an SLM will be implemented for the addressing of individual microtraps. In order to shape light fields appropriately, the SLM needs to be programmed and characterized before implementation. In this thesis we discuss the different aspects that are important for the optimization of the SLM's light shaping properties. They are evaluated by means of an experimental SLM setup built for the purpose of characterization and programming of the SLM. Control of the SLM is achieved with the SLM control software developed by Rick van Bijnen as part of his PhD thesis at the Technical University of Eindhoven [26]. Modifications and additions to the control software are necessary to meet the specific requirements of the magnetic chip experiment. Firstly, we demonstrate the generation of spatial intensity distributions containing multiple spots in arbitrary geometries for the selective addressing of multiple lattice sites in different lattice geometries. Secondly, we present simultaneous spatial tailoring of two laser beams at 480 nm and 780 nm to meet the requirements set by the Rydberg excitation scheme. In order to multiplex the SLM for two wavelengths, we choose to spatially separate the laser beams, and modify the SLM control software to allow for independent control of the left and right halves of the SLM display. Thirdly, aberrations in the imaging system have to be compensated for in order to generate reproducible spot patterns with a uniform intensity distribution. We demonstrate an SLM-based wavefront correction technique for the compensation of optical aberrations in our setup.

The thesis is structured as follows. In Section 2, a theoretical understanding is provided of the working principles of the SLM, in addition to a discussion of the computation algorithms used for the generation of intensity distributions. Furthermore, the section introduces the methods of wavefront correction, which we carry out by exploiting the light modulating properties of the SLM. Section 3 describes the experimental setup. The results have been split into two chapters: Section 4 presents the results of wavefront correction, and Section 5 discusses the results of spot pattern generation. Finally, Sections 6 and 7 present the discussion and the conclusions.


Theory

In this chapter we introduce the theory and concepts that form the basis for the rest of the thesis. Section 2.1 provides a basic understanding of the working principles of the spatial light modulator (SLM), including the requirements it imposes on the experimental setup and its limitations in terms of efficiency. In addition, we briefly review the SLM's electronic driving scheme, as it allows for the adjustment of important device characteristics such as refresh rate and modulation depth. In Section 2.2 we introduce the different algorithms that are employed to generate target intensity distributions by means of phase modulation. In order to optimize the quality of the resulting intensity patterns, the SLM is also employed for the measurement of the wavefront of the electromagnetic field, and subsequently for the correction of optical aberrations in the setup. Section 2.3 provides the theoretical framework upon which wavefront correction is based.

2.1 The Spatial Light Modulator

As was concluded in Section 1, we choose to implement a liquid crystal (LC) based SLM for the purpose of individually addressing atom clouds on a chip. The liquid crystal state is a phase of matter that shows liquid-like as well as crystal-like properties. Molecules within the LC show some ordering, such that it lacks the isotropy of a fluid. Still, flow-like behaviour is possible in the form of rotation of the molecules. There exist different mesophases of the liquid crystal state, such as the smectic, the chiral and the nematic mesophase. The smectic state is not used in LC technology because its high viscosity makes it unsuitable for dynamic spatial tailoring [15]. Chiral-phase LCs constitute the binary SLMs discussed in the previous section, and will therefore not be discussed here further. Instead, we will focus on the nematic LC mesophase used for multilevel LC SLMs. In the nematic phase, the rod-like molecules show a long-range preferential alignment in which their long axes are aligned along a common axis called the director. They do not possess further positional order. For weak distortions, nematic LCs act as an elastic medium, and the director of the LC can be reoriented with relatively low external electric, magnetic or optical fields [27]. In most cases, it is the electro-optical properties that are exploited to modulate the amplitude and the phase of a passing electromagnetic field.

The modulating properties of the nematic liquid crystal are based on its birefringence ∆n, expressed in terms of the ordinary and extraordinary refractive indices no and ne as ∆n = ne − no. The refractive index experienced by an incident light beam depends on its polarisation direction relative to the director of the LC molecules. Figure 2.2(a) shows a positive uniaxial optical indicatrix, which represents all liquid crystals with ∆n > 0, as is the case for most LCs. The figure demonstrates the indices of refraction for linearly polarised light passing through the material from any direction. The extraordinary refractive index ne lies in the z-direction along the director, indicated by the red arrow, whereas no is oriented in the plane perpendicular to it. Although an optical indicatrix in general demonstrates the birefringence of a material as a whole, in this case the figure can be thought of as representing individual LC molecules, which are indeed stretched in the direction of ne. The birefringence of the material can be exploited by rotating the molecules while leaving the angle of incidence and polarisation state of the light unchanged. For example, consider a linearly polarised electromagnetic field propagating in the positive y-direction, and oscillating in the x-y plane. By rotating the molecules 90° about the y-axis, the refractive index changes from no to ne. As a result,


Fig. 2.1: Schematic of a typical LCOS SLM architecture. The liquid crystal material is sandwiched between two alignment layers which confine the orientational ordering of the molecules. On the front side of the display, the alignment layer is covered by a transparent electrode and a cover glass. At the back, a layer of reflective aluminium pixels is deposited onto a CMOS, and controlled from a PCB. Image from http://holoeye.com.

the velocity of light in the material depends on the refractive index as v = c/n.

2.1.1 Liquid crystal on silicon technology

In science as well as in commercial applications, control of the liquid crystal is often achieved by means of complementary metal oxide semiconductor (CMOS) technology [15]. The liquid crystal layer is controlled through a pixelated aluminium surface, which is deposited on a CMOS silicon backplane. It is then referred to as liquid crystal on silicon (LCOS) technology. Figure 2.1 shows the structure of a typical LCOS microdisplay. The LC material is sandwiched in between two layers which serve to align the molecules. The alignment layers are rubbed in a preferred direction with a velvet cloth, and the molecules that are in direct contact with the surface align with the rubbing direction because of their strong interaction with the surface substrate, called anchoring [28]. On the front side of the display the alignment layer is covered by a transparent electrode and a cover glass. At the back, a reflective layer of aluminium pixels is deposited on the silicon CMOS backplane, which is controlled from the PCB on which it rests. Each pixel of the display controls one liquid crystal "cell", which is the volume of LC material that lies directly on top of it. Most of the pixel circuitry that constitutes the CMOS backplane is buried underneath the pixels, outside of the beam path, which results in high fill factors of more than 90% [15]. The fill factor is defined as the area actively involved in light shaping divided by the total area of the microdisplay. The remaining so-called dead space in between the pixels constitutes a periodic structure of uncontrolled LC material. Light incident on the gap structure cannot be controlled, but gets diffracted from the periodic structure [29]. This results in a diffraction pattern consisting of a bright, undeflected spot, which we will refer to as the zeroth order, and a series of higher orders that are visible in the horizontal and vertical directions.
In addition to the reflective LCOS SLMs, there exist transmissive SLMs, which are characterized by a significantly lower fill factor of typically less than 60%. These SLMs will not be discussed here further.
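Since the pixel gaps form a two-dimensional grating whose period is the pixel pitch p, the angular positions of these higher orders follow the grating equation sin θm = mλ/p. A short illustrative sketch, assuming the 8 µm pixel pitch of the Holoeye SLM described below (the numbers are illustrative, not taken from the thesis):

```python
import math

def grating_angles_deg(wavelength_um, pitch_um, orders=(1, 2)):
    """Diffraction angles sin(theta_m) = m * lambda / p of a grating with period p."""
    return [math.degrees(math.asin(m * wavelength_um / pitch_um)) for m in orders]

PITCH_UM = 8.0  # assumed pixel pitch of the SLM display, in micrometres
for wavelength_um in (0.480, 0.780):  # the two experimental wavelengths
    first, second = grating_angles_deg(wavelength_um, PITCH_UM)
    print(f"{wavelength_um * 1000:.0f} nm: "
          f"first order at {first:.2f} deg, second order at {second:.2f} deg")
```

For 780 nm the first order appears at roughly 5.6°, well separated from the undeflected zeroth order.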

SLMs tailor the spatial distribution of a laser beam by imprinting it with a spatially varying phase retardation. The desired intensity distribution is created by interference when focussing the diffracted beam with a lens. In order to generate the target intensity distribution, a so-called phase pattern is applied to the SLM that assigns a specific voltage to each LC cell of the microdisplay. The


Fig. 2.2: (a) Positive uniaxial optical indicatrix demonstrating the birefringent properties of a nematic liquid crystal with ∆n > 0. The director of the molecules lies in the z-direction along the extraordinary refractive index ne. The ordinary refractive index no lies in the plane perpendicular to the director. Image adapted from [31]. (b) Schematic of the Twisted Nematic (TN) configuration, demonstrating the off state (left) and on state (right). A layer of LC molecules (in purple) is sandwiched between two polarising filters with a 90° angle between their polarisation axes. Without the presence of an external electric field (left), the molecules are twisted from top to bottom as a result of the rubbing directions of the alignment layers, which are aligned with a 90° angle between them. Light entering from the top changes its polarisation state from linear to elliptical, and a maximum amount of light is transmitted by the second polariser. In the on state (right), the molecules reorient along the externally applied electric field such that the polarisation of the electromagnetic field is unchanged and all light is blocked by the second polariser. Image from http://www.longtech-display.com.

phase pattern should have the same resolution as the SLM and can designate each pixel with a value in a range specified by the SLM’s modulation depth. The values assigned to the pixels are called grey values, and they correspond to a specific refractive index of the corresponding liquid crystal cell.

There are several common LCOS configurations that employ the nematic liquid crystal phase. To determine which kind of nematic LC is best for our purposes, it serves to understand the difference between amplitude modulation and phase modulation. To illustrate the two modulation methods, two types of LCOS implementations are discussed: the Twisted Nematic (TN) liquid crystal, often used in conventional LC devices for video and image projections, and the Parallel Aligned Nematic (PAN) LC, used in our setup.

Amplitude-only modulation

The reflective TN configuration consists of an architecture similar to that in Figure 2.1, except that, in addition to the layers shown in this schematic, it makes use of two polarising filters that have a 90° angle between their polarisation axes. Figure 2.2(b) shows a schematic of the on and off states of the TN SLM configuration. The rubbing directions of the alignment layers have a perpendicular orientation, which realises a twist in the orientation of the molecules from the top to the bottom of the LC cells when no voltage is applied. Light enters the LC cell from the top and linearly polarised light is transmitted by the first polarising filter. The linearly polarised light is transmitted through the LC cell, and subsequently through a second polarising filter whose polarisation axis forms a 90° angle with the first. In the off state, the molecules are twisted along


Fig. 2.3: Schematic showing some LC molecules (purple) of a parallel aligned nematic (PAN) LC cell, located in between a back and a front electrode (dashed in red). Incident light with a horizontal linear polarisation propagates along k and experiences a refractive index ne for V = Vmin, and a refractive index no for V = Vmax. Left: planar orientation; right: homeotropic orientation (director normal to the surface).

the bulk, which causes the polarisation of the light to change from linear to elliptical. This way a maximum amount of light passes through the second polarising filter. In the on state, the external voltage causes rotation of the molecules along the electric field, such that they are now all aligned in the direction of the propagation of light. In this configuration, the polarisation of the light is left unchanged, and all light is blocked by the second polarising filter.

Phase-only modulation

An SLM based on the TN configuration described above can be used for phase-only modulation if the polarisers are removed [27]. Because of the birefringence of the material, two refracted beams with different phase retardation and different polarisation will then exit the material. The calculation of the phase pattern is then significantly more complicated because of the coupling between modulation of phase and modulation of polarisation. It is possible to block one of the two rays, but this would strongly limit the efficiency of the SLM. Besides, changes in polarisation state are unacceptable in quantum optics. Therefore, we will refer to the parallel aligned nematic (PAN) LC for a discussion of phase-only modulation.

The PAN configuration differs from the TN configuration in that it does not use polarisers, such that no light gets absorbed, which dramatically increases its efficiency. The molecules are not twisted but aligned continuously across the bulk of the cell, as shown in Figure 2.3. On the left, the alignment of the molecules is shown in the case of a minimum voltage across the electrodes, in which case the molecules' director is aligned perpendicular to the propagation direction of the light, indicated with k. In the PAN configuration, the polarisation of the light has to be aligned as depicted in the image: horizontally and perpendicular to the surface electrodes. In the left image, the light is polarised along the director and therefore experiences the extraordinary refractive index ne. On the right, a maximum voltage is applied, causing the molecules to tilt along the electric field and towards the front electrode. As a result, the incident light experiences the ordinary refractive index no. Any position of the molecules in between both states yields

a different phase retardation, depending on the birefringence of the material. The polarisation of the light is not affected by the rotation of the molecules. In principle, the alignment of the molecules, and thus the phase retardation, can be altered continuously. In general, however, this will not be achievable, since the voltage will be set in discrete steps. This aspect is discussed in Section 2.1.2.
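The dependence of the retardation on the molecular tilt can be made explicit with the standard index-ellipsoid relation (a textbook result; θ here denotes the angle between the light's polarisation and the director, our own convention):

```latex
% Effective refractive index for linear polarisation at angle \theta to the
% director (\theta = 0: field along the director, index n_e; \theta = 90°: n_o):
\[
  \frac{1}{n_{\mathrm{eff}}^{2}(\theta)}
    = \frac{\cos^{2}\theta}{n_e^{2}} + \frac{\sin^{2}\theta}{n_o^{2}} ,
\]
% giving a single-pass phase retardation over a cell of thickness d of
\[
  \delta(\theta) = \frac{2\pi d}{\lambda}\,\bigl( n_{\mathrm{eff}}(\theta) - n_o \bigr),
  \qquad
  0 \le \delta \le \frac{2\pi d\,\Delta n}{\lambda} .
\]
% A reflective LCOS cell is traversed twice, doubling the available stroke.
```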


Because the PAN configuration allows for modulation of the light without absorption and without changes in the polarisation of the laser light, a phase-only spatial light modulator is preferred for the magnetic chip experiment. The spatial light modulator bought for this purpose is the Holoeye Photonics AG Pluto-NIR-11 SLM, type HES 6010.

The Holoeye Pluto-NIR-11

The Holoeye Pluto-NIR-11 is a reflective, phase-only, electrically controlled PAN LCOS SLM. The Pluto driver is used to control a chip with a resolution of Nx × Ny = 1920 × 1080 pixels, where Nx is the number of pixels along the horizontal axis of the display, and Ny along the vertical axis. The pixel pitch, indicating the spacing between the centre of one pixel and the centre of the next, is 8 µm, and the active area of the display measures 15 × 8 mm. The Pluto-NIR-11 is developed to cover a minimum phase stroke of 2π phase retardation in a wavelength range of 420-1100 nm, a range that includes both wavelengths necessary for Rydberg excitation in the magnetic chip experiment. It differs from SLMs for other wavelength ranges in its broadband anti-reflection coating as well as its cell thickness, adapted for the required phase retardation depth [32]. The PAN SLM almost fully exploits the range of refractive indices from ne to no, which can be set for each pixel individually in 256 modulation levels.

An important characteristic of an SLM is its total light efficiency, which is the percentage of intensity that can be used for modulation purposes relative to the total intensity incident on the display. The total light efficiency of our SLM depends on the fill factor (87%), the reflectivity in the 480-780 nm range (67%), and the intensity in the zeroth order relative to the total intensity incident on the display for a non-addressed SLM, which is about 60% [32, 33]. From these numbers we can conclude that the total light efficiency is roughly 56%. The maximum diffraction efficiency is obtained when the polarisation axis of the incident light is along the long display axis of the SLM display [32].

2.1.2 SLM driving scheme and gamma functions

To address the pixels of the microdisplay, the Holoeye Pluto-NIR-11 makes use of a static random-access memory (SRAM) pixel circuitry typical for a digital driving scheme [15]. In this scheme, the incoming video signal is converted on the basis of pulse-code modulation, a method to digitally represent a sampled analogue signal. The liquid crystal molecules respond to the time integral over the applied pulse sequence [34]. The voltages that can be applied are either V_st − V_min or V_st − V_max, where V_st is the static voltage across the electrodes. V_min and V_max can be set with the Holoeye software, and in our case are set to V_min = 0.03 V and V_max = 4.06 V.

As a result of the SRAM driving scheme, the relation between calculated grey value and applied voltage is not a continuous function, in the sense that there does not exist a continuous set of voltages that can be applied. Rather, the relation is based on a set of quantized values, the so-called Look-Up Table (LUT) values. The discrete function that relates grey value to LUT value is called a gamma function. The gamma function can be expressed with higher or lower resolution, depending on the settings used for the addressing scheme. The configuration chosen for our SLM is the 18:6 bit sequence. In this configuration, the grey values are converted into 24 different pulses [34]. The number of pulses determines the number of different LUT values that can be used in the assignment of the SLM pixels. The 18:6 sequence allows for 1216 different LUT values, which means that in principle every grey value can represent a unique LUT value [32]. The 18:6 sequence is the longest sequence and can be addressed only twice within one frame, which leads to a base frequency of 120 Hz [32]. The pulsed nature of the addressing scheme superimposes a phase flicker: the phase level becomes time dependent, with frequency components at multiples of the refresh rate [32]. To diminish phase flicker, it is possible to choose a configuration with a shorter bit sequence and thus faster update rates, but this comes at the cost of the number of available LUT values.


Fig. 2.4: Gamma curves for 780 nm (red) and 480 nm (blue). LUT values are given as a function of grey value. Data from [35].

The response of the liquid crystal molecules to voltage is non-linear. The gamma function can be used to compensate for this non-linearity by calibration, a process that should be carried out for light of different wavelengths separately. The reason for this can be seen from the effect of birefringence. The phase retardation that a light wave experiences due to the birefringent LC material can be expressed as [27]

δ(V, T, λ) = 2π ∆n(V, T, λ) d / λ    (2.1)

where d is the thickness of the slab of LC material. The birefringence, and thus the phase retardation, depends on the applied voltage V, on the temperature T, and on the wavelength λ of the incident light. The temperature dependence comes from the fact that nematic liquid crystals are thermotropic and can change into the crystalline phase at lower temperatures, as well as into the smectic mesophase in some materials. For commercially available SLMs operated at room temperature the temperature dependence is negligible [27]. For a reflective SLM, d should be replaced by 2d.

The gamma function was measured for 780 and 480 nm with an interferometric setup in reference [35]. The setup is similar to a Mach-Zehnder interferometer. A laser beam is expanded to cover the whole SLM surface and passes through a mask with two holes placed in front of the SLM, such that each half of the display is illuminated by a separate sub-beam. The two sub-beams that reflect from the surface are made to interfere in the focal plane of a lens. On one half of the SLM, a constant phase pattern is encoded, while on the other half, a continuously changing but global grey value is set. The relative change in path length in one of the arms results from the difference in optical path length due to the different birefringence in the LC cells on the left and right half. The change in voltage on one half induces a shift in the fringes of the interference pattern, from which the correct gamma function can be calculated by comparison with data for the default gamma function, provided by Holoeye for a wavelength of 850 nm. The results are the two gamma functions shown in Figure 2.4. These can be loaded onto the SLM using the Holoeye configuration software. Note that in this image, the x-axis shows 1024 grey values rather than 256. This is related to correction circuits that are part of the driving scheme. Any phase pattern sent to the SLM driver, however, consists of 256 grey values, and the factor of 4 will be neglected in the remaining sections.


Because only one gamma function can be loaded onto the SLM, a problem arises when multiplexing the SLM for two wavelengths simultaneously. If the calibration curve set to the SLM corresponds to a wavelength λ1, but the light incident on the SLM has a different wavelength λ2, the calculation of the grey values that constitute the phase pattern has to be adjusted accordingly. The phase retardation δ_λ1 that a beam with wavelength λ1 experiences for a certain voltage can be related to the phase retardation that the same voltage imparts on another beam of wavelength λ2 as

δ_λ1 = (λ2 / λ1) δ_λ2    (2.2)

This follows from the inverse relation between δ and λ given in Equation 2.1. To assure a minimum phase stroke of 2π for both wavelengths, λ2 has to be smaller than λ1; in other words, the gamma curve loaded onto the SLM should correspond to the longer of the two wavelengths. Through this procedure, however, it is not possible to exploit the full range of grey values for λ2. Consider the wavelengths of 480 nm and 780 nm, in the case that the 780 nm gamma curve is set to the SLM. Assume that the 780 nm gamma curve utilizes the maximum number of 256 different LUT values. For 480 nm, part of these 256 values are redundant, since a phase stroke of 2π for 780 nm corresponds to a phase stroke of (780/480) · 2π ≈ 3.25π for 480 nm. Instead, the 480 nm beam will use just (256 · 2π)/3.25π ≈ 157 different LUT values. This means that for 480 nm, this method reduces the modulation levels of the system from 256 to 157 grey values, a reduction of almost 40%. This may result in a reduction of the diffraction efficiency. Compare for example the effect of the number of modulation levels on the diffraction efficiency of a blazed grating. An elucidation of the comparison of the SLM to a blazed grating will be provided in Section 2.2.3. For such a blazed (phase) grating, the maximum fraction of power η_max that can be diffracted into a single order is related to the number of equally spaced modulation levels M as [14]

η_max = [(M/π) sin(π/M)]²    (2.3)

According to Equation 2.3, the diffraction efficiency decreases by less than 0.1% when the number of modulation levels is reduced from 256 to 157. Although the modulation levels are not necessarily equally spaced in our system, the act of multiplexing the SLM for two wavelengths may still diminish the quality of patterns generated with 480 nm.
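As an illustration of Equation 2.2 in practice, the sketch below converts a desired 480 nm phase into grey values when the loaded gamma curve is calibrated for 780 nm. The function and constant names are illustrative and not taken from the actual SLM control software.

```python
import numpy as np

# Grey-value conversion for a multiplexed SLM (Eq. 2.2): the gamma curve is
# calibrated for 780 nm, so a phase pattern intended for 480 nm light must be
# rescaled by 480/780 before being written as grey values.

LEVELS = 256          # grey values spanning 2*pi at the calibrated wavelength
LAMBDA_CAL = 780e-9   # wavelength the loaded gamma curve is calibrated for
LAMBDA_IN = 480e-9    # wavelength of the incident beam

def phase_to_grey(phase, lambda_in=LAMBDA_IN, lambda_cal=LAMBDA_CAL):
    """Convert a desired phase (rad, at lambda_in) to grey values on an SLM
    whose gamma curve is calibrated for lambda_cal."""
    phase = np.asarray(phase) % (2 * np.pi)  # phase is only defined mod 2*pi
    grey = phase / (2 * np.pi) * LEVELS * (lambda_in / lambda_cal)
    return np.round(grey).astype(int)

# A full 2*pi stroke at 480 nm only reaches ~157 of the 256 grey levels:
phases = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(phase_to_grey(phases).max())  # 157
```

Note that the reverse assignment (calibrating for 480 nm and rescaling the 780 nm patterns) would require greys above 255 to reach a 2π stroke, which is why the longer wavelength sets the calibration.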

2.2 Phase pattern synthesis

To achieve Rydberg excitation or ground state transitions of individual 87Rb atom clouds in a lattice of microtraps, we need controllable shaping of the light field at the atom chip. A spatial light modulator allows for the dynamic modulation of a laser beam to achieve the desired intensity profile. The computer-controlled microdisplay of the SLM consists of N_x × N_y pixels, and a phase pattern assigns a grey value between 0 and 255 to each of them, imparting a spatially varying phase retardation on the deflected beam. In the next section we review how the modulated light field in the SLM plane is related to the intensity distribution in the focal plane. More specifically, we discuss the effect of the phase pattern's pixelated nature on the intensity distributions in the focal plane.

The intensity distributions that we are interested in consist of multiple high-intensity spots which can address individual lattice sites. There are different types of phase patterns that can generate such intensity profiles. It is possible to create patterns of multiple spots by a linear superposition of simple gratings [36]. This method is limited, however, because the higher orders that inevitably result from working with diffraction gratings produce so-called ghost spots. Based on the phase diffraction grating there exists another, more elaborate method to create multiple-spot patterns, called geometric beam shaping [26]. This method will not be discussed in this thesis,



Fig. 2.5: Schematic showing the simplified SLM setup. A plane wave is incident onto the SLM, represented as a rectangular aperture in an otherwise opaque screen. The SLM is located at a distance of one focal length f in front of the lens, and the plane of observation overlaps with the focal plane of the lens.

because there exists another algorithm that yields better results: the iterative Fourier transform algorithm (IFTA). The problem with the geometric beam shaping method is twofold: (1) the method is only applicable to certain intensity distributions, namely those of the separable kind, as the calculations for non-separable problems become very complicated very quickly. The difference between separable and non-separable distributions will be discussed in the next section; for now it suffices to know that we are in general interested in the creation of non-separable distributions. (2) Any errors in the description of the input beam (i.e. any deviations of the laser beam from a perfect Gaussian) result in errors of similar magnitude in the focal plane. We will thus generate phase patterns by the IFTA. Phase patterns that resemble simple gratings are still used, however, for simple translations in the focal plane.

Calculation of the phase pattern is reviewed in Sections 2.2.2 and 2.2.3, in which we address two complementary computation methods that are employed by the SLM control software: the iterative Fourier transform algorithm (IFTA) and phase synthesis by blazed gratings.

2.2.1 Fourier description of the imaging system

The SLM is employed to generate a target intensity distribution at a desired location. The setup built for the generation of intensity patterns with the SLM basically consists of three elements: an SLM, a lens and a light source. A detailed description of the setup is provided in Section 3. For now, we will consider a simplified version of the setup, which consists of a collimated monochromatic laser beam which illuminates the entire surface of a transmissive SLM and gets directed onto a positive lens. This is depicted in Figure 2.5. The SLM is positioned at a distance of one focal length f in front of the lens, and the plane of observation at a distance f behind the lens. We briefly introduce a mathematical description of this system, thereby following the reasoning of Van Bijnen in reference [26].

The light source in our setup is a monochromatic and linearly polarised laser beam. Considering the scalar description of the electromagnetic light field in vacuum, we can represent it by a complex amplitude U_0(x′, y′) depending on spatial variables x′ and y′ as

U_0(x′, y′) = |U_0| e^{−i k·r}    (2.4)

where |U_0| is the amplitude and e^{−i k·r} the phase of the light field. The SLM can be regarded as a rectangular aperture in an otherwise opaque screen, which can be represented by a boxcar function as

Π(2x′/L_x, 2y′/L_y) = 1 if |2x′/L_x| < 1 and |2y′/L_y| < 1, and 0 otherwise    (2.5)

where x′ and y′ denote spatial variables in the SLM plane. The SLM imposes a phase retardation on the incident plane wave, which adds a term e^{iφ(x′,y′)} to the description of the light field directly behind the aperture. We thus have a description of the light field U_i at that point:

U_i(x′, y′) = U_0(x′, y′) e^{iφ(x′,y′)} Π(2x′/L_x, 2y′/L_y)    (2.6)

Assuming that the plane of observation is located in the near field, we describe the light field that diffracts from the SLM under the Fresnel approximation [26]. The light field in the focal plane of the lens is then related to the light field U_i in the SLM plane by a Fourier transform (FT):

U_f(x, y) = F[U_i(x′, y′)](u, v)    (2.7a)
U_i(x′, y′) = F⁻¹[U_f(x, y)]    (2.7b)

evaluated at the frequencies u = x/λf and v = y/λf, with λ the wavelength of the light and f the focal length of the lens. According to reference [26], a displacement of the SLM from its position z = 0 in the setup only adds a quadratic phase to this description, which can be ignored to good approximation.

Because the SLM display is pixelated, the light field in the SLM plane can be approximated by a series of delta peaks centered at the pixel centers [26]. This approximation holds as long as the pixel dimensions are small compared to the total size of the display, which is the case for our SLM, with its pixel size of 8 µm and display dimensions of 15 × 8 mm. We can then approximate Equation 2.7a by considering Discrete Fourier Transforms (DFTs):

U_f(x, y) = DFT[U_i(x′, y′)]    (2.8a)
U_i(x′, y′) = DFT⁻¹[U_f(x, y)]    (2.8b)

Let us illustrate this relation for the simplified example of a one-dimensional light field in a one-dimensional SLM plane with complex amplitude G = |U_i| e^{iφ(x′)}, where |U_i| is the amplitude of the field and φ(x′) the phase. In the pixelated approach the complex amplitude is described by a discretized one-dimensional complex array G_k = U_k e^{iφ_k} of length N, where N is the dimension of the SLM and k is the array index. This field is related to the complex amplitude g = |U_s| e^{iψ(x)} in the focal plane as

g_m = Σ_{k=0}^{N−1} G_k e^{−2πi km/N}    (2.9a)
G_m = Σ_{k=0}^{N−1} g_k e^{+2πi km/N}    (2.9b)

where m denotes the m-th element in the discretized array representing either g or G, and the summation runs over all elements k up to N − 1. As a direct result of the pixelated nature of the SLM display, the description of the complex amplitude in the focal plane is pixelated as well. Although the actual physical electromagnetic field in the focal plane is continuous, the pixelated "drawing area" in the focal plane determines the way in which computations are employed. The spatial intensity distribution in the focal plane must be defined as a discretized array g_k, similar to G_k.


As Equation 2.9a shows, each element g_m in the focal plane depends on all values G_k in the SLM plane. Note that although the complex amplitude G_k has an amplitude and a phase term, only the phase can be set by the SLM. We shall return to this aspect in the next section.
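The phase-only constraint can be made concrete with a short one-dimensional numerical sketch (array sizes are illustrative): the amplitude in the SLM plane is fixed by the illuminating beam, so the phase alone determines how the fixed total power is distributed over the focal units.

```python
import numpy as np

# Phase-only modulation in one dimension: |G_k| is fixed by the illuminating
# beam (here uniform), and only the phase phi_k can be set. A flat phase puts
# all power in the zeroth order; any other phase merely redistributes the same
# total power over the focal units (Parseval's theorem for the DFT).

N = 64
rng = np.random.default_rng(0)

for phi in (np.zeros(N), rng.uniform(0, 2 * np.pi, N)):
    G = np.exp(1j * phi)               # phase-only SLM field, |G_k| = 1
    I = np.abs(np.fft.fft(G)) ** 2     # focal-plane intensity (Eq. 2.9a)
    print(round(I[0] / I.sum(), 3), round(I.sum()))  # 0th-order fraction, total
```

The flat phase gives a zeroth-order fraction of 1.0, while the random phase spreads the same total power (here N² = 4096 in the unnormalized DFT convention) over the drawing area.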

The pixels of the drawing area in the focal plane constitute the smallest elements in the focal plane and will therefore be referred to as focal units (FUs). As a result of the FT-based relation between g and G, the highest spatial frequency in the focal plane is related to the lowest frequency in the SLM plane: the aperture of the SLM itself. The exact dimensions of the focal units can thus be derived by considering a diffraction-limited spot in the focal plane. A diffraction-limited spot is achieved by illuminating the entire SLM display, encoded with zero phase, by a plane wave. The spatial intensity distribution for this system can be calculated by means of Fraunhofer diffraction from a rectangular aperture, which holds in the focal plane of a positive lens [37]. The Fraunhofer diffraction pattern of a rectangular aperture is a sinc, defined as sinc(a) = sin(πa)/πa. From Equation 2.6 with φ(x′, y′) = 0, evaluated at u = x/λf, v = y/λf, we find

F[Π(2x′/L_x, 2y′/L_y)] = sinc(x L_x / λf) sinc(y L_y / λf)    (2.10)

and

I(x, y, f) ∝ sinc²(x/∆x) sinc²(y/∆y)    (2.11)

where

∆x = λf / L_x,  ∆y = λf / L_y    (2.12)

Here, ∆x and ∆y are the dimensions of a diffraction-limited spot in the focal plane. They determine the diffraction-limited spot size for the simplified setup presented here, and thus the size of the focal units. The focal units are the focal-plane equivalents of the pixels at the SLM surface. We can only define the position of spots in the focal plane in terms of these focal units. In other words, the focal units define the resolution with which we can move spots in the focal plane. If there is a magnifying optical element between the SLM and the lens, then the magnification has to be taken into account in the calculation of the FUs. The effective dimensions of the SLM surface in such a system depend on the magnification as L_x,effective = M · L_x, and similarly for L_y. In our

experimental setup, a telescope system with a magnification of M = 2 is placed between the SLM and the lens. The SLM in our setup has a resolution of 1920 × 1080 pixels, and its dimensions are L_x = 15 mm in width and L_y = 8 mm in height. The FUs for this experimental setup are given in Table 2.1 for the wavelengths of 480 nm and 780 nm. In addition, we present the total size of the drawing area in the focal plane, which is calculated as the number of pixels on the SLM display multiplied by the size of the FUs.

Tab. 2.1: Focal units for a system with a magnification of M=2.

        480 nm    780 nm
FU_x    0.3 µm    0.49 µm
FU_y    0.56 µm   0.91 µm
A_x     576 µm    941 µm
A_y     605 µm    983 µm
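As a cross-check, the focal units of Equation 2.12 can be computed directly. The focal length is not stated in this section; f = 18.75 mm is an assumed value, chosen here because it reproduces the tabulated numbers.

```python
# Sketch reproducing the focal units of Table 2.1 from Eq. (2.12), with the
# telescope magnification M included. The focal length f is an assumption
# (not given in this section), chosen to reproduce the tabulated values.

M = 2                    # telescope magnification between SLM and lens
L_x, L_y = 15e-3, 8e-3   # SLM dimensions (m)
N_x, N_y = 1920, 1080    # SLM resolution (pixels)
f = 18.75e-3             # lens focal length (m) -- assumed value

for lam in (480e-9, 780e-9):
    fu_x = lam * f / (M * L_x)   # horizontal focal unit (m)
    fu_y = lam * f / (M * L_y)   # vertical focal unit (m)
    # Drawing area = pixels x focal unit. Table 2.1 lists the area computed
    # from the rounded FU values, so the last digits may differ slightly.
    print(f"{lam * 1e9:.0f} nm: FUx = {fu_x * 1e6:.2f} um, "
          f"FUy = {fu_y * 1e6:.2f} um, "
          f"Ax = {N_x * fu_x * 1e6:.0f} um, Ay = {N_y * fu_y * 1e6:.0f} um")
```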

As a final remark, we stress that the size of the focal units is determined by the size of the SLM as defined in the computations. As pointed out in Section 1, we will programmatically split the SLM display into two halves to multiplex the SLM for two wavelengths. Although one could argue



Fig. 2.6: Schematic of the Gerchberg-Saxton algorithm. The complex amplitude is propagated back and forth between the SLM plane and the focal plane by an iterative series of Fourier transforms (FTs). The algorithm starts with an initial random phase ψ_0(x, y). The inputs to the algorithm are the amplitudes |U_i| and |U_s| in the SLM plane and the focal plane, respectively. |G| and |g| represent the complex amplitudes in the SLM plane and the focal plane, respectively. In each plane a constraint is set on the amplitude of the field by replacing the amplitudes that result from an FT by the input amplitudes.

that this reduces the effective width of the SLM for a single laser beam by a factor of two, this does not have to be taken into account for the size of the FUs, as long as the phase computations described in the next section are computed as normal. By "as normal" we mean that the entire area of the SLM display is used for the calculations. It also means that the size of the FUs can be decreased by pretending the SLM is bigger than it actually is. This method will be discussed in Section 6.

2.2.2 The Iterative Fourier Transform Algorithm

Now that we know that the light field represented by the complex amplitude G = |U_i| e^{iφ(x′,y′)} in the SLM plane is related to the light field g = |U_s| e^{iψ(x,y)} in the focal plane by a DFT, we are concerned with the synthesis of a phase φ(x′, y′) that projects the input amplitude |U_i| onto the desired amplitude |U_s|. This phase can readily be calculated by means of an inverse FT if the phase in the focal plane is known. However, since we are only interested in the intensity distribution I ∝ |U_s|², the phase ψ(x, y) in the focal plane can be considered a free parameter [38]. Setting ψ(x, y) to a constant would allow for a simple generation of G. However, the FT of a complex amplitude with a constant phase generally yields a function whose exponent contains both real and imaginary terms [36]. As we are restricted to phase-only modulation, all terms of the exponent should be imaginary, such that they cancel out when computing the intensity I. Therefore, the phase in the focal plane is kept as a free parameter, and we optimize the phase φ(x′, y′) in the SLM plane. A well-known method of finding an optimal solution to this phase-retrieval problem is the Iterative Fourier Transform Algorithm. There exist many types of IFTAs, all based on the original Gerchberg-Saxton algorithm developed by Gerchberg and Saxton in the 1970s [39], which we shall discuss here first.

The Gerchberg-Saxton (GS) algorithm consists of a series of iterations in which the complex amplitude is mathematically propagated back and forth between the SLM plane and the focal plane



Fig. 2.7: The Iterative Fourier Transform Algorithm (IFTA) as implemented by Van Bijnen in the SLM control software for the generation of phase patterns. The complex amplitude is propagated between the SLM plane and the focal plane. Inputs of the algorithm are the desired amplitude |U_s| in the focal plane and the input amplitude |U_i| in the SLM plane. The IFTA is divided into the Phase Freedom (PF) stage and the subsequent Amplitude Freedom (AF) stage. The PF stage consists of m × n iterations and the AF stage of i iterations. The indices n, m and i are counters that increment by one after each corresponding iteration. The IFTA starts in the PF stage with an initial random phase ψ_0 and generates g_{0,0} = |U_s| e^{iψ_0(x,y)}. The complex amplitude is propagated to the SLM plane by an inverse Discrete Fourier Transform (DFT). The amplitude |G′| is replaced by the input amplitude |U_i|, and the resulting complex amplitude G is propagated to the focal plane with a DFT. Subsequently, the amplitude |g′| is replaced by the desired input amplitude |U_s|. This process is repeated n = 30 times and subsequently restarted with a new, random initial phase. After each series of 30 iterations, the diffraction efficiency (DE) of loop m is compared to the DE of the previous loop m − 1, and the complex amplitude with the highest DE is saved as g_max. After the series of 30 loops has been performed m = 100 times, the complex amplitude in the focal plane of highest DE, g_max, is taken as the input for the Amplitude Freedom stage. During this stage, a similar Gerchberg-Saxton loop is performed with i = 30 iterations. As opposed to the PF stage, the constraint in the focal plane is applied only within a window W in the local area around the intensity pattern. After i = 30 iterations, the phase of the complex amplitude G′ is sent to the SLM.


through a series of FTs and inverse FTs. In each plane a constraint is set on the amplitude of the field. We follow Van Bijnen and Lüpkens [26, 40] and define two sets H_1 and H_2 that represent the constraints on the complex amplitude in the SLM plane. The elements corresponding to these sets are G ∈ H_1 and G′ ∈ H_2, which satisfy G = |U_i| e^{iφ} and F[G′] = |U_s| e^{iψ}. An element G ∈ H_1 ∩ H_2 would satisfy both conditions and therefore be a solution to our problem. In general, such a function does not exist, and instead we search for one that is a solution to good approximation [26].

A schematic of the Gerchberg-Saxton algorithm is shown in Figure 2.6. The inputs to the algorithm are the amplitudes |U_i| and |U_s| in the SLM plane and the focal plane, respectively. The GS algorithm starts with an initial, random phase ψ_0(x, y), which is combined with |U_s| to generate a complex amplitude g in the focal plane. Subsequently, G′ = F⁻¹[g] is generated and the constraint on the amplitude is imposed on the description of the field by replacing the resulting amplitude |G′| by the input amplitude |U_i|. A similar constraint is applied in the focal plane after the complex amplitude G is propagated back to the focal plane with an FT. Here, |g′| is replaced by the target amplitude |U_s|. By this iterative process the complex amplitude eventually converges to a solution [26].

There are many local minima that can result from a phase calculation [26]. Therefore, the IFTA will typically yield a different phase pattern each time it is computed. The quality of the pattern as well as the speed of the calculations can be improved by considering some finite area or "window" W in the focal plane within which the pattern should exactly match U_s, and letting the amplitude outside it be free [26]. This is useful if the area of interest is smaller than the drawing area in the focal plane, which is typically the case. The GS algorithm is divided into two stages. During the first stage, the amplitude is set to exactly match the desired amplitude U_s, both within and outside of W. This stage exploits the phase freedom of the light field in the focal plane. During the second stage, the amplitude outside of the window is set to be a free parameter, exploiting the amplitude freedom outside of the area of interest. This enlarges the set of phase patterns that correspond to the desired signal, which leads to a better quality of the intensity patterns [26]. The two stages of the Iterative Fourier Transform Algorithm (IFTA) are called the phase freedom (PF) and amplitude freedom (AF) stages. A more detailed explanation of the IFTA as implemented by Van Bijnen in the SLM control software can be found in Figure 2.7, with a description in the caption.
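The basic Gerchberg-Saxton loop described above can be sketched in a few lines. This is an illustration only, not the actual SLM control software; the array sizes, spot positions and input beam are made up.

```python
import numpy as np

# Minimal 1D Gerchberg-Saxton loop (PF-stage style: the target amplitude is
# enforced everywhere in the focal plane). Illustrative sizes and targets.

rng = np.random.default_rng(1)
N = 256
k = np.arange(N)

U_i = np.exp(-((k - N / 2) ** 2) / (2 * (N / 4) ** 2))  # Gaussian input beam
U_s = np.zeros(N)
spots = [60, 100, 140, 180]
U_s[spots] = 1.0                                        # target: four spots

g = U_s * np.exp(1j * rng.uniform(0, 2 * np.pi, N))     # random initial phase
for _ in range(30):
    G = np.fft.ifft(g)                     # propagate to the SLM plane
    G = U_i * np.exp(1j * np.angle(G))     # constraint: enforce input amplitude
    g = np.fft.fft(G)                      # propagate to the focal plane
    g = U_s * np.exp(1j * np.angle(g))     # constraint: enforce target amplitude

phi = np.angle(np.fft.ifft(g))             # phase pattern to encode on the SLM
I = np.abs(np.fft.fft(U_i * np.exp(1j * phi))) ** 2
print(I[spots].sum() / I.sum())            # fraction of power landing in the spots
```

After a few tens of iterations most of the power is concentrated in the target spots; the residue outside the spots is what the amplitude-freedom stage of the full IFTA pushes out of the window of interest.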

Spot pattern generation

For the purpose of individually addressing atom clouds on a chip, the IFTA is deployed to create intensity patterns of arbitrarily positioned spots. The spots referred to here are high-intensity regions with a Gaussian intensity profile in the focal plane of the lens. The two-dimensional target amplitude |U_s| then describes a spot pattern. The input amplitude |U_i| represents the Gaussian laser beam that illuminates the SLM surface in the experimental setup. It is thus defined as the square root of a Gaussian intensity distribution. As was discussed in Section 2.2.1, these amplitudes, as well as the resulting complex amplitudes g and G, are expressed as discretized arrays.

To illustrate this, let us consider a two-dimensional SLM with a resolution of 12 × 15 pixels and the corresponding two-dimensional drawing area in the focal plane, as depicted in Figure 2.8 from reference [26]. The figure depicts the two input amplitudes |U_i| and |U_s|, where the latter represents a 4 × 3 spot pattern. The elements of the input amplitude |U_i| have a value between 0 and 255. The center of the Gaussian profile can be adjusted to match the position of the laser beam on the display; in this image, the beam is centered on the SLM surface. The elements of the amplitude |U_s| are restricted to a value of either 0 or 1, where a 1 indicates the position of a spot, and 0 is set everywhere else. The position of the upper left spot is defined relative to the undeflected zeroth order, positioned by definition at (n, m) = (0, 0) in Figure 2.8(b). In this example, the upper left spot is displaced from the zeroth order by four focal units in the horizontal direction and four focal units in the vertical direction. The red dashed outline depicts the window W described in



Fig. 2.8: Two arrays illustrating the input amplitudes |U_i| (left) and |U_s| (right), representing the square root of a Gaussian intensity profile and a 4 × 3 spot pattern, respectively. The red dashed outline in the right image indicates the window W. Image from [26].

the previous section. For future reference, we define here the two sets of parameters that are used to set the position of the spot pattern with respect to the zeroth order: (1) the parameters "sigma x offset" and "sigma y offset" define the offset of the window W from the zeroth order; (2) the parameters "spot x offset" and "spot y offset" define the offset of the spots relative to the window W. In this image, all four parameters have a value of 2. Furthermore, we define the parameters "x spacing" and "y spacing" that determine the spacing between the spots in the horizontal and vertical direction, measured in focal units. In this example, both parameters have a value of 1. Recall that FUs have different dimensions in the x- and y-direction. If the input amplitudes described here were used to compute a phase pattern by the IFTA method, encoding this phase pattern onto the SLM would thus result in an intensity distribution with unequal spacings in the two directions. Note that spots can only be displaced in steps of focal units, and that a spacing of one focal unit is the minimum allowed spacing.

In general, computation of the phase pattern by means of an IFTA procedure with two 2D input amplitudes requires a series of 2D DFTs, rather than the 1D DFTs presented in Equation 2.9a. However, in many cases the 2D input amplitudes can be described as the product of two 1D arrays. The amplitudes are then said to be of the separable kind. An intensity that is separable is of the form [26]

I(x, y) = |U(x, y)|² = u_1(x) u_2(y)    (2.13)

By describing a 2D array as the product of two 1D arrays, one can dramatically increase the computation speed of the IFTA.

In the example presented above and in Figure 2.8, both of the amplitudes |U_i| and |U_s| are of the separable kind. For the case of |U_i|, this can be seen from the description of the two-dimensional intensity profile of a Gaussian beam, given by [41]

I = I_0 exp(−2x²/w_x²) exp(−2y²/w_y²)    (2.14)

which satisfies Equation 2.13 with

u_x = √I_0 exp(−2x²/w_x²)    (2.15)
u_y = √I_0 exp(−2y²/w_y²)    (2.16)


Similarly, the spot amplitude |U_s| of Figure 2.8 can be written as the product of two one-dimensional arrays u_x[k] of size N_x = 15 and u_y[l] of size N_y = 12:

u_x[k] = [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0]    (2.17)
u_y[l] = [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0]    (2.18)

where k and l are indices indicating the k-th and l-th element of the array, with 0 ≤ k ≤ N_x − 1 and 0 ≤ l ≤ N_y − 1.
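The separability of the example can be made explicit with a short sketch: the 2D target amplitude is the outer product of the two 1D arrays, so its 2D DFT factorizes into two cheap 1D DFTs, which is where the speed-up comes from.

```python
import numpy as np

# Separable description of the 4x3 spot target of Fig. 2.8 (Eqs. 2.13,
# 2.17, 2.18): the 2D array is the outer product of two 1D arrays.

u_x = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0])  # Nx = 15
u_y = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0])           # Ny = 12

U_s = np.outer(u_y, u_x)       # 2D target amplitude, shape (Ny, Nx)
print(U_s.shape, U_s.sum())    # (12, 15) 12 -- twelve spots in a 4x3 grid

# The 2D DFT of a separable array is the outer product of the 1D DFTs,
# so two 1D transforms replace one full 2D transform:
print(np.allclose(np.fft.fft2(U_s),
                  np.outer(np.fft.fft(u_y), np.fft.fft(u_x))))  # True
```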

2.2.3 Phase diffraction gratings

In the previous section we have introduced several parameters with which one can set the position of a spot in the focal plane. In addition to this method, the SLM can introduce a simple displacement in the focal plane by imposing a phase grating on the light field. The working principle of the phase grating is analogous to that of a blazed diffraction grating. The phase pattern imprinted on the SLM imposes a gradual change in phase on the deflected light field along the length of the SLM display. Referring to the Fourier description of the imaging system in Equations 2.6 and 2.7a, this adds an exponential term e^{2πi(ax′+by′)} to the description of the complex amplitude in the SLM plane [26]. We then have

U_f(x, y) = F[e^{2πi(ax′+by′)} U_i(x′, y′)](u, v) = F[U_i(x′, y′)](u − a, v − b)    (2.19)

where u, v denote the frequencies u = x/λf, v = y/λf as before. The additional phase gradient in the SLM plane thus results in a spatial shift of the intensity pattern in the focal plane, proportional to the slope of the phase. If U_i(x_0, y_0) already defines an intensity distribution in the focal plane, for example a spot pattern computed by the IFTA method, then the linear phase shift is imposed on the phase φ(x_0, y_0) of U_i(x_0, y_0), and the spot pattern is shifted away from the zeroth order. If φ(x_0, y_0) is constant, the additional linear phase term constructs a displaced sinc-spot in the focal plane.
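The shift property of Equation 2.19 can be verified numerically in one dimension. The sketch below uses an integer shift so that the result is exact on the discrete grid; the grid size, beam width, and shift are illustrative choices, not parameters of the actual setup:

```python
import numpy as np

# Numerical check of Equation 2.19: a linear phase ramp in the SLM plane
# shifts the focal-plane (Fourier) pattern by `a` frequency bins.
N, a = 256, 7                                   # grid size, shift in bins
x = np.arange(N)
U_i = np.exp(-((x - N / 2)**2) / (2 * 20**2))   # Gaussian input amplitude

U_f      = np.fft.fft(U_i)                                   # no ramp
U_f_ramp = np.fft.fft(U_i * np.exp(2j * np.pi * a * x / N))  # with ramp

# The ramped spectrum equals the original spectrum shifted by `a` samples.
shifted = np.roll(U_f, a)
print(np.allclose(U_f_ramp, shifted))   # True
```

For a non-integer slope the focal-plane pattern lands between the discrete grid points, which is exactly the sub-focal-unit translation discussed below.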

Whereas the position of spots created by the IFTA procedure is determined in steps of focal units, the phase diffraction grating allows in theory for translations with submicron resolution. To see why, we consider the case of a strictly horizontal displacement in the focal plane constituted by some linear phase φ = ax. Because the pixels of the SLM display are assigned a grey value between 0 and 255, the phase pattern is calculated as mod 256 of the linear phase term. This results in a phase pattern consisting of several periods of length d along the length of the display, of which the last one might be cut off to fit the SLM dimensions. The values that are assigned to each pixel are calculated as

H[k] = \frac{k}{N_x} \cdot 256 \cdot X \quad (2.20)

where N_x is the number of pixels along the horizontal axis of the SLM display, and k/N_x are the normalized pixel coordinates, with k ranging from 0 to N_x. Here, X is a dimensionless number which sets the displacement in the focal plane in focal units, calculated as

X = \frac{x}{FU_x} \quad (2.21)

with x the displacement in µm, and FU_x the size of the focal unit in the x-direction in µm. If x

would only take values in steps of focal units, then X would by definition be an integer, and all periods of the phase pattern would start off with a pixel value of 0 and end with a pixel value of 255. However, by allowing the desired displacement x to take on a decimal value and rounding off the values of H[k] to integers, smaller displacements can be realised by letting each subsequent period of the phase pattern have a slightly different slope. Thus, because an SLM-controlled grating can dynamically change not only the period but also the slope of its structure, translations in the focal plane can in theory be achieved with submicron resolution. This is in contrast with the discrete step size mentioned in Section 2.2.1. The final resolution of the system is not only defined by the size of the SLM, which determines the focal units, and the number of pixels, but also by the size of the beam: the bigger the surface covered by the laser beam, the more periods can contribute to the final interference pattern and the more precise the final translation in the focal plane.
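A minimal sketch of the recipe of Equations 2.20 and 2.21 for a non-integer displacement; the pixel count, focal-unit size, and displacement below are invented numbers, not the parameters of the actual SLM:

```python
import numpy as np

# Grey-value grating for a sub-focal-unit displacement (Equation 2.20).
Nx = 800                  # number of pixels along the horizontal axis
x_um, FU_x = 25.4, 10.0   # desired shift and focal-unit size, in micrometres
X = x_um / FU_x           # displacement in focal units (Eq. 2.21): 2.54

k = np.arange(Nx + 1)                              # pixel index, 0..Nx
H = np.round(k / Nx * 256 * X).astype(int) % 256   # grey values mod 256

# Because X is not an integer, the wrapped periods do not all start at 0
# and end at 255; their slightly varying slopes encode the fine shift.
print(H[:5])
```

Rounding H[k] to integer grey values is what limits the achievable precision in practice, alongside the finite number of pixels illuminated by the beam.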

In addition to a phase diffraction grating, the SLM control software written by Van Bijnen allows for the generation of a lens phase. The lens phase mimics the phase modulation properties of a lens, and as such 'adds' a lens to the experimental setup. The lens phase is separable in the x- and y-direction, and is calculated as (here shown for x)

\mathrm{LensPhase}_x = x^2 \left( \frac{\pi}{\lambda(\mathrm{Lens}_x + f)} - \frac{\pi}{\lambda f} \right) \frac{256}{2\pi} \quad (2.22)

and similarly for y. Here, f is the focal length of the lens already present in the experimental setup, and Lens_x is the focal length that the user wants to add to f. The factor 256/2π converts the computed phase delay into the correct grey value. Finally, x is a coefficient that changes linearly in value from −0.5L_x to 0.5L_x, in N_x steps. The formula is thus weighted with a parabolic function x² centered at the SLM surface center, where it has a value of 0.
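Equation 2.22 can be sketched as follows; all numbers (pixel count, display width, wavelength, and focal lengths) are illustrative assumptions rather than the values of the actual setup:

```python
import numpy as np

# Lens-phase grey values of Equation 2.22 (x-direction only).
Nx, Lx = 800, 16.0e3        # pixels and display width in micrometres
f, Lens_x = 250e3, 50e3     # existing and added focal lengths, in micrometres
lam = 0.780                 # wavelength in micrometres

# Coordinate running linearly from -0.5*Lx to 0.5*Lx in Nx steps.
x = np.linspace(-0.5 * Lx, 0.5 * Lx, Nx)
phase = x**2 * (np.pi / (lam * (Lens_x + f)) - np.pi / (lam * f))
grey = phase * 256 / (2 * np.pi)   # phase delay converted to grey value

# The pattern is a parabola: zero near the display centre, symmetric,
# and largest in magnitude at the edges (wrapped mod 256 on the device).
print(abs(grey[Nx // 2]) < abs(grey[0]))   # True
```

On the device the grey values would additionally be wrapped mod 256, exactly as for the blazed grating of the previous section.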

2.3 Wavefront correction

Besides its ability to effectively modulate the phase of a laser beam, the spatial light modulator is a useful device for wavefront correction. As the name suggests, wavefront correction is a method characterised by the correction for deformations in a wavefront, usually that of a laser beam. A wavefront is the surface over which a light field has a constant phase. In this thesis we choose to implement a wavefront correction scheme because a deformed wavefront of our laser beams can have a negative effect on the quality of intensity patterns created by the SLM. Aspects of pattern quality are for example the overall intensity in the first order, the shape of the spots, and the uniformity of intensity between the spots. Ideally, the wavefronts of our laser beams are perfectly flat if no phase pattern is applied to the SLM. In that case, we have complete control over the shape of the wavefront in the focal plane. However, the wavefront of a laser beam can get deformed as a result of optical aberrations caused by distorting elements in the optical setup. Optical elements then introduce phase deformations on the light field. Examples of optical aberrations are spherical aberration, which results from imperfectly polished aspherical lenses, and the imperfect flatness of a mirror. The latter includes curvature in the backplane of the SLM itself. The backplane curvature of our SLM was measured in reference [35]. In addition, the wavefront gets deformed when a laser beam is not perfectly collimated.

The wavefront correction scheme consists of two procedures: first we reconstruct the wavefront of the laser beam, and subsequently we correct for the obtained deformations. Reconstruction of the wavefront is achieved by using the SLM to mimic the Shack-Hartmann (SH) wavefront sensor, a frequently used device for the estimation of optical aberrations in an imaging system. The SH sensor uses a microlens array to divide an incoming wavefront into smaller beams and focus them into an array of spots. The displacement of the spots from the optical axis is proportional to the slope of the wavefront at that point. The wavefront is then reconstructed from the set of discrete slope measurements. There exist various reconstruction approaches, which can be categorized as being either zonal or modal [42]. The zonal reconstruction method estimates the phase value at a local zone. The modal method expresses the wavefront as a set of polynomials and retrieves the weighting coefficients that describe it best. Both reconstruction methods are used for


determination of the wavefront in our experiment. Correction for the obtained aberrations is realised by subtracting the measured phase deformations from the wavefront of the laser beam. In order to achieve this, we invert the phase of the deformations and use the SLM to impose the inverted phase information onto the laser beam.
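The correction step can be summarised in a minimal sketch (array sizes and the random "measured" aberration are invented for illustration): the inverted deformation is added, modulo 2π, to whatever phase pattern the SLM already displays.

```python
import numpy as np

# Minimal sketch of phase-inversion correction: the beam reflected off the
# SLM accumulates (displayed pattern) + (aberration); displaying the
# inverted aberration therefore flattens the outgoing wavefront.
rng = np.random.default_rng(0)
measured = rng.uniform(0, 2 * np.pi, size=(4, 4))  # reconstructed aberration
pattern  = np.zeros((4, 4))                        # desired SLM phase pattern

corrected = np.mod(pattern - measured, 2 * np.pi)  # pattern plus inversion

# The residual phase (corrected + measured) is flat up to multiples of 2*pi;
# np.angle of the unit phasor removes those multiples.
residual = np.angle(np.exp(1j * (corrected + measured)))
print(np.allclose(residual, 0))   # True: the deformation cancels
```

In practice the phase arrays would be converted to grey values via the factor 256/2π, as for the lens phase above.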

In the next section we detail the working principles of the Shack-Hartmann wavefront sensor, and in Section 2.3.2 we explain how the SH sensor is mimicked by an SLM. The last two sections of this chapter provide the theoretical framework for implementation of wavefront reconstruction by the zonal and modal approach.

2.3.1 The Shack-Hartmann wavefront sensor

In the year 1900 the German physicist Johannes Hartmann published an article in which he described a method to test the trajectory of light rays through a collimator objective lens [43]. His test consists of a mask with two pinholes which is set in front of the lens, after which images are taken on a photographic plate at different distances from the focal plane. In the case of a perfect lens, the distance between the positions of the two beams on the photographic plate should be linearly related to the distance of that plate from the focal plane. Deviations from this linear relation give information about the aberrations in the system. The Hartmann screen test, as it is nowadays referred to, allowed Hartmann to successfully identify the problematic lens of his telescope system at the institute of physics in Potsdam, where he was professor at the time [44]. However, there are two practical drawbacks to his method that hinder a more versatile usage. First, the Hartmann screen test blocks almost all light, resulting in low intensities in the focal plane. Second, the lens cannot be used for other purposes while measuring the wavefront.

A next step in the development of the Hartmann screen test did not arise from the field of physics but rather from a problem posed by the Armed Forces in times of the Cold War [45]. The US Air Force saw need to improve images of satellites taken with ground-based telescopes, which suffered from aberrations caused by atmospheric turbulence. To measure the optical transfer function of the atmosphere, physicist Roland Shack adapted the Hartmann screen test to overcome both above-mentioned deficiencies. Figure 2.9(a) shows a schematic representation of the first Shack-Hartmann (SH) wavefront sensor. A beam splitter is placed after the collimator lens of the telescope, and an array of contiguous microlenses is placed in the optical beam path of the reflected beam. The lens array focusses a set of small beams down onto the focal plane, where the image is recorded with a sensor [46]. Because the light is focussed, high photon densities can be achieved. Moreover, the use of a beam splitter constructs a real-time wavefront sensor that can measure aberrations simultaneously with image-taking.

The displacements in the focal plane measured by a two-dimensional sensor are shown on the right in Figure 2.9(b). When a planar beam is incident on the microlens array, all individual beams will be focussed down onto the optical axis of the corresponding microlens. In the case of a distorted wavefront, the image points are displaced.
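The relation between spot displacement and local wavefront slope can be sketched as follows; the microlens focal length and the displacement values are invented numbers, and for small aberrations the displacement is simply the focal length times the local slope:

```python
import numpy as np

# Shack-Hartmann slope estimate: each spot displacement, divided by the
# microlens focal length, gives the local wavefront slope dW/dx in front
# of the corresponding lenslet (small-angle approximation).
f_ml = 5.0e3                                # microlens focal length (um)
dx = np.array([[0.0, 1.2],
               [-0.6, 0.3]])                # measured spot shifts (um)

slopes = dx / f_ml                          # dimensionless local slopes

# A planar incident beam (dx = 0 everywhere) yields zero slope; larger
# displacements indicate a more strongly tilted local wavefront.
print(slopes.max())
```

The wavefront itself is then obtained by integrating these discrete slopes, which is exactly what the zonal and modal reconstruction methods of the following sections do.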

2.3.2 SLM for wavefront sensing

In our experiments the Shack-Hartmann wavefront sensor is substituted by an SLM. This can be realised in various ways. The SLM can be encoded such that it mimics an array of microlenses by creating a series of lens phases [48]. Another option is to replace the lens array of the SH wavefront sensor by an SLM and one single, large lens [23, 49]. This is the method of choice for our optical system because the configuration is compatible with our experimental setup. Details about the setup can be found in Section 3.

In the large-lens configuration, the SLM serves as a programmable moving aperture which scans the incoming wavefront. For this purpose, the SLM is imprinted with a phase pattern consisting of an overall zero phase, except for a small circular area that is covered with a blazed grating. An
