
Investigation of alternative pyramid wavefront sensors



by

Maaike van Kooten

B.Sc., University of Victoria, 2014

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of

MASTER OF APPLIED SCIENCE

in the Department of Mechanical Engineering

© Maaike van Kooten, 2016

University of Victoria

All rights reserved. This dissertation may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.


Investigation of Alternative Pyramid Wavefront Sensors.

by

Maaike van Kooten

B.Sc., University of Victoria, 2014

Supervisory Committee

Dr. Colin Bradley, Supervisor

(Department of Mechanical Engineering)

Dr. Jean Pierre Veran, Outside Member (Department of Physics and Astronomy)


Supervisory Committee

Dr. Colin Bradley, Supervisor

(Department of Mechanical Engineering)

Dr. Jean Pierre Veran, Outside Member (Department of Physics and Astronomy)

ABSTRACT

A pyramid wavefront sensor (PWFS) bench has been set up at the National Research Council-Herzberg (Victoria, Canada) to investigate the feasibility of a lenslet based PWFS and a double roof prism based PWFS as alternatives to a classical PWFS, as well as to test the proposed methodology for pyramid wavefront sensing to be used in NFIRAOS for the Thirty Meter Telescope (TMT). Traditional PWFS require shallow angles and strict apex tolerances, making them difficult to manufacture. Lenslet arrays, on the other hand, are common optical components that can be made to the desired specifications, thus making them readily available. A double roof prism pyramid, also readily available, has been shown by optical designers to be optically equivalent to a glass pyramid. Characterizing these alternative pyramids, and understanding how they differ from a traditional pyramid, will allow the PWFS to become more widely used, especially in the laboratory setting. In this work, the response of the SUSS microOptics 300-4.7 array and two ios Optics roof prisms are compared to a double PWFS as well as an idealized PWFS. The evolution of the modulation and dithering hardware, the system control configuration, and the relationship between this system and NFIRAOS are also explored.


Contents

Supervisory Committee ii

Abstract iii

Table of Contents iv

List of Tables vii

List of Figures viii

Acknowledgements xv

1 Introduction 1

1.1 Wavefront Sensing . . . 3

1.2 Pyramid Wavefront Sensor . . . 6

1.2.1 Knife Edge Test . . . 8

1.2.2 Details of a Pyramid Wavefront Sensor . . . 9

1.2.3 Modulation . . . 10

1.3 Challenges Facing Pyramid Wavefront Sensing . . . 11

1.3.1 Manufacturing Difficulties . . . 11

1.3.2 Implementation of Modulation and Calibration . . . 11

1.3.3 Chromatic Aberrations . . . 12

1.4 Lenslet Array Pyramid Wavefront Sensor . . . 12

1.5 Double Roof Prism Pyramid Wavefront Sensor . . . 16

1.6 Double Pyramid Wavefront Sensor . . . 17

1.7 Non-Common Path Aberrations . . . 17

1.7.1 Dithering to Determine Optical Gain . . . 18


2 Model of an Ideal Pyramid Wavefront Sensor 20

2.1 Physical Optics . . . 20

2.2 Object-Oriented MATLAB Adaptive Optics (OOMAO) . . . 21

2.3 Modelling an Idealized Pyramid Using OOMAO . . . 23

3 Optical Design and Experimental Setup 27

3.1 Optical Design . . . 27

3.1.1 Pyramid Wavefront Sensor Design Parameters and Specifications . . . 28

3.1.2 Lenslet Pyramid Wavefront Sensor . . . 31

3.1.3 Roof Prism Pyramid Wavefront Sensor . . . 32

3.1.4 Classical Pyramid Wavefront Sensor . . . 33

3.2 Implementation of the Design . . . 36

3.2.1 Alignment . . . 36

3.2.2 Software . . . 41

3.2.3 Calibration . . . 41

3.3 Modulation and Dithering . . . 41

3.4 Experimental Overview . . . 47

3.4.1 Definition of PWFS Signal . . . 48

3.4.2 Response Curves . . . 48

4 Results and Discussion: A Comparison of PWFS 49

4.1 Model Results for an Idealized PWFS . . . 49

4.2 Experimental Results for the Lenslet PWFS . . . 53

4.3 Experimental Results for Double Roof Prism PWFS . . . 58

4.4 Experimental Results Double PWFS . . . 61

4.5 Gold, Silver and Bronze: How do the Three PWFS Compare? . . . . 65

4.5.1 Further Investigation of the Double Roof Prism PWFS and Double PWFS . . . 66

4.6 Dithering and Optical Gain Tracking . . . 74

5 Summary and Conclusions 78

5.1 Main Findings . . . 79

5.2 Future work . . . 80


Bibliography 82

A Supplementary Material 84

A.1 Experimental Details . . . 84

A.2 Results . . . 89

A.2.1 Supplementary Lenslet PWFS and Double Roof Prism PWFS Data . . . 89

A.2.2 Supplementary Comparison Data . . . 91

A.3 Physical Properties of the Double Roof Prism PWFS . . . 95


List of Tables

Table 1.1 List of adaptive optics systems using a pyramid wavefront sensor. 7

Table 2.1 List of parameters and their values used in OOMAO for simulation work. *** indicates parameters only used for closed-loop work. 24

Table A.1 The minimum, maximum, and the contrast (the difference between max and min) values for the peaks and valleys for both 2 cpa and 4 cpa at 100 nm sine wave amplitudes and a modulation of 7 λ/D for the double roof prism PWFS. 91

Table A.2 The minimum, maximum, and the contrast (the difference between max and min) values for the peaks and valleys for both 2 cpa and 4 cpa at 100 nm sine wave amplitudes and a modulation of 7 λ/D for the double PWFS. 92

Table A.3 Using the maximum and minimum values from Figure A.14, their ratios, and edge thicknesses. 95

Table A.4 All four width edges at three different locations on the PWFS. The width edge remains relatively constant as the modulation radius is increased. 98


List of Figures

Figure 1.1 Incoming light from a distant celestial body is distorted by the Earth's atmosphere, due to temperature fluctuations in the atmosphere, before the wavefronts reach a telescope. 2
Figure 1.2 Simple AO system with an adaptive (deformable) mirror, a wavefront sensor, a real-time computer (control system) and a high resolution (science) camera [1]. 3
Figure 1.3 The galactic center without (left) and with (right) AO. Image taken at Keck Observatory (http://www.keckobservatory.org/images/blog/galactic_center_ao.jpg). 4
Figure 1.4 A pyramid wavefront sensor including a tip tilt mirror for modulation in the pupil plane and the labeling of pupils. 5
Figure 1.5 Signal-to-noise-ratio in Fourier space for a pyramid wavefront sensor and Shack-Hartmann wavefront sensor. The y-axis is sensitivity and the x-axis is spatial frequency. [2] 6
Figure 1.6 On sky Strehl ratios for FLAO on LBT. From [3]. 8
Figure 1.7 Comparison of Strehl ratios for FLAO with new CCD (LLL-CCD), old CCD (CCD39), and the SHWFS with LGS on Keck. From [3]. 8
Figure 1.8 The glass pyramid used on the William Herschel telescope (source: http://www.ing.iac.es/PR/wht_info/whtpwfs.html). Note the entire element is only a couple of centimeters across. 13
Figure 1.9 An image of the lenslet array used in this work. The blue dot indicates where the focal spot would hit the array. 13
Figure 1.10 A lenslet array can be used to create both a PWFS and a SHWFS, from [4].
Figure 1.11 The pixels in the SHWFS are equivalent to the lenslets in the PWFS. The lenslets in the PWFS are equivalent to the pixels in the SHWFS. http://www.cfao.ucolick.org/aosummer/2007/pdfs/Wavefront_Sensing_van_Dam.pdf 15
Figure 1.12 Orientation of two roof prisms to create an element that is optically equivalent to a glass pyramid. 16
Figure 2.1 Flow chart of the class structure in OOMAO. Note the pyramid wavefront sensor is still in development as more functionality is added to the class. [5] 22
Figure 2.2 Phase map of pyramid mask generated by OOMAO. Here each pupil is 40 pixels in diameter and therefore the mask is 160 by 160. 23
Figure 2.3 An example of an input phase map for astigmatism produced by the Zernike function in OOMAO. 25
Figure 2.4 The x and y slopes corresponding to the input phase map shown in Fig 2.3. 25
Figure 2.5 The corresponding four pupils, for the incoming wavefront shown in Fig 2.3, on the detector created by OOMAO's pyramid class. 26
Figure 2.6 The x and y slope maps calculated by OOMAO for the pyramid in Fig 2.5. 26
Figure 3.1 Ray diagram of the optical bench. Important pupil planes and focal planes are shown. The key optical components are listed in the table along with their functionality. 30
Figure 3.2 Footprint of the lenslet pyramid showing the four pupils (in blue) expected to be seen on the pupil camera (outlined in red). The black grid and circle are for reference and show the symmetry of the system. 32
Figure 3.3 Footprint for a double roof prism pyramid showing the four pupils (blue, green, red, and yellow) expected to be seen on the square camera outlined in red. 33
Figure 3.4 Drawing of the double PWFS made by Arcetri Observatory for use on the LBT. The light enters from the left, passing through the apex of the first four-facet pyramid. The beam is split into four pupils and then passes through the boundary into the next glass pyramid. Chromatic aberration is corrected due to the different indices of refraction of the two glasses. The four pupils leave the double PWFS, far right, as if having passed through a single glass pyramid with a roof angle equal to the difference between the two roof angles shown here. [6] 34
Figure 3.5 Optical layout showing the double PWFS path. 35
Figure 3.6 Footprint for the double pyramid wavefront sensor showing the four pupils (colored circles) expected to be seen on the detector (red box). 35
Figure 3.7 The wavefront sensing bench at NRC-Herzberg. The gold shows the system calibration path and the science path while the yellow indicates the PWFS path. 36
Figure 3.8 The resulting pupils for the lenslet PWFS after alignment with a modulation of 5 λ/D. Note the blurriness of the pupil edges. The pupils are each 128 pixels in diameter. 37
Figure 3.9 The optical mount made for the roof prisms. Based on designs provided by Subaru Telescope. The crossing of the roofs can be seen. 38
Figure 3.10 The final separation between the roof prism apexes achieved in the laboratory. 38
Figure 3.11 The resulting pupils for the double roof prism PWFS after alignment with a modulation of 5 λ/D. The pupils are sharp, show structure, and are each 150 pixels in diameter, slightly larger than expected due to the placement of the last lens. 39
Figure 3.12 The double PWFS in the optical path using a V mount. The PWFS sits approximately in the middle of the tube with an entrance diameter of 8 mm and an exit diameter the same size as the PWFS. 39
Figure 3.13 The resulting pupils for the double classical PWFS after alignment with a modulation of 5 λ/D. The pupils are sharp, show structure, and are each 340 pixels in diameter as expected. 40
Figure 3.14 The three different signals produced by the microcontroller. The top is the modulation signal for the x- and y-axes, the middle shows the dithering signal for both axes, and the bottom shows the PWM used to trigger the camera (the exposure is the length of the high signal - bulb mode). The green dotted line indicates what part of the modulation and dithering signal are sampled during an exposure. Time between exposures is used by the computer to poll for an image and process the image. For the modulation signal, a full cycle occurs for each exposure. The dithering signal is completely sampled every four PWM (camera exposure) periods. The top two signals are at the speed required by TMT; the PWM would be an example of the speed needed on the bench to sample the dithering cycle. 44
Figure 3.15 Electronics used to create two DAC signals for the FSM and a PWM signal for the camera. The computer interface allows the user to load the code to the board and power the board, while the DC power supply stabilizes the board's current. The chip is programmed through a JTAG interface between the board and the chip via Atmel Studio. 46
Figure 4.1 Response for varying pupil sampling. The x-axis indicates the amount of tilt introduced into the system, and the y-axis the total signal measured by the PWFS. 50
Figure 4.2 Response of the closed-loop control for varying modulation amplitudes. The open loop residual is shown in green while the residual wavefront errors are yellow and red (noise added to the wavefront sensor), respectively. The residual is calculated from the difference between the atmosphere and the deformable mirror shape, not the wavefront sensor itself. 51
Figure 4.3 The response of the OOMAO PWFS with increasing RMS wavefront error produced by tilt for different modulation amplitudes. 52
Figure 4.4 The response of the OOMAO PWFS with increasing RMS wavefront error produced by astigmatism for different modulation amplitudes. 53
Figure 4.5 The response of the OOMAO PWFS with increasing RMS wavefront error produced by coma for different modulation amplitudes. 54
Figure 4.6 The response of the OOMAO PWFS with increasing RMS wavefront error produced by the 21st Zernike mode (Noll indexing) for different modulation amplitudes. 54
Figure 4.7 Response for OOMAO PWFS with a modulation of 2 λ/D for tilt, astigmatism, and coma aberrations. 55
Figure 4.8 Lenslet pupils for three different Zernike modes with a modulation of 2 λ/D. 56
Figure 4.9 Lenslet pupils for a large tilt and a modulation of 2 λ/D. 56
Figure 4.10 Response for lenslet PWFS (dotted lines) compared to the OOMAO PWFS (solid) for tilt as modulation increases. 58
Figure 4.11 Response for lenslet PWFS (dotted lines) compared to the OOMAO PWFS (solid) for coma as modulation increases. 59
Figure 4.12 Double roof prism PWFS pupils for three different Zernike modes with a modulation of 2 λ/D. 59
Figure 4.13 Response for double roof prism PWFS (dotted lines) compared to the OOMAO PWFS (solid) for tilt as modulation increases. 60
Figure 4.14 Response for double roof prism PWFS (dotted lines) compared to the OOMAO PWFS (solid) for astigmatism as modulation increases. 60
Figure 4.15 Response for double roof prism PWFS (dotted lines) compared to the OOMAO PWFS (solid) for coma as modulation increases. 61
Figure 4.16 Double PWFS pupils for three different Zernike modes with a modulation of 2 λ/D. 62
Figure 4.17 Response for double PWFS (dotted lines) compared to the OOMAO PWFS (solid) for tilt as modulation increases. 63
Figure 4.18 Response for double PWFS (dotted lines) compared to the OOMAO PWFS (solid) for astigmatism as modulation increases. 64
Figure 4.19 Response for double PWFS (dotted lines) compared to the OOMAO PWFS (solid) for coma as modulation increases. 64
Figure 4.20 Pupils for all three PWFS with a flat deformable mirror and a
Figure 4.21 Pupils for all three PWFS for tilt, astigmatism, and coma 7 λ/D. Top: lenslet PWFS, Middle: double roof prism PWFS, Bottom: double PWFS. 68
Figure 4.22 Response for three PWFS for tilt. 69
Figure 4.23 PWFS pupils for two different sine waves with Mod 7 λ/D. Note that each image has a different color scale. 69
Figure 4.24 Response of the double roof prism PWFS (dotted lines) and the double PWFS (solid lines) for 4 cycles across the DM aperture. The PWFS is normalized to the maximum value for the given sine wave with no modulation. 70
Figure 4.25 45° cross section for the double roof prism PWFS pupils in Figure 4.23 for 2 cpa and 4 cpa. 71
Figure 4.26 45-degree cross section for the double PWFS pupils in Figure 4.23 for 2 cpa and 4 cpa. 72
Figure 4.27 The four pupils of the double roof prism PWFS masked with a circle with a diameter of 150 pixels. Yellow rings are portions of the pupil not included in the mask. The same routine was used to create the binned mask to determine the response curves. 72
Figure 4.28 Masked image of the double PWFS pupils. The dark blue portions are what is not included in the four 340 pixel diameter pupil mask. 73
Figure 4.29 Red and blue are each differences in flux between two pupils, x- and y-axis respectively, for a pure modulation signal. No fluctuation over the 50 images taken indicates good synchronization. The duty cycle was fixed while the PWM period was changed. 76
Figure 4.30 The Fourier transform of the flux (difference between two pupils in x- and y-directions) showing a peak at 0.25 modulation cycles. 77
Figure A.1 Single poke on the deformable mirror for the double roof prism PWFS. 85
Figure A.2 Single poke on deformable mirror for the double PWFS. 85
Figure A.3 Checker board pattern (alternating 0.5 and -0.5 commands) on
Figure A.4 Improvement in the double PWFS signal for different amounts of pixel binning. It illustrates the effects of noise on the PWFS signal. 87
Figure A.5 Schematic of the amplifiers used to increase the signal from 0-3.3 V to 0-10 V for the FSM input. 88
Figure A.6 Response for lenslet PWFS (dotted lines) compared to the OOMAO PWFS (solid) for astigmatism as modulation increases. 89
Figure A.7 Response for lenslet PWFS (dotted lines) compared to the OOMAO PWFS (solid) for a fixed modulation of 2 λ/D. 90
Figure A.8 Response for double roof prism PWFS (dotted lines) compared to the OOMAO PWFS (solid) for a fixed modulation of 2 λ/D. 90
Figure A.9 PWFS pupils for two different sine waves with Mod 2 λ/D. 91
Figure A.10 All three PWFS for tilt, astigmatism, and coma 2 λ/D. Top: lenslet PWFS. Middle: double roof prism PWFS. Bottom: double PWFS. 92
Figure A.11 Response of the double roof prism PWFS (dotted lines) and the double PWFS (solid lines) for 2 cpa. 93
Figure A.12 A close look at the pupil shape for the double roof prism PWFS. The dark blue shows the circular pupil mask while the bright yellow is not included in the mask. 93
Figure A.13 Full view of the double roof prism PWFS pupils. The darkest blue shows parts of the pupil mask that include background while the yellow shows part of the pupil outside the mask. 94
Figure A.14 The total flux in the four pupils stepping through one modulation cycle with a radius of 5 λ/D. 96
Figure A.15 When the focal spot (yellow) passes over the knife edge (blue),

ACKNOWLEDGEMENTS

Dr Bradley: thank you for giving me this wonderful opportunity to jump into the field of Adaptive Optics. Thank you for your guidance, for sending me to the Big Island (not only once but twice), and for allowing me to just go for it! It has been a wonderful experience and I cannot believe how much I have learned.

Thank you to the wonderful group at the National Research Council-Herzberg for supporting and guiding my research. Specifically, to Olivier for always being able to help and explain things. Thank you Jean-Pierre for all your technical help, great discussion, and guidance. Dave and Glen, thank you for your patient explanations, and willingness to help. Thank you to Masen, Ben, Matthias, and all the other graduate students and postdocs for your help and time.

Thank you, Jonathan for being so encouraging and for all your love, support and patience. Thank you for always being there to edit my work and provide invaluable feedback, while at the same time writing your own thesis. Love you to the moon and back.

Finally, to my parents and sister: thank you for being so understanding and supportive. You made this all possible.

Chapter 1

Introduction

When light from a distant astronomical object, such as a star or galaxy, passes through the Earth's atmosphere, the incoming wavefront gets distorted. Wavefront distortion is caused by turbulence within layers of the atmosphere. Different temperature gradients and shear mixing cause inhomogeneities, resulting in regions of fluctuating temperature and therefore fluctuating indices of refraction. The variation in index of refraction as light passes through the atmosphere causes the optical path to vary slightly, leading to wavefront distortions (Figure 1.1). The differences between optical paths can be described as a wavefront error. As a planar wavefront enters the atmosphere, differences in the index of refraction cause phase delays, with some of the wavefront delayed more than the rest due to spatial variability in the turbulence. The amount of phase delay is the wavefront error and can be quantified as either the peak-to-valley wavefront error (difference between the minimum and maximum value of the wavefront's phase) or the root mean square wavefront error (root mean square of the entire wavefront's phase). As an image of the target is acquired, the wavefront distortions effectively broaden the target's point-spread-function (psf), increasing the full-width-half-maximum (FWHM) and in turn limiting the resolution.
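As a minimal illustration of these two metrics, the MATLAB sketch below computes the peak-to-valley and RMS wavefront error of a phase map sampled over a circular aperture. The grid size and the simple astigmatism-like phase screen are arbitrary placeholders, not values used in this work.

```matlab
% Minimal sketch: peak-to-valley and RMS wavefront error of a phase map.
% The 128x128 grid and the toy phase screen are placeholder inputs.
n = 128;
[x, y] = meshgrid(linspace(-1, 1, n));
pupil = (x.^2 + y.^2) <= 1;                 % circular aperture mask
phase = 100 * (2 .* x .* y);                % toy astigmatism-like phase map [nm]
phi   = phase(pupil);                       % keep only the in-pupil samples
pv    = max(phi) - min(phi);                % peak-to-valley wavefront error [nm]
rmsWfe = sqrt(mean((phi - mean(phi)).^2));  % RMS wavefront error about the mean [nm]
fprintf('P-V = %.1f nm, RMS = %.1f nm\n', pv, rmsWfe);
```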

This effect is particularly relevant when considering the design of telescopes with increasingly larger primary mirrors (such as the Thirty Meter Telescope, TMT). In theory, the resolution of a telescope is limited by the wavelength of the incoming light divided by the telescope's diameter (Rayleigh criterion). However, this improvement in resolution is no longer achievable once atmospheric distortions limit the seeing capability of the telescope. To maximize the resolution of the next generation of large telescopes, and thereby improve the quality of data collected, atmospheric effects must be accounted for.

Figure 1.1: Incoming light from a distant celestial body is distorted by the Earth's atmosphere, due to temperature fluctuations in the atmosphere, before the wavefronts reach a telescope.

Adaptive optics (AO) systems reduce incoming wavefront distortions and significantly improve telescope resolution. A closed-loop AO control system senses the wavefront distortion and applies a phase correction to the subsequent wavefront (the lag is determined by the control system). Provided the control system's frequency bandwidth is greater than the Greenwood frequency (which depends on the transverse wind speed and the strength of the atmospheric turbulence) [7], the AO system can correct the wavefront distortion in real time.

To measure and correct the wavefront distortion, an AO system makes use of four main components: a wavefront sensor, a deformable mirror, a real-time computer, and a science camera (Figure 1.2). The wavefront sensor measures the wavefront error of the incoming light. A deformable mirror is used to apply the corrections to the incoming beam by applying the inverse shape of the measured wavefront error. The shape of the mirror can be changed at high frequencies by actuators on the reverse of the mirror. The real-time computer (RTC) controls the system, performing the calculations at rates much greater than the timescale of the atmospheric turbulence: it takes the information from the wavefront sensor and determines the correct commands to send to the deformable mirror.

Figure 1.2: Simple AO system with an adaptive (deformable) mirror, a wavefront sensor, a real-time computer (control system) and a high resolution (science) camera [1].

The science camera provides the required science data, observing the AO corrected image of the target. The goal of the AO system is to provide the best image at the science camera. Figure 1.3 shows the science image of an object with and without AO, with the improvement in resolution obvious.
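The RTC's task of turning wavefront sensor measurements into mirror commands is commonly implemented as a simple integrator acting on reconstructed slopes. The MATLAB sketch below shows one hypothetical iteration of such a loop; the reconstructor matrix, loop gain, leak factor, and measurement vector are placeholders rather than values from the bench described later in this thesis.

```matlab
% Minimal sketch of one closed-loop AO iteration (leaky integrator control law).
% R (command matrix), gain, leak, and the slope vector are placeholder values.
nAct   = 97;                     % number of DM actuators (hypothetical)
nSlope = 160;                    % number of WFS slope measurements (hypothetical)
R      = randn(nAct, nSlope);    % stand-in for the calibrated reconstructor
gain   = 0.4;                    % integrator gain
leak   = 0.99;                   % leak factor to keep the loop well behaved
cmd    = zeros(nAct, 1);         % current DM command vector

slopes = randn(nSlope, 1);       % stand-in for one WFS measurement
cmd    = leak * cmd - gain * (R * slopes);   % update the DM shape (inverse of error)
```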

1.1 Wavefront Sensing

Determining the distortions of a wavefront is a common task in optics, whether it be for fabrication and quality control or for more complex purposes, such as determining phase delays due to atmospheric turbulence for adaptive optics systems. Instruments such as interferometers directly measure phase delays and are commonly used in the former, but are unsuitable for use in adaptive optics systems because the light sources are incoherent and faint, and the distortions are relatively large and evolve quickly. Therefore, it is necessary for wavefront sensing in AO to employ more sophisticated methods.

Figure 1.3: The galactic center without (left) and with (right) AO. Image taken at Keck Observatory (http://www.keckobservatory.org/images/blog/galactic_center_ao.jpg).


Although each of the wavefront sensing methods has three main components (an optical device, a detector, and a reconstructor), they can be split into two main categories of wavefront sensors: direct and indirect [8]. Indirect wavefront sensors are located in the focal plane and determine the wavefront properties by measuring the intensity of the whole aperture either at or near the focal plane. These are typically iterative methods and include phase diversity and focal plane sharpening. They are commonly used, in combination with other techniques, in high contrast imaging systems for direct exoplanet imaging to help remove the host star and reveal planets. Conversely, direct wavefront sensors are in the pupil plane and split the pupil into sub-apertures, using the intensity in each aperture to determine the phase of the wavefront. This leads to slope sensing, such as Shack-Hartmann sensing and pyramid sensing, where the first derivative of the incoming wavefront is measured directly.

The idea of the Shack-Hartmann wavefront sensor (SHWFS) was first conceived in the 1960s by Dr. Aden Mienel, Dr. Roland Shack, and Dr. Ben Platt (at the University of Arizona) as a method for improving the quality of satellite images for the US Air Force. The initial sensor made use of the standard Hartmann test in which panels (with an array of holes) are placed over the telescope's aperture. Rays of light pass through the system and are imaged onto photographic plates located in front of and behind the focus. Each hole would produce a blurred image of the object (e.g., star). Using the two images, the rays are traced through the focal plane and the figure of merit is found for large telescopes. Mienel realized that covering the aperture of a satellite would be impossible and proposed a beam splitter to send part of the light through a panel in the pupil plane. It was Shack who proposed the use of an array of lenses, all with the same focal length, to replace a panel with an array of holes. The proposal would allow the system to be used on a satellite with the hope of measuring the image and wavefront error at the same time. With the help of Platt, who produced the necessary lens arrays, Shack was able to realize the Shack-Hartmann wavefront sensor. Today's SHWFS used in AO systems are very similar to the sensor conceived in the 1970s for satellite applications. A beam splitter is used to send light from a guide star to a wavefront sensor positioned in the pupil plane of the system. The wavefront passes through an array of lenslets, effectively focusing the light into many sub-apertures on a detector. By measuring the position of the focal spot of each sub-aperture, the localized tilt can be determined and the wavefront can be reconstructed. A planar wavefront will have no focal spot displacement while an aberrated wave clearly shifts the focal spot. [9]
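The local tilt measurement described above amounts to computing the centroid of each sub-aperture spot and comparing it to a reference position. The MATLAB sketch below illustrates this for a single sub-aperture; the toy spot image, reference position, pixel size, and lenslet focal length are placeholder values, not parameters of any real sensor.

```matlab
% Minimal sketch: local wavefront tilt from one SHWFS sub-aperture spot.
% The spot image, reference position, pixel size, and lenslet focal length
% are placeholder values.
img = zeros(16);  img(9, 11) = 1;            % toy spot, displaced from the centre
[xg, yg] = meshgrid(1:size(img, 2), 1:size(img, 1));
cx = sum(xg(:) .* img(:)) / sum(img(:));     % centroid x [pixels]
cy = sum(yg(:) .* img(:)) / sum(img(:));     % centroid y [pixels]
refx = 8.5;  refy = 8.5;                     % reference (unaberrated) spot position
pixSize  = 5e-6;                             % detector pixel size [m], hypothetical
fLenslet = 5e-3;                             % lenslet focal length [m], hypothetical
tiltX = (cx - refx) * pixSize / fLenslet;    % local slope in x [rad]
tiltY = (cy - refy) * pixSize / fLenslet;    % local slope in y [rad]
```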

Figure 1.4: A pyramid wavefront sensor including a tip tilt mirror for modulation in the pupil plane and the labeling of pupils.

The pyramid wavefront sensor (PWFS) was proposed in 1996 as an alternative to the SHWFS to provide wavefront measurements with smaller error. The optical device, a glass pyramid, is placed in the image plane of the system, with the spot focused on the pyramid's apex. The light is divided into four quadrants and imaged onto a detector located at the pupil plane (see Figure 1.4). The pixels on the detector split the pupil into sub-apertures (done on all four images of the pupil), analogous to how each lenslet splits the pupil in the SHWFS. All four pupils are images of the actual pupil and therefore each sub-aperture, or pixel, of one pupil corresponds to a pixel in the other three pupil images. By comparing the intensity of light pixel-by-pixel, the shape of the wavefront can be reconstructed. [8]

This thesis discusses in detail the PWFS: its performance, challenges, alternatives, and implementation of the proposed methodology for pyramid wavefront sensing to be used in the Thirty Meter Telescope’s adaptive optics system, the Narrow Field InfraRed AO System (NFIRAOS).

1.2 Pyramid Wavefront Sensor

The PWFS consists of the optical element (traditionally a four-facet glass pyramid), a camera, and a reconstructor. The incoming light is focused on the apex of the glass pyramid (Figure 1.4). With a spot size of the incoming light's wavelength divided by the telescope diameter (λ/D), the focused beam is larger than the pyramid apex, allowing light to pass through all four faces of the pyramid.


Figure 1.5: Signal-to-noise-ratio in Fourier space for a pyramid wavefront sensor and Shack-Hartmann wavefront sensor. The y-axis is sensitivity and the x-axis is spatial frequency. [2]

Ragazzoni showed in 1999 [10] that the pyramid wavefront sensor has a significant signal-to-noise ratio increase when operating in a closed-loop adaptive optics system, as shown in Figure 1.5. Due to the optical design of the pyramid, it is limited by the diffraction of the telescope aperture, rather than the sub-aperture size as is the case for the SHWFS. For this reason, the PWFS is more sensitive than the SHWFS. There is an increase in sensitivity for low frequencies (compared to a SHWFS), which lowers the wavefront sensor's susceptibility to noise and therefore makes it more sensitive to faint stars than a SHWFS. At higher spatial frequencies, the pyramid is less sensitive (compared to a SHWFS) and for bright guide stars this results in a lower aliasing error. These characteristics result in an overall gain compared to a SHWFS, leading to an increase in sky coverage. Pyramid wavefront sensors also allow for a change in spatial sampling (through binning pixels or adding a relay lens to change sampling across the pupil) and a change in gain (modulation) to better correct for changes in wavelength, seeing conditions, and source brightness.

Results from First Light Adaptive Optics (FLAO) on the Large Binocular Telescope have demonstrated the on sky benefits of implementing a PWFS adaptive optics system. It can be seen from Figure 1.6 that with the natural guide star (NGS) adaptive optics system, Strehl ratios (the ratio of intensities of the measured point spread function, psf, to the expected psf of the system) in the H band are very high, with values greater than 80 percent for 0.7 arcsecond seeing conditions at magnitudes brighter than 10. Figure 1.7 shows the performance of the FLAO (with two different detectors) compared to one of the best laser guide star (LGS) adaptive optics systems, located at Keck Observatory. Remembering that a faint NGS star is required to give tip/tilt information to the LGS system, it can be seen that in the K-band, the PWFS shows substantial gains.

Telescope                        Instrument
Large Binocular                  FLAO
Subaru                           SCExAO
Magellan                         MagAO
Telescopio Nazionale Galileo     AdOpt@TNG
Mont Megantic                    INO Demonstrator
Calar Alto                       PYRAMIR

Table 1.1: List of adaptive optics systems using a pyramid wavefront sensor.

It is for these reasons that the pyramid wavefront sensor is becoming more popular. Table 1.1 shows current systems with PWFS; in addition, many extremely large telescopes (ELTs) have planned instruments that currently include PWFS in their designs (such as NFIRAOS, ATLAS, and EPICS).

Figure 1.6: On sky Strehl ratios for FLAO on LBT. From [3].

Figure 1.7: Comparison of Strehl ratios for FLAO with new CCD (LLLCCD), old CCD (CCD39), and the SHWFS with LGS on Keck. From [3].

1.2.1 Knife Edge Test

The knife edge test (or Foucault test) is a redirection, by diffraction, of incident radiation that strikes a well-defined obstacle such as a blade, the edge of a building, or a mountain range. It is explained by the Huygens-Fresnel principle, which states that a well-defined obstruction of an electromagnetic wave acts as a secondary source and creates new wavefronts (wavelets). The new wavefronts propagate into the geometric shadow area of the obstacle.

This commonly used test determines the longitudinal and transverse aberrations of a lens or system. For a single lens, a sharp edge (knife edge) is placed near the focus and passed through the image of a point. For a perfect lens, there will be one image point that darkens instantaneously when the knife passes through the image [6]. The PWFS can be thought of as a two dimensional knife edge test (the apex of the pyramid creates two knife edges orthogonal to each other), measuring the aberrations in the x and y directions simultaneously.

1.2.2 Details of a Pyramid Wavefront Sensor

The pyramid wavefront sensor consists of a four-facet glass pyramid placed in the focal plane. The facet angle of the pyramid is approximately one to six degrees, with pyramid faces having optical quality of one quarter wavelength, and edge thicknesses of approximately 1-4 µm. The detector is located in the pupil plane. As light hits the apex of the pyramid, photons pass through all four of the faces and create four pupil images on the detector. The sampling of the wavefront is therefore directly related to the number of pixels across the diameter of one of the pupils. Each pixel (sub-aperture) of one pupil corresponds to a pixel in the other three pupil images. By comparing the intensity of light pixel-by-pixel, the shape of the wavefront can be reconstructed [8]. The horizontal slopes S(x) and vertical slopes S(y) can be calculated from Equations 1.1 and 1.2.

$$S(x) = \frac{(I_1 + I_4) - (I_2 + I_3)}{I_1 + I_2 + I_3 + I_4} \qquad (1.1)$$

$$S(y) = \frac{(I_1 + I_2) - (I_3 + I_4)}{I_1 + I_2 + I_3 + I_4} \qquad (1.2)$$

where $I_i$ is the intensity in the $i$th pupil and corresponds to the labeling scheme in Figure 1.4. The PWFS relies on the ability to correctly match the corresponding pixels to each other in all four pupils.
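The pixel-by-pixel slope computation of Equations 1.1 and 1.2 translates directly into a few array operations. The MATLAB sketch below assumes the four registered pupil images have already been extracted from the detector frame and share the same size; the test data here are placeholders.

```matlab
% Minimal sketch of the PWFS slope maps from Equations 1.1 and 1.2.
% I1..I4 are the four registered pupil images (placeholder data here).
n  = 64;
I1 = rand(n); I2 = rand(n); I3 = rand(n); I4 = rand(n);
Itot = I1 + I2 + I3 + I4;
Itot(Itot == 0) = eps;                  % guard against division by zero
Sx = ((I1 + I4) - (I2 + I3)) ./ Itot;   % horizontal slope map, Eq. 1.1
Sy = ((I1 + I2) - (I3 + I4)) ./ Itot;   % vertical slope map, Eq. 1.2
```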


1.2.3 Modulation

The discussion until now has approached the pyramid wavefront sensor as a slope sensing technique. However, the PWFS acts both as a slope sensor and a phase sensor [11]. It is suggested by Verinaud [2] (and then confirmed by Rosensteiner [11]) that the duality of the wavefront sensor (WFS) leads to an increase in performance and sky coverage. Mathematical models and laboratory tests have verified that, at low spatial frequencies, the PWFS behaves like a slope sensor with linearly increasing sensitivity as a function of spatial frequency (Figure 1.5). The sensitivity eventually saturates and the pyramid wavefront sensor begins to act as a phase sensor as the pyramid's response remains constant (e.g. the WFS cannot distinguish between 200 nm or 500 nm of wavefront error if it is saturated). It is important to note that at the Nyquist frequency the pyramid wavefront sensor has the same sensitivity as a SHWFS (Figure 1.5). This duality is not acceptable for slope-space closed-loop operations, where the wavefront sensor needs to constantly be measuring slopes. It limits the range of wavefront error that can be sensed by the pyramid. If aberrations seen on the pyramid wavefront sensor are larger than the dynamic range, non-linearities are introduced because the sensor is saturated, degrading the performance of the system.

To increase the dynamic range, and thereby extend the slope sensing mode, modulation of the beam can be performed. Modulation refers to moving the beam in a circle about the central apex, with a pre-determined radius. As the beam is modulated, residing equal amounts of time on each of the pyramid's glass faces, the photons are more evenly distributed in the four pupils and therefore the range of spatial frequencies that can be sensed is increased. Physically, one complete circle (or any integer number of circles) needs to be completed during one exposure for this to truly occur. If the modulation and exposure are not synchronized, more time is spent on one face of the pyramid than another, therefore introducing error.
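A common way to realize this circular modulation is to drive a tip/tilt (fast steering) mirror with a sine on one axis and a cosine on the other, with exactly one period per camera exposure. The MATLAB sketch below generates such a command pair for a single exposure; the modulation radius, frame rate, and command sampling rate are placeholder values, not the bench settings described in Chapter 3.

```matlab
% Minimal sketch: one circular modulation cycle spanning one camera exposure.
% Radius, frame rate, and command sampling rate are placeholder values.
rMod   = 5;                      % modulation radius [lambda/D], hypothetical
fFrame = 500;                    % camera frame rate [Hz], hypothetical
fCmd   = 50e3;                   % mirror command rate [Hz], hypothetical
t      = 0:1/fCmd:1/fFrame;      % time samples within a single exposure
xCmd   = rMod * cos(2*pi*fFrame*t);   % exactly one full circle per exposure
yCmd   = rMod * sin(2*pi*fFrame*t);
% If the exposure covers a non-integer number of cycles, one pyramid face
% receives more light than the others and the resulting slopes are biased.
```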

Although modulation increases the dynamic range of the sensor, it degrades the sensitivity as the aberration is smeared across each pupil. As a result, a PWFS only has a significantly larger sensitivity than the SHWFS when it is not modulated (the sensitivity of the PWFS is larger up to the Nyquist frequency). However, the exact behavior of the pyramid during modulation is not completely understood and therefore the change in sensitivity of the PWFS is not quantified. It is thought that the pyramid's performance can be optimized if the system is well known, such that a minimum modulation radius can be found before observing to ensure the sensitivity is maximized for the conditions. Therefore, either knowledge of the normal seeing conditions or control of the modulation during closed-loop operation is desirable.

1.3 Challenges Facing Pyramid Wavefront Sensing

Similar to any new technology, implementing the pyramid wavefront sensor in the laboratory or on sky is not an easy task and there are some difficulties to overcome.

1.3.1 Manufacturing Difficulties

The design specifications of a pyramid are relatively strict, requiring all four faces to have the same angle - often to within 0.05 percent - and a good surface flatness (Figure 1.8). The apex also needs to be smaller than the beam size. The FWHM of the Large Binocular Telescope (LBT) at 0.75 µm is approximately 33 µm at the apex of the pyramid [12]. However, most importantly, the edges need to be straight and narrow. If the overall scratch width of the optical component is 20 µm, the closed-loop performance will be degraded as the edge width is too large. Although current manufacturers can easily meet surface flatness (Edmund Optics can achieve one eighth of a wavelength flatness and angle tolerances of 15 arcseconds), they offer a surface quality of 40-20 (compared to 60-40 for precision quality optics and 20-10 for high precision quality optics) for high tolerance right angled prisms [13]. Ideally, a pyramid should have a surface quality of 20-10 or better. This can be difficult for manufacturers due to the combination of increasing the number of facets (compared to the above) and improving the surface quality. Combined with the low demand for pyramids and the time needed to develop the necessary techniques, this makes classical pyramids costly to implement in a laboratory setting and potentially on sky.

1.3.2 Implementation of Modulation and Calibration

The second difficulty is the exact method used for modulation and synchronization. Mechanically, modulation is difficult as it requires a moving part within the WFS. Many different schemes have been proposed, including moving the glass pyramid itself [14]. However, with the quality and reliability of fast steering mirrors currently available, the most suitable choice is to place these types of mirrors in the pupil plane before the pyramid wavefront sensor for modulation, as shown in Figure 1.4. For modulation to be implemented without dramatically slowing down the system, it is necessary for the camera exposure to be synchronized to the modulation period, such that one complete rotation (or a set integer number) occurs during a single exposure (meaning the light spends an equal amount of time on each face and edge). If this does not occur, then one pupil will receive more photons and bias the resulting slopes. However, for this to be accomplished, a good understanding of the settling time of the mirror (often different for each axis), trigger delays, and shutter delays is needed, as well as a stable clock source and voltage source.

Calibrating a pyramid wavefront sensor is a challenge in that the relation between an adaptive optics system’s deformable mirror and the wavefront sensor can be hard to find if the deformable mirror is not already in a relatively flat position. Since slope sensing requires the spatial frequency of the aberrations to be low for a non-modulated pyramid, there is no guarantee that the closed-loop starting point to determine the deformable mirror’s flat position, or to find the command matrix, falls within the slope sensing range of the pyramid. The aberrations from the deformable mirror could easily be outside the dynamic range of the pyramid and prevent the pyramid from closing the loop.

1.3.3 Chromatic Aberrations

Finally, chromatic aberration poses an issue for on sky application of the PWFS as the incoming light spans a range of wavelengths in the infrared (it is common for laboratory research to be performed at a specified wavelength). As the light passes through the PWFS, the optical path travelled by each wavelength varies and results in a blurring or elongation of the pupil image, as the pupils corresponding to each wavelength are imaged at slightly different locations. This can result in pixel mismatching. It is for this reason that the double PWFS was developed to make the classical PWFS achromatic.

1.4 Lenslet Array Pyramid Wavefront Sensor

PWFS are difficult to manufacture as specifications on the apex are strict, often requiring nanometer precision [6]. One simple solution is to implement a lenslet array as a pyramid. This idea was presented by the Laboratory for Adaptive Optics (LAO) in California [15].

Figure 1.8: The glass pyramid used on the William Herschel telescope (source: http://www.ing.iac.es/PR/wht_info/whtpwfs.html). Note the entire element is only a couple of centimeters across.


A lenslet array, also called a microlens array, is a grid of small lenses, approximately a few hundred microns in pitch, with focal lengths of a few millimeters. If the array is placed in the focal plane and the beam spot is placed at the intersection of four lenses (see Figure 1.9), four pupils will form. This is optically equivalent to the pyramid, as the intersection behaves like a knife edge. Edge quality of the lenslet is analogous to apex/edge quality for a traditional pyramid and requires equivalent precision. The benefits of the lenslet are its ready availability and constant improvement in quality by the manufacturers. The lack of an apex angle also makes it easier to manufacture as different manufacturing techniques can be used.

Figure 1.9: An image of the lenslet array used in this work. The blue dot indicates where the focal spot would hit the array.

Although at first glance a lenslet array provides a simple, flexible solution to the lack of available classical four-facet pyramids, the performance and usefulness are limited by the intersection width. Current manufacturers' standards state that the intersection width is proportional to the pitch (diameter) of a lenslet, typically a factor of 1 %. The minimal spot size of the beam is driven by this width as it must be larger than the width to propagate through the glass. In turn, the maximum amount of modulation is limited by the physical size of the lenslet. With minimal diffraction, the absolute maximum radius of modulation for a lenslet can be three quarters of the pitch before light begins to spill onto another lenslet and in turn another set of four pupils. If diffraction becomes an issue due to spider arms holding a telescope's secondary mirror, this maximum radius is decreased. The radius of modulation is proportional to the spot size. Therefore, for increasing lenslet pitch, the spot size must also increase, which in turn limits the effective amount of modulation; large modulations are not achievable with lenslet arrays. Nor are they ideal to be used with telescopes that have segmented mirrors or anything that might increase the amount of diffraction.
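To get a feel for these limits, the MATLAB sketch below converts the geometric bound quoted above (a maximum modulation radius of roughly three quarters of the lenslet pitch) into units of λ/D, using the standard relation that 1 λ/D in a focal plane corresponds to λ multiplied by the beam's focal ratio. The pitch, wavelength, and focal ratio are placeholder values, not the parameters of the bench in Chapter 3.

```matlab
% Minimal sketch: maximum modulation radius for a lenslet PWFS in lambda/D.
% Pitch, wavelength, and beam focal ratio are placeholder values.
pitch  = 300e-6;                 % lenslet pitch [m], hypothetical
lambda = 635e-9;                 % wavelength [m], hypothetical
Fnum   = 60;                     % focal ratio of the beam at the lenslet, hypothetical
rMaxPhys = 0.75 * pitch;         % geometric bound quoted in the text [m]
scale    = lambda * Fnum;        % physical size of 1 lambda/D in the focal plane [m]
rMaxLoD  = rMaxPhys / scale;     % maximum modulation radius [lambda/D]
fprintf('Maximum modulation radius ~ %.1f lambda/D\n', rMaxLoD);
```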

Figure 1.10: A lenslet array can be used to create both a PWFS and a SHWFS, from [4].

Although not necessarily practical for implementation in a telescope's AO system, lenslet arrays may prove useful in initial design and laboratory work as they are quick and cheap to implement. They might be an ideal stepping stone and/or placeholder while a project must be pushed forward in a short time period, or while a more classical pyramid is being fabricated. Due to the variety in pitch and focal length, it would be very easy to find an array such that it could later be replaced with a classical pyramid without changing other optics in the system. Lenslet arrays might also prove useful in non-astronomical adaptive optics fields where systems are simpler and change on slower timescales. They are also ideal for the classroom environment, where students can be taught how they work and understand the underlying principles.

With a lenslet array and a detector, both a Shack-Hartmann and a pyramid wavefront sensor can be made by simply placing the optical components in different parts of the propagating beam (Figure 1.10). A SHWFS is made up of a lenslet array producing multiple images of an object through the telescope aperture by sub-dividing the pupil plane and focusing each sub-pupil onto a detector. In contrast, in a PWFS the lenslet array is placed in the focal plane, producing multiple images of the telescope aperture. Due to this, the two can be related as shown in Figure 1.11.

Figure 1.11: The pixels in the SHWFS are equivalent to the lenslets in the PWFS. The lenslets in the PWFS are equivalent to the pixels in the SHWFS. http://www.cfao.ucolick.org/aosummer/2007/pdfs/Wavefront_Sensing_van_Dam.pdf

Finally, if large modulation with a lenslet-based pyramid wavefront sensor were desired, it might be possible to reconstruct the wavefront from multiple sets of four pupils. As light hits another grouping of lenslets or is scattered away from the central group, the light still contains information. The 'lost light' simply moves across different intersections, where the knife edge tests are performed by different boundaries; the matrix of lenses is essentially an array of PWFS. Therefore, by increasing the detector size or by decreasing the pupil size and distance between pupils, it is possible to gather information from a grid of PWFS. This, however, requires knowledge of how the light moves at the boundaries and what is happening at the pupil plane. It would also increase the computational time.

1.5 Double Roof Prism Pyramid Wavefront Sensor

A roof prism is an optical element in which two faces of glass meet at ninety degrees (this is not the roof angle). This design results in the beam, entering from the side opposite to the roof, being split into two beams as the light exits the prism. By combining two roof prisms and aligning them such that their peaks point to each other but are oriented orthogonally, the light will pass through the first prism, creating two beams which will each double as they pass through the second prism. A sketch of the alignment is shown in Figure 1.12. The result is four pupil images, each having passed through edge interfaces (roof angles), allowing the knife edge test to be performed in both directions and creating a PWFS. An advantage of this design is that the prisms can be slid back and forth to get the best optical quality at the apex/edges. Additionally, four roof prisms can be used to remove any chromatic aberrations by carefully selecting the glasses (two sets of prisms glued back-to-back), although this is not necessarily an optimal design.

Figure 1.12: Orientation of two roof prisms to create an element that is optically equivalent to a glass pyramid.

Roof prisms and similar prisms are used in binoculars, making them easy to both buy off-the-shelf and order to specifications. Since the number of faces for one optical element is halved (compared to a classical pyramid), the strict tolerances can also be more easily met by manufacturers, resulting in better tip quality and smaller edges. This translates into a smaller spot size and increases the maximum modulation radius. Due to the potential sharpness of the edges, the amount of scattered light will be reduced and the pupils are expected to be sharp. This would also improve the performance of the pyramid for a telescope with spider arms in the aperture.

1.6 Double Pyramid Wavefront Sensor

The double PWFS consists of two classical PWFS glued back-to-back such that the beam enters through a four-facet side and leaves through a four-facet side. They are made from different materials such that the second pyramid corrects for the chromatic aberration from the first pyramid. This is done by carefully choosing the indices of refraction for the two pyramids. The FLAO PWFS is an example of a double PWFS.

1.7 Non-Common Path Aberrations

For current and future adaptive optics systems, non-common path errors prove to be major limiting factors for performance. Reducing these errors is extremely important for high contrast imaging systems as they can noticeably restrict the contrast achievable for a given system.

Non-common path aberrations (NCPA) cause these errors and are due to the fact that the incoming light from the telescope is split (often by wavelength, but this is not necessary) to travel through the adaptive optics system. Some of the light enters the adaptive optics system where the wavefront error is detected, while the other optical path takes the light to the science instrument. The WFS will correct for all aberrations (ideally) that lie along the common path shared by the wavefront sensing path and the science path. However, after the light is split, the WFS is blind to any aberrations introduced by the optics in the science path. At the same time, there are additional aberrations in the wavefront sensing path that are unique to the optics in that path and should not be corrected for by the WFS. Although all of these aberrations are static, they can considerably degrade the performance of the system depending on the path taken by the science light (light might need to enter into an adjoining instrument such as a spectrograph or polarimeter). Calibration for the NCPA can be done with focal plane wavefront sensing, using techniques such as focal plane sharpening and phase diversity to measure the static aberrations. These aberrations are then added as offsets to the WFS before it begins correcting. For on sky applications this might be performed at the beginning of the night or multiple times throughout. However, to know the scale of the system, and therefore the amplitude offset to give the adaptive optics system, the optical gain is needed. The optical gain also serves as a unit conversion as it encodes the relation between how much error was injected and how much error was measured. To monitor the performance of the system and potentially improve the system during operation, the optical gain needs to be tracked.

An ideal adaptive optics system operates with an optical gain of one. By injecting a known amount of aberration into the system and measuring it during observing, the gain can be found. By seeing how close to unity the value is, the observer instantly gets a sense of the system's performance, as the optical gain directly relates to the spot size on the pyramid. The gain is one for the conditions under which the interaction matrix of the system was taken. Typically, a diffraction limited source is used for calibration, therefore a gain of one would mean a diffraction limited spot on the pyramid apex. Having this information for different targets and seeing conditions allows the engineers to monitor the performance and plan for future upgrades to the system.

If desired, it is also possible to use the optical gain during observing to change the amount of modulation to stabilize the optical gain, keeping it at one. This would essentially mean feeding the gain information to the fast steering mirror, creating another control loop. Currently, little benefit is seen for this scheme as it requires fine control over the modulation and a better understanding of how modulation alters the performance of the system. Most challenging, it would require the reconstructor to be changed for each modulation radius.

1.7.1 Dithering to Determine Optical Gain

Determining the optical gain can be done through a few different methods. The current plan for the adaptive optics system for the Thirty Meter Telescope is to implement dithering. Dithering requires a known amount of wavefront error to be injected into the system, either using a deformable mirror or a fast steering mirror placed before the WFS. By comparing the amount of wavefront error measured to the amount injected, the ratio between the two, the optical gain, can be used to determine the scaling of the system. NFIRAOS' dithering will consist of a sinusoidal wave that is applied over and above the modulation by a fast steering mirror. The frequency of the dithering is a quarter of the modulation frequency, with a much smaller amplitude. The average motion of the beam for multiple images is compared and the radius of the dither circle can be extracted from the slopes. Since a specific radius is expected, the ratio between the two is the optical gain. The optical gain changes (decreases) in an AO system as the seeing conditions get worse (more aberrations). The AO correction gets worse, resulting in a larger spot size on the pyramid, which becomes less sensitive, decreasing the optical gain.
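The MATLAB sketch below illustrates the idea in one dimension: a small sinusoidal dither of known amplitude is injected at a quarter of the modulation frequency, the amplitude actually seen in the (simulated) measurements is recovered by demodulating at the dither frequency, and the optical gain is taken as the ratio of measured to injected amplitude. The frequencies, amplitudes, sensitivity factor, and noise level are placeholders, not NFIRAOS values.

```matlab
% Minimal sketch: estimating the optical gain from a known injected dither.
% Frequencies, amplitudes, sensitivity, and noise level are placeholder values.
fMod    = 500;                         % modulation frequency [Hz], hypothetical
fDither = fMod / 4;                    % dither at a quarter of the modulation rate
aInj    = 0.05;                        % injected dither amplitude [lambda/D]
fs      = 4 * fMod;                    % measurement sampling rate [Hz]
t       = (0:1/fs:1-1/fs)';            % one second of measurements
gTrue   = 0.7;                         % "true" optical gain of the toy system
meas    = gTrue * aInj * sin(2*pi*fDither*t) + 0.005*randn(size(t));
% Recover the measured dither amplitude by demodulating at fDither.
aMeas   = 2 * abs(mean(meas .* exp(-1i*2*pi*fDither*t)));
opticalGain = aMeas / aInj;            % ratio of measured to injected amplitude
fprintf('Estimated optical gain: %.2f\n', opticalGain);
```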

1.8 Objectives

Pyramid wavefront sensors have been shown to improve the performance of an AO system and increase the sky coverage for NGS. However, four-facet PWFS have strict tolerances, are difficult to manufacture, and are expensive to implement. Alternative PWFS, such as a lenslet PWFS or a double roof prism PWFS, are currently cheaper and more readily available. Much is still unknown about the PWFS; it is therefore necessary to implement these alternatives in the laboratory environment to further understand their behavior.

The following work aims to answer an important question: are the lenslet PWFS and the double roof prism PWFS viable alternatives to a classical pyramid wavefront sensor? Other issues, such as how well a classical PWFS compares to a theoretical PWFS, are also explored, as well as the trade-offs and differences between the different sensors. Finally, hardware design and methodologies for dithering are tested and the results are reported.

Chapter 2 outlines the steps taken to model the PWFS in MATLAB. Following this, Chapter 3 outlines the experimental setup from initial design to final implementation. The results are reported and discussed in Chapter 4. Finally, the impact of this work is discussed in Chapter 5.

Chapter 2

Model of an Ideal Pyramid Wavefront Sensor

2.1 Physical Optics

Geometric optics is used to describe the propagation of light as rays. However, beyond this first order approximation, a more in-depth analysis of the nature of light is required to understand certain phenomena, such as diffraction, interference, and polarization. Physical optics models are able to describe these phenomena well, in addition to a variety of devices (such as gratings and thin-film coatings), due to treating light as electromagnetic waves. Physical optics is often also called wave optics for this reason.

Physical optics is based on the principle of superposition and Huygens' principle [16]:

1. Principle of superposition

When two or more waves move simultaneously through a region of space, each wave proceeds independently as if the other were not present. The resulting wave displacement at any point and time is subsequently the vector sum of the displacements of the individual waves.

2. Huygens’ principle

Every point on a known wavefront in a given medium can be treated as a point source of secondary wavelets which spread out in all directions with a wave speed characteristic of that medium.


An AO system can be modeled employing these principles to correctly emulate how light interacts with different surfaces: wavefront sensors extract slope information, and wavefront errors distort a point spread function (psf).

If one assumes that the system under consideration consists of a coherent point source that is perfectly parallel and conjugate to the image plane, as well as both the source and image plane lying near to the optical axis, then Fraunhofer diffraction (resulting from the above theories) can be assumed in the far field (the distance between the aperture and the image plane is large enough that the optical path length differences between light passing through the extremes of the aperture are much smaller than the wavelength, the criterion for the Fraunhofer regime). This allows one to make use of the Fourier transform to describe what is happening within the system. [16]

A lens is an element that performs an exact Fourier transform in real time, provided the light source is placed one focal length in front of the lens and the observer views the signal one focal length behind it. As a result, the focal plane of a lens contains the Fourier transform of the object, while the pupil plane contains the image of the physical aperture; the focal plane and the pupil plane are therefore related by the Fourier transform. A simple lens system, or a complex system of lenses and other optical elements, can be expressed entirely in terms of Fourier transforms.
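In symbols (the standard Fourier-optics result, stated here up to constant amplitude and phase factors; $U_p$ denotes the field one focal length in front of the lens, $U_f$ the field in the back focal plane, and $f$ the focal length):

\[
U_f(u,v) \;=\; \frac{1}{j\lambda f}\iint U_p(x,y)\,
\exp\!\left[-j\,\frac{2\pi}{\lambda f}\,(xu+yv)\right] dx\,dy ,
\]

i.e. the focal-plane field is the Fourier transform of the pupil-plane field evaluated at spatial frequencies $(u/\lambda f,\; v/\lambda f)$.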

In a simulation, an efficient way of calculating the Fourier transform of a signal propagating through a system is the Fast Fourier Transform (FFT) algorithm. Although there are many variants of the FFT, they all compute the exact discrete Fourier transform by decomposing an array of values into its frequency components. MATLAB provides several FFT functions, including fft (one-dimensional), fft2 (two-dimensional), and fftn (n-dimensional). These functions handle very large arrays before any noticeable reduction in performance, making them well suited to optical modelling.
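As a minimal sketch of this idea (array size and padding factor are arbitrary choices for illustration, not the values used on the bench), the focal-plane intensity of a circular pupil can be computed directly with fft2:

% Minimal sketch: far-field (focal-plane) intensity of a circular pupil via FFT.
N   = 256;                                   % pupil grid size (arbitrary)
pad = 4;                                     % zero-padding factor to sample the PSF finely
[x, y] = meshgrid(linspace(-1, 1, N));
pupil  = double(x.^2 + y.^2 <= 1);           % unit circular aperture
phase  = zeros(N);                           % flat wavefront (no aberration)
field  = pupil .* exp(1i*phase);             % complex field in the pupil plane
E      = fftshift(fft2(field, pad*N, pad*N)); % propagate to the focal plane
psf    = abs(E).^2;                          % intensity = |field|^2
psf    = psf / max(psf(:));                  % normalise for display
imagesc(psf); axis image; colorbar;          % Airy pattern for the flat wavefront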

2.2

Object-Oriented MATLAB Adaptive Optics

(OOMAO)

OOMAO [5], a MATLAB toolbox for adaptive optics, was written by Dr. Rodolphe Conan in collaboration with Laboratoire d’Astrophysique de Marseille (who provided the code for the pyramid wavefront sensor extension to the toolbox). The source code can be downloaded from https://github.com/rconan/GMT/commits/master.


Figure 2.1: Flow chart of the class structure in OOMAO. Note the pyramid wavefront sensor is still in development as more functionality is added to the class. [5]

OOMAO is a library of MATLAB classes that uses vectorized code and transparent parallel computing (via the parallel computing toolbox in MATLAB). The main classes represent the necessary components of an AO system and include a source, an atmosphere, a telescope, a Shack-Hartmann wavefront sensor, a pyramid wavefront sensor, a deformable mirror, and an imager (bundled within a wavefront sensor). Figure 2.1 shows the class flow within OOMAO. A wavefront is propagated through the system, interacting with each class through physical optics via the standard FFT functions in MATLAB. This allows for easy simulation of open- and closed-loop AO systems. In addition to the object classes, the toolbox has a large library of statistical tools to evaluate the performance of a system and determine various statistical properties of a wavefront.

The pyramid wavefront sensor (PWFS) is defined by up to four input parameters:

1. nLenslet (number of lenslets)

2. nPix (number of pixels sampling each pyramid pupil)

3. modulation (expressed in terms of the diffraction limit of the system, wavelength divided by diameter)

4. binning (number of pixels to be added together to reduce the sampling)

The first two parameters are private properties of the object and cannot be changed during simulation, as they define the physical properties of the pyramid.


Figure 2.2: Phase map of pyramid mask generated by OOMAO. Here each pupil is 40 pixels in diameter and therefore the mask is 160 by 160.

Specifying only nLenslet and nPix creates a pyramid wavefront sensor object equivalent to a Shack-Hartmann wavefront sensor with the same properties, allowing the user to easily compare the two and to integrate the newer pyramid wavefront sensor into existing code.

A PWFS is simulated by creating a phase map that encodes the physical information of the optical element (Fig 2.2). The incoming wavefront (if modulation is desired, it is first multiplied by a phasor) is then Fourier transformed. The transformed wavefront is then multiplied by the pyramid mask, producing four pupils. The Fourier transform is taken again, and the intensity of the pupils is given by the squared modulus of the complex wavefront. For modulation, this process is repeated for n steps (each step being 360/n degrees) until a complete circle has been made, resulting in an intensity map for each step. The resulting intensity maps are summed together, creating a single intensity map of the four pupils on the PWFS camera.
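The procedure can be summarised in plain MATLAB, independent of the OOMAO classes (a simplified sketch: it assumes that field is the complex pupil-plane field on an N x N grid zero-padded so the four pupils fit, that pyrMask is an N x N pyramid phase map such as the one in Figure 2.2, and that rMod and nSteps are given; the conversion of rMod from lambda/D to focal-plane pixels depends on the pupil sampling and padding and is glossed over here):

% Simplified sketch of the modulated pyramid propagation described above.
[N, ~]   = size(field);
[fx, fy] = meshgrid((-N/2:N/2-1)/N);             % normalised pupil coordinates
detector = zeros(N);
for k = 1:nSteps
    th      = 2*pi*(k-1)/nSteps;                 % modulation angle, step of 360/nSteps deg
    tipTilt = 2*pi*rMod*(fx*cos(th) + fy*sin(th)); % tip/tilt phasor for this step
    focal   = fftshift(fft2(field .* exp(1i*tipTilt))); % focal plane on the pyramid tip
    split   = focal .* exp(1i*pyrMask);          % apply the pyramid phase mask
    pupils  = fft2(split);                       % propagate to the detector (pupil) plane
    detector = detector + abs(pupils).^2;        % accumulate intensity over the circle
end
detector = detector / nSteps;                    % averaged image of the four pupils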

2.3 Modelling an Idealized Pyramid Using OOMAO

The setup for the model was based on the closed-loop tutorial provided in the toolbox (adaptiveOpticsHowto.m). Within the OOMAO environment the objects were created as listed in Table 2.1.


Class | Input Parameters
Telescope | diameter = 8 m, resolution = 60 pixels
Source | wavelength = photometry.J
Atmosphere*** | r0 = 15 cm, L0 = 30 m, windSpeed = [5,10,20] m/s, altitude = [0,4,10]*1e3 m, fractionnalR0 = [0.7,0.25,0.05]
PWFS | nLenslet = 10, nPix = 60, modulation = varies
Deformable Mirror*** | nActuators = 11, modes = monotonic influence function, resolution = 60 pixels

Table 2.1: List of parameters and their values used in OOMAO for the simulation work. *** indicates parameters only used for closed-loop work.

The main goal of implementing the model is to compare an ideal PWFS to those tested in the laboratory setting. To characterize the response of the pyramid for different input modes of varying amplitudes, the atmosphere is replaced by the built-in Zernike function, allowing a wavefront containing a single mode to be propagated through the system (e.g. Figures 2.3 and 2.4), or a wavefront with multiple modes, giving the user full control and knowledge of the input wavefront. The resulting slopes (Figures 2.5 and 2.6) are used to determine the PWFS signal. The pyramid class has a built-in gain calibration function that is called in the initialization of the wavefront sensor; however, this feature is turned off for this work so as to allow the response to be measured.
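The response measurement itself follows a simple pattern (a schematic sketch only: zernikePhase and pyramidSlopes are hypothetical helpers standing in for the OOMAO Zernike function and the reference-subtracted pyramid slope computation; they are not part of the toolbox API):

% Schematic response-curve measurement for a single Zernike mode.
nPix       = 60;                               % pupil sampling, as in Table 2.1
mode       = zernikePhase(5, nPix);            % e.g. mode 5: astigmatism, unit RMS (hypothetical helper)
amplitudes = linspace(-1, 1, 21);              % input amplitudes [rad RMS] (assumed)
sRef       = pyramidSlopes(0.1*mode);          % small-amplitude reference signal (hypothetical helper)
sRef       = sRef(:) / norm(sRef(:));          % normalise the reference direction
signal     = zeros(size(amplitudes));
for k = 1:numel(amplitudes)
    s         = pyramidSlopes(amplitudes(k)*mode);
    signal(k) = s(:).' * sRef;                 % project the slopes onto the reference
end
plot(amplitudes, signal)                       % linear near zero, saturating at large input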

For closed-loop operation, all the objects listed in Table 2.1 are used with a loop gain of 0.5. The loop was closed initially to test the PWFS, explore the parameter space, and visually observe the effects of modulation with a full atmospheric model. A dithering scheme was also implemented and tested in closed loop to track the optical gain during correction. To determine the optical gain, a dithering function was added to the pyramid class. When this feature is turned on, a sinusoidal signal is added onto the modulation signal. The extra rotation is applied for one complete modulation period by multiplying the incoming wavefront by a phasor before the modulation is performed. During slope computation, the tip and tilt are found and used to calculate the radius of the applied dither. This is compared to the expected value (knowing the amplitude injected into the system) and the optical gain is determined. The value is updated every four modulation cycles to obtain an average and a better estimate.
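The gain estimate itself reduces to a ratio (a minimal sketch, assuming the global tip and tilt have already been extracted from the slopes for each frame of one dither period and stored in the vectors tipPerFrame and tiltPerFrame, with rCommanded the injected dither radius in the same units):

% Minimal sketch: optical gain from the measured dither radius.
rMeasured   = mean(hypot(tipPerFrame, tiltPerFrame));  % average measured radius
opticalGain = rMeasured / rCommanded;                  % equals 1 under calibration conditions
% In practice the estimate is refreshed every four modulation cycles, as above,
% by averaging rMeasured over those cycles before recomputing the ratio.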



Figure 2.3: An example of an input phase map for astigmatism produced by the Zernike function in OOMAO.



Figure 2.5: The corresponding four pupils on the detector, produced by OOMAO's pyramid class, for the incoming wavefront shown in Fig. 2.3.



Chapter 3

Optical Design and Experimental Setup

The experimental setup used to test and compare the different types of PWFS was designed as an additional optical path on the existing wavefront sensing bench at NRC-Herzberg. Upon adding the pyramid wavefront sensing path, the entire bench was realigned and rebuilt for optimal performance. The pyramid optical path evolved over the course of this work in three main stages as different types of PWFS were tested on the bench, with each stage requiring a redesign and realignment of the PWFS path. This chapter outlines the optical design and implementation of all three optical elements: the lenslet pyramid, the double roof prism pyramid, and a double classical pyramid. It also explores the evolution of the modulation hardware, the system control configuration, and the relationship between this system and NFIRAOS, designed for the Thirty Meter Telescope.

3.1 Optical Design

The bench’s three optical paths (Figure 3.1) are described below:

1. The system calibration path (includes the common path and the SHWFS) allows for the loop between the SHWFS and the deformable mirror (DM) to be closed, providing a flat DM as well as allowing different Zernike modes to be applied to the DM. This is used to inject different shapes into the system via the DM as well as have the SHWFS act as a truth wavefront sensor.


2. The science path allows focal plane sensing and control to be tested, including techniques such as focal plane sharpening [17], phase diversity [17], and speckle nulling [18]. All of these techniques are important for non-common path error correction and for extreme adaptive optics systems [19]. In this work some of these techniques were used, along with the SHWFS, to further flatten the DM.

3. The PWFS path contains a fast steering mirror (FSM) and a PWFS (the optical element, re-imaging lens, and camera). This path is used to test the sensitivity of each type of PWFS.

The system calibration path is a simple 4F optical scheme (the pupil plane is at infinity at every focal plane, and vice versa), with a single mode fiber operating at 655 nm as the source. It is followed by a 100 mm focal length lens used to create a pupil plane where optional phase plates can be placed to simulate atmospheric turbulence as well as an additional light source for optical alignment purposes.

Re-imaging optics are used to send the light to the deformable mirror (DM), which defines the aperture of the system. After the DM, there is a beam splitter that sends half of the light down the science path (the second optical path used for focal plane sensing). The rest of the light continues to propagate through the system, eventually creating a focal plane. Following this focal plane, a beam cube is placed in the path to further split the light, sending some toward the SHWFS where a lens is situated to create a pupil plane for the SHWFS lenslet array. The rest of the light is sent down the PWFS path towards the FSM. After being reflected off the FSM, the light is focused onto the PWFS and then imaged onto the detector which is located in the pupil plane.

3.1.1 Pyramid Wavefront Sensor Design Parameters and Specifications

Three PWFS types, having fixed design characteristics, were used on the bench (a lenslet array from the University of Victoria's RAVEN team, double roof prisms from the SCExAO team at the Subaru telescope, and a double PWFS from Arcetri Observatory). The edge thickness was therefore fixed and limited the minimum spot size of the beam (the spot size needs to be larger than the apex of the optical element). In turn, the maximum amplitude of modulation was limited in each case. This depended not only on the spot size but also on the physical dimensions of the pyramid element (especially in the lenslet array case). The apex angle of the pyramid, or its equivalent, was fixed as well. The speed, stroke, and resolution of the FSM are also important parameters that depend on the FSM model and the f-number of the beam hitting the pyramid.

The specifications for NFIRAOS' PWFS also played an important role in the optical design, as it was desirable for the bench to emulate the PWFS planned for the TMT's AO system. Design specifications taken into consideration from the NFIRAOS documentation include: an F/45 beam; an integer number of pixels between pupils; a pupil diameter of 48, 96, or 128 pixels to allow for on-chip binning (all allow a maximum binning of 4 pixels); and a modulation amplitude of 3, 5, or 40 λ/D. These specifications drove the choice of lenses before and after the PWFS's optical element: the lens before the PWFS sets the ratio of pupil spacing to pupil diameter, while the lens (or optics) following the PWFS magnifies the overall image. Although these were design considerations, none of the following work completely met all of the specifications, because many of the components' specifications were predetermined.
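To connect these numbers to the physical constraint on the pyramid tip (a back-of-envelope check that simply combines the figures quoted above, not a value from the NFIRAOS documentation), one λ/D at the focal plane corresponds to a linear size of λ·F#, so the modulation radius on the glass is:

% Back-of-envelope check: physical modulation radius at the pyramid tip.
lambda = 655e-9;               % bench source wavelength [m]
Fnum   = 45;                   % NFIRAOS-like focal ratio at the pyramid
mMod   = 5;                    % modulation amplitude [lambda/D]
rMod   = mMod * lambda * Fnum; % one lambda/D spans lambda*F# in the focal plane
% rMod is about 147 micrometres, which must comfortably exceed the apex
% (or roof-edge) width of the optical element for the modulation to be useful.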


Component | Details | Function
Fiber Source | Thorlabs single mode at 655 nm | Light source
First Lens | Singlet lens, f = 100 mm | Imaging pupil plane
Second Lens | Singlet lens, f = 150 mm | Re-imaging optics
Third Lens | Singlet lens, f = 400 mm | Re-imaging optics
Deformable Mirror | ALPAO DM-97 | Inject wavefront error into the system and correct wavefront error existing in the system
Fourth Lens | Singlet lens, f = 600 mm | Re-imaging optics
Sixth Lens | Singlet lens, f = 150 mm | Re-imaging optics
Shack-Hartmann Wavefront Sensor | HASO2 | Act as truth wavefront sensor and close the loop on the DM to make Zernike modes on the DM
Fast Steering Mirror | PI | Perform modulation and dithering for the PWFS
Science Camera | Point Grey Flea3 | Image the focal plane
Pupil Camera | Point Grey Flea3/Grasshopper2 | Image the pupil plane
Lenslet | 300-4.7 SUSS microOptics array | PWFS
Roof prisms | ios Optics / SCExAO, 3.775° roof | PWFS
Double Pyramid | Arcetri Observatory, 30° & 28.150° apex | PWFS

Figure 3.1: Ray diagram of the optical bench. Important pupil planes and focal planes are shown. The key optical components are listed in the table along with their functionality.


3.1.2 Lenslet Pyramid Wavefront Sensor

Initially, an idealized (perfect) system was simulated in Zemax for each lenslet available in the laboratory. This was done using paraxial lenses in Zemax to optimize the system and find a solution to the given merit function. Multiple solutions were found, each for a different lens. Excluding solutions that clearly would not be possible on the bench (i.e., lenses would be too close to each other), realistic systems were created and optimized using the lens catalogues in Zemax. Once again, more than one solution was achieved; however, upon further inspection, a lenslet used by the RAVEN project (a multi-object AO demonstrator built by the University of Victoria) was chosen for its superior edge quality, and a final solution was found. The lenslet was a 300-4.7 array from SUSS microOptics.

Simulating a Lenslet in Zemax

A simple, yet powerful, technique was used to produce the lenslet array in Zemax. A single paraxial lens with the correct focal length was created; then, through the use of coordinate breaks and four configurations, a perfect lenslet array was emulated and the light passing through an intersection of four lenslets was simulated. Each configuration was offset with a coordinate break of half the pitch to create the correct effect.

Merit Function

A merit function is used in an optimization to adjust the parameters of the system being designed. It measures the agreement between the data and the fitted model as defined by the user, with the user constraining the model and weighting the importance of each constraint. The merit function used in this problem included the following constraints:

• The pupil plane is on the camera.

• Each pupil is approximately 128 x 128 pixels in diameter (well sampled).

• The total length of the optical axis is less than 1.5 m so as to fit on the optical bench.

• The distance between the lenslet and the final lens is large enough to allow alignment.

• The distance between pupils on the camera is an integer number of pixels.
