
1. Introduction

This chapter provides a brief introduction to this project by discussing its context, goal, and scope.

This chapter also introduces the Image Sensing (IS) functionality within the ASML lithography machines. Section 1.6 presents the outline of this report.

1.1 Context

This project was carried out as a final design project for the Professional Doctorate in Engineering (PDEng) program. The PDEng degree program in Software Technology is provided by the Department of Mathematics and Computer Science at Eindhoven University of Technology (TU/e) in the context of the 4TU.School for Technological Design, Stan Ackermans Institute. The project was conducted at ASML, and supervisors from both parties (ASML and TU/e) guided me throughout the project.

ASML is the world's leading provider of lithography systems. These lithography systems are complex machines that are critical in the production of integrated circuits, or microchips. In the process of manufacturing integrated circuits, lithography is the name for the cycle of steps in which one layer of the circuit layout is scanned (printed) onto a microchip. Hence, ASML's TwinScan lithography machines need to perform with high precision to ensure that manufactured chips work as intended. For this reason, ASML machines rely heavily on complex software that is divided into functional clusters, each containing one or more components. The components in each functional cluster are developed and maintained by a specific group of developers.

One of these software groups at ASML is the Image Sensing Embedded Software Group, which initiated this project. The group develops and maintains image sensor drivers that measure various optical system parameters, such as lens aberrations, pupil illumination, and the polarization state of the scanner, for ASML's lithography machines. A subteam of this group develops subsystem drivers for different sensor types. One of these drivers is the Image Sensor Subsystem Driver, which is responsible for measuring the shape of wavefronts passing through the optical system. The driver's software logic is implemented in the C programming language, and it has become more complex as new features have been added over time. Most importantly, the current C implementation of this complex component is neither efficient nor modular. Therefore, a modular redesign and implementation of this component were needed to continue delivering a high-quality product to ASML customers.

1.2 Project goal and scope

This project aims to redesign the image sensor driver component using Object-Oriented Design (OOD) techniques and to implement the redesign in C++. It also aims to assess the benefits of Object-Oriented (OO) programming techniques and the integrability of C++ code with the surrounding C code. The scope of the integration with surrounding C code mainly concerns the client of this component. This project shall act as a technical proof of concept for applying an OO programming approach to the image sensor driver using C++. The following subgoals (tasks) were formulated to achieve the goals of this project:

1. Extract the functional and non-functional requirements of the image sensor driver component from the current existing source code, requirement and design documents, and experts.

2. Redesign the component using OOD techniques.


3. Implement a prototype of the redesign in C++ (C++11 with Boost).

4. Validate the prototype against the main use cases of the component. Validation here concerns functional correctness and also covers non-functional requirements, particularly the extensibility of the redesign toward new sensors and relays.

5. Report the project findings for future use in the ASML image sensing domain and possibly for subsystem driver development in other ASML domains.

The scope of the project is the image sensor's camera driver. It is responsible for controlling the different types of cameras to capture wavefront images. It also manages the communication links that connect the cameras to the CPU board on which the driver runs.

1.3 Image Sensors

1.3.1 Purpose of the image sensors

Manufacturing integrated circuits in the semiconductor industry requires several process steps, from slicing a cylinder of purified silicon into wafers to packaging. The ASML machine performs one of the key IC manufacturing process steps, the lithography process: the cycle of steps in which one layer of the circuit layout is exposed onto a chip. This process of pattern printing on a wafer using ASML lithography machines is shown in Figure 1. First, a beam of ultraviolet (UV) light is passed through a reticle that contains the pattern to be printed on the wafer. Then, after the light passes through the projection lens, the lens projects the pattern onto the wafer. Several image sensors are used at the wafer stage to achieve the quality requirements, such as image resolution and overlay accuracy.

Figure 1: Process of pattern printing on a wafer

During pattern projection, the lens's accuracy is critical for printing the chip patterns on the wafer correctly. Hence, the image sensing subsystem provides a measurement of the scanner's optical parameters at the expose side.

1.3.2 Functionality of the image sensors

In ASML lithography machines, a wafer's life cycle has two stages: measure and expose. These two stages are shown in Figure 2. A wafer is either at the measure stage or at the expose stage. At the measure stage, the wafer position is measured with respect to the chuck in the X, Y, and Z directions.

The result of these measurements is used to align the position of the wafer with the reticle at the expose stage. After the wafer is aligned with the reticle, it is exposed so that the reticle's pattern is printed on it.

Figure 2: Life cycle of a wafer

The image sensors are technically present on both sides. However, they are used only on the expose side, because they measure the quality of the light at wafer level, which is done only at the expose side. Using a diffractive material, the beam of light is diffracted into the chip's layout in the form of a spherical wavefront. The projection lens turns this wavefront into a spherical wavefront of opposite curvature, which converges to a focal point to form an image of the layout, as shown in Figure 3.

A perfect lens creates a perfectly spherical wavefront, and hence the image produced at the focal point is sharp. However, a real lens suffers from imperfections caused by aging, heat, and dust. These imperfections produce a non-spherical wavefront, i.e., so-called aberrations, which distort the image printed on the wafer and compromise its quality.


Figure 3: Example of a wavefront and perfect lens

Therefore, the image sensors are used to capture the wavefronts at the requested exposure time, and the captured image is post-processed to determine the lens's degree of deviation. This degree of deviation is used by higher-level software layers to close a control loop that actuates moving lens elements in order to minimize aberrations. For more information on image sensor functionalities, refer to [1].

1.4 Image sensor subsystem driver

The image sensor driver is divided into software layers: one layer controls physical actions in the machine, and another controls logical actions. The image sensor subsystem driver provides functions to execute the physical actions requested by the image sensor logical layer. It programs the image sensor electronics, measures various optical parameters of the scanner, and provides the information needed to set up the scanner. In addition, the driver offers functionality for simulation and diagnostics of the image sensor subsystem electronics.

The image sensor is used to scan different images. Based on its settings, the image sensor moves while taking pictures; the resulting scans allow interferometry to be used to measure lens aberrations.

To apply these scans, the image sensor subsystem driver forwards the command to its own image sensor's camera driver, as shown in Figure 4, to execute the requested action. These scans are necessary to measure the lens's aberrations and other main optical parameters.

Figure 4: Context diagram of the image sensor subsystem driver

To measure the scanner's optical parameters, different scan types are applied with different directions, positions, and numbers of frames at a given exposure time. Hence, the image sensor subsystem driver's main purpose is to apply different scan types and measure the scanner's optical parameters, keeping the scanner in a high-performing state during production.

1.5 Outline

This report is further structured as follows. Chapter 2 provides the problem analysis and an overview of the design challenges. Chapter 3 presents the stakeholders' interests and goals. The project requirements elicited from stakeholder meetings and documents are presented in Chapter 4. Chapter 5 describes the high-level and detailed design of the image sensor's camera driver, including the design alternatives that were considered; the design decisions that guided the design process are also documented there. Chapter 6 explains the implementation aspects of the driver. Chapter 7 discusses in detail the verification and validation techniques applied in this project, as well as the results of applying them. Chapter 8 presents the conclusions, good practices for migrating C drivers to C++, and future work. Finally, Chapter 9 and Chapter 10 describe the project management and the project retrospective, respectively.
