Eindhoven University of Technology
MASTER
Test fixture optimization
Oudman, F.H.


MASTER

Test fixture optimization

Oudman, F.H.

Award date:

2019


Disclaimer

This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.


Department of Mathematics and Computer Science
Department of Electrical Engineering

Test Fixture Optimization

Master Thesis

F.H. Oudman

Supervisors:

dr. R.H. Mak
ir. M.G.M. Spierings
prof.dr. H. Corporaal

Eindhoven, August 2019


Abstract

In the world of electronics manufacturing, functional and in-circuit tests are often used to verify the functioning of a Printed Circuit Board Assembly (PCBA). In order to test a PCBA, it is clamped in a so-called test fixture. This fixture holds, among other parts, the different test probes used for testing the PCBA. However, when the PCBA to be tested is clamped in a fixture, board deformation may occur. While tin-lead solder joints and through hole components are flexible enough to handle such deformation, the industry's switch to lead-free solder and surface mount components has resulted in an increase in board defects caused by board deformation. Much research has been done on finding flexible solder joints, or on the redesign of components, all aimed at reducing the strain caused by this PCBA deformation. In this document, a new strain-reducing approach is introduced, focusing on fixture optimization. The main objective is to design a software tool which can propose a fixture design that minimizes strain. Additionally, using such a software tool might reduce fixture production costs, as well as the time to market.


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Problem context
  1.2 Board strain
  1.3 Research objective
  1.4 Document outline

2 Automated Electrical Test
  2.1 Printed circuit board assembly
  2.2 Fixture
    2.2.1 Bottom fixture
    2.2.2 Top fixture
  2.3 Testing process

3 Board strain
  3.1 Strain definition
    3.1.1 Modelling approaches and deformation domains
    3.1.2 Stress and strain
  3.2 Finite Element Analysis
  3.3 DUT strain limits

4 Optimization
  4.1 Trajectory search
    4.1.1 Gradient descent
    4.1.2 Hill climbing
    4.1.3 Stochastic hill climbing
    4.1.4 Simulated annealing
  4.2 Swarm intelligence
  4.3 Shape optimization

5 Problem statement
  5.1 Problem definition
  5.2 Problem scope
  5.3 Research questions
  5.4 Project deliverables

6 Proposed algorithms
  6.1 Initialization strategy
  6.2 Random step
  6.3 Slope based step
  6.4 Force based step
    6.4.1 Probability version
    6.4.2 Quantity version
    6.4.3 Boundary version
  6.5 Displacement based step
  6.6 Strain based step
  6.7 Counteract step
  6.8 Hill climb controller

7 Implementation
  7.1 Input/output file formats
  7.2 Contact placement
  7.3 Slope based step
  7.4 Force based step
  7.5 Displacement based step
  7.6 Strain based step
  7.7 Hill climb controller

8 Experimental results
  8.1 Benchmark set
  8.2 Evaluation criteria
    8.2.1 DUT strain
    8.2.2 Fixture costs
    8.2.3 Computational costs
  8.3 Initialization strategy
  8.4 Slope based step
  8.5 Force based step
    8.5.1 Boundary version
    8.5.2 Probability version
  8.6 Displacement based step
  8.7 Strain based step
  8.8 Hill climb controller
    8.8.1 Parameters used
    8.8.2 The FTS board
    8.8.3 The P1SMD board
    8.8.4 The THCOBO board
    8.8.5 The TOTSB board
    8.8.6 The LAN2RF board
    8.8.7 Summary

9 Conclusion
  9.1 Discussion
  9.2 Future work
  9.3 Conclusion

Bibliography

A Abbreviations and glossary


List of Figures

1.1 An empty PCBA clamped between a top and a bottom fixture
1.2 Schematic overview of BGA soldering
1.3 Pad cratering: a crack in the resin between copper and fibreglass [11]
2.1 Example panelized PCBA
2.2 The different modules of the AET system
2.3 The DUT brought into the AET system
2.4 The DUT aligned with the bottom fixture
2.5 Alignment of top fixture with the bottom fixture
2.6 Establishing contact between DUT and test probes
3.1 Sign convention of stresses in 1D
3.2 Sign convention of stresses in 2D; rotation of coordinate system
3.3 Mohr's circle
3.4 Example meshed PCBA
3.5 Classical mechanics versus static finite element analysis
3.6 Example: max. allowable strain vs. strain rate vs. PCB thickness [10]
4.1 Example function with multiple local maxima f(x) = sin(πx) − x²
4.2 Shape optimization: example problem
4.3 Shape optimization: example continuous density solution [1]
4.4 Shape optimization: example SIMP solution [1]
4.5 Shape optimization: example SIMP solution with filtering [1]
6.1 Example explored search tree
8.1 Benchmark set: DUT photos
8.2 Benchmark set: DUT+fixture rendering
8.3 Example results
8.4 Initialization; FTS; fixture versus strain map
8.5 Initialization; P1SMD; fixture versus strain map
8.6 Initialization; THCOBO; fixture versus strain map
8.7 Initialization; TOTSB; fixture versus strain map
8.8 Initialization; LAN2RF; fixture versus strain map
8.9 Slope step; FTS; aggressiveness 50.0; strain versus iteration
8.10 Slope step; FTS; aggressiveness 40.0; strain versus iteration
8.11 Slope step; FTS; aggressiveness 40.0; first stage; fixture versus strain map
8.12 Slope step; P1SMD; aggressiveness 500.0; strain versus iteration
8.13 Slope step; P1SMD; aggressiveness 500.0; generated fixtures
8.14 Slope step; P1SMD; aggressiveness 250.0; strain versus iteration
8.15 Slope step; P1SMD; aggressiveness 250.0; first stage; fixture versus strain map
8.16 Boundary force step; FTS; strain/price versus iteration
8.17 Boundary force step; FTS; force goal 1500 mN; first stage; fixture versus strain map
8.18 Boundary force step; P1SMD; strain/price versus iteration
8.19 Boundary force step; P1SMD; force goal 1500 mN; first stage; fixture versus strain
8.20 Probability force step; FTS; force goal 1500 mN; strain/price versus iteration
8.21 Probability force step; P1SMD; force goal 1500 mN; strain/price versus iteration
8.22 Probability force step; force goal 3000 mN; strain/price versus iteration
8.23 Displacement step; FTS; strain/price versus iteration
8.24 Displacement step; P1SMD; strain/price versus iteration
8.25 Displacement step; FTS; first stage; fixture versus strain map
8.26 Displacement step; P1SMD; first stage; fixture versus strain map
8.27 Strain step; FTS; strain/price versus iteration
8.28 Strain step; P1SMD; strain/price versus iteration
8.29 Strain step; FTS; first stage; fixture versus strain map
8.30 Strain step; P1SMD; first stage; fixture versus strain map
8.31 Hill climb controller; FTS; aggressiveness 0.25; strain/price versus iterations
8.32 Hill climb controller; P1SMD; strain/price versus iterations
8.33 Hill climb controller; P1SMD; aggressiveness 0.25; dominant step algorithms
8.34 Hill climb controller; P1SMD; strain map
8.35 Hill climb controller; THCOBO; aggressiveness 0.25; strain/price versus iterations
8.36 Hill climb controller; THCOBO; aggressiveness 0.25; dominant step algorithms
8.37 Hill climb controller; THCOBO; strain map
8.38 Hill climb controller; TOTSB; aggressiveness 0.25; strain/price versus iterations
8.39 Hill climb controller; TOTSB; aggressiveness 0.25; dominant step algorithms
8.40 Hill climb controller; TOTSB; strain map
8.41 Hill climb controller; LAN2RF; aggressiveness 0.25; strain/price versus iterations
8.42 Hill climb controller; LAN2RF; aggressiveness 0.25; dominant step algorithms
8.43 Hill climb controller; LAN2RF; strain map
8.44 Hill climb controller; average; dominant step algorithms
9.1 Hill climb controller; P1SMD; optimum found; strain distribution


List of Tables

6.1 Example priority queue over time
8.1 Benchmark set: fixture statistics
8.2 Hill climb controller: step algorithm instances
8.3 Benchmark set: strain/price comparison
A.1 Abbreviations
A.2 Glossary


Chapter 1

Introduction

This chapter provides an overview of the test fixture optimization problem, after which the remaining chapters will describe the different research subjects in greater detail. In the first section, the problem context is presented. The second section gives a description of the problem itself. The third section then introduces a basic problem statement and research objective. Finally, the fourth section describes the outline of the rest of this document.

1.1 Problem context

In the world of electronics manufacturing, functional and in-circuit tests are often used. In such a test, a Printed Circuit Board Assembly (PCBA) is placed onto a set of test probes. These test probes can then verify whether all PCBA components behave in accordance with their specification. In this document, such a test will be referred to as an Automated Electrical Test (AET).

In order for the test probes to make good contact with the PCBA, the PCBA needs to be properly secured with respect to these test probes. Therefore, the PCBA is enclosed in a so-called fixture, consisting of a top half, the top fixture, and a bottom half, the bottom fixture. The bottom fixture holds all the test probes, plus some extra support probes. The top fixture holds small plastic fingers, the so-called push fingers, pointing downwards. By moving the top fixture downwards with respect to the bottom fixture, the PCBA is clamped between push fingers at the top side and test probes and support probes at the bottom side, ensuring good contact between test probes and the PCBA. Figure 1.1 shows an example PCBA clamped between a top and bottom fixture.

In this process of fixture clamping, multiple so-called test stages can be distinguished. First, there is the resting stage: the PCBA resting on the bottom fixture. Next, there is the first test stage: the PCBA fully clamped between top and bottom fixture. Optionally, a second test stage can be executed: the PCBA still clamped between top and bottom fixture, but with a larger distance between the two halves, resulting in only a subset of the test probes making contact with the PCBA.

During this testing, the tested PCBA is often referred to as the Device Under Test (DUT).

Figure 1.1: An empty PCBA clamped between a top and a bottom fixture



Figure 1.2: Schematic overview of BGA soldering

Figure 1.3: Pad cratering: a crack in the resin between copper and fibreglass[11]


1.2 Board strain

During the clamping described above, the PCBA might deform. Such deformation can be expressed in strain. Although barely visible, Figure 1.1 shows an example of such deformation at the left side of the empty PCBA. When tin-lead solder joints and/or through hole components are being used, such deformation is relatively safe. However, the switch to stiffer lead-free soldering and/or the use of surface mount components has increased the chance of PCBA damage.

As an example of such PCBA damage, consider a chip package soldered to the PCB using a ball grid array (BGA) mount. A BGA is an array of small solder balls, serving as a connection that holds the chip package in place, and acting as a high-throughput connection thanks to the many different connection points. Figure 1.2 shows a schematic overview of one of the solder balls of a BGA. On top, we have the chip, interfacing with the solder ball through copper. At the bottom, we have a PCB, built from fibres and resin. Board traces are made of copper.

If too much strain is created on a PCBA near a BGA, cracks might occur in the chip package, in the copper between chip package and solder ball, in the solder ball itself, in the copper between solder ball and PCB, or in the resin from which the PCB is made [10]. This last failure mode is called pad cratering. Figure 1.3 shows an example of pad cratering caused by excessive board strain.

Sometimes such defects are big enough to be detected by a functional test, sometimes not, resulting in the shipping of a partially damaged product with a lower mean time to failure (MTTF). In either case, the risks caused by strain during an AET should be minimized.

A standard approach for measuring strain is through the use of strain gauges [9]. While the use of strain gauges provides more information about the product, it does not prevent the damage caused by strain. Much research has been done on finding lead-free soldering alternatives that allow better flexibility [3, 8]. Other alternatives have also been researched, such as adding glue [14] or epoxy [18] to BGA corners. While this research has shown reductions in damage caused by strain, it adds manufacturing costs as well. This research proposes a new direction for reducing strain: optimizing the way the DUT is held by the fixture.

1.3 Research objective

The main topic of research is a proof of concept which can automatically generate a fixture such that:

1. Maximum DUT strain is minimized.

2. Total fixture costs are minimized.

Some optimization constraints and some optimization freedoms apply. The most important freedom is the push finger and support probe placement. The most important constraints are the test probe types and locations, and DUT geometry limiting placement locations. Chapter 5 provides a precise problem statement.

1.4 Document outline

Chapter 2 gives a more detailed description of the different aspects of a DUT and a fixture.

Furthermore, a step-by-step explanation of the testing process is given. Chapter 3 states the exact definition of strain. Additionally, literature on acceptable strain limits is presented and a computation method for strain analysis is introduced. Chapter 4 describes some optimization algorithms.

Chapter 5 details the global research objective, the research questions and a project scope.

Chapter 6 introduces potential solutions for these research questions. Chapter 7 discusses the implementation of these potential solutions, after which Chapter 8 presents the results obtained.

Chapter 9 concludes by discussing the results obtained and presenting possible future work.


Chapter 2

Automated Electrical Test

This chapter describes the most important elements used in an Automated Electrical Test, as well as an example test procedure. The first section gives an introduction to the elements of a PCBA. The second section describes the different mechanical parts of a test fixture. The third section describes the testing process. The strain that arises during this testing is further described in the next chapter.

2.1 Printed circuit board assembly

The base of every PCBA is of course the PCB. Such a printed circuit board is made from woven glass fibres and resin, together forming the board. To this board, a thin copper foil is attached. By precisely removing parts of this copper, one can use the remaining copper as an electrical network which provides connectivity between the different components that will later be placed on the PCB. To accommodate more difficult routing, multiple copper and resin layers can be added to create the desired electrical network.

By soldering different components onto the board, the PCB becomes a PCBA. One can classify (most) components into two categories: through hole technology (THT) and surface mount technology (SMT). When using THT, small holes are drilled at PCB contact point locations, through which the leads of through hole components are inserted. At the opposite side of the board, each lead is then soldered to the board. In SMT, a component is soldered directly to a soldering pad on the PCB.

When a PCB deforms due to fixture clamping, components react differently depending on the technology used. In case of THT, component leads provide a bit of flexibility, thus providing some bending freedom. In case of SMT, the different components add extra stiffness to the PCB. This extra stiffness reduces strain directly under the SMT component, but can create extra strain at the solder connections at the edges of the component. This strain at the edges has a higher chance of damaging the PCB or the component, since there is no component lead flexibility.

To be used in an AET, a PCBA needs to have a rectangular shape of which the length and the width fall within minimum and maximum dimensions. In case one needs to produce a PCBA that is smaller than the minimum PCBA size, and/or a non-rectangular PCBA, one needs to apply panelization. Using panelization, one fits multiple PCBAs onto a single board. During PCB manufacturing, cutouts are created around the different panels.

In order to hold the panels in place, small breakaway tabs are retained. Figure 2.1 shows an example of a panelized PCBA. After the PCBA is fully assembled and tested, the different panels are separated from the main board by milling away the breakaway tabs.

Since the different panels are connected to the main PCBA only through small tabs, one needs to be careful with push fingers, test probes and/or support probes around the breakaway tabs, to prevent excessive strain at these locations.


Figure 2.1: Example panelized PCBA

Figure 2.2: The different modules of the AET system


2.2 Fixture

During the testing process, the DUT is clamped in the fixture. Such a fixture consists of two main parts: the top fixture and the bottom fixture. Between these two parts, the DUT is clamped. Figure 2.2 presents a more detailed drawing of this.

2.2.1 Bottom fixture

The bottom fixture holds the different test probes that will test the DUT. Furthermore, the bottom fixture contains some support probes that provide additional support for the DUT during the resting stage. Three alignment pins are used to align the DUT with the bottom fixture.

2.2.2 Top fixture

The top fixture consists of a plate with push fingers mounted onto it. These push fingers push the DUT downwards onto the different probes of the bottom fixture. Furthermore, it contains alignment pins to align the top fixture with the bottom fixture.

2.3 Testing process

This section describes the execution of an example test. Figure 2.3 shows a schematic drawing of the starting situation. The top fixture and bottom fixture are separated. In the middle, conveyor belts bring in the DUT. One of these belts is located at the front of the machine, a second towards the back. The front and back edges of the DUT rest on these conveyor belts. When the DUT is fully shifted into the machine, it is stopped by a small stopper.

The next step is the lowering of the conveyor. The DUT is aligned with the bottom fixture through three alignment pins. The DUT rests on multiple support probes such that there is not yet any contact between DUT and test probes. This situation is referred to as the resting stage. Figure 2.4 shows a schematic drawing of this situation.

Next, the top fixture is lowered. First, alignment pins in the top fixture and alignment bushes in the bottom fixture allow the top fixture to align itself with the bottom fixture, and therefore with the DUT. Figure 2.5 shows a schematic drawing of this situation. After alignment, the top fixture continues lowering, pushing down the DUT with the push fingers attached to the top fixture. This way, the DUT makes contact with the test probes. This situation is referred to as the first test stage, shown in Figure 2.6.

Next, the different electrical tests are performed through the test probes. After all testing is completed, the top fixture is raised again, followed by the raising of the conveyor belts, after which the conveyor belts shift the DUT out of the AET system.

These four steps describe the most basic AET test procedure, but different extensions are possible. One commonly used extended procedure is the so-called two-stage AET test. In such a test, the bottom fixture contains two different types of test probes. In the first stage, the DUT is pushed down to make contact with all probes. In the second test stage, the DUT is raised a bit, such that only a subset of the bottom fixture test probes makes contact with the DUT.


Figure 2.3: The DUT brought into the AET system

Figure 2.4: The DUT aligned with the bottom fixture

Figure 2.5: Alignment of top fixture with the bottom fixture

Figure 2.6: Establishing contact between DUT and test probes


Chapter 3

Board strain

In this chapter, strain related topics are discussed. In the first section, an exact strain definition is given. In the next section, a strain analysis method is presented. In the final section, practical strain limits are discussed.

3.1 Strain definition

This section handles several important strain related definitions. First, a short introduction on mechanical modelling and on deformation domains is given. Second, exact definitions and representations of stress and strain are presented.

3.1.1 Modelling approaches and deformation domains

Matter is always made of small particles. At this low level, particle displacements caused by external forces result in bumpy, discrete displacements. When observing the complete object, however, deformation caused by external forces appears as a smooth, continuous whole. In continuum mechanics, materials are therefore modelled as a continuous mass rather than a summation of their individual particles.

Suppose one wants to look at the deformation of an object. If the displacements of the individual particles are much smaller than any relevant dimension of the object, then one can assume that material properties like density, stiffness and geometry remain unchanged. This mathematical approach is called the infinitesimal strain theory, sometimes called small deformation theory.

If particle displacements are large enough to also change other object dimensions, these changes need to be accounted for in the model as well. This mathematical approach is called the finite strain theory, sometimes called large deformation theory.

Since the vertical displacement of DUT particles is much smaller than the horizontal dimensions of the DUT, the infinitesimal strain theory suffices for DUT strain analysis.

The simplest form of deformation calculation is in the domain of one dimensional (1D) in-axis deformation. In this domain, an object is analyzed in only one direction, in which the object can compress or elongate. An example of such 1D in-axis deformation is a spring, which can either be compressed or elongated. Modelling is done through Hooke's law, defined by Robert Hooke in 1676. Note that by assuming infinitesimal deformation, one can use 1D in-axis analysis on objects spanning multiple dimensions. An example of a 3D object for which 1D in-axis analysis can suffice to calculate 3D deformation is a light truss, e.g. used at shows for positioning spot lights and speakers. A light truss consists of multiple metal tubes, welded together at the end points. Hence, all deformation can be assumed to solely take place as a combination of tube compressions and elongations.
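The spring example above can be written out in a few lines. A minimal sketch of Hooke's law in its 1D in-axis form, F = k·x; the stiffness and force below are hypothetical example values, not taken from this thesis:

```python
# Hooke's law for a 1D spring: F = k * x.
# k and F are hypothetical example values.
k = 800.0          # spring stiffness in N/m
F = 4.0            # applied compressive force in N

x = F / k          # resulting compression in m
print(x * 1e3)     # compression in mm
```

The same linear relation between force and displacement is what makes 1D in-axis analysis of a light truss tractable: each tube behaves like such a spring along its own axis.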

A second form of deformation is 1D out-of-axis deformation. In this domain, an object spans one direction, but the deformation takes place in a direction perpendicular to the main dimension.


Figure 3.1: Sign convention of stresses in 1D

Take for example a flagpole, mounted horizontally on the outside wall of some shop, deformed by the weight of the flag. The main axis is formed by the flagpole; the second dimension is the vertical direction of gravity. Deformation can be calculated using the classical Euler-Bernoulli beam theory, named after Jacob Bernoulli and published around 1750.

Another form of deformation is 2D in-plane deformation. Take for example a square plate of metal. Deformations causing the square to become more of a rectangle or a diamond are called normal deformation and shear deformation, respectively.

The last form of deformation mentioned in this document is also the most relevant one: 2D out-of-plane deformation. In this domain, an object spans two dimensions and deformation takes place in a third dimension. For example, take a PCB, which can be modelled as a 2D plate. Deformation by a combination of push fingers and test probes occurs perpendicular to the board. Deformation can be calculated using the classical Kirchhoff-Love plate theory, an extension of the Euler-Bernoulli beam theory, published in 1888. However, such calculations can only be performed on simple shapes with specific boundary conditions. The deformation of more complicated shapes, such as a DUT, can be approximated using Finite Element Analysis (FEA), discussed in Section 3.2.

3.1.2 Stress and strain

All these different kinds of deformation can be expressed in amounts of strain. In-axis strain is defined as ε = ΔL/L, where ε denotes strain, L the original object length and ΔL the change in object length. Because of this definition, ε is a unitless quantity.

Stress is a concept closely related to strain. Strain describes the deformation; stress describes the amount of force exerted per area that is created by, or is creating, the deformation causing the strain. Stress is written as σ and is expressed in pascal, defined as Pa = N/m² = kg/(m·s²). In case of infinitesimal deformations, the deformation can be assumed to be elastic. The relation between stress and strain can then be described using Young's modulus, defined as E = σ/ε.

E is by definition a positive number, giving stress and strain the same sign convention. E is a material-specific constant. Figure 3.1 shows the stress and strain sign conventions used in 1D.
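A minimal numerical sketch of these definitions; the length, elongation and Young's modulus below are assumed example values (24 GPa is only a rough order of magnitude for PCB laminate, not a figure from this thesis):

```python
# Minimal illustration of the 1D strain/stress definitions above.
# All numbers are hypothetical example values, not thesis data.
L = 0.080         # original length in m (80 mm)
dL = 40e-6        # elongation in m (40 um)

strain = dL / L   # epsilon = dL / L, unitless
print(strain)     # 5e-4, i.e. 500 microstrain

# Elastic relation via Young's modulus: sigma = E * epsilon.
# E = 24 GPa is an assumed, rough value for PCB laminate.
E = 24e9                  # Young's modulus in Pa
stress = E * strain       # stress in Pa
print(stress / 1e6)       # stress in MPa
```

Note how the unitless strain makes the stress depend only on the material constant E, which is exactly why strain is the quantity that strain limits (Section 3.3) are formulated in.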

In 2D, stress and strain need to be measured in three directions: two principal stresses along the x-axis and y-axis, and one shear stress in the xy-plane. Principal stresses, denoted σx and σy, are of the same type as the 1D stress. Shear stress, denoted τxy, can be thought of as an in-between stress, trying to force a square plate into a diamond shape. The combination of these different stresses can be written in tensor notation, and is called the Cauchy stress tensor:

σ = | σx   τxy |
    | τxy  σy  |     (3.1)

Figure 3.2 shows the sign convention used in 2D.

Suppose one has found σx, σy and τxy for some point on some 2D plane, for some xy-axis convention. How can one find the stress along an axis rotated by an angle θ, such as shown in Figure 3.2? And, even more important, for what value of θ is |σ′x| maximal? One can see that θ = π/2 gives σ′x = σy and θ = π gives σ′x = σx, but interpolating is not as straightforward.

A nice way of understanding the interpolation from σx to σ′x is the visualization of Mohr's circle, named after C.O. Mohr. Define a 2D graph, with σ as the horizontal axis and −τ as the vertical axis. Draw a line between coordinates (σx, τxy) and (σy, −τxy). By rotating this line around its centre by 2θ, the endpoints of this line give the values of σ′x, σ′y and τ′xy that result


Figure 3.2: Sign convention of stresses in 2D; rotation of coordinate system

Figure 3.3: Mohr's circle

after rotating the coordinate system of the measured point by θ, such as shown in Figure 3.2. An example of Mohr's circle is given in Figure 3.3. The leftmost point on the circle created by rotating the earlier drawn line gives σminor, the rightmost point on the circle gives σmajor. These are the maximal stresses and hence the two values that are most relevant for the research objective.
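What Mohr's circle visualizes can also be computed directly with the standard plane-stress transformation formulas. A small sketch with hypothetical stress values; the circle's centre and radius give σmajor and σminor:

```python
import math

def rotate_stress(sx, sy, txy, theta):
    """Plane-stress transformation: stresses expressed in a
    coordinate system rotated by angle theta (radians)."""
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    sx_r = (sx + sy) / 2 + (sx - sy) / 2 * c2 + txy * s2
    sy_r = (sx + sy) / 2 - (sx - sy) / 2 * c2 - txy * s2
    txy_r = -(sx - sy) / 2 * s2 + txy * c2
    return sx_r, sy_r, txy_r

def principal_stresses(sx, sy, txy):
    """Centre and radius of Mohr's circle give sigma_major/minor."""
    centre = (sx + sy) / 2
    radius = math.hypot((sx - sy) / 2, txy)
    return centre + radius, centre - radius  # sigma_major, sigma_minor

# Hypothetical stresses in MPa:
sx, sy, txy = 30.0, -10.0, 15.0
print(rotate_stress(sx, sy, txy, math.pi / 2))  # theta = pi/2 swaps sx and sy
print(principal_stresses(sx, sy, txy))
```

Rotating by θ = π/2 indeed returns σ′x = σy and σ′y = σx, matching the interpolation endpoints discussed above, while `principal_stresses` gives the two extreme values without searching over θ.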

Both the Cauchy stress tensor and Mohr's circle can be extended to 3D. In 3D, the Cauchy stress tensor is the following:

σ = | σx   τxy  τxz |
    | τxy  σy   τyz |
    | τxz  τyz  σz  |     (3.2)

Assuming infinitesimal 2D out-of-plane deformation in the DUT, the stress tensor at the mid plane of the DUT can be reduced to:

σ = | 0    0    τxz |
    | 0    0    τyz |
    | τxz  τyz  0   |     (3.3)

If stress is not measured at the mid plane but instead at the top or bottom of the PCB, all relevant stresses can be described using a 2D Cauchy stress tensor as shown in Equation (3.1).


Using this 2D stress representation, one can calculate the principal strain in any direction for different points on the DUT.

This reduction from a 3D tensor to a 2D tensor might reduce the number of strain variables from six to three (the tensors are symmetric), but these three variables are still not easy to present in a graphical manner. One solution to this representational problem is to use the Von Mises yield criterion, formulated by Von Mises in 1913:

σv = √(σx² − σx·σy + σy² + 3τxy²)     (3.4)

The resulting scalar σv is proportional to the distortion energy, which, as the next section will explain, indicates the chance of board defects.
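Equation (3.4) translates directly into code; a sketch with hypothetical stress values:

```python
import math

def von_mises_2d(sx, sy, txy):
    """Von Mises equivalent stress for plane stress, Eq. (3.4)."""
    return math.sqrt(sx**2 - sx * sy + sy**2 + 3 * txy**2)

# Hypothetical stresses in MPa; for pure uniaxial stress the
# criterion reduces to the stress itself.
print(von_mises_2d(30.0, -10.0, 15.0))
print(von_mises_2d(50.0, 0.0, 0.0))   # uniaxial case: equals sx
```

The appeal is exactly the one named in the text: three tensor components collapse into one scalar that can be plotted as a single strain/stress map per board.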

To summarize: strain describes object deformation, while stress describes the (surface) forces that cause or are caused by object deformation. 1D stress can be expressed as a single number, but higher dimensional stresses need the Cauchy stress tensor to express the different normal and shear stresses. The mathematical formulas that translate normal and shear stresses to a rotated coordinate system can be visualized using Mohr's circle. Mohr's circle also helps to find the maximum stress magnitude and angle.

3.2 Finite Element Analysis

In this section, an overview is given of the mathematics behind Finite Element Analysis (FEA). In FEA, analysis is performed on a finite system of elements with known properties, 'known' meaning a direct derivation from classical mechanics theory, or an approximation that has been shown to be accurate enough in practice. Different kinds of FEA exist, of which one of the most elementary, static linear analysis, will be explained in this section.

The term 'static' refers to the analyzed model being assumed static: an equilibrium between forces and displacements is assumed. The term 'linear' refers to the system being linear. This assumption does not fully reflect the modelling of a DUT resting on support probes or clamped in a fixture: all probes have a preload, giving non-linear behaviour. Furthermore, not all probes necessarily make contact with the DUT. In practice, this modelling inaccuracy has a measurable but minor influence on the analysis results.

As an illustration of the working of linear static analysis, a simple rectangular PCBA is used. On all four corners and halfway along the two long edges, push fingers are placed. Two probes are located on the two long edges, halfway between the two left corners and the middle push fingers.

For simplicity, the 2D out-of-plane system is modelled as a simple 1D out-of-axis problem. The PCBA is represented by a mesh containing sixteen triangles. A visualization of this is shown in Figure 3.4.

[Figure: mesh of the 80 mm long, 1.6 mm thick PCBA, with grid points P801–P818 and triangles T701–T716.]

Figure 3.4: Example meshed PCBA


CHAPTER 3. BOARD STRAIN

[Figure: plot of z-displacement (mm) against x-distance (mm), comparing the classical-mechanics reference with the Nastran results.]

Figure 3.5: Classical mechanics versus static finite element analysis

In the FEA software used, Nastran, the elements used are triangles with both an in-plane and an out-of-plane stiffness. Each triangle is defined as a combination of three vertices. Forces are associated with these vertices, and hence indirectly associated with the triangles.

Vertices have six degrees of freedom (DOFs): three displacements x, y and z, and three rotations φ, θ and ψ.

Structural elements such as beams or plates are defined as a structure connecting two or more vertices. These connections can be expressed in the form of stiffness, e.g. a spring connecting two grid points adds a stiffness between these two points equal to the spring constant as used in Hooke's law.

Define a vector u that holds the displacements/rotations of all DOFs of all vertices. At first sight, the 18 vertices of the example mesh result in a vector u of size 108. However, the displacements/rotations in x, y and ψ are constrained. Hence, u can be limited to three DOFs per vertex, resulting in a vector u of size 54. The mesh has six push fingers constraining mesh displacement, of which four will hold the DUT after deformation. Hence, these 4 out of 54 entries are set to 0 mm. The other 50 entries are unknown.

Define a matrix K that holds the stiffness relations between every two vertex DOFs. In the example, this matrix would be of size 54 × 54. Because all vertices are connected to each other thanks to the 16 triangles, all matrix entries are known. For details on how the matrix entries are exactly defined based on triangle specifications, the reader is referred to [16].

Define a vector F that holds, for each DOF of each vertex, the external force applied to it. By definition, F is of the same length as u. Since we have two probes applying known forces to the mesh and four push fingers applying unknown forces to the mesh, two entries of F can be set to −0.6 N and four entries are unknown. The rest of the entries are set to 0 N.

In order to find the force equilibrium, one needs to solve the following equation:

F = K · u (3.5)

All in all, finding the force equilibrium is just a matter of solving a system of linear equations. Because the assembly of u, K and F is quite error-prone, all of these steps can be done by Nastran. In order to compare accuracy, the example has been entered into Nastran and the analysis has been performed. Figure 3.5 shows a comparison between the exact displacement calculated using the Euler-Bernoulli beam theory and the FEA displacement results calculated using Nastran. It is clearly visible that the results are practically identical, despite the relatively low number of triangles used.
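As a toy illustration of Equation (3.5), consider two springs in series: a wall, spring k1 to node 1, spring k2 to node 2, and an external force on node 2. All numeric values below are illustrative, and a real analysis would leave the assembly and solving to Nastran:

```python
def solve_2x2(K, F):
    """Solve a 2x2 linear system K.u = F by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    u0 = (F[0] * K[1][1] - K[0][1] * F[1]) / det
    u1 = (K[0][0] * F[1] - F[0] * K[1][0]) / det
    return [u0, u1]

# Two springs in series: wall -- k1 -- node1 -- k2 -- node2, force on node2.
# After constraining the wall node, the reduced stiffness matrix is:
k1, k2, force = 100.0, 50.0, 10.0   # N/mm and N, illustrative values
K = [[k1 + k2, -k2],
     [-k2,      k2]]
F = [0.0, force]

u = solve_2x2(K, F)
print(u)   # [0.1, 0.3]: node1 moves force/k1, node2 moves force/k1 + force/k2
```

The hand-checkable result (node 1 deflects by force/k1, node 2 by force/k1 + force/k2) shows how constraining a DOF shrinks the system and how the remaining displacements follow from one linear solve.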

In this example, only four out of six push fingers were modelled, since it is easy to see that the remaining two will not make contact with the DUT. However, when the analysis is automated, these two push fingers are assumed to keep contact, leading to a wrong deformation and strain calculation. In this specific example, the results would differ a lot. In practice, such as in the results presented in Chapter 8, such inaccuracies give less than 10% deviation.

3.3 DUT strain limits

In order to decide if an AET test can safely be performed, one does not only need a definition of maximal DUT strain, but also a way to interpret these strain magnitudes.

Bansal, Yoon and Mahadev [2] used a high-speed bending test to research the strain-to-failure relationship of PCBs. Two of their conclusions are very relevant to the research objective:

• “The strain to failure and failure modes are strain rate dependent. At slow strain rates ( 500 µstrain/s) the strain to failure is relatively high and the failure mode is PCB pad lifting. At high strain rates (5000 to 13000 µstrain/s) the strain to failure is relatively low. The failure mode is a combination of brittle fracture at the component substrate ENIG interface and PCB pad lifting.”

• “Flip-chip packages with component sizes varying from 27mm to 33mm (square) did not show any measurable difference in brittle fracture strain.”

In other words: the two factors most important for the prevention of DUT failure are strain and strain rate. The same conclusion has been drawn by the IPC [10, 11], who suggested plotting strain against strain rate for points on the PCBA at different locations and times. Based on the location in the diagram, each point can then be classified as safe or potentially harmful.

Figure 3.6 shows a diagram on which such points should be plotted. Points above the drawn line are considered potentially harmful, points below the drawn line safe.


[Figure: maximum allowable principal strain (µstrain) against strain rate (µstrain/s), with separate curves for PCB thicknesses 0.8 mm, 1.0 mm, 1.6 mm and 2.4 mm.]

Figure 3.6: Example: max. allowable strain vs. strain rate vs. PCB thickness[10]


Chapter 4

Optimization

The previous chapter presented an exact strain definition, and introduced some literature on safe DUT strain limits. The next step now is to minimize such strain. But first, a small introduction on the problem complexity.

Suppose one wants to find the perfect fixture design using t test probes, s support probes and p push fingers. Contact positions must be accurate up to a mm. DUT dimensions are w mm by h mm. Simply trying all possible configurations would then involve this many operations:

((w/a) · (h/a))^(s+p)   (4.1)

By exploiting symmetry (switching two push fingers or two support probes gives an equal fixture), this number can be reduced to:

((w/a) · (h/a))^(s+p) / (s! · p!)   (4.2)

In practical applications, the number of possible solutions is comparable to the total number of hydrogen atoms in the universe raised to a power between two and twenty. In other words: impossible to solve by brute force.
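To get a feel for these magnitudes, Equations (4.1) and (4.2) can be evaluated for a hypothetical board; all numbers below are illustrative, not taken from a real fixture:

```python
import math

# Hypothetical board: 200 mm x 150 mm, 0.1 mm placement accuracy,
# 10 support probes and 20 push fingers (all values are illustrative).
w, h, a = 200.0, 150.0, 0.1
s, p = 10, 20

positions = (w / a) * (h / a)                              # grid positions per contact
naive = positions ** (s + p)                               # Equation (4.1)
reduced = naive / (math.factorial(s) * math.factorial(p))  # Equation (4.2)

print(math.log10(reduced))   # roughly 169: about 10^169 candidate fixtures
```

Even after exploiting symmetry, the candidate count stays astronomically large, which motivates the heuristic algorithms introduced below.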

Therefore, this chapter introduces some algorithms that try to find a solution that is not guaranteed to be the perfect solution, but a solution that should be pretty good. The first two sections describe two categories of optimization algorithms, the third section describes the practical results of the application of such algorithms in a field related to the research problem.

4.1 Trajectory search

Suppose one wants to find the value of x for which f(x), shown in Figure 4.1, is maximal. This function contains two local maxima, but only one global maximum.

4.1.1 Gradient descent

In case that, given some x, both f(x) and f′(x) can be calculated, the gradient descent algorithm might be helpful. As an initialization step, pick some point q0 and some scalar γ. Then repeatedly calculate qn+1 = qn + γ · f′(qn) until convergence. (Since a maximum is sought here, the step is taken in the direction of the gradient.)

One can see that this algorithm will always converge to a local maximum, but not necessarily to the global maximum. If q0 and/or γ are chosen poorly, the algorithm is destined to end at a maximum that is merely local. Following the example of Figure 4.1, pick q0 = 2 and γ = 0.1. Then q1 ≈ 1.914, q2 ≈ 1.834, q3 ≈ 1.740 and qn ≈ 0.415, which is the global maximum. However, picking q0 = −2 and γ = 0.1 results in q1 ≈ −1.286, q2 ≈ −1.224, q3 ≈ −1.219 and qn ≈ −1.218, which is only a local maximum.

[Figure: plot of f(x) over −2.5 ≤ x ≤ 2.5.]

Figure 4.1: Example function with multiple local maxima, f(x) = sin(πx) − x²
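The update rule can be sketched directly for the example function. Since a maximum is sought, the step follows +f′(x), i.e. a gradient ascent; the iteration count is a choice of this sketch:

```python
import math

def f_prime(x: float) -> float:
    """Derivative of f(x) = sin(pi*x) - x^2."""
    return math.pi * math.cos(math.pi * x) - 2 * x

def gradient_ascent(q0: float, gamma: float, iterations: int = 500) -> float:
    q = q0
    for _ in range(iterations):
        q = q + gamma * f_prime(q)   # q_{n+1} = q_n + gamma * f'(q_n)
    return q

print(round(gradient_ascent(2.0, 0.1), 3))    # ~0.415, the global maximum
print(round(gradient_ascent(-2.0, 0.1), 3))   # ~-1.218, only a local maximum
```

Running both starting points reproduces the behaviour described above: the outcome depends entirely on where the trajectory starts.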

4.1.2 Hill climbing

In case f′ is unknown, hill climbing is an alternative with characteristics similar to gradient descent. Again, pick some point q0, and pick some δ0. If f(qn + δn) > f(qn), then qn+1 = qn + δn, else qn+1 = qn. Pick some δn+1 and repeat until convergence.

One can see that this algorithm behaves similarly to the gradient descent algorithm, including the possible convergence to a local maximum instead of the global maximum.

4.1.3 Stochastic hill climbing

To overcome getting stuck in a local maximum, as may happen with simple hill climbing, one can apply the stochastic hill climbing algorithm. This time, the decision to accept δn is not based on the direct comparison f(qn + δn) > f(qn), but on some probability mapping, e.g. e^(ΔE) > rand, with ΔE = f(qn + δn) − f(qn), and rand a random number with 0 ≤ rand ≤ 1. In this mapping, uphill moves are always accepted, but downhill moves have a smaller chance of being accepted.

4.1.4 Simulated annealing

Simulated annealing can be thought of as stochastic hill climbing with a small modification: instead of the comparison e^(ΔE) > rand, the comparison e^(ΔE/T) > rand is made. In this modified comparison, T represents a temperature which is initialized at some positive number, e.g. 20, and is slowly lowered to 0 during algorithm execution. Again, uphill moves are always accepted. When the temperature is high, downhill moves are often accepted as well. However, as the temperature slowly drops, downhill moves get a smaller probability of being accepted. In the limit T → 0, downhill moves are no longer accepted, the algorithm finishes, and qn is (hopefully) close to the global maximum.
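A minimal sketch of this acceptance rule on the example function; the linear cooling schedule, step size and fixed seed are choices of this sketch, not prescribed by the algorithm:

```python
import math, random

def f(x: float) -> float:
    return math.sin(math.pi * x) - x * x

def simulated_annealing(q0: float, t0: float = 20.0, steps: int = 5000) -> float:
    rng = random.Random(42)           # fixed seed for reproducibility
    q, best = q0, q0
    for n in range(steps):
        t = t0 * (1 - n / steps)      # temperature slowly lowered towards 0
        delta = rng.uniform(-0.5, 0.5)
        d_e = f(q + delta) - f(q)
        # Uphill moves are always accepted; downhill with probability e^(dE/T).
        if d_e > 0 or (t > 0 and rng.random() < math.exp(d_e / t)):
            q += delta
        if f(q) > f(best):
            best = q                  # remember the best point visited
    return best

best = simulated_annealing(-2.0)
print(round(f(best), 2))
```

Starting from q0 = −2 (the basin of the local maximum), the occasional accepted downhill moves let the trajectory escape that basin, which plain hill climbing cannot do.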

4.2 Swarm intelligence

Instead of allowing the optimization algorithm to take downhill search paths to escape local maxima, one could also apply the standard hill climbing technique multiple times, each run with a different initial starting location. Of all the rerun outcomes, the best outcome is kept. This technique is sometimes referred to as shotgun hill climbing.


CHAPTER 4. OPTIMIZATION

Instead of sequentially exploring different sections of the solution space, the solution space could also be explored in parallel using multiple solution points. Furthermore, these different points can not only perform local search, they can also share information. The intelligence emerging from this group of points working together is called swarm intelligence.

One of the first algorithms describing points working together is Particle Swarm Optimization (PSO) [6, 13]. With PSO, define a set of n points which can move freely through the solution space; call these points particles. Each particle has a position p and a speed v. At each p, the value of that potential solution is evaluated. In the example of Figure 4.1, p would be represented by some value x and the evaluated value would be f(p). Each particle stores its best known position in a shared array pbest, with pbest[i] being the best position visited by particle i. The globally best known position is held by particle gbest, with pbest[gbest] the best known position. Every iteration, each particle updates its speed according to the following formula, with r1 and r2 being two random numbers with 0 ≤ r1, r2 ≤ 2:

v[i] ← v[i] + r1 · (pbest[i] − p[i]) + r2 · (pbest[gbest] − p[i])   (4.3)

Particle positions are then updated accordingly:

p[i] ← p[i] + v[i]   (4.4)

After which pbest and gbest are updated:

pbest[i] ← f(p[i]) > f(pbest[i]) ? p[i] : pbest[i]   (4.5)

gbest ← i | i ∈ ℕ, 0 ≤ i < n, ∀p ∈ pbest : f(p) ≤ f(pbest[i])   (4.6)

Together, the particles behave as a swarm that quickly converges to the global optimum. In multimodal problems, this can result in the swarm converging to a local optimum instead of the global optimum. In order to stimulate parallel local search more than the convergence caused by gbest, PSO can also be used with an lbest. In this method, each particle has k neighbours, and each particle has its own lbest variable holding the reference to its neighbour with the best known position. Experiments have shown that using this approach, larger parts of the solution space are explored at the cost of slower convergence [7]. Research done by [5] suggests usage of lbest for multimodal problems and gbest for unimodal problems.

4.3 Shape optimization

The research topic could be summarized as: given a shape (the DUT), optimize the forces (the fixture) such that strain is minimized. Shape optimization is an active research field that handles the exact opposite problem: given some forces, optimize a shape such that strain is minimized.

For example, take the box shown in Figure 4.2, with length l and height h, constrained at the left side and with an upward-pointing force at the bottom right corner. The objective is to create a shape filling 30% of the bounding box that is optimized for maximum stiffness.

Bendsøe [4] introduced his approach with the following sentence: “Shape optimization in a general setting requires the determination of the optimal spatial material distribution for given loads and boundary conditions. Every point in space is thus a material point or a void and the optimization problem is a discrete variable one”. By meshing the bounding box, one can use FEA to approximate the different stiffnesses resulting from selecting different mesh subsets to fill with material.

To avoid the computational impossibility of comparing all possible designs, Mlejnek [15] introduced the Solid Isotropic Microstructures with Penalty (SIMP) method. In this method, the design is improved in an iterative manner. In each step, FEA is first used to find the strain distribution. Second, the design is adjusted such that regions with a lot of strain gain material, while regions with less strain are shrunk. In order to give the discrete mesh a more continuous behaviour, mesh cells do not just contain material or void, but have a continuous density ranging from 0 to 1. With this continuous range, the mesh can be initialized with all mesh cell densities set to the goal density. Then, each iteration, the density of cells with a lot of strain can be increased, while the density of cells with a small amount of strain can be decreased.

[Figure: bounding box of length l and height h, constrained at the left side, with an upward force F at the bottom right corner.]

Figure 4.2: Shape optimization: example problem

Figure 4.3: Shape optimization: example continuous density solution[1]

This approach has one drawback: the mesh may converge to a lot of cells with an in-between density. Figure 4.3 shows two examples of such converged solutions. White indicates empty space, black indicates material, and grey an intermediate-density material.

This is why the SIMP method introduces a penalty: each cell has a density 0 ≤ ρ ≤ 1, but the FEA uses a power function which penalizes half-way densities: instead of using ρ to calculate the stiffness of a cell, ρ^p is used, with p > 1. Figure 4.4 shows two examples of such converged solutions using p = 3. Again, white indicates empty space, black indicates material, and grey an intermediate-density material.

This penalty does however introduce a new problem: chequerboarding. At chequerboard regions, the model converges to an optimum with very small features. One solution to this problem is to extend SIMP with a filtering step. For a detailed overview of the working of such a filter and a comparison of different filters, the reader is referred to [17]. Figure 4.5 shows two examples of SIMP solutions with applied filtering.

These example results correspond with the currently known optimal structures. In other words: the found local optimum is equal to the global optimum. Of course, this does not necessarily hold for more complex problems. What it does show, however, is the potential of the optimization algorithms.

Figure 4.4: Shape optimization: example SIMP solution[1]


Figure 4.5: Shape optimization: example SIMP solution with filtering[1]

Again, this shape optimization problem differs from the strain optimization problem: the two are opposites. However, the presented examples show that, using appropriate algorithms, finding a local optimum that is nearly as good as the global optimum may be achievable with very limited computational power.


Chapter 5

Problem statement

Section 1.3 stated the following objective:

“The main topic of research is a proof of concept which can automatically generate a fixture design such that 1) maximum DUT strain is minimized and 2) total fixture costs are minimized.”

This chapter will discuss the different aspects of this objective. The first section will give the problem definition. The second section will set a project scope. The third section states the research questions, after which the fourth section describes the different deliverables.

5.1 Problem definition

This section will discuss three topics: input/output data types, DUT strain and fixture costs definitions, and choosing between multiple optimal solutions.

Input of the application will consist of two parts: DUT geometry and locations and types of test probes. The DUT geometry can be used for e.g. contact interference checking and strain calculations. The locations and types of the test probes are a design constraint, and support probe and push finger types and locations are a design freedom.

Output of the application will be the fixture design: locations and types of all test probes, support probes and push fingers. Additionally, a report showing DUT strain must be exported.

DUT strain has already been introduced in Chapter 3. To simplify matters, the Von Mises strain definition will be used. Fixture costs will simply be the sum of the prices of the used contacts. Since such prices can fluctuate over time and between companies, fictional but reasonable prices suffice.

The problem statement consists of two optimization factors. Hence, it is possible that multiple optimal solutions exist, the so-called Pareto points. To simplify matters, it is acceptable to reduce the two-dimensional optimization problem to a one-dimensional problem by combining the two optimization goals through a weight factor. Such dimensional reduction must however always be accompanied by an explanation of why the specific weight factor was chosen, preferably together with a comparison of different weight factors.

5.2 Problem scope

The previous chapters introduced several topics related to or part of the research problem. This section selects which of these topics are part of the research problem, and which ones fall outside the problem scope.

For DUT strain analysis, the FEA model discussed in Section 3.2 will be used as-is. Correcting the mentioned shortcomings will not be part of the project. However, care will be taken that these shortcomings have only a minimal impact on the presented results, by manual verification of the produced results.


Section 2.3 distinguished several test stages and transitions between stages. Of these different test stages, only the resting stage, the first test stage and, if needed, the second test stage will be part of the research problem. To prevent the need for an extension to the FEA model, the transitions between stages will be ignored, despite Section 3.3 showing that both strain and strain rate influence DUT defects.

Although not explicitly mentioned in Chapter 2, some distance constraints with respect to other contacts, DUT components and PCB edges are relevant as well. These distances must of course be respected, but only minimal safety margins are used in this research.

5.3 Research questions

Having a clear problem definition and problem scope, the main research question will be the following:

• In what way can a fixture design be automatically proposed based on a DUT design file and a test probe mapping file, such that both DUT strain and fixture costs are minimized?

Minimization of DUT strain is defined as the minimization of the board strain maximum, as discussed in Section 3.3. Fixture costs are defined in Section 5.1.

In order to answer this main research question, the following sub questions need to be answered:

1. What algorithms exist to generate a new fixture design, or to adjust an existing one? How do these algorithms behave?

2. What optimization algorithms can use such algorithms to generate an optimal fixture? How do these algorithms behave?

3. How good are the generated fixture designs in comparison to manually created fixture designs?

4. How good are the generated fixture designs in comparison with the IPC/JEDEC guidelines, discussed in Section 3.3?

5.4 Project deliverables

The research questions of Section 5.3 must be answered with two deliverables:

• A proof of concept application answering the main research question, within the scope described in Section 5.2.

• A report accompanying this application, answering the different sub questions in more detail.

This second deliverable is fulfilled by this document. Answers to the research questions can be found in Chapter 9.


Chapter 6

Proposed algorithms

Chapter 4 described two categories of optimization algorithms: trajectory searches and particle swarms. A particle swarm needs a fixed number of dimensions, but the test fixture optimization problem needs to optimize this number as well. Hence, the investigated optimization algorithms are all trajectory search variants.

As discussed in Section 4.1, there are different trajectory search algorithms. For each of these trajectory search algorithms, two main parts can be distinguished: the optimization step and the optimization controller.

An optimization step is an algorithm without memory: given a seed fixture, it proposes a derived fixture that is hopefully better. An example is the random step algorithm of Section 4.1.2, which in this case would randomly adjust all support probe and push finger locations by a small amount.

Note that such an optimization step does not necessarily result in a better, optimized fixture. The name 'optimization step' refers to this algorithm being a small part of a bigger whole: one individual step might not optimize a fixture design, but it is needed to get to the final optimized result.

An optimization controller is an algorithm with memory and is responsible for ensuring that the created fixture is (close to) the best possible fixture. It analyzes the fixtures proposed by step algorithms and decides which ones can be used as new seed fixtures. It controls the convergence process and determines when to terminate the optimization process in order to return the best found fixture. Again, the example of Section 4.1.2 can be used: a controller algorithm which repeatedly applies the random step algorithm. After each application of the random step algorithm, it checks if the proposed fixture is better than the seed fixture; if so, it replaces the seed for the next iteration, if not, it is thrown away. Each application of an optimization step algorithm forms a transition; the current best known fixture and the history that led to this design form the optimization controller state. The algorithm finishes after a certain strain threshold is reached, or if the maximum number of iterations is reached.
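This controller/step split can be sketched as follows. The fixture representation, the stand-in cost function and all numeric values are hypothetical; the actual tool would evaluate a proposal by running the FEA strain analysis instead:

```python
import random

rng = random.Random(0)

def evaluate(fixture):
    """Stand-in cost: the real tool would run FEA and return the strain maximum.
    Here, a hypothetical ideal contact location of (10, 10) is assumed."""
    return sum((x - 10.0) ** 2 + (y - 10.0) ** 2 for x, y in fixture)

def random_step(fixture, delta=1.0):
    """Optimization step: memoryless, proposes a derived fixture."""
    return [(x + rng.uniform(-delta, delta), y + rng.uniform(-delta, delta))
            for x, y in fixture]

def controller(seed_fixture, max_iters=2000, threshold=1e-3):
    """Optimization controller: keeps a proposal only when it improves."""
    best = seed_fixture
    for _ in range(max_iters):
        proposal = random_step(best)
        if evaluate(proposal) < evaluate(best):
            best = proposal            # proposal becomes the new seed
        if evaluate(best) < threshold:
            break                      # cost threshold reached
    return best

result = controller([(0.0, 0.0), (20.0, 5.0)])
print(evaluate(result))   # far smaller than the seed cost of 325
```

Swapping `random_step` for any of the step algorithms below, and `evaluate` for the FEA-based strain maximum, gives the structure used throughout this chapter.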

This chapter describes seven optimization step algorithms and one optimization controller algorithm. Chapter 8 describes, per algorithm, the results when applied to some benchmark set.

6.1 Initialization strategy

This section describes the initialization strategy, which can be used to generate a seed fixture which can then be optimized by the optimization controller, using the different step algorithms.

First, test probes are placed. The test file contains a list of test pad identifiers and test probe types. Using these test pad identifiers, the corresponding test pad is retrieved from the DUT and a test probe of the prescribed type is added to the fixture at the test pad location.

Second, support probes are placed. The DUT mass is multiplied by the gravitational constant g = 9.81 m/s², then divided by the support probe preload. The outcome of this calculation is the absolute minimum number of support probes needed. This outcome is therefore multiplied by a safety factor. This number of support probes is then added to the fixture, evenly distributed over the DUT.

Third, push fingers are placed. Above each placed test probe and each support probe, a push finger is added to the fixture.

Section 8.3 describes the test results acquired.

6.2 Random step

The random step algorithm is perhaps the most elementary optimization step algorithm: each iteration, each support probe and each push finger is adjusted in a random direction by a random distance. A possible implementation is the following: take a random value in the range [0, 2π) as direction, and a random value in the range [0, ∆] as distance.
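That implementation can be sketched directly; the contact positions and ∆ below are illustrative:

```python
import math, random

def random_step_positions(positions, max_dist, rng=random.Random(7)):
    """Move each contact in a random direction by a random distance <= max_dist."""
    moved = []
    for x, y in positions:
        angle = rng.uniform(0, 2 * math.pi)   # direction in [0, 2*pi)
        dist = rng.uniform(0, max_dist)       # distance in [0, max_dist]
        moved.append((x + dist * math.cos(angle), y + dist * math.sin(angle)))
    return moved

contacts = [(10.0, 10.0), (30.0, 20.0)]
print(random_step_positions(contacts, max_dist=0.5))
```

By construction, every contact ends up within max_dist of its original location.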

6.3 Slope based step

Strain is the result of deformation. The slope step algorithm tries to minimize strain by reducing this deformation, by trying to flatten the deformed PCB.

The slope based step is applied per support probe and per push finger for some test stage. For each such contact, one takes the known slope of the deformed DUT and takes a step that is proportional to this slope. Each push finger moves 'uphill', each support probe 'downhill'.

Note that this step algorithm is different from the gradient descent method discussed in Section 4.1.1: the gradient descent method uses the derivative of the current solution in the solution space, whereas the slope based step uses the physical slope of the DUT at each contact point.

Section 7.3 describes the implementation of this algorithm. Section 8.4 describes the test results acquired.
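A single slope-based move can be sketched as follows, assuming the local slope of the deformed DUT is known at each contact; the step factor and all values are illustrative:

```python
def slope_step(position, slope, gamma, is_push_finger):
    """Move a contact along the local DUT slope: push fingers 'uphill',
    support probes 'downhill', by a step proportional to the slope."""
    x, y = position
    sx, sy = slope                    # local dz/dx, dz/dy of the deformed DUT
    sign = 1.0 if is_push_finger else -1.0
    return (x + sign * gamma * sx, y + sign * gamma * sy)

# Illustrative: slope (0.02, -0.01), step factor gamma = 100 mm per unit slope.
print(slope_step((10.0, 10.0), (0.02, -0.01), 100.0, True))    # push finger: uphill
print(slope_step((10.0, 10.0), (0.02, -0.01), 100.0, False))   # support probe: downhill
```

The opposite signs make the two contact types move in opposite directions along the same local slope, flattening the deformed board from both sides.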

6.4 Force based step

The force based step algorithm uses a seed fixture to create a new fixture. Each support probe has a so-called preload. If the force exerted by the DUT on the probe is less than this preload, the support probe will not compress. If the force exerted by the DUT on the support probe is more than the preload, the support probe will be compressed by a distance equal to the force exerted minus the preload, divided by the support probe spring constant. One can use this property to define a utilization factor u, equal to the force exerted divided by the spring preload. Hence, u ≤ 1 indicates a non-compressed support probe, and u > 1 indicates a compressed support probe. By definition, u ≥ 0.
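The utilization factor and the resulting compression can be sketched as follows; the preload and spring constant values are illustrative, not taken from a real probe:

```python
def utilization(force: float, preload: float) -> float:
    """Utilization factor u: exerted force divided by the spring preload."""
    return force / preload

def compression(force: float, preload: float, spring_constant: float) -> float:
    """A probe only compresses once the exerted force exceeds its preload."""
    return max(0.0, (force - preload) / spring_constant)

# Illustrative values: 0.6 N preload, 0.3 N/mm spring constant.
print(utilization(0.45, 0.6))          # ~0.75: u <= 1, not compressed
print(compression(0.45, 0.6, 0.3))     # 0.0 mm
print(utilization(0.9, 0.6))           # ~1.5: u > 1, compressed
print(compression(0.9, 0.6, 0.3))      # ~1.0 mm
```

The two cases show the piecewise behaviour that makes the probe contacts non-linear, as noted in the discussion of static linear analysis.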

A similar definition of u can be made for push fingers. However, since push fingers do not have a spring preload, a configurable force goal must be used instead.

Given a seed fixture, a new fixture can be proposed based on the utilization factors of the different support probes and push fingers. Three algorithm variants have been designed, called the probability version, the quantity version and the boundary version. This section describes the way the algorithms work, Section 7.4 describes their implementation, and Section 8.5 describes the test results acquired.

6.4.1 Probability version

The probability version uses the utilization factor to move towards a situation in which the workload is evenly distributed among the different contacts. Two situations can be distinguished: one with a focus on the resting stage, one with a focus on any other stage.

First, the case with a focus on the resting stage. All test probes and all push fingers are placed on the proposed fixture in the same way they are on the seed fixture. Each support probe is placed ⌊u⌋ times on the proposed fixture, at or as close as possible to its position on the seed fixture. An additional support probe is placed with a probability of u mod 1.

Second, the case with a focus on a different test stage. This situation is comparable to the previous one, but this time all probes remain unchanged, while push fingers are placed zero or more times based on their utilization factor.

Both situations can be extended with an extra aggressiveness parameter 0 ≤ a ≤ 1 controlling the change between seed fixture and new fixture, by using u ← a(u − 1) + 1.

The reason for working with probabilities instead of simply rounding u is twofold. First, the random element reduces the chance of getting stuck in a local optimum. Second, it avoids the situation in which all utilization factors are rounded up- or downwards. As an example, suppose a seed fixture contains twenty support probes, each with u = 0.25. Rounding would result in no support probes being placed, while this approach results in a fixture with approximately five support probes.
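The placement rule can be sketched as below; to make the averaging effect clearly visible, the example uses 1000 probes instead of twenty, and a fixed seed:

```python
import math, random

def probabilistic_copies(u: float, rng: random.Random) -> int:
    """Place floor(u) copies of a contact, plus one extra with probability u mod 1."""
    copies = math.floor(u)
    if rng.random() < u % 1:
        copies += 1
    return copies

rng = random.Random(3)
# 1000 support probes, each with u = 0.25: plain rounding would remove all of
# them, while the probabilistic rule keeps roughly a quarter of them.
total = sum(probabilistic_copies(0.25, rng) for _ in range(1000))
print(total)
```

The expected number of placed copies equals u exactly, so on average the probe count tracks the workload even when every individual u rounds to zero.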

6.4.2 Quantity version

Per test stage i, two numbers n_i and m_i are given, indicating the number of support probes or push fingers that need to be removed or duplicated, respectively. First, all test probes remain unchanged. Second, the n_0 support probes with the smallest u are removed, the m_0 support probes with the largest u are placed twice, and all other support probes are placed once. Third, a removal set is created by selecting, per test stage i, the n_i push fingers with the smallest u, and a duplication set is created by selecting, per test stage i, the m_i push fingers with the largest u. Finally, all push fingers that are not in the removal set are placed once, after which all push fingers in the duplication set are placed a second time. Note that using this structure, a push finger that is marked for removal in one test stage can be marked for duplication at another test stage, resulting in the push finger being placed exactly once.
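For a single test stage, the remove/duplicate selection can be sketched as a small helper; the utilization values below are illustrative:

```python
def quantity_step(utilizations, n_remove, m_duplicate):
    """Remove the n contacts with the smallest u, duplicate the m with the
    largest u. Returns the number of copies to place per contact index."""
    order = sorted(range(len(utilizations)), key=lambda i: utilizations[i])
    copies = [1] * len(utilizations)
    for i in order[:n_remove]:
        copies[i] = 0                            # smallest u: removed
    for i in order[len(order) - m_duplicate:]:
        copies[i] = 2                            # largest u: placed twice
    return copies

u = [0.2, 1.4, 0.9, 0.1, 1.1]
print(quantity_step(u, n_remove=1, m_duplicate=2))   # [1, 2, 1, 0, 2]
```

Extending this to multiple test stages would merge per-stage removal and duplication sets as described above, so that a contact removed in one stage but duplicated in another is placed exactly once.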

6.4.3 Boundary version

The boundary version differs slightly from the quantity version. First, all test probes remain unchanged between the seed and the proposed fixture. Second, all support probes with u < 0.5 are discarded, all support probes with 0.5 ≤ u ≤ 1 remain unchanged, and all support probes with u > 1 are placed twice, at or as close as possible to the original position. Push fingers are placed in the same way as support probes, using the u calculated in the first test stage.
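The boundary rule reduces to a simple piecewise mapping from u to a copy count; the example values are illustrative:

```python
def boundary_copies(u: float) -> int:
    """u < 0.5: drop the contact; 0.5 <= u <= 1: keep it; u > 1: place it twice."""
    if u < 0.5:
        return 0
    if u <= 1.0:
        return 1
    return 2

print([boundary_copies(u) for u in (0.2, 0.5, 0.8, 1.0, 1.3)])   # [0, 1, 1, 1, 2]
```

Unlike the probability version, this mapping is deterministic, which makes the resulting fixture reproducible between runs.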

6.5 Displacement based step

The displacement based step algorithm adds support probes and/or push fingers based on peaks in the DUT deformation.

Per test stage, the number of contacts to be added is given. These are then added near the points with the biggest absolute displacement. In case this displacement is in the positive Z direction, a push finger is added as close as possible to this displacement peak. In case this displacement is in the negative Z direction, a support probe is added with a push finger directly above it.

Section 7.5 describes the implementation of these two algorithms. Section 8.6 describes the test results acquired.

6.6 Strain based step

The strain based step algorithm adds support probes and/or push fingers based on strain peaks on the deformed DUT.

Per test stage, the number of contacts to be added is given. These are then added near the DUT locations with the biggest absolute strain. In case the displacement at this location
