
Micromechanical framework of nonlinear excitations in glassy solids

Matthijs Wesseling

July 25, 2018

Student number 11011564

Assignment Report Bachelor Project Physics and Astronomy

Size 15 EC

Conducted between April 3, 2018 and July 25, 2018

Institute Institute for theoretical physics Amsterdam

University Universiteit van Amsterdam

Faculty Faculteit der Natuurwetenschappen, Wiskunde en Informatica

Supervisor Dr. Edan Lerner

Second examiner Dr. Philippe Corboz

Abstract

Amorphous solids exhibit a disordered atomic structure. In crystalline solids, irregularities in the atomic lattice can cause weak points, defects. Because of their disordered nature, defects in amorphous solids are not as straightforwardly defined. In order to understand the properties of amorphous solids, a microscopic description of their deformation mechanics is necessary. This thesis explores a method of locating plastic instabilities in amorphous solids, which play a role in plastic deformation. This method makes use of the so-called barrier function. The barrier function is a function derived from a glassy state, and its local minima are indicative of instabilities of this state. We aimed to explore the landscape of the barrier function with multiple methods. Treating the barrier function

1 Summary

Many materials have a neat atomic structure, in which the atoms sit in a regular crystal lattice. However, there are also materials in which the atoms are randomly arranged. These are called amorphous materials. Such materials can have interesting properties, but not much is yet known about how these properties can be calculated. In this research, a model was investigated that may shed more light on this. The research was carried out using computer simulations. In the end, it turned out that this model can identify weak spots in amorphous materials. The properties of these weak spots are independent of the amount of material, and the number of weak spots increases proportionally with the amount of material.

Contents

1 Summary
2 Introduction
3 The barrier function
 3.1 Nonlinear plastic modes
 3.2 Stiffness of NPMs
 3.3 Participation ratio
4 Dynamical simulation of the barrier function
 4.1 Variable timestep
 4.2 Initial conditions
 4.3 Thermostating
 4.4 Correlation function
 4.5 Results
 4.6 Discussion
5 Monte Carlo simulation of the barrier function
 5.1 Rotating the displacement vector
 5.2 Component-wise
 5.3 Multiple simulations
 5.4 Discussion
6 Counting minima
 6.1 Discussion
7 Conclusion
8 Acknowledgements
Appendices

2 Introduction

Amorphous materials are solids with a disordered atomic structure. An example of such a material is the glass that windows are made of, but amorphous metals exist as well. Materials with a regular, crystalline atomic structure can contain defects, like dislocations or vacancies. These defects are irregularities in the crystal lattice, and cause weak points in the material. Amorphous solids cannot contain these defects, because their atomic structure is by definition irregular. This can give amorphous solids some interesting properties [1].

Figure 1: By applying a stress σ, the object is sheared.

One class of interesting properties of a material is its mechanical properties. These properties are studied experimentally, computationally and theoretically. By deforming the material, information about many different properties can be obtained. This deformation is performed by applying opposing forces to the material, causing it to shear. This process is illustrated in figure 1.

Figure 2: During a shear trans-formation, a subset of particles is moved. The initial and final positions of the particles are in-dicated by the arrows.

The amount of force is called the shear stress, and the amount of deformation is quantified by the shear strain. By performing this experiment, information about many properties can be derived.

As a force is applied to a material, it undergoes three stages of deformation [2]. These stages are: elastic deformation, plastic deformation and fracture. Elastic deformation is reversible, for example stretching a rubber band. Plastic deformation is irreversible, for example bending a spoon. If a material is deformed beyond plastic deformation, fracture occurs and the material breaks. The behaviour of a material under these types of deformation provides insight into many different material properties. Understanding these types of deformation is therefore very useful. In this research, the focus lies on the behaviour of a material under plastic deformation.

At the microscopic level, a deformation leads to plastic instabilities which are followed by a "shear transformation". An example of a shear transformation is shown in figure 2. During a shear transformation, a small number of particles is rearranged. Afterwards, the glass is in a configuration with lower energy. The rate of plastic instabilities and their interactions determine the rate of plastic deformation.

Figure 3: Example of a glass. In this case it contains two types of particles.

A method of detecting imminent plastic instabilities (soft spots) uses the eigenmodes of the Hessian matrix M ≡ ∂²U/∂~x∂~x [3]. However, this method only works at shear strains γ close to a critical strain γc (when the plastic instability occurs). In [4], a framework was introduced that is able to detect plastic instabilities at strains further from γc. This framework uses nonlinear plastic modes (NPMs) to detect imminent plastic instabilities. These NPMs can be found using the barrier function b(ˆz). The purpose of this bachelor thesis project is to explore and understand the landscape of the barrier function. This is done by analyzing the behaviour of a system governed by the barrier function, and by locating all of its local minima.

3 The barrier function

Consider a system of N particles in d dimensions in a state of mechanical equilibrium ~x0. The particles interact with a pairwise potential and the system has a total potential U given by

U = \sum_\iota \phi_\iota    (1)

where φ is the pairwise potential and ι denotes a pair of interacting particles [i, j]. The particles in the system are displaced in a certain displacement direction ˆz, where ˆz is an N × d dimensional unit vector, and the displacement of a single particle is denoted as ˆzi. As the particles are moved in the displacement direction, the potential energy of the system changes. For small displacements, the change in potential energy is approximated by a third order Taylor expansion. We define the second and third derivatives of the potential energy as

M \equiv \left.\frac{\partial^2 U}{\partial\vec{x}\,\partial\vec{x}}\right|_{\vec{x}=\vec{x}_0}    (2)

U''' \equiv \left.\frac{\partial^3 U}{\partial\vec{x}\,\partial\vec{x}\,\partial\vec{x}}\right|_{\vec{x}=\vec{x}_0}    (3)

where M and U''' are tensors of rank 2 and 3, respectively. We are interested in the derivatives of the potential energy in the displacement direction. These are obtained by contracting the tensors with ˆz:

\kappa_{\hat{z}} \equiv M : \hat{z}\hat{z} = \sum_\iota \left(\frac{\phi''}{r_\iota^2} - \frac{\phi'}{r_\iota^3}\right)(\vec{z}_\iota \cdot \vec{x}_\iota)^2 + \sum_\iota \frac{\phi'}{r_\iota}\,(\vec{z}_\iota \cdot \vec{z}_\iota)    (4)

\tau_{\hat{z}} \equiv U''' \,\vdots\, \hat{z}\hat{z}\hat{z} = \sum_\iota \left(\frac{\phi'''}{r_\iota^3} - \frac{3\phi''}{r_\iota^4} + \frac{3\phi'}{r_\iota^5}\right)(\vec{z}_\iota \cdot \vec{x}_\iota)^3 + 3\sum_\iota \left(\frac{\phi''}{r_\iota^2} - \frac{\phi'}{r_\iota^3}\right)(\vec{z}_\iota \cdot \vec{x}_\iota)(\vec{z}_\iota \cdot \vec{z}_\iota)    (5)

where any vector with a subscript ι should be understood as ~x_ι = ~x_j − ~x_i, and r_ι is the distance between particles i and j. The sum runs over all pairs of interacting particles with i < j. The derivatives of φ are taken with respect to the distance r_ι between the particles.

With these quantities, the change in potential energy for a small displacement s can be approximated as

\delta U \simeq \frac{1}{2}\kappa_{\hat{z}} s^2 + \frac{1}{6}\tau_{\hat{z}} s^3.    (6)

δU has a local minimum at s = 0 and a local maximum at s^* = -2\kappa_{\hat{z}}/\tau_{\hat{z}}, as is illustrated in figure 4.

Figure 4: The barrier function and its two stationary points. The barrier function gives the value of the potential at the local maximum s*.

The height of the local maximum is given by

b(\hat{z}) = \frac{2}{3}\,\frac{\kappa_{\hat{z}}^3}{\tau_{\hat{z}}^2}    (7)

where b(ˆz) is the barrier function. An important property of the barrier function is that it is independent of the norm of ˆz; in other words, b(ˆz) = b(cˆz) = b(~z).
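For concreteness, the quantities above can also be probed numerically. The following is a minimal sketch (not the code used in this thesis): it estimates κ_ẑ and τ_ẑ by finite differences of the potential energy along a given direction and then evaluates Eq. (7). The potential U, the configuration x0, the direction z and the step h are placeholder inputs; in the thesis itself κ and τ are computed from the analytic pairwise expressions (4) and (5).

```python
import numpy as np

def barrier_estimate(U, x0, z, h=1e-4):
    """Return (kappa_z, tau_z, b) for the displacement direction z at x0."""
    z = z / np.linalg.norm(z)           # b is independent of the norm of z; normalize for stability
    u = lambda s: U(x0 + s * z)         # potential energy along the line x0 + s*z
    kappa = (u(h) - 2.0 * u(0.0) + u(-h)) / h**2                        # second directional derivative
    tau = (u(2*h) - 2.0 * u(h) + 2.0 * u(-h) - u(-2*h)) / (2.0 * h**3)  # third directional derivative
    return kappa, tau, (2.0 / 3.0) * kappa**3 / tau**2                  # Eq. (7)
```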

3.1 Nonlinear plastic modes

(7)

point. If b(~z) is small, then the stiffness κ~z is small and the asymmetry τ ~z is large. This implies that the location of the local maximum s∗ is small, and therefore the cubic expansion is accurate in the region near s∗. Small values of b should therefore pertain to small energy barriers separating the current state of the system and neighboring states. Nonlinear plastic modes ˆπ are defined as local minima of b:

\left.\frac{db}{d\vec{z}}\right|_{\vec{z}=\hat{\pi}} = 0.    (8)

While these local minima do not necessarily correspond to low values of b, small local minima are indicative of directions that move towards a low-lying energy barrier. They are therefore helpful in identifying potential plastic instabilities.

3.2 Stiffness of NPMs

The stiffness associated with a nonlinear plastic mode is given by κ_π̂ = M : π̂π̂. Upon deformation, the particles are moved with respect to each other, and at certain critical strains γc, shear transformations occur. This implies that the potential energy of the system encounters a saddle point. This means that the barrier function has a global minimum with a value of 0, and therefore the associated stiffness is expected to be 0 as well. The derivative of κ_π̂ with respect to the strain γ is given by

\frac{d\kappa_{\hat{\pi}}}{d\gamma} = \frac{dM}{d\gamma} : \hat{\pi}\hat{\pi} + 2M : \frac{d\hat{\pi}}{d\gamma}\hat{\pi}    (9)

= U''' \,\vdots\, \hat{\pi}\hat{\pi}\,\frac{d\vec{x}}{d\gamma} + \frac{\partial M}{\partial\gamma} : \hat{\pi}\hat{\pi} + 2M : \frac{d\hat{\pi}}{d\gamma}\hat{\pi}.    (10)

Under deformation, mechanical equilibrium must be preserved. This leads to

\frac{d}{d\gamma}\frac{\partial U}{\partial\vec{x}} = \frac{\partial^2 U}{\partial\gamma\,\partial\vec{x}} + \frac{d\vec{x}}{d\gamma}\cdot\frac{\partial^2 U}{\partial\vec{x}\,\partial\vec{x}} = 0    (11)

\implies \frac{d\vec{x}}{d\gamma} = -M^{-1}\cdot\frac{\partial^2 U}{\partial\gamma\,\partial\vec{x}}    (12)

The gradient of b (given in (20)) evaluated at an NPM ˆπ equals 0, implying

M \cdot \hat{\pi} = \frac{\kappa_{\hat{\pi}}}{\tau_{\hat{\pi}}}\, U''' : \hat{\pi}\hat{\pi}    (13)


U''' \,\vdots\, \hat{\pi}\hat{\pi}\,\frac{d\vec{x}}{d\gamma} = -\frac{\tau_{\hat{\pi}}}{\kappa_{\hat{\pi}}}\,\hat{\pi}\cdot M \cdot M^{-1}\cdot\frac{\partial^2 U}{\partial\gamma\,\partial\vec{x}}    (14)

= -\frac{\tau_{\hat{\pi}}\,\nu_{\hat{\pi}}}{\kappa_{\hat{\pi}}},    (15)

where

\nu_{\hat{\pi}} = \hat{\pi}\cdot\frac{\partial^2 U}{\partial\gamma\,\partial\vec{x}}.    (16)

Close to plastic instabilities, κ_π̂ approaches 0. This means that the first term on the RHS of (10), given by (15), dominates, and the other two terms can be neglected. This reduces the differential equation to

\left.\frac{d\kappa_{\hat{\pi}}}{d\gamma}\right|_{\gamma\to\gamma_c^-} \simeq -\frac{\tau_{\hat{\pi}}\,\nu_{\hat{\pi}}}{\kappa_{\hat{\pi}}}.    (17)

This equation is solved by

\kappa_{\hat{\pi}}(\gamma\to\gamma_c^-) \simeq \sqrt{2\tau_{\hat{\pi}}\nu_{\hat{\pi}}}\,\sqrt{\gamma_c-\gamma}.    (18)

In [3], it is shown that this relation holds over large intervals of strain below γc. This means that NPMs are effective tools for identifying imminent plastic instabilities.

3.3 Participation ratio

The participation ratio is a measure of the localization of a displacement vector. It is defined as

e = \frac{1}{N}\,\frac{\left(\sum_i z_i^2\right)^2}{\sum_i z_i^4}    (19)

where the ~z_i are the displacements of individual particles and z_i² is the squared norm of a particle's displacement. Its possible values are [1/N, 1], where 1/N corresponds to a maximally localized vector and 1 to a maximally delocalized vector. Examples of localized and delocalized vectors are shown in figure 5.
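A minimal sketch of Eq. (19), assuming the displacement field is stored as an (N, d) array; this is illustrative and not the thesis code. A field in which a single particle moves gives e ≈ 1/N, while a field in which all particles move by the same amount gives e = 1.

```python
import numpy as np

def participation_ratio(disp):
    """Participation ratio e of Eq. (19); disp is an (N, d) displacement array."""
    sq = np.sum(disp**2, axis=1)                 # squared norm z_i^2 of each particle's displacement
    return sq.sum()**2 / (len(sq) * np.sum(sq**2))
```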


(a) Delocalized. (b) Localized.

Figure 5: Examples of a localized and a delocalized displacement direction.

When picking a random displacement direction, it is unlikely to be localized, but also unlikely to be completely delocalized. As the system size increases, the distribution of likely values of e becomes sharper, as is illustrated in figure 6. For fewer than 6 particles, the distributions show unusual peaks. An analytic derivation of the distributions for 2 and 3 particles is provided in appendix A, providing some insight into the origin of these peaks.


Figure 6: Distribution functions of the participation ratio for different numbers of particles. For more than 5 particles, the distributions closely resemble normal distributions. As N increases, the average value converges to 1/3 and the width of the distribution decreases. An analytic derivation of the distributions for 2 and 3 particles is provided in appendix A.

As N increases, the expectation value of the participation ratio converges to 1/3. A derivation of this limit is provided in [5]. The probability distribution becomes sharply peaked, with the standard deviation converging to 0 as N approaches infinity.

4 Dynamical simulation of the barrier function

To understand the properties of the barrier function, it is treated as the energy of a system. We then try to equilibrate it using a dynamical simulation. The system is initialized with a random displacement ˆz. The state is then evolved using the gradient of the system,

\frac{\partial^2\vec{z}}{\partial t^2} = \frac{\partial b}{\partial\vec{z}} = 4\,\frac{\kappa_{\vec{z}}^2}{\tau_{\vec{z}}^2}\left(M\cdot\vec{z} - \frac{\kappa_{\vec{z}}}{\tau_{\vec{z}}}\,U''' : \vec{z}\vec{z}\right).    (20)

At this point we encounter a problem: because b is independent of the norm of ~z, ∂b/∂~z is inversely proportional to the norm of ~z. The force acting on ~z is always orthogonal to ~z, however this does not ensure that the velocity is (and remains) orthogonal. In order to remain in a stable circular orbit, the velocity should be orthogonal to the radial direction and constant in magnitude, while the object accelerates towards the centre. Here, however, the acceleration is orthogonal to the radial direction, so the velocity increases and the trajectory is no longer circular. Therefore, |~z| will increase, decreasing the size of the gradient. The relative change to the velocity decreases and the velocity will eventually be parallel to ~z. At this point the value of b(~z) is approximately constant and the simulation becomes useless. In practice this process happens rapidly; it takes very few iterations to reach this point.

The first solution is to normalize ~z after each iteration, which prevents the gradient from decreasing. This might work if the size of the gradient were approximately constant, however it is not. During the first t0 of the simulation, |∇b(~z)| decreases by several orders of magnitude. Depending on the initial state, this drop is anywhere between a factor 10^3 and 10^12. This gives ~z a large initial boost, and we encounter the same problem as before; the relative change to the velocity rapidly becomes negligible. Therefore b will still become approximately constant. In order to prevent this issue, the velocity has to be orthogonalized with respect to ~z. This has as a consequence that the total energy of the system is no longer conserved. In figure 7, a comparison of the three methods is shown.
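A sketch of the orthogonalization step, assuming the displacement and the velocity are stored as flattened arrays z and v; the names are illustrative and this is not the thesis code.

```python
import numpy as np

def orthogonalize_velocity(v, z):
    """Project out of the velocity v its component parallel to z."""
    zhat = z / np.linalg.norm(z)
    return v - np.dot(v, zhat) * zhat
```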


Figure 7: Comparison of three runs of the dynamical simulation, with identical initial conditions. Using just the Newtonian dynamics, the gradient decreases by multiple orders of magnitude, at which point the velocity is parallel to ~z and the system no longer evolves. Normalizing ~z prevents the gradient from decreasing as much, but it is not enough to solve the issue. Orthogonalizing the velocity ensures that ~z continues to evolve.

4.1 Variable timestep

With the orthogonalized velocity, the system is able to evolve indefinitely. However, as a consequence energy is being removed from the system. To ensure that the amount of energy that is being removed is minimal, the timestep δt needs to be small enough. The higher the velocity or the larger the gradient are, the smaller δt should be. To achieve this, a variable timestep was used. During the simulation, δt is determined using the slope density ρ, given by

\rho(t + \delta t) = e^{-\rho_0}\,\rho(t) + \left(|\vec{\nabla} b| + |\dot{\vec{z}}|\right)\rho_0\,\delta t    (21)

where ρ_0 is the slope scale, a constant that determines the sensitivity of the slope density to fluctuations in the norms of the gradient and the velocity. Using this slope density, δt is determined as follows,

\delta t_{\text{new}} = \begin{cases} 2\,\delta t, & \rho < \rho_{\min} \\ 0.5\,\delta t, & \rho > \rho_{\max} \\ \delta t, & \text{otherwise} \end{cases}    (22)


where ρmin and ρmax specify the range of acceptable values of ρ. If δt is halved or doubled, the same is done for ρ.
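A minimal sketch of this adaptation rule, under the reading of Eqs. (21)-(22) given above; rho0, rho_min and rho_max are the constants named in the text and the function name is illustrative. Note that ρ is rescaled together with δt, as stated in the text.

```python
import numpy as np

def update_timestep(rho, dt, grad_norm, vel_norm, rho0, rho_min, rho_max):
    """One update of the slope density rho and the timestep dt."""
    rho = np.exp(-rho0) * rho + (grad_norm + vel_norm) * rho0 * dt   # Eq. (21)
    if rho < rho_min:            # landscape is flat and slow: double the timestep
        dt, rho = 2.0 * dt, 2.0 * rho
    elif rho > rho_max:          # landscape is steep or fast: halve the timestep
        dt, rho = 0.5 * dt, 0.5 * rho
    return rho, dt               # Eq. (22)
```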

4.2 Initial conditions

Because δt depends on the size of the gradient, it is necessary to select proper initial conditions. If the gradient of the initial state is too large, the simulation takes too long to be practical (multiple days for tens of t0). An effective way of limiting the size of the gradient is using a localized initial condition. These initial displacements are generated in two steps. First, a random displacement is generated. Then, a random centre is chosen, and each component of z is multiplied by a factor

f = \exp\left(-\frac{r^2}{w}\right)    (23)

where r is the distance from the particle to the centre, and w is a positive constant that determines the size of the localization. A lower w gives a more localized initial displacement and a shorter simulation time.
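A sketch of this two-step construction, assuming the particle positions are stored as an (N, d) array and ignoring periodic boundary conditions for brevity; the names are illustrative and this is not the thesis code.

```python
import numpy as np

def localized_initial_displacement(positions, w, rng=np.random.default_rng()):
    """Random displacement damped around a random centre, cf. Eq. (23)."""
    N, d = positions.shape
    z = rng.standard_normal((N, d))                 # step 1: random displacement
    centre = positions[rng.integers(N)]             # step 2: random centre
    r2 = np.sum((positions - centre)**2, axis=1)    # squared distance of each particle to the centre
    z *= np.exp(-r2 / w)[:, None]                   # damping factor f = exp(-r^2 / w)
    return z / np.linalg.norm(z)
```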

4.3 Thermostating

We want to know the behaviour of the material at different temperatures. To achieve these temperatures, the velocities of the particles need to be controlled. To do this, the thermostating scheme introduced in [6] was employed. Each iteration, the velocity is multiplied by a factor

C_{\text{ber}} = \sqrt{1 + \frac{\delta t}{\tau_{\text{ber}}}\,\frac{T - \tilde{T}(t)}{\tilde{T}(t)}}    (24)

where T̃(t) = |d~z/dt|²/2N is the instantaneous temperature. τ_ber is a constant that determines how much the velocity is altered. This constant needs to be large enough to ensure that the thermostating does not interfere with the dynamics.
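A sketch of this rescaling step, assuming the velocity is a flattened array and using the instantaneous-temperature definition given above; the names are illustrative and this is not the thesis code.

```python
import numpy as np

def berendsen_rescale(v, T_target, dt, tau_ber, N):
    """Rescale the velocity v by the Berendsen-style factor of Eq. (24)."""
    T_inst = np.dot(v, v) / (2.0 * N)     # instantaneous temperature, as defined in the text
    c = np.sqrt(1.0 + (dt / tau_ber) * (T_target - T_inst) / T_inst)
    return c * v
```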

4.4 Correlation function

During a simulation, the displacement will change. As a measure for the amount of change we define the correlation function C(t) as the overlap between the current displacement direction and a reference direction,

C(t) = \hat{z}(t) \cdot \hat{z}(t_{\text{ref}})    (25)

where ˆz(t_ref) is the displacement direction at a chosen reference time (for example the moment the correlation function is reset). Since both vectors are normalized, C(t) is the cosine of the angle between them.


4.5 Results

The results of a dynamical simulation are shown in figure 8. The simulation was run at a temperature T = 0.01 and thermostating was performed with τber = 10. The displacement becomes less localized during the simulation, but there is still a strong dipole at the location of the original localization. Most of the energy of the system is kinetic energy. As the simulation progresses, the total energy of the system decreases due to thermostating, until the target temperature is reached.

Figure 8: Results from a dynamical simulation with a duration of 100 τ0, with thermostating starting after 12.5 τ0. (a) The progression of the energies during the simulation. (b) The displacement before (blue) and after (orange) the simulation.

4.6 Discussion

Because of the hypersphericality of the barrier function, simulating its Newtonian dynamics gives no conclusive results. In order to circumvent this issue, the velocity needs to be orthogonal to the current position. However, this removes energy from the system and completely alters the dynamics. Therefore this will probably not give any valid results. One solution could be to rewrite the barrier function in a generalized version of spherical coordinates and remove the radial coordinate. However, this is likely computationally intensive and therefore not realistic. There are most likely more reliable and efficient methods of equilibrating the barrier function.

5 Monte Carlo simulation of the barrier function

In order to prevent the problems of the dynamical simulation and to decrease simulation times, a Monte Carlo scheme was implemented. The algorithm used is the Metropolis–Hastings algorithm [7]. The algorithm performs a directed random walk. It starts by assuming that the states of the system are governed by a probability function P(x) ∝ f(x), indicating the chance of encountering state x when a random state is chosen. Starting from a state x, the system is moved to a random state y. If f(y) > f(x), the move is accepted. Otherwise, the move is accepted with probability f(y)/f(x). If the move is rejected, the system remains in state x. This way, the behaviour of the system can be faithfully reproduced. In the case of classical statistical physics, the probability of encountering a state is proportional to e^{−βε}, where β = 1/T and ε is the energy of the state.
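A minimal sketch of a single step of this scheme applied to the barrier function, assuming the Boltzmann weight exp(−b/T); "barrier" and "propose_move" are placeholders for the barrier-function evaluation and for one of the two move types described in the following sections, and this is not the thesis code.

```python
import numpy as np

def metropolis_step(z, b_current, barrier, propose_move, T, rng):
    """One Metropolis step for the barrier function at temperature T."""
    z_new = propose_move(z, rng)
    b_new = barrier(z_new)
    # accept downhill moves always, uphill moves with probability exp(-(b_new - b_current)/T)
    if b_new < b_current or rng.random() < np.exp(-(b_new - b_current) / T):
        return z_new, b_new
    return z, b_current
```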

The proposed moves during the simulation should satisfy detailed balance. This means that the probability of proposing a move from any state x to any state y should equal the probability of proposing the reverse move from y to x. Two ways of perturbing the system were chosen: either rotating the entire displacement vector, or altering the displacement of a single particle.

5.1 Rotating the displacement vector

The direction of the rotation is determined by orthogonalizing a random vector ~z_0 with respect to ~z. The new vector is then equal to

\vec{z}_{i+1} = \vec{z}_i\cos\theta + \vec{z}_0\sin\theta    (26)

where θ is a small angle that determines the distance between the current and the next vector.
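A sketch of this rotation move, assuming ~z is stored as a flattened array; the names are illustrative and this is not the thesis code. Giving ~z_0 the same norm as ~z keeps the norm of the result unchanged.

```python
import numpy as np

def rotate_move(z, rng, theta=1e-3):
    """Rotate z by a small angle theta towards a random orthogonal direction, Eq. (26)."""
    z0 = rng.standard_normal(z.shape)
    z0 -= np.dot(z0, z) * z / np.dot(z, z)          # orthogonalize z0 with respect to z
    z0 *= np.linalg.norm(z) / np.linalg.norm(z0)    # give z0 the same norm as z
    return z * np.cos(theta) + z0 * np.sin(theta)
```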

During the simulation, θ was set at 1e−3. An example of the results of a simulation is shown in figure 9. During the simulation, 2e7 steps were performed, which took approximately 1.5 hours. It takes approximately 1e5 steps for the system to reach its equilibrium values. The displacement field becomes more localized during the simulation, with the participation ratio dropping to ∼ 0.005. The system does not entirely decorrelate after the correlation function is reset, remaining at a value of ∼ 0.76, corresponding to an angle of 40°. This means that the displacement vector remains confined to a certain area on the hypersphere. The norm of the gradient decreases rapidly from 1e10 to 8.5, indicating that the landscape of the energy has very steep areas.


Figure 9: Results of a Monte Carlo simulation of the barrier function at T = 0.001. (a) The energy quickly reaches its equilibrium value. (b) The displacement becomes highly localized, causing the participation ratio to decrease. (c) Halfway, the correlation function is reset. (d) The size of the gradient during the simulation quickly decreases.

5.2 Component-wise

With this method, a random particle is chosen. Then a random shift is added to both components of the displacement of that particle. The size of the shift is drawn from a uniform distribution \frac{1}{100\sqrt{N}}\,U[-1, 1]. This way the shifts are small compared to the average size of the components. The results of a component-wise Monte Carlo simulation are shown in figure 10. A total of 4e9 steps were performed, which took approximately 15 minutes. It takes approximately 5e7 steps for the barrier function to reach its equilibrium value. The participation ratio decreases to 2e−2, indicating a strongly localized displacement field. The correlation function (25) is reset every 5e8 steps. After each reset it remains at ∼ 0.27, corresponding to an angle of 0.41π rad or 74°. This means that ~z gets confined to a region, bounded by the equilibrium value of b.
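A sketch of this component-wise move, assuming the displacement is stored as an (N, d) array; the names are illustrative and this is not the thesis code.

```python
import numpy as np

def componentwise_move(z, rng):
    """Shift both displacement components of one random particle by a small amount."""
    z_new = z.copy()                      # z is an (N, d) displacement array
    N, d = z_new.shape
    i = rng.integers(N)                   # random particle
    z_new[i] += rng.uniform(-1.0, 1.0, size=d) / (100.0 * np.sqrt(N))
    return z_new
```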


Figure 10: Results of a Monte Carlo simulation of the barrier function at T = 0.1. (a) The energy quickly reaches its equilibrium value. (b) The displacement becomes highly localized, causing the participation ratio to decrease. (c) Every 5e8 steps, the correlation function is reset. (d) The size of the gradient during the simulation.

5.3 Multiple simulations

With an efficient simulation in place, it is possible to compare the properties of b at multiple temperatures and system sizes. Simulations were done at temperatures between 0.0001 and 10, at system sizes of 400, 1600 and 6400 particles. In each simulation, 6e5 ∗ N steps are attempted. A total of 48 glasses were used per system size; for each glass at each temperature, 4 simulations were run with different random initial conditions. The equilibrium values are obtained by averaging over the final quarter of a simulation. The correlation function is calculated using the displacement halfway through the simulation. The results of these simulations are shown in figure 11.


Figure 11: Results of multiple component-wise Monte Carlo simulations at multiple tem-peratures and system sizes. (a) The energy quickly reaches its equilibrium value. (b) The displacement becomes highly localized, causing the participation ratio to decrease. (c) Every 5e8 steps, the correlation function is reset. (d) The size of the gradient during the simulation.

As would be expected of a classical system, the energy b increases with the system size. However, it does not increase as much as expected. For a classical system, extensive properties, like the energy of the system, increase proportionally to the size of the system. However, in the case of the barrier function, if the system size is increased by a factor of 4, the energy increases by a factor of 1.5–2.5. This means that b(~z) is not an extensive energy.


Figure 12: The ratios of the equilibrium energy and gradient size of the system, after a Metropolis simulation. If the system size increases by a factor of 4, the equilibrium energy increases by a factor of 1.5–2.5.

5.4 Discussion

Equilibrating the barrier function with the Metropolis algorithm gave better results than the dynamical simulation. The system does not need to be altered, and the time needed to perform a simulation is much shorter. The two methods of moving the system perform similarly to each other. It takes approximately 27 seconds (CPU time) to reach equilibrium when rotating the entire vector, and 11 seconds when altering single components. However, these values also depend on the step size, which makes a direct comparison more complicated. To make a fair comparison, one should fix an acceptance rate, indicating how many of the proposed steps are accepted. For systems of high dimensionality, the ideal acceptance rate is approximately 0.234 [8].

Despite its efficiency, the usefulness of the Metropolis algorithm is as of yet uncertain. The Metropolis algorithm is based on the assumption that the probability of finding a state with energy ε is proportional to e^{−βε}, where β = 1/T. However, there are several indications that this assumption may not be valid. First of all, the hypersphericality of the system gives unexpected behaviour in a dynamical simulation. Secondly, it appears the system has a sub-extensive energy. Because not much is known about these types of systems, it is unclear whether the assumptions made for the Metropolis simulation are valid. Therefore the results of these simulations should be regarded as inconclusive.

6 Counting minima

In order to examine the barrier function without equilibrating it, we sample its local minima. The local minima of b are obtained by performing a gradient descent, starting from an initial state. The code for performing the gradient descent was provided by Dr. Edan Lerner. Four methods of selecting these initial states have been tried:

• Random
• Maximally localized
• Dipoles
• Dipole responses

A typical random displacement direction is shown in figure 5a, and a typical dipole and its response are shown in figure 13. Random sampling was done by generating random initial states across the hypersphere. An even distribution is generated by drawing each component from a normal distribution with average 0 and variance 1. By normalizing and removing translations, the points are ensured to be evenly distributed on the hypersphere. The search is stopped if no new minima have been found for a certain number of consecutive tries. Two minima are considered the same if

|\vec{z}_i \cdot \vec{z}_j| > 0.95.    (27)
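A sketch of the random sampling and of the overlap criterion of Eq. (27), assuming an (N, d) layout for the displacement field and normalized minima; the names are illustrative and this is not the thesis code.

```python
import numpy as np

def random_direction(N, d, rng=np.random.default_rng()):
    """Uniformly random displacement direction with the global translation removed."""
    z = rng.standard_normal((N, d))     # normally distributed components give a uniform direction
    z -= z.mean(axis=0)                 # remove the translational component
    return (z / np.linalg.norm(z)).ravel()

def same_minimum(z_i, z_j, threshold=0.95):
    """Overlap criterion of Eq. (27); z_i and z_j are assumed normalized."""
    return abs(np.dot(z_i, z_j)) > threshold
```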

Maximally localized states are states with only one non-zero component. Dipoles are created by selecting two adjacent particles and giving them displacements directed away from each other. Dipole responses are defined as

M \cdot \vec{z} = \vec{d}    (28)

where ~d is a dipole, and ~z is the corresponding dipole response. An example is shown in figure 13.
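A sketch of constructing a dipole response by solving Eq. (28) for a given dipole; using a dense Hessian and numpy.linalg.solve here is purely illustrative, and the names are not from the thesis code, which was provided by the supervisor.

```python
import numpy as np

def dipole_response(M, i, j, unit_vec):
    """Solve Eq. (28), M . z = d, for a dipole on particles i and j along unit_vec."""
    nd = len(unit_vec)                     # number of spatial dimensions
    d = np.zeros(M.shape[0])
    d[i*nd:(i+1)*nd] = unit_vec            # push particle i one way...
    d[j*nd:(j+1)*nd] = -unit_vec           # ...and particle j the opposite way
    z = np.linalg.solve(M, d)              # assumes zero modes (translations) were projected out of M
    return z / np.linalg.norm(z)
```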


Figure 13: Example of a dipole, and the corresponding dipole response.

The most successful method for locating minima is by using dipole responses, as is illustrated in figure 14. In figure 15, scatterplots are shown comparing the different methods. Each plot shows the energy of a minimum versus how often the minimum was encountered during the sampling.

Figure 14: Comparison of initial conditions for minimizing. All methods were tried on 100 glasses of system size N=400. The random method was stopped after no new minima were found for 100 consecutive tries.


Figure 15: Scatterplots of the results of the 4 sampling methods. The plots show the energy of each minimum versus how often it was encountered during the sampling.

With random and local sampling of the hypersphere, most minima with higher energies are encountered rarely. However, peaks at E ≈ 2 indicate that some high-energy minima are found often. The peak of the random sampling method has an especially strange structure, shown in figure 16. The origin of this structure is unknown. There appears to be no relation between the peaks and the localizations of the minima in the peaks.


Figure 17: The energy of the minima versus how often they were encountered, for multiple system sizes. At the lowest energies, all minima are found often. Therefore it is likely that all minima are found at these energies. The minima were sampled using dipole responses as initial conditions.

It turns out that the lowest local minima are not hard to find. While locating minima, some minima are encountered more often than others. This is illustrated in figure 17. At the lowest energies, there are no minima that are encountered rarely; all of them are found frequently. This means that they are not difficult to find. The probability distribution of the energies of the minima appears to be independent of the system size, as illustrated in figure 18. Most of the NPMs have an energy of ∼ 1. No minima were encountered with energies above 3. This means that either there are no minima above this energy, or they are harder to locate. With dipole responses as initial displacements, there is an upper bound on the energies that can be encountered, because their energies are relatively low. With random initial vectors, no minima are found above this energy either.


Figure 18: The probability distribution of the energies of the local minima, for 400, 1600 and 6400 particles.

The number of local minima found is approximately linearly proportional to the system size.

Figure 19: The average number of local minima in a glass, as a function of system size. The number of encountered NPMs is approximately linearly proportional to the system size. The local minima were sampled with dipole responses as initial conditions.

The size of a displacement field is characterized by e ∗ N, where e is the participation ratio. Figure 20 shows e ∗ N versus the number of encounters. The overlap of the plots indicates that the size of an NPM is independent of the number of particles in the system.


Figure 20: Participation ratio of the minima times system size, versus how often they were encountered for multiple system sizes. The overlap of the plots indicates that the size of the displacement fields does not scale with system size. Instead, the localization of the NPMs is the same for all system sizes.

6.1 Discussion

No minima were encountered with energies above ∼ 2.9. This means that either there are no minima above this energy, or they are difficult to locate. With dipole responses as initial displacements, there is an upper bound on the energies that can be encountered, because their energies are relatively low. However, with random initial vectors, no minima are found above this energy either. This means that, using dipole responses as initial conditions, we are not missing any more minima than with other methods. A downside of using dipole responses as initial conditions is that calculating a dipole response takes longer on larger systems; at N = 25600 it takes approximately 20 minutes. Because the lowest minima are the most interesting and the easiest to find, random sampling is probably more effective at this system size.

From figures 18 and 20 we can conclude that the properties of NPMs do not depend on the size of the system. This is not unexpected, as shear transformations are local phenomena. The linear scaling of the number of NPMs is not unexpected either. NPMs are indications of 'weak spots', and assuming that these weak spots have a finite size, the number that fits into a system is expected to scale linearly with the system size.

7 Conclusion

Equilibrating the barrier function with conventional methods has proven difficult. A reliable dynamical Newtonian simulation is not possible due to the hypersphericality of the barrier function. The Metropolis algorithm has given some results, however at this point it is uncertain whether all assumptions behind the algorithm are valid. Especially the sub-extensive behaviour of the barrier function indicates that classical statistical physics may not be applicable. More research needs to be done into these kinds of systems before any conclusive results can be obtained.

8 Acknowledgements

I would like to thank Dr. Edan Lerner for his valuable supervision and feedback during this project. Our discussions about new research angles and possible solutions to problems were very enjoyable. I would also like to thank Geert Kapteijns for sharing his computer with me so I could run my simulations.

References

[1] C. A. Schuh, T. C. Hufnagel, and U. Ramamurty, “Mechanical behavior of amorphous alloys,” Acta Materialia, vol. 55, no. 12, pp. 4067–4109, 2007.

[2] W. D. Callister and D. G. Rethwisch, Fundamentals of Materials Science and Engineering. Wiley, London, UK, 2000.

[3] E. Lerner, “Micromechanics of nonlinear plastic modes,” Physical Review E, vol. 93, no. 5, p. 053004, 2016.

[4] L. Gartner and E. Lerner, “Nonlinear plastic modes in disordered solids,” Physical Review E, vol. 93, no. 1, p. 011001, 2016.

[5] T. B. Clark and A. Del Maestro, “Moments of the inverse participation ratio for the laplacian on finite regular graphs,” arXiv preprint arXiv:1506.02048, 2015.

[6] H. J. Berendsen, J. v. Postma, W. F. van Gunsteren, A. DiNola, and J. Haak, “Molecular dynamics with coupling to an external bath,” The Journal of chemical physics, vol. 81, no. 8, pp. 3684–3690, 1984.

[7] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, no. 1, pp. 97–109, 1970.

[8] G. O. Roberts, A. Gelman, W. R. Gilks, et al., "Weak convergence and optimal scaling of random walk Metropolis algorithms," The Annals of Applied Probability, vol. 7, no. 1, pp. 110–120, 1997.

Appendices

A Distribution function of the participation ratio

The participation ratio is defined as

e = \frac{1}{N}\,\frac{\left(\sum_i z_i^2\right)^2}{\sum_i z_i^4}    (29)

where z is the vector containing all displacements and z_i² is the squared norm of the displacement of particle i. The participation ratio is independent of the norm of z and the number of dimensions.

To derive the distribution function analytically, the function for e must be split into symmetrical parts, such that every part is monotonically increasing. Then it must be inverted. The probability of finding a certain value of e is then proportional to the derivative of the inverted function. In the case of two particles, the form of the participation ratio is

e(\theta) = \frac{1}{2}\,\frac{(\cos^2\theta + \sin^2\theta)^2}{\cos^4\theta + \sin^4\theta}    (30)

where cos θ is the norm of displacement 1 and sin θ is the norm of displacement 2. This function can be divided into 8 symmetrical octants, one of which is 0 ≤ θ < π/4. This octant is sufficient to derive the distribution function. The inverse function of e in this octant is

\theta(e) = \frac{1}{4}\arccos\left(\frac{2 - 3e}{e}\right)    (31)

and its derivative, i.e. the distribution function, is given by

D(e) = \frac{1}{4 e^2 \sqrt{-2 - \frac{1}{e^2} + \frac{3}{e}}}.    (32)
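A quick numerical check of Eq. (32) can be sketched as follows (this is illustrative, not the thesis code): the two particle displacements are parameterized by a single angle as in Eq. (30), i.e. drawn uniformly on a circle, and the factor 4/π accounts for the length of one octant.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((10**6, 2))                         # two displacement norms: uniform angle theta
e = np.sum(z**2, axis=1)**2 / (2 * np.sum(z**4, axis=1))    # Eq. (29) with N = 2

grid = np.linspace(0.501, 0.999, 200)
D = 1.0 / (4 * grid**2 * np.sqrt(-2.0 - 1.0/grid**2 + 3.0/grid))   # Eq. (32)
D *= 4.0 / np.pi                                            # normalize: one octant has length pi/4

hist, _ = np.histogram(e, bins=100, range=(0.5, 1.0), density=True)
# hist and the normalized D should agree up to sampling noise, cf. figure 21
```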


Figure 21: The numerical integral versus 10^7 samples.

In the case of 3 particles, the participation ratio is given by

e(\theta, \phi) = \frac{1}{3}\,\frac{(\sin^2\theta\cos^2\phi + \sin^2\theta\sin^2\phi + \cos^2\theta)^2}{\sin^4\theta\cos^4\phi + \sin^4\theta\sin^4\phi + \cos^4\theta}    (33)

In this case there are two variables, and it does not matter with respect to which one the function is inverted; the easiest is to invert with respect to φ. The function can be divided into 16 equivalent parts, one of them being 0 ≤ θ < π/2, 0 ≤ φ < π/4. In this part the inverse function is given by

\phi(e, \theta) = \arctan\left(\frac{\sqrt{3 - f(e, \theta)}}{\sqrt{3 + f(e, \theta)}}\right)    (34)

where

f(e, \theta) = \frac{\sqrt{3}}{e}\,\csc^4\theta\,\sqrt{-e\sin^4\theta\left(-2\cos^4\theta + 6e\cos^4\theta - 4\cos^2\theta\sin^2\theta - 2\sin^4\theta + 3e\sin^4\theta\right)}.

The derivative of this function with respect to e is

\frac{d\phi}{de} = \frac{2}{e\sqrt{\dfrac{(-4 + 9e + 3e\cos 4\theta)\,\csc^4\theta}{e\,\sqrt{e\sin^4\theta\,(16 - 27e - 12e\cos 2\theta - 9e\cos 4\theta)}}}}.    (35)

This is the distribution function; however, there is still a dependence on θ. This can be removed by integrating over θ,

D(e) = \int \frac{d\phi}{de}\,\sin\theta\, d\theta.    (36)

The integration boundaries can be found by filling in the boundary values of φ (0 and π/4) in (33). This gives the minimum and maximum value of e as a function of θ. Inverting these with respect to θ gives the integration boundaries of θ. There are a total of 4 boundary lines,

\theta_1 = \arctan\left(\frac{\sqrt{2 - \sqrt{2(1-e)/e}}}{\sqrt{1 + \sqrt{2(1-e)/e}}}\right),    (37a)

\theta_2 = \frac{1}{4}\arccos\left(\frac{4 - 9e}{3e}\right),    (37b)

\theta_3 = \frac{1}{4}\left[2\pi - \arccos\left(\frac{4 - 9e}{3e}\right)\right],    (37c)

\theta_4 = \arctan\left(\frac{\sqrt{2 + \sqrt{2(1-e)/e}}}{\sqrt{1 - \sqrt{2(1-e)/e}}}\right).    (37d)

In the region where 1/3 ≤ e < 2/3, the integration area is divided into 2 parts: one bounded by (37a) and (37b), and one bounded by (37c) and θ = π/2. For 2/3 ≤ e ≤ 1, the area is bounded by (37a) and (37d).

Figure 22: Plot of the integration boundaries, numbered in the same order as in (37).

