
5.4.1 General comments: averaging in random systems

By quenched randomness we imply that the model Hamiltonian of interest depends on random variables other than the degrees of freedom which are considered in the thermal average, and these random variables are kept fixed in one physical realization of the system.


Fig. 5.7 Topological excitations in the two-dimensional anisotropic Heisenberg model. (a) 'Snapshots' of vortex behavior in the SF state for L = 40, H||/J = 4.0. Open and closed circles represent vortices and antivortices, respectively. (b) Vortex-pair density in the SF state. The energy (in units of J) needed to create a vortex–antivortex pair is 2μ. From Landau and Binder (1981).


For example, consider a magnetic binary alloy AxB1−x, where a crystal is grown from a melt containing a fraction x of A-atoms and a fraction 1 − x of B-atoms. Assuming that both species carry Ising spins Si = ±1, it is nevertheless natural to assume that the exchange constants Jij depend on the type of pair that is considered: JAA, JAB, or JBB, respectively. Denoting the occupation variable ci = 1 if site i is taken by an A-atom and ci = 0 if it is taken by a B-atom, one arrives at the Hamiltonian (assuming nearest neighbor exchange only)

H{Si, ci} = −Σ⟨i,j⟩ [JAA ci cj + JAB (ci(1 − cj) + cj(1 − ci)) + JBB (1 − ci)(1 − cj)] Si Sj. (5.25)

Of course, this model includes the dilution of a magnetic crystal by a non-magnetic species as a special case (then JAB = JBB = 0). While the configurations of the spins {Si} in all averages are weighted with the Boltzmann factor exp[−H{Si, ci}/kBT], the configurations of the {ci} are not assumed to occur with a weight given by the Boltzmann factor, but rather with a predetermined distribution P{ci}. Depending on the history of sample preparation in the laboratory, one may wish to choose the ci completely at random, but consistent with the chosen concentration x, or with some built-in correlations reflecting 'chemical' short range order. In any case, an average of some observable A{Si, ci} (e.g. the magnetization M of the crystal) then becomes

[⟨A{Si, ci}⟩T]av = ∫ d{ci} P{ci} ⟨A{Si, ci}⟩T. (5.26)

Thus one sees that there is a double average to be carried out: for a fixed realization {ci}, one computes the thermal average ⟨. . .⟩T as usual, and then this average is averaged once more with P{ci}. While the thermal averaging is done with the usual Metropolis importance sampling, the disorder average [. . .]av = ∫ d{ci} P{ci} . . . can immediately be realized by simple sampling.
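To make this double averaging concrete, here is a minimal sketch (not taken from any published code) of the procedure for the alloy Hamiltonian above: the inner loop is a standard Metropolis thermal average of the magnetization for one fixed realization {ci}, and the outer loop is simple sampling over realizations. All numerical values (lattice size, exchange constants, temperature, sweep and sample counts) are illustrative placeholders, far too small for production work.

```python
import numpy as np

rng = np.random.default_rng(42)

L = 16                                # linear lattice size (2D); placeholder
x = 0.5                               # concentration of A-atoms
J_AA, J_AB, J_BB = 1.0, 0.5, 0.2      # hypothetical exchange constants
T = 2.0                               # temperature in units of J/k_B; placeholder
n_equil, n_sweeps = 100, 300          # far too few for real work
n_samples = 10                        # disorder realizations (text: ~10^4 near T_c)

def J(ci, cj):
    """Exchange constant for a bond, depending on the pair type (AA, AB, BB)."""
    if ci and cj:
        return J_AA
    if ci or cj:
        return J_AB
    return J_BB

def local_field(S, c, i, j):
    """Sum of J_ij * S_j over the four nearest neighbors of site (i, j)."""
    h = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = (i + di) % L, (j + dj) % L
        h += J(c[i, j], c[ni, nj]) * S[ni, nj]
    return h

def thermal_average(c):
    """Metropolis estimate of <|m|>_T for one fixed disorder realization {c_i}."""
    S = rng.choice(np.array([-1, 1]), size=(L, L))
    acc, n = 0.0, 0
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            dE = 2.0 * S[i, j] * local_field(S, c, i, j)  # cost of flipping S_ij
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                S[i, j] = -S[i, j]
        if sweep >= n_equil:
            acc += abs(S.mean())
            n += 1
    return acc / n

# Outer loop: simple sampling over quenched disorder realizations {c_i}.
m_per_sample = []
for _ in range(n_samples):
    c = (rng.random((L, L)) < x).astype(int)  # c_i = 1 (A-atom) with probability x
    m_per_sample.append(thermal_average(c))

print("[<|m|>_T]_av =", np.mean(m_per_sample))
```

Averaging |m| rather than m avoids the cancellation of the magnetization's sign fluctuations in a finite lattice; with JAB = JBB = 0 the same sketch covers the diluted magnet mentioned above.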

In principle, this problem is hence straightforwardly suitable for Monte Carlo simulation. However, the question arises how large the sample has to be for the averaging with P{ci} over the configurations {ci} of the quenched disorder variables. In an experiment, measurements are typically carried out for a single probe; there is no need to repeat the experiment for a large number of samples, because the observable quantities are 'self-averaging'. One would expect that a similar self-averaging property would also apply to simulations, if very large systems away from any phase transition are studied, and then simulation of a single (or a few) realizations of the {ci} would suffice. However, the situation is rather different in the case of a finite size scaling analysis, where one considers systems of finite linear dimension L right at the critical temperature Tc of the model: the fluctuations from one sample {ci} to the next cause a significant sample-to-sample fluctuation of the effective pseudo-critical temperature Tc(L) of the system (defined e.g. by the maximum of the specific heat or the maximum slope of the fourth order cumulant, etc.).


This sample-to-sample fluctuation of Tc(L) causes a lack of self-averaging for certain quantities (typically for the order parameter and its susceptibility) at Tc. This lack of self-averaging shows up when one considers ratios such as (Wiseman and Domany, 1995)

RA ≡ [(⟨A⟩T − [⟨A⟩T]av)²]av/([⟨A⟩T]av)². (5.27)

Lack of self-averaging implies that (ξ is the correlation length)

RA → CA if L/ξ → 0 (i.e. for T = Tc), (5.28)

while away from Tc there is self-averaging: ratios such as RA decay for L → ∞ inversely proportional to the volume,

RA ∝ (ξ/L)^d if L ≫ ξ. (5.29)

The lack of self-averaging implies that a sample of the order of n ≈ 10⁴ realizations is desirable in order to get the relative error of the disorder average at Tc, [⟨A⟩Tc]av, down to 1% or less. This consideration already shows that the Monte Carlo study of phase transitions in random systems may be very computer time consuming. Of course, sometimes a relative error of 10% may seem acceptable, and then only a sample of n ≈ 10² realizations is required.
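As a small illustration of how Eq. (5.27) is estimated in practice, the following sketch (a hypothetical helper, not from the original text) computes RA from a list of per-realization thermal averages. For n independent realizations, the relative error of [⟨A⟩T]av is roughly √(RA/n), which is where the estimate of n ≈ 10⁴ realizations for a 1% error comes from when RA is of order unity.

```python
import numpy as np

def R_A(thermal_averages):
    """Wiseman-Domany ratio, Eq. (5.27): relative sample-to-sample variance
    of the thermal average <A>_T over quenched realizations {c_i}."""
    a = np.asarray(thermal_averages, dtype=float)
    mean = a.mean()
    return np.mean((a - mean) ** 2) / mean ** 2

def realizations_needed(R, rel_error):
    """Number of disorder realizations n with sqrt(R_A / n) <= rel_error."""
    return int(np.ceil(R / rel_error ** 2))

print(realizations_needed(1.0, 0.01))   # R_A ~ 1 at T_c -> n = 10^4
```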

In addition, one has to be careful about the precise manner in which the disorder averaging is carried out. Suppose we consider the case x = 0.5 for the AxB1−x alloy. We can generate a sample {ci} by drawing a uniformly distributed random number ηi with 0 ≤ ηi < 1 for each lattice site, and choosing ci = 1 if ηi < x and otherwise setting ci = 0. However, for a crystal with N = L^d sites the average composition will then differ from x = 0.5 by a random deviation of order 1/√N. Since the dependence of the critical temperature Tc(x) on the concentration x is often rather strong, this sample-to-sample variation of the concentration may contribute substantially to the sample-to-sample fluctuation of the pseudo-critical temperature Tc(L). This problem is avoided if one simply selects Nx = xN lattice sites at random, setting ci = 1 at each of these sites and otherwise putting ci = 0. Then the concentration of every sample is strictly equal to x, and the sample-to-sample fluctuation of the concentration is suppressed. It turns out that the 'universal' numbers CA defined above, which characterize the lack of self-averaging at Tc in a random system, do differ for these two choices (Wiseman and Domany, 1998). In a sense, these two choices to fix the concentration of the random alloy correspond to the canonical and semi-grand canonical ensembles of statistical mechanics.
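The two ways of fixing the concentration can be written down in a few lines; this sketch (with a hypothetical lattice size) contrasts the 'semi-grand canonical' per-site choice, whose composition fluctuates by O(1/√N), with the 'canonical' choice of exactly xN A-sites.

```python
import numpy as np

rng = np.random.default_rng(0)
N, x = 32 * 32, 0.5   # number of lattice sites and concentration; placeholders

# Semi-grand canonical choice: each site is independently an A-atom with
# probability x; the realized composition fluctuates by O(1/sqrt(N)).
c_sgc = (rng.random(N) < x).astype(int)

# Canonical choice: exactly N*x randomly selected sites are A-atoms, so the
# composition of every sample is strictly equal to x.
c_can = np.zeros(N, dtype=int)
c_can[rng.choice(N, size=int(x * N), replace=False)] = 1

print(c_sgc.mean(), c_can.mean())  # first scatters around x, second is exactly x
```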

If we were to treat the disorder as 'annealed' rather than 'quenched', the average would simply be

⟨A{Si, ci}⟩T = (1/Z) Tr{Si, ci} A{Si, ci} exp(−H{Si, ci}/kBT), (5.30)

i.e. in the trace the two types of variables {Si}, {ci} are now both included and treated on an equal footing, so the local concentration on a lattice site also exhibits thermal fluctuations (e.g. due to interdiffusion of the species A, B in the crystal), unlike the quenched case.


In the semi-grand canonical ensemble of alloys, the chemical potential difference Δμ = μA − μB between the species is the independent thermodynamic variable, and then the concentration undergoes thermal fluctuations; in the canonical ensemble, x is the independent thermodynamic variable and hence strictly non-fluctuating (thermal fluctuations then occur in the conjugate variable Δμ, but this variable often is not even recorded in a simulation). These distinctions between the various thermodynamic ensembles naturally have analogs for the calculation of quenched averages, since one can consider quenched averaging as a thermal averaging of the disorder variables ({ci} in our example) at a different (higher) temperature: for a completely random selection of lattice sites, we average at infinite temperature. We can introduce some correlations in the occupancy of lattice sites by defining

P{ci} = (1/Z0) exp(−Hc{ci}/kBT0), (5.31)

where Hc is some model Hamiltonian describing the 'crystallographic' interaction between the species A, B, and one assumes that at the temperature T0 (≫ T) the {ci} are still in full thermal equilibrium, before one quenches in the configurations of the {ci} thus generated by sudden cooling from T0 to T, where the {ci} are forbidden to relax. Obviously, these considerations are motivated by the actual experimental procedures, but they also clarify that the different ensembles with which the averaging at T0 is performed lead to different ensembles for carrying out quenched averages. In most cases one considers uncorrelated disorder, i.e. 1/T0 → 0, but these considerations apply in this limit as well.
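Generating such correlated disorder amounts to a Monte Carlo simulation of the {ci} themselves at T0, followed by freezing the configuration. The sketch below assumes, purely for illustration, a lattice-gas form Hc = −V Σ⟨i,j⟩ ci cj and uses concentration-conserving (Kawasaki-type) swaps, so the quenched samples are drawn from the canonical ensemble at T0; all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
L, x = 32, 0.5      # lattice size and concentration; placeholders
V, T0 = 1.0, 2.0    # hypothetical A-A attraction in H_c and temperature T0

def H_c(c):
    """Illustrative choice H_c = -V * sum_<ij> c_i c_j (nearest neighbors)."""
    return -V * float((c * np.roll(c, 1, 0) + c * np.roll(c, 1, 1)).sum())

# Canonical starting configuration: exactly x*N sites occupied by A-atoms.
c = np.zeros(L * L, dtype=int)
c[rng.choice(L * L, size=int(x * L * L), replace=False)] = 1
c = c.reshape(L, L)

# Equilibrate {c_i} at T0 with concentration-conserving swaps (Metropolis).
# Recomputing H_c in full keeps the sketch short; real codes use the local dE.
for _ in range(20000):
    a = (rng.integers(L), rng.integers(L))
    b = (rng.integers(L), rng.integers(L))
    if c[a] == c[b]:
        continue                       # swapping equal occupations does nothing
    E_old = H_c(c)
    c[a], c[b] = c[b], c[a]            # trial swap
    if rng.random() >= np.exp(-(H_c(c) - E_old) / T0):
        c[a], c[b] = c[b], c[a]        # reject: undo the swap

# 'Quench': from here on {c_i} is frozen; only the spins {S_i} evolve at T < T0.
```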

One important aspect of quenched averaging is that the distribution P(A) generated in this way ([⟨A{Si, ci}⟩]av = ∫ dA P(A) A) typically is not symmetric around its average: mean value and most probable value may differ appreciably. Consider, for instance, the magnetization for the above model Hamiltonian at a temperature slightly above the average value of Tc(L): those samples for which Tc(L) > T due to the sample-to-sample fluctuation of Tc(L) will have a large magnetization, while those samples where Tc(L) deviates in the other direction will have a very small magnetization. This asymmetry of the distribution creates problems if one calculates quantities which have very small averages, e.g. spin correlations [⟨SiSj⟩T]av at large distances |ri − rj| between the sites i, j.
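Because P(A) is asymmetric, it is worth recording more than the mean over realizations; this small helper (hypothetical, with a crude histogram-based mode estimate) makes the difference between mean, median, and most probable value visible.

```python
import numpy as np

def disorder_summary(per_sample_values, bins=20):
    """Mean, median, and a crude most-probable value of P(A), estimated from
    one thermal average per quenched realization; asymmetry of P(A) shows up
    as a gap between the three numbers."""
    a = np.asarray(per_sample_values, dtype=float)
    hist, edges = np.histogram(a, bins=bins)
    k = np.argmax(hist)                      # bin with the highest count
    mode = 0.5 * (edges[k] + edges[k + 1])   # center of that bin
    return {"mean": a.mean(), "median": np.median(a), "mode": mode}
```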

An even more subtle effect may occur due to extremely rare fluctuations.

Consider e.g. the case of simple dilution in the above model Hamiltonian, where JAB = JBB = 0 and JAA ≡ J. Then for x < 1 the critical temperature Tc(x) will be clearly less than Tc(1). However, the probability is non-zero (albeit extremely small) that somewhere in the system we find a large compact region free of dilution sites. This region will effectively order already at Tc(1), in a still disordered environment. A mathematical consideration of this problem shows that there is a whole temperature region Tc(x) < T < Tc(1) where very weak singularities are already present (known as 'Griffiths singularities'; Griffiths, 1969). One also expects that these singularities cause some anomalous tails in


dynamic correlation functions at long times, but owing to the difficulty of sampling such very small correlations accurately enough, this problem is not yet well understood.

Monte Carlo simulation of systems with quenched disorder is a difficult task: owing to the need to carry out the double averaging procedure over both thermal disorder and quenched disorder, the demand for computer resources is huge, and judging the accuracy is subtle, particularly due to metastability and slow relaxation at low temperatures. Many problems are still incompletely understood. In the following we mention two types of problems more explicitly, but only on the level of rather introductory comments. For extensive reviews of the state of the art in this field, we refer to Young (1998).

5.4.2 Parallel tempering: a general method to better