A Guide to Monte Carlo Simulations in Statistical Physics

N/A
N/A
Protected

Academic year: 2022

Share "A Guide to Monte Carlo Simulations in Statistical Physics"

Copied!
536
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

Dealing with all aspects of Monte Carlo simulation of complex physical systems encountered in condensed-matter physics and statistical mechanics, this book provides an introduction to computer simulations in physics.

This fourth edition contains extensive new material describing numerous powerful algorithms not covered in previous editions, in some cases representing new developments that have only recently appeared. Older methodologies whose impact was previously unclear or unappreciated are also introduced, in addition to many small revisions that bring the text and cited literature up to date. This edition also introduces the use of petascale computing facilities in the Monte Carlo arena.

Throughout the book there are many applications, examples, recipes, case studies, and exercises to help the reader understand the material. It is ideal for graduate students and researchers, both in academia and industry, who want to learn techniques that have become a third tool of physical science, complementing experiment and analytical theory.

     .       is the Distinguished Research Professor of Physics and founding Director of the Center for Simulational Physics at the University of Georgia, USA.

Kurt Binder is Professor Emeritus of Theoretical Physics and Gutenberg Fellow at the Institut für Physik, Johannes-Gutenberg-Universität, Mainz, Germany.


A Guide to

Monte Carlo Simulations in Statistical Physics

Fourth Edition

David P. Landau

Center for Simulational Physics, University of Georgia, USA

Kurt Binder

Institut für Physik, Johannes-Gutenberg-Universität, Germany


Cambridge University Press is part of the University of Cambridge.

It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org

Information on this title: www.cambridge.org/9781107074026

© D. Landau and K. Binder 2015

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2000
Second edition published 2005
Third edition published 2009
Fourth edition published 2015

Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library
ISBN 978-1-107-07402-6 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Preface xv

1 Introduction 1

1.1 What is a Monte Carlo simulation? 1

1.2 What problems can we solve with it? 2

1.3 What difficulties will we encounter? 3

1.3.1 Limited computer time and memory 3

1.3.2 Statistical and other errors 3

1.4 What strategy should we follow in approaching a problem? 4
1.5 How do simulations relate to theory and experiment? 4

1.6 Perspective 5

References 6

2 Some necessary background 7

2.1 Thermodynamics and statistical mechanics: a quick reminder 7

2.1.1 Basic notions 7

2.1.2 Phase transitions 15

2.1.3 Ergodicity and broken symmetry 27

2.1.4 Fluctuations and the Ginzburg criterion 27
2.1.5 A standard exercise: the ferromagnetic Ising model 28

2.2 Probability theory 30

2.2.1 Basic notions 30

2.2.2 Special probability distributions and the central limit theorem 31

2.2.3 Statistical errors 33

2.2.4 Markov chains and master equations 33

2.2.5 The 'art' of random number generation 35
2.3 Non-equilibrium and dynamics: some introductory comments 41

2.3.1 Physical applications of master equations 41
2.3.2 Conservation laws and their consequences 43
2.3.3 Critical slowing down at phase transitions 46

2.3.4 Transport coefficients 48


2.3.5 Concluding comments: why bother about dynamics when doing Monte Carlo for statics? 48

References 48

3 Simple sampling Monte Carlo methods 51

3.1 Introduction 51

3.2 Comparisons of methods for numerical integration of given functions 51

3.2.1 Simple methods 51

3.2.2 Intelligent methods 53

3.3 Boundary value problems 54

3.4 Simulation of radioactive decay 56

3.5 Simulation of transport properties 57

3.5.1 Neutron transport 57

3.5.2 Fluid flow 58

3.6 The percolation problem 58

3.6.1 Site percolation 59

3.6.2 Cluster counting: the Hoshen–Kopelman algorithm 62

3.6.3 Other percolation models 63

3.7 Finding the groundstate of a Hamiltonian 63

3.8 Generation of ‘random’ walks 64

3.8.1 Introduction 64

3.8.2 Random walks 65

3.8.3 Self-avoiding walks 66

3.8.4 Growing walks and other models 68

3.9 Final remarks 69

References 69

4 Importance sampling Monte Carlo methods 71

4.1 Introduction 71

4.2 The simplest case: single spin-flip sampling for the simple Ising model 72

4.2.1 Algorithm 73

4.2.2 Boundary conditions 76

4.2.3 Finite size effects 79

4.2.4 Finite sampling time effects 93

4.2.5 Critical relaxation 100

4.3 Other discrete variable models 108

4.3.1 Ising models with competing interactions 108

4.3.2 q-state Potts models 112

4.3.3 Baxter and Baxter–Wu models 113

4.3.4 Clock models 114

4.3.5 Ising spin glass models 115

4.3.6 Complex fluid models 116

4.4 Spin-exchange sampling 117

4.4.1 Constant magnetization simulations 117

4.4.2 Phase separation 118


4.4.3 Diffusion 120

4.4.4 Hydrodynamic slowing down 122

4.5 Microcanonical methods 123

4.5.1 Demon algorithm 123

4.5.2 Dynamic ensemble 123

4.5.3 Q2R 124

4.6 General remarks, choice of ensemble 124

4.7 Statics and dynamics of polymer models on lattices 126

4.7.1 Background 126

4.7.2 Fixed bond length methods 126

4.7.3 Bond fluctuation method 128

4.7.4 Enhanced sampling using a fourth dimension 128
4.7.5 The 'wormhole algorithm' – another method to equilibrate dense polymeric systems 130

4.7.6 Polymers in solutions of variable quality: θ-point, collapse transition, unmixing 130

4.7.7 Equilibrium polymers: a case study 133

4.7.8 The pruned enriched Rosenbluth method (PERM): a biased sampling approach to simulate very long isolated chains 136

4.8 Some advice 139

References 140

5 More on importance sampling Monte Carlo methods for lattice systems 144

5.1 Cluster flipping methods 144

5.1.1 Fortuin–Kasteleyn theorem 144

5.1.2 Swendsen–Wang method 145

5.1.3 Wolff method 148

5.1.4 ‘Improved estimators’ 149

5.1.5 Invaded cluster algorithm 149

5.1.6 Probability changing cluster algorithm 150

5.2 Specialized computational techniques 151

5.2.1 Expanded ensemble methods 151

5.2.2 Multispin coding 151

5.2.3 N-fold way and extensions 152

5.2.4 Hybrid algorithms 155

5.2.5 Multigrid algorithms 155

5.2.6 Monte Carlo on vector computers 155

5.2.7 Monte Carlo on parallel computers 156

5.3 Classical spin models 157

5.3.1 Introduction 157

5.3.2 Simple spin-flip method 158

5.3.3 Heatbath method 160

5.3.4 Low temperature techniques 161

5.3.5 Over-relaxation methods 161


5.3.6 Wolff embedding trick and cluster flipping 162

5.3.7 Hybrid methods 163

5.3.8 Monte Carlo dynamics vs. equation of motion

dynamics 163

5.3.9 Topological excitations and solitons 164

5.4 Systems with quenched randomness 166

5.4.1 General comments: averaging in random systems 166
5.4.2 Parallel tempering: a general method to better equilibrate systems with complex energy landscapes 171

5.4.3 Random fields and random bonds 172

5.4.4 Spin glasses and optimization by simulated annealing 173

5.4.5 Ageing in spin glasses and related systems 178
5.4.6 Vector spin glasses: developments and surprises 178
5.5 Models with mixed degrees of freedom: Si/Ge alloys, a case study 179

5.6 Methods for systems with long range interactions 181
5.7 Parallel tempering, simulated tempering, and related methods: accuracy considerations 183

5.8 Sampling the free energy and entropy 186

5.8.1 Thermodynamic integration 186

5.8.2 Groundstate free energy determination 187
5.8.3 Estimation of intensive variables: the chemical potential 188

5.8.4 Lee–Kosterlitz method 189

5.8.5 Free energy from finite size dependence at Tc 189

5.9 Miscellaneous topics 190

5.9.1 Inhomogeneous systems: surfaces, interfaces, etc. 190
5.9.2 Anisotropic critical phenomena: simulation boxes with arbitrary aspect ratio 196

5.9.3 Other Monte Carlo schemes 198

5.9.4 Inverse and reverse Monte Carlo methods 200
5.9.5 Finite size effects: review and summary 202

5.9.6 More about error estimation 202

5.9.7 Random number generators revisited 204

5.10 Summary and perspective 207

References 208

6 Off-lattice models 212

6.1 Fluids 212

6.1.1 NVT ensemble and the virial theorem 212

6.1.2 NpT ensemble 216

6.1.3 Grand canonical ensemble 220

6.1.4 Near critical coexistence: a case study 224

6.1.5 Subsystems: a case study 226

6.1.6 Gibbs ensemble 231


6.1.7 Widom particle insertion method and variants 234

6.1.8 Monte Carlo phase switch 236

6.1.9 Cluster algorithm for fluids 239

6.1.10 Event chain algorithms 241

6.2 ‘Short range’ interactions 242

6.2.1 Cutoffs 242

6.2.2 Verlet tables and cell structure 242

6.2.3 Minimum image convention 243

6.2.4 Mixed degrees of freedom reconsidered 243

6.3 Treatment of long range forces 243

6.3.1 Reaction field method 243

6.3.2 Ewald method 244

6.3.3 Fast multipole method 245

6.4 Adsorbed monolayers 246

6.4.1 Smooth substrates 246

6.4.2 Periodic substrate potentials 246

6.5 Complex fluids 247

6.5.1 Application of the Liu–Luijten algorithm to a binary fluid mixture 250

6.6 Polymers: an introduction 251

6.6.1 Length scales and models 251

6.6.2 Asymmetric polymer mixtures: a case study 257
6.6.3 Applications: dynamics of polymer melts; thin adsorbed polymeric films 261

6.6.4 Polymer melts: speeding up bond fluctuation model simulations 265

6.7 Configurational bias and 'smart Monte Carlo' 267
6.8 Estimation of excess free energies due to walls for fluids and solids 270

6.9 A symmetric, Lennard–Jones mixture: a case study 272
6.10 Finite size effects on interfacial properties: a case study 275

6.11 Outlook 277

References 278

7 Reweighting methods 282

7.1 Background 282

7.1.1 Distribution functions 282

7.1.2 Umbrella sampling 282

7.2 Single histogram method 285

7.2.1 The Ising model as a case study 286

7.2.2 The surface-bulk multicritical point: another case

study 292

7.3 Multihistogram method 295

7.4 Broad histogram method 296

7.5 Transition matrix Monte Carlo 296

7.6 Multicanonical sampling 297


7.6.1 The multicanonical approach and its relationship to canonical sampling 297

7.6.2 Near first order transitions 299

7.6.3 Groundstates in complicated energy landscapes 300

7.6.4 Interface free energy estimation 301

7.7 A case study: the Casimir effect in critical systems 302

7.8 Wang–Landau sampling 303

7.8.1 Basic algorithm 303

7.8.2 Applications to models with continuous variables 307
7.8.3 A simple example of two-dimensional Wang–Landau sampling 307

7.8.4 Microcanonical entropy inflection points 308

7.8.5 Back to numerical integration 309

7.8.6 Replica exchange Wang–Landau sampling 310
7.9 A case study: evaporation/condensation transition of droplets 314

References 316

8 Quantum Monte Carlo methods 319

8.1 Introduction 319

8.2 Feynman path integral formulation 320

8.2.1 Off-lattice problems: low temperature properties of crystals 320

8.2.2 Bose statistics and superfluidity 327
8.2.3 Path integral formulation for rotational degrees of freedom 328

8.3 Lattice problems 331

8.3.1 The Ising model in a transverse field 331

8.3.2 Anisotropic Heisenberg chain 332

8.3.3 Fermions on a lattice 336

8.3.4 An intermezzo: the minus sign problem 338

8.3.5 Spinless fermions revisited 340

8.3.6 Cluster methods for quantum lattice models 342

8.3.7 Continuous time simulations 344

8.3.8 Decoupled cell method 345

8.3.9 Handscomb's method and the stochastic series expansion (SSE) approach 346

8.3.10 Wang–Landau sampling for quantum models 347

8.3.11 Fermion determinants 349

8.4 Monte Carlo methods for the study of groundstate properties 350

8.4.1 Variational Monte Carlo (VMC) 351

8.4.2 Green's function Monte Carlo methods (GFMC) 353
8.5 Towards constructing the nodal surface of off-lattice, many-Fermion systems: the 'survival of the fittest' algorithm 355


8.6 Concluding remarks 359

References 360

9 Monte Carlo renormalization group methods 364
9.1 Introduction to renormalization group theory 364

9.2 Real space renormalization group 368

9.3 Monte Carlo renormalization group 369

9.3.1 Large cell renormalization 369

9.3.2 Ma's method: finding critical exponents and the fixed point Hamiltonian 371

9.3.3 Swendsen’s method 372

9.3.4 Location of phase boundaries 374

9.3.5 Dynamic problems: matching time-dependent correlation functions 375

9.3.6 Inverse Monte Carlo renormalization group transformations 376

References 376

10 Non-equilibrium and irreversible processes 378

10.1 Introduction and perspective 378

10.2 Driven diffusive systems (driven lattice gases) 378

10.3 Crystal growth 381

10.4 Domain growth 384

10.5 Polymer growth 387

10.5.1 Linear polymers 387

10.5.2 Gelation 387

10.6 Growth of structures and patterns 389

10.6.1 Eden model of cluster growth 389

10.6.2 Diffusion limited aggregation 389

10.6.3 Cluster–cluster aggregation 392

10.6.4 Cellular automata 392

10.7 Models for film growth 393

10.7.1 Background 393

10.7.2 Ballistic deposition 394

10.7.3 Sedimentation 395

10.7.4 Kinetic Monte Carlo and MBE growth 396

10.8 Transition path sampling 398

10.9 Forced polymer pore translocation: a case study 399
10.10 The Jarzynski non-equilibrium work theorem and its application to obtain free energy differences from trajectories 402

10.11 Outlook: variations on a theme 404

References 404

11 Lattice gauge models: a brief introduction 408
11.1 Introduction: gauge invariance and lattice gauge theory 408

11.2 Some technical matters 410


11.3 Results for Z(N) lattice gauge models 410

11.4 Compact U(1) gauge theory 411

11.5 SU(2) lattice gauge theory 412

11.6 Introduction: quantum chromodynamics (QCD) and phase transitions of nuclear matter 413

11.7 The deconfinement transition of QCD 415

11.8 Towards quantitative predictions 418

11.9 Density of states in gauge theories 420

11.10 Perspective 421

References 421

12 A brief review of other methods of computer simulation 423

12.1 Introduction 423

12.2 Molecular dynamics 423

12.2.1 Integration methods (microcanonical ensemble) 423
12.2.2 Other ensembles (constant temperature, constant pressure, etc.) 427

12.2.3 Non-equilibrium molecular dynamics 430

12.2.4 Hybrid methods (MD + MC) 430

12.2.5 Ab initio molecular dynamics 431

12.2.6 Hyperdynamics and metadynamics 432

12.3 Quasi-classical spin dynamics 432

12.4 Langevin equations and variations (cell dynamics) 436

12.5 Micromagnetics 437

12.6 Dissipative particle dynamics (DPD) 438

12.7 Lattice gas cellular automata 439

12.8 Lattice Boltzmann equation 440

12.9 Multiscale simulation 440

12.10 Multiparticle collision dynamics 442

References 444

13 Monte Carlo simulations at the periphery of physics and beyond 447

13.1 Commentary 447

13.2 Astrophysics 447

13.3 Materials science 448

13.4 Chemistry 449

13.5 ‘Biologically inspired’ physics 451

13.5.1 Commentary and perspective 451

13.5.2 Lattice proteins 451

13.5.3 Cell sorting 453

13.6 Biology 454

13.7 Mathematics/statistics 455

13.8 Sociophysics 456

13.9 Econophysics 456

13.10 ‘Traffic’ simulations 457

13.11 Medicine 459


13.12 Networks: what connections really matter? 460

13.13 Finance 461

References 462

14 Monte Carlo studies of biological molecules 465

14.1 Introduction 465

14.2 Protein folding 466

14.2.1 Introduction 466

14.2.2 How to best simulate proteins: Monte Carlo or molecular dynamics? 467

14.2.3 Generalized ensemble methods 467

14.2.4 Globular proteins: a case study 469

14.2.5 Simulations of membrane proteins 470
14.3 Monte Carlo simulations of RNA structures 472
14.4 Monte Carlo simulations of carbohydrates 472

14.5 Determining macromolecular structures 474

14.6 Outlook 475

References 475

15 Outlook 477

Appendix: Listing of programs mentioned in the text 479

Index 511

Preface

Historically physics was first known as 'natural philosophy' and research was carried out by purely theoretical (or philosophical) investigation. True progress was obviously limited by the lack of real knowledge of whether or not a given theory really applied to nature. Eventually experimental investigation became an accepted form of research although it was always limited by the physicist's ability to prepare a sample for study or to devise techniques to probe for the desired properties. With the advent of computers it became possible to carry out simulations of models which were intractable using 'classical' theoretical techniques. In many cases computers have, for the first time in history, enabled physicists not only to invent new models for various aspects of nature but also to solve those same models without substantial simplification. In recent years computer power has increased quite dramatically, with access to computers becoming both easier and more common (e.g. with personal computers and workstations), and computer simulation methods have also been steadily refined. As a result computer simulations have become another way of doing physics research. They provide another perspective; in some cases simulations provide a theoretical basis for understanding experimental results, and in other instances simulations provide 'experimental' data with which theory may be compared. There are numerous situations in which direct comparison between analytical theory and experiment is inconclusive. For example, the theory of phase transitions in condensed matter must begin with the choice of a Hamiltonian, and it is seldom clear to what extent a particular model actually represents the real material on which experiments are done. Since analytical treatments also usually require mathematical approximations whose accuracy is difficult to assess or control, one does not know whether discrepancies between theory and experiment should be attributed to shortcomings of the model, the approximations, or both. The goal of this text is to provide a basic understanding of the methods and philosophy of computer simulations research with an emphasis on problems in statistical thermodynamics as applied to condensed matter physics or materials science. There exist many other simulational problems in physics (e.g. simulating the spectral intensity reaching a detector in a scattering experiment) which are more straightforward and which will only occasionally be mentioned. We shall use many specific examples and, in some cases, give
explicit computer programs, but we wish to emphasize that these methods are applicable to a wide variety of systems including those which are not treated here at all. As computer architecture changes, the methods presented here will in some cases require relatively minor reprogramming and in other instances will require new algorithm development in order to be truly efficient. We hope that this material will prepare the reader for studying new and different problems using existing as well as new computers.

At this juncture we wish to emphasize that it is important that the simulation algorithm and conditions be chosen with the physics problem at hand in mind. The interpretation of the resultant output is critical to the success of any simulational project, and we thus include substantial information about various aspects of thermodynamics and statistical physics to help strengthen this connection. We also wish to draw the reader's attention to the rapid development of scientific visualization and the important role that it can play in developing an understanding of the results of some simulations.

This book is intended to serve as an introduction to Monte Carlo methods for graduate students and advanced undergraduates, as well as more senior researchers who are not yet experienced in computer simulations. The book is divided up in such a way that it will be useful for courses which only wish to deal with a restricted number of topics. Some of the later chapters may simply be skipped without affecting the understanding of the chapters which follow.

Because of the immensity of the subject, as well as the existence of a number of very good monographs and articles on advanced topics which have become quite technical, we will limit our discussion in certain areas, e.g. polymers, to an introductory level. The examples which are given are in FORTRAN, not because it is necessarily the best scientific computer language, but because for many decades of Monte Carlo simulations it was the most widespread. Many existing Monte Carlo programs and related subprograms are in FORTRAN and will be available to the student from libraries, journals, etc. (FORTRAN has also evolved dramatically over its more than 50 years of existence, and the newest versions are efficient and well suited for operations involving arrays and for parallel algorithms. Object-oriented languages, like C++, while useful for writing complex programs, can be far more difficult to learn. Programs written in popular, non-compiler languages, like Java or MATLAB, can be more difficult to debug and run relatively slowly. Nevertheless, all the methods described in this book can be implemented using the reader's 'language of choice'.) A number of sample problems are suggested in the various chapters; these may be assigned by course instructors or worked out by students on their own. Our experience in assigning problems to students taking a graduate course in simulations at the University of Georgia over a more than 30-year period suggests that for maximum pedagogical benefit, students should be required to prepare cogent reports after completing each assigned simulational problem. Students were required to complete seven 'projects' in the course of the semester for which they needed to write and debug programs, take and analyze data, and prepare a report. Each report should briefly describe the algorithm used, provide sample data and data analysis, draw conclusions, and
add comments. (A sample program/output should be included.) In this way, the students obtain practice in the summary and presentation of simulational results, a skill which will prove to be valuable later in their careers. For convenience, most of the case studies that are described have been simply taken from the research of the authors of this book – the reader should be aware that this is by no means meant as a negative statement on the quality of the research of numerous other groups in the field. Similarly, selected references are given to aid the reader in finding more detailed information, but because of length restrictions it is simply not possible to provide a complete list of relevant literature. Many coworkers have been involved in the work which is mentioned here, and it is a pleasure to thank them for their fruitful collaboration. We have also benefited from the stimulating comments of many of our colleagues and we wish to express our thanks to them as well.

The pace of developments in computer simulations continues at breakneck speed. This fourth edition of our 'guide' to Monte Carlo simulations updates some of the references and includes numerous additions reflecting new algorithms that have appeared since work on the third edition was completed.

The emphasis on the use of Monte Carlo simulations in biologically related problems in the third edition proved to foretell the future, as the use of Monte Carlo methods for the study of biological molecules has continued to expand.

Similarly, the use of Monte Carlo methods in 'non-traditional' areas of research has continued to grow. There have been exciting new developments in computer hardware; in particular, the use of GPUs in scientific computing has dramatically altered the price/performance ratio for many algorithmic implementations. Because of advances in computer technology and algorithms, new results often have much higher statistical precision than some of the older examples in the text. Nonetheless, the older work often provides valuable pedagogical information for the student and may also be more readable than more recent, and more compact, papers. An additional advantage is that the reader can easily reproduce some of the older results with only a modest investment of modern computer resources. Of course, newer, higher resolution studies that are cited often permit yet additional information to be extracted from simulational data, so striving for higher precision should not be viewed as 'busy work'. We hope that this guide will help impart to the reader not only an understanding of the methodology of Monte Carlo simulations but also an appreciation for the new science that can be uncovered with the Monte Carlo method.

1 Introduction

1.1 WHAT IS A MONTE CARLO SIMULATION?

In a Monte Carlo simulation we attempt to follow the 'time dependence' of a model for which change, or growth, does not proceed in some rigorously predefined fashion (e.g. according to Newton's equations of motion) but rather in a stochastic manner which depends on a sequence of random numbers which is generated during the simulation. With a second, different sequence of random numbers the simulation will not give identical results but will yield values which agree with those obtained from the first sequence to within some 'statistical error'. A very large number of different problems fall into this category: in percolation an empty lattice is gradually filled with particles by placing a particle on the lattice randomly with each 'tick of the clock'.

Many questions may then be asked about the resulting 'clusters' which are formed of neighboring occupied sites. Particular attention has been paid to the determination of the 'percolation threshold', i.e. the critical concentration of occupied sites for which an 'infinite percolating cluster' first appears. A percolating cluster is one which reaches from one boundary of a (macroscopic) system to the opposite one. The properties of such objects are of interest in the context of diverse physical problems such as conductivity of random mixtures, flow through porous rocks, behavior of dilute magnets, etc. Another example is diffusion limited aggregation (DLA) where a particle executes a random walk in space, taking one step at each time interval, until it encounters a 'seed' mass and sticks to it. The growth of this mass may then be studied as many random walkers are turned loose. The 'fractal' properties of the resulting object are of real interest, and while there is no accepted analytical theory of DLA to date, computer simulation is the method of choice. In fact, the phenomenon of DLA was first discovered by Monte Carlo simulation.
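As a concrete illustration of the percolation problem just described, consider the short Python sketch below (a minimal illustration of our own, with hypothetical function names; the book's program listings are in FORTRAN). It fills an L × L square lattice at occupation probability p and uses a breadth-first search over nearest-neighbor occupied sites to test whether a cluster spans from the top boundary to the bottom one.

```python
import random
from collections import deque

def spans(L, p, rng=random):
    """Fill an L x L square lattice with probability p and report whether
    an occupied cluster connects the top row to the bottom row."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    queue = deque((0, j) for j in range(L) if occ[0][j])   # occupied top-row seeds
    seen = set(queue)
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True                                    # reached the bottom
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and occ[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

# Spanning fraction near the site percolation threshold of the
# square lattice (p_c is approximately 0.5927):
for p in (0.55, 0.59, 0.63):
    hits = sum(spans(64, p) for _ in range(200))
    print(f"p = {p:.2f}: spanning fraction = {hits / 200:.2f}")
```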

Considering problems of statistical mechanics, we may be attempting to sample a region of phase space in order to estimate certain properties of the model, although we may not be moving in phase space along the same path which an exact solution to the time dependence of the model would yield.

Remember that the task of equilibrium statistical mechanics is to calculate thermal averages of (interacting) many-particle systems: Monte Carlo simulations can do that, taking proper account of statistical fluctuations and their effects in such systems. Many of these models will be discussed in more detail
in later chapters so we shall not provide further details here. Since the accuracy of a Monte Carlo estimate depends upon the thoroughness with which phase space is probed, improvement may be obtained by simply running the calculation a little longer to increase the number of samples. Unlike in the application of many analytic techniques (e.g. perturbation theory for which the extension to higher order may be prohibitively difficult), the improvement of the accuracy of Monte Carlo results is possible not just in principle but also in practice.
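The claim that accuracy improves simply by taking more samples can be demonstrated with the most elementary Monte Carlo calculation. The following Python fragment (a minimal sketch of our own, not taken from the book) estimates π from random points in the unit square; the statistical error is seen to shrink roughly as $1/\sqrt{N}$.

```python
import math
import random

def pi_estimate(n, rng=random):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the quarter circle of unit radius."""
    hits = sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))
    return 4.0 * hits / n

for n in (10**2, 10**4, 10**6):
    est = pi_estimate(n)
    print(f"N = {n:>7}: pi estimate = {est:.5f}, error = {abs(est - math.pi):.5f}")
```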

1.2 WHAT PROBLEMS CAN WE SOLVE WITH IT?

The range of different physical phenomena which can be explored using Monte Carlo methods is exceedingly broad. Models which either naturally or through approximation can be discretized can be considered. The motion of individual atoms may be examined directly; e.g. in a binary (AB) metallic alloy where one is interested in interdiffusion or unmixing kinetics (if the alloy was prepared in a thermodynamically unstable state) the random hopping of atoms to neighboring sites can be modeled directly. This problem is complicated because the jump rates of the different atoms depend on the locally differing environment. Of course, in this description the quantum mechanics of atoms with potential barriers in the eV range is not explicitly considered, and the sole effect of phonons (lattice vibrations) is to provide a 'heat bath' which provides the excitation energy for the jump events. Because of a separation of time scales (the characteristic times between jumps are orders of magnitude larger than atomic vibration periods) this approach provides a very good approximation. The same kind of arguments hold true for growth phenomena involving macroscopic objects, such as DLA growth of colloidal particles; since their masses are orders of magnitude larger than atomic masses, the motion of colloidal particles in fluids is well described by classical, random Brownian motion. These systems are hence well suited to study by Monte Carlo simulations which use random numbers to realize random walks. The thermal properties of a fluid may be studied by considering 'blocks' of fluid as individual particles, but these blocks will be far larger than individual molecules. As an example, we consider 'micelle formation' in lattice models of microemulsions (water–oil–surfactant fluid mixtures) in which each surfactant molecule may be modeled by two 'dimers' on the lattice (two occupied nearest neighbor sites on the lattice). Different effective interactions allow one dimer to mimic the hydrophilic group and the other dimer the hydrophobic group of the surfactant molecule. This model then allows the study of the size and shape of the aggregates of surfactant molecules (the micelles) as well as the kinetic aspects of their formation. In reality, this process is quite slow so that a deterministic molecular dynamics simulation (i.e. numerical integration of Newton's second law) is not feasible. This example shows that part of the 'art' of simulation is the appropriate choice (or invention) of a suitable (coarse-grained) model. Large
collections of interacting classical particles are directly amenable to Monte Carlo simulation, and the behavior of interacting quantized particles is being studied either by transforming the system into a pseudo-classical model or by considering permutation properties directly. These considerations will be discussed in more detail in later chapters. Equilibrium properties of systems of interacting atoms have been extensively studied as have a wide range of models for simple and complex fluids, magnetic materials, metallic alloys, adsorbed surface layers, etc. More recently polymer models have been studied with increasing frequency; note that the simplest model of a flexible polymer is a random walk, an object which is well suited for Monte Carlo simulation. Furthermore, some of the most significant advances in understanding the theory of elementary particles have been made using Monte Carlo simulations of lattice gauge models. A topic which finds increasing applications is the solution of the Schrödinger equation for many interacting quantum particles by Monte Carlo methods.
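Since the simplest model of a flexible polymer is a random walk, a brief sketch may be useful here. The Python fragment below (our own illustration; all names are hypothetical) generates unrestricted random walks on the square lattice and confirms the textbook result that the mean squared end-to-end distance grows linearly with the number of steps N.

```python
import random

def end_to_end_sq(n_steps, rng=random):
    """Return |R|^2 for a single N-step random walk on the square lattice."""
    x = y = 0
    for _ in range(n_steps):
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x += dx
        y += dy
    return x * x + y * y

n_walks = 2000
for n in (10, 100, 1000):
    mean_r2 = sum(end_to_end_sq(n) for _ in range(n_walks)) / n_walks
    print(f"N = {n:4d}: <R^2> = {mean_r2:8.1f} (exact result: {n})")
```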

1.3 WHAT DIFFICULTIES WILL WE ENCOUNTER?

1.3.1 Limited computer time and memory

Because of limits on computer speed there are some problems which are inherently not suited to computer simulation at this time. A simulation which requires years of CPU time on whatever machine is available is simply impractical. Similarly a calculation which requires memory which far exceeds that which is available can be carried out only by using very sophisticated programming techniques which slow down running speeds and greatly increase the probability of errors. It is therefore important that the user first consider the requirements of both memory and CPU time before embarking on a project to ascertain whether or not there is a realistic possibility of obtaining the resources to simulate a problem properly. Of course, with the rapid advances being made by the computer industry, it may be necessary to wait only a few years for computer facilities to catch up to your needs. Sometimes the tractability of a problem may require the invention of a new, more efficient simulation algorithm. Of course, developing new strategies to overcome such difficulties constitutes an exciting field of research by itself.

1.3.2 Statistical and other errors

Assuming that the project can be done, there are still potential sources of error which must be considered. These difficulties will arise in many different situations with different algorithms so we wish to mention them briefly at this time without reference to any specific simulation approach. All computers operate with limited word length and hence limited precision for numerical values of any variable. Truncation and round-off errors may in some cases
lead to serious problems. In addition there are statistical errors which arise as an inherent feature of the simulation algorithm due to the finite number of members in the 'statistical sample' which is generated. These errors must be estimated and then a 'policy' decision must be made, i.e. should more CPU time be used to reduce the statistical errors or should the CPU time available be used to study the properties of the system under other conditions. Lastly there may be systematic errors. In this text we shall not concern ourselves with tracking down errors in computer programming – although the practitioner must make a special effort to eliminate any such errors – but with more fundamental problems. An algorithm may fail to treat a particular situation properly, e.g. due to the finite number of particles which are simulated, etc. These various sources of error will be discussed in more detail in later chapters.

1.4 WHAT STRATEGY SHOULD WE FOLLOW IN APPROACHING A PROBLEM?

Most new simulations face hidden pitfalls and difficulties which may not be apparent in early phases of the work. It is therefore often advisable to begin with a relatively simple program and use relatively small system sizes and modest running times. Sometimes there are special values of parameters for which the answers are already known (either from analytic solutions or from previous, high quality simulations) and these cases can be used to test a new simulation program. By proceeding in this manner one is able to uncover the parameter ranges of interest and any unexpected difficulties that are present. It is then possible to refine the program and increase running times.

Thus both CPU time and human time can be used most effectively. It makes little sense, of course, to spend a month rewriting a computer program to save only a few minutes of CPU time. If the outcome of such test runs shows that a new problem is not tractable with reasonable effort, it may be desirable to improve the situation by redefining the model or by redirecting the focus of the study. For example, in polymer physics the study of short chains (oligomers) by a given algorithm may still be feasible even though consideration of huge macromolecules may be impossible.

1.5 HOW DO SIMULATIONS RELATE TO THEORY AND EXPERIMENT?

In many cases theoretical treatments are available for models for which there is no perfect physical realization (at least at the present time). In this situation the only possible test for an approximate theoretical solution is to compare with 'data' generated from a computer simulation. As an example we wish to mention activity in growth models, such as diffusion limited aggregation, for which a very large body of simulation results already existed before
corresponding experiments were carried out. It is not an exaggeration to say that interest in this field was created by simulations. Even more dramatic examples are those of reactor meltdown or large scale nuclear war: although we want to know what the results of such events would be, we do not want to carry out experiments. There are also real physical systems which are sufficiently complex that they are not presently amenable to theoretical treatment. An example is the problem of understanding the specific behavior of a system with many competing interactions and which is undergoing a phase transition. A model Hamiltonian which is believed to contain all the essential features of the physics may be proposed, and its properties may then be determined from simulations. If the simulation (which now plays the role of theory) disagrees with experiment, then a new Hamiltonian must be sought. An important advantage of the simulations is that different physical effects which are simultaneously present in real systems may be isolated and, through separate consideration by simulation, may provide a much better understanding. Consider, for example, the phase behavior of polymer blends – materials which have ubiquitous applications in the plastics industry. The miscibility of different macromolecules is a challenging problem in statistical physics in which there is a subtle interplay between complicated enthalpic contributions (strong covalent bonds compete with weak van der Waals forces, and Coulombic interactions and hydrogen bonds may be present as well) and entropic effects (configurational entropy of flexible macromolecules, entropy of mixing, etc.). Real materials are very difficult to understand because of various asymmetries between the constituents of such mixtures (e.g. in shape and size, degree of polymerization, flexibility, etc.). Simulations of simplified models can 'switch off' or 'switch on' these effects and thus determine the particular consequences of each contributing factor. We wish to emphasize that the aim of simulations is not to provide better 'curve fitting' to experimental data than does analytic theory. The goal is to create an understanding of physical properties and processes which is as complete as possible, making use of the perfect control of 'experimental' conditions in the 'computer experiment' and of the possibility to examine every aspect of system configurations in detail. The desired result is then the elucidation of the physical mechanisms that are responsible for the observed phenomena. We therefore view the relationship between theory, experiment, and simulation to be similar to that of the vertices of a triangle, as shown in Fig. 1.1: each is distinct, but each is strongly connected to the other two.

Fig. 1.1 Schematic view of the relationship between theory, experiment, and computer simulation.

1.6 PERSPECTIVE

The Monte Carlo method has had a considerable history in physics. As far back as 1949 a review of the use of Monte Carlo simulations using 'modern computing machines' was presented by Metropolis and Ulam (1949). In addition to giving examples they also emphasized the advantages of the method. Of course, in the following decades the kinds of problems they discussed could be treated with far greater sophistication than was possible in the first half of the twentieth century, and many such studies will be described in succeeding chapters. Now, Monte Carlo simulations are reaching into areas that are far afield of physics. In succeeding chapters we will also provide the reader with a taste of what is possible with these techniques in other areas of investigation.

It is also quite telling that there are now several software products on the market that perform simple Monte Carlo simulations in concert with widely distributed spreadsheet software on PCs.

With the rapidly increasing growth of computer power which we are now seeing, coupled with the steady drop in price, it is clear that computer simulations will be able to rapidly increase in sophistication to allow more subtle comparisons to be made. Even now, the combination of new algorithms and new high performance computing platforms has allowed simulations to be performed for more than $10^6$ (in special cases exceeding $3 \times 10^{11}$ (Kadau et al., 2006)) particles (spins). As a consequence it is no longer possible to view the system and look for 'interesting' phenomena without the use of sophisticated visualization techniques. The sheer volume of data that we are capable of producing has also reached unmanageable proportions. In order to permit further advances in the interpretation of simulations, it is likely that the inclusion of intelligent 'agents' (in the computer science sense) for steering and visualization, along with new data structures, will be needed. Such topics are beyond the scope of the text, but the reader should be aware of the need to develop these new strategies.

REFERENCES

Kadau, K., Germann, T. C., and Lomdahl, P. S. (2006), Int. J. Mod. Phys. C 17, 1755.

Metropolis, N. and Ulam, S. (1949), J. Amer. Stat. Assoc. 44, 335.

2 Some necessary background

2.1 THERMODYNAMICS AND STATISTICAL MECHANICS: A QUICK REMINDER

2.1.1 Basic notions

In this chapter we shall review some of the basic features of thermodynamics and statistical mechanics which will be used later in this book when devising simulation methods and interpreting results. Many good books on this subject exist and we shall not attempt to present a complete treatment. This chapter is hence not intended to replace any textbook for this important field of physics but rather to ‘refresh’ the reader’s knowledge and to draw attention to notions in thermodynamics and statistical mechanics which will henceforth be assumed to be known throughout this book.

2.1.1.1 Partition function

Equilibrium statistical mechanics is based upon the idea of a partition function which contains all of the essential information about the system under consideration. The general form for the partition function for a classical system is

$$Z = \sum_{\text{all states}} e^{-\mathcal{H}/k_B T}, \qquad (2.1)$$

where $\mathcal{H}$ is the Hamiltonian for the system, T is the temperature, and $k_B$ is the Boltzmann constant. The sum in Eqn. (2.1) is over all possible states of the system and thus depends upon the size of the system and the number of degrees of freedom for each particle. For systems consisting of only a few interacting particles the partition function can be written down exactly with the consequence that the properties of the system can be calculated in closed form. In a few other cases the interactions between particles are so simple that evaluating the partition function is possible. Of course this means that we have only adopted classical Hamiltonians when describing the potential energy of a system rather than using a quantum mechanical operator.


Fig. 2.1 (left) Energy levels for the two-level system in Eqn. (2.2); (right) internal energy for a two-level system as a function of temperature.

Example

Let us consider a system with N particles each of which has only two states, e.g. a non-interacting Ising model in an external magnetic field H, and which has the Hamiltonian

$$\mathcal{H} = -H \sum_i \sigma_i, \qquad (2.2)$$

where $\sigma_i = \pm 1$. The partition function for this system is simply

$$Z = \left( e^{-H/k_B T} + e^{+H/k_B T} \right)^N, \qquad (2.3)$$

where for a single spin the sum in Eqn. (2.1) is only over two states. The energies of the states and the resultant temperature dependence of the internal energy appropriate to this situation are pictured in Fig. 2.1.

Problem 2.1 Work out the average magnetization per spin, using Eqn. (2.3), for a system of N non-interacting Ising spins in an external magnetic field.

[Solution: $M = -(1/N)\,\partial F/\partial H$, $F = -k_B T \ln Z \;\Rightarrow\; M = \tanh(H/k_B T)$]
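This solution is easily checked numerically. The following Python snippet (our own illustrative sketch) evaluates the magnetization directly from the two Boltzmann weights of a single spin — sufficient here, since the spins do not interact — and compares the result with $\tanh(H/k_B T)$.

```python
import math

def magnetization(x):
    """<sigma> for one Ising spin in a field, x = H/(k_B T); the weights
    of sigma = +1 and sigma = -1 are exp(+x) and exp(-x)."""
    w_up, w_down = math.exp(x), math.exp(-x)
    return (w_up - w_down) / (w_up + w_down)

for x in (0.1, 1.0, 3.0):
    print(f"H/kT = {x}: direct = {magnetization(x):.6f}, tanh = {math.tanh(x):.6f}")
```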

There are also a few examples where it is possible to extract exact results for very large systems of interacting particles, but in general the partition function cannot be evaluated exactly. Even enumerating the terms in the partition function on a computer can be a daunting task. Even if we have only 10 000 interacting particles, a very small fraction of Avogadro's number, with only two possible states per particle, the partition function would contain $2^{10\,000}$ terms! The probability of any particular state of the system is also determined by the partition function. Thus, the probability that the system is in state $\mu$ is given by

$$P_\mu = e^{-\mathcal{H}(\mu)/k_B T} / Z, \qquad (2.4)$$

where $\mathcal{H}(\mu)$ is the Hamiltonian when the system is in the $\mu$th state. As we shall show in succeeding chapters, the Monte Carlo method is an excellent technique for estimating probabilities, and we can take advantage of this property in evaluating the results.

2.1.1.2 Free energy, internal energy, and entropy

It is possible to make a direct connection between the partition function and thermodynamic quantities and we shall now briefly review these relationships.

The free energy of a system can be determined from the partition function (Callen, 1985) from

$$F = -k_B T \ln Z \qquad (2.5)$$

and all other thermodynamic quantities can be calculated by appropriate differentiation of Eqn. (2.5). This relation then provides the connection between statistical mechanics and thermodynamics. The internal energy of a system can be obtained from the free energy via

$$U = -T^2 \, \partial (F/T) / \partial T. \qquad (2.6)$$

By the use of a partial derivative we imply here that F will depend upon other variables as well, e.g. the magnetic field H in the above example, which are held constant in Eqn. (2.6). This also means that if the internal energy of a system can be measured, the free energy can be extracted by appropriate integration, assuming, of course, that the free energy is known at some reference temperature. We shall see that this fact is important for simulations which do not yield the free energy directly but produce instead values for the internal energy. Free energy differences may then be estimated by integration, i.e. from

$$\Delta(F/T) = \int d(1/T)\, U.$$

Using Eqn. (2.6) one can easily determine the temperature dependence of the internal energy for the non-interacting Ising model, and this is also shown in Fig. 2.1. Another important quantity, the entropy, measures the amount of disorder in the system. The entropy is defined in statistical mechanics by

$$S = -k_B \ln P, \qquad (2.7)$$

where P is the probability of occurrence of a (thermodynamic) microstate. The entropy can be determined from the free energy from

$$S = -(\partial F / \partial T)_{V,N}. \qquad (2.8)$$
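Relations (2.5) and (2.6) can be made concrete with a few lines of code. The Python sketch below (again our own illustration, with $k_B = 1$) computes $F/T = -\ln Z$ for a single spin in a field H, differentiates it numerically according to Eqn. (2.6), and recovers the exact internal energy $-H \tanh(H/T)$ per spin whose temperature dependence is shown in Fig. 2.1.

```python
import math

H = 1.0                        # field strength, k_B = 1

def f_over_T(T):
    """F/T = -ln Z per spin, with Z = 2 cosh(H/T) from Eqn. (2.3)."""
    return -math.log(2.0 * math.cosh(H / T))

def internal_energy(T, h=1e-5):
    """U = -T^2 d(F/T)/dT (Eqn. 2.6), via a central finite difference."""
    return -T * T * (f_over_T(T + h) - f_over_T(T - h)) / (2.0 * h)

for T in (0.5, 1.0, 2.0, 5.0):
    exact = -H * math.tanh(H / T)
    print(f"T = {T}: U = {internal_energy(T):+.6f}, exact = {exact:+.6f}")
```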


Fig. 2.2 Schematic view of different paths between two different points in thermodynamic p−T space.

2.1.1.3 Thermodynamic potentials and corresponding ensembles

The internal energy is expressed as a function of the extensive variables, S, V, N, etc. There are situations when it is appropriate to replace some of these variables by their conjugate intensive variables, and for this purpose additional thermodynamic potentials can be defined by suitable Legendre transforms of the internal energy; in terms of liquid–gas variables such relations are given by:

$$F = U - TS, \qquad (2.9a)$$
$$H = U + pV, \qquad (2.9b)$$
$$G = U - TS + pV, \qquad (2.9c)$$

where F is the Helmholtz free energy, H is the enthalpy, and G is the Gibbs free energy. Similar expressions can be derived using other thermodynamic variables, e.g. magnetic variables. The free energy is important since it is a minimum in equilibrium when T and V are held constant, while G is a minimum when T and p are held fixed. Moreover, the difference in free energy between any two states does not depend on the path between the states. Thus, in Fig. 2.2 we consider two points in the p−T plane. Two different paths which connect points 1 and 2 are shown; the difference in free energy between these two points is identical for both paths, i.e.

$$F_2 - F_1 = \int_{\text{path I}} dF = \int_{\text{path II}} dF. \qquad (2.10)$$

The multidimensional space in which each point specifies the complete microstate (specified by the degrees of freedom of all the particles) of a system is termed 'phase space'. Averages over phase space may be constructed by considering a large number of identical systems which are held at the same fixed conditions. These are called 'ensembles'. Different ensembles are relevant for different constraints. If the temperature is held fixed, the set of systems is said to belong to the 'canonical ensemble' and there will be some distribution of energies among the different systems. If instead the energy is fixed, the ensemble is termed the 'microcanonical' ensemble. In the first two cases the
number of particles is held constant; if the number of particles is allowed to fluctuate the ensemble is the ‘grand canonical’ ensemble.

Systems are often held at fixed values of intensive variables, such as temperature, pressure, etc. The conjugate extensive variables, energy, volume, etc. will fluctuate with time; indeed these fluctuations will actually be observed during Monte Carlo simulations.

It is important to recall that Legendre transformations from one thermodynamic potential to another one (e.g. from the microcanonical ensemble, where U(S, V) is a function of its 'natural variables' S and V, to the canonical ensemble where F(T, V) is considered as a function of the 'natural variables' T and V) are only fully equivalent to each other in the thermodynamic limit, $N \to \infty$. For finite N it is still true that, in thermal equilibrium, F(T, V, N) is a minimum at fixed T and V, and U(S, V, N) is minimized at fixed S and V, but Eqn. (2.9) no longer holds for finite N since finite size effects in different ensembles are no longer equivalent. This is particularly important when considering phase transitions (see Section 2.1.2). However, on the level of partition functions and probability distributions of states there are exact relations between different ensembles. For example, the partition function $Y(\mu, V, T)$ of the grand-canonical ensemble (where the chemical potential $\mu$ is a 'natural variable' rather than N) is related to the canonical partition function $Z(N, V, T)$ by

$$Y(\mu, V, T) = \sum_{N=0}^{\infty} \exp(\mu N / k_B T)\, Z(N, V, T), \qquad (2.11a)$$

and the probability to find the system in a particular state $\vec{X}$ with N particles is ($\vec{X}$ stands for the degrees of freedom of these particles)

$$P_{\mu VT}(\vec{X}, N) = (1/Y) \exp(\mu N / k_B T) \exp[-U(\vec{X}) / k_B T]. \qquad (2.11b)$$

In the canonical ensemble, where N does not fluctuate, we have instead

$$P_{NVT}(\vec{X}) = (1/Z) \exp[-U(\vec{X}) / k_B T]. \qquad (2.11c)$$

In the limit of macroscopic systems ($V \to \infty$) the distribution $P_{\mu VT}(\vec{X}, N)$ essentially becomes a $\delta$-function in N, sharply peaked at $N = \langle N \rangle = \sum_N P_{\mu VT}(\vec{X}, N)\, N$. Then, averages in the canonical and grand-canonical ensembles become strictly equivalent, and the potentials are related via Legendre transformations ($\Omega = -k_B T \ln Y = F - \mu \langle N \rangle$, where $\Omega$ denotes the grand potential). For 'small' systems, such as those studied in Monte Carlo simulations, use of Legendre transformations is only useful if finite size effects are negligible.

In the context of the study of phase transitions and phase coexistence, it is sometimes advantageous to complement studies of a system in the canonical (NVT) ensemble that has been emphasized so far in this book, by studying the system in the microcanonical (NVE) ensemble. If we include the kinetic energy $K(\{\vec{p}_i\})$ of the particles in the discussion, the Hamiltonian becomes $\mathcal{H}(\{\vec{x}_i\}, \{\vec{p}_i\}, V) = K(\{\vec{p}_i\}) + U(\{\vec{x}_i\}, V)$. Including the kinetic energy is necessary, of course, when discussing molecular dynamics simulation methods, since solving Newton's equation of motion conserves the total energy $E = \mathcal{H}(\{\vec{x}_i\}, \{\vec{p}_i\}, V)$. Here we denote the coordinates of the particles in the d-dimensional volume by $\{\vec{x}_i,\ i = 1, \ldots, N\}$, and $\{\vec{p}_i\}$ are the momenta.

The basic thermodynamic potential then is the entropy

$$S = S(E, V, N) = k_B \ln Z_{\mathrm{MC}} = k_B \ln \Omega(E, V, N), \qquad (2.12)$$

where the microcanonical partition function $Z_{\mathrm{MC}}$ is nothing but the phase space volume $\Omega(E, V, N)$ defined by (Becker, 1967; Dunkel and Hilbert, 2006):

$$\Omega(E, V, N) = \frac{1}{N!\, h^{dN}} \int d\vec{x} \int d\vec{p}\; \Theta\left[E - \mathcal{H}(\{\vec{x}_i\}, \{\vec{p}_i\}, V)\right], \qquad (2.13)$$

where h is Planck's constant, $\Theta(z)$ is Heaviside's step function $\{\Theta(z < 0) = 0,\ \Theta(z \geq 0) = 1\}$, and the integrals $\int d\vec{x}$, $\int d\vec{p}$ stand symbolically for N d-dimensional integrals over the components of coordinates and momenta of the particles. Note that due to the high dimensionality of phase space for large N almost all the weight for this integration comes from the surface of the integration volume, and hence often an expression, equivalent for $N \to \infty$,

$$Z_{\mathrm{MC}} = \varepsilon_0 \frac{\partial \Omega}{\partial E} = \frac{\varepsilon_0}{N!\, h^{dN}} \int d\vec{x} \int d\vec{p}\; \delta\left[E - \mathcal{H}(\{\vec{x}_i\}, \{\vec{p}_i\}, V)\right], \qquad (2.14)$$

is quoted in textbooks (where $\varepsilon_0$ is the thickness of a thin energy shell around the phase space surface defined by $\mathcal{H}(\{\vec{x}_i\}, \{\vec{p}_i\}, V) = E$). However, Eqn. (2.13) is preferable since it also holds for small N, and the unspecified parameter $\varepsilon_0$ is not needed. Note that $\partial \Omega / \partial E$ is proportional to the microcanonical energy density of states.

If S(E, V, N) is given, other thermodynamic variables are found as usual from derivatives

$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{N,V}, \qquad \frac{p}{T} = \left(\frac{\partial S}{\partial V}\right)_{N,E}. \qquad (2.15)$$

In the thermodynamic limit, one can show that S(E, V, N) is a convex function of its variables, e.g. the energy E. This fact is of particular interest if we consider a system undergoing a first order phase transition between two phases I and II.

As described in Section 2.1 in the canonical ensemble we then find that both the total internal energy U and the entropy S undergo jumps at the transition temperature $T_t$, from $U_I$ to $U_{II}$ and from $S_I$ to $S_{II}$, respectively. (Remember that for classical systems, momenta cancel out from the canonical partition function, so kinetic energy does not need to be considered.) Of course, for $N \to \infty$ all statistical ensembles are equivalent, related by Legendre transformations, and hence in the microcanonical ensemble the first order transition shows up as a linear variation of S(U) from $S_I(U_I)$ to $S_{II}(U_{II})$, and $1/T$ (Eqn. (2.15)) is then simply a constant in between $U_I$ and $U_{II}$. The physical meaning of this linear variation is that the system passes through a two-phase coexistence region for which the simple lever rule holds.

In a finite system, the linear variation in S(U) typically is replaced by a concave intruder, and the constant piece in the $1/T$ vs. U curve is replaced by a loop. However, these observations should not be interpreted in terms of
concepts such as Landau potentials, van der Waals loops, and the like; rather for large but finite N these phenomena can be attributed to interfacial effects on phase coexistence (although in the thermodynamic potential, the entropy in the present case, these effects are small, down by a surface to volume ratio, they often can be rather easily recorded). In any case, these 'intruders' and 'loops' are useful indicators for the presence of a first order phase transition and can be used to characterize it precisely.

Problem 2.2 Consider a two-level system composed of N non-interacting particles where the groundstate of each particle is doubly degenerate and separated from the upper level by an energy $\Delta E$. What is the partition function for this system? What is the entropy as a function of temperature?

2.1.1.4 Fluctuations

Equations (2.4) and (2.5) imply that the probability that a given ‘microstate’ μ occurs is Pμ= exp{[F − H(μ)]/ kBT} = exp{−S/kB}. Since the number of different microstates is so huge, we are not only interested in probabilities of individual microstates but also in probabilities of macroscopic variables, such as the internal energy U. We first form the moments (where β  kBT; the average energy is denoted U and U is a fluctuating quantity),

U(β) = H(μ) ≡

μ

PμH(μ) =

μ

H(μ)e−βH(μ)



μ

e−βH(μ), (2.16)

H2 =



μ

H2e−βH(μ)



μ

e−βH(μ),

and note the relation −(∂U(β)/∂β)_V = ⟨H²⟩ − ⟨H⟩². Since (∂U/∂T)_V = C_V, the specific heat thus yields a fluctuation relation

k_B T² C_V = ⟨H²⟩ − ⟨H⟩² = ⟨(ΔU)²⟩_{NVT} ,    ΔU ≡ H − ⟨H⟩.    (2.17)

Now for a macroscopic system (N ≫ 1) away from a critical point, U ∝ N and the energy and specific heat are extensive quantities. However, since both ⟨H²⟩ and ⟨H⟩² are clearly proportional to N², we see that the relative fluctuation of the energy is very small, of order 1/N. While in real experiments (where often N ≈ 10²²) such fluctuations may be too small to be detectable, in simulations these thermal fluctuations are readily observable, and relations such as Eqn. (2.17) are useful for the actual estimation of the specific heat from energy fluctuations. Similar fluctuation relations exist for many other quantities, e.g.
the isothermal susceptibility χ = (∂M/∂H)_T is related to fluctuations of the magnetization M = Σ_i σ_i, as

k_B T χ = ⟨M²⟩ − ⟨M⟩² = Σ_{ij} (⟨σ_i σ_j⟩ − ⟨σ_i⟩⟨σ_j⟩).    (2.18)

Writing the Hamiltonian of a system in the presence of a magnetic field H as H = H_0 − HM, we can easily derive Eqn. (2.18) from ⟨M⟩ = Σ_μ M exp[−βH(μ)] / Σ_μ exp[−βH(μ)] in a similar fashion as above. The relative fluctuation of the magnetization is also small, of order 1/N.
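Fluctuation relations such as Eqns. (2.17) and (2.18) translate directly into simple estimators. The following is a minimal sketch (illustrative only; the synthetic samples merely stand in for the energies and magnetizations recorded during an actual Monte Carlo run, and units with k_B = 1 are assumed):

    import numpy as np

    def specific_heat(energies, T, kB=1.0):
        """Estimate C_V from energy fluctuations, Eqn. (2.17):
        kB T^2 C_V = <H^2> - <H>^2."""
        return energies.var() / (kB * T**2)

    def susceptibility(magnetizations, T, kB=1.0):
        """Estimate chi from magnetization fluctuations, Eqn. (2.18):
        kB T chi = <M^2> - <M>^2."""
        return magnetizations.var() / (kB * T)

    # Synthetic stand-ins for Monte Carlo time series at temperature T:
    rng = np.random.default_rng(0)
    E_series = rng.normal(loc=-1000.0, scale=10.0, size=10_000)
    M_series = rng.normal(loc=500.0, scale=20.0, size=10_000)
    T = 2.0
    print(specific_heat(E_series, T), susceptibility(M_series, T))

In practice one divides by the number of spins to obtain the specific heat and susceptibility per spin, and one must remember that successive Monte Carlo configurations are correlated in time, so the statistical accuracy of such variance estimates requires a sufficiently long run.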

It is of interest to consider not only the lowest order moments of quantities such as the energy or magnetization, but also the full probability distribution P(U) or P(M), respectively. For a system in a pure phase the probability is given by a simple Gaussian distribution,

P(U) = (2π k_B C_V T²)^{−1/2} exp[−(ΔU)²/2k_B T²C_V] ,    (2.19)

while the distribution of the magnetization for the paramagnetic system becomes

P(M) = (2π k_B T χ)^{−1/2} exp[−(M − ⟨M⟩)²/2k_B T χ] .    (2.20)

It is straightforward to verify that Eqns. (2.19) and (2.20) are fully consistent with the fluctuation relations (2.17) and (2.18). Since Gaussian distributions are completely specified by the first two moments, higher moments ⟨H^k⟩, ⟨M^k⟩, which could be obtained analogously to Eqn. (2.16), are not required. Note that on the scale of U/N and M/N the distributions P(U), P(M) are extremely narrow, and ultimately tend to δ-functions in the thermodynamic limit. Thus these fluctuations are usually neglected altogether when dealing with relations between thermodynamic variables.

An important consideration is that the thermodynamic state variables do not depend on the ensemble chosen (in pure phases) while the fluctuations do.

Therefore, one obtains the same average internal energy U(N, V, T) in the canonical ensemble as in the NpT ensemble while the specific heats and the energy fluctuations differ (see Landau and Lifshitz, 1980):

⟨(ΔU)²⟩_{NpT} = k_B T² C_V − [T(∂p/∂T)_V − p]² k_B T (∂V/∂p)_T .    (2.21)
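As a simple check of Eqn. (2.21), consider the classical ideal gas: since p = Nk_B T/V, one has T(∂p/∂T)_V − p = 0, so the second term vanishes and ⟨(ΔU)²⟩_{NpT} = k_B T²C_V = ⟨(ΔU)²⟩_{NVT}, as it must be, because the internal energy of an ideal gas does not depend on the volume and thus cannot pick up a contribution from volume fluctuations.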

It is also interesting to consider fluctuations of several thermodynamic variables together. Then one can ask whether these quantities are correlated, or whether the fluctuations of these quantities are independent of each other.

Consider the NVT ensemble, where the entropy S and the pressure p (an intensive variable) are the (fluctuating) conjugate variables {p = −(∂F/∂V)_{N,T}, S = −(∂F/∂T)_{N,V}}. What are the fluctuations of S and p, and are they correlated? The answer to these questions is given by

⟨(ΔS)²⟩_{NVT} = k_B C_p ,    (2.22a)
⟨(Δp)²⟩_{NVT} = −k_B T (∂p/∂V)_S ,    (2.22b)
⟨ΔS Δp⟩_{NVT} = 0 .    (2.22c)

One can also see here an illustration of the general principle that fluctuations of extensive variables (like S) scale with the volume, while fluctuations of intensive variables (like p) scale with the inverse volume.
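As a simple example, consider the classical monatomic ideal gas with pV = Nk_B T and C_p = (5/2)Nk_B. Eqn. (2.22a) then gives ⟨(ΔS)²⟩_{NVT} = (5/2)Nk_B², growing linearly with system size, while along an adiabat (pV^γ = const., γ = 5/3) one has (∂p/∂V)_S = −γp/V, so that Eqn. (2.22b) yields ⟨(Δp)²⟩_{NVT} = γk_B T p/V = γp²/N, i.e. relative pressure fluctuations ⟨(Δp)²⟩/p² = γ/N that indeed vanish with the inverse system size.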


2.1.2 Phase transitions

The emphasis in the standard texts on statistical mechanics clearly is on those problems that can be dealt with analytically, e.g. ideal classical and quantum gases, dilute solutions, etc. The main utility of Monte Carlo methods is for problems which evade exact solution, such as phase transitions, calculations of phase diagrams, etc. For this reason we shall emphasize this topic here. The study of phase transitions has long been a topic of great interest in a variety of related scientific disciplines and plays a central role in research in many fields of physics. Although simple approaches, such as mean field theory, provide an intuitive picture of phase transitions, they generally fail to provide a quantitative framework for explaining the wide variety of phenomena which occur under a range of different conditions, and they often do not really capture the conceptual features of the important processes which occur at a phase transition. The last half century has seen the development of a mature framework for the understanding and classification of phase transitions, using a combination of (rare) exact solutions as well as theoretical and numerical approaches.

We draw the reader’s attention to the existence of zero temperature quantum phase transitions (Sachdev, 1999). These are driven by control parameters that modify the quantum fluctuations and can be studied using quantum Monte Carlo methods that will be described in Chapter 8. The discussion in this chapter, however, will be limited to classical statistical mechanics.

2.1.2.1 Order parameter

The distinguishing feature of most phase transitions is the appearance of a non- zero value of an ‘order parameter’, i.e. of some property of the system which is non-zero in the ordered phase but identically zero in the disordered phase. The order parameter is defined differently in different kinds of physical systems.

In a ferromagnet it is simply the spontaneous magnetization. In a liquid–gas system it will be the difference in the density between the liquid and gas phases at the transition; for liquid crystals it is the degree of orientational order. An order parameter may be a scalar quantity or may be a multicomponent (or even complex) quantity. Depending on the physical system, an order parameter may be measured by a variety of experimental methods: in antiferromagnets, neutron scattering allows the estimation of the order parameter from the integrated intensity of superstructure Bragg peaks; an oscillating magnetometer directly measures the spontaneous magnetization of a ferromagnet; and NMR is suitable for the measurement of local orientational order.
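In a simulation the order parameter is accumulated directly from the instantaneous configurations. The following is a minimal sketch (illustrative only) for an Ising model stored as an array of ±1 spins, computing the ferromagnetic order parameter and, for a bipartite square lattice, its antiferromagnetic (staggered) counterpart:

    import numpy as np

    def magnetization_per_spin(spins):
        """Ferromagnetic order parameter: average of the +/-1 spins."""
        return spins.mean()

    def staggered_magnetization(spins):
        """Antiferromagnetic order parameter on a square lattice:
        magnetization weighted by the (-1)**(i+j) checkerboard sign."""
        rows, cols = spins.shape
        signs = (-1.0) ** np.add.outer(np.arange(rows), np.arange(cols))
        return (signs * spins).mean()

    # Illustrative configuration: a perfect Neel (antiferromagnetic) state.
    L = 8
    spins = (-1) ** np.add.outer(np.arange(L), np.arange(L))
    print(magnetization_per_spin(spins))   # 0.0: no ferromagnetic order
    print(staggered_magnetization(spins))  # 1.0: perfect staggered order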

2.1.2.2 Correlation function

Even if a system is not ordered, there will, in general, be microscopic regions in the material in which the characteristics of the material are correlated.
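These correlations are quantified by the two-point correlation function, e.g. Γ(r) = ⟨σ_i σ_{i+r}⟩ − ⟨σ⟩² for a spin system. A minimal sketch (illustrative only) that measures Γ(r) along one axis of a periodic square lattice, averaging over all sites:

    import numpy as np

    def correlation_function(spins, rmax):
        """Gamma(r) = <s_i s_{i+r}> - <s>^2 along axis 0 of a periodic
        lattice, averaged over all lattice sites."""
        mean = spins.mean()
        return np.array([(spins * np.roll(spins, r, axis=0)).mean() - mean**2
                         for r in range(rmax + 1)])

    # Uncorrelated random spins: Gamma(0) ~ 1, Gamma(r >= 1) ~ 0.
    rng = np.random.default_rng(1)
    spins = rng.choice([-1, 1], size=(64, 64))
    print(correlation_function(spins, 5))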
