An Introduction to Computational Physics

Author:
Tao Pang


An Introduction to Computational Physics Numerical simulation is now an integrated part of science and technology. Now in its second edition, this comprehensive textbook provides an introduction to the basic methods of computational physics, as well as an overview of recent progress in several areas of scientific computing. The author presents many step-by-step examples, including program listings in Java™, of practical numerical methods from modern physics and areas in which computational physics has made significant progress in the last decade. The first half of the book deals with basic computational tools and routines, covering approximation and optimization of a function, differential equations, spectral analysis, and matrix operations. Important concepts are illustrated by relevant examples at each stage. The author also discusses more advanced topics, such as molecular dynamics, modeling continuous systems, Monte Carlo methods, the genetic algorithm and programming, and numerical renormalization. This new edition has been thoroughly revised and includes many more examples and exercises. It can be used as a textbook for either undergraduate or first-year graduate courses on computational physics or scientific computation. It will also be a useful reference for anyone involved in computational research.

Tao Pang is Professor of Physics at the University of Nevada, Las Vegas. Following his higher education at Fudan University, one of the most prestigious institutions in China, he obtained his Ph.D. in condensed matter theory from the University of Minnesota in 1989. He then spent two years as a Miller Research Fellow at the University of California, Berkeley, before joining the physics faculty at the University of Nevada, Las Vegas in the fall of 1991. He has been Professor of Physics at UNLV since 2002. His main areas of research include condensed matter theory and computational physics.

An Introduction to Computational Physics
Second Edition
Tao Pang
University of Nevada, Las Vegas

cambridge university press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge cb2 2ru, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521825696

© T. Pang 2006

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2006

isbn-13  978-0-511-14046-4  eBook (NetLibrary)
isbn-10  0-511-14046-0  eBook (NetLibrary)
isbn-13  978-0-521-82569-6  hardback
isbn-10  0-521-82569-5  hardback
isbn-13  978-0-521-53276-1
isbn-10  0-521-53276-0

Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To Yunhua, for enduring love

Contents

Preface to first edition   xi
Preface   xiii
Acknowledgments   xv

1 Introduction   1
  1.1 Computation and science   1
  1.2 The emergence of modern computers   4
  1.3 Computer algorithms and languages   7
  Exercises   14

2 Approximation of a function   16
  2.1 Interpolation   16
  2.2 Least-squares approximation   24
  2.3 The Millikan experiment   27
  2.4 Spline approximation   30
  2.5 Random-number generators   37
  Exercises   44

3 Numerical calculus   49
  3.1 Numerical differentiation   49
  3.2 Numerical integration   56
  3.3 Roots of an equation   62
  3.4 Extremes of a function   66
  3.5 Classical scattering   70
  Exercises   76

4 Ordinary differential equations   80
  4.1 Initial-value problems   81
  4.2 The Euler and Picard methods   81
  4.3 Predictor–corrector methods   83
  4.4 The Runge–Kutta method   88
  4.5 Chaotic dynamics of a driven pendulum   90
  4.6 Boundary-value and eigenvalue problems   94
  4.7 The shooting method   96
  4.8 Linear equations and the Sturm–Liouville problem   99
  4.9 The one-dimensional Schrödinger equation   105
  Exercises   115

5 Numerical methods for matrices   119
  5.1 Matrices in physics   119
  5.2 Basic matrix operations   123
  5.3 Linear equation systems   125
  5.4 Zeros and extremes of multivariable functions   133
  5.5 Eigenvalue problems   138
  5.6 The Faddeev–Leverrier method   147
  5.7 Complex zeros of a polynomial   149
  5.8 Electronic structures of atoms   153
  5.9 The Lanczos algorithm and the many-body problem   156
  5.10 Random matrices   158
  Exercises   160

6 Spectral analysis   164
  6.1 Fourier analysis and orthogonal functions   165
  6.2 Discrete Fourier transform   166
  6.3 Fast Fourier transform   169
  6.4 Power spectrum of a driven pendulum   173
  6.5 Fourier transform in higher dimensions   174
  6.6 Wavelet analysis   175
  6.7 Discrete wavelet transform   180
  6.8 Special functions   187
  6.9 Gaussian quadratures   191
  Exercises   193

7 Partial differential equations   197
  7.1 Partial differential equations in physics   197
  7.2 Separation of variables   198
  7.3 Discretization of the equation   204
  7.4 The matrix method for difference equations   206
  7.5 The relaxation method   209
  7.6 Groundwater dynamics   213
  7.7 Initial-value problems   216
  7.8 Temperature field of a nuclear waste rod   219
  Exercises   222

8 Molecular dynamics simulations   226
  8.1 General behavior of a classical system   226
  8.2 Basic methods for many-body systems   228
  8.3 The Verlet algorithm   232
  8.4 Structure of atomic clusters   236
  8.5 The Gear predictor–corrector method   239
  8.6 Constant pressure, temperature, and bond length   241
  8.7 Structure and dynamics of real materials   246
  8.8 Ab initio molecular dynamics   250
  Exercises   254

9 Modeling continuous systems   256
  9.1 Hydrodynamic equations   256
  9.2 The basic finite element method   258
  9.3 The Ritz variational method   262
  9.4 Higher-dimensional systems   266
  9.5 The finite element method for nonlinear equations   269
  9.6 The particle-in-cell method   271
  9.7 Hydrodynamics and magnetohydrodynamics   276
  9.8 The lattice Boltzmann method   279
  Exercises   282

10 Monte Carlo simulations   285
  10.1 Sampling and integration   285
  10.2 The Metropolis algorithm   287
  10.3 Applications in statistical physics   292
  10.4 Critical slowing down and block algorithms   297
  10.5 Variational quantum Monte Carlo simulations   299
  10.6 Green's function Monte Carlo simulations   303
  10.7 Two-dimensional electron gas   307
  10.8 Path-integral Monte Carlo simulations   313
  10.9 Quantum lattice models   315
  Exercises   320

11 Genetic algorithm and programming   323
  11.1 Basic elements of a genetic algorithm   324
  11.2 The Thomson problem   332
  11.3 Continuous genetic algorithm   335
  11.4 Other applications   338
  11.5 Genetic programming   342
  Exercises   345

12 Numerical renormalization   347
  12.1 The scaling concept   347
  12.2 Renormalization transform   350
  12.3 Critical phenomena: the Ising model   352
  12.4 Renormalization with Monte Carlo simulation   355
  12.5 Crossover: the Kondo problem   357
  12.6 Quantum lattice renormalization   360
  12.7 Density matrix renormalization   364
  Exercises   367

References   369
Index   381

Preface to first edition

The beauty of Nature is in its detail. If we are to understand different layers of scientific phenomena, tedious computations are inevitable. In the last half-century, computational approaches to many problems in science and engineering have clearly evolved into a new branch of science, computational science. With the increasing computing power of modern computers and the availability of new numerical techniques, scientists in different disciplines have started to unfold the mysteries of the so-called grand challenges, which are identified as scientific problems that will remain significant for years to come and may require teraflop computing power. These problems include, but are not limited to, global environmental modeling, virus vaccine design, and new electronic materials simulation.

Computational physics, in my view, is the foundation of computational science. It deals with basic computational problems in physics, which are closely related to the equations and computational problems in other scientific and engineering fields. For example, numerical schemes for Newton's equation can be implemented in the study of the dynamics of large molecules in chemistry and biology; algorithms for solving the Schrödinger equation are necessary in the study of electronic structures in materials science; the techniques used to solve the diffusion equation can be applied to air pollution control problems; and numerical simulations of hydrodynamic equations are needed in weather prediction and oceanic dynamics.

Important as computational physics is, it has not yet become a standard course in the curricula of many universities. But clearly its importance will increase with the further development of computational science. Almost every college or university now has some networked workstations available to students. Probably many of them will have some closely linked parallel or distributed computing systems in the near future.
Students from many disciplines within science and engineering now demand the basic knowledge of scientific computing, which will certainly be important in their future careers. This book is written to fulfill this need.

Some of the materials in this book come from my lecture notes for a computational physics course I have been teaching at the University of Nevada, Las Vegas. I usually have a combination of graduate and undergraduate students from physics, engineering, and other majors. All of them have some access to the workstations or supercomputers on campus. The purpose of my lectures is to provide
the students with some basic materials and necessary guidance so they can work out the assigned problems and selected projects on the computers available to them and in a programming language of their choice.

This book is made up of two parts. The first part (Chapter 1 through Chapter 6) deals with the basics of computational physics. Enough detail is provided so that a well-prepared upper division undergraduate student in science or engineering will have no difficulty in following the material. The second part of the book (Chapter 7 through Chapter 12) introduces some currently used simulation techniques and some of the newest developments in the field. The choice of subjects in the second part is based on my judgment of the importance of the subjects in the future. This part is specifically written for students or beginning researchers who want to know the new directions in computational physics or plan to enter the research areas of scientific computing. Many references are given there to help in further studies.

In order to make the course easy to digest and also to show some practical aspects of the materials introduced in the text, I have selected quite a few exercises. The exercises have different levels of difficulty and can be grouped into three categories. Those in the first category are simple, short problems; a student with little preparation can still work them out with some effort at filling in the gaps they have in both physics and numerical analysis. The exercises in the second category are more involved and aimed at well-prepared students. Those in the third category are mostly selected from current research topics, which will certainly benefit those students who are going to do research in computational science.

Programs for the examples discussed in the text are all written in standard Fortran 77, with a few exceptions that are available on almost all Fortran compilers. Some more advanced programming languages for data parallel or distributed computing are also discussed in Chapter 12. I have tried to keep all programs in the book structured and transparent, and I hope that anyone with knowledge of any programming language will be able to understand the content without extra effort. As a convention, all statements are written in upper case and all comments are given in lower case. From my experience, this is the best way of presenting a clear and concise Fortran program. Many sample programs in the text are explained in sufficient detail with commentary statements. I find that the most efficient approach to learning computational physics is to study well-prepared programs. Related programs used in the book can be accessed via the World Wide Web at the URL http://www.physics.unlv.edu/~pang/cp.html. Corresponding programs in C and Fortran 90 and other related materials will also be available at this site in the future.

This book can be used as a textbook for a computational physics course. If it is a one-semester course, my recommendation is to select materials from Chapters 1 through 7 and Chapter 11. Some sections, such as 4.6 through 4.8, 5.6, and 7.8, are good for graduate students or beginning researchers but may pose some challenges to most undergraduate students.

Tao Pang
Las Vegas, Nevada

Preface

Since the publication of the first edition of the book, I have received numerous comments and suggestions on the book from all over the world and from a far wider range of readers than anticipated. This is a firm testament to what I claimed in the Preface to the first edition, that computational physics is truly the foundation of computational science. The Internet, which connects all computerized parts of the world, has made it possible to communicate with students who are striving to learn modern science in distant places that I have never even heard of. The main drive for having a second edition of the book is to provide a new generation of science and engineering students with an up-to-date presentation of the subject.

In the last decade, we have witnessed steady progress in computational studies of scientific problems. Many complex issues are now analyzed and solved on computers. New paradigms of global-scale computing have emerged, such as the Grid and web computing. Computers are faster and come with more functions and capacity. There has never been a better time to study computational physics.

For this new edition, I have revised each chapter in the book thoroughly, incorporating many suggestions made by the readers of the first edition. There are more examples given, with more sample programs and figures, to make the explanation of the material easier to follow. More exercises are given to help students digest the material. Each sample program has been completely rewritten to reflect what I have learned in the last few years of teaching the subject. A lot of new material has been added to this edition, mainly in the areas in which computational physics has made significant progress and a difference in the last decade, including one chapter on genetic algorithm and programming. Some material in the first edition has been removed, mainly because there are more detailed books on those subjects available or they appear to be out of date. The website for this new edition is at http://www.physics.unlv.edu/~pang/cp2.html.

References are cited for the sole purpose of providing more information for further study on the relevant subjects. Therefore they may not be the most authoritative or defining work. Most of them are given because of my familiarity with, or my easy access to, the cited materials. I have also tried to limit the number of references so the reader will not find them overwhelming. When I have had to choose, I have always picked the ones that I think will benefit the readers most.


Java is adopted as the instructional programming language in the book. The source codes are made available at the website. Java, an object-oriented and interpreted language, is the newest programming language that has made a major impact in the last few years. The strength of Java is in its ability to work with web browsers, its comprehensive API (application programming interface), and its built-in security and network support. Both the source code and bytecode can run on any computer that has Java with exactly the same result. There are many advantages in Java, and its speed in scientific programming has steadily increased over the last few years. At the moment, a carefully written Java program, combined with static analysis, just-in-time compiling, and instruction-level optimization, can deliver nearly the same raw speed as C or Fortran.

More scientists, especially those who are still in colleges or graduate schools, are expected to use Java as their primary programming language. This is why Java is used as the instructional language in this edition. Currently, many new applications in science and engineering are being developed in Java worldwide to facilitate collaboration and to reduce programming time. This book will do its part in teaching students how to build their own programs appropriate for scientific computing. We do not know what will be the dominant programming language for scientific computing in the future, but we do know that scientific computing will continue playing a major role in fundamental research, knowledge development, and emerging technology.

Acknowledgments

Most of the material presented in this book has been strongly inﬂuenced by my research work in the last 20 years, and I am extremely grateful to the University of Minnesota, the Miller Institute for Basic Research in Science at the University of California, Berkeley, the National Science Foundation, the Department of Energy, and the W. M. Keck Foundation for their generous support of my research work. Numerous colleagues from all over the world have made contributions to this edition while using the ﬁrst edition of the book. My deepest gratitude goes to those who have communicated with me over the years regarding the topics covered in the book, especially those inspired young scholars who have constantly reminded me that the effort of writing this book is worthwhile, and the students who have taken the course from me.


Chapter 1

Introduction

Computing has become a necessary means of scientiﬁc study. Even in ancient times, the quantiﬁcation of gained knowledge played an essential role in the further development of mankind. In this chapter, we will discuss the role of computation in advancing scientiﬁc knowledge and outline the current status of computational science. We will only provide a quick tour of the subject here. A more detailed discussion on the development of computational science and computers can be found in Moreau (1984) and Nash (1990). Progress in parallel computing and global computing is elucidated in Koniges (2000), Foster and Kesselman (2003), and Abbas (2004).

1.1 Computation and science

Modern societies are not the only ones to rely on computation. Ancient societies also had to deal with quantifying their knowledge and events. It is interesting to see how the ancient societies developed their knowledge of numbers and calculations with different means and tools. There is evidence that carved bones and marked rocks were among the early tools used for recording numbers and values and for performing simple estimates more than 20 000 years ago.

The most commonly used number system today is the decimal system, which was in existence in India at least 1500 years ago. It has a radix (base) of 10. A number is represented by a string of figures, with each from the ten available figures (0–9) occupying a different decimal level. The way a number is represented in the decimal system is not unique. All other number systems have similar structures, even though their radices are quite different; for example, the binary system used on all digital computers has a radix of 2. During almost the same era in which the Indians were using the decimal system, another number system using dots (each worth one) and bars (each worth five) on a base of 20 was invented by the Mayans. A symbol that looks like a closed eye was used for zero. It is still under debate whether the Mayans used a base of 18 instead of 20 after the first level of the hierarchy in their number formation. They applied these dots and bars to record multiplication tables. With the availability of those tables, the


Fig. 1.1 The Mayan number system: (a) examples of using dots and bars to represent numbers; (b) an example of recording multiplication.

Fig. 1.2 A circle inscribed and circumscribed by two hexagons. The inside polygon sets the lower bound while the outside polygon sets the upper bound of the circumference.


Mayans studied and calculated the period of lunar eclipses to a great accuracy. An example of the Mayan number system is shown in Fig. 1.1.

One of the most fascinating numbers ever calculated in human history is π, the ratio of the circumference to the diameter of the circle. One of the methods of evaluating π was introduced by the Chinese mathematician Liu Hui, who published his result in a book in the third century. The circle was approached and bounded by two sets of regular polygons, one from outside and another from inside of the circle, as shown in Fig. 1.2. By evaluating the side lengths of two 192-sided regular polygons, Liu found that 3.1410 < π < 3.1427, and later he improved his result with a 3072-sided inscribed polygon to obtain π ≈ 3.1416. Two hundred years later, the Chinese mathematician and astronomer Zu Chongzhi and his son Zu Gengzhi carried this type of calculation much further by evaluating the side lengths of two 24 576-sided regular polygons. They concluded that 3.141 592 6 < π < 3.141 592 7, and pointed out that a good approximation was given by


π ≈ 355/113 = 3.141 592 9 . . . . This is extremely impressive considering the limited mathematics and computing tools that existed then. Furthermore, no one in the next 1000 years did a better job of evaluating π than the Zus.

The Zus could have done an even better job if they had had any additional help in either mathematical knowledge or computing tools. Let us quickly demonstrate this statement by considering a set of evaluations on polygons with a much smaller number of sides. In general, if the side length of a regular k-sided polygon is denoted as l_k and the corresponding diameter is taken to be the unit of length, then the approximation of π is given by

    π_k = k l_k.    (1.1)

The exact value of π is the limit of π_k as k → ∞. The value of π_k obtained from the calculations of the k-sided polygon can be formally written as

    π_k = π_∞ + c_1/k + c_2/k² + c_3/k³ + · · · ,    (1.2)

where π_∞ = π and c_i, for i = 1, 2, . . . , ∞, are the coefficients to be determined. The expansion in Eq. (1.2) is truncated in practice in order to obtain an approximation of π. Then the task left is to solve the equation set

    Σ_{j=1}^{n} a_{ij} x_j = b_i,    (1.3)

for i = 1, 2, . . . , n, if the expansion in Eq. (1.2) is truncated at the (n − 1)th order of 1/k, with a_{ij} = 1/k_i^{j−1}, x_1 = π_∞, x_j = c_{j−1} for j > 1, and b_i = π_{k_i}. The approximation of π is then given by the approximate π_∞ obtained by solving the equation set.

For example, if π_8 = 3.061 467, π_16 = 3.121 445, π_32 = 3.136 548, and π_64 = 3.140 331 are given from the regular polygons inscribing the circle, we can truncate the expansion at the third order of 1/k and then solve the equation set (see Exercise 1.1) to obtain π_∞, c_1, c_2, and c_3 from the given π_k. The approximation π ≈ π_∞ is 3.141 583, which has five digits of accuracy, in comparison with the exact value π = 3.141 592 65 . . . . The values of π_k for k = 8, 16, 32, 64 and the extrapolation π_∞ are all plotted in Fig. 1.3. The evaluation can be further improved if we use more π_k or ones with higher values of k. For example, we obtain π ≈ 3.141 592 62 if k = 32, 64, 128, 256 are used. Note that we are getting the same accuracy here as the evaluation of the Zus with polygons of 24 576 sides.

In a modern society, we need to deal with a lot more computations daily. Almost every event in science or technology requires quantification of the data involved. For example, before a jet aircraft can actually be manufactured, extensive computer simulations in different flight conditions must be performed to check whether there is a design flaw. This is not only necessary economically, but may help avoid loss of lives. A related use of computers is in the reconstruction of an unexpected flight accident. This is extremely important in preventing the same accident from happening again. A more common example is found in the cars


Fig. 1.3 The values of πk , with k = 8, 16, 32, and 64, plotted together with the extrapolated π∞ .


that we drive, which each have a computer that takes care of the brakes, steering control, and other critical components. Almost any electronic device that we use today is probably powered by a computer, for example, a digital thermometer, a DVD (digital video disc) player, a pacemaker, a digital clock, or a microwave oven. The list can go on and on. It is fair to say that sophisticated computations delivered by computers every moment have become part of our lives, permanently.
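The polygon extrapolation described above can be sketched as a short program in Java, the instructional language of this edition. This is an illustrative sketch, not a program from the book: the class name `PiExtrapolation` is invented here, the inscribed-polygon values are generated with the library sine as π_k = k sin(π/k) for a unit-diameter circle (a shortcut in place of the historical chord-doubling construction), and the linear set in Eq. (1.3) is solved by plain Gaussian elimination with partial pivoting.

```java
// Illustrative sketch (not from the book): extrapolate pi from inscribed
// regular polygons by solving the truncated expansion, Eqs. (1.1)-(1.3).
public class PiExtrapolation {

    // Solve the linear set a x = b by Gaussian elimination with
    // partial pivoting; a and b are overwritten during elimination.
    public static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int i = 0; i < n; i++) {
            int p = i;                        // find the pivot row
            for (int r = i + 1; r < n; r++)
                if (Math.abs(a[r][i]) > Math.abs(a[p][i])) p = r;
            double[] tr = a[i]; a[i] = a[p]; a[p] = tr;
            double tb = b[i]; b[i] = b[p]; b[p] = tb;
            for (int r = i + 1; r < n; r++) { // eliminate below the pivot
                double m = a[r][i] / a[i][i];
                for (int c = i; c < n; c++) a[r][c] -= m * a[i][c];
                b[r] -= m * b[i];
            }
        }
        double[] x = new double[n];           // back substitution
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int c = i + 1; c < n; c++) s -= a[i][c] * x[c];
            x[i] = s / a[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        int[] k = {8, 16, 32, 64};
        int n = k.length;
        double[][] a = new double[n][n];
        double[] b = new double[n];
        for (int i = 0; i < n; i++) {
            // pi_k = k*l_k, with l_k = sin(pi/k) for the unit-diameter circle
            b[i] = k[i] * Math.sin(Math.PI / k[i]);
            for (int j = 0; j < n; j++)
                a[i][j] = Math.pow(1.0 / k[i], j); // a_ij = 1/k_i^(j-1)
        }
        double piInf = solve(a, b)[0];        // x_1 approximates pi
        System.out.printf("pi ~ %.6f%n", piInf); // close to 3.141583
    }
}
```

Running the sketch reproduces the value 3.141 583 quoted in the text, and the array of k values can be enlarged in the same way to improve the estimate.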

1.2 The emergence of modern computers

The advantage of having a reliable, robust calculating device was realized a long time ago. The early abacus, which was used for counting, was in existence with the Babylonians 4000 years ago. The Chinese abacus, which appeared at least 3000 years ago, was perhaps the first comprehensive calculating device that was actually used in performing addition, subtraction, multiplication, and division and was employed for several thousand years.

A traditional Chinese abacus is made of a rectangular wooden frame and a bar going through the upper middle of the frame horizontally. See Fig. 1.4. There are thirteen evenly spaced vertical rods, each representing one decimal level. More rods were added to later versions. On each rod, there are seven beads that can be slid up and down, with five of them held below the middle bar and two above. Zero on each rod is represented by the beads below the middle bar at the very bottom and the beads above at the very top. The numbers one to four are represented by sliding one to four beads below the middle bar up, and five is given by sliding one bead above down. The numbers six to nine are represented by one bead above the middle bar slid down and one to four beads below slid up. The first and last beads on each rod are never used or are only used cosmetically during a calculation. The Japanese abacus, which was modeled on the Chinese abacus, in fact has twenty-one rods, with only five beads


Fig. 1.4 A sketch of a Chinese abacus with the number 15 963.82 shown.

on each rod, one above and four below the middle bar. Dots are marked on the middle bar for the decimal point and for every four orders (ten thousands) of digits. The abacus had to be replaced by the slide rule or numerical tables when a calculation went beyond the four basic operations, even though later versions of the Chinese abacus could also be used to evaluate square roots and cubic roots.

The slide rule, which is considered to be the next major advance in calculating devices, was introduced by the Englishmen Edmund Gunter and Reverend William Oughtred in the mid-seventeenth century, based on the logarithmic table published by the Scottish mathematician John Napier in a book in the early seventeenth century. Over the next several hundred years, the slide rule was improved and used worldwide to deliver the impressive computations needed, especially during the Industrial Revolution. At about the same time as the introduction of the slide rule, the Frenchman Blaise Pascal invented the mechanical calculating machine with gears of different sizes. The mechanical calculating machine was enhanced and applied extensively in heavy-duty computing tasks before digital computers came into existence.

The concept of an all-purpose, automatic, and programmable computing machine was introduced by the British mathematician and astronomer Charles Babbage in the early nineteenth century. After building part of a mechanical calculating machine that he called a difference engine, Babbage proposed constructing a computing machine, called an analytical engine, which could be programmed to perform any type of computation. Unfortunately, the technology at the time was not advanced enough to provide Babbage with the necessary machinery to realize his dream. In the late nineteenth century, the Spanish engineer Leonardo Torres y Quevedo showed that it might be possible to construct the machine conceived earlier by Babbage using the electromechanical technology that had just been developed.
However, he could not actually build the whole machine either, due to lack of funds. The American engineer and inventor Herman Hollerith built the very first electromechanical counting machine, which was commissioned by the US federal government for sorting the population in the 1890 American census. Hollerith used the profit obtained from selling this machine to set up a company, the Tabulating Machine Company, the predecessor of IBM (International


Business Machines Corporation). These developments continued in the early twentieth century. In the 1930s, scientists and engineers at IBM built the first difference tabulator, while researchers at Bell Laboratories built the first relay calculator. These were among the very first electromechanical calculators built during that time.

The real beginning of the computer era came with the advent of electronic digital computers. John Vincent Atanasoff, a theoretical physicist at the Iowa State University at Ames, invented the electronic digital computer between 1937 and 1939. The history regarding Atanasoff's accomplishment is described in Mackintosh (1987), Burks and Burks (1988), and Mollenhoff (1988). Atanasoff introduced vacuum tubes (instead of the electromechanical devices used earlier by other people) as basic elements, a separated memory unit, and a scheme to keep the memory updated in his computer. With the assistance of Clifford E. Berry, a graduate assistant, Atanasoff built the very first electronic computer in 1939.

Most computer history books have cited ENIAC (Electronic Numerical Integrator and Computer), built by John W. Mauchly and J. Presper Eckert with their colleagues at the Moore School of the University of Pennsylvania in 1945, as the first electronic computer. ENIAC, with a total mass of more than 30 tons, consisted of 18 000 vacuum tubes, 15 000 relays, and several hundred thousand resistors, capacitors, and inductors. It could complete about 5000 additions or 400 multiplications in one second. Some very impressive scientific computations were performed on ENIAC, including the study of nuclear fission with the liquid drop model by Metropolis and Frankel (1947). In the early 1950s, scientists at Los Alamos built another electronic digital computer, called MANIAC I (Mathematical Analyzer, Numerator, Integrator, and Computer), which was very similar to ENIAC.
Many important numerical studies, including Monte Carlo simulation of classical liquids (Metropolis et al., 1953), were completed on MANIAC I. All these research-intensive activities accomplished in the 1950s showed that computation was no longer just a supporting tool for scientiﬁc research but rather an actual means of probing scientiﬁc problems and predicting new scientiﬁc phenomena. A new branch of science, computational science, was born. Since then, the ﬁeld of scientiﬁc computing has developed and grown rapidly. The computational power of new computers has been increasing exponentially. To be speciﬁc, the computing power of a single computer unit has doubled almost every 2 years in the last 50 years. This growth followed the observation of Gordon Moore, co-founder of Intel, that information stored on a given amount of silicon surface had doubled and would continue to do so in about every 2 years since the introduction of the silicon technology (nicknamed Moore’s law). Computers with transistors replaced those with vacuum tubes in the late 1950s and early 1960s, and computers with very-large-scale integrated circuits were built in the 1970s. Microprocessors and vector processors were built in the mid-1970s to set the

stage for personal computing and supercomputing. In the 1980s, microprocessor-based personal computers and workstations appeared. Now they have penetrated all aspects of our lives, as well as all scientific disciplines, because of their affordability and low maintenance cost. With technological breakthroughs in the RISC (Reduced Instruction Set Computer) architecture, cache memory, and multiple instruction units, the capacity of each microprocessor is now larger than that of a supercomputer of 10 years ago. In the last few years, these fast microprocessors have been combined to form parallel or distributed computers, which can easily deliver a computing power of a few tens of gigaflops (10^9 floating-point operations per second). New computing paradigms such as the Grid were introduced to utilize computing resources on a global scale via the Internet (Foster and Kesselman, 2003; Abbas, 2004). Teraflop (10^12 floating-point operations per second) computers are now emerging. For example, Q, a newly installed computer at the Los Alamos National Laboratory, has a capacity of 30 teraflops. With the availability of teraflop computers, scientists can start unfolding the mysteries of the grand challenges, such as the dynamics of the global environment; the mechanism of DNA (deoxyribonucleic acid) sequencing; computer design of drugs to cope with deadly viruses; and computer simulation of future electronic materials, structures, and devices. Even though there are certain problems that computers cannot solve, as pointed out by Harel (2000), and hardware and software failures can be fatal, the human minds behind computers are nevertheless unlimited. Computers will never replace human beings in this regard, and the quest for a better understanding of Nature will go on no matter how difficult the journey is. Computers will certainly help to make that journey more colorful and pleasant.

1.3 Computer algorithms and languages

Before we can use a computer to solve a specific problem, we must instruct the computer to follow certain procedures and to carry out the desired computational task. The process involves two steps. First, we need to transform the problem, typically in the form of an equation, into a set of logical steps that a computer can follow; second, we need to instruct the computer to complete these logical steps.

Computer algorithms

The complete set of logical steps for a specific computational problem is called a computer or numerical algorithm. Some popular numerical algorithms can be traced back over 100 years. For example, Carl Friedrich Gauss (1866) published an article on the FFT (fast Fourier transform) algorithm (Goldstine, 1977,


pp. 249–53). Of course, Gauss could not have envisioned having his algorithm realized on a computer. Let us use a very simple and familiar example in physics to illustrate how a typical numerical algorithm is constructed. Assume that a particle of mass m is confined to move along the x axis under a force f(x). If we describe its motion with Newton's equation, we have

f = ma = m dv/dt,  (1.4)

where a and v are the acceleration and velocity of the particle, respectively, and t is the time. If we divide the time into small, equal intervals τ = t_{i+1} − t_i, we know from elementary physics that the velocity at time t_i is approximately given by the average velocity in the time interval [t_i, t_{i+1}],

v_i ≈ (x_{i+1} − x_i)/(t_{i+1} − t_i) = (x_{i+1} − x_i)/τ;  (1.5)

the corresponding acceleration is approximately given by the average acceleration in the same time interval,

a_i ≈ (v_{i+1} − v_i)/(t_{i+1} − t_i) = (v_{i+1} − v_i)/τ,  (1.6)

as long as τ is small enough. The simplest algorithm for finding the position and velocity of the particle at time t_{i+1} from the corresponding quantities at time t_i is obtained by combining Eqs. (1.4), (1.5), and (1.6), and we have

x_{i+1} = x_i + τ v_i,  (1.7)
v_{i+1} = v_i + (τ/m) f_i,  (1.8)

where f_i = f(x_i). If the initial position and velocity of the particle are given and the corresponding quantities at some later time are sought (the initial-value problem), we can obtain them recursively from the algorithm given in Eqs. (1.7) and (1.8). This algorithm is commonly known as the Euler method for the initial-value problem. This simple example illustrates how most algorithms are constructed. First, physical equations are transformed into discrete forms, namely, difference equations. Then the desired physical quantities or solutions of the equations at different variable points are given in a recursive manner, with the quantities at a later point expressed in terms of the quantities from earlier points. In the above example, the position and velocity of the particle at t_{i+1} are given by the position and velocity at t_i, provided that the force at any position is explicitly given as a function of the position. Note that the above way of constructing an algorithm is not limited to one-dimensional or single-particle problems. In fact, we can immediately generalize this algorithm to two-dimensional and three-dimensional problems, or to problems involving more than one particle, such as the


motion of a projectile or a system of three charged particles. The generalized version of the above algorithm is

R_{i+1} = R_i + τ V_i,  (1.9)
V_{i+1} = V_i + τ A_i,  (1.10)

where R = (r_1, r_2, . . . , r_n) is the position vector of all the n particles in the system; V = (v_1, v_2, . . . , v_n) and A = (a_1, a_2, . . . , a_n), with a_j = f_j/m_j for j = 1, 2, . . . , n, are the corresponding velocity and acceleration vectors, respectively. From a theoretical point of view, the Turing machine is an abstract representation of a universal computer and also a device for analyzing any algorithm. The concept was introduced by Alan Turing (1936–7) with a description of the universal computer, which consists of a read and write head and a tape with an infinite number of binary units (0 or 1). The machine is in a specified state at any given moment of operation and follows instructions prescribed by a finite table. A computer algorithm is a set of logical steps that can be achieved by the Turing machine. Logical steps that cannot be achieved by the Turing machine belong to the class of problems that are not solvable by computers. Some such unsolvable problems are discussed by Harel (2000). The logical steps in an algorithm can be sequential, parallel, or iterative (implicit). How to utilize the properties of a given problem in constructing a fast and accurate algorithm is a very important issue in computational science. It is hoped that the examples discussed in this book will help students learn how to establish efficient and accurate algorithms, as well as how to write clean and structured computer programs, for most problems encountered in physics and related fields.
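As a concrete sketch of Eqs. (1.9) and (1.10), the update for all n particles can be written as a single routine. The class and method names here are illustrative assumptions; the caller is expected to supply the accelerations A from whatever force law applies.

```java
// Sketch of the generalized Euler update, Eqs. (1.9) and (1.10): one
// time step for all particles, with the coordinates, velocities, and
// accelerations (R, V, A in the text) flattened into plain arrays.
public class EulerStep {
    // Advance positions and velocities by one step tau, given the
    // accelerations a evaluated at the current configuration.
    public static void step(double[] r, double[] v, double[] a, double tau) {
        for (int k = 0; k < r.length; ++k) {
            r[k] += tau * v[k];   // R_{i+1} = R_i + tau V_i
            v[k] += tau * a[k];   // V_{i+1} = V_i + tau A_i
        }
    }
}
```

Note that the position is updated with the old velocity before the velocity itself is advanced, exactly as the recursion in Eqs. (1.9) and (1.10) prescribes.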

Computer languages

Computer programs are the means through which we communicate with computers. The very first computer program was written by Ada Byron, the Countess of Lovelace, and was intended for the analytical engine proposed by Babbage in the mid-1840s. To honor her achievement, an object-oriented programming language (Ada), initially developed by the US military, is named after her. A computer program or code is a collection of statements, typically written in a well-defined computer programming language. Programming languages can be divided into two major categories: low-level languages designed to work with the given hardware, and high-level languages that are not tied to any specific hardware. Simple machine languages and assembly languages were the only ones available before the development of high-level languages. A machine language is typically in binary form and is designed to work with the unique hardware of a computer. For example, a statement, such as adding or multiplying two integers, is represented by one or several binary strings that the computer can recognize and follow. This is very efficient from the computer's point of view, but extremely


labor-intensive from that of a programmer. Remembering all the binary strings for all the statements is a nontrivial task, and debugging a program in binaries is a formidable one. Soon after the invention of the digital computer, assembly languages were introduced to increase the efficiency of programming and debugging. They are more advanced than machine languages because they have adopted symbolic addresses. But they are still tied to a certain architecture and wiring of the system. A translating device called an assembler is needed to convert an assembly code into a native machine code before a computer can recognize the instructions. Machine languages and assembly languages do not have portability; a program written for one kind of computer can never be used on another. A solution to this problem is clearly desirable. We need high-level languages that are not associated with the unique hardware of a computer and that can work on all computers. Ideal programming languages would be those that are very concise but also close to the logic of human languages. Many high-level programming languages are now available, and the choice of a specific programming language on a given computer is more or less a matter of personal taste. Most high-level languages function similarly. However, for a researcher working at the cutting edge of scientific computing, the speed and capacity of a computing system, including the efficiency of the language involved, become critical. A modern computer program conveys the tasks of an algorithm for a computational problem to a computer. The program cannot be executed by the computer before it is translated into the native machine code. A translator, a program called a compiler, is used to translate (or compile) the program to produce an executable file in binaries.
Most compilers also have an option to produce an object file first and then link it with other object files and library routines to produce a combined executable file. The compiler is able to detect most of the errors introduced during programming, that is, the process of writing a program in a high-level language. After running the executable program, the computer will output the result as instructed. The newest programming language that has made a major impact in the last few years is Java, an object-oriented, interpreted language. The strength of Java lies in its ability to work with web browsers, its comprehensive GUI (graphical user interface), and its built-in security and network support. Java is a truly universal language because it is fully platform-independent: "write once, run everywhere" is the motto that Sun Microsystems uses to describe this feature of Java. Both the source code and the compiled code can run on any computer that has Java installed, with exactly the same result. The Java compiler converts the source code (file.java) into a bytecode (file.class), which contains instructions in fixed-length byte strings and can be interpreted/executed on any computer under the Java interpreter, called the JVM (Java Virtual Machine). There are many advantages in Java, and its speed in scientific programming has steadily increased over the last few years. At the moment, a carefully written Java program, combined with static analysis, just-in-time compiling, and


instruction-level optimization, can deliver nearly the same raw speed as the incumbent C or Fortran (Boisvert et al., 2001). Let us use the algorithm that we highlighted earlier for a particle moving along the x axis to show how an algorithm is translated into a program in Java. For simplicity, the force is taken to be an elastic force f(x) = −kx, where k is the elastic constant. We will also use m = k = 1 for convenience. The following Java program is an implementation of the algorithm given in Eqs. (1.7) and (1.8); each statement in the program is almost self-explanatory.

// An example of studying the motion of a particle in
// one dimension under an elastic force.
import java.lang.*;
public class Motion {
  static final int n = 100000, j = 500;
  public static void main(String argv[]) {
    double x[] = new double[n+1];
    double v[] = new double[n+1];

    // Assign time step and initial position and velocity
    double dt = 2*Math.PI/n;
    x[0] = 0;
    v[0] = 1;

    // Calculate other position and velocity recursively
    for (int i=0; i<n; ++i) {
      x[i+1] = x[i] + dt*v[i];
      v[i+1] = v[i] - dt*x[i];
    }

    // Output the result in every j time steps
    for (int i=0; i<=n; i+=j)
      System.out.println(i*dt + " " + x[i] + " " + v[i]);
  }
}

(8.29)

because g(r) can be interpreted as the probability of seeing another particle at a distance r. Then we have

P = ρ k_B T + (ρ/N)⟨w⟩ + ΔP,  (8.30)


which can be evaluated quite easily, because at every time step the force f_ij = −∇V(r_ij) is calculated for each particle pair.
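The pair-virial accumulation described above can be sketched during an ordinary force loop. The Lennard–Jones potential in reduced units (σ = ε = 1) is an assumed choice for illustration, not fixed by the text, and the class name is hypothetical.

```java
// Sketch: accumulate the pair virial sum of r_ij . f_ij while the
// forces are being computed, for a Lennard-Jones system in reduced
// units. Positions are stored as r[i][0..2].
public class Virial {
    // Returns w = sum over pairs of r_ij . f_ij and fills f with forces.
    public static double accumulate(double[][] r, double[][] f) {
        int n = r.length;
        for (double[] fi : f) java.util.Arrays.fill(fi, 0.0);
        double w = 0.0;
        for (int i = 1; i < n; ++i) {
            for (int j = 0; j < i; ++j) {
                double dx = r[i][0]-r[j][0], dy = r[i][1]-r[j][1], dz = r[i][2]-r[j][2];
                double r2 = dx*dx + dy*dy + dz*dz;
                double r6 = 1.0/(r2*r2*r2);
                // Lennard-Jones: f_ij = (48/r^14 - 24/r^8) r_ij in reduced units
                double fr = (48.0*r6*r6 - 24.0*r6)/r2;
                f[i][0] += fr*dx; f[i][1] += fr*dy; f[i][2] += fr*dz;
                f[j][0] -= fr*dx; f[j][1] -= fr*dy; f[j][2] -= fr*dz;
                w += fr*r2;   // r_ij . f_ij for this pair
            }
        }
        return w;
    }
}
```

Because the pair forces are needed for the dynamics anyway, the virial comes at essentially no extra cost, which is the point made in the text.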

8.3 The Verlet algorithm

Hamilton's equations given in Eqs. (8.2) and (8.3) are equivalent to Newton's equation

m_i d²r_i/dt² = f_i,  (8.31)

for the ith particle in the system. To simplify the notation, we will use R to represent all the coordinates (r_1, r_2, . . . , r_N) and G to represent all the accelerations (f_1/m_1, f_2/m_2, . . . , f_N/m_N). Thus, we can rewrite Newton's equations for all the particles as

d²R/dt² = G.  (8.32)

If we apply the three-point formula to the second-order derivative d²R/dt², we have

d²R/dt² = (R_{k+1} − 2R_k + R_{k−1})/τ² + O(τ²),  (8.33)

with t = kτ. We can also apply the three-point formula to the velocity

V = dR/dt = (R_{k+1} − R_{k−1})/(2τ) + O(τ²).  (8.34)

After we put all the above together, we obtain the simplest algorithm, called the Verlet algorithm, for a classical many-body system:

R_{k+1} = 2R_k − R_{k−1} + τ² G_k + O(τ⁴),  (8.35)
V_k = (R_{k+1} − R_{k−1})/(2τ) + O(τ²).  (8.36)

The Verlet algorithm can be started if the first two positions R_0 and R_1 of the particles are given. However, in practice, only the initial position R_0 and initial velocity V_0 are given. Therefore, we need to figure out R_1 before we can start the recursion. A common practice is to treat the force during the first time interval [0, τ] as a constant, and then to apply the kinematic equation to obtain R_1:

R_1 ≈ R_0 + τ V_0 + (τ²/2) G_0,  (8.37)

where G0 is the acceleration vector evaluated at the initial conﬁguration R0 . Of course, the position R1 can be improved by carrying out the Taylor expansion to higher-order terms if the accuracy in the ﬁrst two points is critical. We can also replace G0 in Eq. (8.37) with the average (G0 + G1 )/2, with G1 evaluated at R1 , given from Eq. (8.37). This procedure can be iterated several


times before starting the algorithm for the velocity V_1 and the next position R_2. The Verlet algorithm has advantages and disadvantages. It preserves the time reversibility that is one of the important properties of Newton's equation, although the rounding error may eventually destroy this time symmetry. The error in the velocity is two orders of magnitude larger than the error in the position. In many applications, we may only need information about the positions of the particles, and the Verlet algorithm yields very high accuracy for the position. If the velocity is not needed, we can ignore its evaluation entirely, since the evaluation of the position does not depend on the velocity at each time step. The biggest disadvantage of the Verlet algorithm is that the velocity is evaluated one time step behind the position. However, this lag can be removed if the velocity is evaluated directly from the force. A two-point formula would yield

V_{k+1} = V_k + τ G_k + O(τ²).  (8.38)

We would get much better accuracy if we replaced G_k with the average (G_k + G_{k+1})/2. The new position can be obtained by treating the motion within t ∈ [kτ, (k + 1)τ] as motion with a constant acceleration G_k; that is,

R_{k+1} = R_k + τ V_k + (τ²/2) G_k.  (8.39)

Then a variation of the Verlet algorithm, with the velocity calculated at the same time step as the position, is

R_{k+1} = R_k + τ V_k + (τ²/2) G_k + O(τ⁴),  (8.40)
V_{k+1} = V_k + (τ/2)(G_{k+1} + G_k) + O(τ²).  (8.41)
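A minimal sketch of the update in Eqs. (8.40) and (8.41), using as an assumed test case a harmonic oscillator with m = k = 1, so that the acceleration is simply g(x) = −x:

```java
// Sketch: the velocity version of the Verlet algorithm, Eqs. (8.40)
// and (8.41), for a harmonic oscillator with m = k = 1. The energy
// E = (v^2 + x^2)/2 should then be conserved to high accuracy.
public class VelocityVerlet {
    // Advance (x, v) by nsteps steps of size tau; returns {x, v}.
    public static double[] run(double x, double v, double tau, int nsteps) {
        double g = -x;                      // G_k at the initial point
        for (int k = 0; k < nsteps; ++k) {
            x = x + tau*v + 0.5*tau*tau*g;  // Eq. (8.40)
            double gNew = -x;               // G_{k+1} at the new position
            v = v + 0.5*tau*(gNew + g);     // Eq. (8.41)
            g = gNew;
        }
        return new double[] {x, v};
    }
}
```

Only one force evaluation is needed per step, since G_{k+1} computed at the end of one step is reused as G_k in the next.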

Note that the evaluation of the position still has the same accuracy, because the velocity is now updated according to Eq. (8.41), which provides the cancellation of the third-order term in the new position. Here we demonstrate this velocity version of the Verlet algorithm with a very simple example, the motion of Halley's comet, which has a period of about 76 years. The potential between the comet and the Sun is given by

V(r) = −G Mm/r,  (8.42)

where r is the distance between the comet and the Sun, M and m are the respective masses of the Sun and the comet, and G is the gravitational constant. If we use the center-of-mass coordinate system as described in Chapter 3 for the two-body collision, the dynamics of the comet is governed by

µ d²r/dt² = f = −G Mm r/r³,  (8.43)


with the reduced mass

µ = Mm/(M + m) ≈ m,  (8.44)

for the case of Halley's comet. We can take the farthest point (aphelion) as the starting point, and then we can easily obtain the comet's whole orbit with one of the versions of the Verlet algorithm. Two constants of motion, the total energy and the angular momentum, can be taken as known quantities. We can describe the motion of the comet in the xy plane and choose x_0 = r_max, v_x0 = 0, y_0 = 0, and v_y0 = v_min. From well-known results we have r_max = 5.28 × 10^12 m and v_min = 9.13 × 10^2 m/s. Let us apply the velocity version of the Verlet algorithm to this problem. Then we have

x^(k+1) = x^(k) + τ v_x^(k) + (τ²/2) g_x^(k),  (8.45)
v_x^(k+1) = v_x^(k) + (τ/2)[g_x^(k+1) + g_x^(k)],  (8.46)
y^(k+1) = y^(k) + τ v_y^(k) + (τ²/2) g_y^(k),  (8.47)
v_y^(k+1) = v_y^(k) + (τ/2)[g_y^(k+1) + g_y^(k)],  (8.48)

where the time-step index is given in parentheses as a superscript in order to distinguish it from the x-component or y-component index. The acceleration components are given by

g_x = −κ x/r³,  (8.49)
g_y = −κ y/r³,  (8.50)

with r = √(x² + y²) and κ = GM. We can use more specific units in the numerical calculations, for example, 76 years as the time unit and the semimajor axis of the orbit a = 2.68 × 10^12 m as the length unit. Then we have r_max = 1.97, v_min = 0.816, and κ = 39.5. The following program is an implementation of the algorithm outlined above for Halley's comet.

// An example to study the time-dependent position and
// velocity of Halley's comet via the Verlet algorithm.
import java.lang.*;
public class Comet {
  static final int n = 20000, m = 200;
  public static void main(String argv[]) {
    double t[] = new double[n];
    double x[] = new double[n];
    double y[] = new double[n];
    double r[] = new double[n];
    double vx[] = new double[n];
    double vy[] = new double[n];
    double gx[] = new double[n];
    double gy[] = new double[n];
    double h = 2.0/(n-1), h2 = h*h/2, k = 39.478428;

Fig. 8.1 The distance between Halley's comet and the Sun calculated with the program given in the text. The period of the comet is used as the unit of time, and the semimajor axis of the orbit is used as the unit of distance.

    // Initialization of the problem
    t[0] = 0;
    x[0] = 1.966843;
    y[0] = 0;
    r[0] = x[0];
    vx[0] = 0;
    vy[0] = 0.815795;
    gx[0] = -k/(r[0]*r[0]);
    gy[0] = 0;

    // Verlet algorithm for the position and velocity
    for (int i=0; i<n-1; ++i) {
      t[i+1] = h*(i+1);
      x[i+1] = x[i] + h*vx[i] + h2*gx[i];
      y[i+1] = y[i] + h*vy[i] + h2*gy[i];
      double r2 = x[i+1]*x[i+1] + y[i+1]*y[i+1];
      r[i+1] = Math.sqrt(r2);
      gx[i+1] = -k*x[i+1]/(r2*r[i+1]);
      gy[i+1] = -k*y[i+1]/(r2*r[i+1]);
      vx[i+1] = vx[i] + h*(gx[i+1]+gx[i])/2;
      vy[i+1] = vy[i] + h*(gy[i+1]+gy[i])/2;
    }

    // Output the result in every m time steps
    for (int i=0; i<n; i+=m)
      System.out.println(t[i] + " " + r[i]);
  }
}

L = Σ_{i=1}^N (m_i/2) s² ẋ_i² − Σ_{i>j} V(x_ij) + (m_s/2) v_s² − G k_B T ln s,  (8.89)

where s and v_s are the coordinate and velocity of an introduced fictitious variable that rescales the time and the kinetic energy in order to satisfy the constraint of the canonical ensemble. The rescaling is achieved by replacing the time element dt with dt/s and holding the coordinates unchanged, that is, x_i = r_i. The velocity is rescaled with time: ṙ_i = s ẋ_i. We can then obtain the equations of motion for the coordinate x_i and the variable s by applying the Lagrange equation. Hoover (1985; 1999) showed that the Nosé Lagrangian leads to a set of equations very similar to the result of the constraint-force scheme discussed at the beginning of this subsection. The Nosé equations of motion are given in the Hoover version by

ṙ_i = v_i,  (8.90)
v̇_i = f_i/m_i − η v_i,  (8.91)

where η is given in a differential form,

η̇ = (1/m_s) [ Σ_{i=1}^N p_i²/m_i − G k_B T ],  (8.92)

and the original variable s, introduced by Nosé, is related to η by

s = s_0 e^{η(t−t_0)},  (8.93)

8.6 Constant pressure, temperature, and bond length

with s_0 as the initial value of s at t = t_0. We can discretize the above equation set easily with either the Verlet algorithm or one of the Gear schemes. Note that the behavior of the parameter s is no longer directly related to the simulation; it is merely a parameter Nosé introduced to account for the microscopic processes happening in the constant-temperature environment. We can also combine the Andersen constant-pressure scheme with the Nosé constant-temperature scheme in a single effective Lagrangian

L = Σ_{i=1}^N (m_i/2) s² L² ẋ_i² − Σ_{i>j} V(L x_ij) + (m_s/2) ṡ² − G k_B T ln s + (M/2) Ω̇² − P_0 Ω,  (8.94)

which is worked out in detail in the original work of Nosé (1984a; 1984b). Another constant-temperature scheme was introduced by Berendsen et al. (1984), with the parameter η given by

η = (1/m_s) [ Σ_{i=1}^N m_i v_i² − G k_B T ],  (8.95)

which can be interpreted as a similar form of the constraint that differs from the Hoover–Nosé form in the choice of η. For a review on the subject, see Nosé (1991).
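A minimal sketch of Eqs. (8.90)–(8.92) for a single particle in one dimension (N = 1, G = 1, m = k_B = 1), discretized with a plain Euler step for clarity; the Euler discretization, the free-particle test force, and all parameter values are assumptions for illustration, not the scheme one would use in production.

```java
// Sketch: Nose-Hoover equations (8.90)-(8.92) for one 1D particle with
// m = kB = 1 and G = 1, integrated with a simple Euler step. For a
// free particle (f = 0) started with v^2 = T, the right-hand side of
// Eq. (8.92) vanishes, so eta should remain zero.
public class NoseHoover {
    // Returns {x, v, eta} after nsteps Euler steps of size tau.
    public static double[] run(double x, double v, double eta,
                               double T, double ms, double tau, int nsteps) {
        for (int k = 0; k < nsteps; ++k) {
            double f = 0.0;                          // free particle (assumed)
            double xNew = x + tau*v;                 // Eq. (8.90)
            double vNew = v + tau*(f - eta*v);       // Eq. (8.91)
            double etaNew = eta + tau*(v*v - T)/ms;  // Eq. (8.92)
            x = xNew; v = vNew; eta = etaNew;
        }
        return new double[] {x, v, eta};
    }
}
```

In a real simulation one would replace the Euler step with the Verlet algorithm or a Gear scheme, as the text suggests.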

Constant bond length

Another issue we have to deal with in practice is that for large molecular systems, such as biopolymers, the bond length of a pair of nearest neighbors does not change very much, even though the angle between a pair of nearest bonds does. If we want to obtain accurate simulation results, we have to choose a time step much smaller than the period of the vibration of each pair of atoms. This costs a lot of computing time and might exclude the applicability of the simulation to more complicated systems, such as biopolymers. A procedure commonly known as the SHAKE algorithm (Ryckaert, Ciccotti, and Berendsen, 1977; van Gunsteren and Berendsen, 1977) was introduced to deal with the constraint on the distance between a pair of particles in the system. The idea of this procedure is to adjust each pair of particles iteratively to have

|r_ij² − d_ij²|/d_ij² ≤ ε  (8.96)

in each time step. Here d_ij is the distance constraint between the ith and jth particles, and ε is the tolerance in the simulation. The adjustment of the position of each particle is performed after each time step of the molecular dynamics simulation. Assume that we are working on a specific pair of particles for the lth constraint and that we would like to have

(r_ij + δr_ij)² − d_ij² = 0,  (8.97)


where r_ij = r_j − r_i is the new position vector difference after a molecular time step starting from r_ij^(0), and the adjustments for the l − 1 previous constraints have been completed. Here δr_ij = δr_j − δr_i is the total amount of adjustment needed for both particles. One can show, in conjunction with the Verlet algorithm, that the adjustments needed are given by

m_i δr_i = −g_ij r_ij^(0) = −m_j δr_j,  (8.98)

with g_ij as a parameter to be determined. The center of mass of these two particles remains the same during the adjustment. If we substitute δr_i and δr_j given in Eq. (8.98), we obtain

(r_ij^(0))² g_ij² + 2µ_ij (r_ij^(0) · r_ij) g_ij + µ_ij²(r_ij² − d_ij²) = 0,  (8.99)

where µ_ij = m_i m_j/(m_i + m_j) is the reduced mass of the two particles. If we keep only the linear term in g_ij, we have

g_ij = µ_ij (d_ij² − r_ij²)/(2 r_ij^(0) · r_ij),  (8.100)

which is reasonable, because g_ij is a small number during the simulation. More importantly, by the end of the iteration, all the constraints are satisfied as well; all the g_ij go to zero at convergence. Equation (8.100) is used to estimate each g_ij for each constraint in each iteration. After one has the estimate of g_ij for each constraint, the positions of the relevant particles are all adjusted. The adjustments have to be performed several times until convergence is reached. For more details on the algorithm, see Ryckaert et al. (1977). This procedure has been used in the simulation of chain-like systems as well as of proteins and nucleic acids. Interested readers can find some detailed discussions on the dynamics of proteins and nucleic acids in McCammon and Harvey (1987).
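The iteration of Eqs. (8.96)–(8.100) can be sketched for a single pair (one constraint); equal masses, the class name, and the cap on the number of iterations are illustrative assumptions.

```java
// Sketch: SHAKE-type iterative adjustment of one pair, Eqs. (8.96)-(8.100).
// r0 is the pair vector r_ij^(0) before the unconstrained move, r the
// vector after it; the loop repeats the linearized correction of
// Eq. (8.100) until the constraint (8.96) holds to tolerance eps.
public class Shake {
    // Returns the corrected pair vector r_ij with |r_ij| close to d.
    public static double[] constrain(double[] r0, double[] r, double d,
                                     double mi, double mj, double eps) {
        double mu = mi*mj/(mi + mj);          // reduced mass mu_ij
        double[] rr = r.clone();
        for (int it = 0; it < 100; ++it) {
            double r2 = rr[0]*rr[0] + rr[1]*rr[1] + rr[2]*rr[2];
            if (Math.abs(r2 - d*d)/(d*d) <= eps) break;   // Eq. (8.96)
            double dot = r0[0]*rr[0] + r0[1]*rr[1] + r0[2]*rr[2];
            double g = mu*(d*d - r2)/(2.0*dot);           // Eq. (8.100)
            // From Eq. (8.98), delta r_ij = delta r_j - delta r_i = (g/mu) r0,
            // so the pair vector is displaced along r0:
            for (int c = 0; c < 3; ++c) rr[c] += (g/mu)*r0[c];
        }
        return rr;
    }
}
```

With several constraints, this correction is applied to each constrained pair in turn, and the whole sweep is repeated until every g_ij is effectively zero, as the text describes.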

8.7 Structure and dynamics of real materials

In this section, we will discuss some typical methods used to extract information about the structure and dynamics of real materials in molecular dynamics simulations. A numerical simulation of a specific material starts with a determination of the interaction potential in the system. In most cases, the interaction potential is formulated in a parameterized form, which is usually determined separately from the available experimental data, first-principles calculations, and the conditions of the system under study. The accuracy of the interaction potential determines the validity of the simulation results. Accurate model potentials have been developed for many realistic materials, for example, the Au (100) surface (Ercolessi, Tosatti, and Parrinello, 1986) and Si3N4 ceramics (Vashishta et al., 1995). In the next


section, we will discuss an ab initio molecular dynamics scheme in which the interaction potential is obtained by calculating the electronic structure of the system at each particle configuration. We then need to set up a simulation box under the periodic boundary condition. Because most experiments are performed in a constant-pressure environment, we typically use the constant-pressure scheme developed by Andersen (1980) or its generalization (Parrinello and Rahman, 1980; 1981). The size of the simulation box has to be decided together with the available computing resources and the accuracy required for the quantities to be evaluated. The initial positions of the particles are usually assigned at the lattice points of a closely packed structure, for example, a face-centered cubic structure. The initial velocities of the particles are drawn from the Maxwell distribution for a given temperature. The temperature can be changed by rescaling the velocities. This is extremely useful in the study of phase transitions with varying temperature, such as the transition between different lattice structures, glass transition under quenching, or liquid–solid transition when the system is cooled down slowly. The advantage of simulation over actual experiment also shows up when we want to observe behavior that is not achievable experimentally due to the limitations of the technique or equipment. For example, the glass transition in the Lennard–Jones system is observed in molecular dynamics simulations but not in experiments on liquid Ar, because the necessary quenching rate is so high that it is impossible to achieve experimentally. Studying the dynamics of different materials requires a more general time-dependent density–density correlation function

C(r, r′; t) = ⟨ρ̂(r + r′, t) ρ̂(r′, 0)⟩,  (8.101)

with the time-dependent density operator given by

ρ̂(r, t) = Σ_{i=1}^N δ[r − r_i(t)].  (8.102)

If the system is homogeneous, we can integrate out r′ in the time-dependent density–density correlation function to reach the van Hove time-dependent distribution function (van Hove, 1954)

G(r, t) = (1/ρN) ⟨ Σ_{i,j}^N δ{r − [r_i(t) − r_j(0)]} ⟩.  (8.103)

The dynamical structure factor measured in an experiment, for example, neutron scattering, is given by the Fourier transform of G(r, t) as

S(k, ω) = (ρ/2π) ∫ e^{i(ωt−k·r)} G(r, t) dr dt.  (8.104)

The above equation reduces to the static case with

S(k) − 1 = 4πρ ∫ (sin kr/kr)[g(r) − 1] r² dr  (8.105)


if we realize that

G(r, 0) = g(r) + δ(r)/ρ  (8.106)

and

S(k) = ∫_{−∞}^{∞} S(k, ω) dω,  (8.107)

where g(r) is the pair distribution discussed earlier in this chapter and S(k) is the angular average of S(k) over the directions of k. Here G(r, t) can be interpreted as the probability of observing one of the particles at r at time t if a particle was observed at r = 0 at t = 0. This leads to the numerical evaluation of G(r, t), the angular average of G(r, t) over the directions of r. If we write G(r, t) in two parts,

G(r, t) = G_s(r, t) + G_d(r, t),  (8.108)

with G_s(r, t) the probability of observing the same particle that was at r = 0 at t = 0, and G_d(r, t) the probability of observing other particles, we have

G_d(r, t) ≈ N(r, Δr; t)/[ρ Ω(r, Δr)],  (8.109)

where Ω(r, Δr) ≈ 4πr²Δr is the volume of a spherical shell with radius r and thickness Δr, and N(r, Δr; t) is the number of particles in the spherical shell at time t. The position of each particle at t = 0 is chosen as the origin in the evaluation of G_d(r, t), and the average is taken over all the particles in the system. Note that this is different from the evaluation of g(r), in which we always select a particle position as the origin and take the average over time. We can also take the average over all the particles in the evaluation of g(r). Here G_s(r, t) can be evaluated in a similar fashion. Because G_s(r, t) represents the probability for a particle to be at a distance r at time t from its original position at t = 0, we can introduce

⟨r^{2n}(t)⟩ = (1/N) ⟨ Σ_{i=1}^N [r_i(t) − r_i(0)]^{2n} ⟩ = ∫ r^{2n} G_s(r, t) dr  (8.110)

in the evaluation of the diffusion coefficient with n = 1. The diffusion coefficient can also be evaluated from the autocorrelation function

c(t) = ⟨v(t) · v(0)⟩ = (1/N) Σ_{i=1}^N ⟨v_i(t) · v_i(0)⟩,  (8.111)

with

D = [1/(3c(0))] ∫_0^∞ c(t) dt,  (8.112)

because the velocity of each particle at each time step v_i(t) is known from the simulation. The velocity correlation function can also be used to obtain the power spectrum

P(ω) = [6/(π c(0))] ∫_0^∞ c(t) cos ωt dt,  (8.113)


which has many features similar to those of the phonon spectrum of the system: for example, a broad peak for the glassy state and sharp features for a crystalline state. Thermodynamical quantities can also be evaluated from molecular dynamics simulations. For example, if a simulation is performed under the constant-pressure condition, we can obtain physical quantities such as the particle density, pair-distribution function, and so on, at different temperatures. The inverse of the particle density is called the specific volume, denoted by V_P(T). The thermal expansion coefficient under the constant-pressure condition is then given by

α_P = ∂V_P(T)/∂T,  (8.114)

which is quite different when the system is in a liquid phase from when it is in a solid phase. Furthermore, we can calculate the temperature-dependent enthalpy

H = E + PΩ,  (8.115)

where E is the internal energy given by

E = ⟨ Σ_{i=1}^N (m_i/2) v_i² + Σ_{i>j}^N V(r_ij) ⟩,  (8.116)

with P the pressure and Ω the volume of the system. The specific heat under the constant-pressure condition is then obtained from

c_P = (1/N) ∂H/∂T.  (8.117)

The specific heat under the constant-volume condition can be derived from the fluctuation of the internal energy, ⟨(δE)²⟩ = ⟨[E − ⟨E⟩]²⟩, with time, given as

c_V = ⟨(δE)²⟩/(k_B T²).  (8.118)
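Eq. (8.118) amounts to a variance estimate over the stored energies; a minimal sketch (the class name and the unit k_B = 1 are assumptions):

```java
// Sketch: specific heat at constant volume from the energy fluctuation
// formula (8.118), c_V = <(dE)^2>/(kB T^2), with kB = 1. The input is
// the time series of total energies recorded during the run.
public class HeatCapacity {
    public static double cV(double[] energy, double T) {
        double mean = 0.0;
        for (double e : energy) mean += e;
        mean /= energy.length;
        double var = 0.0;
        for (double e : energy) var += (e - mean)*(e - mean);
        var /= energy.length;          // <(E - <E>)^2>
        return var/(T*T);
    }
}
```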

The isothermal compressibility κ_T is then obtained from the identity

κ_T = T α_P²/(c_P − c_V),  (8.119)

which is also quite different in the liquid phase from the solid phase. For more discussion of the molecular dynamics simulation of the glass transition, see Yonezawa (1991). Other aspects related to the structure and dynamics of a system can be studied through molecular dynamics simulations. The advantage of molecular dynamics over a typical stochastic simulation is that molecular dynamics gives all the information on the time dependence of the system, which is necessary for analyzing its structural and dynamical properties. Molecular dynamics is therefore the method of choice in computer simulations of many-particle systems. However, stochastic simulations, such as Monte Carlo simulations, are sometimes easier to perform for some systems and are closely related to the simulations of quantum systems.
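As a sketch of how the time correlations discussed in this section are accumulated from stored trajectories, here is a velocity autocorrelation estimator in the spirit of Eq. (8.111); the class name and data layout are illustrative assumptions, and the averaging over multiple time origins used in practice is omitted for brevity.

```java
// Sketch: estimate the velocity autocorrelation c(t) of Eq. (8.111)
// from stored 1D velocities v[i][step] for N particles, averaging
// over the particles only (a single time origin at step 0).
public class Vacf {
    // c[k] = (1/N) sum_i v_i(k) * v_i(0)
    public static double[] correlate(double[][] v) {
        int n = v.length, steps = v[0].length;
        double[] c = new double[steps];
        for (int k = 0; k < steps; ++k) {
            for (int i = 0; i < n; ++i) c[k] += v[i][k]*v[i][0];
            c[k] /= n;
        }
        return c;
    }
}
```

The resulting c(t) can then be integrated numerically for the diffusion coefficient of Eq. (8.112) or cosine-transformed for the power spectrum of Eq. (8.113).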


Molecular dynamics simulations

8.8 Ab initio molecular dynamics

In this section, we outline a very interesting simulation scheme that combines the calculation of the electronic structure and the molecular dynamics simulation for a system. This is known as ab initio molecular dynamics, which was devised and put into practice by Car and Parrinello (1985).

The maturity of molecular dynamics simulation schemes and the great advances in computing capacity have made it possible to perform molecular dynamics simulations for amorphous materials, biopolymers, and other complex systems. However, in order to obtain an accurate description of a specific system, we have to know the precise behavior of the interactions among the particles, that is, the ions in the system. Electrons move much faster than ions because the electron mass is much smaller than that of an ion. The position dependence of the interactions among the ions in a given system is therefore determined by the distribution of the electrons (electronic structure) at the specific moment. Thus, a good approximation of the electronic structure in a calculation can be obtained with all the nuclei fixed in space for that moment. This is the essence of the Born–Oppenheimer approximation, which allows the degrees of freedom of the electrons to be treated separately from those of the ions.

In the past, the interactions among the ions were given in a parameterized form based on experimental data, quantum chemistry calculations, or the specific conditions of the system under study. All these procedures are limited by the complexity of the electronic structure of the actual materials. We can easily obtain accurate parameterized interactions for the inert gases, such as Ar, but would have a lot of difficulty in obtaining an accurate parameterized interaction that can reproduce the various structures of ice correctly in a molecular dynamics simulation. A combined scheme is therefore highly desirable.
We can calculate the many-body interactions among the ions in the system from the electronic structure calculated at every molecular dynamics time step and then determine the next conﬁguration from such ab initio interactions. This can be achieved in principle, but in practice the scheme is restricted by the existing computing capacity. The combined method devised by Car and Parrinello (1985) was the ﬁrst in its class and has been applied to the simulation of real materials.

Density functional theory

The density functional theory (Hohenberg and Kohn, 1964; Kohn and Sham, 1965) was introduced as a practical scheme to cope with the many-electron effect in atoms, molecules, and solids. The theorem proved by Hohenberg and Kohn (1964) states that the ground-state energy of an interacting system is the optimized value of an energy functional E[ρ(r)] of the electron density ρ(r) and that the

8.8 Ab initio molecular dynamics

corresponding density distribution of the optimization is the unique ground-state density distribution. Symbolically, we can write

$$E[\rho(\mathbf{r})] = E_{\rm ext}[\rho(\mathbf{r})] + E_{\rm H}[\rho(\mathbf{r})] + E_{\rm K}[\rho(\mathbf{r})] + E_{\rm xc}[\rho(\mathbf{r})], \tag{8.120}$$

where $E_{\rm ext}[\rho(\mathbf{r})]$ is the contribution from the external potential $U_{\rm ext}(\mathbf{r})$, with

$$E_{\rm ext}[\rho(\mathbf{r})] = \int U_{\rm ext}(\mathbf{r})\rho(\mathbf{r})\, d\mathbf{r}, \tag{8.121}$$

$E_{\rm H}[\rho(\mathbf{r})]$ is the Hartree type of contribution due to the electron–electron interaction, given by

$$E_{\rm H}[\rho(\mathbf{r})] = \frac{1}{2}\,\frac{e^2}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{r}')\rho(\mathbf{r})}{|\mathbf{r}' - \mathbf{r}|}\, d\mathbf{r}\, d\mathbf{r}', \tag{8.122}$$

$E_{\rm K}$ is the contribution of the kinetic energy, and $E_{\rm xc}$ denotes the rest of the contributions and is termed the exchange–correlation energy functional. In general, we can express the electron density in a spectral representation

$$\rho(\mathbf{r}) = \sum_i \psi_i^{\dagger}(\mathbf{r})\psi_i(\mathbf{r}), \tag{8.123}$$

where $\psi_i^{\dagger}(\mathbf{r})$ is the complex conjugate of the wavefunction $\psi_i(\mathbf{r})$ and the summation is over all the degrees of freedom, that is, all the occupied states with different spin orientations. Then the kinetic energy functional can be written as

$$E_{\rm K}[\rho(\mathbf{r})] = -\frac{\hbar^2}{2m} \sum_i \int \psi_i^{\dagger}(\mathbf{r})\nabla^2\psi_i(\mathbf{r})\, d\mathbf{r}. \tag{8.124}$$

There is a constraint from the total number of electrons in the system, namely,

$$\int \rho(\mathbf{r})\, d\mathbf{r} = N, \tag{8.125}$$

which introduces the Lagrange multipliers into the variation. If we use the spectral representation of the density in the energy functional and apply the Euler equation with the Lagrange multipliers, we have

$$\frac{\delta E[\rho(\mathbf{r})]}{\delta \psi_i^{\dagger}(\mathbf{r})} - \varepsilon_i \psi_i(\mathbf{r}) = 0, \tag{8.126}$$

which leads to the Kohn–Sham equation

$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm E}(\mathbf{r})\right]\psi_i(\mathbf{r}) = \varepsilon_i\psi_i(\mathbf{r}), \tag{8.127}$$

where $V_{\rm E}(\mathbf{r})$ is an effective potential given by

$$V_{\rm E}(\mathbf{r}) = U_{\rm ext}(\mathbf{r}) + V_{\rm H}(\mathbf{r}) + V_{\rm xc}(\mathbf{r}), \tag{8.128}$$


with

$$V_{\rm xc}(\mathbf{r}) = \frac{\delta E_{\rm xc}[\rho(\mathbf{r})]}{\delta\rho(\mathbf{r})}, \tag{8.129}$$

which cannot be obtained exactly. A common practice is to approximate it by its homogeneous density equivalent, the so-called local approximation, in which we assume that $V_{\rm xc}(\mathbf{r})$ is given by the same quantity of a uniform electron gas with density equal to ρ(r). This is termed the local density approximation. The local density approximation has been successfully applied to many physical systems, including atomic, molecular, and condensed-matter systems. The unexpected success of the local density approximation in materials research has made it a standard technique for calculating electronic properties of new materials and systems.

The procedure for calculating the electronic structure with the local density approximation can be described in several steps. We first construct the local approximation of $V_{\rm xc}(\mathbf{r})$ with a guessed density distribution. Then the Kohn–Sham equation is solved, and a new density distribution is constructed from the solution. With the new density distribution, we can improve $V_{\rm xc}(\mathbf{r})$ and then solve the Kohn–Sham equation again. This procedure is repeated until convergence is reached. Interested readers can find detailed discussions on the density functional theory in many monographs or review articles, for example, Kohn and Vashishta (1983), and Jones and Gunnarsson (1989).
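The iterate-until-converged structure of this self-consistent cycle can be sketched with a deliberately small model. In the sketch below, `potentialFromDensity` and `densityFromPotential` are invented one-variable stand-ins for constructing the effective potential and for solving the Kohn–Sham equation and rebuilding the density; only the shape of the loop, with a damped (mixed) update, reflects the actual procedure.

```java
// Schematic self-consistency loop in the spirit of the local
// density approximation cycle: guess a density, build the
// effective potential, "solve" for a new density, mix, repeat.
// The toy relations below are placeholders, not the real
// Kohn-Sham machinery.
public class ScfLoop {
    // Stand-in for constructing the effective potential from rho
    static double potentialFromDensity(double rho) {
        return 1.0 + 0.5 * rho;
    }

    // Stand-in for solving the Kohn-Sham equation and
    // rebuilding the density from its solutions
    static double densityFromPotential(double v) {
        return 1.0 / (1.0 + v);
    }

    // Iterate with linear mixing until self-consistent
    static double solve(double rho, double mix, double tol) {
        for (int k = 0; k < 1000; ++k) {
            double rhoNew = densityFromPotential(potentialFromDensity(rho));
            if (Math.abs(rhoNew - rho) < tol) return rhoNew;
            rho = (1 - mix) * rho + mix * rhoNew;  // damped update
        }
        return rho;
    }

    public static void main(String[] argv) {
        // At self-consistency, rho satisfies rho = 1/(2 + rho/2)
        System.out.println("converged density: " + solve(0.3, 0.5, 1e-12));
    }
}
```

The mixing parameter plays the same stabilizing role here as density mixing does in a real electronic-structure iteration.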

The Car–Parrinello simulation scheme

The Hohenberg–Kohn energy functional forms the Born–Oppenheimer potential surface for the ions in the system. The idea of ab initio molecular dynamics is similar to the relaxation scheme we discussed in Chapter 7. We introduced a functional

$$U = \int_0^1 \left[\frac{\epsilon(x)}{2}\left(\frac{d\psi(x)}{dx}\right)^2 - \rho(x)\psi(x)\right] dx \tag{8.130}$$

for the one-dimensional Poisson equation. Note that ρ(x) here is the charge density instead of the particle density. The physical meaning of this functional is the electrostatic energy of the system. After applying the trapezoid rule to the integral and taking a partial derivative of U with respect to $\psi_i$, we obtain the corresponding difference equation

$$H_i\psi_i + \rho_i = 0 \tag{8.131}$$

for the one-dimensional Poisson equation. Here $H_i\psi_i$ denotes $\epsilon_{i+1/2}\psi_{i+1} + \epsilon_{i-1/2}\psi_{i-1} - (\epsilon_{i+1/2} + \epsilon_{i-1/2})\psi_i$. If we combine the above equation with the relaxation scheme discussed in Section 7.5, we have

$$\psi_i^{(k+1)} = (1-p)\psi_i^{(k)} + p\left[H_i\psi_i^{(k)} + \rho_i + \psi_i^{(k)}\right], \tag{8.132}$$


which would optimize (minimize) the functional (the electrostatic energy) as $k \to \infty$. The indices k and k + 1 are for the iteration steps, and the index i is for the spatial points. The iteration can be interpreted as a fictitious time step, since we can rewrite the above equation as

$$\frac{\psi_i^{(k+1)} - \psi_i^{(k)}}{p} = H_i\psi_i^{(k)} + \rho_i, \tag{8.133}$$

with p acting like a fictitious time step. The solution converges to the true solution of the Poisson equation as k goes to infinity if the functional U decreases during the iterations.

Ab initio molecular dynamics is devised by introducing a fictitious time-dependent equation for the electron degrees of freedom:

$$\mu\,\frac{d^2\psi_i(\mathbf{r},t)}{dt^2} = -\frac{1}{2}\,\frac{\delta E[\rho(\mathbf{r},t);\mathbf{R}_n]}{\delta\psi_i^{\dagger}(\mathbf{r},t)} + \sum_j \Lambda_{ij}\psi_j(\mathbf{r},t), \tag{8.134}$$

where μ is an adjustable parameter introduced for convenience, $\Lambda_{ij}$ is the Lagrange multiplier, introduced to ensure the orthonormal condition of the wavefunctions $\psi_i(\mathbf{r},t)$, and the summation is over all the occupied states. Note that the potential energy surface E is a functional of the electron density as well as a function of the ionic coordinates $\mathbf{R}_n$ for $n = 1, 2, \ldots, N_c$, with a total of $N_c$ ions in the system. In practice, we can also consider the first-order time-derivative equation, with $d^2\psi_i(\mathbf{r},t)/dt^2$ replaced by the first-order derivative $d\psi_i(\mathbf{r},t)/dt$, because either the first-order or the second-order derivative will approach zero in the limit of convergence. Second-order derivatives were used in the original work of Car and Parrinello and were later shown to yield a fast convergence if a special damping term is introduced (Tassone, Mauri, and Car, 1994). The ionic degrees of freedom are then simulated from Newton's equation

$$M_n\,\frac{d^2\mathbf{R}_n}{dt^2} = -\frac{\partial E[\rho(\mathbf{r},t);\mathbf{R}_n]}{\partial\mathbf{R}_n}, \tag{8.135}$$

where Mn and Rn are the mass and the position vector of the nth particle. The advantage of ab initio molecular dynamics is that the electron degrees of freedom and the ionic degrees of freedom are simulated simultaneously by the above equations. Since its introduction by Car and Parrinello (1985), the method has been applied to many systems, especially those without a crystalline structure, namely, liquids and amorphous materials. We will not go into more detail on the method or its applications; interested readers can ﬁnd them in the review by Tassone, Mauri, and Car (1994). Progress in ab initio molecular dynamics has also included mapping the Hamiltonian onto a tight-binding model in which the evaluation of the electron degrees of freedom is drastically simpliﬁed (Wang, Chan, and Ho, 1989). This approach has also been applied to many systems, for example, amorphous carbon and carbon clusters. More discussions on the method can be found in several review articles, for example, Oguchi and Sasaki (1991).
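To see how such a fictitious dynamics relaxes toward the ground state, consider a deliberately small model: replace the energy functional by the quadratic form ψᵀHψ for a fixed 3×3 symmetric matrix (an invented stand-in), use the first-order variant of Eq. (8.134), and enforce normalization explicitly in place of the Lagrange-multiplier term. The steepest-descent flow then converges to the lowest eigenvector of H.

```java
// Toy version of the first-order fictitious dynamics: minimize
// E[psi] = psi^T H psi under |psi| = 1 by steepest descent,
// which converges to the lowest eigenvector of H. The matrix H
// and the step size are illustrative choices only.
public class FictitiousDynamics {
    static double[][] h = {{2, -1, 0}, {-1, 2, -1}, {0, -1, 2}};

    static double[] applyH(double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) q[i] += h[i][j] * p[j];
        return q;
    }

    // Rayleigh quotient for a normalized vector p
    static double energy(double[] p) {
        double[] q = applyH(p);
        double e = 0;
        for (int i = 0; i < 3; ++i) e += p[i] * q[i];
        return e;
    }

    // dpsi/dt = -(H psi - e psi); explicit renormalization
    // replaces the Lagrange-multiplier term for a single state
    static double[] relax(double[] p, double dt, int steps) {
        for (int k = 0; k < steps; ++k) {
            double e = energy(p);
            double[] q = applyH(p);
            double norm = 0;
            for (int i = 0; i < 3; ++i) {
                p[i] -= dt * (q[i] - e * p[i]);
                norm += p[i] * p[i];
            }
            norm = Math.sqrt(norm);
            for (int i = 0; i < 3; ++i) p[i] /= norm;
        }
        return p;
    }

    public static void main(String[] argv) {
        double[] p = relax(new double[] {1, 0, 0}, 0.1, 2000);
        System.out.println("relaxed energy: " + energy(p));
    }
}
```

For this matrix the lowest eigenvalue is 2 − √2, and the relaxed energy approaches that value, just as the fictitious electron dynamics keeps the wavefunctions near the Born–Oppenheimer surface.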


Exercises

8.1 Show that the Verlet algorithm preserves the time reversal of Newton's equation.
8.2 Derive the velocity version of the Verlet algorithm and show that the position update has the same accuracy as in the original Verlet algorithm.
8.3 Write a program that uses the velocity version of the Verlet algorithm to simulate small clusters of ions (Na⁺)ₙ(Cl⁻)ₘ for small n and m. Use the empirical interaction potential given in Eq. (5.64) for the ions and set up the initial configuration as the equilibrium configuration. Assign initial velocities from the Maxwell distribution. Discuss the time dependence of the kinetic energy with different total energies.
8.4 Explore the coexistence of the liquid and solid phases in a Lennard–Jones cluster that has an intermediate size of about 150 particles through the microcanonical molecular dynamics simulation. Analyze the melting process in the cluster and the size dependence of the coexistence region.
8.5 A two-dimensional system can behave quite differently from its three-dimensional counterpart. Use the molecular dynamics technique to study a two-dimensional Lennard–Jones cluster of the intermediate size of about 100 particles. Do the liquid and solid phases coexist in any temperature region? Discuss the difference found between the three-dimensional and two-dimensional clusters.
8.6 Develop a program with the fifth-order Gear predictor–corrector scheme and apply it to the damped pendulum under a sinusoidal driving force. Study the properties of the pendulum with different values of the parameters.
8.7 Apply the fifth-order Gear scheme to study the long-time behavior of a classical helium atom in two dimensions. Explore different initial conditions. Is the system unstable or chaotic under certain initial conditions?
8.8 Derive the Hoover equations from the Nosé Lagrangian. Show that the generalized Lagrangian given in Section 8.6 can provide the correct equations of motion to ensure the constraint of constant temperature as well as that of constant pressure.
8.9 Develop a molecular dynamics program to study the structure of a cluster of identical charges inside a three-dimensional isotropic, harmonic trap. Under what condition does the cluster form a crystal? What happens if the system is confined in a plane?
8.10 Show that the Parrinello–Rahman Lagrangian allows the shape of the simulation box to change, and derive equations of motion from it. Find the Lagrangian that combines the Parrinello–Rahman Lagrangian and the Nosé Lagrangian and derive the equations of motion from this generalized Lagrangian.

8.11 For quantum many-body systems, the n-body density function is defined by

$$\rho_n(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_n) = \frac{1}{Z}\,\frac{N!}{(N-n)!} \int |\Psi(R)|^2\, d\mathbf{r}_{n+1}\, d\mathbf{r}_{n+2} \cdots d\mathbf{r}_N,$$

where Ψ(R) is the ground-state wavefunction of the many-body system and Z is the normalization constant given by

$$Z = \int |\Psi(R)|^2\, d\mathbf{r}_1\, d\mathbf{r}_2 \cdots d\mathbf{r}_N.$$

The pair-distribution function is related to the two-body density function through

$$\rho(\mathbf{r})\,g(\mathbf{r},\mathbf{r}')\,\rho(\mathbf{r}') = \rho_2(\mathbf{r},\mathbf{r}').$$

Show that

$$\int \rho(\mathbf{r}')\left[\tilde{g}(\mathbf{r},\mathbf{r}') - 1\right] d\mathbf{r}' = -1,$$

where

$$\tilde{g}(\mathbf{r},\mathbf{r}') = \int_0^1 g(\mathbf{r},\mathbf{r}';\lambda)\, d\lambda,$$

with $g(\mathbf{r},\mathbf{r}';\lambda)$ as the pair-distribution function under the scaled interaction $V(\mathbf{r},\mathbf{r}';\lambda) = \lambda V(\mathbf{r},\mathbf{r}')$.

8.12 Show that the exchange–correlation energy functional is given by

$$E_{\rm xc}[\rho(\mathbf{r})] = \frac{1}{2}\int \rho(\mathbf{r})\,V(\mathbf{r},\mathbf{r}')\left[\tilde{g}(\mathbf{r},\mathbf{r}') - 1\right]\rho(\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}',$$

with $\tilde{g}(\mathbf{r},\mathbf{r}')$ given in the last problem.

Chapter 9

Modeling continuous systems

It is usually more difﬁcult to simulate continuous systems than discrete ones, especially when the properties under study are governed by nonlinear equations. The systems can be so complex that the length scale at the atomic level can be as important as the length scale at the macroscopic level. The basic idea in dealing with complicated systems is similar to a divide-and-conquer concept, that is, dividing the systems with an understanding of the length scales involved and then solving the problem with an appropriate method at each length scale. A speciﬁc length scale is usually associated with an energy scale, such as the average temperature of the system or the average interaction of each pair of particles. The divide-and-conquer schemes are quite powerful in a wide range of applications. However, each method has its advantages and disadvantages, depending on the particular system.

9.1 Hydrodynamic equations

In this chapter, we will discuss several methods used in simulating continuous systems. First we will discuss a quite mature method, the ﬁnite element method, which sets up the idea of partitioning the system according to physical condition. Then we will discuss another method, the particle-in-cell method, which adopts a mean-ﬁeld concept in dealing with a large system involving many, many atoms, for example, 1023 atoms. This method has been very successful in the simulations of plasma, galactic, hydrodynamic, and magnetohydrodynamic systems. Then we will brieﬂy highlight a relatively new method, the lattice Boltzmann method, which is closely related to the lattice-gas method of cellular automata and has been applied in modeling several continuous systems. Before going into the detail of each method, we introduce the hydrodynamic equations. The equations are obtained by analyzing the dynamics of a small element of ﬂuid in the system and then taking the limit of continuity. We can obtain three basic equations by examining the changes in the mass, momentum, and energy of a small element in the system. For simplicity, let us ﬁrst assume that the ﬂuid is neutral and not under an external ﬁeld. The three fundamental


hydrodynamic equations are

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0, \tag{9.1}$$

$$\frac{\partial}{\partial t}(\rho\mathbf{v}) + \nabla\cdot\Pi = 0, \tag{9.2}$$

$$\frac{\partial}{\partial t}\left(\rho\varepsilon + \frac{1}{2}\rho v^2\right) + \nabla\cdot\mathbf{j}_e = 0. \tag{9.3}$$

The first equation is commonly known as the continuity equation, which is the result of mass conservation. The second equation is known as the Navier–Stokes equation, which comes from Newton's equation applied to an infinitesimal element in the fluid. The final equation is the result of the work–energy theorem. In these equations, ρ is the mass density, v is the velocity, ε is the internal energy per unit mass, Π is the momentum flux tensor, and $\mathbf{j}_e$ is the energy flux density. The momentum flux tensor can be written as

$$\Pi = \rho\mathbf{v}\mathbf{v} - \Gamma, \tag{9.4}$$

where Γ is the stress tensor of the fluid, given as

$$\Gamma = \eta\left[\nabla\mathbf{v} + (\nabla\mathbf{v})^T\right] + \left(\zeta - \frac{2\eta}{3}\right)(\nabla\cdot\mathbf{v})\,\mathbf{I} - P\,\mathbf{I}, \tag{9.5}$$

where η and ζ are the coefficients of shear and bulk viscosity, respectively, P is the pressure in the fluid, and I is the unit tensor. The energy flux density is given by

$$\mathbf{j}_e = \mathbf{v}\left(\rho\varepsilon + \frac{1}{2}\rho v^2\right) - \mathbf{v}\cdot\Gamma - \kappa\nabla T, \tag{9.6}$$

where the last term is the thermal energy flow due to the gradient of the temperature T, with κ being the thermal conductivity. The above set of equations can be solved together with the equation of state

$$f(\rho, P, T) = 0, \tag{9.7}$$

which provides a particular relationship between several thermal variables for the given system.

Now we can add the effects of external fields into the related equations. For example, if gravity is important, we can modify Eq. (9.2) to

$$\frac{\partial}{\partial t}(\rho\mathbf{v}) + \nabla\cdot\Pi = \rho\mathbf{g}, \tag{9.8}$$

where g is the gravitational field given by

$$\mathbf{g} = -\nabla\Phi. \tag{9.9}$$

Here Φ is the gravitational potential from the Poisson equation

$$\nabla^2\Phi = 4\pi G\rho, \tag{9.10}$$

with G being the gravitational constant.


When the system is charged, electromagnetic fields become important. We then need to modify the momentum flux tensor, energy density, and energy flux density. A term $(\mathbf{B}\mathbf{B} - \mathbf{I}B^2/2)/\mu$ should be added to the momentum flux tensor. A magnetic field energy density term $B^2/(2\mu)$ should be added to the energy density, and an extra energy flux term $\mathbf{E}\times\mathbf{B}/\mu$ should be added to the energy flux density. Here μ is the magnetic permeability, with the limit $\mu = \mu_0$ in free space. The electric current density j, electric field E, and magnetic field B are related through the Maxwell equations and the charge transport equation. We will come back to this point in the later sections of this chapter. These hydrodynamic or magnetohydrodynamic equations can be solved in principle with the finite difference methods discussed in Chapter 7. However, in this chapter, we introduce some other methods, mainly based on the divide-and-conquer schemes. For more detailed discussions on the hydrodynamic and magnetohydrodynamic equations, see Landau and Lifshitz (1987) and Lifshitz and Pitaevskii (1981).
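As a minimal illustration of what the continuity equation (9.1) expresses, the sketch below advects a one-dimensional density profile with a first-order upwind scheme on a periodic grid and checks that the total mass stays constant. The grid size, velocity, and initial profile are arbitrary illustrative choices.

```java
// Illustration of Eq. (9.1) in one dimension: advect a density
// profile with constant velocity using first-order upwind
// differences on a periodic grid; the scheme conserves the
// total mass sum(rho)*dx to roundoff.
public class Continuity {
    // Total mass on the grid
    static double mass(double[] rho, double dx) {
        double m = 0;
        for (double r : rho) m += r;
        return m * dx;
    }

    // Run the upwind update and return the change in total mass
    static double massDrift(int n, int steps) {
        double dx = 1.0 / n, v = 1.0, dt = 0.4 * dx / v;  // CFL < 1
        double[] rho = new double[n];
        for (int i = 0; i < n; ++i) {           // initial Gaussian bump
            double x = (i + 0.5) * dx - 0.5;
            rho[i] = Math.exp(-100 * x * x);
        }
        double m0 = mass(rho, dx);
        for (int s = 0; s < steps; ++s) {
            double[] nw = new double[n];
            for (int i = 0; i < n; ++i)
                nw[i] = rho[i] - v * dt / dx * (rho[i] - rho[(i - 1 + n) % n]);
            rho = nw;
        }
        return Math.abs(mass(rho, dx) - m0);
    }

    public static void main(String[] argv) {
        System.out.println("mass drift: " + massDrift(200, 500));
    }
}
```

The profile spreads because the first-order scheme is diffusive, but the integral of ρ is untouched, which is exactly the content of mass conservation.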

9.2 The basic finite element method

In order to show how a typical finite element method works for a specific problem, let us take the one-dimensional Poisson equation as an example. Assume that the charge distribution is ρ(x) and the equation for the electrostatic potential is

$$\frac{d^2\phi(x)}{dx^2} = -\frac{\rho(x)}{\epsilon_0}. \tag{9.11}$$

In order to simplify our discussion here, let us assume that the boundary condition is given as φ(0) = φ(1) = 0. As discussed in Chapter 4, we can always express the solution in terms of a complete set of orthogonal basis functions as

$$\phi(x) = \sum_i a_i u_i(x), \tag{9.12}$$

where the basis functions $u_i(x)$ satisfy

$$\int_0^1 u_j(x)u_i(x)\, dx = \delta_{ij} \tag{9.13}$$

and $u_i(0) = u_i(1) = 0$. We have assumed that $u_i(x)$ is a real function and that the summation is over all the basis functions. One of the choices for $u_i(x)$ is to have $u_i(x) = \sqrt{2}\sin i\pi x$. In order to obtain the actual value of φ(x), we need to solve for all the coefficients $a_i$ by applying the differential equation and the orthogonality condition in Eq. (9.13). This becomes quite a difficult task if the system has a higher dimensionality and an irregular boundary.

The finite element method is designed to find a good approximation of the solution for irregular boundary conditions or in situations of rapid change of the


solution. Typically, the approximation is still written as a series

$$\phi_n(x) = \sum_{i=1}^{n} a_i u_i(x), \tag{9.14}$$

with a finite number of terms. Here $u_i(x)$ is a set of linearly independent local functions, each defined in a small region around a node $x_i$. If we use this approximation in the differential equation, we have a nonzero value

$$r_n(x) = \frac{d^2\phi_n(x)}{dx^2} + \frac{\rho(x)}{\epsilon_0}, \tag{9.15}$$

which would be zero if $\phi_n(x)$ were the exact solution. The goal now is to set up a scheme that makes $r_n(x)$ small over the whole region during the process of determining all the $a_i$. The selection of $u_i(x)$ and the procedure for optimizing $r_n(x)$ determine how accurate the solution is for a given n. We can always improve the accuracy with a higher n. One scheme for performing the optimization is to introduce a weighted integral

$$g_i = \int_0^1 r_n(x)w_i(x)\, dx, \tag{9.16}$$

which is then forced to be zero during the determination of the $a_i$. Here $w_i(x)$ is a selected weight. The procedure just described is the essence of the finite element method. The region of interest is divided into many small pieces, usually of the same topology but not the same size, for example, different triangles for a two-dimensional domain. Then a set of linearly independent local functions $u_i(x)$ is selected, with each defined around a node and its neighborhood. We then select the weight $w_i(x)$, which is usually chosen as a local function as well. For the one-dimensional Poisson equation, the weighted integral becomes

$$g_i = \int_0^1 \left[\sum_{j=1}^{n} a_j\,\frac{d^2u_j(x)}{dx^2} + \frac{\rho(x)}{\epsilon_0}\right] w_i(x)\, dx = 0, \tag{9.17}$$

which is equivalent to a linear equation set

$$\mathbf{A}\mathbf{a} = \mathbf{b}, \tag{9.18}$$

with

$$A_{ij} = -\int_0^1 \frac{d^2u_j(x)}{dx^2}\, w_i(x)\, dx \tag{9.19}$$

and

$$b_i = \frac{1}{\epsilon_0}\int_0^1 \rho(x)w_i(x)\, dx. \tag{9.20}$$

The advantage of choosing $u_i(x)$ and $w_i(x)$ as local functions lies in the solution of the linear equation set given in Eq. (9.18). Basically, we need to solve only a tridiagonal or a band matrix problem, which can be done quite quickly. The idea


is to make the problem computationally simple but numerically accurate. That means the choice of the weight $w_i(x)$ is rather an art. A common choice is $w_i(x) = u_i(x)$, which is called the Galerkin method.

Now let us consider in detail the application of the Galerkin method to the one-dimensional Poisson equation. We can divide the region [0, 1] into n + 1 equal intervals with $x_0 = 0$ and $x_{n+1} = 1$ and take

$$u_i(x) = \begin{cases} (x - x_{i-1})/h & \text{for } x \in [x_{i-1}, x_i], \\ (x_{i+1} - x)/h & \text{for } x \in [x_i, x_{i+1}], \\ 0 & \text{otherwise}. \end{cases} \tag{9.21}$$

Here $h = x_i - x_{i-1} = 1/(n+1)$ is the spatial interval. Note that this choice of $u_i(x)$ satisfies the boundary condition. Let us take a very simple charge distribution

$$\rho(x) = \epsilon_0 \pi^2 \sin \pi x \tag{9.22}$$

for illustrative purposes. We can easily determine the weighted integrals to obtain the matrix elements

$$A_{ij} = \int_0^1 \frac{du_i(x)}{dx}\,\frac{du_j(x)}{dx}\, dx = \begin{cases} 2/h & \text{for } i = j, \\ -1/h & \text{for } i = j \pm 1, \\ 0 & \text{otherwise}, \end{cases} \tag{9.23}$$

and the vector

$$b_i = \frac{1}{\epsilon_0}\int_0^1 \rho(x)u_i(x)\, dx = \frac{\pi}{h}\,(x_{i-1} + x_{i+1} - 2x_i)\cos\pi x_i + \frac{1}{h}\,(2\sin\pi x_i - \sin\pi x_{i-1} - \sin\pi x_{i+1}). \tag{9.24}$$
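These matrix elements are easy to confirm numerically: after an integration by parts of Eq. (9.19), with the boundary terms vanishing, $A_{ij}$ reduces to the overlap of the element derivatives, which are ±1/h on the two supporting intervals and zero elsewhere. The sketch below (with an arbitrarily chosen node spacing and quadrature grid) checks the values 2/h and −1/h.

```java
// Numerical check of the element matrix in Eq. (9.23): the
// derivative of the piecewise linear element of Eq. (9.21) is
// +1/h, -1/h, or 0, so the integral of u_i' u_j' can be
// approximated by the midpoint rule.
public class ElementMatrix {
    // Derivative of the tent function centered at x_i = (i+1)*h
    static double du(int i, double x, double h) {
        double xi = (i + 1) * h;
        if (x > xi - h && x < xi) return 1 / h;
        if (x > xi && x < xi + h) return -1 / h;
        return 0;
    }

    // Midpoint-rule integral of u_i'(x) u_j'(x) over [0,1]
    static double a(int i, int j, double h, int n) {
        double dx = 1.0 / n, s = 0;
        for (int k = 0; k < n; ++k) {
            double x = (k + 0.5) * dx;
            s += du(i, x, h) * du(j, x, h);
        }
        return s * dx;
    }

    public static void main(String[] argv) {
        double h = 0.1;  // 9 interior nodes on [0,1]
        System.out.println("A_ii   = " + a(3, 3, h, 1000)
                           + "  (expect " + 2 / h + ")");
        System.out.println("A_ii+1 = " + a(3, 4, h, 1000)
                           + "  (expect " + (-1 / h) + ")");
    }
}
```

Because the integrand is piecewise constant, the midpoint rule reproduces the entries essentially exactly, which is why the assembled matrix is tridiagonal with the values quoted in Eq. (9.23).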

Now we are ready to put all these values into a program. Note that the coefficient matrix A is automatically symmetric and tridiagonal because the local basis $u_i(x)$ is confined in the region $[x_{i-1}, x_{i+1}]$. As we discussed in Chapters 2, 5, and 7, an LU decomposition scheme can easily be devised to solve this tridiagonal matrix problem. Here is the implementation of the algorithm.

// Program for the one-dimensional Poisson equation.
import java.lang.*;
public class Poisson {
  final static int n = 99, m = 2;
  public static void main(String argv[]) {
    double d[] = new double[n];
    double b[] = new double[n];
    double c[] = new double[n];
    double h = 1.0/(n+1), pi = Math.PI;

 // Evaluate the coefficient matrix elements


[Figure: the potential φ (in V) from the finite element solution of the one-dimensional Poisson equation, plotted against x (in m) on [0, 1]; the curve rises from φ(0) = 0 to a peak near x = 0.5 and returns to φ(1) = 0.]

    for (int i=0; i<n; ++i) {
      d[i] = 2/h;
      c[i] = -1/h;
      double xm = h*i;
      double xi = h*(i+1);
      double xp = h*(i+2);
      b[i] = pi*(xm+xp-2*xi)*Math.cos(pi*xi)/h
        + (2*Math.sin(pi*xi)-Math.sin(pi*xm)
        -  Math.sin(pi*xp))/h;
    }

 // Solve the tridiagonal equation set Aa = b through
 // forward elimination and back substitution, with the
 // solution returned in b[]
    for (int i=1; i<n; ++i) {
      double g = c[i-1]/d[i-1];
      d[i] -= g*c[i-1];
      b[i] -= g*b[i-1];
    }
    b[n-1] /= d[n-1];
    for (int i=n-2; i>=0; --i)
      b[i] = (b[i]-c[i]*b[i+1])/d[i];

 // Output the result at every m-th point
    for (int i=m-1; i<n; i+=m)
      System.out.println((h*(i+1)) + " " + b[i]);
  }
}

An Introduction to Computational Physics
Second Edition
Tao Pang
University of Nevada, Las Vegas

cambridge university press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, UK

Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521825696

© T. Pang 2006

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2006

ISBN-13 978-0-511-14046-4 eBook (NetLibrary)
ISBN-10 0-511-14046-0 eBook (NetLibrary)
ISBN-13 978-0-521-82569-6 hardback
ISBN-10 0-521-82569-5 hardback
ISBN-13 978-0-521-53276-1
ISBN-10 0-521-53276-0

Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To Yunhua, for enduring love

Contents

Preface to first edition
Preface
Acknowledgments

1 Introduction
  1.1 Computation and science
  1.2 The emergence of modern computers
  1.3 Computer algorithms and languages
  Exercises

2 Approximation of a function
  2.1 Interpolation
  2.2 Least-squares approximation
  2.3 The Millikan experiment
  2.4 Spline approximation
  2.5 Random-number generators
  Exercises

3 Numerical calculus
  3.1 Numerical differentiation
  3.2 Numerical integration
  3.3 Roots of an equation
  3.4 Extremes of a function
  3.5 Classical scattering
  Exercises

4 Ordinary differential equations
  4.1 Initial-value problems
  4.2 The Euler and Picard methods
  4.3 Predictor–corrector methods
  4.4 The Runge–Kutta method
  4.5 Chaotic dynamics of a driven pendulum
  4.6 Boundary-value and eigenvalue problems
  4.7 The shooting method
  4.8 Linear equations and the Sturm–Liouville problem
  4.9 The one-dimensional Schrödinger equation
  Exercises

5 Numerical methods for matrices
  5.1 Matrices in physics
  5.2 Basic matrix operations
  5.3 Linear equation systems
  5.4 Zeros and extremes of multivariable functions
  5.5 Eigenvalue problems
  5.6 The Faddeev–Leverrier method
  5.7 Complex zeros of a polynomial
  5.8 Electronic structures of atoms
  5.9 The Lanczos algorithm and the many-body problem
  5.10 Random matrices
  Exercises

6 Spectral analysis
  6.1 Fourier analysis and orthogonal functions
  6.2 Discrete Fourier transform
  6.3 Fast Fourier transform
  6.4 Power spectrum of a driven pendulum
  6.5 Fourier transform in higher dimensions
  6.6 Wavelet analysis
  6.7 Discrete wavelet transform
  6.8 Special functions
  6.9 Gaussian quadratures
  Exercises

7 Partial differential equations
  7.1 Partial differential equations in physics
  7.2 Separation of variables
  7.3 Discretization of the equation
  7.4 The matrix method for difference equations
  7.5 The relaxation method
  7.6 Groundwater dynamics
  7.7 Initial-value problems
  7.8 Temperature field of a nuclear waste rod
  Exercises

8 Molecular dynamics simulations
  8.1 General behavior of a classical system
  8.2 Basic methods for many-body systems
  8.3 The Verlet algorithm
  8.4 Structure of atomic clusters
  8.5 The Gear predictor–corrector method
  8.6 Constant pressure, temperature, and bond length
  8.7 Structure and dynamics of real materials
  8.8 Ab initio molecular dynamics
  Exercises

9 Modeling continuous systems
  9.1 Hydrodynamic equations
  9.2 The basic finite element method
  9.3 The Ritz variational method
  9.4 Higher-dimensional systems
  9.5 The finite element method for nonlinear equations
  9.6 The particle-in-cell method
  9.7 Hydrodynamics and magnetohydrodynamics
  9.8 The lattice Boltzmann method
  Exercises

10 Monte Carlo simulations
  10.1 Sampling and integration
  10.2 The Metropolis algorithm
  10.3 Applications in statistical physics
  10.4 Critical slowing down and block algorithms
  10.5 Variational quantum Monte Carlo simulations
  10.6 Green's function Monte Carlo simulations
  10.7 Two-dimensional electron gas
  10.8 Path-integral Monte Carlo simulations
  10.9 Quantum lattice models
  Exercises

11 Genetic algorithm and programming
  11.1 Basic elements of a genetic algorithm
  11.2 The Thomson problem
  11.3 Continuous genetic algorithm
  11.4 Other applications
  11.5 Genetic programming
  Exercises

12 Numerical renormalization
  12.1 The scaling concept
  12.2 Renormalization transform
  12.3 Critical phenomena: the Ising model
  12.4 Renormalization with Monte Carlo simulation
  12.5 Crossover: the Kondo problem
  12.6 Quantum lattice renormalization
  12.7 Density matrix renormalization
  Exercises

References
Index

Preface to first edition

The beauty of Nature is in its detail. If we are to understand different layers of scientific phenomena, tedious computations are inevitable. In the last half-century, computational approaches to many problems in science and engineering have clearly evolved into a new branch of science, computational science. With the increasing computing power of modern computers and the availability of new numerical techniques, scientists in different disciplines have started to unfold the mysteries of the so-called grand challenges, which are identified as scientific problems that will remain significant for years to come and may require teraflop computing power. These problems include, but are not limited to, global environmental modeling, virus vaccine design, and new electronic materials simulation.

Computational physics, in my view, is the foundation of computational science. It deals with basic computational problems in physics, which are closely related to the equations and computational problems in other scientific and engineering fields. For example, numerical schemes for Newton's equation can be implemented in the study of the dynamics of large molecules in chemistry and biology; algorithms for solving the Schrödinger equation are necessary in the study of electronic structures in materials science; the techniques used to solve the diffusion equation can be applied to air pollution control problems; and numerical simulations of hydrodynamic equations are needed in weather prediction and oceanic dynamics.

Important as computational physics is, it has not yet become a standard course in the curricula of many universities. But clearly its importance will increase with the further development of computational science. Almost every college or university now has some networked workstations available to students. Probably many of them will have some closely linked parallel or distributed computing systems in the near future.
Students from many disciplines within science and engineering now demand the basic knowledge of scientific computing, which will certainly be important in their future careers. This book is written to fulfill this need. Some of the materials in this book come from my lecture notes for a computational physics course I have been teaching at the University of Nevada, Las Vegas. I usually have a combination of graduate and undergraduate students from physics, engineering, and other majors. All of them have some access to the workstations or supercomputers on campus. The purpose of my lectures is to provide


the students with some basic materials and necessary guidance so they can work out the assigned problems and selected projects on the computers available to them and in a programming language of their choice. This book is made up of two parts. The ﬁrst part (Chapter 1 through Chapter 6) deals with the basics of computational physics. Enough detail is provided so that a well-prepared upper division undergraduate student in science or engineering will have no difﬁculty in following the material. The second part of the book (Chapter 7 through Chapter 12) introduces some currently used simulation techniques and some of the newest developments in the ﬁeld. The choice of subjects in the second part is based on my judgment of the importance of the subjects in the future. This part is speciﬁcally written for students or beginning researchers who want to know the new directions in computational physics or plan to enter the research areas of scientiﬁc computing. Many references are given there to help in further studies. In order to make the course easy to digest and also to show some practical aspects of the materials introduced in the text, I have selected quite a few exercises. The exercises have different levels of difﬁculty and can be grouped into three categories. Those in the ﬁrst category are simple, short problems; a student with little preparation can still work them out with some effort at ﬁlling in the gaps they have in both physics and numerical analysis. The exercises in the second category are more involved and aimed at well-prepared students. Those in the third category are mostly selected from current research topics, which will certainly beneﬁt those students who are going to do research in computational science. Programs for the examples discussed in the text are all written in standard Fortran 77, with a few exceptions that are available on almost all Fortran compilers. 
Some more advanced programming languages for data parallel or distributed computing are also discussed in Chapter 12. I have tried to keep all programs in the book structured and transparent, and I hope that anyone with knowledge of any programming language will be able to understand the content without extra effort. As a convention, all statements are written in upper case and all comments are given in lower case. From my experience, this is the best way of presenting a clear and concise Fortran program. Many sample programs in the text are explained in sufficient detail with commentary statements. I find that the most efficient approach to learning computational physics is to study well-prepared programs. Related programs used in the book can be accessed via the World Wide Web at the URL http://www.physics.unlv.edu/~pang/cp.html. Corresponding programs in C and Fortran 90 and other related materials will also be available at this site in the future.

This book can be used as a textbook for a computational physics course. If it is a one-semester course, my recommendation is to select materials from Chapters 1 through 7 and Chapter 11. Some sections, such as 4.6 through 4.8, 5.6, and 7.8, are good for graduate students or beginning researchers but may pose some challenges to most undergraduate students.

Tao Pang
Las Vegas, Nevada

Preface

Since the publication of the first edition of the book, I have received numerous comments and suggestions on the book from all over the world and from a far wider range of readers than anticipated. This is a firm testament to what I claimed in the Preface to the first edition: that computational physics is truly the foundation of computational science. The Internet, which connects all computerized parts of the world, has made it possible to communicate with students who are striving to learn modern science in distant places that I have never even heard of. The main drive for having a second edition of the book is to provide a new generation of science and engineering students with an up-to-date presentation of the subject.

In the last decade, we have witnessed steady progress in computational studies of scientific problems. Many complex issues are now analyzed and solved on computers. New paradigms of global-scale computing have emerged, such as the Grid and web computing. Computers are faster and come with more functions and capacity. There has never been a better time to study computational physics.

For this new edition, I have revised each chapter in the book thoroughly, incorporating many suggestions made by the readers of the first edition. There are more examples given with more sample programs and figures to make the explanation of the material easier to follow. More exercises are given to help students digest the material. Each sample program has been completely rewritten to reflect what I have learned in the last few years of teaching the subject. A lot of new material has been added to this edition, mainly in the areas in which computational physics has made significant progress and a difference in the last decade, including one chapter on the genetic algorithm and programming. Some material in the first edition has been removed, mainly because there are more detailed books on those subjects available or it appears to be out of date.
The website for this new edition is at http://www.physics.unlv.edu/~pang/cp2.html. References are cited for the sole purpose of providing more information for further study on the relevant subjects. Therefore they may not be the most authoritative or defining works. Most of them are given because of my familiarity with, or my easy access to, the cited materials. I have also tried to limit the number of references so the reader will not find them overwhelming. When I have had to choose, I have always picked the ones that I think will benefit the readers most.


Java is adopted as the instructional programming language in the book. The source codes are made available at the website. Java, an object-oriented and interpreted language, is the newest programming language that has made a major impact in the last few years. The strength of Java is in its ability to work with web browsers, its comprehensive API (application programming interface), and its built-in security and network support. Both the source code and bytecode can run on any computer that has Java, with exactly the same result. There are many advantages to Java, and its speed in scientific programming has steadily increased over the last few years. At the moment, a carefully written Java program, combined with static analysis, just-in-time compiling, and instruction-level optimization, can deliver nearly the same raw speed as C or Fortran. More scientists, especially those who are still in colleges or graduate schools, are expected to use Java as their primary programming language. This is why Java is used as the instructional language in this edition.

Currently, many new applications in science and engineering are being developed in Java worldwide to facilitate collaboration and to reduce programming time. This book will do its part in teaching students how to build their own programs appropriate for scientific computing. We do not know what will be the dominant programming language for scientific computing in the future, but we do know that scientific computing will continue playing a major role in fundamental research, knowledge development, and emerging technology.

Acknowledgments

Most of the material presented in this book has been strongly inﬂuenced by my research work in the last 20 years, and I am extremely grateful to the University of Minnesota, the Miller Institute for Basic Research in Science at the University of California, Berkeley, the National Science Foundation, the Department of Energy, and the W. M. Keck Foundation for their generous support of my research work. Numerous colleagues from all over the world have made contributions to this edition while using the ﬁrst edition of the book. My deepest gratitude goes to those who have communicated with me over the years regarding the topics covered in the book, especially those inspired young scholars who have constantly reminded me that the effort of writing this book is worthwhile, and the students who have taken the course from me.


Chapter 1

Introduction

Computing has become a necessary means of scientiﬁc study. Even in ancient times, the quantiﬁcation of gained knowledge played an essential role in the further development of mankind. In this chapter, we will discuss the role of computation in advancing scientiﬁc knowledge and outline the current status of computational science. We will only provide a quick tour of the subject here. A more detailed discussion on the development of computational science and computers can be found in Moreau (1984) and Nash (1990). Progress in parallel computing and global computing is elucidated in Koniges (2000), Foster and Kesselman (2003), and Abbas (2004).

1.1 Computation and science

Modern societies are not the only ones to rely on computation. Ancient societies also had to deal with quantifying their knowledge and events. It is interesting to see how the ancient societies developed their knowledge of numbers and calculations with different means and tools. There is evidence that carved bones and marked rocks were among the early tools used for recording numbers and values and for performing simple estimates more than 20 000 years ago.

The most commonly used number system today is the decimal system, which was in existence in India at least 1500 years ago. It has a radix (base) of 10. A number is represented by a string of figures, with each figure, taken from the ten available (0–9), occupying a different decimal level. The way a number is represented in the decimal system is not unique. All other number systems have similar structures, even though their radices are quite different; for example, the binary system used on all digital computers has a radix of 2. During almost the same era in which the Indians were using the decimal system, another number system using dots (each worth one) and bars (each worth five) on a base of 20 was invented by the Mayans. A symbol that looks like a closed eye was used for zero. It is still under debate whether the Mayans used a base of 18 instead of 20 after the first level of the hierarchy in their number formation. They applied these dots and bars to record multiplication tables. With the availability of those tables, the


Fig. 1.1 The Mayan number system: (a) examples of using dots and bars to represent numbers; (b) an example of recording multiplication.

Fig. 1.2 A circle inscribed and circumscribed by two hexagons. The inside polygon sets the lower bound while the outside polygon sets the upper bound of the circumference.


Mayans studied and calculated the period of lunar eclipses to great accuracy. An example of the Mayan number system is shown in Fig. 1.1.

One of the most fascinating numbers ever calculated in human history is π, the ratio of the circumference to the diameter of the circle. One of the methods of evaluating π was introduced by the Chinese mathematician Liu Hui, who published his result in a book in the third century. The circle was approached and bounded by two sets of regular polygons, one from outside and another from inside of the circle, as shown in Fig. 1.2. By evaluating the side lengths of two 192-sided regular polygons, Liu found that 3.1410 < π < 3.1427, and later he improved his result with a 3072-sided inscribed polygon to obtain π ≈ 3.1416. Two hundred years later, the Chinese mathematician and astronomer Zu Chongzhi and his son Zu Gengzhi carried this type of calculation much further by evaluating the side lengths of two 24 576-sided regular polygons. They concluded that 3.141 592 6 < π < 3.141 592 7, and pointed out that a good approximation was given by


π ≈ 355/113 = 3.141 592 9 . . . . This is extremely impressive considering the limited mathematics and computing tools that existed then. Furthermore, no one in the next 1000 years did a better job of evaluating π than the Zus.

The Zus could have done an even better job if they had had any additional help in either mathematical knowledge or computing tools. Let us quickly demonstrate this statement by considering a set of evaluations on polygons with a much smaller number of sides. In general, if the side length of a regular k-sided polygon is denoted as lk and the corresponding diameter is taken to be the unit of length, then the approximation of π is given by

πk = klk .    (1.1)
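In fact, Eq. (1.1) is easy to put to work on a computer. The side-doubling recurrence used below, l2k = √(2 − √(4 − (2lk)²))/2 for a circle of unit diameter, is not given in the text but follows from elementary geometry; starting from the hexagon, whose side equals the radius (l6 = 1/2), it reproduces the inscribed-polygon values used by Liu Hui and the Zus. A minimal sketch in Java (the class and method names are illustrative):

```java
// Approximate pi with inscribed regular polygons, as in Eq. (1.1).
// Starting from a hexagon (side = radius = 1/2 for a unit diameter),
// each step doubles the number of sides with the side-doubling
// recurrence l_{2k} = sqrt(2 - sqrt(4 - (2 l_k)^2))/2.
public class Polygon {
    public static double pik(int kmax) {
        int k = 6;
        double l = 0.5;               // side of the inscribed hexagon
        while (k < kmax) {
            l = Math.sqrt(2.0 - Math.sqrt(4.0 - 4.0*l*l)) / 2.0;
            k *= 2;
        }
        return k * l;                 // pi_k = k l_k, Eq. (1.1)
    }

    public static void main(String[] argv) {
        for (int k = 96; k <= 24576; k *= 2)
            System.out.println(k + " " + pik(k));
    }
}
```

Printing πk up to k = 24 576, the number of sides used by the Zus, shows the approach to π from below; in double precision the subtraction 2 − √(4 − (2lk)²) eventually loses accuracy, so for much larger k the recurrence would have to be rewritten to avoid the cancellation.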

The exact value of π is the limit of πk as k → ∞. The value of πk obtained from the calculations of the k-sided polygon can be formally written as

πk = π∞ + c1/k + c2/k² + c3/k³ + · · · ,    (1.2)

where π∞ = π and ci , for i = 1, 2, . . . , ∞, are the coefficients to be determined. The expansion in Eq. (1.2) is truncated in practice in order to obtain an approximation of π. Then the task left is to solve the equation set

∑_{j=1}^{n} aij xj = bi ,    (1.3)

for i = 1, 2, . . . , n, if the expansion in Eq. (1.2) is truncated at the (n − 1)th order of 1/k, with aij = 1/ki^(j−1), x1 = π∞ , xj = cj−1 for j > 1, and bi = πki . The approximation of π is then given by the approximate π∞ obtained by solving the equation set. For example, if π8 = 3.061 467, π16 = 3.121 445, π32 = 3.136 548, and π64 = 3.140 331 are given from the regular polygons inscribing the circle, we can truncate the expansion at the third order of 1/k and then solve the equation set (see Exercise 1.1) to obtain π∞ , c1 , c2 , and c3 from the given πk . The approximation π ≈ π∞ = 3.141 583 has five digits of accuracy, in comparison with the exact value π = 3.141 592 65 . . . . The values of πk for k = 8, 16, 32, 64 and the extrapolated π∞ are all plotted in Fig. 1.3. The evaluation can be further improved if we use more πk or ones with higher values of k. For example, we obtain π ≈ 3.141 592 62 if k = 32, 64, 128, 256 are used. Note that we are getting the same accuracy here as the evaluation of the Zus with polygons of 24 576 sides.

In a modern society, we need to deal with a lot more computations daily. Almost every event in science or technology requires quantification of the data involved. For example, before a jet aircraft can actually be manufactured, extensive computer simulations in different flight conditions must be performed to check whether there is a design flaw. This is not only necessary economically, but may help avoid loss of lives. A related use of computers is in the reconstruction of an unexpected flight accident. This is extremely important in preventing the same accident from happening again. A more common example is found in the cars


Fig. 1.3 The values of πk , with k = 8, 16, 32, and 64, plotted together with the extrapolated π∞ .


that we drive, each of which has a computer that takes care of the brakes, steering control, and other critical components. Almost any electronic device that we use today is probably powered by a computer, for example, a digital thermometer, a DVD (digital video disc) player, a pacemaker, a digital clock, or a microwave oven. The list can go on and on. It is fair to say that sophisticated computations delivered by computers every moment have become part of our lives, permanently.
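Before moving on, the extrapolation in Eqs. (1.2) and (1.3) can be made concrete with a short program. The sketch below solves the 4 × 4 system for k = 8, 16, 32, 64 by Gaussian elimination with partial pivoting, using the four πk values quoted in the text; the class and method names are illustrative, and the text itself leaves this calculation to Exercise 1.1:

```java
// Extrapolate pi_infinity from pi_k of inscribed polygons by solving
// the linear system of Eq. (1.3), with a_ij = 1/k_i^(j-1) and
// b_i = pi_{k_i}; x_1 is the extrapolated pi of Eq. (1.2).
public class PiExtrapolation {
    public static double extrapolate(double[] k, double[] pik) {
        int n = k.length;
        double[][] a = new double[n][n];
        double[] b = pik.clone();
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                a[i][j] = Math.pow(1.0/k[i], j);   // a_ij = 1/k_i^(j-1)

        // forward elimination with partial pivoting
        for (int p = 0; p < n; ++p) {
            int m = p;
            for (int i = p+1; i < n; ++i)
                if (Math.abs(a[i][p]) > Math.abs(a[m][p])) m = i;
            double[] ta = a[p]; a[p] = a[m]; a[m] = ta;
            double tb = b[p];   b[p] = b[m]; b[m] = tb;
            for (int i = p+1; i < n; ++i) {
                double r = a[i][p]/a[p][p];
                for (int j = p; j < n; ++j) a[i][j] -= r*a[p][j];
                b[i] -= r*b[p];
            }
        }

        // back substitution
        double[] x = new double[n];
        for (int i = n-1; i >= 0; --i) {
            double s = b[i];
            for (int j = i+1; j < n; ++j) s -= a[i][j]*x[j];
            x[i] = s/a[i][i];
        }
        return x[0];                               // x_1 = pi_infinity
    }

    public static void main(String[] argv) {
        double[] k = {8, 16, 32, 64};
        double[] pik = {3.061467, 3.121445, 3.136548, 3.140331};
        System.out.println("pi is approximately " + extrapolate(k, pik));
    }
}
```

With the four values from the text, the program reproduces the extrapolation π∞ ≈ 3.141 583 quoted above to within the rounding of the input data.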

1.2 The emergence of modern computers

The advantage of having a reliable, robust calculating device was realized a long time ago. The early abacus, which was used for counting, was in existence with the Babylonians 4000 years ago. The Chinese abacus, which appeared at least 3000 years ago, was perhaps the first comprehensive calculating device that was actually used in performing addition, subtraction, multiplication, and division, and it was employed for several thousand years.

A traditional Chinese abacus is made of a rectangular wooden frame and a bar going through the upper middle of the frame horizontally; see Fig. 1.4. There are thirteen evenly spaced vertical rods, each representing one decimal level. More rods were added to later versions. On each rod, there are seven beads that can be slid up and down, with five of them held below the middle bar and two above. Zero on each rod is represented by the beads below the middle bar at the very bottom and the beads above at the very top. The numbers one to four are represented by sliding one to four beads below the middle bar up, and five is given by sliding one bead above down. The numbers six to nine are represented by one bead above the middle bar slid down and one to four beads below slid up. The first and last beads on each rod are never used or are only used cosmetically during a calculation. The Japanese abacus, which was modeled on the Chinese abacus, in fact has twenty-one rods, with only five beads


Fig. 1.4 A sketch of a Chinese abacus with the number 15 963.82 shown.

on each rod, one above and four below the middle bar. Dots are marked on the middle bar for the decimal point and for every four orders (ten thousands) of digits. The abacus had to be replaced by the slide rule or numerical tables when a calculation went beyond the four basic operations, even though later versions of the Chinese abacus could also be used to evaluate square roots and cubic roots.

The slide rule, which is considered to be the next major advance in calculating devices, was introduced by the Englishmen Edmund Gunter and Reverend William Oughtred in the mid-seventeenth century, based on the logarithmic table published by the Scottish mathematician John Napier in a book in the early seventeenth century. Over the next several hundred years, the slide rule was improved and used worldwide to deliver the impressive computations needed, especially during the Industrial Revolution. At about the same time as the introduction of the slide rule, the Frenchman Blaise Pascal invented the mechanical calculating machine with gears of different sizes. The mechanical calculating machine was enhanced and applied extensively in heavy-duty computing tasks before digital computers came into existence.

The concept of an all-purpose, automatic, and programmable computing machine was introduced by the British mathematician and astronomer Charles Babbage in the early nineteenth century. After building part of a mechanical calculating machine that he called a difference engine, Babbage proposed constructing a computing machine, called an analytical engine, which could be programmed to perform any type of computation. Unfortunately, the technology at the time was not advanced enough to provide Babbage with the necessary machinery to realize his dream. In the late nineteenth century, the Spanish engineer Leonardo Torres y Quevedo showed that it might be possible to construct the machine conceived earlier by Babbage using the electromechanical technology that had just been developed.
However, he could not actually build the whole machine either, due to lack of funds. The American engineer and inventor Herman Hollerith built the very first electromechanical counting machine, which was commissioned by the US federal government for sorting the population in the 1890 American census. Hollerith used the profit obtained from selling this machine to set up a company, the Tabulating Machine Company, the predecessor of IBM (International


Business Machines Corporation). These developments continued in the early twentieth century. In the 1930s, scientists and engineers at IBM built the first difference tabulator, while researchers at Bell Laboratories built the first relay calculator. These were among the very first electromechanical calculators built during that time.

The real beginning of the computer era came with the advent of electronic digital computers. John Vincent Atanasoff, a theoretical physicist at Iowa State University at Ames, invented the electronic digital computer between 1937 and 1939. The history regarding Atanasoff's accomplishment is described in Mackintosh (1987), Burks and Burks (1988), and Mollenhoff (1988). Atanasoff introduced vacuum tubes (instead of the electromechanical devices used earlier by other people) as basic elements, a separated memory unit, and a scheme to keep the memory updated in his computer. With the assistance of Clifford E. Berry, a graduate assistant, Atanasoff built the very first electronic computer in 1939.

Most computer history books have cited ENIAC (Electronic Numerical Integrator and Computer), built by John W. Mauchly and J. Presper Eckert with their colleagues at the Moore School of the University of Pennsylvania in 1945, as the first electronic computer. ENIAC, with a total mass of more than 30 tons, consisted of 18 000 vacuum tubes, 15 000 relays, and several hundred thousand resistors, capacitors, and inductors. It could complete about 5000 additions or 400 multiplications in one second. Some very impressive scientific computations were performed on ENIAC, including the study of nuclear fission with the liquid drop model by Metropolis and Frankel (1947). In the early 1950s, scientists at Los Alamos built another electronic digital computer, called MANIAC I (Mathematical Analyzer, Numerator, Integrator, and Computer), which was very similar to ENIAC.
Many important numerical studies, including the Monte Carlo simulation of classical liquids (Metropolis et al., 1953), were completed on MANIAC I. All these research-intensive activities accomplished in the 1950s showed that computation was no longer just a supporting tool for scientific research but rather an actual means of probing scientific problems and predicting new scientific phenomena. A new branch of science, computational science, was born. Since then, the field of scientific computing has developed and grown rapidly.

The computational power of new computers has been increasing exponentially. To be specific, the computing power of a single computer unit has doubled almost every 2 years in the last 50 years. This growth followed the observation of Gordon Moore, co-founder of Intel, that information stored on a given amount of silicon surface had doubled, and would continue to do so, about every 2 years since the introduction of silicon technology (nicknamed Moore's law). Computers with transistors replaced those with vacuum tubes in the late 1950s and early 1960s, and computers with very-large-scale integrated circuits were built in the 1970s. Microprocessors and vector processors were built in the mid-1970s to set the


stage for personal computing and supercomputing. In the 1980s, microprocessor-based personal computers and workstations appeared. Now they have penetrated all aspects of our lives, as well as all scientific disciplines, because of their affordability and low maintenance cost. With technological breakthroughs in the RISC (Reduced Instruction Set Computer) architecture, cache memory, and multiple instruction units, the capacity of each microprocessor is now larger than that of a supercomputer 10 years ago. In the last few years, these fast microprocessors have been combined to form parallel or distributed computers, which can easily deliver a computing power of a few tens of gigaflops (10⁹ floating-point operations per second). New computing paradigms such as the Grid were introduced to utilize computing resources on a global scale via the Internet (Foster and Kesselman, 2003; Abbas, 2004).

Teraflop (10¹² floating-point operations per second) computers are now emerging. For example, Q, a newly installed computer at the Los Alamos National Laboratory, has a capacity of 30 teraflops. With the availability of teraflop computers, scientists can start unfolding the mysteries of the grand challenges, such as the dynamics of the global environment; the mechanism of DNA (deoxyribonucleic acid) sequencing; computer design of drugs to cope with deadly viruses; and computer simulation of future electronic materials, structures, and devices. Even though there are certain problems that computers cannot solve, as pointed out by Harel (2000), and hardware and software failures can be fatal, the human minds behind computers are nevertheless unlimited. Computers will never replace human beings in this regard, and the quest for a better understanding of Nature will go on no matter how difficult the journey is. Computers will certainly help to make that journey more colorful and pleasant.

1.3 Computer algorithms and languages

Before we can use a computer to solve a specific problem, we must instruct the computer to follow certain procedures and to carry out the desired computational task. The process involves two steps. First, we need to transform the problem, typically in the form of an equation, into a set of logical steps that a computer can follow; second, we need to inform the computer to complete these logical steps.

Computer algorithms

The complete set of the logical steps for a specific computational problem is called a computer or numerical algorithm. Some popular numerical algorithms can be traced back over 100 years. For example, Carl Friedrich Gauss (1866) published an article on the FFT (fast Fourier transform) algorithm (Goldstine, 1977,


pp. 249–53). Of course, Gauss could not have envisioned having his algorithm realized on a computer.

Let us use a very simple and familiar example in physics to illustrate how a typical numerical algorithm is constructed. Assume that a particle of mass m is confined to move along the x axis under a force f(x). If we describe its motion with Newton's equation, we have

f = ma = m dv/dt ,    (1.4)

where a and v are the acceleration and velocity of the particle, respectively, and t is the time. If we divide the time into small, equal intervals τ = ti+1 − ti , we know from elementary physics that the velocity at time ti is approximately given by the average velocity in the time interval [ti , ti+1 ],

vi ≈ (xi+1 − xi)/(ti+1 − ti) = (xi+1 − xi)/τ ;    (1.5)

the corresponding acceleration is approximately given by the average acceleration in the same time interval,

ai ≈ (vi+1 − vi)/(ti+1 − ti) = (vi+1 − vi)/τ ,    (1.6)

as long as τ is small enough. The simplest algorithm for finding the position and velocity of the particle at time ti+1 from the corresponding quantities at time ti is obtained after combining Eqs. (1.4), (1.5), and (1.6), and we have

xi+1 = xi + τ vi ,    (1.7)
vi+1 = vi + (τ/m) fi ,    (1.8)

where fi = f(xi). If the initial position and velocity of the particle are given and the corresponding quantities at some later time are sought (the initial-value problem), we can obtain them recursively from the algorithm given in Eqs. (1.7) and (1.8). This algorithm is commonly known as the Euler method for the initial-value problem.

This simple example illustrates how most algorithms are constructed. First, physical equations are transformed into discrete forms, namely, difference equations. Then the desired physical quantities or solutions of the equations at different variable points are given in a recursive manner, with the quantities at a later point expressed in terms of the quantities from earlier points. In the above example, the position and velocity of the particle at ti+1 are given by the position and velocity at ti , provided that the force at any position is explicitly given by a function of the position. Note that the above way of constructing an algorithm is not limited to one-dimensional or single-particle problems. In fact, we can immediately generalize this algorithm to two-dimensional and three-dimensional problems, or to problems involving more than one particle, such as the


motion of a projectile or a system of three charged particles. The generalized version of the above algorithm is

Ri+1 = Ri + τ Vi ,    (1.9)
Vi+1 = Vi + τ Ai ,    (1.10)

where R = (r1 , r2 , . . . , rn) is the position vector of all the n particles in the system; V = (v1 , v2 , . . . , vn) and A = (a1 , a2 , . . . , an), with aj = fj/mj for j = 1, 2, . . . , n, are the corresponding velocity and acceleration vectors, respectively.

From a theoretical point of view, the Turing machine is an abstract representation of a universal computer and also a device to autopsy any algorithm. The concept was introduced by Alan Turing (1936–7) with a description of the universal computer that consists of a read and write head and a tape with an infinite number of units of binaries (0 or 1). The machine is in a specified state for a given moment of operation and follows instructions prescribed by a finite table. A computer algorithm is a set of logical steps that can be achieved by the Turing machine. Logical steps that cannot be achieved by the Turing machine belong to the class of problems that are not solvable by computers. Some such unsolvable problems are discussed by Harel (2000).

The logical steps in an algorithm can be sequential, parallel, or iterative (implicit). How to utilize the properties of a given problem in constructing a fast and accurate algorithm is a very important issue in computational science. It is hoped that the examples discussed in this book will help students learn how to establish efficient and accurate algorithms, as well as how to write clean and structured computer programs, for most problems encountered in physics and related fields.
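The generalized scheme of Eqs. (1.9) and (1.10) can be sketched for a simple case, a single projectile moving in two dimensions under constant gravity. The class name, parameter values, and stopping criterion below are illustrative choices, not taken from the text:

```java
// Projectile motion in two dimensions with the generalized Euler
// scheme of Eqs. (1.9) and (1.10): R_{i+1} = R_i + tau V_i,
// V_{i+1} = V_i + tau A_i, with the acceleration A = (0, -g).
public class Projectile {
    static final double g = 9.8;   // gravitational acceleration (m/s^2)

    // Integrate until the particle returns to the ground and return
    // the horizontal range, for launch speed v0 at angle theta.
    public static double range(double v0, double theta, double tau) {
        double[] r = {0, 0};
        double[] v = {v0*Math.cos(theta), v0*Math.sin(theta)};
        double[] a = {0, -g};
        do {
            for (int d = 0; d < 2; ++d) {
                r[d] += tau*v[d];   // Eq. (1.9)
                v[d] += tau*a[d];   // Eq. (1.10)
            }
        } while (r[1] > 0);
        return r[0];
    }

    public static void main(String[] argv) {
        // launch at 45 degrees with v0^2/g = 10, so the exact range is 10 m
        System.out.println(range(Math.sqrt(98.0), Math.PI/4, 1.0e-5));
    }
}
```

With a small enough τ the computed range approaches the analytic value v0² sin(2θ)/g, and the same loop structure carries over unchanged to three dimensions or to several particles.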

Computer languages

Computer programs are the means through which we communicate with computers. The very first computer program was written by Ada Byron, the Countess of Lovelace, and was intended for the analytical engine proposed by Babbage in the mid-1840s. To honor her achievement, an object-oriented programming language (Ada), initially developed by the US military, is named after her.

A computer program or code is a collection of statements, typically written in a well-defined computer programming language. Programming languages can be divided into two major categories: low-level languages designed to work with the given hardware, and high-level languages that are not related to any specific hardware. Simple machine languages and assembly languages were the only ones available before the development of high-level languages. A machine language is typically in binary form and is designed to work with the unique hardware of a computer. For example, a statement, such as adding or multiplying two integers, is represented by one or several binary strings that the computer can recognize and follow. This is very efficient from the computer's point of view, but extremely


labor-intensive from that of a programmer. To remember all the binary strings for all the statements is a nontrivial task, and to debug a program in binaries is a formidable task. Soon after the invention of the digital computer, assembly languages were introduced to increase the efficiency of programming and debugging. They are more advanced than machine languages because they have adopted symbolic addresses. But they are still related to a certain architecture and wiring of the system. A translating device called an assembler is needed to convert an assembly code into a native machine code before a computer can recognize the instructions. Machine languages and assembly languages do not have portability; a program written for one kind of computer could never be used on others. The solution to such a problem is clearly desirable.

We need high-level languages that are not associated with the unique hardware of a computer and that can work on all computers. Ideal programming languages would be those that are very concise but also close to the logic of human languages. Many high-level programming languages are now available, and the choice of using a specific programming language on a given computer is more or less a matter of personal taste. Most high-level languages function similarly. However, for a researcher who is working at the cutting edge of scientific computing, the speed and capacity of a computing system, including the efficiency of the language involved, become critical.

A modern computer program conveys the tasks of an algorithm for a computational problem to a computer. The program cannot be executed by the computer before it is translated into the native machine code. A translator, a program called a compiler, is used to translate (or compile) the program to produce an executable file in binaries.
Most compilers also have an option to produce an object file first and then link it with other object files and library routines to produce a combined executable file. The compiler is able to detect most errors introduced during programming, that is, the process of writing a program in a high-level language. After running the executable program, the computer will output the result as instructed.

The newest programming language that has made a major impact in the last few years is Java, an object-oriented, interpreted language. The strength of Java lies in its ability to work with web browsers, its comprehensive GUI (graphical user interface), and its built-in security and network support. Java is a truly universal language because it is fully platform-independent: "write once, run everywhere" is the motto that Sun Microsystems uses to qualify all the features in Java. Both the source code and the compiled code can run on any computer that has Java installed, with exactly the same result. The Java compiler converts the source code (file.java) into a bytecode (file.class), which contains instructions in fixed-length byte strings and can be interpreted/executed on any computer under the Java interpreter, called the JVM (Java Virtual Machine). There are many advantages to Java, and its speed in scientific programming has steadily increased over the last few years. At the moment, a carefully written Java program, combined with static analysis, just-in-time compiling, and

1.3 Computer algorithms and languages

instruction-level optimization, can deliver nearly the same raw speed as the incumbent C or Fortran (Boisvert et al., 2001). Let us use the algorithm that we highlighted earlier for a particle moving along the x axis to show how an algorithm is translated into a program in Java. For simplicity, the force is taken to be an elastic force f (x) = −kx, where k is the elastic constant. We will also use m = k = 1 for convenience. The following Java program is an implementation of the algorithm given in Eqs. (1.7) and (1.8); each statement in the program is almost self-explanatory. // An example of studying the motion of a particle in // one dimension under an elastic force. import java.lang.*; public class Motion { static final int n = 100000, j = 500; public static void main(String argv[]) { double x[] = new double[n+1]; double v[] = new double[n+1]; // Assign double x[0] = v[0] =

time step and initial position and velocity dt = 2*Math.PI/n; 0; 1;

// Calculate other position and velocity recursively for (int i=0; i j

(8.29)

because g(r ) can be interpreted as the probability of seeing another particle at a distance r . Then we have P = ρ kB T +

ρ w + P, N

(8.30)

231

232

Molecular dynamics simulations

which can be evaluated quite easily, because at every time step the force fi j = −∇V (ri j ) is calculated for each particle pair.
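As an illustration, the pair loop that computes the forces can accumulate the virial at the same time. The following sketch assumes a Lennard-Jones interaction in reduced units and free boundaries; the class and method names are ours, not from the text.

```java
// Sketch: accumulate w = (1/3) sum_{i>j} r_ij . f_ij during the pair loop.
// Illustrative Lennard-Jones interaction in reduced units, free boundaries.
public class Virial {
  public static double w(double x[][]) {
    double sum = 0;
    for (int i = 0; i < x.length; ++i)
      for (int j = 0; j < i; ++j) {
        double dx = x[i][0]-x[j][0], dy = x[i][1]-x[j][1], dz = x[i][2]-x[j][2];
        double r2 = dx*dx + dy*dy + dz*dz;
        double r6 = 1/(r2*r2*r2);
        sum += 24*(2*r6*r6 - r6);   // r_ij . f_ij for V(r) = 4(r^-12 - r^-6)
      }
    return sum/3;
  }
  public static void main(String argv[]) {
    // two particles at the potential minimum r = 2^(1/6): the force,
    // and therefore the virial, vanishes
    double x[][] = {{0, 0, 0}, {Math.pow(2, 1.0/6), 0, 0}};
    System.out.println(w(x));
  }
}
```

The instantaneous w would then be averaged over the run and inserted into Eq. (8.30).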

8.3 The Verlet algorithm

Hamilton's equations given in Eqs. (8.2) and (8.3) are equivalent to Newton's equation

    m_i d^2 r_i/dt^2 = f_i,    (8.31)

for the ith particle in the system. To simplify the notation, we will use R to represent all the coordinates (r_1, r_2, . . . , r_N) and G to represent all the accelerations (f_1/m_1, f_2/m_2, . . . , f_N/m_N). Thus, we can rewrite Newton's equations for all the particles as

    d^2 R/dt^2 = G.    (8.32)

If we apply the three-point formula to the second-order derivative d^2 R/dt^2, we have

    d^2 R/dt^2 = (R_{k+1} - 2R_k + R_{k-1})/τ^2 + O(τ^2),    (8.33)

with t = kτ. We can also apply the three-point formula to the velocity

    V = dR/dt = (R_{k+1} - R_{k-1})/2τ + O(τ^2).    (8.34)

After we put all the above together, we obtain the simplest algorithm for a classical many-body system, the Verlet algorithm:

    R_{k+1} = 2R_k - R_{k-1} + τ^2 G_k + O(τ^4),    (8.35)
    V_k = (R_{k+1} - R_{k-1})/2τ + O(τ^2).    (8.36)
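As a minimal sketch of the recursion in Eq. (8.35), consider the one-dimensional oscillator with m = k = 1, so that G_k = -x_k; the first step here is taken from constant-acceleration kinematics, and the class name and step count are illustrative.

```java
// Sketch: the Verlet recursion x_{k+1} = 2x_k - x_{k-1} + tau^2 g_k
// for a 1D oscillator with m = k = 1, so the acceleration is g = -x.
public class Verlet1D {
  public static double[] run(int n, double tau, double x0, double v0) {
    double x[] = new double[n+1];
    x[0] = x0;
    x[1] = x0 + tau*v0 - tau*tau/2*x0;     // first step: constant acceleration
    for (int k = 1; k < n; ++k)
      x[k+1] = 2*x[k] - x[k-1] - tau*tau*x[k];
    return x;
  }
  public static void main(String argv[]) {
    int n = 1000;
    double x[] = run(n, 2*Math.PI/n, 1, 0);  // integrate cos(t) over one period
    System.out.println(x[n]);                // close to the exact value x = 1
  }
}
```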

The Verlet algorithm can be started if the first two positions R_0 and R_1 of the particles are given. However, in practice, only the initial position R_0 and initial velocity V_0 are given. Therefore, we need to figure out R_1 before we can start the recursion. A common practice is to treat the force during the first time interval [0, τ] as a constant, and then to apply the kinematic equation to obtain

    R_1 ≃ R_0 + τ V_0 + (τ^2/2) G_0,    (8.37)

where G0 is the acceleration vector evaluated at the initial conﬁguration R0 . Of course, the position R1 can be improved by carrying out the Taylor expansion to higher-order terms if the accuracy in the ﬁrst two points is critical. We can also replace G0 in Eq. (8.37) with the average (G0 + G1 )/2, with G1 evaluated at R1 , given from Eq. (8.37). This procedure can be iterated several


times before starting the algorithm for the velocity V_1 and the next position R_2. The Verlet algorithm has advantages and disadvantages. It preserves the time reversibility that is one of the important properties of Newton's equation, although the rounding error may eventually destroy this time symmetry. The error in the velocity is two orders of τ higher than the error in the position. In many applications we may only need information about the positions of the particles, and the Verlet algorithm yields very high accuracy for the position. If the velocity is not needed, we can ignore its evaluation entirely, since the evaluation of the position does not depend on the velocity at each time step. The biggest disadvantage of the Verlet algorithm is that the velocity is evaluated one time step behind the position. However, this lag can be removed if the velocity is evaluated directly from the force. A two-point formula would yield

    V_{k+1} = V_k + τ G_k + O(τ^2).    (8.38)

We would get much better accuracy if we replaced G_k with the average (G_k + G_{k+1})/2. The new position can be obtained by treating the motion within t ∈ [kτ, (k+1)τ] as motion with a constant acceleration G_k; that is,

    R_{k+1} = R_k + τ V_k + (τ^2/2) G_k.    (8.39)

Then a variation of the Verlet algorithm with the velocity calculated at the same time step as the position is

    R_{k+1} = R_k + τ V_k + (τ^2/2) G_k + O(τ^4),    (8.40)
    V_{k+1} = V_k + (τ/2)(G_{k+1} + G_k) + O(τ^2).    (8.41)

Note that the evaluation of the position still has the same accuracy, because the velocity is now updated according to Eq. (8.41), which provides the cancellation of the third-order term in the new position. Here we demonstrate this velocity version of the Verlet algorithm with a very simple example, the motion of Halley's comet, which has a period of about 76 years. The potential between the comet and the Sun is given by

    V(r) = -G M m/r,    (8.42)

where r is the distance between the comet and the Sun, M and m are the respective masses of the Sun and the comet, and G is the gravitational constant. If we use the center-of-mass coordinate system as described in Chapter 3 for the two-body collision, the dynamics of the comet is governed by

    μ d^2 r/dt^2 = f = -G M m r/r^3,    (8.43)

with the reduced mass

    μ = M m/(M + m) ≃ m,    (8.44)

for the case of Halley's comet. We can take the farthest point (aphelion) as the starting point, and then we can easily obtain the comet's whole orbit with one of the versions of the Verlet algorithm. Two constants of motion, the total energy and the angular momentum, can be taken as the known quantities. We can describe the motion of the comet in the xy plane and choose x_0 = r_max, v_{x0} = 0, y_0 = 0, and v_{y0} = v_min. From well-known results we have that r_max = 5.28 × 10^12 m and v_min = 9.13 × 10^2 m/s. Let us apply the velocity version of the Verlet algorithm to this problem. Then we have

    x^{(k+1)} = x^{(k)} + τ v_x^{(k)} + (τ^2/2) g_x^{(k)},    (8.45)
    v_x^{(k+1)} = v_x^{(k)} + (τ/2)[g_x^{(k+1)} + g_x^{(k)}],    (8.46)
    y^{(k+1)} = y^{(k)} + τ v_y^{(k)} + (τ^2/2) g_y^{(k)},    (8.47)
    v_y^{(k+1)} = v_y^{(k)} + (τ/2)[g_y^{(k+1)} + g_y^{(k)}],    (8.48)

where the time-step index is given in parentheses as superscripts in order to distinguish it from the x-component or y-component index. The acceleration components are given by

    g_x = -κ x/r^3,    (8.49)
    g_y = -κ y/r^3,    (8.50)

with r = sqrt(x^2 + y^2) and κ = GM. We can use more specific units in the numerical calculations, for example, 76 years as the time unit and the semimajor axis of the orbit a = 2.68 × 10^12 m as the length unit. Then we have r_max = 1.97, v_min = 0.816, and κ = 39.5. The following program is the implementation of the algorithm outlined above for Halley's comet.

    // An example to study the time-dependent position and
    // velocity of Halley's comet via the Verlet algorithm.
    import java.lang.*;
    public class Comet {
      static final int n = 20000, m = 200;
      public static void main(String argv[]) {
        double t[] = new double[n];
        double x[] = new double[n];
        double y[] = new double[n];
        double r[] = new double[n];
        double vx[] = new double[n];
        double vy[] = new double[n];
        double gx[] = new double[n];
        double gy[] = new double[n];
        double h = 2.0/(n-1), h2 = h*h/2, k = 39.478428;

8.3 The Verlet algorithm

    [Fig. 8.1 The distance between Halley's comet and the Sun calculated with the program given in the text. The period of the comet is used as the unit of time, and the semimajor axis of the orbit is used as the unit of distance.]
        // Initialization of the problem
        t[0] = 0;
        x[0] = 1.966843;
        y[0] = 0;
        r[0] = x[0];
        vx[0] = 0;
        vy[0] = 0.815795;
        gx[0] = -k/(r[0]*r[0]);
        gy[0] = 0;

        // Verlet algorithm for the position and velocity
        for (int i=0; i<n-1; ++i) {
          t[i+1] = h*(i+1);
          x[i+1] = x[i] + h*vx[i] + h2*gx[i];
          y[i+1] = y[i] + h*vy[i] + h2*gy[i];
          double r2 = x[i+1]*x[i+1] + y[i+1]*y[i+1];
          r[i+1] = Math.sqrt(r2);
          double r3 = r2*r[i+1];
          gx[i+1] = -k*x[i+1]/r3;
          gy[i+1] = -k*y[i+1]/r3;
          vx[i+1] = vx[i] + h*(gx[i+1]+gx[i])/2;
          vy[i+1] = vy[i] + h*(gy[i+1]+gy[i])/2;
        }

        // Output the result in every m time steps
        for (int i=0; i<n; i+=m)
          System.out.println(t[i] + " " + r[i]);
      }
    }

8.6 Constant pressure, temperature, and bond length

The constant-temperature scheme of Nosé is built on the extended Lagrangian

    L = Σ_{i=1}^N (m_i/2) s^2 ẋ_i^2 - Σ_{i>j} V(r_ij) + (m_s/2) v_s^2 - G k_B T ln s,    (8.89)

where s and v_s are the coordinate and velocity of an introduced fictitious variable that rescales the time and the kinetic energy in order to have the constraint of the canonical ensemble satisfied. The rescaling is achieved by replacing the time element dt with dt/s and holding the coordinates unchanged, that is, x_i = r_i. The velocity is rescaled with time: ṙ_i = s ẋ_i. We can then obtain the equations of motion for the coordinate x_i and the variable s by applying the Lagrange equation. Hoover (1985; 1999) showed that the Nosé Lagrangian leads to a set of equations very similar to the result of the constraint force scheme discussed at the beginning of this subsection. The Nosé equations of motion are given in the Hoover version by

    ṙ_i = v_i,    (8.90)
    v̇_i = f_i/m_i - η v_i,    (8.91)

where η is given in a differential form,

    η̇ = (1/m_s) [Σ_{i=1}^N p_i^2/m_i - G k_B T],    (8.92)

and the original variable s, introduced by Nosé, is related to η by

    s = s_0 e^{η(t-t_0)},    (8.93)

with s_0 as the initial value of s at t = t_0. We can discretize the above equation set easily with either the Verlet algorithm or one of the Gear schemes. Note that the behavior of the parameter s is no longer directly related to the simulation; it is merely a parameter Nosé introduced to model the microscopic processes happening in the constant-temperature environment. We can also combine the Andersen constant-pressure scheme with the Nosé constant-temperature scheme in a single effective Lagrangian, with Ω = L^3 the volume of the simulation box:

    L = Σ_{i=1}^N (m_i/2) s^2 L^2 ẋ_i^2 - Σ_{i>j} V(L x_ij) + (m_s/2) ṡ^2 - G k_B T ln s + (M/2) Ω̇^2 - P_0 Ω,    (8.94)

which is worked out in detail in the original work of Nosé (1984a; 1984b). Another constant-temperature scheme was introduced by Berendsen et al. (1984) with the parameter η given by

    η = (1/m_s) [Σ_{i=1}^N m_i v_i^2 - G k_B T],    (8.95)

which can be interpreted as a similar form of the constraint that differs from the Hoover–Nosé form in the choice of η. For a review on the subject, see Nosé (1991).
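To make the structure of Eqs. (8.90)–(8.92) concrete, here is a sketch of explicit time steps for one oscillator with m = k = 1 and G = 1 degree of freedom; the simple Euler discretization, the parameter values, and the names are ours (in practice one would use the Verlet or Gear schemes mentioned above).

```java
// Sketch: explicit Euler steps of the Nose-Hoover equations (8.90)-(8.92)
// for a single 1D oscillator (m = k = 1, one degree of freedom, kB = 1).
public class NoseHoover {
  static double x = 1, v = 0, eta = 0;   // position, velocity, friction
  static final double kT = 0.5, ms = 1;  // target temperature, thermostat "mass"
  public static void step(double dt) {
    double f = -x;                       // harmonic force
    x   += dt*v;                         // Eq. (8.90)
    v   += dt*(f - eta*v);               // Eq. (8.91)
    eta += dt*(v*v - kT)/ms;             // Eq. (8.92) with p^2/m = v^2, G = 1
  }
  public static void main(String argv[]) {
    for (int i = 0; i < 100000; ++i) step(0.001);
    // eta acts as a friction coefficient that fluctuates about zero once the
    // average kinetic energy matches the target temperature
    System.out.println(x + " " + v + " " + eta);
  }
}
```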

Constant bond length

Another issue we have to deal with in practice is that for large molecular systems, such as biopolymers, the bond length of a pair of nearest neighbors does not change very much even though the angle between a pair of nearest bonds does. If we want to obtain accurate simulation results, we have to choose a time step much smaller than the period of the vibration of each pair of atoms. This costs a lot of computing time and might rule out the application of the simulation to more complicated systems, such as biopolymers. A procedure commonly known as the SHAKE algorithm (Ryckaert, Ciccotti, and Berendsen, 1977; van Gunsteren and Berendsen, 1977) was introduced to deal with the constraint on the distance between a pair of particles in the system. The idea of this procedure is to adjust each pair of particles iteratively to have

    |r_ij^2 - d_ij^2|/d_ij^2 ≤ ε    (8.96)

in each time step. Here d_ij is the distance constraint between the ith and jth particles and ε is the tolerance in the simulation. The adjustment of the position of each particle is performed after each time step of the molecular dynamics simulation. Assume that we are working on a specific pair of particles for the lth constraint and that we would like to have

    (r_ij + δr_ij)^2 - d_ij^2 = 0,    (8.97)


where r_ij = r_j - r_i is the new position vector difference after a molecular time step starting from r_ij^{(0)}, and the adjustments for the l - 1 preceding constraints have been completed. Here δr_ij = δr_j - δr_i is the total amount of adjustment needed for both particles. One can show, in conjunction with the Verlet algorithm, that the adjustments needed are given by

    m_i δr_i = -g_ij r_ij^{(0)} = -m_j δr_j,    (8.98)

with g_ij as a parameter to be determined. The center of mass of these two particles remains the same during the adjustment. If we substitute δr_i and δr_j given in Eq. (8.98), we obtain

    [r_ij^{(0)}]^2 g_ij^2 + 2μ_ij r_ij^{(0)} · r_ij g_ij + μ_ij^2 (r_ij^2 - d_ij^2) = 0,    (8.99)

where μ_ij = m_i m_j/(m_i + m_j) is the reduced mass of the two particles. If we keep only the linear term in g_ij, we have

    g_ij = μ_ij (d_ij^2 - r_ij^2)/(2 r_ij^{(0)} · r_ij),    (8.100)

which is reasonable, because gi j is a small number during the simulation. More importantly, by the end of the iteration, all the constraints will be satisﬁed as well; all gi j go to zero at the convergence. Equation (8.100) is used to estimate each gi j for each constraint in each iteration. After one has the estimate of gi j for each constraint, the positions of the relevant particles are all adjusted. The adjustments have to be performed several times until the convergence is reached. For more details on the algorithm, see Ryckaert et al. (1977). This procedure has been used in the simulation of chain-like systems as well as of proteins and nucleic acids. Interested readers can ﬁnd some detailed discussions on the dynamics of proteins and nucleic acids in McCammon and Harvey (1987).
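A sketch of a single constraint adjustment, Eqs. (8.98) and (8.100), for one pair with equal masses m_i = m_j = 1 (so μ_ij = 1/2); the names and the equal-mass choice are illustrative. Iterating the adjustment drives the pair back to the bond length d.

```java
// Sketch: one SHAKE adjustment of a pair, Eqs. (8.98) and (8.100),
// for equal masses m_i = m_j = 1, so mu_ij = 1/2.
public class Shake {
  // r1, r2: positions after the unconstrained step; r10, r20: before it.
  public static void adjust(double r1[], double r2[],
                            double r10[], double r20[], double d) {
    double dot = 0, rr = 0;
    double rij[] = new double[3], rij0[] = new double[3];
    for (int a = 0; a < 3; ++a) {
      rij[a] = r2[a] - r1[a];            // current pair vector r_ij
      rij0[a] = r20[a] - r10[a];         // pair vector r_ij^(0) before the step
      rr  += rij[a]*rij[a];
      dot += rij0[a]*rij[a];
    }
    double g = 0.5*(d*d - rr)/(2*dot);   // Eq. (8.100) with mu_ij = 1/2
    for (int a = 0; a < 3; ++a) {        // Eq. (8.98): m_i dr_i = -g r_ij^(0)
      r1[a] -= g*rij0[a];
      r2[a] += g*rij0[a];
    }
  }
  public static void main(String argv[]) {
    double r1[] = {0,0,0}, r2[] = {1.2,0,0};      // stretched after a step
    double r10[] = {0,0,0}, r20[] = {1,0,0};      // constrained length d = 1
    for (int it = 0; it < 4; ++it) adjust(r1, r2, r10, r20, 1);
    System.out.println(r2[0] - r1[0]);            // separation converges to 1
  }
}
```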

8.7 Structure and dynamics of real materials

In this section, we will discuss some typical methods used to extract information about the structure and dynamics of real materials in molecular dynamics simulations. A numerical simulation of a specific material starts with a determination of the interaction potential in the system. In most cases, the interaction potential is formulated in a parameterized form, which is usually determined separately from the available experimental data, first-principles calculations, and the conditions of the system under study. The accuracy of the interaction potential determines the validity of the simulation results. Accurate model potentials have been developed for many realistic materials, for example, the Au (100) surface (Ercolessi, Tosatti, and Parrinello, 1986) and Si_3N_4 ceramics (Vashishta et al., 1995). In the next


section, we will discuss an ab initio molecular dynamics scheme in which the interaction potential is obtained by calculating the electronic structure of the system at each particle configuration. We then need to set up a simulation box under the periodic boundary condition. Because most experiments are performed in a constant-pressure environment, we typically use the constant-pressure scheme developed by Andersen (1980) or its generalization (Parrinello and Rahman, 1980; 1981). The size of the simulation box has to be decided together with the available computing resources and the accuracy required for the quantities to be evaluated. The initial positions of the particles are usually assigned at the lattice points of a closely packed structure, for example, a face-centered cubic structure. The initial velocities of the particles are drawn from the Maxwell distribution for a given temperature. The temperature can be changed by rescaling the velocities. This is extremely useful in the study of phase transitions with varying temperature, such as the transition between different lattice structures, glass transition under quenching, or the liquid–solid transition when the system is cooled down slowly. The advantage of simulation over actual experiment also shows up when we want to observe some behavior that is not achievable experimentally due to the limitations of the technique or equipment. For example, the glass transition in the Lennard–Jones system is observed in molecular dynamics simulations but not in experiments on liquid Ar, because the necessary quenching rate is so high that it is impossible to achieve experimentally. Studying the dynamics of different materials requires a more general time-dependent density–density correlation function

    C(r, r'; t) = ⟨ρ̂(r + r', t) ρ̂(r', 0)⟩,    (8.101)

with the time-dependent density operator given by

    ρ̂(r, t) = Σ_{i=1}^N δ[r - r_i(t)].    (8.102)

If the system is homogeneous, we can integrate out r' in the time-dependent density–density correlation function to reach the van Hove time-dependent distribution function (van Hove, 1954)

    G(r, t) = (1/ρN) ⟨Σ_{i,j}^N δ{r - [r_i(t) - r_j(0)]}⟩.    (8.103)
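Numerically, the δ functions in Eq. (8.103) are realized as a histogram of the pair separations r_i(t) − r_j(0) over spherical shells. The following sketch evaluates the distinct part (i ≠ j) for a single time origin; the names, the shell normalization, and the absence of periodic boundaries are our simplifications.

```java
// Sketch: histogram estimate of the distinct part of G(r, t), Eq. (8.103),
// from positions xt at time t and x0 at time 0 (free boundaries, one origin).
public class VanHove {
  public static double[] gd(double xt[][], double x0[][],
                            double dr, int bins, double rho) {
    int n = xt.length;
    double h[] = new double[bins];
    for (int i = 0; i < n; ++i)
      for (int j = 0; j < n; ++j) {
        if (i == j) continue;            // distinct part only
        double dx = xt[i][0]-x0[j][0], dy = xt[i][1]-x0[j][1],
               dz = xt[i][2]-x0[j][2];
        int b = (int)(Math.sqrt(dx*dx+dy*dy+dz*dz)/dr);
        if (b < bins) h[b] += 1;
      }
    for (int b = 0; b < bins; ++b) {
      double r = (b+0.5)*dr;             // shell midpoint
      h[b] /= rho*4*Math.PI*r*r*dr*n;    // divide by shell volume and origins
    }
    return h;
  }
}
```

Averaging over many time origins and all particles improves the statistics, as discussed in the text.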

The dynamical structure factor measured in an experiment, for example, neutron scattering, is given by the Fourier transform of G(r, t) as

    S(k, ω) = (ρ/2π) ∫ e^{i(ωt - k·r)} G(r, t) dr dt.    (8.104)

The above equation reduces to the static case with

    S(k) - 1 = 4πρ ∫ [sin(kr)/kr] [g(r) - 1] r^2 dr    (8.105)


if we realize that

    G(r, 0) = g(r) + δ(r)/ρ    (8.106)

and

    S(k) = ∫_{-∞}^{∞} S(k, ω) dω,    (8.107)

where g(r) is the pair distribution discussed earlier in this chapter and S(k) is the angular average of S(k) over the directions of k. Here G(r, t) can be interpreted as the probability of observing one of the particles at r at time t if a particle was observed at r = 0 at t = 0. This leads to the numerical evaluation of G(r, t) through its angular average G(r, t). If we write G(r, t) in two parts,

    G(r, t) = G_s(r, t) + G_d(r, t),    (8.108)

with G_s(r, t) the probability of observing the same particle that was at r = 0 at t = 0, and G_d(r, t) the probability of observing other particles, we have

    G_d(r, t) ≃ (1/ρ) ΔN(r, Δr; t)/Ω(r, Δr),    (8.109)

where Ω(r, Δr) ≃ 4πr^2 Δr is the volume of a spherical shell with radius r and thickness Δr, and ΔN(r, Δr; t) is the number of particles in the spherical shell at time t. The position of each particle at t = 0 is chosen as the origin in the evaluation of G_d(r, t), and the average is taken over all the particles in the system. Note that this is different from the evaluation of g(r), in which we always select a particle position as the origin and take the average over time. We can also take the average over all the particles in the evaluation of g(r). Here G_s(r, t) can be evaluated in a similar fashion. Because G_s(r, t) represents the probability for a particle to be at a distance r at time t from its original position at t = 0, we can introduce

    ⟨Δr^{2n}(t)⟩ = (1/N) ⟨Σ_{i=1}^N [r_i(t) - r_i(0)]^{2n}⟩ = ∫ r^{2n} G_s(r, t) dr    (8.110)

in the evaluation of the diffusion coefficient with n = 1. The diffusion coefficient can also be evaluated from the autocorrelation function

    c(t) = ⟨v(t) · v(0)⟩ = (1/N) Σ_{i=1}^N ⟨v_i(t) · v_i(0)⟩,    (8.111)

with

    D = [1/3c(0)] ∫_0^∞ c(t) dt,    (8.112)

because the velocity of each particle at each time step v_i(t) is known from the simulation. The velocity correlation function can also be used to obtain the power spectrum

    P(ω) = [6/πc(0)] ∫_0^∞ c(t) cos ωt dt,    (8.113)


which has many features similar to those of the phonon spectrum of the system: for example, a broad peak for the glassy state and sharp features for a crystalline state. Thermodynamic quantities can also be evaluated from molecular dynamics simulations. For example, if a simulation is performed under the constant-pressure condition, we can obtain physical quantities such as the particle density, pair-distribution function, and so on, at different temperatures. The inverse of the particle density is called the specific volume, denoted V_P(T). The thermal expansion coefficient under the constant-pressure condition is then given by

    α_P = ∂V_P(T)/∂T,    (8.114)

which is quite different when the system is in a liquid phase than when it is in a solid phase. Furthermore, we can calculate the temperature-dependent enthalpy

    H = E + PΩ,    (8.115)

where E is the internal energy given by

    E = ⟨Σ_{i=1}^N (m_i/2) v_i^2 + Σ_{i>j} V(r_ij)⟩,    (8.116)

with P the pressure and Ω the volume of the system. The specific heat under the constant-pressure condition is then obtained from

    c_P = (1/N) ∂H/∂T.    (8.117)

The specific heat under the constant-volume condition can be derived from the fluctuation of the internal energy, ⟨(δE)^2⟩ = ⟨[E - ⟨E⟩]^2⟩, with time, given as

    c_V = ⟨(δE)^2⟩/(k_B T^2).    (8.118)
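The fluctuation formula in Eq. (8.118) amounts to a variance of the sampled total energy; a sketch with k_B = 1 (the toy samples and the names are illustrative):

```java
// Sketch: specific heat from energy fluctuations, Eq. (8.118), with kB = 1.
public class Fluctuation {
  public static double cV(double e[], double t) {
    double mean = 0, var = 0;
    for (double s : e) mean += s/e.length;
    for (double s : e) var += (s-mean)*(s-mean)/e.length;
    return var/(t*t);                    // <(dE)^2>/(kB T^2)
  }
  public static void main(String argv[]) {
    double e[] = {1, 3};                 // toy samples: mean 2, variance 1
    System.out.println(cV(e, 1));        // prints 1.0
  }
}
```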

The isothermal compressibility κ_T is then obtained from the identity

    κ_T = T α_P^2/(c_P - c_V),    (8.119)

which is also quite different for the liquid phase than for the solid phase. For more discussions on the molecular dynamics simulation of glass transition, see Yonezawa (1991). Other aspects related to the structure and dynamics of a system can be studied through molecular dynamics simulations. The advantage of molecular dynamics over a typical stochastic simulation is that molecular dynamics can give all the information on the time dependence of the system, which is necessary for analyzing the structural and dynamical properties of the system. Molecular dynamics is therefore the method of choice in computer simulations of many-particle systems. However, stochastic simulations, such as Monte Carlo simulations, are sometimes easier to perform for some systems and are closely related to the simulations of quantum systems.
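Returning to the time correlations used earlier in this section, the autocorrelation route to the diffusion coefficient, Eqs. (8.111) and (8.112), can be sketched as follows; the array layout v[step][particle][component], the single time origin, and the trapezoid rule for the integral are our choices:

```java
// Sketch: velocity autocorrelation c(t), Eq. (8.111), and the diffusion
// coefficient of Eq. (8.112); v[k][i][a] = component a of particle i at step k.
public class Vacf {
  public static double[] c(double v[][][]) {
    int steps = v.length, n = v[0].length;
    double c[] = new double[steps];
    for (int k = 0; k < steps; ++k)
      for (int i = 0; i < n; ++i)
        for (int a = 0; a < 3; ++a)
          c[k] += v[k][i][a]*v[0][i][a]/n;   // average over particles
    return c;
  }
  public static double d(double c[], double dt) {
    double sum = (c[0] + c[c.length-1])/2;   // trapezoid rule for the integral
    for (int k = 1; k < c.length-1; ++k) sum += c[k];
    return sum*dt/(3*c[0]);
  }
}
```

In practice one averages over many time origins, not just k = 0, to reduce the statistical noise.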


8.8 Ab initio molecular dynamics

In this section, we outline a very interesting simulation scheme that combines the calculation of the electronic structure with the molecular dynamics simulation of a system. This is known as ab initio molecular dynamics, which was devised and put into practice by Car and Parrinello (1985). The maturity of molecular dynamics simulation schemes and the great advances in computing capacity have made it possible to perform molecular dynamics simulations for amorphous materials, biopolymers, and other complex systems. However, in order to obtain an accurate description of a specific system, we have to know the precise behavior of the interactions among the particles, that is, the ions in the system. Electrons move much faster than ions because the electron mass is much smaller than that of an ion. The position dependence of the interactions among the ions in a given system is therefore determined by the distribution of the electrons (the electronic structure) at the specific moment. Thus, a good approximation of the electronic structure can be obtained in a calculation with all the nuclei fixed in space for that moment. This is the essence of the Born–Oppenheimer approximation, which allows the degrees of freedom of the electrons to be treated separately from those of the ions. In the past, the interactions among the ions were given in a parameterized form based on experimental data, quantum chemistry calculations, or the specific conditions of the system under study. All these procedures are limited by the complexity of the electronic structure of the actual materials. We can easily obtain accurate parameterized interactions for the inert gases, such as Ar, but would have a lot of difficulty obtaining an accurate parameterized interaction that can reproduce the various structures of ice correctly in a molecular dynamics simulation. A combined scheme is therefore highly desirable.
We can calculate the many-body interactions among the ions in the system from the electronic structure calculated at every molecular dynamics time step and then determine the next conﬁguration from such ab initio interactions. This can be achieved in principle, but in practice the scheme is restricted by the existing computing capacity. The combined method devised by Car and Parrinello (1985) was the ﬁrst in its class and has been applied to the simulation of real materials.

Density functional theory

The density functional theory (Hohenberg and Kohn, 1964; Kohn and Sham, 1965) was introduced as a practical scheme to cope with the many-electron effect in atoms, molecules, and solids. The theorem proved by Hohenberg and Kohn (1964) states that the ground-state energy of an interacting system is the optimized value of an energy functional E[ρ(r)] of the electron density ρ(r), and that the corresponding density distribution of the optimization is the unique ground-state density distribution. Symbolically, we can write

    E[ρ(r)] = E_ext[ρ(r)] + E_H[ρ(r)] + E_K[ρ(r)] + E_xc[ρ(r)],    (8.120)

where E_ext[ρ(r)] is the contribution from the external potential U_ext(r), with

    E_ext[ρ(r)] = ∫ U_ext(r) ρ(r) dr,    (8.121)

E_H[ρ(r)] is the Hartree type of contribution due to the electron–electron interaction, given by

    E_H[ρ(r)] = (1/2) [e^2/(4πε_0)] ∫∫ ρ(r')ρ(r)/|r' - r| dr dr',    (8.122)

E_K is the contribution of the kinetic energy, and E_xc denotes the rest of the contributions and is termed the exchange–correlation energy functional. In general, we can express the electron density in a spectral representation,

    ρ(r) = Σ_i ψ_i^†(r) ψ_i(r),    (8.123)

where ψ_i^†(r) is the complex conjugate of the wavefunction ψ_i(r) and the summation is over all the degrees of freedom, that is, all the occupied states with different spin orientations. Then the kinetic energy functional can be written as

    E_K[ρ(r)] = -(ħ^2/2m) Σ_i ∫ ψ_i^†(r) ∇^2 ψ_i(r) dr.    (8.124)

There is a constraint from the total number of electrons in the system, namely,

    ∫ ρ(r) dr = N,    (8.125)

which introduces the Lagrange multipliers into the variation. If we use the spectral representation of the density in the energy functional and apply the Euler equation with the Lagrange multipliers, we have

    δE[ρ(r)]/δψ_i^†(r) - ε_i ψ_i(r) = 0,    (8.126)

which leads to the Kohn–Sham equation

    [-(ħ^2/2m) ∇^2 + V_E(r)] ψ_i(r) = ε_i ψ_i(r),    (8.127)

where V_E(r) is an effective potential given by

    V_E(r) = U_ext(r) + V_H(r) + V_xc(r),    (8.128)


with

    V_xc(r) = δE_xc[ρ(r)]/δρ(r),    (8.129)

which cannot be obtained exactly. A common practice is to approximate it by its homogeneous density equivalent, the so-called local approximation, in which we assume that Vxc (r) is given by the same quantity of a uniform electron gas with density equal to ρ(r). This is termed the local density approximation. The local density approximation has been successfully applied to many physical systems, including atomic, molecular, and condensed-matter systems. The unexpected success of the local density approximation in materials research has made it a standard technique for calculating electronic properties of new materials and systems. The procedure for calculating the electronic structure with the local density approximation can be described in several steps. We ﬁrst construct the local approximation of Vxc (r) with a guessed density distribution. Then the Kohn– Sham equation is solved, and a new density distribution is constructed from the solution. With the new density distribution, we can improve Vxc (r) and then solve the Kohn–Sham equation again. This procedure is repeated until convergence is reached. Interested readers can ﬁnd detailed discussions on the density functional theory in many monographs or review articles, for example, Kohn and Vashishta (1983), and Jones and Gunnarsson (1989).
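The iterate-until-convergence structure of this procedure can be caricatured as follows; the scalar map standing in for "solve the Kohn–Sham equation and rebuild the density" and the mixing factor are purely illustrative assumptions:

```java
// Structural sketch of the self-consistency loop described above, reduced to
// a scalar caricature: the "density" rho determines a "potential" whose
// "solution" returns a new density; simple mixing stabilizes the iteration.
public class Scf {
  static double newDensity(double rho) {   // stands in for: build V_xc, solve
    return 1/(1 + rho);                    // Kohn-Sham, refill the states
  }
  public static double converge(double rho, double mix, double tol) {
    for (int it = 0; it < 1000; ++it) {
      double out = newDensity(rho);
      if (Math.abs(out - rho) < tol) return out;
      rho = (1-mix)*rho + mix*out;         // mix old and new densities
    }
    return rho;
  }
  public static void main(String argv[]) {
    // the fixed point of rho = 1/(1+rho) is (sqrt(5)-1)/2 = 0.6180...
    System.out.println(converge(0.5, 0.5, 1e-12));
  }
}
```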

The Car–Parrinello simulation scheme

The Hohenberg–Kohn energy functional forms the Born–Oppenheimer potential surface for the ions in the system. The idea of ab initio molecular dynamics is similar to the relaxation scheme we discussed in Chapter 7. We introduced a functional

    U = ∫ {(1/2) ε(x) [dψ(x)/dx]^2 - ρ(x)ψ(x)} dx    (8.130)

for the one-dimensional Poisson equation. Note that ρ(x) here is the charge density instead of the particle density. The physical meaning of this functional is the electrostatic energy of the system. After applying the trapezoid rule to the integral and taking a partial derivative of U with respect to ψ_i, we obtain the corresponding difference equation

    (H_i + ρ_i) ψ_i = 0,    (8.131)

for the one-dimensional Poisson equation. Here H_i ψ_i denotes ε_{i+1/2} ψ_{i+1} + ε_{i-1/2} ψ_{i-1} - (ε_{i+1/2} + ε_{i-1/2}) ψ_i. If we combine the above equation with the relaxation scheme discussed in Section 7.5, we have

    ψ_i^{(k+1)} = (1 - p) ψ_i^{(k)} + p (H_i + ρ_i + 1) ψ_i^{(k)},    (8.132)


which would optimize (minimize) the functional (the electrostatic energy) as k → ∞. The indices k and k + 1 are for iteration steps, and the index i is for the spatial points. The iteration can be interpreted as a fictitious time step, since we can rewrite the above equation as

    [ψ_i^{(k+1)} - ψ_i^{(k)}]/p = (H_i + ρ_i) ψ_i^{(k)},    (8.133)

with p acting like a fictitious time step. The solution converges to the true solution of the Poisson equation as k goes to infinity if the functional U decreases during the iterations. The ab initio molecular dynamics is devised by introducing a fictitious time-dependent equation for the electron degrees of freedom:

    μ d^2 ψ_i(r, t)/dt^2 = -(1/2) δE[ρ(r, t); R_n]/δψ_i^†(r, t) + Σ_j Λ_ij ψ_j(r),    (8.134)

where μ is an adjustable parameter introduced for convenience, Λ_ij are the Lagrange multipliers, introduced to ensure the orthonormal condition of the wavefunctions ψ_i(r, t), and the summation is over all the occupied states. Note that the potential energy surface E is a functional of the electron density as well as a function of the ionic coordinates R_n for n = 1, 2, . . . , N_c, with a total of N_c ions in the system. In practice, we can also consider the first-order time-derivative equation, with d^2 ψ_i(r, t)/dt^2 replaced by the first-order derivative dψ_i(r, t)/dt, because either the first-order or the second-order derivative will approach zero at the limit of convergence. Second-order derivatives were used in the original work of Car and Parrinello and were later shown to yield a fast convergence if a special damping term is introduced (Tassone, Mauri, and Car, 1994). The ionic degrees of freedom are then simulated from Newton's equation

    M_n d^2 R_n/dt^2 = -∂E[ρ(r, t); R_n]/∂R_n,    (8.135)

where Mn and Rn are the mass and the position vector of the nth particle. The advantage of ab initio molecular dynamics is that the electron degrees of freedom and the ionic degrees of freedom are simulated simultaneously by the above equations. Since its introduction by Car and Parrinello (1985), the method has been applied to many systems, especially those without a crystalline structure, namely, liquids and amorphous materials. We will not go into more detail on the method or its applications; interested readers can ﬁnd them in the review by Tassone, Mauri, and Car (1994). Progress in ab initio molecular dynamics has also included mapping the Hamiltonian onto a tight-binding model in which the evaluation of the electron degrees of freedom is drastically simpliﬁed (Wang, Chan, and Ho, 1989). This approach has also been applied to many systems, for example, amorphous carbon and carbon clusters. More discussions on the method can be found in several review articles, for example, Oguchi and Sasaki (1991).


Exercises

8.1 Show that the Verlet algorithm preserves the time reversal of Newton's equation.

8.2 Derive the velocity version of the Verlet algorithm and show that the position update has the same accuracy as in the original Verlet algorithm.

8.3 Write a program that uses the velocity version of the Verlet algorithm to simulate small clusters of ions (Na+)_n(Cl-)_m for small n and m. Use the empirical interaction potential given in Eq. (5.64) for the ions and set up the initial configuration as the equilibrium configuration. Assign initial velocities from the Maxwell distribution. Discuss the time dependence of the kinetic energy with different total energies.

8.4 Explore the coexistence of the liquid and solid phases in a Lennard–Jones cluster that has an intermediate size of about 150 particles through the microcanonical molecular dynamics simulation. Analyze the melting process in the cluster and the size dependence of the coexistence region.

8.5 A two-dimensional system can behave quite differently from its three-dimensional counterpart. Use the molecular dynamics technique to study a two-dimensional Lennard–Jones cluster of the intermediate size of about 100 particles. Do the liquid and solid phases coexist in any temperature region? Discuss the difference found between the three-dimensional and two-dimensional clusters.

8.6 Develop a program with the fifth-order Gear predictor–corrector scheme and apply it to the damped pendulum under a sinusoidal driving force. Study the properties of the pendulum with different values of the parameters.

8.7 Apply the fifth-order Gear scheme to study the long-time behavior of a classical helium atom in two dimensions. Explore different initial conditions. Is the system unstable or chaotic under certain initial conditions?

8.8 Derive the Hoover equations from the Nosé Lagrangian. Show that the generalized Lagrangian given in Section 8.6 can provide the correct equations of motion to ensure the constraint of constant temperature as well as that of constant pressure.

8.9 Develop a molecular dynamics program to study the structure of a cluster of identical charges inside a three-dimensional isotropic, harmonic trap. Under what condition does the cluster form a crystal? What happens if the system is confined in a plane?

8.10 Show that the Parrinello–Rahman Lagrangian allows the shape of the simulation box to change, and derive equations of motion from it. Find the Lagrangian that combines the Parrinello–Rahman Lagrangian and the Nosé Lagrangian and derive the equations of motion from this generalized Lagrangian.

Exercises

8.11 For quantum many-body systems, the n-body density function is defined by

\rho_n(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_n) = \frac{1}{Z}\,\frac{N!}{(N-n)!}\int |\Psi(R)|^2\, d\mathbf{r}_{n+1}\, d\mathbf{r}_{n+2} \cdots d\mathbf{r}_N,

where \Psi(R) is the ground-state wavefunction of the many-body system and Z is the normalization constant given by

Z = \int |\Psi(R)|^2\, d\mathbf{r}_1\, d\mathbf{r}_2 \cdots d\mathbf{r}_N.

The pair-distribution function is related to the two-body density function through

\rho(\mathbf{r})\, g(\mathbf{r}, \mathbf{r}')\, \rho(\mathbf{r}') = \rho_2(\mathbf{r}, \mathbf{r}').

Show that

\int \rho(\mathbf{r}')\left[\tilde{g}(\mathbf{r}, \mathbf{r}') - 1\right] d\mathbf{r}' = -1,

where

\tilde{g}(\mathbf{r}, \mathbf{r}') = \int_0^1 g(\mathbf{r}, \mathbf{r}'; \lambda)\, d\lambda,

with g(\mathbf{r}, \mathbf{r}'; \lambda) being the pair-distribution function under the scaled interaction V(\mathbf{r}, \mathbf{r}'; \lambda) = \lambda V(\mathbf{r}, \mathbf{r}').

8.12 Show that the exchange–correlation energy functional is given by

E_{\mathrm{xc}}[\rho(\mathbf{r})] = \frac{1}{2}\int \rho(\mathbf{r})\, V(\mathbf{r}, \mathbf{r}')\left[\tilde{g}(\mathbf{r}, \mathbf{r}') - 1\right]\rho(\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}',

with \tilde{g}(\mathbf{r}, \mathbf{r}') given in the last problem.
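Exercises 8.2 and 8.3 above ask for a velocity-Verlet integrator. As a starting point only, here is a minimal sketch of one possible implementation in Java for a single particle in one dimension; the harmonic force law, mass, time step, and initial conditions are all placeholder assumptions for illustration, not the ionic potential of Eq. (5.64).

```java
// Minimal velocity-Verlet sketch (1D, single particle).
// The harmonic force below is a placeholder; replace it with the
// actual interaction when working the exercises.
public class VelocityVerlet {
    static double force(double x) {
        return -x;                     // placeholder: F = -kx with k = 1
    }

    public static void main(String[] args) {
        double x = 1.0, v = 0.0;       // initial position and velocity
        double m = 1.0, dt = 0.01;     // mass and time step (assumed units)
        double a = force(x) / m;       // initial acceleration
        for (int step = 0; step < 1000; ++step) {
            x += v * dt + 0.5 * a * dt * dt;   // position update
            double aNew = force(x) / m;        // force at the new position
            v += 0.5 * (a + aNew) * dt;        // velocity from averaged acceleration
            a = aNew;
        }
        // for the harmonic oscillator the total energy should stay
        // very close to its initial value of 0.5
        double e = 0.5 * m * v * v + 0.5 * x * x;
        System.out.println(e);
    }
}
```

The same two-stage update carries over directly to many particles: update all positions, recompute all forces once, then update all velocities.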


Chapter 9

Modeling continuous systems

It is usually more difﬁcult to simulate continuous systems than discrete ones, especially when the properties under study are governed by nonlinear equations. The systems can be so complex that the length scale at the atomic level can be as important as the length scale at the macroscopic level. The basic idea in dealing with complicated systems is similar to a divide-and-conquer concept, that is, dividing the systems with an understanding of the length scales involved and then solving the problem with an appropriate method at each length scale. A speciﬁc length scale is usually associated with an energy scale, such as the average temperature of the system or the average interaction of each pair of particles. The divide-and-conquer schemes are quite powerful in a wide range of applications. However, each method has its advantages and disadvantages, depending on the particular system.

9.1 Hydrodynamic equations

In this chapter, we will discuss several methods used in simulating continuous systems. First we will discuss a quite mature method, the finite element method, which sets up the idea of partitioning the system according to its physical conditions. Then we will discuss another method, the particle-in-cell method, which adopts a mean-field concept in dealing with a large system involving very many atoms, for example, 10^23 atoms. This method has been very successful in the simulation of plasma, galactic, hydrodynamic, and magnetohydrodynamic systems. Then we will briefly highlight a relatively new method, the lattice Boltzmann method, which is closely related to the lattice-gas method of cellular automata and has been applied in modeling several continuous systems.

Before going into the details of each method, we introduce the hydrodynamic equations. The equations are obtained by analyzing the dynamics of a small element of fluid in the system and then taking the limit of continuity. We can obtain three basic equations by examining the changes in the mass, momentum, and energy of a small element in the system. For simplicity, let us first assume that the fluid is neutral and not under an external field. The three fundamental hydrodynamic equations are

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,   (9.1)

\frac{\partial}{\partial t}(\rho \mathbf{v}) + \nabla \cdot \Pi = 0,   (9.2)

\frac{\partial}{\partial t}\left(\rho \varepsilon + \frac{1}{2}\rho v^2\right) + \nabla \cdot \mathbf{j}_e = 0.   (9.3)

The first equation is commonly known as the continuity equation, which is the result of mass conservation. The second equation is known as the Navier–Stokes equation, which comes from Newton's equation applied to an infinitesimal element in the fluid. The final equation is the result of the work–energy theorem. In these equations, \rho is the mass density, \mathbf{v} is the velocity, \varepsilon is the internal energy per unit mass, \Pi is the momentum flux tensor, and \mathbf{j}_e is the energy flux density. The momentum flux tensor can be written as

\Pi = \rho \mathbf{v}\mathbf{v} - \Gamma,   (9.4)

where \Gamma is the stress tensor of the fluid, given as

\Gamma = \eta\left[\nabla \mathbf{v} + (\nabla \mathbf{v})^T\right] + \left(\zeta - \frac{2\eta}{3}\right)(\nabla \cdot \mathbf{v})\, I - P I,   (9.5)

where \eta and \zeta are the coefficients of shear and bulk viscosity, respectively, P is the pressure in the fluid, and I is the unit tensor. The energy flux density is given by

\mathbf{j}_e = \mathbf{v}\left(\rho \varepsilon + \frac{1}{2}\rho v^2\right) - \mathbf{v} \cdot \Gamma - \kappa \nabla T,   (9.6)

where the last term is the thermal energy flow due to the gradient of the temperature T, with \kappa being the thermal conductivity. The above set of equations can be solved together with the equation of state

f(\rho, P, T) = 0,   (9.7)

which provides a particular relationship between the thermal variables for the given system.

Now we can add the effects of external fields into the related equations. For example, if gravity is important, we can modify Eq. (9.2) to

\frac{\partial}{\partial t}(\rho \mathbf{v}) + \nabla \cdot \Pi = \rho \mathbf{g},   (9.8)

where \mathbf{g} is the gravitational field given by

\mathbf{g} = -\nabla \Phi.   (9.9)

Here \Phi is the gravitational potential from the Poisson equation

\nabla^2 \Phi = 4\pi G \rho,   (9.10)

with G being the gravitational constant.


When the system is charged, electromagnetic fields become important. We then need to modify the momentum flux tensor, the energy density, and the energy flux density. A term (BB − IB²/2)/µ should be added to the momentum flux tensor, a magnetic field energy density term B²/(2µ) should be added to the energy density, and an extra energy flux term E × B/µ should be added to the energy flux density. Here µ is the magnetic permeability, which reduces to µ0 in free space. The electric current density j, electric field E, and magnetic field B are related through the Maxwell equations and the charge transport equation. We will come back to this point in the later sections of this chapter. These hydrodynamic or magnetohydrodynamic equations can be solved in principle with the finite difference methods discussed in Chapter 7. However, in this chapter, we introduce some other methods, mainly based on the divide-and-conquer schemes. For more detailed discussions on the hydrodynamic and magnetohydrodynamic equations, see Landau and Lifshitz (1987) and Lifshitz and Pitaevskii (1981).
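To make the role of Eq. (9.1) concrete, the sketch below advances the one-dimensional continuity equation with the first-order Lax scheme on a periodic grid, a standard finite difference method of the kind discussed in Chapter 7 rather than one of the methods introduced in this chapter. The constant velocity, grid size, time step, and initial profile are illustrative assumptions only.

```java
// Lax-scheme sketch for the 1D continuity equation
//   d(rho)/dt + d(rho*v)/dx = 0
// with constant velocity v on a periodic grid (assumed setup).
public class Continuity1D {
    public static void main(String[] args) {
        int n = 100;
        double dx = 1.0 / n, v = 1.0;
        double dt = 0.5 * dx / v;          // satisfies the CFL condition v*dt/dx <= 1
        double[] rho = new double[n];
        for (int i = 0; i < n; ++i)        // initial profile: a smooth bump
            rho[i] = 1.0 + 0.1 * Math.sin(2 * Math.PI * i * dx);

        for (int step = 0; step < 200; ++step) {
            double[] next = new double[n];
            for (int i = 0; i < n; ++i) {
                int ip = (i + 1) % n, im = (i + n - 1) % n;  // periodic neighbors
                // Lax: average the neighbors, then subtract the centered flux difference
                next[i] = 0.5 * (rho[ip] + rho[im])
                        - 0.5 * (v * dt / dx) * (rho[ip] - rho[im]);
            }
            rho = next;
        }

        double mass = 0.0;                 // total mass should be conserved exactly
        for (int i = 0; i < n; ++i) mass += rho[i] * dx;
        System.out.println(mass);
    }
}
```

With periodic boundaries the Lax update conserves the total mass exactly, which is a useful check that mirrors the conservation-law origin of Eq. (9.1).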

9.2 The basic finite element method

In order to show how a typical finite element method works for a specific problem, let us take the one-dimensional Poisson equation as an example. Assume that the charge distribution is \rho(x) and the equation for the electrostatic potential is

\frac{d^2 \phi(x)}{dx^2} = -\frac{\rho(x)}{\epsilon_0}.   (9.11)

In order to simplify our discussion here, let us assume that the boundary condition is given as \phi(0) = \phi(1) = 0. As discussed in Chapter 4, we can always express the solution in terms of a complete set of orthogonal basis functions as

\phi(x) = \sum_i a_i u_i(x),   (9.12)

where the basis functions u_i(x) satisfy

\int_0^1 u_j(x) u_i(x)\, dx = \delta_{ij}   (9.13)

and u_i(0) = u_i(1) = 0. We have assumed that u_i(x) is a real function and that the summation is over all the basis functions. One of the choices for u_i(x) is u_i(x) = \sqrt{2}\,\sin i\pi x. In order to obtain the actual value of \phi(x), we need to solve for all the coefficients a_i by applying the differential equation and the orthogonality condition in Eq. (9.13). This becomes quite a difficult task if the system has a higher dimensionality and an irregular boundary.

The finite element method is designed to find a good approximation of the solution for irregular boundary conditions or in situations of rapid change of the solution. Typically, the approximation is still written as a series

\phi_n(x) = \sum_{i=1}^{n} a_i u_i(x),   (9.14)

with a finite number of terms. Here u_i(x) is a set of linearly independent local functions, each defined in a small region around a node x_i. If we use this approximation in the differential equation, we have a nonzero value

r_n(x) = \phi_n''(x) + \frac{\rho(x)}{\epsilon_0},   (9.15)

which would be zero if \phi_n(x) were the exact solution. The goal now is to set up a scheme that makes r_n(x) small over the whole region during the process of determining all the a_i. The selection of u_i(x) and the procedure for optimizing r_n(x) determine how accurate the solution is for a given n. We can always improve the accuracy with a higher n. One scheme for performing the optimization is to introduce a weighted integral

g_i = \int_0^1 r_n(x) w_i(x)\, dx,   (9.16)

which is then forced to be zero during the determination of the a_i. Here w_i(x) is a selected weight.

The procedure just described is the essence of the finite element method. The region of interest is divided into many small pieces, usually of the same topology but not the same size, for example, different triangles for a two-dimensional domain. Then a set of linearly independent local functions u_i(x) is selected, with each defined around a node and its neighborhood. We then select the weight w_i(x), which is usually chosen as a local function as well. For the one-dimensional Poisson equation, the weighted integral becomes

g_i = \int_0^1 \left[\sum_{j=1}^{n} a_j u_j''(x) + \frac{\rho(x)}{\epsilon_0}\right] w_i(x)\, dx = 0,   (9.17)

which is equivalent to a linear equation set

A\mathbf{a} = \mathbf{b},   (9.18)

with

A_{ij} = -\int_0^1 u_j''(x) w_i(x)\, dx   (9.19)

and

b_i = \frac{1}{\epsilon_0}\int_0^1 \rho(x) w_i(x)\, dx.   (9.20)

The advantage of choosing u_i(x) and w_i(x) as local functions lies in the solution of the linear equation set given in Eq. (9.18). Basically, we need to solve only a tridiagonal or a band matrix problem, which can be done quite quickly. The idea is to make the problem computationally simple but numerically accurate. That means the choice of the weight w_i(x) is rather an art. A common choice is w_i(x) = u_i(x), which is called the Galerkin method.

Now let us consider in detail the application of the Galerkin method to the one-dimensional Poisson equation. We can divide the region [0, 1] into n + 1 equal intervals with x_0 = 0 and x_{n+1} = 1 and take

u_i(x) = \begin{cases} (x - x_{i-1})/h & \text{for } x \in [x_{i-1}, x_i], \\ (x_{i+1} - x)/h & \text{for } x \in [x_i, x_{i+1}], \\ 0 & \text{otherwise}. \end{cases}   (9.21)

Here h = x_i - x_{i-1} = 1/(n + 1) is the spatial interval. Note that this choice of u_i(x) satisfies the boundary condition. Let us take a very simple charge distribution

\rho(x) = \epsilon_0 \pi^2 \sin \pi x   (9.22)

for illustrative purposes. We can easily determine the weighted integrals to obtain the matrix elements

A_{ij} = \int_0^1 u_i'(x) u_j'(x)\, dx = \begin{cases} 2/h & \text{for } i = j, \\ -1/h & \text{for } i = j \pm 1, \\ 0 & \text{otherwise} \end{cases}   (9.23)

and the vector

b_i = \frac{1}{\epsilon_0}\int_0^1 \rho(x) u_i(x)\, dx = \frac{\pi}{h}(x_{i-1} + x_{i+1} - 2x_i)\cos \pi x_i + \frac{1}{h}(2\sin \pi x_i - \sin \pi x_{i-1} - \sin \pi x_{i+1}).   (9.24)

Now we are ready to put all these values into a program. Note that the coefficient matrix A is automatically symmetric and tridiagonal because the local basis u_i(x) is confined in the region [x_{i-1}, x_{i+1}]. As we discussed in Chapters 2, 5, and 7, an LU decomposition scheme can easily be devised to solve this tridiagonal matrix problem. Here is the implementation of the algorithm.

    // Program for the one-dimensional Poisson equation.
    import java.lang.*;
    public class Poisson {
      final static int n = 99, m = 2;
      public static void main(String argv[]) {
        double d[] = new double[n];
        double b[] = new double[n];
        double c[] = new double[n];
        double h = 1.0/(n+1), pi = Math.PI;

        // Evaluate the coefficient matrix elements
        for (int i=0; i<n; ++i) {
          d[i] = 2/h;                       // diagonal, from Eq. (9.23)
          c[i] = -1/h;                      // off-diagonal, from Eq. (9.23)
          double xm = h*i;                  // x_{i-1}
          double xi = h*(i+1);              // x_i
          double xp = h*(i+2);              // x_{i+1}
          b[i] = pi*(xm+xp-2*xi)*Math.cos(pi*xi)
               + (2*Math.sin(pi*xi)-Math.sin(pi*xm)-Math.sin(pi*xp))/h;
        }
        // ... (the remainder of the listing, which solves the resulting
        // tridiagonal system and outputs phi(x), is truncated in this copy)
      }
    }

[Figure: the numerical result for \phi(x) (in V) versus x (in m) over the interval [0, 1].]
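The listing above breaks off after the coefficient loop. The missing step is the solution of the symmetric tridiagonal system Aa = b, which can be done by forward elimination and back substitution (the Thomas algorithm, a special case of the LU decomposition mentioned in the text). The sketch below is a generic reconstruction, not the book's own code; the array names d, c, and b mirror the diagonal, off-diagonal, and right-hand side of the listing, and the b formula drops the cosine term of Eq. (9.24), which vanishes for equal spacing.

```java
// Thomas-algorithm sketch for a symmetric tridiagonal system A a = b,
// with diagonal d[i], off-diagonal c[i] (coupling nodes i and i+1), and rhs b[i].
public class Tridiagonal {
    static double[] solve(double[] d, double[] c, double[] b) {
        int n = d.length;
        double[] dd = d.clone(), bb = b.clone(), a = new double[n];
        for (int i = 1; i < n; ++i) {          // forward elimination
            double r = c[i - 1] / dd[i - 1];
            dd[i] -= r * c[i - 1];
            bb[i] -= r * bb[i - 1];
        }
        a[n - 1] = bb[n - 1] / dd[n - 1];
        for (int i = n - 2; i >= 0; --i)       // back substitution
            a[i] = (bb[i] - c[i] * a[i + 1]) / dd[i];
        return a;
    }

    public static void main(String[] args) {
        int n = 99;
        double h = 1.0 / (n + 1), pi = Math.PI;
        double[] d = new double[n], c = new double[n], b = new double[n];
        for (int i = 0; i < n; ++i) {          // matrix elements of Eqs. (9.23)-(9.24)
            d[i] = 2 / h;
            c[i] = -1 / h;
            double xi = h * (i + 1);
            b[i] = (2 * Math.sin(pi * xi) - Math.sin(pi * (xi - h))
                  - Math.sin(pi * (xi + h))) / h;
        }
        double[] a = solve(d, c, b);
        // the nodal value at x = 0.5 should be close to sin(pi/2) = 1
        System.out.println(a[(n - 1) / 2]);
    }
}
```

For this particular charge distribution the nodal values a_i reproduce sin(pi x_i), so the midpoint value printed above should be very close to 1.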
