Our goal in this section is to use the techniques of statistical mechanics to describe the dynamics of the simplest system: a gas. This means a bunch of particles, flying around in a box. Although much of the last section was formulated in the language of quantum mechanics, here we will revert back to classical mechanics. Nonetheless, a recurrent theme will be that the quantum world is never far behind: we’ll see several puzzles, both theoretical and experimental, which can only truly be resolved by turning on $\hbar$.
For most of this section we will work in the canonical ensemble. We start by reformulating the idea of a partition function in classical mechanics. We’ll consider a simple system – a single particle of mass $m$ moving in three dimensions in a potential $V(\vec{q})$. The classical Hamiltonian of the system³ (³If you haven’t taken the Classical Dynamics course, you should think of the Hamiltonian as the energy of the system expressed in terms of the position and momentum of the particle.) is the sum of kinetic and potential energy,

$$H = \frac{\vec{p}^{\,2}}{2m} + V(\vec{q})$$
We earlier defined the partition function (1.21) to be the sum over all quantum states of the system. Here we want to do something similar. In classical mechanics, the state of a system is determined by a point in phase space. We must specify both the position and momentum of each of the particles — only then do we have enough information to figure out what the system will do for all times in the future. This motivates the definition of the partition function for a single classical particle as the integration over phase space,
$$Z_1 = \frac{1}{(2\pi\hbar)^3}\int d^3q\, d^3p\ e^{-\beta H(\vec{p},\vec{q})} \qquad\qquad (2.49)$$
The only slightly odd thing is the factor of $1/(2\pi\hbar)^3$ that sits out front. It is a quantity that needs to be there simply on dimensional grounds: $Z$ should be dimensionless, so $2\pi\hbar$ must have dimension of (length $\times$ momentum) or, equivalently, Joule-seconds (Js). The actual value of $\hbar$ won’t matter for any physical observable, like heat capacity, because we always take $\log Z$ and then differentiate. Despite this, there is actually a correct value for $\hbar$: it is Planck’s constant, $\hbar \approx 1.05\times 10^{-34}\,{\rm Js}$.
It is very strange to see Planck’s constant in a formula that is supposed to be classical. What’s it doing there? In fact, it is a vestigial object, like the male nipple. It is redundant, serving only as a reminder of where we came from. And the classical world came from the quantum.
It is possible to derive the classical partition function (2.49) directly from the quantum partition function (1.21) without resorting to hand-waving. It will also show us why the factor of $1/(2\pi\hbar)$ sits outside the partition function. The derivation is a little tedious, but worth seeing. (Similar techniques are useful in later courses when you first meet the path integral). To make life easier, let’s consider a single particle moving in one spatial dimension. It has position operator $\hat{q}$, momentum operator $\hat{p}$ and Hamiltonian,

$$\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{q})$$
If $|n\rangle$ is the energy eigenstate with energy $E_n$, the quantum partition function is

$$Z = \sum_n e^{-\beta E_n} = \sum_n \langle n|\,e^{-\beta\hat{H}}\,|n\rangle \qquad\qquad (2.50)$$
In what follows, we’ll make liberal use of the fact that we can insert the identity operator anywhere in this expression. Identity operators can be constructed by summing over any complete basis of states. We’ll need two such constructions, using the position eigenvectors $|q\rangle$ and the momentum eigenvectors $|p\rangle$,

$$1 = \int dq\ |q\rangle\langle q| \qquad,\qquad 1 = \int dp\ |p\rangle\langle p|$$
We start by inserting two copies of the identity built from position eigenstates,

$$Z = \sum_n \int dq\, dq'\ \langle n|q\rangle\langle q|\,e^{-\beta\hat{H}}\,|q'\rangle\langle q'|n\rangle$$
But now we can replace $\sum_n |n\rangle\langle n|$ with the identity matrix and use the fact that $\langle q'|q\rangle = \delta(q'-q)$, to get

$$Z = \int dq\ \langle q|\,e^{-\beta\hat{H}}\,|q\rangle \qquad\qquad (2.51)$$
We see that the result is to replace the sum over energy eigenstates in (2.50) with a sum (or integral) over position eigenstates in (2.51). If you wanted, you could play the same game and get the sum over any complete basis of eigenstates of your choosing. As an aside, this means that we can write the partition function in a basis independent fashion as

$$Z = {\rm Tr}\ e^{-\beta\hat{H}}$$
So far, our manipulations could have been done for any quantum system. Now we want to use the fact that we are taking the classical limit. This comes about when we try to factorize $e^{-\beta\hat{H}}$ into a momentum term and a position term. The trouble is that this isn’t always possible when there are matrices (or operators) in the exponent. Recall that,

$$e^{\hat{A}}\,e^{\hat{B}} = e^{\hat{A}+\hat{B}+\frac{1}{2}[\hat{A},\hat{B}]+\ldots}$$
For us, $[\hat{A},\hat{B}] \sim [\hat{p}^2/2m,\,V(\hat{q})] \sim \mathcal{O}(\hbar)$. This means that if we’re willing to neglect terms of order $\hbar$ — which is the meaning of taking the classical limit — then we can write

$$e^{-\beta\hat{H}} = e^{-\beta\hat{p}^2/2m}\,e^{-\beta V(\hat{q})} + \mathcal{O}(\hbar)$$
We can now start to replace some of the operators in the exponent, like $V(\hat{q})$, with functions $V(q)$. (The notational difference is subtle, but important, in the expressions below!),

$$Z = \int dq\ e^{-\beta V(q)}\,\langle q|\,e^{-\beta\hat{p}^2/2m}\,|q\rangle$$
$$\quad = \int dq\, dp\, dp'\ e^{-\beta V(q)}\,\langle q|p\rangle\langle p|\,e^{-\beta\hat{p}^2/2m}\,|p'\rangle\langle p'|q\rangle$$
$$\quad = \frac{1}{2\pi\hbar}\int dq\, dp\ e^{-\beta V(q)}\,e^{-\beta p^2/2m} = \frac{1}{2\pi\hbar}\int dq\, dp\ e^{-\beta H(p,q)}$$
where, in the final line, we’ve used the identity

$$\langle q|p\rangle = \frac{1}{\sqrt{2\pi\hbar}}\,e^{ipq/\hbar}$$
This completes the derivation.
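To see this classical limit in action, here is a small numerical sketch of my own (not part of the original derivation): for a harmonic oscillator the quantum partition function is a sum over levels $E_n = (n+\frac{1}{2})\hbar\omega$, while the classical phase-space integral gives $Z = k_BT/\hbar\omega$. The two should agree when $k_BT \gg \hbar\omega$, i.e. when $\beta\hbar\omega \ll 1$.

```python
import math

def Z_quantum(beta_hw, nmax=5000):
    """Sum over oscillator levels E_n = (n + 1/2), in units where hbar*omega = 1."""
    return sum(math.exp(-beta_hw * (n + 0.5)) for n in range(nmax))

def Z_classical(beta_hw):
    """(1/2*pi*hbar) int dq dp e^{-beta H} for H = p^2/2m + m w^2 q^2 / 2 gives 1/(beta*hbar*omega)."""
    return 1.0 / beta_hw

# The ratio approaches 1 as beta*hbar*omega -> 0, i.e. at high temperature:
for b in [1.0, 0.1, 0.01]:
    print(b, Z_quantum(b) / Z_classical(b))
```

At $\beta\hbar\omega = 1$ the classical answer overestimates the true partition function; by $\beta\hbar\omega = 0.01$ the two agree to better than a tenth of a percent.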
The first classical gas that we’ll consider consists of $N$ particles trapped inside a box of volume $V$. The gas is “ideal”. This simply means that the particles do not interact with each other. For now, we’ll also assume that the particles have no internal structure, so no rotational or vibrational degrees of freedom. This situation is usually referred to as the monatomic ideal gas. The Hamiltonian for each particle is simply the kinetic energy,

$$H = \frac{\vec{p}^{\,2}}{2m}$$
And the partition function for a single particle is

$$Z_1 = \frac{1}{(2\pi\hbar)^3}\int d^3q\, d^3p\ e^{-\beta \vec{p}^{\,2}/2m} \qquad\qquad (2.52)$$
The integral over position is now trivial and gives $\int d^3q = V$, the volume of the box. The integral over momentum is also straightforward since it factorizes into separate integrals over $p_x$, $p_y$ and $p_z$, each of which is a Gaussian of the form,

$$\int dp\ e^{-\beta p^2/2m} = \sqrt{\frac{2\pi m}{\beta}}$$

So we have

$$Z_1 = V\left(\frac{m}{2\pi\hbar^2\beta}\right)^{3/2}$$
We’ll meet the combination of factors in the brackets a lot in what follows, so it is useful to give it a name. We’ll write

$$Z_1 = \frac{V}{\lambda^3} \qquad\qquad (2.53)$$

The quantity $\lambda$ goes by the name of the thermal de Broglie wavelength,

$$\lambda = \sqrt{\frac{2\pi\hbar^2}{mk_BT}} \qquad\qquad (2.54)$$
$\lambda$ has the dimensions of length. We will see later that you can think of $\lambda$ as something like the average de Broglie wavelength of a particle at temperature $T$. Notice that it is a quantum object – it has an $\hbar$ sitting in it – so we expect that it will drop out of any genuinely classical quantity that we compute. The partition function itself (2.53) is counting the number of these thermal wavelengths that we can fit into volume $V$.
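As a quick numerical sketch of my own (the helium mass and room-temperature conditions are illustrative assumptions, not from the text), we can evaluate $\lambda$ from (2.54) and confirm that, for an everyday gas, $\lambda^3$ is vastly smaller than the volume per particle $V/N$ — which is why a classical treatment is justified.

```python
import math

hbar = 1.0545718e-34   # J s
kB   = 1.380649e-23    # J / K

def thermal_wavelength(m, T):
    """lambda = sqrt(2*pi*hbar^2 / (m * kB * T)), equation (2.54)."""
    return math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))

# Helium atom (m ~ 6.6e-27 kg) at room temperature: a fraction of an angstrom
lam = thermal_wavelength(6.6e-27, 300.0)
print(f"lambda = {lam:.2e} m")

# Volume per particle at p = 1e5 Pa, from the ideal gas law V/N = kB*T/p.
# lambda^3 / (V/N) is tiny, so quantum effects are negligible here.
v_per_particle = kB * 300.0 / 1.0e5
print(lam**3 / v_per_particle)
```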
$Z_1$ is the partition function for a single particle. We have $N$, non-interacting, particles in the box so the partition function of the whole system is

$$Z(N,V,T) = Z_1^N = \frac{V^N}{\lambda^{3N}} \qquad\qquad (2.55)$$
(Full disclosure: there’s a slightly subtle point that we’re brushing under the carpet here and this equation isn’t quite right. This won’t affect our immediate discussion and we’ll explain the issue in more detail in Section 2.2.3.)
Armed with the partition function $Z$, we can happily calculate anything that we like. Let’s start with the pressure, which can be extracted from the partition function by first computing the free energy (1.36) and then using (1.35). We have

$$p = -\frac{\partial F}{\partial V} = \frac{\partial}{\partial V}\left(k_BT\log Z\right) = \frac{Nk_BT}{V} \qquad\qquad (2.56)$$
This equation is an old friend – it is the ideal gas law, $pV = Nk_BT$, that we all met in kindergarten. Notice that the thermal wavelength has indeed disappeared from the discussion as expected. Equations of this form, which link pressure, volume and temperature, are called equations of state. We will meet many throughout this course.
As the plots above show⁴ (⁴Both figures are taken from the web textbook “General Chemistry” and credited to John Hutchinson.), the ideal gas law is an extremely good description of gases at low densities. Gases deviate from this ideal behaviour as the densities increase and the interactions between atoms become important. We will see how this comes about from the viewpoint of microscopic forces in Section 2.5.
It is worth pointing out that this derivation should calm any lingering fears that you had about the definition of temperature given in (1.7). The object that we call $T$ really does coincide with the familiar notion of temperature applied to gases. But the key property of the temperature is that if two systems are in equilibrium then they have the same $T$. That’s enough to ensure that equation (1.7) is the right definition of temperature for all systems because we can always put any system in equilibrium with an ideal gas.
The partition function (2.55) has more in store for us. We can compute the average energy of the ideal gas,

$$E = -\frac{\partial}{\partial\beta}\log Z = \frac{3}{2}Nk_BT \qquad\qquad (2.57)$$
There’s an important, general lesson lurking in this formula. To highlight this, it is worth repeating our analysis for an ideal gas in an arbitrary number of spatial dimensions, $D$. A simple generalization of the calculations above shows that

$$Z = \frac{V^N}{\lambda^{DN}} \qquad\Rightarrow\qquad E = \frac{D}{2}Nk_BT$$
Each particle has $D$ degrees of freedom (because it can move in one of $D$ spatial directions). And each particle contributes $\frac{D}{2}k_BT$ towards the average energy. This is a general rule of thumb, which holds for all classical systems: the average energy of each free degree of freedom in a system at temperature $T$ is $\frac{1}{2}k_BT$. This is called the equipartition of energy. As stated, it holds only for degrees of freedom in the absence of a potential. (There is a modified version if you include a potential). Moreover, it holds only for classical systems or quantum systems at suitably high temperatures.
We can use the result above to see why the thermal de Broglie wavelength (2.54) can be thought of as roughly equal to the average de Broglie wavelength of a particle. Equating the average energy (2.57) to the kinetic energy tells us that the average (root mean square) momentum carried by each particle is $p \sim \sqrt{mk_BT}$. In quantum mechanics, the de Broglie wavelength of a particle is $\lambda_{dB} = 2\pi\hbar/p$, which (up to numerical factors of 2 and $\pi$) agrees with our formula (2.54).
Finally, returning to the reality of $D=3$ dimensions, we can compute the heat capacity for a monatomic ideal gas. It is

$$C_V = \left.\frac{\partial E}{\partial T}\right|_V = \frac{3}{2}Nk_B \qquad\qquad (2.58)$$
We introduced Boltzmann’s constant $k_B$ in our original definition of entropy (1.2). It has the value,

$$k_B = 1.381\times 10^{-23}\,{\rm J\,K^{-1}}$$
In some sense, there is no deep physical meaning to Boltzmann’s constant. It is merely a conversion factor that allows us to go between temperature and energy, as reflected in (1.7). It is necessary to include it in the equations only for historical reasons: our ancestors didn’t realise that temperature and energy were closely related and measured them in different units.
Nonetheless, we could ask why $k_B$ has the value above? It doesn’t seem a particularly natural number. The reason is that both the units of temperature (Kelvin) and energy (Joule) are picked to reflect the conditions of human life. In the everyday world around us, measurements of temperature and energy involve fairly ordinary numbers: room temperature is roughly $300\,K$; the energy required to lift an apple back up to the top of the tree is a few Joules. Similarly, in an everyday setting, all the measurable quantities — $p$, $V$ and $T$ — in the ideal gas equation are fairly normal numbers when measured in SI units. The only way this can be true is if the combination $Nk_B$ is a fairly ordinary number, of order one. In other words, the number of atoms must be huge,

$$N \sim 10^{23} \qquad\qquad (2.59)$$
This then is the real meaning of the value of Boltzmann’s constant: atoms are small.
It’s worth stressing this point. Atoms aren’t just small: they’re really really small. $10^{23}$ is an astonishingly large number. The number of grains of sand in all the beaches in the world is around $10^{18}$. The number of stars in our galaxy is about $10^{11}$. The number of stars in the entire visible Universe is probably around $10^{22}$. And yet the number of water molecules in a cup of tea is more than $10^{23}$.
While we’re talking about the size of atoms, it is probably worth reminding you of the notation used by chemists. They too want to work with numbers of order one. For this reason, they define a mole to be the number of atoms in one gram of Hydrogen. (Actually, it is the number of atoms in 12 grams of Carbon-12, but this is roughly the same thing). The mass of Hydrogen is $1.6\times 10^{-24}\,{\rm g}$, so the number of atoms in a mole is Avogadro’s number,

$$N_A \approx 6\times 10^{23}$$
The number of moles in our gas is then $n = N/N_A$ and the ideal gas law can be written as

$$pV = nRT$$

where $R = N_A k_B$ is called the universal gas constant. Its value is a nice sensible number with no silly power in the exponent: $R \approx 8.3\,{\rm J\,K^{-1}\,mol^{-1}}$.
“It has always been believed that Gibbs’s paradox embodied profound thought. That it was intimately linked up with something so important and entirely new could hardly have been foreseen.”
Erwin Schrödinger
We said earlier that the formula for the partition function (2.55) isn’t quite right. What did we miss? We actually missed a subtle point from quantum mechanics: quantum particles are indistinguishable. If we take two identical atoms and swap their positions, this doesn’t give us a new state of the system – it is the same state that we had before. (Up to a sign that depends on whether the atoms are bosons or fermions – we’ll discuss this aspect in more detail in Sections 3.5 and 3.6). However, we haven’t taken this into account – we wrote the expression $Z = Z_1^N$ which would be true if all the particles in the box were distinguishable — for example, if each of the particles were of a different type. But this naive partition function overcounts the number of states in the system when we’re dealing with indistinguishable particles.
It is a simple matter to write down the partition function for indistinguishable particles. We simply need to divide by the number of ways to permute the particles. In other words, for the ideal gas the partition function is

$$Z(N,V,T) = \frac{1}{N!}\,Z_1^N = \frac{V^N}{N!\,\lambda^{3N}} \qquad\qquad (2.60)$$
The extra factor of $N!$ doesn’t change the calculations of pressure or energy since, for each, we had to differentiate $\log Z$ and any overall factor drops out. However, it does change the entropy since this is given by,

$$S = \frac{\partial}{\partial T}\left(k_BT\log Z\right)$$

which includes a factor of $\log Z$ without any derivative. Of course, since the entropy is counting the number of underlying microstates, we would expect it to know about whether particles are distinguishable or indistinguishable. Using the correct partition function (2.60) and Stirling’s formula, the entropy of an ideal gas is given by,
$$S = Nk_B\left[\log\left(\frac{V}{N\lambda^3}\right) + \frac{5}{2}\right] \qquad\qquad (2.61)$$
This result is known as the Sackur-Tetrode equation. Notice that not only is the entropy sensitive to the indistinguishability of the particles, but it also depends on $\hbar$. However, the entropy is not directly measurable classically. We can only measure entropy differences by integrating the heat capacity as in (1.10).
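A short numerical sketch (my own illustration; the helium mass and standard conditions are assumptions) shows that the Sackur-Tetrode formula gives a sensible number: for helium at room temperature and atmospheric pressure it predicts $S/Nk_B \approx 15$, in the same ballpark as the measured molar entropy of helium divided by $R$.

```python
import math

hbar = 1.0545718e-34   # J s
kB   = 1.380649e-23    # J / K

def sackur_tetrode(N_over_V, m, T):
    """Entropy per particle, S/(N*kB) = log(V/(N*lambda^3)) + 5/2, equation (2.61)."""
    lam = math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))
    return math.log(1.0 / (N_over_V * lam**3)) + 2.5

# Helium (m ~ 6.6e-27 kg) at T = 300 K, p = 1 atm, with N/V = p/(kB*T):
s = sackur_tetrode(1.013e5 / (kB * 300.0), 6.6e-27, 300.0)
print(f"S/(N kB) = {s:.2f}")
```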
The benefit of adding an extra factor of $1/N!$ was noticed before the advent of quantum mechanics by Gibbs. He was motivated by the change in entropy of mixing between two gases. Suppose that we have two different gases, say red and blue. Each has the same number of particles and sits in a volume V, separated by a partition. When the partition is removed the gases mix and we expect the entropy to increase. But if the gases are of the same type, removing the partition shouldn’t change the macroscopic state of the gas. So why should the entropy increase? This is referred to as the Gibbs paradox. Including the factor of $1/N!$ in the partition function ensures that the entropy does not increase when identical atoms are mixed⁵ (⁵Be warned, however: a closer look shows that the Gibbs paradox is rather toothless and, in the classical world, there is no real necessity to add the $1/N!$. A clear discussion of these issues can be found in E.T. Jaynes’ article “The Gibbs Paradox”, which you can download from the course website.)
It is worth briefly looking at the ideal gas in the grand canonical ensemble. Recall that in such an ensemble, the gas is free to exchange both energy and particles with the outside reservoir. You could think of the system as some fixed subvolume inside a much larger gas. If there are no walls to define this subvolume then particles, and hence energy, can happily move in and out. We can ask how many particles will, on average, be inside this volume and what fluctuations in particle number will occur. More importantly, we can also start to gain some intuition for this strange quantity called the chemical potential, $\mu$.
The grand partition function (1.39) for the ideal gas is

$$\mathcal{Z}(\mu,V,T) = \sum_{N=0}^{\infty} e^{\beta\mu N}\,Z(N,V,T) = \exp\left(\frac{e^{\beta\mu}V}{\lambda^3}\right)$$
From this we can determine the average particle number,

$$N = \frac{1}{\beta}\frac{\partial}{\partial\mu}\log\mathcal{Z} = \frac{e^{\beta\mu}V}{\lambda^3}$$

Which, rearranging, gives

$$\mu = k_BT\log\left(\frac{\lambda^3 N}{V}\right) \qquad\qquad (2.62)$$
If $\lambda^3 < V/N$ then the chemical potential is negative. Recall that $\lambda$ is roughly the average de Broglie wavelength of each particle, while $V/N$ is the average volume taken up by each particle. But whenever the de Broglie wavelength of particles becomes comparable to the inter-particle separation, then quantum effects become important. In other words, to trust our classical calculation of the ideal gas, we must have $\lambda^3 \ll V/N$ and, correspondingly, $\mu < 0$.
At first sight, it is slightly strange that $\mu$ is negative. When we introduced $\mu$ in Section 1.4.1, we said that it should be thought of as the energy cost of adding an extra particle to the system. Surely that energy should be positive! To see why this isn’t the case, we should look more closely at the definition. From the energy variation (1.38), we have

$$\mu = \left.\frac{\partial E}{\partial N}\right|_{S,V}$$
So the chemical potential should be thought of as the energy cost of adding an extra particle at fixed entropy and volume. But adding a particle will give more ways to share the energy around and so increase the entropy. If we insist on keeping the entropy fixed, then we will need to reduce the energy when we add an extra particle. This is why we have $\mu < 0$ for the classical ideal gas.
There are situations where $\mu > 0$. This can occur if we have a suitably strong repulsive interaction between particles so that there’s a large energy cost associated to throwing in one extra. We also have $\mu > 0$ for fermion systems at low temperatures, as we will see in Section 3.6.
We can also compute the fluctuation in the particle number,

$$\Delta N^2 = \frac{1}{\beta^2}\frac{\partial^2}{\partial\mu^2}\log\mathcal{Z} = N$$

As promised in Section 1.4.1, the relative fluctuations $\Delta N/\langle N\rangle = 1/\sqrt{N}$ are vanishingly small in the thermodynamic limit.
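The two results above can be illustrated numerically (a sketch of my own; the helium mass and the density are illustrative assumptions): evaluating (2.62) at roughly atmospheric density gives a chemical potential that is indeed negative and of order $-13\,k_BT$, and the relative fluctuations $1/\sqrt{N}$ collapse rapidly as $N$ grows.

```python
import math

hbar = 1.0545718e-34   # J s
kB   = 1.380649e-23    # J / K

def chemical_potential(N_over_V, m, T):
    """mu = kB*T * log(lambda^3 * N / V), equation (2.62)."""
    lam = math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))
    return kB * T * math.log(lam**3 * N_over_V)

# Helium-like gas at roughly atmospheric density: mu is large and negative
mu = chemical_potential(2.5e25, 6.6e-27, 300.0)
print(mu / (kB * 300.0))

# Relative particle-number fluctuations Delta N / N = 1/sqrt(N):
for N in [1e3, 1e10, 1e23]:
    print(f"N = {N:.0e}: Delta N / N = {1 / math.sqrt(N):.1e}")
```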
Our discussion above focusses on understanding macroscopic properties of the gas such as pressure or heat capacity. But we can also use the methods of statistical mechanics to get a better handle on the microscopic properties of the gas. Like everything else, the information is hidden in the partition function. Let’s return to the form of the single particle partition function (2.52) before we do the integrals. We’ll still do the trivial spatial integral $\int d^3q = V$, but we’ll hold off on the momentum integral and instead change variables from momentum to velocity, $\vec{v} = \vec{p}/m$. Then the single particle partition function is

$$Z_1 = \frac{m^3 V}{(2\pi\hbar)^3}\int d^3v\ e^{-m\vec{v}^{\,2}/2k_BT} = \frac{4\pi m^3 V}{(2\pi\hbar)^3}\int dv\ v^2\, e^{-mv^2/2k_BT}$$
We can compare this to the original definition of the partition function: the sum over states of the probability of that state. But here too, the partition function is written as a sum, now over speeds. The integrand must therefore have the interpretation as the probability distribution over speeds. The probability that the atom has speed between $v$ and $v+dv$ is

$$f(v)\,dv = \mathcal{N}v^2\,e^{-mv^2/2k_BT}\,dv \qquad\qquad (2.64)$$

where the normalization factor $\mathcal{N}$ can be determined by insisting that probabilities sum to one, $\int_0^\infty f(v)\,dv = 1$, which gives

$$\mathcal{N} = 4\pi\left(\frac{m}{2\pi k_BT}\right)^{3/2}$$
This is the Maxwell distribution. It is sometimes called the Maxwell-Boltzmann distribution. Figure 10 shows this distribution for a variety of gases with different masses at the same temperature, from the slow heavy Xenon (purple) to light, fast Helium (blue). We can use it to determine various average properties of the speeds of atoms in a gas. For example, the mean square speed is

$$\langle v^2\rangle = \int_0^\infty dv\ v^2 f(v) = \frac{3k_BT}{m}$$
This is in agreement with the equipartition of energy: the average kinetic energy of the gas is $\frac{1}{2}m\langle v^2\rangle = \frac{3}{2}k_BT$.
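Both the normalization and the mean square speed can be checked by direct numerical integration of the Maxwell distribution (a sketch of my own, working in units where $m = k_BT = 1$ so the tail beyond $v = 10$ is utterly negligible):

```python
import math

def maxwell(v, m=1.0, kT=1.0):
    """Maxwell speed distribution f(v) = 4*pi*(m/(2*pi*kT))^{3/2} v^2 exp(-m v^2 / 2 kT)."""
    norm = 4 * math.pi * (m / (2 * math.pi * kT)) ** 1.5
    return norm * v * v * math.exp(-m * v * v / (2 * kT))

# Simple Riemann sum over [0, 10] in units m = kT = 1
dv = 1e-4
vs = [i * dv for i in range(int(10 / dv) + 1)]
norm   = sum(maxwell(v) * dv for v in vs)          # should be 1: probabilities sum to one
v2_avg = sum(v * v * maxwell(v) * dv for v in vs)  # should be 3, i.e. <v^2> = 3 kT / m
print(norm, v2_avg)
```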
The above derivation tells us the distribution of velocities in a non-interacting gas of particles. Remarkably, the Maxwell distribution also holds in the presence of any interactions. In fact, Maxwell’s original derivation of the distribution makes no reference to any properties of the gas. It is very slick!
Let’s first think about the distribution of velocities in the $x$ direction; we’ll call this distribution $\phi(v_x)$. Rotational symmetry means that we must have the same distribution of velocities in both the $y$ and $z$ directions. However, rotational invariance also requires that the full distribution can’t depend on the direction of the velocity; it can only depend on the speed $v = \sqrt{v_x^2+v_y^2+v_z^2}$. This means that we need to find functions $F(v)$ and $\phi(v_x)$ such that

$$F(v)\,d^3v = \phi(v_x)\phi(v_y)\phi(v_z)\,d^3v$$
It doesn’t look as if we possibly have enough information to solve this equation for both $F$ and $\phi$. But, remarkably, there is only one solution. The only function which satisfies this equation is

$$\phi(v_x) = Ae^{-Bv_x^2}$$

for some constants $A$ and $B$. Thus the distribution over speeds must be

$$F(v)\,d^3v = 4\pi v^2\,A^3\,e^{-Bv^2}\,dv$$
We see that the functional form of the distribution arises from rotational invariance alone. To determine the coefficient $B = m/2k_BT$ we need the more elaborate techniques of statistical mechanics that we saw above. (In fact, one can derive it just from equipartition of energy).
The name kinetic theory refers to understanding the properties of gases through their underlying atomic constituents. The discussion given above barely scratches the surface of this important subject.
Kinetic theory traces its origin to the work of Daniel Bernoulli in 1738. He was the first to argue that the phenomenon that we call pressure is due to the constant bombardment of tiny atoms. His calculation is straightforward. Consider a cubic box with sides of length $L$. Suppose that an atom travelling with velocity $v_x$ in the $x$ direction bounces elastically off a wall so that it returns with velocity $-v_x$. The particle experiences a change in momentum of $\Delta p_x = 2mv_x$. Since the particle is trapped in a box, it will next hit the wall at a time $\Delta t = 2L/v_x$ later. This means that the force on the wall due to this atom is

$$F = \frac{\Delta p_x}{\Delta t} = \frac{mv_x^2}{L}$$
Summing over all the atoms which hit the wall, the force is

$$F = \frac{Nm\langle v_x^2\rangle}{L}$$

where $\langle v_x^2\rangle$ is the average of the square of the velocity in the $x$-direction. Using the same argument as we gave in Maxwell’s derivation above, we must have $\langle v_x^2\rangle = \frac{1}{3}\langle v^2\rangle$. Thus $F = Nm\langle v^2\rangle/3L$ and the pressure, which is force per area, is given by

$$p = \frac{Nm\langle v^2\rangle}{3L^3} = \frac{Nm\langle v^2\rangle}{3V}$$
If this equation is compared to the ideal gas law (which, at the time, had only experimental basis) one concludes that the phenomenon of temperature must arise from the kinetic energy of the gas. Or, more precisely, one finds the equipartition result that we derived previously: $\frac{1}{2}m\langle v^2\rangle = \frac{3}{2}k_BT$.
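Bernoulli's argument is easy to check with a small Monte Carlo sketch of my own: sample velocity components from the Gaussian that equipartition dictates (variance $k_BT/m$) and confirm that $m\langle v_x^2\rangle = k_BT$, so that $p = Nm\langle v_x^2\rangle/V$ reproduces the ideal gas law $p = Nk_BT/V$.

```python
import math
import random

random.seed(0)

m, kT = 1.0, 1.0
n_samples = 200_000

# Each velocity component is Gaussian with variance kT/m, as equipartition requires.
# Bernoulli's bombardment argument gives p = (N/V) * m * <vx^2>, so m*<vx^2> should equal kT.
vx2 = sum(random.gauss(0.0, math.sqrt(kT / m)) ** 2 for _ in range(n_samples)) / n_samples
print(m * vx2)   # close to kT = 1.0, up to ~0.3% sampling noise
```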
After Bernoulli’s pioneering work, kinetic theory languished. No one really knew what to do with his observation nor how to test the underlying atomic hypothesis. Over the next century, Bernoulli’s result was independently rediscovered by a number of people, all of whom were ignored by the scientific community. One of the more interesting attempts was by John Waterston, a Scottish engineer and naval instructor working for the East India Company in Bombay. Waterston was considered a crackpot. His 1843 paper was rejected by the Royal Society as “nothing but nonsense” and he wrote up his results in a self-published book with the wonderfully crackpot title “Thoughts on Mental Functions”.
The results of Bernoulli and Waterston finally became accepted only after they were re-rediscovered by more established scientists, most notably Rudolph Clausius who, in 1857, extended these ideas to rotating and vibrating molecules. Soon afterwards, in 1859, Maxwell gave the derivation of the distribution of velocities that we saw above. This is often cited as the first statistical law of physics. But Maxwell was able to take things further. He used kinetic theory to derive the first genuinely new prediction of the atomic hypothesis: that the viscosity of a gas is independent of its density. Maxwell himself wrote,
“Such a consequence of the mathematical theory is very startling and the only experiment I have met with on the subject does not seem to confirm it.”
Maxwell decided to rectify the situation. With help from his wife, he spent several years constructing an experimental apparatus in his attic which was capable of providing the first accurate measurements of the viscosity of gases⁶ (⁶You can see the original apparatus down the road in the corridor of the Cavendish lab. Or, if you don’t fancy the walk, you can simply click here: http://www-outreach.phy.cam.ac.uk/camphy/museum/area1/exhibit1.htm). His surprising theoretical prediction was confirmed by his own experiment.
There are many further developments in kinetic theory which we will not cover in this course. Perhaps the most important is the Boltzmann equation. This describes the evolution of a particle’s probability distribution in position and momentum space as it collides with other particles. Stationary, unchanging, solutions bring you back to the Maxwell-Boltzmann distribution, but the equation also provides a framework to go beyond the equilibrium description of a gas. You can read about this in the lecture notes on Kinetic Theory.
“I must now say something about these internal motions, because the greatest difficulty which the kinetic theory of gases has yet encountered belongs to this part of the subject”.
James Clerk Maxwell, 1875
Consider a molecule that consists of two atoms in a bound state. We’ll construct a very simple physicist’s model of this molecule: two masses attached to a spring. As well as the translational degrees of freedom, there are two further ways in which the molecule can move:
Rotation: the molecule can rotate rigidly about the two axes perpendicular to the axis of symmetry, with moment of inertia $I$. (For now, we will neglect the rotation about the axis of symmetry. It has very low moment of inertia which will ultimately mean that it is unimportant).
Vibration: the molecule can oscillate along the axis of symmetry
We’ll work under the assumption that the rotation and vibration modes are independent. In this case, the partition function for a single molecule factorises into the product of the translational partition function that we have already calculated (2.53) and the rotational and vibrational contributions,

$$Z_1 = Z_{\rm trans}\,Z_{\rm rot}\,Z_{\rm vib}$$
We will now deal with $Z_{\rm rot}$ and $Z_{\rm vib}$ in turn.
The Lagrangian for the rotational degrees of freedom is⁷ (⁷See, for example, Section 3.6 of the lecture notes on Classical Dynamics.)

$$\mathcal{L}_{\rm rot} = \frac{1}{2}I\left(\dot\theta^2 + \sin^2\theta\,\dot\varphi^2\right) \qquad\qquad (2.65)$$
The conjugate momenta are therefore

$$p_\theta = \frac{\partial\mathcal{L}_{\rm rot}}{\partial\dot\theta} = I\dot\theta \qquad,\qquad p_\varphi = \frac{\partial\mathcal{L}_{\rm rot}}{\partial\dot\varphi} = I\sin^2\theta\,\dot\varphi$$

from which we get the Hamiltonian for the rotating diatomic molecule,

$$H_{\rm rot} = \dot\theta p_\theta + \dot\varphi p_\varphi - \mathcal{L}_{\rm rot} = \frac{p_\theta^2}{2I} + \frac{p_\varphi^2}{2I\sin^2\theta} \qquad\qquad (2.66)$$
The rotational contribution to the partition function is then

$$Z_{\rm rot} = \frac{1}{(2\pi\hbar)^2}\int d\theta\, d\varphi\, dp_\theta\, dp_\varphi\ e^{-\beta H_{\rm rot}} = \frac{2Ik_BT}{\hbar^2} \qquad\qquad (2.67)$$
From this we can compute the average rotational energy of each molecule,

$$E_{\rm rot} = -\frac{\partial}{\partial\beta}\log Z_{\rm rot} = \frac{1}{\beta} = k_BT$$
If we now include the translational contribution (2.53), the partition function for a diatomic molecule that can spin and move, but can’t vibrate, is given by $Z_1 = Z_{\rm trans}Z_{\rm rot}$, and the partition function for a gas of these objects is $Z = Z_1^N/N!$, from which we compute the energy and the heat capacity,

$$E = \frac{5}{2}Nk_BT \qquad\Rightarrow\qquad C_V = \frac{5}{2}Nk_B$$
In fact we can derive this result simply from equipartition of energy: there are 3 translational modes and 2 rotational modes, giving a contribution of $\frac{5}{2}Nk_BT$ to the energy.
The Hamiltonian for the vibrating mode is simply a harmonic oscillator. We’ll denote the displacement away from the equilibrium position by $\zeta$. The molecule vibrates with some frequency $\omega$ which is determined by the strength of the atomic bond. The Hamiltonian is then

$$H_{\rm vib} = \frac{p_\zeta^2}{2m} + \frac{1}{2}m\omega^2\zeta^2$$
from which we can compute the partition function

$$Z_{\rm vib} = \frac{1}{2\pi\hbar}\int d\zeta\, dp_\zeta\ e^{-\beta H_{\rm vib}} = \frac{k_BT}{\hbar\omega} \qquad\qquad (2.68)$$
The average vibrational energy of each molecule is now

$$E_{\rm vib} = -\frac{\partial}{\partial\beta}\log Z_{\rm vib} = \frac{1}{\beta} = k_BT$$
(You may have anticipated $\frac{1}{2}k_BT$ since the harmonic oscillator has just a single degree of freedom, but equipartition works slightly differently when there is a potential energy. You will see another example on the problem sheet from which it is simple to deduce the general form).
Putting together all the ingredients, the contributions from translational motion, rotation and vibration give the heat capacity

$$C_V = \frac{7}{2}Nk_B$$
This result depends on neither the moment of inertia, $I$, nor the stiffness of the molecular bond, $\omega$. A molecule with large $I$ will simply spin more slowly so that the average rotational kinetic energy is $k_BT$; a molecule attached by a stiff spring with high $\omega$ will vibrate with smaller amplitude so that the average vibrational energy is $k_BT$. This ensures that the heat capacity is constant.
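The full result can be checked by differentiating $\log Z$ numerically (a sketch of my own, in units where $k_B = \hbar = 1$; the particular values of $I$ and $\omega$ are arbitrary precisely because, as just argued, they drop out):

```python
import math

def logZ1(beta, I=1.0, omega=1.0, hbar=1.0):
    """Single-molecule log Z = log(Z_trans * Z_rot * Z_vib) for the classical diatomic gas.
    Constant (beta-independent) factors like the volume are dropped: they don't affect E."""
    log_trans = -1.5 * math.log(beta)                 # Z_trans ~ beta^{-3/2}
    log_rot   = math.log(2 * I / (beta * hbar**2))    # equation (2.67)
    log_vib   = -math.log(beta * hbar * omega)        # equation (2.68)
    return log_trans + log_rot + log_vib

# E = -d(log Z)/d(beta) by central difference; at beta = 1 (i.e. kB*T = 1)
# we expect E = 7/2 per molecule, hence C_V = (7/2) N kB.
beta, h = 1.0, 1e-5
E = -(logZ1(beta + h) - logZ1(beta - h)) / (2 * h)
print(E)
```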
Great! So the heat capacity of a diatomic gas is $C_V = \frac{7}{2}Nk_B$. Except it’s not! An idealised graph of the heat capacity for $H_2$, the simplest diatomic gas, is shown in Figure 11. At suitably high temperatures, around $5000\,K$, we do see the full heat capacity that we expect. But at low temperatures, the heat capacity is that of a monatomic gas, $C_V = \frac{3}{2}Nk_B$. And, in the middle, $C_V = \frac{5}{2}Nk_B$: it seems to rotate, but not vibrate. What’s going on? Towards the end of the nineteenth century, scientists were increasingly bewildered about this behaviour.
What’s missing in the discussion above is something very important: $\hbar$. The successive freezing out of vibrational and rotational modes as the temperature is lowered is a quantum effect. In fact, this behaviour of the heat capacities of gases was the first time that quantum mechanics revealed itself in experiment. We’re used to thinking of quantum mechanics as being relevant on small scales, yet here we see that $\hbar$ affects the physics of gases at everyday temperatures of a few hundred Kelvin. But then, that is the theme of this course: how the microscopic determines the macroscopic. We will return to the diatomic gas in Section 3.4 and understand its heat capacity including the relevant quantum effects.
Until now, we’ve only discussed free systems; particles moving around unaware of each other. Now we’re going to turn on interactions. Here things get much more interesting. And much more difficult. Many of the most important unsolved problems in physics are to do with the interactions between large numbers of particles. Here we’ll be gentle. We’ll describe a simple approximation scheme that will allow us to begin to understand the effects of interactions between particles.
We’ll focus once more on the monatomic gas. The ideal gas law is exact in the limit of no interactions between atoms. This is a good approximation when the density of atoms is small. Corrections to the ideal gas law are often expressed in terms of a density expansion, known as the virial expansion. The most general equation of state is,

$$\frac{pV}{Nk_BT} = 1 + B_2(T)\frac{N}{V} + B_3(T)\frac{N^2}{V^2} + \ldots \qquad\qquad (2.69)$$

where the functions $B_j(T)$ are known as virial coefficients.
Our goal is to compute the virial coefficients from first principles, starting from a knowledge of the underlying potential energy $U(r)$ between two neutral atoms separated by a distance $r$. This potential has two important features:
An attractive force. This arises from fluctuating dipoles of the neutral atoms. Recall that two permanent dipole moments, $p_1$ and $p_2$, have a potential energy which scales as $p_1p_2/r^3$. Neutral atoms don’t have permanent dipoles, but they can acquire a temporary dipole due to quantum fluctuations. Suppose that the first atom has an instantaneous dipole $p_1$. This will induce an electric field which is proportional to $E\sim p_1/r^3$ which, in turn, will induce a dipole of the second atom, $p_2\sim E\sim p_1/r^3$. The resulting potential energy between the atoms scales as $p_1p_2/r^3 \sim 1/r^6$. This is sometimes called the van der Waals interaction.
A rapidly rising repulsive interaction at short distances, arising from the Pauli exclusion principle that prevents two atoms from occupying the same space. For our purposes, the exact form of this repulsion is not so relevant: just as long as it’s big. (The Pauli exclusion principle is a quantum effect. If the exact form of the potential is important then we really need to be dealing with quantum mechanics all along. We will do this in the next section).
One very common potential that is often used to model the force between atoms is the Lennard-Jones potential,

$$U(r) \sim \left(\frac{r_0}{r}\right)^{12} - \left(\frac{r_0}{r}\right)^{6} \qquad\qquad (2.70)$$

The exponent 12 is chosen only for convenience: it simplifies certain calculations because $12 = 2\times 6$.
An even simpler form of the potential incorporates a hard core repulsion, in which the particles are simply forbidden from approaching closer than a fixed distance $r_0$ by imposing an infinite potential,

$$U(r) = \begin{cases} \infty & r < r_0 \\ -U_0\left(\dfrac{r_0}{r}\right)^6 & r \geq r_0 \end{cases} \qquad\qquad (2.71)$$
The hard-core potential with van der Waals attraction is sketched to the right. We will see shortly that the virial coefficients are determined by increasingly difficult integrals involving the potential $U(r)$. For this reason, it’s best to work with a potential that’s as simple as possible. When we come to do some actual calculations we will use the form (2.71).
We’re going to change notation and call the positions of the particles $\vec{r}$ instead of $\vec{q}$. (The latter notation was useful to stress the connection to quantum mechanics at the beginning of this Section, but we’ve now left that behind!). The Hamiltonian of the gas is

$$H = \sum_{i=1}^{N}\frac{\vec{p}_i^{\,2}}{2m} + \sum_{j<k}U(r_{jk})$$

where $r_{jk} = |\vec{r}_j - \vec{r}_k|$ is the separation between particles. The restriction $j<k$ on the final sum ensures that we sum over each pair of particles exactly once. The partition function is then

$$Z(N,V,T) = \frac{1}{N!}\frac{1}{\lambda^{3N}}\int\prod_i d^3r_i\ e^{-\beta\sum_{j<k}U(r_{jk})}$$
where $\lambda$ is the thermal wavelength that we met in (2.54). We still need to do the integral over positions. And that looks hard! The interactions mean that the integrals don’t factor in any obvious way. What to do? One obvious thing to try is to Taylor expand (which is closely related to the so-called cumulant expansion in this context)

$$e^{-\beta\sum_{j<k}U(r_{jk})} = 1 - \beta\sum_{j<k}U(r_{jk}) + \frac{\beta^2}{2}\left(\sum_{j<k}U(r_{jk})\right)^2 + \ldots$$

Unfortunately, this isn’t so useful. We want each term to be smaller than the preceding one. But as $r_{jk}\to 0$, the potential $U(r_{jk})\to\infty$, which doesn’t look promising for an expansion parameter.
Instead of proceeding with the naive Taylor expansion, we will instead choose to work with the following quantity, usually called the Mayer f function,
$$f(r) = e^{-\beta U(r)} - 1 \qquad (2.72)$$
This is a nicer expansion parameter. When the particles are far separated, $r \to \infty$, we have $f(r) \to 0$. However, as the particles come close, $r \to 0$, the Mayer function approaches $f(r) \to -1$. We'll proceed by trying to construct a suitable expansion in terms of $f$. We define
$$f_{ij} = f(r_{ij})$$
Then we can write the partition function as
$$Z(N,V,T) = \frac{1}{N!\,\lambda^{3N}}\int\prod_{i=1}^{N}d^3r_i\;\prod_{j>k}\left(1+f_{jk}\right) = \frac{1}{N!\,\lambda^{3N}}\int\prod_{i=1}^{N}d^3r_i\left[1+\sum_{j>k}f_{jk}+\sum_{j>k}\sum_{l>m}f_{jk}f_{lm}+\ldots\right] \qquad (2.73)$$
The first term simply gives a factor of the volume for each integral, so we get $V^N$. The second term has a sum, each element of which is the same. They all look like
$$\int\prod_{i=1}^{N}d^3r_i\; f_{12} = V^{N-2}\int d^3r_1\,d^3r_2\; f(r_{12}) = V^{N-1}\int d^3r\, f(r)$$
where, in the last equality, we've simply changed integration variables from $\vec{r}_1$ and $\vec{r}_2$ to the centre of mass $\vec{R} = \frac{1}{2}(\vec{r}_1+\vec{r}_2)$ and the separation $\vec{r} = \vec{r}_1-\vec{r}_2$. (You might worry that the limits of integration change in the integral over $\vec{r}$, but the integral over $f$ only picks up contributions from atomic size distances and this is only actually a problem close to the boundaries of the system where it is negligible). There is a term like this for each pair of particles – that is $\frac{1}{2}N(N-1)$ such terms. For $N \gg 1$, we can just call this a round $\frac{1}{2}N^2$. Then, ignoring terms quadratic in $f$ and higher, the partition function is approximately
$$Z(N,V,T) \approx \frac{V^N}{N!\,\lambda^{3N}}\left(1+\frac{N^2}{2V}\int d^3r\,f(r)\right) \approx Z_{\rm ideal}\left(1+\frac{N}{2V}\int d^3r\,f(r)\right)^{N}$$
where we’ve used our previous result that . We’ve also engaged in something of a sleight of hand in this last line, promoting one power of from in front of the integral to an overall exponent. Massaging the expression in this way ensures that the free energy is proportional to the number of particles as one would expect:
$$F = F_{\rm ideal} - Nk_BT\log\left(1+\frac{N}{2V}\int d^3r\,f(r)\right) \qquad (2.74)$$
However, if you’re uncomfortable with this little trick, it’s not hard to convince yourself that the result (2.75) below for the equation of state doesn’t depend on it. We will also look at the expansion more closely in the following section and see how all the higher order terms work out.
From the expression (2.74) for the free energy, it is clear that we are indeed performing an expansion in the density of the gas since the correction term is proportional to $N/V$. This form of the free energy will give us the second virial coefficient $B_2$.
We can be somewhat more precise about what it means to be at low density. The exact form of the integral depends on the potential, but for both the Lennard-Jones potential (2.70) and the hard-core repulsion (2.71), the integral is approximately $\int d^3r\,f(r) \sim r_0^3$, where $r_0$ is roughly the position of the minimum of the potential. (We'll compute the integral exactly below for the hard-core potential). For the expansion to be valid, we want each term with an extra power of $f$ to be smaller than the preceding one. (This statement is actually only approximately true. We'll be more precise below when we develop the cluster expansion). That means that the second term in the argument of the $\log$ should be smaller than 1. In other words,
$$\frac{N}{V} \ll \frac{1}{r_0^3}$$
The left-hand side is the density of the gas. The right-hand side is atomic density. Or, equivalently, the density of a substance in which the atoms are packed closely together. But we have a name for such substances – we call them liquids! Our expansion is valid for densities of the gas that are much lower than that of the liquid state.
We can use the free energy (2.74) to compute the pressure of the gas. Expanding the logarithm as $\log(1+x) \approx x$, we get
$$p = -\frac{\partial F}{\partial V} = \frac{Nk_BT}{V}\left(1-\frac{N}{2V}\int d^3r\,f(r)+\ldots\right)$$
As expected, the pressure deviates from that of an ideal gas. We can characterize this by writing
$$\frac{pV}{Nk_BT} = 1 - \frac{N}{2V}\int d^3r\,f(r) \qquad (2.75)$$
To understand what this is telling us, we need to compute $\int d^3r\,f(r)$. Firstly let's look at two trivial examples:
Repulsion: Suppose that $U(r) > 0$ for all separations $r$, with $U(r\to\infty) = 0$. Then $f = e^{-\beta U}-1 < 0$ and the pressure increases, as we'd expect for a repulsive interaction.
Attraction: If $U(r) < 0$ everywhere, we have $f > 0$ and the pressure decreases, as we'd expect for an attractive interaction.
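Both limits of the Mayer function, and the signs in the two examples above, are easy to check numerically. A minimal sketch for the hard-core potential with van der Waals attraction (2.71), with illustrative parameter values:

```python
import math

def mayer_f(r, r0=1.0, U0=1.0, beta=1.0):
    """Mayer f function (2.72): f(r) = exp(-beta U(r)) - 1, evaluated
    here for the hard-core potential with van der Waals attraction (2.71)."""
    if r < r0:
        return -1.0              # U = infinity inside the core, so e^{-beta U} = 0
    U = -U0 * (r0 / r) ** 6      # attractive tail
    return math.exp(-beta * U) - 1.0

print(mayer_f(0.5))    # -1 inside the hard core
print(mayer_f(1.5))    # positive: attraction lowers the pressure
print(mayer_f(50.0))   # essentially zero at large separation
```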
What about a more realistic interaction that is attractive at long distances and repulsive at short? We will compute the equation of state of a gas using the hard-core potential with van der Waals attraction (2.71). The integral of the Mayer function is
$$\int d^3r\,f(r) = 4\pi\int_0^{r_0} dr\, r^2\,(-1) + 4\pi\int_{r_0}^{\infty} dr\, r^2\left(e^{\beta U_0 (r_0/r)^6}-1\right) \qquad (2.76)$$
We’ll approximate the second integral in the high temperature limit, , where . Then
Inserting this into (2.75) gives us an expression for the equation of state,
$$\frac{pV}{Nk_BT} \approx 1 - \frac{N}{V}\left(\frac{a}{k_BT}-b\right)$$
We recognise this expansion as capturing the second virial coefficient $B_2$ in (2.69) as promised. The constants $a$ and $b$ are defined by
$$a = \frac{2\pi r_0^3}{3}U_0 \qquad {\rm and} \qquad b = \frac{2\pi r_0^3}{3}$$
It is actually slightly more useful to write this equation of state in the form $k_BT = k_BT(p,V)$. We can multiply through by $k_BT$ then, rearranging, we have
$$k_BT = \frac{pV}{N}\left[1-\frac{N}{V}\left(\frac{a}{k_BT}-b\right)\right]^{-1}$$
Since we’re working in an expansion in density, , we’re at liberty to Taylor expand the last bracket, keeping only the first two terms. We get
$$k_BT = \left(p + \frac{N^2}{V^2}a\right)\left(\frac{V}{N}-b\right) \qquad (2.78)$$
This is the famous van der Waals equation of state for a gas. We stress again the limitations of our analysis: it is valid only at low densities and (because of our approximation when performing the integral (2.76)) at high temperatures.
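The high-temperature approximation of the integral (2.76) can itself be checked numerically. A sketch using a simple trapezoidal rule (parameter values are illustrative; for $\beta U_0 \ll 1$ the result should approach $(4\pi r_0^3/3)(\beta U_0 - 1)$):

```python
import math

def mayer_integral(r0=1.0, U0=1.0, beta=0.05, rmax=50.0, n=20000):
    """Evaluate int d^3r f(r) = 4 pi int dr r^2 f(r) for the hard-core
    potential with van der Waals attraction, by the trapezoidal rule."""
    core = -4.0 * math.pi * r0 ** 3 / 3.0   # exact contribution of r < r0, where f = -1
    h = (rmax - r0) / n
    total = 0.0
    for i in range(n + 1):
        r = r0 + i * h
        g = r * r * (math.exp(beta * U0 * (r0 / r) ** 6) - 1.0)
        total += 0.5 * g if i in (0, n) else g
    return core + 4.0 * math.pi * h * total

beta = 0.05
analytic = 4.0 * math.pi / 3.0 * (beta - 1.0)  # (4 pi r0^3 / 3)(beta U0 - 1)
print(mayer_integral(beta=beta), analytic)     # agree to well under a percent
```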
We will return to the van der Waals equation in Section 5 where we’ll explore many of its interesting features. For now, we can get a feeling for the physics behind this equation of state by rewriting it in yet another way,
$$p = \frac{Nk_BT}{V-bN} - a\frac{N^2}{V^2} \qquad (2.79)$$
The constant $a$ contains a factor of $U_0$ and so captures the effect of the attractive interaction at large distances. We see that its role is to reduce the pressure of the gas. The reduction in pressure is proportional to the density squared because this is, in turn, proportional to the number of pairs of particles which feel the attractive force. In contrast, $b$ contains only $r_0$ and arises due to the hard-core repulsion in the potential. Its effect is to reduce the effective volume of the gas because of the space taken up by the particles.
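The equation of state (2.79) is straightforward to evaluate as $p(n,T)$ with $n = N/V$. A sketch in units with $k_B = 1$ and made-up values of $a$ and $b$, checking that the ideal gas law is recovered at low density and that the two constants push the pressure in opposite directions:

```python
def vdw_pressure(n, T, a, b):
    """van der Waals pressure (2.79), with n = N/V and kB = 1:
        p = n T / (1 - b n) - a n^2
    """
    return n * T / (1.0 - b * n) - a * n * n

n, T = 1e-4, 2.0
print(vdw_pressure(n, T, a=0.5, b=0.3))          # very close to the ideal value n*T
print(vdw_pressure(n, T, a=0.0, b=0.3) > n * T)  # hard core alone raises p
print(vdw_pressure(n, T, a=0.5, b=0.0) < n * T)  # attraction alone lowers p
```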
It is worth pointing out where some quizzical factors of two come from in $b$. Recall that $r_0$ is the minimum distance that two atoms can approach. If we think of each atom as a hard sphere, then they have radius $r_0/2$ and volume $\frac{4\pi}{3}\left(\frac{r_0}{2}\right)^3$. Which isn't equal to $b$. However, as illustrated in the figure, the excluded volume around each atom is actually $\Omega = \frac{4\pi r_0^3}{3}$, which is $8$ times the volume of the atom. So why don't we have $\Omega$ sitting in the denominator of the van der Waals equation rather than $b = \Omega/2$? Think about adding the atoms one at a time. The first guy can move in volume $V$; the second in volume $V-\Omega$; the third in volume $V-2\Omega$ and so on. For $N\Omega \ll V$, the total configuration space available to the atoms is
$$\frac{1}{N!}\prod_{m=0}^{N-1}\left(V-m\Omega\right) \approx \frac{V^N}{N!}\left(1-\frac{N^2\Omega}{2V}+\ldots\right) \approx \frac{1}{N!}\left(V-\frac{N\Omega}{2}\right)^N$$
And there’s that tricky factor of .
Above we computed the equation of state for the dipole van der Waals interaction with hard core potential. But our expression (2.75) can seemingly be used to compute the equation of state for any potential between atoms. However, there are limitations. Looking back to the integral (2.76), we see that a long-range force of the form $U(r) \sim 1/r^n$ will only give rise to a convergent integral for $n > 3$. This means that the techniques described above do not work for long-range potentials with fall-off $1/r^3$ or slower. This includes the important case of Coulomb interactions.
Above we computed the leading order correction to the ideal gas law. In terms of the virial expansion (2.69) this corresponds to the second virial coefficient $B_2$. We will now develop the full expansion and explain how to compute the higher virial coefficients.
Let’s go back to equation (2.73) where we first expressed the partition function in terms of ,
$$Z(N,V,T) = \frac{1}{N!\,\lambda^{3N}}\int\prod_{i=1}^{N}d^3r_i\;\prod_{j>k}\left(1+f_{jk}\right) = \frac{1}{N!\,\lambda^{3N}}\int\prod_{i=1}^{N}d^3r_i\left[1+\sum_{j>k}f_{jk}+\sum_{j>k}\sum_{l>m}f_{jk}f_{lm}+\ldots\right] \qquad (2.80)$$
Above we effectively related the second virial coefficient to the term linear in $f$: this is the essence of the equation of state (2.75). One might think that terms quadratic in $f$ give rise to the third virial coefficient and so on. But, as we'll now see, the expansion is somewhat more subtle than that.
The expansion in (2.80) includes terms of the form $f_{jk}f_{lm}\ldots$ where the indices denote pairs of atoms, $(j,k)$ and $(l,m)$ and so on. These pairs may have atoms in common or they may all be different. However, the same pair never appears twice in a given term as you may check by going back to the first line in (2.80). We'll introduce a diagrammatic method to keep track of all the terms in the sum. To each term of this form we associate a picture using the following rules
Draw $N$ atoms. (This gets tedious for large $N$ but, as we'll soon see, we will actually only need pictures with a small subset of atoms).
Draw a line between each pair of atoms that appear as indices. So for $f_{jk}f_{lm}\ldots$, we draw a line between atom $j$ and atom $k$; a line between atom $l$ and atom $m$; and so on.
For example, if we have just $N=4$ atoms, we have the following pictures for different terms in the expansion,
We call these diagrams graphs. Each possible graph appears exactly once in the partition function (2.80). In other words, the partition function is a sum over all graphs. We still have to do the integrals over all positions $\vec{r}_i$. We will denote the integral over graph $G$ to be $W[G]$. Then the partition function is
$$Z(N,V,T) = \frac{1}{N!\,\lambda^{3N}}\sum_{G} W[G]$$
Nearly all the graphs that we can draw will have disconnected components. For example, those graphs that correspond to just a single $f_{jk}$ will have two atoms connected and the remaining $N-2$ sitting alone. Those graphs that correspond to $f_{jk}f_{lm}$ fall into two categories: either they consist of two pairs of atoms (like the second example above) or, if $f_{jk}$ shares an atom with $f_{lm}$, there are three linked atoms (like the third example above). Importantly, the integral over positions then factorises into a product of integrals over the positions of atoms in disconnected components. This is illustrated by an example with $N=5$ atoms,
We call the disconnected components of the graph clusters. If a cluster has $l$ atoms, we will call it an $l$-cluster. The example above has a single 3-cluster and a single 2-cluster. In general, a graph $G$ will split into $m_l$ $l$-clusters. Clearly, we must have
$$\sum_{l=1}^{N} m_l\, l = N \qquad (2.81)$$
Of course, for a graph with only a few lines and lots of atoms, nearly all the atoms will be in lonely 1-clusters.
We can now make good on the promise above that we won't have to draw all $N$ atoms. The key idea is that we can focus on clusters of $l$ atoms. We will organise the expansion in such a way that the $(l+1)$-clusters are less important than the $l$-clusters. To see how this works, let's focus on 3-clusters for now. There are four different ways that we can have a 3-cluster,
Each of these 3-clusters will appear in a graph with any other combination of clusters among the remaining $N-3$ atoms. But since clusters factorise in the partition function, we know that $Z$ must include a factor
$$\int d^3r_1\,d^3r_2\,d^3r_3\left(f_{12}f_{23} + f_{13}f_{23} + f_{12}f_{13} + f_{12}f_{13}f_{23}\right)$$
This factor contains terms of order $f^2$ and $f^3$. It turns out that this is the correct way to arrange the expansion: not in terms of the number of lines in the diagram, which is equal to the power of $f$, but instead in terms of the number of atoms that they connect. The partition function will similarly contain factors associated to all other $l$-clusters. We define the corresponding integrals as
$$U_l \equiv \int\prod_{i=1}^{l}d^3r_i \sum_{G\,\in\,\{l\text{-clusters}\}}\ \prod_{(jk)\in G} f_{jk} \qquad (2.82)$$
Notice that $U_1$ is simply the integral over space, namely $U_1 = V$. The full partition function must be a product of $U_l$'s. The tricky part is to get all the combinatoric factors right to make sure that you count each graph exactly once. The sum over graphs that appears in the partition function turns out to be
$$\sum_{G}\prod_{(jk)\in G} f_{jk} = \sum_{\{m_l\}}\prod_l U_l^{m_l}\,\frac{N!}{\prod_l (l!)^{m_l}\,m_l!} \qquad (2.83)$$
The factor $N!/\prod_l (l!)^{m_l}\,m_l!$ counts the number of ways to split $N$ particles into $m_l$ $l$-clusters, while ignoring the different ways to internally connect each cluster. This is the right thing to do since the different internal connections are taken into account in the integral $U_l$.
Combinatoric arguments are not always transparent. Let's do a couple of checks to make sure that this is indeed the right answer. Firstly, consider $N=4$ atoms split into two 2-clusters (i.e. $m_2 = 2$). There are three such diagrams, $f_{12}f_{34}$, $f_{13}f_{24}$, and $f_{14}f_{23}$. Each of these gives the same answer when integrated, namely $U_2^2$, so the final result should be $3U_2^2$. We can check this against the relevant terms in (2.83), which are $\frac{4!}{(2!)^2\,2!}\,U_2^2 = 3U_2^2$ as expected.
Another check: $N=5$ atoms with $m_1 = 3$ and $m_2 = 1$. All diagrams come in the combinations
together with graphs that are related by permutations. The permutations are fully determined by the choice of the two atoms that sit in the pair: there are $\binom{5}{2} = 10$ such choices. The answer should therefore be $10\,U_1^3 U_2$. Comparing to (2.83), we have $\frac{5!}{(1!)^3\,3!\,(2!)^1\,1!}\,U_1^3 U_2 = 10\,U_1^3 U_2$ as required.
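The combinatoric factor in (2.83), $N!/\prod_l (l!)^{m_l}\,m_l!$, can be verified directly against both checks above. A short sketch (the function name is ours):

```python
from math import factorial

def cluster_count(N, m):
    """Number of ways to split N labelled atoms into clusters, where
    m[l] is the number of l-clusters: N! / prod_l (l!)^{m_l} m_l! ."""
    assert sum(l * ml for l, ml in m.items()) == N   # the constraint (2.81)
    denom = 1
    for l, ml in m.items():
        denom *= factorial(l) ** ml * factorial(ml)
    return factorial(N) // denom

print(cluster_count(4, {2: 2}))        # 3: the diagrams (12)(34), (13)(24), (14)(23)
print(cluster_count(5, {2: 1, 1: 3}))  # 10: one per choice of the pair
```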
Hopefully you are now convinced that (2.83) counts the graphs correctly. The end result for the partition function is therefore
$$Z(N,V,T) = \frac{1}{\lambda^{3N}}\sum_{\{m_l\}}\prod_l \frac{U_l^{m_l}}{(l!)^{m_l}\,m_l!}$$
The problem with computing this sum is that we still have to work out the different ways that we can split $N$ atoms into different clusters. In other words, we still have to obey the constraint (2.81). Life would be very much easier if we didn't have to worry about this. Then we could just sum over any $m_l$, regardless. Thankfully, this is exactly what we can do if we work in the grand canonical ensemble where $N$ is not fixed! The grand canonical partition function is
$$\mathcal{Z}(\mu,V,T) = \sum_{N} e^{\beta\mu N}\,Z(N,V,T)$$
We define the fugacity as $z = e^{\beta\mu}$. Then we can write
$$\mathcal{Z} = \sum_N z^N Z(N,V,T) = \prod_{l=1}^{\infty}\sum_{m_l=0}^{\infty}\frac{1}{m_l!}\left(\frac{z^l\,U_l}{\lambda^{3l}\,l!}\right)^{m_l} = \prod_{l=1}^{\infty}\exp\left(\frac{z^l\,U_l}{\lambda^{3l}\,l!}\right)$$
One usually defines
$$b_l \equiv \frac{U_l}{l!\,\lambda^{3(l-1)}\,V} \qquad (2.84)$$
Notice in particular that $U_1 = V$ so this definition gives $b_1 = 1$. Then we can write the grand partition function as
$$\mathcal{Z} = \exp\left(\frac{V}{\lambda^3}\sum_{l=1}^{\infty} b_l\,z^l\right) \qquad (2.85)$$
Something rather cute happened here. The sum over all diagrams got rewritten as the exponential over the sum of all connected diagrams, meaning all clusters. This is a general lesson which also carries over to quantum field theory where the diagrams in question are Feynman diagrams.
Back to the main plot of our story, we can now compute the pressure
$$\frac{p}{k_BT} = \frac{1}{V}\log\mathcal{Z} = \frac{1}{\lambda^3}\sum_{l=1}^{\infty} b_l\,z^l$$
and the number of particles
$$n \equiv \frac{N}{V} = \frac{z}{V}\frac{\partial}{\partial z}\log\mathcal{Z} = \frac{1}{\lambda^3}\sum_{l=1}^{\infty} l\,b_l\,z^l \qquad (2.86)$$
Dividing the two gives us the equation of state,
$$\frac{p}{n k_BT} = \frac{\sum_l b_l\,z^l}{\sum_l l\,b_l\,z^l} \qquad (2.87)$$
The only downside is that the equation of state is expressed in terms of the fugacity $z$. To massage it into the form of the virial expansion (2.69), we need to invert (2.86) to get $z$ in terms of the particle density $n = N/V$. Equating (2.87) with (2.69) (and defining $B_1 = 1$), we have
$$\sum_{l=1}^{\infty} b_l\,z^l = \left(\sum_{l=1}^{\infty} B_l\,n^{l-1}\right)\lambda^3\,n$$
where we’ve used both and . Expanding out the left- and right-hand sides to order gives
Comparing terms, and recollecting the definitions of $b_l$ (2.84) in terms of $U_l$ (2.82) in terms of graphs, we find the second virial coefficient is given by
$$B_2 = -b_2\lambda^3 = -\frac{U_2}{2V} = -\frac{1}{2}\int d^3r\,f(r)$$
which reproduces the result (2.75) that we found earlier using slightly simpler methods. We now also have an expression for the third coefficient,
$$B_3 = \lambda^6\left(4b_2^2 - 2b_3\right)$$
although admittedly we still have a nasty integral to do before we have a concrete result. More importantly, the cluster expansion gives us the technology to perform a systematic perturbation expansion to any order we wish.
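The bookkeeping that takes (2.85) and (2.86) to the virial coefficients can be checked numerically: pick arbitrary values for $b_2$ and $b_3$, invert $n(z)$ at small density, and compare the pressure against the virial form with $B_2 = -b_2\lambda^3$ and $B_3 = \lambda^6(4b_2^2 - 2b_3)$. A sketch with $\lambda = 1$, truncating both series at third order:

```python
def z_of_n(n, b2, b3, iters=50):
    """Invert n = z + 2 b2 z^2 + 3 b3 z^3 (equation (2.86) with lambda = 1,
    truncated at l = 3) for the fugacity z by fixed-point iteration."""
    z = n
    for _ in range(iters):
        z = n - 2.0 * b2 * z * z - 3.0 * b3 * z ** 3
    return z

b2, b3, n = 0.3, 0.1, 1e-3
z = z_of_n(n, b2, b3)
p = z + b2 * z * z + b3 * z ** 3                # p / kB T from (2.85)
virial = n * (1.0 - b2 * n + (4.0 * b2 * b2 - 2.0 * b3) * n * n)
print(p, virial)   # agree up to O(n^4) corrections
```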
There are many other applications of the classical statistical methods that we saw in this chapter. Here we use them to derive the important phenomenon of screening. The problem we will consider, which sometimes goes by the name of a "one-component plasma", is the following: a gas of electrons, each with charge $-q$, moves in a fixed background of uniform positive charge density $+qn$. The charge density is such that the overall system is neutral, which means that $n$ is also the average number density of the electrons. This is the Debye-Hückel model.
In the absence of the background charge density, the interaction between electrons is given by the Coulomb potential
$$U(r) = \frac{q^2}{r}$$
where we’re using units in which . How does the fixed background charge affect the potential between electrons? The clever trick of the Debye-Hückel model is to use statistical methods to figure out the answer to this question. Consider placing one electron at the origin. Let’s try to work out the electrostatic potential due to this electron. It is not obvious how to do this because will also depend on the positions of all the other electrons. In general we can write,
$$\nabla^2\phi = 4\pi q\,\delta^3(\vec{r}) - 4\pi q n + 4\pi q\,n\,g(r) \qquad (2.88)$$
where the first term on the right-hand side is due to the electron at the origin; the second term is due to the background positive charge density; and the third term is due to the other electrons, whose average number density close to the first electron is $n\,g(r)$. The trouble is that we don't know the function $g(r)$. If we were sitting at zero temperature, the electrons would try to move apart as much as possible. But at non-zero temperatures, their thermal energy will allow them to approach each other. This is the clue that we need. The energy cost for an electron to approach the origin is, of course, $E(\vec{r}) = -q\phi(\vec{r})$. We will therefore assume that the charge density near the origin is given by the Boltzmann factor,
$$g(r) = e^{\beta q \phi(r)}$$
For high temperatures, $\beta q\phi \ll 1$, we can write $g(r) \approx 1 + \beta q\phi(r)$ and the Poisson equation (2.88) becomes
$$\left(\nabla^2 - \frac{1}{\lambda_D^2}\right)\phi = 4\pi q\,\delta^3(\vec{r})$$
where $\lambda_D^2 = 1/4\pi\beta n q^2$. This equation has the solution,
$$\phi(r) = -\frac{q\,e^{-r/\lambda_D}}{r} \qquad (2.89)$$
which immediately translates into an effective potential energy between electrons,
$$U_{\rm eff}(r) = -q\,\phi(r) = \frac{q^2\,e^{-r/\lambda_D}}{r}$$
We now see that the effect of the plasma is to introduce the exponential factor $e^{-r/\lambda_D}$ in the numerator, causing the potential to decay very quickly at distances $r > \lambda_D$. This effect is called screening and $\lambda_D$ is known as the Debye screening length. The derivation of (2.89) is self-consistent if we have a large number of electrons within a distance $\lambda_D$ of the origin so that we can happily talk about an average charge density. This means that we need $n\lambda_D^3 \gg 1$.
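The scales involved are simple enough to explore numerically. A sketch, assuming the same units as above in which the bare Coulomb energy is $U(r) = q^2/r$, so that $\lambda_D^2 = k_BT/4\pi n q^2$ (all parameter values illustrative):

```python
import math

def debye_length(n, T, q=1.0, kB=1.0):
    """Debye screening length: lambda_D^2 = kB T / (4 pi n q^2),
    in units where the bare Coulomb energy is q^2 / r."""
    return math.sqrt(kB * T / (4.0 * math.pi * n * q * q))

def screened_U(r, n, T, q=1.0, kB=1.0):
    """Screened potential energy (2.89) between two electrons:
    q^2 exp(-r / lambda_D) / r."""
    return q * q * math.exp(-r / debye_length(n, T, q, kB)) / r

lam = debye_length(n=1.0, T=1.0)
# At r = lambda_D the Coulomb energy is suppressed by exactly e^{-1};
# far beyond lambda_D it is exponentially negligible.
print(screened_U(lam, 1.0, 1.0) * lam)    # e^{-1} ~ 0.3679
print(screened_U(10.0 * lam, 1.0, 1.0))   # tiny
```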