Saturday, September 20, 2025

Section 40–3 Evaporation of a liquid

Intermolecular potentials / Molecular separations / Many-Body problem

 

In this section, Feynman analyzes intermolecular potentials, molecular separations, and the complexity of many-body problems in liquids. A more accurate title could be “The Role of Intermolecular Potentials in Determining Molecular Separations”—or, more briefly, “Intermolecular Potentials and Molecular Separations.” By contrast, the title “Evaporation of a Liquid” is misleading, since the section is not specifically about the evaporation process. Evaporation refers to the phase transition from liquid to vapor at a surface, governed by vapor pressure, surface interactions, and the molecular energy distribution. Here, however, evaporation appears only briefly, as a side remark connected to the many-body problem, rather than as the main focus. In fact, Feynman discusses evaporation directly in Chapter 42, under the section titled “Evaporation.”

 

1. Intermolecular potentials:

“Let us take the case of just two molecules: the e^(−P.E./kT) would be the probability of finding them at various mutual distances r (Feynman et al., 1963, p. 40-3).”

 

A shortcoming in Feynman’s description is that he equates the Boltzmann factor directly with probability, while it only provides relative probability weights. Strictly speaking, a proper probability requires normalization through the partition function. In Statistical Mechanics, Feynman (1972) writes: “The key principle of statistical mechanics is as follows: If a system in equilibrium can be in one of N states, then the probability of the system having energy En is (1/Q)e^(−En/kT), where… Q is called the partition function (p. 1).” In other words, the relative probability of finding two molecules separated by a distance r is proportional to e^(−V(r)/kT). To obtain the actual probability distribution, the Boltzmann factor must be normalized by the partition function. Thus, the actual probability density of finding the molecules at separation r is P(r) = (1/Z)e^(−V(r)/kT), where Z is the normalization factor (partition function) that ensures the total probability sums to 1.
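To make the normalization step concrete, here is a minimal Python sketch. The Morse-like pair potential and all parameters are illustrative toy choices (not Feynman’s); only the step from raw Boltzmann factors to a normalized probability density is the point:

```python
import numpy as np

# Minimal sketch: turning Boltzmann factors into a normalized probability
# density over molecular separations. The pair potential and parameters
# below are assumed toy values, not taken from the text.
kT = 1.0                            # thermal energy (arbitrary units)
r = np.linspace(0.8, 5.0, 1000)     # grid of separations
dr = r[1] - r[0]

def V(r):
    # Toy Morse-like potential with a single minimum of depth 1 near r = 1.2
    return (1.0 - np.exp(-2.0 * (r - 1.2)))**2 - 1.0

w = np.exp(-V(r) / kT)              # Boltzmann factors: relative weights only
Z = np.sum(w) * dr                  # normalizer playing the partition-function role
P = w / Z                           # probability density: integrates to 1

print(np.sum(P) * dr)               # ~1.0, confirming normalization
```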

 

“The total potential energy in this case would be the sum over all the pairs, supposing that the forces are all in pairs... Then the probability for finding molecules in any particular combination of rij’s will be proportional to exp[−∑i,j V(rij)/kT] (Feynman et al., 1963, p. 40-3).”

 

Feynman’s expression ∑i,j V(rij) counts each pair of molecules twice: once as (i, j) and again as (j, i). Since the interaction is symmetric, this double-counts the total potential energy. A simple analogy is counting handshakes in a group: if you let everyone record “I shook hands with you,” each handshake is counted twice. To correct this, we can either divide the total by two or sum over unique pairs only. In statistical mechanics, the intermolecular potential energy is therefore written as ½ ∑i≠j V(rij) or, equivalently, ∑i<j V(rij). This fixes the summation-index problem, makes it clear that U is the sum over unique pairs (i < j), and accurately defines U before substituting it into the Boltzmann factor. With this correction, we account for every pairwise interaction without double counting.
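A small Python sketch can confirm the bookkeeping numerically. The random positions are hypothetical and the pair potential is the standard Lennard-Jones form in reduced units; the point is only that halving the ordered-pair sum equals the unique-pair sum:

```python
import numpy as np

# Sketch of the double-counting fix: summing V(r_ij) over all ordered pairs
# (i, j) counts each interaction twice; summing over i < j counts each once.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(6, 3))       # 6 hypothetical molecules in 3D

def V(r):
    return 4.0 * ((1.0 / r)**12 - (1.0 / r)**6)  # Lennard-Jones, eps = sigma = 1

def U_all_ordered(pos):
    n, U = len(pos), 0.0
    for i in range(n):
        for j in range(n):
            if i != j:                           # every ordered pair (i, j)
                U += V(np.linalg.norm(pos[i] - pos[j]))
    return U

def U_unique_pairs(pos):
    n, U = len(pos), 0.0
    for i in range(n):
        for j in range(i + 1, n):                # unique pairs i < j only
            U += V(np.linalg.norm(pos[i] - pos[j]))
    return U

print(U_all_ordered(pos) / 2.0, U_unique_pairs(pos))   # identical values
```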

 

2. Molecular separations:

“Let us take the case of just two molecules: the e^(−P.E./kT) would be the probability of finding them at various mutual distances r. Clearly, where the potential goes most negative, the probability is largest, and where the potential goes toward infinity, the probability is almost zero, which occurs for very small distances. That means that for such atoms in a gas, there is no chance that they are on top of each other, since they repel so strongly. But there is a greater chance of finding them per unit volume at the point r₀ than at any other point (Feynman et al., 1963, p. 40-3).”

 

The probability of finding a molecule at a given separation r from another is governed by the intermolecular potential V(r) and the Boltzmann factor. This probability can be understood in three characteristic regions:

1. Very close (r ≈ 0): At extremely small separations, V(r) → +∞ due to strong repulsion. The exponent −V(r)/kT → −∞, so e^(−V(r)/kT) ≈ 0. Thus, the probability of molecules overlapping is essentially zero.

2. Equilibrium position (r = r₀): At the equilibrium distance, V(r₀) reaches its minimum (most negative value), corresponding to the strongest attraction. Here, −V(r₀)/kT is maximally positive, making e^(−V(r₀)/kT) largest. Molecules are most likely to be found near r₀.

3. Far apart (r ≫ r₀): At large separations, V(r) → 0, meaning negligible interaction. The exponent −V(r)/kT → 0, so e^(−V(r)/kT) → 1. The probability is lower than at r₀ but greater than at r ≈ 0, and the relative weight of large separations grows with temperature as thermal motion overcomes the attractive forces.

In short, the Boltzmann factor provides a probabilistic map of molecular separations, showing how intermolecular potentials and temperature jointly determine the spatial distribution of molecules in a liquid.

 

Summary:

  • r ≈ 0: probability ≈ 0 (molecules are kept apart by strong repulsion).
  • r = r₀: maximum probability (the most favorable separation).
  • r → ∞: probability moderate (the Boltzmann factor approaches 1); typical separations increase with temperature.


Note: The Lennard–Jones (LJ) potential is one of the most extensively studied intermolecular potentials.
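For concreteness, the three regions can be evaluated directly with the LJ potential. This is a minimal sketch in reduced units (ε = σ = 1, so the minimum sits at r₀ = 2^(1/6) ≈ 1.12); the temperature kT = 0.3 is an illustrative assumption:

```python
import numpy as np

# Boltzmann factor of the Lennard-Jones potential in the three regions.
kT = 0.3                                     # illustrative thermal energy

def V_lj(r):
    return 4.0 * ((1.0 / r)**12 - (1.0 / r)**6)

r0 = 2.0**(1.0 / 6.0)                        # separation at the LJ minimum
for label, r in [("very close", 0.8), ("equilibrium r0", r0), ("far apart", 5.0)]:
    w = np.exp(-V_lj(r) / kT)
    print(f"{label:14s} r = {r:4.2f}  V(r) = {V_lj(r):+8.3f}  exp(-V/kT) = {w:.3g}")
# Output pattern: ~0 where repulsion dominates, a large maximum near r0,
# and ~1 at large separation where V(r) is negligible.
```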

 

“Now, if the temperature is very high, so that kT ≫ |V(r₀)|, the exponent is relatively small almost everywhere, and the probability of finding a molecule is almost independent of position… As the temperature falls, the atoms fall together, clump in lumps, and reduce to liquids, and solids, and molecules, and as you heat them up they evaporate (Feynman et al., 1963, p. 40-3).”

 

Evaporation does not require a liquid to reach a high overall temperature, because it is governed by the statistical distribution of molecular energies, not the average alone. At any temperature above absolute zero, molecules in a liquid have a spread of kinetic energies described by the Maxwell–Boltzmann distribution, so a fraction of the surface molecules will always possess enough energy to overcome intermolecular attractions and escape into the air. At very high temperatures, where kT ≫ |V(r₀)|, escape is no longer limited to a rare energetic few, and the liquid boils rather than merely evaporating at its surface. Thus, evaporation can occur at any T > 0, without the liquid needing to reach its boiling point. Even if the mean kinetic energy is relatively small, the high-energy tail of the distribution allows some molecules to exceed the binding energy (or equivalently, the latent heat of vaporization) and leave the liquid. That said, at very low temperatures the fraction of such energetic molecules becomes vanishingly small, so the evaporation rate is extremely slow.
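The tail argument can be made quantitative with a small sketch. It assumes the 3D Maxwell–Boltzmann kinetic-energy distribution and an illustrative binding energy E_b; the numbers are toy values, not measured data:

```python
import numpy as np

# Fraction of molecules whose kinetic energy exceeds a binding energy E_b,
# from the 3D Maxwell-Boltzmann energy distribution
# f(E) = (2/sqrt(pi)) * (kT)**-1.5 * sqrt(E) * exp(-E/kT).
def fraction_above(E_b, kT, n=200_000):
    E = np.linspace(0.0, E_b + 40.0 * kT, n)    # grid reaching far into the tail
    dE = E[1] - E[0]
    f = (2.0 / np.sqrt(np.pi)) * kT**-1.5 * np.sqrt(E) * np.exp(-E / kT)
    return np.sum(f[E > E_b]) * dE              # numerical tail integral

E_b = 10.0                                      # assumed binding energy (arb. units)
for kT in (0.5, 1.0, 2.0):
    print(f"kT = {kT}: fraction above E_b = {fraction_above(E_b, kT):.2e}")
# The fraction is tiny but nonzero at low kT and grows rapidly with kT,
# which is why evaporation proceeds (slowly) at any T > 0.
```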

 

“The requirements for the determination of exactly how things evaporate, exactly how things should happen in a given circumstance, involve the following. First, to discover the correct molecular-force law V(r), which must come from something else, quantum mechanics, say, or experiment (Feynman et al., 1963, p. 40-4).”

 

The stringent requirements for determining the exact mechanism of evaporation make the task effectively impossible. A more practical approach is provided by the Hertz–Knudsen equation, which predicts evaporation rates under the assumption of an ideal gas in thermal equilibrium. Derived from the Maxwell–Boltzmann distribution of molecular speeds, this relation—together with experimentally measured saturation pressures and an empirical condensation coefficient—yields reliable estimates without requiring detailed knowledge of the intermolecular potential. On the other hand, the pair distribution function obtained from realistic models such as the Lennard–Jones (LJ) potential in Monte Carlo simulations exhibits multiple maxima and minima (Hansen & McDonald, 2006). This behavior differs from the simple two-body model, where the distribution shows only a single maximum at the equilibrium separation and no oscillatory structure. Thus, even though detailed microscopic calculations reveal the complexity of molecular organization, simple models like the Hertz–Knudsen relation can still provide insight into evaporation without the need to satisfy such stringent requirements.
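As a rough illustration of the Hertz–Knudsen relation, J = α(P_sat − P_vap)/√(2πmkT), the sketch below uses approximate values for water near room temperature; the condensation coefficient α and the ambient vapor pressure are assumed for the example:

```python
import numpy as np

# Hertz-Knudsen estimate of the net evaporation flux (molecules per m^2 per s).
kB = 1.380649e-23      # Boltzmann constant, J/K
m = 2.99e-26           # mass of a water molecule, kg
T = 298.0              # temperature, K
P_sat = 3.17e3         # saturation vapor pressure of water at 298 K, Pa (approx.)
P_vap = 1.0e3          # ambient vapor pressure, Pa (assumed for the example)
alpha = 0.5            # empirical condensation coefficient (assumed)

J = alpha * (P_sat - P_vap) / np.sqrt(2.0 * np.pi * m * kB * T)
print(f"net evaporation flux ~ {J:.3e} molecules per m^2 per s")
```

Note that nothing in this estimate refers to V(r); the intermolecular details are folded into the measured P_sat and the empirical coefficient α.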


 

3. Many-Body Problem:

“It is often called an example of a “many-body problem,” and it really has been a very interesting thing. In that single formula must be contained all the details, for example, about the solidification of gas, or the forms of the crystals that the solid can take, and people have been trying to squeeze it out, but the mathematical difficulties are very great, not in writing the law, but in dealing with so enormous a number of variables (Feynman et al., 1963, p. 40-4).”

 

In classical mechanics, the many-body problem generalizes the three-body problem—the challenge of predicting the motion of three mutually interacting objects. Unlike the two-body case, which has exact closed-form solutions (Kepler orbits), the three-body problem admits no general closed-form solution. This shows that even Newton’s deterministic laws can lead to motions too complex for exact solution. While special cases—such as the equilateral-triangle configuration—can be solved, and numerical simulations can track orbits with high precision for finite times, a universal closed-form solution remains out of reach. The difficulty arises not simply from the large number of variables but from intrinsic nonlinear interactions, where tiny differences in initial conditions cause chaotic divergence.

 

“That then, is the distribution of particles in space. That is the end of classical statistical mechanics, practically speaking, because if we know the forces, we can, in principle, find the distribution in space, and the distribution of velocities is something that we can work out once and for all, and is not something that is different for the different cases (Feynman et al., 1963, p. 40-4).”

 

Feynman’s statement captures the “minimalist” foundation of statistical mechanics: if the intermolecular forces are known and the Boltzmann principle is invoked, you can deduce the distribution. The shortcoming lies in presenting this foundation as the “end,” when it is better understood as the beginning. For example, Monte Carlo methods enriched classical statistical mechanics by overcoming its central mathematical obstacle—the high-dimensional integral—thus enabling predictive modeling of complex many-body systems. More recently, artificial intelligence and machine learning models have advanced the field further by recognizing patterns, improving sampling, and exploring emergent behaviors. What Feynman called the “end” is, in practice, the starting point for some of the most important and exciting modern challenges.
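To illustrate the Monte Carlo idea in miniature, here is a minimal Metropolis sketch for a small Lennard-Jones system in reduced units, in the spirit of Metropolis et al. (1953). All parameters (particle number, temperature, box size, step size, sweep count) are illustrative assumptions rather than a production setup:

```python
import numpy as np

# Minimal Metropolis Monte Carlo: sample configurations with probability
# proportional to exp(-U/kT) instead of integrating equations of motion.
rng = np.random.default_rng(1)
N, kT, L, step, sweeps = 10, 1.0, 4.0, 0.2, 2000
pos = rng.uniform(0.0, L, size=(N, 3))          # random start in a periodic box

def pair_energy(pos, i):
    # LJ interaction energy of particle i with all others (eps = sigma = 1)
    d = np.delete(pos, i, axis=0) - pos[i]
    d -= L * np.round(d / L)                    # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    return np.sum(4.0 * (r**-12 - r**-6))

samples = []
for sweep in range(sweeps):
    for i in range(N):
        old = pos[i].copy()
        e_old = pair_energy(pos, i)
        pos[i] = (pos[i] + rng.uniform(-step, step, 3)) % L   # trial move
        dE = pair_energy(pos, i) - e_old
        if rng.random() >= np.exp(-max(dE, 0.0) / kT):        # Metropolis rule
            pos[i] = old                                      # reject: revert
    if sweep >= sweeps // 2:        # keep the second half as equilibrated samples
        samples.append(sum(pair_energy(pos, k) for k in range(N)) / 2.0)

print(f"average potential energy <U> ~ {np.mean(samples):.2f} (reduced units)")
```

The acceptance rule is the whole trick: it steers the random walk so that configurations appear with exactly the Boltzmann weight, sidestepping the high-dimensional integral.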

 

Historical note:

Classical statistical mechanics, developed by Maxwell, Boltzmann, and Gibbs in the 19th century, provided the framework to derive macroscopic properties from microscopic particle interactions. However, solving the modeling equations exactly for real many-body systems is analytically impossible. This led to the development of modern computational tools:

1. Monte Carlo Methods (Metropolis et al., 1953): Building on earlier random sampling ideas from the 1940s Los Alamos project, the Metropolis algorithm introduced stochastic sampling of configurations to evaluate equilibrium properties. Using “fast electronic computing machines” and bypassing direct integration of Newton’s equations of motion, it provided an efficient means of analyzing collective phenomena such as phase transitions.

2. Molecular Dynamics (Alder & Wainwright, 1957): Using electronic computers, Alder and Wainwright developed a method by directly integrating Newton’s equations of motion for many interacting particles. This computational approach enabled the simulation of physical trajectories, allowing for the study of time-dependent behavior, transport properties, and microscopic mechanisms in gases, liquids, and solids.

3. AI and Machine Learning (2010s–present): Advances in algorithms and computing power have enabled AI to transform approaches to statistical mechanics. Recent studies link AI approaches to classical frameworks, such as the Yang–Lee theory of phase transitions, which is directly relevant to processes like evaporation. For example, artificial neural-network representations of many-body states (Carleo & Troyer, 2017) marked a breakthrough, while later work (Noé et al., 2019) showed how machine learning can accelerate the study of phase transitions and collective behaviors.

In summary, the field has evolved from theoretical foundations (Maxwell, Boltzmann, Gibbs), to computational simulation (Monte Carlo and Molecular Dynamics), to AI-enhanced prediction and analysis. Each stage overcame the calculational barriers of its predecessor, extending the reach of statistical mechanics into previously inaccessible domains.

 

Review Questions:

1. Do you agree with Feynman’s expression of intermolecular potential in relation to the probability of finding molecules?

2. How does the intermolecular potential determine the separation of molecules? Can you provide a physical intuition or real-world analogy?

3. Feynman describes the distribution of particles in space as “the end of classical statistical mechanics.” Do you agree with this view? How do modern developments extend the scope beyond Feynman’s characterization?

 

Key Takeaway (in Feynman’s spirit):

The point of this whole discussion isn't really about evaporation—that's just a specific example that comes later. What we're really getting at is something much more beautiful and fundamental: the molecular “dance.” Once you understand this dance—how the potential tells the molecules where to go—you see why matter sometimes spreads into a gas, clusters as a liquid, or locks into the regular structure of a solid. Evaporation? That’s just what happens when some dancers get so energetic that they break away from the group. But the real story, the music they're all dancing to, is the intermolecular potential.

 

The Moral of the Lesson: Feynman presented the spatial distribution of particles as the end of classical statistical mechanics, but in reality, it is only the beginning. Education is not about memorizing final results, but about entering the never-ending process of inquiry. Foundations are not endpoints—they are starting points, laid down so that future understanding may be built upon them.

 

In short: what may appear to be the “end of knowledge” is, in truth, the foundation for the next discovery.

 

References:

Alder, B. J., & Wainwright, T. E. (1957). Phase transition for a hard sphere system. The Journal of Chemical Physics, 27(5), 1208–1209.

Carleo, G., & Troyer, M. (2017). Solving the quantum many-body problem with artificial neural networks. Science, 355(6325), 602–606.

Feynman, R. P. (1972). Statistical mechanics: a set of lectures. W. A. Benjamin.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Hansen, J. P., & McDonald, I. R. (2006). Theory of Simple Liquids (3rd ed.). Academic Press.

Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092.

Noé, F., Olsson, S., Köhler, J., & Wu, H. (2019). Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 365(6457), eaaw1147.

Tuesday, September 9, 2025

Section 40–2 The Boltzmann law

Exponential distribution / Energy states / Scaling denominator kT

 

In this section, Feynman explains the Boltzmann factor, the distribution of molecules across energy states, and the role of the scaling denominator kT. A more fitting title might simply be “The Boltzmann Distribution,” since it describes not a strict deterministic law but a statistical model of molecular behavior. Some historians and physicists also use the term Boltzmann–Gibbs distribution, acknowledging that while the Boltzmann factor itself originates with Boltzmann, the broader framework of statistical ensembles was developed by Gibbs.

 

1. Exponential distribution

“Equation (40.3), n = (constant)e^(−P.E./kT), known as Boltzmann’s law, is another of the principles of statistical mechanics: that the probability of finding molecules in a given spatial arrangement varies exponentially with the negative of the potential energy of that arrangement, divided by kT (Feynman et al., 1963, p. 40-3).”

 

It is misleading to describe the equation n = (constant)e^(−P.E./kT) as Boltzmann’s law when it is invoked for the probability of finding molecules. To avoid confusion, it is useful to distinguish between the Boltzmann factor and the Boltzmann distribution. The Boltzmann factor, e^(−E/kT), gives the relative weight or likelihood of an energy state E, but by itself it is not a probability distribution because it lacks normalization. The Boltzmann distribution, on the other hand, is obtained by normalizing the Boltzmann factor over all possible states so that it becomes a proper probability distribution. Calling the formula “Boltzmann’s law” complicates matters, since it is not really an absolute law but a derived statistical-mechanical distribution that holds under equilibrium conditions. In short, keeping the distinctions clear among the Boltzmann factor, the Boltzmann distribution, and the resulting number density helps prevent the misconception that number density is itself a probability or that the formula represents a fundamental law.
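A few lines of Python make the distinction explicit. The four energy levels below are made-up values, chosen only to show that the raw factors do not sum to one while the normalized distribution does:

```python
import numpy as np

# Boltzmann factors are relative weights; dividing by their sum (the
# partition function Z) turns them into a probability distribution.
kT = 1.0
E = np.array([0.0, 0.5, 1.0, 2.0])   # assumed toy energy levels (arb. units)

factors = np.exp(-E / kT)            # Boltzmann factors: not probabilities
Z = factors.sum()                    # partition function
p = factors / Z                      # Boltzmann distribution: sums to 1

print("factors:      ", np.round(factors, 3), " sum =", round(factors.sum(), 3))
print("probabilities:", np.round(p, 3), " sum =", round(p.sum(), 3))
```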

 

Boltzmann’s distribution of molecular number density is a statistical tendency, but not a deterministic law. It does not describe the trajectory of any single molecule, but rather the collective behavior of a large ensemble. The distribution expresses the most probable arrangement of molecules consistent with energy conservation and thermal equilibrium, rather than a rule that every molecule follows at every instant. This distinction lies at the heart of statistical mechanics: predictable regularities emerge from the random motion of individuals, yet they do not have the absoluteness of Newtonian laws. The deeper lesson is that approximate order can arise from apparent chaos — a conceptual shift that anticipates the probabilistic foundations of quantum theory. Pedagogically, the Boltzmann distribution offers a clear first step into the probabilistic way of thinking that lies at the heart of statistical mechanics.

 

2. Energy states

“Here we note the interesting fact that the numerator in the exponent of Eq. (40.1) is the potential energy of an atom… (Feynman et al., 1963, p. 40-2).”

 

In general, the numerator in the exponent of the Boltzmann factor is not limited to potential energy; it can represent any form of energy relevant to the system. In the case of an isothermal atmosphere, it is specifically the gravitational potential energy that sets the relative probabilities. When the potential energy of a state is low, the exponential factor stays close to one, so the state is relatively common. As potential energy increases, the exponent becomes more negative, and the probability decreases exponentially, so higher-energy states become progressively less likely. More broadly, the Boltzmann distribution applies to all kinds of energy states— whether due to position, motion, or quantum levels.

 

Pigeonhole analogy:

We can picture a vast wall of pigeonholes (energy states) and a great many pigeons (molecules). Each pigeon gets bumped around by random hits (thermal collisions), which can push it up into a higher hole (more energy) or let it drop into a lower one (less energy). Although each collision is random, the overall pattern is highly regular: the lower holes are much more crowded, and progressively fewer pigeons occupy the higher ones. The guiding rule is that the likelihood of a pigeon settling in a specific hole decreases exponentially with the hole’s height. The key lesson is that we cannot track any particular pigeon at a given moment — what matters is the collective distribution. This statistical pattern—where the chance of finding a pigeon in a hole is predictable—is precisely what the Boltzmann distribution describes.
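The analogy can even be simulated. In the minimal sketch below (hole spacing dE and kT are assumed toy values), pigeons hop down freely but hop up only with probability e^(−dE/kT), and the occupancy ratio between adjacent holes settles at exactly that factor:

```python
import numpy as np

# Toy simulation of the pigeonhole picture: random thermal kicks push
# pigeons up with probability exp(-dE/kT); drops down are always allowed.
rng = np.random.default_rng(2)
n_pigeons, n_steps, kT, dE = 20_000, 400, 1.0, 0.5
level = np.zeros(n_pigeons, dtype=int)     # every pigeon starts at the bottom

for _ in range(n_steps):
    up = rng.random(n_pigeons) < 0.5                 # propose up or down
    ok = rng.random(n_pigeons) < np.exp(-dE / kT)    # does the kick succeed?
    level = np.where(up & ok, level + 1, level)
    level = np.where(~up, np.maximum(level - 1, 0), level)   # floor at hole 0

counts = np.bincount(level, minlength=6)
print("adjacent-hole occupancy ratios:", np.round(counts[1:6] / counts[0:5], 3))
# Each ratio approaches exp(-dE/kT) ~ 0.607: lower holes are exponentially
# more crowded, which is precisely the Boltzmann pattern.
```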

 

“Therefore what we noticed in a special case turns out to be true in general. (What if F does not come from a potential? Then (40.2) has no solution at all. Energy can be generated, or lost by the atoms running around in cyclic paths for which the work done is not zero, and no equilibrium can be maintained at all.) (Feynman et al., 1963, p. 40-3).”

 

According to Feynman, the equation F = kT d(ln n)/dx has no solution when the force F is non-conservative (i.e., when it cannot be expressed as the gradient of a potential, F ≠ −dU/dx). The equation is derived from balancing forces in equilibrium: F dx = kT d(ln n), which implies F/kT = d(ln n)/dx.

For this equation to have a solution, F must be the derivative of some function.

If F is conservative (e.g., gravity): F = −dU/dx ⇒ (−1/kT) dU/dx = d(ln n)/dx ⇒ ∫(−1/kT) dU = ∫ d(ln n) ⇒ ln n = (−1/kT)U + C ⇒ n = n₀e^(−U(x)/kT). This yields the Boltzmann distribution.

If F is non-conservative (e.g., friction): F cannot be written as −dU/dx. In other words, the equation F = kT d(ln n)/dx cannot be integrated to find n(x) because

1. Mathematically: the path-dependence of its work integral makes it impossible to define a single-valued potential-energy function U.

2. Physically: it converts mechanical energy into thermal energy, undermining the mechanical energy conservation on which the concept of a potential function is built.

In short, a non-conservative force prevents the system from reaching a true thermodynamic equilibrium. 
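A quick numerical check of the conservative case: integrating d(ln n)/dx = F/kT for uniform gravity reproduces the analytic Boltzmann profile n(x) = n₀e^(−mgx/kT). All constants below are toy values:

```python
import numpy as np

# Integrate F = kT * d(ln n)/dx for the conservative force F = -mg and
# compare with the analytic Boltzmann result.
kT, m, g, n0 = 1.0, 1.0, 1.0, 1.0       # assumed toy constants
x = np.linspace(0.0, 5.0, 501)
dx = x[1] - x[0]

F = -m * g                              # conservative force, F = -dU/dx
ln_n = np.zeros_like(x)
for i in range(1, len(x)):
    ln_n[i] = ln_n[i - 1] + (F / kT) * dx   # step d(ln n) = (F/kT) dx

n_numeric = n0 * np.exp(ln_n)
n_exact = n0 * np.exp(-m * g * x / kT)
print("max deviation:", np.max(np.abs(n_numeric - n_exact)))   # ~0
```

For a friction-like force there is no single-valued U to integrate toward, so no analogous n(x) exists, just as the argument above states.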

 

3. Scaling denominator kT

“Thermal equilibrium cannot exist if the external forces on the atoms are not conservative (Feynman et al., 1963, p. 40-3).”

 

From a practical perspective, the Boltzmann distribution is a powerful tool for approximating the behavior of systems near thermal equilibrium, even when the temperature is only roughly constant. Its shape is governed by the Boltzmann factor, where the product kT has the dimensions of energy, so that E/kT is a dimensionless ratio and the exponent is a pure number. At high temperatures (large kT), the distribution flattens: higher-energy states gain significant probability, and populations spread across a wider range of energies. At low temperatures (small kT), the distribution sharpens: low-energy states dominate, and the population is concentrated within a narrower range of energies (see the figure below). Conceptually, kT acts as nature’s probability scale, setting how energy spreads across states in terms of their relative likelihood. Even when temperature is not strictly constant, this scaling relation enables the Boltzmann factor to provide reliable probabilistic predictions about complex systems.

 

[Figure: Boltzmann distribution over energy states at different temperatures. Source: Boltzmann distribution – Wikipedia]
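A tiny sketch shows the flattening and sharpening directly; the five evenly spaced energy levels are an assumed toy spectrum:

```python
import numpy as np

# How kT reshapes the distribution over the same energy levels:
# small kT concentrates the population in the ground state (sharp),
# large kT spreads it nearly evenly (flat).
E = np.arange(5) * 1.0                  # levels 0, 1, 2, 3, 4 (arb. units)

for kT in (0.2, 1.0, 5.0):
    p = np.exp(-E / kT)
    p /= p.sum()                        # normalize to a probability distribution
    print(f"kT = {kT}:", np.round(p, 3))
# kT = 0.2 -> almost all weight in the ground state;
# kT = 5.0 -> weights nearly equal across all five levels.
```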

What if k were larger or smaller?

The value of the Boltzmann constant is crucial because it sets the scale at which thermal energy translates into probabilities; if it were different, the entire statistical structure of the universe would seem to change. A larger k would make the ratio E/kT smaller at a given temperature, increasing the likelihood of higher-energy states and broadening the distribution. A smaller k, by contrast, would confine the population to a narrower range of lower-energy states, making the universe more rigid, with molecules rarely accessing higher energies. In a sense, the magnitude of k governs the balance between stability and variability—the very balance on which chemistry, biology, and cosmic structure depend.

 

On closer inspection, however, the Boltzmann constant k is better viewed as a unit-conversion factor: it connects the macroscopic scale of temperature (kelvin) to the microscopic scale of energy (joules). If k were numerically larger or smaller, we would simply measure temperature in different units, while the product kT—the true physical quantity that sets the energy scale in the Boltzmann factor—would remain unchanged. The underlying physics of distributions, fluctuations, and equilibrium would be identical; only the human-assigned numerical values of temperature would shift. For example, if k were ten times bigger, the same gas with the same kinetic energy per particle would have a temperature reading ten times smaller. Thus, k does not control the universe’s behavior; rather, it calibrates the conversion between microscopic kinetic energy and macroscopic temperature—much like how c links space and time units in relativity.
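This unit-conversion view can be checked numerically: rescaling k by a factor c while dividing the temperature reading by c leaves every Boltzmann probability untouched, since only the product kT enters the exponent. The energy levels below are illustrative:

```python
import numpy as np

# Rescale k and T in opposite directions: the probabilities are invariant
# because only the product kT appears in the Boltzmann factor.
E = np.array([0.0, 1.0e-21, 3.0e-21])   # toy energy levels, joules
k = 1.380649e-23                         # Boltzmann constant, J/K
T = 300.0                                # K

def probs(k, T):
    w = np.exp(-E / (k * T))
    return w / w.sum()

c = 10.0                                 # imagine k were ten times bigger...
print(probs(k, T))                       # original probabilities
print(probs(c * k, T / c))               # ...with temperatures reading 10x smaller
# The two lines are identical: kT, not k alone, sets the energy scale.
```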

 

Review questions:

1. Is the Boltzmann distribution of molecular number density a strict physical law, or is it better understood as a statistical tendency?

2. In what way does potential energy influence the form of the Boltzmann distribution?

3. Why is thermal equilibrium an essential condition for applying the Boltzmann distribution?

 

A Brief History of the Boltzmann Constant  

Phase 1 — Radiation Constant (1900):

In his famous paper of December 14, 1900, Max Planck introduced two new constants, h and k, as fitting parameters (Hilfsgrößen) in his derivation of the blackbody spectrum. At this stage, k was called die Strahlungskonstante (radiation constant), because Planck regarded it as specific to thermal radiation and the resonators in the cavity walls (Planck, 1900; Kuhn, 1978). In the same paper, Planck framed h as a first constant of nature and described k as a second constant of nature, though still within the restricted context of radiation theory.

Phase 2 — Universal Constant (1901 onward):

In his 1901 paper On the Law of Distribution of Energy in the Normal Spectrum (Annalen der Physik, 4, 553–563), Planck introduced k as one of the universal constants. Similarly, H. A. Lorentz emphasized the universal role of k in connecting the gas constant R with Avogadro’s number N, reinforcing its status as a fundamental link between microscopic and macroscopic physics (Lorentz, 1905).

Phase 3 — Boltzmann Constant (1906 onward):

Paul Ehrenfest was among the first to refer to k as the Boltzmannsche Konstante (Boltzmann constant) (Ehrenfest, 1906). By 1920, the name had become common enough that Planck, in his Nobel lecture, noted with some irony: “This constant is often referred to as Boltzmann’s constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant…” Since Planck had originally introduced k alongside h in his radiation theory, it is understandable that some of his contemporaries referred to k as the “Planck coefficient,” “Planck constant,” or “Boltzmann–Planck constant.” Planck did mention the Boltzmann–Drude constant (a = 3k/2), which is related to k, in his 1900 paper, but the term “Boltzmann–Drude–Planck constant” would have been too cumbersome for use.

 

In 1900, Planck was the first to determine the constant k, assigning it the value 1.346 × 10⁻¹⁶ erg/deg. Over the following century, successive refinements improved its precision, resulting in the 2019 redefinition of the International System of Units (SI), where the Boltzmann constant was fixed at the exact value k = 1.380 649 × 10⁻²³ J K⁻¹. This redefinition established k as a fundamental constant of measurement, marking its transformation from a provisional fitting parameter in Planck’s blackbody theory into a cornerstone of modern physics and metrology (BIPM, 2019).

 

The Moral of the Lesson:

The Boltzmann distribution shows that order can emerge from apparent chaos. Imagine a vast wall of pigeonholes (energy states) filled with countless pigeons (molecules), each jostled randomly by collisions. While we cannot track which pigeon is in which hole at any moment, a predictable pattern emerges: lower-energy holes are more crowded, higher-energy holes less so. This reminds us that structure and success often arise not from controlling every detail, but from understanding the overall tendencies of a complex system.

 

Key Takeaway (In Feynman’s Spirit):

Don’t waste energy trying to follow every pigeon (molecule). Focus on the probabilities instead. The universe doesn’t obey our deterministic expectations, but by appreciating the statistical patterns, we can understand and even predict the collective behavior of the system.

 

References

Bureau International des Poids et Mesures (BIPM). (2019). SI Brochure, 9th edition.

Ehrenfest, P. (1906). Zur Planckschen Strahlungstheorie. Physikalische Zeitschrift, 7(2), 528–532.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Kuhn, T. S. (1978). Black-Body Theory and the Quantum Discontinuity, 1894–1912. Oxford: Clarendon Press.

Lorentz, H. A. (1905). Einige Bemerkungen über die Molekulartheorie. Verslagen en Mededeelingen der Koninklijke Akademie van Wetenschappen te Amsterdam, 14, 1273–1280.

Planck, M. (1900). Zur Theorie des Gesetzes der Energieverteilung im Normalspectrum. Verhandlungen der Deutschen Physikalischen Gesellschaft, 2, 237–245.

Planck, M. (1901). Über das Gesetz der Energieverteilung im Normalspektrum. Annalen der Physik, 4, 553–563.