Tuesday, June 17, 2025

Section 39–2 The pressure of a gas

Force per unit area / Energy per unit volume / Quasi-static adiabatic compression

 

In this section, Feynman relates the pressure of a gas to force per unit area, energy per unit volume, and quasi-static adiabatic compression. Most of these concepts trace back to Clausius’ (1857) paper, “On the Nature of the Motion which we call Heat.” In that seminal work, Clausius laid the foundation for the kinetic theory of gases, linking macroscopic properties like pressure and temperature to the microscopic motion of molecules. However, Feynman’s discussion goes beyond a basic explanation of gas pressure and includes a derivation of the adiabatic law.

 

1. Force per unit area

“We define the pressure, then, as equal to the force that we have to apply on a piston, divided by the area of the piston: P = F/A. … So we see that the force, which we already have said is the pressure times the area, is equal to the momentum per second delivered to the piston by the colliding molecules (Feynman et al., 1963, p. 39-3).”

 

The pressure of a gas is a macroscopic property that arises from the collective motion of microscopic particles. Its physical origin can be understood from three perspectives:

1. Macroscopic definition: Pressure (P) is defined as the force (F) exerted perpendicularly on a surface, divided by the area (A) of that surface: P = F / A.

2. Microscopic Origin: At the molecular level, gas particles move randomly at high speeds. When they collide with the walls of a container, they transfer momentum to the surface—producing a measurable force.

3. Statistical Average: The net pressure arises from averaging the momentum changes of many molecular collisions, as described by kinetic theory.

Feynman shows how macroscopic quantities like force and pressure arise from the statistical behavior of microscopic particles, bridging Newtonian mechanics with kinetic theory of gases and linking individual molecular motion to thermodynamic properties.
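
This micro-to-macro bridge can be made concrete with a short numerical sketch. Assuming an ideal monatomic gas with illustrative values for particle number, mass, temperature, and box size (none taken from the text), sampling x-velocities from a Maxwell-Boltzmann distribution and applying P = Nm⟨vx²⟩/V reproduces the ideal gas pressure NkT/V:

```python
import math
import random

# Sketch (illustrative values, not from the text): pressure of an ideal
# monatomic gas from molecular momentum transfer, P = N * m * <vx^2> / V.
random.seed(0)
k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 6.6e-27             # approximate mass of a helium atom, kg
T = 300.0               # temperature, K
L = 1.0e-3              # edge of a cubic box, m
N = 100_000             # number of molecules (a scaled-down stand-in)

# x-velocities follow a Gaussian (Maxwell-Boltzmann) distribution:
sigma = math.sqrt(k_B * T / m)
vx2_mean = sum(random.gauss(0.0, sigma) ** 2 for _ in range(N)) / N

# Each wall collision delivers impulse 2*m*vx and recurs every 2L/vx,
# so the time-averaged force per molecule on one wall is m*vx^2 / L.
V = L ** 3
P = N * m * vx2_mean / V
P_ideal = N * k_B * T / V   # ideal gas law for the same N, V, T
print(f"kinetic estimate: {P:.4g} Pa   ideal gas law: {P_ideal:.4g} Pa")
```

The per-molecule reasoning in the comments (impulse 2mvx, round-trip time 2L/vx) is the standard kinetic-theory argument that this subsection summarizes.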

 

“… but eventually, when equilibrium has set in, the net result is that the collisions are effectively perfectly elastic. On the average, every particle that comes in leaves with the same energy. So we shall imagine that the gas is in a steady condition, and we lose no energy to the piston because the piston is standing still (Feynman et al., 1963, p. 39-3).”

 

In his 1857 paper, Clausius writes: “In order that Mariotte's and Gay-Lussac's laws, as well as others in connexion with the same, may be strictly fulfilled, the gas must satisfy the following conditions with respect to its molecular condition:

(1) The space actually filled by the molecules of the gas must be infinitesimal in comparison to the whole space occupied by the gas itself.

(2) The duration of an impact, that is to say, the time required to produce the actually occurring change in the motion of a molecule when it strikes another molecule or a fixed surface, must be infinitesimal in comparison to the interval of time between two successive collisions.

(3) The influence of the molecular forces must be infinitesimal (p. 116).”

In essence, Clausius stated three key simplifying assumptions to model ideal gases:

  1. Infinitesimal Molecular Volume: The volume occupied by gas molecules is negligible compared to the container’s volume.
  2. Infinitesimal Collision Duration: The time a collision takes is negligible compared to the time between collisions.
  3. Infinitesimal Molecular Forces: Intermolecular forces are negligible except during collisions.

While Clausius restricted his model to these three assumptions, kinetic theory has refined them to account for real-gas effects (e.g., van der Waals forces). However, Clausius's framework remains a milestone in the development of statistical mechanics.

 

The kinetic theory of gases is built on a set of simplifying assumptions that ensure analytical solvability while offering reasonable agreement with experimental observations for gases:

1. Point Particles: Gas molecules are idealized as point masses since their individual volumes are negligible compared to the container size (L). Thus, the time between successive collisions with the same wall can be written as Δt = 2L/vx, where vx is the molecule’s velocity component in the x-direction.

2. No Intermolecular Forces (Except During Collisions): Molecules are assumed not to exert forces on each other except during brief, elastic collisions. Between collisions, they move in straight lines at constant speeds.

3. Short Collision Duration: Collisions are assumed to occur instantaneously, allowing the momentum change to be treated as abrupt without the need of modeling the detailed interaction over time.

4. Perfectly Elastic Collisions: All collisions—whether between molecules or with the container walls—are assumed to be perfectly elastic. This implies:

(a) Kinetic energy is conserved, with no energy loss to heat or deformation.

(b) When a molecule collides with a wall, the component of its momentum perpendicular to the wall reverses from +p​ to –p​, while its speed remains unchanged.

5. Large Number of Particles: The gas consists of a large number of molecules (e.g., 10²³ or more), allowing statistical averaging. This enables definitions of macroscopic quantities such as pressure and temperature.

6. Random Motion: Molecules move randomly, following the Maxwell-Boltzmann distribution. At any moment, molecules are equally likely to move in any direction. The mean square velocity is distributed evenly among the three spatial dimensions: ⟨v²⟩ = ⟨vx²⟩ + ⟨vy²⟩ + ⟨vz²⟩, where ⟨·⟩ denotes an ensemble average.

7. Negligible Gravitational Effects: Gravitational forces are considered too weak to significantly influence molecular motion. As a result, the velocity distribution remains isotropic: ⟨vx²⟩ = ⟨vy²⟩ = ⟨vz²⟩ = ⟨v²⟩/3.

Some physicists introduce additional simplifying assumptions, such as identical particle masses (m), negligible relativistic effects (valid at low to moderate temperatures), and the absence of quantum effects (valid at high temperatures and low densities). These assumptions underpin the derivation of the ideal gas law and help connect microscopic particle dynamics to macroscopic thermodynamic observables. While real gases deviate from ideal behavior—especially at high densities or low temperatures—the kinetic theory remains a foundational framework for understanding gas behavior under most practical conditions.

 

2. Energy per unit volume

“For a monatomic gas we will suppose that the total energy U is equal to a number of atoms times the average kinetic energy of each, because we are disregarding any possibility of excitation or motion inside the atoms themselves. Then, in these circumstances, we would have PV= (2/3)U (Feynman et al., 1963, p. 39-5).”

 

Feynman showed that the product PV of a monatomic ideal gas is directly proportional to the internal energy U. This does not mean PV represents the work done by compressing a gas to zero volume at constant pressure—such a process would be physically unattainable. Rather, pressure and volume are interdependent, governed by the ideal gas law, and cannot be varied independently without altering other state variables. Notably, kinetic theory offers a more fundamental view of pressure—not merely as force per unit area, but as the rate of momentum transfer per unit area due to molecular collisions. This perspective also allows pressure to be interpreted as an energy density (energy per unit volume), revealing a deep connection between the mechanical origin of pressure and its thermodynamic role.
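
A quick numerical check of PV = (2/3)U is straightforward, using U = (3/2)NkT and P = NkT/V. The values below (one mole of a monatomic ideal gas at room temperature) are illustrative choices, not from the text:

```python
# Sketch: verify PV = (2/3)U for a monatomic ideal gas, with
# U = (3/2) N k T and P = N k T / V. All values are illustrative.
k_B = 1.380649e-23      # Boltzmann constant, J/K
N = 6.02214076e23       # one mole of atoms
T = 300.0               # K
V = 2.24e-2             # m^3 (roughly one mole near 1 atm)

U = 1.5 * N * k_B * T   # total translational kinetic energy
P = N * k_B * T / V     # kinetic-theory (ideal gas) pressure
print(P * V, (2.0 / 3.0) * U)   # both equal N * k_B * T
```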

 

Note: A single particle cannot exert pressure in the thermodynamic sense—this pressure is a statistical property that emerges only from the collective behavior of many particles.

 

“It is only a matter of rather tricky mathematics to notice, therefore, that they are each equal to one-third of their sum, which is of course the square of the magnitude of the velocity: ⟨vx²⟩ = (1/3)⟨vx² + vy² + vz²⟩ = ⟨v²⟩/3 (Feynman et al., 1963, p. 39-4).”

 

The "trick" Feynman highlights goes beyond the familiar Pythagorean theorem (or the ‘famous theorem of the Greeks’*); it lies in bridging geometric symmetry with statistical reasoning. First, the equation v² = vx² + vy² + vz², which relies on the three-dimensional Pythagorean theorem, applies to the velocity of a single atom or molecule. Second, the expression ⟨v²⟩ = ⟨vx² + vy² + vz²⟩ may resemble the Pythagorean theorem, but its physical meaning involves statistical averaging over a large number of atoms. Third, a key insight comes from exploiting spherical symmetry. In an idealized isotropic system—where no direction is preferred and external forces such as gravity are absent—the average kinetic energy is evenly distributed across all spatial dimensions. This symmetry reduces the complexity of three-directional motion to a simple relation: ⟨vx²⟩ = ⟨v²⟩/3. The one-third factor, a consequence of isotropy, underpins the derivation of the equipartition theorem and ultimately leads to the ideal gas law.
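
The statistical reading of ⟨vx²⟩ = ⟨v²⟩/3 can also be checked by brute force. A minimal Monte Carlo sketch, sampling directions uniformly on the unit sphere (the sample size is an arbitrary choice):

```python
import random

# Sketch: Monte Carlo check of <vx^2> = <v^2>/3 for isotropic motion.
# Directions are uniform on the unit sphere (Gaussian-normalization trick);
# unit speeds suffice, since the identity holds for any isotropic gas.
random.seed(1)
N = 200_000
total_vx2 = 0.0
for _ in range(N):
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    r2 = x * x + y * y + z * z
    total_vx2 += x * x / r2      # vx^2 of a unit-speed molecule

ratio = total_vx2 / N            # <vx^2> / <v^2>, since v^2 = 1 here
print(f"<vx^2>/<v^2> = {ratio:.4f}  (isotropy predicts 1/3)")
```

No direction is built into the sampling, so the one-third factor emerges purely from isotropy, just as the text argues.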

 

*In the audio recording [18:00], Feynman mentions the 'famous theorem of the Greeks,' more commonly known today as the Pythagorean theorem—though in higher dimensions, it is sometimes associated with de Gua’s theorem.

 

3. Quasi-static adiabatic compression

“A compression in which there is no heat energy added or removed is called an adiabatic compression, from the Greek a (not) + dia (through) + bainein (to go). (The word adiabatic is used in physics in several ways, and it is sometimes hard to see what is common about them.) (Feynman et al., 1963, p. 39-5).”

 

The term adiabatic derives from the Greek a- (not) and diabatos (passable), meaning “not passable”—in this context, referring to the absence of thermal energy transfer. In thermodynamics, a process is considered adiabatic if there is no heat transfer between a system and its surroundings, formally expressed as Q = 0 where Q is the heat transfer. This condition can be achieved—or more accurately, approximated—in two main ways:

1.      Perfect insulation – The system is thermally isolated, preventing any heat flow.

2.      Rapid process – The process occurs so quickly that there is insufficient time for significant heat transfer (e.g., a sudden gas expansion).

In general, adiabatic conditions are idealized approximations—used to simplify the analysis of systems in which heat transfer is minimal or intentionally ignored. In reality, perfectly adiabatic processes are physically unattainable; even under highly controlled conditions, some degree of thermal interaction inevitably occurs. As such, the term adiabatic can be somewhat misleading when applied to complex real-world systems—such as quantum systems—where complete isolation from thermal exchange is practically impossible.

 

“That is, for an adiabatic compression all the work done goes into changing the internal energy. That is the key—that there are no other losses of energy—for then we have PdV=−dU (Feynman et al., 1963, p. 39-5).”

 

Specifically, it is a quasi-static adiabatic compression, which has the following key features:

1. Quasi-static (Reversible): The process is carried out infinitely slowly, allowing the system to remain in thermal equilibrium at every stage. This ensures the process is reversible and that pressure P and volume V are well-defined throughout, allowing work to be calculated as ∫P dV.

2. Adiabatic condition (no heat transfer): The system is perfectly insulated, so no heat is exchanged with the surroundings. This implies ΔQ = 0, and all energy transfer occurs through mechanical work alone.

3. Compression (external work): Work is done on the gas by compressing it, which increases its internal energy. Since ΔQ = 0, the first law of thermodynamics reduces to dU = −PdV, where the change in internal energy dU results entirely from volume change under pressure.

This idealized model is fundamental in thermodynamics, providing insight into processes such as the temperature rise of a gas during compression and forming the theoretical basis for thermodynamic cycles, including those in heat engines.
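
The quasi-static adiabatic compression described above can be integrated step by step. In the sketch below (a monatomic ideal gas with illustrative values of N, T, and V), each small step applies the ideal gas law, then dU = −P dV; the temperature rises and the adiabatic invariant TV^(γ−1), with γ = 5/3, stays constant:

```python
# Sketch: quasi-static adiabatic compression of a monatomic ideal gas,
# integrated in many small steps. At each step the gas is in equilibrium
# (ideal gas law holds) and dU = -P dV with U = (3/2) N k T.
# N, T, V and the step count are illustrative choices.
k_B = 1.380649e-23
N = 1.0e22
T = 300.0            # K, initial temperature
V = 1.0e-3           # m^3, initial volume
V_final = 0.5e-3     # compress to half the volume
steps = 100_000
dV = (V_final - V) / steps

invariant_start = T * V ** (2.0 / 3.0)   # T * V^(gamma - 1), gamma = 5/3
for _ in range(steps):
    P = N * k_B * T / V                  # equilibrium at every stage
    dU = -P * dV                         # first law with dQ = 0
    T += dU / (1.5 * N * k_B)            # dU = (3/2) N k dT
    V += dV
invariant_end = T * V ** (2.0 / 3.0)

print(f"T rises to {T:.1f} K; invariant ratio = "
      f"{invariant_end / invariant_start:.6f}")
```

Halving the volume heats the gas from 300 K to about 300 × 2^(2/3) ≈ 476 K, the temperature rise during compression that the text mentions.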

 

Everyday Connection

“In banging against the eardrums they make an irregular tattoo—boom, boom, boom—which we do not hear because the atoms are so small, and the sensitivity of the ear is not quite enough to notice it. The result of this perpetual bombardment is to push the drum away, but of course there is an equal perpetual bombardment of atoms on the other side of the eardrum, so the net force on it is zero. … We sometimes feel this uncomfortable effect when we go up too fast in an elevator or an airplane … (Feynman et al., 1963, p. 39-3).”

 

Another notable example is alternobaric vertigo, a form of dizziness caused by unequal air pressure in the ears, often experienced while swimming or diving. It can occur in several scenarios:

Shallow-water diving: Even small changes in depth (1–2 meters) can cause discomfort due to unequal pressure in the ears.

Uneven pressure equalization: Clearing pressure in one ear but not the other, sometimes due to nasal congestion or poor technique.

Tight swim goggles: Excessive external pressure on the outer ear may worsen air pressure imbalances.

While swimming may improve posture, relieve neck discomfort, or reduce back pain, addressing one issue can sometimes introduce another, such as pressure-related vertigo.

 

Review Questions:

1. What is the minimum number of assumptions needed for the kinetic theory of (ideal) gases?

2. How does the mechanical definition of pressure as force per unit area (F/A) relate to its thermodynamic interpretation as energy per unit volume (U/V)?

3. Can a Zen master releasing intestinal gas be considered an example of an adiabatic process? What conditions (e.g., rapid expansion, thermal isolation) would be necessary for it to approximate an adiabatic process?

 

The moral of the lesson (in Feynman’s spirit): While one might theoretically liken flatulence to a quasi-static process, biological reality imposes strict constraints—no human can regulate the release slowly enough to maintain thermal equilibrium. In practice, achieving ‘no heat transfer’ requires either near-perfect insulation or a process so rapid that heat exchange is negligible, such as a sudden compression or expansion.


Note: The term ‘adiabatic’ was first introduced by William John Macquorn Rankine in his 1866 publication (Rankine, 1866).

References:

Clausius, R. (1857). Ueber die Art der Bewegung, welche wir Wärme nennen [On the nature of the motion which we call heat]. Annalen der Physik und Chemie, 176(3), 353–380.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman lectures on physics, Vol. I: Mainly mechanics, radiation, and heat. Addison-Wesley.

Rankine, W. J. M. (1866). On the theory of explosive gas engines. Proceedings of the Institution of Civil Engineers, 25, 509–539.

Thursday, May 22, 2025

Section 39–1 Properties of matter

Idealizations / Approximations / Limitations

 

Though titled Properties of Matter, this section might be more fittingly named Introduction to Thermodynamics, as Feynman briefly discusses the theory’s foundational idealizations, mathematical approximations, and intrinsic limitations. To appreciate the power and elegance of thermodynamics, one might recall Einstein’s often-quoted words:

 

“A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content which I am convinced will never be overthrown, within the framework of applicability of its basic concepts.”

 

From a modern standpoint, Einstein’s admiration—though justified for classical systems—overlooks an inherent limitation. Here lies classical thermodynamics’ subtle paradox: its strength is its wide generality, yet that same generality narrows its power. While it offers a small number of universal principles applicable to diverse physical systems, it cannot capture their richer complexities—thermal fluctuations, quantized energy exchange, or nonequilibrium dynamics. Nevertheless, Feynman’s discussion serves as a valuable entry point into thermodynamics and prompts questions about how the classical theory must adapt to remain relevant in contemporary physics.

 

1. Idealizations: The Foundations of Thermodynamics

“It is the first part of the analysis of the properties of matter from the physical point of view, in which, recognizing that matter is made out of a great many atoms, or elementary parts, which interact electrically and obey the laws of mechanics, we try to understand why various aggregates of atoms behave the way they do (Feynman et al., 1963, p. 39-1).”

 

According to Feynman, analyzing the properties of matter from a physical point of view requires at least three key idealizations. They form the foundation of the kinetic theory of gases, which bridges microscopic particle dynamics with macroscopic observables. The key idealizations are:

1. Atomic Model of Matter – Gas molecules are modeled as a large number of randomly moving particles, enabling a statistical treatment of their behavior.

2. Newton’s Laws of motion – They govern particle collisions (momentum-conserving), e.g., linking microscopic motion to pressure, via momentum transfer to container walls.

3. Electrostatic Interactions – Intermolecular forces are assumed to be predominantly Coulombic, while gravitational and magnetic effects are negligible. In certain scenarios, Coulomb forces may be ignored, simplifying the distribution of energy (or velocity).

These idealizations allow for a simplified but insightful framework, especially when applied to macroscopically homogeneous, isotropic, and uncharged systems (Callen, 1985).

 

“For instance, when we compress something, it heats; if we heat it, it expands. There is a relationship between these two facts which can be deduced independently of the machinery underneath. This subject is called thermodynamics (Feynman et al., 1963, p. 39-2).”

 

Feynman’s description of thermodynamics emphasizes the interplay among heat, work, and the physical properties of matter. More broadly, thermodynamics is the branch of physics that explores how energy—particularly in the forms of heat and work—relates to state variables such as temperature, pressure, and entropy, thereby determining the behavior of physical systems. The term thermodynamics—derived from the Greek therme (heat) and dynamis (power)—means “heat in motion” or “heat power.” This name can be somewhat misleading: classical thermodynamics primarily addresses equilibrium states, which are static or time-independent, rather than dynamic processes (Atkins & de Paula, 2010). Despite its wide-ranging applicability, thermodynamics is a phenomenological theory—its laws are grounded in empirical observation rather than derived from microscopic principles. Kinetic theory, by contrast, provides a statistical foundation for thermodynamic behavior and bridges the gap to statistical mechanics.

 

“We shall also find that the subject can be attacked from a nonatomic point of view, and that there are many interrelationships of the properties of substances… The deepest understanding of thermodynamics comes, of course, from understanding the actual machinery underneath, and that is what we shall do: we shall take the atomic viewpoint from the beginning and use it to understand the various properties of matter and the laws of thermodynamics (Feynman et al., 1963, p. 39-2).”

 

An automotive analogy may help clarify the distinctions among thermodynamics, kinetic theory, and statistical mechanics:

  • Thermodynamics is like reading a car’s speedometer—it offers a macroscopic description (nonatomic viewpoint) based on observable quantities, without reference to microscopic mechanisms.
  • Kinetic theory is akin to analyzing the engine’s revolutions per minute (RPM) to explain the car’s speed—it adopts a microscopic perspective (atomic viewpoint) to understand macroscopic behavior in terms of particle motion.
  • Statistical mechanics is like the general theory of engines—it provides a unifying framework that applies probabilistic principles to a wide range of systems, not just gases.

Note that Feynman’s terms ‘nonatomic point of view’ and ‘atomic viewpoint’ correspond to what is commonly known as macroscopic and microscopic perspectives respectively.

 

2. Approximations: From Microscopic Chaos to Macroscopic Order

“Anyone who wants to analyze the properties of matter in a real problem might want to start by writing down the fundamental equations and then try to solve them mathematically. Although there are people who try to use such an approach, these people are the failures in this field; the real successes come to those who start from a physical point of view, people who have a rough idea where they are going and then begin by making the right kind of approximations, knowing what is big and what is small in a given complicated situation (Feynman et al., 1963, p. 39-2).”

 

Thermodynamics describes macroscopic properties—such as temperature and entropy—without invoking the microscopic details of matter. Statistical mechanics, in contrast, provides a deeper foundation by using probability theory and statistical methods to connect the microscopic behavior of particles to these observable thermodynamic quantities. Since tracking the motion of every individual particle is practically impossible, statistical approaches serve as a bridge between microscopic disorder (e.g., particle velocities and collisions) and macroscopic regularity (e.g., temperature and pressure). For example, the ideal gas law models gases as collections of non-interacting point particles, while the Van der Waals equation incorporates molecular size and intermolecular forces, offering a more realistic description of real gases. In this context, Feynman’s remark about “knowing what is big and what is small” can be interpreted not only in terms of physical size, but as an invitation to identify which variables significantly affect a system’s macroscopic behavior and which can be safely ignored.
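
The ideal gas versus van der Waals contrast mentioned above can be illustrated numerically. A sketch, using approximate textbook van der Waals constants for CO₂ (treat the exact values of a and b, and the chosen volumes, as assumptions): the two equations nearly agree for a dilute gas, then diverge sharply as the gas is squeezed into a volume where molecular size and attraction matter.

```python
# Sketch: ideal gas law vs. van der Waals equation for one mole of CO2.
# The constants a and b are approximate textbook values for CO2, and the
# chosen volumes are illustrative.
R = 8.314            # gas constant, J/(mol K)
a = 0.364            # Pa m^6 / mol^2 (approx., CO2)
b = 4.27e-5          # m^3 / mol (approx., CO2)
n = 1.0              # mol
T = 300.0            # K

for V in (2.24e-2, 1.0e-3, 2.0e-4):      # m^3: dilute -> dense
    P_ideal = n * R * T / V
    P_vdw = n * R * T / (V - n * b) - a * n * n / (V * V)
    print(f"V = {V:.1e} m^3   ideal: {P_ideal:.4g} Pa   vdW: {P_vdw:.4g} Pa")
```

This is exactly Feynman’s “what is big and what is small”: the corrections nb and a/V² are negligible at large V and dominant at small V.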

 

“As an interesting example, we all know that equal volumes of gases, at the same pressure and temperature, contain the same number of molecules. The law of multiple proportions, that when two gases combine in a chemical reaction the volumes needed always stand in simple integral proportions, was understood ultimately by Avogadro to mean that equal volumes have equal numbers of atoms. Now why do they have equal numbers of atoms? (Feynman et al., 1963, p. 39-2).”

 

In 1808, Joseph Gay-Lussac observed that gases react in simple whole-number volume ratios—e.g., 2 volumes of hydrogen combine with 1 volume of oxygen to form 2 volumes of water vapor (2H₂ + O₂ → 2H₂O). However, John Dalton (1808) rejected Gay-Lussac’s findings because Dalton incorrectly assumed water had the formula HO. The turning point came in 1811 when Amedeo Avogadro proposed two revolutionary ideas:

(1) Equal gas volumes, under the same temperature and pressure, contain equal numbers of molecules;

(2) Many gases, including hydrogen and oxygen, exist as diatomic molecules.

Avogadro’s insight resolved the discrepancy in Dalton’s model by correctly identifying the composition of water and explaining why volume ratios align with molecular stoichiometry. Yet his ideas were largely ignored until 1860, when Stanislao Cannizzaro revived them at the Karlsruhe Congress, enabling chemists to determine atomic masses with consistency.

 

“Now why do they have equal numbers of atoms? Can we deduce from Newton’s laws that the number of atoms should be equal? (Feynman et al., 1963, p. 39-2).”

 

Newton’s laws of motion alone cannot explain Avogadro’s principle, because they describe how individual particles move, not how a large number of particles behave collectively. Avogadro’s insight depended on recognizing gases as composed of discrete molecules with definite stoichiometries—a chemical idea beyond the scope of Newton’s laws. Classical mechanics can explain how particle collisions generate pressure, but connecting equal volumes to equal numbers of molecules requires statistical averaging over a large number of particles. The ideal gas law (PV = nRT), which formalizes Avogadro’s principle*, emerges only when Newton’s laws are combined with molecular assumptions and statistical reasoning. Thus, Feynman’s question underscores this gap: classical physics alone cannot explain macroscopic gas behavior without invoking probability and the atomic nature of matter.

 

* Avogadro’s principle (n ∝ V at fixed P and T) is embedded in the ideal gas law PV = nRT, which requires assuming gases are composed of discrete molecules (a non-Newtonian idea).
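
Numerically, Avogadro’s principle is just this observation: at fixed P, T, and V, the ideal gas law returns the same mole count n regardless of which gas fills the volume. A minimal sketch with illustrative values:

```python
# Sketch: Avogadro's principle from PV = nRT. At fixed P, T, V the mole
# count n is independent of the gas's identity. Values are illustrative.
R = 8.314            # gas constant, J/(mol K)
P = 101325.0         # Pa (1 atm)
T = 273.15           # K
V = 2.24e-2          # m^3

n = P * V / (R * T)  # same answer for H2, O2, He, ... at these P, T, V
N_A = 6.02214076e23  # Avogadro's number, 1/mol
print(f"n = {n:.4f} mol  ->  {n * N_A:.3e} molecules, whatever the gas")
```

Nothing about the molecular species appears in the formula—which is the point: the principle rests on statistical reasoning about counts, not on the mechanics of any particular molecule.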

 

3. Limitations: Where Classical Thermodynamics Fails

“… from a physical standpoint, the actual behavior of the atoms is not according to classical mechanics, but according to quantum mechanics, and a correct understanding of the subject cannot be attained until we understand quantum mechanics (Feynman et al., 1963, p. 39-1).”

 

Feynman rightly remarks that classical mechanics is inadequate for understanding atomic behavior, which fundamentally requires quantum mechanics. However, he could have been more precise about the limitations of classical thermodynamics, particularly its failure to accurately describe systems under certain conditions. While classical thermodynamics is a widely applicable theory, it breaks down when applied to quantum systems such as photons, ultracold atoms, or few-particle systems. For example, quantum systems exhibit non-classical behavior due to their sensitivity to environmental interactions, which can induce decoherence and destroy their coherent quantum states. Thus, quantum thermodynamics has emerged as a framework that redefines heat, work, and temperature in contexts where fluctuations, discreteness, and quantum correlations are unavoidable. It extends thermodynamic principles into the quantum domain, where the basic assumptions underpinning the classical theory no longer apply.

 

“Here, unlike the case of billiard balls and automobiles, the difference between the classical mechanical laws and the quantum-mechanical laws is very important and very significant, so that many things that we will deduce by classical physics will be fundamentally incorrect. Therefore there will be certain things to be partially unlearned; however, we shall indicate in every case when a result is incorrect, so that we will know just where the ‘edges’ are (Feynman et al., 1963, p. 39-1).”

 

In his textbook Statistical Mechanics, Feynman writes: “If a system is very weakly coupled to a heat bath at a given 'temperature,' if the coupling is indefinite or not known precisely, if the coupling has been on for a long time, and if all the 'fast' things have happened and all the 'slow' things not, the system is said to be in thermal equilibrium.” This emphasizes that thermal equilibrium is not just about fast processes such as molecules settling into a Maxwell-Boltzmann distribution. It also requires that the system has interacted long enough with its environment to reach a stable state, while slower processes (e.g., container erosion) remain negligible. Thermodynamics holds under such conditions, but its accuracy depends on system size. In large systems (over 1,000 particles), fluctuations average out, making macroscopic quantities well-defined. In small systems (under 100 particles), fluctuations dominate, and corrections from statistical mechanics are needed. While thermodynamics may apply at the nanoscale, its assumptions must be used with care, as the boundary between large and small systems is inherently fuzzy.
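
The size dependence of fluctuations described above can be demonstrated with a small simulation. A sketch, assuming each particle’s energy is an independent exponential draw (an arbitrary modeling choice, in arbitrary units): the relative fluctuation of the total energy falls off like 1/√N, which is why macroscopic quantities look sharp for large systems and noisy for small ones.

```python
import math
import random

# Sketch: relative fluctuations of an extensive quantity shrink like
# 1/sqrt(N). Each particle's energy is an independent exponential draw
# (an arbitrary modeling choice with mean 1, in arbitrary units).
random.seed(2)

def relative_fluctuation(n_particles, trials=1000):
    totals = [sum(random.expovariate(1.0) for _ in range(n_particles))
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return math.sqrt(var) / mean     # std / mean of the total energy

for n in (10, 100, 1000):
    print(n, round(relative_fluctuation(n), 4))   # roughly 1/sqrt(n)
```

At 10 particles the total fluctuates by roughly 30%; at 1,000 it is already near 3%, consistent with the fuzzy large/small boundary noted above.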


Review Questions:

1. What idealizations underpin classical thermodynamics, and how do they simplify reality?

2. How does statistical mechanics employ approximations, and what justifies them?

3. When does classical thermodynamics fail, and how do modern frameworks address its limitations?

 

The moral of the Lesson (In Feynman’s spirit):

What makes a theory great? Simple principles, surprising connections, and wide applicability. Thermodynamics has all three—it’s the one theory I’d bet my life on.

But here’s the twist: a theory that powerful comes with its own limits. Thermodynamics is like a brilliant, no-nonsense old professor—it delivers rock-solid answers, but only to the big, timeless questions. Ask it about the messy details, the fluctuations, the microscopic chaos, and it just shrugs and says: "Not my department!"

 

References:

Avogadro, A. (1811). Essay on a Manner of Determining the Relative Masses of the Elementary Molecules of Bodies, and the Proportions in Which They Enter Into These Compounds. Journal de Physique.

Callen, H. B. (1985). Thermodynamics and an introduction to thermostatistics (2nd ed.). Wiley.

Cannizzaro, S. (1860). Sunto di un Corso di Filosofia Chimica. Paper presented at the Karlsruhe Congress.

Dalton, J. (1808). A New System of Chemical Philosophy. Manchester: Bickerstaff.

Einstein, A. (1970). Autobiographical notes. In P. A. Schilpp (Ed.), Albert Einstein: Philosopher-scientist (pp. 1–95). Open Court. (Original work published 1949).

Feynman, R. P. (2018). Statistical mechanics: A set of lectures. CRC Press.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman lectures on physics, Vol. I: Mainly mechanics, radiation, and heat. Addison-Wesley.

Gay-Lussac, J. L. (1808). Mémoire sur la combinaison des substances gazeuses entre elles. Annales de Chimie.