Tuesday, July 15, 2025

Section 39–4 Temperature and kinetic energy

Thermal equilibrium / Isotropic distribution / Absolute temperature

 

This section explores three interrelated concepts: thermal equilibrium, isotropic velocity distribution, and absolute temperature. These ideas trace back to James Clerk Maxwell’s seminal 1860 paper, Illustrations of the Dynamical Theory of Gases. Part I: On the Motions and Collisions of Perfectly Elastic Spheres. In essence, Feynman discusses the behavior of an ideal gas in thermal equilibrium, a state in which molecular velocities are distributed uniformly in all directions (isotropically) and the system maintains a constant, well-defined temperature.

 

1. Thermal equilibrium

“What are the conditions for equilibrium? We must realize that this is not the only condition over the long run, but something else must happen more slowly as the true complete equilibrium corresponding to equal temperatures sets in (Feynman et al., 1963, p. 39-7).”

 

To avoid conceptual ambiguity, Feynman could have more precisely referred to thermal, mechanical, or thermodynamic equilibrium. Thermal equilibrium occurs when a system attains a state in which temperature is uniform throughout the system and no net heat transfer takes place. Mechanical equilibrium requires the absence of net forces or pressure gradients. Thermodynamic equilibrium implies that the system is in thermal, mechanical, and chemical equilibrium at the same time. When Feynman remarks that “something else must happen more slowly,” he seems to allude to these equilibrium conditions without naming them. While this omission may make his explanation more intuitive, it is less precise from a thermodynamic standpoint.

 

Temperature can be conceptualized through three fundamental features:

1. Microscopic Definition: Temperature is directly proportional to the average kinetic energy of the molecules in a system. This provides a molecular-level understanding that links thermodynamic temperature to molecular motion (a compact statement of this relation appears after this list).

2. Zeroth Law of Thermodynamics: If system A is in thermal equilibrium with system B, and system B is in thermal equilibrium with system C, then systems A and C are also in thermal equilibrium with each other. This establishes temperature as a transitive and measurable property that can be used to compare the thermal states of different systems.

3. Temporal Stability: Once a system reaches thermal equilibrium, its temperature remains stable over time, distinguishing it from transient conditions and making it a reliable thermodynamic variable.

These three aspects—average kinetic energy, transitive property, and temporal stability—provide an operational basis for defining and measuring temperature.
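For the first feature, the standard result of kinetic theory (the relation Feynman develops in this chapter for an ideal monatomic gas) can be written compactly as

\[
\left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k T ,
\]

where k is the Boltzmann constant: each translational degree of freedom carries an average energy of (1/2)kT, so the mean translational kinetic energy per molecule is directly proportional to the absolute temperature.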

 

2. Isotropic distribution

“Now then, what is the distribution resulting from this? From our previous argument we conclude this: that at equilibrium, all directions for w are equally likely, relative to the direction of the motion of the CM. There will be no particular correlation, in the end, between the direction of the motion of the relative velocity and that of the motion of the CM (Feynman et al., 1963, p. 39-8).”

 

Maxwell’s assumption of an isotropic velocity distribution implies that, in thermal equilibrium, molecular motion has no preferred direction. In reality, gravity causes gas molecules to follow parabolic trajectories between collisions rather than idealized straight lines: an upward-moving molecule loses kinetic energy as it ascends, whereas a downward-moving molecule gains kinetic energy as it descends, so a molecule’s speed depends on its height. Feynman’s explanation, which assumes molecular motion is equally probable in all directions, therefore applies to an idealized, force-free system. The approximation is excellent locally, in small regions where the change in gravitational potential energy is negligible compared with kT. Across larger vertical distances, gravity does become significant, but what it alters in equilibrium is the spatial distribution of the gas (the density and pressure decrease with height) rather than the local velocity distribution, which remains isotropic at each height (see the note below).

 

Note on Distributions: In The Feynman Lectures on Physics (Section 40-1), the Boltzmann distribution is derived to describe the spatial distribution of molecules under the influence of gravity. This distribution predicts an exponential decrease in particle number density with increasing altitude. Crucially, this differs from the Maxwell-Boltzmann distribution, which characterizes the probability distribution of molecular speeds in a system at thermal equilibrium in the absence of external forces.
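As a minimal sketch of that Section 40-1 result: for an isothermal ideal gas of molecular mass m in a uniform gravitational field g, the number density falls exponentially with height h,

\[
n(h) = n_{0}\, e^{-mgh/kT},
\]

with characteristic scale height H = kT/(mg), roughly 8 km for air near room temperature. This gives a concrete measure of “locally”: over distances much smaller than H, the gravitational term mgh is negligible compared with kT and the force-free idealization is excellent.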

 

Footnote: “This argument, which was the one used by Maxwell, involves some subtleties. Although the conclusion is correct, the result does not follow purely from the considerations of symmetry that we used before, since, by going to a reference frame moving through the gas, we may find a distorted velocity distribution. We have not found a simple proof of this result (Feynman et al., 1963, p. 39-8).”

 

In his 1860 derivation of the molecular speed distribution for an ideal gas, Maxwell assumed that the velocity components are statistically independent and that all directions of rebound are equally likely. These assumptions have drawn criticism: Richet (2001), for example, argues that “the assumed isotropy of the gas does not necessarily imply the statistical independence of the variables along different directions of space” (p. 319). A more fundamental challenge arises from relativistic physics: Walstad (2013) points out that in a relativistic gas, the kinetic energy cannot be decomposed into independent functions of the Cartesian velocity components, so the probability distribution for one component of velocity inherently depends on the others. Walstad goes so far as to conclude that Maxwell’s derivation lacks even pedagogical validity. Yet Maxwell’s speed distribution is also recognized as the first statistical law proposed in physics, and we need not expect his derivation to be fully rigorous by today’s standards—it was a pioneering insight rather than a formal proof.
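To make the criticized step explicit, here is a minimal sketch of Maxwell’s 1860 argument (paraphrased, not quoted): he assumed that the distribution factorizes into independent, identical functions of the Cartesian velocity components and, by isotropy, depends only on the speed,

\[
f(v_x, v_y, v_z) = g(v_x)\, g(v_y)\, g(v_z) = \phi\!\left(v_x^{2} + v_y^{2} + v_z^{2}\right).
\]

The only well-behaved solution of this functional equation is a Gaussian, g(v_x) ∝ e^{−v_x²/α²}, which yields the Maxwellian speed distribution. The objections quoted above are aimed precisely at the first equality (the independence assumption) rather than at the isotropy itself.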

 

Footnote: “Although the conclusion is correct, the result does not follow purely from the considerations of symmetry that we used before, since, by going to a reference frame moving through the gas, we may find a distorted velocity distribution (Feynman et al., 1963, p. 39-8).”

 

The distorted velocity distribution seen in a moving reference frame can be understood through a loose relativistic analogy. Just as electric field lines change direction under a Lorentz transformation (see figure below), molecular velocity distributions appear anisotropic when viewed from a moving reference frame—though the transformation rules governing the two cases are fundamentally different. This asymmetry is a kinematic effect: it reflects the motion of the observer rather than an intrinsic property of the gas. In the gas’s rest frame (where thermal equilibrium is defined), Maxwell’s assumption of isotropy remains valid. In the moving frame, the average molecular momentum remains zero in directions perpendicular to the observer’s motion, but a non-zero net momentum appears in the direction opposite to that motion. This illustrates how velocity distributions are frame-dependent, and how apparent anisotropies can arise solely from a change in the observer’s reference frame.

Source: (Resnick, 1991).
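A minimal non-relativistic sketch (not Feynman’s argument) makes the frame dependence concrete. If the gas is Maxwellian and isotropic in its own rest frame, an observer moving through the gas with velocity u assigns each molecule the velocity v = v_gas − u, so the distribution the observer writes down is

\[
f'(\mathbf{v}) \;\propto\; \exp\!\left[-\,\frac{m\,\lvert \mathbf{v} + \mathbf{u} \rvert^{2}}{2kT}\right],
\]

a drifting Maxwellian with mean velocity −u. It is still isotropic about the shifted center but anisotropic about the observer, which is exactly the kinematic distortion described above: the perpendicular components average to zero, while the component along the motion acquires a net drift.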

 

 

3. Absolute temperature

“We may arbitrarily define the scale of temperature so that the mean energy is linearly proportional to the temperature. The best way to do it would be to call the mean energy itself ‘the temperature’…… we use a constant conversion factor between the energy of a molecule and a degree of absolute temperature called a degree Kelvin (Feynman et al., 1963, p. 39-10).”

 

The Absolute Nature—and Arbitrary Aspects—of Temperature

Physicists often emphasize that absolute temperature is not an arbitrary construct, but is grounded in fundamental physical principles. There are at least three possible arguments: (1) Universal minimum: The Kelvin scale’s zero point (absolute zero) represents a theoretical limit at which, according to classical physics, all thermal motion ceases. (2) Fundamental constant: Unlike empirical scales (e.g., Celsius or Fahrenheit), the Kelvin is defined via the Boltzmann constant (k), linking temperature directly to energy and decoupling it from material-dependent references such as the boiling point of water. (3) Universal standard: Absolute temperature is linearly proportional to the average kinetic energy of the particles, making it an objective measure applicable from real gases to cosmological observations. Thus, the Kelvin scale is often called the absolute (not arbitrary) temperature scale, a framework based on physical principles.

 

Arbitrary Conventions Remain

Despite its foundation in physical principles, the so-called absolute temperature is not entirely free from human-defined conventions. Firstly, the size of the Kelvin unit was historically chosen to match the Celsius degree for practical continuity. Secondly, the Kelvin scale is defined via the Boltzmann constant (k), which connects temperature to energy through the expression kT; yet energy is measured in joules—a unit based on human-defined standards (e.g., the kilogram and the second). Thirdly, the choice to define temperature as linearly proportional to average kinetic energy (even close to 0 K) is itself a convention, agreed upon for consistency across physical theories. In summary, although the Kelvin scale is grounded in physical principles, its construction still depends on human-defined conventions—such as the choice of units, the scaling, and the dimensional system. This interplay between the objective foundation of temperature and its conventional elements reflects a deeper philosophical question, closely associated with conventionalism in the philosophy of science.
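A quick numerical illustration of the second point (standard values, not from Feynman’s text): since the 2019 redefinition of the SI, the kelvin is fixed by setting k = 1.380649 × 10⁻²³ J/K exactly, so at room temperature

\[
kT \;\approx\; (1.38 \times 10^{-23}\ \mathrm{J/K})(300\ \mathrm{K}) \;\approx\; 4.1 \times 10^{-21}\ \mathrm{J} \;\approx\; 0.026\ \mathrm{eV}.
\]

The energy scale itself is physical, but its numerical value inherits whatever conventions define the joule (and hence the kilogram, metre, and second).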

 

Review questions:

1. Should the term “thermal equilibrium” be used in introductory discussions?

2. How would you explain the directions of molecules after collisions in different frames?

3. To what extent is the Kelvin scale (or absolute temperature) arbitrarily defined?

 

The moral of the lesson:

Scientific conventions—such as systems of measurement—are built on collective agreement rather than absolute truths. Similarly, societal norms such as laws, ethics, and customs are developed through consensus rather than derived from a universal morality. This highlights the importance of dialogue, cooperation, and shared understanding in building a functional and adaptable society.

 

Fun Facts:

Why did the SARS virus struggle to spread in tropical regions? Research suggests that higher temperature and humidity accelerate the breakdown of the virus (Biryukov et al., 2020; Chan et al., 2011). In contrast, cooler and drier conditions—such as Hong Kong’s springtime or air-conditioned environments—allow the virus to survive longer, increasing transmission risk. This may help explain why countries like Indonesia, Malaysia, and Singapore experienced fewer major outbreaks: their warm, humid climates acted as a natural barrier. Sunlight may also help, but its antiviral power comes not from warmth but from ultraviolet (UV) radiation.


References:

Biryukov, J., Boydston, J. A., Dunning, R. A., Yeager, J. J., Wood, S., Reese, A. L., ... & Altamura, L. A. (2020). Increasing temperature and relative humidity accelerates inactivation of SARS-CoV-2 on surfaces. mSphere, 5(4).

Chan, K. H., Peiris, J. M., Lam, S. Y., Poon, L. L. M., Yuen, K. Y., & Seto, W. H. (2011). The effects of temperature and relative humidity on the viability of the SARS coronavirus. Advances in Virology, 2011(1), 734690.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Maxwell, J. C. (1860). Illustrations of the dynamical theory of gases. Part I. On the motions and collisions of perfectly elastic spheres. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 4th Series, vol. 19, pp. 19–32.

Richet, P. (2001). The Physical Basis of Thermodynamics: With Applications to Chemistry. New York: Kluwer Academic/Plenum.

Resnick, R. (1991). Introduction to special relativity. John Wiley & Sons.

Walstad, A. (2013). On deriving the Maxwellian velocity distribution. American Journal of Physics, 81(7), 555–557.

Friday, July 4, 2025

Section 39–3 Compressibility of radiation

Adiabatic system / Adiabatic law / Adiabatic index

 

In this section, there are three closely related concepts: the adiabatic system, the adiabatic law for a photon gas, and the adiabatic index. Although the section is titled “Compressibility of Radiation,” it bears directly on stellar structure and stability. These ideas originate in Arthur Eddington’s (1926) seminal work The Internal Constitution of the Stars, which laid the theoretical foundation for modern astrophysics.

       While Eddington’s model was groundbreaking, it was later refined by Subrahmanyan Chandrasekhar, whose 1933 theory of white dwarfs introduced a critical mass threshold—now known as the Chandrasekhar limit—and earned him the 1983 Nobel Prize in Physics. Initially, both Milne and Eddington praised Chandrasekhar’s thesis for resolving discrepancies in their models, but Chandrasekhar’s conclusion—that stars exceeding a certain mass cannot become white dwarfs—challenged Eddington’s predictions and reshaped our understanding of stellar evolution.

 

1. Adiabatic system

“We have a large number of photons in a box in which the temperature is very high. (The box is, of course, the gas in a very hot star. The sun is not hot enough; there are still too many atoms, but at still higher temperatures in certain very hot stars, we may neglect the atoms and suppose that the only objects that we have in the box are photons.) (Feynman et al., 1963, p. 39-6).”

 

Stars are often modeled as adiabatic systems, meaning that heat transfer with the surroundings is negligible. This approximation holds well in the stellar interior, where the high density inhibits significant energy loss. Within a star, energy is transported primarily by radiative diffusion and convection (see the figure below), but both processes operate over timescales much longer than those of local dynamical processes (Kippenhahn et al., 2012); an order-of-magnitude comparison is sketched after the figure below. Under conditions of extreme pressure and density, the photon gas behaves approximately adiabatically, especially in regions where radiation pressure dominates (Eddington, 1926). However, this approximation breaks down near the stellar surface, where densities decrease and photons can escape into space; near the photosphere, radiative losses become significant, and the adiabatic model no longer applies.

 

Source: (Johnson et al., 2000, p. 311)
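An order-of-magnitude comparison, using standard solar values as a rough sketch (not a calculation from Feynman or Eddington), shows why the adiabatic approximation is reasonable deep inside a star: the time over which the star could radiate away its thermal content (the Kelvin–Helmholtz timescale) dwarfs the timescale of local dynamical adjustments,

\[
t_{\mathrm{KH}} \sim \frac{G M_{\odot}^{2}}{R_{\odot} L_{\odot}} \approx 3 \times 10^{7}\ \mathrm{yr},
\qquad
t_{\mathrm{dyn}} \sim \sqrt{\frac{R_{\odot}^{3}}{G M_{\odot}}} \approx 30\ \mathrm{min}.
\]

On dynamical timescales, therefore, heat exchange with the surroundings is negligible and the gas behaves nearly adiabatically.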


The Sun is composed primarily of hydrogen (≈ 71%) and helium (≈ 27%), with trace amounts of heavier elements such as oxygen, carbon, and iron (see the figure below). Its energy is generated through nuclear fusion in the core, producing high-energy photons in the process. Because of the Sun’s extreme interior density, these photons undergo countless scatterings, taking thousands to millions of years to reach the surface (a rough estimate is sketched after the figure below). To illustrate how light behaves in such hot, dense environments, Feynman introduced a simplified model: a box filled with photons, representing an idealized photon gas. This model captures key concepts such as radiation (photon) pressure, but it omits essential features of a real star—such as photon-matter interactions, the role of convection, and the star’s complex layered structure.

 

Source: (Wilkinson, 2012)
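The “thousands to millions of years” figure can be recovered with a simple random-walk estimate (a back-of-the-envelope sketch; the photon mean free path ℓ is an assumed order of magnitude, not a measured value). A photon scattering with mean free path ℓ takes roughly

\[
t \;\sim\; \frac{R_{\odot}^{2}}{\ell\, c} \;\approx\; \frac{(7 \times 10^{8}\ \mathrm{m})^{2}}{(10^{-3}\ \mathrm{m})(3 \times 10^{8}\ \mathrm{m/s})} \;\approx\; 1.6 \times 10^{12}\ \mathrm{s} \;\approx\; 5 \times 10^{4}\ \mathrm{yr}
\]

to diffuse from the core to the surface; the exact figure depends sensitively on the assumed mean free path and the density profile, which is why quoted values span such a wide range.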

Note: The adiabatic assumption can be found in The Internal Constitution of the Stars, where Eddington (1926) mentions: “By hypothesis there is no appreciable gain or loss of heat by conduction or radiation; it therefore expands without gain or loss of heat, i.e., adiabatically (p. 98).”

 

2. Adiabatic law

“For photons, then, since we have 1/3 in front, (γ−1) in (39.11) is 1/3, or γ=4/3, and we have discovered that radiation in a box obeys the law PV^(4/3) = C (Feynman et al., 1963, p. 39-6).”

 

It is more accurate to say that we idealize a system of photons as obeying the adiabatic law. This law can be expressed in various equivalent ways: as a pressure-density relation (P = kρ^γ), a temperature-volume relation (TV^(γ−1) = constant), or a pressure-temperature relation (P^((1−γ)/γ)T = constant). In astrophysics, the pressure-density form is preferred because it directly relates the two main variables without requiring knowledge of the temperature profile. In short, Eddington’s (1926) result was a brilliant deduction—a logical consequence of applying known physics to stars; it was not a discovery that photons strictly obey the adiabatic law. By proposing the relation P = kρ^γ, a polytropic equation of state, he treated k and γ as adjustable parameters, thereby simplifying the stellar model by treating temperature as a dependent variable.
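A minimal sketch of both steps. For a photon gas the pressure is one third of the energy density, P = U/(3V); comparing this with the relation Feynman labels (39.11), PV = (γ−1)U, identifies γ − 1 = 1/3 and hence γ = 4/3. For an ordinary ideal gas with PV ∝ T, the equivalent forms quoted above follow from one another:

\[
P V^{\gamma} = \mathrm{const}
\;\Longleftrightarrow\;
T V^{\gamma - 1} = \mathrm{const}
\;\Longleftrightarrow\;
P^{(1-\gamma)/\gamma}\, T = \mathrm{const},
\]

and writing V ∝ 1/ρ at fixed mass turns the first form into the pressure-density relation P = kρ^γ.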

 

In Eddington’s model of stellar structure, the polytropic process serves as a powerful tool because it offers greater flexibility than the strict adiabatic assumption. A polytropic model introduces an adjustable index n, which is related to the adiabatic index by the relation γ=1+1/n. This allows the model to represent different types of energy transport, including both convection and radiation. Crucially, polytropic models allow intermediate values of n (e.g., n = 3 in Eddington’s model), making them suitable for modeling real stars in which both gas pressure and radiation pressure contribute significantly. In this way, Eddington’s use of polytropes provided a more general and adaptable framework, with adiabatic behavior emerging as a special case within a broader continuum.
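As a quick check of the correspondence γ = 1 + 1/n for the two indices that dominate this discussion (with the physical labels as conventionally assigned):

\[
n = 3 \;\Rightarrow\; \gamma = \tfrac{4}{3} \quad (\text{Eddington's standard model; radiation-dominated}),
\qquad
n = \tfrac{3}{2} \;\Rightarrow\; \gamma = \tfrac{5}{3} \quad (\text{monatomic ideal gas}).
\]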

 

In The Internal Constitution of the Stars, Eddington (1926) writes: “… we content ourselves with laying down an arbitrary connection between P and ρ and tracing the consequences. In general, whether the gas is perfect or imperfect, any value of the pressure can be made to correspond to a given density by assigning an appropriate temperature; our procedure thus amounts to imposing a particular temperature distribution on the star… The third relation is taken to be of the form P = kρ^γ where k and γ are disposable constants (p. 80).”

 

3. Adiabatic index

“So we know the compressibility of radiation! That is what is used in an analysis of the contribution of radiation pressure in a star, that is how we calculate it, and how it changes when we compress it (Feynman et al., 1963, p. 39-6).”

 

In general, the adiabatic index γ depends on the microscopic structure of the gas, as it reflects how energy is distributed among translational, rotational, and vibrational degrees of freedom. In Eddington’s model, γ=4/3 applies to the radiative core, where radiation pressure dominates, while γ=5/3 is more appropriate for the outer convective layers, where gas pressure governs the dynamics. In Chandrasekhar’s theory of white dwarfs, the condition γ=4/3 emerges as a critical threshold: when the effective γ falls below this value—due to relativistic electron degeneracy at high densities—the star becomes dynamically unstable and collapses under its own gravity. This threshold encapsulates the balance between internal pressure and gravitational force, shaped by the star’s mass, composition, and the relative contributions of gas and radiation pressure. In this sense, the deceptively simple value γ=4/3 marks a critical boundary between stellar stability and gravitational collapse, and thus between the life and death of a star.
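The special role of γ = 4/3 can be made quantitative with a standard virial-theorem argument found in stellar-structure texts (an outline, not Feynman’s or Eddington’s derivation). For a self-gravitating gas with constant adiabatic index γ, the virial theorem gives 3(γ−1)U + Ω = 0, where U is the internal energy and Ω (< 0) the gravitational energy, so the total energy is

\[
E \;=\; U + \Omega \;=\; \frac{3\gamma - 4}{3(\gamma - 1)}\, \Omega .
\]

The star is gravitationally bound (E < 0) only if γ > 4/3; at γ = 4/3 the total energy vanishes and the configuration is only marginally stable, which is why a drop of the effective γ below 4/3 signals collapse.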

 

In The Internal Constitution of the Stars, Eddington (1926) writes: “The value of γ for the stellar material must be estimated or guessed; but the range of uncertainty from this cause is not very great. It is impossible for γ to exceed the value 5/3 which corresponds to a monatomic gas; and it can be shown that if γ is less than 4/3 the distribution is unstable (p. 98).”

 

Chandrasekhar’s Breakthrough

Chandrasekhar extended Eddington’s model by incorporating electron degeneracy pressure, a concept that Eddington had largely dismissed. While Eddington’s polytropic approach effectively described stars with an adiabatic index γ ranging from 4/3 to 5/3, Chandrasekhar showed that white dwarfs—supported by degenerate electrons—require a relativistic treatment. His analysis revealed that as a white dwarf's mass approaches a critical threshold—the Chandrasekhar limit (≈ 1.4 solar masses)—the pressure response weakens and γ falls below 4/3, triggering gravitational collapse. Beyond this limit, the collapse may lead to a supernova and the formation of a neutron star or black hole, depending on the mass of the progenitor star. In short, Chandrasekhar’s synthesis of quantum mechanics and special relativity overcame the limitations of Eddington’s model and profoundly transformed our understanding of stellar evolution.
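The shift of the effective γ follows from the standard equation of state of an ideal degenerate electron gas (quoted here without derivation):

\[
P \propto \rho^{5/3} \quad (\text{non-relativistic electrons}),
\qquad
P \propto \rho^{4/3} \quad (\text{ultra-relativistic electrons}).
\]

As the mass, and with it the central density, increases, the electrons become relativistic and the exponent slides from 5/3 toward the marginal value 4/3 identified above; this is the microscopic origin of the Chandrasekhar limit.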

 

Review questions:

1. Why can a star be modeled as an adiabatic system in which photon (radiation) pressure dominates?

2. Why did Eddington prefer to use the polytropic equation of state P = kργ in modeling stars, rather than limit himself to the strict adiabatic law?

3. How does the adiabatic index γ determine the stability of a star against gravitational collapse?

 

The moral of the lesson (in Feynman’s spirit): For years, Chandrasekhar’s model was dismissed—not because it was wrong, but because Eddington publicly ridiculed it. Even though physicists like Dirac*, Peierls, and Pryce refuted Eddington’s objections, many astrophysicists followed Eddington’s lead and ignored Chandrasekhar’s results. In a twist of irony—with humility—Chandrasekhar later described Eddington as “the most distinguished astrophysicist of his time,” a testament to science’s capacity for self-correction and grace, even when ideas clash. The warning? Brilliance is no protection against self-deception. As Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

 

*Dirac, Peierls, and Pryce (1942) write: “Eddington raises an objection against the customary use of the Lorentz transformation in quantum mechanics, as for instance when applied to the theory of the hydrogen atom or the behaviour of a degenerate gas. This objection seems to us to be mainly based on a misunderstanding......”

 

Fun facts: Eddington, like Einstein, had a passion for cycling. In fact, the Eddington Number—named in his honor—is a metric used by cyclists to track their endurance accomplishments. The number E represents the largest value such that a cyclist has ridden at least E miles (or kilometers) on E different days. For example, an Eddington Number of 50 means the cyclist has completed 50 rides of at least 50 miles each on 50 separate days. Beyond its intellectual appeal, cycling provides significant physical benefits. It is a low-impact exercise that strengthens the muscles around the knee, improves joint mobility, and can alleviate knee pain without placing undue stress on the joints. However, individuals with conditions such as tendonitis, bursitis, or cartilage damage should approach cycling with caution, as improper form or intensity may aggravate existing issues.

 

References:

Dirac, P. A., Peierls, R., & Pryce, M. H. L. (1942). On Lorentz invariance in the quantum theory. In Mathematical Proceedings of the Cambridge Philosophical Society (Vol. 38, No. 2, pp. 193-200). Cambridge University Press.

Eddington, A. S. (1926/1979). The internal constitution of the stars. In A Source Book in Astronomy and Astrophysics, 1900–1975 (pp. 281-290). Harvard University Press.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Johnson, K., Hewett, S., Holt, S., & Miller, J. (2000). Advanced Physics for You. Nelson Thornes.

Kippenhahn, R., Weigert, A., & Weiss, A. (2012). Stellar Structure and Evolution (2nd ed.). Springer.

Wilkinson, J. (2012). New Eyes on the Sun: A Guide to Satellite Images and Amateur Observation (p. 98). Springer.