Thursday, May 22, 2025

Section 39–1 Properties of matter

Idealizations / Approximations / Limitations

 

Though titled Properties of Matter, this section might be more fittingly named Introduction to Thermodynamics, as Feynman briefly discusses the theory’s foundational idealizations, mathematical approximations, and intrinsic limitations. To appreciate the power and elegance of thermodynamics, one might recall Einstein’s often-quoted words:

 

“A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content which I am convinced will never be overthrown, within the framework of applicability of its basic concepts.”

 

From a modern standpoint, Einstein’s admiration—though justified for classical systems—overlooks an inherent limitation. Here lies classical thermodynamics’ subtle paradox: its strength is its wide generality, yet that same generality narrows its power. While it offers a small number of universal principles applicable to diverse physical systems, it cannot capture their richer complexities—thermal fluctuations, quantized energy exchange, or nonequilibrium dynamics. Nevertheless, Feynman’s discussion serves as a valuable entry point into thermodynamics and prompts questions about how the classical theory must adapt to remain relevant in contemporary physics.

 

1. Idealizations: The Foundations of Thermodynamics

“It is the first part of the analysis of the properties of matter from the physical point of view, in which, recognizing that matter is made out of a great many atoms, or elementary parts, which interact electrically and obey the laws of mechanics, we try to understand why various aggregates of atoms behave the way they do (Feynman et al., 1963, p. 39-1).”

 

According to Feynman, analyzing the properties of matter from a physical point of view requires at least three key idealizations. They form the foundation of the kinetic theory of gases, which bridges microscopic particle dynamics with macroscopic observables. The key idealizations are:

1. Atomic Model of Matter – Gas molecules are modeled as a large number of randomly moving particles, enabling a statistical treatment of their behavior.

2. Newton’s Laws of Motion – They govern particle motion and momentum-conserving collisions, linking microscopic dynamics to macroscopic pressure via momentum transfer to container walls.

3. Electrostatic Interactions – Intermolecular forces are assumed to be predominantly Coulombic, while gravitational and magnetic effects are negligible. In certain scenarios, even the Coulomb forces may be neglected, simplifying the derivation of the energy (or velocity) distribution.

These idealizations allow for a simplified but insightful framework, especially when applied to macroscopically homogeneous, isotropic, and uncharged systems (Callen, 1985).
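
To see idealizations (1) and (2) at work, here is a minimal kinetic-theory sketch in Python (the particle number, box size, and time step are illustrative assumptions, not values from the text): non-interacting point particles bounce elastically inside a cubic box, and the momentum they deliver to one wall, averaged over time, is compared with the ideal-gas prediction P = N kB T / V.

import numpy as np

rng = np.random.default_rng(0)
kB, T, m = 1.380649e-23, 300.0, 4.652e-26    # J/K; K; kg (an N2-like molecule)
N, L = 50_000, 1.0e-6                        # particle count and box side (m), both illustrative
x = rng.uniform(0.0, L, N)                   # positions along the x-axis
vx = rng.normal(0.0, np.sqrt(kB * T / m), N) # Maxwell-Boltzmann x-velocities
dt, steps, impulse = 1.0e-12, 2000, 0.0
for _ in range(steps):
    x += vx * dt
    right = x > L                            # elastic bounce off the right wall
    impulse += (2.0 * m * np.abs(vx[right])).sum()
    x[right] = 2.0 * L - x[right]
    vx[right] *= -1.0
    left = x < 0.0                           # elastic bounce off the left wall
    x[left] = -x[left]
    vx[left] *= -1.0
P_measured = impulse / (dt * steps * L * L)  # average force on the wall / wall area
print(P_measured, N * kB * T / L**3)         # both come out near 2.1e2 Pa

Only the x-motion matters for the pressure on an x-wall, which is why sampling a single velocity component suffices here.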

 

“For instance, when we compress something, it heats; if we heat it, it expands. There is a relationship between these two facts which can be deduced independently of the machinery underneath. This subject is called thermodynamics (Feynman et al., 1963, p. 39-2).”

 

Feynman’s description of thermodynamics emphasizes the interplay among heat, work, and the physical properties of matter. More broadly, thermodynamics is the branch of physics that explores how energy—particularly in the forms of heat and work—relates to state variables such as temperature, pressure, and entropy, thereby determining the behavior of physical systems. The term thermodynamics—derived from the Greek therme (heat) and dynamis (power)—means “heat in motion” or “heat power.” This name can be somewhat misleading: classical thermodynamics primarily addresses equilibrium states, which are static or time-independent, rather than dynamic processes (Atkins & de Paula, 2010). Despite its wide-ranging applicability, thermodynamics is a phenomenological theory—its laws are grounded in empirical observation rather than derived from microscopic principles. Kinetic theory, by contrast, provides a statistical foundation for thermodynamic behavior and bridges the gap to statistical mechanics.

 

“We shall also find that the subject can be attacked from a nonatomic point of view, and that there are many interrelationships of the properties of substances… The deepest understanding of thermodynamics comes, of course, from understanding the actual machinery underneath, and that is what we shall do: we shall take the atomic viewpoint from the beginning and use it to understand the various properties of matter and the laws of thermodynamics (Feynman et al., 1963, p. 39-2).”

 

An automotive analogy may help clarify the distinctions among thermodynamics, kinetic theory, and statistical mechanics:

  • Thermodynamics is like reading a car’s speedometer—it offers a macroscopic description (nonatomic viewpoint) based on observable quantities, without reference to microscopic mechanisms.
  • Kinetic theory is akin to analyzing the engine’s revolutions per minute (RPM) to explain the car’s speed—it adopts a microscopic perspective (atomic viewpoint) to understand macroscopic behavior in terms of particle motion.
  • Statistical mechanics is like the general theory of engines—it provides a unifying framework that applies probabilistic principles to a wide range of systems, not just gases.

Note that Feynman’s terms ‘nonatomic point of view’ and ‘atomic viewpoint’ correspond to what are commonly known as the macroscopic and microscopic perspectives, respectively.

 

2. Approximations: From Microscopic Chaos to Macroscopic Order

“Anyone who wants to analyze the properties of matter in a real problem might want to start by writing down the fundamental equations and then try to solve them mathematically. Although there are people who try to use such an approach, these people are the failures in this field; the real successes come to those who start from a physical point of view, people who have a rough idea where they are going and then begin by making the right kind of approximations, knowing what is big and what is small in a given complicated situation (Feynman et al., 1963, p. 39-2).”

 

Thermodynamics describes macroscopic properties—such as temperature and entropy—without invoking the microscopic details of matter. Statistical mechanics, in contrast, provides a deeper foundation by using probability theory and statistical methods to connect the microscopic behavior of particles to these observable thermodynamic quantities. Since tracking the motion of every individual particle is practically impossible, statistical approaches serve as a bridge between microscopic disorder (e.g., particle velocities and collisions) and macroscopic regularity (e.g., temperature and pressure). For example, the ideal gas law models gases as collections of non-interacting point particles, while the van der Waals equation incorporates molecular size and intermolecular forces, offering a more realistic description of real gases. In this context, Feynman’s remark about “knowing what is big and what is small” can be interpreted not only in terms of physical size, but as an invitation to identify which variables significantly affect a system’s macroscopic behavior and which can be safely ignored.
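
As a small numerical illustration of “knowing what is big and what is small,” the sketch below compares the ideal-gas and van der Waals pressures for one mole of CO2 in one litre at 300 K (the a and b values are standard handbook constants for CO2; the state point is an arbitrary choice):

R = 8.314                      # gas constant, J/(mol K)
a, b = 0.3640, 4.267e-5        # van der Waals constants for CO2: Pa m^6/mol^2, m^3/mol
n, T, V = 1.0, 300.0, 1.0e-3   # amount (mol), temperature (K), volume (m^3, i.e., one litre)
P_ideal = n * R * T / V
P_vdw = n * R * T / (V - n * b) - a * n**2 / V**2  # finite size + mutual attraction
print(P_ideal, P_vdw)          # ≈ 2.49e6 Pa vs ≈ 2.24e6 Pa: roughly a 10% correction

At this density the intermolecular terms are small but not negligible; at lower densities the same terms shrink further and the ideal-gas idealization becomes safe.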

 

“As an interesting example, we all know that equal volumes of gases, at the same pressure and temperature, contain the same number of molecules. The law of multiple proportions, that when two gases combine in a chemical reaction the volumes needed always stand in simple integral proportions, was understood ultimately by Avogadro to mean that equal volumes have equal numbers of atoms. Now why do they have equal numbers of atoms? (Feynman et al., 1963, p. 39-2).”

 

In 1808, Joseph Gay-Lussac observed that gases react in simple whole-number volume ratios—e.g., 2 volumes of hydrogen combine with 1 volume of oxygen to form 2 volumes of water vapor (2H₂ + O₂ → 2H₂O). However, John Dalton (1808) rejected Gay-Lussac’s findings because he incorrectly assumed water had the formula HO. The turning point came in 1811, when Amedeo Avogadro proposed two revolutionary ideas:

(1) equal gas volumes, under the same temperature and pressure, contain equal numbers of molecules;

(2) many gases, including hydrogen and oxygen, exist as diatomic molecules.

Avogadro’s insight resolved the discrepancy in Dalton’s model by correctly identifying the composition of water and explaining why volume ratios align with molecular stoichiometry. Yet his ideas were largely ignored until 1860, when Stanislao Cannizzaro revived them at the Karlsruhe Congress, enabling chemists to determine atomic masses with consistency.

 

“Now why do they have equal numbers of atoms? Can we deduce from Newton’s laws that the number of atoms should be equal? (Feynman et al., 1963, p. 39-2).”

 

Newton’s laws of motion alone cannot explain Avogadro’s principle, because they describe how individual particles move, not how vast numbers of particles behave collectively. Avogadro’s insight depended on recognizing gases as composed of discrete molecules with definite stoichiometries—a chemical idea beyond the scope of Newton’s laws. While classical mechanics can explain how particle collisions generate pressure, connecting equal volumes to equal numbers of molecules requires statistical averaging over enormous numbers of particles. The ideal gas law (PV = nRT), which formalizes Avogadro’s principle*, emerges only when Newton’s laws are combined with molecular assumptions and statistical reasoning. Thus, Feynman’s question underscores this gap: classical physics alone cannot explain macroscopic gas behavior without invoking probability and the atomic nature of matter.

 

* Avogadro’s principle (n ∝ V at fixed P and T) is embedded in the ideal gas law PV = nRT, and this requires assuming gases are composed of discrete molecules (a non-Newtonian idea).
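
A short sketch makes the role of statistical averaging explicit (assumed inputs: the standard number density of an ideal gas at 0 °C and 1 atm, and textbook molecular masses): sampling Maxwell-Boltzmann velocities for two very different gases at the same temperature and number density yields the same pressure, P = ρ m ⟨vx²⟩ = ρ kB T, independent of molecular mass, which is Avogadro’s principle read in reverse.

import numpy as np

rng = np.random.default_rng(0)
kB, T = 1.380649e-23, 273.15               # J/K; K
rho = 2.687e25                             # number density (m^-3), the Loschmidt value
for name, m in [("He", 6.646e-27), ("N2", 4.652e-26)]:  # molecular masses in kg
    vx = rng.normal(0.0, np.sqrt(kB * T / m), 1_000_000)
    print(name, rho * m * np.mean(vx**2))  # kinetic-theory pressure: rho*m*<vx^2>
print("rho*kB*T =", rho * kB * T)          # ≈ 1.013e5 Pa (1 atm) for any gas

The lighter helium atoms move faster in just the right proportion to deliver the same average momentum flux as the heavier nitrogen molecules.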

 

3. Limitations: Where Classical Thermodynamics Fails

“… from a physical standpoint, the actual behavior of the atoms is not according to classical mechanics, but according to quantum mechanics, and a correct understanding of the subject cannot be attained until we understand quantum mechanics (Feynman et al., 1963, p. 39-1).”

 

Feynman rightly remarks that classical mechanics is inadequate for understanding atomic behavior, which fundamentally requires quantum mechanics. However, he could have been more precise about the limitations of classical thermodynamics, particularly its failure to accurately describe systems under certain conditions. While classical thermodynamics is a widely applicable theory, it breaks down when applied to quantum systems such as photons, ultracold atoms, or few-particle systems. For example, quantum systems are acutely sensitive to environmental interactions, which can induce decoherence and destroy their coherent quantum states. Thus, quantum thermodynamics has emerged as a framework that redefines heat, work, and temperature in contexts where fluctuations, discreteness, and quantum correlations are unavoidable. It extends thermodynamic principles into the quantum domain, where basic assumptions underpinning the classical theory no longer apply.

 

“Here, unlike the case of billiard balls and automobiles, the difference between the classical mechanical laws and the quantum-mechanical laws is very important and very significant, so that many things that we will deduce by classical physics will be fundamentally incorrect. Therefore there will be certain things to be partially unlearned; however, we shall indicate in every case when a result is incorrect, so that we will know just where the ‘edges’ are (Feynman et al., 1963, p. 39-1).”

 

In his textbook Statistical Mechanics, Feynman (2018) writes: “If a system is very weakly coupled to a heat bath at a given 'temperature,' if the coupling is indefinite or not known precisely, if the coupling has been on for a long time, and if all the 'fast' things have happened and all the 'slow' things not, the system is said to be in thermal equilibrium.” This emphasizes that thermal equilibrium is not just about fast processes such as molecules settling into a Maxwell-Boltzmann distribution. It also requires that the system has interacted long enough with its environment to reach a stable state, while slower processes (e.g., container erosion) remain negligible. Thermodynamics holds under such conditions, but its accuracy depends on system size. In large systems (say, well over 1,000 particles), relative fluctuations average out, making macroscopic quantities well-defined. In small systems (under 100 particles), fluctuations dominate, and corrections from statistical mechanics are needed. While thermodynamics may apply at the nanoscale, its assumptions must be used with care, as the boundary between large and small systems is inherently fuzzy.
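
The size-dependence of fluctuations is easy to demonstrate numerically. In the toy model below (assuming, purely for convenience, independent exponentially distributed particle energies with mean kB T = 1), the relative fluctuation of the total energy shrinks as 1/√N, which is why macroscopic quantities look sharp for large N and noisy for small N:

import numpy as np

rng = np.random.default_rng(0)
for N in [10, 100, 1_000, 10_000]:
    # 2000 independent "systems", each the sum of N particle energies
    E = rng.exponential(1.0, size=(2000, N)).sum(axis=1)
    print(N, E.std() / E.mean(), 1.0 / np.sqrt(N))  # relative fluctuation vs 1/sqrt(N)

For N = 10 the total energy fluctuates by about 30 percent; for N = 10,000 by about 1 percent, in line with the fuzzy large/small boundary described above.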


Review Questions:

1. What idealizations underpin classical thermodynamics, and how do they simplify reality?

2. How does statistical mechanics employ approximations, and what justifies them?

3. When does classical thermodynamics fail, and how do modern frameworks address its limitations?

 

The Moral of the Lesson (in Feynman’s spirit):

What makes a theory great? Simple principles, surprising connections, and wide applicability. Thermodynamics has all three—it’s the one theory I’d bet my life on.

But here’s the twist: a theory that powerful comes with its own limits. Thermodynamics is like a brilliant, no-nonsense old professor—it delivers rock-solid answers, but only to the big, timeless questions. Ask it about the messy details, the fluctuations, the microscopic chaos, and it just shrugs and says: "Not my department!"

 

References:

Atkins, P., & de Paula, J. (2010). Physical chemistry (9th ed.). Oxford University Press.

Avogadro, A. (1811). Essay on a Manner of Determining the Relative Masses of the Elementary Molecules of Bodies, and the Proportions in Which They Enter Into These Compounds. Journal de Physique.

Callen, H. B. (1985). Thermodynamics and an introduction to thermostatistics (2nd ed.). Wiley.

Cannizzaro, S. (1860). Sunto di un Corso di Filosofia Chimica. Paper presented at the Karlsruhe Congress.

Dalton, J. (1808). A New System of Chemical Philosophy. Manchester: Bickerstaff.

Einstein, A. (1970). Autobiographical notes. In P. A. Schilpp (Ed.), Albert Einstein: Philosopher-scientist (pp. 1–95). Open Court. (Original work published 1949).

Feynman, R. P. (2018). Statistical mechanics: A set of lectures. CRC Press.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman lectures on physics, Vol. I: Mainly mechanics, radiation, and heat. Addison-Wesley.

Gay-Lussac, J. L. (1808). Mémoire sur la combinaison des substances gazeuses entre elles. Annales de Chimie.

Tuesday, May 6, 2025

Section 38–6 Philosophical implications

Conscious observer / Unmeasurable concept / Classical indeterminism


In this section, Feynman addresses several philosophical issues in quantum mechanics, including the role of the conscious observer, unmeasurable concepts, and classical indeterminism. This indicates that he did not adhere to the “shut up and calculate” mindset—a phrase popularized by David Mermin and sometimes mistakenly attributed to Feynman. While the slogan reflects a pragmatic attitude of many physicists toward philosophical debates, it overlooks Feynman’s willingness to engage with foundational questions, even as he dismissed metaphysical speculation.

       Feynman’s colleague Murray Gell-Mann was more openly contemptuous of philosophy. As described in George Johnson’s (2000) biography Strange Beauty, Gell-Mann considered philosophy a waste of time—for instance, debating the “reality” of quarks. In one (possibly apocryphal) anecdote, he would readily produce a “doctor’s note” when asked about philosophical issues, saying, “I’m sorry, I cannot talk about philosophy. My doctor said it’s bad for my blood pressure.” Ironically, Gell-Mann playfully named his quark classification scheme the “Eightfold Way,” referencing Buddhist philosophy.

       Feynman’s attitude toward philosophy was arguably ambivalent. He is often quoted as saying, “Philosophy of science is about as useful to scientists as ornithology is to birds”—implying that scientists don’t need it to do good science. Yet he included a “Philosophy” section in his paper on gauge theories, though he likely meant physical significance rather than philosophical inquiry. In a 1962 letter to his wife Gweneth from a Warsaw conference, Feynman wrote, “…it is not good for my blood pressure...,” expressing frustration with unproductive research on gravitational theory—not philosophy per se.


1. Conscious observer

“The observer was sometimes important in prequantum physics, but only in a rather trivial sense. The problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there (Feynman et al., 1963, p. 38-8).”

 

Feynman’s view assumes a mind-independent reality: physical processes—such as the generation of sound waves—occur regardless of conscious observers. In his lectures on gravitation for postgraduates, Feynman (1995) comments, “…are you the observer? Then there is no reality to the world after you are dead? I know a number of otherwise respectable physicists who have bought life insurance. By what philosophy will the universe without man be understood? (p. 14).” This passage lightly mocks the idea that consciousness is necessary for reality to exist. While thinkers such as von Neumann, Wigner, and Penrose have proposed consciousness-based interpretations of quantum mechanics, their ideas remain outside mainstream physics. Feynman’s mentor, John Wheeler, pushed the discussion further by suggesting an active role for observers—though not necessarily linking it directly to human consciousness.

 

In the same Lectures on Gravitation, Feynman (1995) examines the distinction between external and internal observers in quantum mechanics:

“… what may properly be described by an amplitude to an external observer, is not necessarily well described by a similar amplitude when the observer is part of the amplitude. Thus the external observer of the usual quantum mechanics is in a peculiar position. In order to find out whether the cat is alive or dead, he makes a little hole in the box and looks; it is only after he has made his measurement that the system is in a well-defined final state; but clearly, from the point of view of the internal observer, the results of this measurement by the external observer are determined by a probability, not an amplitude (p. 13).”

This passage explores central ideas from the Wigner’s friend thought experiment, which raises questions about observer-dependent reality. In such scenarios:

1. The external observer (Wigner) describes the entire system—including the friend and the cat—using a wavefunction that remains in superposition until a measurement is made.

2. The internal observer (the friend) experiences a definite outcome—either the cat is alive or it is dead—not a superposition.

While these two descriptions may appear contradictory, they illustrate the challenge of reconciling quantum mechanics across different observational perspectives. Rather than exposing a logical inconsistency in the theory, the contrast points to ongoing controversy over how to define objective reality in quantum mechanics. Potential resolutions depend on one’s philosophical position—whether one emphasizes consciousness, invokes decoherence, or treats the wavefunction as a mere mathematical tool.

 

To avoid the philosophical problems surrounding consciousness, some physicists prefer the term “agent” or “participant” instead of “observer.” In modern interpretations such as Quantum Bayesianism (QBism), an agent is an active participant who assigns probabilities based on their personal expectations and experiences. In this framework, quantum events (e.g., an atom’s decay) do not possess definite outcomes until the agent interacts with the system. That is, unmeasured properties are not merely unknown; they are fundamentally undefined. QBism redefines the observer not as a metaphysical entity or conscious mind, but as a decision-maker who updates their knowledge through measurement. The focus shifts from an objective wavefunction collapse to a subjective process of Bayesian probability updating. Notably, the framework accommodates even an AI robot: it can perform experiments and revise its assigned probabilities without raising questions about consciousness.
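
A minimal sketch of this updating process (the scenario and numbers are invented for illustration and are not a standard QBism calculation): an agent holds a prior degree of belief that an atom has decayed, then revises it by Bayes’ rule after an imperfect detector clicks.

def bayes_update(prior, p_click_if_decayed, p_click_if_not):
    """Return P(decayed | click) from the agent's prior and detector model."""
    numerator = prior * p_click_if_decayed
    denominator = numerator + (1.0 - prior) * p_click_if_not
    return numerator / denominator

prior = 0.5                                   # agent's initial degree of belief
posterior = bayes_update(prior, 0.90, 0.05)   # sensitive but slightly noisy detector
print(f"belief after click: {posterior:.3f}") # ≈ 0.947

Nothing in this calculation refers to consciousness: any agent, human or robotic, that assigns and updates probabilities can play the same role.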

 

2. Unmeasurable concept

“The situation in the sciences is this: A concept or an idea which cannot be measured or cannot be referred directly to experiment may or may not be useful. It need not exist in a theory. In other words, suppose we compare the classical theory of the world with the quantum theory of the world, and suppose that it is true experimentally that we can measure position and momentum only imprecisely (Feynman et al., 1963, p. 38-10).”

 

In Discussion with Einstein on Epistemological Problems in Atomic Physics, Bohr (1949) writes: “Isolated material particles are abstractions, their properties being definable and observable only through their interaction with other systems.” His statement aligns closely with empiricism and shares some tenets of logical positivism, but it does not strictly adhere to either philosophy. Specifically, logical positivists would argue that a statement is cognitively meaningful only if it is either analytically true (true by definition or logic, like mathematics) or empirically verifiable (testable through observation or experiment). For instance, Bohr rejected the notion of an “electron path” in quantum mechanics because it lacked direct experimental verification. By contrast, in his development of quantum electrodynamics (QED), Feynman revived the concept of “paths,” assigning each an abstract mathematical amplitude and summing over all possible paths a particle might take.

 

During an interview, Feynman recounts his encounter with Bohr’s objections to his idea: “Bohr got up and said: ‘Already in 1925, 1926, we knew that the classical idea of a trajectory or a path is not legitimate in quantum mechanics; one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ In other words, he was telling me about the uncertainty principle. It became clear to me that there was no communication between what I was trying to say and they were thinking. Bohr thought that I didn’t know the uncertainty principle … Bohr was concerned about the uncertainty principle and the proper use of quantum mechanics. To tell a guy that he doesn’t know quantum mechanics—well, it didn’t make me angry, it just made me realize that he [Bohr] didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up… (Mehra, 1994, p. 248).” Feynman’s approach to quantum mechanics was pragmatic, treating even unmeasurable concepts as useful computational tools for generating accurate predictions.

 

Feynman’s path integral approach to quantum mechanics is based on the principle of summing over all possible histories, treating a particle as if it explores every conceivable path between two points. These include not only classical trajectories but also mathematically useful—though physically implausible—constructs, such as those that move backward in time (often associated with antiparticles) or imaginary-time paths used in certain calculations. Although the sum over all paths yields experimentally verifiable probabilities, the individual paths themselves are not observable. A related example arises in Feynman diagrams, where internal lines represent virtual particles. These entities cannot be directly detected, yet they play a crucial role in calculating observable quantities and explaining interaction processes. Such examples highlight a key feature of theoretical physics: concepts that cannot be directly measured or empirically verified may still have significant explanatory and predictive value.
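
The flavor of “summing over histories” can be sketched numerically in imaginary time, where the oscillatory quantum sum becomes a well-behaved average over random paths (a toy free-particle example in units m = ħ = 1; the path and step counts are arbitrary choices): the distribution of endpoints of many discretized paths reproduces the analytic Euclidean kernel.

import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, tau = 200_000, 100, 1.0    # paths, time slices, total imaginary time
dt = tau / n_steps
# each discretized path accumulates independent Gaussian increments of variance dt
endpoints = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)).sum(axis=1)
x = 1.0
density = np.mean(np.abs(endpoints - x) < 0.05) / 0.10  # endpoint density near x
exact = np.exp(-x**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
print(density, exact)                        # both ≈ 0.242

No single path in this ensemble is observable; only the aggregate, the kernel, corresponds to anything measurable, which mirrors the status of individual paths in the real-time path integral.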

 

3. Classical indeterminism

“It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical ‘deterministic’ physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a ‘completely mechanistic’ universe. For already in classical mechanics there was indeterminability from a practical point of view (Feynman et al., 1963, p. 38-10).”

 

Feynman points out a subtle but profound insight about classical physics: although it is deterministic in principle, it can be indeterminate in practice. According to classical mechanics, perfect knowledge of a system's initial conditions would theoretically enable exact prediction of its future behavior. However, in real-world scenarios, even infinitesimal measurement uncertainties can result in exponentially diverging outcomes over time—a phenomenon known as the butterfly effect. This sensitivity to initial conditions means that while the present strictly determines the future in theory, an approximate present fails to reliably predict an approximate future. As Lorenz (1963) succinctly put it: “When the present determines the future, but the approximate present does not approximately determine the future.” This fundamental limitation, central to chaos theory, reveals how deterministic systems can nevertheless produce effectively unpredictable behavior.
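
A minimal numerical sketch of this sensitivity, using the logistic map x → 4x(1 − x) as a simple stand-in for Lorenz’s convection equations: two initial conditions differing by one part in a million are driven to order-one separation within a few dozen iterations.

x, y = 0.400000, 0.400001        # two "approximate presents" differing by 1e-6
for step in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))  # separation roughly doubles each step, then saturates

After about twenty iterations the two trajectories are effectively unrelated, even though each one is perfectly deterministic.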

 

Classical indeterminism refers to the practical unpredictability of systems governed by deterministic laws. While classical mechanics is deterministic in principle, several factors undermine our ability to predict outcomes with certainty in practice. At least four key reasons contribute to this indeterminism:

1. Sensitivity to Initial Conditions: Even infinitesimal uncertainties lead to exponentially diverging outcomes over time.

2. Measurement Precision Limits: Absolute precision is physically unattainable, and these microscopic uncertainties grow through interactions.

3. Computational Intractability: While the behavior of individual particles in macroscopic systems is theoretically governed by classical mechanics, solving equations for systems involving something on the order of Avogadro’s number of particles becomes computationally unfeasible.

4. Analytical Unsolvability: Even simple systems (like the three-body problem) defy closed-form solutions, requiring numerical approximations that accumulate errors (see the sketch below).

This shows that unpredictability isn’t unique to quantum mechanics—it emerges even in purely classical physics contexts.
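
Point 4 can be made concrete with a deliberately crude integrator. In the sketch below (forward Euler applied to a frictionless oscillator with m = ω = 1, an illustrative choice), the energy that the exact dynamics conserves drifts steadily upward as truncation errors accumulate:

x, v, dt = 1.0, 0.0, 0.01                   # initial position, velocity, time step
for step in range(1, 100_001):
    x, v = x + v * dt, v - x * dt           # one forward-Euler step for x'' = -x
    if step % 25_000 == 0:
        print(step, 0.5 * (x * x + v * v))  # exact energy would stay at 0.5

Each step multiplies the energy by (1 + dt²), so after 100,000 steps the “conserved” energy has grown by a factor of roughly e¹⁰, a purely numerical artifact.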

 

Perhaps Feynman could have acknowledged the pioneering insights of Poincaré into classical indeterminism. Decades before chaos theory's formalization, Poincaré's study of the three-body problem revealed a profound truth in celestial mechanics: deterministic systems can exhibit inherent unpredictability. As Poincaré (1908) presciently noted in Science and Method: “…it may happen that small differences in the initial conditions produce very great differences in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon (p. 68).” In articulating this sensitivity to initial conditions, Poincaré anticipated what would much later be recognized as a hallmark of chaotic systems. Poincaré’s work, long underappreciated, thus serves as a conceptual bridge between classical determinism and the modern understanding of dynamical chaos.

 

Review Questions:

1. Does quantum mechanics require the concept of an observer?

2. Should physicists use concepts that cannot be directly connected to experiment?

3. How would you explain classical indeterminism?

 

The Moral of the Lesson: Although Feynman is often associated with the pragmatic “shut up and calculate” mindset, this section reveals that he engaged with philosophical issues of quantum mechanics. He was not dismissive of philosophy itself, but of poor or misguided philosophy. Feynman sought to clarify the implications of quantum theory while rejecting pseudoscientific speculation and conceptual overreach—a lesson in how to think critically about profound ideas.

 

References:

Bohr, N. (1949). Discussion with Einstein on epistemological problems in atomic physics. In Niels Bohr Collected Works (Vol. 7, pp. 339-381). Amsterdam, Netherlands: Elsevier.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Feynman, R. P., Morinigo, F. B., & Wagner, W. G. (1995). Feynman Lectures on gravitation (B. Hatfield, ed.). Reading, MA: Addison-Wesley.

Johnson, G. (2000). Strange beauty: Murray Gell-Mann and the revolution in twentieth-century physics. New York, NY: Vintage.

Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130–141.

Mehra, J. (1994). The Beat of a Different Drum: The life and science of Richard Feynman. Oxford: Oxford University Press.

Poincaré, H. (1908). Science and Method (original French: La science et la méthode). London, UK: Thomas Nelson & Sons.