Thursday, April 10, 2025

Section 38–5 Energy levels

(Ritz’s Combination Principle / Energy states / Energy quantization)

 

In this section, Feynman discusses Ritz’s combination principle of spectroscopy (spectral lines), energy states, and energy quantization, going beyond the simple topic of energy levels. The section could be titled “Ritz’s Combination Principle” because it centers on how the principle can be explained using energy states (or energy levels) and the quantization of energy.

 

1. Ritz’s Combination Principle

“… if we find two spectral lines, we shall expect to find another line at the sum of the frequencies (or the difference in the frequencies), and that all the lines can be understood by finding a series of levels such that every line corresponds to the difference in energy of some pair of levels...... it is called the Ritz combination principle (Feynman et al., 1963, p. 38-7).”

 

According to Sommerfeld, Ritz states: “[b]y additive or subtractive combination, whether of the series formulae themselves, or of the constants that occur in them, formulae are formed that allow us [to] calculate certain newly discovered lines from those known earlier (Sommerfeld, 1923, p. 205).” To acknowledge Ritz’s generalization of Rydberg’s empirical findings, the principle is often referred to as the Rydberg–Ritz combination principle. For example, if two spectral lines are observed with frequencies ν12 and ν23, one may expect a third line at ν13 = ν12 + ν23 (by addition). Conversely, if two spectral lines are known with frequencies ν12 and ν13, a third line may appear at the frequency ν23 = ν13 − ν12 (by subtraction). This principle was helpful in organizing spectral data, but it lacked a theoretical foundation until the development of quantum theory.
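The additive relation can be checked numerically with hydrogen lines, whose frequencies follow the Rydberg formula. A minimal sketch (the Rydberg frequency value is approximate, and the line labels are illustrative):

```python
# Check of the Ritz combination principle using three hydrogen lines:
# Lyman-alpha (2 -> 1), Balmer-alpha (3 -> 2), Lyman-beta (3 -> 1).

R_H = 3.28984e15  # Rydberg frequency for hydrogen, Hz (approximate)

def transition_frequency(n_upper, n_lower):
    """Frequency (Hz) of the hydrogen line n_upper -> n_lower."""
    return R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)

nu_12 = transition_frequency(2, 1)  # Lyman-alpha
nu_23 = transition_frequency(3, 2)  # Balmer-alpha
nu_13 = transition_frequency(3, 1)  # Lyman-beta

# The combination principle predicts nu_13 = nu_12 + nu_23.
assert abs(nu_13 - (nu_12 + nu_23)) < 1e-6 * nu_13
```

For hydrogen the relation holds exactly, since each frequency is literally a difference of terms R/n².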


“This remarkable coincidence in spectral frequencies was noted before quantum mechanics was discovered, and it is called the Ritz combination principle (Feynman et al., 1963, p. 38-8).”


It may be somewhat misleading to describe the spectral frequencies as a remarkable coincidence, because the observation reflects a systematic pattern encapsulated by Ritz’s combination principle, although the principle itself is not completely accurate. Feynman might have offered greater clarity by specifying the conditions under which the principle holds. As Ritz noted in 1908, “The new principle of combination also finds application to other spectra, particularly to helium and the earth alkalies.” However, the principle is most accurate for hydrogen-like atoms, where electron-electron interactions are negligible. In multi-electron atoms, these interactions shift the energy levels, leading to deviations from the simple additive or subtractive relationship between spectral frequencies. Thus, the predictive power (or “remarkable” coincidence) of the combination principle depends on the nature of the atomic system and the complexity of the electron interactions.

 

Note: Feynman’s reference text Introduction to Modern Physics (Richtmyer et al., 1956) outlines four features of Ritz’s combination principle:

1. Spectral Lines as Differences of Terms: The wave number of each line is conveniently represented as the difference between two numbers. These numbers have come to be called terms.

2. Ordered Term Sequences: The terms group themselves naturally into ordered sequences, the terms of each sequence converging toward zero.

3. Combinability of Terms: The terms can be combined in various ways to give the wave numbers of spectral lines.

4. Spectral Series and Convergence: A series of lines, all having similar character, results from the combination of all terms of one sequence in succession with a fixed term of another sequence. Series formed in this manner have wave numbers which, when arranged in order of increasing magnitude, converge to an upper limit.

In essence, Ritz’s combination principle means that the wavenumber (inverse wavelength, ν) of any spectral line can be expressed as the difference between two spectral terms: ν = T1 − T2 = R/(2 + S)^2 − R/(m + P)^2, where m = 3, 4, 5, … and S and P are constants characteristic of the series.

(Each term represents a quantized energy level divided by hc.)
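The term picture can be sketched numerically. Assuming zero quantum defect (appropriate for hydrogen), the Balmer-series wavelengths follow from differences of terms T(n) = R/n²:

```python
# Balmer-series wavelengths as differences of spectral terms T(n) = R/n^2,
# following the term picture behind the combination principle.

R = 1.0967758e7  # Rydberg constant for hydrogen, 1/m (approximate)

def term(n):
    """Spectral term T(n) = R/n^2, in 1/m."""
    return R / n**2

def balmer_wavelength_nm(m):
    """Wavelength (nm) of the Balmer line m -> 2, for m = 3, 4, 5, ..."""
    wavenumber = term(2) - term(m)  # in 1/m
    return 1e9 / wavenumber

for m in range(3, 7):
    print(f"m = {m}: {balmer_wavelength_nm(m):.1f} nm")
# H-alpha comes out near 656 nm and H-beta near 486 nm,
# matching the observed Balmer lines.
```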


2. Energy states

“Let us observe how it comes about from the point of view of amplitudes that the atom has definite energy states (Feynman et al., 1963, p. 38-8).”


It is worthwhile to clarify the distinction between energy level and energy state, as the two terms, while related, have distinct meanings. This distinction is also noted in Feynman’s reference text, Introduction to Modern Physics. For instance, Richtmyer et al. (1956) write “[i]t should be emphasized that the energy levels represented in Fig. 199 do not necessarily represent the energy states of any single nucleon (p. 515).” In short, an energy level refers to a quantized energy value (an eigenvalue), which can be experimentally measured, typically through spectroscopy. On the other hand, an energy state refers to the quantum state (eigenstate) associated with a given energy level, defined by a specific set of quantum numbers. However, the term energy state is nowadays less commonly used in quantum mechanics than energy level.


The observation of spectral lines arises from the interaction between a quantum system (such as an atom) and discrete units of light energy, or light quanta. Einstein (1905) introduced the concept of light quanta to explain the photoelectric effect, though only from a heuristic viewpoint. The term photon was later coined by American chemist Gilbert N. Lewis in a 1926 letter, originally to describe a unit of radiant energy; it was subsequently adopted to refer specifically to Einstein’s light quanta. In 1913, Bohr proposed that when an electron transitions between two stationary states, it emits a quantum of light whose energy is equal to the energy difference between the two states. This relationship is expressed by the Planck-Einstein-Bohr radiation condition ℏωnm = En − Em, in which ωnm is the angular frequency of the emitted or absorbed radiation, whereas En and Em are the energies of the initial and final states respectively.
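The radiation condition can be illustrated with hydrogen, using the familiar level energies En = −13.6 eV/n² (a standard textbook value, not derived in this section):

```python
# Sketch of the Bohr frequency condition: the emitted photon's frequency
# equals the level difference divided by h. Levels: E_n = -13.6 eV / n^2.

H_PLANCK = 6.62607e-34  # Planck constant, J*s
EV = 1.602177e-19       # joules per eV

def energy_eV(n):
    """Hydrogen level energy in eV (textbook approximation)."""
    return -13.6 / n**2

def emission_frequency(n_initial, n_final):
    """Frequency (Hz) of the photon emitted in the n_initial -> n_final jump."""
    delta_E = (energy_eV(n_initial) - energy_eV(n_final)) * EV  # J
    return delta_E / H_PLANCK

f = emission_frequency(2, 1)  # Lyman-alpha
print(f"{f:.3e} Hz")  # about 2.47e15 Hz, i.e. a wavelength near 121.6 nm
```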

 

Note: In a December 18, 1926 letter to Nature, Lewis writes: “… I therefore take the liberty of proposing for this hypothetical new atom, which is not light but plays an essential part in every process of radiation, the name photon.”


The concept of energy levels is a theoretical construct whose existence is inferred from the observation of spectral lines. In 1928, Walter Grotrian introduced the Grotrian diagram (or term diagram) in atomic spectroscopy, a visual representation of allowed electronic transitions between energy levels. Similarly, energy level diagrams can be used to illustrate Ritz’s principle, which showed that spectral frequencies correspond to differences between two quantities called terms, later recognized as energy levels. Historically, the spectral lines were first explained by Bohr’s (1913) atomic model, which introduced the notion of a quantum jump between quantized energy levels. Although the concept of energy levels is commonly attributed to Bohr, he originally used the term stationary state to describe what we now understand as an energy state.

 

3. Energy quantization

“When the electron is free, i.e., when its energy is positive, it can have any energy; it can be moving at any speed. But bound energies are not arbitrary (Feynman et al., 1963, p. 38-7).”


According to Kuhn (1997), Planck’s early papers from around 1900 did not explicitly state that the energy of a single oscillator must be restricted to discrete values in accordance with E = nhf, where n is an integer. Instead, Planck treated energy quanta as a mathematical hypothesis (Kragh, 2000). His cautious stance was understandable, given that energy is not inherently quantized in all physical contexts. For instance, in unbound systems, such as free electrons, energy can vary continuously (see below). In such cases, electrons are not confined within a potential well or subject to boundary conditions that would cause energy quantization. The term free electrons can be used to describe electrons that have been ionized or occupy high-energy states in which they are no longer bound to atoms or molecules.



“… we are all familiar with the fact that confined waves have definite frequencies. For instance, if sound is confined to an organ pipe, ……. then there is more than one way that the sound can vibrate, but for each such way there is a definite frequency (Feynman et al., 1963, p. 38-8).”


In quantum mechanics, bound particles are described by wavefunctions that must satisfy specific boundary conditions, such as those imposed by the Coulomb potential in a hydrogen atom. These boundary conditions lead to quantized vibrational modes, which correspond to the discrete energy levels. Importantly, this quantization does not arise from any intrinsic discreteness of energy itself, but rather from the continuous wave-like behavior of particles, governed by the Schrödinger equation, combined with the constraints imposed by the system. An analogous situation occurs with standing sound waves in an organ pipe: only certain frequencies are permitted, determined by the pipe’s length and boundary (pressure) conditions. Similarly, in quantum systems, only specific energy states (or levels) are allowed.
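As a rough analogue of the organ-pipe modes, the particle-in-a-box formula En = n²h²/(8mL²), a standard result not derived in this section, shows how boundary conditions alone produce discrete levels:

```python
# Standing-wave quantization for an electron in a 1-D box of width L:
# E_n = n^2 h^2 / (8 m L^2). The discrete modes mirror the organ-pipe
# picture of definite allowed frequencies.

H = 6.62607e-34    # Planck constant, J*s
M_E = 9.10938e-31  # electron mass, kg
EV = 1.602177e-19  # joules per eV

def box_energy_eV(n, L):
    """Energy (eV) of mode n for an electron in a box of width L (meters)."""
    return (n**2 * H**2) / (8 * M_E * L**2) / EV

L = 1e-10  # box of about one angstrom, roughly atomic size
for n in (1, 2, 3):
    print(f"n = {n}: {box_energy_eV(n, L):.1f} eV")
# The levels scale as n^2: discrete, like the harmonics of a pipe.
```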


The moral of the lesson: 1. Ritz’s combination principle, which describes spectral lines as sums or differences of spectral terms, can be understood in terms of discrete energy levels. Although Planck is often seen as the pioneer of quantized energy, he was famously reluctant to fully embrace the physical reality of energy quantization (Kragh, 2000). His caution was not unfounded: in systems involving free electrons, energy can indeed take on a continuous range of values, unconstrained by boundary conditions.

 

2. Bohr dismissed Rydberg’s formula and Balmer’s lines as being like “the lovely patterns on the wings of butterflies; their beauty can be admired, but they are not supposed to reveal any fundamental biological laws (Cropper, 1970, p. 48).” Bohr’s initial remark was not necessarily due to carelessness, but rather a skepticism typical of early quantum theorists faced with mysterious empirical patterns. Yet it was Bohr himself who later revealed the physical meaning behind these patterns with his atomic model, work that earned him the Nobel Prize.

 

Review questions:

1. How would you state Ritz’s combination principle and its limitations?

2. Would you prefer the term “energy state,” “stationary state,” or “energy level” and in which context?

3. How would you explain why energy becomes quantized in bound systems but remains continuous for free particles?

 

References:

Cropper, W. H. (1970). The quantum physicists and an introduction to their physics. Oxford University Press.

Einstein, A. (1905). On a heuristic viewpoint concerning the emission and transformation of light. Annalen der Physik, 17(6), 132-148.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Kragh, H. (2000). Max Planck: the reluctant revolutionary. Physics World, 13(12), 31.

Kuhn, T. S. (1997). The structure of scientific revolutions. Chicago: University of Chicago Press.

Lewis, G. N. (1926). The conservation of photons. Nature, 118(2981), 874-875.

Richtmyer, F. K., Kennard, E. H., Lauritsen, T., & Stitch, M. L. (1956). Introduction to modern physics (5th ed.). New York: McGraw-Hill.

Ritz, W. (1908). On a new law of series spectra. Astrophysical Journal, 28(10), 237–243.

Sommerfeld, A. (1923) Atomic Structure and Spectral Lines (H. L. Brose, trans.). London: Methuen.

Thursday, March 20, 2025

Section 38–4 The size of an atom

Bohr radius / Rydberg energy / Stability of matter

 

In this section, Feynman discusses the Bohr radius, Rydberg energy, and stability of matter, extending beyond the simple topic of “the size of an atom.” A more precise title could be “Three Applications of the Uncertainty Principle,” since these concepts are explained using the uncertainty principle. However, the section also concerns the stability of the hydrogen atom and the stability of matter.

 

1. Bohr radius

“This particular distance is called the Bohr radius, and we have thus learned that atomic dimensions are of the order of angstroms, which is right: This is pretty good—in fact, it is amazing, since until now we have had no basis for understanding the size of atoms! (Feynman et al., 1963, p. 38-6).”

 

Feynman estimated the order of magnitude of the Bohr radius using the uncertainty principle and Planck’s constant (h) instead of the reduced Planck constant (ℏ). While this method yields the approximate atomic scale, it does not give the exact numerical factor. In contrast, Bohr introduced the quantization of angular momentum, postulating that an electron in a hydrogen atom follows discrete orbits satisfying mvr = nℏ, where n is an integer and ℏ is the reduced Planck constant. Although this assumption successfully explained atomic spectra, it lacked a deeper theoretical justification. Moreover, the Bohr radius is not a directly measurable quantity; it represents the most probable distance between the electron and nucleus in the ground state of hydrogen.
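The exact numerical factor comes out of the standard closed form a0 = 4πε0ℏ²/(me e²), which can be evaluated directly from approximate CODATA-style constants:

```python
import math

# The Bohr radius from a0 = 4*pi*eps0*hbar^2 / (m_e * e^2),
# the length scale that falls out of the Schrodinger treatment.

HBAR = 1.054572e-34     # reduced Planck constant, J*s
M_E = 9.109384e-31      # electron mass, kg
E_CHARGE = 1.602177e-19 # elementary charge, C
EPS0 = 8.854188e-12     # vacuum permittivity, F/m

a0 = 4 * math.pi * EPS0 * HBAR**2 / (M_E * E_CHARGE**2)
print(f"a0 = {a0 * 1e10:.4f} angstrom")  # about 0.529 angstrom
```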

 

A more rigorous derivation arises from solving the Schrödinger equation for an electron in a Coulomb potential, leading to quantized energy levels and the Bohr radius as a fundamental length scale. Unlike Bohr’s model, which assumes quantization, this approach derives the Bohr radius naturally from the boundary conditions imposed on the electron’s wavefunction. While the uncertainty principle and Bohr’s quantization provide insights into atomic structure, Schrödinger’s equation offers a more consistent framework, revealing the Bohr radius as an intrinsic property of quantum wave behavior. More importantly, in quantum mechanics, the electron does not follow a definite trajectory; instead, its position is governed by a probability distribution described by its wavefunction. This understanding is closely linked to the stability of atoms, as electrons do not spiral inward but instead occupy discrete, quantized energy levels, preventing atomic collapse.

 

2. Rydberg energy

“However, we have cheated, we have used all the constants in such a way that it happens to come out the right number! This number, 13.6 electron volts, is called a Rydberg of energy; it is the ionization energy of hydrogen (Feynman et al., 1963, p. 38-6).”

 

In the audio recordings [at the end of this lecture (first try), 56 min: 05 sec], Feynman says something like this: “… This is just an order of magnitude. Actually, I’ve cheated you… I put the constant just where I want… at the right place, and this does come out as the mean radius of the hydrogen atom, and this does come out as the actual binding energy of hydrogen, but we have no right to believe that. Thank you.” In a sense, Feynman’s derivation of the Rydberg energy relied on a “working backward” approach, using the known value of the Bohr radius. Historically, however, the Bohr radius was derived from the Rydberg energy (or equivalently, the Rydberg constant), not the other way around. Therefore, while this application of the uncertainty principle provides a useful heuristic, it should not be taken as a formal derivation.

 

The Rydberg energy (the ionization energy of hydrogen, 13.6 eV) was not initially derived from theory but was instead determined empirically from atomic spectra. Key contributors to this discovery included Johannes Rydberg and earlier spectroscopists such as Balmer, Ångström, and Paschen. Rydberg established the “Rydberg constant” by analyzing spectral data, without fully understanding its deeper significance. Bohr later provided a theoretical explanation for the Rydberg formula using his semiclassical model, marking a crucial step in confirming the quantization of energy levels. However, the modern determination of the Rydberg constant relies on high-precision spectroscopy and least-squares data fitting, rather than a direct measurement from hydrogen spectra.
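For comparison with the empirical 13.6 eV, the Rydberg energy also follows from fundamental constants via the standard closed form ER = me e⁴ / (2(4πε0)²ℏ²), which is not Feynman's heuristic route but the textbook result:

```python
import math

# The Rydberg energy from constants:
# E_R = m_e * e^4 / (2 * (4*pi*eps0)^2 * hbar^2), reported in eV.

HBAR = 1.054572e-34  # reduced Planck constant, J*s
M_E = 9.109384e-31   # electron mass, kg
E = 1.602177e-19     # elementary charge, C
EPS0 = 8.854188e-12  # vacuum permittivity, F/m

E_R = M_E * E**4 / (2 * (4 * math.pi * EPS0)**2 * HBAR**2)  # joules
print(f"E_R = {E_R / E:.2f} eV")  # about 13.61 eV
```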

 

3. Stability of matter

“So we now understand why we do not fall through the floor. As we walk, our shoes with their masses of atoms push against the floor with its mass of atoms. In order to squash the atoms closer together, the electrons would be confined to a smaller space and, by the uncertainty principle, their momenta would have to be higher on the average, and that means high energy; the resistance to atomic compression is a quantum-mechanical effect and not a classical effect (Feynman et al., 1963, p. 38-6).”

 

Feynman’s explanation of why we do not fall through the floor could incorporate the Pauli exclusion principle and the Coulomb force. If an electron were confined to a smaller region near a nucleus, its position uncertainty (Δx) would decrease. By the uncertainty principle, this would necessitate an increase in momentum uncertainty (Δp), leading to higher kinetic energy. This increase in energy counterbalances the attractive Coulomb force, preventing the collapse of the atom. Additionally, Coulomb repulsion between the negatively charged electron clouds of adjacent atoms further resists compression. As electrons are forced closer together, their wavefunctions would overlap, while the total wavefunction must remain antisymmetric. The Pauli exclusion principle, which forbids two electrons from occupying the same quantum state, thus provides an additional mechanism that prevents matter from collapsing. This quantum mechanical effect, together with Coulomb repulsion, explains the stability of matter.
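The uncertainty-principle part of this argument can be made quantitative with the order-of-magnitude estimate Δp ≈ ℏ/Δx (an assumption, not an exact bound): halving the confinement region roughly quadruples the electron's kinetic energy.

```python
# Order-of-magnitude estimate of the "resistance to compression":
# taking dp ~ hbar/dx, the confinement kinetic energy dp^2/(2m)
# scales as 1/dx^2, so squeezing atoms costs energy quadratically.

HBAR = 1.054572e-34  # reduced Planck constant, J*s
M_E = 9.109384e-31   # electron mass, kg
EV = 1.602177e-19    # joules per eV

def confinement_energy_eV(dx):
    """Rough kinetic energy (eV) of an electron confined to a region dx (m)."""
    dp = HBAR / dx           # momentum uncertainty estimate
    return dp**2 / (2 * M_E) / EV

for dx in (1e-10, 0.5e-10, 0.25e-10):
    print(f"dx = {dx * 1e10:.2f} angstrom -> {confinement_energy_eV(dx):.1f} eV")
```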

 

Mathematical Proof of Stability

Feynman’s question, “Why do we not fall through the floor?”, is related to the second kind of stability, now commonly known as the stability of matter. This problem was first mathematically solved in 1967 by Freeman Dyson and Andrew Lenard, about five years after Feynman delivered this lecture. Their analysis showed that the stability of matter relies on the Pauli exclusion principle. Building on this work, Elliott Lieb and Walter Thirring refined Dyson and Lenard’s approach by introducing the Lieb-Thirring inequality, providing a more elegant and conceptually clear proof. Thus, the stability of matter, and why we do not fall through the floor, can be explained through a combination of the Pauli exclusion principle and Coulomb repulsion.

 

Note: In the preface of the book titled The Stability of Matter: From Atoms to Stars, Dyson writes: “Lenard and I found a proof of the stability of matter in 1967. Our proof was so complicated and so unilluminating that it stimulated Lieb and Thirring to find the first decent proof. (...) Why was our proof so bad and why was theirs so good? The reason is simple. Lenard and I began with mathematical tricks and hacked our way through a forest of inequalities without any physical understanding. Lieb and Thirring began with physical understanding and went on to find the appropriate mathematical language to make their understanding rigorous. Our proof was a dead end. Theirs was a gateway to the new world of ideas (Lieb, 2005, p. xi)”.


Review Questions:

1. Should the Bohr radius be derived using the Planck constant or the reduced Planck constant?

2. Should the Rydberg energy be derived using the uncertainty principle and Bohr radius?

3. How would you explain “why we do not fall through the floor” using the Pauli exclusion principle and/or the Coulomb force?

 

The moral of the lesson: While the Bohr radius, Rydberg energy, and stability of matter can be explained using the uncertainty relation by working backward, this is not a rigorous method for establishing the stability of the hydrogen atom or the stability of matter.

 

References:

1. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

2. Lieb, E. H. (2005). The stability of matter: from atoms to stars. Heidelberg, Berlin: Springer.


Tuesday, March 4, 2025

Section 38–3 Crystal diffraction

Bragg diffraction / Bragg condition / Bragg cutoff

 

In this section, Feynman discusses Bragg diffraction, the Bragg condition, and the Bragg cutoff. Interestingly, the section is titled “crystal diffraction,” but he explains the phenomenon as the reflection of particle waves from a crystal. However, the term Bragg diffraction is more appropriate, as it acknowledges the contributions of W. H. Bragg and his son W. L. Bragg to x-ray diffraction, a discovery for which they received the 1915 Nobel Prize in Physics.

 

1. Bragg diffraction

“Next let us consider the reflection of particle waves from a crystal. A crystal is a thick thing which has a whole lot of similar atoms—we will include some complications later—in a nice array. The question is how to set the array so that we get a strong reflected maximum in a given direction for a given beam of, say, light (x-rays), electrons, neutrons, or anything else. In order to obtain a strong reflection, the scattering from all of the atoms must be in phase. There cannot be equal numbers in phase and out of phase, or the waves will cancel out. The way to arrange things is to find the regions of constant phase, as we have already explained; they are planes which make equal angles with the initial and final directions (Feynman et al., 1963).”

 

Feynman could have continued using the term wave packets or wave trains instead of particle waves to model x-rays, electrons, and neutrons. More importantly, the term reflection is a misnomer, as the underlying process is diffraction, not simple specular reflection. A more precise term is Bragg diffraction, which accurately describes the phenomenon as wave interference arising from periodic layers of atoms rather than mere bouncing off a surface. The process involves the scattering of incoming waves that interact with parallel atomic planes, leading to constructive interference among outgoing waves. The scattering phenomenon is also known as Bragg scattering or elastic scattering, as the interaction between the incoming waves and the crystal lattice does not result in an observable change in energy—only a change in direction.

 

Historically, in 1912, Max von Laue proposed that crystals act as three-dimensional diffraction gratings for x-rays. To simplify analysis, the x-ray source and detector are idealized as being far from the crystal, allowing both the incident and outgoing waves to be treated as plane waves.  Specifically, x-rays induce oscillations in the electrons within the crystal, causing them to emit secondary x-rays. These scattered waves interfere and give rise to diffraction patterns at certain angles. This process is a form of elastic scattering, meaning that while the x-rays interact with the crystal lattice, their wavelength remains constant. The experiments showed that x-rays have wavelike properties and provided insight into the periodic arrangement of atoms in crystals.

 

2. Bragg conditions:

“… the waves scattered from the two planes will be in phase provided the difference in distance travelled by a wavefront is an integral number of wavelengths. This difference can be seen to be 2dsinθ, where d is the perpendicular distance between the planes. Thus the condition for coherent reflection is 2dsinθ = nλ (n=1,2,…) (Feynman et al., 1963).”

 

Feynman states the condition for coherent reflection as 2d sin θ = nλ (n = 1, 2,…), where d is the interplanar spacing. However, instead of a single condition, we may emphasize three key Bragg conditions:

(1) Bragg’s equation: For diffraction to occur, the scattered waves must interfere constructively satisfying Bragg’s equation, nλ = 2dsin θ.

(2) Angle of diffraction: The incident and diffracted waves must obey the relation: Angle of Incidence = Angle of Diffraction.

(3) Interplanar spacing: The crystal must have a regular, periodic arrangement of atoms with a well-defined interplanar spacing d.

Additionally, while Bragg’s equation provides a simplified scalar description of diffraction, the Laue condition offers a more general vector-based formulation that relates the incident and diffracted wave vectors. Bragg’s equation can be derived as a special case of the Laue condition, particularly when considering diffraction from parallel atomic planes.
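Bragg's equation can be explored numerically; the example values below (the Cu Kα x-ray wavelength and a 2.0 Å plane spacing) are illustrative, not from the text:

```python
import math

# Bragg angles theta_n from n*lambda = 2*d*sin(theta).
# Example: Cu K-alpha x-rays on planes spaced 2.0 angstroms apart.

LAMBDA = 1.54e-10  # Cu K-alpha wavelength, m (approximate)
D = 2.0e-10        # interplanar spacing, m (illustrative)

def bragg_angle_deg(n):
    """Bragg angle (degrees) for diffraction order n, or None if impossible."""
    s = n * LAMBDA / (2 * D)
    if s > 1.0:
        return None  # n*lambda exceeds 2d: no angle satisfies Bragg's law
    return math.degrees(math.asin(s))

for n in (1, 2, 3):
    print(f"n = {n}: {bragg_angle_deg(n)}")
# Order 3 returns None here: the same geometry that sets the Bragg
# cutoff also limits the number of observable diffraction orders.
```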

 

“If, on the other hand, there are other atoms of the same nature (equal in density) halfway between, then the intermediate planes will also scatter equally strongly and will interfere with the others and produce no effect. So d in (38.9) must refer to adjacent planes; we cannot take a plane five layers farther back and use this formula! (Feynman et al., 1963).”

 

Perhaps Feynman could have clarified the distinction between intermediate planes and adjacent planes. In a crystal, multiple sets of parallel planes exist, each with its own interplanar spacing, leading to different pairs of incidence and diffraction angles (as shown below) that satisfy the conditions for constructive interference. Bragg’s law applies not only to regular lattice structures but also to specific lattice planes, such as hexagonal planes in certain crystals. Furthermore, if the diffraction conditions hold for a particular atomic layer and its neighboring layers, they can be assumed to apply consistently across all layers with identical spacing. Experimentally, x-rays penetrate deeply into the crystal, allowing diffraction to arise from thousands or even millions of layers, collectively contributing to the observed diffraction pattern.

 

[Figure: diffraction from multiple sets of crystal lattice planes, each with its own spacing and angle. Source: Mansfield & O'Sullivan, 2020]
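The dependence of spacing on the plane family can be sketched for a simple cubic lattice, where dhkl = a/√(h² + k² + l²); the lattice constant below is illustrative:

```python
import math

# Interplanar spacings for a simple cubic lattice:
# d_hkl = a / sqrt(h^2 + k^2 + l^2). Different plane families (hkl)
# have different spacings, hence different Bragg angles.

A = 3.0  # lattice constant, angstroms (illustrative value)

def d_spacing(h, k, l):
    """Spacing (angstroms) of the (hkl) plane family in a cubic lattice."""
    return A / math.sqrt(h**2 + k**2 + l**2)

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    print(hkl, f"{d_spacing(*hkl):.3f} angstrom")
```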

3. Bragg cutoff:

“Incidentally, an interesting thing happens if the spacings of the nearest planes are less than λ/2. In this case (38.9) has no solution for n. Thus if λ is bigger than twice the distance between adjacent planes then there is no side diffraction pattern, and the light—or whatever it is—will go right through the material without bouncing off or getting lost. So in the case of light, where λ is much bigger than the spacing, of course it does go through and there is no pattern of reflection from the planes of the crystal (Feynman et al., 1963).”

 

Bragg cutoff refers to the wavelength λB beyond which Bragg diffraction cannot occur. This wavelength can be determined by substituting the two extreme values θ = 90° (maximum angle) and n = 1 (minimum order) into Bragg’s equation, giving λB = 2d. Mathematically, if λ > 2d, no real angle θ satisfies Bragg’s law, making diffraction impossible. However, Feynman’s claim that “light will go right through the material without bouncing off or getting lost” oversimplifies the situation. Even if Bragg diffraction does not occur, incident waves can still interact with the crystal through scattering, absorption, or transmission. In materials with sufficient electron density, electromagnetic radiation can be significantly absorbed rather than simply passing through unaffected. The Bragg cutoff thus represents a fundamental limit in crystal diffraction, defining the range of wavelengths that can undergo diffraction.
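A minimal sketch of the cutoff, using the graphite interlayer spacing (approximately 3.35 Å) as the example:

```python
# Bragg cutoff wavelength: lambda_B = 2d, obtained from Bragg's equation
# at theta = 90 degrees and n = 1. Wavelengths beyond this cannot diffract.

D = 3.35e-10  # graphite interlayer spacing, m (approximate)

lambda_cutoff = 2 * D
print(f"cutoff = {lambda_cutoff * 1e10:.2f} angstrom")  # 6.70 angstrom
```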

 

“If we take these neutrons and let them into a long block of graphite, the neutrons diffuse and work their way along (Fig. 38–7). They diffuse because they are bounced by the atoms, but strictly, in the wave theory, they are bounced by the atoms because of diffraction from the crystal planes. It turns out that if we take a very long piece of graphite, the neutrons that come out the far end are all of long wavelength!... In other words, we can get very slow neutrons that way. Only the slowest neutrons come through; they are not diffracted or scattered by the crystal planes of the graphite, but keep going right through like light through glass, and are not scattered out the sides (Feynman et al., 1963).”

 

Feynman’s statement—“if we take a very long piece of graphite, the neutrons that come out the far end are all of long wavelength!”— oversimplifies the underlying physics. As neutrons diffuse through graphite, they undergo multiple collisions with carbon atoms, losing kinetic energy in a process known as neutron moderation. This slowing-down effect is why graphite serves as a moderator in nuclear reactors, reducing neutron energy to facilitate optimal fission reactions. In a sufficiently long piece of graphite, the neutrons that emerge at the far end are mainly slower neutrons with longer de Broglie wavelengths. This occurs partly because high-energy (short-wavelength) neutrons satisfy Bragg's diffraction condition for scattering from the crystal planes and are thus deflected. In contrast, slow neutrons, which do not satisfy Bragg’s condition, pass through the lattice with minimal scattering, similar to light passing through glass.
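The filtering effect can be made concrete with the de Broglie relation λ = h/√(2mE); the neutron energies below are illustrative:

```python
import math

# De Broglie wavelengths of neutrons at several kinetic energies.
# Thermal neutrons (~0.025 eV) have wavelengths of a few angstroms,
# comparable to crystal plane spacings; the slowest neutrons exceed
# the graphite Bragg cutoff and pass through with little scattering.

H = 6.62607e-34     # Planck constant, J*s
M_N = 1.674927e-27  # neutron mass, kg
EV = 1.602177e-19   # joules per eV

def neutron_wavelength_angstrom(energy_eV):
    """De Broglie wavelength (angstroms) of a neutron with the given energy."""
    p = math.sqrt(2 * M_N * energy_eV * EV)  # momentum, kg*m/s
    return H / p * 1e10

for E in (1.0, 0.025, 0.001):
    print(f"E = {E} eV -> {neutron_wavelength_angstrom(E):.2f} angstrom")
```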

 

Review questions:

1. What is Bragg diffraction? Is it due to the reflection of particle waves from a crystal?

2. How would you explain the Bragg condition(s)? How many are there?

3. How would you explain the Bragg cutoff? Does light simply pass through the material without bouncing off or getting lost?

 

The moral of the lesson: The wave properties of x-rays, electrons, and neutrons are revealed through Bragg diffraction, which occurs when the path difference between the adjacent waves scattered from different planes is an integer multiple of the wavelength.

 

References:

1. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

2. Mansfield, M. M., & O'Sullivan, C. (2020). Understanding physics. Hoboken, NJ: John Wiley & Sons.