Tuesday, March 17, 2026

Section 42–3 Thermal ionization

Idealizations / Approximations / Limitations

 

In this section, Feynman employs a simplified toy model of thermal ionization to illustrate two drivers of plasma formation: thermal energy and volume expansion. Feynman’s version differs both from Meghnad Saha’s original 1920 formulation, which was expressed in logarithmic form, and from the modern version, which incorporates degeneracy factors and the thermal de Broglie wavelength. Despite being a “first-order approximation,” the Saha ionization equation is useful in astrophysics, for example in modeling stellar atmospheres.

 

1. Idealizations

“The total number of places that we could put the electrons is apparently na+ni, and we will suppose that when they are bound each one is bound within a certain volume Va. So the total amount of volume which is available to electrons which would be bound is (na+ni)Va, so we might want to write our formula as ne = [na/(na+ni)Va] e^−W/kT. The formula is wrong, however, in one essential feature, which is the following: when an electron is already on an atom, another electron cannot come to that volume anymore!” (Feynman et al., 1963, p. 42-5).

 

Feynman’s thermal ionization formula is pedagogically effective because it rests on at least three major idealizations. First, it assumes ideal gas behavior, treating atoms, ions, and electrons as dilute, non-interacting particles while ignoring the Coulomb forces and screening effects that govern the dynamics of real plasmas. Second, it presumes thermodynamic equilibrium, so that ionization and recombination balance exactly, allowing the system to be described by equilibrium statistics rather than time-dependent kinetic processes. Third, it incorporates a simplified Pauli-like exclusion principle—counting electrons as distinguishable particles—without including spin, degeneracy factors, or the full Fermi–Dirac distribution. Together, these idealizations render the prefactor analytically transparent and conceptually accessible, but they limit the formula’s precision in dense or strongly interacting systems.

 

A Definition of Thermal Ionization

Thermal ionization is a high-temperature process in which energetic collisions between atoms provide sufficient energy for electrons to overcome the atoms’ ionization potential, liberating electrons and thereby forming ions (or plasma). This transformation is not unidirectional but exists in a state of dynamic equilibrium. At a given temperature and pressure, the rate of ionization (electrons liberated from neutral atoms) is balanced by the rate of recombination (electrons captured by ions). This statistical balance is governed by the Saha equation, which shows that the ionization fraction depends exponentially on temperature through the Boltzmann factor. In essence, thermal ionization marks the threshold where the “thermal jiggle” no longer merely moves atoms—it starts releasing their electrons.
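The exponential temperature sensitivity of the Boltzmann factor can be illustrated with a short numerical sketch (hydrogen’s 13.6 eV ionization energy is the only physical input; the two sample temperatures are arbitrary illustrative choices, not values from the text):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_factor(w_ev, temp_k):
    """Probability-controlling factor e^(-W/kT) for a barrier W at temperature T."""
    return math.exp(-w_ev / (K_B_EV * temp_k))

# Hydrogen's ionization energy W = 13.6 eV: doubling T from 6000 K to 12,000 K
# raises the factor by more than five orders of magnitude.
for temp in (6000.0, 12000.0):
    print(temp, boltzmann_factor(13.6, temp))
```

Doubling the temperature halves the magnitude of the exponent, which multiplies the factor enormously; this is why the ionization fraction turns on so sharply with temperature.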


2. Approximations

“in those circumstances, we find that a nicer way to write our formula is neni/na = (1/Va) e^−W/kT (42.7). This formula is called the Saha ionization equation. Now let us see if we can understand qualitatively why a formula like this is right, by arguing about the kinetic things that are happening” (Feynman et al., 1963, p. 42-5).

 

A current form of the Saha equation, ni+1ne/ni = (2/λ³)(gi+1/gi)e^−W/kT, is approximately correct because it is based on several assumptions. First, it assumes that the ionization potential is a fixed atomic constant, thereby ignoring Coulomb interactions and screening effects that lower the ionization energy. Second, the formula is typically applied stage by stage (stepwise), modeling the balance between two adjacent ionization states i and i+1, while neglecting simultaneous multiple ionizations, which would require additional equations for higher ionization stages. Third, it assumes a dilute, ideal gas in local thermodynamic equilibrium, which bypasses complications such as quantum degeneracy, Coulomb correlations, and non-equilibrium kinetic processes. Under these conditions—low density, weak interparticle interactions, and near equilibrium—the equation provides a reliable first-order approximation for ionization fractions, such as those in stellar atmospheres, but corrections are required in more complex or extreme environments.

 

The modern Saha ionization equation represents a substantial refinement over the simplified version used in Feynman’s pedagogical derivation. Feynman introduces the concept of an “atomic volume” Va as an intuitive but crude approximation; however, this quantity does not appear in either the original or the modern formulation of the Saha equation. In the modern expression, it is replaced by the factor 2/λ³, where λ is the electron’s thermal de Broglie wavelength (λ = h/√[2πmkT]), which accounts for the density of accessible phase-space states. Furthermore, the modern version includes the degeneracy ratio gi+1/gi, which reflects the statistical weight of the quantum states associated with different ionization levels—an aspect entirely absent from Feynman’s treatment. Together, these refinements make the equation more suitable for applications such as modeling stellar atmospheres and plasma systems. However, Feynman’s version remains pedagogically valuable as a toy model because it highlights the central role of the Boltzmann factor in thermal ionization.
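A minimal sketch of the modern form, under the same dilute-equilibrium assumptions (the hydrogen values W = 13.6 eV and degeneracy ratio gII/gI = 1/2, and the 10,000 K sample temperature, are illustrative choices, not from the text):

```python
import math

# Physical constants (SI)
H   = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
K_B = 1.380649e-23        # Boltzmann constant, J/K
EV  = 1.602176634e-19     # joules per eV

def thermal_de_broglie(temp_k):
    """lambda = h / sqrt(2*pi*m_e*k*T), the electron's thermal de Broglie wavelength."""
    return H / math.sqrt(2.0 * math.pi * M_E * K_B * temp_k)

def saha_rhs(temp_k, w_ev, g_ratio):
    """Right-hand side of the modern Saha equation:
    n_{i+1} n_e / n_i = (2 / lambda^3) * (g_{i+1}/g_i) * exp(-W/kT)  [m^-3]."""
    lam = thermal_de_broglie(temp_k)
    return (2.0 / lam**3) * g_ratio * math.exp(-w_ev * EV / (K_B * temp_k))

# Hydrogen: W = 13.6 eV; ground-state degeneracies g_I = 2, g_II = 1.
print(thermal_de_broglie(1.0e4))       # about 7.5e-10 m at 10,000 K
print(saha_rhs(1.0e4, 13.6, 0.5))
```

The 2/λ³ factor plays the role that 1/Va plays in Feynman’s toy model, but it is fixed by fundamental constants and temperature rather than chosen by hand.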

3. Limitations

“But since the ratio neni/na stays the same, the total number of electrons and ions must be greater in the larger box. To see this, suppose that there are N nuclei inside a box of volume V, and that a fraction f of them are ionized. Then ne = fN/V = ni, and na = (1−f)N/V. Then our equation becomes (f²/[1−f])(N/V) = (e^−W/kT)/Va (42.8). In other words, if we take a smaller and smaller density of atoms, or make the volume of the container bigger and bigger, the fraction f of electrons and ions must increase” (Feynman et al., 1963, p. 42-5).

 

The passage above discusses one of the more counter-intuitive predictions in statistical mechanics: a gas may become more ionized simply by “expanding,” even if the temperature remains unchanged.

 

The Governing Equation: “Expansion-Induced Ionization”

Feynman expresses the relationship between volume and ionization through Equation 42.8: (f²/[1−f])(N/V) = (e^−W/kT)/Va, where f is the fraction of ionized atoms, N/V is the particle density, and W is the ionization energy.

The equation shows that if the temperature is held constant, the right side of the equation remains fixed. Thus, if the volume increases and the density N/V decreases, the ionized fraction f must increase in order to maintain the balance. In essence, expansion alone can shift the equilibrium toward more ionization.
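Equation 42.8 is a quadratic in f and can be solved directly. In the sketch below, c = (e^−W/kT · V)/(Va · N) is a dimensionless constant introduced here for convenience (not a symbol from the text); at fixed temperature, expanding the box increases c:

```python
import math

def ionized_fraction(c):
    """Solve f^2 / (1 - f) = c for the root in [0, 1].
    This is Eq. 42.8 rearranged, with c = (e^{-W/kT} * V) / (Va * N)."""
    return (-c + math.sqrt(c * c + 4.0 * c)) / 2.0

# Same temperature throughout; only the volume per nucleus grows.
for c in (0.01, 1.0, 100.0):
    print(c, ionized_fraction(c))
```

The ionized fraction climbs toward 1 as the gas is diluted, which is exactly the “expansion-induced ionization” Feynman describes.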

 

An Intuitive Picture: The "Dating Analogy"

One way to visualize the effect is to imagine the gas as a dynamic society of couples (neutral atoms) and singles (ions and electrons).

Ionization (breaking up): A couple can be separated by a sufficiently energetic “thermal jiggle.” The rate of these breakups depends on how many couples are present and the intensity of the thermal motion (temperature).

Recombination (finding a partner): For a new “union” to occur, a free electron must encounter and bind with an ion. The recombination rate depends on the likelihood of random encounters—which, in our analogy, corresponds to the size of the venue.

 

Why Volume Matters:

The Manhattan Club (High Density): If 100 people are packed into a small clubroom, encounters are frequent. “Singles” are likely to bump into each other. Even as couples break apart on the dance floor, new pairs readily form. In this crowded environment, the high encounter rate favors the “Couple” state (neutral atoms).

The Sahara Desert (Low Density): If the same 100 people are scattered across a vast desert, couples may still occasionally break up (ionization), but the resulting new singles may wander for years before encountering another person. In such an enormous volume, the low encounter rate favors the “Single” state (free electrons or ions).

The Result: Because break-up events occur randomly while the chance of union becomes increasingly rare, a reasonable number of people will eventually remain single. This is the essence of “expansion-induced ionization”: the larger the volume, the larger the fraction that ends up single.

 

Interpreting the Equation

The structure of the equation reflects three key factors:

f² : This represents the probability of recombination, which requires two “singles” (an electron and an ion) to meet. Since both must be present, this is a joint probability that scales with the square of the ionized fraction.

1 – f: It represents the fraction of remaining neutral atoms that can still be ionized.

N/V : This “crowding factor” refers to the overall particle density and determines the frequency of encounters.

When a gas expands and its density decreases, the left side of the equation would decrease unless the ionized fraction (f) increases. Ionization events may occur slowly at low temperatures, but recombination takes even longer in extremely dilute environments because particles must travel vast distances to meet. Thus, the near-vacuum of interstellar space can sustain a substantial ionized fraction; once an electron is liberated, it traverses such immense distances that the statistical probability of it “finding its way home” to an ion becomes nearly zero. In other words, the particles are so far apart that recombination is rendered statistically impossible.

 

“… if the space is enormous, wanders and wanders and does not come near anything for years, perhaps. But once in a very great while, it does come back to an ion and they combine to make an atom. So the rate at which electrons are coming out from the atoms is very slow. But if the volume is enormous, an electron which has escaped takes so long to find another ion to recombine with that its probability of recombination is very, very small; thus, in spite of the large excess energy needed, there may be a reasonable number of electrons (Feynman et al., 1963, p. 42-7).”

 

Infinite Volume Paradox?

There is a paradox in Feynman’s Equation 42.8 if it is extrapolated to an extreme limit where its assumptions no longer hold. If the volume V becomes extremely large while the number of nuclei N remains fixed, the density N/V approaches zero. The equation then requires the ionization fraction f to approach 1, suggesting that nearly all atoms become ionized. At first glance this appears contradictory: although f² approaches 1, the recombination rate is proportional to neni = (fN/V)², which vanishes as the density drops, so recombination cannot actually keep pace. From the perspective of physical reasoning, the “encounter rate” becomes vanishingly small because the particles are separated by vast distances. The apparent contradiction arises from applying the Saha ionization equation—which assumes a dilute ideal gas in local thermodynamic equilibrium—to an infinitely dilute environment such as outer space, where collisions are too rare to maintain equilibrium.
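The limit can be made explicit by solving Eq. 42.8 for f (a short derivation added here for clarity; n and c are shorthand introduced only for this calculation):

```latex
% Eq. 42.8 with n = N/V and K = e^{-W/kT}/V_a:
\frac{f^2}{1-f}\, n = K
\quad\Longrightarrow\quad
f = \frac{-c + \sqrt{c^2 + 4c}}{2}, \qquad c \equiv \frac{K}{n}.
% As V \to \infty with N fixed, n \to 0, so c \to \infty and
1 - f = \frac{f^2}{c} \approx \frac{1}{c} = \frac{n}{K} \longrightarrow 0 .
```

So f → 1, but only while equilibrium holds; the recombination rate neni = (fn)² still vanishes with n, which is why the equilibrium assumption is the first thing to fail.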

 

Historical note:

Saha originally formulated the ionization equation in logarithmic form in 1920 (see below), rather than in the exponential form commonly used in modern textbooks. This was a pragmatic choice: in the pre-calculator era, scientists relied on logarithm tables to turn complex numerical work, including multiplication, into simple addition. By contrast, Feynman presented a pedagogical version in terms of particle densities (electrons, ions, atoms) and “atomic volume,” emphasizing the statistical mechanism behind thermal ionization. Saha’s logarithmic formulation, however, proved useful for astrophysics because it allowed him to demonstrate that the stellar spectral types (O-B-A-F-G-K-M) form a temperature sequence. He also showed that the range from roughly 3000 K to 40,000 K corresponds to the successive ionization of different elements, providing a first-order approximation for understanding the observed spectra of stellar atmospheres.

(Source: Saha, 1920)

 

The Scientist and the Social Barrier:

Saha’s scientific breakthrough mirrored his personal trajectory in striking ways. Born into a lower-caste family in a society historically “bound” by hereditary roles and endogamy, his rise into the bhadralok (India’s intellectual elite) represented a remarkable break from the traditional social order. His later engagement in politics was partly driven by his opposition to the entrenched caste hierarchy, a system comprising thousands of castes and even more sub-castes. (This form of social classification differs from ethnoreligious groups such as those found in Jewish communities, where traditional roles like Kohen and Levi remain but carry minimal modern socio-economic weight.) His life thus reflects the principle at the heart of his equation: a minimum energy is required to liberate an electron from a bound state—much as it takes to break free of a lower caste—into a more “ionized” state of genuine freedom.

 

Key Takeaways: Two Drivers of Ionization

At the heart of thermal ionization lies the Boltzmann factor, which determines the microscopic probability that an electron’s “thermal jiggle” will acquire enough energy to overcome the ionization potential and escape from an atom. Feynman shows that this simple exponential factor governs the statistical balance of ionization, revealing that equilibrium depends on the ratio of thermal energy to the ionization energy. Yet the Saha ionization equation also leads to a counterintuitive reality: volume matters as much as temperature in ionization. In the ultra-dilute vacuum of interstellar space, a plasma may persist not because the gas is hot, but because it is very sparse. In essence, ionization in a rarefied gas is sustained not only by thermal motion but also by simple geometry: even a “cold” vacuum can maintain a reasonable ionization fraction because electrons must wander vast distances before finding an ion. Thus, the ionization state depends not only on the energy required to “break free,” but also on the physical space available for electrons to “get lost.”

 

Alternative Takeaways: From Stars to Smartphones

Thermal ionization is the fundamental process that transforms a neutral gas into a plasma.

It occurs when a gas is heated to temperatures so high that energetic collisions between atoms eject electrons, leaving behind positively charged ions. In stars such as the Sun, the outer atmosphere reaches thousands of degrees, sustaining an enormous reservoir of ionized gas—sometimes called hot plasma. In contrast, a plasma etcher generates plasma by applying a high-frequency electric field, producing a mixture of high-speed (hot) electrons and relatively slow (cold) ions—sometimes known as “cold” plasma. The gas is kept at low pressure, allowing ions to travel long distances without the frequent collisions that would cause recombination. The result is a highly controlled chemical etching process, precise enough to produce microelectronic circuits at the nanometer scale—the same kind of circuitry found in modern smartphones. In short, while a star relies on intense heat to transform matter into a plasma state, modern chip fabrication harnesses controlled ions to perform highly precise etching.


The Moral of the Lesson:

It is worthwhile to conclude the discussion of nanofabrication by acknowledging Feynman, the visionary often credited with inspiring the birth of nanotechnology. In his 1959 Caltech lecture, There’s Plenty of Room at the Bottom, Feynman posed a provocative question: “Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?” This central vision—manipulating matter at an atomic scale—anticipated the future of miniaturization. During that lecture, Feynman did more than speculate; he sketched out practical possibilities, including the use of an ion source and methods for focusing ions into a tiny spot. To encourage progress, he offered a $1,000 prize for the first person who could reduce a page of text by a factor of 25,000. The prize remained unclaimed for 25 years, until Tom Newman succeeded in etching the opening line of A Tale of Two Cities, “It was the best of times, it was the worst of times…”, onto a tiny square of plastic, using an electron beam to demonstrate the possibilities of nanoscale fabrication.

 

The phrase “It was the best of times, it was the worst of times,” from A Tale of Two Cities by Charles Dickens, is often associated with dramatic historical upheavals such as the French Revolution. When Tom Newman etched the famous line to claim Feynman’s $1,000 prize, he could have been reflecting on the Computer Revolution—an era of new opportunity shadowed by fears of human obsolescence. Yet the deeper paradox may lie less in revolutions than in the complexities of human nature itself. Periods of rapid change often coincide with instability because the same intelligence that drives innovation can also generate conflicts. Scientific breakthroughs expand human capability, but they do not always improve our wisdom. As a result, every age may experience new opportunities and self-inflicted problems, from political division to social tension. In this sense, the paradox of “the best and worst of times” reflects a recurring pattern: the challenges of any age may arise not from revolutions, such as the Artificial Intelligence Revolution, but from our tendency to become our own worst enemy through the misuse of the power we create.

 

Review questions:

1. Analyzing Idealizations: How would you evaluate the key idealizations of Feynman’s toy model of thermal ionization?

2. Historical vs. Modern Formulations: In what ways does Feynman’s model differ from Saha’s original 1920 formulation and the modern version?

3. Volume-Induced Ionization: Do you agree with Feynman’s explanation of the persistence of plasma in the ultra-low-density environments of interstellar space?

 

p.s.: In today’s context, Iran’s hypersonic missiles moving at Mach 15 are wrapped in plasma sheaths (formed by thermal ionization) that cannot be tracked easily.

In the high-stakes arena of modern ballistics, missiles traveling through the atmosphere at speeds of about Mach 13 push the limits of physics. At such hypersonic velocities, the air ahead of the missile undergoes such violent compression that it becomes thermally ionized, forming a dense plasma sheath that can interfere with radar tracking. This phenomenon poses a significant challenge for missile defense systems such as Iron Dome, complicating interception and making a near-perfect success rate difficult to achieve.

 

References:

Feynman, R. P. (1960). There's Plenty of Room at the Bottom. Engineering and Science, 23(5), 22–36.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Saha, M. N. (1920). LIII. Ionization in the solar chromosphere. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 40(238), 472–488.

Thursday, March 5, 2026

Section 42–2 Thermionic emission

Idealizations / Approximations / Limitations

 

This section is about thermionic emission, historically termed the Edison effect, which occurs when a material is heated sufficiently to give electrons the kinetic energy required to overcome the surface’s potential barrier. While Owen Willans Richardson was awarded the 1928 Nobel Prize in Physics for formalizing the relationship, Feynman simplifies the full equation into a toy model built around the Boltzmann factor. The principle is used in modern AI chip manufacturing; for example, it governs the emission of electrons from the tungsten tips inside scanning electron microscopes.

 

1. Idealizations:

“Then there would be a certain density of electrons at equilibrium which would, of course, be given by exactly the same formula as (42.1), where Va is the volume per electron in the metal, roughly, and W is equal to qeϕ, where ϕ is the so-called work function, or the voltage needed to pull an electron off the surface….. In other words, the answer is that the current of electricity that comes in per unit area is equal to the charge on each times the number that arrive per second per unit area, which is the number per unit volume times the velocity, as we have seen many times: I = qenv = (qev/Va)e^−qeϕ/kT (Feynman et al., 1963, p. 42-4).”

 

Feynman’s equation I = qenv = (qev/Va)e^−qeϕ/kT contains the essential Boltzmann factor in the exponential, but the prefactor qev/Va embodies several idealizations. First, replacing the electron number density n with 1/Va treats conduction electrons as if each occupies a fixed average volume in the metal, even though thermionic emission is essentially a surface phenomenon. Second, the use of a single average speed v ignores the Fermi–Dirac velocity distribution and directional effects; in reality, only electrons with sufficiently large velocity components normal to the surface can escape. Third, the expression assumes that every electron reaching the surface with enough energy overcomes the barrier, neglecting reflection, surface scattering, and impurities. Thus, the prefactor functions as a simplified “attempt rate”—charge × available carriers × average speed—providing a toy model while omitting the quantum statistics incorporated in the Richardson–Dushman law.
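Feynman’s toy formula is easy to evaluate once concrete numbers are plugged in. The sketch below assumes v is the mean thermal speed √(8kT/πme) and uses an arbitrary Va of 10⁻²⁹ m³; both choices are illustrative stand-ins, since the point of the passage is that the prefactor is schematic while the exponential dominates:

```python
import math

# Physical constants (SI)
Q_E = 1.602176634e-19   # electron charge, C
M_E = 9.1093837015e-31  # electron mass, kg
K_B = 1.380649e-23      # Boltzmann constant, J/K

def toy_emission_current(phi_volts, temp_k, va_m3=1.0e-29):
    """Feynman's toy estimate I = (q_e * v / Va) * exp(-q_e*phi/kT).
    v = mean thermal speed (an assumed choice); Va = assumed volume per electron."""
    v = math.sqrt(8.0 * K_B * temp_k / (math.pi * M_E))
    return (Q_E * v / va_m3) * math.exp(-Q_E * phi_volts / (K_B * temp_k))

# A tungsten-like work function of 4.5 V: a 10% temperature rise
# multiplies the current roughly tenfold, driven almost entirely
# by the exponential, not by the slowly varying prefactor.
print(toy_emission_current(4.5, 2000.0))
print(toy_emission_current(4.5, 2200.0))
```

The absolute value of the current depends on the crude prefactor, but the ratio between two temperatures is dominated by the Boltzmann factor, which is exactly Feynman’s point.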

 

“We may give another example of a very practical situation that is similar to the evaporation of a liquid—so similar that it is not worth making a separate analysis. It is essentially the same problem (Feynman et al., 1963, p. 42-4).”

 

Strictly speaking, evaporation and thermionic emission are not identical physical problems, even though both involve particles escaping from a surface. The Richardson–Dushman law describes electron emission from a metal and is characterized by a T² prefactor multiplied by a Boltzmann factor, reflecting the presence of a work-function barrier. In contrast, Langmuir’s law (often associated with the Hertz–Knudsen equation) describes the evaporation of neutral atoms and follows classical kinetic theory, yielding a 1/√T dependence in the flux expression, which is commonly written in pressure form without a Boltzmann factor. Although the two laws differ in detail—electrons versus neutral atoms, Fermi–Dirac statistics versus Maxwell–Boltzmann statistics—they share a similar foundation in surface-related processes.

 

It is also worth mentioning that Irving Langmuir had an exceptionally broad research program centered on surface phenomena. His investigations into surface adsorption, thin films, and tungsten-filament bulbs naturally led him to study both evaporation and thermionic emission. Awarded the 1932 Nobel Prize in Chemistry for his work in surface chemistry, Langmuir also contributed to the development of the Child–Langmuir law, which describes the space-charge-limited current in vacuum tubes and was foundational for early electronics technology.

 

2. Approximations

“The filament of the tube may be operating at a temperature of, say, 1100 degrees, so the exponential factor is something like e^−10; when we change the temperature a little bit, the exponential factor changes a lot. Thus, again, the central feature of the formula is the e^−qeϕ/kT” (Feynman et al., 1963, p. 42-4).

 

In Feynman’s equation, the central approximation lies in the use of the Boltzmann factor e^−qeϕ/kT, where qeϕ (or W) is the work function—the energy barrier electrons must overcome to escape the metal. At a filament temperature around 1100 K, the ratio W/kT may be about 10, making the emission proportional to e^−10, a very small number; because this exponential contains 1/T, even a slight increase in temperature significantly shrinks the magnitude of the exponent and therefore produces a large increase in current. Interestingly, as Richardson noted in his Nobel Lecture, it is experimentally difficult to distinguish between emission laws proportional to T^½e^−W/kT and T²e^−W/kT: the algebraic power of T changes slowly compared with the exponential term, and small adjustments in the constants can mask the difference. However, the essential physics of thermionic emission still lies in the exponential Boltzmann factor, which governs the fraction of electrons energetic enough to overcome the work-function barrier.

 

In his Nobel Lecture, Richardson (1929) mentions: “In 1901, I was able to show that each unit area of a platinum surface emitted a limited number of electrons. This number increased very rapidly with the temperature, so that the maximum current i at any absolute temperature T was governed by the law i = AT^½e^−W/kT … Eq. (1)… In 1911, as a result of pursuing some difficulties in connection with the thermodynamic theory of electron emission, I came to the conclusion that i = AT²e^−W/kT … Eq. (2) was a theoretically preferable form of the temperature emission equation to Eq. (1), with, of course, different values of the constants A and w from those used with (1). It is impossible to distinguish between these two equations by experimenting. The effect of the T² or T^½ term is so small compared with the exponential factor that a small change in A and w will entirely conceal it. In fact, at my instigation, K. K. Smith in 1915 measured the emission from tungsten over such a wide range of temperature that the current changed by a factor of nearly 10¹², yet the results seemed to be equally well covered by either (1) or (2).”
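Richardson’s point can be verified in a few lines (a sketch with assumed, tungsten-like numbers: a 4.5 eV barrier over a 1500–2500 K filament range; neither value comes from the quoted text):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

T_LO, T_HI = 1500.0, 2500.0  # filament temperatures, K (assumed)
W = 4.5                      # tungsten-like work function, eV (assumed)

# How much each piece of T^n * exp(-W/kT) changes across the range:
exp_ratio  = math.exp(W / (K_B_EV * T_LO) - W / (K_B_EV * T_HI))
half_ratio = (T_HI / T_LO) ** 0.5   # Eq. (1) prefactor, T^(1/2)
sq_ratio   = (T_HI / T_LO) ** 2     # Eq. (2) prefactor, T^2

print(exp_ratio)              # ~1e6: the Boltzmann factor dominates
print(sq_ratio / half_ratio)  # ~2: all the two prefactors disagree by
```

A factor-of-two discrepancy between the prefactors, against a million-fold swing in the exponential, is easily absorbed by slightly adjusting A and W, which is exactly why Smith’s data fit both laws.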

 

3. Limitations

“As a matter of fact, the factor in front is quite wrong—it turns out that the behavior of electrons in a metal is not correctly described by the classical theory, but by quantum mechanics, but this only changes the factor in front a little. Actually, no one has ever been able to get the thing straightened out very well, even though many people have used the high-class quantum-mechanical theory for their calculations” (Feynman et al., 1963, p. 42-5).

 

The Paradox of Feynman's Pessimism: Why Agreement and Disagreement Both Clarify the Truth

Feynman's seemingly pessimistic statements about the development of thermionic emission present a productive paradox: by simultaneously agreeing and disagreeing with him, we gain a richer understanding of how physics progresses.

 

1. The Humility of Agreeing: Surface Physics is Messy

Agreeing with Feynman is an exercise in intellectual humility. His prefactor is indeed “quite wrong” because it ignores Fermi–Dirac statistics. Even after the quantum-mechanical refinement to the Richardson–Dushman law, in which the Richardson constant (A = 4πme qe k²/h³) replaces the earlier empirical prefactor, experimental values often deviate from the theoretical prediction. Surface contamination, surface roughness, and space-charge effects all complicate ideal experimental conditions. In practice, while the Boltzmann factor remains robust and reliable, the prefactor is sensitive to the “messy” material-specific surface conditions that are difficult to control. Feynman’s pessimism is not cynicism but a methodological caution: theoretical elegance does not ensure experimental exactness.

 

2. The Optimism of Disagreeing: A Theoretical Achievement

Disagreeing with Feynman allows us to recognize what was genuinely “straightened out.” By the mid-1920s, Saul Dushman and others had used quantum statistics to transform the empirical prefactor into one derived from fundamental constants (me, qe, k, and h). This was not yet a revolution but a revelation: thermionic emission is a manifestation of the connections between Fermi–Dirac statistics, phase space, and electron behavior. When experimental values deviate from the Richardson constant, the discrepancy typically reflects imperfect surfaces rather than a breakdown of quantum theory. In this sense, much was “straightened out” at the level of foundational physics, even if real materials introduce unavoidable complications.

 

3. Synthesis: Where Theory meets Reality

Holding both views simultaneously provides a mature scientific perspective. When designing thermionic energy converters or optimizing electron sources for AI chip fabrication, engineers may refine the Richardson–Dushman law*. Yet Feynman's skepticism may function like a craftsman's caliper, continually emphasizing the gap between theoretical predictions and real surfaces. The tension between theoretical completeness and experimental complexity is not evidence of failure; it is the engine of refinement in surface science. Thus, the “optimist” may still use the Richardson constant as a guide, while the “pessimist” accounts for surface contamination and imperfections—and progress emerges from the dialogue between the two.

 

*In VLSI Technology, the formula related to thermionic emission is modified as shown below:


Source: VLSI Technology (Sze, 1983)

Note: In his 1928 Nobel Lecture, Richardson explicitly acknowledged the importance of Sommerfeld’s quantum-theoretical treatment of the electron gas in metals: “This great problem was solved by Sommerfeld in 1927. Following up the work of Pauli on the paramagnetism of the alkali metals, which had just appeared, he showed that the electron gas in metals should not obey the classical statistics as in the older theories, such as that of Lorentz for example, but should obey the new statistics of Fermi and Dirac… The only clear exceptions which emerged were the magnitude of the work function in relation to temperature as deduced from the cooling effect and the calculation of the actual magnitude of the absolute constant A which enters into the AT²e^−w/kT formula. As this contains Planck’s constant h its elucidation necessarily involved some form of quantum theory.”

 

Key Takeaways: Why Feynman Embraces the “Wrong” Formula

Feynman’s goal is not to provide a handbook for industrial engineering, but to illuminate the conceptual core of Statistical Mechanics.

  • The Scaffolding: His central aim is to show that the Boltzmann Factor is the universal engine behind virtually all “escape” processes. Whether the subject is evaporation, thermionic emission, or chemical reaction rates, the exponential suppression associated with an energy barrier is the main physical principle.
  • The Essence of Physics: From this perspective, the precise temperature dependence of the prefactor—whether it is √T or T² in the Richardson–Dushman law—is secondary. The exponential term governs the scale of the effect; the prefactor refines it.

Thus, the deeper lesson is this: “once you understand the 'thermal jiggle' and the 'energy hill,' you are 99% of the way to the truth. The last 1% is just coefficient-hunting.” The remaining refinements—coefficients, quantum statistics, and material-specific corrections—are crucial for quantitative precision, but they do not change the underlying physics. Feynman teaches us to see the forest first; the trees, however beautiful and necessary, can be examined later.


The Moral of the Lesson:

Placing aluminum foil inside a microwave oven is effectively introducing a highly reflective conductor into an electromagnetic cavity. While the oven’s magnetron generates microwaves through thermionic emission, the oven chamber is designed to bounce those waves until they are largely absorbed by your food. When foil is added, it does more than “shield”: it alters the boundary conditions of the cavity. Used carefully, foil is a tool for selective shielding to prevent portions of food from overcooking; used carelessly, it transforms the oven into an uncontrolled discharge chamber.


Why Foil Sparks: Load Geometry, Not Heat

There is a fundamental difference between the thermionic emission inside the microwave oven and electric field concentration on the foil’s surface.

  • Inside the Magnetron (The Source): Electrons are “boiled” off a cathode via thermionic emission. The frequency of the microwaves depends on the geometry (size and shape) of the magnetron’s resonant cavities and the strength of the magnetic field. This “Source Geometry” determines how fast the electrons move and oscillate.
  • On the Foil (The Load): Conversely, arcing on the foil is not directly related to the thermal “boiling” of electrons; it is driven by electric field concentration. Here, the shape of the foil (Load Geometry) determines the electric field strength and the possibility of sparks or arcing.

 

The Physics of Field Enhancement

When microwaves (oscillating electromagnetic fields) strike a conductor like aluminum, they induce surface currents. The resulting electric field is governed by the geometry of the conductor.

  • Smooth Surfaces: On a flat sheet of aluminum foil, the electric charge spreads evenly, and electric fields remain moderate.
  • Sharp Edges/Points: For a charged conductor, the surface charge density — and thus the local electric field — is inversely proportional to the radius of curvature. In short, sharper points or edges → stronger electric field.

This is the same principle that makes lightning rods work. If the electric field at a sharp edge becomes strong enough to ionize air, it can create a visible spark or arc.
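The E ∝ 1/r scaling above can be turned into a back-of-the-envelope arcing check. The sketch below is a toy model, assuming the surface field of an isolated conducting sphere held at potential V is E = V/r; the induced potential of 1 kV and the three curvature radii are illustrative assumptions, not measured microwave-oven values. It compares each local field against the approximate dielectric strength of dry air (~3 MV/m).

```python
# Toy estimate of field enhancement at a curved conductor (sphere model: E = V/r).
E_BREAKDOWN_AIR = 3.0e6  # V/m, approximate dielectric strength of dry air

def surface_field(V, r):
    """Surface field of an isolated conducting sphere of radius r at potential V."""
    return V / r

V = 1000.0  # volts induced on the foil (assumed, for illustration only)

for label, r in [("flat-ish bump, r = 1 cm ", 1e-2),
                 ("folded edge,   r = 1 mm ", 1e-3),
                 ("crumpled tip,  r = 10 um", 1e-5)]:
    E = surface_field(V, r)
    verdict = "ARCS" if E > E_BREAKDOWN_AIR else "safe"
    print(f"{label}: E = {E:.1e} V/m -> {verdict}")
```

Only the sharply crumpled tip pushes the local field past air's breakdown threshold, which is exactly why the "anti-crumple rule" below is the first practical guideline.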

 

Practical Guidelines for Safe Foil Use

To use foil safely, you should manage both geometry and energy absorption:

1. The Anti-Crumple Rule: Never use crumpled foil. Every wrinkle creates numerous microscopic “points” that intensify the local electric field. Keep foil as flat and smooth as possible.

2. Maintain Clearance: Keep foil well away from the oven walls. If the foil and wall are too close, the potential difference can produce a direct arc between them, damaging the interior.

3. “Round off” the corners: If you are shielding a turkey wing or delicate pastry, tuck or fold the edges of foil into curves. Curved shapes distribute charge more evenly, significantly reducing the risk of arcing compared to jagged edges.

4. Manage the Load: Absorption Matters

Microwaves need a "load" (something to absorb energy, like water or fat in food).

  • If you wrap your food entirely in foil, you effectively create a “No-Load” condition: the foil reflects most of the energy rather than absorbing it. Excessive reflection can increase standing waves and stress internal components.
  • Use only small patches of foil, allowing most of the food to absorb microwaves.  

 

iPhone Prank: The "Apple Wave" Catastrophe

In 2014, a notorious internet hoax claimed that a feature called “Apple Wave” allowed users to charge an iPhone by “microwaving” it. This incident provides a good review of the role of geometry:

The Geometry Trap: A smartphone is a dense thicket of complex metal structures. The internal circuitry and components have thousands of sharp features that may trigger arcing or fire.

Thermal Runaway: The most dangerous component is the Lithium-Ion battery. Microwaves induce massive currents in the battery's conductive layers. This leads to thermal runaway—a state where the battery's internal temperature rises so fast it causes a self-sustaining fire, releasing flammable gases and potentially exploding.

Source: How Not To Charge Your iPhone: Users Fall For 'Apple Wave' Microwave Prank | IBTimes


YouTube: Microwaving iPhone Battery!! Don't Try this at Home!

Summary: Geometry is Destiny

In a microwave oven, aluminum foil is neither inherently safe nor inherently destructive; it is a “passive” element whose safety is determined entirely by its shape. It becomes dangerous only when its geometry—sharp points and narrow gaps—allows the electric field to break down the insulating properties of the air. Feynman would likely appreciate the contrast: the exponential factor governs the birth of electrons in thermionic emission, but in the realm of microwave safety, geometry is destiny.

 

Review Questions:

1. Does Feynman suggest that thermionic emission is so fundamentally similar to the evaporation of a liquid that a separate, specialized analysis is redundant?

2. Why is the Boltzmann factor regarded as the “central feature” of thermionic emission, while the pre-exponential factor is often treated as secondary? Does this distinction reflect deeper theoretical stability in the exponential term versus material sensitivity in the prefactor?

3. How would you evaluate Feynman’s claim that no one has ever been able to “straighten out” thermionic emission despite the use of quantum mechanics? In light of Saul Dushman’s derivation of the universal constant in the Richardson-Dushman law, should this be interpreted as a failure of quantum mechanics, or as an acknowledgment of the complexity of real-world surface physics?

 

References:

Crowell, C. R. (1965). The Richardson constant for thermionic emission in Schottky barrier diodes. Solid-State Electronics, 8(4), 395–399.

Dushman, S. (1930). Thermionic emission. Reviews of Modern Physics, 2(4), 381.

Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Richardson, O. W. (1929). Thermionic phenomena and the laws which govern them. Nobel Lecture, December 12, 1929.

Sze, S. M. (1983). VLSI Technology. New York: McGraw-Hill.