Saturday, August 29, 2020

Section 25–4 Analogs in physics

(Ohm’s law / Inductor and capacitor / Analog computer)

 

In this section, Feynman discusses Ohm’s law, “inductor and capacitor,” as well as the analog computer.

 

1. Ohm’s law:

“… in other words, proportional to how much voltage there is: V = IR = R(dq/dt). The coefficient R is called the resistance, and the equation is called Ohm’s Law (Feynman et al., 1963, section 25–4 Analogs in physics).”

 

Feynman states Ohm’s law as “if there is a current I, that is, so and so many charges per second tumbling down, the number per second that comes tumbling through the wire is proportional to how much voltage there is.” In short, the equation V = IR is what Feynman calls Ohm’s law. However, one may distinguish two different versions of Ohm’s law: the law for a part of a circuit and the law for a whole circuit (Kipnis, 2009). The law for a part of a circuit is “electric current (I = ΔV/R) through a conductor is directly proportional to the potential difference at its ends (ΔV), and the resistance of the conductor (R) is constant.” The law for a whole circuit is “electric current (I = E/[R + r]) through a circuit is directly proportional to the electromotive force (E) of the source and inversely proportional to the total resistance (R + r).”
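The two versions give the same current, which can be checked with a rough numerical sketch (all component values below are hypothetical, chosen only for illustration):

```python
# Hypothetical values for a single-loop circuit.
EMF = 12.0  # E, electromotive force of the source, in volts
R = 5.0     # external resistance, in ohms
r = 1.0     # internal resistance of the source, in ohms

# Law for the whole circuit: I = E / (R + r)
I = EMF / (R + r)

# Law for a part of the circuit: the potential difference across R
# is dV = I * R, so I = dV / R recovers the same current.
dV = I * R
I_part = dV / R
```

Note that dV plus the drop across the internal resistance, I × r, adds back up to the electromotive force E.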

 

Feynman explains that the resistance obeys Ohm’s law for almost all ordinary substances and says that “this law is extremely accurate for most metals.” In Volume II, he adds that “the relation between the current and the voltage for real conducting materials is only approximately linear (Feynman et al., 1964, section 22–1 Impedances).” We can specify three conditions of validity for Ohm’s law: (1) low constant voltage (assume ohmic devices), (2) constant temperature (assume no heating effect), and (3) constant size (assume no expansion). In addition, Feynman shows that the heating loss generated is equal to V(dq/dt) = VI = I²R. However, one may clarify that this heating raises the temperature of the resistor and increases its resistance, such that the voltage-current relation is not strictly linear.

 

2. Inductor and capacitor:

“The equation is V = L(dI/dt) … is such that one volt applied to an inductance of one henry produces a change of one ampere per second in the current (Feynman et al., 1963, section 25–4 Analogs in physics).”

 

Feynman says that the current of an inductor does not want to stop after it is started. Based on the equation V = L(dI/dt), there is no voltage across the inductor if the current is constant. That is, we have idealized the inductor as a circuit element that has no resistance, and no power is dissipated by the current flowing through it. In Volume II, Feynman clarifies that we have to make four assumptions for an ideal inductor, e.g., “the magnetic field produced by currents in the coil does not spread out strongly all over space and interact with other parts of the circuit (Feynman et al., 1964, section 22–1 Impedances).” In a sense, the inductor has an inertial effect that resists a change in electric current, just as inertial mass was once explained as an inductive effect based on electrodynamics (Jammer, 1997).
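The defining property of the ideal inductor can be sketched by integrating dI/dt = V/L over one second (the step size is a hypothetical choice): one volt across one henry should change the current by one ampere per second.

```python
L_ind = 1.0  # inductance in henry (ideal inductor, no resistance)
V = 1.0      # constant applied voltage in volts
steps = 1000
dt = 1.0 / steps  # integrate over one second

I = 0.0
for _ in range(steps):
    I += (V / L_ind) * dt  # V = L(dI/dt)  =>  dI = (V/L) dt
# After one second, I is approximately 1 ampere.
```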

 

Feynman mentions that the work done in moving a unit charge across the gap from one plate to the other is precisely proportional to the charge. He adds that we have V = q/C, and the constant of proportionality is not called C, but 1/C for historical reasons. In Volume II, Feynman explains that “[t]his formula is not exact, because the field is not really uniform everywhere between the plates, as we assumed (Feynman et al., 1964, section 6–10 Condensers; parallel plates).” Moreover, the dielectric materials used in parallel-plate capacitors are not perfect insulators. Historically, the formula SQ = SV/C relates two sensitivities and does not mean Q = V/C; this is why the constant of proportionality is 1/C. In the context of an electroscope, SQ is the deflection per unit charge (or charge sensitivity) and SV is the deflection per unit potential difference (or potential-difference sensitivity).

 

3. Analog computer:

“This is called an analog computer. It is a device which imitates the problem that we want to solve by making another problem… (Feynman et al., 1963, section 25–4 Analogs in physics).”

 

According to Feynman, an analog computer is a device that imitates the problem that we want to solve by making another problem, which has the same equation, but in another circumstance of nature, and which is easier to build, to measure, and to adjust. Specifically, an analog computer works with continuous quantities such as voltages or currents, instead of the discrete states of a digital computer. An example of an analog computer is the FERMIAC, a mechanical Monte Carlo device that was used in the Manhattan (atomic bomb) Project to perform calculations for neutron diffusion. In his autobiography, Feynman (1997) mentions that he used IBM machines to find out what happened during the bomb’s implosion. In his words, “if you’ve ever worked with computers, you understand the disease--the delight in being able to see how much you can do (Feynman, 1997, p. 127).”

 

Feynman explains that the electrical circuit is the exact analog of the mechanical system, in the sense that whatever q does in response to V (V is made to correspond to the forces that are acting), x would do in response to the force. However, it is inaccurate to use the phrase “exact analog” because circuit elements connected in series are analogous to the corresponding mechanical elements connected in parallel, and vice versa (Firestone, 1933). For example, a capacitor and an inductor connected in parallel have the same voltage V, whereas a spring and an object experience the same force when they are connected in series (horizontally). Conversely, the same current passes through a capacitor and an inductor connected in series, whereas an object and a spring have the same velocity difference when they are connected in parallel. In short, electrical elements in parallel share the same voltage, while mechanical elements in series share the same force.
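The analogy itself can be checked numerically. The sketch below (component values are hypothetical) integrates the circuit equation L(d²q/dt²) + R(dq/dt) + q/C = V and the mechanical equation m(d²x/dt²) + b(dx/dt) + kx = F with the mapping m = L, b = R, k = 1/C, F = V; the two trajectories then coincide exactly.

```python
def simulate(m, b, k, F, x0=0.0, v0=0.0, dt=1e-4, T=2.0):
    """Integrate m*x'' + b*x' + k*x = F (constant F) by semi-implicit Euler,
    starting from rest, and return the final displacement."""
    x, v = x0, v0
    for _ in range(int(T / dt)):
        a = (F - b * v - k * x) / m
        v += a * dt
        x += v * dt
    return x

# Electrical: L*q'' + R*q' + q/C = V  (hypothetical values)
L_, R_, C_, V_ = 2.0, 0.5, 0.25, 1.0
q = simulate(m=L_, b=R_, k=1.0 / C_, F=V_)

# Mechanical analog: m = L, b = R, k = 1/C, F = V
x = simulate(m=2.0, b=0.5, k=4.0, F=1.0)
# q(t) and x(t) coincide because the two equations are identical.
```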

 

Questions for discussion:

1. How would you state Ohm’s law?

2. How would you explain the equations V = L(dI/dt) and V = q/C?

3. Does an electrical circuit in series have an exact analog of a mechanical system in series?

 

The moral of the lesson: we may replace the equation corresponding to the circuit L(d²q/dt²) + R(dq/dt) + q/C = V by the equation m(d²x/dt²) + γm(dx/dt) + kx = F in the sense that circuit elements connected in series are analogous to the corresponding mechanical elements connected in parallel.

 

References:

1. Feynman, R. P. (1997). Surely You’re Joking, Mr. Feynman! : Adventures of a Curious Character. New York: Norton.

2. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

3. Feynman, R. P., Leighton, R. B., & Sands, M. (1964). The Feynman Lectures on Physics, Vol II: Mainly electromagnetism and matter. Reading, MA: Addison-Wesley.

4. Firestone, F. A. (1933). A new analogy between mechanical and electrical systems. The Journal of the Acoustical Society of America, 4(3), 249-267.

5. Jammer, M. (1997). Concepts of Mass in Classical and Modern Physics. Mineola, NY: Dover.

6. Kipnis, N. (2009). A law of physics in the classroom: The case of Ohm’s law. Science & Education, 18(3-4), 349-382.

Friday, August 21, 2020

Section 25–3 Oscillations in linear systems

(Idealized friction / Real friction / Resonance curves)

 

In this section, Feynman discusses the idealized friction (weak & proportional to the velocity) and real friction (strong & constant) experienced by an oscillator and its resonance curves.

 

1. Idealized friction:

“… a special kind of friction must be carefully invented for the very purpose of creating a friction that is directly proportional to the velocity … (Feynman et al., 1963, section 25–3 Oscillations in linear systems).”

 

According to Feynman, a frictional force is invented such that it is directly proportional to the velocity, and it is weaker for small oscillations. Based on this idealization, the spring force is reduced, the inertial effects are lower, the accelerations are weaker, and the friction is smaller. In a similar sense, many oscillator problems are modeled using the equation m(dv/dt) + bv + kx = 0, in which bv is idealized as a weak frictional force. This is a valid model if our linear problem essentially involves small oscillations. In the case of a pendulum, we have also idealized the period to be independent of the amplitude and of the weight of the pendulum. (In the last paragraph of the previous section, Feynman explains that sin θ is practically equal to θ for a simple pendulum if θ is small; that paragraph could be shifted to the present section.)

 

Feynman elaborates that the sizes of the oscillations are reduced by the same fraction of themselves in every cycle because of the weaker frictional force. Thus, the amplitude (A) of the oscillation can be expressed by the equation A = A₀aⁿ, in which A₀ is the initial amplitude, a is the ratio of the amplitudes of two successive cycles, and n is the number of cycles traversed. Importantly, the fact that n is directly proportional to t (total time) is approximately true for small oscillations (i.e., the period is assumed to remain unchanged). Furthermore, one should recall that a solution of m(dv/dt) + bv + kx = 0 is of the form e^(−ct) cos ω₀t. It may not be trivial to explain that e^(−3ct) = (e^(−ct))(e^(−ct))(e^(−ct)) = (0.9)³ if e^(−ct) = 0.9 over one cycle; thus, we have a choice to use A = A₀aⁿ (or A = A₀e^(−ct)).
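The equivalence of the two descriptions, A = A₀aⁿ and the exponential decay, can be checked with a few lines (the per-cycle ratio a = 0.9 is a hypothetical value, with the period taken as one unit of time):

```python
import math

A0 = 1.0          # initial amplitude (hypothetical)
a = 0.9           # amplitude ratio between successive cycles, a = e^(-c)
c = -math.log(a)  # decay constant per cycle (period taken as 1)

# After n cycles the amplitude is A0 * a**n, which is the same as
# A0 * e^(-c*n): the oscillation loses the same fraction each cycle.
n = 3
A_ratio_form = A0 * a ** n          # 0.9^3 = 0.729
A_exp_form = A0 * math.exp(-c * n)  # e^(-3c) = (e^(-c))^3
```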

 

2. Real friction:

“What happens if the friction is not so artificial; for example, ordinary rubbing on a table, so that the friction force is a certain constant amount … (Feynman et al., 1963, section 25–3 Oscillations in linear systems).”

 

Feynman says that the frictional force from, for example, ordinary rubbing on a table is a certain constant amount, independent of the size of the oscillation, that reverses its direction with the motion. It seems that he means a constant friction such as kinetic friction. This is also an idealization in which kinetic friction (µkN) is simply equal to a kinetic coefficient of friction (µk) times the normal reaction (N). However, a better model for real friction can be represented by the equation F = μN + kA, where kA depends on the area of contact between the two surfaces (Besson et al., 2007). Strictly speaking, the real friction measured may decrease as the velocity is increased, which is more problematic (Ludema & Tabor, 1966), and thus the motion has to be solved by a numerical method.

 

Feynman explains that a system does not oscillate at all if there is too much friction. If the energy in the spring is unable to overcome the frictional force, the system slowly reaches the equilibrium point. In the previous chapter (section 24–3), Feynman already discussed the strong friction that results in heavy damping. In a sense, Feynman contradicts himself because he mentioned earlier that the system can oscillate for one cycle (instead of “does not oscillate”). Perhaps some may argue over whether oscillatory motion should be defined as a to-and-fro motion or a repeated motion. However, a pendulum suspended inside a bottle of honey may not move more than a quarter of a cycle, and it needs a long time to reach the equilibrium point.

 

3. Resonance curves:

“Qualitatively, we understand the resonance curve; in order to get the exact shape of the curve it is probably just as well to do the mathematics (Feynman et al., 1963, section 25–3 Oscillations in linear systems).”

 

In Fig. 25–5, Feynman shows resonance curves with various amounts of friction present. He says that the curve goes toward infinity as ω approaches ω₀ (the natural frequency of the oscillator). In section 21–5, however, Feynman explains that the amplitude does not really reach infinity for some reason; it may be that the spring breaks. Notably, Landau and Lifshitz (1976) write that “the amplitude of oscillations in resonance increases linearly with the time (until the oscillations are no longer small and the whole theory given above becomes invalid) (p. 62).” On the other hand, the collapse of the Tacoma bridge is related to an aerodynamically induced condition of self-excitation or “negative damping” instead of forced resonance (Billah & Scanlan, 1991). In short, forced resonance is not a necessary condition to break a bridge.

 

Feynman mentions that the resonance curve is usually plotted so that the top of the curve is called one unit. He adds that if there is less friction, the curve has a higher peak as well as a narrower width at half the maximum height. One should re-read his explanation of the near-resonance equation ρ² ≈ 1/(4m²ω₀²[(ω₀ − ω)² + γ²/4]) in chapter 23: “We shall leave it to the student to show the following: if we call the maximum height of the curve of ρ² vs. ω one unit, and we ask for the width Δω of the curve, at one half the maximum height, the full width at half the maximum height of the curve is Δω = γ, supposing that γ is small (Feynman et al., 1963, section 23–2 The forced oscillator with damping).” That is, the resonance curve is related to the Q-factor, which is a measure of the width (Δω = γ) and is defined as Q = ω₀/Δω (or ω₀/γ).
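Under the near-resonance approximation quoted above, the full width at half maximum should come out equal to γ. A quick numerical check (ω₀ and γ below are hypothetical values):

```python
omega0, gamma = 10.0, 0.5  # hypothetical natural frequency and damping

def rho2(omega):
    # Near-resonance approximation from chapter 23 (constant prefactor dropped):
    # rho^2 ∝ 1 / [(omega0 - omega)^2 + gamma^2/4]
    return 1.0 / ((omega0 - omega) ** 2 + gamma ** 2 / 4.0)

peak = rho2(omega0)

# Scan a band around resonance for the points at or above half the peak.
step = 1e-4
ws = [omega0 - 2 * gamma + i * step for i in range(int(4 * gamma / step) + 1)]
half = [w for w in ws if rho2(w) >= peak / 2.0]
width = max(half) - min(half)  # full width at half maximum, ~ gamma
```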

 

Questions for discussion:

1. How would you explain that the amplitude (A) of an oscillation can be modeled by the equation A = A₀aⁿ?

2. Is it correct to say that the friction from ordinary rubbing on a table is a certain constant amount?

3. How would you explain that the resonance curve can have a higher peak as well as a narrower width at half the maximum height?

 

The moral of the lesson: if there is less friction, we will have a higher resonance curve and a narrower width at half its maximum height (or, using Q = ω₀/Δω = ω₀/γ, in which γ depends on the friction).

 

References:

1. Besson, U., Borghi, L., De Ambrosis, A., & Mascheretti, P. (2007). How to teach friction: Experiments and models. American Journal of Physics, 75(12), 1106-1113.

2. Billah, K. Y., & Scanlan, R. H. (1991). Resonance, Tacoma Narrows bridge failure, and undergraduate physics textbooks. American Journal of Physics, 59(2), 118-124.

3. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

4. Landau, L. D., & Lifshitz, E. M. (1976). Mechanics (3rd ed.). Oxford: Pergamon Press.

5. Ludema, K. C., & Tabor, D. (1966). The friction and viscoelastic properties of polymeric solids. Wear, 9, 329-348.

Saturday, August 15, 2020

Section 25–2 Superposition of solutions

(Principle of superposition / Radio tuning / Two useful methods)

 

In this section, Feynman discusses the principle of superposition and how it is related to radio tuning, as well as two useful problem-solving methods (Fourier series and Green’s function).

 

1. Principle of superposition:

“This is an example of what is called the principle of superposition for linear systems, and it is very important (Feynman et al., 1963, section 25–2 Superposition of solutions).”

 

According to Feynman, the principle of superposition means that a complicated force can be broken up into a sum of separate pieces in any convenient manner. To get the complete answer, we can add the pieces of the solution together, just as the total force is a sum of the pieces. However, Feynman’s figure does not simply show some arbitrary forces in accordance with his explanation of the superposition principle. Essentially, he is applying the superposition of ideal sinusoidal waves. The discussion of radio tuning shortly after, in the middle of the section, is also based on the superposition of waves. Similarly, the Fourier series method that can be used for radio tuning is related to the superposition of waves.
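The principle can be verified directly on a damped driven oscillator: integrating the linear equation for two forces separately and for their sum (all parameters below are hypothetical), the response to the sum equals the sum of the responses.

```python
import math

def respond(force, dt=1e-3, steps=5000, m=1.0, b=0.2, k=4.0):
    """Final displacement of m*x'' + b*x' + k*x = force(t), from rest,
    by semi-implicit Euler integration."""
    x = v = 0.0
    for i in range(steps):
        t = i * dt
        a = (force(t) - b * v - k * x) / m
        v += a * dt
        x += v * dt
    return x

F1 = lambda t: math.sin(1.5 * t)
F2 = lambda t: 0.5 * math.cos(3.0 * t)
x1 = respond(F1)
x2 = respond(F2)
x12 = respond(lambda t: F1(t) + F2(t))
# Superposition: the response to the sum is the sum of the responses.
```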

 

Feynman explains that the laws of electricity (instead of the laws of electromagnetism), Maxwell’s equations, which determine the electric field, turn out to be differential equations that are linear. On the other hand, there are nonlinear versions of Maxwell’s equations, and thus Maxwell’s equations are linear because they involve idealizations and approximations. Historically, the Born–Infeld model is a field theory that is also known as nonlinear electrodynamics (Born & Infeld, 1934). In chapter 50 of Feynman’s lectures (Volume I), he clarifies that “when we discussed the transmission of light, we assumed that the induced oscillations of charges were proportional to the electric field of the light—that the response was linear. That is indeed a very good approximation (Feynman et al., 1963).”

 

2. Radio tuning:

“That is how radio tuning works; it is again the principle of superposition, combined with a resonant response (Feynman et al., 1963, section 25–2 Superposition of solutions).”

 

Feynman says that a radio station transmits an oscillating electric field of very high frequency which acts on our radio antenna. For radio tuning, we can adjust the natural frequency of a radio by changing the L or the C of its circuit. The reception of radio waves is not merely dependent on the principle of superposition and a single radio frequency. In chapter 50, Feynman adds that “… the amplitude of cos ω₁t is modulated with the frequency ω₂. We would now say that two new components have been produced, one at the sum frequency (ω₁ + ω₂), another at the difference frequency (ω₁ − ω₂) (Feynman et al., 1963).” To have a deeper understanding, there are at least three fundamental processes involved: the generation, transmission, and reception of radio waves.
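The two new components follow from the product-to-sum identity cos ω₂t cos ω₁t = ½[cos(ω₁ + ω₂)t + cos(ω₁ − ω₂)t], which can be checked numerically (the frequencies and sample times below are hypothetical):

```python
import math

w1, w2 = 100.0, 7.0  # hypothetical carrier and modulation frequencies
samples = [0.0, 0.13, 0.77, 1.9, 2.5]

# Maximum difference between the modulated signal and its two sidebands.
diff = max(
    abs(math.cos(w2 * t) * math.cos(w1 * t)
        - 0.5 * (math.cos((w1 + w2) * t) + math.cos((w1 - w2) * t)))
    for t in samples
)
```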

 

Feynman explains that the amplitude of the oscillating field is changed, modulated, to carry the signal of the voice, and that we are not going to worry about it. Some may not understand the meaning of “modulated” as used by Feynman in his explanation. This is not surprising because Feynman was able to fix radios when he was a teenager (Feynman, 1997, pp. 15-21). In addition, Feynman had some working knowledge of ham radio, which he applied in Brazil (Feynman, 1997, p. 211). Interestingly, Feynman likely had advanced knowledge of radio frequency because he was involved in a project related to radar (Feynman, 1997, p. 102). The word radar is an acronym for “RAdio Detection And Ranging.”

 

3. Two useful methods:

“Out of the many possible procedures, there are two especially useful general ways that we can solve the problem (Feynman et al., 1963, section 25–2 Superposition of solutions).”

 

Feynman briefly discusses two useful problem-solving methods that are based on the principle of superposition: the Fourier series and Green’s function. For the Fourier series method, he mentions that practically every curve can be obtained by adding together an infinite number of sine waves of different frequencies. This method can also be used in radio tuning to determine the different frequencies of radio waves emitted by a radio station. In chapter 50, Feynman elaborates on the Fourier series in more detail and explains that this method is applicable to discontinuous curves. In a sense, our human ear-brain audio system is able to perform a Fourier analysis to the extent that some people can distinguish the frequencies of a chord.
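As an illustration of a discontinuous curve, the standard Fourier series of a unit square wave adds sine waves of odd harmonics; away from the jumps, the partial sums approach ±1 (the truncation at 200 terms is a hypothetical choice):

```python
import math

def square_partial(t, n_terms):
    """Partial Fourier sum for a unit square wave (odd harmonics only):
    (4/pi) * sum over k of sin((2k+1)t) / (2k+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

# Away from the discontinuities the partial sums approach +1 or -1.
approx = square_partial(math.pi / 2, 200)  # close to 1
```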

 

Feynman describes how a force can be likened to a succession of blows (or impulses) with a hammer, and Green’s function is a method of analyzing any force by putting together the responses to those impulses. Note that the horizontal axis of “Fig. 25–4 A complicated force may be treated as a succession of sharp impulses” was labeled x in the first edition, but it was later revised to t, which means time. Perhaps Feynman could have revealed the usefulness of Green’s functions in his path integrals. In his Ph.D. thesis, Feynman applied the Green’s function method to a forced harmonic oscillator problem from the point of view of his modified quantum mechanics. Furthermore, he simply calls it the G function, one of which is Gγ(x, x; T). This could be a reason why Fig. 25–4 was labeled x instead of t.
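A sketch of the idea, using the known impulse response G(t) = sin(ωt)/(mω) of an undamped oscillator (the blow times and strengths below are hypothetical): the response to a succession of hammer blows is just the superposition of shifted impulse responses.

```python
import math

m, w = 1.0, 2.0  # hypothetical mass and natural frequency

def G(t):
    """Impulse response of the undamped oscillator m*x'' + m*w^2*x = delta(t)."""
    return math.sin(w * t) / (m * w) if t > 0 else 0.0

# A force made of two hammer blows (impulses) at t = 0.5 and t = 1.2
# with strengths 2.0 and -1.0 ...
blows = [(0.5, 2.0), (1.2, -1.0)]

def x(t):
    # ... gives a response that is the superposition of impulse responses.
    return sum(strength * G(t - t0) for t0, strength in blows)

value = x(2.0)
```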

 

Questions for discussion:

1. How would you state the principle of superposition?

2. How would you explain the principles of radio tuning?

3. Why did Feynman discuss the Fourier series and the Green’s function method?

 

The moral of the lesson: we can solve linear problems, such as those related to radio frequencies, by using the Fourier series and Green’s function.

 

References:

1. Born, M., & Infeld, L. (1934). Foundations of the new field theory. Proceedings of the Royal Society of London A, 144(852), 425-451.

2. Feynman, R. P. (1997). Surely You’re Joking, Mr. Feynman! : Adventures of a Curious Character. New York: Norton.

3. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

Saturday, August 1, 2020

Section 25–1 Linear differential equations

(Linear operator / Independent solutions / Forced solution)

 

In this section, Feynman discusses linear operator, independent solutions (free solutions) of a homogeneous differential equation (right-hand side of the equation is zero), and the forced solution of an inhomogeneous differential equation.

 

1. Linear operator:

“We sometimes call this an operator notation, but it makes no difference what we call it, it is just ‘shorthand’ (Feynman et al., 1963, section 25–1 Linear differential equations).”

 

Feynman calls L an operator notation and says that it makes no difference what we call it. He provides two important statements: (1) L(x + y) = L(x) + L(y), and (2) for constant a, L(ax) = aL(x). However, we can call L a linear operator instead of an operator notation. To be specific, linear operators are defined by two necessary conditions: (1) for x and y ∈ V, L(x + y) = L(x) + L(y) (L is additive), and (2) for x ∈ V and a ∈ R, L(ax) = aL(x) (L is homogeneous), in which V is a real vector space and R is the set of real numbers. Simply put, a linear operator provides an operation or instruction that tells us what to do with x and y, which may be numbers, functions, or vectors. Perhaps Feynman should problematize the word linear and explain that it is not simply about straight lines.
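The two conditions can be checked numerically for a discrete version of the oscillator operator, L = m d²/dt² + k, acting on sampled functions (the sample functions, constants, and grid below are hypothetical):

```python
import math

def L_op(x, dt=0.01, m=1.0, k=4.0):
    """A discrete linear operator: (L x)_i = m * x''_i + k * x_i,
    with x'' taken as a central second difference on interior points."""
    return [m * (x[i - 1] - 2 * x[i] + x[i + 1]) / dt**2 + k * x[i]
            for i in range(1, len(x) - 1)]

ts = [0.01 * i for i in range(100)]
x = [math.sin(t) for t in ts]
y = [t * t for t in ts]

lhs = L_op([xi + yi for xi, yi in zip(x, y)])    # L(x + y)
rhs = [a + b for a, b in zip(L_op(x), L_op(y))]  # L(x) + L(y)
scaled = L_op([3.0 * xi for xi in x])            # L(3x), should equal 3 L(x)
```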

 

Feynman mentions that there may be more derivatives and more terms in L in more complicated problems. If the two conditions for a linear operator are maintained, then such a problem is a linear problem. In solving any linear problem, we can combine two inputs, such as the velocity of an object in a train and the velocity of the train, and the result will be the sum of their respective outputs. On the other hand, a differential equation such as (dx/dt)² + x = 0 is a non-linear problem because it has a squared term that violates the two conditions. In general, many problems in fluid dynamics, atmospheric physics, and general relativity are based on nonlinear equations that are unsolvable or difficult to solve.

 

2. Independent solutions:

It turns out that the number of what we call independent solutions that we have obtained for our oscillator problem is only two (Feynman et al., 1963, section 25–1 Linear differential equations).”

 

Feynman explains that there are only two independent solutions if we have a second-order differential equation. He adds that the number of independent solutions in the general case depends upon what is called the number of degrees of freedom. However, we could obtain the general solution of a second-order differential equation, e.g., m(d²x/dt²) + kx = 0, simply by using two integrations. That is, the general solution can be expressed as x = Ax₁(t) + Bx₂(t), in which A and B are dependent on the initial conditions. More importantly, the general solution in terms of two independent solutions x₁(t) and x₂(t) can be related to the principle of superposition, but this is discussed in the next section.

 

In a footnote, Feynman states that “solutions which cannot be expressed as linear combinations of each other are called independent.” Specifically, one may prefer the phrase “linearly independent solutions” and explain it using two vectors and two functions. In general, two vectors or two functions are linearly independent if one of them cannot be expressed as a multiple of the other. For example, the two vectors x and 2x are linearly dependent because we can have 2x = 2 × x or 2(x). By contrast, x and x² are linearly independent because x² is not a constant multiple of x. Similarly, “moving in the x–direction” and “moving in the y–direction” are linearly independent in the sense that we cannot replace the x–direction by the y–direction, or vice versa.
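The criterion can be made concrete with the Wronskian, W = f·g′ − f′·g, which is nonzero somewhere exactly when two solutions are linearly independent. A sketch using the pairs from the paragraph above (the evaluation point t = 3 is an arbitrary choice):

```python
def wronskian(f, df, g, dg, t):
    """Wronskian W = f*g' - f'*g at a point t; a nonzero value at some t
    implies f and g are linearly independent."""
    return f(t) * dg(t) - df(t) * g(t)

# x and x^2 are linearly independent: W = x*(2x) - 1*(x^2) = x^2 != 0 for x != 0
w_indep = wronskian(lambda t: t, lambda t: 1.0,
                    lambda t: t * t, lambda t: 2 * t, 3.0)

# x and 2x are linearly dependent: W = x*2 - 1*(2x) = 0 everywhere
w_dep = wronskian(lambda t: t, lambda t: 1.0,
                  lambda t: 2 * t, lambda t: 2.0, 3.0)
```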

 

3. Forced solution:

“Therefore, to the ‘forced’ solution we can add any ‘free’ solution, and we still have a solution (Feynman et al., 1963, section 25–1 Linear differential equations).”

 

Feynman explains that the “forced” solution does not die out because it is driven by a force. Ultimately, the general solution is almost equal to the “forced” solution as the “free” solution slowly becomes negligible. Formally speaking, the “free” solution is the complementary function and the “forced” solution is the particular integral of the second-order differential equation. One should also explain the three constants that appear in the general solution. In the “free” solution, any amplitude (or arbitrary constant) is possible, but the two arbitrary constants are dependent on how the system was started. On the other hand, the constant or amplitude of the “forced” solution is not arbitrary because it depends on the “forcing” function.

 

Feynman shows that L(xJ + x1) = F(t) + 0 = F(t) and says that we can add any “free” solution to the “forced” solution, and it is still a solution. It is worthwhile to distinguish three different principles of superposition. First, L(x + y) = 0 + 0 = 0: “Let L be any linear operator. Then if y = u and y = v are both solutions of L(y) = 0, the same is true of y = c1u + c2v, for any constants c1 and c2 (Sokolnikoff & Redheffer, 1966, p. 171).” Second, L(x + y) = F(t) + 0 = F(t): “Let u be a particular solution of L(y) = f, where L is any linear operator, and let v satisfy the homogeneous equation L(y) = 0. Then y = u + v satisfies L(y) = f, and every solution of L(y) = f can be obtained in this way (Ibid, p. 183).” Third, L(x + y) = F1(t) + F2(t): “Let y1 satisfy the equation L(y1) = f1 and let y2 satisfy L(y2) = f2, where L is any linear operator. Then, for any constants c1 and c2, the function y = c1y1 + c2y2 satisfies L(y) = c1f1 + c2f2 (Ibid, p. 186).” For consistency’s sake, Sokolnikoff and Redheffer’s symbol T is changed to L.
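The second principle can be checked numerically for the undamped oscillator x″ + ω₀²x = F₀ cos ωt, whose “forced” and “free” solutions are known in closed form (the parameters below are hypothetical, with unit mass):

```python
import math

w0, w, F0, A = 3.0, 1.0, 2.0, 0.7  # hypothetical parameters (unit mass)

def forced(t):
    # particular ("forced") solution of x'' + w0^2 x = F0 cos(w t)
    return F0 * math.cos(w * t) / (w0**2 - w**2)

def free(t):
    # a "free" solution of the homogeneous equation x'' + w0^2 x = 0
    return A * math.cos(w0 * t + 0.4)

def L_of(x, t, h=1e-4):
    # numerical L(x) = x'' + w0^2 x, via a central second difference
    return (x(t - h) - 2 * x(t) + x(t + h)) / h**2 + w0**2 * x(t)

t = 1.3
# L(forced + free) should reproduce the driving force F0 cos(w t)
residual = L_of(lambda s: forced(s) + free(s), t) - F0 * math.cos(w * t)
```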

 

Questions for discussion:

1. How would you define a linear operator?

2. How would you explain that the independent solutions of a second-order differential equation are linearly independent?

3. How would you explain that the forced solution will become a steady solution?

 

The moral of the lesson: we can combine two independent solutions to form a “free” solution, and we can combine the “free” solution with a “forced” solution to form a general solution (using two slightly different principles of superposition).

 

References:

1. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.

2. Sokolnikoff, I. S., & Redheffer, R. M. (1966). Mathematics of Physics and Modern Engineering (2nd Ed.). Singapore: McGraw-Hill.