CHAPTER 5 Series Solutions of ODEs. Special Functions

In Chaps. 2 and 3 we have seen that linear ODEs with constant coefficients can be solved by functions known from calculus. However, if a linear ODE has variable coefficients (functions of x), it must usually be solved by other methods, as we shall see in this chapter.

Legendre polynomials, Bessel functions, and eigenfunction expansions are the three main topics in this chapter. These are of greatest importance to the applied mathematician. Legendre's ODE and Legendre polynomials (Sec. 5.3) are likely to occur in problems showing spherical symmetry. They are obtained by the power series method (Secs. 5.1, 5.2), which gives solutions of ODEs in power series. Bessel's ODE and Bessel functions (Secs. 5.5, 5.6) are likely to occur in problems showing cylindrical symmetry. They are obtained by the Frobenius method (Sec. 5.4), an extension of the power series method which gives solutions of ODEs in power series, possibly multiplied by a logarithmic term or by a fractional power. Eigenfunction expansions (Sec. 5.8) are infinite series obtained by the Sturm-Liouville theory (Sec. 5.7). The terms of these series may be Legendre polynomials or other functions, and their coefficients are obtained by the orthogonality of those functions. These expansions include Fourier series in terms of cosine and sine, which are so important that we shall devote a whole chapter (Chap. 11) to them.

Special functions (also called higher functions) is a name for more advanced functions not considered in calculus. If a function occurs in many applications, it gets a name, and its properties and values are investigated in detail, resulting in hundreds of formulas which, together with the underlying theory, often fill whole books. This is what has happened to the gamma, Legendre, Bessel, and several other functions (take a look into Refs. [GR1], [GR10], [A11] in App. 1).
Your CAS knows most of the special functions and corresponding formulas that you will ever need in your later work in industry, and this chapter will give you a feel for the basics of their theory and their application in modeling.

COMMENT. You can study this chapter directly after Chap. 2 because it needs no material from Chaps. 3 or 4.

Prerequisite: Chap. 2.
Sections that may be omitted in a shorter course: 5.2, 5.6-5.8.
References and Answers to Problems: App. 1 Part A, and App. 2.

5.1 Power Series Method

The power series method is the standard method for solving linear ODEs with variable coefficients. It gives solutions in the form of power series. These series can be used for computing values, graphing curves, proving formulas, and exploring properties of solutions, as we shall see. In this section we begin by explaining the idea of the power series method.

Power Series

From calculus we recall that a power series (in powers of x − x_0) is an infinite series of the form

(1)  Σ_{m=0}^∞ a_m (x − x_0)^m = a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + ··· .

Here, x is a variable. a_0, a_1, a_2, ··· are constants, called the coefficients of the series. x_0 is a constant, called the center of the series. In particular, if x_0 = 0, we obtain a power series in powers of x,

(2)  Σ_{m=0}^∞ a_m x^m = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ··· .

We shall assume that all variables and constants are real. Familiar examples of power series are the Maclaurin series

1/(1 − x) = Σ_{m=0}^∞ x^m = 1 + x + x^2 + ···   (|x| < 1, geometric series)

e^x = Σ_{m=0}^∞ x^m/m! = 1 + x + x^2/2! + x^3/3! + ···

cos x = Σ_{m=0}^∞ (−1)^m x^{2m}/(2m)! = 1 − x^2/2! + x^4/4! − + ···

sin x = Σ_{m=0}^∞ (−1)^m x^{2m+1}/(2m + 1)! = x − x^3/3! + x^5/5! − + ··· .

We note that the term "power series" usually refers to a series of the form (1) [or (2)] but does not include series of negative or fractional powers of x. We use m as the summation letter, reserving n as a standard notation in the Legendre and Bessel equations for integer values of the parameter.
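These Maclaurin series can be checked on a computer by comparing partial sums against the calculus functions. A minimal sketch (the helper names and the truncation order 20 are our own choices, not from the text):

```python
import math

def maclaurin_exp(x, terms=20):
    # e^x = sum over m of x^m / m!
    return sum(x**m / math.factorial(m) for m in range(terms))

def maclaurin_cos(x, terms=20):
    # cos x = sum over m of (-1)^m x^(2m) / (2m)!
    return sum((-1)**m * x**(2*m) / math.factorial(2*m) for m in range(terms))

def maclaurin_sin(x, terms=20):
    # sin x = sum over m of (-1)^m x^(2m+1) / (2m+1)!
    return sum((-1)**m * x**(2*m+1) / math.factorial(2*m+1) for m in range(terms))
```

For moderate x the first twenty terms already agree with math.exp, math.cos, and math.sin to machine precision, which is the practical point of "computing values" from a series.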
Idea of the Power Series Method

The idea of the power series method for solving ODEs is simple and natural. We describe the practical procedure and illustrate it for two ODEs whose solution we know, so that we can see what is going on. The mathematical justification of the method follows in the next section.

For a given ODE

y″ + p(x)y′ + q(x)y = 0

we first represent p(x) and q(x) by power series in powers of x (or of x − x_0 if solutions in powers of x − x_0 are wanted). Often p(x) and q(x) are polynomials, and then nothing needs to be done in this first step. Next we assume a solution in the form of a power series with unknown coefficients,

(3)  y = Σ_{m=0}^∞ a_m x^m = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ···

and insert this series and the series obtained by termwise differentiation,

(4a)  y′ = Σ_{m=1}^∞ m a_m x^{m−1} = a_1 + 2a_2 x + 3a_3 x^2 + ···

(4b)  y″ = Σ_{m=2}^∞ m(m − 1) a_m x^{m−2} = 2a_2 + 3·2 a_3 x + 4·3 a_4 x^2 + ···

into the ODE. Then we collect like powers of x and equate the sum of the coefficients of each occurring power of x to zero, starting with the constant terms, then taking the terms containing x, then the terms in x^2, and so on. This gives equations from which we can determine the unknown coefficients of (3) successively. Let us show this for two simple ODEs that can also be solved by elementary methods, so that we would not need power series.

EXAMPLE 1  Solve the following ODE by power series. To grasp the idea, do this by hand; do not use your CAS (for which you could program the whole process).

y′ = 2xy.

Solution. We insert (3) and (4a) into the given ODE, obtaining

a_1 + 2a_2 x + 3a_3 x^2 + ··· = 2x(a_0 + a_1 x + a_2 x^2 + ···).

We must perform the multiplication by 2x on the right and can write the resulting equation conveniently as

a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + 5a_5 x^4 + 6a_6 x^5 + ··· = 2a_0 x + 2a_1 x^2 + 2a_2 x^3 + 2a_3 x^4 + 2a_4 x^5 + ··· .
For this equation to hold, the two coefficients of every power of x on both sides must be equal, that is,

a_1 = 0,  2a_2 = 2a_0,  3a_3 = 2a_1,  4a_4 = 2a_2,  5a_5 = 2a_3,  6a_6 = 2a_4,  ··· .

Hence a_3 = 0, a_5 = 0, ···, and for the coefficients with even subscripts,

a_2 = a_0,  a_4 = a_2/2 = a_0/2!,  a_6 = a_4/3 = a_0/3!,  ··· .

a_0 remains arbitrary. With these coefficients the series (3) gives the following solution, which you should confirm by the method of separating variables:

y = a_0 (1 + x^2 + x^4/2! + x^6/3! + ···) = a_0 e^{x^2}.

More rapidly, (3) and (4) give for the ODE y′ = 2xy

a_1 + Σ_{m=2}^∞ m a_m x^{m−1} = 2x Σ_{m=0}^∞ a_m x^m = Σ_{m=0}^∞ 2a_m x^{m+1}.

Now, to get the same general power on both sides, we make a "shift of index" on the left by setting m = s + 2, thus m − 1 = s + 1. Then a_m becomes a_{s+2} and x^{m−1} becomes x^{s+1}. Also the summation, which started with m = 2, now starts with s = 0 because s = m − 2. On the right we simply make a change of notation m = s, hence a_m = a_s and x^{m+1} = x^{s+1}; also the summation now starts with s = 0. This altogether gives

a_1 + Σ_{s=0}^∞ (s + 2)a_{s+2} x^{s+1} = Σ_{s=0}^∞ 2a_s x^{s+1}.

Every occurring power of x must have the same coefficient on both sides; hence

a_1 = 0  and  (s + 2)a_{s+2} = 2a_s,  or  a_{s+2} = 2a_s/(s + 2).

For s = 0, 1, 2, ··· we thus have a_2 = (2/2)a_0, a_3 = (2/3)a_1 = 0, a_4 = (2/4)a_2, ··· as before.

EXAMPLE 2  Solve y″ + y = 0.

Solution. By inserting (3) and (4b) into the ODE we have

Σ_{m=2}^∞ m(m − 1)a_m x^{m−2} + Σ_{m=0}^∞ a_m x^m = 0.

To obtain the same general power on both series, we set m = s + 2 in the first series and m = s in the second, and then we take the latter to the right side. This gives

Σ_{s=0}^∞ (s + 2)(s + 1)a_{s+2} x^s = − Σ_{s=0}^∞ a_s x^s.

Each power x^s must have the same coefficient on both sides. Hence (s + 2)(s + 1)a_{s+2} = −a_s. This gives the recursion formula

a_{s+2} = − a_s / [(s + 2)(s + 1)]   (s = 0, 1, ···).

We thus obtain successively

a_2 = −a_0/(2·1) = −a_0/2!,  a_3 = −a_1/(3·2) = −a_1/3!,
a_4 = −a_2/(4·3) = a_0/4!,  a_5 = −a_3/(5·4) = a_1/5!,

and so on. a_0 and a_1 remain arbitrary. With these coefficients the series (3) becomes

y = a_0 + a_1 x − (a_0/2!)x^2 − (a_1/3!)x^3 + (a_0/4!)x^4 + (a_1/5!)x^5 + ··· .
Reordering terms (which is permissible for a power series), we can write this in the form

y = a_0 (1 − x^2/2! + x^4/4! − + ···) + a_1 (x − x^3/3! + x^5/5! − + ···)

and we recognize the familiar general solution

y = a_0 cos x + a_1 sin x.

Do we need the power series method for these or similar ODEs? Of course not; we used them just for explaining the idea of the method. What happens if we apply the method to an ODE not of the kind considered so far, even to an innocent-looking one such as y″ + xy = 0 ("Airy's equation")? We most likely end up with new special functions given by power series. And if such an ODE and its solutions are of practical (or theoretical) interest, we name and investigate them in terms of formulas and graphs and by numeric methods. We shall discuss Legendre's, Bessel's, and the hypergeometric equations and their solutions, to mention just the most prominent of these ODEs. To do this with a good understanding, also in the light of your CAS, we first explain the power series method (and later an extension, the Frobenius method) in more detail.

PROBLEM SET 5.1

1-10  POWER SERIES METHOD: TECHNIQUE, FEATURES

Apply the power series method. Do this by hand, not by a CAS, so that you get a feel for the method, e.g., why a series may terminate, or has even powers only, or has no constant or linear terms, etc. Show the details of your work.

1. y′ − y = 0
2. y′ + xy = 0
3. y″ + 4y = 0
4. y″ − y = 0
5. (2 + x)y′ = y
6. y′ + 3(1 + x^2)y = 0
7. y′ = y + x
8. (x^5 + 4x^3)y′ = (5x^4 + 12x^2)y
9. y″ − y′ = 0
10. y″ − xy′ + y = 0

11-16  CAS PROBLEMS. INITIAL VALUE PROBLEMS

Solve the initial value problems by a power series. Graph the partial sum s of the powers up to and including x^5. Find the value of s (5 digits) at x_1.

11. y′ + 4y = 0,  y(0) = 1.25,  x_1 = 0.2
12. y′ = 1 + y^2,  y(0) = 0,  x_1 = ···
13. y′ = y − y^2,  y(0) = 1/2,  x_1 = 1
14. (x − 2)y′ = xy,  y(0) = 4,  x_1 = 2
15. y″ + 3xy′ + 2y = 0,  y(0) = 1,  y′(0) = 1,  x_1 = 0.5
16. ···,  y′(0) = 1.875,  x_1 = 0.5

17. WRITING PROJECT. Power Series.
Write a review (2-3 pages) on power series as they are discussed in calculus, using your own formulation and examples; do not just copy passages from calculus texts.

18. LITERATURE PROJECT. Maclaurin Series. Collect Maclaurin series of the functions known from calculus and arrange them systematically in a list that you can use for your work.

5.2 Theory of the Power Series Method

In the last section we saw that the power series method gives solutions of ODEs in the form of power series. In this section we justify the method mathematically as follows. We first review relevant facts on power series from calculus. Then we list the operations on power series needed in the method (differentiation, addition, multiplication, etc.). Near the end we state the basic existence theorem for power series solutions of ODEs.

Basic Concepts

Recall from calculus that a power series is an infinite series of the form

(1)  Σ_{m=0}^∞ a_m(x − x_0)^m = a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + ··· .

As before, we assume the variable x, the center x_0, and the coefficients a_0, a_1, ··· to be real. The nth partial sum of (1) is

(2)  s_n(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + ··· + a_n(x − x_0)^n

where n = 0, 1, ··· . Clearly, if we omit the terms of s_n from (1), the remaining expression is

(3)  R_n(x) = a_{n+1}(x − x_0)^{n+1} + a_{n+2}(x − x_0)^{n+2} + ··· .

This expression is called the remainder of (1) after the term a_n(x − x_0)^n. For example, in the case of the geometric series 1 + x + x^2 + ··· + x^n + ··· we have

s_0 = 1,  R_0 = x + x^2 + x^3 + ··· ,
s_1 = 1 + x,  R_1 = x^2 + x^3 + x^4 + ··· ,
s_2 = 1 + x + x^2,  R_2 = x^3 + x^4 + x^5 + ··· ,  etc.

In this way we have now associated with (1) the sequence of the partial sums s_0(x), s_1(x), s_2(x), ··· . If for some x = x_1 this sequence converges, say,

lim_{n→∞} s_n(x_1) = s(x_1),

then the series (1) is called convergent at x = x_1; the number s(x_1) is called the value or sum of (1) at x_1, and we write

s(x_1) = Σ_{m=0}^∞ a_m(x_1 − x_0)^m.
Then we have for every n,

(4)  s(x_1) = s_n(x_1) + R_n(x_1).

If that sequence diverges at x = x_1, the series (1) is called divergent at x = x_1. In the case of convergence, for any positive ε there is an N (depending on ε) such that, by (4),

(5)  |R_n(x_1)| = |s(x_1) − s_n(x_1)| < ε   for all n > N.

Geometrically, this means that all s_n(x_1) with n > N lie between s(x_1) − ε and s(x_1) + ε (Fig. 102). Practically, this means that in the case of convergence we can approximate the sum s(x_1) of (1) at x_1 by s_n(x_1) as accurately as we please, by taking n large enough.

Convergence Interval. Radius of Convergence

With respect to the convergence of the power series (1) there are three cases, the useless Case 1, the usual Case 2, and the best Case 3, as follows.

Case 1. The series (1) always converges at x = x_0, because for x = x_0 all its terms are zero, perhaps except for the first one, a_0. In exceptional cases x = x_0 may be the only x for which (1) converges. Such a series is of no practical interest.

Case 2. If there are further values of x for which the series converges, these values form an interval, called the convergence interval. If this interval is finite, it has the midpoint x_0, so that it is of the form

(6)  |x − x_0| < R   (Fig. 103)

and the series (1) converges for all x such that |x − x_0| < R and diverges for all x such that |x − x_0| > R. (No general statement about convergence or divergence can be made for x − x_0 = R or −R.) The number R is called the radius of convergence of (1). (R is called "radius" because for a complex power series it is the radius of a disk of convergence.) R can be obtained from either of the formulas

(7)  (a) R = 1 / lim_{m→∞} |a_m|^{1/m}   (b) R = 1 / lim_{m→∞} |a_{m+1}/a_m|

provided these limits exist and are not zero. [If these limits are infinite, then (1) converges only at the center x_0.]

Case 3. The convergence interval may sometimes be infinite, that is, (1) converges for all x.
For instance, if the limit in (7a) or (7b) is zero, this case occurs. One then writes R = ∞, for convenience. (Proofs of all these facts can be found in Sec. 15.2.)

For each x for which (1) converges, it has a certain value s(x). We say that (1) represents the function s(x) in the convergence interval and write

s(x) = Σ_{m=0}^∞ a_m(x − x_0)^m   (|x − x_0| < R).

Fig. 102. Inequality (5)

Fig. 103. Convergence interval (6) of a power series with center x_0

Let us illustrate these three possible cases with typical examples.

EXAMPLE 1  The Useless Case 1 of Convergence Only at the Center

In the case of the series

Σ_{m=0}^∞ m! x^m = 1 + x + 2x^2 + 6x^3 + ···

we have a_m = m!, and in (7b),

|a_{m+1}/a_m| = (m + 1)!/m! = m + 1 → ∞  as m → ∞.

Thus this series converges only at the center x = 0. Such a series is useless.

EXAMPLE 2  The Usual Case 2 of Convergence in a Finite Interval. Geometric Series

For the geometric series we have

1/(1 − x) = Σ_{m=0}^∞ x^m = 1 + x + x^2 + ···   (|x| < 1).

In fact, a_m = 1 for all m, and from (7) we obtain R = 1; that is, the geometric series converges and represents 1/(1 − x) when |x| < 1.

EXAMPLE 3  The Best Case 3 of Convergence for All x

In the case of the series

e^x = Σ_{m=0}^∞ x^m/m! = 1 + x + x^2/2! + ···

we have a_m = 1/m!. Hence in (7b),

|a_{m+1}/a_m| = [1/(m + 1)!] / [1/m!] = 1/(m + 1) → 0  as m → ∞,

so that the series converges for all x.

EXAMPLE 4  Hint for Some of the Problems

Find the radius of convergence of the series

Σ_{m=0}^∞ [(−1)^m/8^m] x^{3m} = 1 − x^3/8 + x^6/64 − x^9/512 + ··· .

Solution. This is a series in powers of t = x^3 with coefficients a_m = (−1)^m/8^m, so that in (7b),

|a_{m+1}/a_m| = 8^m/8^{m+1} = 1/8.

Thus R = 8. Hence the series converges for |t| = |x|^3 < 8, that is, |x| < 2.

Operations on Power Series

In the power series method we differentiate, add, and multiply power series. These three operations are permissible, in the sense explained in what follows.
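Before detailing these operations, the ratio formula (7b) used in Examples 1-4 can also be tried numerically. A minimal sketch with exact rational arithmetic (the helper name radius_by_ratio is our own, not from the text):

```python
from fractions import Fraction

def radius_by_ratio(coeff, terms=200):
    # Approximate R = 1 / lim |a_{m+1} / a_m| from (7b) by the last
    # computed ratio; coeff(m) returns the coefficient a_m.
    ratios = [abs(Fraction(coeff(m + 1)) / Fraction(coeff(m)))
              for m in range(terms)]
    return 1 / ratios[-1] if ratios[-1] != 0 else float('inf')

# Example 4: a_m = (-1)^m / 8^m as a series in t = x^3; every ratio is 1/8,
# so R = 8 in t, i.e. |x|^3 < 8 and |x| < 2.
R = radius_by_ratio(lambda m: Fraction((-1)**m, 8**m))
```

For the geometric series (a_m = 1) the same sketch gives R = 1, in agreement with Example 2; for series whose ratio tends to 0 or ∞ the finite truncation only suggests the limiting behavior.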
We also list a condition about the vanishing of all coefficients of a power series, which is a basic tool of the power series method. (Proofs can be found in Sec. 15.3.)

Termwise Differentiation

A power series may be differentiated term by term. More precisely: if

y(x) = Σ_{m=0}^∞ a_m(x − x_0)^m

converges for |x − x_0| < R, where R > 0, then the series obtained by differentiating term by term also converges for those x and represents the derivative y′ of y for those x, that is,

y′(x) = Σ_{m=1}^∞ m a_m(x − x_0)^{m−1}   (|x − x_0| < R).

Similarly,

y″(x) = Σ_{m=2}^∞ m(m − 1)a_m(x − x_0)^{m−2}   (|x − x_0| < R),  etc.

Termwise Addition

Two power series may be added term by term. More precisely: if the series

(8)  Σ_{m=0}^∞ a_m(x − x_0)^m  and  Σ_{m=0}^∞ b_m(x − x_0)^m

have positive radii of convergence and their sums are f(x) and g(x), then the series

Σ_{m=0}^∞ (a_m + b_m)(x − x_0)^m

converges and represents f(x) + g(x) for each x that lies in the interior of the convergence interval of each of the two given series.

Termwise Multiplication

Two power series may be multiplied term by term. More precisely: Suppose that the series (8) have positive radii of convergence and let f(x) and g(x) be their sums. Then the series obtained by multiplying each term of the first series by each term of the second series and collecting like powers of x − x_0, that is,

Σ_{m=0}^∞ (a_0 b_m + a_1 b_{m−1} + ··· + a_m b_0)(x − x_0)^m
  = a_0 b_0 + (a_0 b_1 + a_1 b_0)(x − x_0) + (a_0 b_2 + a_1 b_1 + a_2 b_0)(x − x_0)^2 + ···

converges and represents f(x)g(x) for each x in the interior of the convergence interval of each of the two given series.

Vanishing of All Coefficients

If a power series has a positive radius of convergence and a sum that is identically zero throughout its interval of convergence, then each coefficient of the series must be zero.

Existence of Power Series Solutions of ODEs.
Real Analytic Functions

The properties of power series just discussed form the foundation of the power series method. The remaining question is whether an ODE has power series solutions at all. An answer is simple: If the coefficients p and q and the function r on the right side of

(9)  y″ + p(x)y′ + q(x)y = r(x)

have power series representations, then (9) has power series solutions. The same is true if h̃, p̃, q̃, and r̃ in

(10)  h̃(x)y″ + p̃(x)y′ + q̃(x)y = r̃(x)

have power series representations and h̃(x_0) ≠ 0 (x_0 the center of the series). Almost all ODEs in practice have polynomials as coefficients (thus terminating power series), so that (when r(x) = 0 or is a power series, too) those conditions are satisfied, except perhaps the condition h̃(x_0) ≠ 0. If h̃(x_0) ≠ 0, division of (10) by h̃(x) gives (9) with p = p̃/h̃, q = q̃/h̃, r = r̃/h̃. This motivates our notation in (10).

To formulate all this in a precise and simple way, we use the following concept (which is of general interest).

DEFINITION  Real Analytic Function

A real function f(x) is called analytic at a point x = x_0 if it can be represented by a power series in powers of x − x_0 with radius of convergence R > 0.

Using this concept, we can state the following basic theorem.

THEOREM 1  Existence of Power Series Solutions

If p, q, and r in (9) are analytic at x = x_0, then every solution of (9) is analytic at x = x_0 and can thus be represented by a power series in powers of x − x_0 with radius of convergence R > 0. Hence the same is true if h̃, p̃, q̃, and r̃ in (10) are analytic at x = x_0 and h̃(x_0) ≠ 0.

The proof of this theorem requires advanced methods of complex analysis and can be found in Ref. [A11] listed in App. 1.

We mention that the radius of convergence R in Theorem 1 is at least equal to the distance from the point x = x_0 to the point (or points) closest to x_0 at which one of the functions p, q, r, as functions of a complex variable, is not analytic.
(Note that that point may not lie on the x-axis but somewhere in the complex plane.)

PROBLEM SET 5.2

1-12  RADIUS OF CONVERGENCE

Determine the radius of convergence. (Show the details.)

1. Σ ···
2. Σ ··· (x + 1)^{2m}
3. Σ ··· (x − 3)^{···}
4. Σ (−1)^m ···
5. Σ ··· /(2m)!
6. Σ (−1)^m ··· /[(2m + 2)(2m + 4)]
7. Σ ··· x^{2m+10}
8. Σ ··· (x − ···)^{2m}
9. Σ_{m=1}^∞ (4m)!/(m!)^4 ···
10. Σ (m − 3)^4/(2m)! ···
11. Σ ···
12. Σ (m + 1)m x^{2m+1}/(2m + 1)!

13-15  SHIFTING SUMMATION INDICES (CF. SEC. 5.1)

This is often convenient or necessary in the power series method. Shift the index so that the power under the summation sign is x^s. Check by writing the first few terms explicitly. Also determine the radius of convergence R.

13. Σ (−1)^m ··· x^{m+1}
14. Σ_{m=3}^∞ ···
15. Σ ··· x^{p+4}/(p + 1)!

16-23  POWER SERIES SOLUTIONS

Find a power series solution in powers of x. (Show the details of your work.)

16. y″ + xy = 0
17. y″ − y′ + x^2 y = 0
18. y″ − y′ + xy = 0
19. y″ + 4xy′ = 0
20. y″ + 2xy′ + y = 0
21. y″ + (1 + x^2)y = 0
22. y″ − 4xy′ + (4x^2 − 2)y = 0
23. (2x^2 − 3x + 1)y″ + 2xy′ − 2y = 0

24. TEAM PROJECT. Properties from Power Series. In the next sections we shall define new functions (Legendre functions, etc.) by power series, deriving properties of the functions directly from the series. To understand this idea, do the same for functions familiar from calculus, using Maclaurin series.

(a) Show that cosh x + sinh x = e^x. Show that cosh x > 0 for all x. Show that e^x ≠ e^{−x} for all x ≠ 0.

(b) Derive the differentiation formulas for e^x, cos x, sin x, 1/(1 − x) and other functions of your choice. Show that (cos x)″ = −cos x, (cosh x)″ = cosh x. Consider integration similarly.

(c) What can you conclude if a series contains only odd powers? Only even powers? No constant term? If all its coefficients are positive? Give examples.

(d) What properties of cos x and sin x are not obvious from the Maclaurin series? What properties of other functions?

25. CAS EXPERIMENT. Information from Graphs of Partial Sums.
In connection with power series in numerics we use partial sums. To get a feel for the accuracy for various x, experiment with sin x and graphs of partial sums of the Maclaurin series of an increasing number of terms, describing qualitatively the "breakaway points" of these graphs from the graph of sin x. Consider other examples of your own choice.

5.3 Legendre's Equation. Legendre Polynomials P_n(x)

In order to first gain skill, we have applied the power series method to ODEs that can also be solved by other methods. We now turn to the first "big" equation of physics, for which we do need the power series method. This is Legendre's equation¹

(1)  (1 − x^2)y″ − 2xy′ + n(n + 1)y = 0

where n is a given constant. Legendre's equation arises in numerous problems, particularly in boundary value problems for spheres (take a quick look at Example 1 in Sec. 12.10). The parameter n in (1) is a given real number. Any solution of (1) is called a Legendre function. The study of these and other "higher" functions not occurring in calculus is called the theory of special functions. Further special functions will occur in the next sections.

Dividing (1) by the coefficient 1 − x^2 of y″, we see that the coefficients −2x/(1 − x^2) and n(n + 1)/(1 − x^2) of the new equation are analytic at x = 0. Hence by Theorem 1 in Sec. 5.2, Legendre's equation has power series solutions of the form

(2)  y = Σ_{m=0}^∞ a_m x^m.

Substituting (2) and its derivatives into (1), and denoting the constant n(n + 1) simply by k, we obtain

(1 − x^2) Σ_{m=2}^∞ m(m − 1)a_m x^{m−2} − 2x Σ_{m=1}^∞ m a_m x^{m−1} + k Σ_{m=0}^∞ a_m x^m = 0.

By writing the first expression as two separate series we have the equation

Σ_{m=2}^∞ m(m − 1)a_m x^{m−2} − Σ_{m=2}^∞ m(m − 1)a_m x^m − Σ_{m=1}^∞ 2m a_m x^m + Σ_{m=0}^∞ k a_m x^m = 0.

To obtain the same general power x^s in all four series, we set m − 2 = s (thus m = s + 2) in the first series and simply write s instead of m in the other three series.
This gives

Σ_{s=0}^∞ (s + 2)(s + 1)a_{s+2} x^s − Σ_{s=2}^∞ s(s − 1)a_s x^s − Σ_{s=1}^∞ 2s a_s x^s + Σ_{s=0}^∞ k a_s x^s = 0.

(Note that in the first series the summation begins with s = 0.) Since this equation with right side 0 must be an identity in x if (2) is to be a solution of (1), the sum of the coefficients of each power of x on the left must be zero. Now x^0 occurs in the first and fourth series and gives [remember that k = n(n + 1)]

(3a)  2·1 a_2 + n(n + 1)a_0 = 0.

x^1 occurs in the first, third, and fourth series and gives

(3b)  3·2 a_3 + [−2 + n(n + 1)]a_1 = 0.

The higher powers x^2, x^3, ··· occur in all four series and give

(3c)  (s + 2)(s + 1)a_{s+2} + [s(s − 1) − 2s + n(n + 1)]a_s = 0.

The expression in the brackets [···] can be written (n − s)(n + s + 1), as you may readily verify. Solving (3a) for a_2 and (3b) for a_3 as well as (3c) for a_{s+2}, we obtain the general formula

(4)  a_{s+2} = − [(n − s)(n + s + 1)] / [(s + 2)(s + 1)] · a_s   (s = 0, 1, ···).

This is called a recurrence relation or recursion formula. (Its derivation you may verify with your CAS.) It gives each coefficient in terms of the second one preceding it, except for a_0 and a_1, which are left as arbitrary constants. We find successively

a_2 = − [n(n + 1)/2!] a_0
a_3 = − [(n − 1)(n + 2)/3!] a_1
a_4 = − [(n − 2)(n + 3)/(4·3)] a_2 = [(n − 2)n(n + 1)(n + 3)/4!] a_0
a_5 = − [(n − 3)(n + 4)/(5·4)] a_3 = [(n − 3)(n − 1)(n + 2)(n + 4)/5!] a_1

and so on. By inserting these expressions for the coefficients into (2) we obtain

(5)  y(x) = a_0 y_1(x) + a_1 y_2(x)

where

(6)  y_1(x) = 1 − [n(n + 1)/2!] x^2 + [(n − 2)n(n + 1)(n + 3)/4!] x^4 − + ···

¹ADRIEN-MARIE LEGENDRE (1752-1833), French mathematician, who became a professor in Paris in 1775 and made important contributions to special functions, elliptic integrals, number theory, and the calculus of variations. His book Éléments de géométrie (1794) became very famous and had 12 editions in less than 30 years. Formulas on Legendre functions may be found in Refs. [GR1] and [GR10].
(7)  y_2(x) = x − [(n − 1)(n + 2)/3!] x^3 + [(n − 3)(n − 1)(n + 2)(n + 4)/5!] x^5 − + ··· .

These series converge for |x| < 1 (see Prob. 4; or they may terminate, see below). Since (6) contains even powers of x only, while (7) contains odd powers of x only, the ratio y_1/y_2 is not a constant, so that y_1 and y_2 are not proportional and are thus linearly independent solutions. Hence (5) is a general solution of (1) on the interval −1 < x < 1.

Legendre Polynomials P_n(x)

In various applications, power series solutions of ODEs reduce to polynomials, that is, they terminate after finitely many terms. This is a great advantage and is quite common for special functions, leading to various important families of polynomials (see Refs. [GR1] or [GR10] in App. 1). For Legendre's equation this happens when the parameter n is a nonnegative integer because then the right side of (4) is zero for s = n, so that a_{n+2} = 0, a_{n+4} = 0, a_{n+6} = 0, ··· . Hence if n is even, y_1(x) reduces to a polynomial of degree n. If n is odd, the same is true for y_2(x). These polynomials, multiplied by some constants, are called Legendre polynomials and are denoted by P_n(x).

The standard choice of a constant is done as follows. We choose the coefficient a_n of the highest power x^n as

(8)  a_n = (2n)! / [2^n (n!)^2] = [1·3·5···(2n − 1)] / n!   (n a positive integer)

(and a_n = 1 if n = 0). Then we calculate the other coefficients from (4), solved for a_s in terms of a_{s+2}, that is,

(9)  a_s = − [(s + 2)(s + 1)] / [(n − s)(n + s + 1)] · a_{s+2}   (s ≤ n − 2).

The choice (8) makes P_n(1) = 1 for every n (see Fig. 104); this motivates (8). From (9) with s = n − 2 and (8) we obtain

a_{n−2} = − [n(n − 1)] / [2(2n − 1)] · a_n = − [n(n − 1)(2n)!] / [2(2n − 1)2^n (n!)^2].

Using (2n)! = 2n(2n − 1)(2n − 2)!, n! = n(n − 1)!, and n! = n(n − 1)(n − 2)!, we obtain

a_{n−2} = − [n(n − 1)·2n(2n − 1)(2n − 2)!] / [2(2n − 1)2^n n(n − 1)! n(n − 1)(n − 2)!].

n(n − 1)·2n(2n − 1) cancels, so that we get

a_{n−2} = − (2n − 2)! / [2^n 1! (n − 1)! (n − 2)!].
Similarly,

a_{n−4} = − [(n − 2)(n − 3)] / [4(2n − 3)] · a_{n−2} = (2n − 4)! / [2^n 2! (n − 2)! (n − 4)!]

and so on, and in general, when n − 2m ≥ 0,

(10)  a_{n−2m} = (−1)^m (2n − 2m)! / [2^n m! (n − m)! (n − 2m)!].

The resulting solution of Legendre's differential equation (1) is called the Legendre polynomial of degree n and is denoted by P_n(x). From (10) we obtain

(11)  P_n(x) = Σ_{m=0}^M (−1)^m (2n − 2m)! / [2^n m! (n − m)! (n − 2m)!] · x^{n−2m}
            = [(2n)!/(2^n (n!)^2)] x^n − [(2n − 2)!/(2^n 1! (n − 1)! (n − 2)!)] x^{n−2} + − ···

where M = n/2 or (n − 1)/2, whichever is an integer. The first few of these functions are (Fig. 104)

(11′)  P_0(x) = 1,  P_1(x) = x,
       P_2(x) = (1/2)(3x^2 − 1),  P_3(x) = (1/2)(5x^3 − 3x),
       P_4(x) = (1/8)(35x^4 − 30x^2 + 3),  P_5(x) = (1/8)(63x^5 − 70x^3 + 15x)

and so on. You may now program (11) on your CAS and calculate P_n(x) as needed. The so-called orthogonality of the Legendre polynomials will be considered in Secs. 5.7 and 5.8.

Fig. 104. Legendre polynomials

PROBLEM SET 5.3

1. Verify that the polynomials in (11′) satisfy Legendre's equation.
2. Derive (11′) from (11).
3. Obtain P_6 and P_7 from (11).
4. (Convergence) Show that for any n for which (6) or (7) does not reduce to a polynomial the series has radius of convergence 1.
5. (Legendre function Q_0(x) for n = 0) Show that (6) with n = 0 gives y_1(x) = P_0(x) = 1 and (7) gives

y_2(x) = x + x^3/3 + x^5/5 + ··· = (1/2) ln [(1 + x)/(1 − x)] = Q_0(x).

Verify this by solving (1) with n = 0, setting z = y′ and separating variables.
6. (Legendre function −Q_1(x) for n = 1) Show that (7) with n = 1 gives y_2(x) = P_1(x) = x and (6) gives

y_1(x) = 1 − x^2 − x^4/3 − x^6/5 − ··· = 1 − (x/2) ln [(1 + x)/(1 − x)] = −Q_1(x)

(the minus sign in the notation being conventional).
7. (ODE) Find a solution of (a^2 − x^2)y″ − 2xy′ + n(n + 1)y = 0, a ≠ 0, by reduction to the Legendre equation.
8.
[Rodrigues's formula (12)]² Applying the binomial theorem to (x^2 − 1)^n, differentiating it n times term by term, and comparing the result with (11), show that

(12)  P_n(x) = [1/(2^n n!)] d^n/dx^n [(x^2 − 1)^n].

9. (Rodrigues's formula) Obtain (11′) from (12).

10-13  CAS PROBLEMS

10. Graph P_2(x), ···, P_10(x) on common axes. For what x (approximately) and n = 2, ···, 10 is |P_n(x)| < 1/2?
11. From what n on will your CAS no longer produce faithful graphs of P_n(x)? Why?
12. Graph Q_0(x), Q_1(x), and some further Legendre functions.
13. Substitute a_s x^s + ··· .

14. TEAM PROJECT. Generating Function.

(a) Show that

(13)  1/√(1 − 2xu + u^2) = Σ_{n=0}^∞ P_n(x) u^n.

(b) Potential theory. Let A_1 and A_2 be two points in space (Fig. 105, r_2 > 0). Using (13), show that

1/r = 1/√(r_1^2 + r_2^2 − 2r_1 r_2 cos θ) = (1/r_2) Σ_{n=0}^∞ (r_1/r_2)^n P_n(cos θ).

This formula has applications in potential theory. (Q/r is the electrostatic potential at A_2 due to a charge Q located at A_1. And the series expresses 1/r in terms of the distances of A_1 and A_2 from any origin O and the angle θ between the segments OA_1 and OA_2.)

Fig. 105. Team Project 14

(c) Further applications of (13). Show that P_n(1) = 1, P_n(−1) = (−1)^n, P_{2n+1}(0) = 0, and P_{2n}(0) = (−1)^n · 1·3···(2n − 1)/[2·4···(2n)].

(d) Bonnet's recursion.³ Differentiating (13) with respect to u, using (13) in the resulting formula, and comparing coefficients of u^n, obtain the Bonnet recursion

(14)  (n + 1)P_{n+1}(x) = (2n + 1)x P_n(x) − n P_{n−1}(x),

where n = 1, 2, ··· . This formula is useful for computations, the loss of significant digits being small (except near zeros). Try (14) out for a few computations of your own choice.

²OLINDE RODRIGUES (1794-1851), French mathematician and economist.
³OSSIAN BONNET (1819-1892), French mathematician, whose main work was in differential geometry.

15. (Associated Legendre functions) The associated Legendre functions P_n^k(x) play a role in quantum physics. They are defined by

(15)  P_n^k(x) = (1 − x^2)^{k/2} d^k P_n(x)/dx^k

and are solutions of the ODE

(16)  (1 − x^2)y″ − 2xy′ + [n(n + 1) − k^2/(1 − x^2)]y = 0.

Find P_1^1(x), P_2^1(x), P_2^2(x), and P_4^2(x) and verify that they satisfy (16).
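Bonnet's recursion (14) is convenient to program. A minimal sketch (the helper names are our own), building the coefficient lists of P_0, ···, P_N with exact rational arithmetic and reproducing the list (11′):

```python
from fractions import Fraction

def legendre_polys(N):
    # Coefficient lists [c_0, c_1, ...] for P_0, ..., P_N via Bonnet's
    # recursion (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}.
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]   # P_0 = 1, P_1 = x
    for n in range(1, N):
        xPn = [Fraction(0)] + P[n]                    # multiply P_n by x
        nxt = [Fraction(2 * n + 1) * c for c in xPn]
        for k, c in enumerate(P[n - 1]):
            nxt[k] -= n * c                           # subtract n P_{n-1}
        P.append([c / (n + 1) for c in nxt])          # divide by n + 1
    return P

def eval_poly(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))
```

The recursion reproduces P_2 = (3x^2 − 1)/2, P_5 = (63x^5 − 70x^3 + 15x)/8, and so on, and the normalization P_n(1) = 1 from (8) holds for every n, with no loss of digits since the arithmetic is exact.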
5.4 Frobenius Method

Several second-order ODEs of considerable practical importance (the famous Bessel equation among them) have coefficients that are not analytic (definition in Sec. 5.2), but are "not too bad," so that these ODEs can still be solved by series (power series times a logarithm or times a fractional power of x, etc.). Indeed, the following theorem permits an extension of the power series method that is called the Frobenius method. The latter, as well as the power series method itself, has gained in significance due to the use of software in the actual calculations.

THEOREM 1  Frobenius Method

Let b(x) and c(x) be any functions that are analytic at x = 0. Then the ODE

(1)  y″ + [b(x)/x] y′ + [c(x)/x^2] y = 0

has at least one solution that can be represented in the form

(2)  y(x) = x^r Σ_{m=0}^∞ a_m x^m = x^r (a_0 + a_1 x + a_2 x^2 + ···)   (a_0 ≠ 0)

where the exponent r may be any (real or complex) number (and r is chosen so that a_0 ≠ 0).

The ODE (1) also has a second solution (such that these two solutions are linearly independent) that may be similar to (2) (with a different r and different coefficients) or may contain a logarithmic term. (Details in Theorem 2 below.)⁴

For example, Bessel's equation

y″ + (1/x) y′ + [(x^2 − ν^2)/x^2] y = 0   (ν a parameter)

is of the form (1) with b(x) = 1 and c(x) = x^2 − ν^2 analytic at x = 0, so that the theorem applies. This ODE could not be handled in full generality by the power series method. Similarly, the so-called hypergeometric differential equation (see Problem Set 5.4) also requires the Frobenius method.

⁴GEORG FROBENIUS (1849-1917), German mathematician, also known for his work on matrices and in group theory. In this theorem we may replace x by x − x_0 with any number x_0. The condition a_0 ≠ 0 is no restriction; it simply means that we factor out the highest possible power of x. The singular point of (1) at x = 0 is sometimes called a regular singular point, a term confusing to the student, which we shall not use.
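As a concrete illustration of the theorem: for ν = 1/2, one solution of Bessel's equation is y = x^{−1/2} sin x = x^{1/2}(1 − x^2/3! + x^4/5! − ···), a power series times the fractional power x^{1/2}, exactly the form (2) with r = 1/2. A quick numerical check by central differences (a sketch; the step size h and the helper name are our own choices):

```python
import math

def bessel_residual(y, x, h=1e-4, nu=0.5):
    # Residual of y'' + y'/x + (1 - nu^2/x^2) y at x,
    # with central-difference approximations to y' and y''.
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + d1 / x + (1 - nu**2 / x**2) * y(x)

y = lambda x: math.sin(x) / math.sqrt(x)
# The residual should be near zero at several sample points, e.g. x = 1, 2, 3.
```

The tiny residuals confirm numerically that this fractional-power series solves the ν = 1/2 equation, something no ordinary power series in x could do.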
The point is that in (2) we have a power series times a single power of x whose exponent r is not restricted to be a nonnegative integer. (The latter restriction would make the whole expression a power series, by definition; see Sec. 5.1.)

The proof of the theorem requires advanced methods of complex analysis and can be found in Ref. [A11] listed in App. 1.

Regular and Singular Points

The following commonly used terms are practical. A regular point of

y'' + p(x) y' + q(x) y = 0

is a point x_0 at which the coefficients p and q are analytic. Then the power series method can be applied. If x_0 is not regular, it is called singular.

Similarly, a regular point of the ODE

h(x) y'' + p(x) y' + q(x) y = 0

is an x_0 at which h, p, q are analytic and h(x_0) ≠ 0 (so that we can divide by h and get the previous standard form). If x_0 is not regular, it is called singular.

Indicial Equation, Indicating the Form of Solutions

We shall now explain the Frobenius method for solving (1). Multiplication of (1) by x^2 gives the more convenient form

(1')  x^2 y'' + x b(x) y' + c(x) y = 0.

We first expand b(x) and c(x) in power series,

b(x) = b_0 + b_1 x + b_2 x^2 + ···,   c(x) = c_0 + c_1 x + c_2 x^2 + ···

(or we do nothing if b(x) and c(x) are polynomials). Then we differentiate (2) term by term, finding

(2*)  y'(x) = Σ_{m=0}^∞ (m + r) a_m x^{m+r−1} = x^{r−1} [r a_0 + (r + 1) a_1 x + ···]
      y''(x) = Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r−2} = x^{r−2} [r(r − 1) a_0 + (r + 1) r a_1 x + ···].

By inserting all these series into (1') we readily obtain

(3)  x^r [r(r − 1) a_0 + ···] + (b_0 + b_1 x + ···) x^r (r a_0 + ···) + (c_0 + c_1 x + ···) x^r (a_0 + a_1 x + ···) = 0.

We now equate the sum of the coefficients of each power x^r, x^{r+1}, x^{r+2}, ··· to zero. This yields a system of equations involving the unknown coefficients a_m. The equation corresponding to the power x^r is

[r(r − 1) + b_0 r + c_0] a_0 = 0.

Since by assumption a_0 ≠ 0, the expression in brackets [···] must be zero. This gives

(4)  r(r − 1) + b_0 r + c_0 = 0.
This important quadratic equation is called the indicial equation of the ODE (1). Its role is as follows.

The Frobenius method yields a basis of solutions. One of the two solutions will always be of the form (2), where r is a root of (4). The other solution will be of a form indicated by the indicial equation. There are three cases:

Case 1. Distinct roots not differing by an integer 1, 2, 3, ···.
Case 2. A double root.
Case 3. Roots differing by an integer 1, 2, 3, ···.

Cases 1 and 2 are not unexpected because of the Euler-Cauchy equation (Sec. 2.5), the simplest ODE of the form (1). Case 1 includes complex conjugate roots r_1 and r_2 = r̄_1 because r_1 − r_2 = r_1 − r̄_1 = 2i Im r_1 is imaginary, so it cannot be a real integer. The form of a basis will be given in Theorem 2 (which is proved in App. 4), without a general theory of convergence, but convergence of the occurring series can be tested in each individual case as usual. Note that in Case 2 we must have a logarithm, whereas in Case 3 we may or may not.

THEOREM 2  Frobenius Method. Basis of Solutions. Three Cases

Suppose that the ODE (1) satisfies the assumptions in Theorem 1. Let r_1 and r_2 be the roots of the indicial equation (4). Then we have the following three cases.

Case 1. Distinct Roots Not Differing by an Integer. A basis is

(5)  y_1(x) = x^{r_1} (a_0 + a_1 x + a_2 x^2 + ···)

and

(6)  y_2(x) = x^{r_2} (A_0 + A_1 x + A_2 x^2 + ···)

with coefficients obtained successively from (3) with r = r_1 and r = r_2, respectively.

Case 2. Double Root r_1 = r_2 = r. A basis is

(7)  y_1(x) = x^r (a_0 + a_1 x + a_2 x^2 + ···)   [r = (1 − b_0)/2]

(of the same general form as before) and

(8)  y_2(x) = y_1(x) ln x + x^r (A_1 x + A_2 x^2 + ···)   (x > 0).

Case 3. Roots Differing by an Integer. A basis is

(9)  y_1(x) = x^{r_1} (a_0 + a_1 x + a_2 x^2 + ···)

(of the same general form as before) and

(10)  y_2(x) = k y_1(x) ln x + x^{r_2} (A_0 + A_1 x + A_2 x^2 + ···),

where the roots are so denoted that r_1 − r_2 > 0 and k may turn out to be zero.
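The case distinction of Theorem 2 depends only on the roots of (4), that is, on b_0 and c_0. A short sketch (our own illustration; the function name is ours) computes the roots of r(r − 1) + b_0 r + c_0 = 0 and reports the case; the two sample calls correspond to Examples 2 and 3 below, where b_0 = 1, c_0 = 0 gives the double root r = 0 and b_0 = c_0 = 0 gives the roots 1 and 0:

```python
import cmath

# Classify the Frobenius cases from the indicial equation
# r(r - 1) + b0*r + c0 = 0  (a sketch, names are ours)
def frobenius_case(b0, c0):
    disc = cmath.sqrt((b0 - 1) ** 2 - 4 * c0)
    r1 = ((1 - b0) + disc) / 2
    r2 = ((1 - b0) - disc) / 2
    d = r1 - r2
    if abs(d) < 1e-12:
        return r1, r2, "Case 2 (double root)"
    if abs(d.imag) < 1e-12 and abs(d.real - round(d.real)) < 1e-12:
        return r1, r2, "Case 3 (roots differ by an integer)"
    return r1, r2, "Case 1 (roots do not differ by an integer)"

# b0 = 1, c0 = 0: Example 2;  b0 = c0 = 0: Example 3;
# b0 = 0.5, c0 = -0.5: roots 1 and -1/2, a Case 1 instance
for b0, c0 in [(1, 0), (0, 0), (0.5, -0.5)]:
    print(frobenius_case(b0, c0)[2])
```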
Typical Applications

Technically, the Frobenius method is similar to the power series method, once the roots of the indicial equation have been determined. However, (5)-(10) merely indicate the general form of a basis, and a second solution can often be obtained more rapidly by reduction of order (Sec. 2.1).

EXAMPLE 1  Euler-Cauchy Equation, Illustrating Cases 1 and 2 and Case 3 without a Logarithm

For the Euler-Cauchy equation (Sec. 2.5)

x^2 y'' + b_0 x y' + c_0 y = 0   (b_0, c_0 constant)

substitution of y = x^r gives the auxiliary equation r(r − 1) + b_0 r + c_0 = 0, which is the indicial equation [and y = x^r is a very special form of (2)!]. For distinct roots r_1, r_2 we get a basis y_1 = x^{r_1}, y_2 = x^{r_2}, and for a double root r we get a basis x^r, x^r ln x. Accordingly, for this simple ODE, Case 3 plays no extra role.

EXAMPLE 2  Illustration of Case 2 (Double Root)

Solve the ODE

(11)  x(x − 1) y'' + (3x − 1) y' + y = 0.

(This is a special hypergeometric equation, as we shall see in the problem set.)

Solution. Writing (11) in the standard form (1), we see that it satisfies the assumptions in Theorem 1. [What are b(x) and c(x) in (11)?] By inserting (2) and its derivatives (2*) into (11) we obtain

(12)  Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r} − Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r−1}
      + 3 Σ_{m=0}^∞ (m + r) a_m x^{m+r} − Σ_{m=0}^∞ (m + r) a_m x^{m+r−1} + Σ_{m=0}^∞ a_m x^{m+r} = 0.

The smallest power is x^{r−1}, occurring in the second and the fourth series; by equating the sum of its coefficients to zero we have

[−r(r − 1) − r] a_0 = 0,   thus   r^2 = 0.

Hence this indicial equation has the double root r = 0.

First Solution. We insert this value r = 0 into (12) and equate the sum of the coefficients of the power x^s to zero, obtaining

s(s − 1) a_s − (s + 1)s a_{s+1} + 3s a_s − (s + 1) a_{s+1} + a_s = 0,

thus a_{s+1} = a_s. Hence a_0 = a_1 = a_2 = ···, and by choosing a_0 = 1 we obtain the solution

y_1(x) = Σ_{m=0}^∞ x^m = 1/(1 − x)   (|x| < 1).

Second Solution.
We get a second independent solution y_2 by the method of reduction of order (Sec. 2.1), substituting y_2 = u y_1 and its derivatives into the equation. This leads to (9), Sec. 2.1, which we shall use in this example, instead of starting reduction of order from scratch (as we shall do in the next example). In (9) of Sec. 2.1 we have p = (3x − 1)/(x^2 − x), the coefficient of y' in (11) in standard form. By partial fractions,

−∫ p dx = −∫ (3x − 1)/(x(x − 1)) dx = −∫ [2/(x − 1) + 1/x] dx = −2 ln (x − 1) − ln x.

Hence (9), Sec. 2.1, becomes

u' = U = (1/y_1^2) e^{−∫ p dx} = (x − 1)^2 / ((x − 1)^2 x) = 1/x,   u = ln x,   y_2 = u y_1 = (ln x)/(1 − x).

y_1 and y_2 are shown in Fig. 106. These functions are linearly independent and thus form a basis on the interval 0 < x < 1 (as well as on 1 < x < ∞).

Fig. 106. Solutions in Example 2

EXAMPLE 3  Case 3, Second Solution with Logarithmic Term

Solve the ODE

(13)  (x^2 − x) y'' − x y' + y = 0.

Solution. Substituting (2) and (2*) into (13), we have

(x^2 − x) Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r−2} − x Σ_{m=0}^∞ (m + r) a_m x^{m+r−1} + Σ_{m=0}^∞ a_m x^{m+r} = 0.

We now take x^2, x, and x inside the summations, collect all terms with the power x^{m+r}, and simplify algebraically,

Σ_{m=0}^∞ (m + r − 1)^2 a_m x^{m+r} − Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r−1} = 0.

In the first series we set m = s and in the second m = s + 1, thus s = m − 1. Then

(14)  Σ_{s=0}^∞ (s + r − 1)^2 a_s x^{s+r} − Σ_{s=−1}^∞ (s + r + 1)(s + r) a_{s+1} x^{s+r} = 0.

The lowest power is x^{r−1} (take s = −1 in the second series) and gives the indicial equation

r(r − 1) = 0.

The roots are r_1 = 1 and r_2 = 0. They differ by an integer. This is Case 3.

First Solution. From (14) with r = r_1 = 1 we have

Σ_{s=0}^∞ [s^2 a_s − (s + 2)(s + 1) a_{s+1}] x^{s+1} = 0.

This gives the recurrence relation

a_{s+1} = s^2 a_s / ((s + 2)(s + 1))   (s = 0, 1, ···).

Hence a_1 = 0, a_2 = 0, ··· successively. Taking a_0 = 1, we get as a first solution y_1 = x^{r_1} a_0 = x.

Second Solution. Applying reduction of order (Sec. 2.1), we substitute y_2 = y_1 u = x u, y_2' = x u' + u and y_2'' = x u'' + 2u' into the ODE, obtaining

(x^2 − x)(x u'' + 2u') − x(x u' + u) + x u = 0.

xu drops out. Division by x and simplification give

(x^2 − x) u'' + (x − 2) u' = 0.
From this, using partial fractions and integrating (taking the integration constant zero), we get

u''/u' = −(x − 2)/(x^2 − x) = −2/x + 1/(x − 1),   ln u' = −2 ln x + ln (x − 1) = ln ((x − 1)/x^2).

Taking exponents and integrating (again taking the integration constant zero), we obtain

u' = (x − 1)/x^2 = 1/x − 1/x^2,   u = ln x + 1/x,   y_2 = x u = x ln x + 1.

y_1 and y_2 are linearly independent, and y_2 has a logarithmic term. Hence y_1 and y_2 constitute a basis of solutions for positive x.

The Frobenius method solves the hypergeometric equation, whose solutions include many known functions as special cases (see the problem set). In the next section we use the method for solving Bessel's equation.

PROBLEM SET 5.4

1-17  BASIS OF SOLUTIONS BY THE FROBENIUS METHOD
Find a basis of solutions. Try to identify the series as expansions of known functions. (Show the details of your work.)
1. xy'' + 2y' − xy = 0
2. (x + 2)^2 y'' − 2y = 0
3. xy'' + 5y' + xy = 0
4. 2xy'' + (3 − 4x)y' + (2x − 3)y = 0
5. x^2 y'' + 4xy' + (x^2 + 2)y = 0
6. 4xy'' + 2y' + y = 0
7. (x + 3)^2 y'' − 9(x + 3)y' + 25y = 0
8. xy'' − y = 0
9. xy'' + (2x + 1)y' + (x + 1)y = 0
10. x^2 y'' + xy' + (x^2 − 2)y = 0
11. (x^2 + x)y'' + (4x + 2)y' + 2y = 0
12. x^2 y'' + 6xy' + (4x^2 + 6)y = 0
13. 2xy'' − (8x − 1)y' + (8x − 2)y = 0
14. xy'' + y' − xy = 0
15. (x − 4)^2 y'' − (x − 4)y' − 35y = 0
16. x^2 y'' + 4xy' − (x^2 − 2)y = 0
17. y'' + (x − 6)y = 0

18. TEAM PROJECT. Hypergeometric Equation, Series, and Function. Gauss's hypergeometric ODE^5 is

(15)  x(1 − x)y'' + [c − (a + b + 1)x]y' − aby = 0.

Here, a, b, c are constants. This ODE is of the form p_2 y'' + p_1 y' + p_0 y = 0, where p_2, p_1, p_0 are polynomials of degree 2, 1, 0, respectively. These polynomials are written so that the series solution takes a most practical form, namely,

(16)  y_1(x) = 1 + (ab/(1! c)) x + (a(a + 1)b(b + 1)/(2! c(c + 1))) x^2 + (a(a + 1)(a + 2)b(b + 1)(b + 2)/(3! c(c + 1)(c + 2))) x^3 + ···.

This series is called the hypergeometric series. (Two special cases, belonging with the list in part (c): ln (1 + x) = xF(1, 1, 2; −x), ln ((1 + x)/(1 − x)) = 2xF(1/2, 1, 3/2; x^2).)
Its sum y_1(x) is called the hypergeometric function and is denoted by F(a, b, c; x). Here, c ≠ 0, −1, −2, ···. By choosing specific values of a, b, c we can obtain an incredibly large number of special functions as solutions of (15) [see the small sample of elementary functions in part (c)]. This accounts for the importance of (15).

(a) Hypergeometric series and function. Show that the indicial equation of (15) has the roots r_1 = 0 and r_2 = 1 − c. Show that for r_1 = 0 the Frobenius method gives (16). Motivate the name for (16) by showing that

F(1, 1, 1; x) = F(1, b, b; x) = F(a, 1, a; x) = 1/(1 − x).

(b) Convergence. For what a or b will (16) reduce to a polynomial? Show that for any other a, b, c (c ≠ 0, −1, −2, ···) the series (16) converges when |x| < 1.

(c) Special cases. Show that

(1 + x)^n = F(−n, b, b; −x),   (1 − x)^n = 1 − nxF(1 − n, 1, 2; x),
arctan x = xF(1/2, 1, 3/2; −x^2),   arcsin x = xF(1/2, 1/2, 3/2; x^2).

Find more such relations from the literature on special functions.

(d) Second solution. Show that for r_2 = 1 − c the Frobenius method yields the following solution (where c ≠ 2, 3, 4, ···):

(17)  y_2(x) = x^{1−c} [1 + ((a − c + 1)(b − c + 1)/(1! (2 − c))) x + ((a − c + 1)(a − c + 2)(b − c + 1)(b − c + 2)/(2! (2 − c)(3 − c))) x^2 + ···].

Show that y_2(x) = x^{1−c} F(a − c + 1, b − c + 1, 2 − c; x).

(e) On the generality of the hypergeometric equation. Show that

(18)  (t^2 + At + B)ÿ + (Ct + D)ẏ + Ky = 0

with ẏ = dy/dt, etc., constants A, B, C, D, K, and t^2 + At + B = (t − t_1)(t − t_2), t_1 ≠ t_2, can be reduced to the hypergeometric equation with independent variable

x = (t − t_1)/(t_2 − t_1)

and parameters related by Ct_1 + D = −c(t_2 − t_1), C = a + b + 1, K = ab. From this you see that (15) is a "normalized form" of the more general (18) and that various cases of (18) can thus be solved in terms of hypergeometric functions.

19-24  HYPERGEOMETRIC EQUATIONS
Find a general solution in terms of hypergeometric functions.
19. x(1 − x)y'' + (1 − 2x)y' − (1/4)y = 0
20. 2x(1 − x)y'' − (1 + 6x)y' − 2y = 0
21. x(1 − x)y'' + (1/2)y' + 2y = 0
22. 3t(1 + t)ÿ + tẏ − y = 0
23.
2(t^2 − 5t + 6)ÿ + (2t − 3)ẏ − 8y = 0
24. 4(t^2 − 3t + 2)ÿ − 2ẏ + y = 0

^5 CARL FRIEDRICH GAUSS (1777-1855), great German mathematician. He already made the first of his great discoveries as a student at Helmstedt and Göttingen. In 1807 he became a professor and director of the Observatory at Göttingen. His work was of basic importance in algebra, number theory, differential equations, differential geometry, non-Euclidean geometry, complex analysis, numeric analysis, astronomy, geodesy, electromagnetism, and theoretical mechanics. He also paved the way for a general and systematic use of complex numbers.

5.5 Bessel's Equation. Bessel Functions J_ν(x)

One of the most important ODEs in applied mathematics is Bessel's equation,^6

(1)  x^2 y'' + x y' + (x^2 − ν^2) y = 0.

Its diverse applications range from electric fields to heat conduction and vibrations (see Sec. 12.9). It often appears when a problem shows cylindrical symmetry (just as Legendre's equation may appear in cases of spherical symmetry). The parameter ν in (1) is a given number. We assume that ν is real and nonnegative.

Bessel's equation can be solved by the Frobenius method, as we mentioned at the beginning of the preceding section, where the equation is written in standard form (obtained by dividing (1) by x^2). Accordingly, we substitute the series

(2)  y(x) = Σ_{m=0}^∞ a_m x^{m+r}   (a_0 ≠ 0)

with undetermined coefficients and its derivatives into (1). This gives

Σ_{m=0}^∞ (m + r)(m + r − 1) a_m x^{m+r} + Σ_{m=0}^∞ (m + r) a_m x^{m+r} + Σ_{m=0}^∞ a_m x^{m+r+2} − ν^2 Σ_{m=0}^∞ a_m x^{m+r} = 0.

We equate the sum of the coefficients of x^{s+r} to zero. Note that this power x^{s+r} corresponds to m = s in the first, second, and fourth series, and to m = s − 2 in the third series. Hence for s = 0 and s = 1, the third series does not contribute since m ≥ 0. For s = 2, 3, ··· all four series contribute, so that we get a general formula for all these s.
We find

(3)  (a)  r(r − 1) a_0 + r a_0 − ν^2 a_0 = 0   (s = 0)
     (b)  (r + 1) r a_1 + (r + 1) a_1 − ν^2 a_1 = 0   (s = 1)
     (c)  (s + r)(s + r − 1) a_s + (s + r) a_s + a_{s−2} − ν^2 a_s = 0   (s = 2, 3, ···).

From (3a) we obtain the indicial equation by dropping a_0,

(4)  (r + ν)(r − ν) = 0.

The roots are r_1 = ν (≥ 0) and r_2 = −ν.

^6 FRIEDRICH WILHELM BESSEL (1784-1846), German astronomer and mathematician, studied astronomy on his own in his spare time as an apprentice of a trade company and finally became director of the new Königsberg Observatory. Formulas on Bessel functions are contained in Ref. [GR1] and the standard treatise [A13].

Coefficient Recursion for r = r_1 = ν. For r = ν, Eq. (3b) reduces to (2ν + 1) a_1 = 0. Hence a_1 = 0 since ν ≥ 0. Substituting r = ν in (3c) and combining the three terms containing a_s gives simply

(5)  (s + 2ν) s a_s + a_{s−2} = 0.

Since a_1 = 0 and ν ≥ 0, it follows from (5) that a_3 = 0, a_5 = 0, ···. Hence we have to deal only with even-numbered coefficients a_s with s = 2m. For s = 2m, Eq. (5) becomes

(2m + 2ν) 2m a_{2m} + a_{2m−2} = 0.

Solving for a_{2m} gives the recursion formula

(6)  a_{2m} = −a_{2m−2} / (2^2 m (ν + m)),   m = 1, 2, ···.

From (6) we can now determine a_2, a_4, ··· successively. This gives

a_2 = −a_0 / (2^2 (ν + 1)),   a_4 = −a_2 / (2^2 · 2 (ν + 2)) = a_0 / (2^4 2! (ν + 1)(ν + 2))

and so on, and in general

(7)  a_{2m} = (−1)^m a_0 / (2^{2m} m! (ν + 1)(ν + 2) ··· (ν + m)),   m = 1, 2, ···.

Bessel Functions J_n(x) for Integer ν = n

Integer values of ν are denoted by n. This is standard. For ν = n the relation (7) becomes

(8)  a_{2m} = (−1)^m a_0 / (2^{2m} m! (n + 1)(n + 2) ··· (n + m)),   m = 1, 2, ···.

a_0 is still arbitrary, so that the series (2) with these coefficients would contain this arbitrary factor a_0. This would be a highly impractical situation for developing formulas or computing values of this new function. Accordingly, we have to make a choice. a_0 = 1 would be possible, but more practical turns out to be

(9)  a_0 = 1/(2^n n!),

because then n! (n + 1) ··· (n + m) = (n + m)!
in (8), so that (8) simply becomes

(10)  a_{2m} = (−1)^m / (2^{2m+n} m! (n + m)!),   m = 1, 2, ···.

This simplicity of the denominator of (10) partially motivates the choice (9). With these coefficients and r = r_1 = ν = n we get from (2) a particular solution of (1), denoted by J_n(x) and given by

(11)  J_n(x) = x^n Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m+n} m! (n + m)!).

J_n(x) is called the Bessel function of the first kind of order n. The series (11) converges for all x, as the ratio test shows. In fact, it converges very rapidly because of the factorials in the denominator.

EXAMPLE 1  Bessel Functions J_0(x) and J_1(x)

For n = 0 we obtain from (11) the Bessel function of order 0

(12)  J_0(x) = Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m} (m!)^2) = 1 − x^2/(2^2 (1!)^2) + x^4/(2^4 (2!)^2) − x^6/(2^6 (3!)^2) + ···,

which looks similar to a cosine (Fig. 107). For n = 1 we obtain the Bessel function of order 1

(13)  J_1(x) = Σ_{m=0}^∞ (−1)^m x^{2m+1} / (2^{2m+1} m! (m + 1)!) = x/2 − x^3/(2^3 1! 2!) + x^5/(2^5 2! 3!) − x^7/(2^7 3! 4!) + ···,

which looks similar to a sine (Fig. 107). But the zeros of these functions are not completely regularly spaced (see also Table A1 in App. 5) and the height of the "waves" decreases with increasing x. Heuristically, n^2/x^2 in (1) in standard form [(1) divided by x^2] is zero (if n = 0) or small in absolute value for large x, and so is y'/x, so that then Bessel's equation comes close to y'' + y = 0, the equation of cos x and sin x; also y'/x acts as a "damping term," in part responsible for the decrease in height. One can show that for large x,

(14)  J_n(x) ~ sqrt(2/(πx)) cos (x − nπ/2 − π/4),

where ~ is read "asymptotically equal" and means that for fixed n the quotient of the two sides approaches 1 as x → ∞.

Formula (14) is surprisingly accurate even for smaller x (> 0). For instance, it will give you good starting values in a computer program for the basic task of computing zeros. For example, for the first three zeros of J_0 you obtain the values 2.356 (2.405 exact to 3 decimals, error 0.049), 5.498 (5.520, error 0.022), 8.639 (8.654, error 0.015), etc.

Fig. 107. Bessel functions of the first kind J_0 and J_1

Bessel Functions J_ν(x) for Any ν ≥ 0.
Gamma Function

We now extend our discussion from integer ν = n to any ν ≥ 0. All we need is an extension of the factorials in (9) and (11) to any ν. This is done by the gamma function Γ(ν) defined by the integral

(15)  Γ(ν) = ∫_0^∞ e^{−t} t^{ν−1} dt   (ν > 0).

By integration by parts we obtain

(16)  Γ(ν + 1) = ν Γ(ν).

Now by (15)

Γ(1) = ∫_0^∞ e^{−t} dt = 1.

From this and (16) we obtain successively Γ(2) = Γ(1) = 1!, Γ(3) = 2Γ(2) = 2!, ··· and in general

(17)  Γ(n + 1) = n!   (n = 0, 1, ···).

This shows that the gamma function does in fact generalize the factorial function. Now in (9) we had a_0 = 1/(2^n n!). This is 1/(2^n Γ(n + 1)) by (17). It suggests that we choose, for any ν,

(18)  a_0 = 1/(2^ν Γ(ν + 1)).

Then (7) becomes

a_{2m} = (−1)^m / (2^{2m} m! (ν + 1)(ν + 2) ··· (ν + m) 2^ν Γ(ν + 1)).

But (16) gives in the denominator (ν + 1)Γ(ν + 1) = Γ(ν + 2), (ν + 2)Γ(ν + 2) = Γ(ν + 3), and so on, so that

(ν + 1)(ν + 2) ··· (ν + m) Γ(ν + 1) = Γ(ν + m + 1).

Hence because of our (standard!) choice (18) of a_0 the coefficients (7) are simply

(19)  a_{2m} = (−1)^m / (2^{2m+ν} m! Γ(ν + m + 1)).

With these coefficients and r = r_1 = ν we get from (2) a particular solution of (1), denoted by J_ν(x) and given by

(20)  J_ν(x) = x^ν Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m+ν} m! Γ(ν + m + 1)).

J_ν(x) is called the Bessel function of the first kind of order ν. The series (20) converges for all x, as one can verify by the ratio test.

General Solution for Noninteger ν. Solution J_{−ν}

For a general solution, in addition to J_ν we need a second linearly independent solution. For ν not an integer this is easy. Replacing ν by −ν in (20), we have

(21)  J_{−ν}(x) = x^{−ν} Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m−ν} m! Γ(m − ν + 1)).

Since Bessel's equation involves ν^2, the functions J_ν and J_{−ν} are solutions of the equation for the same ν.
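Definition (20) is also convenient for computation: because of the factorials in the denominator, a few dozen terms give full machine accuracy for moderate x. A sketch (our code; math.gamma plays the role of Γ, and the tabulated values in the comments are standard table entries):

```python
import math

# Evaluate the series (20) directly; 30 terms are ample for moderate x.
def J(nu, x, terms=30):
    return sum((-1) ** m * (x / 2) ** (2 * m + nu)
               / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

print(round(J(0, 1.0), 7))       # tabulated: J_0(1) = 0.7651977
print(round(J(1, 1.0), 7))       # tabulated: J_1(1) = 0.4400506
print(J(0, 2.404826))            # near the first zero of J_0
```

Note that x^{2m+ν}/2^{2m+ν} has been combined into (x/2)^{2m+ν}, which also works for fractional ν when x > 0.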
If ν is not an integer, they are linearly independent, because the first term in (20) and the first term in (21) are finite nonzero multiples of x^ν and x^{−ν}, respectively. x = 0 must be excluded in (21) because of the factor x^{−ν} (with ν > 0). This gives

THEOREM 1  General Solution of Bessel's Equation

If ν is not an integer, a general solution of Bessel's equation for all x ≠ 0 is

(22)  y(x) = c_1 J_ν(x) + c_2 J_{−ν}(x).

But if ν is an integer, then (22) is not a general solution because of linear dependence:

THEOREM 2  Linear Dependence of Bessel Functions J_n and J_{−n}

For integer ν = n the Bessel functions J_n(x) and J_{−n}(x) are linearly dependent, because

(23)  J_{−n}(x) = (−1)^n J_n(x)   (n = 1, 2, ···).

PROOF  We use (21) and let ν approach a positive integer n. Then the gamma functions in the coefficients of the first n terms become infinite (see Fig. 552 in App. A3.1), the coefficients become zero, and the summation starts with m = n. Since in this case Γ(m − n + 1) = (m − n)! by (17), we obtain

J_{−n}(x) = Σ_{m=n}^∞ (−1)^m x^{2m−n} / (2^{2m−n} m! (m − n)!) = Σ_{s=0}^∞ (−1)^{n+s} x^{2s+n} / (2^{2s+n} (n + s)! s!)   (m = n + s).

The last series represents (−1)^n J_n(x), as you can see from (11) with m replaced by s. This completes the proof.

A general solution for integer n will be given in the next section, based on some further interesting ideas.

Discovery of Properties from Series

Bessel functions are a model case for showing how to discover properties and relations of functions from series by which they are defined. Bessel functions satisfy an incredibly large number of relationships—look at Ref. [A13] in App. 1; also, find out what your CAS knows. In Theorem 3 we shall discuss four formulas that are backbones in applications.

THEOREM 3  Derivatives, Recursions

The derivative of J_ν(x) with respect to x can be expressed by J_{ν−1}(x) or J_{ν+1}(x) by the formulas

(24)  (a)  [x^ν J_ν(x)]' = x^ν J_{ν−1}(x)
      (b)  [x^{−ν} J_ν(x)]' = −x^{−ν} J_{ν+1}(x).
Furthermore, J_ν(x) and its derivative satisfy the recurrence relations

(24)  (c)  J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) J_ν(x)
      (d)  J_{ν−1}(x) − J_{ν+1}(x) = 2 J_ν'(x).

PROOF  (a) We multiply (20) by x^ν and take x^{2ν} under the summation sign. Then we have

x^ν J_ν(x) = Σ_{m=0}^∞ (−1)^m x^{2m+2ν} / (2^{2m+ν} m! Γ(ν + m + 1)).

We now differentiate this, cancel a factor 2, pull x^{2ν−1} out, and use the functional relation Γ(ν + m + 1) = (ν + m) Γ(ν + m) [see (16)]. Then (20) with ν − 1 instead of ν shows that we obtain the right side of (24a). Indeed,

(x^ν J_ν)' = Σ_{m=0}^∞ (−1)^m 2(m + ν) x^{2m+2ν−1} / (2^{2m+ν} m! Γ(ν + m + 1)) = x^ν x^{ν−1} Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m+ν−1} m! Γ(ν + m)).

(b) Similarly, we multiply (20) by x^{−ν}, so that x^ν in (20) cancels. Then we differentiate, cancel 2m, and use m! = m (m − 1)!. This gives, with m = s + 1,

(x^{−ν} J_ν)' = Σ_{m=1}^∞ (−1)^m x^{2m−1} / (2^{2m+ν−1} (m − 1)! Γ(ν + m + 1)) = Σ_{s=0}^∞ (−1)^{s+1} x^{2s+1} / (2^{2s+ν+1} s! Γ(ν + s + 2)).

Equation (20) with ν + 1 instead of ν and s instead of m shows that the expression on the right is −x^{−ν} J_{ν+1}(x). This proves (24b).

(c), (d) We perform the differentiation in (24a). Then we do the same in (24b) and multiply the result on both sides by x^{2ν}. This gives

(a*)  ν x^{ν−1} J_ν + x^ν J_ν' = x^ν J_{ν−1}
(b*)  −ν x^{ν−1} J_ν + x^ν J_ν' = −x^ν J_{ν+1}.

Subtracting (b*) from (a*) and dividing the result by x^ν gives (24c). Adding (a*) and (b*) and dividing the result by x^ν gives (24d).

EXAMPLE 2  Application of Theorem 3 in Evaluation and Integration

Formula (24c) can be used recursively in the form

J_{ν+1}(x) = (2ν/x) J_ν(x) − J_{ν−1}(x)

for calculating Bessel functions of higher order from those of lower order. For instance, J_2(x) = 2J_1(x)/x − J_0(x), so that J_2 can be obtained from tables of J_0 and J_1 (in App. 5 or, more accurately, in Ref. [GR1] in App. 1).

To illustrate how Theorem 3 helps in integration, we use (24b) with ν = 3 integrated on both sides. This evaluates, for instance, the integral

I = ∫_1^2 x^{−3} J_4(x) dx = −x^{−3} J_3(x) |_1^2 = −(1/8) J_3(2) + J_3(1).
A table of J_3 (on p. 398 of Ref. [GR1]) or your CAS will give you

I = −(1/8) · 0.128943 + 0.019563 = 0.003445.

Your CAS (or a human computer in precomputer times) obtains J_3 from (24), first using (24c) with ν = 2, that is, J_3 = 4x^{−1} J_2 − J_1, then (24c) with ν = 1, that is, J_2 = 2x^{−1} J_1 − J_0. Together,

I = −x^{−3} (4x^{−1}(2x^{−1} J_1 − J_0) − J_1) |_1^2
  = −(1/8)[2J_1(2) − 2J_0(2) − J_1(2)] + [8J_1(1) − 4J_0(1) − J_1(1)]
  = −(1/8)J_1(2) + (1/4)J_0(2) + 7J_1(1) − 4J_0(1).

This is what you get, for instance, with Maple if you type int(···). And if you type evalf(int(···)), you obtain 0.003445448, in agreement with the result near the beginning of the example.

In the theory of special functions it often happens that for certain values of a parameter a higher function becomes elementary. We have seen this in the last problem set, and we now show this for Bessel functions.

THEOREM 4  Elementary J_ν for Half-Integer Order ν

Bessel functions J_ν of orders ±1/2, ±3/2, ±5/2, ··· are elementary; they can be expressed by finitely many cosines and sines and powers of x. In particular,

(25)  (a)  J_{1/2}(x) = sqrt(2/(πx)) sin x,   (b)  J_{−1/2}(x) = sqrt(2/(πx)) cos x.

PROOF  When ν = 1/2, then (20) is

J_{1/2}(x) = Σ_{m=0}^∞ (−1)^m x^{2m+1/2} / (2^{2m+1/2} m! Γ(m + 3/2)) = sqrt(2/(πx)) Σ_{m=0}^∞ (−1)^m x^{2m+1} / (2m + 1)! = sqrt(2/(πx)) sin x,

because 2^{2m+1/2} m! Γ(m + 3/2) = (2m + 1)! sqrt(π/2). This proves (25a); (25b) is obtained similarly from (20) with ν = −1/2.

PROBLEM SET 5.5

32. TEAM PROJECT. Vibrating Cable. (a) ··· + λ^2 y = 0, λ^2 = ω^2/g. (b) Transform this ODE to ÿ + s^{−1} ẏ + y = 0, ẏ = dy/ds, s = 2λ√z, z = L − x, so that the solution is y(x) = J_0(2ω sqrt((L − x)/g)). (c) Conclude that possible frequencies ω/(2π) are those for which s = 2ω sqrt(L/g) is a zero of J_0. The corresponding solutions are called normal modes. Figure 108 shows the first of them. What does the second normal mode look like? The third? What is the frequency (cycles/min) of a cable of length 2 m? Of length 10 m?

Fig. 108. Vibrating cable in Team Project 32

33. CAS EXPERIMENT. Bessel Functions for Large x. (a) Graph J_n(x) for n = 0, ···, 5 on common axes. (b) Experiment with (14) for integer n. Using graphs, find out from which x = x_n on the curves of (11) and (14) practically coincide.
How does x_n change with n? (c) What happens in (b) if ν = ±1/2? (Our usual notation in this case would be ν.) (d) How does the error of (14) behave as a function of x for fixed n? [Error = exact value minus approximation (14).] (e) Show from the graphs that J_0(x) has extrema where J_1(x) = 0. Which formula proves this? Find further relations between zeros and extrema. (f) Raise and answer questions of your own, for instance, on the zeros of J_0 and J_1. How accurately are they obtained from (14)?

5.6 Bessel Functions of the Second Kind Y_ν(x)

From the last section we know that J_ν and J_{−ν} form a basis of solutions of Bessel's equation, provided ν is not an integer. But when ν is an integer, these two solutions are linearly dependent on any interval (see Theorem 2 in Sec. 5.5). Hence to have a general solution also when ν = n is an integer, we need a second linearly independent solution besides J_n. This solution is called a Bessel function of the second kind and is denoted by Y_n. We shall now derive such a solution, beginning with the case n = 0.

n = 0: Bessel Function of the Second Kind Y_0(x)

When n = 0, Bessel's equation can be written

(1)  x y'' + y' + x y = 0.

Then the indicial equation (4) in Sec. 5.5 has a double root r = 0. This is Case 2 in Sec. 5.4. In this case we first have only one solution, J_0(x). From (8) in Sec. 5.4 we see that the desired second solution must be of the form

(2)  y_2(x) = J_0(x) ln x + Σ_{m=1}^∞ A_m x^m.

We substitute y_2 and its derivatives

y_2' = J_0' ln x + J_0/x + Σ_{m=1}^∞ m A_m x^{m−1}
y_2'' = J_0'' ln x + 2J_0'/x − J_0/x^2 + Σ_{m=1}^∞ m(m − 1) A_m x^{m−2}

into (1). Then the sum of the three logarithmic terms x J_0'' ln x, J_0' ln x, and x J_0 ln x is zero because J_0 is a solution of (1). The terms −J_0/x and J_0/x (from xy'' and y') cancel. Hence we are left with

2J_0' + Σ_{m=1}^∞ m(m − 1) A_m x^{m−1} + Σ_{m=1}^∞ m A_m x^{m−1} + Σ_{m=1}^∞ A_m x^{m+1} = 0.

Addition of the first and second series gives Σ m^2 A_m x^{m−1}. The power series of J_0'(x) is obtained from (12) in Sec.
5.5 and the use of m!/m = (m − 1)! in the form

J_0'(x) = Σ_{m=1}^∞ (−1)^m 2m x^{2m−1} / (2^{2m} (m!)^2) = Σ_{m=1}^∞ (−1)^m x^{2m−1} / (2^{2m−1} m! (m − 1)!).

Together with Σ m^2 A_m x^{m−1} and Σ A_m x^{m+1} this gives

(3*)  Σ_{m=1}^∞ (−1)^m x^{2m−1} / (2^{2m−2} m! (m − 1)!) + Σ_{m=1}^∞ m^2 A_m x^{m−1} + Σ_{m=1}^∞ A_m x^{m+1} = 0.

First, we show that the A_m with odd subscripts are all zero. The power x^0 occurs only in the second series, with coefficient A_1. Hence A_1 = 0. Next, we consider the even powers x^{2s}. The first series contains none. In the second series, m − 1 = 2s gives the term (2s + 1)^2 A_{2s+1} x^{2s}. In the third series, m + 1 = 2s. Hence by equating the sum of the coefficients of x^{2s} to zero we have

(2s + 1)^2 A_{2s+1} + A_{2s−1} = 0,   s = 1, 2, ···.

Since A_1 = 0, we thus obtain A_3 = 0, A_5 = 0, ···, successively.

We now equate the sum of the coefficients of x^{2s+1} to zero. For s = 0 this gives

−1 + 4A_2 = 0,   thus   A_2 = 1/4.

For the other values of s we have in the first series in (3*) 2m − 1 = 2s + 1, hence m = s + 1, in the second m − 1 = 2s + 1, and in the third m + 1 = 2s + 1. We thus obtain

(−1)^{s+1} / (2^{2s} (s + 1)! s!) + (2s + 2)^2 A_{2s+2} + A_{2s} = 0.

For s = 1 this yields

1/8 + 16A_4 + A_2 = 0,   thus   A_4 = −3/128,

and in general

(3)  A_{2m} = (−1)^{m−1} h_m / (2^{2m} (m!)^2),   m = 1, 2, ···.

Using the short notations

(4)  h_1 = 1,   h_m = 1 + 1/2 + ··· + 1/m   (m = 2, 3, ···)

and inserting (4) and A_1 = A_3 = ··· = 0 into (2), we obtain the result

(5)  y_2(x) = J_0(x) ln x + Σ_{m=1}^∞ (−1)^{m−1} h_m x^{2m} / (2^{2m} (m!)^2)
            = J_0(x) ln x + x^2/4 − (3/128) x^4 + (11/13824) x^6 − ···.

Since J_0 and y_2 are linearly independent functions, they form a basis of (1) for x > 0. Of course, another basis is obtained if we replace y_2 by an independent particular solution of the form a(y_2 + bJ_0), where a (≠ 0) and b are constants.
It is customary to choose a = 2/π and b = γ − ln 2, where the number

γ = 0.577 215 664 90 ···

is the so-called Euler constant, which is defined as the limit of

1 + 1/2 + ··· + 1/s − ln s

as s approaches infinity. The standard particular solution thus obtained is called the Bessel function of the second kind of order zero (Fig. 109) or Neumann's function of order zero and is denoted by Y_0(x). Thus [see (4)]

(6)  Y_0(x) = (2/π) [ J_0(x) (ln (x/2) + γ) + Σ_{m=1}^∞ (−1)^{m−1} h_m x^{2m} / (2^{2m} (m!)^2) ].

For small x > 0 the function Y_0(x) behaves about like ln x (see Fig. 109; why?), and Y_0(x) → −∞ as x → 0.

Bessel Functions of the Second Kind Y_n(x)

For ν = n = 1, 2, ··· a second solution can be obtained by manipulations similar to those for n = 0, starting from (10), Sec. 5.4. It turns out that in these cases the solution also contains a logarithmic term.

The situation is not yet completely satisfactory, because the second solution is defined differently, depending on whether the order ν is an integer or not. To provide uniformity
[A 13] in App. 1. We shall see that the series development of Yn(x) contains a logarithmic term. Hence Jn(x) and Yn(x) are linearly independent solutions of Bessel's equation. The series development of )',.(11 can be obtained if we insert the series (20) and (21), Sec. 5.5, for Jv{x) and i_„(x) into (7a) and then let v approach n; for details see Ref. [A13]. The result is (8) 2 jt Jx \ xn « {-{)m-\hm + hm+n) — J„(x) In - + y - — 2j o2m^ i ,—, it \ 2 } it „ 2Am "ml (m + «)! ,2m x~n ™ 1 (n - m - \)\ 22m'-nm\ x2m where x > 0, n = 0, 1, • • •, and [as in (4)] h0 = 0, ht = 1, 11 11 h„ = 1 + - + ■■•+— , hm ,„.= ! + -+••• + m 2 m + n Fig. 109. Bessel functions of the second kind Y0 and Y,. (For a small table, see App. 5.) 7CARL NEUMANN (1832-1925), German mathematician and physicist. His work on potential theory sparked the development in the field of integral equations by VITO VOLTERRA (1860-1940) of Rome, ERIC IVAR FREDHOLM (1866-1927) of Stockholm, and DAVID HILBERT (1862-1943) of Goltingen (see the footnote in Sec. 7.9). The solutions YJx) are sometimes denoted by NJx); in Ref. [A13] they are called Weber's functions; Euler's constant in (6) is often denoted by C or In y. 202 CHAP. 5 Series Solutions of ODEs. Special Functions For n = 0 the last sum in (8) is to be replaced by 0 [giving agreement with (6)1. Furthermore, it can be shown that Y_n(x) = C-l)nFn(x). Our main result may now be formulated as follows. THEOREM 1 General Solution of Bessel's Equation A general solution of Bessel's equation for all values of v (and x > 0) is (9) y(x) = C\J,Xx) + C2Y„(x). We finally mention that there is a practical need for solutions of Bessel's equation that are complex for real values of x. For this purpose the solutions (10) H^\x) = ./„« + iYv(x) Hj\x) = JM - iY,Xx) are frequently used. These linearly independent functions are called Bessel functions of the third kind of order v or first and second Hankel functions8 of order v. 
This finishes our discussion on Bessel functions, except for their "orthogonality," which we explain in Sec. 5.7. Applications to vibrations follow in Sec. 12.9.

PROBLEM SET 5.6

1-10 SOME FURTHER ODEs REDUCIBLE TO BESSEL'S EQUATION

(See also Sec. 5.5.) Using the indicated substitutions, find a general solution in terms of $J_\nu$ and $Y_\nu$. Indicate whether you could also use $J_{-\nu}$ instead of $Y_\nu$. (Show the details of your work.)

1. $x^2y'' + xy' + (x^2 - 25)y = 0$
2. $x^2y'' + xy' + (9x^2 - \tfrac{4}{9})y = 0$  $(3x = z)$
3. $4xy'' + 4y' + y = 0$  $(\sqrt{x} = z)$
4. $xy'' + y' + 36y = 0$  $(12\sqrt{x} = z)$
5. $x^2y'' + xy' + (4x^4 - 16)y = 0$  $(x^2 = z)$
6. $x^2y'' + xy' + (x^6 - 1)y = 0$  $(\tfrac{1}{3}x^3 = z)$
7. $xy'' + 11y' + xy = 0$  $(y = x^{-5}u)$
8. $y'' + 4x^2y = 0$  $(y = u\sqrt{x},\ x^2 = z)$
9. $x^2y'' - 5xy' + 9(x^6 - 8)y = 0$  $(y = x^3u,\ x^3 = z)$
10. $xy'' + 7y' + 4xy = 0$  $(y = x^{-3}u,\ 2x = z)$

11. (Hankel functions) Show that the Hankel functions (10) form a basis of solutions of Bessel's equation for any $\nu$.

12. CAS EXPERIMENT. Bessel Functions for Large x. It can be shown that for large $x$,

(11) $Y_n(x) \sim \sqrt{2/(\pi x)}\,\sin\left(x - \tfrac{1}{2}n\pi - \tfrac{1}{4}\pi\right)$

with $\sim$ defined as in (14) of Sec. 5.5.
(a) Graph $Y_n(x)$ for $n = 0, \cdots, 5$ on common axes. Are there relations between zeros of one function and extrema of another? For what functions?
(b) Find out from graphs from which $x = x_n$ on the curves of (8) and (11) (both obtained from your CAS) practically coincide. How does $x_n$ change with $n$?
(c) Calculate the first ten zeros $x_m$, $m = 1, \cdots, 10$, of $Y_0(x)$ from your CAS and from (11). How does the error behave as $m$ increases?
(d) Do (c) for $Y_1(x)$ and $Y_2(x)$. How do the errors compare to those in (c)?

⁸HERMANN HANKEL (1839-1873), German mathematician.

13. Modified Bessel functions of the first kind of order $\nu$ are defined by $I_\nu(x) = i^{-\nu}J_\nu(ix)$, $i = \sqrt{-1}$. Show that $I_\nu$ satisfies the ODE

(12) $x^2y'' + xy' - (x^2 + \nu^2)y = 0$

and has the representation

(13) $I_\nu(x) = \sum_{m=0}^{\infty} \frac{x^{2m+\nu}}{2^{2m+\nu}\,m!\,\Gamma(m+\nu+1)}\,.$

14.
(Modified Bessel functions $I_n$) Show that $I_\nu(x)$ is real for all real $x$ (and real $\nu$), $I_\nu(x) \neq 0$ for all real $x \neq 0$, and $I_{-n}(x) = I_n(x)$, where $n$ is any integer.

15. Modified Bessel functions of the third kind (sometimes called of the second kind) are defined by the formula (14) below. Show that they satisfy the ODE (12).

(14) $K_\nu(x) = \frac{\pi}{2\sin \nu\pi}\,[I_{-\nu}(x) - I_\nu(x)].$

5.7 Sturm-Liouville Problems. Orthogonal Functions

So far we have considered initial value problems. We recall from Sec. 2.1 that such a problem consists of an ODE, say, of second order, and initial conditions $y(x_0) = K_0$, $y'(x_0) = K_1$ referring to the same point (initial point) $x = x_0$. We now turn to boundary value problems. A boundary value problem consists of an ODE and given boundary conditions referring to the two boundary points (endpoints) $x = a$ and $x = b$ of a given interval $a \le x \le b$. To solve such a problem means to find a solution of the ODE on the interval $a \le x \le b$ satisfying the boundary conditions.

We shall see that Legendre's, Bessel's, and other ODEs of importance in engineering can be written as a Sturm-Liouville equation

(1) $[p(x)y']' + [q(x) + \lambda r(x)]y = 0$

involving a parameter $\lambda$. The boundary value problem consisting of an ODE (1) and given Sturm-Liouville boundary conditions

(2) (a) $k_1 y(a) + k_2 y'(a) = 0$,  (b) $l_1 y(b) + l_2 y'(b) = 0$

is called a Sturm-Liouville problem.⁹ We shall see further that these problems lead to useful series developments in terms of particular solutions of (1), (2). Crucial in this connection is orthogonality, to be discussed later in this section.

In (1) we make the assumptions that $p$, $q$, $r$, and $p'$ are continuous on $a \le x \le b$, and $r(x) > 0$ on $a \le x \le b$. In (2) we assume that $k_1$, $k_2$ are given constants, not both zero, and so are $l_1$, $l_2$, not both zero.

⁹JACQUES CHARLES FRANÇOIS STURM (1803-1855) was born and studied in Switzerland and then moved to Paris, where he later became the successor of Poisson in the chair of mechanics at the Sorbonne (the University of Paris).
JOSEPH LIOUVILLE (1809-1882), French mathematician and professor in Paris, contributed to various fields in mathematics and is particularly known for his important work in complex analysis (Liouville's theorem; Sec. 14.4), special functions, differential geometry, and number theory.

EXAMPLE 1 Legendre's and Bessel's Equations are Sturm-Liouville Equations

Legendre's equation $(1 - x^2)y'' - 2xy' + n(n+1)y = 0$ may be written

$[(1 - x^2)y']' + \lambda y = 0, \qquad \lambda = n(n+1).$

This is (1) with $p = 1 - x^2$, $q = 0$, and $r = 1$. In Bessel's equation

$\tilde{x}^2\ddot{y} + \tilde{x}\dot{y} + (\tilde{x}^2 - n^2)y = 0, \qquad \dot{y} = dy/d\tilde{x}, \text{ etc.},$

as a model in physics or elsewhere, one often likes to have another parameter $k$ in addition to $n$. For this reason we set $\tilde{x} = kx$. Then by the chain rule $\dot{y} = dy/d\tilde{x} = (dy/dx)\,dx/d\tilde{x} = y'/k$, $\ddot{y} = y''/k^2$. In the first two terms, $k^2$ and $k$ drop out and we get

$x^2y'' + xy' + (k^2x^2 - n^2)y = 0.$

Division by $x$ gives the Sturm-Liouville equation

$[xy']' + \left(-\frac{n^2}{x} + \lambda x\right)y = 0, \qquad \lambda = k^2.$

This is (1) with $p = x$, $q = -n^2/x$, and $r = x$.

Eigenfunctions, Eigenvalues

Clearly, $y \equiv 0$ is a solution (the "trivial solution") for any $\lambda$, because (1) is homogeneous and (2) has zeros on the right. This is of no interest. We want to find eigenfunctions $y(x)$, that is, solutions of (1) satisfying (2) without being identically zero. We call a number $\lambda$ for which an eigenfunction exists an eigenvalue of the Sturm-Liouville problem (1), (2).

EXAMPLE 2 Trigonometric Functions as Eigenfunctions. Vibrating String

Find the eigenvalues and eigenfunctions of the Sturm-Liouville problem

(3) $y'' + \lambda y = 0, \qquad y(0) = 0, \qquad y(\pi) = 0.$

This problem arises, for instance, if an elastic string (a violin string, for example) is stretched a little and then fixed at its ends $x = 0$ and $x = \pi$ and allowed to vibrate. Then $y(x)$ is the "space function" of the deflection $u(x, t)$ of the string, assumed in the form $u(x, t) = y(x)w(t)$, where $t$ is time. (This model will be discussed in great detail in Secs. 12.2-12.4.)

Solution.
From (1) and (2) we see that $p = 1$, $q = 0$, $r = 1$ in (1), and $a = 0$, $b = \pi$, $k_1 = l_1 = 1$, $k_2 = l_2 = 0$ in (2).

For negative $\lambda = -\nu^2$ a general solution of the ODE in (3) is $y(x) = c_1 e^{\nu x} + c_2 e^{-\nu x}$. From the boundary conditions we obtain $c_1 = c_2 = 0$, so that $y \equiv 0$, which is not an eigenfunction. For $\lambda = 0$ the situation is similar. For positive $\lambda = \nu^2$ a general solution is

$y(x) = A\cos \nu x + B\sin \nu x.$

From the first boundary condition we obtain $y(0) = A = 0$. The second boundary condition then yields

$y(\pi) = B\sin \nu\pi = 0, \qquad \text{thus} \quad \nu = 0, \pm 1, \pm 2, \cdots.$

For $\nu = 0$ we have $y \equiv 0$. For $\lambda = \nu^2$, $\nu = 1, 2, \cdots$, taking $B = 1$, we obtain

$y(x) = \sin \nu x \qquad (\nu = 1, 2, \cdots).$

Hence the eigenvalues of the problem are $\lambda = \nu^2$, where $\nu = 1, 2, \cdots$, and corresponding eigenfunctions are $y(x) = \sin \nu x$, where $\nu = 1, 2, \cdots$.

Existence of Eigenvalues

Eigenvalues of a Sturm-Liouville problem (1), (2), even infinitely many, exist under rather general conditions on $p$, $q$, $r$ in (1). (Sufficient are the conditions in Theorem 1, below, together with $p(x) > 0$ and $r(x) > 0$ on $a < x < b$. Proofs are complicated; see Ref. [A3] or [A11] listed in App. 1.)

Reality of Eigenvalues

Furthermore, if $p$, $q$, $r$, and $p'$ in (1) are real-valued and continuous on the interval $a \le x \le b$ and $r$ is positive throughout that interval (or negative throughout that interval), then all the eigenvalues of the Sturm-Liouville problem (1), (2) are real. (Proof in App. 4.) This is what the engineer would expect, since eigenvalues are often related to frequencies, energies, or other physical quantities that must be real.

Orthogonality

The most remarkable and important property of eigenfunctions of Sturm-Liouville problems is their orthogonality, which will be crucial in series developments in terms of eigenfunctions.

DEFINITION Orthogonality

Functions $y_1(x), y_2(x), \cdots$ defined on some interval $a \le x \le b$ are called orthogonal on this interval with respect to the weight function $r(x) > 0$ if for all $m$ and all $n$ different from $m$,
(4) $\int_a^b r(x)\,y_m(x)\,y_n(x)\,dx = 0 \qquad (m \neq n).$

The norm $\|y_m\|$ of $y_m$ is defined by

(5) $\|y_m\| = \sqrt{\int_a^b r(x)\,y_m^2(x)\,dx}\,.$

Note that this is the square root of the integral in (4) with $n = m$. The functions $y_1, y_2, \cdots$ are called orthonormal on $a \le x \le b$ if they are orthogonal on this interval and all have norm 1. If $r(x) = 1$, we more briefly call the functions orthogonal instead of orthogonal with respect to $r(x) = 1$; similarly for orthonormality. Then

$\int_a^b y_m(x)\,y_n(x)\,dx = 0 \quad (m \neq n), \qquad \|y_m\| = \sqrt{\int_a^b y_m^2(x)\,dx}\,.$

EXAMPLE 3 Orthogonal Functions. Orthonormal Functions

The functions $y_m(x) = \sin mx$, $m = 1, 2, \cdots$, form an orthogonal set on the interval $-\pi \le x \le \pi$, because for $m \neq n$ we obtain by integration [see (11) in App. A3.1]

$\int_{-\pi}^{\pi} \sin mx\,\sin nx\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \cos{(m-n)x}\,dx - \frac{1}{2}\int_{-\pi}^{\pi} \cos{(m+n)x}\,dx = 0.$

The norm $\|y_m\|$ equals $\sqrt{\pi}$, because

$\|y_m\|^2 = \int_{-\pi}^{\pi} \sin^2 mx\,dx = \pi \qquad (m = 1, 2, \cdots).$

Hence the corresponding orthonormal set, obtained by division by the norm, is

$\frac{\sin x}{\sqrt{\pi}}\,, \quad \frac{\sin 2x}{\sqrt{\pi}}\,, \quad \frac{\sin 3x}{\sqrt{\pi}}\,, \quad \cdots.$

THEOREM 1 Orthogonality of Eigenfunctions

Suppose that the functions $p$, $q$, $r$, and $p'$ in the Sturm-Liouville equation (1) are real-valued and continuous and $r(x) > 0$ on the interval $a \le x \le b$. Let $y_m(x)$ and $y_n(x)$ be eigenfunctions of the Sturm-Liouville problem (1), (2) that correspond to different eigenvalues $\lambda_m$ and $\lambda_n$, respectively. Then $y_m$, $y_n$ are orthogonal on that interval with respect to the weight function $r$, that is,

(6) $\int_a^b r(x)\,y_m(x)\,y_n(x)\,dx = 0 \qquad (m \neq n).$

If $p(a) = 0$, then (2a) can be dropped from the problem. If $p(b) = 0$, then (2b) can be dropped. [It is then required that $y$ and $y'$ remain bounded at such a point, and the problem is called singular, as opposed to a regular problem in which (2) is used.]

If $p(a) = p(b)$, then (2) can be replaced by the "periodic boundary conditions"

(7) $y(a) = y(b), \qquad y'(a) = y'(b).$
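Example 2 and the orthogonality stated in Theorem 1 can be checked numerically. A minimal sketch (assuming NumPy is available; the grid size N is an arbitrary choice) discretizes $-y'' = \lambda y$, $y(0) = y(\pi) = 0$ by central differences. The lowest eigenvalues approach $\nu^2 = 1, 4, 9, \cdots$, and the discrete eigenfunctions belonging to different eigenvalues are orthogonal with weight $r = 1$:

```python
import numpy as np

N = 400                                     # number of interior grid points
h = np.pi / (N + 1)
# second-difference approximation of -y'' with y(0) = y(pi) = 0
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
lam, vec = np.linalg.eigh(A)                # eigenvalues ascending

# the smallest eigenvalues approximate nu^2 = 1, 4, 9
assert np.allclose(lam[:3], [1.0, 4.0, 9.0], rtol=1e-3)
# eigenvectors to different eigenvalues are orthogonal (discrete analog of (6))
assert abs(vec[:, 0] @ vec[:, 1]) < 1e-10
```

The agreement improves like $h^2$ as the grid is refined, which is the accuracy of the central-difference approximation.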
The boundary value problem consisting of the Sturm-Liouville equation (1) and the periodic boundary conditions (7) is called a periodic Sturm-Liouville problem.

PROOF By assumption, $y_m$ and $y_n$ satisfy the Sturm-Liouville equations

$(py_m')' + (q + \lambda_m r)y_m = 0,$
$(py_n')' + (q + \lambda_n r)y_n = 0,$

respectively. We multiply the first equation by $y_n$, the second by $-y_m$, and add:

$(\lambda_m - \lambda_n)\,r\,y_m y_n = y_m(py_n')' - y_n(py_m')' = [(py_n')y_m - (py_m')y_n]'$

where the last equality can be readily verified by performing the indicated differentiation of the last expression in brackets. This expression is continuous on $a \le x \le b$ since $p$ and $p'$ are continuous by assumption and $y_m$, $y_n$ are solutions of (1). Integrating over $x$ from $a$ to $b$, we thus obtain

(8) $(\lambda_m - \lambda_n)\int_a^b r\,y_m y_n\,dx = \Big[p\,(y_n'y_m - y_m'y_n)\Big]_a^b \qquad (a < b).$

The expression on the right equals the sum of the subsequent Lines 1 and 2,

(9) $p(b)[y_n'(b)y_m(b) - y_m'(b)y_n(b)]$  (Line 1)
$-\,p(a)[y_n'(a)y_m(a) - y_m'(a)y_n(a)]$  (Line 2).

Hence if (9) is zero, (8) with $\lambda_m - \lambda_n \neq 0$ implies the orthogonality (6). Accordingly, we have to show that (9) is zero, using the boundary conditions (2) as needed.

Case 1. $p(a) = p(b) = 0$. Clearly, (9) is zero, and (2) is not needed.

Case 2. $p(a) \neq 0$, $p(b) = 0$. Line 1 of (9) is zero. Consider Line 2. From (2a) we have

$k_1 y_n(a) + k_2 y_n'(a) = 0,$
$k_1 y_m(a) + k_2 y_m'(a) = 0.$

Let $k_2 \neq 0$. We multiply the first equation by $y_m(a)$, the second by $-y_n(a)$, and add:

$k_2[y_n'(a)y_m(a) - y_m'(a)y_n(a)] = 0.$

This is $k_2$ times Line 2 of (9), which thus is zero since $k_2 \neq 0$. If $k_2 = 0$, then $k_1 \neq 0$ by assumption, and the argument of proof is similar.

Case 3. $p(a) = 0$, $p(b) \neq 0$. Line 2 of (9) is zero. From (2b) it follows that Line 1 of (9) is zero; this is similar to Case 2.

Case 4. $p(a) \neq 0$, $p(b) \neq 0$. We use both (2a) and (2b) and proceed as in Cases 2 and 3.

Case 5. $p(a) = p(b)$. Then (9) becomes

$p(b)[y_n'(b)y_m(b) - y_m'(b)y_n(b) - y_n'(a)y_m(a) + y_m'(a)y_n(a)].$
The expression in brackets $[\cdots]$ is zero, either by (2) used as before, or more directly by (7). Hence in this case, (7) can be used instead of (2), as claimed. This completes the proof of Theorem 1.

EXAMPLE 4 Application of Theorem 1. Vibrating Elastic String

The ODE in Example 2 is a Sturm-Liouville equation with $p = 1$, $q = 0$, and $r = 1$. From Theorem 1 it follows that the eigenfunctions $y_m = \sin mx$ $(m = 1, 2, \cdots)$ are orthogonal on the interval $0 \le x \le \pi$.

EXAMPLE 5 Application of Theorem 1. Orthogonality of the Legendre Polynomials

Legendre's equation is a Sturm-Liouville equation (see Example 1)

$[(1 - x^2)y']' + \lambda y = 0, \qquad \lambda = n(n+1),$

with $p = 1 - x^2$, $q = 0$, and $r = 1$. Since $p(-1) = p(1) = 0$, we need no boundary conditions, but have a singular Sturm-Liouville problem on the interval $-1 \le x \le 1$. We know that for $n = 0, 1, \cdots$, hence $\lambda = 0,\ 1\cdot 2,\ 2\cdot 3,\ \cdots$, the Legendre polynomials $P_n(x)$ are solutions of the problem. Hence these are the eigenfunctions. From Theorem 1 it follows that they are orthogonal on that interval, that is,

(10) $\int_{-1}^{1} P_m(x)\,P_n(x)\,dx = 0 \qquad (m \neq n).$

EXAMPLE 6 Application of Theorem 1. Orthogonality of the Bessel Functions $J_n(x)$

The Bessel function $J_n(\tilde{x})$ with fixed integer $n \ge 0$ satisfies Bessel's equation (Sec. 5.5)

$\tilde{x}^2\ddot{J}_n(\tilde{x}) + \tilde{x}\dot{J}_n(\tilde{x}) + (\tilde{x}^2 - n^2)J_n(\tilde{x}) = 0,$

where $\dot{J}_n = dJ_n/d\tilde{x}$, $\ddot{J}_n = d^2J_n/d\tilde{x}^2$. In Example 1 we transformed this equation, by setting $\tilde{x} = kx$, into a Sturm-Liouville equation

$[xJ_n'(kx)]' + \left(-\frac{n^2}{x} + k^2x\right)J_n(kx) = 0$

with $p(x) = x$, $q(x) = -n^2/x$, $r(x) = x$, and parameter $\lambda = k^2$. Since $p(0) = 0$, Theorem 1 implies orthogonality on an interval $0 \le x \le R$ ($R$ given, fixed) of those solutions $J_n(kx)$ that are zero at $x = R$, that is,

(11) $J_n(kR) = 0 \qquad (n \text{ fixed}).$

[Note that $q(x) = -n^2/x$ is discontinuous at 0, but this does not affect the proof of Theorem 1.] It can be shown (see Ref. [A13]) that $J_n(\tilde{x})$ has infinitely many zeros, say, $\tilde{x} = \alpha_{n,1} < \alpha_{n,2} < \cdots$ (see Fig. 107 in Sec. 5.5 for $n = 0$ and 1).
Hence we must have

(12) $kR = \alpha_{n,m}$, that is, $k = k_{n,m} = \alpha_{n,m}/R$ $(m = 1, 2, \cdots).$

3. $\cdots$ form an orthogonal set on the interval $(a - k)/c \le t \le (b - k)/c$.

4. (Change of x) Using Prob. 3, derive the orthogonality of 1, $\cos \pi x$, $\sin \pi x$, $\cos 2\pi x$, $\sin 2\pi x$, $\cdots$ on $-1 \le x \le 1$ ($r(x) = 1$) from that of 1, $\cos x$, $\sin x$, $\cos 2x$, $\sin 2x$, $\cdots$ on $-\pi \le x \le \pi$.

5. (Legendre polynomials) Show that the functions $P_n(\cos\theta)$, $n = 0, 1, \cdots$, form an orthogonal set on the interval $0 \le \theta \le \pi$ with respect to the weight function $\sin\theta$.

6. (Transformation to Sturm-Liouville form) Show that $y'' + fy' + (g + \lambda h)y = 0$ takes the form (1) if you set $p = \exp(\int f\,dx)$, $q = pg$, $r = hp$. Why would you do such a transformation?

7-19 STURM-LIOUVILLE PROBLEMS

Write the given ODE in the form (1) if it is in a different form. (Use Prob. 6.) Find the eigenvalues and eigenfunctions. Verify orthogonality. (Show the details of your work.)

7. $y'' + \lambda y = 0$, $y(0) = 0$, $y(5) = 0$
8. $y'' + \lambda y = 0$, $y'(0) = 0$, $y'(\pi) = 0$
9. $y'' + \lambda y = 0$, $y(0) = 0$, $y'(L) = 0$
10. $y'' + \lambda y = 0$, $y(0) = y(1)$, $y'(0) = y'(1)$
11. $y'' + \lambda y = 0$, $y(0) = y(2\pi)$, $y'(0) = y'(2\pi)$
12. $y'' + \lambda y = 0$, $y(0) + y'(0) = 0$, $y(1) + y'(1) = 0$
13. $y'' + \lambda y = 0$, $y(0) = 0$, $y(1) + y'(1) = 0$
14. $(xy')' + \lambda x^{-1}y = 0$, $y(1) = 0$, $y(e) = 0$. (Set $x = e^t$.)
15. $(x^{-1}y')' + (\lambda + 1)x^{-3}y = 0$, $y(1) = 0$, $y(e^{\pi}) = 0$. (Set $x = e^t$.)
16. $y'' - 2y' + (\lambda + 1)y = 0$, $y(0) = 0$, $y(1) = 0$
17. $y'' + 8y' + (\lambda + 16)y = 0$, $y(0) = 0$, $y(\pi) = 0$
18. $xy'' + 2y' + \lambda xy = 0$, $y(\pi) = 0$, $y(2\pi) = 0$. (Use a CAS or set $y = x^{-1}u$.)
19. $y'' - 2x^{-1}y' + (\lambda + 2x^{-2})y = 0$, $y(1) = 0$, $y(2) = 0$. (Use a CAS or set $y = xu$.)

20. TEAM PROJECT. Special Functions. Orthogonal polynomials play a great role in applications. For this reason, Legendre polynomials and various other orthogonal polynomials have been studied extensively; see Refs. [GR1], [GR10] in App. 1. Consider some of the most important ones as follows.

(a) Chebyshev polynomials¹⁰ of the first and second kind are defined by

$T_n(x) = \cos{(n\arccos x)}, \qquad U_n(x) = \frac{\sin{[(n+1)\arccos x]}}{\sqrt{1 - x^2}}\,,$

respectively, where $n = 0, 1, \cdots$.
Show that

$T_0 = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x,$
$U_0 = 1, \quad U_1(x) = 2x, \quad U_2(x) = 4x^2 - 1, \quad U_3(x) = 8x^3 - 4x.$

Show that the Chebyshev polynomials $T_n(x)$ are orthogonal on the interval $-1 \le x \le 1$ with respect to the weight function $r(x) = 1/\sqrt{1 - x^2}$. (Hint. To evaluate the integral, set $\arccos x = \theta$.) Verify that $T_n(x)$, $n = 0, 1, 2, 3$, satisfy the Chebyshev equation

$(1 - x^2)y'' - xy' + n^2y = 0.$

(b) Orthogonality on an infinite interval: Laguerre polynomials are defined by $L_0 = 1$ and

$L_n(x) = \frac{e^x}{n!}\,\frac{d^n(x^ne^{-x})}{dx^n}\,, \qquad n = 1, 2, \cdots.$

Show that

$L_1(x) = 1 - x, \quad L_2(x) = 1 - 2x + x^2/2, \quad L_3(x) = 1 - 3x + 3x^2/2 - x^3/6.$

Prove that the Laguerre polynomials are orthogonal on the positive axis $0 \le x < \infty$ with respect to the weight function $r(x) = e^{-x}$. Hint. Since the highest power in $L_m$ is $x^m$, it suffices to show that $\int e^{-x}x^kL_n\,dx = 0$ for $k < n$. Do this by $k$ integrations by parts.

¹⁰PAFNUTI CHEBYSHEV (1821-1894), Russian mathematician, is known for his work in approximation theory and the theory of numbers. Another transliteration of the name is TCHEBICHEF.
EDMOND LAGUERRE (1834-1886), French mathematician, who did research work in geometry and in the theory of infinite series.

5.8 Orthogonal Eigenfunction Expansions

Orthogonal functions (obtained from Sturm-Liouville problems or otherwise) yield important series developments of given functions, as we shall see. This includes the famous Fourier series (to which we devote Chaps. 11 and 12), the daily bread of the physicist and engineer for solving problems in heat conduction, mechanical and electrical vibrations, etc. Indeed, orthogonality is one of the most useful ideas ever introduced in applied mathematics.

Standard Notation for Orthogonality and Orthonormality

The integral (4) in Sec. 5.7 defining orthogonality is denoted by $(y_m, y_n)$. This is standard. Also, Kronecker's delta¹² $\delta_{mn}$ is defined by $\delta_{mn} = 0$ if $m \neq n$ and $\delta_{mn} = 1$ if $m = n$ (thus $\delta_{nn} = 1$).
Hence for orthonormal functions $y_0, y_1, y_2, \cdots$ with respect to weight $r(x)$ ($> 0$) on $a \le x \le b$ we can now simply write $(y_m, y_n) = \delta_{mn}$, written out

(1) $(y_m, y_n) = \int_a^b r(x)\,y_m(x)\,y_n(x)\,dx = \delta_{mn} = \begin{cases} 0 & \text{if } m \neq n \\ 1 & \text{if } m = n. \end{cases}$

Also, for the norm we can now write

(2) $\|y_m\| = \sqrt{(y_m, y_m)} = \sqrt{\int_a^b r(x)\,y_m^2(x)\,dx}\,.$

Write down a few examples of your own, to get used to this practical short notation.

Orthogonal Series

Now comes the instant that shows why orthogonality is a fundamental concept. Let $y_0, y_1, y_2, \cdots$ be an orthogonal set with respect to weight $r(x)$ on an interval $a \le x \le b$. Let $f(x)$ be a function that can be represented by a convergent series

(3) $f(x) = \sum_{m=0}^{\infty} a_m y_m(x) = a_0 y_0(x) + a_1 y_1(x) + \cdots.$

This is called an orthogonal expansion or generalized Fourier series. If the $y_m$ are eigenfunctions of a Sturm-Liouville problem, we call (3) an eigenfunction expansion. In (3) we use again $m$ for summation since $n$ will be used as a fixed order of Bessel functions.

Given $f(x)$, we have to determine the coefficients in (3), called the Fourier constants of $f(x)$ with respect to $y_0, y_1, \cdots$. Because of the orthogonality this is simple. All we have to do is to multiply both sides of (3) by $r(x)y_n(x)$ ($n$ fixed) and then integrate on both sides from $a$ to $b$. We assume that term-by-term integration is permissible. (This is justified, for instance, in the case of "uniform convergence," as is shown in Sec. 15.5.) Then we obtain

$(f, y_n) = \int_a^b rf\,y_n\,dx = \int_a^b r\left(\sum_{m=0}^{\infty} a_m y_m\right)y_n\,dx = \sum_{m=0}^{\infty} a_m(y_m, y_n).$

¹²LEOPOLD KRONECKER (1823-1891), German mathematician at Berlin University, who made important contributions to algebra, group theory, and number theory.

Because of the orthogonality all the integrals on the right are zero, except when $m = n$.
Hence the whole infinite series reduces to the single term

$a_n(y_n, y_n) = a_n\|y_n\|^2.$

Assuming that all the functions $y_n$ have nonzero norm, we can divide by $\|y_n\|^2$; writing again $m$ for $n$, to be in agreement with (3), we get the desired formula for the Fourier constants

(4) $a_m = \frac{(f, y_m)}{\|y_m\|^2} = \frac{1}{\|y_m\|^2}\int_a^b r(x)\,f(x)\,y_m(x)\,dx \qquad (m = 0, 1, \cdots).$

EXAMPLE 1 Fourier Series

A most important class of eigenfunction expansions is obtained from the periodic Sturm-Liouville problem

$y'' + \lambda y = 0, \qquad y(\pi) = y(-\pi), \qquad y'(\pi) = y'(-\pi).$

A general solution of the ODE is $y = A\cos kx + B\sin kx$, where $k = \sqrt{\lambda}$. Substituting $y$ and its derivative into the boundary conditions, we obtain

$A\cos k\pi + B\sin k\pi = A\cos{(-k\pi)} + B\sin{(-k\pi)},$
$-kA\sin k\pi + kB\cos k\pi = -kA\sin{(-k\pi)} + kB\cos{(-k\pi)}.$

Since $\cos{(-\alpha)} = \cos\alpha$, the cosine terms cancel, so that these equations give no condition for these terms. Since $\sin{(-\alpha)} = -\sin\alpha$, the equations give the condition $\sin k\pi = 0$, hence $k\pi = m\pi$, $k = m = 0, 1, 2, \cdots$, so that the eigenfunctions are

$\cos 0 = 1, \quad \cos x,\ \sin x, \quad \cos 2x,\ \sin 2x, \quad \cdots, \quad \cos mx,\ \sin mx, \quad \cdots$

corresponding pairwise to the eigenvalues $\lambda = k^2 = 0, 1, 4, \cdots$. ($\sin 0 = 0$ is not an eigenfunction.) By Theorem 1 in Sec. 5.7, any two of these belonging to different eigenvalues are orthogonal on the interval $-\pi \le x \le \pi$ (note that $r(x) = 1$ for the present ODE). The orthogonality of $\cos mx$ and $\sin mx$ for the same $m$ follows by integration,

$\int_{-\pi}^{\pi} \cos mx\,\sin mx\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \sin 2mx\,dx = 0.$

For the norms we get $\|1\| = \sqrt{2\pi}$, and $\sqrt{\pi}$ for all the others, as you may verify by integrating 1, $\cos^2 x$, $\sin^2 x$, etc., from $-\pi$ to $\pi$. This gives the series (with a slight extension of notation, since we have two functions for each eigenvalue 1, 4, 9, $\cdots$)

(5) $f(x) = a_0 + \sum_{m=1}^{\infty}\,(a_m\cos mx + b_m\sin mx).$

According to (4) the coefficients (with $m = 1, 2, \cdots$) are

(6) $a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx, \qquad a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx, \qquad b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin mx\,dx.$
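The Euler formulas (6) are easy to evaluate numerically. A small sketch (assuming NumPy is available; the grid resolution and the choice $f(x) = x$ are ours): for this $f$, (6) gives $a_m = 0$ and $b_m = 2(-1)^{m+1}/m$, which the trapezoidal rule reproduces:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

def trap(g):                           # trapezoidal rule on the grid x
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dx

f = x                                  # the function to expand, f(x) = x

a0 = trap(f) / (2 * np.pi)
b = [trap(f * np.sin(m * x)) / np.pi for m in (1, 2, 3)]

# f is odd, so a_0 = 0; and b_m = 2(-1)^(m+1)/m
assert abs(a0) < 1e-10
assert np.allclose(b, [2.0, -1.0, 2.0 / 3.0], atol=1e-6)
```

Replacing `f = x` by samples of any other integrable function gives its Fourier coefficients in the same way.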
The series (5) is called the Fourier series of $f(x)$. Its coefficients are called the Fourier coefficients of $f(x)$, as given by the so-called Euler formulas (6) (not to be confused with the Euler formula (11) in Sec. 2.2). For instance, for the "periodic rectangular wave" in Fig. 111, given by $f(x) = -1$ if $-\pi < x < 0$ and $f(x) = 1$ if $0 < x < \pi$,

we can find constants $a_0, \cdots, a_k$ (with $k$ large enough) such that

(15) $\|f - (a_0y_0 + \cdots + a_ky_k)\| < \epsilon.$

An interesting and basic consequence of the integral in (13) is obtained as follows. Performing the square and using (14), we first have

$\int_a^b r(x)\,[s_k(x) - f(x)]^2\,dx = \int_a^b rs_k^2\,dx - 2\int_a^b rfs_k\,dx + \int_a^b rf^2\,dx$
$= \sum_{m=0}^{k} a_m^2 - 2\sum_{m=0}^{k} a_m\int_a^b rfy_m\,dx + \int_a^b rf^2\,dx.$

The first integral on the right equals $\sum a_m^2$ because $\int ry_my_l\,dx = 0$ for $m \neq l$ and $\int ry_m^2\,dx = 1$. In the second sum on the right, the integral equals $a_m$, by (4) with $\|y_m\| = 1$. Hence the first term on the right cancels half of the second term, so that the right side reduces to

$\int_a^b rf^2\,dx - \sum_{m=0}^{k} a_m^2.$

This is nonnegative because in the previous formula the integrand on the left is nonnegative (recall that the weight $r(x)$ is positive!) and so is the integral on the left. This proves the important Bessel's inequality

(16) $\sum_{m=0}^{k} a_m^2 \le \|f\|^2 = \int_a^b r(x)\,f(x)^2\,dx \qquad (k = 1, 2, \cdots).$

Here we can let $k \to \infty$, because the left sides form a monotone increasing sequence that is bounded by the right side, so that we have convergence by the familiar Theorem 1 in App. A3.3. Hence

(17) $\sum_{m=0}^{\infty} a_m^2 \le \|f\|^2.$

Furthermore, if $y_0, y_1, \cdots$ is complete in a set of functions $S$, then (13) holds for every $f$ belonging to $S$. By (15) this implies equality in (16) with $k \to \infty$. Hence in the case of completeness every $f$ in $S$ satisfies the so-called Parseval's equality

(18) $\sum_{m=0}^{\infty} a_m^2 = \|f\|^2 = \int_a^b r(x)\,f(x)^2\,dx.$
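Bessel's inequality (16) and Parseval's equality (18) can be illustrated with a concrete function. For $f(x) = x$ on $(-\pi, \pi)$ (our choice of example, with $r = 1$ and the orthonormal functions $\sin mx/\sqrt{\pi}$), the Fourier constants are $a_m = 2\sqrt{\pi}(-1)^{m+1}/m$, so $a_m^2 = 4\pi/m^2$, and the partial sums of $\sum a_m^2$ increase monotonically toward $\|f\|^2 = 2\pi^3/3$:

```python
import numpy as np

# ||f||^2 = integral of x^2 over (-pi, pi)
norm2 = 2 * np.pi**3 / 3
m = np.arange(1, 100001)
partial = np.cumsum(4 * np.pi / m**2)    # partial sums of a_m^2

assert np.all(partial <= norm2)          # Bessel's inequality (16)
assert abs(partial[-1] - norm2) < 1e-3   # Parseval's equality (18) in the limit
```

Incidentally, dividing Parseval's equality by $4\pi$ here recovers Euler's sum $\sum 1/m^2 = \pi^2/6$.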
As a consequence of (18) we prove that in the case of completeness there is no function orthogonal to every function of the orthonormal set, with the trivial exception of a function of zero norm:

THEOREM 1 Completeness

Let $y_0, y_1, \cdots$ be a complete orthonormal set on $a \le x \le b$ in a set of functions $S$. Then if a function $f$ belongs to $S$ and is orthogonal to every $y_m$, it must have norm zero. In particular, if $f$ is continuous, then $f$ must be identically zero.

PROOF Since $f$ is orthogonal to every $y_m$, the left side of (18) must be zero. If $f$ is continuous, then $\|f\| = 0$ implies $f(x) \equiv 0$, as can be seen directly from (2) with $f$ instead of $y_m$, because $r(x) > 0$.

EXAMPLE 4 Fourier Series

The orthonormal set in Example 1 is complete in the set of continuous functions on $-\pi \le x \le \pi$. Verify directly that $f(x) \equiv 0$ is the only continuous function orthogonal to all the functions of that set.

Solution. Let $f$ be any continuous function. By the orthogonality (we can omit $\sqrt{2\pi}$ and $\sqrt{\pi}$),

$\int_{-\pi}^{\pi} 1\cdot f(x)\,dx = 0, \qquad \int_{-\pi}^{\pi} f(x)\cos mx\,dx = 0, \qquad \int_{-\pi}^{\pi} f(x)\sin mx\,dx = 0.$

Hence $a_m = 0$ and $b_m = 0$ in (6) for all $m$, so that (3) reduces to $f(x) \equiv 0$.

This is the end of Chap. 5 on the power series method and the Frobenius method, which are indispensable in solving linear ODEs with variable coefficients, some of the most important of which we have discussed and solved. We have also seen that the latter are important sources of special functions having orthogonality properties that make them suitable for orthogonal series representations of given functions.

1-5 FOURIER-LEGENDRE SERIES

Showing the details of your calculations, develop:

1. $7x^4 - 6x^2$
2. $(x + 1)^2$
3. $x^3 - x^2 + x - 1$
4. $1,\ x,\ x^2,\ x^3$
5. Prove that if $f(x)$ in Example 2 is even [that is, $f(x) = f(-x)$], its series contains only $P_m(x)$ with even $m$.

6-16 CAS EXPERIMENTS.
FOURIER-LEGENDRE SERIES

Find and graph (on common axes) the partial sums up to that $S_{m_0}$ whose graph practically coincides with that of $f(x)$ within graphical accuracy. State what $m_0$ is. On what does the size of $m_0$ seem to depend?

6. $f(x) = \sin \pi x$
7. $f(x) = \sin^2 \pi x$
8. $f(x) = \cos \pi x$
9. $f(x) = \cos 2\pi x$
10. $f(x) = \cos 3\pi x$
11. $f(x) = e^x$
12. $f(x) = e^{-x^2}$
13. $f(x) = 1/(1 + x^2)$
14. $f(x) = J_0(\alpha_{0,1}x)$, where $\alpha_{0,1}$ is the first positive zero of $J_0$
15. $f(x) = J_0(\alpha_{0,2}x)$, where $\alpha_{0,2}$ is the second positive zero of $J_0$
16. $f(x) = J_1(\alpha_{1,1}x)$, where $\alpha_{1,1}$ is the first positive zero of $J_1$

17. CAS EXPERIMENT. Fourier-Bessel Series. Use Example 3 and again take $n = 0$ and $R = 1$, so that you get the series

(19) $f(x) = a_1J_0(\alpha_{0,1}x) + a_2J_0(\alpha_{0,2}x) + a_3J_0(\alpha_{0,3}x) + \cdots$

with the zeros $\alpha_{0,1}, \alpha_{0,2}, \cdots$ from your CAS (see also Table A1 in App. 5).

(a) Graph the terms $J_0(\alpha_{0,1}x), \cdots, J_0(\alpha_{0,10}x)$ for $0 \le x \le 1$ on common axes.
(b) Write a program for calculating partial sums of (19). Find out for what $f(x)$ your CAS can evaluate the integrals. Take two such $f(x)$ and comment empirically on the speed of convergence by observing the decrease of the coefficients.
(c) Take $f(x) = 1$ in (19) and evaluate the integrals for the coefficients analytically by (24a), Sec. 5.5, with $\nu = 1$. Graph the first few partial sums on common axes.

18. TEAM PROJECT. Orthogonality on the Entire Real Axis. Hermite Polynomials.¹³ These orthogonal polynomials are defined by $He_0(x) = 1$ and

$He_n(x) = (-1)^ne^{x^2/2}\,\frac{d^n}{dx^n}\left(e^{-x^2/2}\right), \qquad n = 1, 2, \cdots.$

REMARK. As is true for many special functions, the literature contains more than one notation, and one sometimes defines as Hermite polynomials the functions

$H_n^*(x) = (-1)^ne^{x^2}\,\frac{d^ne^{-x^2}}{dx^n}\,.$

This differs from our definition, which is preferred in applications.

(a) Small Values of n. Show that

$He_1(x) = x, \quad He_2(x) = x^2 - 1, \quad He_3(x) = x^3 - 3x, \quad He_4(x) = x^4 - 6x^2 + 3.$

(b) Generating Function. A generating function of the Hermite polynomials is

(20) $e^{tx - t^2/2} = \sum_{n=0}^{\infty} a_n(x)t^n$

because $He_n(x) = n!\,a_n(x)$. Prove this.
Hint. Use the formula for the coefficients of a Maclaurin series and note that $tx - \tfrac{1}{2}t^2 = \tfrac{1}{2}x^2 - \tfrac{1}{2}(x - t)^2$.

(c) Derivative. Differentiating the generating function with respect to $x$, show that

(21) $He_n'(x) = nHe_{n-1}(x).$

¹³CHARLES HERMITE (1822-1901), French mathematician, is known for his work in algebra and number theory. The great HENRI POINCARÉ (1854-1912) was one of his students.

(d) Orthogonality on the x-Axis needs a weight function that goes to zero sufficiently fast as $x \to \pm\infty$. (Why?) Show that the Hermite polynomials are orthogonal on $-\infty < x < \infty$ with respect to the weight function $r(x) = e^{-x^2/2}$. Hint. Use integration by parts and (21).

(e) ODEs. Show that

(22) $He_{n+1}(x) = xHe_n(x) - nHe_{n-1}(x).$

Using this with $n - 1$ instead of $n$ and (21), show that $y = He_n(x)$ satisfies the ODE

(23) $y'' - xy' + ny = 0.$

Show that $w = e^{-x^2/4}y$ is a solution of Weber's equation¹⁴

(24) $w'' + \left(n + \tfrac{1}{2} - \tfrac{1}{4}x^2\right)w = 0 \qquad (n = 0, 1, \cdots).$

19. WRITING PROJECT. Orthogonality. Write a short report (2-3 pages) about the most important ideas and facts related to orthogonality and orthogonal series and their applications.

CHAPTER 5 REVIEW QUESTIONS AND PROBLEMS

1. What is a power series? Can it contain negative or fractional powers? How would you test for convergence?
2. Why could we use the power series method for Legendre's equation but needed the Frobenius method for Bessel's equation?
3. Why did we introduce two kinds of Bessel functions, $J$ and $Y$?
4. What is the hypergeometric equation and why did Gauss introduce it?
5. List the three cases of the Frobenius method, giving examples of your own.
6. What is the difference between an initial value problem and a boundary value problem?
7. What does orthogonality of functions mean and how is it used in series expansions? Give examples.
8. What is the Sturm-Liouville theory and its practical importance?
9. What do you remember about the orthogonality of the Legendre polynomials? Of Bessel functions?
10.
What is completeness of orthogonal sets? Why is it important?

11-20 SERIES SOLUTIONS

Find a basis of solutions. Try to identify the series as expansions of known functions. (Show the details of your work.)

11. $y'' - 9y = 0$
12. $(1 - x)^2y'' + (1 - x)y' - 3y = 0$
13. $xy'' - (x + 1)y' + y = 0$
14. $x^2y'' - 3xy' + 4y = 0$
15. $x^2y'' + 4xy' + (4x^2 + 2)y = 0$
16. $x^2y'' - 4xy' + (x^2 + 6)y = 0$
17. $xy'' + (2x + 1)y' + (x + 1)y = 0$
18. $(x^2 - 1)y'' - 2xy' + 2y = 0$
19. $(x^2 - 1)y'' + 4xy' + 2y = 0$
20. $x^2y'' + xy' + (4x^4 - 1)y = 0$

21-25 BESSEL'S EQUATION

Find a general solution in terms of Bessel functions. (Use the indicated transformations and show the details.)

21. $x^2y'' + xy' + (36x^2 - 2)y = 0$  $(6x = z)$
22. $x^2y'' + 5xy' + (x^2 - 12)y = 0$  $(y = u/x^2)$
23. $x^2y'' + xy' + 4(x^4 - 1)y = 0$  $(x^2 = z)$
24. $4x^2y'' - 20xy' + (4x^2 + 35)y = 0$  $(y = x^3u)$
25. $y'' + k^2x^2y = 0$  $(y = u\sqrt{x},\ \tfrac{1}{2}kx^2 = z)$

26-30 BOUNDARY VALUE PROBLEMS

Find the eigenvalues and eigenfunctions.

26. $y'' + \lambda y = 0$, $y(0) = 0$, $y'(\pi) = 0$
27. $y'' + \lambda y = 0$, $y(0) = y(1)$, $y'(0) = y'(1)$
28. $(xy')' + \lambda x^{-1}y = 0$, $y(\pi) = 0$, $y(2\pi) = 0$. (Set $x = e^t$.)
29. $x^2y'' + xy' + (\lambda x^2 - 1)y = 0$, $y(0) = 0$, $y(1) = 0$
30. $y'' + \lambda y = 0$, $y(0) + y'(0) = 0$, $y(1) = 0$

31-35 CAS PROBLEMS

Write a program, develop $f(x)$ in a Fourier-Legendre series, and graph the first five partial sums on common axes, together with the given function. Comment on accuracy.

31. $f(x) = e^{2x}$  $(-1 \le x \le 1)$
32. $f(x) = \sin{(\pi x^2)}$  $(-1 \le x \le 1)$
33. $f(x) = 1/(1 + |x|)$  $(-1 \le x \le 1)$
34. $f(x) = |\cos \pi x|$  $(-1 \le x \le 1)$
35. $f(x) = x$ if $0 \le x \le 1$, $0$ if $-1 \le x < 0$

¹⁴HEINRICH WEBER (1842-1913), German mathematician.

SUMMARY OF CHAPTER 5
Series Solutions of ODEs. Special Functions

The power series method gives solutions of linear ODEs

(1) $y'' + p(x)y' + q(x)y = 0$

with variable coefficients $p$ and $q$ in the form of a power series (with any center $x_0$, e.g., $x_0 = 0$)

(2) $y(x) = \sum_{m=0}^{\infty} a_m(x - x_0)^m = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + \cdots.$

Such a solution is obtained by substituting (2) and its derivatives into (1).
This gives a recurrence formula for the coefficients. You may program this formula (or even obtain and graph the whole solution) on your CAS. If $p$ and $q$ are analytic at $x_0$ (that is, representable by a power series in powers of $x - x_0$ with positive radius of convergence; Sec. 5.2), then (1) has solutions of this form (2). The same holds if $h$, $p$, $q$ in

$h(x)y'' + p(x)y' + q(x)y = 0$

are analytic at $x_0$ and $h(x_0) \neq 0$, so that we can divide by $h$ and obtain the standard form (1). Legendre's equation is solved by the power series method in Sec. 5.3.

The Frobenius method (Sec. 5.4) extends the power series method to ODEs

(3) $y'' + \frac{a(x)}{x - x_0}\,y' + \frac{b(x)}{(x - x_0)^2}\,y = 0$

whose coefficients are singular (i.e., not analytic) at $x_0$, but are "not too bad," namely, such that $a$ and $b$ are analytic at $x_0$. Then (3) has at least one solution of the form

(4) $y(x) = (x - x_0)^r\sum_{m=0}^{\infty} a_m(x - x_0)^m = a_0(x - x_0)^r + a_1(x - x_0)^{r+1} + \cdots$

where $r$ can be any real (or even complex) number and is determined by substituting (4) into (3) from the indicial equation (Sec. 5.4), along with the coefficients of (4). A second linearly independent solution of (3) may be of a similar form (with different $r$ and $a_m$'s) or may involve a logarithmic term. Bessel's equation is solved by the Frobenius method in Secs. 5.5 and 5.6.

"Special functions" is a common name for higher functions, as opposed to the usual functions of calculus. Most of them arise either as nonelementary integrals [see (24)-(44) in App. 3.1] or as solutions of (1) or (3). They get a name and notation and are included in the usual CASs if they are important in application or in theory. Of this kind, and particularly useful to the engineer and physicist, are Legendre's equation and polynomials $P_0, P_1, \cdots$ (Sec. 5.3), Gauss's hypergeometric equation and functions $F(a, b, c; x)$ (Sec. 5.4), and Bessel's equation and functions $J_\nu$ and $Y_\nu$ (Secs. 5.5, 5.6).
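Such a recurrence is indeed easy to program. As an illustration (the ODE $y'' + y = 0$ is our own choice of example, not taken from the summary), substituting (2) with $x_0 = 0$ gives $a_{m+2} = -a_m/\big((m+2)(m+1)\big)$, and with $a_0 = 1$, $a_1 = 0$ the partial sums reproduce $\cos x$:

```python
import math

# Recurrence for y'' + y = 0 obtained by substituting (2) with x0 = 0:
#   a_{m+2} = -a_m / ((m+2)(m+1))
a = [1.0, 0.0]                       # a0 = 1, a1 = 0 selects the solution cos x
for m in range(40):
    a.append(-a[m] / ((m + 2) * (m + 1)))

x = 1.0
y = sum(am * x**m for m, am in enumerate(a))   # partial sum of the series (2)
assert abs(y - math.cos(x)) < 1e-12
```

Choosing $a_0 = 0$, $a_1 = 1$ instead gives the second, linearly independent solution $\sin x$; together they form a basis, in agreement with the general theory.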
Modeling involving ODEs usually leads to initial value problems (Chaps. 1-3) or boundary value problems. Many of the latter can be written in the form of Sturm-Liouville problems (Sec. 5.7). These are eigenvalue problems involving a parameter λ that is often related to frequencies, energies, or other physical quantities. Solutions of Sturm-Liouville problems, called eigenfunctions, have many general properties in common, notably the highly important orthogonality (Sec. 5.7), which is useful in eigenfunction expansions (Sec. 5.8) in terms of cosine and sine ("Fourier series", the topic of Chap. 11), Legendre polynomials, Bessel functions (Sec. 5.8), and other eigenfunctions.

CHAPTER 6 Laplace Transforms

The Laplace transform method is a powerful method for solving linear ODEs and corresponding initial value problems, as well as systems of ODEs arising in engineering. The process of solution consists of three steps (see Fig. 112).

Step 1. The given ODE is transformed into an algebraic equation ("subsidiary equation").
Step 2. The subsidiary equation is solved by purely algebraic manipulations.
Step 3. The solution in Step 2 is transformed back, resulting in the solution of the given problem.

Fig. 112. Solving an IVP by Laplace transforms: Initial Value Problem (IVP) → Algebraic Problem (AP) → Solving the AP by algebra → Solution of the IVP

Thus solving an ODE is reduced to an algebraic problem (plus those transformations). This switching from calculus to algebra is called operational calculus. The Laplace transform method is the most important operational method to the engineer. This method has two main advantages over the usual methods of Chaps. 1-4:

A. Problems are solved more directly: initial value problems without first determining a general solution, and nonhomogeneous ODEs without first solving the corresponding homogeneous ODE.

B. More importantly, the use of the unit step function (Heaviside function in Sec. 6.3) and Dirac's delta (in Sec.
6.4) makes the method particularly powerful for problems with inputs (driving forces) that have discontinuities or represent short impulses or complicated periodic functions.

In this chapter we consider the Laplace transform and its application to engineering problems involving ODEs. PDEs will be solved by the Laplace transform in Sec. 12.11. General formulas are listed in Sec. 6.8, transforms and inverses in Sec. 6.9. The usual CASs can handle most Laplace transforms.

Prerequisite: Chap. 2
Sections that may be omitted in a shorter course: 6.5, 6.7
References and Answers to Problems: App. 1 Part A, App. 2.

6.1 Laplace Transform. Inverse Transform. Linearity. s-Shifting

If f(t) is a function defined for all t ≥ 0, its Laplace transform is the integral of f(t) times e^{-st} from t = 0 to ∞. It is a function of s, say, F(s), and is denoted by ℒ(f); thus

(1) F(s) = ℒ(f) = ∫_0^∞ e^{-st} f(t) dt.

Here we must assume that f(t) is such that the integral exists (that is, has some finite value). This assumption is usually satisfied in applications; we shall discuss this near the end of the section.

Not only is the result F(s) called the Laplace transform, but the operation just described, which yields F(s) from a given f(t), is also called the Laplace transform. It is an "integral transform"

F(s) = ∫_0^∞ k(s, t) f(t) dt

with "kernel" k(s, t) = e^{-st}. Furthermore, the given function f(t) in (1) is called the inverse transform of F(s) and is denoted by ℒ^{-1}(F); that is, we shall write

(1*) f(t) = ℒ^{-1}(F).

Note that (1) and (1*) together imply ℒ^{-1}(ℒ(f)) = f and ℒ(ℒ^{-1}(F)) = F.

Notation. Original functions depend on t and their transforms on s; keep this in mind! Original functions are denoted by lowercase letters and their transforms by the same letters in capital, so that F(s) denotes the transform of f(t), and Y(s) denotes the transform of y(t), and so on.

EXAMPLE 1 Laplace Transform
Let f(t) = 1 when t ≥ 0. Find F(s).
Solution. From (1) we obtain by integration

ℒ(f) = ℒ(1) = ∫_0^∞ e^{-st} dt = [-(1/s) e^{-st}]_0^∞ = 1/s   (s > 0).

Our notation is convenient, but we should say a word about it. The interval of integration in (1) is infinite. Such an integral is called an improper integral and, by definition, is evaluated according to the rule

∫_0^∞ e^{-st} f(t) dt = lim_{T→∞} ∫_0^T e^{-st} f(t) dt.

Hence our convenient notation means

∫_0^∞ e^{-st} dt = lim_{T→∞} [-(1/s) e^{-sT} + (1/s) e^{0}] = 1/s   (s > 0).

We shall use this notation throughout this chapter.

EXAMPLE 2 Laplace Transform ℒ(e^{at}) of the Exponential Function e^{at}
Let f(t) = e^{at} when t ≥ 0, where a is a constant. Find ℒ(f).

Solution. Again by (1),

ℒ(e^{at}) = ∫_0^∞ e^{-st} e^{at} dt = [-(1/(s - a)) e^{-(s-a)t}]_0^∞;

hence, when s - a > 0,

ℒ(e^{at}) = 1/(s - a).

Must we go on in this fashion and obtain the transform of one function after another directly from the definition? The answer is no. And the reason is that new transforms can be found from known ones by the use of the many general properties of the Laplace transform. Above all, the Laplace transform is a "linear operation," just as differentiation and integration are. By this we mean the following.

[Footnote] PIERRE SIMON MARQUIS DE LAPLACE (1749-1827), great French mathematician, was a professor in Paris. He developed the foundation of potential theory and made important contributions to celestial mechanics, astronomy in general, special functions, and probability theory. Napoleon Bonaparte was his student for a year. For Laplace's interesting political involvements, see Ref. [GR2], listed in App. 1. The powerful practical Laplace transform techniques were developed over a century later by the English electrical engineer OLIVER HEAVISIDE (1850-1925) and were often called "Heaviside calculus." We shall drop variables when this simplifies formulas without causing confusion. For instance, in (1) we wrote ℒ(f) instead of ℒ(f)(s) and in (1*) ℒ^{-1}(F) instead of ℒ^{-1}(F)(t).
THEOREM 1 Linearity of the Laplace Transform
The Laplace transform is a linear operation; that is, for any functions f(t) and g(t) whose transforms exist and any constants a and b, the transform of af(t) + bg(t) exists, and

ℒ{af(t) + bg(t)} = a ℒ{f(t)} + b ℒ{g(t)}.

PROOF By the definition in (1),

ℒ{af(t) + bg(t)} = ∫_0^∞ e^{-st} [af(t) + bg(t)] dt = a ∫_0^∞ e^{-st} f(t) dt + b ∫_0^∞ e^{-st} g(t) dt = a ℒ{f(t)} + b ℒ{g(t)}.

EXAMPLE 3 Application of Theorem 1: Hyperbolic Functions
Find the transforms of cosh at and sinh at.

Solution. Since cosh at = ½(e^{at} + e^{-at}) and sinh at = ½(e^{at} - e^{-at}), we obtain from Example 2 and Theorem 1

ℒ(cosh at) = ½(ℒ(e^{at}) + ℒ(e^{-at})) = ½(1/(s - a) + 1/(s + a)) = s/(s^2 - a^2),
ℒ(sinh at) = ½(1/(s - a) - 1/(s + a)) = a/(s^2 - a^2).

EXAMPLE 4 Cosine and Sine
Derive the formulas

ℒ(cos ωt) = s/(s^2 + ω^2),   ℒ(sin ωt) = ω/(s^2 + ω^2).

Solution by Calculus. We write Lc = ℒ(cos ωt) and Ls = ℒ(sin ωt). Integrating by parts and noting that the integral-free parts give no contribution from the upper limit ∞, we obtain

Lc = ∫_0^∞ e^{-st} cos ωt dt = 1/s - (ω/s) Ls,
Ls = ∫_0^∞ e^{-st} sin ωt dt = (ω/s) Lc.

Substituting Ls into the first equation gives Lc = 1/s - (ω^2/s^2) Lc, hence Lc = s/(s^2 + ω^2); then Ls = (ω/s) Lc = ω/(s^2 + ω^2).

Table 6.1 (excerpt). Some functions f(t) and their transforms ℒ(f):

f(t) = 1: 1/s
cos ωt: s/(s^2 + ω^2)
sin ωt: ω/(s^2 + ω^2)
cosh at: s/(s^2 - a^2)
sinh at: a/(s^2 - a^2)
e^{at} cos ωt: (s - a)/((s - a)^2 + ω^2)
e^{at} sin ωt: ω/((s - a)^2 + ω^2)

Γ(a + 1) in formula 5 is the so-called gamma function [(15) in Sec. 5.5 or (24) in App. A3.1]. We get formula 5 from (1), setting st = x:

ℒ(t^a) = ∫_0^∞ e^{-st} t^a dt = ∫_0^∞ e^{-x} (x/s)^a (dx/s) = (1/s^{a+1}) ∫_0^∞ e^{-x} x^a dx   where s > 0.

The last integral is precisely that defining Γ(a + 1), so we have ℒ(t^a) = Γ(a + 1)/s^{a+1}, as claimed. (CAUTION! Γ(a + 1) has x^a in the integral, not x^{a+1}.) Note that formula 4 also follows from 5 because Γ(n + 1) = n! for integer n ≥ 0. Formulas 6-10 were proved in Examples 2-4. Formulas 11 and 12 will follow from 7 and 8 by "shifting," to which we turn next.

s-Shifting: Replacing s by s - a in the Transform
The Laplace transform has the very useful property that if we know the transform of f(t), we can immediately get that of e^{at} f(t), as follows.

THEOREM 2
First Shifting Theorem, s-Shifting
If f(t) has the transform F(s) (where s > k for some k), then e^{at} f(t) has the transform F(s - a) (where s - a > k). In formulas,

ℒ{e^{at} f(t)} = F(s - a)

or, if we take the inverse on both sides,

e^{at} f(t) = ℒ^{-1}{F(s - a)}.

PROOF We obtain F(s - a) by replacing s with s - a in the integral in (1), so that

F(s - a) = ∫_0^∞ e^{-(s-a)t} f(t) dt = ∫_0^∞ e^{-st} [e^{at} f(t)] dt = ℒ{e^{at} f(t)}.

If F(s) exists (i.e., is finite) for s greater than some k, then our first integral exists for s - a > k. Now take the inverse on both sides of this formula to obtain the second formula in the theorem. (CAUTION! -a in F(s - a) but +a in e^{at} f(t).)

EXAMPLE 5 s-Shifting: Damped Vibrations. Completing the Square
From Example 4 and the first shifting theorem we immediately obtain formulas 11 and 12 in Table 6.1,

ℒ{e^{at} cos ωt} = (s - a)/((s - a)^2 + ω^2),   ℒ{e^{at} sin ωt} = ω/((s - a)^2 + ω^2).

For instance, use these formulas to find the inverse of the transform

ℒ(f) = (3s - 137)/(s^2 + 2s + 401).

Solution. Applying the inverse transform, using its linearity (Prob. 28), and completing the square, we obtain

f = ℒ^{-1}{(3(s + 1) - 140)/((s + 1)^2 + 400)} = 3 ℒ^{-1}{(s + 1)/((s + 1)^2 + 20^2)} - 7 ℒ^{-1}{20/((s + 1)^2 + 20^2)}.

We now see that the inverse of the right side is the damped vibration (Fig. 113)

f(t) = e^{-t}(3 cos 20t - 7 sin 20t).

Fig. 113. Vibrations in Example 5

Existence and Uniqueness of Laplace Transforms
This is not a big practical problem because in most cases we can check the solution of an ODE without too much trouble. Nevertheless we should be aware of some basic facts. A function f(t) has a Laplace transform if it does not grow too fast, say, if for all t ≥ 0 and some constants M and k it satisfies the "growth restriction"

(2) |f(t)| ≤ M e^{kt}.

(The growth restriction (2) is sometimes called "growth of exponential order," which may be misleading since it hides that the exponent must be kt, not kt^2 or similar.)
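The completing-the-square inversion of Example 5 can be confirmed with a CAS. The following sketch uses sympy (my choice of tool here, not the book's): it transforms the damped vibration forward and checks that the result matches the given ℒ(f).

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# f(t) = e^{-t}(3 cos 20t - 7 sin 20t) from Example 5. Its transform
# should equal (3s - 137)/(s^2 + 2s + 401), because the denominator is
# (s + 1)^2 + 20^2 and the numerator is 3(s + 1) - 140 = 3s - 137.
f = sp.exp(-t) * (3 * sp.cos(20 * t) - 7 * sp.sin(20 * t))
F = sp.laplace_transform(f, t, s, noconds=True)
assert sp.simplify(F - (3 * s - 137) / (s**2 + 2 * s + 401)) == 0
```

Checking in the forward direction (transforming the claimed inverse) is a convenient way to verify any inversion done by hand with s-shifting.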
/(f) need not be continuous, but it should not be too bad. The technical term (generally used in mathematics) is piecewise continuity, /(f) is piecewise continuous on a finite interval a^t^kb where / is defined, if this interval can be divided into finitely many subintervals in each of which / is continuous and has finite limits as t approaches either endpoint of such a subinlerval from the interior. This then gives finite jumps as in Fig. 114 as the only possible discontinuities, but this suffices in most applications, and so does the following theorem. Fig, 114. Example of a piecewise continuous function /(f). (The dots mark the function values at the jumps.) THEOREM 3 Existence Theorem for Laplace Transforms If /(/) is defined and piecewise continuous on every finite interval on the semi-axis f s 0 and satisfies (2) for all t 0 and some constants M and k, then the Laplace transform S£(f) exists for all s > k. PROOF Since /(f) is piecewise continuous, e~stf(t) is integrable over any finite interval on the f-axis. From (2), assuming that s > k (to be needed for the existence of the last of the following integrals), we obtain the proof of the existence of !£(.[) from \m)\ ( dt OO oo g j \f(t)\e~st dt ■ J Me^e'*1 dt M Note that (2) can be readily checked. For instance, cosh f < e*, tn < n\e' (because tnln\ is a single term of the Maclaurin series), and so on. A function that does not satisfy (2) for any M and k is ef (take logarithms to see it). We mention that the conditions in Theorem 3 are sufficient rather than necessary (see Prob. 22). Uniqueness. If the Laplace transform of a given function exists, it is uniquely determined. Conversely, it can be shown that if two functions (both defined on the positive real axis) have the same transform, these functions cannot differ over an interval of positive length, although they may differ at isolated points (see Ref. [A 141 in App. 1). Hence we may say that the inverse of a given transform is essentially unique. 
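The growth restriction and its failure for e^{t^2} are easy to spot-check numerically. This small plain-Python script (an illustration only; the sample points and constants M, k are arbitrary choices) verifies cosh t ≤ e^t and t^n ≤ n! e^t at a few points, and compares exponents on a log scale to see e^{t^2} overtaking M e^{kt}.

```python
import math

# Growth restriction (2): |f(t)| <= M * e^{k t} for all t >= 0.
# cosh t <= e^t and t^n <= n! * e^t hold for every t >= 0.
for t in (0.5, 1.0, 5.0, 10.0):
    assert math.cosh(t) <= math.exp(t)
    n = 4
    assert t**n <= math.factorial(n) * math.exp(t)

# e^{t^2} violates (2) for every fixed M and k: comparing logarithms,
# t^2 > ln(M) + k*t once t is large enough (sample values M = 1e6, k = 50).
M, k = 1e6, 50.0
t_big = 60.0
assert t_big * t_big > math.log(M) + k * t_big
```

Working with logarithms avoids overflow: e^{3600} is far beyond floating-point range, but comparing t^2 against ln(M) + kt is harmless.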
In particular, if two continuous functions have the same transform, they are completely identical. PROBLEM SET 6.1 1-20 LAPLACE TRANSFORMS Find the Laplace transforms of the following functions. Show the details of your work, (a, b, k, 10, 6 are constants.) 1. t2 It 2. (r2 3)2 3. cos 2-irf 5. e2t cosh t 7. cos (tut + 6) O „3a-2bf 4. sin2 At 6. e~l sinh 5f 8. sin (3t - |) 10. -8 sin 0.2t SEC. 6.2 Transforms of Derivatives and Integrals. ODEs 227 11. sin t cos t 13. 12. (1 + 1)a 14. k 15. 17. 19. 20. 21. Using ££(/) in Prob. 13, find it(ff), where ft(t) = 0 if t S 2 and /,(f) = 1 if t > 2. 22. (Existence) Show that S£(l/Vt) = Vwfs. [Use (30) r(|) = V'77 in App. 3.1.] Conclude from this that the conditions in Theorem 3 are sufficient but not necessary for the existence of a Laplace transform. 23. (Change of scale) If =£(/(?)) = F(s) and c is any positive constant, show that c£(f(ct)) = F(s!c)lc. (Hint: Use (1).) Use this to obtain i£(cos ati) from i£(cos f). 24. (Nonexistence) Show that e'2 does not satisfy a condition of the form (2). 25. (Nonexistence) Give simple examples of functions (defined for all x = 0) that have no Laplace transform. 26. (Table 6.1) Derive formula 6 from formulas 9 and 10. 27. (Table 6.1) Convert Table 6.1 from a table for finding transforms to a table for finding inverse transforms (with obvious changes, e.g., 5£~1(i/sn) = tn'1/(n - 1)!, etc.). 28. (Inverse transform) Prove that f£ 1 is linear. Hint. Use the fact that i£ is linear. 29-401 INVERSE LAPLACE TRANSFORMS Given F(s) = '£(f\ find /(f). Show the details. (L, n, k, a, b arc constants.) 29. 31. As - 377 .v4 - 3s2 + 12 33. -7 uttL 35. 37. 39. LV + n'lf2 8 30. 32. 34. 2s s1 - 16 10 2s + V2 20 (s - 1)0 + 4) s2 + 4s 36. 2 (k + l)2 s + k2 1 (s - V3)(.v + V5) 1 1 s2 + 5 s + 5 38. 40. k=l 18*- 12 - 1 1 (s + «)(.v + b) 41-54 APPLICATIONS OF THE FIRST SHIFTING THEOREM (s-SHIFTING) In Probs. 41-46 find the transform. In Probs. 47-54 find the inverse transform. Show the details. 41. 3.8re24t 42. 
-3t*e-°st 43. 5e~at sin an 44. e~'M cos -at 45. e~kt(a cos t + 6 sin f) 46. e~l(a0 + a^t + ■ ■ ■ + antn) 7 77 47. 49. 51. 53. (s - iy V8 (s + V2f 15 s2 + 4s + 29 77 S2 + 10775 + 24T72 48. 50. 52. (S + 77)2 s - 6 (s - 1 )2 + 4 4* - 2 54. -5- 2i - 56 4s - 12 6.2 Transforms of Derivatives and Integrals. ODEs The Laplace transform is a method of solving ODEs and initial value problems. The crucial idea is that operations of calculus on functions are replaced by operations of algebra on transforms. Roughly, differentiation of f(t) will correspond to multiplication of 2(f) by s (see Theorems 1 and 2) and integration of f(t) to division of i£(f) by s. To solve ODEs, we must first consider the Laplace transform of derivatives. 228 CHAP. 6 Laplace Transforms THEOREM 1 Laplace Transform of Derivatives The transforms of the first and second derivatives of f(t) satisfy (1) (2) m') = s(£(f) - .f(0) Formula (1) holds if fit) is continuous for all t 0 and satisfies the growth restriction (2) in Sec. 6.1 and f'(t) is piecewise continuous on every finite interval on the semi-axis t — 0. Similarly, (2) holds iff and f are continuous for all t i£ 0 and satisfy the growth restriction and f" is piecewise continuous on every finite interval on the semi-axis t 0. PROOF We prove (1) first under the additional assumption that /' is continuous. Then by the definition and integration by parts, CC cc oc 2(f') = J e-sl.f'(t)dt = [e~stf(t)\ + s \ e-stf(t)dt. Since / satisfies (2) in Sec. 6.1, the integrated part on the right is zero at the upper limit when s > k, and at the lower limit it contributes — /(0). The last integral is ££(/). It exists for s > k because of Theorem 3 in Sec. 6.1. Hence £G(/') exists when s > k and (1) holds. If /' is merely piecewise continuous, the proof is similar. In this case the interval of integration off' must be broken up into parts such that /' is continuous in each such part. 
The proof of (2) now follows by applying (1) to /" and then substituting (1), that is ■£{f") = sX{f) - /'(0) = s[sX(f) - ,f(0)] = s2!£(f) - tf(0) - f (0). Continuing by substitution as in the proof of (2) and using induction, we obtain the following extension of Theorem 1. THEOREM 2 Laplace Transform of the Derivative /(n) of Any Order be continuous for all t =^ 0 and satisfy the growth restriction (2) in Sec. 6.1. Furthermore, let he piecewise continuous on every finite interval on the semi-axis t § 0. Then the transform off1*0 satisfies (3) X(f10) = snX(f) ,f(0) - .v"-2f (0)-----fn~l\0). EXAMPLE 1 Transform of a Resonance Term (Sec. 2.8) Lei fit) = t sin col. Then f(0) = 0, /'(f) = sin col - cot cos cot, f'(0) = 0, /" = 2co cos cot - co2r sin cot. Hence by (2), Xif) - 2co-^~-- co2mf) = 2-ffi), thus 2(f) = :£(t sin cot) (s2 + co2)2 229 EXAMPLE 2 Formulas 7 and 8 in Table 6.1, Sec. 6.1 This is a third derivation of if(cos &>f) and ,3:(sin 2 cos on. h'rom this and (2) we obtain X(f) = s2<£($) - s = -Ať/). By algebra, i£(cos cot) = —,-T7 s I- ft) Similarly, let # = sin lot. Then #(0) = 0, g = ft) cos 0, s > k, and t > 0, (4) e\fj(T)d 0). This shows that g(t) also satisfies a growth restriction. Also, g'(t) = /(f), except at points at which fit) is discontinuous. Hence g (?) is piecewise continuous on each finite interval and, by Theorem 1, since g(Q) = 0 (the integral from 0 to 0 is zero) ££{/(/)} = mg'it)} = s%{g(t)\ - g(Q) = v7{.d/i|. Division by s and interchange of the left and right sides gives the first formula in (4), from which the second follows by taking the inverse transform on both sides. EXAMPLE 3 Application of Theorem 3: Formulas 19 and 20 in the Table of Sec. 6.9 1 1 Using Theorem 3, find the inverse of / 2 , 2A sis + ft) ) and Solution. From Table 6.1 in Sec. 6.1 and the integration in (4) (second formula with the sides interchanged) we obtain 2T1 [ sis2 + ft)2 H r(l — cos tot). 230 CHAP. 6 Laplace Transforms This is formula 19 in Sec. 6.9. 
Integrating this result again and using (4) as before, we obtain formula 20 in Sec. 6.9: i 1 "\ 1 f T t sin cot "1 ( sin tot '£ 2, 2-27 f = "a" (1 " cos wt) = -J - J" = ~a - T~ ■ It is typical that results such as these can be found in several ways. In this example, try partial fraction reduction. Differential Equations, Initial Value Problems We shall now discuss how the Laplace transform method solves ODEs and initial value problems. We consider an initial value problem (5) y" + ay' + by = r(t), y(0) = K0, y'(0) = K, where a and b are constant. Here r(t) is the given input (driving force) applied to the mechanical or electrical system and y(t) is the output (response to the input) to be obtained. In Laplace's method we do three steps: Step 1. Setting up the subsidiary equation. This is an algebraic equation for the transform Y = £(y) obtained by transforming (5) by means of (1) and (2), namely, [s2Y - sy(0) - y'(0)J + a\sY - y(0)] + bY = R(s) where R(s) = It(r). Collecting the 7-terms, we have the subsidiary equation (.v2 + as + b)Y = 0 + a)y(0) + y'(0) + R(s). Step 2. Solution of the subsidiary equation by algebra. Wc divide by s2 + as + b and use the so-called transfer function 1 1 (6) Q{S) ~ s2 + as + b ~ (s + \af + b-\a2 ' (Q is often denoted by H, but we need H much more frequently for other purposes.) This gives the solution (7) Y(s) = [(s + a)y(O) + y'(0)]Q(s) + R(s)Q(s). If >(0) = y'(0) = 0, this is simply Y = RQ; hence ^(output.) i£'(input) and this explains the name of Q. Note that Q depends neither on r(i) nor on the initial conditions (but only on a and b). Step 3. Inversion of Y to obtain y = i£~l(Y). We reduce (7) (usually by partial fractions as in calculus) to a sum of terms whose inverses can be found from the tables (e.g., in Sec. 6.1 or Sec. 6.9) or by a CAS, so that we obtain the solution y(t) = ££~\Y) of (5). SEC. 6.2 Transforms of Derivatives and Integrals. 
ODEs EXAMPLE 4 Initial Value Problem: The Basic Laplace Steps Solve y" - y = t, }'(0) = 1, y'(0) = 1. Solution. Step I. From (2) and Table 6.1 we get the subsidiary equation [with Y = 9(y)\ s2Y - sy(0) - y'(0) - Y = Us2, thus (s2 - \)Y = s + 1 + 1/s2. Step 2. The transfer function is Q = - 1), and (7) becomes Y = (s + \)Q + 1 s + 1 s2 - 1 s'\s2 - 1) Simplification and partial fraction expansion gives 1 (.v2- 1 ,2) - 1 Step 3. From this expression for Y and Table 6.1 we obtain the solution y(t) = St-\Y} = sr^ The diagram in Fig. 115 summarizes our approach. (-space 1 1 it 1 1 , co —11 I'-IJ 2 l u -1. = e* + sinh t - r. s-space Given problem y"-y = t y(0) =1 y'(0) =1 Subsidiary equation (s2- l)Y = s+ 1 + 1/s2 Solution of given problem y(t) = i?' + sinh t - t I Solution of subsidiary equation Y=^+^-± s - 1 ,s2 - 1 s2 Fig. 115. Laplace transform method EXAMPLE 5 Comparison with the Usual Method Solve the initial value problem y" + y + 9y = 0, y(0) = 0.16, y'(0) - 0. Solution. From (1) and (2) we see that the subsidiary equation is s2Y - 0.16s + sY - 0.16 + 9Y = 0, thus (s2 + .s + 9)Y = 0.16(s + 1). The solution is O.I6(i + 1) 0.16(5 + J) + 0.08 (i + + f - .5 + 9 Hence by the first shifting theorem and the formulas for cos and sin in Table 6.1 we obtain , i tt-A /"35" 0.08 [35 v(() = 2 (F) = e "0.16 cos J — f + -—== sin ,/ — \ V 4 I\'35 V 4 = e_0'5t(0.16 cos 2.96; + 0.027 sin 2.96«). This agrees with Example 2, Case (III) in Sec. 2.4. The work was less. 232 CHAP. 6 Laplace Transforms Advantages of the Laplace Method 1. Solving a nonhomogeneous ODE does not require first solving the homogeneous ODE. Sec Example 4. 2. Initial values are automatically taken care of. See Examples 4 and 5. 3. Complicated inputs r(t) (right sides of linear ODEs) can be handled very efficiently, as we show in the next sections. EXAMPLE 6 Shifted Data Problems This means initial value problems with initial conditions given at some t = t0 > 0 instead of ( = 0. 
For such a problem set < = 7 + r0, so that t = l0 gives 7=0 and the Laplace transform can be applied. For instance, solve y" + y = It, \A\tt) = §77, y'(lir) = 2 - V2. Solution. We have f0 = \ir and we set t = t + jw. Then the problem is y" + y = 2(7 + \ir), y(0) = \tt, y(0) = 2 - V2 where y(7) = >■(/). Using (2) and Table 6.1 and denoting the transform of y by 7, wc see that the subsidiary equation of the "shifted" initial value problem is i Y - s ■ iir - (2 - V2) + ¥ = -s - — , 2 \tt l thus (i2 + l)y = -=■ + — + -77S + 2-V2. s2 n 2 Solving this algebraically for ?, we obtain 2 |tt irrs 2 - V2 - +--- + —- + - (s2 + \)s2 (s2 + l)s s2 - l s2 + I The inverse of the first two terms can be seen from Example 3 (with w = l), and the last two terms give cos and sin, y = !f~\Y) = 2(7 sin'f) + |ir(l - cos?) + ^ircos? + (2 - V2) sin? Now t = t — \tt. sin? = (s'n ' _ cus ')< so mat me answcr (the solution) is y = 2t sin t + cos t. PROBLEM SET 6.2 1-81 OBTAINING TRANSFORMS BY DIFFERENTIATION Using (1) or (2), find i£(f) if f(t) equals: 1. tekl 3. sin2 cot 5. sinh2 at 7. t sin Ith 2. r cos 5t 6. 8. COS" TTt cosh2 if (Use Prob. 3.) 9. (Derivation by different methods) It is typical that various transforms can be obtained by several methods. Show this for Prob. 1. Show it for 2?(cos2 \t) (a) by expressing cos2^f in terms of cos/, (b) by using Prob. 3. 10-24 | INITIAL VALUE PROBLEMS Solve the following initial value problems by the Laplace transform. (If necessary, use partial fraction expansion as in Example 4. Show all details.) 10. y' + 4y = 0, y(0) 11. y + i 12. y" - y y'(0) = y = 17 sin 2t, ' ~ 6y = 0, = 13 = 2.8 V(0) = -y(0) = 6, SEC. 6.3 Unit Step Function. r-Shifting 233 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 0. 
y(O) = 4, /(O) 4y = 0, y(0) =2.1, y - zy y" " 4y y'(0) = 3.9 y" + 2y + 2y = 0, y'(0) = -3 y" + ky - 2k2y = 0, y'(0) = 2k y" + ly' + 12y = 21e3f, .y'(0) = -10 y" + 9y = y(0) = 0, y" + 3y' + 2.25y = 9t* + 64, y'(0) = 31.5 own to illustrate the advantages of the present method (to the extent we have seen them so far). y - 6/ + 5y = y'(0) = 6.2 (Shifted data) y' v" - 2y' - 3y = j'(l) = -17 y" + 3y' - 4v = y'(i) = 5 y" + 2y' + 5y = y'(3) = 14 29 cos 2t, - 6y = 0, 0, y(l) 6ezt-2, 50/ - 150, y(0) = 1, y(0) = 2, y(0) = 3.5, y (0) = 0 y(0) = l, y(0) = 3.2, y(2) = 4 = -3, yd) = 4. y(3) = -4. PROJECT. Comments on Sec. 6.2. (a) Give reasons why Theorems 1 and 2 are more important than Theorem 3. (b) Extend Theorem 1 by showing that if /(f) is continuous, except for an ordinary discontinuity (finite jump) at some t = a (> 0), the other conditions remaining as in Theorem 1, then (see Fig. 116) (1*) ££(/') = s£(.() ~ /CO) " [/(« + 0) - f(a - 0)] 1. (d) Verify (1 *) for two more complicated functions of your choice. (e) Compare the Laplace transform of solving ODEs with the method in Chap. 2. Give examples of your Fig. 116. Formula (1*) 26. PROJECT. Further Results by Differentiation. Proceeding as in Example 1, obtain ,v2 - co2 (a) <£{t cos tot) = —2~-== (s + co ) and from this and Example 1: (b) formula 21, (c) 22, (d) 23 in Sec. 6.9, .2 4- n2 (e) £(t cosh at) (,v2 - a2)2 ' 2as (f) ££.(f sinhar) = —5-== . (s2 - a2)2 27-34 OBTAINING TRANSFORMS BY INTEGRATION Using Theorem 3. find /(f) if ££(/) equals: 27. 29. 31. 33. s2 + sl2 3 - 5s 1 28. 30. 32. 34. 10 ss + 9s 35. (Partial fractions) Solve Probs. 27, 29, and 31 by using partial fractions. 6.3 Unit Step Function. t-Shifting This section and the next one are extremely important because we shall now reach the point where the Laplace transform method shows its real power in applications and its superiority over the classical approach of Chap. 2. 
The reason is that we shall introduce two auxiliary functions, the unit step function or Heaviside function u(t - a) (below) and Dirac's delta δ(t - a) (in Sec. 6.4). These functions are suitable for solving ODEs with complicated right sides of considerable engineering interest, such as single waves, inputs (driving forces) that are discontinuous or act for some time only, periodic inputs more general than just cosine and sine, or impulsive forces acting for an instant (hammerblows, for example).

Unit Step Function (Heaviside Function) u(t - a)
The unit step function or Heaviside function u(t - a) is 0 for t < a, has a jump of size 1 at t = a (where we can leave it undefined), and is 1 for t > a; in a formula:

(1) u(t - a) = 0 if t < a,  1 if t > a   (a ≥ 0).

Figure 117 shows the special case u(t), which has its jump at zero, and Fig. 118 the general case u(t - a) for an arbitrary positive a. (For Heaviside see Sec. 6.1.)

The transform of u(t - a) follows directly from the defining integral in Sec. 6.1; here the integration begins at t = a (≥ 0) because u(t - a) is 0 for t < a. Hence

(2) ℒ{u(t - a)} = e^{-as}/s   (s > 0).

The unit step function is a typical "engineering function" made to measure for engineering applications, which often involve functions (mechanical or electrical driving forces) that are either "off" or "on." Multiplying functions f(t) with u(t - a), we can produce all sorts of effects. The simple basic idea is illustrated in Figs. 119 and 120. In Fig. 119 the given function is shown in (A). In (B) it is switched off between t = 0 and t = 2 (because u(t - 2) = 0 when t < 2) and is switched on beginning at t = 2. In (C) it is shifted to the right by 2 units, say, for instance, by 2 secs, so that it begins 2 secs later in the same fashion as before. More generally we have the following.

Let f(t) = 0 for all negative t. Then f(t - a)u(t - a) with a > 0 is f(t) shifted (translated) to the right by the amount a.
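Formula (2) can be confirmed directly from the defining integral. The sympy sketch below (sympy usage is my addition, not the book's) evaluates that integral, and also previews the t-shifting idea of the next theorem with the shifted ramp (t - 1)u(t - 1).

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{u(t - a)}: the integrand e^{-st} u(t - a) vanishes for t < a,
# so the defining integral effectively starts at t = a.
U = sp.integrate(sp.exp(-s * t), (t, a, sp.oo))
assert sp.simplify(U - sp.exp(-a * s) / s) == 0

# Preview of t-shifting: the ramp t - 1 switched on at t = 1,
# i.e. (t - 1) u(t - 1), has transform e^{-s}/s^2, the transform
# of t multiplied by e^{-s}.
R = sp.integrate(sp.exp(-s * t) * (t - 1), (t, 1, sp.oo))
assert sp.simplify(R - sp.exp(-s) / s**2) == 0
```

Declaring s and a positive lets sympy evaluate both improper integrals in closed form without convergence side conditions.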
Figure 120 shows the effect of many unit step functions, three of them in (A) and infinitely many in (B) when continued periodically to the right; this is the effect of a rectifier that clips off the negative half-waves of a sinuosidal voltage. CAUTION! Make sure that you fully understand these figures, in particular the difference between parts (B) and (C) of Figure 119. Figure 119(C) will be applied next. u(t) uU — a) 0 t Fig. 117. Unit step function u(t) 0 a t Fig. 118. Unit step function u(t - a) SEC. 6.3 Unit Step Function. r-Shifting fit) 5h 0 -5 5- k 2k t -5 - 2 k 2n t 2 k+2 2k+2 t (A) fit) = 5 sin f (B) flt)u(t - 2) (C) fit - 2)uit - 2) Fig. 119. Effects of the unit step function: (A) Given function. (B) Switching off and on. (C) Shift. 4 6 '- 0 2 4 6 8 10 (A) k[u(t - 1) - 2u(t - 4) + u(t - 6)] Fig. 120. Use of many unit step functions (B) 4 sin i±Kt)[u{i) - uit - 2) + uit - 4) ■ THEOREM 1 Time Shifting (f-Shifting): Replacing t by t — a in f(t) The first shifting theorem ("j-shifting") in Sec. 6.1 concerned transforms F(s) = if {/(f)} and /''(.v — a) = !£{eatf(t)}. The second shifting theorem will concern functions /(f) and fit — a). Unit step functions are just tools, and the theorem will be needed to apply them in connection with any other functions. Second Shifting Theorem; Time Shifting If fit) has the transform F(s), then the "shifted function" (3) fit) = f(t - a)u(t - a) 0 if t < a fit - a) if f > a has the transform e~asFis). That is, if i£-{f{i)} = F(s), then (4) XUit - a)u(t - a)} = \lT. (Fig. 121) Solution. Step 1. In terms of unit step functions, /(/) = 2(1 - u{t - D) - \t\u(t - 1) u{t - \tt)) + (cos t)u(t - |ir). Indeed. 2(1 - uit - 1)) gives /(f) for 0 < t < 1, and so on. Step 2. To apply Theorem 1, we must write each term in /(/) in the form /(( - a)u{t — a). Thus, 2(1 — u(t — 1)) remains as it is and gives the transform 2(1 — e~s)ls. 
Then I t2u(t - 1)1 = %{\it - l)2 + (t - 1) + - 1) l l tt tt2 21 (cos f)»l * ))■(-H Together, 2 2 sc/) = 7 - - + i l 7+ 27 SEC. 6.3 Unit Step Function. f-Shifting If the conversion of f[t) to f(t - a) is inconvenient, replace it by (4**) 2{fflu(t - a)} = e-as2{f(t + a)}. (4**) follows from (4) by writing f(t - a) = g(t), hence /(f) = g(t + a) and then again writing / for g. Thus, 1 ru(t - l) e~sit\-{t+ \ f\ = e~s'£- 1 2 1 —i+t + — 2 2 l l i_ 7 + s2 + 2s as before. Similarly for £{{>t2u(t - \tt}}. Finally, by (4**), '£{ cos t U J t — ^ 7t. if cos ( + — 17 I')} 2. Hence we obtain fit) = 0 if 0 < t < 1, -(sin irf)/i7 if 1< t < 2, 0 if 2 < t < 3, and (r - 3)e"2cf ~3) if r > 3. See Fig. 122. ■ 0.3 0.2 0.1 0 2 3 4 Fig. 122. f{t) in Example 2 EXAMPLE 3 Response of an RC-Circuit to a Single Rectangular Wave Find the current i(t) in the RC-circuit in Fig. 123 if a single rectangular wave with voltage V0 is applied. The circuit is assumed to be quiescent before the wave is applied. 238 v(t) -AAA- R VJR - 0 a b t Fig. 123. RC-circuit, electromotive force v(r), and current in Example 3 Solution. The input is V0[u(t - a) - u(i - ft)]. Hence the circuit is modeled by the integro-differential equation (sec Sec. 2.9 and Fig. 123) q(t) 1 Ri(t) + ~ = Riit) + — ) ch = V(r) V0[u(; - a) - u(t - ft)]. Using Theorem 3 in Sec. 6.2 and formula (1) in this section, we obtain the subsidiary equation Us) V0 r _as Ms) sC Solving this equation algebraically for /(.?), we get l(s) = FisKe.-™ - e bs) where F(s) VJR and (f-\F) = ZSL g-t/CRC) s + \/(RC) y ' R the last expression being obtained from Table 6.1 in Sec. 6.1. Hence Theorem 1 yields the solution (Fig. 123) R that is, i(t) = 0 if t < a, and >(0 Kte~ (K, - K2)e if a < t < ft if a > b where Kx = V0eaHRm/R and K2 = V0eb,(RC)/R. EXAMPLE 4 Response of an RLC-Circuit to a Sinusoidal Input Acting Over a Time Interval Find the response (the current) of the /cZ.C-circuit in Fig. 
124, where £(/) is sinusoidal, acting for a short time interval only, say, E(t) = 100 sin 400r if 0 < t < 2-n and E(l) = 0 if t > 2tt and current and charge are initially zero. Solution. The electromotive force E(t) can be represented by (100 sin 400f)( 1 - u(f - 2tt)). Hence the model for the current i(t) in the circuit is the integro-differential equation (see Sec. 2.9) 0.1/' + 1 li + 100 i(t) dr = (100 sin 4000(1 ~ - 2tt)), i(0) = 0, /'(0) = 0. From Theorems 2 and 3 in Sec. 6.2 we obtain the subsidiary equation for /(.s) = X(i) OAsI + III + 100 - = -=- 100-400^ / 1 SEC. 6.3 Unit Step Function. f-Shifting Solving it algebraically and noting that s + 110.? + 1000 = (s + 10)(? + 100), we obtain (s + 10)0 + 100) v s2 + 4002 s2 + 4002 / ' For the first term in the parentheses (■ • ■) times the factor in front of them we use the partial fraction expansion 400 000i- A B Ds + K (s + I0)(* + 100)(i + 4002) s + 10 .v + 100 / + 4002 Now determine A, B, D, K by your favorite method or by a CAS or as follows. Multiplication by the common denominator gives 400 000.? = A(s + 100)(.v2 + 4002) + B(s + 10)(s2 + 4002) + (Ds + K)(s + 10)O + 100). We set s = -10 and 100 and then equate the sums of the ss and s2 terms to zero, obtaining (all values rounded) (s = -10) (s = -100) (.s^-terms) -4 000 000 = 90(102 + 4002)/t, -40 000 000 = -90(1002 + 4002)S, 0 = A + B + D, 0 = 100A + 10ß + HOD + K, -0.27760 B = 2.6144 (s -terms) Since K = 258.66 = 0.6467 ■ 400, we thus obtain for the first term Zx in / = It - I2 -2.3368 K = 258.66. h = 0.2776 2.6144 + 2.3368.? 0.6467 • 400 s - 10 s + 100 From Table 6.1 in Sec. 6.1 we see that its inverse is hit) This is the current ((;) when 0 < / < 2ir. It agrees for 0 < t < 2it with that in Example 1 of Sec. 2.9 (except for notation), which concerned the same ffLC-circuit. Its graph in Fig. 62 in Sec. 2.9 shows that the exponential terms decrease very rapidly. Note that the present amount of work was substantially less. 
The second term I₂ of I differs from the first term by the factor e^{−2πs}. Since cos 400(t − 2π) = cos 400t and sin 400(t − 2π) = sin 400t, the second shifting theorem (Theorem 1) gives the inverse i₂(t) = 0 if 0 < t < 2π, and for t > 2π it gives

i₂(t) = −0.2776e^{−10(t−2π)} + 2.6144e^{−100(t−2π)} − 2.3368 cos 400t + 0.6467 sin 400t.

Hence in i(t) the cosine and sine terms cancel, and the current for t > 2π is

i(t) = −0.2776(e^{−10t} − e^{−10(t−2π)}) + 2.6144(e^{−100t} − e^{−100(t−2π)}).

It goes to zero very rapidly, practically within 0.5 sec.

Fig. 124. RLC-circuit in Example 4 (R = 11 Ω, L = 0.1 H, C = 10⁻² F)

1. WRITING PROJECT. Shifting Theorems. Explain and compare the different roles of the two shifting theorems, using your own formulations and examples.

2–13 UNIT STEP FUNCTION AND SECOND SHIFTING THEOREM
Sketch or graph the given function (which is assumed to be zero outside the given interval). Represent it using unit step functions. Find its transform. Show the details of your work.
2. t (0 < t < 1)
3. e^t (0 < t < 2)
4. sin 3t (0 < t < π)
5. t² (1 < t < 2)
6. t² (t > 3)
7. cos πt (1 < t < 4)
8. 1 − e^{−t} (0 < t < π)
9. t (5 < t < 10)
10. sin ωt (t > 6π/ω)
11. 20 cos πt (3 < t < 6)
12. sinh t (0 < t < 2)
13. ··· (2 < t < 4)

14–22 INVERSE TRANSFORMS BY THE SECOND SHIFTING THEOREM
Find and sketch or graph f(t) if ℒ(f) equals:
14. se^{−s}/(s² + ω²)
15. e^{−3s}/s²
16. ···/(s² + 2s + 2)
17. ···
18. ···
19. ···
20. ···
21. ···
22. ···
··· if t > π; y(0) = 0, y′(0) = 4
28. y″ + 3y′ + 2y = r(t), r(t) = 1 if 0 < t < 1 and 0 if t > 1; y(0) = 0, y′(0) = 0
29. y″ + y = r(t), r(t) = t if 0 < t < 1 and 0 if t > 1; y(0) = y′(0) = 0
30. y″ − 16y = r(t), r(t) = 48e^{2t} if 0 < t < 4 and 0 if t > 4; y(0) = 3, y′(0) = −4
31. y″ + y′ − 2y = r(t), r(t) = 3 sin t − cos t if 0 < t < 2π and 3 sin 2t − cos 2t if t > 2π; y(0) = 1, y′(0) = 0
32.
y″ + 8y′ + 15y = r(t), r(t) = 35e^{2t} if 0 < t < 2 and 0 if t > 2; y(0) = 3, y′(0) = −8
33. (Shifted data) y″ + 4y = 8t² if 0 < t < 5 and 0 if t > 5; y(1) = 1 + cos 2, y′(1) = 4 − 2 sin 2
34. y″ + 2y′ + 5y = 10 sin t if 0 < t < 2π and 0 if t > 2π; y(π) = 1, y′(π) = 2e^{−π} − 2

MODELS OF ELECTRIC CIRCUITS
35. (Discharge) Using the Laplace transform, find the charge q(t) on the capacitor of capacitance C in Fig. 125 if the capacitor is charged so that its potential is V₀ and the switch is closed at t = 0.

Fig. 125. Problem 35

36–38 RC-CIRCUIT
Using the Laplace transform and showing the details, find the current i(t) in the circuit in Fig. 126 with R = 10 Ω and C = 10⁻² F, where the current at t = 0 is assumed to be zero, and:
36. v(t) = 100 V if 0.5 < t < 0.6 and 0 otherwise. Why does i(t) have jumps?
37. v = 0 if t < 2 and 100(t − 2) V if t > 2
38. v = 0 if t < 4 and 14·10⁸ ···

Fig. 126. Problems 36–38

39–41 RL-CIRCUIT
Using the Laplace transform and showing the details, find the current i(t) in the circuit in Fig. 127, assuming i(0) = 0 and:
39. R = 10 Ω, L = 0.5 H, v = 200t V if 0 < t < 2 and 0 if t > 2
40. R = 1 kΩ (= 1000 Ω), L = 1 H, v = 0 if 0 < t < π and 40 sin t V if t > π
41. R = 25 Ω, L = 0.1 H, v = 490e^{−5t} V if 0 < t < 1 and 0 if t > 1

Fig. 127. Problems 39–41

42–44 LC-CIRCUIT
Using the Laplace transform and showing the details, find the current i(t) in the circuit in Fig. 128, assuming zero initial current and charge on the capacitor and:
42. L = 1 H, C = 0.25 F, v = 200(t − ⅓t³) V if 0 < t < 1 and 0 if t > 1
43. L = 1 H, C = 10⁻² F, v = −9900 cos t V if π < t < 3π and 0 otherwise
44. L = 0.5 H, C = 0.05 F, v = 78 sin t V if 0 < t < π and 0 if t > π

Fig. 128. Problems 42–44

45–47 RLC-CIRCUIT
Using the Laplace transform and showing the details, find the current i(t) in the circuit in Fig. 129, assuming zero initial current and charge and:
45.
R = 2 Ω, L = 1 H, C = 0.5 F, v(t) = 1 kV if 0 < t < 2 and 0 if t > 2
46. R = 4 Ω, L = 1 H, C = 0.05 F, v = 34e^{−t} V if 0 < t < 4 and 0 if t > 4
47. R = 2 Ω, L = 1 H, C = 0.1 F, v = 255 sin t V if 0 < t < 2π and 0 if t > 2π

Fig. 129. Problems 45–47

6.4 Short Impulses. Dirac's Delta Function. Partial Fractions

Phenomena of an impulsive nature, such as the action of forces or voltages over short intervals of time, arise in various applications, for instance, if a mechanical system is hit by a hammerblow, an airplane makes a "hard" landing, a ship is hit by a single high wave, or we hit a tennis ball with a racket. Our goal is to show how such problems are modeled by "Dirac's delta function" and can be solved very efficiently by the Laplace transform.

To model situations of that type, we consider the function

(1) f_k(t − a) = 1/k if a ≤ t ≤ a + k and 0 otherwise (Fig. 130)

(and later its limit as k → 0). This function represents, for instance, a force of magnitude 1/k acting from t = a to t = a + k, where k is positive and small. In mechanics, the integral of a force acting over a time interval a ≤ t ≤ a + k is called the impulse of the force; similarly for electromotive forces E(t) acting on circuits. Since the blue rectangle in Fig. 130 has area 1, the impulse of f_k in (1) is

(2) I_k = ∫₀^∞ f_k(t − a) dt = ∫_a^{a+k} (1/k) dt = 1.

To find out what will happen if k becomes smaller and smaller, we take the limit of f_k as k → 0 (k > 0). This limit is denoted by δ(t − a), that is,

δ(t − a) = lim_{k→0} f_k(t − a).

δ(t − a) is called the Dirac delta function² or the unit impulse function.
δ(t − a) is not a function in the ordinary sense as used in calculus, but a so-called generalized function.² To see this, we note that the impulse I_k of f_k is 1, so that from (1) and (2) by taking the limit as k → 0 we obtain

(3) δ(t − a) = ∞ if t = a, 0 otherwise, and ∫₀^∞ δ(t − a) dt = 1,

but from calculus we know that a function which is everywhere 0 except at a single point must have the integral equal to 0. Nevertheless, in impulse problems it is convenient to operate on δ(t − a) as though it were an ordinary function. In particular, for a continuous function g(t) one uses the property [often called the sifting property of δ(t − a), not to be confused with shifting]

(4) ∫₀^∞ g(t)δ(t − a) dt = g(a),

which is plausible by (2).

To obtain the Laplace transform of δ(t − a), we write

f_k(t − a) = (1/k)[u(t − a) − u(t − (a + k))]

Fig. 130. The function f_k(t − a) in (1)

²PAUL DIRAC (1902–1984), English physicist, was awarded the Nobel Prize [jointly with the Austrian ERWIN SCHRÖDINGER (1887–1961)] in 1933 for his work in quantum mechanics. Generalized functions are also called distributions. Their theory was created in 1936 by the Russian mathematician SERGEI L'VOVICH SOBOLEV (1908–1989), and in 1945, under wider aspects, by the French mathematician LAURENT SCHWARTZ (1915–2002).

and take the transform [see (2) in Sec. 6.3]

ℒ{f_k(t − a)} = (1/(ks))[e^{−as} − e^{−(a+k)s}] = e^{−as} (1 − e^{−ks})/(ks).

We now take the limit as k → 0. By l'Hôpital's rule the quotient on the right has the limit 1 (differentiate the numerator and the denominator separately with respect to k, obtaining se^{−ks} and s, respectively, and use se^{−ks}/s → 1 as k → 0). Hence the right side has the limit e^{−as}. This suggests defining the transform of δ(t − a) by this limit, that is,

(5) ℒ{δ(t − a)} = e^{−as}.
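The limit process behind (5) can be watched numerically. The following Python sketch is ours, not the book's (a = 1 and s = 2 are arbitrary choices); it evaluates the exact transform e^{−as}(1 − e^{−ks})/(ks) of the rectangular pulse f_k(t − a) for shrinking k and compares it with the claimed limit e^{−as}.

```python
import math

a, s = 1.0, 2.0
target = math.exp(-a * s)   # e^{-as}, the limit claimed in (5)
Lfk = None
for k in [1.0, 0.1, 0.01, 1e-6]:
    # L{f_k(t-a)} = e^{-as}(1 - e^{-ks})/(ks), computed before Eq. (5)
    Lfk = math.exp(-a * s) * (1 - math.exp(-k * s)) / (k * s)
    print(k, Lfk)
print("limit e^{-as} =", target)
```

For k = 1 the pulse transform is still visibly below e^{−2} ≈ 0.1353; by k = 10⁻⁶ it agrees with the limit to about six decimal places, in line with the l'Hôpital argument above.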
The unit step and unit impulse functions can now be used on the right side of ODEs modeling mechanical or electrical systems, as we illustrate next.

EXAMPLE 1 Mass-Spring System Under a Square Wave
Determine the response of the damped mass-spring system (see Sec. 2.8) under a square wave, modeled by (see Fig. 131)

y″ + 3y′ + 2y = r(t) = u(t − 1) − u(t − 2), y(0) = 0, y′(0) = 0.

Solution. From (1) and (2) in Sec. 6.2 and the second shifting theorem in Sec. 6.3 we obtain the subsidiary equation

s²Y + 3sY + 2Y = (1/s)(e^{−s} − e^{−2s}). Solution: Y(s) = (e^{−s} − e^{−2s})/(s(s² + 3s + 2)).

Using the notation F(s) and partial fractions, we obtain

F(s) = 1/(s(s² + 3s + 2)) = 1/(s(s + 1)(s + 2)) = (1/2)/s − 1/(s + 1) + (1/2)/(s + 2).

From Table 6.1 in Sec. 6.1, we see that the inverse is

f(t) = ℒ⁻¹(F) = ½ − e^{−t} + ½e^{−2t}.

Therefore, by Theorem 1 in Sec. 6.3 (t-shifting) we obtain the square-wave response shown in Fig. 131,

y = ℒ⁻¹(F(s)e^{−s} − F(s)e^{−2s}) = f(t − 1)u(t − 1) − f(t − 2)u(t − 2)
 = 0 (0 < t < 1),
 = ½ − e^{−(t−1)} + ½e^{−2(t−1)} (1 < t < 2),
 = −e^{−(t−1)} + ½e^{−2(t−1)} + e^{−(t−2)} − ½e^{−2(t−2)} (t > 2).

Fig. 131. Square wave and response in Example 1

EXAMPLE 2 Hammerblow Response of a Mass-Spring System
Find the response of the system in Example 1 with the square wave replaced by a unit impulse at time t = 1.

Solution. We now have the ODE and the subsidiary equation

y″ + 3y′ + 2y = δ(t − 1), and (s² + 3s + 2)Y = e^{−s}.

Solving algebraically gives

Y(s) = e^{−s}/((s + 1)(s + 2)) = (1/(s + 1) − 1/(s + 2)) e^{−s}.

By Theorem 1 the inverse is

y(t) = ℒ⁻¹(Y) = 0 if 0 < t < 1, and y(t) = e^{−(t−1)} − e^{−2(t−1)} if t > 1.

y(t) is shown in Fig. 132. Can you imagine how Fig. 131 approaches Fig. 132 as the wave becomes shorter and shorter, the area of the rectangle remaining 1?

Fig. 132. Response to a hammerblow in Example 2

EXAMPLE 3 Four-Terminal RLC-Network
Find the output voltage response in Fig. 133 if R = 20 Ω, L = 1 H, C = 10⁻⁴ F, the input is δ(t) (a unit impulse at time t = 0), and current and charge are zero at time t = 0.

Solution.
To understand what is going on, note that the network is an RLC-circuit to which two wires at A and B are attached for recording the voltage v(t) on the capacitor. Recalling from Sec. 2.9 that current i(t) and charge q(t) are related by i = q′ = dq/dt, we obtain the model

Li′ + Ri + q/C = Lq″ + Rq′ + q/C = q″ + 20q′ + 10 000q = δ(t).

From (1) and (2) in Sec. 6.2 and (5) in this section we obtain the subsidiary equation for Q(s) = ℒ(q)

(s² + 20s + 10 000)Q = 1, thus Q = 1/((s + 10)² + 9900).

By the first shifting theorem the inverse is q(t) = (1/√9900) e^{−10t} sin √9900 t, so that the output voltage is v(t) = q(t)/C = 10⁴q(t) ≈ 100.5 e^{−10t} sin 99.5t.

EXAMPLE 4 Damped Vibration Under a Sinusoidal Force Acting Over a Time Interval
Solve

y″ + 2y′ + 2y = r(t), r(t) = 10 sin 2t if 0 < t < π and 0 if t > π; y(0) = 1, y′(0) = −5.

Solution. From Table 6.1, (1), (2) in Sec. 6.2, and the second shifting theorem in Sec. 6.3, we obtain the subsidiary equation

(s²Y − s + 5) + 2(sY − 1) + 2Y = (10·2/(s² + 4))(1 − e^{−πs}).

Collecting terms, using partial fractions, and inverting gives

(9) y(t) = 3e^{−t} cos t − 2 cos 2t − sin 2t if 0 < t < π,

(10) y(t) = e^{−t}[(3 + 2e^{π}) cos t + 4e^{π} sin t] if t > π.

Figure 134 shows (9) (for 0 < t < π) and (10) (for t > π), a beginning vibration, which goes to zero rapidly because of the damping and the absence of a driving force after t = π.

Fig. 134. Example 4

The case of repeated complex factors [(s − a)(s − ā)]², which is important in connection with resonance, will be handled by "convolution" in the next section.

1–12 EFFECT OF DELTA FUNCTION ON VIBRATING SYSTEMS
Showing the details, find, graph, and discuss the solution.
1. y″ + y = δ(t − 2π), y(0) = 10, y′(0) = 0
2. y″ + 2y′ + 2y = e^{−t} + 5δ(t − 2), y(0) = 0, y′(0) = 1
3. y″ − y = 10δ(t − ½) − 100δ(t − 1), y(0) = 10, y′(0) = 1
4. y″ + 3y′ + 2y = 10(sin t + δ(t − 1)), y(0) = 1, y′(0) = −1
5. y″ + 4y′ + 5y = [1 − u(t − 10)]e^t − e^{10}δ(t − 10), y(0) = 0, y′(0) = 1
6. y″ + 2y′ − 3y = 100δ(t − 2) + 100δ(t − 3), y(0) = 1, y′(0) = 0
7. y″ + 2y′ + 10y = 10[1 − u(t − 4)] − 10δ(t − 5), y(0) = 1, y′(0) = 1
8. y″ + 5y′ + 6y = δ(t − ½π) + u(t − π) cos t, y(0) = 0, y′(0) = 0
9. y″ + 2y′ + 5y = 25t − 100δ(t − π), y(0) = −2, y′(0) = 5
10. y″ + 5y = 25t − 100δ(t − π), y(0) = −2, y′(0) = 5. (Compare with Prob. 9.)
11. y″ + 3y′ − 4y = 2e^t − 8e²δ(t − 2), y(0) = 2, y′(0) = 0
12. y″ + y = −2 sin t + 10δ(t − π), y(0) = 0, y′(0) = 1
13. CAS PROJECT.
Effect of Damping. Consider a vibrating system of your choice modeled by

y″ + cy′ + ky = r(t)

with r(t) involving a δ-function. (a) Using graphs of the solution, describe the effect of continuously decreasing the damping to 0, keeping k constant. (b) What happens if c is kept constant and k is continuously increased, starting from 0? (c) Extend your results to a system with two δ-functions on the right, acting at different times.

14. CAS PROJECT. Limit of a Rectangular Wave. Effects of Impulse. (a) In Example 1, take a rectangular wave of area 1 from 1 to 1 + k. Graph the responses for a sequence of values of k approaching zero, illustrating that for smaller and smaller k those curves approach the curve shown in Fig. 132. Hint: If your CAS gives no solution for the differential equation involving k, take specific k's from the beginning.
(b) Experiment on the response of the ODE in Example 1 (or of another ODE of your choice) to an impulse δ(t − a) for various systematically chosen a (> 0); choose initial conditions y(0) ≠ 0, y′(0) = 0. Also consider the solution if no impulse is applied. Is there a dependence of the response on a? On b if you choose bδ(t − a)? Would −δ(t − ã) with ã > a annihilate the effect of δ(t − a)? Can you think of other questions that one could consider experimentally by inspecting graphs?

15. PROJECT. Heaviside Formulas. (a) Show that for a simple root a and fraction A/(s − a) in F(s)/G(s) we have the Heaviside formula

A = lim_{s→a} (s − a)F(s)/G(s).

(b) Similarly, show that for a root a of order m and fractions in

F(s)/G(s) = A_m/(s − a)^m + A_{m−1}/(s − a)^{m−1} + ··· + further fractions

we have the Heaviside formulas for the first coefficient

A_m = lim_{s→a} (s − a)^m F(s)/G(s)

and for the other coefficients

A_k = (1/(m − k)!) lim_{s→a} (d^{m−k}/ds^{m−k}) [(s − a)^m F(s)/G(s)], k = 1, ···, m − 1.

16. TEAM PROJECT. Laplace Transform of Periodic Functions. (a) Theorem. The Laplace transform of a piecewise continuous function f(t) with period p is

(11) ℒ(f) = (1/(1 − e^{−ps})) ∫₀^p e^{−st} f(t) dt (s > 0).
Prove this theorem. Hint: Write ∫₀^∞ = ∫₀^p + ∫_p^{2p} + ···. Set t = τ + (n − 1)p in the nth integral. Take out e^{−(n−1)ps} from under the integral sign. Use the sum formula for the geometric series.

(b) Half-wave rectifier. Using (11), show that the half-wave rectification of sin ωt in Fig. 135 has the Laplace transform

ω/((s² + ω²)(1 − e^{−πs/ω})).

Fig. 135. Half-wave rectification

Fig. 136. Full-wave rectification

(c) Full-wave rectifier. Show that the Laplace transform of the full-wave rectification of sin ωt is

(ω/(s² + ω²)) coth (πs/(2ω)).

(d) Saw-tooth wave. Find the Laplace transform of the saw-tooth wave in Fig. 137.

Fig. 137. Saw-tooth wave

(e) Staircase function. Find the Laplace transform of the staircase function in Fig. 138 by noting that it is the difference of kt/p and the function in (d).

Fig. 138. Staircase function

6.5 Convolution. Integral Equations

Convolution has to do with the multiplication of transforms. The situation is as follows. Addition of transforms provides no problem; we know that ℒ(f + g) = ℒ(f) + ℒ(g). Now multiplication of transforms occurs frequently in connection with ODEs, integral equations, and elsewhere. Then we usually know ℒ(f) and ℒ(g) and would like to know the function whose transform is the product ℒ(f)ℒ(g). We might perhaps guess that it is fg, but this is false. The transform of a product is generally different from the product of the transforms of the factors,

ℒ(fg) ≠ ℒ(f)ℒ(g) in general.

To see this take f = e^t and g = 1. Then fg = e^t, ℒ(fg) = 1/(s − 1), but ℒ(f) = 1/(s − 1) and ℒ(1) = 1/s give ℒ(f)ℒ(g) = 1/(s² − s).

According to the next theorem, the correct answer is that ℒ(f)ℒ(g) is the transform of the convolution of f and g, denoted by the standard notation f * g and defined by the integral

(1) h(t) = (f * g)(t) = ∫₀ᵗ f(τ)g(t − τ) dτ.

THEOREM 1 Convolution Theorem
If two functions f and g satisfy the assumption in the existence theorem in Sec.
6.1, so that their transforms F and G exist, the product H = FG is the transform of h given by (1). (Proof after Example 2.)

EXAMPLE 1 Convolution
Let H(s) = 1/((s − a)s). Find h(t).

Solution. 1/(s − a) has the inverse f(t) = e^{at}, and 1/s has the inverse g(t) = 1. With f(τ) = e^{aτ} and g(t − τ) ≡ 1 we thus obtain from (1) the answer

h(t) = e^{at} * 1 = ∫₀ᵗ e^{aτ}·1 dτ = (1/a)(e^{at} − 1).

To check, calculate

H(s) = ℒ(h) = (1/a)(1/(s − a) − 1/s) = (1/a) · a/(s² − as) = 1/(s − a) · 1/s = ℒ(e^{at}) ℒ(1).

EXAMPLE 2 Convolution
Let H(s) = 1/(s² + ω²)². Find h(t).

Solution. The inverse of 1/(s² + ω²) is (sin ωt)/ω. Hence from (1) and the trigonometric formula (11) in App. A3.1 we obtain

h(t) = (sin ωt)/ω * (sin ωt)/ω = (1/ω²) ∫₀ᵗ sin ωτ sin ω(t − τ) dτ
 = (1/(2ω²)) ∫₀ᵗ [−cos ωt + cos (2ωτ − ωt)] dτ
 = (1/(2ω²)) [−t cos ωt + (sin ωt)/ω],

in agreement with formula 21 in the table in Sec. 6.9.

PROOF We prove the Convolution Theorem 1. CAUTION! Note which ones are the variables of integration! We can denote them as we want, for instance, by τ and p, and write

F(s) = ∫₀^∞ e^{−sτ} f(τ) dτ and G(s) = ∫₀^∞ e^{−sp} g(p) dp.

We now set t = p + τ, where τ is at first constant. Then p = t − τ, and t varies from τ to ∞. Thus

G(s) = ∫_τ^∞ e^{−s(t−τ)} g(t − τ) dt = e^{sτ} ∫_τ^∞ e^{−st} g(t − τ) dt.

τ in F and t in G vary independently. Hence we can insert the G-integral into the F-integral. Cancellation of e^{−sτ} and e^{sτ} then gives

F(s)G(s) = ∫₀^∞ e^{−sτ} f(τ) e^{sτ} ∫_τ^∞ e^{−st} g(t − τ) dt dτ = ∫₀^∞ f(τ) ∫_τ^∞ e^{−st} g(t − τ) dt dτ.

Here we integrate for fixed τ over t from τ to ∞ and then over τ from 0 to ∞. This is the blue region in Fig. 139. Under the assumption on f and g the order of integration can be reversed (see Ref. [A5] for a proof using uniform convergence). We then integrate first over τ from 0 to t and then over t from 0 to ∞, that is,

F(s)G(s) = ∫₀^∞ e^{−st} [∫₀ᵗ f(τ)g(t − τ) dτ] dt = ∫₀^∞ e^{−st} h(t) dt = ℒ(h).

Fig. 139.
Region of integration in the tτ-plane in the proof of Theorem 1

From the definition it follows almost immediately that convolution has the properties

f * g = g * f (commutative law)
f * (g₁ + g₂) = f * g₁ + f * g₂ (distributive law)
(f * g) * v = f * (g * v) (associative law)
f * 0 = 0 * f = 0

similar to those of the multiplication of numbers. Unusual are the following two properties.

EXAMPLE 3 Unusual Properties of Convolution
f * 1 ≠ f in general. For instance,

t * 1 = ∫₀ᵗ τ·1 dτ = ½t² ≠ t.

(f * f)(t) ≥ 0 may not hold. For instance, Example 2 with ω = 1 gives

sin t * sin t = −½t cos t + ½ sin t (Fig. 140).

Fig. 140. Example 3

We shall now take up the case of a complex double root (left aside in the last section in connection with partial fractions) and find the solution (the inverse transform) directly by convolution.

EXAMPLE 4 Repeated Complex Factors. Resonance
In an undamped mass-spring system, resonance occurs if the frequency of the driving force equals the natural frequency of the system. Then the model is (see Sec. 2.8)

y″ + ω₀²y = K sin ω₀t,

where ω₀² = k/m, k is the spring constant, and m is the mass of the body attached to the spring. We assume y(0) = 0 and y′(0) = 0, for simplicity. Then the subsidiary equation is

s²Y + ω₀²Y = Kω₀/(s² + ω₀²). Its solution is Y = Kω₀/(s² + ω₀²)².

This is a transform as in Example 2 with ω = ω₀ and multiplied by Kω₀. Hence from Example 2 we can see directly that the solution of our problem is

y(t) = (Kω₀/(2ω₀²)) (−t cos ω₀t + (sin ω₀t)/ω₀) = (K/(2ω₀²)) (−ω₀t cos ω₀t + sin ω₀t).

We see that the first term grows without bound. Clearly, in the case of resonance such a term must occur. (See also a similar kind of solution in Fig. 54 in Sec. 2.8.)

Application to Nonhomogeneous Linear ODEs
Nonhomogeneous linear ODEs can now be solved by a general method based on convolution by which the solution is obtained in the form of an integral.
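Convolutions such as the one in Example 2 are easy to check by numerical integration. The following Python sketch is an illustration of ours, not part of the text (ω = 1 and t = 2 are arbitrary choices); it approximates (f * g)(t) by the trapezoid rule and compares the result with the closed form (1/(2ω²))(−t cos ωt + (sin ωt)/ω).

```python
import math

def conv(f, g, t, n=20_000):
    """Numerical convolution (f*g)(t) = integral of f(tau) g(t-tau), 0..t,
    by the composite trapezoid rule with n subintervals."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * g(t - tau)
    return total * h

w, t = 1.0, 2.0
f = lambda x: math.sin(w * x) / w          # inverse of 1/(s^2 + w^2)
num_val = conv(f, f, t)                    # (sin wt / w) * (sin wt / w)
closed = (1 / (2 * w ** 2)) * (-t * math.cos(w * t) + math.sin(w * t) / w)
print(num_val, closed)
```

The two values agree to about eight decimal places; the same helper can be used to check Examples 1 and 3 or the resonance solution of Example 4.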
To see this, recall from Sec. 6.2 that the subsidiary equation of the ODE

(2) y″ + ay′ + by = r(t) (a, b constant)

has the solution [(7) in Sec. 6.2]

Y(s) = [(s + a)y(0) + y′(0)]Q(s) + R(s)Q(s)

with R(s) = ℒ(r) and Q(s) = 1/(s² + as + b) the transfer function. Inversion of the first term [···] provides no difficulty; depending on whether ¼a² − b is positive, zero, or negative, its inverse will be a linear combination of two exponential functions, or of the form (c₁ + c₂t)e^{−at/2}, or a damped oscillation, respectively. The interesting term is R(s)Q(s) because r(t) can have various forms of practical importance, as we shall see. If y(0) = 0 and y′(0) = 0, then Y = RQ, and the convolution theorem gives the solution

(3) y(t) = ∫₀ᵗ q(t − τ)r(τ) dτ.

EXAMPLE 5 Response of a Damped Vibrating System to a Single Square Wave
Using convolution, determine the response of the damped mass-spring system modeled by

y″ + 3y′ + 2y = r(t), r(t) = 1 if 1 < t < 2 and 0 otherwise, y(0) = y′(0) = 0.

This system with an input (a driving force) that acts for some time only (Fig. 141) has been solved by partial fraction reduction in Sec. 6.4 (Example 1).

Solution by Convolution. The transfer function and its inverse are

Q(s) = 1/(s² + 3s + 2) = 1/((s + 1)(s + 2)) = 1/(s + 1) − 1/(s + 2), hence q(t) = e^{−t} − e^{−2t}.

Hence the convolution integral (3) is (except for the limits of integration)

y(t) = ∫ q(t − τ)·1 dτ = ∫ [e^{−(t−τ)} − e^{−2(t−τ)}] dτ = e^{−(t−τ)} − ½e^{−2(t−τ)}.

Now comes an important point in handling convolution. r(τ) = 1 if 1 < τ < 2 only. Hence if t < 1, the integral is zero. If 1 < t < 2, we have to integrate from τ = 1 (not 0) to t. This gives (with the first two terms from the upper limit)

y(t) = (1 − ½) − (e^{−(t−1)} − ½e^{−2(t−1)}) = ½ − e^{−(t−1)} + ½e^{−2(t−1)}.

If t > 2, we have to integrate from τ = 1 to 2 (not to t). This gives

y(t) = (e^{−(t−2)} − ½e^{−2(t−2)}) − (e^{−(t−1)} − ½e^{−2(t−1)}).

Figure 141 shows the input (the square wave) and the interesting output, which is zero from 0 to 1,
then increases, reaches a maximum (near t = 2.6) after the input has become zero (why?), and finally decreases to zero in a monotone fashion.

Fig. 141. Square wave and response in Example 5

Integral Equations
Convolution also helps in solving certain integral equations, that is, equations in which the unknown function y(t) appears in an integral (and perhaps also outside of it). This concerns equations with an integral of the form of a convolution. Hence these are special, and it suffices to explain the idea in terms of two examples and add a few problems in the problem set.

EXAMPLE 6 A Volterra Integral Equation of the Second Kind
Solve the Volterra integral equation of the second kind³

y(t) − ∫₀ᵗ y(τ) sin (t − τ) dτ = t.

Solution. From (1) we see that the given equation can be written as a convolution, y − y * sin t = t. Writing Y = ℒ(y) and applying the convolution theorem, we obtain

Y(s) − Y(s)·1/(s² + 1) = Y(s)·s²/(s² + 1) = 1/s².

The solution is

Y(s) = (s² + 1)/s⁴ = 1/s² + 1/s⁴ and gives the answer y(t) = t + t³/6.

Check the result by a CAS or by substitution and repeated integration by parts (which will need patience).

EXAMPLE 7 Another Volterra Integral Equation of the Second Kind
Solve the Volterra integral equation

y(t) − ∫₀ᵗ (1 + τ) y(t − τ) dτ = 1 − sinh t.

Solution. By (1) we can write y − (1 + t) * y = 1 − sinh t. Writing Y = ℒ(y), we obtain by using the convolution theorem and then taking common denominators

Y(s)[1 − 1/s − 1/s²] = 1/s − 1/(s² − 1), hence Y(s)·(s² − s − 1)/s² = (s² − s − 1)/(s(s² − 1)).

(s² − s − 1)/s² cancels on both sides, so that solving for Y simply gives

Y(s) = s/(s² − 1), and the solution is y(t) = cosh t.

PROBLEM SET 6.5

1–8 CONVOLUTIONS BY INTEGRATION
Find by integration:
1. 1 * 1
2. t * t
3. t * ···
··· π; y(0) = 0, y′(0) = 4
22. y″ + 3y′ + 2y = 1 if 0 < t < a and 0 if t > a; y(0) = 0, y′(0) = 0
23. y″ + 4y = 5u(t − 1); y(0) = 0, y′(0) = 0
24. y″ + 5y′ + 6y = δ(t − 3); y(0) = 1, y′(0) = 0
25.
y″ + 6y′ + 8y = 2δ(t − 1) + 2δ(t − 2); y(0) = 1, y′(0) = 0

26. TEAM PROJECT. Properties of Convolution. Prove:
(a) Commutativity, f * g = g * f
(b) Associativity, (f * g) * v = f * (g * v)
(c) Distributivity, f * (g₁ + g₂) = f * g₁ + f * g₂
(d) Dirac's delta. Derive the sifting formula (4) in Sec. 6.4 by using f_k with a = 0 [(1), Sec. 6.4] and applying the mean value theorem for integrals.
(e) Unspecified driving force. Show that forced vibrations governed by

y″ + ω²y = r(t), y(0) = K₁, y′(0) = K₂

with ω ≠ 0 and an unspecified driving force r(t) can be written in convolution form,

y = (1/ω) sin ωt * r(t) + K₁ cos ωt + (K₂/ω) sin ωt.

27–34 INTEGRAL EQUATIONS
Using Laplace transforms and showing the details, solve:
27. y(t) − ∫₀ᵗ y(τ) dτ = 1
28. y(t) + ∫₀ᵗ y(τ) cosh (t − τ) dτ = t + e^t
29. y(t) − ∫₀ᵗ y(τ) sin (t − τ) dτ = cos t
30. y(t) + 2∫₀ᵗ y(τ) cos (t − τ) dτ = cos t
31. y(t) + ∫₀ᵗ (t − τ)y(τ) dτ = 1
32. y(t) − ∫₀ᵗ y(τ)(t − τ) dτ = 2 − ···
33. y(t) + 2e^t ∫₀ᵗ e^{−τ}y(τ) dτ = te^t
34. ···

35. CAS EXPERIMENT. Variation of a Parameter. (a) Replace 2 in Prob. 33 by a parameter k and investigate graphically how the solution curve changes if you vary k, in particular near k = −2. (b) Make similar experiments with an integral equation of your choice whose solution is oscillating.

6.6 Differentiation and Integration of Transforms. ODEs with Variable Coefficients

The variety of methods for obtaining transforms and inverse transforms and their applications in solving ODEs is surprisingly large. We have seen that they include direct integration, the use of linearity (Sec. 6.1), shifting (Secs. 6.1, 6.3), convolution (Sec. 6.5), and differentiation and integration of functions f(t) (Sec. 6.2). But this is not all. In this section we shall consider operations of somewhat lesser importance, namely, differentiation and integration of transforms F(s) and corresponding operations for functions f(t), with applications to ODEs with variable coefficients.
Differentiation of Transforms
It can be shown that if a function f(t) satisfies the conditions of the existence theorem in Sec. 6.1, then the derivative F′(s) = dF/ds of the transform F(s) = ℒ(f) can be obtained by differentiating F(s) under the integral sign with respect to s (proof in Ref. [GR4] listed in App. 1). Thus, if

F(s) = ∫₀^∞ e^{−st} f(t) dt, then F′(s) = −∫₀^∞ e^{−st} t f(t) dt.

Consequently, if ℒ(f) = F(s), then

(1) ℒ{t f(t)} = −F′(s), hence ℒ⁻¹{F′(s)} = −t f(t),

where the second formula is obtained by applying ℒ⁻¹ on both sides of the first formula. In this way, differentiation of the transform of a function corresponds to the multiplication of the function by −t.

EXAMPLE 1 Differentiation of Transforms. Formulas 21–23 in Sec. 6.9
We shall derive the following three formulas.

(2) ℒ⁻¹{1/(s² + β²)²} = (1/(2β³))(sin βt − βt cos βt)
(3) ℒ⁻¹{s/(s² + β²)²} = (t/(2β)) sin βt
(4) ℒ⁻¹{s²/(s² + β²)²} = (1/(2β))(sin βt + βt cos βt)

Solution. From (1) and formula 8 (with ω = β) in Table 6.1 of Sec. 6.1 we obtain by differentiation (CAUTION! Chain rule!)

ℒ(t sin βt) = 2βs/(s² + β²)².

Dividing by 2β and using the linearity of ℒ, we obtain (3). Formulas (2) and (4) are obtained as follows. From (1) and formula 7 (with ω = β) in Table 6.1 we find

(5) ℒ(t cos βt) = (s² − β²)/(s² + β²)².

From this and formula 8 (with ω = β) in Table 6.1 we have

ℒ(t cos βt ± (1/β) sin βt) = (s² − β²)/(s² + β²)² ± 1/(s² + β²).

On the right we now take the common denominator. Then we see that for the plus sign the numerator becomes s² − β² + s² + β² = 2s², so that (4) follows by division by 2. Similarly, for the minus sign the numerator takes the form s² − β² − s² − β² = −2β², and we obtain (2). This agrees with Example 2 in Sec. 6.5.

Integration of Transforms
Similarly, if f(t) satisfies the conditions of the existence theorem in Sec.
6.1 and the limit of f(t)/t, as t approaches 0 from the right, exists, then for s > k,

(6) ℒ{f(t)/t} = ∫_s^∞ F(s̃) ds̃, hence ℒ⁻¹{∫_s^∞ F(s̃) ds̃} = f(t)/t.

In this way, integration of the transform of a function f(t) corresponds to the division of f(t) by t.

We indicate how (6) is obtained. From the definition it follows that

∫_s^∞ F(s̃) ds̃ = ∫_s^∞ [∫₀^∞ e^{−s̃t} f(t) dt] ds̃,

and it can be shown (see Ref. [GR4] in App. 1) that under the above assumptions we may reverse the order of integration, that is,

∫_s^∞ F(s̃) ds̃ = ∫₀^∞ [∫_s^∞ e^{−s̃t} ds̃] f(t) dt.

Integration of e^{−s̃t} with respect to s̃ gives e^{−s̃t}/(−t). Here the integral over s̃ on the right equals e^{−st}/t. Therefore,

∫_s^∞ F(s̃) ds̃ = ∫₀^∞ e^{−st} (f(t)/t) dt = ℒ{f(t)/t} (s > k).

EXAMPLE 2 Differentiation and Integration of Transforms
Find the inverse transform of ln (1 + ω²/s²) = ln ((s² + ω²)/s²).

Solution. Denote the given transform by F(s). Its derivative is

F′(s) = d/ds [ln (s² + ω²) − ln s²] = 2s/(s² + ω²) − 2/s.

Taking the inverse transform and using (1), we obtain

ℒ⁻¹{F′(s)} = 2 cos ωt − 2 = −t f(t).

Hence the inverse f(t) of F(s) is f(t) = 2(1 − cos ωt)/t. This agrees with formula 42 in Sec. 6.9.

Alternatively, if we let

G(s) = 2s/(s² + ω²) − 2/s, then g(t) = ℒ⁻¹(G) = 2 cos ωt − 2.

From this and (6) we get, in agreement with the answer just obtained,

ℒ⁻¹{ln ((s² + ω²)/s²)} = ℒ⁻¹{∫_s^∞ G(s̃) ds̃} = −g(t)/t = (2/t)(1 − cos ωt),

the minus occurring since s is the lower limit of integration. In a similar way we obtain formula 43 in Sec. 6.9,

ℒ⁻¹{ln (1 − a²/s²)} = (2/t)(1 − cosh at).

Special Linear ODEs with Variable Coefficients
Formula (1) can be used to solve certain ODEs with variable coefficients. The idea is this. Let ℒ(y) = Y. Then ℒ(y′) = sY − y(0) (see Sec. 6.2). Hence by (1),

(7) ℒ(ty′) = −d/ds [sY − y(0)] = −Y − s dY/ds.

Similarly, ℒ(y″) = s²Y − sy(0) − y′(0), and by (1)

(8) ℒ(ty″) = −d/ds [s²Y − sy(0) − y′(0)] = −2sY − s² dY/ds + y(0).

Hence if an ODE has coefficients such as at + b, the subsidiary equation is a first-order ODE for Y, which is sometimes simpler than the given second-order ODE. But if the latter has coefficients at² + bt + c, then two applications of (1) would give a second-order ODE for Y, and this shows that the present method works well only for rather special ODEs with variable coefficients. An important ODE for which the method is advantageous is the following.

EXAMPLE 3 Laguerre's Equation. Laguerre Polynomials
Laguerre's ODE is

(9) ty″ + (1 − t)y′ + ny = 0.

We determine a solution of (9) with n = 0, 1, 2, ···. From (7)–(9) we get the subsidiary equation

[−2sY − s² dY/ds + y(0)] + [sY − y(0)] − [−Y − s dY/ds] + nY = 0.

Simplification gives

(s − s²) dY/ds + (n + 1 − s)Y = 0.

Separating variables, using partial fractions, integrating (with the constant of integration taken zero), and taking exponentials, we get

(10*) dY/Y = −((n + 1 − s)/(s − s²)) ds = (n/(s − 1) − (n + 1)/s) ds and Y = (s − 1)ⁿ/s^{n+1}.

We write lₙ = ℒ⁻¹(Y) and prove Rodrigues's formula

(10) l₀ = 1, lₙ(t) = (e^t/n!) (dⁿ/dtⁿ)(tⁿe^{−t}), n = 1, 2, ···.

These are polynomials because the exponential terms cancel if we perform the indicated differentiations. They are called Laguerre polynomials and are usually denoted by Lₙ (see Problem Set 5.7, but we continue to reserve capital letters for transforms). We prove (10). By Table 6.1 and the first shifting theorem (s-shifting),

ℒ(tⁿe^{−t}) = n!/(s + 1)^{n+1}, hence by (3) in Sec. 6.2 ℒ{(dⁿ/dtⁿ)(tⁿe^{−t})} = sⁿ n!/(s + 1)^{n+1},

because the derivatives up to the order n − 1 are zero at 0. Now make another shift and divide by n! to get [see (10) and then (10*)]

ℒ(lₙ) = (s − 1)ⁿ/s^{n+1} = Y.
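The Laguerre polynomials of Example 3 can be generated without a CAS. The Python sketch below is ours, not the book's; it uses the classical three-term recursion (n + 1)l_{n+1} = (2n + 1 − t)lₙ − nl_{n−1} (which appears in the next problem set) and, as a cross-check, the standard explicit sum lₙ(t) = Σ_{m=0}^{n} ((−1)^m/m!) (n over m) t^m, a well-known closed form assumed here rather than derived from (10).

```python
from math import comb, factorial

def laguerre_rec(n):
    """Coefficient list (ascending powers of t) of l_n, built from the
    recursion (m+1) l_{m+1} = (2m+1-t) l_m - m l_{m-1}, l_0 = 1, l_1 = 1-t."""
    polys = [[1.0], [1.0, -1.0]]
    for m in range(1, n):
        lm, lm1 = polys[m], polys[m - 1]
        new = [0.0] * (m + 2)
        for i, c in enumerate(lm):      # (2m+1) l_m
            new[i] += (2 * m + 1) * c
        for i, c in enumerate(lm):      # -t l_m  (shift powers up by one)
            new[i + 1] -= c
        for i, c in enumerate(lm1):     # -m l_{m-1}
            new[i] -= m * c
        polys.append([c / (m + 1) for c in new])
    return polys[n]

def laguerre_sum(n, t):
    """Standard explicit form (assumed known from the literature)."""
    return sum((-1) ** m / factorial(m) * comb(n, m) * t ** m
               for m in range(n + 1))

def eval_poly(coeffs, t):
    return sum(c * t ** i for i, c in enumerate(coeffs))

t = 1.3   # arbitrary sample point
vals = [(eval_poly(laguerre_rec(n), t), laguerre_sum(n, t)) for n in range(6)]
print(vals)
```

Both constructions give, for example, l₂(t) = 1 − 2t + ½t², and they agree at the sample point for every n tested.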
ds ds Hence if an ODE has coefficients such as at + b, the subsidiary equation is a first-order ODE for Y, which is sometimes simpler than the given second-order ODE. But if the latter has coefficients at2 + bt + c, then two applications of (1) would give a second-order ODE for Y, and thi s shows that the present method works well only for rather special ODEs with variable coefficients. An important ODE for which the method is advantageous is the following. EXAMPLE 3 Laguerre's Equation. Laguerre Polynomials Laguerre's ODE is (9) fy" + (1 - t)y' + ny = 0. We determine a solution of (9) with n 0, 1, 2. • • • . From (7)-(9) we get the subsidiary equation , dY -2sY - s2— + v(0) as sY - y(0) - (-Y - s^j + nY = 0. Simplification gives „ dY (s - s2)— + (n + 1 - s)Y = 0. ds Separating variables, using partial fractions, integrating (with the constant of integration taken zero), and taking exponentials, we get dY n+ 1 - s In n + 1 \ (s - l)n (10*) — =--5— ds =----ds and Y ■■ i s - s \ s — 1 n + 1 \ > I We write ln = X~\Y) and prove Rodrigues's formula (10) /0 = 1, ln(f) = — (tne~t), n = \.2,---. These arc polynomials because the exponential terms cancel if we perform the indicated differentiations. They are called Laguerre polynomials and are usually denoted by Ln (sec Problem Set 5.7. but we continue to reserve capital letters for transforms). We prove (10). By Table 6.1 and the first shifting theorem Gs-shifting), dn ■£(tHe~l) =- -—j , hence by (3) in Sec. 6.2 it -, (s + If 11 [ dt% because the derivatives up to the order n — 1 are zero at 0. Now make another shift and divide by «! to get [see (10) and then (10*)] C* - Dn mn) = „i = y. 1-12 TRANSFORMS BY DIFFERENTIATION Showing the details of your work, find iE(f) if f(t) equals: 7. t2 sinh 4t 1. 4fe£ 2. -t cosh2r 9. t2 sin cot 3. t sin cot 4. t cos (t + k) 11. t sin (/ + k) 6. t2 sin 3f 10. t cos cot 12. te~kt sin r CHAP. 
13–20 INVERSE TRANSFORMS
Using differentiation, integration, s-shifting, or convolution (and showing the details), find f(t) if ℒ(f) equals:
13. ···
14. s/(s² + 16)²
15. ···
16. s/(s² − 1)²
17. (s + 2)/[(s + 2)² + 1]²
18. ln ((s + a)/(s + b))
19. ···
20. arccot (s/ω)

21. WRITING PROJECT. Differentiation and Integration of Functions and Transforms. Make a short draft of these four operations from memory. Then compare your notes with the text and write a report of 2–3 pages on these operations and their significance in applications.

22. CAS PROJECT. Laguerre Polynomials. (a) Write a CAS program for finding lₙ(t) in explicit form from (10). Apply it to calculate l₀, ···, l₁₀. Verify that l₀, ···, l₁₀ satisfy Laguerre's differential equation (9).
(b) Show that

lₙ(t) = Σ_{m=0}^{n} ((−1)^m/m!) (n over m) t^m

and calculate l₀, ···, l₁₀ from this formula.
(c) Calculate l₀, ···, l₁₀ recursively from l₀ = 1, l₁ = 1 − t by

(n + 1)l_{n+1} = (2n + 1 − t)lₙ − nl_{n−1}.

(d) Experiment with the graphs of l₀, ···, l₁₀, finding out empirically how the first maximum, first minimum, ··· is moving with respect to its location as a function of n. Write a short report on this.
(e) A generating function (definition in Problem Set 5.3) for the Laguerre polynomials is

Σ_{n=0}^∞ lₙ(t)xⁿ = (1 − x)⁻¹ e^{tx/(x−1)}.

Obtain l₀, ···, l₁₀ from the corresponding partial sum of this power series in x and compare the lₙ with those in (a), (b), or (c).

6.7 Systems of ODEs

The Laplace transform method may also be used for solving systems of ODEs, as we shall explain in terms of typical applications. We consider a first-order linear system with constant coefficients (as discussed in Sec. 4.1)

y₁′ = a₁₁y₁ + a₁₂y₂ + g₁(t)
y₂′ = a₂₁y₁ + a₂₂y₂ + g₂(t).

··· Why is y₁ from some time on suddenly larger than y₂? Etc.

Fig. 142. Mixing problem in Example 1 (two tanks with 6 gal/min circulation; salt content in T₁ and in T₂)
Other systems of ODEs of practical importance can be solved by the Laplace transform method in a similar way, and eigenvalues and eigenvectors, as we had to determine them in Chap. 4, will come out automatically, as we have seen in Example 1.

EXAMPLE 2  Electrical Network

Find the currents i1(t) and i2(t) in the network in Fig. 143 with L and R measured in terms of the usual units (see Sec. 2.9), v(t) = 100 volts if 0 ≤ t ≤ 0.5 sec and 0 thereafter, and i1(0) = 0, i2(0) = 0.

Fig. 143. Electrical network in Example 2

Solution. The model of the network is obtained from Kirchhoff's voltage law as in Sec. 2.9. For the lower circuit we obtain

    0.8 i1' + 1·(i1 - i2) + 1.4 i1 = 100 [1 - u(t - 1/2)]

and for the upper

    1·i2' + 1·(i2 - i1) = 0.

Division by 0.8 and ordering gives for the lower circuit

    i1' + 3 i1 - 1.25 i2 = 125 [1 - u(t - 1/2)]

and for the upper

    i2' - i1 + i2 = 0.

With i1(0) = 0, i2(0) = 0 we obtain from (1) in Sec. 6.2 and the second shifting theorem the subsidiary system

    (s + 3) I1 - 1.25 I2 = 125 (1/s - e^{-s/2}/s)
    -I1 + (s + 1) I2 = 0.

Solving algebraically for I1 and I2 gives

    I1 = 125(s + 1)/[s(s + 1/2)(s + 7/2)] · (1 - e^{-s/2}),
    I2 = 125/[s(s + 1/2)(s + 7/2)] · (1 - e^{-s/2}).

The right sides, without the factor 1 - e^{-s/2}, have the partial fraction expansions

    500/(7s) - 125/[3(s + 1/2)] - 625/[21(s + 7/2)]

and

    500/(7s) - 250/[3(s + 1/2)] + 250/[21(s + 7/2)],

respectively. The inverse transform of this gives the solution for 0 ≤ t ≤ 1/2,

    i1(t) = -(125/3) e^{-t/2} - (625/21) e^{-7t/2} + 500/7
    i2(t) = -(250/3) e^{-t/2} + (250/21) e^{-7t/2} + 500/7        (0 ≤ t ≤ 1/2).

According to the second shifting theorem the solution for t > 1/2 is i1(t) - i1(t - 1/2) and i2(t) - i2(t - 1/2), that is,

    i1(t) = -(125/3)(1 - e^{1/4}) e^{-t/2} - (625/21)(1 - e^{7/4}) e^{-7t/2}
    i2(t) = -(250/3)(1 - e^{1/4}) e^{-t/2} + (250/21)(1 - e^{7/4}) e^{-7t/2}.

Can you explain physically why both currents eventually go to zero, and why i1(t) has a sharp cusp whereas i2(t) has a continuous tangent direction at t = 1/2?

Systems of ODEs of higher order can be solved by the Laplace transform method in a similar fashion.
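The two partial-fraction expansions in Example 2 are easy to spot-check numerically. The short Python sketch below is our own addition, not part of the text; it compares both sides of each expansion at a few sample values of s:

```python
# I1 without the factor (1 - e^{-s/2}) and its claimed partial fractions
def lhs_I1(s): return 125*(s + 1) / (s*(s + 0.5)*(s + 3.5))
def rhs_I1(s): return 500/(7*s) - 125/(3*(s + 0.5)) - 625/(21*(s + 3.5))

# same for I2
def lhs_I2(s): return 125 / (s*(s + 0.5)*(s + 3.5))
def rhs_I2(s): return 500/(7*s) - 250/(3*(s + 0.5)) + 250/(21*(s + 3.5))

# agreement at several sample points implies the expansions are correct,
# since both sides are rational functions of low degree
for s in (1.0, 2.5, 7.0, 13.0):
    assert abs(lhs_I1(s) - rhs_I1(s)) < 1e-9
    assert abs(lhs_I2(s) - rhs_I2(s)) < 1e-9
```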
As an important application, typical of many similar mechanical systems, we consider coupled vibrating masses on springs.

Fig. 144. Example 3

EXAMPLE 3  Model of Two Masses on Springs (Fig. 144)

The mechanical system in Fig. 144 consists of two bodies of mass 1 on three springs of the same spring constant k and of negligibly small masses of the springs. Also damping is assumed to be practically zero. Then the model of the physical system is the system of ODEs

    y1'' = -k y1 + k(y2 - y1)
    y2'' = -k(y2 - y1) - k y2.

Here y1 and y2 are the displacements of the bodies from their positions of static equilibrium. These ODEs follow from Newton's second law, Mass × Acceleration = Force, as in Sec. 2.4 for a single body. We again regard downward forces as positive and upward as negative. On the upper body, -k y1 is the force of the upper spring and k(y2 - y1) that of the middle spring, y2 - y1 being the net change in spring length (think this over before going on). On the lower body, -k(y2 - y1) is the force of the middle spring and -k y2 that of the lower spring.

We shall determine the solution corresponding to the initial conditions y1(0) = 1, y2(0) = 1, y1'(0) = √(3k), y2'(0) = -√(3k). Let Y1 = ℒ(y1) and Y2 = ℒ(y2). Then from (2) in Sec. 6.2 and the initial conditions we obtain the subsidiary system

    s^2 Y1 - s - √(3k) = -k Y1 + k(Y2 - Y1)
    s^2 Y2 - s + √(3k) = -k(Y2 - Y1) - k Y2.

This system of linear algebraic equations in the unknowns Y1 and Y2 may be written

    (s^2 + 2k) Y1 - k Y2 = s + √(3k)
    -k Y1 + (s^2 + 2k) Y2 = s - √(3k).

Elimination (or Cramer's rule in Sec. 7.7) yields the solution, which we can expand in terms of partial fractions,

    Y1 = [(s + √(3k))(s^2 + 2k) + k(s - √(3k))]/[(s^2 + 2k)^2 - k^2] = s/(s^2 + k) + √(3k)/(s^2 + 3k)
    Y2 = [(s - √(3k))(s^2 + 2k) + k(s + √(3k))]/[(s^2 + 2k)^2 - k^2] = s/(s^2 + k) - √(3k)/(s^2 + 3k).

Hence the solution of our initial value problem is (Fig. 145)

    y1(t) = ℒ^{-1}(Y1) = cos (√k t) + sin (√(3k) t)
    y2(t) = ℒ^{-1}(Y2) = cos (√k t) - sin (√(3k) t).

We see that the motion of each mass is harmonic (the system is undamped!), being the superposition of a "slow" oscillation and a "rapid" oscillation.
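A quick numerical check (our own addition; the value k = 2 is an arbitrary choice) confirms that y1 and y2 found above satisfy both ODEs of the model and the initial conditions:

```python
from math import sin, cos, sqrt

k = 2.0                       # arbitrary positive spring constant
ws, wf = sqrt(k), sqrt(3*k)   # "slow" and "rapid" angular frequencies

def y1(t): return cos(ws*t) + sin(wf*t)
def y2(t): return cos(ws*t) - sin(wf*t)

# second derivatives of the proposed solution, computed by hand
def y1pp(t): return -ws**2*cos(ws*t) - wf**2*sin(wf*t)
def y2pp(t): return -ws**2*cos(ws*t) + wf**2*sin(wf*t)

for t in (0.0, 0.3, 1.0, 2.7):
    assert abs(y1pp(t) - (-k*y1(t) + k*(y2(t) - y1(t)))) < 1e-9
    assert abs(y2pp(t) - (-k*(y2(t) - y1(t)) - k*y2(t))) < 1e-9

# initial displacements y1(0) = y2(0) = 1
assert abs(y1(0.0) - 1.0) < 1e-12 and abs(y2(0.0) - 1.0) < 1e-12
```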
Fig. 145. Solutions in Example 3

PROBLEM SET 6.7

1-20  SYSTEMS OF ODEs
Using the Laplace transform and showing the details of your work, solve the initial value problem:
1. y1' = · · · , y2' = y1 - y2, y1(0) = 0, y2(0) = 1
2. y1' = 5y1 + y2, y2' = y1 + 5y2, y1(0) = 1, y2(0) = -3
3. y1' = -6y1 + 4y2, y2' = · · · , y1(0) = -2, y2(0) = -7
4. y1' + y2 = 0, y1 + y2' = 2 cos t, y1(0) = 1, y2(0) = 0
5. y1' = -4y1 - 2y2 + t, y2' = 3y1 + y2 - t, y1(0) = 5.75, y2(0) = -6.75
6. y1' = 4y2 - 8 cos 4t, y2' = -3y1 - 9 sin 4t, y1(0) = 0, y2(0) = 3
7. y1' = 5y1 - 4y2 - 9t^2 + 2t, y2' = 10y1 - 7y2 - 2t, y1(0) = 2, y2(0) = 0
8. y1' = 6y1 + y2, y2' = 9y1 + 6y2, y1(0) = -3, y2(0) = -3
9. y1' = 5y1 + 5y2 + 15 cos t + 27 sin t, y2' = -10y1 - 5y2 + 150 sin t, y1(0) = · · · , y2(0) = 2
10. y1' = 2y1 + 3y2, y2' = 4y1 + · · · , y2(0) = 3
11. y1' = y2 + 1 - u(t - 1), y2' = -y1 + 1 - u(t - 1), y1(0) = 0, y2(0) = 0
12. y1' = 2y1 + y2, y2' = 4y1 + 2y2 + 64t u(t - 1), y1(0) = 2, y2(0) = 0
13. y1' = y1 + 6 u(t - 2) e^{4t}, y2' = y1 + 2y2, y1(0) = 0, y2(0) = 1
14. y1' = -y2, y2' = -y1 + 2[1 - u(t - 2π)] cos t, y1(0) = 1, y2(0) = 0
15. y1' = -3y1 + y2 + u(t - 1) e^t, y2' = -4y1 + 2y2 + u(t - 1) e^t, y1(0) = 0, y2(0) = 3
16. y1'' = -2y1 + 2y2, y2'' = 2y1 - 5y2, y1(0) = 1, y1'(0) = 0, y2(0) = 3, y2'(0) = 0
17. y1'' = 4y1 + 8y2, y2'' = 5y1 + y2, y1(0) = 8, y1'(0) = -18, y2(0) = -21
18. y1'' + y2 = -101 sin 10t, y2'' + y1 = 101 sin 10t, y1(0) = 0, y1'(0) = 6, y2(0) = 8, y2'(0) = -6
19. y1' + y2 = 2e^t + e^{-t}, y2' + y3 = 2 sinh t, y3' + y1 = e^t, y1(0) = 0, y2(0) = 1, y3(0) = 1
20. 4y1' + y2' - 2y3' = 0, -2y1' + y3' = 1, 2y2' - 4y3' = -16t, y1(0) = 2, y2(0) = 0, y3(0) = 0

21. TEAM PROJECT. Comparison of Methods for Linear Systems of ODEs.
(a) Models. Solve the models in Examples 1 and 2 of Sec. 4.1 by Laplace transforms and compare the amount of work with that in Sec. 4.1. (Show the details of your work.)
(b) Homogeneous Systems. Solve the systems (8), (11)-(13) in Sec. 4.3 by Laplace transforms. (Show the details.)
(c) Nonhomogeneous System. Solve the system (3) in Sec.
4.6 by Laplace transforms. (Show the details.)

FURTHER APPLICATIONS
22. (Forced vibrations of two masses) Solve the model in Example 3 with k = 4 and initial conditions y1(0) = 1, y1'(0) = 1, y2(0) = 1, y2'(0) = -1 under the assumption that the force 11 sin t is acting on the first body and the force -11 sin t on the second. Graph the two curves on common axes and explain the motion physically.
23. CAS EXPERIMENT. Effect of Initial Conditions. In Prob. 22, vary the initial conditions systematically; describe and explain the graphs physically. The great variety of curves will surprise you. Are they always periodic? Can you find empirical laws for the changes in terms of continuous changes of those conditions?
24. (Mixing problem) What will happen in Example 1 if you double all flows (in particular, an increase to 12 gal/min containing 12 lb of salt from the outside), leaving the size of the tanks and the initial conditions as before? First guess, then calculate. Can you relate the new solution to the old one?
25. (Electrical network) Using Laplace transforms, find the currents i1(t) and i2(t) in Fig. 146, where v(t) = 390 cos t and i1(0) = 0, i2(0) = 0. How soon will the currents practically reach their steady state?

Fig. 146. Electrical network and currents in Problem 25

26. (Single cosine wave) Solve Prob. 25 when the EMF (electromotive force) is acting from 0 to 2π only. Can you do this just by looking at Prob. 25, practically without calculation?

6.8 Laplace Transform: General Formulas

F(s) = ℒ{f(t)} = ∫_0^∞ e^{-st} f(t) dt,  f(t) = ℒ^{-1}{F(s)}    (Definition of Transform; Inverse Transform; Sec. 6.1)
ℒ{a f(t) + b g(t)} = a ℒ{f(t)} + b ℒ{g(t)}    (Linearity; Sec. 6.1)
ℒ{e^{at} f(t)} = F(s - a),  ℒ^{-1}{F(s - a)} = e^{at} f(t)    (s-Shifting, First Shifting Theorem; Sec. 6.1)
ℒ(f') = s ℒ(f) - f(0)
ℒ(f'') = s^2 ℒ(f) - s f(0) - f'(0)
ℒ(f^{(n)}) = s^n ℒ(f) - s^{n-1} f(0) - · · · - f^{(n-1)}(0)    (Differentiation of Function; Sec. 6.2)
ℒ{∫_0^t f(τ) dτ} = (1/s) ℒ(f)    (Integration of Function; Sec. 6.2)
(f * g)(t) = ∫_0^t f(τ) g(t - τ) dτ,  ℒ(f * g) = ℒ(f) ℒ(g)    (Convolution; Sec. 6.5)
ℒ{f(t - a) u(t - a)} = e^{-as} F(s),  ℒ^{-1}{e^{-as} F(s)} = f(t - a) u(t - a)    (t-Shifting, Second Shifting Theorem; Sec. 6.3)
ℒ{t f(t)} = -F'(s),  ℒ{f(t)/t} = ∫_s^∞ F(s~) ds~    (Differentiation and Integration of Transform; Sec. 6.6)
f periodic with period p:  ℒ(f) = [1/(1 - e^{-ps})] ∫_0^p e^{-st} f(t) dt    (Periodic Function; Sec. 6.4)

6.9 Table of Laplace Transforms

F(s) = ℒ{f(t)}    f(t)
1. 1/s ↔ 1    (Sec. 6.1)
2. 1/s^2 ↔ t    (Sec. 6.1)
3. 1/s^n (n = 1, 2, · · ·) ↔ t^{n-1}/(n - 1)!    (Sec. 6.1)
4. 1/√s ↔ 1/√(πt)
5. 1/s^{3/2} ↔ 2√(t/π)
6. 1/s^a (a > 0) ↔ t^{a-1}/Γ(a)
7. 1/(s - a) ↔ e^{at}    (Sec. 6.1)
8. 1/(s - a)^2 ↔ t e^{at}    (Sec. 6.1)
9. 1/(s - a)^n (n = 1, 2, · · ·) ↔ t^{n-1} e^{at}/(n - 1)!    (Sec. 6.1)
10. 1/(s - a)^k (k > 0) ↔ t^{k-1} e^{at}/Γ(k)
11. 1/((s - a)(s - b)) (a ≠ b) ↔ (e^{at} - e^{bt})/(a - b)
12. s/((s - a)(s - b)) (a ≠ b) ↔ (a e^{at} - b e^{bt})/(a - b)
13. ω/(s^2 + ω^2) ↔ sin ωt
14. s/(s^2 + ω^2) ↔ cos ωt
15. a/(s^2 - a^2) ↔ sinh at
16. s/(s^2 - a^2) ↔ cosh at
17. ω/((s - a)^2 + ω^2) ↔ e^{at} sin ωt
18. (s - a)/((s - a)^2 + ω^2) ↔ e^{at} cos ωt
19. 1/(s(s^2 + ω^2)) ↔ (1/ω^2)(1 - cos ωt)
20. 1/(s^2(s^2 + ω^2)) ↔ (1/ω^3)(ωt - sin ωt)
21. 1/(s^2 + ω^2)^2 ↔ (1/(2ω^3))(sin ωt - ωt cos ωt)
22. s/(s^2 + ω^2)^2 ↔ (t/(2ω)) sin ωt
23. s^2/(s^2 + ω^2)^2 ↔ (1/(2ω))(sin ωt + ωt cos ωt)
24. s/((s^2 + a^2)(s^2 + b^2)) (a^2 ≠ b^2) ↔ (cos at - cos bt)/(b^2 - a^2)
25. 1/(s^4 + 4k^4) ↔ (1/(4k^3))(sin kt cosh kt - cos kt sinh kt)
26. s/(s^4 + 4k^4) ↔ (1/(2k^2)) sin kt sinh kt
27. 1/(s^4 - k^4) ↔ (1/(2k^3))(sinh kt - sin kt)
28. s/(s^4 - k^4) ↔ (1/(2k^2))(cosh kt - cos kt)
29. √(s - a) - √(s - b) ↔ (e^{bt} - e^{at})/(2√(πt^3))
30. 1/(√(s + a) √(s + b)) ↔ e^{-(a+b)t/2} I_0((a - b)t/2)
31. 1/√(s^2 + a^2) ↔ J_0(at)
32. s/(s - a)^{3/2} ↔ (1/√(πt)) e^{at}(1 + 2at)
33. 1/(s^2 - a^2)^k (k > 0) ↔ (√π/Γ(k)) (t/(2a))^{k - 1/2} I_{k-1/2}(at)
34. e^{-as}/s ↔ u(t - a)
35. e^{-as} ↔ δ(t - a)
36. (1/s) e^{-k/s} ↔ J_0(2√(kt))
37. (1/√s) e^{-k/s} ↔ (1/√(πt)) cos 2√(kt)
38. (1/s^{3/2}) e^{k/s} ↔ (1/√(πk)) sinh 2√(kt)
39. e^{-k√s} (k > 0) ↔ (k/(2√(πt^3))) e^{-k^2/(4t)}
40. (1/s) ln s ↔ -ln t - γ (γ ≈ 0.5772)
41. ln ((s - a)/(s - b)) ↔ (1/t)(e^{bt} - e^{at})    (Sec. 6.6)
42. ln ((s^2 + ω^2)/s^2) ↔ (2/t)(1 - cos ωt)    (Sec. 6.6)
43. ln ((s^2 - a^2)/s^2) ↔ (2/t)(1 - cosh at)    (Sec. 6.6)
44. arctan (ω/s) ↔ (1/t) sin ωt    (Sec. 6.6)
45. (1/s) arccot s ↔ Si(t)    (App. A3.1)

CHAPTER 6 REVIEW QUESTIONS AND PROBLEMS

1. What do we mean by operational calculus?
2. What are the steps needed in solving an ODE by Laplace transform? What is the subsidiary equation?
3. The Laplace transform is a linear operation. What does this mean? Why is it important?
4. For what problems is the Laplace transform preferable over the usual method? Explain.
5. What are the unit step and Dirac's delta functions? Give examples.
6. What is the difference between the two shifting theorems? When do they apply?
7. Is ℒ{f(t) g(t)} = ℒ{f(t)} ℒ{g(t)}? Explain.
8. Can a discontinuous function have a Laplace transform?
Does every continuous function have a Laplace transform? Give reasons.
9. State the transforms of a few simple functions from memory.
10. If two different continuous functions have transforms, the latter are different. Why is this practically important?

11-22  LAPLACE TRANSFORMS
Find the transform (showing the details of your work and indicating the method or formula you are using):
11. t e^{at}
12. e^{-t} sin 2t
13. sin^2 t
14. cos^2 4t
15. t u(t - π)
16. u(t - 2π) sin t
17. e^t * cos 2t
18. (sin ωt) * (cos ωt)
19. sin t + sinh t
20. cosh t - cos t
21. e^{at} * e^{bt} (a ≠ b)
22. cosh 2t - cosh t

23-34  INVERSE LAPLACE TRANSFORMS
Find the inverse transform (showing the details of your work and indicating the method or formula used):
23. 10/(s + 2)
24. 15/(s^2 - 4)
25. 12/(s^2 + 4s + 20)
28. 10/(s^2 + 16)^2
29. (2s + 4)/(s^2 + 4s + 5)^2
31. 1/(s^2(s^2 + ω^2))

35-50  SINGLE ODEs AND SYSTEMS OF ODEs
Solve by Laplace transforms, showing the details and graphing the solution:
35. y'' + y = u(t - 1), y(0) = 0, y'(0) = 20
36. y'' + 16y = 4δ(t - π), y(0) = -1, y'(0) = 0
37. y'' + 4y = 8δ(t - 5), y(0) = 10, y'(0) = -1
38. y'' + y = u(t - 2), y(0) = 0, y'(0) = 0
39. y'' + 2y' + 10y = 0, y(0) = 7, y'(0) = -1
40. y'' + 4y' + 5y = 50t, y(0) = 5, y'(0) = -5
41. y'' - y' - 2y = 12 u(t - π) sin t, y(0) = 1, y'(0) = -1
42. y'' - 2y' + y = t δ(t - 1), y(0) = 0, y'(0) = 0
43. y'' - 4y' + 4y = δ(t - 1) - δ(t - 2), y(0) = 0, y'(0) = 0
44. y'' + 4y = δ(t - π) - δ(t - 2π), y(0) = 1, y'(0) = 0
45. · · · , y1(0) = 1, y2(0) = 0
46. y1' = -3y1 + y2 - 12t, y2' = -4y1 + 2y2, y1(0) = 0, y2(0) = 0
47. y1' = y2, y2' = -5y1 - 2y2, y1(0) = 0, y2(0) = 1
48. y1' = y2, y2' = -4y1 + δ(t - π), y1(0) = 0, y2(0) = 0
49. y1'' = 4y2 - 4e^t, y2'' = 3y1 + y2, y1(0) = 1, y1'(0) = 2, y2(0) = 2, y2'(0) = 3
50. y1'' = 16y2, y2'' = 16y1, y1(0) = 2, y1'(0) = 12, y2(0) = 6,
y2'(0) = 4.

MODELS OF CIRCUITS AND NETWORKS
51. (RC-circuit) Find and graph the current i(t) in the RC-circuit in Fig. 147, where R = 100 Ω, C = 10^{-3} F, v(t) = 100t V if 0 < t < 2, v(t) = 200 V if t > 2, and the initial charge on the capacitor is 0.

Fig. 147. RC-circuit

52. (LC-circuit) Find and graph the charge q(t) and the current i(t) in the LC-circuit in Fig. 148, where L = 0.5 H, C = 0.02 F, v(t) = 1425 sin 5t V if 0 < t < π, v(t) = 0 if t > π, and current and charge at t = 0 are 0.

Fig. 148. LC-circuit

53. (RLC-circuit) Find and graph the current i(t) in the RLC-circuit in Fig. 149, where R = 1 Ω, L = 0.25 H, C = 0.2 F, v(t) = 377 sin 20t V, and current and charge at t = 0 are 0.

Fig. 149. RLC-circuit

54. (Network) Show that by Kirchhoff's voltage law (Sec. 2.9), the currents in the network in Fig. 150 are obtained from the system

    L i1' + R(i1 - i2) = v(t)
    R(i2' - i1') + (1/C) i2 = 0.

Solve this system, where R = 1 Ω, L = 2 H, C = 0.5 F, v(t) = 90 e^{-t} V, i1(0) = 0, i2(0) = 2 A.

Fig. 150. Network in Problem 54

55. (Network) Set up the model of the network in Fig. 151 and find and graph the currents, assuming that the currents and the charge on the capacitor are 0 when the switch is closed at t = 0.

Fig. 151. Network in Problem 55 (L = 1 H, C = 0.01 F, v = 100 t^2 V)

SUMMARY OF CHAPTER 6  Laplace Transforms

The main purpose of Laplace transforms is the solution of differential equations and systems of such equations, as well as corresponding initial value problems. The Laplace transform F(s) = ℒ(f) of a function f(t) is defined by

(1)    F(s) = ℒ(f) = ∫_0^∞ e^{-st} f(t) dt    (Sec. 6.1).

This definition is motivated by the property that the differentiation of f with respect to t corresponds to the multiplication of the transform F by s; more precisely,

(2)    ℒ(f') = s ℒ(f) - f(0)
       ℒ(f'') = s^2 ℒ(f) - s f(0) - f'(0)    (Sec. 6.2)

etc. Hence by taking the transform of a given differential equation

(3)    y'' + ay' + by = r(t)    (a, b constant)

and writing ℒ(y) = Y(s), we obtain the subsidiary equation

(4)    (s^2 + as + b)Y = ℒ(r) + s y(0) + y'(0) + a y(0).
Here, in obtaining the transform ℒ(r) we can get help from the small table in Sec. 6.1 or the larger table in Sec. 6.9. This is the first step. In the second step we solve the subsidiary equation algebraically for Y(s). In the third step we determine the inverse transform y(t) = ℒ^{-1}(Y), that is, the solution of the problem. This is generally the hardest step, and in it we may again use one of those two tables. Y(s) will often be a rational function, so that we can obtain the inverse ℒ^{-1}(Y) by partial fraction reduction (Sec. 6.4) if we see no simpler way.

The Laplace method avoids the determination of a general solution of the homogeneous ODE, and we also need not determine values of arbitrary constants in a general solution from initial conditions; instead, we can insert the latter directly into (4). Two further facts account for the practical importance of the Laplace transform. First, it has some basic properties and resulting techniques that simplify the determination of transforms and inverses. The most important of these properties are listed in Sec. 6.8, together with references to the corresponding sections. More on the use of unit step functions and Dirac's delta can be found in Secs. 6.3 and 6.4, and more on convolution in Sec. 6.5. Second, due to these properties, the present method is particularly suitable for handling right sides r(t) given by different expressions over different intervals of time, for instance, when r(t) is a square wave or an impulse or of a form such as r(t) = cos t if 0 ≤ t ≤ 4π and 0 elsewhere. The application of the Laplace transform to systems of ODEs is shown in Sec. 6.7. (The application to PDEs follows in Sec. 12.11.)