Numerical Solution of Ordinary Differential Equations

E. Süli

April 3, 2013

Contents

1 Picard's theorem
2 One-step methods
  2.1 Euler's method and its relatives: the θ-method
  2.2 Error analysis of the θ-method
  2.3 General explicit one-step method
  2.4 Runge-Kutta methods
  2.5 Absolute stability of Runge-Kutta methods
3 Linear multi-step methods
  3.1 Construction of linear multi-step methods
  3.2 Zero-stability
  3.3 Consistency
  3.4 Convergence
    3.4.1 Necessary conditions for convergence
    3.4.2 Sufficient conditions for convergence
  3.5 Maximum order of a zero-stable linear multi-step method
  3.6 Absolute stability of linear multi-step methods
  3.7 General methods for locating the interval of absolute stability
    3.7.1 The Schur criterion
    3.7.2 The Routh-Hurwitz criterion
  3.8 Predictor-corrector methods
    3.8.1 Absolute stability of predictor-corrector methods
    3.8.2 The accuracy of predictor-corrector methods
4 Stiff problems
  4.1 Stability of numerical methods for stiff systems
  4.2 Backward differentiation methods for stiff systems
  4.3 Gear's method
5 Nonlinear stability
6 Boundary value problems
  6.1 Shooting methods
    6.1.1 The method of bisection
    6.1.2 The Newton-Raphson method
  6.2 Matrix methods
    6.2.1 Linear boundary value problem
    6.2.2 Nonlinear boundary value problem
  6.3 Collocation method

Preface. The purpose of these lecture notes is to provide an introduction to computational methods for the approximate solution of ordinary differential equations (ODEs). Only minimal prerequisites in differential and integral calculus, differential equation theory, complex analysis and linear algebra are assumed. The notes focus on the construction of numerical algorithms for ODEs and the mathematical analysis of their behaviour, covering the material taught in the M.Sc. in Mathematical Modelling and Scientific Computation in the eight-lecture course Numerical Solution of Ordinary Differential Equations. The notes begin with a study of the well-posedness of initial value problems for first-order differential equations and systems of such equations. The basic ideas of discretisation and error analysis are then introduced in the case of one-step methods. This is followed by an extension of the concepts of stability and accuracy to linear multi-step methods, including predictor-corrector methods, and a brief excursion into numerical methods for stiff systems of ODEs. The final sections are devoted to an overview of classical algorithms for the numerical solution of two-point boundary value problems.

Syllabus.
Approximation of initial value problems for ordinary differential equations: one-step methods including the explicit and implicit Euler methods, the trapezium rule method, and Runge-Kutta methods. Linear multi-step methods: consistency, zero-stability and convergence; absolute stability. Predictor-corrector methods. Stiffness, stability regions, Gear's methods and their implementation. Nonlinear stability. Boundary value problems: shooting methods, matrix methods and collocation.

Reading List:

[1] H.B. Keller, Numerical Methods for Two-point Boundary Value Problems. SIAM, Philadelphia, 1976.
[2] J.D. Lambert, Computational Methods in Ordinary Differential Equations. Wiley, Chichester, 1991.

Further Reading:

[1] E. Hairer, S.P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag, Berlin, 1987.
[2] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations. Wiley, New York, 1962.
[3] K.W. Morton, Numerical Solution of Ordinary Differential Equations. Oxford University Computing Laboratory, 1987.
[4] A.M. Stuart and A.R. Humphries, Dynamical Systems and Numerical Analysis. Cambridge University Press, Cambridge, 1996.

1 Picard's theorem

Ordinary differential equations frequently occur as mathematical models in many branches of science, engineering and economics. Unfortunately it is seldom that these equations have solutions that can be expressed in closed form, so it is common to seek approximate solutions by means of numerical methods; nowadays this can usually be achieved very inexpensively to high accuracy and with a reliable bound on the error between the analytical solution and its numerical approximation. In this section we shall be concerned with the construction and the analysis of numerical methods for first-order differential equations of the form

  y' = f(x, y)   (1)

for the real-valued function y of the real variable x, where y' = dy/dx. In order to select a particular integral from the infinite family of solution curves that constitute the general solution to (1), the differential equation will be considered in tandem with an initial condition: given two real numbers x_0 and y_0, we seek a solution to (1) for x > x_0 such that

  y(x_0) = y_0.   (2)

The differential equation (1) together with the initial condition (2) is called an initial value problem.

In general, even if f(·,·) is a continuous function, there is no guarantee that the initial value problem (1-2) possesses a unique solution.¹ Fortunately, under a further mild condition on the function f, the existence and uniqueness of a solution to (1-2) can be ensured: the result is encapsulated in the next theorem.

Theorem 1 (Picard's Theorem²) Suppose that f(·,·) is a continuous function of its arguments in a region U of the (x, y) plane which contains the rectangle

  R = { (x, y) : x_0 ≤ x ≤ X_M, |y − y_0| ≤ Y_M },

where X_M > x_0 and Y_M > 0 are constants. Suppose also that there exists a positive constant L such that

  |f(x, y) − f(x, z)| ≤ L |y − z|   (3)

holds whenever (x, y) and (x, z) lie in the rectangle R. Finally, letting M = max{ |f(x, y)| : (x, y) ∈ R }, suppose that M(X_M − x_0) ≤ Y_M. Then there exists a unique continuously differentiable function x ↦ y(x), defined on the closed interval [x_0, X_M], which satisfies (1) and (2).

The condition (3) is called a Lipschitz condition³, and L is called the Lipschitz constant for f.

¹ Consider, for example, the initial value problem y' = y^{2/3}, y(0) = 0; this has two solutions: y(x) = 0 and y(x) = x³/27.
² Émile Picard (1856-1941)
³ Rudolf Lipschitz (1832-1903)

We shall not dwell on the proof of Picard's Theorem; for details, the interested reader is referred to any good textbook on the theory of ordinary differential equations, or the lecture notes of P. J.
Collins, Differential and Integral Equations, Part I, Mathematical Institute Oxford, 1988 (reprinted 1990). The essence of the proof is to consider the sequence of functions {y_n}_{n=0}^∞ defined recursively through what is known as the Picard iteration:

  y_0(x) = y_0,
  y_n(x) = y_0 + ∫_{x_0}^{x} f(ξ, y_{n−1}(ξ)) dξ,  n = 1, 2, ...,   (4)

and show, using the conditions of the theorem, that {y_n}_{n=0}^∞ converges uniformly on the interval [x_0, X_M] to a function y defined on [x_0, X_M] such that

  y(x) = y_0 + ∫_{x_0}^{x} f(ξ, y(ξ)) dξ.

This then implies that y is continuously differentiable on [x_0, X_M] and that it satisfies the differential equation (1) and the initial condition (2). The uniqueness of the solution follows from the Lipschitz condition.

Picard's Theorem has a natural extension to an initial value problem for a system of m differential equations of the form

  y' = f(x, y),  y(x_0) = y_0,   (5)

where y_0 ∈ R^m and f : [x_0, X_M] × R^m → R^m. On introducing the Euclidean norm ‖·‖ on R^m by

  ‖v‖ = ( Σ_{i=1}^{m} |v_i|² )^{1/2},  v ∈ R^m,

we can state the following result.

Theorem 2 (Picard's Theorem) Suppose that f(·,·) is a continuous function of its arguments in a region U of the (x, y) space R^{1+m} which contains the parallelepiped

  R = { (x, y) : x_0 ≤ x ≤ X_M, ‖y − y_0‖ ≤ Y_M },

where X_M > x_0 and Y_M > 0 are constants. Suppose also that there exists a positive constant L such that

  ‖f(x, y) − f(x, z)‖ ≤ L ‖y − z‖   (6)

holds whenever (x, y) and (x, z) lie in R. Finally, letting M = max{ ‖f(x, y)‖ : (x, y) ∈ R }, suppose that M(X_M − x_0) ≤ Y_M. Then there exists a unique continuously differentiable function x ↦ y(x), defined on the closed interval [x_0, X_M], which satisfies (5).

A sufficient condition for (6) is that f is continuous on R, differentiable at each point (x, y) in int(R), the interior of R, and that there exists L > 0 such that

  ‖ (∂f/∂y)(x, y) ‖ ≤ L   (7)

for all (x, y) in int(R), where ∂f/∂y denotes the m × m Jacobi matrix of the mapping y ∈ R^m ↦ f(x, y) ∈ R^m, and ‖·‖ is a matrix norm subordinate to the Euclidean vector norm on R^m. Indeed, when (7) holds, the Mean Value Theorem implies that (6) is also valid. The converse of this statement is not true; for the function f(y) = (|y_1|, ..., |y_m|)^T, with x_0 = 0 and y_0 = 0, satisfies (6) but violates (7) because y ↦ f(y) is not differentiable at the point y = 0.

As the counter-example in footnote 1 indicates, the expression |y − z| in (3) and ‖y − z‖ in (6) cannot be replaced by expressions of the form |y − z|^α and ‖y − z‖^α, respectively, where 0 < α < 1, for otherwise the uniqueness of the solution to the corresponding initial value problem could not be guaranteed.

We conclude this section by introducing the notion of stability.

Definition 1 A solution y = v(x) to (5) is said to be stable on the interval [x_0, X_M] if for every ε > 0 there exists δ > 0 such that for all z satisfying ‖v(x_0) − z‖ < δ the solution y = w(x) to the differential equation y' = f(x, y) satisfying the initial condition w(x_0) = z is defined for all x ∈ [x_0, X_M] and satisfies ‖v(x) − w(x)‖ < ε for all x in [x_0, X_M].

A solution which is stable on [x_0, ∞) (i.e. stable on [x_0, X_M] for each X_M and with δ independent of X_M) is said to be stable in the sense of Lyapunov. Moreover, if

  lim_{x→+∞} ‖v(x) − w(x)‖ = 0,

then the solution y = v(x) is called asymptotically stable.

Using this definition, we can state the following theorem.

Theorem 3 Under the hypotheses of Picard's theorem, the (unique) solution y = v(x) to the initial value problem (5) is stable on the interval [x_0, X_M] (where we assume that −∞ < x_0 < X_M < ∞).

Proof: Since

  v(x) = v(x_0) + ∫_{x_0}^{x} f(ξ, v(ξ)) dξ  and  w(x) = z + ∫_{x_0}^{x} f(ξ, w(ξ)) dξ,

it follows that

  ‖v(x) − w(x)‖ ≤ ‖v(x_0) − z‖ + ∫_{x_0}^{x} ‖f(ξ, v(ξ)) − f(ξ, w(ξ))‖ dξ
        ≤ ‖v(x_0) − z‖ + L ∫_{x_0}^{x} ‖v(ξ) − w(ξ)‖ dξ.   (8)

Now put A(x) = ‖v(x) − w(x)‖ and a = ‖v(x_0) − z‖; then (8) can be written as

  A(x) ≤ a + L ∫_{x_0}^{x} A(ξ) dξ,  x_0 ≤ x ≤ X_M.   (9)

Multiplying both sides of (9) by exp(−L(x − x_0)) we deduce that

  d/dx [ e^{−L(x−x_0)} ∫_{x_0}^{x} A(ξ) dξ ] ≤ a e^{−L(x−x_0)}.   (10)

Integrating (10) from x_0 to x gives

  L ∫_{x_0}^{x} A(ξ) dξ ≤ a ( e^{L(x−x_0)} − 1 ),   (11)

and, on substituting this into the right-hand side of (9),

  A(x) ≤ a e^{L(x−x_0)},  x_0 ≤ x ≤ X_M.   (12)

The implication "(9) ⇒ (12)" is usually referred to as the Gronwall Lemma.
Returning to our original notation, we conclude from (12) that

  ‖v(x) − w(x)‖ ≤ ‖v(x_0) − z‖ e^{L(x−x_0)},  x_0 ≤ x ≤ X_M.   (13)

Thus, given ε > 0 as in Definition 1, we choose δ = ε exp(−L(X_M − x_0)) to deduce stability. ◊

To conclude this section, we observe that if either x_0 = −∞ or X_M = +∞, the statement of Theorem 3 is false. For example, the trivial solution y = 0 to the differential equation y' = y is unstable on [x_0, ∞) for any x_0 > −∞. More generally, given the initial value problem

  y' = λy,  y(x_0) = y_0,

with λ a complex number, the solution y(x) = y_0 exp(λ(x − x_0)) is unstable for Re λ > 0; the solution is stable in the sense of Lyapunov for Re λ ≤ 0 and is asymptotically stable for Re λ < 0.

In the next section we shall consider numerical methods for the approximate solution of the initial value problem (1-2). Since everything we shall say has a straightforward extension to the case of the system (5), for the sake of notational simplicity we shall restrict ourselves to considering a single ordinary differential equation, corresponding to m = 1. We shall suppose throughout that the function f satisfies the conditions of Picard's Theorem on the rectangle R and that the initial value problem has a unique solution defined on the interval [x_0, X_M], −∞ < x_0 < X_M < ∞. We begin by discussing one-step methods; this will be followed in subsequent sections by the study of linear multi-step methods.

2 One-step methods

2.1 Euler's method and its relatives: the θ-method

The simplest example of a one-step method for the numerical solution of the initial value problem (1-2) is Euler's method.⁴

⁴ Leonhard Euler (1707-1783)

Euler's method. Suppose that the initial value problem (1-2) is to be solved on the interval [x_0, X_M]. We divide this interval by the mesh points x_n = x_0 + nh, n = 0, ..., N, where h = (X_M − x_0)/N and N is a positive integer. The positive real number h is called the step size. Now let us suppose that, for each n, we seek a numerical approximation y_n to y(x_n), the value of the analytical solution at the mesh point x_n. Given that y(x_0) = y_0 is known, let us suppose that we have already calculated y_n, up to some n, 0 ≤ n ≤ N − 1; we define

  y_{n+1} = y_n + h f(x_n, y_n),  n = 0, ..., N − 1.

Thus, taking in succession n = 0, 1, ..., N − 1, one step at a time, the approximate values y_n at the mesh points x_n can be easily obtained. This numerical method is known as Euler's method.

A simple derivation of Euler's method proceeds by first integrating the differential equation (1) between two consecutive mesh points x_n and x_{n+1} to deduce that

  y(x_{n+1}) = y(x_n) + ∫_{x_n}^{x_{n+1}} f(x, y(x)) dx,  n = 0, ..., N − 1,   (14)

and then applying the numerical integration rule

  ∫_{x_n}^{x_{n+1}} g(x) dx ≈ h g(x_n),

called the rectangle rule, with g(x) = f(x, y(x)), to get

  y(x_{n+1}) ≈ y(x_n) + h f(x_n, y(x_n)),  n = 0, ..., N − 1,  y(x_0) = y_0.

This then motivates the definition of Euler's method. The idea can be generalised by replacing the rectangle rule in the derivation of Euler's method with a one-parameter family of integration rules of the form

  ∫_{x_n}^{x_{n+1}} g(x) dx ≈ h [ (1 − θ) g(x_n) + θ g(x_{n+1}) ],   (15)

with θ ∈ [0, 1] a parameter. On applying this in (14) with g(x) = f(x, y(x)) we find that

  y(x_{n+1}) ≈ y(x_n) + h [ (1 − θ) f(x_n, y(x_n)) + θ f(x_{n+1}, y(x_{n+1})) ],  n = 0, ..., N − 1,  y(x_0) = y_0.

This then motivates the introduction of the following one-parameter family of methods: given that y_0 is supplied by (2), define

  y_{n+1} = y_n + h [ (1 − θ) f(x_n, y_n) + θ f(x_{n+1}, y_{n+1}) ],  n = 0, ..., N − 1,   (16)

parametrised by θ ∈ [0, 1]; (16) is called the θ-method. Now, for θ = 0 we recover Euler's method; a computational sketch of the general θ-method follows.
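Although the notes do not prescribe an implementation, the θ-method (16) translates directly into code. The following is a minimal Python sketch; the function name theta_method and the use of a simple fixed-point iteration for the implicit case are illustrative assumptions rather than part of the notes, but they anticipate Example 1 below, where the implicit equations are solved by exactly such an iteration.

    import numpy as np

    def theta_method(f, x0, y0, X_M, N, theta, fp_iters=10):
        """Apply the theta-method (16) to y' = f(x, y), y(x0) = y0 on [x0, X_M].

        theta = 0 gives the explicit Euler method, theta = 1 the implicit
        Euler method and theta = 1/2 the trapezium rule method.  For
        theta > 0 the implicit equation for y_{n+1} is solved by a simple
        fixed-point iteration started from the explicit Euler value.
        """
        h = (X_M - x0) / N
        x = x0 + h * np.arange(N + 1)
        y = np.zeros(N + 1)
        y[0] = y0
        for n in range(N):
            y_new = y[n] + h * f(x[n], y[n])      # explicit Euler predictor
            for _ in range(fp_iters):             # trivial when theta = 0
                y_new = y[n] + h * ((1 - theta) * f(x[n], y[n])
                                    + theta * f(x[n + 1], y_new))
            y[n + 1] = y_new
        return x, y

    # The model problem of Example 1 below: y' = x - y^2, y(0) = 0, h = 0.1
    x, y = theta_method(lambda x, y: x - y**2, 0.0, 0.0, 0.4, 4, theta=0.5)
    print(y)   # should reproduce the theta = 1/2 column of Table 1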
For θ = 1, and y_0 specified by (2), we get

  y_{n+1} = y_n + h f(x_{n+1}, y_{n+1}),  n = 0, ..., N − 1,   (17)

referred to as the implicit Euler method since, unlike Euler's method considered above, (17) requires the solution of an implicit equation in order to determine y_{n+1}, given y_n. In order to emphasize this difference, Euler's method is sometimes termed the explicit Euler method.

The scheme which results for the value θ = 1/2 is also of interest: y_0 is supplied by (2) and subsequent values y_{n+1} are computed from

  y_{n+1} = y_n + (1/2) h [ f(x_n, y_n) + f(x_{n+1}, y_{n+1}) ],  n = 0, ..., N − 1;

this is called the trapezium rule method.

The θ-method is an explicit method for θ = 0 and is an implicit method for 0 < θ ≤ 1, given that in the latter case (16) requires the solution of an implicit equation for y_{n+1}. Further, for each value of the parameter θ ∈ [0, 1], (16) is a one-step method in the sense that to compute y_{n+1} we only use one previous value y_n. Methods which require more than one previously computed value are referred to as multi-step methods, and will be discussed later on in the notes.

In order to assess the accuracy of the θ-method for various values of the parameter θ in [0, 1], we perform a numerical experiment on a simple model problem.

Example 1 Given the initial value problem y' = x − y², y(0) = 0, on the interval x ∈ [0, 0.4], we compute an approximate solution using the θ-method, for θ = 0, θ = 1/2 and θ = 1, using the step size h = 0.1. The results are shown in Table 1. In the case of the two implicit methods, corresponding to θ = 1/2 and θ = 1, the nonlinear equations have been solved by a fixed-point iteration.

  k   x_k    y_k for θ = 0   y_k for θ = 1/2   y_k for θ = 1
  0   0      0               0                 0
  1   0.1    0               0.00500           0.00999
  2   0.2    0.01000         0.01998           0.02990
  3   0.3    0.02999         0.04486           0.05955
  4   0.4    0.05990         0.07944           0.09857

  Table 1: The values of the numerical solution at the mesh points

For comparison, we also compute the value of the analytical solution y(x) at the mesh points x_n = 0.1n, n = 0, ..., 4. Since the solution is not available in closed form,⁵ we use a Picard iteration to calculate an accurate approximation to the analytical solution on the interval [0, 0.4] and call this the "exact solution". Thus, we consider

  y_0(x) = 0,  y_k(x) = ∫_0^x ( t − y_{k−1}²(t) ) dt,  k = 1, 2, ....

Hence,

  y_1(x) = (1/2) x²,
  y_2(x) = (1/2) x² − (1/20) x⁵,
  y_3(x) = (1/2) x² − (1/20) x⁵ + (1/160) x⁸ − (1/4400) x¹¹.

It is easy to prove by induction that

  y(x) = (1/2) x² − (1/20) x⁵ + (1/160) x⁸ − (1/4400) x¹¹ + O(x¹⁴).

⁵ Using MAPLE V, the command
  > dsolve({diff(y(x),x) + y(x)*y(x) = x, y(0)=0}, y(x));
returns the solution in terms of the Bessel functions BesselI and BesselK, of orders ±1/3 and ±2/3, with argument (2/3)x^{3/2}.

Tabulating y_3(x) on the interval [0, 0.4] with step size h = 0.1, we get the "exact solution" at the mesh points shown in Table 2.

  k   x_k    y(x_k)
  0   0      0
  1   0.1    0.00500
  2   0.2    0.01998
  3   0.3    0.04488
  4   0.4    0.07949

  Table 2: Values of the "exact solution" at the mesh points

The "exact solution" is in good agreement with the results obtained with θ = 1/2: the error is ≤ 5·10⁻⁵. For θ = 0 and θ = 1 the discrepancy between y_k and y(x_k) is larger: it is ≤ 3·10⁻².
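The Picard iterates used to produce Table 2 can also be generated symbolically. A small sketch, assuming the sympy library (illustrative, not part of the notes):

    import sympy as sp

    x, t = sp.symbols('x t')
    yk = sp.Integer(0)                       # y_0(x) = 0
    for _ in range(3):                       # three Picard iterations
        yk = sp.integrate(t - yk.subs(x, t)**2, (t, 0, x))
    print(sp.expand(yk))                     # x**2/2 - x**5/20 + x**8/160 - x**11/4400
    print([sp.N(yk.subs(x, sp.Rational(n, 10)), 4) for n in range(5)])  # Table 2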
We note in conclusion that a plot of the analytical solution can be obtained, for example, by using the MAPLE V package by typing in the following at the command line:

> with(DEtools):
> DEplot(diff(y(x),x)+y(x)*y(x)=x, y(x), x=0..0.4, [[y(0)=0]], y=-0.1..0.1, stepsize=0.05);

So, why is the gap between the analytical solution and its numerical approximation in this example so much larger for θ ≠ 1/2 than for θ = 1/2? The answer to this question is the subject of the next section.

2.2 Error analysis of the θ-method

First we have to explain what we mean by error. The exact solution of the initial value problem (1-2) is a function of a continuously varying argument x ∈ [x_0, X_M], while the numerical solution y_n is only defined at the mesh points x_n, n = 0, ..., N, so it is a function of a "discrete" argument. We can compare these two functions either by extending in some fashion the approximate solution from the mesh points to the whole of the interval [x_0, X_M] (say by interpolating between the values y_n), or by restricting the function y to the mesh points and comparing y(x_n) with y_n for n = 0, ..., N. Since the first of these approaches is somewhat arbitrary because it does not correspond to any procedure performed in a practical computation, we adopt the second approach, and we define the global error e_n by

  e_n = y(x_n) − y_n,  n = 0, ..., N.

We wish to investigate the decay of the global error for the θ-method with respect to the reduction of the mesh size h. We shall show in detail how this is done in the case of Euler's method (θ = 0) and then quote the corresponding result in the general case (0 ≤ θ ≤ 1), leaving it to the reader to fill the gap.

So let us consider Euler's explicit method:

  y_{n+1} = y_n + h f(x_n, y_n),  n = 0, ..., N − 1,  y_0 = given.

The quantity

  T_n = [ y(x_{n+1}) − y(x_n) ] / h − f(x_n, y(x_n)),   (18)

obtained by inserting the analytical solution y(x) into the numerical method and dividing by the mesh size, is referred to as the truncation error of Euler's explicit method and will play a key role in the analysis. Indeed, it measures the extent to which the analytical solution fails to satisfy the difference equation for Euler's method. By noting that f(x_n, y(x_n)) = y'(x_n) and applying Taylor's Theorem, it follows from (18) that there exists ξ_n ∈ (x_n, x_{n+1}) such that

  T_n = (1/2) h y''(ξ_n),   (19)

where we have assumed that f is a sufficiently smooth function of two variables so as to ensure that y'' exists and is bounded on the interval [x_0, X_M]. Since from the definition of Euler's method

  0 = [ y_{n+1} − y_n ] / h − f(x_n, y_n),

on subtracting this from (18), we deduce that

  e_{n+1} = e_n + h [ f(x_n, y(x_n)) − f(x_n, y_n) ] + h T_n.

Thus, assuming that |y_n − y_0| ≤ Y_M, from the Lipschitz condition (3) we get

  |e_{n+1}| ≤ (1 + hL) |e_n| + h |T_n|,  n = 0, ..., N − 1.

Now, let T = max_{0≤n≤N−1} |T_n|; then, by induction,

  |e_n| ≤ (1 + hL)^n |e_0| + [ (1 + hL)^n − 1 ] T/L.

Noting that 1 + hL ≤ e^{hL} and that, by (19), T ≤ (1/2) h M_2 with M_2 = max_{x∈[x_0,X_M]} |y''(x)|, we conclude that

  |e_n| ≤ e^{L(x_n−x_0)} |e_0| + ( M_2 h / (2L) ) ( e^{L(x_n−x_0)} − 1 ),  n = 0, ..., N.   (20)

To conclude, we note that pursuing an analogous argument it is possible to prove that, in the general case of the θ-method,

  |e_n| ≤ |e_0| exp( L(x_n − x_0) / (1 − θLh) ) + (h/L) [ |1/2 − θ| M_2 + (1/3) h M_3 ] { exp( L(x_n − x_0) / (1 − θLh) ) − 1 }   (21)

for n = 0, ..., N, where now M_3 = max_{x∈[x_0,X_M]} |y'''(x)|. In the absence of rounding errors in the imposition of the initial condition (2) we can suppose that e_0 = y(x_0) − y_0 = 0. Assuming that this is the case, we see from (21) that |e_n| = O(h²) for θ = 1/2, while for θ = 0 and θ = 1, and indeed for any θ ≠ 1/2, |e_n| = O(h) only; the computation below illustrates this.
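These predicted rates are easy to test numerically. The following self-contained Python sketch (illustrative only; it reuses the model problem of Example 1 and the series value of y(0.4) as reference) prints the factor by which the error at x = 0.4 shrinks when h is halved:

    def theta_solve(theta, N, fp_iters=10):
        """theta-method for y' = x - y^2, y(0) = 0 on [0, 0.4] with N steps."""
        f = lambda x, y: x - y**2
        h, x, y = 0.4 / N, 0.0, 0.0
        for _ in range(N):
            z = y + h * f(x, y)                # explicit Euler predictor
            for _ in range(fp_iters):          # fixed-point iteration (trivial if theta = 0)
                z = y + h * ((1 - theta) * f(x, y) + theta * f(x + h, z))
            x, y = x + h, z
        return y

    y_ref = 0.08 - 0.4**5/20 + 0.4**8/160 - 0.4**11/4400   # series solution, Example 1
    for theta in (0.0, 0.5, 1.0):
        ratio = abs(theta_solve(theta, 40) - y_ref) / abs(theta_solve(theta, 80) - y_ref)
        print(theta, round(ratio, 2))   # ~2 for theta = 0, 1 (O(h)); ~4 for theta = 1/2 (O(h^2))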
This explains why in Tables 1 and 2 the values y_n of the numerical solution computed with the trapezium rule method (θ = 1/2) were considerably closer to the analytical solution y(x_n) at the mesh points than those which were obtained with the explicit and the implicit Euler methods (θ = 0 and θ = 1, respectively). In particular, we see from this analysis that each time the mesh size h is halved, the truncation error and the global error are reduced by a factor of 2 when θ ≠ 1/2, and by a factor of 4 when θ = 1/2.

While the trapezium rule method leads to more accurate approximations than the forward Euler method, it is less convenient from the computational point of view given that it requires the solution of implicit equations at each mesh point x_{n+1} to compute y_{n+1}. An attractive compromise is to use the forward Euler method to compute an initial crude approximation to y(x_{n+1}) and then use this value within the trapezium rule to obtain a more accurate approximation for y(x_{n+1}): the resulting numerical method is

  y_{n+1} = y_n + (1/2) h [ f(x_n, y_n) + f(x_{n+1}, y_n + h f(x_n, y_n)) ],  n = 0, ..., N − 1,  y_0 = given,

and is frequently referred to as the improved Euler method. Clearly, it is an explicit one-step scheme, albeit of a more complicated form than the explicit Euler method. In the next section, we shall take this idea further and consider a very general class of explicit one-step methods.

2.3 General explicit one-step method

A general explicit one-step method may be written in the form:

  y_{n+1} = y_n + h Φ(x_n, y_n; h),  n = 0, ..., N − 1,  y_0 = y(x_0) [specified by (2)],   (22)

where Φ(·,·;·) is a continuous function of its variables. For example, in the case of Euler's method, Φ(x_n, y_n; h) = f(x_n, y_n), while for the improved Euler method

  Φ(x_n, y_n; h) = (1/2) [ f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n)) ].

In order to assess the accuracy of the numerical method (22), we define the global error, e_n, by

  e_n = y(x_n) − y_n.

We define the truncation error, T_n, of the method by

  T_n = [ y(x_{n+1}) − y(x_n) ] / h − Φ(x_n, y(x_n); h).   (23)

The next theorem provides a bound on the global error in terms of the truncation error.

Theorem 4 Consider the general one-step method (22) where, in addition to being a continuous function of its arguments, Φ is assumed to satisfy a Lipschitz condition with respect to its second argument; namely, there exists a positive constant L_Φ such that, for 0 ≤ h ≤ h_0 and for the same region R as in Picard's theorem,

  |Φ(x, y; h) − Φ(x, z; h)| ≤ L_Φ |y − z|,  for (x, y), (x, z) in R.   (24)

Then, assuming that |y_n − y_0| ≤ Y_M, it follows that

  |e_n| ≤ e^{L_Φ(x_n−x_0)} |e_0| + [ ( e^{L_Φ(x_n−x_0)} − 1 ) / L_Φ ] T,  n = 0, ..., N,   (25)

where T = max_{0≤n≤N−1} |T_n|.

Proof: Rewriting (23) as y(x_{n+1}) = y(x_n) + h Φ(x_n, y(x_n); h) + h T_n and subtracting (22), we find that

  e_{n+1} = e_n + h [ Φ(x_n, y(x_n); h) − Φ(x_n, y_n; h) ] + h T_n.

That is, by (24),

  |e_{n+1}| ≤ (1 + h L_Φ) |e_n| + h |T_n|,  n = 0, ..., N − 1.

Hence

  |e_1| ≤ (1 + h L_Φ) |e_0| + h T,
  |e_2| ≤ (1 + h L_Φ)² |e_0| + h [ 1 + (1 + h L_Φ) ] T,
  |e_3| ≤ (1 + h L_Φ)³ |e_0| + h [ 1 + (1 + h L_Φ) + (1 + h L_Φ)² ] T,

etc.,

  |e_n| ≤ (1 + h L_Φ)^n |e_0| + [ (1 + h L_Φ)^n − 1 ] T / L_Φ.

Observing that 1 + h L_Φ ≤ exp(h L_Φ), we obtain (25). ◊

Let us note that the error bound (20) for Euler's explicit method is a special case of (25). We highlight the practical relevance of the error bound (25) by focusing on a particular example.

Example 2 Consider the initial value problem y' = tan⁻¹ y, y(0) = y_0, and suppose that this is solved by the explicit Euler method. The aim of the exercise is to apply (25) to quantify the size of the associated global error; thus, we need to find L and M_2. Here f(x, y) = tan⁻¹ y, so by the Mean Value Theorem

  |f(x, y) − f(x, z)| = | (∂f/∂y)(x, η) | |y − z|,

where η lies between y and z. In our case

  |∂f/∂y| = |(1 + y²)^{−1}| ≤ 1,

and therefore L = 1.
To find M_2 we need to obtain a bound on |y''| (without actually solving the initial value problem!). This is easily achieved by differentiating both sides of the differential equation with respect to x:

  y'' = d/dx ( tan⁻¹ y ) = (1 + y²)^{−1} y' = (1 + y²)^{−1} tan⁻¹ y.

Therefore |y''(x)| ≤ M_2 = (1/2)π. Inserting the values of L and M_2 into (20),

  |e_n| ≤ e^{x_n} |e_0| + (1/4)π ( e^{x_n} − 1 ) h,  n = 0, ..., N.

In particular, if we assume that no error has been committed initially (i.e. e_0 = 0), we have that

  |e_n| ≤ (1/4)π ( e^{x_n} − 1 ) h,  n = 0, ..., N.

Thus, given a tolerance TOL specified beforehand, we can ensure that the error between the (unknown) analytical solution and its numerical approximation does not exceed this tolerance by choosing a positive step size h such that

  h ≤ (4/π) ( e^{X_M} − 1 )^{−1} TOL.

For such h we shall have |y(x_n) − y_n| = |e_n| ≤ TOL for each n = 0, ..., N, as required. Thus, at least in principle, we can calculate the numerical solution to arbitrarily high accuracy by choosing a sufficiently small step size. In practice, because digital computers use finite-precision arithmetic, there will always be small (but not infinitely small) pollution effects due to rounding errors; however, these can also be bounded by performing an analysis similar to the one above where f(x_n, y_n) is replaced by its finite-precision representation.

Returning to the general one-step method (22), we consider the choice of the function Φ. Theorem 4 suggests that if the truncation error 'approaches zero' as h → 0 then the global error 'converges to zero' also (as long as |e_0| → 0 when h → 0). This observation motivates the following definition.

Definition 2 The numerical method (22) is consistent with the differential equation (1) if the truncation error defined by (23) is such that for any ε > 0 there exists a positive h(ε) for which |T_n| < ε for 0 < h < h(ε) and any pair of points (x_n, y(x_n)), (x_{n+1}, y(x_{n+1})) on any solution curve in R.

For the general one-step method (22) we have assumed that the function Φ(·,·;·) is continuous; also y' is a continuous function on [x_0, X_M]. Therefore, from (23),

  lim_{h→0} T_n = y'(x_n) − Φ(x_n, y(x_n); 0).

This implies that the one-step method (22) is consistent if and only if

  Φ(x, y; 0) = f(x, y).   (26)

Now we are ready to state a convergence theorem for the general one-step method (22).

Theorem 5 Suppose that the solution of the initial value problem (1-2) lies in R, as does its approximation generated from (22) when h ≤ h_0. Suppose also that the function Φ(·,·;·) is uniformly continuous on R × [0, h_0] and satisfies the consistency condition (26) and the Lipschitz condition

  |Φ(x, y; h) − Φ(x, z; h)| ≤ L_Φ |y − z|  on R × [0, h_0].   (27)

Then, if successive approximation sequences (y_n), generated for x_n = x_0 + nh, n = 1, 2, ..., N, are obtained from (22) with successively smaller values of h, each less than h_0, we have convergence of the numerical solution to the solution of the initial value problem in the sense that

  |y(x_n) − y_n| → 0  as h → 0, x_n → x ∈ [x_0, X_M].

Proof: Suppose that h = (X_M − x_0)/N where N is a positive integer. We shall assume that N is sufficiently large so that h ≤ h_0. Since y(x_0) = y_0 and therefore e_0 = 0, Theorem 4 implies that

  |y(x_n) − y_n| ≤ [ ( e^{L_Φ(X_M−x_0)} − 1 ) / L_Φ ] max_{0≤m≤N−1} |T_m|.   (28)

From (23), applying the Mean Value Theorem and the consistency condition (26), there exists ξ_n ∈ (x_n, x_{n+1}) such that

  T_n = [ y'(ξ_n) − y'(x_n) ] + [ Φ(x_n, y(x_n); 0) − Φ(x_n, y(x_n); h) ].

Since y' is uniformly continuous on [x_0, X_M], for any ε > 0 there exists h_1(ε) such that

  |y'(ξ_n) − y'(x_n)| ≤ (1/2)ε  for h ≤ h_1(ε), n = 0, 1, ..., N − 1.

Also, by the uniform continuity of Φ with respect to its third argument, there exists h_2(ε) such that

  |Φ(x_n, y(x_n); 0) − Φ(x_n, y(x_n); h)| ≤ (1/2)ε  for h ≤ h_2(ε), n = 0, 1, ..., N − 1.
Thus, defining h(ε) = min(h_1(ε), h_2(ε)), we have

  |T_n| ≤ ε  for h ≤ h(ε), n = 0, 1, ..., N − 1.

Inserting this into (28) we deduce that |y(x_n) − y_n| → 0 as h → 0. Since

  |y(x) − y_n| ≤ |y(x) − y(x_n)| + |y(x_n) − y_n|,

and the first term on the right also converges to zero as h → 0 by the uniform continuity of y on the interval [x_0, X_M], the proof is complete. ◊

We saw earlier that for Euler's method the absolute value of the truncation error T_n is bounded above by a constant multiple of the step size h, that is

  |T_n| ≤ K h  for 0 < h ≤ h_0,

where K is a positive constant, independent of h. However there are other one-step methods (a class of which, called Runge-Kutta methods, will be considered below) for which we can do better. More generally, in order to quantify the asymptotic rate of decay of the truncation error as the step size h converges to zero, we introduce the following definition.

Definition 3 The numerical method (22) is said to have order of accuracy p, if p is the largest positive integer such that, for any sufficiently smooth solution curve (x, y(x)) in R of the initial value problem (1-2), there exist constants K and h_0 such that

  |T_n| ≤ K h^p  for 0 < h ≤ h_0.

2.4 Runge-Kutta methods

In the sense of Definition 3, Euler's method is only first-order accurate. Runge-Kutta methods retain the one-step structure (22) but achieve higher order by evaluating f at several suitably chosen points in each step. The general R-stage Runge-Kutta method is of the form (22) with

  Φ(x, y; h) = Σ_{r=1}^{R} c_r k_r,   (29)

where

  k_1 = f(x, y),
  k_r = f( x + h a_r, y + h Σ_{s=1}^{r−1} b_{rs} k_s ),  a_r = Σ_{s=1}^{r−1} b_{rs},  r = 2, ..., R.   (30)

One-stage methods. For R = 1, c_1 = 1, we recover Euler's explicit method, the only first-order accurate one-stage method.

Two-stage methods. For R = 2,

  y_{n+1} = y_n + h ( c_1 k_1 + c_2 k_2 ),
  k_1 = f(x_n, y_n),  k_2 = f( x_n + a_2 h, y_n + b_{21} h k_1 ),

and a Taylor series expansion of the truncation error about (x_n, y(x_n)) shows that the method is second-order accurate provided that

  c_1 + c_2 = 1,  a_2 c_2 = b_{21} c_2 = 1/2.

Example Consider the one-parameter family of two-stage methods obtained from these conditions by taking c_2 = a ≠ 0, c_1 = 1 − a and a_2 = b_{21} = 1/(2a):

  y_{n+1} = y_n + h [ (1 − a) f(x_n, y_n) + a f( x_n + h/(2a), y_n + (h/(2a)) f(x_n, y_n) ) ].

Show that the method is consistent and second-order accurate for every non-zero real a, and show that, when it is applied to the initial value problem y' = −y^p, y(0) = 1, with p ≥ 1: if p = 1 then T_n(h, a) = O(h²) for all non-zero a, whereas if p ≥ 2 then there exists a non-zero real number a_0 such that T_n(h, a_0) = O(h³).

Solution: Let us define

  Φ(x, y; h) = (1 − a) f(x, y) + a f( x + h/(2a), y + (h/(2a)) f(x, y) ).

Then the numerical method can be rewritten as

  y_{n+1} = y_n + h Φ(x_n, y_n; h).

Since Φ(x, y; 0) = f(x, y), the method is consistent. By definition, the truncation error is

  T_n(h, a) = [ y(x_{n+1}) − y(x_n) ] / h − Φ(x_n, y(x_n); h).

We shall perform a Taylor expansion of T_n(h, a) to show that it can be expressed in the desired form. Writing f, f_x, f_y, etc., for the values of f and its partial derivatives at (x_n, y(x_n)),

  T_n(h, a) = y'(x_n) + (h/2) y''(x_n) + (h²/6) y'''(x_n)
       − (1 − a) f − a [ f + (h/(2a)) ( f_x + f_y f ) + (1/2)(h/(2a))² ( f_xx + 2 f_xy f + f_yy f² ) ] + O(h³)
     = (h/2) [ y''(x_n) − ( f_x + f_y y'(x_n) ) ] + (h²/6) y'''(x_n) − (h²/(8a)) [ f_xx + 2 f_xy y'(x_n) + f_yy (y'(x_n))² ] + O(h³)
     = (h²/6) y'''(x_n) − (h²/(8a)) [ y'''(x_n) − y''(x_n) f_y ] + O(h³)
     = h² [ ( 1/6 − 1/(8a) ) y'''(x_n) + ( 1/(8a) ) y''(x_n) (∂f/∂y)(x_n, y(x_n)) ] + O(h³),

as required, where we have used the identities y'' = f_x + f_y f and y''' = f_xx + 2 f_xy f + f_yy f² + f_y y''.

Now let us apply the method to y' = −y^p, with p ≥ 1. If p = 1, then y''' = −y'' = y' = −y, so

  T_n(h, a) = −(h²/6) y(x_n) + O(h³).

As y(x_n) = e^{−x_n} ≠ 0, it follows that

  T_n(h, a) = O(h²)

for all (non-zero) a. Finally, suppose that p ≥ 2. Then

  y'' = −p y^{p−1} y' = p y^{2p−1}  and  y''' = p(2p − 1) y^{2p−2} y' = −p(2p − 1) y^{3p−2},

and therefore, since ∂f/∂y = −p y^{p−1},

  T_n(h, a) = −(h²/24) [ ( 4 − 3/a ) p(2p − 1) + (3/a) p² ] y^{3p−2}(x_n) + O(h³).

Choosing a such that

  ( 4 − 3/a ) p(2p − 1) + (3/a) p² = 0,

namely

  a = a_0 = (3p − 3) / (4(2p − 1)),

gives T_n(h, a_0) = O(h³). We note in passing that for p > 1 the exact solution of the initial value problem y' = −y^p, y(0) = 1, is y(x) = [ (p − 1)x + 1 ]^{1/(1−p)}. ◊

Three-stage Runge-Kutta methods. Let us now suppose that R = 3 to illustrate the general idea. Thus, we consider the family of methods:

  y_{n+1} = y_n + h [ c_1 k_1 + c_2 k_2 + c_3 k_3 ],

where

  k_1 = f(x, y),  k_2 = f(x + h a_2, y + h b_{21} k_1),  k_3 = f(x + h a_3, y + h b_{31} k_1 + h b_{32} k_2),
  a_2 = b_{21},  a_3 = b_{31} + b_{32}.

Writing b_{21} = a_2 and b_{31} = a_3 − b_{32} in the definitions of k_2 and k_3 respectively, and expanding k_2 and k_3 into Taylor series about the point (x, y), yields:

  k_2 = f + h a_2 ( f_x + k_1 f_y ) + (1/2) h² a_2² ( f_xx + 2 k_1 f_xy + k_1² f_yy ) + O(h³)
     = f + h a_2 ( f_x + f f_y ) + (1/2) h² a_2² ( f_xx + 2 f f_xy + f² f_yy ) + O(h³)
     = f + h a_2 F_1
       + (1/2) h² a_2² F_2 + O(h³),

where

  F_1 = f_x + f f_y  and  F_2 = f_xx + 2 f f_xy + f² f_yy,

and

  k_3 = f + h { a_3 f_x + [ (a_3 − b_{32}) k_1 + b_{32} k_2 ] f_y }
       + (1/2) h² { a_3² f_xx + 2 a_3 [ (a_3 − b_{32}) k_1 + b_{32} k_2 ] f_xy + [ (a_3 − b_{32}) k_1 + b_{32} k_2 ]² f_yy } + O(h³)
     = f + h a_3 F_1 + h² ( a_2 b_{32} F_1 f_y + (1/2) a_3² F_2 ) + O(h³).

Substituting these expressions for k_2 and k_3 into (29) with R = 3 we find that

  Φ(x, y; h) = (c_1 + c_2 + c_3) f + h ( c_2 a_2 + c_3 a_3 ) F_1 + (1/2) h² [ 2 c_3 a_2 b_{32} F_1 f_y + ( c_2 a_2² + c_3 a_3² ) F_2 ] + O(h³).   (34)

We match this with the Taylor series expansion:

  [ y(x + h) − y(x) ] / h = y'(x) + (1/2) h y''(x) + (1/6) h² y'''(x) + O(h³)
     = f + (1/2) h F_1 + (1/6) h² ( F_1 f_y + F_2 ) + O(h³).

This yields:

  c_1 + c_2 + c_3 = 1,
  c_2 a_2 + c_3 a_3 = 1/2,
  c_2 a_2² + c_3 a_3² = 1/3,
  c_3 a_2 b_{32} = 1/6.

Solving this system of four equations for the six unknowns c_1, c_2, c_3, a_2, a_3, b_{32}, we obtain a two-parameter family of 3-stage Runge-Kutta methods. We shall only highlight two notable examples from this family:

(i) Heun's method corresponds to

  c_1 = 1/4,  c_2 = 0,  c_3 = 3/4,  a_2 = 1/3,  a_3 = 2/3,  b_{32} = 2/3,

yielding

  y_{n+1} = y_n + (1/4) h ( k_1 + 3 k_3 ),
  k_1 = f(x_n, y_n),
  k_2 = f( x_n + (1/3)h, y_n + (1/3)h k_1 ),
  k_3 = f( x_n + (2/3)h, y_n + (2/3)h k_2 ).

(ii) Standard third-order Runge-Kutta method. This is arrived at by selecting

  c_1 = 1/6,  c_2 = 2/3,  c_3 = 1/6,  a_2 = 1/2,  a_3 = 1,  b_{32} = 2,

yielding

  y_{n+1} = y_n + (1/6) h ( k_1 + 4 k_2 + k_3 ),
  k_1 = f(x_n, y_n),
  k_2 = f( x_n + (1/2)h, y_n + (1/2)h k_1 ),
  k_3 = f( x_n + h, y_n − h k_1 + 2h k_2 ).

Four-stage Runge-Kutta methods. For R = 4, an analogous argument leads to a two-parameter family of four-stage Runge-Kutta methods of order four. A particularly popular example from this family is:

  y_{n+1} = y_n + (1/6) h ( k_1 + 2 k_2 + 2 k_3 + k_4 ),

where

  k_1 = f(x_n, y_n),
  k_2 = f( x_n + (1/2)h, y_n + (1/2)h k_1 ),
  k_3 = f( x_n + (1/2)h, y_n + (1/2)h k_2 ),
  k_4 = f( x_n + h, y_n + h k_3 ).

Here k_2 and k_3 represent approximations to the derivative y'(·) at points on the solution curve, intermediate between (x_n, y(x_n)) and (x_{n+1}, y(x_{n+1})), and Φ(x_n, y_n; h) is a weighted average of the k_i, i = 1, ..., 4, the weights corresponding to those of Simpson's rule method (to which the fourth-order Runge-Kutta method reduces when ∂f/∂y ≡ 0).

In this section, we have constructed R-stage Runge-Kutta methods of order of accuracy O(h^R), R = 1, 2, 3, 4. It is natural to ask whether there exists an R-stage method of order R for R ≥ 5. The answer to this question is negative: in a series of papers John Butcher showed that for R = 5, 6, 7, 8, 9, the highest order that can be attained by an R-stage Runge-Kutta method is, respectively, 4, 5, 6, 6, 7, and that for R ≥ 10 the highest order is ≤ R − 2.

2.5 Absolute stability of Runge-Kutta methods

It is instructive to consider the model problem

  y' = λy,  y(0) = y_0 (≠ 0),   (35)

with λ real and negative. Trivially, the analytical solution to this initial value problem, y(x) = y_0 exp(λx), converges to 0 at an exponential rate as x → +∞. The question that we wish to investigate here is under what conditions on the step size h a Runge-Kutta method reproduces this behaviour. The understanding of this matter will provide useful information about the adequate selection of h in the numerical approximation of an initial value problem by a Runge-Kutta method over an interval [x_0, X_M] with X_M ≫ x_0. For the sake of simplicity, we shall restrict our attention to the case of R-stage methods of order of accuracy R, 1 ≤ R ≤ 4.

Let us begin with R = 1. The only explicit one-stage first-order accurate Runge-Kutta method is Euler's explicit method. Applying it to the model problem (35) yields

  y_{n+1} = (1 + h̄) y_n,  n ≥ 0,

where h̄ = λh. Thus,

  y_n = (1 + h̄)^n y_0.
Consequently, the sequence {y_n}_{n=0}^∞ will converge to 0 if and only if |1 + h̄| < 1, yielding h̄ ∈ (−2, 0); for such h̄ Euler's explicit method is said to be absolutely stable, and the interval (−2, 0) is referred to as the interval of absolute stability of the method.

Now consider R = 2, corresponding to two-stage second-order Runge-Kutta methods:

  y_{n+1} = y_n + h ( c_1 k_1 + c_2 k_2 ),

where

  k_1 = f(x_n, y_n),  k_2 = f(x_n + a_2 h, y_n + b_{21} h k_1),

with

  c_1 + c_2 = 1,  a_2 c_2 = b_{21} c_2 = 1/2.

Applying this to (35) yields

  y_{n+1} = ( 1 + h̄ + (1/2) h̄² ) y_n,  n ≥ 0,

and therefore

  y_n = ( 1 + h̄ + (1/2) h̄² )^n y_0.

Hence the method is absolutely stable if and only if

  | 1 + h̄ + (1/2) h̄² | < 1,

namely when h̄ ∈ (−2, 0).

In the case of R = 3 an analogous argument shows that

  y_{n+1} = ( 1 + h̄ + (1/2) h̄² + (1/6) h̄³ ) y_n.

Demanding that

  | 1 + h̄ + (1/2) h̄² + (1/6) h̄³ | < 1

then yields the interval of absolute stability h̄ ∈ (−2.51, 0). When R = 4, we have that

  y_{n+1} = ( 1 + h̄ + (1/2) h̄² + (1/6) h̄³ + (1/24) h̄⁴ ) y_n,

and the associated interval of absolute stability is h̄ ∈ (−2.78, 0).

For R ≥ 5, applying the Runge-Kutta method to the model problem (35) still results in a recursion of the form

  y_{n+1} = A_R(h̄) y_n,  n ≥ 0;

however, unlike the case when R = 1, 2, 3, 4, in addition to h̄ the expression A_R(h̄) also depends on the coefficients of the Runge-Kutta method; by a convenient choice of the free parameters the associated interval of absolute stability may be maximised. For further results in this direction, the reader is referred to the book of J.D. Lambert.

3 Linear multi-step methods

While Runge-Kutta methods present an improvement over Euler's method in terms of accuracy, this is achieved by investing additional computational effort; in fact, Runge-Kutta methods require more evaluations of f(·,·) than would seem necessary. For example, the fourth-order method involves four function evaluations per step. For comparison, by considering three consecutive points x_{n−1}, x_n = x_{n−1} + h, x_{n+1} = x_{n−1} + 2h, integrating the differential equation between x_{n−1} and x_{n+1}, and applying Simpson's rule to approximate the resulting integral, we obtain

  y(x_{n+1}) = y(x_{n−1}) + ∫_{x_{n−1}}^{x_{n+1}} f(x, y(x)) dx
     ≈ y(x_{n−1}) + (1/3) h [ f(x_{n−1}, y(x_{n−1})) + 4 f(x_n, y(x_n)) + f(x_{n+1}, y(x_{n+1})) ],

which leads to the method

  y_{n+1} = y_{n−1} + (1/3) h [ f(x_{n−1}, y_{n−1}) + 4 f(x_n, y_n) + f(x_{n+1}, y_{n+1}) ].   (36)

In contrast with the one-step methods considered in the previous section, where only a single value y_n was required to compute the next approximation y_{n+1}, here we need two preceding values, y_n and y_{n−1}, to be able to calculate y_{n+1}, and therefore (36) is not a one-step method.

In this section we consider a class of methods of the type (36) for the numerical solution of the initial value problem (1-2), called linear multi-step methods.

Given a sequence of equally spaced mesh points (x_n) with step size h, we consider the general linear k-step method

  Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f(x_{n+j}, y_{n+j}),   (37)

where the coefficients α_0, ..., α_k and β_0, ..., β_k are real constants. In order to avoid degenerate cases, we shall assume that α_k ≠ 0 and that α_0 and β_0 are not both equal to zero. If β_k = 0 then y_{n+k} is obtained explicitly from previous values of y_j and f(x_j, y_j), and the k-step method is then said to be explicit. On the other hand, if β_k ≠ 0 then y_{n+k} appears not only on the left-hand side but also on the right, within f(x_{n+k}, y_{n+k}); due to this implicit dependence on y_{n+k} the method is then called implicit; the sketch below illustrates both cases.
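The explicit/implicit distinction is visible in an implementation. The following is a minimal Python sketch; the helper name lmm_solve and the fixed-point treatment of the implicit case are illustrative assumptions rather than part of the notes:

    import numpy as np

    def lmm_solve(alpha, beta, f, x, y_start, fp_iters=10):
        """Apply the linear k-step method (37), with coefficients alpha[0..k]
        and beta[0..k] (alpha[k] != 0), on the mesh x[0..N], given the k
        starting values y_start.  If beta[k] != 0 (implicit method), the
        equation for y_{n+k} is solved by fixed-point iteration, which
        converges for sufficiently small h by the Lipschitz condition (3).
        """
        k = len(alpha) - 1
        h = x[1] - x[0]
        y = np.zeros(len(x))
        y[:k] = y_start
        for n in range(len(x) - k):
            # contribution of the k known values y_n, ..., y_{n+k-1}
            rhs = h * sum(beta[j] * f(x[n + j], y[n + j]) for j in range(k)) \
                - sum(alpha[j] * y[n + j] for j in range(k))
            y_new = y[n + k - 1]               # initial guess
            for _ in range(fp_iters):          # a single pass suffices if beta[k] == 0
                y_new = (rhs + h * beta[k] * f(x[n + k], y_new)) / alpha[k]
            y[n + k] = y_new
        return y

For instance, alpha = [-1, 1], beta = [1, 0] recovers the explicit Euler method, while alpha = [-1, 0, 1], beta = [1/3, 4/3, 1/3] gives the implicit method (36).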
The numerical method (37) is called linear because it involves only linear combinations of the {y_n} and the {f(x_n, y_n)}; for the sake of notational simplicity, henceforth we shall write f_n instead of f(x_n, y_n).

Example 3 We have already seen an example of a linear 2-step method in (36); here we present further examples of linear multi-step methods.

a) Euler's method is a trivial case: it is an explicit linear one-step method. The implicit Euler method

  y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})

is an implicit linear one-step method.

b) The trapezium method, given by

  y_{n+1} = y_n + (1/2) h [ f_{n+1} + f_n ],

is also an implicit linear one-step method.

c) The four-step Adams⁷-Bashforth method

  y_{n+4} = y_{n+3} + (1/24) h [ 55 f_{n+3} − 59 f_{n+2} + 37 f_{n+1} − 9 f_n ]

is an example of an explicit linear four-step method; the three-step Adams-Moulton method

  y_{n+3} = y_{n+2} + (1/24) h [ 9 f_{n+3} + 19 f_{n+2} − 5 f_{n+1} + f_n ]

is an implicit linear three-step method.

⁷ J. C. Adams (1819-1892)

The construction of general classes of linear multi-step methods, such as the (explicit) Adams-Bashforth family and the (implicit) Adams-Moulton family, will be discussed in the next section.

3.1 Construction of linear multi-step methods

Let us suppose that u_n, n = 0, 1, ..., is a sequence of real numbers. We introduce the shift operator E, the forward difference operator Δ_+ and the backward difference operator Δ_− by

  E : u_n ↦ u_{n+1},  Δ_+ : u_n ↦ u_{n+1} − u_n,  Δ_− : u_n ↦ u_n − u_{n−1}.

Further, we note that E^{−1} exists and is given by E^{−1} : u_{n+1} ↦ u_n. Since

  Δ_+ = E − I = E Δ_−,  Δ_− = I − E^{−1}  and  E = (I − Δ_−)^{−1},

where I signifies the identity operator, it follows that, for any positive integer k,

  Δ_+^k u_n = (E − I)^k u_n = Σ_{j=0}^{k} (−1)^j ( k over j ) u_{n+k−j}

and

  Δ_−^k u_n = (I − E^{−1})^k u_n = Σ_{j=0}^{k} (−1)^j ( k over j ) u_{n−j}.

Now suppose that u is a real-valued function defined on R whose derivative exists and is integrable on [x_0, x_n] for each n ≥ 0, and let u_n denote u(x_n), where x_n = x_0 + nh, n = 0, 1, ..., are equally spaced points on the real line. Letting D denote d/dx, by applying a Taylor series expansion we find that

  (E^s u)_n = u(x_n + sh) = u_n + sh (Du)_n + (1/2!)(sh)² (D²u)_n + ... = Σ_{k=0}^{∞} (1/k!) ((shD)^k u)_n = (e^{shD} u)_n,

and hence E^s = e^{shD}. Thus, formally,

  hD = ln E = −ln(I − Δ_−),

and therefore, again by Taylor series expansion,

  h u'(x_n) = ( Δ_− + (1/2) Δ_−² + (1/3) Δ_−³ + ... ) u_n.

Now letting u(x) = y(x), where y is the solution of the initial value problem (1-2), and noting that u'(x) = y'(x) = f(x, y(x)), we find that

  h f(x_n, y(x_n)) = ( Δ_− + (1/2) Δ_−² + (1/3) Δ_−³ + ... ) y(x_n).

On successive truncation of the infinite series on the right, we find that

  y(x_n) − y(x_{n−1}) ≈ h f(x_n, y(x_n)),
  (3/2) y(x_n) − 2 y(x_{n−1}) + (1/2) y(x_{n−2}) ≈ h f(x_n, y(x_n)),
  (11/6) y(x_n) − 3 y(x_{n−1}) + (3/2) y(x_{n−2}) − (1/3) y(x_{n−3}) ≈ h f(x_n, y(x_n)),

and so on. These approximate equalities give rise to a class of implicit linear multi-step methods called backward differentiation formulae, the simplest of which is Euler's implicit method.

Similarly,

  E^{−1}(hD) = hD E^{−1} = (I − Δ_−)(−ln(I − Δ_−)) = −(I − Δ_−) ln(I − Δ_−),

and therefore

  h u'(x_n) = ( Δ_− − (1/2) Δ_−² − (1/6) Δ_−³ − ... ) u_{n+1}.

Letting, again, u(x) = y(x) where y is the solution of the initial value problem (1-2), and noting that u'(x) = y'(x) = f(x, y(x)), successive truncation of the infinite series on the right results in

  y(x_{n+1}) − y(x_n) ≈ h f(x_n, y(x_n)),
  (1/2) y(x_{n+1}) − (1/2) y(x_{n−1}) ≈ h f(x_n, y(x_n)),
  (1/3) y(x_{n+1}) + (1/2) y(x_n) − y(x_{n−1}) + (1/6) y(x_{n−2}) ≈ h f(x_n, y(x_n)),

and so on. The first of these yields Euler's explicit method, the second the so-called explicit mid-point rule, and so on; the truncations in both families can be generated symbolically, as shown below.
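Both truncation processes are easy to carry out with a computer algebra system. A small sketch, assuming sympy (illustrative, not part of the notes); it reproduces the three-step backward differentiation formula and the third explicit formula above:

    import sympy as sp

    w, s = sp.symbols('w s')   # w stands for Delta_-, s for E^{-1}, so w = 1 - s
    k = 3
    # hD = -log(1 - Delta_-): truncating after k terms gives the k-step BDF
    bdf = sum(w**j / sp.Integer(j) for j in range(1, k + 1))
    print(sp.expand(bdf.subs(w, 1 - s)))
    #   11/6 - 3*s + 3*s**2/2 - s**3/3,
    #   i.e. (11/6)y_n - 3y_{n-1} + (3/2)y_{n-2} - (1/3)y_{n-3} ~ h f_n
    # hD*E^{-1} = -(1 - Delta_-)*log(1 - Delta_-) = Delta_- - Delta_-^2/2 - Delta_-^3/6 - ...
    expl = sp.series(-(1 - w) * sp.log(1 - w), w, 0, k + 1).removeO()
    print(sp.expand(expl.subs(w, 1 - s)))
    #   s**3/6 - s**2 + s/2 + 1/3, acting on u_{n+1}:
    #   (1/3)u_{n+1} + (1/2)u_n - u_{n-1} + (1/6)u_{n-2}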
Next we derive additional identities which will allow us to construct further classes of linear multi-step methods. Let us define

  D^{−1} u(x_n) = u(x_0) + ∫_{x_0}^{x_n} u(ξ) dξ,

and observe that

  (E − I) D^{−1} u(x_n) = ∫_{x_n}^{x_{n+1}} u(ξ) dξ.

Now,

  (E − I) D^{−1} = Δ_+ D^{−1} = E Δ_− D^{−1} = h E Δ_− (hD)^{−1} = −h E Δ_− [ ln(I − Δ_−) ]^{−1}.   (38)

Furthermore,

  (E − I) D^{−1} = E Δ_− D^{−1} = Δ_− E D^{−1} = h Δ_− (hD E^{−1})^{−1} = −h Δ_− [ (I − Δ_−) ln(I − Δ_−) ]^{−1}.   (39)

Letting, again, u(x) = y(x) where y is the solution of the initial value problem (1-2), noting that u'(x) = y'(x) = f(x, y(x)) and using (38) and (39) we deduce that

  y(x_{n+1}) − y(x_n) = ∫_{x_n}^{x_{n+1}} y'(ξ) dξ = (E − I) D^{−1} y'(x_n) = (E − I) D^{−1} f(x_n, y(x_n))
     = −h E Δ_− [ ln(I − Δ_−) ]^{−1} f(x_n, y(x_n))
     = −h Δ_− [ (I − Δ_−) ln(I − Δ_−) ]^{−1} f(x_n, y(x_n)).   (40)

On expanding ln(I − Δ_−) into a Taylor series on the right-hand side of (40) we find that

  y(x_{n+1}) − y(x_n) ≈ h [ I − (1/2) Δ_− − (1/12) Δ_−² − (1/24) Δ_−³ − (19/720) Δ_−⁴ − ... ] f(x_{n+1}, y(x_{n+1}))   (41)

and

  y(x_{n+1}) − y(x_n) ≈ h [ I + (1/2) Δ_− + (5/12) Δ_−² + (3/8) Δ_−³ + (251/720) Δ_−⁴ + ... ] f(x_n, y(x_n)).   (42)

Successive truncation of (41) yields the family of Adams-Moulton methods, while similar successive truncation of (42) gives rise to the family of Adams-Bashforth methods.

Next, we turn our attention to the analysis of linear multi-step methods and introduce the concepts of stability, consistency and convergence.

3.2 Zero-stability

As is clear from (37) we need k starting values, y_0, ..., y_{k−1}, before we can apply a linear k-step method to the initial value problem (1-2): of these, y_0 is given by the initial condition (2), but the others, y_1, ..., y_{k−1}, have to be computed by other means: say, by using a suitable Runge-Kutta method. At any rate, the starting values will contain numerical errors and it is important to know how these will affect further approximations y_n, n ≥ k, which are calculated by means of (37). Thus, we wish to consider the 'stability' of the numerical method with respect to 'small perturbations' in the starting conditions.

Definition 4 A linear k-step method for the ordinary differential equation y' = f(x, y) is said to be zero-stable if there exists a constant K such that, for any two sequences (y_n) and (ỹ_n) which have been generated by the same formulae but different initial data y_0, y_1, ..., y_{k−1} and ỹ_0, ỹ_1, ..., ỹ_{k−1}, respectively, we have

  |y_n − ỹ_n| ≤ K max{ |y_0 − ỹ_0|, |y_1 − ỹ_1|, ..., |y_{k−1} − ỹ_{k−1}| }   (43)

for x_n ≤ X_M, and as h tends to 0.

We shall prove later on that whether or not a method is zero-stable can be determined by merely considering its behaviour when applied to the trivial differential equation y' = 0, corresponding to (1) with f(x, y) ≡ 0; it is for this reason that the kind of stability expressed in Definition 4 is called zero stability. While Definition 4 is expressive in the sense that it conforms with the intuitive notion of stability whereby "small perturbations at input give rise to small perturbations at output", it would be a very tedious exercise to verify the zero-stability of a linear multi-step method using Definition 4 only; thus we shall next formulate an algebraic equivalent of zero-stability, known as the root condition, which will simplify this task. Before doing so we introduce some notation.

Given the linear k-step method (37) we consider its first and second characteristic polynomials, respectively

  ρ(z) = Σ_{j=0}^{k} α_j z^j,  σ(z) = Σ_{j=0}^{k} β_j z^j,

where, as before, we assume that α_k ≠ 0 and α_0² + β_0² ≠ 0. Now we are ready to state the main result of this section.
Theorem 6 A linear multi-step method is zero-stable for any ordinary differential equation of the form (1) where f satisfies the Lipschitz condition (3), if and only if all zeros of its first characteristic polynomial lie inside the closed unit disc, with any which lie on the unit circle being simple.

The algebraic stability condition contained in this theorem, namely that the roots of the first characteristic polynomial lie in the closed unit disc and those on the unit circle are simple, is often called the root condition.

Proof: Necessity. Consider the linear k-step method, applied to y' = 0:

  α_k y_{n+k} + α_{k−1} y_{n+k−1} + ... + α_1 y_{n+1} + α_0 y_n = 0.   (44)

The general solution of this k-th order linear difference equation has the form

  y_n = Σ_s p_s(n) z_s^n,   (45)

where z_s is a zero of the first characteristic polynomial ρ(z) and the polynomial p_s(·) has degree one less than the multiplicity of the zero. Clearly, if |z_s| > 1 then there are starting values for which the corresponding solutions grow like |z_s|^n, and if |z_s| = 1 and its multiplicity is m_s > 1 then there are solutions growing like n^{m_s−1}. In either case there are solutions that grow unbounded as n → ∞, i.e. as h → 0 with nh fixed. Considering starting data y_0, y_1, ..., y_{k−1} which give rise to such an unbounded solution (y_n), and starting data ỹ_0 = ỹ_1 = ... = ỹ_{k−1} = 0 for which the corresponding solution of (44) is (ỹ_n) with ỹ_n = 0 for all n, we see that (43) cannot hold. To summarise, if the root condition is violated then the method is not zero-stable.

Sufficiency. The proof that the root condition is sufficient for zero-stability is long and technical, and will be omitted here. For details, see, for example, K.W. Morton, Numerical Solution of Ordinary Differential Equations, Oxford University Computing Laboratory, 1987, or P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962. ◊

Example 4 We shall consider the methods from Example 3.

a) The explicit and implicit Euler methods have first characteristic polynomial ρ(z) = z − 1, with simple root z = 1, so both methods are zero-stable. The same is true of the trapezium method.

b) The Adams-Bashforth and Adams-Moulton methods considered in Example 3 have first characteristic polynomials ρ(z) = z³(z − 1) and ρ(z) = z²(z − 1), respectively; in either case all roots lie in the closed unit disc and the root z = 1 on the unit circle is simple, so both methods are zero-stable.

c) The three-step (sixth order accurate) linear multi-step method

  11 y_{n+3} + 27 y_{n+2} − 27 y_{n+1} − 11 y_n = 3h [ f_{n+3} + 9 f_{n+2} + 9 f_{n+1} + f_n ]

is not zero-stable. Indeed, the associated first characteristic polynomial

  ρ(z) = 11z³ + 27z² − 27z − 11

has roots at z_1 = 1, z_2 ≈ −0.3189, z_3 ≈ −3.1356, so |z_3| > 1.

3.3 Consistency

In this section we consider the accuracy of the linear k-step method (37). For this purpose, as in the case of one-step methods, we introduce the notion of truncation error. Thus, suppose that y(x) is a solution of the ordinary differential equation (1). Then the truncation error of (37) is defined as follows:

  T_n = [ Σ_{j=0}^{k} α_j y(x_{n+j}) − h Σ_{j=0}^{k} β_j y'(x_{n+j}) ] / ( h Σ_{j=0}^{k} β_j ).   (46)

Of course, the definition requires implicitly that σ(1) = Σ_{j=0}^{k} β_j ≠ 0. Again, as in the case of one-step methods, the truncation error can be thought of as the residual that is obtained by inserting the solution of the differential equation into the formula (37) and scaling this residual appropriately (in this case dividing through by h Σ_{j=0}^{k} β_j) so that T_n resembles y' − f(x, y(x)).
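Before formalising consistency, note that the scaled residual (46) is directly computable. A small numerical illustration in Python (illustrative, not part of the notes): for the trapezium method (α = [−1, 1], β = [1/2, 1/2], σ(1) = 1) applied to y' = y, whose solution is y(x) = e^x, the truncation error behaves like O(h²):

    import numpy as np

    xn = 0.3
    for h in (0.1, 0.05, 0.025):
        # numerator of (46): sum_j alpha_j y(x_{n+j}) - h sum_j beta_j y'(x_{n+j})
        num = (np.exp(xn + h) - np.exp(xn)) - h * 0.5 * (np.exp(xn) + np.exp(xn + h))
        print(h, num / h)   # T_n ~ -exp(xn) * h^2 / 12: shrinks by ~4 when h is halved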
Definition 5 The numerical scheme (37) is said to be consistent with the differential equation (1) if the truncation error defined by (46) is such that for any ε > 0 there exists h(ε) for which

  |T_n| < ε  for 0 < h < h(ε)

and any (k + 1) points (x_n, y(x_n)), ..., (x_{n+k}, y(x_{n+k})) on any solution curve in R of the initial value problem (1-2).

Now let us expand y(x_{n+j}) and y'(x_{n+j}) into Taylor series about the point x_n and substitute these expansions into the numerator of (46) to obtain

  T_n = [ 1 / ( h σ(1) ) ] [ C_0 y(x_n) + C_1 h y'(x_n) + C_2 h² y''(x_n) + ... + C_q h^q y^{(q)}(x_n) + ... ],   (47)

where

  C_0 = Σ_{j=0}^{k} α_j,
  C_1 = Σ_{j=0}^{k} ( j α_j − β_j ),
  C_q = Σ_{j=0}^{k} ( (j^q / q!) α_j − (j^{q−1} / (q−1)!) β_j ),  q = 2, 3, ....

For consistency we need that T_n → 0 as h → 0, and this requires that C_0 = 0 and C_1 = 0; in terms of the characteristic polynomials this consistency requirement can be restated in compact form as

  ρ(1) = 0  and  ρ'(1) = σ(1) ≠ 0.

Let us observe that, according to this condition, if a linear multi-step method is consistent then it has a simple root on the unit circle at z = 1; thus the root condition is not violated by this zero.

Definition 6 The numerical method (37) is said to have order of accuracy p if p is the largest positive integer such that, for any sufficiently smooth solution curve in R of the initial value problem (1-2), there exist constants K and h_0 such that

  |T_n| ≤ K h^p  for 0 < h ≤ h_0.

By virtue of (47), the method is of order of accuracy p if and only if C_0 = C_1 = ... = C_p = 0 and C_{p+1} ≠ 0; the number C_{p+1} is then called the error constant of the method.

3.4 Convergence

We can now define what we mean by the convergence of a linear multi-step method.

Definition 7 The linear k-step method (37) is said to be convergent if, for all initial value problems (1-2) subject to the hypotheses of Picard's theorem, we have that

  lim_{h→0, nh=x−x_0} y_n = y(x)

holds for all x ∈ [x_0, X_M] and for all solutions {y_n} of the difference equation (37) with consistent starting conditions, i.e. with starting conditions y_s = η_s(h), s = 0, 1, ..., k − 1, for which lim_{h→0} η_s(h) = y_0.

Note that the starting values are allowed to depend on h and are only required to converge to y_0 as h → 0. This assumption is made because in practice exact starting values are usually not available and have to be computed numerically. In the remainder of this section we shall investigate the interplay between zero-stability, consistency and convergence; the section culminates in Dahlquist's Equivalence Theorem which, under some technical assumptions, states that for a consistent linear multi-step method zero-stability is necessary and sufficient for convergence.

3.4.1 Necessary conditions for convergence

In this section we show that both zero-stability and consistency are necessary for convergence.

Theorem 7 A necessary condition for the convergence of the linear multi-step method (37) is that it be zero-stable.

Proof: Let us suppose that the linear multi-step method (37) is convergent; we wish to show that it is then zero-stable. We consider the initial value problem y' = 0, y(0) = 0, on the interval [0, X_M], X_M > 0, whose solution is, trivially, y(x) ≡ 0. Applying (37) to this problem yields the difference equation

  α_k y_{n+k} + α_{k−1} y_{n+k−1} + ... + α_0 y_n = 0.   (52)

Since the method is assumed to be convergent, for any x > 0, we have that

  lim_{n→∞, nh=x} y_n = 0,   (53)

for all solutions of (52) satisfying y_s = η_s(h), s = 0, ..., k − 1, where

  lim_{h→0} η_s(h) = 0,  s = 0, 1, ..., k − 1.   (54)

Let z = r e^{iφ} be a root of the first characteristic polynomial ρ(z); r > 0, 0 ≤ φ < 2π. It is an easy matter to verify then that the numbers

  y_n = h r^n cos(nφ)

define a solution to (52) satisfying (54). If φ = 0 or φ = π, then y_n = ±h r^n = ±(x/n) r^n, and (53) immediately implies that r ≤ 1. If φ ≠ 0 and φ ≠ π, then

  y_n² − y_{n+1} y_{n−1} = h² r^{2n} sin² φ.

Since the left-hand side of this identity converges to 0 as h → 0, n → ∞, nh = x, the same must be true of the right-hand side; therefore,

  lim_{n→∞} (x/n)² r^{2n} sin² φ = 0.
Since, by (53), limn^oo zn = 0, it follows that the left-hand side of (57) converges to 0 as n —> 00. But then the same must be true of the right-hand side of (57); however, the right-hand side of (57) of cannot converge to 0 as n —> 00, since \z\ = 1. Thus, again, we have reached a contradiction. To summarise, we have proved that all roots of the first characteristic polynomial p{z) of the linear multi-step method (37) lie in the unit disc \z\ < 1, and those which belong to the unit circle \z\ = 1 are simple. By virtue of Theorem 6 the linear multi-step method is zero-stable. o Theorem 8 A necessary condition for the convergence of the linear multi-step method (37) is that it be consistent. proof: Let us suppose that the linear multi-step method (37) is convergent; we wish to show that it is then consistent. Let us first show that Cq = 0. We consider the initial value problem y' = 0, y(0) = 1, on the interval [0,Xm], Xm > 0, whose solution is, trivially, y{x) = 1. Applying (37) to this problem yields the difference equation otkyn+k + Uk-Wn+k-i + • • • + a0yn = 0 . (58) We supply "exact" starting values for the numerical method; namely, we choose ys = 1, s = 0,..., k — 1. Given that by hypothesis the method is convergent, we deduce that lim yn = l. (59) nh=x 31 Since in the present case yn is independent of the choice of h, (59) is equivalent to saying that lim yn = 1 . (60) Passing to the limit n —> oo in (58), we deduce that ak + ak-i + • • • + a0 = 0 . (61) Recalling the definition of Co, (61) is equivalent to Co = 0 (i.e. p(l) = 0). In order to show that C\ = 0, we now consider the initial value problem y' = 1, y(0) = 0, on the interval [0, Jm], Xm > 0, whose solution is y{x) = x. The difference equation (37) now becomes akyn+k + "fc-12/n+fc-i + • • • + a0yn = h(/3k + /3fc_i + ... + /30) , (62) where Xm — %o = Xm — 0 = Nh and 1 < n < N — k. For a convergent method every solution of (62) satisfying lim r)8(h) = 0 , s = 0,1,... k - 1 , (63) where ys = r]s(h), s = 0,1,... ,k — 1, must also satisfy lim yn = x . (64) n/i=a: Since according to the previous theorem zero-stability is necessary for convergence, we may take it for granted that the first characteristic polynomial p{z) of the method does not have a multiple root on the unit circle \z\ = 1; therefore = kak + ... + 2a2 + ax ^ 0 . Let the sequence {yn}n=o ^e defined by yn = Knh, where K=/fc + ---+/32+/31 ; (65) this sequence clearly satisfies (63) and is the solution of (62). Furthermore, (64) implies that x = y{x) = lim yn = lim Knh = Kx , nh=x nh=x and therefore K = 1. Hence, from (65), Ci = (fcafc + ... + 2a2 + ai) - (f3k + ... + f32 + ft) = 0 ; equivalently, p/(l)=a(l). o 32 3.4.2 Sufficient conditions for convergence We begin by establishing some preliminary results. Lemma 1 Suppose that all roots of the polynomial p(z) = akzk + ... + ct\z + lie in the closed unit disk \z\ < 1 and those which lie on the unit circle \z\ = 1 are simple. Assume further that the numbers 7^, I = 0,1, 2,..., are defined by 1 2 70 + 71^ + 72^ +••• • afc + ... + ct\zk 1 + aQZk Then, T = sup;>0 \^fi\ < 00 . proof: Let us define p(z) = zkp(l/z) and note that, by virtue of our assumptions about the roots of p{z), the function 1/p{z) is holomorphic in the open unit disc \z\ < 1. As the roots z\, Z2, ■ ■ ■, zm of p{z) on \z\ = 1 are simple, the same is true of the poles of 1/p{z), and there exist constants A\, ..., Am such that the function ' y ' zx zm is holomorphic for \z\ < 1 and |/(^)| < M for all \z\ < 1. 
Thus by Cauchy's estimate8 the coefficients of the Taylor expansion of / at z = 0 also form a bounded sequence. As A'i °° l l t~ = Ai y z^z , i = 1,... ,m , 7 — — — Zi 1=0 and \zi\ < 1, we deduce from (66) that the coefficients in the Taylor series expansion of 1 /p{z) form a bounded sequence, which completes the proof. o Now we shall apply Lemma 1 to estimate the solution of the linear difference equation + Pk ) + A (67) The result is stated in the next Lemma. Lemma 2 Suppose that all roots of the polynomial p(z) = a^zk + ... + oi\z + lie in the closed unit disk \z\ < 1 and those which lie on the unit circle \z\ = 1 are simple. Let B* and A denote nonnegative constants and (3 a positive constant such that \Pk,n\ + \Pk-l,n\ + • • • + |A),n| < B* , \Pk,n\ Q (68) we have that 1. Consequently, K* > TAEk > Ek > E. Now, letting n = 1 in (69), |ei| < IT + 7iL*|e0| < IT + 7iL*E < K*(l + TiL*) . Repeating this argument, we find that \em\ < K*(l + hL*)m , m = 0,...,fc-l. Now suppose that this inequality has already been shown to hold for m = 0,1,... ,n — 1, where n > k; we shall prove that it then also holds for m = n, which will complete the induction. Indeed, we have from (69) that \en\ < K* + hL*K*{ -= K*{l + hL*)n . (70) iilj Further, as 1 + hL* < ehL* we have from (70) that |e„| < K*ehL*n , n = 0,l,...,N. (71) That completes the proof of the lemma. We remark that the implication (69) (71) is usually referred to as the Discrete Gronwall Lemma. o Using Lemma 2 we can now show that zero-stability and consistency, which have been shown to be necessary are also sufficient conditions for convergence. Theorem 9 For a linear multi-step method that is consistent with the ordinary differential equation (1) where f is assumed to satisfy a Lipschitz condition, and starting with consistent starting conditions, zero-stability is sufficient for convergence. PROOF: Let us define 8 = 8(h) = max \i]s(h) — y(a + sh)\ ; 0 0, the function X(e) = max \y'(x*) - y'(x)\ . x* — x i 7^ 0 0, em = 0. By virtue of (73), we then have that otkem+k + • • • + aoeo = h {(3kgm+kem+k + ... + (3ogmem) + 6Kx{kh)h . As / is assumed to satisfy the Lipschitz condition (3) we have that \gm\ < L, m = 0,1,.... On applying Lemma 2 with E = 5{h), A = Kx{kh)h, N = {Xm — xo)/h, B* = BL, where B = |/30| + + ... + \/3k\, we find that |e„| < T* [AkS{h) + {XM - x0)KX{kh)] exp[{xn - x0)LT*B] , (74) 36 where A = \a0\ + |ai| + ... + \ak r 1 - h\ak xpk\L Now, y' is a continuous function on the closed interval [io,^m] ; therefore it is uniformly continuous on [io^m]- Thus, x(kh) —> 0 as h —> 0; also, by virtue of the assumed consistency of the starting values, 8{h) —> 0 as h —> 0. Passing to the limit h —> 0 in (74), we deduce that x—xo=nh so the method is convergent. o On combining Theorems 7, 8 and 9, we arrive at the following important result. Theorem 10 (Dahlquist) For a linear multi-step method that is consistent with the ordinary differential equation (1) where f is assumed to satisfy a Lipschitz condition, and starting with consistent initial data, zero-stability is necessary and sufficient for convergence. Moreover if the solution y{x) has continuous derivative of order (p + 1) and truncation error 0{hp), then the global error en = y{xn) — yn is also 0{hp). By virtue of Dahlquist's theorem, if a linear multi-step method is not zero-stable its global error cannot be made arbitrarily small by taking the mesh size h sufficiently small for any sufficiently accurate initial data. 
In fact, if the root condition is violated, then there exists a solution to the linear multi-step method which grows by an arbitrarily large factor in a fixed interval of x, however accurate the starting conditions are. This result highlights the importance of the concept of zero-stability and indicates its relevance in practical computations.

3.5 Maximum order of a zero-stable linear multi-step method

Let us suppose that we have already chosen the coefficients α_j, j = 0, …, k, of the k-step method (37). The question we shall be concerned with in this section is how to choose the coefficients β_j, j = 0, …, k, so that the order of the resulting method (37) is as high as possible. In view of Theorem 10 we shall only be interested in consistent methods, so it is natural to assume that the first and second characteristic polynomials ρ(z) and σ(z) associated with (37) satisfy ρ(1) = 0 and ρ′(1) − σ(1) = 0. A key tool in the analysis is the function

φ(z) = ρ(z)/log z − σ(z) .  (75)

Lemma 3 The linear multi-step method (37) is of order of accuracy p if and only if the function φ(z) defined by (75) has a zero of multiplicity p at z = 1.

Proof: Let us suppose that the k-step method (37) for the numerical approximation of the initial value problem (1-2) is of order p. Then, for any sufficiently smooth function f(x, y), the resulting truncation error has the form

T_n = (C_{p+1}/σ(1)) h^p y^{(p+1)}(x_n) + O(h^{p+1})

as h → 0, C_{p+1} ≠ 0, x_n = x₀ + nh. In particular, for the initial value problem

y′ = y , y(0) = 1 ,

we get

T_n = (e^{nh}/(hσ(1)))[ρ(e^h) − hσ(e^h)] = (C_{p+1}/σ(1)) h^p + O(h^{p+1})  (76)

as h → 0, C_{p+1} ≠ 0. Thus, the function

f(h) = (1/h)[ρ(e^h) − hσ(e^h)]

is holomorphic in a neighbourhood of h = 0 and has a zero of order p at h = 0. The function z = e^h is a bijective mapping of a neighbourhood of h = 0 onto a neighbourhood of z = 1, and f(h) = φ(e^h). Therefore φ(z) is holomorphic in a neighbourhood of z = 1 and has a zero of multiplicity p at z = 1.

Conversely, suppose that φ(z) has a zero of multiplicity p at z = 1. Then f(h) = φ(e^h) is a holomorphic function in the vicinity of h = 0 and has a zero of multiplicity p at h = 0. Therefore,

g(h) = ρ(e^h) − hσ(e^h) = Σ_{j=0}^k (α_j − hβ_j)e^{jh}

has a zero of multiplicity (p+1) at h = 0, implying that

g(0) = Σ_{j=0}^k α_j = C₀ = 0 .

Now, by successive differentiation of g with respect to h,

g′(0) = 0 = Σ_{j=0}^k (jα_j − β_j) = C₁ ,
g″(0) = 0 = Σ_{j=0}^k (j²α_j − 2jβ_j) = 2! C₂ ,
g‴(0) = 0 = Σ_{j=0}^k (j³α_j − 3j²β_j) = 3! C₃ ,
⋮
g^{(p)}(0) = 0 = Σ_{j=0}^k (j^pα_j − pj^{p−1}β_j) = p! C_p ,

while g^{(p+1)}(0) ≠ 0. We deduce that C₀ = C₁ = C₂ = ⋯ = C_p = 0; since g^{(p+1)}(0) ≠ 0, we have that C_{p+1} ≠ 0. Consequently (37) is of order of accuracy p. ⋄

We shall use this lemma in the next theorem to supply a lower bound for the maximum order of a linear multistep method with prescribed first characteristic polynomial ρ(z).

Theorem 11 Suppose that ρ(z) is a polynomial of degree k such that ρ(1) = 0 and ρ′(1) ≠ 0, and let k̂ be an integer such that 0 ≤ k̂ ≤ k. Then there exists a unique polynomial σ(z) of degree ≤ k̂ such that ρ′(1) − σ(1) = 0 and the order of the linear multi-step method associated with ρ(z) and σ(z) is ≥ k̂ + 1.

Proof: Since the function ρ(z)/log z is holomorphic in a neighbourhood of z = 1, it can be expanded into a convergent Taylor series:

ρ(z)/log z = c₀ + c₁(z−1) + c₂(z−1)² + ⋯ .

On multiplying both sides by log z and differentiating, we deduce that c₀ = ρ′(1) (≠ 0). Let us define

σ(z) = c₀ + c₁(z−1) + ⋯ + c_k̂(z−1)^k̂ .

Clearly σ(1) = c₀ = ρ′(1) (≠ 0). With this definition,

φ(z) = ρ(z)/log z − σ(z) = c_{k̂+1}(z−1)^{k̂+1} + ⋯ ,

and therefore φ(z) has a zero at z = 1 of multiplicity not less than k̂ + 1. By Lemma 3 the linear k-step method associated with ρ(z) and σ(z) is of order ≥ k̂ + 1. The uniqueness of σ(z) follows from the uniqueness of the Taylor series expansion of ρ(z)/log z about the point z = 1. ⋄
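The construction in the proof of Theorem 11 is entirely mechanical and can be carried out with a computer algebra system. The following sketch, using sympy (my choice of tool; the variable names are illustrative), reproduces the expansion used in Example 5 below:

import sympy as sp

# Construction of Theorem 11: expand rho(z)/log(z) in powers of w = z - 1
# and truncate at degree k-hat to obtain the unique sigma of maximal order.
# Illustrated with rho(z) = (z - 1)(z - lam), as in Example 5.
z, lam, w = sp.symbols('z lam w')
rho = (z - 1) * (z - lam)
khat = 2
ser = sp.series(rho.subs(z, 1 + w) / sp.log(1 + w), w, 0, khat + 2).removeO()
sigma = sum(ser.coeff(w, j) * (z - 1)**j for j in range(khat + 1))
print(sp.expand(sigma))
# expected: (5+lam)/12 * z**2 + (2-2*lam)/3 * z - (1+5*lam)/12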
We note in connection with this theorem that, for most methods of practical interest, either k̂ = k − 1, resulting in an explicit method, or k̂ = k, corresponding to an implicit method. In the next example we shall encounter the latter situation.

Example 5 Consider a linear two-step method with ρ(z) = (z−1)(z−λ). The method is zero-stable as long as λ ∈ [−1, 1). Consider the Taylor series expansion of the function ρ(z)/log z about the point z = 1:

ρ(z)/log z = (z−1)(1−λ+(z−1)) / log(1+(z−1))
 = [1−λ+(z−1)] × (z−1)/log(1+(z−1))
 = [1−λ+(z−1)] × [1 + (z−1)/2 − (z−1)²/12 + (z−1)³/24 − O((z−1)⁴)]
 = 1−λ + ((3−λ)/2)(z−1) + ((5+λ)/12)(z−1)² − ((1+λ)/24)(z−1)³ + O((z−1)⁴) .

A two-step method of maximum order is obtained by selecting

σ(z) = 1−λ + ((3−λ)/2)(z−1) + ((5+λ)/12)(z−1)² = −(1+5λ)/12 + ((2−2λ)/3)z + ((5+λ)/12)z² .

If λ ≠ −1, the resulting method is of third order, with error constant −(1+λ)/24, whereas if λ = −1 the method is of fourth order. In the former case the method is

y_{n+2} − (1+λ)y_{n+1} + λy_n = h[ ((5+λ)/12)f_{n+2} + ((2−2λ)/3)f_{n+1} − ((1+5λ)/12)f_n ] ,

with λ a parameter contained in the interval (−1, 1). In the latter case, the method has the form

y_{n+2} − y_n = (h/3)(f_{n+2} + 4f_{n+1} + f_n) ,

and is referred to as Simpson's method.

By inspection, the linear k-step method (37) has 2k+2 coefficients: α_j, β_j, j = 0, …, k, of which α_k is taken to be 1 by normalisation. This leaves us with 2k+1 free parameters if the method is implicit and 2k free parameters if the method is explicit (given that in the latter case β_k is fixed to have value 0). According to (47), if the method is required to have order p, then p+1 linear relationships C₀ = 0, …, C_p = 0 involving α_j, β_j, j = 0, …, k, must be satisfied. Thus, in the case of an implicit method, we can impose the p+1 = 2k+1 linear constraints C₀ = 0, …, C_{2k} = 0 to determine the unknown constants, yielding a method of order p = 2k. Similarly, in the case of an explicit method, the highest order we can expect is p = 2k−1.

Unfortunately, there is no guarantee that such methods will be zero-stable. Indeed, in a paper published in 1956, Dahlquist proved that there is no consistent, zero-stable k-step method which is of order > (k+2). Therefore the maximum orders 2k and 2k−1 cannot, in general, be attained without violating the condition of zero-stability. We formalise these facts in the next theorem.

Theorem 12 There is no zero-stable linear k-step method whose order exceeds k+1 if k is odd or k+2 if k is even.

Proof: Consider a linear k-step method (37) with associated first and second characteristic polynomials ρ and σ. Further, consider the transformation

ζ ∈ ℂ ↦ z = (ζ−1)/(ζ+1) ∈ ℂ ,

which maps the open unit disc |ζ| < 1 of the ζ-plane onto the open half-plane Re z < 0 of the z-plane; the circle |ζ| = 1 is mapped onto the imaginary axis Re z = 0, the point ζ = 1 onto z = 0, and the point ζ = −1 to z = ∞. It is clear that the functions r and s defined by

r(z) = ((1−z)/2)^k ρ((1+z)/(1−z)) , s(z) = ((1−z)/2)^k σ((1+z)/(1−z))

are in fact polynomials, with deg(r) ≤ k and deg(s) ≤ k. If ρ(ζ) has a root of multiplicity μ, 0 ≤ μ ≤ k, at ζ = ζ₀ ≠ −1, then r(z) has a root of the same multiplicity at z = (ζ₀−1)/(ζ₀+1); if ρ(ζ) has a root of multiplicity μ ≥ 1, 0 ≤ μ ≤ k, at ζ = −1, then r(z) is of degree k − μ. Since, by assumption, the method is zero-stable, ζ = 1 is a simple root of ρ(ζ); consequently, z = 0 is a simple root of r(z). Thus,

r(z) = a₁z + a₂z² + ⋯ + a_kz^k , a₁ ≠ 0 , a_j ∈ ℝ .

It can be assumed, without loss of generality, that a₁ > 0.
Since, by zero-stability, all roots of ρ(ζ) are contained in the closed unit disc, we deduce that all roots of r(z) have real parts ≤ 0. Therefore, all coefficients a_j, j = 1, …, k, of r(z) are nonnegative. Now let us consider the function

q(z) = ((1−z)/2)^k φ((1+z)/(1−z)) = r(z)/log((1+z)/(1−z)) − s(z) .

The function q(z) has a zero of multiplicity p at z = 0 if and only if φ, defined by (75), has a zero of multiplicity p at ζ = 1; according to Lemma 3 this is equivalent to the linear k-step method associated with ρ(ζ) and σ(ζ) being of order p. Let us therefore expand

r(z)/log((1+z)/(1−z)) = b₀ + b₁z + b₂z² + ⋯ ;

since deg(s) ≤ k, the possibility of a method of order p > k+1 (or p > k+2) now hinges on having

b_{k+1} = ⋯ = b_{p−1} = 0 (or b_{k+2} = ⋯ = b_{p−1} = 0) .

Let us consider whether this is possible. We denote by c₀, c₂, c₄, … the coefficients in the Taylor series expansion of the (even) function z/log((1+z)/(1−z)); namely,

z/log((1+z)/(1−z)) = c₀ + c₂z² + c₄z⁴ + ⋯ .

Then, adopting the notational convention that a_ν = 0 for ν > k, we have that

b₀ = c₀a₁ , b₁ = c₀a₂ , etc.,
b_{2ν} = c₀a_{2ν+1} + c₂a_{2ν−1} + ⋯ + c_{2ν}a₁ ,
b_{2ν+1} = c₀a_{2ν+2} + c₂a_{2ν} + ⋯ + c_{2ν}a₂ , ν = 1, 2, … .

It is a straightforward matter to prove that c_{2ν} < 0, ν = 1, 2, … (see Lemma 5.4 on p. 233 of Henrici's book).

If k is an odd number then, since a_ν = 0 for ν > k,

b_{k+1} = c₂a_k + c₄a_{k−2} + ⋯ + c_{k+1}a₁ .

Since a₁ > 0, c_{k+1} < 0, and every term in this sum is nonpositive, it follows that b_{k+1} < 0.

If k is an even number, then

b_{k+1} = c₂a_k + c₄a_{k−2} + ⋯ + c_ka₂ .

Since c_{2ν} < 0, ν = 1, 2, …, and a_ν ≥ 0, ν = 2, 3, …, k, we deduce that b_{k+1} = 0 if and only if a₂ = a₄ = ⋯ = a_k = 0, i.e. when r(z) is an odd function of z. This, together with the fact that all roots of r(z) have real part ≤ 0, implies that all roots of r(z) must have real part equal to zero. Consequently, all roots of ρ(ζ) lie on |ζ| = 1. Since a_k = 0, the degree of r(z) is k−1, and therefore ζ = −1 is a (simple) root of ρ(ζ); then

b_{k+2} = c₄a_{k−1} + c₆a_{k−3} + ⋯ + c_{k+2}a₁ < 0 ,

showing that b_{k+2} ≠ 0. Thus, if a k-step method is zero-stable and k is odd, then b_{k+1} ≠ 0, whereas if k is even, then b_{k+2} ≠ 0. This proves that there is no zero-stable k-step method whose order exceeds k+1 if k is odd or k+2 if k is even. ⋄

Definition 8 A zero-stable linear k-step method of order k+2 is said to be an optimal method.

According to the proof of the previous theorem, all roots of the first characteristic polynomial ρ associated with an optimal linear multistep method have modulus 1.

Example 6 As k+2 = 2k if and only if k = 2, and Simpson's rule method is the zero-stable linear 2-step method of maximum order, we deduce that Simpson's rule method is the only zero-stable linear multistep method which is both of maximum order (2k = 4) and optimal.

Optimal methods have certain disadvantages in terms of their stability properties; we shall return to this question later on in the notes.

Linear k-step methods for which the first characteristic polynomial has the form ρ(z) = z^k − z^{k−1} are called Adams methods. Explicit Adams methods are referred to as Adams-Bashforth methods, while implicit Adams methods are termed Adams-Moulton methods. Linear k-step methods for which ρ(z) = z^k − z^{k−2} are called Nyström methods if explicit and Milne-Simpson methods if implicit. All these methods are zero-stable, as the root-condition check sketched below confirms.
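The root condition of Theorem 6 is easily tested numerically. The following Python sketch (using numpy's polynomial root-finder; the tolerance and encoding are my own choices) checks it for Simpson's method and for the unstable three-step method of Exercise 15 at the end of these notes:

import numpy as np

# Root-condition check: all roots of rho in the closed unit disc, and roots
# on the unit circle simple.  alpha = [alpha_0, ..., alpha_k].
def is_zero_stable(alpha, tol=1e-9):
    roots = np.roots(alpha[::-1])          # numpy wants the leading coefficient first
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False
        if abs(abs(r) - 1) <= tol:         # root on the unit circle: must be simple
            if any(abs(r - s) <= tol for j, s in enumerate(roots) if j != i):
                return False
    return True

print(is_zero_stable([-1, 0, 1]))          # Simpson: rho(z) = z^2 - 1    -> True
print(is_zero_stable([-11, -27, 27, 11]))  # rho of Exercise 15           -> False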
3.6 Absolute stability of linear multistep methods

Up to now we have been concerned with the stability and accuracy properties of linear multistep methods in the asymptotic limit h → 0, n → ∞, nh fixed. However, it is of practical significance to investigate the performance of methods with h > 0 fixed and n → ∞. Specifically, we would like to ensure that when applied to an initial value problem whose solution decays to zero as x → ∞, the linear multistep method exhibits a similar behaviour, with h > 0 fixed and x_n = x₀ + nh → ∞.

The canonical model problem with exponentially decaying solution is

y′ = λy , x > 0 , y(0) = y₀ (≠ 0) ,  (77)

where Re λ < 0. Indeed, y(x) = y₀e^{λx}, and therefore

|y(x)| = |y₀| exp(−x|Re λ|) , x ≥ 0 ,

yielding lim_{x→∞} y(x) = 0. Thus, using the terminology introduced in the last paragraph of Section 1, the solution is asymptotically stable. In the rest of the section we shall assume, for simplicity, that λ is a negative real number, but everything we shall say extends straightforwardly to the general case of λ complex, Re λ < 0.

Now consider the linear k-step method (37) and apply it to the model problem (77) with λ real and negative. This yields the linear difference equation

Σ_{j=0}^k (α_j − h̄β_j) y_{n+j} = 0 , h̄ = hλ .

Given that the general solution y_n of this homogeneous difference equation can be expressed as a linear combination of powers of roots of the associated characteristic polynomial

π(z; h̄) = ρ(z) − h̄σ(z) ,  (78)

it follows that y_n will converge to zero for h > 0 fixed and n → ∞ if and only if all roots of π(z; h̄) have modulus < 1. The kth degree polynomial π(z; h̄) defined by (78) is called the stability polynomial of the linear k-step method with first and second characteristic polynomials ρ(z) and σ(z), respectively. This motivates the following definition.

Definition 9 The linear multistep method (37) is called absolutely stable for a given h̄ if and only if for that h̄ all the roots r_s = r_s(h̄) of the stability polynomial π(z; h̄) defined by (78) satisfy |r_s| < 1, s = 1, …, k. Otherwise, the method is said to be absolutely unstable. An interval (α, β) of the real line is called the interval of absolute stability if the method is absolutely stable for all h̄ ∈ (α, β). If the method is absolutely unstable for all h̄, it is said to have no interval of absolute stability.

Since for λ > 0 the solution of (77) exhibits exponential growth, it is reasonable to expect that a consistent and zero-stable (and, therefore, convergent) linear multistep method will show a similar behaviour for h > 0 sufficiently small, and will therefore be absolutely unstable for small positive h̄ = hλ. According to the next theorem, this is indeed the case.

Theorem 13 Every consistent and zero-stable linear multistep method is absolutely unstable for small positive h̄.

Proof: Given that the method is consistent, there exists an integer p ≥ 1 such that C₀ = C₁ = ⋯ = C_p = 0 and C_{p+1} ≠ 0. Expanding each exponential as in the proof of Lemma 3, we obtain

π(e^h̄; h̄) = ρ(e^h̄) − h̄σ(e^h̄) = Σ_{j=0}^k (α_j − h̄β_j)e^{jh̄}
 = Σ_{q=0}^∞ [ Σ_{j=0}^k ( j^q α_j/q! − j^{q−1} β_j/(q−1)! ) ] h̄^q = Σ_{q=0}^∞ C_q h̄^q = Σ_{q=p+1}^∞ C_q h̄^q = O(h̄^{p+1}) .  (79)

On the other hand, noting that the polynomial π(z; h̄) can be written in the factorised form

π(z; h̄) = (α_k − h̄β_k)(z − r₁)⋯(z − r_k) ,

where r_s = r_s(h̄), s = 1, …, k, signify the roots of π(·; h̄), we deduce that

π(e^h̄; h̄) = (α_k − h̄β_k)(e^h̄ − r₁(h̄))⋯(e^h̄ − r_k(h̄)) .  (80)

As h̄ → 0, α_k − h̄β_k → α_k ≠ 0 and r_s(h̄) → ζ_s, s = 1, …, k, where ζ_s, s = 1, …, k, are the roots of the first characteristic polynomial ρ(z). Since, by assumption, the method is consistent, 1 is a root of ρ(z); furthermore, by zero-stability, 1 is a simple root of ρ(z). Let us suppose, for the sake of definiteness, that it is ζ₁ that is equal to 1.
Then ζ_s ≠ 1 for s ≠ 1, and therefore

lim_{h̄→0} (e^h̄ − r_s(h̄)) = 1 − ζ_s ≠ 0 , s ≠ 1 .

We conclude from (80) that the only factor of π(e^h̄; h̄) that converges to 0 as h̄ → 0 is e^h̄ − r₁(h̄) (the other factors converge to nonzero constants). Now, by (79), π(e^h̄; h̄) = O(h̄^{p+1}), so it follows that

e^h̄ − r₁(h̄) = O(h̄^{p+1}) .

Thus we have shown that

r₁(h̄) = e^h̄ + O(h̄^{p+1}) .

This implies that r₁(h̄) > 1 + h̄/2 > 1 for small positive h̄. That completes the proof. ⋄

According to the definition adopted in the previous section, an optimal k-step method is a zero-stable linear k-step method of order k+2. We have also seen in the proof of Theorem 12 that all roots of the first characteristic polynomial of an optimal k-step method lie on the unit circle. By refining the proof of Theorem 13 it can be shown that an optimal linear multistep method has no interval of absolute stability. It also follows from Theorem 13 that whenever a consistent zero-stable linear multistep method is used for the numerical solution of the initial value problem (1-2) with ∂f/∂y > 0, the error of the method will increase as the computation proceeds.

3.7 General methods for locating the interval of absolute stability

In this section we shall describe two methods for identifying the endpoints of the interval of absolute stability. The first of these is based on the Schur criterion, the second on the Routh-Hurwitz criterion.

3.7.1 The Schur criterion

Consider the polynomial

φ(r) = c_kr^k + ⋯ + c₁r + c₀ , c_k ≠ 0 , c₀ ≠ 0 ,

with complex coefficients. The polynomial φ is said to be a Schur polynomial if each of its roots r_s satisfies |r_s| < 1, s = 1, …, k. Let us consider the polynomial

φ̂(r) = c̄₀r^k + c̄₁r^{k−1} + ⋯ + c̄_{k−1}r + c̄_k ,

where c̄_j denotes the complex conjugate of c_j, j = 0, 1, …, k. Further, let us define

φ₁(r) = (1/r)[ φ̂(0)φ(r) − φ(0)φ̂(r) ] .

Clearly φ₁ has degree ≤ k−1. The following key result is stated without proof.

Theorem 14 (Schur's Criterion) The polynomial φ is a Schur polynomial if and only if |φ̂(0)| > |φ(0)| and φ₁ is a Schur polynomial.

We illustrate Schur's criterion by a simple example.

Exercise 4 Use Schur's criterion to determine the interval of absolute stability of the linear multistep method

y_{n+2} − y_n = (h/2)(f_{n+1} + 3f_n) .

Solution: The first and second characteristic polynomials of the method are ρ(z) = z² − 1 and σ(z) = (z + 3)/2. Therefore the stability polynomial is

π(r; h̄) = ρ(r) − h̄σ(r) = r² − ½h̄r − (1 + (3/2)h̄) .

Now,

π̂(r; h̄) = −(1 + (3/2)h̄)r² − ½h̄r + 1 .

Clearly, |π̂(0; h̄)| > |π(0; h̄)| if and only if h̄ ∈ (−4/3, 0). As

π₁(r; h̄) = (1/r)[ π̂(0; h̄)π(r; h̄) − π(0; h̄)π̂(r; h̄) ] = −¼h̄(4 + 3h̄)(3r + 1)

has the unique root −1/3 and is, therefore, a Schur polynomial, we deduce from Schur's criterion that π(r; h̄) is a Schur polynomial if and only if h̄ ∈ (−4/3, 0). Therefore the interval of absolute stability is (−4/3, 0). ⋄ A numerical confirmation is sketched below.
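The Schur test can be mimicked numerically by simply computing the roots of π(r; h̄) on a grid of values of h̄, a crude but useful cross-check (the grid values below are my own choices):

import numpy as np

# Check absolute stability: are all roots of pi(r; hbar) = rho(r) - hbar*sigma(r)
# strictly inside the unit disc?  Coefficients are listed from degree 0 upwards.
def absolutely_stable(rho, sigma, hbar, tol=1e-9):
    coeffs = [a - hbar * b for a, b in zip(rho, sigma)]
    return max(abs(np.roots(coeffs[::-1]))) < 1 - tol

rho, sigma = [-1.0, 0.0, 1.0], [1.5, 0.5, 0.0]   # the method of Exercise 4
for hbar in (-1.4, -1.3, -0.5, -0.01):
    print(hbar, absolutely_stable(rho, sigma, hbar))
# stable exactly for hbar in (-4/3, 0), in agreement with Schur's criterion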
3.7.2 The Routh-Hurwitz criterion

Consider the mapping

r ↦ z = (r − 1)/(r + 1)

of the open unit disc |r| < 1 of the complex r-plane onto the open left half-plane Re z < 0 of the complex z-plane. The inverse of this mapping is given by

r = (1 + z)/(1 − z) .

Under this transformation the function π(r; h̄) = ρ(r) − h̄σ(r) becomes

ρ((1+z)/(1−z)) − h̄σ((1+z)/(1−z)) .

On multiplying this by (1 − z)^k, we obtain a polynomial of the form

a₀z^k + a₁z^{k−1} + ⋯ + a_k .  (81)

The roots of the stability polynomial π(r; h̄) lie inside the open unit disc |r| < 1 if and only if the roots of the polynomial (81) lie in the open left half-plane Re z < 0.

Theorem 15 (Routh-Hurwitz Criterion) The roots of (81) lie in the open left half-plane if and only if a₀ > 0 and all the leading principal minors of the k × k matrix

Q = ( a₁ a₃ a₅ … a_{2k−1}
  a₀ a₂ a₄ … a_{2k−2}
  0 a₁ a₃ … a_{2k−3}
  0 a₀ a₂ … a_{2k−4}
  ⋮       ⋮
  0 0 0 … a_k )

are positive; here we adopt the convention that a_j = 0 if j > k. In particular,

a) for k = 2: a₀ > 0, a₁ > 0, a₂ > 0;
b) for k = 3: a₀ > 0, a₁ > 0, a₂ > 0, a₃ > 0, a₁a₂ − a₀a₃ > 0;
c) for k = 4: a₀ > 0, a₁ > 0, a₂ > 0, a₃ > 0, a₄ > 0, a₁a₂a₃ − a₀a₃² − a₄a₁² > 0

represent the necessary and sufficient conditions for ensuring that all roots of (81) lie in the open left half-plane.

We illustrate this result by a simple exercise.

Exercise 5 Use the Routh-Hurwitz criterion to determine the interval of absolute stability of the linear multistep method from the previous exercise.

Solution: On applying the substitution r = (1+z)/(1−z) in the stability polynomial

π(r; h̄) = r² − ½h̄r − (1 + (3/2)h̄)

and multiplying the resulting function by (1 − z)², we get

a₀z² + a₁z + a₂ , with a₀ = −h̄ , a₁ = 4 + 3h̄ , a₂ = −2h̄ .

Applying part a) of Theorem 15 we deduce that the method is absolutely stable if and only if h̄ ∈ (−4/3, 0); hence the interval of absolute stability is (−4/3, 0). ⋄

We conclude this section by listing the intervals of absolute stability (α, 0) of the k-step Adams-Bashforth and Adams-Moulton methods, for k = 1, 2, 3, 4. We also supply the orders p* and p and error constants C_{p*+1} and C_{p+1}, respectively, of these methods. The verification of the stated properties is left to the reader as an exercise.

k-step Adams-Bashforth (explicit) methods:

(1) k = 1, p* = 1, C_{p*+1} = 1/2, α = −2:
 y₁ − y₀ = hf₀ ;
(2) k = 2, p* = 2, C_{p*+1} = 5/12, α = −1:
 y₂ − y₁ = (h/2)(3f₁ − f₀) ;
(3) k = 3, p* = 3, C_{p*+1} = 3/8, α = −6/11:
 y₃ − y₂ = (h/12)(23f₂ − 16f₁ + 5f₀) ;
(4) k = 4, p* = 4, C_{p*+1} = 251/720, α = −3/10:
 y₄ − y₃ = (h/24)(55f₃ − 59f₂ + 37f₁ − 9f₀) .

k-step Adams-Moulton (implicit) methods:

(1) k = 1, p = 2, C_{p+1} = −1/12, α = −∞:
 y₁ − y₀ = (h/2)(f₁ + f₀) ;
(2) k = 2, p = 3, C_{p+1} = −1/24, α = −6:
 y₂ − y₁ = (h/12)(5f₂ + 8f₁ − f₀) ;
(3) k = 3, p = 4, C_{p+1} = −19/720, α = −3:
 y₃ − y₂ = (h/24)(9f₃ + 19f₂ − 5f₁ + f₀) ;
(4) k = 4, p = 5, C_{p+1} = −27/1440, α = −90/49:
 y₄ − y₃ = (h/720)(251f₄ + 646f₃ − 264f₂ + 106f₁ − 19f₀) .

We notice that the k-step Adams-Moulton (implicit) method has a larger interval of absolute stability and a smaller error constant than the k-step Adams-Bashforth (explicit) method.

3.8 Predictor-corrector methods

Let us suppose that we wish to use the implicit linear k-step method

Σ_{j=0}^k α_j y_{n+j} = h Σ_{j=0}^k β_j f_{n+j} , α_k, β_k ≠ 0 .

Then, at each step, we have to solve for y_{n+k} the equation

α_k y_{n+k} − hβ_k f(x_{n+k}, y_{n+k}) = Σ_{j=0}^{k−1} (hβ_j f_{n+j} − α_j y_{n+j}) .

If h < |α_k|/(L|β_k|), where L is the Lipschitz constant of f with respect to y (as in Picard's Theorem 1), then this equation has a unique solution y_{n+k}; moreover, y_{n+k} can be computed by means of the fixed-point iteration

α_k y^{[s]}_{n+k} + Σ_{j=0}^{k−1} α_j y_{n+j} = hβ_k f(x_{n+k}, y^{[s−1]}_{n+k}) + h Σ_{j=0}^{k−1} β_j f_{n+j} , s = 1, 2, 3, … ,

with y^{[0]}_{n+k} a suitably chosen starting value. Theoretically, we would iterate until the iterates y^{[s]}_{n+k} have converged (in practice, until some stopping criterion such as |y^{[s+1]}_{n+k} − y^{[s]}_{n+k}| < ε is satisfied, where ε is some preassigned tolerance). We would then regard the converged value as an acceptable approximation y_{n+k} to the unknown analytical solution-value y(x_{n+k}). This procedure is usually referred to as correcting to convergence. Unfortunately, in practice, such an approach is usually unacceptable due to the amount of work involved: each step of the iteration involves an evaluation of f(x_{n+k}, y^{[s]}_{n+k}), which may be quite time-consuming.
In order to keep to a minimum the number of times f(x_{n+k}, y_{n+k}) is evaluated, the initial guess y^{[0]}_{n+k} must be chosen accurately. This is achieved by computing y^{[0]}_{n+k} by a separate explicit method, called the predictor, and taking this as the initial guess for the iteration based on the implicit method. The implicit method is called the corrector.

For the sake of simplicity we shall suppose that the predictor and the corrector have the same number of steps, say k, but in the case of the corrector we shall allow that both α₀ and β₀ vanish. Let the linear multistep method used as predictor have the characteristic polynomials

ρ*(z) = Σ_{j=0}^k α*_j z^j , α*_k = 1 , σ*(z) = Σ_{j=0}^{k−1} β*_j z^j ,  (82)

and suppose that the corrector has characteristic polynomials

ρ(z) = Σ_{j=0}^k α_j z^j , α_k = 1 , σ(z) = Σ_{j=0}^k β_j z^j .  (83)

Suppose that m is a positive integer: it will denote the number of times the corrector is applied; in practice m ≤ 2. If P indicates the application of the predictor, C a single application of the corrector, and E an evaluation of f in terms of the known values of its arguments, then P(EC)^mE and P(EC)^m denote the following procedures.

a) P(EC)^mE:

y^{[0]}_{n+k} + Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} = h Σ_{j=0}^{k−1} β*_j f^{[m]}_{n+j} ,
f^{[s]}_{n+k} = f(x_{n+k}, y^{[s]}_{n+k}) ,
y^{[s+1]}_{n+k} + Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j} = hβ_k f^{[s]}_{n+k} + h Σ_{j=0}^{k−1} β_j f^{[m]}_{n+j} , s = 0, …, m−1 ,
f^{[m]}_{n+k} = f(x_{n+k}, y^{[m]}_{n+k}) ,

for n = 0, 1, 2, ….

b) P(EC)^m:

y^{[0]}_{n+k} + Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} = h Σ_{j=0}^{k−1} β*_j f^{[m−1]}_{n+j} ,
f^{[s]}_{n+k} = f(x_{n+k}, y^{[s]}_{n+k}) ,
y^{[s+1]}_{n+k} + Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j} = hβ_k f^{[s]}_{n+k} + h Σ_{j=0}^{k−1} β_j f^{[m−1]}_{n+j} , s = 0, …, m−1 ,

for n = 0, 1, 2, …. A concrete one-step instance of the P(EC)^mE mode is sketched below.
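The following Python sketch implements one P(EC)^mE step for the simplest pair, an explicit Euler predictor with a trapezium-rule corrector; this particular pairing and the function names are my own illustrative choices, not a method prescribed in these notes:

import math

# One P(EC)^m E step: Euler predictor, trapezium-rule corrector.
def pece_step(f, x, y, h, m=1):
    fx = f(x, y)                       # E: evaluate at the known point
    yp = y + h * fx                    # P: predict with the explicit method
    for _ in range(m):                 # (EC)^m: correct m times
        fp = f(x + h, yp)              # E: evaluate at the current iterate
        yp = y + h * (fx + fp) / 2     # C: one application of the corrector
    return yp, f(x + h, yp)            # final E

x, y, h = 0.0, 1.0, 0.1
for n in range(10):                    # integrate y' = -y on [0, 1]
    y, _ = pece_step(lambda x, y: -y, x, y, h)
    x += h
print(y, math.exp(-1.0))               # the two values agree to about 1e-4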
3.8.1 Absolute stability of predictor-corrector methods

Let us apply the predictor-corrector method P(EC)^mE to the model problem

y′ = λy , y(0) = y₀ (≠ 0) ,  (84)

where λ < 0, whose solution is, trivially, the decaying exponential y(x) = y₀ exp(λx), x ≥ 0. Our aim is to identify the values of the step size h for which the numerical solution computed by the P(EC)^mE method exhibits a similar exponential decay. The resulting recursion is

y^{[0]}_{n+k} + Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} = h̄ Σ_{j=0}^{k−1} β*_j y^{[m]}_{n+j} ,
y^{[s+1]}_{n+k} + Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j} = h̄β_k y^{[s]}_{n+k} + h̄ Σ_{j=0}^{k−1} β_j y^{[m]}_{n+j} , s = 0, …, m−1 ,

for n = 0, 1, 2, …, where h̄ = hλ. In order to rewrite this set of equations as a single difference equation involving y^{[m]}_n, y^{[m]}_{n+1}, …, y^{[m]}_{n+k} only, we have to eliminate the intermediate stages y^{[0]}_{n+k}, …, y^{[m−1]}_{n+k} from the above recursion. Let us first take s = 0 and eliminate y^{[0]}_{n+k} from the resulting pair of equations, to obtain

y^{[1]}_{n+k} + Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j} = h̄β_k ( h̄ Σ_{j=0}^{k−1} β*_j y^{[m]}_{n+j} − Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} ) + h̄ Σ_{j=0}^{k−1} β_j y^{[m]}_{n+j} .

Now take s = 1 and use the last equation to eliminate y^{[1]}_{n+k}; this gives

y^{[2]}_{n+k} + (1 + h̄β_k) Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j} = (h̄β_k)² ( h̄ Σ_{j=0}^{k−1} β*_j y^{[m]}_{n+j} − Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} ) + (1 + h̄β_k) h̄ Σ_{j=0}^{k−1} β_j y^{[m]}_{n+j} .

By induction,

y^{[m]}_{n+k} + (1 + h̄β_k + ⋯ + (h̄β_k)^{m−1}) Σ_{j=0}^{k−1} α_j y^{[m]}_{n+j}
 = (h̄β_k)^m ( h̄ Σ_{j=0}^{k−1} β*_j y^{[m]}_{n+j} − Σ_{j=0}^{k−1} α*_j y^{[m]}_{n+j} ) + (1 + h̄β_k + ⋯ + (h̄β_k)^{m−1}) h̄ Σ_{j=0}^{k−1} β_j y^{[m]}_{n+j} .

For m fixed, this is a kth order difference equation involving y^{[m]}_n, …, y^{[m]}_{n+k}. In order to ensure that its solution exhibits exponential decay as n → ∞, we have to demand that all roots of the associated characteristic equation have modulus < 1. The characteristic equation can be rewritten in the equivalent form

Az^k + (1 + h̄β_k + ⋯ + (h̄β_k)^{m−1})(ρ(z) − h̄σ(z)) + (h̄β_k)^m(ρ*(z) − h̄σ*(z)) = 0 ,

where

A = 1 + (1 + h̄β_k + ⋯ + (h̄β_k)^{m−1})(h̄β_k − α_k) + (h̄β_k)^m(h̄β*_k − α*_k) .

Now, since α_k = α*_k = 1 and β*_k = 0, we deduce that A = 0, and therefore the characteristic equation of the P(EC)^mE method is

π_{P(EC)^mE}(z; h̄) = ρ(z) − h̄σ(z) + M_m(h̄)(ρ*(z) − h̄σ*(z)) = 0 ,

where

M_m(h̄) = (h̄β_k)^m / (1 + h̄β_k + ⋯ + (h̄β_k)^{m−1}) , m ≥ 1 .

Here π_{P(EC)^mE}(z; h̄) is referred to as the stability polynomial of the predictor-corrector method P(EC)^mE. By pursuing a similar argument we can also deduce that the characteristic equation of the predictor-corrector method P(EC)^m is

π_{P(EC)^m}(z; h̄) = ρ(z) − h̄σ(z) + (M_m(h̄)/(h̄β_k))(ρ*(z)σ(z) − ρ(z)σ*(z)) = 0 .

Here π_{P(EC)^m}(z; h̄) is referred to as the stability polynomial of the predictor-corrector method P(EC)^m. With the predictor and corrector specified, one can now check, using the Schur criterion or the Routh-Hurwitz criterion just as in the case of a single multi-step method, whether the roots of π_{P(EC)^mE}(z; h̄) and π_{P(EC)^m}(z; h̄) all lie in the open unit disc |z| < 1, thereby ensuring the absolute stability of the P(EC)^mE and P(EC)^m method, respectively.

Let us suppose, for example, that |h̄β_k| < 1, i.e. that 0 < h < 1/|λβ_k|; then lim_{m→∞} M_m(h̄) = 0, and consequently

lim_{m→∞} π_{P(EC)^mE}(z; h̄) = π(z; h̄) , lim_{m→∞} π_{P(EC)^m}(z; h̄) = π(z; h̄) ;

that is, in the limit of correcting to convergence, both stability polynomials reduce to the stability polynomial of the corrector.

3.8.2 The accuracy of predictor-corrector methods

Let p* and p denote the orders of accuracy of the predictor and the corrector, respectively, and suppose that m ≥ 1. We have that

π_{P(EC)^mE}(e^h̄; h̄) = ρ(e^h̄) − h̄σ(e^h̄) + M_m(h̄)(ρ*(e^h̄) − h̄σ*(e^h̄))
 = O(h̄^{p+1}) + M_m(h̄)·O(h̄^{p*+1}) = O(h̄^{p+1} + h̄^{p*+m+1})
 = O(h̄^{p+1}) if p* ≥ p ,
 = O(h̄^{p+1}) if p* = p − q, 0 < q ≤ p and m ≥ q ,
 = O(h̄^{p−q+m+1}) if p* = p − q, 0 < q ≤ p and m ≤ q − 1 .

Consequently, denoting by T_n^{P(EC)^mE} the truncation error of the method P(EC)^mE, we have that

T_n^{P(EC)^mE} = O(h^p) if p* ≥ p ,
 = O(h^p) if p* = p − q, 0 < q ≤ p and m ≥ q ,
 = O(h^{p−q+m}) if p* = p − q, 0 < q ≤ p and m ≤ q − 1 .

This implies that, from the point of view of overall accuracy, there is no particular advantage in using a predictor of order p* > p. Indeed, as long as p* + m ≥ p, the predictor-corrector method P(EC)^mE will have order of accuracy p. Similar statements can be made about P(EC)^m type predictor-corrector methods with m ≥ 1. On writing

ρ*(z)σ(z) − σ*(z)ρ(z) = (ρ*(z) − h̄σ*(z))σ(z) − σ*(z)(ρ(z) − h̄σ(z)) ,

we deduce that

π_{P(EC)^m}(e^h̄; h̄) = O(h̄^{p+1} + h̄^{p*+m} + h̄^{p+m}) .

Consequently, as long as p* + m ≥ p + 1, the predictor-corrector method P(EC)^m will have order of accuracy p.
4 Stiff problems

Let us consider an initial value problem for a system of m ordinary differential equations of the form

y′ = f(x, y) , y(a) = y₀ ,  (85)

where y = (y₁, …, y_m)^T. A linear k-step method for the numerical solution of (85) has the form

Σ_{j=0}^k α_j y_{n+j} = h Σ_{j=0}^k β_j f_{n+j} ,  (86)

where f_{n+j} = f(x_{n+j}, y_{n+j}). Let us suppose, for simplicity, that f(x, y) = Ay + b, where A is a constant matrix of size m × m and b is a constant (column) vector of size m; then (86) becomes

Σ_{j=0}^k (α_j I − hβ_j A) y_{n+j} = hσ(1)b .  (87)

In complete analogy with Definition 9, the region of absolute stability R_A of the method is the set of all h̄ = hλ in the complex plane, λ an eigenvalue of A, for which all roots of the stability polynomial π(z; h̄) = ρ(z) − h̄σ(z) have modulus less than 1 (Definition 10).

Example Consider Euler's explicit method applied to a) the scalar model problem y′ = λy and b) the initial value problem

y″ + (λ+1)y′ + λy = 0 , y(0) = 1 , y′(0) = λ − 2 ,

where λ ≫ 1. What choice of the step size h will guarantee absolute stability in the sense of Definition 10?

Solution: a) For Euler's explicit method ρ(z) = z − 1 and σ(z) = 1, so that π(z; h̄) = ρ(z) − h̄σ(z) = (z − 1) − h̄ = z − (1 + h̄). This has the root r = 1 + h̄. Hence the region of absolute stability is

R_A = {h̄ ∈ ℂ : |1 + h̄| < 1} ,

which is an open unit disc centred at −1.

b) Now writing u = y and v = y′ and y = (u, v)^T, the initial value problem for the given second-order differential equation can be recast as

y′ = Ay , y(0) = y₀ ,

where

A = ( 0 1 ; −λ −(λ+1) ) and y₀ = (1, λ−2)^T .

The eigenvalues of A are the roots of the characteristic polynomial of A,

det(A − zI) = z² + (λ+1)z + λ .

Hence r₁ = −1, r₂ = −λ; we deduce that the method is absolutely stable provided that h < 2/λ. It is an easy matter to show that

u(x) = 2e^{−x} − e^{−λx} , v(x) = −2e^{−x} + λe^{−λx} .

The graphs of the functions u and v for λ = 45 are shown in Figure 2.

[Figure 2: The functions u and v plotted against x for x ∈ [0, 1].]

Note that v exhibits a rapidly varying behaviour (fast time scale) near x = 0, while u is slowly varying for x ≥ 0 and v is slowly varying for x ≥ 1/45. Despite the fact that over most of the interval [0, 1] both u and v are slowly varying when λ = 45, we are forced to use a step size of h < 2/45 in order to ensure that the method is absolutely stable. ⋄

In the example the v component of the solution exhibited two vastly different time scales; in addition, the fast time scale (which occurs between x = 0 and x ≈ 1/λ) has negligible effect on the solution, so its accurate resolution does not appear to be important for obtaining an overall accurate solution. Nevertheless, in order to ensure the stability of the numerical method under consideration, the mesh size h is forced to be exceedingly small, h < 2/λ, smaller than an accurate approximation of the solution for x > 1/λ would necessitate. Systems of differential equations which exhibit this behaviour are generally referred to as stiff systems. We refrain from formulating a rigorous definition of stiffness.

4.1 Stability of numerical methods for stiff systems

In order to motivate the various definitions of stability which occur in this section, we begin with a simple example. Consider Euler's implicit method for the initial value problem

y′ = λy , y(0) = y₀ ,

where λ is a complex number. The stability polynomial of the method is π(z; h̄) = ρ(z) − h̄σ(z), where h̄ = hλ, ρ(z) = z − 1 and σ(z) = z. Since the only root of the stability polynomial is z = 1/(1 − h̄), we deduce that the method has the region of absolute stability

R_A = {h̄ : |1 − h̄| > 1} .

In particular, R_A includes the whole of the left-hand complex half-plane. The next definition is due to Dahlquist (1963).

Definition 11 A linear multistep method is said to be A-stable if its region of absolute stability R_A contains the whole of the left-hand complex half-plane Re(hλ) < 0.

As the next theorem of Dahlquist (1963) shows, Definition 11 is far too restrictive.

Theorem 16 (i) No explicit linear multistep method is A-stable. (ii) The order of an A-stable implicit linear multistep method cannot exceed 2. (iii) The second-order A-stable linear multistep method with smallest error constant is the trapezium rule method.

Thus we adopt the following definition, due to Widlund (1967).

Definition 12 A linear multistep method is said to be A(α)-stable, α ∈ (0, π/2), if its region of absolute stability R_A contains the infinite wedge

W_α = {h̄ : π − α < arg(h̄) < π + α} .

A linear multistep method is said to be A(0)-stable if it is A(α)-stable for some α ∈ (0, π/2). A linear multistep method is A₀-stable if R_A includes the negative real axis of the complex plane.

Let us note in connection with this definition that if Re λ < 0 for a given λ, then h̄ = hλ either lies inside the wedge W_α for all positive h or outside W_α for all positive h. Consequently, if all eigenvalues λ of the matrix A happen to lie in some wedge W_β, then an A(β)-stable method can be used for the numerical solution of the initial value problem (85) without any restriction on the step size h. In particular, if all eigenvalues of A are real and negative, then an A(0)-stable method can be used. A numerical illustration of the contrast between the explicit and implicit Euler methods on a stiff problem is sketched below.
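The following two-line Python experiment (the values λ = −45 and h = 0.1 echo the example above, but the experiment itself is my own illustration) shows the explicit Euler method blowing up on the model problem while the A-stable implicit Euler method remains bounded:

# Stiffness illustration on y' = lam*y, y(0) = 1, lam = -45: explicit Euler with
# h = 0.1 violates |1 + h*lam| < 1 and blows up, whereas implicit Euler, whose
# stability region contains the whole left half-plane, decays for any h > 0.
lam, h, N = -45.0, 0.1, 10
ye = yi = 1.0
for n in range(N):
    ye = ye * (1 + h * lam)            # explicit Euler step
    yi = yi / (1 - h * lam)            # implicit Euler step (solved exactly)
print(ye, yi)                          # ye is huge in modulus, yi is tiny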
The next theorem (stated here without proof) can be regarded as the analogue of Theorem 16 for the case of A(α)- and A(0)-stability.

Theorem 17 (i) No explicit linear multistep method is A(0)-stable. (ii) The only A(0)-stable linear k-step method whose order exceeds k is the trapezium rule method. (iii) For each α ∈ [0, π/2) there exist A(α)-stable linear k-step methods of order p for which k = p = 3 and k = p = 4.

A different way of loosening the concept of A-stability was proposed by Gear (1969). The motivation behind it is the fact that for a typical stiff problem the eigenvalues of the matrix A which produce the fast transients all lie to the left of a line Re h̄ = −a, a > 0, in the complex plane, while those that are responsible for the slow transients are clustered around zero.

Definition 13 A linear multistep method is said to be stiffly stable if there exist positive real numbers a and c such that R_A ⊇ R₁ ∪ R₂, where

R₁ = {h̄ ∈ ℂ : Re h̄ < −a} and R₂ = {h̄ ∈ ℂ : −a ≤ Re h̄ < 0, −c ≤ Im h̄ ≤ c} .

It is clear that stiff stability implies A(α)-stability with α = arctan(c/a). More generally, we have the following chain of implications:

A-stability ⟹ stiff stability ⟹ A(α)-stability ⟹ A(0)-stability ⟹ A₀-stability .

In the next two sections we shall consider linear multistep methods which are particularly well suited for the numerical solution of stiff systems of ordinary differential equations.

4.2 Backward differentiation methods for stiff systems

Consider a linear multistep method with stability polynomial π(z; h̄) = ρ(z) − h̄σ(z). If the method is A(α)-stable or stiffly stable, the roots r(h̄) of π(·; h̄) lie in the closed unit disc when h̄ is real and h̄ → −∞. Hence,

0 = lim_{h̄→−∞} π(r(h̄); h̄)/h̄ = −lim_{h̄→−∞} σ(r(h̄)) = −σ( lim_{h̄→−∞} r(h̄) ) ;

in other words, as h̄ → −∞ the roots of π(·; h̄) approach the roots of σ. It is desirable, therefore, that the roots of σ should lie well inside the unit disc; the extreme choice is to place them all at the origin, i.e. to take σ(z) = β_k z^k. The resulting methods,

Σ_{j=0}^k α_j y_{n+j} = hβ_k f_{n+k} ,

are called backward differentiation methods. Their coefficients, orders, error constants and stiff-stability parameters a and α are listed in Table 3 for k = 1, …, 6; for k ≥ 7, backward differentiation methods of the kind considered here are not zero-stable and are therefore irrelevant from the practical point of view.

Table 3: Coefficients, order, error constant and stability parameters of the backward differentiation methods.

k = 1: α₁ = 1, α₀ = −1; β₁ = 1; p = 1; C_{p+1} = 1/2; a = 0; α = 90°
k = 2: α₂ = 1, α₁ = −4/3, α₀ = 1/3; β₂ = 2/3; p = 2; C_{p+1} = 2/9; a = 0; α = 90°
k = 3: α₃ = 1, α₂ = −18/11, α₁ = 9/11, α₀ = −2/11; β₃ = 6/11; p = 3; C_{p+1} = 3/22; a = 0.1; α = 88°
k = 4: α₄ = 1, α₃ = −48/25, α₂ = 36/25, α₁ = −16/25, α₀ = 3/25; β₄ = 12/25; p = 4; C_{p+1} = 12/125; a = 0.7; α = 73°
k = 5: α₅ = 1, α₄ = −300/137, α₃ = 300/137, α₂ = −200/137, α₁ = 75/137, α₀ = −12/137; β₅ = 60/137; p = 5; C_{p+1} = 10/137; a = 2.4; α = 52°
k = 6: α₆ = 1, α₅ = −360/147, α₄ = 450/147, α₃ = −400/147, α₂ = 225/147, α₁ = −72/147, α₀ = 10/147; β₆ = 60/147; p = 6; C_{p+1} = 20/343; a = 6.1; α = 19°
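As a small illustration, the k = 2 row of Table 3 can be run directly on the stiff model problem; for a linear right-hand side the implicit equation is solved in closed form, and the exact solution supplies the second starting value (the parameter values are my own):

import math

# BDF2: y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2/3) h f_{n+2}, on y' = lam*y.
lam, h, N = -45.0, 0.1, 20
y0, y1 = 1.0, math.exp(lam * h)        # starting values
for n in range(N - 1):
    y2 = ((4/3) * y1 - (1/3) * y0) / (1 - (2/3) * h * lam)
    y0, y1 = y1, y2
print(y1, math.exp(lam * h * N))       # both decay; no step-size restriction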
4.3 Gear's method

Since backward differentiation methods are implicit, they have to be used in conjunction with a predictor. Instead of iterating the corrector to convergence via a fixed-point iteration, Newton's method can be used to accelerate the iterative convergence of the corrector. Rewriting the resulting predictor-corrector multi-step pair as a one-step method gives rise to Gear's method, which allows the local alteration of the order of the method as well as of the mesh size. We elaborate on this below.

As we have seen in Section 4.1, in the numerical solution of stiff systems of ordinary differential equations the stability considerations highlighted in parts (i) of Theorems 16 and 17 necessitate the use of implicit methods. Indeed, if a predictor-corrector method is used with a backward differentiation formula as corrector, a system of nonlinear equations of the form

y_{n+k} − hβ_k f(x_{n+k}, y_{n+k}) = g_n , g_n = −Σ_{j=0}^{k−1} α_j y_{n+j} ,

will have to be solved at each step, where g_n involves only information which has already been computed at previous steps and can therefore be considered known. If this equation is solved by a fixed-point iteration, the Contraction Mapping Theorem will require that

Lh|β_k| < 1  (89)

in order to ensure convergence of the iteration; here L is the Lipschitz constant of the function f(x, ·). In fact, since the function f(x, ·) is assumed to be continuously differentiable,

L = max_{(x,y)∈R} ‖(∂f/∂y)(x, y)‖ .

For a stiff system L is typically very large, so the restriction on the step length h expressed by (89) is just as severe as the condition on h that one encounters when using an explicit method with a bounded region of absolute stability. In order to overcome this difficulty, Gear proposed to use Newton's method:

y^{[s+1]}_{n+k} = y^{[s]}_{n+k} − [ I − hβ_k (∂f/∂y)(x_{n+k}, y^{[s]}_{n+k}) ]^{−1} ( y^{[s]}_{n+k} − hβ_k f(x_{n+k}, y^{[s]}_{n+k}) − g_n )  (90)

for s = 0, 1, …, with a suitable initial guess y^{[0]}_{n+k}. Even when applied to a stiff system, convergence of the Newton iteration (90) can be attained without further restrictions on the mesh size h, provided that we can supply a sufficiently accurate initial guess y^{[0]}_{n+k} (by using an appropriately accurate predictor, for example). On the other hand, the use of Newton's method in this context has the disadvantage that the Jacobi matrix ∂f/∂y has to be re-evaluated, and the matrix I − hβ_k (∂f/∂y)(x_{n+k}, y^{[s]}_{n+k}) inverted, at each step of the iteration and at each mesh point x_{n+k}.

One aspect of Gear's method is that the matrix I − hβ_k ∂f/∂y involved in the Newton iteration described above is only calculated occasionally (i.e. at the start of the iteration, for s = 0, and thereafter only if the Newton iteration exhibits slow convergence); the inversion of this matrix is performed by an LU decomposition. A further aspect of Gear's method is a strategy for varying the order of the backward differentiation formula and the step size according to the intermediate results in the computation. This is achieved by rewriting the multistep predictor-corrector pair as a one-step method (in the so-called Nordsieck form). For further details, we refer to Chapter III.6 in the book of Hairer, Nørsett and Wanner. A scalar sketch of the Newton iteration (90) is given below.
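The following Python sketch shows the iteration (90) for a single implicit Euler step (the k = 1 backward differentiation formula) on a scalar equation, so that the "matrix" I − hβ_k ∂f/∂y is just a number; the test function and all names are my own illustrative choices:

# Newton iteration for one implicit Euler step: solve y - h f(y) - y_n = 0.
def newton_implicit_euler_step(f, dfdy, yn, h, tol=1e-12, maxit=20):
    y = yn                                 # initial guess (a trivial "predictor")
    for _ in range(maxit):
        residual = y - h * f(y) - yn
        y -= residual / (1 - h * dfdy(y))  # scalar analogue of (90)
        if abs(residual) < tol:
            break
    return y

# Stiff scalar test: f(y) = -45*(y - 1); the exact fixed point is 4.5/5.5.
print(newton_implicit_euler_step(lambda y: -45*(y - 1), lambda y: -45.0, 0.0, 0.1))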
5 Nonlinear Stability

All notions of stability considered so far in these notes rest on the assumption that f(x, y) = λy or f(x, y) = Ay + b. The purpose of this section is to develop an appropriate theoretical framework which is directly applicable to the stability analysis of numerical methods for nonlinear ODEs.

Consider the linear system of differential equations y′ = Ay, y(x₀) = y₀, where all eigenvalues of the m × m matrix A have negative real part. Then ‖y(x)‖ decreases as x increases; also, neighbouring solution curves get closer as x increases: if u(x) and v(x) are two solutions of y′ = Ay subject to u(x₀) = u₀ and v(x₀) = v₀, respectively, then

‖u(x₂) − v(x₂)‖ ≤ ‖u(x₁) − v(x₁)‖ , x₂ ≥ x₁ ≥ x₀ .  (91)

It is the property (91) that has a natural extension to nonlinear differential equations and leads to the following definition.

Definition 14 Suppose that u and v are two solutions of the differential equation y′ = f(x, y) subject to the respective initial conditions u(x₀) = u₀, v(x₀) = v₀. If

‖u(x₂) − v(x₂)‖ ≤ ‖u(x₁) − v(x₁)‖

for all real numbers x₂ and x₁ such that x₂ ≥ x₁ ≥ x₀, where ‖·‖ denotes the Euclidean norm on ℝ^m, then the solutions u and v are said to be contractive in the norm ‖·‖.

To see what assumptions on f we must make to ensure that two solutions u and v of the differential equation y′ = f(x, y), with respective initial conditions u(x₀) = u₀, v(x₀) = v₀, are contractive, let (·, ·) denote the inner product of ℝ^m and consider the inner product of u′ − v′ = f(x, u) − f(x, v) with u − v. This yields

½ d/dx ‖u(x) − v(x)‖² = (f(x, u(x)) − f(x, v(x)), u(x) − v(x)) .

Thus, if

(f(x, u(x)) − f(x, v(x)), u(x) − v(x)) ≤ 0  (92)

for all x ≥ x₀, then

½ d/dx ‖u(x) − v(x)‖² ≤ 0

for all x ≥ x₀, and therefore the solutions u and v corresponding to the initial conditions u₀ and v₀, respectively, are contractive in the norm ‖·‖. The inequality (92) motivates the following definition.

Definition 15 The system of differential equations y′ = f(x, y) is said to be dissipative in the interval [x₀, ∞) if

(f(x, u) − f(x, v), u − v) ≤ 0  (93)

for all x ≥ x₀ and all u and v in ℝ^m.

Thus we have proved that if the system of differential equations is dissipative, then any two solutions u and v corresponding to the respective initial values u₀ and v₀ are contractive. A slight generalisation of (92) results in the following definition.

Definition 16 The function f(x, ·) is said to satisfy a one-sided Lipschitz condition on the interval [x₀, ∞) if there exists a function ν(x) such that

(f(x, u) − f(x, v), u − v) ≤ ν(x)‖u − v‖²  (94)

for all x ∈ [x₀, ∞) and all u and v in ℝ^m.

In particular, if f(x, ·) satisfies a one-sided Lipschitz condition on [x₀, ∞) and ν(x) ≤ 0 for all x ∈ [x₀, ∞), then the differential equation y′ = f(x, y) is dissipative on this interval, and therefore any pair of solutions u and v, corresponding to the respective initial values u₀ and v₀, is also contractive.

Now we shall consider numerical methods for the solution of an initial value problem for a dissipative differential equation y′ = f(x, y), and develop conditions under which the numerical solutions are also contractive in a suitable norm. In order to keep the presentation simple we shall suppose that f is independent of x and, instead of the linear k-step method

Σ_{j=0}^k α_j y_{n+j} = h Σ_{j=0}^k β_j f(y_{n+j})  (95)

for the numerical solution of y′ = f(y), y(x₀) = y₀, we shall consider its one-leg twin

Σ_{j=0}^k α_j y_{n+j} = h f( Σ_{j=0}^k β_j y_{n+j} ) .  (96)

For example, the one-leg twin of the trapezium rule method

y_{n+1} − y_n = ½h[f(y_{n+1}) + f(y_n)]

is the implicit midpoint rule method

y_{n+1} − y_n = hf(½(y_{n+1} + y_n)) .

Let us recall the notation from Section 3.1 to simplify writing: putting y_{n+1} = Ey_n, we can write the linear k-step method (95) as

ρ(E)y_n = hσ(E)f(y_n) ,

where ρ(z) = Σ_{j=0}^k α_jz^j and σ(z) = Σ_{j=0}^k β_jz^j, and its one-leg twin (96) as

ρ(E)y_n = hf(σ(E)y_n) .

Given a sequence {z_n}_{n≥0} in ℝ^m, let Z_n = (z_{n+k−1}, …, z_n)^T in ℝ^{mk}. Given that G = (g_{ij}) is a k × k symmetric positive definite matrix, the G-norm ‖·‖_G is defined by

‖Z_n‖_G = ( Σ_{i=1}^k Σ_{j=1}^k g_{ij}(z_{n+k−i}, z_{n+k−j}) )^{1/2} .

Definition 17 The k-step method (96) is said to be G-stable if there exists a symmetric positive definite matrix G such that, for every sequence {z_n}_{n≥0} in ℝ^m,

‖Z_{n+1}‖²_G − ‖Z_n‖²_G ≤ 2(ρ(E)z_n, σ(E)z_n)/σ(1)² .

Let {u_n} and {v_n} be two solutions of the difference equation (96) with different starting values, and suppose that (96) is G-stable. Define the vectors U_n and V_n in ℝ^{mk} by

U_n = (u_{n+k−1}, …, u_n)^T , V_n = (v_{n+k−1}, …, v_n)^T .
Since we are assuming that the differential equation y′ = f(y) is dissipative, it follows that

‖U_{n+1} − V_{n+1}‖²_G − ‖U_n − V_n‖²_G
 ≤ 2(ρ(E)(u_n − v_n), σ(E)(u_n − v_n))/σ(1)²
 = 2(hf(σ(E)u_n) − hf(σ(E)v_n), σ(E)u_n − σ(E)v_n)/σ(1)²
 ≤ 0 .

Hence,

‖U_{n+1} − V_{n+1}‖_G ≤ ‖U_n − V_n‖_G for n ≥ 0 .

Thus we deduce that a G-stable approximation of a dissipative ordinary differential equation y′ = f(y) is contractive in the G-norm; this is a discrete counterpart of the property, established at the beginning of this section, that analytical solutions of a dissipative equation of this kind are contractive in the Euclidean norm. For further developments of these ideas we refer to the books of J.D. Lambert, and A.M. Stuart and A.R. Humphries.

6 Boundary value problems

In many applications a system of m simultaneous first-order ordinary differential equations in m unknowns y₁(x), y₂(x), …, y_m(x) has to be solved. If each of these variables satisfies a given condition at the same value a of x, then we have an initial value problem for a system of first-order ordinary differential equations. If the y_i, i = 1, …, m, satisfy given conditions at different values a, b, c, … of the independent variable x, then we have a multi-point boundary value problem. In particular, if conditions on the y_i, i = 1, …, m, are imposed at two different values a and b, then we have a two-point boundary value problem.

Example 7 Here is an example of a multipoint (in this case, three-point) boundary value problem:

y‴ − y″ + y′ − y = 0 , y(0) = 1 , y(1) = e , y′(2) = e² .

The exact solution is y(x) = e^x.

Example 8 This is an example of a two-point boundary value problem:

y″ − 2y³ = 0 , y(1) = 1 , y′(2) + (y(2))² = 0 .

The exact solution is y(x) = 1/x.

In this section we shall consider three classes of methods for the numerical solution of two-point boundary value problems: shooting methods, matrix methods and collocation methods.

6.1 Shooting methods

Let us consider the two-point boundary value problem

y″ = f(x, y, y′) , y(a) = A , y(b) = B ,  (97)

with a < b and x ∈ [a, b]. We shall suppose that (97) has a unique solution. The motivation behind shooting methods is to convert the two-point boundary value problem into a sequence of initial value problems whose solutions converge to that of the boundary value problem, so that one can use existing software developed for the numerical solution of initial value problems: observe that an attempt to solve the boundary value problem (97) directly would lead to a coupled system of nonlinear equations, the solution of which may be a hard problem.

Let us make an initial guess s for y′(a) and denote by y(x; s) the solution of the initial value problem

y″ = f(x, y, y′) , y(a) = A , y′(a) = s .  (98)

Introducing the notation

u(x; s) = y(x; s) , v(x; s) = (∂y/∂x)(x; s) ,

we can rewrite (98) as a system of first-order ordinary differential equations:

∂u/∂x (x; s) = v(x; s) , u(a; s) = A ,
∂v/∂x (x; s) = f(x, u(x; s), v(x; s)) , v(a; s) = s .  (99)

The solution u(x; s) of the initial value problem (99) will coincide with the solution y(x) of the boundary value problem (97) provided that we can find a value of s such that

φ(s) = u(b; s) − B = 0 .  (100)

The essence of the shooting method for the numerical solution of the boundary value problem (97) is to find a root of the equation (100). Any standard root-finding technique can be used; here we shall consider two: bisection of an interval which is known to contain the root, and Newton's method.
6.1.1 The method of bisection

Let us suppose that two numbers s₁ and s₂ are known such that φ(s₁) < 0 and φ(s₂) > 0. We assume, for the sake of definiteness, that s₁ < s₂. Given that the solution of the initial value problem (99) depends continuously on the initial data, there must exist at least one value of s in the interval (s₁, s₂) such that φ(s) = 0. Thus the interval [s₁, s₂] contains a root of the equation (100). The root of (100) can now be calculated approximately using the method of bisection. We take the midpoint s₃ of the interval [s₁, s₂], compute u(b; s₃) and consider whether φ(s₃) = u(b; s₃) − B is positive or negative. If φ(s₃) > 0, then it is the interval [s₁, s₃] that contains a root of φ, whereas if φ(s₃) < 0, then the interval in question is [s₃, s₂]. By repeating this process, one can construct a sequence of nested intervals whose endpoints converge to the root s. In practice, the process of bisection is terminated after a finite number of steps, when the length of the interval containing s has become sufficiently small.

6.1.2 The Newton-Raphson method

An alternative to the method of bisection is to compute a sequence {s_n}_{n≥0} generated by the Newton-Raphson method:

s_{n+1} = s_n − φ(s_n)/φ′(s_n) .  (101)

To calculate φ′(s_n), note that φ′(s) = (∂u/∂s)(b; s). Introducing ξ(x; s) = ∂u/∂s and η(x; s) = ∂v/∂s and differentiating (99) with respect to s, we obtain the variational initial value problem

∂ξ/∂x = η , ξ(a; s) = 0 ,
∂η/∂x = (∂f/∂y)(x, u, v)ξ + (∂f/∂y′)(x, u, v)η , η(a; s) = 1 ,  (102)

so that

φ′(s) = ξ(b; s) .  (103)

Given s = s_n, the initial value problems (99) and (102) can be solved by a predictor-corrector method or a Runge-Kutta method on the interval [a, b]. Thus we obtain u(b; s_n) and ξ(b; s_n) (or, more precisely, approximations to them), from which we can calculate φ(s_n) and φ′(s_n); we then obtain the next Newton-Raphson iterate s_{n+1} from (101). The process is repeated until the iterates s_n settle to a fixed number of digits.

Two remarks are in order:

1) According to (103), the initial value problems (99) and (102) are coupled, and therefore they must be solved simultaneously over the interval [a, b], with s set to s_n, n = 0, 1, 2, …;

2) The coupled initial value problem (99), (102) may be very sensitive to perturbations of the initial guess s₀; a bad initial guess may result in a sequence of Newton-Raphson iterates {s_n}_{n≥0} which does not converge to the root s. This latter difficulty may be overcome by using the multiple shooting method. A sketch of the simple shooting method, with bisection as the root-finder, is given below.
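The following Python sketch applies shooting by bisection to the linear test problem y″ = y, y(0) = 0, y(1) = 1, whose exact solution sinh(x)/sinh(1) makes the answer checkable: the exact slope at x = 0 is 1/sinh(1). The test problem, the RK4 integrator and the bracket [0, 2] (verified by the signs of φ at its endpoints) are my own illustrative choices:

import math

def u_at_b(s, N=200):                    # solve (99) for y'' = y and return u(1; s)
    h, u, v = 1.0 / N, 0.0, s
    for _ in range(N):
        # classical RK4 for the system u' = v, v' = u
        k1 = (v, u)
        k2 = (v + h/2*k1[1], u + h/2*k1[0])
        k3 = (v + h/2*k2[1], u + h/2*k2[0])
        k4 = (v + h*k3[1], u + h*k3[0])
        u += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u

phi = lambda s: u_at_b(s) - 1.0          # the function (100)
s1, s2 = 0.0, 2.0                        # bracket with phi(s1) < 0 < phi(s2)
for _ in range(40):                      # bisection on [s1, s2]
    s3 = (s1 + s2) / 2
    s1, s2 = (s3, s2) if phi(s3) < 0 else (s1, s3)
print(s1, 1/math.sinh(1))                # both approximately 0.8509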
6.2 Matrix methods

6.2.1 Linear boundary value problem

Consider the two-point boundary value problem

y″ = f(x, y, y′) , x ∈ (a, b) ,
a₀y(a) + b₀y′(a) = c₀ ,
a₁y(b) + b₁y′(b) = c₁ ,  (110)

where, in this subsection, f is assumed to depend linearly on y and y′, so that the problem is linear. The essence of the method is to approximate the derivatives in the differential equation and in the boundary conditions by divided differences. Let x_i = a + ih, i = 0, …, N, with spacing h = (b − a)/N. Assuming that y is sufficiently smooth, it is a simple matter to show, using Taylor series expansions, that

y″(x_i) = (y(x_{i+1}) − 2y(x_i) + y(x_{i−1}))/h² + O(h²) ,  (111)
y′(x_i) = (y(x_{i+1}) − y(x_{i−1}))/(2h) + O(h²) ,  (112)
y′(x₀) = (−3y(x₀) + 4y(x₁) − y(x₂))/(2h) + O(h²) ,  (113)
y′(x_N) = (y(x_{N−2}) − 4y(x_{N−1}) + 3y(x_N))/(2h) + O(h²) .  (114)

Now, using (111-114) we can construct a finite difference method for the numerical solution of (110); we denote by y_i the numerical approximation to y(x_i), i = 0, …, N:

(y_{i+1} − 2y_i + y_{i−1})/h² = f(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)) , i = 1, …, N−1 ,
a₀y₀ + b₀(−3y₀ + 4y₁ − y₂)/(2h) = c₀ ,
a₁y_N + b₁(y_{N−2} − 4y_{N−1} + 3y_N)/(2h) = c₁ .

After rearranging these, we obtain a system of linear equations of the form

A₀y₀ + C₀y₁ + B₀y₂ = F₀ ,
A_iy_{i−1} + C_iy_i + B_iy_{i+1} = F_i , i = 1, …, N−1 ,
A_Ny_{N−2} + C_Ny_{N−1} + B_Ny_N = F_N ,

with, in particular,

A₀ = a₀ − 3b₀/(2h) , C₀ = 4b₀/(2h) , B₀ = −b₀/(2h) ,
A_N = b₁/(2h) , C_N = −4b₁/(2h) , B_N = a₁ + 3b₁/(2h) .

The matrix of the system is

M = ( A₀ C₀ B₀ 0 ⋯ 0
  A₁ C₁ B₁ 0 ⋯ 0
  0 A₂ C₂ B₂ ⋯ 0
  ⋮  ⋱ ⋱ ⋱
  0 ⋯ A_{N−1} C_{N−1} B_{N−1}
  0 ⋯ A_N C_N B_N ) ,

which is tridiagonal apart, possibly, from the entry B₀ in the first row and the entry A_N in the last row. Let us consider the following cases:

1) If B₀ = 0 and A_N = 0, then M is a tridiagonal matrix.

2) If B₀ ≠ 0 and B₁ = 0 (and/or A_N ≠ 0 and A_{N−1} = 0), then we can interchange the first two rows of the matrix (and/or the last two rows) and, again, we obtain a tridiagonal matrix.

3) If B₀ ≠ 0 and B₁ ≠ 0 (and/or A_N ≠ 0 and A_{N−1} ≠ 0), then we can eliminate B₀ from the first row (and/or A_N from the last row) to obtain a tridiagonal matrix.

In any case the matrix M can be transformed into a tridiagonal matrix. Thus, from now on, without any restriction on generality, we shall suppose that M is tridiagonal. To summarise the situation, we wish to solve the system of linear equations My = F, where M is a tridiagonal matrix,

y = (y₀, y₁, …, y_N)^T , F = (F₀, F₁, …, F_N)^T .

The algorithm we present below for the solution of this linear system is usually referred to as the Thomas algorithm. It is a special case of Gaussian elimination, or LU decomposition; in fact, it is this latter interpretation that we adopt here. We wish to express M as a product LU of a lower-triangular matrix

L = ( 1 0 0 ⋯ 0
  l₁ 1 0 ⋯ 0
  0 l₂ 1 ⋯ 0
  ⋮  ⋱ ⋱
  0 ⋯ 0 l_N 1 )

and an upper-triangular matrix

U = ( u₀ v₀ 0 ⋯ 0
  0 u₁ v₁ ⋯ 0
  ⋮  ⋱ ⋱
  0 ⋯ 0 u_{N−1} v_{N−1}
  0 ⋯ 0 0 u_N ) .

On multiplying L and U and equating the resulting matrix with M, we find that

u₀ = C₀ , v₀ = B₀ , and l_iu_{i−1} = A_i , l_iv_{i−1} + u_i = C_i , v_i = B_i , i = 1, …, N .

Hence

v_i = B_i , i = 0, …, N−1 , u₀ = C₀ ,
u_i = C_i − (A_iB_{i−1})/u_{i−1} , l_i = A_i/u_{i−1} , i = 1, …, N .

Given that the entries of the matrix M are known, the components of L and U can be computed from these formulae, and the set of linear equations My = F can be written in the following equivalent form:

Lz = F , Uy = z .

Thus, instead of solving My = F directly, we solve two linear systems in succession, each with a triangular matrix: Lz = F is solved for z, followed by solving Uy = z for y. Writing this out in detail yields

z₀ = F₀ , z_i = F_i − l_iz_{i−1} , i = 1, …, N ,

and

y_N = z_N/u_N , y_i = (z_i − v_iy_{i+1})/u_i , i = N−1, …, 0 .

Expressing in these formulae the values of u_i and v_i in terms of A_i, B_i, C_i, we find that

y_i = α_{i+1}y_{i+1} + β_{i+1} , i = N−1, …, 0 , y_N = β_{N+1} ,

where

α₁ = −B₀/C₀ , β₁ = F₀/C₀ ,
α_{i+1} = −B_i/(C_i + α_iA_i) , β_{i+1} = (F_i − β_iA_i)/(C_i + α_iA_i) , i = 1, 2, …, N .

This last set of formulae is usually referred to as the Thomas algorithm. A direct implementation is sketched below.
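The following Python sketch implements the LU form of the Thomas algorithm exactly as derived above and verifies it on the model problem −y″ = 1, y(0) = y(1) = 0, for which the three-point scheme reproduces the exact solution x(1−x)/2 (the test case is my own choice):

# Thomas algorithm: LU-decompose the tridiagonal system
# A_i y_{i-1} + C_i y_i + B_i y_{i+1} = F_i (A_0 and B_N are not used),
# then forward-substitute Lz = F and back-substitute Uy = z.
def thomas(A, C, B, F):
    n = len(C)
    u, z = [0.0]*n, [0.0]*n
    u[0], z[0] = C[0], F[0]
    for i in range(1, n):
        l = A[i] / u[i-1]                # the subdiagonal entry l_i of L
        u[i] = C[i] - l * B[i-1]         # u_i = C_i - A_i B_{i-1} / u_{i-1}
        z[i] = F[i] - l * z[i-1]         # forward substitution
    y = [0.0]*n
    y[-1] = z[-1] / u[-1]
    for i in range(n-2, -1, -1):         # back substitution
        y[i] = (z[i] - B[i] * y[i+1]) / u[i]
    return y

# -y'' = 1 on (0,1), y(0) = y(1) = 0, h = 1/4, at the interior points:
h = 0.25
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [h*h]*3))
# exact values of x(1-x)/2 at x = 1/4, 1/2, 3/4: [0.09375, 0.125, 0.09375]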
Given a starting value y(°) for the Newton iteration, subsequent iterates are computed successively from J(y(n))(y(n+1) _ y(n)} = _G(y(n)) , n = 0, 1, 2, . . . , (116) where dG (dG, \ is the Jacobi matrix of G. As Gi is independent of yj with \i — j\ > 1, the matrix J(y^n->) is tridiagonal. Consequently, each step of the Newton iteration (116) involves the solution of a system of linear equations with a tridiagonal matrix; this may be accomplished by using the Thomas algorithm described above. 6.3 Collocation method Consider the boundary value problem £y = y" + v{x)y'+ q{x)y = f(x) , xe(a,b), a0y(a)+b0y'(a) = c0 , (117) aiy(b) + hy'(b) = cx , where ao, a±, bo, b\ are real numbers such that (ao&i — ai&o) + aoai(k — a) 7^ 0 . (118) It can be assumed, without loss of generality, that eg = 0 and c\ = 0; for if this is not the case then we consider the function y defined by y(x) = y(x) - d(x) , with d{x) = ax + (3 where a and f3 are real numbers chosen so that d(x) satisfies the (nonhomogeneous) boundary conditions at x = a and x = b stated in (117). It is a straightforward matter to see that provided (118) holds there are unique such a and f3. The function y then obeys the differential equation Cy = f, where f{x) = f{x) — Cd{x), and satisfies the boundary conditions in (117) with eg = c\ = 0. Suppose that {ipi}f=1 is a set of linearly independent functions defined on the interval [a, b] satisfying the homogeneous counterparts of the boundary conditions in (117) (i.e. c0 = ci = 0). We shall suppose that each function ipi is twice continuously differentiable on [a, b]. The essence of the collocation method is to seek an approximate solution j/Ar(x) to (110) in the form N yN(x) = J2^Mx), (119) and demand that £yN(xj) = f(xj) , j = l,...,N, (120) 70 at (iV + 1) distinct points Xj, j = 1,..., N, referred to as the collocation points. We note that since each of the functions ipi(x) satisfies the (homogeneous) boundary conditions at x = a and x = b the same is true of yj\f(x). Now, (119-120) yield the system of linear equations n »r.r,;.r;! /;,-! . j = l,...,N, (121) i=i for the coefficients £j, i = 1,...,N. The specific properties of the collocation method depend on the choice of the basis functions ipi and the collocation points Xj. In general the matrix M = (£Vi(xj))i M for some integer M, 1 < M < N. If M is a fixed integer, independent of N, then the resulting system of linear equations can be solved in 0(N) arithmetic operations. In spectral collocation methods, the functions ipi(x) are chosen as trigonometric polynomials. Consider, for example, the simple boundary value problem -y"(x)+q(x)y(x) = f(x), x € (0,ir) , y{0) = y{Tv) = 0 , where q{x) > 0 for all x in [0, tt]. The functions V«(x) = s'mix, i = 1,..., N, satisfy the boundary conditions and are linearly independent on [0,7r]. Thus we seek an approximate solution yN(x) in the form N Vn(x) = ^iisinix . i=i Substitution of this expansion into the differential equation results in the system of linear equations N £& {f +q(xj)) sinixj = f(xj) , i = 1,... ,N , i=i for £i,... ,£n- The collocation points are usually chosen as jtt Xj = - , 7 = 1..... iV . This choice is particularly convenient when q is a (nonnegative) constant, for then the linear system £ & (f + q) sin ——- = f(xj) , i = 1,..., N , i=l can be solved for ^i,..., £n in O(N) arithmetic operations using a Fast Fourier Transform, despite the fact that the matrix of the system is full. 71 Further Exercises 1. 
Further Exercises

1. Verify that the following functions satisfy a Lipschitz condition on the respective intervals and find the associated Lipschitz constants:

a) f(x, y) = 2yx⁻⁴ , x ∈ [1, ∞) ;
b) f(x, y) = e^{−x²} tan⁻¹ y , x ∈ [1, ∞) ;
c) f(x, y) = 2y(1 + y²)⁻¹(1 + e^{−x}) , x ∈ (−∞, ∞) .

2. Suppose that m is a fixed positive integer. Show that the initial value problem

y′ = y^{2m/(2m+1)} , y(0) = 0 ,

has infinitely many continuously differentiable solutions. Why does this not contradict Picard's theorem?

3. Show that the explicit Euler method fails to approximate the solution y(x) = (4x/5)^{5/4} of the initial value problem y′ = y^{1/5}, y(0) = 0. Justify your answer. Consider the same problem with the implicit Euler method.

4. Write down the explicit Euler method for the numerical solution of the initial value problem

y′ + 5y = xe^{−5x} , y(0) = 0 ,

on the interval [0, 1] with step size h = 1/N, N ≥ 1. Denoting by y_N the Euler approximation to y(1) at x = 1, show that lim_{N→∞} y_N = y(1). Find an integer N₀ such that

|y(1) − y_N| ≤ 10⁻⁵ for all N ≥ N₀ .

5. Consider the initial value problem

y′ = log log(4 + y²) , x ∈ [0, 1] , y(0) = 1 ,

and the sequence {y_n}_{n=0}^N, N ≥ 1, generated by the explicit Euler method

y_{n+1} = y_n + h log log(4 + y_n²) , n = 0, …, N−1 , y₀ = 1 ,

using the mesh points x_n = nh, n = 0, …, N, with spacing h = 1/N. Here log denotes the logarithm with base e.

a) Let T_n denote the truncation error of Euler's method at x = x_n for this initial value problem. Show that |T_n| ≤ h/4.

b) Verify that

|y(x_{n+1}) − y_{n+1}| ≤ (1 + hL)|y(x_n) − y_n| + h|T_n| , n = 0, …, N−1 ,

where L = 1/(2 log 4).

c) Find a positive integer N₀, as small as possible, such that

max_{0≤n≤N} |y(x_n) − y_n| ≤ 10⁻⁴ for all N ≥ N₀ .

6. Define the truncation error T_n of the trapezium rule method

y_{n+1} = y_n + ½h(f_{n+1} + f_n)

for the numerical solution of the initial value problem y′ = f(x, y), y(0) given, where f_n = f(x_n, y_n) and h = x_{n+1} − x_n. By integrating by parts the integral

∫_{x_n}^{x_{n+1}} (x − x_{n+1})(x − x_n)y‴(x) dx ,

or otherwise, show that

T_n = −(h²/12)y‴(ξ_n)

for some ξ_n in the interval (x_n, x_{n+1}), where y is the solution of the initial value problem. Suppose that f satisfies the Lipschitz condition

|f(x, u) − f(x, v)| ≤ L|u − v|

for all real x, u, v, where L is a positive constant independent of x, and that |y‴(x)| ≤ M for some positive constant M independent of x. Show that the global error e_n = y(x_n) − y_n satisfies the inequality

|e_{n+1}| ≤ |e_n| + ½hL(|e_{n+1}| + |e_n|) + (h³/12)M .

For a uniform step h satisfying hL < 2 deduce that, if y₀ = y(x₀), then

|e_n| ≤ (h²M/(12L)) [ ((1 + ½hL)/(1 − ½hL))ⁿ − 1 ] .

7. Consider the following one-step method for the numerical solution of the initial value problem y′ = f(x, y), y(x₀) = y₀:

y_{n+1} = y_n + ½h(k₁ + k₂) , where k₁ = f(x_n, y_n) , k₂ = f(x_n + h, y_n + hk₁) .

Show that the method is consistent and that its truncation error has the form

T_n = (h²/6)[ f_y(f_x + f_yf) − ½(f_{xx} + 2f_{xy}f + f_{yy}f²) ] + O(h³) .

8. Show that for R = 1, 2 there is no R-stage Runge-Kutta method of order R + 1.

9. Consider the one-step method

y_{n+1} = y_n + h(a k₁ + b k₂) ,

where

k₁ = f(x_n, y_n) , k₂ = f(x_n + αh, y_n + βhk₁) ,

and where a, b, α, β are real parameters. Show that there is a choice of these parameters such that the order of the method is 2. Is there a choice of the parameters for which the order exceeds 2?

10. Consider the one-step method

y_{n+1} = y_n + αhf(x_n, y_n) + βhf(x_n + γh, y_n + γhf(x_n, y_n)) ,

where α, β and γ are real parameters. Show that the method is consistent if and only if α + β = 1. Show also that the order of the method cannot exceed 2.
Suppose that a second-order method of the above form is applied to the initial value problem y' = —Ay, y(0) = 1, where A is a positive real number. Show that the sequence (yn)n>o is bounded if and only if h < j. Show further that, for such A, \y{xn) - yn\ < ^X3h2xn , n > 0 . D 11. Derive the mid-point rule method and the Simpson rule method by integrating the differential equation y' = f(x,y(x)) over suitable intervals of the real line and applying appropriate numerical integration rules. 12. Write down the general form of a linear multistep method for the numerical solution of the initial value problem y' = f(x, y), y(xo) = yo- What does it mean to say that such a method is zero-stable? Explain the significance of zero-stability. What is the truncation error of such a linear multistep method? Determine for what values of the real parameter b the linear multistep method defined by the formula Vn+3 + (26 - 3)(yn+2 - yn+i) - yn = hb(fn+2 + fn+i) is zero-stable. Show that there exists a value of b for which the order of the method is 4. Show further that if this method is zero-stable then its order cannot exceed 2. 13. Consider the initial value problem y' = f(x,y), y(0) = yo- Consider attempting to solve this problem by the linear multistep method ayn-2 + fy/n-l + Vn+i = hf(x ni Vn) on the regular mesh xn = nh where a and b are constants. a) For a certain (unique) choice of a and b, this method is consistent. Find these values of a and b and verify that the order of accuracy is 1. 74 b) Although the method is consistent for the choice of a and b from part a), the numerical solution it generates will not, in general, converge to the solution of the initial value problem as h —> 0, because the method is not zero-stable. Show that the method is not zero-stable for these a and b, and describe quantitatively what the unstable solutions will look like for small h. 14. Consider the linear two-step method Vn+2 — Vn = g (/n+2 + 4/n+l + fn) ■ Show that the method is zero-stable; show further that it is third-order accurate, namely, Tn = 0(h3). 15. Show that the linear three-step method Hyn+3 + 27y„+2 - 27y„+i - llyn = 3h[fn+3 + 9fn+2 + 9/n+i + fn] is sixth order accurate. Find the roots of the first characteristic polynomial and deduce that the method is not zero-stable. 16. Write down the general form of a linear multi-step method for the numerical solution of the initial value problem y'' = f(x,y) , y(x0)=yo, on the closed real interval [xq, xn], where / is a continuous function of its arguments and yo is a given real number. Define the truncation error of the method. What does it mean to say that the method has order of accuracy pi Given that a is a positive real number, consider the linear two-step method Vn+2 ~ ayn = ^ [f(xn+2, yn+2) + 4f(xn+1,yn+1) + f(xn,yn)] , on the mesh {xn : xn = xq + nh,n = 1,..., N} of spacing h, h > 0. Determine the set of all a such that the method is zero-stable. Find a such that the order of accuracy is as high as possible; is the method convergent for this value of al 17. Compute a numerical solution to the initial value problem y' + y = 0, y(0) = 1, on the interval [0,1] with h = 2~k for k = 1, 2,..., 10, using the linear two-step method Vn - Vn-l = -h{fn-l ~ fn-?) , where the missing starting value y± is computed by the explicit Euler method. Tabulate the global error at x = 1 for k = 1,2,..., 10, and comment on its rate of decay. 18. 
Solve numerically the initial value problem y' = x - y2 , y(0) = 0 , x £ [0,1] , with step size h = 0.01 by a fourth-order Milne-Simpson method. Use the classical fourth-order Runge-Kutta method to compute the necessary starting values. 75 19. Find which of the following linear multistep methods for the solution of the initial value problem y' = f(x, y), y(0) given, are zero-stable. For any which are zero-stable, find limits on the value of h = xn+i — xn for which they are absolutely stable when applied to the equation y' = Ay, A < 0. a) yn+i -yn = hfn b) Vn+i +yn~ 2yn_i = h(fn+i + fn + fn-l) c) 2/n+l - yn-l = \h(fn+l + 4/n + fn-l) d) yn+i -yn = \h{3fn - fni) e) yn+l ~yn = J2~h(5fn+1 + 8/n - fn-l) 20. Determine the order of the linear multistep method yn+2 - (1 + a)yn+i +yn = ^h [(3 - a)fn+2 + (1 - 3a) fn] and investigate its zero-stability and absolute stability. 21. If 0. For values of a such that the method is zero-stable investigate its absolute stability using Schur's criterion. 24. A predictor P and a corrector C are defined by their characteristic polynomials: P: p*(z)=z4-l, a*(z) = |(2z3 - z2 +2z) , C : p(z)=z2-l, a(z) = \(z2 + 4z + 1) . a) Write down algorithms which use P and C in the P(EC)mE and P(EC)m modes. b) Find the stability polynomials ^p[EC)mE{z'^ h) and ivp(EC)m(zi ^) °f these methods. Assuming that m = 1, use Schur's criterion to calculate the associated intervals of absolute stability. c) Express the truncation errors Tn^EC^ E and Tn^EC^ of these methods in the form 0(hr) where r = r(p*,p, m), with p* and p denoting the orders of accuracy of P and C, respectively. 76 25. Which of the following would you consider a stiff initial value problem? a) y' -(105e~lo4:E + l)(y - 1), y(0) = 0 on the interval x G [0,1]. Note that the solution can be found in closed form: y(x) = eme-10**-De-x + ! . b) y[ = -0.5yi+0.501y2 , yi(0) = 1.1 , y'2 = 0.501yi - 0.5y2 , y2(0) = -0.9 , on the interval x G [0,103]. c) y' = A(x)y, y(0) = (1,1,1)T, where -1 + 100 cos 200x 100(1 - sin 200x) 0 A(x) = | -100(1 + sin 200x) -(1 + 100 cos 200x) 0 1200(cosl00x+ sinl00x) 1200(cos lOOx - sin lOOx) -501 26. Consider the 0-method Vn+l =yn + h[(l- 9)fn + Ofn+l] for 9 G [0,1]. a) Show that the method is A-stable for 9 > 1/2. b) Show that the method is A(a)-stable when it is A-stable. 27. Show that the second-order backward differentiation method is A-stable. Show that the third-order backward differentiation method is not A-stable, but that it is stiffly stable in the sense of Definition 13. 28. Find Xm > 0 as large as possible such that the system of differential equations Vi = -yi + xy2 V2 = x2(y1 - y2) is dissipative in the interval [0,Xm]. Deduce that any two solutions with respective initial conditions are then contractive on [0, Xm] m the Euclidean norm. 29. Show that the trapezium rule method is G-stable with G = 1. 30. Show that the second-order backward differentiation method and its one-leg twin are G-stable with " 5/2 -1 -1 1/2 G 31. Develop a shooting method for the numerical solution of the boundary value problem -y"+yey = l, y(0)=y(l)=0, on the interval [0,1] using: 77 a) The method of bisection; b) The Newton-Raphson method. Explain how your algorithm can be extended to the case of multiple shooting. 32. Construct a three-point finite difference scheme for the numerical solution of the boundary value problem -y" + x2y = 0, y(0) = 1 , y'(l) = 0 , on the interval [0,1]. Show that the resulting system of equations can be written so that its matrix is tridiagonal. 
Apply the Thomas algorithm to the linear system when the spacing between the mesh points is h = 1/10. 33. Suppose that real numbers a,i, bi and c« satisfy at > 0 , b{ > 0 , Ci>ai+bi, i = 1,2,..., N - 1 , and let e, = ^- , i = l,2,...,N-1 , with eo = 0. Show by induction that 0 < < 1 for i = 1,2,..., N — 1, and that the conditions Ci > 0 , Ci > \a,i\ + \bi\ , i = 1,2,..., N - 1, are sufficient for eo = 0 to imply that |ej| < 1 for i = 1, 2,..., N — 1. How is this method used to solve the system of equations -diyi-i + Ciyi - hyi+i = fi , i = 1,2,..., N - 1 , with 2/0 = 0 , yN = 0 ? 34. Assume that the boundary value problem y" + f(x,y) = 0, 02. a) Show that the truncation error of the finite difference scheme is given by for some £n G (xn_i,xn+i). 78 Show further that the global error en = y{xn) — yn satisfies h~252en + pnen = Tn , 1 < n < N - 1 , e0 = eN = 0 , where pn = fy(xn,r]n) for some r]n between y{xn) and yn, and it is assumed that fy(x, y) is a continuous function of x and y for x £ [0,1] and y G R. b) Suppose now that fy(x,y) < 0 for all x £ [0,1] and y G R. Let |Tn| < M, 1 < n < N- 1. Show that wn = \h2Mn{N -n) satisfies h'252wn = -M, that hT2b2wn + pnwn < -M , 1 < n < N - 1 , and that, if vn = wn + en or un = iun — en, then un satisfies h~252vn + pnwn < 0 , 1 < n < iV — 1 , vq = vn = 0 . c) Assume that un has a negative minimum for some value of n between 1 and N — 1 and show that this leads to a contradiction. Deduce that vn>0, 1