MASARYK UNIVERSITY Faculty of Science Department of mathematics and statistics Habilitation Thesis Brno 2016 Petr Zem´anek MASARYK UNIVERSITY Faculty of Science Department of mathematics and statistics Discrete Symplectic Systems and Square Summable Solutions Habilitation Thesis Petr Zem´anek 2010 Mathematics Subject Classification: Primary 39A10; 39A12; 39A70; 47A06. Secondary 34B20; 34B05; 47A20; 47B25; 47B37; 47B39. Keywords: Discrete symplectic system; time-reversed system; linear Hamiltonian difference system; Weyl–Titchmarsh theory; eigenvalue; M(λ)-function; Atkinson-type condition; Weyl disk; Weyl circle; Weyl solution; square summable solution; limit point case; limit circle case; limit circle invariance; criteria; separated endpoints; jointly varying endpoints; periodic endpoints; definiteness condition; nonhomogeneous problem; linear relation; deficiency index; self-adjoint extension; uniqueness; Krein–von Neumann extension. © Petr Zem´anek, Masaryk University, 2016 Preface A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street. David Hilbert, see [86, pg. 438] This habilitation thesis is submitted as an integral part of the promotion to associate professor at Masaryk University (Faculty of Science) in the field Mathematics – Mathematical analysis. It reflects my research from the period of 2011 to 2016, which includes the postdoc position supported by the Program “Employment of Newly Graduated Doctors of Science for Scientific Excellence” (grant number CZ.1.07/2.3.00/30.0009) co-financed from European Social Fund and the state budget of the Czech Republic. I would like to express gratitude to my collaborator, colleague, friend, and former advisor Roman ˇSimon Hilscher for his continual support, many hours of stimulating and inspiring discussions, and his comments to a pre-final version of the manuscript. I am also deeply indebted to Stephen Clark for the (hopefully successful and pleasant) collaboration and to the Department of Mathematics and Statistics (Missouri University of Science and Technology, Rolla, USA) for hosting my visits in 2012 and 2014, where some parts of the presented results were achieved. In addition, many thanks belong to the heads of our scientific team, Zuzana Doˇsl´a and Ondˇrej Doˇsl´y. Finally, I highly appreciate the financial support of the Dean of the Faculty of Science provided for the Spring semester 2016, which enabled me to focus on this text only. Nevertheless, this work could not be written without the unflagging support and endless tolerance of my wife P´et’a. I also apologize to our daughter Stella that we did not spend more time together during the first months of her life. I am extremely thankful to both of them with ( x2 + 9 4 y2 + z2 − 1 )3 − x2 z3 − 9 200 y2 z3 = 0, r(θ) = 1 + √ − ln ( 2 e−0.812 − e−1.42 sin2 [5 (θ−π/2)/2] ) . The text was typeset by using XELATEX and pictures were generated by GeoGebra. Brno, August 2016 Petr Zem´anek Table of contents Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Notation and auxiliary results . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Discrete symplectic systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Chapter 2. 
Weyl–Titchmarsh theory for general linear dependence on spectral parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2 Spectral theory on bounded interval . . . . . . . . . . . . . . . . . . . . . . . 16 2.3 Weyl disk and Weyl circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.4 Square summable solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.5 Illustrating examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2.6 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Chapter 3. Jointly varying endpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1 Weyl–Titchmarsh theory for jointly varying endpoints . . . . . . . . . . . . 42 3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 3.3 Augmented symplectic system . . . . . . . . . . . . . . . . . . . . . . . . . . 49 3.4 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 Chapter 4. Invariance of limit circle case for two discrete systems . . . . . . . . . . . . . . . . 55 4.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 4.2 Main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.3 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Chapter 5. Polynomial and analytic dependence on spectral parameter . . . . . . . . . . 65 5.1 Preliminaries and Lagrange identity . . . . . . . . . . . . . . . . . . . . . . . 66 5.2 Special examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 5.3 Weyl–Titchmarsh theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.4 Special quadratic dependence and limit circle case . . . . . . . . . . . . . . 79 5.4.1 Results for one system . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.4.2 Results for two systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 5.5 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 – vii – Table of contents Chapter 6. Nohomogeneous problem and maximal and minimal linear relations 89 6.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 6.2 Definiteness condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 6.3 Nonhomogeneous problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 6.4 Maximal and minimal linear relations . . . . . . . . . . . . . . . . . . . . . . 106 6.4.1 Linear relations and definiteness . . . . . . . . . . . . . . . . . . . . . 107 6.4.2 Orthogonal decomposition of sequence spaces . . . . . . . . . . . . . 110 6.4.3 Minimal linear relation and its deficiency indices . . . . . . . . . . . . 112 6.5 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Chapter 7. Self-adjoint extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 7.1 Limit point criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 7.2 Main results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 7.3 Proof of main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
130 7.4 Bibliographical notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Appendix: Linear relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 List of symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 List of author’s publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 – viii – Chapter 1 Introduction The approach turns out to be fruitful and successful, and leads to the effective construction as well as the theoretical understanding of an abundance of what we call symplectic difference scheme, or symplectic algorithms, or simply Hamiltonian algorithms, since they present the proper way, i.e., the Hamiltonian way for computing Hamiltonian dynamics. Kang Feng, see [74, pg. 18] In this habilitation thesis we present our recent contributions to the ongoing development of the theory of square summable solutions of discrete symplectic systems. It is based on results, which were achieved by the author and his scientific collaborators (S. Clark and R. ˇSimon Hilscher) during his postdoctoral research in the period 2011–2016. The major part of them was published in papers [A13,A15,A17,A18,A21] and in a more general setting also in papers [A16,A19,A20]. Systematic research in this area began in 2010, when M. Bohner and S. Sun in [26] and independently (and more extensively) S. Clark and the author in [A4] investigated square summable solutions of discrete symplectic systems with a special linear dependence on the spectral parameter, see the beginning of Chapter 2 for more details. In the present work we collect our results for discrete symplectic systems with general linear dependence on the spectral parameter as well as with a polynomial or analytic dependence. However, we emphasize that these results do not only improve the type of the dependence on the spectral parameter, but they significantly generalize and extend the results of [26] and [A4]. We also lay foundations of the “operator theory” for discrete symplectic systems, which is intimately connected with the topic of square summable solutions. The thesis consists of seven chapters and an appendix. ▶ In the next sections of this introductory chapter we summarize the used notation and some important results from linear algebra, define discrete symplectic systems, and show some of their special cases. ▶ In Chapter 2 we develop the limit point and limit circle classification for discrete symplectic systems, which depend linearly on the spectral parameter. In particular, we investigate the associated eigenvalue problem with separated boundary conditions, the Weyl disks and Weyl circles, their limiting behavior, and properties of square summable solutions including the precise analysis of the number of linearly independent square summable solutions as well as some criteria for the limit point and limit circle cases. This chapter is based on [A15]. ▶ Since the theory of Chapter 2 is based on separated boundary conditions, we focus in – 1 – Chapter 1. Introduction Chapter 3 on the spectral theory for discrete symplectic systems with general jointly varying endpoints. 
We characterize the eigenvalues, construct the M(λ)-function and Weyl disks, their matrix radii and centers, and discuss the number of linearly independent square summable solutions. These results include several particular cases, such as the periodic and antiperiodic endpoints. The method utilizes a new transformation to separated endpoints, which is simpler and more transparent than the one in the known literature. This chapter is based on [A13]. ▶ In Chapter 4 we extend the invariance of the limit circle case to two linear discrete systems depending linearly on spectral parameter. The main result is a discrete analogue of the corresponding continuous time statement, which was derived by Walker for a pair of non-hermitian linear Hamiltonian differential systems in [168]. This chapter is based on [A19]. ▶ In Chapter 5 we consider discrete symplectic systems with polynomial and analytic dependence on the spectral parameter. We derive fundamental properties of these systems (including the Lagrange identity) and discuss their connection with systems known in the literature. In analogy with the results of Chapter 2, we present a construction of the Weyl disks and determine the number of linearly independent square summable solutions. In addition, we prove the invariance of the limit circle case for a special quadratic dependence on the spectral parameter and its extension to the case of two (generally non-symplectic) discrete systems. We also provide several illustrative examples, one of which contradicts the invariance of the limit circle case for symplectic systems depending truly analytically (i.e., nonpolynomially) on the spectral parameter. This chapter is based on [A17]. ▶ In Chapter 6 we study the definiteness of the discrete symplectic system, pay an attention to a nonhomogeneous discrete symplectic system, and introduce the minimal and maximal linear relations associated with these systems. We also show some fundamental properties of the corresponding deficiency indices, including a relationship between the number of square summable solutions and the dimension of the defect subspace. Moreover, we give a sufficient condition for the existence of a densely defined operator associated with a discrete symplectic system. This chapter is based on [A18]. ▶ In Chapter 7 we characterize all self-adjoint extensions of the minimal linear relation. Especially for the scalar case on a finite discrete interval we present some equivalent forms, discuss their uniqueness, and describe the Krein–vonNeumann extension. In addition, we establish a limit point criterion, which partially generalizes a classical limit point criterion for the second order Sturm–Liouville difference equations. This chapter is based on [A21]. ▶ In order to make the thesis self-contained we conclude this work by a short overview of basic definitions and some important results from the theory of linear relations, which is utilized in Chapters 6 and 7. We close each chapter by a section concerning bibliographical notes, in which we mention some open problems and possible directions for our future research. For readers’ convenience, we provide also a list of symbols, which are used throughout the thesis. Finally, we include an overview of author’s publications. For completeness, we point out that the results of Chapters 2–5 were further extended to symplectic systems on time scales in [A16,A19,A20]. This generalization enables us to – 2 – 1.1. 
Notation and auxiliary results unify and compare the corresponding results for linear Hamiltonian differential systems and discrete symplectic systems. Moreover, since some of the studied problems were not considered for linear Hamiltonian differential systems (such as the jointly varying endpoints or the analytic dependence on the spectral parameter), it yields even new results for these systems. 1.1 Notation and auxiliary results In this section we summarize the notation used through this thesis and recall several known facts from linear algebra (see also the list of symbols on page 141). The sets of natural numbers, integers, real and complex numbers are, respectively, denoted by N, Z, R, and C. Moreover, N0 := N ∪ {0}. For any λ ∈ C the symbols ¯λ, re(λ), im(λ), and δ(λ) represent, respectively, the complex conjugate of λ, the real and imaginary parts of λ, and the sign of the imaginary part of λ i.e., δ(λ) := sgn(im(λ)). We also use the symbols C+ and C− for the upper and lower complex half-planes, i.e., we put C+ := {λ ∈ C | δ(λ) = 1} and C− := {λ ∈ C | δ(λ) = −1}. Typically, all vectors and matrices are written by small and capital letters, respectively. All matrices are considered over the field of complex numbers C. For r ∈ N we denote the r × r identity and zero matrices by Ir and 0r. If the dimension is clear from the context, we write only I and 0 (for simplicity, the zero vector will also be denoted by 0). For r, s ∈ N we mean by Cr×s the space of r × s complex matrices M = (mi,j)i=1,...,r j=1,...,s and Cr×1 is abbreviated as Cr. If M1, . . . , Mm ∈ Cr×r, then diag{M1, . . . , Mm} represents the block diagonal matrix M ∈ Cmr×mr with the matrices M1, . . . , Mm on the main diagonal. For a given matrix M ∈ Cr×s we indicate by M⊤, M, M∗, rank M, tr M, det M, M > 0, M ≥ 0, Madj, Ker M, Ran M, dim Ran M, sprad M, im(M) := (M−M∗)/(2i), and re(M) := (M+M∗)/2, respectively, its transpose, conjugate, conjugate transpose, rank, trace, determinant, positive definiteness, positive semidefiniteness, adjugate (or adjoint) matrix, kernel, range (or image, i.e., the space spanned by the columns of M), the dimension of Ran M, spectral radius, and Hermitian components (or real and imaginary parts, see [102, pg. 170] or [16, Fact 3.7.29]). In addition, by Mp,q we mean the submatrix of M ∈ Cr×s consisting of the first p ≤ r rows and of the first q ≤ s columns of the matrix M and we write only Mp in the case p = q, i.e., for the p-th leading principal submatrix of M. We recall that two Hermitian and positive semidefinite matrices L, M ∈ Cr×r with L ≤ M satisfy Ran L ⊆ Ran M and rank L ≤ rank M, (1.1) where the equalities occur simultaneously, i.e., it holds Ran L = Ran M if and only if rank L = rank M, see e.g. [16, Fact 8.10.2]. Furthermore, for any matrices L ∈ Cr×s, M ∈ Cs×p, P ∈ Cr×q, and Q ∈ Cr×r we have rank L = rank LL∗ = rank L∗ L, (1.2) rank L + rank M − s ≤ rank LM ≤ min{rank L, rank M}, (1.3) rank(L, P) + dim[Ran L ∩ Ran P] = rank L + rank P, (1.4) (I − Q)−1 = ∞∑ k=0 Qk , when sprad Q < 1, (1.5) see e.g. [16, Corollaries 2.5.1, 2.5.3, 2.5.10 and Facts 2.11.9, 4.10.5]. In the following statements we show two important properties of unitary matrices. The first proposition can be found in [102, Lemma 2.1.8] and the second statement is from [108, Theorem 5.3]. – 3 – Chapter 1. Introduction Proposition 1.1.1. Let r ∈ N and U1, U2, · · · ∈ Cr×r be a given sequence of unitary matrices. Then there exists a subsequence Uk1 , Uk2 , . . . 
such that all of the entries of Ukj converge (as a sequences of complex numbers) to the entries of a unitary matrix U as j → ∞. Proposition 1.1.2. Let r ∈ N and the matrices L, M ∈ Cr×r be such that rank L = ℓ and rank M = m. Then sup{rank LUM | U ∈ Cr×r is unitary} = min{ℓ, m}. We also point out, see e.g. [A4, Remark 2.6], that im(M) > 0 or im(M) < 0 implies the invertibility of the matrix M. Moreover, we write only M∗−1 instead of (M∗)−1 or ( M−1 )∗ and similarly for parameter dependent matrices M∗ (λ) := [M(λ)]∗ , M−1 (λ) := [M(λ)]−1 , and M∗−1 (λ) := [M∗ (λ)]−1 = [M−1 (λ)]∗ . Finally, if we denote by the symbol S⊥ the orthogonal complement of a subspace S of an inner product space, then the codimension of S is defined as codim S := dim S⊥ and it holds S⊥ 1 ⊇ S⊥ 2 for any subspaces S1 ⊆ S2. Moreover, for any M ∈ Cr×s we have Ran M = (Ker M∗ )⊥ , (1.6) see [16, Theorem 2.4.3]. For any matrix M = (mi,j)i,j=1,...,r ∈ Cr×r we define its entrywise H¨older norm (or ℓ1-norm) and spectral norm, respectively, as ||M||1 := r∑ i=1 r∑ j=1 |mi,j | and ||M||σ := max { √ µ | µ is an eigenvalue of M∗ M } , see [102, Section 5.6] or [16, Chapter 9]. These norms satisfy the estimates ||M||σ ≤ ||M||1 ≤ r √ rank M × ||M||σ, (1.7) see [16, Fact 9.8.12 (v)], and possess the submultiplicative and self-adjoint properties, i.e., ||ML||a ≤ ||M||a × ||L||a and ||M∗ ||a = ||M||a, where a = 1 or a = σ. The spectral norm is also unitarily invariant, i.e., ||UMV||σ = ||M||σ for any unitary matrices U, V ∈ Cr×r, which implies that ||L||σ ≤ ||M||σ for any Hermitian matrices L, M ∈ Cr×r such that L ≤ M, see [16, Fact 9.9.5]. Moreover, the spectral norm is the matrix norm induced by the Euclidean vector norm on Cr, i.e., by the norm ||v||2 := (v∗v)1/2 for any v ∈ Cr, see [16, Proposition 9.49]. In other words, the inequality ||Mv||2 ≤ ||M||σ ||v||2 (1.8) holds true for any M ∈ Cr×r and v ∈ Cr. A matrix M ∈ Cr×r is said to be nilpotent provided there exists m ∈ N such that Mm = 0. The following proposition can be found in [16, Fact 3.17.9]. Proposition 1.1.3. Let L, M ∈ Cr×r be such that the matrix L is nilpotent and the matrices commute, i.e., LM = ML. Then det(L + M) = det M. Following [16, Chapter 4], for λ ∈ C we define the polynomial matrix M(λ) as M(λ) := λm Mm + λm−1 Mm−1 + · · · + λM1 + M0, where Mm, . . . , M0 ∈ Cr×r. The matrix-valued function M(λ) is called singular if det M(λ) is zero for all λ ∈ C, otherwise M(λ) is called nonsingular. Moreover, M(λ) is called unimodular if det M(λ) is a nonzero constant. The latter condition is equivalent to the fact that M(λ) is nonsingular and that M−1(λ) is also a polynomial matrix, see [16, Proposition 4.3.7]. – 4 – 1.2. Discrete symplectic systems For M ∈ Cr×r we also define the matrix exponential exp(M) as exp(M) := ∞∑ j=0 1 j! Mj . Then for L, M, P, Q ∈ Cr×r with L being nonsingular and PQ = QP we have det[exp(M)] = etr M , L exp(M)L−1 = exp(LML−1 ), (1.9) exp(P) exp(Q) = exp(Q) exp(P) = exp(P + Q), (1.10) see [16, Corollary 11.2.4, Proposition 11.2.8(v), Corollary 11.1.6] . If I is an interval in R, then the associated discrete interval IZ is the set of integers in I, i.e., IZ := I ∩ Z. In particular, N = [1, ∞)Z. With N ∈ N ∪ {0, ∞} we will be interested in the discrete intervals, which are bounded or unbounded above, i.e., IZ := [0, N + 1)Z. Then we define I+ Z := [0, N + 1]Z with the understanding that I+ Z = IZ when N = ∞. If N is finite we write rather [0, N]Z instead of [0, N + 1)Z. 
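Before turning to sequence spaces, the estimates (1.7) and (1.8) above can be illustrated numerically. The following minimal sketch is our own illustration (it assumes the NumPy library and a randomly generated matrix, neither of which appears in the cited sources); it simply compares the entrywise and spectral norms on one sample matrix.

import numpy as np

rng = np.random.default_rng(0)
r = 4
M = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))
v = rng.standard_normal(r) + 1j * rng.standard_normal(r)

norm_1 = np.abs(M).sum()                  # entrywise Hoelder norm ||M||_1
norm_sigma = np.linalg.norm(M, ord=2)     # spectral norm ||M||_sigma (largest singular value)
rank_M = np.linalg.matrix_rank(M)

# estimate (1.7): ||M||_sigma <= ||M||_1 <= r * sqrt(rank M) * ||M||_sigma
assert norm_sigma <= norm_1 <= r * np.sqrt(rank_M) * norm_sigma
# estimate (1.8): ||M v||_2 <= ||M||_sigma * ||v||_2
assert np.linalg.norm(M @ v, 2) <= norm_sigma * np.linalg.norm(v, 2) + 1e-12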
By C(IZ)r×s we denote the space of sequences, defined on IZ, of complex r × s matrices, where typically r ∈ {n, 2n} and 1 ≤ s ≤ 2n. In particular, we write only C(IZ)r in the case s = 1. If M ∈ C(IZ)r×s, then M(k) := Mk for k ∈ IZ; if M(λ) ∈ C(IZ)r×s, then M(λ, k) := Mk(λ) for k ∈ IZ. When M ∈ C(IZ)r×s and L ∈ C(IZ)s×q, then ML ∈ C(IZ)r×q, where (ML)k := Mk Lk for all k ∈ IZ. The set C0(IZ)r×s represents the subspace of C(IZ)r×s consisting of all sequences compactly supported in the discrete interval IZ. The symbol Δ means the forward difference operator acting on C(IZ)r×s, i.e., we put (Δz)k := Δzk = zk+1 − zk. Moreover, we let zk |n m := zn − zm. Finally, the next result follows directly from [9, Theorem IV.1.1] and it concerns a sufficient condition for the boundedness of any fundamental matrix of a recurrence relation. Proposition 1.1.4. Let M ∈ C([0, ∞)Z)r×r be such that ∑∞ k=0 ||Mk − I||1 < ∞. Then all solutions of the recurrence relation uk+1 = Mk uk, k ∈ [0, ∞)Z, (1.11) converge as k → ∞, i.e., for any fundamental matrix U ∈ C([0, ∞)Z)r×r of system (1.11) there exists κ > 0 such that ||Uk ||1 < κ for all k ∈ [0, ∞)Z. In addition, if Mk is invertible for all k ∈ [0, ∞)Z, then limk→∞ uk ≠ 0 for any nontrivial solution u ∈ C([0, ∞)Z)r of system (1.11). 1.2 Discrete symplectic systems Let us define the real 2n × 2n skew-symmetric matrix J := ( 0 In −In 0 ) . (1.12) Then det J = 1 and J can be seen as a matrix analogue of the complex unit i, because J2 = −I. Moreover, J⊤ J = I and J−1 = −J = J⊤. A matrix M ∈ C2n×2n is called Hamiltonian if the matrix JM is Hermitian, i.e., M∗ J + JM = 0, and it is said to be symplectic1,2 whenever M∗ JM = J. (1.13) The simplest examples of symplectic matrices are I2n and J. From (1.13) one easily observes that every symplectic matrix is invertible and satisfies | det M| = 1 (in the case of a real symplectic matrix we have even det M = 1, see e.g. [117, pg. 3] or [115, Appendix 3, Theorem 5]). In addition, M is symplectic if and only if M−1 = −JM∗J. Therefore the set of 2n × 2n symplectic matrices over C forms a group with respect to the standard matrix multiplication. We also note that condition (1.13) is equivalent to MJM∗ = J, i.e., M is symplectic if and only if M∗ is symplectic. A discrete symplectic system is the first order system of recurrence relations zk+1 = Sk zk, (1.14) where k belongs to some discrete interval IZ and S ∈ C(IZ)2n×2n with Sk being symplectic matrices for all k ∈ IZ. These systems naturally arise in the discrete calculus of variations and optimal control theory as Jacobi systems obtained from the weak Pontryagin maximum principle applied to the second variation of a functional, see e.g. [89–92,94,151]. Moreover, these systems can also be found in numerical integration schemes for Hamiltonian systems or in the theory of continued fractions, see e.g. [31,47,73–77,140] and [3, Chapter 2], respectively. The origin of a systematic treatment of discrete symplectic systems goes back to [3], see also [4,20]. However, some aspects of this theory can be observed at least 30 years earlier in [9, Section 3]. In the last two decades, the theory of discrete symplectic systems has been developed in various directions such as the Reid roundabout theorem, see e.g. [18,24,53,87,92–94,137,138], trigonometric and hyperbolic systems, see e.g. [5,21,22,58,59,A3], and Sturmian, spectral, and oscillation theory, see e.g. [25,50,54–57,62–65,67,68,149].
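As a complement to the definition above, the identity (1.13) can be checked numerically. The sketch below is our own illustration (it assumes NumPy/SciPy and a randomly generated matrix, none of which come from the cited literature); it builds a candidate matrix as the exponential of a Hamiltonian matrix JK with K Hermitian (exponentials of Hamiltonian matrices are symplectic) and verifies (1.13), | det M| = 1, and the inverse formula M−1 = −JM∗J.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

K = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
K = (K + K.conj().T) / 2                      # Hermitian K
H = J @ K                                     # then J H = -K is Hermitian, i.e., H is Hamiltonian
assert np.allclose(H.conj().T @ J + J @ H, 0)

M = expm(H)                                   # the exponential of a Hamiltonian matrix
assert np.allclose(M.conj().T @ J @ M, J)     # the defining identity (1.13)
assert np.isclose(abs(np.linalg.det(M)), 1.0)                # |det M| = 1
assert np.allclose(np.linalg.inv(M), -J @ M.conj().T @ J)    # M^{-1} = -J M^* J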
Since the matrices Sk are symplectic on IZ, it follows from their invertibility that any initial value problem associated with system (1.14) and with an initial condition given at an arbitrary point k0 ∈ IZ possesses a unique solution z ∈ C(I+ Z )2n. Moreover, any fundamental matrix of system (1.14) is symplectic on IZ if and only if it is symplectic at some index k ∈ IZ. The same property has also any fundamental matrix of the linear Hamiltonian differential system z′ (t) = H(t)z(t), (1.15) where t belongs to some interval I and H(t) is a piecewise continuous Hamiltonian matrix on I, see e.g. [115, Appendix 3, Theorem 3]. Therefore system (1.14) can be regarded as 1 The term symplectic in this context was suggested by the German mathematician Hermann Klaus Hugo Weyl (1885–1955) in his book [173, pg. 165]: The name “complex group” formerly advocated by me in allusion to line complexes, as these are defined by the vanishing of antisymmetric bilinear forms, has become more and more embarrassing through collision with the word “complex” in the connotation of complex number. I therefore propose to replace it by the corresponding Greek adjective “symplectic”. Dickson calls the group the “Abelian linear group” in homage to Abel who first studied it. 2 A matrix M ∈ C2n×2n satisfying equality (1.13) is also referred as conjugate symplectic. Moreover, if condition (1.13) is replaced by M⊤ JM = J and applied to matrices M ∈ C2n×2n or M ∈ R2n×2n , then it is called complex or real symplectic, respectively, see e.g. [117]. Nevertheless, we suppress the adjective “conjugate”, because we will consider only complex matrices and identity (1.13) throughout this thesis. – 6 – 1.2. Discrete symplectic systems the proper discrete counterpart of system (1.15), see also [52] and Remark 1.2.1(iv) below. We note that the Hamiltonian property of the matrix H(t) implies the block structure H(t) = ( A(t) B(t) C(t) −A∗(t) ) for some piecewise continuous n × n matrix-valued functions A(t), B(t), and C(t) such that B(t) = B∗(t) and C(t) = C∗(t) on I. Moreover, system (1.15) can be equivalently written as − Jz′ (t) = H(t)z(t), (1.16) where H(t) is Hermitian on I. For completeness we remark that some authors deal rather with a matrix J instead of J given in (1.12), where J = ( 0 −In In 0 ) = −J or more generally J is any nonsingular 2n × 2n matrix satisfying J ∗ = −J, see e.g. [101] or [9, Chapter 9]. Moreover, it is also possible to replace J by a 2n×2n matrix-valued function J(t) such that det J(t) 0 and J ∗ (t) = −J(t) for all t ∈ I, see e.g. [116]. Remark 1.2.1. In order to emphasize the importance and generality of system (1.14) we show now that it includes several equations or systems, which have been intensively studied in the literature. Therefore we fix the numbers n ∈ N, N ∈ N0 and, for simplicity, divide the vector zk into two blocks of the same size and the coefficient matrix Sk of system (1.14) into four blocks of the same size as zk = ( xk uk ) and Sk = ( Ak Bk Ck Dk ) . Then the symplecticity of the matrices Sk and S∗ k is equivalent with the conditions A∗ k Dk − C∗ k Bk = I = Ak D∗ k − Bk C∗ k (1.17) and the matrices A∗ k Ck, B∗ k Dk, Ak B∗ k, Ck D∗ k are Hermitian. (1.18) Let us also note that although in parts (i)–(iii) below we consider only finite discrete intervals, the discussed equivalences remain valid (with appropriate modifications) for any type of an unbounded discrete interval. (i) Let the number m ∈ N be fixed and P[0] ∈ C([0, N]Z)n×n, P[1] ∈ C([0, N + 1]Z)n×n, . . . 
, P[m] ∈ C([0, N + m]Z)n×n be sequences of complex-valued n × n Hermitian matrices with det P[m] k 0 for all k ∈ [0, N + m]Z. Then the n-vector-valued Sturm–Liouville difference equation of order 2m, i.e., m∑ s=0 (−1)s s ( P[s] k s yk+1−s ) = 0, (1.19) is equivalent to a discrete symplectic system of a special form. More specifically, if y ∈ C([1−m, N+m+1]Z)n solves equation (1.19) on [0, N]Z, then z ∈ C([0, N+1]Z)2mn with the components xk :=   yk ... r−1yk+1−r ... m−1yk+1−m   , uk :=   ∑m s=1(− )s−1 ( P[s] k s yk+1−s ) ... ∑m s=r(− )s−r ( P[s] k s yk+1−s ) ... P[m] k myk+1−m   (1.20) – 7 – Chapter 1. Introduction solves symplectic system (1.14) on [0, N]Z, where S ∈ C([0, N]Z)2mn×2mn with the mn × mn blocks Ak :=   I I · · · I 0 I · · · I ... ... ... ... 0 · · · 0 I   , Dk :=   I 0 · · · · · · 0 P[0] k ( P[m] k )−1 −I I 0 · · · 0 P[1] k ( P[m] k )−1 0 −I I · · · 0 P[2] k ( P[m] k )−1 ... ... ... ... ... ... 0 · · · · · · · · · −I I + P[m−1] k ( P[m] k )−1   , (1.21) Bk :=   0 · · · 0 ( P[m] k )−1 ... ... ... ... 0 · · · 0 ( P[m] k )−1   , Ck :=   P[0] k P[0] k · · · P[0] k 0 P[1] k · · · P[1] k ... ... ... ... 0 · · · 0 P[m−1] k   . (1.22) On the other hand, let z ∈ C([0, N + 1]Z)2mn solve system (1.14) on [0, N]Z with Sk having the same block structure as in (1.21)–(1.22) and denote the first n components of xk as yk for all k ∈ [0, N + 1]Z. Then according to the transformation in (1.20) we can extend the definition of yk to the interval [1 − m, N + m + 1]Z. In particular, the relation between the components of xk and yk applied at k = 0 yields y−1, . . . , y1−m, while the relation between the components of uk and yk applied at k = N+1, together with the invertibility of P[m] k on [N + 1, N + m]Z, yields yN+2, . . . , yN+m+1. Then we have y ∈ C([1 − m, N + m + 1]Z)n, which satisfies equation (1.19) on [0, N]Z. (ii) From the previous part it follows that any symplectic system (1.14) with Ak ≡ I and det Bk 0 on the discrete interval [0, N]Z can be reduced to the second order Sturm–Liouville difference equation − ( P[1] k yk ) + P[0] k yk+1 = 0. (1.23) Indeed, if N ≥ 1 and we put yk := xk for all k ∈ [0, N + 1]Z, then equality (1.23) is satisfied on [0, N − 1]Z with P[1] k := B−1 k and P[0] k := Ck. Moreover, equation (1.23) is a special case of the Jacobi equation − ( Pk yk + R∗ k yk+1 ) + Qk yk+1 + Rk yk = 0, (1.24) where k ∈ [0, N]Z, P, R ∈ C([0, N + 1]Z)n×n with matrices Pk being Hermitian on [0, N + 1]Z and Pk + R∗ k invertible for all k ∈ [0, N + 1]Z, and Q ∈ C([0, N]Z)n×n with Qk being Hermitian on [0, N]Z. But also equation (1.24) can be written as a discrete symplectic system and, under an additional assumption, vice versa. More precisely, if y ∈ C([0, N +2]Z)n solves equation (1.24), then the pair xk := yk, k ∈ [0, N +2]Z, and uk := Pk yk + R∗ k yk+1, k ∈ [0, N + 1]Z, solves system (1.14) on [0, N]Z, where Ak := (Pk + R∗ k)−1 Pk, Dk := (Pk + R∗ k + Rk + Qk)(Pk + R∗ k)−1 , Bk := (Pk + R∗ k)−1 , Ck := Qk (Pk + R∗ k)−1 Pk − Rk (Pk + R∗ k)−1 R∗ k. 
On the other hand, if N ≥ 1 and z ∈ C([0, N + 1]Z)2n solves system (1.14) with det Bk ≠ 0 on [0, N]Z, then yk := xk, k ∈ [0, N + 1]Z, satisfies equation (1.24) for all k ∈ [0, N − 1]Z with the coefficient matrices (observe that Pk + R∗ k = B−1 k ) Pk := B−1 k Ak, Rk := (I − A∗ k)B∗−1 k , Qk := (Dk − I)B−1 k + (A∗ k − I)B∗−1 k . (iii) Upon expanding the difference operators in (1.23) or in (1.24), we obtain special cases of the symmetric three-term recurrence relation, i.e., Sk+1 yk+2 − Tk+1 yk+1 + S∗ k yk = 0, (1.25) where we have k ∈ [0, N]Z, S ∈ C([0, N + 1]Z)n×n with det Sk ≠ 0 on [0, N + 1]Z, and T ∈ C([1, N + 1]Z)n×n with T∗ k = Tk on [1, N + 1]Z. In particular, equation (1.24) leads to (1.25) with Sk = Pk + R∗ k and Tk = Pk + Pk−1 + R∗ k−1 + Rk−1 + Qk−1. Equation (1.25) is also equivalent to a special discrete symplectic system. Indeed, if y ∈ C([0, N + 2]Z)n solves equation (1.25) and we put xk := yk for k ∈ [0, N + 2]Z and uk := Sk xk+1 for k ∈ [0, N + 1]Z, then zk solves system (1.14) on [0, N]Z with the n × n blocks Ak := 0, Bk := S−1 k , Ck := −S∗ k, Dk := Tk+1 S−1 k . (1.26) On the other hand, if N ≥ 1 and z ∈ C([0, N + 1]Z)2n solves system (1.14) with det Bk ≠ 0 on [0, N]Z, then yk := xk, k ∈ [0, N + 1]Z, satisfies equation (1.25) for all k ∈ [0, N − 1]Z with the coefficient matrices Sk := B−1 k , Tk := B−1 k Ak + Dk−1 B−1 k−1. This follows immediately from the previous part and the relation between equations (1.24) and (1.25). (iv) Another extremely important example of system (1.14) is provided by the linear Hamiltonian difference system Δxk = Ak xk+1 + Bk uk, Δuk = Ck xk+1 − A∗ k uk, (1.27) where k belongs to a discrete interval IZ, z ∈ C(I+ Z )2n, and A, B, C ∈ C(IZ)n×n with the matrices Bk and Ck being Hermitian on IZ and the matrix I − Ak invertible for all k ∈ IZ. If we denote by the superscript [s] the partial shift in the first component of zk, i.e., z[s] k := ( xk+1 uk ) , then system (1.27) can be written as Δzk = Hk z[s] k or equivalently −JΔzk = Hk z[s] k , where H ∈ C(IZ)2n×2n with the matrices Hk = ( Ak Bk Ck −A∗ k ) being Hamiltonian for all k ∈ IZ, while H ∈ C(IZ)2n×2n with Hk = ( −Ck A∗ k Ak Bk ) being Hermitian on IZ, compare with systems (1.15) and (1.16). This system was introduced in [69,70] as a discrete analogue of (1.15); however, the invertibility of I − Ak guarantees that system (1.27) can be written as symplectic system (1.14) with the coefficient matrix Sk := ( (I − Ak)−1 (I − Ak)−1Bk Ck (I − Ak)−1 I − A∗ k + Ck (I − Ak)−1Bk ) . (1.28) On the other hand, system (1.14) can be written as (1.27) only if Ak is invertible for all k ∈ IZ, in which case Ak := I − A−1 k , Bk := A−1 k Bk, Ck := Ck A−1 k . (1.29) 1.3 Bibliographical notes The equivalence described in Remark 1.2.1(i) is motivated by [4, Section 3.5], [20, Remark 2], and [111, Lemma 3]. We note that, analogously to [139, Chapter 3], it seems to be possible to write equation (1.19) as system (1.14) also without the regularity assumption on the coefficient matrix P[m]. A solution of this problem will appear soon. The relation between system (1.14) and equations (1.24) and (1.25) from Remark 1.2.1(ii)–(iii) was discussed in [150]. In [4, Section 3.6] a similar transformation of equation (1.24) into system (1.14) was derived as a consequence of a relation between equation (1.24) and system (1.27), which requires the additional assumption det Pk ≠ 0.
The connection between systems (1.14) and (1.27) was shown in [2, Theorem 3], see also [4, Section 3.4]. – 10 – Chapter 2 Weyl–Titchmarsh theory for general linear dependence on spectral parameter A modern mathematical proof is not very different from a modern machine, or a modern test setup: the simple fundamental principles are hidden and almost invisible under a mass of technical details. Hermann Weyl, see [174, pg. 453] In this chapter we present the theory of square summable solutions of discrete symplectic systems in the form zk+1(λ) = 5k(λ)zk(λ) with 5k(λ) := Sk + λVk, (Sλ) where λ ∈ C is the spectral parameter and Sk and Vk are complex 2n×2n matrices satisfying S∗ k JSk = J, S∗ k JVk is Hermitian, V∗ k JVk = 0, and k := JVk JS∗ k J ≥ 0. (2.1) Here J stands for the matrix defined in (1.12) but it is also possible to use its generalization as discussed at the end of the paragraph preceding Remark 1.2.1, see also Section 3.3. The indices k belong to a bounded or unbounded discrete interval as will be specified later. The dependence on λ in (Sλ) is linear, but other than that quite general. The properties in (2.1) imply that the matrix Sk is symplectic and that the coefficient matrix 5k(λ) of (Sλ) satisfies the symplectic-type identity 5∗ k(¯λ)J5k(λ) = J for all λ ∈ C. (2.2) The Hermitian matrix k will play a role of a weight for the associated semi-inner product, see (2.26) and (2.54). Throughout this thesis we also use the standard convention that by (Sν) we refer to the system as in (Sλ) with λ replaced by ν. Identity (2.2) shows that the matrix 5k(λ) satisfies properties, which are similar to those of symplectic matrices, see Section 1.2. This fact also motivates the above terminology “symplectic system”, although system (Sλ) corresponds to the discrete symplectic system as introduced in Section 1.2 only when λ ∈ R, such as for λ = 0. On the other hand, system (Sλ) can be viewed as a perturbation of the original symplectic system zk+1 = Sk zk, – 11 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter i.e., system (S0), for which the fundamental properties of symplectic systems are satisfied with appropriate (but natural) modifications. Such properties of system (Sλ) are derived in Section 2.1. We note that system (Sλ) with λ ∈ R was already investigated in [9, Sections 3.1 and 3.3], where the weight matrix k is also obtained for this special case. We remark that k and J correspond to Cn and −J in [9, Formula (3.3.10)]. Particularly, in [9, Section 3.3] it was observed that 5k(λ) = Sk + λVk = (I + λJ k)Sk. (2.3) Hence, system (Sλ) is reduced to the equivalent form zk+1(λ) = (I + λJ k)Sk zk(λ) with ∗ k = k and k J k = 0, (2.4) which is more convenient in some applications, e.g., when calculating the determinant | det 5k(λ)| = det(I + λJ k) = 1, see Lemma 2.1.3. Note that in the scalar case n = 1 the properties of k in (2.4) imply that k is a real 2 × 2 matrix. Therefore, the present chapter extends this special form of k from the scalar (and hence real) case to any even-dimensional complex case, cf. again [9, Section 3.3]. The origin of the theory of square integrable or summable solutions goes back to the paper of H. Weyl [172], where the second-order Sturm–Liouville differential equation was considered and the famous Weyl alternative was proven by using a geometrical approach3. Weyl’s results were re-proved (by using more analytical methods) and further extended by Titchmarsh in the series of papers summarized in [165,166]. 
In honor of the pioneers, this theory is usually referred as the Weyl–Titchmarsh theory. Of course, it has been developed in many directions during the last hundred years and we do not attempt to delineate all details of its long history and a considerable literature, see an outstanding overview given in [71]. Rather than that we now discuss only some crucial moments, which are closely related to the topic of this thesis. As a natural generalization of the theory for Sturm–Liouville differential equations, Atkinson initiated the study of the Weyl–Titchmarsh theory for the linear Hamiltonian differential system z′ (t) = [H(t) + λW(t)]z(t) or equivalently − Jz′ (t) = [ H(t) + λW(t) ] z(t) (2.5) where H(·) and W(·) are suitable Hamiltonian matrix-valued functions with −JW(t) being positive semidefinite, see [9] and also e.g. [30,34,35,39,97–101,108–110,123,141]. As far as we know, the first related results devoted to the second-order difference equations were independently given in [85, 125], see also the references mentioned in connection with equations (2.8)–(2.10) below. However the discrete Weyl–Titchmarsh theory appears to be substantially underdeveloped in contrast to the continuous time case. Surprisingly, its extension to discrete systems had not attracted almost any attention (except [9, Chapter 3]) until 2004, when the Weyl–Titchmarsh theory for the linear Hamiltonian difference system ( xk uk ) = (Hk + λWk) ( xk+1 uk ) , Hk := ( Ak Bk Ck −A∗ k ) , Wk := ( Ek Fk Gk −E∗ k ) (2.6) was established in [36] as an answer to a remark of Professor Allan Krall, see [36, pg. 152]. In (2.6) the coefficient matrices Bk, Ck and Fk, Gk are Hermitian, i.e., Hk and Wk are Hamiltonian. Furthermore, Wk is such that −JWk ≥ 0, and the matrix Ak(λ) := (I − Ak − λEk)−1 3 Yes, it is the same Hermann Weyl as in Section 1.2, see the first footnote on page 6. Hence the topic of this thesis can be seen as a connection of two (originally unrelated) concepts, which were significantly influenced by H. Weyl. – 12 – exists for all k and λ ∈ C, which guarantees the existence of a solution of any initial value problem associated with (2.6) in the backward time. A special case of system (2.6) with Ek ≡ 0 (which implies that ˜Ak(λ) ≡ ˜Ak is constant in λ) was independently and more intensively studied in [142], see also e.g. [10,122,133,158]. As already mentioned at the beginning of Chapter 1, the study of the Weyl–Titchmarsh theory for discrete symplectic systems was initiated in [26] and [A4], where system (Sλ) was considered in the special case when its first equation does not depend on λ. In this case the matrices Sk, Vk, and k have necessarily the form Sk = ( Ak Bk Ck Dk ) , Vk = ( 0 0 −Wk Ak −Wk Bk ) , k = ( Wk 0 0 0 ) (2.7) with Wk ∈ Cn×n being Hermitian and positive semidefinite for all k, see also [23, Remark 3(iii)]. The theory of discrete symplectic systems (Sλ) with (2.7) has been developed in several directions. For example, the results in [25,54,56,65,66] cover the oscillation theorems, Sturmian theory, properties of finite eigenvalues, and the Rayleigh principle. Let us note that the form of Vk in (2.7) follows from the perturbation of the second equation in system (S0) by the term λWk xk+1. This approach is naturally motivated by the connection between discrete symplectic systems and any even order vector-valued Sturm–Liouville difference equation, Jacobi equation, and symmetric three-term recurrence relation discussed in Remark 1.2.1(i)–(iii). 
More specifically, system (Sλ) with the coefficients of the form (2.7) includes equations (1.19), (1.24), and (1.25) with the term λWk yk+1 on the right-hand side, i.e., the equations m∑ s=0 (−1)s s ( P[s] k s yk+1−s ) = λWk yk+1, (2.8) − ( Pk yk + R∗ k yk+1 ) + Qk yk+1 + Rk yk = λWk yk+1, (2.9) Sk+1 yk+2 − Tk+1 yk+1 + S∗ k yk = λWk yk+1, (2.10) which were studied e.g. in [9,11,12,15,32,33,96,103,121,147,156,157,164,170] and [105, Chapter 7], see also [39–41]. In particular, for equation (2.8) we get system (Sλ) with the matrix Sk as in Remark 1.2.1(i) and Vk = − ( 0 0 V[1] k V[2] k ) , V[1] k =   Wk · · · Wk 0 · · · 0 ... ... ... 0 · · · 0   , V[2] k =   0 · · · 0 Wk ( P[m] k )−1 0 · · · 0 0 ... ... ... ... 0 · · · 0 0   , (2.11) which yields k = diag{Wk, 0 . . . , 0} ∈ Cmn×mn. Similarly, for equation (2.8) with m = 1 and equations (2.9), (2.10) we obtain system (Sλ) with k = diag{Wk, 0}. The aim of this chapter is to present a generalization and extension of the results in [9,26] and [A4] to the discrete symplectic systems of the form (Sλ). Moreover, it turns out that theses results are more general than those in [26] and [A4] theoretically and also practically. We show (see Example 2.5.2) that the present theory applies to certain system (Sλ) with (2.7), to which the results in [26] and [A4] cannot be used. It is easy to see that there is (almost) no restriction on matrices W(t) and Wk in systems (2.5) and (2.6), respectively, while the form of Vk in (2.7) is very special. This inconsistency represents one of our motivations for a thorough study of the discrete symplectic systems with general linear dependence on λ and their Weyl–Titchmarsh theory – 13 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter as presented in this chapter. However even for system (Sλ) there remains one remarkable difference, which follows immediately from the third condition in (2.1): the matrix Vk has to be singular for all k. Moreover, it is worth noticing that there is an interesting overlap between system (2.6) with Ek ≡ 0 and system (Sλ), see also Remark 1.2.1(iv). More precisely, system (Sλ) can be written as a linear Hamiltonian difference system only if the n × n left-upper block of 5k(λ) is invertible for all λ ∈ C. However, in this instance the dependence on λ may be nonlinear and the matrix Ek may be nonzero. On the other hand, system (2.6) can be written as system (Sλ) only if Gk (I − Ak)−1Fk ≡ 0, see also [142, Formula (2.3)]. Without this additional assumption we obtain a discrete symplectic system with a special quadratic dependence on λ. This observation motivates our study of discrete symplectic systems with polynomial and analytic dependence on λ and their Weyl–Titchmarsh theory in Chapter 5. If, in addition, also Fk ≡ 0 in (2.6) we get system (Sλ) with the special linear dependence described in (2.7). For completeness, we note that the study of discrete symplectic systems has also an advantage over the approach based on the linear difference Hamiltonian systems in an easier unification with the continuous time theory through the calculus on time scales, see [A7,A16]. Some problems with a unification of the Weyl–Titchmarsh theory for continuous and discrete Hamiltonian systems are discussed in [6]. This chapter is organized as follows. In Section 2.1 we present the fundamental properties of system (Sλ) and in Section 2.2 we study the spectral theory on a bounded interval. 
In the subsequent sections we focus on the Weyl–Titchmarsh theory for system (Sλ). In Section 2.3 we introduce the corresponding Weyl disks and Weyl circles both in the regular and singular cases. In Section 2.4 we consider the space ℓ2 of square summable sequences with respect to the weight k and investigate the limit point and limit circle cases. Finally, in Section 2.5 we provide several examples illustrating our theory. 2.1 Preliminaries First, we derive some important “symplectic” properties of the matrix 5k(λ) defined in (Sλ). Observe that (2.2) and (2.3) imply that 5k(λ) and I + λJ k are invertible with 5−1 k (λ) = −J5∗ k(¯λ)J, (I + λJ k)−1 = I − λJ k for all λ ∈ C. (2.12) From the invertibility of 5k(λ) we obtain the (global) existence and uniqueness of solutions of any initial value problem associated with system (Sλ). Formula (2.12) also yields the following straightforward facts about the coefficients Sk and Vk from (2.1). Lemma 2.1.1. Let n ∈ N be given. For any k ∈ [0, ∞)Z the following conditions are equivalent. (i) The matrices Sk and Vk satisfy the first three conditions in (2.1), i.e., S∗ k JSk = J, S∗ k JVk is Hermitian, and V∗ k JVk = 0. (ii) The matrix 5k(λ) in (Sλ) satisfies (2.2), i.e., 5∗ k (¯λ)J5k(λ) = J for all λ ∈ C. (iii) The matrices Sk and Vk satisfy Sk JS∗ k = J and Vk JV∗ k = 0, and Vk JS∗ k is Hermitian. (iv) The matrix 5k(λ) in (Sλ) satisfies 5k(λ)J5∗ k (¯λ) = J for all λ ∈ C. Condition (iii) in Lemma 2.1.1 implies that the matrix k is indeed Hermitian, as required in the main assumption (2.1). This shows that if Sk and Vk are any given matrices satisfying the first three properties in (2.1), then the matrix k := JVk JS∗ k J is Hermitian and, moreover, k J k = 0. The latter equality in fact characterizes the matrices Vk for which k has this property, i.e., the matrices Vk and k determine each other. More – 14 – 2.1. Preliminaries precisely, if k is any 2n × 2n Hermitian matrix such that k J k = 0, then following (2.4) we define Vk := J k Sk. It is easy to see that the second and third properties in (2.1) are in this case satisfied. For convenience, we summarize the notation employed throughout this chapter. Notation 2.1.2. The numbers n ∈ N and N ∈ [0, ∞)Z are fixed and S, V, ∈ C([0, ∞)Z)2n×2n are such that (2.1) is satisfied for all k ∈ [0, ∞)Z. Now we focus on the determinant of the matrix 5k(λ). Let us recall that the absolute value of the determinant of any symplectic matrix is equal to 1 as a simple consequence of the formula (1.13). Lemma 2.1.3. For every λ ∈ C and k ∈ [0, ∞)Z we have | det 5k(λ)| = | det Sk | = 1. Proof. First, from the expression given in (2.3) we obtain det 5k(λ) = det(I+λJ k)×det Sk. Since (λJ k)2 = 0, the matrix λJ k is nilpotent of degree 2. Thus, by Proposition 1.1.3 with L := λJ k and M := I, we get det(λJ k +I) = det(L+M) = det M = 1, which implies det 5k(λ) = det Sk. Hence the statement follows from the first condition in (2.1), because | det 5k(λ)| = | det Sk | (2.1) = det J = 1. ■ Remark 2.1.4. In some special cases the result of Lemma 2.1.3 can be also verified directly. For example, when the dependence on λ is special as displayed in (2.7), we have by [66, pg. 1232] that 5k(λ) = ( I 0 −λWk I ) Sk, which implies det 5k(λ) = det Sk for all λ ∈ C. (2.13) The following statements are direct consequences of formula (2.2) and they provide basic properties of solutions of system (Sλ) on [0, ∞)Z. 
Nevertheless, it is easy to see that the results remain valid (with appropriate modifications) for solutions of system (Sλ) on any discrete interval IZ ⊆ [0, ∞)Z. Lemma 2.1.5 (Wronskian-type identity). Let λ ∈ C and m ∈ N be given. If the sequences Z(λ), Z(¯λ) ∈ C([0, ∞)Z)2n×m solve systems (Sλ) and (S¯λ) on [0, ∞)Z, respectively, then Z∗ k(¯λ)JZk(λ) = Z∗ 0(¯λ)JZ0(λ) for all k ∈ [0, ∞)Z. (2.14) Proof. Identity (2.14) follows directly from (Sλ), (S¯λ), and (2.2), because Z∗ k+1(¯λ)JZk+1(λ) = Z∗ k(¯λ)5∗ k(¯λ)J5k(λ)Zk(λ) = Z∗ k(¯λ)JZk(λ) for any k ∈ [0, ∞)Z. ■ Lemma 2.1.6. Let λ ∈ C and (λ) be a fundamental matrix of system (Sλ) on [0, ∞)Z such that ∗ 0(¯λ)J 0(λ) = J. (2.15) Then for any k ∈ [0, ∞)Z we have ∗ k(¯λ)J k(λ) = J, −1 k (λ) = −J ∗ k(¯λ)J, and k(λ)J ∗ k(¯λ) = J. (2.16) Proof. The first identity in (2.16) follows from Lemma 2.1.5 and identity (2.15). The other two identities in (2.16) follow from the first one, since k(λ) −1 k (λ) = I = −1 k (λ) k(λ). ■ – 15 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter One easily observes that identity (2.15) is satisfied especially when 0(λ) ≡ 0 does not depend on λ and 0 is symplectic. The matrix k plays a key role in the Lagrange identity for system (Sλ), which is one of the main tools in the whole Weyl–Titchmarsh theory, see also [9, Formula (3.7.6)]. Theorem 2.1.7 (Lagrange identity). Let λ, ν ∈ C and m ∈ N be given. If the sequences Z(λ), Z(ν) ∈ C([0, ∞)Z)2n×m solve systems (Sλ) and (Sν) on [0, ∞)Z, respectively, then for any k ∈ [0, ∞)Z we have [ Z∗ k(λ)JZk(ν) ] = (¯λ − ν)Z∗ k+1(λ) k Zk+1(ν), (2.17) Z∗ k+1(λ)JZk+1(ν) = Z∗ 0(λ)JZ0(ν) + (¯λ − ν) k∑ j=0 Z∗ j+1(λ) j Zj+1(ν). (2.18) Proof. Let Zk(λ) and Zk(ν) satisfy (Sλ) and (Sν) on [0, ∞)Z, respectively. Then [ Z∗ k(λ)JZk(ν) ] = Z∗ k+1(λ) [ J − 5∗−1 k (λ)J5−1 k (ν) ] Zk+1(ν) (2.12) = Z∗ k+1(λ) [ J + J5k(¯λ)J5∗ k(¯ν)J ] Zk+1(ν) (2.1) = (¯λ − ν)Z∗ k+1(λ) k Zk+1(ν), which shows identity (2.17). Equality (2.18) then follows from (2.17) by summation. ■ 2.2 Spectral theory on bounded interval In this section we study the spectral properties of the corresponding regular eigenvalue problem with separated boundary conditions. Matrices describing these boundary conditions belong to the set := { α ∈ Cn×2n | αα∗ = I, αJα∗ = 0 } . (2.19) It is known e.g. in [A4, Remark 2.7] that for any α ∈ the 2n × 2n matrix (α∗, −Jα∗) is unitary and symplectic and it satisfies α∗ α − Jα∗ αJ = I, i.e., ( α∗ −Jα∗ )−1 = ( α αJ ) , and Ker α = Ran Jα∗ . (2.20) For α ∈ we denote by (λ, α) ∈ C([0, ∞)Z)2n×2n the fundamental matrix of system (Sλ) determined by the initial condition 0(λ, α) = ( α∗, −Jα∗ ) , i.e., k+1(λ) = (Sk + λVk) k(λ), k ∈ [0, ∞)Z, 0(λ) = ( α∗ −Jα∗ ) , λ ∈ C. (2.21) Then the initial value 0(λ, α) is unitary, symplectic, does not depend on λ, and its inverse is −1 0 (λ, α) = ∗ 0 (λ, α). However we usually suppress the dependence on α, i.e., we write only (λ) instead of (λ, α). In addition, we need to emphasize the two “halves” of the fundamental matrix (λ), hence we put k(λ) = ( Zk(λ) Zk(λ) ) , (2.22) where Z(λ) = Z(λ, α) ∈ C([0, ∞)Z)2n×n and Z(λ) = Z(λ, α) ∈ C([0, ∞)Z)2n×n are the 2n × n solutions of system (Sλ) satisfying the initial conditions Z0(λ) = α∗ and Z0(λ) = −Jα∗. – 16 – 2.2. Spectral theory on bounded interval Definition 2.2.1. With the fundamental matrix (λ) and its blocks specified in (2.22), we define for M ∈ Cn×n the Weyl solution4 X(λ) ∈ C([0, ∞)Z)2n×n of system (Sλ) as Xk(λ) = Xk(λ, α, M) := k(λ)(I, M∗ )∗ = Zk(λ) + Zk(λ)M, k ∈ [0, ∞)Z. 
(2.23) For the following part (including the beginning of Section 2.3) we restrict our attention to the finite discrete interval [0, N]Z with the fixed N ∈ [0, ∞)Z as stated in Notation 2.1.2. Then for α, β ∈ we consider the following (regular) eigenvalue problem (Sλ), k ∈ [0, N]Z, λ ∈ C, αz0(λ) = 0, βzN+1(λ) = 0. (2.24) Note that when α = (I, 0) = β, the boundary conditions in (2.24) reduce to the Dirichlet boundary conditions x0 = 0 = xN+1. On the other hand, the periodic or antiperiodic boundary conditions z0(λ) = ±zN+1(λ) cannot be obtained through any choice of the matrices α, β ∈ . These particular cases are special examples of jointly varying endpoints, which are investigated in Chapter 3. We recall that a number λ ∈ C is said to be an eigenvalue of problem (2.24) if, for this particular value λ, there exists a nontrivial solution z(λ) ∈ C([0, N + 1]Z)2n of problem (2.24). In this case, the function z(λ) is said to be the eigenfunction corresponding to the eigenvalue λ and the dimension of all these eigenfunctions corresponding to λ is called the geometric multiplicity of λ. Moreover, we introduce the following definiteness assumption, called Atkinson’s condition, compare with [9, Formula (3.7.10)]. Throughout this chapter we will distinguish several forms of this definiteness assumption depending on how many solutions of (Sλ) is involved. This distinction also serves as an indicator of the minimal assumptions needed in each result, compare with Hypotheses 2.3.4 and 2.3.7 below. Hypothesis 2.2.2 (Weak Atkinson condition – finite). For any λ ∈ CKR every column z(λ) of the solution Z(λ) satisfies N∑ k=0 z∗ k+1(λ) k zk+1(λ) > 0. (2.25) Identity (2.18) and Hypothesis 2.2.2 imply the following characterization of the eigenvalues and eigenfunctions of problem (2.24). Theorem 2.2.3. Let α, β ∈ be given. Then the following statements hold. (i) A number λ ∈ C is an eigenvalue of (2.24) if and only if det βZN+1(λ) = 0. In this case, the eigenfunctions corresponding to the eigenvalue λ have the form z(λ) = Z(λ)d on [0, N+1]Z with nonzero d ∈ Ker βZN+1(λ). Moreover, the geometric multiplicity of λ is equal to its algebraic multiplicity, i.e., to dim Ker βZN+1(λ). (ii) A number λ ∈ C is an eigenvalue of problem (2.24) if and only if det(−ZN+1(λ), Jβ∗) = 0. In this case, the algebraic and geometric multiplicities of the eigenvalue λ are equal to the value of dim Ker(−ZN+1(λ), Jβ∗). (iii) Under Hypothesis 2.2.2, the eigenvalues of (2.24) are real and eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the semi-inner product ⟨z, ˜z⟩ ,N := N∑ k=0 z∗ k+1 k ˜zk+1. (2.26) 4 The symbol X stands for the Greek letter Chi (/’ki:/). – 17 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter Proof. The proof follows standard arguments from linear algebra about eigenvalue problems for Hermitian matrices or self-adjoint differential or difference equations. Alternatively, see the proofs of [A4, Lemmas 3.1, 2.9 and Theorem 2.11]. ■ Next we proceed by defining the Weyl–Titchmarsh M(λ)-function for problem (2.24). Definition 2.2.4. Let α, β ∈ . Whenever the matrix βZk(λ) is invertible for some value λ ∈ C and k ∈ [0, N + 1]Z, we define the Weyl–Titchmarsh M(λ)-function as the n × n matrix Mk(λ) = Mk(λ, α, β) := −[βZk(λ)]−1 βZk(λ). (2.27) It follows from Theorem 2.2.3 that the M(λ)-function is well defined at k = N + 1 for every λ ∈ CKR, when Hypothesis 2.2.2 holds. Now we show the “symmetry” property of the M(λ)-function. Lemma 2.2.5. 
Let α, β ∈ and λ ∈ C be given. If k ∈ [0, N + 1]Z is such that Mk(λ) and Mk(¯λ) exist, then M∗ k(λ) = Mk(¯λ). (2.28) Moreover, Mk(·) is an analytic function in its argument λ. Proof. Let k ∈ [0, N + 1]Z. By the definition of Mk(λ), the partition of k(λ) in (2.22), and the third formula in (2.16) we have M∗ k(λ) − Mk(¯λ) = [ βZk(¯λ) ]−1 β k(¯λ)J ∗ k(λ)β∗ [ βZk(λ) ]∗−1 = [ βZk(¯λ) ]−1 βJβ∗ [ βZk(λ) ]∗−1 (2.19) = 0, which proves identity (2.28). The analytic property of Mk(·) follows from the fact that Zk(λ) and Zk(λ), and hence βZk(λ) and βZk(λ), are polynomials in λ. ■ Remark 2.2.6. (i) The Weyl solution X(λ) from Definition 2.2.1 trivially satisfies the initial boundary condition αX0(λ) = I. In addition, if βZk(λ) is invertible for some k ∈ [0, N+1]Z, then βXk(λ) = βZk(λ)[M − Mk(λ)]. This shows that for M = Mk(λ) we have βXk(λ) = 0. In particular, when k = N + 1 and M = MN+1(λ), the Weyl solution X(λ) satisfies the second boundary condition in (2.24). (ii) We also point out that the matrix P := −βJXk(λ) ∈ Cn×n, where the Weyl solution X(λ) is defined by (2.23) with M = Mk(λ), is invertible for any β ∈ and any given k ∈ [0, N + 1]Z, which will be a very useful fact in the proof of the next theorem. Indeed, the calculation n = rank ( β βJ ) Xk(λ) = rank ( 0 βJXk(λ) ) = rank P shows that P is invertible. In the following theorem we specify the dependence of the Weyl-Titchmarsh M(λ)function on the matrix α determining the initial boundary condition of the fundamental matrix (λ) = (λ, α) in (2.22). We consider the matrix Mk(λ, α, β) defined in (2.27) and the matrix Mk(λ, γ, β) given also by (2.27) but with α replaced by γ ∈ , i.e., Mk(λ, γ, β) is defined through the 2n × n columns of the fundamental matrix (λ, γ) which satisfies 0(λ, γ) = (γ∗, −Jγ∗). The proofs of the next theorem and its corollary follow the similar – 18 – 2.3. Weyl disk and Weyl circle arguments as the corresponding proofs in [A4, Lemma 3.10 and Corollary 3.11]. Note that the assumptions of Theorem 2.2.7 and Corollary 2.2.8 below are in particular satisfied when λ ∈ CKR, k = N + 1, and Hypothesis 2.2.2 holds. Theorem 2.2.7. Let β ∈ and λ ∈ C. Assume that for α, γ ∈ and k ∈ [0, N + 1]Z the matrices Mk(λ, α, β) and Mk(λ, γ, β) exist. Then we have Mk(λ, α, β) = [αJγ∗ + αγ∗ Mk(λ, γ, β)][αγ∗ − αJγ∗ Mk(λ, γ, β)]−1 . (2.29) Proof. Let X(α) := X(λ, α, Mk(λ, α, β)) and X(γ) := X(λ, γ, Mk(λ, γ, β)) be the Weyl solutions as in (2.23) corresponding to M = Mk(λ, α, β) and M = Mk(λ, γ, β), respectively. Since βXk(α) = 0 = βXk(γ) by Remark 2.2.6(i), it follows from the third equality in (2.20) that there exist matrices P(α), P(γ) ∈ Cn×n such that Xk(α) = Jβ∗P(α) and Xk(γ) = Jβ∗P(γ). Moreover, the matrices P(α) and P(γ) are invertible by Remark 2.2.6(ii), and hence Xk(α)P−1 (α) = Jβ∗ = Xk(γ)P−1 (γ), i.e., Xk(α) = Xk(γ)P with P := P−1 (γ)P(α). By the uniqueness of solutions of system (Sλ), it follows X(α) = X(γ)P on [0, N + 1]Z, i.e., Xj(α) = Xj(γ)P for all j ∈ [0, N + 1]Z. (2.30) The choice j = 0 then yields ( I Mk(λ, α, β) ) = −1 0 (λ, α) 0(λ, γ) ( I Mk(λ, γ, β) ) P = ( αγ∗ − αJγ∗ Mk(λ, γ, β) αJγ∗ + αγ∗ Mk(λ, γ, β) ) P. (2.31) The first row of the latter identity implies P = [αγ∗ − αJγ∗ Mk(λ, γ, β)]−1, and then the second row of (2.31) yields identity (2.29). ■ As a consequence of (2.30) and Theorem 2.2.7 we get a formula relating the Weyl solutions corresponding to the matrices M = Mk(λ, α, β) and M = Mk(λ, γ, β) and the initial conditions with α, γ ∈ . Corollary 2.2.8. Let β ∈ and λ ∈ C. 
Assume that for α, γ ∈ and k ∈ [0, N + 1]Z the matrices Mk(λ, α, β) and Mk(λ, γ, β) exist. Then for all j ∈ [0, N + 1]Z we have Xj(λ, α, Mk(λ, α, β)) = Xj(λ, γ, Mk(λ, γ, β))[αγ∗ − αJγ∗ Mk(λ, γ, β)]−1 . 2.3 Weyl disk and Weyl circle In this section we study the properties of the Weyl disks and the Weyl circles, which are defined through the following E(M)-function. For a given α ∈ and λ ∈ CKR we define the matrix-valued function Ek(M) = Ek(M, λ, α) : [0, N + 1]Z × Cn×n → Cn×n as Ek(M) := iδ(λ)X∗ k(λ, α, M)JXk(λ, α, M). (2.32) In the abbreviated form we write E(M) = iδ(λ)X∗(λ)JX(λ). The matrix Ek(M) is Hermitian for any k ∈ [0, N + 1]Z and M ∈ Cn×n, which can be seen from the equality (iJ)∗ = iJ. Moreover, the Lagrange identity (Theorem 2.1.7) yields that Ek(M) = −2δ(λ) im(M) + 2| im(λ)| k−1∑ j=0 X∗ j+1(λ) j Xj+1(λ), k ∈ [0, N + 1]Z, (2.33) where for k = 0 the sum is zero by definition. – 19 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter Definition 2.3.1. Let α ∈ and λ ∈ CKR. For all k ∈ [0, N + 1]Z we define the Weyl disk Dk(λ) = Dk(λ, α) and the Weyl circle Ck(λ) = Ck(λ, α), respectively, by Dk(λ) := { M ∈ Cn×n | Ek(M) ≤ 0 } , Ck(λ) := { M ∈ Cn×n | Ek(M) = 0 } . A natural question now arises concerning the elements of Dk(λ) and Ck(λ). For example, from (2.33) we obtain E0(M) = −2δ(λ) im(M), which implies that the Weyl circle C0(λ) coincides with the set of all n × n Hermitian matrices, while the interior of the Weyl disk D0(λ) is a proper subset of the set of all invertible n × n matrices. In the following two theorems we present characterizations of matrices M lying on the Weyl circle and in the interior of the Weyl disk. Theorem 2.3.2. Let α ∈ , λ ∈ CKR, k ∈ [0, N + 1]Z, and M ∈ Cn×n. The matrix M belongs to the Weyl circle Ck(λ) if and only if there exists β ∈ such that βXk(λ) = 0. In this case M = Mk(λ), whenever the matrix Mk(λ) defined in (2.27) exists. Proof. Assume that M ∈ Ck(λ), i.e., Ek(M) = 0. Then for the matrix γ := X∗ k (λ)J we get iδ(λ)γXk(λ) = Ek(M) = 0, which implies γXk(λ) = 0 and also γJγ∗ = 0. Moreover, since rank γ = n, we have γγ∗ > 0. The matrix β := (γγ∗)−1/2 γ satisfies βXk(λ) = 0, βJβ∗ = 0, and ββ∗ = I. Thus β ∈ as stated in the theorem. Conversely, assume that for a given matrix M ∈ Cn×n there exists β ∈ such that βXk(λ) = 0. Then Xk(λ) = Jβ∗P for P := −βJXk(λ), see Remark 2.2.6(ii). It follows that Ek(M) = iδ(λ)P∗βJβ∗P = 0, so that M ∈ Ck(λ). Finally, if Mk(λ) exists, then βZk(λ) is invertible and βZk(λ) + βZk(λ)M = βXk(λ) = 0, i.e., M = Mk(λ). ■ Theorem 2.3.3. Let α ∈ , λ ∈ CKR, k ∈ [0, N + 1]Z, and M ∈ Cn×n. The matrix M satisfies Ek(M) < 0 if and only if there exists β ∈ Cn×2n such that iδ(λ)βJβ∗ > 0 and βXk(λ) = 0. In this case we have with such a matrix β that M = Mk(λ), whenever the matrix Mk(λ) exists, and β may be chosen so that ββ∗ = I. Proof. For M ∈ Cn×n we consider the Weyl solution X(λ) given by (2.23) with n × n blocks φ(λ) and ψ(λ), i.e., Xj(λ) = (φ∗ j (λ), ψ∗ j (λ))∗ for all j ∈ [0, N + 1]Z. Assume first Ek(M) < 0. Then the matrices φk(λ) and ψk(λ) are invertible, since for a vector d ∈ Cn such that φk(λ)d = 0 or ψk(λ)d = 0 we have d∗Ek(M)d = iδ(λ)d∗ [φ∗ k (λ)ψk(λ) − ψ∗ k (λ)φk(λ)]d = 0, so that Ek(M) < 0 implies d = 0. We put γ := (I, −φk(λ)ψ−1 k (λ)) and then we have γXk(λ) = 0 and Ek(M) = −iδ(λ)ψ∗ k (λ)γJγ∗ ψk(λ). Since Ek(M) < 0 and ψk(λ) is invertible, it follows that iδ(λ)γJγ∗ > 0. Finally, the matrix β := (γγ∗)−1/2γ satisfies βXk(λ) = 0, βJβ∗ > 0, and ββ∗ = I as required in the theorem. 
Conversely, assume that for a given matrix M ∈ Cn×n there exists β = (β1, β2) ∈ Cn×2n such that βXk(λ) = 0 and iδ(λ)βJβ∗ > 0. Since 2i im(β1 β∗ 2 ) = βJβ∗, we can see that the condition iδ(λ)βJβ∗ > 0 is equivalent to im(β1 β∗ 2 ) > 0 for im(λ) < 0 and to im(β1 β∗ 2 ) < 0 for im(λ) > 0. In both cases, the positive or negative definiteness of im(β1 β∗ 2 ) implies the invertibility of β1 β∗ 2 , and consequently the invertibility of β1 and β2 alone. Hence, from βXk(λ) = 0 we obtain the equality φk(λ) = −β−1 1 β2 ψk(λ), and then Ek(M) = iδ(λ)[φ∗ k(λ)ψk(λ) − ψ∗ k(λ)φk(λ)] = −iδ(λ)ψ∗ k(λ)β−1 1 (βJβ∗ )β∗−1 1 ψk(λ). (2.34) If ψk(λ)d = 0 for some d ∈ Cn, then φk(λ)d = −β−1 1 β2 ψk(λ)d = 0. Since rank Xk(λ) = n, it follows that d = 0, i.e., ψk(λ) is invertible. Therefore, identity (2.34) and the assumption iδ(λ)βJβ∗ > 0 imply Ek(M) < 0. Finally, the identity M = Mk(λ) follows with the same argument as in the final part of the proof of Theorem 2.3.2. ■ – 20 – 2.3. Weyl disk and Weyl circle The following result shows that the matrix δ(λ) im(M) is positive semidefinite when M belongs to the Weyl disk Dk(λ). Under an additional Atkinson-type assumption we also obtain that δ(λ) im(M) is positive definite. Hypothesis 2.3.4. For a given matrix M ∈ Cn×n and λ ∈ CKR, each column z(λ) of the Weyl solution X(λ) satisfies (2.25). Theorem 2.3.5. Let α ∈ , λ ∈ CKR, and k ∈ [0, N + 1]Z. For every matrix M ∈ Dk(λ) we have δ(λ) im(M) ≥ | im(λ)| k−1∑ j=0 X∗ j+1(λ) j Xj+1(λ) ≥ 0. (2.35) Moreover, if k = N +1 and Hypothesis 2.3.4 holds, then δ(λ) im(M) > 0 and thus M is invertible. Proof. For a matrix M ∈ Dk(λ), inequality (2.35) follows from (2.33) via Ek(M) ≤ 0. Moreover, under Hypothesis 2.3.4 we have δ(λ) im(M) > 0, which yields that M is invertible. ■ Since the number N ∈ [0, ∞)Z was chosen arbitrarily, see Notation 2.1.2, in the remaining part of this section we focus on the Weyl disks when k belongs to the unbounded interval [0, ∞)Z. The first result shows that the Weyl disks are nested with increasing k. Theorem 2.3.6. Let α ∈ and λ ∈ CKR. Then we have Dk(λ) ⊆ Dj(λ) for every k, j ∈ [0, ∞)Z such that k ≥ j. Proof. Let M ∈ Dk(λ), i.e., Ek(M) ≤ 0. From identity (2.33) used at indices k and j and from the fact ℓ ≥ 0 for all ℓ ∈ [j, k − 1]Z we get Ej(M) ≤ Ek(M) ≤ 0. Therefore, M ∈ Dj(λ). ■ Our next goal is to identify the center and the matrix radii of the Weyl disks Dk(λ) for every λ ∈ CKR, see Theorem 2.3.8. First we analyze the structure of the Ek(M) function. From the definition of Ek(M) in (2.32) and from (2.23) one easily derives Ek(M) = ( I M∗ ) Kk(λ) ( I M ) , Kk(λ) := iδ(λ) ∗ k(λ)J k(λ) = ( Fk(λ) G∗ k (λ) Gk(λ) Hk(λ) ) , (2.36) where Fk(λ), Gk(λ), Hk(λ) are the n × n matrices Fk(λ) := iδ(λ)Z∗ k(λ)JZk(λ), Gk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ), Hk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ). Since Kk(λ) is Hermitian, it follows that Fk(λ) and Hk(λ) are also Hermitian. The Lagrange identity (Theorem 2.1.7) with ν = λ then implies Kk(λ) = iδ(λ)J + 2| im(λ)| k−1∑ j=0 ∗ j+1(λ) j j+1(λ), from which we get the formula Hk(λ) = 2| im(λ)| k−1∑ j=0 Z ∗ j+1(λ) j Zj+1(λ). (2.37) Therefore, the following Atkinson-type condition is used in order to guarantee the invertibility (in fact, the positive definiteness) of Hk(λ) for large k; cf. Hypothesis 5.3.2. Note also that if Hm(λ) is invertible for some m ∈ [0, ∞)Z, then it is invertible for all k ∈ [m, ∞)Z, because the sequence of matrices Hk(λ) is nondecreasing in k as a consequence of the fourth condition in (2.1) and identity (2.37). – 21 – Chapter 2. 
Weyl–Titchmarsh theory for general linear dependence on spectral parameter Hypothesis 2.3.7 (Weak Atkinson condition – infinite). There exists N0 ∈ [0, ∞)Z such that each column z(λ) of Z(λ) satisfies inequality (2.25) with N = N0 for every λ ∈ CKR. Under Hypothesis 2.3.7, the matrices Hk(λ) are positive definite (and hence invertible) for all k ∈ [N0 + 1, ∞)Z. For these values of k it is then possible to represent Ek(M) as Ek(M) = Fk(λ) − G∗ k(λ)H−1 k (λ)Gk(λ) + [G∗ k(λ)H−1 k (λ) + M∗ ]Hk(λ)[H−1 k (λ)Gk(λ) + M], see also [A4, Identity (4.11)]. By using the third identity in (2.16), it follows that the matrices Kk(λ) defined in (2.36) satisfy the symplectic-type relation ( Fk(λ)Gk(¯λ) − G∗ k (λ)Fk(¯λ) Fk(λ)Hk(¯λ) − G∗ k (λ)G∗ k (¯λ) Gk(λ)Gk(¯λ) − Hk(λ)Fk(¯λ) Gk(λ)Hk(¯λ) − Hk(λ)G∗ k (¯λ) ) = K∗ k(λ)JKk(¯λ) = −J. This implies that G∗ k(λ)H−1 k (λ)Gk(λ) − Fk(λ) = H−1 k (¯λ) > 0 for all k ∈ [N0 + 1, ∞)Z. (2.38) In the following theorem we justify the terminology “disk” and “circle” for Dk(λ) and Ck(λ), respectively. In the scalar case n = 1, the sets Dk(λ) and Ck(λ) indeed represent a disk and a circle in the complex plane similarly as in the original paper of H. Weyl, see [172]. For this purpose, we introduce the set U of unitary matrices in Cn×n and the set V of contractive matrices in Cn×n, i.e., U := { U ∈ Cn×n | U∗ U = I } and V := { V ∈ Cn×n | V∗ V ≤ I } . (2.39) Theorem 2.3.8. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then for every k ∈ [N0 + 1, ∞)Z the Weyl disk and Weyl circle admit the representations Dk(λ) = { Pk(λ) + Rk(λ)VRk(¯λ) | V ∈ V } , (2.40) Ck(λ) = { Pk(λ) + Rk(λ)URk(¯λ) | U ∈ U } , (2.41) where the matrices Pk(λ) and Rk(λ), Rk(¯λ) are defined by Pk(λ) := −H−1 k (λ)Gk(λ) and Rk(λ) := H−1/2 k (λ), Rk(¯λ) := H−1/2 k (¯λ). (2.42) Consequently, the sets Dk(λ) are closed and convex for every k ∈ [N0 + 1, ∞)Z. Proof. Let k ∈ [N0 + 1, ∞)Z be fixed. Identity (2.37) and Hypothesis 2.3.7 imply that the matrices H := Hk(λ) and H := Hk(¯λ) are positive definite, so that P := Pk(λ), R := Rk(λ), and R := Rk(¯λ) are well defined. For any matrix M ∈ Dk(λ) we then have 0 ≥ Ek(M) = F − G∗ H−1 G + (G∗ H−1 + M∗ )H(H−1 G + M) = −H −1 + (G∗ H∗−1 + M∗ )H(H−1 G + M) = −R 2 + (M∗ − P∗ )R−2 (M − P), (2.43) where the equality from (2.38) was used. Identity (2.43) can be also written as R −1 (M∗ − P∗ )R−2 (M − P)R −1 ≤ I, i.e., V∗ V ≤ I with V := R−1 (M − P)R −1 . (2.44) Therefore, the above defined matrix V belongs to the set V and M = P + RVR. This calculation can be reversed, i.e., every matrix V ∈ V gives a unique matrix M := P + RVR, which then belongs to Dk(λ). This leads to a bijection (even a homeomorphism) between – 22 – 2.3. Weyl disk and Weyl circle the matrices M ∈ Dk(λ) and the matrices V ∈ V. Therefore, the Weyl disk Dk(λ) has the representation Dk(λ) = P + RV R, as it is stated in (2.40). In a similar way, the elements of the Weyl circle Ck(λ) are in one-to-one correspondence with the matrices V given in (2.44), which in this case satisfy the relation V∗V = I. This means that the Weyl circle Ck(λ) admits the representation Ck(λ) = P + RUR given in (2.41). Finally, the Weyl disk Dk(λ) is closed and convex, because the set V has the same properties. ■ The matrix Pk(λ) in Theorem 2.3.8 is called the center of the Weyl disk or the Weyl circle, and the matrices Rk(λ) and Rk(¯λ) are called the matrix radii of the Weyl disk or the Weyl circle. Given (2.37), these matrices are well defined whenever Hypothesis 2.3.7 is satisfied. 
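Note, as a short orienting remark added here (it is not needed in the sequel), that in the scalar case n = 1 the set V reduces to the closed unit disk in C, so the representations (2.40) and (2.41) recover the classical picture of H. Weyl directly: for every k ∈ [N0 + 1, ∞)Z the Weyl disk Dk(λ) is the closed disk in the complex plane with center Pk(λ) = −Gk(λ)/Hk(λ) and radius Rk(λ) Rk(¯λ) = [Hk(λ) Hk(¯λ)]^{−1/2}, and the Weyl circle Ck(λ) is its boundary circle. This is the situation illustrated by the examples in Section 2.5.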
Moreover, the matrices Hk(λ) are nondecreasing for k → ∞, so that the matrix radii Rk(λ) and Rk(¯λ) are nonincreasing as k → ∞. And since Rk(λ) and Rk(¯λ) are Hermitian and positive definite for k ∈ [N0 + 1, ∞)Z, their limits R+(λ) := lim k→∞ Rk(λ), R+(¯λ) := lim k→∞ Rk(¯λ) (2.45) exist and satisfy R+(λ) ≥ 0 and R+(¯λ) ≥ 0. Now we show that the limit of the center Pk(λ) also exists. The proof of this fact relies on properties of the spectral matrix norm ||·||σ recalled in Section 1.1. Theorem 2.3.9. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then the center Pk(λ) of the Weyl disk Dk(λ) converges for k → ∞ to a limiting matrix P+(λ) ∈ Cn×n, i.e., P+(λ) := lim k→∞ Pk(λ). (2.46) Proof. The proof utilizes three main tools: the representation of the Weyl disks in Theorem 2.3.8, the convergence of the matrix radii Rk(λ) and Rk(¯λ) to R+(λ) and R+(¯λ), and the Cauchy criterion for sequences. Let k ≥ j ∈ [N0 + 1, ∞)Z. Then Dk(λ) ⊆ Dj(λ) by Theorem 2.3.6. Hence for a matrix M ∈ Dk(λ) there exist by Theorem 2.3.8 (unique) matrices Vk, Vj ∈ V such that M = Pk(λ) + Rk(λ)Vk Rk(¯λ) and M = Pj(λ) + Rj(λ)Vj Rj(¯λ). (2.47) By comparing both equalities in (2.47) we can express the matrix Vj in terms of Vk as Vj = R−1 j (λ)[Pk(λ) − Pj(λ) + Rk(λ)Vk Rk(¯λ)]R−1 j (¯λ). (2.48) The right-hand side of equation (2.48) defines a continuous mapping T : V → V, which assigns to each matrix V = Vk the matrix T(V) = Vj. Since the set V is convex and compact, the Brouwer fixed point theorem implies that the mapping T has a fixed point, i.e., there exists a matrix V ∈ V such that T(V) = V. Going back to equation (2.48), we get from T(V) = V the expression Pk(λ) − Pj(λ) = [Rj(λ) − Rk(λ)]VRj(¯λ) + Rk(λ)V[Rj(¯λ) − Rk(¯λ)]. The matrices V ∈ V satisfy ||V||σ ≤ 1, so that from the above equality we obtain ||Pk(λ) − Pj(λ)||σ ≤ ||Rj(λ) − Rk(λ)||σ × ||Rj(¯λ)||σ + ||Rk(λ)||σ × ||Rj(¯λ) − Rk(¯λ)||σ. (2.49) Since the sequences of the matrix radii R(λ), R(¯λ) ∈ C([N0 + 1, ∞)Z)n×n converge, they are bounded in the spectral norm, i.e., there exists K > 0 such that ||Rℓ(λ)||σ ≤ K and ||Rℓ(¯λ)||σ ≤ K for all ℓ ∈ [N0 + 1, ∞)Z. Choose now an arbitrary ε > 0. The convergence – 23 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter of Rℓ(λ) and Rℓ(¯λ) for ℓ → ∞ yields the existence of an index m ∈ [N0 + 1, ∞)Z such that ||Rj(ν) − Rk(ν)||σ < ε/(2K) for ν ∈ {λ, ¯λ} and for every k ≥ j ≥ m. From inequality (2.49) we then get ||Pk(λ) − Pj(λ)||σ < ε for all k ≥ j ≥ m. This shows that the sequence P(λ) ∈ C([N0 + 1, ∞)Z)n×n is a Cauchy sequence. Hence the completeness of Cn×n in the spectral norm implies the result. ■ From Theorems 2.3.6 and 2.3.8 it follows that the Weyl disks Dk(λ) are closed, convex, and nested with increasing k ∈ [N0+1, ∞)Z, where the number N0 is from Hypothesis 2.3.7. Therefore, the limit of Dk(λ) as k → ∞ exists and it is closed, convex, and nonempty. Definition 2.3.10. Let α ∈ and λ ∈ CKR. Under Hypothesis 2.3.7, we define the limiting Weyl disk as the set D+(λ) := lim k→∞ Dk(λ) = ∩ k∈[N0+1,∞)Z Dk(λ) The matrix P+(λ) defined in (2.46) and the matrices R+(λ) and R+(¯λ) from (2.45) are called the center and the matrix radii of the limiting Weyl disk D+(λ). Based on Theorem 2.3.8, the limiting Weyl disk D+(λ) has the following representation. Corollary 2.3.11. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then D+(λ) = { P+(λ) + R+(λ)VR+(¯λ) | V ∈ V } . 
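Although it is not needed for the proofs, the finite-index objects behind Corollary 2.3.11 are straightforward to evaluate numerically. The following minimal Python/NumPy sketch (an illustration added here, not part of the original development) builds the fundamental matrix by the recursion of system (Sλ), forms Kk(λ) as in (2.36), and returns the center Pk(λ) and the matrix radius Rk(λ) from (2.42). It assumes that δ(λ) stands for sgn(im λ) and that the fundamental matrix is normalized by the initial value (α∗, −Jα∗) as in (2.22); the function name weyl_disk_data and the appended sample call are purely illustrative.

    import numpy as np

    def weyl_disk_data(S, V, lam, alpha):
        # Center P_k(lam) and matrix radius R_k(lam) of the Weyl disk, cf. (2.36) and (2.42).
        # S, V  : lists of the 2n x 2n coefficients S_j, V_j for j = 0, ..., k-1,
        # lam   : spectral parameter with im(lam) != 0,
        # alpha : n x 2n matrix fixing the initial value (alpha^*, -J alpha^*) as in (2.22).
        n = alpha.shape[0]
        J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
        delta = np.sign(lam.imag)                    # assumption: delta(lam) = sgn(im lam)
        Phi = np.hstack((alpha.conj().T, -J @ alpha.conj().T))
        for Sj, Vj in zip(S, V):                     # recursion Phi_{j+1} = (S_j + lam V_j) Phi_j
            Phi = (Sj + lam * Vj) @ Phi
        K = 1j * delta * Phi.conj().T @ J @ Phi      # K_k(lam) from (2.36)
        G, H = K[n:, :n], K[n:, n:]                  # lower n x n blocks G_k(lam), H_k(lam)
        P = -np.linalg.solve(H, G)                   # center P_k(lam) = -H_k(lam)^{-1} G_k(lam)
        w, U = np.linalg.eigh(H)                     # H_k(lam) is Hermitian positive definite
        R = U @ np.diag(w ** (-0.5)) @ U.conj().T    # matrix radius R_k(lam) = H_k(lam)^{-1/2}
        return P, R

    # Sample call with the data of Example 2.5.1 below (a = b = 2, lam = 0.4 + 0.4i, k = 10):
    a, b = 2.0, 2.0
    V0 = np.array([[np.sqrt(a * b), a], [-b, -np.sqrt(a * b)]])
    P10, R10 = weyl_disk_data([np.eye(2)] * 10, [V0] * 10, 0.4 + 0.4j, np.array([[1.0, 0.0]]))

Calling the same function with ¯λ in place of λ yields Rk(¯λ); for the sample data the returned values should agree (up to rounding) with the closed-form expressions Pk(λ) = −√(b/a) + i/(2ka im λ) and Rk(λ) = 1/√(2ka |im λ|) derived in Example 2.5.1.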
The next corollary shows that the Weyl solutions X(λ) corresponding to matrices M ∈ D+(λ) have finite “norms” with respect to the weight matrices k. And conversely, the matrices M for which the corresponding Weyl solution X(λ) satisfies the estimate below belongs necessarily to the limiting Weyl disk D+(λ). This result illustrates the significance of the limiting Weyl disk, because it yields a lower bound for the number of linearly independent square summable solutions in Section 2.4. Corollary 2.3.12. Let α ∈ , λ ∈ CKR, M ∈ Cn×n, and suppose that Hypothesis 2.3.7 holds. Then the matrix M belongs to the limiting Weyl disk D+(λ) if and only if ∞∑ k=0 X∗ k+1(λ) k Xk+1(λ) ≤ im(M) im(λ) . Proof. It follows directly from Theorem 2.3.5, when it is applied at each k ∈ [N0+1, ∞)Z. ■ Under an additional assumption on the Weyl solution X(λ), compare with Hypothesis 2.3.4, we get from Theorem 2.3.5 also an information about the positive definiteness of the matrix δ(λ) im(M). Hypothesis 2.3.13. There exists N1 ∈ [0, ∞)Z such that for a given matrix M ∈ Cn×n and λ ∈ CKR, each column z(λ) of the Weyl solution X(λ) satisfies (2.25) with N = N1. Corollary 2.3.14. Let α ∈ , λ ∈ CKR, M ∈ D+(λ), and suppose that Hypotheses 2.3.7 and 2.3.13 hold. Then δ(λ) im(M) > 0 and hence M is invertible. Proof. The result follows from Corollary 2.3.12 and the second part of Theorem 2.3.5, when it is applied at each k ∈ [N0 + 1, ∞)Z ∩ [N1 + 1, ∞)Z. ■ The last result of this section describes the matrices M which lie in the “interior” of D+(λ). This statement requires a strengthened version of Hypothesis 2.3.13. This stronger assumption guarantees that the Weyl disks are strictly nested, i.e., D+(λ) ⫋ Dm(λ) ⫋ Dk(λ) for all m > k large enough. – 24 – 2.3. Weyl disk and Weyl circle Hypothesis 2.3.15. There exists N2 ∈ [0, ∞)Z such that for all m > k ∈ [N2, ∞)Z and for a given matrix M ∈ Cn×n and λ ∈ CKR, each column z(λ) of the Weyl solution X(λ) satisfies the inequality m−1∑ j=k z∗ j+1(λ) j zj+1(λ) > 0. Theorem 2.3.16. Let α ∈ , λ ∈ CKR, M ∈ Cn×n, and suppose that Hypotheses 2.3.7 and 2.3.15 hold. Then M ∈ D+(λ) if and only if Ek(M) < 0 for all k ∈ [N0 + 1, ∞)Z ∩ [N2 + 1, ∞)Z. Proof. Let us start with the condition M ∈ D+(λ), which is equivalent to M ∈ Dk(λ) for all k ∈ [N0 + 1, ∞)Z ∩ [N2 + 1, ∞)Z. With such an index k and with m > k we get from the representation of Ek(M) and Em(M) in (2.33) that Ek(M) = Em(M) − 2| im(λ)| m−1∑ j=k X∗ j+1(λ) j Xj+1(λ). The second term on the right-hand side of the latter equality is positive due to Hypothesis 2.3.15, while the first term satisfies Em(M) ≤ 0. Thus, M ∈ D+(λ) is truly equivalent to Ek(M) < 0 for all k ∈ [N0 + 1, ∞)Z ∩ [N2 + 1, ∞)Z. ■ Remark 2.3.17. (i) As in [A4, Formula (4.57)], we can define a M+(λ)-function corresponding to the matrices from the limiting Weyl disk D+(λ). In particular, for α ∈ , λ ∈ CKR and under Hypothesis 2.3.7 we define M+(λ) ∈ D+(λ) as the limit of a subsequence of the matrices Mk(λ, α, βk) ∈ Dk(λ), i.e., M+(λ) := lim j→∞ Mkj (λ, α, βkj ), (2.50) where iδ(λ)βkj Jβ∗ kj ≥ 0 and βkj β∗ kj = I, see also Remark 2.4.4 below. The function M+(λ) defined in (2.50) is called a half-line Weyl–Titchmarsh M(λ)-function and it satisfies M∗ +(λ) = M+(¯λ) for all λ ∈ CKR. Moreover, it is analytic on C+ and C− (and consequently it is a Herglotz function) as a limit of uniformly bounded analytic functions Mk(λ) with λ restricted to compact subsets of the upper and/or lower half-planes of C, see [142, Lemma 2.14]. 
(ii) In [26, Definition 3.4], the matrices on the limiting Weyl circle were defined as the elements M ∈ D+(λ) for which there exists a sequence {kj}∞ j=1 such that kj → ∞ as j → ∞ and limj→∞ Ekj (M) = 0. By using Corollary 2.3.12 and Theorem 2.3.2, this condition is equivalent to ∞∑ k=0 X∗ k+1(λ) k Xk+1(λ) = im(M) im(λ) . (2.51) Moreover, with the aid of the Lagrange identity (Theorem 2.1.7) we obtain for every k ∈ [0, ∞)Z that X∗ k+1(λ)JXk+1(λ) = 2i im(λ) [ im(M) im(λ) − k∑ j=0 X∗ j+1(λ) j Xj+1(λ) ] . (2.52) – 25 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter Hence one easily concludes that the sum on the right-hand side of (2.52) converges for k → ∞ and (2.51) is satisfied if and only if limk→∞ X∗ k+1 (λ)JXk+1(λ) = 0. The latter statement is a generalization of [26, Theorems 3.8(ii) and 3.9] and [142, Theorem 6.3] to systems (Sλ) with a general linear dependence on λ. 2.4 Square summable solutions In this section we extend the classification of system (Sλ) being in the limit point and limit circle case from [A4, Section 4] to a general linear dependence on λ. In the literature there are different (but equivalent) approaches to this issue and we essentially follow [36] and [A4]. Hence we consider the linear space of weighted square summable sequences (with the weight ) with entries in C2n, i.e., ℓ2 = ℓ2 [0, ∞)Z := { z ∈ C([0, ∞)Z)2n | ||z|| < ∞ } , (2.53) where ||z|| := √ ⟨z, z⟩ and ⟨z, ˜z⟩ := ∞∑ k=0 z∗ k+1 k ˜zk+1. (2.54) However we point out that ℓ2 is never a Hilbert space, because || · || is only a semi-norm and ⟨·, ·⟩ is only a semi-inner product as a consequence of the fourth condition in (2.1), compare with (6.60). We also denote by N(λ) the linear space of all square summable solutions of system (Sλ), i.e., N(λ) := { z ∈ ℓ2 | z solves system (Sλ) } . Just for the sake of curiosity, we note that the space of all non-square summable solutions does not form a linear space. In the next result we show the ℓ2 -properties of the Weyl solution X(λ) with respect to the choice of M. Theorem 2.4.1. Let α ∈ , λ ∈ CKR, and M ∈ Cn×n. Then the columns of X(λ) form a system of linearly independent solutions of system (Sλ). If in addition Hypothesis 2.3.7 holds and M ∈ D+(λ), then the columns of the Weyl solution X(λ) = X(λ, α, M) are square summable, i.e., they belong to N(λ) and so dim N(λ) ≥ n. Proof. Let j ∈ {1, . . . , n} and denote by z[j] := X(λ)ej the columns of the Weyl solution X(λ), where ej stands for the j-th unit vector of the standard basis in Cn. If c1 z[1] k + · · · + cn z[n] k = 0 for some k ∈ [0, ∞)Z and c1, . . . , cn ∈ C, then Xk(λ)c = 0 with c := (c1, . . . , cn)⊤. Since k(λ) is invertible on [0, ∞)Z, it follows from (2.23) that (I, M∗)∗c = 0, i.e., c = 0 and the solutions z[1] , . . . , z[n] are linearly independent. Under Hypothesis 2.3.7 the limiting Weyl disk D+(λ) exists and then for M ∈ D+(λ) we have by Corollary 2.3.12 that z[j] 2 = ∞∑ k=0 z [j]∗ k+1 k z [j] k+1 ≤ e∗ j im(M) im(λ) ej < ∞ for all j ∈ {1, . . . , n}. Therefore, in this case z[j] ∈ N(λ) for all j ∈ {1, . . . , n}. ■ As a consequence of Theorem 2.4.1 we get under Hypothesis 2.3.7 the estimate n ≤ dim N(λ) ≤ 2n for each λ ∈ CKR. (2.55) This fact motivates the following notions, which correspond to the Weyl dichotomy in the scalar case (i.e., for n = 1 we have either dim N(λ) = 1 or dim N(λ) = 2). – 26 – 2.4. Square summable solutions Definition 2.4.2. Let λ ∈ C. 
System (Sλ) is said to be in the limit point case if dim N(λ) = n and to be in the limit circle case if dim N(λ) = 2n. Any other case, i.e., n + 1 ≤ dim N(λ) ≤ 2n − 1, is simply called intermediate. The special cases introduced in Definition 2.4.2 are singled out because of their sui generis characteristics. In particular, in the limit point case there exists a unique M ∈ Cn×n such that the corresponding Weyl solution X(λ) is square summable, while in the limit circle case the Weyl solution X(λ) is square summable for any M ∈ Cn×n and this behavior is invariant with respect to λ, see Theorem 2.4.17. In the following theorem we show that the Weyl disks Dk(λ) collapse to a singleton as k → ∞ if system (Sλ) is in the limit point case. In particular, the center P+(λ) given by (2.46) is the only matrix belonging to D+(λ). This justifies the above terminology of being in the limit point case for system (Sλ) with dim N(λ) = n. For this particular situation we show in the proof below that the columns of Z(λ) do not belong to N(λ). This result is a generalization of [A4, Lemma 4.11]. Theorem 2.4.3. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. System (Sλ) is in the limit point case if and only if the limiting matrix radius R+(λ) = 0. In this case the limiting Weyl disk satisfies D+(λ) = {P+(λ)} and D+(¯λ) = {P+(¯λ)}. Proof. Assume that system (Sλ) is in the limit point case, i.e., dim N(λ) = n. Since the columns of the fundamental matrix (λ) introduced in (2.22) span all solutions of system (Sλ), the definition of X(λ) as Z(λ) + Z(λ)M with M ∈ D+(λ) implies that the columns of Z(λ) and X(λ) also form a basis of all solutions of system (Sλ). Hence, from dim N(λ) = n and Theorem 2.4.1 we conclude that the columns of Z(λ) do not belong to N(λ). It then follows from formula (2.37) that the matrix Hk(λ) is nondecreasing for k ∈ [0, ∞)Z without any upper bound, i.e., its eigenvalues (being real) tend to ∞. Therefore, the function Rk(λ) has its limit at ∞ equal to zero, i.e., R+(λ) = 0. This argument can be reversed, i.e., if R+(λ) = 0, then the eigenvalues of Hk(λ) tend to ∞ and formula (2.37) yields that the columns of Z(λ) do not belong to ℓ2 . Since the columns of Z(λ) and Z(λ) form a basis of all solutions of (Sλ), it then follows that dim N(λ) ≤ n. But since at the same time dim N(λ) ≥ n by Theorem 2.4.1, we obtain dim N(λ) = n and system (Sλ) is in the limit point case. Finally, if R+(λ) = 0 (or, equivalently, system (Sλ) is in the limit point case), then the equality D+(λ) = {P+(λ)} follows from Corollary 2.3.11. At the same time we get from Corollary 2.3.11 that D+(¯λ) = {P+(¯λ)}. ■ Remark 2.4.4. (i) In the continuous time setting, i.e., for linear Hamiltonian differential systems or Sturm–Liouville differential equations, it can happen that the limiting Weyl disk D+(λ) is a singleton consisting, of course, of the limiting center P+(λ), but the corresponding limiting matrix radius R+(λ) is not the zero matrix, i.e, it satisfies rank R+(λ) ≥ 1. Such an example is constructed in [120] for the fourth order Sturm– Liouville differential equation. Although the same behavior can be expected also in the discrete case, a specific example is still missing, see also Remark 2.4.21. (ii) In addition, Theorem 2.4.3 gives a simpler characterization of the half-line Weyl– Titchmarsh M(λ)-function M+(λ) from (2.50). In particular, in the limit point case the limit in (2.50) can be taken over all k ∈ [N0 + 1, ∞)Z without going to subsequences and also with βk ≡ β ∈ . 
That is, we have in this case M+(λ) = lim k→∞ Mk(λ, α, β). – 27 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter In the next part of this section we extend the result in Theorem 2.4.3 in a way which includes the precise effect of the matrix radii R+(λ) and R+(¯λ) on the number of linearly independent square summable solutions of system (Sλ). Specifically, we study the relationship between the number r(λ) := rank R+(λ), λ ∈ CKR, (2.56) and the dimension of N(λ). The statements in Theorem 2.4.5–Corollary 2.4.9 extend the results in [142, Section 4] from special linear Hamiltonian difference systems to discrete symplectic systems. In addition, these results were established as new even for system (Sλ) with the special linear dependence on λ in (2.7). At the same time, they can be regarded as discrete time analogues of the corresponding results for linear Hamiltonian differential systems in [108, Section 5] and [141]. From the definition of R+(λ) in (2.45) and from (2.42) one can see that the value of r(λ) depends also on α ∈ , i.e., we should write r(λ, α) instead of r(λ) in (2.56). But, obviously, the number of linearly independent square summable solutions of system (Sλ) does not depend on the choice α ∈ and we will see in Theorem 2.4.8 below that also r(λ, α) is independent of α, so that the notation in (2.56) is justified. Theorem 2.4.5. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then system (Sλ) has at least m := n + min{r(λ), r(¯λ)} linearly independent square summable solutions, i.e., we have dim N(λ) ≥ m. Proof. Consider the Weyl solution X(λ) defined through the matrix M = P+(λ), i.e., Xk(λ) := k(λ) ( I P+(λ) ) = ( ˜z[1] k , . . . , ˜z[n] k ) for all k ∈ [0, ∞)Z, where ˜z[j] ∈ C([0, ∞)Z)2n for j ∈ {1, . . . , n}. Then, by Theorem 2.4.1, the sequences ˜z[1] , . . . , ˜z[n] represent n linearly independent square summable solutions of system (Sλ). We also consider the Weyl solution X(λ) defined by the matrix M = P+(λ) + R+(λ)UR+(¯λ), i.e., Xk(λ) := k(λ) ( I M ) = ( ˆz[1] k , . . . , ˆz[n] k ) for all k ∈ [0, ∞)Z, where U ∈ U is such that rank R+(λ)UR+(¯λ) = min{r(λ), r(¯λ)}, see Proposition 1.1.2. It follows from Theorem 2.4.1 that ˆz[1] , . . . , ˆz[n] are also square summable solutions of (Sλ), because by Corollary 2.3.11 the above matrix M belongs to the limiting disk D+(λ). If we put z[j] := ˜z[j] and z[n+j] := ˆz[j] for j ∈ {1, . . . , n}, then ( z[1] k , . . . , z[2n] k ) = k(λ) ( I I P+(λ) M ) = k(λ) ( I 0 P+(λ) R+(λ)UR+(¯λ) ) ( I I 0 I ) for all k ∈ [0, ∞)Z. Since the rank of the middle matrix on the right-hand side above is equal to m and the other two matrices are invertible, we obtain that rank ( z[1] k , . . . , z[2n] k ) = m as well, from which the statement follows. ■ Before we present a precise relationship between r(λ) and dim N(λ) in Theorem 2.4.8 below, we proceed with some preliminary results. In the following theorem we establish a connection between the value of r(λ) and the asymptotic behavior of the eigenvalues of the matrix Hk(λ) as k → ∞. For a given λ ∈ CKR we denote by µ[1] k ≤ · · · ≤ µ[n] k the eigenvalues of the positive semidefinite matrix Hk(λ) arranged in the nondecreasing order (suppressing the argument λ). – 28 – 2.4. Square summable solutions Theorem 2.4.6. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then for ℓ ∈ {1, . . . , n} we have r(λ) = ℓ if and only if 0 < lim k→∞ µ [j] k =: ρ[j] < ∞, 1 ≤ j ≤ ℓ, and lim k→∞ µ [j] k = ∞, ℓ + 1 ≤ j ≤ n. Moreover, the numbers ( ρ[1] )−1/2 , . 
. . , ( ρ[ℓ] )−1/2 are the positive eigenvalues of R+(λ). Proof. Let k ∈ [N0 + 1, ∞)Z, where N0 is from Hypothesis 2.3.7. Since Hk(λ) is Hermitian, there exists a unitary matrix Uk such that U∗ k Hk(λ)Uk = diag { µ[1] k , . . . , µ[n] k } . (2.57) From the definition of Rk(λ) in (2.42) we have U∗ k Rk(λ)Uk = diag {( µ[1] k )−1/2 , . . . , ( µ[n] k )−1/2} , (2.58) so that ( µ[1] k )−1/2 , . . . , ( µ[n] k )−1/2 are all the eigenvalues of Rk(λ). Since the set U of unitary matrices is compact, there exists a subsequence {Ukj }∞ j=1 which converges as j → ∞ to a unitary matrix U+, see also Proposition 1.1.1. Hence, from (2.58) and (2.45) we get U∗ + R+(λ)U+ = diag { lim j→∞ ( µ[1] kj )−1/2 , . . . , lim j→∞ ( µ[n] kj )−1/2} . This implies that r(λ) = ℓ if and only if the limits lim j→∞ µ[1] kj = ρ[1] , . . . , lim j→∞ µ[ℓ] kj = ρ[ℓ] and lim j→∞ µ[ℓ+1] kj = · · · = lim j→∞ µ[n] kj = ∞, where ρ[1] , . . . , ρ[ℓ] are finite and positive. Therefore ( ρ[1] )−1/2 , . . . , ( ρ[ℓ] )−1/2 are the positive eigenvalues of R+(λ), while the remaining n − ℓ eigenvalues of R+(λ) are zero. ■ The following lemma will be utilized in the first part of the proof of the subsequent Theorem 2.4.8, which is the main result of this section. Lemma 2.4.7. Let α ∈ , λ ∈ CKR, q ∈ {0, 1, . . . , n}, and suppose that Hypothesis 2.3.7 holds. System (Sλ) has exactly n + q linearly independent square summable solutions if and only if there exists an n × q matrix Q with rank Q = q such that the columns of Z(λ)Q belong to ℓ2 , and Z(λ)η ∈ ℓ2 implies η ∈ Ran Q. Proof. Let us assume that system (Sλ) has n + q linearly independent square summable solutions z[1] , . . . , z[n+q] . By Theorem 2.4.1, these solutions can be ordered so that the first n solutions z[1] , . . . , z[n] correspond to the columns of the Weyl solution, which is defined through the center P+(λ), i.e., Xk(λ) := k(λ) ( I P+(λ) ) = ( z[1] k , . . . , z[n] k ) for all k ∈ [0, ∞)Z. (2.59) Then there exists a constant 2n × q matrix K such that rank K = q and ( z[n+1] k , . . . , z [n+q] k ) = k(λ)K for all k ∈ [0, ∞)Z. – 29 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter If we write K = (K∗ 1 , K∗ 2 )∗ with n × q blocks K1, K2, then we obtain for all k ∈ [0, ∞)Z that ( z[1] k , . . . , z [n+q] k ) = k(λ) ( I K1 P+(λ) K2 ) and rank ( I K1 P+(λ) K2 ) = n + q, (2.60) because k(λ) is invertible on [0, ∞)Z. If we put Q := K2 − P+(λ)K1 ∈ Cn×q, then ( In K1 P+(λ) K2 ) ( In −K1 0 Iq ) = ( In 0 P+(λ) Q ) . (2.61) It now follows from (2.60) that rank Q = q. In addition, from the first equality in (2.60), (2.61), and (2.22) we get ( z[1] k , . . . , z [n+q] k ) ( In −K1 0 Iq ) = k(λ) ( In 0 P+(λ) Q ) = ( Xk(λ) Zk(λ)Q ) , which implies that the columns of Z(λ)Q belong to ℓ2 . Hence (Xk(λ), Zk(λ)Q) consists of n + q linearly independent square summable solutions. Finally, if we have Z(λ)η ∈ ℓ2 for some η ∈ Cn, then by the previous part there exists ξ = (ξ∗ 1 , ξ∗ 2 )∗ ∈ Cn+q such that Zk(λ)η = ( Xk(λ) Zk(λ)Q ) ξ (2.59) = Zk(λ)ξ1 + Zk(λ)[P+(λ)ξ1 + Qξ2]. Since k(λ) = (Zk(λ), Zk(λ)), the above equality can be written as k(λ) ( ξ1 P+(λ)ξ1 + Qξ2 − η ) = 0, from which we get ξ1 = 0 and P+(λ)ξ1 + Qξ2 − η = 0, i.e., η = Qξ2 ∈ Ran Q as required. Conversely, assume that there exists a matrix Q ∈ Cn×q with rank Q = q such that the columns of Z(λ)Q belong to ℓ2 and that η ∈ Ran Q whenever Z(λ)η ∈ ℓ2 . Let X(λ) be as in (2.59). Then the equality Tk(λ) := ( Xk(λ) Zk(λ)Q ) = k(λ) ( In 0 P+(λ) Q ) implies that rank Tk(λ) = n + q for all k ∈ [0, ∞)Z. 
This shows that system (Sλ) has at least n + q linearly independent square summable solutions, namely these are the columns of Tk(λ). Let z ∈ N(λ) be arbitrary. We show that zk is a linear combination of the columns of Tk(λ), i.e., we prove that zk ∈ Ran Tk(λ) for all k ∈ [0, ∞)Z. Since the matrix ( Xk(λ) Zk(λ) ) = k(λ) ( I 0 P+(λ) I ) is also a fundamental matrix of system (Sλ), there exists ζ = (ζ∗ 1 , ζ∗ 2 )∗ ∈ C2n such that zk = ( Xk(λ) Zk(λ) ) ζ = Xk(λ)ζ1 + Zk(λ)ζ2 for all k ∈ [0, ∞)Z. This implies that Z(λ)ζ2 = z−X(λ)ζ1 ∈ N(λ). Thus, by the current assumption, the vector ζ2 ∈ Ran Q, i.e., ζ2 = Qυ for some vector υ ∈ Cq. It then follows that zk = Xk(λ)ζ1 + Zk(λ)Qυ = ( Xk(λ) Zk(λ)Q ) ( ζ1 υ ) ∈ Ran Tk(λ) for all k ∈ [0, ∞)Z. Therefore, system (Sλ) has exactly n + q linearly independent square summable solutions. ■ – 30 – 2.4. Square summable solutions Now we can give an exact relation between the number of linearly independent square summable solutions of system (Sλ) and the rank of the limiting matrix radius R+(λ) of the limiting Weyl disk. The result below extends and makes more precise the statements in Theorems 2.4.1 and 2.4.5. Theorem 2.4.8. Let α ∈ and λ ∈ CKR be given, suppose that Hypothesis 2.3.7 holds, and define r(λ) by (2.56). Then system (Sλ) has exactly n+r(λ) linearly independent square summable solutions, i.e., dim N(λ) = n+r(λ). Furthermore, the number r(λ) is independent of the coefficient matrix α determining the initial boundary condition in (2.24). Proof. Since dim N(λ), i.e., the number of square summable solutions of system (Sλ), does not depend on the choice of α, the number r(λ) also does not depend on α. Similarly as in [141,142], the proof is divided into two parts. In the first part we derive the estimate dim N(λ) ≤ n + r(λ), while the opposite inequality will be given in the second part of the proof. We abbreviate r := r(λ). Assume that there exists a number q with r < q ≤ n such that system (Sλ) has exactly n + q square summable solutions. By Lemma 2.4.7, there exists a constant n × q matrix Q with rank Q = q such that the columns of Z(λ)Q belong to ℓ2 and η ∈ Ran Q whenever Z(λ)η ∈ ℓ2 . Using (2.57), for every k ∈ [N0 + 1, ∞)Z there is a unitary matrix Uk such that Hk(λ) = Uk diag { µ[1] k , . . . , µ[n] k } U∗ k. (2.62) By Proposition 1.1.1, there exists a subsequence {kj}∞ j=1 such that kj → ∞ as j → ∞ and Ukj → U+, where U+ is unitary, see also the proof of Theorem 2.4.6. If we put Kk := U∗ k Q = ( K[1]∗ k , K[2]∗ k )∗ ∈ Cn×q with K[1] k ∈ Cr×q and K[2] k ∈ C(n−r)×q, then K = ( K[1]∗ , K[2]∗ )∗ := lim j→∞ Kkj = U∗ + Q with K[1] := lim j→∞ K[1] kj and K[2] := lim j→∞ K[2] kj . It can be easily seen that rank K = q, because U+ is unitary and rank Q = q. Moreover, since q > r, it follows that rank K[1] ≤ r and rank K[2] ≥ 1. Hence there exists ξ ∈ Cq such that K[2] ξ 0, and then for zk := Zk(λ)Qξ on [0, ∞)Z we have z ∈ ℓ2 by Lemma 2.4.7. On the other hand, from (2.37), (2.62), and the above definition of Kk we get ||z||2 = lim k→∞ ξ∗ Q∗ ( k−1∑ j=0 Z ∗ j+1(λ) j Zj+1(λ) ) Qξ (2.37) = 1 2| im(λ)| lim k→∞ ξ∗ Q∗ Hk(λ)Qξ (2.62) = 1 2| im(λ)| lim k→∞ ξ∗ Q∗ Uk diag { µ[1] k , . . . , µ[n] k } U∗ k Qξ = 1 2| im(λ)| lim k→∞ ξ∗ ( K[1]∗ k diag { µ[1] k , . . . , µ[r] k } K[1] k + K[2]∗ k diag { µ[r+1] k , . . . , µ[n] k } K[2] k ) ξ. By Theorem 2.4.6 (with ℓ = r) we know that the eigenvalues µ[1] k , . . . , µ[r] k have finite limits as k → ∞, denoted by ρ[1] , . . . , ρ[r] , while the eigenvalues µ[r+1] k , . . . , µ[n] k tend to ∞. 
This implies that lim j→∞ ξ∗ K[1]∗ kj diag { µ[1] kj , . . . , µ[r] kj } K[1] kj ξ = ξ∗ K[1]∗ diag { ρ[1] , . . . , ρ[r] } K[1] ξ < ∞, lim j→∞ ξ∗ K[2]∗ kj diag { µ[r+1] kj , . . . , µ[n] kj } K[2] kj ξ = ∞, – 31 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter because K[2] kj ξ → K[2] ξ 0 as j → ∞. This shows that ||z||2 = ∞, which contradicts z ∈ ℓ2 . Thus, system (Sλ) has at most n + r linearly independent square summable solutions. Conversely, we will show that dim N(λ) ≥ n+r by constructing n+r linearly independent solutions of system (Sλ) from ℓ2 . By Theorem 2.4.1, we know that the columns of the Weyl solution defined in (2.59) form n linearly independent square summable solutions of (Sλ). For k ∈ [N0 +1, ∞)Z, let Uk ∈ Cn×n be a unitary matrix such that (2.57) and (2.62) hold. We put Uk = ( U[1] k , U[2] k ) with full rank blocks U[1] k ∈ Cn×r and U[2] k ∈ Cn×(n−r). It follows that the dimension of kernel of U[2]∗ k is equal to r. Hence, if ξ[1] k , . . . , ξ[r] k is an orthonormal basis for Ker U[2]∗ k , then the matrix Qk := ( ξ[1] k , . . . , ξ[r] k ) ∈ Cn×r satisfies Q∗ k Qk = Ir and U[2]∗ k Qk = 0(n−r)×r for all k ∈ [N0 + 1, ∞)Z, (2.63) where 0(n−r)×r means the (n − r) × r zero matrix. By the aid of Proposition 1.1.1 again, there exist a subsequence such that Ukj → U+ and Qkj → Q+ ∈ Cn×r for j → ∞, where the matrix U+ = ( U[1] + , U[2] + ) is unitary, U[1] + ∈ Cn×r and U[2] + ∈ Cn×(n−r) and, by (2.63), the matrix Q+ satisfies Q∗ + Q+ = Ir, rank Q+ = r, and U[2]∗ + Q+ = 0(n−r)×r. If we denote by em for 1 ≤ m ≤ r the m-th unit vector in Cr (similarly as in the proof of Theorem 2.4.1), then Z(λ)Q+ em 2 (2.37) = 1 2| im(λ)| lim k→∞ e∗ m Q∗ + Hk(λ)Q+ em. (2.64) Fix now k ∈ [N0 + 1, ∞)Z. Then for every kj ≥ k we have by the monotonicity of H(λ) that e∗ m Q∗ kj Hk(λ)Qkj em ≤ e∗ m Q∗ kj Hkj (λ)Qkj em (2.62) = e∗ m Q∗ kj Ukj diag { µ[1] kj , . . . , µ[n] kj } U∗ kj Qkj em (2.63) = e∗ m Q∗ kj U[1] kj diag { µ[1] kj , . . . , µ[r] kj } U[1]∗ kj Qkj em. (2.65) Upon taking j → ∞ in inequality (2.65) we get e∗ m Q∗ + Hk(λ)Q+ em ≤ e∗ m Q∗ + U[1] + diag { ρ[1] , . . . , ρ[r] } U[1]∗ + Q+ em =: T < ∞, (2.66) where ρ[1] , . . . , ρ[r] are the finite limits of the eigenvalues µ[1] kj , . . . , µ[r] kj as j → ∞, see Theorem 2.4.6. Since the estimate in (2.66) holds for every k ∈ [N0 + 1, ∞)Z, it follows from equality (2.64) that 2| im(λ)| × Z(λ)Q+ em 2 = lim k→∞ e∗ m Q∗ + Hk(λ)Q+ em (2.66) ≤ T < ∞. This shows that the columns of Z(λ)Q+ belong to ℓ2 . Consequently, system (Sλ) has at least n + r linearly independent square summable solutions, which are generated by the columns of the matrix Yk := ( Zk(λ) Zk(λ) ) ( I 0n×r P+(λ) Q+ ) = k(λ) ( I 0n×r P+(λ) Q+ ) . (2.67) Since k(λ) is invertible and rank Q+ = r, it follows that the matrix Yk in (2.67) has n + r linearly independent columns. Hence, we proved that system (Sλ) has at least n + r linearly independent square summable solutions, which completes the proof. ■ Combining the results of Theorems 2.4.6 and 2.4.8 we get the following supplement of Theorem 2.4.8. – 32 – 2.4. Square summable solutions Corollary 2.4.9. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then 0 < lim k→∞ µ [j] k =: ρ[j] < ∞, 1 ≤ j ≤ r(λ), and lim k→∞ µ [j] k = ∞, r(λ) + 1 ≤ j ≤ n, where the numbers ( ρ[1] )−1/2 , . . . , ( ρ[r(λ)] )−1/2 are the positive eigenvalues of R+(λ). Moreover, yet another simple corollary follows from Theorem 2.4.8 as a counterpart of Theorem 2.4.3. Corollary 2.4.10. 
Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Then system (Sλ) is in the limit circle case if and only if the matrix R+(λ) is invertible, i.e., r(λ) = n. In the remaining part of this section we present some characterizations of the two extreme cases r(λ) = 0 (i.e., the limit point case) and r(λ) = n (i.e., the limit circle case). We will utilize the following strengthened Atkinson-type condition, which includes both Hypotheses 2.3.7 and 2.3.13. An alternative terminology is that system (Sλ) is definite on the discrete interval [0, ∞)Z, see Section 6.2 for more details. Hypothesis 2.4.11 (Strong Atkinson condition – infinite). There exists N3 ∈ [0, ∞)Z such that each nontrivial solution z(λ) of system (Sλ) satisfies inequality (2.25) with N = N3 for every λ ∈ CKR. The following three results represent direct generalizations of [A4, Theorems 4.13, 4.14 and Corollary 4.15] from the special linear dependence on λ in (2.7) to the general linear dependence on λ. The proofs of the statements in Theorems 2.4.12 and 2.4.13 follow exactly the same ideas as in the corresponding proofs in [A4] quoted above, which are now considered for the general linear dependence on λ. The details are therefore omitted. Note that Theorem 2.4.12 requires the strengthened Atkinson-type condition in Hypothesis 2.4.11, because it uses in its proof both the limiting Weyl disk D+(λ) and the Mk(λ) functions for large k. On the other hand, in Theorem 2.4.13 we utilize the weaker condition from Hypothesis 2.3.7, because its proof uses only the limiting Weyl disk D+(λ). Theorem 2.4.12. Let α ∈ , λ, ν ∈ CKR, and suppose that Hypothesis 2.4.11 holds. If systems (Sλ) and (Sν) are both in the limit point or limit circle case, then lim k→∞ X∗ k(λ, α, M+(λ))JXk(ν, α, M+(ν)) = 0, (2.68) where X(λ, α, M+(λ)) ∈ C([0, ∞)Z)2n×n and X(ν, α, M+(ν)) ∈ C([0, ∞)Z)2n×n mean the Weyl solutions of systems (Sλ) and (Sν) defined as in (2.23) through the matrices M+(λ) and M+(ν), respectively, which are determined by the limit in (2.50). Theorem 2.4.13. Let α ∈ , λ ∈ CKR, and suppose that Hypothesis 2.3.7 holds. Systems (Sλ) and (S¯λ) are in the limit point case if and only if for every square summable solutions z(λ) and ˜z(¯λ) of (Sλ) and (S¯λ), respectively, we have z∗ k(λ)J ˜zk(¯λ) = 0 for all k ∈ [0, ∞)Z. (2.69) Corollary 2.4.14. Let α ∈ and suppose that Hypothesis 2.4.11 holds. System (Sλ) is in the limit point case for all λ ∈ CKR if and only if for every λ, ν ∈ CKR and every square summable solutions z(λ) and ˜z(ν) of systems (Sλ) and (Sν), respectively, we have lim k→∞ z∗ k(λ)J ˜zk(ν) = 0. (2.70) Proof. If system (Sλ) is in the limit point case for every λ ∈ CKR, then the square summable solution z(λ) must be a constant multiple of the Weyl solution X(λ), by Theorem 2.4.1. – 33 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter Similarly, the square summable solution z(ν) is a constant multiple of X(ν). Identity (2.70) then follows from Theorem 2.4.12. Conversely, let (2.70) be satisfied for every λ, ν ∈ CKR and every z(λ) ∈ N(λ) and ˜z(ν) ∈ N(ν). Fix λ ∈ CKR and put ν := ¯λ. Then from Lemma 2.1.5 we know that the value of z∗ k (λ)J ˜zk(¯λ) is constant on k ∈ [0, ∞)Z. Hence the limit in (2.70) implies that identity (2.69) is satisfied, and so system (Sλ) is in the limit point case by Theorem 2.4.13. ■ From Theorems 2.4.12 and 2.1.7 we also get the following statement. Corollary 2.4.15. Let α ∈ , λ, ν ∈ CKR, and suppose that Hypothesis 2.4.11 holds. 
If systems (Sλ) and (Sν) are both in the limit point or limit circle case, then (¯λ − ν) ∞∑ k=0 X∗ k+1(λ, α, M+(λ)) k Xk+1(ν, α, M+(ν)) = M∗ +(λ) − M+(ν), (2.71) where the Weyl solutions X(λ, α, M+(λ)) ∈ C([0, ∞)Z)2n×n and X(ν, α, M+(ν)) ∈ C([0, ∞)Z)2n×n are the same as in Theorem 2.4.12. Proof. By Theorem 2.1.7, we get that the left-hand side of (2.71) is equal to the difference lim k→∞ X∗ k+1(λ, α, M+(λ))JXk+1(ν, α, M+(ν)) − X∗ 0(λ, α, M+(λ))JX0(ν, α, M+(ν)). While the limit above is zero by (2.68), the second term gives by the definition of the Weyl solution in (2.23) the equality X∗ 0 (λ, α, M+(λ))JX0(ν, α, M+(ν)) = M+(ν) − M∗ +(λ). ■ Remark 2.4.16. It can be shown under Hypothesis 2.3.7 that if ψk denotes the minimal eigenvalue of the Hermitian matrix k for k ∈ [0, ∞)Z and if ∞∑ k=0 ψk = ∞, (2.72) then system (Sλ) is in the limit point case. This fact follows in a similar way as in [154, Theorem 5.1] by using Theorem 2.4.13. However, system (Sλ) is such that ψk = 0 for all k ∈ [0, ∞)Z, because the matrices k are singular, see Lemma 2.1.1 and the subsequent paragraph. Therefore, condition (2.72) can never be fulfilled in the present theory. We conclude this section by a generalization of one half of the classical Weyl alternative, see e.g. [171, Theorem 8.27]. More precisely, we show that if system (Sλ) is in the limit circle case for some λ0 ∈ C (i.e., dim N(λ0) = 2n), then it is in the limit circle case for every λ ∈ C (i.e., dim N(λ) ≡ 2n). In other words, we derive the invariance of the limit circle case, which provides a discrete analogue of the result established in [9, Theorem 9.11.2] for system (2.5). However we emphasize that in contrast to the latter result we do not need to impose any additional assumptions on the coefficient matrices of system (Sλ) in the present setting. Similar statements for the second order Sturm–Liouville difference equations and linear Hamiltonian difference systems can be found, respectively, in [9, Theorem 5.6.1] and [142, Theorem 5.5]. In addition, we skip the proof, because it follows immediately from more a general result derived in Chapter 4, see Theorem 4.2.2 and Remark 4.2.4. Theorem 2.4.17. If there exists λ0 ∈ C such that system (Sλ0 ) is in the limit circle case, then system (Sλ) is in the limit circle case for every λ ∈ C. – 34 – 2.4. Square summable solutions Remark 2.4.18. From the discussion concerning equations (2.8)–(2.10) in the introduction of this chapter and from Theorem 2.4.17 we easily deduce the invariance of the limit circle case (i.e., of the situation when all solutions are square summable with respect to the weight W) for any even order vector-valued Sturm–Liouville difference equation, Jacobi equation, and symmetric three term recurrence relation. Indeed, since the corresponding system (Sλ) is such that k = diag{Wk, 0, . . . , 0} or k = diag{Wk, 0}, we get immediately the equality y∗ k+1 (λ)Wk yk+1(λ) = z∗ k (λ) k zk+1(λ) for all k ∈ [0, ∞)Z. Therefore a solution y(λ) ∈ C([0, ∞)Z)n of equations (2.8) or (2.9) or (2.10) satisfies ∑∞ k=0 y∗ k+1 (λ)Wk yk+1(λ) < ∞ if and only if the corresponding z(λ) belongs to ℓ2 with respect to the weight specified earlier, see also Remark 4.2.7. As a direct consequence of Theorem 2.4.17 we obtain the following criterion for the limit circle case. This result corresponds to [9, Theorem 5.8.1] and [134, Theorem 6.3] for the second order Sturm–Liouville difference equations and linear Hamiltonian difference systems. 
Again we skip the proof, because the statement follows from Corollary 4.2.3 and Remark 4.2.4. We note that the matrix norm ||·||1 used in (2.73) below can be replaced by any other matrix norm because of their equivalence. Corollary 2.4.19. Assume that ∞∑ k=0 ||Sk − I||1 < ∞ and ∞∑ k=0 || k ||1 < ∞. (2.73) Then system (Sλ) is in the limit circle case for all λ ∈ C. Upon combining Theorem 2.4.17 and Corollary 2.4.10 we get the following result. Corollary 2.4.20. Assume that Hypothesis 2.3.7 holds and that λ0 ∈ C is such that (Sλ0 ) is in the limit circle case. Then r(λ) = n for all λ ∈ CKR. Remark 2.4.21. Under the assumptions of Corollary 2.4.20 we can deduce that the value of r(λ) is constant and equal to n on CKR. This observation gives rise to two additional and very natural questions. Are the values of r(λ) and r(¯λ) constant on some subsets of C in general, especially on the upper and lower half-planes C+ and C−? And if r(λ) and r(¯λ) are constant on C+ and C−, do they satisfy r(λ) = r(¯λ) on CKR? Of course, these questions can be formulated also for the numbers dim N(λ) and dim N(¯λ). The first answer is positive as discussed in Remark 6.4.15. The answer to the second question is positive in the limit circle case under the assumptions stated in Corollary 2.4.20, as well as in the limit point case under analogous assumptions as in Theorem 7.1.1 from Chapter 7. Moreover, if the matrices Sk and Vk are real-valued for all k ∈ [0, ∞)Z, then it follows immediately that z(λ) ∈ C([0, ∞)Z)2n solves system (Sλ) if and only if z(λ) solves (S¯λ), i.e., zk(¯λ) = zk(λ) for all k ∈ [0, ∞)Z. Thus, in that case we have dim N(λ) = dim N(¯λ), i.e., r(λ) = r(¯λ). However, in other situations we conjecture that the answer can be negative similarly as it was shown in the continuous time case by the example from [120], which was already mentioned in Remark 2.4.4(i). More specifically, in the latter example the fourth order differential equation with three and two linearly independent square integrable solutions in C+ and C−, respectively, was constructed. In the scalar case (i.e., n = 1) the estimate in (2.55) implies that dim N(λ) ∈ {1, 2}. In this case we derive from Theorem 2.4.17 its limit point counterpart for λ ∈ CKR, i.e., the second part of the Weyl alternative. Corollary 2.4.22. Let n = 1 and assume that Hypothesis 2.3.7 holds. If there λ0 ∈ C such that system (Sλ0 ) is in the limit point case, then system (Sλ) is in the limit point case for any λ ∈ CKR. – 35 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter Proof. Let system (Sλ0 ) be in the limit point case and assume that there exists λ1 ∈ CKR such that system (Sλ1 ) is not in the limit point case. Then, by n = 1 and the estimate in (2.55), we know that system (Sλ1 ) is in the limit circle case. But from Theorem 2.4.17 (applied with λ0 = λ1) we obtain that system (Sλ) is in the limit circle case for every λ ∈ C, which contradicts the original assumption that system (Sλ0 ) is in the limit point case. ■ Upon combining Theorems 2.4.17 and 2.4.22 we get the scalar symplectic analogue of the Weyl alternative for system (Sλ). Corollary 2.4.23 (Weyl alternative). Let n = 1 and assume that Hypothesis 2.3.7 holds. Then system (Sλ) is either in the limit circle case for all λ ∈ C, or in the limit point case for all λ ∈ CKR. 
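Before closing this section we note that the classification above can at least be explored numerically. The following minimal Python/NumPy sketch (an illustration added here, not part of the original text) is based on Theorem 2.4.8 and Corollary 2.4.9: it records the eigenvalues of Hk(λ) along a growing sequence of indices, and the number of eigenvalues that appear to stabilize serves as a heuristic estimate of r(λ), hence of dim N(λ) = n + r(λ). It again assumes δ(λ) = sgn(im λ) and the initial value (α∗, −Jα∗); of course, only finitely many indices can be inspected, so the output is indicative only and does not replace the criteria above, such as Corollary 2.4.19.

    import numpy as np

    def eigenvalue_trace_of_H(S, V, lam, alpha, checkpoints=(50, 100, 200, 400)):
        # Sorted eigenvalues of H_k(lam) at the given indices k, cf. (2.36) and (2.37).
        # By Corollary 2.4.9, eigenvalues that stay bounded as k grows correspond to the
        # positive eigenvalues of R_+(lam), the remaining ones tend to infinity, and the
        # number of bounded ones is r(lam), so that dim N(lam) = n + r(lam) (Theorem 2.4.8).
        # S and V must contain at least max(checkpoints) coefficient matrices.
        n = alpha.shape[0]
        J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
        delta = np.sign(lam.imag)
        Phi = np.hstack((alpha.conj().T, -J @ alpha.conj().T))
        trace = {}
        for k in range(max(checkpoints)):
            Phi = (S[k] + lam * V[k]) @ Phi
            if k + 1 in checkpoints:
                K = 1j * delta * Phi.conj().T @ J @ Phi
                trace[k + 1] = np.sort(np.linalg.eigvalsh(K[n:, n:]))  # spectrum of H_{k+1}(lam)
        return trace

For the constant-coefficient system of Example 2.5.1 below all recorded eigenvalues grow without bound, in agreement with the limit point classification obtained there, whereas eigenvalue traces that remain bounded indicate additional linearly independent square summable solutions.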
Although in this section we have conducted a thorough analysis of the number of linearly independent square summable solutions of system (Sλ), we gave absolutely no information about dim N(λ) if λ ∈ R (except for the limit circle case). Fortunately, a basic estimate of this value is derived in Theorem 6.4.16 by using the theory of linear relations developed in Chapter 6, see also Remark 6.4.15. Moreover, there exists an intimate connection between the Weyl solution X(λ) with λ ∈ R and the so-called principal (or recessive) solution of system (Sλ) in the nonoscillatory case. This connection represents one of the goals of our current research and it will be new even in the continuous case, i.e., for system (2.5). Similar results were established for the second order Sturm–Liouville differential, difference, and dynamic equations in [37] and [A23]. 2.5 Illustrating examples Now we present several examples which illustrate the results of this chapter. In the whole section we focus on the special case of system (Sλ) with n = 1 and Sk ≡ S := I2, which can be considered as a discrete analogue of the so-called no potential case known in the theory of Sturm–Liouville differential equations and linear Hamiltonian differential systems. That is, we analyze the system zk(λ) = λVk zk(λ), k ∈ [0, ∞)Z, (2.74) with two special choices of the matrices V ∈ C([0, ∞)Z)2×2 satisfying the assumptions in (2.1). In each example we determine the centers and the radii of the Weyl disk and of the limiting Weyl disk, as well as we give the corresponding limit point or limit circle classification. We note that in the limit point case we denote by X+(λ) the Weyl solution defined as in (2.23) with M = P+(λ). Example 2.5.1. Let λ ∈ CKR and consider system (2.74) with the constant matrices Vk ≡ V = ( √ ab a −b − √ ab ) , k ≡ = ( b √ ab√ ab a ) ≥ 0 for all k ∈ [0, ∞)Z, (2.75) where a > 0 and b ≥ 0 are given real numbers. This choice of V is naturally based on the properties required in (2.1). Note that for ≥ 0 we need only a ≥ 0, but as we will see, the crucial Hypothesis 2.3.7 is not satisfied when a = 0. The fundamental matrix k(λ) of system (2.74) with α = (1, 0) has in this case the form k(λ) = (I + λV)k = ( 1 + kλ √ ab kλa −kλb 1 − kλ √ ab ) , k ∈ [0, ∞)Z. (2.76) – 36 – 2.5. Illustrating examples In view of (2.22), the two solutions Zk(λ) and Zk(λ) of system (2.74) are equal to the first and second columns of the matrix k(λ) in (2.76), respectively. The sum in (2.25) with N = N0 is then equal to (N0 + 1)a, which shows that Hypothesis 2.3.7 is satisfied for any N0 ∈ [0, ∞)Z and only a > 0. From (2.36) we then get Fk(λ) = 2kb| im(λ)|, Gk(λ) = −iδ(λ) + 2k √ ab| im(λ)|, Hk(λ) = 2ka| im(λ)|, which yields through (2.42) that the center and the radius of the Weyl disk Dk(λ) for k ∈ [1, ∞)Z are Pk(λ) = − √ b/a + i 2ka im(λ) and Rk(λ) = 1 √ 2ka| im(λ)| . By taking the limit as k → ∞ we can see that the center and the radius of the limiting Weyl disk D+(λ) are P+(λ) = − √ b/a and R+(λ) = 0, that is, D+(λ) = { − √ b/a } . This shows that system (2.74) with Vk given in (2.75) is in the limit point case for any λ ∈ CKR. The limiting behavior of the Weyl disks is demonstrated in Figure 2.1 below. The Weyl solution X+ k (λ) ≡ ( 1, − √ b/a )⊤ satisfies ||X+(λ)|| = 0 and it is the only square summable solution (up to a nonzero constant multiple). Note that ||Z(λ)|| = ∞ and ||Z(λ)|| = ∞ for b > 0, while for b = 0 we have Z(λ) = X+(λ). 
re z im z × × × × × Figure 2.1: The Weyl disks Dk(λ) for k ∈ {1, 2, 4, 6, 10}, their centers, and P+(λ) = −1 from Example 2.5.1 with a = b = 2 and λ = 0.4 + 0.4i. ▲ Although the choice of a = 0 was not possible in Example 2.5.1, we show that also in this case the system from Example 2.5.1 is in the limit points case. This is done by using a suitable transformation. Example 2.5.2. Let λ ∈ CKR and consider the system from Example 2.5.1 with a = 0 and b > 0, i.e., zk+1(λ) = 5(λ)zk(λ), 5(λ) = ( 1 0 −λb 1 ) , = ( b 0 0 0 ) ≥ 0, k ∈ [0, ∞)Z. (2.77) The dependence on λ in system (2.77) is special as in (2.7) and, as we discussed in Example 2.5.1, Hypothesis 2.3.7 is not satisfied in this case. Therefore the theory developed – 37 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter in Section 2.3 cannot be applied, neither can be applied the results in [26] and [A4]. On the other hand, by using the transformation yk(λ) := T−1 zk(λ), where T := ( − √ c/b −1 1 0 ) is a constant symplectic matrix with c ≥ 0, we obtain another symplectic system yk+1(λ) = 5(λ)yk(λ), 5(λ) = ( 1 + λ √ bc λb −λc 1 − λ √ bc ) , = ( c √ bc√ bc b ) , k ∈ [0, ∞)Z, where 5(λ) := T−1 5(λ)T, see [20, Lemma 6]. Now, since b > 0 and c ≥ 0, the results in Example 2.5.1 can be used for the above transformed system. In particular, system (2.77) is in the limit point case for every λ ∈ CKR. Depending on the choice of the constant c ≥ 0 in the transformation matrix T, we obtain from the reversed transformation the two linearly independent solutions Zk(λ) = ( − √ c/b, 1 + kλ √ bc )⊤ and Zk(λ) = (−1, kλb)⊤ of system (2.77). For these solutions we easily calculate that ||Z(λ)|| = 0 when c = 0, ||Z(λ)|| = ∞ when c > 0, and ||Z(λ)|| = ∞. The corresponding Weyl solution is X+ k (λ) ≡ (0, 1)⊤, which obviously satisfies ||X+(λ)|| = 0. ▲ Finally, we present a system of the form as in (2.74) with nonconstant Vk, which can be either in the limit point case or in the limit circle case. Example 2.5.3. Let λ ∈ CKR and v ∈ C([0, ∞)Z) be a given sequence such that v0 = 0, vk ≥ 0 for all k ∈ [0, ∞)Z, and vℓ > 0 for some index ℓ ∈ [1, ∞)Z. Define the matrix Vk = ( 0 vk 0 0 ) for all k ∈ [0, ∞)Z and consider the system zk+1(λ) = 5k(λ)zk(λ), 5k(λ) = ( 1 λ vk 0 1 ) , k = ( 0 0 0 vk ) ≥ 0, k ∈ [0, ∞)Z. (2.78) The fundamental matrix of system (2.78) with α = (1, 0) is equal to k(λ) = ( 1 λvk 0 1 ) , i.e., Zk(λ) ≡ ( 1 0 ) , Zk(λ) = ( λvk 1 ) , k ∈ [0, ∞)Z. (2.79) This implies that Hypothesis 2.3.7 is satisfied for any N0 ∈ [ℓ − 1, ∞)Z, since the sum in (2.25) with N = ℓ − 1 is equal to vℓ > 0. From (2.36) we get Fk(λ) = 0, Gk(λ) = −iδ(λ), and Hk(λ) = 2vk | im(λ)|. The assumptions imply that Hk(λ) > 0 for all k ∈ [ℓ, ∞)Z, so that the center and the radius of the Weyl disk Dk(λ) are well defined and equal to Pk(λ) = i 2vk im(λ) and Rk(λ) = 1 √ 2vk | im(λ)| for all k ∈ [ℓ, ∞)Z. If we put v∞ := limk→∞ vk = supk∈[0,∞)Z {vk}, then v∞ > 0 and the center and the radius of the limiting Weyl disk D+(λ) are equal to P+(λ) = i/[2v∞ im(λ)] and R+(λ) = 1/ √ 2v∞ | im(λ)|. From this one can easily conclude that system (2.78) is in the limit point case if and only if v∞ = ∞, while it is in the limit circle case if and only if v∞ < ∞. In the latter case, the linearly independent solutions Z(λ) and Z(λ) are square summable with ||Z(λ)|| = 0 and ||Z(λ)|| = √ v∞. 
In addition, the Weyl solution X(λ) defined by (2.23) through the fundamental matrix (λ) from (2.79) and M = m ∈ C is also square summable with the corresponding semi-norm ||X(λ)|| = √ v∞ |m| < ∞. The behavior of the Weyl disks is demonstrated in Figure 2.2 below. – 38 – 2.6. Bibliographical notes re z im z × × × Figure 2.2: The Weyl disks Dk(λ) for k ∈ {1, 2, 3}, their centers, and the disk D+(λ) with P+(λ) = 5i/2 and R+(λ) = √ 5/2 from Example 2.5.3 with vk = 1 − 2−k and λ = 0.4 + 0.4i. ▲ The results of the previous example are summarized in the following statement. Corollary 2.5.4. Let n = 1, λ ∈ CKR, v ∈ C([0, ∞)Z) be such that v0 = 0, vk ≥ 0 for all k ∈ [0, ∞)Z, vℓ > 0 for some index ℓ ∈ [1, ∞)Z, and define v∞ := limk→∞ vk. If we put Vk := ( 0 vk 0 0 ) for all k ∈ [0, ∞)Z, then the following holds. (i) System (2.74) is in the limit point case if and only if v∞ = ∞. In this case, P+(λ) = 0, R+(λ) = 0, and the Weyl solution X+ k (λ) ≡ (1, 0)⊤ is the only (up to a nonzero constant multiple) square summable solution of system (2.74). (ii) System (2.74) is in the limit circle case if and only if v∞ < ∞. In this case we have P+(λ) = i/[2v∞ im(λ)] and R+(λ) = 1/ √ 2v∞ | im(λ)|. The solutions Zk(λ) ≡ (1, 0)⊤ and Zk(λ) = (λvk, 1)⊤ are linearly independent with ||Z(λ)|| = 0 and ||Z(λ)|| = √ v∞. We note that one direction in part (ii) of Corollary 2.5.4 also follows from Corollary 2.4.19, because in this case we have ∑∞ k=0 ||Sk − I||1 = 0 and ∑∞ k=0 || k ||1 = v∞ < ∞. 2.6 Bibliographical notes The results of this chapter (including the direct proofs of Theorem 2.4.17 and Corollary 2.4.19) were published in [A15]. Moreover, their generalization to symplectic systems on time scales was established in [A16]. In our future research we aim to construct a particular example of system (Sλ) mentioned in Remarks 2.4.4(i) and 2.4.21, i.e., such that dim N(λ) dim N(¯λ). – 39 – Chapter 2. Weyl–Titchmarsh theory for general linear dependence on spectral parameter – 40 – Chapter 3 Jointly varying endpoints Perhaps the most surprising thing about mathematics is that it is so surprising. The rules which we make up at the beginning seem ordinary and inevitable, but it is impossible to foresee their consequences. These have only been found out by long study, extending over many centuries. Much of our knowledge is due to a comparatively few great mathematicians such as Newton, Euler, Gauss, Cauchy, or Riemann; few careers can have been more satisfying than theirs. They have contributed something to human thought even more lasting than great literature, since it is independent of language. Edward Charles Titchmarsh, see [1, pg. 12] In the previous chapter we started with the eigenvalue problem given in (2.24), which includes the separated boundary conditions. Then, by using the fundamental matrix (λ) determined in (2.21), we developed the theory of Weyl disks and square summable solutions for system (Sλ). These results were achieved under the weak Atkinson condition, see Hypothesis 2.3.7, instead of the traditional strong Atkinson condition, see Hypothesis 2.4.11 and [A4], [26,142,154]. This is crucial and absolutely essential in the context of the results established in this chapter, where we will extend the results of Chapter 2 to problems with general jointly varying endpoints γ ( z0 zN+1 ) = 0, γ ∈ Γ := { γ ∈ C2n×4n | γγ∗ = I, γ ( −J 0 0 J ) γ∗ = 0 } . 
(3.1) The boundary conditions in (3.1) include, among others, the periodic endpoints z0 = zN+1 or the antiperiodic endpoints z0 = −zN+1, which could not be treated by the previous case in (2.24), see the discussion following (2.24). The method we use is based on the augmentation of system (Sλ) into double dimension, which leads to a problem with separated endpoints having the original boundary conditions from (3.1) as one of its constraints. This technique is known in the literature in principle (cf. [17, 87, 88, 92, 112, 153]), but the transformation to separated endpoints presented in this chapter is much simpler. At the same time, the transformed symplectic system no longer satisfies the corresponding strong Atkinson condition, but only its weak form. Thus, the derivation of the Weyl–Titchmarsh theory in Chapter 2 under the weak Atkinson condition is truly crucial for its further extension to jointly varying endpoints. For this general situation, we give a characterization of eigenvalues of the eigenvalue problem determined by system (Sλ) together with the boundary conditions in (3.1), we construct the Weyl disks, their centers and matrix radii, and also focus on properties of square summable solutions. More precisely, we give an exact connection between the limit point or limit circle classification of the original system (in dimension 2n) and the augmented system (in dimension 4n). This connection reveals an interesting fact, namely that the limiting matrix radius of the augmented system has its rank at least – 41 – Chapter 3. Jointly varying endpoints n (see Theorem 3.1.11), and so it is never zero in the limit point case as one would expect from Theorem 2.4.3. The results of this chapter (see Theorem 3.1.2) also imply the existence of multiple eigenvalues for scalar symplectic eigenvalue problems with jointly varying endpoints. This is known e.g. for the second order discrete Sturm–Liouville problems with periodic endpoints in [105, Example 7.6] or [170, Theorem 2.2] and here we extend it to discrete symplectic systems. The transformation of jointly varying endpoints into separated endpoints will also find applications in the continuous time problems or time scales problems, see e.g. [153]. Finally, we remark that the results shown in this chapter were established as new even for special discrete symplectic systems, such as those with (2.7), the Jacobi equations and symmetric three term recurrence relations, i.e., equations (2.9) and (2.10), and also for linear Hamiltonian difference systems. 3.1 Weyl–Titchmarsh theory for jointly varying endpoints Throughout this chapter we use the same notation as in Chapter 2, see Notation 2.1.2. Moreover, we emphasize the augmentation by the bold notation. For a given γ ∈ Γ and N ∈ [0, ∞)Z we consider the eigenvalue problem (Sλ), k ∈ [0, N]Z, λ ∈ C, (3.1). (3.2) The eigenvalues of (3.2) are defined as for (2.24). That is, a number λ ∈ C is an eigenvalue of problem (3.2) if, for this particular value λ, system (Sλ) has a nontrivial solution z(λ) ∈ C([0, N + 1]Z)2n satisfying the boundary conditions in (3.1). In this case, z(λ) is called an eigenfunction for λ and the dimension of such eigenfunctions for λ is its geometric multiplicity. As one of the main assumptions we suppose that system (Sλ) satisfies the strong Atkinson condition on a finite or infinite interval, see Hypothesis 3.1.1 below and Hypothesis 2.4.11, respectively. Hypothesis 3.1.1 (Strong Atkinson condition – finite). 
The inequality in (2.25) is satisfied for every nontrivial solution z(λ) ∈ C([0, N + 1]Z)2n of system (Sλ) on the discrete interval [0, N]Z and every λ ∈ CKR. The results of this chapter will be formulated with the aid of a particular fundamental matrix Φ(λ) of system (Sλ) starting with the initial value Φ0(λ) = −J, which corresponds to the fundamental matrix (λ) specified in (2.21) with the choice α = (0, I), i.e., Φk+1(λ) = (Sk + λVk)Φk(λ), k ∈ [0, ∞)Z, Φ0(λ) = −J, λ ∈ C. (3.3) Our first result describes the orthogonality of the eigenfunctions and the multiplicity of the eigenvalues of problem (3.2). It generalizes Theorem 2.2.3 to jointly varying endpoints, compare also with [19, Theorem 2.2]. The proofs mostly follow by direct calculations in a similar way as the corresponding results in Chapter 2. For completeness and comparison we provide alternative proofs based on the transformation in Section 3.3. Theorem 3.1.2. Let γ ∈ Γ be given. Then the following statements hold. (i) A number λ ∈ C is an eigenvalue of problem (3.2) if and only if the matrix L(λ) := γ ( −J ΦN+1(λ) ) (3.4) is singular. In this case, the eigenfunctions corresponding to the eigenvalue λ have the form z(λ) = Φ(λ)d on [0, N + 1]Z with a nonzero d ∈ Ker L(λ). Moreover, the geometric multiplicity of λ is equal to its algebraic multiplicity, i.e., to the value of dim Ker L(λ). – 42 – 3.1. Weyl–Titchmarsh theory for jointly varying endpoints (ii) Under Hypothesis 3.1.1, the eigenvalues of problem (3.2) are real and the eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the semi-inner product ⟨·, ·⟩ ,N defined in (2.26). Proof. The statement follows from Theorem 3.3.5 with (3.29) and Corollary 3.3.3. ■ By Theorem 3.1.2, the multiplicities of the eigenvalues of problem (3.2) are at most 2n, compared to the separated endpoints case in Theorem 2.2.3 in which the multiplicities of the eigenvalues are at most n. It implies that in the scalar case (i.e., for n = 1) there may exist multiple eigenvalues of problem (3.2). This phenomenon was observed in [105, Example 7.6] and later justified in [170, Theorem 2.2] for the periodic discrete Sturm– Liouville eigenvalue problem, see also Example 3.2.4. Remark 3.1.3. When system (Sλ) has the special structure shown in (2.7), it can be deduced from [152, Corollary 4.6] that the total number of eigenvalues of (3.2) is equal to the dimension of the space of admissible functions for the associated discrete quadratic functional. In some even more special cases, such as for the second order Sturm–Liouville difference equations with periodic or antiperiodic endpoints, this exact number of the eigenvalues of problem (3.2) is derived in [170, Theorem 4.2] or [145, Theorem 4.1]. The result in [152, Corollary 4.6] is based on the Rayleigh principle for system (Sλ) with the boundary condition from (3.1), compare also with [153, Theorem 3.2], and on the fact that the space of admissible functions is independent of λ, which follows from the special structure in (2.7). As the Rayleigh principle for eigenvalue problems (3.2) is not known and the space of admissible functions is in this case not constant in λ, the question about the total number of eigenvalues of problem (3.2) remains open for the general linear dependence on λ. On the other hand, the oscillation theorem for discrete symplectic eigenvalue problems with jointly varying endpoints in [148, Theorem 6.13] yields that the total number of the eigenvalues of problem (3.2) is less or equal to (N + 3)n. 
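The criterion in Theorem 3.1.2 is also convenient for quick numerical experiments. The following minimal sketch (in Python with NumPy; the concrete data Sk, Vk and the periodic matrix γ anticipate Example 3.2.1 and Example 3.2.4 below) iterates the recurrence (3.3) and scans the determinant of L(λ) from (3.4):

```python
import numpy as np

# Eigenvalue test of Theorem 3.1.2 for the scalar periodic problem of
# Example 3.2.4: S = [[1,1],[0,1]], V = [[0,0],[-1,-1]], N = 3, and
# gamma = (1/sqrt(2)) (J, -J) for the periodic endpoints z_0 = z_{N+1}.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
S = np.array([[1.0, 1.0], [0.0, 1.0]])
V = np.array([[0.0, 0.0], [-1.0, -1.0]])
N = 3
gamma = np.hstack((J, -J)) / np.sqrt(2.0)

def Phi(lam, K):
    """Fundamental matrix from (3.3): Phi_{k+1} = (S + lam V) Phi_k, Phi_0 = -J."""
    P = -J.astype(complex)
    for _ in range(K):
        P = (S + lam * V) @ P
    return P

def det_L(lam):
    # L(lam) = gamma [ -J ; Phi_{N+1}(lam) ], see (3.4)
    return np.linalg.det(gamma @ np.vstack((-J, Phi(lam, N + 1))))

for lam in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(lam, abs(det_L(lam)))
# |det L(lam)| vanishes (up to rounding) exactly at lam = 0, 2, 4, which are
# the eigenvalues of the periodic problem treated in Example 3.2.4.
```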
Next we define the Weyl–Titchmarsh M(λ)-function for problem (3.2), compare with identity (2.27). For k ∈ [0, N + 1]Z and λ ∈ C we set Mk(λ) := − [ γ ( −J Φk(λ) ) ]−1 γ ( J Φk(λ) ) J, (3.5) whenever the inverse above exists. In particular, we can see from Theorem 3.1.2 that Mk(λ) is well defined for every λ ∈ CKR and k = N +1, when Hypothesis 3.1.1 holds. The following statement generalizes Lemma 2.2.5 to jointly varying endpoints. Theorem 3.1.4. Let γ ∈ Γ, λ ∈ C, and k ∈ [0, ∞)Z. If Mk(λ) and Mk(¯λ) exist, then we have M∗ k (λ) = Mk(¯λ). Moreover, Mk(·) is an analytic function in its argument λ. Proof. This result follows from (3.36) and (3.33) via Lemma 2.2.5. ■ For any M ∈ C2n×2n we define the Weyl-solution X(λ) of (Sλ) with values in C4n×2n by Xk(λ) := 1 √ 2 ( J −J Φk(λ) Φk(λ) ) ( J M ) . (3.6) It then follows, compare with Remark 2.2.6(i), that γXk(λ) = 0 if and only if the matrix M equals to Mk(λ) defined in (3.5). – 43 – Chapter 3. Jointly varying endpoints One of the central concepts of this chapter is the E(M)-function with values in C2n×2n, through which we later on define the Weyl disks. For M ∈ C2n×2n we put Ek(M) := iδ(λ)X∗ k(λ) ( −J 0 0 J ) Xk(λ) = ( J M )∗ ( Hk(λ) Gk(λ) Gk(λ) Hk(λ) ) ( J M ) , (3.7) Hk(λ) := 1 2 iδ(λ)[Φ∗ k(λ)JΦk(λ) − J], Gk(λ) := Hk(λ) + iδ(λ)J. (3.8) From (iJ)∗ = iJ one can easily see that Ek(M), Hk(λ), and Gk(λ) are Hermitian matrices. Moreover, we have H0(λ) = 0 and the Lagrange identity in Theorem 2.1.7 implies the following crucial equalities Ek(M) = −2δ(λ) im(M) + | im(λ)|(M∗ − J)   k−1∑ j=0 Φ∗ j+1(λ) j Φj+1(λ)   (M + J), (3.9) Hk(λ) = | im(λ)| k−1∑ j=0 Φ∗ j+1(λ) j Φj+1(λ), (3.10) compare with (2.32), (2.36), (2.33), and (2.37), respectively. Since Φ(λ) represents a fundamental matrix of system (Sλ), equality (3.10) justifies the following result. Theorem 3.1.5. If Hypothesis 2.4.11 holds, then the matrix Hk(λ) is positive definite for every λ ∈ CKR and k ∈ [N3 + 1, ∞)Z. In addition, for such k we have (suppressing the argument λ) Ek(M) = −J ( Hk − Gk H−1 k Gk ) J + ( M∗ − JGk H−1 k ) Hk ( M + H−1 k Gk J ) . (3.11) Proof. The invertibility of Hk(λ) for all k ∈ [N3 + 1, ∞)Z follows from (3.10) and Hypothesis 2.4.11. Moreover, identity (3.11) is a consequence of (3.37) and (3.39). ■ For any λ ∈ CKR we now define the Weyl disk Dk(λ) and the Weyl circle Ck(λ) as Dk(λ) := { M ∈ C2n×2n | Ek(M) ≤ 0 } and Ck(λ) := { M ∈ C2n×2n | Ek(M) = 0 } , compare with Definition 2.3.1. The following result provides some properties of the elements in Dk(λ) and Ck(λ). It is a generalization of Theorems 2.3.2 and 2.3.3 to jointly varying endpoints. Theorem 3.1.6. Let λ ∈ CKR, k ∈ [0, ∞)Z, and M ∈ C2n×2n. Then the following hold. (i) The matrix M ∈ Ck(λ) if and only if there exists γ ∈ Γ such that γXk(λ) = 0. In this case, we have with such a matrix γ that M = Mk(λ), whenever the matrix Mk(λ) exists. (ii) The matrix M satisfies Ek(M) < 0 if and only if there exists γ ∈ C2n×4n such that iδ(λ)γ ( −J 0 0 J ) γ∗ > 0 and γXk(λ) = 0. In this case, we have with such a matrix γ that M = Mk(λ), whenever the matrix Mk(λ) exists, and γ can be chosen so that γγ∗ = I. (iii) We have Ek(−J) = −2δ(λ)iJ, i.e., −J Dk(λ). Proof. Statements (i) and (ii) follow by Theorems 2.3.2 and 2.3.3 from the facts that the sets Dk(λ) and Ck(λ) coincide respectively with the Weyl disk and Weyl circle in (3.38). Statement (iii) is verified by direct calculation from (3.7), because the matrix iJ is indefinite. ■ – 44 – 3.1. 
Weyl–Titchmarsh theory for jointly varying endpoints The center Pk(λ) and the matrix radius Rk(λ) of the Weyl disk Dk(λ) are defined as the 2n × 2n matrices Pk(λ) := −H−1 k (λ)Gk(λ)J = −J + iδ(λ)H−1 k (λ) and Rk(λ) := H−1/2 k (λ), (3.12) whenever Hk(λ) is invertible, i.e., whenever Hk(λ) > 0, compare with (2.42). Note that from (3.8) and (1.5) we get the series expansion Pk(λ) = −J + 2J ∞∑ j=0 [ − Φ∗ k(λ)JΦk(λ)J ]j , when sprad Φ∗ k(λ)JΦk(λ)J < 1. The following theorem provides the most important geometric properties of the Weyl disks, including their nested property, closedness, and convexity. It is a generalization of Theorems 2.3.6 and 2.3.8 to the case of jointly varying endpoints. Hence we denote by 777 and 888 the sets of all unitary and contractive 2n × 2n complex matrices, respectively, i.e., 777 := {U ∈ C2n×2n | U∗ U = I} and 888 := {V ∈ C2n×2n | V∗ V ≤ I}. (3.13) Theorem 3.1.7. Let λ ∈ CKR. Then Dk(λ) ⊆ Dj(λ) for every k, j ∈ [0, ∞)Z with k ≥ j. In addition, under Hypothesis 2.4.11 we have for every k ∈ [N3 + 1, ∞)Z the representations Dk(λ) = { Pk(λ) + Rk(λ)VRk(¯λ) | V ∈ 888 } , Ck(λ) = { Pk(λ) + Rk(λ)URk(¯λ) | U ∈ 777 } . Consequently, the Weyl disks Dk(λ) are closed and convex for every k ∈ [N3 + 1, ∞)Z. Proof. The result follows from (3.40), (3.41), and (3.42) combined with Corollary 3.3.4. ■ The latter theorem implies that the intersection of all Weyl disks Dk(λ) over the discrete interval [N3 + 1, ∞)Z is a nonempty, closed, and convex set. This yields that the limiting Weyl disk has the form D+(λ) := ∩ k∈[N3+1,∞)Z Dk(λ) = { P+(λ) + R+(λ)VR+(¯λ) | V ∈ 888 } , where P+(λ) and R+(λ) are the 2n × 2n matrices defined by P+(λ) := lim k→∞ Pk(λ), R+(λ) := lim k→∞ Rk(λ) ≥ 0. (3.14) They are called the center and the matrix radius of the limiting Weyl disk D+(λ), compare with Definition 2.3.10. Note that the convergence of Pk(λ) and Rk(λ) can be seen from their definitions given in (3.12) and equality (3.10). Remark 3.1.8. If Hinv + (λ) denotes the limit of H−1 k (λ) as k → ∞, which exists by (3.10), then the formulas in (3.14) for the center and matrix radius of the limiting Weyl disk reduce to P+(λ) = −J + iδ(λ)Hinv + (λ), R+(λ) = [ Hinv + (λ) ]1/2 . (3.15) The next result is a generalization of Corollary 2.3.12 to jointly varying endpoints. Note that as in Theorem 3.1.6(iii) we have −J D+(λ). – 45 – Chapter 3. Jointly varying endpoints Theorem 3.1.9. Let λ ∈ CKR, M ∈ C2n×2n, and suppose that Hypothesis 2.4.11 holds. Then M belongs to the limiting Weyl disk D+(λ) if and only if (M∗ − J)   ∞∑ k=0 Φ∗ k+1(λ) k Φk+1(λ)   (M + J) ≤ 2 im(M) im(λ) . Proof. This statement follows from (3.9), or alternatively by Corollary 2.3.12 from (3.43) and the definition of the Weyl solution in (3.36) and (3.33). ■ Remark 3.1.10. The limiting Weyl circle C+(λ) can be introduced as the boundary of the limiting Weyl disk D+(λ). Then M ∈ C+(λ) if and only if any of the following two equivalent conditions hold, compare with Remark 2.3.17(ii), (M∗ − J)   ∞∑ k=0 Φ∗ k+1(λ) k Φk+1(λ)   (M + J) = 2 im(M) im(λ) , lim k→∞ X∗ k(λ) ( −J 0 0 J ) Xk(λ) = 0. Finally, let us discuss some properties of square summable solutions of system (Sλ). The following result is quite surprising in the sense that one would expect to have R+(λ) = 0 in the limit point case, see Theorem 2.4.3. 
To the contrary, due to the augmented structure of the matrix R+(λ), which has dimension 2n, it is the rank of R+(λ) alone which determines the number of linearly independent square summable solutions of system (Sλ), compare with Theorem 2.4.8. In the result below we show that rank R+(λ) ≥ n, so that the equality rank R+(λ) = n must necessarily hold in the limit point case. This fact is stated in Corollary 3.1.12 below and also illustrated in Example 3.2.6. Theorem 3.1.11. Let λ ∈ CKR and suppose that Hypothesis 2.4.11 holds. Then system (Sλ) has exactly rank R+(λ) linearly independent square summable solutions, i.e., n ≤ dim N(λ) = rank R+(λ) ≤ 2n. (3.16) Proof. The statement in (3.16) is proven in Theorem 3.3.6 and (3.44). ■ The meaning of Theorem 3.1.11 can be explained also directly from (3.10) and Remark 3.1.8. In particular, by (3.15), the rank of R+(λ) is equal to the number of positive eigenvalues of the matrix Hinv + (λ) from Remark 3.1.8 and this number is the same as the number of the eigenvalues of Hk(λ), which tend to a finite limit as k → ∞. Consequently, equality (3.10) shows that it is equal to the number of linearly independent square summable solutions of system (Sλ). Corollary 3.1.12. Let λ ∈ CKR and suppose that Hypothesis 2.4.11 holds. Then system (Sλ) is in the limit point case if and only if rank R+(λ) = n, while (Sλ) is in the limit circle case if and only if rank R+(λ) = 2n. 3.2 Examples Now we examine several examples which illustrate the theory presented in the previous section. In particular, we consider the periodic and antiperiodic boundary conditions as in [153, Remark 6.17] and the corresponding M(λ)-function. – 46 – 3.2. Examples Example 3.2.1. For the periodic endpoints z0 = zN+1 we take γ = 1√ 2 (J, −J) ∈ Γ. In this case, the matrix in (3.4) is L(λ) = 1√ 2 J[ΦN+1(λ) + J] Then, by Theorem 3.1.2, a number λ ∈ C is an eigenvalue of problem (3.2) if and only if the matrix ΦN+1(λ)+J is singular, and the number dim Ker[ΦN+1(λ)+J] is its multiplicity. Moreover, the M(λ)-function in (3.5) reduces to M [p] k (λ) = −[Φk(λ) + J]−1 [Φk(λ) − J]J. (3.17) ▲ Example 3.2.2. For the antiperiodic endpoints z0 = −zN+1 we take γ = 1√ 2 (J, J) ∈ Γ. In this case we have L(λ) = 1√ 2 J[ΦN+1(λ) − J] and, by Theorem 3.1.2, a number λ ∈ C is an eigenvalue of problem (3.2) if and only if the matrix ΦN+1(λ) − J is singular. The multiplicity of λ is then dim Ker[ΦN+1(λ) − J]. In addition, the M(λ)-function in (3.5) now has the form M [ap] k (λ) = −[Φk(λ) − J]−1 [Φk(λ) + J]J. (3.18) ▲ The M(λ)-functions M [p] k (λ) and M [ap] k (λ) from (3.17) and (3.18) for the periodic and antiperiodic endpoints are closely related, as we show in the next interesting statement. Let p be the set of all eigenvalues of the periodic problem (3.2) with γ = γp from Example 3.2.1. Similarly, let ap be the set of all eigenvalues of the antiperiodic problem (3.2) with γ = γap from Example 3.2.2. Then by Theorem 3.1.2(ii) we have p ∪ ap ⊆ R under Hypothesis 3.1.1. Corollary 3.2.3. Let k ∈ [0, N + 1]Z and λ ∈ C be fixed. The matrices M [p] k (λ) and M [ap] k (λ) given in (3.17) and (3.18) satisfy the following conditions. (i) If Φk(λ) + J is invertible, then rank M [p] k (λ) = rank[Φk(λ) − J]. In particular, this equality holds at k = N + 1 for every λ p. (ii) If Φk(λ)−J is invertible, then rank M [ap] k (λ) = rank[Φk(λ)+J]. In particular, this equality holds at k = N + 1 for every λ ap. 
(iii) If Φk(λ) + J and Φk(λ) − J are invertible, then M [p] k (λ) and M [ap] k (λ) are also invertible and satisfy the equalities [ M [p] k (λ)J ]−1 = M [ap] k (λ)J and det M [p] k (λ) × det M [ap] k (λ) = 1. In particular, these equalities hold at k = N + 1 for every λ p ∪ ap. As an addendum to Corollary 3.2.3 we derive the series representations for M [p] k (λ) and M [ap] k (λ). If we expand the inverses in (3.17) and (3.18) by (1.5), then under the condition sprad Φk(λ)J < 1 we obtain M [p] k (λ) = 2J ( ∞∑ j=0 [ Φk(λ)J ]j ) − J and M [ap] k (λ) = 2J ( ∞∑ j=0 [ − Φk(λ)J ]j ) − J. – 47 – Chapter 3. Jointly varying endpoints Now, we illustrate our results on the scalar symplectic system zk+1(λ) = ( 1 1 −λ 1 − λ ) zk(λ) with k ≡ := ( 1 0 0 0 ) , (3.19) i.e., Sk ≡ ( 1 1 0 1 ) and Vk ≡ ( 0 0 −1 −1 ) . This system corresponds to the second order Sturm– Liouville difference equation (2.8) with m = n = 1, P[1] k = Wk ≡ 1, and P[0] k ≡ 0, i.e., − (pk yk) + qk yk+1 = λwk yk+1, where pk = wk ≡ 1, qk ≡ 0. (3.20) System (3.19) satisfies the strong Atkinson condition in Hypothesis 3.1.1 or 2.4.11 with N3 = 1, as can be easily verified. Example 3.2.4. Let us consider the scalar eigenvalue problem with periodic endpoints (3.19), k ∈ [0, 3]Z, z0 = z4, (3.21) i.e., we look for the solutions of system (3.19) with period 4. This problem corresponds to the periodic Sturm–Liouville eigenvalue problem (3.20) with k ∈ [0, 3]Z and y0 = y4, y0 = y4, which was studied in [105, Example 7.6]. It was shown in the latter reference that λ = 2 is a double eigenvalue of problem (3.21) by finding two linearly independent eigenfunctions. The results in Theorem 3.1.2 and Example 3.2.1 confirm this conclusion. The fundamental matrix Φ(λ) from (3.3) now satisfies Φ2(λ) = ( −λ + 2 λ − 1 λ2 − 3λ + 1 −λ2 + 2λ ) , (3.22) Φ4(λ) = ( −λ3 + 6λ2 − 10λ + 4 λ3 − 5λ2 + 6λ − 1 λ4 − 7λ3 + 15λ2 − 10λ + 1 −λ4 + 6λ3 − 10λ2 + 4λ ) . (3.23) This yields that det[Φ4(λ) + J] = −λ(λ − 4)(λ − 2)2. Thus, by Theorem 3.1.2 and Example 3.2.1, λ = 2 is indeed a double eigenvalue of problem (3.21), and the columns of Φ(2) are the two linearly independent eigenfunctions. Note that it holds Φ4(2) = −J = Φ0(2). The other eigenvalues of problem (3.21) are λ = 0 with the eigenfunction Φ(0)(0, 1)⊤ and λ = 4 with the eigenfunction Φ(4)(2, 1)⊤. ▲ Example 3.2.5. Let us consider again system (3.19), but now only on the interval [0, 2]Z and with the antiperiodic boundary conditions z0 = −z2. From equality (3.22) we see that det[Φ2(λ) − J] = (λ − 2)2. Hence, by Theorem 3.1.2 and Example 3.2.2, λ = 2 is a double eigenvalue of this problem with the columns of Φ(2) as the two linearly independent eigenfunctions. Note that it holds Φ2(2) = J = −Φ0(2). This problem then does not have any other eigenvalues. ▲ In the last example we calculate the rank of the limiting radius R+(λ) and compare it with the corresponding number of linearly independent square summable solutions. Example 3.2.6. We examine system (3.19) on the discrete interval [0, ∞)Z with a particular choice of λ0 ∈ CKR. We show that system (3.19) with λ = λ0 is in the limit point case, so that by Corollary 2.4.22 it is in the limit point case for every λ ∈ CKR. Let λ0 = 2 + 2i √ 3, i.e., system (3.19) reduces to the second order difference equation yk+2 +2i √ 3 yk+1 + yk = 0 on [0, ∞)Z. 
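The root computation carried out in the continuation of this example can be cross-checked numerically; a brief sketch (in Python with NumPy):

```python
import numpy as np

# Characteristic polynomial of y_{k+2} + 2i*sqrt(3) y_{k+1} + y_k = 0
roots = np.roots([1.0, 2j * np.sqrt(3.0), 1.0])
print(roots)           # approximately (2 - sqrt(3)) i and (-2 - sqrt(3)) i
print(np.abs(roots))   # approximately 0.268 and 3.732
# Only the root inside the unit circle yields a geometrically decaying
# solution, hence a square summable one with respect to the weight diag(1, 0);
# this agrees with the limit point classification obtained below.
```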
The roots of the corresponding characteristic polynomial are ν± := (±2 − √ 3)i, so that the fundamental matrix Φ(λ) of system (3.19) satisfying (3.3) has the form Φk(λ0) = ( νk + νk − νk +(ν+ − 1) νk −(ν− − 1) ) T, where T := 1 4 ( −i −2 − √ 3 + i i −2 + √ 3 − i ) . (3.24) – 48 – 3.3. Augmented symplectic system By (3.10), we obtain with µ± := |ν± |2 = 7 ∓ 4 √ 3 that Hk(λ0) = 2 √ 3 T∗ k−1∑ j=0 ( |ν+ |2j+2 (¯ν+ν−)j+1 (ν+ ¯ν−)j+1 |ν− |2j+2 ) T = 2 √ 3 T∗ k−1∑ j=0 ( (µ+)j+1 (−1)j+1 (−1)j+1 (µ−)j+1 ) T. Since each entry of Hk(λ0) represents a geometric series, it can be evaluated explicitly as Hk(λ0) = 2 √ 3 T∗ ( (µ+)[1 − (µ+)k]/(1 − µ+) [(−1)k − 1]/2 [(−1)k − 1]/2 (µ−)[1 − (µ−)k]/(1 − µ−) ) T. Therefore, the matrix Hk(λ0) is indeed invertible (and positive definite) on the discrete interval [N3 + 1, ∞)Z = [2, ∞)Z and, by Remark 3.1.8, we have Hinv + (λ0) = lim k→∞ H−1 k (λ0) = ( 4 2 + √ 3 − i 2 + √ 3 + i 2 + √ 3 ) , P+(λ0) = −J + iHinv + (λ0) = ( 4i (2 + √ 3)i (2 + √ 3)i (2 + √ 3)i ) . The matrix R+(λ0) can also be calculated explicitly by (3.15), but it is not really important. We can find the eigenvalues of R+(λ0) as the nonnegative square roots of the eigenvalues of Hinv + (λ0). Namely, since the eigenvalues of the matrix Hinv + (λ0) are 0 and 6 + √ 3, we obtain rank R+(λ0) = 1 and system (3.19) is in the limit point case, by Corollary 3.1.12. The square summable solution of (3.19) is then given as the second component of the columns of the Weyl solution X+(λ0) from (3.6) with M = P+(λ0). That is, the columns of the matrix Φ(λ0) × Hinv + (λ0) are square summable. But since Hinv + (λ0) is singular, it follows that the square summable solutions of (3.19) are generated by exactly one column of the matrix Φ(λ0) × Hinv + (λ0). On the other hand, since |ν+ | < 1, one can identify the first column of Φ(λ0) × T−1 in (3.24) as the square summable solution of (3.19). ▲ 3.3 Augmented symplectic system We will show that problem (3.2) is equivalent to a certain eigenvalue problem in dimension 4n with separated endpoints. At first, let us define the 4n × 4n matrices Sk := ( I 0 0 Sk ) , Vk := ( 0 0 0 Vk ) , k := ( 0 0 0 k ) ≥ 0, (3.25) k(λ) := 1√ 2 ( J −J Φk(λ) Φk(λ) ) , J := ( −J 0 0 J ) , K := ( 0 J J 0 ) , (3.26) where Φ(λ) is the fundamental matrix of system (Sλ) specified in (3.3). Then one can easily verify that J∗ = −J = J−1, K∗ = −K = K−1, and S∗ k JSk = J, S∗ k JVk is Hermitian, V∗ k JVk = 0, k = JVk JS∗ k J. With this setting, we introduce the augmented symplectic system zk+1(λ) = (Sk + λVk)zk(λ). (Sλ) It follows that (λ) is a fundamental matrix of system (Sλ), because k(λ) = ( I 0 0 Φk(λ)J ) Q with Q := 0(λ) = 1√ 2 ( J −J −J −J ) and det Q = 1, – 49 – Chapter 3. Jointly varying endpoints Moreover, by analogy with Lemma 2.1.6 we obtain on [0, ∞)Z the identities ∗ k(λ)J k(¯λ) = K, k(¯λ)K ∗ k(λ) = J, and ∗ 0(λ) 0(λ) = I. Note that the latter equality means that the matrix 0(λ) is unitary. The double size of system (Sλ) implies that we consider all vector solutions z(λ) of system (Sλ) in dimension 4n and matrix-valued solutions Z(λ) of system (Sλ) in dimension 4n × 2n. It then follows that they have the form zk(λ) = ( d zk(λ) ) and Zk(λ) = ( D E Z[1] k (λ) Z[2] k (λ) ) , (3.27) where d ∈ Cn and D, E ∈ C2n×n are constant, while the sequences z(λ) ∈ C([0, ∞)Z)2n and Z[1] (λ), Z[2] (λ) ∈ C([0, ∞)Z)2n×n solve system (Sλ). It turns out that the main properties of system (Sλ) and its solutions, such as those in Section 2.1 or [A4, Section 2], are preserved for the augmented system (Sλ). 
In particular, the coefficients of system (Sλ) satisfy the following identities (Sk + λVk)∗ J(Sk + ¯λVk) = J, (Sk + λVk)−1 = −J(S∗ k + λV∗ k)J, and we have also the Lagrange identity for two 4n × m matrix solutions of systems (Sλ) and (Sν), i.e., for all λ, ν ∈ C and k ∈ [0, ∞)Z it holds Z∗ k+1(λ)JZk+1(ν) = Z∗ 0(λ)JZ0(ν) + (¯λ − ν) k∑ j=0 Z∗ j+1(λ) j Zj+1(ν). (3.28) Given γ ∈ from (3.1), we define the 2n × 4n matrices α, β ∈ := Γ by α := 1√ 2 ( −I I ) and β := γ. (3.29) Since vector solutions of system (Sλ) can be written as in (3.27), the choice of α in (3.29) implies that the solutions of (Sλ) satisfying αz0(λ) = 0 have necessarily the form zk(λ) = ( z0(λ) zk(λ) ) , (3.30) where z(λ) ∈ C([0, ∞)Z)2n solves system (Sλ). Conversely, every solution z(λ) of system (Sλ) yields through formula (3.30) a solution z(λ) of system (Sλ) such that αz0(λ) = 0. Therefore, the original eigenvalue problem (3.2) is equivalent to the augmented eigenvalue problem with separated endpoints (Sλ), k ∈ [0, N]Z, λ ∈ C, αz0 = 0, βzN+1 = 0. (3.31) In addition, the form of k in (3.25) implies that the semi-norm of any augmented solution z(λ) is the same as the semi-norm of the corresponding solution z(λ), because ⟨z, ˜z⟩ ,N := N∑ k=0 z∗ k+1 k ˜zk+1 = N∑ k=0 z∗ k+1 k ˜zk+1 = ⟨z, ˜z⟩ ,N with zk = ( z0 zk ) , ˜zk = ( ˜z0 ˜zk ) . (3.32) – 50 – 3.3. Augmented symplectic system In order to apply the theory from Sections 2.2 and 2.3, we need to find the fundamental matrix (λ) of the augmented system (Sλ) such that 0(λ) = (α∗, −Jα∗), see (2.21) and (2.22). That is, k(λ) = ( Zk(λ), Zk(λ) ) , Zk(λ) := 1√ 2 ( −I Φk(λ)J ) , Zk(λ) := 1√ 2 ( −J Φk(λ) ) . (3.33) with Φ(λ) being the fundamental matrix of (Sλ) used in (3.26). The above transformation then yields the results for the eigenvalue problem (3.31) in terms of (λ). When translating these results to the data of the original problem (3.2) we use the fundamental matrix (λ) in (3.26). Its relationship with (λ) is given by the equality k(λ) = k(λ)L, where L := ( J 0 0 I ) . (3.34) From (3.34) we can see that the second column Z(λ) of (λ) and (λ) is the same. Finally, the theory also requires the following weak Atkinson-type conditions for system (Sλ), compare with Hypotheses 2.2.2 and 2.3.7. Hypothesis 3.3.1 (Weak augmented Atkinson condition – finite). For any λ ∈ CKR every column z(λ) of the solution Z(λ) satisfies N∑ k=0 z∗ k+1(λ) k zk+1(λ) > 0. (3.35) Hypothesis 3.3.2 (Weak augmented Atkinson condition – infinite). There exists a number N3 ∈ [0, ∞)Z such that each column z(λ) of Z(λ) satisfies inequality (3.35) with N = N3 for every λ ∈ CKR. When we write the weak Atkinson condition in (3.35) in terms of the data of the original problem (3.2), we get by (3.25) and (3.33) that N∑ k=0 Z ∗ k+1(λ) k Zk+1(λ) = N∑ k=0 Φ∗ k+1(λ) k Φk+1(λ) > 0. This shows that the conditions in Hypotheses 3.1.1 and 3.3.1 are intimately connected as stated in the following corollary. Corollary 3.3.3. System (Sλ) satisfies the strong Atkinson condition on the discrete interval [0, N]Z (Hypothesis 3.1.1) if and only if the augmented system (Sλ) satisfies the corresponding weak Atkinson condition on [0, N]Z (Hypothesis 3.3.1). Similar connection is true also for Hypotheses 2.4.11 and 3.3.2. This fact justifies the use of the same index N3 in both conditions. Corollary 3.3.4. System (Sλ) satisfies the strong Atkinson condition on the discrete interval [0, ∞)Z (Hypothesis 2.4.11) if and only if the augmented system (Sλ) satisfies the corresponding weak Atkinson condition on [0, ∞)Z (Hypothesis 3.3.2). 
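For the concrete scalar system (3.19) with the constant weight matrix diag(1, 0), these conditions can also be observed numerically. The following minimal sketch (in Python with NumPy; λ = 0.4 + 0.4i serves only as a sample value) evaluates the matrix sum displayed before Corollary 3.3.3 for N = 1 and confirms its positive definiteness, in accordance with the index N3 = 1 stated after (3.20); by Corollary 3.3.4 the augmented system then satisfies the corresponding weak Atkinson condition.

```python
import numpy as np

# Strong Atkinson sum for the scalar system (3.19) with weight Psi = diag(1, 0):
#   sum_{k=0}^{N} Phi_{k+1}(lam)^* Psi Phi_{k+1}(lam),   Phi_0(lam) = -J,
# which is positive definite already for N = 1 (cf. the discussion after (3.20)).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Psi = np.diag([1.0, 0.0])

def atkinson_sum(lam, N):
    A = np.array([[1.0, 1.0], [-lam, 1.0 - lam]])   # S + lam V for system (3.19)
    Phi = -J.astype(complex)
    total = np.zeros((2, 2), dtype=complex)
    for _ in range(N + 1):
        Phi = A @ Phi
        total += Phi.conj().T @ Psi @ Phi
    return total

H = atkinson_sum(0.4 + 0.4j, 1)
print(np.linalg.eigvalsh((H + H.conj().T) / 2))   # both eigenvalues are positive
```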
In particular, we can see why assuming the weak Atkinson condition in Chapter 2 is really essential – the transformation of the problem (3.2) with jointly varying endpoints, which satisfies the strong Atkinson condition, leads to the augmented problem (3.31) satisfying the corresponding weak Atkinson condition. Therefore, one can simply apply the previous results on separated endpoints to the augmented problem and then transform the obtained results back to the data of the original problem (3.2). The next theorem provides basic properties of problem (3.31). – 51 – Chapter 3. Jointly varying endpoints Theorem 3.3.5. Let α, β ∈ be given. Then the following statements hold. (i) A number λ ∈ C is an eigenvalue of (3.31) if and only if the matrix L(λ) := βZN+1(λ) is singular. In this case, the eigenfunctions of problem (3.31) corresponding to the eigenvalue λ have the form z = Zk(λ)d on [0, N + 1]Z with a nonzero d ∈ Ker L(λ). Moreover, the geometric and algebraic multiplicities of λ coincide and are equal to dim Ker L(λ). (ii) Under Hypothesis 3.3.1, the eigenvalues of problem (3.31) are real and the eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the semi-inner product ⟨·, ·⟩ ,N defined in (3.32). Proof. The statement follows from Theorem 2.2.3, when it is applied to the augmented eigenvalue problem (3.31). ■ Following Definitions 2.2.1 and 2.2.4, we define the Weyl solution for system (Sλ) corresponding to M ∈ C2n×2n and the M(λ)-function associated with problem (3.31) as Xk(λ) := k(λ) ( I M ) = k(λ) ( J M ) and Mk(λ) := − [ βZk(λ) ]−1 βZk(λ), (3.36) where (λ), Z(λ), and Z(λ) are given in (3.33). In addition, for M ∈ C2n×2n we define the E(M)-function by Ek(M) := iδ(λ)X∗ k(λ)JXk(λ) = ( I M )∗ ( Fk(λ) G∗ k (λ) Gk(λ) Hk(λ) ) ( I M ) , (3.37) where Fk(λ), Gk(λ), and Hk(λ) are the 2n × 2n matrices Fk(λ) := iδ(λ)Z∗ k(λ)JZk(λ) = J∗ Hk(λ)J, Gk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ) = Hk(λ)J − iδ(λ), Hk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ) = 1 2 iδ(λ)[Φ∗ k(λ)JΦk(λ) − J]. As in Definition 2.3.1, the Weyl disk Dk(λ) and the Weyl circle Ck(λ) are defined by Dk(λ) := { M ∈ C2n×2n | Ek(M) ≤ 0 } , Ck(λ) := { M ∈ C2n×2n | Ek(M) = 0 } . (3.38) Note that under Hypothesis 3.3.2 the matrices Hk(λ) are positive definite (and hence invertible) for all k ∈ [N3 + 1, ∞)Z, because by (3.28) we have Hk(λ) = 2| im(λ)| k−1∑ j=0 Z ∗ j+1(λ) j Zj+1(λ). (3.39) By Theorem 2.3.8, the Weyl disk and the Weyl circle possess the representations Dk(λ) = { Pk(λ) + Rk(λ)VRk(¯λ) | V ∈ 888 } , (3.40) Ck(λ) = { Pk(λ) + Rk(λ)URk(¯λ) | U ∈ 777 } , (3.41) where 888 and 777 are, respectively, the sets of all complex contractive and unitary 2n × 2n matrices introduced in (3.13) and where the center Pk(λ) and the matrix radius Rk(λ) are defined by Pk(λ) := −H−1 k (λ)Gk(λ) = −J + iδ(λ)H−1 k (λ), Rk(λ) := H−1/2 k (λ). (3.42) – 52 – 3.3. Augmented symplectic system Therefore, the Weyl disks Dk(λ) are closed, convex, and nested, which implies that the limiting Weyl disk D+(λ) := ∩ k∈[N3+1,∞)Z Dk(λ) = { P+(λ) + R+(λ)VR+(¯λ) | V ∈ 888 } (3.43) exists and is nonempty, closed, and convex as well. By using Theorem 2.3.9 and the monotonicity of Hk(λ) shown in (3.39), the center and the matrix radius of D+(λ) are P+(λ) := lim k→∞ Pk(λ) and R+(λ) := lim k→∞ Rk(λ) ≥ 0. The final results of this section are devoted to the square integrable solutions of the augmented system (Sλ). 
Let ℓ2 be the space of all square summable sequences z ∈ C([0, ∞)Z)4n with the corresponding semi-norm defined as ||z|| < ∞, where ||z|| := ( ∞∑ k=0 z∗ k+1 k zk+1 )1/2 = lim N→∞ √ ⟨z, z⟩ ,N. For every λ ∈ C we denote the space of all square summable solutions of system (Sλ) by N(λ) := { z ∈ ℓ2 | z solves (Sλ) } . If Hypothesis 3.3.2 is satisfied, we know from Theorem 2.4.1 that the dimension of N(λ) is at least 2n, or more precisely dim N(λ) = 2n + rank R+(λ), (3.44) by Theorem 2.4.8. On the other hand, the analysis of the structure of the square summable solutions of the augmented system (Sλ) yields the following result. Theorem 3.3.6. Let λ ∈ CKR and suppose that Hypothesis 3.3.2 holds. Then 3n ≤ 2n + dim N(λ) = dim N(λ) ≤ 4n (3.45) n ≤ dim N(λ) = n + rank R+(λ) = rank R+(λ) ≤ 2n. (3.46) Proof. Let ej ∈ C2n be the j-th canonical unit vector for j ∈ {1, . . . , 2n}. Then system (Sλ) possesses constant solutions z[j] := (e∗ j , 0∗)∗ ∈ C4n for all j ∈ {1, . . . , 2n}, which certainly belong to N(λ), because they satisfy ||z[j] || = 0. In addition, any square summable solution z ∈ N(λ) naturally generates a square summable solution z = (0∗, z∗)∗ ∈ N(λ), which is linearly independent with the above defined solutions z[1] , . . . , z[2n] . This yields that dim N(λ) = 2n+dim N(λ). Hence identity (3.45) follows from Corollary 3.3.4 and the inequality dim N(λ) ≥ n in Theorem 2.4.1. Identity (3.46) is then only a direct consequence of equality (3.44). ■ Combining Theorem 3.3.6 and Corollaries 3.3.4 and 2.4.20 then yields the next result. Corollary 3.3.7. Let λ0 ∈ CKR and suppose that Hypothesis 2.4.11 is satisfied. Then we have dim N(λ0) = 3n if and only if system (Sλ0 ) is in the limit point case. Similarly, dim N(λ0) = 4n if and only if system (Sλ0 ) is in the limit circle case. Moreover, if it holds dim N(λ1) = 4n for some λ1 ∈ C, then dim N(λ) = 4n for all λ ∈ C. – 53 – Chapter 3. Jointly varying endpoints We can now see that the rank of the limiting matrix radius R+(λ) can never be zero, so that the “limit point” behavior of (Sλ), i.e., dim N(λ) = 3n, should not be determined by the equality R+(λ) = 0, as one would expect from the separated endpoints case in Theorem 2.4.3. However, we can read Theorems 2.4.3, 2.4.8 and Corollary 2.4.10 also in a different way. Namely, the number of linearly independent solutions of system (Sλ), which are not square summable, is equal to dim Ker R+(λ). For the augmented system (Sλ) we now have exactly the same statement, i.e., the number of linearly independent solutions of system (Sλ), which are not square summable, is equal to dim Ker R+(λ). Remark 3.3.8. The augmentation of system (Sλ) into the double dimension is a known technique for studying the problems with jointly varying endpoints, see e.g. [17, 87, 88, 92,112,153]. The transformation introduced in this chapter has the advantage that it uses the solutions z or Z of system (Sλ) rather than their components x, u or X, U as in the above references. This yields a direct connection between the original system (Sλ) and the augmented system (Sλ). For example, the boundary conditions in [153, Section 6] are of the form P1 ( −x0 xN+1 ) + P2 ( u0 uN+1 ) = 0 (3.47) with certain 2n × 2n matrices P1 and P2. One can see that the approach via (3.1) is much easier and more transparent. 
The relationship between the transformation in the above mentioned references and the transformation, which is utilized in this section, is determined by the multiplication of the data (from one side or from both sides) by the following 4n × 4n matrix T :=   −I 0 0 0 0 0 I 0 0 I 0 0 0 0 0 I   = T−1 . In particular, the equality T   −x0 xN+1 u0 uN+1   =   x0 u0 xN+1 uN+1   = ( z0 zN+1 ) gives a direct connection between the boundary conditions in (3.47) and (3.1). 3.4 Bibliographical notes The results of this chapter were established in [A13] and their generalization to symplectic systems on time scales was given in [A16, Section 8]. In addition, Corollary 3.2.3 is published for the first time in the present setting and it is derived as a special case of [A16, Corollary 8.5]. – 54 – Chapter 4 Invariance of limit circle case for two discrete systems The question of the ultimate foundations and the ultimate meaning of mathematics remains open; we do not know in what direction it will find its final solution or even whether a final objective answer can be expected at all. “Mathematizing” may well be a creative activity of man, like language or music, of primary originality, whose historical decisions defy complete objective rationalization. Hermann Weyl, see [106, pg. 319] In this chapter we derive an invariance of the situation, when all solutions are square summable, i.e., of the limit circle case for system (Sλ) as it was already stated in Theorem 2.4.17, see Remark 4.2.4. However, instead of system (Sλ) we consider two discrete systems of the first order in the form ˆzk+1(λ) = ( Sk + λVk ) ˆzk(λ), (Sλ) ˜zk+1(λ) = ( Sk + λVk ) ˜zk(λ), (Sλ) where k ∈ [0, ∞)Z, λ ∈ C, and S, V, S, V ∈ C([0, ∞)Z)2n×2n. Moreover, the coefficients of systems (Sλ) and (Sλ) satisfy for every k ∈ [0, ∞)Z the relations S ∗ k JSk = J, V ∗ k JSk + S ∗ k JVk = 0, V ∗ k JVk = 0, (4.1) k := JVk JS ∗ k J ≥ 0, (4.2) i.e., the matrix k is Hermitian and positive semidefinite for all k ∈ [0, ∞)Z, compare with Remark 4.1.4(i). Despite the analogy between the conditions in (4.1), (4.2) and the assumptions for system (Sλ) displayed in (2.1), we emphasize that systems (Sλ) and (Sλ) are generally non-symplectic in the sense of the terminology introduced in Chapter 2, see also Remark 4.1.4(ii). Our investigation is motivated by Walker’s results in [168], where an analogous problem was studied for a pair of the non-Hermitian linear Hamiltonian differential systems ˆz′ (t, λ) = [H(t) + λW(t)] ˆz(t, λ), (H R λ ) ˜z′ (t, λ) = [JH∗ (t)J + λW(t)] ˜z(t, λ) (H R λ ) with t ∈ [a, ∞) and H(t), W(t) being locally integrable 2n × 2n complex matrix-valued functions such that the matrix W(t) is Hamiltonian and −JW(t) ≥ 0 on [a, ∞). More – 55 – Chapter 4. Invariance of limit circle case for two discrete systems specifically, let us denote by L2 A the space of all functions z : [a, ∞) → C2n, which are square integrable with respect to the weight A(t) := −JW(t), i.e., − ∫ ∞ a z∗(t)JW(t)z(t)dt < ∞. Then it was proven in [168, Theorem 2] that if ∫ ∞ a | tr W(t)|dt < ∞ (4.3) and all solutions of systems (H R λ0 ) and (H R λ0 ) belong to L2 A for some λ0 ∈ C, then this property holds for all solutions of systems (H R λ ) and (H R λ ) with an arbitrary λ ∈ C. 
This statement extends the invariance of the limit circle case for one system (H R λ ) with the coefficient matrix H(t) being Hamiltonian on [a, ∞), i.e., for the situation (H R λ )=(H R λ )=(2.5), established by Atkinson in [9, Theorem 9.11.2]. Although the main result of this chapter (Theorem 4.2.2) yields a discrete counterpart of [168, Theorem 2], we point out that, surprisingly, it does not require any analogue of condition (4.3). 4.1 Preliminaries In this section we collect some auxiliary results about the coefficients of systems (Sλ) and (Sλ). Similarly as in Chapter 2, let us define 5k(λ) := Sk + λVk, 5k(λ) := Sk + λVk. (4.4) Then the identities in (4.1) imply for all k ∈ [0, ∞)Z and λ ∈ C that 5∗ k(λ)J5k(¯λ) = J. (4.5) Thus from the invertibility of J it follows immediately that the matrices 5k(λ) and 5k(λ) are invertible for any k ∈ [0, ∞)Z and λ ∈ C with 5−1 k (λ) = −J5∗ k(¯λ)J, 5−1 k (λ) = −J5∗ k(¯λ)J. (4.6) Hence any initial value problem associated with system (Sλ) or (Sλ) possesses a unique solution on [0, ∞)Z. Moreover, the fundamental matrices of systems (Sλ) and (Sλ) are invertible on the whole discrete interval [0, ∞)Z. In the following lemma we give several conditions which are equivalent to (4.1). Namely, we show that any variation of the superscripts star, hat, and tilde is possible. Lemma 4.1.1. Let n ∈ N be given. For any k ∈ [0, ∞)Z the following conditions are equivalent. (i) The matrices S(t), S(t), V(t), V(t) satisfy (4.1). (ii) The matrices 5k(λ) and 5k(λ) satisfy (4.5) for all λ ∈ C. (iii) The matrices S(t), S(t), V(t), V(t) satisfy Sk JS ∗ k = J, Vk JS ∗ k + Sk JV ∗ k = 0, Vk JV ∗ k = 0. (4.7) (iv) The matrices 5k(λ) and 5k(λ) satisfy for all λ ∈ C that 5k(λ)J5∗ k(¯λ) = J. (4.8) Proof. The equivalence of (i) and (ii), and of (iii) and (iv), follows by direct calculations with the notation introduced in (4.4). The equivalence of (ii) and (iv) is a consequence of the relations in (4.6) and of the fact 5k(λ)5−1 k (λ) = I. ■ – 56 – 4.1. Preliminaries Further conditions which are equivalent to (i)–(iv) in Lemma 4.1.1 can be obtained by the conjugate transpose of the identities in (4.1), (4.5), (4.7), and (4.8). Now we focus on the coefficients of systems (Sλ) and (Sλ) with different values of the spectral parameter. Together with k defined in (4.2) we consider also the matrix k := JVk JS ∗ k J. (4.9) Lemma 4.1.2. Let n ∈ N be fixed and k ∈ [0, ∞)Z be such that the conditions in (4.1) hold. Then the matrices k, k defined in (4.2) and (4.9), respectively, satisfy k = ∗ k, k J k = 0, k J k = 0, (4.10) and for all λ, ν ∈ C we have J5k(λ)J5∗ k(ν)J = (λ − ¯ν) k − J, (4.11) J5k(λ)J5∗ (ν)J = (λ − ¯ν) k − J. (4.12) Proof. The above identities follow by direct calculations. ■ Identities (4.11) and (4.12) play a crucial role in the proof of the following generalization of the Lagrange identity for two systems, compare with Theorem 2.1.7. Theorem 4.1.3 (Generalized Lagrange identity). Let n, m ∈ N and λ, ν ∈ C be fixed and assume that the conditions in (4.1) hold for all k ∈ [0, ∞)Z. If Z(λ), Z(ν) ∈ C([0, ∞)Z)2n×m solve systems (Sλ) and (Sν) on [0, ∞)Z, respectively, then for any k ∈ [0, ∞)Z we have [ Z ∗ k(λ)JZk(ν) ] = (¯λ − ν)Z ∗ k+1(λ) k Zk+1(ν), (4.13) [ Z ∗ k(λ)JZk(ν) ] = (¯λ − ν)Z ∗ k+1(λ) k Zk+1(ν). (4.14) Proof. Identity (4.13) follows from the first equality in (4.6) and from (4.11), because [ Z ∗ k(λ)JZk(ν) ] = Z ∗ k+1(λ) [ J − 5∗−1 k (λ)J5−1 k (ν) ] Zk+1(ν) (4.6) = Z ∗ k+1(λ) [ J + J5k(¯λ)J5∗ k(¯ν)J ] Zk+1(ν) (4.11) = (¯λ − ν)Z ∗ k+1(λ) k Zk+1(ν). 
Similarly we get identity (4.14) from the second equality in (4.6) and from (4.12). ■ Remark 4.1.4. (i) The results in Theorem 4.1.3 imply that in order to have a single weight matrix for the semi-inner product and the semi-norm in the associated space of square summable solutions, we must necessarily assume that k = k. This means, in view of Lemma 4.1.2, that we need to have k Hermitian. This is, in fact, the original motivation for our assumption (4.2). (ii) In the continuous case it is obvious that systems (H R λ ) and (H R λ ) coincide if and only if H(t) is Hamiltonian on [a, ∞). Now we give the answer to the same question for systems (Sλ) and (Sλ). From the first equality in (4.1) (or from (4.6) with λ = 0) and from the first and the second conditions in (4.1) we obtain, respectively, Sk = −JS ∗−1 k J and Vk = JS ∗−1 k V ∗ k S ∗−1 k J. (4.15) – 57 – Chapter 4. Invariance of limit circle case for two discrete systems Therefore, S(t) = S(t) in (4.15) if and only if S ∗ k JSk = J, i.e., Sk is symplectic for all k ∈ [0, ∞)Z. In this case, Vk = Vk in (4.15) if and only if the matrix Vk JS ∗ k = Vk JS ∗ k is Hermitian for all k ∈ [0, ∞)Z, i.e., the matrix k is Hermitian on [0, ∞)Z. In other words, systems (Sλ) and (Sλ) satisfying (4.1) and (4.2) coincide if and only if they represent the discrete symplectic system studied in Chapter 2, i.e., system (Sλ) with the coefficient matrices satisfying (2.1). Finally, we calculate the determinant of the matrices 5k(λ), 5k(λ) and of their product. Let k ∈ [0, ∞)Z be such that the conditions in (4.1) hold. From the first equality in (4.1) we obtain for any λ ∈ C that 5k(λ) = (I + λJ k)Sk and 5k(λ) = (I + λJ k)Sk. Moreover, by the second and third identities in (4.10) the matrices λJ k and λJ k are nilpotent of degree two, which with the aid of Proposition 1.1.3 yields det(I + λJ k) = 1 = det(I + λJ k). Therefore det 5k(λ) = det Sk and det 5k(λ) = det Sk, i.e., the determinants do not depend on λ. Consequently from the first condition in (4.1) we get det 5∗ k(λ)J5k(λ) = det S ∗ k JSk (4.1) = det J = 1, i.e., det 5∗ k(λ) × det 5k(λ) = 1. (4.16) In addition, from the latter equality and Remark 4.1.4(ii) one concludes that when systems (Sλ) and (Sλ) coincide, then the absolute value of det 5k(λ) is equal to one, as we claim in Lemma 2.1.3. 4.2 Main result In this section we establish the main result of this chapter (Theorem 4.2.2). We also provide sufficient conditions for the invariance, present an illustrative example, and discuss some special cases. For convenience, we summarize the used notation. Notation 4.2.1. The number n ∈ N is fixed and S, S, V, V, ∈ C([0, ∞)Z)2n×2n are such that the conditions in (4.1) and (4.2) are satisfied for all k ∈ [0, ∞)Z. It follows from Remark 4.1.4(i) that with Notation 4.2.1 we have just one weight matrix k = k, which is Hermitian and positive semidefinite on [0, ∞)Z. Therefore, we denote by ℓ2 the space of all sequences defined on [0, ∞)Z, which are square summable with respect to the weight matrix k, i.e., ℓ2 [0, ∞)Z = ℓ2 := { z ∈ C([0, ∞)Z)2n ∞∑ k=0 z∗ k k zk < ∞ } . Let us note that the statement in Theorem 4.2.2 below remains the same for other types of unbounded discrete intervals, such as for (−∞, b]Z or (−∞, ∞)Z, if the corresponding space ℓ2 is defined over that interval. Theorem 4.2.2. Let us assume that there exists λ0 ∈ C such that all solutions of systems (Sλ0 ) and (Sλ0 ) belong to ℓ2 . Then all solutions of systems (Sλ) and (Sλ) belong to ℓ2 for any λ ∈ C. – 58 – 4.2. Main result Proof. 
Let λ0 ∈ C be as stated in the theorem and λ ∈ CK{λ0} be fixed. For ν ∈ {λ, λ0} we denote by k(ν) and k(ν) the fundamental matrices of systems (Sν) and (Sν), respectively, such that 0(ν) = I = 0(ν). In the first part of the proof we show that all solutions of system (Sλ) belong to ℓ2 . Since the matrices k(λ) and k(λ0) are obviously invertible on [0, ∞)Z, it follows that for every k ∈ [0, ∞)Z we have k(λ) = k(λ0) k (4.17) for some invertible matrix k ∈ C2n×2n, i.e., k = −1 k (λ0) k(λ). Hence by straightforward calculations with using (4.1), (4.4), (4.6), and (4.17) we get k (4.17) = −1 k+1(λ0) [ 5k(λ) − 5k(λ0) ] k(λ) (4.4) = (λ − λ0) −1 k+1(λ0)Vk k(λ0) k = (λ − λ0) −1 k+1(λ0)Vk 5−1 k (λ0) k+1(λ0) k (4.6) = −(λ − λ0) −1 k+1(λ0)Vk J5∗ k(¯λ0)J k+1(λ0) k (4.1) = −(λ − λ0) −1 k+1(λ0)Vk JS ∗ k J k+1(λ0) k = (λ − λ0) −1 k+1(λ0)J k k+1(λ0) k. (4.18) It means that k satisfies the recurrence relation k+1 = [ I + (λ − λ0)ϒk ] k, k ∈ [0, ∞)Z, (4.19) where ϒ ∈ C([0, ∞)Z)2n×2n is given by the formula ϒk := −1 k+1(λ0)J k k+1(λ0) = − [ ∗ k+1(λ0)J k+1(λ0) ]−1 ∗ k+1(λ0) k k+1(λ0). (4.20) Identity (4.17) implies that for the required conclusion it suffices to prove the boundedness of || k ||σ on [0, ∞)Z. However equality (4.19) and the submultiplicative property of the norm ||·||σ yield k+1 σ = [ I + (λ − λ0)ϒk ] k σ ≤ ( ||I||σ + |λ − λ0 | × ||ϒk ||σ ) × k σ = ( 1 + |λ − λ0 | × ||ϒk ||σ ) × [ I + (λ − λ0)ϒk−1 ] k−1 σ ≤ ( 1 + |λ − λ0 | × ||ϒk ||σ ) × ( 1 + |λ − λ0 | × ||ϒk−1 ||σ ) × k−1 σ ≤ · · · ≤ ( 1 + |λ − λ0 | × ||ϒk ||σ ) × · · · × ( 1 + |λ − λ0 | × ||ϒ0 ||σ ) × 0 σ ≤ e|λ−λ0 |ωk with ωk := k∑ j=0 ||ϒj ||σ, (4.21) where in the last step we used the inequality 1 + x ≤ ex and the fact 0 = I; cf. Proposition 1.1.4. Therefore we need to show that limk→∞ ωk < ∞. Since k ≥ 0 on [0, ∞)Z, we obtain from the Cauchy–Schwarz inequality and the arithmetic-geometric mean inequality for any ξ, ζ ∈ C2n that ξ∗ kζ ≤ ( ξ∗ kξ )1/2 ( ζ∗ kζ )1/2 ≤ 1 2 ( ξ∗ kξ + ζ∗ kζ ) , which for any ˜z, ˆz ∈ ℓ2 implies ∞∑ k=0 ˜z∗ k k ˆzk ≤ ∞∑ k=0 ˜z∗ k k ˆzk ≤ 1 2 ∞∑ k=0 ( ˜z∗ k k ˜zk + ˆz∗ k k ˆzk ) < ∞. – 59 – Chapter 4. Invariance of limit circle case for two discrete systems Hence inequality (1.7) and the assumption that all solutions of systems (Sλ0 ) and (Sλ0 ) belong to ℓ2 , yield ∞∑ k=0 ∗ k+1(λ0) k k+1(λ0) σ (1.7) ≤ ∞∑ k=0 ∗ k+1(λ0) k k+1(λ0) 1 ≤ ε < ∞ (4.22) for some ε > 0. Now we put Qk := ∗ k+1(λ0)J k+1(λ0) and show that the value of Q−1 k σ is bounded on [0, ∞)Z. By the generalized Lagrange identity in Theorem 4.1.3 we get Qk = J − 2i im(λ0) ∞∑ k=0 ∗ j+1(λ0) j j+1(λ0), and so inequality (4.22) implies that the limit limk→∞ Qk exists and is bounded in the spectral norm, i.e., ||Qk ||σ is bounded on [0, ∞)Z. Therefore also the adjugate matrix Q adj k is bounded on [0, ∞)Z in the spectral norm. Moreover, it holds det Qk = det ∗ k+1(λ0) × det k+1(λ0) = det ∗ k(λ0) × det k(λ0) × det 5∗ k(λ0) × det 5k(λ0) (4.16) = det ∗ k−1(λ0) × det k−1(λ0) × det 5∗ k−1(λ0) × det 5k−1(λ0) = · · · = det ∗ 0(λ0) × det 0(λ0) = 1, which yields Q−1 k = Q adj k and consequently the matrices Q−1 k are bounded in the spectral norm, i.e., Q−1 k σ ≤ κ for all k ∈ [0, ∞)Z and some κ > 0. By combining the submultiplicative property of the spectral norm, (4.20), and (4.22) we get ∞∑ k=0 ||ϒk ||σ ≤ ∞∑ k=0 Q−1 k σ × ∗ k+1(λ0) k k+1(λ0) σ ≤ κε < ∞, i.e., limk→∞ ωk < ∞ by (4.21). Thus k σ ≤ τ on [0, ∞)Z for some τ > 0. 
In turn, the definition of k, the second inequality in (1.7), and the submultiplicativity and selfadjointness of the spectral norm imply (2n)−3/2 ∞∑ k=0 ∗ k+1(λ) k k+1(λ) 1 (1.7) ≤ ∞∑ k=0 ∗ k+1(λ) k k+1(λ) σ (4.17) = ∞∑ k=0 ∗ k+1 ∗ k+1(λ0) k k+1(λ0) k+1 σ ≤ ∞∑ k=0 k+1 2 σ × ∗ k+1(λ0) k k+1(λ0) σ ≤ τ2 ∞∑ k=0 ∗ k+1(λ0) k k+1(λ0) σ < ∞, because all columns of the fundamental matrix (λ0) belong to ℓ2 . This shows that all columns of (λ) belong to ℓ2 and consequently any solution of system (Sλ) is square summable with respect to k. For the proof of the fact that all solutions of system (Sλ) – 60 – 4.2. Main result are also in ℓ2 we only switch the roles of systems (Sλ) and (Sλ). Namely, we define k := −1 k (λ0) k(λ) and similarly as in (4.18) we derive k = (λ − λ0) −1 k+1(λ0)J k k+1(λ0) k. But since we have k = k for all k ∈ [0, ∞)Z, the rest of the proof is the same as in the previous part. ■ In the following result we give sufficient conditions in terms of the coefficient matrices, which guarantee that all solutions of systems (Sλ) and (Sλ) belong to ℓ2 for any λ ∈ C. Let us note that the matrix norm ||·||1 used in (4.23) below can be replaced by any other matrix norm because of their equivalence. Corollary 4.2.3. Let us assume that ∞∑ k=0 Sk − I 1 < ∞, ∞∑ k=0 Sk − I 1 < ∞, and ∞∑ k=0 k 1 < ∞. (4.23) Then all solutions of systems (Sλ) and (Sλ) belong to ℓ2 for any λ ∈ C. Proof. It suffices to show that the assumptions of Theorem 4.2.2 are satisfied for λ0 = 0. The equality 5k(0) = Sk for all k ∈ [0, ∞)Z and the first condition in (4.23) imply by Proposition 1.1.4 that for a fundamental matrix (0) ∈ C([0, ∞)Z)2n×2n of system (S0) there exists κ > 0 such that k(0) 1 ≤ κ < ∞ for all k ∈ [0, ∞)Z. Hence by the submultiplicativity and self-adjointness of the matrix norm ||·||1 and the third condition in (4.23) we have ∞∑ k=0 ∗ k+1(0) k k+1(0) 1 ≤ κ2 ∞∑ k=0 k 1 < ∞, i.e., all solutions of system (S0) belong to ℓ2 . In a similar way we prove that any solution of system (S0) also belongs to ℓ2 . Thus Theorem 4.2.2 implies that all solutions of systems (Sλ) and (Sλ) are in ℓ2 for any λ ∈ C. ■ Remark 4.2.4. In accordance with Remark 4.1.4(ii), if both systems (Sλ) and (Sλ) coincide, Theorem 4.2.2 reduces to Theorem 2.4.17 and Corollary 4.2.3 to Corollary 2.4.22. Therefore, with a slight abuse in the terminology, Theorem 4.2.2 can be interpreted as the invariance of the limit circle case for systems (Sλ) and (Sλ), i.e., the situation when all solutions of systems (Sλ) and (Sλ) belong to ℓ2 for any λ ∈ C. Now we provide an illustrative example of the established invariance, i.e., the application of Theorem 4.2.2. This example also shows that all solutions of systems (Sλ) and (Sλ) may be in ℓ2 for any λ ∈ C even when conditions (4.23) in Corollary 4.2.3 are not satisfied. Example 4.2.5. Let n = 1 and fix ε ∈ R, ε ≥ 1. Let {vk}∞ k=0 be a real sequence such that vk ≥ 0 for all k ∈ [0, ∞)Z and ∑∞ k=0 ε2k vk < ∞. We note that then the series ∑∞ k=0 vk < ∞ and ∑∞ k=0 vk/ε2k < ∞ are convergent as well, because 0 ≤ vk/ε2k ≤ vk ≤ ε2k vk. Consider systems (Sλ) and (Sλ) with Sk := (1/ε)I, Vk := ( 0 vk 0 0 ) , Sk := εI, Vk := ( 0 ε2 vk 0 0 ) , k := ( 0 0 0 εvk ) (4.24) – 61 – Chapter 4. Invariance of limit circle case for two discrete systems for all k ∈ [0, ∞)Z. Then all the conditions (4.1) and (4.2) are satisfied. 
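This claim is straightforward to verify numerically as well. A minimal sketch (in Python with NumPy; the particular values of ε and vk below are sample data only, any admissible choice works):

```python
import numpy as np

# Verification of the identities (4.1) and of the weight matrix from (4.2)
# for the data (4.24) of Example 4.2.5 at one index k.
eps, vk = 2.0, 0.3                      # sample values only
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
S_hat = np.eye(2) / eps
V_hat = np.array([[0.0, vk], [0.0, 0.0]])
S_til = eps * np.eye(2)
V_til = np.array([[0.0, eps**2 * vk], [0.0, 0.0]])

print(np.allclose(S_til.T @ J @ S_hat, J))                        # first identity in (4.1)
print(np.allclose(V_til.T @ J @ S_hat + S_til.T @ J @ V_hat, 0))  # second identity in (4.1)
print(np.allclose(V_til.T @ J @ V_hat, 0))                        # third identity in (4.1)

weight = J @ V_til @ J @ S_hat.T @ J                              # weight matrix from (4.2)
print(np.allclose(weight, np.diag([0.0, eps * vk])))              # equals the matrix in (4.24)
print(np.all(np.linalg.eigvalsh(weight) >= 0))                    # its eigenvalues are nonnegative
```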
The fundamental matrices k(0) = ( ˆz[1] k (0), ˆz[2] k (0) ) and k(0) = ( ˜z[1] k (0), ˜z[2] k (0) ) of systems (S0) and (S0) with (4.24) satisfying 0(0) = I = 0(0) are equal to k(0) = (1/εk)I and k(0) = εk I, so that ˆz[1] k (0) = (1/εk, 0)⊤, ˆz[2] k (0) = (0, 1/εk)⊤, ˜z[1] k (0) = (εk, 0)⊤, and ˜z[2] k (0) = (0, εk)⊤ for all k ∈ [0, ∞)Z. Then with the notation ||z||2 := ∑∞ k=0 z∗ k k zk we have ˆz[1] (0) 2 = 0, ˆz[2] (0) 2 = ε ∞∑ k=0 vk/ε2k < ∞, ˜z[1] (0) 2 = 0, ˜z[2] (0) 2 = ε ∞∑ k=0 ε2k vk < ∞, i.e., the solutions ˆz[1] (0), ˆz[2] (0), ˜z[1] (0), ˜z[2] (0) belong to ℓ2 . Thus the assumptions of Theorem 4.2.2 are satisfied for λ0 = 0, which implies that all solutions of systems (Sλ) and (Sλ) with the coefficients specified in (4.24) belong to ℓ2 for any λ ∈ C. Indeed, for any λ ∈ C the fundamental matrices k(λ) and k(λ) of systems (Sλ) and (Sλ) with (4.24) satisfying 0(λ) = I = 0(λ) are given by k(λ) =   1/εk (λ/εk−1) ∑k−1 j=0 vj 0 1/εk   , k(λ) =   εk λεk+1 ∑k−1 j=0 vj 0 εk   , from which we obtain again ˆz[1] (λ) 2 = 0 = ˜z[1] (λ) 2 and ˆz[2] (λ) 2 < ∞, ˜z[2] (λ) 2 < ∞. One also easily observe that the first two conditions in (4.23) are not satisfied (since the corresponding series are divergent), but still all solutions of systems (Sλ) and (Sλ) with (4.24) do belong to ℓ2 . In addition, we note that for ε = 1 both systems coincide and reduce to the system investigated in Example 2.5.3, see (2.78). ▲ Finally, let us consider systems (Sλ) and (Sλ), which correspond to the following n-vector-valued difference equations of order 2m and of the Sturm–Liouville type m∑ s=0 (−1)s s [ P [s] k s ˆyk+1−s(λ) ] = λWk ˆyk+1(λ), (Eλ) m∑ s=0 (−1)s s [ P [s] k s ˜yk+1−s(λ) ] = λWk ˜yk+1(λ), (Eλ) on [0, ∞)Z, where P [0] , . . . , P [m] , P [0] , . . . , P [m] , W, W ∈ C([0, ∞)Z)n×n with det P [m] k 0 and det P [m] k 0 for all k ∈ [0, ∞)Z. In particular, if m = n = 1 we obtain the pair of scalar difference equations − ( p[1] k ˆyk(λ) ) + p[0] k ˆyk+1(λ) = λwk ˆyk+1(λ), − ( p[1] k ˜yk(λ) ) + p[0] k ˜yk+1(λ) = λwk ˜yk+1(λ), where p[0] , p[1] , p[0] , p[1] , w, w ∈ C([0, ∞)Z) with p[1] 0 and p[1] 0 for all k ∈ [0, ∞)Z. Then equations (Eλ) and (Eλ) can be written, respectively, as systems (Sλ) and (Sλ) with the coefficients given similarly as in (1.20)–(1.22) and (2.11), which yield k = diag { Wk, 0, . . . , 0 } , k = diag { Wk, 0, . . . , 0 } . (4.25) – 62 – 4.3. Bibliographical notes The first and second conditions in (4.1) imply P [j] k = P [j]∗ k and Wk = W ∗ k for all j = 0, . . . , m and k ∈ [0, ∞)Z, while the third condition is satisfied trivially. Moreover, assumption (4.2) forces that Wk = Wk ≥ 0 on [0, ∞)Z, i.e., the weight matrices Wk and Wk are Hermitian matrices and coincide on the interval [0, ∞)Z. With the vectors ˆzk(λ), ˜zk(λ) defined similarly as in (1.20) and the matrices k, k from (4.25) we have ˆz∗ k+1(λ) k ˆzk+1(λ) = ˆy∗ k+1(λ)Wk ˆyk+1(λ) and ˜z∗ k+1(λ) k ˜zk+1(λ) = ˜y∗ k+1(λ)Wk ˜yk+1(λ). This shows that the associated space of square summable sequences has the form ℓ2 W := { y ∈ C([0, ∞)Z)n ∞∑ k=0 y∗ k+1(λ)Wk yk+1(λ) < ∞ } . Then from Theorem 4.2.2 we get the following result, which in the scalar case provides a discrete analogue of [168, Theorem 1]. Corollary 4.2.6. Let the numbers m, n ∈ N be given and P [0] , . . . , P [m] , W ∈ C([0, ∞)Z)n×n be such that Wk = W ∗ k ≥ 0 on [0, ∞)Z. Consider equations (Eλ) and (Eλ) with P [j] k := P [j]∗ k and Wk := Wk for all j ∈ {0, . . . , m} and k ∈ [0, ∞)Z. 
If there exists λ0 ∈ C such that all solutions of equations (Eλ0 ) and (Eλ0 ) belong to ℓ2 W , then all solutions of equations (Eλ) and (Eλ) belong to the space ℓ2 W for an arbitrary λ ∈ C. Remark 4.2.7. If, in addition, the coefficient matrices P [0] , . . . , P [m] are Hermitian, then from Corollary 4.2.6 one easily concludes the invariance of the limit circle case for any even order vector-valued Sturm–Liouville difference equation discussed in Remark 2.4.18. Similarly we can derive also the invariance for pairs of difference/discrete equations of the type as in (2.9) or (2.10). 4.3 Bibliographical notes The results of this chapter are a special case of the invariance of the limit circle case for two differential systems on time scales established in [A19] but without the shift in the definition of the space ℓ2 . The present reformulation and the proof are published for the first time in the setting of systems (Sλ) and (Sλ). The statement of Corollary 4.2.6 is new in the case n > 1 or m > 1. Moreover, the presence of the shift in the definition of ℓ2 produces less restrictive assumptions on the coefficients of equations (Eλ) and (Eλ), compare Corollary 4.2.6 with m = n = 1 and [A19, Corollary 4.4] with T = Z. Finally, Example 4.2.5 corresponds to [A19, Example 4.6]. – 63 – Chapter 4. Invariance of limit circle case for two discrete systems – 64 – Chapter 5 Polynomial and analytic dependence on spectral parameter It can be of no practical use to know that π is irrational, but if we can know, it surely would be intolerable not to know. Edward Charles Titchmarsh, see [136, pg. 113] In this chapter we extend some of the previous results to systems with polynomial or analytic dependence on the spectral parameter. More specifically, we consider the discrete symplectic system zk+1(λ) = Sk(λ)zk(λ), (Sλ) whose coefficient matrix S(λ) ∈ C([0, ∞)Z)2n×2n is analytic (or in a special case only polynomial) in the spectral parameter λ ∈ C in a neighborhood of 0, i.e., Sk(λ) = ∞∑ j=0 λj S [j] k , (5.1) and it satisfies the symplectic-type identity S∗ k(¯λ)JSk(λ) = J. (5.2) System (Sλ) includes several significant special cases known in the literature. Obviously, system (Sλ) from Chapter 2 is a special case of (Sλ), see (2.2). Furthermore, we will see that the linear Hamiltonian difference system in (2.6) leads to system (Sλ) with polynomial dependence on λ. Therefore, in order to unify the known theory of systems (2.6) and (Sλ), it is necessary to study the systems with polynomial dependence on λ. In fact, our interest in system (Sλ) is motivated by the latter observation and also by [49], where system (Sλ) with S[0] k ≡ I was investigated. We discuss an eigenvalue problem associated with system (Sλ) and develop the theory of Weyl disks and square summable solutions (including the limit point and limit circle cases) for system (Sλ). However, we point out that in this treatment we encounter several “problems”, which did not appear in Chapters 2–4, i.e., when the dependence on λ was only linear. For example, the weight matrix is no longer constant in λ, which implies that the validity of the crucial weak Atkinson condition may now depend on λ, see – 65 – Chapter 5. Polynomial and analytic dependence on spectral parameter Hypotheses 2.3.7 and 5.3.2. Also the maximal number of linearly independent square summable solutions (i.e., the limit circle case) is not any more invariant with respect to λ ∈ C, see Theorem 2.4.17 and Example 5.3.9. 
On the other hand, we prove in Theorems 5.4.1 and 5.4.5 that for system (Sλ) with a special quadratic dependence on λ the invariance of the limit circle case holds true as in Chapter 4. This chapter is organized as follows. In the next section we derive several preliminary results on system (Sλ) and its coefficient matrix S(λ). We also prove a general form of the Lagrange identity for system (Sλ) including the explicit calculation of the corresponding weight matrix in terms of the coefficients of (Sλ), see Theorem 5.1.6. As a consequence we obtain the J-monotonicity of a fundamental matrix of system (Sλ), which is used in [49] for proving the Krein traffic rules for the eigenvalues of the fundamental matrix. In Section 5.2 we discuss in more details some special cases of system (Sλ), which are known in the literature. In Section 5.3 we show that under appropriate Atkinson-type conditions involving the weight matrix, the theory of eigenvalues, Weyl disks, and square summable solutions developed in Chapter 2 remains valid without any change also for system (Sλ). Finally, in Section 5.4 we establish the invariance of the limit circle case for system (Sλ) with a special quadratic dependence on the spectral parameter, which includes also system (2.6) with Ek ≡ 0. 5.1 Preliminaries and Lagrange identity Throughout this chapter we assume that Sk(λ) has a positive radius of convergence as a power series with respect to λ uniformly in k ∈ [0, ∞)Z. It means that there exists ε > 0 such that Sk(λ) is absolute convergent for all λ ∈ C satisfying |λ| < ε and all k ∈ [0, ∞)Z. We denote this region of convergence as CS, i.e., we have CS := {λ ∈ C | |λ| < ε}. Moreover, we say that Sk(λ) is a polynomial matrix (of degree p) with respect to λ, if there exists p ∈ N0 such that Sk(λ) = ∑p j=0 λjS [j] k with S [p] k 0. Obviously, in the latter case we can take ε = ∞, i.e., CS = C. If S [j] k 0 for infinitely many j ∈ N0, we say that the matrix Sk(λ) is analytic with respect to λ. Using the absolute convergence of the matrices Sk(λ) for all λ ∈ CS, identity (5.2) can be equivalently written as S[0]∗ k JS[0] k = J and m∑ j=0 S [j]∗ k JS [m−j] k = 0 for all m ∈ N. (5.3) If Sk(λ) is a polynomial matrix of degree p in λ, then the sum in (5.3) is nontrivial only for m = 0, . . . , 2p. Moreover, identity (5.2) also implies that Sk(λ) is invertible with S−1 k (λ) = −JS∗ k(¯λ)J = − ∞∑ j=0 λj JS [j]∗ k J. (5.4) The following lemma provides several equivalent formulations of assumption (5.2), compare with Lemma 2.1.1. The proof follows (again) by direct calculations and from (5.4). Lemma 5.1.1. Let n ∈ N be given. For any k ∈ [0, ∞)Z the following conditions are equivalent. (i) The matrix Sk(λ) in (5.1) satisfies identity (5.2) for all λ ∈ CS. (ii) The matrices S[0] k , S[1] k , . . . satisfy the equalities in (5.3). – 66 – 5.1. Preliminaries and Lagrange identity (iii) The matrix Sk(λ) satisfies Sk(λ)JS∗ k(¯λ) = J for all λ ∈ CS. (5.5) (iv) The matrices S[0] k , S[1] k , . . . satisfy S[0] k JS[0]∗ k = J and m∑ j=0 S [j] k JS [m−j]∗ k = 0 for all m ∈ N. (5.6) Given Lemma 5.1.1 we can summarize the basic notation used in this chapter. Notation 5.1.2. The number n ∈ N is fixed and S[0] , S[1] , · · · ∈ C([0, ∞)Z)2n×2n are such that (i) the equalities in (5.3) are satisfied for all k ∈ [0, ∞)Z and (ii) the radius of convergence of Sk(λ) in λ is equal to ε > 0 uniformly in k ∈ [0, ∞)Z. The invertibility of Sk(λ) guarantees that system (Sλ) is uniquely solvable on [0, ∞)Z for any initial value at any k0 ∈ [0, ∞)Z. 
Moreover, when Sk(λ) is polynomial of degree p in λ, we obtain an additional information about its determinant, which generalizes the result in Theorem 2.1.3. However, when Sk(λ) is analytic and not polynomial in λ, then the following statement may be violated as we will demonstrate in Example 5.2.4. Theorem 5.1.3. Let k ∈ [0, ∞)Z and λ ∈ CS be such that the matrix Sk(λ) is polynomial in λ. Then | det Sk(λ)| = det S[0] k = 1. Proof. If Sk(λ) is polynomial in λ, then identity (5.4) implies that Sk(λ) is even an unimodular polynomial matrix. Thus, its determinant is constant in λ and we have | det Sk(λ)| = | det Sk(0)| = det S[0] k = 1, because S[0] k is a symplectic matrix by the first equality in (5.3). ■ Now we return to a general case of the analytic dependence on λ. The following lemma provides an extension of identity (5.5) and it is a main tool for the proof of the Lagrange identity given below. Lemma 5.1.4. For all k ∈ [0, ∞)Z and any λ, ν ∈ CS we have Sk(λ)JS∗ k(ν) = J + (λ − ¯ν)Λk(λ, ¯ν), (5.7) where the matrix Λ(λ, ¯ν) ∈ C([0, ∞)Z)2n×2n is defined by Λk(λ, ¯ν) := ∞∑ m=0 m∑ j=0 λm−j ¯νj j∑ ℓ=0 S[m−ℓ+1] k JS[ℓ]∗ k . (5.8) Moreover, for ν = λ the matrix Λk(λ, ¯λ) is Hermitian for all k ∈ [0, ∞)Z. Remark 5.1.5. If Sk(λ) is a polynomial matrix of degree p in λ, then the infinite sum in (5.8) is in fact a finite sum for m = 0, . . . , 2p − 1. Observe also that identity (5.8) reduces to (5.5) when ν = ¯λ. Moreover, we point out that the Hermitian property of Λk(λ, ¯λ) was already shown in [49, Proposition 1]. – 67 – Chapter 5. Polynomial and analytic dependence on spectral parameter Proof of Lemma 5.1.4. Let k ∈ [0, ∞)Z and λ, ν ∈ CS be fixed. The power series for Sk(λ) and S∗ k (ν) converge absolutely, so that the terms in the product Sk(λ)JS∗ k (ν) can be re-arranged to the separate powers of λm−j ¯νj, that is, Sk(λ)JS∗ k(ν) = ∞∑ m=0 m∑ j=0 λm−j ¯νj S [m−j] k JS [j]∗ k . By using identity (5.6) for each m ∈ N, we replace the term ¯νm S[0] k JS[m]∗ k by −¯νm ( S[m] k JS[0]∗ k + S[m−1] k JS[1]∗ k + · · · + S[1] k JS[m−1]∗ k ) . Thus, with the aid of the first identity in (5.6), we get Sk(λ)JS∗ k(ν) = J + ∞∑ m=1 m∑ j=1 (λj − ¯νj ) ¯νm−j S [j] k JS [m−j]∗ k . Upon factoring λ − ¯ν out of each term λj − ¯νj = (λ − ¯ν) ∑j ℓ=1 λj−ℓ ¯νℓ−1 and collecting the remaining products with the same powers of λ and ¯ν, we obtain Sk(λ)JS∗ k(ν) = J + (λ − ¯ν) ∞∑ m=1 m∑ j=1 ( j∑ ℓ=1 λj−ℓ ¯νℓ−1 ) ¯νm−j S [j] k JS [m−j]∗ k = J + (λ − ¯ν) ∞∑ m=0 m∑ j=0 ( j∑ ℓ=0 λj−ℓ ¯νm+ℓ−j ) S [j+1] k JS [m−j]∗ k = J + (λ − ¯ν) ∞∑ m=0 m∑ j=0 λm−j ¯νj j∑ ℓ=0 S[m−ℓ+1] k JS[ℓ]∗ k = J + (λ − ¯ν)Λk(λ, ¯ν), where we used also the formula ∑m j=0 ∑j ℓ=0 aj,ℓ = ∑m ℓ=0 ∑m j=ℓ aj,ℓ. Finally, for ν := λ we get from the fact J∗ = −J and identities (5.6) that the matrix Λk(λ, ¯λ) is Hermitian. ■ The following theorem represents the main result of this section. Its relationship to known discrete Lagrange identities in the literature is discussed in Section 5.2. Theorem 5.1.6 (Lagrange identity). Let the numbers m ∈ N and λ, ν ∈ CS be given. If the sequences Z(λ), Z(ν) ∈ C([0, ∞)Z)2n×m solve systems (Sλ) and (Sν) on [0, ∞)Z, respectively, then for any k ∈ [0, ∞)Z we have [ Z∗ k(λ)JZk(ν) ] = (¯λ − ν)Z∗ k+1(λ)JΛj(¯λ, ν)JZk+1(ν), (5.9) Z∗ k+1(λ)JZk+1(ν) = Z∗ 0(λ)JZ0(ν) + (¯λ − ν) k∑ j=0 Z∗ j+1(λ)JΛj(¯λ, ν)JZj+1(ν). (5.10) In particular, for ν = ¯λ and ν = λ we have on [0, ∞)Z, respectively, Z∗ k(λ)JZk(¯λ) ≡ Z∗ 0(λ)JZ0(¯λ), (5.11) Z∗ k+1(λ)JZk+1(λ) = Z∗ 0(λ)JZ0(λ) − 2i im(λ) k∑ j=0 Z∗ j+1(λ)JΛj(¯λ, λ)JZj+1(λ). (5.12) – 68 – 5.1. 
Preliminaries and Lagrange identity Proof. Given that Zk(λ) and Zk(ν) satisfy systems (Sλ) and (Sν) for all k ∈ [0, ∞)Z, respectively, we obtain from formula (5.4) and Lemma 5.1.4 that [ Z∗ k(λ)JZk(ν) ] = Z∗ k+1(λ) [ J − S∗−1 k (λ)JS−1 k (ν) ] Zk+1(ν) (5.4) = Z∗ k+1(λ) [ J + JSk(¯λ)JS∗ k(¯ν)J ] Zk+1(ν) (5.7) = (¯λ − ν)Z∗ k+1(λ)JΛk(¯λ, ν)JZk(ν). Identities (5.9)–(5.12) are only direct consequences of (5.9). ■ Identity (5.12) indicates that the matrix JΛk(¯λ, λ)J will play an important role in the study of square summable solutions of system (Sλ), especially in the definition of the semi-inner product associated with system (Sλ), see Section 5.3. Hence we define the Hermitian 2n × 2n matrix Ψk(λ) := JΛk(¯λ, λ)J = ∞∑ m=0 m∑ j=0 ¯λm−j λj j∑ ℓ=0 JS[m−ℓ+1] k JS[ℓ]∗ k J. (5.13) Remark 5.1.7. The expression of the matrix Ψk(λ) given in (5.13) can be significantly simplified for real λ, i.e., for λ ∈ CS ∩ R. In particular, if we denote by ˙Sk(λ) := d dλ Sk(λ) the derivative of Sk(λ) with respect to λ, then Ψk(λ) = ∞∑ m=0 m∑ j=0 λm j∑ ℓ=0 JS[m−ℓ+1] k JS[ℓ]∗ k J = J ˙Sk(λ)JS∗ k(λ)J = −JSk(λ)J ˙S∗ k(λ)J, (5.14) where the last equality follows from the fact Ψ∗ k (λ) = Ψk(λ). The matrix J ˙Sk(λ)JS∗ k (λ)J was used in [113,149] in the oscillation theory of discrete symplectic systems with general nonlinear dependence on λ ∈ R. The Lagrange identity established in (5.12) has many applications in the qualitative theory of difference equations. Apart from the results in the following sections, it yields for example the J-monotonicity of the fundamental matrix of system (Sλ). Following the terminology from [114, pg. 7], a matrix M ∈ C2n×2n is called J-nondecreasing if iM∗JM ≥ iJ, and M is J-nonincreasing if iM∗JM ≤ iJ. Similarly we define the corresponding notions of a J-increasing or J-decreasing matrix. These concepts were used in [114] to study the stability zones for periodic linear Hamiltonian differential systems. In a similar way, such stability zones were studied in [129, 130] for the linear Hamiltonian difference systems given in (2.6) with Hk ≡ 0 and in [49] for system (Sλ) with S[0] k = I. Corollary 5.1.8. Let λ ∈ CS be fixed, Ψk(λ) ≥ 0 on [0, ∞)Z, and (λ) ∈ C([0, ∞)Z)2n×2n be a fundamental matrix of system (Sλ) such that the matrix 0(λ) is symplectic, i.e., it satisfies ∗ 0 (λ)J0(λ) = J. Then for every k ∈ [0, ∞)Z the matrix k(λ) is J-nondecreasing when im(λ) > 0, or J-nonincreasing when im(λ) < 0. If, in addition, there exists N ∈ [0, ∞)Z such that every nontrivial solution z(λ) ∈ C([0, ∞)Z)2n of system (Sλ) satisfies N∑ k=0 z∗ k+1(λ)Ψk+1(λ)zk(λ) > 0, (5.15) then the J-monotonicity of k(λ) is strict for all k ∈ [N + 1, ∞)Z, i.e., k(λ) is J-increasing when im(λ) > 0, or J-decreasing when im(λ) < 0. – 69 – Chapter 5. Polynomial and analytic dependence on spectral parameter Proof. By applying (5.12) to the fundamental matrix (λ) we get i∗ k(λ)Jk(λ) − iJ = 2 im(λ) k−1∑ j=0 ∗ j+1(λ)Ψj(λ)j+1(λ). (5.16) Since Ψk(λ) ≥ 0 for all k ∈ [0, ∞)Z, the sum on the right-hand side of (5.16) is zero for k = 0 and nonnegative for k ∈ [1, ∞)Z, so that k(λ) is J-nondecreasing when im(λ) > 0, and it is J-nonincreasing when im(λ) < 0. Moreover, the additional assumption concerning inequality (5.15) guarantees that the sum in (5.16) is positive definite for all k ∈ [N+1, ∞)Z, so that k(λ) is J-increasing when im(λ) > 0, and it is J-decreasing when im(λ) < 0. 
■ 5.2 Special examples In this section we show the connection of the generalized Lagrange identity from Theorem 5.1.6 with several special cases known in the literature. We also demonstrate that a nonsingular weight matrix Ψk(λ) can be obtained when Sk(λ) is quadratic in λ, compare with the weight matrix k defined in (2.1). Example 5.2.1. The simplest example of system (Sλ) provides system (Sλ) with general linear dependence on the spectral parameter studied in Chapter 2. Indeed, if the matrix Sk(λ) is linear in λ, i.e., Sk(λ) = S[0] k + λS[1] k and S [j] k := 0 for j = 2, 3, . . . , then Lemma 5.1.1 implies that S[0]∗ k JS[1] k = J, S[0]∗ k JS[1] k + S[1]∗ k JS[0] k = 0, and S[1]∗ k JS[1] k = 0. In other words, the matrices Sk := S[0] k and Vk = S[1] k satisfy the first three conditions in (2.1) and system (Sλ) reduces to (Sλ) with 5k(λ) := Sk(λ). In this case ε = ∞, i.e., CS = C, and Λk(λ, ¯ν) = S[1] k JS[0]∗ k . Hence Ψk(λ) = JS[1] k JS[0]∗ k J, i.e., Ψk(λ) = k as defined in (2.1), which shows that Theorem 5.1.6 generalizes Theorem 2.1.7 and Lemma 2.1.5. Consequently, system (Sλ) includes all special cases of system (Sλ) mentioned in the introduction of Chapter 2, see (2.7)–(2.10). ▲ Example 5.2.2. Now, let the dependence on λ in system (Sλ) be quadratic, i.e., S [j] k ≡ 0 for j = 3, 4, . . . and zk+1(λ) = [ S[0] k + λS[1] k + λ2 S[2] k ] zk(λ), (5.17) where S[0] k satisfies the first identity in (5.3), the matrices S[0]∗ k JS[1] k and S[1]∗ k JS[2] k are Hermitian, S[2]∗ k JS[2] k = 0, and S[0]∗ k JS[2] k + S[1]∗ k JS[1] k + S[2]∗ k JS[0] k = 0. These conditions represent identity (5.3) with m = 0, . . . , 4, while for m = 5, 6, . . . the sum in identity (5.3) is trivial. In particular, we consider system (5.17) with the special quadratic dependence on λ given by S[0] k = ( Ak Bk Ck Dk ) , S[1] k = ( 0 Ak W[2] k −W[1] k Ak Ck W[2] k − W[1] k Bk ) , S[2] k = ( 0 0 0 −W[1] k Ak W[2] k ) , (5.18) where W[1] k and W[2] k are Hermitian n×n matrices, see also system (Qλ) in Section 5.4. Note that the coefficients in (5.18) corresponds to (2.7) when W[2] k ≡ 0. In this case the matrix Sk(λ) can be factorized as Sk(λ) = ( I 0 −λW[1] k I ) S[0] k ( I λW[2] k 0 I ) , (5.19) – 70 – 5.2. Special examples compare with (2.13). This shows that | det Sk(λ)| = det S[0] k (λ) = 1 on [0, ∞)Z for all λ ∈ C as claimed in Theorem 5.1.3. If we put Tk(λ) := ( 0 Ak −I Ck − λW[1] k Ak ) and Wk := diag { W[1] k , W[2] k } , (5.20) then by (5.8) we get Λ(λ, ¯ν) = S[1] k JS[0]∗ k + λS[2] k JS[0]∗ k − ¯νS[0] k JS[2]∗ k + λ¯νS[2] k JS[1]∗ k = −Tk(λ)Wk T ∗ k (ν). Therefore by (5.13) we have Ψk(λ) = −JTk(¯λ)Wk T ∗ k (¯λ)J and det Ψk(λ) = | det Ak |2 × det Wk, (5.21) i.e., the matrix Ψk(λ) is no longer constant in λ and it is invertible if (and only if) the matrices Ak, W[1] k , and W[2] k are invertible, compare with Ψk(λ) = k in the case of general linear dependence on λ. However, the invertibility of the weight matrix Ψk(λ) can occur only when system (5.17) with the coefficients specified in (5.18) corresponds to the linear Hamiltonian difference system from (2.6) with Ak := I − A−1 k , Bk := A−1 k Bk, Ck := Ck A−1 k , Ek ≡ 0, Fk = W[2] k , and Gk = −W[1] k , see Remark 1.2.1(iv) and the identities in (1.29). In addition, formula (5.4) yields that the multiplication of zk+1(λ) by T ∗ k (¯λ)J produces a backward shift in the second component. 
More precisely, if zk(λ) = (x∗ k (λ), u∗ k (λ))∗ solves system (5.17) with (5.18), then by using the partially shifted notation z[s] k (λ) := (x∗ k+1(λ), u∗ k(λ))∗ (5.22) we obtain T ∗ k (¯λ)Jzk+1(λ) = z[s] k (λ) and z∗ k+1(λ)JTk(¯λ) = − [ T ∗ k (¯λ)Jzk+1(λ) ]∗ = −z[s]∗ k (λ). (5.23) Hence identity (5.9) can be written as [ z∗ k(λ)Jzk(ν) ] = −(¯λ−ν)z∗ k+1(λ)JTk(¯λ)Wk T ∗ k (¯ν)Jzk+1(ν) = (¯λ−ν)z[s]∗ k (λ)Wk z[s] k (ν). (5.24) Since the weight matrix Wk in (5.24) is independent of λ, we can associate with the system in hand a semi-inner product and a semi-norm, which are independent of λ, see (5.41) below. We note that the latter observation is crucial for the invariance of the limit circle case derived in Section 5.4. ▲ In the following example we investigate the connection between the linear Hamiltonian difference system from (2.6) and system (Sλ). Example 5.2.3. According to Remark 1.2.1(iv), see identity (1.28), system (2.6) can be written as system (Sλ) with the coefficient matrix Sk(λ) = ( Ak(λ) Bk(λ) Ck(λ) Dk(λ) ) , where Ak(λ) := Ak(λ) = (I − Ak − λEk)−1 , Bk(λ) := Ak(λ)(Bk + λFk), Ck(λ) := (Ck + λGk)Ak(λ), Dk(λ) := I − A∗ k − λE∗ k + (Ck + λGk)Ak(λ)(Bk + λFk). } (5.25) We claim that the matrix Ak(λ) is polynomial in λ, and consequently the corresponding system (Sλ) is polynomial in λ. Let us fix k ∈ [0, ∞)Z. By the definition of the determinant, the function d(λ) := det(I −Ak −λEk) is a polynomial of degree at most n. The assumption on the existence of Ak(λ) for all λ ∈ C then implies that d(λ) 0 for all λ ∈ C. Thus, – 71 – Chapter 5. Polynomial and analytic dependence on spectral parameter d(λ) ≡ d 0 on C and the matrix I − Ak − λEk is unimodular. This yields that Ak(λ) is also a polynomial matrix in λ (of degree at most n − 1) and hence by (5.25), the matrix Sk(λ) is in this case polynomial of degree at most n + 1. Although we do not calculate the matrix Λ(¯λ, ν) explicitly, the corresponding Lagrange identity can be written as in (5.24) with Wk replaced by −JWk; cf. [36, Formula (2.55)]. Indeed, one easily observes that Sk(λ)JS∗ k(ν) = ( Ak(λ)B∗ k (ν) − Bk(λ)A∗ k (ν) Ak(λ)D∗ k (ν) − Bk(λ)C∗ k (ν) Ck(λ)B∗ k (ν) − Dk(λ)A∗ k (ν) Ck(λ)D∗ k (ν) − Dk(λ)C∗ k (ν) ) , and similarly as in (5.23) we get zk+1(λ) = −JT ∗−1 k (¯λ)z[s] k (λ), where Tk(λ) := ( 0 Ak(λ) −I Ck(λ) ) . Therefore, following the calculation in the proof of Theorem 5.1.6 we obtain [ z∗ k(λ)Jzk(ν) ] = z[s]∗ k (λ)T −1 k (¯λ)[J − Sk(¯λ)JS∗ k(¯ν)]T ∗−1 k (¯ν)z[s] k (ν) = −(¯λ − ν)z[s]∗ k (λ)JWk z[s] k (ν). (5.26) Especially, if Ek ≡ 0, then the matrix Ak(λ) ≡ Ak does not depend on λ and in this case system (2.6) can be written as system (5.17) with the special quadratic dependence on λ specified in (5.18) with Ak being invertible, see also [142, Formula (2.3) and Lemma 2.2]. ▲ In the last example of this section we consider system (Sλ) with the truly analytic (i.e., nonpolynomial) dependence on λ, which was studied in [48,49]. Example 5.2.4. Let S [j] k := (1/j!)E j k for j = 0, 1, . . . , where Ek ∈ C2n×2n is Hamiltonian for all k ∈ [0, ∞)Z, i.e., E∗ k J + JEk = 0. Then CS = C and the coefficient matrix Sk(λ) is of the exponential type, i.e., Sk(λ) = ∞∑ j=0 λj j! E j k = exp(λEk). (5.27) Then by (5.8), (1.9)–(1.10), and the Hamiltonian property of Ek we obtain Λ(λ, ¯ν) = ∞∑ j=1 (−1)j (λ − ¯ν)j−1 j! J(E∗ k)j , (5.28) compare with [49, pg. 6] or [48, Section 2]. The Lagrange identity has the same form as in (5.9) with the corresponding Λk(¯λ, ν). 
Especially, let n = 1 and consider the matrix Sk(λ) as in (5.27) with Ek ≡ E := ( i 1 −1 i ) . Then by (1.9) we have det Sk(λ) = eλ tr E = e2iλ = e−2 im(λ) e2i re(λ) for all k ∈ [0, ∞)Z and any λ ∈ C. Thus | det Sk(λ)| = e−2 im(λ) and it is equal to one if only if λ ∈ R, which agrees with the symplecticity of the matrix Sk(λ) on the real line; compare with Theorem 5.1.3 and see also Example 5.3.9. ▲ – 72 – 5.3. Weyl–Titchmarsh theory 5.3 Weyl–Titchmarsh theory In this section we focus on an eigenvalue problem and the Weyl–Titchmarsh theory for system (Sλ) with analytic or polynomial dependence on λ. We show that the main results of Chapter 2 remain valid also for the latter system, when we modify the corresponding Atkinson-type conditions to this more general setting. The solutions are weighted with respect to the Hermitian matrix Ψk(λ) defined in (5.13), that is, ||z||Ψ(λ) := √ ⟨z, z⟩Ψ(λ) and ⟨z, ˜z⟩Ψ(λ) := ∞∑ k=0 z∗ k+1 Ψk(λ) ˜zk+1. (5.29) The expression of the bilinear form in (5.29) justifies the restriction of λ ∈ CS to those values for which Ψk(λ) ≥ 0. Since this condition can be violated for some λ ∈ CS, we denote by CΨ and CΨ,N the subsets of CS such that CΨ,N := { λ ∈ CS | Ψk(λ) ≥ 0 for all k ∈ [0, N]Z } , CΨ := { λ ∈ CS | Ψk(λ) ≥ 0 for all k ∈ [0, ∞)Z } , where N ∈ [0, ∞)Z. An example with CΨ ⊊ CS can be found in [A20, Example 5.7] for a continuous analogue of system (Sλ). Our treatment is based on the Lagrange identity derived in Theorem 5.1.6 and a construction of the Weyl disks. Nevertheless, the proofs are basically the same as for the linear dependence on λ in Chapter 2 and hence they are omitted. For brevity, we do not keep the precise identification of the minimal assumptions as in Chapter 2 and slightly simplify some formulations. Throughout this section, let α ∈ be given, see (2.19), and Z(λ), Z(λ) ∈ C([0, ∞)Z)2n×n be the two components of the fundamental matrix k(λ) = ( Zk(λ), Zk(λ) ) of system (Sλ) satisfying 0(λ) = (α∗, −Jα∗), i.e., the solutions Z(λ) and Z(λ) are determined by the initial conditions Z0(λ) = α∗ and Z0(λ) = −Jα∗, compare with (2.22). The fundamental matrix (λ) then satisfies the identities ∗ k(¯λ)Jk(λ) = J and k(λ)J∗ k(¯λ) = J for all k ∈ [0, ∞)Z, see Theorem 5.1.6 and compare with Lemma 2.1.6. Now, let us fix N ∈ [0, ∞)Z and β ∈ . If we associate with system (Sλ) the following eigenvalue problem (Sλ), k ∈ [0, N]Z, λ ∈ CS, αz0(λ) = 0, βzN+1(λ) = 0, (5.30) then it follows as in Theorem 2.2.3 that the eigenvalues of problem (5.30) are characterized by det βZN+1(λ) = 0 and the corresponding eigenfunctions are of the form Z(λ)d with a nonzero d ∈ Ker βZN+1(λ). Moreover, we introduce the following hypothesis, compare with Hypothesis 2.2.2. Hypothesis 5.3.1 (Weak Atkinson condition – finite). For any λ ∈ CΨ,N KR every column z(λ) of the solution Z(λ) satisfies N∑ k=0 z∗ k+1(λ)Ψk(λ)zk+1(λ) > 0. Then, under Hypothesis 5.3.1, all eigenvalues of problem (5.30) restricted to the set CΨ,N are real and eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the semi-inner product ⟨·, ·⟩Ψ(λ),N defined similarly as in (5.29) with the sum over the finite discrete interval [0, N]Z. – 73 – Chapter 5. Polynomial and analytic dependence on spectral parameter The M(λ)-function for system (Sλ) is defined in the same way as in Definition 2.2.4, i.e., Mk(λ) := −[βZk(λ)]−1βZk(λ), and it satisfies the properties established in Lemma 2.2.5 and Theorem 2.2.7. In particular, M∗ k (λ) = Mk(¯λ) and Mk(λ) is analytic in λ. 
For the rest of this section we consider system (Sλ) on [0, ∞)Z and study its square summable solutions. For this purpose we will need the following condition. Hypothesis 5.3.2 (Weak Atkinson condition – infinite). For λ ∈ CSKR such that λ, ¯λ ∈ CΨ there exists N4 ∈ [0, ∞)Z such that for ν ∈ {λ, ¯λ} we have N4∑ k=0 z∗ k+1(ν)Ψk(ν)zk+1(ν) > 0 (5.31) for every column z(ν) of the solution Z(ν) of system (Sν). We note that the number N4 in Hypothesis 5.3.2 depends in general on the chosen λ (and ¯λ). This is a weaker condition than in Hypothesis 2.3.7, where it was considered for all λ ∈ C. The results of this section are phrased in terms of the following set CA := { λ ∈ CSKR | Hypothesis 5.3.2 holds at λ } , which is associated with the above Atkinson-type condition. Then, by definition, we have λ ∈ CA if and only if ¯λ ∈ CA, i.e., the set CA is symmetric with respect to the real axis. This observation is also very important for the development of the present theory. For M ∈ Cn×n we define the Weyl solution X(λ, M) ∈ C([0, ∞)Z)2n×n of system (Sλ) by Xk(λ, M) := k(λ)(I, M∗ )∗ , where (λ) is the fundamental matrix of system (Sλ) specified above, cf. (2.23). Moreover, we utilize the Hermitian matrix-valued function E : [0, ∞)Z × CA × Cn×n → Cn×n given by Ek(λ, M) := iδ(λ)X∗ k (λ, M)JXk(λ, M), compare with (2.32). This function is used for the definition of the Weyl disk Dk(λ) and the Weyl circle Ck(λ), i.e., Dk(λ) := { M ∈ Cn×n | Ek(λ, M) ≤ 0 } , Ck(λ) := { M ∈ Cn×n | Ek(λ, M) = 0 } . Since for k = 0 we have E0(λ, M) = −2δ(λ) im(M), it follows from (5.12) that for λ ∈ CA and k ∈ [1, ∞)Z the elements of Dk(λ) are characterized by the inequality k−1∑ j=0 X∗ j+1(λ, M)Ψj(λ)Xj+1(λ, M) ≤ im(M) im(λ) , (5.32) compare with Theorem 2.3.5. Similarly, the elements of Ck(λ) are characterized by the equality in (5.32). The following geometric description of the Weyl disk and the Weyl circle can be derived as in Section 2.3. If we set Gk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ), Hk(λ) := iδ(λ)Z ∗ k(λ)JZk(λ), (5.33) then Hk(λ) is Hermitian, H0(λ) = 0, and identity (5.12) yields Hk(λ) = 2| im(λ)| k−1∑ j=0 Z ∗ j+1(λ)Ψj(λ)Z j+1(λ) ≥ 0. (5.34) This shows that Hk(λ) is nondecreasing in k ∈ [0, ∞)Z. Moreover, the symmetry of the set CA with respect to the real axis and Hypothesis 5.3.2 guarantee that the matrices Hk(λ) and Hk(¯λ) are positive definite for k ∈ [N4 + 1, ∞)Z. We summarize the main properties of the Weyl disks in the following theorem. – 74 – 5.3. Weyl–Titchmarsh theory Theorem 5.3.3. Let α ∈ , λ ∈ CA, and suppose that Hypothesis 5.3.2 holds. Then for all k ∈ [N4 + 1, ∞)Z the Weyl disk and the Weyl circle admit the representations Dk(λ) = { Pk(λ) + Rk(λ)VRk(¯λ) | V ∈ V } , Ck(λ) = { Pk(λ) + Rk(λ)URk(¯λ) | U ∈ U } , (5.35) where the center Pk(λ) and the matrix radii Rk(λ), Rk(¯λ) are defined by Pk(λ) := −H−1 k (λ)Gk(λ), Rk(λ) := H−1/2 k (λ), Rk(¯λ) := H−1/2 k (¯λ), (5.36) and U, V are the sets defined in (2.39). Moreover, the Weyl disk Dk(λ) is closed, convex, and Dk(λ) ⊆ Dj(λ) for all k, j ∈ [N4 + 1, ∞)Z with k ≥ j. Proof. The proof follows the same arguments as in Theorem 2.3.8. In particular, for the representations given in (5.35) we utilize the identity Ek(λ, M) = [H−1 k (λ)Gk(λ) + M]∗ Hk(λ)[H−1 k (λ)Gk(λ) + M] − H−1 k (¯λ) (5.37) for k ∈ [N4 + 1, ∞)Z, which is obtained by completing Ek(λ, M) to a square, see formulas (2.38) and (2.43). Expression (5.37) uses the invertibility of Hk(λ) and Hk(¯λ) for k ∈ [N4 + 1, ∞)Z, which is guaranteed by the assumption λ ∈ CA. 
In fact, the motivation for the complicated form of Hypothesis 5.3.2 comes from the above symmetry argument with respect to λ and ¯λ. ■ The latter properties of the Weyl disks imply that the intersection of all Dk(λ) for k ∈ [N4 + 1, ∞)Z is nonempty, closed, and convex. Thus we define the limiting Weyl disk D+(λ) := lim k→∞ Dk(λ) = ∩ k∈[N4+1,∞)Z Dk(λ). From Theorem 5.3.3 and inequality (5.32) we obtain the following result. It extends Corollaries 2.3.11 and 2.3.12 to the case of the analytic dependence on λ. Theorem 5.3.4. Let α ∈ , λ ∈ CA, and suppose that Hypothesis 5.3.2 holds. Then D+(λ) = { P+(λ) + R+(λ)VR+(¯λ) | V ∈ V } , (5.38) where the limiting center P+(λ) and the limiting matrix radii R+(λ), R+(¯λ) are given by P+(λ) := lim k→∞ Pk(λ), R+(λ) := lim k→∞ Rk(λ) ≥ 0, R+(¯λ) := lim k→∞ Rk(¯λ) ≥ 0. (5.39) In addition, a matrix M ∈ Cn×n belongs to the limiting Weyl disk D+(λ) if and only if ∞∑ k=0 X∗ k+1(λ, M)Ψk(λ)Xk+1(λ, M) ≤ im(M) im(λ) . (5.40) We note that by (5.34) and (5.36) the limit of Rk(λ) as k → ∞ exist and is positive semidefinite, while the proof of the existence of the limit of the matrices Pk(λ) is based on the fixed point argument as in Theorem 2.3.9. The statement of Theorem 5.3.4 follows from Theorem 5.3.3 and formula (5.32). Let λ ∈ CΨ be fixed. We now turn our attention to the number of linearly independent square summable solutions of system (Sλ). By ℓ2 Ψ(λ) we denote the space of all sequence on [0, ∞)Z, which are square summable with respect to the weight Ψ(λ), i.e., ℓ2 Ψ(λ) := { z ∈ C([0, ∞)Z)2n | ||z||Ψ(λ) < ∞ } , – 75 – Chapter 5. Polynomial and analytic dependence on spectral parameter where the semi-norm ||·||Ψ(λ) is defined in (5.29) and Ψk(λ) ≥ 0 on [0, ∞)Z. The space ℓ2 Ψ(λ) generally depends on the value of λ, but in some special cases it may be independent of λ, see Example 5.2.1 with Ψk(λ) ≡ k. Furthermore, in view of (5.24) or (5.26) in Examples 5.2.2 and 5.2.3, i.e., we may consider for system (Sλ) with the coefficients specified in (5.18) or (5.25) the space ℓ2 W := { z ∈ C([0, ∞)Z)2n ∞∑ k=0 z[s]∗ k Wk z[s] k < ∞ } , (5.41) which does not depend on λ, see also Section 5.4. We are interested in the subspace N(λ) ⊆ ℓ2 Ψ(λ) consisting of all square summable solutions of system (Sλ), i.e., N(λ) := { z ∈ ℓ2 Ψ(λ) | z solves system (Sλ) } . (5.42) If λ ∈ CA, then by inequality (5.40)the columns of the Weylsolution X(λ, M)corresponding to the matrices M ∈ D+(λ) are linearly independent and all belong to N(λ). This means that n ≤ dim N(λ) ≤ 2n for all λ ∈ CA, which justifies the classification of system (Sλ) as being in the limit point case if dim N(λ) = n, and as being in the limit circle case if dim N(λ) = 2n. The remaining cases with n + 1 ≤ dim N(λ) ≤ 2n − 1 are called intermediate. Moreover, one can verify that the results of Theorem 2.4.1–Corollary 2.4.10 in Section 2.4 hold with exactly the same proofs also in the case of the analytic dependence on λ. Especially, the following extension of Theorem 2.4.8 to the analytic dependence on λ is true. Theorem 5.3.5. Let α ∈ , λ ∈ CA, and suppose that Hypothesis 5.3.2 holds. Then system (Sλ) has exactly n + rank R+(λ) linearly independent square summable solutions, i.e., dim N(λ) = n + rank R+(λ), where R+(λ) is the matrix radius of the limiting Weyl disk D+(λ) defined in (5.39). By combining Theorem 5.3.5 and identity (5.38) we obtain the following limit point and limit circle classification of system (Sλ) in terms of the rank of R+(λ), compare with Theorem 2.4.3 and Corollary 2.4.10. Corollary 5.3.6. 
Let α ∈ , λ ∈ CA, and Hypothesis 5.3.2 hold. Then system (Sλ) is (i) in the limit point case if and only if R+(λ) = 0, in which case D+(λ) = {P+(λ)} and D+(¯λ) = {P+(¯λ)}, (ii) in the limit circle case if and only if R+(λ) is invertible. Remark 5.3.7. We note that the results of Chapter 3 regarding the Weyl–Titchmarsh theory for discrete symplectic systems with jointly varying endpoints hold in the same way for system with the analytic dependence on λ under the appropriate strong Atkinson-type conditions including all nontrivial solutions, see Hypotheses 3.1.1 and Hypothesis 2.4.11. Finally, we illustrate the results of the Weyl–Titchmarsh theory for system (Sλ) by two interesting examples with the exponential dependence on λ discussed in Example 5.2.4. Example 5.3.8. In this example we show that the discrete symplectic system zk+1(λ) = exp(λJ)zk(λ). (5.43) is in the limit point case for every λ ∈ CKR and we calculate the unique 2n × n solution (up to an invertible multiple) of system (5.43) whose columns lie in ℓ2 Ψ(λ) and form a basis – 76 – 5.3. Weyl–Titchmarsh theory of N(λ), i.e., the Weyl solution X(λ, P+(λ)). System (5.43) corresponds to the system from Example 5.2.4 with Ek := E ≡ J, which satisfies the condition E∗ k J + JEk = 0. Moreover, we have Sk(λ) = exp(λJ) = (cos λ)I + (sin λ)J and CS = C, see also [16, Example 11.3.4]. For simplicity we perform the calculations below in the scalar case, i.e., for n = 1. The general case follows with the same arguments upon multiplication by the n × n or 2n × 2n identity matrices at appropriate places. If we choose α = (1, 0), then the fundamental matrix k(λ) of system (5.43) with 0(λ) = I is given by k(λ) = exp(kλJ) = (cos kλ)I + (sin kλ)J = ( cos kλ sin kλ − sin kλ cos kλ ) , k ∈ [0, ∞)Z, and so Zk(λ) = (sin kλ, cos kλ)⊤. Since the powers of J repeat in a cycle of length four, we obtain for any k ∈ [0, ∞)Z by (5.28) that Λk(¯λ, λ) = −I for all λ ∈ R, while for λ ∈ CKR we calculate (with p := im(λ) 0) Λk(¯λ, λ) = ∞∑ j=1 (−2ip)j−1 j! Jj+1 = 1 2ip ∞∑ j=1 (−1)j+1 (2ip)2j (2j)! J + 1 2ip ∞∑ j=0 (−1)j+1 (2ip)2j+1 (2j + 1)! I = cosh 2p − 1 2p iJ − sinh 2p 2p I = sinh p p [ (sinh p)iJ − (cosh p)I ] , where we used the well-known identities sinh 2x = 2 sinh x cosh x, cosh 2x = 2 sinh2 x + 1, i sinh p = sin(ix), and cosh p = cos(ip). Hence by (5.13) we have Ψk(λ) ≡ Ψ(λ) =    sinh p p   cosh p −i sinh p i sinh p cosh p   > 0 for all λ ∈ CKR, I for all λ ∈ R, (5.44) see also (5.14). Thus CΨ = C. By the definitions of Hk(λ) and Gk(λ) in (5.33) we get Hk(λ) = iδ(λ)(sin k¯λ cos kλ − cos k¯λ sin kλ) = δ(λ) sinh(2k im(λ)), Gk(λ) = −iδ(λ)(sin k¯λ sin kλ + cos k¯λ cos kλ) = −iδ(λ) cosh(2k im(λ)). Note that the same value for Hk(λ) is of course obtained from formula (5.34) after some calculations. This shows that Hypothesis 5.3.2 is satisfied for any λ ∈ CKR and any N4 ∈ [0, ∞)Z, i.e., CA = CKR. The relations in (5.36) yield Pk(λ) = i coth(2k im(λ)) and Rk(λ) = 1/ √ sinh(2k| im(λ)|) for all k ∈ [1, ∞)Z. The center and radius of the limiting disk D+(λ) are then P+(λ) = lim k→∞ Pk(λ) = iδ(λ) and R+(λ) = lim k→∞ Rk(λ) = 0, which shows that system (5.43) is in the limit point case for every λ ∈ CKR by Corollary 5.3.6(i). Moreover, the space N(λ) of square summable solutions of system (5.43) with λ ∈ CKR is generated by the Weyl solution Xk(λ, P+(λ)) = k(λ) ( 1 P+(λ) ) = ( cos kλ + iδ(λ) sin kλ − sin kλ + iδ(λ) cos kλ ) = ( 1 iδ(λ) ) eiδ(λ)kλ , – 77 – Chapter 5. 
Polynomial and analytic dependence on spectral parameter for which (we again substitute p := im(λ)) we have X(λ, P+(λ)) 2 Ψ(λ) = ∞∑ k=0 X∗ k+1(λ, P+(λ))Ψk(λ)Xk+1(λ, P+(λ)) = 2 sinh p p × [cosh p + δ(λ) sinh p] × ∞∑ k=0 e−2|p|(k+1) = 2 sinh p p × [cosh p + δ(λ) sinh p] × e−2|p| 1 − e−2|p| = 1 |p| . This shows that ||X(λ, P+(λ))||Ψ(λ) = 1/ √ | im(λ)| < ∞, and so indeed X(λ, P+(λ)) ∈ ℓ2 Ψ(λ) for every λ ∈ CKR. On the other hand, we also have Z(λ) 2 Ψ(λ) = ∞∑ k=0 Z ∗ k+1(λ)Ψk(λ)Zk+1(λ) (5.34) = 1 2| im(λ)| lim k→∞ Hk(λ) = 1 2| im(λ)| lim k→∞ sinh(2k| im(λ)|) = ∞, i.e., Z(λ) ℓ2 Ψ(λ) . Thus, again we get that dim N(λ) = 1 for any λ ∈ CKR, see also the proof of Theorem 2.4.3. Similarly, in arbitrary dimension n we get that the n columns of the Weyl solution X(λ, P+(λ)) are linearly independent and they belong to ℓ2 Ψ(λ) , while the n columns of Z(λ) are linearly independent and they do not belong to ℓ2 Ψ(λ) . Hence, dim N(λ) = n and system (5.43) is in the limit point case for all λ ∈ CKR. ▲ Now we give a counterexample for the invariance of the limit circle case when the dependence on λ is analytic and nonpolynomial. Example 5.3.9. Let us consider system (Sλ) with the coefficient matrix Sk(λ) from Example 5.2.4 with Ek := E ≡ iI + J, i.e., the system zk+1(λ) = exp(λE)zk(λ), exp(λE) = eiλ [(cos λ)I + (sin λ)J], (5.45) see again [16, Example 11.3.4]. Then CS = C and the fundamental matrix k(λ) of system (5.45) corresponding to the choice α = (1, 0) is given by k(λ) = exp(kλE) = eikλ [(cos kλ)I + (sin kλ)J], k ∈ [0, ∞)Z. Since (E∗)j = −(−2i)j−1 E for j ≥ 1, it follows by (5.28) that for any k ∈ [0, ∞)Z we have Λk(¯λ, λ) = iE if λ ∈ R and Λk(¯λ, λ) = i 4p (e4p − 1)E if λ ∈ CKR with p := im(λ). Hence identity (5.13) yields Ψk(λ) ≡ Ψ(λ) =    − i(e4 im(λ) − 1) 4 im(λ) E for all λ ∈ CKR, −iE for all λ ∈ R. (5.46) Moreover, iE ≤ 0, which yields through (5.46) that Ψ(λ) ≥ 0 for all λ ∈ C, i.e., CΨ = C. Note that in this case Ψ(λ) is singular for all λ ∈ C, compare with (5.44). The left-hand side of (5.31) with zk(λ) = Zk(λ) = eikλ(sin kλ, cos kλ)⊤ has the form N4∑ k=0 z∗ k+1(λ)Ψ(λ)zk+1(λ) =    1 − e−4(N4+1) im(λ) 4 im(λ) , λ ∈ CKR, N4 + 1, λ ∈ R. – 78 – 5.4. Special quadratic dependence and limit circle case Therefore the inequality in (5.31) is satisfied for all λ ∈ C and any N4 ∈ [0, ∞)Z, which implies CA = CKR. By (5.33), (5.34), and (5.36) we get Hk(λ) = δ(λ) ( 1 − e−4k im(λ) ) /2, Gk(λ) = −iδ(λ)e−2k im(λ) cosh(2k im(λ)), Pk(λ) = i coth(2k im(λ)), R2 k(λ) = 2 δ(λ) ( 1 − e−4k im(λ) ). Thus by (5.39) we have P+(λ) = iδ(λ) for every λ ∈ CA and R+(λ) = √ 2 for λ ∈ C+, while R+(λ) = 0 for λ ∈ C−. This shows by Corollary 5.3.6 that system (5.45) is in the limit circle case for λ ∈ C+ and in the limit point case for λ ∈ C−. The space N(λ) is then generated by the columns of the fundamental matrix (λ) when λ ∈ C+, and by the Weyl solution Xk(λ, P+(λ)) ≡ (1, −i)⊤ with ||X(λ, −i)||Ψ(λ) = 0 when λ ∈ C−. For completeness we note that system (5.45) is in the limit point case also for all λ ∈ R with X(λ, −i) being the unique square summable solution (up to a constant multiple). ▲ 5.4 Special quadratic dependence and limit circle case In Example 5.3.9 it was shown that the dimension of N(λ) may vary with respect to λ, even when Hypothesis 5.3.2 is satisfied. In particular, system (Sλ) can be in the limit circle case for some value λ and in the limit point case for another one. 
This situation is not possible when the dependence on λ is only linear as we derived in Chapter 4 and stated in Theorem 2.4.17. In this section we prove a similar invariance of the limit circle case for system (Sλ) with the special quadratic dependence on λ from Example 5.2.2. In this case we can choose the associated space of square summable solutions to be independent of λ, which is a key ingredient for this result. Note that this property was trivially satisfied also in the previous chapters. 5.4.1 Results for one system Let us consider system (5.17) with the coefficients specified in (5.18) or equivalently (suppressing the argument λ) xk+1 = Ak xk + ( Bk + λAW[2] k ) uk, uk+1 = Ck xk + ( Dk + λCk W[2] k ) uk − λW[1] k xk+1, (Qλ) where A, B, C, D, W[1] , W[2] ∈ C([0, ∞)Z)n×n are such that the matrix S[0] k in (5.18) satisfies the first equality in (5.3), W[1] k and W[2] k are Hermitian, and Wk := diag { W[1] k , W[2] k } ≥ 0 for all k ∈ [0, ∞)Z. (5.47) Recalling the notation from (5.22), we associate with system (Qλ) the space of all square summable sequences with respect to the weight matrix Wk defined in (5.41), i.e., ℓ2 W . Since from (5.21) and (5.24) one infers that z∗ k+1 (λ)Ψk(λ)zk+1(λ) = z[s]∗ k (λ)Wk z[s] k (λ) for all solutions of system (Qλ), the space of all solutions being in ℓ2 W is the same as the corresponding space N(λ) defined in (5.42). This means that we have N(λ) = { z ∈ ℓ2 W | z solves system (Qλ) } and system (Qλ) is in the limit point case when dim N(λ) = n and in the limit circle case when dim N(λ) = 2n. – 79 – Chapter 5. Polynomial and analytic dependence on spectral parameter The following result concerning the invariance of the limit circle case for system (Qλ) generalizes [142, Theorem 5.5] for system (2.6) with Ek ≡ 0, which corresponds to system (Qλ) with Ak being invertible on [0, ∞)Z. The proof is given in Subsection 5.4.2 below, where we establish a more general statement (Theorem 5.4.5) for two systems of the form (Qλ) as in Chapter 4. Theorem 5.4.1. Let (5.47) hold. If there exists λ0 ∈ C such that system (Qλ0 ) is in the limit circle case, then system (Qλ) is in the limit circle case for every λ ∈ C. From Theorem 5.4.1 we obtain the following simple criterion for the limit circle case in terms of the norms of the coefficients S[0] k and Wk; cf. Corollary 2.4.19. It extends the statement in [134, Theorem 6.3] for system (2.6) with Ek ≡ 0. Corollary 5.4.2. Let (5.47) hold and assume that ∞∑ k=0 ||S[0] k − I||1 < ∞ and ∞∑ k=0 ||Wk ||1 < ∞. (5.48) Then system (Qλ) is in the limit circle case for all λ ∈ C. Proof. The conditions in (5.48) imply that system (Q0) is in the limit circle case. Therefore the result follows from Theorem 5.4.1. Alternatively, this statement can be derived as a special case of Corollary 5.4.6 below regarding two systems. ■ In the scalar case we get from Theorems 5.4.1 and 5.3.5 the following limit point criterion for system (Qλ). Theorem 5.4.3. Let n = 1 and assume that condition (5.47) and Hypothesis 5.3.2 hold. If there exists λ0 ∈ C such that system (Qλ0 ) is in the limit point case, then system (Qλ) is in the limit point case for every λ ∈ CA. Proof. Assume that for some λ1 ∈ CA system (Qλ1 ) is not in the limit point case. Then by n = 1 and Theorem 5.3.5 we know that (Qλ1 ) is in the limit circle case. Consequently, by Theorem 5.4.1, system (Qλ) is in the limit circle case for all λ ∈ C, which contradicts the original assumption that system (Qλ0 ) is in the limit point case. 
■ By combining Theorems 5.4.1 and 5.4.3 we obtain the following extension of the Weyl alternative, i.e., the dichotomy between the limit point and limit circle classifications of system (Qλ) for all suitable λ; cf. Corollary 2.4.23. Corollary 5.4.4 (Weyl alternative). Let n = 1 and assume that condition (5.47) and Hypothesis 5.3.2 hold. Then system (Qλ) is either in the limit circle case for all λ ∈ C, or in the limit point case for all λ ∈ CA. 5.4.2 Results for two systems Motivated by the results in Chapter 4, we consider instead of system (Qλ) two systems of the same form (suppressing the argument λ) ˆxk+1 = Ak ˆxk + ( Bk + λAW[2] k ) ˆuk, ˆuk+1 = Ck ˆxk + ( Dk + λCk W[2] k ) ˆuk − λW[1] k ˆxk+1, (Qλ) ˜xk+1 = Ak ˜xk + ( Bk + λAW[2] k ) ˜uk, ˜uk+1 = Ck ˜xk + ( Dk + λCk W[2] k ) ˜uk − λW[1] k ˜xk+1, (Qλ) where A, B, C, Dk, A, B, C, Dk, W[1] , W[2] ∈ C([0, ∞)Z)n×n with W[1] k and W[2] k being Hermitian and satisfying (5.47). Note that the weight matrices W[1] k and W[2] k are the same in both – 80 – 5.4. Special quadratic dependence and limit circle case systems (Qλ) and (Qλ). This is justified by the requirement of having the same space of square summable functions associated with these systems. The coefficient matrices S [0] , S [1] , S [2] ∈ C([0, ∞)Z)2n×2n and S [0] , S [1] , S [2] C([0, ∞)Z)2n×2n of systems (Qλ) and (Qλ), respectively, are defined analogously to (5.18). We assume that systems (Qλ) and (Qλ) are in general non-symplectic, i.e., we do not impose that the matrices S [0] k and S [0] k are such that the first equality in (5.3) holds. Instead we assume on [0, ∞)Z the combined identity S [0]∗ k JS [0] k = J. (5.49) By using the block structure of S [0] k and S [0] k we obtain that equality (5.49) is equivalent to A ∗ k Ck = C ∗ k Ak, D ∗ k Bk = B ∗ k Dk, A ∗ k Dk − C ∗ k Bk = I and D ∗ k Ak − B ∗ kCk = I. (5.50) Consequently, the coefficient matrices also satisfy the following identities S [0]∗ k JS [1] k + S [1]∗ k JS [0] k = 0, (5.51) S [0]∗ k JS [2] k + S [1]∗ k JS [1] k + S [2]∗ k JS [0] k = 0, (5.52) S [1]∗ k JS [2] k + S [2]∗ k JS [1] k = 0, S [2]∗ k JS [2] k = 0. (5.53) In addition, identity (5.49) is equivalent to S [0] k JS [0]∗ k = J, (5.54) which can be written in terms of the n × n blocks as Ak B ∗ k = Bk A ∗ k, Dk C ∗ k = Ck D ∗ k, Ak D ∗ k − Bk C ∗ k = I and Dk A ∗ k − CkB ∗ k = I. (5.55) Since identity (5.49) is equivalent also with S [0]∗ k JS [0] k = J and S [0] k JS [0]∗ k = J, the block matrices satisfy the relations A ∗ k Ck = C ∗ k Ak, D ∗ k Bk = B ∗ k Dk, A ∗ k Dk − C ∗ k Bk = I, D ∗ k Ak − B ∗ kCk = I, (5.56) Ak B ∗ k = Bk A ∗ k, Dk C ∗ k = Ck D ∗ k, Ak D ∗ k − Bk C ∗ k = I, Dk A ∗ k − CkB ∗ k = I. (5.57) Note that (5.49) or (5.54) trivially implies that det S [0]∗ k × det S [0] k = 1. Let Sk(λ) := S [0] k +λS [1] k +λ2 S [2] k and Sk(λ) := S [0] k +λS [1] k +λ2 S [2] k be the coefficient matrices of systems (Qλ) and (Qλ), respectively. Then S ∗ k(λ)JSk(¯λ) = J and Sk(λ)JS ∗ k(¯λ) = J with S −1 k (λ) = −JS ∗ k(¯λ)J, (5.58) which means that Sk(λ) and Sk(λ) are invertible for all k ∈ [0, ∞)Z and any λ ∈ C. Therefore any initial value problems associated with systems (Qλ) and (Qλ) posses unique solutions on [0, ∞)Z for any initial value given at any point in [0, ∞)Z. Moreover, the fundamental matrices of systems (Qλ) and (Qλ) are in this case invertible on the discrete interval [0, ∞)Z. Since by (5.19) we have det Sk(λ) = det S [0] k and det Sk(λ) = det S [0] k , (5.59) it follows that det S ∗ k(λ) × det Sk(λ) = 1 for all k ∈ [0, ∞)Z and any λ ∈ C. 
(5.60) – 81 – Chapter 5. Polynomial and analytic dependence on spectral parameter Similarly as in Example 5.2.2, see (5.20), we define the 2n × 2n matrices Tk(λ) := ( 0 Ak −I Ck − λW[1] k Ak ) and Tk(λ) := ( 0 Ak −I Ck − λW[1] k Ak ) . (5.61) If ˆz(λ), ˜z(λ) ∈ C([0, ∞)Z)2n solve systems (Qλ) and (Qλ), respectively, then by using the inverse formula in (5.58) we get that the multiplication of ˆzk+1(λ) by T ∗ k (¯λ)J yields the backward shift in the second component of ˆzk+1(λ) and the multiplication of ˜zk+1(λ) by T ∗ k (¯λ)J yields the backward shift in the second component of ˜zk+1(λ), i.e., ˆz[s] k (λ) := (ˆx∗ k+1(λ), ˆu∗ k(λ))∗ = T ∗ k (¯λ)J ˆzk+1(λ), ˜z[s] k (λ) := (˜x∗ k+1(λ), ˜u∗ k(λ))∗ = T ∗ k (¯λ)J ˜zk+1(λ),    (5.62) compare with (5.23). The same notation will be also used for matrix-valued solutions, in particular for the fundamental matrices of systems (Qλ) and (Qλ). By similar calculations as in (5.24) we obtain for any solution Z(λ) ∈ C([0, ∞)Z)2n×m of system (Qλ) and any solution Z(λ) ∈ C([0, ∞)Z)2n×m of system (Qλ) the Lagrange-type identity [ Z ∗ k(λ)JZk(ν) ] = (¯λ − ν)Z [s]∗ k (λ)Wk Z [s] k (ν), (5.63) where we employed the identities in (5.57) and the notation from (5.62). In addition, by the summation of both sides of (5.63) we get Z ∗ k+1(λ)JZk+1(ν) = Z ∗ 0(λ)JZ0(ν) + (¯λ − ν) k∑ j=0 Z [s]∗ j (λ)Wj Z [s] j (ν), (5.64) compare with (5.10). In the following result we use the space ℓ2 W defined in (5.41). Theorem 5.4.5. Let (5.47) hold. If there exists λ0 ∈ C such that all solutions of systems (Qλ0 ) and (Qλ0 ) belong to ℓ2 W , then all solutions of systems (Qλ) and (Qλ) belong to ℓ2 W for any λ ∈ C. Proof. Let the assumptions be satisfied for λ0 ∈ C and λ ∈ CK{λ0} be fixed. For ν ∈ {λ, λ0} we denote by (ν) ∈ C([0, ∞)Z)2n×2n and (ν) ∈ C([0, ∞)Z)2n×2n the fundamental matrices of systems (Qν) and (Qν), respectively, such that 0(ν) = I = 0(ν). First we prove that all solutions of system (Qλ) belong to ℓ2 W . Since k(λ) and k(λ0) are invertible on [0, ∞)Z, there exists Ω ∈ C([0, ∞)Z)2n×2n such that k(λ) = k(λ0)Ωk for all k ∈ [0, ∞)Z, (5.65) Then by a direct calculation we obtain that Ωk satisfies the recurrence relation Ωk+1 = [ I + (λ − λ0)ϒk ] Ωk, where ϒk :=  −1 k+1(λ0) [ S [1] k + (λ + λ0)S [2] k ] k(λ0). (5.66) Since  −1 k+1(λ0) =  −1 k (λ0)S −1 k (λ0), we have det [ I + (λ − λ0)ϒk ] = det [  −1 k (λ0)S −1 k (λ0)Sk(λ)k(λ0) ] (5.59) = det ( S [0] k )−1 × det S [0] k = 1, i.e., the matrix I + (λ − λ0)ϒk is invertible on [0, ∞)Z. By using (5.50) and the n × n block structure of the matrices Sk(λ0), S [1] k , S [2] k , Tk(λ0), Tk(λ0), Wk, we get the identity [ S [1] k + (λ0 + ¯λ0)S [2] k ] JS ∗ k(¯λ0) = −Tk(¯λ0)Wk T ∗ k (¯λ0). (5.67) – 82 – 5.4. Special quadratic dependence and limit circle case Moreover, a simple calculation yields S [1] k + (λ + λ0)S [2] k = Tk [ S [1] k + (λ0 + ¯λ0)S [2] k ] , where Tk := ( I 0 (¯λ0 − λ)W[1] k I ) . (5.68) Hence by using (5.67) and (5.68) in the definition of ϒk in (5.66) and then applying (5.62) we can equivalently expressed ϒk as ϒk = Q−1 k  ∗ k+1(λ0)J∗ Tk(¯λ0)Wk T ∗ k (¯λ0)Jk+1(λ0) (5.62) = Q−1 k  [s]∗ k (λ0)Wk  [s] k (λ0), (5.69) where we put Qk := − ∗ k+1(λ0)JT−1 k k+1(λ0). (5.70) Since det Tk ≡ 1, we get for any k ∈ [0, ∞)Z the equality det Qk = det  ∗ k+1(λ0) × det k+1(λ0) = det  ∗ k(λ0) × det k(λ0) × det S ∗ k(λ0) × det Sk(λ0) (5.60) = det  ∗ k(λ0) × det k(λ0) = · · · = det  ∗ 0(λ0) × det 0(λ0) = 1. Now we show that there exists κ > 0 such that ||Q−1 k ||σ ≤ κ < ∞ on [0, ∞)Z. 
(5.71) Since Wk ≥ 0, the Cauchy–Schwarz and arithmetic-geometric mean inequalities yield |ξ∗ Wk ζ| ≤ (ξ∗ Wk ξ)1/2 (ζ∗ Wk ζ)1/2 ≤ 1 2 (ξ∗ Wk ξ + ζ∗ Wk ζ) for any ξ, ζ ∈ C2n and k ∈ [0, ∞)Z. Hence, for any sequences ˆz, ˜z ∈ ℓ2 W we have ∞∑ k=0 ˜z[s]∗ k Wk ˆz[s] k ≤ ∞∑ k=0 ˜z[s]∗ k Wk ˆz[s] k ≤ 1 2 ∞∑ k=0 (˜z[s]∗ k Wk ˜z[s] k + ˆz[s]∗ Wk ˆz[s] k ) < ∞. The last inequality with ˆz and ˜z being the columns of (λ0) and (λ0), respectively, and the assumption that all solutions of systems (Qλ0 ) and (Qλ0 ) belong to ℓ2 W imply that there exists ε > 0 such that ∞∑ k=0  [s]∗ k (λ0)Wk  [s] k (λ0) σ (1.7) ≤ ∞∑ k=0  [s]∗ k (λ0)Wk  [s] k (λ0) 1 ≤ ε < ∞. (5.72) Since JT−1 k = J − (¯λ0 − λ) diag { W[1] k , 0 } , the sequence Qk in (5.70) can be written as Qk =  ∗ k+1(λ0) [ (¯λ0 − λ) diag { W[1] k , 0 } − J ] k+1(λ0) (5.64) = −J + 2i im(λ0) k∑ j=0  [s]∗ j (λ0)Wj  [s] j (λ0) + (¯λ0 − λ) [s]∗ k (λ0) diag { W[1] k , 0 }  [s] k (λ0). (5.73) From the unitary invariance of the spectral norm, assumption (5.47), and the estimate in (5.72) we conclude that there exists τ > 0 such that  [s]∗ k (λ0) diag { W[1] k , 0 }  [s] k (λ0) σ ≤  [s]∗ k (λ0)Wk  [s] k (λ0) σ ≤ τ < ∞ (5.74) – 83 – Chapter 5. Polynomial and analytic dependence on spectral parameter for all k ∈ [0, ∞)Z. Upon taking the matrix norm in (5.73) we obtain for any k ∈ [0, ∞)Z that ||Qk ||σ ≤ ||J||σ + 2| im(λ0)| k∑ j=0  [s]∗ j (λ0)Wj  [s] j (λ0) σ + |¯λ0 − λ|  [s]∗ k (λ0) diag { W[1] k , 0 }  [s] k (λ0) σ (5.72),(5.74) ≤ ω < ∞, where ω := ||J||σ + 2| im(λ0)|ε + |¯λ0 − λ|τ. Therefore the matrix Qk is bounded on [0, ∞)Z with det Qk ≡ 1. This implies that Q−1 k is also bounded on [0, ∞)Z, i.e., inequality (5.71) holds. By combining the submultiplicative property of the spectral norm and (5.69), (5.71), (5.72) we then obtain ∞∑ k=0 ||ϒk ||σ ≤ ∞∑ k=0 ||Q−1 k ||σ ×  [s]∗ k (λ0)Wk  [s] k (λ0) σ ≤ κε < ∞. Hence the same calculation as in (4.21), see also Proposition 1.1.4, implies that the fundamental matrix Ωk of system (5.66) satisfies ||Ωk ||σ ≤ ρ for all k ∈ [0, ∞)Z and some ρ > 0. (5.75) With Kk(λ) :=  [s]∗ k (λ)Wk  [s] k (λ) we obtain from (5.65) and (5.47) that Kk(λ) = Ω ∗ k+1  [s]∗ k (λ0) diag { W[1] k , 0 }  [s] k (λ0)Ωk+1 + Ω ∗ k  [s]∗ k (λ0) diag { 0, W[2] k }  [s] k (λ0)Ωk ≤ Ω ∗ k+1 Kk(λ0)Ωk+1 + Ω ∗ k Kk(λ0)Ωk. (5.76) This implies through (5.75) and the submultiplicativity, self-adjointness, and unitary invariance of the spectral norm that ∞∑ k=0 ||Kk(λ)||σ (5.76) ≤ ∞∑ k=0 ( Ω ∗ k+1 Kk(λ0)Ωk+1 σ + Ω ∗ k Kk(λ0)Ωk σ ) ≤ ∞∑ k=0 ( Ωk+1 2 σ + Ωk 2 σ ) × ||Kk(λ0)||σ (5.75) ≤ 2ρ2 ∞∑ k=0 ||Kk(λ0)||σ < ∞, because all solutions of system (Qλ0 ) belong to ℓ2 W . This shows that all solutions of system (Qλ) belong to ℓ2 W as well. Analogously, by switching the roles of systems (Qλ) and (Qλ), we prove that all solutions of system (Qλ) belong to ℓ2 W . Since λ ∈ CK{λ0} was chosen arbitrarily, the proof is complete. ■ Proof of Theorem 5.4.1. The statement of Theorem 5.4.1 follows immediately from Theorem 5.4.5, when it is applied in the case S [0] k ≡ S [0] k := S[0] k , i.e., when all systems (Qλ), (Qλ), and (Qλ) are equal. ■ Now we give sufficient conditions in terms of the coefficients, which guarantee that all solutions of systems (Qλ) and (Qλ) belong to the space ℓ2 W . Similarly as in Corollary 4.2.3, the matrix norm ||·||1 utilized in (5.77) below can be replaced by any other matrix norm because of their equivalence. – 84 – 5.4. Special quadratic dependence and limit circle case Corollary 5.4.6. 
Let (5.47) hold and assume that ∞∑ k=0 S [0] k − I 1 < ∞, ∞∑ k=0 S [0] k − I 1 < ∞, and ∞∑ k=0 ||Wk ||1 < ∞. (5.77) Then all solutions of systems (Qλ) and (Qλ) belong to ℓ2 W for any λ ∈ C. Proof. By Theorem 5.4.5 it suffices to show that there exists λ0 ∈ C such that ∞∑ k=0  [s]∗ k (λ0)Wk  [s] k (λ0) 1 < ∞ and ∞∑ k=0  [s]∗ k (λ0)Wk  [s] k (λ0) 1 < ∞. (5.78) We show that these inequalities are satisfied for λ0 = 0. Since Sk(0) = S [0] k , we obtain from the first condition in (5.77) and Proposition 1.1.4 that there exist ε > 0 such that k(0) 1 ≤ ε < ∞ for all k ∈ [0, ∞)Z, where (0) ∈ C([0, ∞)Z)2n×2n represents a fundamental matrix of system (Q0). Moreover, the second condition in (5.77) implies that there exists ρ > 0 such that Ak − I 1 + Ck 1 ≤ ρ < ∞ for all k ∈ [0, ∞)Z. Hence from (5.61) we get T ∗ k (0)J 1 = ( 0 0 −C ∗ k A ∗ k − I ) + I2n 1 ≤ ( 0 0 −C ∗ k A ∗ k − I ) 1 + ||I2n ||1 ≤ Ak − I 1 + Ck 1 + 2n ≤ κ < ∞ for all k ∈ [0, ∞)Z, where we used the self-adjointness of the norm ||·||1 and put κ := 2n + ρ. Since we have  [s] k (0) = T ∗ k (0)Jk+1(0) by (5.62), the last condition in (5.77) yields the estimate ∞∑ k=0  [s]∗ k (0)Wk  [s] k (0) 1 ≤ ∞∑ k=0  [s] k (0) 2 1 × ||Wk ||1 ≤ ∞∑ k=0 T ∗ k (0)J 2 1 × k+1(0) 2 1 × ||Wk ||1 ≤ κ2 ε2 ∞∑ k=0 ||Wk ||1 < ∞. In a similar way we prove also the second inequality in (5.78). Therefore all solutions of systems (Q0) and (Q0) belong to ℓ2 W , and so the statement follows from Theorem 5.4.5. ■ If the matrices Ak and Ak are invertible on [0, ∞)Z, then systems (Qλ) and (Qλ) are equivalent with the pair of the first order difference systems ˆzk(λ) = [ Hk + λJWk ] ˆz[s] k (λ) and ˜zk(λ) = [ Hk + λJWk ] ˜z[s] k (λ), where W is from (5.47), the coefficient matrix H ∈ C([0, ∞)Z)2n×2n has the form Hk :=   I − A −1 k A −1 k Bk Ck A −1 k Dk − Ck A −1 k Bk − I   (5.50) =   I − A −1 k A −1 k Bk Ck A −1 k A ∗−1 k − I   , and H ∈ C([0, ∞)Z)2n×2n is given analogously, cf. Examples 5.2.2 and 5.2.3. Especially, if we take Ak ≡ Ak, then the first identities in (5.50) and (5.55) imply that Hk = JH∗ k J for all k ∈ [0, ∞)Z. In this case we obtain from Theorem 5.4.5 the following generalization of [142, Theorem 5.5] for two non-Hermitian linear Hamiltonian difference systems ˆzk(λ) = [ Hk + λWk ] ˆz[s] k (λ), (Hλ) ˜zk(λ) = [ JH ∗ k J + λWk ] ˜z[s] k (λ), (Hλ) – 85 – Chapter 5. Polynomial and analytic dependence on spectral parameter where we put Hk := ( Ak Bk Ck −A ∗ k ) and Wk := ( 0 W [2] k −W [1] k 0 ) (5.79) with A, B, C, W [1] , W [2] ∈ C([0, ∞)Z)n×n being such that I − Ak is invertible, W [1] k , W [2] k are Hermitian, and −JWk ≥ 0 on [0, ∞)Z. Note that we have (Hλ)=(Hλ)=(2.6) if Bk = B ∗ k = Bk and Ck = C ∗ k = Ck. Corollary 5.4.7. Let H, W ∈ C([0, ∞)Z)2n×2n be as in (5.79). If there exists λ0 ∈ C such that all solutions of systems (Hλ0 ) and (Hλ0 ) belong to ℓ2 W with W := −JWk, then all solutions of systems (Hλ) and (Hλ) belong to ℓ2 W for any λ ∈ C. Finally, we give an example illustrating the result of Theorem 5.4.5. Example 5.4.8. Let κ ≥ 1 be fixed and q, r ∈ C([0, ∞)Z)1 be nonnegative sequences such that F(κ) := ∑∞ k=0 Fk(κ) < ∞ and G(κ) := ∑∞ k=0 Gk(κ) < ∞, where Fk(κ) := κ8k+2 q4k + κ8k+4 r4k+2 + κ8k+6 r4k+3 + κ8k+8 q4k+3, Gk(κ) := κ8k r4k + κ8k+2 r4k+1 + κ8k+4 q4k+1 + κ8k+6 q4k+2.    (5.80) For example, we may choose qk = rk := 1/(ck κ3k ) with c > 1/κ, because in this case the numbers F(κ) and G(κ) are multiples of the convergent series∑∞ k=0 1/(cκ)4k. 
By substituting 1/κ instead of κ in (5.80) we can see that the series F(1/κ) and G(1/κ) are convergent as well. Let us consider systems (Qλ) and (Qλ) with n = 1 and with the following coefficients: (i) for all k ∈ [0, ∞)Z we put W[1] k := qk and W[2] k := rk, (ii) for k ∈ [0, ∞)Z even (k = 2j) we set Ak = Dk := (−1)j/κ, Bk = Ck := 0 and Ak = Dk := (−1)jκ, Bk = Ck := 0, while for k ∈ [0, ∞)Z odd (k = 2j + 1) we define Ak = Dk := 0, Bk = −Ck := (−1)j/κ and Ak = Dk := 0, Bk = −Ck := (−1)jκ. This means that S [0] k = (1/κ)Jk , S [0] k = κJk , and Wk = diag { W[1] k , W[2] k } = diag{qk, rk} ≥ 0. (5.81) Then conditions (5.47) and (5.49) are satisfied. We show that all solutions of systems (Q0) and (Q0) with the coefficients specified above belong to the corresponding space ℓ2 W . The fundamental matrices k(0) and k(0) of systems (Q0) and (Q0) determined by the initial conditions 0(0) = I = 0(0) are equal to k(0) = (1/κk )J(k2−k)/2 and k(0) = κk J(k2−k)/2 for all k ∈ [0, ∞)Z. From (5.61) we obtain Tk(0) = ( 0 Ak −I Ck ) and Tk(0) = ( 0 Ak −I Ck ) , and thus for k ∈ [0, ∞)Z we get  [s] k (0) = T ∗ k (0)Jk+1(0) = (( ˆz[1] k )[s] , ( ˆz[2] k )[s] ) =    (−1)j κ4j+1   1 0 0 κ   , k = 4j, (−1)j κ4j+2   0 1 0 κ   , k = 4j + 1, (−1)j+1 κ4j+3   0 1 κ 0   , k = 4j + 2, (−1)j+1 κ4j+4   1 0 −κ 0   , k = 4j + 3, – 86 – 5.5. Bibliographical notes where ( ˆz[1] k )[s] and ( ˆz[2] k )[s] mean the partial shift applied to ˆz[1] k and ˆz[2] k , respectively, i.e., ( ˆz[1] k )[s] = ( ˆx[1] k+1 , ˆu[1] k )⊤ and ( ˆz[2] k )[s] = ( ˆx[2] k+1 , ˆu[2] k )⊤ . Similarly, the matrix  [s] k (0) = T ∗ k (0)Jk+1(0) = (( ˜z[1] k )[s] , ( ˜z[2] k )[s] ) has the same form as  [s] k (0) above but with κ replaced by 1/κ. By direct calculations we then have ∞∑ k=0 ( ˆz[1] k )[s]∗ Wk ( ˆz[1] k )[s] = F(1/κ) < ∞, ∞∑ k=0 ( ˆz[2] k )[s]∗ Wk ( ˆz[2] k )[s] = G(1/κ) < ∞, ∞∑ k=0 ( ˜z[1] k )[s]∗ Wk ( ˜z[1] k )[s] = F(κ) < ∞, ∞∑ k=0 ( ˜z[2] k )[s]∗ Wk ( ˜z[2] k )[s] = G(κ) < ∞. This shows that ˆz[1] , ˆz[2] , ˜z[1] , ˜z[2] ∈ ℓ2 W , i.e., the assumptions of Theorem 5.4.5 are satisfied with λ0 = 0. Therefore, by this theorem, all solutions of systems (Qλ) and (Qλ) with (5.81) belong to ℓ2 W for any λ ∈ C. Observe that the statement of Corollary 5.4.6 cannot be applied, because ∑∞ k=0 S [0] k −I 1 = ∞ = ∑∞ k=0 S [0] k −I 1 , i.e., the first two conditions in (5.77) are now violated. Note also that systems (Qλ) and (Qλ) in this example cannot be written as systems (Hλ) and (Hλ), respectively, because the coefficient matrices Ak = 0 = Ak for k ∈ [0, ∞)Z odd are singular. ▲ Remark 5.4.9. We note that Example 5.4.8 with κ = 1 illustrates the application of Theorem 5.4.1, since in this case the systems (Qλ) and (Qλ) coincide. 5.5 Bibliographical notes The results of this chapter were published in [A17] and their generalization to symplectic systems on time scales was established in [A20]. More precisely, Theorem 5.1.3, Example 5.2.3, and Section 5.4 were published only for systems on time scales and they are explicitly presented for the first time in the discrete case, while Examples 5.3.9 and 5.4.8 are taken almost verbatim from [A20]. However it is worth noticing that the results of [A17,A20] were stated without the shift in the definition of the space ℓ2 Ψ(λ) . 
A generalization of the invariance of the limit circle case for system (Sλ) with the special polynomial dependence on λ described in Example 5.2.3 will be a subject of our future research. – 87 – Chapter 5. Polynomial and analytic dependence on spectral parameter – 88 – Chapter 6 Nohomogeneous problem and maximal and minimal linear relations To compare the discrete with the continuous, to search for analogies between them, and ultimately to effect their unification, are patterns of mathematical development that did not begin with Zeno, and certainly did not end with Leibnitz and Newton, nor even with Riemann and Stieltjes. Such a pattern of investigation is especially appropriate to the theory of boundary problems, for which the discrete and the continuous pervade both physical origins and mathematical methods. Frederick Valentine Atkinson, see [9, pg. v] In the last two chapters we return to the case of linear dependence on the spectral parameter and focus on the discrete symplectic systems from the “operator-theoretic” point of view. To the best of the author’s knowledge, this direction in the theory of discrete symplectic systems was completely untouched before the publications [A18,A21], see also the bibliographical notes in Sections 6.5 and 7.4. However, instead of system (Sλ) with the matrices 5k, Sk, Vk, k we now consider the underlying discrete symplectic system in the so-called time-reversed form with the matrices pk, Sk, Vk, ψk (to avoid any confusion we use two different fonts), i.e., zk(λ) = pk(λ)zk+1(λ) with pk(λ) := Sk + λVk, (Sλ) where λ ∈ C is the same as in the previous chapters and Sk, Vk ∈ C2n×2n are such that S∗ k JSk = J, S∗ k JVk is Hermitian, V∗ k JVk = 0, and ψk := JSk JV∗ kJ ≥ 0, (6.1) where the matrix J is (again) as in (1.12). This change is mainly motivated by the absence of the shift in the definition of the associated semi-inner product and semi-norm, see (6.8) and compare with (2.54) and (2.26). Consequently, it produces a more natural form of the Green function associated with nonhomogeneous discrete symplectic systems, see Lemma 6.3.2, and allows us to associate with system (Sλ) a densely defined operator, see Theorem 6.4.5 and compare with [132]. Regardless the transition from system (Sλ) to (Sλ), the results of the previous chapters remain valid also for system (Sλ) with the changes given for the definition of the semiinner product and semi-norm. More precisely, one easily observes that the first three identities in (6.1) are the same as in (2.1). Therefore (again) the matrix Sk is symplectic, – 89 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations for any λ ∈ C we have p∗ k(¯λ)Jpk(λ) = J, p−1 k (λ) = −Jp∗ k(¯λ)J, | det pk(λ)| = | det Sk | = 1, (6.2) and the identities for the matrices Sk, Vk, pk(λ) can be modified similarly as in Lemma 2.1.1, see Lemma 6.1.1. Consequently system (Sλ) is equivalent to system (Sλ), where we put 5k(λ) := p−1 k (λ), i.e., Sk := −JS∗ k J and Vk := −JV∗ k J. But in that case we obtain z∗ k(λ)ψk zk(λ) = z∗ k+1(λ)p∗ k(λ)ψk pk(λ)zk+1(λ) = −z∗ k+1(λ)V∗ k JSk zk+1(λ) = z∗ k+1(λ)JVk JS∗ k Jzk+1(λ) = z∗ k+1(λ) k zk+1(λ), (6.3) which among others justifies the replacement of k by ψk. On the other hand, systems (Sλ) and (Sλ) lead to different spaces of square summable sequences, see (2.53) and (6.9). 
Finally, we remark that although the presence of the shift in the semi-inner product could be suppressed similarly as in [A17], the approach based on the time-reversed system seems to be more natural in the present situation and it is, in addition, traditionally used in connection with the second order Sturm–Liouville difference equations, see e.g. [96,147]. Moreover, while for the analysis of the square summable solutions developed in the previous chapters it was justifiable to “ignore” finite discrete intervals, in this and the following chapter we are concerned with system (Sλ) on a discrete interval IZ, which is finite or unbounded above, i.e., IZ = [0, N + 1)Z with N ∈ N ∪ {0, ∞}. Given the inherent positive semidefiniteness of the sequence ψ defined in (6.1), it is reasonable to consider the construction of operators in connection with system (Sλ), their extensions, and their spectral theory. However, we will see that the natural map associated with system (Sλ) gives rise only to a multivalued or non-densely defined operator, see Section 6.4. Hence the approach dealing with linear relations instead of operators is utilized, as it provides powerful tools for the investigation of multivalued linear operators in a Hilbert space, especially for non-densely defined linear operators. The beginning of the general theory of linear relations can be traced back to [8], where it was established as a generalization of the results carried out in [167], see also [42,45,46,82,143,144,146,175] and the references therein. Moreover, a short introduction to this theory is provided in the Appendix of this thesis and the reader should be acquainted with its content before reading Section 6.4. The study of linear relations in connection with the linear Hamiltonian differential system from (2.5) was initiated in [128] and further developed, e.g., in [13,83,104,116]. One of the first occurrences of this concept in the discrete theory can be found in [28,29] for the second order Sturm–Liouville difference equations, while their extension to the linear Hamiltonian difference systems was given recently in [79,134,135,155,161], compare with [142,163]. Hence, in this chapter we aim to associate the minimal and maximal linear relations with the (time-reversed) discrete symplectic systems and to establish their fundamental properties in analogy with [116, Section 2] for system (2.5) and with [134, Section 5] for system (2.6). The rest of this chapter is organized as follows. In the following section we give some basic properties of system (Sλ). In Section 6.2 we focus on the definiteness (or the strong Atkinson) condition for system (Sλ), which plays a crucial role in the present theory, and derive some equivalent characterizations. A nonhomogeneous discrete symplectic system is investigated in Section 6.3. In the concluding Section 6.4, the maximal and minimal linear relations associated with the (time-reversed) discrete symplectic systems are introduced and their fundamental properties, such as a relationship between the deficiency indices of the minimal relation in a suitable Hilbert space and the number of square summable solutions of system (Sλ), are established. In this final section we also present a sufficient condition guaranteeing the existence of a densely defined operator associated with the time-reversed discrete symplectic system.
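To keep the exposition reasonably self-contained, let us only briefly recall (in the spirit of the Appendix) that a linear relation in a Hilbert space H is nothing but a linear subspace T ⊆ H × H. It is the graph of a linear operator if and only if its multivalued part mul T = { g ∈ H | {0, g} ∈ T } is trivial, and its domain dom T = { z ∈ H | {z, g} ∈ T for some g ∈ H } may fail to be dense in H. Both of these degeneracies indeed occur for the relations associated with system (Sλ), as Example 6.4.1 shows.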
6.1 Preliminaries The conditions for the coefficients Sk and Vk of system (Sλ) given in (6.1) can be expressed in several equivalent forms, which are summarized in the following statement, compare with Lemma 2.1.1. Moreover, the same arguments as in (2.1.3) can be used for the calculation of | det pk(λ)|. Lemma 6.1.1. Let n ∈ N be given. For any k ∈ [0, ∞)Z the following conditions are equivalent. (i) The matrices Sk and Vk satisfy the first three conditions in (6.1), i.e., S∗ k JSk = J, S∗ k JVk is Hermitian, and V∗ k JVk = 0. (ii) The matrix pk(λ) defined in (Sλ) satisfies the first equality in (6.2), i.e., p∗ k (¯λ)Jpk(λ) = J, for all λ ∈ C. (iii) The matrices Sk and Vk satisfy Sk JS∗ k = J and Vk JV∗ k = 0, and Vk JS∗ k is Hermitian. (iv) The matrix pk(λ) in (Sλ) satisfies pk(λ)Jp∗ k (¯λ) = J for all λ ∈ C. Moreover, if any of conditions (i), (ii), (iii) or (iv) holds, then pk(λ) = (I − λJψk)Sk, ψ∗ k = ψk, ψk Jψk = 0, and (I − λJψk)−1 = (I + λJψk), (6.4) where ψk is defined (without the requirement of the positive semidefiniteness) in (6.1). Consequently, | det pk(λ)| = | det Sk(λ)| = 1 for all λ ∈ C. The invertibility of all matrices pk(λ) guarantees the (global) existence and uniqueness of a solution of any initial value problem associated with system (Sλ). Especially, if z(λ) ∈ C(I+ Z )2n solves system (Sλ) and satisfies zs(λ) = 0 for some s ∈ I+ Z , then z(λ) is only a trivial solution, i.e., zk(λ) = 0 for all k ∈ I+ Z . The latter lemma also establishes the correspondence between the matrix pairs {S, V} and {S, ψ}. More precisely, if Sk, Vk ∈ C2n×2n satisfies the first three conditions in (6.1), then the equalities in (6.4) hold. On the other hand, if Sk, ψk ∈ C2n×2n are such that Sk satisfies the first relation in (6.1), ψ∗ k = ψk, and ψk Jψk = 0, then the second and third conditions in (6.1) hold with Vk := J∗ψk Sk = −JψkSk. Simultaneously, from (6.4) one can easily conclude that system (Sλ) can be equivalently written as J ( zk(λ) − Sk zk+1(λ) ) = λψk zk(λ), which gives rise to a natural linear map L : C(I+ Z )2n×m → C(IZ)2n×m associated with system (Sλ), namely L (z)k := J(zk − Sk zk+1), (6.5) where m ∈ N corresponds to the dimension of z and typically we consider m = 1. Hence system (Sλ) is equivalent to L (z(λ))k = λψk zk(λ), k ∈ IZ. (6.6) From this point of view, the second approach seems to be more natural in the present situation, i.e., to “fix” the sequences of matrices S, ψ ∈ C(IZ)2n×2n such that S∗ k JSk = J, ψ∗ k = ψk, ψk Jψk = 0, and ψk ≥ 0 for all k ∈ IZ. (6.7) In any case, throughout Chapters 6 and 7 we employ the following notation. – 91 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations Notation 6.1.2. The number n ∈ N is fixed, the discrete interval IZ is given either finite or infinite, i.e., IZ = [0, N + 1)Z with N ∈ N ∪ {0, ∞}, and the sequences S, V, ψ ∈ C(IZ)2n×2n are such that the conditions in (6.1) are satisfied for all k ∈ IZ. With respect to the fact that ψ ∈ C(IZ)2n×2n is a sequence with positive semidefinite terms, we define a semi-inner product on C(I+ Z )2n by ⟨z, v⟩ψ := ∑ k∈I z∗ k ψk vk, (6.8) where z, v ∈ C(I+ Z )2n. Here we are intentionally sloppy when suppressing the subscript Z in the notation of the discrete interval IZ in the sum on the right-hand side of the latter identity. The same convention is applied throughout the last two chapters. 
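For instance, for n = 1 and a weight of the form ψk = diag{wk, 0} with wk ≥ 0, as in Example 6.2.4 below, formula (6.8) reduces to ⟨z, v⟩ψ = ∑ k∈I wk x̄k x̃k for z = (x, u)⊤ and v = (x̃, ũ)⊤, i.e., the weight “sees” only the first components of the sequences. In particular, ⟨z, z⟩ψ = 0 may hold for a nontrivial sequence z, which is the reason why (6.8) is only a semi-inner product and why the definiteness condition of Section 6.2 and the quotient space construction of Section 6.4 play such a prominent role below.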
The associated linear space of all square summable sequences is denoted by l2 ψ(IZ) = l2 ψ := { z ∈ C(I+ Z )2n | ||z||ψ < ∞ } , (6.9) where ||·||ψ := √ ⟨·, ·⟩ψ denotes the associated semi-norm. The equivalent expression of system (Sλ) given in (6.6) also justifies the following form of the associated nonhomogeneous system zk(λ) = pk(λ)zk+1(λ)−Jψk fk or equivalently L (z(λ))k = λψk zk(λ)+ψk fk, k ∈ IZ, (S f λ ) where z(λ), f ∈ C(I+ Z )2n×m with m ∈ N being the same as discussed when defining the linear map L in (6.5). For simplicity, (S g ν ) refers to the nonhomogeneous system of the form given in (S f λ ) with λ replaced by ν and f replaced by g. In addition, instead of (S0 λ ) we write only (Sλ) and for this system we apply the same convention, i.e., by (Sν) we denote the system of the form given in (Sλ) with the parameter λ replaced by ν. Let us point out that the equivalent expression given in (S f λ ) will play a key role in Section 6.4, where we sometimes write only L (z(λ)) = λψz(λ)+ψf and L (z) = ψf with the meaning L (z(λ))k = λψk zk(λ)+ψk fk and L (z)k = ψk fk for all k ∈ IZ, respectively. For convenience, we also put L ∗(z)k := [L (z)k]∗. The next result presents an absolutely essential tool used throughout this chapter, compare with Lemma 2.1.5 and Theorem 2.1.7. Theorem 6.1.3 (Extended Lagrange identity). Let λ, ν ∈ C, m ∈ N, and f, g ∈ C(IZ)2n×m be given. If the sequences Z(λ), Z(ν) ∈ C(I+ Z )2n×m solve systems (S f λ ) and (S g ν ) on IZ, respectively, then for any s, t ∈ IZ with s ≤ t we have [ Z∗ k(λ)JZk(ν) ] = (¯λ − ν)Z∗ k(λ)ψk Zk(ν) + f∗ k ψk Zk(ν) − Z∗ k(λ)ψk gk, (6.10) Z∗ k(λ)JZk(ν) t+1 s = t∑ k=s { (¯λ − ν)Z∗ j (λ)ψj Zj(ν) + f∗ j ψj Zj(ν) − Z∗ j (λ)ψj gj } . (6.11) Especially, if ν = ¯λ and fk ≡ gk ≡ 0 on IZ, we have the Wronskian-type identity Z∗ k(λ)JZk(¯λ) ≡ Z∗ 0(λ)JZ0(¯λ) on I+ Z . (6.12) Proof. Let Z(λ), Z(ν) ∈ C(I+ Z )2n×m solve systems (S f λ ) and (S g ν ). Since by (6.4) we have p∗−1 k (λ)Jp−1 k (ν) = J + (¯λ − ν)ψk, we see that [ Z∗ k(λ)JZk(ν) ] = [ Zk(λ) + Jψk fk ]∗ p∗−1 k (λ)Jp−1 k (ν) [ Zk(ν) + Jψkgk ] − Z∗ k(λ)JZk(ν) = Z∗ k(λ) [ p∗−1 k (λ)Jp−1 k (ν) − J ] Zk(ν) + f∗ k ψk Zk(ν) − Z∗ k(λ)ψk gk = (¯λ − ν)Z∗ k(λ)ψk Zk(ν) + f∗ k ψk Zk(ν) − Z∗ k(λ)ψk gk. – 92 – 6.2. Definiteness condition Identities (6.11) and (6.12) are only simple consequences of the latter equality. ■ Let us note that, by using the equivalent expression of system (S f λ ), identity (6.11) can be also written as ( Z(λ), Z(ν) ) k t+1 s = t∑ k=s { L ∗ (Z(λ))k Zk(ν) − Z∗ k(λ)L (Z(ν))k } , (6.13) where we use for any Z, Z ∈ C(I+ Z )2n×m and k ∈ I+ Z the notation (Z, Z)k := Z∗ k JZk. Moreover, under the assumptions of Theorem 6.1.3 with m = 1, Z(λ) = z, Z(ν) = v, λ = 0 = ν, s = 0, and t = N we get from (6.11) and (6.8) that (z, v)k N+1 0 = ⟨ f, v⟩ψ − ⟨z, g⟩ψ, (6.14) where the left-hand side of (6.14) means limk→∞(z, v)k −(z, v)0 if IZ = [0, ∞)Z. Nevertheless identity (6.14) and the Cauchy–Schwarz inequality imply that the latter limit exists finite whenever z, v, f, g ∈ l2 ψ . Throughout this and the following chapters we denote by Θ(λ) ∈ C(I+ Z )2n×2n a fundamental matrix of system (Sλ). If, similarly as in Chapter 2, it is such that Θ∗ s(¯λ)JΘs(λ) = J, (6.15) for some s ∈ I+ Z , then as an immediate consequence of (6.12) we have for any k ∈ I+ Z that Θ∗ k(¯λ)JΘk(λ) = J, Θ−1 k (λ) = −JΘ∗ k(¯λ)J, and Θk(λ)JΘ∗ k(¯λ) = J. (6.16) Remark 6.1.4. 
Similarly as for linear Hamiltonian differential systems, there exists a unitary map Q : C(I+ Z )2n → C(I+ Z )2n preserving the square summability with respect to ψ and such that system (S f 0 ) can be written in the canonical form, i.e., with S ≡ I. Indeed, let Θ = Θ(0) denote the fundamental matrix of system (S0) satisfying Θ0 = I. Then it is invertible for all k ∈ I+ Z with Θ−1 k = −JΘ∗ k J and this inverse provides the canonical transformation, i.e., Qk = Θ−1 k with Q(z)k := Θ−1 k zk. Hence system (S f 0 ) is equivalent with − J vk = ψk gk, k ∈ IZ, (6.17) where we put vk := Q(z)k, gk := Q(f)k, and ψk := Θ∗ k ψk Θk, see also [79]. Moreover, one can easily verify that v ∈ l2 ψ if and only if z ∈ l2 ψ . System (6.17) can be seen as a discrete counterpart of the canonical linear Hamiltonian differential system, i.e., nonhomogeneous system associated with (2.5), where H(t) ≡ 0, see e.g. [128] or [116, Subsection 2.2] and the references therein. 6.2 Definiteness condition In this section we focus on the definiteness condition for system (Sλ), which is closely related to the strong Atkinson condition applied to all λ ∈ C, see Hypothesis 2.4.11 and Remark 6.2.7. Although it was shown in Chapter 2 that only the weak form of the Atkinson condition is sufficient for the development of the Weyl–Titchmarsh theory, we will need the strong version for (at least) two reasons: – 93 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations (i) in the section devoted to the nonhomogeneous problem we utilize Theorem 2.4.12 in the setting of system (Sλ), see Section 6.3; (ii) it guarantees a certain (extremely important) uniqueness property for system (S f 0 ), see Lemmas 6.2.8 and 6.4.3, Theorem 6.4.2, Corollary 6.4.14, and Chapter 7. Similar treatment in connection with the linear Hamiltonian differential and difference systems can be found in [13,116] and [134], respectively. Definition 6.2.1. System (Sλ) is said to be definite on a discrete interval ID Z ⊆ IZ provided the interval ID Z is nonempty and for each λ ∈ C any nontrivial solution z(λ) ∈ C(I+ Z )2n of system (Sλ), i.e, L (z(λ))k = λψk zk(λ) for all k ∈ IZ, satisfies ∑ k∈ID z∗ k(λ)ψk zk(λ) > 0. (6.18) Remark 6.2.2. Alternatively, the definiteness condition for (Sλ) can be stated in the following way: system (Sλ) is definite on a (nonempty) discrete interval ID Z ⊆ IZ if, for each λ ∈ C, every solution z(λ) ∈ C(I+ Z )2n of system (Sλ) for which ∑ k∈ID z∗ k(λ)ψk zk(λ) = 0, (6.19) is trivial on ID Z , i.e., zk(λ) = 0 for all k ∈ ID Z , and consequently it is trivial on the whole interval I+ Z . Furthermore, from the assumption of ψk ≥ 0 on IZ we get immediately that for any discrete interval IZ ⊆ IZ such that ID Z ⊆ IZ we have ∑ k∈ID z∗ k(λ)ψk zk(λ) ≤ ∑ k∈I z∗ k(λ)ψk zk(λ). (6.20) Therefore one easily concludes that the definiteness of system (Sλ) on ID Z guarantees the definiteness of (Sλ) on every discrete “interval superset” IZ, especially on IZ. Hence, definiteness of system (Sλ) on some finite discrete subinterval ID Z implies, for every λ ∈ C, that the semi-norm ||·||ψ of any nontrivial solution of system (Sλ) is nonzero. The converse of this last statement will be established in Lemma 6.2.6 below. In the next lemma we show that for the definiteness of system (Sλ) on a discrete interval ID Z it is not necessary to verify inequality (6.18) for all nontrivial solutions and every λ ∈ C, but it suffices to do it only for one particular choice of λ ∈ C. Lemma 6.2.3. 
System (Sλ) is definite on a discrete interval ID Z ⊆ IZ if and only if, for some λ0 ∈ C, each solution z(λ0) ∈ C(I+ Z )2n of system (Sλ0 ) satisfying ∑ k∈ID z∗ k(λ0)ψk zk(λ0) = 0 (6.21) is trivial on ID Z , i.e., zk(λ0) = 0 for all k ∈ ID Z . Proof. We begin by assuming that for some λ0 ∈ C each solution z(λ0) of system (Sλ0 ) satisfying (6.21) is necessarily trivial on ID Z , i.e., zk(λ0) = 0 for all k ∈ ID Z . Let λ ∈ C be arbitrary and z(λ) be a solution of system (Sλ) such that (6.19) holds. Given Remark 6.2.2, it suffices to show that z(λ) is also trivial on ID Z . Since ψk zk(λ) = 0 for all k ∈ ID Z , we see by the equivalent expression given in (6.6) that z(λ) solves also system (Sλ0 ) on ID Z . Then, by the assumed definiteness of system (Sλ0 ) on ID Z , condition (6.19) indeed implies that zk(λ) = 0 for all k ∈ ID Z . The converse is trivial. ■ – 94 – 6.2. Definiteness condition Example 6.2.4. The discrete symplectic systems (or their time-reversed form) investigated in Examples 2.5.1–2.5.3 are not definite, because they possess nontrivial solutions with the semi-norm equal to zero. On the other hand, we can provide a simple example of system (Sλ) being definite on some nontrivial discrete interval ID Z ⊆ IZ; cf. Theorem 6.2.5. Let us consider the following system ( xk(λ) uk(λ) ) = ( 1 −1/pk+1 −qk + λwk 1 + (qk − λwk)/pk+1 ) ( xk+1(λ) uk+1(λ) ) , k ∈ IZ, (6.22) where p ∈ C(I+ Z K{0}) and q, w ∈ C(IZ) are (only) real-valued sequences such that wk ≥ 0 and pk+1 0 for all k ∈ IZ. Then the conditions in (6.1) are satisfied with Sk = ( 1 −1/pk+1 −qk 1 + qk/pk+1 ) , Vk = ( 0 0 wk −wk/pk+1 ) , and ψk = ( wk 0 0 0 ) , see also (6.25). If wk > 0 for at least two consecutive points of IZ, i.e., for k ∈ [a, b]Z with a, b ∈ IZ and a < b, then system (6.22) is definite on [a, b]Z, and thus, by Remark 6.2.2, also on any discrete interval ID Z such that [a, b]Z ⊆ ID Z ⊆ IZ. Indeed, let us denote by z = (x, u)⊤ ∈ C(I+ Z )2 a sequence satisfying system (6.22) with λ = 0 and assume that ψk zk = 0 on [a, b]Z, i.e., the pair xk, uk is such that xk = uk+1 pk+1 , uk = qk xk+1 − qk pk+1 uk+1, and wk xk = 0 for all k ∈ [a, b]Z. (6.23) Then the positivity of wk on [a, b]Z and the third equality in (6.23) yield xk = 0 for all k ∈ [a, b]Z, which implies that also uk = 0 for all k ∈ [a + 1, b]Z by the first equality in (6.23). Hence zk = 0 on [a + 1, b]Z and from the invertibility of the coefficient matrix in (6.22) we easily obtain z ≡ 0 on [a, b]Z. It means that z is only the trivial solution of system (6.22) with λ = 0 and the definiteness of system (6.22) on [a, b]Z follows by Lemma 6.2.3. In addition, we point out that system (6.22) corresponds to the second order Sturm– Liouville difference equation − [pk yk−1(λ)] + qk yk(λ) = λwk yk(λ), k ∈ IZ. (6.24) with p ∈ C(I+ Z ) and q, w ∈ C(IZ). More precisely, let y(λ) ∈ C(I+ Z ∪ {−1}) be a solution of equation (6.24) on the discrete interval IZ. Then the pair xk(λ) := yk(λ), k ∈ I+ Z ∪ {−1}, and uk(λ) := pk yk−1(λ), k ∈ I+ Z , satisfies system (6.22) for all k ∈ IZ, compare with the system corresponding to equation (2.8) in the case m = n = 1 and see also [4, Example 3.8]. On the other hand, if p0 0 and the pair x(λ), u(λ) ∈ C(I+ Z ) solves system (6.22), then y(λ) ∈ C(I+ Z ∪ {−1}) defined as yk(λ) :=    xk(λ), k ∈ I+ Z , x0(λ) − u0(λ)/p0, k = −1, satisfies equation (6.24) for all k ∈ IZ. ▲ System (6.22) is a particular case of system (Sλ) with a special linear dependence on the parameter λ, which is analogous to system (Sλ) with (2.7). 
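For completeness, this correspondence can be checked directly (writing ∆ for the forward difference operator, so that ∆yk = yk+1 − yk): with xk(λ) = yk(λ) and uk(λ) = pk ∆yk−1(λ), the first equation in (6.22) reduces to yk = yk+1 − ∆yk, which holds trivially, while the second one gives
pk ∆yk−1(λ) = (−qk + λwk) yk+1(λ) + pk+1 ∆yk(λ) + (qk − λwk) ∆yk(λ) = pk+1 ∆yk(λ) − (qk − λwk) yk(λ),
i.e., −∆[pk ∆yk−1(λ)] + qk yk(λ) = λwk yk(λ), which is exactly equation (6.24). We now return to the general form of the special linear dependence mentioned above.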
Namely, let Sk = ( Ak Bk Ck Dk ) , Vk = ( 0 0 Wk Ak Wk Ak ) , and ψk = ( Wk 0 0 0 ) , (6.25) – 95 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations where A, B, C, D, W ∈ C(IZ)n×n are such that S∗ k JSk = J and Wk = W∗ k ≥ 0 for all k ∈ IZ. Then the conditions in (6.1) are satisfied on IZ and in the following theorem we derive a sufficient condition for the definiteness of system (Sλ) with the coefficients as in (6.25). For completeness, we note that the latter system is equivalent with the pair of equations xk(λ) = Ak xk+1(λ) + Bk uk+1(λ), uk(λ) = Ck xk+1(λ) + Dk uk+1(λ) + λWk xk(λ). (6.26) Theorem 6.2.5. Let us consider system (Sλ) with the coefficients specified in (6.25). If there exists an index m ∈ IZ K{0} such that the matrices Bm−1, Wm−1, and Wm are invertible (in fact, Wm−1 and Wm are positive definite), then system (Sλ) is definite on the discrete interval [m − 1, m]Z, and thus also on IZ. Proof. Let λ ∈ C be fixed and z(λ) = (x⊤(λ), u⊤(λ))⊤ ∈ C(I+ Z )2n be a nontrivial solution of system (Sλ) such that ψk zk(λ) = 0 on [m − 1, m]Z, i.e., Wm−1 xm−1(λ) = 0 and Wm xm(λ) = 0. By Lemma 6.2.3 we have to show that zk(λ) = 0 on [m − 1, m]Z. From the invertibility of Wm−1 and Wm we obtain xm−1(λ) = xm(λ) = 0. Hence 0 = xm−1(λ) (6.26) = Am−1 xm(λ) + Bm−1 um(λ) = Bm−1 um(λ), which yields also um(λ) = 0 by the invertibility of Bm−1. It means zm(λ) = 0 and consequently z(λ) ≡ 0 on I+ Z . ■ For the following result we fix k0 ∈ I+ Z and by Θ(λ) ∈ C(I+ Z )2n×2n we denote the fundamental matrix of system (Sλ) satisfying Θk0 (λ) = I for any λ ∈ C. Hence Θ(λ) satisfies equality (6.15) with s = k0 and thus also the relations in (6.16) for all k ∈ I+ Z . The next result provides a characterization of the definiteness of system (Sλ) which is analogous to that for system (2.5) established in [13, Proposition 2.11]. Lemma 6.2.6. System (Sλ) is definite on IZ if and only if there exists a finite discrete interval ID Z ⊆ IZ over which the system is definite. Proof. If the discrete interval IZ is finite, the statement is trivial. Hence, let us consider the case IZ = [0, ∞)Z. From Remark 6.2.2 we know that the definiteness of system (Sλ) on a finite discrete interval ID Z implies definiteness on the discrete interval [0, ∞)Z. Thus, it remains to show the converse. Assume that system (Sλ) is definite on [0, ∞)Z. In light of Lemma 6.2.3, we need only to show the existence of a finite discrete interval ID Z over which system (Sλ) is definite for one value λ0 ∈ C. Thus, let λ0 ∈ C be fixed and for any finite discrete subinterval IZ ⊂ [0, ∞)Z we define the set s(IZ) as s(IZ) := { ξ ∈ C2n ||ξ||2 = 1 and ∑ k∈I ξ∗ Θ∗ k(λ0)ψk Θk(λ0)ξ = 0 } , where ||·||2 is the Euclidean norm on C2n. Then s(IZ) is compact and it holds s(IZ) ⊆ s(IZ) whenever IZ ⊆ IZ. Moreover, let { I [m] Z } m∈N be a collection of nested finite discrete intervals I [1] Z ⊆ I [2] Z ⊆ · · · ⊂ [0, ∞)Z such that ∪ m∈N I [m] Z = [0, ∞)Z. Suppose that there exists a vector ξ ∈ C2n with ||ξ||2 = 1 such that for every m ∈ N we have ∑ k∈I[m] ξ∗ Θ∗ k (λ0)ψk Θk(λ0)ξ = 0. Then also ∑∞ k=0 ξ∗ Θ∗ k (λ0)ψk Θk(λ0)ξ = 0, which implies that Θk(λ0)ξ = 0 for all k ∈ [0, ∞)Z by the definiteness of system (Sλ) on [0, ∞)Z, see Remark 6.2.2. But it means that ξ = 0, which contradicts the assumption ||ξ||2 = 1. Consequently ∩ m∈N s ( I [m] Z ) = ∅ – 96 – 6.2. Definiteness condition for the collection of nested compact sets s ( I [1] Z ) ⊇ s ( I [2] Z ) ⊇ · · · . Thus by the Cantor intersection theorem, q.v. 
[7, Theorem 3.25], there exists m0 ∈ N such that s ( I [m0] Z ) = ∅, which demonstrates the definiteness of system (Sλ) on the interval ID Z = I [m0] Z ⊂ [0, ∞)Z. ■ Remark 6.2.7. From Lemmas 6.2.3 and 6.2.6 we conclude that the strong Atkinson condition stated similarly as in Hypothesis 2.4.11 for system (Sλ) is equivalent to the definiteness of system (Sλ) on [0, ∞)Z and it suffices to verify this condition only for one λ ∈ C. Note also that [26, Assumption 2.2] requires satisfaction of inequality (6.18) on every nonempty finite subinterval of [0, ∞)Z, which is significantly stronger than requiring the definiteness of system (Sλ) on [0, ∞)Z as seen, e.g., when (Sλ) is definite on a finite discrete interval [0, N]Z ⊂ [0, ∞)Z and ψk ≡ 0 for k ∈ [N + 1, ∞)Z. Now we establish a basic result concerning the solvability of a boundary value problem associated with system (Sλ), which will be utilized in the proof of Lemma 6.4.3. It provides the symplectic counterpart of the original Naimark’s result known as the “Patching lemma”, see [124, Lemma 2 in Section 17.3]. Analogous result for system (2.6) can be found in [135, Lemma 3.3]. Lemma 6.2.8. Let system (Sλ) be definite on a finite discrete interval ID Z ⊆ IZ and a finite discrete interval IZ := [c, d]Z be given such that ID Z ⊆ IZ ⊆ IZ with c, d ∈ IZ. Then for any given ξ, η ∈ C2n there exists f ∈ C(IZ)2n such that the boundary value problem L (z)k = ψk fk, zc = ξ, zd+1 = η, k ∈ IZ, (6.27) possesses a solution z ∈ C(I + Z )2n. Proof. Let A = (aij)i,j=1,...,2n be the 2n × 2n matrix with the elements aij := ∑d k=c φ[i]∗ k ψk φ [j] k for i, j ∈ {1, . . . , 2n}, where φ[1] , . . . , φ[2n] ∈ C(I+ Z )2n are linearly independent solutions of system (S0), i.e., L (φ[i] )k = 0 for all k ∈ IZ and i ∈ {1, . . . , 2n}. Then the homogeneous system of algebraic equations Aρ = 0, where ρ = (ρ1, . . . , ρ2n)⊤ ∈ C2n, is equivalent to ∑d k=c φ∗ k ψk φk = 0, where φ := ∑2n i=1 ρi φ[i] ∈ C(I+ Z )2n. Since φ also solves system (S0), it follows from the assumption of the definiteness on ID Z and inequality (6.20) that φ is only the trivial solution of system (S0), i.e., ∑2n i=1 ρi φ[i] k ≡ 0, which implies that ρi = 0 for all i ∈ {1, . . . , 2n}. Thus, the matrix A is invertible. Consequently, there exists a unique solution ζ = (ζ1, . . . , ζ2n)⊤ ∈ C2n of the nonhomogeneous system of algebraic equations ζ∗ A = η∗ JΘd+1, (6.28) where Θ := (φ[1]∗, . . . , φ[2n]∗)∗ is a fundamental matrix of system (S0). If we put h[1] k := Θk ζ for k ∈ IZ, we get from (6.28) for all i ∈ {1, . . . , 2n} that d∑ k=c h[1]∗ k ψk φ[i] k = η∗ Jφ[i] d+1 . (6.29) Simultaneously the definiteness of system (Sλ) guarantees the existence of a unique solution z[1] ∈ C(I + Z )2n of the nonhomogeneous initial value problem L (z[1] )k = ψk h[1] k , z[1] c = 0, k ∈ IZ. Then, for all i ∈ {1, . . . , 2n}, the fact L (φ[i] )k ≡ 0 and identity (6.13) yield d∑ k=c h[1]∗ k ψk φ[i] k = d∑ k=c { L ∗ (z[1] )k φ[i] k − z[1]∗ k L (φ[i] )k } = (z[1] , φ[i] )k d+1 c = (z[1] , φ[i] )d+1. (6.30) – 97 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations Upon combining (6.29) and (6.30) we obtain z[1] d+1 = η, which means that z[1] solves the boundary value problem L (z[1] )k = ψk h[1] k , z[1] c = 0, z[1] d+1 = η, k ∈ IZ. Similarly, the nonhomogeneous system of algebraic equations ω∗A = ξ∗JΘc has a unique solution ω = (ω1, . . . , ω2n)⊤ ∈ C2n. 
Then with h[2] k := Θk ω, k ∈ IZ, we can calculate that z[2] ∈ C(I + Z )2n, being the unique solution of L (z[2] )k = −ψk h[2] k , z[2] d+1 = 0, k ∈ IZ, also satisfies z[2] c = ξ, i.e., it solves the boundary value problem L (z[2] )k = −ψk h[2] k , z[2] c = ξ, z[2] d+1 = 0, k ∈ IZ. Therefore, the sequence z ∈ C(I + Z )2n with the terms zk := z[1] k + z[2] k for all k ∈ I + Z solves the boundary value problem (6.27) with fk := h[1] k − h[2] k for k ∈ IZ, i.e., f ∈ C(IZ)2n. ■ Finally, we derive yet another characterization of the definiteness of system (Sλ), analogous to that given for linear Hamiltonian systems (2.5) and (2.6) with Ek ≡ 0 in [116, Sections 2.3 and 2.4] and [134, Sections 3 and 4], respectively. For any λ ∈ C and nonempty finite discrete interval IZ ⊆ IZ with k0 ∈ I + Z we define the 2n × 2n positive semidefinite matrix ϑ(λ, IZ) := ∑ k∈I Θ∗ k(λ)ψk Θk(λ) (6.31) in terms of the fundamental matrix Θ(λ) of system (Sλ) specified in the paragraph preceding Lemma 6.2.6. While the matrix ϑ(λ, IZ) depends obviously on λ and IZ, we next establish that its kernel and range do not, which implies also the independence of the rank of ϑ(λ, IZ) on the value of λ. Lemma 6.2.9. For any nonempty finite discrete interval IZ ⊆ IZ such that k0 ∈ I + Z , the subspaces Ker ϑ(λ, IZ) and Ran ϑ(λ, IZ) are independent of λ ∈ C. Proof. Let λ ∈ C and ξ ∈ Ker ϑ(λ, IZ) be fixed and put zk := Θk(λ)ξ for all k ∈ I + Z . Then obviously z ∈ C(I + Z )2n solves system (Sλ) on IZ, i.e., L (z) = λψz on IZ, while satisfying the initial condition zk0 = ξ. Simultaneously, the sequence z solves also system (Sν) on IZ for any ν ∈ C, i.e., L (z) = νψz on IZ, because the positive semidefiniteness of the elements in the sum on the right-hand side of (6.31) implies ψk zk = ψk Θk(λ)ξ = 0 for all k ∈ IZ. Hence zk = Θk(ν)ζ for all k ∈ I + Z and some ζ ∈ C2n. Since ξ = zk0 = Θk0 (ν)ζ = ζ, it holds z = Θ(λ)ξ = Θ(ν)ξ on the discrete interval I + Z , which implies 0 = ξ∗ ϑ(λ, IZ)ξ = ∑ k∈I z∗ k ψk zk = ξ∗ ϑ(ν, IZ)ξ. Therefore, Ker ϑ(λ, IZ) ⊆ Ker ϑ(ν, IZ) and by reversing the roles of the numbers λ and ν we obtain Ker ϑ(λ, IZ) = Ker ϑ(ν, IZ) for all λ, ν ∈ C. The independence of Ran ϑ(λ, IZ) on λ ∈ C follows from the previous part and the fact that, as defined in (6.31), the matrix ϑ(λ, IZ) is Hermitian, which yields Ran ϑ(λ, IZ) = Ker ϑ(λ, IZ)⊥ by (1.6). ■ – 98 – 6.2. Definiteness condition The latter statement justifies the suppression of the parameter λ in the notation of the kernel, range, and rank of ϑ(λ, IZ) for any nonempty finite discrete interval IZ ⊆ IZ with k0 ∈ I + Z , i.e., henceforward we write only Ker ϑ(IZ), Ran ϑ(IZ), and rank ϑ(IZ). In the following theorem we show that there exists a nonempty finite discrete interval IZ ⊆ IZ, which maximizes the value of rank ϑ(·). Lemma 6.2.10. There exists a nonempty finite discrete interval IZ ⊆ IZ with k0 ∈ I + Z such that for any finite discrete interval IZ satisfying IZ ⊆ IZ ⊆ IZ we have rank ϑ(IZ) = rank ϑ(IZ), Ran ϑ(IZ) = Ran ϑ(IZ). (6.32) Proof. If the discrete interval IZ is finite, the statement is trivial. Hence, let us consider the case IZ = [0, ∞)Z. For finite discrete intervals IZ and IZ such that k0 ∈ I + Z ⊆ I + Z ⊂ [0, ∞)Z, we see that Ker ϑ(IZ) ⊆ Ker ϑ(IZ) by the definition of ϑ(·) in (6.31). 
Then, given that the matrix ϑ(·) is Hermitian for any finite discrete interval, we see that Ran ϑ(IZ) (1.6) = [Ker ϑ(IZ)]⊥ ⊆ [Ker ϑ(IZ)]⊥ (1.6) = Ran ϑ(IZ), and consequently rank ϑ(IZ) ≤ rank ϑ(IZ) by (1.1), i.e., the value of rank ϑ(·) does not decrease when we extend the interval. Since at the same time rank ϑ(·) ≤ 2n, there must be a finite discrete interval IZ such that rank ϑ(IZ) = rank ϑ(IZ), and thus also Ran ϑ(IZ) = Ran ϑ(IZ), for all finite discrete intervals IZ containing IZ. ■ Finally, we describe a connection between the definiteness of system (Sλ) and the matrix ϑ(λ, IZ) for a finite discrete interval IZ ⊆ IZ. Theorem 6.2.11. For a nonempty finite discrete interval IZ ⊆ IZ with k0 ∈ I + Z and for ϑ(λ, IZ) being defined as in (6.31) the following statements are equivalent. (i) It holds rank ϑ(IZ) = 2n. (ii) It holds Ker ϑ(IZ) = {0}. (iii) For some λ ∈ C, every nontrivial solution z(λ) ∈ C(I+ Z )2n of system (Sλ) satisfies∑ k∈I z∗ k (λ)ψk zk(λ) > 0. (iv) For some λ ∈ C, a solution z(λ) ∈ C(I+ Z )2n of system (Sλ) is necessarily trivial, i.e. zk(λ) = 0 for all k ∈ I+ Z , when ∑ k∈I z∗ k (λ)ψk zk(λ) = 0. Proof. The equivalence of (i) and (ii) is clear, while the equivalence of (iii) and (iv) follows from Remark 6.2.2 and Lemma 6.2.3. Hence it remains to show that the statements in (ii) and (iii) are equivalent. Let us assume that (ii) is true and z(λ) ∈ C(I+ Z )2n be an arbitrary nontrivial solution of system (Sλ) for some λ ∈ C, i.e., zk(λ) = Θk(λ)ξ for some ξ ∈ C2nK{0} and all k ∈ I+ Z . Since ϑ(λ, IZ) is positive definite and ξ 0, we obtain 0 < ξ∗ ϑ(λ, IZ)ξ = ∑ k∈I ξ∗ Θ∗ k(λ)ψk Θk(λ)ξ = ∑ k∈I z∗ k(λ)ψk zk(λ), i.e., (iii) holds. Conversely, assume that the statement in (iii) is true. Let ξ ∈ C2nK{0} be fixed and put zk(λ) := Θk(λ)ξ for all k ∈ I+ Z . Then z(λ) ∈ C(I+ Z )2n is a nontrivial solution of system (Sλ) and, by (iii), ξ∗ ϑ(λ, IZ)ξ = ∑ k∈I z∗ k(λ)ψk zk(λ) > 0, – 99 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations i.e., we have ϑ(λ, IZ)ξ 0. Since the vector ξ was chosen arbitrarily, we conclude that Ker ϑ(IZ) = {0}, i.e., (ii) is satisfied, which completes the proof. ■ As an immediate consequence of Lemma 6.2.6 and Theorem 6.2.11 we get the following corollary; cf. [116, Definition 2.14]. Corollary 6.2.12. System (Sλ) is definite on IZ if and only if, for some nonempty finite discrete interval IZ ⊆ IZ, one of the conditions listed in Theorem 6.2.11 is satisfied. 6.3 Nonhomogeneous problem Now we take the nonhomogeneous problem into consideration and extend the results of [A4, Section 5] to the case of general linear dependence on λ. For this purpose we naturally consider only the case IZ = [0, ∞)Z and we start this section by restating some fundamental results from the Weyl–Titchmarsh theory for system (Sλ), which are related to the present study of system (S f λ ). As discussed at the beginning of this chapter, see (6.3), these results can be easily derived from Chapter 2 by appropriate changes in the definition of the semi-inner product and its weight matrix. Throughout this section we assume that system (Sλ) is definite on [0, ∞)Z and “fix” the fundamental matrix Θ(λ) ∈ C([0, ∞)Z)2n×2n by the initial condition Θ0(λ) = (α∗, −Jα∗) for a given α ∈ , see (2.19) and (2.21). Since α ∈ , this fundamental matrix satisfies equality (6.15) with s = 0, and thus all the relations in (6.16) hold. We also denote by Z(λ), Z(λ) ∈ C([0, ∞)Z)2n×n the two components of the fundamental matrix, i.e., we put Θ(λ) := (Z(λ), Z(λ)). 
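For orientation, note that if the admissible set used in Chapter 2 consists (as is customary in this theory) of the matrices α ∈ Cn×2n satisfying αα∗ = I and αJα∗ = 0, then the particular choice α = (I, 0) is admissible and, with J as in (1.12), it yields Θ0(λ) = (α∗, −Jα∗) = I. In this case the first and the second n columns of Θ(λ) are simply the solutions of system (Sλ) determined by the initial values (I, 0)⊤ and (0, I)⊤, respectively.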
If λ ∈ CKR, then the associated Weyl disks are defined in the same way as in Definition 2.3.1 with the E(M)-function replaced by E(M) := iδ(λ)X∗(λ)JX(λ), where M ∈ Cn×n and Xk(λ) := Θk(λ, α)(I, M∗ )∗ , k ∈ [0, ∞)Z, (6.33) represents the Weyl solution of system (Sλ); cf. identity (2.32) and Definition 2.2.1. Since system (Sλ) is assumed to be definite on the discrete interval [0, ∞)Z, the limiting Weyl disk exists and it is a closed, convex, and nonempty subset of Cn×n; cf. Definition 2.3.10 and Remark 6.2.7. Hence the columns of the Weyl solution X(λ) defined through any matrix M from the limiting Weyl disk are linearly independent square summable solutions of system(Sλ), i.e., they belong to l2 ψ ; cf. Theorem 2.4.1. Consequently we adopt the terminology from Definition 2.4.2, i.e., system (Sλ) is said to be in the limit point case and in the limit circle case if it possesses n and 2n linearly independent solutions in l2 ψ , respectively. Finally, by M+(λ) we denote the half-line Weyl–Titchmarsh M(λ)-function, which is defined in accordance with Remark 2.3.17(i) and satisfies M∗ +(λ) = M+(¯λ) for all λ ∈ CKR. (6.34) Moreover, if λ, ν ∈ CKR and systems (Sλ) and (Sν), are both in the limit point or in the limit circle case, then lim k→∞ X+∗ k (λ)JX+ k (ν) = 0, (6.35) whereX+(λ),X+(ν) ∈ C([0, ∞)Z)2n×n represent the Weyl solutions of systems (Sλ) and (Sν) defined by (6.33) through the matrices M+(λ) and M+(ν), respectively; cf. Theorem 2.4.12. For simplicity, we putX+∗ k (λ) := [X+ k (λ)]∗. The next result shows a useful relation between the Weyl solution X+(λ) and Z(λ). – 100 – 6.3. Nonhomogeneous problem Lemma 6.3.1. Let α ∈ , λ ∈ CKR, and system (Sλ) be definite on [0, ∞)Z. Then X+ k (λ)Z ∗ k(¯λ) − Zk(λ)X+∗ k (¯λ) = J for all k ∈ [0, ∞)Z. (6.36) Proof. Identity (6.36) then follows by a direct calculation from the definition of X+(·), the third identity in (6.16), and equality (6.34). ■ For λ ∈ CKR and k, s ∈ [0, ∞)Z we introduce the Green function Gk,s(λ) :=    Zk(λ)X+∗ s (¯λ), k ∈ [0, s]Z, X+ k (λ)Z ∗ s(¯λ), k ∈ [s + 1, ∞)Z, (6.37) which can be equivalently expressed as Gk,s(λ) =    X+ k (λ)Z ∗ s(¯λ), s ∈ [0, k − 1]Z, Zk(λ)X+∗ s (¯λ), s ∈ [k, ∞)Z. (6.38) Let us note that in the literature we also find the terminology resolvent kernel for an analogous function in the continuous time case, see e.g. [107, page 15]. In the following lemma, we establish some fundamental properties of the Green function. We note that the given identities are presented in a more symmetric form, with respect to the variables k and s, than the corresponding identities for the Green function in the case of the system (Sλ) with the special linear dependence on the spectral parameter given in [A4, Lemma 5.1]. Lemma 6.3.2. Let α ∈ , λ ∈ CKR, and system (Sλ) be definite on [0, ∞)Z. Then the Green function G(λ) ∈ C([0, ∞)Z)2n×2n possesses the following properties: (i) G∗ k,s (λ) = Gs,k(¯λ) for all k, s ∈ [0, ∞)Z such that k s; (ii) G∗ k,k (λ) = Gk,k(¯λ) + J for all k ∈ [0, ∞)Z; (iii) for any given s ∈ [0, ∞)Z the function G·,s(λ) satisfies the homogeneous system (Sλ) for all k ∈ [0, ∞)Z such that k s, i.e., Gk,s(λ) = pk(λ)Gk+1,s(λ) on the set { {k, s} ∈ [0, ∞)Z × [0, ∞)Z | k s } ; (iv) Gk,k(λ) = pk(λ)Gk+1,k(λ) − J for every k ∈ [0, ∞)Z; (v) the columns of G·,s(λ) belong to l2 ψ for every s ∈ [0, ∞)Z and the columns of Gk,·(λ) belong to l2 ψ for every k ∈ [0, ∞)Z. Proof. The first property follows directly from the definition of Gk,s(λ) given in (6.37). 
The second property can be obtained from (6.37) by means of identity (6.36). For the proof of the third property we distinguish two cases for the calculation of the value of Gk,s(λ) − pk(λ)Gk+1,s(λ) and use the fact that X+(λ) and Z(λ) solve system (Sλ), i.e., Gk,s(λ) − pk(λ)Gk+1,s(λ) = { [X+ k (λ) − pk(λ)X+ k+1(λ)]Z ∗ s(¯λ) = 0 when s < k < k + 1, [Zk(λ) − pk(λ)Zk+1(λ)]X+∗ s (¯λ) = 0 when s ≥ k + 1 > k. Property (iv) can be proven by using the definition of G(λ) in (6.37) together with identities (6.34) and (6.16). Finally, the columns of G·,s(λ) belong to l2 ψ for every s ∈ [0, ∞)Z – 101 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations by the definition of the Green function and the square summability of the Weyl solution X+(λ), because ||G·,s(λ)ej ||2 ψ =e∗ jX+ s (¯λ) ( s∑ k=0 Z ∗ k(λ)ψk Zk(λ) ) X+∗ s (¯λ)ej + e∗ j Zs(¯λ) ( ∞∑ k=s+1 X∗+ k (λ)ψkX+ k (λ) ) Z ∗ s(¯λ)ej < ∞, while the columns of G∗ k,· (λ) are in l2 ψ for every k ∈ [0, ∞)Z by the similar calculation and the equivalent expression given in (6.38). ■ Let us associate with system (S f λ ) the sequence z(λ) ∈ C([0, ∞)Z)2n, where zk(λ) := ∞∑ s=0 Gk,s(λ)ψs fs, k ∈ [0, ∞)Z. (6.39) By (6.38), it can be written as zk(λ) = X+ k (λ) k−1∑ s=0 Z ∗ s(¯λ)ψs fs + Zk(λ) ∞∑ s=k X+∗ s (¯λ)ψs fs, (6.40) which shows that z(λ) is well defined for all f ∈ l2 ψ by the Cauchy–Schwarz inequality, because the columns of X+(¯λ) are square summable, see also Lemma 6.3.2(v). Similarly as in [A4, Theorem 5.2] we show that the above defined function z(λ) represents a square summable solution of system (S f λ ) with f ∈ l2 ψ . Theorem 6.3.3. Let α ∈ , λ ∈ CKR, f ∈ l2 ψ , and system (Sλ) be definite on [0, ∞)Z. The sequence z(λ) defined in (6.39) solves system (S f λ ) on [0, ∞)Z, satisfies the initial condition αz0(λ) = 0, is square summable, i.e., z(λ) ∈ l2 ψ , and it holds ||z(λ)||ψ ≤ 1 | im(λ)| || f ||ψ. (6.41) In addition, if system (Sλ) is in the limit point or limit circle case for all λ ∈ CKR, then lim k→∞ X+∗ k (ν)Jzk(λ) = 0 for every ν ∈ CKR. (6.42) Proof. The form of zk(λ) given in (6.40) together with the similar expression of zk+1(λ), the fact that X+(λ) and Z(λ) solve system (Sλ), and identity (6.36) yield zk(λ) − pk(λ)zk+1(λ) = X+ k (λ) k−1∑ s=0 Z ∗ s(¯λ)ψs fs + Zk(λ) ∞∑ s=k X+∗ s (¯λ)ψs fs − pk(λ)X+ k+1(λ) k∑ s=0 Z ∗ s(¯λ)ψs fs − pk(λ)Zk+1(λ) ∞∑ s=k+1 X+∗ s (¯λ)ψs fs = − [ X+ k (λ)Z ∗ k(¯λ) − Zk(λ)X+∗ k (¯λ) ] ψk fk = −Jψk fk, i.e., the sequence z(λ) solves system (S f λ ). – 102 – 6.3. Nonhomogeneous problem The fulfillment of the boundary condition follows by the simple calculation αz0(λ) = αZ0(λ) ∞∑ s=0 X+∗ s (¯λ)ψs fs = −αJα∗ ∞∑ s=0 X+∗ s (¯λ)ψs fs = 0, because Z0(λ) = −Jα∗ and α ∈ . Next, we prove the estimate in (6.41) which together with the assumption f ∈ l2 ψ will imply that z(λ) ∈ l2 ψ . For every r ∈ [0, ∞)Z we define the function f[r] k :=    fk, k ∈ [0, r]Z, 0, k ∈ [r + 1, ∞)Z, and the function z[r] k (λ) := ∞∑ s=0 Gk,s(λ)ψs f[r] s = r∑ s=0 Gk,s(λ)ψs fs. (6.43) Then z[r] (λ) solves system (S f λ ) with f replaced by f[r] . Applying the extended Lagrange identity from Theorem 6.1.3, we obtain lim k→∞ z[r]∗ k+1 (λ)Jz[r] k+1 (λ) = z[r]∗ 0 (λ)Jz[r] 0 (λ) + (¯λ − λ) ∞∑ k=0 z[r]∗ k (λ)ψk z[r] k (λ) + ∞∑ k=0 f[r]∗ k ψk z[r] k (λ) − ∞∑ k=0 z[r]∗ k (λ)ψk f[r] k . (6.44) Since Z0(λ) = −Jα∗ and α ∈ , we see that z[r]∗ 0 (λ)Jz[r] 0 (λ) = ( r∑ s=0 X+∗ s (¯λ)ψs fs )∗ Z ∗ 0(λ)JZ0(λ) ( r∑ s=0 X+∗ s (¯λ)ψs fs ) = 0. 
(6.45) For every k ∈ [r + 1, ∞)Z we can also write z[r] k (λ) = X+ k (λ)gr(λ), where gr(λ) := r∑ s=0 Z ∗ s(¯λ)ψs fs, (6.46) which, together with the fact that M+(λ) belongs to the limiting Weyl disk, yields 1 ¯λ − λ lim k→∞ z[r]∗ k+1 (λ)Jz[r] k+1 (λ) = iδ(λ) 2| im(λ)| g∗ r(λ) ( lim k→∞ X+∗ k+1(λ)JX+ k+1(λ) ) gr(λ) = 1 2| im(λ)| g∗ r(λ) ( lim k→∞ Ek+1(M+(λ)) ) gr(λ) ≤ 0. (6.47) By using identities (6.44), (6.45), and (6.47), the assumption λ ¯λ, the Hermitian property of ψk, and the Cauchy–Schwarz inequality we get ||z[r] (λ)||2 ψ = ∞∑ k=0 z[r]∗ k (λ)ψk z[r] k (λ) ≤ 1 2i im(λ) ( r∑ k=0 f[r]∗ k ψk z[r] k (λ) − r∑ k=0 z[r]∗ k (λ)ψk f[r] k ) ≤ 1 | im(λ)| r∑ k=0 z[r]∗ k (λ)ψk f[r] k ≤ 1 | im(λ)| ( r∑ k=0 z[r]∗ k (λ)ψk z[r] k (λ) )1/2 ( r∑ k=0 f[r]∗ k ψk f[r] k )1/2 ≤ 1 | im(λ)| ||z[r] (λ)||ψ × || f[r] ||ψ, – 103 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations thereby yielding the inequality ||z[r] (λ)||ψ ≤ 1 | im(λ)| || f[r] ||ψ ≤ 1 | im(λ)| || f ||ψ, (6.48) because z[r] (λ) ∈ l2 ψ . Upon combining identities (6.39) and (6.43) for any k, r ∈ [0, ∞)Z we easily calculate zk(λ) − z[r] k (λ) = ∞∑ s=r+1 Gk,s(λ)ψs fs. Now, let t ∈ [0, r]Z. Then from the definition of G(λ) given in (6.37) we obtain for every k ∈ [0, t]Z that zk(λ) − z[r] k (λ) = Zk(λ) ∞∑ s=r+1 X+∗ s (¯λ)ψs fs. (6.49) Since the columns of the Weyl solution X+(¯λ) and the function f belong to l2 ψ , it follows that ∑∞ s=0 X+∗ s (¯λ)ψs fs < ∞ by the Cauchy–Schwarz inequality. Hence the right-hand side of (6.49) tends to zero as r → ∞ for every k ∈ [0, t]Z, which shows that z[r] converges uniformly to z(λ) on the interval [0, t]Z. Moreover, by (6.48), we have t∑ k=0 z[r]∗ k (λ)ψk z[r] k (λ) ≤ ||z[r] (λ)||2 ψ ≤ 1 | im(λ)|2 || f ||2 ψ, from which, as a consequence of the uniform convergence of z[r] (λ) for r → ∞ on [0, t]Z, we see that t∑ k=0 z∗ k(λ)ψk zk(λ) ≤ 1 | im(λ)|2 || f ||2 ψ. (6.50) As identity (6.50) is satisfied for any t ∈ [0, ∞)Z, the desired estimate in (6.41) follows. Finally, to establish the existence of the limit in (6.42), assume that system (Sλ) is in the limit point case for all λ ∈ CKR and ν ∈ CKR is arbitrary. From the extended Lagrange identity in Theorem 6.1.3, for any k, r ∈ [0, ∞)Z, we obtain [ X+∗ j (ν)Jz[r] j (λ) ]k+1 0 = (¯ν − λ) k∑ j=0 X+∗ j (ν)ψj z[r] j (λ) − k∑ j=0 X+∗ j (ν)ψj f[r] j (λ). (6.51) If r ∈ [0, ∞)Z and k ∈ [r + 1, ∞)Z, then identities (6.35) and (6.46) imply lim k→∞ X+∗ k+1(ν)Jz[r] k+1 (λ) (6.46) = lim k→∞ X+∗ k+1(ν)JX+ k+1(λ)gr(λ) (6.35) = 0. Hence upon taking in (6.51) the limit for k → ∞ we get X+∗ 0 (ν)Jz[r] 0 (λ) = (λ − ¯ν) ∞∑ j=0 X+∗ j (ν)ψj z[r] j (λ) + ∞∑ j=0 X+∗ j (ν)ψj f[r] j (λ). (6.52) Since, by the previous part, z[r] (λ) converges uniformly on finite subintervals of [0, ∞)Z to z(λ) as r → ∞, identity (6.52) yields X+∗ 0 (ν)Jz0(λ) = (λ − ¯ν) ∞∑ j=0 X+∗ j (ν)ψj zj(λ) + ∞∑ j=0 X+∗ j (ν)ψj fj(λ). (6.53) – 104 – 6.3. Nonhomogeneous problem Simultaneously, as in (6.51), we obtain from (6.11) for every k ∈ [0, ∞)Z that [ X+∗ j (ν)Jzj(λ) ]k+1 0 = (¯ν − λ) k∑ j=0 X+∗ j (ν)ψj zj(λ) − k∑ j=0 X+∗ j (ν)ψj fj(λ). (6.54) Hence by letting k → ∞ in (6.54) and using equality (6.53) the limit in (6.42) is established. An argument, similar to that given above, can be used also in the limit circle case to show the existence of the limit in (6.42), because all solutions of system (Sλ) are square summable in that case. However, an alternative and more direct method of the proof is available. 
It utilizes the fact that Z(¯λ) ∈ l2 ψ in the limit circle case, which is not true in the limit point case, q.v. the proof of Theorem 2.4.3. More specifically, by (6.40) we have for every k ∈ [0, ∞)Z that X+∗ k (ν)Jzk(λ) = X+∗ k (ν)JX+ k (λ) k−1∑ s=0 Z ∗ s(¯λ)ψs fs + X+∗ k (ν)JZk(λ) ∞∑ s=k X+∗ s (¯λ)ψs fs. (6.55) The limit in (6.42) follows by the fact that both the terms on the right-hand side of (6.55) tend to zero as k → ∞. Indeed, the zero limit of the first term is a consequence of the relation in (6.35) together with the convergence of the sum for k → ∞, which we get by the Cauchy–Schwarz inequality. The second term tends to zero because X+∗(ν)JZ(λ) is bounded by equality (6.11) and the sum converges to zero as k → ∞. ■ In the last result of this section, we extend [A4, Corollary 5.3] to the case of general linear dependence on the spectral parameter. Corollary 6.3.4. Let α ∈ , λ ∈ CKR, f ∈ l2 ψ , and ξ ∈ Cn. Assume that system (Sλ) is definite on the discrete interval [0, ∞)Z, and define ˆzk(λ) := X+ k (λ)ξ + zk(λ), k ∈ [0, ∞)Z, (6.56) where zk(λ) is given in (6.39). Then the sequence ˆz(λ) ∈ C([0, ∞)Z)2n represents a square summable solution of system (S f λ ) satisfying the initial condition α ˆz0(λ) = ξ and || ˆz(λ)||ψ ≤ 1 | im(λ)| || f ||ψ + ||X+ (λ)ξ||ψ. (6.57) If system (Sλ) is in the limit point or in the limit circle case for all λ ∈ CKR, we have lim k→∞ X+∗ k (ν)J ˆzk(λ) = 0 for every ν ∈ CKR. (6.58) Moreover, in the limit point case the sequence ˆz(λ) is the unique square summable solution of system (S f λ ) satisfying α ˆz0(λ) = ξ, while in the limit circle case ˆz(λ) is the unique solution of (S f λ ) being in l2 ψ such that α ˆz0(λ) = ξ and lim k→∞ X+∗ k (¯λ)J ˆzk(λ) = 0. (6.59) Proof. Since X+(λ)ξ solves system (Sλ) and Θ0(λ, α) = (α∗, −Jα∗), it follows from Theorem 6.3.3 that the sequence ˆz(λ) solves the nonhomogeneous system (S f λ ) and satisfies α ˆz0(λ) = αX+ 0 (λ)ξ = ξ. The estimate in (6.57) follows directly from (6.56) and (6.41) by – 105 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations the triangle inequality. The limit in (6.58) follows in the limit point or in the limit circle case from (6.35), (6.42), and from the calculation lim k→∞ X+∗ k (ν)J ˆzk(λ) = lim k→∞ { X+∗ k (ν)JX+ k (λ)ξ + X+∗ k (ν)Jzk(λ) } = 0. Finally, we prove the uniqueness of the solution in the limit point and limit circle cases. Assume that z[1] (λ), z[2] (λ) ∈ C([0, ∞)Z)2n are two square summable solutions of system (S f λ ) satisfying αz[1] 0 (λ) = ξ = αz[2] 0 (λ). Then zk(λ) := z[1] k (λ) − z[2] k (λ), k ∈ [0, ∞)Z, represents a square summable solution of the homogeneous system (Sλ) and it satisfies αz0(λ) = 0. Since z(λ) = Θ(λ, α)ζ for some ζ ∈ C2n, the initial condition αz0(λ) = 0 implies that zk(λ) = Zk(λ)η for some η ∈ Cn. If system (Sλ) is in the limit point case, we have z(λ) l2 ψ for η 0, because the columns of Z(λ) do not belong to l2 ψ in this case, see Theorem 2.4.3. Therefore η = 0 and the uniqueness follows. On the other hand, if (Sλ) is in the limit circle case and both z[1] (λ) and z[2] (λ) satisfy also the limit relation given in (6.59), then we obtain from the previous part and the first identity in (6.16) that 0 = lim k→∞ X+∗ k (¯λ)Jzk(λ) = lim k→∞ X+∗ k (¯λ)JΘk(λ) ( 0 η ) = lim k→∞ ( I M∗ +(¯λ) ) Θ∗ k(¯λ)JΘk(λ) ( 0 η ) = η, which implies the uniqueness of the solution ˆz(λ) also in the latter case. ■ 6.4 Maximal and minimal linear relations Finally, in this section we come to the topic of linear relations. 
We focus on a pair of linear relations defined in terms of the linear map, L (·), introduced in (6.5) in association with system (Sλ). Let us mention that similar results for linear Hamiltonian differential and difference systems can be found in [13,116,134]. Moreover, we remind that a short introduction to the theory of linear relations is available in the Appendix and we recommend to read this passage now if it is not familiar to the reader. For the present treatment we need to introduce some spaces in addition to the space of square summable sequences l2 ψ defined in (6.9). Namely, we denote by ˜l2 ψ the quotient space obtained by factoring out the kernel of the semi-norm ||·||ψ, i.e., ˜l2 ψ := l2 ψ /{ z ∈ C(I+ Z )2n | ||z||ψ = 0 } . (6.60) It is easy to see that ˜l2 ψ is a Banach space with respect to the norm generated by the quotient space map π(z) := ˜z and simultaneously it is a Hilbert space with respect to the associated inner product ⟨˜z, ˜v⟩ψ := ⟨z, v⟩ψ, where z and f are elements of the equivalence classes ˜z ∈ ˜l2 ψ and ˜f ∈ ˜l2 ψ , respectively; cf. [142, Lemma 2.5]. Henceforward, we denote by the superscript ˜· the corresponding equivalence class, i.e., if z ∈ l2 ψ then ˜z ∈ ˜l2 ψ is such that z ∈ ˜z. In addition, we point out that the value of ||z||ψ := √ ⟨z, z⟩ψ for z ∈ C(I+ Z )2n does not depend on zN+1 in the case of IZ being a finite discrete interval, which implies that the sequences z, v ∈ C(I+ Z )2n such that zk vk only for k = N + 1, belong to the same equivalence class. In addition, the product space ˜l2×2 ψ := ˜l2 ψ × ˜l2 ψ is also a pre-symplectic space with the associated function [· : ·] : ˜l2×2 ψ × ˜l2×2 ψ → C given by [{˜z, ˜f} : {w, ˜g}] := ⟨ ˜f, w⟩ − ⟨˜z, ˜g⟩ , (6.61) – 106 – 6.4. Maximal and minimal linear relations see (6.14) and the second part of the Appendix. Moreover, we define the subspace l2 ψ,0 :=    { z ∈ C0(I+ Z )2n ∩ l2 ψ | z0 = 0, zN+1 = 0 } if IZ = [0, N]Z, N ∈ N ∪ {0}, { z ∈ C0(I+ Z )2n ∩ l2 ψ | z0 = 0 } if IZ = [0, ∞)Z, and if IZ = [0, ∞)Z also l2 ψ,1 := { z ∈ l2 ψ | there exists K ∈ [0, ∞)Z such that ψk zk = 0 for all k ∈ [K, ∞)Z } . Finally, the space ˜l2 ψ,1 consists of equivalence classes similarly as ˜l2 ψ and it is easy to see that ˜l2 ψ,1 is a dense subspace of ˜l2 ψ . 6.4.1 Linear relations and definiteness Now we can introduce the maximal linear relation Tmax as a subspace of ˜l2×2 ψ given by Tmax := { {˜z, ˜f} ∈ ˜l2×2 ψ | there exists z ∈ ˜z such that L (z) = ψ f } . (6.62) Note that when L (z) = ψ f, then L (z) = ψg for any g ∈ ˜f, i.e., the definition of Tmax does not depend on the choice of the representative g ∈ ˜f. Similarly, we define the pre-minimal linear relation T0 := { {˜z, ˜f} ∈ ˜l2×2 ψ | there exists z ∈ ˜z ∩ l2 ψ,0 such that L (z) = ψ f } , (6.63) which evidently satisfies T0 ⊆ Tmax. In addition, by (A.6) we put T0 − λI := { {˜z, ˜f} ∈ ˜l2×2 ψ | there exists z ∈ ˜z ∩ l2 ψ,0 such that L (z) = λψz + ψf } . (6.64) The consideration of linear relations (instead of operators) in our current context is natural given that the weight ψ, being present on the right-hand side of system (S f λ ) and in the definitions of the sequence spaces associated with Tmax and Tmin, has terms none of which are positive definite, but all of which are only positive semidefinite, see (6.7). Moreover, this fact is affirmed by the following simple example, which is analogous to that found in [116, Section 2] for system (2.5). Example 6.4.1. 
Let n = 1 and consider system (S f 0 ) with Sk ≡ ( 1 0 0 1 ) , ψk ≡ ( 1 0 0 0 ) , fk =   f[1] k f[2] k   , and zk = ( xk uk ) . (6.65) Then L (z) and (S f 0 ), respectively, can be written as L (z) = ( 0 −1 1 0 ) z and z = ( x u ) = ( 0 −f[1] ) . (6.66) Hence, for any f ∈ l2 ψ , the sequence z ∈ C(I+ Z )2 with the terms z0 = 0 and zk =   0 − ∑k−1 j=0 f[1] j   for all k ∈ I+ Z K{0} – 107 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations solves the nonhomogeneous system in (6.66), i.e., system (S f 0 ) with the coefficients specified in (6.65). Since obviously z ∈ ˜0, we have {˜0, ˜f} ∈ Tmax for any ˜f ∈ ˜l2 ψ . Thus, the multivalued part of the corresponding linear relation Tmax, i.e., mul Tmax defined in (A.3), is nontrivial, which means that Tmax is not a graph of a linear operator. Let us also note that a solution of system (S f 0 ) in (6.66) which is an element of the space l2 ψ,0 , of necessity is such that xk = 0 for all k ∈ I+ Z with the consequence that dom T0 = {˜0}, i.e., the set dom T0 is not dense in l2 ψ . ▲ The latter example illustrates yet another interesting situation: for any c ∈ C the sequence zk(λ) ≡ (0, c)⊤ ∈ ˜0 represents a solution of the nonhomogeneous system in (6.66) with f ∈ ˜0, i.e., f[1] k ≡ 0. It means that for the pair {˜0, ˜0} there exists infinitely many representatives z ∈ ˜0 such that L (z) = ψf = 0 and, at the same time, it implies that system (Sλ) with the coefficients given in (6.65) is not definite on the discrete interval IZ by Lemma 6.2.3. In the next result we characterize the definiteness of system (Sλ) in terms of the domain of Tmax. Theorem 6.4.2. System (Sλ) is definite on the discrete interval IZ if and only if for any pair {˜z, ˜f} ∈ Tmax there exists a unique z ∈ ˜z such that L (z) = ψ f. Proof. Assume that system (Sλ) is definite on IZ. Let {˜z, ˜f} ∈ Tmax and z[1] , z[2] ∈ ˜z be two representatives of the equivalence class ˜z such that L (z[1] ) = ψ f = L (z[2] ). If we put vk := z[1] k − z[2] k for all k ∈ I+ Z , then L (v) = 0 and v ∈ ˜0 ∈ ˜l2 ψ , i.e., v solves system (S0) and satisfies ∑ k∈I v∗ k ψk vk = 0. Therefore, by Remark 6.2.2, the definiteness of system (Sλ) implies that vk ≡ 0 on I+ Z , i.e., z[1] ≡ z[2] on I+ Z and so there exists only one representative z ∈ ˜z such that L (z) = ψf. To show the converse, assume that there is only one z ∈ ˜z for which L (z) = ψf, whenever {˜z, ˜f} ∈ Tmax. Let IZ ⊆ IZ be a finite discrete interval such that rank ϑ(IZ) is maximal; cf. Lemma 6.2.10. If rank ϑ(IZ) < 2n, then there is a vector η ∈ C2nK{0} such that η∗ ϑ(0, IZ)η = ∑ k∈I η∗ Θ∗ k ψk Θk η = 0, where Θk := Θk(0) is the fundamental matrix of system (Sλ) specified in the paragraph preceding Lemma 6.2.6 with k0 ∈ I + Z . If also∑ k∈I η∗Θ∗ k ψkΘkη = 0, then v := Θη ∈ l2 ψ , L (v) = 0, and v ∈ ˜0. Given that the zero sequence is the unique representative of ˜0 satisfying L (z) = 0, it follows vk = Θk η = 0 for all k ∈ I+ Z and as a consequence η = 0, which contradicts the assumption η 0. Hence, there is a finite discrete interval superset IZ ⊃ IZ such that ϑ(0, IZ)η 0. As a result, Ker ϑ(IZ) ⊂ Ker ϑ(IZ) and consequently Ran ϑ(IZ) ⊂ Ran ϑ(IZ); thereby contradicting the maximality of rank ϑ(IZ). Thus, rank ϑ(IZ) = 2n and system (Sλ) is definite on the discrete interval IZ by Theorem 6.2.11. ■ Naturally, the latter statement does not mean that the equivalence class ˜z contains only one representative as it can be easily seen, e.g., from the example discussed in Remark 6.4.4 below. 
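For instance, for the scalar weight ψk = diag{wk, 0} from Example 6.2.4, any two sequences z, v ∈ l2 ψ whose first components coincide on IZ belong to the same equivalence class, no matter how their second components differ; the uniqueness asserted in Theorem 6.4.2 thus concerns the representative solving L (z) = ψ f within the class ˜z, not the cardinality of the class itself.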
Nevertheless, if system (Sλ) is definite on the discrete interval IZ and {˜z, ˜f} ∈ Tmax, then we denote the unique representative z ∈ ˜z satisfying L (z) = ψf by ˆz, i.e., ˆz := z. In this case, identities (6.14) and (6.61) yield [{˜z, ˜f} : {˜v, ˜g}] = ⟨ f, ˆv⟩ψ − ⟨ˆz, g⟩ψ = (ˆz, ˆv)k N+1 0 (6.67) for any {˜z, ˜f}, {˜v, ˜g} ∈ Tmax, where f ∈ ˜f and g ∈ ˜g are arbitrary representatives. Moreover, we obtain from Lemma 6.2.8 the following statement, compare with [135, Remark 3.2] and [147, Lemma 3.3]. – 108 – 6.4. Maximal and minimal linear relations Lemma 6.4.3. Let system (Sλ) be definite on a finite discrete interval ID Z := [a, b]Z ⊆ IZ, where a, b ∈ IZ. Then for any pairs {˜z, ˜f}, {˜v, ˜g} ∈ Tmax there exists {˜r, ˜h} ∈ Tmax such that ˆrk = ˆzk for k ∈ [0, c]Z and ˆrk = ˆvk for k ∈ [d + 1, ∞)Z ∩ I+ Z , where c ∈ [0, a]Z and d ∈ [b, ∞)Z ∩ IZ. In particular, there exists {˜z[i] , ˜f[i] } ∈ Tmax such that ˆz[i] 0 = ei and ˆz[i] k = 0 for k ∈ [d+1, ∞)Z ∩ I+ Z , where i ∈ {1, . . . , 2n} is arbitrary and ei = (0, . . . , 1, . . . , 0)⊤ ∈ C2n is the i-th canonical unit vector in C2n. If, in addition, IZ is a finite discrete interval, i.e., IZ = [0, N]Z with N ∈ N ∪ {0}, then there exists {˜r, ˜h} ∈ Tmax such that ˆrk = 0 for k ∈ [0, c]Z and ˆrN+1 = ei, where i ∈ {1, . . . , 2n} and ei are the same as above. Proof. Let IZ be a finite discrete interval as in Lemma 6.2.8, the pairs {˜z, ˜f}, {˜v, ˜g} ∈ Tmax be arbitrary, and define ξ := ˆzc, η := ˆvd+1. Then by the latter lemma there exist sequences ℓ ∈ C(IZ)2n and s ∈ C(I + Z )2n such that L (s)k = ψk ℓk, sc = ξ, sd+1 = η, k ∈ IZ. If we put rk := ˆzk for k ∈ [0, c]Z ∩ IZ, rk := sk for k ∈ [c + 1, d]Z ∩ IZ, rk := ˆvk for k ∈ [d + 1, ∞)Z ∩ I+ Z , and hk := fk for k ∈ [0, c − 1]Z ∩ IZ, hk := ℓk for k ∈ [c, d]Z ∩ IZ, hk := gk for k ∈ [d + 1, ∞)Z ∩ IZ, then obviously r, h ∈ l2 ψ and it can be verified by a direct calculation that L (r)k = ψk hk for all k ∈ IZ, i.e., {˜r, ˜h} ∈ Tmax with ˆrk ≡ rk. The second part of the statement follows directly from Lemma 6.2.8. ■ Remark 6.4.4. In Example 6.4.1 we constructed the linear map L and the nonhomogeneous system, see (6.66), with two significant properties: (i) the maximal linear relation does not determine a linear operator and (ii) the domain of the pre-minimal linear relation is not dense in ˜l2 ψ . In addition, let us consider system (6.22) from Example 6.2.4 with the coefficients pk ≡ 1, qk ≡ 0, and wk ≡ 1 on IZ with N ≠ 0, which imply the definiteness of system (6.22) on IZ. If we put fk = ( f[1] k , f[2] k )⊤ with f0 = (1, 0)⊤ and fk = (0, 0)⊤ for all k ∈ IZ K{0}, then the corresponding nonhomogeneous system zk = Sk zk+1 − Jψk fk, i.e., ( xk uk ) = ( 1 −1 0 1 ) ( xk+1 uk+1 ) + ( 0 f[1] k ) , possesses the solution zk = (xk, uk)⊤, where z0 = (0, 1)⊤ and zk ≡ (0, 0)⊤ on I+ Z K{0}, i.e., there exists ˜f ≠ ˜0 such that {˜0, ˜f} ∈ Tmax. This shows that even the definiteness of system (Sλ) does not suffice to get mul Tmax = {˜0}; cf. [147]. Thus, to guarantee that the maximal linear relation defines an operator, we need to assume explicitly that mul Tmax = {˜0}, i.e., if there exists z ∈ ˜0 such that L (z) = ψf for some f ∈ l2 ψ , then z ≡ 0; cf. [108, pg. 666]. In other words, we assume the “definiteness” of system (S f 0 ) for every f ∈ l2 ψ . Then system (Sλ) is definite as it follows by the choice f ≡ 0, see Theorem 6.2.11(iii) and Corollary 6.2.12. Moreover, in the next theorem we prove that this assumption even yields the density of dom T0 in ˜l2 ψ ; cf. [108, Theorem 7.6]. As noted in [10, pg.
3], a similar condition is also needed for the study of operators associated with system (2.6) in [142]. – 109 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations Theorem 6.4.5. If system (S f 0 ) is definite on IZ, then dom T0 is dense in ˜l2 ψ . Proof. Assume that dom T0 is not dense in ˜l2 ψ , i.e., there exists ˜f ∈ (dom T0)⊥ such that || f ||ψ 0. Let ˜z ∈ dom T0 be such that L (z) = ψ f and ˜v ∈ dom T0 be such that L (v) = ψg for some g ∈ l2 ψ . Then, by identity (6.14), we obtain ⟨z, g⟩ψ = ⟨ f, v⟩ψ = 0, (6.68) because z, v ∈ l2 ψ,0 . Since g ∈ l2 ψ was chosen arbitrarily and z ∈ l2 ψ,0 , we can take g = z and the solution of L (v) = ψz can be obtained as vk = Θk J ∑k−1 j=0 Θ∗ j ψj zj, where Θ := Θ(0) means a fundamental matrix of (S0) satisfying (6.15) for any s ∈ I+ Z . Then equality (6.68) implies that ⟨z, z⟩ψ = 0, i.e., ψk zk = 0 on IZ. Thus we have L (z) = ψf and z ∈ ˜0, which yields z ≡ 0 by the definiteness assumption for system (S f 0 ). So ψk fk = 0 on IZ, i.e., f ∈ ˜0, and the density of dom T0 in ˜l2 ψ is thus established. ■ 6.4.2 Orthogonal decomposition of sequence spaces In this subsection we introduce a linear map which will allow orthogonal decompositions of ˜l2 ψ and ˜l2 ψ,1 depending on the cardinality of the discrete interval IZ. In particular, let us denote by Kλ the linear map defined by Kλ :    ˜l2 ψ → C2n if IZ is a finite discrete interval, ˜l2 ψ,1 → C2n if IZ = [0, ∞)Z, Kλ(˜z) := ∑ k∈I Θ∗ k(¯λ)ψk zk, (6.69) where Θ(λ) is the fundamental matrix of system (Sλ) specified in the paragraph preceding Lemma 6.2.6 with k0 ∈ I+ Z . If λ = 0 we write only K(·) instead of K0(·). For completeness, we note that the sum ∑ k∈I Θ∗ k (¯λ)ψk zk does not depend on the choice of the representative z ∈ ˜z ∈ ˜l2 ψ or z ∈ ˜z ∈ ˜l2 ψ,1 , i.e., the map Kλ is defined correctly. In the following statement we utilize the matrix ϑ(·) defined in (6.31). Lemma 6.4.6. Let IZ be a finite discrete interval and λ ∈ C. Then Ran Kλ is independent of λ ∈ C; in particular, Ran Kλ = { ξ ∈ C2n | ψk Θk(¯λ)ξ = 0 for all k ∈ IZ }⊥ = Ran ϑ(IZ). (6.70) Furthermore, the space ˜l2 ψ admits the following orthogonal sum decomposition ˜l2 ψ = Ker Kλ ⊕ { ˜z ∈ ˜l2 ψ | z = Θ(¯λ)ξ, ξ ∈ Ran ϑ(IZ) } . (6.71) Proof. For any ξ ∈ C2n and ˜v ∈ ˜l2 ψ , we have ⟨ξ, Kλ(˜v)⟩C2n = ∑ k∈I ξ∗ Θ∗ k(¯λ)ψk vk = ⟨Θ(¯λ)ξ, v⟩ψ. Hence we have K∗ λ : C2n → ˜l2 ψ with K∗ λ (ξ) := ˜z for ξ ∈ C2n, where ˜z is the equivalence class corresponding to z = Θ(¯λ)ξ. In particular, Ran K∗ λ = { ˜z ∈ ˜l2 ψ | z = Θ(¯λ)ξ ∈ ˜z, ξ ∈ C2n } – 110 – 6.4. Maximal and minimal linear relations and Ker K∗ λ = { ξ ∈ C2n | ||Θ(¯λ)ξ||ψ = 0 } = { ξ ∈ C2n | ψk Θk(¯λ)ξ = 0 for all k ∈ IZ } . Thus the first equality in (6.70) follows from the fact that Ran Kλ = (Ker K∗ λ )⊥. Next, let us define the sequences z := Θ(¯λ)ξ ∈ C(I+ Z )2n, v := Θ(¯λ)η ∈ C(I+ Z )2n, and r := Θ(¯λ)ζ ∈ C(I+ Z )2n, where ξ, η, ζ ∈ C2n are such that ξ = η + ζ with η ∈ Ran Kλ and ζ ∈ (Ran Kλ)⊥ = Ker K∗ λ . Then ||r||ψ = ||Θ(¯λ)ζ||ψ = 0, i.e., ˜r = ˜0 ∈ ˜l2 ψ . Thus ˜z = ˜v and Ran K∗ λ = { ˜z ∈ ˜l2 ψ | z = Θ(¯λ)η, η ∈ Ran Kλ } . So, to complete the demonstration of equalities (6.70) and (6.71), it remains to show that Ran Kλ = Ran ϑ(IZ), because ˜l2 ψ = Ker Kλ ⊕ (Ker Kλ)⊥ = Ker Kλ ⊕ Ran K∗ λ . Hence, let now ˜z ∈ ˜l2 ψ . Then by the previous part ˜z = ˜r + ˜v, where ˜r ∈ Ker Kλ and the equivalence class ˜v corresponds to v = Θ(¯λ)η with η ∈ Ran Kλ. Therefore Kλ(˜z) = Kλ(˜v) = ∑ k∈I Θ∗ k(¯λ)ψk Θk(¯λ)η = ϑ(λ, IZ)η, which implies Ran Kλ ⊆ Ran ϑ(IZ). 
On the other hand, if ξ ∈ C2n and z = Θ(¯λ)ξ, then ϑ(λ, IZ)ξ = ∑ k∈I Θ∗ k(¯λ)ψk Θk(¯λ)ξ = ∑ k∈I Θ∗ k(¯λ)ψk zk = Kλ(˜z); thereby showing that Ran Kλ = Ran ϑ(IZ). ■ In the next two lemmas we establish similar results in the case IZ = [0, ∞)Z. Let us recall that then there is a finite discrete interval IZ ⊂ [0, ∞)Z for which the value of rank ϑ(IZ) is maximal by Lemma 6.2.10, i.e., rank ϑ(IZ) = rank ϑ(IZ) for any finite discrete interval IZ such that IZ ⊆ IZ, viz. (6.32). Moreover, if IZ is a finite discrete interval for which rank ϑ(IZ) is maximal, then we have Ran Kλ,I = Ran Kλ for any finite discrete interval IZ such that IZ ⊆ IZ ⊂ [0, ∞)Z, see (6.70). Here Kλ,I means the map Kλ defined as in (6.69) with IZ replaced by IZ. Especially, dom Kλ,I = ˜l2 ψ,I , where ˜l2 ψ,I is given as in (6.60) for the discrete interval IZ instead of IZ. Lemma 6.4.7. Let IZ = [0, ∞)Z and IZ ⊂ [0, ∞)Z be a finite discrete interval for which rank ϑ(IZ) is maximal. Then Ran Kλ = Ran ϑ(IZ) for all λ ∈ C; in particular, Ran Kλ is independent of λ. Proof. Let ˜z ∈ dom Kλ = ˜l2 ψ,1 . Then there exists K ∈ [0, ∞)Z such that ψk zk = 0 for all k ∈ [K, ∞)Z. Hence, z ∈ l2 ψ and ˜z ∈ ˜l2 ψ,[0,K]Z = dom Kλ,[0,K], where ˜l2 ψ,[0,K]Z is defined similarly as ˜l2 ψ in (6.60) with IZ = [0, K]Z. In particular, Kλ,[0,N](˜z) = Kλ(˜z), and thus Ran Kλ ⊆ Ran Kλ,[0,N]. Without loss of generality, we may assume that IZ ⊆ [0, K]Z, and hence that Ran Kλ,I = Ran Kλ,[0,N] ⊇ Ran Kλ. Conversely, suppose that ˜z ∈ ˜l2 ψ,I = dom Kλ,I. Let ˜v ∈ ˜l2 ψ,1 be determined by the sequence v ∈ C([0, ∞)Z)2n with the terms vk = { zk, k ∈ IZ, 0, k IZ. Then Kλ,I(˜z) = ∑ k∈I Θ∗ k (¯λ)ψk zk = ∑∞ k=0 Θ∗ k (¯λ)ψk vk = Kλ(˜v). Hence Ran Kλ,I ⊆ Ran Kλ, which upon combining with the first part of the proof yields Ran Kλ = Ran Kλ,I. Therefore Ran Kλ = Ran ϑ(IZ) by Lemma 6.4.6. ■ – 111 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations Lemma 6.4.8. Let IZ = [0, ∞)Z and IZ ⊂ [0, ∞)Z be a finite discrete interval for which rank ϑ(IZ) is maximal. Then ˜l2 ψ,1 = Ker Kλ ⊕ { ˜z ∈ ˜l2 ψ,1 | z = Θ(¯λ)ξ, ξ ∈ Ran ϑ(IZ) } (6.72) and codim(Ker Kλ) = rank ϑ(IZ). Proof. In the complete analogy with the arguments given in the proof of Lemma 6.4.6, one can show that ˜l2 ψ,1 = Ker Kλ ⊕ (Ker Kλ)⊥ = Ker Kλ ⊕ Ran K∗ λ, where Ran K∗ λ = { ˜z ∈ ˜l2 ψ,1 | z = Θ(¯λ)ξ, ξ ∈ Ran ϑ(IZ) } , and that (Ran Kλ)⊥ = Ker K∗ λ = { ξ ∈ C2n | ||Θ(¯λ)ξ||ψ = 0 } = { ξ ∈ C2n | ψk Θk(¯λ)ξ = 0 for all k ∈ [0, ∞)Z } . Finally, let ξ1, . . . , ξm denote a basis for Ran ϑ(IZ) = Ran Kλ and put v [j] k := Θk(¯λ)ξj for all k ∈ [0, ∞)Z. Then ˜v[1] , . . . , ˜v[m] ∈ Ran K∗ λ and suppose that ∑m j=1 cj ˜v[j] = ˜0 ∈ ˜l2 ψ,1 for some numbers c1, . . . , cm ∈ C. Then ψk Θk(¯λ)( ∑m j=1 cj ξj) = 0 for all k ∈ [0, ∞)Z, which implies m∑ j=1 cj ξj ∈ Ran Kλ ∩ (Ran Kλ)⊥ = {0} by the first part. Hence c1 = · · · = cm = 0, i.e., ˜v[1] , . . . , ˜v[m] ∈ Ran K∗ λ are linearly independent in ˜l2 ψ,1 . Thus codim(Ker Kλ) = dim(Ker Kλ)⊥ = dim(Ran K∗ λ ) = rank ϑ(IZ). ■ 6.4.3 Minimal linear relation and its deficiency indices Before we define the minimal linear relation Tmin and establish the fundamental relation between Tmax and Tmin, we prove two auxiliary lemmas. Lemma 6.4.9. We have Ker Kλ ⊆ Ran(T0 − λI) for every λ ∈ C. Proof. Let λ ∈ C and ˜f ∈ Ker Kλ, i.e., Kλ( ˜f) = ∑ k∈I Θ∗ k (¯λ)ψk zk = 0. Assume that IZ is a finite discrete interval, i.e., IZ = [0, N]Z for some N ∈ N ∪ {0}. 
Then ˜f ∈ ˜l2 ψ by (6.69) and for any g ∈ ˜f the sequence z ∈ l2 ψ,0 with the terms zk :=    0, k = 0, Θk(λ)J ∑k−1 j=0 Θ∗ j (¯λ)ψj gj, k ∈ I+ Z K{0}, (6.73) solves the nonhomogeneous system L (z) = λψz + ψg on IZ, see also [61, Theorem 3.17]. Thus, ˜f ∈ Ran(T0 − λI) and the conclusion follows; cf. (6.64). On the other hand, if IZ = [0, ∞)Z, then ˜f ∈ ˜l2 ψ,1 by (6.69), i.e., there exists K ∈ [0, ∞)Z such that ψk gk = 0 for all k ∈ [K, ∞)Z and all g ∈ ˜f. Hence for any g ∈ ˜f the sequence z ∈ l2 ψ with the terms zk := −Θk(λ)J ∞∑ j=k Θ∗ j(¯λ)ψj gj =    −Θk(λ)J ∑K−1 j=k Θ∗ j (¯λ)ψj gj, k ∈ [0, K − 1]Z, 0, k ∈ [K, ∞)Z, (6.74) – 112 – 6.4. Maximal and minimal linear relations satisfies L (z) = λψz + ψg on [0, ∞)Z. In addition, z0 = −Θ0(λ)JKλ( ˜f) = 0, which implies z ∈ l2 ψ,0 . Thus again ˜f ∈ Ran(T0 − λI) and the proof is complete. ■ Lemma 6.4.10. We have ⟨ ˜f, ˜y⟩ψ = ⟨˜z, ˜g⟩ψ for every {˜z, ˜f} ∈ T0 and {˜v, ˜g} ∈ Tmax. Proof. Let {˜z, ˜f} ∈ T0 and {˜v, ˜g} ∈ Tmax, i.e., there are z ∈ l2 ψ,0 and v ∈ l2 ψ such that L (z) = ψ f and L (v) = ψg on IZ. Then, similarly as in (6.14), we get k∑ j=0 z∗ j ψj gj = −(z, v)j k+1 0 + k∑ j=0 f∗ j ψj vj. However, z ∈ l2 ψ,0 implies z0 = 0 and the existence of K ∈ I+ Z such that zk = 0 for all k ∈ [K, ∞)Z ∩I+ Z . As a consequence we see that ⟨z, g⟩ψ = ⟨ f, v⟩ψ, and thus ⟨z, g⟩ψ = ⟨ f, v⟩ψ for any z ∈ ˜z and v ∈ ˜v. ■ By Lemma 6.4.10 and the definition of the adjoint linear relation, see (A.4), one obtains T0 ⊆ Tmax ⊆ T∗ 0, (6.75) from which we conclude that T0 is symmetric in ˜l2×2 ψ , and hence closable. We then define the minimal linear relation Tmin by Tmin := T0. (6.76) Another approach to the definition of a minimal linear relation is to put Tmin := T∗ max; cf. [116, Definition 2.3] and [13, Identity (4.2)]. In the next theorem we establish the equivalence of these approaches for the linear relations at hand. We note also that an alternative demonstration of the next statement can be patterned after that presented in [13, pp. 1354–1355] by using the results in Section 6.3 and [13, Proposition A.2]. Theorem 6.4.11. The linear relations T0, Tmax, and Tmin as defined in (6.63), (6.62), and (6.76), respectively, satisfy T∗ 0 = T∗ min = Tmax. (6.77) Proof. By (6.75), note that Tmax ⊆ T∗ 0 = T0 ∗ = T∗ min . Thus it remains to show that T∗ 0 ⊆ Tmax; or equivalently, given {˜z, ˜f} ∈ T∗ 0 , that there is a z ∈ ˜z ∈ ˜l2 ψ such that L (z) = ψf. Let {˜z, ˜f} ∈ T∗ 0 be given and r ∈ C(I+ Z )2n satisfy L (r) = ψf. Whenever {˜v, ˜g} ∈ T0, i.e., there exists v ∈ ˜v such that v ∈ l2 ψ,0 and L (v) = ψ g, we obtain from (6.14) that ⟨r, g⟩ψ = ⟨f, v⟩ψ, because v0 = 0 and vk = 0 for all sufficiently large k ∈ I+ Z . Simultaneously, from the definition of T∗ 0 , we have ⟨˜z, ˜g⟩ψ = ⟨ ˜f, ˜v⟩ψ. Therefore ⟨z − r, g⟩ψ = 0 for any z ∈ ˜z ∈ dom T∗ 0 and g ∈ ˜g ∈ Ran T0. Consequently, for any h ∈ ˜h ∈ Ker K ⊆ Ran T0, see Lemma 6.4.9 with λ = 0, we get from the previous part that ⟨z − r, h⟩ψ = 0. Thus, for any ξ ∈ C2n we have ∑ k∈I (zk − rk − Θk ξ)∗ ψk hk = 0, (6.78) because K(˜h) = ∑ k∈I Θ∗ k ψk hk = 0 by (6.69). Let IZ = [0, N]Z with N ∈ N ∪ {0}, then obviously ˜z, ˜r ∈ ˜l2 ψ . Hence by (6.71) there exists η ∈ Ran ϑ(IZ) such that the equivalence class corresponding to z−r−Θη belongs to Ker K. But in that case it is equal to ˜0 by (6.78). Therefore ||z − r − Θη||ψ = 0, i.e., r + Θη ∈ ˜z, and L (r + Θη) = L (r) = ψ f, which implies T∗ 0 ⊆ Tmax. – 113 – Chapter 6. 
Nohomogeneous problem and maximal and minimal linear relations In the opposite case, i.e., IZ = [0, ∞)Z, let IZ ⊂ [0, ∞)Z be a finite discrete interval such that k0 ∈ I + Z and the value of rank ϑ(IZ) = m ≤ 2n is maximal as noted in Lemma 6.2.10. Then ˜l2 ψ,1 = Ker K⊕ Ker K⊥ and by Lemma 6.4.8 we see that there exists a basis ˜v[1] , . . . , ˜v[m] ∈ ˜l2 ψ,1 of the space Ker K⊥, in which v [j] k := Θk ξ[j] on [0, ∞)Z for j ∈ {1, . . . , m} with ξ[1] , . . . , ξ[m] forming a basis of Ran ϑ(IZ). Moreover, it follows from the definition of ˜l2 ψ,1 that there is a finite discrete interval IZ = [0, K]Z such that IZ ⊆ IZ and ψk v [j] k = 0 for all k ∈ [K, ∞)Z and j ∈ {1, . . . , m}. (6.79) If ˜d ∈ Ker KI := Ker K0,I, then for all j ∈ {1, . . . , m} we get ⟨v[j] , d⟩ψ,I := ∑ k∈I ξ[j]∗ Θ∗ k ψk dk = ξ[j]∗ KI( ˜d) = 0, which implies ˜v[1] , . . . , ˜v[m] ∈ Ker K⊥ I . Now, let s ∈ l2 ψ,1 be defined by sk :=    zk − rk, k ∈ IZ, 0, k ∈ [0, ∞)Z KIZ, where {˜z, ˜f} and r are the same as in the first part. Obviously ˜s ∈ ˜l2 ψ and, by Lemma 6.4.6, there is p ∈ Ker K⊥ I , where p := Θη for some η ∈ Ran ϑ(IZ), such that ˜s − p ∈ Ker KI. As a consequence, we obtain ⟨˜s − p, ˜v[j] ⟩ψ,I = 0 for all j ∈ {1, . . . , m}. Then, by (6.79), we have for all j ∈ {1, . . . , m} that ∑ k∈I (zk − rk − Θk η)∗ ψk v [j] k = ∑ k∈I (zk − rk − Θk η)∗ ψk v [j] k = ⟨˜s − p, ˜v[j] ⟩ψ,I = 0. (6.80) Hence the direct sum decomposition of ˜l2 ψ,1 in (6.72) and identities (6.78) and (6.80) show that ⟨z − r − Θη, q⟩ψ = 0 for any q ∈ l2 ψ,1 ; in particular, for qk :=    zk1 − rk1 − Θk1 η, k = k1, 0, k ∈ [0, ∞)Z K{k1}. Since k1 ∈ [0, ∞)Z can be chosen arbitrarily, it follows ψk (zk − rk − Θk η) ≡ 0 on [0, ∞)Z, i.e., ||z − (r + Θη)||ψ = 0. Therefore, r + Θη ∈ ˜z and it satisfies L (r + Θη) = L (r) = ψf, which again implies T∗ 0 ⊆ Tmax. ■ Moreover, the following theorem provides an explicit characterization of the minimal linear relation Tmin; cf. [135, Theorem 3.2]. Theorem 6.4.12. Let system (Sλ) be definite on a finite discrete interval ID Z := [a, b]Z ⊆ IZ, where a, b ∈ IZ. Then Tmin = { {˜z, ˜f} ∈ Tmax | ˆz0 = 0 = (ˆz, ˆv)N+1 for all ˜v ∈ dom Tmax } , (6.81) which in the case of IZ being a finite discrete interval reduces to Tmin = { {˜z, ˜f} ∈ Tmax | ˆz0 = 0 = ˆzN+1 } . (6.82) – 114 – 6.4. Maximal and minimal linear relations Proof. Since Tmin = (Tmin) by the definition, identities (A.11), (6.67), and (6.77) yield Tmin = { {˜z, ˜f} ∈ Tmax | (ˆz, ˆv)k N+1 0 = 0 for all ˆv ∈ dom Tmax } . (6.83) Let T be the linear relation on the right-hand side of (6.81). Then obviously T ⊆ Tmin. On the other hand, let {˜z, ˜f} ∈ Tmin be fixed. Then, (ˆz, ˆv)k N+1 0 = 0 for all ˆv ∈ dom Tmax by (6.83). By Lemma 6.4.3, for any {˜v, ˜g} ∈ Tmax there exists {˜r, ˜h} ∈ Tmax such that ˆvk = 0 for k ∈ [0, c]Z and ˆrk = ˆvk for k ∈ [d + 1, ∞)Z ∩ I+ Z . Hence (ˆz, ˆv)0 = (ˆz, ˆv)N+1 = 0 for all ˆv ∈ dom Tmax. From the second part of Lemma 6.4.3 we get ˆz0 = 0, because there exists {˜z[i] , ˜f[i] } ∈ Tmax such that ˆz[i] 0 = ei. Therefore T = Tmin. If, in addition, IZ is a finite discrete interval, i.e., IZ = [0, N]Z with N ∈ N ∪ {0}, then dom Tmax contains also ˜r such that ˆrN+1 = ei, i ∈ {1, . . . , 2n}, by the last part of Lemma 6.4.3. Hence equality (6.82) holds. ■ Finally, we define the defect subspace and defect index for the minimal linear relation Tmin according to (A.7) and (A.8), respectively, i.e., by using (6.77) we put Nλ = Nλ(Tmin) := { ˜z ∈ ˜l2 ψ | {˜z, λ˜z} ∈ T∗ min = Tmax } , ˜nλ := dim Nλ. 
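For readers less accustomed to linear relations, the objects in (A.4), (A.7), and (A.8) can be experimented with in a finite-dimensional toy Hilbert space. The following sketch is purely illustrative and is not tied to system (Sλ); it assumes NumPy, and all identifiers (adjoint_relation, defect_dimension, the sample relation) are ad hoc choices made only for this illustration. A relation T in H = C^m is encoded by a matrix whose columns span T ⊆ H × H, its adjoint is obtained from (A.4) as an orthogonal complement, and dim Nλ(T) is read off from a rank computation.

```python
import numpy as np

# Toy illustration only (finite-dimensional, unrelated to system (S_lambda)):
# a linear relation T in H x H with H = C^m is encoded by a (2m x r) matrix
# whose columns span T; we compute its adjoint via (A.4) and the dimension of
# the defect subspace N_lambda(T) from (A.7)-(A.8).

def adjoint_relation(T):
    """Columns of the result span T* = { {y,g} : <z,g> = <f,y> for all {z,f} in T }."""
    m = T.shape[0] // 2
    z, f = T[:m, :], T[m:, :]
    flipped = np.vstack([-f, z])              # T* is the orthogonal complement of V(T), V{z,f} := {-f, z}
    _, s, vh = np.linalg.svd(flipped.conj().T)
    rank = int(np.sum(s > 1e-12))
    return vh[rank:, :].conj().T              # orthonormal basis of that complement

def defect_dimension(T, lam):
    """dim N_lambda(T) = dim { z in H : {z, lam*z} belongs to T* }."""
    m = T.shape[0] // 2
    z, f = T[:m, :], T[m:, :]
    # {z0, lam*z0} in T*  <=>  (conj(lam)*z_j - f_j)* z0 = 0 for every column {z_j, f_j} of T
    cond = (np.conj(lam) * z - f).conj().T
    return m - np.linalg.matrix_rank(cond, tol=1e-10)

# Sample symmetric relation in H = C^2: T = span{ {e_1, e_1} }.
e1 = np.array([1.0, 0.0])
T = np.concatenate([e1, e1]).reshape(-1, 1)
print(adjoint_relation(T).shape[1])                         # dim T* = 3
print(defect_dimension(T, 1j), defect_dimension(T, -1j))    # equal defect indices: 1 1
```

For the sample relation the two printed defect indices coincide, which is consistent with the existence of self-adjoint extensions recalled in the Appendix and used at the beginning of Section 7.1.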
Since Tmin is a closed, symmetric linear relation by Theorem 6.4.11, it follows that the value of the deficiency indices ˜nλ is constant on each of the open upper and lower halfplanes of C, i.e., ˜n±λ = ˜n±i for all λ ∈ C+; cf. [143, Theorem 2.13]. Thus we let ˜n± := ˜n±i. In addition, we introduce the following two subspaces Nλ := { z ∈ l2 ψ | L (z)k = λψk zk for all k ∈ IZ } , nλ := dim Nλ, kλ = kλ(Tmin) := { {˜z, λ˜z} | {˜z, λ˜z} ∈ Tmax } . In other words, the subspace Nλ ⊆ l2 ψ consists of all square summable solutions of system (Sλ) and nλ denotes the number of linearly independent square summable solutions of system (Sλ), while kλ is a subspace associated with the defect subspace Nλ. Then we obtain the following von Neumann decomposition of the maximal linear relation Tmax in terms of the minimal linear relation Tmin and the subspaces kλ and k¯λ, i.e., Tmax = Tmin ∔ kλ ∔ k¯λ, (6.84) where the direct sum ∔ is orthogonal if λ = ±i, see (A.9). Assuming that rank ϑ(IZ) is maximal, as described in Lemma 6.2.10, we next show a relationship between the numbers nλ and ˜nλ. Theorem 6.4.13. Let IZ ⊆ IZ be a finite discrete interval such that the value of rank ϑ(IZ) is maximal. Then, for any λ ∈ C, nλ = ˜nλ + 2n − rank ϑ(IZ). (6.85) Proof. Let λ ∈ C be given and π1 denote the restriction of the quotient space map π (introduced at the beginning of this section) given by π1 := π|Nλ : Nλ → Nλ. Since for any ˜z ∈ Nλ there exists z ∈ ˜z solving system (Sλ), i.e., z ∈ Nλ, the map π1 is surjective. Thus dim Nλ = dim Nλ + dim(Ker π1). (6.86) Moreover, we note that Ker π1 = { z ∈ l2 ψ | L (z)k = λψk zk and ψk zk = 0 for all k ∈ IZ } , – 115 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations which shows that z ∈ Ker π1 if and only if z = Θξ for some ξ ∈ ∩N K=0 Ker ϑ([0, K]Z) = Ker ϑ(IZ). Therefore dim(Ker π1) = dim[Ker ϑ(IZ)] = 2n − rank ϑ(IZ), which together with equality (6.86) implies (6.85). ■ As a direct consequence of equality (6.85) we get the following properties of the numbers nλ and ˜nλ. Corollary 6.4.14. The following statements hold true: (i) nλ − ˜nλ is nonnegative and constant for all λ ∈ C; (ii) nλ = ˜nλ for some (and hence for any) λ ∈ C if and only if system (Sλ) is definite on [0, ∞)Z; (iii) nλ is constant in the half-planes C+ and C−. Proof. The first statement follows from equality (6.85) and Lemma 6.2.9, while for the second statement it suffices to combine (6.85), Theorem 6.2.11, and Lemma 6.2.3. Finally, since the minimal linear relation Tmin is symmetric, see (6.75) and (6.76), the number ˜nλ is constant in the half-planes C+ and C− by [143, Theorem 2.13]. Hence the value of nλ is also constant in C+ and C− by the first part. ■ Remark 6.4.15. The last statement of Corollary 6.4.14 extends the enumeration and analysis of linearly independent square summable solutions of system (Sλ) provided in Section 2.4, see Theorem 2.4.8 and also equality (6.3). More precisely, since the number of linearly independent square summable solutions of system (Sλ) or (Sλ) is constant in C+ and C−, the the same property possesses also the function r(λ) defined in (2.56) as it was already mentioned in Remark 2.4.21. Moreover, in the final result of this chapter we complete this analysis by a basic estimate, which shows that the number of linearly independent square summable solutions on the real line cannot exceed the same number calculated for any λ ∈ CKR. Theorem 6.4.16. For any λ ∈ C we have Ker(Tmin − λI) = {˜0}. 
(6.87) Moreover, for every λ ∈ R we have ˜nλ ≤ ˜n± and nλ ≤ n±i. Proof. First, suppose that IZ = [0, N]Z for some N ∈ N ∪ {0}. Let λ ∈ C, ˜f ∈ ˜l2 ψ , and ˜z ∈ ˜l2 ψ be determined by z ∈ C(I+ Z )2n constructed as in (6.73). Then L (z) = λψz + ψf on IZ, which yields {˜z, ˜f} ∈ Tmax − λI, i.e., ˜f ∈ Ran(Tmax − λI). Hence ˜l2 ψ = Ran(Tmax − λI), because λ and ˜f were chosen arbitrarily. Thus Ker(Tmin − λI) (A.5) = [Ran(T∗ min − ¯λI)]⊥ (6.77) = [Ran(Tmax − ¯λI)]⊥ = ( ˜l2 ψ)⊥ = {˜0}. On the other hand, let now IZ = [0, ∞)Z. Then for λ ∈ C, ˜f ∈ ˜l2 ψ,1 , and ˜z ∈ ˜l2 ψ,1 determined by the sequence z ∈ C([0, ∞)Z)2n constructed as in (6.74) we obtain that {˜z, ˜f} ∈ Tmax − λI, see the proof of Lemma 6.4.9. Hence ˜l2 ψ,1 ⊆ Ran(Tmax − λI). Since ˜l2 ψ,1 is a dense subspace of ˜l2 ψ , it follows that Ran(Tmax − λI) is also dense in ˜l2 ψ for any λ ∈ C. Thus, for the proof of equality (6.87) it suffices to utilize an analogous calculation as in the previous part. The remaining assertions follow from (A.10) and Corollary 6.4.14(i). ■ – 116 – 6.5. Bibliographical notes 6.5 Bibliographical notes The results of this chapter were published in [A18] except for Remark 6.1.4, Lemmas 6.2.8 and 6.4.3, and Theorem 6.4.12, which were published later in [A21]. Their generalization to symplectic systems on time scales is one of the goals of the current research. – 117 – Chapter 6. Nohomogeneous problem and maximal and minimal linear relations – 118 – Chapter 7 Self-adjoint extensions ... when America’s National Academy of Science asked shortly before his death what he thought were his three greatest achievements ... Johnny replied to the academy that he considered his most important contributions to have been on the theory of self-adjoint operators in Hilbert space, and on the mathematical foundations of quantum theory and the ergodic theorem. John von Neumann, see [118, pg. 24] This final chapter is devoted to the characterization of all self-adjoint extensions of the minimal linear relation Tmin defined in (6.76). The description of self-adjoint extensions and their particular cases is a classic problem in the theory of differential and difference equations, see [14,27,44,51,60,80,81,95,98,119,124,126,127,147,159,160,162,169] and [A5]. As in [135,160,162,169], our main result is obtained by using square summable solutions of system (Sλ) and the Glazman–Krein–Naimark theory, see again the Appendix of this thesis for general results from the latter theory. Throughout this chapter we consider system (Sλ) with the coefficients specified in Notation 6.1.2 and it is organized as follows. In Section 7.1 we establish a limit point criterion for system (Sλ), see Theorem 7.1.1. In Section 7.2 we present the main result, Theorem 7.2.1, concerning the characterization of self-adjoint extensions of the minimal linear relation associated with system (Sλ). We apply this to a consideration of the 2 × 2 (scalar) case for a finite discrete interval and describe the Krein–von Neumann extension explicitly, see Theorems 7.2.7 and 7.2.9, and Example 7.2.8. We note that there is no analogue of Theorems 7.1.1, 7.2.7, 7.2.9 and Example 7.2.8 in the setting of system (2.6). Finally, Section 7.3 is devoted to the proof of Theorem 7.2.1. 7.1 Limit point criterion As noted in the Appendix, the equality ˜n+ = ˜n− is necessary and sufficient for the existence of a self-adjoint extension of the minimal linear relation Tmin. 
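The limit point criterion developed in this section (Theorem 7.1.1 and its scalar consequence, Corollary 7.1.2) is easy to test numerically. As a small preview, the following sketch evaluates, for the data of Example 7.1.3 below (pk ≡ −1, qk ≡ 0, wk = 1/(k + 1)²), both the sum appearing in the classical criterion of [96, Theorem 10], which converges there, and the sum from Corollary 7.1.2 with hk ≡ 1, which diverges. The sketch is an illustration only; it assumes NumPy, and the truncation length is an arbitrary sample choice.

```python
import numpy as np

# Preview sketch for Section 7.1 (illustration only): the data of Example 7.1.3
# below, p_k = -1, q_k = 0, w_k = 1/(k+1)^2.  We evaluate the sum from the classical
# limit point test, sum_k sqrt(w_k w_{k+1}) / |p_{k+1}|, which stays bounded here
# (so [96, Theorem 10] gives no information), and the sum from Corollary 7.1.2,
# sum_k 1/(g_k sqrt(h_k)) with h_k = 1 and g_k = max{1, sqrt(-p_{k+1}/w_{k+1})} = k + 2,
# which diverges, so the corollary applies and equation (7.12) is in the limit point case.

k = np.arange(0, 10**6, dtype=float)
p = -np.ones_like(k)                  # p_k (here p_{k+1} has the same value)
w = 1.0 / (k + 1.0) ** 2              # w_k
w_next = 1.0 / (k + 2.0) ** 2         # w_{k+1}

classical = np.sum(np.sqrt(w * w_next) / np.abs(p))    # tends to 1 as the range grows
g = np.maximum(1.0, np.sqrt(-p / w_next))              # g_k = k + 2
h = np.ones_like(k)                                    # h_k = 1, so Delta(1/h_k) g_k = 0 <= T sqrt(h_k) with T = 0
corollary = np.sum(1.0 / (g * np.sqrt(h)))             # grows like log of the truncation length

print(f"classical sum ~ {classical:.6f} (bounded), Corollary 7.1.2 sum ~ {corollary:.2f} (unbounded)")
```

With this preview in mind, we return to the equality ˜n+ = ˜n− discussed above.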
Since the latter equality is equivalent with n+ = n− by Corollary 6.4.14(i), we need to guarantee that the number of linearly independent square summable solutions of system (Sλ) is constant on the set CKR. This condition is trivially satisfied if the discrete interval IZ is finite or if system (Sλ) is in the limit circle case for some (and hence for all) λ ∈ C, i.e., nλ = 2n, while it can be violated in any other case nλ < 2n with IZ = [0, ∞)Z, see Remarks 2.4.21 and 6.4.15. Moreover, it was also discussed in Remark 2.4.16 that the classical limit point criterion for linear Hamiltonian differential and difference systems (2.5) and (2.6) utilizes the minimal eigenvalue of the corresponding weight matrix and a similar criterion cannot be derived – 119 – Chapter 7. Self-adjoint extensions in the current setting. Nevertheless, in the following theorem we give conditions, which imply the invariance of the limit point case on CKR for system (Sλ) with the special linear dependence on λ specified in (6.25), i.e., for system (6.26); cf. [134, Theorem 6.2]. This statement is a discrete analogue of [116, Theorem 5.6]. Theorem 7.1.1. Let IZ = [0, ∞)Z and consider system (Sλ) with the coefficients given in (6.25), where B∗ k Ck ≡ 0, B∗ k Dk > 0, and Wk > 0 for all k ∈ IZ. If there exists h ∈ C(IZ)1 such that hk ≥ h > 0 and A∗ k Ck ≥ −hk Wk+1, ∞∑ k=0 1 gk √ hk = ∞, (7.1) where gk := max { 1, W−1/2 k+1 (B∗ k Dk)−1/2 2 } , and a constant T ≥ 0 such that ( 1 hk ) gk ≤ T √ hk for all k ∈ IZ, (7.2) then system (Sλ) is in the limit point case for all λ ∈ CKR, i.e., nλ = n for all λ ∈ CKR. Proof. System (Sλ) with the coefficients from (6.25) can be written as (6.26) and the invertibility of Bk and Wk on IZ implies the definiteness of this system by Theorem 6.2.5. In accordance with Theorem 2.4.3, we have nλ = n if and only if Zk(λ)ξ l2 ψ for any ξ ∈ CnK{0}, where Z(λ) is the same as in Section 6.3, i.e., the 2n × n solution of system (Sλ) determined by the initial condition Z0(λ) = −Jα∗ with a given α ∈ . Then zk := ( xk uk ) = Zk(λ)ξ with x, u ∈ C([0, ∞)Z)n satisfies z∗ 0 Jz0 = 0. Moreover, it is sufficient to consider only λ = ±i, because the number nλ ≥ n is constant in C+ and C− by Corollary 6.4.14(iii). Hence, let ξ ∈ CnK{0} and λ ∈ {±i} be fixed. We show that under the current assumptions we have z l2 ψ . Let us assume that z ∈ l2 ψ . By a direct calculation, we obtain from the block structure of the system at hand, see (6.26), and from the symplecticity of the matrix Sk, see identities (1.17) and (1.18), that (x∗ k uk) = −x∗ k+1A∗ k Ck xk+1 − x∗ k+1 C∗ k Bk uk+1 − u∗ k+1B∗ k Ck xk+1 − u∗ k+1B∗ k Dk uk+1 − λx∗ k Wk xk. Since B∗ k Dk > 0 and hk > 0, the quantity Fk(x, u) := ( ∑k j=0 1 hj u∗ j+1 B∗ j Dj uj+1 )1/2 ≥ 0 is well-defined. Then the latter equality and the assumption B∗ k Ck ≡ 0 yield F2 k (x, u) = − k∑ j=0 1 hj x∗ j+1 A∗ j Cj xj+1 − λ k∑ j=0 1 hj x∗ j Wj xj − k∑ j=0 1 hj (x∗ j uj). (7.3) From the Hermitian property and positive definiteness of Wk and B∗ k Dk, the Cauchy– Schwarz inequality, inequality (1.8), and the definition of gk we obtain | x∗ k+1 uk+1 | = ( W1/2 k+1 xk+1 )∗ W−1/2 k+1 (B∗ k Dk)−1/2 (B∗ k Dk)1/2 uk+1 ≤ W1/2 k+1 xk+1 2 × W−1/2 k+1 (B∗ k Dk)−1/2 (B∗ k Dk)1/2 uk+1 2 ≤ W1/2 k+1 xk+1 2 × W−1/2 k+1 (B∗ k Dk)−1/2 σ × (B∗ k Dk)1/2 uk+1 2 ≤ gk W1/2 k+1 xk+1 2 × (B∗ k Dk)1/2 uk+1 2 . (7.4) – 120 – 7.1. 
Limit point criterion Hence the latter inequality, assumption (7.2), the Cauchy–Schwarz inequality, and the inequality of arithmetic and geometric means √ ab ≤ a+b 2 yield k∑ j=0 ( 1 hj ) x∗ j+1 uj+1 ≤ k∑ j=0 ( 1 hj ) | x∗ j+1 uj+1 | ≤ k∑ j=0 ( 1 hj ) gj W1/2 j+1 xj+1 2 × (B∗ j Dj)1/2 uj+1 2 ≤ k∑ j=0 T W1/2 j+1 xj+1 2 × h−1/2 j (B∗ j Dj)1/2 uj+1 2 ≤ ( T2 k∑ j=0 W1/2 j+1 xj+1 2 2 )1/2 × ( k∑ j=0 h−1 j (B∗ j Dj)1/2 uj+1 2 2 )1/2 ≤ 1 2 ( T2 k∑ j=0 W1/2 j+1 xj+1 2 2 + k∑ j=0 h−1 j (B∗ j Dj)1/2 uj+1 2 2 ) ≤ 1 2 ( T2 ||z||2 ψ + F2 k (x, u) ) . (7.5) By using the summation by parts together with the assumption hk ≥ h and the inequalities from (7.4) and (7.5) we get re [ k∑ j=0 1 hj (x∗ j uj) ] ≤ k∑ j=0 1 hj (x∗ j uj) ≤ [ x∗ j uj/hj ]k+1 0 − k∑ j=0 ( 1 hj ) x∗ j+1 uj+1 ≤ | x∗ 0 u0/h0 | + | x∗ k+1 uk+1/hk+1 | + k∑ j=0 ( 1 hj ) x∗ j+1 uj+1 ≤ T1 + 1 h gk W1/2 k+1 xk+1 2 × (B∗ k Dk)1/2 uk+1 2 + 1 2 ( T2 ||z||2 ψ + F2 k (x, u) ) , (7.6) where T1 := | x∗ 0 u0/h0 |. Since re ( F2 k (x, u) ) = − k∑ j=0 1 hj x∗ j+1 A∗ j Cj xj+1 − re ( k∑ j=0 1 hj (x∗ j uj) ) and the inequality in (7.1) implies − k∑ j=0 1 hj x∗ j+1 A∗ j Cj xj+1 ≤ k∑ j=0 x∗ j+1 Wj xj+1 ≤ ||z||2 ψ, it follows from (7.3) and (7.6) that 1 2 k∑ j=0 g−1 j h−1/2 j F2 j (x, u) ≤ T2 k∑ j=0 g−1 j h−1/2 j + 1 h k∑ j=0 h−1/2 j W1/2 j+1 xj+1 2 × (B∗ j Dj)1/2 uj+1 2 , where we put T2 := T1 + (1 + T2/2)||z||2 ψ. Then with the aid of the Cauchy–Schwarz inequality we obtain Gk := 1 2 k∑ j=0 g−1 j h−1/2 j [F2 j (x, u) − 2T2] ≤ 1 h ( k∑ j=0 W1/2 j+1 xj+1 2 2 )1/2 × ( k∑ j=0 (B∗ j Dj)1/2 uj+1 2 2 )1/2 ≤ 1 h ||z||ψ Fk(x, u). (7.7) – 121 – Chapter 7. Self-adjoint extensions In the next part we show that F2 k (x, u) ≤ 2T2 for all k ∈ IZ. Assume that there exists an index m ∈ IZ such that F2 m(x, u) > 2T2. Since F2 k (x, u) is nondecreasing, we have F2 k (x, u) − 2T2 > t for all k ∈ [m, ∞)Z, where t := F2 m(x, u) − 2T2. Also Gk is nondecreasing for all k ∈ [m − 1, ∞)Z and for all k ∈ [m, ∞)Z we get from inequality (7.7) and the relation F2 k (x, u) = 2 gk h1/2 k Gk−1 + 2T2 that h2 − 2||z||2 ψ G−2 k T2 ≤ 2G−2 k ||z||2 ψ gk h1/2 k Gk−1. (7.8) In addition, Gk ≥ t 2 ∑k j=0 g−1 j h−1/2 j → ∞ as k → ∞ by the second part of (7.1). Now, let 0 < a < 2h2 be arbitrary and ℓ ∈ [m, ∞)Z be such that Gℓ ≥ 2||z||ψ T1/2 2 / √ 2h2 − a. Then we have a/2 ≤ h2 − 2G−2 k T2 ||z||2 ψ for all k ∈ [ℓ, ∞)Z, which together with (7.8) yields for any k ∈ [ℓ + 1, ∞)Z that a 2 k∑ j=ℓ+1 1 gj h1/2 j ≤ k∑ j=ℓ+1 1 gj h1/2 j ( h2 − 2G−2 j T2 ||z||2 ψ ) ≤ k∑ j=ℓ+1 2G−2 j ||z||2 ψ Gj−1 ≤ 2||z||2 ψ k∑ j=ℓ+1 Gj−1 Gj Gj−1 ≤ −2||z||2 ψ k∑ j=ℓ+1 ( 1 Gj−1 ) ≤ 2||z||2 ψ 1 Gℓ < ∞. But it contradicts the second assumption from (7.1) as k → ∞. Thus it holds F2 k (x, u) ≤ 2T2 for all k ∈ IZ, i.e., ∞∑ j=0 h−1 j u∗ j+1 B∗ j Dj uj+1 ≤ 2T2 < ∞. (7.9) Since system (Sλ) is definite on [0, ∞)Z, there exists s ∈ IZ such that ∑s j=0 z∗ k ψk zk = T3 > 0. Hence the positive definiteness of Wk and the extended Lagrange identity in (6.11) yield z∗ k+1 Jzk+1 = z∗ 0 Jz0 ± 2i k∑ j=0 z∗ j ψj zj = 2 k∑ j=0 z∗ j ψj zj ≥ 2 s∑ j=0 z∗ j ψj zj = 2T3, (7.10) for any k ∈ [s, ∞)Z. Simultaneously, we get from (7.4) the estimate z∗ k+1 Jzk+1 ≤ 2 x∗ k+1 uk+1 ≤ 2 gk h1/2 k || W1/2 k+1 xk+1 ||2 × h−1/2 j ||(B∗ k Dk)1/2 uk+1 ||2. 
(7.11) Inequalities (7.9), (7.10), (7.11) together with the Cauchy–Schwarz inequality imply for any k ∈ [s, ∞)Z that k∑ j=s 1 gj h1/2 j ≤ k∑ j=s 2 z∗ j+1 Jzj+1 W1/2 k+1 xk+1 2 × h−1/2 j (B∗ k Dk)1/2 uk+1 2 ≤ 1 T3 k∑ j=s W1/2 k+1 xk+1 2 × h−1/2 j (B∗ k Dk)1/2 uk+1 2 ≤ 1 T3 ( k∑ j=s W1/2 k+1 xk+1 2 2 )1/2 × ( k∑ j=s h−1 j (B∗ k Dk)1/2 uk+1 2 2 )1/2 ≤ 1 T3 ||z||ψ √ 2 T1/2 2 < ∞, – 122 – 7.2. Main results which (again) contradicts the second condition in (7.1) as k → ∞. Hence z l2 ψ . Since ξ and λ were chosen arbitrarily, it follows that Z(λ)ξ l2 ψ for any ξ ∈ CnK{0}. Therefore, system (Sλ) is in the limit point case for λ ∈ {±i} and consequently for all λ ∈ CKR. ■ Upon applying Theorem 7.1.1 to system (6.22) with qk ≡ 0 we obtain the following corollary for a special case of the second order Sturm–Liouville difference equation (6.24), because one easily observes that z(λ) ∈ l2 ψ if and only if ∑∞ k=0 |yk(λ)|2 wk < ∞, where zk(λ) are ψk are as in Example 6.2.4. Corollary 7.1.2. Let IZ = [0, ∞)Z and consider equation (6.24) with qk ≡ 0, pk < 0 and wk > 0 for all k ∈ IZ. If there exist hk ∈ C(IZ)1 and a constant T ≥ 0 such that hk ≥ h > 0 and ∞∑ k=0 1 gk √ hk = ∞, ( 1 hk ) gk ≤ T √ hk , k ∈ IZ, where gk := max { 1, ( − pk+1 wk+1 )1/2} , then equation (6.24) is in the limit point case for any λ ∈ CKR, i.e., there exists only one nontrivial solution satisfying ∑∞ k=0 |yk(λ)|2 wk < ∞. It was shown in [96, Theorem 10], see also [157, Corollary 3.1], that equation (6.24) with pk 0 and wk > 0 is in the limit point case for any λ ∈ CKR if ∑∞ k=0 (wk wk+1)1/2 | pk+1 | = ∞. Corollary 7.1.2 partially generalizes this classical limit-point criterion as shown in the following example; cf. [A23, Example 3.4]. Example 7.1.3. Let us consider the equation (6.24), pk ≡ −1, qk ≡ 0, wk = 1/(k + 1)2 , IZ = [0, ∞)Z. (7.12) Then the criterion from [96, Theorem 10] cannot be applied, because ∞∑ k=0 ( wk wk+1 )1/2 | pk+1 | = ∞∑ k=0 √ 1 (k + 1)2 (k + 2)2 = 1 < ∞. On the other hand, the assumptions of Corollary 7.1.2 are satisfied with hk ≡ 1, gk = k + 2, and T = 0, i.e., equation (7.12) is in the limit point case for all λ ∈ CKR. This fact can be verified directly by using the Weyl alternative, see e.g. [9, Theorem 5.6.1] or Corollary 2.4.23. More precisely, equation (7.12) with λ = 0 possesses two linearly independent solutions y[1] k ≡ 1 and y[2] k = k for k ∈ IZ ∪ {−1}. Since only y[1] is square summable with respect to wk, it follows from the Weyl alternative that equation (7.12) has to be in the limit point case for all λ ∈ CKR. ▲ 7.2 Main results According to Corollary 6.4.14(iii), the number of linearly independent square summable solutions of system (Sλ) is constant in the upper and lower half-planes of C. Hence the numbers n+ := nλ for λ ∈ C+ and n− := nλ for λ ∈ C− are well-defined. Let λ0 ∈ C+ be fixed. Then system (Sλ0 ) has n+ linearly independent square summable solutions, which we denote by s[1] (λ0), . . . , s[n+] (λ0), and similarly system (S¯λ0 ) has n− linearly independent square summable solutions, which we denote by s[1] (¯λ0), . . . , s[n−] (¯λ0). Let φ[i] k := s[i] k (λ0), φ [j+n+] k := s [j] k (¯λ0), i ∈ {1, . . . , n+}, j ∈ {1, . . . , n−}, k ∈ I+ Z , (7.13) Θ+ k := (φ[1] k , . . . , φ[n+] k ), Θ− k := (φ[1+n+] k , . . . , φ[p] k ), and p := n+ + n−. – 123 – Chapter 7. Self-adjoint extensions Note that 2n ≤ p ≤ 4n. Moreover, if system (Sλ) is definite on IZ, then the solutions φ[1] , . . . , φ[p] belong to different equivalence classes. Hence for all i ∈ {1, . . . , n+} and j ∈ {n+ + 1, . . . 
,p} we have {φ[i] , λ0 φ[i] } ∈ Tmax and {φ[j] , ¯λ0 φ[j] } ∈ Tmax with φ[ℓ] ≡ φ[ℓ] for ℓ ∈ {1, . . . ,p}. We also define the matrix := ( (Θ+, Θ−), (Θ+, Θ−) ) N+1 , i.e., = ( [1,1] [1,2] [2,1] [2,2] ) =   (φ[1] , φ[1] )N+1 . . . (φ[1] , φ[p] )N+1 ... ... ... (φ[p] , φ[1] )N+1 . . . (φ[p] , φ[p] )N+1   ∈ Cp×p , (7.14) where [1,2] ∈ Cn+×n− . The elements ωij := (φ[i] , φ[j] )N+1 exist finite for all i, j ∈ {1, . . . ,p} by identity (6.14). Furthermore, from (6.12) one easily concludes that the matrix [1,2] consists of the elements (φ[i] , φ[j] )N+1 = (φ[i] , φ[j] )0 for i ∈ {1, . . . , n+} and j ∈ {n+ + 1, . . . ,p}. Identity (6.84) implies that for any {˜z, ˜f} ∈ Tmax we have ˆzk = ˆvk + p∑ j=1 ξj φ [j] k , k ∈ I+ Z , (7.15) where ˆv ∈ dom Tmin and ξ1, . . . , ξp ∈ C are determined uniquely. Especially, for the pairs {˜z[1] , ˜f[1] }, . . . , {˜z[2n] , ˜f[2n] } ∈ Tmax (see Lemma 6.4.3), we get the unique expression ˆz[i] k = ˆv[i] k + p∑ j=1 ξi,j φ [j] k , k ∈ I+ Z , i ∈ {1, . . . , 2n}. (7.16) If we put Zk := (ˆz[1] k , . . . , ˆz[2n] k ) for k ∈ I+ Z , then identity (7.16) implies Zk = Vk + (Θ+ k , Θ− k ) ⊤ , (7.17) where Vk := (ˆv[1] k , . . . , ˆv[2n] k ) ∈ C(I+ Z )2n×2n and the matrix ∈ C2n×p consists of ξi,j. In particular, for k = 0 we obtain I = V0 + (Θ+ 0 , Θ− 0 ) ⊤, which together with (6.81) yields I = (Θ+ 0 , Θ− 0 ) ⊤, i.e., rank = 2n by the second inequality in (1.3). From the definition of ˆz[i] (see Lemma 6.4.3), its expression in (7.16), and identity (6.81) we have 0 = (ˆz[i] , φ[ℓ] )N+1 = (ˆv[i] , φ[ℓ] )N+1 + p∑ j=1 ξi,j (φ[j] , φ[ℓ] )N+1 = p∑ j=1 ξi,j (φ[j] , φ[ℓ] )N+1 for all i ∈ {1, . . . , 2n} and any ℓ ∈ {1, . . . ,p}, i.e., = 0. Since rank = 2n, the first inequality in (1.3) implies rank ≤ p − 2n. On the other hand, the equality [1,2] = Θ+∗ 0 JΘ− 0 and the first inequality in (1.3) yield rank [1,2] ≥ p − 2n. Therefore rank = p − 2n = rank [1,2] . Since p − 2n ≤ n+ and p − 2n ≤ n−, we may assume, without loss of generality, that φ[1] , . . . , φ[n+] are arranged such that rank [1,2] p−2n, n− = p − 2n. (7.18) The main result concerning the characterization of all self-adjoint extension of Tmin is stated in the following theorem and its proof is given in Section 7.3; cf. [135, Theorem 5.7]. Recall that for the existence of a self-adjoint extension it is essential to have n+ = n−. – 124 – 7.2. Main results Theorem 7.2.1. Let system (Sλ) be definite on the discrete interval IZ, equality n+ = n− =: q hold and assume that the solutions φ[1] , . . . , φ[q] are arranged such that (7.18) holds. Then a linear relation T ⊆ ˜l2×2 ψ is a self-adjoint extension of Tmin if and only if there exist matrices M ∈ Cq×2n and L ∈ Cq×(2q−2n) such that rank(M, L) = q, MJM∗ − L 2q−2n L∗ = 0, (7.19) and T =    {˜z, ˜f} ∈ Tmax | M ˆz0 − L   (φ[1], ˆz)N+1 ... (φ[2q−2n], ˆz)N+1   = 0    . (7.20) Remark 7.2.2. If, in addition to the assumptions of Theorem 7.2.1, there exists ν ∈ R such that (Sν) has q linearly independent square summable solutions θ[1] , . . . , θ[q] (we suppress the argument ν), then the statement of Theorem 7.2.1 can be formulated by using these solutions, which are (without loss of generality) arranged such that the submatrix ϒ2q−2n has the full rank, where ϒ :=   (θ[1] , θ[1] )N+1 . . . (θ[1] , θ[q] )N+1 ... ... ... (θ[q] , θ[1] )N+1 . . . (θ[q] , θ[q] )N+1   , see Lemma 7.3.3. Moreover, the Wronskian-type identity (6.12) yields that ϒ = ∗ 0 J 0, where k := (θ[1] k , . . . , θ [q] k ) for k ∈ I+ Z . 
In the next part we discuss several special cases of Theorem 7.2.1. If system (Sλ) is in the limit point case for all λ ∈ CKR, i.e., n+ = n− = n, then the boundary conditions at N + 1 (which is necessary equal to ∞) are superfluous as stated in the following corollary; cf. [135, Theorem 5.9]. This situation occurs, e.g., when the assumptions of Theorem 7.1.1 are satisfied. The proof follows directly from Theorem 7.2.1. Corollary 7.2.3. Let system (Sλ) be definite on the discrete interval IZ and n+ = n− = n hold. Then a linear relation T ⊆ ˜l2×2 ψ is a self-adjoint extension of Tmin if and only if there exists a matrix M ∈ Cn×2n such that rank M = n, MJM∗ = 0, and T = { {˜z, ˜f} ∈ Tmax | M ˆz0 = 0 } . As it was already discussed in the previous chapters, if there exists λ0 ∈ C with the property nλ0 = 2n, then system (Sλ) is in the limit circle case for all λ ∈ C, i.e., n+ = n− = 2n. Hence for any ν ∈ R there exist solutions θ[1] , . . . , θ[2n] (we again suppress the argument ν) of system (Sν), which are linearly independent, square summable, and the fundamental matrix ∈ C(I+ Z )2n×2n satisfies 0 = I, which implies ϒ = J, i.e., rank ϒ = 2n, see Remark 7.2.2. Upon combining the latter remark and Theorem 7.2.1 we obtain the following result; cf. [135, Theorem 5.10]. Corollary 7.2.4. Let system (Sλ) be definite on the discrete interval IZ, ν ∈ R be fixed, and assume that there exists a number λ0 ∈ C such that nλ0 = 2n. Let ∈ C(I+ Z )2n×2n be the fundamental matrix of system (Sν) satisfying 0 = I and denote its columns by θ[1] , . . . , θ[2n] , i.e., k = (θ[1] k , . . . , θ[2n] k ). Then a linear relation T ⊆ ˜l2×2 ψ is a self-adjoint extension of Tmin if and only if there exist matrices M, L ∈ C2n×2n such that rank(M, L) = 2n, MJM∗ − LJL∗ = 0, (7.21) – 125 – Chapter 7. Self-adjoint extensions and T =    {˜z, ˜f} ∈ Tmax | M ˆz0 − L   (θ[1], ˆz)N+1 ... (θ[2n], ˆz)N+1   = 0    . (7.22) Especially, if IZ is a finite discrete interval, then the equality nλ = 2n is trivially satisfied for any λ ∈ C. Therefore we get from Corollary 7.2.4 yet one more special case of Theorem 7.2.1. Corollary 7.2.5. Let IZ be a finite discrete interval and system (Sλ) be definite on IZ. Then a linear relation T ⊆ ˜l2×2 ψ is a self-adjoint extension of Tmin if and only if there exist matrices M, L ∈ C2n×2n such that rank(M, L) = 2n, MJM∗ − LJL∗ = 0, (7.23) and T = TM,L := { {˜z, ˜f} ∈ Tmax | M ˆz0 − L ˆzN+1 = 0 } . (7.24) Proof. By Corollary 7.2.4 every self-adjoint extension of Tmin can be expressed as in (7.22) with matrices M, L ∈ C2n×2n satisfying (7.21). If we put ˜L := L ∗ N+1 J ∈ C2n×2n, then the matrices M, ˜L satisfies (7.23) and the linear relation in (7.22) can be written as TM, ˜L. ■ One can easily observe that a linear relation TM,L, i.e., the linear relation given by (7.24) with M, L ∈ C2n×2n satisfying (7.23), is the same as a linear relation TM, L determined by the matrices M := CM and L := CL for an arbitrary invertible matrix C ∈ C2n×2n. We show that the converse is also true, see Remark 7.2.10(i). Moreover, it is well known that all selfadjoint extensions of operators associated with the regular second order Sturm–Liouville differential equations can be expressed by using the separated or coupled boundary conditions, see e.g. [38]. In the last part of this section we show similar results for scalar symplectic systems on a finite interval, i.e., n = 1 and IZ = [0, N]Z with N ∈ N ∪ {0}, and provide a unique representation of all self-adjoint extensions of Tmin. 
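Condition (7.23) is straightforward to check numerically for candidate boundary matrices. The following sketch is an illustration only (it assumes NumPy and the scalar case n = 1, so J is the canonical 2 × 2 skew-symmetric matrix); the listed pairs (M, L) are sample choices, and their interpretation as Dirichlet, Neumann, periodic, and antiperiodic conditions is justified by Theorem 7.2.7 and the remarks following it below.

```python
import numpy as np

# Sanity check of condition (7.23) for sample boundary matrices (illustration only,
# scalar case n = 1).  For each pair (M, L) we verify rank(M, L) = 2n and
# M J M* - L J L* = 0, so that T_{M,L} in (7.24) is a self-adjoint extension of
# T_min by Corollary 7.2.5.

n = 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def satisfies_723(M, L):
    full_rank = np.linalg.matrix_rank(np.hstack([M, L])) == 2 * n
    lagrange = np.allclose(M @ J @ M.conj().T, L @ J @ L.conj().T)
    return full_rank and lagrange

candidates = {
    "Dirichlet (x_0 = 0 = x_{N+1})": (np.array([[1.0, 0.0], [0.0, 0.0]]),
                                      np.array([[0.0, 0.0], [-1.0, 0.0]])),
    "Neumann (u_0 = 0 = u_{N+1})":   (np.array([[0.0, 1.0], [0.0, 0.0]]),
                                      np.array([[0.0, 0.0], [0.0, 1.0]])),
    "periodic (z_0 = z_{N+1})":      (np.eye(2), np.eye(2)),
    "antiperiodic (z_0 = -z_{N+1})": (np.eye(2), -np.eye(2)),
    "degenerate pair (fails)":       (np.zeros((2, 2)), np.zeros((2, 2))),
}
for name, (M, L) in candidates.items():
    print(f"{name:32s} satisfies (7.23): {satisfies_723(M, L)}")
```

Each of the first four pairs passes both tests, while the degenerate pair fails the rank condition and therefore does not determine a self-adjoint extension.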
The main assumptions for this treatment are summarized in the following hypothesis. Hypothesis 7.2.6. The discrete interval IZ is finite, i.e., there exists N ∈ N ∪ {0} such that IZ = [0, N]Z, we have n = 1, system (Sλ) is definite on IZ, and the matrices M, L ∈ C2×2 are such that (7.23) holds. If Hypothesis 7.2.6 is satisfied, then identity (7.23) implies that either rank M = rank L = 2 or rank M = rank L = 1. Hence we get the following dichotomy on the boundary conditions in (7.24). Theorem 7.2.7. Let Hypothesis 7.2.6 be satisfied. Then the following statements hold. (i) A linear relation TM,L given through M, L ∈ C2×2 with rank M = 1 = rank L is a selfadjoint extension of Tmin if and only if TM,L = TP,Q := { {˜z, ˜f} ∈ Tmax | P ˆz0 = 0 = Q ˆzN+1 } , where P = ( cos α0 sin α0 0 0 ) and Q = ( 0 0 − sin αN+1 cos αN+1 ) (7.25) for a unique pair α0, αN+1 ∈ [0, π). (ii) A linear relation TM,L given through M, L ∈ C2×2 with rank M = 2 = rank L is a selfadjoint extension of Tmin if and only if TM,L = TR,β := { {˜z, ˜f} ∈ Tmax | eiβR ˆz0 = ˆzN+1 } with a unique β ∈ [0, π) and a symplectic matrix R ∈ R2×2. – 126 – 7.2. Main results Proof. Since the pairs of matrices P, Q and eiβR, I satisfy (7.23), Corollary 7.2.5 implies that the linear relations TP,Q and TR,β are self-adjoint extensions of Tmin. (i) Let TM,L be a linear relation given through M, L ∈ C2×2 satisfying (7.23) and with rank M = 1 = rank L. Since by (1.4) we have dim[Ran M ∩ Ran L] = 0, it follows that Mξ = Lη for some ξ, η ∈ C2 if and only if Mξ = 0 = Lη. Therefore the boundary conditions in (7.24) can be expressed as M ˆz0 = 0 = L ˆzN+1. The rank condition implies that M = ab⊤ and L = cd⊤ for some vectors a, b, c, d ∈ C2K{0}. Then the equality MJM∗ = 0 = LJL∗ does not depend on the vectors a, c and it is equivalent to b⊤Jb = 0 = d⊤Jd, which implies that b and d are (scalar) complex multiples of vectors from R2. Thus, without loss of generality, the vectors a, c may be chosen such that M, L can be written in the form as in (7.25) for some α0, αN+1 ∈ [0, π). The uniqueness follows from the fact that cotan β = cotan γ with β, ≥∈ (0, π) if and only if β = γ. (ii) Finally, let TM,L be a linear relation given through M, L ∈ C2×2 satisfying (7.23) and with rank M = 2 = rank L. Then the boundary conditions in (7.24) can be written as ˆzN+1 = K ˆz0, where K := L−1M. Upon applying the second equality in (7.23) we obtain that the matrix K is symplectic, i.e., KJK∗ = J. Therefore, K−1 = −JK∗J and | det K| = 1, i.e., det K = eiε for some ε ∈ [0, 2π), which implies K−1 = e−iεKadj = −eiεJK⊤J, i.e., K∗⊤ = K = eiε K. If we put R := e−iε/2 K, i.e., K = eiε/2 R, then R = R and det R = 1, i.e., R ∈ R2×2 is a symplectic matrix. Uniqueness can be verified by a direct calculation. ■ As an illustration of the last theorem we provide a description of the Krein–von Neumann extension of the minimal linear relation Tmin under Hypothesis 7.2.6. Example 7.2.8. Assume that system (Sλ) is such that Hypothesis 7.2.6 holds and that the minimal linear relation Tmin is positive, i.e., there exists c > 0 such that ⟨˜z, ˜f⟩ ≥ c|| ˜z||ψ for all {˜z, ˜f} ∈ Tmin. Then the Krein–von Neumann self-adjoint extension extension of Tmin admits the representation given in (A.14), i.e., TK = Tmin ∔ (Ker Tmax × {0}). We show that TK can be also expressed as in the second part of Theorem 7.2.7 with a suitable matrix R and a number β ∈ [0, 2π). By definition, Ker Tmax = { ˜z ∈ l2 ψ | {˜z, ˜0} ∈ Tmax } , i.e., ˆz solves system (S0), i.e., L (ˆz)k = 0 on [0, N]Z. 
Because all solutions of system (S0) are square summable in this case, the assumption of the definiteness of system (Sλ) implies that dim Ker Tmax = 2. If ˜z ∈ dom TK, then there exist ˜v ∈ dom Tmin and ˜r ∈ Ker Tmax such that ˜z = ˜v + ˜r or ˆzk = ˆvk + ˆrk for all k ∈ [0, N + 1]Z, (7.26) where ˆz ∈ ˜z, ˆv ∈ ˜v, and ˆr ∈ ˜r are the uniquely determined elements. Moreover, we have ˆv0 = 0 = ˆvN+1 by (6.82) and ˆrk = α[1] ˆr[1] k + α[2] ˆr[2] k for all k ∈ [0, N + 1]Z, where ˆr[1] and ˆr[2] form a basis of Ker Tmax. Let us define the matrix G = ( a b c d ) := (S0 × S1 × · · · × SN)−1 ∈ C2×2. Then one can easily conclude that the matrix G is symplectic and every solution z ∈ C([0, N + 1]Z)2 of system (S0) satisfies zN+1 = Gz0. (7.27) In the following construction we consider two cases: either b 0 or b = 0. – 127 – Chapter 7. Self-adjoint extensions First, assume that b 0. Then there exist two solutions of system (S0) such that ˆr[1] 0 = ( 0 1/b ) and ˆr[2] 0 = ( 1 −a/b ) . These solutions are obviously linearly independent and by (7.27) we have ˆr[1] N+1 = ( 1 a/b ) and ˆr[2] N+1 = ( 0 c − da/b ) . If we take these two solutions as a basis of Ker Tmax, then (7.26) yields ˆzk = ˆvk + α[1] ˆr[1] k + α[2] ˆr[2] k for all k ∈ [0, N + 1]Z. Upon evaluating ˆzk at k = 0 and k = N + 1 we obtain ˆz0 = ( α[2] α[1] /b − α[2] a/b ) and ˆzN+1 = ( α[1] α[1] d/b + α[2] c − α[2] da/b ) , which for ˆzk = ( ˆxk ˆuk ) implies α[1] = ˆxN+1 and α[2] = ˆx0. Therefore ( ˆxN+1 ˆxN+1 d/b + ˆx0 c − ˆx0 da/b ) = ˆzN+1 = G ˆz0 = G ( ˆx0 ˆxN+1/b − ˆx0 a/b ) . It means that ˆz ∈ dom TR,β, where β ∈ [0, π) is such that eiβ = √ ad − bc, and R = e−iβ G, i.e., TK ⊆ TR,β. On the other hand, the linear relations TK and TR,β are self-adjoint extensions of Tmin, thus TK = TR,β. Especially, if the coefficients a, b, c, d are real, then TR,β = TG,0. If b = 0, then G = ( a 0 c d ) with |ad| = 1, i.e., d 0. In this case we proceed in the same way with the basis of Ker Tmax given by the solutions ˜r[1] and ˜r[2] of (S0) such that ˆr[1] 0 = ( 0 1/d ) , ˆr[2] 0 = ( 1 −c/d ) . Then ( ˆx0 a ˆuN+1 ) = ˆzN+1 = G ˆz0 = G ( ˆx0 ˆuN+1/d−ˆx0 c/d ) . This shows (again) that TK = TR,β with β ∈ [0, π) being such that eiβ = √ ad, and R = e−iβ G. In particular, let Sk = ( 1 −bk 0 1 ) and ψk = ( wk 0 0 0 ) with bk > 0 and wk > 0 on [0, N]Z. This system is definite on [0, N]Z and corresponds to the second order Sturm–Liouville difference equation − [pk yk−1(λ)] = λwk yk(λ) with bk = 1/pk+1, see Example 6.2.4. Then G = ( 1 ∑N k=0 bk 0 1 ) and by the previous part we have TK =    {˜z, ˜f} ∈ Tmax | ˆz = ( ˆx ˆu ) ∈ C([0, N + 1]Z)2 , ˆu0 = ˆuN+1 =   N∑ k=0 bk   −1 × (ˆxN+1 − ˆx0)    . ▲ The boundary conditions in Theorem 7.2.7 include four particular cases. Namely, with the notation ˆzk = (ˆxk, ˆuk)⊤ we get for α0 = 0 and αN+1 = π/2 the Dirichlet boundary conditions ˆx0 = 0 = ˆxN+1, while for α0 = π/2 and αN+1 = 0 we have the Neumann boundary conditions ˆu0 = 0 = ˆuN+1. The choice R = I and β = 0 yields the periodic boundary conditions ˆz0 = ˆzN+1 and the choice R = I and β = π leads to the antiperiodic boundary conditions ˆz0 = −ˆzN+1. – 128 – 7.2. Main results In the first part of the following theorem we show that any self-adjoint extension of Tmin can be described by using the matrices determining the Dirichlet and Neumann boundary conditions. For convenience, we introduce the general boundary trace map γM,L : C(I+ Z )2 → C2 as γM,L(ˆz) := M ˆz0 − LˆzN+1, see also [38]. Then TM,L = { {˜z, ˜f} ∈ Tmax | γM,L(ˆz) = 0 } . 
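Returning briefly to Example 7.2.8, the matrix G introduced there is easy to compute and test numerically. The following sketch is an illustration only (it assumes NumPy, and the coefficients bk are arbitrary sample values): it forms G = (S0 × S1 × · · · × SN)−1 for Sk = (1, −bk; 0, 1), confirms that G = (1, Σ bk; 0, 1), verifies that G is symplectic, and checks that the pair (M, L) = (G, I) satisfies (7.23), in agreement with the identification TK = TG,0 for real coefficients.

```python
import numpy as np

# Numerical companion to Example 7.2.8 (illustration only; the coefficients b_k
# below are arbitrary sample values).  We form G = (S_0 S_1 ... S_N)^{-1} for
# S_k = [[1, -b_k], [0, 1]], confirm G = [[1, sum b_k], [0, 1]], check that G is
# symplectic, and check that (M, L) = (G, I) satisfies condition (7.23).

rng = np.random.default_rng(0)
b = rng.uniform(0.5, 2.0, size=8)                 # b_k > 0 on [0, N]_Z with N = 7 (sample data)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

prod = np.eye(2)
for bk in b:                                      # S_0 S_1 ... S_N
    prod = prod @ np.array([[1.0, -bk], [0.0, 1.0]])
G = np.linalg.inv(prod)

print(np.allclose(G, [[1.0, b.sum()], [0.0, 1.0]]))          # G = [[1, sum b_k], [0, 1]]
print(np.allclose(G @ J @ G.T, J))                           # G is symplectic
M, L = G, np.eye(2)                                          # boundary condition G z_0 = z_{N+1}
print(np.linalg.matrix_rank(np.hstack([M, L])) == 2,
      np.allclose(M @ J @ M.conj().T, L @ J @ L.conj().T))   # condition (7.23) holds
```

We now continue with the boundary trace map γM,L introduced above.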
Especially, for P, Q given in (7.25) we denote γx := γP,Q for α0 = 0, αN+1 = π/2, i.e., γx(ˆz) = 0 abbreviates the Dirichlet boundary conditions, and similarly γu := γP,Q for α0 = π/2, αN+1 = 0, i.e., γu(ˆz) = 0 abbreviates the Neumann boundary conditions. In the second part of this theorem we derive yet another equivalent representation of TM,L, which possesses the uniqueness property. Theorem 7.2.9. Let Hypothesis 7.2.6 be satisfied. Then the following hold. (i) A linear relation T is a self-adjoint extension of Tmin if and only if there exist matrices F, G ∈ C2×2 such that rank(F, G) = 2, FG∗ = GF∗ (7.28) and T = TF,G := { {˜z, ˜f} ∈ Tmax | Fγx(ˆz) + Gγu(ˆz) = 0 } . (7.29) (ii) We have TF,G = TF,G, where F, G satisfy (7.28), if and only if F = CF and G = CG for some invertible matrix C ∈ C2×2. (iii) A linear relation T is a self-adjoint extension of Tmin if and only if there exists a unitary matrix V ∈ C2×2 such that T = TV := { {˜z, ˜f} ∈ Tmax | i(V − I)γx(ˆz) = (V + I)γu(ˆz) } . (7.30) (iv) We have TV = TV, where V ∈ C2×2 is a unitary matrix, if and only if V = V. Proof. (i) Let T be given by (7.29) with the matrices F, G ∈ C2×2 satisfying (7.28). If we put M := FP0 + GPπ/2 and L := FQπ/2 + GQ0, where P0, Pπ/2 and Q0, Qπ/2 are the matrices corresponding to P, Q defined in (7.25). Then MJM∗ −LJL∗ = FG∗ −GF∗ = 0 and rank(F, G) = 2 is equivalent to rank(M, L) = 2. Hence M, L satisfy (7.23). Moreover, for the left-hand side of the boundary conditions in (7.29) we have Fγx(ˆz) + Gγu(ˆz) = γM,L(ˆz). Therefore {˜z, ˜f} ∈ TM,L if and only if {˜z, ˜f} ∈ TF,G, i.e., TF,G is a self-adjoint extension of Tmin by Corollary 7.2.5. On the other hand, let T be a self-adjoint extension of Tmin, i.e., T = TM,L with M, L ∈ C2×2 satisfying (7.23). If we put F := MP0 − LPπ/2 and G := LQ0 − MQπ/2, then the conditions in (7.28) hold and γM,L(ˆz) can be written as in (7.29). (ii) Sufficiency is clear. Assume that TF,G = TF,G for two pairs of matrices F, G and F, G satisfying (7.28). Then, by (7.29), we have for any {˜z, ˜f} ∈ Tmax that Fγx(ˆz) + Gγu(ˆz) = 0 if and only if F γx(ˆz) + Gγu(ˆz) = 0. It means that ˆz0, ˆzN+1 solve simultaneously the both systems of algebraic equations with the coefficient matrices F, G and F, G. It yields the equivalence of systems, which implies an existence of an invertible matrix C ∈ C2×2 such that F = CF and G = CG. (iii) Let T be given by (7.30) with a unitary matrix V ∈ C2×2. If we put F := i 2(I − V) and G := 1 2 (I + V). Then FG∗ = GF∗ and, by (1.2), rank(F, G) = 2, i.e., the matrices F, G satisfy (7.28). Since the boundary conditions in (7.30) are equivalent to the boundary conditions in (7.29) with F, G defined above, i.e., {˜z, ˜f} ∈ TF,G if and only if {˜z, ˜f} ∈ TV, it follows from the previous part that the linear relation TV is a self-adjoint extension – 129 – Chapter 7. Self-adjoint extensions of Tmin. On the other hand, let T be a self-adjoint extension of Tmin. Then, by the part (i), we have T = TF,G with F, G ∈ C2×2 satisfying (7.28). Since by (1.2) and (7.28) we have rank(F+iG) = 2, the matrix V := (F+iG)−1 (iG−F) is well-defined. One can directly verify that V is a unitary matrix and the boundary conditions Fγx(ˆz) + Gγu(ˆz) = 0 are satisfied if and only if i(V − I)γx(ˆz) − (V + I)γu(ˆz) = 0, i.e., TF,G = TV. (iv) If V = V, then TV = TV. On the other hand, assume that TV = TV for two unitary matrices V, V ∈ C2×2. Then TF,G = TV = TV = TF,G with the pairs of matrices F, G and F, G being given as in the previous part. 
Then V = (F + iG)−1(iG − F) and V = (F + iG)−1 (iG − F ) and by the part (ii) there exists an invertible matrix C ∈ C2×2 such that F = CF and G = CG. Upon combining these facts we obtain V = V. ■ Remark 7.2.10. (i) As a consequence of Theorem 7.2.7(i)-(ii) we obtain that TM,L = TM,L if and only if M = CM and L = CL for some invertible matrix C ∈ C2×2. (ii) Theorem 7.2.7(iii)-(iv) shows that the map from the set of all 2 × 2 unitary matrices to the set of all self-adjoint extensions expressed as in (7.30) is a bijection. 7.3 Proof of main result In this section, a proof is given for Theorem 7.2.1 which utilizes several arguments from the linear algebra and whose main idea goes back to [162]. It is based on a construction of a suitable GKN-set (see Theorem A.1) and on a more convenient expression than that given in (7.15) for elements in dom Tmax. Similar results for system (2.6) can be found in [135, Section 4]. Lemma 7.3.1. Let system (Sλ) be definite on the discrete interval IZ, {˜z, ˜f} ∈ Tmax be arbitrary, and φ[1] , . . . , φ[n+] be arranged such that equality (7.18) holds. Then the element ˆz can be uniquely expressed as ˆzk = ˆvk + 2n∑ i=1 ηi ˆz[i] k + p−2n∑ j=1 ζj φ [j] k , k ∈ I+ Z , (7.31) where ˆv ∈ dom Tmin, ˆz[1] , . . . , ˆz[2n] are specified in Lemma 6.4.3, and the numbers ηi, ζj ∈ C for all i ∈ {1, . . . , 2n} and j ∈ {1, . . . ,p − 2n}. Moreover, rank p−2n = p − 2n, (7.32) where was defined in (7.14). Proof. Since (7.18) is satisfied, there exists an invertible matrix P ∈ Cp×p such that P = ( Ip−2n 0(p−2n)×2n Q R ) , (7.33) where 0(p−2n)×2n stands for the (p − 2n) × 2n zero matrix. If we put = ( [1] , [2] ), where [1] ∈ C2n×(p−2n) and [2] ∈ C2n×2n, and multiply (7.33) by from the left, we obtain [1] = − [2] Q, i.e., = ( − [2] Q, [2] ) . It implies that rank [2] = 2n by the second inequality in (1.3), because rank = 2n. If we multiply equality (7.17) by ( [2] )⊤−1 from the right, we get Zk ( [2] )⊤−1 = Vk ( [2] )⊤−1 + Θ[1] k [1]⊤ ( [2] )⊤−1 + Θ[2] k , – 130 – 7.3. Proof of main result where Θ[1] k ∈ C2n×(p−2n) and Θ[2] k ∈ C2n×2n are such that (Θ+ k , Θ− k ) = (Θ[1] k , Θ[2] k ) for all k ∈ I+ Z . It shows that every solution φ[p−2n+1] , . . . , φ[p] can be uniquely expressed with ˆv[1] , . . . , ˆv[2n] , ˆz[1] , . . . , ˆz[2n] , and φ[1] , . . . , φ[p−2n] , i.e., φ [j] k = ˆr [j] k + 2n∑ ℓ=1 ηj,ℓ ˆz[ℓ] k + p−2n∑ s=1 ζj,s φ[s] k , k ∈ I+ Z , j ∈ {p − 2n + 1, . . . ,p}, (7.34) for some ˆr[j] ∈ dom Tmin and ηj,ℓ, ζj,s ∈ C. Therefore the expression in (7.31) follows from equality (7.15). Moreover, if we multiply both sides of (7.34) by φ[i]∗ k J from the left, where i ∈ {1, . . . ,p − 2n}, then (φ[i] , φ[j] )N+1 = (φ[i] , ˆv[j] )N+1 + 2n∑ ℓ=1 ηj,ℓ (φ[i] , ˆz[ℓ] )N+1 + p−2n∑ s=1 ζj,s (φ[i] , φ[s] )N+1. Hence from (6.81) and the definition of ˆz[i] we have [1,2] p−2n, n− = p−2n T⊤ , (7.35) where T ∈ Cn−×(p−2n) is a matrix consisting of the elements ζj,s for j ∈ {n+ + 1, . . . ,p} and s ∈ {1, . . . ,p − 2n}. Since the solutions are arranged such that rank [1,2] p−2n, n− = p − 2n, equality (7.32) follows from identity (7.35) and the second inequality in (1.3). ■ Remark 7.3.2. If we switch the role of s[·] (λ0) and s[·] (¯λ0) in the definition of φ[1] , . . . , φ[p] in (7.13), i.e., we put φ[i] = w[i] (¯λ0) for i ∈ {1, . . . , n−} and φ[j+n−] = v[j] (λ0) for j ∈ {1, . . . , n+}, then the solutions φ[1] , . . . , φ[n−] can be arranged such that (7.31) and (7.32) hold. Now we give the proof of Theorem 7.2.1. Proof of Theorem 7.2.1. Assume that T is a self-adjoint extension of Tmin. 
Then, by Theorem A.1, there exists a GKN-set {βj} q j=1 for (Tmin, Tmax) such that (A.12) holds. Since βj ∈ Tmax, they may be identified as βj = {w[j] , ˜h[j] } ∈ Tmax. By Lemma 7.3.1, the elements w[j] can be uniquely expressed as w [j] k = ˆv [j] k + 2n∑ i=1 ηj,i ˆz[i] k + 2q−2n∑ l=1 ζj,l φ[l] k , k ∈ I+ Z , (7.36) where ˆv[j] ∈ dom Tmin and ηj,i, ζj,l ∈ C. We next show that the matrices M := ( w[1] 0 , . . . , w [q] 0 )∗ J ∈ Cq×2n and L :=   ζ1,1 · · · ζ1,2q−2n ... ... ... ζq,1 · · · ζq,2q−2n   ∈ Cq×(2q−2n) satisfy the relations in (7.19). Since rank(M, L) ≤ q, let us assume that rank(M, L) < q. Then there exists a vector c = (c1, . . . , cq)⊤ ∈ CqK{0} such that c∗(M, L) = 0, i.e., c∗ M = 0 = c∗ L. If wk := ∑q j=1 cjw [j] k for k ∈ I+ Z , then we have w0 = JM∗ c = 0 and also (w, φ[i] )N+1 = ∑q j=1 cj (w[j] , φ[i] )N+1 for all i ∈ {1, . . . , 2q − 2n}. Hence by (7.36) and (6.81) we have ( (w, φ[1] )N+1, . . . , (w, φ[2q−2n] )N+1 ) = c∗ L 2q−2n = 0. – 131 – Chapter 7. Self-adjoint extensions But then (w, ˆv)N+1 = 0 for any ˆv ∈ dom Tmax, because it can be written as in (7.31). It means that w ∈ dom Tmin by (6.81) and hence β1, . . . , βq are linearly dependent in Tmax modulo Tmin, which contradicts the assumption that that {βj} q j=1 is a GKN-set. Therefore, the first condition in (7.19) is satisfied. Next, we see that   (w[1] , w[1] )0 · · · (w[1] , w[q] )0 ... ... ... (w[q] , w[1] )0 . . . (w[q] , w[q] )0   = MJM∗ (7.37) and by using (7.36), (6.81), and the definition of ˆz[i] , also see that   (w[1] , w[1] )N+1 · · · (w[1] , w[q] )N+1 ... ... ... (w[q] , w[1] )N+1 . . . (w[q] , w[q] )N+1   = L 2q−2n L∗ . (7.38) Since {βj} q j=1 is a GKN-set, we obtain from (6.67) that 0 = [βi : βj] = (w[i] , w[j] )k N+1 0 for all i, j ∈ {1, . . . , q}. By (7.37) and (7.38), this implies that MJM∗ − L 2q−2n L∗ = 0, and so the second condition in (7.19) is also satisfied. For any ˆz ∈ dom Tmax, we can write   (w[1] , ˆz)0 ... (w[q] , ˆz)0   = M ˆz0 and   (w[1] , ˆz)N+1 ... (w[q] , ˆz)N+1   = L   (φ[1] , ˆz)N+1 ... (φ[2q−2n] , ˆz)N+1   , (7.39) where the second equality follows from (7.36), (6.81), and the definition of ˆz[i] . Upon combining (A.12), (6.67), (7.39), we obtain that the linear relation T can be expressed as T = { {˜z, ˜f} ∈ Tmax (ˆz, w[j] )k N+1 0 = 0 for all j ∈ {1, . . . , q} } = { {˜z, ˜f} ∈ Tmax w [j]∗ k J ˆzk N+1 0 = 0 for all j ∈ {1, . . . , q} } =    {˜z, ˜f} ∈ Tmax M ˆz0 − L   (φ[1], ˆz)N+1 ... (φ[2q−2n], ˆz)N+1   = 0    , i.e., as written in (7.20). On the other hand, let M ∈ Cq×2n and L ∈ Cq×(2q−2n) satisfy (7.19) and T be the linear relation given by (7.20). We then must show that there exists a GKN-set {βj} q j=1 for (Tmin, Tmax) such that T can be expressed as in (A.12). Denote the columns of JM∗ ∈ C2n×q as ρ1, . . . , ρq and the columns of the matrix (φ[1] k , . . . , φ [2q−2n] k )L∗ ∈ C2n×q as w[1] k , . . . , w [q] k , i.e., ρi := JM∗ ei and w[i] k := 2q−2n∑ l=1 ηi,l φ[l] k for all i ∈ {1, . . . , q}, (7.40) – 132 – 7.3. Proof of main result where ei is the i-th canonical unit vector in Cq and ηi,j are the elements of the matrix L for i ∈ {1, . . . , q} and j ∈ {1, . . . , 2q − 2n}. Then w[i] ∈ Tmax for all i ∈ {1, . . . , q} and, by Lemma 6.4.3, there exist βi := {˜r[i] , ˜h[i] } ∈ Tmax such that ˆv[i] 0 = ρi and ˆv[i] k = w[i] k , k ∈ [b + 1, ∞)Z ∩ I+ Z for all i ∈ {1, . . . 
, q}, where the number b is determined by the finite discrete interval ID Z := [a, b]Z ⊆ IZ with a, b ∈ IZ on which system (Sλ) is definite. We next show that {βi} q i=1 form a GKN-set for (Tmin, Tmax). Since the linear independence of β1, . . . , βq in Tmax modulo Tmin is equivalent to the linear independence of ˆv[1] , . . . , ˆv[q] in dom Tmax modulo Tmin, we assume that there exists C = (c1, . . . , cq)⊤ ∈ CqK{0} such that ˆv := q∑ j=1 cj ˆv[j] ∈ dom Tmin. Then, from (6.81) and (7.40), we have for all φ[1] , . . . , φ[2q−2n] ∈ Tmax that 0 = ( (ˆv, φ[1] )N+1, . . . , (ˆv, φ[2q−2n] )N+1 ) = C∗ L 2q−2n. This implies C∗ L = 0, because 2q−2n is assumed to be invertible, see (7.32). Simultaneously we have ˆv0 = 0, which yields 0 = ˆv0 = q∑ j=1 cj ˆv [j] 0 = JM∗ C, i.e., C∗ M = 0, because the matrix J is invertible. But this means C∗ (M, L) = 0, which contradicts the first assumption in (7.19). Next, let Yk :=   (ˆv[1] , ˆv[1] )k · · · (ˆv[1] , ˆv[q] )k ... ... ... (ˆv[q] , ˆv[1] )k · · · (ˆv[q] , ˆv[q] )k   . Since it can be directly calculated that Y0 = MJM∗ and YN+1 = L 2q−2n L∗, the second equality in (7.19) implies Y0 − YN+1 = 0. Therefore, by using (6.67), we get [βi : βj] = (ˆv[i] , ˆv[j] )k N+1 0 = 0, which shows that {βi} q i=1 is a GKN-set for (Tmin, Tmax) as defined in the Appendix. Finally, let {w, ˜g} ∈ Tmax be arbitrary, then Mw0 =   (ˆv[1] , w)0 ... (ˆv[q] , w)0   and L   (φ[1] , w)N+1 ... (φ[2q−2n] , w)N+1   =   (ˆv[1] , w)N+1 ... (ˆv[q] , w)N+1   . (7.41) By (6.67) the condition [{w, ˜g} : βi] = 0 is equivalent to (w, ˆv[i] )k N+1 0 = 0 = −(ˆv[i] , w)k N+1 0 (7.42) – 133 – Chapter 7. Self-adjoint extensions for all i ∈ {1, . . . , q}. Hence, by (7.41), we see that (7.42) can be written as Mw0 − L   (φ[1] , w)N+1 ... (φ[2q−2n] , w)N+1   = 0. Therefore the linear relation T given in (7.20) can be equivalently expressed as in (A.12), which means that T is a self-adjoint extension of Tmin. ■ The simplification of Theorem 7.2.1 in the limit circle case is based on the following lemma. Lemma 7.3.3. Let system (Sλ) be definite on the discrete interval IZ and φ[1] , . . . , φ[n+] be arranged as in Lemma 7.3.1. Assume that there exists a number ν ∈ R such that system (Sν) possesses q := max{n+, n−} linearly independent square summable solutions (suppressing the argument ν) given by θ[1] , . . . , θ[q] . Then these solutions can be arranged such that rank ϒp−2n = p−2n, where ϒ :=   (θ[1] , θ[1] )N+1 . . . (θ[1] , θ[q] )N+1 ... ... ... (θ[q] , θ[1] )N+1 . . . (θ[q] , θ[q] )N+1   ∈ Cq×q . Moreover, for any {˜z, ˜f} ∈ Tmax the element ˆz can be uniquely expressed as ˆzk = ˆvk + 2n∑ i=1 αi ˆz[i] k + p−2n∑ j=1 βj θ [j] k , k ∈ I+ Z , where ˆv ∈ dom Tmin, ˆz[1] , . . . , ˆz[2n] are given in Lemma 6.4.3, and αi, βj ∈ C for all i ∈ {1, . . . , 2n} and j ∈ {1, . . . ,p − 2n}. Proof. Since θ[1] , . . . , θ[q] ∈ dom Tmax, Lemma 7.3.1 implies that there exist unique numbers αi,j, βi,ℓ ∈ C such that θ[i] k = ˆv[i] k + 2n∑ j=1 αi,j ˆz [j] k + p−2n∑ ℓ=1 βi,ℓ φ[ℓ] k , k ∈ I+ Z , (7.43) where i ∈ {1, . . . , q}. Then the definition of ˆz[i] and identity (6.81) yield ϒ = B p−2n B∗ , (7.44) where the matrix B = ( βi,j ) ∈ Cq×(p−2n). Hence rank ϒ ≤ p − 2n by the first inequality in (1.3). On the other hand, by the Wronskian-type identity in (6.12) we have ϒ = ∗ 0 J 0, where k := (θ[1] k , . . . , θ [q] k ). Since the solutions θ[1] k , . . . 
, θ [q] k are linearly independent, we have rank k = q for all k ∈ I+ Z , and hence rank ϒ ≥ p−2n by the second inequality in (1.3). Therefore rank ϒ = p − 2n, which implies that the solutions k := (θ[1] k , . . . , θ [q] k ) can be arranged such that rank ϒp−2n = p − 2n. In this case, the invertibility of the submatrix Bp−2n follows from the equality ϒp−2n = Bp−2n p−2n B∗ p−2n , which is obtained analogously to (7.44). Since from (7.43) we have (θ[1] k , . . . , θ[p−2n] k ) = (ˆv[1] k , . . . , ˆv[p−2n] k ) + (ˆz[1] k , . . . , ˆz[2n] k )A∗ 2n,p−2n + (φ[1] k , . . . , φ[p−2n] k )B∗ q−2n, where A = ( αi,j ) ∈ Cq×2n, the invertibility of Bp−2n means that φ[1] k , . . . , φ[p−2n] k can be uniquely expressed by using θ[1] k , . . . , θ[p−2n] k , ˆv[1] k , . . . , ˆv[p−2n] k , and ˆz[1] k , . . . , ˆz[2n] k . Upon combining these expressions with (7.31) we obtain the second part of the statement. ■ – 134 – 7.4. Bibliographical notes 7.4 Bibliographical notes The results of this chapter were published in [A18]. Their generalization to symplectic systems on time scales is one of the goals of the current research as well as an extension of Theorem 7.1.1 to the discrete symplectic systems with B∗ k Ck 0. The topic of the present section is also closely related to the characterization of the spectrum of self-adjoint linear relations, which represents another goal of our future research. – 135 – Chapter 7. Self-adjoint extensions – 136 – Appendix Linear relations ... the theoryoflinearrelationwouldseemto havethepotential for contributing to the enrichment and clarification of many aspects of operator theory, including those concerned with non-closable or non-densely-defined linear operators. Ronald Cross, see [45, pg. iii] In this supplementary chapter we recall several results from the theory of linear relations, which are relevant to the content of Chapters 6 and 7. The theory of linear relations has been established as a suitable tool for the study of multivalued or nondensely defined linear operators in a Hilbert space. Its history goes back to [8] and the results were further developed, e.g., in [42,45,46,82]. A (closed) linear relation T in a Hilbert space H over the field of complex numbers C with the inner product ⟨·, ·⟩ is a (closed) linear subspace of the product space H2 := H × H, i.e., the Hilbert space of all ordered pairs {z, f } such that z, f ∈ H. The domain, range, kernel, and the multivalued part of T are respectively defined as dom T := {z ∈ H | {z, f } ∈ T }, (A.1) Ran T := { z ∈ H | there exists f ∈ H such that {z, f } ∈ T } , (A.2) Ker T := {z ∈ H | {z, 0} ∈ T }, mul T := { f ∈ H | {0, f } ∈ T } . (A.3) In general, we let T (z) := {f ∈ H | {z, f } ∈ T }, and note that a linear relation T is the graph of a linear operator in H when T (0) = {0}, i.e., when the subspace mul T is trivial. The inverse of T , denoted as T −1, is the linear relation T −1 := { {f , z} | {z, f } ∈ T } and it satisfies dom T −1 = Ran T , Ran T −1 = dom T , Ker T −1 = mul T , and mul T −1 = Ker T . By T we mean the closure of T . The sum T + U and the algebraic sum T ∔ U are defined as T + U := { {z, f + g} | {z, f } ∈ T , {z, g} ∈ U } , T ∔ U := { {z + y, f + g} | {z, f } ∈ T , {y, g} ∈ U } . – 137 – Appendix The adjoint T ∗ of the linear relation T is the closed linear relation given by T ∗ := { {y, g} ∈ H2 | ⟨z, g⟩ = ⟨f , y⟩ for all {z, f } ∈ T } . (A.4) The definition of T ∗ reduces to the standard definition for the graph of the adjoint operator when T is a densely defined operator. 
The adjoint linear relation $T^{*}$ satisfies
\[
  T^{*} = (\overline{T})^{*}, \qquad T^{**} = \overline{T}, \qquad
  \operatorname{Ker} T^{*} = (\operatorname{Ran} T)^{\perp} = (\operatorname{Ran} \overline{T})^{\perp}, \qquad
  (\operatorname{dom} T)^{\perp} = \operatorname{mul} T^{*}.
  \tag{A.5}
\]
A linear relation $T$ is said to be symmetric (or Hermitian) if $T \subseteq T^{*}$, and it is said to be self-adjoint if $T^{*} = T$. It is easily seen that $T$ is a symmetric linear relation if and only if $\langle z, g \rangle = \langle f, y \rangle$ for all $\{z, f\}, \{y, g\} \in T$. A symmetric linear relation $T_1$ is said to be a self-adjoint extension of $T$ if $T \subseteq T_1$ and $T_1^{*} = T_1$.

For $\lambda \in \mathbb{C}$ and a linear relation $T$ we define the linear relation
\[
  T - \lambda I := \{ \{z, f - \lambda z\} \in H^2 \mid \{z, f\} \in T \}
  \tag{A.6}
\]
with the property $(T - \lambda I)^{*} = T^{*} - \bar{\lambda} I$. Then
\[
  M_{\lambda}(T) := \operatorname{Ker}(T^{*} - \lambda I) = \{ z \in H \mid \{z, \lambda z\} \in T^{*} \}
  \tag{A.7}
\]
is said to be the defect subspace of $T$ and $\lambda$. Its dimension, i.e., the number
\[
  d_{\lambda}(T) := \dim \operatorname{Ker}(T^{*} - \lambda I),
  \tag{A.8}
\]
is said to be the deficiency index of $T$ and $\lambda$. Since $\operatorname{Ran}(T - \bar{\lambda} I)^{\perp} = \operatorname{Ker}(T^{*} - \lambda I)$, the deficiency indices of $T$ and $\overline{T}$ with the same $\lambda$ are equal by (A.5), see [143, Lemma 2.4]. If $T$ is a symmetric linear relation, the values of $d_{\lambda}(T)$ are constant in the open upper and lower half-planes of $\mathbb{C}$, see [143, Theorem 2.13]. Hence we define the positive and negative deficiency indices as $d_{\pm}(T) := d_{\pm i}(T)$. If $T$ is a closed symmetric linear relation, then for every $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the following direct sum decomposition (a generalization of the von Neumann formula)
\[
  T^{*} = T \dotplus \widehat{M}_{\lambda}(T) \dotplus \widehat{M}_{\bar{\lambda}}(T)
  \tag{A.9}
\]
holds, where $\widehat{M}_{\lambda}(T) := \{ \{z, \lambda z\} \mid \{z, \lambda z\} \in T^{*} \}$ and the sum $\dotplus$ is orthogonal for $\lambda = \pm i$, see [116, Proposition 2.22]. A closed symmetric linear relation $T$ possesses a self-adjoint extension if and only if the positive and negative deficiency indices are equal, i.e., $d_{+}(T) = d_{-}(T)$, see [42, Corollary, pg. 34]. Moreover, it was shown in [116, Lemma 2.25] that
\[
  d_{\lambda}(T) \leq d_{\pm}(T)
  \tag{A.10}
\]
whenever $\lambda \in \mathbb{R}$ and $\operatorname{Ker}(T - \lambda I) = \{0\}$.

Since the characterization of self-adjoint extensions of the minimal linear relation associated with system (Sλ), see Chapter 7, is derived by applying the Glazman–Krein–Naimark theory for linear relations, we recall the most fundamental parts of this theory, see [143] for more details. A complex linear space $S$ with a complex-valued function $[\,:\,] : S \times S \to \mathbb{C}$ is called pre-symplectic if it possesses the conjugate bilinear and skew-Hermitian properties, i.e., for all $P, Q, R \in S$ and $\alpha \in \mathbb{C}$ we have
\[
  [P : Q + R] = [P : Q] + [P : R], \qquad [P + Q : R] = [P : R] + [Q : R],
\]
\[
  [\alpha P : Q] = \alpha\, [P : Q], \qquad [P : \alpha Q] = \bar{\alpha}\, [P : Q], \qquad
  [P : Q] = -\overline{[Q : P]},
\]
see [72] for more details. If we put $S = H^2$ and $[\{z, f\} : \{u, g\}] := \langle f, u \rangle - \langle z, g \rangle$ for $\{z, f\}, \{u, g\} \in H^2$, then $S$ and $[\,:\,]$ form a pre-symplectic space. For a symmetric linear relation $T \subseteq H^2$ we have $[T : T] = 0 = [T : T^{*}]$ and
\[
  \overline{T} = \{ \{z, f\} \in T^{*} \mid [\{z, f\} : T^{*}] = 0 \},
  \tag{A.11}
\]
see [143, Theorem 3.5]. If, in addition, the linear relation $T$ is closed and $d := d_{+}(T) = d_{-}(T)$, then a set $\{\beta_j\}_{j=1}^{d}$ with $\beta_j \in T^{*}$ for $j \in \{1, \dots, d\}$ such that
1. $\beta_1, \dots, \beta_d$ are linearly independent in $T^{*}$ modulo $T$,
2. $[\beta_j : \beta_i] = 0$ for all $i, j \in \{1, \dots, d\}$,
is called a GKN-set for the pair of linear relations $(T, T^{*})$. The following theorem provides necessary and sufficient conditions for a linear relation $T_1 \subseteq H^2$ to be a self-adjoint extension of $T$, see [143, Theorem 4.7].

Theorem A.1. Let $T \subseteq H^2$ be a closed symmetric linear relation such that $d_{+}(T) = d_{-}(T) = d$. A subspace $T_1 \subseteq H^2$ is a self-adjoint extension of $T$ if and only if there exists a GKN-set $\{\beta_j\}_{j=1}^{d}$ for $(T, T^{*})$ such that
\[
  T_1 = \{ F \in T^{*} \mid [F : \beta_j] = 0 \text{ for all } j = 1, \dots, d \}.
\]
(A.12) Finally, a linear relation T is called semibounded below, if there exists a ∈ R such that ⟨z, f ⟩ ≥ a⟨z, z⟩ for all {z, f } ∈ T . (A.13) The number m(T ) := sup{a ∈ R | inequality (A.13) holds} is called the lower bound of T . If m(T ) > 0, the linear relation T is said to be positive. Then, by analogy with the case of densely defined positive symmetric operators (see [43, Theorem 5]), the smallest and largest self-adjoint extensions of a positive symmetric linear relation are respectively known as the Krein–von Neumann (or soft) extension TK and the Friedrichs (or hard) extension TF. In particular, if T is closed and m(T ) > 0, then the Krein–von Neumann extension admits the representation TK = T ∔ (Ker T ∗ × {0}), (A.14) see [43, Corollary 1] and also [84]. – 139 – Appendix – 140 – List of symbols We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations. Richard Phillips Feynman, see [78, pg. 17-7] The items in the following list are sorted by their pronunciation or LATEX command. The number refers to the page with the definition (or the first occurrence) of the symbol. A Madj (adjugate) . . . 3 B S⊥ (orthogonal complement) . . . 4 C Ck(λ) (Weyl circle for (Sλ)) . . . 74 Ck(λ) (Weyl circle for JVE) . . . 44 C (complex numbers) . . . 3 C(IZ)r×s, C(IZ)r (sequences over IZ) . . . 5 Cr×s (r × s matrices) . . . 3 Cr (r-dimensional vectors) . . . 3 C+ (upper half-plane of C) . . . 3 C− (lower half-plane of C) . . . 3 C0(IZ)r×s (compactly supported) . . . 5 Ck(λ) (Weyl circle for (Sλ)) . . . 20 codim S (codimension) . . . 4 CΨ (set for (Sλ)) . . . 73 CΨ,N (set for (Sλ)) . . . 73 D D+(λ) (limiting Weyl disk for (Sλ)) . . . 75 Dk(λ) (Weyl disk for (Sλ)) . . . 74 D+(λ) (limiting Weyl disk for JVE) . . . 45 Dk(λ) (Weyl disk for JVE) . . . 44 δ(·) (sign of imaginary part) . . . 3 zk (forward difference) . . . 5 det M (determinant) . . . 3 diag{·} (diagonal matrix) . . . 3 dim Ran M (dimension of range) . . . 3 Dk(λ) (Weyl disk for (Sλ)) . . . 20 D+(λ) (limiting Weyl disk for (Sλ)) . . . 24 E Ek(λ, M) (E(M)-function for (Sλ)) . . . 74 Ek(M) (E(M)-function for for JVE) . . . 44 Ek(M) (E(M)-function for (Sλ)) . . . 19 exp(M) (matrix exponential) . . . 5 Ek(M) (E(M)-function for (Sλ)) . . . 100 zk|n m . . . 5 F Fk(λ) (matrix for Weyl disk Dk(λ) ) . . . 21 – 141 – List of symbols G Gk(λ) (matrix for Weyl disk Dk(λ)) . . . 74 Gk(λ) (matrix for Weyl disk Dk(λ)) . . . 44 (set for boundary conditions) . . . 16 Gk(λ) (matrix for Weyl disk Dk(λ)) . . . 21 Γ (boundary conditions for JVE) . . . 41 Gk,s(λ) (Green function for (Sλ)) . . . 101 H Hk(λ) (matrix for Weyl disk Dk(λ)) . . . 74 Hk(λ) (matrix for Weyl disk Dk(λ)) . . . 44 Hk(λ) (matrix for Weyl disk Dk(λ)) . . . 21 H(t) (coefficient matrix for (2.5)) . . . 12 Hk (coefficient matrix for (2.6)) . . . 12 (H R λ ) (system for LC-invariance) . . . 55 (H R λ ) (system for LC-invariance) . . . 55 (Hλ) (system for LC-invariance) . . . 85 (Hλ) (system for LC-invariance) . . . 85 I im(·) (imaginary part) . . . 3 ⟨·, ·⟩ (semi-inner product for (Sλ)) . . . 26 ⟨·, ·⟩Ψ(λ) (semi-inner product for (Sλ)) . . . 73 ⟨·, ·⟩ ,N (finite semi-inner product) . . . 17 ⟨·, ·⟩ψ (semi-inner product for (Sλ)) . . . 92 IZ, I+ Z (discrete interval) . . . 5 J J . . . 5 K Ker M (kernel) . . . 3 Kλ (map for ˜l2 ψ and ˜l2 ψ,1 ) . . . 110 L Λk(λ, ¯ν) (matrix for (Sλ)) . . . 67 ℓ2 (space for (Sλ)) . . . 53 ℓ2 (space for (Sλ)) . . . 58 ℓ2 (space for (Sλ)) . . . 
26 ℓ2 Ψ(λ) (space for (Sλ)) . . . 75 ℓ2 W (space with partial shift) . . . 76 L (z)k (natural map for (Sλ)) . . . 91 l2 ψ (space for (Sλ)) . . . 92 l2 ψ,1 (space for (Sλ)) . . . 107 l2 ψ,0 (space for (Sλ)) . . . 107 ˜l2 ψ (Hilbert space for (Sλ)) . . . 106 ˜l2×2 ψ (space for (Sλ)) . . . 106 M Mk(λ) (W-T function for JVE) . . . 43 Mk(λ) (W–T function) . . . 18 M+(λ) (half-line W–T function) . . . 25 N N(λ) (ℓ2 Ψ(λ) -solutions of (Sλ)) . . . 76 N(λ) (ℓ2 -solutions of (Sλ)) . . . 53 nλ (dimension of Nλ) . . . 115 Nλ (l2 ψ -solutions of (Sλ)) . . . 115 N (natural numbers) . . . 3 N0 (natural numbers and zero) . . . 3 N(λ) (ℓ2 -solutions of (Sλ)) . . . 26 ||·|| (semi-norm for (Sλ)) . . . 53 ||·||2 (Euclidean vector norm) . . . 4 ||·||1 (H¨older norm) . . . 4 ||·|| (semi-norm for (Sλ)) . . . 26 ||·||Ψ(λ) (semi-norm for (Sλ)) . . . 73 ||·||σ (spectral norm) . . . 4 ||·||ψ (semi-norm for (Sλ)) . . . 92 ˜nλ (defect index for Tmin) . . . 115 Nλ (defect subspace for Tmin) . . . 115 kλ (space associated with Nλ) . . . 115 N0 (index for Hypothesis 2.3.7) . . . 22 N1 (index for Hypothesis 2.3.13) . . . 24 – 142 – List of symbols N2 (index for Hypothesis 2.3.15) . . . 25 N3 (index for Hypothesis 2.4.11) . . . 33 N4 (index for Hypothesis 5.3.2) . . . 74 O M, ¯λ (conjugate) . . . 3 P Pk(λ) (center of Dk(λ)) . . . 75 P+(λ) (center of D+(λ)) . . . 75 k(λ) (fundamental matrix for (Sλ)) . . . 73 Ψk(λ) (weight matrix for (Sλ)) . . . 69 P+(λ) (center of D+(λ)) . . . 45 Pk(λ) (center of Dk(λ)) . . . 45 k(λ) (fundamental matrix for (Sλ)) . . . 49 k (weight matrix for (Sλ)) . . . 49 Φk(λ) (fundamental matrix for JVE) . . . 42 k (weight matrix for (Sλ)) . . . 55 k(λ) (fundamental matrix for (Sλ)) . . . 16 π(z) (quotient space map for ˜l2 ψ ) . . . 106 Pk(λ) (center of Dk(λ)) . . . 22 P+(λ) (center of D+(λ)) . . . 23 k (weight matrix for (Sλ)) . . . 11 k (weight matrix for (Sλ)) . . . 57 ψk (weight matrix for (Sλ)) . . . 89 M > 0 (positive definite) . . . 3 M ≥ 0 (positive semidefinite) . . . 3 z[s] k (λ) (partial shift) . . . 71 Q (Qλ) (system for LC-invariance) . . . 80 (Qλ) (system for LC-invariance) . . . 79 (Qλ) (system for LC-invariance) . . . 80 R Rk(λ) (radius of Dk(λ)) . . . 75 R+(λ) (radius of D+(λ)) . . . 75 R+(λ) (radius of D+(λ)) . . . 45 Rk(λ) (radius of Dk(λ)) . . . 45 Ran M (range) . . . 3 rank M (rank) . . . 3 R (real numbers) . . . 3 re(·) (real part) . . . 3 Rk(λ) (radius of Dk(λ)) . . . 22 r(λ) (rank of R+(λ)) . . . 28 R+(λ) (radius of D+(λ)) . . . 23 S Sk(λ) (coefficient matrix for (Sλ)) . . . 65 Sk (coefficient matrix for (Sλ)) . . . 49 Sk (coefficient matrix for (Sλ)) . . . 55 5k(λ) (coefficient matrix for (Sλ)) . . . 11 Sk (coefficient matrix for (Sλ)) . . . 11 S[·] k (coefficient matrix for (Sλ)) . . . 65 sprad M (spectral radius) . . . 3 pk(λ) (coefficient matrix for (Sλ)) . . . 89 Sk (coefficient matrix for (Sλ)) . . . 89 Sk (coefficient matrix for (Sλ)) . . . 55 M∗, M∗(·) (conjugate transpose) . . . 3 Mp,q (p × q submatrix) . . . 3 Mp (p × p submatrix) . . . 3 (Sλ) (system linear in λ) . . . 11 (Sλ) (augmented system (Sλ)) . . . 49 (Eλ) (equation for LC-invariance) . . . 62 (Eλ) (equation for LC-invariance) . . . 62 (Sλ) (system for LC-invariance) . . . 55 (Sλ) (system for LC-invariance) . . . 55 (Sλ) (system analytic in λ) . . . 65 (S f λ ) (nonhomogeneous system) . . . 92 (Sλ) (time-reversed version of (Sλ)) . . . 89 T T k(λ) (shift matrix for (Qλ)) . . . 82 – 143 – List of symbols Tmax (maximal linear relation) . . . 107 Tmin (minimal linear relation) . . . 
113 TM,L (self-adjoint extension of Tmin) . . . 126 T0 (pre-minimal linear relation) . . . 107 TP,Q (self-adjoint extension of Tmin) . . . 126 Tk(λ) (shift matrix for (5.17)) . . . 71 tr M (trace) . . . 3 TR,β (self-adjoint extension of Tmin) . . . 126 Θk(λ) (fundamental matrix for (Sλ)) . . . 93 T k(λ) (shift matrix for (Qλ)) . . . 82 ϑ(λ, IZ) (matrix for (Sλ)) . . . 98 M⊤, M⊤(·) (transpose) . . . 3 TK (K–vN extension of Tmin) . . . 127 U 777 (set of 4n × 4n unitary matrices) . . . 45 U (set of 2n × 2n unitary matrices) . . . 22 V 888 (set of 4n×4n contractive matrices) . . . 45 Vk (coefficient matrix for (Sλ)) . . . 49 Vk (coefficient matrix for (Sλ)) . . . 55 Vk (coefficient matrix for (Sλ)) . . . 89 Vk (coefficient matrix for (Sλ)) . . . 55 V (set of 2n×2n contractive matrices) . . . 22 Vk (coefficient matrix for (Sλ)) . . . 11 W Wk (weight matrix for (5.17)) . . . 71 W(t) (weight matrix for (2.5)) . . . 12 Wk (weight matrix for (2.6)) . . . 12 X Xk(λ, M) (Weyl solution for (Sλ)) . . . 74 Xk(λ) (Weyl solution for JVE) . . . 43 Xk(λ) (Weyl solution for (Sλ)) . . . 100 Xk(λ) (Weyl solution for (Sλ)) . . . 17 Z Zk(λ) (second half of k(λ)) . . . 73 Zk(λ) (second half of k(λ)) . . . 51 Zk(λ) (second half of Θk(λ)) . . . 100 Zk(λ) (second half of k(λ)) . . . 16 Z (integers) . . . 3 – 144 – List of author’s publications (as of August 22, 2016) Every mathematical discipline goes through three periods of development: the naive, the formal, and the critical. David Hilbert, see [131, p. 240] Items [A1, A2, A4–A21] are indexed in the MathSciNet Database. Moreover, items [A1,A6–A9] were included in the author’s doctoral dissertation [A24]. 2009 [A1] R. Hilscher and P. Zem´anek, Trigonometric and hyperbolic systems on time scales, Dynam. Systems Appl. 18 (2009), no. 3-4, 483–505. (Cited on page 145.) [A2] R. ˇSimon Hilscher and P. Zem´anek, Definiteness of quadratic functionals for Hamiltonian and symplectic systems: a survey, Int. J. Difference Equ. 4 (2009), no. 1, 49–67. (Cited on page 145.) [A3] P. Zem´anek, Discrete trigonometric and hyperbolic systems: An overview, Ulmer Seminare ¨uber Funktionalanalysis und Differentialgleichungen 14 (2009), 345–359. (Cited on page 6.) 2010 [A4] S. L. Clark and P. Zem´anek, On a Weyl–Titchmarsh theory for discrete symplectic systems on a half line, Appl. Math. Comput. 217 (2010), no. 7, 2952–2976. (Cited on pages 1, 4, 13, 16, 18, 19, 22, 25, 26, 27, 33, 38, 41, 50, 100, 101, 102, 105, and 145.) [A5] R. ˇSimon Hilscher and P. Zem´anek, Friedrichs extension of operators defined by linear Hamiltonian systems on unbounded interval, Math. Bohem. 135 (2010), no. 2, 209–222. (Cited on pages 119 and 145.) 2011 [A6] P. Hasil and P. Zem´anek, Critical second order operators on time scales, Discrete Contin. Dyn. Syst. 2011 (2011), suppl., 653–659. (Cited on page 145.) – 145 – List of author’s publications [A7] R. ˇSimon Hilscher and P. Zem´anek, Weyl–Titchmarsh theory for time scale symplectic systems on half line, Abstr. Appl. Anal. 2011 (2011), Art. ID 738520, 41 pp. (electronic). (Cited on pages 14 and 145.) [A8] R. ˇSimon Hilscher and P. Zem´anek, Overview of Weyl–Titchmarsh theory for second order Sturm–Liouville equations on time scales, Int. J. Difference Equ. 6 (2011), no. 1, 39–51. (Cited on page 145.) [A9] P. Zem´anek, Krein-von Neumann and Friedrichs extensions for second order operators on time scales, Int. J. Dyn. Syst. Differ. Equ. 3 (2011), no. 1–2, 132–144. (Cited on page 145.) 2012 [A10] R. ˇSimon Hilscher and P. 
Zem´anek, New results for time reversed symplectic dynamic systems and quadratic functionals, Electron. J. Qual. Theory Differ. Equ. (2012), no. 15, 11 pp. (electronic). (Cited on page 145.) [A11] P. Zem´anek, Rofe–Beketov formula for symplectic systems, Adv. Difference Equ. 2012 (2012), no. 104, 9 pp. (electronic). (Cited on page 145.) [A12] P. Zem´anek and P. Hasil, Friedrichs extension of operators defined by Sturm–Liouville equations of higher order on time scales, Appl. Math. Comput. 218 (2012), no. 22, 10829–10842. (Cited on page 145.) 2013 [A13] R. ˇSimon Hilscher and P. Zem´anek, Weyl disks and square summable solutions for discrete symplectic systems with jointly varying endpoints, Adv. Difference Equ. 2013 (2013), no. 232, 18 pp. (electronic). (Cited on pages 1, 2, 54, and 145.) [A14] P. Zem´anek, A note on the equivalence between even-order Sturm–Liouville equations and symplectic systems on time scales, Appl. Math. Lett. 26 (2013), no. 1, 134–139. (Cited on page 145.) 2014 [A15] R. ˇSimon Hilscher and P. Zem´anek, Weyl–Titchmarsh theory for discrete symplectic systems with general linear dependence on spectral parameter, J. Difference Equ. Appl. 20 (2014), no. 1, 84–117. (Cited on pages 1, 39, and 145.) [A16] R. ˇSimon Hilscher and P. Zem´anek, Limit point and limit circle classification for symplectic systems on time scales, Appl. Math. Comput. 233 (2014), 623–646. (Cited on pages 1, 2, 14, 39, 54, and 145.) [A17] R. ˇSimon Hilscher and P. Zem´anek, Generalized Lagrange identity for discrete symplectic systems and applications in Weyl–Titchmarsh theory, in “Theory and Applications of Difference Equations and Discrete Dynamical Systems”, Proceedings of the 19th International Conference on Difference Equations and Applications (Muscat, 2013), – 146 – List of author’s publications Z. AlSharawi, J. Cushing, and S. Elaydi (editors), Springer Proceedings in Mathematics & Statistics, Vol. 102, pp. 187–202, Springer, Berlin, 2014. (Cited on pages 1, 2, 87, 90, and 145.) 2015 [A18] S. L. Clark and P. Zem´anek, On discrete symplectic systems: Associated maximal and minimal linear relations and nonhomogeneous problems, J. Math. Anal. Appl. 421 (2015), no. 1, 779–805. (Cited on pages 1, 2, 89, 117, 135, and 145.) [A19] R. ˇSimon Hilscher and P. Zem´anek, Limit circle invariance for two differential systems on time scales, Math. Nachr. 288 (2015), no. 5-6, 696–709. (Cited on pages 1, 2, 63, and 145.) [A20] R. ˇSimon Hilscher and P. Zem´anek, Time scale symplectic systems with analytic dependence on spectral parameter, J. Difference Equ. Appl. 21 (2015), no. 3, 209–239. (Cited on pages 1, 2, 73, 87, and 145.) 2016 [A21] P. Zem´anek and S. L. Clark, Characterization of self-adjoint extensions for discrete symplectic systems, J. Math. Anal. Appl. 440 (2016), no. 1, 323–350. (Cited on pages 1, 2, 89, 117, and 145.) Accepted [A22] P. Zem´anek, Limit point criteria for second order Sturm–Liouville equations on time scales, in “Differential and Difference Equations with Applications”, Proceedings of the International Conference on Differential & Difference Equations and Applications 2015 (Amadora, 2015), S. Pinelas, O. Doˇsl´y, Z. Doˇsl´a, and P. Kloeden (editors), Springer Proceedings in Mathematics & Statistics, Springer, Berlin, to appear. (Not cited.) Submitted [A23] P. Zem´anek, Principal solution in Weyl–Titchmarsh theory for second order Sturm– Liouville equation on time scales, submitted. (Cited on pages 36 and 123.) Doctoral dissertation [A24] P. 
Zem´anek, New Results in Theory of Symplectic Systems on Time Scales, doctoral dissertation, Masaryk University, Brno, 2011. ISBN 978-80-210-5515-5. (Cited on page 145.) – 147 – List of author’s publications – 148 – Bibliography [1] The Pentagon: A Mathematics Magazine for Students, Vol. XI (1951), no. 1. (Cited on page 41.) [2] C. D. Ahlbrandt, Equivalence of discrete Euler equations and discrete Hamiltonian systems, J. Math. Anal. Appl. 180 (1993), no. 2, 498–517. (Cited on page 10.) [3] C. D. Ahlbrandt and A. C. Peterson, The (n, n)-disconjugacy of a 2nth order linear difference equation, in “Advances in Differences Equations” (R. P. Agarwal, editor), Comput. Math. Appl. 28 (1994), no. 1-3, 1–9. (Cited on page 6.) [4] C. D. Ahlbrandt and A. C. Peterson, Discrete Hamiltonian Systems: Difference Equations, Continued Fractions, and Riccati Equations, Kluwer Texts in the Mathematical Sciences, Vol. 16, Kluwer Academic Publishers Group, Dordrecht, 1996. ISBN 0- 7923-4277-1. (Cited on pages 6, 10, and 95.) [5] D. R. Anderson, Discrete trigonometric matrix functions, Panamer. Math. J. 7 (1997), no. 1, 39–54. (Cited on page 6.) [6] D. R. Anderson, Titchmarsh–Sims–Weyl theory for complex Hamiltonian systems on Sturmian time scales, J. Math. Anal. Appl. 373 (2010), no. 2, 709–725. (Cited on page 14.) [7] T. M. Apostol, Mathematical Analysis, second edition, Addison-Wesley Publishing, Reading, 1974. (Cited on page 97.) [8] R. Arens, Operational calculus of linear relations, Pacific J. Math. 11 (1961), 9–23. (Cited on pages 90 and 137.) [9] F. V. Atkinson, Discrete and Continuous Boundary Problems, Mathematics in Science and Engineering, Vol. 8, Academic Press, New York, 1964. (Cited on pages 5, 6, 7, 12, 13, 16, 17, 34, 35, 56, 89, and 123.) [10] H. Behncke, Spectral theory of Hamiltonian difference systems with almost constant coefficients, J. Difference Equ. Appl. 19 (2013), no. 1, 1–12. (Cited on pages 13 and 109.) – 149 – Bibliography [11] H. Behncke and F. O. Nyamwala, Spectral theory of difference operators with almost constant coefficients, J. Difference Equ. Appl. 17 (2011), no. 5, 677–695. (Cited on page 13.) [12] H. Behncke and F. O. Nyamwala, Spectral theory of difference operators with almost constant coefficients II, J. Difference Equ. Appl. 17 (2011), no. 5, 821–829. (Cited on page 13.) [13] J. Behrndt, S. Hassi, H. S. V. de Snoo, and R. Wietsma, Square-integrable solutions and Weyl functions for singular canonical systems, Math. Nachr. 284 (2011), no. 11-12, 1334–1384. (Cited on pages 90, 94, 96, 106, and 113.) [14] M. B. Bekker, M. Bohner, and H. Voulov, Extreme self-adjoint extensions of a semibounded q-difference operator, Math. Nachr. 287 (2014), no. 8-9, 869–884. (Cited on page 119.) [15] J. M. Berezans’ki˘ı, Expansions in Eigenfunctions of Selfadjoint Operators, translated from the Russian by R. Bolstein, J. M. Danskin, J. Rovnyak and L. Shulman, Translations of Mathematical Monographs, Vol. 17, American Mathematical Society, Providence, 1968. (Cited on page 13.) [16] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, second edition, Princeton University Press, Princeton, 2009. ISBN 978-0-691-14039-1. (Cited on pages 3, 4, 5, 77, and 78.) [17] M. Bohner, Linear Hamiltonian difference systems: Disconjugacy and Jacobi-type conditions, J. Math. Anal. Appl. 199 (1996), no. 3, 804–826. (Cited on pages 41 and 54.) [18] M. Bohner, Symplectic systems and related discrete quadratic functionals, in “Dedicated to Professor Dragoslav S. 
Mitrinovi´c (1908–1995)” (Niˇs, 1996), Facta Univ. Ser. Math. Inform. (1997), no. 12, 143–156. (Cited on page 6.) [19] M. Bohner, Discrete linear Hamiltonian eigenvalue problems, in “Advances in Difference Equations, II”, Comput. Math. Appl. 36 (1998), no. 10-12, 179–192. (Cited on page 42.) [20] M. Bohner and O. Doˇsl´y, Disconjugacy and transformations for symplectic systems, Rocky Mountain J. Math. 27 (1997), no. 3, 707–743. (Cited on pages 6, 10, and 38.) [21] M. Bohner and O. Doˇsl´y, Trigonometric transformations of symplectic difference systems, J. Differential Equations 163 (2000), no. 1, 113–129. (Cited on page 6.) [22] M. Bohner and O. Doˇsl´y, Trigonometric systems in oscillation theory of difference equations, in “Dynamic Systems and Aplications, Vol. 3.” (Proceedings of the Third International Conference on Dynamic Systems and Applications, Atlanta, GA, 1999), pp. 99–104, Dynamic, Atlanta, 2001. (Cited on page 6.) [23] M. Bohner, O. Doˇsl´y, and W. Kratz, An oscillation theorem for discrete eigenvalue problems, Rocky Mountain J. Math. 33 (2003), no. 4, 1233–1260. (Cited on page 13.) [24] M. Bohner, O. Doˇsl´y, and W. Kratz, Positive semidefiniteness of discrete quadratic functionals, Proc. Edinb. Math. Soc. (2) 46 (2003), no. 3, 627–636. (Cited on page 6.) – 150 – Bibliography [25] M. Bohner, O. Doˇsl´y, and W. Kratz, Sturmian and spectral theory for discrete symplectic systems, Trans. Amer. Math. Soc. 361 (2009), no. 6, 3109–3123. (Cited on pages 6 and 13.) [26] M. Bohner and S. Sun, Weyl–Titchmarsh theory for symplectic difference systems, Appl. Math. Comput. 216 (2010), no. 10, 2855–2864. (Cited on pages 1, 13, 25, 26, 38, 41, and 97.) [27] B. M. Brown and J. S. Christiansen, On the Krein and Friedrichs extensions of a positive Jacobi operator, Expo. Math. 23 (2005), no. 2, 179–186. (Cited on page 119.) [28] B. M. Brown and W. D. Evans, On an extension of Copson’s inequality for infinite series, Proc. Roy. Soc. Edinburgh Sect. A 121 (1992), no. 1-2, 169–183. (Cited on page 90.) [29] B. M. Brown, W. D. Evans, and L. L. Littlejohn, Orthogonal polynomials and extensions of Copson’s inequality, in “Proceedings of the Seventh Spanish Symposium on Orthogonal Polynomials and Applications (VII SPOA, Granada, 1991)”, J. Comput. Appl. Math. 48 (1993), no. 1-2, 33–48. (Cited on page 90.) [30] B. M. Brown, W. D. Evans, and M. Plum, Titchmarsh–Sims–Weyl theory for complex Hamiltonian systems, Proc. London Math. Soc. (3) 87 (2003), no. 2, 419–450. (Cited on page 12.) [31] J. Candy and W. Rozmus, A symplectic integration algorithm for separable Hamiltonian functions, J. Comput. Phys. 92 (1991), no. 1, 230–256. (Cited on page 6.) [32] J. Chen and Y. Shi, The limit circle and limit point criteria for second-order linear difference equations, Comput. Math. Appl. 47 (2004), no. 6-7, 967–976. (Cited on page 13.) [33] S. L. Clark, A spectral analysis for self-adjoint operators generated by a class of second order difference equations, J. Math. Anal. Appl. 197 (1996), no. 1, 267–285. (Cited on page 13.) [34] S. L. Clark and F. Gesztesy, Weyl–Titchmarsh M-function asymptotics for matrix-valued Schr¨odinger operators, Proc. London Math. Soc. (3) 82 (2001), no. 3, 701–724. (Cited on page 12.) [35] S. L. Clark and F. Gesztesy, Weyl–Titchmarsh M-function asymptotics, local uniqueness results, trace formulas, and Borg-type theorems for Dirac operators, Trans. Amer. Math. Soc. 354 (2002), no. 9, 3475–3534 (electronic). (Cited on page 12.) [36] S. L. Clark and F. 
Gesztesy, On Weyl–Titchmarsh theory for singular finite difference Hamiltonian systems, J. Comput. Appl. Math. 171 (2004), no. 1-2, 151–184. (Cited on pages 12, 26, and 72.) [37] S. L. Clark, F. Gesztesy, and R. Nichols, Principal solutions revisited, in “Stochastic and Infinite Dimensional Analysis”, C. C. Bernido, M. V. Carpio-Bernido, M. Grothaus, T. Kuna, M. J. Oliveira, and J. L. da Silva (editors), Trends in Mathematics, Birkh¨auser, Basel, to appear. (Cited on page 36.) [38] S. L. Clark, F. Gesztesy, R. Nichols, and M. Zinchenko, Boundary data maps and Krein’s resolvent formula for Sturm–Liouville operators on a finite interval, Oper. Matrices 8 (2014), no. 1, 1–71. (Cited on pages 126 and 129.) – 151 – Bibliography [39] S. L. Clark, F. Gesztesy, and M. Zinchenko, Weyl–Titchmarsh theory and Borg– Marchenko-type uniqueness results for CMV operators with matrix-valued Verblunsky coefficients, Oper. Matrices 1 (2007), no. 4, 535–592. (Cited on pages 12 and 13.) [40] S. L. Clark, F. Gesztesy, and M. Zinchenko, Borg–Marchenko-type uniqueness results for CMV operators, Skr. K. Nor. Vidensk. Selsk. (2008), no. 1, 1–18. (Cited on page 13.) [41] S. L. Clark, F. Gesztesy, and M. Zinchenko, Minimal rank decoupling of full-lattice CMV operators with scalar- and matrix-valued Verblunsky coefficients, in “Difference Equations and Applications” (Proceedings of the Fourteenth International Conference on Difference Equations and Applications, Istanbul, 2008), M. Bohner, Z. Doˇsl´a, G. Ladas, M. ¨Unal, and A. Zafer (editors), pp. 19–59, U˘gur–Bahc¸es¸ehir University Publishing Company, 2009. (Cited on page 13.) [42] E. A. Coddington, Extension Theory of Formally Normal and Symmetric Subspaces, Memoirs of the American Mathematical Society, Vol. 134, American Mathematical Society, Providence, 1973. (Cited on pages 90, 137, and 138.) [43] E. A. Coddington and H. S. V. de Snoo, Positive selfadjoint extensions of positive symmetric subspaces, Math. Z. 159 (1978), no. 3, 203–214. (Cited on page 139.) [44] E. A. Coddington and A. Dijksma, Self-adjoint subspaces and eigenfunction expansions for ordinary differential subspaces, J. Differential Equations 20 (1976), no. 2, 473–526. (Cited on page 119.) [45] R. Cross, Multivalued Linear Operators, Monographs and Textbooks in Pure and Applied Mathematics, Vol. 213, Marcel Dekker, New York, 1998. ISBN 0-8247-0219- 0. (Cited on pages 90 and 137.) [46] A. Dijksma and H. S. V. de Snoo, Self-adjoint extensions of symmetric subspaces, Pacific J. Math. 54 (1974), 71–100. (Cited on pages 90 and 137.) [47] D. Donnelly and E. Rogers, Symplectic integrators: An introduction, Amer. J. Phys. 73 (2005), no. 10, 938–945. (Cited on page 6.) [48] O. Doˇsl´y, Symplectic difference systems: Natural dependence on a parameter, Adv. Dyn. Syst. Appl. 8 (2013), no. 2, 193–201. (Cited on page 72.) [49] O. Doˇsl´y, Symplectic difference systems with periodic coefficients: Krein’s traffic rules for multipliers, Adv. Difference Equ. 2013 (2013), no. 85, 13 pp. (electronic). (Cited on pages 65, 66, 67, 69, and 72.) [50] O. Doˇsl´y and J. Elyseeva, Singular comparison theorems for discrete symplectic systems, J. Difference Equ. Appl. 20 (2014), no. 8, 1268–1288. (Cited on page 6.) [51] O. Doˇsl´y and P. Hasil, Friedrichs extension of operators defined by symmetric banded matrices, Linear Algebra Appl. 430 (2009), no. 8-9, 1966–1975. (Cited on page 119.) [52] O. Doˇsl´y and R. Hilscher, Disconjugacy, transformations and quadratic functionals for symplectic dynamic systems on time scales, J. 
Difference Equ. Appl. 7 (2001), no. 2, 265–295. (Cited on page 7.) – 152 – Bibliography [53] O. Doˇsl´y, R. Hilscher, and V. Zeidan, Nonnegativity of discrete quadratic functionals corresponding to symplectic difference systems, Linear Algebra Appl. 375 (2003), 21–44. (Cited on page 6.) [54] O. Doˇsl´y and W. Kratz, Oscillation theorems for symplectic difference systems, J. Difference Equ. Appl. 13 (2007), no. 7, 585–605. (Cited on pages 6 and 13.) [55] O. Doˇsl´y and W. Kratz, A Sturmian separation theorem for symplectic difference systems, J. Math. Anal. Appl. 325 (2007), no. 1, 333–341. (Cited on page 6.) [56] O. Doˇsl´y and W. Kratz, Oscillation and spectral theory for symplectic difference systems with separated boundary conditions, J. Difference Equ. Appl. 16 (2010), no. 7, 831–846. (Cited on pages 6 and 13.) [57] O. Doˇsl´y and W. Kratz, A remark on focal points of recessive solutions of discrete symplectic systems, J. Math. Anal. Appl. 363 (2010), no. 1, 209–213. (Cited on page 6.) [58] O. Doˇsl´y and ˇS. Pechancov´a, Trigonometric recurrence relations and tridiagonal trigonometric matrices, Int. J. Difference Equ. 1 (2006), no. 1, 19–29. (Cited on page 6.) [59] O. Doˇsl´y and Z. Posp´ıˇsil, Hyperbolic transformation and hyperbolic difference systems, Fasc. Math. (2001), no. 32, 25–48. (Cited on page 6.) [60] M. A. El-Gebeily, D. O’Regan, and R. P. Agarwal, Characterization of self-adjoint ordinary differential operators, Math. Comput. Modelling 54 (2011), no. 1-2, 659–672. (Cited on page 119.) [61] S. Elaydi, An Introduction to Difference Equations, third edition, Undergraduate Texts in Mathematics, Springer, New York, 2005. ISBN 0-387-23059-9. (Cited on page 112.) [62] Yu. V. Eliseeva, Comparison theorems for symplectic systems of difference equations, Differ. Equ. 46 (2010), 1339–1352. (Cited on page 6.) [63] Yu. V. Eliseeva, On the spectra of discrete symplectic boundary value problems with separated boundary conditions, Russian Math. (Iz. VUZ) 55 (2011), no. 11, 71–75. Translated from: Izv. Vyssh. Uchebn. Zaved. Mat. 2011 (2011), no.11, 84–88 (Russian). (Cited on page 6.) [64] J. V. Elyseeva, Transformations and the number of focal points for conjoined bases of symplectic difference systems, J. Difference Equ. Appl. 15 (2009), no. 11-12, 1055–1066. (Cited on page 6.) [65] J. V. Elyseeva, The comparative index and the number of focal points for conjoined bases of symplectic difference systems, in “Discrete Dynamics and Difference Equations”, S. Elaydi, H. Oliveira, J. M. Ferreira, and J. F. Alves (editors), pp. 231–238, World Scientific Publishing, London, 2010. (Cited on pages 6 and 13.) [66] J. V. Elyseeva, On relative oscillation theory for symplectic eigenvalue problems, Appl. Math. Lett. 23 (2010), no. 10, 1231–1237. (Cited on pages 13 and 15.) [67] J. V. Elyseeva, A note on relative oscillation theory for symplectic difference systems with general boundary conditions, Appl. Math. Lett. 25 (2012), no. 11, 1809–1814. (Cited on page 6.) – 153 – Bibliography [68] J. V. Elyseeva, Generalized oscillation theorems for symplectic difference systems with nonlinear dependence on spectral parameter, Appl. Math. Comput. 251 (2015), 92–107. (Cited on page 6.) [69] L. H. Erbe and P. X. Yan, Disconjugacy for linear Hamiltonian difference systems, J. Math. Anal. Appl. 167 (1992), no. 2, 355–367. (Cited on page 9.) [70] L. H. Erbe and P. X. Yan, Qualitative properties of Hamiltonian difference systems, J. Math. Anal. Appl. 171 (1992), no. 2, 334–345. (Cited on page 9.) [71] W. N. 
Everitt, A personal history of the m-coefficient, J. Comput. Appl. Math. 171 (2004), no. 1-2, 185–197. (Cited on page 12.) [72] W. N. Everitt and L. Markus, Boundary Value Problems and Symplectic Algebra for Ordinary Differential and Quasi-differential Operators, Mathematical Surveys and Monographs, Vol. 61, American Mathematical Society, Providence, 1999. ISBN 0-8218- 1080-4. (Cited on page 139.) [73] K. Feng, On difference schemes and symplectic geometry, in “Proceedings of the 1984 Beijing Symposium on Differential Geometry and Differential Equations”, K. Feng (editor), pp. 42–58, Science Press, Beijing, 1985. (Cited on page 6.) [74] K. Feng, The Hamiltonian way for computing Hamiltonian dynamics, in “Applied and Industrial Mathematics” (Venice, 1989), Math. Appl., Vol. 56, pp. 17–35, Kluwer Acad. Publ., Dordrecht, 1991. (Cited on pages 1 and 6.) [75] K. Feng and M. Qin, Symplectic Geometric Algorithms for Hamiltonian Systems, translated and revised from the Chinese original and with a foreword by F. Duan, Zhejiang Science and Technology Publishing House, Hangzhou; Springer, Heidelberg, 2010. ISBN 978-7-5341-3595-8; 978-3-642-01776-6. (Cited on page 6.) [76] K. Feng and D. L. Wang, A note on conservation laws of symplectic difference schemes for Hamiltonian systems, J. Comput. Math. 9 (1991), no. 3, 229–237. (Cited on page 6.) [77] K. Feng and D. L. Wang, Symplectic difference schemes for Hamiltonian systems in general symplectic structure, J. Comput. Math. 9 (1991), no. 1, 86–96. (Cited on page 6.) [78] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics I: Mainly Mechanics, Radiation, and Heat, Addison-Wesley Publishing, Reading, 1963. (Cited on page 141.) [79] A. Fischer and Ch. Remling, The absolutely continuous spectrum of discrete canonical systems, Trans. Amer. Math. Soc. 361 (2009), no. 2, 793–818. (Cited on pages 90 and 93.) [80] X. Hao, J. Sun, A. Wang, and A. Zettl, Characterization of domains of self-adjoint ordinary differential operators II, Results Math. 61 (2012), no. 3-4, 255–281. (Cited on page 119.) [81] X. Hao, J. Sun, and A. Zettl, Canonical forms of self-adjoint boundary conditions for differential operators of order four, J. Math. Anal. Appl. 387 (2012), no. 2, 1176–1187. (Cited on page 119.) – 154 – Bibliography [82] S. Hassi, H. S. V. de Snoo, and F. H. Szafraniec, Componentwise and Cartesian decompositions of linear relations, Dissertationes Math. (Rozprawy Mat.) 465 (2009), 59. (Cited on pages 90 and 137.) [83] S. Hassi, H. S. V. de Snoo, and H. Winkler, Boundary-value problems for two-dimensional canonical systems, Integral Equations Operator Theory 36 (2000), no. 4, 445–479. (Cited on page 90.) [84] S. Hassi, A. Sandovici, H. S. V. de Snoo, and H. Winkler, A general factorization approach to the extension theory of nonnegative operators and relations, J. Operator Theory 58 (2007), no. 2, 351–386. (Cited on page 139.) [85] E. Hellinger, Zur Stieltjesschen Kettenbruchtheorie (in German), Math. Ann. 86 (1922), no. 1-2, 18–29. (Cited on page 12.) [86] D. Hilbert, Mathematical problems, Bull. Amer. Math. Soc. 8 (1902), no. 10, 437–479. (Cited on page v.) [87] R. Hilscher and V. R˚uˇziˇckov´a, Implicit Riccati equations and quadratic functionals for discrete symplectic systems, Int. J. Difference Equ. 1 (2006), no. 1, 135–154. (Cited on pages 6, 41, and 54.) [88] R. Hilscher and V. R˚uˇziˇckov´a, Riccati inequality and other results for discrete symplectic systems, J. Math. Anal. Appl. 322 (2006), no. 2, 1083–1098. 
(Cited on pages 41 and 54.) [89] R. Hilscher and V. Zeidan, Discrete optimal control: The accessory problem and necessary optimality conditions, J. Math. Anal. Appl. 243 (2000), no. 2, 429–452. (Cited on page 6.) [90] R. Hilscher and V. Zeidan, Discrete optimal control: Second order optimality conditions, in “In honour of Professor Allan C. Peterson on the occasion of his 60th birthday, Pat I”, J. Difference Equ. Appl. 8 (2002), no. 10, 875–896. (Cited on page 6.) [91] R. Hilscher and V. Zeidan, Second order sufficiency criteria for a discrete optimal control problem, J. Difference Equ. Appl. 8 (2002), no. 6, 573–602. (Cited on page 6.) [92] R. Hilscher and V. Zeidan, Symplectic difference systems: Variable stepsize discretization and discrete quadratic functionals, Linear Algebra Appl. 367 (2003), 67–104. (Cited on pages 6, 41, and 54.) [93] R. Hilscher and V. Zeidan, Equivalent conditions to the nonnegativity of a quadratic functional in discrete optimal control, Math. Nachr. 266 (2004), 48–59. (Cited on page 6.) [94] R. Hilscher and V. Zeidan, Nonnegativity and positivity of quadratic functionals in discrete calculus of variations: Survey, J. Difference Equ. Appl. 11 (2005), no. 9, 857– 875. (Cited on page 6.) [95] D. B. Hinton, A. M. Krall, and J. K. Shaw, Boundary conditions for differential operators with intermediate deficiency index, Appl. Anal. 25 (1987), no. 1-2, 43–53. (Cited on page 119.) [96] D. B. Hinton and R. T. Lewis, Spectral analysis of second order difference equations, J. Math. Anal. Appl. 63 (1978), no. 2, 421–438. (Cited on pages 13, 90, and 123.) – 155 – Bibliography [97] D. B. Hinton and A. N. Schneider, On the Titchmarsh–Weyl coefficients for singular S-Hermitian systems I, Math. Nachr. 163 (1993), 323–342. (Cited on page 12.) [98] D. B. Hinton and A. N. Schneider, On the Titchmarsh–Weyl coefficients for singular S-Hermitian systems II, Math. Nachr. 185 (1997), 67–84. (Cited on pages 12 and 119.) [99] D. B. Hinton and A. N. Schneider, Titchmarsh–Weyl coefficients for odd-order linear Hamiltonian systems, J. Spectral Math. Appl. (2006), 36 pp. (electronic). (Cited on page 12.) [100] D. B. Hinton and J. K. Shaw, On Titchmarsh–Weyl M(λ)-functions for linear Hamiltonian systems, J. Differential Equations 40 (1981), no. 3, 316–342. (Cited on page 12.) [101] D. B. Hinton and J. K. Shaw, Titchmarsh–Weyl theory for Hamiltonian systems, in “Spectral Theory of Differential Operators” (Proc. Conf., Birmingham, Ala., 1981), I. W. Knowles and R. T. Lewis (editors), North-Holland Math. Stud., Vol. 55, pp. 219–231, North-Holland, Amsterdam, 1981. (Cited on pages 7 and 12.) [102] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985. ISBN 0-521-30586-1. (Cited on pages 3 and 4.) [103] A. Jirari, Second-Order Sturm–Liouville Difference Equations and Orthogonal Polynomials, Memoirs of the American Mathematical Society, Vol. 113, no. 542, American Mathematical Society, Providence, 1995. (Cited on page 13.) [104] I. S. Kac, Linear relations generated by canonical differential equations, Funct. Anal. Appl. 17 (1983), no. 4, 315–317. Translated from: Funktsional. Anal. i Prilozhen. 17 (1983), no. 4, 86–87 (Russian). (Cited on page 90.) [105] W. G. Kelley and A. C. Peterson, Difference Equations: An Introduction with Applications, second edition, Harcourt/Academic Press, San Diego, 2001. ISBN 0-12-403330X. (Cited on pages 13, 42, 43, and 48.) [106] M. Kline, Mathematics: The Loss of Certainty, Oxford University Press, New York, 1980. 
ISBN 0-19-502754-X. (Cited on page 55.) [107] V. I. Kogan and F. S. Rofe-Beketov, On square-integrable solutions of symmetric systems of differential equations of arbitrary order, Proc. Roy. Soc. Edinburgh Sect. A 74 (1974/75), 5–40 (1976). (Cited on page 101.) [108] A. M. Krall, M(λ) theory for singular Hamiltonian systems with one singular point, SIAM J. Math. Anal. 20 (1989), no. 3, 664–700. (Cited on pages 3, 12, 28, and 109.) [109] A. M. Krall, M(λ) theory for singular Hamiltonian systems with two singular points, SIAM J. Math. Anal. 20 (1989), no. 3, 701–715. (Cited on page 12.) [110] A. M. Krall, A limit-point criterion for linear Hamiltonian systems, Appl. Anal. 61 (1996), no. 1-2, 115–119. (Cited on page 12.) [111] W. Kratz, Banded matrices and difference equations, Linear Algebra Appl. 337 (2001), 1–20. (Cited on page 10.) [112] W. Kratz, Definiteness of quadratic functionals, Analysis (Munich) 23 (2003), no. 2, 163–183. (Cited on pages 41 and 54.) – 156 – Bibliography [113] W. Kratz and R. ˇSimon Hilscher, A generalized index theorem for monotone matrixvalued functions with applications to discrete oscillation theory, SIAM J. Matrix Anal. Appl. 34 (2013), no. 1, 228–243. (Cited on page 69.) [114] M. G. Krein, Foundations of the theory of lambda-zones of stability of a canonical system of linear differential equations with periodic coefficients, in “Four Papers on Ordinary Differential Equations”, L. J. Leifman (editor), Amer. Math. Soc. Transl. (2), Vol. 120, pp. 1–70, American Mathematical Society, Providence, 1983. (Cited on page 69.) [115] P. D. Lax, Linear Algebra and its Applications, Pure and Applied Mathematics (Hoboken), Wiley-Interscience [John Wiley & Sons], second edition, Hoboken, 2007. ISBN 978-0-471-75156-4. (Cited on page 6.) [116] M. Lesch and M. M. Malamud, On the deficiency indices and self-adjointness of symmetric Hamiltonian systems, J. Differential Equations 189 (2003), no. 2, 556–615. (Cited on pages 7, 90, 93, 94, 98, 100, 106, 107, 113, 120, and 138.) [117] D. S. Mackey and N. Mackey, On the determinant of symplectic matrices, Numerical Analysis Report 422 (2003), 1–12. Manchester Centre for Computational Mathematics, England. (Cited on page 6.) [118] N. Macrae, John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, reprint of the 1992 original, American Mathematical Society, Providence, 1999. ISBN 0-8218-2064-8. (Cited on page 119.) [119] M. Marletta and A. Zettl, The Friedrichs extension of singular differential operators, J. Differential Equations 160 (2000), no. 2, 404–421. (Cited on page 119.) [120] J. B. McLeod, The number of integrable-square solutions of ordinary differential equations, Quart. J. Math. Oxford Ser. (2) 17 (1966), 285–290. (Cited on pages 27 and 35.) [121] A. B. Mingarelli, A limit-point criterion for a three-term recurrence relation, C. R. Math. Rep. Acad. Sci. Canada 3 (1981), no. 3, 171–175. (Cited on page 13.) [122] S. J. Monaquel and K. M. Schmidt, On M-functions and operator theory for nonself-adjoint discrete Hamiltonian systems, in “Special Issue: 65th birthday of Prof. Desmond Evans”, J. Comput. Appl. Math. 208 (2007), no. 1, 82–101. (Cited on page 13.) [123] M. Muzzulini, Titchmarsh–Sims–Weyl theory for Complex Hamiltonian Systems of Arbitrary Order, Ph.D. dissertation, University of Karlsruhe, Karlsruhe, 2007. (Cited on page 12.) [124] M. A. 
Na˘ımark, Linear Differential Operators, Part II: Linear Differential Operators in Hilbert Space, with additional material by the author, and a supplement by V. E. Lyantse, translated from the Russian by E. R. Dawson, English translation edited by W. N. Everitt, George G. Harrap & Company, New York, 1968. (Cited on pages 97 and 119.) [125] R. Nevanlinna, Asymptotische Entwicklungen beschr¨ankter Funktionen und das Stieltjessche Momentenproblem (in German), Ann. Acad. Sci. Fenn. A 18 (1922), no. 5, 53 pp. (Cited on page 12.) – 157 – Bibliography [126] H.-D. Niessen and A. Zettl, The Friedrichs extension of regular ordinary differential operators, Proc. Roy. Soc. Edinburgh Sect. A 114 (1990), no. 3-4, 229–236. (Cited on page 119.) [127] H.-D. Niessen and A. Zettl, Singular Sturm–Liouville problems: The Friedrichs extension and comparison of eigenvalues, Proc. London Math. Soc. (3) 64 (1992), no. 3, 545–578. (Cited on page 119.) [128] B. C. Orcutt, Canonical Differential Equations, Doctoral dissertation – University of Virginia, ProQuest LLC, Ann Arbor, 1969. (Cited on pages 90 and 93.) [129] V. R˘asvan, Stability zones for discrete time Hamiltonian systems, in “CDDE 2000 Proceedings (Brno)”, Arch. Math. (Brno) 36 (2000), no. suppl., 563–573. (Cited on page 69.) [130] V. R˘asvan, On stability zones for discrete-time periodic linear Hamiltonian systems, Adv. Difference Equ. 2006 (2006), Art. ID 80757, 13 pp. (electronic). (Cited on page 69.) [131] R. Remmert, Theory of Complex Functions, translated from the second German edition by R. B. Burckel, Readings in Mathematics, Graduate Texts in Mathematics, Vol. 122, Springer-Verlag, New York, 1991. ISBN 0-387-97195-5. (Cited on page 145.) [132] G. Ren, On the density of the minimal subspaces generated by discrete linear Hamiltonian systems, Appl. Math. Lett. 27 (2014), 1–5. (Cited on page 89.) [133] G. Ren and Y. Shi, The defect index of singular symmetric linear difference equations with real coefficients, Proc. Amer. Math. Soc. 138 (2010), no. 7, 2463–2475. (Cited on page 13.) [134] G. Ren and Y. Shi, Defect indices and definiteness conditions for a class of discrete linear Hamiltonian systems, Appl. Math. Comput. 218 (2011), no. 7, 3414–3429. (Cited on pages 35, 80, 90, 94, 98, 106, and 120.) [135] G. Ren and Y. Shi, Self-adjoint extensions for discrete linear Hamiltonian systems, Linear Algebra Appl. 454 (2014), 1–48. (Cited on pages 90, 97, 108, 114, 119, 124, 125, and 130.) [136] N. J. Rose, Mathematical Maxims and Minims, Rome Press, Raleigh, 1988. (Cited on page 65.) [137] V. R˚uˇziˇckov´a, Discrete Symplectic Systems and Definiteness of Quadratic Functionals, Ph.D. dissertation, Masaryk University, Brno, 2006. (Cited on page 6.) [138] V. R˚uˇziˇckov´a, Perturbation of discrete quadratic functionals, Tatra Mt. Math. Publ. 38 (2007), 229–241. (Cited on page 6.) [139] K. Setzer, Definite Sturm–Liouville Matrix Differential Equations and Applications of Moore–Penrose Inverses to Related Problems, Ph.D. dissertation, Ulm University, Ulm, 2012. (Cited on page 10.) [140] Y. Shi, Symplectic structure of discrete Hamiltonian systems, J. Math. Anal. Appl. 266 (2002), no. 2, 472–478. (Cited on page 6.) [141] Y. Shi, On the rank of the matrix radius of the limiting set for a singular linear Hamiltonian system, Linear Algebra Appl. 376 (2004), 109–123. (Cited on pages 12, 28, and 31.) – 158 – Bibliography [142] Y. Shi, Weyl–Titchmarsh theory for a class of discrete linear Hamiltonian systems, Linear Algebra Appl. 416 (2006), no. 2-3, 452–519. 
(Cited on pages 13, 14, 25, 26, 28, 31, 34, 41, 72, 80, 85, 90, 106, and 109.) [143] Y. Shi, The Glazman–Krein–Naimark theory for Hermitian subspaces, J. Operator Theory 68 (2012), no. 1, 241-256. (Cited on pages 90, 115, 116, 138, and 139.) [144] Y. Shi, Stability of essential spectra of self-adjoint subspaces under compact perturbations, J. Math. Anal. Appl. 433 (2016), no. 2, 832–851. (Cited on page 90.) [145] Y. Shi and S. Chen, Spectral theory of second-order vector difference equations, J. Math. Anal. Appl. 239 (1999), no. 2, 195–212. (Cited on page 43.) [146] Y. Shi, C. Shao, and G. Ren, Spectral properties of self-adjoint subspaces, Linear Algebra Appl. 438 (2013), no. 1, 191–218. (Cited on page 90.) [147] Y. Shi and H. Sun, Self-adjoint extensions for second-order symmetric linear difference equations, Linear Algebra Appl. 434 (2011), no. 4, 903–930. (Cited on pages 13, 90, 108, 109, and 119.) [148] R. ˇSimon Hilscher, Eigenvalue theory for time scale symplectic systems depending nonlinearly on spectral parameter, Appl. Math. Comput. 219 (2012), no. 6, 2839–2860. (Cited on page 43.) [149] R. ˇSimon Hilscher, Oscillation theorems for discrete symplectic systems with nonlinear dependence in spectral parameter, Linear Algebra Appl. 437 (2012), no. 12, 2922-2960. (Cited on pages 6 and 69.) [150] R. ˇSimon Hilscher and V. Zeidan, Symmetric three-term recurrence equations and their symplectic structure, Adv. Difference Equ. (2010), Art. ID 626942, 17 pp. (electronic). (Cited on page 10.) [151] R. ˇSimon Hilscher and V. Zeidan, Symplectic structure of Jacobi systems on time scales, Int. J. Difference Equ. 5 (2010), no. 1, 55–81. (Cited on page 6.) [152] R. ˇSimon Hilscher and V. Zeidan, Rayleigh principle for time scale symplectic systems and applications, Electron. J. Qual. Theory Differ. Equ. (2011), no. 83, 16 pp. (electronic). (Cited on page 43.) [153] R. ˇSimon Hilscher and V. Zeidan, Oscillation theorems and Rayleigh principle for linear Hamiltonian and symplectic systems with general boundary conditions, Appl. Math. Comput. 218 (2012), no. 17, 8309–8328. (Cited on pages 41, 42, 43, 46, and 54.) [154] H. Sun, On the limit-point case of singular linear Hamiltonian systems, Appl. Anal. 89 (2010), no. 5, 663–675. (Cited on pages 34 and 41.) [155] H. Sun, Q. Kong, and Y. Shi, Essential spectrum of singular discrete linear Hamiltonian systems, Math. Nachr. 289 (2016), no. 2-3, 343–359. (Cited on page 90.) [156] H. Sun and Y. Shi, Eigenvalues of second-order difference equations with coupled boundary conditions, Linear Algebra Appl. 414 (2006), no. 1, 361–372. (Cited on page 13.) [157] H. Sun and Y. Shi, Limit-point and limit-circle criteria for singular second-order linear difference equations with complex coefficients, Comput. Math. Appl. 52 (2006), no. 3-4, 539–554. (Cited on pages 13 and 123.) – 159 – Bibliography [158] H. Sun and Y. Shi, Strong limit point criteria for a class of singular discrete linear Hamiltonian systems, J. Math. Anal. Appl. 336 (2007), no. 1, 224–242. (Cited on page 13.) [159] H. Sun and Y. Shi, Self-adjoint extensions for linear Hamiltonian systems with two singular endpoints, J. Funct. Anal. 259 (2010), no. 8, 2003–2027. (Cited on page 119.) [160] H. Sun and Y. Shi, Self-adjoint extensions for singular linear Hamiltonian systems, Math. Nachr. 284 (2011), no. 5-6, 797–814. (Cited on page 119.) [161] H. Sun and Y. Shi, Spectral properties of singular discrete linear Hamiltonian systems, J. Difference Equ. Appl. 20 (2014), no. 3, 379–405. (Cited on page 90.) 
[162] J. Sun, On the selfadjoint extensions of symmetric ordinary differential operators with middle deficiency indices, Acta Math. Sinica (N.S.) 2 (1986), no. 2, 152–167. (Cited on pages 119 and 130.) [163] S. Sun, Y. Shi, and S. Chen, The Glazman–Krein–Naimark theory for a class of discrete Hamiltonian systems, J. Math. Anal. Appl. 327 (2007), no. 2, 1360–1380. (Cited on page 90.) [164] G. Teschl, Jacobi Operators and Completely Integrable Nonlinear Lattices, Mathematical Surveys and Monographs, Vol. 72, American Mathematical Society, Providence, 2000. ISBN 0-8218-1940-2. (Cited on page 13.) [165] E. C. Titchmarsh, Eigenfunction Expansions Associated with Second-order Differential Equations, Part II, Clarendon Press, Oxford, 1958. (Cited on page 12.) [166] E. C. Titchmarsh, Eigenfunction Expansions Associated with Second-order Differential Equations, Part I, second edition, Clarendon Press, Oxford, 1962. (Cited on page 12.) [167] J. von Neumann, ¨Uber adjungierte Funktionaloperatoren (in German), Ann. of Math. (2) 33 (1932), no. 2, 294–310. (Cited on page 90.) [168] P. W. Walker, A note on differential equations with all solutions of integrable-square, Pacific J. Math. 56 (1975), no. 1, 285–289. (Cited on pages 2, 55, 56, and 63.) [169] A. Wang, J. Sun, and A. Zettl, Characterization of domains of self-adjoint ordinary differential operators, J. Differential Equations 246 (2009), no. 4, 1600–1622. (Cited on page 119.) [170] Y. Wang and Y. Shi, Eigenvalues of second-order difference equations with periodic and antiperiodic boundary conditions, J. Math. Anal. Appl. 309 (2005), no. 1, 56–69. (Cited on pages 13, 42, and 43.) [171] J. Weidmann, Linear Operators in Hilbert Spaces, Graduate Texts in Mathematics, Vol. 68, Springer-Verlag, New York, 1980. ISBN 0-387-90427-1. (Cited on page 34.) [172] H. Weyl, ¨Uber gew¨ohnliche Differentialgleichungen mit Singularit¨aten und die zugeh¨origen Entwicklungen willk¨urlicher Funktionen (in German), Math. Ann. 68 (1910), no. 2, 220–269. (Cited on pages 12 and 22.) [173] H. Weyl, The Classical Groups: Their Invariants and Representations, Princeton University Press, Princeton, 1946. (Cited on page 6.) – 160 – Bibliography [174] H. Weyl, Topology and abstract algebra as two roads of mathematical comprehension I, Amer. Math. Monthly 102 (1995), no. 5, 453–460. Translated from: Unterrichtsblatter fur Mathematik und Naturwissenschaften 38 (1932), 177-188 (German). (Cited on page 11.) [175] D. L. Wilcox, Essential spectra of linear relations, Linear Algebra Appl. 462 (2014), 110–125. (Cited on page 90.) – 161 – Bibliography – 162 –