Hindawi Publishing Corporation
Journal of Applied Mathematics, Volume 2013, Article ID 859578, 5 pages
http://dx.doi.org/10.1155/2013/859578

Research Article

Constructing the Lyapunov Function through Solving Positive Dimensional Polynomial System

Zhenyi Ji,1,2 Wenyuan Wu,2 Yong Feng,2 and Guofeng Zhang3

1 Laboratory of Computer Reasoning and Trustworthy Computation, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Laboratory of Automated Reasoning and Cognition, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 401120, China
3 L.A.S Department of ChengDu College, University of Electronic Science and Technology of China, Chengdu 611731, China

Correspondence should be addressed to Zhenyi Ji; zyji001@163.com

Received 24 July 2013; Accepted 21 November 2013

Academic Editor: Bo-Qing Dong

Copyright © 2013 Zhenyi Ji et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. We propose an approach for constructing a Lyapunov function in quadratic form for a differential system. First, a positive polynomial system is obtained from the local properties of the Lyapunov function and of its derivative. Then, the positive polynomial system is converted into an equation system by adding slack variables. Finally, a numerical technique is applied to solve the equation system. Experiments demonstrate the efficiency of the new algorithm.

1. Introduction

Analysis of the stability of dynamical systems plays a very important role in control system analysis and design. For linear systems, it is easy to verify the stability of equilibria; for nonlinear dynamical systems, proving the stability of equilibria is considerably more complicated.
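For the linear case, stability of an equilibrium reduces to checking the signs of the real parts of the eigenvalues of the system matrix. A minimal numerical sketch of this check (assuming NumPy; the example matrix is illustrative and not taken from the paper):

```python
import numpy as np

# Sketch of the standard linear stability test: an equilibrium of x' = f(x)
# is classified by the real parts of the eigenvalues of the Jacobian there.
def classify_equilibrium(jacobian, tol=1e-12):
    """Return 'asymptotically stable', 'unstable', or 'inconclusive'."""
    real_parts = np.linalg.eigvals(jacobian).real
    if np.all(real_parts < -tol):
        return "asymptotically stable"
    if np.any(real_parts > tol):
        return "unstable"
    return "inconclusive"  # some eigenvalue sits (numerically) on the axis

# Jacobian of the linear system x' = -x + y, y' = -2y at the origin:
J = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
print(classify_equilibrium(J))  # eigenvalues -1 and -2 -> "asymptotically stable"
```

The nonlinear case treated in this paper is harder precisely because such a direct eigenvalue test only settles the local linearized behavior.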
One can use a Lyapunov function at an equilibrium to determine its stability. For an autonomous polynomial system of differential equations, computing a Lyapunov function at an equilibrium is a basic problem. In [1, 2], the authors transform the problem of computing a Lyapunov function into a quantifier elimination problem. The disadvantage of this method is that the computational complexity of quantifier elimination is doubly exponential in the total number of variables. To avoid this cost, She et al. [3] proposed a symbolic method: they first construct a special semialgebraic system using the local properties of a Lyapunov function and of its derivative, and then solve these inequalities using the cylindrical algebraic decomposition (CAD) introduced by Collins [4]. The algorithm in [5] uses semidefinite programming to search for a Lyapunov function. There are also other algorithms; see [6, 7] for more details.

In this paper, we suppose the Lyapunov function has quadratic form with some unknown coefficients. Positive polynomials are first obtained using the technique of [3]; then a positive dimensional polynomial system is constructed by adding new variables. The parameters of the Lyapunov function are computed by finding a real root of the positive dimensional system with a numerical method.

The rest of this paper is organized as follows. Definitions and preliminaries about Lyapunov functions and the asymptotic stability analysis of differential systems are given in Section 2. Section 3 reviews methods for finding real roots of positive dimensional polynomial systems. The new algorithm for computing the Lyapunov function is presented in Section 4. In Section 5, some examples are given to illustrate the efficiency of our algorithm. Finally, Section 6 concludes the paper.
2. Stability Analysis of Differential Equations

In this section, some preliminaries on the stability analysis of differential equations are presented. We consider the system of differential equations

  ẋ_1 = f_1(x),
  ẋ_2 = f_2(x),
  ...
  ẋ_n = f_n(x),   (1)

where x = (x_1, x_2, . . . , x_n), f_i ∈ R[x], x_i = x_i(t), and ẋ_i = dx_i/dt. A point x = (x_1, x_2, . . . , x_n) in the n-dimensional real Euclidean space R^n is called an equilibrium of the differential system (1) if f_i(x) = 0 for all i ∈ {1, 2, . . . , n}. Without loss of generality, we suppose throughout this paper that the origin is an equilibrium of the given system.

In general, there are two techniques for analyzing the stability of an equilibrium. The first is Lyapunov's first method, the technique of linearization, which considers the eigenvalues of the Jacobian matrix at the equilibrium.

Theorem 1. Let J_F(x) denote the Jacobian matrix of the system {f_1, . . . , f_n} at a point x. If all the eigenvalues of J_F(x) have negative real parts, then x is asymptotically stable. If J_F(x) has at least one eigenvalue with positive real part, then x is unstable.

For a small system, it is easy to obtain the eigenvalues of J_F(x) and then analyze the stability of the equilibrium using Theorem 1. For a high-dimensional system, solving the characteristic polynomial for its exact zeros is difficult. However, to answer the stability question we only need to know whether all the eigenvalues have negative real parts, and the Routh-Hurwitz criterion [8] determines exactly this. Another method for establishing asymptotic stability is to exhibit a Lyapunov function at the point x, defined as follows.

Definition 2.
Given a differential system and a neighborhood U of the equilibrium, a Lyapunov function with respect to the differential system is a continuously differentiable function F : U → R such that

(1) F(0) = 0 and F(x) > 0 whenever x ≠ 0;
(2) (d/dt)F(0) = 0 and (d/dt)F(x) < 0 whenever x ≠ 0.

3. Solving the Real Roots of Positive Dimensional Polynomial Systems

Solving polynomial systems has been one of the central topics in computer algebra; it is required in many scientific and engineering applications. In many practical problems, only the real roots of a polynomial system matter. For zero dimensional systems, the homotopy continuation method [9, 10] is a globally convergent algorithm. For positive dimensional systems, computing real roots is a difficult and extremely important problem, and many approaches have been proposed. The most popular is CAD; another family is the so-called critical point methods, such as Seidenberg's approach of computing critical points of the distance function [11]. The algorithm in [12] uses Seidenberg's idea to compute real roots of a positive dimensional set defined by a single polynomial; this was extended to random polynomial systems in [13]. These algorithms depend on symbolic computation, so they are restricted to small systems because of the high complexity of symbolic computation. To avoid this problem, homotopy methods have been used to compute real roots of polynomial systems in [14, 15]. Recently, Wu and Reid [16] proposed a new approach, different from the critical point technique. To describe this algorithm, suppose the polynomial system g = {g_1, g_2, . . . , g_k} has k polynomials in n variables with k < n. First, n βˆ’ k hyperplanes h = {h_1, . . . , h_{nβˆ’k}} in R[x] are chosen randomly. Note that {g_1, . . .
, 𝑔 π‘˜, β„Ž1, . . . , β„Ž π‘›βˆ’π‘˜} is a square system; then witness points are computed by homotopy method and verified by the following theorem. Theorem 3 (see [17]). Let 𝑓(x) : R 𝑛 β†’ R 𝑛 be a polynomial system, and x ∈ R 𝑛 . Let IR be the set of real intervals, and IR 𝑛 and IR 𝑛×𝑛 be the set of real interval vectors and real interval matrices, respectively. Given X ∈ IR 𝑛 with 0 ∈ X and 𝑀 ∈ IR 𝑛×𝑛 satisfies βˆ‡π‘“π‘–(x + X) βŠ† 𝑀𝑖, for 𝑖 = 1, 2, . . . , 𝑛. Denote by 𝐼𝑛 the identity matrix and assume βˆ’πΉβˆ’1 x (x) 𝐹 (x) + (𝐼𝑛 βˆ’ 𝐹x (x) 𝑀) X βŠ† int (X) , (2) where 𝐹x(x) is the Jacobian matrix of 𝐹(x) at x. Then there is a unique Μ‚x ∈ 𝑋 such that 𝑓(Μ‚x) = 0. Moreover, every matrix 𝑀 ∈ 𝑀 is nonsingular, and the Jacobian matrix 𝐹x(x) is nonsingular. There may exist some components which have no intersection with these random hyperplanes. Some points on these components must be the solutions of the Lagrange optimization problem: 𝑓 = 0, π‘˜ βˆ‘ 𝑖=1 πœ†π‘–βˆ‡π‘“π‘– = n. (3) Here n is a random vector in R 𝑛 . The system has 𝑛 + π‘˜ equations and 𝑛+π‘˜ variables; thus we can find real points through solving system (3). 4. Algorithm for Computing the Lyapunov Function In this section, we will present an algorithm for constructing the Lyapunov function. Our idea is to compute positive polynomial system which satisfies the definition of Lyapunov function first. Then we solve the polynomial system deduced from the positive polynomial system using homotopy algorithm; at this step, we use the famous package hom4ps2 [18]. Given a quadratic polynomial 𝐹(x), the following theorem gives a sufficient condition for the polynomial to be a Lyapunov function. Journal of Applied Mathematics 3 Theorem 4 (see [3]). Let 𝐹(x) be a quadratic polynomial, for a given differential system; if 𝐹(x) satisfies the fact that 𝐻𝑒𝑠𝑠(𝐹)|x=0 is positive definite and 𝐻𝑒𝑠𝑠((𝑑/𝑑𝑑)𝐹)|x=0 is negative definite, then 𝐹(x) is a Lyapunov function. 
By the theory of linear algebra, the symmetric matrix Hess(F)|_{x=0} is positive definite if and only if all its eigenvalues are positive, and Hess((d/dt)F)|_{x=0} is negative definite if and only if all its eigenvalues are negative. Let

  h = s^n + t_{nβˆ’1} s^{nβˆ’1} + · · · + t_0   (4)

be the characteristic polynomial of a matrix. The following theorem, deduced from Descartes' rule of signs [19], can be used to determine whether h has only positive roots.

Theorem 5 (see [3]). Suppose all the roots of a real polynomial h are real. Then its roots are all positive if and only if (βˆ’1)^i t_{nβˆ’i} > 0 for all 1 ≀ i ≀ n.

Combining Theorems 4 and 5, finding a Lyapunov function in quadratic form can be converted into solving the real roots of a positive polynomial system, which we denote by

  Inequ = {g_1 > 0, g_2 > 0, . . . , g_n > 0}.   (5)

Suppose we have obtained the positive polynomial system (5), and denote its variables by a. To obtain a value of a by a numerical technique, we first convert the inequalities into equations. A simple idea is to add a new variable set x = (x_1, x_2, . . . , x_n) and construct the equation system

  ps = {g_1 βˆ’ x_1^2, g_2 βˆ’ x_2^2, . . . , g_n βˆ’ x_n^2}.   (6)

If we find a real point (a, x) of system (6) whose slack components x_i are all nonzero, then the point a satisfies

  {g_1(a) > 0, g_2(a) > 0, . . . , g_n(a) > 0},   (7)

which means the differential system has a Lyapunov function at the equilibrium. Note that system (6) has more variables than equations, so ps must be a positive dimensional polynomial system. Recall the algorithms mentioned in Section 3: all of them obtain at least one real point on each connected component, and they use Theorem 3 to verify the existence of real roots, which lowers their efficiency.
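The slack-variable construction in (6) is easy to sketch in code. Plain Python; the two polynomials g_1, g_2 below are a toy instance, not the system (9) of the paper:

```python
import math

# Sketch of construction (6): each strict inequality g_i(a) > 0 becomes the
# equation g_i(a) - x_i**2 = 0. A real root with every slack x_i nonzero
# certifies g_i(a) = x_i**2 > 0, i.e. (7) holds.
def slack_residuals(gs, a, x):
    """Residuals of system (6) at the point (a, x)."""
    return [g(a) - xi**2 for g, xi in zip(gs, x)]

gs = [lambda a: a + 1.0,      # g_1(a) = a + 1
      lambda a: 4.0 - a**2]   # g_2(a) = 4 - a^2

# a = 1 with slacks (sqrt(2), sqrt(3)) is an exact real root of (6):
vals = slack_residuals(gs, 1.0, (math.sqrt(2.0), math.sqrt(3.0)))
print(vals)  # residuals are numerically zero, and both slacks are nonzero
```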
However, in this paper we only need one real point of system (6) to establish the inequalities in (7), so we verify the inequalities directly by evaluating them at the real part of every approximate real root of system (6). In the following, we propose an algorithm that decides whether a Lyapunov function of the assumed form exists at the equilibrium.

Algorithm 6.
Input: a differential system as defined in (1) and a tolerance ε.
Output: a Lyapunov function, or UNKNOWN.

(1) Construct the positive polynomial system.
(2) Convert the positive polynomial system into the positive dimensional system ps_1 defined in (6).
(3) Choose n random points (x̂_1, x̂_2, . . . , x̂_n) and n random vectors k_1, k_2, . . . , k_n; construct n hyperplanes in R^n through x̂_i with normal k_i for i = 1, 2, . . . , n. Denote this set of hyperplanes by ps_2.
(4) Let ps = {ps_1, ps_2}, and solve this square system using the homotopy continuation algorithm; denote the solutions of ps by roots.
(5) For s = 1 : length(roots):
  (a) if the norm of the imaginary part of roots{s} is smaller than ε, substitute the real part of roots{s} into {g_1, . . . , g_n} and denote the values by {V_1, V_2, . . . , V_n}. If V_i > 0 for all i ∈ {1, 2, . . . , n}, return the real part of roots{s} and stop.
(6) End for.
(7) Construct the polynomial system ps_3 = {Σ_{i=1}^{n} λ_i ∇p_i = k}, where the p_i are the polynomials of ps_1, the λ_i are new variables, and k is chosen randomly from {k_1, . . . , k_n}.
(8) Solve {ps_1, ps_3} using the homotopy continuation algorithm, denote its solutions by roots, and go to Step 5.
(9) Return UNKNOWN.

In the following, we present a simple example to illustrate our algorithm.

Example 7. Consider the following system from [20]:

  ẋ = βˆ’x + 2y^3 βˆ’ 2y^4,
  ẏ = βˆ’x βˆ’ y + xy.   (8)

Let the Lyapunov function be F(x, y) = x^2 + axy + by^2.
Step 1. We obtain the positive polynomial system using Theorems 4 and 5:

  [2b + 2 > 0, βˆ’a^2 + 4b > 0, 2a + 4b + 4 > 0, 4a^2 + 4b^2 βˆ’ 16b > 0].   (9)

Step 2. Convert system (9) into the following system:

  ps_1 = { 2b + 2 βˆ’ x_1^2 = 0,
           βˆ’a^2 + 4b βˆ’ x_2^2 = 0,
           2a + 4b + 4 βˆ’ x_3^2 = 0,
           4a^2 + 4b^2 βˆ’ 16b βˆ’ x_4^2 = 0 }.   (10)

Step 3. Construct two hyperplanes {h_1, h_2} in R^6 randomly, where

  h_1 = 0.09713178123584754a + 0.04617139063115394b + 0.27692298496089x_1 + 0.8234578283272926x_2 + 0.694828622975817x_3 + 0.3170994800608605x_4 + 0.9502220488383549,
  h_2 = 0.3815584570930084a + 0.4387443596563982b + 0.03444608050290876x_1 + 0.7655167881490024x_2 + 0.7951999011370632x_3 + 0.1868726045543786x_4 + 0.4897643957882311.   (11)

Step 4. Compute the roots of the augmented system {ps_1 = 0, h_1 = 0, h_2 = 0} using the homotopy method; the system has only 16 roots.

Step 5. The first approximate real root of the system is

  x = [βˆ’2.407604610156789, 4.633115716668555, 3.356520733339377, 3.568739680591174, βˆ’4.209186815331512, βˆ’5.909266734956268].   (12)

Substituting a = βˆ’2.407604610156789, b = 4.633115716668555 into the left-hand sides of the inequalities in (9) gives

  [11.26623143, 12.73590291, 17.71725365, 34.91943333].   (13)

This establishes the inequalities in (9). Thus,

  F(x, y) = x^2 + 4.633115716668555y^2 βˆ’ 2.407604610156789xy   (14)

is a Lyapunov function.
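Step 5's acceptance test can be reproduced with a few lines of plain Python: substitute the real part of the root (12) into the left-hand sides of (9) and check positivity.

```python
# Plain-Python check of Step 5: evaluate the four left-hand sides of (9)
# at the real part of the approximate root (12) and confirm positivity.
a, b = -2.407604610156789, 4.633115716668555

lhs = [
    2*b + 2,
    -a**2 + 4*b,
    2*a + 4*b + 4,
    4*a**2 + 4*b**2 - 16*b,
]
print(lhs)                      # agrees with (13) up to rounding
print(all(v > 0 for v in lhs))  # True: (a, b) satisfies all inequalities in (9)
```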
If instead the random hyperplanes {h_1, h_2} are

  h_1 = βˆ’3a βˆ’ b + x_1 + 2x_2 βˆ’ 2x_3 βˆ’ 2x_4 βˆ’ 3,
  h_2 = 3a βˆ’ 3b βˆ’ x_1 βˆ’ 2x_2 + x_3 + 2x_4 βˆ’ 2,   (15)

we find that the polynomial system {h_1 = 0, h_2 = 0, ps_1 = 0} has no real root; we then go to Step 7 of Algorithm 6 and obtain the following system:

  ps_3 = { βˆ’2λ_2 a + 2λ_3 + 8λ_4 a βˆ’ 1 = 0,
           2λ_1 + 4λ_2 + 4λ_3 + λ_4 (8b βˆ’ 16) βˆ’ 3 = 0,
           βˆ’2λ_1 x_1 + 1 = 0,
           βˆ’2λ_2 x_2 + 2 = 0,
           βˆ’2λ_3 x_3 βˆ’ 2 = 0,
           βˆ’2λ_4 x_4 βˆ’ 3 = 0 }.   (16)

Solving the system {ps_1 = 0, ps_3 = 0}, we find the first approximate real root; substituting a = 1.3053335232048229, b = 0.4314538107033688 into the left-hand sides of the inequalities in (9) gives

  [2.862907621406738, 0.021919636011159, 8.336482289223121, 0.656931019037197].   (17)

This establishes the inequalities in (9). Thus,

  F(x, y) = x^2 + 0.4314538107033688y^2 + 1.3053335232048229xy   (18)

is a Lyapunov function.

5. Experiments

In this section, some examples are given to illustrate the efficiency of our algorithm.

Example 8. Consider the following system from [7]:

  ẋ = y,
  ẏ = z,
  ż = βˆ’4x βˆ’ 3y βˆ’ 2z + x^2 y + x^2 z.   (19)

We assume that F(x, y, z) = x^2 + y^2 + z^2 + axy + bxz + cyz. Algorithm 6 returns the Lyapunov function

  F(x, y, z) = x^2 + y^2 + z^2 + 1.370502803658027xy + 0.655753434727512xz + 0.632220465746607yz   (20)

at Step 4 in only 1.085175 s. If the algorithm does not terminate at Step 4, it returns

  F(x, y, z) = x^2 + y^2 + z^2 + 0.566986159377122xy + 1.934844270891010xz + 0.065341301862036yz   (21)

in about 21.285095 s.

Example 9. Consider an example from a classic ODE textbook:

  ẋ = βˆ’x βˆ’ 3y + 2y + yz,
  ẏ = 3x βˆ’ y βˆ’ z + xz,
  ż = βˆ’2x + y βˆ’ z + xy.   (22)

Assume that F(x, y, z) = x^2 + axy + xz + cy^2 + dyz + ez^2.
In about 2.4 s, we obtained a real root for the parameters that form the coefficients of F; this point was found at Step 4. If no real point is found at Step 4, the program returns a real root in about 267 s, which is still far more efficient than the 1800 s reported in [3].

Example 10. Consider another example from an ODE textbook:

  ẋ = βˆ’x + y + xz^2 βˆ’ x^3,
  ẏ = x βˆ’ y + z^2 βˆ’ y^3,
  ż = βˆ’yz βˆ’ z^2.   (23)

Assume that F = x^2 + bxz + cy^2 + dyz + ez^2. For this problem, our algorithm stops at Step 3, using about 1.24475 s; the method of [3] uses about 840 s.

6. Conclusion

For a differential system, based on the technique of computing real roots of positive dimensional polynomial systems, we present a numerical method to compute a Lyapunov function at equilibria. Since only one real root of the positive dimensional system is needed, the algorithm proceeds in two stages. At each stage, rather than using the interval Newton method to verify the existence of a real root, we evaluate the positive polynomial system at the approximate real root to verify that the inequalities hold.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was partially supported by the National Natural Science Foundation of China (11171053), the National Natural Science Foundation of China Youth Fund Project (11001040), and cstc2012ggB40004.

References

[1] T. V. Nguyen, T. Mori, and Y. Mori, "Existence conditions of a common quadratic Lyapunov function for a set of second-order systems," Transactions of the Society of Instrument and Control Engineers, vol. 42, no. 3, pp. 241–246, 2006.
[2] T. V. Nguyen, T. Mori, and Y.
Mori, "Relations between common Lyapunov functions of quadratic and infinity-norm forms for a set of discrete-time LTI systems," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E89-A, no. 6, pp. 1794–1798, 2006.
[3] Z. She, B. Xia, R. Xiao, and Z. Zheng, "A semi-algebraic approach for asymptotic stability analysis," Nonlinear Analysis: Hybrid Systems, vol. 3, no. 4, pp. 588–596, 2009.
[4] G. E. Collins, "Quantifier elimination for real closed fields by cylindrical algebraic decomposition," in Automata Theory and Formal Languages, vol. 33 of Lecture Notes in Computer Science, pp. 134–183, Springer, Berlin, Germany, 1975.
[5] M. Bakonyi and K. N. Stovall, "Stability of dynamical systems via semidefinite programming," in Recent Advances in Matrix and Operator Theory, vol. 179 of Operator Theory: Advances and Applications, pp. 25–34, Birkhäuser, Basel, Switzerland, 2008.
[6] K. Forsman, "Construction of Lyapunov functions using Gröbner bases," in Proceedings of the 30th IEEE Conference on Decision and Control, vol. 1, pp. 798–799, 1991.
[7] A. Papachristodoulou and S. Prajna, "On the construction of Lyapunov functions using the sum of squares decomposition," in Proceedings of the 41st IEEE Conference on Decision and Control, vol. 3, pp. 3482–3487, 2002.
[8] M. W. Hirsch and S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra, vol. 60, Academic Press, New York, NY, USA, 1974.
[9] T. Y. Li, "Numerical solution of polynomial systems by homotopy continuation methods," Handbook of Numerical Analysis, vol. 11, pp. 209–304, 2003.
[10] A. J. Sommese and C. W. Wampler II, The Numerical Solution of Systems of Polynomials Arising in Engineering and Science, World Scientific, Singapore, 2005.
[11] A. Seidenberg, "A new decision method for elementary algebra," Annals of Mathematics, vol. 60, no. 2, pp. 365–374, 1954.
[12] F. Rouillier, M.-F. Roy, and M.
Safey El Din, "Finding at least one point in each connected component of a real algebraic set defined by a single equation," Journal of Complexity, vol. 16, no. 4, pp. 716–750, 2000.
[13] P. Aubry, F. Rouillier, and M. Safey El Din, "Real solving for positive dimensional systems," Journal of Symbolic Computation, vol. 34, no. 6, pp. 543–560, 2002.
[14] J. D. Hauenstein, "Numerically computing real points on algebraic sets," Acta Applicandae Mathematicae, vol. 125, no. 1, pp. 105–119, 2013.
[15] G. M. Besana, S. di Rocco, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler, "Cell decomposition of almost smooth real algebraic surfaces," Numerical Algorithms, vol. 63, no. 4, pp. 645–678, 2013.
[16] W. Wu and G. Reid, "Finding points on real solution components and applications to differential polynomial systems," in Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, pp. 339–346, 2013.
[17] S. M. Rump and S. Graillat, "Verified error bounds for multiple roots of systems of nonlinear equations," Numerical Algorithms, vol. 54, no. 3, pp. 359–377, 2010.
[18] T. Y. Li, HOM4PS-2.0, 2008, http://www.math.nsysu.edu.tw/~leetsung/works/HOM4PS soft.htm.
[19] D. Wang and B. Xia, Computer Algebra, Tsinghua University Press, Beijing, China, 2004.
[20] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Westview Press, 2001.