E7441: Scientific computing in biology and biomedicine
Non-linear equations and optimization
Vlad Popovici, Ph.D., RECETOX

Outline
1 Nonlinear equations
   Numerical methods in R
   Systems of nonlinear equations
2 Fundamental concepts in numerical optimization
   Problem setting
   Optimization in R
   Optimization in Rn
   Unconstrained optimization in Rn
   Important classes of optimization problems
      Linear programming
      Quadratic programming
      Constrained nonlinear optimization

Nonlinear equations

- scalar problem: f : R → R, find x ∈ R such that f(x) = 0
- vectorial problem: f : Rn → Rn, find x ∈ Rn such that f(x) = 0
- in either case, we consider f to be continuously differentiable everywhere in a neighborhood of the solution
- an interval [a, b] is a bracket for the function f if f(a)f(b) < 0
- f continuous → f([a, b]) is an interval
- Bolzano's thm.: if [a, b] is a bracket for f, then there exists at least one x∗ ∈ [a, b] s.t. f(x∗) = 0
- if f(x∗) = f′(x∗) = · · · = f^(m−1)(x∗) = 0 but f^(m)(x∗) ≠ 0, then x∗ has multiplicity m
- note: in Rn things are much more complicated

Conditioning

- for a scalar problem, the absolute condition number is 1/|f′(x∗)|
- the problem is ill-conditioned around a multiple solution
- for a vectorial problem, the absolute condition number is ∥J_f^(−1)(x∗)∥, where J_f is the Jacobian matrix of f, [J_f(x)]_ij = ∂f_i(x)/∂x_j
- if the Jacobian is nearly singular, the problem is ill-conditioned
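The scalar condition number 1/|f′(x∗)| can be estimated numerically; a minimal sketch, with the derivative approximated by a central difference (the function name is illustrative):

```python
def abs_condition(f, x_star, h=1e-6):
    """Absolute condition number 1/|f'(x*)| of a scalar root,
    with f' estimated by a central difference."""
    fprime = (f(x_star + h) - f(x_star - h)) / (2 * h)
    return 1.0 / abs(fprime)

# f(x) = x^2 - x - 2 has a simple root at x* = 2 with f'(2) = 3
print(abs_condition(lambda x: x**2 - x - 2, 2.0))  # ~ 1/3
```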
Sensitivity and conditioning

- possible interpretations of the approximate solution x̂:
  ▶ ∥f(x̂) − f(x∗)∥ ≤ ϵ: small residual
  ▶ ∥x̂ − x∗∥ ≤ ϵ: closeness to the true solution
- the two criteria might not be satisfied simultaneously
- if the problem is well-conditioned: small residual implies accurate solution

Convergence rate

- usually, the solution is found iteratively
- let e_k = x_k − x∗ be the error at the k-th iteration, where x_k is the approximation and x∗ is the true solution
- the method converges with rate r if lim_{k→∞} ∥e_{k+1}∥ / ∥e_k∥^r = C, for some finite C > 0
- if the method is based on improving the bracketing, then e_k = b_k − a_k
- if
  ▶ r = 1 and C < 1: the convergence is linear, and a constant number of digits is "gained" per iteration
  ▶ r = 2: the convergence is quadratic; the number of exact digits doubles at each iteration
  ▶ r > 1: the convergence is superlinear; an increasing number of digits is gained per iteration (depends on r)

Bisection method

Idea: refine the bracketing of the solution until the length of the interval is small enough. Assumption: there is only one solution in the interval.

Algorithm 1: Bisection
while (b − a) > ϵ do
   m ← a + (b − a)/2
   if sign(f(a)) = sign(f(m)) then
      a ← m
   else
      b ← m

(figure: a bracket [a, b] with midpoint m and values f(a), f(b))

Implement the above procedure in Python.

Bisection, cont'd

- convergence is certain, but slow
- convergence rate is linear (r = 1 and C = 1/2)
- after k iterations, the length of the interval is (b − a)/2^k, so achieving a tolerance ϵ requires log2((b − a)/ϵ) iterations, independently of f
- the value of the function is not used, just its sign
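The bisection procedure above can be sketched in Python as (argument names and the default tolerance are illustrative):

```python
def bisect(f, a, b, eps=1e-10):
    """Bisection: halve a bracket [a, b] with f(a)*f(b) < 0
    until it is shorter than eps; return the midpoint."""
    if f(a) * f(b) > 0:
        raise ValueError("[a, b] is not a bracket for f")
    while b - a > eps:
        m = a + (b - a) / 2
        if f(a) * f(m) > 0:   # same sign: the root lies in [m, b]
            a = m
        else:
            b = m
    return a + (b - a) / 2

print(bisect(lambda x: x**2 - x - 2, 0, 5))  # ~ 2.0
```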
Fixed-point methods

- a fixed point for a function g : R → R is a value x ∈ R such that g(x) = x
- the fixed-point iteration x_{k+1} = g(x_k) is used to build a series of successive approximations to the solution
- for a given equation f(x) = 0 there might be several equivalent fixed-point problems x = g(x)

Example

The solutions of the equation x² − x − 2 = 0 are the fixed points of each of the following functions:
- g(x) = x² − 2
- g(x) = √(x + 2)
- g(x) = 1 + 2/x
- g(x) = (x² + 2)/(2x − 1)

For g(x) = 1 + 2/x, starting at x = 1: g(1) = 3, g(g(1)) = 1.(6), g(g(g(1))) = 2.2, g(g(g(g(1)))) = 1.(90), ...; the iterates approach the fixed point x∗ = 2.

Conditions for convergence

- a function g : S ⊂ R → R is called Lipschitz-bounded if ∃α ∈ [0, 1) so that |g(x1) − g(x0)| ≤ α|x1 − x0|, ∀x0, x1 ∈ S
- in particular, if |g′(x∗)| < 1, then g is Lipschitz-bounded in some neighborhood of x∗
- for such functions, there exists an interval containing x∗ s.t. the iteration x_{k+1} = g(x_k) converges to x∗ if started within that interval
- if |g′(x∗)| > 1, the iterations diverge
- in general, convergence is linear
- smoothed iterations: x_{k+1} = λ_k x_k + (1 − λ_k) g(x_k), with λ_k ∈ [0, 1] and lim_{k→∞} λ_k = 0

Stopping criteria

Stop the iterations if any of the following holds:
1 |x_{k+1} − x_k| ≤ ϵ1 |x_{k+1}| (relative error)
2 |x_{k+1} − x_k| ≤ ϵ2 (absolute iteration error)
3 |f(x_{k+1}) − f(x_k)| ≤ ϵ3 (absolute functional error)

Newton-Raphson method

- from the Taylor series: f(x + h) ≈ f(x) + f′(x)h, so in a small neighborhood around x, f can be approximated by a linear function of h, with root h = −f(x)/f′(x)
- Newton iteration: x_{k+1} = x_k − f(x_k)/f′(x_k)

(figure: one Newton step from x_k to x_{k+1})

Implement the above procedure in Python.
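A minimal Python sketch of the Newton iteration above (names and the stopping rule are illustrative):

```python
def newton(f, fprime, x0, eps=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) <= eps * (1 + abs(x)):  # relative stopping criterion
            return x
    raise RuntimeError("no convergence after max_iter iterations")

# root of x^2 - x - 2, started near 3
print(newton(lambda x: x**2 - x - 2, lambda x: 2*x - 1, 3.0))  # ~ 2.0
```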
Newton-Raphson method, cont'd

- convergence for a simple root is quadratic
- to converge, the procedure needs to start close enough to the solution, where the function f is monotonic

Secant method

- approximates the derivative by finite differences: x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1}))
- closely related to the classical regula falsi (false position), which additionally maintains a bracket
- convergence is normally superlinear, with r ≈ 1.618
- it must be started in a properly chosen neighborhood

(figure: one secant step from x_{k−1}, x_k to x_{k+1})

Implement the above procedure in Python.

Interpolation methods and other approaches

- the secant method uses linear interpolation
- one can use higher-degree polynomial interpolation (e.g. quadratic) and find the roots of the interpolating polynomial
- inverse interpolation: interpolate x as a function p of y and take x_{k+1} = p(0)
- fractional interpolation
- special methods for finding roots of polynomials

Fractional interpolation

- the previous methods have difficulties with functions having horizontal or vertical asymptotes
- linear fractional interpolation uses the function ϕ(x) = (x − u)/(vx − w), which has a vertical asymptote at x = w/v, a horizontal asymptote at y = 1/v, and a zero at x = u

Fractional interpolation, cont'd

- let x0, x1, x2 be three points where the function is evaluated, yielding f0, f1, f2
- find u, v, w for ϕ by solving the linear system

   [ 1   x0 f0   −f0 ] [u]   [x0]
   [ 1   x1 f1   −f1 ] [v] = [x1]
   [ 1   x2 f2   −f2 ] [w]   [x2]

- the iteration step shifts the values: x0 ← x1 and x1 ← x2
- the new approximate solution is the zero of the linear fraction, x2 = u.
This can be implemented as

   x2 ← x2 + (x0 − x2)(x1 − x2)(f0 − f1) f2 / [ (x0 − x2)(f2 − f1) f0 − (x1 − x2)(f2 − f0) f1 ]

Systems of nonlinear equations

- much more difficult than the scalar case
- no simple way to ensure convergence
- the computational overhead increases rapidly with the dimension
- difficult to determine the number of solutions
- difficult to find a good starting approximation

Fixed-point methods in Rn

- g : Rn → Rn, x = g(x) = [g1(x), ..., gn(x)]
- fixed-point iteration: x_{k+1} = g(x_k)
- denote by ρ(J_g(x)) the spectral radius (maximum absolute eigenvalue) of the Jacobian matrix of g evaluated at x
- if ρ(J_g(x∗)) < 1, the fixed-point iteration converges if started close enough to the solution
- the convergence is linear, with C = ρ(J_g(x∗))

Newton-Raphson method in Rn

- x_{k+1} = x_k − J_f^(−1)(x_k) f(x_k)
- no need for inversion: solve the system J_f(x_k) s_k = −f(x_k) for the Newton step s_k and iterate x_{k+1} = x_k + s_k

Broyden's method

- uses approximations of the Jacobian
- the initial approximation of J can be the actual Jacobian (if available) or even I

Algorithm 2: Broyden's method
for k = 0, 1, 2, ... do
   solve B_k s_k = −f(x_k) for s_k
   x_{k+1} = x_k + s_k
   y_k = f(x_{k+1}) − f(x_k)
   B_{k+1} = B_k + ((y_k − B_k s_k) s_kᵀ) / (s_kᵀ s_k)
   if ∥x_{k+1} − x_k∥ ≥ ϵ1(1 + ∥x_{k+1}∥) then continue
   if ∥f(x_{k+1})∥ < ϵ2 then
      x∗ = x_{k+1}; break
   else
      algorithm failed
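Algorithm 2 above can be sketched as follows; the stopping test is condensed relative to the slide's pseudocode, and the starting Jacobian is supplied explicitly (the test problem and names are illustrative):

```python
import numpy as np

def broyden(f, x0, B0=None, eps1=1e-10, eps2=1e-8, max_iter=100):
    """Broyden's method with rank-1 Jacobian updates (B0 defaults to I)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if B0 is None else np.asarray(B0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -f(x))
        x_new = x + s
        y = f(x_new) - f(x)
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-1 update
        if (np.linalg.norm(x_new - x) < eps1 * (1 + np.linalg.norm(x_new))
                and np.linalg.norm(f(x_new)) < eps2):
            return x_new
        x = x_new
    raise RuntimeError("Broyden's method did not converge")

# solve x0^2 + x1^2 = 2, x0 - x1 = 0  (one solution is (1, 1))
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J0 = np.array([[2.4, 1.6], [1.0, -1.0]])   # Jacobian at the starting point
root = broyden(f, [1.2, 0.8], B0=J0)
print(root)  # ~ [1, 1]
```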
Further topics

- the secant method is also extended to Rn (see Broyden's method)
- robust Newton-like methods: enlarge the region of convergence, introduce a scalar parameter to ensure progression toward the solution
- in Python: scipy.optimize.root_scalar(), scipy.optimize.root(), scipy.optimize.fsolve(), etc. See the Python notebook for examples.

Numerical optimization

Problem setting

- minimization problem: f : Rn → R, S ⊆ Rn; find x∗ ∈ S such that f(x∗) ≤ f(y), ∀y ∈ S \ {x∗}
- x∗ is called a minimizer (minimum, extremum) of f
- maximization is equivalent to minimizing −f
- f is called the objective function and is considered, here, differentiable with continuous second derivative
- the constraint set S (or feasible region) is defined by a system of equations and/or inequalities
- y ∈ S is called a feasible point
- if S = Rn, the optimization is unconstrained

Optimization problem

min_x f(x)
subject to g(x) = 0
           h_k(x) ≤ 0

where f : Rn → R, g : Rn → Rm, h_k : Rn → R. If f, g and h_k are all linear: linear programming.

Some theory

- Rolle's thm: f continuous on [a, b] and differentiable on (a, b) with f(a) = f(b); then ∃c ∈ (a, b) : f′(c) = 0
- Weierstrass' thm: f continuous on a compact set with values in a subset of R attains its extrema
- Fermat's thm: f : (a, b) → R; at a stationary point x0 ∈ (a, b), f′(x0) = 0. Generalization: ∇f(x0) = 0.
- convex function: f′′(x) > 0; concave function: f′′(x) < 0
- if f′(x0) = 0 and f′′(x0) > 0, then x0 is a minimizer
- if f′(x0) = 0 and f′′(x0) < 0, then x0 is a maximizer
- if f′(x0) = f′′(x0) = 0, higher derivatives must be examined (x0 may be, e.g., an inflection point)

Set convexity

Formally: a set S is convex if αx1 + (1 − α)x2 ∈ S for all x1, x2 ∈ S and α ∈ [0, 1].

Function convexity

Formally: f is said to be convex on a convex set S if f(αx1 + (1 − α)x2) ≤ αf(x1) + (1 − α)f(x2) for all x1, x2 ∈ S and α ∈ [0, 1].

Uniqueness of the solution

- any local minimum of a convex function f on a convex set S ⊆ Rn is a global minimum of f on S
- any local minimum of a strictly convex function f on a convex set S ⊆ Rn is the unique global minimum of f on S

Optimality criteria

For x∗ ∈ S to be an extremum of f : S ⊆ Rn → R:
- first-order condition: x∗ must be a critical point: ∇f(x∗) = 0
- second-order condition: the Hessian matrix H_f(x∗) must be positive or negative definite, where [H_f(x)]_ij = ∂²f(x)/∂x_i∂x_j

If the Hessian is
▶ positive definite, then x∗ is a minimum of f
▶ negative definite, then x∗ is a maximum of f
▶ indefinite, then x∗ is a saddle point of f
▶ singular, then different degenerate cases are possible...

Saddle point

(figure; source: Wikipedia)

Unimodality

Unimodality allows safely discarding parts of the interval without losing the solution (as in the case of interval bisection).
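The second-order classification above can be checked numerically through the eigenvalues of the Hessian at a critical point; a small sketch (the function name and tolerance are illustrative):

```python
import numpy as np

def classify(H, tol=1e-12):
    """Classify a critical point from the eigenvalues of its (symmetric) Hessian."""
    w = np.linalg.eigvalsh(H)
    if np.all(w > tol):
        return "minimum"
    if np.all(w < -tol):
        return "maximum"
    if np.any(np.abs(w) <= tol):
        return "degenerate"
    return "saddle point"

# f(x, y) = x^2 - y^2 has a critical point at the origin with Hessian diag(2, -2)
H = np.array([[2.0, 0.0], [0.0, -2.0]])
print(classify(H))  # saddle point
```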
Golden section search

- evaluate the function at 3 points and decide which part of the interval to discard
- ensure that the sampling remains proportional: the ratio of the whole interval to its larger part equals the ratio of the larger part to the smaller one, which gives the golden ratio (1 + √5)/2 = 1.618...
- convergence is linear, with C ≈ 0.618

Successive parabolic interpolations

Idea: fit a parabola through three points and take its minimum as the next iterate. Convergence is superlinear, with r ≈ 1.32.

Newton's method

From Taylor's series: f(x + h) ≈ f(x) + f′(x)h + (f′′(x)/2)h², whose minimum (as a function of h) is at h = −f′(x)/f′′(x). Iteration scheme:

   x_{k+1} = x_k − f′(x_k)/f′′(x_k)

(That is Newton's method for finding the zero of f′(x) = 0.) Quadratic convergence, but it needs to start close to the solution.

Hybrid methods

- idea: combine "slow-but-sure" methods with "fast-but-risky" ones
- most library routines use such an approach
- popular combination: golden section search and successive parabolic interpolation

Python functions for optimization in R

- many packages; check the scipy.optimize module
- an interesting project: PyOMO
- scipy.optimize.fminbound(): bounded function minimization
- you can use the functions for the multivariate case as well

Exercise in Python...

Nelder-Mead (simplex) method

- direct search methods simply compare the function values at different points in S
- Nelder-Mead selects n + 1 points (in Rn) forming a simplex (i.e.
a segment in R, a triangle in R², a tetrahedron in R³, etc.)
- along the line from the point with the highest function value through the centroid of the rest, select a new vertex
- the new vertex replaces the worst previous point
- repeat until convergence
- a useful procedure for non-smooth functions, but expensive for large n

Nelder-Mead in Python

Use the function scipy.optimize.fmin() to find the minimum of the function f(x) = sin(∥x∥²). Try different initial conditions.

(figure: surface plot of f over [−3, 3]²)

Steepest descent (gradient descent)

- f : Rn → R: the negative gradient −∇f(x) is locally the direction of steepest descent towards a (local) minimum
- x_{k+1} = x_k − α_k ∇f(x_k), where α_k is a line search parameter:
   α_k = arg min_α f(x_k − α ∇f(x_k))
- the method always progresses towards a minimum, as long as the gradient is non-zero
- the convergence is slow; the search direction may zig-zag
- the method is "myopic" in its choices

(figure: zig-zagging descent path x0, x1, x2, ...)

Newton's method

- exploits the 1st and 2nd derivatives
- Newton iteration: x_{k+1} = x_k − H_f^(−1)(x_k) ∇f(x_k)
- no need to invert the Hessian: solve the system H_f(x_k) s_k = −∇f(x_k) and then x_{k+1} = x_k + s_k
- variation: the damped Newton method uses a line search along the direction s_k to make the method more robust
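The Newton iteration with a linear solve, as described above, can be sketched as (the test problem is illustrative):

```python
import numpy as np

def newton_opt(grad, hess, x0, eps=1e-10, max_iter=50):
    """Newton's method for minimization: solve H s = -grad instead of inverting H."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(hess(x), -grad(x))   # Newton step
        x = x + s
        if np.linalg.norm(s) <= eps * (1 + np.linalg.norm(x)):
            return x
    raise RuntimeError("no convergence")

# quadratic test: f(x) = 0.5 x^T A x - b^T x, whose minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = newton_opt(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x_min)
```

On a quadratic, a single Newton step lands on the exact minimizer.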
Newton's method, cont'd

- close to the minimum, the Hessian is symmetric positive definite, so the Cholesky decomposition can be used
- if initialized far from the minimum, the Newton step may fail the descent condition (∇f(x_k))ᵀ s_k < 0
- in that case, choose a different direction based on the negative gradient, negative curvature, etc.

Quasi-Newton methods

- improve reliability and reduce overhead
- general form: x_{k+1} = x_k − α_k B_k^(−1) ∇f(x_k), where α_k is a line search parameter and B_k is an approximation of the Hessian

BFGS (Broyden-Fletcher-Goldfarb-Shanno) method

Algorithm 3: BFGS method
x0 = some initial value
B0 = initial approximation of the Hessian
for k = 0, 1, 2, ... do
   solve B_k s_k = −∇f(x_k) for s_k
   x_{k+1} = x_k + s_k
   y_k = ∇f(x_{k+1}) − ∇f(x_k)
   B_{k+1} = B_k + (y_k y_kᵀ)/(y_kᵀ s_k) − (B_k s_k s_kᵀ B_k)/(s_kᵀ B_k s_k)

BFGS, cont'd

- update a factorization of B_k rather than refactorizing it at each iteration
- no 2nd derivative is needed
- can start with B0 = I
- B_k does not necessarily converge to the true Hessian

Conjugate gradient (CG)

- does not need the 2nd derivative and does not construct an approximation of the Hessian
- searches along conjugate directions, implicitly accumulating information about the Hessian
- for quadratic problems, it converges (theoretically) in n steps to the exact solution
- two vectors x, y are conjugate with respect to a matrix A if xᵀAy = 0
- idea: start with an initial guess x0 (could be 0); go along the negative gradient at the current point; compute the new direction as a combination of the previous and new gradients
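The BFGS scheme of Algorithm 3 can be sketched as below; a backtracking line search and a curvature guard are added for robustness (both are additions, not part of the slide's pseudocode):

```python
import numpy as np

def bfgs(f, grad, x0, eps=1e-8, max_iter=500):
    """BFGS with B0 = I, a simple backtracking line search,
    and a curvature guard that keeps B positive definite."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            return x
        s = np.linalg.solve(B, -g)            # quasi-Newton direction
        t = 1.0
        while f(x + t * s) > f(x) + 1e-4 * t * (g @ s):
            t *= 0.5                          # backtracking (Armijo) line search
        x_new = x + t * s
        y = grad(x_new) - g
        d = x_new - x
        if y @ d > 1e-12:                     # curvature condition
            B += np.outer(y, y) / (y @ d) - \
                 (B @ np.outer(d, d) @ B) / (d @ B @ d)
        x = x_new
    raise RuntimeError("BFGS did not converge")

# minimize f(x) = (x0 - 1)^2 + 2 (x1 + 0.5)^2
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2
g = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 0.5)])
x_opt = bfgs(f, g, [0.0, 0.0])
print(x_opt)  # ~ [1, -0.5]
```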
Algorithm 4: CG method
x0 = some initial value
g0 = ∇f(x0)
s0 = −g0
for k = 0, 1, 2, ... do
   α_k = arg min_α f(x_k + α s_k)
   x_{k+1} = x_k + α_k s_k
   g_{k+1} = ∇f(x_{k+1})
   β_{k+1} = (g_{k+1}ᵀ g_{k+1}) / (g_kᵀ g_k)
   s_{k+1} = −g_{k+1} + β_{k+1} s_k

(figure: CG search path; source: Wikipedia)

Other methods

We barely scratched the surface!
- heuristic methods
- genetic algorithms
- stochastic methods
- hybrid methods
- etc.

Some Python functions in scipy.optimize

- linear and quadratic optimization: linprog()
- linear least squares: nnls(), lsq_linear()
- nonlinear minimization:
  ▶ fminbound() - scalar bounded problem
  ▶ fmin_bfgs(), etc. - multidimensional nonlinear minimization
  ▶ fmin() - Nelder-Mead unconstrained nonlinear minimization
  ▶ fmin_l_bfgs_b(), etc. - multidimensional constrained nonlinear minimization
  ▶ ...

Linear programming (LP)

General form:
minimize fᵀx
subject to A_eq x = b_eq
           A x ≤ b
           lb ≤ x ≤ ub

Python: res = linprog(f, A_ub=A, b_ub=b, A_eq=Aeq, b_eq=beq, bounds=bounds)

LP - Example

Solve the LP:
maximize 2x1 + 3x2
such that x1 + 2x2 ≤ 8
          2x1 + x2 ≤ 10
          x2 ≤ 3

See the Python notebook.

LP - A "practical" example

A company produces two types of microchips: C1 (1g silicon, 1g plastic, 4g copper) and C2 (1g germanium, 1g plastic, 2g copper). C1 brings a profit of 12 EUR, C2 a profit of 9 EUR. The stock of raw materials: 1000g silicon, 1500g germanium, 1750g plastic, 4800g copper. How many C1 and C2 should be produced to maximize profit while respecting the availability of the raw material stock?
Let x denote the quantity of C1, and y the quantity of C2. The problem is

max_{x,y} 12x + 9y
s.t. x ≤ 1000
     y ≤ 1500
     x + y ≤ 1750
     4x + 2y ≤ 4800
     x, y ≥ 0

The problem can be written as

max_x cᵀx   s.t. Ax ≤ b, x ∈ R²₊

where x = [x, y]ᵀ, c = [12, 9]ᵀ,

   A = [ 1 0 ]        b = [ 1000 ]
       [ 0 1 ]            [ 1500 ]
       [ 1 1 ]            [ 1750 ]
       [ 4 2 ]            [ 4800 ]

See the Python notebook for a possible approach.

Quadratic programming (QP)

General form:
minimize (1/2) xᵀHx + fᵀx
subject to Ax ≤ b
           A_eq x = b_eq
           lb ≤ x ≤ ub

with H ∈ Rⁿˣⁿ symmetric.

Python: you need to install an extra package, e.g. qpsolvers:

   x = qpsolvers.solve_qp(H, f, A, b, A_eq, b_eq, lb, ub, solver="proxqp")  # other solvers are available

QP - Example

Solve:
minimize x1² + x1x2 + 2x2² + 2x3² + 2x2x3 + 4x1 + 6x2 + 12x3
subject to x1 + x2 + x3 ≥ 6
           −x1 − x2 + 2x3 ≥ 2
           x1, x2, x3 ≥ 0

See the Python notebook.

Constrained nonlinear optimization

Problem:
minimize f(x)
subject to c(x) ≤ 0
           c_eq(x) = 0
           Ax ≤ b
           A_eq x = b_eq
           lb ≤ x ≤ ub

Python: various functions - see, for example, scipy.optimize.minimize()
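As an illustration of scipy.optimize.minimize() on a small constrained problem (the objective and constraint are made up for the example):

```python
import numpy as np
from scipy.optimize import minimize

# minimize (x0 - 1)^2 + (x1 - 2)^2  subject to  x0 + x1 <= 2, x >= 0;
# SLSQP expects inequality constraints in the form fun(x) >= 0
res = minimize(
    lambda x: (x[0] - 1)**2 + (x[1] - 2)**2,
    x0=np.array([0.0, 0.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}],
)
print(res.x)  # ~ [0.5, 1.5]
```

The constraint is active at the solution: the unconstrained minimizer (1, 2) is projected onto the line x0 + x1 = 2.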
Questions?