ESI 4313 Operations Research 2
Transcript and Presenter's Notes
1
ESI 4313 Operations Research 2
  • Nonlinear Programming
  • Multi-dimensional Unconstrained Programming
    Problems (Sections 11.3, 11.6, 11.7)

2
Multi-dimensional NLP
  • A general multi-dimensional NLP is:
    max (or min) f(x1, ..., xn)
    s.t. g_i(x1, ..., xn) ≤ b_i (or =, ≥),
    i = 1, ..., m

3
Multi-dimensional derivatives
  • As in the one-dimensional case, we can recognize
    convexity and concavity by looking at the
    objective function's derivatives
  • The gradient vector ∇f(x) is the vector of
    first-order partial derivatives:
    ∇f(x) = (∂f/∂x1, ..., ∂f/∂xn)
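The slides contain no code, but a minimal NumPy sketch of approximating this gradient by central differences may help; the function f below is a hypothetical example, not from the slides:

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x by central differences."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        # i-th first-order partial derivative
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Hypothetical example: f(x1, x2) = x1^2 + x1*x2 + x2^2
f = lambda x: x[0]**2 + x[0]*x[1] + x[1]**2
print(gradient(f, [1.0, 2.0]))  # analytically (2*1 + 2, 1 + 2*2) = (4, 5)
```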

4
Recognizing multi-dimensional convex and concave
functions
  • The Hessian matrix H(x) is the matrix of
    second-order partial derivatives: its (i, j)
    entry is ∂²f/∂x_i∂x_j
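Along the same lines, a sketch of approximating the Hessian numerically (again with the hypothetical f from above; an analytical Hessian is preferable when it is available):

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Approximate the Hessian of f at x by central differences."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # (i, j) second-order partial derivative
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
    return H

f = lambda x: x[0]**2 + x[0]*x[1] + x[1]**2
print(hessian(f, [1.0, 2.0]))  # analytically [[2, 1], [1, 2]]
```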

5
Recognizing multi-dimensional convex and concave
functions
  • An alternative characterization uses the concept
    of principal minors of a matrix
  • A principal minor of order i of the matrix H is
    the determinant of an i×i submatrix obtained by
    removing n-i rows and the corresponding n-i
    columns
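As an illustration of the definition, a small sketch that enumerates all principal minors of a given order (assuming H is a NumPy array):

```python
import numpy as np
from itertools import combinations

def principal_minors(H, order):
    """All principal minors of the given order: determinants of the
    submatrices that keep the same index set of rows and columns."""
    n = H.shape[0]
    return [float(np.linalg.det(H[np.ix_(idx, idx)]))
            for idx in combinations(range(n), order)]

H = np.array([[2.0, 1.0], [1.0, 2.0]])
print(principal_minors(H, 1))  # diagonal entries: [2.0, 2.0]
print(principal_minors(H, 2))  # det(H) = 3.0
```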

6
Recognizing multi-dimensional convex and concave
functions
  • Suppose that
  • H(x) exists for all x ∈ S
  • Then
  • f(x) is a convex function on S if and only if all
    principal minors of H(x) are nonnegative for all
    x ∈ S
  • f(x) is a concave function on S if and only if all
    nonzero principal minors of H(x) of order k have
    the same sign as (-1)^k for all x ∈ S and all k
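A sketch of the resulting test at a single point x; it checks H = H(x) only, so to conclude convexity or concavity on S the condition must hold at every x in S:

```python
import numpy as np
from itertools import combinations

def is_psd_by_minors(H, tol=1e-9):
    """Convexity test at a point: every principal minor of H(x) is
    nonnegative, i.e., H(x) is positive semidefinite."""
    n = H.shape[0]
    return all(np.linalg.det(H[np.ix_(idx, idx)]) >= -tol
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

def is_nsd_by_minors(H, tol=1e-9):
    """Concavity test: nonzero principal minors of order k have sign
    (-1)^k, which is equivalent to -H being positive semidefinite."""
    return is_psd_by_minors(-H, tol)

print(is_psd_by_minors(np.array([[2.0, 1.0], [1.0, 2.0]])))    # True
print(is_nsd_by_minors(np.array([[-2.0, 1.0], [1.0, -2.0]])))  # True
```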

7
Unconstrained optimization
  • A general unconstrained multi-dimensional NLP is:
    max (or min) f(x1, ..., xn)
    over all (x1, ..., xn) ∈ R^n

8
Unconstrained optimization
  • A point x where ∇f(x) = 0 is called a stationary
    point of f
  • The equations ∇f(x) = 0 are called the
    first-order conditions for optimality
  • Let x be a stationary point, i.e., ∇f(x) = 0
  • If H(x) is positive definite, then x is a local
    minimum
  • If H(x) is negative definite, then x is a local
    maximum
  • If H(x) is neither negative definite nor positive
    definite:
  • If det H(x) = 0, then x may be a local minimum,
    local maximum, or saddle point (the test is
    inconclusive)
  • If det H(x) ≠ 0, then x is not a local optimum
    (it is a saddle point)
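A sketch of this classification for a symmetric Hessian, testing definiteness through eigenvalues (an eigenvalue near zero corresponds to the inconclusive det H(x) = 0 case):

```python
import numpy as np

def classify_stationary_point(H, tol=1e-9):
    """Second-order test at a stationary point x, where grad f(x) = 0."""
    eig = np.linalg.eigvalsh(H)          # eigenvalues of symmetric H
    if np.all(eig > tol):
        return "local minimum"           # H positive definite
    if np.all(eig < -tol):
        return "local maximum"           # H negative definite
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"            # H indefinite: not an optimum
    return "inconclusive"                # some eigenvalue ~ 0: det H = 0

print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, -3.0]])))
# -> saddle point
```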

9
Unconstrained optimization
  • These characterizations can be strengthened a bit
    using the concept of leading principal minors
  • The leading principal minor of order i of the
    matrix H is the determinant of the i×i submatrix
    formed by the first i rows and columns

10
Unconstrained optimization
  • A point x where ∇f(x) = 0 is called a stationary
    point of f
  • Let x be a stationary point, i.e., ∇f(x) = 0
  • If all leading principal minors of H(x) are
    positive, then x is a local minimum
  • If, for all k, the leading principal minor of
    H(x) of order k has the same sign as (-1)^k, then
    x is a local maximum
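A sketch combining this slide and the previous one: compute the leading principal minors, then apply the test (assuming x is already known to be stationary):

```python
import numpy as np

def leading_principal_minors(H):
    """Determinants of the top-left i-by-i submatrices, i = 1, ..., n."""
    return [float(np.linalg.det(H[:i, :i]))
            for i in range(1, H.shape[0] + 1)]

def leading_minor_test(H, tol=1e-9):
    """Classify a stationary point from the leading principal minors."""
    minors = leading_principal_minors(H)
    if all(m > tol for m in minors):
        return "local minimum"       # all leading minors positive
    if all(m * (-1) ** k > tol for k, m in enumerate(minors, start=1)):
        return "local maximum"       # order-k minor has sign (-1)^k
    return "inconclusive"

H = np.array([[-2.0, 1.0], [1.0, -2.0]])
print(leading_principal_minors(H))   # [-2.0, 3.0]
print(leading_minor_test(H))         # local maximum
```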

11
Numerical optimization
  • It is difficult to generalize the one-dimensional
    methods
  • Bisection
  • Golden section method
  • to the multi-dimensional case
  • A method that makes use of
  • First-order derivatives
  • One-dimensional optimizations
  • is the method of steepest ascent/descent

12
Method of steepest ascent
  • We will restrict ourselves to maximization
    problems
  • Otherwise, simply replace f by -f

13
Method of steepest ascent
  • The idea behind this method is
  • Consider a particular solution, say x
  • Check whether the gradient at the current
    solution is 0; if so, the current solution is a
    stationary point
  • If the gradient is not zero, we can try to
    improve the current solution
  • Question
  • In which direction should we try to improve the
    solution?
  • Answer: the direction of the gradient at x, ∇f(x)

14
Method of steepest ascent
  • Finding the best point in the direction of
    steepest ascent is an optimization problem in its
    own right
  • This is a one-dimensional optimization problem!
  • The decision variable is the step size λ in
    x + λ∇f(x)
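A sketch of that one-dimensional search, reusing the golden section method mentioned earlier; the objective f, the point x, and the search interval [0, 1] for λ are hypothetical choices for illustration:

```python
import numpy as np

def golden_section_max(g, lo, hi, tol=1e-6):
    """Maximize a unimodal function g of one variable on [lo, hi]."""
    r = (np.sqrt(5) - 1) / 2                 # golden ratio conjugate
    a, b = lo + (1 - r) * (hi - lo), lo + r * (hi - lo)
    while hi - lo > tol:
        if g(a) < g(b):
            lo = a                           # maximum lies in [a, hi]
            a, b = b, lo + r * (hi - lo)
        else:
            hi = b                           # maximum lies in [lo, b]
            a, b = lo + (1 - r) * (hi - lo), a
    return (lo + hi) / 2

# Line search: maximize g(lam) = f(x + lam * grad f(x)) over lam
f = lambda x: -(x[0] - 1)**2 - (x[1] - 2)**2   # hypothetical objective
x = np.array([0.0, 0.0])
g_vec = np.array([2.0, 4.0])                   # grad f at x for this f
print(golden_section_max(lambda lam: f(x + lam * g_vec), 0.0, 1.0))  # ~0.5
```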

15
Method of steepest ascent
  • Start at a point x
  • Follow the direction of steepest ascent
  • Move to the best point in the direction of
    steepest ascent
  • Repeat, and stop as soon as ||∇f(x)|| < ε for a
    small tolerance ε > 0
  • This point is approximately stationary
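Putting the steps together, a minimal sketch of the full method; the line search uses SciPy's minimize_scalar on -f as one possible 1-D solver, and the step-size interval [0, 1] is an assumption:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_ascent(f, grad, x0, eps=1e-6, max_iter=100):
    """Method of steepest ascent: repeatedly move to the best point in
    the gradient direction; stop once the gradient is nearly zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:     # approximately stationary: stop
            break
        # One-dimensional search for the step size lambda in [0, 1]
        res = minimize_scalar(lambda lam: -f(x + lam * g),
                              bounds=(0.0, 1.0), method='bounded')
        x = x + res.x * g
    return x
```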

16
Example
  • Use the method of steepest ascent to approximate
    the optimal solution to the following problem
  • Start at the point (0.5,0.5)
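The objective function on this slide was an image and is not reproduced in the transcript; as a stand-in, here is the method run from (0.5, 0.5) on a hypothetical concave function whose maximizer (1, 2) is known:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: -(x[0] - 1)**2 - (x[1] - 2)**2    # hypothetical objective
grad = lambda x: np.array([-2 * (x[0] - 1), -2 * (x[1] - 2)])

x = np.array([0.5, 0.5])                 # starting point from the slide
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:         # approximately stationary: stop
        break
    res = minimize_scalar(lambda lam: -f(x + lam * g),
                          bounds=(0.0, 1.0), method='bounded')
    x = x + res.x * g
print(x)  # approaches the maximizer (1, 2)
```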