ENGG952 – ENGINEERING COMPUTING
Lecture Note (Week 4)
Lecturer: Dr. Hongtao Zhu
Office: Bld8.106
Email: hongtao@uow.edu.au
Consultation time: 9:30-11:30 Monday and 10:30-12:30 Wednesday

Difference between roots and optima
Roots: f(x) = 0
Optima: f'(x) = 0, with the type of optimum given by the second derivative:
  f''(x) < 0: maximum
  f''(x) > 0: minimum
  f''(x) = 0: inflection point

Outline of the Week-4 lecture – Optimization
▪ One-Dimensional Unconstrained Optimization
  Newton's method
  Golden-section search
▪ Multidimensional Unconstrained Optimization
  Steepest ascent method
▪ Linear Constrained Optimization
  Graphical Solution
  Simplex method

General form of an optimization problem
Find x that maximizes or minimizes the objective function f(x), subject to
  d_i(x) ≤ a_i,  i = 1, 2, ..., m   (inequality constraints)
  e_i(x) = b_i,  i = 1, 2, ..., p   (equality constraints)
where x is an n-dimensional design variable vector.
Degrees of freedom: n − m − p. Condition for a solution: m + p ≤ n; if m + p > n, the problem is over-constrained.

Classifications of optimization problems
1) By the forms of f(x) and the constraints:
• Linear programming: f(x) and the constraints are linear.
• Quadratic programming: f(x) is quadratic and the constraints are linear.
• Nonlinear programming: f(x) is nonlinear and/or the constraints are nonlinear.
2) By constraints:
• Constrained optimization: constraint equations are included.
• Unconstrained optimization: constraint equations are not included.
3) By dimensionality:
• One-dimensional problem: the design variable x is a single variable.
• Multidimensional problem: the design variable vector x consists of two or more variables.

One-Dimensional Unconstrained Optimization
Key features: a single variable, no constraint conditions. Find x that maximizes or minimizes f(x). [Figure: a one-dimensional function f(x) with its maximum and minimum marked.]

Method 1 – Newton's method
The optimum corresponds to f'(x) = 0. If the equation for f'(x) can be obtained, the optimization problem becomes the problem of finding the root of f'(x) = 0, and all of the equation-solving methods learned in Week 2 can be used:
1) Bracketing methods: the bisection method; the false-position method.
2) Open methods: simple fixed-point iteration; the Newton-Raphson method; the secant method.

The Newton-Raphson method for roots of an equation (Week 2) targets f(x) = 0 and uses the first derivative:
  x_{i+1} = x_i − f(x_i)/f'(x_i)
Newton's method for optimization (this week) targets f'(x) = 0 and uses the second derivative:
  x_{i+1} = x_i − f'(x_i)/f''(x_i)

Example 1
Use Newton's method to find the maximum of f(x) = 2 sin(x) − x²/10 with an initial guess of x0 = 2.5 and a stopping criterion εs = 0.1%.
Solution:
  f'(x) = 2 cos(x) − x/5
  f''(x) = −2 sin(x) − 1/5
▪ At the initial guess x0 = 2.5:
  f'(x0) = 2 cos(2.5) − 2.5/5 = −2.1023
  f''(x0) = −2 sin(2.5) − 1/5 = −1.3969
▪ First iteration:
  x1 = x0 − f'(x0)/f''(x0) = 2.5 − (−2.1023)/(−1.3969) = 0.99508
  f'(x1) = 2 cos(0.99508) − 0.99508/5 = 0.8899
  f''(x1) = −2 sin(0.99508) − 1/5 = −1.8776
  εa = |(x1 − x0)/x1| × 100% = |(0.99508 − 2.5)/0.99508| × 100% = 151.24%
▪ Second iteration:
  x2 = x1 − f'(x1)/f''(x1) = 0.99508 − 0.8899/(−1.8776) = 1.46901
  f'(x2) = 2 cos(1.46901) − 1.46901/5 = −0.0906
  f''(x2) = −2 sin(1.46901) − 1/5 = −2.1897
  εa = |(x2 − x1)/x2| × 100% = |(1.46901 − 0.99508)/1.46901| × 100% = 32.26%
The process is repeated until εa < εs; the error sequence is εa = 151.24%, 32.26%, 2.89%, 0.63%, 0.0000038%.

What is the limitation of Newton's method?
Newton's method is impractical where the derivatives cannot be evaluated.
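The iteration above is easy to script. The following is a minimal MATLAB sketch of Newton's method for this example (illustrative code, not from the lecture; the names df, d2f, es, and ea are my own):

df = @(x) 2*cos(x) - x/5;        % first derivative f'(x)
d2f = @(x) -2*sin(x) - 1/5;      % second derivative f''(x)
x = 2.5;                         % initial guess x0
es = 0.1;                        % stopping criterion (percent)
for i = 1:50
    xold = x;
    x = x - df(x)/d2f(x);        % Newton update: x = x - f'(x)/f''(x)
    ea = abs((x - xold)/x)*100;  % approximate relative error (percent)
    if ea < es, break, end
end
fprintf('x_opt = %.5f after %d iterations\n', x, i)

Run as written, this reproduces the error sequence above and stops at x ≈ 1.42755, the same maximum located by fminbnd later in these notes.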
Method 2 – Golden-Section Search (similar to the bisection method)
This method is suitable for finding a single maximum within a known interval with lower bound xl and upper bound xu: the width of the interval bracketing the maximum is reduced at every iteration.

Process for the golden-section search method
Step 1: Choose lower (xl) and upper (xu) guesses.
Step 2: Determine two interior points x1 and x2 according to the golden ratio:
  x1 = xl + d,  x2 = xu − d,  where d = ((√5 − 1)/2)(xu − xl)
The golden ratio is (√5 − 1)/2 ≈ 0.618. For x1 and x2, which value is larger? Since d is more than half the interval width, x1 > x2, so the four points are ordered xl < x2 < x1 < xu.
Step 3:
▪ If f(x1) > f(x2), the maximum cannot lie below x2: set xl = x2 and go to Step 4.
▪ If f(x2) > f(x1), the maximum cannot lie above x1: set xu = x1 and go to Step 5.
Step 4 (for f(x1) > f(x2)):
  x2_new = x1_old (this point is re-used in the next iteration, thanks to the golden factor)
  x1_new = xl + d, with d = ((√5 − 1)/2)(xu − xl) re-calculated for the new interval
Return to Step 3 until the stopping criterion is satisfied.
Step 5 (for f(x1) < f(x2)):
  x1_new = x2_old (this point is re-used in the next iteration, thanks to the golden factor)
  x2_new = xu − d, with d = ((√5 − 1)/2)(xu − xl) re-calculated for the new interval
Return to Step 3 until the stopping criterion is satisfied.

Example
Use the golden-section search to find the maximum of f(x) = 2 sin(x) − x²/10 within the interval from xl = 0 to xu = 4, with εs = 5%.
Solution: lower bound xl = 0, upper bound xu = 4, interval = [0, 4].
(1) First iteration
  d = ((√5 − 1)/2)(4 − 0) = 2.472
  x1 = xl + d = 0 + 2.472 = 2.472
  x2 = xu − d = 4 − 2.472 = 1.528
  f(x2) = 2 sin(1.528) − 1.528²/10 = 1.765
  f(x1) = 2 sin(2.472) − 2.472²/10 = 0.63
f(x2) > f(x1), so the region above x1 is eliminated: set xu = x1, and the interval becomes [0, 2.472].
First-iteration solution: x_opt1 = 1.528, f_opt1 = 1.765.
(2) Second iteration
  d = ((√5 − 1)/2)(2.472 − 0) = 1.528
  x1 = x2_old = 1.528 (equivalently, x1 = xl + d = 0 + 1.528 = 1.528)
  x2 = xu − d = 2.472 − 1.528 = 0.944
  f(x2) = 2 sin(0.944) − 0.944²/10 = 1.531
  f(x1) = f(x2_old) = 1.765
f(x1) > f(x2), so the region below x2 is eliminated: set xl = x2, and the interval becomes [0.944, 2.472].
Second-iteration solution: x_opt2 = 1.528, f_opt2 = 1.765.
As the results of the 1st and 2nd iterations are the same, how is the iteration error calculated? It is based on the interval width rather than on successive estimates:
  εa = (1 − R) |(xu − xl)/x_opt| × 100%,  with R = 0.618
For the interval [0, 2.472]:
  εa = (1 − 0.618) |2.472/1.528| × 100% = 61.8%
The process is repeated, with the current maximum highlighted at every iteration; the error sequence is εa = 100%, 61.8%, 38.2%, 23.6%, 14.6%, 9.5%, 5.9%, 3.6%.

What is the limitation of the golden-section search method?
It involves many function evaluations and is time-consuming.
What is the advantage of the golden-section search method?
It does not need the calculation of derivatives.
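The bookkeeping of the four points is easier to see in code. Below is a minimal MATLAB sketch of the golden-section search for this example (illustrative code, not from the lecture):

f = @(x) 2*sin(x) - x.^2/10;
R = (sqrt(5) - 1)/2;              % golden ratio, ~0.618
xl = 0; xu = 4;                   % bracketing interval
es = 5;                           % stopping criterion (percent)
x1 = xl + R*(xu - xl);
x2 = xu - R*(xu - xl);
for i = 1:100
    if f(x1) > f(x2)              % maximum cannot lie below x2
        xl = x2;  xopt = x1;
        x2 = x1;                  % re-use the interior point
        x1 = xl + R*(xu - xl);    % re-calculate d for the new interval
    else                          % maximum cannot lie above x1
        xu = x1;  xopt = x2;
        x1 = x2;                  % re-use the interior point
        x2 = xu - R*(xu - xl);
    end
    ea = (1 - R)*abs((xu - xl)/xopt)*100;  % interval-based error (percent)
    if ea < es, break, end
end
fprintf('x_opt = %.4f, f_opt = %.4f\n', xopt, f(xopt))

With εs = 5%, the loop stops once the interval-based error falls below 5% (the 3.6% entry in the error sequence above).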
Global optimization and local optimization
Three ways to distinguish the global optimum from a local optimum:
➢ Insight into the behavior of low-dimensional functions can sometimes be obtained graphically.
➢ Finding optima based on widely varying, and perhaps randomly generated, starting guesses, and then selecting the largest of these as the global optimum.
➢ Perturbing the starting point associated with a local optimum and seeing if the routine returns a better point or always returns to the same point.

MATLAB built-in function for single-variable bounded nonlinear function minimization
Syntax: X = fminbnd(FUN,x1,x2,OPTIONS)
Since fminbnd minimizes, the maximum of f(x) = 2 sin(x) − x²/10 is found by minimizing −f(x):

%function for optimization - min
f = inline('-2*sin(x)+x.^2/10','x');
%options
options = optimset('Display','iter','TolFun',1e-8,'TolX',1e-4,'MaxIter',5);
xl = 0;
xu = 4;
%fminbnd Single-variable bounded nonlinear function minimization
[x, fval] = fminbnd(f,xl,xu,options)

Func-count      x          f(x)       Procedure
    1        1.52786    -1.76472      initial
    2        2.47214    -0.629974     golden
    3        0.944272   -1.53098      golden
    4        1.42704    -1.77573      parabolic
    5        1.42576    -1.77572      parabolic
    6        1.42755    -1.77573      parabolic

Exiting: Maximum number of iterations has been exceeded - increase MaxIter option.
Current function value: -1.775726

x =
    1.4275
fval =
   -1.7757

The maximum of f(x) = 2 sin(x) − x²/10 is therefore 1.7757, at x = 1.4275.

Multidimensional Unconstrained Optimization
Key features: several variables, no constraint conditions. Find x that maximizes or minimizes f(x). In the context of ascending a mountain (maximization), f can be pictured either as a 3D mountain or as its 2D topographic map. [Figure: 2D topographic map and 3D mountain.]

Method – Steepest ascent method (gradient method)
The first derivative of a function provides the slope, or tangent, of the function. This is useful information for optimization: for example, if the slope is positive, increasing the independent variable will lead to a higher value of the function. Gradient methods explicitly use derivative information to generate efficient algorithms for locating optima.

Gradient (first derivative)
One dimension: f'(x) evaluated at x = a.
Two dimensions: ∇f = (∂f/∂x) i + (∂f/∂y) j, evaluated at x = a, y = b.
n dimensions: ∇f(x) = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn]^T
The gradient tells us the locally quickest direction toward a higher value.

Hessian matrix
The second derivatives of a function f(x, y) tell us whether an extremum is a maximum or a minimum. The Hessian H of f is
  H = | ∂²f/∂x²    ∂²f/∂x∂y |
      | ∂²f/∂y∂x   ∂²f/∂y²  |
and its determinant is
  |H| = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²
(1) If |H| > 0 and ∂²f/∂x² > 0, f(x, y) has a local minimum.
(2) If |H| > 0 and ∂²f/∂x² < 0, f(x, y) has a local maximum.
(3) If |H| < 0, f(x, y) has a saddle point.
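As a quick illustration of this test (my own example, not from the slides): the function f(x, y) = 2xy + 2x − x² − 2y² used in the steepest ascent example below has constant second derivatives, so its only stationary point can be classified in a few lines of MATLAB:

fxx = -2;  fyy = -4;  fxy = 2;   % second partial derivatives of f (constant)
detH = fxx*fyy - fxy^2;          % |H| = (-2)(-4) - 2^2 = 4
if detH > 0 && fxx > 0
    disp('local minimum')
elseif detH > 0 && fxx < 0
    disp('local maximum')        % this branch fires: |H| = 4 > 0, fxx = -2 < 0
elseif detH < 0
    disp('saddle point')
end

So the stationary point of this f is a maximum, which is the point the steepest ascent iterations below converge to.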
Steepest Ascent Method
➢ The gradient tells us the "best direction" in which to search.
➢ The "best value" (best step) along that search direction still needs to be determined.
Strategy: correlate x and y to a step size h along the gradient direction,
  x = x0 + (∂f/∂x) h
  y = y0 + (∂f/∂y) h
so that f(x, y) becomes a function g(h) of the single variable h; then search for the best step by solving g'(h) = 0. This turns each step into a one-dimensional optimization problem.

Example
Maximize the function f(x, y) = 2xy + 2x − x² − 2y², using the initial guesses x0 = −1 and y0 = 1.
Solution:
(1) First iteration
▪ Search for the best direction:
  ∂f/∂x = 2y + 2 − 2x = 2(1) + 2 − 2(−1) = 6 at (−1, 1)
  ∂f/∂y = 2x − 4y = 2(−1) − 4(1) = −6 at (−1, 1)
The gradient vector is ∇f = 6i − 6j.
▪ Determine the function of the best step by correlating x and y to h:
  x = x0 + (∂f/∂x) h = −1 + 6h
  y = y0 + (∂f/∂y) h = 1 − 6h
  g(h) = f(−1 + 6h, 1 − 6h)
       = 2(−1 + 6h)(1 − 6h) + 2(−1 + 6h) − (−1 + 6h)² − 2(1 − 6h)²
       = −180h² + 72h − 7
▪ Find the best step, determined by setting the derivative of g(h) to zero:
  g'(h) = −360h + 72 = 0  →  h = 0.2
▪ Update to the new point:
  x1 = −1 + 6h = −1 + 6(0.2) = 0.2
  y1 = 1 − 6h = 1 − 6(0.2) = −0.2
(2) Second iteration
▪ Search for the best direction:
  ∂f/∂x = 2y + 2 − 2x = 2(−0.2) + 2 − 2(0.2) = 1.2 at (0.2, −0.2)
  ∂f/∂y = 2x − 4y = 2(0.2) − 4(−0.2) = 1.2 at (0.2, −0.2)
▪ Determine the function of the best step:
  x = 0.2 + 1.2h,  y = −0.2 + 1.2h
  g(h) = f(0.2 + 1.2h, −0.2 + 1.2h)
       = 2(0.2 + 1.2h)(−0.2 + 1.2h) + 2(0.2 + 1.2h) − (0.2 + 1.2h)² − 2(−0.2 + 1.2h)²
       = −1.44h² + 2.88h + 0.2
▪ Find the best step:
  g'(h) = −2.88h + 2.88 = 0  →  h = 1
▪ Update to the new point:
  x2 = 0.2 + 1.2(1) = 1.4
  y2 = −0.2 + 1.2(1) = 1
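These iterations can be automated. Below is a minimal MATLAB sketch of the steepest ascent loop (illustrative, not lecture code): each one-dimensional sub-problem g'(h) = 0 is solved numerically here with fminbnd applied to −g(h), and the bracket [0, 2] for h is an assumption that suits this example.

f  = @(p) 2*p(1)*p(2) + 2*p(1) - p(1)^2 - 2*p(2)^2;
gr = @(p) [2*p(2) + 2 - 2*p(1); 2*p(1) - 4*p(2)];   % analytic gradient
p = [-1; 1];                                        % initial guesses (x0, y0)
for i = 1:20
    d = gr(p);                                      % best (steepest) direction
    if norm(d) < 1e-6, break, end                   % gradient ~ 0 at the optimum
    g = @(h) -f(p + h*d);                           % minus g(h), for minimization
    h = fminbnd(g, 0, 2);                           % best step along the direction
    p = p + h*d;                                    % update to the new point
end
fprintf('x = %.4f, y = %.4f, f = %.4f\n', p(1), p(2), f(p))

The first two passes reproduce h = 0.2 and h = 1 from the hand calculation, and the loop converges to the maximum f(2, 1) = 2.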
MATLAB built-in function for multidimensional unconstrained nonlinear minimization
Syntax: X = fminsearch(FUN,X0,OPTIONS)
To maximize f(x, y) = 2xy + 2x − x² − 2y², minimize −f(x, y):

%function for optimization - min
f = inline('-2*x(1)*x(2) - 2*x(1) + x(1).^2 + 2*x(2).^2');
%options
options = optimset('Display','iter','TolFun',1e-8,'TolX',1e-4,'MaxIter',1000);
%initial guesses
x0 = [-1, 1];
%fminsearch Multidimensional unconstrained nonlinear minimization (Nelder-Mead)
[xf,fval] = fminsearch(f,x0,options)

Iteration   Func-count   min f(x)    Procedure
    0            1        7
    1            3        7          initial simplex
    2            4        7          reflect
    3            6        6.41562    expand
    4            8        6.12266    expand
    5           10        4.90785    expand
    6           12        3.82446    expand
    7           14        1.51873    expand
    ...
   44           84       -2          contract outside
   45           86       -2          contract inside

Optimization terminated: the current x satisfies the termination criteria using OPTIONS.TolX of 1.000000e-04 and F(X) satisfies the convergence criteria using OPTIONS.TolFun of 1.000000e-08

xf =
    1.9999    1.0000
fval =
   -2.0000

Constrained Optimization (Linear Programming)
Key features: several variables, linear constraint conditions, and a linear objective function. Linear programming is an optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources.

Standard form of linear programming
Objective function:
  Maximize Z = c1 x1 + c2 x2 + ... + cn xn
subject to
  a_i1 x1 + a_i2 x2 + ... + a_in xn ≤ b_i,  i = 1, 2, ..., m
  x_i ≥ 0,  i = 1, 2, ..., n

Linear example – gas processing
Raw gas is processed into two grades of heating gas, regular and premium quality. These grades are in high demand (that is, they are guaranteed to sell) and yield different profits to the company. However, the raw gas supply is constrained to no more than 77 m³ each week, and the consumption of raw gas in manufacturing the regular and premium heating gas differs. Further, only one of the grades can be produced at a time, and the facility is open for only 80 hr/week. Finally, there is limited on-site storage for each of the products. Develop a linear programming formulation to maximize the profit of this operation.

Solution:
(1) Let x1 be the amount of regular gas and x2 the amount of premium gas.
(2) Total profit: Z = 150 x1 + 175 x2
(3) Total raw gas (constraint): 7 x1 + 11 x2 ≤ 77
(4) Total production time (constraint): 10 x1 + 8 x2 ≤ 80
(5) Storage for regular gas (constraint): x1 ≤ 9
(6) Storage for premium gas (constraint): x2 ≤ 6
The complete formulation:
  Maximize Z = 150 x1 + 175 x2
  subject to
    7 x1 + 11 x2 ≤ 77
    10 x1 + 8 x2 ≤ 80
    x1 ≤ 9
    x2 ≤ 6
    x1 ≥ 0, x2 ≥ 0

Method 1 – Graphical Solution
Graphical solutions are limited to two or three dimensions.
➢ Step 1: The constraint lines surround a region, called the feasible solution space, that includes all combinations of x1 and x2 obeying the constraints and hence representing feasible solutions.
➢ Step 2: The objective function for a particular value of Z can be plotted as another straight line and superimposed on this space. The value of Z is then adjusted until it is at its maximum value while still touching the feasible space. This value of Z represents the optimal solution.
[Figure: the gas-processing constraints plotted in the (x1, x2) plane, with lines of increasing Z superimposed; the maximum Z ≈ 1400 occurs at a corner of the feasible space.]

Solutions of a linear programming problem
• Unique solution
• Alternate solutions
• No feasible solution
• Unbounded solution

Features of the unique solution
➢ Our problems often involve a unique solution.
➢ The optimum always occurs at one of the corner points where two constraints meet. Such a point is known formally as an extreme point.
➢ Not every extreme point is feasible, that is, satisfies all constraints.
➢ Once all feasible extreme points are identified, the one yielding the best value of the objective function represents the optimum solution.
➢ The simplex method offers a strategy to locate the optimum through a sequence of feasible extreme points.

The simplex method
➢ Assume that the optimal solution will be an extreme point.
➢ The constraint inequalities are reformulated as equalities by introducing slack variables S1, S2, ...:
  7 x1 + 11 x2 ≤ 77  →  7 x1 + 11 x2 + S1 = 77,  S1 ≥ 0
  10 x1 + 8 x2 ≤ 80  →  10 x1 + 8 x2 + S2 = 80,  S2 ≥ 0
  x1 ≤ 9             →  x1 + S3 = 9,             S3 ≥ 0
  x2 ≤ 6             →  x2 + S4 = 6,             S4 ≥ 0

Reformulation of the linear programming problem
Adding the slack variables, the objective function Maximize Z = 150 x1 + 175 x2 becomes
  Z − 150 x1 − 175 x2 − 0 S1 − 0 S2 − 0 S3 − 0 S4 = 0
subject to
  7 x1 + 11 x2 + S1 = 77
  10 x1 + 8 x2 + S2 = 80
  x1 + S3 = 9
  x2 + S4 = 6
  x1, x2, S1, S2, S3, S4 ≥ 0
There are four constraint equations and six variables.

Simplex method implementation (an efficient algorithm to search the maximum extreme point)
Writing every equation with all six variables (using zero coefficients where a variable does not appear) and arranging the coefficients row by row builds the tableau:

Initial tableau
Basic   Z    x1    x2    S1   S2   S3   S4   Solution
Z       1   -150  -175   0    0    0    0       0
S1      0     7    11    1    0    0    0      77
S2      0    10     8    0    1    0    0      80
S3      0     1     0    0    0    1    0       9
S4      0     0     1    0    0    0    1       6

Procedure to solve the tableau (a MATLAB sketch of this procedure follows the list):
1. Locate the most negative entry in the objective function row. The column of this entry is called the entering column.
2. Form the ratios of the entries in the Solution column to the corresponding entries in the entering column. The departing row corresponds to the smallest nonnegative ratio.
3. The entry at the intersection of the departing row and the entering column is called the pivot.
4. Use elementary row operations so that the pivot becomes 1 and all other entries in the entering column become 0. This process is called pivoting.
5. If all entries in the objective row are zero or positive, this is the final tableau. If not, repeat from step 1.
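Below is a minimal MATLAB sketch of this pivoting procedure applied to the gas-processing tableau (illustrative code, not from the lecture; the lecture works the tableau by hand):

% Columns: [Z  x1  x2  S1  S2  S3  S4  Solution]
T = [1 -150 -175 0 0 0 0  0;     % objective row
     0    7   11 1 0 0 0 77;     % raw gas
     0   10    8 0 1 0 0 80;     % production time
     0    1    0 0 0 1 0  9;     % storage, regular
     0    0    1 0 0 0 1  6];    % storage, premium
while min(T(1,2:7)) < 0
    [~, c] = min(T(1,2:7));  c = c + 1;       % entering column (most negative)
    ratios = T(2:end,end) ./ T(2:end,c);      % ratio test on constraint rows
    ratios(T(2:end,c) <= 0) = Inf;            % ignore non-positive pivot entries
    [~, r] = min(ratios);  r = r + 1;         % departing row (smallest ratio)
    T(r,:) = T(r,:) / T(r,c);                 % make the pivot equal to 1
    for k = [1:r-1, r+1:size(T,1)]            % Gauss-Jordan: zero the rest of
        T(k,:) = T(k,:) - T(k,c)*T(r,:);      % the entering column
    end
end
fprintf('Maximum Z = %.2f\n', T(1,end))       % prints 1413.89

The three passes of this loop visit exactly the three tableaus worked out by hand below.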
First solution: pivoting operation

Basic   Z    x1    x2    S1   S2   S3   S4   Solution   Ratio
Z       1   -150  -175   0    0    0    0       0
S1      0     7    11    1    0    0    0      77       77/11 = 7
S2      0    10     8    0    1    0    0      80       80/8 = 10
S3      0     1     0    0    0    1    0       9       9/0 = ∞
S4      0     0     1    0    0    0    1       6       6/1 = 6  ← departing

The entering variable is x2, as −175 is the most negative entry in the objective row. The departing variable is S4, which gives the smallest nonnegative ratio of Solution to entering-column entry (77/11 = 7, 80/8 = 10, 9/0 = ∞, 6/1 = 6). The pivot is the entry in the departing row (S4) and the entering column (x2). As the pivot is already 1, Gauss-Jordan elimination can be applied directly to obtain the improved solution:
  Row_Z  ← Row_Z − (−175) × Row_S4
  Row_S1 ← Row_S1 − 11 × Row_S4
  Row_S2 ← Row_S2 − 8 × Row_S4
  Row_S3 ← Row_S3 − 0 × Row_S4
and the basic variable of the pivot row becomes x2:

Basic   Z    x1    x2    S1   S2   S3   S4   Solution
Z       1   -150    0    0    0    0   175    1050
S1      0     7     0    1    0    0   -11      11
S2      0    10     0    0    1    0    -8      32
S3      0     1     0    0    0    1     0       9
x2      0     0     1    0    0    0     1       6

Checking: there is still a negative coefficient in the objective row, so further optimization is needed.

Second solution: pivoting operation

Basic   Z    x1    x2    S1   S2   S3   S4   Solution   Ratio
Z       1   -150    0    0    0    0   175    1050
S1      0     7     0    1    0    0   -11      11      11/7 = 1.57  ← departing
S2      0    10     0    0    1    0    -8      32      32/10 = 3.2
S3      0     1     0    0    0    1     0       9      9/1 = 9
x2      0     0     1    0    0    0     1       6      6/0 = ∞

The entering variable is x1, as −150 is the most negative entry in the objective row, and the departing variable is S1, which gives the smallest nonnegative ratio. The pivot is the entry in the departing row (S1) and the entering column (x1). As the pivot (7) is not 1, elementary row operations are needed first: divide row S1 by the pivot,
  Row_S1 ← Row_S1 / 7

Basic   Z    x1    x2    S1      S2   S3   S4      Solution
Z       1   -150    0    0       0    0    175      1050
S1      0     1     0    0.143   0    0   -1.571     1.57
S2      0    10     0    0       1    0   -8        32
S3      0     1     0    0       0    1    0         9
x2      0     0     1    0       0    0    1         6

then apply Gauss-Jordan elimination to obtain the improved solution:
  Row_Z  ← Row_Z − (−150) × Row_S1
  Row_S2 ← Row_S2 − 10 × Row_S1
  Row_S3 ← Row_S3 − 1 × Row_S1
  Row_x2 ← Row_x2 − 0 × Row_S1

Basic   Z    x1    x2    S1      S2   S3   S4      Solution
Z       1     0     0   21.43    0    0   -60.71   1285.71
x1      0     1     0    0.143   0    0   -1.57       1.57
S2      0     0     0   -1.43    1    0    7.71      16.29
S3      0     0     0   -0.14    0    1    1.57       7.43
x2      0     0     1    0       0    0    1          6

Checking: there is still a negative coefficient (−60.71) in the objective row, so further optimization is needed.

Third solution: pivoting operation

Basic   Z    x1    x2    S1      S2   S3   S4      Solution   Ratio
Z       1     0     0   21.43    0    0   -60.71   1285.71
x1      0     1     0    0.143   0    0   -1.57       1.57    1.57/(−1.57) = −1
S2      0     0     0   -1.43    1    0    7.71      16.29    16.29/7.71 = 2.11  ← departing
S3      0     0     0   -0.14    0    1    1.57       7.43    7.43/1.57 = 4.73
x2      0     0     1    0       0    0    1          6       6/1 = 6

The entering variable is S4, as −60.71 is the only negative entry in the objective row, and the departing variable is S2, which gives the smallest nonnegative ratio (the negative ratio in row x1 is ignored). The pivot is the entry in the departing row (S2) and the entering column (S4). As the pivot (7.71) is not 1, divide row S2 by the pivot,
  Row_S2 ← Row_S2 / 7.71

Basic   Z    x1    x2    S1      S2      S3   S4      Solution
Z       1     0     0   21.43    0       0   -60.71   1285.71
x1      0     1     0    0.143   0       0   -1.57       1.57
S2      0     0     0   -0.185   0.130   0    1          2.113
S3      0     0     0   -0.14    0       1    1.57       7.43
x2      0     0     1    0       0       0    1          6

then apply Gauss-Jordan elimination to obtain the improved solution:
  Row_Z  ← Row_Z − (−60.71) × Row_S2
  Row_x1 ← Row_x1 − (−1.57) × Row_S2
  Row_S3 ← Row_S3 − 1.57 × Row_S2
  Row_x2 ← Row_x2 − 1 × Row_S2

Optimum solution:

Basic   Z    x1    x2    S1      S2      S3   S4   Solution
Z       1     0     0   10.19    7.87    0    0    1413.89
x1      0     1     0   -0.15    0.20    0    0       4.89
S4      0     0     0   -0.19    0.13    0    1       2.11
S3      0     0     0    0.15   -0.20    1    0       4.11
x2      0     0     1    0.19   -0.13    0    0       3.89

Checking: there is no negative coefficient in the objective row, so the optimum has been obtained:
  x1 = 4.89, x2 = 3.89, Z = 1413.89
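As a quick sanity check (my own addition, not on the slides), substituting the optimum back into the objective and the constraints shows which constraints are binding:

x1 = 4.8889;  x2 = 3.8889;    % tableau optimum to four decimals
Z = 150*x1 + 175*x2           % 1413.89, matches the objective row
rawgas = 7*x1 + 11*x2         % ~77: the raw-gas constraint is binding
prodtime = 10*x1 + 8*x2       % ~80: the production-time constraint is binding

The storage constraints are slack (x1 = 4.89 ≤ 9 and x2 = 3.89 ≤ 6), which matches the values S3 = 4.11 and S4 = 2.11 left in the final tableau.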
MATLAB built-in function for linear programming
Syntax: X = linprog(f,A,b,Aeq,beq)
linprog minimizes, so to maximize Z = 150 x1 + 175 x2 subject to
  7 x1 + 11 x2 ≤ 77
  10 x1 + 8 x2 ≤ 80
  x1 ≤ 9
  x2 ≤ 6
  x1 ≥ 0, x2 ≥ 0
the objective coefficients are negated:

%function for optimization - min
f = [-150 -175];
%options
options = optimset('Display','iter','TolFun',1e-8,'TolX',1e-4);
%inequality constraints A*x <= b
A = [7 11; 10 8; 1 0; 0 1];
b = [77; 80; 9; 6];
%linprog Linear programming - min
%[x,fval] = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)
[xf,fval,EXITFLAG] = linprog(f,A,b)

Optimal solution found.
xf =
    4.8889
    3.8889
fval =
  -1.4139e+03
EXITFLAG =
     1

Conclusion
▪ One-Dimensional Unconstrained Optimization
  Newton's method is used when the derivative functions can be determined. The golden-section search is suitable for finding a single maximum within a known interval.
▪ Multidimensional Unconstrained Optimization
  The steepest ascent method is a popular method for finding a maximum; it can be pictured as climbing a mountain (maximization) along the locally steepest direction.
▪ Linear Constrained Optimization
  The simplex method is an optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources. It is very useful in engineering practice.