Optimization models employ algorithms that compare the measures of effectiveness of a system and attempt to yield the optimal solution, that is, the one having the most desirable value of the adopted measures. In other words, an optimization model applies an optimum-seeking algorithm, which searches among the alternative solutions to select the best one. The general class of such optimum-seeking algorithms is called mathematical programming, which includes linear programming, nonlinear programming, dynamic programming, and so on.
The main advantage of optimization models is that the optimal solution to a multidimensional (or multivariate) problem can be found readily by an efficient search algorithm. Their main limitation, generally dictated by the solution techniques available, is that drastic simplifications of real-life systems are sometimes inevitable to make the model mathematically tractable. Consequently, it is important to recognize that the optimal solution so derived is for a rather restricted case; that is, the solution is optimal to the simplified problem, and it is the optimal solution to the real-life problem of interest only to the extent that the simplifications are not damaging.
All optimization models consist of three basic elements: (1) decision variables and parameters, (2) constraints, and (3) objective functions. Decision variables are the unknown variables that are to be determined from the solution of the model, whereas parameters describe the characteristics of the system. Decision variables generally are controllable, whereas model parameters may or may not be controllable in real-life problems. Constraints describe the physical, economical, legal, and other limitations of the system expressed in mathematical form. Constraints, where they exist, must be included in the model to limit the decision variables to their feasible values. Objective functions are measures of the effectiveness of system performance and are also expressed as mathematical functions of decision variables. In a mathematical programming model, the value of the objective function is to be optimized.
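To make these three elements concrete, the following is a minimal sketch of a small linear program solved with SciPy's linprog; the objective coefficients and constraint values are illustrative assumptions, not an example from the text. Here x1 and x2 are the decision variables, the coefficients are the parameters, the inequality and nonnegativity conditions are the constraints, and the quantity being maximized is the objective function.

```python
from scipy.optimize import linprog

# Objective: maximize 3*x1 + 5*x2; linprog minimizes, so negate the coefficients
c = [-3.0, -5.0]

# One linear constraint: x1 + 2*x2 <= 10 (coefficients are assumed parameters)
A_ub = [[1.0, 2.0]]
b_ub = [10.0]

# Nonnegativity bounds on both decision variables: 0 <= x_j < infinity
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal decision variables and maximized objective
```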
The general form of an optimization model can be expressed as
Optimize
$$x_0 = f(x_1, x_2, \ldots, x_n) \tag{8.1a}$$
Subject to
$$g_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, m \tag{8.1b}$$
$$a_j \le x_j \le b_j, \quad j = 1, 2, \ldots, n \tag{8.1c}$$
where $f(\cdot)$ and $g_i(\cdot)$ are, respectively, the general expressions for the objective function and constraint equations, which can be linear or nonlinear. The constraint set by Eq. (8.1c) is called the bounding constraint, indicating that the decision variables $x_j$ are bounded by their respective lower bound $a_j$ and upper bound $b_j$. The most commonly seen bounding constraint is the nonnegativity constraint, with the lower bound being 0 and the upper bound being infinity.
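As a sketch of how the general model of Eqs. (8.1a) through (8.1c) might be set up numerically, the following uses SciPy's minimize with a single equality constraint and nonnegativity bounds; the particular objective and constraint functions are illustrative assumptions.

```python
from scipy.optimize import minimize

# Objective x0 = f(x1, x2), as in Eq. (8.1a); an assumed nonlinear example
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# One equality constraint g1(x1, x2) = 0, as in Eq. (8.1b)
constraints = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0}]

# Bounding constraints a_j <= x_j <= b_j, as in Eq. (8.1c); here the
# common nonnegativity case: lower bound 0, upper bound infinity (None)
bounds = [(0.0, None), (0.0, None)]

result = minimize(f, x0=[0.5, 0.5], bounds=bounds, constraints=constraints)
print(result.x, result.fun)  # optimal decision variables and objective value
```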
In the preceding formulation, the decision variables $x_j$ generally are controllable inputs. A solution to the problem consists of a set of values, one for each decision variable in the system. A solution can be feasible or infeasible. A feasible solution to an optimization problem is one that satisfies all system constraints simultaneously; that is, a feasible solution is an element of the feasible solution space defined by the constraint equations. The solution in the feasible solution space that yields the most desirable objective function value is called the optimum feasible solution, or simply the optimum solution.
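Feasibility can be checked directly by evaluating every constraint at a candidate point, as in this toy sketch; the equality constraint and bounds reuse the illustrative assumptions from the example above.

```python
# Toy feasibility check: a point is feasible only if it satisfies every
# constraint simultaneously. The equality constraint g(x) = x1 + x2 - 2 = 0
# and the nonnegativity bounds are illustrative assumptions.
def is_feasible(x, tol=1e-8):
    g = x[0] + x[1] - 2.0                       # equality constraint g(x) = 0
    return abs(g) <= tol and all(xj >= 0.0 for xj in x)

print(is_feasible([1.0, 1.0]))   # True: lies inside the feasible solution space
print(is_feasible([3.0, -1.0]))  # False: satisfies g but violates a bound
```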
The feasible solution space of an optimization model can be classified as either convex or nonconvex. Schematic sketches of convex and nonconvex feasible solution spaces are shown in Fig. 8.1. The convexity of the feasible region of an optimization problem dictates whether the optimal solution obtained is a global optimum or only a local optimum. A global optimum is the best solution to the problem within the entire feasible space, whereas a local optimum is the best solution only within a neighborhood of the solution point.
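The practical consequence is easy to see with a one-dimensional nonconvex objective: a local search started in different regions can return different optima. The function and starting points below are illustrative assumptions.

```python
from scipy.optimize import minimize

# Nonconvex objective with two basins: a global minimum near x = -1 and a
# local minimum near x = +1 (the function is an assumed example)
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]

# A gradient-based local search returns whichever optimum is nearest its start
local = minimize(f, x0=[2.0])   # typically lands on the local optimum, x ~ +1
best = minimize(f, x0=[-2.0])   # lands on the global optimum, x ~ -1
print(local.x, local.fun)       # approximately [0.96], 0.29
print(best.x, best.fun)         # approximately [-1.03], -0.31
```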