Hydrosystems Engineering Reliability Assessment and Risk Analysis

Tangible costs in risk-based design

Design of a hydrosystem infrastructure, by nature, is an optimization problem consisting of an analysis of the hydraulic performance of the structure to convey flow across or through the structure and a determination of the most economical design alternative. The objective function is to minimize the sum of the capital investment cost, the expected flood damage costs, and the operation and maintenance costs. The relevant variables and parameters associated with the investment cost and the expected damage costs of a hydraulic structure, e.g., a highway drainage structure, are listed in Tables 8.1 and 8.2, respectively. The maintenance costs over the service life of the structure generally are treated as a yearly constant. Based on Tables 8.1 and 8.2, the information needed for the risk-based design of hydraulic structures can be categorized into four types:

1. Hydrologic/physiographic data include flood and precipitation data, drainage area, channel bottom slope, and drainage basin slope. These are needed to predict the magnitude of hydrologic events such as streamflow and rainfall by frequency analysis and/or regional analysis.

TABLE 8.1 Variables and Parameters Relevant in Evaluating Capital Investment Cost of Highway Drainage Structures

Structure type | Unit costs | Design variables
Pipe culverts | Unit cost of culvert | Number of pipes, pipe size, pipe length, pipe material
Box culverts | Unit cost of concrete, unit cost of steel | Number of barrels, length of barrel, width of barrel, quantity of concrete, quantity of steel
Bridges | Unit cost of bridge | Bridge length, bridge width

SOURCE: After Tung et al. (1993).

2. Hydraulic data include flood-plain slopes, geometry of the channel cross section, roughness coefficients, size of the structural opening, and height of the embankment. These are needed to determine the flow-carrying capacities of hydraulic structures and to perform hydraulic analysis.

TABLE 8.2 Damage Categories with Related Economic Variables and Site Characteristics in Risk-Based Design of Highway Drainage Structures

Damage category: Floodplain property damage (losses to crops, losses to buildings)
Economic variables: types of crops, economic value of crops, types of buildings, economic values of buildings and contents
Site characteristics: location of crop fields, location of buildings, physical layout of drainage structures, roadway geometry, flood characteristics (stream valley cross section, slope of channel profile, channel and floodplain roughness, flood magnitude, flood hydrograph)

Damage category: Damage to pavement and embankment (pavement damage, embankment damage)
Economic variables: material cost of pavement, material cost of embankment, equipment costs, labor costs, repair rate for pavement and embankment
Site characteristics: overtopping duration, depth of overtopping, total area of pavement, total volume of embankment, types of drainage structure and layout, roadway geometry

Damage category: Traffic-related losses (increased travel cost due to detour, lost time of vehicle occupants, increased risk of accidents on a flooded highway)
Economic variables: rate of repair, operational cost of vehicle, distribution of income for vehicle occupants, cost of vehicle accident, rate of accident, duration of repair
Site characteristics: average daily traffic volume, composition of vehicle types, length of normal detour paths, flood hydrograph, duration and depth of overtopping

SOURCE: After Tung and Bao (1990).

3. Structural data include material of substructures and layout of structure.

4. Economic data include (a) type, location, distribution, and economic value of upstream properties such as crops and buildings; (b) unit costs of structural materials, equipment, vehicle operation, accidents, occupancy, and labor; (c) depth and duration of overtopping, rate of repair, and rate of accidents; and (d) time of repair and length of detour.

In the design of hydrosystem infrastructures, the installation cost often depends on environmental conditions, such as the location of the structure, geomorphic and geologic conditions, the soil type at the structure site, the type and price of construction material, hydraulic and flow conditions, the recovery factor of the capital investment, and labor and transportation costs. In reality, these cost-affecting factors introduce uncertainties into the cost functions used in the analysis. However, a practical way to incorporate economic uncertainties in the risk-based design of hydrosystem infrastructures remains to be developed.

Historical development of hydrosystem design methods

The evolution of hydrosystem design methods can be roughly classified into four stages: (1) historical event-based design, (2) return-period design, (3) conven­tional risk-based design, and (4) optimal risk-based design with consideration given to a variety of uncertainties.

Historical event-based design. In the design of hydrosystem infrastructures and the establishment of land-use management practices to prevent and/or reduce damages resulting from natural disasters, the risk (damage) assessment typically has been implicit. The earliest structures and land-use management approaches for flood protection were designed or established on the basis of their ability to withstand previous disastrous floods. For example, Chow (1962) noted that the Dun waterway table used to design railroad bridges in the early 1900s was primarily determined from channel areas corresponding to high-water marks studied during and after floods. Thus previous large floods of unknown frequency could pass through the designed bridges safely. Also, after a devastating flood on the Mississippi River in 1790, a homeowner in Ste. Genevieve, Missouri, rebuilt his house outside the boundary of that flood. Similar rules were applied in the design of coastal-protection works in The Netherlands at the time the Zuiderzee was closed (1927-1932) (Vrijling, 1993).

Rules based on previous experience work well in some cases. For example, the house in Missouri was not flooded until the 1993 flood on the Mississippi River, and the Zuiderzee protection works survived the 1953 storm that devastated the southwestern part of The Netherlands. However, in most cases these methods are inadequate because human experience with floods and other natural hazards does not span a broad enough range of events. As noted by Vrijling (1993), "One is always one step behind when a policy is only based on historical facts."

Return-period design. In the early part of the twentieth century, the concept of frequency analysis began to emerge as a method to extend limited data on extreme events to probabilistic estimates of the magnitude of rarely occurring events. Frequency analysis of observed events is a key aspect of meteorologic, hydrologic, and seismic hazard analyses. Thus, using frequency-analysis methods, it is possible to estimate events with magnitudes beyond those that have been observed. This necessitates the selection of a societally acceptable hazard frequency (see Sec. 8.3.6).

Using the return-period design approach, a hydraulic engineer first determines the design discharge from a frequency-discharge relationship by selecting an appropriate design frequency or return period. The design discharge then is used to determine the size and layout of the hydrosystem that has a satisfactory hydraulic performance. In the return-period design method, selection of the design return period is crucial, and once the design return period is determined, it remains fixed throughout the whole design process. In the past, the design return period was selected subjectively on the basis of an individual's experience or the societally acceptable hazard frequency (Sec. 8.3.6). Selection of the design return period is a complex procedure that involves considerations of economic, social, legal, and other factors, but the procedure does not account for these factors explicitly.
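The frequency-analysis step just described can be sketched numerically. The example below is a minimal illustration, assuming annual-maximum floods follow a Gumbel (EV1) distribution fitted by the method of moments; the sample mean, standard deviation, and return periods are hypothetical.

```python
import math

def gumbel_design_discharge(mean, std, return_period):
    """Estimate the T-year flood quantile from a Gumbel (EV1)
    distribution fitted by the method of moments."""
    # Method-of-moments estimates of the Gumbel parameters
    alpha = math.sqrt(6.0) * std / math.pi   # scale parameter
    beta = mean - 0.5772 * alpha             # location (0.5772 = Euler's constant)
    # Non-exceedance probability corresponding to return period T
    p = 1.0 - 1.0 / return_period
    # Inverse Gumbel CDF gives the design discharge
    return beta - alpha * math.log(-math.log(p))

# Hypothetical annual-maximum flood statistics (m^3/s)
q100 = gumbel_design_discharge(mean=500.0, std=150.0, return_period=100)
print(round(q100, 1))
```

In this illustrative example the 100-year discharge is roughly twice the mean annual flood; an actual design would fit candidate distributions to the observed record and test their goodness of fit.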

Conventional risk-based design. Risk-based design is a procedure that evaluates design alternatives by considering the tradeoff between the investment cost and the expected economic losses due to failures. Specifically, conventional risk-based design considers the inherent hydrologic uncertainty in calculating the expected economic losses. In the risk-based design procedure, the design return period is a decision variable instead of a preselected design parameter, as in the return-period design procedure.

The concept of risk-based design has been recognized for many years. As early as 1936, Congress passed the Flood Control Act (U.S. Statutes 1570), in which consideration of failure consequences in the design procedure was advocated. However, the economic risks or the expected flood losses were not considered explicitly until the early 1960s. Pritchett's work (1964) was one of the early attempts to apply the risk-based hydraulic design concept to highway culverts. At four actual locations, Pritchett calculated the investment costs and expected flood damage costs on an annual basis for several design alternatives, among which the most economical one was selected. The results indicated that a more economical solution could be reached by selecting smaller culvert sizes compared with the traditional return-period method used by the California Division of Highways. The conventional approach has been applied to the design of highway drainage structures such as culverts (Young et al., 1974; Corry et al., 1980) and bridges (Schneider and Wilson, 1980). Inherent randomness of hydrologic processes has been integrated with reliability analysis in seismic, structural, and geotechnical aspects in the design of new dams (Pate-Cornell and Tagaras, 1986) and the evaluation of alternatives for rehabilitating existing dams (McCann et al., 1984; Bureau of Reclamation, 1986; National Research Council, 1983).

Risk-based design considering other uncertainties. In the conventional risk-based hydraulic design procedure, economic risks are calculated considering only the randomness of hydrologic events. In reality, there are various types of uncertainties, as described in Sec. 1.2, in a hydrosystem infrastructure design. Advances have been made to incorporate other aspects of uncertainty in the design of various hydrosystem infrastructures. For example, both hydrologic and hydraulic uncertainties were considered in the design of highway drainage structures (Mays, 1979; Tung and Mays, 1980, 1982; Tung and Bao, 1990), storm sewer systems (Yen and Ang, 1971; Yen and Jun, 1984; Tang and Yen, 1972; Tang et al., 1975, 1976), levee systems (Tung and Mays, 1981b), riprap design of stable channels (Tung, 1994), and river diversion (Afshar et al., 1994). Inherent hydrologic uncertainty, along with parameter and model uncertainties, was considered in the design of levee systems (Wood, 1977; Bodo and Unny, 1976). Economic uncertainty, along with hydrologic and hydraulic uncertainties, has been considered in flood-damage-reduction projects (U.S. Army Corps of Engineers, 1996).

Basic concept

The basic concept of risk-based design is shown schematically in Fig. 8.9. The risk function, accounting for the uncertainties of various factors, can be obtained using the reliability computation procedures described in earlier chapters. The risk function can also account for the potential undesirable consequences associated with the failure of hydrosystem infrastructures. For simplicity, only the tangible damage cost is presented herein.

Risk costs associated with the failure of a hydrosystem infrastructure cannot be predicted accurately from year to year. A practical way is to quantify risk cost using an expected value on an annual basis. The total annual expected cost (TAEC) is the sum of the annual installation cost and annual expected damage cost, which can be expressed mathematically as

TAEC(x) = FC(x) × CRF + E(D|x)        (8.29)

where FC is the first or total installation cost, which is a function of the decision vector x defined by the size and configuration of the hydraulic structure, E(D|x) is the annual expected damage cost associated with structural failure, and CRF is the capital recovery factor, which brings the present worth of the installation costs to an annual basis and can be computed as

CRF = i(1 + i)^T / [(1 + i)^T − 1]        (8.30)

Figure 8.9 Schematic diagram of optimal risk-based design (vertical axis: annual cost).

with T and i being the expected service life of the structure and the interest rate, respectively.
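Eqs. (8.29) and (8.30) can be evaluated directly. The sketch below uses hypothetical cost figures, interest rate, and service life; it is illustrative only.

```python
def capital_recovery_factor(i, T):
    """Eq. (8.30): CRF = i(1+i)^T / ((1+i)^T - 1), which converts a
    present-worth installation cost to an equivalent uniform annual cost."""
    growth = (1.0 + i) ** T
    return i * growth / (growth - 1.0)

def total_annual_expected_cost(first_cost, i, T, annual_expected_damage):
    """Eq. (8.29): TAEC = FC * CRF + E(D|x)."""
    return first_cost * capital_recovery_factor(i, T) + annual_expected_damage

# Hypothetical culvert design: $200,000 installation cost, 5% interest,
# 50-year service life, and $6,000/yr expected flood damage
taec = total_annual_expected_cost(200_000, 0.05, 50, 6_000)
print(round(taec, 2))
```

Repeating this calculation for each candidate structural size traces out the total annual expected cost curve whose lowest point defines the optimal risk-based design.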

In practice, the optimal risk-based design determines the optimal structural size, configuration, and operation such that the total annual expected cost is minimized. Referring to Fig. 8.6, as the structural size increases, the annual installation cost increases, whereas the annual expected damage cost associated with failure decreases. The optimal risk-based design procedure attempts to determine the lowest point on the total annual expected cost curve. Mathematically, the optimal risk-based design problem can be stated as

Minimize  TAEC(x) = FC(x) × CRF + E(D|x)        (8.31a)

subject to  g_i(x) = 0        i = 1, 2, …, m        (8.31b)

where g_i(x) = 0, i = 1, 2, …, m, are constraints representing the design specifications that must be satisfied.

In general, the solution to Eqs. (8.31a-b) can be acquired through the use of appropriate optimization algorithms. The selection or development of the solution algorithm is largely problem-specific, depending on the characteristics of the problem to be optimized. Section 8.4 describes an application of optimal risk-based design to pipe culverts for roadway drainage.

Optimal Risk-Based Design of Hydrosystem Infrastructures

Reliability analysis methods can be applied to design hydrosystem infrastructures with or without considering risk costs. Risk costs are those cost items incurred owing to the unexpected failure of structures, and they can be broadly classified into tangible and intangible costs. Tangible costs are those measurable in monetary terms, which include damage to property and structures, loss of business, cost of repair, etc. On the other hand, intangible costs are not measurable in monetary terms, such as psychological trauma, loss of lives, social unrest, damage to the environment, and others. Without considering risk costs, reliability has been explicitly accounted for in the design of storm sewer systems (Yen and Ang, 1971; Yen et al., 1976; Yen and Jun, 1984), culverts (Yen et al., 1980; Tung and Mays, 1980), and levees (Tung and Mays, 1981a; Lee and Mays, 1986). Cheng et al. (1986) demonstrated how to apply the AFOSM method to calculate the risk reduction associated with freeboard in dam design. Melching et al. (1987) suggested different flood peak-risk curves for forecasting and for design. However, it is the risk-based least-cost design of hydrosystem infrastructures that promises to be the most significant application of reliability analysis.

The risk-based design procedure integrates uncertainty and reliability analyses into design practice. The procedure considers the tradeoff among various factors such as failure probability, economics, and other performance measures in hydraulic structure design. Plate and Duckstein (1987, 1988) list a number of performance measures, called figures of merit, in the risk-based design of hydraulic structures and water resource systems, which are further discussed by Plate (1992). When risk-based design is embedded into an optimization framework, the combined procedure is called optimal risk-based design.

Determination of optimal maintenance schedule

In Sec. 6.3.4 it was shown that the implementation of scheduled maintenance can increase the mean time to failure (MTTF) of a system having an increasing hazard function. Increasing maintenance frequency would result in a decrease in repair frequency and vice versa. Suppose that an engineer is considering implementing a regular scheduled maintenance for a system. The problem of interest is to determine the optimal maintenance frequency associated with the minimum total cost, which includes the maintenance cost and repair cost. Of course, the issue is worth considering if the cost of maintenance is much
lower than the cost of repair. Otherwise, there will be little economic incentive to conduct scheduled maintenance. The following descriptions show a simple example of determining the optimal maintenance schedule associated with the minimum total cost of repair and maintenance. More sophisticated models for dam safety inspection have been developed by Tang and Yen (1991, 1993).

The total cost per unit time with a maintenance cycle of time interval length tM can be expressed as

TC(t_M) = C_R × f_R(t_M) + C_M × f_M(t_M)        (8.25)

in which C_R and C_M are the unit cost per repair and the unit cost per maintenance, respectively, and f_R and f_M are the repair and maintenance frequencies, respectively.

Assume that the repair is ideal. The repair frequency (number of repairs per unit time) f_R for a specified maintenance interval t_M can be calculated by

f_R(t_M) = (1/t_M) ∫₀^{t_M} f_t(τ) dτ        (8.26)

in which f_t(t) is the failure density function. On the other hand, since there will be one maintenance within each scheduled time interval t_M, the maintenance frequency is 1/t_M. Therefore, the total cost per unit time is

TC(t_M) = (C_R/t_M) ∫₀^{t_M} f_t(τ) dτ + C_M/t_M        (8.27)

The relationships between the three cost items and the scheduled maintenance interval are shown in Fig. 8.8: the repair cost per unit time increases with t_M, whereas the maintenance cost per unit time decreases with t_M. Therefore, there is a tradeoff between the two cost components, and the objective is to determine the optimal scheduled maintenance interval t_M associated with the least total cost.

The optimal scheduled maintenance time interval can be obtained by solving d[TC(t_M)]/dt_M = 0, that is,

t_M f_t(t_M) − ∫₀^{t_M} f_t(τ) dτ = C_M/C_R        (8.28)

In general, solving Eq. (8.28) requires the use of numerical root-finding procedures.
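As a sketch of such a numerical solution, the following assumes a Weibull failure density with increasing hazard (shape parameter 2) and an illustrative cost ratio C_M/C_R = 0.1, and locates the smallest positive root of Eq. (8.28) by bisection; all parameter values are hypothetical.

```python
import math

# Assumed Weibull failure density with increasing hazard (shape K_SHAPE > 1)
# and scale LAM in years; CM_OVER_CR is the maintenance-to-repair cost ratio.
K_SHAPE, LAM = 2.0, 1.0
CM_OVER_CR = 0.1

def f_t(t):
    """Weibull failure density f_t(t)."""
    return (K_SHAPE / LAM) * (t / LAM) ** (K_SHAPE - 1) * math.exp(-(t / LAM) ** K_SHAPE)

def F_t(t):
    """Weibull failure CDF, i.e., the integral of f_t from 0 to t."""
    return 1.0 - math.exp(-(t / LAM) ** K_SHAPE)

def g(t):
    """Optimality condition of Eq. (8.28): t*f_t(t) - F_t(t) - CM/CR = 0."""
    return t * f_t(t) - F_t(t) - CM_OVER_CR

def bisect(func, lo, hi, tol=1e-10):
    """Simple bisection root finder; assumes func changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if func(lo) * func(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The smallest positive root of Eq. (8.28) is the cost-minimizing interval,
# since TC decreases before it and increases just after it.
t_opt = bisect(g, 1e-6, 1.0 / math.sqrt(2.0))
print(round(t_opt, 4))
```

A root exists only when C_M/C_R is small enough; otherwise maintenance is never cheap enough relative to repair to justify a finite schedule, echoing the remark above that scheduled maintenance pays off only when maintenance is much cheaper than repair.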

Reliability design with redundancy

Consider the design of a hydrosystem consisting of n subsystems that are arranged in series so that the failure of one subsystem will cause the failure of the entire system. In this case, the reliability of the hydrosystem can be improved by installing standby units in each subsystem (see Fig. 8.7). Figure 8.7 shows a series-parallel configuration that is called unit redundancy. Suppose that each subsystem can install up to K standby units and that the total capital available for the hydrosystem is C. Furthermore, the cost functions are known, with C_i(k_i) being the cost associated with installing k_i standby units on the ith subsystem.

Figure 8.7 Unit redundancy with series-parallel configuration.

Suppose that the engineer is interested in determining the number of standby units for each subsystem to maximize the system reliability α_sys without exceeding the available capital. The optimization model for the problem can be expressed as


Maximize  α_sys = ∏_{i=1}^{n} α_i(k_i)        (8.11a)

subject to  Σ_{i=1}^{n} C_i(k_i) ≤ C        (8.11b)

where k_i is a nonnegative integer-valued decision variable, 0 ≤ k_i ≤ K, and α_i(k_i) is the reliability of the ith subsystem installed with k_i standby units.

This optimization problem can be solved efficiently by the DP approach described in Sec. 8.1.3. The stages are the subsystems i = 1, 2, …, n. The DP backward-recursive equation can be written as

f_i(b_i) = max_{k_i} [α_i(k_i)]        for i = n
f_i(b_i) = max_{k_i} {α_i(k_i) × f_{i+1}[b_i − C_i(k_i)]}        for i = 1, 2, …, n − 1        (8.12)

where b_i is the state variable representing the total capital available for subsystems i, i + 1, …, n.

Alternatively, the design engineer may be interested in finding the system configuration associated with the least total capital investment while achieving some acceptable system reliability α_sys,min. A problem of this type can be expressed by the following model:

Minimize  Σ_{i=1}^{n} C_i(k_i)        (8.13a)

subject to  ∏_{i=1}^{n} α_i(k_i) ≥ α_sys,min        (8.13b)
The model defined by Eqs. (8.13a-b) also can be solved by the DP approach. To illustrate the application of another optimization procedure, let the reliability of each standby unit in subsystem i be equal to α_i and the unit cost be c_i, i = 1, 2, …, n. Furthermore, the minimum acceptable system reliability is set equal to α_sys,min. Then, Eqs. (8.13a-b) can be rewritten as


Minimize  Σ_{i=1}^{n} c_i k_i        (8.14a)

subject to  ∏_{i=1}^{n} [1 − (1 − α_i)^{k_i}] = α_sys,min        (8.14b)

The model, Eqs. (8.14a-b), involves a linear objective function and a nonlinear equality constraint. To simplify the multiplicative relationship of Eq. (8.14b), a new variable p_i is defined to satisfy the following equation:

α_sys,min^{p_i} = 1 − (1 − α_i)^{k_i}        i = 1, 2, …, n

In terms of the new variable p_i, the original decision variable k_i and the constraint Eq. (8.14b) can be expressed, respectively, as

k_i = ln(1 − α_sys,min^{p_i}) / ln(1 − α_i)        i = 1, 2, …, n        (8.15a)

Σ_{i=1}^{n} p_i = 1        (8.15b)

Therefore, the original model, Eqs. (8.14a-b), can be written as

Minimize  Σ_{i=1}^{n} c_i ln(1 − α_sys,min^{p_i}) / ln(1 − α_i)        (8.16a)

subject to  Σ_{i=1}^{n} p_i = 1        (8.16b)

This constrained minimization problem, Eqs. (8.16a-b), can be solved by minimizing the following Lagrangian function:

L(p_1, p_2, …, p_n, λ) = Σ_{i=1}^{n} c_i ln(1 − α_sys,min^{p_i}) / ln(1 − α_i) + λ (Σ_{i=1}^{n} p_i − 1)        (8.17)

Solving

∂L(p_1, p_2, …, p_n, λ)/∂p_i = [c_i / ln(1 − α_i)] d[ln(1 − α_sys,min^{p_i})]/dp_i + λ = 0        i = 1, 2, …, n        (8.18)

results in

λ = c_i α_sys,min^{p_i} ln(α_sys,min) / [(1 − α_sys,min^{p_i}) ln(1 − α_i)]        i = 1, 2, …, n        (8.19)

Suppose that the minimum acceptable system reliability α_sys,min is chosen to be close to 1. In such a case,

α_sys,min^{p_i} ≈ 1        (8.20)

1 − α_sys,min^{p_i} ≈ −p_i ln(α_sys,min)        (8.21)

which renders

p_i ≈ −c_i / [λ ln(1 − α_i)]        (8.22)

λ ≈ −Σ_{j=1}^{n} c_j / ln(1 − α_j)        (8.23)

Then, by Eqs. (8.22) and (8.23), the new variable p_i can be obtained, in terms of the unit cost c_i and the reliability of the standby unit α_i, as

p_i = [c_i / ln(1 − α_i)] / [Σ_{j=1}^{n} c_j / ln(1 − α_j)]        i = 1, 2, …, n        (8.24)

Once the values of the p_i's are computed by Eq. (8.24), the number of standby units for each subsystem, k_i, can be obtained by Eq. (8.15a). Finally, one should realize that the values of k_i so obtained are not guaranteed to be integer-valued. A round-off to the closest integer may be needed.
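A minimal sketch of Eqs. (8.24) and (8.15a), using hypothetical unit costs and standby-unit reliabilities, might look as follows.

```python
import math

def standby_allocation(costs, alphas, alpha_sys_min):
    """Continuous-relaxation allocation of standby units via
    Eqs. (8.24) and (8.15a); valid when alpha_sys_min is close to 1."""
    # Eq. (8.24): p_i is proportional to c_i / ln(1 - alpha_i)
    weights = [c / math.log(1.0 - a) for c, a in zip(costs, alphas)]
    total = sum(weights)
    p = [w / total for w in weights]
    # Eq. (8.15a): back out k_i from p_i (not generally integer-valued)
    k = [
        math.log(1.0 - alpha_sys_min ** pi) / math.log(1.0 - a)
        for pi, a in zip(p, alphas)
    ]
    # Round off to the closest integer, as noted in the text
    return p, [round(ki) for ki in k]

# Hypothetical system: three subsystems, with unit costs and reliabilities
p, k = standby_allocation(costs=[3.0, 4.0, 2.0],
                          alphas=[0.90, 0.85, 0.95],
                          alpha_sys_min=0.999)
print(p, k)
```

Note that subsystems with cheaper or less reliable standby units draw proportionally more of the "reliability budget" p_i, which matches the intuition behind Eq. (8.24).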

Optimization of System Reliability

As described in Chap. 5, the reliability of a multicomponent system depends on the component reliability, the number of redundant components, and the arrangement of the components. With everything else remaining fixed, the system reliability gets higher as the number of redundant components increases. However, nothing is free: achieving higher system reliability has to be paid for by a higher price for the system. In general, practically all systems exhibit diminishing returns as the number of redundant components increases (see Example 5.2). In the context of system reliability, this behavior is describable by a strictly concave functional relation between the system reliability and the number of redundant components (or total system cost). A relevant issue is how much one is willing to invest before the improvement in system reliability is no longer economically justifiable. The problem becomes even more challenging when several components with different reliability levels compete for limited resources, such as budget and space. How would an engineer decide the best allocation of resources to achieve the highest system reliability possible? This section describes several examples showing how system reliability can be optimized by the various techniques introduced in Sec. 8.1.

Optimization techniques

Having constructed an optimization model, one must choose an optimization technique to solve the model. The way that one solves the problem depends largely on the form of the objective function and constraints, the nature and number of variables, the kind of computational facility available, and personal taste and experience. Mays and Tung (1992) provide a rather detailed discussion of linear programming, dynamic programming, and nonlinear programming techniques. In this subsection, brief descriptions are given to provide readers with some background on these techniques. A list of applications of various optimization techniques to hydrosystems engineering problems can be found elsewhere (Mays and Tung, 2005).

Linear programming (LP). LP has been applied extensively to optimal resource allocation problems. When the system under study is linear, LP models also can be used for parameter identification and system operation. By the name of the technique, an LP model has the basic characteristic that both the objective function and constraints are linear functions of the decision variables. The general form of an LP model can be expressed as


Max (or min)  x_0 = Σ_{j=1}^{n} c_j x_j        (8.3a)

subject to  Σ_{j=1}^{n} a_ij x_j = b_i        i = 1, 2, …, m        (8.3b)

x_j ≥ 0        j = 1, 2, …, n        (8.3c)

where the Cj’s are the objective function coefficients, the aij’s are called the technological coefficients, and the bi’s are the right-hand-side (RHS) coefficients. An LP model can be expressed concisely in matrix form as

Max (or min)  x_0 = c^t x

subject to  Ax = b

x ≥ 0

where c is an n × 1 column vector of objective function coefficients, x is an n × 1 column vector of decision variables, A is an m × n matrix of technological coefficients, b is an m × 1 column vector of the RHS coefficients, and the superscript t represents the transpose of a matrix or vector. Excellent books on LP include Winston (1987), Taha (1987), and Hillier and Lieberman (1990).

The commonly used algorithm for solving an LP model is the simplex method developed by Dantzig (1963). Since its conception, the method has been used widely and is considered the most efficient approach for solving LP problems. The simplex algorithm, starting at an initial feasible solution to the LP model, searches along the boundary of the problem's feasible space to reach the optimal solution. The method has been applied to solve large problems involving thousands of decision variables and constraints on today's computers. Computer codes based on the simplex method are widely available. Some of the well-known LP software packages include GAMS (general algebraic modeling system) by Brooke et al. (1988) and LINDO (linear, interactive, and discrete optimizer) by Schrage (1986), for which PC versions of the programs are available. For LP
models, because of the convexity of the feasible region, the solution obtained is a global optimum.

Two additional methods for solving LP problems have been developed that apply different algorithmic strategies (Khatchian, 1979; Karmarkar, 1984). In contrast to the simplex algorithm, Khatchian’s ellipsoid method and Karmarkar’s projective scaling method seek the optimum solution to an LP problem by moving through the interior of the feasible region.
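The geometric fact that simplex exploits, namely that an LP optimum lies at a vertex of the convex feasible region, can be sketched by brute force for a tiny problem. The two-variable LP below is hypothetical; its vertices are found by intersecting constraint boundaries pairwise, whereas a real solver would pivot from vertex to adjacent vertex (simplex) or move through the interior (ellipsoid or projective scaling methods).

```python
from itertools import combinations

# Illustrative LP (all data hypothetical):
#   max x0 = 3*x1 + 5*x2
#   s.t. x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1 >= 0, x2 >= 0
# Each constraint is stored as (a, b, c) meaning a*x1 + b*x2 <= c.
CONS = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect constraint boundaries pairwise; keep the feasible points.
    These are the candidate optima that the simplex method walks among."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

# The optimum of a (bounded, feasible) LP is attained at some vertex
best = max(vertices(CONS), key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])
```

Pairwise enumeration grows combinatorially with problem size, which is exactly why the simplex method's guided boundary search, rather than exhaustive vertex checking, is used in practice.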

Nonlinear programming (NLP). A nonlinear programming model has the general format of Eqs. (8.1a-c), in which either the objective function f(x) or the constraints g(x), or both, are nonlinear. In an NLP problem, the convexity of the feasible space defined by the nonlinear constraints g(x) is generally difficult to assess. As a result, the solution obtained by an NLP algorithm cannot be guaranteed to be globally optimal. Detailed descriptions of methods for solving NLP problems can be found in Edgar and Himmelblau (1988), Fletcher (1980), McCormick (1983), and Gill et al. (1981).

Basically, algorithms for solving NLP problems are divided into two categories: unconstrained and constrained algorithms. Unconstrained NLP algorithms solve Eq. (8.1a) without considering the presence of constraints. They provide the building blocks for the more sophisticated constrained NLP algorithms. Consider an unconstrained NLP problem

Minimize  f(x)        x ∈ R^n        (8.5)

in which R^n is an n-dimensional real space. Assume that f(x) is twice differentiable; the necessary conditions for x* to be a solution to Eq. (8.5) are (1) ∇_x f(x*) = 0 and (2) ∇²_x f(x*) = H(x*) is positive semidefinite, in which ∇_x f = (∂f/∂x_1, ∂f/∂x_2, …, ∂f/∂x_n)^t is the gradient vector of the objective function and ∇²_x f = (∂²f/∂x_i ∂x_j) is an n × n Hessian matrix. The sufficient conditions for an unconstrained minimum x* are (1) ∇_x f(x*) = 0 and (2) ∇²_x f(x*) = H(x*) is strictly positive definite.

In theory, the solution to Eq. (8.5) can be obtained by solving ∇_x f(x*) = 0, which involves a system of n nonlinear equations with n unknowns. This approach has been regarded as indirect because it backs away from solving the original problem of minimizing f(x). Furthermore, numerical iterative procedures are required to solve the system of nonlinear equations, which tend to be computationally inefficient. Therefore, general preference is given to those solution procedures that directly attack the problem of optimizing the objective function. Like the LP solution procedure, direct solution methods generate, during the course of iteration, a sequence of points in the solution space that converge to the solution of the problem by the following recursive equation:

x^(r+1) = x^(r) + β^(r) d^(r)        r = 1, 2, …        (8.6)

in which the superscript (r) represents the iteration number, x is the vector of the solution point, d is the vector of the search direction along which the objective function f(x) is to be minimized, and β is a scalar, called the step size, that minimizes f(x^(r) + β^(r) d^(r)). This procedure is called the line search or one-dimensional search. Several unconstrained NLP algorithms have been developed, and they differ in the way the search directions are determined during the course of iteration. Mays and Tung (1992, p. 136) summarize the search directions of various methods.
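The recursion of Eq. (8.6) can be sketched with steepest descent (d = −∇f) and a backtracking line search that enforces an Armijo-type sufficient decrease; the quadratic test function and all step-size constants below are illustrative choices, not part of the text.

```python
def f(x):
    """Illustrative quadratic objective with minimizer at (1, -2)."""
    return (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2

def grad_f(x):
    """Analytic gradient of f."""
    return (2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0))

def line_search(x, d, g, beta=1.0, shrink=0.5, c=0.5):
    """Backtracking line search: shrink the step size beta until the
    Armijo sufficient-decrease condition is satisfied."""
    gd = g[0] * d[0] + g[1] * d[1]   # directional derivative (negative here)
    while f((x[0] + beta * d[0], x[1] + beta * d[1])) > f(x) + c * beta * gd:
        beta *= shrink
    return beta

x = (5.0, 5.0)                        # initial solution point x^(1)
for _ in range(200):
    g = grad_f(x)
    if g[0] ** 2 + g[1] ** 2 < 1e-18: # gradient ~ 0: necessary condition met
        break
    d = (-g[0], -g[1])                # steepest-descent search direction
    beta = line_search(x, d, g)
    x = (x[0] + beta * d[0], x[1] + beta * d[1])  # Eq. (8.6) update

print(x)  # converges to the minimizer (1.0, -2.0)
```

Other unconstrained methods (conjugate gradient, quasi-Newton) differ from this sketch only in how the direction d^(r) is chosen, which is the distinction the paragraph above draws.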

Without losing generality, consider a nonlinear constrained problem stated by Eq. (8.1) with no bounding constraints. Note that the constraints, Eq. (8.1b), are all equality constraints. Under this condition, the Lagrangian multiplier method can be applied, which converts a constrained NLP problem to an unconstrained one through an augmented objective function called the Lagrangian. That is, Eqs. (8.1a-b) can be written as

Minimize  L(x, λ) = f(x) + λ^t g(x)        (8.7)

in which L(x, λ) is the Lagrangian function, λ is the vector of m Lagrangian multipliers, and g(x) is the vector of constraint equations. The new objective function L(x, λ) involves n + m decision variables. The necessary and sufficient conditions for x* to be the solution of Eq. (8.7) are

1. f(x*) is convex and g(x*) is convex in the vicinity of x*.

2. ∇_x L(x*, λ) = 0.

3. ∇_λ L(x*, λ) = g(x*) = 0.

4. The λ's are unrestricted in sign.

Solving conditions 2 and 3 simultaneously yields the optimal solution to Eq. (8.7).

The most important theoretical results for the NLP problem are the Kuhn-Tucker conditions, which can be derived easily by using the Lagrangian method for the general NLP problem, as stated in Eqs. (8.1a-c). These conditions must be satisfied at any constrained optimum, local or global, of any LP or NLP problem. They form the basis for the development of many computational algorithms.

Several NLP computer codes are available commercially. They include (1) GRG2 (generalized reduced gradient 2), developed by Lasdon and his colleagues (Lasdon et al., 1978; Lasdon and Waren, 1978); (2) GINO (Liebman et al., 1986); (3) MINOS (modular in-core nonlinear optimization system) by Murtagh and Saunders (1980, 1983); and (4) GAMS-MINOS, a link for GAMS and MINOS. Microsoft Excel SOLVER implements GRG2 in a spreadsheet working environment.

Dynamic programming (DP). Before performing the optimization, it is sometimes desirable to make some changes of variables and transformations so that the model can be solved more efficiently. However, one must keep in mind that such transformations should completely preserve the properties of the original problem (model) so that the transformed model will yield the optimal solution to the original problem. Basically, DP is such a transformation that takes a sequential or multistage decision process containing many interrelated decision variables and converts it into a series of single-stage problems, each containing only one or a few variables. In other words, the DP technique decomposes an n-decision problem into a sequence of n separate but interrelated single-decision subproblems. Books that deal with DP are Dreyfus and Law (1977), Cooper and Cooper (1981), and Denardo (1982).

To describe the general philosophy of the DP technique, consider the following resource allocation problem (Tung and Mays, 1992). Suppose that one wishes to allocate funds to three water development projects, A, B, and C, to maximize the total revenue. Each development project consists of several alternative configurations that require different funding levels and yield different revenues. Owing to the budget limitation, the total available funds for the entire development are fixed. If the number of alternatives for each project is not too large, one probably can afford to enumerate all possible combinations of project alternatives exhaustively to identify the optimal alternatives for the entire project development. This brute-force exhaustive enumeration approach possesses three main shortcomings: (1) it would become impractical if the number of alternative combinations is large, (2) the optimal course of action cannot be verified, even if it is obtained in the early computations, until all the combinations are examined, and (3) infeasible combinations (or solutions) cannot be eliminated in advance.

Using the DP technique, one considers the selection of alternatives within each project individually without ignoring the interdependence among the projects through the total available budget. Since the total funds are limited, the amount available to each project depends on the allocations to the other projects. Whatever funds are given to project A and project B, the allocation to the remaining project C must be made to optimize its return with respect to the remaining capital. In other words, the optimal allocation to project C is conditioned only on the available funds for project C after allocations to project A and project B are made. Since one does not know the optimal allocations to project A and project B, the optimal allocation and the corresponding revenue from project C must be determined for all possible remaining funds after allocations to project A and project B have been made. Furthermore, whatever amount is allocated to project A, the allocations to project B and project C must be made optimal with respect to the remaining funds after the allocation is made to project A. To find the optimal allocation to project B, one finds the allocation maximizing the revenue from project B together with the optimal revenue from project C as a function of remaining funds from the allocation to project B. Finally, the optimal allocation to project A is determined, to maximize the revenue from project A plus the optimal revenue from both project B and project C, as a function of the funds remaining after the allocation to project A.
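
The three-project allocation just described can be sketched as a backward recursion in which the state is the funds still available. The budget, the project alternatives, and their costs and revenues below are hypothetical numbers chosen for illustration:

```python
# A minimal sketch of the budget-allocation DP described above.
# State S_n = funds still available when project n is considered;
# decision d_n = which alternative of project n to fund.

BUDGET = 7  # total available funds (hypothetical units)

# Each project: list of (cost, revenue) alternatives; (0, 0) = "do nothing".
projects = {
    "A": [(0, 0), (2, 5), (4, 8)],
    "B": [(0, 0), (3, 7), (5, 12)],
    "C": [(0, 0), (1, 3), (4, 10)],
}

def allocate(names, funds, memo=None):
    """Best total revenue from the remaining projects `names`, given
    `funds` remaining; returns (revenue, list of funded alternatives)."""
    if memo is None:
        memo = {}
    if not names:
        return 0, []
    key = (len(names), funds)
    if key in memo:
        return memo[key]
    best = (float("-inf"), [])
    for cost, revenue in projects[names[0]]:
        if cost <= funds:  # infeasible decisions are never examined
            sub_rev, sub_dec = allocate(names[1:], funds - cost, memo)
            total = revenue + sub_rev
            if total > best[0]:
                best = (total, [(names[0], cost, revenue)] + sub_dec)
    memo[key] = best
    return best

revenue, plan = allocate(["A", "B", "C"], BUDGET)
print(revenue, plan)
```

Note that each project is optimized only as a function of the funds remaining for it, exactly as in the verbal description, and infeasible combinations are eliminated in advance rather than enumerated.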

This description of the DP algorithm applied to a budget allocation example can be depicted schematically as in Fig. 8.3, from which the basic elements and terminology of a DP formulation are defined.

Figure 8.3 Dynamic programming diagram for budget allocation example.

1. Stages (n) are the points in the problem where decisions are to be made. In the funds allocation example, each project represents a different stage in the DP model.

2. Decision variables (dn) are courses of action to be taken for each stage. The decision in each stage (project) is the alternative within the project to be selected for funding. The number of decision variables dn in each stage is not necessarily equal to one.

3. State variables (Sn) are variables describing the state of a system at different stages. A state variable can be discrete or continuous, finite or infinite. Referring to Fig. 8.3, at any stage n, there are the input state Sn and the output state Sn+1. The state variables of the system in a DP model serve to link succeeding stages so that, when each stage is optimized separately, the resulting decision is automatically feasible for the entire problem. Furthermore, they allow one to make optimal decisions for the remaining stages without having to check the effect of future decisions on decisions made previously. Given the current state, an optimal policy for the remaining stages is independent of the policy adopted in previous stages. This is called Bellman's principle of optimality, which serves as the backbone of the optimization algorithm of the DP technique.

4. Stage return (rn) is a scalar measure of effectiveness of the decision made in each stage. It is a function of the input state, the output state, and the decision variable of a particular stage, that is, rn = rn(Sn, Sn+1, dn).

5. State transition function (tn) is a single-valued transformation that expresses the relationships between input state, output state, and decision. In general, through the state transition function, the output state Sn+1 at any stage n can be expressed as a function of the input state Sn and the decision dn as

Sn+1 = tn(Sn, dn) (8.8)

The solution begins by finding the optimal decision for each possible state in the last stage (the backward-recursive approach) or in the first stage (the forward-recursive approach). Usually, one can exchange the sequence of the decision-making process; hence the choice of which end to begin from is immaterial. A recursive relationship that identifies the optimal policy for each state at any stage n can be developed, given the optimal policy for each state at the next stage n + 1. This backward-recursive equation, referring to Fig. 8.4, can be written as

f*n(Sn) = opt_dn {rn(Sn, dn)}   for n = N

= opt_dn {rn(Sn, dn) ∘ f*n+1[tn(Sn, dn)]}   for n = 1, 2, ..., N − 1 (8.9)

in which ∘ represents a general algebraic operator that can be +, −, ×, or others.
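
The backward-recursive equation (8.9) can be sketched generically, here with opt taken as min and the operator taken as + (e.g., a minimum-cost staged decision problem). The stage returns, transition function, and state space below are hypothetical:

```python
# A generic sketch of Eq. (8.9) with opt = min and the operator = "+".
# All stage data below are hypothetical.

N = 3                      # number of stages
states = [0, 1, 2]         # admissible discrete states at every stage
decisions = [0, 1, 2]      # admissible decisions at every stage

def t(n, s, d):
    """State transition function S_{n+1} = t_n(S_n, d_n)."""
    return (s + d) % 3     # hypothetical dynamics

def r(n, s, d):
    """Stage return r_n(S_n, d_n)."""
    return abs(s - d) + n  # hypothetical stage cost

# f[n][s] = optimal cost-to-go from state s at stage n
f = {N: {s: min(r(N, s, d) for d in decisions) for s in states}}
for n in range(N - 1, 0, -1):
    f[n] = {s: min(r(n, s, d) + f[n + 1][t(n, s, d)] for d in decisions)
            for s in states}

print(f[1])  # optimal total cost from each possible starting state
```

The dictionary f plays the role of the optimal partial returns stored for each feasible state in each stage, which is precisely the storage requirement discussed below.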

The efficiency of an optimization algorithm is commonly measured by the computer time and storage required in problem solving. In the DP algorithm, the execution time mainly arises from the evaluation of the recursive formula, whereas the storage is primarily for storing the optimal partial return and the decision for each feasible state in each stage. DP possesses several advantages in solving problems involving the analysis of multiperiod processes; however, there are two major disadvantages of applying DP to many hydrosystems problems, i. e., the computer memory and time requirements. These disadvantages would become especially severe under two situations: (1) when the number of state variables is beyond three or four and (2) when DP is applied in a discrete fashion to a continuous state space. The problem associated with the second situation is that difficulties exist in obtaining the true optimal solution without having to considerably increase the discretization of the state space.

Because of the prohibitive memory requirement of DP for multidimensional systems, several attempts have been made to reduce this problem. One such modification is discrete differential DP (DDDP). DDDP is an iterative DP that is specially designed to overcome the shortcomings of the DP approach. DDDP uses the same recursive equation as DP to search for an improved trajectory among discrete states in the stage-state domain in the vicinity of an initially specified trial trajectory (Heidari et al., 1971). Instead of searching over the entire state-stage domain for the optimal trajectory, as is the case for DP, DDDP examines only a portion of the state-stage domain, saving a considerable amount of computer time and memory (Chow et al., 1975). This optimization procedure is solved through iterations of trial states and decisions to search for optimal returns (maximum or minimum) for a system, subject to the constraints that the trial states and decisions be within the respective admissible domains in the state and decision spaces.



Figure 8.4 Schematic diagram of dynamic programming representation.

General-purpose computer codes for solving DP problems are not available commercially because most problems are very specific and cannot be cast into a general framework such as Eqs. (8.1a-c). Therefore, analysts often have to develop a customized computer code for the specific problem under consideration.

Global optimization techniques. To optimize a complex hydrosystem involving a large number of interrelated decision variables, it is generally difficult to be certain that the optimal solution can be obtained within a reasonable amount of time. In dealing with such “hard” problems, one could opt to seek the exact optimal solution, at the cost of a very large amount of computational time, using the optimization techniques previously mentioned, or to reach a quick but good suboptimal solution through some approximate or heuristic algorithms. Simulated annealing (SA) and the genetic algorithm (GA) are two types of high-quality general algorithms that are independent of the nature of the problem. Both SA and GA, by nature, are randomization algorithms that apply a local search based on stepwise improvement of the objective function through exploring the neighboring domain of the current solution. The quality of the local optimum can be strongly dependent on the initial solution, and in practice, it is difficult to assess the time needed to reach the global optimum. To avoid being trapped in a local optimum, local search algorithms can (1) try a large number of initial solutions, (2) introduce a more complex neighborhood structure to search a larger part of the solution space, and (3) accept limited transitions to explore parts of the solution space in which the solution is inferior (Aarts and Korst, 1989).

Simulated annealing algorithms. Simulated annealing (SA) algorithms solve optimization problems by using an analogy to the physical annealing process, in which the temperature of a solid is decreased to bring its energy to a minimum level. The annealing process involves two steps: (1) increasing the temperature of a heat bath to a maximum value at which the solid melts, and (2) carefully decreasing the temperature of the heat bath to allow the particles to arrange themselves in a more structured lattice and so achieve a minimum energy level. If the temperature of the heat bath decreases too rapidly (called quenching), the solid can be frozen into a metastable state in which it is brittle. The connection between optimization and the annealing process was first noted by Pincus (1970) and formalized as an optimization technique by Kirkpatrick et al. (1983).

SA algorithms employ a random search, which not only accepts solutions that improve the current solution but also, with a specified probability, accepts solutions that are inferior, according to the Metropolis criterion, that is, for a minimization problem,

P{accept xj^(r+1)} = min{1, exp(−[f(xj^(r+1)) − f(x*^(r))]/T)} (8.10)

in which x*^(r) is the optimal solution in the rth iteration, xj^(r+1) is the jth trial solution of the (r + 1)th iteration, f(·) is the objective function value analogous to the energy level of a solid, and T is the control parameter analogous to the temperature of the heat bath. As can be seen from Eq. (8.10), in a subsequent iteration a new trial solution yielding a worse objective function value than the current optimum has a lower (but not zero) probability of being accepted than a solution producing a better objective function value.
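
For a minimization problem, the Metropolis criterion can be sketched in a few lines of Python; the function name and the injectable random source below are illustrative assumptions, not from the text:

```python
# A sketch of the Metropolis acceptance criterion for minimization:
# an improving trial solution is always accepted, and an inferior one
# is accepted with probability exp(-(f_trial - f_current)/T).
import math
import random

def metropolis_accept(f_trial, f_current, T, rng=random.random):
    """Return True if the trial solution should be accepted."""
    if f_trial <= f_current:          # improvement: always accept
        return True
    p = math.exp(-(f_trial - f_current) / T)
    return rng() < p                  # inferior: accept with probability p

# The acceptance probability for an inferior solution shrinks as T drops:
print(math.exp(-1.0 / 10.0))  # high T: about 0.905
print(math.exp(-1.0 / 0.1))   # low T:  about 4.5e-5
```

Passing the random source in as a parameter makes the acceptance rule easy to test deterministically.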

Implementation of the SA algorithm is remarkably easy, as shown in Fig. 8.5, and involves the following steps: (1) generation of possible solutions to explore the solution space, (2) evaluation of the objective function, and (3) definition of the annealing schedule, specified by the initial control parameter T0, the decrement of the control parameter ΔT, and the convergence criteria. Mechanisms for generating trial solutions in each iteration involve introducing random changes with the intent to cover the solution domain to be searched. The domain of search generally changes with the iteration, and there are many ways to implement the domain search (Vanderbilt and Louie, 1984; Parks, 1990).

Figure 8.5 Algorithm of simulated annealing.

Since SA algorithms only require an objective function evaluation at each generated trial solution, the computational efficiency of the entire process can become an important issue because implementation of the algorithms anticipates a large number of function evaluations. Many optimization problems in hydrosystems engineering involve objective functions whose values depend on physical constraints defined by complex flow simulations. In such cases, it is worth the effort to search for and implement more computationally efficient procedures. The simplest way to handle constraints in an SA algorithm is to reject a trial solution if it leads to a constraint violation. Alternatively, a penalty function can be introduced to account for constraint violations.

In implementing the SA algorithm, T0 generally is set high enough to allow virtually all trial solutions to be accepted, analogous to having the temperature in the heat bath high enough to “melt” the solid; this is equivalent to an acceptance probability close to 1 at T0. As the solution improves with the iterations, the control parameter T gradually decreases in value. The SA iteration is terminated when the control parameter T reaches a specified final value or when the total number of trial solutions is reached. Alternatively, one can halt the algorithm on lack of improvement in the objective function value, which can be defined as (1) no improvement found over all trial solutions at one control parameter value or (2) the acceptance ratio falling below a specified value.
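
Steps (1) to (3) can be assembled into a complete SA loop. The sketch below minimizes a hypothetical objective f(x) = x^2; the geometric decrement of T and all schedule parameters are illustrative choices, not values from the text:

```python
# A minimal simulated-annealing sketch: start with T0 high, generate
# random neighbouring solutions, accept by the Metropolis criterion,
# and decrease T geometrically (the annealing schedule).
import math
import random

def anneal(f, x0, T0=10.0, alpha=0.95, n_iter=2000, step=1.0, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(n_iter):
        x_trial = x + rng.uniform(-step, step)   # neighbourhood move
        f_trial = f(x_trial)
        # Metropolis criterion: always accept improvements; accept
        # inferior moves with probability exp(-(f_trial - fx)/T)
        if f_trial <= fx or rng.random() < math.exp(-(f_trial - fx) / T):
            x, fx = x_trial, f_trial
            if fx < best_f:
                best_x, best_f = x, fx
        T *= alpha                               # annealing schedule
    return best_x, best_f

x_opt, f_opt = anneal(lambda x: x * x, x0=8.0)
print(x_opt, f_opt)  # best solution found; should lie near x = 0
```

Because only f is evaluated at each trial solution, the cost per iteration is dominated by the objective function evaluation, which is why computational efficiency of that evaluation matters so much in practice.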

Genetic algorithms. Genetic algorithms (GAs) are a class of computer-based search procedures that emulate the biologic evolution processes of natural selection and genetics. Since their introduction by Goldberg in 1983, these innovative search procedures have been applied widely for optimization in a wide variety of areas (Goldberg, 1989). GAs have been demonstrated to be robust search procedures that outperform conventional optimization procedures, in particular for problems with high dimensionality, discontinuities, and noise.

Using a GA for optimization, analogies are made between the properties of an optimization problem and the biologic process of gene selection and reproduction. The solution space for an optimization problem can be treated as the environment, in which potential solutions can be considered as genes. The degree of fitness of each gene can be measured by the value of the objective function of the optimization problem. In each iteration of a GA search, several genes representing solutions are generated from the population. These genes compete for their survival based on their fitness: the one that is fitter is more likely to survive and influence the next generation. Through this competition, the population evolves to contain high-performance individuals. In a GA, an iteration is represented by a generation, and decision variables are analogous to biologic genes, represented by coded structures in the form of either real numbers or binary codes. A point in the solution space is represented by a collection of genes, and the coded genes are juxtaposed to form an individual or chromosome.

Like any optimum-seeking algorithm, a GA requires the specification of an initial population (the first generation) of potential solutions. This can be done by random generation of a population or by specifying an initial solution. In the initial population, the fitness of each individual is evaluated with respect to the objective function. Individuals with a high level of fitness have a higher chance of being selected to produce offspring than individuals with a lower level of fitness. The selection can be conducted in many different ways, such as stochastic universal sampling (Baker, 1987), which behaves like a roulette wheel with the slot sizes apportioned to each individual's relative fitness in the population. The principle of the selection is to prefer better solutions to worse ones. This feature of the selection procedure in a GA is similar to the Metropolis acceptance criterion of SA algorithms. The implementation of such a selection procedure prevents the solutions from being trapped at a local optimum.

On completion of the selection of individuals in a generation, the individuals selected mate to produce the population of the next generation. During the mating, the biologic processes of gene crossover and mutation can take place. The fitness of the individuals in the population of the new generation then is evaluated, and the selection and mating processes are repeated. A schematic diagram of a GA is shown in Fig. 8.6. Through this repeated fitness-selection-reproduction cycle, the population generated by the GA evolves toward improved solutions.

Figure 8.6 Schematic diagram of genetic algorithm.

Two main mechanisms are involved for the selected individuals to generate offspring in the next generation, i. e., crossover and mutation. Crossover allows information exchange between two chosen individuals forming the parents. In a GA, crossover is done by randomly selecting positions in the chromosomes of two individuals at which to swap their coded genes to produce the offspring. The rate of crossover has been suggested to be from 0.6 to 1.0 (Goldberg, 1989). Mutation is a process by which new genetic material can be introduced into the population. It is applied on a bit-by-bit basis, with the mutation rate specified by the user.
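
The fitness-selection-reproduction cycle, with roulette-wheel selection, one-point crossover, and bit-wise mutation, can be sketched as a minimal GA. The fitness function (count of 1-bits, the classic "one-max" problem) and all rates below are hypothetical choices for illustration:

```python
# A minimal GA sketch: binary chromosomes, fitness-proportional
# (roulette-wheel) selection, one-point crossover, bit-wise mutation.
import random

def run_ga(n_bits=20, pop_size=30, n_gen=60, p_cross=0.9, p_mut=0.01, seed=7):
    rng = random.Random(seed)

    def fitness(chrom):
        return sum(chrom)                        # one-max fitness

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        # roulette-wheel selection: fitter individuals chosen more often
        weights = [fitness(c) + 1 for c in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        next_pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i][:], parents[i + 1][:]
            if rng.random() < p_cross:           # one-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for j in range(n_bits):
                    if rng.random() < p_mut:     # bit-wise mutation
                        child[j] = 1 - child[j]
                next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = run_ga()
print(sum(best), "of 20 bits set")
```

The crossover rate of 0.9 lies within the 0.6 to 1.0 range suggested by Goldberg (1989); the mutation rate is kept small so that mutation introduces new genetic material without disrupting good individuals.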

Single-objective versus multiobjective programming

The optimization model defined by Eqs. (8.1a-c) is for a single-objective problem. Multiobjective programming, on the other hand, deals with problems involving several noncommensurable and often conflicting objectives simultaneously. Among the objective functions involved, no single one has an importance that is overwhelmingly dominant over all others. Under this circumstance, the ideological theme of optimality in the single-objective context is no longer appropriate.

Mathematically, a multiobjective programming problem can be expressed in terms of vector optimization as

Maximize f(x) = [f1(x), f2(x), ..., fM(x)] (8.2a)

subject to g(x) = 0 (8.2b)

in which f(x) is a vector of M objective functions, g(x) = 0 is the vector of constraints, and x is the vector of decision variables.

The solution to a multiobjective programming problem is a best compromise solution according to the decisionmaker's preference among the objectives and the noninferior solutions to the problem. A noninferior solution to a multiobjective programming problem is a feasible solution to which there is no other feasible solution that will yield an improvement in one objective without causing degradation to at least one other objective (Cohon, 1978). The collection of such noninferior solutions allows the assessment of tradeoffs among conflicting objectives.

Figure 8.1 Schematic diagrams of (a) convex and (b) non-convex spaces.
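
The noninferiority test can be sketched directly from its definition. In the hypothetical example below, both objectives are maximized and the candidate objective-function values are invented for illustration:

```python
# A small sketch of identifying noninferior (nondominated) solutions
# for two maximized objectives.

def dominates(p, q):
    """p dominates q if p is at least as good in every objective and
    strictly better in at least one (both objectives maximized)."""
    return (all(a >= b for a, b in zip(p, q))
            and any(a > b for a, b in zip(p, q)))

def noninferior(solutions):
    """Keep the solutions that no other solution dominates."""
    return [p for p in solutions
            if not any(dominates(q, p) for q in solutions)]

# (f1, f2) values of five hypothetical feasible alternatives
candidates = [(1, 9), (3, 7), (4, 4), (2, 6), (5, 2)]
print(noninferior(candidates))  # (2, 6) is dominated by (3, 7)
```

The surviving solutions trace out the tradeoff between the two objectives: moving along them, any gain in f1 comes at a loss in f2.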

To obtain the solution to a multiobjective programming problem, the preference of the decisionmaker among the conflicting objectives must be known. Information concerning the decisionmaker's preference is commonly called the utility function, which is a function of the objective function values. Geometrically, the utility function can be depicted as a series of indifference curves (Fig. 8.2). The utility of a decisionmaker will be the same for combinations of solutions that fall on the same indifference curve. The best compromise solution to a multiobjective programming problem is the unique set of alternatives possessing the property of maximizing the decisionmaker's utility, where the alternatives are elements of the noninferior solution set.



Many methods have been proposed to derive the solution to a multiobjective programming problem. Basically, they can be classified into two categories (Cohon, 1978): generating techniques and techniques incorporating knowledge of the decisionmaker's preference. The purpose of generating techniques is to provide decisionmakers with information about the noninferior solution set or feasible tradeoffs of the multiobjective problem. System analysts play the role of information providers and do not actively engage in decision making. On the other hand, techniques in the other category explicitly incorporate the decisionmaker's preference to search for the best compromise solution. Detailed descriptions of various techniques for solving multiobjective problems can be found elsewhere (Cohon, 1978; Goicoechea et al., 1982; Chankong and Haimes, 1983; Steuer, 1986). Regardless of the approach used for solving a multiobjective programming problem, the basic solution technique is still one of the single-objective optimization algorithms.

General framework of optimization models

Optimization models possess algorithms to compare the measures of effectiveness of a system and attempt to yield the optimal solution having the most desirable value of the adopted measures. In other words, an optimization model applies an optimum-seeking algorithm, which enables the search of all alternative solutions to select the best one. The general class of such optimum-seeking algorithms is called mathematical programming, which includes linear programming, nonlinear programming, dynamic programming, etc.

The main advantage of optimization models is that the optimal solution to a multidimensional (or multivariate) problem can be found readily by using an efficient search algorithm. The limitation, generally dictated by the solution technique available for model solving, is that sometimes drastic simplifications of real-life systems are inevitable in order to make the model mathematically tractable. Consequently, it is important to recognize that the optimal solution so derived is for a rather restricted case; that is, the solution is optimal to the simplified problem, and it is the optimal solution to the real-life problem of interest only to the extent that the simplifications are not damaging.

All optimization models consist of three basic elements: (1) decision variables and parameters, (2) constraints, and (3) objective functions. Decision variables are those unknown variables which are to be determined from the solution of the model, whereas parameters describe the characteristics of the system. Decision variables generally are controllable, whereas model parameters may or may not be controllable in real-life problems. Constraints describe the physical, economical, legal, and other limitations of the system expressed in mathematical form. The constraints must be included in the model, if they exist, to limit the decision variables to their feasible values. Objective functions are measures of the effectiveness of system performance and are also expressed as mathematical functions of decision variables. In a mathematical programming model, the value of the objective function is to be optimized.

The general form of an optimization model can be expressed as


Optimize x0 = f(x1, x2, ..., xn) (8.1a)

subject to

gi(x1, x2, ..., xn) = 0   i = 1, 2, ..., m (8.1b)

aj ≤ xj ≤ bj   j = 1, 2, ..., n (8.1c)

where f(·) and g(·) are, respectively, the general expressions for the objective function and the constraint equations, which can be linear or nonlinear. The constraint set of Eq. (8.1c) is called the bounding constraint, indicating that the decision variables xj are bounded by their respective lower bound aj and upper bound bj. The most commonly seen bounding constraint type is the nonnegativity constraint, with the lower bound being 0 and the upper bound being infinity.

In the preceding formulation, the decision variables xj generally are controllable inputs. The solution to the problem consists of a set of decision variables in the system, each of which has a particular value. A solution can be feasible or infeasible. A feasible solution to an optimization problem is one that satisfies all system constraints simultaneously; that is, a feasible solution is an element in the feasible solution space defined by the constraint equations. The solution to an optimization problem is the one in the feasible solution space that yields the most desirable objective function value, called the optimum feasible solution or simply the optimum solution.
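
The general form of Eqs. (8.1a-c) and the notion of a feasible solution can be illustrated by a tiny hypothetical instance, solved here by exhaustive search over a discretized space (all numbers are invented for illustration):

```python
# A hedged sketch of the general model (8.1a-c): maximize x0 = f(x1, x2)
# subject to one equality constraint g(x) = 0 and bounding constraints.

def f(x1, x2):
    return 3 * x1 + 2 * x2          # objective function (8.1a)

def g(x1, x2):
    return x1 + x2 - 10             # constraint equation (8.1b)

def feasible(x1, x2):
    # a feasible solution satisfies all constraints simultaneously,
    # including the bounding constraints (8.1c)
    return g(x1, x2) == 0 and 0 <= x1 <= 8 and 0 <= x2 <= 8

# the feasible solution space, discretized on integer points
candidates = [(x1, x2) for x1 in range(9) for x2 in range(9)
              if feasible(x1, x2)]

# the optimum solution is the feasible point with the best objective value
best = max(candidates, key=lambda x: f(*x))
print(best, f(*best))  # (8, 2) with x0 = 28
```

Points such as (9, 1) satisfy the equality constraint but violate the bounding constraint on x1, so they are infeasible and never considered.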

The feasible solution space of an optimization model can be classified as either convex or nonconvex. Schematic sketches of convex and nonconvex feasible solution spaces are shown in Fig. 8.1. The convexity of the feasible region of an optimization problem dictates whether the optimal solution obtained is a global optimum or a local optimum. A global optimum is the best solution to the problem within the entire feasible space, whereas a local optimum is the best solution to the problem within the neighborhood of the solution point.