Problem Solutions For POWER GENERATION OPERATION AND CONTROL
Allen J. Wood Bruce F. Wollenberg Gerald B. Sheblé
August 2013
Solutions to homework problems, Chapter 3
Preface
We trust that these homework problem solutions will prove helpful in teaching a course with our text. If you find typographical errors please send us corrections via John Wiley.
Allen J. Wood Bruce F. Wollenberg Gerald B. Sheblé
Problem 3.7
NOTE: For instruction on gradient methods, see the instructions ahead of Problem 3.10.
Problem 3.10 instructions
Problem 3.10 requires that the student know something about the gradient and Newton methods of solution. These were purposely eliminated from the third edition of the text, so the following pages are included here. If you wish to assign Problem 3.10, use these pages from the second edition to teach the gradient and Newton methods.
Gradient Methods of Economic Dispatch
Note that the lambda-search technique always requires that one be able to find the power output of a generator, given an incremental cost for that generator. This is possible when the cost function is quadratic, or when the incremental cost function is represented by a piecewise linear function. However, it is often the case that the cost function is much more complex, such as the one below:

$$F(P) = A + BP + CP^2 + D \exp\left[\frac{P - E}{F}\right]$$

In this case, we shall propose that a more basic method of solution for the optimum be used.
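As a concrete illustration of why such a cost curve defeats the lambda-search approach, the short Python sketch below evaluates a cost function of this form and its incremental cost. The A through F coefficients are made-up placeholder values, not data from the text; the point is only that dF/dP contains both a linear and an exponential term in P, so it cannot be inverted for P in closed form.

```python
import math

# Illustrative coefficients only; these are not data from the text.
A, B, C, D, E, F = 500.0, 7.0, 0.002, 50.0, 300.0, 100.0

def fuel_cost(P):
    """Fuel cost with an exponential term added to the usual quadratic."""
    return A + B * P + C * P**2 + D * math.exp((P - E) / F)

def incremental_cost(P):
    # dF/dP = B + 2*C*P + (D/F)*exp((P - E)/F): this cannot be solved for P
    # in closed form, which is what defeats a simple lambda search.
    return B + 2.0 * C * P + (D / F) * math.exp((P - E) / F)

print(fuel_cost(250.0), incremental_cost(250.0))
```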
Gradient Search
This method works on the principle that the minimum of a function, $f(\mathbf{x})$, can be found by a series of steps that always take us in a downward direction. From any starting point, $\mathbf{x}^0$, we may find the direction of "steepest descent" by noting that the gradient of $f$, i.e.,

$$\nabla f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} \\ \vdots \\ \dfrac{\partial f}{\partial x_n} \end{bmatrix}$$

always points in the direction of maximum ascent. Therefore, if we want to move in the direction of maximum descent, we negate the gradient. Then we should go from $\mathbf{x}^0$ to $\mathbf{x}^1$ using:

$$\mathbf{x}^1 = \mathbf{x}^0 - \alpha \nabla f$$

where $\alpha$ is a scalar to allow us to guarantee that the process converges. The best value of $\alpha$ must be determined by experiment.
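The update $\mathbf{x}^1 = \mathbf{x}^0 - \alpha \nabla f$ is easy to sketch in code. The toy quadratic below and the step size $\alpha = 0.1$ are our own illustrative choices, not from the text; the loop simply steps against the gradient, and too large an $\alpha$ makes the iteration diverge rather than converge.

```python
# Minimal steepest-descent sketch on a toy function f(x, y) = (x-3)^2 + 2(y+1)^2.
def grad_f(x, y):
    return 2.0 * (x - 3.0), 4.0 * (y + 1.0)

x, y, alpha = 0.0, 0.0, 0.1
for _ in range(50):
    gx, gy = grad_f(x, y)
    x, y = x - alpha * gx, y - alpha * gy    # x1 = x0 - alpha * grad f
print(round(x, 4), round(y, 4))              # approaches the minimum at (3, -1)
```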
Economic Dispatch by Gradient Search
In the case of power system economic dispatch this becomes

$$f = \sum_{i=1}^{N} F_i(P_i)$$

and the object is to drive the function to its minimum. However, we have to be concerned with the constraint function:

$$\phi = P_{\text{load}} - \sum_{i=1}^{N} P_i = 0$$
To solve the economic dispatch problem, which involves minimizing the objective function while meeting the equality constraint, we must apply the gradient technique directly to the Lagrange function itself.
The Lagrange function is

$$L = \sum_{i=1}^{N} F_i(P_i) + \lambda\left(P_{\text{load}} - \sum_{i=1}^{N} P_i\right)$$
and the gradient of this function is

$$\nabla L = \begin{bmatrix} \dfrac{d}{dP_1}F_1(P_1) - \lambda \\[4pt] \dfrac{d}{dP_2}F_2(P_2) - \lambda \\[4pt] \dfrac{d}{dP_3}F_3(P_3) - \lambda \\[4pt] P_{\text{load}} - \displaystyle\sum_{i=1}^{N} P_i \end{bmatrix}$$

The problem with this formulation is the lack of a guarantee that the new points generated at each step will lie on the surface $\phi = 0$. We shall see that this can be overcome by a simple variation of the gradient method. The economic dispatch algorithm requires a starting value of $\lambda$ and starting values for $P_1$, $P_2$, and $P_3$. The gradient of $L$ is calculated as above and the new values of $\lambda$, $P_1$, $P_2$, and $P_3$, etc., are found from

$$\mathbf{x}^1 = \mathbf{x}^0 - \alpha (\nabla L)$$

where the vector $\mathbf{x}$ is

$$\mathbf{x} = \begin{bmatrix} P_1 \\ P_2 \\ P_3 \\ \lambda \end{bmatrix}$$

EXAMPLE 1
Given the generator cost functions found in Example 3A in the text, solve for the economic dispatch of generation with a total load of 800 MW. Using $\alpha = 1.00$ and starting from $P_1^0 = 300$ MW, $P_2^0 = 200$ MW, and $P_3^0 = 300$ MW, we set the initial value of $\lambda$ equal to the average of the incremental costs of the generators at their starting generation values. That is:

$$\lambda^0 = \frac{1}{3}\sum_{i=1}^{3} \frac{d}{dP_i}F_i(P_i^0)$$

This value is 9.4484. The progress of the gradient search is shown in the table below. The table shows that the
iterations have led to no solution at all. Attempts to use this formulation will result in difficulty, as the gradient cannot guarantee that the adjustment to the generators will result in a schedule that meets the correct total load of 800 MW.

Economic Dispatch by Gradient Method
Iteration    P1        P2        P3        Ptotal      λ         Cost
1            300       200       300       800         9.4484    7938.0
2            300.59    200.82    298.59    800         9.4484    7935
3            301.18    201.64    297.19    800.0086    9.4484    7932
4            301.76    202.45    295.8     800.025     9.4570    7929.3
5            302.36    203.28    294.43    800.077     9.4826    7926.9
10           309.16    211.19    291.65    811.99      16.36     8025.6
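A short Python sketch of this gradient-on-the-Lagrangian iteration is given below. It assumes the Example 3A quadratic cost data (F1 = 561 + 7.92 P1 + 0.001562 P1², F2 = 310 + 7.85 P2 + 0.00194 P2², F3 = 78 + 7.97 P3 + 0.00482 P3²), the step size α = 1.00, and the starting point of Example 1; the function and variable names are ours. Running it shows the same behavior as the table: the schedule drifts off the 800 MW balance rather than converging.

```python
# Sketch of the (unreduced) gradient method applied to the Lagrange function.
coeff = [(561.0, 7.92, 0.001562),   # (a, b, c) for F(P) = a + b*P + c*P**2
         (310.0, 7.85, 0.00194),
         (78.0, 7.97, 0.00482)]

def incr_cost(i, P):
    a, b, c = coeff[i]
    return b + 2.0 * c * P

def total_cost(P):
    return sum(a + b * Pi + c * Pi**2 for (a, b, c), Pi in zip(coeff, P))

P_load, alpha = 800.0, 1.00
P = [300.0, 200.0, 300.0]
lam = sum(incr_cost(i, P[i]) for i in range(3)) / 3.0   # 9.4484

for it in range(10):
    grad_P = [incr_cost(i, P[i]) - lam for i in range(3)]   # dFi/dPi - lambda
    grad_lam = P_load - sum(P)                              # constraint mismatch
    P = [P[i] - alpha * grad_P[i] for i in range(3)]        # x1 = x0 - alpha*grad(L)
    lam = lam - alpha * grad_lam
    print(it + 1, [round(p, 2) for p in P], round(lam, 4), round(total_cost(P), 1))
# The iterates drift away from the P1 + P2 + P3 = 800 MW surface instead of converging.
```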
A simple variation of this technique is to realize that one of the generators is always a dependent variable and to remove it from the problem. In this case, we pick $P_3$ and use the following:

$$P_3 = 800 - P_1 - P_2$$

Then the total cost, which is to be minimized, is

$$\text{Cost} = F_1(P_1) + F_2(P_2) + F_3(P_3) = F_1(P_1) + F_2(P_2) + F_3(800 - P_1 - P_2)$$

Note that this function stands by itself as a function of two variables with no load-generation balance constraint (and no $\lambda$). The cost can be minimized by a gradient method, and in this case the gradient is

$$\nabla \text{Cost} = \begin{bmatrix} \dfrac{d}{dP_1}\text{Cost} \\[4pt] \dfrac{d}{dP_2}\text{Cost} \end{bmatrix} = \begin{bmatrix} \dfrac{dF_1}{dP_1} - \dfrac{dF_3}{dP_3} \\[4pt] \dfrac{dF_2}{dP_2} - \dfrac{dF_3}{dP_3} \end{bmatrix}$$

Note that this gradient goes to the zero vector when the incremental cost at generator 3 is equal to that at generators 1 and 2. The gradient steps are performed in the same manner as previously, where

$$\mathbf{x}^1 = \mathbf{x}^0 - \alpha \nabla \text{Cost} \quad \text{and} \quad \mathbf{x} = \begin{bmatrix} P_1 \\ P_2 \end{bmatrix}$$

Each time a gradient step is made, the generation at generator 3 is set to 800 minus the sum of the generation at generators 1 and 2. This method is often called the "reduced gradient" because of the smaller number of variables.
EXAMPLE 2
Reworking Example 1 with the reduced gradient, we obtain the results shown in the table below. This solution is much more stable and is converging on the optimum solution.
Reduced Gradient Results (α = 10)

Iteration    P1        P2        P3        Ptotal    Cost
1            300       200       300       800       7938.0
2            320.04    222.36    257.59    800       7858.1
3            335.38    239.76    224.85    800       7810.4
4            347.08    253.33    199.58    800       7781.9
5            355.97    263.94    180.07    800       7764.9
10           380.00    304.43    115.56    800       7739.2
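The reduced-gradient iteration of Example 2 can be sketched in the same way. Again the Example 3A quadratic cost data are assumed and the names are ours; $P_3$ is the dependent variable and α = 10, as above.

```python
# Reduced-gradient sketch: P3 is the dependent variable, P3 = 800 - P1 - P2,
# so only P1 and P2 are adjusted by the gradient step.
coeff = [(561.0, 7.92, 0.001562),
         (310.0, 7.85, 0.00194),
         (78.0, 7.97, 0.00482)]

def incr_cost(i, P):
    a, b, c = coeff[i]
    return b + 2.0 * c * P

def total_cost(P):
    return sum(a + b * Pi + c * Pi**2 for (a, b, c), Pi in zip(coeff, P))

P_load, alpha = 800.0, 10.0
P1, P2 = 300.0, 200.0

for it in range(10):
    P3 = P_load - P1 - P2                      # dependent variable
    g1 = incr_cost(0, P1) - incr_cost(2, P3)   # d(Cost)/dP1
    g2 = incr_cost(1, P2) - incr_cost(2, P3)   # d(Cost)/dP2
    P1, P2 = P1 - alpha * g1, P2 - alpha * g2
    P3 = P_load - P1 - P2
    print(it + 1, round(P1, 2), round(P2, 2), round(P3, 2),
          round(total_cost([P1, P2, P3]), 1))
# After ten steps the schedule is near P1 = 380, P2 = 304, P3 = 116 MW,
# still approaching the true optimum.
```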
Newton's Method of Economic Dispatch
We may wish to go a further step beyond the simple gradient method and try to solve the economic dispatch by observing that the aim is always to drive

$$\nabla L_{\mathbf{x}} = 0$$

Since this is a vector function, we can formulate the problem as one of finding the correction that exactly drives the gradient to zero (i.e., to a vector, all of whose elements are zero). We know how to find this, however, since we can use Newton's method. Newton's method for a function of more than one variable is developed as follows. Suppose we wish to drive the function $\mathbf{g}(\mathbf{x})$ to zero. The function $\mathbf{g}$ is a vector and the unknowns, $\mathbf{x}$, are also vectors. Then, to use Newton's method, we observe

$$\mathbf{g}(\mathbf{x} + \Delta\mathbf{x}) = \mathbf{g}(\mathbf{x}) + [\mathbf{g}'(\mathbf{x})]\,\Delta\mathbf{x} = 0$$

If we let the function be defined as

$$\mathbf{g}(\mathbf{x}) = \begin{bmatrix} g_1(x_1, x_2, x_3) \\ g_2(x_1, x_2, x_3) \\ g_3(x_1, x_2, x_3) \end{bmatrix}$$

then

$$[\mathbf{g}'(\mathbf{x})] = \begin{bmatrix} \dfrac{\partial g_1}{\partial x_1} & \dfrac{\partial g_1}{\partial x_2} & \dfrac{\partial g_1}{\partial x_3} \\[6pt] \dfrac{\partial g_2}{\partial x_1} & \dfrac{\partial g_2}{\partial x_2} & \dfrac{\partial g_2}{\partial x_3} \\[6pt] \dfrac{\partial g_3}{\partial x_1} & \dfrac{\partial g_3}{\partial x_2} & \dfrac{\partial g_3}{\partial x_3} \end{bmatrix}$$

which is the familiar Jacobian matrix. The adjustment at each step is then:

$$\Delta\mathbf{x} = -[\mathbf{g}'(\mathbf{x})]^{-1}\,\mathbf{g}(\mathbf{x})$$
Now, if we let the $\mathbf{g}$ function be the gradient vector $\nabla L_{\mathbf{x}}$, we get

$$\Delta\mathbf{x} = -\left[\frac{\partial}{\partial \mathbf{x}} \nabla L_{\mathbf{x}}\right]^{-1} \nabla L_{\mathbf{x}}$$

For our economic dispatch problem this takes the form

$$L = \sum_{i=1}^{N} F_i(P_i) + \lambda\left(P_{\text{load}} - \sum_{i=1}^{N} P_i\right)$$

and $\nabla L$ is as it was defined before. The Jacobian matrix now becomes one made up of second derivatives and is called the Hessian matrix:

$$[H] = \begin{bmatrix} \dfrac{\partial^2 L}{\partial x_1^2} & \dfrac{\partial^2 L}{\partial x_1 \partial x_2} & \cdots \\[6pt] \dfrac{\partial^2 L}{\partial x_2 \partial x_1} & \ddots & \\[6pt] \vdots & & \\ \dfrac{\partial^2 L}{\partial \lambda \partial x_1} & \cdots & \end{bmatrix}$$

Generally, Newton's method will solve for a correction that is much closer to the minimum generation cost in one step than would the gradient method.

EXAMPLE 3
In this example we shall use Newton's method to solve the same economic dispatch as in Examples 1 and 2 above. The gradient is the same as in Example 1; the Hessian matrix is

$$[H] = \begin{bmatrix} \dfrac{d^2 F_1}{dP_1^2} & 0 & 0 & -1 \\[6pt] 0 & \dfrac{d^2 F_2}{dP_2^2} & 0 & -1 \\[6pt] 0 & 0 & \dfrac{d^2 F_3}{dP_3^2} & -1 \\[6pt] -1 & -1 & -1 & 0 \end{bmatrix}$$

In this example, we shall simply set the initial $\lambda$ equal to 0, and the initial generation values will be the same as in Example 1 as well. The gradient of the Lagrange function is
$$\nabla L = \begin{bmatrix} 8.8572 \\ 8.6260 \\ 10.8620 \\ 0 \end{bmatrix}$$

The Hessian matrix is

$$[H] = \begin{bmatrix} 0.0031 & 0 & 0 & -1 \\ 0 & 0.0039 & 0 & -1 \\ 0 & 0 & 0.0096 & -1 \\ -1 & -1 & -1 & 0 \end{bmatrix}$$

Solving for the correction to the $\mathbf{x}$ vector and making the correction, we obtain

$$\mathbf{x}^1 = \begin{bmatrix} P_1 \\ P_2 \\ P_3 \\ \lambda \end{bmatrix} = \begin{bmatrix} 369.6871 \\ 315.6965 \\ 114.6164 \\ 9.0749 \end{bmatrix}$$

and a total generation cost of 7738.8. Note that no further steps are necessary, as Newton's method has solved in one step. When the generation cost functions are quadratic and no generation limits are reached, Newton's method will solve in one step. We have introduced the gradient, reduced gradient, and Newton methods here mainly as a way to show the variations of solution of the generation economic dispatch problem. For many applications, the lambda-search technique is the preferred choice. However, in later chapters, when we introduce the optimal power flow, the gradient and Newton formulations become necessary.
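The one-step Newton solution of Example 3 can be checked with the sketch below. It again assumes the Example 3A quadratic cost data, builds the gradient and Hessian shown above, and solves H Δx = −∇L with a small Gaussian-elimination helper (written out so no external library is needed); all names are ours.

```python
# Newton's-method sketch for the same dispatch: one solve of H * dx = -grad(L).
coeff = [(561.0, 7.92, 0.001562),
         (310.0, 7.85, 0.00194),
         (78.0, 7.97, 0.00482)]
P_load = 800.0
P = [300.0, 200.0, 300.0]
lam = 0.0

grad = [coeff[i][1] + 2.0 * coeff[i][2] * P[i] - lam for i in range(3)]
grad.append(P_load - sum(P))                 # [8.8572, 8.6260, 10.8620, 0]

H = [[0.0] * 4 for _ in range(4)]            # Hessian of the Lagrange function
for i in range(3):
    H[i][i] = 2.0 * coeff[i][2]              # d2Fi/dPi^2
    H[i][3] = H[3][i] = -1.0                 # cross terms with lambda
# H[3][3] stays 0

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (a 4 x 4 system here)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

dx = solve(H, [-g for g in grad])
P = [P[i] + dx[i] for i in range(3)]
lam += dx[3]
print(P, lam)   # about [369.69, 315.70, 114.62] MW and lambda = 9.07, in one step
```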
Problem 3.11 continued
Problem 3.13 continued

Problem 3.13 a
[Figure: Input vs. power output; fuel rate [BTU/h] versus P [MW] for Unit 1, Unit 2, and Unit 3]

Problem 3.13 b
[Figure: Incremental cost vs. power output; incremental heat rate [BTU/MWh] versus P [MW] for Unit 1, Unit 2, and Unit 3]
Problem 3.13 c
[Figure: Total fuel, all units, vs. power output; fuel rate [BTU/h] versus P [MW]]

Numerical solutions comparison (prob3_13):

dynamic programming
demand: 100.000000   cost: 1663.596000   unit1: 40.00   unit2: 40.00   unit3: 20.00
demand: 140.000000   cost: 2306.379660   unit1: 60.00   unit2: 52.00   unit3: 28.00
demand: 180.000000   cost: 3101.291160   unit1: 76.00   unit2: 67.00   unit3: 37.00
demand: 220.000000   cost: 3990.412980   unit1: 82.00   unit2: 72.00   unit3: 66.00
demand: 260.000000   cost: 4975.289625   unit1: 97.00   unit2: 85.00   unit3: 78.00

lambda iteration
demand: 100.000000   cost: 1663.789824   unit1: 39.29   unit2: 40.71   unit3: 20.00
demand: 140.000000   cost: 2306.503107   unit1: 60.85   unit2: 51.78   unit3: 27.36
demand: 180.000000   cost: 3101.336569   unit1: 76.45   unit2: 66.80   unit3: 36.75
demand: 220.000000   cost: 3990.448568   unit1: 81.89   unit2: 71.70   unit3: 66.41
demand: 260.000000   cost: 4975.325797   unit1: 96.98   unit2: 85.29   unit3: 77.73
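For reference, the "lambda iteration" results above come from the lambda-search idea: pick λ, load each unit to the point where its incremental cost equals λ (respecting limits), and bisect on λ until total output matches the demand. Problem 3.13's own heat-rate data are not reproduced in this extract, so the sketch below uses the Example 3A quadratic cost curves and limits purely as stand-ins to show the loop structure; the names and the 800 MW demand are our own choices.

```python
# Lambda-iteration sketch (bisection on lambda) with placeholder unit data.
units = [  # (b, c, Pmin, Pmax) for F(P) = a + b*P + c*P**2
    (7.92, 0.001562, 150.0, 600.0),
    (7.85, 0.00194, 100.0, 400.0),
    (7.97, 0.00482, 50.0, 200.0),
]

def output_at(lam):
    """Each unit's output for a given lambda, clamped to its limits."""
    P = []
    for b, c, pmin, pmax in units:
        p = (lam - b) / (2.0 * c)          # invert dF/dP = b + 2cP = lambda
        P.append(min(max(p, pmin), pmax))
    return P

def dispatch(demand, tol=1e-6):
    lo, hi = 0.0, 50.0                     # bracketing values for lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(output_at(lam)) > demand:
            hi = lam
        else:
            lo = lam
    return output_at(0.5 * (lo + hi))

print(dispatch(800.0))   # about [369.7, 315.7, 114.6] MW, consistent with the
                         # Newton solution of the Problem 3.10 example above
```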
Problem 3.14
Problem 3.15
Problem 3.15, continued
Problem 3.15, continued
Problem 3.16
Problem 3.17, part a
Table 2: Iterations for problem 3.17, part a