Simulated Annealing: Basics and application examples
By Ricardo Alejos
I. Introduction
Finding the global minimum can be a hard optimization problem, since objective functions can have many local minima. A procedure for solving such problems should sample values of the objective function in such a way as to have a high probability of finding a near-optimal solution, and it should lend itself to efficient implementation. These criteria are met by the Simulated Annealing method, which was introduced by Kirkpatrick et al. and independently by Cerny in the early 1980s. Simulated Annealing (SA) is a stochastic computational technique derived from statistical mechanics for finding near-globally-minimum-cost solutions to large optimization problems [1].
Statistical mechanics is the study of the behavior of large systems of interacting components, such as atoms in a fluid, in thermal equilibrium at a finite temperature. If the system is in thermal equilibrium, the probability that its particles change from one energy state to another is given by the Boltzmann distribution, which depends on the system temperature and on the magnitude of the intended energy change: higher temperatures allow random changes, while low temperatures tend to allow only changes that decrease the energy state. In order to achieve a low-energy state, one must use an annealing process, which consists of elevating the system temperature and then gradually lowering it, spending enough time at each temperature to reach thermal equilibrium. In contrast to many classical optimization methods, this one is not based on gradients and does not have deterministic convergence: the same seed and parametric configuration may make the algorithm converge to a different solution from one run to another. This is due to the random way in which it decides the steps it takes towards the final candidate solution. Such behavior may not yield the most precise or optimal solution; however, it has other exploitable advantages: it can get unstuck from local optimum points while the algorithm is in a high-energy state, it can deal with noisy objective functions, and it can be used for combinatorial/discrete optimization, among others. All this with a small number of iterations and function evaluations in comparison to other optimization methods. When applicable, the SA algorithm can be used alternately with other methods to increase the accuracy of the final solution. In this document the basic theory of this algorithm is explained and some of its benefits are verified with practical examples.
II. Basic theory of Simulated Annealing
The name of this algorithm is inspired by metallurgy. In that discipline, annealing is a technique that involves heating a metal and cooling it down in a controlled manner, so as to increase the size of the solid's crystals and thereby reduce their defects. This notion of slow cooling is implemented in SA to gradually decrease the probability of accepting local optimum values of the objective function.
Each state (solution candidate) can be considered as a node which is interconnected to other nodes. The current node may change from one position to another depending on the change of magnitude of the objective function. This principle is implemented in such a way that worse candidates are less likely to be accepted than better ones, favoring the creation of a path towards the optimal solution. Each next-node candidate is generated randomly. A popular variant of the original SA also considers a temperature-dependent step size, which balances the chances of escaping from local optimum values against the precision of the final solution candidate. Let's say our current node is c and the next generated candidate is n. This step has a probability P(c,n) of being taken and a probability 1-P(c,n) of being rejected. When a candidate is accepted, the next thing to do is simply make c = n; when a candidate is rejected, a new one is generated. After this the process is repeated (all these details can be better understood by looking at the pseudo-code included in this document). The probability function is described by the mathematical expression in (1).
P(c, n) = 1 / (1 + e^(ΔE/T))    (1)
where ΔE is the change of the objective function value from c to n, and T is the current value of the algorithm's temperature parameter. Notice that this function tends to a value of 0.5 when T >> ΔE, and that it varies from ~0 to ~1 in other cases. Therefore, the higher the temperature the more random the algorithm becomes (which matches its natural model), and every candidate gets the same chance of being chosen or rejected. When T → 0 the behavior of P(c,n) becomes very similar to a time-reversed unit step function: its value goes rapidly to 0 when ΔE > 0 and to 1 when ΔE < 0. This means that, when the temperature goes very low, only good candidates will be chosen and all bad candidates will be rejected. This behavior is illustrated in Figure 1 for different values of T. The values of T are determined by a custom function of time (or iterations) that is called the cooling function or temperature profile. Changing the temperature profile can lead to different properties of the SA. For example, a simple decreasing ramp may yield different performance than a slower transition function, and a periodic temperature profile opens the possibility of abandoning local optimum values to find a better one. The graphs in Figure 2 show different temperature profiles with the properties mentioned in this paragraph. The best choice depends on the characteristics of each problem and the strategy that the engineer intends to use. The pseudo-code can be consulted in TABLE 1 [2].
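The limiting behaviors just described are easy to verify numerically. The sketch below is in Python rather than the MATLAB used in the appendix, and the function name is illustrative:

```python
import math

def acceptance_probability(delta_e, temperature):
    """Step-acceptance probability from equation (1)."""
    return 1.0 / (1.0 + math.exp(delta_e / temperature))

# Very high temperature: every candidate has a ~50% chance either way.
print(acceptance_probability(1.0, 1000.0))   # ~0.5
# Very low temperature: improving steps (dE < 0) are almost surely taken...
print(acceptance_probability(-1.0, 0.01))    # ~1.0
# ...and worsening steps (dE > 0) almost surely rejected.
print(acceptance_probability(1.0, 0.01))     # ~0.0
```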
The current implementation
As mentioned before, this basic version can be altered to exploit other features such as custom temperature profiles, a controlled/variable step size, returning the best candidate instead of the last one (which allows better recursion techniques), controlled candidate ranges, a custom exit criterion (the original algorithm exits only after the entire temperature profile has been swept), etcetera. Most of these features have been added to the algorithm version used along this study. The MATLAB code used for this work can be found in TABLE 2.
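As a rough illustration of how those features fit together, here is a minimal Python sketch of the loop. It is not the MATLAB of TABLE 2; all names, the linear step-size rule, and the bound handling are illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, temperatures, step_range, bounds, seed=0):
    """Sketch of SA with the extras described above: a custom temperature
    profile, a temperature-dependent step size, best-so-far tracking, and
    clamped candidate ranges."""
    rng = random.Random(seed)
    s_max, s_min = step_range
    t_max = max(temperatures)
    current, f_current = list(x0), f(x0)
    best, f_best = list(x0), f_current
    for t in temperatures:
        # Larger steps while hot, shrinking linearly as the system cools.
        step = s_min + (s_max - s_min) * t / t_max
        candidate = [
            min(max(x + rng.uniform(-step, step), lo), hi)  # keep in range
            for x, (lo, hi) in zip(current, bounds)
        ]
        f_candidate = f(candidate)
        z = (f_candidate - f_current) / t          # dE/T from equation (1)
        p = 0.0 if z > 700 else 1.0 / (1.0 + math.exp(z))  # overflow guard
        if p > rng.random():
            current, f_current = candidate, f_candidate
            if f_current < f_best:                 # remember the best node
                best, f_best = list(current), f_current
    return best, f_best

# Example run on a Bowl-like quadratic with minimum at [6, 4.5]:
bowl = lambda x: (x[0] - 6) ** 2 / 25 + (x[1] - 4.5) ** 2 / 4
profile = [1 - k / 70 for k in range(70)]   # 70-point downward ramp
x_opt, f_opt = simulated_annealing(bowl, [1, 1], profile, (4.0, 0.1),
                                   [(-20, 20), (-20, 20)])
```

With a fixed seed the run is reproducible; changing the seed changes the path, which mirrors the non-deterministic convergence discussed in the introduction.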
III. Practical examples
This section presents a set of practical examples that make evident the advantages and disadvantages of these methods and give an idea of how they complement each other.
Case 1: Bowl function
The Bowl function is a well-behaved and easy-to-optimize function that serves as a good first test for optimization algorithms. The mathematical expression that describes it is written as (2). Its minimum can be calculated analytically to be x* = [6, 4.5]^T.

y = (x1 - 6)^2 / 25 + (x2 - 4.5)^2 / 4    (2)
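A quick numeric check of the Bowl function (a Python sketch, assuming the quadratic form with denominators 25 and 4 and the analytic minimum [6, 4.5]):

```python
def bowl(x1, x2):
    """Bowl function of equation (2): a convex quadratic."""
    return (x1 - 6) ** 2 / 25 + (x2 - 4.5) ** 2 / 4

print(bowl(6, 4.5))   # 0.0: the analytic minimum x* = [6, 4.5]
print(bowl(1, 1))     # 4.0625: value at the seed [1, 1]
```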
In order to assess the Simulated Annealing algorithm's performance, it is compared with the Conjugate Gradients FR and the Nelder-Mead algorithms. The results of applying them to the Bowl function are shown in TABLE 3. Also, as a visual aid on how Simulated Annealing chooses its next steps, please look at Figure 3. With the purpose of exploring the behavior far away from the optimum point, let's also trigger the algorithm using the point [-20, 60] as the seed value. The results can be checked in TABLE 4, and the Simulated Annealing algorithm's evolution can be observed in Figure 4.
Case 2: Bowl function with random noise
This test case basically consists of adding a relatively small amount of noise to the objective function. In this case it is Gaussian noise centered on the function value at each point, with a standard deviation of 0.5. Just as before, a set of experiments was made using the same algorithms to assess the performance of each one compared to the others. The results are shown in TABLE 5, while the Simulated Annealing algorithm's evolution is visually described in Figure 5.
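Adding such noise to an objective takes one wrapper. This Python sketch (names are illustrative) centers Gaussian noise with standard deviation 0.5 on the true function value:

```python
import random

def noisy(f, sigma=0.5, rng=random.Random(0)):
    """Wrap an objective so each evaluation returns the true value plus
    Gaussian noise centered on it (sigma = 0.5 as in this case)."""
    return lambda *x: f(*x) + rng.gauss(0.0, sigma)

bowl = lambda x1, x2: (x1 - 6) ** 2 / 25 + (x2 - 4.5) ** 2 / 4
noisy_bowl = noisy(bowl)
# Two evaluations at the same point now return different values:
a, b = noisy_bowl(6, 4.5), noisy_bowl(6, 4.5)
```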
Case 3: Multiple local minimum points
For this test, the function to use is periodic and exponentially decaying. With the purpose of illustrating it using graphs, the exercise keeps the function's dimensionality at two. The mathematical expression that describes this function is (3).

y = sin(4x1/3) · cos(4x2/13) · e^((x1+x2)/10)    (3)
Let's also limit the solution space to the range 0 to 12 for both x1 and x2. With this scenario the analytic solution happens at [11.4285, 9.8035]. Such limits are incorporated into the problem with "punishment functions", which increase the value of the objective function as it goes away from the ranges of interest. The results of such experiments, as well as the Simulated Annealing evolution, are shown in TABLE 6 and Figure 6 respectively.
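A punishment (penalty) function of the kind described can be sketched as follows; the quadratic form and the weight of 100 are illustrative assumptions, not the paper's exact choice:

```python
def penalized(f, bounds, weight=100.0):
    """Add a quadratic "punishment" term that grows as a point leaves the
    range of interest (the weight is an illustrative choice)."""
    def g(*x):
        penalty = 0.0
        for v, (lo, hi) in zip(x, bounds):
            if v < lo:
                penalty += (lo - v) ** 2   # below the range
            elif v > hi:
                penalty += (v - hi) ** 2   # above the range
        return f(*x) + weight * penalty
    return g

# Zero objective limited to 0 <= x1, x2 <= 12, as in Case 3:
inside = penalized(lambda x1, x2: 0.0, [(0, 12), (0, 12)])
print(inside(5, 5))    # 0.0   (inside the region: no punishment)
print(inside(13, 5))   # 100.0 (one unit outside on x1)
```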
Case 4: Low-pass filter on micro-strip technology
This exercise consists of finding the size parameters that make the following low-pass filter match its specification requirements (which are known to be strict). Such requirements are shown below this paragraph, and they have to be met by varying x = [W1, L1, S1]^T while preserving z = [H, ε, W, L, tan(δ), σ, t]:
The value of z is [0.794 mm, 2.2, 2.45 mm, 12.25 mm, 0.01, 5.8×10^7 S/m, 15.24 µm]. These dimensions and the filter geometry can be visualized in Figure 7. This problem can be solved using a mini-max formulation, where the objective function is the maximum error with respect to the spec requirements. In this kind of formulation, the maximum error has to become less than zero to meet all the specs. As in the last example, the function has many local minima, which made other algorithms converge into them. For this problem the SA algorithm is fed with a periodic (re-heating) temperature profile. Such profiles make the algorithm recover a big step size and randomness after it was near convergence in the first cycle, and that is how SA can escape the local minima. TABLE 7 shows the results of each of the three algorithms we have been using for the comparison. Even though the Nelder-Mead algorithm was able to achieve a negative maximum error, it more than doubled the cost that the SA algorithm took to find its nearby solution. For such cases the designer has to decide on the tradeoff between computation cost and solution quality when choosing an optimization algorithm. The Conjugate Gradients algorithm did not work as well as the other two algorithms. Refer to Figure 8 to visualize the algorithm evolution in terms of the maximum design error.
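A mini-max objective of this kind can be sketched generically; the spec encoding below is an illustrative assumption (the actual filter response model is not reproduced here):

```python
def max_spec_error(response, specs):
    """Mini-max objective: the worst error against the spec limits.
    Each spec is (index, limit, sense); "max" means the response must stay
    below the limit, "min" that it must stay above it. A negative result
    means every spec is met."""
    errors = []
    for i, limit, sense in specs:
        if sense == "max":
            errors.append(response[i] - limit)
        else:
            errors.append(limit - response[i])
    return max(errors)

# Toy response (dB) at three frequencies with pass-band/stop-band limits:
resp = [-0.5, -1.0, -25.0]
specs = [(0, -3.0, "min"), (1, -3.0, "min"), (2, -20.0, "max")]
print(max_spec_error(resp, specs))   # -2.0: all specs met with margin
```

Minimizing this single scalar drives the worst violation down; once it crosses zero, every spec is satisfied simultaneously.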
Case 5: Noisy filter optimization
In this last case, the same problem as Case 4 is solved but with added complexity: random noise was added to the circuit response. This noise has its mean centered on the true function value, with a standard deviation of 0.1. Refer to the experiment results in TABLE 8. Both seeds make the SA algorithm find near-to-zero solutions; however, Nelder-Mead has an acceptable performance just in the first one (the second one diverges by almost 90% of the maximum error). So another valuable application of the SA algorithm is not just direct optimization over complex problems, but also serving as a good seed finder for finer optimization algorithms. In this case, taking the last SA solution and then applying the Nelder-Mead algorithm to it gets a final objective function value of 0.0646 (though it still evaluates the function 601 more times).
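The seed-finder idea can be illustrated with a toy refinement stage; here a crude pattern search stands in for Nelder-Mead (all names, step sizes, and the quadratic test function are illustrative):

```python
def refine(f, x0, step=0.5, shrink=0.5, iters=40):
    """Crude pattern search standing in for the finer optimizer: polish a
    rough seed (e.g. an SA output) by greedy coordinate moves, halving the
    step whenever no move improves the objective."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink   # tighten the search around the current point
    return x, fx

bowl = lambda x: (x[0] - 6) ** 2 / 25 + (x[1] - 4.5) ** 2 / 4
x_sa = [6.37, 5.74]        # pretend SA returned this rough candidate
x_fine, f_fine = refine(bowl, x_sa)
```

The coarse, cheap global stage hands off to a precise local stage, which is exactly the SA-then-Nelder-Mead pipeline described above.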
IV. Conclusions
The SA algorithm is a cheap optimization method compared with gradient-based methods and the Nelder-Mead algorithm. Its capability of locating good-enough solutions in a very small number of iterations makes it a tool that can be used for initial objective function exploration. Once a set of good candidate solutions is obtained, the algorithm can be adapted to finer steps and less randomness in order to achieve more precise solutions. Otherwise, its inexact solutions can be used as seed values for other optimization algorithms that would get stuck in local minimum points along the objective function if they were initiated using the original seed value. In order to increase the capability of the SA algorithm to escape local minimum points, periodic temperature profiles can be used, so the search recovers its solution mobility after settling on a candidate solution. If it happens to lose a global optimum due to this recovered randomness, it will still report the best point found along the search.
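A periodic (re-heating) profile like the 90-point, 3-cycle sawtooth of TABLE 6 can be generated in a couple of lines (Python sketch, illustrative names):

```python
def sawtooth_profile(points, cycles):
    """Periodic (re-heating) temperature profile: within each cycle the
    temperature ramps down linearly, then jumps back to the maximum, so the
    algorithm recovers step size and randomness after settling."""
    per_cycle = points // cycles
    return [1.0 - (k % per_cycle) / per_cycle for k in range(points)]

profile = sawtooth_profile(90, 3)   # 90 points, 3 cycles, as in TABLE 6
```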
Noisy functions are a problem frequently observed when real-world measurements are done, because all measurements have a range of uncertainty, which can be modeled as noise. Other methods used to solve this kind of problem are based on the idea of averaging a set of samples of the objective function evaluated at a fixed point. Such methods are effective in terms of finding a good candidate solution; however, they greatly increase the function evaluation cost, since the noise standard deviation only falls with the square root of the number of averaged samples (which does not happen with SA).
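The cost multiplication is easy to see with a counting wrapper (Python sketch; names are illustrative):

```python
import random

def averaged(f, samples):
    """Average `samples` raw evaluations per logical evaluation, counting
    the raw calls to make the cost multiplication visible."""
    calls = {"raw": 0}
    def g(*x):
        calls["raw"] += samples
        return sum(f(*x) for _ in range(samples)) / samples
    return g, calls

rng = random.Random(0)
noisy_f = lambda x: (x - 6) ** 2 + rng.gauss(0.0, 0.5)
f_avg, calls = averaged(noisy_f, 16)
f_avg(6.0)           # one "logical" evaluation...
print(calls["raw"])  # ...cost 16 raw function calls
```

Averaging 16 samples only cuts the noise standard deviation by a factor of 4, so every extra digit of noise suppression gets rapidly more expensive.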
V. References
[1] J. Gall, "Simulated Annealing," in Computer Vision: A Reference Guide, Tübingen, Germany: Springer, 2014, p. 898.
[2] P. Rossmanith, "Simulated Annealing," in Algorithms Unplugged, Springer, 2011, p. 406.
[3] E. Aarts et al., "Simulated Annealing," in Metaheuristic Procedures for Training Neural Networks, Springer, 2006.
VI. Appendix
Tables

TABLE 1. PSEUDO-CODE FOR THE SIMULATED ANNEALING ALGORITHM. NOTICE THAT THE ACCEPTANCE OF A NEW POINT HAPPENS WITH A PROBABILITY P GIVEN BY THE PROBABILITY FUNCTION (1).
Make the seed value our current node (c = x0)
Evaluate the objective function at the seed value (E0 = f(c))
For each temperature value T in a decreasing normalized set ({1 … 0}):
    Generate a new step candidate (n)
    Evaluate the objective function at the new step candidate (E1 = f(n))
    If P(c,n) > random(0,1):
        Accept the new candidate (c = n)
Output: final node value.

TABLE 2. MATLAB IMPLEMENTATION OF THE SIMULATED ANNEALING ALGORITHM WITH THE IMPROVEMENTS MENTIONED IN THE SECTION "The current implementation".
function [x_opt, f_val, XN, FN] = SimulatedAnnealing2(f, x0, t, s, l)
%{
Simulated Annealing - Optimization Algorithm
Inputs
  f  - Function to be optimized
  x0 - Seed value of the independent variable of "f"
  t  - Vector with temperature values
  s  - Step size [Maximum Minimum]
  l  - x limits [min(x); max(x)]
Outputs
  x_opt - x value that minimizes "f" to the found minimum
  f_val - value of "f" at x_opt
  XN    - x value history during the algorithm run
  FN    - f value history during the algorithm run
The higher the cost of f, the shorter the t vector should be.
%}
N = 1:1:length(t);            % iterator
P = @(DE,T) 1/(1+exp(DE/T));  % bigger when DE is more negative
S = @(T) (s(1)-s(2))/(max(t)-min(t))*(T-min(t))+s(2); % linear step size: s(2) at the lowest temperature, s(1) at the highest
xn = x0;
XN = zeros(length(t), length(x0));
FN = zeros(length(t), 1);
c = 0;
f0 = feval(f, xn);
fn = f0;
for n = N
xt = xn + xt;  % advance that step
if (sum(xt>l(1,:))==length(xt) && sum(xt
TABLE 3. RESULTS OF APPLYING THE CONJUGATE GRADIENTS FR, NELDER-MEAD AND SIMULATED ANNEALING ALGORITHMS TO THE BOWL FUNCTION FROM THE SEED [1, 1].

Algorithm | Seed value | Found solution | Function evaluations | Euclidean norm of error
Conjugate Gradients FR | [1, 1] | [6.0000, 4.4300] | 31 | 0.07
Nelder-Mead | [1, 1] | [6.0000, 4.5000] | 174 | 3.6161e-06
SA (70 elements in downward ramp profile) | [1, 1] | [6.3730, 5.7405] | 70 | 0.0855
TABLE 4. RESULTS OF APPLYING THE SAME ALGORITHMS TO THE BOWL FUNCTION FROM THE SEED [-20, 60].

Algorithm | Seed value | Found solution | Function evaluations | Euclidean norm of error
Conjugate Gradients FR | [-20, 60] | [6.0000, 4.4226] | 9688 | 0.0774
Nelder-Mead | [-20, 60] | [6.0000, 4.5000] | 180 | 1.3098e-06
SA (70 elements in downward ramp profile) | [-20, 60] | [5.8363, 4.8158] | 70 | 0.3557
SA (30 elements in downward ramp profile) | [-20, 60] | [6.0663, 4.0117] | 30 | 0.4928
TABLE 5. RESULTS FROM THE EXPERIMENTS MADE WITH THE NOISY BOWL FUNCTION USING THE CONJUGATE GRADIENTS FR, NELDER-MEAD AND SIMULATED ANNEALING (10 FUNCTION EVALUATIONS ONLY) ALGORITHMS. NOW IT BECOMES OBVIOUS THAT THE SIMULATED ANNEALING GOT A MUCH BETTER RESULT THAN THE OTHER ALGORITHMS, WHICH SIMPLY DIVERGE FROM THE ANALYTIC SOLUTION.

Algorithm | Seed value | Found solution | Function evaluations | Euclidean norm of error
Conjugate Gradients FR | [1, 1] | [5.8477, -1.0434] (maxed out iterations) | 37317 | 5.5455
Nelder-Mead | [1, 1] | [1.0590, 0.9977] (maxed out function evaluations) | 401 | 6.0563
SA (10 elements in downward ramp profile) | [1, 1] | [6.0581, 4.3855] | 10 | 0.1284
TABLE 6. RESULTS OF APPLYING THE CONJUGATE GRADIENTS FR, NELDER-MEAD AND SIMULATED ANNEALING ALGORITHMS TO MINIMIZE THE FUNCTION DESCRIBED BY (3). NOTICE NOW THAT THE SIMULATED ANNEALING ALGORITHM IS THE ONE CAPABLE OF GETTING A MUCH BETTER RESULT THAN THE OTHER ALGORITHMS. IT ALSO NEEDED MORE TRIES IN ORDER TO GET SUCH SOLUTIONS (EACH TRY THROWS A DIFFERENT RESULT, AS IT WORKS AS A STOCHASTIC PROCESS).

Algorithm | Seed value | Found solution | Function evaluations | Euclidean norm of error
SA (90 elements, 3-cycle sawtooth) | | [11.6230, 9.8049] | 90 | 0.1945
TABLE 7. RESULTS OF MINIMIZING THE ERROR WITH RESPECT TO THE DESIGN SPECIFICATIONS FOR THE FILTER DESCRIBED IN Case 4: Low-pass filter on micro-strip technology, USING THE CONJUGATE GRADIENTS FR, NELDER-MEAD AND SIMULATED ANNEALING ALGORITHMS. IN THIS CASE, THE NELDER-MEAD PERFORMED THE BEST, FOLLOWED BY THE SIMULATED ANNEALING ALGORITHM. THE CONJUGATE GRADIENTS FR METHOD DID NOT CONVERGE TO A SOLUTION EVEN AFTER MORE THAN 30000 FUNCTION EVALUATIONS.
Algorithm | Seed value (mm) | Found solution (mm), relative to seed | Maximum error value at solution | Function evaluations
Conjugate Gradients FR | [3.5, 5.6, 4.2] | [1.4002, 6.1634, 4.9724] | 0.1753 | 37317
Nelder-Mead | [3.5, 5.6, 4.2] | [0.2730, 1.1980, 0.9154] | -0.0077 | 163
Simulated Annealing | [3.5, 5.6, 4.2] | [0.5438, 0.9741, 1.6639] | 0.0837 | 63
TABLE 8. RESULTS OF MINIMIZING THE MAXIMUM ERROR FOR THE DESIGN PROBLEM OF Case 5: Noisy filter optimization. THE SAME FILTER AS IN Case 4: Low-pass filter on micro-strip technology HAS BEEN USED, BUT THIS TIME THERE IS A WHITE NOISE COMPONENT ADDED TO THE FILTER RESPONSE, MAKING THIS PROBLEM MORE DIFFICULT. NOTICE HOW THE NELDER-MEAD ALGORITHM IS STILL REPORTING GOOD RESULTS (NOT NEGATIVE, BUT THE LOWEST IN THE TABLE) BY MAXING OUT ITS FUNCTION EVALUATIONS. THE SIMULATED ANNEALING ALGORITHM DOES NOT CONVERGE TO THE BEST SOLUTION BUT CAN BE USED TO GENERATE GOOD SEEDS NEAR THE REGION WHERE THE OPTIMUM POINT RESIDES.
Algorithm | Seed value (mm) | Found solution (mm), relative to seed | Maximum error value at solution | Function evaluations
Nelder-Mead | [3.5, 5.6, 4.2] | [0.2368, 1.1248, 0.8668] | 0.0115 | 601
Simulated Annealing (1st try) | [3.5, 5.6, 4.2] | [0.3401, 0.8781, 2.0473] | 0.1593 | 100
Simulated Annealing (2nd try) | [3.5, 5.6, 4.2] | [0.5284, 1.0444, 1.3042] | 0.1326 | 100
Figures

Figure 1. Plotted transition probability function. This describes how probable it is to accept or reject a step given the energy difference ΔE between the current point and the proposed one.
Figure 2. Examples of temperature profiles: (a) and (b).
Figure 3. Evolution of the Simulated Annealing algorithm for the Bowl function starting from the point [1, 1]. The steps that were accepted are marked with a circle. Notice how they concentrate near the analytic minimum. In (a) the algorithm evaluates 70 points of the objective function, while in (b) it uses only 10; this number is the number of points in the temperature profile being used.

Figure 4. Evolution of the Simulated Annealing algorithm for the Bowl function starting from the point [-20, 60]. This time, 70 temperature profile points were used to produce (a) and 30 for (b).
Figure 7. Dimensional description of the RF filter used for Case 4: Low-pass filter on micro-strip technology and Case 5: Noisy filter optimization.