[31] Y. Li, P. Pardalos, and M. Resende, A greedy randomized adaptive search procedure for the quadratic assignment problem, tech. report, AT&T Bell Laboratories, Murray Hill, NJ, 1993. To appear in DIMACS Series on Discrete Mathematics and Theoretical Computer Science.
[32] C. Papadimitriou and K. Steiglitz, Combinatorial optimization: Algorithms and complexity, Prentice-Hall, 1982.
[33] A. Rajan and M. Segal, Assigning components to robotic work-cells for electronic assembly, AT&T Technical Journal, 68 (1989), pp. 93-102.
[34] M. Resende and T. Feo, A GRASP for satisfiability, tech. report, AT&T Bell Laboratories, Murray Hill, NJ, 1994.
[35] B. Selman, H. Levesque, and D. Mitchell, A new method for solving hard satisfiability problems, in Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), July 1992, pp. 440-446.
[36] S. Smith and T. Feo, A GRASP for coloring sparse graphs, tech. report, Operations Research Group, Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712-1063, January 1991.
[15] T. Feo, K. Venkatraman, and J. Bard, A GRASP for a difficult single machine scheduling problem, Computers and Operations Research, 18 (1991).
[16] M. Fischetti, S. Martello, and P. Toth, The fixed job scheduling problem with spread-time constraints, Operations Research, 35 (1987), pp. 849-858.
[17] C. Friden, A. Hertz, and D. de Werra, Stabulus: A technique for finding stable sets in large graphs with tabu search, Computing, 42 (1989), pp. 35-44.
[18] D. Fulkerson, G. Nemhauser, and L. Trotter Jr., Two computationally difficult set covering problems that arise in computing the 1-width of incidence matrices of Steiner triple systems, Mathematical Programming Study, 2 (1974), pp. 72-81.
[19] F. Glover, Tabu search - Part I, ORSA Journal on Computing, 1 (1989), pp. 190-206.
[20] ——, Tabu search - Part II, ORSA Journal on Computing, 2 (1990), pp. 4-32.
[21] D. Goldberg, Genetic algorithms in search, optimization and machine learning, Addison-Wesley, 1989.
[22] D. Johnson and M. Trick, eds., The Second DIMACS Algorithm Implementation Challenge: Maximum Clique, Coloring, and Satisfiability, DIMACS Series on Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, 1994.
[23] N. Karmarkar and K. Ramakrishnan, Computational results of an interior point algorithm for large scale linear programming, Mathematical Programming, 52 (1991), pp. 555-586.
[24] B. Kernighan and S. Lin, An efficient heuristic procedure for partitioning graphs, Bell System Technical Journal, 49 (1970), pp. 291-307.
[25] S. Kirkpatrick, Optimization by simulated annealing: Quantitative studies, Journal of Statistical Physics, 34 (1984), pp. 975-986.
[26] J. Klincewicz, Avoiding local optima in the p-hub location problem using tabu search and GRASP, Annals of Operations Research, 40 (1992), pp. 283-302.
[27] J. Klincewicz and A. Rajan, Using GRASP to solve the component grouping problem, tech. report, AT&T Bell Laboratories, Holmdel, NJ, 1992.
[28] M. Laguna, T. Feo, and H. Elrod, A greedy randomized adaptive search procedure for the 2-partition problem, tech. report, Graduate School of Business and Administration, The University of Colorado at Boulder, Boulder, CO 80309-0419, June 1993. To appear in Operations Research.
[29] M. Laguna and J. González-Velarde, A search heuristic for just-in-time scheduling in parallel machines, Journal of Intelligent Manufacturing, 2 (1991), pp. 253-260.
[30] E. Lawler, J. Lenstra, A. Rinnooy Kan, and D. Shmoys, The traveling salesman problem, John Wiley, 1985.

References

[1] J. Bard and T. Feo, Operations sequencing in discrete parts manufacturing, Management Science, 35 (1989), pp. 249-255.
[2] J. Bard and T. Feo, An algorithm for the manufacturing equipment selection problem, IIE Transactions, 23 (1991), pp. 83-92.
[3] B. Bollobás, Random graphs, Academic Press, 1985.
[4] R. Burkard, S. Karisch, and F. Rendl, QAPLIB - A quadratic assignment problem library, European Journal of Operational Research, 21 (1991).
[5] R. Carraghan and P. Pardalos, An exact algorithm for the maximum clique problem, Operations Research Letters, 9 (1990).
[6] S. Chand and H. Schneeberger, Single machine scheduling to minimize earliness subject to no tardy jobs, European Journal of Operational Research, 34 (1988), pp. 221-230.
[7] T. Feo and J. Bard, Flight scheduling and maintenance base planning, Management Science, 35 (1989), pp. 1415-1432.
[8] ——, The cutting path and tool selection problem in computer-aided process planning, Journal of Manufacturing Systems, 8 (1989), pp. 17-26.
[9] T. Feo, J. Bard, and S. Holland, Facility-wide planning and scheduling of printed wiring board assembly, tech. report, Operations Research Group, Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712-1063, February 1993.
[10] ——, A GRASP for scheduling printed wiring board assembly, tech. report, Operations Research Group, Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712-1063, December 1993.
[11] T. Feo and J. González-Velarde, The intermodal trailer assignment problem, tech. report, Operations Research Group, Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712-1063, April 1992.
[12] T. Feo and M. Resende, A probabilistic heuristic for a computationally difficult set covering problem, Operations Research Letters, 8 (1989), pp. 67-71.
[13] T. Feo, M. Resende, and S. Smith, A greedy randomized adaptive search procedure for the maximum independent set, tech. report, AT&T Bell Laboratories, Murray Hill, NJ, 1989. To appear in Operations Research.
[14] T. Feo, K. Sarathy, and J. McGahan, A GRASP for single machine scheduling with sequence dependent setup costs and linear delay penalties, tech. report, Operations Research Group, Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712-1063, January 1994.

procedure grasp()
1    InputInstance();
2    for Grasp stopping criterion not satisfied →
3        ConstructGreedyRandomizedSolution(Solution);
4        for local search stopping criterion not satisfied →
5            LocalSearch(Solution);
6            UpdateSolution(Solution,BestSolutionFound);
7            MutateSolution(Solution);
8        rof;
9        UpdateSolution(Solution,BestSolutionFound);
10   rof;
11   return(BestSolutionFound)
end grasp;

Figure 27: Adding the mutation concept to the grasp local search phase

Biases in heuristic mechanisms are sometimes referred to as intelligence. They can be grouped as follows:
- Random or lexicographic bias: indiscriminate selection of alternatives;
- Greedy or simple descent bias: selection based on the problem's objective function;
- Memory bias: selection based on prior selections;
- Experience or target bias: selection based on prior performance.

Consider the following partial illustrations. grasp uses a greedy bias to guide the construction of each new solution. Simulated annealing uses a random bias to perturb its current solution. Tabu search employs a short-term memory bias, while genetic algorithms possess a subtle experience bias analogous to natural selection. Explicit examples of experience bias are also apparent in mechanisms employing the dynamic application of target analysis. grasp and the other methods discussed herein have contributed enormously to our ability to empirically find good solutions to otherwise unsolved instances of practical combinatorial optimization problems. Fortunately, these methodologies are not antithetical to one another. They each possess characteristics that can be combined in an enormous number of ways yet to be explored. As an example, consider the hybrid procedure, developed by Feo, Sarathy, and McGahan [14], depicted in Figure 27.
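The control flow of Figure 27 can be sketched in Python. This is our own illustrative skeleton, not the authors' code: the problem-specific operators are passed in as functions, and the toy bit-vector problem at the bottom is invented purely to exercise the loop structure.

```python
import random

def grasp_mutation(construct, local_search, mutate, value,
                   grasp_iters=5, local_iters=3, seed=1):
    """Control flow of the Figure 27 hybrid: a GRASP whose local search
    phase alternates descent with a GA-style mutation of the solution."""
    rng = random.Random(seed)
    best = None
    for _ in range(grasp_iters):            # grasp stopping criterion
        sol = construct(rng)                # ConstructGreedyRandomizedSolution
        for _ in range(local_iters):        # local search stopping criterion
            sol = local_search(sol)         # LocalSearch
            if best is None or value(sol) > value(best):
                best = sol                  # UpdateSolution
            sol = mutate(sol, rng)          # MutateSolution
        if best is None or value(sol) > value(best):
            best = sol                      # final UpdateSolution
    return best

# Toy instantiation (ours, purely illustrative): maximize the number of
# ones in a 12-bit vector.
N = 12

def construct(rng):
    return [rng.randint(0, 1) for _ in range(N)]

def local_search(sol):
    # For this toy objective every 0 -> 1 flip improves, so a full
    # descent always reaches the all-ones vector.
    return [1] * len(sol)

def mutate(sol, rng):
    i = rng.randrange(len(sol))             # flip one random bit
    return sol[:i] + [sol[i] ^ 1] + sol[i + 1:]

best = grasp_mutation(construct, local_search, mutate, sum)
print(sum(best))
```

Note how the mutation is applied after each local search pass, so the inner loop explores around the local optimum rather than terminating at it.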
The framework is grasp-based, yet the mutation introduced in Phase 2 is borrowed from the ga methodology. A future direction of research into the design of heuristics should include an expansion of the classification schema started here. The motivation for this work is abundant. First, it will improve our ability to describe and define heuristic methodologies and allow us to conceptually compare different approaches. Second, it will guide efforts to enhance existing procedures, leading to improved hybrid methods. Finally, it may evolve into a theoretical framework capable of invigorating the currently limited discipline of probabilistic analysis of heuristics.

It issues hostler and packer work orders through a radio frequency (RF) interface to speed operations and handle greater volumes of traffic with less equipment and personnel. It optimizes load plans for both trailers and containers and thus improves railcar utilization. The grasp found in oasis is used for optimizing the load plans and is based in part on the work of Feo and González-Velarde [11], discussed previously. oasis is currently in use at several Conrail terminals and will be deployed at all major Conrail and Union Pacific intermodal yards by 1996.

4 Concluding remarks

grasp possesses characteristics found in and shared by other heuristic search methodologies. Close analogies can be drawn to simulated annealing, tabu search, and genetic algorithms. The implementations of these various approaches are certainly quite different in practice. However, they all share with grasp fundamental heuristic concepts that can be used to classify their operations. The next two paragraphs give a terse description of simulated annealing, tabu search, and genetic algorithms. The remainder of the conclusion offers several thoughts regarding a classification schema for these and other heuristic methodologies.
Tabu search and simulated annealing contain local search procedures that explore the neighborhood around a current solution for improvements to that solution. Each has the ability to remove itself from local optima in order to find better, if not optimal, solutions. Simulated annealing uses a straightforward randomization technique. Tabu search in its simplest form uses a short-term memory strategy to intelligently direct its search away from neighborhoods already considered. Medium- and long-term memory strategies are respectively used in tabu search to allow for search intensification and diversification with regard to a known set of promising solutions. Genetic algorithms (ga) apply crossover and mutation operations to a population of solutions. Crossover mates two solutions in the population by combining attributes of the solutions to form an offspring. The offspring is then mutated by randomly altering a few of its attributes. The offspring is added to the population if its solution value compares favorably with the other solution values in the population, thus resembling natural selection in the theory of evolution. Categories of fundamental heuristic concepts include: solution construction, solution perturbation, procedure repetition and restart criteria, problem decomposition or conditioning, and procedure stopping rules. For illustrative purposes, consider the category of solution perturbation. A local search mechanism, such as a 2-exchange technique or the mutation operation found in a genetic algorithm, is an example of solution perturbation. The basic principle is to move from one solution to another. For each of the categories, a wide variety of mechanisms have been devised and even combined to form hybrid techniques. Two goals guide the design of mechanisms in each category. The first is to find an optimal or near-optimal solution. The second is to arrive at such a solution with a minimal amount of computational effort.
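As a concrete illustration of solution perturbation, the following Python sketch (ours; the 4-city distance matrix is hypothetical) shows a 2-exchange move on a traveling salesman tour: reversing a segment moves the search from one solution to a neighboring one that differs in exactly two edges.

```python
def two_exchange(tour, i, j):
    """2-exchange (2-opt) move: reverse the segment tour[i:j+1],
    producing a neighboring tour that differs in exactly two edges."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    # closed tour: includes the edge from the last city back to the first
    return sum(dist[tour[k - 1]][tour[k]] for k in range(len(tour)))

# Illustrative 4-city instance (hypothetical symmetric distances).
dist = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
bad = [0, 2, 1, 3]                 # a poor tour of length 20
good = two_exchange(bad, 1, 2)     # reverse the middle segment
print(tour_length(bad, dist), tour_length(good, dist))
```

A local search based on this move simply scans (i, j) pairs and accepts any reversal that shortens the tour.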
Given that most combinatorial optimization problems are classified as intractable and have enormous solution spaces, it is very often ineffective to apply the brute force technique of exhaustive enumeration. Thus, one must strategically search for good solutions, biasing the search to consider only a minuscule fraction of all possibilities.

data for airline hub design (n = 10, 15, 25, p = 3, 4) and a packet network design problem (n = 52, p = 4, 10). The author concludes that while the tabu search implementation was about twice as fast as the grasp code in producing the best solution, grasp found solutions having the best known value more often.

3.5 Quadratic assignment problems

Feo and González-Velarde [11] apply grasp to a quadratic assignment problem (QAP) that models the positioning of intermodal highway trailers on railcars. The grasp heuristic is used within a branch and bound scheme to provide optimal solutions. The heuristic is observed to be extremely fast and, by itself, finds optimal solutions to all problem instances furnished over a two-year period by Consolidated Rail Corporation (Conrail). Li, Pardalos, and Resende [31] propose a grasp for the classical quadratic assignment problem, where one wants to assign, at minimum cost, n facilities (with interfacility flow demands) to n sites. The cost of assigning facility i to site k and facility j to site l is f_ij d_kl, where f_ij is the flow between facilities i and j, and d_kl is the distance between sites k and l. The grasp was tested on 88 instances of the QAP, most of which are from QAPLIB [4], a library of QAP test problems. The grasp found the best known solution for almost all of the instances, and improved on the best known solution in a few cases.

3.6 Problems in logic

Resende and Feo [34] describe several grasp implementations for the satisfiability problem in logic.
In the satisfiability problem one wants to find a truth assignment to Boolean variables that makes a given Boolean formula evaluate to true, or to prove that no such assignment exists. The grasps tested attempt to find an assignment and are not capable of proving unsatisfiability. The codes were tested on most satisfiable instances of the benchmark collection of the Second DIMACS Algorithm Implementation Challenge [22] and compared with gsat [35], a code that has recently received much attention due to its ability to find satisfying truth assignments of large formulae. The grasps found satisfying assignments on all 114 instances tested. The grasps were faster than gsat in three out of the five problem classes tested. Furthermore, gsat failed to produce satisfying assignments to several formulae for which the grasps were successful.

3.7 Industrial applications

grasp has been directly applied in practice as part of two large scale decision support systems developed and implemented by Optimization Alternatives, an information systems development firm in Austin, Texas. Iksites™ (Integrated Scheduling, Inventory, and Throughput Evaluation System) provides facility-wide planning and scheduling functions for printed wiring board assembly operations. The grasp used in Iksites is described in Feo, Bard, and Holland [10]. The success of this management information system at Texas Instruments is discussed in Feo, Bard, and Holland [9]. oasis™ (Optimization Alternatives' Strategic Intermodal Scheduler) controls the logistics operations in an intermodal rail terminal. The system tracks all inventory in the yard and directs parking activities to maximize the utilization of the terminal's parking areas.

3.3 Graph problems

Feo, Resende, and Smith [13] describe a grasp for finding large independent sets on sparse random graphs. The construction and local search phases of that grasp are described in Section 2 of this paper.
The grasp is implemented in parallel on a mimd computer by assigning to different processors the different contracted graphs induced by the conditioning-on-pairs strategy described in Subsection 2.4.2 of this paper. The efficiency (speedup divided by the number of processors) of going from one to eight processors was, on average, 93.6 percent. The grasp was tested on graphs with up to 3500 nodes and over 3 million edges, and was compared with implementations of simulated annealing, tabu search, and an exact method. The grasp found larger independent sets, in substantially less cpu time, than the simulated annealing implementation. grasp was compared with the tabu search code stabulus [17] on three classes of random graphs, having 600, 1500, and 3500 vertices. The tabu search code was 1.6 times faster on the 600-node graphs, but was 3.7 times and over 10 times slower on the 1500-node and 3500-node graphs, respectively. On 600-node graphs, the exact method of Carraghan and Pardalos [5] produced optimal solutions on all 25 instances tested, while the grasp rarely produced optimal solutions. However, to produce the certificate of optimality, the exact method required about 40 times more cpu time than needed by the grasp to produce independent sets having one vertex less than the optimal size. For a 1000-node graph, the exact method failed to find an optimal solution in 10 cpu days of computing, while grasp quickly found probably-optimal sets of size 15 or 16 in all 200 instances tested. Feo and Smith [36] offer a grasp for coloring sparse graphs. The construction phase builds one color class at a time by identifying maximal independent sets. The local search phase uses a simulated annealing approach starting at a relatively cold temperature. This starting condition keeps the search in the vicinity of the constructed solution while allowing it to wander away from local minima.
The grasp implementation performs well on a wide range of instances, including random graphs and graphs of known chromatic number. Laguna, Feo, and Elrod [28] develop a grasp implementation for the network 2-partition problem. The heuristic is conceptually simple and straightforward to program. The grasp is empirically compared to the Kernighan-Lin method [24], which for over twenty years stood as the dominant heuristic procedure. Over 3500 instances are used to compare the running times and solution values provided by the two methods. The instances include a wide variety of random and geometric graphs, as well as smaller examples for which optimal solutions can be found via branch and bound. The comparative study empirically confirms the effectiveness of the grasp implementation.

3.4 Location problems

Klincewicz [26] compares tabu search and grasp for solving instances of the discrete p-hub location problem, a problem that has applications in airline and package delivery systems, as well as in certain telecommunications network design problems. In this problem, one is given an n-node graph and a matrix of internodal traffic and is asked to choose p of the n nodes to serve as hubs, which are to be fully interconnected. For each nonhub node, one must also determine the hub to which that node is to be connected, making it possible to route traffic between any two nodes in the graph. The objective is to minimize the total cost of sending traffic between demand pairs. Computational testing was carried out on real

cpu requirements, is judged by examining a wide variety of instances derived from actual manufacturing data. Laguna and González-Velarde [29] consider the scheduling of parallel machines in a just-in-time production environment. The optimization problem possesses a weighted earliness penalty with deadlines for identical parallel machines.
The authors present a hybrid heuristic that combines elements of both tabu search and grasp methodologies, and uses a branch-and-bound postprocessor. They compare the performance of their method with the modified Smith heuristic of Chand and Schneeberger [6], concluding that their method succeeds in finding solutions that are, on average, 10 percent better than those found by the modified Smith heuristic. Feo, Venkatraman, and Bard [15] develop a grasp for a single machine scheduling problem with flow time and earliness penalties. The method compares favorably with methods previously reported in the literature. A dynamic programming (DP) algorithm yields optimal solutions to problems with up to 30 jobs. In a fraction of the time required by the DP implementation, the grasp code provides optimal solutions to 238 out of the 240 instances tested, while providing solutions that are extremely close to optimal in the remaining two instances. Feo, Sarathy, and McGahan [14] write about a single machine scheduling problem with sequence dependent setup costs and linear delay penalties. They develop a grasp which quickly finds optimal solutions to 20-job problems previously reported in the literature. The method compares favorably to a tabu search implementation on instances ranging up to 165 jobs. The authors take advantage of the mutation concept found in genetic algorithms to enhance the performance of the local search phase of their grasp implementation. Klincewicz and Rajan [27] describe two grasp heuristics to solve the component grouping problem, a type of set partitioning problem that arises in a number of manufacturing and material logistics applications. In computational results based on real manufacturing data, the grasps produce solutions having objective function values within 4.3 to 9.5 percent (7.4 percent on average) of a lower bound based on a combinatorial argument.
Compared to previously used methods based on a network flow heuristic [33], the first grasp produced better solutions on all 12 test problems, while the second grasp produced better solutions on all but one. Feo, Bard, and Holland [10] present a grasp implementation for scheduling printed wiring board assembly. The combinatorial optimization problem possesses multiple machines, precedence relationships, start dates, due dates, capacity constraints, setup times, processing times, and resource constraints. A multicriterion objective is considered that includes minimizing weighted tardiness, maximizing revenue (weighted throughput), minimizing cycle times, and flowline balancing. The grasp is empirically validated in an industrial setting with over 70 processing stations, 140 product types, 4500 components, 126 shifts, 49,000 boards in wip, and 142,000 boards on demand. The heuristic is shown to outperform rule based methods used previously. This work highlights the ease and effectiveness with which grasp can be applied to extremely large and complex optimization problems found in practice.

independent sets of size 15 were found in 98 instances and of size 16 in two instances. In more than half of the runs, the grasp took less than one minute of cpu time to terminate. The code was run on the same instances to search for sets of size 16 or greater. There, the code found the two size-16 sets found in the first set of runs, along with another set of size 16, totaling three instances with independent sets of size 16 out of the 100 tested. For those runs, the preprocessing times were 0.42, 0.45, and 0.45 seconds; the numbers of tuples examined were 35, 16, and 76; and the grasp running times were 101.88, 319.39, and 217.77 seconds.
3 Applications

We now turn our attention to a number of grasp implementations that have appeared in the literature, covering a wide range of applications, including set covering, production planning and scheduling, graph problems, location problems, quadratic assignment problems, and problems in logic. Two industrial implementations of grasp are also discussed.

3.1 Set covering

Feo and Resende [12] describe a grasp for solving set covering problems that arise in computing the 1-width of the incidence matrix of Steiner triple systems. The construction mechanism as well as the local search strategy of that grasp are described in Section 2 of this paper. Computational results show that the grasp quickly produces best known solutions for all of the instances considered. Bard and Feo [2] describe a unified framework in which product and process demands can be related to manufacturing system requirements. The objective is to determine, in a flexible manufacturing environment, how many of each machine to purchase, as well as what fraction of the time each piece of equipment will be configured for a particular type of operation. A nonlinear cost minimization model is developed and is solved with a depth-first branch and bound routine that employs a grasp for set covering to find good feasible solutions. The solutions obtained with the grasp permit early fathoming and greatly contribute to the efficiency of the algorithm. Feo and Bard [7] use grasp to solve a sequence of set covering problems in an approach that renders an approximate solution to a minimum cost, multicommodity, network flow problem with integral constraints for airline flight scheduling and maintenance base planning. They demonstrate the procedure with data for the American Airlines Boeing 727 fleet, and show that the new approach is a significant improvement over current solution techniques.
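A minimal Python sketch of the k,p-exchange local search used in set covering grasps such as these (the exchange itself is described in Section 2; the instance data and function names here are our own, hypothetical):

```python
from itertools import combinations

def kp_exchange(cover, sets, universe, k=2, p=1):
    """One k,p-exchange pass: try to replace a k-tuple of sets in the
    cover with a p-tuple (p < k) of sets outside it, keeping every
    element of the universe covered."""
    cover = set(cover)
    for out in combinations(sorted(cover), k):
        rest = cover - set(out)
        for inn in combinations(sorted(set(sets) - cover), p):
            trial = rest | set(inn)
            covered = set().union(*(sets[j] for j in trial)) if trial else set()
            if covered >= universe:
                return trial          # found a smaller feasible cover
    return cover                      # no improving exchange exists

# Hypothetical instance (ours): a 2,1-exchange shrinks the cover.
universe = {1, 2, 3, 4, 5}
sets = {"P1": {1, 2}, "P2": {3, 4}, "P3": {5}, "P4": {3, 4, 5}}
print(sorted(kp_exchange({"P1", "P2", "P3"}, sets, universe)))
```

Here the exchange drops {P2, P3} and inserts P4, shrinking a feasible cover of size 3 to one of size 2.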
3.2 Production planning and scheduling

Bard and Feo [1, 8] apply grasp to computer-aided process planning, specifically, the selection of tools and cutting paths for milling metal on flexible manufacturing machines. The underlying optimization problem is modeled as an integer program and is solved by branch and bound. Lower bounds are calculated by means of a Lagrangian relaxation technique. Feasible solutions (upper bounds) are found by a grasp applied to a specialized set covering problem. Overall performance of the method, including quality of solutions and

[Three contracted graphs, conditioning on {b, f}, on {b, c}, and on {c, f}]
Figure 25: Preprocessing for maximum independent set

instances | preproc seconds (min/avg/max) | tuples examined (min/avg/max) | grasp seconds (min/avg/max)
1-10      | 0.41 / 0.42 / 0.43 |   1 /   2.1 /   4 |   0.12 /   3.77 /   9.26
11-20     | 0.40 / 0.42 / 0.43 |   4 /   4.8 /   8 |   9.70 /  12.36 /  19.66
21-30     | 0.39 / 0.41 / 0.42 |   7 /   8.6 /  10 |  20.24 /  22.93 /  28.72
31-40     | 0.41 / 0.42 / 0.43 |  11 /  12.2 /  14 |  29.50 /  33.11 /  38.56
41-50     | 0.40 / 0.42 / 0.43 |  15 /  17.0 /  20 |  41.96 /  47.86 /  55.36
51-60     | 0.41 / 0.42 / 0.43 |  20 /  24.8 /  29 |  59.20 /  69.89 /  82.67
61-70     | 0.41 / 0.42 / 0.43 |  32 /  34.6 /  38 |  88.87 /  98.64 / 110.31
71-80     | 0.40 / 0.42 / 0.44 |  39 /  50.5 /  66 | 110.41 / 141.66 / 196.69
81-90     | 0.40 / 0.42 / 0.43 |  73 /  88.2 / 116 | 203.64 / 247.24 / 315.17
91-100    | 0.41 / 0.42 / 0.44 | 114 / 173.8 / 314 | 324.68 / 489.28 / 893.19
Figure 26: Experimental results: grasp maximum independent set solution statistics

pairs v_i, v_j ∈ V_low, such that (v_i, v_j) ∉ E, compute α({v_i, v_j}), the number of vertices not adjacent to either v_i or v_j. The pairs are ordered in decreasing value of α, and at most 400 pairs are kept for consideration. The problems on the contracted graphs are solved in order, conditioning on the pairs being in the independent set. Consider, as an example, the graph in Figure 24, where we choose to condition on pairs of vertices from the set of vertices having degree 2, i.e., vertices {b, c, f}. The pairs that we condition on are {b, c}, {b, f}, and {c, f}.
For these pairs, we have α({b, c}) = |{d, f, g}| = 3, α({b, f}) = |{c, g, h}| = 3, and

6        S := S' ∪ M;
7        local(V, ...);
8    fi;
9 rof;
end local;
Figure 16: Local search: maximum independent set

2.3.2 Maximum independent set problem

We next describe a k-exchange search procedure for maximum independent set in the graph G = (V, E). The idea here is to take as input an independent set S ⊆ V of size p and consider all k-tuples of vertices in S, for a given parameter k, 0 < k < p. For each such k-tuple {v_i1, ..., v_ik}, apply an exhaustive search to find a maximum independent set M in the graph induced by the vertices of G not adjacent to the vertices in the set S' = S \ {v_i1, ..., v_ik}. If the resulting independent set M has more than k vertices, then S' ∪ M is an independent set larger than S. The procedure can now be applied to the new independent set. This procedure is given in Figure 16. Consider the example in Figure 17, where a 1-exchange (v_i1 = a) is carried out on the independent set {a, c}. There, the set S' = S \ {a} = {c}, so the exhaustive enumeration is done on the graph consisting of vertices {a, b, d} and edges {(a, b), (a, d)}, resulting in the maximum independent set M = {b, d}. Since this set has size 2, the new larger independent set {b, c, d} can be built. Applying the local search to this new independent set does not produce an improvement, thus halting the procedure at this local minimum.

Figure 12: Maximum independent set: construction phase
Figure 13: Maximum independent set: construction phase

be S* = {a, c}. If instead b was initially chosen (S* = {b}), the resulting graph G_1 would be the one depicted in Figure 13. In that case, the restricted candidate list RCL = {c, d, f}. Selecting vertex d and then vertex c results in an optimal independent set S* = {b, c, d}.

2.3 GRASP local search phase

We now turn our attention to the local search phase for each of the two examples.
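The k-exchange procedure illustrated in the Figure 17 example can be sketched in Python. We assume, purely for illustration, that the graph has exactly the vertices {a, b, c, d} and the edges (a, b) and (a, d) mentioned in the walkthrough; the helper names are our own.

```python
from itertools import combinations

def is_independent(vs, edges):
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(vs, 2))

def max_independent(cands, edges):
    """Exhaustive search for a maximum independent set among cands."""
    for r in range(len(cands), 0, -1):
        for sub in combinations(sorted(cands), r):
            if is_independent(sub, edges):
                return set(sub)
    return set()

def k_exchange(S, V, edges, k=1):
    """Repeated k-exchange: drop each k-tuple from S, exhaustively
    re-solve on the vertices not adjacent to S', and keep the larger
    independent set S' | M whenever it improves on S."""
    S = set(S)
    improved = True
    while improved:
        improved = False
        for drop in combinations(sorted(S), k):
            Sp = S - set(drop)
            cands = {v for v in V - Sp
                     if all((v, u) not in edges and (u, v) not in edges
                            for u in Sp)}
            M = max_independent(cands, edges)
            if len(Sp | M) > len(S):
                S = Sp | M
                improved = True
                break
    return S

# The Figure 17 example: start from {a, c} and apply 1-exchanges.
V = {"a", "b", "c", "d"}
edges = {("a", "b"), ("a", "d")}
print(sorted(k_exchange({"a", "c"}, V, edges, k=1)))
```

Dropping a leaves S' = {c}; the exhaustive search over {a, b, d} returns M = {b, d}, yielding the locally optimal set {b, c, d}, exactly as in the text.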
We begin with a local search procedure for the set covering problem and then describe a procedure for maximum independent set.

2.3.1 Set covering problem

In the set covering problem, define a k,p-exchange as follows: for all k-tuples of sets in a cover J*, if possible, exchange the k-tuple with a p-tuple (p < k) of sets not in J*. Consider the example in Figure 14 with cover J* = {P3, P5, P6}. Applying the 2,1-exchange that replaces the 2-tuple {P3, P5} with the 1-tuple {P4} results in an optimal cover J* = {P4, P6}, depicted in Figure 15.

Figure 14: Set covering: local search phase (cover {P3, P5, P6})

Figure 10: Set covering example: construction phase
Figure 11: Maximum independent set: construction phase

A plausible greedy choice for the maximum independent set is to select the vertex with the smallest degree with respect to the graph induced by the yet unselected vertices that are not adjacent to any previously selected vertex. Let d_v denote the degree of vertex v in graph G_k = (V_k, E_k). The greedy choice is to select a vertex with the smallest degree. Instead of selecting the greedy choice, the grasp construction phase builds a restricted candidate list of all vertices having small degree, but not necessarily the smallest degree. Let L be the smallest degree of vertices in V_k, i.e., L = min{d_v | v ∈ V_k}, and let α > 0 be the restricted candidate parameter. The value restricted candidate list is RCL = {v ∈ V_k | d_v ≤ (1 + α)L}. From the candidate list a vertex, say v, is selected at random and placed in the independent set, i.e., S* ← S* ∪ {v}. The greedy function is adaptive because, with the addition of each new vertex to the independent set, G_{k+1} differs from G_k, and consequently vertex degrees change. G_{k+1} is defined as follows: V_{k+1} = V_k \ ({v} ∪ adj(v)), where adj(v) is the set of vertices in G_k adjacent to v; E_{k+1} = E_k \ {(u, w) | u ∉ V_{k+1} or w ∉ V_{k+1}}. Consider the example of Figure 11.
Let α = 0.6 in this case. Vertices {a, b, c, d, f} each have degree 2, while vertex e has degree 4. Hence, the value restricted candidate list RCL = {a, b, c, d, f}. Suppose vertex a were to be selected at random from the RCL. The initial independent set would be S* = {a} and the resulting graph G_1 would be the one depicted in Figure 12. In graph G_1, all vertices have identical degree and consequently RCL = {c, e, f}. If vertex c were to be selected, the resulting independent set of size 2 would

Figure 8: Set covering example: construction phase
Figure 9: Set covering example: construction phase

Instead of making the greedy choice, we allow a set to be in the restricted candidate list if the number of yet uncovered elements that would be covered if that set were to be chosen is within some percentage (α) of the number covered by a greedy choice. This type of candidate list limitation is referred to as value restriction. Similarly, we can limit the size of the candidate list by including only the β best elements. This limitation is referred to as a cardinality restriction. Note that one may apply both types of restrictions simultaneously to form a candidate list. Consider the example in Figure 8 and let α = 40 percent. The numbers on the bottom row are the numbers of yet uncovered elements that would become covered if the corresponding set on the top row of the figure were to be selected. The greedy choices, P4, P5, or P6, would therefore cover 3 elements. Since α = 40 percent, the value restricted candidate list RCL = {P1, P4, P5, P6, P7}. Suppose, at random, that set P5 is selected. Then elements 3, 4, and 5 are covered and we are left with the situation depicted in Figure 9, with RCL = {P3, P4, P6, P7}. Next, choosing P3 would leave the remaining choice as P6, and the resulting constructed cover would be J* = {P3, P5, P6}, of size 3.
On the other hand, if P6 had initially been chosen in place of P5, we would be in the situation depicted in Figure 10, where choosing P4 results in a smaller (optimal) cover J* = {P4, P6} of size 2. 2.2.2 Maximum independent set problem In the case of the maximum independent set problem, a grasp builds an independent set, one vertex at a time, guided by an adaptive greedy function. Let S* denote the independent set to be constructed. The grasp begins with S* = ∅. Let k = 0, Vk = V and Ek = E. Figure 6: Set covering problem example Figure 7: Independent set problem example and J = {1, 2, ..., n}. A set J* ⊆ J is a cover if ∪j∈J* Pj = I. In the set covering problem we want to find the minimum cardinality cover. Consider the example in Figure 6 where four sets P1 = {1, 2}, P2 = {1, 3}, P3 = {2}, and P4 = {3} are given. There are 8 valid covers for this example: {P1, P2, P3, P4}, {P1, P2, P3}, {P1, P2, P4}, {P1, P3, P4}, {P2, P3, P4}, {P1, P2}, {P1, P4}, and {P2, P3}. The optimal covers, of size 2, are: {P1, P2}, {P1, P4} and {P2, P3}. 2.1.2 Maximum independent set problem Given a graph G = (V, E) where V is the vertex set and E is the edge set of G, an independent set (or vertex packing or stable set) is a set of vertices whose elements are pairwise nonadjacent. In the maximum independent set problem we want an independent set of maximum cardinality. Consider the example in Figure 7 where a graph with four vertices {a, b, c, d} and three edges {(a, b), (a, c), (c, d)} is given. The nonempty independent sets for this example are {a, d}, {b, c}, {b, d}, {a}, {b}, {c} and {d}. There are three maximum independent sets: {a, d}, {b, c} and {b, d}. 2.2 GRASP construction phase During the construction phase of grasp a solution is built one element at a time, with each element selected at random from a list of candidates determined by an adaptive greedy function.
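Both toy instances are small enough to check exhaustively. The sketch below (our own illustrative Python, not code from the paper) enumerates every subfamily of sets and every vertex subset:

```python
from itertools import combinations

# Brute-force check of the two toy examples.
# Set covering instance: four sets over I = {1, 2, 3}.
P = {'P1': {1, 2}, 'P2': {1, 3}, 'P3': {2}, 'P4': {3}}
I = {1, 2, 3}
covers = [c for r in range(1, 5)
          for c in combinations(P, r)
          if set().union(*(P[j] for j in c)) == I]
min_size = min(len(c) for c in covers)
optimal_covers = [c for c in covers if len(c) == min_size]

# Independent set instance: the four-vertex graph of Figure 7.
V = ['a', 'b', 'c', 'd']
E = [('a', 'b'), ('a', 'c'), ('c', 'd')]
ind_sets = [s for r in range(1, 5)
            for s in combinations(V, r)
            if not any(u in s and w in s for u, w in E)]
max_size = max(len(s) for s in ind_sets)
maximum_ind = [s for s in ind_sets if len(s) == max_size]
```

Enumeration confirms the minimum cover size of 2 and the maximum independent set size of 2 for these instances.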
In this subsection, we illustrate the construction phase by defining adaptive greedy functions and candidate list restriction mechanisms for the two examples described above. 2.2.1 Set covering problem A set Pi is said to cover the set I' ⊆ I if Pi ∩ I' = I'. A greedy choice in the set covering problem is to select the set Pj that covers the largest number of yet uncovered elements of set I. Let us use this as the adaptive greedy function to construct a solution for the problem. experiment that illustrates this intuition. The figures show, for different cardinality restriction values (candidate list size = 1, 2, ..., 256), the distribution of observed solution values (3116, 3117, ..., 3126) obtained at each iteration, for 100,000 replications of grasp iterations. The simulation uses the code Grasp-B, of Resende and Feo [34], to solve satisfiability instance ssa7552-160 of the 2nd DIMACS Algorithm Implementation Challenge [22]. In the optimization problem, one wants to maximize the number of satisfied clauses. Problem instance ssa7552-160 is satisfiable and has 1391 variables, 3126 clauses, and 7025 literals. Consequently, the optimal solution value is 3126. The simulation shows that the greedy solution (|RCL| = 1) has the highest mean solution value (3124.00) and the smallest variance (zero). As the restriction is increasingly relaxed, the mean values decrease and the variances increase. With the increase in variance, the number of samples drawn from the set of optimal solutions also increases. No optimal solution is drawn for |RCL| = 1, 2, and 4. Five are drawn for |RCL| = 8, and as many as 689 are drawn for the largest list size of 256. An especially appealing characteristic of grasp is the ease with which it can be implemented. Few parameters need to be set and tuned (candidate list size and number of grasp iterations), and therefore, development can focus on implementing efficient data structures to assure quick grasp iterations.
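As an illustration of how little machinery a construction phase needs, the value-restricted set covering construction can be sketched in Python (a sketch with names of our own choosing, not the paper's code):

```python
import random

def grasp_construct_cover(sets, universe, alpha=0.4, rng=random):
    """One GRASP construction for set covering (illustrative sketch).
    Greedy function: number of yet-uncovered elements a set would cover.
    Value restriction: keep every set whose coverage count is within
    alpha * 100 percent of the best coverage count this round."""
    cover = []
    uncovered = set(universe)
    while uncovered:
        gain = {j: len(P & uncovered) for j, P in sets.items() if j not in cover}
        best = max(gain.values())
        if best == 0:                       # no remaining set helps
            raise ValueError("instance has no cover")
        rcl = [j for j, g in gain.items() if g >= (1 - alpha) * best]
        j = rng.choice(rcl)                 # probabilistic choice from the RCL
        cover.append(j)
        uncovered -= sets[j]                # adaptive step: gains shrink next round
    return cover
```

With α = 0 this reduces to the pure greedy heuristic; larger α trades mean solution quality for diversity across iterations, as the sampling discussion above illustrates.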
Finally, grasp can be trivially implemented on a parallel processor in an mimd environment. Each processor can be initialized with its own copy of the procedure, the instance data, and an independent random number sequence. The grasp iterations are then performed in parallel with only a single global variable required to store the best solution found over all processors. The remainder of this paper is organized as follows. In Section 2, the grasp methodology is described in detail. Several grasp implementations are summarized in Section 3. A summary and discussion of future work are presented in Section 4. 2 Methodology In this section, we describe a general framework for grasp, using two classical combinatorial optimization problems (set covering and maximum independent set) to illustrate the various components of the methodology. We define the two problems and describe the two phases of grasp with respect to each problem class. Examples are given for the procedures described. We conclude the section by describing computational testing of grasp codes for set covering and maximum independent set. 2.1 Problem definitions We begin by defining the two combinatorial optimization problems used in this section to illustrate the phases of a grasp: the set covering problem and the maximum independent set problem.
2.1.1 Set covering problem Given n finite sets P1, P2, ..., Pn, let I = P1 ∪ P2 ∪ ... ∪ Pn = {1, 2, ..., m}

size RCL    3116    3117    3118    3119    3120    3121    3122    3123    3124    3125    3126
       1                                                                  100000
       2                                                    151    6053   93796
       4                                             75    1676   17744   80503       2
       8                       1      50     750   6566   31257   61336      35       5
      16                      16     282    2485  13274   38329   45547      42      25
      32       1       3      72     635    4196  16455   37937   40479     164      58
      64       4      18     177    1213    5933  19553   37666   34832     441     163
     128       4      36     269    1716    7324  21140   37186   31365     679     281
     256   1   5      35     304    1980    7867  21792   36725   29027    1575     689

Figure 4: Sample distributions of grasp iteration solutions

size RCL         1      2      4      8     16     32     64    128    256
mean (3120 +) 4.00   3.94   3.79   3.53   3.27   3.14   3.00   2.91   2.89

Figure 5: Means of sample distributions of grasp iteration solutions

subset of solutions N(s). A solution s is said to be locally optimal if there is no better solution in N(s). Given a neighborhood structure N, a local search algorithm has the general form as stated in Figure 3. The key to success for a local search algorithm consists of the suitable choice of a neighborhood structure, efficient neighborhood search techniques, and the starting solution. While such local optimization procedures can require exponential time from an arbitrary starting point, empirically their efficiency significantly improves as the initial solution improves. Through the use of customized data structures and careful implementation, an efficient construction phase can be created which produces good initial solutions for efficient local search. The result is that often many grasp solutions are generated in the same amount of time required for the local optimization procedure to converge from a single random start. Furthermore, the best of these grasp solutions is generally significantly better than the single solution obtained from a random starting point. It is difficult to formally analyze the quality of solution values found by using the grasp methodology.
However, there is an intuitive justification that views grasp as a repetitive sampling technique. Each grasp iteration produces a sample solution from an unknown distribution of all obtainable results. The mean and variance of the distribution are functions of the restrictive nature of the candidate list. For example, if the cardinality of the restricted candidate list is limited to one, then only one solution will be produced and the variance of the distribution will be zero. Given an effective greedy function, the mean solution value in this case should be good, but probably suboptimal. If a less restrictive cardinality limit is imposed, many different solutions will be produced, implying a larger variance. Since the greedy function is more compromised in this case, the mean solution value should degrade. Intuitively, however, by order statistics and the fact that the samples are randomly produced, the best value found should outperform the mean value. Indeed, often the best solutions sampled are optimal. Figures 4-5 show results of a simulation

procedure ConstructGreedyRandomizedSolution(Solution)
1 Solution = {};
2 for Solution construction not done →
3   MakeRCL(RCL);
4   s = SelectElementAtRandom(RCL);
5   Solution = Solution ∪ {s};
6   AdaptGreedyFunction(s);
7 rof;
end ConstructGreedyRandomizedSolution;

Figure 2: Grasp construction phase pseudo-code

procedure local(P, N(P), s)
1 for s not locally optimal →
2   Find a better solution t ∈ N(s);
3   Let s = t;
4 rof;
5 return(s as local optimal for P)
end local;

Figure 3: Grasp local search phase

adaptive because the benefits associated with every element are updated at each iteration of the construction phase to reflect the changes brought on by the selection of the previous element. The probabilistic component of a grasp is characterized by randomly choosing one of the best candidates in the list, but not necessarily the top candidate. The list of best candidates is called the restricted candidate list (RCL).
This choice technique allows for different solutions to be obtained at each grasp iteration, but does not necessarily compromise the power of the adaptive greedy component of the method. Figure 2 displays pseudo-code for the construction phase of grasp. The solution to be constructed is initialized in line 1 of the pseudo-code. The loop from line 2 to 7 is repeated until the solution is constructed. In line 3, the restricted candidate list is built. A candidate from the list is selected, at random, in line 4 and is added to the solution in line 5. The effect of the selected solution element s on the greedy function is taken into consideration in line 6, where the greedy function is adapted. As is the case for many deterministic methods, the solutions generated by a grasp construction are not guaranteed to be locally optimal with respect to simple neighborhood definitions. Hence, it is almost always beneficial to apply a local search to attempt to improve each constructed solution. A local search algorithm works in an iterative fashion by successively replacing the current solution by a better solution in the neighborhood of the current solution. It terminates when no better solution is found in the neighborhood. The neighborhood structure N for a problem P relates a solution s of the problem to a

procedure grasp()
1 InputInstance();
2 for Grasp stopping criterion not satisfied →
3   ConstructGreedyRandomizedSolution(Solution);
4   LocalSearch(Solution);
5   UpdateSolution(Solution, BestSolutionFound);
6 rof;
7 return(BestSolutionFound)
end grasp;

Figure 1: A generic GRASP pseudo-code

reported for linear programming [23], specialized versions of the traveling salesman problem [30] and bus driver scheduling [16], to name a few. Nevertheless, most problems found in industry and government are either computationally intractable by their nature, or sufficiently large so as to preclude the use of exact algorithms.
In such cases, heuristic methods are usually employed to find good, but not necessarily optimal solutions. The effectiveness of these methods depends upon their ability to adapt to a particular realization, avoid entrapment at local optima, and exploit the basic structure of the problem, such as a network or a natural ordering among its components. Furthermore, restart procedures, controlled randomization, efficient data structures, and preprocessing are also beneficial. Building on these notions, various heuristic search techniques have been developed that have demonstrably improved our ability to obtain good solutions to difficult combinatorial optimization problems. The most promising of such techniques include simulated annealing [25], tabu search [19, 20], genetic algorithms [21] and grasp (Greedy Randomized Adaptive Search Procedures). In this paper, we define the various components comprising a grasp and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology will be discussed. The paper concludes with a brief literature review of grasp and mentions two industrial applications. A grasp is an iterative process, with each grasp iteration consisting of two phases, a construction phase and a local search phase. The best overall solution is kept as the result. A generic grasp pseudo-code is given in Figure 1. Line 1 of the pseudo-code corresponds to problem input. The grasp iterations take place in lines 2-6, and terminate when some termination criterion, such as a maximum number of iterations having occurred or a sought solution having been found, is satisfied. Line 3 is the grasp construction phase, while line 4 is the local search phase. If an improved solution is found, the incumbent is updated in line 5. We next present a high-level description of these two phases. In the following section we delve into more detail.
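To make the generic loop of Figure 1 concrete, here is one way it might be rendered in Python (a sketch; the callable parameters and names are our own, not from the paper):

```python
import random

def grasp(construct, local_search, cost, iterations=100, rng=None):
    """Generic GRASP driver in the shape of Figure 1 (illustrative sketch).
    construct(rng) builds a greedy randomized solution, local_search(s)
    returns a locally optimal solution, and cost(s) is to be minimized."""
    rng = rng or random.Random()
    best = None
    for _ in range(iterations):
        s = construct(rng)                 # construction phase
        s = local_search(s)                # local search phase
        if best is None or cost(s) < cost(best):
            best = s                       # update incumbent
    return best
```

Parameterizing the two phases as callables keeps the driver problem-independent, which is one way to realize the paper's point that only the candidate list size and iteration count need tuning.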
In the construction phase, a feasible solution is iteratively constructed, one element at a time. At each construction iteration, the choice of the next element to be added is determined by ordering all elements in a candidate list with respect to a greedy function. This function measures the (myopic) benefit of selecting each element. The heuristic is Greedy Randomized Adaptive Search Procedures* Thomas A. Feo† Mauricio G.C. Resende‡ Abstract Today, a variety of heuristic approaches are available to the operations research practitioner. One methodology that has a strong intuitive appeal, a prominent empirical track record, and is trivial to efficiently implement on parallel processors is GRASP (Greedy Randomized Adaptive Search Procedures). GRASP is an iterative randomized sampling technique in which each iteration provides a solution to the problem at hand. The incumbent solution over all GRASP iterations is kept as the final result. There are two phases within each GRASP iteration: the first intelligently constructs an initial solution via an adaptive randomized greedy function; the second applies a local search procedure to the constructed solution in the hope of finding an improvement. In this paper, we define the various components comprising a GRASP and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology are discussed. The paper concludes with a brief literature review of GRASP implementations and mentions two industrial applications. 1 Introduction Optimization problems that involve a large but finite number of alternatives often arise in industry, government and science.
Common examples include designing efficient telecommunication networks, scheduling operations in a semiconductor manufacturing plant, designing effective school zoning, locating strategic energy reserves, routing delivery vehicles, troop deployment, airline crew scheduling, and designing a large experiment. In all of these examples, it is theoretically possible to enumerate all combinations of solutions and evaluate each with respect to the stated objective. The ones that provide the most favorable outcome are deemed optimal. However, from a practical perspective, it is infeasible to follow such a strategy of complete enumeration because the number of combinations often grows exponentially with the size of the problem. Much work has been done over the last 40 years to develop optimal-seeking methods that do not explicitly require an examination of each alternative. This research has given rise to the field of combinatorial optimization (see Papadimitriou and Steiglitz [32]), and an increasing capability to solve ever larger real-world problems. Notable successes have been *Last revision: February 7, 1994. †The University of Texas, Austin, TX 78712 USA. ‡AT&T Bell Laboratories, Murray Hill, NJ 07974 USA.