Lecture 10 – New Ideas and Future Research

•Self-adaptation
•Asynchronous Cooperation
•Self-assembly

Learning outcomes:
− Get an insight into some recent new ideas proposed in the literature of heuristic search methods.
− Identify some examples of self-adaptation in meta-heuristic algorithms.
− Understand the main principles of asynchronous local search as a methodology to extend existing single-solution approaches.
− Get an insight into the use of self-assembly as a tool for the automated design of heuristic methods. This is still a very new research direction with only preliminary, though promising, results.

PA184 - Heuristic Search Methods

SELF-ADAPTATION

Non-linear Great Deluge with Reinforcement Learning

Contrary to the traditional linear form, the non-linear decay rate of the water level reacts to the value of the current best solution, i.e. the decay rate is driven by the search:

    B := B × exp(−δ · rnd[min, max]) + β

where B is the water level, rnd[min, max] is a random number drawn from the given interval, and δ and β are parameters controlling the speed and the floor of the decay.

For a minimisation problem, candidate solutions are accepted if the objective function value is below the current water level.

The adaptive water level allows a more effective search for various difficult instances of the UCTT problem.

[Figures: penalty cost of the best solution and the water level against iterations, comparing the Linear and Non-Linear Great Deluge on the Small5 and Medium1 UCTT instances.]

A simple learning mechanism guides the selection of low-level heuristics (local moves) during the search.
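The non-linear water-level acceptance described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter names delta, beta and the rnd interval bounds are illustrative defaults, and the neighbourhood and cost functions are supplied by the caller.

```python
import math
import random

def nonlinear_great_deluge(initial, neighbour, cost, iterations=10000,
                           delta=1e-3, beta=0.0, rnd_lo=1.0, rnd_hi=3.0):
    """Great Deluge for minimisation with a non-linearly decaying water level:
    B := B * exp(-delta * rnd[lo, hi]) + beta."""
    current, best = initial, initial
    level = cost(initial)            # the water level B starts at the initial cost
    for _ in range(iterations):
        candidate = neighbour(current)
        c = cost(candidate)
        # accept candidates below the water level, as well as improving moves
        if c <= level or c <= cost(current):
            current = candidate
            if c < cost(best):
                best = current
        # non-linear decay of the water level; beta sets a floor on the level
        level = level * math.exp(-delta * random.uniform(rnd_lo, rnd_hi)) + beta
    return best
```

For instance, minimising f(x) = x² from x = 10 with a random-step neighbourhood drives the best cost below its starting value, while the decaying level still lets some worsening moves through early in the run.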
The learning mechanism tunes the priorities (weights) of the low-level heuristics as the search progresses, in an attempt to learn which low-level heuristic to use at each point of the search. Each heuristic i in the set H has a weight w_i, and the probability of selecting heuristic i is:

    Static memory:   p_i = w_i / Σ_{j∈H} w_j

    Dynamic memory:  p_i = (w_i − w_min) / Σ_{j∈H} (w_j − w_min),   where w_min = min{0, min_{j∈H} w_j}

After each step the weights are updated: the selected heuristic receives a reward r if applying it produced a new (accepted) solution, a penalty if it did not, and the weights of the heuristics that were not selected are left unchanged.

The non-linear, floating decay rate and the learning mechanism for low-level heuristic selection improve the performance of the Great Deluge algorithm when solving instances of the UCTT problem.

The best results so far for most of these instances of the UCTT problem are obtained by the NLGD with Reinforcement Learning.

Natural Selection
 Parents chosen based on their fitness
 Passively assigned mates
 Operate in objective space
 Selection schemes:
  • Fitness proportionate selection
  • Rank-based selection
  • Tournament selection

Sexual Selection
 2nd parent chosen w.r.t. the 1st parent
 Actively choose mates
 Operate in objective/decision space
 Selection schemes:
  • Ancestry selection
  • Assortative mating
  • Gender selection

Restricted Assortative Mating
• Incorporate some degree of restriction when recombining parents in evolutionary algorithms
• Use a mating radius to control the exploration and/or exploitation

The proposed restricted assortative mating strategy adapts to the similarity between potential parents and also to the diversity of the current population. The method can be incorporated into other EAs.
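The dynamic-memory selection probabilities and the reward/penalty update above can be sketched as follows. This is an illustrative reconstruction, assuming a simple roulette-wheel draw over the shifted weights; the reward and penalty magnitudes are hypothetical parameters, not values from the source.

```python
import random

def select_heuristic(weights):
    """Roulette-wheel selection with the dynamic-memory probabilities:
    p_i = (w_i - w_min) / sum_j (w_j - w_min), w_min = min(0, min_j w_j)."""
    w_min = min(0.0, min(weights))
    shifted = [w - w_min for w in weights]   # shift so all entries are >= 0
    total = sum(shifted)
    if total == 0:                           # degenerate case: pick uniformly
        return random.randrange(len(weights))
    pick = random.uniform(0, total)
    acc = 0.0
    for i, s in enumerate(shifted):
        acc += s
        if pick <= acc:
            return i
    return len(weights) - 1

def update_weight(weights, i, produced_new_solution, reward=1.0, penalty=1.0):
    """Reinforce the selected heuristic i: reward on a new solution, penalise otherwise.
    Weights of heuristics that were not selected stay unchanged."""
    weights[i] += reward if produced_new_solution else -penalty
```

With the dynamic-memory shift, a heuristic whose weight has been penalised below zero still retains a non-negative selection probability, while heuristics with larger weights are drawn proportionally more often.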
[Figure: Ishibuchi's strategy vs. the proposed strategy.]

• Experiments on 2-, 3- and 4-objective knapsack problems
• Algorithms: NSGA2, SPEA2, SEAMO2
• Ishibuchi's mating strategy and the restricted assortative mating strategy incorporated into SEAMO2
• The mating ratio is either fixed or automatically adjusted according to the population diversity (in decision space)

[Figures: results for the 2-, 3- and 4-objective instances.]

EMOSA – an Adaptive Algorithm for MOCO

Employs simulated annealing for the optimisation of each sub-problem and adapts the search directions to diversify the non-dominated solutions. Various strategies have been used to define search directions in existing decomposition-based multi-objective meta-heuristics.

EMOSA Approach

Step 0: Initialisation
Produce pop well-distributed weight vectors. For each weight vector, generate an initial solution randomly. Update the external population (EP) with the non-dominated solutions in the initial population. Set T := T0.

Step 1: Local Search and Competition
For each individual x in the current population, repeat the following steps K times:
1.1 Find a neighbouring solution y.
1.2 Update the EP with y if y is not dominated by x.
1.3 Replace x with y with probability P(w, x, y, T).
Then let x compete with the other solutions whose weight vectors are similar to that of x.

Step 2: Temperature Change
Lower the temperature using T := T − alpha. If T >= Tc, adapt the weight vector of each individual in the population (Step 3); otherwise, go to Step 4.

Step 3: Direction Adaptation
Modify each weight vector to move the current solution away from its nearest non-dominated neighbours in the population.

Step 4: Stopping Criteria
If T