Heuristic Optimization Methods
Transcript and Presenter's Notes
1
Heuristic Optimization Methods
  • Greedy algorithms
  • Approximation algorithms
  • GRASP

2
Agenda
  • Greedy Algorithms
  • A class of heuristics
  • Approximation Algorithms
  • Does not prove optimality, but returns a solution
    that is guaranteed to be within a certain
    distance from the optimal value
  • GRASP
  • Greedy Randomized Adaptive Search Procedure
  • Other
  • Squeaky Wheel
  • Ruin and Recreate
  • Very Large Neighborhood Search

3
Greedy Algorithms
  • We have previously studied Local Search
    Algorithms, which can produce heuristic solutions
    to difficult optimization problems
  • Another way of producing heuristic solutions is
    to apply Greedy Algorithms
  • The idea of a Greedy Algorithm is to construct a
    solution from scratch, choosing at each step the
    item bringing the best immediate reward

4
Greedy Example (1)
  • 0-1 Knapsack Problem
  • Maximize
  •   12x1 + 8x2 + 17x3 + 11x4 + 6x5 + 2x6 + 2x7
  • Subject to
  •   4x1 + 3x2 + 7x3 + 5x4 + 3x5 + 2x6 + 3x7 ≤ 9
  • With x binary
  • Notice that the variables are ordered such that
    cj/aj ≥ c(j+1)/a(j+1)
  • Item j gives more bang for the buck than item j+1

5
Greedy Example (2)
  • The greedy solution considers each item in turn,
    starting with the variable that gives the most
    bang for the buck, and puts the item in the
    knapsack if there is enough room left
  • x1 = 1 (enough space, and best remaining item)
  • x2 = 1 (enough space, and best remaining item)
  • x3 = x4 = x5 = 0 (not enough space for any of them)
  • x6 = 1 (enough space, and best remaining item)
  • x7 = 0 (not enough space)
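A minimal sketch of this greedy rule in Python, using the instance from the slides (the function name and interface are my own):

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic for the 0-1 knapsack problem: consider items
    in order of decreasing value/weight ratio ("bang per buck") and
    take each item that still fits."""
    n = len(values)
    order = sorted(range(n), key=lambda j: values[j] / weights[j], reverse=True)
    x = [0] * n
    remaining = capacity
    for j in order:
        if weights[j] <= remaining:
            x[j] = 1
            remaining -= weights[j]
    return x, sum(v * xi for v, xi in zip(values, x))

# The instance from the slides (items already ratio-ordered)
values = [12, 8, 17, 11, 6, 2, 2]
weights = [4, 3, 7, 5, 3, 2, 3]
print(greedy_knapsack(values, weights, 9))  # ([1, 1, 0, 0, 0, 1, 0], 22)
```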

6
Formalized Greedy Algorithm (1)
  • Let us assume that we can write our combinatorial
    optimization problem as the choice of a subset of
    items (the slide's formula is not preserved in the
    transcript; a generic form is sketched below)
  • For example, the 0-1 Knapsack Problem
  • (S will be the set of items not in the knapsack)

7
Formalized Greedy Algorithm (2)
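Neither the formulation nor the pseudocode on these two slides survives in the transcript. A common generic form is max { c(S) : S ⊆ N, S feasible }, choosing a subset S of elements; under that assumption, a greedy algorithm can be sketched as follows (the callback interface is mine, not the slides'):

```python
def greedy_construct(elements, can_add, score):
    """Generic greedy construction: start from the empty solution and
    repeatedly add the feasible element with the best immediate score,
    stopping when no element can be added."""
    solution = set()
    candidates = set(elements)
    while True:
        feasible = [e for e in candidates if can_add(solution, e)]
        if not feasible:
            return solution
        best = max(feasible, key=lambda e: score(solution, e))
        solution.add(best)
        candidates.remove(best)
```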
8
Adapting Greedy Algorithms
  • Greedy Algorithms have to be adapted to the
    particular problem structure
  • Just like Local Search Algorithms
  • For a given problem there can be many different
    Greedy Algorithms
  • TSP: nearest neighbor (sketched below), or pure
    greedy (selecting the shortest edges first)
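A sketch of the nearest-neighbor construction for the TSP (Euclidean instances assumed; the pure greedy edge-based variant would instead sort all edges and add the shortest ones that keep the partial tour valid):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Nearest-neighbor heuristic for the TSP: from the current city,
    always travel to the closest unvisited city."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour  # the tour closes by returning from tour[-1] to tour[0]
```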

9
Approximation Algorithms
  • Recall the three classes of algorithms
  • Exact (returns the optimal solution)
  • Approximation (returns a solution within a
    certain distance from the optimal value)
  • Heuristic (returns a hopefully good solution, but
    with no guarantees)
  • For Approximation Algorithms, we need some kind
    of proof that the algorithm returns a value
    within some bound
  • We will look at an example of a Greedy Algorithm
    that is also an Approximation Algorithm

10
Approximation Example (1)
  • We consider the Integer Knapsack Problem
  • Same as the 0-1 Knapsack Problem, but we can
    select any number of each item (that is, we have
    available an unlimited number of each item)

11
Approximation Example (2)
  • We can assume that
  •   aj ≤ b for all items j
  •   c1/a1 ≥ cj/aj for all items j
  • (That is, every item fits in an empty knapsack,
    and the first item is the one that gives the most
    bang for the buck)
  • We will show that a greedy solution to this gives
    a value that is at least half of the optimal value

12
Approximation Example (3)
  • The first step of a Greedy Algorithm will create
    the following solution
  •   x1 = ⌊b/a1⌋, with xj = 0 for all other j
  • Some of the other variables could become non-zero
    in later steps (smaller items can fill whatever
    gap is left after the x1 copies of item 1)

13
Approximation Example (4)
  • Now, the Linear Programming Relaxation of the
    problem will have the following solution
  •   x1 = b/a1
  •   xj = 0 for all j = 2, ..., n
  • We let the value of the greedy heuristic be zH
  • We let the value of the LP-relaxation be zLP
  • We want to show that zH/z* > ½, where z* is the
    optimal value

14
Approximation Example (5)
  • The proof writes b/a1 = ⌊b/a1⌋ + f, for some
    0 ≤ f < 1, and proceeds as shown below
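The slide's chain of inequalities is not preserved in the transcript; the following reconstructs the standard argument from the definitions above (a sketch, not necessarily the slide's exact notation):

```latex
z^H \ge c_1 \left\lfloor \tfrac{b}{a_1} \right\rfloor,
\qquad
z^* \le z^{LP} = c_1 \tfrac{b}{a_1}
      = c_1 \left( \left\lfloor \tfrac{b}{a_1} \right\rfloor + f \right),
\qquad 0 \le f < 1.
```

Since aj ≤ b implies ⌊b/a1⌋ ≥ 1 > f, we have ⌊b/a1⌋ + f < 2⌊b/a1⌋, and therefore

```latex
\frac{z^H}{z^*}
  \ge \frac{c_1 \lfloor b/a_1 \rfloor}{c_1 \left( \lfloor b/a_1 \rfloor + f \right)}
  > \frac{\lfloor b/a_1 \rfloor}{2 \lfloor b/a_1 \rfloor} = \frac{1}{2}.
```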

15
Approximation Summary
  • It is important to note that the analysis depends
    on finding
  • A lower bound on the optimal value
  • An upper bound on the optimal value
  • The practical importance of such analysis might
    not be too high
  • Bounds are usually not very good, and alternative
    heuristics will often work much better

16
GRASP
  • Greedy Randomized Adaptive Search Procedures
  • A Metaheuristic that is based on Greedy
    Algorithms
  • A constructive approach
  • A multi-start approach
  • Includes (optionally) a local search to improve
    the constructed solutions

17
Spelling out GRASP
  • Greedy: select the best choice (or among the best
    choices)
  • Randomized: use some probabilistic selection to
    prevent the same solution from being constructed
    every time
  • Adaptive: change the evaluation of choices after
    making each decision
  • Search Procedure: a heuristic algorithm for
    examining the solution space

18
Two Phases of GRASP
  • GRASP is an iterative process, in which each
    iteration has two phases
  • Construction
  • Build a feasible solution (from scratch) in the
    same way as using a Greedy Algorithm, but with
    some randomization
  • Improvement
  • Improve the solution by using some Local Search
    (Best/First Improvement)
  • The best overall solution is retained
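A minimal sketch of this two-phase loop (all function parameters are placeholders; the construction and local search routines are assumed to be supplied, and larger objective values are assumed to be better):

```python
def grasp(construct, local_search, evaluate, iterations=100):
    """GRASP main loop: in each iteration, build a randomized greedy
    solution, improve it with local search, and keep the best overall
    solution found."""
    best, best_value = None, float("-inf")
    for _ in range(iterations):
        solution = construct()             # greedy randomized construction
        solution = local_search(solution)  # (optional) improvement phase
        value = evaluate(solution)
        if value > best_value:
            best, best_value = solution, value
    return best, best_value
```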

19
The Constructive Phase (1)
20
The Constructive Phase (2)
  • Each step is both Greedy and Randomized
  • First, we build a Restricted Candidate List
  • The RCL contains the best elements that we can
    add to the solution
  • Then we select randomly one of the elements in
    the Restricted Candidate List
  • We then need to reevaluate the remaining elements
    (their evaluation should change as a result of
    the recent change in the partial solution), and
    repeat
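A sketch of one construction phase following these steps. The RCL here is value-based, with a threshold chosen so that α = 1 is purely greedy and α = 0 purely random, matching the convention used on the later slides (the callback interface is assumed):

```python
import random

def grasp_construct(elements, can_add, score, alpha=0.8):
    """One GRASP construction phase: score all feasible candidates,
    build a restricted candidate list (RCL), pick one RCL member at
    random, re-score the remaining candidates, and repeat."""
    solution = set()
    candidates = set(elements)
    while True:
        feasible = [e for e in candidates if can_add(solution, e)]
        if not feasible:
            return solution
        scores = {e: score(solution, e) for e in feasible}
        lo, hi = min(scores.values()), max(scores.values())
        # Keep candidates whose score lies in the top alpha-band.
        rcl = [e for e in feasible if scores[e] >= lo + alpha * (hi - lo)]
        chosen = random.choice(rcl)
        solution.add(chosen)
        candidates.remove(chosen)
```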

21
The Restricted Candidate List (1)
  • Assume we have evaluated all the possible
    elements that can be added to the solution
  • There are two ways of generating a restricted list
  • Based on rank
  • Based on value
  • In each case, we introduce a parameter α that
    controls how large the RCL will be
  • Include the (1 − α) elements with the highest rank
  • Include all elements that have a value within α of
    the best element
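The two variants side by side, for a maximization scoring. Reading "(1 − α) elements with the highest rank" as a fraction of the candidate set is my interpretation; the slides do not spell this out:

```python
import math

def rcl_by_rank(scored, alpha):
    """Keep the best (1 - alpha) fraction of candidates (at least one).
    'scored' is a list of (element, score) pairs; higher is better."""
    ranked = sorted(scored, key=lambda es: es[1], reverse=True)
    k = max(1, math.ceil((1 - alpha) * len(ranked)))
    return [e for e, _ in ranked[:k]]

def rcl_by_value(scored, alpha):
    """Keep every candidate whose score lies within the alpha-controlled
    band below the best score (alpha = 1 keeps only the best elements,
    alpha = 0 keeps everything)."""
    lo = min(s for _, s in scored)
    hi = max(s for _, s in scored)
    return [e for e, s in scored if s >= lo + alpha * (hi - lo)]
```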

22
The Restricted Candidate List (2)
  • In general
  • A small RCL leads to a small variance in the
    values of the constructed solutions
  • A large RCL leads to worse average solution
    values, but a larger variance
  • High values of α (near 1) result in a purely
    greedy construction
  • Low values of α (near 0) result in a purely random
    construction

23
The Restricted Candidate List (3)
24
The Restricted Candidate List (4)
  • The role of α is thus critical
  • Usually, a good choice is to modify the value of α
    during the search
  • Randomly
  • Based on results
  • The approach where α is adjusted based on previous
    results is called Reactive GRASP
  • The probability distribution over the α values is
    changed based on the performance of each value of α
    (see the sketch below)
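A sketch of the Reactive GRASP idea: keep a discrete set of candidate α values and sample them with probability proportional to the average solution value each one has produced (this particular update rule is one common choice, not necessarily the slides'; objective values are assumed nonnegative):

```python
import random

def reactive_alpha(alphas):
    """Returns (choose, report): choose() samples an alpha weighted by
    its average observed solution value; report(alpha, value) records
    the value obtained with that alpha."""
    totals = {a: 0.0 for a in alphas}  # sum of solution values per alpha
    counts = {a: 1 for a in alphas}    # start at 1 to avoid division by zero

    def choose():
        weights = [totals[a] / counts[a] + 1e-9 for a in alphas]
        return random.choices(alphas, weights=weights)[0]

    def report(alpha, value):
        totals[alpha] += value
        counts[alpha] += 1

    return choose, report
```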

25
Effect of α on Local Search
26
GRASP vs. Other Methods (1)
  • GRASP is the first pure constructive method that
    we have seen
  • However, GRASP can be compared to Local Search
    based methods in some aspects
  • That is, a GRASP can sometimes be interpreted as
    a Local Search where the entire solution is
    destroyed (emptied) whenever a local optimum is
    reached
  • The construction reaches a local optimum when no
    more elements can be added

27
GRASP vs. Other Methods (2)
  • In this sense, we can classify GRASP as
  • Memoryless (not using adaptive memory)
  • Randomized (not systematic)
  • Operating on 1 solution (not a population)
  • Potential improvements of GRASP would involve
    adding some memory
  • Many improvements have been suggested, but not
    too many have been implemented/tested
  • There is still room for doing research in this
    area

28
Squeaky Wheel Optimization
  • "If it's not broken, don't fix it."
  • Often used in constructive meta-heuristics.
  • Inspect the constructed (complete) solution
  • If it has any flaws, focus on fixing these in the
    next constructive run
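One way to read this loop in code, entirely a sketch since the slide gives no implementation details: elements implicated in flaws get their priority boosted, and the next construction run uses those priorities to place them earlier:

```python
def squeaky_wheel(construct_with_priority, find_flaws, evaluate,
                  elements, iterations=50, boost=1.0):
    """Squeaky Wheel Optimization sketch: elements that caused flaws in
    the last constructed solution get higher priority next time."""
    priority = {e: 0.0 for e in elements}
    best, best_value = None, float("-inf")
    for _ in range(iterations):
        solution = construct_with_priority(priority)
        value = evaluate(solution)
        if value > best_value:
            best, best_value = solution, value
        for e in find_flaws(solution):  # the squeaky wheels get the grease
            priority[e] = priority.get(e, 0.0) + boost
    return best, best_value
```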

29
Ruin and Recreate
  • Also called Very Large Neighborhood Search
  • Given a solution, destroy part of it
  • Random
  • Geographically
  • Along other dimensions
  • Rebuild Greedily
  • Can also use GRASP-like ideas
  • Can intersperse with local search
    (meta-heuristics)
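A sketch under the assumption that a solution is a set of elements and that larger objective values are better (the destruction here is random; geographic or other dimensions would only change how the ruined part is selected):

```python
import random

def ruin_and_recreate(initial, rebuild, evaluate, fraction=0.3,
                      iterations=100):
    """Ruin-and-recreate sketch: repeatedly remove a fraction of the
    incumbent solution's elements, rebuild greedily (or GRASP-like),
    and keep the result when it improves."""
    best, best_value = initial, evaluate(initial)
    for _ in range(iterations):
        kept = set(best)
        for e in random.sample(list(kept), int(fraction * len(kept))):
            kept.discard(e)        # destroy part of the solution
        candidate = rebuild(kept)  # greedy reconstruction of the rest
        value = evaluate(candidate)
        if value > best_value:
            best, best_value = candidate, value
    return best, best_value
```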

30
Summary of Today's Lecture
  • Greedy Algorithms
  • A class of heuristics
  • Approximation Algorithms
  • Does not prove optimality, but returns a solution
    that is guaranteed to be within a certain
    distance from the optimal value
  • GRASP
  • Greedy Randomized Adaptive Search Procedure
  • Other
  • Squeaky Wheel
  • Ruin and Recreate
  • Very Large Neighborhood Search