1
Efficient Stochastic Local Search for MPE Solving
  • Frank Hutter, The University of British Columbia
    (UBC), Vancouver, Canada
  • Joint work with Holger Hoos (UBC) and Thomas
    Stützle (Darmstadt University of Technology,
    Germany)

2
SLS: a general algorithmic framework for solving
combinatorial problems
3
MPE in graphical models: many applications
4
Outline
  • Most probable explanation (MPE) problem
  • Problem definition
  • Previous work
  • SLS algorithms for MPE
  • Illustration
  • Previous SLS algorithms
  • Guided Local Search (GLS) in detail
  • From GLS to GLS+
  • Modifications
  • Performance gains
  • Comparison to state-of-the-art

5
MPE: problem definition (in the most general
representation: factor graphs)
  • Given a factor graph
  • Discrete variables X = {X1, ..., Xn}
  • Factors Φ = {φ1, ..., φm} over subsets of X
  • A factor φi over variables Vi ⊆ X assigns a
    non-negative number to every complete
    instantiation vi of Vi
  • Find
  • Complete instantiation {x1, ..., xn} maximizing
    ∏_{i=1}^{m} φi(x1, ..., xn)
    (a brute-force sketch follows below)
  • NP-hard (simple reduction from SAT)
  • Also known as Max-product or Maximum a posteriori
    (MAP)
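To make the definition concrete, here is a minimal Python sketch (not the authors' code; the factor tables are φ1 and φ2 from the illustration on slide 9, all names are ours) that represents a factor graph and brute-forces the MPE:

```python
from itertools import product

# Factors as (scope, table): a table maps each joint assignment of the scope
# to a non-negative number. Entries below are phi1 and phi2 from slide 9.
factors = [
    (("X1",), {(0,): 0.0, (1,): 21.2, (2,): 0.1}),
    (("X1", "X2"), {(0, 0): 21.0, (0, 1): 0.7, (1, 0): 0.0,
                    (1, 1): 1.0, (2, 0): 0.9, (2, 1): 0.2}),
]
domains = {"X1": (0, 1, 2), "X2": (0, 1)}

def objective(x):
    """Product of all factor entries under the complete instantiation x."""
    result = 1.0
    for scope, table in factors:
        result *= table[tuple(x[v] for v in scope)]
    return result

# Brute-force MPE: enumerate all complete instantiations. This is exponential
# in n, which is exactly why heuristic methods such as SLS are interesting.
names = sorted(domains)
best = max((dict(zip(names, vals))
            for vals in product(*(domains[n] for n in names))),
           key=objective)
print(best, objective(best))  # {'X1': 1, 'X2': 1} with value 21.2
```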

6
Previous approaches for solving MPE
  • Variable elimination / junction tree
  • Exponential in the graphical model's induced
    width
  • Approximation with loopy belief propagation and
    its generalizations [Yedidia, Freeman & Weiss '02]
  • Approximation with Mini-Buckets (MB) [Dechter &
    Rish '97] → also gives lower & upper bounds
  • Search algorithms
  • Local search
  • Branch and Bound with various MB heuristics
    [Dechter's group, '99-'05]; at UAI '03, B&B with an
    MB heuristic was shown to be state-of-the-art

7
Motivation for our work
  • B&B clearly outperforms the best SLS algorithm so
    far, even on random problem instances
    [Marinescu, Kask & Dechter, UAI '03]
  • MPE is closely related to weighted Max-SAT [Park
    '02]
  • For Max-SAT, SLS is state-of-the-art (at the very
    least for random problems)
  • Why is SLS not state-of-the-art for MPE?
  • Additional problem structure inside the factors
  • But for completely random problems?
  • SLS algorithms should be much better than they
    currently are
  • We took the best SLS algorithm so far (GLS) and
    improved it

8
Outline
  • Most probable explanation (MPE) problem
  • Problem definition
  • Previous work
  • SLS algorithms for MPE
  • Illustration
  • Previous SLS algorithms
  • Guided Local Search (GLS) in detail
  • From GLS to GLS+
  • Modifications
  • Performance gains
  • Comparison to state-of-the-art

9
SLS for MPE: illustration
Factor graph over variables X1, X2, X3, X4 with
factors φ1(X1), φ2(X1, X2), φ3(X1, X3), φ4(X3),
φ5(X2, X3, X4). Current instantiation:
(X1, X2, X3, X4) = (2, 1, 0, 0).

X1 | φ1        X1 X2 | φ2       X1 X3 | φ3
 0 |  0         0  0 | 21        0  0 |  1.1
 1 | 21.2       0  1 |  0.7      0  1 | 23
 2 |  0.1       1  0 |  0        1  0 |  0
                1  1 |  1        1  1 |  0.7
                2  0 |  0.9      2  0 |  2.7
                2  1 |  0.2      2  1 | 42

X3 | φ4        X2 X3 X4 | φ5
 0 | 0.9        0  0  0 |  10
 1 | 0.1        0  0  1 |   0.9
                0  1  0 |   0
                0  1  1 | 100
                1  0  0 |  33.2
                1  0  1 |   0
                1  1  0 |  23.2
                1  1  1 |  13.7

∏_{i=1}^{5} φi(2, 1, 0, 0) = 0.1 · 0.2 · 2.7 · 0.9 · 33.2
10
SLS for MPE: illustration (continued)
Same factor graph and tables as above. A local search
step flips X2 from 1 to 0, changing the instantiation
from (2, 1, 0, 0) to (2, 0, 0, 0). Only the factors
containing X2 (φ2 and φ5) change value, so the
objective changes by the ratio

∏_{i=1}^{5} φi(2, 0, 0, 0) / ∏_{i=1}^{5} φi(2, 1, 0, 0)
  = (0.9 / 0.2) · (10 / 33.2)

(an incremental-scoring sketch follows below)
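The flip above can be scored incrementally, since only factors containing the flipped variable change. A minimal sketch of this ratio computation (our names; tables φ2 and φ5 taken from the slides):

```python
# Only phi2 and phi5 contain X2, so flipping X2 changes exactly those entries.
factors = {
    "phi2": (("X1", "X2"), {(0, 0): 21.0, (0, 1): 0.7, (1, 0): 0.0,
                            (1, 1): 1.0, (2, 0): 0.9, (2, 1): 0.2}),
    "phi5": (("X2", "X3", "X4"), {(0, 0, 0): 10.0, (0, 0, 1): 0.9,
                                  (0, 1, 0): 0.0, (0, 1, 1): 100.0,
                                  (1, 0, 0): 33.2, (1, 0, 1): 0.0,
                                  (1, 1, 0): 23.2, (1, 1, 1): 13.7}),
}
touched_by = {"X2": ("phi2", "phi5")}  # factor membership index

def flip_ratio(x, var, new_value):
    """Multiplicative change of the objective when x[var] is set to new_value.
    (Real implementations work in log space and must handle zero entries.)"""
    ratio = 1.0
    for name in touched_by[var]:
        scope, table = factors[name]
        old_key = tuple(x[v] for v in scope)
        new_key = tuple(new_value if v == var else x[v] for v in scope)
        ratio *= table[new_key] / table[old_key]
    return ratio

x = {"X1": 2, "X2": 1, "X3": 0, "X4": 0}
print(flip_ratio(x, "X2", 0))  # (0.9 / 0.2) * (10 / 33.2), as on the slide
```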
11
Previous SLS algorithms for MPE
  • Iterative Conditional Modes [Besag '86]
  • Just greedy hill climbing
  • Stochastic Simulation
  • Sampling algorithm, very poor for optimization
  • Greedy + Stochastic Simulation [Kask & Dechter
    '99]
  • Outperforms the above and simulated annealing by
    orders of magnitude
  • Guided Local Search (GLS) [Park '02]
  • (Iterated Local Search (ILS) [Hutter '04])
  • Outperforms Greedy + Stochastic Simulation by
    orders of magnitude

12
Guided Local Search (GLS) [Voudouris '97]
  • Subclass of Dynamic Local Search [Hoos &
    Stützle '04]. Iteratively:
    1) Local search → local optimum
    2) Modify evaluation function
  • In local optima, penalize some solution features
  • Solution features for MPE are partial assignments
  • Evaluation fct. = objective fct. - sum of
    respective penalties
  • Penalty update rule experimentally designed
  • Performs very well across many problem classes
    (a generic sketch follows below)
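As a concrete illustration of this scheme, here is a minimal, generic GLS loop on a toy maximization problem. It is a sketch under our own assumptions (a made-up objective; every feature of a local optimum is penalized, rather than Voudouris' exact utility-based rule) and not the authors' implementation; the MPE-specific update rule is detailed on the following slides.

```python
import random

random.seed(0)
n = 8
weight = [[random.random(), random.random()] for _ in range(n)]  # toy objective
penalty = {}  # solution feature (variable index, value) -> accumulated penalty

def objective(x):
    return sum(weight[i][x[i]] for i in range(n))

def evaluation(x):  # evaluation fct. = objective fct. - sum of penalties
    return objective(x) - sum(penalty.get((i, x[i]), 0.0) for i in range(n))

def local_search(x):
    """First-improvement hill climbing on the evaluation function."""
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y = x[:]
            y[i] = 1 - x[i]
            if evaluation(y) > evaluation(x):
                x, improved = y, True
    return x

x = [random.randint(0, 1) for _ in range(n)]
best = x[:]
for _ in range(50):
    x = local_search(x)                 # 1) local search -> local optimum
    if objective(x) > objective(best):  # track the best true objective value
        best = x[:]
    for i in range(n):                  # 2) penalize features of the optimum
        penalty[(i, x[i])] = penalty.get((i, x[i]), 0.0) + 1.0
print(best, objective(best))
```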

13
GLS for MPE [Park '02]
  • Initialize penalties to 0
  • Evaluation function
  • Obj. function - sum of penalties of current
    instantiation
  • ∏_{i=1}^{m} φi(x1, ..., xn) - ∑_{i=1}^{p} λi(x1, ..., xn)
  • In local optima
  • Choose partial instantiations (according to the GLS
    update rule)
  • Increment their penalty by 1
  • Every Nρ local optima
  • Smooth all penalties by multiplying them with ρ < 1
  • Important to eventually optimize the original
    objective function (a sketch of the penalty
    updates follows below)
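A sketch of this penalty bookkeeping. Flagged assumptions: we take a feature's cost to be the negated log of its factor entry and select features by the standard GLS utility cost / (1 + penalty); ρ = 0.8 is Park's setting (discussed later), while Nρ = 200 is illustrative.

```python
import math

def update_penalties(x, factors, penalty, state, N_rho=200, rho=0.8):
    """Called in a local optimum x. Penalizes the max-utility features (partial
    instantiations of factor scopes); smooths all penalties every N_rho calls."""
    utils = []
    for idx, (scope, table) in enumerate(factors):
        key = tuple(x[v] for v in scope)
        cost = -math.log(table[key] + 1e-300)  # low entry -> high cost (assumption)
        utils.append((cost / (1.0 + penalty.get((idx, key), 0.0)), idx, key))
    top = max(u for u, _, _ in utils)
    for u, idx, key in utils:
        if u == top:                           # increment max-utility penalties by 1
            penalty[(idx, key)] = penalty.get((idx, key), 0.0) + 1.0
    state["optima"] = state.get("optima", 0) + 1
    if state["optima"] % N_rho == 0:           # smoothing: multiply all by rho < 1
        for f in penalty:
            penalty[f] *= rho
```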

14
Outline
  • Most probable explanation (MPE) problem
  • Problem definition
  • Previous work
  • SLS algorithms for MPE
  • Illustration
  • Previous SLS algorithms
  • Guided Local Search (GLS) in detail
  • From GLS to GLS+
  • Modifications
  • Performance gains
  • Comparison to state-of-the-art

15
GLS → GLS+: overview of modified components
  • Modified evaluation function
  • Pay more attention to the actual objective
    function
  • Improved caching of the evaluation function
  • Straightforward adaptation of SAT caching schemes
  • Tuning of smoothing parameter ρ
  • Over two orders of magnitude improvement!
  • Initialization with Mini-Buckets instead of
    random
  • Was shown to perform better by [Kask & Dechter
    '99]

16
GLS → GLS+ (1): modified evaluation function
  • GLS
  • ∏_{i=1}^{m} φi(x1, ..., xn) - ∑_{i=1}^{p} λi(x1, ..., xn)
  • Product of entries minus sum of penalties
    ≈ zero minus sum of penalties:
    almost neglects the objective function
  • GLS+
  • ∑_{i=1}^{m} log φi(x1, ..., xn) - ∑_{i=1}^{p} λi(x1, ..., xn)
  • Use logarithmic objective function
  • Very simple, but much better results
  • Penalties are now just new temporary factors
    that decay over time!
  • Could be improved by dynamic weighting of the
    penalties (both evaluation functions are sketched
    below)
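The two evaluation functions side by side, as a sketch (the 1e-300 guard against zero factor entries is our assumption; penalties λ are keyed by factor index and partial instantiation as in the sketch above):

```python
import math

def penalty_sum(x, factors, penalty):
    """Sum of the penalties of the partial instantiations present in x."""
    return sum(penalty.get((idx, tuple(x[v] for v in scope)), 0.0)
               for idx, (scope, _) in enumerate(factors))

def eval_gls(x, factors, penalty):
    """Original GLS: raw product minus penalties. The product of many entries
    is typically near zero, so the penalties dominate the evaluation."""
    prod = 1.0
    for scope, table in factors:
        prod *= table[tuple(x[v] for v in scope)]
    return prod - penalty_sum(x, factors, penalty)

def eval_gls_plus(x, factors, penalty):
    """GLS+: sum of log entries minus penalties, so penalties act like
    additional (decaying) factors on the same, logarithmic scale."""
    logsum = sum(math.log(table[tuple(x[v] for v in scope)] + 1e-300)
                 for scope, table in factors)
    return logsum - penalty_sum(x, factors, penalty)
```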

17
GLS → GLS+ (1): modified evaluation function
  • Much faster in early stages of the search
  • Speedups of about one order of magnitude

18
GLS → GLS+ (2): speedups by caching
  • Time complexity for a single best-improvement
    step
  • Previously best caching: Θ(|V| · |DV| · |ΦV|)
  • Improved caching: Θ(|Vimproving| · |DV|)
    (see the sketch below)
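A sketch of the bookkeeping behind these bounds (our structure, adapted in spirit from SAT caching schemes; the authors' C implementation differs in detail): cache the evaluation delta of every move (variable, value) and, after a flip, recompute only moves of variables that share a factor with the flipped variable.

```python
def refresh_cache_after_flip(flipped, x, scopes, factors_of, delta, move_delta):
    """Recompute cached move deltas only where they can have changed.
    factors_of[v]: indices of factors containing v; scopes[f]: variables of
    factor f; delta[v][d]: cached evaluation change of setting x[v] := d;
    move_delta(x, v, d): scores one move from scratch (hypothetical helper)."""
    touched = {flipped}
    for f in factors_of[flipped]:
        touched.update(scopes[f])
    for v in touched:                          # only the touched variables,
        for d in range(len(delta[v])):         # instead of rescoring every move
            delta[v][d] = move_delta(x, v, d)  # of every variable
    return delta
```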

19
GLS → GLS+ (3): tuning the smoothing factor ρ
  • [Park '02] stated GLS to have no parameters
  • Changing ρ from Park's setting 0.8 to 0.99
  • Sometimes from unsolvable to milliseconds
  • Effect increases for large instances

20
GLS → GLS+ (4): initialization with Mini-Buckets
  • Sometimes a bit worse, sometimes much better
  • Particularly helps for some structured instances

21
Outline
  • Most probable explanation (MPE) problem
  • Problem definition
  • Previous work
  • SLS algorithms for MPE
  • Illustration
  • Previous SLS algorithms
  • Guided Local Search (GLS) in detail
  • From GLS to GLS+
  • Modifications
  • Performance gains
  • Comparison to state-of-the-art

22
Comparison based on [Marinescu, Kask & Dechter,
UAI '03]
  • Branch & Bound with MB heuristic was
    state-of-the-art for MPE, even for random
    instances!
  • Scales better than the original GLS with
  • Number of variables
  • Domain size
  • Both as an anytime algorithm and in terms of the
    time needed to find the optimum
  • On the same problem instances, we show that our
    new GLS+ scales better than their implementation
    with
  • Number of variables
  • Domain size
  • Density
  • Induced width

23
Benchmark instances
  • Randomly generated Bayes nets
  • Graph structure: completely random / grid networks
  • Controlled number of variables & domain size
  • Random networks with controlled induced width
  • Bayesian networks from the Bayes net repository

24
Original GLS vs. BB with MB heuristic: relative
solution quality after 100 seconds for random
grid networks of size N×N
(figure: panels for small, medium, and large instances)
25
GLS+ vs. GLS and BB with MB heuristic: relative
solution quality after 100 seconds for random
grid networks of size N×N
(figure: panels for small, medium, and large instances)
26
GLS+ vs. BB with MB heuristic: solution time
with increasing domain size on random networks
(figure: panels for small, medium, and large instances)
27
Solution times with increasing induced width on
random networks
(figure: curves for d-BBMB, s-BBMB, original GLS, and GLS+)
28
Results for the Bayes net repository
  • GLS+ shows overall best performance
  • Only algorithm to solve the Link network (in 1
    second!)
  • Problems for Barley and especially Diabetes
  • Preprocessing with partial variable elimination
    helps a lot
  • Can reduce the number of variables dramatically

29
Conclusions
  • SLS algorithms are competitive for MPE solving
  • Scale very well, especially with induced width
  • But they need careful design, analysis &
    parameter tuning
  • SLS and Machine Learning (ML) people should talk
  • SLS can perform very well for some traditional ML
    problems
  • Our C source code is online
  • Please use it ☺
  • There's also a Matlab interface

30
Extensions in progress
  • Real problem domains
  • MRFs for stereo vision
  • CRFs for sketch recognition
  • Domain-dependent extensions
  • Hierarchical SLS for problems in computer vision
  • Automated parameter tuning
  • Use machine learning to predict runtime for
    different settings of algorithm parameters
  • Use the parameter setting with the lowest
    predicted runtime (a toy sketch follows below)
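A toy sketch of this idea (the measurements are made up for illustration, and scikit-learn is our choice of regression tool, not necessarily the authors'): fit a model mapping parameter settings to measured log runtime, then choose the candidate setting with the lowest prediction.

```python
from sklearn.linear_model import LinearRegression

# Made-up training data: each row is a parameter setting [rho, N_rho];
# the target is the measured log10 runtime in seconds for that setting.
settings = [[0.8, 200], [0.9, 200], [0.99, 200], [0.99, 1000]]
log_runtime = [2.1, 1.3, -0.5, -0.2]

model = LinearRegression().fit(settings, log_runtime)

# Pick the candidate setting with the lowest predicted runtime.
candidates = [[0.85, 500], [0.95, 500], [0.99, 500]]
predicted = model.predict(candidates)
best = min(zip(predicted, candidates), key=lambda pc: pc[0])[1]
print(best)  # run the solver with this setting
```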

31
The End
  • Thanks to
  • Holger Hoos & Thomas Stützle
  • Radu Marinescu for their BB code
  • You for your attention ☺