Backdoors in the Context of Learning (short paper) - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
Backdoors in the Context of Learning (short paper)
  • Bistra Dilkina, Carla P. Gomes, Ashish Sabharwal
  • Cornell University
  • SAT-09 Conference
  • Swansea, U.K., June 30, 2009

2
SAT: Gap between theory and practice
  • Boolean Satisfiability or SAT
  • Given a Boolean formula F in conjunctive normal
    form, e.g. F = (a or b) and (a or c or d) and
    (b or c), determine whether F is satisfiable
    (a small brute-force check is sketched after
    this list)
  • NP-complete (note: a worst-case notion)
  • widely used in practice, e.g. in hardware and
    software verification, design automation, AI
    planning, ...
  • Large industrial benchmarks (10K vars) are
    solved within seconds by state-of-the-art
    complete/systematic SAT solvers
  • Even 100K or 1M variables are not completely out
    of the question
  • Good scaling behavior seems to defy
    NP-completeness!
  • Real-world problems have tractable sub-structure
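
To make the CNF example above concrete, here is a minimal brute-force
satisfiability check in Python; it is purely illustrative (the clause
encoding and variable names are ours), and real solvers of course do not
enumerate assignments.

    from itertools import product

    # The example formula from this slide: (a or b) and (a or c or d) and (b or c).
    # A clause is a set of literals; "-x" denotes the negation of variable x.
    formula = [{"a", "b"}, {"a", "c", "d"}, {"b", "c"}]
    variables = sorted({lit.lstrip("-") for clause in formula for lit in clause})

    def clause_true(clause, assignment):
        # A clause holds if at least one of its literals evaluates to True.
        return any(assignment[lit.lstrip("-")] != lit.startswith("-")
                   for lit in clause)

    # Brute force over all 2^n assignments: fine for 4 variables,
    # hopeless for the 10K+ variable industrial instances mentioned above.
    satisfiable = any(
        all(clause_true(c, dict(zip(variables, vals))) for c in formula)
        for vals in product([False, True], repeat=len(variables))
    )
    print("satisfiable" if satisfiable else "unsatisfiable")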

Backdoors help explain how solvers can get
smart and solve very large instances
3
Backdoors to Tractability
A notion to capture hidden structure
  • Informally
  • A backdoor to a given problem is a subset of
    its variables such that, once assigned values,
    the remaining instance simplifies to a tractable
    class.
  • Formally
  • define a notion of a poly-time sub-solver that
    handles tractable substructure of a problem
    instance, e.g. unit prop., pure literal
    elimination, CP filtering, LP solver, ...
    (a minimal sketch of such a sub-solver follows
    this list)
  • Weak backdoors: for finding feasible solutions
  • Strong backdoors: for finding feasible solutions
    or proving unsatisfiability
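
As an illustration of what such a sub-solver might look like, here is a
small unit-propagation routine in Python. The encoding (DIMACS-style integer
literals, a three-valued result) is our own convention for this sketch; the
slides only assume some poly-time sub-solver.

    def unit_propagate(clauses, assignment):
        # clauses: iterable of frozensets of integer literals (-3 means "not x3")
        # assignment: dict mapping variable -> bool for already-fixed variables
        # Returns ("SAT", a), ("UNSAT", a), or ("UNKNOWN", a); the last means
        # the sub-solver gives up and the search has to branch further.
        assignment = dict(assignment)
        while True:
            unit, all_satisfied = None, True
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    var = abs(lit)
                    if var in assignment:
                        if assignment[var] == (lit > 0):
                            satisfied = True
                            break
                    else:
                        unassigned.append(lit)
                if satisfied:
                    continue
                if not unassigned:
                    return "UNSAT", assignment      # clause falsified under assignment
                all_satisfied = False
                if len(unassigned) == 1:
                    unit = unassigned[0]            # forced literal
            if all_satisfied:
                return "SAT", assignment            # every clause already satisfied
            if unit is None:
                return "UNKNOWN", assignment        # nothing left to propagate
            assignment[abs(unit)] = unit > 0        # propagate the unit literal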

4
Are backdoors small in practice?
Branching on the backdoor variables is enough to
solve the formula: heuristics need to be good on
only a few variables.
The notion of backdoors has provided powerful
insights, leading to techniques like randomization,
restarts, and algorithm portfolios for SAT
5
This Talk: Motivation
  • Traditional backdoors are defined for a basic
    tree-search procedure, such as pure DPLL
  • Oblivious to the now-standard (and essential)
    feature of learning during search, i.e., clause
    learning for DPLL
  • Note: state-of-the-art SAT solvers rely heavily
    on clause learning, especially for industrial and
    crafted instances
  • provably leads to shorter proofs for many
    unsatisfiable formulas
  • significant speed-up on satisfiable formulas as
    well
  • Does clause learning allow for smaller backdoors
    when capturing hidden structure in SAT
    instances?

6
This Talk: Contribution
  • Affirmative answer
  • First, must extend the notion of backdoors to
    clause-learning SAT solvers, taking
    order-sensitivity into account
  • Theoretically, learning-sensitive backdoors for
    SAT solvers with clause learning (CDCL solvers)
    can be exponentially smaller than traditional
    strong backdoors
  • Initial empirical results suggest that in
    practice:
  • there are more learning-sensitive backdoors than
    traditional ones (of a given size)
  • SAT solvers often find much smaller
    learning-sensitive backdoors than traditional ones

7
DPLL Search with Clause Learning
  • Input: CNF formula F
  • At every search node
  • branch by setting a variable to True or
    False → current partial variable assignment ρ
  • consider the simplified sub-formula F|ρ
  • apply a poly-time inference procedure to F|ρ
    (e.g. unit prop., pure literal test, failed
    literal test / probing)
  • Contradiction → learn a conflict clause
  • Solution → declare satisfiable and exit
  • Not solved → continue branching (a simplified
    sketch of this loop appears below)

The poly-time inference procedure is the sub-solver for SAT.
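
The following Python sketch shows the shape of this search loop. It reuses
the unit_propagate routine from the earlier sketch as the sub-solver, and on
every conflict it learns the negation of the falsified partial assignment;
that is a deliberately crude learning scheme, not the 1-UIP scheme used by
solvers such as Rsat, and the code is our illustration rather than the
authors' implementation.

    def solve(clauses, variables):
        # clauses: list of frozensets of integer literals
        # variables: list covering every variable that occurs in the clauses
        learned = []                      # learned clauses persist across branches

        def search(assignment):
            status, extended = unit_propagate(clauses + learned, assignment)
            if status == "SAT":
                return extended           # satisfying (partial) assignment found
            if status == "UNSAT":
                # learn a clause forbidding the falsified partial assignment
                learned.append(frozenset(-v if val else v
                                         for v, val in extended.items()))
                return None
            var = next(v for v in variables if v not in extended)   # branch variable
            for value in (True, False):
                result = search({**extended, var: value})
                if result is not None:
                    return result
            return None                   # both branches fail under this assignment

        model = search({})
        return ("SAT", model) if model is not None else ("UNSAT", None)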
8
Backdoors and Search with Learning
Search order matters!
9
Traditional Backdoors
  • Definition [Williams, Gomes, Selman '03]
  • A subset B of variables is a strong backdoor
    (for F w.r.t. a sub-solver S) if for every truth
    assignment τ to the variables in B,
  • S solves F|τ.
  • Issue: oblivious to previously learned clauses;
    the sub-solver must infer a contradiction on F|τ
    for every τ from scratch (a brute-force check of
    this definition is sketched after this slide).

either finds a satisfying assignment for F or
proves that F is unsatisfiable
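
A brute-force check of this definition might look as follows, again building
on the unit_propagate sketch from the earlier slide as sub-solver S. Since it
enumerates all 2^|B| assignments, it is only usable for small candidate sets;
it is our illustration, not the authors' code.

    from itertools import product

    def is_strong_backdoor(clauses, candidate_vars):
        # candidate_vars plays the role of B: it is a strong backdoor w.r.t. S
        # (here, unit propagation) iff S fully solves the simplified formula
        # F|tau for EVERY truth assignment tau to the variables in B.
        for values in product([False, True], repeat=len(candidate_vars)):
            tau = dict(zip(candidate_vars, values))
            status, _ = unit_propagate(clauses, tau)   # same as running S on F|tau
            if status == "UNKNOWN":                    # S neither solved nor refuted
                return False
        return True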
10
New Learning-Sensitive Backdoors
  • Definition
  • A subset B of variables is a learning-sensitive
    backdoor (for F w.r.t. a sub-solver S) if there
    exists a search order s.t. a clause-learning
    solver
  • branching only on the variables in B
  • in this search order
  • with S as the sub-solver at each leaf
  • solves F.

either finds a satisfying assignment for F or
proves that F is unsatisfiable
11
Theoretical Results
12
Learning-Sensitive Backdoors Can Provably be Much
Smaller
  • Setup:
  • Sub-solver: unit propagation
  • Clause learning scheme: 1-UIP
  • Comparison: w.r.t. traditional strong backdoors
  • Theorem 1: There are unsatisfiable SAT instances
    for which learning-sensitive backdoors are
    exponentially smaller than the smallest
    traditional strong backdoors.
  • Theorem 2: There are satisfiable SAT instances
    for which learning-sensitive backdoors are
    smaller than the smallest traditional strong
    backdoors.

used Rsat for experiments
13
Proof Idea: Simple Example
x is a learning-sensitive backdoor (of size 1)
Learn 1-UIP clause (¬q)
[figure: search tree branching on x = 0 and x = 1]
With clause learning, branching on x in the right
order suffices to prove unsatisfiability
14
Proof Idea: Simple Example
  • In contrast, without clause learning, one must
    branch on at least 2 variables in every proof of
    unsatisfiability!
  • every traditional strong backdoor is of size at
    least 2
  • Why?
  • every variable, in at least one polarity, appears
    only in long clauses; e.g., ¬p1, q, r, ¬a do not
    appear in any 2-clauses
  • therefore, no unit prop. or empty clause
    generation by fixing this variable to this value
  • therefore, this variable by itself cannot be a
    strong backdoor

15
Proof Idea: Exponential Separation
  • Construct an unsatisfiable formula F on n vars.
    such that
  • certain long clauses must be used in every
    refutation (i.e., removing a long clause makes F
    satisfiable)
  • many variables, in at least one polarity, appear
    only in such long clauses with Ω(n) variables
  • Controlled unit propagation / empty clause
    generation
  • Must branch on essentially all variables of the
    long clauses to derive a contradiction
  • Such variables must be part of every traditional
    backdoor set
  • With learning, conflict clauses from previous
    branches on O(log n) key variables enable unit
    prop. in the long clauses

16
Order-Sensitivity of Backdoors
  • Corollary (follows from the proof of Theorem 1)
  • There are unsatisfiable SAT instances for which
    learning-sensitive backdoors w.r.t. one value
    ordering are exponentially smaller than the
    smallest learning-sensitive backdoors w.r.t.
    another value ordering.

17
Experimental evaluation
18
Learning-Sensitive Backdoors in Practice
  • Preliminary evaluation of smallest backdoor size:
    reporting the best backdoors found over 5000
    runs of Rsat (with clause learning) or
    Satz-rand (no learning)
  • up to 10x smaller than traditional on satisfiable
    instances
  • often smaller than traditional by a factor of 2x
    or less on unsatisfiable instances

19
How hard is it to find small backdoor sets with
learning?
Recently reported in a paper at CPAIOR-09
(backdoors in the context of optimization problems)
  • Considering only the size of the smallest
    backdoor does not provide much insight into this
    question
  • One way to assess this difficulty
  • How many backdoors are there of a given
    cardinality?
  • Experimental setup
  • For each possible backdoor size k, sample
    uniformly at random subsets of cardinality k from
    the (discrete) variables of the problem
  • For each subset, evaluate whether it is a
    backdoor or not (a sketch of this sampling loop
    follows this list)
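
For the SAT case, the sampling loop could be sketched as follows, reusing the
is_strong_backdoor check from the earlier sketch. This is our rough
reconstruction of the setup described above (the function name and parameters
are ours), not the authors' experimental harness.

    import random

    def backdoor_fraction(clauses, variables, k, samples=1000, seed=0):
        # Estimate how common size-k (traditional strong) backdoors are by
        # sampling subsets uniformly at random and testing each one.
        rng = random.Random(seed)
        hits = sum(
            is_strong_backdoor(clauses, rng.sample(variables, k))
            for _ in range(samples)
        )
        return hits / samples

    # e.g. print the estimated fraction for each candidate size k:
    # for k in range(1, 11):
    #     print(k, backdoor_fraction(clauses, variables, k))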

20
Backdoor Size Distribution
E.g., for a Mixed Integer Programming (MIP)
optimization instance
21
Added Power of Learning
E.g., for a Mixed Integer Programming (MIP)
optimization instance
22
Summary
  • Defined backdoors in the context of learning
    during search (in particular, clause learning for
    SAT solvers)
  • Proved that learning-sensitive backdoors can be
    smaller than traditional strong backdoors
  • Exponentially smaller on unsatisfiable instances
  • Somewhat smaller on satisfiable instances (open?)
  • Branching order affects backdoor size as well
  • Future work: stronger separation for satisfiable
    instances; detailed empirical study