Title: Algorithmic and Domain Centralization in Distributed Constraint Optimization Problems
1. Algorithmic and Domain Centralization in
Distributed Constraint Optimization Problems
- John P. Davin
- Carnegie Mellon University
- June 27, 2005
- Committee
- Manuela Veloso, Co-Chair
- Pragnesh Jay Modi, Co-Chair
- Scott Fahlman
- Stephen F. Smith
- Carnegie Mellon University
- School of Computer Science
- In partial fulfillment of the 5th year master's degree.
- Full paper: http://www.cs.cmu.edu/jdavin/thesis/
2. DCOP
- DCOP: Distributed Constraint Optimization Problem.
- Provides a model for many multi-agent optimization problems (scheduling, sensor nets, military planning).
- More expressive than Distributed Constraint Satisfaction.
- Computationally challenging (NP-complete).
3. Centralization in DCOPs
- Centralization: aggregating information about the problem in a single agent.
- → resulting in a larger local search space.
- We define two types of centralization:
- Algorithmic: a DCOP algorithm actively centralizes parts of the problem through communication.
- → allows a centralized search procedure.
- Domain: the DCOP definition inherently has parts of the problem already centralized.
- → e.g., multiple variables per agent (as in scheduling).
4. Motivation
- Current state of DCOP research:
- Two existing DCOP algorithms, Adopt & OptAPO, exhibit differing levels of centralization.
- DCOP has been applied to several domains (e.g., meeting scheduling).
- Only 1-variable-per-agent problems are used in existing work.
- Still needed:
- It is unclear exactly how Adopt & OptAPO are affected by centralization.
- Domains with multiple variables per agent have not been explored.
5. Thesis Statement
- Questions:
- How does algorithmic centralization affect performance?
- How can we take advantage of domain centralization?
6. Spectrum of centralization
7. Outline
- Introduction
- Part 1: Algorithmic centralization in DCOPs
- Part 2: Domain centralization in DCOPs
- Conclusions
8. Part 1: Algorithmic Centralization in DCOP Algorithms
9. Evaluation Metrics
- How does algorithmic centralization affect performance?
- How do we measure performance?
10. Evaluation Metrics: Cycles
- Cycle: one unit of algorithm progress in which all agents process incoming messages, perform computation, and send outgoing messages.
- Independent of machine speed, network conditions, etc.
- Used in prior work: Yokoo et al., Mailler et al.
11. Evaluation Metrics: Constraint Checks
- Constraint check: the act of evaluating a constraint between N variables.
- Provides a measure of computation.
- Concurrent constraint checks (CCC): the maximum constraint checks performed by any single agent during a cycle.
12. Problems with previous measures
- Cycles do not measure computational time (they don't reflect the length of a cycle).
- Constraint checks alone do not measure communication overhead.
- → We need a metric that combines both of the previously used metrics.
13. CBR: Cycle-Based Runtime
- The length of a cycle is determined by communication and computation.
- Define ccc(m) as the concurrent constraint checks during cycle m (the maximum over agents, per the CCC definition).
- CBR = Sum over all cycles m of (L + t · ccc(m))
- (t = time for one constraint check.)
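The CBR definition above can be sketched in code. This is a hypothetical helper, not from the thesis; following the CCC definition, ccc(m) is taken as the maximum per-agent constraint checks in cycle m:

```python
def cbr(per_agent_checks, L, t=1):
    """Cycle-Based Runtime: sum over cycles of communication latency
    plus the concurrent (max per-agent) constraint checks.

    per_agent_checks: list of lists; per_agent_checks[m][i] is the
    number of constraint checks agent i performed during cycle m.
    L: communication latency per cycle, expressed in units of t.
    t: time for one constraint check (t = 1 by convention).
    """
    return sum(L + t * max(checks) for checks in per_agent_checks)

# Two cycles; in each, the busiest agent determines the cycle length.
runtime = cbr([[5, 2, 1], [3, 7, 0]], L=10)
# cycle 1: 10 + 5 = 15; cycle 2: 10 + 7 = 17 → CBR = 32
```

With this formulation, a large L rewards algorithms that finish in few cycles, while a small L rewards algorithms that do little computation per cycle.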
14. CBR parameters
- L: communication latency (time to communicate in each cycle).
- → can be parameterized based on the system environment, e.g., L = 1, 10, 100, 1000.
- We assume t = 1, since constraint checks are usually faster than communication time.
- → L is defined in terms of t (L = 10 indicates communication is 10 times slower than a constraint check).
15. Comparing Adopt and OptAPO
- Algorithmic centralization:
- Adopt is non-centralized.
- OptAPO is partially centralized.
- Prior work (Mailler & Lesser) has shown that OptAPO completes in fewer cycles than Adopt for graph coloring problems at density 2n and 3n.
- But how do they compare when we use CBR to measure both communication time and computation?
16. DCOP Algorithms: Adopt
- ADOPT (Modi et al.)
- Variables are ordered in a priority tree (or chain).
- Agents pass their current value down the tree to neighboring agents using VALUE messages.
- Agents send COST messages up to their parents. COST messages inform the parent of the lower and upper bounds of the subtree.
- These costs are dependent on the values of the agent's ancestor variables.
17. DCOP Algorithms: OptAPO
OptAPO: Optimal Asynchronous Partial Overlay (Mailler and Lesser)
- Cooperative mediation: an agent is dynamically appointed as mediator, and collects constraints for a subset of the problem.
- OptAPO agents attempt to minimize the number of constraints that are centralized.
- The mediator solves its subproblem using Branch & Bound centralized search (Freuder and Wallace).
(Diagram: a mediator agent collects values & constraints from variables x1, x2, x3, x5.)
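A minimal sketch of the kind of Branch & Bound search the mediator runs over its centralized subproblem. The function names and problem encoding are assumptions, not the thesis implementation; graph coloring is treated as optimization with cost 1 per violated (monochromatic) edge:

```python
def branch_and_bound(variables, domains, cost, order=None):
    """Minimal branch & bound over a list of variables.

    cost(partial) returns the total cost of the constraints whose
    endpoints are all assigned in the dict `partial`; costs are
    assumed non-negative, so a partial cost is a valid lower bound.
    """
    order = order or list(variables)
    best = {"cost": float("inf"), "assign": None}

    def search(i, partial):
        c = cost(partial)
        if c >= best["cost"]:        # prune: bound already exceeded
            return
        if i == len(order):
            best["cost"], best["assign"] = c, dict(partial)
            return
        var = order[i]
        for val in domains[var]:
            partial[var] = val
            search(i + 1, partial)
            del partial[var]

    search(0, {})
    return best["assign"], best["cost"]

# 3-coloring of a triangle: cost 1 per monochromatic edge.
edges = [("x1", "x2"), ("x2", "x3"), ("x1", "x3")]
def coloring_cost(partial):
    return sum(1 for a, b in edges
               if a in partial and b in partial and partial[a] == partial[b])

assign, c = branch_and_bound(["x1", "x2", "x3"],
                             {v: [0, 1, 2] for v in ["x1", "x2", "x3"]},
                             coloring_cost)
# a triangle is 3-colorable, so the optimal cost is 0
```

The pruning step is what the later intra-agent variable ordering heuristics try to strengthen: a good ordering drives the partial cost up early, cutting the search space.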
18. Results (Davin & Modi, '05)
- Tested on graph coloring problems, D = 3 (3-coloring).
- Variables: 8, 12, 16, 20, with link density 2n or 3n.
- 50 randomly generated problems for each size.
(Charts: CCC and Cycles.)
→ OptAPO takes fewer cycles, but more constraint checks.
19. How do Adopt and OptAPO compare using CBR?
Density 2
20. How do Adopt and OptAPO compare using CBR?
Density 3
→ For L values < 1000, Adopt has a lower CBR than OptAPO.
→ OptAPO's high number of constraint checks outweighs its lower number of cycles.
21. How much centralization occurs in OptAPO?
- OptAPO sometimes centralizes all of the problem structure.
22. How does the distribution of computation differ?
- We measure the distribution of computation during a cycle as the ratio of the maximum computing agent's constraint checks to the total computation during the cycle.
- A value of 1.0 indicates one agent did all the computation.
- Lower values indicate more evenly distributed load.
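The load-distribution ratio above is simple enough to state directly in code (a hypothetical helper for illustration):

```python
def load_ratio(checks_per_agent):
    """Distribution of computation in one cycle: the maximum agent's
    constraint checks over the total.  1.0 = fully centralized;
    1/n = perfectly balanced across n agents."""
    total = sum(checks_per_agent)
    return max(checks_per_agent) / total if total else 0.0

# One mediator doing everything vs. an evenly balanced cycle:
centralized = load_ratio([12, 0, 0, 0])   # → 1.0
balanced    = load_ratio([3, 3, 3, 3])    # → 0.25
```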
23. How does the distribution of computation differ?
- Load was measured during the execution of one representative graph coloring problem with 8 variables, density 2.
- OptAPO has varying load, because one agent (the mediator) does all of the search within each cycle.
- Adopt's load is evenly balanced.
24. Communication Tradeoffs of Centralization
- How does centralization affect performance under a range of communication latencies?
- → Adopt has the lowest CBR at L = 1, 10, 100, and crosses over between L = 100 and 1000.
- OptAPO outperforms Branch & Bound at density 2 but not at density 3.
25. Part 2: Domain Centralization in DCOPs
26. How can we take advantage of domain centralization?
- Adopt treats variables within an agent as independent pseudo-agents.
- It does not take advantage of partially centralized domains.
(Diagram: variables x1, x2, x3 distributed across agent1 and agent2, each variable treated as an independent pseudo-agent.)
27. How can we take advantage of domain centralization?
- Instead, we could use centralized search to optimize the agent's local variables. We call this AdoptMVA, for Multiple Variables per Agent.
- Potentially more efficient.
- Adopt's variable ordering heuristics do not apply to agent orderings:
- We need agent ordering heuristics for AdoptMVA.
- We also need intra-agent heuristics for the local search.
(Diagram: variables x1, x2, x3 grouped within agent1 and agent2.)
28. Extending Adopt to AdoptMVA
- In Adopt, a context is a partial solution of the form {(xj, dj), (xk, dk), ...}.
- We define a context s ∈ S, where S is the set of all possible assignments to an agent's local variables.
- We must modify Adopt's cost function to include the cost of constraints between variables in s:
  cost(s) = intra-agent cost + inter-agent cost
- Use Branch & Bound search to find the s ∈ S that minimizes the local cost.
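The modified cost function can be sketched as follows. All names are hypothetical, and the exhaustive loop stands in for the Branch & Bound over the agent's local assignments; binary cost functions are assumed:

```python
from itertools import product

def local_cost(s, context, intra, inter):
    """Cost of a local assignment s under the current context.

    s:       dict of this agent's local variables → values.
    context: dict of external (ancestor) variables → values.
    intra:   list of ((u, v), cost_fn) with both u, v local.
    inter:   list of ((u, v), cost_fn) with u local, v external.
    """
    c = sum(f(s[u], s[v]) for (u, v), f in intra)          # intra-agent cost
    c += sum(f(s[u], context[v]) for (u, v), f in inter    # inter-agent cost
             if v in context)
    return c

def best_local(local_vars, domains, context, intra, inter):
    """Exhaustive stand-in for the Branch & Bound over the agent's
    local variables: return the assignment s minimizing local cost."""
    best_s, best_c = None, float("inf")
    for values in product(*(domains[v] for v in local_vars)):
        s = dict(zip(local_vars, values))
        c = local_cost(s, context, intra, inter)
        if c < best_c:
            best_s, best_c = s, c
    return best_s, best_c
```

For example, with local variables x2, x3, an external assignment x1 = 0, and not-equal constraints x2 ≠ x3 and x2 ≠ x1, the minimizing local assignment is x2 = 1, x3 = 0 with cost 0.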
29. Results
- Setup: randomly generated meeting scheduling problems.
- Based on an 8-hour day (D = 8).
- The number of attendees (A) per meeting is randomly chosen from a geometric progression.
- All meeting scheduling problems generated were fully schedulable.
- We compared using a lexicographic agent ordering for both algorithms.
30. How does AdoptMVA perform vs. Adopt?
High density meeting scheduling (4 meetings per agent)
(Charts: CBR and Cycles; 20 problems per datapoint.)
31. How does AdoptMVA perform vs. Adopt?
Graph coloring with 4 variables per agent, link density 2
(Charts: CBR and Cycles.)
→ AdoptMVA uses fewer cycles than Adopt, and has a lower CBR at L = 1000.
32. Adopt and AdoptMVA: Variable ordering
(Diagrams: original problem, Adopt hierarchy, AdoptMVA hierarchy.)
- AdoptMVA's order has a reduced granularity compared to Adopt's: Adopt can interleave the variables of an agent, while AdoptMVA can only order agents.
33. Inter-agent ordering heuristics
- We tested several heuristics for ordering the agents:
- Lexicographic.
- Inter-agent links: order by # of links to other agents.
- AdoptToMVA-Max: compute the Brelaz ordering over variables, and convert it to an agent ordering using the maximum-priority variable within each agent.
- AdoptToMVA-Min: like AdoptToMVA-Max, but using the minimum-priority variables.
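The "inter-agent links" heuristic above can be sketched as follows (the function name, tie-breaking rule, and problem encoding are assumptions for illustration):

```python
def order_by_inter_links(agents, var_owner, constraints):
    """'Inter-agent links' heuristic: order agents by the number of
    constraints they share with other agents (descending), breaking
    ties lexicographically.

    var_owner:   dict variable → owning agent.
    constraints: list of (u, v) variable pairs.
    """
    links = {a: 0 for a in agents}
    for u, v in constraints:
        if var_owner[u] != var_owner[v]:      # count inter-agent links only
            links[var_owner[u]] += 1
            links[var_owner[v]] += 1
    return sorted(agents, key=lambda a: (-links[a], a))

# a1 owns x1, x4; a2 owns x2; a3 owns x3.  (x1, x4) is intra-agent,
# so a2 (2 inter-agent links) comes first.
order = order_by_inter_links(
    ["a1", "a2", "a3"],
    {"x1": "a1", "x2": "a2", "x3": "a3", "x4": "a1"},
    [("x1", "x2"), ("x2", "x3"), ("x1", "x4")])
# → ["a2", "a1", "a3"]
```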
34. Comparison of agent orderings
- Intra-agent ordering: MVA-HigherVars.
(Charts: low density meeting scheduling, high density meeting scheduling, and graph coloring with 4 variables per agent.)
35. Comparison of agent orderings
- Intra-agent ordering: MVA-HigherVars.
(Charts: low density and high density meeting scheduling.)
- Ordering makes an order-of-magnitude difference in some cases.
- AdoptToMVA-Min was the best in 8 out of 11 cases, but with high variance.
36. Intra-agent Branch & Bound Heuristics
- Best-first value ordering heuristic: we put the best domain value first in the value ordering.
- Variable ordering heuristics:
- Lexicographic.
- Random.
- Brelaz (graph coloring only): order by number of links to other variables within the agent.
- MVA-AllVars: order by # of links to external variables.
- MVA-LowerVars: like MVA-AllVars, but only consider lower-priority variables.
- MVA-HigherVars: like MVA-AllVars, but only consider higher-priority variables.
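The MVA-* family of intra-agent orderings can be sketched in one helper (names and encoding are hypothetical; lower priority numbers are taken to mean higher priority in the global ordering):

```python
def mva_order(local_vars, priority, external_links, mode="all"):
    """Sketch of the MVA-* intra-agent orderings: sort an agent's
    local variables by their number of links to external variables,
    descending, breaking ties lexicographically.

    priority[x]:       position of variable x in the global ordering
                       (lower number = higher priority).
    external_links[v]: list of external variables constrained with v.
    mode: "all" (MVA-AllVars), "higher" (MVA-HigherVars: count only
          higher-priority externals), "lower" (MVA-LowerVars).
    """
    def count(v):
        if mode == "all":
            return len(external_links[v])
        if mode == "higher":
            return sum(1 for x in external_links[v]
                       if priority[x] < priority[v])
        return sum(1 for x in external_links[v]
                   if priority[x] > priority[v])
    return sorted(local_vars, key=lambda v: (-count(v), v))

pr = {"x1": 1, "x2": 2, "x3": 3, "x4": 4}
el = {"x3": ["x1"], "x4": ["x1", "x2"]}
mva_order(["x3", "x4"], pr, el, "all")    # x4 has 2 external links → first
mva_order(["x3", "x4"], pr, el, "lower")  # both 0 → lexicographic
```

The intuition, matching the Branch & Bound pruning step, is that assigning highly-constrained variables first raises the lower bound sooner.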
37. Comparison of intra-agent search heuristics
- Goal: reduce constraint checks used by Branch & Bound.
- Metric: average CCC per cycle (Total CCC / Total Cycles).
(Charts: high density meeting scheduling, Cycles and Avg CCC.)
- Nearly all differences are statistically significant, excepting MVA-AllVars vs. MVA-HigherVars.
- MVA-HigherVars is the most computationally efficient heuristic.
- → Low density meeting scheduling produced similar results.
38. Comparison of intra-agent search heuristics
(Charts: graph coloring, Avg CCC and Cycles.)
→ Confirms Brelaz is the most efficient heuristic for graph coloring.
39. How does meeting scheduling scale as agents and meeting size are increased?
(Charts: original data and with outliers removed; A = average # of attendees (meeting size).)
- The original data had several outliers (two standard deviations away from the mean), so they were removed for easier interpretation.
- → Meeting size has a large effect on solution difficulty.
40. Thesis contributions
- Formalization of algorithmic vs. domain centralization.
- Empirical comparison of Adopt & OptAPO showing new results.
- CBR: a performance metric which accounts for both communication & computation in DCOP algorithms.
- AdoptMVA: a DCOP algorithm which takes advantage of domain centralization. We also contribute an efficient intra-agent search heuristic for meeting scheduling.
- Empirical analysis of the meeting scheduling problem.
- "Impact of Problem Centralization in DCOP Algorithms", AAMAS '05, Davin & Modi.
41. Future Work
- Improve AdoptMVA:
- Agent ordering heuristics: is there a heuristic which will work better than the ones tested?
- Intra-agent search heuristics: develop a heuristic that is both informed and randomly varied.
- Test DCOP algorithms in a fully distributed testbed.
42. The End
My future plans: AAMAS in July and MSN Search (Microsoft) in the fall. Contact: jdavin_at_cmu.edu