What can we expect from Game Theory in Scheduling? - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
What can we expect from Game Theory in Scheduling?
  • Denis Trystram (Grenoble University and INRIA)
  • A collection of results from 3 papers with:
  • Pierre-François Dutot (Grenoble University)
  • Krzysztof Rzadca (Polish-Japanese computing school, Warsaw)
  • Fanny Pascual (LIP6, Paris)
  • Erik Saule (Grenoble University)
  • May 23, 2008

2
Goal
The evolution of high-performance execution platforms leads to physically or logically distributed entities (organizations) which have their own "local" rules. Each organization is composed of multiple users that compete for the resources, and each aims to optimize its own objectives.
Proposal: construct a framework for studying such problems.
Work partially supported by the Coregrid Network of Excellence of the EC.
3
Content
Basics in Scheduling (computational models)
Multi-users scheduling (1 resource)
Multi-users scheduling (m resources)
Multi-organizations scheduling (1 objective)
Multi-organizations with mixed objectives
4
Computational model
Informally, a set of users have some (parallel) applications to execute on a (parallel) machine. The machine may or may not belong to multiple organizations. The objectives of the users are not always the same.
5
Classical Scheduling
Informal definition: given a set of n (independent) jobs and m processors, determine an allocation and a start date for processing the tasks.

[Diagram: task i with release date ri, processing time pi and completion time Ci]

Objectives based on completion times: Cmax, ΣCi
6
Classical Scheduling (Cmax)
Complexity results: the central problem is NP-hard; it remains NP-hard even for independent tasks.
Algorithms:
List-scheduling [Graham 69]: 2-approximation
LPT (largest processing time first): 4/3-approximation (more precisely, 4/3 - 1/(3m))
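As an illustration (not part of the slides; `lpt_makespan` is a name of my own), LPT list scheduling fits in a few lines of Python: sort the jobs largest first and greedily give each one to the least-loaded machine.

```python
import heapq

def lpt_makespan(durations, m):
    """List-schedule jobs in LPT order (largest first) on m identical
    machines, greedily placing each job on the least-loaded machine.
    Returns the makespan Cmax."""
    loads = [0] * m                      # heap of machine finishing times
    for p in sorted(durations, reverse=True):
        t = heapq.heappop(loads)         # least-loaded machine
        heapq.heappush(loads, t + p)
    return max(loads)

# The 7 tasks of the example on slide 9, on m = 3 machines:
print(lpt_makespan([3, 4, 4, 5, 1, 3, 6], 3))  # -> 9
```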
7
Classical Scheduling (?Ci)
Algorithm: SPT (shortest processing time first), polynomial and optimal for any m.
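A minimal sketch of SPT (again a hypothetical helper, not from the slides): sorting jobs by increasing duration and assigning them round-robin is equivalent to always choosing the least-loaded machine, and minimizes ΣCi.

```python
def spt_sum_ci(durations, m):
    """SPT rule: sort jobs by increasing duration and assign them
    round-robin over the m machines (equivalently, always to the
    currently least-loaded machine). Returns the sum of completion
    times, which SPT minimizes."""
    loads = [0] * m
    total = 0
    for i, p in enumerate(sorted(durations)):
        k = i % m                 # round-robin machine choice
        loads[k] += p             # completion time of this job
        total += loads[k]
    return total

print(spt_sum_ci([1, 3, 6], 1))   # single machine: C = 1, 4, 10 -> 15
```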
8
Multi-users optimization
  • Let us start with a simple case: several users compete for resources belonging to the same organization.
  • System-centered problems (Cmax, load-balancing)
  • User-centered problems (minsum, max-stretch, flowtime)
  • Motivation: take the diversity of users' wishes/needs into account

10
A simple example
Blue (4 tasks of duration 3, 4, 4 and 5) has a program to compile (Cmax).
Red (3 tasks of duration 1, 3 and 6) is running experiments (ΣCi).
m = 3 machines.
Global LPT schedule: Cmax = 9, ΣCi = 6+8+9 = 23
SPT schedule for red: Cmax = 8, ΣCi = 1+3+11 = 15
11
Description of the problem
  • Instance: k users; user u submits n(u) tasks; the processing time of task i belonging to u is pi(u)
  • Completion time: Ci(u)
  • Each user can choose his/her objective among:
  • Cmax(u) = max(Ci(u)) or ΣCi(u), weighted or not
  • Multi-user scheduling problem:
  • MUSP(k1:ΣCi, k2:Cmax) where k = k1 + k2

12
Complexity
  • Agnetis et al. 2004, case m = 1:
  • MUSP(2:ΣCi) is NP-hard in the ordinary sense
  • MUSP(2:Cmax) and MUSP(1:ΣCi, 1:Cmax) are polynomial
  • Thus, on m machines, all variants of this problem are NP-hard
  • We are looking for (multi-objective) approximations

14
Baker et al. 2003 (m = 1)
Linear aggregation: optimize λ·Cmax + ΣCi
User 2: ΣCi
User 1: Cmax
1. Merge the tasks of user 1
2. Global SPT
Not truthful: Blue gains by declaring to be interested in ΣCi
15
Linear aggregation is unfair
Two users with ΣCi (both own three tasks: 1, 1, 1 and 2, 2, 2).

17
MUSP(k:Cmax)
  • Inapproximability:
  • no algorithm is better than a (1, 2, …, k)-approximation
  • Proof: consider the instance where each user has one unit task (pi(u) = 1) on one machine (m = 1).
  • Cmax*(u) = 1 and there is no other choice than serializing the users…
  • Thus, there exists a user u whose Cmax(u) = k

18
MUSP(k:Cmax)
  • Algorithm (multiCmax):
  • Given a ρ-approximation schedule σ(u) for each user: Cmax(u) ≤ ρ·Cmax*(u)
  • Sort the users by increasing values of Cmax(u)
  • Analysis:
  • multiCmax is a (ρ, 2ρ, …, kρ)-approximation.
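A minimal sketch of the multiCmax idea (helper name and dict representation are mine): serialize the users' private schedules by increasing makespan, so that the u-th served user finishes within u·ρ·Cmax*(u).

```python
def multi_cmax(user_makespans):
    """user_makespans: dict user -> makespan of that user's private
    rho-approximate schedule. Run the private schedules one after the
    other, shortest makespan first; return each user's finishing time."""
    finish, t = {}, 0
    for u in sorted(user_makespans, key=user_makespans.get):
        t += user_makespans[u]      # append user u's whole schedule
        finish[u] = t
    return finish

print(multi_cmax({"a": 2, "b": 1, "c": 4}))  # -> {'b': 1, 'a': 3, 'c': 7}
```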

19
MUSP(k:ΣCi)
  • Inapproximability: no algorithm is better than a ((k+1)/2, (k+2)/2, …, k)-approximation
  • Proof: consider the instance where each user has x tasks with pi(u) = 2^(i-1)
  • Optimal (private) schedule: ΣCi* = 2^(x+1) - (x+2)
  • SPT is Pareto optimal (3 users: blue, green and red)
  • For user u, ΣCi_SPT(u) = k(2^x - (x+1)) + (2^x - 1)·u
  • Ratio to the optimal: (k+u)/2 for large x
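The closed form ΣCi* = 2^(x+1) - (x+2) for one user's private SPT schedule can be checked numerically; a small sanity script of my own, not part of the slides:

```python
def private_sum_ci(x):
    """Sum of completion times of one user with tasks 2^0, ..., 2^(x-1)
    scheduled alone, in SPT order, on a single machine."""
    c = total = 0
    for i in range(x):
        c += 2 ** i          # completion of the i-th task: 2^(i+1) - 1
        total += c
    return total

# matches the closed form on the slide for small x
for x in range(1, 10):
    assert private_sum_ci(x) == 2 ** (x + 1) - (x + 2)
print(private_sum_ci(3))     # completions 1, 3, 7 -> 11
```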

22
MUSP(k:ΣCi)
  • Algorithm (single machine): Aggreg
  • Let σ(u) be the schedule for user u.
  • Construct a schedule by increasing order of Ci(σ(u))
  • Analysis: Aggreg is a (k, k, …, k)-approximation
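The merge step above can be sketched as follows (a hypothetical single-machine implementation, with names of my own; each user's list is assumed to already be in that user's private order):

```python
def aggreg(user_tasks):
    """user_tasks: dict user -> list of task durations, in the order of
    that user's private schedule. Merge all tasks into one single-machine
    schedule by increasing private completion time, and return each
    user's sum of actual completion times."""
    merged = []
    for u, tasks in user_tasks.items():
        c = 0
        for p in tasks:
            c += p                   # completion in u's private schedule
            merged.append((c, u, p))
    merged.sort()                    # increasing private Ci
    t, sums = 0, {u: 0 for u in user_tasks}
    for _, u, p in merged:
        t += p
        sums[u] += t                 # actual completion in merged schedule
    return sums

print(aggreg({"red": [1, 3], "blue": [2]}))  # -> {'red': 7, 'blue': 3}
```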


24
MUSP(k:ΣCi)
  • Algorithm (extension to m machines):
  • The previous property still holds on each machine (using SPT individually for each user)
  • Local SPT, then merge on each machine

25
MUSP(k:ΣCi)
  • Analysis: we obtain the same bound as before.

26
Mixed case: MUSP(k1:ΣCi, (k-k1):Cmax)
  • A similar analysis can be done; see the paper with Erik Saule for more details

27
Decentralized objective
  • In the previous analysis, the users had to choose among several objectives (expressed from the completion times).
  • The scheduling policy was centralized and global.
  • A natural question is: what happens with exotic objectives or with predefined schedules?

28
Complicating the model: Multi-organizations
29
Context: computational grids

[Figure: three organizations O1, O2, O3 with m1, m2 and m3 machines]

Collection of independent clusters managed locally by an "organization".
30
Preliminary: single user, multi-organization
Independent tasks are submitted locally by single users on "private" organizations
32
Multi-organization
  • Problem: each organization has its own objective. We are looking for a centralized mechanism that improves the global behaviour without worsening the local solutions.

33
Multi-organization with Cmax
  • Algorithm: iterative load-balancing
  • The organizations are sorted by increasing load
  • The load of the most loaded one is balanced using a simple list algorithm

41
Multi-organization with Cmax
  • Algorithm: iterative load-balancing
  • The organizations are sorted by increasing load
  • The load of the most loaded one is balanced using a simple list algorithm
  • Analysis: bound 2 - 1/m for the global Cmax
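One round of this idea can be sketched as follows. This is a coarse sketch with names of my own: the other organizations' local work is approximated by an even per-machine load, which the slides do not specify.

```python
import heapq

def rebalance_most_loaded(org_jobs, machines_per_org):
    """Pick the most loaded organization and re-place its jobs with a
    simple list algorithm (largest first) over the machines of ALL
    organizations. Returns the resulting maximum machine load."""
    loads = {o: sum(jobs) for o, jobs in org_jobs.items()}
    worst = max(loads, key=loads.get)
    heap = []
    for o, m in machines_per_org.items():
        # coarse model: spread o's local work evenly over its machines
        base = 0.0 if o == worst else loads[o] / m
        for _ in range(m):
            heapq.heappush(heap, base)
    for p in sorted(org_jobs[worst], reverse=True):
        t = heapq.heappop(heap)      # least-loaded machine anywhere
        heapq.heappush(heap, t + p)
    return max(heap)

print(rebalance_most_loaded({"O1": [4, 3, 3], "O2": []},
                            {"O1": 2, "O2": 2}))  # -> 4.0
```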

42
Extension to parallel tasks: single resource (cluster)
Independent applications are submitted locally on a cluster. They are represented by a precedence task graph. An application is viewed as a usual sequential task or as a parallel rigid job (see Feitelson and Rudolph for more details and a classification).
43
[Figure: local queue of submitted jobs J1, J2, J3, … feeding a cluster]
48
[Figure: a rigid job as a rectangle - computational area plus overhead]
Rigid jobs: the number of processors is fixed.
49
Runtime: pi
Number of required processors: qi
50
Runtime: pi
Number of required processors: qi
Useful definitions: high jobs (those which require more than m/2 processors) and low jobs (the others).
51
Scheduling rigid jobs: packing algorithms (batch)
Scheduling independent rigid jobs may be solved as a 2D packing problem (strip packing) on a strip of width m.
52
Multi-organizations
n organizations.

[Figure: organization k with its local queue J1, J2, J3, … and a cluster of m(k) processors]
53
Users submit their jobs locally

[Figure: organizations O1, O2, O3 with their local queues]
54
The organizations can cooperate

[Figure: organizations O1, O2, O3 exchanging jobs]
55
Constraints

[Figure: local schedules with Cmaxloc(Oi) vs. cooperative schedules with Cmax(Oi), for O1, O2, O3]

Cmax(Ok): maximum finishing time of the jobs belonging to Ok. Each organization aims at minimizing its own makespan.
56
Problem statement
MOSP: minimization of the "global" makespan under the constraint that no local makespan is increased.
Consequence: taking the restricted instance n = 1 (one organization) and m = 2 with sequential jobs, the problem is the classical 2-machine problem, which is NP-hard. Thus, MOSP is NP-hard.
57
Multi-organizations
Motivation: a non-cooperative solution is that all the organizations compute only their local jobs ("my jobs first" policy). However, such a solution is arbitrarily far from the global optimum (the ratio grows to infinity with the number of organizations n). See the next example, with n = 3 and jobs of unit length.

[Figure: schedules of O1, O2, O3 with cooperation (optimal) and without cooperation]
58
More sophisticated algorithms than simple load balancing are possible: matching certain types of jobs may lead to bilaterally profitable solutions. However, it is a hard combinatorial problem.

[Figure: schedules of O1 and O2 with and without cooperation]
59
Preliminary results
  • List-scheduling: (2 - 1/m)-approximation ratio for the variant with resource constraints [Garey-Graham 1975].
  • HF (Highest First) schedules: sort the jobs by decreasing number of required processors. Same theoretical guarantee, but performs better in practice.
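A serial sketch of an HF schedule for rigid jobs (helper name is mine; a full list algorithm would also start later, narrower jobs whenever they fit, which this simplified version omits):

```python
def highest_first(jobs, m):
    """jobs: list of (p, q) = (runtime, required processors).
    Sort by decreasing q and start each job, in that order, as soon as
    q processors are simultaneously free. Returns {job index: start}."""
    order = sorted(range(len(jobs)), key=lambda i: -jobs[i][1])
    running = []               # (finish time, processors) of running jobs
    t, free, start = 0, m, {}
    for i in order:
        p, q = jobs[i]
        while free < q:        # advance time to the next job completion
            running.sort()
            ft, fq = running.pop(0)
            t, free = max(t, ft), free + fq
        start[i] = t
        running.append((t + p, q))
        free -= q
    return start

print(highest_first([(2, 3), (1, 2), (1, 2)], 4))  # -> {0: 0, 1: 2, 2: 2}
```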

60
Analysis of HF (single cluster)
Proposition. All HF schedules have the same structure, which consists of two consecutive zones of high (I) and low (II) utilization.
Proof (2 steps): by contradiction, no high job appears after zone (II) starts.

[Figure: high utilization zone (I), where more than 50% of the processors are busy, followed by low utilization zone (II)]
61
If we cannot worsen any local makespan, the global optimum cannot be reached.

[Figure: local schedules of O1 (jobs of length 1) and O2 (jobs of length 2) vs. the globally optimal schedule]
63
If we cannot worsen any local makespan, the global optimum cannot be reached.
  • Lower bound on the approximation ratio: greater than 3/2.

[Figure: local schedules, the globally optimal schedule, and the best solution that does not increase Cmax(O1)]
64
Using Game Theory?
We propose here a standard approach using combinatorial optimization. Cooperative game theory may also be useful, but it assumes that players (organizations) can communicate and form coalitions. The members of the coalitions split the sum of their payoffs after the end of the game. We assume here a centralized mechanism and no communication between organizations.
65
Multi-Organization Load-Balancing
1. Each cluster runs its local jobs with Highest First; LB = max(pmax, W/(n·m))
2. Unschedule all jobs that finish after 3·LB
3. Divide them into 2 sets (Ljobs and Hjobs)
4. Sort each set according to the Highest First order
5. Schedule the jobs of Hjobs backwards from 3·LB on all possible clusters
6. Then, fill the gaps with Ljobs in a greedy manner
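Steps 1-3 above rely on the lower bound LB and on the high/low split; a small sketch (function names are mine, not the paper's, and W is the total work Σ pi·qi):

```python
def lower_bound(jobs, n, m):
    """jobs: list of (p, q). LB = max(pmax, W/(n*m)): no schedule of
    total work W on n clusters of m processors can finish before either
    term, so 3*LB is a safe deadline for the rebuilt schedule."""
    W = sum(p * q for p, q in jobs)
    pmax = max(p for p, q in jobs)
    return max(pmax, W / (n * m))

def split_high_low(jobs, m):
    """Hjobs need more than m/2 processors, Ljobs at most m/2."""
    hjobs = [j for j in jobs if j[1] > m / 2]
    ljobs = [j for j in jobs if j[1] <= m / 2]
    return hjobs, ljobs

jobs = [(2, 3), (1, 1)]
print(lower_bound(jobs, 1, 4))        # max(2, 7/4) -> 2
print(split_high_low(jobs, 4))        # -> ([(2, 3)], [(1, 1)])
```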
66
[Animation: consider a cluster whose last job finishes before 3·LB; Hjobs are scheduled backwards from 3·LB, then the gaps are filled with Ljobs]
72
Feasibility (insight)

[Figure: zones (I) and (II) of the HF structure relative to the 3·LB deadline]
73
Sketch of analysis
Proof by contradiction: let us assume that the schedule is not feasible, and call x the first job that does not fit in a cluster.
Case 1: x is a low job. Global surface argument.
Case 2: x is a high job. Much more complicated; see the paper for the technical details.
74
Guarantee
Proposition:
1. The previous algorithm is a 3-approximation (by construction)
2. The bound is (asymptotically) tight
Consider the following instance: m clusters, each with 2m-1 processors. The first organization has m short jobs, each requiring the full machine (duration ε), plus m jobs of unit length requiring m processors. All the m-1 others own m sequential jobs of unit length.
75
Local HF schedules
76
Optimal (global) schedule: Cmax = 1 + ε
77
Multi-organization load-balancing: Cmax close to 3
78
Improvement
We add an extra load-balancing procedure.

[Figure: schedules of O1-O5 after each phase - local schedules, multi-org LB, compact, load balance]
79
Some experiments
80
Link with Game Theory?
We proposed an approach based on combinatorial optimization. Can we use game theory?
Players: organizations or users.
Objectives: makespan, minsum, mixed.
Cooperative game theory assumes that players communicate and form coalitions.
Non-cooperative game theory: the key concept is the Nash equilibrium, the situation where no player has an interest in changing its strategy.
Price of stability: best Nash equilibrium over the optimal solution.
Here, the strategy is to collaborate or not, and the objective is to globally minimize the makespan.
81
Conclusion
  • A single unified approach based on multi-objective optimization for taking the users' needs or wishes into account.
  • MOSP: good guarantee for Cmax; ΣCi and the mixed case remain to be studied
  • MUSP: "bad" guarantees, but we cannot obtain better with low-cost algorithms

82
Thanks for attention. Do you have any questions?