# Introduction to Optimization with Integer and Linear Programming


1
Introduction to Optimization with Integer and Linear Programming
• Lanbo Zheng
• 13 October 2010
• Lanbo.Zheng@newcastle.edu.au

2
An example
Property Inspection
12pm-1pm
12pm-12:45pm
12pm-12:30pm
3
An example: CP formulation 1
4
An example: ILP formulation 1
5
Comparison of CP/IP
• Branch and Prune
• Prune: eliminate infeasible configurations
• Branch: decompose into subproblems
• Prune
• Carefully examine constraints to reduce the possible variable values
• Branch
• Use heuristics based on feasibility information
• Main focus: constraints and feasibility
• Branch and Bound
• Bound: eliminate suboptimal solutions
• Branch: decompose into subproblems
• Bound
• Use a (linear) relaxation of the problem (+ cuts)
• Branch
• Use information from the relaxation
• Main focus: objective function and optimality
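The bound/branch loop above can be sketched on a toy 0/1 knapsack ILP (all data illustrative, not from the slides): the fractional (LP) relaxation supplies the bound, and branching fixes one variable at a time.

```python
def branch_and_bound(values, weights, capacity):
    """Branch and bound for a 0/1 knapsack ILP: maximize sum(values[i]*x[i])
    s.t. sum(weights[i]*x[i]) <= capacity, x binary.
    Items are assumed pre-sorted by value/weight ratio (descending)."""
    n = len(values)
    best = [0]

    def lp_bound(i, value, room):
        # Bound: fractional (LP) relaxation of the remaining items.
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def branch(i, value, room):
        best[0] = max(best[0], value)        # every partial selection is feasible
        if i == n:
            return
        if lp_bound(i, value, room) <= best[0]:
            return                           # bound: prune this subproblem
        if weights[i] <= room:               # branch: x_i = 1
            branch(i + 1, value + values[i], room - weights[i])
        branch(i + 1, value, room)           # branch: x_i = 0

    branch(0, 0, capacity)
    return best[0]
```

For the classic instance values (60, 100, 120), weights (10, 20, 30), capacity 50, this returns 220 (take the last two items).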

6
Comparing IP and CP
• Complementary technologies
• Integer programming
• Objective function, relaxations
• Constraint programming
• Feasibility, domain reductions
• Might need to experiment with both
• CP is particularly useful when the IP formulation is hard or the relaxation does not give much information

7
Combining Methods
• Local and Global Search
• Use CP/IP for very large neighborhood search (take a solution, remove a large subset, find the optimal completion)
• Combining CP and IP
• Use LP as a constraint handler
• Use CP as the subproblem solver in branch and price

8
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

9
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

10
General mathematical programming
• Input
• An objective function f : R^n → R
• A set of constraint functions g_i : R^n → R
• A set of constraint values b_i
• Goal
• Find x in R^n which
• maximizes f(x)
• satisfies g_i(x) ≤ b_i

11
Linear programming
• Input
• A linear objective function f : R^n → R
• A set of linear constraint functions g_i : R^n → R
• A set of constraint values b_i
• Goal
• Find x in R^n which
• maximizes f(x)
• satisfies g_i(x) ≤ b_i

12
Integer linear programming
• Input
• A linear objective function f : Z^n → Z
• A set of linear constraint functions g_i : Z^n → Z
• A set of constraint values b_i
• Goal
• Find x in Z^n which
• maximizes f(x)
• satisfies g_i(x) ≤ b_i

13
Linear programming
14
The linear programming problem
• A brief history of linear programming
• The roots of linear programming can be traced as far back as 1826, to the work of Fourier on linear inequalities.
• Leonid Kantorovich, a Russian mathematician, developed linear programming problems in 1939.
• George B. Dantzig published the simplex method in 1947.
• John von Neumann developed the theory of duality in the same year.
• The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough came in 1984, when Narendra Karmarkar introduced a new interior-point method for solving linear programming problems.
• In 1975, Kantorovich and Koopmans were awarded the Nobel Prize in economic science.

15
Optimal use of scarce resources: the foundation of the economic interpretation of LP
16
Algorithms for linear programming
• (Dantzig 1951) Simplex method
• Very efficient in practice
• Exponential time in the worst case
• (Khachiyan 1979) Ellipsoid method
• Not efficient in practice
• Polynomial time in the worst case

17
An example
• The production facility management
• Is capable of producing products 1, 2, …, n
• The products are constructed from raw materials 1, 2, …, m
• Each product j can be sold at a prevailing market price d_j per unit
• Each raw material i has a known unit value ρ_i
• Each product j requires an amount a_ij of raw material i per unit produced
• For each raw material i, the company has b_i in stock

18
Two models
• From the production manager's point of view
• Objective: maximize profit from the raw material on hand
• Variables: the amount x_j of product j to be produced
• Constraints: each production quantity must be nonnegative; the amount of raw materials consumed must not exceed the amount in stock

19
Two models
• From the production manager's point of view

Resource allocation problem
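The resource allocation model can be made concrete on a toy instance (two products, two raw materials; all numbers here are illustrative assumptions, not from the slides). Since a bounded LP attains its optimum at a vertex, a 2-variable sketch can simply enumerate vertices; real LPs are of course solved with simplex or interior-point codes.

```python
from itertools import combinations

def solve_production_lp(c, A, b):
    """Solve a 2-variable LP  max c.x  s.t. Ax <= b, x >= 0  by enumerating
    vertices (each vertex is the intersection of two active constraints,
    solved with Cramer's rule). Toy-sized sketch only."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]  # append x >= 0
    rhs = list(b) + [0.0, 0.0]
    eps = 1e-9
    best = (float("-inf"), None)
    for i, j in combinations(range(len(rows)), 2):
        (a1, a2), (a3, a4) = rows[i], rows[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < eps:
            continue                              # parallel: no vertex here
        x1 = (rhs[i] * a4 - a2 * rhs[j]) / det    # Cramer's rule
        x2 = (a1 * rhs[j] - rhs[i] * a3) / det
        if all(r[0] * x1 + r[1] * x2 <= v + eps for r, v in zip(rows, rhs)):
            best = max(best, (c[0] * x1 + c[1] * x2, (x1, x2)))
    return best
```

For illustrative data c = [5, 4], A = [[6, 4], [1, 2]], b = [24, 6], this returns a profit of 21 with the production plan (3, 1.5).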
20
Two models
• From the comptroller's point of view
• Objective: minimize the lost opportunity cost
• Variables: the internal value w_i of each raw material i on hand
• Constraints: the internal value should be no less than the prevailing market value, and so should the corresponding product value

21
Two models
• From the comptroller's point of view

22
The linear programming problem
• Decision variables: whose values are to be decided in an optimal fashion (x_j, j = 1, 2, …, n)
• Objective function: maximize/minimize some linear function of the decision variables (ζ = c_1 x_1 + c_2 x_2 + … + c_n x_n)
• Constraints: either an equality or an inequality associated with some linear combination of the decision variables (a_1 x_1 + a_2 x_2 + … + a_n x_n ≤ b_1)

23
The linear programming problem
• Converting between different formats of constraints
• Inequalities ↔ equations

24
The linear programming problem
• The standard form of an LP

25
The linear programming problem
• Solution
• Feasible
• Infeasible
• Unbounded

26
The simplex method: an example
27
The simplex method: an example
• Rewrite with slack variables
• The layout is called a dictionary.
• Setting x1, x2, x3 to 0, we can read off the values for the other variables:
• w1 = 7, w2 = 3, etc. This specific solution is called a dictionary solution.
• Dependent variables, on the left, are called basic variables.
• Independent variables, on the right, are called nonbasic variables.

28
The simplex method: an example
• Feasible dictionary solution
• All the variables in the current dictionary solution are nonnegative
• Such a solution is called feasible.
• The initial dictionary solution need not be feasible.

29
The simplex method: an example
• Simplex Method: First Iteration
• Look for a variable with a positive coefficient in the objective function whose increase will most increase the objective (x2 in this example)
• Look at the basic variables to see how much x2 can increase such that they all remain nonnegative
• Choose the one which goes to 0 first as x2 increases (w4)
• x2 must become basic and w4 must become nonbasic
• This kind of iteration is called a pivot

30
The simplex method: an example
• After the pivot, the dictionary looks like

Now we look for the next entering/leaving pair
31
The simplex method: an example
• After this pivot, the dictionary is

It's optimal!
32
The simplex method: a general representation
• Consider a general LP problem presented in standard form
• We treat the basic variables and the nonbasic variables in the same way
• The starting dictionary is

33
The simplex method: a general representation
• For the set of indices S = {1, 2, …, n+m},
• let B denote the collection of indices corresponding to the basic variables, and let N denote the collection of indices corresponding to the nonbasic variables; initially B = {n+1, …, n+m} and N = {1, …, n}
• After a set of pivots, we may reach

34
The simplex method: a general representation
• Entering variable: the variable that goes from nonbasic to basic. For now it suffices to choose the one with the largest coefficient.
• Leaving variable: the variable that goes from basic to nonbasic. If xk is the entering variable, the leaving variable is the basic variable that first reaches zero as xk increases as much as it can.
• Pivot rule: the rule for picking the entering and leaving variables
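The whole loop (largest-coefficient entering rule, ratio test for the leaving variable, pivot) can be sketched as a small tableau simplex for max c·x s.t. Ax ≤ b, x ≥ 0. This sketch assumes b ≥ 0, so the initial all-slack basis is feasible and no Phase I is needed; the test instance is illustrative, not from the slides.

```python
def simplex(c, A, b):
    """Tableau simplex for max c.x s.t. Ax <= b, x >= 0 (assumes b >= 0,
    so the initial all-slack dictionary is feasible). Returns (value, x)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the slacks are the initial basic variables.
    T = [[float(v) for v in A[i]] +
         [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    z = [-float(cj) for cj in c] + [0.0] * (m + 1)  # objective row; z[-1] = obj
    basis = list(range(n, n + m))
    while True:
        k = min(range(n + m), key=lambda j: z[j])   # largest-coefficient rule
        if z[k] >= -1e-9:
            break                                   # optimal dictionary
        rows = [(T[i][-1] / T[i][k], i) for i in range(m) if T[i][k] > 1e-9]
        if not rows:
            raise ValueError("unbounded")           # entering var grows forever
        _, r = min(rows)                            # ratio test -> leaving row
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][k]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        f = z[k]
        z = [a - f * p for a, p in zip(z, T[r])]
        basis[r] = k                                # pivot: k enters, row r leaves
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return z[-1], x
```

On the instance max 5x1 + 4x2 s.t. 6x1 + 4x2 ≤ 24, x1 + 2x2 ≤ 6 this terminates after two pivots with value 21 at (3, 1.5).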

35
Unboundedness
• Consider the following dictionary
• We could increase either x1 or x3 to increase the objective.
• Say we increase x1
• Which basic variable decreases to zero first?
• Answer: none of them; x1 can grow without bound, and the objective along with it.
• This is how we detect unboundedness with the simplex method.

36
Initialization
• Consider the following problem
• Phase-I Problem
• Modify the problem by subtracting a new variable, x0, from each constraint and
• replacing the objective function with -x0

37
Initialization
• Phase-I Problem
• Clearly feasible: pick x0 large, x1 = 0 and x2 = 0.
• If the optimal solution has obj = 0, then the original problem is feasible
• The final phase-I basis can be used as the initial phase-II basis (ignoring x0 thereafter)
• If the optimal solution has obj < 0, then the original problem is infeasible.

38
Geometry
• An example

(Figure: the feasible region in the (x1, x2) plane, with the lines -x1 + 3x2 = 12, x1 + x2 = 8, 3x1 + 2x2 = 22, 3x1 + 2x2 = 11, 2x1 - x2 = 10.)
Each constraint, including the nonnegativity constraints on the variables, is a half-plane.
Optimize over a polyhedron
39
Simplex algorithm graphical description
http://en.wikipedia.org/wiki/File:Simplex_description.png
40
Efficiency Analysis
• Performance Measures
• Measuring the Size of a Problem
• Measuring the Effort to Solve a Problem
• Worst Case Analysis of the Simplex Method

41
1.3.1 Performance Measures
• Question
• Given a problem of a certain size, how long will it take to solve it?
• Average case: how long for a typical problem?
• Worst case: how long for the hardest problem?
• Average Case
• Mathematically difficult
• Empirical studies
• Worst Case
• Mathematically tractable
• Limited value.

42
Measures
• Measures of Size
• Number of constraints m and/or number of variables n.
• Number of data elements, mn.
• Number of nonzero data elements.
• Size, in bytes, of the AMPL formulation (model + data).
• Measuring Time
• Number of iterations.
• Arithmetic operations per iteration.
• Time per arithmetic operation (depends on hardware).

43
Worst-case analysis
• Klee-Minty Problem (1972)
• Example: n = 3

44
Worst-case analysis
n = 3
• A distorted cube
• The constraints represent a minor distortion of an n-dimensional hypercube
(Figure: the distorted cube for n = 3; vertex objective values include 1, 96, 100, 9592, 9600, 9992, 10000.)
45
Worst-case analysis
• Exponential
• The Klee-Minty problem shows that
• the largest-coefficient rule can take 2^n - 1 pivots to solve a problem in n variables and constraints (thereby visiting all 2^n vertices of the distorted cube).
• For n = 70, 2^n ≈ 1.2 × 10^21
• At 1000 iterations per second, this problem would take 40 billion years to solve. The age of the universe is estimated at 13.7 billion years.
• Yet problems with 10,000 to 100,000 variables are solved routinely every day.
• Worst-case analysis is just that: the worst case.
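The arithmetic above can be checked directly; 2^70 pivots at 1000 per second comes out near the quoted 40 billion years.

```python
pivots = 2 ** 70                           # worst-case pivot count for n = 70
seconds_per_year = 365.25 * 24 * 3600
years = pivots / 1000 / seconds_per_year   # at 1000 pivots per second
print(f"{pivots:.2e} pivots, {years:.1e} years")  # ~1.18e+21 pivots, ~3.7e+10 years
```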

46
The simplex algorithm
• Exponential in theory
• Fast in practice
• Open question: does there exist a variant of the simplex method whose worst-case performance is polynomial?

47
Duality
• Every Problem
• Has a Dual

48
Duality
• Primal problem
• Dual in standard form
• The original problem is called the primal problem.
• A problem is defined by its data (the notation used for the variables is arbitrary).
• The dual is the negative transpose of the primal.

Theorem: The dual of the dual is the primal.
49
Duality Theorem
• (Weak) If (x1, x2, …, xn) is feasible for the primal and (y1, y2, …, ym) is feasible for the dual, then c1x1 + … + cnxn ≤ b1y1 + … + bmym.
• (Strong) If the primal problem has an optimal solution x* = (x1, x2, …, xn), then the dual problem has an optimal solution y* = (y1, y2, …, ym), and the two objective values are equal.
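The theorem can be verified numerically on a small illustrative instance (assumed here, not from the slides): feasible primal and dual points whose objective values coincide certify the optimality of both.

```python
# Primal: max 5*x1 + 4*x2   s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x >= 0
# Dual:   min 24*y1 + 6*y2  s.t.  6*y1 + y2 >= 5,   4*y1 + 2*y2 >= 4,  y >= 0
x = (3.0, 1.5)   # candidate primal optimum
y = (0.75, 0.5)  # candidate dual optimum

# Both points are feasible ...
assert 6*x[0] + 4*x[1] <= 24 and x[0] + 2*x[1] <= 6 and min(x) >= 0
assert 6*y[0] + y[1] >= 5 and 4*y[0] + 2*y[1] >= 4 and min(y) >= 0

primal_val = 5*x[0] + 4*x[1]   # weak duality guarantees primal_val <= dual_val
dual_val = 24*y[0] + 6*y[1]
assert primal_val == dual_val == 21.0   # strong duality: no gap, both optimal
```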

50
Duality theorem
• Four possibilities
• Primal optimal, dual optimal (no gap).
• Primal unbounded, dual infeasible (no gap).
• Primal infeasible, dual unbounded (no gap).
• Primal infeasible, dual infeasible (infinite
gap).

51
Algorithms for linear programming
• Primal simplex
• Dual simplex
• Network simplex
• Fourier-Motzkin Elimination (1847, 1938)
• The Ellipsoid Method (1970-1979)
• Interior-Point/Barrier Methods (1984)

52
Other approaches
• Lagrangean Relaxation (for very large-scale and structured LPs)
• Plus
• bundle
• bundle trust-region
• or any other nondifferentiable NLP method that looks promising

53
Integer programming
• Relaxation
• Branch and Bound
• Cutting planes
• Column generation

Close the gap!
Min Max Theorem
54-61
(No transcript: image-only slides)
62
Cutting Plane Technique for Solving Integer
Programs
• www.math.ohiou.edu/vardges/math443/slides/cutplane1.ppt

63
Motivating Example for Cutting Planes
• Recall the bad-case example for the LP-rounding algorithm
• Integer Program: max x1 + 5x2, s.t. x1 + 10x2 ≤ 20, x1 ≤ 2, x1, x2 ≥ 0 integer
• LP relaxation: max x1 + 5x2, s.t. x1 + 10x2 ≤ 20, x1 ≤ 2, x1, x2 ≥ 0
• Solution to the LP relaxation: (2, 1.8)
• Rounded IP solution:
• (2, 1) with value 7
• IP optimal solution:
• (0, 2) with value 10
• Conclusion: the rounded solution is too far from the optimal solution

(Figure: the feasible region with the lines x1 = 2 and x1 + 10x2 = 20; objective contour z = 11.)
64
• How can we improve the performance of LP-rounding?
• Add the following new constraint to the problem: x1 + 2x2 ≤ 4.
• New Integer Program: max x1 + 5x2, s.t. x1 + 10x2 ≤ 20, x1 ≤ 2, x1 + 2x2 ≤ 4, x1, x2 ≥ 0 integer
• New LP relaxation: max x1 + 5x2, s.t. x1 + 10x2 ≤ 20, x1 ≤ 2, x1 + 2x2 ≤ 4, x1, x2 ≥ 0
• The set of feasible integer points is the same for the old and new IPs
• But the feasible region of the new LP relaxation is different:
• some of the fractional points are cut off
• As a result, the optimal solution of the new LP relaxation, (0, 2),
• is also the optimal IP solution.

(Figure: the new feasible region with the lines x1 = 2, x1 + 10x2 = 20 and x1 + 2x2 = 4; the LP optimum is now (0, 2) with objective contour z = 10.)
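The claims on these two slides are small enough to verify by brute force: the integer grid is bounded by x1 ≤ 2 and 10·x2 ≤ 20, so a few lines of Python cover every feasible integer point.

```python
# Slide data: max x1 + 5*x2  s.t.  x1 + 10*x2 <= 20,  x1 <= 2,  x >= 0
obj = lambda x1, x2: x1 + 5 * x2
feasible_ints = [(x1, x2) for x1 in range(3) for x2 in range(3)
                 if x1 + 10 * x2 <= 20 and x1 <= 2]

# Old LP relaxation optimum (2, 1.8) has value 11 ...
assert abs(obj(2, 1.8) - 11) < 1e-9
# ... but the IP optimum is (0, 2) with value 10.
assert max((obj(x1, x2), (x1, x2)) for x1, x2 in feasible_ints) == (10, (0, 2))

# The cut x1 + 2*x2 <= 4 keeps every feasible integer point ...
assert all(x1 + 2 * x2 <= 4 for x1, x2 in feasible_ints)
# ... while cutting off the fractional LP optimum (2, 1.8).
assert not (2 + 2 * 1.8 <= 4)
```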
65
General Idea of Cutting Plane Technique
• Add new constraints (cutting planes) to the problem such that
• (i) the set of feasible integer solutions remains the same, i.e., we still have the same integer program;
• (ii) the new constraints cut off some of the fractional solutions, making the feasible region of the LP relaxation smaller.
• A smaller feasible region might result in a better LP value (i.e., closer to the IP value), thus making the search for the optimal IP solution more efficient.
• Each integer program might have many different formulations.
• Important modeling skill:
• Give as tight a formulation as possible.
• How? Find cutting planes that make the formulation of the original IP tighter.

66
Branch and cut
67
Column generation
• Dantzig-Wolfe decomposition: replace the system described by inequalities with extreme points and extreme rays. Variables are generated only when needed!
• Used to solve very large-scale combinatorial optimization problems (airline crew scheduling, vehicle routing, etc.)
• Two models: the restricted master problem and the subproblem.
• The master problem is solved with ILP, and the subproblem is solved with dynamic programming or CP.

68
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

69
The maximum flow problem
• Input
• A digraph G = (V, E)
• A pair of source and target nodes (s, t)
• For each arc (v, w) in E, a capacity u_{v,w}
• Output
• A maximum flow

70
The maximum flow problem
• A feasible (s,t)-flow is a vector x on the arcs that satisfies
• the flow conservation constraints
• the flow capacity constraints

71
The maximum flow problem
• A maximum flow is
• a feasible (s,t)-flow whose flow amount
• f_x(t) is maximized

72
The maximum flow problem
• The Ford-Fulkerson algorithm (the Augmenting Path
algorithm)

(Figure: an x-incrementing path and an x-augmenting path from s to t; each arc is labeled 2/3, i.e. flow 2 on capacity 3.)
73
The maximum flow problem
• The Ford-Fulkerson algorithm

74
The maximum flow problem
• The Ford-Fulkerson algorithm
• An auxiliary (residual) graph
• G(x) = (V, E′), where arc vw is
• in E′ if it has positive residual capacity

The auxiliary graph of x = 0
75
The maximum flow algorithm
• Every (s,t)-dipath in the auxiliary graph represents an x-augmenting path
• Use BFS to find the (s,t)-dipaths
• Each iteration takes O(m) running time, and there are at most nm augmentations (when shortest augmenting paths are used)
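The Ford-Fulkerson scheme with BFS (Edmonds-Karp) can be sketched as follows; a residual-capacity dictionary plays the role of the auxiliary graph. The test graph is illustrative, not the one on the slides.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS (shortest) augmenting paths.
    capacity: dict of dicts, capacity[u][v] = capacity of arc (u, v)."""
    res = {u: dict(vs) for u, vs in capacity.items()}   # residual capacities
    for u, vs in capacity.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)      # reverse arcs start at 0
    flow = 0
    while True:
        parent = {s: None}                  # BFS for an augmenting (s,t)-dipath
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                     # no augmenting path: flow is maximum
        path, v = [], t                     # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug                # push aug units forward ...
            res[v][u] += aug                # ... and allow undo on the reverse arc
        flow += aug
```

On the small network {s→a:3, s→b:2, a→b:1, a→t:2, b→t:3} this returns a maximum flow of 5.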

76
The minimum cut problem
• A cut

A (s,t)-cut
77
Max-Flow Min-Cut
• The Max-Flow Min-Cut Theorem
• If there is a maximum (s,t)-flow, then its value equals the minimum capacity of an (s,t)-cut
78
Max-Flow Min-Cut
• The linear program of Max-Flow

79
Max-Flow Min-Cut
• The dual of max-flow

80
Maximum flow and minimum cut
• The maximum flow

(Figure: a flow network on nodes A-H; each arc is labeled flow/capacity, e.g. 2/2, 1/1, 0/5, 4/5, 3/4.)
81
Maximum flow and minimum cut
• The residual capacity on arcs

(Figure: the residual capacity on each arc of the same network.)
82
Maximum flow and minimum cut
• The residual graph

(Figure: the residual graph on nodes A-H.)
83
Maximum flow and minimum cut
• The minimum cut

(Figure: a minimum cut in the same network; arcs are labeled with their capacities.)
84
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

85
Elementary linear algebra
• Given a finite set of points S = {z1, z2, …, zn} in R^m, a point z in R^m is called a convex combination of these points if z = λ1 z1 + … + λn zn for some λi ≥ 0 with λ1 + … + λn = 1
• The convex hull of S, denoted by conv(S), is the set of all points that are convex combinations of points in S.

The convex hull of two points
The convex hull of three points
86
Elementary linear algebra
• A convex hull conv(S) can be described by a finite set of linear inequalities
87
Elementary linear algebra
• For integer programming, it's hard
• Finding the linear programming description of the convex hull of an integer programming problem is hard
• The reduction might lead to an exponential increase in the problem size
• However, these are the problems we need to overcome!

88
Elementary linear algebra
• A set of points x1, x2, …, xk in R^n is linearly independent if the unique solution of λ1 x1 + … + λk xk = 0 is λi = 0 for all i
• The maximum number of linearly independent rows (columns) of a matrix A is the rank of A, denoted by rank(A).
• A set of points x1, x2, …, xk in R^n is affinely independent if the unique solution of λ1 x1 + … + λk xk = 0, λ1 + … + λk = 0 is λi = 0 for all i

89
Polyhedron and polytope
• A polyhedron is the solution set of a finite system of linear inequalities (linear programming is optimizing a linear function over a polyhedron).
• A polyhedron P is a polytope if there exist bounds l, u such that l ≤ x ≤ u for all x in P.
• A polytope is the convex hull of a finite set of vectors, and vice versa.

90
Polyhedron and polytope
91
Polyhedron and polytope
Exterior representation
92
Polyhedron and polytope
93
Polyhedron and polytope
• An inequality w^T x ≤ t is valid for a polyhedron P if every point of P satisfies it
• The solution set of an equation w^T x = t is called a hyperplane if w ≠ 0
• Such a hyperplane is called a supporting hyperplane of a polyhedron P if w^T x ≤ t is valid for P and some point of P satisfies it with equality

94
Polyhedron and polytope
• A polytope P is of dimension k, denoted by dim(P) = k, if the maximum number of affinely independent points in P is k+1.
• A polyhedron P in R^n is full-dimensional if dim(P) = n.
• If Ax = b is consistent, the maximum number of affinely independent solutions of Ax = b is n + 1 - rank(A)
• If P = {x : Ax ≤ b} is nonempty, then dim(P) = n - rank(A=, b=), where (A=, b=) is the subsystem of constraints satisfied with equality by all points of P.

95
Faces and vertices
• The intersection of a polytope and a supporting hyperplane of it is a face.
• A vector v is called a vertex if {v} is a face of P.
• P is pointed if it has at least one vertex.
96
Facets
• Let F be a nonempty proper face of a polyhedron P. Then F is a facet if and only if dim(F) = dim(P) - 1.
• Theorem 3: Let P = {x : Ax ≤ b} be a nonempty polyhedron. Then the defining system is minimal if and only if the rows of A are linearly independent and, for each row i of A, the inequality a_i x ≤ b_i induces a distinct facet of P.

97
Facets
• Corollary 1: A full-dimensional polyhedron has a unique (up to positive scalar multiples) minimal defining system.
• Corollary 2: Any defining system for a polyhedron must contain a distinct facet-inducing inequality for each of its facets.
• A polyhedron has a finite number of facets

98
An example
(Figure: the triangle with vertices (1,0,0), (0,1,0), (0,0,1) in R^3.)
rank(A, b) = 1, dim(P) = 2
99
Extreme points and extreme rays
• x is an extreme point of P if and only if {x} is a zero-dimensional face of P.
• Let P = {x : Ax ≤ b}. If r ≠ 0 and Ar ≤ 0, then r is called a ray of P.
• If P is not empty, then r is an extreme ray of P if and only if {λr : λ ≥ 0} is a one-dimensional face of P0 = {x : Ax ≤ 0}.

100
An example (cont)
(Figure: the same example; F1 = (0, 0, 1) is an extreme point, and r1 = (1, 0, -1) and r2 = (0, 1, -1) are extreme rays.)
101
Extreme points and extreme rays
• A polyhedron has a finite number of extreme points and extreme rays.
• Theorem (Minkowski): If P is not empty and rank(A) = n, then P is the sum of the convex hull of its extreme points and the cone generated by its extreme rays.

102
An example (cont)
(Figure: the interior representation of the same polyhedron, generated by the extreme point (0,0,1) and the extreme rays (1,0,-1) and (0,1,-1).)
103
Integral polytopes
• Polyhedra that can be defined by rational linear systems are rational polyhedra.
• A rational polyhedron is called integral if every nonempty face contains an integral vector.
• Theorem 4: A rational polytope P is integral if and only if for all integral vectors w the optimal value of max{w^T x : x in P} is an integer.

104
Total unimodularity
• Let A be an m × n matrix with full row rank. The matrix A is called unimodular if all entries of A are integral and each nonsingular m × m submatrix of A has determinant ±1.
• Theorem 5: The polyhedron defined by Ax = b, x ≥ 0 is integral for every integral vector b in R^m if and only if A is unimodular.

105
Total unimodularity
• A matrix A is called totally unimodular if all of its square submatrices have determinant 0, 1 or -1. (In particular, every entry of a totally unimodular matrix is 0, 1, or -1.)
• Theorem 6 (Hoffman-Kruskal): Let A be an m × n integral matrix. Then the polyhedron defined by Ax ≤ b, x ≥ 0 is integral for every integral vector b in R^m if and only if A is totally unimodular. (This even holds without the nonnegativity constraints.)

106
Total unimodularity
• There have been many characterizations of totally unimodular matrices
• Ghouila-Houri (1962)
• Camion (1965)
• Truemper (1977)
• ...
• Full understanding was achieved by establishing a link to regular matroids (Seymour 1980). This connection also yields a polynomial-time algorithm to recognize totally unimodular matrices.

107
Total unimodularity
Rows are indexed by vertices, columns by edges: A_{v,e} = 1 if v is the head of e, A_{v,e} = -1 if v is the tail of e, and A_{v,e} = 0 otherwise.
108
Total unimodularity
• Theorem 7: Let A be a {0, ±1}-valued matrix where each column has at most one +1 and at most one -1. Then A is totally unimodular.
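The definition can be checked by brute force on a small node-arc incidence matrix; the search over all square submatrices is exponential, so this is for toy examples only. The digraph below is an illustrative assumption, not from the slides.

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def totally_unimodular(A):
    """Brute-force check: every square submatrix must have det in {-1, 0, 1}."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

# Node-arc incidence matrix of a digraph with arcs (1,2), (1,3), (2,3):
# each column has one +1 (head) and one -1 (tail), so Theorem 7 applies.
A = [[-1, -1,  0],
     [ 1,  0, -1],
     [ 0,  1,  1]]
```

`totally_unimodular(A)` is True here, while a matrix with a 2x2 submatrix of determinant 2, such as [[1, 1], [-1, 1]], fails the check.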

109
Total unimodularity
Rows are indexed by the edges of the spanning tree T; columns are indexed by the edges of E. For e′ = (u,v) in E and e in T: M_{e,e′} = 1 if the (u,v)-path in T uses e in the forward direction, M_{e,e′} = -1 if the path uses e in the backward direction, and M_{e,e′} = 0 if the path does not use e.
M
110
Total unimodularity
L
N
[L N] is totally unimodular by Theorem 7. L is a basis of [L N], so L⁻¹N is totally unimodular. M = L⁻¹N, and therefore M is totally unimodular.
111
Total Dual integrality
• The basic theme of polyhedral combinatorics is the application of linear programming duality (min-max relations)
• A linear system Ax ≤ b is totally dual integral if the minimum in the LP duality equation can be achieved by an integral vector y for each integral w for which the optima exist.

112
Total Dual integrality
• Theorem 8 (Hoffman 1974): Let Ax ≤ b be a totally dual integral system such that P = {x : Ax ≤ b} is a rational polytope and b is integral. Then P is an integral polytope.
• min y^T b is an integer ⇒ max{w^T x : x in P} is an integer
• The result follows by Theorem 4.

113
Total Dual integrality
• Chapter 8 gives an example of the process of proving that a system is totally dual integral.
• Achieving integrality involves adding many equalities that are redundant for the definition of P. One needs to decide whether the addition improves the min-max theorem or not.

114
Cutting planes
Example
Polytope and its integer hull
115
Cutting-plane proofs
• Multiplying (5) by ½, we get
• Adding 2 × (6) and 3 × (1), we get
• Multiplying (7) by 1/17 and rounding down, we get

116
Cutting-plane proofs
• The Gomory-Chvátal cutting plane w^T x ≤ ⌊t⌋ (for integral w and a valid inequality w^T x ≤ t)

cuts off part of the polyhedron, but does not cut off any integral vectors
117
Cutting-plane proofs
(Chvátal 1973 and Gomory 1960) If w^T x ≤ t is a cutting plane of a polytope P, then there is always a cutting-plane proof of it!
118
Cutting-plane proofs
• Theorem 9: Let P = {x : Ax ≤ b} be a rational polytope that contains no integral vectors. Then there exists a cutting-plane proof of 0^T x ≤ -1 from Ax ≤ b.
• Analogous to Farkas' Lemma: a polytope is empty if and only if 0^T x ≤ -1 can be written as a nonnegative linear combination of the inequalities Ax ≤ b.

119
Chvátal Rank
• An approximation process that gives a finite procedure for obtaining a linear description of the convex hull of the integral vectors of the polytope P.
• Theorem 10: If P is a rational polytope, then P^(k) = P_I for some integer k. (The least such k is known as the Chvátal rank of P.)
• (See Schrijver 1980 for more details)

120
Separation and optimization
Separation problem: Given a bounded rational polyhedron P ⊆ R^n and a rational vector v ∈ R^n, either conclude that v belongs to P or, if not, find a rational vector w ∈ R^n such that w^T x < w^T v for all x ∈ P.
Optimization problem: Given a bounded rational polyhedron P ⊆ R^n and a rational (objective) vector w ∈ R^n, either find x ∈ P that maximizes w^T x over all x ∈ P, or conclude that P is empty.
121
Separation and optimization
• The separation problem is polynomially solvable over a class of polyhedra if there exists an algorithm that solves the separation problem for any polyhedron P_t in the class and any rational vector v in R^{n_t} in time polynomial in the sizes of t and v.
• Theorem 10: For any proper class of polyhedra, the optimization problem is polynomially solvable if and only if the separation problem is polynomially solvable. (Grötschel, Lovász and Schrijver 1979, 1988)

122
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

123
• Please refer to the talk's website: 9-09-21/2009-09-21-1600-MG-TSP-and-Applications.pdf

124
Facets of the TSP polytope
• The subtour elimination inequalities.

2-matchings
125
Facets of the TSP polytope
• The 2-matching inequalities

126
Facets of the TSP polytope
• The comb inequalities

127
Facets of the TSP polytope
• The generalized comb inequalities (with k teeth
and r handles)

128
Contents
• Introduction to Linear and Integer programming (R.J. Vanderbei)
• Network flow (Cook et al.)
• Brief introduction to polyhedral theory (Cook et al.)
• Travelling salesman (Martin Grötschel)
• Brief overview of current software (SCIP website)

129
Software
• Commercial
• IBM ILOG CPLEX
• GUROBI (Microsoft solver foundation)
• SAS
• Publicly available
• SCIP
• MINTO
• CLP (COIN-OR)

130
Software
Geometric mean of results taken from the homepage of Hans Mittelmann (29/Jul/2010). Unsolved or failed instances are accounted for with the time limit of 2 hours.
131
References
• 1. F. Focacci, A. Lodi and M. Milano, A hybrid exact algorithm for the TSPTW.
• 2. V. Chvátal, Cutting planes in combinatorics, European Journal of Combinatorics 6 (1985) 217-226.
• 3. M. Grötschel and W.R. Pulleyblank, Clique tree inequalities and the symmetric travelling salesman problem, Mathematics of Operations Research 11 (1986) 537-569.
• 4. A. Schrijver, On cutting planes, in Combinatorics 79, Part II, Annals of Discrete Mathematics 9 (1980) 291-296.

132
References
• W. J. Cook, W. H. Cunningham, W. R. Pulleyblank and A. Schrijver. Combinatorial Optimization. Wiley-Interscience Series in Mathematics and Optimization, 1998.
• R. J. Vanderbei. Linear Programming: Foundations and Extensions. 2001.
• Martin Grötschel's website: http://www.zib.de/groetschel
• SCIP: http://scip.zib.de/