Efficient Convex Relaxation for Transductive Support Vector Machine

Zenglin Xu1, Rong Jin2, Jianke Zhu1, Irwin King1, and Michael R. Lyu1

1 {zlxu, jkzhu, king, lyu}@cse.cuhk.edu.hk, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
2 rongjin@cse.msu.edu, Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824
1. Introduction

We consider the problem of Support Vector Machine transduction, which involves a combinatorial optimization problem whose computational complexity is exponential in the number of unlabeled examples. Although several studies have been devoted to Transductive SVM (TSVM), they suffer either from (1) high computational complexity or from (2) locally optimal solutions.
Contributions
We propose an improved SDP relaxation algorithm for TSVM.

(1) Unlike the semi-definite relaxation of [Xu et al., 2005], which approximates TSVM by dropping the rank constraint, the proposed approach approximates TSVM by its dual problem, which provides a tighter approximation.

(2) The proposed algorithm involves fewer free parameters and therefore significantly improves efficiency, reducing the worst-case computational complexity from O(n^6.5) to O(n^4.5).
2. Problem
The dual form of SVM can be formulated as follows, according to [Lanckriet et al., 2004]:

[optimization problem omitted in the transcript]

where K is the kernel matrix, and the last constraint is the balance constraint, which avoids assigning all the data to one of the classes.
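Since the displayed problem did not survive the transcript, the following is a hedged reconstruction of a standard form it likely took, combining the SVM dual of [Lanckriet et al., 2004] with the class-balance constraint mentioned above; epsilon and the index set U of unlabeled examples are notational assumptions, not recovered from the slide:

% Assumed reconstruction, not recovered from the slide: TSVM as a min-max
% over labels y and dual variables alpha, with a balance constraint.
\min_{y \in \{-1,+1\}^{n}} \; \max_{\alpha} \;
    2\,\alpha^{\top}\mathbf{1} \;-\; \alpha^{\top}\bigl(K \circ (y\,y^{\top})\bigr)\,\alpha
\quad \text{s.t.} \quad
    0 \le \alpha \le C\,\mathbf{1}, \qquad
    -\epsilon \;\le\; \frac{1}{|U|}\sum_{i \in U} y_i \;\le\; \epsilon

Here y is fixed to the given labels on the labeled examples and is a free optimization variable on the unlabeled ones.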
Based on a result about the set of feasible label assignments [definition omitted in the transcript], the following approximation is achieved in [Xu et al., 2005], where the rank constraint is dropped:

[relaxed problem omitted in the transcript]
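A hedged sketch of the relaxation step being referred to (this is the standard argument; the slide's exact set definition was not recovered):

% Standard rank-relaxation argument, assumed here: y y^T ranges over
\mathcal{S} \;=\; \bigl\{\, y\,y^{\top} \;:\; y \in \{-1,+1\}^{n} \,\bigr\}
\;=\; \bigl\{\, M \;:\; M \succeq 0,\; \operatorname{diag}(M) = \mathbf{1},\; \operatorname{rank}(M) = 1 \,\bigr\}

Dropping the non-convex constraint rank(M) = 1 leaves a convex feasible set over which the problem becomes a semi-definite program.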
3. Efficient Relaxation for TSVM

We introduce a new variable z such that [relation omitted in the transcript]. The dual problem then becomes:

[dual problem omitted in the transcript]

We further define [definitions omitted in the transcript]; we then have [equations omitted in the transcript].

To solve the above problem, we obtain the following theorem by computing the Lagrangian of the above non-convex problem. Using the Schur complement, we arrive at a semi-definite program (SDP):

[SDP formulation omitted in the transcript]
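The Schur-complement step relies on the following standard lemma (a textbook fact, stated here because the slide's derivation was lost):

% Schur-complement lemma: for A \succ 0,
t \;\ge\; b^{\top} A^{-1} b
\quad \Longleftrightarrow \quad
\begin{pmatrix} A & b \\ b^{\top} & t \end{pmatrix} \succeq 0

It replaces a term involving a matrix inverse by a linear matrix inequality, which is what turns the Lagrangian bound into an SDP.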
The two relaxations compare as follows in variable scale and worst-case time complexity (the variable-scale entries were lost in the transcript):

Method                     Variable scale   Worst-case time complexity
RTSVM [Xu et al., 2005]    [omitted]        O(n^6.5)
CTSVM (proposed)           [omitted]        O(n^4.5)
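As a concrete illustration of the Schur-complement device, here is a minimal numerical sketch, not the paper's actual SDP; it assumes the numpy and cvxpy packages and uses synthetic data:

# A minimal sketch (not the paper's SDP): minimize t subject to the LMI
#   [[A, b], [b^T, t]] >= 0 (positive semi-definite),
# whose optimum is t* = b^T A^{-1} b by the Schur-complement lemma above.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 6
G = rng.normal(size=(n, n))
A = G @ G.T + n * np.eye(n)          # a positive-definite "kernel-like" matrix
b = rng.normal(size=n)

t = cp.Variable()
Z = cp.Variable((n + 1, n + 1), PSD=True)   # plays the role of [[A, b], [b^T, t]]
constraints = [
    Z[:n, :n] == A,   # fixed top-left block
    Z[:n, n] == b,    # fixed off-diagonal block (symmetry is implied by PSD)
    Z[n, n] == t,     # the scalar being minimized
]
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()

closed_form = float(b @ np.linalg.solve(A, b))
print(f"SDP optimum : {t.value:.4f}")
print(f"closed form : {closed_form:.4f}")   # the two agree up to solver tolerance

Minimizing t under the LMI recovers b^T A^{-1} b, which is exactly the mechanism used to express the dual bound as a semi-definite program.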
4. Experimental Results and Conclusion

To evaluate the performance of the proposed method, we perform experiments on several data sets of different scales.

The following figure compares the training time of the traditional relaxation method with that of the proposed efficient convex relaxation method.

[Figure: computation time of the proposed convex relaxation approach for TSVM (CTSVM) and the traditional semi-definite relaxation approach for TSVM (RTSVM) versus the number of unlabeled examples. The Course data set is used, and the number of labeled examples is 20.]

The following table shows the classification performance of supervised SVM and of a variety of approximation methods for TSVM.

[Table of classification results omitted in the transcript]

Note: (1) Due to its high computational complexity, the traditional SDP-relaxation method is evaluated only on the small-size data sets, i.e., IBM-s and Course-s. Its results are 68.57±22.73 and 64.03±7.65, respectively, which are worse than those of the proposed CTSVM method. (2) SVM-light fails to converge on the Banana data set within one week.
Discussion

Our proposed convex approximation method provides a tighter approximation than the previous method. The prediction function implements the conjugate of the conjugate of the prediction function f(x), i.e., the convex envelope of f(x).
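For completeness, the standard definitions behind "conjugate of the conjugate" (a textbook convex-analysis fact, not taken from the slide):

% Fenchel conjugate and biconjugate; f** is the convex envelope of f.
f^{*}(u) \;=\; \sup_{x}\,\bigl\{ \langle u, x \rangle - f(x) \bigr\},
\qquad
f^{**}(x) \;=\; \sup_{u}\,\bigl\{ \langle x, u \rangle - f^{*}(u) \bigr\} \;\le\; f(x)

The biconjugate f** is the largest convex function nowhere exceeding f, i.e., its convex envelope.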
The solution of the proposed algorithm is related to that of harmonic functions. The decision function can be written as follows:

[decision function omitted in the transcript]

We first propagate the class labels from the labeled examples to the unlabeled ones by a propagation term [omitted in the transcript], then adjust the predicted labels by a factor [omitted in the transcript].
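The harmonic-function connection plausibly refers to the standard graph-based solution of [Zhu et al., 2003]; since the slide's exact propagation term and adjustment factor were not recovered, the following is an assumption for orientation only:

% Assumed standard harmonic solution on a graph with weight matrix W and
% degree matrix D; subscripts l and u index labeled and unlabeled examples.
f_{u} \;=\; \bigl( D_{uu} - W_{uu} \bigr)^{-1} W_{ul}\, f_{l}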
Conclusion

The experimental results demonstrate that the proposed algorithm effectively reduces the time complexity of convex relaxation for TSVM and improves transductive classification accuracy.
Acknowledgments
The authors thank the anonymous reviewers and PC members for their valuable suggestions and comments.