Introduction to Support Vector Machines
Based on slides by Ming-Hsuan Yang and Antoine Cornuéjols
SVM Bibliography
B. Boser, I. Guyon, V. Vapnik, “A training algorithm for optimal margin classifiers”, 1992.
C. Cortes, V. Vapnik, “Support-vector networks”. Machine Learning, 20, 1995.
V. Vapnik, “The Nature of Statistical Learning Theory”. Springer Verlag, 1995.
C. Burges, “A tutorial on support vector machines for pattern recognition”. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
N. Cristianini, J. Shawe-Taylor, “Support Vector Machines and other kernel-based learning methods”. Cambridge University Press, 2000.
Andrew Ng, “Support Vector Machines”. Stanford University, CS229 Lecture Notes, Part V.
SVM — The Main Idea
Given a set of data points which belong to either of two classes, find an optimal separating hyperplane
- maximizing the distance from the closest points of either class to the separating hyperplane, and
- minimizing the risk of misclassifying the training samples and the unseen test samples.
Approach: Formulate a constrained optimisation problem, then solve it using quadratic programming (QP).
Optimal Separating Hyperplane
[Figure: a valid separating hyperplane vs. the optimal separating hyperplane, which achieves the maximal margin]
Plan
1. Linear SVMs
The primal form and the dual form of linear SVMs
Linear SVMs with soft margin
2. Non-Linear SVMs
Kernel functions for SVMs
An example of non-linear SVM
1. Linear SVMs: Formalisation
Let S be a set of points xi ∈ Rd with i = 1, . . . , m. Each point xi belongs to either of two classes, with label yi ∈ {−1, +1}. The set S is linearly separable if there are w ∈ Rd and w0 ∈ R
such that
yi(w · xi + w0) ≥ 1 i = 1, . . . , m
The pair (w, w0) defines the hyperplane of equation w·x+w0 = 0, named the separating hyperplane.
The signed distance di of a point xi to the separating hyperplane (w, w0) is given by di = (w · xi + w0) / ||w||.
It follows that yi di ≥ 1/||w||, therefore 1/||w|| is a lower bound on the distance between the points xi and the separating hyperplane (w, w0).
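To make these definitions concrete, here is a minimal NumPy sketch (with a made-up toy set S and a hand-picked (w, w0) that satisfies the constraints) computing the signed distances di and the bound 1/||w||:

```python
import numpy as np

# Made-up linearly separable toy set S (for illustration only)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([+1, +1, -1, -1])

# A hand-picked separating hyperplane (w, w0) satisfying y_i (w.x_i + w0) >= 1
w = np.array([0.5, 0.5])
w0 = -1.0

assert np.all(y * (X @ w + w0) >= 1)     # separation constraints hold

d = (X @ w + w0) / np.linalg.norm(w)     # signed distances d_i
print(y * d)                             # each y_i * d_i ...
print(1 / np.linalg.norm(w))             # ... is >= 1/||w||
```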
Optimal Separating Hyperplane
Given a linearly separable set S, the optimal separating hyperplane is the separating hyperplane for which the distance to the closest (either positive or negative) points in S is maximum, therefore it maximizes 1/||w||.
[Figure: the optimal separating hyperplane D(x) = w · x + w0 = 0 and the maximal margin. The support vectors lie on the hyperplanes D(x) = −1 and D(x) = +1, at geometric margin 1/||w|| from the separating hyperplane; points with D(x) > 1 or D(x) < −1 lie outside the margin.]
Linear SVMs: The Primal Form
minimize (1/2) ||w||²
subject to yi(w · xi + w0) ≥ 1 for i = 1, . . . , m
This is a constrained quadratic programming (QP) problem with d + 1 parameters (w ∈ Rd and w0 ∈ R). It can be solved by quadratic optimisation methods if d is not very big (up to about 10³).
For large values of d (about 10⁵): due to the Kuhn-Tucker theorem, since the above objective function and the associated constraints are convex, we can use the method of Lagrange multipliers (αi ≥ 0, i = 1, . . . , m) to put the above problem into an equivalent “dual” form.
Note: In the dual form, the variables (αi) will be subject to much simpler constraints than the variables (w, w0) in the primal form.
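To make the primal QP concrete, here is a minimal sketch that feeds it to the cvxopt QP solver (assuming cvxopt is installed; the data set is made up, and a tiny regularisation term is added on w0 purely for numerical reasons):

```python
import numpy as np
from cvxopt import matrix, solvers

# Made-up linearly separable data in R^2
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([+1.0, +1.0, -1.0, -1.0])
m, d = X.shape

# Variables z = (w, w0) in R^(d+1); minimize (1/2)||w||^2 s.t. y_i(w.x_i + w0) >= 1,
# written in cvxopt's form: minimize (1/2) z^T P z + q^T z  s.t.  G z <= h
P = np.zeros((d + 1, d + 1))
P[:d, :d] = np.eye(d)
P[d, d] = 1e-8                 # tiny regularisation on w0, keeps the KKT system nonsingular
q = np.zeros(d + 1)
G = -np.hstack([y[:, None] * X, y[:, None]])   # -y_i (x_i, 1) . z <= -1
h = -np.ones(m)

z = np.ravel(solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))["x"])
w, w0 = z[:d], z[d]
print(w, w0)
print(y * (X @ w + w0))        # every value should be >= 1 (active constraints give == 1)
```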
Linear SVMs: Getting the Dual Form
The Lagrangian function associated with the primal form of the given QP is
LP(w, w0, α) = (1/2)||w||² − Σ_{i=1}^m αi (yi(w · xi + w0) − 1)
with αi ≥ 0, i = 1, . . . , m. Finding the minimum of LP implies
∂LP/∂w0 = − Σ_{i=1}^m yi αi = 0
∂LP/∂w = w − Σ_{i=1}^m yi αi xi = 0  ⇒  w = Σ_{i=1}^m yi αi xi
where ∂LP/∂w = (∂LP/∂w1, . . . , ∂LP/∂wd).
By substituting these constraints into LP we get its dual form
LD(α) = Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj yi yj xi · xj
Linear SVMs: The Dual Form
maximize Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj yi yj xi · xj
subject to Σ_{i=1}^m yi αi = 0
αi ≥ 0, i = 1, . . . , m
The link between the primal and the dual form:
The optimal solution (w, w0) of the primal QP problem is given by
w = Σ_{i=1}^m αi yi xi
αi(yi(w · xi + w0) − 1) = 0 for any i = 1, . . . , m
where αi are the optimal solutions of the above (dual form) optimisation problem.
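The dual form can be handed to a QP solver in the same way. A minimal sketch with cvxopt (again an assumption that it is available, on made-up data), including the recovery of (w, w0) from the optimal αi; cvxopt minimises (1/2)αᵀPα + qᵀα subject to Gα ≤ h and Aα = b, so the dual maximisation is written as a minimisation of its negative:

```python
import numpy as np
from cvxopt import matrix, solvers

# Made-up linearly separable data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([+1.0, +1.0, -1.0, -1.0])
m = len(y)

# Dual: maximize sum(alpha) - 1/2 sum_ij alpha_i alpha_j y_i y_j x_i.x_j
#  <=>  minimize 1/2 alpha^T P alpha + q^T alpha  with P_ij = y_i y_j x_i.x_j, q = -1
P = matrix(np.outer(y, y) * (X @ X.T))
q = matrix(-np.ones(m))
G = matrix(-np.eye(m))          # -alpha_i <= 0  (i.e. alpha_i >= 0)
h = matrix(np.zeros(m))
A = matrix(y.reshape(1, -1))    # sum_i y_i alpha_i = 0
b = matrix(0.0)

alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])

w = (alpha * y) @ X                  # w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                    # support vectors: alpha_i > 0
w0 = np.mean(y[sv] - X[sv] @ w)      # from y_i (w.x_i + w0) = 1 on support vectors
print(alpha, w, w0)
```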
Support Vectors
The only αi (solutions of the dual form of our QP problem) that can be nonzero are those for which the constraints yi(w · xi + w0) ≥ 1 for i = 1, . . . , m in the primal form of the QP are satisfied with the equality sign.
Because most αi are null, the vector w is a linear combination of a relatively small percentage of the points xi.
These points are called support vectors because they are the closest points to the optimal separating hyperplane (OSH) and the only points of S needed to determine the OSH.
The problem of classifying a new data point x is now simply solved by looking at sign(w · x + w0).
Linear SVMs with Soft Margin
If the set S is not linearly separable — or one simply ignores whether or not S is linearly separable —, the previous analysis can be generalised by introducing m non-negative (“slack”) variables ξi, for i = 1, . . . , m such that
yi(w · xi + w0) ≥ 1 − ξi, for i = 1, . . . , m
Purpose: to allow a small number of misclassified points, for better generalisation or computational efficiency.
Generalised OSH
The generalised OSH is then viewed as the solution to the problem:
minimize (1/2)||w||² + C Σ_{i=1}^m ξi
subject to yi(w · xi + w0) ≥ 1 − ξi for i = 1, . . . , m
ξi ≥ 0 for i = 1, . . . , m
The associated dual form:
maximize Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj yi yj xi · xj
subject to Σ_{i=1}^m yi αi = 0
0 ≤ αi ≤ C, i = 1, . . . , m
As before:
w = Σ_{i=1}^m αi yi xi
αi (yi(w · xi + w0) − 1 + ξi) = 0
(C − αi) ξi = 0
The role of C: it acts as a regularizing parameter (see the sketch below):
• large C ⇒ minimize the number of misclassified points
• small C ⇒ maximize the minimum distance 1/||w||
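An illustrative sketch of this trade-off using scikit-learn's soft-margin SVC on made-up, overlapping data; the exact numbers are not meaningful, only the trend as C varies:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Made-up two-class data with some overlap, so the soft margin matters
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(+1.0, 1.0, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C:>6}: #support vectors={clf.support_.size}, "
          f"margin 1/||w||={1 / np.linalg.norm(w):.3f}, "
          f"training errors={np.sum(clf.predict(X) != y)}")
```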
2. Nonlinear Support Vector Machines
• Note that the only way the data points appear in (the dual form of) the training problem is in the form of dot products xi · xj.
• In a higher dimensional space, it is very likely that a linear separator can be constructed.
• We map the data points from the input space Rd into some space of higher dimension Rn (n > d) using a function Φ : Rd → Rn
• Then the training algorithm would depend only on dot products of the form Φ(xi) · Φ(xj).
• Constructing (via Φ) a separating hyperplane with maximum margin in the higher-dimensional space yields a nonlinear decision boundary in the input space.
General Schema for Nonlinear SVMs
Input
space Output
space
Internal redescription
space
x Φ h y
Introducing Kernel Functions
• But computing dot products Φ(xi) · Φ(xj) in the high-dimensional space is computationally expensive...
• If there were a “kernel function” K such that K(xi, xj) = Φ(xi)·Φ(xj), we would only need to use K in the training algorithm (see the numerical check below).
• All the previous derivations in the linear SVM model hold (substituting the dot product with the kernel function), since we are still doing a linear separation, but in a different space.
• Important remark: By the use of the kernel function, it is possible to compute the separating hyperplane without explicitly carrying out the map into the higher-dimensional space.
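The kernel identity can be checked numerically for the quadratic kernel K(x, x′) = (x · x′ + 1)², whose explicit feature map Φ into R⁶ is the one used in the xor exercise later; a minimal sketch:

```python
import numpy as np

def K(x, xp):
    """Quadratic polynomial kernel on R^2: K(x, x') = (x . x' + 1)^2."""
    return (np.dot(x, xp) + 1.0) ** 2

def Phi(x):
    """Explicit feature map into R^6 such that Phi(x) . Phi(x') = K(x, x')."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([x1**2, x2**2, s * x1 * x2, s * x1, s * x2, 1.0])

rng = np.random.default_rng(0)
x, xp = rng.normal(size=2), rng.normal(size=2)
print(K(x, xp), Phi(x) @ Phi(xp))   # identical up to rounding: the kernel never needs Phi
```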
Some Classes of Kernel Functions for SVMs
• Polynomial: K(x, x′) = (x · x′ + c)q
• RBF (radial basis function): K(x, x′) = exp(−||x − x′||² / (2σ²))
• Sigmoid: K(x, x′) = tanh(α x · x′ − b)
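For illustration, these three families (with particular, assumed parameter values) can be evaluated with scikit-learn's pairwise-kernel helpers; gamma, coef0 and degree below are scikit-learn's parameter names, with gamma = 1/(2σ²) for the RBF case:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel, sigmoid_kernel

X = np.random.default_rng(0).normal(size=(5, 3))       # made-up data: 5 points in R^3

K_poly = polynomial_kernel(X, degree=2, gamma=1.0, coef0=1.0)  # (x.x' + c)^q with c=1, q=2
K_rbf  = rbf_kernel(X, gamma=0.5)                              # exp(-||x-x'||^2 / (2 sigma^2)), gamma = 1/(2 sigma^2)
K_sig  = sigmoid_kernel(X, gamma=1.0, coef0=-1.0)              # tanh(alpha x.x' - b) with alpha=1, b=1

print(K_poly.shape, K_rbf.shape, K_sig.shape)                  # each is the 5 x 5 Gram matrix
```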
An Illustration
[Figure: decision surfaces obtained (a) by a polynomial classifier and (b) by an RBF classifier. Support vectors are indicated in dark fill.]
Important Remark
The kernel functions only require calculations on the inputs x ∈ Rd, therefore they are not expensive to compute.
It remains to determine which kernel function K can be associated with a given (redescription space) mapping Φ.
In practice, one proceeds the other way around:
we use kernel functions that are known to correspond to the dot product in some space (which serves as the redescription space, never made explicit).
Therefore, the user operates by “trial and error”...
Advantage: the only parameters when training an SVM are the kernel function K, and the “tradeoff” parameter C.
Mercer’s Theorem (1909):
A Characterisation of Kernel Functions for SVMs
Theorem: Let K : Rd × Rd → R be a symmetric function.
K represents a dot product,
i.e. there is a function Φ : Rd → Rn such that K(x, x′) = Φ(x) · Φ(x′)
if and only if
∫∫ K(x, x′) f(x) f(x′) dx dx′ ≥ 0
for any function f such that ∫ f²(x) dx is finite.
Remark: The theorem doesn’t say how to construct Φ.
Some simple rules for building (Mercer) kernels
If K1 and K2 are kernels over X × X, with X ⊆ Rn, then
• K(x, y) = K1(x, y) + K2(x, y)
• K(x, y) = a K1(x, y), with a ∈ R+
• K(x, y) = K1(x, y) K2(x, y)
are also kernels (checked numerically below).
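These closure rules can be sanity-checked numerically (an illustration, not a proof): on any finite sample, the Gram matrix of a Mercer kernel is symmetric positive semidefinite, and the rules preserve that property. K1 and K2 below are example choices.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def is_psd(K, tol=1e-8):
    """A Gram matrix is a valid (Mercer) kernel matrix iff it is symmetric PSD."""
    return np.allclose(K, K.T) and np.min(np.linalg.eigvalsh(K)) >= -tol

X = np.random.default_rng(0).normal(size=(20, 4))      # made-up sample of points
K1 = polynomial_kernel(X, degree=2, gamma=1.0, coef0=1.0)
K2 = rbf_kernel(X, gamma=0.5)

print(is_psd(K1 + K2))        # sum of kernels
print(is_psd(3.0 * K1))       # positive scaling
print(is_psd(K1 * K2))        # elementwise product (Schur product of the Gram matrices)
```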
Illustrating the General Architecture of SVMs
for the problem of hand-written character recognition
[Figure: SVM architecture. The input x is compared with the support vectors x1, x2, x3, . . . via K(xi, x); the results are weighted by α1, α2, α3, α4, . . . and summed; the output is sign(Σi αi yi K(xi, x) + w0).]
An Exercise: xor
[Figure: the xor data set in the (x1, x2) plane: the points (1, 1) and (−1, −1) belong to class −1, the points (1, −1) and (−1, 1) to class +1.]
Note: use K(x, x′) = (x · x′ + 1)2.
It can be easily shown that for x = (x1, x2) ∈ R², Φ(x) = (x1², x2², √2 x1x2, √2 x1, √2 x2, 1) ∈ R⁶.
i    xi         yi    Φ(xi)
1    (1, 1)     −1    (1, 1, √2, √2, √2, 1)
2    (1, −1)    +1    (1, 1, −√2, √2, −√2, 1)
3    (−1, 1)    +1    (1, 1, −√2, −√2, √2, 1)
4    (−1, −1)   −1    (1, 1, √2, −√2, −√2, 1)
LD(α) = Σ_{i=1}^4 αi − (1/2) Σ_{i=1}^4 Σ_{j=1}^4 αi αj yi yj Φ(xi) · Φ(xj)
= α1 + α2 + α3 + α4 − (1/2)(9α1² − 2α1α2 − 2α1α3 + 2α1α4 + 9α2² + 2α2α3 − 2α2α4 + 9α3² − 2α3α4 + 9α4²)
subject to: −α1 + α2 + α3 − α4 = 0
∂LD(α)/∂α1 = 0 ⇔ 9α1 − α2 − α3 + α4 = 1
∂LD(α)/∂α2 = 0 ⇔ α1 − 9α2 − α3 + α4 = −1
∂LD(α)/∂α3 = 0 ⇔ α1 − α2 − 9α3 + α4 = −1
∂LD(α)/∂α4 = 0 ⇔ α1 − α2 − α3 + 9α4 = 1
ᾱ1 = ᾱ2 = ᾱ3 = ᾱ4 = 1/8
w̄ = (1/8)(−Φ(x1) + Φ(x2) + Φ(x3) − Φ(x4)) = (1/8)(0, 0, −4√2, 0, 0, 0)
w̄ · Φ(xi) + w̄0 = yi ⇒ w̄0 = 0
The optimal separating hyperplane: w̄ · Φ(x) + w̄0 = 0 ⇔ −x1x2 = 0
Test: sign(−x1x2)
(These values are checked numerically in the sketch below.)
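The hand computation can be verified numerically: the stationarity equations together with the equality constraint form a small linear system whose solution is αi = 1/8, and the resulting decision function is −x1x2. A minimal NumPy sketch:

```python
import numpy as np

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
y = np.array([-1.0, +1.0, +1.0, -1.0])

K = (X @ X.T + 1.0) ** 2                      # K(x, x') = (x.x' + 1)^2 = Phi(x).Phi(x')

# Stationarity of L_D: (y_i y_j K_ij) alpha = 1  (all inequality constraints inactive here)
alpha = np.linalg.solve(np.outer(y, y) * K, np.ones(4))
print(alpha)                                  # [0.125 0.125 0.125 0.125]
print(alpha @ y)                              # equality constraint sum_i y_i alpha_i = 0

def D(x):
    """Decision function sum_i alpha_i y_i K(x_i, x) + w0, with w0 = 0 here."""
    return np.sum(alpha * y * (X @ x + 1.0) ** 2)

for x in X:
    print(x, np.sign(D(x)), -x[0] * x[1])     # sign(D(x)) agrees with -x1*x2
```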
The xor Exercise: Result
[Figure: the xor solution. Left, the feature space, where the separating hyperplane achieves the maximum margin, with the contours D(x1, x2) = −1, 0, +1; right, the input space, where the decision function is D(x1, x2) = −x1x2, with the curves D = −1, D = 0 and D = +1 shown over [−2, 2] × [−2, 2].]
Concluding Remarks: SVM — Pros and Cons
Pros:
• Find the optimal separation hyperplane.
• Can deal with very high dimensional data.
• Some kernels correspond to hypothesis spaces of infinite Vapnik-Chervonenkis dimension (see Computational Learning Theory, ch. 7 in Tom Mitchell's book), which means that they can learn very elaborate concepts.
• Usually work very well.
Cons:
• Require both positive and negative examples.
• Need to select a good kernel function.
• Require lots of memory and CPU time.
• There are some numerical stability problems in solving the constrained QP.
Multi-class Classification with SVM
SVMs can only do binary classification.
For M classes, one can use the one-against-the-rest approach:
for each class k, construct a hyperplane between class k and the M − 1 other classes ⇒ M SVMs.
To predict the output of a new instance, run each of these M SVMs, and then find out which one puts the prediction furthest into the positive region of the instance space (see the sketch below).
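A minimal sketch of this scheme on made-up three-class data (scikit-learn also provides OneVsRestClassifier, but the explicit loop mirrors the description above):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Made-up data: 3 Gaussian blobs, classes 0, 1, 2
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 1.0, (40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)

# One SVM per class: class k vs. the rest
machines = {k: SVC(kernel="linear", C=1.0).fit(X, np.where(y == k, +1, -1))
            for k in np.unique(y)}

def predict(x):
    # Pick the class whose SVM pushes x furthest into its positive half-space
    scores = {k: clf.decision_function(x.reshape(1, -1))[0] for k, clf in machines.items()}
    return max(scores, key=scores.get)

x_new = np.array([3.5, 0.5])
print(predict(x_new))
```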
SVM Implementations
• SVMlight
• LIBSVM
• mySVM
• Matlab
• Huller
• ...
The SMO (Sequential Minimal Optimization) algorithm — John Platt, 1998
Optimization problem:
max_α W(α) = Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m yi yj αi αj xi · xj
s.t. 0 ≤ αi ≤ C, i = 1, . . . , m
Σ_{i=1}^m αi yi = 0.
Andrew Ng, Stanford, 2012 fall, ML course, Lecture notes 3.
Algorithm:
Repeat till convergence {
1. Select some pair αi and αj to update next (using a heuristic that tries to pick the two that will allow us to make the biggest progress towards the global maximum).
2. Reoptimize W(α) with respect to αi and αj, while holding all the other αk's (k ≠ i, j) fixed.
}
Update equations:
• αj^(new, unclipped) = αj − yj(Ei − Ej)/η
• αj^(new, clipped) =
  H, if αj^(new, unclipped) > H
  αj^(new, unclipped), if L ≤ αj^(new, unclipped) ≤ H
  L, if αj^(new, unclipped) < L
◦ where
  Ek = w · xk + w0 − yk, with w = Σ_{i=1}^m yi αi xi, and η = −||xi − xj||²
  L = max(0, αj − αi) and H = min(C, C + αj − αi) if yi ≠ yj
  L = max(0, αj + αi − C) and H = min(C, αj + αi) if yi = yj
• α1^new = α1^old + y1y2(α2^old − α2^new)
Assume that α1, α2 are the two free dual variables, and let the indices i and j be used to index the other variables.
Credit: John Platt, Fast training of SVMs using Sequential Minimal Optimization, 1999.
The two Lagrange multipliers must fulfill all the constraints of the full problem. The inequality constraints cause the Lagrange multipliers to lie in the box [0, C] × [0, C]; the linear equality constraint causes them to lie on a diagonal line.
Therefore, one step of SMO must find an optimum of the objective function on a diagonal line segment, where γ = α1^old + s α2^old is a constant that depends on the previous values of α1 and α2, and s = y1y2.
Proof
[following N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000, pp. 138-140]
The objective function:
W(α1, α2, . . . , αm) = Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m yi yj αi αj xi · xj
Notation: vi = Σ_{j=3}^m yj αj xj · xi = f(xi) − Σ_{j=1}^2 yj αj xj · xi for i = 1, 2, where f(xi) = Σ_{j=1}^m yj αj xj · xi.
⇒ W(α1, α2) = α1 + α2 − (1/2) α1² (x1 · x1) − (1/2) α2² (x2 · x2) − y1y2 α1α2 (x1 · x2) − y1 α1 v1 − y2 α2 v2 + const
From Σ_{i=1}^m yi αi = 0, and denoting s = y1y2, it follows that
α1^old + s α2^old = α1^new + s α2^new = γ (another constant)
⇒ α1^new = α1^old + y1y2 (α2^old − α2^new)
Substituting α1 = γ − s α2:
W(α2) = γ − s α2 + α2 − (1/2)(γ − s α2)² (x1 · x1) − (1/2) α2² (x2 · x2) − y1y2 (x1 · x2)(γ − s α2) α2 − y1 v1 (γ − s α2) − y2 v2 α2 + const
∂W(α2)/∂α2 = −s + 1 + s(γ − s α2)(x1 · x1) − α2 (x2 · x2) − s γ (x1 · x2) + 2 α2 (x1 · x2) + y1 v1 s − y2 v2
= −s + 1 + (s γ − α2)(x1 · x1) − α2 (x2 · x2) − s γ (x1 · x2) + 2 α2 (x1 · x2) + y1 v1 s − y2 v2
Finding the stationary point:
∂W(α2)/∂α2 = 0 ⇒ α2^(new, unclipped) (x1 · x1 + x2 · x2 − 2 x1 · x2) = 1 − s + γ s (x1 · x1) − γ s (x1 · x2) + y2 v1 − y2 v2
Now,
1 − s + γ s (x1 · x1) − γ s (x1 · x2) + y2 v1 − y2 v2 = y2 (y2 − y1 + γ y1 (x1 · x1 − x1 · x2) + v1 − v2)
and
v1 − v2 = f(x1) − y1 α1^old (x1 · x1) − y2 α2^old (x1 · x2) − f(x2) + y1 α1^old (x1 · x2) + y2 α2^old (x2 · x2)
= f(x1) − y1 (γ − s α2^old)(x1 · x1) − y2 α2^old (x1 · x2) − f(x2) + y1 (γ − s α2^old)(x1 · x2) + y2 α2^old (x2 · x2)
Therefore
1 − s + γ s (x1 · x1) − γ s (x1 · x2) + y2 v1 − y2 v2
= y2 (y2 − y1 + f(x1) + y1 s α2^old (x1 · x1) − y2 α2^old (x1 · x2) − f(x2) − y1 s α2^old (x1 · x2) + y2 α2^old (x2 · x2))
= y2 (f(x1) − y1 − f(x2) + y2) + α2^old (x1 · x1 + x2 · x2 − 2 x1 · x2)
= y2 (E1 − E2) + α2^old (x1 · x1 + x2 · x2 − 2 x1 · x2)
⇒ α2^(new, unclipped) (x1 · x1 + x2 · x2 − 2 x1 · x2) = y2 (E1 − E2) + α2^old (x1 · x1 + x2 · x2 − 2 x1 · x2)
⇒ α2^(new, unclipped) = α2^old − y2 (E1 − E2)/η, since η = −||x1 − x2||² = −(x1 · x1 + x2 · x2 − 2 x1 · x2).
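As a final check of the derivation, the closed-form update can be compared against a brute-force maximisation of W along the constraint line (an illustrative sketch with made-up data and a non-optimal starting α):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
X = rng.normal(size=(m, 2))
y = rng.choice([-1.0, 1.0], size=m)
alpha = rng.uniform(0.0, 1.0, size=m)          # some current (not optimal) multipliers
K = X @ X.T

w = (alpha * y) @ X
w0 = 0.0                                       # any value: it cancels in E1 - E2
E = X @ w + w0 - y                             # E_k = w.x_k + w0 - y_k
eta = -np.sum((X[0] - X[1]) ** 2)              # eta = -||x_1 - x_2||^2

a2_formula = alpha[1] - y[1] * (E[0] - E[1]) / eta   # unclipped update for alpha_2

def W_on_line(a2):
    """W(alpha) with alpha_2 = a2 and alpha_1 moved to keep sum_i y_i alpha_i fixed."""
    a = alpha.copy()
    a[1] = a2
    a[0] = alpha[0] + y[0] * y[1] * (alpha[1] - a2)
    return np.sum(a) - 0.5 * np.sum(np.outer(a * y, a * y) * K)

grid = np.linspace(a2_formula - 2.0, a2_formula + 2.0, 40001)
a2_brute = grid[np.argmax([W_on_line(a2) for a2 in grid])]
print(a2_formula, a2_brute)                    # agree up to the grid resolution
```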