
(1)

Introduction to Support Vector Machines

Based on slides by Ming-Hsuan Yang and Antoine Cornuéjols

(2)

SVM Bibliography

B. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers", 1992.

C. Cortes, V. Vapnik, "Support-vector networks". Machine Learning, 20, 1995.

V. Vapnik, "The Nature of Statistical Learning Theory". Springer Verlag, 1995.

C. Burges, "A tutorial on support vector machines for pattern recognition". Data Mining and Knowledge Discovery, 2(2), 1998.

N. Cristianini, J. Shawe-Taylor, "Support Vector Machines and other kernel-based learning methods". Cambridge University Press, 2000.

Andrew Ng, "Support Vector Machines", Stanford University, CS229 Lecture Notes, Part V.

(3)

SVM — The Main Idea

Given a set of data points which belong to either of two classes, find an optimal separating hyperplane

- maximizing the distance from the closest points of either class to the separating hyperplane, and

- minimizing the risk of misclassifying the training samples and the unseen test samples.

Approach: Formulate a constraint-based optimisation problem, then solve it using quadratic programming (QP).
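As a concrete point of reference (not part of the original slides), the whole pipeline can be tried out with an off-the-shelf solver; the sketch below assumes scikit-learn and a tiny made-up dataset, and fits a linear SVM whose internal optimisation is exactly the QP formulated on the following slides:

# A minimal sketch (not part of the slides): fitting a linear SVM with
# scikit-learn on a toy 2-class problem.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0],          # class +1
              [-1.0, -1.0], [-2.0, -1.5], [-0.5, -2.0]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)     # very large C ~ hard margin
clf.fit(X, y)

w, w0 = clf.coef_[0], clf.intercept_[0]
print("w =", w, " w0 =", w0)
print("margin = 1/||w|| =", 1.0 / np.linalg.norm(w))
print("predictions:", clf.predict(X))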

(4)

Optimal Separation Hyperplane

(Figure: a valid separating hyperplane compared with the optimal separating hyperplane, which achieves the maximal margin.)

(5)

Plan

1. Linear SVMs

The primal form and the dual form of linear SVMs
Linear SVMs with soft margin

2. Non-Linear SVMs

Kernel functions for SVMs

An example of non-linear SVM

(6)

1. Linear SVMs: Formalisation

Let S be a set of points xi ∈ Rd with i = 1, . . . , m. Each point xi belongs to either of two classes, with label yi ∈ {−1, +1}. The set S is linearly separable if there exist w ∈ Rd and w0 ∈ R

such that

yi(w · xi + w0) ≥ 1 i = 1, . . . , m

The pair (w, w0) defines the hyperplane of equation w·x+w0 = 0, named the separating hyperplane.

The signed distance di of a point xi to the separating hyperplane (w, w0) is given by di = (w · xi + w0) / ||w||.

It follows that yi di ≥ 1/||w||, therefore 1/||w|| is a lower bound on the distance between the points xi and the separating hyperplane (w, w0).
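A small numeric illustration of these definitions (the w, w0 and data points below are made up for the example):

# Sketch: check the separability condition y_i(w·x_i + w_0) >= 1 and the
# signed distance d_i = (w·x_i + w_0)/||w|| on made-up data.
import numpy as np

w, w0 = np.array([1.0, 1.0]), -1.0
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, 0.0], [0.0, -1.5]])
y = np.array([1, 1, -1, -1])

margins = y * (X @ w + w0)                 # y_i (w·x_i + w_0)
print("separated with functional margin >= 1:", np.all(margins >= 1))

d = (X @ w + w0) / np.linalg.norm(w)       # signed distances d_i
print("y_i * d_i >= 1/||w|| :", np.all(y * d >= 1.0 / np.linalg.norm(w) - 1e-12))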

(7)

Optimal Separating Hyperplane

Given a linearly separable set S, the optimal separating hyperplane is the separating hyperplane for which the distance to the closest (either positive or negative) points in S is maximum; it therefore maximizes 1/||w||.

(8)

(Figure: the optimal separating hyperplane D(x) = 0 for D(x) = w · x + w0, with the hyperplanes D(x) = −1 and D(x) = +1 passing through the support vectors of each class, the half-spaces D(x) < −1 and D(x) > 1, the normal vector w, the geometric margin 1/||w|| of the closest points xi, and the maximal margin.)

(9)

Linear SVMs: The Primal Form

minimize (1/2) ||w||²

subject to yi(w · xi + w0) ≥ 1 for i = 1, . . . , m

This is a constrained quadratic programming (QP) problem with d + 1 parameters (w ∈ Rd and w0 ∈ R). It can be solved by quadratic optimisation methods if d is not very big (around 10³).

For large values of d (around 10⁵): due to the Kuhn-Tucker theorem, since the above objective function and the associated constraints are convex, we can use the method of Lagrange multipliers (αi ≥ 0, i = 1, . . . , m) to put the above problem under an equivalent "dual" form.

Note: In the dual form, the variables (αi) will be subject to much simpler constraints than the variables (w, w0) in the primal form.

(10)

Linear SVMs: Getting the Dual Form

The Lagrangian function associated to the primal form of the given QP is

L_P(w, w0, α) = (1/2) ||w||² − Σ_{i=1}^{m} αi (yi(w · xi + w0) − 1)

with αi ≥ 0, i = 1, . . . , m. Finding the minimum of L_P implies

∂L_P/∂w0 = − Σ_{i=1}^{m} yi αi = 0

∂L_P/∂w = w − Σ_{i=1}^{m} yi αi xi = 0  ⇒  w = Σ_{i=1}^{m} yi αi xi

where ∂L_P/∂w = (∂L_P/∂w1, . . . , ∂L_P/∂wd).

By substituting these constraints into L_P we get its dual form

L_D(α) = Σ_{i=1}^{m} αi − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} αi αj yi yj xi · xj

(11)

Linear SVMs: The Dual Form

maximize Σ_{i=1}^{m} αi − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} αi αj yi yj xi · xj

subject to Σ_{i=1}^{m} yi αi = 0

αi ≥ 0, i = 1, . . . , m

The link between the primal and the dual form:

The optimal solution (w, w0) of the primal QP problem is given by

w = Σ_{i=1}^{m} αi yi xi

αi (yi(w · xi + w0) − 1) = 0, for any i = 1, . . . , m

where αi are the optimal solutions of the above (dual form) optimisation problem.
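As an illustration of how this dual QP can be solved in practice, here is a sketch that assumes the cvxopt QP solver and a tiny hand-made dataset; it minimises (1/2) αᵀQα − 1ᵀα with Q_ij = yi yj xi · xj under α ≥ 0 and yᵀα = 0, then recovers w and w0 as above:

# Sketch (cvxopt assumed, not part of the slides): solve the dual form
# directly and recover w and w0 from the optimal alphas.
import numpy as np
from cvxopt import matrix, solvers

X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = len(y)

Q = np.outer(y, y) * (X @ X.T)          # Q_ij = y_i y_j x_i·x_j
P, q = matrix(Q), matrix(-np.ones(m))
G, h = matrix(-np.eye(m)), matrix(np.zeros(m))   # alpha >= 0
A, b = matrix(y.reshape(1, -1)), matrix(0.0)     # y^T alpha = 0

solvers.options["show_progress"] = False
alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])

sv = alpha > 1e-6                        # support vectors: nonzero alpha_i
w = ((alpha * y)[:, None] * X).sum(axis=0)
w0 = np.mean(y[sv] - X[sv] @ w)          # from y_i(w·x_i + w0) = 1 on support vectors
print("alpha =", alpha.round(4), "\nw =", w, " w0 =", w0)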

(12)

Support Vectors

The only αi (solutions of the dual form of our QP problem) that can be nonzero are those for which the constraints yi(w · xi + w0) ≥ 1 for i = 1, . . . , m in the primal form of the QP are satisfied with the equality sign.

Because most αi are null, the vector w is a linear combination of a relatively small percentage of the points xi.

These points are called support vectors because they are the closest points to the optimal separating hyperplane (OSH) and the only points of S needed to determine the OSH.

The problem of classifying a new data point x is now simply solved by looking at sign(w · x + w0).
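For comparison, a fitted scikit-learn model (assumed here, not part of the slides) exposes exactly these quantities, so the sign(w · x + w0) rule can be checked directly:

# Sketch (scikit-learn assumed): read off the support vectors, the nonzero
# coefficients alpha_i*y_i and the bias, then classify a new point.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [1.0, 3.0], [3.0, 3.0],
              [-1.0, -1.0], [-2.0, 0.0], [-3.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("support vectors:\n", clf.support_vectors_)   # the closest points
print("alpha_i * y_i   :", clf.dual_coef_)          # nonzero multipliers only
w = (clf.dual_coef_ @ clf.support_vectors_).ravel() # w = sum_i alpha_i y_i x_i
w0 = clf.intercept_[0]
x_new = np.array([0.5, 0.5])
print("sign(w·x + w0)  =", np.sign(w @ x_new + w0)) # classify a new point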

(13)

Linear SVMs with Soft Margin

If the set S is not linearly separable — or one simply ignores whether or not S is linearly separable —, the previous analysis can be generalised by introducing m non-negative (“slack”) variables ξi, for i = 1, . . . , m such that

yi(w · xi + w0) ≥ 1 − ξi, for i = 1, . . . , m

Purpose: to allow a small number of misclassified points, for better generalisation or computational efficiency.

(14)

Generalised OSH

The generalised OSH is then viewed as the solution to the problem:

minimize (1/2) ||w||² + C Σ_{i=1}^{m} ξi

subject to yi(w · xi + w0) ≥ 1 − ξi for i = 1, . . . , m
ξi ≥ 0 for i = 1, . . . , m

The associated dual form:

maximize Σ_{i=1}^{m} αi − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} αi αj yi yj xi · xj

subject to Σ_{i=1}^{m} yi αi = 0

0 ≤ αi ≤ C, i = 1, . . . , m

As before:

w = Σ_{i=1}^{m} αi yi xi

αi (yi(w · xi + w0) − 1 + ξi) = 0

(C − αi) ξi = 0

(15)

The role of C:

it acts as a regularizing parameter:

• large C ⇒ minimize the number of misclassified points

• small C ⇒ maximize the minimum distance 1/||w||
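A quick empirical check of this trade-off (scikit-learn and a synthetic overlapping dataset assumed):

# Sketch: a large C tries to classify every training point; a small C widens
# the margin 1/||w|| and tolerates more margin violations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + [1.5, 1.5], rng.randn(50, 2) - [1.5, 1.5]])
y = np.hstack([np.ones(50), -np.ones(50)])

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C:6}: margin 1/||w|| = {1/np.linalg.norm(w):.3f}, "
          f"support vectors = {clf.n_support_.sum()}, "
          f"training errors = {(clf.predict(X) != y).sum()}")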

(16)

2. Nonlinear Support Vector Machines

Note that the only way the data points appear in (the dual form of) the training problem is in the form of dot products xi · xj.

In a higher dimensional space, it is very likely that a linear separator can be constructed.

We map the data points from the input space Rd into some space of higher dimension Rn (n > d) using a function Φ : Rd → Rn.

Then the training algorithm would depend only on dot products of the form Φ(xi) · Φ(xj).

Constructing (via Φ) a separating hyperplane with maximum margin in the higher-dimensional space yields a nonlinear decision boundary in the input space.

(17)

General Schema for Nonlinear SVMs

(Schema: the input x in the input space is mapped by Φ into the internal redescription space; a linear classifier h is applied there, producing the output y in the output space.)

(18)

Introducing Kernel Functions

• But computing the dot products Φ(xi) · Φ(xj) in the high-dimensional space is computationally expensive...

• If there were a “kernel function” K such that K(xi, xj) = Φ(xi)·Φ(xj), we would only use K in the training algorithm.

• All the previous derivations in the model of linear SVM hold (substituting the dot product with the kernel function), since we are still doing a linear separation, but in a different space.

• Important remark: By the use of the kernel function, it is possible to compute the separating hyperplane without explicitly carrying out the map into the higher space.

(19)

Some Classes of Kernel Functions for SVMs

• Polynomial: K(x, x′) = (x · x′ + c)^q

• RBF (radial basis function): K(x, x′) = exp(−||x − x′||² / (2σ²))

• Sigmoid: K(x, x′) = tanh(α x · x′ − b)
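Written out as code, the three kernel classes above are one-liners; the parameter names c, q, sigma, a and b follow the slide's notation, and the values below are arbitrary:

# Sketch: the three kernel classes listed above, as plain functions of
# input-space vectors (parameter values are arbitrary examples).
import numpy as np

def poly_kernel(x, z, c=1.0, q=2):
    return (np.dot(x, z) + c) ** q

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(x, z, a=1.0, b=0.0):
    return np.tanh(a * np.dot(x, z) - b)

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_kernel(x, z), rbf_kernel(x, z), sigmoid_kernel(x, z))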

(20)

An Illustration

(Figure: decision surfaces produced (a) by a polynomial classifier and (b) by an RBF classifier; support vectors are indicated in dark fill.)

(21)

Important Remark

Kernel functions only require calculations on the input vectors x ∈ Rd, therefore they are not expensive to compute.

It remains to determine which kernel function K can be associated with a given (redescription space) function Φ.

In practice, one proceeds vice versa:

we try kernel functions that are known to correspond to the dot product in some space (which serves as the redescription space, never made explicit).

Therefore, the user operates by “trial and error”...

Advantage: the only parameters when training an SVM are the kernel function K, and the “tradeoff” parameter C.

(22)

Mercer’s Theorem (1909):

A Characterisation of Kernel Functions for SVMs

Theorem: Let K : Rd × Rd → R be a symmetric function.

K represents a dot product,

i.e. there is a function Φ : Rd → Rn such that K(x, x′) = Φ(x) · Φ(x′)

if and only if

∫∫ K(x, x′) f(x) f(x′) dx dx′ ≥ 0

for any function f such that ∫ f²(x) dx is finite.

Remark: The theorem doesn’t say how to construct Φ.

(23)

Some simple rules for building (Mercer) kernels

If K1 and K2 are kernels over X × X, with X ⊆ Rn, then the following are also kernels:

• K(x, y) = K1(x, y) + K2(x, y)

• K(x, y) = a K1(x, y), with a ∈ R+

• K(x, y) = K1(x, y) K2(x, y)
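These closure rules can be illustrated numerically: positive semi-definiteness of the Gram matrix is the practical counterpart of Mercer's condition, and it is preserved by sum, positive scaling and (elementwise) product of Gram matrices. A small check on made-up data:

# Sketch: build Gram matrices of two kernels on random points and check that
# K1+K2, a*K1 and K1*K2 (elementwise, i.e. the product kernel) stay PSD.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(20, 3)

def gram(kernel):
    return np.array([[kernel(x, z) for z in X] for x in X])

K1 = gram(lambda x, z: (np.dot(x, z) + 1.0) ** 2)                  # polynomial
K2 = gram(lambda x, z: np.exp(-np.linalg.norm(x - z) ** 2 / 2.0))  # RBF

def min_eig(K):
    return np.linalg.eigvalsh(K).min()

for name, K in [("K1+K2", K1 + K2), ("3*K1", 3.0 * K1), ("K1*K2", K1 * K2)]:
    print(name, "min eigenvalue =", round(min_eig(K), 8), "(>= 0 up to rounding)")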

(24)

Illustrating the General Architecture of SVMs

for the problem of hand-written character recognition

(Architecture: the input x is compared with each support vector x1, x2, x3, . . . through the kernel function K(xi, x); the results are weighted by the multipliers αi (and labels yi), summed, and the bias w0 is added.

Output: sign(Σi αi yi K(xi, x) + w0))
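The output formula of this architecture is easy to transcribe directly; in the sketch below the support vectors, multipliers and bias are placeholders standing in for values obtained from training:

# Direct transcription of the architecture: sign( sum_i alpha_i y_i K(x_i, x) + w0 ).
# The support vectors, alphas, labels and w0 here are hypothetical.
import numpy as np

def svm_decision(x, support_vectors, alphas, labels, w0, kernel):
    s = sum(a * yi * kernel(xi, x)
            for a, yi, xi in zip(alphas, labels, support_vectors))
    return np.sign(s + w0)

kernel = lambda u, v: np.dot(u, v)                    # linear kernel for the example
support_vectors = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
alphas, labels, w0 = [0.25, 0.25], [1, -1], 0.0       # placeholder trained values

print(svm_decision(np.array([2.0, 0.5]), support_vectors, alphas, labels, w0, kernel))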

(25)

An Exercise: xor

The training set is the xor problem: the four points (x1, x2) ∈ {−1, +1}², labelled y = −1 when x1 x2 = 1 and y = +1 when x1 x2 = −1.

Note: use K(x, x′) = (x · x′ + 1)².

It can be easily shown that the corresponding feature map is Φ(x) = (x1², x2², √2 x1 x2, √2 x1, √2 x2, 1) ∈ R⁶ for x = (x1, x2) ∈ R².

(26)

i   xi          yi    Φ(xi)
1   (−1, −1)    −1    (1, 1, √2, −√2, −√2, 1)
2   (−1, +1)    +1    (1, 1, −√2, −√2, √2, 1)
3   (+1, −1)    +1    (1, 1, −√2, √2, −√2, 1)
4   (+1, +1)    −1    (1, 1, √2, √2, √2, 1)

L_D(α) = Σ_{i=1}^{4} αi − (1/2) Σ_{i=1}^{4} Σ_{j=1}^{4} αi αj yi yj Φ(xi) · Φ(xj)

= α1 + α2 + α3 + α4 − (1/2) (9α1² − 2α1α2 − 2α1α3 + 2α1α4 + 9α2² + 2α2α3 − 2α2α4 + 9α3² − 2α3α4 + 9α4²)

subject to:

−α1 + α2 + α3 − α4 = 0

(27)

∂L_D(α)/∂α1 = 0 ⇒ 9α1 − α2 − α3 + α4 = 1

∂L_D(α)/∂α2 = 0 ⇒ −α1 + 9α2 + α3 − α4 = 1

∂L_D(α)/∂α3 = 0 ⇒ −α1 + α2 + 9α3 − α4 = 1

∂L_D(α)/∂α4 = 0 ⇒ α1 − α2 − α3 + 9α4 = 1

ᾱ1 = ᾱ2 = ᾱ3 = ᾱ4 = 1/8

w̄ = (1/8) (−Φ(x1) + Φ(x2) + Φ(x3) − Φ(x4)) = (1/8) (0, 0, −4√2, 0, 0, 0)

w̄ · Φ(xi) + w̄0 = yi ⇒ w̄0 = 0

The optimal separating hyperplane: w̄ · Φ(x) + w̄0 = 0 ⇔ −x1 x2 = 0

Test: sign(−x1 x2)
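The whole exercise can be verified numerically; the sketch below builds the Gram matrix of K(x, x′) = (x · x′ + 1)², solves the stationarity system for α and checks that αi = 1/8, w0 = 0 and D(x) = −x1 x2:

# Numeric check of the xor exercise.
import numpy as np

X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])

K = (X @ X.T + 1.0) ** 2                 # Gram matrix: K_ii = 9, K_ij = 1
M = np.outer(y, y) * K                   # M_ij = y_i y_j K(x_i, x_j)
alpha = np.linalg.solve(M, np.ones(4))   # dL_D/dalpha_i = 0
print("alpha =", alpha)                  # -> [0.125 0.125 0.125 0.125]
print("sum y_i alpha_i =", y @ alpha)    # -> 0, equality constraint holds

w0 = y[0] - (alpha * y) @ K[:, 0]        # from y_1 (sum_j alpha_j y_j K_1j + w0) = 1
D = lambda x: (alpha * y) @ ((X @ x + 1.0) ** 2) + w0
for x in [np.array([0.3, 0.7]), np.array([-0.5, 0.8])]:
    print("D(x) =", D(x), "  -x1*x2 =", -x[0] * x[1])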

(28)

The xor Exercise: Result

(Figure: left, the feature space, where the separation is linear in the coordinate √2 x1 x2 and the maximum margin is 1/||w̄|| = √2, with the hyperplanes D(x1, x2) = +1 and D(x1, x2) = −1 on either side of D(x1, x2) = 0; right, the input space, where the decision boundary D(x1, x2) = −x1 x2 = 0 is non-linear and the level curves D(x1, x2) = ±1 pass through the four training points.)

(29)

Concluding Remarks: SVM — Pros and Cons

Pros:

Find the optimal separation hyperplane.

Can deal with very high-dimensional data.

Some kernels have infinite Vapnik-Chervonenkis dimension (see Computational learning theory, ch. 7 in Tom Mitchell’s book), which means that they can learn very elaborate concepts.

Usually work very well.

Cons:

Require both positive and negative examples.

Need to select a good kernel function.

Require lots of memory and CPU time.

There are some numerical stability problems in solving the constrained QP.

(30)

Multi-class Classification with SVM

SVMs can only do binary classification.

For M classes, one can use the one-against-the-rest approach:

for each class k, construct a hyperplane separating class k from the M − 1 other classes ⇒ M SVMs.

To predict the output of a new instance, just predict with each of these M SVMs, and then find out which one puts the prediction furthest into the positive region of the instance space.
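A sketch of this scheme (binary scikit-learn SVMs assumed, iris used as a convenient 3-class dataset):

# One-against-the-rest: train M binary SVMs ("class k vs the rest") and
# predict the class whose SVM pushes the point furthest into its positive region.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # M = 3 classes
classes = np.unique(y)

machines = {k: SVC(kernel="linear", C=1.0).fit(X, np.where(y == k, 1, -1))
            for k in classes}

def predict(x):
    scores = {k: clf.decision_function(x.reshape(1, -1))[0]
              for k, clf in machines.items()}
    return max(scores, key=scores.get)     # furthest into the positive region

pred = np.array([predict(x) for x in X])
print("training accuracy:", (pred == y).mean())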

(31)

SVM Implementations

• SVMlight

• LIBSVM

• mySVM

• Matlab

• Huller

• ...

(32)

The SMO (Sequential Minimal Optimization) algorithm (John Platt, 1998)

Optimization problem:

max_α W(α) = Σ_{i=1}^{m} αi − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} yi yj αi αj xi · xj

s.t. 0 ≤ αi ≤ C, i = 1, . . . , m

Σ_{i=1}^{m} αi yi = 0.

Andrew Ng, Stanford, 2012 fall, ML course, Lecture notes 3.

Algorithm:

Repeat till convergence {

1. Select some pair αi and αj to update next (using a heuristic that tries to pick the two that will allow us to make the biggest progress towards the global maximum).

2. Reoptimize W(α) with respect to αi and αj, while holding all the other αk's (k ≠ i, j) fixed.

}

(33)

Update equations:

αj^(new, unclipped) = αj − yj (Ei − Ej) / η

αj^(new, clipped) =
H, if αj^(new, unclipped) > H
αj^(new, unclipped), if L ≤ αj^(new, unclipped) ≤ H
L, if αj^(new, unclipped) < L

where

Ek = w · xk + w0 − yk,  w = Σ_{i=1}^{m} yi αi xi,  η = −||xi − xj||²

L = max(0, αj − αi) and H = min(C, C + αj − αi), if yi ≠ yj
L = max(0, αj + αi − C) and H = min(C, αj + αi), if yi = yj

and, writing 1 and 2 for the chosen pair (i, j):

α1^new = α1^old + y1 y2 (α2^old − α2^new)
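Putting the loop of slide (32) and the update equations above together, here is a compact sketch of the simplified SMO variant from the cited CS229 notes (random choice of the second multiplier rather than Platt's full heuristics), with a linear kernel and a toy dataset:

# Simplified SMO sketch (random second index, linear kernel), using the
# update and clipping formulas of this slide. Not Platt's full algorithm.
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-4, max_passes=20, seed=0):
    rng, m = np.random.RandomState(seed), len(y)
    K = X @ X.T                                    # linear kernel Gram matrix
    alpha, w0 = np.zeros(m), 0.0
    f = lambda k: (alpha * y) @ K[:, k] + w0       # current output on x_k
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(m):
            E_i = f(i) - y[i]
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = rng.choice([k for k in range(m) if k != i])
                E_j = f(j) - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0, a_j_old - a_i_old), min(C, C + a_j_old - a_i_old)
                else:
                    L, H = max(0, a_i_old + a_j_old - C), min(C, a_i_old + a_j_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]   # = -||x_i - x_j||^2
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - alpha[j])
                # recompute the bias from the KKT conditions
                b1 = w0 - E_i - y[i]*(alpha[i]-a_i_old)*K[i, i] - y[j]*(alpha[j]-a_j_old)*K[i, j]
                b2 = w0 - E_j - y[i]*(alpha[i]-a_i_old)*K[i, j] - y[j]*(alpha[j]-a_j_old)*K[j, j]
                w0 = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, w0

X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, w0 = simplified_smo(X, y, C=10.0)
w = (alpha * y) @ X
print("alpha =", alpha.round(3), " w =", w.round(3), " w0 =", round(w0, 3))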

(34)

Assume that α1 and α2 are the two free dual variables, and let the indices i and j range over the other variables.

Credit: John Platt, Fast training of SVMs using Sequential Minimal Optimization, 2000.

The two Lagrange multipliers must fulfill all the constraints of the full problem. The inequality constraints cause the Lagrange multipliers to lie in the box [0, C] × [0, C]. The linear equality constraint causes them to lie on a diagonal line.

Therefore, one step of SMO must find an optimum of the objective function on a diagonal line segment. In Platt's figure, γ = α1^old + s α2^old is a constant that depends on the previous values of α1 and α2, and s = y1 y2.

(35)

Proof

[following N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, 2000, pp. 138-140]

The objective function:

W(α1, α2, . . . , αm) = Σ_{i=1}^{m} αi − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} yi yj αi αj xi · xj

Notation: vi = (Σ_{j=3}^{m} yj αj xj) · xi = f(xi) − Σ_{j=1}^{2} yj αj xj · xi − w0, for i = 1, 2, where f(x) = w · x + w0.

W(α1, α2) = α1 + α2 − (1/2) α1² x1² − (1/2) α2² x2² − y1 y2 α1 α2 x1 · x2 − y1 α1 v1 − y2 α2 v2 + const

(here xk² denotes xk · xk = ||xk||²).

From Σ_{i=1}^{m} yi αi = 0:  α1^old + s α2^old = α1^new + s α2^new = γ (another constant), where s = y1 y2 (notation), hence

α1^new = α1^old + y1 y2 (α2^old − α2^new)

Substituting α1 = γ − s α2:

W(α2) = γ − s α2 + α2 − (1/2) (γ − s α2)² x1² − (1/2) α2² x2² − y1 y2 (γ − s α2) α2 x1 · x2 − y1 v1 (γ − s α2) − y2 v2 α2 + const

(36)

∂W(α2)/∂α2 = −s + 1 + s x1² (γ − s α2) − x2² α2 − s γ (x1 · x2) + 2 α2 (x1 · x2) + y1 v1 s − y2 v2

= −s + 1 + x1² (s γ − α2) − x2² α2 − s γ (x1 · x2) + 2 α2 (x1 · x2) + y1 v1 s − y2 v2

Finding the stationary point: ∂W(α2)/∂α2 = 0 ⇒

α2^(new, unclipped) (x1² + x2² − 2 x1 · x2) = 1 − s + γ s x1² − γ s (x1 · x2) + y2 v1 − y2 v2

The right-hand side can be rewritten as

1 − s + γ s x1² − γ s (x1 · x2) + y2 v1 − y2 v2 = y2 (y2 − y1 + γ y1 (x1² − x1 · x2) + v1 − v2)

v1 − v2 = f(x1) − y1 α1^old x1² − y2 α2^old x1 · x2 − f(x2) + y1 α1^old x1 · x2 + y2 α2^old x2²

= f(x1) − y1 (γ − s α2^old) x1² − y2 α2^old x1 · x2 − f(x2) + y1 (γ − s α2^old) x1 · x2 + y2 α2^old x2²

Therefore

1 − s + γ s x1² − γ s (x1 · x2) + y2 v1 − y2 v2 = y2 (y2 − y1 + f(x1) − f(x2) + y2 α2^old x1² − 2 y2 α2^old x1 · x2 + y2 α2^old x2²)

= y2 (f(x1) − y1 − f(x2) + y2) + α2^old (x1² + x2² − 2 x1 · x2)

= y2 (E1 − E2) + α2^old (x1² + x2² − 2 x1 · x2)

where Ei = f(xi) − yi = w · xi + w0 − yi, as on slide (33). It follows that

α2^(new, unclipped) (x1² + x2² − 2 x1 · x2) = y2 (E1 − E2) + α2^old (x1² + x2² − 2 x1 · x2)

α2^(new, unclipped) = α2^old − y2 (E1 − E2) / η, since η = −||x1 − x2||² = −(x1² + x2² − 2 x1 · x2),

which is the update equation of slide (33).
