The code examples listed here don't work with LibSVM 3.1, so I've more or less ported the example by mossplix:

from svmutil import *
svm_model.predict = lambda self, x: svm_predict([0], [x], self)[0][0]

prob = svm_problem([1,-1], [[1,0,1], [-1,0,-1]])

param = svm_parameter()
param.kernel_type = LINEAR
param.C = 10

m=svm_train(prob, param)

m.predict([1,1,1])
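As an aside (assuming the same 3.x Python interface), svm_problem also accepts sparse dict inputs instead of dense lists; a small helper — not part of libsvm, just illustration — to convert the dense rows above:

```python
def dense_to_sparse(vec):
    """Convert a dense feature list to libsvm's {index: value} dict form.

    Feature indices are 1-based, and zero-valued entries may be omitted.
    """
    return {i + 1: v for i, v in enumerate(vec) if v != 0}

# The rows [[1,0,1], [-1,0,-1]] from the example above become:
# [{1: 1, 3: 1}, {1: -1, 3: -1}]
sparse_rows = [dense_to_sparse(row) for row in [[1, 0, 1], [-1, 0, -1]]]
```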
Answer from ShinNoNoir on Stack Overflow (top answer, 1 of 8, score 24)
Answer 2 of 8 (score 20)
This example demonstrates a two-class SVM classifier; it's about as simple as possible while still showing the complete LIBSVM workflow.

Step 1: Import NumPy & LIBSVM

import numpy as NP
from svm import *

Step 2: Generate synthetic data: for this example, 500 points within a given boundary (note: quite a few real data sets are provided on the LIBSVM website)

Data = NP.random.randint(-5, 5, 1000).reshape(500, 2)

Step 3: Now, choose some non-linear decision boundary for a one-class classifier:

rx = [ (x**2 + y**2) < 9 and 1 or 0 for (x, y) in Data ]

Step 4: Next, arbitrarily partition the data w/r/t this decision boundary:

  • Class I: points strictly inside the circle (x² + y² < 9)

  • Class II: all points on or outside the decision boundary (circle)
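The data generation and labeling in Steps 2–4 can be sketched without NumPy as well (pure Python, for illustration; the seed is arbitrary):

```python
import random

random.seed(0)  # reproducible toy data

# 500 integer points in [-5, 4] x [-5, 4], mirroring NP.random.randint(-5, 5, ...)
Data = [(random.randint(-5, 4), random.randint(-5, 4)) for _ in range(500)]

# Class 1: strictly inside the circle x^2 + y^2 < 9; class 0: everything else.
# The `and/or` trick used above predates Python's conditional expression;
# `1 if ... else 0` is the modern equivalent.
rx = [1 if x * x + y * y < 9 else 0 for (x, y) in Data]
```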


The SVM Model Building begins here; all steps before this one were just to prepare some synthetic data.

Step 5: Construct the problem description by calling svm_problem, passing in the class labels (rx) and the data, then bind the result to a variable.

px = svm_problem(rx, Data)

Step 6: Select a kernel function for the non-linear mapping

For this example, I chose RBF (radial basis function) as my kernel function

pm = svm_parameter(kernel_type=RBF)
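For intuition, the RBF kernel itself is simple to compute; a minimal sketch in plain Python (the gamma value here is an arbitrary illustration, not LIBSVM's default):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2), the kernel selected above."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

rbf_kernel([0, 0], [0, 0])  # identical points -> 1.0
rbf_kernel([0, 0], [3, 4])  # squared distance 25 -> exp(-12.5), close to 0
```

Nearby points map to values near 1 and distant points to values near 0, which is what lets the RBF kernel carve out non-linear boundaries like the circle above.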

Step 7: Train the classifier by calling svm_model, passing in the problem description (px) and the kernel parameters (pm)

v = svm_model(px, pm)

Step 8: Finally, test the trained classifier by calling predict on the trained model object ('v')

v.predict([3, 1])
# returns the class label (either '1' or '0')

For the example above, I used version 3.0 of LIBSVM (the current stable release at the time this answer was posted).

As for the part of your question regarding the choice of kernel function: Support Vector Machines are not tied to a particular kernel function--e.g., I could have chosen a different kernel (linear, polynomial, etc.).

LIBSVM includes all of the most commonly used kernel functions--a big help, because you can see all plausible alternatives, and selecting one for your model is just a matter of calling svm_parameter and passing in a value for kernel_type (an abbreviation for the chosen kernel).

Finally, the kernel function you choose for training must match the kernel function used against the testing data.

Top answer (1 of 1, score 12)

In the libsvm package, in the file matlab/README, you can find the following examples:

Examples
========

Train and test on the provided data heart_scale:

matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07');
matlab> [predict_label, accuracy, dec_values] = svmpredict(heart_scale_label, heart_scale_inst, model); % test the training data

For probability estimates, you need '-b 1' for training and testing:

matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07 -b 1');
matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
matlab> [predict_label, accuracy, prob_estimates] = svmpredict(heart_scale_label, heart_scale_inst, model, '-b 1');

To use precomputed kernel, you must include sample serial number as
the first column of the training and testing data (assume your kernel
matrix is K, # of instances is n):

matlab> K1 = [(1:n)', K]; % include sample serial number as first column
matlab> model = svmtrain(label_vector, K1, '-t 4');
matlab> [predict_label, accuracy, dec_values] = svmpredict(label_vector, K1, model); % test the training data

We give the following detailed example by splitting heart_scale into
150 training and 120 testing data.  Constructing a linear kernel
matrix and then using the precomputed kernel gives exactly the same
testing error as using the LIBSVM built-in linear kernel.

matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
matlab>
matlab> % Split Data
matlab> train_data = heart_scale_inst(1:150,:);
matlab> train_label = heart_scale_label(1:150,:);
matlab> test_data = heart_scale_inst(151:270,:);
matlab> test_label = heart_scale_label(151:270,:);
matlab>
matlab> % Linear Kernel
matlab> model_linear = svmtrain(train_label, train_data, '-t 0');
matlab> [predict_label_L, accuracy_L, dec_values_L] = svmpredict(test_label, test_data, model_linear);
matlab>
matlab> % Precomputed Kernel
matlab> model_precomputed = svmtrain(train_label, [(1:150)', train_data*train_data'], '-t 4');
matlab> [predict_label_P, accuracy_P, dec_values_P] = svmpredict(test_label, [(1:120)', test_data*train_data'], model_precomputed);
matlab>
matlab> accuracy_L % Display the accuracy using linear kernel
matlab> accuracy_P % Display the accuracy using precomputed kernel

Note that for testing, you can put anything in the
testing_label_vector.  For more details of precomputed kernels, please
read the section ``Precomputed Kernels'' in the README of the LIBSVM
package.
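The precomputed-kernel construction above translates directly outside MATLAB; a sketch in plain Python (no libsvm required) that builds the linear kernel matrix with the 1-based serial-number column that '-t 4' expects:

```python
def linear_kernel_with_ids(train, test=None):
    """Mirror MATLAB's [(1:n)', X * train'] for LIBSVM's precomputed kernel.

    Each output row starts with a 1-based sample serial number, followed by
    the dot products of that sample against every training instance
    (i.e., a linear kernel row).
    """
    rows = train if test is None else test
    out = []
    for i, x in enumerate(rows, start=1):
        dots = [sum(a * b for a, b in zip(x, t)) for t in train]
        out.append([i] + dots)
    return out

train = [[1.0, 0.0], [0.0, 1.0]]
K1 = linear_kernel_with_ids(train)
# K1 == [[1, 1.0, 0.0], [2, 0.0, 1.0]]
```

For test data, the kernel rows are computed against the *training* instances (test_data*train_data' in the MATLAB example), which is why the same helper takes an optional test argument.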
Top answer (1 of 1, score 1)

What I can see from your code is that you are mixing OpenCV and LIBSVM.

Basically, you can follow one of the following ways. Personally, I would suggest using OpenCV only.

OpenCV

OpenCV is a very powerful library for working with images. It therefore implements its own machine learning algorithms, including SVMs.

As described very well here, it is easy to perform image classification via OpenCV, since its algorithms use a common interface for this purpose.

LIBSVM

LIBSVM is a standalone library for SVM classification in various forms (e.g. multi-class, two-class, with probability estimates, etc.). If you go this way, you have to perform the following steps in order to do successful classification:

  1. Think about how many different classes you want to differentiate (e.g. + / -)
  2. Maybe preprocess your images (filters, ...)
  3. Extract so-called "features" from your images using a feature selection method (for example: mutual information). Such methods tell you which points are significant for your given classes, since we follow the basic assumption that not every single pixel in an image is important.
  4. According to your extracted features, transform your images into a vector representation.
  5. Write it into a file according to the LIBSVM data format:

    label feature_id1:feature_value1 feature_id2:feature_value2

    +1 1:0.53265 2:0.5232

    -1 1:0.78543 2:0.64326

  6. Proceed with "svm_train" according to its description. Classification would be a combination of steps 2, 4, and 5, plus a run of "svm_predict".
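The sparse file format in step 5 is straightforward to generate; a minimal sketch (pure Python, hypothetical helper name) that formats one sample per line as LIBSVM expects:

```python
def to_libsvm_line(label, features):
    """Format one sample as 'label id1:val1 id2:val2 ...'.

    Feature ids are 1-based; zero-valued features may be omitted,
    which is what makes the format sparse.
    """
    pairs = " ".join(f"{i}:{v}" for i, v in enumerate(features, start=1) if v != 0)
    return f"{label} {pairs}".strip()

to_libsvm_line(1, [0.53265, 0.5232])    # '1 1:0.53265 2:0.5232'
to_libsvm_line(-1, [0.78543, 0.64326])  # '-1 1:0.78543 2:0.64326'
```

Writing one such line per sample to a text file yields input that svm_train can read directly.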
