sklearn custom SVM kernel functions (defined externally and internally)

We can use a kernel function that we write ourselves:

*Note: if you use precomputed mode, i.e. instead of passing in a function you pass in the already-computed kernel matrix directly, then the kernel computation must involve both the training set and the test set: fit takes the kernel between training samples, and predict takes the kernel between the test samples and the training samples.
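A minimal sketch of the shape contract this note describes (the RBF kernel and the random data here are illustrative assumptions, not part of the original example):

import numpy as np
from sklearn import svm
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X_tr, X_te = rng.rand(20, 3), rng.rand(5, 3)
y_tr = (X_tr[:, 0] > 0.5).astype(int)

clf = svm.SVC(kernel='precomputed')
clf.fit(rbf_kernel(X_tr, X_tr), y_tr)       # fit needs an (n_train, n_train) Gram matrix
print(clf.predict(rbf_kernel(X_te, X_tr)))  # predict needs (n_test, n_train)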

# coding=utf-8
import numpy as np
from sklearn import svm, datasets
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.metrics import zero_one_loss


if __name__ == "__main__":
    # define the datasets
    X_train = np.array([[0.3, 0.4], [0, 0], [1, 1], [1.1, 1.1]])
    y_train = [0, 0, 1, 1]
    X_test = np.array([[0.2, 0.2], [0, 3], [1, -1], [5, 5]])
    y_test = [0, 1, 0, 1]

    # Test 1: pass a kernel function directly
    def my_kernel(X, Y):  # custom kernel: a plain linear kernel
        return np.dot(X, Y.T)
    clf = svm.SVC(kernel=my_kernel)
    clf.fit(X_train, y_train)
    result = clf.predict(X_test)
    print(result)
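    # The callable only has to return an (n_samples_X, n_samples_Y) matrix.
    # A sketch of a hand-rolled RBF kernel as an alternative callable
    # (the gamma value is an arbitrary assumption for illustration):
    def my_rbf_kernel(X, Y, gamma=0.5):
        # squared distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
        sq = (np.sum(X ** 2, axis=1)[:, None]
              + np.sum(Y ** 2, axis=1)[None, :]
              - 2 * np.dot(X, Y.T))
        return np.exp(-gamma * sq)
    clf_rbf = svm.SVC(kernel=my_rbf_kernel)
    clf_rbf.fit(X_train, y_train)
    print(clf_rbf.predict(X_test))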

    # Test 2: compute the kernel externally (precomputed mode)
    clf = svm.SVC(kernel='precomputed')
    gram = np.dot(X_train, X_train.T)  # linear kernel, computed outside the estimator first
    clf.fit(gram, y_train)
    # predict on the test examples
    # With precomputed mode, the test kernel must be computed between the test set and the training set.
    gram_test = np.dot(X_test, X_train.T)
    result = clf.predict(gram_test)
    print(result)
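    # Sanity check (a sketch, not in the original post): precomputing the linear
    # kernel with sklearn's pairwise helper should match the built-in linear kernel.
    from sklearn.metrics.pairwise import linear_kernel
    clf_lin = svm.SVC(kernel='linear')
    clf_lin.fit(X_train, y_train)
    print(np.array_equal(clf_lin.predict(X_test),
                         clf.predict(linear_kernel(X_test, X_train))))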
    # import some data to play with
    iris = datasets.load_iris()
    X_train = iris.data[:, :2]  # we only take the first two features; we could
    # avoid this ugly slicing by using a two-dimensional dataset
    Y_train = iris.target


    def my_kernel(X, Y):
        """
        We create a custom kernel:
                     (2  0)
        k(X, Y) = X  (    ) Y.T
                     (0  1)
        """
        M = np.array([[2, 0], [0, 1.0]])
        return np.dot(np.dot(X, M), Y.T)


    h = .02  # step size in the mesh

    # we create an instance of SVM and fit our data.
    clf = svm.SVC(kernel=my_kernel)
    clf.fit(X_train, Y_train)

    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
    x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
    y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
    # Plot also the training points
    plt.scatter(X_train[:, 0], X_train[:, 1], c=Y_train, cmap=plt.cm.Paired, edgecolors='k')
    plt.title('3-Class classification using Support Vector Machine with custom'
              ' kernel')
    plt.axis('tight')
    plt.show()
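    # Note (illustrative check, not in the original): since M = A @ A with
    # A = diag(sqrt(2), 1), this custom kernel is just a linear kernel on
    # rescaled features: X M Y.T == (X A)(Y A).T
    A = np.diag([np.sqrt(2.0), 1.0])
    print(np.allclose(my_kernel(X_train, X_train),
                      np.dot(X_train @ A, (X_train @ A).T)))  # True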

    # ------ The right way --------------

    digits = datasets.load_digits()
    X, y = shuffle(digits.data, digits.target)
    X_train, X_test = X[:1000, :], X[1000:, :]
    y_train, y_test = y[:1000], y[1000:]

    svc = svm.SVC(kernel='precomputed')

    kernel_train = np.dot(X_train, X_train.T)  # linear kernel

    svc.fit(kernel_train, y_train)

    # kernel_test = np.dot(X_test, X_train[svc.support_, :].T)  # wrong shape: predict expects one column per training sample
    kernel_test = np.dot(X_test, X_train.T)
    y_pred = svc.predict(kernel_test)
    # zero_one_score was removed from sklearn long ago; zero_one_loss gives the error rate instead
    print(zero_one_loss(y_test, y_pred))  # fraction of misclassified test samples
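    # Equivalence check (a sketch on the same data, not in the original post):
    # training with the built-in linear kernel should give (near-)identical predictions.
    svc_lin = svm.SVC(kernel='linear')
    svc_lin.fit(X_train, y_train)
    print(np.mean(svc_lin.predict(X_test) == y_pred))  # expected to be ~1.0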

Other references:

1. Gram matrix: In style transfer, what counts as style? Only what carries an image's own distinctive characteristics. So how do we measure those characteristics? Ideally, an image's own traits stand out strongly while everything else does not.


The Gram matrix can in fact be viewed as an un-centered covariance matrix of the features (i.e. a covariance matrix without subtracting the mean). In a feature map, every number comes from the convolution of a particular filter at a particular position, so each number represents the strength of one feature. What the Gram matrix computes is the pairwise correlation between features: which two features tend to appear together, which two trade off against each other, and so on. Meanwhile, the diagonal elements of the Gram matrix reflect how much each feature appears in the image. The Gram matrix therefore helps capture the overall style of an image. With the Gram matrix as a representation of style, measuring the style difference between two images reduces to comparing the difference between their Gram matrices.
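For example, given a CNN feature map of shape (C, H, W), the Gram matrix is obtained by flattening the spatial dimensions and taking inner products between channels. A minimal numpy sketch (the shapes and the random feature map are made-up placeholders):

import numpy as np

C, H, W = 64, 32, 32            # channels, height, width (arbitrary sizes)
feat = np.random.rand(C, H, W)  # stand-in for a real CNN feature map
F = feat.reshape(C, H * W)      # one row per feature channel
G = F @ F.T                     # (C, C) Gram matrix: G[i, j] says how strongly
                                # features i and j co-occur; G[i, i] reflects
                                # how much feature i is present overall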
