1. K-means Algorithm
The K-means algorithm solves clustering problems and belongs to unsupervised learning: it takes unlabeled data and partitions it into K clusters. Its steps are as follows (a minimal sketch of the loop appears after the list):

1. Randomly initialize K cluster centroids.
2. Assign each example to its closest centroid.
3. Move each centroid to the mean of the examples assigned to it.
4. Repeat steps 2 and 3 until the assignments stop changing (or a maximum number of iterations is reached).
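A minimal sketch of that loop, assuming the findClosestCentroids and computeCentroids functions implemented at the end of this post; runKMeans, initial_centroids, and max_iters are illustrative names, not course code:

function [centroids, idx] = runKMeans(X, initial_centroids, max_iters)
    % Alternate the two K-means steps for a fixed number of iterations.
    centroids = initial_centroids;
    for iter = 1:max_iters
        % Step 1: assign every example to its closest centroid
        idx = findClosestCentroids(X, centroids);
        % Step 2: move each centroid to the mean of its assigned examples
        centroids = computeCentroids(X, idx, size(centroids, 1));
    end
end

In practice the centroids are usually initialized to K randomly chosen examples, and the whole procedure is rerun several times with different initializations to avoid bad local optima.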
2. PCA (Principal Component Analysis)
PCA is used to reduce the dimensionality of the data, speeding up model processing and computation. Its steps are as follows:
1. Compute the parameter Sigma:

\Sigma = \frac{1}{m} X^T X

where X is the matrix of input data (one example per row, m examples in total).
2. Substitute Sigma into the formula:

[U, S, V] = svd(\Sigma)
The resulting U is an n × n matrix, and S is a diagonal matrix (every entry off the main diagonal is 0).
Suppose we want to reduce the data to K dimensions. Take the first K columns of U to get U_reduce (an n × K matrix); multiplying X by U_reduce yields the new matrix Z = X * U_reduce, the reduced data that is used in place of X.
3. Multiplying Z by the transpose of U_reduce recovers an approximation of the original data, X_approx = Z * U_reduce^T. We have the following formula:

\frac{\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)} - x_{approx}^{(i)}\rVert^2}{\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)}\rVert^2}
The smaller this value, the less the dimensionality reduction distorts the original data; it should generally be kept between 0.01 and 0.1 (that is, 99% down to 90% of the variance is retained). The S matrix makes it very convenient to compute this value, using the formula:
1 - \frac{\sum_{i=1}^{k} S_{ii}}{\sum_{i=1}^{n} S_{ii}}
Here k is the target dimension K. We look for the smallest k that keeps this value below the chosen threshold.
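A minimal sketch of that search, assuming U and S have already been computed by the pca function below:

% Find the smallest k that keeps the quantity above at or below 0.01
% (i.e. at least 99% of the variance retained).
s = diag(S);                 % eigenvalues, taken from the diagonal of S
for k = 1:length(s)
    if 1 - sum(s(1:k)) / sum(s) <= 0.01
        break;               % k is now the smallest acceptable dimension
    end
end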
function [U, S] = pca(X)
%PCA Run principal component analysis on the dataset X
%   [U, S, X] = pca(X) computes eigenvectors of the covariance matrix of X
%   Returns the eigenvectors U, the eigenvalues (on diagonal) in S
%

% Useful values
[m, n] = size(X);

% You need to return the following variables correctly.
U = zeros(n);
S = zeros(n);

% ====================== YOUR CODE HERE ======================
% Instructions: You should first compute the covariance matrix. Then, you
%               should use the "svd" function to compute the eigenvectors
%               and eigenvalues of the covariance matrix.
%

Sigma = (X' * X) / m;
[U, S, V] = svd(Sigma);

% =========================================================================

end
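A usage sketch for pca; the mean normalization and feature scaling step is an assumption based on how PCA is normally applied (the covariance formula above expects normalized inputs):

% Normalize the data before running PCA (bsxfun keeps this portable
% across Octave/MATLAB versions that lack implicit broadcasting).
mu = mean(X);
sigma = std(X);
X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);
[U, S] = pca(X_norm);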
function Z = projectData(X, U, K)
%PROJECTDATA Computes the reduced data representation when projecting only
%on to the top k eigenvectors
%   Z = projectData(X, U, K) computes the projection of
%   the normalized inputs X into the reduced dimensional space spanned by
%   the first K columns of U. It returns the projected examples in Z.
%

% You need to return the following variables correctly.
Z = zeros(size(X, 1), K);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the projection of the data using only the top K
%               eigenvectors in U (first K columns).
%               For the i-th example X(i,:), the projection on to the k-th
%               eigenvector is given as follows:
%                    x = X(i, :)';
%                    projection_k = x' * U(:, k);
%

Z = X * U(:, 1:K);

% =============================================================

end
function X_rec = recoverData(Z, U, K)
%RECOVERDATA Recovers an approximation of the original data when using the
%projected data
%   X_rec = RECOVERDATA(Z, U, K) recovers an approximation the
%   original data that has been reduced to K dimensions. It returns the
%   approximate reconstruction in X_rec.
%

% You need to return the following variables correctly.
X_rec = zeros(size(Z, 1), size(U, 1));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the approximation of the data by projecting back
%               onto the original space using the top K eigenvectors in U.
%
%               For the i-th example Z(i,:), the (approximate)
%               recovered data for dimension j is given as follows:
%                    v = Z(i, :)';
%                    recovered_j = v' * U(j, 1:K)';
%
%               Notice that U(j, 1:K) is a row vector.
%

X_rec = Z * U(:, 1:K)';

% =============================================================

end
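Putting the three functions together (a sketch; X_norm, U, and S are as in the pca usage example above, and K would be chosen as described earlier):

K = 2;                             % illustrative target dimension
Z = projectData(X_norm, U, K);     % compress to K dimensions
X_rec = recoverData(Z, U, K);      % approximate reconstruction
% Relative reconstruction error; for s = diag(S) this should match
% 1 - sum(s(1:K)) / sum(s), the quantity discussed above.
err = sum(sum((X_norm - X_rec) .^ 2)) / sum(sum(X_norm .^ 2));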
function idx = findClosestCentroids(X, centroids)
%FINDCLOSESTCENTROIDS computes the centroid memberships for every example
%   idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
%   in idx for a dataset X where each row is a single example. idx = m x 1
%   vector of centroid assignments (i.e. each entry in range [1..K])
%

% Set K
K = size(centroids, 1);

% You need to return the following variables correctly.
idx = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every example, find its closest centroid, and store
%               the index inside idx at the appropriate location.
%               Concretely, idx(i) should contain the index of the centroid
%               closest to example i. Hence, it should be a value in the
%               range 1..K
%
% Note: You can use a for-loop over the examples to compute this.
%

m = size(X, 1);
for i = 1:m
    x = X(i, :);
    min_dist = sum((x - centroids(1, :)) .^ 2);
    idx(i) = 1;
    for j = 2:K
        dist = sum((x - centroids(j, :)) .^ 2);
        if dist < min_dist
            min_dist = dist;
            idx(i) = j;
        end
    end
end

% =============================================================

end
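The double loop above is the straightforward version; the assignment step can also be vectorized by computing all example-to-centroid squared distances at once (a sketch, not the course solution):

% D(i, j) = squared distance from example i to centroid j, expanded as
% ||x||^2 + ||c||^2 - 2 * x * c'; the row-wise minimum gives the assignment.
D = bsxfun(@plus, sum(X .^ 2, 2), sum(centroids .^ 2, 2)') ...
    - 2 * X * centroids';
[~, idx] = min(D, [], 2);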
function centroids = computeCentroids(X, idx, K)
%COMPUTECENTROIDS returns the new centroids by computing the means of the
%data points assigned to each centroid.
%   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
%   computing the means of the data points assigned to each centroid. It is
%   given a dataset X where each row is a single data point, a vector
%   idx of centroid assignments (i.e. each entry in range [1..K]) for each
%   example, and K, the number of centroids. You should return a matrix
%   centroids, where each row of centroids is the mean of the data points
%   assigned to it.
%

% Useful variables
[m, n] = size(X);

% You need to return the following variables correctly.
centroids = zeros(K, n);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every centroid and compute mean of all points that
%               belong to it. Concretely, the row vector centroids(i, :)
%               should contain the mean of the data points assigned to
%               centroid i.
%
% Note: You can use a for-loop over the centroids to compute this.
%

cnt = zeros(K, n);
for i = 1:m
    centroids(idx(i), :) = centroids(idx(i), :) + X(i, :);   % sum per cluster
    cnt(idx(i), :) = cnt(idx(i), :) + 1;                     % member count
end
centroids = centroids ./ cnt;

% =============================================================

end
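An equivalent, more compact version loops over the centroids instead of the examples (a sketch; note that neither version guards against a cluster that ends up empty, which produces NaN here from the mean of an empty set, just as the zero count does in the division above):

for k = 1:K
    % Mean of all examples currently assigned to centroid k
    centroids(k, :) = mean(X(idx == k, :), 1);
end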