Machine Learning (Andrew Ng) Assignment Code (Exercises 7~8)

Programming Exercise 7: K-means Clustering and Principal Component Analysis

K-Means Clustering

findClosestCentroids

Given a set of data points X, where each row of the matrix X is one example, and K cluster centroids centroids, find the centroid closest to each point. In other words:

$$\arg\min_{idx(1),\cdots,idx(m)}J(idx(1),\cdots,idx(m),\mu_1,\cdots,\mu_K)$$ $$=\frac 1 m \sum_{i=1}^m \|x^{(i)}-\mu_{idx(i)}\|^2$$

function idx = findClosestCentroids(X, centroids)
%FINDCLOSESTCENTROIDS computes the centroid memberships for every example
%   idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
%   in idx for a dataset X where each row is a single example. idx = m x 1 
%   vector of centroid assignments (i.e. each entry in range [1..K])
%

% Set K
K = size(centroids, 1);

% You need to return the following variables correctly.
idx = zeros(size(X,1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every example, find its closest centroid, and store
%               the index inside idx at the appropriate location.
%               Concretely, idx(i) should contain the index of the centroid
%               closest to example i. Hence, it should be a value in the 
%               range 1..K
%
% Note: You can use a for-loop over the examples to compute this.
%
    % Track the smallest squared distance seen so far for each example
    mindis = ones(size(X,1), 1) * 1e9;
    for i = 1:size(X,1)
        for j = 1:K
            % Squared Euclidean distance from example i to centroid j
            diff = X(i,:) - centroids(j,:);
            nowdis = diff * diff';
            if nowdis < mindis(i)
                mindis(i) = nowdis;
                idx(i) = j;
            end
        end
    end
% =============================================================

end
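As a side note, the assignment step can also be vectorized. A minimal sketch, assuming implicit broadcasting (built into Octave, and into MATLAB from R2016b on):

% Squared distances between every example and every centroid, using
% ||x - mu||^2 = ||x||^2 + ||mu||^2 - 2*x*mu'
D = sum(X.^2, 2) + sum(centroids.^2, 2)' - 2 * X * centroids';
[~, idx] = min(D, [], 2);   % column index of the nearest centroid per row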

computeCentroids

Given data points X (each row of X is one example) and the assignment idx of each point to one of the K centroids, recompute the K centroids so that the cost function is minimized. In other words:

$$\arg\min_{\mu_1,\cdots,\mu_K}J(idx(1),\cdots,idx(m),\mu_1,\cdots,\mu_K)$$ $$=\frac 1 m \sum_{i=1}^m \|x^{(i)}-\mu_{idx(i)}\|^2$$

$$\mu_t:=\frac {\sum_{idx(i)=t}x^{(i)}}{\sum_{idx(i)=t}1}$$

function centroids = computeCentroids(X, idx, K)
%COMPUTECENTROIDS returns the new centroids by computing the means of the 
%data points assigned to each centroid.
%   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by 
%   computing the means of the data points assigned to each centroid. It is
%   given a dataset X where each row is a single data point, a vector
%   idx of centroid assignments (i.e. each entry in range [1..K]) for each
%   example, and K, the number of centroids. You should return a matrix
%   centroids, where each row of centroids is the mean of the data points
%   assigned to it.
%

% Useful variables
[m n] = size(X);

% You need to return the following variables correctly.
centroids = zeros(K, n);


% ====================== YOUR CODE HERE ======================
% Instructions: Go over every centroid and compute mean of all points that
%               belong to it. Concretely, the row vector centroids(i, :)
%               should contain the mean of the data points assigned to
%               centroid i.
%
% Note: You can use a for-loop over the centroids to compute this.
%
    cluster_num=zeros(K,1); % cluster_num(t) = number of points assigned to cluster t
    for i=1:size(X,1)
        centroids(idx(i),:)=centroids(idx(i),:)+X(i,:);
        cluster_num(idx(i))=cluster_num(idx(i))+1;
    end
    for i=1:K
        centroids(i,:)=centroids(i,:)/cluster_num(i);
    end
% =============================================================


end
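The same update can be written without accumulating sums, via logical indexing. A sketch (note that both versions divide by zero and yield NaN rows if a cluster ends up empty, a case the exercise data does not trigger):

for t = 1:K
    % Mean of all examples currently assigned to centroid t
    centroids(t, :) = mean(X(idx == t, :), 1);
end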

kMeansInitCentroids

Randomly pick K of the data points as the initial cluster centroids; see the code for the details.

function centroids = kMeansInitCentroids(X, K)
%KMEANSINITCENTROIDS This function initializes K centroids that are to be 
%used in K-Means on the dataset X
%   centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
%   used with the K-Means on the dataset X
%

% You should return these values correctly
centroids = zeros(K, size(X, 2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should set centroids to randomly chosen examples from
%               the dataset X
%
    idx=randperm(size(X,1));
    centroids=X(idx(1:K),:);
% =============================================================

end
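A typical call sequence, assuming the runkMeans helper shipped with the exercise and its usual signature (the final argument toggles progress plotting):

K = 3; max_iters = 10;
initial_centroids = kMeansInitCentroids(X, K);
[centroids, idx] = runkMeans(X, initial_centroids, max_iters, true);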

Final Test Results

Fig 1. Paths traced by the 3 cluster centroids over 10 iterations of K-Means

Fig 2. Compressed images obtained by keeping only 16 and 32 colors

Principal Component Analysis

pca

The function must return U, the eigenvectors of the covariance matrix $\Sigma$ of all the data, and S, the diagonal matrix of the eigenvalues corresponding to those eigenvectors.

$$\Sigma=\frac 1 m X^T X$$

Applying singular value decomposition to $\Sigma$ yields U and S.

function [U, S] = pca(X)
%PCA Run principal component analysis on the dataset X
%   [U, S] = pca(X) computes eigenvectors of the covariance matrix of X
%   Returns the eigenvectors U, the eigenvalues (on diagonal) in S
%

% Useful values
[m, n] = size(X);

% You need to return the following variables correctly.
U = zeros(n);
S = zeros(n);

% ====================== YOUR CODE HERE ======================
% Instructions: You should first compute the covariance matrix. Then, you
%               should use the "svd" function to compute the eigenvectors
%               and eigenvalues of the covariance matrix. 
%
% Note: When computing the covariance matrix, remember to divide by m (the
%       number of examples).
%
    [U,S,~]=svd((X'*X)/m);
% =========================================================================

end
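PCA is sensitive to feature scales, so the data must be normalized first; a sketch of the intended call order, using the featureNormalize helper provided with the exercise:

[X_norm, mu, sigma] = featureNormalize(X);  % zero mean, unit variance per feature
[U, S] = pca(X_norm);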

projectData

Reduce the dimensionality of the data matrix X, whose rows are the examples, using the first K principal components (the first K columns of U).

Every column of the U computed above is a unit vector. For an example (a column vector $x^{(i)}$), dimensionality reduction means projecting it onto the subspace $C(u^{(1)},\cdots,u^{(K)})$ spanned by the first K eigenvectors, so the reduced vector is:

$$z^{(i)}=\begin{pmatrix}u^{(1)T}\\ \vdots\\ u^{(K)T}\end{pmatrix}x^{(i)}$$

Let $U_{reduced}=(u^{(1)},\cdots,u^{(K)})$; then

$$Z^T=(z^{(1)},\cdots,z^{(m)})=\begin{pmatrix}u^{(1)T}\\ \vdots\\ u^{(K)T}\end{pmatrix}(x^{(1)},\cdots,x^{(m)})=U_{reduced}^TX^T$$

$$Z=XU_{reduced}$$

function Z = projectData(X, U, K)
%PROJECTDATA Computes the reduced data representation when projecting only 
%on to the top k eigenvectors
%   Z = projectData(X, U, K) computes the projection of 
%   the normalized inputs X into the reduced dimensional space spanned by
%   the first K columns of U. It returns the projected examples in Z.
%

% You need to return the following variables correctly.
Z = zeros(size(X, 1), K);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the projection of the data using only the top K 
%               eigenvectors in U (first K columns). 
%               For the i-th example X(i,:), the projection on to the k-th 
%               eigenvector is given as follows:
%                    x = X(i, :)';
%                    projection_k = x' * U(:, k);
%
    Ureduced=U(:,1:K);
    Z=(Ureduced'*X')';
% =============================================================

end
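The diagonal of S also indicates how much variance each component explains, which is handy for choosing K; a small sketch:

s = diag(S);                       % eigenvalues, largest first
retained = sum(s(1:K)) / sum(s);   % fraction of total variance kept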

recoverData

Given the matrix Z of data reduced to K dimensions, where each row of Z is one reduced example, and the matrix U whose columns are the eigenvectors of the covariance matrix, recover an approximation X_rec of the original data.

For each reduced example $z^{(i)}$, the data is recovered by linearly combining the first K eigenvectors with the components of $z^{(i)}$ as coefficients:

$$x_{rec}^{(i)}=(u^{(1)},\cdots,u^{(K)})z^{(i)}$$

$$X_{rec}^T=(x_{rec}^{(1)},\cdots,x_{rec}^{(m)})=(u^{(1)},\cdots,u^{(K)})(z^{(1)},\cdots,z^{(m)})=U_{reduced}Z^T$$

$$X_{rec}=(U_{reduced}Z^T)^T$$

function X_rec = recoverData(Z, U, K)
%RECOVERDATA Recovers an approximation of the original data when using the 
%projected data
%   X_rec = RECOVERDATA(Z, U, K) recovers an approximation of the 
%   original data that has been reduced to K dimensions. It returns the
%   approximate reconstruction in X_rec.
%

% You need to return the following variables correctly.
X_rec = zeros(size(Z, 1), size(U, 1));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the approximation of the data by projecting back
%               onto the original space using the top K eigenvectors in U.
%
%               For the i-th example Z(i,:), the (approximate)
%               recovered data for dimension j is given as follows:
%                    v = Z(i, :)';
%                    recovered_j = v' * U(j, 1:K)';
%
%               Notice that U(j, 1:K) is a row vector.
%               
    Ureduced=U(:,1:K);
    X_rec=(Ureduced*(Z'))';
% =============================================================

end
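A round-trip usage sketch tying projection and recovery together (on normalized data, as above):

K = 1;
Z     = projectData(X_norm, U, K);   % compress to K dimensions
X_rec = recoverData(Z, U, K);        % approximate reconstruction in the original space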

Final Test Results

Fig 1. The two principal components found by PCA

Fig 2. Original data points (blue) and the points reconstructed after PCA dimensionality reduction (red)

Fig 3. Principal component faces obtained by PCA

Fig 4. Original face images alongside their reconstructions after PCA dimensionality reduction

Fig 5. Original data points (3 features)

Fig 6. The same data visualized after PCA reduction to two features

Programming Exercise 8: Anomaly Detection and Recommender Systems

Anomaly Detection Using Univariate Gaussian Distributions

estimateGaussian

For m examples with n features each, assume the features are mutually independent and let $P(x|\mu;\sigma^2)$ be the probability that an example x is normal. Then

$$P(x|\mu;\sigma^2)=P(x_1|\mu_1;\sigma_1^2)\cdots P(x_n|\mu_n;\sigma_n^2)$$

where

$$\mu_t=\frac 1 m \sum_{i=1}^m x_t^{(i)}$$

$$\sigma_t^2=\frac 1 m \sum_{i=1}^m (x_t^{(i)}-\mu_t)^2$$

function [mu sigma2] = estimateGaussian(X)
%ESTIMATEGAUSSIAN This function estimates the parameters of a 
%Gaussian distribution using the data in X
%   [mu sigma2] = estimateGaussian(X), 
%   The input X is the dataset with each n-dimensional data point in one row
%   The output is an n-dimensional vector mu, the mean of the data set
%   and the variances sigma^2, an n x 1 vector
% 

% Useful variables
[m, n] = size(X);

% You should return these values correctly
mu = zeros(n, 1);
sigma2 = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the mean of the data and the variances
%               In particular, mu(i) should contain the mean of
%               the data for the i-th feature and sigma2(i)
%               should contain variance of the i-th feature.
%
    mu=(mean(X))';
    for i=1:n
        sigma2(i)=sum((X(:,i)-mu(i)).*(X(:,i)-mu(i)))/m;
    end
% =============================================================


end
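The per-feature loop can be replaced by a one-liner, since var(X, 1) normalizes by m rather than m-1; a sketch:

mu = mean(X)';
sigma2 = var(X, 1)';   % the second argument 1 selects the 1/m normalization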

selectThreshold

Use the predicted probabilities pval and the ground truth yval to select the best threshold $\epsilon$, i.e., the one that maximizes the F1 score.

Let Positive denote anomalous examples (which are very rare) and Negative denote normal ones (which are common). Then:

TP = True Positive: the number of examples predicted Positive whose prediction is correct

TN = True Negative: the number of examples predicted Negative whose prediction is correct

FP = False Positive: the number of examples predicted Positive whose prediction is wrong

FN = False Negative: the number of examples predicted Negative whose prediction is wrong

Precision measures how many of the flagged examples are truly anomalous; recall measures how many of the true anomalies are caught:

$$Precision=\frac {TP}{TP+FP}$$

$$Recall=\frac {TP}{TP+FN}$$

$$F_1=\frac{2\,Precision\cdot Recall}{Precision+Recall}$$
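For example, with TP = 8, FP = 2, and FN = 4: $Precision = 8/10 = 0.8$, $Recall = 8/12 \approx 0.667$, and $F_1 = 2\cdot(0.8\cdot 0.667)/(0.8+0.667) \approx 0.727$.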

function [bestEpsilon bestF1] = selectThreshold(yval, pval)
%SELECTTHRESHOLD Find the best threshold (epsilon) to use for selecting
%outliers
%   [bestEpsilon bestF1] = SELECTTHRESHOLD(yval, pval) finds the best
%   threshold to use for selecting outliers based on the results from a
%   validation set (pval) and the ground truth (yval).
%

bestEpsilon = 0;
bestF1 = 0;
F1 = 0;

stepsize = (max(pval) - min(pval)) / 1000;
for epsilon = min(pval):stepsize:max(pval)
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the F1 score of choosing epsilon as the
    %               threshold and place the value in F1. The code at the
    %               end of the loop will compare the F1 score for this
    %               choice of epsilon and set it to be the best epsilon if
    %               it is better than the current choice of epsilon.
    %               
    % Note: You can use predictions = (pval < epsilon) to get a binary vector
    %       of 0's and 1's of the outlier predictions
        predictions=(pval<epsilon);
        truePositive=sum((predictions==yval)&(predictions));
        falsePositive=sum((predictions~=yval)&(predictions));
        trueNegative=sum((predictions==yval)&(~predictions));
        falseNegative=sum((predictions~=yval)&(~predictions));
        precision=truePositive/(truePositive+falsePositive);
        recall=truePositive/(truePositive+falseNegative);
        F1=2*(precision*recall)/(precision+recall);
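        % Note: if no examples are flagged, or none of the flagged ones are
        % true anomalies, the divisions above produce 0/0 = NaN; since any
        % comparison with NaN is false, the F1 > bestF1 check below simply
        % skips such thresholds.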
    % =============================================================

    if F1 > bestF1
       bestF1 = F1;
       bestEpsilon = epsilon;
    end
end

end

Final Test Results

Fig 1. Contour plot of the Gaussian distribution fitted to the training examples

Fig 2. Anomalies selected with the automatically chosen threshold

Content-Based Recommender Systems and Collaborative Filtering

cofiCostFunc

The cost function:

$$J(\theta^{(1)},\cdots,\theta^{(n_u)},x^{(1)},\cdots,x^{(n_m)})=$$ $$\frac 1 2 \sum_{i,j:R(i,j)=1}(\theta^{(j)T}x^{(i)}-y^{(i,j)})^2+\frac \lambda 2 \sum_{j=1}^{n_u}\sum_{k=1}^{n_f}\theta_k^{(j)2}+\frac \lambda 2 \sum_{i=1}^{n_m}\sum_{k=1}^{n_f}x_k^{(i)2}$$

The gradients for gradient descent:

$$\frac {\partial J}{\partial x_k^{(i)}}=\sum_{j:R(i,j)=1}(\theta^{(j)T}x^{(i)}-y^{(i,j)})\theta_k^{(j)}+\lambda x_k^{(i)}$$

$$\frac {\partial J}{\partial \theta_k^{(j)}}=\sum_{i:R(i,j)=1}(\theta^{(j)T}x^{(i)}-y^{(i,j)})x_k^{(i)}+\lambda\theta_k^{(j)}$$

I first implemented the gradient formulas with nested for-loops, but found MATLAB far too inefficient this way and training far too slow.

function [J, grad] = cofiCostFunc(params, Y, R, num_users, num_movies, ...
                                  num_features, lambda)
%COFICOSTFUNC Collaborative filtering cost function
%   [J, grad] = COFICOSTFUNC(params, Y, R, num_users, num_movies, ...
%   num_features, lambda) returns the cost and gradient for the
%   collaborative filtering problem.
%

% Unfold the U and W matrices from params
X = reshape(params(1:num_movies*num_features), num_movies, num_features);
Theta = reshape(params(num_movies*num_features+1:end), ...
                num_users, num_features);

            
% You need to return the following values correctly
J = 0;
X_grad = zeros(size(X));
Theta_grad = zeros(size(Theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost function and gradient for collaborative
%               filtering. Concretely, you should first implement the cost
%               function (without regularization) and make sure it
%               matches our costs. After that, you should implement the 
%               gradient and use the checkCostFunction routine to check
%               that the gradient is correct. Finally, you should implement
%               regularization.
%
% Notes: X - num_movies  x num_features matrix of movie features
%        Theta - num_users  x num_features matrix of user features
%        Y - num_movies x num_users matrix of user ratings of movies
%        R - num_movies x num_users matrix, where R(i, j) = 1 if the 
%            i-th movie was rated by the j-th user
%
% You should set the following variables correctly:
%
%        X_grad - num_movies x num_features matrix, containing the 
%                 partial derivatives w.r.t. to each element of X
%        Theta_grad - num_users x num_features matrix, containing the 
%                     partial derivatives w.r.t. to each element of Theta
%
    nm=size(Y,1);
    nf=size(X,2);
    nu=size(Y,2);
    J=sum(sum(((Theta*X'-Y').*R').^2))/2+...
        (lambda*sum(sum(Theta.^2))/2)+(lambda*sum(sum(X.^2))/2);
    
    for i=1:nm
        for k=1:nf
            for j=1:nu
                if(R(i,j)==1)
                    X_grad(i,k)=X_grad(i,k)+(Theta(j,:)*(X(i,:))'-Y(i,j))*Theta(j,k);
                end
            end
            X_grad(i,k)=X_grad(i,k)+lambda*X(i,k);
        end
    end
    
    for j=1:nu
        for k=1:nf
            for i=1:nm
                if(R(i,j)==1)
                    Theta_grad(j,k)=Theta_grad(j,k)+(Theta(j,:)*(X(i,:))'-Y(i,j))*X(i,k);
                end
            end
            Theta_grad(j,k)=Theta_grad(j,k)+lambda*Theta(j,k);
        end
    end
% =============================================================

grad = [X_grad(:); Theta_grad(:)];

end

Afterwards, I kept only the outer for-loop and replaced every inner loop with matrix operations:

1. Let

$$y_{temp}=(y^{(i,j_1)},\cdots,y^{(i,j_t)}),\quad R(i,j_1)=\cdots=R(i,j_t)=1$$

$$\theta^T_{temp}=(\theta^{(j_1)},\cdots,\theta^{(j_t)}),\quad R(i,j_1)=\cdots=R(i,j_t)=1$$

Then:

$$\frac {\partial J}{\partial x_k^{(i)}}=\sum_{j:R(i,j)=1}(\theta^{(j)T}x^{(i)}-y^{(i,j)})\theta_k^{(j)}+\lambda x_k^{(i)}=\sum_{j:R(i,j)=1}(x^{(i)T}\theta^{(j)}-y^{(i,j)})\theta_k^{(j)}+\lambda x_k^{(i)}$$

$$(\frac {\partial J}{\partial x_1^{(i)}},\cdots,\frac {\partial J}{\partial x_{n_f}^{(i)}})=(x^{(i)T}\theta^T_{temp}-y_{temp})\theta_{temp}+\lambda x^{(i)T}$$

2. Let

$$y_{temp}=(y^{(i_1,j)},\cdots,y^{(i_t,j)}),\quad R(i_1,j)=\cdots=R(i_t,j)=1$$

$$X^T_{temp}=(x^{(i_1)},\cdots,x^{(i_t)}),\quad R(i_1,j)=\cdots=R(i_t,j)=1$$

$$\frac {\partial J}{\partial \theta_k^{(j)}}=\sum_{i:R(i,j)=1}(\theta^{(j)T}x^{(i)}-y^{(i,j)})x_k^{(i)}+\lambda\theta_k^{(j)}$$

$$(\frac {\partial J}{\partial \theta_1^{(j)}},\cdots,\frac {\partial J}{\partial \theta_{n_f}^{(j)}})=(\theta^{(j)T}X^T_{temp}-y_{temp})X_{temp}+\lambda \theta^{(j)T}$$

function [J, grad] = cofiCostFunc(params, Y, R, num_users, num_movies, ...
                                  num_features, lambda)
%COFICOSTFUNC Collaborative filtering cost function
%   [J, grad] = COFICOSTFUNC(params, Y, R, num_users, num_movies, ...
%   num_features, lambda) returns the cost and gradient for the
%   collaborative filtering problem.
%

% Unfold the U and W matrices from params
X = reshape(params(1:num_movies*num_features), num_movies, num_features);
Theta = reshape(params(num_movies*num_features+1:end), ...
                num_users, num_features);

            
% You need to return the following values correctly
J = 0;
X_grad = zeros(size(X));
Theta_grad = zeros(size(Theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost function and gradient for collaborative
%               filtering. Concretely, you should first implement the cost
%               function (without regularization) and make sure it
%               matches our costs. After that, you should implement the 
%               gradient and use the checkCostFunction routine to check
%               that the gradient is correct. Finally, you should implement
%               regularization.
%
% Notes: X - num_movies  x num_features matrix of movie features
%        Theta - num_users  x num_features matrix of user features
%        Y - num_movies x num_users matrix of user ratings of movies
%        R - num_movies x num_users matrix, where R(i, j) = 1 if the 
%            i-th movie was rated by the j-th user
%
% You should set the following variables correctly:
%
%        X_grad - num_movies x num_features matrix, containing the 
%                 partial derivatives w.r.t. to each element of X
%        Theta_grad - num_users x num_features matrix, containing the 
%                     partial derivatives w.r.t. to each element of Theta
%
    nm=size(Y,1);
    nf=size(X,2);
    nu=size(Y,2);
    
    J=sum(sum(((Theta*X'-Y').*R').^2))/2+...
        (lambda*sum(sum(Theta.^2))/2)+(lambda*sum(sum(X.^2))/2);
    
    for i=1:nm
        idx=find(R(i,:)==1);
        Theta_tmp=Theta(idx,:);
        Y_tmp=Y(i,idx);
        X_grad(i,:)=(X(i,:)*Theta_tmp'-Y_tmp)*Theta_tmp+lambda*X(i,:);
    end
    
    for j=1:nu
        idx=find(R(:,j)==1);
        X_tmp=X(idx,:);
        Y_tmp=Y(idx,j)';
        Theta_grad(j,:)=(((Theta(j,:))*X_tmp')-Y_tmp)*X_tmp+lambda*Theta(j,:);
    end
% =============================================================

grad = [X_grad(:); Theta_grad(:)];

end
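For reference, the remaining loop can be eliminated as well. A fully vectorized sketch of the same cost and gradients; the element-wise multiplication by R zeroes out every unrated entry, so the sums effectively run only over R(i,j) = 1:

E = (X * Theta' - Y) .* R;   % prediction errors, only where a rating exists
J = sum(E(:).^2) / 2 + (lambda/2) * (sum(Theta(:).^2) + sum(X(:).^2));
X_grad     = E  * Theta + lambda * X;       % num_movies x num_features
Theta_grad = E' * X     + lambda * Theta;   % num_users  x num_features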

Final Test Results
