Study Notes on Andrew Ng's 2014 Stanford Machine Learning Open Course

Resource link: https://pan.baidu.com/s/1c1MIm1E  password: gant

chapter 2 : linear regression with one feature

 

************************************************************************************************************

************************************************************************************************************

chapter 4 : linear regression with multiple features

  • When dealing with problems of many features, we must make sure the features are on similar scales; this helps gradient descent converge much faster (see the Octave sketch after this list).

  • If the learning rate α is too small, the number of iterations needed to converge will be very high; if α is too large, each iteration may fail to decrease the cost function and may overshoot the minimum, so gradient descent can fail to converge.
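
A minimal Octave sketch of both points above, with illustrative names (X is the m-by-n feature matrix, y the m-by-1 target vector; alpha and the iteration count are arbitrary choices):

    % Feature scaling (mean normalization), then batch gradient descent.
    mu = mean(X);                      % per-feature mean
    sd = std(X);                       % per-feature standard deviation
    X_norm = (X - mu) ./ sd;           % features now have similar scales
    Xb = [ones(size(X,1),1) X_norm];   % prepend the bias column

    alpha = 0.01;                      % learning rate: too small -> slow
                                       % convergence, too large -> divergence
    theta = zeros(size(Xb,2),1);
    m = length(y);
    for iter = 1:400
      % simultaneous update of all parameters
      theta = theta - (alpha/m) * Xb' * (Xb*theta - y);
    end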

        

************************************************************************************************************

************************************************************************************************************

chapter 5 : Octave

  • references 
  • plot 
    x = [0:0.01:1];
    y1 = sin(2*pi*x);
    plot(x, y1);
    y2 = cos(2*pi*x);
    hold on;                      % keep the first curve while adding the second
    plot(x, y2);
    xlabel('time');
    ylabel('value');
    title('my plot');
    legend('sin', 'cos');
    print -dpng 'my.png';         % save the current figure as a PNG file
    close;
    figure(1); plot(x, y1);       % each curve in its own window
    figure(2); plot(x, y2);
    figure(3);                    % one window, two side-by-side panels
    subplot(1,2,1); plot(x, y1);
    subplot(1,2,2); plot(x, y2);
    axis([0.5 1 -1 1]);           % change the x and y axis ranges
    clf;                          % clear the current figure
    a = magic(5)                  % no semicolon: echo the 5x5 magic square
    imagesc(a);                   % visualize the matrix as a colored image
    imagesc(a), colorbar, colormap gray;

 

*************************************************************************************************************

*************************************************************************************************************

 

chapter 6 : logistic regression and regularization

 

************************************************************************************************************

************************************************************************************************************

chapter 7 : regularization

************************************************************************************************************

************************************************************************************************************

chapter 8 : neural network

  • cost function (the standard form is reproduced after this list)

  • forward propagation

  • backward propagation

  • mathematical derivation
  • numerical estimation of gradient (a sketch follows this list)
  • random initialization and the steps of training a neural network
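
For reference, the regularized cost function from the lectures (the original figure is missing here; this is the standard form):

    J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}
      \left[ y_k^{(i)} \log\big(h_\Theta(x^{(i)})\big)_k
           + (1 - y_k^{(i)}) \log\big(1 - (h_\Theta(x^{(i)}))_k\big) \right]
      + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \big(\Theta_{ji}^{(l)}\big)^2

And a minimal Octave sketch of the numerical gradient estimate used for gradient checking (the function name is illustrative; ε = 1e-4 as suggested in the lectures):

    % Central-difference approximation of the gradient of J at theta,
    % used to sanity-check the backpropagation gradients.
    function numgrad = numerical_gradient(J, theta)
      numgrad = zeros(size(theta));
      perturb = zeros(size(theta));
      e = 1e-4;
      for i = 1:numel(theta)
        perturb(i) = e;
        numgrad(i) = (J(theta + perturb) - J(theta - perturb)) / (2*e);
        perturb(i) = 0;
      end
    end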

********************************************************************************************************

********************************************************************************************************

chapter 10 : Deciding what to try next

  • evaluating a hypothesis with cross validation (a sketch follows this list)
  • diagnosing bias and variance
  • learning curves and deciding what to do next
  • an expert's study notes
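
A minimal Octave sketch of the split used for these diagnostics (all names are illustrative; theta is assumed to be the parameters already fit on the training set):

    % 60/20/20 split into training / cross-validation / test sets.
    m = size(X, 1);
    idx = randperm(m);                        % shuffle before splitting
    n_tr = round(0.6*m);  n_cv = round(0.2*m);
    Xtr = X(idx(1:n_tr), :);            ytr = y(idx(1:n_tr));
    Xcv = X(idx(n_tr+1:n_tr+n_cv), :);  ycv = y(idx(n_tr+1:n_tr+n_cv));

    % Unregularized squared error, evaluated on each set.
    err = @(Xs, ys) mean((Xs*theta - ys).^2) / 2;
    train_error = err(Xtr, ytr);   % high on both sets  -> high bias
    cv_error    = err(Xcv, ycv);   % low train, high CV -> high variance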

 

**********************************************************************************************************

**********************************************************************************************************

chapter 11 : precision and recall 

  • skewed data vs precision and recall (a sketch follows)
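
A minimal Octave sketch of these metrics for a binary classifier (y and pred are illustrative 0/1 vectors, with y = 1 the rare class):

    tp = sum((pred == 1) & (y == 1));   % true positives
    fp = sum((pred == 1) & (y == 0));   % false positives
    fn = sum((pred == 0) & (y == 1));   % false negatives

    precision = tp / (tp + fp);         % of predicted positives, fraction correct
    recall    = tp / (tp + fn);         % of actual positives, fraction found
    F1 = 2 * precision * recall / (precision + recall);   % single comparison score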

*********************************************************************************************************

*********************************************************************************************************

chapter 12 : SVM 

  • maximizing the projections of the training examples onto θ (while keeping ‖θ‖ small) is what gives the SVM its large margin
  • kernels; do perform feature scaling before using the Gaussian kernel (a sketch follows this list)
  • kernels need to satisfy a technical condition called "Mercer's Theorem" to make sure SVM packages' optimizations run correctly and do not diverge

  • polynomial kernel: k(x, l) = (xᵀl + constant)^degree

  • the SVM training objective is a convex optimization problem

  • an expert's study blog
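
A minimal Octave sketch of the Gaussian kernel (x1, x2, and sigma are illustrative; scale features first, since large-range features otherwise dominate the distance):

    % Similarity between two feature vectors under the Gaussian (RBF) kernel.
    function sim = gaussian_kernel(x1, x2, sigma)
      sim = exp(-sum((x1 - x2).^2) / (2 * sigma^2));
    end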

*********************************************************************************************************

*********************************************************************************************************

chapter 13 : Unsupervised learning and clustering

*********************************************************************************************************

*********************************************************************************************************

chapter 14 : PCA

  • first perform mean normalization and feature scaling so that the features have zero mean and comparable ranges of values
  • data preprocessing
  • PCA and SVD, the implementation of PCA, and the choice of K (a sketch follows this list)

  • compute the PCA parameters only on the training set, then apply them to the test and cross-validation sets
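
A minimal Octave sketch of this pipeline (X and K are illustrative; mu and sd must be reused unchanged on the test and cross-validation sets):

    % PCA via SVD of the covariance matrix, after mean normalization.
    [m, n] = size(X);
    mu = mean(X);   sd = std(X);
    X_norm = (X - mu) ./ sd;

    Sigma = (X_norm' * X_norm) / m;    % n x n covariance matrix
    [U, S, V] = svd(Sigma);            % columns of U = principal components

    s = diag(S);
    K = 2;                             % choose the smallest K with
    retained = sum(s(1:K)) / sum(s);   % retained >= 0.99, say
    Z = X_norm * U(:, 1:K);            % project onto first K components
    X_rec = Z * U(:, 1:K)';            % approximate reconstruction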

chapter 15 :
