Log
2017.11.28 update: deep learning
Back from a business trip last week, I started reading a paper, ScSPM, which comes with source code; I hope I can truly understand it. After all, the paper covers dense SIFT feature extraction + sparse coding + linear SVM, a very comprehensive mix, and I want to work through the code. In the process I discovered how much I am missing: the author implements his own SIFT extraction, sparse coding, and SVM, and the code is hard to follow. Then on Saturday the lab started its "academic exchange" sessions, where the senior students walked through their papers, and it turned out none of us had properly understood even basic methods like gradient descent. I realized I have never studied images or machine learning systematically: I have read many scattered methods on blogs, but nothing that forms a system. So I decided to follow Coursera's Machine Learning course seriously, only to find the videos won't play. The assignments can still be submitted, though, so I want to work through them and earn the certificate in the end. I found MATLAB code for many of the course assignments on GitHub, and today I read the first programming assignment in full; it gave me a concrete, code-level feel for the cost function, parameter learning, and gradient descent, which felt great. I'll work through the remaining assignments carefully.
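For reference, here is a minimal sketch of what that first assignment computes: the squared-error cost and batch gradient descent for univariate linear regression. This is my own Python/NumPy rendition rather than the course's MATLAB starter code; the function names and toy data are made up for illustration.

```python
import numpy as np

def compute_cost(X, y, theta):
    """Squared-error cost J(theta) = 1/(2m) * sum((X @ theta - y)**2)."""
    m = len(y)
    r = X @ theta - y
    return (r @ r) / (2 * m)

def gradient_descent(X, y, theta, alpha=0.02, num_iters=5000):
    """Batch gradient descent: theta <- theta - (alpha/m) * X^T (X theta - y)."""
    m = len(y)
    for _ in range(num_iters):
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
    return theta

# Toy data: y ~ 2x + 1 plus noise; prepend a column of ones for the intercept.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=50)
X = np.column_stack([np.ones_like(x), x])

theta = gradient_descent(X, y, np.zeros(2))
print(theta)                      # roughly [1, 2]
print(compute_cost(X, y, theta))  # small final cost
```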
I now have the Coursera Machine Learning certificate; the later part of the course felt a bit thin, though, and it uses MATLAB as the tool.
Machine vision
## Study Outline

Organizing the study outline gives a more systematic view of the knowledge structure.

### Contents

#### Week 1: Introduction to Machine Learning
- 1.1 What is machine learning?
- 1.2 Supervised Learning
- 1.3 Unsupervised Learning

#### Week 1: Linear Regression with One Variable
- 2.1 Model Representation
- 2.2 Cost Function
- 2.3 Gradient Descent
- 2.4 Gradient Descent for Linear Regression

#### Week 2: Linear Regression with Multiple Variables
- 3.1 Multiple Features
- 3.2 Gradient Descent for Multiple Variables
- 3.3 Feature Scaling
- 3.4 Learning Rate

#### Week 2: Polynomial Regression and the Normal Equation
- 4.1 Polynomial Regression
- 4.2 Normal Equation (a code sketch follows this outline)

#### Week 3: Logistic Regression
- 5.1 Classification Problems
- 5.2 Modeling Classification Problems
- 5.3 Decision Boundary
- 5.4 Cost Function
- 5.5 Multiclass Classification

#### Week 3: Regularization
- 6.1 The Problem of Overfitting
- 6.2 Regularized Cost Function
- 6.3 Regularized Linear Regression
- 6.4 Regularized Logistic Regression

#### Week 4: Neural Networks: Representation
- 7.1 Non-Linear Hypothesis
- 7.2 Introduction to Neural Networks
- 7.3 Model Representation
- 7.4 Neural Network Model Representation
- 7.5 Forward Propagation
- 7.6 Understanding Neural Networks
- 7.7 Neural Network Example: Binary Logical Operators
- 7.8 Multiclass Classification

#### Week 5: Neural Networks: Learning
- 8.1 Neural Network Cost Function
- 8.2 Backpropagation Algorithm
- 8.3 Gradient Checking
- 8.4 Random Initialization
- 8.5 Putting It Together

#### Week 6: Advice for Applying Machine Learning
- 9.1 Deciding What to Try Next
- 9.2 Evaluating a Hypothesis
- 9.3 Model Selection (Cross-Validation Sets)
- 9.4 Diagnosing Bias vs. Variance
- 9.5 Regularization and Bias/Variance
- 9.6 Learning Curves
- 9.7 Deciding What to Try Next (Revisited)

#### Week 6: Machine Learning System Design
- 10.1 Prioritizing What to Work On
- 10.2 Error Analysis
- 10.3 Error Metrics for Skewed Classes
- 10.4 Trading Off Precision and Recall
- 10.5 Data for Machine Learning

#### Week 7: Support Vector Machines
- 11.1 Optimization Objective
- 11.2 SVM Decision Boundary
- 11.3 Kernels
- 11.4 Logistic Regression vs. Support Vector Machines

#### Week 8: Clustering
- 12.1 K-Means Algorithm
- 12.2 Optimization Objective
- 12.3 Random Initialization
- 12.4 Choosing the Number of Clusters

#### Week 8: Dimensionality Reduction
- 13.1 Motivation I: Data Compression
- 13.2 Motivation II: Data Visualization
- 13.3 Principal Component Analysis
- 13.4 The PCA Algorithm
- 13.5 Choosing the Number of Principal Components
- 13.6 Applying PCA

#### Week 9: Anomaly Detection
- 14.1 Density Estimation
- 14.2 Gaussian Distribution
- 14.3 Anomaly Detection
- 14.4 Evaluating an Anomaly Detection System
- 14.5 Anomaly Detection vs. Supervised Learning
- 14.6 Choosing Features
- 14.7 Multivariate Gaussian Distribution

#### Week 9: Recommender Systems
- 15.1 Problem Formulation
- 15.2 Content-Based Recommendations
- 15.3 Collaborative Filtering Algorithm
- 15.4 Mean Normalization

#### Week 10: Large Scale Machine Learning
- 16.1 Learning with Large Datasets
- 16.2 Stochastic Gradient Descent
- 16.3 Mini-Batch Gradient Descent
- 16.4 Stochastic Gradient Descent Convergence
- 16.5 Online Learning
- 16.6 Map Reduce and Data Parallelism

#### Week 10: Application Example: Photo OCR
- 17.1 Problem Description and Pipeline
- 17.2 Sliding Windows
- 17.3 Getting Lots of Data and Artificial Data
- 17.4 Ceiling Analysis

## Reference
[Coursera Machine Learning course notes](https://wenku.baidu.com/view/f328b62b69dc5022abea0068.html)
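As a concrete companion to item 4.2 above, here is a minimal sketch of the normal equation, theta = (X^T X)^(-1) X^T y, in Python/NumPy. It is my own illustration, not course code; `np.linalg.lstsq` is used instead of an explicit matrix inverse because it is numerically safer and also handles a rank-deficient design matrix.

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form least squares, theta = (X^T X)^{-1} X^T y, computed with
    lstsq, which avoids forming X^T X explicitly and falls back to the
    pseudo-inverse when X is rank-deficient (e.g. redundant features)."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

# Same toy setup as the gradient-descent sketch above.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=50)
X = np.column_stack([np.ones_like(x), x])

print(normal_equation(X, y))  # roughly [1, 2], with no learning rate or iterations
```

Unlike gradient descent, this needs no learning rate or feature scaling, but the solve is roughly cubic in the number of features, which is why the course recommends gradient descent when the feature count is very large.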
## Course Contents

### Neural Networks and Deep Learning
- Week 1: Introduction to deep learning
- Week 2: Neural Networks Basics
- Week 3: Shallow Neural networks
- Week 4: Deep Neural Networks

### Improving Deep Neural Networks
- Week 1: Practical aspects of Deep Learning (Initialization, Regularization, Gradient Checking)
- Week 2: Optimization algorithms
- Week 3: Hyperparameter tuning, Batch Normalization and Programming Frameworks

### Convolutional Neural Networks
- Week 1: Foundations of Convolutional Neural Networks
- Week 2: Deep convolutional models: case studies
- Week 3: Object detection
- Week 4: Special applications: Face recognition & Neural style transfer

### Course 1: Neural Networks and Deep Learning

#### Week 1: Introduction to Deep Learning
- 1.1 Welcome
- 1.2 What is a Neural Network?
- 1.3 Supervised Learning with Neural Networks
- 1.4 Why is Deep Learning taking off?
- 1.5 About this Course
- 1.6 Course Resources
- 1.7 Geoffrey Hinton interview

#### Week 2: Basics of Neural Network Programming
- 2.1 Binary Classification
- 2.2 Logistic Regression
- 2.3 Logistic Regression Cost Function
- 2.4 Gradient Descent
- 2.5 Derivatives
- 2.6 More Derivative Examples
- 2.7 Computation Graph
- 2.8 Derivatives with a Computation Graph
- 2.9 Logistic Regression Gradient Descent
- 2.10 Gradient Descent on m Examples
- 2.11 Vectorization
- 2.12 More Examples of Vectorization
- 2.13 Vectorizing Logistic Regression (a code sketch follows this section)
- 2.14 Vectorizing Logistic Regression's Gradient
- 2.15 Broadcasting in Python
- 2.16 A Note on Python/NumPy Vectors
- 2.17 Quick Tour of Jupyter/iPython Notebooks
- 2.18 Explanation of the Logistic Regression Cost Function

#### Week 3: Shallow Neural Networks
- 3.1 Neural Network Overview
- 3.2 Neural Network Representation
- 3.3 Computing a Neural Network's Output
- 3.4 Vectorizing Across Multiple Examples
- 3.5 Justification for Vectorized Implementation
- 3.6 Activation Functions
- 3.7 Why Do You Need Non-Linear Activation Functions?
- 3.8 Derivatives of Activation Functions
- 3.9 Gradient Descent for Neural Networks
- 3.10 Backpropagation Intuition (Optional)
- 3.11 Random Initialization

#### Week 4: Deep Neural Networks
- 4.1 Deep L-Layer Neural Network
- 4.2 Forward and Backward Propagation
- 4.3 Forward Propagation in a Deep Network
- 4.4 Getting Your Matrix Dimensions Right
- 4.5 Why Deep Representations?
- 4.6 Building Blocks of Deep Neural Networks
- 4.7 Parameters vs. Hyperparameters
- 4.8 What Does This Have to Do with the Brain?

### Course 2: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

#### Week 1: Practical Aspects of Deep Learning
- 1.1 Train / Dev / Test Sets
- 1.2 Bias / Variance
- 1.3 Basic Recipe for Machine Learning
- 1.4 Regularization
- 1.5 Why Does Regularization Reduce Overfitting?
- 1.6 Dropout Regularization
- 1.7 Understanding Dropout
- 1.8 Other Regularization Methods
- 1.9 Normalizing Inputs
- 1.10 Vanishing / Exploding Gradients
- 1.11 Weight Initialization for Deep Networks
- 1.12 Numerical Approximation of Gradients
- 1.13 Gradient Checking
- 1.14 Gradient Checking Implementation Notes

#### Week 2: Optimization Algorithms
- 2.1 Mini-Batch Gradient Descent
- 2.2 Understanding Mini-Batch Gradient Descent
- 2.3 Exponentially Weighted Averages
- 2.4 Understanding Exponentially Weighted Averages
- 2.5 Bias Correction in Exponentially Weighted Averages (a code sketch follows this section)
- 2.6 Gradient Descent with Momentum
- 2.7 RMSprop (Root Mean Square Prop)
- 2.8 Adam Optimization Algorithm
- 2.9 Learning Rate Decay
- 2.10 The Problem of Local Optima

#### Week 3: Hyperparameter Tuning, Batch Normalization and Programming Frameworks
- 3.1 Tuning Process
- 3.2 Using an Appropriate Scale to Pick Hyperparameters
- 3.3 Hyperparameter Tuning in Practice: Pandas vs. Caviar
- 3.4 Normalizing Activations in a Network
- 3.5 Fitting Batch Norm into a Neural Network
- 3.6 Why Does Batch Norm Work?
- 3.7 Batch Norm at Test Time
- 3.8 Softmax Regression
- 3.9 Training a Softmax Classifier
- 3.10 Deep Learning Frameworks
- 3.11 TensorFlow

### Course 3: Structuring Machine Learning Projects

#### Week 1: ML Strategy (1)
- 1.1 Why ML Strategy?
- 1.2 Orthogonalization
- 1.3 Single Number Evaluation Metric
- 1.4 Satisficing and Optimizing Metrics
- 1.5 Train/Dev/Test Distributions
- 1.6 Size of the Dev and Test Sets
- 1.7 When to Change Dev/Test Sets and Metrics
- 1.8 Why Human-Level Performance?
- 1.9 Avoidable Bias
- 1.10 Understanding Human-Level Performance
- 1.11 Surpassing Human-Level Performance
- 1.12 Improving Your Model Performance

#### Week 2: ML Strategy (2)
- 2.1 Carrying Out Error Analysis
- 2.2 Cleaning Up Incorrectly Labeled Data
- 2.3 Build Your First System Quickly, Then Iterate
- 2.4 Training and Testing on Different Distributions
- 2.5 Bias and Variance with Mismatched Data Distributions
- 2.6 Addressing Data Mismatch
- 2.7 Transfer Learning
- 2.8 Multi-Task Learning
- 2.9 What Is End-to-End Deep Learning?
- 2.10 Whether to Use End-to-End Deep Learning

### Course 4: Convolutional Neural Networks

#### Week 1: Foundations of Convolutional Neural Networks
- 1.1 Computer Vision
- 1.2 Edge Detection Example
- 1.3 More Edge Detection
- 1.4 Padding
- 1.5 Strided Convolutions
- 1.6 Convolutions over Volumes
- 1.7 One Layer of a Convolutional Network
- 1.8 A Simple Convolutional Network Example
- 1.9 Pooling Layers
- 1.10 Convolutional Neural Network Example
- 1.11 Why Convolutions?

#### Week 2: Deep Convolutional Models: Case Studies
- 2.1 Why Look at Case Studies?
- 2.2 Classic Networks
- 2.3 Residual Networks (ResNets)
- 2.4 Why Do ResNets Work?
- 2.5 Network in Network and 1×1 Convolutions
- 2.6 Inception Network Motivation
- 2.7 Inception Network
- 2.8 Using Open-Source Implementations
- 2.9 Transfer Learning
- 2.10 Data Augmentation
- 2.11 The State of Computer Vision

#### Week 3: Object Detection
- 3.1 Object Localization
- 3.2 Landmark Detection
- 3.3 Object Detection
- 3.4 Convolutional Implementation of Sliding Windows
- 3.5 Bounding Box Predictions
- 3.6 Intersection over Union
- 3.7 Non-Max Suppression
- 3.8 Anchor Boxes
- 3.9 Putting It Together: The YOLO Algorithm
- 3.10 Region Proposals (Optional)

#### Week 4: Special Applications: Face Recognition & Neural Style Transfer
- 4.1 What Is Face Recognition?
- 4.2 One-Shot Learning
- 4.3 Siamese Network
- 4.4 Triplet Loss
- 4.5 Face Verification and Binary Classification
- 4.6 What Is Neural Style Transfer?
- 4.7 What Are Deep ConvNets Learning?
- 4.8 Cost Function
- 4.9 Content Cost Function
- 4.10 Style Cost Function
- 4.11 1D and 3D Generalizations of Models

### Interviews with AI Masters
- Andrew Ng interviews Geoffrey Hinton
- Andrew Ng interviews Ian Goodfellow
- Andrew Ng interviews Ruslan Salakhutdinov
- Andrew Ng interviews Yoshua Bengio
- Andrew Ng interviews Yuanqing Lin
- Andrew Ng interviews Pieter Abbeel
- Andrew Ng interviews Andrej Karpathy
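To make the vectorization items in Course 1, Week 2 (2.11 through 2.14) concrete, here is a minimal Python/NumPy sketch of vectorized logistic regression: one forward and backward pass over all m examples with no explicit loop, with broadcasting handling the scalar bias. The names (`sigmoid`, `propagate`) and the toy data are mine, not the course notebooks'.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """One vectorized forward/backward pass of logistic regression.
    X: (n_features, m) with examples as columns, Y: (1, m) labels."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)        # (1, m); the scalar b broadcasts
    eps = 1e-12                     # guard against log(0) when A saturates
    cost = -np.mean(Y * np.log(A + eps) + (1 - Y) * np.log(1 - A + eps))
    dw = (X @ (A - Y).T) / m        # (n_features, 1)
    db = np.mean(A - Y)
    return cost, dw, db

# Toy data: 2 features, 4 examples; train with plain gradient descent.
X = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.5, 0.0, -0.5]])
Y = np.array([[0, 0, 1, 1]])
w, b = np.zeros((2, 1)), 0.0
for _ in range(500):
    cost, dw, db = propagate(w, b, X, Y)
    w -= 0.3 * dw
    b -= 0.3 * db
print(cost)  # should have dropped well below the initial ~0.693
```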
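Likewise, for Course 2, Week 2 (items 2.3 through 2.5), a small sketch of exponentially weighted averages with bias correction, the building block behind momentum, RMSprop, and Adam. Again this is my own illustration of the formulas from the lectures, with made-up data.

```python
import numpy as np

def ema_with_bias_correction(xs, beta=0.9):
    """Exponentially weighted average v_t = beta * v_{t-1} + (1 - beta) * x_t,
    reported as v_t / (1 - beta**t) so that early values are not dragged
    toward the zero initialization."""
    v = 0.0
    out = []
    for t, x in enumerate(xs, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t))
    return np.array(out)

# Noisy samples around 10; the corrected average tracks 10 from the start.
noisy = 10 + np.random.default_rng(0).normal(0, 2, size=100)
smoothed = ema_with_bias_correction(noisy)
print(smoothed[:3], smoothed[-1])  # early values already near 10
```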