Deep Learning Algorithm Index (continuously updated)

https://zhuanlan.zhihu.com/p/26004118

Machine learning has grown explosively in the last few years, and material is flooding the internet. Alongside my own study, I am building an index that collects the most promising (not necessarily the most complete set of) machine learning algorithms, the best tutorials, and open-source code close to the industrial frontier. My ability is limited, so corrections and additions from readers are welcome.

Models

1. Reinforcement Learning

Leading figure: David Silver

Tutorials

David Silver's 2015 UCL Course on RL: Teaching

David Silver's Tutorial: Deep Reinforcement Learning

Deep Reinforcement Learning, Spring 2017 course: CS 294 Deep Reinforcement Learning, Spring 2017

David Silver's AlphaGo paper published in Nature: Mastering the Game of Go with Deep Neural Networks and Tree Search

2014: Deterministic Policy Gradient Algorithms

ICLR 2016, DeepMind's DDPG algorithm: Continuous Control with Deep Reinforcement Learning

2015: Deep Reinforcement Learning with Double Q-learning

2015: Massively Parallel Methods for Deep Reinforcement Learning

2016: Prioritized Experience Replay

2016: Dueling Network Architectures for Deep Reinforcement Learning

2017: Value Iteration Networks
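
The papers above (DQN with Double Q-learning, prioritized replay, dueling heads, DDPG, etc.) all build on the basic Q-learning update. As a rough anchor for reading them, here is a minimal tabular Q-learning sketch; it is my own illustration, not code from any of the linked papers, and the problem size and hyperparameters are assumed values.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative only; the papers above
# replace the table with a neural network and add replay buffers,
# target networks, dueling heads, etc.).
n_states, n_actions = 16, 4                # assumed toy problem size
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # assumed hyperparameters

def epsilon_greedy(state):
    """Behaviour policy: explore with probability epsilon, else act greedily."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(Q[state].argmax())

def q_learning_update(s, a, r, s_next, done):
    """One Bellman backup: Q(s,a) += alpha * (target - Q(s,a))."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```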

Blogs

[Part 1 of David Silver's Reinforcement Learning Open Course] Introduction to Reinforcement Learning

An Analysis of AlphaGo - Zhihu Column

In Depth | David Silver's Comprehensive Overview of Deep Reinforcement Learning: From Basic Concepts to AlphaGo

Feature | Facebook's Yuandong Tian Explains in Detail: How Does Deep Learning Do Game Reasoning?

2. GAN

Leading figure: Ian Goodfellow

2014, Ian Goodfellow proposes GAN: [1406.2661] Generative Adversarial Networks

2017, WGAN: Wasserstein GAN, with a source implementation at martinarjovsky/WassersteinGAN

Reddit discussion: [R] [1701.07875] Wasserstein GAN • r/MachineLearning

2015, DCGAN: [1511.06434] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

2017, Google Brain's AdaGAN:

Blogs

The Astounding Wasserstein GAN - Zhihu Column
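
As a quick reminder of what changes relative to the original GAN, here is a heavily simplified sketch of the WGAN training step described in the paper: the critic drops the sigmoid/log loss, its weights are clipped to keep it roughly Lipschitz, and RMSProp is used. Network sizes and hyperparameters below are assumptions for illustration, not taken from martinarjovsky/WassersteinGAN.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, clip_value = 8, 2, 0.01   # assumed toy dimensions

critic = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

def critic_step(real_batch):
    """Maximize E[critic(real)] - E[critic(fake)], then clip critic weights."""
    z = torch.randn(real_batch.size(0), latent_dim)
    fake_batch = generator(z).detach()
    loss_c = critic(fake_batch).mean() - critic(real_batch).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-clip_value, clip_value)   # weight clipping
    return loss_c.item()

def generator_step(batch_size):
    """Minimize -E[critic(fake)] (no log terms, unlike the original GAN)."""
    z = torch.randn(batch_size, latent_dim)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item()
```

In the paper the critic is updated several times per generator update; that outer loop is omitted here.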

3. Deep Learning

Leading figures: the three giants Hinton, LeCun, and Bengio

Tutorials

A course taught by Vincent Vanhoucke, Principal Scientist at Google, clear and easy to follow: From Machine Learning to Deep Learning (Udacity)


Tricks of convolutional neural networks: A guide to convolution arithmetic for deep learning (a worked output-size example is given at the end of this section)

The Deep Learning book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep Learning
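
The convolution-arithmetic guide listed above largely revolves around one formula. As a quick worked example (my own, not the guide's): the spatial output size of a convolution with input size i, kernel size k, padding p, and stride s is floor((i + 2p - k) / s) + 1.

```python
def conv_output_size(i, k, p=0, s=1):
    """Spatial output size of a convolution: floor((i + 2p - k) / s) + 1."""
    return (i + 2 * p - k) // s + 1

# e.g. a 224x224 input with a 7x7 kernel, padding 3, stride 2 -> 112x112
print(conv_output_size(224, 7, p=3, s=2))  # 112
```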

4. RNN and LSTM

Leading figure: Alex Graves; his personal homepage: Home Page of Alex Graves

Blogs

What are some good tutorials on LSTM (Long Short-Term Memory) and RNN (recurrent) networks?
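
Since this section is only a pointer to tutorials, a single-step LSTM cell in NumPy may help fix the equations in mind. Gate names follow the standard formulation; the concatenated weight layout and gate ordering here are my own assumptions, not from any particular tutorial.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.
    x: (d_in,) input; h_prev, c_prev: (d_h,) previous hidden/cell state;
    W: (4*d_h, d_in + d_h), b: (4*d_h,). Assumed gate order: i, f, o, g."""
    d_h = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0*d_h:1*d_h])    # input gate
    f = sigmoid(z[1*d_h:2*d_h])    # forget gate
    o = sigmoid(z[2*d_h:3*d_h])    # output gate
    g = np.tanh(z[3*d_h:4*d_h])    # candidate cell update
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c
```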

5. Attention Model (Encoder-Decoder Framework)

Leading figure: ?

Neural Machine Translation by Jointly Learning to Align and Translate (Yoshua Bengio): [1409.0473] Neural Machine Translation by Jointly Learning to Align and Translate (a minimal attention sketch is given at the end of this section)

Encoding Source Language with Convolutional Neural Network for Machine Translation (Li Hang):

Survey on Attention-based Models Applied in NLP

What is an attention-based model, and what problem does it solve? Answer by @Tao Lei.

Sequence to Sequence Learning with Neural Networks, with source code

A Neural Attention Model for Abstractive Sentence Summarization

Blogs

Attention Models in Natural Language Processing: What They Are and Why

Two Modes of Research Innovation, Taking the Attention Model as an Example
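
The core computation in the Bahdanau et al. paper listed above is a softmax over alignment scores followed by a weighted sum of the encoder states. A rough NumPy sketch of a single decoding step (shapes and parameter names are my assumptions):

```python
import numpy as np

def additive_attention(s_prev, H, W_s, W_h, v):
    """Bahdanau-style attention for one decoder step.
    s_prev: (d_dec,) previous decoder state; H: (T, d_enc) encoder states;
    W_s: (d_att, d_dec), W_h: (d_att, d_enc), v: (d_att,)."""
    scores = np.tanh(H @ W_h.T + s_prev @ W_s.T) @ v   # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over source positions
    context = weights @ H                              # (d_enc,) weighted sum of encoder states
    return context, weights
```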

Source Code

1. DMLC

Distributed (Deep) Machine Learning Community

2. TensorFlow

tensorflow/tensorflow

3. Caffe/Caffe2

Caffe | Deep Learning Framework

caffe2/caffe2

4. Microsoft Open Source

Microsoft/LightGBM

Microsoft/LightLDA

5. Facebook

Open-source computer Go program: facebookresearch/darkforestGo, led by @田淵棟 (Yuandong Tian)

Applications

Deep reinforcement learning applied to Go, i.e., AlphaGo.

YouTube video recommendation: Deep Neural Networks for YouTube Recommendations

Google's CTR prediction model: Wide & Deep Learning for Recommender Systems, with open-source code
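
As a reading aid for the Wide & Deep paper, here is a stripped-down sketch of the idea: a linear "wide" logit over sparse cross features plus an MLP "deep" logit over embedded features, summed and passed through a sigmoid. Feature counts and layer widths are assumptions; this is not the open-source implementation referenced above.

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Minimal Wide & Deep sketch; all sizes are illustrative assumptions."""
    def __init__(self, n_wide_features=1000, n_deep_ids=1000, emb_dim=16):
        super().__init__()
        # Wide part: a linear model over sparse (cross-product) feature indices.
        self.wide = nn.EmbeddingBag(n_wide_features, 1, mode="sum")
        # Deep part: embeddings of categorical features fed to an MLP.
        self.embed = nn.EmbeddingBag(n_deep_ids, emb_dim, mode="mean")
        self.deep = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, wide_idx, deep_idx):
        # wide_idx / deep_idx: (batch, num_active) indices of active features.
        logit = self.wide(wide_idx) + self.deep(self.embed(deep_idx))
        return torch.sigmoid(logit).squeeze(-1)    # predicted click probability
```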

Microsoft's DSSM model: DSSM can be used to develop latent semantic models that project entities of different types (e.g., queries and documents) into a common low-dimensional semantic space for a variety of machine learning tasks such as ranking and classification. DSSM - Microsoft Research
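
The description above is essentially the two-tower idea: separate networks map queries and documents into the same low-dimensional semantic space, and relevance is their cosine similarity. A rough sketch of that structure (tower depth, dimensions, and the bag-of-features input are my assumptions; the original DSSM uses letter-trigram word hashing on the input side):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerDSSM(nn.Module):
    """DSSM-style two-tower sketch; sizes are illustrative assumptions."""
    def __init__(self, vocab_size=30000, semantic_dim=128):
        super().__init__()
        def tower():
            return nn.Sequential(
                nn.Linear(vocab_size, 300), nn.Tanh(),
                nn.Linear(300, semantic_dim), nn.Tanh(),
            )
        self.query_tower, self.doc_tower = tower(), tower()

    def forward(self, query_feats, doc_feats):
        q = self.query_tower(query_feats)          # (batch, semantic_dim)
        d = self.doc_tower(doc_feats)              # (batch, semantic_dim)
        return F.cosine_similarity(q, d, dim=-1)   # relevance score in [-1, 1]
```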

~~ Updated 2017-04-16 ~~

Position bias optimization: Position-Normalized Click Prediction in Search Advertising
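
The rough intuition behind this line of work is the examination hypothesis: p(click | ad, position) ≈ p(examined | position) · p(click | examined, ad), so raw click-through rates are normalized by a per-position prior before ads are compared. A toy sketch of that normalization (the numbers and the simple global prior are my assumptions; the paper's actual model is more elaborate):

```python
import numpy as np

# Toy click/impression counts per (ad, position).
impressions = np.array([[1000, 1000],    # ad 0 shown at positions 0 and 1
                        [1000, 1000]])   # ad 1
clicks = np.array([[100, 40],
                   [ 60, 30]])

# Per-position examination prior estimated from the global CTR of each
# position, scaled so the top position has prior 1.0 (assumed convention).
pos_ctr = clicks.sum(axis=0) / impressions.sum(axis=0)
exam_prior = pos_ctr / pos_ctr[0]

# Position-normalized CTR: clicks divided by "examined" impressions.
normalized_ctr = clicks.sum(axis=1) / (impressions * exam_prior).sum(axis=1)
print(exam_prior)       # [1.0, ~0.44]
print(normalized_ctr)   # per-ad CTR corrected for position bias
```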
