AI Resource Collection: Issue 6 (20170110)


  1. [Code] A Practical Guide for Debugging TensorFlow Codes

Introduction:

How to debug TensorFlow code, with case studies and slides; very comprehensive. (A small illustrative example of one common debugging trick follows below.)

Original link: https://github.com/wookayin/TensorflowKR-2016-talk-debugging
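As a taste of the kind of trick such a guide covers, here is a minimal sketch using tf.Print, an identity op in the TensorFlow 1.x API that logs tensor values each time it is evaluated. The toy graph (placeholder x, variable w) is my own illustration and is not taken from the slides.

```python
import tensorflow as tf

# Hypothetical toy graph -- not taken from the talk's slides.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
w = tf.Variable(tf.random_normal([3, 1]), name="w")
y = tf.matmul(x, w)

# tf.Print (TF 1.x) is an identity op that logs the listed tensors to
# stderr every time it is evaluated, letting you inspect values inside
# the graph without extra session.run() calls.
y = tf.Print(y, [tf.shape(y), tf.reduce_mean(y)], message="y shape / mean: ")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})
```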


2. [Paper] Deep Class Aware Denoising

Introduction:

The increasing demand for high image quality in mobile devices brings forth the need for better computational enhancement techniques, and image denoising in particular. At the same time, the images captured by these devices can be categorized into a small set of semantic classes. However simple, this observation has not been exploited in image denoising until now. In this paper, we demonstrate how the reconstruction quality improves when a denoiser is aware of the type of content in the image. To this end, we first propose a new fully convolutional deep neural network architecture which is simple yet powerful as it achieves state-of-the-art performance even without being class-aware. We further show that a significant boost in performance of up to 0.4 dB PSNR can be achieved by making our network class-aware, namely, by fine-tuning it for images belonging to a specific semantic class. Relying on the hugely successful existing image classifiers, this research advocates for using a class-aware approach in all image enhancement tasks.

Original link: https://arxiv.org/pdf/1701.01698v1.pdf
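The abstract only describes the architecture at a high level; the snippet below is a minimal, hypothetical fully convolutional residual denoiser in TensorFlow 1.x that illustrates the general idea. The layer count, width, and residual formulation are my own assumptions, not the paper's exact design.

```python
import tensorflow as tf

def denoiser(noisy, num_layers=5, width=64):
    """Toy fully convolutional denoiser: predicts the noise residual and
    subtracts it from the input. Layer count and width are assumptions."""
    h = noisy
    for i in range(num_layers - 1):
        h = tf.layers.conv2d(h, width, 3, padding="same",
                             activation=tf.nn.relu, name="conv%d" % i)
    residual = tf.layers.conv2d(h, 1, 3, padding="same", name="residual")
    return noisy - residual

noisy = tf.placeholder(tf.float32, [None, None, None, 1])
clean = tf.placeholder(tf.float32, [None, None, None, 1])
denoised = denoiser(noisy)

# Minimizing MSE against the clean image maximizes PSNR, the metric quoted
# in the abstract. A class-aware model would fine-tune these weights
# separately for each semantic class (e.g. faces, text, landscapes).
loss = tf.reduce_mean(tf.square(denoised - clean))
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
```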


3. [Video] How to use Q Learning in Video Games Easily

Introduction:

In this video, I go over the history of reinforcement learning then talk about how a type of reinforcement learning called Q learning works. We'll then write a 10 line python script for a Q learning bot in a 5x5 grid that will help it go from point A to point B as fast as possible.

Original link: https://www.youtube.com/watch?v=A5eihauRQvo&feature=youtu.be

Code link: https://github.com/llSourcell/q_learning_demo
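For readers who want the gist without watching the video, here is a small tabular Q-learning sketch on a 5x5 grid. The grid layout, rewards, and hyperparameters are my own toy choices, not the exact script from the repo.

```python
import numpy as np

# Toy 5x5 grid world: states 0..24, start at 0, goal at 24.
# Actions: 0=up, 1=down, 2=left, 3=right. Reward 1 at the goal, 0 elsewhere.
N = 5
GOAL = N * N - 1
Q = np.zeros((N * N, 4))          # Q-table: one row per state, one column per action
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def step(s, a):
    """Apply action a in state s; walls clamp movement to the grid."""
    r, c = divmod(s, N)
    if a == 0:   r = max(r - 1, 0)
    elif a == 1: r = min(r + 1, N - 1)
    elif a == 2: c = max(c - 1, 0)
    else:        c = min(c + 1, N - 1)
    s2 = r * N + c
    return s2, (1.0 if s2 == GOAL else 0.0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = np.random.randint(4) if np.random.rand() < eps else int(Q[s].argmax())
        s2, reward = step(s, a)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2

# After training, the greedy policy follows a shortest path to the goal.
print(Q.argmax(axis=1).reshape(N, N))
```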


4. [Blog] Practical seq2seq

Introduction:

In my last article, I talked a bit about the theoretical aspects of the famous Sequence to Sequence model and shared the code for my implementation, easy_seq2seq. I adopted most of the code from the en-fr translation example provided by Google, so the parts of the code that dealt with data preprocessing and model evaluation were black boxes to me and to the readers. To make matters worse, the model trained on the Cornell Movie Dialog corpus performed poorly, and a lot of people complained about this: after training the model for days, most of the responses were gibberish. I apologize for wasting your time.

Original link: http://suriyadeepan.github.io/2016-12-31-practical-seq2seq/


5. [Blog] Tutorial: Categorical Variational Autoencoders using Gumbel-Softmax

Introduction:

In this post, I discuss our recent paper, Categorical Reparameterization with Gumbel-Softmax, which introduces a simple technique for training neural networks with discrete latent variables. I'm really excited to share this because (1) I believe it will be quite useful for a variety of machine learning research problems, and (2) this is my first published paper ever (on arXiv, also submitted to a NIPS workshop and to ICLR).

Original link: http://blog.evjang.com/2016/11/tutorial-categorical-variational.html

Code link: https://github.com/ericjang/gumbel-softmax/blob/master/Categorical%20VAE.ipynb
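The heart of the technique is a reparameterized, differentiable sample from a categorical distribution. The function below is a small NumPy sketch of just the Gumbel-Softmax sampling step (the names and toy logits are mine); the linked notebook shows how it plugs into a full categorical VAE, typically with the temperature annealed toward zero during training.

```python
import numpy as np

def sample_gumbel_softmax(logits, temperature):
    """Draw a 'soft' one-hot sample from a categorical distribution
    parameterized by logits, using the Gumbel-Softmax trick.
    Sketch of the core equations only, not the paper's full VAE."""
    # Gumbel(0, 1) noise: g = -log(-log(U)), with U ~ Uniform(0, 1)
    u = np.random.uniform(1e-20, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))
    # Softmax of (logits + noise) / temperature; as temperature -> 0
    # the sample approaches a discrete one-hot vector.
    y = (np.asarray(logits) + g) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()

# Toy example: 3 categories with probabilities 0.1, 0.6, 0.3
print(sample_gumbel_softmax(np.log([0.1, 0.6, 0.3]), temperature=0.5))
```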
