Paper Reading: Neural Machine Translation by Jointly Learning to Align and Translate

This post contains my reading notes on the paper "NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE", published at ICLR 2015.

ABSTRACT

NMT (neural machine translation) is a heavily studied problem that has seen real progress recently.
At the time of this paper, the dominant approach to NMT was the encoder-decoder framework, which performs well in many domains. However, the encoder compresses all of the input information into a single fixed-length vector, and this becomes a performance bottleneck. The model proposed here instead searches the input automatically, during translation, for the parts that are relevant to the target word being predicted. That soft search is the core idea of the paper.

Here is how the paper itself puts it 👇

In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder–decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.

BACKGROUND

1 Translation problem

From a probabilistic perspective, translation is the problem of finding \(\arg\max_y p(y|x)\).
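In practice the model is trained on a parallel corpus to maximize this conditional probability; written out (the notation here is the standard one, assumed rather than quoted from this note):

\[ \theta^{*} = \arg\max_{\theta} \sum_{n} \log p_{\theta}\left(y^{(n)} | x^{(n)}\right) \]

where \((x^{(n)}, y^{(n)})\) are sentence pairs from the training corpus. Translation then amounts to (approximately) searching for the \(y\) with the highest probability under the fitted model.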

2 RNN Encoder-Decoder

The encoder reads the input \(x=(x_1,x_2,...x_T)\) and outputs a vector \(c\):

\[ c=q(\{h_1,h_2,...h_T\}) \]

where \(q\) is some nonlinear function.
The decoder is a probability model of the form:

\[ \begin{aligned} p(y) &= \prod_{t=1}^{T}p(y_t|\{y_1,...y_{t-1}\},c) \\ p(y_t|\{y_1,...y_{t-1}\},c) &= g(y_{t-1},s_t,c) \end{aligned} \]
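To make the bottleneck concrete, here is a minimal numpy sketch of this plain encoder-decoder (the names, shapes, and vanilla-RNN cell are illustrative assumptions; the paper's actual model uses gated units): however long the source sentence is, the decoder only ever sees the single vector \(c\).

```python
import numpy as np

def rnn_step(W, U, x, h):
    # one vanilla RNN step: h' = tanh(W x + U h)
    return np.tanh(W @ x + U @ h)

def encode(W, U, xs):
    # read the whole source sentence, keep only the final state:
    # the simplest choice of q, i.e. c = q({h_1..h_T}) = h_T
    h = np.zeros(U.shape[0])
    for x in xs:
        h = rnn_step(W, U, x, h)
    return h  # the single fixed-length vector c

def decode_step(Wd, Ud, C, V, y_prev, s, c):
    # every decoder step conditions on the SAME fixed c
    s = np.tanh(Wd @ y_prev + Ud @ s + C @ c)
    logits = V @ s
    p = np.exp(logits - logits.max())
    return s, p / p.sum()  # g(y_{t-1}, s_t, c) as a softmax over the vocabulary

# toy usage: 3-dim embeddings, 4-dim hidden states
rng = np.random.default_rng(0)
W, U = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
c = encode(W, U, [rng.normal(size=3) for _ in range(6)])
```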

PROPOSED METHOD

The network architecture looks like this:


(figure: the model architecture from the paper, a bidirectional RNN encoder whose annotations feed an attention-weighted decoder)

The encoder is a bidirectional RNN (each annotation \(h_j\) concatenates the forward and backward hidden states). In the decoder, the conditional probability becomes:

\[ p(y_i|\{y_1,...y_{i-1}\},x) = g(y_{i-1},s_i,c_i) \]

\[ s_i = f(s_{i-1},y_{i-1},c_i) \]

Unlike the decoder above, here \(c_i\) is different for every output \(y_i\):

\[ c_i = \sum_{j=1}^{T}\alpha_{ij}h_j \]

As you can see, \(c_i\) is a weighted sum of the encoder's output states \(h_j\), which is what lets it focus on the inputs most relevant to the current target \(y_i\).

\[ \alpha_{ij}=\frac{\exp(e_{ij})}{\sum_{k=1}^{T} \exp(e_{ik})} \]

\[ e_{ij} = a(s_{i-1},h_j) \]

This \(e_{ij}\) is the crux: the function \(a\) scores how well output position \(i\) and input position \(j\) match.

In the paper's words, \(a\) "is an alignment model which scores how well the inputs around position j and the output at position i match."
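Putting the three equations together, here is a minimal numpy sketch of one attention step. The single-layer MLP form of \(a\), i.e. \(e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)\), is the parameterization given in the paper's appendix; the variable names and toy dimensions below are my own.

```python
import numpy as np

def attention_context(s_prev, H, Wa, Ua, va):
    """Compute the context vector c_i for one decoder step.

    s_prev : previous decoder state s_{i-1}, shape (n,)
    H      : encoder annotations h_1..h_T stacked, shape (T, m)
             (with the bidirectional encoder, each h_j is the
             concatenation of forward and backward states)
    """
    # e_ij = a(s_{i-1}, h_j) = v_a^T tanh(W_a s_{i-1} + U_a h_j)
    e = np.tanh(s_prev @ Wa.T + H @ Ua.T) @ va          # shape (T,)
    # alpha_ij = softmax of e_ij over the source positions j
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # c_i = sum_j alpha_ij * h_j, a weighted sum of annotations
    return alpha @ H, alpha

# toy usage: T=5 source positions, n=4 decoder units, m=6 encoder units
rng = np.random.default_rng(0)
s_prev = rng.normal(size=4)
H = rng.normal(size=(5, 6))
Wa, Ua, va = rng.normal(size=(8, 4)), rng.normal(size=(8, 6)), rng.normal(size=8)
c_i, alpha = attention_context(s_prev, H, Wa, Ua, va)
print(alpha.sum())  # 1.0: the weights form a distribution over source words
```

The design point: because \(\alpha_{ij}\) is recomputed at every output step, the decoder effectively re-reads the source each time, instead of relying on one compressed summary of the whole sentence.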

EXPERIMENT RESULTS

Compared with the plain encoder-decoder baseline, the proposed model performs clearly better, and the gap is most dramatic on long sentences, where the baseline's fixed-length vector degrades fastest.
