Properties:
1. The Transformer is a Seq2Seq-type model.
2. The Transformer is not an RNN.
3. It relies only on attention and fully connected layers.
4. Its accuracy is far higher than RNN-type models.
The various weights:
- Weights: \(\alpha_{ij} = align(h_i, s_j)\).
- Compute \(k_{:i} = W_K h_i\) and \(q_{:j} = W_Q s_j\).
- Compute weights \(\alpha_{:j} = Softmax(K^T q_{:j}) \in \mathbb{R}^m\).
- Context vector: \(c_j = \sum\limits_{i=1}^{m}{\alpha_{ij}v_{:i}}\).
- Query: \(q_{:j} = W_Q s_j\) -- used to match against others.
- Key: \(k_{:i} = W_K h_i\) -- waits to be matched.
- Value: \(v_{:i} = W_V h_i\) -- to be weighted and averaged.
- \(W_Q, W_K, W_V\) are all parameters to be learned.
The Q-K-V relationship is essentially: project \(h\) and \(s\), then compute the attention of \(s\) with respect to \(h\), where each of the three projections uses its own learnable \(W\) (see the NumPy sketch below).
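The list above maps directly onto a few lines of NumPy. This is a minimal sketch, not the Transformer implementation: the dimensions (m, d_h, d_s, d_k, d_v) are made up, and random matrices stand in for the learnable \(W_Q, W_K, W_V\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: m encoder states, state sizes d_h/d_s, key/value sizes d_k/d_v.
m, d_h, d_s, d_k, d_v = 4, 8, 8, 6, 6
H = rng.normal(size=(d_h, m))        # encoder states h_1..h_m as columns
s_j = rng.normal(size=(d_s,))        # one decoder state s_j

W_K = rng.normal(size=(d_k, d_h))    # learnable parameters (random here)
W_Q = rng.normal(size=(d_k, d_s))
W_V = rng.normal(size=(d_v, d_h))

K = W_K @ H                          # columns k_{:i} = W_K h_i
q_j = W_Q @ s_j                      # q_{:j} = W_Q s_j
V = W_V @ H                          # columns v_{:i} = W_V h_i

scores = K.T @ q_j                                   # K^T q_{:j}, shape (m,)
alpha_j = np.exp(scores) / np.exp(scores).sum()      # Softmax -> alpha_{:j} in R^m
c_j = V @ alpha_j                                    # c_j = sum_i alpha_{ij} v_{:i}
print(c_j.shape)                                     # (d_v,)
```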
Attention Layer
- Keys and values are based on the encoder's inputs \(x_1, x_2, ..., x_m\).
- Key: \(k_{:i} = W_K x_i\).
- Value: \(v_{:i} = W_V x_i\).
- Queries are based on the decoder's inputs \(x_1^\prime, x_2^\prime, ..., x_t^\prime\).
- Query: \(q_{:j} = W_Q x_j^\prime\).
Symbol summary (a matrix-form sketch follows after this list):
- Attention layer: \(C = Attn(X, X^\prime)\).
- Encoder's inputs: \(X = [x_1, x_2, ..., x_m]\).
- Decoder's inputs: \(X^\prime = [x_1^\prime, x_2^\prime, ..., x_t^\prime]\).
- Parameters: \(W_Q, W_K, W_V\).
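Below is a hedged matrix-form sketch of the symbols above; attn is a made-up helper name, and the dimensions d, d_k, d_v, m, t are illustrative.

```python
import numpy as np

def softmax_cols(Z):
    """Column-wise softmax, so column j becomes the weight vector alpha_{:j}."""
    Z = Z - Z.max(axis=0, keepdims=True)   # for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def attn(X, X_prime, W_Q, W_K, W_V):
    """C = Attn(X, X'): keys/values come from X, queries come from X'."""
    K = W_K @ X                  # (d_k, m)
    V = W_V @ X                  # (d_v, m)
    Q = W_Q @ X_prime            # (d_k, t)
    A = softmax_cols(K.T @ Q)    # (m, t)
    return V @ A                 # (d_v, t), column j is the context vector c_j

rng = np.random.default_rng(0)
d, d_k, d_v, m, t = 8, 6, 6, 5, 3
X = rng.normal(size=(d, m))          # encoder's inputs x_1..x_m
X_prime = rng.normal(size=(d, t))    # decoder's inputs x'_1..x'_t
W_Q = rng.normal(size=(d_k, d))
W_K = rng.normal(size=(d_k, d))
W_V = rng.normal(size=(d_v, d))
C = attn(X, X_prime, W_Q, W_K, W_V)
print(C.shape)                       # (d_v, t)
```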
- Self-attention layer: \(C = Attn(X, X)\) (a one-line usage example follows this list).
- Inputs: \(X = [x_1, x_2, ..., x_m]\) (the same sequence an RNN would consume).
- Parameters: \(W_Q, W_K, W_V\).
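Continuing the sketch above (same hypothetical attn, X, and weight matrices), self-attention is simply the same call with X passed in both slots:

```python
C_self = attn(X, X, W_Q, W_K, W_V)   # C = Attn(X, X)
print(C_self.shape)                  # (d_v, m): one context vector per input x_i
```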
Summary:
- Attention was originally used in Seq2Seq RNN models.
- Self-attention can be used in any RNN model, not just Seq2Seq models.
- Attention can also be used without relying on an RNN at all.
Single-head self-attention
Multi-head self-attention:
- l single-head self-attentions that do not share weights.
- Concatenate the outputs of all the single-head self-attentions.
- If each single-head self-attention outputs a d x m matrix, the corresponding multi-head output has shape (ld) x m (see the sketch after this list).
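A minimal sketch of the concatenation, assuming l independent single-head outputs of shape d x m; the random arrays simply stand in for real single-head self-attention outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
l, d, m = 8, 64, 10   # assumed number of heads and per-head output shape

# Stand-ins for l single-head self-attention outputs, each (d, m);
# in a real model each head has its own W_Q, W_K, W_V.
head_outputs = [rng.normal(size=(d, m)) for _ in range(l)]

multi_head = np.concatenate(head_outputs, axis=0)
print(multi_head.shape)   # (l*d, m), i.e. (ld) x m
```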
- Transformer's encoder = 6 stacked blocks.
- 1 encoder block \(\approx\) 1 multi-head self-attention layer + 1 dense layer (a shape-level sketch follows).
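A shape-level sketch of one encoder block under simplifying assumptions: single-head attention stands in for the multi-head layer, and residual connections / layer normalization are omitted. Only the (512, m) -> (512, m) shape behaviour is the point here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, m = 512, 10   # 512 matches the shapes quoted for the decoder below

def softmax_cols(Z):
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def make_block_params(d):
    scale = 1.0 / np.sqrt(d)
    return {name: rng.normal(size=(d, d)) * scale
            for name in ("W_Q", "W_K", "W_V", "W_dense")}

def encoder_block(X, p):
    """~ self-attention + dense layer; both preserve the (d_model, m) shape."""
    K, Q, V = p["W_K"] @ X, p["W_Q"] @ X, p["W_V"] @ X
    C = V @ softmax_cols(K.T @ Q)              # self-attention output, (d_model, m)
    return np.maximum(0.0, p["W_dense"] @ C)   # dense layer with ReLU, (d_model, m)

X = rng.normal(size=(d_model, m))
for p in [make_block_params(d_model) for _ in range(6)]:   # encoder = 6 stacked blocks
    X = encoder_block(X, p)
print(X.shape)   # (512, m)
```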
- Transformer's decoder = 6 stacked blocks.
- 1 decoder block \(\approx\) multi-head self-attention + multi-head attention + dense layer.
- Input shape: (512 x m, 512 x t); output shape: 512 x t (see the shape check below).
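And a matching shape check for one decoder block, again with single-head attention standing in for the multi-head layers and with masking, residuals, and layer normalization omitted; it takes the encoder output (512 x m) plus the decoder input (512 x t) and returns a 512 x t matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, m, t = 512, 10, 7   # matches the (512 x m, 512 x t) -> 512 x t shapes above

def softmax_cols(Z):
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def attn(X_kv, X_q, p):
    """Keys/values from X_kv, queries from X_q; output has one column per query."""
    K, V, Q = p["W_K"] @ X_kv, p["W_V"] @ X_kv, p["W_Q"] @ X_q
    return V @ softmax_cols(K.T @ Q)

def make_params(d):
    scale = 1.0 / np.sqrt(d)
    return {n: rng.normal(size=(d, d)) * scale for n in ("W_Q", "W_K", "W_V")}

def decoder_block(H_enc, X_dec, p_self, p_cross, W_dense):
    """~ self-attention + attention over the encoder output + dense layer."""
    Z = attn(X_dec, X_dec, p_self)        # self-attention:            (512, t)
    Z = attn(H_enc, Z, p_cross)           # encoder-decoder attention: (512, t)
    return np.maximum(0.0, W_dense @ Z)   # dense layer:               (512, t)

H_enc = rng.normal(size=(d_model, m))     # encoder output, 512 x m
X_dec = rng.normal(size=(d_model, t))     # decoder input,  512 x t
W_dense = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
print(decoder_block(H_enc, X_dec, make_params(d_model),
                    make_params(d_model), W_dense).shape)   # (512, t)
```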
Stacked Attention
BERT
- BERT is used to pre-train the Transformer's encoder.
- Predict the masked words: randomly mask 15% of the words (a toy sketch follows below).
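A toy sketch of the 15% random masking; the sentence, the [MASK] string, and the sampling are illustrative only (real BERT works on subword IDs and has additional replacement rules).

```python
import random

tokens = "the cat sat on the mat and watched the dog".split()

# Mask ~15% of positions (at least one), chosen uniformly at random.
n_mask = max(1, round(0.15 * len(tokens)))
mask_positions = set(random.sample(range(len(tokens)), n_mask))

masked = ["[MASK]" if i in mask_positions else tok for i, tok in enumerate(tokens)]
targets = {i: tokens[i] for i in mask_positions}   # words the model must predict

print(" ".join(masked))
print(targets)
```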
- Predict the next sentence: the second sentence is the actual next sentence 50% of the time and a randomly sampled sentence 50% of the time, labelled true/false (a toy sketch follows below).
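A toy sketch of building one next-sentence-prediction example: with probability 0.5 keep the real next sentence (label True), otherwise pair with a randomly sampled sentence (label False). The sentences and the helper name make_nsp_pair are made up.

```python
import random

random.seed(0)
doc = ["I went to the bank.", "I deposited a check.", "Then I walked home."]
corpus = ["The weather was nice.", "Transformers use attention.", "He plays the piano."]

def make_nsp_pair(doc, idx, corpus):
    """Return (sentence_a, sentence_b, is_next) for sentence idx of doc."""
    sentence_a = doc[idx]
    if random.random() < 0.5:                          # 50%: the actual next sentence
        return sentence_a, doc[idx + 1], True
    return sentence_a, random.choice(corpus), False    # 50%: a random sentence

print(make_nsp_pair(doc, 0, corpus))
```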