Neural Headline Generation with Minimum Risk Training
Date: 2020-12-20
Today's paper is Neural Headline Generation with Minimum Risk Training. It trains the model by folding the evaluation metric into the objective function, and it outperforms all previous models on both Chinese and English datasets. The result is not surprising: conventional MLE training does not take maximizing the ROUGE metric as its objective, whereas this method optimizes for the evaluation metric directly, so strong results are to be expected. Conversely, this raises a question worth considering: if text summ…
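The idea above can be sketched in code. Minimum Risk Training draws a set of candidate headlines, renormalizes their (scaled) model probabilities over that sample set, and minimizes the expected risk, where the risk of a candidate is 1 − ROUGE against the reference. The function below is a minimal sketch of that general objective, not the paper's exact formulation; the `alpha` sharpness hyperparameter and the function name are assumptions for illustration.

```python
import math

def mrt_loss(log_probs, rouge_scores, alpha=0.005):
    """Expected-risk (MRT) loss over a sampled candidate set.

    log_probs:    model log-probabilities log P(y|x), one per sampled headline
    rouge_scores: ROUGE score of each candidate against the reference, in [0, 1]
    alpha:        sharpness of the renormalized distribution q(y|x) ∝ P(y|x)^alpha
                  (an assumed hyperparameter name, common in MRT formulations)
    """
    # Renormalize scaled probabilities over the candidate subset:
    # q = softmax(alpha * log P), computed with the max-subtraction trick
    # for numerical stability.
    scaled = [alpha * lp for lp in log_probs]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    z = sum(weights)
    q = [w / z for w in weights]

    # Risk of a candidate is 1 - ROUGE, so minimizing this loss
    # maximizes the expected ROUGE of the sampled candidates.
    return sum(qi * (1.0 - r) for qi, r in zip(q, rouge_scores))
```

For example, two equally likely candidates with ROUGE 1.0 and 0.0 give an expected risk of 0.5; shifting probability mass onto the higher-ROUGE candidate lowers the loss, which is exactly the gradient signal MLE lacks.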
Related articles
1. [Reading notes] seq2seq (6): Neural Headline Generation with Minimum Risk Training
2. Text Generation With LSTM Recurrent Neural Networks in Python with Keras
3. Neural Response Generation via GAN with an Approximate Embedding Layer
4. BinaryConnect: Training Deep Neural Networks with binary weights during propagations
5. Training Deep Neural Networks with Low Precision Multiplications
6. Synchronous Bidirectional Inference for Neural Sequence Generation
7. Training Neural Networks, part I
8. Training Recurrent Neural Network
9. (Repost) A Recipe for Training Neural Networks
10. AAAI 2018: Long Text Generation via Adversarial Training with Leaked Information (paper notes)