Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Date: 2021-01-17
Tags: Poison, Machine Learning
Paper overview: In this work, we study a new type of attack called a clean-label attack, in which the training examples injected by the attacker are correctly labeled by a certifying authority, rather than maliciously mislabeled by the attacker herself. Our strategy assumes the attacker has no knowledge of the training data, but does know the model and its parameters. The attacker's goal is that, after the network is retrained on the augmented dataset containing the poison instances, the retrained network misclassifies one specific test instance from one class into another class of her choosing. Aside from the intended misprediction on that target, the victim classifier shows no noticeable performance degradation.
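The poison-crafting step described above can be framed as an optimization: starting from a base instance of the attacker's chosen class, perturb it so that its feature-space representation collides with the target's, while a proximity penalty keeps it visually close to the base (and hence plausibly clean-labeled). Below is a minimal NumPy sketch of that objective, argmin_x ||f(x) - f(t)||^2 + beta * ||x - b||^2, using plain gradient descent and a toy linear feature map in place of a real network's penultimate layer; the names `craft_poison` and `feat_vjp`, and all hyperparameters, are illustrative assumptions, not the paper's implementation (which uses a forward-backward splitting procedure on deep features).

```python
import numpy as np

def craft_poison(feat, feat_vjp, base, target, beta=0.05, lr=0.005, steps=500):
    """Sketch of feature-collision poison crafting:
    argmin_x ||feat(x) - feat(target)||^2 + beta * ||x - base||^2."""
    x = base.astype(float).copy()
    ft = feat(target)
    for _ in range(steps):
        # chain rule: d/dx ||feat(x) - ft||^2 = 2 * J_feat(x)^T (feat(x) - ft)
        grad = 2.0 * feat_vjp(x, feat(x) - ft) + 2.0 * beta * (x - base)
        x -= lr * grad
    return x

# Toy stand-in for a network's feature extractor: a fixed linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
feat = lambda x: W @ x            # feature extractor f
feat_vjp = lambda x, v: W.T @ v   # vector-Jacobian product of f
base = rng.standard_normal(8)     # base instance from the poison class
target = rng.standard_normal(8)   # target test instance
poison = craft_poison(feat, feat_vjp, base, target)
```

After optimization, `poison` sits close to `target` in feature space while the beta term anchors it near `base` in input space; retraining on it pulls the decision boundary so the target is misclassified into the base's class.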
Related articles
1. [Translation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
2. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks: paper reading, reproduction, and reflections
3. Cascade-based attacks on complex networks
4. [Translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
5. [S&P 2019 translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
6. Reading notes 17: Adversarial Attacks on Neural Networks for Graph Data
7. [Adversarial examples] Simple Black-Box Adversarial Attacks on Deep Neural Networks
8. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
9. Paper Notes: A Comprehensive Survey on Graph Neural Networks
10. [娜璋帶你讀論文] (02) SP2019-Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks