Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Date: 2021-01-17
Tags: Poison, Machine Learning
Paper overview: In this work, we study a new type of attack called a clean-label attack, in which the training examples injected by the attacker are correctly labeled by a certifying authority rather than maliciously mislabeled by the attacker. Our strategy assumes that the attacker has no knowledge of the training data but does know the model and its parameters. The attacker's goal is that, after the network is retrained on the augmented dataset containing the poison instances, the retrained network misclassifies a specific test instance from one class into another class of the attacker's choosing. Apart from the intended misprediction on this target, the victim classifier shows no noticeable drop in performance.
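In this line of work, the poison is typically crafted by a feature-collision optimization: the poison image is kept visually close to a clean base image from the attacker's chosen class (so it keeps its clean label), while its internal feature representation is pushed toward the target test instance. The sketch below is a minimal PyTorch illustration of that idea under stated assumptions; `feature_extractor`, the Adam optimizer, and all hyperparameters are illustrative choices, not necessarily the paper's exact procedure.

```python
import torch


def craft_poison(feature_extractor, target, base, beta=0.1, lr=0.01, iters=200):
    """Craft a clean-label poison image via feature collision (illustrative sketch).

    feature_extractor: module mapping an image batch to penultimate-layer features.
    target: the test instance the attacker wants misclassified, shape (1, C, H, W).
    base: a clean image from the attacker's chosen class; the poison keeps its label.
    """
    poison = base.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target_feat = feature_extractor(target)

    optimizer = torch.optim.Adam([poison], lr=lr)
    for _ in range(iters):
        optimizer.zero_grad()
        # Pull the poison toward the target in feature space ...
        feature_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        # ... while keeping it visually close to the clean base image,
        # so a human labeler still assigns it the base class ("clean label").
        image_loss = beta * (poison - base).pow(2).sum()
        (feature_loss + image_loss).backward()
        optimizer.step()
        poison.data.clamp_(0.0, 1.0)  # stay in the valid pixel range
    return poison.detach()
```

Under the threat model described above, the attacker would inject the returned poison, still carrying its base-class label, into the training set and rely on the victim retraining the network on the augmented data.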
Related Articles
1. [Translation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
2. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks: Paper Reading, Reproduction, and Reflections
3. Cascade-based attacks on complex networks
4. [Translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
5. [S&P 2019 Translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
6. Reading Notes 17: Adversarial Attacks on Neural Networks for Graph Data
7. [Adversarial Examples] Simple Black-Box Adversarial Attacks on Deep Neural Networks
8. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
9. Paper Notes: A Comprehensive Survey on Graph Neural Networks
10. [Nazhang Reads Papers] (02) SP2019-Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks