Paper Notes: Evolving Losses for Unsupervised Video Representation Learning

Knowledge Distillation (source: zhihu): distill knowledge from a teacher model (Net-T) into a student model (Net-S). Purpose: to compress the model so it is easier to deploy.
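A minimal sketch of the standard distillation loss this refers to, in plain NumPy: the student matches the teacher's temperature-softened output distribution (KL term) while still fitting the hard labels (cross-entropy term). The function and parameter names (`distillation_loss`, `T`, `alpha`) are illustrative assumptions, not from the original note.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, hard_label, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL loss and hard-label cross-entropy.

    alpha weights the teacher's soft targets; (1 - alpha) weights the
    ground-truth label. The KL term is scaled by T^2 so its gradient
    magnitude stays comparable across temperatures.
    """
    p_t = softmax(teacher_logits, T)  # teacher's softened distribution
    p_s = softmax(student_logits, T)  # student's softened distribution
    soft_loss = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T
    hard_loss = -np.log(softmax(student_logits)[hard_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label cross-entropy remains.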