Open Access
MATEC Web Conf., Volume 382 (2023)
6th International Conference on Advances in Materials, Machinery, Electronics (AMME 2023)
Article Number: 01034
Number of pages: 8
DOI: https://doi.org/10.1051/matecconf/202338201034
Published online: 26 June 2023
  1. Kunlong Chen, Liu Yang, Yitian Chen, Kunjin Chen, Yidan Xu, and Lujun Li. GP-NAS-ensemble: A Model for the NAS Performance Prediction. In CVPRW (2022).
  2. Jang Hyun Cho and Bharath Hariharan. On the Efficacy of Knowledge Distillation. In ICCV (2019).
  3. Peijie Dong, Lujun Li, and Zimian Wei. DisWOT: Student Architecture Search for Distillation WithOut Training. In CVPR (2023).
  4. Peijie Dong, Xin Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, Hengyue Pan, and Dongsheng Li. RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies. arXiv preprint arXiv:2301.09850 (2023).
  5. Peijie Dong, Xin Niu, Lujun Li, Linzhen Xie, Wenbin Zou, Tian Ye, Zimian Wei, and Hengyue Pan. Prior-Guided One-shot Neural Architecture Search. arXiv preprint arXiv:2206.13329 (2022).
  6. Peijie Dong, Xin Niu, Zhiliang Tian, Lujun Li, Xiaodong Wang, Zimian Wei, Hengyue Pan, and Dongsheng Li. Progressive Meta-Pooling Learning for Lightweight Image Classification Model. arXiv preprint arXiv:2301.10038 (2023).
  7. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531 (2015).
  8. Yiming Hu, Xingang Wang, Lujun Li, and Qingyi Gu. Improving One-shot NAS with Shrinking-and-Expanding Supernet. Pattern Recognition (2021).
  9. Lujun Li. Self-Regulated Feature Learning via Teacher-free Feature Distillation. In ECCV (2022).
  10. Lujun Li and Zhe Jin. Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer. In NeurIPS (2022).
  11. Lujun Li, Liang Shiuan-Ni, Ya Yang, and Zhe Jin. Boosting Online Feature Transfer via Separable Feature Fusion. In IJCNN (2022).
  12. Lujun Li, Liang Shiuan-Ni, Ya Yang, and Zhe Jin. Teacher-free Distillation via Regularizing Intermediate Representation. In IJCNN (2022).
  13. Lujun Li, Yikai Wang, Anbang Yao, Yi Qian, Xiao Zhou, and Ke He. Explicit Connection Distillation. In ICLR (2020).
  14. Li Liu, Qinwen Huang, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, and Xiao-Xue Liang. Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation. In ICCV (2021).
  15. Yifan Liu, Changyong Shun, Jingdong Wang, and Chunhua Shen. Structured Knowledge Distillation for Dense Prediction. arXiv preprint arXiv:1903.04197 (2019).
  16. Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved Knowledge Distillation via Teacher Assistant. In AAAI (2020).
  17. Wonpyo Park, Yan Lu, Minsu Cho, and Dongju Kim. Relational Knowledge Distillation. In CVPR (2019).
  18. Jie Qin, Jie Wu, Xuefeng Xiao, Lujun Li, and Xingang Wang. Activation Modulation and Recalibration Scheme for Weakly Supervised Semantic Segmentation. In AAAI (2022).
  19. Tao Wang, Li Yuan, Xiaopeng Zhang, and Jiashi Feng. Distilling Object Detectors with Fine-Grained Feature Imitation. In CVPR (2019).
  20. Zimian Wei, Hengyue Pan, Lujun Li, Menglong Lu, Xin Niu, Peijie Dong, and Dongsheng Li. ConvFormer: Closing the Gap Between CNN and Vision Transformers. arXiv preprint arXiv:2209.07738 (2022).
  21. Xiaolong Liu, Lujun Li, Chao Li, and Anbang Yao. NORM: Knowledge Distillation via N-to-One Representation Matching. In ICLR (2023).
  22. Guodong Xu, Ziwei Liu, Xiaoxiao Li, and Chen Change Loy. Knowledge Distillation Meets Self-Supervision. In ECCV (2020).
  23. Kaiyu Yue, Jiangfan Deng, and Feng Zhou. Matching Guided Distillation. arXiv preprint (2020).
  24. Sukmin Yun, Joon-Seok Park, Kimin Lee, and Jinwoo Shin. Regularizing Class-Wise Predictions via Self-Knowledge Distillation. In CVPR (2020).
