From today's frontier paper picks by 爱可可
Topic: TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models
Authors: S Ren, F Wei, Z Zhang, H Hu [Microsoft Research Asia]
Key points:
Proposes TinyMIM, a successful approach that transfers masked image modeling (MIM) pre-training to small vision Transformers (ViTs) via knowledge distillation;
Distilling token relations, targeting an intermediate teacher layer, weak regularization, and sequential distillation are all important for boosting performance (a minimal code sketch follows these points);
Explores a new path for developing small vision Transformer models, namely improving the training method rather than introducing more inductive biases into the architecture.
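Among these ingredients, token-relation distillation matches the student's token-to-token relation maps (e.g. Q·K and V·V similarities) to those of an intermediate teacher layer. Below is a minimal PyTorch sketch of such a loss; the function names, tensor shapes, temperature, and the use of a softened KL divergence are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of token-relation distillation (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def relation_map(a: torch.Tensor, b: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Softmax-normalized token-to-token relation (log-probabilities), e.g. Q·K^T or V·V^T."""
    scale = a.shape[-1] ** 0.5
    logits = a @ b.transpose(-2, -1) / (scale * tau)
    return F.log_softmax(logits, dim=-1)

def relation_distill_loss(q_s, k_s, v_s, q_t, k_t, v_t, tau: float = 1.0) -> torch.Tensor:
    """KL divergence between teacher and student Q-K and V-V relation maps.

    Student tensors (q_s, k_s, v_s) would come from the student's last layer,
    teacher tensors (q_t, k_t, v_t) from an intermediate teacher layer;
    all are assumed to be shaped [batch, heads, tokens, head_dim].
    """
    loss = torch.zeros((), device=q_s.device)
    pairs = [((q_s, k_s), (q_t, k_t)),   # Q-K relation
             ((v_s, v_s), (v_t, v_t))]   # V-V relation
    for (sa, sb), (ta, tb) in pairs:
        log_p_student = relation_map(sa, sb, tau)
        with torch.no_grad():                      # teacher is frozen
            log_p_teacher = relation_map(ta, tb, tau)
        loss = loss + F.kl_div(log_p_student, log_p_teacher,
                               reduction="batchmean", log_target=True)
    return loss
```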
Abstract:
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models, which are critical for real-world applications, benefit little or not at all from this pre-training approach. This paper explores distillation techniques to transfer the success of large MIM pre-trained models to smaller ones, systematically studying the different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, and more. The findings: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) when the student's depth does not match the teacher's, using an intermediate layer of the teacher as the target performs better than using the last layer; 3) weak regularization is preferable. With these findings, significant fine-tuning accuracy improvements over from-scratch MIM pre-training are achieved on ImageNet-1K classification for ViT-Tiny, ViT-Small, and ViT-Base models, with gains of +4.2%/+2.4%/+1.4%, respectively. The base-size TinyMIM model reaches 52.2 mIoU on ADE20K semantic segmentation, +4.1 over the MAE baseline, while the tiny-size TinyMIM model reaches 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and compute budget. This strong performance suggests an alternative way to develop small vision Transformer models: explore better training methods rather than introduce inductive biases into the architecture as most previous works do.
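Sequential distillation, another ingredient above, first distills a mid-size student from the large MIM teacher and then reuses that student as the teacher for an even smaller model. A rough driver is sketched below; the `distill(teacher, student, loader)` helper is assumed (e.g. a training loop using a relation loss like the one above), the timm model names are stand-ins, and in practice the initial teacher would load MIM (e.g. MAE) pre-trained weights.

```python
# Illustrative sequential-distillation driver (not the authors' released code).
import timm

def sequential_distillation(loader, distill):
    # Stage 0: large teacher (in practice, load MIM/MAE pre-trained weights here).
    teacher_base = timm.create_model("vit_base_patch16_224", pretrained=False).eval()

    # Stage 1: distill ViT-Base -> ViT-Small.
    student_small = timm.create_model("vit_small_patch16_224", pretrained=False)
    distill(teacher_base, student_small, loader)

    # Stage 2: the distilled ViT-Small becomes the teacher for ViT-Tiny.
    student_tiny = timm.create_model("vit_tiny_patch16_224", pretrained=False)
    distill(student_small.eval(), student_tiny, loader)

    return student_small, student_tiny
```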
Paper: https://arxiv.org/abs/2301.01296