Authors: A Majumdar, K Yadav, S Arnaud, Y J Ma...
[Georgia Institute of Technology & Meta AI & Stanford University & UC Berkeley]

Summary:
Studies the masked autoencoder (MAE), a scalable self-supervised learner for computer vision, and explores its application to an artificial visual cortex.

Recommendation:

  • Goal: find a scalable self-supervised learning method for computer vision.
  • Approach: mask random patches of the input image and reconstruct the missing pixels, using an asymmetric encoder-decoder architecture and a high masking ratio (see the sketch after this list).
  • Highlights: faster training and higher accuracy, enabling high-capacity models that generalize well.
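The masking step is easy to make concrete. Below is a minimal PyTorch sketch, not the authors' code; the function name `random_masking` and all shapes are illustrative assumptions. It shows how a 75% masking ratio selects the visible subset of patch embeddings:

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """patches: (batch, num_patches, dim).
    Returns the visible subset, a binary mask (1 = masked), and the
    indices needed to restore the original patch order later."""
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N)                  # one random score per patch
    ids_shuffle = noise.argsort(dim=1)        # random permutation per sample
    ids_restore = ids_shuffle.argsort(dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :num_keep]      # the patches that stay visible
    visible = torch.gather(
        patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N)                   # 1 = masked, to be reconstructed
    mask.scatter_(1, ids_keep, 0.0)           # kept patches marked 0
    return visible, mask, ids_restore

# Example: a 14x14 grid of 16x16x3 patches; only 49 of 196 stay visible.
patches = torch.randn(8, 196, 768)
visible, mask, ids_restore = random_masking(patches)  # visible: (8, 49, 768)
```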

https://arxiv.org/abs/2303.18240

This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3× or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pretraining and shows promising scaling behavior.
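To make the asymmetric design concrete, here is a minimal sketch under stated assumptions: the class name `TinyMAE`, the layer counts, and the widths are illustrative stand-ins, not the paper's ViT configuration, and positional embeddings are omitted for brevity. A deep encoder runs only on the visible patches, then a shallow, narrow decoder fills in learned mask tokens and predicts pixels for every patch.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, dim=768, dec_dim=512, patch_pixels=768):
        super().__init__()
        # deep encoder (stand-in for a ViT): runs on visible patches only
        enc = nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=12)
        # lightweight decoder: narrower and far shallower than the encoder
        self.enc_to_dec = nn.Linear(dim, dec_dim)
        dec = nn.TransformerEncoderLayer(dec_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.pred = nn.Linear(dec_dim, patch_pixels)  # 16*16*3 pixels per patch

    def forward(self, visible, ids_restore):
        B, N = ids_restore.shape
        latent = self.enc_to_dec(self.encoder(visible))  # (B, num_keep, dec_dim)
        fill = self.mask_token.expand(B, N - latent.shape[1], -1)
        tokens = torch.cat([latent, fill], dim=1)        # (B, N, dec_dim)
        # unshuffle so every token sits at its original patch position
        tokens = torch.gather(
            tokens, 1, ids_restore.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
        return self.pred(self.decoder(tokens))           # (B, N, patch_pixels)
```

With the `random_masking` sketch above, `TinyMAE()(visible, ids_restore)` returns per-patch pixel predictions; the MSE reconstruction loss is computed against the true pixels on the masked patches only (`mask == 1`). This split is what makes the high masking ratio both cheap (the encoder sees only ~25% of tokens) and a nontrivial pretext task.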
