From today's 爱可可AI frontier paper picks
[CV] Learning Video Representations from Large Language Models
Y Zhao, I Misra, P Krähenbühl, R Girdhar
[Meta AI & University of Texas]
Brief: This work introduces LaViLa, a new approach to learning video-language representations with Large Language Models (LLMs). By conditioning a pre-trained LLM on visual input and fine-tuning it to act as an automatic video narrator, LaViLa gains advantages such as dense coverage of long videos and better temporal synchronization between visual and textual information. LaViLa outperforms existing methods on a range of first-person and third-person video tasks, and shows positive scaling behavior with more training data, larger visual backbones, and stronger LLMs.
Abstract: We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, in both zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on EGTEA classification and 5.9% on the Epic-Kitchens-100 multi-instance retrieval benchmark. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior with increasing pre-training data and model size.
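The "video-text embedding learned contrastively" mentioned in the abstract is typically a CLIP-style symmetric InfoNCE objective over paired (video, narration) embeddings. The sketch below is an illustrative, minimal PyTorch version under that assumption, not the authors' released code; the encoders producing the embeddings are assumed to exist elsewhere.

```python
import torch
import torch.nn.functional as F

def contrastive_video_text_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    video_emb, text_emb: (batch, dim) tensors, where row i of each
    tensor comes from the same (video clip, narration) pair.
    """
    # L2-normalize so dot products are cosine similarities.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are positives.
    logits = v @ t.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the video-to-text and text-to-video cross-entropy terms.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.T, targets)
    return (loss_v2t + loss_t2v) / 2
```

With auto-generated narrations, each video clip simply contributes extra (clip, narration) pairs to the same batch, which is how the denser, more diverse text supervision enters training.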