The paper "'Less Than One'-Shot Learning: Learning N Classes From M<N Samples" is a collaboration between Professor Matthias Schonlau and PhD student Ilia Sucholutsky of the University of Waterloo.

MIT Technology Review covered the paper in a dedicated article, which included comments from peers:

Tongzhou Wang, an MIT PhD student who led earlier research on dataset distillation, said of Sucholutsky's contribution: "The paper builds upon a really novel and important goal: learning powerful models from small datasets."

Ryan Khurana, a researcher at the Montreal AI Ethics Institute, echoed this view: "Most significantly, 'less than one'-shot learning would radically reduce the data requirements for getting a functioning model built." This could make AI more accessible to companies and industries that have so far been hampered by the field's data requirements. It could also improve data privacy, because less information would have to be extracted from individuals to train useful models.

Abstract:

Deep neural networks require large training sets but suffer from high computational cost and long training times. Training on much smaller training sets while maintaining nearly the same accuracy would be very beneficial. In the few-shot learning setting, a model must learn a new class given only a small number of samples from that class. One-shot learning is an extreme form of few-shot learning where the model must learn a new class from a single example. We propose the 'less than one'-shot learning task where models must learn N new classes given only M<N examples and we show that this is achievable with the help of soft labels. We use a soft-label generalization of the k-Nearest Neighbors classifier to explore the intricate decision landscapes that can be created in the 'less than one'-shot learning setting. We analyze these decision landscapes to derive theoretical lower bounds for separating N classes using M<N soft-label samples and investigate the robustness of the resulting systems.
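To make the core idea concrete, here is a minimal sketch of a distance-weighted soft-label kNN classifier in the spirit of the paper's method. The function name, the inverse-distance weighting, and the example numbers below are illustrative assumptions rather than the paper's exact formulation; the sketch only demonstrates how two soft-label samples can carve out three decision regions.

```python
import numpy as np

def soft_label_knn_predict(prototypes, soft_labels, query, k=None, eps=1e-9):
    """Distance-weighted soft-label kNN (illustrative sketch).

    prototypes : (M, D) array of sample locations
    soft_labels: (M, N) array; row i is a probability distribution over N classes
    query      : (D,) point to classify
    Returns the index of the predicted class.
    """
    dists = np.linalg.norm(prototypes - query, axis=1)
    k = len(prototypes) if k is None else k
    nearest = np.argsort(dists)[:k]
    # Each neighbor contributes its full soft label, weighted by inverse distance.
    weights = 1.0 / (dists[nearest] + eps)
    scores = weights @ soft_labels[nearest]
    return int(np.argmax(scores))

# Two 1-D prototypes whose soft labels jointly encode three classes:
# class 0 dominates near x = -1, class 2 near x = +1, and the shared
# probability mass on class 1 wins in the region between them.
X = np.array([[-1.0], [1.0]])
Y = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.4, 0.6]])

for q in (-0.9, 0.0, 0.9):
    print(q, soft_label_knn_predict(X, Y, np.array([q])))
# -0.9 -> class 0, 0.0 -> class 1, 0.9 -> class 2:
# three classes separated by M = 2 < N = 3 soft-label samples.
```

At the midpoint the two class-1 contributions sum to 0.8 and outweigh either hard class's 0.6, which is exactly the mechanism the abstract refers to: soft labels let a handful of points define more decision regions than there are samples.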