Authors: K Ahuja, R Hada, M Ochieng, P Jain, H Diddee, S Maina, T Ganu, S Segal, M Axmed, K Bali, S Sitaram
[Microsoft]

Summary:
MEGA is the first comprehensive benchmark for evaluating generative language models across 33 languages and 8 tasks. It finds that generative models underperform on certain languages and tasks, and that further research and improvement are needed.

Key points:

  1. MEGA is the first comprehensive benchmark for evaluating generative language models, covering 33 languages and 8 tasks.
  2. Generative models underperform on some low-resource languages and tasks, and require further research and improvement.
  3. Comparing generative models against non-autoregressive models shows that the former perform better on high-resource languages and languages written in the Latin script.
  4. The authors recommend that NLP researchers prioritize automated benchmarking and human evaluation across as many languages as possible, so that large parts of the world's population are not left behind by the wave of generative AI.

https://arxiv.org/abs/2303.12528

Generative AI models have impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. One of the most important questions the AI community is asking today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies on generative LLMs are restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 8 diverse tasks and 33 typologically diverse languages. We also compare the performance of generative LLMs to State of the Art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform relative to the previous generation of LLMs. We present a thorough analysis of model performance across languages and discuss some of the reasons why generative LLMs are currently not optimal for all languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.