Yihe is a third-year PhD student at the UCLA Artificial General Intelligence Lab, advised by Prof. Quanquan Gu. Before that, she received her MS in Computer Science and BS in Mathematics of Computation from UCLA. Her research interests center on Large Language Models, multi-modal learning, and enhancing the robustness of foundation models. She has published papers at top-tier machine learning venues such as NeurIPS, ACL, and AAAI, with a focus on developing theory-guided methods in the vision and language domains.
Misunderstandings arise not only in interpersonal communication but also between humans and Large Language Models (LLMs). Such discrepancies can make LLMs interpret seemingly unambiguous questions in unexpected ways, yielding incorrect responses. While it is widely acknowledged that the quality of a prompt, such as a question, significantly impacts the quality of the response provided by LLMs, a systematic method for crafting questions that LLMs can better comprehend is still underdeveloped.
In this talk, I will present our new prompting method, “Rephrase and Respond” (RaR), which allows an LLM to rephrase and expand a question posed by a human and then provide its response, all within a single prompt. Across a wide range of benchmark tasks, RaR significantly improves the performance of different LLMs. I will also compare RaR with the popular Chain-of-Thought (CoT) methods, both theoretically and empirically. RaR is shown to be complementary to CoT and can be combined with it to achieve even better performance.
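As a rough illustration of the idea, the one-step variant described above can be sketched as a simple prompt template that appends a rephrase-and-respond instruction to the user's question. The exact instruction wording and the example question below are assumptions for illustration, not necessarily the phrasing used in the reported experiments:

```python
def rar_prompt(question: str) -> str:
    """Wrap a question in a Rephrase-and-Respond style instruction.

    Illustrative template only: the model is asked to restate and
    expand the question before answering, all in a single prompt.
    """
    return (
        f"{question}\n"
        "Rephrase and expand the question, and respond."
    )

# Hypothetical usage: the resulting string would be sent to an LLM as-is.
print(rar_prompt("Was the Declaration of Independence signed on an even day?"))
```

The single-prompt design means no extra round trip is needed: the model's rephrasing and its answer arrive in one completion.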
This is joint work with Weitong Zhang, Zixiang Chen and Quanquan Gu.