From today's 爱可可 AI frontier paper picks
1、[CL] Talking About Large Language Models
M Shanahan
[Imperial College London]
Key points:
- The rapid progress of artificial intelligence has ushered in an era in which technology and philosophy collide;
- To avoid misusing the philosophically loaded terms applied to language models, it is worth repeatedly stepping back to revisit how language models actually work;
- It is important to avoid anthropomorphizing language models and to use precise language when talking about them.
Abstract:
Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.
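As a concrete aid to the "stepping back" the abstract recommends, the sketch below shows what an LLM mechanically does: map a token sequence to a probability distribution over the next token, and nothing more. It uses the Hugging Face transformers library with GPT-2; the library, model choice, and prompt are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of what an LLM mechanically computes: a probability
# distribution over the next token. Model, library, and prompt are
# illustrative choices, not taken from the paper.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output is this distribution over the vocabulary;
# there is no belief state behind it, only next-token statistics.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={prob.item():.3f}")
```

Nothing in this computation corresponds to "knowing" who walked on the Moon; any appearance of knowledge arises from statistics over the training corpus, which is exactly the distinction the paper asks us to keep in view.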
Paper link: https://arxiv.org/abs/2212.03551