From today's 爱可可 AI frontier picks

[LG] Tracr: Compiled Transformers as a Laboratory for Interpretability

D Lindner, J Kramár, M Rahtz, T McGrath, V Mikulik
[ETH Zurich & DeepMind]


Key points:

  1. Tracr is a tool for manually constructing Transformer models to serve as a testbed for interpretability research, which aims to build tools for understanding machine learning models;
  2. Tracr is a "compiler" that translates code written in the domain-specific language RASP into weights for a standard, decoder-only, GPT-like Transformer architecture;
  3. Tracr is used to create a range of ground-truth Transformers that implement different programs, such as computing token frequencies, sorting, and Dyck-n parenthesis checking.
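The key points above can be made concrete with a minimal sketch of the RASP computational model. The plain-Python code below illustrates `select` / `selector_width`-style semantics behind the token-frequency task; the names and signatures are hypothetical illustrations of the model, not the Tracr library's actual API.

```python
# Minimal plain-Python sketch of RASP-style primitives (hypothetical names,
# NOT the Tracr library's API), illustrating the token-frequency task.

def select(keys, queries, predicate):
    """Attention-like selector: row q marks every key position k
    where predicate(keys[k], queries[q]) holds."""
    return [[predicate(k, q) for k in keys] for q in queries]

def selector_width(selector):
    """Count how many key positions each query position selects."""
    return [sum(row) for row in selector]

def token_histogram(tokens):
    """For each position, count how often its token occurs in the sequence."""
    same_token = select(tokens, tokens, lambda k, q: k == q)
    return selector_width(same_token)

print(token_histogram(["a", "b", "a", "c"]))  # [2, 1, 2, 1]
```

In an actual compiled model, `select` corresponds to an attention pattern and `selector_width` to an aggregation over the selected positions; the sketch only mirrors that structure at the Python level.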

One-sentence summary:
Tracr is a tool that manually constructs Transformer models by translating human-readable RASP code into Transformer weights, serving as a testbed for interpretability research; it is used to create benchmark Transformers implementing various programs, with an open-source implementation provided for the research community to explore and use.

Abstract:

Interpretability research aims to build tools for understanding machine learning (ML) models. However, such tools are inherently hard to evaluate because we do not have ground truth information about how ML models actually work. In this work, we propose to build transformer models manually as a testbed for interpretability research. We introduce Tracr, a "compiler" for translating human-readable programs into weights of a transformer model. Tracr takes code written in RASP, a domain-specific language (Weiss et al. 2021), and translates it into weights for a standard, decoder-only, GPT-like transformer architecture. We use Tracr to create a range of ground truth transformers that implement programs including computing token frequencies, sorting, and Dyck-n parenthesis checking, among others. To enable the broader research community to explore and use compiled models, we provide an open-source implementation of Tracr at this https URL.
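To make the benchmark tasks concrete, here is a plain-Python reference for Dyck-n parenthesis checking, one of the programs mentioned in the abstract. This is an ordinary stack-based checker given for comparison only; it is not how a compiled Transformer implements the task.

```python
def dyck_check(seq, pairs={"(": ")", "[": "]"}):
    """Return True iff seq is a balanced Dyck word over the given bracket pairs."""
    closers = {closer: opener for opener, closer in pairs.items()}
    stack = []
    for ch in seq:
        if ch in pairs:          # opening bracket: remember it
            stack.append(ch)
        elif ch in closers:      # closing bracket: must match the latest opener
            if not stack or stack.pop() != closers[ch]:
                return False
    return not stack             # balanced iff nothing is left open

print(dyck_check("([])"))  # True
print(dyck_check("([)]"))  # False
```

A Tracr-compiled model for this task instead tracks bracket balance with attention and MLP layers, which is what makes it useful as an interpretability ground truth: the intended algorithm is known exactly.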

Paper link: https://arxiv.org/abs/2301.05062