- Introduction: Machine learning is moving from the cloud to the edge. Edge computing reduces the surface that exposes private data and enables reliable throughput guarantees for real-time applications. Among the many devices deployed at the edge, resource-constrained MCUs (e.g., Arm Cortex-M) are far more common, orders of magnitude cheaper than application processors or GPUs, and consume far less power. Bringing intelligence to the deep edge is therefore the zeitgeist, and researchers are focusing on unveiling new ways to deploy artificial neural networks (ANNs) on these constrained devices. Quantization is a well-established technique that has proven effective for deploying neural networks on MCUs; however, the robustness of quantized neural networks (QNNs) against adversarial examples remains an open question. To fill this gap, we empirically evaluate the effectiveness of attacks and defenses carried over from (full-precision) ANNs to (constrained) QNNs. Our evaluation covers three QNNs targeting TinyML applications, ten attacks, and six defenses. This study yields several interesting findings. First, quantization increases the point distance to the decision boundary and causes the gradients estimated by some attacks to explode or vanish. Second, quantization can act as a noise attenuator or amplifier, depending on the noise magnitude, and causes gradient misalignment (a numeric sketch of this effect appears after this list). Regarding adversarial defenses, we conclude that input pre-processing defenses show impressive results on small perturbations but fall short as the perturbation increases. Meanwhile, train-based defenses increase the average point distance to the decision boundary, which is preserved after quantization. However, we argue that train-based defenses still need to smooth the quantization-shift and gradient misalignment phenomena to counteract the transferability of adversarial examples to QNNs. All artifacts are open-sourced to enable independent validation of the results.
- Problem addressed: The robustness of QNNs against adversarial examples is still an open question. The paper aims to evaluate the effectiveness of attacks and defenses on QNNs and fill this gap in understanding their robustness.
- Key idea: The paper empirically evaluates the effectiveness of attacks and defenses carried over from full-precision ANNs to constrained QNNs (see the transfer-attack sketch after this list). The study includes three QNNs, ten attacks, and six defenses. The authors report findings on how quantization affects the distance to the decision boundary, gradient estimation, and noise attenuation or amplification. They also evaluate input pre-processing and train-based defenses and argue that the latter need to address quantization-shift and gradient misalignment to counteract adversarial example transferability to QNNs.
- Other highlights: The paper experiments with three QNNs targeting TinyML applications, ten attacks, and six defenses. The authors find that quantization increases the distance to the decision boundary and causes the gradients estimated by some attacks to explode or vanish. They also observe that quantization can act as a noise attenuator or amplifier depending on the noise magnitude, and that it causes gradient misalignment. Input pre-processing defenses show impressive results on small perturbations but fall short as the perturbation increases, while train-based defenses need to address quantization-shift and gradient misalignment to counteract the transferability of adversarial examples to QNNs.
- Recent related work includes 'Adversarial Examples in the Physical World' by Kurakin et al. (2016), 'Towards Evaluating the Robustness of Neural Networks' by Carlini and Wagner (2017), and 'Defending Against Adversarial Attacks by Leveraging an Entire GAN' by Samangouei et al. (2018).
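To make the noise attenuator/amplifier observation concrete, here is a minimal NumPy sketch (not the paper's code; the quantization scale and input values are illustrative assumptions). A perturbation smaller than half the quantization step is rounded away, while one just above that threshold snaps to the next grid point and is magnified to a full step.

```python
import numpy as np

SCALE = 0.05  # hypothetical int8 quantization step (scale); zero-point assumed 0

def fake_quantize(x, scale=SCALE):
    """Round to the nearest int8 grid point, then de-quantize back to float."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

x = np.array([0.40, 0.40])      # clean activations sitting exactly on a grid point
delta = np.array([0.02, 0.03])  # perturbations below / above half a step (0.025)

effective = fake_quantize(x + delta) - fake_quantize(x)

print("injected perturbation :", delta)      # [0.02 0.03]
print("effective perturbation:", effective)  # ~[0.00 0.05]: 0.02 is rounded away
                                             # (attenuated), 0.03 snaps to a full
                                             # 0.05 step (amplified)
```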
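The key-idea item above refers to attacks crafted on the full-precision ANN and transferred to the constrained QNN. The sketch below shows one way such a transfer experiment could be wired up, assuming a TensorFlow/TFLite toolchain (common for TinyML on Cortex-M), a trained Keras model `ann` that outputs softmax probabilities, and hypothetical data names (`x_train`, `x`, `y`). It illustrates the setting with a single-step FGSM attack and is not the paper's implementation.

```python
import numpy as np
import tensorflow as tf

def fgsm(model, x, y, eps=0.03):
    """Single-step FGSM crafted on the full-precision ANN (the surrogate)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0).numpy()

def to_int8_tflite(model, rep_data):
    """Post-training full-integer quantization: this yields the QNN under test."""
    conv = tf.lite.TFLiteConverter.from_keras_model(model)
    conv.optimizations = [tf.lite.Optimize.DEFAULT]
    conv.representative_dataset = lambda: ([np.float32(s[None])] for s in rep_data)
    return conv.convert()

def qnn_predict(tflite_bytes, x):
    """Run one (adversarial) sample through the quantized interpreter."""
    interp = tf.lite.Interpreter(model_content=tflite_bytes)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp["index"], np.float32(x[None]))
    interp.invoke()
    return int(np.argmax(interp.get_tensor(out["index"])))

# Hypothetical usage with a trained classifier `ann`, data `x_train`, and one pair (x, y):
#   x_adv = fgsm(ann, x[None], np.array([y]))[0]   # craft on the float ANN
#   qnn   = to_int8_tflite(ann, x_train[:100])     # quantize the same network
#   print("QNN prediction on the transferred example:", qnn_predict(qnn, x_adv))
```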