This paper, titled Fourier Neural Operator for Parametric Partial Differential Equations, has been submitted to ICLR 2021.

The first author is Zongyi Li, a Chinese PhD student; his advisor Anima Anandkumar is also a co-author. She is a very active researcher in AI; her own PhD advisor was Lang Tong at Cornell (a 1985 Tsinghua graduate). She is currently both a professor at Caltech and Director of Machine Learning Research at NVIDIA, and previously worked at Amazon for several years. On Twitter she noted that in the Navier-Stokes experiments the new method runs up to 1000x faster than traditional solvers.

Abstract

The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies and it is up to three orders of magnitude faster compared to traditional PDE solvers.

Accompanying code: https://github.com/zongyi-li/fourier_neural_operator
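The core building block described in the abstract, a convolution whose integral kernel is parameterized directly in Fourier space, can be sketched in a few lines of PyTorch. The snippet below is only a minimal illustration of that idea, not the authors' implementation (see the repository above for the real code); the module names `SpectralConv1d` / `FNO1d` and the hyperparameters `modes` and `width` are illustrative, and it assumes a recent PyTorch with the `torch.fft` module and `modes` no larger than `n // 2 + 1` for a grid of size `n`.

```python
# Minimal sketch of a 1D Fourier layer: learn a kernel on a fixed number of
# low-frequency Fourier modes and apply it by FFT -> mode-wise multiply -> inverse FFT.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / (in_channels * out_channels)
        # complex weights applied mode-wise in Fourier space
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n)
        x_ft = torch.fft.rfft(x)  # FFT along the spatial dimension
        out_ft = torch.zeros(
            x.shape[0], self.weights.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # multiply the retained low-frequency modes by the learned kernel
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space


class FNO1d(nn.Module):
    """Tiny Fourier Neural Operator: lift -> (spectral conv + pointwise conv) -> project."""

    def __init__(self, modes=16, width=64):
        super().__init__()
        self.lift = nn.Conv1d(2, width, 1)       # input channels: (a(x), x) per grid point
        self.spectral = SpectralConv1d(width, width, modes)
        self.pointwise = nn.Conv1d(width, width, 1)
        self.project = nn.Conv1d(width, 1, 1)

    def forward(self, x):
        x = self.lift(x)
        x = torch.relu(self.spectral(x) + self.pointwise(x))
        return self.project(x)
```

For an input of shape `(batch, 2, n)` (the coefficient function a(x) and the grid coordinate stacked as channels), `FNO1d()(x)` returns a `(batch, 1, n)` prediction. Because the learned weights live on a fixed number of Fourier modes and all other operations are pointwise, the same layer can in principle be evaluated on grids of different resolution, which is one of the selling points of this line of work.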

This paper is in fact one of a series of works from the Caltech team; related papers include:

- https://arxiv.org/pdf/2003.03485.pdf
- https://arxiv.org/pdf/2005.03180.pdf
- https://arxiv.org/pdf/2006.09535.pdf
