- Abstract: Real-time view synthesis has seen rapid progress in fidelity and speed in recent years, and modern methods can render near-photorealistic scenes at interactive frame rates. At the same time, a tension has emerged between explicit scene representations and ray-marching-based neural fields: although the latter surpass the former in quality, they are too costly for real-time applications. This paper introduces SMERF, a view synthesis method that achieves the highest accuracy among real-time methods on large scenes (footprints up to 300 m²) at a volumetric resolution of 3.5 mm³. The method rests on two main contributions: a hierarchical model partitioning scheme, which increases model capacity while constraining compute and memory consumption, and a distillation training strategy that simultaneously yields high fidelity and internal consistency. The method enables full six-degrees-of-freedom navigation within a web browser and renders in real time on commodity smartphones and laptops. Extensive experiments show that it exceeds the current state of the art in real-time novel view synthesis by 0.78 dB on standard benchmarks and by 1.78 dB on large scenes, renders frames three orders of magnitude faster than state-of-the-art radiance field models, and achieves real-time performance on a wide variety of commodity devices, including smartphones. Readers are encouraged to explore these models interactively at the project website, https://smerf-3d.github.io.
- Problem addressed: SMERF: A Real-Time View Synthesis Approach for Large Scenes
- Key idea: SMERF achieves state-of-the-art accuracy among real-time methods on large scenes by combining a hierarchical model partitioning scheme with a distillation training strategy, enabling full six-degrees-of-freedom (6DOF) navigation within a web browser and real-time rendering on commodity smartphones and laptops.
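The core of the partitioning idea is that a large scene is split into spatial submodels and only the submodel covering the current camera position must be active for rendering, which bounds per-frame compute and memory regardless of total scene size. The sketch below is a minimal illustration of that selection logic only; the class and function names (`Submodel`, `partition_scene`, `active_submodel`) and the uniform square grid are assumptions for illustration and do not reflect SMERF's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Submodel:
    """Hypothetical stand-in for one distilled radiance-field submodel
    covering a single grid cell of the scene footprint."""
    index: tuple  # (i, j) grid-cell coordinates

def partition_scene(footprint_m: float, cell_m: float) -> dict:
    """Split a square scene footprint into a uniform grid of submodels.

    Illustrative only: SMERF's real partitioning scheme is hierarchical
    and more involved than a flat grid.
    """
    n = max(1, round(footprint_m / cell_m))
    return {(i, j): Submodel((i, j)) for i in range(n) for j in range(n)}

def active_submodel(submodels: dict, camera_xy: tuple,
                    footprint_m: float, cell_m: float) -> Submodel:
    """Select the submodel whose cell contains the camera position.

    Only this submodel needs to be resident while rendering, so model
    capacity can grow with scene size without growing per-frame cost.
    """
    n = max(1, round(footprint_m / cell_m))
    i = min(n - 1, max(0, int(camera_xy[0] // cell_m)))
    j = min(n - 1, max(0, int(camera_xy[1] // cell_m)))
    return submodels[(i, j)]
```

For example, a 20 m footprint with 5 m cells yields a 4x4 grid of 16 submodels, and a camera at (7, 12) activates only the submodel for cell (1, 2).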
- Other highlights: SMERF exceeds the current state of the art in real-time novel view synthesis by 0.78 dB on standard benchmarks and 1.78 dB on large scenes, renders frames three orders of magnitude faster than state-of-the-art radiance field models, and achieves real-time performance across a wide variety of commodity devices, including smartphones. The models can be explored interactively at the project website.
- Recent related works include 'NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis', 'GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis', and 'PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization'.