MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers

June 14, 2024
  • Abstract
    Recently, 3D assets created via reconstruction and generation have reached quality levels comparable to manually crafted assets, highlighting their potential as replacements. However, this potential remains largely unrealized because such assets must generally be converted into meshes for 3D industry applications, and the meshes produced by current mesh extraction methods are markedly inferior to Artist-Created Meshes (AMs), i.e., meshes created by human artists. Specifically, current mesh extraction methods rely on dense faces and ignore geometric features, resulting in inefficiency, complex post-processing, and low representation quality. To address these issues, we introduce MeshAnything, a model that treats mesh extraction as a generation problem and produces AMs aligned with a specified shape. By converting 3D assets in any 3D representation into AMs, MeshAnything can be integrated with a wide range of 3D asset production methods, enhancing their application across the 3D industry. MeshAnything's architecture comprises a VQ-VAE and a shape-conditioned decoder-only transformer: we first learn a mesh vocabulary with the VQ-VAE, then train the shape-conditioned decoder-only transformer on this vocabulary for shape-conditioned autoregressive mesh generation. Extensive experiments show that our method generates AMs with hundreds of times fewer faces, significantly improving storage, rendering, and simulation efficiency while achieving accuracy comparable to prior methods.
  • Problem Addressed
    Generating Artist-Created Meshes (AMs) from arbitrary 3D representations.
  • Key Idea
    MeshAnything treats mesh extraction as a generation problem, producing Artist-Created Meshes (AMs) aligned with a specified shape. This significantly improves storage, rendering, and simulation efficiency while achieving precision comparable to previous methods.
  • Other Highlights
    MeshAnything uses a VQ-VAE to learn a mesh vocabulary and a shape-conditioned decoder-only transformer for shape-conditioned autoregressive mesh generation (see the illustrative sketches after the Related Work section). The experiments show that MeshAnything generates AMs with hundreds of times fewer faces, enhancing their application across the 3D industry. The model achieves precision comparable to previous methods and can be integrated with various 3D asset production methods.
  • Related Work
    Related work includes prior mesh extraction methods, which rely on dense faces and ignore geometric features, as well as recent advances in generative models for 3D object reconstruction, such as GANs and VAEs.
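
The two-stage pipeline summarized above (a VQ-VAE that learns a mesh vocabulary, followed by a shape-conditioned decoder-only transformer) can be sketched in code. The snippet below is not the authors' released implementation: the 9-dimensional per-face input (three vertices × xyz), the codebook size, and names such as `MeshVQVAE` are assumptions made purely to illustrate the first stage, learning discrete mesh tokens with a VQ-VAE.

```python
# Minimal sketch (assumptions, not the paper's code) of stage 1:
# learn a discrete "mesh vocabulary" with a VQ-VAE over per-face features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes: int = 1024, dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                       # z: (B, N, dim)
        flat = z.reshape(-1, z.shape[-1])       # (B*N, dim)
        # Squared distances from each latent to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)            # discrete "mesh tokens"
        z_q = self.codebook(indices).view_as(z)
        # Codebook + commitment losses; straight-through gradient for z_q.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, indices.view(z.shape[:-1]), loss


class MeshVQVAE(nn.Module):
    """Encode per-face features (here: 3 vertices * xyz = 9 numbers per face),
    quantize them into codebook indices, and decode back to face coordinates."""

    def __init__(self, face_dim: int = 9, dim: int = 256, num_codes: int = 1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(face_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.quantizer = VectorQuantizer(num_codes, dim)
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, face_dim))

    def forward(self, faces):                   # faces: (B, N_faces, face_dim)
        z = self.encoder(faces)
        z_q, tokens, vq_loss = self.quantizer(z)
        recon = self.decoder(z_q)
        recon_loss = F.mse_loss(recon, faces)
        return recon, tokens, recon_loss + vq_loss


if __name__ == "__main__":
    model = MeshVQVAE()
    faces = torch.randn(2, 128, 9)              # toy batch: 2 meshes, 128 faces each
    recon, tokens, loss = model(faces)
    print(tokens.shape, loss.item())            # tokens: (2, 128) indices into the mesh vocabulary
```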
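
A second sketch, under the same caveats, illustrates the shape-conditioned autoregressive stage: a decoder-only transformer attends to a prefix of shape features (here naively projected from a conditioning point cloud) and samples mesh tokens one at a time; the resulting indices would then be decoded back into faces by the VQ-VAE above. `ShapeConditionedGPT`, the prefix conditioning scheme, and all hyperparameters are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code) of stage 2:
# shape-conditioned autoregressive generation of mesh tokens.
import torch
import torch.nn as nn


class ShapeConditionedGPT(nn.Module):
    def __init__(self, vocab_size: int = 1024, dim: int = 256,
                 num_layers: int = 6, num_heads: int = 8, max_len: int = 2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size + 1, dim)   # +1 for a BOS token
        self.pos_emb = nn.Embedding(max_len, dim)
        self.shape_proj = nn.Linear(3, dim)                  # project xyz points to prefix tokens
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(dim, vocab_size)
        self.bos = vocab_size                                # index reserved for BOS

    def forward(self, shape_points, tokens):
        # shape_points: (B, P, 3) points sampled from the conditioning shape
        # tokens:       (B, T) BOS-prefixed mesh tokens generated so far
        prefix = self.shape_proj(shape_points)               # (B, P, dim)
        pos = torch.arange(tokens.shape[1], device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(pos)       # (B, T, dim)
        x = torch.cat([prefix, x], dim=1)                    # shape condition as prefix
        # Causal mask: each position attends only to itself and earlier positions.
        seq_len = x.shape[1]
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=x.device), diagonal=1)
        h = self.blocks(x, mask=mask)
        return self.head(h[:, prefix.shape[1]:])             # logits over mesh-token positions only

    @torch.no_grad()
    def generate(self, shape_points, max_tokens: int = 64, temperature: float = 1.0):
        tokens = torch.full((shape_points.shape[0], 1), self.bos,
                            dtype=torch.long, device=shape_points.device)
        for _ in range(max_tokens):
            logits = self(shape_points, tokens)[:, -1] / temperature
            next_tok = torch.multinomial(logits.softmax(-1), 1)
            tokens = torch.cat([tokens, next_tok], dim=1)
        return tokens[:, 1:]                                  # drop BOS; decode faces with the VQ-VAE


if __name__ == "__main__":
    model = ShapeConditionedGPT()
    points = torch.randn(2, 256, 3)                           # toy conditioning point clouds
    mesh_tokens = model.generate(points, max_tokens=16)
    print(mesh_tokens.shape)                                  # (2, 16) mesh-vocabulary indices
```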