- Introduction: We present the evaluation methodology, datasets, and results of the BOP Challenge 2023, the fifth in a series of public competitions organized to capture the state of the art in model-based 6D object pose estimation from an RGB/RGB-D image and related tasks. Besides the three tasks from 2022 (model-based 2D detection, 2D segmentation, and 6D localization of objects seen during training), the 2023 challenge introduced new variants of these tasks focused on objects unseen during training. In the new tasks, methods have to learn new objects during a short onboarding stage (at most 5 minutes, 1 GPU) from provided 3D object models. The best 2023 method for 6D localization of unseen objects (GenFlow) notably reached the accuracy of the best 2020 method for seen objects (CosyPose), although it is noticeably slower. The best 2023 method for seen objects (GPose) achieved a moderate accuracy improvement but a significant 43% run-time improvement compared to the best 2022 counterpart (GDRNPP). Since 2017, the accuracy of 6D localization of seen objects has improved by more than 50% (from 56.9 to 85.6 AR_C; see the worked arithmetic after this list). The online evaluation system stays open and is available at: http://bop.felk.cvut.cz/.
- Problem addressed: BOP Challenge 2023 aims to capture the state of the art in model-based 6D object pose estimation from an RGB/RGB-D image and related tasks, including new task variants focused on objects unseen during training.
- Key idea: The 2023 challenge introduced new tasks in which methods must learn new objects during a short onboarding stage from provided 3D object models. The best 2023 method for 6D localization of unseen objects (GenFlow) reached the accuracy of the best 2020 method for seen objects (CosyPose). The best 2023 method for seen objects (GPose) achieved a moderate accuracy improvement and a significant run-time improvement over the best 2022 counterpart (GDRNPP).
- Other highlights: The accuracy of 6D localization of seen objects has improved by more than 50% since 2017. The online evaluation system remains open for public use. The paper details the evaluation methodology, datasets, and results of the BOP Challenge 2023 (a sketch of what a 6D pose error compares is given after this list).
- The paper does not explicitly mention related work, but it is a continuation of a series of public competitions organized to capture the state of the art in model-based 6D object pose estimation.
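As a quick check of the improvement figure quoted above (both AR_C values are taken from the abstract; only the way the percentage is computed, as a relative improvement, is an assumption here):

$$\frac{85.6 - 56.9}{56.9} \approx 0.504,$$

i.e. a relative improvement of just over 50% in AR_C since 2017.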
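To illustrate what "6D pose accuracy" compares, below is a minimal sketch of an ADD-style pose error: the mean distance between 3D model points transformed by the estimated and by the ground-truth pose (rotation R, translation t). This is not the challenge's official evaluation code, and the AR_C score reported above aggregates the challenge's own pose-error functions; all names here (`add_error`, `pts`, `R_est`, `t_est`, ...) are illustrative.

```python
import numpy as np

def add_error(pts: np.ndarray,
              R_est: np.ndarray, t_est: np.ndarray,
              R_gt: np.ndarray, t_gt: np.ndarray) -> float:
    """ADD-style pose error (illustrative, not the official BOP metric).

    pts: (N, 3) 3D model points; R_*: (3, 3) rotations; t_*: (3,) translations.
    Returns the mean Euclidean distance between corresponding transformed points.
    """
    est = pts @ R_est.T + t_est   # model points under the estimated 6D pose
    gt = pts @ R_gt.T + t_gt      # model points under the ground-truth 6D pose
    return float(np.linalg.norm(est - gt, axis=1).mean())

# Toy usage with hypothetical values: identical poses give zero error.
pts = np.random.rand(100, 3)
R, t = np.eye(3), np.zeros(3)
print(add_error(pts, R, t, R, t))  # 0.0
```

A lower error means the estimated pose places the object's 3D model closer to its ground-truth placement; recall-style scores such as AR_C are then obtained by thresholding pose errors over many test images.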