PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes

arXiv 2023


Xiao Fu1, Shangzhan Zhang1, Tianrun Chen1, Yichong Lu1,
Xiaowei Zhou1, Andreas Geiger2, Yiyi Liao1

1Zhejiang University    2University of Tübingen and Tübingen AI Center

Abstract


PanopticNeRF-360 renders panoramic 2D panoptic labels from readily available coarse 3D bounding primitives.

Overall Framework


[Overall framework figure]

At each 3D location, we combine scene features from a deep MLP and multi-resolution hash grids to jointly model geometry, appearance, and semantics. We leverage dual semantic fields to obtain two sets of semantic categorical logits. Our method renders panoptic labels by combining the learned semantic field with a fixed instance field determined by the 3D bounding primitives. The losses applied to the fixed semantic and instance fields improve the geometry, while the loss on the learned semantic field improves the 3D semantic predictions and resolves label ambiguity in regions where bounding primitives intersect.
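A minimal PyTorch sketch of this design follows. It is illustrative only and not the released implementation: the multi-resolution hash grid is stood in for by small dense feature volumes, view directions are omitted, and all names (DenseGridEncoder, HybridField, fixed_instance_logits) are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGridEncoder(nn.Module):
    """Stand-in for a multi-resolution hash grid: dense feature volumes at several resolutions."""
    def __init__(self, resolutions=(16, 32, 64), feat_dim=4):
        super().__init__()
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r, r)) for r in resolutions])
        self.out_dim = feat_dim * len(resolutions)

    def forward(self, x):                                    # x: (N, 3), normalized to [-1, 1]
        pts = x.reshape(1, -1, 1, 1, 3)                      # grid_sample expects (B, D, H, W, 3)
        feats = [F.grid_sample(g, pts, align_corners=True).reshape(g.shape[1], -1).t()
                 for g in self.grids]
        return torch.cat(feats, dim=-1)                      # (N, out_dim)

class HybridField(nn.Module):
    """Joint field for geometry (density), appearance (RGB), and learned semantics."""
    def __init__(self, num_classes=19, mlp_dim=64):
        super().__init__()
        self.grid = DenseGridEncoder()
        self.mlp = nn.Sequential(nn.Linear(3, mlp_dim), nn.ReLU(),
                                 nn.Linear(mlp_dim, mlp_dim), nn.ReLU())
        fused = mlp_dim + self.grid.out_dim                  # MLP and grid features are concatenated
        self.sigma_head = nn.Linear(fused, 1)                # density
        self.rgb_head = nn.Linear(fused, 3)                  # appearance
        self.sem_head = nn.Linear(fused, num_classes)        # learned semantic logits

    def forward(self, x):
        h = torch.cat([self.mlp(x), self.grid(x)], dim=-1)
        return self.sigma_head(h), torch.sigmoid(self.rgb_head(h)), self.sem_head(h)

def fixed_instance_logits(x, boxes):
    """Fixed instance field: membership of each point in each axis-aligned bounding
    primitive (boxes: (M, 2, 3) min/max corners). The fixed semantic logits follow
    by mapping each primitive to its class label."""
    inside = ((x[:, None] >= boxes[:, 0]) & (x[:, None] <= boxes[:, 1])).all(-1)
    return inside.float()                                    # (N, M)

Here the concatenation of MLP and grid features matches the fusion choice discussed under Hybrid Feature Aggregation below.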


Panoramic Semantic/Instance Label Transfer



360° Outward-Rotated Label & Appearance Synthesis



Fixed Instance Optimization



Neural Scene Representation


Ours achieves a trade-off between label quality and appearance.
Label quality: MLP (slightly better) > Ours > Tri-planes            Appearance: Ours > MLP > Tri-planes
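
For context, the sketch below shows (purely illustratively, not the authors' baseline code) how a tri-plane representation queries per-point features: the 3D point is projected onto the xy/xz/yz planes, each plane is bilinearly sampled, and the results are summed.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneEncoder(nn.Module):
    """Queries per-point features from three axis-aligned feature planes (xy, xz, yz)."""
    def __init__(self, res=128, feat_dim=16):
        super().__init__()
        self.planes = nn.Parameter(0.01 * torch.randn(3, feat_dim, res, res))

    def forward(self, x):                                                   # x: (N, 3) in [-1, 1]
        coords = torch.stack([x[:, [0, 1]], x[:, [0, 2]], x[:, [1, 2]]])    # (3, N, 2)
        grid = coords.reshape(3, -1, 1, 2)
        feats = F.grid_sample(self.planes, grid, align_corners=True)        # (3, C, N, 1)
        return feats.squeeze(-1).sum(0).t()                                 # (N, C)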

Hybrid Feature Aggregation


“Concatenation” performs better than “product” for fusing MLP and hash-grid features, avoiding jagged geometric errors.
[Figure: concatenation vs. product fusion]
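
As a hedged illustration (the names f_mlp and f_grid are placeholders for per-point features from the MLP and hash-grid branches), the two fusion variants compared above amount to:

import torch

def fuse_concat(f_mlp, f_grid):
    # Concatenation keeps both feature sets intact; reported above to avoid jagged geometry.
    return torch.cat([f_mlp, f_grid], dim=-1)

def fuse_product(f_mlp, f_grid):
    # Element-wise product; requires matching feature dimensions (e.g. via a linear projection).
    return f_mlp * f_grid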

Citation



@article{fu2023panoptic,
  title={PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes},
  author={Fu, Xiao and Zhang, Shangzhan and Chen, Tianrun and Lu, Yichong and Zhou, Xiaowei and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv},
  year={2023}
}