3D Equivariant Graph Implicit Functions

1University of Amsterdam, 2A*STAR, 3University of Edinburgh, 4Technical University of Munich
ECCV 2022


We embed 3D implicit neural representations in graphs to achieve high-fidelity equivariant 3D reconstruction.

Abstract

In recent years, neural implicit representations have made remarkable progress in modeling 3D shapes with arbitrary topology. In this work, we address two key limitations of such representations: they fail to capture fine local 3D geometric details, and they do not learn from or generalize to shapes with unseen 3D transformations. To this end, we introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details and guarantees robustness to various groups of geometric transformations, through local k-NN graph embeddings with sparse point set observations at multiple resolutions. Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 (IoU) on the ShapeNet reconstruction task. We also show that our equivariant implicit function can be extended to other types of similarity transformations and generalizes to unseen translations and scaling.

Method

Our equivariant graph implicit function infers the implicit field of a 3D shape from a sparse point cloud observation. When a transformation (rotation, translation, and/or scaling) is applied to the observation, the resulting implicit field is guaranteed to be the same as applying the corresponding transformation to the field inferred from the untransformed input (middle). This equivariance property enables generalization to unseen transformations, under which existing models often struggle (right).
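The equivariance property above can be stated as: inferring from a transformed observation and then querying transformed locations gives the same field values as inferring from the original observation at the original locations. A minimal numeric sketch with a toy occupancy function (a ball around the point-cloud centroid, which is translation-equivariant by construction; this toy function is our own illustration, not the paper's network):

```python
import numpy as np

def toy_occupancy(points, query, radius=0.5):
    # Toy translation-equivariant implicit function: occupancy is 1 inside
    # a ball of given radius centered at the point-cloud centroid.
    center = points.mean(axis=0)
    return (np.linalg.norm(query - center, axis=-1) < radius).astype(float)

rng = np.random.default_rng(0)
points = rng.normal(size=(128, 3))   # sparse point cloud observation
query = rng.normal(size=(32, 3))     # query locations of the implicit field
t = np.array([2.0, -1.0, 0.5])       # an arbitrary translation

# Equivariance: evaluating the transformed shape at transformed queries
# matches evaluating the original shape at the original queries.
assert np.allclose(toy_occupancy(points + t, query + t),
                   toy_occupancy(points, query))
```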

Graph-structured local implicit feature embedding

To achieve high-fidelity 3D reconstruction of local details, we embed the implicit function in local k-NN graphs. This architecture is robust to similarity transformations, whereas existing local implicit embedding methods based on convolutional grid structures are sensitive to them.
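The core of such a local graph embedding is connecting each query location to its k nearest observed points and encoding their relative coordinates, which are unchanged under translation of the whole scene. A small sketch of this step (our own simplification; the function name and feature choice are illustrative, not the paper's implementation):

```python
import numpy as np

def knn_neighbors(query, points, k=8):
    # Pairwise distances from each query location to every observed point.
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]        # indices of the k nearest points
    rel = points[idx] - query[:, None, :]     # relative coordinates per edge
    return idx, rel

rng = np.random.default_rng(0)
points = rng.normal(size=(256, 3))   # sparse point cloud observation
query = rng.normal(size=(16, 3))     # query locations of the implicit field
idx, rel = knn_neighbors(query, points, k=8)

# Relative coordinates are invariant when the whole scene is translated,
# unlike absolute grid coordinates in convolutional embeddings.
t = np.array([1.0, 2.0, 3.0])
idx_t, rel_t = knn_neighbors(query + t, points + t, k=8)
assert np.allclose(rel, rel_t)
```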

Equivariant graph convolution layers

We incorporate an equivariant layer design with hybrid scalar and vector features into the graph convolution layers, which guarantees robustness to geometric transformations. The equivariant mechanism is adapted from Vector Neurons [Deng et al., ICCV 2021] and EGNN [Satorras et al., ICML 2021].
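A key building block in the Vector Neurons formulation is a linear layer that treats features as lists of 3D vectors and only mixes channels; since it never mixes the spatial coordinates, it commutes with any rotation. A minimal sketch of this rotation-equivariant linear map (shapes and names are our own illustration):

```python
import numpy as np

def vector_linear(W, V):
    # Channel-mixing linear layer on vector features V of shape (N, C, 3).
    # Because it only combines channels, it commutes with any rotation R:
    # vector_linear(W, V @ R) == vector_linear(W, V) @ R.
    return np.einsum('oc,ncd->nod', W, V)

rng = np.random.default_rng(0)
V = rng.normal(size=(10, 4, 3))   # 10 graph nodes, 4 vector channels each
W = rng.normal(size=(8, 4))       # maps 4 vector channels to 8

# Random 3D rotation from a QR decomposition, determinant fixed to +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

# Rotation equivariance of the layer.
assert np.allclose(vector_linear(W, V @ R), vector_linear(W, V) @ R)
```

Scalar features, by contrast, are invariant under rotation, so a hybrid scalar/vector design can carry both kinds of information through the network.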

Results

We denote our models GraphONet, the graph implicit function without equivariant layers, and E-GraphONet, the variant with equivariant graph convolution layers.

Object reconstruction

Aside from its robustness to transformations, our method also benefits high-fidelity 3D reconstruction in general, even for canonically oriented shapes: the graph mechanism focuses capacity on the critical areas near surface points.

Scene reconstruction

The graph feature embedding mechanism is translation-equivariant, so our methods scale to scene-level reconstruction.

We show results on Synthetic Rooms and ScanNet.


Reconstruction of shapes under unseen transformations

We evaluate 3D implicit surface reconstruction under unseen similarity transformations, including rotation, translation, and scaling.

Our GraphONet is more robust to transformations than non-equivariant baselines. ONet, a standard implicit representation with a global latent embedding, fails to reconstruct local details and cannot handle transformations. Compared to ConvONet, the grid-based method with local feature embeddings, GraphONet generates significantly fewer artifacts, especially under rotation.

Our E-GraphONet with equivariant layers further guarantees generalization to all similarity transformations, while the baseline GraphONet fails under extreme scaling. The previous rotation-equivariant method VN-ONet considers only rotations, and its global latent embedding limits detailed reconstruction.

BibTeX

@inproceedings{chen2022equivariant,
  author    = {Chen, Yunlu and Fernando, Basura and Bilen, Hakan and Nie{\ss}ner, Matthias and Gavves, Efstratios},
  title     = {3D Equivariant Graph Implicit Functions},
  booktitle = {ECCV},
  year      = {2022},
}