Reference-guided Controllable Inpainting of Neural Radiance Fields

ICCV 2023

{ashkan,jkelly,gilitschenski}@cs.toronto.edu, tristan.a@partner.samsung.com, {kosta,mab}@eecs.yorku.ca, alex.lev@samsung.com
¹Samsung AI Centre Toronto, ²University of Toronto, ³York University, ⁴Vector Institute for AI
*Authors contributed equally

Qualitative Results on SPIn-NeRF's Dataset

(Top: Original Scene, Bottom: Ours)


Controllability

(Panels: Original Scene, Inpainted Scene #1, Inpainted Scene #2)


View Substitution

The first time the fitting procedure reaches the view-substitution component, the masked region is still unsupervised. As detailed in the paper, to add view-dependent effects to the reference view and supervise the other views, we render the scene from the reference camera, but with the colours of the other views. The difference between each of these renderings and the reference view is then inpainted inside the masked region using bilateral solvers, propagating proper view-dependent effects into the mask. Below, we show sample view-substituted images for two scenes (the target views used in the following examples are the spiral cameras).
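
To make this step concrete, here is a minimal sketch of view substitution in Python. It is an illustration under assumptions, not our exact implementation: the substituted render is assumed to be given, and bilateral_solve is a hypothetical stand-in for an edge-aware bilateral solver that extends a signal from valid pixels into the mask.

import numpy as np

def view_substitute(ref_image, substituted_render, mask, bilateral_solve):
    """Propagate view-dependent effects into the masked region.

    ref_image:           (H, W, 3) reference view.
    substituted_render:  (H, W, 3) scene rendered from the reference camera,
                         with colours taken from another training view.
    mask:                (H, W) boolean inpainting region.
    bilateral_solve:     hypothetical edge-aware solver that extends a signal
                         from the unmasked (valid) pixels into the mask,
                         guided by a reference image.
    """
    # Outside the mask, this residual captures the view-dependent effects
    # of the other view relative to the reference view.
    residual = substituted_render - ref_image

    # Inpaint each colour channel of the residual inside the mask.
    inpainted = np.stack(
        [bilateral_solve(residual[..., c], guide=ref_image, valid=~mask)
         for c in range(3)],
        axis=-1)

    # Unmasked pixels keep the substituted render; inside the mask, the
    # propagated residual is added back onto the reference view.
    out = substituted_render.copy()
    out[mask] = ref_image[mask] + inpainted[mask]
    return out

The masked region of the resulting view-substituted image can then supervise the corresponding camera, providing plausible view-dependent appearance where no direct observation exists.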

(Panels: View-Substituted Images, Absolute Residuals)

After fitting, the view-substituted images look like this:

(Panels: View-Substituted Images, Absolute Residuals)


Comparison to Baselines

(Panels: Original Scene, Masked-NeRF + DreamFusion, NeRF-In, SPIn-NeRF, Ours)

Ablations

(Two scenes; panels: Original Scene, w/o Depth Loss, w/o Disocclusion Handling, Ours)
(Panels: Original Scene, w/o View Dependence, Ours)

BibTeX

@inproceedings{mirzaei2023reference,
  title={Reference-guided Controllable Inpainting of Neural Radiance Fields},
  author={Ashkan Mirzaei and Tristan Aumentado-Armstrong and Marcus A. Brubaker and Jonathan Kelly and Alex Levinshtein and Konstantinos G. Derpanis and Igor Gilitschenski},
  booktitle={ICCV},
  year={2023},
}