Indian Conference on Computer Vision, Graphics and Image Processing
CVIT, IIIT Hyderabad
Abstract
We present a neural rendering framework for simultaneous view synthesis and appearance editing of a scene from multi-view images captured under known environment illumination. Existing approaches either achieve view synthesis alone, or view synthesis along with relighting, without direct control over the scene's appearance. Our approach explicitly disentangles appearance and learns a lighting representation that is independent of it. Specifically, we independently estimate the BRDF and use it to learn a lighting-only representation of the scene. This disentanglement allows our approach to generalize to arbitrary changes in appearance while performing view synthesis. We show results of editing the appearance of a real scene, demonstrating that our approach produces plausible appearance edits. The view synthesis performance of our approach is demonstrated to be on par with state-of-the-art approaches on both real and synthetic data.
This code was tested on Ubuntu 20.04 with Python 3.8 and PyTorch. Please check requirements.txt for other dependencies and exact versions.
Check out preprocess for instructions on how to generate and preprocess the data.
See DNR for instructions on how to run the DNR code.
See Independent for instructions on how to run the code with independent optimization.
See Joint for instructions on how to run the code with joint optimization.