S3-NeRF: Neural Reflectance Field from
Shading and Shadow under a Single Viewpoint
NeurIPS 2022
Wenqi Yang
The University of Hong Kong
Guanying Chen
FNii and SSE, CUHK-Shenzhen
Chaofeng Chen
Nanyang Technological University
Zhenfang Chen
MIT-IBM Watson AI Lab
Kwan-Yee K. Wong
The University of Hong Kong
Abstract
In this paper, we address the "dual problem" of multi-view scene reconstruction: we utilize single-view images captured under different point lights to learn a neural scene representation. Unlike existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field that represents the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering the 3D geometry of a scene, including both its visible and invisible parts, from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities, and it supports applications such as novel-view synthesis and relighting.
Video
Method
Given N images captured from a single viewpoint under different near point lights, our method aims to recover the geometry and materials of the scene. Following existing near-field photometric stereo methods, we assume a calibrated perspective camera and known point light positions. Instead of representing only the visible surface with a normal / depth map as in previous methods, we adopt a 3D neural field representation to describe the whole scene.
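To make the representation concrete, below is a minimal PyTorch sketch of such a neural reflectance field: an MLP that maps a 3D point to a signed distance (geometry) and per-point BRDF parameters (material). The layer sizes, the positional encoding, and the BRDF parameterization here are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of a neural reflectance field (assumed architecture):
# one MLP mapping a 3D point -> (SDF value, BRDF parameters).
import torch
import torch.nn as nn

class NeuralReflectanceField(nn.Module):
    def __init__(self, hidden=256, n_freqs=6, brdf_dim=3):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 3 * 2 * n_freqs  # raw point + sin/cos positional encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + brdf_dim),  # 1 SDF value + BRDF parameters
        )

    def encode(self, x):
        # Standard frequency encoding of the 3D position.
        freqs = 2.0 ** torch.arange(self.n_freqs, device=x.device)
        angles = x[..., None] * freqs                       # (..., 3, n_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)
        return torch.cat([x, enc], dim=-1)

    def forward(self, x):
        out = self.mlp(self.encode(x))
        return out[..., :1], out[..., 1:]  # (SDF value, BRDF parameters)
```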
For each camera ray, we first apply root-finding to locate the surface intersection point x_s. N_V points are then sampled on the camera ray within a relatively large interval around the surface to compute the accumulated shading. N_L points are sampled on the surface-to-light segment to compute the light visibility, which is multiplied with the accumulated shading to produce the final RGB value.
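The per-ray procedure can be sketched as follows, reusing the field above. Sphere tracing as the root-finding step, the softmax weighting of the N_V shading samples, the Lambertian point-light shading, and the soft visibility rule over the N_L segment samples are all assumptions chosen for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def sphere_trace(field, origin, direction, n_steps=64):
    """Root-finding by sphere tracing: step along the ray by the SDF value."""
    t = torch.zeros(())
    for _ in range(n_steps):
        sdf, _ = field(origin + t * direction)
        t = t + sdf.squeeze(-1)
    return origin + t * direction  # approximate surface intersection x_s

def sdf_normal(field, x):
    """Surface normal as the normalized gradient of the SDF at x."""
    x = x.detach().requires_grad_(True)
    sdf, _ = field(x)
    (grad,) = torch.autograd.grad(sdf.sum(), x)
    return F.normalize(grad, dim=-1)

def light_visibility(field, x_s, light_pos, n_l=32):
    """Sample N_L points on the surface-to-light segment to detect occluders."""
    taus = torch.linspace(0.05, 0.95, n_l)[:, None]  # skip both endpoints
    pts = x_s + taus * (light_pos - x_s)             # (n_l, 3)
    sdf, _ = field(pts)
    # Soft visibility: ~1 if every sample lies outside the surface (SDF > 0),
    # ~0 if the segment crosses it (assumed stand-in for the paper's rule).
    return torch.sigmoid(sdf.min() / 0.01)

def render_ray(field, origin, direction, light_pos,
               n_v=32, n_l=32, half_width=0.05):
    # 1) Root-finding for the surface point x_s on the camera ray.
    x_s = sphere_trace(field, origin, direction)

    # 2) Accumulated shading from N_V samples in an interval around x_s.
    offsets = torch.linspace(-half_width, half_width, n_v)[:, None]
    pts = x_s + offsets * direction                  # (n_v, 3)
    sdf, brdf = field(pts)
    weights = torch.softmax(-sdf.squeeze(-1).abs() / 0.01, dim=0)  # peak at surface

    to_light = light_pos - pts
    dist2 = (to_light ** 2).sum(-1, keepdim=True)
    l_dir = to_light / dist2.sqrt()
    cosine = (sdf_normal(field, pts) * l_dir).sum(-1, keepdim=True).clamp(min=0.0)
    shading = brdf.sigmoid() * cosine / dist2        # assumed Lambertian point light
    accumulated = (weights[:, None] * shading).sum(0)

    # 3) Light visibility from N_L samples, multiplied with the shading.
    return light_visibility(field, x_s, light_pos, n_l) * accumulated  # final RGB
```

Multiplying the accumulated shading by the light visibility is what lets cast shadows constrain geometry beyond the visible surface: a wrong occluder shape produces a wrong shadow and hence a photometric error.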
More Results (no audio)
BibTeX
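An entry assembled from the title, authors, and venue listed above (the official entry may differ slightly):

```bibtex
@inproceedings{yang2022s3nerf,
  title     = {S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint},
  author    = {Yang, Wenqi and Chen, Guanying and Chen, Chaofeng and Chen, Zhenfang and Wong, Kwan-Yee K.},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2022}
}
```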
Acknowledgements
The website template was borrowed from Michaël Gharbi.