PS-NeRF: Neural Inverse Rendering
for Multi-view Photometric Stereo
Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. Our method first employs multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.
Given multi-light images of an object captured from M sparse views, our goal is to simultaneously reconstruct its shape, materials, and lights. Multiple images are captured for each view, each illuminated by a single unknown directional light.
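To build intuition for what multi-light observations provide, here is a minimal sketch of classic calibrated Lambertian photometric stereo, which recovers a per-pixel normal and albedo by least squares. This is a simplification for illustration only: in our setting the lights are unknown and the per-view normal maps are produced by a learned estimator, not by this closed-form solve.

```python
import numpy as np

def lambertian_ps_normals(intensities, light_dirs):
    """Recover per-pixel unit normals and albedo from multi-light intensities.

    intensities: (L, P) observed intensities for L lights and P pixels.
    light_dirs:  (L, 3) unit light directions (assumed known here, unlike
                 the uncalibrated setting considered in the paper).
    Solves intensities = light_dirs @ (albedo * n) per pixel (Lambertian model).
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, P)
    albedo = np.linalg.norm(g, axis=0)                            # per-pixel albedo
    normals = g / np.clip(albedo, 1e-8, None)                     # unit normals
    return normals.T, albedo                                      # (P, 3), (P,)
```

With at least three non-coplanar lights and no shadows, the system is well posed and the recovered vector's norm gives the albedo while its direction gives the normal.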
In the first stage, we estimate a guidance normal map for each view, which is used to supervise the normals derived from the density field. This direct normal supervision is expected to provide a strong regularization on the density field, leading to an accurate surface.
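The normal supervision described above can be sketched as a simple cosine-based penalty between the guidance normals and the normals derived from the density field (the negative, normalized density gradient). The exact loss form below is an illustrative assumption, not necessarily the one used in the paper.

```python
import numpy as np

def normal_supervision_loss(n_field, n_guide):
    """Cosine loss between density-field normals and guidance normals.

    n_field: (P, 3) normals derived from the density field, e.g. the
             negative, normalized gradient of the density.
    n_guide: (P, 3) per-view guidance normals from photometric stereo.
    Returns the mean of 1 - cos(angle) over pixels (an assumed form).
    """
    n_field = n_field / np.linalg.norm(n_field, axis=1, keepdims=True)
    n_guide = n_guide / np.linalg.norm(n_guide, axis=1, keepdims=True)
    cos = np.sum(n_field * n_guide, axis=1)
    return np.mean(1.0 - cos)
```

The loss is zero when the two normal fields agree everywhere and grows with the angular deviation, which is what pulls the density field toward an accurate surface.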
In the second stage, based on the learned density field as the shape prior, we jointly optimize the surface normals, materials, and lights using a shadow-aware rendering layer.
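A minimal sketch of a shadow-aware rendering layer follows, using a pure Lambertian term for brevity; the method itself optimizes spatially-varying BRDFs, so this is an assumption-laden simplification of the image formation model.

```python
import numpy as np

def render_shadow_aware(normals, albedo, light_dir, light_int, visibility):
    """Render pixel intensities under one directional light with shadows.

    normals:    (P, 3) unit surface normals.
    albedo:     (P,) diffuse albedo (stands in for the spatially-varying
                BRDF used by the actual method).
    light_dir:  (3,) unit light direction.
    light_int:  scalar light intensity.
    visibility: (P,) in [0, 1]; 0 where the surface point is shadowed.
    """
    shading = np.clip(normals @ light_dir, 0.0, None)  # max(n . l, 0)
    return visibility * albedo * light_int * shading
```

Because every factor is differentiable (visibility can be softened), gradients of a photometric loss flow back into the normals, materials, and lights, which is what makes the joint optimization possible.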
More Results (no audio)