|
The standard Neural Radiance Fields (NeRF) paradigm adopts a viewer-centered methodology, entangling illumination and material reflectance into the emission of 3D points. This simplified rendering approach struggles to accurately model images captured under adverse lighting conditions, such as low light or over-exposure. Motivated by the ancient Greek emission theory, which posits that visual perception results from rays emanating from the eyes, we slightly refine the conventional NeRF framework to train NeRF under challenging lighting conditions and to generate normal-light novel views in an unsupervised manner. We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects. In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during rendering. The Concealing Field thus compels NeRF to learn reasonable density and colour estimations for objects even in dimly lit scenes. Similarly, the Concealing Field can mitigate over-exposed emissions during rendering. Furthermore, we present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[1]. You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction (BMVC 2022)
[2]. Learning Temporal Consistency for Low Light Video Enhancement from Single Images (CVPR 2021)
An overview of our method is shown below. We design two types of concealing fields: the local concealing field Ω and the global concealing field ΘG. In addition, several unsupervised losses are added to guide concealing field generation.
Trained on adverse-lighting images Cadv, Aleth-NeRF performs unsupervised lightness correction by (a) removing concealing fields in low-light conditions and (b) adding concealing fields in over-exposure conditions.
Along the camera ray r (z axis), the concealing fields and the density σ exhibit a negative correlation. This validates that concealing fields are separated from density and thus rarely participate in scene rendering: concealing fields exist mainly at locations r(i) with sparse density, i.e., the air outside the objects.
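The core idea, attenuating accumulated radiance with per-sample concealing values during training and dropping them at inference, can be illustrated with a minimal NumPy sketch. This is our own simplified rendering of the concept; the function name, argument layout, and the exact placement of the concealing term are assumptions, not the authors' implementation:

```python
import numpy as np

def render_ray(sigma, color, delta, conceal=None):
    """Volume-render one ray of N samples.

    sigma:   (N,) densities
    color:   (N, 3) per-sample RGB
    delta:   (N,) distances between adjacent samples
    conceal: optional (N,) concealing values in (0, 1]; multiplied into
             the transmittance when training on low-light images, and
             omitted (None) at inference to reveal the normal-light scene.
    """
    alpha = 1.0 - np.exp(-sigma * delta)  # per-sample opacity
    # Transmittance up to (but excluding) each sample.
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], sigma * delta]))[:-1])
    if conceal is not None:
        # Concealing values accumulate multiplicatively along the ray,
        # dimming the radiance that reaches the camera.
        trans = trans * np.cumprod(conceal)
    weights = trans * alpha                         # (N,)
    return (weights[:, None] * color).sum(axis=0)   # rendered RGB, (3,)
```

Rendering the same ray with and without the concealing term shows the attenuation: the concealed result is strictly darker, which is how training on low-light images can still drive the underlying density and colour toward their normal-light values.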
We collect the first paired Low-light & Over-exposure & Normal-light Multi-view dataset, the LOM dataset, including 5 scenes: buu, chair, sofa, bike, and shrub. Download the LOM dataset from: [google drive] or [baiduyun (passwd: ve1t)]. You can also download the experimental results of Aleth-NeRF and comparison methods: Low-Light Results from [google drive] or [baiduyun (passwd: 729w)], and Over-Exposure Results from [google drive] or [baiduyun (passwd: 6q4k)].
@inproceedings{cui_alethnerf,
  title     = {Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption},
  author    = {Cui, Ziteng and Gu, Lin and Sun, Xiao and Ma, Xianzheng and Qiao, Yu and Harada, Tatsuya},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2024}
}
@misc{cui2023alethnerf,
  title         = {Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields},
  author        = {Ziteng Cui and Lin Gu and Xiao Sun and Xianzheng Ma and Yu Qiao and Tatsuya Harada},
  year          = {2023},
  eprint        = {2303.05807},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
Ziteng Cui, Kunchang Li, Lin Gu, Shenghan Su et al.
You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. BMVC 2022 (ArXiv, Github).
|
Ziteng Cui, Guo-Jun Qi, Lin Gu, Shaodi You et al.
Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection. ICCV 2021 (ArXiv, Github).
Acknowledgements