
Learning to See in the Extremely Dark


Source: arXiv

Abstract

Learning-based methods have made promising advances in low-light RAW image enhancement, but their capability in extremely dark scenes, where environmental illuminance drops as low as 0.0001 lux, remains underexplored due to the lack of corresponding datasets. To this end, we propose a paired-to-paired data synthesis pipeline capable of generating well-calibrated extremely low-light RAW images across three precise illuminance ranges, 0.01-0.1 lux, 0.001-0.01 lux, and 0.0001-0.001 lux, together with high-quality sRGB references, forming a large-scale paired dataset named See-in-the-Extremely-Dark (SIED) to benchmark low-light RAW image enhancement approaches. Furthermore, we propose a diffusion-based framework that leverages the generative ability and intrinsic denoising property of diffusion models to restore visually pleasing results from extremely low-SNR RAW inputs, in which an Adaptive Illumination Correction Module (AICM) and a color consistency loss are introduced to ensure accurate exposure correction and color restoration. Extensive experiments on the proposed SIED and publicly available benchmarks demonstrate the effectiveness of our method. The code and dataset are available at https://github.com/JianghaiSCU/SIED.
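The abstract does not specify the form of the color consistency loss. A common formulation for brightness-independent color supervision is the cosine distance between per-pixel RGB vectors of the restored image and its reference; the sketch below illustrates that idea and is an assumption, not the paper's actual loss.

```python
import numpy as np

def color_consistency_loss(pred, ref, eps=1e-6):
    """Penalize per-pixel color (hue/chromaticity) mismatch between a
    restored image `pred` and a reference `ref`, independent of overall
    brightness, via cosine distance between RGB vectors.

    pred, ref: float arrays of shape (H, W, 3), values in [0, 1].
    Returns the mean of (1 - cosine similarity) over all pixels.
    """
    dot = np.sum(pred * ref, axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(ref, axis=-1)
    cos_sim = dot / (norms + eps)          # 1 when colors align, 0 when orthogonal
    return float(np.mean(1.0 - cos_sim))
```

Because the cosine similarity is invariant to per-pixel scaling, uniformly brightening or dimming the prediction leaves this loss near zero, so it complements (rather than duplicates) an exposure-correction term such as the one the AICM targets.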

Hai Jiang, Binhao Guan, Zhen Liu, Xiaohong Liu, Jian Yu, Zheng Liu, Songchen Han, Shuaicheng Liu

Subject areas: Computing and Computer Technology; Electronic Technology Applications

Hai Jiang, Binhao Guan, Zhen Liu, Xiaohong Liu, Jian Yu, Zheng Liu, Songchen Han, Shuaicheng Liu. Learning to See in the Extremely Dark [EB/OL]. (2025-06-26) [2025-07-16]. https://arxiv.org/abs/2506.21132.
