Enlighten Anything: When Segment Anything Model Meets Low-Light Image Enhancement. (arXiv:2306.10286v3 [cs.CV] UPDATED)
By: Qihan Zhao, Xiaofeng Zhang, Hao Tang, Chaochen Gu, Shanying Zhu. Posted: June 23, 2023
Image restoration is a low-level vision task, and most CNN-based methods are
designed as black boxes, lacking transparency and interpretability. Many
unsupervised approaches ignore the degradation of visible information in
low-light scenes, which seriously hampers the aggregation of complementary
information and prevents the fusion algorithm from producing satisfactory
results under extreme conditions. In this paper, we propose
Enlighten-Anything, which enhances low-light images by fusing them with the
semantic intent of SAM segmentation, yielding fused images with good visual
perception. The generalization ability of unsupervised learning is greatly
improved: experiments on the LOL dataset show that our method improves PSNR
by 3 dB and SSIM by 8% over the baseline. The zero-shot capability of SAM
provides a powerful aid for unsupervised low-light enhancement. The source
code of Enlighten-Anything can be obtained from
https://github.com/zhangbaijin/enlighten-anything
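
To make the pipeline concrete, below is a minimal Python sketch of the core idea: run SAM zero-shot on a low-light image, collapse the per-object masks into a semantic prior, and fuse that prior with the image. The SAM calls use the public segment-anything package; the `SemanticFusion` head, the checkpoint path, and the mask-averaging step are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Hypothetical fusion head (name and architecture are assumptions for
# illustration; the paper's fusion module may differ substantially).
class SemanticFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, prior):
        # image: (B, 3, H, W) in [0, 1]; prior: (B, 1, H, W) semantic map
        return self.head(torch.cat([image, prior], dim=1))

# Stand-in low-light RGB image (HxWx3 uint8, dark pixel values).
low_light_rgb = np.random.randint(0, 40, (256, 256, 3), dtype=np.uint8)

# 1) Zero-shot SAM segmentation of the low-light image. The checkpoint
#    filename is the public ViT-B weight from the SAM repository.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
masks = SamAutomaticMaskGenerator(sam).generate(low_light_rgb)

# 2) Collapse the per-object boolean masks into one prior channel.
if masks:
    prior = np.stack([m["segmentation"] for m in masks]).mean(0)  # (H, W)
else:
    prior = np.zeros(low_light_rgb.shape[:2])  # no objects found

# 3) Fuse image and semantic prior to produce the enhanced output.
image_t = torch.from_numpy(low_light_rgb).permute(2, 0, 1)[None].float() / 255
prior_t = torch.from_numpy(prior)[None, None].float()
enhanced = SemanticFusion()(image_t, prior_t)  # (1, 3, 256, 256)
```

In a real training setup the fusion head would be optimized with unsupervised enhancement losses; the sketch only shows how SAM's mask output can be turned into a dense prior that conditions the enhancement network.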