Apple researchers, in collaboration with Purdue University, recently unveiled a photography technique called DarkDiff, which aims to solve the problem of noisy, blurry smartphone photos in extreme low-light conditions. The technique brings generative diffusion models directly into the camera's image signal processing (ISP) pipeline, allowing a phone camera to recover striking detail even in near-darkness.


Traditional night modes rely on post-processing algorithms to suppress noise, which often leaves photos with a waxy, oil-painting-like look or heavy smearing. DarkDiff's innovation is that it is not a simple "post-editing" step: the AI participates from the very beginning of image formation. Trained on large amounts of image data, it learns to "predict" and restore the textures and colors the sensor loses in low light. To keep the AI from "fabricating" objects that were never there, the research team also introduced a local patch attention mechanism, which ensures that every generated detail matches the actual structure of the scene being photographed.
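
The article does not spell out how this local patch attention is implemented, but the general idea of restricting attention to small windows can be illustrated with a minimal sketch. The PyTorch snippet below is such a sketch: the class name LocalPatchAttention, the window size, and the head count are illustrative assumptions, not DarkDiff's actual design.

```python
# Illustrative sketch of "local patch attention": self-attention is confined to
# small non-overlapping windows, so generated detail stays tied to the local
# structure of the input rather than being invented globally.
# (All names and sizes here are assumptions for illustration only.)
import torch
import torch.nn as nn

class LocalPatchAttention(nn.Module):
    def __init__(self, channels: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; H and W assumed divisible by the window size.
        b, c, h, w = x.shape
        p = self.window
        # Partition the feature map into (H/p * W/p) windows of p*p tokens each.
        x = x.view(b, c, h // p, p, w // p, p)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(b * (h // p) * (w // p), p * p, c)
        # Attention only looks within each window, never across the whole image.
        out, _ = self.attn(x, x, x)
        # Undo the window partition back to (B, C, H, W).
        out = out.reshape(b, h // p, w // p, p, p, c)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return out

# Example: a single 64-channel feature map at 64x64 resolution.
feats = torch.randn(1, 64, 64, 64)
print(LocalPatchAttention(64)(feats).shape)  # torch.Size([1, 64, 64, 64])
```

Because each window only attends to its own pixels, the module can sharpen textures locally without giving the generator freedom to rearrange the scene at a global scale.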

In testing, the researchers used a Sony A7S II to simulate an extremely dark environment, and with an exposure time as short as 0.033 seconds, DarkDiff produced photos of very high quality, comparable to reference shots taken on a tripod with a 300-times-longer exposure.


Although the results are impressive, the technology still faces challenges before it can be built into an iPhone. Diffusion models demand substantial computing power and energy, so a future deployment may rely on cloud processing. The model also currently has limitations in recognizing non-English text in low-light scenes. Apple has not said when the technology will be commercially available, but it is a clear demonstration of what AI-enhanced mobile photography could become.