Apple’s Deep Fusion photography comes to iPhone 11s in beta


Deep Fusion takes an underexposed photo for sharpness and blends it with three neutral pictures and a long-exposure image on a per-pixel level to achieve a highly customized result. The machine learning system examines the context of the picture to understand where a pixel sits on the frequency spectrum. Pixels for clouds will be treated differently than those for skin, for example. From there, the technology pulls structure and tonality from the frames based on those ratios.
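
Apple hasn't published the exact math, but the core idea of per-pixel, detail-weighted blending can be sketched in a few lines. The Swift snippet below is a deliberately simplified, hypothetical illustration: the function name, the gradient heuristic and the weighting factor are assumptions for demonstration only, not Apple's pipeline, which relies on machine-learned weights running on the A13's neural engine.

```swift
// Toy illustration (not Apple's actual Deep Fusion pipeline): blend a sharp
// but noisy "short" frame with a clean "long" frame, weighting each pixel by
// how much local detail the short frame contains.
func detailWeightedBlend(short: [[Double]], long: [[Double]]) -> [[Double]] {
    let height = short.count
    let width = short[0].count
    var output = short
    for y in 0..<height {
        for x in 0..<width {
            // Crude high-frequency estimate: gradient magnitude in the short frame.
            let right = x + 1 < width ? short[y][x + 1] : short[y][x]
            let down = y + 1 < height ? short[y + 1][x] : short[y][x]
            let detail = abs(right - short[y][x]) + abs(down - short[y][x])
            // Map detail to a 0...1 weight: edges and texture lean on the sharp
            // frame, flat regions lean on the cleaner long-exposure frame.
            let weight = min(1.0, detail * 4.0)
            output[y][x] = weight * short[y][x] + (1.0 - weight) * long[y][x]
        }
    }
    return output
}

// Tiny synthetic example: a 4x4 patch with an edge in the short frame.
let shortFrame: [[Double]] = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9]
]
let longFrame = Array(repeating: Array(repeating: 0.5, count: 4), count: 4)
print(detailWeightedBlend(short: shortFrame, long: longFrame))
```

The point of the sketch is the shape of the decision, not the heuristic itself: high-frequency regions keep the sharp capture, while smooth regions borrow from the better-exposed one.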

There are some gotchas. You can’t use this with your phone’s ultra-wide angle lens, as hinted earlier, and bright telephoto shots will revert to Smart HDR to maintain better exposure. The capture process is quick, but it’ll take a second for your iPhone to process the image at full quality. And yes, you absolutely need a 2019 iPhone for this to work — it’s dependent on the A13 chip.

You’ll have to wait until the general release of iOS 13.2 if you’re not willing to experiment. Even so, this could represent a minor coup for Apple. The company has been accused of slipping on photography in the past, letting AI-centric phone cameras like Google’s pull ahead. Although the iPhone 11 series made strides in photo quality out of the box (particularly with its Night Mode), Deep Fusion gives Apple an AI-powered camera feature that boosts quality further and might provide an edge over rivals in key scenarios.

