Apple's Deep Fusion camera feature will arrive soon in an upcoming iOS 13.2 developer beta update

iPhone 11 Pro (Image credit: Joseph Keller / iMore)

What you need to know

  • Apple is rolling out Deep Fusion with an upcoming iOS 13.2 developer beta.
  • The camera feature uses machine learning to extract as much data as possible from a scene and create vividly detailed images.
  • The feature only works with the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max.

UPDATE: While initial reports indicated the iOS 13.2 developer beta with Deep Fusion might be released "today", it's our understanding that the actual timeline is "very soon". We've updated the article to reflect that.

During its iPhone event last month, Apple unveiled the new iPhone 11 models with impressive cameras and a new "computational photography mad science" feature called Deep Fusion. Unfortunately, the feature was not available right away with iOS 13, but Apple is now starting to roll it out with an upcoming developer beta.

According to The Verge, Deep Fusion will be available for the iPhone 11 models through an upcoming iOS 13.2 developer beta rolling out soon. That will give us our first chance to test the feature ourselves rather than just looking at the images Apple has shared.

Deep Fusion uses machine learning to capture more data within an image. Phil Schiller explained that it captures a series of short exposure shots before you even press the shutter, plus one long exposure shot afterward, and then combines all of the images to produce the best result possible.
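
As a rough illustration of that capture scheme, here's a minimal Swift sketch of a rolling buffer that keeps a few short exposures from before the shutter press and pairs them with a long exposure taken afterward. The `Exposure` and `CaptureBuffer` types, the buffer size, and the exposure values are illustrative assumptions, not Apple's actual camera pipeline.

```swift
import Foundation

// Hypothetical sketch: the camera keeps a small rolling buffer of short
// exposures before the shutter press, then adds one long exposure afterward.
struct Exposure {
    let durationMs: Double
    let timestamp: Date
}

final class CaptureBuffer {
    private var shortExposures: [Exposure] = []
    private let capacity = 4   // assumed number of pre-shutter frames

    // Called continuously while the camera is active, before the shutter press.
    func addShortExposure(_ exposure: Exposure) {
        shortExposures.append(exposure)
        if shortExposures.count > capacity {
            shortExposures.removeFirst()
        }
    }

    // Called when the shutter is pressed: pair the buffered short exposures
    // with one long exposure for later fusion.
    func shutterPressed(longExposure: Exposure) -> (shorts: [Exposure], long: Exposure) {
        return (shortExposures, longExposure)
    }
}

// Example: buffer a stream of short exposures, then press the shutter.
let buffer = CaptureBuffer()
for _ in 0..<10 {
    buffer.addShortExposure(Exposure(durationMs: 4, timestamp: Date()))
}
let bracket = buffer.shutterPressed(longExposure: Exposure(durationMs: 66, timestamp: Date()))
print("Fusing \(bracket.shorts.count) short exposures with 1 long exposure")
```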

Additionally, Apple stated the feature does "pixel-by-pixel processing" to pull as much data as possible from a scene and create an image with the proper detail. The Verge broke down how the process works, step by step (a rough code sketch of the pipeline follows the list).

  1. By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots, and then one longer exposure to capture detail.
  2. Those three regular shots and long-exposure shot are merged into what Apple calls a "synthetic long" — this is a major difference from Smart HDR.
  3. Deep Fusion picks the short exposure image with the most detail and merges it with the synthetic long exposure — unlike Smart HDR, Deep Fusion only merges these two frames, not more. These two images are also processed for noise differently than Smart HDR, in a way that's better for Deep Fusion.
  4. The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail — the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are the highest level. This generates a series of weightings for how to blend the two images — taking detail from one and tone, color, and luminance from the other.
  5. The final image is generated.
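
To make steps 2 through 4 more concrete, here's a hypothetical Swift sketch of the fusion idea: average the regular shots and the long exposure into a "synthetic long", pick the sharpest short frame, then blend the two pixel by pixel using per-pixel detail weights. The single-channel `Frame` type, the variance-based `detailScore`, and the simple weighted average in `fuse` are assumptions for illustration only; Apple has not published its actual processing.

```swift
import Foundation

struct Frame {
    var pixels: [Double]    // simplified single-channel image
}

// Approximate "detail" with pixel variance so the sharpest short exposure can be picked (step 3).
func detailScore(_ frame: Frame) -> Double {
    let mean = frame.pixels.reduce(0, +) / Double(frame.pixels.count)
    return frame.pixels.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(frame.pixels.count)
}

// Step 2: merge the regular shots and the long exposure into a "synthetic long" (a plain average here).
func syntheticLong(regularShots: [Frame], longExposure: Frame) -> Frame {
    let all = regularShots + [longExposure]
    let pixels = (0..<longExposure.pixels.count).map { i in
        all.reduce(0.0) { $0 + $1.pixels[i] } / Double(all.count)
    }
    return Frame(pixels: pixels)
}

// Steps 3-4: blend the sharpest short frame with the synthetic long, pixel by pixel.
// Higher weights pull detail from the short frame (hair, fabric); lower weights
// favor the synthetic long (sky, walls). Assumes at least one short frame.
func fuse(shortFrames: [Frame], synthetic: Frame, detailWeights: [Double]) -> Frame {
    let best = shortFrames.max(by: { detailScore($0) < detailScore($1) })!
    let pixels = (0..<synthetic.pixels.count).map { i in
        detailWeights[i] * best.pixels[i] + (1 - detailWeights[i]) * synthetic.pixels[i]
    }
    return Frame(pixels: pixels)
}

// Example with tiny 4-pixel frames and hand-picked weights.
let shorts = [Frame(pixels: [0.2, 0.8, 0.1, 0.9]), Frame(pixels: [0.5, 0.5, 0.5, 0.5])]
let long = Frame(pixels: [0.4, 0.6, 0.3, 0.7])
let synthetic = syntheticLong(regularShots: shorts, longExposure: long)
let result = fuse(shortFrames: shorts, synthetic: synthetic, detailWeights: [0.9, 0.9, 0.2, 0.2])
print(result.pixels)
```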

Here's a sample image of Deep Fusion in action that Apple shared with The Verge.


iPhone 11 Deep Fusion sample

Nilay Patel notes that unlike Night Mode, Deep Fusion will not alert users when it is turned on and that it will not work with the ultra wide lens, only the wide and telephoto cameras.

We look forward to testing out the feature and seeing how it stacks up. Judging from the technology that goes into it and the images Apple has released, it looks very impressive.

Apple has been on a rapid update cycle with iOS 13 (iOS 13.1.2 is now available for all iPhone users), so more updates appear to be on the way. One of those is sure to include Deep Fusion.

Danny Zepeda