
Portrait Mode: iPhone X Camera vs. DSLR vs. Pixel 2 XL

Computational photography is the biggest leap forward in image capture since digital photography freed us from film. iPhone X — like iPhone 8 Plus and iPhone 7 Plus — uses it and a dual-lens camera system to capture depth data, then applies machine learning to create an artificial bokeh effect. The Pixel 2 XL repurposes its phase-detection auto-focus (PDAF) system to grab depth data, combines it with a machine-learned segmentation map, and creates a similar artificial bokeh.

But how do they compare to the optical quality of a Canon 5D Mark III paired with a 50mm ƒ/1.4 lens that doesn't need to compute or simulate anything?

iPhone X = DSLR-quality... Maybe?

Canon 5D Mark III with 50mm ƒ/1.4 lens

This is the reference. An amazing sensor in the camera body combined with a terrific fast prime lens makes for an amazingly terrific photo. Go figure.

That's because there's no depth data, segmentation mapping, machine learning, or any other processing involved — just the gorgeous physics of light and glass. The separation between subject and background is "perfect" and the bokeh is consistent across elements and lines.

Apple iPhone X

On iPhone X, like iPhone 8 Plus and iPhone 7 Plus, Apple uses a dual-lens camera system to capture both the image and a layered depth map. (It was 9 layers as of iOS 10; it may be more by now, including foreground and background layers.) It then uses machine learning to separate the subject and apply a custom disc-blur to the background and foreground layers. Because of the layers, it can apply the custom disc-blur to lesser and greater degrees depending on the depth data, so closer background elements can receive less blur than background elements that are further away.
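For anyone curious what "blur in proportion to depth" might look like in code, here's a minimal sketch using Core Image's CIMaskedVariableBlur filter, which scales its blur radius by the brightness of a mask image. Treating a disparity-style map as that mask gives a rough approximation of the effect; it is not Apple's Portrait Mode pipeline (which uses its own custom disc-blur), and the photo and mask inputs are placeholders.

```swift
import CoreImage

// Rough sketch: blur the photo in proportion to a depth/disparity mask.
// `photo` is the full-resolution image; `blurMask` is a grayscale image
// where brighter pixels mean "further from the focal plane" (placeholders here).
func depthProportionalBlur(photo: CIImage, blurMask: CIImage, maxRadius: Double) -> CIImage? {
    // CIMaskedVariableBlur scales its blur radius by the mask's brightness,
    // so near-the-subject pixels (dark in the mask) stay sharp and distant ones soften.
    guard let blur = CIFilter(name: "CIMaskedVariableBlur") else { return nil }
    blur.setValue(photo, forKey: kCIInputImageKey)
    blur.setValue(blurMask, forKey: "inputMask")
    blur.setValue(maxRadius, forKey: kCIInputRadiusKey)
    // Crop back to the original extent, since blurring expands the image edges.
    return blur.outputImage?.cropped(to: photo.extent)
}
```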

Apple can display the Portrait Mode effect live during capture, and stores depth data as part of the HEIF (high-efficiency image format) file or stuffs it into the header of JPG images. That way, it's non-destructive and you can toggle the depth effect on or off at any time.
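If you want to poke at that stored depth data yourself, ImageIO exposes it as an auxiliary data dictionary alongside the main image, and AVDepthData can reconstitute it. A minimal sketch, assuming a Portrait Mode HEIC or JPG on disk (the URL is a placeholder, and Portrait captures typically carry disparity rather than absolute depth):

```swift
import AVFoundation
import ImageIO

// Minimal sketch: pull the depth/disparity map stored alongside a
// Portrait Mode HEIC or JPG. The URL is a placeholder for a real capture.
func loadDepthData(from url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          // Portrait captures typically store disparity; a real app might also check depth.
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]
    else { return nil }
    return try? AVDepthData(fromDictionaryRepresentation: info)
}
```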

In practice, Apple's Portrait Mode looks overly "warm" to me. It appears as though the iPhone's camera system is allowing highlights to blow out in an effort to preserve skin tones. It's generally consistent in how it applies the blur effect but can be far too soft around the edges. In low light, the custom disc-blur can look gorgeous and the noise seems deliberately pushed away from a mechanical pattern and into an artistic grain.

The result is imperfect images that pack powerful emotional characteristics. You see them better than they look.

Google Pixel 2 XL

On Pixel 2 and Pixel 2 XL, Google uses machine learning to analyze the image and create a segmentation mask that separates the subject from the background. If available, Google also double-dips on the dual pixels in the regular single-lens camera's phase-detection auto-focus (PDAF) system to get baseline depth data. Google then combines the two and applies a blur effect in proportion to the depth. (I'm not sure what kind of blur Google is using; it may be a disc-blur like Apple's.)
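Strip away the branding and the recipe Google describes reduces to: start with a person/background segmentation mask, add a coarse disparity estimate from the dual pixels, merge the two, and let the merged map drive the blur strength. Here's a rough sketch of that merge step, written in the same Core Image terms as the earlier example purely for illustration; it's an assumption about how such a combination could work, not Google's implementation, and the inputs are placeholders.

```swift
import CoreImage

// Sketch of the combination step: zero out blur wherever the segmentation
// mask marks the subject, and elsewhere let the coarse disparity map drive
// how strong the blur should be. Inputs are placeholders, not Google's data.
func combinedBlurMask(segmentation: CIImage, disparity: CIImage) -> CIImage? {
    // Invert the segmentation so subject pixels become black (no blur).
    guard let invert = CIFilter(name: "CIColorInvert") else { return nil }
    invert.setValue(segmentation, forKey: kCIInputImageKey)

    // Multiply the inverted mask by disparity: background blur scales with
    // distance, while subject pixels stay at zero and remain sharp.
    guard let inverted = invert.outputImage,
          let multiply = CIFilter(name: "CIMultiplyCompositing") else { return nil }
    multiply.setValue(inverted, forKey: kCIInputImageKey)
    multiply.setValue(disparity, forKey: kCIInputBackgroundImageKey)
    return multiply.outputImage
}
```

The merged mask could then feed the same masked variable blur shown in the earlier sketch.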

In practice, Google's Portrait Mode looks a little "cold" to me. It seems to want to prevent blowouts even at the expense of skin tones. The blurring isn't as consistent, but the edge detection is far, far better. At times, it can look too sudden, almost like a cutout, and it will preserve details even a real camera wouldn't. It doesn't resort to artistry to compensate for the limitations of the system; it pushes towards a more perfect system.

The result is images that are almost clinical in their precision. They sometimes look better than you see them, even when compared to a DSLR.

Moving targets

Which photo you prefer will be entirely subjective. Some people will gravitate towards the warmth and artistry of iPhone. Others, the almost scientific precision of Pixel. Personally, I prefer the DSLR. It's not too hot, not too cold, not too loose, not too severe.

It's also completely unbiased. Apple and Google's portrait modes still skew heavily towards human faces — it's what all that face detection is used for. You can get heart-stopping results with pets and objects, but there just aren't enough models yet to cover all the wondrous diversity found in the world.

The good news is that computational photography is new and improving rapidly. Apple and Google can keep pushing new bits, new neural networks, and new machine learning models to keep making it better and better.

Portrait mode on iPhone has gotten substantially better over the last year. I imagine the same will be true for both companies this year.

Rene Ritchie

Rene Ritchie is one of the most respected Apple analysts in the business, reaching a combined audience of over 40 million readers a month. His YouTube channel, Vector, has over 90 thousand subscribers and 14 million views and his podcasts, including Debug, have been downloaded over 20 million times. He also regularly co-hosts MacBreak Weekly for the TWiT network and co-hosted CES Live! and Talk Mobile. Based in Montreal, Rene is a former director of product marketing, web developer, and graphic designer. He's authored several books and appeared on numerous television and radio segments to discuss Apple and the technology industry. When not working, he likes to cook, grapple, and spend time with his friends and family.

13 Comments
  • Tried really really hard to inspect the bokeh quality on the smartphone images, but my eyes wouldn't play along and just kept focusing on all that noise on the bottom left quarters. So I gave up and posted this comment instead.
  • DSLR definitely wins hands down for me. If I had to pick a 2nd choice it would be the iPhone. The Pixel one just looks flat lifeless and as mentioned above wayyyy too cool. You can fix cool in post but the depth mapping on the Pixel is not as smooth of a transition (totally obvious in the hair on the left side of the pic)
  • Would be more interesting to do this against a 1” sensor compact camera (Sony RX series for example).
  • Interesting that you criticise the image saturation on the iPhone, but find the canon image "just the gorgeous physics of light and glass" when there is similar saturation on the left shoulder of the girl. The Pixel has run away from any saturation, and in doing so has produced a compressed image with very little impact. Yes, the Canon is the best image, but I'd rather have the iPhone image in my album, than the Pixel 2.
  • Actually all modern cameras today use some form of computational photography (not just an Apple marketing term) including your Canon 5D. The more important point is that the number of bits per channel has increased significantly since the early days, along with processing performance and memory, and, as you have spoken about in previous articles, the use of statistical based AI. One other thing that bothers me about your comparison reviews is the choice of language when comparing other products to Apple products. An example is using the word "scientific" to describe the method of producing an image on the Pixel phone along with the word "cold" instead of the word cool. Whereas in your description of iPhone X's image, you describe it as "overly warm," instead of the word hot. By writing in this manner, you are conveying that Google creates products that are cold and heartless, where Apple embodies the human spirit. It may not have been your intention, but your unconscious bias towards Apple is also consistent in your mannerisms seen during your appearances on podcasts as well. This makes it difficult to take your reviews seriously because we already know how enamored you are with many of Apple's products. Nevertheless it is still important to me to read your reviews, to obtain a perspective I may not have noticed. I know you are Canadian, Rene, but if you are still close enough to wish you a happy Thanksgiving. PS I develop on both platforms with a ton of Mac & iOS devices and give Apple credit where credit is due. To me it is all about the manipulation of atoms by the many gifted people over history that have given us so many things to be thankful for, including the right for each of us to express our views.
  • Neither of the camera pics is very good, compared to the DSLR pic. Zoom in on each, look at the pixelation on the phone pics, the jagged circles of her eyes. Then zoom in on the DSLR pic. No comparison. There are real cameras, and there are phone cameras. Still a vast difference. For everyday shots, phones are generally fine. For serious work, nothing will ever beat an amazing camera with a terrific lens.
  • As a novice photographer, the DSLR wins hands down; there is no substitute for the depth of field and physical focusing that you can do with the larger lens and sensor. That being said... iPhone looks better than pixel, while the eyes have artifacts on iPhone the hair looks rough (sharpened?) on pixel. The pixel is too cool, background looks like fake grain, at least the iPhone blur is fairly soft, even though the photo is a touch warm and saturated, it looks more natural. ...if you look at the hair on the DSLR then iPhone, DSLR then pixel, the iPhone is closer to the DSLR look. I also notice the effect is improved from iPhone 7+ w/iOS 10. Perhaps in a few more years Apple will advance to a place where it may be hard to tell from DSLR, but I doubt it, just like it has taken until last year for digital cameras to get close to the color of film, it may take a decade or so for tech to mimic depth of field in a natural looking way. I'm observing via safari on iPad Pro 9.7 iOS 11.1.2, so I'm assuming I'm getting the full representation of these photos (P3 color).
  • The DSLR wins clearly, from the point of view of the skilled and discerning photographer but that is irrelevant. 90% of camera users (DSLR, interchangeable lens mirrorless, point/shoot, or smartphone) have no idea what DoF is, much less how to control it for effect. Nor would they ever consciously choose to NOT use "portrait mode" so as to make an environmental portrait (in which a usually in-focus setting/background conveys a story about the subject beyond appearance). For those who use "portrait mode" on a smartphone, never before will so many bad portraits have such nice bokeh. It all reminds me of when Canon introduced the AE-1 SLR in the 1970s, probably the first affordable consumer SLR with automatic exposure (and a Depth of Field preview button); IIRC, one reviewer said "Never before will so many bad pictures be so well-exposed." A serious photographer learns and uses composition and technique; anyone with money can buy the latest tools and technology. The main benefit of smartphone camera features is in marketing.
  • DSLR owns this but I'm not sure why anyone would think any different.
    The pixel is the camera choice for me based on the above samples. I prefer it when subjects don't look jaundiced and the pixel is much closer in color spectrum to the Canon. More realistic in my opinion.
  • It's no surprise that the DSLR wins. More glass, bigger sensor. That said, computational photography these days is getting really impressive.
  • I can appreciate the technical points made about the Canon DSLR and 50mm 1.4 lens (I use that lens myself). I am also impressed with the detail retained by the Pixel's single lens. I still end up thinking I prefer to look at the iPhone portrait. Something about the way the shadows play across her face gives it a playful moodiness (entirely an emotional response I think) that is missing (for me) in the other photos. When looking at a portrait, I'm not focusing on the details around the edges of the head. I definitely want the background blurred to the point where I am not distracted by it. So, all three cameras get a pass for that. But when that's accomplished, I want the face to say something to me. I thought the iPhone did a tiny bit better in that respect.
  • Apple's images are much too warm. This is so noticeable, that it's hard to pay attention to the Bokeh, esp. with the reference and Pixel images beside it. I wish they would change their processing to not do this.
  • A 50mm lens is not a good choice for such portraits. The depth of field is still pretty substantial compared to, say, an 85mm f/1.2, or Nikon's new 105mm f/1.4 lens. And the fake depth of field is not going to look right to anyone familiar with the real thing.