Wow, Pixel 2 XL can't actually shoot in Portrait Mode — it does it all in post

I'm using the Google Pixel 2 XL as my primary phone this week. I've owned "the Google Phone", off and on, since the Nexus One launched in 2010, and I bought the original Pixel last year. Google does a lot of interesting and exciting things and I like to keep current with them.

One of the things I was most interested in checking out this year was Google's version of Portrait Mode. (Yeah, fine, Google used Apple's name for the feature, but consistency is a user-facing feature.)

So, as soon as I had the Pixel 2 XL set up, I fired up the camera and got set to shoot me some Portrait Mode. But... I didn't see the option.

A tale of two Portrait Modes

On iPhone, Portrait Mode is right up-front, in-your-face labeled, and just a swipe to the side. On Pixel 2 XL, I eventually discovered, it's hidden behind the tiny menu button on the top left. Tap that first. Then select Portrait Mode from the drop-down menu. Then you're in business. Sort of.

At first, I thought I was Portrait Mode-ing it wrong. I framed a photo and... nothing. No depth effect. No blur. I double-checked everything and tried again. Still no blur. Nothing. I took a couple of shots. Nothing and more nothing.

Exasperated, I tapped on the photo thumbnail to take a closer look. The full-size photo leapt up onto my screen. And it was completely in focus. Not a bit of blur to be seen. Then, a few seconds later, it happened. Bokeh happened.

One of these Portrait Modes is not like the other

It turns out, Pixel 2 XL can't actually shoot in Portrait Mode. By that I mean it can't render the depth effect in real time and show it to you in the preview before you capture the photo.

It can still use the dual pixels in its phase-detect auto-focus system to grab basic depth data (at least on the rear camera — the front camera has no PDAF system, so there's no depth data for portrait selfies) and combine that with its machine learning (ML) segmentation map, but only after you open the image in your camera roll. Only in post.
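To make the "only in post" idea concrete, here's a minimal sketch of how a saved photo, a depth map, and a person-segmentation mask could be combined after capture to fake shallow depth of field. This is my own illustration, not Google's actual pipeline; the file names, the 0-to-1 near/far and subject conventions, and the single Gaussian blur are all assumptions.

```python
# Illustrative sketch only (not Google's pipeline): blend a sharp capture with a
# blurred copy, keeping the subject sharp where the segmentation mask says
# "person" and blurring more where the depth map says "far".
import cv2
import numpy as np

# Assumed inputs, all the same resolution (hypothetical file names).
image = cv2.imread("capture.jpg").astype(np.float32) / 255.0        # sharp capture, BGR
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE) / 255.0       # 0 = near, 1 = far (assumed)
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE) / 255.0  # 1 = subject (assumed)

# A single heavy Gaussian blur stands in for synthetic bokeh.
blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=15)

# Blur weight: stronger for far pixels, forced toward zero on the subject.
weight = np.clip(depth, 0.0, 1.0) * (1.0 - mask)
weight = weight[..., None]  # broadcast over the color channels

result = (1.0 - weight) * image + weight * blurred
cv2.imwrite("portrait.jpg", (result * 255).astype(np.uint8))
```

The point of the sketch is simply that every input it needs already exists after the shutter fires, which is why the effect can appear seconds later in the camera roll rather than live in the viewfinder.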


I didn't realize any of this when I first tried the Pixel 2 XL Portrait Mode. I hadn't noticed it in the Pixel 2 reviews I'd read. (When I went back and looked more carefully, I did see a couple of them mention it in passing.)

Machine Learned

I guess that means the only thing more impressive than Google's machine learning process is its messaging process — it got everyone to focus on "can do it with just one lens!" and totally gloss over "can't do it live!" That's some amazing narrative control right there.

Now, undeniably, inarguably, Google does an amazing job with the segmentation mask and the entire machine-learned process. Some may call the results a little paper-cutout-like, but in most cases they're better than Apple's sometimes too-soft edging. And glitches are fewer as well. But it all only happens in post.

Otherwise, Google is absolutely killing it with the Pixel 2 XL. What they can do with a single lens, especially with the front one that provides no actual depth data, is industry leading. Hopefully, it drives Apple to up its own ML game. That's the great thing about Apple having the dual-lens system on the back and TrueDepth on the front — the company can keep pushing new and better bits and neural nets. It's much harder to retrofit new atoms.

Photographer vs. photography

What I like about Apple's Portrait Mode is that it doesn't just feel like an artificial intelligence (AI) filter you're applying to your photos in post. It doesn't feel like Prism or Faces. It feels like you're shooting with a camera and lens that really produces a shallow depth of field.

It informs my process and how I frame my shot. I can't imagine shooting without it any more than I can imagine shooting with my DSLR and fast prime lens and not seeing the image it will actually capture before I press the shutter.

And, at least for me, it's way better than shooting, switching, waiting, checking, switching back, shooting again, switching, waiting, checking... and on and on.

Showing the real-time depth effect preview on iPhone 7 Plus, iPhone 8 Plus, and iPhone X wasn't easy for Apple. It took a lot of silicon and a lot of engineering. But it unites camera and photo.

It makes it feel real.

Rene Ritchie

Rene Ritchie is one of the most respected Apple analysts in the business, reaching a combined audience of over 40 million readers a month. His YouTube channel, Vector, has over 90 thousand subscribers and 14 million views and his podcasts, including Debug, have been downloaded over 20 million times. He also regularly co-hosts MacBreak Weekly for the TWiT network and co-hosted CES Live! and Talk Mobile. Based in Montreal, Rene is a former director of product marketing, web developer, and graphic designer. He's authored several books and appeared on numerous television and radio segments to discuss Apple and the technology industry. When not working, he likes to cook, grapple, and spend time with his friends and family.

21 Comments
  • Apple's implementation of portrait photos is undoubtedly better than Google's. However Samsung's is better still, allowing the user to adjust the bokeh effect "live".
  • I don't agree the feature is hidden; it clearly states "Portrait" when you go into the 'options' hamburger menu. In any case, I don't think it has much to do with engineering or silicon on Apple's part, just with having a two-lens system with different focal points and parallax. Something that actually requires movement to achieve parallax for depth determination on a Pixel XL (with Android 8.1) is called "Lens Blur", and for whatever reason it has been removed from the Pixel 2 XL Google camera app. But I have to say your article title was sort of link-baity, since to me "Portrait Mode" has always been about the actual rotation of the screen, so I guess you really were not writing this for anyone outside the Apple ecosystem who is not familiar with this feature. I personally think "Lens Blur" or even "parallax blur calculator" is a much more accurate term.
  • Couple of glaring errors (or just convenient fibbing) here... First, it doesn't wait for you to view the thumbnail to start processing; it starts the instant the picture is taken. If you take a picture and leave it be, it'll go merrily on its way and complete. And second, it takes all of 2 seconds or so to finish... The only way you'll see it process is if you tap on the thumbnail the instant you take the picture, and even then it's nearly finished by the time the full image is loaded. There is no way you are going to switch back and forth a few times before it's done; there is simply no time. If you want to criticize that you can't see the effect in the viewfinder, fine, fair enough. But don't make stuff up just to support your narrative.
  • You're correct. Just tried it for myself and the effect was nigh on instantaneous. Shame that Rene is resorting to spreading FUD when he's usually trying to dispel it.
  • A lot of Android/Google fans here today.
  • Google's Portrait Selfies are better than anything any iPhone puts out - by a wide margin. Their portraits from the back camera are as good as any iPhone's. Google has actually been doing the Portrait Mode stuff much longer than Apple. Back in 2013 they had this in their camera app for any Android phone to use. They probably leveraged that & Google[+] Photos to tune it. Their photo algorithms are top notch - best in industry, IMO. The only reason why the Pixel shoots such good pictures is because of the Google Camera software. The actual camera hardware in the Pixels is really nothing to write home about. It isn't that good. It's the Google algorithms that make it that good... And with Pixel Visual Core (whatever it's called), that will extend to third party camera apps, as well. I think Samsung's cameras have other faults from a basic photography sense that make their ability to adjust this insignificant. For one, they have terrible color reproduction, and oversaturate to the Nth degree. They also soften all of their low light images way too much. The iPhones beat the Pixel 2 for me largely due to their much superior video recording capabilities.
  • Agree 100%. Apple with their special lens destroys effects around the edges. Hopefully they can get this figured out, because Google is doing a much better job with the single lens and their AI. I am sure this eats Rene up inside since the iPhone is no longer the best camera out there, and there is actually a *gasp* Android phone with a great camera.
  • Apple got leapfrogged in the smartphone camera race in 2014. The Note 4 was better than any iPhone in that year. The reason the iPhone gets decent camera scores is because the camera is held up by its video features. For those that use their cameras mainly for video, like me, that’s a huge win. Those that don’t, have better choices, IMO. The only area where Apple holds a lead over top Android cameras is in taking Panoramic images. Phones like the Note 8 also produce cleaner RAW DNG output with less noise than iPhones, as well. So if you like photographing “keepers” with your phone; it’s a superior choice. It also gives you a window into why iPhone JPEGs look the way they do (I.e. watercolor effects in low light and shadows, etc.). Apple now defaults to HDR to produce less flat images. I think that’s why the pictures in the 8 reviews look “cinematic” compared to the 7 Plus. You’d think people would care more about that, with the way they’re trying to act like these phones replace DSLRs.
  • Except that the out of focus portions of the image don’t smoothly go out of focus, it’s more of a focus, or no focus. It’s not bad, but when you look closely, you can tell.
  • Umm. That applies to both devices. Google is as Good as Apple with the back camera, and wildly superior with the front camera portrait mode (it isn't even close).
  • Portrait Mode is not “Apple’s name”. The phrase goes back decades that I personally know of, along with Landscape Mode. Both have been in use in photography - and printing - long before Apple was making cameras.
  • I think we all understand that but the marketing term didn’t exist in smartphone photography until Apple coined (I assume they trademarked it) it first. It’s not so much a matter of stealing as it is a matter of causing some consumer confusion since the two approaches use completely different underlying technologies and procedures.
  • I think that's the impressive thing about Google's camera. It does almost everything in post. That's the point. It's all algorithms... And it's doing it better than Apple's camera, to be frank - particularly the FFC on the Pixel 2 blowing away the "TrueDepth" camera system's Portrait Selfies on the iPhone X. Google will probably surpass them in Portrait Lighting in the next Android update, at this point. If there's one thing Google knows how to do well, it's photo algorithms.
  • I don’t think you understand what the term “in post” means...
    With the iPhone X, the depth effect is applied to a captured image with depth data, but it’s still technically post-processing. So just like the Pixel, the iPhone can’t technically shoot in portrait mode either.
  • I have both, and can say that the Pixel 2 XL portrait mode delivers better portraits 9/10 times than the X. It doesn't white-wash the background. Take a selfie of yourself with the sky in the background - the Pixel 2 XL shows blues and defined clouds, while the X is just white.
  • What you're talking about doesn't have anything to do with portrait mode specifically. It's just basic exposure in specific lighting situations.
  • In the overall end result, the Google Pixel produces the better portrait photo. Since the AI takes care of everything, you don't need to fret, worry, or even think about it. You just frame your photo and take the picture, then let the AI take care of the rest.
    The Apple cameras are still far better in terms of video. If only you could access video settings in the camera app instead of a hidden menu in the Settings app... 😏
  • Someone has a massive case of butthurt that the Pixel 2 can do better portrait mode (LOTS of non-platform-specific blogs have said so) via post-shot computations than Apple with a dual-lens or multi-lens system.
  • Why do we care about when the processing takes place? Isn't the most important thing the end result?
  • That’s a little like saying “Who cares about the viewfinder or screen quality when you take a pic? The only thing that matters is the final image.” Real-time preview of effects and framing informs so many things about that final image, it cannot be separated from it. That’s really the point of this piece, not the Pixel’s post processing prowess.
  • The DSLR wins clearly, from the point of view of the skilled and discerning photographer but that is irrelevant. 90% of camera users (DSLR, interchangeable lens mirrorless, point/shoot, or smartphone) have no idea what DoF is, much less how to control it for effect. Nor would they ever consciously choose to NOT use "portrait mode" so as to make an environmental portrait (in which a usually in-focus setting/background conveys a story about the subject beyond appearance). For those who use "portrait mode" on a smartphone, never before will so many bad portraits have such nice bokeh. It all reminds me of when Canon introduced the AE-1 SLR in the 1970s, probably the first affordable consumer SLR with automatic exposure (and a Depth of Field preview button); IIRC, one reviewer said "Never before will so many bad pictures be so well-exposed." A serious photographer learns and uses composition and technique; anyone with money can buy the latest tools and technology. The main benefit of smartphone camera features is in marketing.