The Rise of Computational Photography

Computational photography is coming up more and more as a topic these days, driven largely by developments in the smartphone world. Apple and Google in particular have worked diligently over the past few years to overcome the inherent limitations of their pocket-size phones' cameras (small sensors and tiny lenses) and produce better images than the phones' optics alone could deliver. By using custom chips, advanced software, dual lenses (in the case of newer iPhones and some Android phones), and more, these phones can create photographs that once required high-end cameras or painstaking compositing. (For more, see Jeff Carlson's piece on DPReview.com about the computational aspects of the iPhone XS.)

The result is that our phones now use this technology to produce impressive images, assembled in the phone from multiple shots: automatic high-dynamic-range (HDR) photos, seamless panoramas, and portraits with shallow depth of field. Sure, you can find plenty of online commenters who point to bad portrait shots and wonky panos as proof that this iPhone or that Pixel is not as good as a basic SLR or mirrorless camera, but that misses the point (something we talk about extensively in Taking Better Pictures Doesn't Mean a New Camera).
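To make the merging-of-shots idea concrete, here is a minimal sketch of exposure fusion using OpenCV's Mertens merge, roughly the kind of blending a phone performs automatically when it builds an HDR image. The filenames are placeholders, and this is an illustration of the general technique, not how any particular phone does it.

```python
import cv2
import numpy as np

# Three bracketed exposures of the same scene (placeholder filenames).
paths = ["underexposed.jpg", "normal.jpg", "overexposed.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens exposure fusion blends the best-exposed regions of each frame;
# unlike classic HDR merging it needs no exposure times or tone mapping.
fused = cv2.createMergeMertens().process(images)

# The result is float32 in roughly [0, 1]; scale back to 8-bit to save.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```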

Computational photography includes such things as in-smartphone HDR creation and simulated shallow depth of field. The goal of this technology is not to create garish composites, but to help achieve better dynamic range and pleasing depth of field when taking simple shots.

This is still a new technology area, and one that will only get better over time, not only because of the combined might of the companies involved, but because the vast majority of people view their phones as their camera. They don't want to carry anything more, yet they expect photo quality on par with what a traditional camera delivers. That demand is why companies like Lytro (which went out of business earlier this year, its assets reportedly purchased by Google) and Light, with their standalone computational cameras, have struggled to find acceptance: they're trying to create a market in the space between the SLR/mirrorless camera and the smartphone. There just isn't a big enough market there.

Computational on the desktop

Despite the recent buzz in the press, driven by new phone announcements from Apple and Google, we've had computational photography on the desktop for years. Think of assembling panoramas within Photoshop, building HDR images with Photomatix or Aurora HDR, or combining multiple images of a scene into a perfectly focused final photo with Helicon Focus. All of these tasks use your computer to combine multiple shots into a finished photo.
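As a rough illustration of that desktop workflow, the sketch below stitches overlapping frames into a panorama with OpenCV's stitching module. It stands in for what a tool like Photoshop's Photomerge does conceptually, not for Adobe's actual pipeline, and the filenames are placeholders.

```python
import cv2

# Overlapping frames shot left to right (placeholder filenames).
frames = [cv2.imread(f"pano_{i}.jpg") for i in range(1, 5)]

# The Stitcher finds matching features across frames, estimates the
# camera motion, warps the images, and blends the overlapping seams.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```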

And now we're seeing the next step, as companies like Adobe incorporate the phone's capabilities into their software. For example, one of the new features in Lightroom Classic CC 8 is support for the depth masks created on most recent iPhone models:

Many of today’s high-end cameras on smartphones have the ability to capture depth information. Depth information is now captured on these devices as an additional channel of data, like the red, green, and blue channels that all digital cameras capture in, and provides information about where the objects in the photograph existed in three-dimensional space. In today’s release, we’re excited to add a new Depth option to the Range Masking tool. Depth Range Masking enables you to leverage the depth information to make precise selections and isolate subjects faster than ever before. This feature is only available for use with photos that contain Depth Masks, such as Portrait Mode photos captured on recent iPhone cameras.
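The underlying idea is simple enough to sketch. The snippet below is a conceptual illustration of a depth range mask, not Adobe's implementation: it assumes the photo and its depth map have already been exported as ordinary image files, and the filenames, depth convention, and depth range are all placeholders.

```python
import numpy as np
from PIL import Image, ImageEnhance

# Placeholder files: the photo and its depth map exported as ordinary
# images (real depth maps are usually lower resolution than the photo,
# and whether near objects are dark or light varies by device).
photo = Image.open("portrait.jpg").convert("RGB")
depth_img = Image.open("portrait_depth.png").convert("L").resize(photo.size)
depth = np.asarray(depth_img, dtype=np.float32) / 255.0

# A depth range mask is just the set of pixels whose normalized depth
# falls between two thresholds; the range below is purely illustrative.
near, far = 0.0, 0.35
mask = ((depth >= near) & (depth <= far)).astype(np.float32)[..., None]

# Apply an adjustment (here, a simple brightness lift) only where the
# mask selects, leaving the rest of the photo untouched.
original = np.asarray(photo, dtype=np.float32)
adjusted = np.asarray(ImageEnhance.Brightness(photo).enhance(1.3), dtype=np.float32)
result = original * (1.0 - mask) + adjusted * mask

Image.fromarray(np.clip(result, 0, 255).astype(np.uint8)).save("portrait_adjusted.jpg")
```

Lightroom's tool works directly on the depth data embedded in the photo and adds feathering and refinement controls, but the selection-by-depth-range principle is the same.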

Brian Matiash has a really good video overview of what the new depth range masking means inside Lightroom. It's worth watching if you're interested in where the future is going. (He also shows where you can turn on the depth-map-capture feature inside the Lightroom CC iOS app.)

We've been saying for years that the future of photography is in the phone, and recent developments give us a sense of how the phone might transform the photograph itself. SLRs and mirrorless cameras aren't going away, but the big innovations in photography will come from non-traditional camera companies like Apple, Google, Adobe, and others. There will always be questions about technology and its role in photography, especially around extensive manipulation of images, but the simple photograph will ultimately win out, no matter where it comes from.
