The Rise of Computational Photography

Computational photography has become an increasingly common topic, driven largely by developments in the smartphone world. Apple and Google in particular have worked diligently over the past few years to overcome the inherent limitations of their pocket-size phones' cameras (small sensors and tiny lenses) and produce better images than the phones' optics alone could deliver. Using custom chips, advanced software, dual lenses (in the case of newer iPhones and some Android phones), and more, these phones can create photographs that once required high-end cameras or painstaking compositing. (For more, see Jeff Carlson's piece on DPReview.com about the computational aspects of the iPhone XS.)

The result is that our phones now combine multiple "shots" in-camera to produce a single impressive image: automatic high-dynamic-range (HDR) photos, seamless panoramas, and portraits with shallow depth of field. Sure, you can find plenty of online commenters who point to bad portrait shots and wonky panoramas as proof that this iPhone or that Pixel is not as good as a basic SLR or mirrorless camera, but that misses the point (something we talk about extensively in Taking Better Pictures Doesn't Mean a New Camera).
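To make the "multiple shots" idea concrete, here is a minimal sketch of exposure fusion using OpenCV's Mertens merge, one common technique for blending a bracketed burst into a single HDR-style photo. This is an illustration of the general approach, not how Apple or Google actually implement it on their phones, and the file names are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical file names for a bracketed burst:
# one underexposed, one normal, and one overexposed frame.
paths = ["burst_under.jpg", "burst_normal.jpg", "burst_over.jpg"]
images = [cv2.imread(p) for p in paths]

# Roughly align the handheld frames before blending
# (median threshold bitmap alignment).
align = cv2.createAlignMTB()
align.process(images, images)

# Mertens exposure fusion: weight each pixel by contrast, saturation,
# and well-exposedness, then blend the burst into one image.
merge = cv2.createMergeMertens()
fused = merge.process(images)  # float32 result, roughly in [0, 1]

# Convert back to 8-bit and write out the single combined photograph.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

The appeal of this approach is that no single exposure has to capture the whole dynamic range; the fusion step simply favors whichever frame rendered each region best, which is the basic bargain behind smartphone HDR.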
