Computational photography is coming up more and more as a topic these days, driven largely by developments in the smartphone world. Apple and Google, specifically, have worked diligently over the past few years to overcome the inherent limitations of their pocket-size phones' cameras—small sensors and tiny lenses—to produce better images than the phones' optics alone could deliver. By using custom chips, advanced software, dual lenses (in the case of newer iPhones and some Android phones), and more, these phones can create photographs that once required high-end cameras or painstaking compositing to produce. (For more, see Jeff Carlson's piece on DPReview.com about the computational aspects of the iPhone XS.)
The result is that our phones now use this technology to produce impressive images, combined in-camera from multiple "shots": automatic high-dynamic-range (HDR) photos, seamless panoramas, and portraits with shallow depth of field. Sure, you can find plenty of online commenters who point to bad portrait shots and wonky panos as proof that this iPhone or that Pixel isn't as good as a basic SLR or mirrorless camera, but that misses the point (something we talk about extensively in Taking Better Pictures Doesn't Mean a New Camera).
This week marks the Adobe MAX conference in Los Angeles, Adobe’s annual centerpiece for product announcements. While there is little major news in the Lightroom Classic/Lightroom CC space (beyond some small updates, which you can read about over on the
Over at DPReview this morning, our old friend Jeff Carlson has a very interesting
I had an interesting conversation with someone the other day, one that I felt was worth recounting here. I was at a bookstore, perusing photography books for possible review here on the website. It was clearly a very slow day at the store; while I was at the register, the checkout dude murmured something like, "These look interesting… I'd really like to take better pictures."