Single camera smartphones are becoming an increasingly rare sight, as Samsung and others adopt dual-camera technology in more devices. Huawei’s P20 Pro even offers three (count ’em!) cameras on the back for some seriously impressive tricks. The budget space is seeing dual-camera layouts as well, bringing bokeh trickery and portrait tomfoolery to ever cheaper phones.
Despite all that, some of the most impressive photos are still coming from single camera devices, and it’s largely due to fast image capture techniques.
What’s it all about?
At its most basic level, fast image capture boils down to two parts. First, there’s the fast image capture itself: capturing several images in extremely quick succession. After that, there’s the processing of these images, which can produce a better final image or have an entirely different effect (more on that later).
The processing uses these extra images to reduce noise and blur in the final picture. One common processing technique is called image or frame averaging, which has been cited for years as a method of improving image quality.
According to photography resource Cambridge in Colour:
“Image averaging works on the assumption that the noise in your image is truly random. This way, random fluctuations above and below actual image data will gradually even out as one averages more and more images.”
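To make that concrete, here's a minimal pure-Python sketch of frame averaging. The scene, pixel values, and noise level are all made-up assumptions, and real pipelines work on aligned 2D frames rather than a 1D strip of pixels, but the principle is the same:

```python
import random

def average_frames(frames):
    """Average a stack of equal-length grayscale frames pixel by pixel."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate a flat grey scene (true value 100) shot 8 times with random sensor noise.
random.seed(0)
true_scene = [100] * 64
frames = [[p + random.gauss(0, 10) for p in true_scene] for _ in range(8)]

single_err = sum(abs(p - 100) for p in frames[0]) / len(true_scene)
avg_err = sum(abs(p - 100) for p in average_frames(frames)) / len(true_scene)
# The averaged frame lands much closer to the true scene than any single noisy frame.
```

Because the noise in each frame fluctuates randomly above and below the true pixel value, the averaged result hugs the real scene far more closely than any one capture.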
Combine fast image capture with the ability to conduct image averaging and general processing locally on the device in a speedy fashion and you have a wealth of possibilities.
Google and Samsung carry the torch
“HDR+ allows you to take photos at low-light levels by rapidly shooting a burst of up to ten short exposures and averaging them into a single image,” said Google in a 2014 post on its Research blog.
The technique has seen massive speed and quality gains since then, culminating in the Pixel and Pixel 2 phones being lauded for their HDR+ mode. In fact, Google is so confident in the feature’s speed and quality it chose to enable HDR+ by default on the Pixel 2.
Samsung has also adopted fast image capture and its associated processing for its flagships, dubbing the feature “multi-frame image processing.” The technique was first used on the Galaxy S8, which snaps three images in quick succession. From here, the phone chooses the best shot as a foundation and uses the remaining two images to reduce blurriness.
According to Samsung, this method results in more detailed, clearer shots, even in less ideal conditions.
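Samsung hasn't published exactly how it scores the three candidates, but choosing a base frame can be sketched with a simple gradient-energy sharpness metric: blurrier frames have softer edges, so their pixel-to-pixel differences are smaller. The frames and the metric below are illustrative assumptions, not Samsung's actual algorithm:

```python
def sharpness(img):
    """Sum of squared horizontal pixel differences: a crude focus/blur score."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in img for i in range(len(row) - 1))

def pick_base_frame(frames):
    """Return the index of the sharpest frame in a burst."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

sharp  = [[0, 255, 0, 255]] * 4      # strong, crisp edges
blurry = [[100, 140, 100, 140]] * 4  # same pattern, softened by blur
best = pick_base_frame([blurry, sharp, blurry])  # picks index 1
```

In a real pipeline, the two rejected frames would then be aligned against the winner and merged in to suppress noise and residual blur.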
Combine fast image capture with brisk processing and you’ve got a recipe for excellent photos
In both Samsung and Google’s cases, the latest phones capture images so quickly that you’d be forgiven for thinking they were taking a single snap.
Samsung and Google have similar techniques, though the Korean firm captures fewer images in the process. But capturing a high number of images doesn’t necessarily produce the best results.
“After a certain point, there will not be a noticeable difference in the noise reduction with additional frames averaged,” said North Star Imaging research and development fellow Brett Muehlhauser when discussing frame averaging on the company’s blog.
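That diminishing-returns effect falls straight out of the statistics: averaging N frames cuts random noise by roughly the square root of N, so the early frames do almost all the work. A quick simulation (with a made-up noise level of 10) shows it:

```python
import random

random.seed(1)

def residual_noise(n_frames, trials=2000):
    """Estimate the noise std-dev left after averaging n_frames reads of one pixel."""
    means = [sum(random.gauss(0, 10) for _ in range(n_frames)) / n_frames
             for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

gain_early = residual_noise(1) - residual_noise(4)   # big win: roughly 10 down to 5
gain_late = residual_noise(16) - residual_noise(64)  # small win: roughly 2.5 to 1.25
# gain_early is several times larger than gain_late: classic diminishing returns.
```

Going from one frame to four halves the noise; going from 16 to 64 frames costs sixteen times the capture time for a much smaller absolute improvement.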
Why this route?
Smartphones will never go toe-to-toe with DSLR cameras on hardware alone. After all, a smartphone camera sensor is many times smaller than that of a hefty dedicated camera. Space is at a premium when you need a device that can fit in your pocket.
Optical image stabilization isn’t perfect either, which means one long exposure shot will likely be extremely blurry without a tripod. Huawei’s AI-enabled stabilization feature seems extremely promising, but it’s the exception, rather than the rule.
The good news is that you don’t need optical image stabilization or a gigantic, bulging sensor to take several short exposure shots in quick succession and merge them together. That means budget phones may be able to grab this feature too.
It doesn’t hurt that smartphones have plenty of power at their disposal, enabling on-device processing instead of manually editing on the computer.
What can be done with fast image capture?
We’ve already mentioned better HDR (which gives us more detail in bright and dark areas) and improved low-light performance through noise reduction, but fast image capture and processing are capable of delivering a variety of neat results.
Probably the most recent innovation in this regard is Super HDR, by Chinese company Vivo. The feature differs from HDR+ by simply taking more images (up to 12 shots), this time at higher and lower exposures.
We haven’t heard of any production phones with the tech just yet, and Vivo is also claiming it harnesses AI for the mode.
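Vivo hasn't published how Super HDR actually merges its bracket, but a toy exposure-fusion scheme conveys the idea: weight each pixel by how well-exposed it is, so clipped highlights and crushed shadows get outvoted by the frames that captured them properly. The pixel values and the weighting function below are illustrative assumptions:

```python
def well_exposedness(p):
    """Weight pixels near mid-grey highest; clipped pixels lowest (0..255 scale)."""
    return max(1e-6, 1.0 - abs(p - 127.5) / 127.5)

def fuse_exposures(brackets):
    """Per-pixel weighted average across an exposure bracket."""
    fused = []
    for pixels in zip(*brackets):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(p * w for p, w in zip(pixels, weights)) / total)
    return fused

under = [10, 20, 120]   # underexposed: shadows crushed, highlight preserved
over  = [90, 180, 255]  # overexposed: shadows lifted, highlight clipped
result = fuse_exposures([under, over])
```

In the fused result, the third pixel stays close to the underexposed frame's 120 because the overexposed frame clipped it to 255, while the shadow pixels lean on the brighter frame.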
“AI” is the new “cloud,” isn’t it? Ugh.
Even if your phone doesn’t end up with this feature, you have fast image capture and nippier processors to thank for today’s super-smooth HDR modes.
As Samsung and Google alluded to earlier, fast image snapping is a boon for zoomed in shots as well. Traditional mobile cameras show a loss of detail and a ton of noise when you zoom in, but fast image capture techniques specifically target blur and noise, two notable enemies of digital zoom (aside from, you know, digital zoom).
You’ll still want telephoto zoom (or even hybrid or oversampled zoom) if you can get it, but zoom with fast image capture is at least better than the standard digital zoom on older phones.
Refocusing is another trick enabled by fast capture: by snapping a series of photos at different focus points, the feature lets you change the focus after shooting an image. The Lumia Refocus app, for instance, took between two and eight shots at a time, each weighing in at 5MP.
To be fair, dual cameras are considered superior for this kind of trickery, with generally faster and more polished results. Single camera versions of the feature can still be fun. Now if only Instagram would let users upload these shots and let followers tinker with the focus.
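A single-camera refocus feature can be sketched by scoring each frame of the focus stack for sharpness inside the region the user taps, then showing the frame where that region is crispest. Everything below (the frames, the metric, the region bounds) is an illustrative assumption, not how Lumia Refocus actually worked:

```python
def region_sharpness(img, x0, x1):
    """Gradient energy within a horizontal slice of the image."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in img for i in range(x0, x1 - 1))

def refocus(stack, x0, x1):
    """Pick the frame whose chosen region is sharpest, i.e. in focus."""
    return max(range(len(stack)), key=lambda i: region_sharpness(stack[i], x0, x1))

# Frame 0 is focused on the left half of the scene, frame 1 on the right half.
f0 = [[0, 255, 0, 255, 120, 130, 120, 130]] * 4
f1 = [[120, 130, 120, 130, 0, 255, 0, 255]] * 4
left_pick = refocus([f0, f1], 0, 4)   # frame 0 wins on the left
right_pick = refocus([f0, f1], 4, 8)  # frame 1 wins on the right
```

Tapping different regions simply selects (or blends toward) a different member of the stack, which is why the app needed to keep all the shots around.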
Probably the only reason you’d want a Lumia 950 (I still miss mine, to be fair) is its Rich Capture feature. It was one of the better examples of what’s possible with fast image capture.
The Rich Capture option (available as a toggle in the camera app) essentially let users change the level of exposure, flash, or HDR after taking a photo. It was all context-sensitive: low-light situations let you tweak exposure, low-light shots with the flash let you adjust the level of flash, and general daytime snaps let you tinker with HDR levels.
According to the All About Windows Phone blog, the Rich Capture mode took three images in 0.2 seconds, though former Microsoft camera lead Juha Alakarhu said that, at least for flash images, the mode took two snaps (one with flash and one without). From there, the phone used its algorithmic smarts to craft the effects.
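Microsoft never published the exact algorithm, but with a flash and a no-flash exposure in hand, a flash-level slider can be approximated as a simple linear blend between the two. The sample pixel values here are made up:

```python
def blend_flash(no_flash, flash, level):
    """Linear blend between the ambient and flash exposures; level in [0, 1]."""
    return [(1 - level) * a + level * b for a, b in zip(no_flash, flash)]

ambient = [40, 50, 60]     # dim ambient-only shot (assumed sample values)
flashed = [180, 190, 200]  # same scene lit by the flash
halfway = blend_flash(ambient, flashed, 0.5)  # [110.0, 120.0, 130.0]
```

Sliding the level between 0 and 1 smoothly moves the photo between the moody ambient shot and the fully flash-lit one, which matches how the Rich Capture slider behaved in practice.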
A quick-fire burst of photos is also ideal for erasing people or objects in the background. Who says your vacation photos need to have photo-bombers in them?
The feature was arguably popularized by the Galaxy S4‘s Eraser Mode, which took a burst of five shots and let you wipe moving objects from the frame. It’s since been deprecated, but we’d imagine today’s faster processors and AI trickery could deliver even better results.
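One classic way to implement this kind of eraser (not necessarily Samsung's) is a per-pixel median across the burst: a moving object only occupies any given pixel in a minority of frames, so the static background outvotes it. The burst values below are an illustrative toy scene:

```python
import statistics

def erase_moving_objects(frames):
    """Per-pixel median across a burst: transient objects get outvoted."""
    return [statistics.median(px) for px in zip(*frames)]

# A static scene of value 50; a photo-bomber (value 255) crosses pixel by pixel.
burst = [
    [255, 50, 50, 50, 50],
    [50, 255, 50, 50, 50],
    [50, 50, 255, 50, 50],
    [50, 50, 50, 255, 50],
    [50, 50, 50, 50, 255],
]
clean = erase_moving_objects(burst)  # the photo-bomber vanishes: all 50s
```

Since the intruder touches each pixel in only one of five frames, the median at every position is the background value, and the final image looks as if they were never there.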
Samsung isn’t the only mobile brand to dip its toe in these waters. Google demoed an advanced object removal feature at I/O 2017. The function uses machine learning instead of an image burst and can even remove a chain link fence, perfectly revealing the subject behind it. Unfortunately, Google hasn’t launched this feature in a production phone just yet.
If you’re an action sports fan, you’ve probably seen action shots in magazines or on Instagram, usually showcasing a skateboard or BMX trick. You know, the ones showing every stage of the trick in one photo.
Samsung and Nokia brought the feature to prominence in 2013 and 2014, dubbing it Drama Shot and Smart Sequence respectively. It’s another example of rapid image capture, capturing a series of images and stacking them to reveal the full sequence.
It’s not quite a feature you’d use every day, but it certainly made life easier for some, so you didn’t need to manually stack your images. Now, how about Samsung resurrecting this feature as a downloadable camera mode?
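A rough sketch of how such a sequence composite could work: compare each frame against a background plate and paste only the pixels that differ strongly onto one output image. The threshold, pixel values, and background-plate approach are illustrative assumptions, not Samsung's or Nokia's actual method:

```python
def composite_sequence(background, frames, threshold=30):
    """Stamp each frame's moving pixels (those far from the background) onto one image."""
    out = list(background)
    for frame in frames:
        for i, (b, p) in enumerate(zip(background, frame)):
            if abs(p - b) > threshold:
                out[i] = p
    return out

bg = [100] * 6
# A subject (value 255) moves left to right across three frames.
frames = [[255, 100, 100, 100, 100, 100],
          [100, 100, 255, 100, 100, 100],
          [100, 100, 100, 100, 255, 100]]
sequence = composite_sequence(bg, frames)  # subject appears at all three positions
```

Each frame contributes only the subject, so the final image shows every stage of the trick against a single clean background.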
Super high resolution photos
Another rather interesting feature enabled by fast image capture is the ability to create super high resolution snaps out of several lower resolution shots. It’s not just a theory either, having been touted by the likes of Oppo and Asus.
Oppo launched the feature on the Find 7 smartphone back in 2014. According to the feature description, Oppo’s mode “shoots six photos consecutively” and combines them into a 50MP photo. The final results were somewhat mixed, although our own Joshua Vergara reckons they were indeed better than auto-mode shots from the phone. For what it’s worth, Oppo kept the feature in several subsequent models.
Meanwhile, Asus’ ZenFone AR takes four 23MP pictures and merges them into one 92MP image. Reviews seem to gloss over this one, but it’s a cool figure on paper.
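The underlying principle behind these modes can be shown in an idealized 1D sketch: two captures of the same scene, offset by half a pixel (as natural hand shake tends to provide), interleave into one capture at double the resolution. Real multi-frame super resolution also has to register the frames precisely and handle demosaicing; this toy version assumes a perfect half-pixel shift:

```python
def merge_shifted(frame_a, frame_b):
    """Interleave two captures offset by half a pixel to double the resolution."""
    out = []
    for a, b in zip(frame_a, frame_b):
        out.extend([a, b])
    return out

scene = list(range(8))  # fine detail the sensor can't resolve in one shot
frame_a = scene[0::2]   # samples at even positions
frame_b = scene[1::2]   # same scene, sensor shifted half a pixel
recovered = merge_shifted(frame_a, frame_b)  # the full 8-sample scene returns
```

Each low-resolution frame alone misses half the detail, but because the two samplings land between each other's pixels, combining them recovers the full-resolution signal.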
Fast image capture and associated processing techniques have enabled a variety of results over the years. Some of these are gimmicky for sure, but for every sports mode, we get better low-light snaps and richly detailed HDR shots.
Google’s HDR+ feature and Vivo’s recently announced Super HDR mode show there are still plenty of potential uses for brisk image capturing. We could see older features resurrected and improved thanks to faster chipsets and machine learning tech, or entirely new features altogether; who knows. Whatever the case, it’s clear that single-camera smartphones still have some life left in them, even as dual-camera phones will undoubtedly benefit from the tech too.
April 23, 2018 at 11:57AM