Computational photography – the secret ingredient in the Camera app

How and why Deep Fusion, Smart HDR, and Night Mode change the game for iPhone photography

Photography is no longer just about the quality of your hardware. Over the past few years, giant leaps on the software side have proved just as important as how many megapixels your camera can capture.

That’s all thanks to a concept called computational photography, a catch-all term for digital processing techniques that allow the humble smartphone camera to do things a conventional film camera can’t.

These computational photography tricks range from creating the depth-of-field effect on your Portrait mode shots to automatically stitching together super-wide panoramas.

Portrait Mode simulates traditional camera effects using software

With iOS 13, and the iPhone 11 range especially, there are more of these tricks than ever at play. On these devices, the Camera app uses three core features – Smart HDR, Night Mode, and Deep Fusion – to ensure your photos are as sharp and well-lit as possible, with one of them automatically taking control depending on your exact shooting circumstances.

These overlapping responsibilities can get confusing fast, so here’s a little more detail on each feature and how to get the best from it.

Smart HDR

High Dynamic Range (HDR) photography has been an option for years in iOS, but these days you won’t find a button for it in the Camera app – it’ll automatically kick in when required. Hence, “Smart.”

HDR has been around for years (this comparison dates from 2014) but now it’s smarter than ever

Its purpose is to reproduce a wide range of luminance levels without over-exposing highlights or under-exposing shadows. Whenever you hit the shutter, Smart HDR snaps three images at different exposures before automatically merging them into a single, perfectly lit shot.
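
Smart HDR itself isn’t something developers can call directly, but the underlying idea – an exposure bracket merged into one frame – is reproducible in a third-party camera app through AVFoundation. Below is a minimal Swift sketch, assuming you already have a running AVCaptureSession with an AVCapturePhotoOutput attached; the names BracketCapturer and captureBracket are made up for illustration, and the actual HDR merge step is left out.

import AVFoundation

// Illustrative sketch: captures the kind of three-exposure bracket that HDR
// relies on (-2 EV, 0 EV, +2 EV). Merging the results is up to you.
final class BracketCapturer: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput: AVCapturePhotoOutput

    init(photoOutput: AVCapturePhotoOutput) {
        self.photoOutput = photoOutput
        super.init()
    }

    func captureBracket() {
        let biases: [Float] = [-2, 0, 2]
        let bracket = biases.map {
            AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
        }
        let settings = AVCapturePhotoBracketSettings(
            rawPixelFormatType: 0,                                     // no RAW output
            processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc],
            bracketedSettings: bracket
        )
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    // Each exposure in the bracket lands here as it finishes processing.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        // Accumulate `data` from all three frames, then tone-map them into one image.
        _ = data
    }
}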

Smart HDR is particularly helpful on bright days, when the sun has a tendency to bleach the entire sky white – but there are all kinds of situations in which the technology proves invaluable.

Night Mode

At the other end of the lighting spectrum, Night Mode was added in iOS 13 to improve photos shot in – you guessed it – very low light.

Apple’s official press shots for Night Mode

Your iPhone will automatically tweak exposure time and ISO speed to suit your environment no matter which shooting mode you’re in, but Night Mode allows these settings to go beyond their usual limits. It does this by capturing over several full seconds, drinking in as much light as possible. Computational photography then helps to reduce noise and minimize any inadvertent blurriness caused by your shaky hands. With iPhone 11, it’s surprisingly effective.
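
There’s no public switch for Night Mode in third-party apps, but the two dials it leans on – exposure duration and ISO – are exposed on AVCaptureDevice. Here’s a hedged Swift sketch of pushing both towards the limits of the current camera format; the 1600 ISO cap is an arbitrary choice for the example, not an Apple figure.

import AVFoundation

// Illustrative only: Night Mode has no public API, but an app can lengthen the
// exposure and raise the ISO within the limits of the camera's active format.
func pushExposureTowardsLimits(of device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        let format = device.activeFormat
        let duration = format.maxExposureDuration    // longest single-frame exposure allowed
        let iso = min(format.maxISO, 1600)           // arbitrary cap to keep noise manageable
        device.setExposureModeCustom(duration: duration, iso: iso) { _ in
            // Settings are applied by the time this runs; start capturing here.
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the camera for configuration: \(error)")
    }
}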

Night Mode automatically activates when the device senses sufficient darkness, but users do get a small degree of control. When active, a new Night button will surface in the Camera app. Tap this and you can adjust the exposure time or disable the mode completely. The range of the exposure slider is dictated by how dark the environment is and how steady your hand is – prop your iPhone perfectly still (or use a tripod) and you’ll see the maximum exposure time rocket up to 30 seconds.

Don’t confuse the crescent moon Night button for the crescent moon Do Not Disturb icon

Deep Fusion

Last, but certainly not least, is Deep Fusion. Only available on the iPhone 11 series, this uses Apple’s Neural Engine to draw out an exceptional level of detail. It can pick out flyaway hairs and even individual strands of wool in a shirt, which is why you’ll see so much complex knitwear in Apple’s press shots this year.

Here, photographer Tyler Stalman showcases the additional detail possible using Deep Fusion on iPhone 11

The “mad science” – as Apple calls it – that powers Deep Fusion is ridiculously complex, but at a basic level it’s easy enough to understand. The Camera app simply captures several images in quick succession – preparing some of them even before you hit the shutter button – in order to combine them into an ideal super-sharp image.

Like Smart HDR, these shots are captured at various exposure levels – but here, most of them are dedicated to capturing fine detail, while a single long exposure helps draw in tone and color. Deep Fusion then uses machine learning to rework this batch of images into a finished article with noticeably more fine detail than a standard shot – Apple says the processing works through some 24 million pixels. Very useful if you’ll later need to zoom, crop, or print the file without sacrificing fidelity.
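
Apple’s actual pipeline runs on the Neural Engine and isn’t public, but the broad recipe – keep detail from the short frames, tone from the long one – can be sketched in a toy form. The Swift below works on plain grayscale pixel arrays and uses a crude local-contrast measure as its stand-in for “detail”; every name and number in it is invented for illustration.

// Toy multi-frame fusion, nothing like Apple's real implementation.
// For every pixel, take the value from whichever short exposure shows the most
// local contrast, then blend in tone from the single long exposure.
func fuse(shortFrames: [[[Double]]], longFrame: [[Double]], toneWeight: Double = 0.3) -> [[Double]] {
    let height = longFrame.count
    let width = longFrame[0].count
    var result = longFrame

    for y in 1..<(height - 1) {
        for x in 1..<(width - 1) {
            var bestValue = shortFrames[0][y][x]
            var bestContrast = -1.0
            for frame in shortFrames {
                // Crude "detail" score: how much this pixel differs from its neighbours.
                let contrast = abs(frame[y][x] - frame[y][x - 1]) + abs(frame[y][x] - frame[y - 1][x])
                if contrast > bestContrast {
                    bestContrast = contrast
                    bestValue = frame[y][x]
                }
            }
            // Detail from the sharpest short frame, tone from the long exposure.
            result[y][x] = (1 - toneWeight) * bestValue + toneWeight * longFrame[y][x]
        }
    }
    return result
}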

This whole process takes around a second, but you can keep snapping while Deep Fusion works its magic.

Make the most of automation

So, if all this clever computational photography stuff is hidden under the hood and triggered automatically, why should we care?

Thankfully, for the most part, you don’t need to. Point and shoot, let your phone do its thing, and you should end up with the most appropriate function chipping in to give your shots a helping hand. But an understanding of exactly when these tools activate – and what their limitations are – can be handy in those edge cases where you need to take a little more control.

iPhone 11 Pro has three cameras: wide (1x zoom), ultrawide (0.5x), and telephoto (2x)

So, here’s everything you need to know to tell which function is at play. Deep breath…

Smart HDR is the default shooting mode on the standard wide and ultrawide cameras, but only kicks in on the telephoto camera in extremely bright scenes. If in doubt, there’s a good chance you’re using Smart HDR – especially if you’re outdoors, with strong lighting.

Night Mode, meanwhile, will only switch on when it detects sufficient darkness. This function only ever uses the wide camera, but can still activate when the viewfinder shows 2x – just know that in this circumstance it will use an inferior digital zoom rather than the actual telephoto lens. And be aware that Night Mode doesn’t work at all with the ultrawide camera, so you won’t ever see the option appear at 0.5x zoom.

Deep Fusion will take over for anything between these extremes. So if you’re in a medium-lit environment – especially indoors – you can be confident this is the feature pulling the strings. However, the ultrawide camera isn’t compatible with Deep Fusion at all, meaning you’ll sometimes have to choose between super-sharp details and fitting more in shot.
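
If it helps, those three rules collapse into a small decision table. The Swift sketch below is an approximation of the behaviour described above – Apple doesn’t document the exact thresholds, so “dark,” “medium,” and “bright” are stand-ins for whatever the Camera app actually measures.

// Rough summary of which feature handles the shot, by viewfinder zoom and light level.
enum Zoom { case ultrawide05x, wide1x, telephoto2x }
enum Lighting { case dark, medium, bright }
enum Feature { case nightMode, deepFusion, smartHDR }

func likelyFeature(zoom: Zoom, lighting: Lighting) -> Feature {
    switch (zoom, lighting) {
    case (.ultrawide05x, _): return .smartHDR   // ultrawide never gets Night Mode or Deep Fusion
    case (_, .dark):         return .nightMode  // at 2x this is a digital crop of the wide camera
    case (_, .bright):       return .smartHDR   // telephoto only uses Smart HDR in very bright scenes
    case (_, .medium):       return .deepFusion // the middle ground belongs to Deep Fusion
    }
}

So likelyFeature(zoom: .wide1x, lighting: .medium) returns .deepFusion – the indoor, medium-light scenario described above.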

Practically, this all means that if you’re shooting a subject where detail is important – like a woolly sweater – switching to the telephoto camera and avoiding extreme brightness gives you the best chance of Deep Fusion kicking into action and capturing every last seam.

Further comparisons courtesy of Tyler Stalman

It’s also worth knowing that Deep Fusion doesn’t ever kick in if you have Capture Photos Outside of Frame enabled. Luckily this feature is turned off by default, but you’ll need to consider whether turning it on is worth losing out on super-detailed photos.

The future of photography

These features are turning iPhone into a serious alternative to an expensive DSLR camera – side-by-side comparisons are increasingly difficult to tell apart, and computational photography is advancing much faster than physical camera hardware.

We’re in the midst of a great machine learning revolution, with advanced neural nets powering all sorts of things behind the scenes in iOS 13. Photography is just one aspect Apple is improving with clever software engineering, but it’s perhaps the most relatable.

Still, it’s reassuring to know that our future robot overlords will at least be able to document their takeover of Earth with stunningly detailed, perfectly-lit photographs.