Preprocess Raw files with machine learning for cleaner-looking photos https://www.popphoto.com/how-to/preprocess-raws-dxo-pureraw-2/ Wed, 13 Apr 2022 18:20:42 +0000 https://www.popphoto.com/?p=168297
A high ISO photo of a Bristlecone Pine processed through DxO PureRAW 2
A high ISO photo of a Bristlecone Pine processed through DxO PureRAW 2. Mason Marsh

We tested out DxO PureRAW 2's DeepPRIME technology to see if it could improve noise and detail in our shots. Spoiler: it did.

Machine learning technology is used in many aspects of modern photography, from shooting images that would otherwise be difficult to capture to speeding up sorting and editing. This week I want to focus on a targeted implementation that ripples through the entire image edit process: translating and processing the data in Raw files. Specifically, I’m looking at DxO PureRAW 2, which uses a technology called DeepPRIME to demosaic and denoise the unedited data in Raw files to create better editable versions. The technology is also available in DxO PhotoLab 5, the company’s photo editing app.

The process of Raw processing

First, let’s get on the same page about Raw processing. 

If your camera is set to shoot in JPEG or HEIC formats, the camera’s processor does a lot of work to take the light information captured by the sensor, interpret it as luminance and color values, and create an image made up of colored pixels. The photo is also compressed to save storage, so a significant amount of image data is thrown away.

When you shoot in the camera’s Raw format, the file it saves is made up entirely of the data captured by the image sensor. It includes not just the values from each pixel, but information about how the sensor deals with digital noise, characteristics of the lens being used, and more. You also end up with larger image files because none of the data is purged in a processing step.

At this point, that file isn’t really a “photo” at all. The data has to be decoded before it can be viewed as an image. (When you see the shot on the camera’s screen, you’re actually looking at a JPEG preview.)

The first step is demosaicing, which interprets the color values produced by patterned filters on the camera’s image sensor. Most cameras use sensors with Bayer filters, while Fujifilm cameras use sensors with X-Trans filters.

The photo software’s algorithms translate that data into pixels and then apply other processing adjustments, such as denoising based on information in the Raw file to minimize the digital noise caused by shooting at high ISO values. At that point, the Raw file is ready for your edits to tone, color, and so on. 
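To make the demosaicing step concrete, here’s a deliberately crude Python sketch. It assumes an RGGB Bayer layout and collapses each 2×2 cell into a single RGB pixel; real converters (DeepPRIME included) interpolate missing color samples at every photosite with far more sophisticated logic. The function name and approach are my own illustration, not anything from DxO or Adobe.

```python
import numpy as np

def demosaic_rggb_naive(bayer: np.ndarray) -> np.ndarray:
    """Crude demosaic of an RGGB Bayer mosaic (2D array, even dimensions).

    Each 2x2 cell (R, G / G, B) becomes one RGB pixel, so the output is
    half resolution. Real converters interpolate missing color samples
    at every photosite instead of throwing resolution away.
    """
    r  = bayer[0::2, 0::2].astype(np.float32)   # red photosites
    g1 = bayer[0::2, 1::2].astype(np.float32)   # green photosites, even rows
    g2 = bayer[1::2, 0::2].astype(np.float32)   # green photosites, odd rows
    b  = bayer[1::2, 1::2].astype(np.float32)   # blue photosites
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```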

Software such as Adobe Lightroom or Capture One automatically handles this process, making it appear that the Raw file has been opened just as if you’d opened a JPEG file. Some apps use an intermediary step, such as when Adobe Photoshop opens Raw images first in the Adobe Camera Raw interface.

Into the DeepPRIME

A night photo of the Seattle skyline with the Space Needle.
A photo of the Seattle skyline captured at ISO 3200 and processed through Adobe Lightroom Classic. Jeff Carlson

Related: The best photo editor for every photographer

That window between Raw data and editable file is where DxO PureRAW 2 comes in. Its DeepPRIME technology uses machine learning to demosaic and denoise the data and create a better image to work with. It then pulls from DxO’s extensive library of camera and lens profiles to apply optical corrections. (And honestly, DeepPRIME is just an awesomely cool name.)

In my testing, I’m seeing results in two specific areas: processing noisy high-ISO images and working with the Raw files from my Fujifilm X-T3, which are sometimes underserved by Lightroom’s conversion.

A night photo of the Seattle skyline with the Space Needle, zoomed to 100%.
Here’s the above image zoomed in to 100%. The crop on the left was processed through Lightroom, the crop on the right through DxO PureRAW 2. The latter looks significantly cleaner while still maintaining a good level of detail. Jeff Carlson
A night photo of the Seattle skyline with the Space Needle, zoomed to 200%.
At 200%, the PureRAW 2 example looks a tad soft, but I still prefer this output to Lightroom’s. Jeff Carlson

In this example of the Seattle skyline, shot handheld at ISO 3200, the Raw file processed by Lightroom Classic reveals noticeable noise. I’ve made only exposure changes since the original was still pretty underexposed. The PureRAW-processed image is much cleaner and only reveals some denoising softening when viewed at 200% magnification.

Another noisy example

A night photo of a Bristlecone Pine.
The image on the left was processed through Lightroom with no adjustments made; the image on the right was processed through PureRAW 2, also with no adjustments made. Mason Marsh

For another example, this image of a Bristlecone Pine was shot on a Sony A7R IV at ISO 12800 and processed in both Lightroom and PureRAW 2, with no other adjustments applied.

A night photo of a Bristlecone Pine zoomed in to 100%.
A 100% crop with the Lightroom example on the left and the PureRAW 2 one on the right. The latter maintains as much or more detail than the former, without any of the ugly noise. Mason Marsh
A night photo of a Bristlecone Pine zoomed in to 200%.
The same example cropped to 200%. Mason Marsh

Processing Fujifilm Raw files with PureRAW 2

As a Fujifilm camera owner, I was also happy to hear that PureRAW 2 added support for Fujifilm’s X-Trans Raw files. Lightroom’s conversion can create a “wormy” appearance, particularly in textured areas. Shooting autumn foliage, for instance, often vexes me because, while the overall image looks good, at 100% magnification and closer the pattern is pretty obvious.

A photo of fall foliage with a pond in center.
The image on the left was processed through Lightroom, the image on the right through PureRAW. From an overview, they look pretty similar. But zoom the shots in (see examples below) and you’ll notice crisper detail in the PureRAW version. Jeff Carlson

Lightroom includes its own machine-learning-based feature to reprocess Raw files (in the Develop module, choose Photo > Enhance), which improves on the default appearance, but to my eye, the PureRAW version looks better, if a little over-sharpened when viewed magnified.

A photo of fall foliage with a pond in center, cropped to 200%.
A 200% crop shows Lightroom’s standard processing on the left, Lightroom’s processing with the machine-learning-based “Enhanced” feature turned on (center), and PureRAW 2’s processing on the right. Jeff Carlson

I also ran the same image through Capture One 22, because it tends to handle Fujifilm Raw files well. Its rendering is an improvement over Lightroom’s, but I still prefer the PureRAW output.

A photo of fall foliage with a pond in center, cropped to 100%.
A 100% crop shows Lightroom’s “Enhanced” processing on the left, PureRAW 2’s output in the center, and Capture One 22’s output on the right. Jeff Carlson

Final thoughts

Since PureRAW deals only with the Raw translation stage, you can process images regardless of which app you prefer to edit in. In my Lightroom Classic examples, I ran PureRAW as a plug-in, which creates a new image saved to my Lightroom library. You can alternatively run PureRAW 2 as a standalone app, process your files, and then import the resulting DNG images into your app of choice or edit them individually.

One final note: DxO PureRAW 2 works with just about any Raw image available, with the notable exception of Apple ProRAW files. That’s because ProRAW images, while sharing many of the editable characteristics of Raw files such as greater dynamic range, are already demosaiced when they’re saved.

Outsmart your iPhone camera’s overzealous AI https://www.popphoto.com/how-to/outsmart-iphone-overzealous-ai/ Thu, 24 Mar 2022 20:55:29 +0000 https://www.popphoto.com/?p=166350
A green iPhone camera on a red background
Dan Bracaglia

Apps like Halide and Camera+ make it easy to bypass your smartphone's computational wizardry for more natural-looking photos.

Last weekend The New Yorker published an essay by Kyle Chayka with a headline guaranteed to pique my interest and raise my hackles: “Have iPhone Cameras Become Too Smart?” (March 18, 2022).

Aside from being a prime example of Betteridge’s Law of Headlines, it feeds into the idea that computational photography is a threat to photographers or is somehow ruining photography. The subhead renders the verdict in the way that eye-catching headlines do: “Apple’s newest smartphone models use machine learning to make every image look professionally taken. That doesn’t mean the photos are good.”

A bench on a beach with a blue sky.
This image was shot on an iPhone 13 Pro using the Halide app and saved as a Raw file. It was then processed in Adobe Camera Raw. Jeff Carlson

The implication there, and a thrust of the article, is that machine learning is creating bad images. It’s an example of a type of nostalgic fear contagion that’s increasing as more computational photography technologies assist in making images: The machines are gaining more control, algorithms are making the decisions we used to make, and my iPhone 7/DSLR/film SLR/Brownie took better photos. All wrapped in the notion that “real” photographers, professional photographers, would never dabble with such sorcery.

A bench on a beach with a blue sky.
Here’s the same scene shot using the native iPhone camera app, straight out of camera, with all of its processing. Jeff Carlson

(Let’s set aside the fact that the phrase “That doesn’t mean the photos are good” can be applied to every technological advancement since the advent of photography. A better camera can improve the technical qualities of photos, but doesn’t guarantee “good” images.)

I do highly recommend that you read the article, which makes some good points. My issue is that it ignores—or omits—an important fact: computational photography is a tool, one you can choose to use or not.

Knowing You Have Choices

A sandy beach with wood pylons.
Another iPhone 13 Pro photo, captured straight out of camera. Jeff Carlson

Related: Meet Apple’s new flagship iPhone 13 Pro & Pro Max

To summarize, Chayka’s argument is that the machine learning features of the iPhone are creating photos that are “odd and uncanny,” and that on his iPhone 12 Pro the “digital manipulations are aggressive and unsolicited.” He’s talking about Deep Fusion and other features that record multiple exposures of the scene in milliseconds, adjust specific areas based on their content, such as skies or faces, and fuse it all together to create a final image. The photographer just taps the shutter button and sees the end result, without needing to know any of the technical elements such as shutter speed, aperture, or ISO.

An underexposed photo of a sandy beach with wood pylons.
Here’s the same angle (slightly askew) captured using the Halide app and saved as a raw file, unedited. Jeff Carlson

You can easily bypass those features by using a third-party app such as Halide or Camera+, which can shoot using manual controls and save the images in JPEG or raw format. Some of the apps’ features can take advantage of the iPhone’s native image processing, but you’re not required to use them. The only manual control not available is aperture because each compact iPhone lens has a fixed aperture value.

That fixed aperture is also why the iPhone includes Portrait Mode, which detects the subject and artificially blurs the background to simulate the shallow depth-of-field effect created by shooting with a fast lens at f/1.8 or wider. The small optics can’t replicate it, so Apple (and other smartphone developers) turned to software to create the effect. The first implementations of Portrait Mode often showed noticeable artifacts, but the technology has improved over the last half-decade to the point where it’s not always apparent the mode was used.
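As a rough illustration of the idea (and emphatically not Apple’s pipeline), a simulated background blur comes down to blurring the whole frame and compositing the sharp original back in wherever a subject mask says “person.” The sketch below assumes you already have such a mask; producing it accurately is where the machine learning actually lives.

```python
import cv2
import numpy as np

def fake_portrait(image: np.ndarray, subject_mask: np.ndarray, ksize: int = 51) -> np.ndarray:
    """Composite a sharp subject over a blurred background.

    image: uint8 BGR frame. subject_mask: float array in [0, 1], where 1
    means "keep sharp." A toy stand-in for Portrait Mode; real
    implementations vary the blur strength with estimated depth.
    """
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)      # fake "bokeh"
    m = subject_mask.astype(np.float32)[..., None]            # broadcast over channels
    out = m * image.astype(np.float32) + (1.0 - m) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```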

But, again, it’s the photographer’s choice whether to use it. Portrait Mode is just another tool. If you don’t like the look of Portrait Mode, you can switch to a DSLR or mirrorless camera with a decent lens.

A sandy beach with wood pylons.
The same Halide raw photo, quickly edited in Adobe Lightroom. Jeff Carlson

Algorithmic Choices

More apt is the notion that the iPhone’s processing creates a specific look, identifying it as an iPhone shot. Some images can appear to have exaggerated dynamic range, but that’s nothing like the early exposure blending processing that created HDR (high dynamic range) photos where no shadow was left un-brightened.

Each system has its own look. Apple’s processing, to my eye, tends to be more naturalistic, retaining darks while avoiding blown-out areas in scenes that would otherwise be tricky for a DSLR. Google’s processing tends to lean more toward exposing the entire scene with plenty of light. These are choices made by the companies’ engineers when applying the algorithms that dictate how the images are developed.

A lake scene with a blue sky and tree in the foreground.
The iPhone 13 Pro retains blacks in the shadows of the tree, the shaded portions of the building on the pier, and the darker blue of the sky at the top. Jeff Carlson

The same applies to traditional camera manufacturers: Fujifilm, Canon, Nikon, and Sony cameras all have their own “JPEG look,” which is often the reason photographers choose a particular system. In fact, Chayka acknowledges this when reminiscing over “…the pristine Leica camera photo shot with a fixed lens, or the Polaroid instant snapshot with its spotty exposure.”

The article really wants to cast the iPhone’s image quality as some unnatural synthetic version of reality, photographs that “…are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.” That’s a lovely turn of phrase, but it comes at the end of a discussion of the iPhone’s Photographic Styles feature, which is designed to give the photographer more control over the processing. If you prefer warmer images, you can increase the warmth and select that style when shooting.

A lake scene with a blue sky and tree in the foreground.
The Pixel 6 Pro differs slightly in this shot, opening up more image detail in the building and, to a lesser extent, the deep blue of the sky at the top. Jeff Carlson

It’s also amusing that the person mentioned at the beginning of the article didn’t like how the iPhone 12 Pro rendered photos, so “Lately she’s taken to carrying a Pixel, from Google’s line of smartphones, for the sole purpose of taking pictures.”

The Pixel employs the same types of computational photography as the iPhone. Presumably, this person prefers the look of the Pixel over the iPhone, which is completely valid. It’s their choice.

Choosing with the Masses

A green iPhone camera on a red background
Computational photography is a tool, one you can choose to use or not. Dan Bracaglia

I think the larger issue with the iPhone is that most owners don’t know they have a choice to use anything other than Apple’s Camera app. The path to using the default option is designed to be smooth; in addition to prominent placement on the home screen, you can launch it directly from an icon on the lock screen or just swipe from right to left when the phone is locked. The act of taking a photo is literally “point and shoot.”

More important, for millions of people, the photos it creates are exactly what they’re looking for. The iPhone creates images that capture important moments or silly snapshots or any of the unlimited types of scenes that people pull out their phones to record. And computational photography makes a higher number of those images decent.

Of course not every shot is going to be “good,” but that applies to every camera. We choose which tools to use for our photography, and that includes computational photography as much as cameras, lenses, and capture settings.

The promise and difficulty of AI-enhanced photo editing https://www.popphoto.com/how-to/luminar-neo-ai-powered-photo-editing/ Thu, 24 Feb 2022 22:06:29 +0000 https://www.popphoto.com/?p=163334
Photo editing platform Luminar Neo's new "Relight AI" feature
Jeff Carlson

We tested out Luminar Neo's new AI-powered 'Relight' tool to see if it can really improve our shots. The results are, well, mixed.

Several years ago, an executive at Skylum (the makers of Luminar editing software) told me the company was aggressively hiring hotshot machine-learning programmers as part of a push to infuse Luminar with AI features. It was my first glimpse at the importance of using AI to stand apart from other photo editing apps. Now, Skylum has just released Luminar Neo, the newest incarnation of its AI-based editor.

One of the new features I’ve most wanted to explore is “Relight AI,” which is emblematic of what AI technologies can do for photo editing. Imagine being able to adjust the lighting in a scene based on the items the software identifies, adding light to foreground objects, and controlling the depth of the adjustment as if the image were rendered in 3D.

To be upfront, I’m focusing just on the Relight AI feature, not reviewing Luminar Neo as a whole. The app has only recently been released and, in my experience so far, still has rough edges and is missing some basic features.

Why ‘Relight?’

A lot of photo editing we do is relighting, from adjusting an image’s overall exposure to dodging and burning specific areas to make them more or less prominent. 

But one of the core features of AI-based tools is the ability to analyze a photo and determine what’s depicted in it. When the software knows what’s in an image, it can act on that knowledge.

If a person is detected in the foreground, but they’re in the shadows, you may want to increase the exposure on them to make it look as if a strobe or reflector illuminated them. Usually, we do that with selective painting, circular or linear gradients, or making complex selections. Those methods are often time-consuming, or the effects are too general.

For example, the following photo is not only underexposed, but the tones between the foreground and background are pretty similar; we want more light on the subjects in the foreground and to create separation from the active background.

Photo editing platform Luminar Neo's new "Relight AI" feature
The exposure and depth of field are not great here. Jeff Carlson

So I can start with the obvious: make the people brighter. One option in many apps is to paint an exposure adjustment onto them. In Luminar Neo, the way to do that is to use the “Develop” tool to increase the Exposure value, then use the “Mask” feature to make the edit apply only to the subjects.

Photo editing platform Luminar Neo's new "Relight AI" feature
I’ve painted a mask in a few seconds. To do it right would take several minutes of work. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Unwanted halos are easy to create when making masks. Jeff Carlson
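For what it’s worth, the arithmetic behind a masked exposure edit like this is simple once the mask exists: brighten a copy of the image and blend it back through the mask. Here’s a minimal sketch, assuming a floating-point image and a painted mask both in the 0–1 range; it isn’t Luminar’s code, just the underlying idea.

```python
import numpy as np

def masked_exposure(image: np.ndarray, mask: np.ndarray, stops: float = 1.0) -> np.ndarray:
    """Apply an exposure boost only where the mask is painted.

    image: float RGB in [0, 1]; mask: float in [0, 1], 1 = fully affected.
    One stop of exposure doubles the linear values.
    """
    brightened = np.clip(image * (2.0 ** stops), 0.0, 1.0)
    m = mask[..., None]                                # broadcast over channels
    return m * brightened + (1.0 - m) * image
```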

Another option would be to apply a linear gradient that brightens the bottom half of the image and blends into the top portion, but then the ground at the left side of the frame, which is clearly farther behind the family, would be brighter too.

Ideally, you want to be the art director who asks for the foreground to be brighter and lets the software figure it out.

How Relight AI Works

The Relight AI tool lets you control the brightness of areas near the camera and areas away from the camera; it also lets you adjust how deep the effect extends. In our example, increasing the “Brightness Near” slider does indeed light up the family and the railing, and even adjusts the background a little to smooth the transition between what Luminar Neo has determined to be the foreground and background.

Photo editing platform Luminar Neo's new "Relight AI" feature
The image with only “Brightness Near” applied in Relight AI. Jeff Carlson

The photo is already much closer to what I intended, and I’ve moved a single slider. I can also lower the “Brightness Far” slider to make the entire background recede. The “Depth” control balances the other two values (I’ll get back to Depth shortly).
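Skylum hasn’t published how Relight AI works internally, but you can model the near/far sliders as brightness gains weighted by an estimated depth map. The toy sketch below assumes a depth map where 0 is nearest and 1 is farthest, and uses a `depth_split` parameter of my own invention as a stand-in for the Depth slider; it’s an illustration of the concept, not the product’s algorithm.

```python
import numpy as np

def relight(image, depth, near=0.5, far=-0.3, depth_split=0.5):
    """Toy depth-weighted relighting.

    image: float RGB in [0, 1]; depth: float in [0, 1], 0 = near, 1 = far.
    `near` and `far` are brightness gains for the two zones; `depth_split`
    decides where "near" ends and "far" begins.
    """
    near_w = np.clip((depth_split - depth) / max(depth_split, 1e-6), 0.0, 1.0)
    far_w = np.clip((depth - depth_split) / max(1.0 - depth_split, 1e-6), 0.0, 1.0)
    gain = 1.0 + near * near_w + far * far_w           # per-pixel brightness multiplier
    return np.clip(image * gain[..., None], 0.0, 1.0)
```

Everything hinges on the quality of that depth estimate, which is exactly where the examples that follow succeed or stumble.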

Photo editing platform Luminar Neo's new "Relight AI" feature
The background is now darker, creating more separation from the foreground elements. Jeff Carlson

Depending on how the effect applies, the “Dehalo” control under Advanced Settings can smooth the transition around the foreground elements, such as the people’s hair. You can also make the near and far areas warmer or cooler using the “Warmth” sliders.

What about photos without people?

OK, photos with people are important, but also low-hanging fruit for AI. Humans get special treatment because often a person detected in the foreground is going to be the subject of the photo. What if an image doesn’t include a person?

In this next example, I want to keep the color in the sky and the silhouettes of the building but brighten the foreground. I’m going to ratchet Brightness Near all the way to 100 to exaggerate the effect so we can get an idea of where Luminar is identifying objects.

Photo editing platform Luminar Neo's new "Relight AI" feature
The original image is too dark. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Increasing “Brightness Near” reveals what Luminar thinks are foreground subjects. Jeff Carlson

We can see that the plants in the immediate foreground are lit up, as well as the main building. Luminar protected the sky in the background to the left of the building and didn’t touch the more distant building on the right. So Relight AI is clearly detecting prominent shapes.

Photo editing platform Luminar Neo's new "Relight AI" feature
Decreasing the “Depth” value illuminates just the bushes in the foreground. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Relight AI is adjusting brightness based on the shapes it has detected. Taken to the extreme, it’s also introduced a halo around the nearest building. Jeff Carlson

When I reduce the Depth value, the nearest bushes are still illuminated but the buildings remain in shadow. Cranking up the Depth amount adds an unnatural halo to the main building—but the side building still holds up well.

So, overall, Relight AI isn’t bad. In these two images it achieved its main goal: letting me adjust near and far brightness quickly and easily.

Where It Struggles

This is where I hold up a large disclaimer that applies to all photos edited using AI tools: the quality of the effect depends a lot on the images themselves and what the software can detect in them.

In this photo of trees, the software doesn’t really know what it’s looking at. The bushes and groups of trees at the right and left are at about the same distance from the camera, and then the rest of the trees recede into the distance. My expectation would be that those side and foreground trees would be illuminated, and the forest would get darker the deeper it moves away from the lens.

Photo editing platform Luminar Neo's new "Relight AI" feature
This group of trees doesn’t have a typical depth perspective. Jeff Carlson

When I make dramatic changes to the near and far brightness controls, however, Relight AI falls back to gradients from top to bottom, since in many photos, the foreground is at the bottom and the background is in the middle and top areas. It looks like the prominent trees on the right and left have been partially recognized, since they don’t go as dark as the rest, but still, the effect doesn’t work here.

Photo editing platform Luminar Neo's new "Relight AI" feature
When in doubt, Relight AI applies a gradient to simulate foreground and background lighting. Jeff Carlson

Other limitations

Occasionally, with people, the tool will apply the Brightness Near value to them and stick with it, even when you adjust the Depth setting. For example, in this photo of a person in a sunflower field, darkening the background and illuminating the foreground balances the image better, picking up the leaves and sunflowers that are closest to the camera.

Photo editing platform Luminar Neo's new "Relight AI" feature
The original image. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
With no other edits made, Relight AI improves the lighting of the subject and knocks down the brightness of the background. Jeff Carlson

When I set Depth to a low value to make the light appear very close to the camera, the flower on the left—the nearest object—gets dark, but the person’s lighting remains the same. The tool is making the assumption that a person is going to be the primary subject, regardless of the perceived depth in the image.

Photo editing platform Luminar Neo's new "Relight AI" feature
The lighting on the person is consistent even when Depth is set to almost zero. Jeff Carlson

One more limitation with the tool is the inability to adjust the mask that the AI creates. You can edit a mask of the tool’s overall effect, much as we did when painting manually earlier, but that affects only where in the image the tool’s processing will be visible. You can’t go in and help the AI identify which areas are at which depths. (This also ties into the argument I made in a previous column about not knowing what an AI tool is going to detect.)

Getting Lit in the Future

Luminar Neo’s Relight AI feature is audacious, and when it works well it can produce good results with very little effort—that’s the point. Computational photography will continue to advance and object recognition will certainly improve in the future.

And it’s also important to realize that this is just one tool. A realistic workflow would involve using features like this and then augmenting them as needed with other tools, like dodging and burning, to get the result you’re looking for.

When AI changes its mind: the unpredictable side of computational photography https://www.popphoto.com/how-to/when-ai-changes-its-mind/ Thu, 10 Feb 2022 13:00:00 +0000 https://www.popphoto.com/?p=161946
An Asian woman's eye taking in digital binary data with high-speed motion.
Getty Images

Unlike changing a shutter speed or stopping a lens down, the results of applying machine learning to photography are often hard to guess.

One of the questions that preoccupies too much of my headspace is: Why do many photographers seem wary of computational photography? AI technologies offer a lot of advantages: they make cameras see better in the dark, capture larger dynamic ranges of exposure and color, pinpoint focus by automatically locking onto faces or eyes, and save photographers time by speeding up the culling and editing process. Those all sound like wins, right?

And yet, photographers seem reluctant to fully embrace AI and machine learning tools. It’s not that we reject progress: Photography itself is a constantly evolving tale of technological advancement—without technology, there would be no photography.

Instead, I think it’s that we don’t always know what to expect when invoking many AI features.

Most photography is fairly predictable and, importantly, repeatable. For example, during the capture process, shooting with a slower shutter speed increases exposure. Upping ISO adds even more exposure but creates digital noise. When you adjust settings on a camera, you know what you’re going to get.

By contrast, when you capture a scene using a modern smartphone, it blends exposures and adjusts specific areas of a scene to balance the overall look. One manufacturer’s algorithms determine which areas to render in which ways, such as how saturated a scene will look, based on what the camera perceives. The algorithms of another company’s phone may render the same scene differently.

On the editing side, making adjustments is usually similarly predictable, from increasing exposure to balancing color. Sure, there’s variability in how some apps’ imaging engines apply color, but in general, you know what you’re going to get when you sit down to edit.

Machine learning introduces an element of unpredictability to editing. Sometimes you know what the software will do, but it’s not always apparent.

I realize I’m speaking in broad strokes here, so let me offer some examples (and counter-examples).

Perception and identification

A landscape photo edited with AI.
The Landscape Mixer neural filter in Adobe Photoshop is wild and unpredictable. Jeff Carlson

A signature characteristic of AI editing features is the ability to recognize what’s in an image. The software identifies features such as the sky, people, foliage, buildings, and whatever else its model is trained to perceive. Based on that, it can take action on those areas.

However, at the outset, you don’t know which areas will be recognized. For example, the new AI-assisted selection tools in Lightroom and Lightroom Classic do a great job of identifying a sky or a subject, in my experience. But each time you click “Select Subject,” you don’t know if the software’s idea of the subject is the same as yours. Or how much spill will also be selected outside the subject.

Now, the point of such a tool is to save you time. You could take that image into Photoshop and use its tools to make an incredibly accurate selection. Doing it in Lightroom gets you 90% of the way, and you can clean up the selection. 

In Luminar AI, the object selection is opaque. The app analyzes a photo when you open it, and you have to trust that when you use a tool such as Sky Enhancer, it will apply to the sky. If the app doesn’t think a sky exists, the sky-editing tools aren’t active at all. If a sky is detected, you have to go with the areas it thinks are skies, with limited options for adjusting the mask.

(The upcoming Luminar Neo will have improved masking and layer tools, but it currently exists as a limited early-access beta, which I haven’t used.) 

For an extreme example, consider the Landscape Mixer neural filter in Adobe Photoshop. I recognize upfront that this isn’t entirely fair, because it’s a feature still in development, and it’s also designed as something fun and artistic—no one is going to apply a winter scene to a summer photo and pass it off as a genuine photo. But my point is that when you apply one of the presets to a photo, you don’t know what you’re going to get until it’s made.

A landscape photo edited with AI.
Here’s the same photo from above, with the “Winter” filter applied. Jeff Carlson

The learning part of machine learning

The other reason I think photographers are hesitant to fully embrace AI technologies is the way the state of the art is advancing. Improving algorithms and performance is a given in software development, and it’s what we expect when we upgrade to new versions of apps. Sometimes that progress doesn’t go the way we expect.

As an example, in an early release version of Luminar AI, I used the Sky AI tool to change a drab midday scene into a more dramatic sunset. One of the improvements Luminar AI made over its predecessor was the ability to detect water reflections in a scene and apply the new sky to that area.

A landscape photo
An unedited landscape photo I shot on a rather grey day. Jeff Carlson

The version I edited turned out pretty well (except for a spot in the surf where the highlight is blown out), with a good distribution of the light in the water.

A short while later, Skylum released an update to Luminar AI that, among other things, improved the recognition of reflections. When I opened the same image after applying the update, the effect was different, even though I hadn’t moved a slider since my original edit. And now I can’t replicate the tones in that first edit. In fact, I’m not able to position the sky in the same way, which may be part of why the reflection isn’t rendered the same way. The repeatability of my earlier edit went out the window.

A landscape photo edited with AI.
Edited using the sky-replacement tool in Luminar AI in early 2021. Jeff Carlson

It’s entirely possible this was due to a bug in how Luminar AI handled reflections, before or after the update. But it could also be due to the “learning” part of machine learning. The models the software uses are trained by analyzing lots of other similar photos, which could be high quality or merely fodder. We don’t know.

I know that sounds like I’m resistant to change or that I don’t believe in advancing technology, but that’s not the case. As a counterpoint, let me draw your attention to “Process versions” in Lightroom Classic, found in the Calibration panel. The Process version is the engine used to render images in the Develop module. As imaging technology improves, Adobe implements new Process versions to add features and adapt to new tools. The current incarnation is Version 5.

A landscape photo edited with AI.
The same image, edited in Luminar AI in early 2022. Jeff Carlson

But the other Process versions are still available. When I edit an image that was imported when Version 3 was the latest, I can get the same results as I did then. Or, I can apply Version 5 and take advantage of tools that didn’t exist then. I have a much better idea of what to expect. 

Don’t get me wrong, in general, I’m a look-forward kind of guy, and I’m thrilled at a lot of the capabilities that AI technologies are bringing. But we can’t ignore that the evolutionary cycle of computational photography is fluid and in motion. And I think that’s what makes photographers hesitant to embrace them.

How to use AI to sort and edit your photos faster https://www.popphoto.com/how-to/sort-photos-faster-with-ai/ Thu, 27 Jan 2022 18:16:12 +0000 https://www.popphoto.com/?p=161216
How to edit photos faster
Image culling can be a daunting process. But AI is here to help speed things up. Getty Images

Culling images after a big shoot can be a real drag. Here's how to speed that process up, with a little help from our friend, artificial intelligence.

With computational photography, we devote a lot of attention to making images: how the camera interprets and renders a scene, how editing software can quickly create selections or smooth skin. Heck, we do that with photography in general, (justifiably) romancing the experience of getting out in nature or interacting with people in the studio or at events.

Then comes the less exciting part, reviewing those images. Shooting digital means you can grab hundreds or thousands of shots, but you still have to go through them and find the best ones. It can be grueling and time-consuming, particularly if you’ve just returned from an event and the client is expecting a quick turnaround.

This is a situation where machine learning and artificial intelligence (AI) can help make the process faster, not just make photos look better. Several apps and services can analyze the images from a photoshoot, evaluate their quality using machine learning algorithms, and perform a first culling pass while you brew coffee or walk your dog.

To get an idea of how this works in practice, I ran a few photoshoots through a handful of tools: AfterShoot, Narrative Select, Optyx, and FilterPixel, which are all standalone apps, and PostPro Wand, a plug-in for Lightroom Classic.

Criteria and Review

Related: Computational photography, explained

The first challenge is determining what makes a photo “good.” Based on your experience, knowledge of photography, and memory of the photoshoot, you can probably look at an image and know right away if it’s a keeper or one to toss. Software, on the other hand, needs more concrete criteria to work with.

It’s important to note that these apps are aimed mostly at an audience of wedding, portrait, and event photographers, since those shoots tend to produce many, many images that are often similar. It’s not unusual to fire off 20 or more frames to catch a single moment while a bride is getting ready for the ceremony, for instance. So one key feature is the ability to locate shots that appear to be in bursts and group them together. 
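One simple proxy for “these frames belong to the same burst” is capture time: if consecutive shots land within a second or two of each other, stack them. Here’s a minimal sketch of that idea; the real apps also compare image content, not just timestamps, so the function below is purely illustrative.

```python
from datetime import timedelta

def group_bursts(shots, gap_seconds=2):
    """Group (filename, capture_time) pairs into bursts.

    Consecutive frames captured within `gap_seconds` of each other are
    stacked together, a toy stand-in for how culling tools group
    near-duplicates.
    """
    if not shots:
        return []
    shots = sorted(shots, key=lambda s: s[1])          # order by capture time
    groups, current = [], [shots[0]]
    for prev, cur in zip(shots, shots[1:]):
        if cur[1] - prev[1] <= timedelta(seconds=gap_seconds):
            current.append(cur)                        # still inside the burst
        else:
            groups.append(current)                     # gap found, start a new stack
            current = [cur]
    groups.append(current)
    return groups
```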

Selecting a stack in AfterShoot reveals similar “duplicates” at right
Selecting a stack in AfterShoot reveals similar “duplicates” at right. Jeff Carlson

The apps also detect people and prioritize shots where faces are in-focus and eyes are open. Blurry images and people with closed eyes are demoted or marked as rejected so you don’t waste your time on them.

Optyx assigns values for Focus Quality, Subject Prominence, and Expression Quality (the sliders at right) to determine good images.
Optyx assigns values for Focus Quality, Subject Prominence, and Expression Quality (the sliders at right) to determine good images. Jeff Carlson

Related: How modern smartphone cameras work

These are all things you’d look for when reviewing the photos manually, but it takes more time, especially if you’re frequently zooming in to make sure a person’s face is in focus.
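Focus checking is one of the easier criteria to automate. A common heuristic (not necessarily what these particular apps use) is the variance of the Laplacian: a soft crop has few strong edges, so the variance is low. Run over a detected face crop, it gives a quick score for demoting blurry frames.

```python
import cv2

def sharpness_score(face_crop) -> float:
    """Variance of the Laplacian as a crude focus metric.

    face_crop: BGR or grayscale image region (e.g., a detected face).
    Higher values mean more edge detail, i.e., likely in focus.
    Thresholds have to be tuned per shoot and per lens.
    """
    if face_crop.ndim == 3:
        face_crop = cv2.cvtColor(face_crop, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(face_crop, cv2.CV_64F).var())
```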

The amount of control you have for assigning criteria varies among the apps. Some let you tweak the settings to your comfort level. Optyx, for example, includes built-in profiles that rank photos with star ratings or assign color labels; it can also adjust focus thresholds for portrait sessions: the Portraits (Studio) profile is stricter about picking sharply focused images than Portrait (Low Light).

Faces in Narrative Select are judged by whether they’re in focus and the state of the eyes
Faces in Narrative Select are judged by whether they’re in focus and the state of the eyes. Jeff Carlson

After the images have been analyzed, you surface the good ones based on the criteria. Clicking “Sneak Peeks” in Aftershoot, for instance, shows only the shots it thinks are highlights. Clicking a star rating or color filter in most of the others also narrows the list of possible picks.

Less obvious in these apps are other criteria that go into reviewing photos, such as overall composition and exposure. PostPro Wand calls these out specifically on its site, but the others may also be taking them into account in their analysis. 

 AfterShoot has determined that these 14 photos are the highlights of the shoot based on its criteria
AfterShoot has determined that these 14 photos are the highlights of the shoot based on its criteria. Jeff Carlson

Workflow Integration

Related: Composition in the age of AI: Who’s really framing the shot?

Most of these apps are slotted into the spot between ingesting images from a memory card and adding them to your preferred organizer. Typically you import your photos to disk and then open them in the app. After analysis, you pass the images to Lightroom or Capture One or another system.

The culling information is stored in XMP sidecar files for raw files or written to JPEGs, and that metadata gets passed to your organizing app. If you narrowed your shoot down to 100 images, for example, they would appear in Lightroom Classic with tags such as “Accepted” or marked with a color label.
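For a sense of what that handoff looks like, an XMP sidecar is just a small XML file sitting next to the raw file. The sketch below writes a bare-bones .xmp carrying a star rating and a color label, which Lightroom Classic can read on import; the culling apps themselves write much richer metadata, so this is only a minimal illustration.

```python
from pathlib import Path

XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:xmp="http://ns.adobe.com/xap/1.0/"
    xmp:Rating="{rating}"
    xmp:Label="{label}"/>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(raw_path: str, rating: int, label: str) -> None:
    """Write a minimal .xmp sidecar next to a raw file.

    Organizers such as Lightroom Classic can pick up the star rating and
    color label on import. A stripped-down sketch, not what any specific
    culling app actually writes.
    """
    sidecar = Path(raw_path).with_suffix(".xmp")
    sidecar.write_text(XMP_TEMPLATE.format(rating=rating, label=label))
```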

The exception is Wand, which as a Lightroom Classic plug-in works with your existing library and marks rejected photos with a color label (you can customize the labels).

It’s also worth noting that of these apps I tested, Wand and FilterPixel perform their analysis in the cloud after uploading images. The other apps process the images locally.

It’s Still on You

The advantage of these culling apps is the opportunity to speed up the review process, but of course, it’s still your job to pick the best photos. The client (even if that’s you) is relying on your eye and judgment, not just your ability to show up and snap the shutter. One of my wife’s favorite photos from our wedding is one where her eyes are closed in happiness as she stands next to her father. Algorithmically, an app might flag that photo as rejected because her eyes aren’t open, but artistically that expression is everything.

Still, saving time and reducing drudgery is always welcome, particularly for photographers who process large collections of photos on deadline. If that’s you, give this category of tools a try. Each of the ones I’ve mentioned includes trial periods to test the professional features. They all charge monthly or yearly fees, which can be flexible during the busy and slow times of the year.

AfterShoot is $14.99/mo or $119.88/year; Narrative Select is $18/mo or $150/year; Optyx 2 is $9.99/mo or $83.88/year; FilterPixel is $6 to $20/mo or $60 to $120/year; PostPro Wand is $17/mo or $144/year.

Composition in the age of AI: Who’s really framing the shot? https://www.popphoto.com/how-to/how-ai-impacts-composition/ Thu, 13 Jan 2022 16:00:00 +0000 https://www.popphoto.com/?p=160123
Man photographing breakfast in a cafe with smartphone
AI can't physically move your arm to help frame a shot better (yet). But it can help you improve your composition after the fact. Getty Images

In the age of AI-powered smartphone cameras and editing software, who's really in control of composition, the algorithm or the photographer? Our third 'Smarter Image' column has answers.

A recurring theme with computational photography is the tension over creative choices. The “AI” features in cameras and photo editing apps can take over for many technical aspects, such as how to expose a scene or nail focus. Does that grab too much creative control from photographers? (See my last column, “You’re already using computational photography, but that doesn’t mean you’re not a photographer.”)

Image composition seems to be outside that tension. When you line up a shot, the camera isn’t physically pulling your arms to aim the lens at a better layout. (If it is pulling your arms, it’s time to consider a lighter camera or a sturdy tripod!) But software can affect composition in many circumstances—mostly during editing, but in some cases while shooting, too.  

Consider composition

On the surface, composition seems to be the simplest part of photography. Point the camera at a subject and press or tap the shutter button. Experienced photographers know, however, that choosing where the subject appears, how it’s composed in the viewfinder/screen, and even which element is the subject, involves more work and consideration. In fact, composition is often truly the most difficult part of capturing a scene.

So where does computational photography fit into this frame?

In some ways, more advanced AI features such as HDR are easier to pull off. The camera, almost exclusively in a smartphone, captures a dozen or so shots at different exposures within a few milliseconds and then combines them to build a well-lit photo. It’s gathering a wide range of data and merging it all together.

To apply AI smarts to composition, the camera needs to understand as well as you do what’s in the viewfinder. Some of that happens: smartphones can identify when a person is in the frame, and some can recognize common elements such as the sky, trees, mountains, and the like. But that doesn’t help when determining which foreground object should be prominent and where it should be placed in relation to other objects in the scene.

Plus, when shooting, the photographer is still the one who controls where the lens is pointed. Or are they? I joke about the camera dragging your arms into position, but that’s not far from describing what many camera gimbal devices do. In addition to smoothing out camera movement, a gimbal can identify a person and keep them centered in the frame.

But let’s get back to the camera itself. If we have control over the body and where it’s pointed, any computational assistance would need to come from what the lens can see. One way of doing that is by selecting a composition within the pixels the sensor records. We do that all the time during editing by cropping, and I’ll get to that in a moment. But let’s say we want an algorithm to help us determine the best composition in front of us. Ideally, the camera would see more than what’s presented in the viewfinder and choose a portion of those pixels.

Well, that’s happening too, in a limited way. Last year Apple introduced a feature called Center Stage in the 2021 iPad, iPad mini, and iPad Pro models. The front-facing camera has an ultra-wide 120-degree field of view, and the Center Stage software reveals just a portion of that, making it look like any normal video frame. But the software also recognizes when a person is in the shot and adjusts that visible area to keep them centered. If another person enters the space, the camera appears to zoom out to include them, too. (Another example is the Logitech StreamCam, which can follow a single person left and right.) The effect feels a bit slippery, but the movement is pretty smooth and will certainly improve.
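Mechanically, that kind of auto-framing is a crop that chases a detection: center a fixed-size window on the subject, clamp it to the frame, and ease its motion so the virtual camera move doesn’t twitch. The simplified sketch below shows the idea; it’s my own toy logic, not Apple’s or Logitech’s implementation.

```python
def follow_subject(frame_w, frame_h, crop_w, crop_h, subject_cx, subject_cy,
                   prev_x, prev_y, smoothing=0.15):
    """Return the top-left corner of a crop window that tracks a subject.

    The window is centered on (subject_cx, subject_cy), clamped to the
    frame, and eased toward the target from (prev_x, prev_y) so the
    virtual "camera move" stays smooth between video frames.
    """
    target_x = min(max(subject_cx - crop_w / 2, 0), frame_w - crop_w)
    target_y = min(max(subject_cy - crop_h / 2, 0), frame_h - crop_h)
    new_x = prev_x + smoothing * (target_x - prev_x)   # ease toward the target
    new_y = prev_y + smoothing * (target_y - prev_y)
    return new_x, new_y
```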

Composition in editing

Apple Photos on the iPhone 13 Pro displays an Auto button when it detects people in the image (left). The automatic crop is applied at right
Apple Photos on the iPhone 13 Pro displays an Auto button when it detects people in the image (left). The automatic crop is applied to the image on the right. Jeff Carlson

You’ll find more auto-framing options in editing software, but the results are more hit and miss.

In the Photos app in macOS, clicking the Auto button in the Crop interface applies only straightening, which in this case rotated the image too far to the right.
In the Photos app in macOS, clicking the Auto button in the Crop interface applies only straightening, which in this case rotated the image too far to the right. Jeff Carlson

The concept mirrors what we do when sitting down to edit a photo: the app analyzes the image and detects objects and types of scenes, and then uses that insight to choose alternate compositions by cropping. Many of the apps I’ve chosen as examples use faces as the basis for recomposing; Apple’s Photos app only presents an Auto button in the crop interface when a person is present or when an obvious horizon line suggests that the image needs straightening.
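A typical way to generate such a suggestion, sketched below, is to slide a crop window until the detected subject lands near a rule-of-thirds intersection, which is roughly what the Luminar result further down appears to do. The heuristics in shipping apps are certainly more involved; this is only the skeleton of the idea.

```python
def thirds_crop(img_w, img_h, subject_cx, subject_cy, crop_w, crop_h):
    """Place a crop window so the subject sits near a rule-of-thirds point.

    Tries each thirds intersection of the crop, clamps the window to the
    image, and keeps the placement with the smallest error. Returns the
    crop's top-left corner (x, y).
    """
    best = None
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            x = min(max(subject_cx - fx * crop_w, 0), img_w - crop_w)
            y = min(max(subject_cy - fy * crop_h, 0), img_h - crop_h)
            # How far the subject ends up from the intended intersection
            # after clamping to the image bounds.
            err = abs((subject_cx - x) - fx * crop_w) + abs((subject_cy - y) - fy * crop_h)
            if best is None or err < best[0]:
                best = (err, x, y)
    return best[1], best[2]
```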

Luminar AI did a good job cropping out the signpost at right.
Luminar AI did a good job cropping out the signpost at right. Jeff Carlson

I expected better results in Luminar AI using its Composition AI tool, because the main feature of Luminar AI is the fact that it analyzes every image when you start editing it to determine the contents. It did a good job with my landscape test photo, keeping the car and driver in the frame along with the rocks at the top-left, and removing the signpost in the bottom right. However, in the portrait (below), it did the opposite of what I was hoping for, by keeping the woman’s fingers in view at right and cropping out her hair on the left.

Luminar AI looks as if it emphasized each face along the rule-of-thirds guides, but kept the distracting fingers at right.
Luminar AI looks as if it emphasized each face along the rule-of-thirds guides, but kept the distracting fingers at right. Jeff Carlson

I also threw the two test images at Pixelmator Pro on macOS, since Pixelmator (the company) has been aggressive about incorporating machine-language-based tools into its editor. It framed the two women well in the portrait (below), although my preference would be to not crop as tightly as it did. In the landscape shot, it cropped in slightly, which improved the photo only slightly.

Pixelmator Pro’s auto crop feature used on a portrait
Pixelmator Pro’s reframe emphasizes the women’s faces, though I’d shift the crop boundaries to the left for better balance. Jeff Carlson

Adobe Lightroom and Photoshop do not include any automatic cropping feature, but Photoshop Elements does offer four suggested compositions in the Tool Options bar when the Crop tool is selected. However, it’s not clear what the app is using to come up with those suggestions, as they can be quite random.

The Auto Crop feature in Photoshop Elements
Photoshop Elements also offers automatic cropping, but the results can be unpredictable. Jeff Carlson

Comp-u-sition? (No, that’s a terrible term)

It’s essential to note here that these are all starting points. In every case, you’re presented with a suggestion that you can then manipulate manually by dragging the crop handles.

Perhaps that’s the lesson to be learned today: computational photography isn’t all about letting the camera or software do all the work for you. It can give you options or save you some time, but ultimately the choices you make are still yours. Sometimes seeing a poor reframing suggestion can help you realize which elements in the photo work and which should be excised. Or, you may apply an automatic adjustment, disagree (possibly vehemently), and do your own creative thing.

You’re already using computational photography, but that doesn’t mean you’re not a photographer https://www.popphoto.com/how-to/how-smartphone-cameras-work/ Thu, 30 Dec 2021 18:08:57 +0000 https://www.popphoto.com/?p=159292
A woman shooting a smartphone photo at dusk
A smarter camera just means more opportunities to concentrate on what matters most in photography: composition, lighting, and timing. Getty Images

In our second 'Smarter Image' column, we'll take a look at the core features that make modern smartphone photography possible. And assess how they impact creativity.

Maybe questioning my own intelligence isn’t the best way to kick off this piece, but if I’m going to write a column called “The Smarter Image,” I need to be honest. In many ways, my cameras are smarter than I am—and while that sounds threatening to many photographers, it can be incredibly freeing.

Computational photography is changing the craft of photography in significant ways (see my previous column, “The next age of photography is already here”). As a result, the technology is usually framed as an adversary to traditional photography. If the camera is “doing all the work,” then are we photographers just button pushers? If the camera is making so many decisions about exposure and color, and blending shots to create composite images for greater dynamic range, is there no longer any art to photography?

Smartphone cameras keep getting better at capturing images in low light. But how? Through multi-image fusion.
Smartphone cameras keep getting better at capturing images in low light. But how? Through multi-image fusion. Getty Images

I’m deliberately exaggerating because we as a species tend to jump to extremes (see also: the world). But extremes also allow us to analyze differences more clearly.

One part of this is our romanticized notion of photography. We hold onto the idea that the camera simply captures the world around us as it is, reflecting the environment and burning those images onto a chemical emulsion or a light-sensitive sensor. In the early days, the photographer needed to choose the exposure, aperture, focus, and film stock—none of those were automated. That engagement with all of those aspects made the process more hands-on, requiring more of the photographer’s skills and experience.

Now, particularly with smartphones, a good photo can be made by just pointing the lens and tapping a button. And in many cases, that photo will have better focus, more accurate color, and an exposure that balances highlights and shadows even in challenging light.

Notice that those examples, both traditional and modern, solve technical problems, not artistic ones. It’s still our job to find good light, work out composition, and capture emotion in subjects. When the camera can take care of the technical, we gain more space to work out the artistic aspects.

Let’s consider some examples of this in action. Currently, you’ll see far more computational photography features in smartphones, but AI is creeping into DSLR and mirrorless systems, too.

Multi-image fusion

A closer look at the many steps in Apple's multi-image processing pipeline.
A closer look at the many steps in Apple’s multi-image processing pipeline for the iPhone 13 and 13 Pro. Apple

This one feels the most “AI” in terms of being both artificial and intelligent, and yet the results are often quite good. Many smartphones, when you take a picture, record several shots at various exposure levels and merge them together into one composite photo. It’s great for balancing difficult lighting situations and creating sharp images where a long exposure would otherwise make the subject blurry.

Google’s Night Sight feature and Apple’s Night and Deep Fusion modes coax light out of darkness by capturing a series of images at different ISO and exposure settings, then de-noising the pieces and merging the results. It’s how you can get usable low-light photos even when shooting hand-held.
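Strip away the alignment, ghost rejection, and tone mapping that make these pipelines actually work, and the core trick is averaging: scale each frame to a common exposure and merge, and random sensor noise drops roughly with the square root of the number of frames. A bare-bones sketch of just that step, nothing like Apple’s or Google’s real pipelines:

```python
import numpy as np

def merge_burst(frames, exposures):
    """Toy multi-frame merge: normalize each frame's exposure and average.

    frames: list of aligned arrays; exposures: relative exposure factor for
    each frame. Averaging N aligned frames cuts random noise roughly by
    sqrt(N). Real pipelines also align, reject ghosting, and tone-map.
    """
    normalized = [f.astype(np.float32) / e for f, e in zip(frames, exposures)]
    return np.mean(normalized, axis=0)
```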

What you don’t get as a photographer is transparency into how the fusion is happening; you can’t reverse-engineer the component parts. Even Apple’s ProRAW format, which combines the advantages of shooting in RAW—greater dynamic range, more data available for editing—with multi-image fusion, creates a de-mosaiced DNG file. It certainly has more information than the regular processed HEIC or JPEG file, but the data isn’t as malleable as it is with a typical RAW file.

Scene recognition

The Google Pixel 6 and Pixel 6 Pro cameras use "Real Tone" technology, which promises better accuracy in skin tones for all types of people.
The Google Pixel 6 and Pixel 6 Pro cameras use “Real Tone” technology, which promises better accuracy in skin tones. Triyansh Gill / Unsplash

So much of computational photography is the camera understanding what’s in the frame. One obvious example is when a camera detects that a person is in front of the lens, enabling it to focus on the subject’s face or eyes.

Now, more things are actively recognized in a shot. A smartphone can pick out a blue sky and boost its saturation while keeping a grouping of trees their natural green instead of letting them drift toward blue. It can recognize snow scenes, sunsets, urban skylines, and so on, adjusting those areas of the scene as it writes the image to memory.
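As a rough illustration of that kind of selective adjustment, here is a sketch that masks likely sky pixels by hue and brightness and boosts saturation only there, leaving foliage alone. The file name and thresholds are assumptions made for the example; real scene-recognition pipelines use trained segmentation models rather than simple color rules.

```python
import cv2
import numpy as np

img = cv2.imread("landscape.jpg")  # hypothetical input photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
h, s, v = cv2.split(hsv)

# Crude "blue sky" mask: OpenCV hue runs 0-179, and blue sits around 100-130.
# Require reasonable brightness and some saturation so gray clouds are skipped.
sky = (h > 95) & (h < 135) & (v > 120) & (s > 30)

# Boost saturation only inside the mask; trees and ground keep their color.
s[sky] = np.clip(s[sky] * 1.35, 0, 255)

out = cv2.cvtColor(cv2.merge([h, s, v]).astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("landscape_boosted.jpg", out)
```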

Another example is the ability to not just recognize people in a shot, but preserve their skin tones when other areas are being manipulated. The Photographic Styles feature in the iPhone 13 and iPhone 13 Pro lets you choose from a set of looks, such as Vibrant, but it’s smart enough to not make people look as if they’re standing under heat lamps.

Or take a look at Google’s Real Tone technology, a long-overdue effort to more accurately render darker skin tones. For decades, color film was processed using only white-skinned reference images, leading to inaccurate representations of darker skin. (I highly recommend the “Shirley Cards” episode of the podcast 99% Invisible for more information.) Google claims that Real Tone more accurately depicts the full range of skin color.

Identifying objects after capture

Modern smartphone cameras can identify subjects, like PopPhoto editor Dan’s dog Belvedere, with ease.
Modern smartphone cameras can identify subjects like PopPhoto editor Dan’s dog Belvedere with ease. If you tap the “Look up – Dog” tab, Siri will present images and information about similar-looking breeds. Dan Bracaglia

Time for software to help me mask a shortcoming: I’m terrible at identifying specific trees, flowers, and so many of the things that I photograph. Too often I’ve written captions like, “A bright yellow flower on a field of other flowers of many colors.”

Clearly, I’m not the only one, because image software now helps. When I shoot a picture of nearly any kind of foliage with my iPhone 13 Pro, the Photos app uses machine learning to recognize that a plant is present. I can then tap to view the possible matches.

This kind of awareness extends to notable geographic locations, dog breeds, bird species, and more. In this sense, the camera (or more accurately, the online database the software is accessing) is smarter than I am, making me seem more knowledgeable—or at least basically competent.
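You don’t need a phone to experiment with this kind of after-the-fact recognition. Here is a hedged sketch using a pretrained ResNet-50 classifier from torchvision, with a hypothetical photo file name; it’s a stand-in for, not a copy of, whatever models Apple Photos or Google Photos actually run.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load an ImageNet-pretrained classifier and its matching preprocessing.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("belvedere.jpg").convert("RGB")  # hypothetical photo of a dog
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the five most likely labels, e.g. specific dog breeds.
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    label = weights.meta["categories"][idx.item()]
    print(f"{label}: {p.item():.1%}")
```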

Smarts don’t have to smart

Loading a film camera.
Burnt out on the automation associated with smartphone photography? Maybe it’s time to go all-out manual and try your hand at some film. Getty Images

I want to reiterate that all of these features are, for the most part, assisting with the technical aspects of photography. When I’m shooting with my smartphone, I don’t feel like I’ve given up anything. On the contrary, the camera is often making corrections that I would otherwise have to deal with. I get to think more about what’s in the frame instead of how to expose it.

It’s also important to note that alternatives are close at hand: apps that enable manual controls, shooting in RAW for more editing latitude later, and so on. And hey, you can always pick up a used SLR camera and a few rolls of film and go completely manual.

The post You’re already using computational photography, but that doesn’t mean you’re not a photographer appeared first on Popular Photography.


]]>
Computational photography, explained: The next age of image-making is already here https://www.popphoto.com/how-to/what-is-computational-photography/ Thu, 16 Dec 2021 13:00:00 +0000 https://www.popphoto.com/?p=158356
The future of photography is computational
Computational photography refers to processing images, often as they’re captured, in a way that goes beyond simply recording the light that hits a camera’s image sensor. Getty Images

Our inaugural 'Smarter Image' column breaks down the concept of 'computational' photography so that you can better leverage its powers.

The post Computational photography, explained: The next age of image-making is already here appeared first on Popular Photography.

]]>
The future of photography is computational
Computational photography refers to processing images, often as they’re captured, in a way that goes beyond simply recording the light that hits a camera’s image sensor. Getty Images

The most profound shift in photography since the transition from film to digital is happening now—and most people don’t realize it yet.

That shift is computational photography, which refers to processing images, often as they’re captured, in a way that goes beyond simply recording the light that hits a camera’s image sensor. The terms machine learning (ML) and artificial intelligence (AI) also get bandied about when we’re talking about this broad spectrum of technologies, inevitably leading to confusion.

Does the camera do everything automatically now? Is there a place for photographers who want to shoot manually? Do cloud networks know everything about us from our photos? Have we lost control of our photos?

It’s easy to think “AI” and envision dystopian robots armed with smartphones elbowing us aside. Well, maybe it’s easy for me, because now that particular mini-movie is playing in a loop in my head. But over the past few years, I’ve fielded legitimate concerns from photographers who are anxious about the push toward incorporating AI and ML into the field of photography.

So let’s explore this fascinating space we’re in. In this column, I’ll share how these technologies are changing photography and how we can understand and take advantage of them. Just as you can take better photos when you grasp the relationships between aperture, shutter speed, and ISO, knowing how computational photography affects the way you shoot and edit is bound to improve your photography in general.

For now, I want to talk about just what computational photography is in general, and how it’s already impacting how we take photos, from capture to organization to editing.

Let’s get the terminology out of the way

The future of photography is computational
It’s easy to think “AI” and envision dystopian robots armed with smartphones elbowing us aside. Getty Images

Computational photography is an umbrella term that essentially means, “a microprocessor and software did extra work to create this image.”

True researchers of artificial intelligence may bristle at how the term AI has been adopted broadly, because we’re not talking about machines that can think for themselves. And yet, “AI” is used most often because it’s short and people have a general idea, based on fiction and cinema over the years, that it refers to some machine independence. Plus, it works really well in promotional materials. There’s a tendency to throw “AI” onto a product or feature name to make it sound cool and edgy, just as we used to add “cyber” to everything vaguely hinting at the internet.

A more accurate term is machine learning (ML), which is how much of this computational photography technology is built. In ML, a software program is fed thousands or millions of examples of data—in our case, images—as the building blocks to “learn” from. For example, some apps can identify that a landscape photo you shot contains a sky, some trees, and a pickup truck. The software has ingested images that contain those objects and were labeled as such. So when a photo contains a green, vertical, roughly triangular shape with appendages that resemble leaves, it’s identified as a tree.
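Here is a toy sketch of that learn-from-labeled-examples loop using scikit-learn. The two “features” (how green and how blue an image is) and the synthetic training data are invented purely for illustration; production systems learn far richer features from real labeled photos.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each image is summarized by two made-up features:
# feature 0 = "how green it is", feature 1 = "how blue it is".
trees = rng.normal(loc=[0.8, 0.2], scale=0.1, size=(500, 2))
skies = rng.normal(loc=[0.2, 0.9], scale=0.1, size=(500, 2))

X = np.vstack([trees, skies])
y = ["tree"] * 500 + ["sky"] * 500  # the human-supplied labels

# "Learning" is just fitting a model to those labeled examples.
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# A new photo that's mostly green gets identified as a tree.
print(model.predict([[0.75, 0.25]]))  # -> ['tree']
```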

Another term that you may run into is high dynamic range, or HDR, which is achieved by blending several different exposures of the same scene to create a result where a bright sky and a dark foreground are balanced in a way that a single shot couldn’t capture. In the early days of HDR, the photographer combined the exposures manually, often resulting in garish, oversaturated images where every detail was illuminated and exaggerated. Now, that same approach happens in smartphone cameras automatically during capture, with much more finesse, creating images that look more like what your eyes—which have a much higher dynamic range than a camera’s sensor—perceived at the time.
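Unlike the exposure fusion sketch earlier, classic HDR builds an actual high-dynamic-range radiance map from bracketed frames and their exposure times, then tone maps it back into a displayable image. A minimal OpenCV version, assuming three hypothetical bracketed files and guessed shutter speeds, might look like this:

```python
import cv2
import numpy as np

# Hypothetical bracketed frames and their shutter speeds in seconds.
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
frames = [cv2.imread(f) for f in files]

# Recover the camera response curve, merge into a 32-bit radiance map,
# then tone map that wide-range data down to something a screen can show.
response = cv2.createCalibrateDebevec().process(frames, times)
hdr = cv2.createMergeDebevec().process(frames, times, response)
ldr = cv2.createTonemapReinhard(1.5).process(hdr)  # 1.5 = gamma

cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```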

Smarter capture

The Pixel 6 Pro
When you tap the shutter button on a modern smartphone like the Pixel 6 Pro, millions of operations take place behind the scenes to create that image. Google

Perhaps the best example of computational photography is in your pocket or bag: the mobile phone, the ultimate point-and-shoot camera. It doesn’t feel disruptive, because the process of grabbing a shot on your phone is straightforward. You open the camera app, compose a picture on-screen, and tap the shutter button to capture the image.

Behind the scenes, though, your phone runs through millions of operations to get that photo: evaluating the exposure, identifying objects in the scene, capturing multiple exposures in a fraction of a second, and blending them together to create the photo displayed moments later.

In a very real sense, the photo you just captured is manufactured, a combination of exposures and algorithms that make judgments based not only on the lighting in the scene, but the preferences of the developers regarding how dark or light the scene should be rendered. It’s a far cry from removing a cap and exposing a strip of film with the light that comes through the lens.

But let’s take a step back and get pedantic for a moment. Digital photography, even with early digital cameras, is itself computational photography. The sensor records the light, but the camera then applies a demosaicing algorithm to turn that digital information into colored pixels, and then typically compresses the image into a JPEG file that is optimized to look good while also keeping the file size small.
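You can replay that basic decode step yourself on a Raw file. The sketch below leans on the third-party rawpy library (a LibRaw wrapper) and imageio, with a hypothetical Canon file name; it demosaics, applies the camera’s white balance, and writes out ordinary RGB pixels, which is roughly the work a camera’s processor does before saving a JPEG.

```python
import rawpy
import imageio.v3 as iio

# Decode the sensor data: demosaic, apply the as-shot white balance,
# and hand back an ordinary 8-bit RGB image.
with rawpy.imread("IMG_0001.CR2") as raw:  # hypothetical Raw file
    rgb = raw.postprocess(use_camera_wb=True, output_bps=8)

# Write a viewable file, much like the camera's own JPEG step.
iio.imwrite("IMG_0001.jpg", rgb)
```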

Traditional camera manufacturers like Canon, Nikon, and Sony have been slow to incorporate the types of computational photography technologies found in smartphones, for practical and no doubt institutional reasons. But they also haven’t been sitting idle. The bird eye-tracking feature in the Sony Alpha 1, for example, uses subject recognition to identify birds in the frame in real time.

Smarter organization

For quite a while, apps such as Adobe Lightroom and Apple Photos have been able to identify faces in photos, making it easier to bring up all the images containing a specific person. Machine learning now enables software to recognize all sorts of objects, which can save you the trouble of entering keywords—a task that photographers seem quite reluctant to do. You can enter a search term and bring up matches without having touched the metadata in any of those photos.
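One way this kind of keyword-free search can work under the hood is with a joint text-and-image model. Here is a hedged sketch using the openly available CLIP model via Hugging Face’s transformers library; the library folder and query string are assumptions, and this is not a description of how Lightroom or Apple Photos implement their search.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = sorted(Path("library").glob("*.jpg"))  # hypothetical photo library
images = [Image.open(p).convert("RGB") for p in paths]

query = "a pickup truck in a field"
inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)

with torch.no_grad():
    scores = model(**inputs).logits_per_text[0]  # one similarity score per image

# Show the five best matches, no keywords or metadata required.
for score, path in sorted(zip(scores.tolist(), paths), reverse=True)[:5]:
    print(f"{score:6.2f}  {path.name}")
```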

If you think applying keywords is drudgery, what about culling a few thousand shots into a more manageable number of images that are actually good? Software such as Optyx can analyze all the images, flag the ones that are out of focus or severely underexposed, and mark those for removal. Photos with good exposure and sharp focus get elevated so you can end up evaluating several dozen, saving a lot of time.
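Optyx doesn’t publish its internals, so here is a much simpler stand-in for the culling idea: flag frames whose Laplacian variance (a common sharpness proxy) or average brightness falls below rough, made-up thresholds. The folder name and cutoffs are assumptions to tune for your own shoots.

```python
from pathlib import Path

import cv2

for path in sorted(Path("shoot").glob("*.jpg")):  # hypothetical folder from a shoot
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)

    problems = []
    if cv2.Laplacian(gray, cv2.CV_64F).var() < 100:  # low variance = likely soft focus
        problems.append("soft")
    if gray.mean() < 40:  # very dark average = likely severe underexposure
        problems.append("underexposed")

    verdict = "flag for review: " + ", ".join(problems) if problems else "keep"
    print(path.name, verdict)
```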

Smarter editing

Photoshop Super Resolution
Machine learning allows for smarter image editing. Photoshop’s Super Resolution mode, which can increase a shot’s resolution while maintaining good detail and sharpness, is an example of this. Pop Photo

The post-capture stage has seen a lot of ML innovation in recent years as developers add smarter features for editing photos. For instance, the Auto feature in Lightroom, which applies several adjustments based on what the image needs, improved dramatically when it started referencing Adobe’s Sensei cloud-based ML technology. Again, the software recognizes objects and scenes in the photo, compares it to similar images in its dataset, and makes better-informed choices in how to adjust the shot.

As another example, ML features can create complex selections in a few seconds, compared to the time it would take to hand-draw a selection using traditional tools. Skylum’s Luminar AI identifies objects, such as a person’s face, when it opens a photo. Using the Structure AI tool, you can add contrast to a scene and know that the effect won’t be applied to the person (which would be terribly unflattering). Or, in Lightroom, Lightroom Classic, Photoshop, and Photoshop Elements, the Select Subject feature makes an editable selection around the prominent thing in the photo, including a person’s hair, which is difficult to do manually.
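Adobe’s and Skylum’s subject-selection models are proprietary, but a classical algorithm shows what “make a selection around the prominent thing” means in code. This sketch uses OpenCV’s GrabCut, seeded with a rough rectangle (an assumption standing in for the automatic subject detection the ML tools provide):

```python
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")  # hypothetical photo with a clear subject
h, w = img.shape[:2]

# Seed GrabCut with a rectangle that loosely frames the subject;
# the algorithm refines it into a per-pixel foreground/background mask.
rect = (int(w * 0.2), int(h * 0.1), int(w * 0.6), int(h * 0.8))
mask = np.zeros((h, w), np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels as the "selection."
selection = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("subject_mask.png", selection.astype("uint8"))
```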

Most ML features are designed to relieve pain points that otherwise take up valuable editing time, but some are able to simply do a better job than previous approaches. If your image was shot at a high ISO under dark lighting, it probably contains a lot of digital noise. De-noising tools have been available for some time, but they usually risk turning the photo into a collection of colorful smears. Now, apps such as ON1 Photo RAW and Topaz DeNoise AI use ML technology to remove noise and retain detail.
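For contrast with those ML denoisers, here is what a classical approach looks like: OpenCV’s non-local means filter, run on a hypothetical high-ISO frame. It averages similar patches across the image, which preserves edges better than a plain blur but still falls well short of what the ML tools mentioned above can do.

```python
import cv2

noisy = cv2.imread("high_iso.jpg")  # hypothetical high-ISO frame

# Non-local means: search for similar patches across the frame and average them.
# Positional args: filter strength (luma), filter strength (chroma),
# template window size, search window size.
clean = cv2.fastNlMeansDenoisingColored(noisy, None, 8, 8, 7, 21)

cv2.imwrite("high_iso_denoised.jpg", clean)
```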

And for my last example, I want to point to the capability to enlarge low-resolution images. Upsizing a digital photo has always carried the risk of softening the image, because you’re often just making existing pixels larger. Now, ML-based resizing features, such as Pixelmator Pro’s ML Super Resolution or Photoshop’s Super Resolution, can increase the resolution of a shot while smartly keeping sharp, in-focus areas crisp.
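Neither Adobe nor Pixelmator documents its upscaling model, but you can try a learned upscaler yourself with OpenCV’s dnn_superres module (part of opencv-contrib-python). It needs a pretrained model file downloaded separately; the EDSR_x4.pb path and the input file below are placeholders.

```python
import cv2

# Requires opencv-contrib-python and a pretrained EDSR model file on disk.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # placeholder path to the downloaded model
sr.setModel("edsr", 4)       # 4x upscaling

small = cv2.imread("web_size.jpg")  # hypothetical low-resolution image
big = sr.upsample(small)            # enlarged with learned detail, not just bigger pixels

cv2.imwrite("web_size_4x.jpg", big)
```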

The ‘Smarter Image’

The future of photography is computational
We’ll dive deeper into computational photography’s features in the next column, including what it means for “selfies”. Getty Images

I’m skimming quickly over the possibilities to give you a rough idea of how machine learning and computational photography are already affecting photographers. In upcoming columns, I’m going to look at these and other features in more depth. And along the way, I’ll cover news and interesting developments in this rapidly growing field. You can’t swing a dead Pentax without hitting something that’s added “AI” to its name or marketing materials these days.

And what about me, your smart(-ish/-aleck) columnist? I’ve written about technology and creative arts professionally for over 25 years, including several photography-specific books published by Pearson Education and Rocky Nook, and hundreds of articles for outlets such as DPReview, CreativePro, and The Seattle Times. I co-host two podcasts, lead photo workshops in the Pacific Northwest, and drink a lot of coffee.

It’s an exciting time to be a photographer. Unless you’re a dystopian robot, in which case you’re probably tired of elbowing photographers.

The post Computational photography, explained: The next age of image-making is already here appeared first on Popular Photography.


]]>