Columns | Popular Photography
Founded in 1937, Popular Photography is a magazine dedicated to all things photographic.

How to unlock your smartphone camera’s best hidden features
https://www.popphoto.com/how-to/unlock-smartphone-camera-app-features/
Tue, 20 Sep 2022 04:34:10 +0000
Puget Sound grain terminal.
Jeff Carlson

Whether you're shooting Android or iPhone, here's how to get the most out of your device's built-in camera app.

The post How to unlock your smartphone camera’s best hidden features appeared first on Popular Photography.


What could be more fundamental to photography today than our smartphone cameras? They’re ever-present, ready in moments, and the technology behind them makes it easy to capture great photos in most situations. And yet, I regularly encounter people who are unaware of many of the core functions of the built-in camera app.

Smartphone camera fundamentals extend beyond just “push the big button.” Some tools help you set up the shot, and some give you more control over the exposure. A few are just plain convenient or cool. However, these features aren’t always easy to find. That’s where we come in.

iOS 16 vs. Android 13

First, a note on hardware: for these examples, I’m using the two phones I have at hand, an iPhone 13 Pro running iOS 16 and a Google Pixel 6 Pro running Android 13. I’m also focusing only on the built-in camera apps; for even more manual control, you can find third-party apps in the app stores. Many camera features overlap between iOS and Android, but some may not be available on older models, or may be accessible in a different way. If something here doesn’t match what you see, break out the manual—I mean, search Google—and see if it’s available on your phone.

How to quick-launch the camera

Most people perform the usual dance of unlocking the phone, finding the camera app, and tapping to launch it. By that time, the moment you were trying to capture might be gone. There are faster ways.

Related: Composition in the age of AI – Who’s really framing the shot?

On the iPhone’s lock screen, swipe right-to-left to jump straight to the camera app without unlocking the phone at all. You can also press the camera icon on the lock screen. On the Pixel, double-press the power button from any screen.

When the phone is unlocked, a few more options are available. On both phones, press and hold the camera app icon to bring up a menu of shooting modes, such as opening the app with the front-facing selfie camera active.

Screenshots of Apple and Google camera apps with shortcuts shown.
Press and hold the Camera app icon to display some photo mode shortcuts (iPhone 13 Pro at left, Pixel 6 Pro at right). Jeff Carlson

I also like the ability to double-tap the back of the phone to launch the camera. On the iPhone, go to Settings > Accessibility > Touch > Back Tap and choose Camera for the Double Tap (or Triple Tap) option. In Android, go to Settings > System > Gestures > Quick Tap > Open app and choose Camera.

Related: Outsmart your iPhone camera’s overzealous AI

How to use the volume buttons to trigger the shutter

If you miss the tactile feedback of pressing a physical shutter button, or if hitting the software button introduces too much shake, press a volume button instead.

On both phones, pressing either volume button triggers the shutter. Holding a button starts recording video, just as if you hold your finger on the virtual shutter button.

Hand holding an iPhone and pressing the volume button to take a photo.
Press a volume button to trigger the shot for that tactile-camera experience. Jeff Carlson

On the iPhone, you can also set the volume up button to fire off multiple shots in burst mode: go to Settings > Camera > Use Volume Up for Burst.

How to adjust the exposure & focus quickly

The camera apps do a good job of determining the proper exposure for any given scene (granted, “proper” is a loaded term). You do have more control, though, even if the interfaces don’t make it obvious.

On the iPhone

A water scene with focus held in the distance.
Press and hold to lock exposure and focus on the iPhone. Jeff Carlson

On the iPhone, tap anywhere in the preview to set the focus and meter the exposure level based on that point. Even better (and this is a feature I find that many people don’t know about), touch and hold a spot to lock the focus and exposure (an “AE/AF LOCK” badge appears). You can then move the phone to adjust the composition and not risk the app automatically resetting them.

A water scene with the exposure decreased.
Drag the sun icon to adjust the exposure without changing the focus lock on the iPhone. Jeff Carlson

Once the focus and exposure are set or locked, lift your finger from the screen and then drag the sun icon that appears to the right of the target box to manually increase or decrease the exposure. A single tap anywhere else resets the focus and exposure back to automatic.

On the Pixel

On the Pixel, tap a point to set the focus and exposure. That spot becomes a target, which stays locked even as you move the phone to recompose the scene. Tapping also displays sliders you can use to adjust white balance, exposure, and contrast. Tap the point again to remove the lock, or tap elsewhere to focus on another area.

A water scene with Google's exposure slider shown.
The Pixel 6 Pro displays sliders for exposure, white balance, and contrast control when you tap to meter and focus on an area. Jeff Carlson

How to zoom with confidence

We think of “the camera” on our phones, but really, on most modern phones, there are multiple cameras, each with its own image sensor behind the array of lenses. So when you’re tapping the “1x” or “3x” button to zoom in or out, you’re switching between cameras.

Whenever possible, stick to those preset zoom levels. The 1x level uses the main camera (what Apple calls the “wide” camera), the 3x level uses the telephoto camera, and so on. Those are optical values, which means you’ll get a cleaner image as the sensor records the light directly.

The same water scene, zoomed in using pinch-to-zoom.
When you drag the camera selection buttons, this zoom dial appears for an up to 15x telephoto increase. But if you’re not on the 0.5x, 1x, or 3x levels, you’re sacrificing image quality for digital zoom. Jeff Carlson

But wait, what about using the two-finger pinch gesture to zoom in or out? And you can also drag left or right on the zoom selection buttons to reveal a circular control (iPhone) or slider (Android) that lets you compose your scene without needing to move, or even zoom way in to 15x or 20x.

It’s so convenient, but try to avoid it if possible. All those in-between values are calculated digitally: the software is interpolating what the scene would look like at that zoom level by artificially enlarging pixels. Digital zoom technology has improved dramatically over the years, but optical zoom is still the best option.
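To see why those in-between zoom values cost you detail, here’s a minimal sketch of what digital zoom boils down to. The function name and the crude nearest-neighbor method are illustrative only; real camera software uses far more sophisticated interpolation, but the underlying limitation is the same:

```python
# Illustrative sketch (not an actual camera pipeline): digital zoom
# crops the sensor image and enlarges the crop by interpolating new
# pixels, so no new detail is captured.

def digital_zoom(image, factor):
    """Crop the center 1/factor of a 2-D grayscale image, then
    enlarge the crop back to the original size by repeating pixels."""
    h, w = len(image), len(image[0])
    ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    # Enlarge: every output pixel maps back to an existing crop pixel.
    return [
        [crop[int(y * ch / h)][int(x * cw / w)] for x in range(w)]
        for y in range(h)
    ]

# An 8x8 "sensor" image with 64 distinct pixel values:
img = [[x + 10 * y for x in range(8)] for y in range(8)]
zoomed = digital_zoom(img, 2.0)
```

At 2x, the 64 output pixels contain only 16 distinct values: every “new” pixel is a copy of an existing one. An optical zoom, by contrast, delivers 64 genuinely distinct measurements of light.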

How to switch camera modes quickly

Speaking of switching, the camera apps feature many different shooting modes, such as Photo, Video, and Portrait. Instead of tapping or trying to drag the row of mode names, on both iOS and Android, simply swipe left or right in the middle of the screen to switch modes.

Two flowers at different views.
Drag anywhere in the middle of the preview to switch between shooting modes. Jeff Carlson

How to use the grid & level for stronger compositions

Whether you subscribe to the “rule of thirds” or just want some help keeping your horizons level, the built-in grid features are handy.

In iOS, go to Settings > Camera > Grid and turn the option on. In Android, you can choose from three types of grids by going to the settings in the camera app, tapping More Settings, and choosing a Grid Type (such as 3 x 3).

The grid on the iPhone, and a related setting called Framing Hints on the Pixel, also enable a horizontal level. When you’re holding the phone parallel to the ground or a table, a + icon appears in the middle of the screen on both models. As you move, the phone’s accelerometer indicates when you’re not evenly horizontal by displaying a second + icon. Maneuver the phone so that both icons line up to ensure the camera is horizontally level.

A close-up of a pink flower.
When the phone is held parallel to the ground, a pair of + icons appears to indicate how level it is. Line them up for a level shot. (iPhone shown here.) Jeff Carlson

How to control the flash & ‘Night’ modes

Both camera systems are great about providing more light in dark situations, whether that’s turning on the built-in flash or activating Night mode (iOS) or Night Sight (Android). The interfaces for controlling those are pretty minimal, though.

On the iPhone, tap the flash icon (the lightning bolt) to toggle between Off and Auto. For more options, tap the caret (^) icon, which replaces the camera modes beneath the preview with buttons for more features. Tap the Flash button to choose between Auto, On, and Off.

On the Pixel, tap the Settings button in the camera app and, under More Light, tap the Flash icon (another lightning bolt).

A dimly lit night scene with an old car.
The crescent moon icon indicates the Pixel 6 Pro is using its Night Sight mode. Jeff Carlson

The Pixel includes its Night Sight mode in the More Light category. When it’s enabled, Night Sight automatically activates in dark situations—you’ll see a crescent moon icon on the shutter button. You can temporarily deactivate this by tapping the Night Sight Auto button that appears to the right of the camera modes.

The iPhone’s Night mode is controlled by a separate button, which looks like a crescent moon with vertical stripes indicating the dark side of the moon. Tap it to turn Night mode on or off. Or, tap the caret (^) icon and then tap the Night mode button to reveal a sliding control that lets you choose an exposure time beyond just Auto (up to 30 seconds in a dark environment when the phone is stabilized, such as on a tripod).

A dimly lit night scene with an old car.
The yellow Night mode button indicates that the current maximum exposure is set for 2 seconds. Jeff Carlson

Put the fun in smartphone fundamentals

As with every camera—smartphone or traditional—there are plenty of features to help you get the best shot. Be sure to explore the app settings and the other buttons (such as setting self-timers or changing the default aspect ratio) so that when the time comes, you know exactly which smartphone camera feature to turn to.


Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

The state of AI in your favorite photo editing apps
https://www.popphoto.com/how-to/ai-photo-editing-apps/
Tue, 30 Aug 2022 19:43:41 +0000
ON1’s Super Select AI feature automatically detects different subjects
ON1’s forthcoming Super Select AI feature automatically detects various subjects/elements (shown in blue), allowing users to quickly create an editable mask. ON1

From Lightroom to Luminar Neo, we surveyed the field, and these are the most powerful AI-enhanced photo editing platforms.

The post The state of AI in your favorite photo editing apps appeared first on Popular Photography.


Artificial Intelligence (AI) technologies in photography are more widespread than ever before, touching every part of the digital image-making process, from framing to focus to the final edit. But they’re also widespread in the sense of being spread wide, often appearing as separate apps or plug-ins that address specific needs.

That’s starting to change. As AI photo editing tools begin to converge, isolated tasks are being added to larger applications, and in some cases, disparate pieces are merging into new utilities.

This is great for photographers because it gives us improved access to capabilities that used to be more difficult, such as dealing with digital noise. From developers’ perspectives, this consolidation could encourage customers to stick with a single app or ecosystem instead of playing the field.

Let’s look at some examples of AI integration in popular photo editing apps.

ON1 Photo RAW

ON1 currently embodies this approach with ON1 Photo RAW, its all-in-one photo editing app. Included in the package are tools that ON1 also sells as separate utilities and plug-ins, including ON1 NoNoise AI, ON1 Resize AI, and ON1 Portrait AI.

The company recently previewed a trio of new features it’s working on for the next major versions of ON1 Photo RAW and the individual apps. Mask AI analyzes a photo and identifies subjects; in the example ON1 showed, the software picked out a horse, a person, foliage, and natural ground. You can then click a subject and apply an adjustment, which is masked solely to that individual/object.

ai photo editing tools
In this demo of ON1’s Mask AI feature under development, the software has identified subjects such as foliage and the ground. ON1

Related: Edit stronger, faster, better with custom-built AI-powered presets

ON1’s Super Select AI feature works in a similar way, while Tack Sharp AI applies intelligent sharpening and optional noise reduction to enhance detail.

Topaz Photo AI

Topaz Labs currently sells its utilities as separate apps (which also work as plug-ins). That’s great if you just need to de-noise, sharpen, or enlarge your images. In reality, though, many photographers buy the three utilities in a bundle and then bounce between them during editing. But in what order? Is it best to enlarge an image and then remove noise and sharpen it, or do the enlarging at the end?

Topaz is currently working on a new app, Photo AI, that rolls those tools into a single interface. Its Autopilot feature looks for subjects, corrects noise, and applies sharpening in one place, with controls for adjusting those parameters. The app is currently available as a beta for owners of the Image Quality bundle with an active Photo Upgrade plan.

ai photo editing tools
Topaz Photo AI, currently in beta, combines DeNoise AI, Sharpen AI, and Gigapixel AI into a single app. Jeff Carlson

Luminar Neo

Skylum’s Luminar was one of the first products to really embrace AI technologies at its core, albeit with a confusing rollout. Luminar AI was a ground-up rewrite of Luminar 4 to center it on an AI imaging engine. The following year, Skylum released Luminar Neo, another rewrite of the app with a separate, more extensible AI base.

Now, Luminar Neo is adding extensions, taking tasks that have been spread among different apps by other vendors, and incorporating them as add-ons. Skylum recently released an HDR Merge extension for building high dynamic range photos out of several images at different exposures. Coming soon is Noiseless AI for dealing with digital noise, followed in the coming months by Upscale AI for enlarging images and AI Background Removal. In all, Skylum promises to release seven extensions in 2022.

ai photo editing tools
With the HDR Merge extension installed, Luminar Neo can now blend multiple photos shot at different exposures. Jeff Carlson

Adobe Lightroom & Lightroom Classic

Adobe Lightroom and Lightroom Classic are adding AI tools piecemeal, which fits the platform’s status as one of the original “big photo apps” (RIP, Apple Aperture). The most significant recent AI addition was the revamped Masking tool that detects skies and subjects with a single click. That feature is also incorporated into Lightroom’s adaptive presets.

ai photo editing tools
Lightroom Classic generated this mask of the fencers (highlighted in red) after a single click of the Select Subject mask tool. Jeff Carlson

It’s also worth noting that because Lightroom Classic has been one of the big players in photo editing for some time, it has the advantage of letting developers, like the ones mentioned so far, offer their tools as plug-ins. So, for example, if you primarily use Lightroom Classic but need to sharpen beyond the Detail tool’s capabilities, you can send your image directly to Topaz Sharpen AI and then get the processed version back into your library. (Lightroom desktop, the cloud-focused version, does not have a plug-in architecture.)

What does the consolidation of AI photo editing tools mean for photographers?

As photo editors, we want the latest and greatest editing tools available, even if we don’t use them all. Adding these AI-enhanced tools to larger applications puts them easily at hand for photographers everywhere. You don’t have to export a version or send it to another utility via a plug-in interface. It keeps your focus on the image.

It also helps to build brand loyalty. You may decide to use ON1 Photo RAW instead of other companies’ tools because the features you want are all in one place. (Insert any of the apps above in that scenario.) There are different levels to this, though. From the looks of the Topaz Photo AI beta, it’s not trying to replace Lightroom any time soon. But if you’re an owner of Photo AI, you’ll probably be less inclined to check out ON1’s offerings. And so on.

More subscriptions

Then there’s the cost. It’s noteworthy that companies are starting to offer subscription pricing instead of just single purchases. Adobe went all-in on subscriptions years ago, and a subscription is now the only way to get any of its products except Photoshop Elements. Luminar Neo and ON1 Photo RAW offer both subscription pricing and one-time purchase options. ON1 also sells standalone versions of its Resize AI, NoNoise AI, and Portrait AI utilities. Topaz sells its utilities outright, but you can optionally pay for a photo upgrade plan that renews each year.

ai photo editing tools
AI-enhanced photo editing tools come in many forms, from standalone apps to plugins to built-in features in platforms like Lightroom. Getty Images

Subscription pricing is great for companies because it gives them a more stable revenue stream, and they’re hopefully incentivized to keep improving their products to keep those subscribers over time. And subscriptions also encourage customers to stick with what they’re actively paying for.

For instance, I subscribe to the Adobe Creative Cloud All Apps plan and use Adobe Audition to edit audio for my podcasts. I suspect that Apple’s audio editing platform, Logic Pro, would be a better fit for me, based on my preference for editing video in Final Cut Pro over Adobe Premiere Pro, but I’m already paying for Audition. My audio-editing needs aren’t sophisticated enough for me to really explore the limits of each app, so Audition is good enough.

In the same way, subscribing to a large app gives you blanket access to its tools, including new AI features, whenever you need them. Paying $30-$70 for a single focused tool suddenly feels like a lot (even though it means the tool is there for future images that need it).

The wrap

On the other hand, investing in a large application means relying on its continued support and development. If the software stagnates or is retired (again, RIP Aperture), you’re looking at real time and effort to migrate your photos and their edits to another platform.

Right now, the tools are still available in several ways, from single-task apps to plug-ins. But AI convergence is also happening quickly.



Classic film camera review: Nikon FG, the SLR that irked everyone (but me)
https://www.popphoto.com/gear-reviews/nikon-fg-film-camera-review/
Thu, 25 Aug 2022 03:00:00 +0000
The Nikon FG film camera from above
The Nikon FG is a reasonably compact film SLR from 1982. Aaron Gold

This unloved SLR is actually one of Nikon’s most innovative film cameras. And it offers great bang for the buck today.

The post Classic film camera review: Nikon FG, the SLR that irked everyone (but me) appeared first on Popular Photography.


We may earn revenue from the products available on this page and participate in affiliate programs.

For reasons I can’t quite explain—my contrarian nature, perhaps, or the inferiority complex that comes with being a Pentax shooter—it pains me to heap praise on Nikon. It’s impossible to deny that most Nikon gear, be it film or digital, is pretty darn good. Still, any time I hear or read someone extolling the superiority of all things Nikon, I can’t help but imagine what they’d look like with a Nikkor AF-D 70-210mm zoom shoved firmly up their left nostril.

There is, however, one Nikon film camera I genuinely adore—and it just so happens to be the one that Nikonians love to hate. Fellow film friends, meet my favorite Nikon SLR: The quirky little Nikon FG 35mm camera.

Nikon FG pros:

  • Small size, lightweight
  • Great control layout
  • Works equally well in automatic and manual modes
  • Bargain price for a Nikon SLR

Nikon FG cons:

  • Unrefined feel compared to other Nikon cameras
  • No depth-of-field preview
  • Nikon fans might think you have a lousy camera and no taste

The Nikon that irked everyone

To appreciate both the FG’s eccentricities and the derision it attracts, it’s helpful to know a little about its history. The FG was the second attempt to market an entry-level camera under the Nikon brand; previously, such cameras were marketed as Nikkormats. The FG’s predecessor is the equally detested EM, a lightweight, automatic-only SLR introduced in 1979 in response to compact automatics like Pentax’s 1976 ME.

Related: Affordable analog – 10 alternatives to high-priced film cameras

The EM’s (relatively) cheap price and limited feature set clashed with Nikon’s pro-level snob appeal. Worse yet, Nikon ads touted the EM as a lightweight, low-cost, foolproof camera that delivered the same high-quality results as pro-level Nikons—something that likely did not sit well with those who had invested serious cabbage in their F2 and FE kits. One can only imagine how they felt about the snapshooting masses suddenly joining the ranks of the Nikonisti.

Enter the surprisingly sophisticated Nikon FG

The Nikon FG film camera from the front
An entry-level replacement for the Nikon EM, the FG was equally as loathed as its predecessor. Aaron Gold

Sensitive, perhaps, to the fan base’s criticisms of the EM, Nikon replaced it in 1982 with the FG, a name that implied closer kinship to other F-series SLRs. And the FG is a surprisingly sophisticated camera: Similar in size to the svelte EM, the FG adds a full manual mode (alongside the EM’s aperture-priority auto mode) and an exposure compensation dial in addition to the EM’s +2EV backlight button. Like the EM, the FG lacks a depth-of-field preview, but it does have a mirror lock-up tied to the self-timer.

Nikon FG key specs:

  • Type: 35mm manual-focus, manual-wind SLR
  • Years produced: 1982-1984
  • Built-in light meter: Yes
  • Exposure modes: Metered manual, aperture priority auto, program auto
  • Focusing aids: Horizontal split prism, microprism
  • ISO range: 25 to 3200
  • ISO setting: Manual
  • Shutter type: Metal-blade focal plane, vertical travel, electronically timed
  • Shutter speed range: 1/1000 to 1 sec (stepless in auto modes) + Bulb
  • Flash sync speed: 1/90 sec
  • Hot shoe: Yes
  • Self-timer: Yes
  • DOF preview: No
  • Mirror lock-up: Yes, with self-timer
  • Exposure compensation: Yes
  • Batteries: 2 x LR44 or S76
  • Dimensions: 5.35 x 3.46 x 2.13 inches

But it was the innovations over and above other Nikon cameras that really set the FG apart. The FG was the first Nikon camera to offer a fully-automatic “program” mode, which set both shutter speed and aperture in stepless increments. It also offers off-the-film (OTF) flash metering, a feature borrowed from the pro-level F3. (It’s worth noting that when the FG-20 replaced the FG in 1985, the Program and OTF flash features were gone, transplanted to the high-end Nikon FA.)

The 1982 price for the FG was $322, but major retailers advertised it for as low as $185 (about $560 in 2022 dollars). For comparison, retailers were getting $99 for an EM, $205 for an FE, and $435 for an F3. Competing cameras included the Canon AE-1 Program, which sold retail for $170, and the Minolta X-700 at $195.

Ingenious workaround enables new tech on old lenses

One innovative feature that Popular Photography covered in our July 1983 Nikon FG Lab Report (in which we tore the camera down to its bare frame) is the camera’s last-second metering check. When the shutter is fired in program mode, just after the lens is stopped down—and before the mirror pops up—the FG takes a meter reading to set the final shutter speed. Why? The throw of the diaphragm actuating lever is so short that the FG can’t set the aperture with perfect precision, so this final check allows the shutter speed to be fine-tuned for proper exposure. It’s a work-around, to be sure, but one that allows the FG’s program mode to work with existing AI-series lenses, many introduced half a decade before the FG showed up.
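In rough terms, the trick exploits the additivity of exposure values (EV = Av + Tv): whatever small error the short-throw aperture lever introduces, the stepless electronic shutter can absorb. Here’s a hypothetical sketch of the idea with invented numbers; it illustrates the arithmetic, not Nikon’s actual firmware logic:

```python
# Hypothetical illustration of the FG's last-second metering check.
# EV = Av + Tv, so once the camera re-meters with the lens actually
# stopped down, the stepless shutter supplies whatever Tv remains.

def final_shutter_time(scene_ev, actual_av):
    """Return the exposure time (in seconds) that completes the
    exposure once the imprecise stopped-down aperture is measured."""
    tv = scene_ev - actual_av   # remaining time value
    return 1.0 / (2 ** tv)      # Tv -> seconds, since Tv = log2(1/t)

# The scene meters at EV 13 and the camera aimed for f/8 (Av 6), but
# the lever lands at roughly Av 5.7, letting in slightly more light.
# The shutter quietly runs a bit faster to compensate:
t = final_shutter_time(13, 5.7)   # ~1/158 s instead of 1/128 s
```

Because the shutter is stepless in auto modes, it can land on any value like 1/158 s, not just the marked speeds, which is what lets the aperture be only approximately right.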

PopPhoto’s response to this nifty new Nikon was favorable. In our First Look at the camera, published in the November 1982 issue, we said:

“The camera was extremely responsive and has an accurate, nicely center-weighted metering system that gave beautifully exposed negatives and slides. In spite of its plastic exterior, the FG felt solid and reliable, with none of the ‘tinniness’ that is sometimes characteristic of cameras this small… All in all, the FG is an extremely flexible picture-making machine that is at once quite sophisticated and easy to use.”

Not good enough for the Nikonians

The Nikon FG film camera shutter button
The FG was Nikon’s first camera with a fully-automatic program mode. Aaron Gold

Unfortunately, the Nikon FG proved to be no more popular than the EM among Nikon fanatics. Pick one up and it’s easy to see why: Smaller and lighter than the FE and FM, it feels substantially less substantial, and not just because of its plastic body. The FG doesn’t have the same mirror-damping mechanism as pricier Nikons, and, like the EM before it, it employs the same Seiko MFC-E shutter used by Pentax, Minolta, and others, rather than the Nikon-designed Copal shutter. Inertia has a field day with the FG: Fire the shutter and it shudders in a way most Nikon SLRs don’t. 

The film advance is just plain weird: It has a two-piece hinged lever and a ratcheting design which allows the film to be advanced in several short strokes rather than one big one. Winding it feels like manipulating a broken finger, and when the film is fully advanced, the clutchamathingie that makes the ratcheting action work stops the lever’s travel with a most un-Nikon-like clack. Compared to the refined feel of other Nikons, the FG is more Holga than Hasselblad. It just doesn’t feel like a proper Nikon, and I’m sure that’s a big part of why it alienated the fan base.

Why I love the Nikon FG

That’s unfortunate because those unable to get past the FG’s un-Nikon-like feel are missing out on a magnificent camera. I like my SLRs small and light, and the FG is a significant three ounces lighter than the Nikon FE. While not quite as light as the Pentax M-series cameras—my favorite walk-about bodies—the FG has a better control layout: The edge of the shutter speed dial sits proud of the camera’s front edge, so you can turn it with your shutter-button finger. The exposure compensation dial can also be easily adjusted while looking through the lens.

The meter display is one of my favorites. The FG’s viewfinder has a vertical row of numbers corresponding to shutter speeds. In manual mode, red LEDs light up solid next to your selected shutter speed and flash next to the meter’s recommendation, with arrows at the top and bottom warning of over- or under-exposure. A single solid LED means you and the meter agree. In automatic mode, the LED shows the camera’s selected shutter speed, with a beeper (which can be disabled) warning of shake-prone speeds of 1/30 or less. Unlike a mechanical needle, the LED display is visible even in very low light. Even if it’s too dark to read the numbers, I find I can figure out the approximate shutter speed by the position of the LEDs.

The Nikon FG film camera from the front
Some photographers complain that the FG isn’t as refined as other Nikon SLRs. Aaron Gold

The beauty of the FG is that it works equally well in manual, semi-automatic, and fully-automatic modes, which is more than I can say for my beloved Pentax ME Super (which I find to be great as an automatic camera but lousy as a manual one). Personally, I like the FG even better than my Nikon FE, which is supposed to be the superior SLR. The FG isn’t as refined, but I find it a lot easier and faster to use. 

And, of course, the photos that come out of the FG are just as good as what an FE or FM, or even an F2 or F3, can make, because they’re all shot through those lovely Nikkor lenses. And while I don’t own any, I’m told that the lower-cost plastic-body E-series lenses, which were often bundled with the EM and FG, also do an excellent job.

And yet it’s still the Nikon that Nikonians dislike

The odd thing about the FG is that even now, forty years after its introduction, it is still reviled by some of Nikon’s fan base. The FG has a reputation for fragility, though this seems to be propagated by folks who don’t trust cameras with electronic shutters (which is a little like keeping your money in a mattress because you don’t trust banks). I’ve seen little evidence that reliability is any more of a problem for the FG than for any other electronic Nikon. In fact, having read through hectares of online reviews and forum commentary, it strikes me that most of the people who bag on the FG haven’t actually used one. Actual FG owners, what few of us there are, mostly love the li’l critter.

Still, this unfair tarnishing of the FG’s image has created a great situation for would-be Nikon shooters who are put off by price: The FG remains a bargain among manual-focus Nikon SLRs. While working FEs frequently sell in the $100 to $200 range, and FM-series cameras for even more, it’s still possible—easy, actually—to find an FG in good condition for well under a Benjamin. The same goes for the FG’s replacement, the FG-20. Though it lacks some of the FG’s features, it’s still a great (and greatly underappreciated) Nikon SLR. Of course, one still has to contend with the high price of Nikkor lenses, but the savings the FG offers over the FE should cover the cost of a lightweight Series E 50mm f/1.8, or get you most of the way to the lovely Nikkor 50mm f/1.4.

Will the FG ever get the love it deserves? 

The Nikon FG film camera logo
A used Nikon FG in working order can often be found for less than $100. Aaron Gold

I imagine that the Nikon FG will never be fully embraced by Nikon fans, and much as I would like to dismiss those who turn up their noses at it as snobs, the truth is that I cannot blame them. There is a level of polish and sophistication that one expects from a Nikon camera, and the FG doesn’t meet that standard. Shoot with a Nikon FE or FM and you can understand why they command such high prices. Shoot with an FG and you can understand why it doesn’t.

But that doesn’t change my opinion that the Nikon FG is a brilliant camera. I’ve been shooting with the FG for far longer than I’ve been writing for PopPhoto, and what my forebears at this publication said about the FG four decades ago still holds true today: It’s a flexible picture-making machine that is at once sophisticated and easy to use. Nikon fanatics may not hold the FG in high esteem, but I sure do.

Nikon FG sample images

Below you’ll find a selection of sample images from the Nikon FG. Note: All shots were hand-processed and scanned.

Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ultrafine Xtreme 400. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ultrafine Xtreme 400. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ilford FP4 Plus. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ultrafine Xtreme 400. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ilford FP4 Plus. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ultrafine Xtreme 400. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ilford FP4 Plus. Aaron Gold
Sample image, in B&W, shot with the Nikon FG film camera
Shot on Ultrafine Xtreme 400. Aaron Gold

The post Classic film camera review: Nikon FG, the SLR that irked everyone (but me) appeared first on Popular Photography.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

How photography helped me understand my Blackness https://www.popphoto.com/inspiration/rebecca-arthur-re-imagining-black-identity/ Thu, 18 Aug 2022 04:01:00 +0000 https://www.popphoto.com/?p=182638
collage by rebecca arthur of her mother
'Untitled'. Rebecca Arthur

Photographer Rebecca Arthur reflects on how their perception of the Black identity evolved during a period of powerful personal growth.

The post How photography helped me understand my Blackness appeared first on Popular Photography.


As someone who grew into a mixed-race identity, I have felt that it has been my lifelong quest to understand my Blackness. I was raised by a white mother who filled our home with books by Black authors, collected dolls that resembled me and my siblings, and praised movies like The Color Purple and Crooklyn. Yet I was one of those kids who was considered “too white” to be Black and felt a large disconnect between the identity I presented to the outside world and the one I acquired within my household. Although my identity teetered between two worlds, I always felt a certain sort of safety in my Blackness, and I’ve perpetually held comfort there—even in the face of hatred. 

While pursuing a BFA in photography at NYU’s Tisch School of the Arts, I was intrigued by themes of identity in relation to family, and how or what people define as “home.” Photography was the medium to which I turned to understand my own existence underneath these two umbrellas. And in an attempt to explore them more deeply, I photographed my family in our childhood home, studying the dynamics we held.

In this process, I came to recognize the ways in which histories could be told through individuals and their relationships within the spaces they inhabit. Without words, their presence in the environment could speak an unspoken critical language that monumentalizes their humanity. 

rebecca arthur's portrait of her sister smoking
“Catherine taking a smoke break.” Rebecca Arthur

Related: In a self-portrait series, Chinelle Rojas reclaims her identity

Re-imagining the Black identity

This work led me to develop Re-imagining the Black Identity, a project I completed during a joint Fulbright-Harriet Hale Woolley Fellowship in Paris, France. I was aiming to explore the ways Blackness functions cross-culturally. I sought to examine the kinds of safety and language that occur in other dynamics where Blackness is at the center of life experiences but exists outside of my individual context. 

From the invention of photography onward, the medium became a fundamental means of considering and contemplating our disparate identities. It was, however, simultaneously the exact mirror that, when held in the hands of a dominant gaze, projected a prejudiced viewpoint—skewing the perception of the identities we’ve claimed throughout familial seasons and histories.

In my research, I examined a set of images taken by J.T. Zealy, who worked as a photographer for Louis Agassiz, a Swiss-born American biologist, gathering materials for his study of anatomical variations unique to the African race. While the experiment ostensibly aimed to identify distinctions between African Blacks and whites, its ancillary goal was to assert the superiority of white people. 

rebecca arthur's portrait of her father shaving
“Daddy shaving.” Rebecca Arthur

However, as the medium became more accessible to Black image-makers of the time, photography aided in the reclamation of this gaze and the re-imagination of the Black identity, giving ownership and autonomy to misrepresented communities through the archival and permanent practice of image-making. 

Witnessing & experiencing the Black experience abroad

Being a Black girl who had never left America before, I was confronted with a deep un-suturing upon my arrival in France. I felt the sense of self I had developed in relation to how I felt about being Black had unstrung itself from me. I hadn’t contemplated at the genesis of the work that I would undergo a shift in how I perceived my identity within the new landscape, coming to terms with my own Americanness outside of the context of my individual Black experience.

Upon my arrival in Paris, I wanted to hide. I felt uncomfortable and longed to return to the U.S., but I felt it was important to push through that discomfort and answer some of the questions I had. I made a point of connecting with the Black and African community in Paris, sharing conversations on the subject of identity and finding similarities between our personal narratives and the new feelings I was experiencing. The conversations facilitated an exchange of mutual safety that allowed me to photograph each person in a way they had never seen themselves before.

rebecca arthur, reimagining the black identity fulbright
‘Stella in her room,’ Paris, France. Rebecca Arthur

In a place like France, participating in discourse on the subjects of race and identity does not generally take the lead in the daily life of non-Black French folk. To use art and writing to highlight topics of race in relation to the French identity in the public eye is seen as taboo. However, with deep care, I was able to fashion a mirror that reflected something familiar and invitingly unknown to those I photographed. 

That which is boundless

After returning to America, I came to hold my Blackness as sacred. Enduring the pain that followed the murders of George Floyd, Breonna Taylor, and Elijah McClain—to name a few—we bore witness to a widespread call to action and experienced a communal care for the Black community that mothered us through a period of intense violence and uncertainty. 

At this moment, and reflecting on my time in France, I developed a sense that identity, specifically in Blackness, is boundless and abstract. It is not so blatantly described in one singular photograph or story, but better understood through the slowness of intention and inherent care only truly known when a mirror of the same hand is held to it.

rebecca arthur, reimagining the black identity fulbright
“Tiffany at Châtelet-Les-Halles,” Paris, France. Rebecca Arthur

I am constantly thinking about how we come to know who we are through the creation of images. I relish that, just as Blackness is boundless, so is the affectivity that images carry. While the subject who is present may not offer all of themself to the viewer, their gaze, or lack thereof, still touches at the edges of influence and attaches itself to the viewer in a unique way.

I am interested in that attachment, and I am intrigued by this exchange of movement that occurs. It has changed the way I have come to understand the power of photography and how to use it as a tool to contemplate who we are in relation to others. And I will continue taking deep dives through the medium’s historical life and coming back to the surface with new ways to describe and think about how we want to be seen and the ways in which that exploration affects movement outside of our individual experience.

Bring on the noise: How to save high ISO files deemed ‘too noisy’ https://www.popphoto.com/how-to/ai-de-noise-software/ Sun, 14 Aug 2022 15:00:00 +0000 https://www.popphoto.com/?p=182365
An interior photo of a church with lots of visible noise from using a high ISO.
Crank that ISO, AI de-noise software is here to save the day. Shot at ISO 6400 with noise corrections made using On1 NoNoise AI. Jeff Carlson

Today's AI de-noise software is surprisingly powerful.

The post Bring on the noise: How to save high ISO files deemed ‘too noisy’ appeared first on Popular Photography.


In photography, knowing when to put the camera away is a valuable skill. Really dark situations are particularly difficult: without supplemental lighting, you can crank up the ISO to boost the camera’s light sensitivity, but that risks introducing too much distracting digital noise in the shot. Well, that was my thinking until recently. I now make photos I previously wouldn’t have attempted because of de-noise software, fueled by machine learning technology. A high ISO is no longer the compromise that it once was.

That opens up a lot of possibilities for photographers. Perhaps your camera is a few years old and doesn’t deal with noise as well as newer models. Maybe you have images in your library that you wrote off as being too noisy to process. Or you may need extremely fast shutter speeds to capture sports or other action. You can shoot with the knowledge that software will give you extra stops of exposure to play with.

Sensor sensibility

Too frequently, I run into the following circumstances. In a dark situation, I increase the ISO so I can use a fast enough shutter speed to avoid motion blur or camera shake. The higher ISO brightens the image by amplifying the signal coming off the image sensor; it doesn’t capture any more light. That amplification, however, boosts the noise right along with the signal. At higher ISO values—6400 and higher, depending on the camera—the noise can be distracting and hide detail.
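This is easy to see in a toy simulation. The numpy sketch below uses illustrative numbers (not modeled on any real sensor) to amplify a simulated dim patch the way raising ISO roughly does:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 100,000 pixels of a dim, evenly lit patch: a small true signal,
# shot noise that scales with the square root of the signal, and a fixed
# amount of sensor read noise. All numbers are illustrative.
true_signal = 20.0                      # photoelectrons per pixel
n = 100_000
shot_noise = rng.normal(0.0, np.sqrt(true_signal), n)
read_noise = rng.normal(0.0, 5.0, n)    # present regardless of the light
raw = true_signal + shot_noise + read_noise

# Raising ISO is, to a first approximation, multiplying that raw signal
# by a gain factor before it is digitized.
for gain in (1, 4, 16):                 # think ISO 400 / 1600 / 6400
    amplified = gain * raw
    print(f"gain {gain:>2}: brightness {amplified.mean():6.1f}, "
          f"noise {amplified.std():6.1f}")
```

Brightness and noise scale together, which is why gain can’t recover detail that the light never delivered.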

The other common occurrence is when I forget to turn the ISO back down after shooting at night or in the dark. The next day, in broad daylight, I end up with noisy images and unusually fast shutter speeds because the camera has to compensate for all that extra sensitivity. If I’m not paying attention to the settings while shooting, it’s easy to miss the noise when glancing at previews on the camera’s LCD. Has this happened to some of my favorite images? You bet it has.

Incidentally, this is one of those areas where buying newer gear can help. The hardware and software in today’s cameras handle noise better than in the past. My main body is a four-year-old Fujifilm X-T3 that produces perfectly tolerable noise levels at ISO 6400. That has been my ceiling for setting the ISO, but now (depending on the scene, of course) I’m comfortable pushing beyond it.

The sound of science

Noise-reduction editing features are not new, but the way we deal with noise has changed a lot in the past few years. In many photo editing apps, the de-noising controls apply algorithms that attempt to smooth out the noise, often resulting in overly soft results.
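To see why indiscriminate smoothing trades detail for cleanliness, consider this toy numpy sketch. It is not any app’s actual algorithm, just a simple box filter run over a noisy one-dimensional “scanline” that contains a hard edge:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D "scanline" with a hard edge -- think dark pews against a bright
# wall -- buried in high-ISO-style noise. Values are illustrative.
clean = np.concatenate([np.full(200, 30.0), np.full(200, 200.0)])
noisy = clean + rng.normal(0.0, 25.0, clean.size)

# Classic noise reduction: a simple moving-average (box) filter.
k = 15
smoothed = np.convolve(noisy, np.ones(k) / k, mode="same")

flat = slice(20, 180)  # a region away from the edge and the borders
print("noise std:    ", round(float(noisy[flat].std()), 1),
      "->", round(float(smoothed[flat].std()), 1))
print("sharpest step:", round(float(np.abs(np.diff(noisy)).max()), 1),
      "->", round(float(np.abs(np.diff(smoothed)).max()), 1))
```

The flat areas get much quieter, but the one-pixel edge is smeared across the width of the filter, which is where the smudgy look comes from. Machine-learning de-noisers are trained to distinguish noise from detail rather than averaging everything indiscriminately.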

A more effective approach is to use tools built on machine learning models that have processed thousands of noisy images. In “Preprocess Raw files with machine learning for cleaner-looking photos,” I wrote about DxO PureRAW 2, which applies de-noising to raw files when they’re demosaiced.

If you’re working with a JPEG or HEIC file, or a raw file that’s already gone through that processing phase, apps such as ON1 NoNoise AI (which is available as a stand-alone app/plug-in and also incorporated into ON1 Photo RAW) and Topaz DeNoise AI analyze the image’s noise pattern and use that information to correct it.

Testing various de-noise software

An interior photo of a church with lots of visible noise from using a high ISO.
Viewed as a whole, the noise isn’t terrible. Jeff Carlson

This image was shot handheld at a 1/60 sec. shutter speed and ISO 6400. I’ve adjusted the exposure to brighten the scene, but that’s made the noise more apparent, particularly when I view the image at 200% scale. The noise is especially prominent in the dark areas.

An interior photo of a church with lots of visible noise from using a high ISO.
But zoom in and you can see how noisy the image is. Jeff Carlson

Lightroom

If I apply Lightroom’s Noise Reduction controls, I can remove the noise, but everything gets smudgy (see below).

An interior photo of a church with lots of visible noise from using a high ISO.
Lightroom’s built-in tool isn’t helpful when correcting. Jeff Carlson

ON1 NoNoise AI

When I open the image in ON1 NoNoise AI, the results are striking. The noise is removed from the pews, yet they retain detail and contrast. There’s still a smoothness to them, but not in the same way Lightroom rendered them. This is also the default interpretation, so I could manipulate the Noise Reduction and Sharpening sliders to fine-tune the effect. Keep in mind, too, that we’re pixel-peeping at 200%; the full corrected image looks good.

An interior photo of a church with lots of visible noise from using a high ISO.
Compare the original with the corrected version of the image using the preview slider in ON1 NoNoise AI. Jeff Carlson

Looking at the detail in the center of the photo also reveals how much noise reduction is being applied. Again, this is at 200%, so in this view the statues seem almost plastic. At 100% you can see the noise reduction and the statues look better.

An interior photo of a church with lots of visible noise from using a high ISO.
Detail at the middle of the frame. Jeff Carlson

Topaz DeNoise AI

When I run the same photo through Topaz DeNoise AI, you can see that the software is using what appears to be object recognition to adjust the de-noise correction in separate areas—in this case not as successfully. The cream wall in the back becomes nice and smooth as if it was shot at a low ISO, but the marble at the front is still noisy.

An interior photo of a church with lots of visible noise from using a high ISO.
Topaz DeNoise AI’s default processing on this image ends up fairly scattershot. Jeff Carlson

Bring the noise

As always, your mileage will vary depending on the image, the amount of noise, and other factors. I’m not here to pit these two apps against each other (you can do that yourself—both offer free trial versions that you can test on your own images).

What I want to get across is two things. One, AI is making sizable improvements in how noise reduction is handled. And because AI models are always being fed new data, they tend to improve over time.

But more important is this: Dealing with image noise is no longer the hurdle it once was. A noisy image isn’t automatically headed for the trash bin. Knowing that you can overcome noise easily in software makes you rethink what’s possible when you’re behind the camera. So try capturing a dark scene at a very high ISO, where before you may have just put the camera away.

And don’t be like me and forget to reset the ISO after shooting at high values the night before, even if the software can help you fix the noise.

Edit stronger, faster, better with custom-built AI-powered presets https://www.popphoto.com/how-to/ai-powered-presets/ Fri, 29 Jul 2022 01:17:32 +0000 https://www.popphoto.com/?p=180624
A lighthouse photo with a purple filter applied
The editing platform Luminar Neo offers plenty of AI-powered sky replacement presets. Jeff Carlson

Good old-fashioned presets are more powerful when combined with AI-assisted subject masking.

The post Edit stronger, faster, better with custom-built AI-powered presets appeared first on Popular Photography.


It’s time to confess one of my biases. I’ve traditionally looked down on presets in photo editing software.

I get their utility. With one click you can apply a specific look without messing with sliders or knowing the specifics of how editing controls work. There’s certainly appeal in that, particularly for novice photo editors. And selling presets has become a vector for established photographers to make a little money on the side, or have something to give away in exchange for a newsletter sign-up or other merchandise. (I’m guilty of this too. I made some Luminar 4 presets to go along with a book I wrote years ago.)

I’ve just never seen the value in making my photos look like someone else’s. More often than not, the way I edit a photo depends on what the image itself demands. 

And then I saw the light: presets are not shortcuts, per se; they’re automation. Yes, you can make your photos look like those of your favorite YouTube personality, but a better alternative is to create your own presets that perform repetitive editing actions for you with one click.

For instance, perhaps in all of your portrait photos, you reduce Clarity to soften skin, add a faint vignette around the edges, and boost the shadows. A preset that makes those edits automatically saves you from manipulating a handful of controls to get the same effect each time. In many editing apps, presets affect those sliders directly, so if those shadows end up too bright, you can just knock down the value that the preset applied.
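If it helps to have a mental model: a preset is essentially a stored bundle of slider values applied in one step. The slider names and numbers in this sketch are hypothetical, not any app’s actual API:

```python
# A preset modeled as a stored bundle of slider values. The names and
# numbers here are hypothetical, not taken from any real editing app.
PORTRAIT_PRESET = {"clarity": -25, "vignette": -15, "shadows": 20}

def apply_preset(edits: dict, preset: dict) -> dict:
    """Return a new edit state with the preset's slider values set."""
    return {**edits, **preset}

edits = apply_preset({}, PORTRAIT_PRESET)

# Because the preset only set slider values, each one stays adjustable:
edits["shadows"] -= 10   # the shadows came out too bright for this shot
print(edits)
```

Because the preset merely sets values, nothing is baked in; every slider remains adjustable afterward.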

The downside is that a preset affects the entire image. Perhaps you do want to open up the shadows in the background, but not so much that you’re losing detail in the subject’s face. Well, then you’re creating masks for the subject or the background and manipulating those areas independently…and there goes the time you saved by starting with a preset in the first place.

Regular readers of this column no doubt know where this is headed. AI-assisted features that identify the content of images are making their way into presets, allowing you to target different areas automatically. Lightroom Classic and Lightroom desktop recently introduced Adaptive Presets that capitalize on the intelligent masking features in the most recent release. Luminar Neo and Luminar AI incorporate this type of smart selection because they’re both AI-focused at their cores.

Lightroom Adaptive Presets

A photo of a statue against a blue sky
An unedited image. Lightroom’s “Adaptive: Sky” presets let you adjust the look of the sky with a few clicks. And the “Adaptive: Subject” presets do the same for whatever Adobe deems to be the main subject, in this case, the statue. Jeff Carlson

Related: Testing 3 popular AI-powered sky replacement tools

Lightroom Classic and Lightroom desktop include two new groups of presets, “Adaptive: Sky” and “Adaptive: Subject.” When I apply the Sunset sky preset to an unedited photo, the app identifies the sky using its Select Sky mask tool and applies adjustments (specifically to Tint, Clarity, Saturation, and Noise) only to the masked area.

A photo of a statue against a purple sky
Only the area that Lightroom Classic identified as the sky is adjusted after choosing the “Adaptive: Sky Sunset” preset. Jeff Carlson

Similarly, if I click the “Adaptive: Subject Pop” preset, the app selects what it thinks is the subject and applies the correction, in this case, small nudges to Exposure, Texture, and Clarity.

A Lightroom mask on a statue.
“Adaptive: Subject Pop” selects what Lightroom believes to be the main subject of an image. Jeff Carlson

Depending on the image, that might be all the edits you want to make. Or you can build on those adjustments.

A Lightroom mask on a statue.
The final image with AI-powered presets applied to both the sky and the statue. Jeff Carlson

Related: ‘Photoshop on the Web’ will soon be free for anyone to use

Now let’s go back to the suggested portrait edits mentioned above. I can apply a subtle vignette to the entire image, switch to the Masking tool and create a new “Select Subject” mask for the people in the shot. With that active, I increase the Shadows value a little and reduce Clarity to lightly soften the subjects.

A photo of a couple
Increasing Shadows and bringing down Clarity brightens and softens the subjects’ skin in this portrait. Jeff Carlson

Since this photo is part of a portrait session, I have many similar shots. Instead of selecting the subject every time, I’ll click the “Add New Presets” button in the Presets panel, make sure the Masking option is enabled, give it a name and click Create. With that created, for subsequent photos I can choose the new preset to apply those edits. Even if it’s a preset that applies only to this photo shoot, that can still save a lot of time. 

Lightroom presets
Make sure the Masking option is turned on when creating a new adaptive preset; by default it’s deselected.

Luminar Presets

When Luminar Neo and Luminar AI open an image, they both scan the photo for contents, identifying subjects and skies even before any edits have been made. When you apply one of the presets built into the apps, the edits might include adjustments to specific areas. 

A lighthouse photo
Luminar Neo offers a variety of sky-replacement presets. Jeff Carlson

For an extreme example, in Luminar Neo’s Presets module, the “Sunsets Collection” includes a Toscana preset that applies Color, Details, and Enhance AI settings that affect the entire image. But it also uses the Sky AI tool to swap in an entirely new sky.

The portrait editing tools in Luminar by default fall into this category, because they look for faces and bodies and make adjustments, such as skin smoothing and eye enhancement, to only those areas. Creating a new user preset with one of the AI-based tools targets the applicable sections.

A lighthouse photo with a purple filter applied
The Toscana preset in Luminar Neo is a good example of how a preset can affect a specific area of the image, replacing the sky using the Sky AI tool. Jeff Carlson

Preset Choices

The Luminar and Lightroom apps also use some AI detection to recommend existing presets based on the content of the photo you’re editing, although I find the choices to be hit or miss. Lightroom gathers suggestions based on presets that its users have shared, grouped into categories such as Subtle, Strong, and B&W. They tend to run the gamut of effects and color treatments, and for me that feels more like trying to put my image into someone else’s style.

Instead, I’ll stick to presets’ secret weapon: creating my own to automate the edits I’d otherwise make by hand, more slowly.

Outsmart your iPhone camera’s overzealous AI https://www.popphoto.com/how-to/outsmart-iphone-overzealous-ai/ Thu, 24 Mar 2022 20:55:29 +0000 https://www.popphoto.com/?p=166350
A green iPhone camera on a red background
Dan Bracaglia

Apps like Halide and Camera+ make it easy to bypass your smartphone's computational wizardry for more natural-looking photos.

The post Outsmart your iPhone camera’s overzealous AI appeared first on Popular Photography.


Last weekend The New Yorker published an essay by Kyle Chayka with a headline guaranteed to pique my interest and raise my hackles: “Have iPhone Cameras Become Too Smart?” (March 18, 2022).

Aside from being a prime example of Betteridge’s Law of Headlines, it feeds into the idea that computational photography is a threat to photographers or is somehow ruining photography. The subhead renders the verdict in the way that eye-catching headlines do: “Apple’s newest smartphone models use machine learning to make every image look professionally taken. That doesn’t mean the photos are good.”

A bench on a beach with a blue sky.
This image was shot on an iPhone 13 Pro using the Halide app and saved as a Raw file. It was then processed in Adobe Camera Raw. Jeff Carlson

The implication there, and a thrust of the article, is that machine learning is creating bad images. It’s an example of a type of nostalgic fear contagion that’s increasing as more computational photography technologies assist in making images: The machines are gaining more control, algorithms are making the decisions we used to make, and my iPhone 7/DSLR/film SLR/Brownie took better photos. All wrapped in the notion that “real” photographers, professional photographers, would never dabble with such sorcery.

A bench on a beach with a blue sky.
Here’s the same scene shot using the native iPhone camera app, straight out of camera, with all of its processing. Jeff Carlson

(Let’s set aside the fact that the phrase “That doesn’t mean the photos are good” can be applied to every technological advancement since the advent of photography. A better camera can improve the technical qualities of photos, but doesn’t guarantee “good” images.)

I do highly recommend that you read the article, which makes some good points. My issue is that it ignores—or omits—an important fact: computational photography is a tool, one you can choose to use or not.

Knowing You Have Choices

A sandy beach with wood pylons.
Another iPhone 13 Pro photo, captured straight out of camera. Jeff Carlson

Related: Meet Apple’s new flagship iPhone 13 Pro & Pro Max

To summarize, Chayka’s argument is that the machine learning features of the iPhone are creating photos that are “odd and uncanny,” and that on his iPhone 12 Pro the “digital manipulations are aggressive and unsolicited.” He’s talking about Deep Fusion and other features that record multiple exposures of the scene in milliseconds, adjust specific areas based on their content such as skies or faces, and fuse it all together to create a final image. The photographer just taps the shutter button and sees the end result, without needing to know any of the technical elements such as shutter speed, aperture, or ISO.
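Apple’s pipeline does far more than this (frame alignment, per-region weighting, and so on), but the basic payoff of fusing several exposures can be sketched in a few lines of numpy. Everything here is simulated:

```python
import numpy as np

rng = np.random.default_rng(7)

# A toy "true scene" and nine noisy exposures of it. Real pipelines like
# Deep Fusion also align frames and weight regions; plain averaging is
# the simplest possible stand-in for the fusion step.
scene = rng.uniform(50.0, 200.0, (64, 64))
frames = [scene + rng.normal(0.0, 20.0, scene.shape) for _ in range(9)]

single = frames[0]
fused = np.mean(frames, axis=0)

print("error of one frame:  ", round(float((single - scene).std()), 1))
print("error of fused stack:", round(float((fused - scene).std()), 1))
```

Averaging nine frames cuts the random noise to roughly a third of a single frame’s, while the underlying scene is untouched.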

An underexposed photo of a sandy beach with wood pylons.
Here’s the same angle (slightly askew) captured using the Halide app and saved as a raw file, unedited. Jeff Carlson

You can easily bypass those features by using a third-party app such as Halide or Camera+, which can shoot using manual controls and save the images in JPEG or raw format. Some of the apps’ features can take advantage of the iPhone’s native image processing, but you’re not required to use them. The only manual control not available is aperture because each compact iPhone lens has a fixed aperture value.

That fixed aperture is also why the iPhone includes Portrait Mode, which detects the subject and artificially blurs the background to simulate the shallow depth of field created by shooting with a fast lens at f/1.8 or wider. The small optics can’t replicate it, so Apple (and other smartphone developers) turned to software to create the effect. The first implementations of Portrait Mode often showed noticeable artifacts, but the technology has improved over the last half-decade to the point where it’s not always apparent that the mode was used.

But, again, it’s the photographer’s choice whether to use it. Portrait Mode is just another tool. If you don’t like the look of Portrait Mode, you can switch to a DSLR or mirrorless camera with a decent lens.

A sandy beach with wood pylons.
The same Halide raw photo, quickly edited in Adobe Lightroom. Jeff Carlson

Algorithmic Choices

More apt is the notion that the iPhone’s processing creates a specific look, identifying it as an iPhone shot. Some images can appear to have exaggerated dynamic range, but that’s nothing like the early exposure blending processing that created HDR (high dynamic range) photos where no shadow was left un-brightened.

Each system has its own look. Apple’s processing, to my eye, tends to be more naturalistic, retaining darks while avoiding blown-out areas in scenes that would otherwise be tricky for a DSLR. Google’s processing tends to lean more toward exposing the entire scene with plenty of light. These are choices made by the companies’ engineers when applying the algorithms that dictate how the images are developed.

A lake scene with a blue sky and tree in the foreground.
The iPhone 13 Pro retains blacks in the shadows of the tree, the shaded portions of the building on the pier, and the darker blue of the sky at the top. Jeff Carlson

The same applies to traditional camera manufacturers: Fujifilm, Canon, Nikon, and Sony cameras all have their own “JPEG look,” which is often the reason photographers choose a particular system. In fact, Chayka acknowledges this when reminiscing over “…the pristine Leica camera photo shot with a fixed lens, or the Polaroid instant snapshot with its spotty exposure.”

The article really wants to cast the iPhone’s image quality as some unnatural synthetic version of reality, photographs that “…are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.” That’s a lovely turn of phrase, but it comes at the end of talking about the iPhone’s Photographic Styles feature that’s designed to give the photographer more control over the processing. If you prefer images to be warmer, you can choose to increase the warmth and choose that style when shooting.

A lake scene with a blue sky and tree in the foreground.
The Pixel 6 Pro differs slightly in this shot, opening up more image detail in the building and, to a lesser extent, the deep blue of the sky at the top. Jeff Carlson

It’s also amusing that the person mentioned at the beginning of the article didn’t like how the iPhone 12 Pro rendered photos, so “Lately she’s taken to carrying a Pixel, from Google’s line of smartphones, for the sole purpose of taking pictures.”

The Pixel employs the same types of computational photography as the iPhone. Presumably, this person prefers the look of the Pixel over the iPhone, which is completely valid. It’s their choice.

Choosing with the Masses

A green iPhone camera on a red background
Computational photography is a tool, one you can choose to use or not. Dan Bracaglia

I think the larger issue with the iPhone is that most owners don’t know they have a choice to use anything other than Apple’s Camera app. The path to using the default option is designed to be smooth; in addition to prominent placement on the home screen, you can launch it directly from an icon on the lock screen or just swipe from right to left when the phone is locked. The act of taking a photo is literally “point and shoot.”

More important, for millions of people, the photos it creates are exactly what they’re looking for. The iPhone creates images that capture important moments or silly snapshots or any of the unlimited types of scenes that people pull out their phones to record. And computational photography makes a higher number of those images decent.

Of course, not every shot is going to be “good,” but that applies to every camera. We choose which tools to use for our photography, and that includes computational photography as much as cameras, lenses, and capture settings.

The post Outsmart your iPhone camera’s overzealous AI appeared first on Popular Photography.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

How to use artificial intelligence to tag and keyword photos for better organization https://www.popphoto.com/how-to/tag-and-organize-photos-with-ai/ Thu, 10 Mar 2022 13:00:00 +0000 https://www.popphoto.com/?p=164783
A woman with glasses and the reflection of a computer screen in her lenses.
Getty Images

Tagging images with keywords is time-consuming; here's how AI can help shoulder some of the weight of this oh-so-dull task.

The post How to use artificial intelligence to tag and keyword photos for better organization appeared first on Popular Photography.


Computational photography technologies aim to automate tasks that are time-consuming or uninspiring: adjusting the lighting in a scene, replacing a flat sky, culling hundreds of similar photos. But for a lot of photographers, assigning keywords and writing text descriptions is dull enough to make even those chores seem thrilling.

When we look at a photo, the image is supposed to speak for itself. And yet it can’t in so many ways. We work with libraries of thousands of digital images, so there’s no guarantee that a particular photo will rise to the surface when we’re scanning through screenfuls of thumbnails. But AI can assist.

Keywords, terms, descriptors, phrases, expressions…

I can’t overemphasize the advantages of applying keywords to images. How many times have you found yourself scrolling through your photos, trying to recall when the ones you want were shot? How often have you scrolled right past them, or realized they’re stored in another location? If those images contained keywords, you could often find the shots in moments.

The challenge is tagging the photos at the outset.

It seems to me that people fall on the far ends of the keywording spectrum. On one side is a hyper-descriptive approach, where the idea is to apply as many terms as possible to describe the contents of an image. These can branch into hierarchies and subcategories and related concepts and all sorts of fascinating but arcane miscellany.

On the other side is where I suspect most people reside: keywords are a time-consuming waste of effort. Photographers want to edit, not categorize!

This is where AI technologies are helping. Many apps use image detection to determine the contents of photos and use that data when you perform a search. 
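Conceptually, these apps maintain an internal index that maps each photo to the labels an image-recognition model produced, and a search is just a lookup against that index. Here is a minimal sketch in Python; the filenames and tags are invented for illustration, not taken from any real app:

```python
# Hypothetical tag index, as an image-recognition model might build it.
photo_tags = {
    "IMG_0101.jpg": {"sunflower", "field", "summer"},
    "IMG_0102.jpg": {"tater tots", "food"},
    "IMG_0203.jpg": {"sunflower", "bee"},
}

def search(term):
    """Return the photos whose detected tags include the search term."""
    return sorted(name for name, tags in photo_tags.items() if term in tags)

print(search("sunflower"))  # ['IMG_0101.jpg', 'IMG_0203.jpg']
```

The real apps keep this index internal, which is why the matches feel like magic when they work, and like a misfire when tater tots show up in a flower search.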

A screenshot of the AI features in Apple Photos
Apple Photos found photos of sunflowers…and tater tots. Jeff Carlson

Related: Computational photography, explained: The next age of image-making is already here

For example, in Apple Photos, typing “sunflower” brings up images in my library that contain sunflowers (and, inexplicably, a snapshot of tater tots). In each of these cases, I haven’t assigned a specific keyword to the images.

Similarly, Lightroom desktop (the newer app, not Lightroom Classic) takes advantage of Adobe Sensei technology to suggest results when I type “sunflower” in the Search field. Although some of my images are assigned keywords (at the top of the results list), it also suggested “Sunflower Sunset” as a term.

A screenshot of the AI features in Apple Photos
I never added a “sunflower” keyword to this image, as you can see in the Info panel, but Photos recognizes the flower in it. Jeff Carlson

That’s helpful, but the implementation is also fairly opaque. Lightroom and Photos are accessing their own internal data rather than creating keywords that you can view. 

What if you don’t use either of those apps? Perhaps your library is in Lightroom Classic or it exists in folder hierarchies you’ve created on disk?

Creating keywords with Excire Foto

I took two tools from Excire for a quick spin to see what they would do. Excire Foto is a standalone app that performs image recognition on photos and generates exactly the kind of metadata I’m talking about. Excire Search 2 does the same, just as a Lightroom Classic plug-in.

I loaded 895 images into Excire Foto, which it scanned and tagged in just a couple of minutes. It did a great job of creating keywords to describe the images; with people, for instance, it differentiates between adults and children. You can add or remove keywords and then save them back to the image or in sidecar files for RAW images.

Excire Foto screenshot of AI tools.
Excire Foto analyzed the selected image and came up with keywords that describe aspects of the photo. Jeff Carlson

So if the thought of adding keywords makes you want to stand up and do pretty much anything else, you can now get some of the benefits of keywording without doing the grunt work. 
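If you'd rather own that metadata yourself, keywords like these can also be stored in an XMP sidecar file, which most RAW-aware apps read. Below is a bare-bones sketch of generating one in Python; the keyword list is invented, and real tools write much richer metadata than this:

```python
def xmp_sidecar(keywords):
    """Build a minimal XMP sidecar embedding keywords as dc:subject terms."""
    items = "\n".join(f"      <rdf:li>{kw}</rdf:li>" for kw in keywords)
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
{items}
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

sidecar = xmp_sidecar(["adult", "child", "beach"])
# Save next to the RAW file, e.g. DSC_0042.xmp alongside DSC_0042.NEF.
```

The dc:subject bag is the standard home for keywords in XMP, so a sidecar like this is readable by Lightroom Classic and similar apps.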

Generating ‘alt text’ for images

Text isn’t just for applying keywords and searching for photos. Many people who are blind or visually impaired still encounter images online, relying on screen reader technology to read the content aloud. So it’s important, when sharing images, to include alternative text that describes their content whenever possible.
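On the web, that description lives in the img element's alt attribute, which screen readers speak aloud in place of the picture. A quick sketch of building one safely in Python; the filename and description here are invented:

```python
from html import escape

def img_tag(src, alt):
    """Build an <img> tag whose alt text screen readers will read aloud."""
    return f'<img src="{escape(src, quote=True)}" alt="{escape(alt, quote=True)}">'

print(img_tag("leaf.jpg", "Close-up of water droplets on a green leaf"))
# <img src="leaf.jpg" alt="Close-up of water droplets on a green leaf">
```

Escaping matters because a stray quote in a caption would otherwise break the attribute.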

Screen shot of adding alt text in Instagram.
The above shows how to add alt text on Instagram. Jeff Carlson

For example, when you add an image to Instagram or Facebook, you can add alt text—though it’s not always obvious how. On Instagram, once you’ve selected a photo and have the option of writing a caption, scroll down to “Advanced Settings,” tap it, and then under “Accessibility” tap “Write Alt Text.”

However, those are additional steps, throwing up barriers that make it less likely that people will create this information.

That being said, Meta, which owns both Instagram and Facebook, is using AI to generate alt text for you. In a blog post from January 2021, the company details “How Facebook is using AI to improve photo descriptions for people who are blind or visually impaired.”

A close-up photo of water on a leaf in an image editing window.
Facebook’s automatically generated alt text did an OK job identifying what’s in the above photo. Jeff Carlson

The results can be hit or miss. The alt text for the leaf photo above is described by Facebook as “May be a closeup of nature,” which is technically accurate but not overly helpful.

When there are more specific items in the frame, the AI does a bit better. In the image below—an indulgent drone selfie—Facebook came up with “May be an image of 2 people, people standing and road.”

A B&W photo of two men holding an umbrella in an image editing window.
The alt text for this image is a bit more accurate, though the text still doesn’t quite describe the image. Jeff Carlson

Another example is work being done by Microsoft to use machine learning to create text captions. In a paper last year, researchers presented a process called VIVO (VIsual VOcabulary pretraining) for generating captions with more specificity.

So while there’s progress, there’s also still plenty of room for improvement.

Yes, Automate This Please

Photographers get angsty when faced with the notion that AI might replace them in some way, but creating keywords and writing captions and alt text doesn’t fall into the same category. This is one area where I’m certainly happy to let the machines shoulder some of the work, provided, of course, that the results are accurate.

The post How to use artificial intelligence to tag and keyword photos for better organization appeared first on Popular Photography.

Opinion: The joy of fixed-lens cameras https://www.popphoto.com/gear/joy-of-fixed-lens-cameras/ Sun, 06 Mar 2022 13:00:00 +0000 https://www.popphoto.com/?p=164333
The Fujifilm X100V sports a 35mm equivalent f/2 lens.
The Fujifilm X100V sports a 35mm equivalent f/2 lens. Fujifilm

Here's why fixed-lens, big-sensor cameras like the Fujifilm X100V, Ricoh GR III, and Leica Q2 are better than any ILC.

The post Opinion: The joy of fixed-lens cameras appeared first on Popular Photography.



I went out walking this morning, and I took my Leica Q2 Monochrom with me. Walking among the fields around my home in rural England is always interesting: the light changes often, the sky varies from hour to hour, and the crops in the fields show new tones as the seasons progress.

Since the Q2 is a fixed-lens camera, I didn’t need to decide which lens to take. I also own a Fujifilm X-E4, with a panoply of prime and zoom lenses, and when I shoot with that camera, choosing which lens or lenses will be ideal for the photos I plan to take can be complex. And my camera bag can end up quite heavy. With the Q2, there’s nothing additional to carry.

Less is more

There are a handful of high-quality fixed-lens cameras currently available, such as Fujifilm’s X100V, Ricoh’s GR III and GR IIIx, or Leica’s Q2 and Q2 Monochrom. With prices around $900 (Ricoh), $1,400 (Fujifilm), $5,600 (Leica Q2), and $6,200 (Q2 Monochrom), these are not inexpensive travel cameras but very capable modern mirrorless devices. Many people see fixed-lens cameras as too limiting, fearing that having only one lens will prevent them from taking diverse types of photos. But shooting with a fixed-lens camera can actually be liberating.

The Ricoh GR IIIx has a 40mm equivalent f/2.8 lens.
The Ricoh GR IIIx has a 40mm equivalent f/2.8 lens. Ricoh/Pentax

Limitations spur creativity

The limitation of a fixed lens can help spur creativity. When working with a camera like this, you become familiar with how photos shot through that lens will look. When you scan your surroundings searching for subjects to photograph, you automatically scale them with the camera’s lens in mind; you learn to see the world as that lens sees it. Instead of thinking that you might want to use a different lens or focal length for a photo, you work with this limitation. You can awaken your beginner’s mind, freeing yourself from technical concerns. As Zen teacher Shunryu Suzuki said, “In the beginner’s mind there are many possibilities, but in the expert’s there are few.”

A wide field-of-view

The cameras I’m discussing all have different focal lengths: the Ricoh GR IIIx has a 40mm equivalent lens (the GR III has a 28mm equivalent), the Fujifilm’s lens is a 35mm equivalent, and the Leica has a 28mm lens. All of these focal lengths are considered wide-angle (OK, 40mm straddles the line between wide-angle and “standard”), and if this isn’t your style of photography, they won’t work for you. Wildlife, sports, or macro photography require different lenses (though the Leica Q2 does have a macro focus mode). But for street photography, landscapes, and documenting family life, these focal lengths can be fantastic.

Cropping in-camera

A B&W photo of a field.
If you do need more reach than any of these cameras offer, you can always zoom in via “crop modes.” This image was taken using Leica’s Digital Frame Selector (crop mode) at a 75mm equivalent focal length. Kirk McElhearn

Because of these wide-angle lenses, you may have to “zoom with your feet,” and, while you can’t always get as close as you want to your subjects, you can crop. These cameras all have modern sensors with excellent dynamic range and enough megapixels for most photographers: the Ricoh has a 24-megapixel APS-C sensor, the Fujifilm has a 26-megapixel APS-C sensor, and the Leica has a 47-megapixel full-frame sensor in both the color and monochrome models.

All of these cameras offer digital zoom features, which let you crop live, viewing different “virtual” focal lengths in the viewfinder or LCD. On the Fujifilm X100V, you have crops at 50mm and 70mm equivalents, and the Leica Q2 crops to 35mm, 50mm, and 75mm equivalents. The GR III crops at 35mm and 70mm equivalents and the GR IIIx crops at 50mm and 70mm equivalents. These features are compositional assistants: if you shoot RAW, you still retain the full image the lens captures, and the guides help you compose photos that you might later crop.
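Because these crops are purely digital, you can estimate how many megapixels each mode leaves: resolution falls with the square of the focal-length ratio. A quick back-of-the-envelope sketch in Python, using the Leica Q2's 47.3MP native resolution behind its 28mm lens:

```python
def cropped_megapixels(native_mp, native_focal, crop_focal):
    """Megapixels left after a digital crop to a longer equivalent focal length."""
    return native_mp * (native_focal / crop_focal) ** 2

# Leica Q2: 47.3MP full-frame sensor behind a fixed 28mm lens.
for f in (35, 50, 75):
    print(f"{f}mm crop ≈ {cropped_megapixels(47.3, 28, f):.1f}MP")
```

The results land close to Leica's published crop-mode resolutions, and they show why the longest crop is best saved for images you won't print large.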

A B&W photo of a field.
When you open the above image in a RAW processor, you’ll notice Leica saves the entire frame, not just the cropped portion. Kirk McElhearn

Lenses built for their sensors

Another advantage of a fixed-lens camera is that the lens is designed specifically for the camera, not just for a manufacturer’s camera mount. It can sit closer to the sensor, saving a lot of space and allowing the lens to be optimized for the sensor. Both the Ricoh and Fujifilm cameras are incredibly compact for their quality; the former easily fits in a jacket pocket. The GR IIIx has an f/2.8 lens, and the X100V has an f/2 lens; not the fastest, but certainly not slow. The Leica Q2 series is more substantial: its f/1.7 lens takes up a lot of space, and it’s much heavier; but it’s also the toughest-built camera of the bunch.

An end to G.A.S.

Also, if you use a fixed-lens camera, you won’t develop the gear acquisition syndrome that comes with interchangeable-lens cameras, where there is always the temptation to buy just one more lens. Sure, there are accessories you can buy, such as cases, straps, grips, and thumb rests—and both the Ricoh and Fujifilm cameras offer “converters,” lenses you screw onto the camera to shoot at longer and wider focal lengths. But you won’t be constantly eyeing newer, faster lenses and wondering if you really need them.

The Leica Q2 Monochrom has a fixed 28mm f/1.7 lens.
The Leica Q2 Monochrom has a fixed 28mm f/1.7 lens. Leica

The wrap

Yes, there are limitations with a fixed-lens camera. And, as a photographer, you learn to work within them. Ernst Haas once said, “There is only you and your camera. The limitations in your photography are in yourself, for what we see is what we are.”

Ultimately, we have to be comfortable with the camera we choose. A fixed-lens camera isn’t for everyone, and, particularly, with the cost of the Leica, many people will balk at buying a camera with just one focal length. But if your photography suits this type of camera, it’s worth considering ditching the ILC and using a fixed-lens camera instead. 

The post Opinion: The joy of fixed-lens cameras appeared first on Popular Photography.

The promise and difficulty of AI-enhanced photo editing https://www.popphoto.com/how-to/luminar-neo-ai-powered-photo-editing/ Thu, 24 Feb 2022 22:06:29 +0000 https://www.popphoto.com/?p=163334
Photo editing platform Luminar Neo's new "Relight AI" feature
Jeff Carlson

We tested out Luminar Neo's new AI-powered 'Relight' tool to see if it can really improve our shots. The results are, well, mixed.

The post The promise and difficulty of AI-enhanced photo editing appeared first on Popular Photography.


Several years ago, an executive at Skylum (the makers of Luminar editing software) told me the company was aggressively hiring hotshot machine-learning programmers as part of a push to infuse Luminar with AI features. It was my first glimpse at the importance of using AI to stand apart from other photo editing apps. Now, Skylum has just released Luminar Neo, the newest incarnation of its AI-based editor.

One of the new features I’ve most wanted to explore is “Relight AI,” which is emblematic of what AI technologies can do for photo editing. Imagine being able to adjust the lighting in a scene based on the items the software identifies, adding light to foreground objects, and controlling the depth of the adjustment as if the image were rendered in 3D.

To be upfront, I’m focusing just on the Relight AI feature, not reviewing Luminar Neo as a whole. The app has only recently been released and, in my experience so far, still has rough edges and is missing some basic features.

Why ‘Relight’?

A lot of photo editing we do is relighting, from adjusting an image’s overall exposure to dodging and burning specific areas to make them more or less prominent. 

But one of the core features of AI-based tools is the ability to analyze a photo and determine what’s depicted in it. When the software knows what’s in an image, it can act on that knowledge.

If a person is detected in the foreground, but they’re in the shadows, you may want to increase the exposure on them to make it look as if a strobe or reflector illuminated them. Usually, we do that with selective painting, circular or linear gradients, or making complex selections. Those methods are often time-consuming, or the effects are too general.

For example, the following photo is not only underexposed, but the tones between the foreground and background are pretty similar; we want more light on the subjects in the foreground and to create separation from the active background.

Photo editing platform Luminar Neo's new "Relight AI" feature
The exposure and depth of field are not great here. Jeff Carlson

So I can start with the obvious: make the people brighter. One option in many apps is to paint an exposure adjustment onto them. In Luminar Neo, the way to do that is to use the “Develop” tool to increase the Exposure value, then use the “Mask” feature to make the edit apply only to the subjects.
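Conceptually, a masked exposure edit is simple arithmetic: every pixel inside the mask gets multiplied by 2 for each stop of added exposure, and everything outside is left alone. Here is a toy sketch on an 8-bit grayscale grid; this is an illustration of the general technique, not Luminar's actual implementation:

```python
def masked_exposure(image, mask, stops):
    """Brighten masked pixels by `stops` stops; each stop doubles the value."""
    gain = 2 ** stops
    return [
        [min(255, round(px * gain)) if selected else px
         for px, selected in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

image = [[40, 200],
         [40, 40]]
mask = [[True, False],
        [True, False]]  # select the left column, like painting over the subjects
print(masked_exposure(image, mask, 1))  # [[80, 200], [80, 40]]
```

Real editors do this in a linear color space and feather the mask edges; a hard cutoff like this one is exactly what produces halos around the selection.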

Photo editing platform Luminar Neo's new "Relight AI" feature
I’ve painted a mask in a few seconds. To do it right would take several minutes of work. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Unwanted halos are easy to create when making masks. Jeff Carlson

Another option would be to apply a linear gradient that brightens the bottom half of the image and blends into the top portion, but then the ground at the left side of the frame, which is clearly farther behind the family, would be brighter too.

Ideally, you want to be the art director who asks for the foreground to be brighter and lets the software figure it out.

How Relight AI Works

The Relight AI tool lets you control the brightness of areas near the camera and areas away from the camera; it also lets you extend the depth of the effect. In our example, increasing the “Brightness Near” slider does indeed light up the family and the railing, and even adjusts the background a little to smooth the transition between what Luminar Neo has determined to be the foreground and background.

Photo editing platform Luminar Neo's new "Relight AI" feature
The image with only “Brightness Near” applied in Relight AI. Jeff Carlson

The photo is already much closer to what I intended, and I’ve moved a single slider. I can also lower the “Brightness Far” slider to make the entire background recede. The “Depth” control balances the other two values (I’ll get back to Depth shortly).

Photo editing platform Luminar Neo's new "Relight AI" feature
The background is now darker, creating more separation from the foreground elements. Jeff Carlson

Depending on how the effect applies, the “Dehalo” control under Advanced Settings can smooth the transition around the foreground elements, such as the people’s hair. You can also make the near and far areas warmer or cooler using the “Warmth” sliders.
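Skylum doesn't document Relight AI's internals, but you can picture the two brightness sliders as a per-pixel gain interpolated along an estimated depth map. The model below is purely illustrative; the names and the linear blend are my assumptions, not Luminar's code:

```python
def relight_gain(depth, brightness_near, brightness_far):
    """Blend near/far gains along depth (0.0 = closest to camera, 1.0 = farthest)."""
    return brightness_near * (1 - depth) + brightness_far * depth

def relight_pixel(value, depth, brightness_near=1.5, brightness_far=0.7):
    """Apply the depth-dependent gain to one 8-bit pixel value."""
    return min(255, round(value * relight_gain(depth, brightness_near, brightness_far)))

print(relight_pixel(100, depth=0.0))  # 150: foreground brightened
print(relight_pixel(100, depth=1.0))  # 70: background darkened
```

In this picture, the Depth slider would reshape how quickly the gain transitions between those two endpoints.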

What about photos without people?

OK, photos with people are important, but also low-hanging fruit for AI. Humans get special treatment because often a person detected in the foreground is going to be the subject of the photo. What if an image doesn’t include a person?

In this next example, I want to keep the color in the sky and the silhouettes of the building but brighten the foreground. I’m going to ratchet Brightness Near all the way to 100 to exaggerate the effect so we can get an idea of where Luminar is identifying objects.

Photo editing platform Luminar Neo's new "Relight AI" feature
The original image is too dark. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Increasing “Brightness Near” reveals what Luminar thinks are foreground subjects. Jeff Carlson

We can see that the plants in the immediate foreground are lit up, as well as the main building. Luminar protected the sky in the background to the left of the building and didn’t touch the more distant building on the right. So Relight AI is clearly detecting prominent shapes.

Photo editing platform Luminar Neo's new "Relight AI" feature
Decreasing the “Depth” value illuminates just the bushes in the foreground. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
Relight AI is adjusting brightness based on the shapes it has detected. Taken to the extreme, it’s also introduced a halo around the nearest building. Jeff Carlson

When I reduce the Depth value, the nearest bushes are still illuminated but the buildings remain in shadow. Cranking up the Depth amount adds an unnatural halo to the main building—but the side building still holds up well.

So, overall, Relight AI isn’t bad. In these two images it achieved its main goal: letting me adjust near and far brightness quickly and easily.

Where It Struggles

This is where I hold up a large disclaimer that applies to all photos edited using AI tools: the quality of the effect depends a lot on the images themselves and what the software can detect in them.

In this photo of trees, the software doesn’t really know what it’s looking at. The bushes and groups of trees at the right and left are at about the same distance from the camera, and then the rest of the trees recede into the distance. My expectation would be that those side and foreground trees would be illuminated, and the forest would get darker the deeper it moves away from the lens.

Photo editing platform Luminar Neo's new "Relight AI" feature
This group of trees doesn’t have a typical depth perspective. Jeff Carlson

When I make dramatic changes to the near and far brightness controls, however, Relight AI falls back to gradients from top to bottom, since in many photos, the foreground is at the bottom and the background is in the middle and top areas. It looks like the prominent trees on the right and left have been partially recognized, since they don’t go as dark as the rest, but still, the effect doesn’t work here.

Photo editing platform Luminar Neo's new "Relight AI" feature
When in doubt, Relight AI applies a gradient to simulate foreground and background lighting. Jeff Carlson

Other limitations

Occasionally, with people, the tool will apply the Brightness Near value to them and stick with it, even when you adjust the Depth setting. For example, in this photo of a person in a sunflower field, darkening the background and illuminating the foreground balances the image better, picking up the leaves and sunflowers that are closest to the camera.

Photo editing platform Luminar Neo's new "Relight AI" feature
The original image. Jeff Carlson
Photo editing platform Luminar Neo's new "Relight AI" feature
With no other edits made, Relight AI improves the lighting of the subject and knocks down the brightness of the background. Jeff Carlson

When I set Depth to a low value to make the light appear very close to the camera, the flower on the left—the nearest object—gets dark, but the person’s lighting remains the same. The tool is making the assumption that a person is going to be the primary subject, regardless of the perceived depth in the image.

Photo editing platform Luminar Neo's new "Relight AI" feature
The lighting on the person is consistent even when Depth is set to almost zero. Jeff Carlson

One more limitation with the tool is the inability to adjust the mask that the AI creates. You can edit a mask of the tool’s overall effect, much as we did when painting manually earlier, but that affects only where in the image the tool’s processing will be visible. You can’t go in and help the AI identify which areas are at which depths. (This also ties into the argument I made in a previous column about not knowing what an AI tool is going to detect.)

Getting Lit in the Future

Luminar Neo’s Relight AI feature is audacious, and when it works well it can produce good results with very little effort—that’s the point. Computational photography will continue to advance and object recognition will certainly improve in the future.

And it’s also important to realize that this is just one tool. A realistic workflow would involve using features like this and then augmenting them as needed with other tools, like dodging and burning, to get the result you’re looking for.

The post The promise and difficulty of AI-enhanced photo editing appeared first on Popular Photography.
