Scroll through some of the recent stories on TechRepublic and you’ll see artificial intelligence (AI) mentioned on several occasions. AI isn’t something widely seen in action today, but the prospect of it becoming more common is definitely on the lips, and in the text editors, of technologists. Can AI disrupt the world of photography? Will it eventually replace human input when it comes to processing photos? Anything is possible, but I truly doubt it.

What’s happening with AI and photography?

In a recent blog post, a team at Google shared how its deep learning technology has been able to produce “professional quality” photo editing for a batch of landscape photos. In blind testing, pro photographers rated up to 40% of the AI-edited images as semi-pro or pro level quality. Frankly, some of the published images were quite nice, but is this enough to disrupt the world of photography? I don’t think so. Disrupt the world of photo editing? Well, it could be useful, but not disruptive. Allow me to explain.

SEE: The practical applications of AI: 6 videos (TechRepublic)

AI versus the photographer on set

Let’s think of a scenario a photographer may face. First, there’s a scheduled photo shoot with a client. In general, the client will have ideas about what they’re looking for in the session, and the photographer works closely with the client to meet those needs. We’ll just throw headshot sessions out the window and look more at product photography or scene-based photography in our example. Now close your eyes, be the client, and think of an ad showing a boardroom setting. In any scenario, it’s up to the client and photographer to determine the mood and message they want presented in that boardroom photo shoot.

Is the message “Board meetings are serious and powerful”? Or is the message “Come together and collaborate”? Both messages can be conveyed from the same scene by making a few nuanced changes to the lighting, the models’ posture, facial expressions, and gestures, or even the props used within the scene. The client may not understand those concepts, but the photographer will. In this scenario, I can’t say AI will aid in getting the client’s message across. Right now, the AI used by Google isn’t based on compositing or replacing props in a scene. A boardroom with a few bottles of water or cups of coffee does not give the same vibe as a boardroom with an open box of doughnuts and crumpled cans of energy drinks. AI isn’t ready to replace the analytical skills a photographer brings to the set of a photo shoot.

AI versus the photographer in the editing studio

In the editing process, the photographer and AI share the same data. If a client were to upload an image into an AI system, they could easily input specified parameters to assist in the editing process. Keywords, and maybe even a brief description of what the client is looking for, are handy data. The AI could analyze the keywords against the uploaded image, proceed with editing to fit the client’s needs, and display a preview within minutes or even SECONDS. The client could then approve the image and download it for use.

But what if the client doesn’t approve?

Speaking from experience, I’ve edited photos for clients who didn’t always agree with my post processing, especially when dealing with humans in the images. “Can you make my neck look slimmer?” “Can you remove that small mole under my left eye?” Those are not outlandish requests; they’re pretty common, because most people want to look their best in their photographs. On the other hand, some individuals have taken pride in, or made a name for themselves around, their imperfections. Think of former NFL player Michael Strahan. Strahan has a gap between his two front teeth. With the gazillions of dollars he’s earned as a professional football player, he could easily have gotten orthodontic care to close the gap. He didn’t. How will AI photo editing handle such situations? Sure, the machine can learn to touch up skin blemishes or imperfections, but to what extent? Will the AI understand the context of the edit or the subject matter better than a human?

SEE: Video: Why artificial intelligence will be humankind’s final invention (TechRepublic)

When I hosted a Smartphone Photographers Community, we discussed how photos that tell a story are usually the photos that capture our emotions. It may not be the photo with the best exposure or color saturation, but when you see it, you stop to admire it. For example, one of the more iconic images of US history is the raising of the US flag at Iwo Jima. This image isn’t technically sound. The exposure isn’t quite right and the contrast could be increased. But at the end of the day, WHO CARES? It’s an awesome photo capturing an emotional moment. Who’s to say that running the image through post processing wouldn’t have ruined it?


I think it would be tough for AI to know when and where to draw the line when it comes to post processing photos. Some photos need human intervention in the editing process to capture the mood and message the photo is supposed to convey, not just an adjustment of exposure or white balance. If a photo is a run-of-the-mill landscape shot, there just may be a place for AI photo editing. But even then, I’d much rather lean on the professional skills of landscape photographers, such as Trey Ratcliff or Thomas Heaton, who have a way of tugging at your emotions with their photography.

Your take

What are your thoughts about AI photo editing? Leave a comment below or tag me on Twitter with your thoughts.