Among the greatest challenges for those new to photography is learning the difference between the way they see and what a camera sees. Perhaps surprisingly, a camera captures both more and less. Here are some of the differences:
A camera does not have peripheral vision. It sees only what falls onto the film plane or digital sensor, which is roughly what you see when looking through the viewfinder or at the LCD viewing screen. (It’s not always exactly the same, however, as the viewfinder may show slightly less than what the camera actually captures, so things often creep in unexpectedly around the edges of the frame.) As a general rule, this lack of peripheral vision is a good thing that allows the photographer to focus more easily on the subject at hand. However, it can also make the photographer less sensitive to the broader scene or nearby activity.
The camera cannot capture the same range of light that the human eye can see. In particular, it cannot capture detail in both bright highlights/whites and deep shadows/blacks. This is especially a challenge on bright, sunny days. If you expose for the shadow details, the highlights will be blown out. If you expose to capture detail in the highlights, particularly in the sky, the darker areas of the image will go black and lose detail. This can, of course, be used creatively. And yes, High Dynamic Range (HDR) technology makes it possible to produce an image with a much greater range of light and detail than ever before, but it does so by combining multiple exposures with post-processing software on the computer.
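The multi-exposure idea behind HDR can be sketched in a few lines. Here is a minimal, illustrative exposure-fusion sketch in Python: each pixel is weighted by how close it sits to mid-gray, so the well-exposed version of every region dominates the blend. The function name and the simple mid-gray weighting are assumptions for illustration; real HDR software uses more sophisticated weighting, alignment, and tone mapping.

```python
import numpy as np

def fuse_exposures(exposures):
    """Blend a bracketed series of exposures (values in 0..1).

    Each pixel is weighted by its closeness to mid-gray (0.5),
    so clipped highlights and crushed shadows contribute little.
    Illustrative sketch only, not a production HDR pipeline.
    """
    stack = np.stack(exposures)               # shape: (n_exposures, H, W)
    weights = 1.0 - np.abs(stack - 0.5) * 2   # 1 at mid-gray, 0 at clipping
    weights = np.clip(weights, 1e-6, None)    # avoid divide-by-zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

With a dark frame that holds highlight detail and a bright frame that holds shadow detail, the fused result keeps usable tone in both regions, which is exactly what a single exposure cannot do.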
The camera isn't a very good judge of the color temperature of the light. Although automatic sensors have improved, most cameras still depend on you to tell them whether the light is natural, fluorescent, tungsten or flash, or whether you are shooting in bright sunlight, under soft overcast skies or in deep shade. Setting the white balance accordingly results in photographs with more accurate colors.
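To see why white balance is a correction software can apply, consider the classic "gray-world" assumption: the average color of a typical scene should be neutral gray, so a color cast shows up as one channel averaging higher than the others. The sketch below (function name is an illustrative assumption, not any camera's actual algorithm) rescales each channel so its average matches the overall average.

```python
import numpy as np

def gray_world_balance(img):
    """Simple gray-world white balance on an (H, W, 3) image in 0..1.

    Assumes the scene averages to neutral gray, so each channel is
    scaled until its mean matches the overall mean. Illustrative only;
    in-camera white balance presets are more sophisticated.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    scale = channel_means.mean() / channel_means
    return np.clip(img * scale, 0.0, 1.0)
```

An image shot under tungsten light, for example, averages warmer (more red) than neutral; after the rescale, the channel averages agree and the cast is reduced.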
The camera captures more than we tend to see on our own. Basically, it sees everything in a scene that is in focus, whereas our mind has learned to filter out the unwanted and focus only on certain aspects of a scene. In other words, we tend to see only what we want to see. The camera, however, also captures the ugly building behind the beautiful garden, the palm tree “growing” out of someone’s head, that bit of trash in the foreground of a scene, and the dirty plates on the table in that reunion shot of friends.
“Part of normal human development is learning to notice less than we are able to. The world is awash in details of color, form, sound—but to function, we have to ignore some of it.”
— Alexandra Horowitz, On Looking: Eleven Walks with Expert Eyes
And yet, a camera may not capture everything that we are capable of seeing. The depth of field, the zone a camera renders in sharp focus, depends upon the combination of lens focal length (wide angle vs. telephoto), aperture (f2 vs. f16) and the distance between the camera and the subject (inches vs. miles). A telephoto lens used close to a subject at f2 will have very limited depth of field; the background will likely be blurred in the resulting image—even though it was perfectly clear to the photographer.
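That relationship can be made concrete with the standard thin-lens depth-of-field formulas. The sketch below computes the near and far limits of acceptable sharpness from focal length, f-number, and subject distance; the function name is an assumption for illustration, and the 0.03 mm circle of confusion is the value commonly used for a full-frame sensor.

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness, thin-lens approximation.

    coc_mm is the circle of confusion (~0.03 mm for full frame).
    Returns distances in millimeters; far limit is infinity when the
    subject is at or beyond the hyperfocal distance.
    """
    # hyperfocal distance: focus here and everything to infinity is sharp
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far
```

Plugging in the article's example, a 200 mm lens at f2 focused 2 m away yields a zone of sharpness only about a centimeter deep, while a 24 mm lens at f16 at the same distance is sharp all the way to infinity.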
A camera has no concept of mood or emotion. That is entirely up to the photographer to capture or create.
As advanced as camera technology has become, it still requires human intervention. A camera is not motivated. It doesn't get out of the bag by itself to go out and take photographs. And it has no clue what is important in a scene. It is simply a tool. Someone must see potential in a scene, stop long enough to explore it, and tell the camera how to capture it.
What do you find to be the most challenging aspect of photography and operating a camera?