What is the standard by which you judge whether a photo has been over-manipulated? Do you judge it against the reality of the original scene that was photographed? Certainly that makes sense, but it’s hard to judge that objectively if you weren’t present when the photograph was captured. Instead, it’s common to judge the modifications against the original, unaltered photograph that the camera captured—despite the fact that the camera itself tends to change the scene in subtle ways due to lens distortion, sensor design, and internal processing algorithms. A new technology trend known as computational photography, however, is changing how we approach image capture, and this may have implications for how we judge what is manipulated.
The definition of computational photography is still evolving, but I like to think of it as a shift from using a camera as a picture-making device to using it as a data-collection device. Traditionally, you form the picture in the instant you click the shutter on the camera. You might modify the exposure or coloring of the photo after the fact, but the essential characteristics of the image were defined in that initial instant. In contrast, with computational photography, you’re generally gathering as much data as you can about the scene, and then later using advanced computational techniques to process that data into the final image. That creates a much more slippery definition of an original, because what is defined at the time of capture is not necessarily a fully formed picture.
Many photographers are already experimenting with computational techniques using standard digital cameras. In fact, I would classify both high-dynamic-range (HDR) imaging and panoramic stitching as early forms of computational photography. In both cases, the photographer gathers as much data as possible—generally via multiple photographs—and then uses computational techniques to synthesize the final photograph. Generally, the goal with these two techniques is to capture the full impact of a real scene that is beyond the capabilities of the camera, either because the dynamic range is greater than the sensor can pick up or because the desired field of view is greater than the lens can see. (Admittedly, there are those who use HDR techniques to create crazy color effects, but that’s another story.)
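To make the “multiple photos in, one photo out” idea concrete, here is a minimal sketch of exposure fusion in the spirit of HDR merging. It is a toy illustration, not the algorithm any particular camera or editor uses: it assumes the bracketed frames are already aligned, treats each as a list of luminance values in 0–1, and simply weights each pixel toward whichever exposure rendered it closest to mid-gray. Real HDR pipelines add alignment, radiometric calibration, and tone mapping.

```python
import math

def exposure_weight(value, mid=0.5, sigma=0.2):
    # Gaussian-shaped weight: pixels near mid-gray are "well exposed"
    # and contribute more; blown-out or crushed pixels contribute less.
    return math.exp(-((value - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(exposures):
    """Fuse aligned bracketed exposures (lists of 0..1 luminance values)
    into one image by weighting each pixel toward the best-exposed frame."""
    fused = []
    for pixels in zip(*exposures):          # same pixel across all frames
        weights = [exposure_weight(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# A dark frame and a bright frame of the same two-pixel "scene":
dark   = [0.05, 0.50]   # shadows crushed, midtone fine
bright = [0.60, 0.95]   # shadows fine, highlight blown
merged = fuse_exposures([dark, bright])
```

Each fused pixel lands between the two source exposures, pulled toward whichever frame exposed it well—the computational step that turns several captures into one picture.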
Clearly, both HDR and panoramic photography can be applied in the honest representation of truth, but they still lead into tricky ground for photojournalism. That’s in part because we’ve been well-trained to think that any combination of multiple photos is inherently misleading. Recently, the Washington Post ran an HDR photo on its front page. The caption said that the image was “a composite created by taking several photos and combining them with computer software to transcend the visual limitations of standard photography.” Of course, calling the photo a “composite” was a red flag for many readers.
In response to the Washington Post incident, Sean Elliot, the president of the National Press Photographers Association, declared that “HDR is not appropriate for documentary photojournalism.” Notably, he came down against the technique because the newspaper “combined different moments, and thereby created an image that does not exist. The aircraft visible in the final product was not there for all of the other moments combined into the final, and that alone simply raises too many questions about the factual validity.”
Then what if the HDR photo was instead a composite of multiple cameras firing simultaneously? Wouldn’t that then represent a single moment? And, for that matter, does the NPPA have a stance on how long a single exposure can be before it stops qualifying as a single moment?
My point here isn’t to dismiss Mr. Elliot’s justification, but to point out the difficulty of defining a set of do’s and don’ts that can be applied consistently and unequivocally. And the challenge may only get greater. Shortly, a company called Lytro will ship the first mass-market light-field camera. A light-field camera dispenses with many of the standard concepts we’ve carried forward from the film days. Rather than capturing a single, complete image, it records the rays of light in a scene arriving from multiple directions.
The raw information captured by the sensor in a light-field camera isn’t entirely recognizable to the human eye as a real picture. You can check out this video from Adobe research that shows one example of what a (mostly) raw capture can look like. Software—or in-camera firmware—must be used to synthesize the final photo. That final photo is an amalgam of the data captured when the shot was taken, but the user can actually have some control over how that data is combined to produce the photo. So, for example, with the Lytro camera (which likely captures the light field in a different manner than the Adobe example), the user can choose the focal depth at the time of viewing rather than having it locked in at the moment of capture. In the future, users may be able to create 3D views or choose small shifts in perspective. Variations of the light-field camera design can also be optimized for HDR photography.
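The refocus-after-capture idea can be sketched with a toy “shift-and-add” example. To be clear, this is a hypothetical one-dimensional illustration of the general principle, not Lytro’s actual processing: it assumes the light field has been separated into sub-aperture views (one image per direction of arrival) and that out-of-focus objects shift between views in proportion to the aperture offset. Choosing the shift factor `alpha` after the fact selects which depth ends up sharp.

```python
def refocus(subapertures, alpha):
    """Shift-and-add refocus for a toy 1-D light field.

    subapertures: dict mapping aperture offset u -> list of samples
                  (one 1-D "image" per direction of arrival).
    alpha: refocus parameter; shifts each view by u * alpha before averaging,
           so objects whose view-to-view disparity matches alpha come into focus.
    """
    width = len(next(iter(subapertures.values())))
    out = []
    for x in range(width):
        vals = []
        for u, img in subapertures.items():
            shift = round(u * alpha)                  # disparity ∝ aperture offset
            xs = min(max(x + shift, 0), width - 1)    # clamp at the edges
            vals.append(img[xs])
        out.append(sum(vals) / len(vals))
    return out

# A single bright point seen from three sub-apertures; it appears one sample
# to the left or right per unit of aperture offset (disparity of 1):
views = {-1: [0, 0, 1, 0, 0],
          0: [0, 0, 0, 1, 0],
          1: [0, 0, 0, 0, 1]}
in_focus  = refocus(views, alpha=1)   # shifts cancel the disparity: sharp point
defocused = refocus(views, alpha=0)   # plain average: the point is smeared
```

With `alpha=1` the three copies of the point line up and reinforce; with `alpha=0` they land on different pixels and blur out. All the views are captured in one instant; only the choice of `alpha`—made later, in software—determines what the final photo looks like, which is exactly why “the original” becomes slippery.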
Granted, such a light-field image does still capture a single moment in time, so it at least meets the standard that Mr. Elliot used to reject the Washington Post’s HDR photo. Nevertheless, when a photo is inherently an after-the-fact, computed artifact, it may become harder to draw boundaries around what types of computation are allowable.
Ultimately, the most important criterion is the truthfulness to the original scene, regardless of what technologies or techniques were used. A good photojournalist can use his own judgment to adhere to that truth even as the technology evolves. Unfortunately, though, news organizations are left with a choice of blindly trusting the judgment of their contributors, or providing them with more specific and concrete boundaries to follow. As photographic technology evolves, those boundaries will need to be continually reevaluated and redefined.