Are You Sure You Know What a Photograph Is?

Once, I thought I had a definition of photography. Today, surrounded by thermal cameras, lidar, 3D printers, and AI software, I am not so sure.
Photo-Illustration: Sam Whitney; Rashed Haq

As a child, I would sit on the balcony of our Dhaka apartment overlooking the pond and flip through our two family photo albums. After the Bangladesh liberation war in 1971, film was scarce and our camera had broken. With nowhere to get it repaired or to buy film, we had no more family photos for almost a decade. There are no photos of me until I was 8 years old.

The tiny, gemlike black-and-white prints of my parents and older brother were fragments of my history that, as curator Glen Helfand said, “captured a fraction of a second of activity and fueled narratives for generations.” These images were absorbed by my soul, stored as evidence of the stories of my family from before my birth, and are now on my kids’ iPhones.

Photograph of my mother from our family album. Photograph: Rashed Haq

On that pond-side balcony, it was apparent to me what photographs were. Later, I would be taught the technical language for those images: a two-dimensional registration of light on a cellulose negative, printed on silver halide paper. However, 25 years later, sitting in my studio surrounded by thermal cameras, lidar, 3D printers, and AI software, I am not so sure anymore.

Much of photo criticism and theory today still actively debates the past, with little consideration of what is coming. For example, the American artist Trevor Paglen’s 2017 exhibition “A Study of Invisible Images” surveyed “machine vision”—images made by machines for other machines to consume, such as facial recognition systems. Jerry Saltz, senior art critic for New York magazine, declared the work to be “conceptual zombie formalism” built on “smarty-pants jargon,” rather than engaging seriously with the implications of Paglen’s work. When it comes to theory, a large portion of Photography Theory, a 451-page book often used in teaching, focuses on debating indexicality, the idea that taking a photograph leaves a physical trace of the object that was photographed. That idea was already questionable in analog photography and is absent entirely from digital photography, unless information itself is to be considered a trace. Again, the book says nothing about new or emerging technologies and how they affect photography.

Evolving technologies affect every step of the photo production process, and photographers are using them to question the definition of photography itself. Is something a photograph only when it captures light? Only when it is physically printed? Only when the image is two-dimensional? Only when it is not interactive? Is a photograph the object or the information? Or is it something else?

Going Digital

Photography—from the Greek words photos and graphos, meaning “drawing with light”—started in the 19th century as the capture of light bouncing off objects onto a chemically coated medium, such as paper or a polished plate. This evolved with the use of negatives, allowing one to make multiple prints. The production steps of capturing, processing, and printing involved starting and stopping chemical reactions on the print paper and negatives.

With analog photography, the chemistry directly captures the physical reality in front of the camera. With digital photography, by contrast, image-making consists of counting the photons that hit each sensor pixel, using a computer to process that information, and, in the case of color sensors, doing further computation to determine the color. Only digitized bits of information are captured—there is no surface on which a physical trace is left. Because data is much easier to process and manipulate than chemicals, digital photography allows a far greater range of image manipulation. Film theorist Mary Ann Doane has said that the digital represents “the vision (or nightmare) of a medium without materiality, of pure abstraction incarnated as a series of 0s and 1s, sheer presence and absence, the code. Even light, that most diaphanous of materialities, is transformed into numerical form in the digital camera.”
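
As a rough illustration of that “further computation,” here is a minimal sketch in Python with NumPy of how color can be recovered from a Bayer-mosaic sensor, in which each pixel counts photons behind a single red, green, or blue filter and the missing channels are interpolated from neighbors. The array sizes, the RGGB layout, and the crude 3x3 averaging are my own simplifications, not any particular camera’s pipeline.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Very simplified demosaicing of an RGGB Bayer mosaic.

    `raw` is a 2D array of per-pixel photon counts; each pixel sits behind a
    red, green, or blue filter. A full RGB image is recovered by averaging
    whatever samples of each color fall in the 3x3 neighborhood of every
    pixel (a crude bilinear interpolation).
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))

    # Which color filter covers each pixel, assuming an RGGB pattern.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    neighborhood = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for channel, mask in enumerate([r_mask, g_mask, b_mask]):
        samples = np.where(mask, raw, 0.0)
        counts = mask.astype(float)
        # Sum samples and sample counts over the 3x3 neighborhood, then divide
        # to get the local average for this color channel.
        sample_sum = sum(np.roll(samples, (dy, dx), axis=(0, 1)) for dy, dx in neighborhood)
        count_sum = sum(np.roll(counts, (dy, dx), axis=(0, 1)) for dy, dx in neighborhood)
        rgb[..., channel] = sample_sum / np.maximum(count_sum, 1.0)
    return rgb

# A tiny stand-in sensor readout: Poisson-distributed photon counts on an 8x8 mosaic.
raw = np.random.poisson(lam=100, size=(8, 8)).astype(float)
print(demosaic_bilinear(raw).shape)  # (8, 8, 3)
```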

Evolving Image Capture

Analog photography captured “actinic light,” the narrow sliver of the electromagnetic spectrum that is visible to the naked eye and able to cause photochemical reactions. Over time, photographers have expanded image-making beyond this optical range, creating images from infrared, x-rays, and other parts of the spectrum, as in thermography.

Irish photographer Richard Mosse uses a camera that registers contours of heat rather than light. Traditionally used in military surveillance, the camera allows him to photograph what we cannot see—it can detect humans at night or through tents, up to 18 miles away. In 2015, Mosse produced a body of work on the refugee crisis called “Heat Maps,” capturing what art critic Sean O’Hagan called the “white-hot misery of the migrant crisis” in monochrome images of shimmering landscapes and ghostly human figures. Unlike visible light, the thermal signal cannot resolve facial features, converting human figures into faceless statistics and echoing how immigrants are often treated.

Any form of information can be captured for imaging. Artists have worked with other inputs, from acoustic signals to matter particles such as electrons and other forms of waves. The American artist Robert Dash uses an electron microscope, which relies on matter waves rather than light waves, to create extremely high-magnification images of natural objects, such as pollen or seeds found on the property where he lives. He then photo-montages these with life-sized photos of the same objects, creating a surreal, microscopic world. The first time I saw these photographs, my eyes scanned the landscape for any sign of where the images might have been taken, without success.

Evolving Image Processing

Image processing, traditionally done during the printing process, is any kind of manipulation to create the final image, from darkening the sky in a landscape photograph to using an Instagram filter or editing in Adobe Photoshop. The recent documentary Black Holes | The Edge of All We Know shows an advanced version of digital image processing. The documentary explores the process of creating the first photo of a black hole, which took 250 people about a decade to make.

Researchers constructed the image by computationally combining, with a novel mathematical model, radio-frequency data collected over many years from multiple observatories worldwide. The image shows a donut of light around the supermassive black hole at the center of the galaxy M87. It continues the photographic tradition of expanding beyond human perception, revealing previously invisible dimensions of reality and encoding them as visible knowledge, much as Eadweard Muybridge did 150 years ago with his pioneering use of photography to study motion.

With the development of artificial intelligence, the image-processing step can be taken further. Paglen, for example, generates portraits by first building a facial recognition model of a collaborator, then using a second model that assembles random images out of polygons and tries to fool the first into seeing a portrait. As Paglen explains, “these two programs go back and forth until an image ‘evolves’ that the facial recognition model identifies as a representation of that particular person.” The process creates a haunting portrait of what the machine sees.

A particular type of AI called a generative adversarial network (GAN) uses digital photographs as input data to generate new “photographs” of people or things that have never existed in the real world. I have used these models to visualize possible portraits of people who do not exist. To include a significant element of chance, these photographs are created using a GAN trained on a set of my Francis Bacon-like photographic portraits. Like a chef experimenting with different combinations of ingredients to see which version works, the AI develops a picture through experimental trials that extrapolate from different aspects of the existing images. Paglen says that this new development in vision is “more significant than the invention of photography.” While the black hole photograph extended our vision to something that exists but cannot be seen, AI-generated photos expand our vision to possibility and imagination, because they are “photographs” of things that do not exist in our physical world.
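
For readers curious what that adversarial back-and-forth looks like in code, here is a minimal sketch in Python with PyTorch. It is a toy illustration of the general GAN idea, not Paglen’s system or the actual model behind “Human Trials”: one network invents images from random noise, a second judges them against real portraits, and each improves by trying to outdo the other.

```python
import torch
import torch.nn as nn

# A deliberately tiny generator/discriminator pair. The generator maps random
# noise to a small flattened grayscale image; the discriminator tries to tell
# generated images from real ones.
latent_dim, img_pixels = 64, 32 * 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1. Discriminator step: score real portraits high, generated ones low.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Generator step: adjust so its images fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a batch of real training portraits, flattened and scaled to [-1, 1].
real_batch = torch.rand(16, img_pixels) * 2 - 1
print(train_step(real_batch))
```

The same loop, repeated over thousands of batches of real portraits, is what lets the generator gradually “evolve” images the judge can no longer distinguish from the training set.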

One of my GAN-generated photographs from the series “Human Trials.” Photograph: Rashed Haq

Evolving Image Printing

The final step in photography is generating the visual that viewers can look at and contemplate. The advent of 3D printing is enabling new possibilities for printing photographs. Three years ago, I visited the fashion photographer Nick Knight at his London studio, a large white room, bare except for a white work table covered with photographs of the model Linda Evangelista.

Knight showed me his 3D porcelain print of the model Kate Moss, with wings extending from her arms, reminiscent of religious sculptures from centuries ago. He picked this iconography because, he said, “in many ways, religious icons have been replaced by fashion images, adored by masses of people.” Knight explained that the sculpture was printed from data of her that he had captured in the studio, “which is essentially a direct mathematical and optical recording of her form.” Because photography is defined very narrowly in many people’s minds, Knight calls the 3D prints photosculptures, and now calls himself an image-maker instead of a photographer.

While the vast majority of photographs are viewed on digital screens, technologies such as augmented and mixed reality are evolving the display of photography. Ed Burtynsky included augmented reality in his Photo London exhibition in 2018, and Mat Collishaw reproduced a nineteenth-century photography exhibition through virtual reality. The recent popularity of NFTs may be an early indication that digital display will become the dominant form of “printing” over time. This will expand further with the emergence of holographic and other forms of digital displays.

Perhaps most intriguing is the prospect of “printing” directly on the brain. Historically, studies focused on reading brain waves to understand what someone is thinking. More recently, research has begun to focus on writing brain waves, inserting information directly into the brain through, for example, transcranial magnetic stimulation. This way, the artist can imagine the picture, and the “viewer” can see it in their mind, even if they are blind. If a brain-machine interface were used instead of a brain-brain interface, the imagined photo could be sent to a traditional printer.

Transhumanist Neil Harbisson is spearheading the adoption of this type of technology. Born with achromatopsia, a rare condition that left him completely colorblind, Harbisson created a new “sense” by having an antenna implanted in his skull so he could perceive visible and invisible color frequencies as audible vibrations. This not only compensated for his color blindness but let him sense beyond the human visual spectrum. Just as many artists are using the digital and printing technologies described above, they will be able to use combinations of eye, brain, camera, and computer as soon as these become affordable.

Evolving Our Understanding of Photography

“There is no un-theoretical way to see photography,” noted David Bate, professor of photography at the University of Westminster, London. How we see and understand photographs is always informed by a framework we have in mind, even if it is subconscious. The current framework is still focused on the past, which is a missed opportunity given the advances arriving in the 21st century. Theory and criticism need to move beyond today’s coordinates of the discourse. The many possible ways to capture raw data, process it, and print or disseminate the results mean that the definition of photography is no longer the one I knew as a child; it needs to be much more inclusive.

Through this century, the most prevalent forms of image capture and dissemination will include these expanded, software-powered forms of image-making. These technologies will make the resulting image continuum vast. Pushing these boundaries will let us see the world more clearly, see things we could not see before, and even see and print our thoughts. Imagine seeing pictures of celestial structures much farther from Earth—and hence further back in time—than we have seen before, giving us a look into the origins of our universe. Imagine a photojournalist using a camera that renders walls transparent to take images inside an immigration detention center, expanding our social consciousness and building empathy in new ways. Imagine a family album containing not only photographs of your grandmother but also the dreams and thoughts she wanted to share, giving you a deep intimacy with her across time. These types of images will expand how we view the world, how we see ourselves in that world, and how we construct our sense of self in the 21st century. Understanding these emerging photographic possibilities is crucial to informing the practices that will profoundly influence that evolving sense of self.

We will continue to cherish this expanded suite of image types in our physically, digitally, or cerebrally stored family albums.

Evolving My Family Album

During the coronavirus pandemic, I bought a lidar camera to make family photos—an expansion of the family album I used to look at as a child. Unlike a traditional image constructed from a single point of view, which confines its information to a flat surface, a lidar capture records depth and can therefore be viewed from multiple perspectives. These images can be rotated on the computer to see a three-dimensional view of the subject, and a selected angle can be printed in 2D. Like Nick Knight's photosculptures, they can also be 3D-printed.
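
As a rough sketch of what rotating a capture and printing a selected angle means computationally, here is a toy example in Python with NumPy. The point cloud and the chosen projection are invented for illustration; this is not the software I actually use, only the underlying idea: the capture is a set of 3D points, and each flat print is one projection of that set.

```python
import numpy as np

def rotate_and_project(points, yaw_degrees):
    """Rotate a point cloud about the vertical (y) axis and flatten it to 2D.

    `points` is an (N, 3) array of x, y, z coordinates, the kind of data a
    lidar capture produces. Choosing `yaw_degrees` is the digital equivalent
    of walking around the subject before committing to one printable view.
    """
    theta = np.radians(yaw_degrees)
    rotation = np.array([
        [np.cos(theta),  0.0, np.sin(theta)],
        [0.0,            1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    rotated = points @ rotation.T
    # Orthographic projection: keep x and y, drop depth (z) to get a flat image.
    return rotated[:, :2]

# A stand-in "capture": 1,000 points scattered over a roughly body-sized cylinder.
angles = np.random.uniform(0, 2 * np.pi, 1000)
heights = np.random.uniform(0, 1.8, 1000)
cloud = np.stack([0.3 * np.cos(angles), heights, 0.3 * np.sin(angles)], axis=1)

front_view = rotate_and_project(cloud, 0)     # one printable angle
profile_view = rotate_and_project(cloud, 90)  # another
print(front_view.shape, profile_view.shape)   # (1000, 2) (1000, 2)
```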

My lidar photograph of my wife. Photograph: Rashed Haq

These photographs evoke the vocabulary of sculpture. The light and shadow give specificity to the form, so you can feel the volume and mass of the subject. While discussing these photos with my family, my son, who has a strong visual memory, thought they were interesting but added only a little beyond the traditional photographs in the album. My daughter, who has a strong kinesthetic memory and is a dance student, felt that these body forms activate our awareness of our own bodies, a sense of their scale and physicality, as if we were now speaking the body’s language. They are definitely staying in the family album.

