As a child, I sat on the balcony of our apartment in Dhaka, overlooking the lake, and flipped through our two family photo albums. After the 1971 war for the liberation of Bangladesh, film was scarce and our camera was broken. Since we had nowhere to fix it or buy film, we had no new family photos for almost a decade – none of me until I was 8 years old.
The small, gem-like black-and-white prints of my parents and older brother were fragments of my story that, as curator Glen Helfand put it, “captured a split second of activity and nourished the stories of generations.” These images were absorbed into my soul, preserved as evidence of my family’s stories from before my birth, and now live on my children’s iPhones.
On that balcony by the lake, it was clear to me what the photos were. Later I would learn the technical language for these images: two-dimensional registrations of light on a cellulose negative, printed on paper coated with silver halide. Yet 25 years later, sitting in my studio surrounded by thermal cameras, lidar, 3D printers and AI software, I’m not so sure anymore.
Much of today’s criticism and theory is still actively discussing the past, with very little consideration of what lies ahead. For example, American artist Trevor Paglen’s 2017 exhibition explored “invisible images” and “machine vision” – images made by machines to be consumed by other machines, such as facial recognition systems. Jerry Saltz, senior art critic for New York magazine, dismissed the work as “conceptual zombie formalism” built on “smarty-pants jargon” rather than engaging seriously with its consequences. On the theory side, much of “Photography Theory,” a 451-page book often used in teaching, focuses on debating indexicality – the idea that taking a picture leaves a physical trace of the subject being photographed. This was already questionable in analog photography, and it is missing entirely in digital photography, unless data itself is to be considered a trace. Again, the book says nothing about new or emerging technologies and how they affect photography.
Emerging technologies affect every step of the photographic production process, and photographers use them to question the definition of photography itself. Is something a photograph only when it captures light? Only when it is physically printed? Only when the image is two-dimensional? Only when it is not interactive? Is a photograph the object or the information? Or is it something else entirely?
The switch to digital
Photography – from the Greek words for “light” and “drawing” – began in the 19th century as the capture of light bouncing off objects onto a chemically coated medium, such as paper or a polished plate. It evolved with the use of negatives, which allow multiple prints to be made. The stages of production – capture, processing and printing – involved starting and stopping chemical reactions on negatives and printing paper.
With analog photography, chemistry directly captures the physical reality in front of the camera. In digital photography, by contrast, image creation consists of counting the number of photons – particles of light – that hit each pixel of the sensor, using a computer to process that information and, in the case of color sensors, performing additional calculations to determine color. Only digitized bits of information are captured – there is no surface on which to leave a physical trace. Because data is far easier to process and manipulate than chemicals, digital photography allows much more variety and flexibility in image manipulation. Film theorist Mary Ann Doane has called the digital the “dream (or nightmare) of a medium without materiality, of pure abstraction embodied as a series of 0s and 1s, pure presence and absence, the code. Even light, the most ethereal of materialities, is transformed into digital form in the digital camera.”
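To make the contrast concrete, here is a minimal sketch – in Python, with every number and name invented for illustration, not taken from any real camera’s pipeline – of the counting-and-computing step described above: each sensor pixel reports a photon count behind a color filter, and software then computes a full-color image from those bare numbers.

```python
import numpy as np

# Toy illustration: a color sensor records one photon count per pixel behind
# an RGGB Bayer filter; software must then compute the missing color values.

rng = np.random.default_rng(0)
counts = rng.poisson(lam=200, size=(4, 4)).astype(float)  # photon counts per pixel

def demosaic_naive(counts):
    """Collapse each 2x2 RGGB tile into one RGB value.
    Real cameras use far more sophisticated interpolation."""
    h, w = counts.shape
    rgb = np.zeros((h // 2, w // 2, 3))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            tile = counts[i:i + 2, j:j + 2]
            rgb[i // 2, j // 2, 0] = tile[0, 0]                     # red site
            rgb[i // 2, j // 2, 1] = (tile[0, 1] + tile[1, 0]) / 2  # two green sites
            rgb[i // 2, j // 2, 2] = tile[1, 1]                     # blue site
    return rgb

image = demosaic_naive(counts)
print(image.shape)  # (2, 2, 3): one RGB value per 2x2 tile of sensor pixels
```

The point of the sketch is only that the “photograph” here is arithmetic on counts – nothing physical of the subject ever touches a surface.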
Evolving image capture
Analog photographs are made with “actinic light,” a narrow band of the electromagnetic spectrum that is visible to the naked eye and capable of triggering photochemical reactions. Over time, photographers extended capture beyond this optical range to create images from infrared, X-rays and other parts of the spectrum, as in thermography.
Irish photographer Richard Mosse uses a camera that registers heat rather than light. Traditionally used in military surveillance, the camera allows him to capture what the eye cannot see – it can detect people at night or inside tents, from up to 28 miles away. In 2015, Mosse began a body of work on the refugee crisis called “Heat Maps,” capturing what art critic Sean O’Hagan called “the white-hot misery of the migrant crisis” in monochrome images of glittering landscapes and ghostly human figures. Unlike light, heat signatures cannot distinguish facial features, rendering the human figures faceless statistics – an echo of how migrants are often treated.
Nearly any form of information can be captured and rendered as an image. Artists have worked with other inputs such as acoustic signals and particles of matter like electrons. American artist Robert Dash uses an electron microscope – which images with beams of electrons rather than waves of light – to create extremely high-magnification pictures of natural objects, such as pollen and seeds found on the property where he lives. He then photomontages them with life-size photos of the same objects, creating a surreal microscopic world. The first time I saw these photos, my eyes scanned the landscape for any clue to where the images might have been taken, to no avail.
Evolving image processing
Image processing – traditionally done during printing – is any manipulation used to create the final image, from darkening the sky in a landscape photo to applying an Instagram filter or editing in Adobe Photoshop. The recent documentary “Black Holes: The Edge of All We Know” shows an advanced version of digital image processing: it follows the creation of the first photo of a black hole, an effort that took about 250 people a decade.
The researchers built the image by computationally combining radio-frequency data collected over several years from observatories around the world, using new mathematical models. The resulting image shows a doughnut of light around the supermassive black hole at the center of the galaxy M87. It continues the photographic tradition of expanding beyond human perception, revealing previously invisible dimensions of reality and encoding them as visible knowledge – as Eadweard Muybridge did 150 years ago with his pioneering use of photography to study motion.
With the development of artificial intelligence, image processing can go a step further. Paglen, for example, generates portraits by training a model to recognize the faces of his collaborators, then using a second model that assembles random polygons to trick the first into classifying the result as a portrait. As Paglen explains, “these two programs move back and forth as they ‘evolve’ an image that the face recognition model identifies as representing that particular person.” The process creates a compelling portrait of what the machine sees.
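The back-and-forth Paglen describes can be sketched in miniature. In this hypothetical Python toy – no relation to his actual models, which are neural networks – a stand-in “recognizer” scores how closely an image matches a target, and a “generator” keeps whichever random mutations raise that score: a crude analogue of the adversarial loop.

```python
import numpy as np

# Hypothetical stand-ins for the two programs: a "recognizer" scores an
# image, and a "generator" mutates pixels, keeping any change the
# recognizer scores higher. Real systems use neural networks, not this toy.

rng = np.random.default_rng(1)
target = rng.random((8, 8))  # stand-in for "that particular person"

def recognizer(image):
    """Toy recognition score: negative mean squared distance to the target."""
    return -np.mean((image - target) ** 2)

image = rng.random((8, 8))   # the generator starts from pure noise
initial = recognizer(image)
score = initial
for _ in range(2000):
    candidate = image + rng.normal(scale=0.05, size=image.shape)
    candidate_score = recognizer(candidate)
    if candidate_score > score:  # the recognizer "accepts" this mutation
        image, score = candidate, candidate_score

print(score > initial)  # the evolved image scores higher than the noise it began as
```

The resulting picture is not a trace of any person at all, only of what the scoring model rewards – which is exactly the question such work raises about what counts as a photograph.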