


Down to Earth: Making Reality with NASA Imagery


Andrew Scheinman
The NYT called NASA’s color treatment of space photography “phony,” but how do you measure truth when looking at something you can’t even see with the naked eye?



Charles and Ray Eames’ landmark film Powers of Ten begins with a bird’s-eye view of a single square meter of Chicago. Two very mid-twentieth-century people settle in for a picnic, what our narrator exuberantly tells us is “the start of a lazy afternoon early one October,” and then the camera zooms out, launching us into the sky. Every ten seconds, our visible area expands outward by a magnitude of ten. Our two protagonists disappear into specks along the shores of Lake Michigan, and then the sprawl of Chicago fades into an outline of the deep-green Midwest. Eddies of cloud soon smear over the continent. Land and sea meld, blend, become one kaleidoscopic object. That famous Blue Marble, the whole Earth, appears in the vacuum of space, shriveling us to nothing before it too disappears into the abyss. The iconic Milky Way, an impressionist swirl of dabbed paint, withers to just a fleck of light in a field of countless others like it. At 10²⁴ meters, or 100 million light-years out, we hit a wall of almost pure black — the end of human vision circa 1977, the year the film was completed — and head home, back to our lazy pair in the grass. Then we plunge into the skin of a man’s hand as if through a microscope, where the glowing orbs and helixes of his cells, DNA, and atomic particles could easily be confused for massive objects in deep space.

Tying all of these images together with a simple voice-over, Powers of Ten made the space age accessible to the general public, narrating a plunge into science not long after the moon landing debuted in every living room in America.01 It also placed humanity squarely in its center, with its narrative set in the quintessential twentieth-century city. The human hand, a symbol of both work and ingenuity, is effectively the star of the show. Everything, the film suggests, must be connected back to our earthly bodies for us to truly understand. Prefiguring the hubris-shrinking philosophy of Carl Sagan in Pale Blue Dot by about two decades, the screenwriter and director Paul Schrader wrote of Powers of Ten that it allowed the viewer to “think of himself as a citizen of the universe.” The nine-minute film, he added, “gives the full impact — instinctual as well as cerebral — of contemporary scientific theories.”




The human hand, a symbol of both work and ingenuity, is effectively the star of the show. Everything... must be connected back to our earthly bodies for us to truly understand.





Four decades later, much has changed in what defines that full impact, especially on the zoomed-out end of things. For one, the black wall at 10²⁴ meters is no longer the end of our collective sight. The Hubble Space Telescope, or HST, launched in 1990 by NASA and the European Space Agency, now brings us up to at least 10²⁶ meters, adding another 9.9 billion light-years to our reach. Add to that the three other so-named Great Observatories launched in the nineties and early aughts, along with the New Horizons interplanetary space probe and the Curiosity rover currently cruising on Mars, and stunning photographs of planets, moons, and galaxies far, far away have become a steady drip.

Every so often, one of these drops pierces the cultural consciousness, sometimes so much so that a single image becomes a metonym for all of space exploration. The iconic Pillars of Creation, a surrealistic photograph of gas towers in the Eagle Nebula more than five light-years tall, made waves when it appeared in 1995, a huge public-relations boon for NASA and astronomy more generally. The image, however incomprehensible or bizarre to its non-astronomer public (how does anyone even begin to understand a five-light-years-tall anything?), was inescapable, reprinted the world over on pillows, tee-shirts, and postage stamps. Endlessly reproducible, Pillars of Creation refamiliarized a general audience with space by sheer virality alone. Around the same time, postmodernism, our aesthetic culture of images for images’ sake, was being thoroughly theorized and critiqued. Images, the theorist Jean Baudrillard declared, are not representative of reality but, for those accustomed to staring at screens and photos, the actualization of reality itself. Images of deep space are not representative of distant, inconceivable space — for all of us here on Earth, they are space.




Pillars of Creation. These towering tendrils of cosmic dust and gas sit at the heart of M16, or the Eagle Nebula. The blue colors in the image represent oxygen, red is sulfur, and green represents both nitrogen and hydrogen. The pillars are bathed in the scorching ultraviolet light from a cluster of young stars located just outside the frame. The winds from these stars are slowly eroding the towers of gas and dust.

Image and caption credits: NASA, ESA and the Hubble Heritage Team (STScI/AURA)


Today, the jaw-dropping, crystal-clear images that the HST et al. regularly capture define our understanding of space, astronomy, and the void however many magnitudes of ten above us. Despite our familiarity with these photos, however, we seem to be baffled, perplexed, flabbergasted, or befuddled as to what it is we look at when we look at so-called photographs of the cosmos. Do the allegedly iconic elephant trunks of interstellar gas way out there in the Eagle Nebula actually look like the very iconic picture? And are we supposed to care if they don’t? To the chorus of critics that emerges with the publication of each new wide-reaching photo, the answers to those questions are no and yes, respectively. Science writer Charlie Petit, for one, tagged the Pillars of Creation image quite literally unbelievable: its “extensive color tweaking,” he claimed, dramatizes the scene way beyond what the naked eye would see. A New York Times reviewer of a National Air and Space Museum show in 2010 agreed with that sentiment, calling out the “phony” colors of the HST images. “This mixture of the real and the imagined,” the author wrote, “conspire to create a strange photographic universe in which the human is everywhere implicated but nowhere sensed.” As in the Eames film, everything zooms back to the human hand.

All of this might be confusing to the layman, or unimplicated human, who assumes photographs of space are like photographs of Earth — that is, two-dimensional facsimiles of reality. And yet even the implicated humans (astronomers, scientists, hobbyists) seem to miss the difference between the “real” and the “imagined.” According to a 2015 study, there is no notable discrepancy between the opinions of amateurs and of experts as to what is “real” in space photography. So, if nobody knows, then how exactly do we talk about what is and is not “real” space in these images? What does “real” mean when we look at something — an interstellar gas column in a faraway nebula, perhaps — that we would never witness without photography? And if the naked eye can never possibly see it, can it even be “real” at all?




There is no notable discrepancy between the opinions of amateurs and of experts as to what is “real” in space photography. So, if nobody knows, then how exactly do we talk about what is and is not “real” space in these images?





These questions require a space telescope, or several, to even begin to answer. They also require understanding what it really takes to zoom out to 10²⁴ meters and beyond. Telescopes like the HST record the universe with electronic detectors called charge-coupled devices, or CCDs, not dissimilar to those in an average smartphone camera. Both CCDs and digital cameras do, in essence, just what old-school film or photographic plates do: they collect light, but by very different means: pixel by light-sensitive pixel rather than by chemical coatings and long exposure times. Each pixel is assigned a value according to the amount of light it receives, and together, in a matrix of rows and columns, these pixels jointly construct an image.
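
For readers who want the mechanics, a minimal sketch in Python of how a matrix of per-pixel light readings already is, in effect, a grayscale image. This is not any telescope’s actual pipeline; the tiny detector and its counts are made up for illustration.

import numpy as np

# Hypothetical photon counts read off a tiny 4x4 detector:
# each cell is one pixel's measurement of how much light arrived.
counts = np.array([
    [12, 15,  80, 200],
    [14, 90, 220, 210],
    [10, 85, 230,  95],
    [ 9, 11,  70,  60],
])

# Scale the raw counts to an 8-bit grayscale range (0 = black, 255 = white).
gray = (255 * counts / counts.max()).astype(np.uint8)

print(gray)  # rows and columns of brightness values: the image itself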

Like digital camera sensors, CCD pixels are colorblind — they only measure the hard quantity of light pouring in — so these values record pure black, white, and shades of gray. To make these shots shine in color, most digital cameras correct for this by filtering that light three times, once each for red, green, and blue, corresponding to the three bands of frequencies to which the human eye’s color receptors respond. All visible colors on the other end of a lens, i.e. everything on the planet that we see, can be broken down into or constructed from these three building blocks. Framing a vacationing family for the mantelpiece, a digital camera takes in grayscale light readings through red, green, and blue filters, and then overlays these layers to construct the two-dimensional likeness of that family and its decadent souvenir tee-shirts.
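
That overlaying step can be sketched in a few lines, assuming three colorblind exposures have already been taken through red, green, and blue filters. The dimensions and the random data below are stand-ins, not real sensor output.

import numpy as np

h, w = 480, 640  # hypothetical sensor dimensions

# Stand-ins for three colorblind exposures, one per filter,
# each a grid of brightness values between 0 and 255.
red_exposure = np.random.randint(0, 256, (h, w), dtype=np.uint8)
green_exposure = np.random.randint(0, 256, (h, w), dtype=np.uint8)
blue_exposure = np.random.randint(0, 256, (h, w), dtype=np.uint8)

# Stack the three grayscale layers into one color image: every pixel
# now carries a red, a green, and a blue value.
color_image = np.dstack([red_exposure, green_exposure, blue_exposure])

print(color_image.shape)  # (480, 640, 3)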

CCDs do something similar. They build an image of the faraway pixel by pixel, layer by layer, stitching these myriad fragments together into something abstract, awe-worthy, and generally good for earning NASA some funding. Shot from hundreds of thousands of miles or more away, telescope images often use pixels that each cover a patch of their subject at least two or two and a half miles wide, packing an enormous quantity of light data into every one. This process, with large swaths of the visible spectrum captured at a time and then overlaid, is what’s called “broadband filtering,” far and away the most straightforward process in coloring photographs of space. What broadband filtering does is add power — effectively, long-distance vision — way beyond what our frail, earthly eyes can do, presenting a true-color image of what we would see if we could look right at Jupiter from our bedroom windows.

Next to that is “narrowband filtering,” which is used to select and isolate tiny slices of the visible spectrum in order to amplify their perceptibility, like highlighting data. Rather than present a true-color version of intergalactic reality, narrowband filtering maps where elements like oxygen, hydrogen, and sulfur exist in a given image, providing scientists a better visual understanding of the complex machinations of gases that are otherwise lost to the haze of color. These isolated bands are then assigned a somewhat arbitrary color based not on what the eye would see, but on what would be practically useful in an image of the cosmos and, often, what looks more compelling or stunning to a public audience. The iconic elephant trunks in the Pillars of Creation, for instance, would not be the minor celebrities they are today without the added blues and greens that make oxygen and hydrogen particles sing.
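
The mapping itself can be sketched, too. What follows is the general idea behind the commonly cited “Hubble palette,” which assigns sulfur, hydrogen, and oxygen exposures to the red, green, and blue channels; it is an illustration, not the Hubble Heritage team’s actual workflow, and the arrays are placeholders.

import numpy as np

h, w = 480, 640  # hypothetical image dimensions

# Placeholder narrowband exposures: each isolates light from one element.
sulfur = np.random.rand(h, w)    # S II emission line
hydrogen = np.random.rand(h, w)  # H-alpha emission line
oxygen = np.random.rand(h, w)    # O III emission line

# The color assignments are a choice, not what the eye would see:
# sulfur becomes red, hydrogen green, oxygen blue.
false_color = (255 * np.dstack([sulfur, hydrogen, oxygen])).astype(np.uint8)

print(false_color.shape)  # one legible map of three gases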




Shot from hundreds of thousands of miles or more away, telescope images often use pixels that each cover a patch of their subject at least two or two and a half miles wide, packing an enormous quantity of light data into every one.





Most problematically for the idea of a “real” photograph of space, much of the light in more complex or long-distance images is, in fact, invisible to the human eye. Those red, green, and blue frequency ranges — the ones a digital camera focuses on — encompass only a slim fraction of the total light emitted or received at any given place at any given time. The visible spectrum runs from the reddest reds, with the longest wavelengths and lowest frequencies, to the deepest violets, with the shortest wavelengths and highest frequencies; the rest of the light falls invisibly beyond either end of that band. Infrared and ultraviolet rays sit just outside the visible spectrum on this scale (invisible to almost all mammals save for reindeer, which, incredibly, can see ultraviolet light reflected off of snow). What we humans see is just a tiny sliver of the total light emitted in the universe.
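
For a rough sense of scale, a short sketch that sorts a wavelength into the sliver we can see and the invisible bands on either side. The cutoffs are commonly cited approximate boundaries, rounded for simplicity.

def classify_wavelength(nanometers: float) -> str:
    """Roughly place a wavelength on the spectrum around visible light."""
    if nanometers < 10:
        return "x-ray or gamma ray"
    if nanometers < 380:
        return "ultraviolet (invisible)"
    if nanometers <= 750:
        return "visible light"  # the sliver our eyes evolved for
    if nanometers <= 1_000_000:
        return "infrared (invisible)"
    return "microwave or radio"

for wavelength in (0.1, 300, 450, 650, 900, 2_000_000):
    print(wavelength, "nm ->", classify_wavelength(wavelength))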

But space phenomena, with their whipping swirls and glistening spots and peppering of stars, also emit light outside of our measly visible spectrum. Unlike us, telescopes can see these things, which is one of the reasons space agencies build them in the first place. The HST, for instance, can observe in the near-infrared and ultraviolet spectra alongside the visible. Others see in x-ray or gamma ray. While the 1995 Pillars of Creation image shot by the HST covered only the visible spectrum, employing both broadband and narrowband filtering, a 2015 revisit with infrared filtering altered it completely. The new Pillars of Creation — mind you, of exactly the same distant matter — now brings out a quilt of brilliant stars from behind those ethereal gas columns. Ultimately, telescopes produce wild and fantastical photographs not in spite of what the naked eye would see, but precisely because the naked eye would never see these things without the photographs.

What these images and the cries of fakery surrounding them signify, really, is what the astronomer Jayanne English names a “battle between the culture of science and the culture of art.” In the culture of science, she claims, images are, in essence, data, used practically by astronomers and those in the know. They exist as maps, much like the way oxygen and hydrogen are isolated and colored in the Pillars of Creation image. But in the culture of art — the paradigm largely applied by the general public, who rave over these pictures — images are representations of reality, reflective of our aesthetics and the way we understand the world as related to our bodies.

Following what English pins as the “Western image tradition,” viewers often expect that, like National Geographic–style photographs of nature, space photos should be neutral, devoid of the human hand. This is, of course, nonsensical, almost paradoxical: even the most natural-looking and iconic of photos are staged, set up, or otherwise concocted by the photographer. As Susan Sontag famously pointed out, photographs are not mere documents of the world around us. As records of the real, they represent incontrovertible evidence that things look in real life the way they do in photographs. And yet, notably, they are also biased: somebody, after all, is always behind the camera, and in this case, working hundreds of hours to make raw data visible and aesthetically appealing.

Photographs of space are complex because, like still-life paintings, portraits, selfies, or any two-dimensional image, they take in seemingly endless quantities of information and reduce them to a snapshot — a single something in a single frame at a single moment in time. That the image is reduced, changed, colorized, or not “real” is hardly the point. More important is what that reduction does, and for whom. As Paul Schrader noted, the roller-coaster ride of Powers of Ten does what data about the universe alone cannot do — it gives the full impact of contemporary science. It is effective because of its reductive qualities, because of its legibility. Maybe, likewise, it is the unreal, otherworldly “reality” of space photography that is precisely the point.



01. To string these scales together, the Eames Office needed to create the illusion of a continuous shot, sped up to short-film length. In total, the whole gamut of the piece runs in nine minutes, the journey out into space in about half that. For the first series of images in the film, the team commissioned three large-scale photographs from the Chicago Aerial Survey, the last from a high-altitude Cessna on the atmospheric margins. Each photo was then resized, recolored, and positioned in the center of its successor, and the whole rig was shot and zoomed out by a crack team of cinematographers. When we hit 10⁶, or one million meters up, the survey images imperceptibly pivot to one provided by the NASA EROS Data Center. But while the transformation is seamless to us viewers, for the Eames team, fusing NASA imagery with the perceptible realism of mid-century color photography was a taxing, time-consuming mess. Minute inconsistencies between disparate image sources — palette changes, boats moving on the lake, volatile weather — had to be painted or airbrushed out. The first NASA image had to be expanded in width and corrected for color to fit the Chicago of that lazy October afternoon.
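
The arithmetic behind that continuous zoom is simple enough to sketch. Assuming the film’s stated pace of one power of ten every ten seconds, and a start at the one-meter picnic square, the field of view has to grow by a constant factor of about 1.26 every second:

# One order of magnitude every ten seconds means the visible field must
# widen by a constant factor of 10 ** (1/10), roughly 1.26x, each second.
PER_SECOND = 10 ** (1 / 10)

def field_width_meters(seconds: float, start_width: float = 1.0) -> float:
    """Width of the visible square after a given number of seconds of zooming out."""
    return start_width * PER_SECOND ** seconds

for t in (0, 10, 60, 240):
    print(f"t = {t:3d} s -> {field_width_meters(t):.0e} m across")
# At t = 60 s the frame is about 10**6 m wide, the survey-to-NASA handoff point.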
