The camera obscura, an ancient device that captures light through a lens or pinhole and projects it onto a surface, long tantalized artists, scientists and inventors with the dream of capturing its fleeting images. Eventually, scientists discovered that certain chemicals react to light with visible and persistent changes. Inventors developed devices and techniques for capturing images in silver halide and other photosensitive chemicals. Photography revolutionized the magic lantern show, made stereoviews practical and made movies possible.
The use of photochemistry is not limited to storing images: an optical soundtrack for a movie stores sound, phototypesetting discs store fonts as photographic negatives, and photosensitive dyes are used in write-once optical discs. Photochemical systems are also used in photolithography, photo-etching and other industrial processes.
Photographs store information as photochemical changes that alter the reflectance or transmittance of the medium. The chemicals used for photography are, in most cases, silver halides in the form of microscopic crystals suspended in a gel. Silver halide becomes metallic silver, which is opaque, in proportion to the amount of light falling on it. The initial exposure to light focused through a lens creates a latent image, which is enhanced during developing by the application of additional chemicals. In many cases, the resulting image is a negative, from which a positive print is generated in a second photochemical process. With reversal film, the exposed image is positive and can be viewed directly.
A black and white photograph stores the value of a single variable, light intensity, at each point in an image. It ignores the mix of wavelengths that we sense as color, which requires representing three variables—a much more difficult problem that occupied scientists and inventors for decades after the invention of photography.
Virtually all color photography is based on the trichromatic theory of vision proposed by Thomas Young in 1802 and developed by Hermann von Helmholtz in 1850. Our retinas have three types of receptors sensitive to ranges of the visible spectrum centered on red, green and blue. In 1855, James Clerk Maxwell conducted experiments that demonstrated this theory by combining red, green and blue light in various proportions using a spinning disc. He hypothesized that the different color receptors didn't themselves see color, but were simply sensitive to light intensity at particular frequencies. The sensation of color was created in the brain by combining these three intensities. This was important to color photography because it meant that color could be stored as three black and white images, each representing the intensity of light as seen through a color filter.
Maxwell demonstrated the concept in 1861 using color separations, taking three black and white photographs through color filters, then superimposing images projected through the same color filters by three magic lanterns. Separations could also be combined using mirrors in a viewer like the Ives Kromogram. Inventors produced a number of cameras and viewers using this method. The devices were necessarily complex and color photography came ultimately to depend on other approaches. But the use of color separations continued in Technicolor and similar color movie technologies in the early 20th century, as well as in color printing through the present day.
Additive processes reproduce color by passing light through three versions of the image, each filtered by the color originally used to take the image. Color separations are an additive technology, but for color photography to be commercially successful, something simpler was required. This was provided by screen processes, first suggested by Ducos du Hauron in 1868, but not successfully implemented until the end of the 19th century by John Joly (1895), James William McDonough (1897) and Louis Dufay (1905).
A screen process combines the three images, essentially by intermingling them to create a single image. The color at each location on the image is actually three separate dots, representing the intensity of red, green and blue. The dots are small enough that the eye fuses the three colors. (Pointillism, developed in 1886 by the painters Georges Seurat and Paul Signac, is a similar additive technique.) Inventors came up with a variety of ways to break the three images into dots. The autochrome process introduced by the Lumières in 1907, which used dyed grains of potato starch, was the first commercially successful approach. Color on computer and TV screens is still produced using tiny dots of the three primary colors.
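The screen-process idea can be sketched numerically. The toy below interleaves a row of color pixels into single-primary screen elements (in the spirit of Joly's line screen) and then models the eye's fusion of each group of three adjacent elements back into one color. The function names and the 0.0-1.0 intensity scale are illustrative, not taken from any historical process.

```python
# Toy model of an additive screen process (illustrative values only).

def make_screen(pixels):
    """Interleave RGB pixels into single-channel screen elements,
    cycling through red, green and blue filters."""
    dots = []
    for i, (r, g, b) in enumerate(pixels):
        channel = i % 3              # 0 = red, 1 = green, 2 = blue
        dots.append((channel, (r, g, b)[channel]))
    return dots

def fuse(dots):
    """Approximate the eye fusing three adjacent dots into one color."""
    colors = []
    for i in range(0, len(dots) - 2, 3):
        rgb = [0.0, 0.0, 0.0]
        for channel, value in dots[i:i + 3]:
            rgb[channel] = value
        colors.append(tuple(rgb))
    return colors

orange = [(1.0, 0.5, 0.0)] * 3       # three pixels of the same orange
print(fuse(make_screen(orange)))     # -> [(1.0, 0.5, 0.0)]
```

Because each screen element records only one primary, fine detail is traded for color, which is why screen-process images look grainy up close, just as a pointillist canvas does.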
Subtractive color processes take a different approach to combining the three images required to reproduce color. In subtractive color photography, the three images are superimposed. The film has three silver halide layers sensitive to blue, green and red. When developed, dyes in each layer produce a negative image in the complementary color: yellow, magenta and cyan. When printed, the dye in the cyan layer, for example, will be opaque in inverse proportion to the red light falling on the original film. When white light passes through the three layers, red is subtracted based on that opacity.
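The arithmetic of the subtractive principle can be shown in a few lines. This is a deliberately simplified sketch of the positive (reversal) case on a 0.0-1.0 scale; real dye response curves are nonlinear, and a negative would require a second inversion in printing. The function names are mine, for illustration.

```python
# A minimal numeric sketch of subtractive color reproduction.

def dye_layers(r, g, b):
    """Dye formed in each layer is the complement of the light that
    exposed it: cyan from red, magenta from green, yellow from blue."""
    return (1.0 - r, 1.0 - g, 1.0 - b)   # cyan, magenta, yellow

def transmitted(cyan, magenta, yellow):
    """White light minus what each dye subtracts: cyan absorbs red,
    magenta absorbs green, yellow absorbs blue."""
    return (1.0 - cyan, 1.0 - magenta, 1.0 - yellow)

c, m, y = dye_layers(0.8, 0.2, 0.1)      # scene light exposing the film
result = tuple(round(v, 6) for v in transmitted(c, m, y))
print(result)                            # -> (0.8, 0.2, 0.1): color restored
```

Where a lot of red light struck the film, little cyan dye forms, so the projected white light passes through with its red largely intact: subtraction reproduces the original color.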
Photographs store information through photochemical changes that alter the reflectance or transmittance of the medium. The grains of silver in a photograph store information about the intensity (irradiance) of light arriving from a range of directions focused by the lens. The chemicals used for photography are, in most cases, silver halides in the form of microscopic crystals suspended in a gel. Silver halide changes to metallic silver in proportion to the amount of light falling on it. The initial exposure creates a latent image, which is enhanced during developing by the application of additional chemicals. In many cases, the photograph is a negative, with a positive then produced as a photographic print from the negative. Applications include photography, photolithography and some forms of optical audio storage.
Magic lanterns had been in common use for over a century when photography was invented. Before then, slides were hand painted, lithographed or transferred by decalcomania. A process for printing photographs on glass was invented by William and Frederick Langenheim in 1849. It was immediately applied to creating magic lantern slides. Photography itself hadn't been around for more than a decade, and for most of that time daguerreotypes had been the standard photographic process.
Magic lantern slides evolved into what came to be called simply slides when photography moved from glass plates to celluloid roll film. Instead of printing images on large and fragile plates, frames from a developed roll of film could be cut out and placed directly in cardboard mounts ready for the projector. Reversal film, which created a positive image when developed instead of a negative, simplified the process. Smaller slides led to smaller and lighter projectors.
Slides have come in a wide range of film sizes, although 35 mm is by far the most common, with 2 x 2 in. (5 x 5 cm) being the standard mount.
A slide strip consists of multiple images in a single mount. Typically these are called slides or filmstrips, but I've adopted this term to differentiate them from "slides," which consist of a single frame in a mount and "filmstrips," which are unmounted.
A sequence of photographic images mounted on a disc for projection or viewing through a hand-held viewer.
A filmstrip is a sequence of photographic stills on unmounted celluloid film—typically 35 mm or 16 mm with sprocket holes. The images are from life or from artwork that includes text, cartoons or illustrations. The first filmstrips were produced on 55 mm film around 1918 by the Underwoods of New York. Production was soon taken over by the Stillfilm Company. The familiar 35 mm filmstrip emerged in the mid-1920s.
In 1853, J. B. Dancer, an English optician and microscopist, used the newly-invented wet collodion process to create photographs mounted on slides for viewing under a microscope. They were popular as curiosities and sold well. The broader potential of microphotography was first demonstrated in 1870 when a French microphotographer named René Dagron used it to send messages by carrier pigeon into Paris while it was under siege during the Franco-Prussian War. In the twentieth century, microphotography, in the form of microfilm and microfiche, became a standard way of archiving magazines, newspapers and other documents. It also found use in the military and espionage during World War II and the Cold War.
In addition to a transparency, a photograph can be reproduced as a print: an opaque image reproduced from a negative on photographic paper.
Phototypesetting emerged in the early 1950s. At the time, most typesetting was done with linotype machines, a technique known as hot type. Linotype machines were large, highly complex machines in which type was cast on the fly from a lead alloy. Phototypesetting, or cold type, took an entirely different approach. The shapes of characters for a particular font were stored photographically as negative images on a transparent disc or plate. Characters on the disc were exposed to light one at a time by positioning the negative image over photosensitive paper.
The soundtrack for the first talkie (a film with spoken dialog), The Jazz Singer in 1927, was stored on a 16 inch, 33⅓ RPM record, a technology known as sound-on-disc. Sound-on-disc systems were used in theaters for several years, but synchronization was an issue and the records wore out quickly. Editing a film when the soundtrack was on a shellac record was also extremely difficult. A competing approach, sound-on-film, stored the sound on the film itself as an optical waveform—almost literally a picture of the sound. The waveform was recorded photographically using either light reflected by a vibrating mirror or light from a source whose intensity varied with the sound. The audio was played back by shining a light through the soundtrack onto a light sensitive vacuum tube (originally developed for an optical signaling system used by the U.S. Navy). The sound was thus inherently synchronized and editing the film simultaneously edited the soundtrack. Although audio recording was eventually separated from film recording, particularly with the introduction of magnetic tape recording, an optical soundtrack is still added when films are printed today.
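The variable-density form of sound-on-film can be sketched as a simple mapping between sound pressure and film transmittance. In this toy model (the scale and function names are illustrative), recording maps each audio sample to a density along the track, and playback recovers the sample from the light passing through that spot:

```python
# A sketch of a variable-density optical soundtrack: sample values
# become film transmittance; playback measures transmitted light.
import math

def record(samples):
    """Map audio samples in [-1, 1] to film transmittance in [0, 1]."""
    return [(s + 1.0) / 2.0 for s in samples]

def play(track, light=1.0):
    """Convert light passing through each spot back to sample values."""
    return [2.0 * (light * t) - 1.0 for t in track]

# A short 440 Hz tone survives the round trip through the "film".
tone = [math.sin(2 * math.pi * 440 * n / 24000) for n in range(8)]
assert all(abs(a - b) < 1e-12 for a, b in zip(play(record(tone)), tone))
```

The model makes the synchronization point obvious: because the waveform is printed on the same strip as the picture, cutting a frame of film cuts the corresponding slice of sound with it.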
Before digital synthesizers, optical organs like the Optigan stored analog waveforms photographically. The concept was the same as an optical sound track for a movie, except that the waveforms were stored as circular tracks on celluloid discs, with each track storing a different pitch. The sound wasn't great and the mechanisms weren't that robust. As a result, optical organs weren't very successful, although they are still sometimes used by recording artists in search of unusual or retro sounds. They've also lived on, ironically, in the form of digital samples for synthesizers.
Developed in the late 1980s by the Drexler Corp., the LaserCard could store 2.6 megabytes of data. A photosensitive strip bonded to the card is exposed to ultraviolet light passing through transparent areas in a master. When developed, the exposed spots, which are darker than the unexposed areas of the strip, can be read by reflected laser light. Originally intended to hold medical data, the technology, now marketed by HID Global, is apparently still in use in multiple countries for government ID cards.
Certain dyes become opaque when heated by a laser or through direct contact with a heating element. The dye can be applied to a variety of substrates, including plastic, metal or paper. The process is irreversible, limiting it to applications like write-once optical discs and thermal printing.
Standard CDs and DVDs are manufactured by injection molding and are thus inherently read-only. The high quality of prerecorded audio and video they offered, at least relative to tape, played a large part in their rapid ascendancy. But cassette audio and video tape had accustomed consumers to recording their own music and video. You couldn't make a mixtape with a CD. You couldn't record a television show on a DVD to watch later. In the case of data applications like backup or scientific instrumentation, the capacity, speed of access and durability of optical media was attractive, but, again, the ability to record was the missing piece. This began to change in the mid-1980s with the introduction of recordable CDs.
The discs in this section are Write Once Read Multiple (WORM). WORM discs consist of a metal layer, a layer of organic dye, and the usual protective plastic. Data is written to a blank disc using a laser at a high enough power to heat the dye layer and cause a chemical change that makes the dye opaque in selected spots—the origin of the phrase "burn a CD". During reading, the laser, at a lower power, directs light at the dye layer. Where it's still transparent, the light passes through and is reflected back by the metal layer. Where the dye is opaque, light is absorbed. The chemical change is permanent, which means the disc can only be written once. For many applications, such as recording, backup and data collection, this limitation is not a problem. Discs that can be written multiple times use a different technology (see Phase).
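The write-once property can be illustrated with a toy spot-per-bit model. Real discs encode data with run-length codes and error correction rather than one raw spot per bit, so take this only as a sketch of the burning-and-reflection idea:

```python
# Toy model of writing and reading a WORM disc.

def burn(bits):
    """Writing: the high-power laser turns the dye opaque for 1-bits;
    0-bits leave the dye clear. The change is permanent."""
    return ["opaque" if b else "clear" for b in bits]

def read(spots):
    """Reading: light reflects back off the metal layer only where
    the dye is still clear; opaque spots absorb it."""
    return [0 if spot == "clear" else 1 for spot in spots]

data = [1, 0, 1, 1, 0]
assert read(burn(data)) == data   # written once, readable many times
```

Since burning only ever turns clear spots opaque, there is no operation that restores a spot, which is exactly why the format is write-once.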
In 1981, British Telecom introduced a prepaid phonecard that used thermal marking to track usage. The card was read by reflected infrared light. The reflective coating was burned away by a heating element as the card was used, allowing the phone to determine how much time was left.
Barcodes and mailing labels are often printed using direct thermal printing. In direct thermal printing, the print head contains heating elements that cause a chemical change that blackens a specially coated paper. Unlike thermal transfer printing, a direct thermal printer uses no ink or ribbon.