How a Digital Camera Works - The Imaging Sensor
If you took apart a modern film camera and a digital camera, you would find that their interior workings are, in most cases, very similar. In fact, up to the point of actually recording an image, film and digital cameras function in much the same way, so much of our earlier discussion of aperture and shutter applies to digital cameras as well. The major difference between the two is the presence of an image sensor rather than film: film records light chemically, while an image sensor records light electronically.
Digital cameras use an image sensor instead of film to record an image, thereby eliminating the need for a film-transport mechanism. This makes it much easier for manufacturers to construct digital cameras that are quite small, while also enabling innovative exterior designs that would not be possible with a film camera.
Aside from the imaging sensor that takes the place of film, the interior of a digital camera is very similar to the interior of a modern film camera. Light still travels through the lens, and the TTL (through-the-lens) meter still calculates the proper exposure settings.
When the shutter is tripped, light passes through, enabling the film or, in the case of the digital camera, the imaging sensor to record the image. At this point, the film camera has completed its task; chemical development will take over when the film is processed. The digital camera, however, has more work to do.
The Imaging Sensor (continued)
Film responds to light on a chemical level and requires further processing using photochemistry to develop the image so it can be seen. An image sensor responds to light electronically, requiring further processing so the data it has gathered can be viewed as a digital photo.
An imaging sensor is a silicon chip that contains millions of tiny light-sensitive elements called photosites. The photosites are arranged in a grid, with one photosite for each pixel in the image the camera captures. The total number of photosites determines the stated resolution of the imaging sensor. Resolution is specified using a number and the term megapixels, which is simply a way of saying how many millions of pixels the sensor has. A five-megapixel camera, therefore, has approximately five million photosites on its imaging sensor.
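Seen this way, the megapixel rating is just the photosite grid's width multiplied by its height. Here is a minimal Python sketch; the 2,592 × 1,944 grid is a hypothetical example of a roughly five-megapixel sensor:

```python
# Hypothetical photosite grid for a roughly five-megapixel sensor.
width, height = 2592, 1944   # photosites per row, rows on the sensor

total_photosites = width * height
megapixels = total_photosites / 1_000_000

print(f"{total_photosites} photosites = {megapixels:.1f} megapixels")
# 5038848 photosites = 5.0 megapixels
```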
Most imaging sensors fall into two main categories: CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor). The two types differ slightly in how information is read off the chip: a CCD transfers the charges of its photosites row by row, like the lines of a book, whereas the charges from the photosites on a CMOS sensor are read simultaneously.
Another characteristic of a CMOS sensor that differentiates it from a CCD is that it uses significant power only when its transistors are switching between on and off states. CMOS is therefore very energy-efficient and able to dissipate heat more effectively.

CMOS technology is used in many commercial applications and, as a result, is more economical to produce. Until recently, CCDs produced superior image quality and a higher dynamic range, but CMOS sensors have improved and now produce image quality similar to CCDs. In fact, CMOS may become the standard sensor in the next few years due to its economy of production and efficient energy use.
How Light is Converted to an Electrical Response
Before you actually take a photo, the camera prepares the sensor to receive data by charging the surface of the sensor with electrons. When the shutter opens, allowing the light from the lens to strike the sensor, electrons gather over the pixels in proportion to the amount of light that strikes each one. More light falling on a particular pixel means that a higher number of electrons will gather there.
Imagine two individual photosites (pixels) side by side on the surface of an imaging sensor, with more light striking one of them. More electrons gather at the brighter photosite, and when the camera's analog-to-digital converter interprets that larger charge, the result is a higher digital value, that is, a brighter tone.
When the exposure is finished, the computer in the camera measures the amount of electrical charge, or accumulated electrons, at each pixel site. This electrical charge directly correlates to how much light hit that particular pixel. This initial set of exposure information is the raw data generated by the imaging sensor. We'll be talking more about the significance of this raw data later on.
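To make the proportional relationship concrete, here is a small Python sketch; the photon counts and conversion factor are invented purely for illustration:

```python
# Toy model: electrons accumulate in proportion to the light striking each pixel.
# All of the numbers below are invented for illustration.

photons_per_pixel = {"brightly_lit_pixel": 12000, "dimly_lit_pixel": 3000}
electrons_per_photon = 0.5   # hypothetical conversion efficiency

for name, photons in photons_per_pixel.items():
    electrons = photons * electrons_per_photon
    print(f"{name}: {electrons:.0f} electrons accumulated")

# brightly_lit_pixel: 6000 electrons accumulated
# dimly_lit_pixel: 1500 electrons accumulated
# The larger charge will later be read out as a higher (brighter) digital value.
```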
The A/D Converter
Using the initial numbers that represent the voltage response from the photosites, the camera processes the data through an analog-to-digital (A/D) converter, which translates each voltage into a digital value. The majority of digital cameras for the consumer market use an 8-bit A/D converter. This means that the electrical charge for each pixel is converted into a number ranging from 0 (black) to 255 (white), resulting in an image with 256 individual tonal gradations.
On many prosumer (consumers using professional-grade equipment) and professional 35mm models, however, the A/D converter can produce 14-bit values, which translates to 16,384 tonal gradations. Images that use more than eight bits per channel can be accessed only by using the RAW file option.
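The bit depth simply sets how many discrete levels the analog reading is divided into. A minimal sketch of the idea, with the voltage expressed as a made-up fraction of the sensor's full scale:

```python
# Quantize an analog reading into a digital tonal value.
# The reading is normalized: 0.0 = no light, 1.0 = full saturation.

def quantize(normalized_voltage, bits):
    """Map a 0.0-1.0 analog reading onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    return round(normalized_voltage * (levels - 1))

reading = 0.62   # hypothetical photosite voltage, as a fraction of full scale

print(quantize(reading, 8))    # 158   -> one of 256 levels (0-255)
print(quantize(reading, 14))   # 10157 -> one of 16,384 levels (0-16383)
```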
The Creation of the Color Image
As sophisticated and technologically advanced as the imaging sensors on digital cameras may be, they do not record images in color. The truth of the matter is that imaging sensors are colorblind; they can only see the world in shades of gray, and the pixel values they produce represent brightness only. With the exception of the Foveon X3 imaging sensor (which, as of this writing, is available only on the Sigma SD10 camera), all the imaging sensors currently in use capture grayscale images.
To determine the color values in an image, each pixel on the sensor has a colored filter over it. These filters are arranged in a specific pattern, with most cameras alternating rows of green-and-red filters with rows of blue-and-green filters. This arrangement is known as the Bayer Pattern. The Bayer Pattern contains twice as many green filters as red or blue because human vision is most sensitive to wavelengths near green, which falls in the middle of the visible spectrum.
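A short sketch makes the layout easier to see. It prints the filter color over each photosite in one corner of a Bayer-patterned sensor, using the green/red and blue/green row ordering described above:

```python
# Print the filter color over each photosite in a small Bayer mosaic.
# Even rows alternate green/red; odd rows alternate blue/green.

def bayer_filter(row, col):
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

for row in range(4):
    print(" ".join(bayer_filter(row, col) for col in range(6)))

# G R G R G R
# B G B G B G
# G R G R G R
# B G B G B G
```

Note that every 2 × 2 block contains two green filters but only one red and one blue, which is exactly the two-to-one ratio described above.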
So, the image processed by the A/D converter is a grayscale file, with each pixel having only one value representing red, green, or blue. In order to make a full-color image, however, each pixel needs values for all three colors. To put the puzzle together and determine the missing color values, a process of color interpolation is used. Interpolation is the process of adding new data based on existing information.
Essentially, the computer in the camera looks at each pixel and at the surrounding color values and makes an extremely good, educated guess as to what the missing color numbers should be. Of course, calling this a guess is a simplification, and it does not do justice to the extraordinarily complex mathematical algorithms that come into play in order to create the final, full-color digital photograph.
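The actual algorithms are proprietary and far more sophisticated, but the crudest version of the idea is simply averaging the nearest neighbors that did capture a given color. A toy Python sketch, using the Bayer layout from the earlier example and invented sensor values:

```python
# Crude color interpolation (demosaicing): estimate a photosite's missing
# colors by averaging the neighbors that captured them. Real cameras use
# far more sophisticated algorithms; all values here are invented.

# A 3x3 patch of raw sensor readings in the green/red, blue/green layout.
# The center photosite (1, 1) sits under a GREEN filter, so its red and
# blue values are missing and must be interpolated.
raw = [
    [("G", 120), ("R", 200), ("G", 118)],
    [("B",  60), ("G", 130), ("B",  64)],
    [("G", 125), ("R", 210), ("G", 122)],
]

def interpolate(color, row, col):
    """Average all neighboring photosites that captured the given color."""
    samples = []
    for r in range(max(row - 1, 0), min(row + 2, len(raw))):
        for c in range(max(col - 1, 0), min(col + 2, len(raw[0]))):
            if (r, c) != (row, col) and raw[r][c][0] == color:
                samples.append(raw[r][c][1])
    return sum(samples) / len(samples)

green = raw[1][1][1]              # measured directly: 130
red = interpolate("R", 1, 1)      # (200 + 210) / 2 = 205.0
blue = interpolate("B", 1, 1)     # (60 + 64) / 2 = 62.0
print(f"Full-color pixel at (1, 1): R={red}, G={green}, B={blue}")
```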
The color-filtration pattern on an imaging sensor captures alternating pixels of red, green, and blue. Twice as many green pixels are captured as red or blue. A complex system of color interpolation then creates the resulting full-color image.
Wow, so all of that sounds highly technical, but the premise is pretty simple. Think of it in terms of something familiar: if you were to look at a magazine photograph with a magnifying glass, you would see the pattern of halftone dots that actually creates the image. Unlike a true continuous-tone image, such as a traditional black-and-white (silver gelatin) photograph, a halftone image is composed of small dots that are generally small enough to fool our eyes into seeing continuous tone. Cheap newspaper publications often use coarser dot patterns that are noticeable even without a magnifying glass. Digital images work similarly, but instead of halftone dots, the image is composed of tiny squares called pixels, which are all the same size but vary in color and tone.
Additional In-Camera Processing
After the image has been captured by the sensor, processed by the A/D converter, and interpolated into a full-color image, the camera may apply additional processing. Whether or not this additional processing takes place (and what it actually entails) depends on the individual camera, as well as certain user-defined settings. Typically, the camera will apply what we like to call the "secret recipe." This is essentially a list of directions for brightness, contrast, color saturation, and sharpening adjustments that is different for each camera.
Some of these settings can be changed by the user or turned off altogether. Most cameras allow you to adjust sharpening, contrast, brightness, and saturation, but for greater flexibility it is often best to make these adjustments later in photo-editing software rather than at the time of capture; an application such as Photoshop is a far more capable image processor than the internal software (firmware) used in digital cameras.
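As a loose illustration of what such a recipe might look like, here is a toy per-pixel pipeline; the adjustments and amounts are invented, since each manufacturer's real recipe is proprietary:

```python
# A toy "secret recipe": brightness, contrast, and saturation applied to a
# single RGB pixel. The amounts are invented; real cameras use tuned values.

def clamp(value):
    return max(0, min(255, round(value)))

def apply_recipe(r, g, b, brightness=10, contrast=1.1, saturation=1.2):
    # Brightness shifts every channel; contrast scales around mid-gray (128).
    channels = [(c + brightness - 128) * contrast + 128 for c in (r, g, b)]
    # Saturation pushes each channel away from the pixel's gray average.
    gray = sum(channels) / 3
    channels = [gray + (c - gray) * saturation for c in channels]
    return tuple(clamp(c) for c in channels)

# One interpolated pixel in, one adjusted pixel out.
print(apply_recipe(100, 150, 90))   # (105, 171, 92)
```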
After the final round of in-camera processing, the file and its metadata (information about the photo) are written to the memory card in the chosen file format (usually JPEG). At that point, the camera is ready to process another image. All of this happens very fast, of course, so you don't really notice the incredible activity going on inside your camera. But it's pretty amazing when you actually stop and think about all of the steps that take place after you focus on a subject and press the shutter-release button.
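That metadata is typically stored as EXIF tags inside the JPEG, and you can inspect it yourself. Here is a quick sketch using the Pillow library, assuming a camera-written file named photo.jpg:

```python
# Read the EXIF metadata a camera wrote into a JPEG (requires Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")   # hypothetical file name
for tag_id, value in img.getexif().items():
    # Translate numeric EXIF tag IDs into readable names where known.
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# Typical output includes tags such as Make, Model, and DateTime.
```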