Fundamentals of Digital Imaging

For the past fifty years, the primary medium for photomicrography has been film, which has served the scientific community well by faithfully reproducing countless images from the optical microscope. It has only been in the past decade that improvements in electronic camera and computer technology have made digital imaging cheaper and easier to use than conventional photography.

Illustrated in Figure 1 is a Nikon Eclipse 600 transmitted/reflected light microscope equipped with an aftermarket Peltier-cooled digital camera capable of integrating images over a long accumulation period. The camera system is controlled by a separate unit that interfaces to a FireWire port housed in an IBM-compatible personal computer. Integration periods and other image acquisition parameters are selected by a proprietary Windows-based software program.

When a camera having a charge-coupled device (CCD) imaging sensor incorporates an analog-to-digital (A/D) converter on the sensor or in close proximity to it, it is generally referred to as a digital camera. Because CCD chips, like all optical sensors, are analog devices that produce a stream of varying voltages, the term digital applies only when those voltages are digitized in the camera and output in a computer-compatible format. In a 12-bit digital camera, the analog signal from the CCD is digitized to 12-bit depth by the on-board A/D converter. Whether or not the output can actually be resolved into 4096 discrete intensity levels (12 bits) depends on the camera noise. In order to discriminate between individual intensity levels, each gray-level step should be about 2.7 times larger than the camera noise; otherwise, the difference between steps 2982 and 2983, for example, cannot be resolved with any degree of certainty. Some so-called 12-bit cameras have so much camera noise that 4096 discrete steps cannot be discriminated.
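The 2.7x rule of thumb above can be turned into a quick estimate of how many A/D bits are actually meaningful for a given sensor. A minimal Python sketch (the full-well and noise figures below are hypothetical examples, not values from the text):

```python
import math

def usable_bits(full_well_e, noise_e, step_to_noise=2.7):
    """Estimate how many A/D bits are meaningful for a sensor.

    A gray-level step is treated as resolvable only if it spans
    step_to_noise times the camera noise (the 2.7x rule of thumb
    quoted in the text).
    """
    resolvable_levels = full_well_e / (step_to_noise * noise_e)
    return int(math.floor(math.log2(resolvable_levels)))

# A hypothetical "12-bit" camera with a 40,000 e- full well but 25 e-
# of noise: 40000 / (2.7 * 25) ~= 592 resolvable levels -> only 9
# useful bits, despite the 12-bit A/D converter.
print(usable_bits(40000, 25))  # -> 9
```

The same function shows why low noise matters: dropping the noise to 1 electron in this hypothetical sensor would yield 14 useful bits.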

If the signal is analog to start with, why digitize it in the camera rather than somewhere downstream? There are two benefits to using an in-camera A/D converter: reduced noise and direct computer-compatible output. In general, the closer the A/D is to the sensor, the lower the noise level. Low-level analog signals from the CCD are far more readily corrupted by noise than their high-level digital counterparts. In the ideal case, the A/D is on the CCD chip, immediately adjacent to the output amplifier of the sensor. The lower the noise, the more gray levels can be identified and, therefore, the more bits that can be meaningfully used for the intensity measurement.

A digital camera has several advantages over its analog counterpart. Digital cameras produce a progressive scan output unlike the interlaced signal generated by video cameras. Digitization of interlaced video signals requires specialized capture boards and frame buffers. The output of progressive scan cameras can be interfaced directly to the computer (e.g., IEEE-1394, RS-422 or SCSI interfaces). In a progressive scan camera, the entire image is first acquired during the exposure time (also denoted as the integration period) and then read out, line by line from the top of the image to the bottom. Modern, high-speed amplifiers and A/D converters permit digital cameras to produce full-frame images at rates that equal or exceed the video framing rate.

Another advantage of digital cameras is that their output is perfectly suited to the format of a computer monitor. Because the signal is already digitized, image storage, manipulation and display are greatly simplified in comparison with similar maneuvers using video signals. The difficulties of dealing with prints, slides and negatives are eliminated in digital photography because many scientific journals now accept digital image files. The result is improved quality both of published images and those shown in presentations. The digitized image can be processed, compressed, transmitted via the Internet, pasted into documents or turned into a poster.

CCD Architecture

Two CCD designs are commonly used in digital cameras: interline transfer and frame transfer. The interline-transfer CCD incorporates charge transfer channels beside each photodiode so that the accumulated charge can be efficiently and rapidly shifted over to them (Figure 2). Interline-transfer sensors can also be electronically shuttered "off" by dumping the stored charge instead of shifting it into the transfer channels. The frame-transfer CCD uses a two-part sensor in which the top half is covered by a light-tight mask and is used as a storage region. Light is allowed to fall on the uncovered portion, and the accumulated charge is then rapidly shifted into the masked storage region. While the signal is being integrated on the light-sensitive portion of the sensor, the stored charge is read out.


Two types of color digital cameras are used for scientific applications: a single-CCD camera with a wavelength selection filter or a three-sensor camera. Both use filters to produce red, green and blue versions of the field-of-view. The single-sensor camera uses a filter wheel or liquid-crystal tunable filter to acquire the red, green and blue images in sequence. The three-sensor camera has a beam-splitting prism and trim filters that enable each sensor to image the appropriate color and to acquire all three images simultaneously. Invariably, color cameras are less sensitive than their monochrome counterparts because of the additional beam-splitting and wavelength selection components. In some applications, particularly immunofluorescence, the loss of sensitivity is offset by the ability to capture multiple wavelengths simultaneously or in rapid succession. In addition, some color cameras achieve a higher resolution by diagonally offsetting the red, green and blue sensors, each by one-third of a pixel, thereby tripling the number of samples obtained.

Although CCD camera manufacturers and users routinely refer to each photodiode as a pixel (picture element), there is no requisite correspondence between the number and position of the pixels in the sensor and those in the computer monitor or printer. However, the display or printer resolutions should always be at least as high as that of the sensor.

Quantum Efficiency

Quantum efficiency (QE) refers to the percentage of incident photons that are detected. (For reference purposes, the QE of our photopic vision is about 3 percent; Figure 3). Silicon photodiodes, the basic building blocks of the CCD, have a high QE (80 percent) across a broad range of the visible spectrum and into the near infrared, as illustrated in Figure 3. The spectral sensitivity of a CCD is lower than that of a simple silicon photodiode because the CCD has charge transfer channels on its surface that reduce peak QE to about 40 percent.

Recently, the transparency of the channels of some scientific-grade CCDs has been increased and the QE in the blue-green range improved to nearly 70 percent. The losses from the surface channels are completely eliminated in the back-illuminated CCD. In this design, light falls onto the back of the CCD in a region that has been thinned by etching until it is transparent. A quantum efficiency as high as 90 percent can be realized. However, back-thinning results in a delicate, relatively expensive sensor that, to date, has only been employed in scientific-grade, slow-scan CCD cameras.

Noise in CCD Cameras

There are two major sources of noise in CCD cameras: dark noise and read-out noise. Although great improvements have been made over the past few years in the reduction of CCD dark noise at room temperature, cooling the chip reduces the noise further, roughly tenfold per 20° C decrease. Dark noise is most evident as "hot" pixels (white dots) in images obtained with room-temperature CCD cameras after integration periods of 4 or 5 seconds. Cooling to 0° C is usually sufficient for integration periods up to 30 seconds. Experiments requiring very long exposures (e.g., chemiluminescence) need even lower sensor temperatures. Digital cameras are available in cooled or uncooled versions.
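The tenfold-per-20° C rule quoted above translates directly into a quick estimate of how much cooling helps. A minimal sketch, assuming the rule holds across the whole temperature range considered:

```python
def dark_noise_factor(delta_t_c):
    """Relative dark noise after cooling by delta_t_c degrees C,
    using the tenfold-reduction-per-20-C rule quoted in the text."""
    return 10 ** (-delta_t_c / 20.0)

# Cooling a sensor from +20 C to 0 C cuts dark noise tenfold;
# cooling a further 20 C (to -20 C) cuts it a hundredfold.
print(dark_noise_factor(20))  # -> 0.1
print(dark_noise_factor(40))  # -> 0.01
```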

Noise sources vary in digital cameras, and several common types are presented as oscilloscope traces in Figure 4. Photon noise, dark current, fixed-pattern noise and photo-response nonuniformity are generated on the CCD itself, while reset noise, 1/f noise and quantization noise occur during amplification and conversion of the analog signal to a digital output. Read-out noise is generated in the amplifier on the CCD chip that converts the stored charge of each photodiode (i.e., pixel) into an analog voltage to be quantified by A/D conversion. Read-out noise may be viewed as a "toll" that must be paid for reading the stored charge. The size of this toll has decreased steadily in the past few years to 5-10 electrons/pixel because of improvements in CCD design, clocking and sampling methods. Read-out noise increases in proportion to read-out speed. The cost of going faster is more noise and, hence, more uncertainty in the voltage determination and fewer bits of resolution. This is why slow-scan cameras generally exhibit lower read-out noise than faster detectors and have a higher number of useful bits. Digital cameras range from those with 8-12-bit depth at 30 frames per second to those with 16-bit depth at 1-2 frames per second.


One solution to the speed/read-out noise problem is the use of multiple output amplifiers (taps) on a large CCD. Instead of reading the stored charge from the entire CCD through one output amplifier, the sensor is divided into four or eight sections, each of which has its own amplifier. The image is read out in parts and then stitched together in software at rates of several frames per second. The required speed, and the associated noise, of each amplifier are reduced accordingly.
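The benefit of multiple taps reduces to simple arithmetic: frame read-out time is the pixel count divided by the aggregate conversion rate. A sketch (the sensor size and per-amplifier rate below are hypothetical, and stitching overhead is ignored):

```python
def readout_time_s(width, height, pixel_rate_hz, taps=1):
    """Time to read one full frame when the sensor is split among
    `taps` output amplifiers, each digitizing at pixel_rate_hz.
    Overhead for stitching the sections together is ignored."""
    return (width * height) / (pixel_rate_hz * taps)

# A hypothetical 2048 x 2048 sensor with 10 MHz amplifiers:
print(readout_time_s(2048, 2048, 10e6))           # one tap: ~0.42 s/frame
print(readout_time_s(2048, 2048, 10e6, taps=4))   # four taps: ~0.10 s/frame
```

Equivalently, four taps deliver the same frame rate while each amplifier runs at one quarter of the speed, with correspondingly lower read-out noise.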

Signal-to-Noise Ratio

Since photons arrive randomly at the sensor surface, their numbers fluctuate with a noise, described by Poisson statistics, that is equal to the square root of the number of detected photons. Camera noise adds to this photon statistical noise and further reduces the signal-to-noise ratio (S/N). The highest S/N that can be achieved by a digital camera is the square root of the maximum accumulated charge (the full-well capacity). A simple estimate of the S/N of any homogeneous region in an image is the average intensity of the region of interest divided by the standard deviation of the intensity of that region.
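Both estimates in this paragraph, the Poisson limit and the mean/standard-deviation measurement, can be sketched in a few lines of Python (the sample intensities below are hypothetical):

```python
import math
import statistics

def shot_noise_limit(detected_photons):
    """Best possible S/N for a given number of detected photons:
    Poisson noise is sqrt(N), so S/N = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(detected_photons)

def region_snr(intensities):
    """Simple S/N estimate for a homogeneous image region:
    mean intensity divided by the standard deviation."""
    return statistics.mean(intensities) / statistics.stdev(intensities)

# A CCD with a 49,000 e- full well can at best reach S/N of about 221:
print(round(shot_noise_limit(49000)))  # -> 221

# Hypothetical pixel values from a uniform patch of an image:
print(region_snr([100, 102, 98, 100]))
```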

How Many Pixels in a Digital Camera Are Enough?

The resolution of a CCD is a function of the number of photodiodes and their size relative to the projected image. CCD arrays of 1000 x 1000 photodiodes are now commonplace in digital cameras. The trend in consumer and scientific-grade CCD manufacture is for the sensor size to decrease, with some CCD photodiodes as small as 4 x 4 microns. From sampling theory, adequate resolution of an object can only be achieved if at least two samples are made for each resolvable unit. (Many users prefer three samples per resolvable unit to ensure sufficient sampling).

Pixel Size Requirements for Maximum Resolution in Optical Microscopy

Objective (NA)    Resolution Limit    Projected Size on    Required Pixel
                  (microns)           CCD (microns)        Size (microns)
4x (0.20)         1.5                 5.8                  2.9
10x (0.45)        0.64                6.4                  3.2
20x (0.75)        0.39                7.7                  3.9
40x (0.85)        0.34                13.6                 6.8
40x (1.30)        0.22                8.9                  4.5
60x (0.95)        0.31                18.3                 9.2
60x (1.40)        0.21                12.4                 6.2
100x (0.90)       0.32                32.0                 16.0
100x (1.25)       0.23                23.0                 11.5
100x (1.40)       0.21                21.0                 10.5

Table 1

In an epifluorescence microscope, the Abbe diffraction limit of a 1.4-numerical aperture lens at 550 nanometers is 0.22 microns. For a 100x objective lens, the projected size of a diffraction-limited spot on the face of the CCD is 22 microns. A photodiode size of 11 x 11 microns would just allow the optical and electronic resolution to be matched, with a 7 x 7 micron photodiode preferred. Pixel size requirements for maximum resolution in optical microscopy are presented in Table 1 for the objective magnification range of 4x to 100x. With a 100x objective and no additional magnification, a 1000 x 1000 CCD with 7 x 7 micron photodiodes would capture a field-of-view of 70 x 70 microns in the object plane. When the size of the image projected onto the CCD is appropriately adjusted for proper sampling, a larger number of photodiodes in the CCD increases the field-of-view, not the resolution. The resolution requirements of various output devices may require oversampling at the sensor so that the final product (e.g., slide, print or poster) has adequate resolution at the final size.
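The sampling argument above reduces to a short calculation: project the diffraction-limited spot onto the CCD and divide by the number of samples desired per resolvable unit. A sketch using the 100x / 1.4 NA example from the text:

```python
def required_pixel_um(resolution_um, magnification, samples_per_unit=2):
    """Largest photodiode (microns) that still samples a
    diffraction-limited spot adequately: the spot projected onto the
    CCD spans resolution_um * magnification microns, and sampling
    theory requires at least two pixels across it (three preferred).
    """
    projected_um = resolution_um * magnification
    return projected_um / samples_per_unit

# The 100x / 1.4 NA example: a 0.22 micron spot projects to 22 microns
# on the CCD, so pixels must be no larger than 11 microns at Nyquist
# sampling, or about 7.3 microns with three samples per unit.
print(required_pixel_um(0.22, 100))
print(required_pixel_um(0.22, 100, samples_per_unit=3))
```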

Dynamic Range

Intrascene dynamic range denotes the useful range of intensities that can be simultaneously detected in the same field-of-view. Interscene dynamic range is the range of intensities that can be accommodated when detector gain, integration time, lens aperture or other variables are adjusted for differing fields of view. Although small photodiodes on a CCD are desirable from a resolution viewpoint, they limit the dynamic range of the device. The full-well capacity of a CCD is about 1000 times the cross-sectional area of each photodiode. Thus, a CCD with 7 x 7 micron pixels should have a full-well capacity of 49,000 electrons or holes. (A hole is the region of the silicon from which the electron came and constitutes an equally valid and usable measure of detected photons. The term electron is used here, although most CCDs read out the number of holes generated rather than electrons.) Since CCDs do not have inherent gain, one electron-hole pair is accumulated for each detected photon.

The dynamic range of a CCD is typically defined as the full-well capacity divided by the camera noise, where the camera noise is calculated as the square root of the sum of the squares of the dark and read-out noise. Thus, the dynamic range of a 49,000-electron full-well capacity CCD with 10 electrons of read-out noise and negligible dark noise is about 4900, corresponding to 12 bits. However, digitization of the output from such a camera at 12-bit depth means the 49,000 electrons are divided into 4096 A/D units, each containing 12 electrons (49,000/4096). Since the noise is 10 electrons, each gray-level step is only 1.2 times the noise and cannot be discriminated. Digitization at 10 bits would result in each A/D unit being 48 electrons, about five times the noise level, and each of the 1024 gray levels could then be discriminated. A tabulation relating bit depth to grayscale levels and dynamic range (in decibels) is presented in Table 2.
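The worked example in this paragraph can be checked with a few lines of Python; the quadrature noise sum and the decibel conversion follow the definitions above:

```python
import math

def dynamic_range(full_well_e, read_noise_e, dark_noise_e=0.0):
    """Dynamic range = full-well capacity / camera noise, where the
    noise terms add in quadrature (square root of sum of squares)."""
    noise = math.sqrt(read_noise_e ** 2 + dark_noise_e ** 2)
    return full_well_e / noise

def dynamic_range_db(dr):
    """Express a dynamic range ratio in decibels."""
    return 20 * math.log10(dr)

def electrons_per_adu(full_well_e, bit_depth):
    """Electrons per gray-level step at a given digitization depth."""
    return full_well_e / 2 ** bit_depth

# The example from the text: 49,000 e- full well, 10 e- read noise.
dr = dynamic_range(49000, 10)
print(round(dr))                           # -> 4900 (about 12 bits)
print(round(electrons_per_adu(49000, 12))) # -> 12 e- per step at 12 bits
print(round(electrons_per_adu(49000, 10))) # -> 48 e- per step at 10 bits
```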

Control of Speed, Effective Pixel Size and Field-of-View

Slow-scan digital cameras allow control over the read-out rate, the effective size of the pixels that constitute the sensor and the field-of-view. Scientific-grade CCD cameras usually offer two or more read-out rates so that speed can be traded off against noise. The effective size of a pixel in many slow-scan digital cameras can be increased by binning, a process in which the charge from a cluster of adjacent photodiodes is pooled and treated as if it came from a larger detector.

Dynamic Range of Charge-Coupled Devices

Bit Depth    Grayscale Levels    Dynamic Range (Decibels)
1            2                   6 dB
2            4                   12 dB
3            8                   18 dB
4            16                  24 dB
5            32                  30 dB
6            64                  36 dB
7            128                 42 dB
8            256                 48 dB
9            512                 54 dB
10           1,024               60 dB
11           2,048               66 dB
12           4,096               72 dB
13           8,192               78 dB
14           16,384              84 dB
16           65,536              96 dB
18           262,144             108 dB
20           1,048,576           120 dB

Table 2

Binning is useful when light levels are very low and few photons are detected because it enables the investigator to trade spatial resolution for sensitivity. In addition, most slow-scan CCD cameras allow region-of-interest read-out, in which a selected portion of the image is displayed and the remainder of the accumulated charge discarded. The framing rate generally increases in proportion to the reduction in the field-of-view. For example, a CCD with a sensor size of 1000 x 1000 and an output rate of 10 frames/s can produce 100 frames/s if the read-out region is reduced to 100 x 100 diodes. By trading off field-of-view and framing rate, an investigator can adjust to a far wider range of experimental circumstances than would be possible with a fixed framing-rate camera.
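Both trade-offs, binning and region-of-interest read-out, can be modeled in a few lines. The sketch below pools charge in software to mimic on-chip 2 x 2 binning (a simplification: real binning pools the charge before the read-out amplifier, so read-out noise is paid only once per superpixel) and scales the frame rate by the row reduction, as in the 1000 x 1000 example above:

```python
def bin_frame(frame, factor=2):
    """Pool charge from factor x factor clusters of pixels,
    modeling on-chip binning in software. `frame` is a list of
    rows; dimensions are assumed divisible by `factor`."""
    h, w = len(frame), len(frame[0])
    return [[sum(frame[y + dy][x + dx]
                 for dy in range(factor) for dx in range(factor))
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

def roi_frame_rate(full_rate_hz, full_rows, roi_rows):
    """Frame rate scales with the reduction in rows read out,
    as in the 1000 x 1000 -> 100 x 100 example in the text."""
    return full_rate_hz * full_rows / roi_rows

# 2x2 binning pools four photodiodes into one superpixel:
print(bin_frame([[1, 2], [3, 4]]))    # -> [[10]]
# Reducing a 1000-row read-out to 100 rows raises 10 fps to 100 fps:
print(roi_frame_rate(10, 1000, 100))  # -> 100.0
```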

Intensified Digital Cameras

Several manufacturers now offer digital cameras equipped with an image intensifier for very low-light-level imaging. These devices have a photocathode in close proximity to a microchannel plate electron multiplier and a phosphorescent output screen (see the illustration in Figure 5). The photocathode in the latest generation of these devices has a high quantum efficiency (up to 50 percent) in the blue-green end of the spectrum. Intensifier gain is adjustable over a broad range, with a typical maximum of about 80,000. Thermal noise from the photocathode, as well as electron multiplication noise from the microchannel plate, reduce the S/N of an intensified CCD camera below that of a slow-scan CCD. The resolution of an intensified CCD depends on both the intensifier and the CCD but is usually limited by the intensifier microchannel plate geometry to about 75 percent of that of the CCD alone.

Intensified digital cameras have a reduced dynamic range (particularly intrascene dynamic range) compared with slow-scan cameras, and most are limited to 10-bit resolution. However, intensifier gain may be rapidly and reproducibly changed to accommodate variations in scene brightness, thereby increasing the interscene dynamic range.


Indeed, since image intensifiers can be gated, that is, turned off or on in a few nanoseconds, relatively bright objects can be visualized by a reduction in the "on" time. Gated, intensified digital cameras are required for most time-resolved fluorescence microscopy applications because the detector gain must be modulated at high-frequency in synchrony with the light source. Because of the low light fluxes required in living cells, intensified CCD cameras are frequently employed to study dynamic events and for ratio imaging.

Choosing the Appropriate Camera

No single detector will meet all requirements in fluorescence microscopy, so the investigator must compromise. Exposure time is often the critical parameter. When time is available for image integration, a slow-scan CCD camera will outperform an intensified camera in all areas, in large part because of its higher quantum efficiency and lower noise. Cooling always improves digital camera performance, although the difference may not be noticeable when the integration time is a few seconds or less and the digitization depth is 8-12 bits. For applications involving digital deconvolution, the detector of choice is a cooled, scientific-grade, slow-scan camera capable of producing a high-resolution, 14-16-bit image. Photodiode size matters: some CCDs have such small pixels that the integration period may have to be limited to avoid saturation of the charge storage wells, with the result that the dynamic range and peak S/N may be compromised. If the event under investigation is rapid but can be precisely triggered, then a slow-scan CCD operating in a burst or high-speed mode may be suitable. However, when the event is not readily predictable and the specimen must be monitored continuously at low incident light flux, the intensified CCD is the detector of choice. For this reason, single-molecule fluorescence imaging uses an intensified digital camera.

The photomicrograph presented in Figure 6 is a combination epi-fluorescence/phase contrast image of a thin section of mouse intestine triple-stained with several fluorescent chromophores. A Nikon Eclipse E600, similar to the one illustrated in Figure 1, was used with a Nikon DXM 1200 digital camera to record the image. When color images of routine histological specimens are needed, a three-CCD camera is preferable to an inexpensive single-sensor camera with an integral color mask. High-resolution, single-sensor CCD cameras equipped with a removable, red-green-blue, liquid-crystal filter have proven very useful for both brightfield and fluorescence microscopy.

Future Prospects

Recent improvements in the performance of CMOS (complementary metal oxide semiconductor) cameras herald a potentially important future role for these devices in fluorescence microscopy. CMOS cameras have an amplifier and digitizer associated with each photodiode in an integrated on-chip format. The result is a low-cost, compact, versatile detector that combines the virtues of silicon detection without the problems of charge transfer. CMOS sensors allow gain manipulation of individual photodiodes, region-of-interest read-out, high-speed sampling, electronic shuttering and exposure control, and their output is in a computer-compatible digital format. Until recently, they suffered from high fixed-pattern noise associated with switching and sampling artifacts, but these problems are rapidly being solved. It is likely that CMOS sensors will replace the CCD in digital cameras for a number of scientific applications in the near future.


Contributing Authors

Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.

Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.