Optical System and Detector Requirements for Live-Cell Imaging

In designing an optical microscopy system for live-cell investigations, the primary considerations are detector sensitivity (signal-to-noise), the required speed of image acquisition, and specimen viability. The relatively high light intensities and long exposure times that are typically employed in recording images of fixed cells and tissues (where photobleaching is the major consideration) must be strictly avoided when working with living cells. In virtually all cases, live-cell microscopy represents a compromise between achieving the best possible image quality and preserving the health of the cells. Rather than unnecessarily oversampling time points and exposing the cells to excessive levels of illumination, the spatial and temporal resolution of the experiment should be limited to what the goals of the investigation require.

In principle, the ideal live-cell image acquisition system would be sensitive enough to acquire superior images from weakly fluorescent specimens, while simultaneously being fast enough to record all dynamic processes. In addition, the system would have sufficient resolution to capture the finest specimen detail and a wide dynamic range capable of accurately measuring very minute differences in intensity. Unfortunately, optimizing any one of these criteria is only accomplished at the expense of the others. It is therefore currently impossible to design an all-purpose live-cell imaging system that will be ideal for the entire range of possible investigations. Instead, the researcher must compromise by determining the most important parameters for optimization while limiting the sacrifices made in those variables deemed of lesser importance. Microscope configuration is ultimately determined by the requirements of the imaging mode(s), the necessities of maintaining specimen viability during the experiment, the difficulty of the labeling protocols, and the availability of equipment.

Presented in Figure 1 is a modern inverted research-level tissue culture microscope equipped with four camera ports, each coupled to a different camera designed for a specific imaging mode. In most cases, microscopes of this design direct either 100 percent of the light to a single port (one port at a time) or split the light 80:20 between the camera port and the eyepieces. For critical imaging at low light levels, the investigator should ensure that the most sensitive cameras are attached to a port receiving 100 percent of the light emitted by the specimen. In Figure 1, the full-color CCD camera attached to the bottom port (a) receives light from the objective without reflection from mirrors or prisms and, in this case, is employed to image multiply labeled fluorescence specimens (or stained specimens in brightfield mode). The high-performance electron-multiplying CCD (EMCCD) camera attached to the right-side port (b) is used to image specimens that exhibit very low levels of fluorescence. For differential interference contrast imaging deep into thick tissue with infrared illumination, a camera (c; left-side port) having a sensor with high quantum efficiency between 700 and 1000 nanometers is required. Finally, for high-resolution monochrome imaging by total internal reflection and other fluorescence techniques, a Peltier-cooled camera (d; front port) with relatively small pixels (6 micrometers) is ideal. Configurations such as that illustrated in Figure 1 are expensive, but can be extremely versatile for core imaging facilities.

Live-cell imaging is conducted using a wide spectrum of contrast-enhanced imaging modes in optical microscopy. A majority of the investigations involve some form of fluorescence microscopy, often coupled to one or more transmitted light techniques. The fluorescence techniques include traditional widefield epi-fluorescence, laser scanning confocal, spinning disk and swept field confocal, multiphoton, total internal reflection (TIRF), fluorescence correlation spectroscopy, lifetime imaging (FLIM), photoactivation, and laser trapping. As a subset, specialized imaging methodologies such as spectral imaging, multicolor imaging, colocalization, time-lapse sequences, photobleaching recovery techniques (FRAP, FLIP, and FLAP), resonance energy transfer (FRET), speckle microscopy (FSM), and patch clamping are often coupled to one or more of the primary imaging modes. The fluorescence techniques can be supplemented by traditional brightfield modes, including differential interference contrast (DIC), Hoffman modulation contrast (HMC), and phase contrast, as confirmation of fluorescent probe localization. In almost all cases, each microscope configuration requires a number of unique considerations that must be implemented for successful live-cell imaging. Regardless of the microscope and digital camera system employed for imaging living cells and tissues, the two most important and limiting factors are maintaining cell viability and achieving the highest possible signal level above background noise and autofluorescence.

Optimizing Signal-to-Noise in Live-Cell Imaging

One of the fundamental differences between imaging fixed and living cells is that the former provide the investigator with ample latitude in defining image acquisition parameters, such as locating a suitable viewfield and determining exposure time, as well as adjusting the electronic gain settings, read-out rates, and offset values. As a result, it is possible in most cases (with adequately stained fixed specimens) to acquire images that utilize the full dynamic range of the camera system, thus producing the optimal signal-to-noise ratios. Unfortunately, the situation is different with live cells due to the limitations on imaging parameters imposed by the strict requirement of maintaining cell viability. All too often with acquired images of living cells, the intensity of the specimen is only a few gray levels higher than that of the background. In this case, the most important imaging consideration becomes the dark current and read noise levels of the detector system. Economical cameras often feature higher noise levels, leading to a less uniform background image that can obscure signal from the specimen, an effect that grows more serious as the readout speed is increased. In virtually all live-cell imaging applications, the detector choice is paramount in determining the success or failure of an experiment.

The four primary sources of noise in digital imaging arise from the detector, the illumination system, Poisson or shot noise (due to the stochastic nature of a photon flux), and stray light. In most cases, the systematic reduction of noise can be accomplished by carefully choosing the detector and optimizing the illumination conditions. Virtually every type of photon detector introduces some form of noise into each measurement recorded with the device. The photomultipliers employed in laser scanning and multiphoton microscopy can produce spurious electrons within the signal amplification system. Likewise, the charge-coupled devices (CCDs) utilized in widefield, deconvolution, total internal reflection, spinning disk, and swept field microscopy exhibit a background dark current associated with each pixel. Improvements in CCD and photomultiplier design, including cooling the devices to very low temperatures, have reduced the contribution of dark current to very low levels. However, especially in the case of CCDs, reading each pixel and converting the analog signal into a digital equivalent contributes an additional noise component known as read noise, the major noise artifact in the current generation of cooled digital camera systems.

Illustrated in Figure 2 is the interdependence of digital camera sensitivity with the readout speed and image quality in live-cell imaging. The specimen is a culture of human cervical carcinoma cells (HeLa line) expressing a fusion of monomeric Kusabira Orange (mKO) fluorescent protein and human alpha-tubulin imaged in widefield fluorescence illumination with a TRITC filter combination and grayscale camera system operating at readout speeds of 10 or 1.25 megahertz. When the camera CCD is shuttered (receiving no light), the background noise is dramatically lower when the slow readout speed is employed, as depicted in Figures 2(a) and 2(b) at 10 and 1.25 megahertz speeds, respectively. This difference does not seriously affect camera performance when the exposure time can be adjusted to utilize the entire dynamic range, as evidenced by the similar quality of the images presented in Figures 2(c) and 2(d). However, under low light level conditions, the slower readout speed (Figure 2(f)) provides superior image quality. The images in Figures 2(e) and 2(f) were captured at approximately 10-fold lower light intensity than those in Figures 2(c) and 2(d). Reducing the illumination level another 5-fold produces a far more dramatic effect, as shown in Figures 2(g) and 2(h), where the slower readout speed provides a marginally acceptable image (Figure 2(h)) while the fast readout mode (Figure 2(g)) does not.

A successful digital imaging experiment requires that the spatial illumination pattern of the specimen be constant across the entire viewfield and between successive images. Depending upon the illumination source, the optical properties spanning the specimen field are usually fairly constant (termed spatial invariance) in microscopes using tungsten-halogen lamps. However, in laser scanning microscopes the illumination dose delivered to a particular pixel can vary for each scan. In addition, the mercury plasma arc-discharge lamp sources generally employed in widefield fluorescence microscopy do not provide even intensity across the spectrum, but instead produce discrete high-intensity peaks at specific wavelengths. Xenon lamps, in contrast, exhibit a far more evenly-distributed intensity profile across the visible spectrum, but are deficient in the ultraviolet region (an illumination range usually avoided in live-cell imaging).

Combination arc-discharge lamps (for example, the metal-halide lamp), featuring a mixture of properties intermediate between mercury and xenon, exhibit perhaps the best characteristics of both species by providing a continuous spectrum from the ultraviolet to the infrared in addition to the strong mercury lines. Regardless of the light source, the illumination field can be rendered almost uniform by placing an optical fiber (or a liquid light guide) between the lamphouse and the microscope optical train illumination input port. Any remaining illumination gradient across the viewfield can then be computationally corrected using flat-field algorithms. It is important to note that temporal variations in lamp output, which can vary by 10-percent from the mean, are not corrected by optical fibers and require the application of a stabilized power supply.
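One common form of the flat-field correction mentioned above divides the dark-subtracted image by a normalized, dark-subtracted blank-field reference. The sketch below uses synthetic values purely to illustrate the arithmetic; the function name and numbers are illustrative, not taken from the text:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Correct uneven illumination: (raw - dark) / normalized (flat - dark)."""
    raw = raw.astype(float)
    gain = flat.astype(float) - dark
    gain /= gain.mean()                     # normalize so the mean gain is 1.0
    return (raw - dark) / np.maximum(gain, 1e-6)

# Synthetic example: a uniform specimen under a left-to-right gradient.
dark = np.full((4, 4), 100.0)               # dark (offset) frame
gradient = np.linspace(0.5, 1.5, 4)         # illumination falls off 3:1
flat = dark + 1000.0 * gradient             # blank-field reference
raw = dark + 500.0 * gradient               # specimen modulated by same gradient
corrected = flat_field_correct(raw, flat, dark)
print(corrected)                            # ~500 everywhere: gradient removed
```

Note that this corrects only the fixed spatial pattern; as stated above, temporal fluctuations in lamp output require a stabilized power supply instead.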

The measurement of photon flux is inherently a statistical process, having an uncertainty factor associated with signal detection. Termed Poisson or shot noise (as described above), the net effect is that for a measurement of N photons, the uncertainty equals the square root of N, which is also the signal-to-noise ratio. Maximizing the number of signal photons (N) leads directly to an improvement in signal-to-noise and image contrast. The simplest method for improving the signal is to collect more photons, usually by taking longer exposures or by increasing the level of excitation illumination, but these measures can also increase phototoxicity and compromise cell viability. A second, and perhaps more useful alternative, is to increase the detector sensitivity.
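The square-root relationship can be verified with a few lines of arithmetic (the photon counts below are arbitrary illustrations):

```python
import math

# Poisson (shot) noise: for a mean of N detected photons the standard deviation
# is sqrt(N), so the signal-to-noise ratio is N / sqrt(N) = sqrt(N).
for photons in (100, 400, 10_000):
    snr = photons / math.sqrt(photons)
    print(f"{photons:>6} photons -> SNR = {snr:.0f}")   # 10, 20, 100
```

Quadrupling the number of collected photons only doubles the signal-to-noise ratio, which is why longer exposures yield diminishing returns against the cost in phototoxicity.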

In order to overcome the low light levels associated with live-cell imaging, modern digital CCD cameras can be configured to deliver a significant increase in sensitivity by implementing a process known as binning (described in Figures 3 and 8). During normal readout, the pixels in a CCD are measured by transferring a row of horizontal pixels into the read register, which is then sequentially analyzed. In a binned image, the signal (photoelectrons) from a group of adjacent pixels on the image sensor (such as a 2 x 2 or 4 x 4 block) is combined and assigned to a single pixel value in the readout array. This is achieved by transferring multiple parallel rows of pixels into the serial read register (for example, two rows for 2 x 2 binning and four rows for 4 x 4 binning) and then reading the combined charge in groups of two or four pixels. In the case of 2 x 2 binning, there is a two-fold loss in resolution, a four-fold increase in signal, and a two-fold improvement in signal-to-noise. Obviously, the improvement in signal-limited applications can be significant, although at the cost of spatial resolution.
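As an illustrative sketch (the camera performs this summation on-chip, before readout; the array arithmetic below simply mimics the result), 2 x 2 binning trades half the resolution in each dimension for four times the signal per pixel:

```python
import numpy as np

def bin_image(img, b):
    """Sum b x b pixel blocks, mimicking the charge summation of on-chip binning."""
    h, w = img.shape
    return img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).sum(axis=(1, 3))

frame = np.full((1360, 1024), 50.0)   # uniform 50-photoelectron frame (illustrative)
binned = bin_image(frame, 2)
print(binned.shape)    # (680, 512) -- half the resolution in each dimension
print(binned[0, 0])    # 200.0 -- four times the signal per binned pixel
```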

Figure 3 illustrates the effects of combining adjacent pixels on spatial resolution and the final dimensions of digital images captured using increasing levels of binning. The specimen is a culture of Indian Muntjac deer skin fibroblast cells expressing a fusion of mCherry fluorescent protein (pseudocolored red) and human beta-actin, which localizes in stress fibers comprising the filamentous cytoskeletal network. As is commonly observed with fluorescent protein labels, the cells that display superior localization are often those that express the fusion product at very low levels, significantly reducing the available signal. With no binning enabled (Figure 3(a)), the signal is too low to be distinguishable at exposure times that produce negligible phototoxicity and photobleaching. Binning 2 x 2 and 4 x 4 progressively increases the signal level, but at the cost of spatial resolution (Figures 3(b) and 3(c)). The brightest image is produced by binning 8 x 8 pixels (Figure 3(d)), but suffers drastic resolution loss. An important point to note is that the image dimensions are decreased proportionally by increasing the number of pixels combined during the binning process. For example, the unbinned (1 x 1) image in Figure 3 was originally 1360 x 1024 pixels, while the (2 x 2), (4 x 4), and (8 x 8) binned images were 680 x 512, 340 x 256, and 170 x 128 pixels, respectively. These images have been reduced in size for illustrative purposes in the figure. Likewise, a digital camera having a display size of 512 x 512 will produce images of 256 x 256, 128 x 128, and 64 x 64 pixels when binned in a similar fashion.

A key point in the binning concept is that the addition of read noise occurs after the binning process, so even though several pixels are being combined, there is only one read event per binned pixel. In contrast, if a 2 x 2 box of pixels were computationally averaged after data collection, the result would be inferior to binning because of the separate read noise contribution by each of the four pixels. Therefore, even though binning will sacrifice spatial resolution, it leads to a significant increase in the signal-to-noise ratio, an especially useful advantage when performing experiments on photo-sensitive cells that require low light levels and short exposure times. In live-cell imaging, the critical parameter is often the ability to detect faint fluorescent signals rather than to resolve them spatially, so the increase in image intensity and speed of acquisition afforded by binning more than offsets the loss in resolution. As an alternative to binning, many high-performance cameras enable the investigator to decrease image acquisition times by reading only a sub-array or selected region of interest on the full CCD array, a useful mechanism to preserve spatial resolution while simultaneously reducing the amount of time cells are exposed to harmful illumination.
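The advantage of a single read event can be made concrete with hypothetical numbers (a 10-electron signal and 6 electrons RMS of read noise are assumptions chosen for illustration, not measured values):

```python
import math

# Hypothetical values chosen for illustration only.
signal = 10.0        # photoelectrons per original pixel
read_noise = 6.0     # electrons RMS per read event

# On-chip 2 x 2 binning: charge from four pixels is summed BEFORE a single read.
snr_binned = 4 * signal / math.sqrt(4 * signal + read_noise ** 2)

# Software 2 x 2 summation: four separate reads each contribute read noise.
snr_software = 4 * signal / math.sqrt(4 * signal + 4 * read_noise ** 2)

print(f"on-chip binning SNR: {snr_binned:.2f}")    # ~4.59
print(f"software sum SNR  : {snr_software:.2f}")   # ~2.95
```

With these assumed values, on-chip binning delivers roughly 50 percent more signal-to-noise than summing the same pixels after readout, and the gap widens as the read noise grows relative to the signal.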

In order to overcome the shortcomings of conventional high-performance CCD cameras for applications that demand rapid frame-rate capture at extremely low light levels, manufacturers have introduced an innovative method for amplifying weak signals above the CCD read noise floor. By incorporating an on-chip multiplication gain register, electron multiplying devices (EMCCDs) achieve the single-photon detection sensitivity typical of intensified or electron-bombarded CCDs at much lower cost and without compromising the quantum efficiency and resolution characteristics of the conventional CCD architecture. The distinguishing feature of EMCCDs is the incorporation of a specialized extended serial register on the sensor chip that generates multiplication gain through the process of impact ionization within the silicon sub-structure (see Figure 4). The photon-generated charge is elevated above the read noise, even at high frame rates, and is applicable to any of the current CCD sensor configurations, including back-illuminated devices.

A significant advantage of EMCCD camera systems is that they are considerably less expensive to manufacture than intensified CCDs due to the signal amplification stage being incorporated directly into the CCD structure. For critical live-cell investigations with low-intensity signals, such as those encountered in single-molecule imaging, total internal reflection, spinning disk and swept field confocal microscopy, flux determinations of calcium or other ions, and time-resolved three-dimensional microscopy, the EMCCD offers significant advantages over other sensors designed for low signal levels. Additionally, when employed with the higher signal levels of conventional fluorescence imaging techniques, the extreme sensitivity of the EMCCD system allows the application of lower fluorophore concentrations and/or lower power levels from the excitation source, thereby reducing both phototoxicity and photobleaching of the fluorescent probe.

The Microscope Optical System

Microscopes targeted at live-cell imaging applications must be equipped with a high-quality frame constructed of sturdy composites or aluminum and should be stabilized by mounting securely on a vibration-free platform. The exterior optical surfaces, as well as the component housings (condenser, lamphouse, objective turret, and so on), should be kept clean, and the environment surrounding the microscope should be dust-free and of low relative humidity. Among the routine procedures necessary for live-cell imaging is ensuring that the microscope optical pathway and diaphragms are aligned according to the principles of Köhler illumination. The laboratory room housing the microscope benefits from the ability to extinguish all overhead lighting in order to reduce the level of stray light entering the microscope through the eyepieces or the specimen chamber.

Electronic digital camera systems are usually mounted onto the microscope body via one or more camera ports using specialized adapters. Research-level microscopes are often equipped with multiple ports to enable attaching several cameras simultaneously (see Figure 1). This feature is particularly useful for microscopes that are employed in multiple imaging modes, so that cameras optimally configured for each contrast-enhancing technique can be quickly accessed when necessary. Within the body of the microscope, a cascade of mirrors, beamsplitters, and prisms is strategically positioned to reflect image-forming light waves to the various camera ports and eyepieces. It is important to note that any light diverted to the eyepieces during image acquisition comes at the expense of light delivered to the camera, thus decreasing the signal-to-noise level. Therefore, on microscopes so equipped, the port directional control should be adjusted to send 100 percent of the light to the camera port used for acquiring photon-limited fluorescence images.

The choice of objectives is critical for gathering the maximum signal from the specimen and transferring it to the microscope optical train. An excessive number of elements in the optical path, such as mirrors, beamsplitters, projection lenses, filters, and prisms can seriously degrade the signal-to-noise level in captured images and must be kept to an absolute minimum. In fluorescence imaging, objectives with very high numerical apertures are required to maximize the amount of light that is collected from the specimen. Numerical aperture, which defines how much light from a single point source can be gathered by the lens, is arguably the most important consideration in selecting objectives for live-cell imaging. This value is engraved on the objective barrel and ranges from 0.25 for low-magnification (10x) dry objectives to 1.20 for water and 1.45 for oil-immersion versions (40x through 100x). In mathematical terms, the numerical aperture is expressed as:

Numerical Aperture (NA) = η × sin(θ)

Thus, the light-gathering ability of the objective is determined by the refractive index (η) of the medium between the front lens and the specimen multiplied by the sine of the angle (θ) of maximally diffracted light rays that are able to contribute to image formation. Furthermore, the resolution of an image produced by the objective is a function of numerical aperture. According to the Rayleigh criterion (a conservative estimate), resolution is equal to a constant (0.61) multiplied by the wavelength of illumination and divided by the objective numerical aperture. In addition, the numerical aperture defines the intensity (brightness) of the image, which is proportional to the fourth power of the numerical aperture, but only inversely proportional to the second power of the magnification. Therefore, a small increase in numerical aperture can yield a significant improvement in signal. For this reason, live-cell fluorescence imaging usually requires the highest numerical aperture objectives coupled to high refractive index immersion media (oil or water).
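These relationships are straightforward to evaluate numerically. The sketch below assumes a 1.4 NA oil-immersion objective (η = 1.515) and 550-nanometer illumination; the numbers are illustrative, not prescriptive:

```python
import math

def rayleigh_resolution(wavelength_nm, na):
    """Lateral resolution by the Rayleigh criterion: r = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

def relative_brightness(na, magnification):
    """Epi-fluorescence image brightness scales as NA^4 / M^2 (arbitrary units)."""
    return na ** 4 / magnification ** 2

na, eta = 1.4, 1.515                       # assumed oil-immersion objective
theta = math.degrees(math.asin(na / eta))  # acceptance half-angle implied by the NA
print(f"acceptance half-angle : {theta:.1f} degrees")                   # ~67.5
print(f"resolution at 550 nm  : {rayleigh_resolution(550, na):.0f} nm") # ~240
print(f"60x vs 100x brightness: "
      f"{relative_brightness(na, 60) / relative_brightness(na, 100):.2f}x")  # ~2.78
```

The final line anticipates the point made below: at equal numerical aperture, dropping from 100x to 60x magnification nearly triples the image brightness.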

As discussed above, the signal brightness is inversely proportional to the square of the objective magnification, so that lower magnification objectives are often beneficial when imaging dim samples. For example, if the signal-to-noise ratio becomes a limiting factor in a specimen imaged with a 100x objective of numerical aperture 1.4, changing to a 60x or 40x objective of similar numerical aperture will provide a significant improvement in brightness. In fact, a particular fluorescent feature should appear almost three times brighter when imaged with a 60x versus a 100x objective (both having a numerical aperture of 1.4). The observed intensity is also a function of variations in the transmission quality of the objective and it is always worthwhile to consider both magnification and binning together in order to assess the final magnification required for a particular experiment.

For imaging multiply labeled fixed cells (and in some cases, living cells), objectives that are well corrected for chromatic aberration throughout the visible light spectrum (ranging from 400 to 700 nanometers; termed apochromats) and having a flat imaging plane (referred to as plan or plano) are considered ideal. Unfortunately, apochromatic and plan-apochromatic objectives, as well as those designed to capture a wider viewfield, inevitably contain more optical elements (lenses) than less precise objectives (achromats and fluorites). The high degree of correction enables these objectives to be used for transmitted light techniques including phase contrast and DIC with minimal artifacts. The result, however, is that light throughput is sacrificed for optimum optical correction, a consequence that has little effect for imaging fixed cells, but is often counterproductive in live-cell imaging. This downside is best demonstrated by the fact that a plan-apochromatic 40x objective having a high numerical aperture of 1.4, which theoretically should be approximately six times brighter than a similar 100x objective, is in practice only about four times as bright. Regardless, the 40x and 60x objectives still provide the brightest images achievable at the limits of optical resolution.

Illustrated in Figure 5 is an example of the principle of numerical aperture importance with regard to resolution and image brightness. The specimen is a culture of adherent African green monkey kidney epithelial cells (CV-1 line) transfected with a plasmid vector encoding enhanced green fluorescent protein fused to the mitochondrial targeting nucleotide sequence from subunit VIII of human cytochrome c oxidase. Upon transcription and translation of the plasmid in transfected mammalian hosts, the mitochondrial localization signal is responsible for transport and distribution of the fluorescent protein chimera throughout the cellular mitochondrial network. Tubular mitochondria can be subsequently visualized using fluorescence microscopy.

In Figure 5, the same viewfield was imaged using objectives having the same optical correction (plan fluorite) and numerical aperture (1.3), but with magnifications ranging from 40x to 100x. Although the number of pixels and detector conditions utilized in collecting the images for Figure 5 were identical, the mitochondria are brightest when imaged through the 40x objective (Figure 5(a); note that the images have been adjusted to a common size). In contrast, the higher magnification 60x and 100x objectives (Figures 5(b) and 5(c), respectively) yield progressively darker images, with the 100x image being almost indiscernible. This objective can still be used to image the specimen, but the detector gain must be significantly increased, resulting in a deterioration of the signal-to-noise ratio and a generally inferior image. Note that the resolving power of the objectives (Figures 5(d) through 5(f)) is comparable due to the identical numerical aperture values.

One of the primary concepts to be gleaned from the data in Figure 5 is to avoid excessive magnification when choosing objectives for live-cell imaging of fluorescent proteins (and other fluorophores, for that matter). Simply increasing the digital enlargement (zoom) during image collection on a confocal microscope using the 40x or 60x objective results in an image equivalent in size to the 100x lens (Figures 5(d) and 5(e)). The resolution of both objectives is the same because they have identical numerical aperture values. The argument concerning the relative unimportance of magnification should not be taken to suggest that using the 60x or 100x objectives is not beneficial. In fact, selecting a high magnification objective is often necessary when imaging very small objects, such as peroxisomes or secretory granules, using a widefield microscope. Because the image size relative to detector size plays an important role in determining spatial sampling frequency, the optimal magnification is determined by the parameters of the digital camera system (CCD pixel size and the intermediate magnification factor). Thus, the best choice of objective usually depends on the optical configuration of the instrumentation in addition to the specific requirements of a particular experiment.
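The magnification demanded by the camera can be estimated from the Nyquist criterion (at least two pixels per resolved element). The pixel size and resolution limit below are illustrative assumptions, not values from the text:

```python
# Nyquist sampling: the pixel size projected back to the specimen plane should be
# no larger than half the optical resolution limit. Values below are assumptions.
pixel_size_um = 6.45      # physical CCD pixel size (a common interline chip value)
resolution_um = 0.24      # Rayleigh limit for a 1.4 NA objective at ~550 nm

# Minimum total magnification so one resolution element spans at least 2 pixels:
min_magnification = 2 * pixel_size_um / resolution_um
print(f"minimum magnification for Nyquist sampling: {min_magnification:.0f}x")  # ~54x
```

With these assumed values, a 60x high-NA objective already satisfies the criterion, while further magnification would only dim the image without revealing additional detail.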

In cases where signal-to-noise is more critical than resolution, it is often a better choice to utilize objectives of lower correction (fewer optical elements) that have significantly higher light throughput. Caution should also be used when imaging through objectives that contain optical elements that significantly reduce light transmission, such as the amplitude-reducing plates in phase contrast objectives. This level of light reduction is rarely a problem in transmission mode, but can substantially reduce signal levels (by 15 to 20 percent) in fluorescence imaging. For this reason, phase objectives should not be used for live-cell imaging with low-abundance probes unless the experimental protocol requires overlays of phase contrast and fluorescence images. Similarly, when combination images using fluorescence and DIC are being collected, the polarized light analyzer (which reduces the signal by approximately 30 percent) is usually positioned in the shared light path immediately beneath the objective and should be removed before acquiring fluorescence images.

Any defects present in the optical train of a microscope will affect the signal-to-noise ratio in the final image. In live-cell imaging with high numerical aperture objectives, spherical aberration is the most common artifact that must be overcome. This aberration arises from a mismatch between the medium bathing the cell (usually an aqueous solution with a refractive index of 1.33) and the refractive index of the objective immersion medium. Spherical aberration usually manifests itself as an uneven spread in focus along the optical axis such that the image of a point is significantly elongated and distorted. Because the signal is spread over a much larger volume in the presence of the aberration, the signal-to-noise ratio is reduced. This problem is far more serious when imaging deep into thick tissues than it is with adherent cells on glass coverslips (usually only several micrometers thick), and can be largely eliminated by using a water immersion lens that is more closely matched to the refractive index of the culture medium. As an alternative, the refractive index of the immersion medium or the thickness of the coverslip can be adjusted. It is important to note that refractive index changes with temperature, so the optimal combination of immersion medium and coverslip thickness will differ between room temperature and 37 degrees Celsius. Newer immersion objectives with correction collars can be used to combat temperature-induced refractive index fluctuations.

Presented in Figure 6 are the significant limitations of the microscope optical system (panel (a)) and the detector (digital camera; panel (b)) in performing live-cell investigations. The specimen consists of a single interphase nucleus from an adherent rat kangaroo kidney epithelial cell (PtK2 line) expressing enhanced green fluorescent protein (EGFP) fused to the histone H2B sequence (which localizes in the nucleus), and imaged in widefield fluorescence with epi-illumination. The highest resolution images are presented for the optical system in the lower right-hand corner of panel (a), and for the digital camera system in the lower left-hand corner of panel (b). In live-cell microscopy, the optical system determines the quantity of light available for contrast generation, as well as dictating the physical resolution limit, which is set by the objective numerical aperture and the wavelength of illumination. As the illumination intensity increases (traversing from top to bottom in panel (a)), the photon counting noise becomes less apparent and image quality dramatically increases. Similarly, as the optical resolution is increased (left to right in panel (a)), smaller details of the nucleus become more apparent.

The digital camera system captures optical information from the microscope and samples the data according to discrete light levels and spatial resolution elements (panel (b); Figure 6). As pixel size increases (left to right in panel (b)), the smaller details of many features become obscured and the image acquires a blocky, almost indiscernible appearance (lower right-hand corner in panel (b)). Decreasing the number of gray levels utilized to capture and display the image (bottom to top in panel (b)) reduces the shading, leaving only features with very high contrast (top row in panel (b)). Note that these two factors work together in providing information. Once again, in live-cell imaging experiments, the amount of information gathered should be proportional to the requirements of the investigation.
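The two sampling factors in panel (b) can be mimicked computationally. The sketch below uses a synthetic intensity ramp (both functions and values are illustrative) to show how reducing gray levels and enlarging the effective pixel size each discard information:

```python
import numpy as np

def quantize(img, bits):
    """Reduce an 8-bit image to 2**bits gray levels (coarser intensity sampling)."""
    step = 256 // (2 ** bits)
    return (img // step) * step

def coarsen(img, factor):
    """Enlarge the effective pixel size by block-averaging (coarser spatial sampling)."""
    h, w = img.shape
    blocks = img[:h - h % factor, :w - w % factor].astype(float)
    return blocks.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

ramp = np.tile(np.arange(256, dtype=np.uint8), (8, 1))   # smooth synthetic gradient
print(len(np.unique(quantize(ramp, 2))))   # 4 -- only four gray levels survive
print(coarsen(ramp, 2).shape)              # (4, 128) -- half the spatial samples
```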

A variety of filters and mirrors are employed in microscopes designed for live-cell imaging to direct and fine-tune the wavelength profile of illumination and emission light as it travels from the light source through the microscope (and specimen) and then to the detector. In transmission mode with contrast-enhancing optical systems, the light-modifying components are either polarizers and prisms (DIC) or condenser annuli and phase rings (phase contrast and Hoffman modulation contrast). Because the overall throughput of light in the microscope optical train for brightfield, DIC, and phase contrast is much higher than with fluorescence, very little modification is usually necessary in order to increase signal-to-noise when imaging in these modes. The primary consideration is filtering the tungsten-halogen illumination to remove phototoxic ultraviolet and infrared wavelengths before they damage the specimen. Blocking infrared light is necessary to avoid heat damage to the cells and to reduce background levels that complicate signal-to-noise with infrared-sensitive camera systems. Ultraviolet light is less of a concern with transmitted light lamps, but the wavelengths reaching the specimen can easily be controlled using a green or red interference filter. Both the infrared and visible light filters will significantly reduce light levels and may require increased exposure and gain settings on the detector system to preserve signal-to-noise.

In fluorescence imaging, interference filters and dichromatic mirrors are combined to select the appropriate wavelength bands of light that are used to excite and collect emission from the fluorophores labeling the specimen. The relatively low light levels in fluorescence microscopy, when compared to transmission modes, render the choice of filters critical for achieving adequate signal-to-noise levels. In most cases, especially where signal bleed-through is of concern due to overlapping fluorophore emission profiles, the application of bandpass excitation and emission (barrier) filters is required. Often, the narrow spectral windows inherent in these filters further restrict the amount of light passing through the microscope and challenge the investigator to select the optimum bandwidth for obtaining satisfactory images.

A wide variety of specific bandpass filter and dichromatic mirror combinations are available from the microscope manufacturers and aftermarket optical filter companies. For optimum signal-to-noise, the filter combination selected for live-cell imaging should closely match the spectral profiles of the fluorophores used in the experiment. For example, using a standard fluorescein (FITC) filter set designed for applications in multi-color labeling to image cells expressing yellow fluorescent protein (YFP) alone will needlessly sacrifice some of the YFP fluorescence that would otherwise improve the signal-to-noise (see Figure 7(a)). In this case, matching the YFP absorption and emission profiles with the appropriate wide bandpass excitation filter coupled to a longpass emission filter yields significantly higher signal.

An example of optimizing fluorescence filter combinations to obtain the highest possible signal level is presented for two popular fluorescent protein derivatives in Figure 7. The top panel (Figures 7(a) and 7(b)) illustrates the absorption and emission spectra of Venus, a site-directed mutagenesis product of YFP, superimposed over a FITC filter set (Figure 7(a)) as well as a specialized wideband combination designed to maximize the signal collected, in general, from yellow fluorescent proteins (Figure 7(b)). Although the emission filters are comparable with regard to the level of signal transmitted to the detector, overlap between the Venus absorption spectrum and the FITC excitation filter bandpass region is significantly less than that observed with the wideband YFP set, leading to inefficient excitation and reduced signal. Likewise, a TRITC filter combination (Figure 7(c)) does not excite mCherry fluorescent protein as efficiently as a Texas Red set (Figure 7(d)). In addition, the wider bandwidth of the emission filter in the Texas Red combination passes more signal to the detector. Every fluorophore utilized in live-cell imaging experiments should be scrutinized with regard to the selected filter combination to ensure maximum efficiency for excitation and emission.
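The benefit of a well-matched excitation filter can be estimated by integrating a fluorophore's excitation spectrum over the filter passband. The sketch below is a rough illustration only: it models the spectrum as a Gaussian, and the peak wavelength, bandwidth, and filter edges are approximate assumed values, not measured data for Venus or any commercial filter set:

```python
import math

def excitation_efficiency(peak_nm, fwhm_nm, band):
    """Approximate fraction of a fluorophore's excitation spectrum falling inside
    a filter passband, modeling the spectrum as a Gaussian (a crude assumption;
    real spectra are asymmetric). band is (cut_on, cut_off) in nanometers."""
    lo, hi = band
    sigma = fwhm_nm / 2.355  # convert full width at half maximum to standard deviation
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - peak_nm) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Assumed illustrative numbers: Venus absorbs maximally near 515 nanometers.
# A FITC-style excitation band (approx. 465-495 nm) clips most of that spectrum,
# while a wideband YFP-style filter (approx. 490-520 nm) captures far more.
fitc = excitation_efficiency(515, 40, (465, 495))   # roughly 12% of the spectrum
yfp = excitation_efficiency(515, 40, (490, 520))    # roughly 55% of the spectrum
```

Even with these crude assumptions, the calculation reproduces the qualitative conclusion drawn from Figure 7: the mismatched filter excites the fluorophore several-fold less efficiently.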

In addition to standard filter combinations designed to image a single fluorophore, multiband sets are now readily available to enable simultaneous illumination and detection of specimens labeled with two or more fluorescent probes. Other factors that must be considered when fine-tuning the microscope optical train are the strategic positioning of neutral density filters to reduce excitation illumination levels (critical for live-cell fluorescence imaging) as well as ultraviolet and infrared filters to offset or eliminate specimen phototoxicity. These filters are usually placed between the arc-discharge (or laser) illumination source and the microscope input light port, and can be readily interchanged to optimize light throughput. The intensity and numerical aperture of light passing through the microscope can also be regulated by careful adjustment of the condenser or epi-illuminator aperture and field diaphragms.

Matching Optical Resolution to Detector Geometry

Provided that the optical image is adequately sampled by the detector system, the spatial resolution of an image captured in the microscope is usually defined by the resolution of the optical system. Because the CCD detector is constructed from an array of photodiodes, each having a fixed size, the geometry of the individual pixels can limit the final resolution of the image. In order to achieve the full resolution capabilities of the microscope, the detector pixel size should satisfy the Nyquist sampling criterion of 2.5 to 3 pixels for each Airy disk unit. As an example, for an optical resolution limit of 250 nanometers, the pixel size in the final image should be approximately 80 to 100 nanometers. In live-cell imaging, where signal-to-noise is often more critical than optical resolution, the most useful limit for gathering images without noticeable degradation is 2 pixels per Airy disk. Reducing the number of pixels per Airy unit increases the brightness of the image and enables the use of larger CCD photodiodes that can accumulate more photoelectrons.

Illustrated in Figure 8(a) are images of Airy disks from the high numerical aperture objectives commonly utilized in live-cell imaging projected onto the surface of a CCD pixel array. Each pixel in the array is 6 micrometers in size. The 100x objective, with a numerical aperture of 1.4, projects an image that is 20 micrometers in diameter onto the CCD surface (see Table 1), thus easily achieving the Nyquist sampling size (3.3 pixels per Airy unit). At this sampling frequency, sufficient margin is available that the Nyquist criterion is nearly satisfied even with 2 x 2 pixel binning. In contrast, the 60x objective (also 1.4 numerical aperture) projects an image that is 12 micrometers in diameter, just below the lower boundary of the Nyquist limit. Additionally, even with a reduced numerical aperture of 1.3, the 40x objective produces an 8.4-micrometer image and would require a magnifying camera adapter to conform to the Nyquist resolution criterion. Photodiode arrays with larger or smaller pixel dimensions may either match the optical resolution of the microscope without intermediate magnification or require a specialized CCD adapter containing the appropriate lens system.
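The sampling figures quoted above can be reproduced with a short calculation, assuming the lambda/(2 x numerical aperture) resolution convention used for Table 1 and 550 nanometer illumination:

```python
def pixels_per_airy(magnification, na, pixel_um, wavelength_nm=550):
    """Sampling achieved on a CCD: the resolution limit (lambda / 2NA, as in
    Table 1) magnified onto the chip, divided by the physical pixel size."""
    resolution_um = (wavelength_nm / 1000.0) / (2.0 * na)
    projected_um = resolution_um * magnification
    return projected_um / pixel_um

# The three objectives of Figure 8(a) on a camera with 6-micrometer pixels:
samples_100x = pixels_per_airy(100, 1.4, 6.0)   # ~3.3, comfortably above Nyquist
samples_60x = pixels_per_airy(60, 1.4, 6.0)     # ~2.0, at the practical limit
samples_40x = pixels_per_airy(40, 1.3, 6.0)     # ~1.4, undersampled without an adapter
```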

A schematic diagram of 2 x 2 pixel binning is presented in Figure 8(b) for an array containing 16 pixels. After accumulation of photoelectrons during exposure, the camera readout circuitry performs two successive parallel shifts of 4 pixels each. Subsequently, two serial register shifts of 1 pixel place the photoelectrons gathered by 4 pixels (2 x 2 binning) into the output node where their voltage is read by the amplifier. In a similar manner, 4 x 4 binning (not illustrated) would shift the contents of the entire 16-pixel array into the output node before readout, significantly increasing the signal while simultaneously reducing the read noise. As previously discussed, pixel binning is an excellent method to retrieve weak signals when examining live specimens labeled with fluorescent proteins at the low expression levels necessary to reduce toxicity artifacts.
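The charge arithmetic of binning is straightforward to simulate. The sketch below is a software emulation only (real binning sums charge in the CCD registers before readout, which is what preserves the read-noise advantage); it sums each 2 x 2 block of photoelectron counts into a single superpixel:

```python
def bin_2x2(frame):
    """Simulate on-chip 2 x 2 binning: photoelectrons from each 2 x 2 block of
    photodiodes are summed into a single superpixel, so the collected signal
    grows four-fold while read noise is incurred only once per superpixel."""
    return [[frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]
             for x in range(0, len(frame[0]), 2)]
            for y in range(0, len(frame), 2)]

# A uniform 4 x 4 array with 100 photoelectrons per pixel reads out as a
# 2 x 2 array of 400-electron superpixels.
flat = [[100] * 4 for _ in range(4)]
binned = bin_2x2(flat)   # [[400, 400], [400, 400]]
```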

In practice, the ideal pixel size in the final image for an objective having a numerical aperture of 1.4 is between 100 and 120 nanometers. Translating this number into the appropriate physical dimensions for the CCD photodiodes requires consideration of the magnification factor for the objective. For example, to achieve the full optical resolution for a 60x objective (1.4 numerical aperture) the detector must have a photodiode size of 6 micrometers or less (determined by the product of the magnification and the resolution). However, for a 100x objective of the same numerical aperture, the optimum photodiode size is increased to 10 micrometers. Thus, it is evident that the digital resolution for a particular CCD camera only matches the microscope optical resolution when the camera is properly coupled to the appropriate objective. Mating a CCD camera having 10-micrometer photodiodes to a 100x objective will result in a final pixel size of 100 nanometers in the image. The same camera will produce 167-nanometer pixels when used with a 60x objective, which limits somewhat the resolution of recorded images. In contrast, a CCD with 6-micrometer photodiodes will be perfectly matched to a 60x objective, but will produce images that are oversampled when the microscope nosepiece is rotated to insert the 100x objective. Oversampling in this manner will significantly decrease image brightness while failing to improve resolution.
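The photodiode-size calculation described above reduces to a one-line product; a minimal sketch, assuming the 100 nanometer target pixel size in the final image quoted for a 1.4 numerical aperture objective:

```python
def required_photodiode_um(magnification, image_pixel_nm=100):
    """Physical photodiode size that yields the target pixel size in the final
    image: the product of the objective magnification and the image-plane pixel
    size (100 nanometers is appropriate for a 1.4 numerical aperture objective)."""
    return magnification * image_pixel_nm / 1000.0

required_photodiode_um(60)    # 6.0 micrometers for a 60x objective
required_photodiode_um(100)   # 10.0 micrometers for a 100x objective
```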

From the previous discussion, it appears that the only ideal solution for matching the microscope optical system to the pixel resolution of the detector is to use a different camera for each objective. In practice, obviously, this strategy is unrealistic and a compromise must be found to optimize the utility of a single camera system. In some cases, the resolution problem can be circumvented by installing a coupler between the microscope and camera that reduces the magnification factor of the objective. For example, a 0.6x coupling adapter will lower the projected image magnification from a 100x objective to match that of a 60x objective. A wide variety of couplers are available from aftermarket distributors to either increase or decrease the projected magnification at the CCD photodiode array plane. With regard to interchanging digital CCD cameras, although these devices can be removed from the microscope with relative ease, it is preferable to leave the camera mounted on the microscope port to prevent dust and debris from entering the body of the microscope and from adhering to the camera image sensor window (an artifact promoted by static electricity). Cleaning the sensor faceplate window is an unusually difficult task, especially when the goal is to completely remove every particle of dust.
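The same arithmetic used to match photodiodes to objectives can be inverted to select a coupler for a fixed camera; a sketch, again assuming a 100 nanometer target pixel size in the final image:

```python
def coupler_magnification(photodiode_um, magnification, image_pixel_nm=100):
    """Intermediate (coupler) magnification needed so that a camera with the
    given photodiode size delivers the target pixel size in the final image."""
    return photodiode_um / (magnification * image_pixel_nm / 1000.0)

# A camera with 6-micrometer photodiodes matches a 60x objective directly (1.0x),
# while the same camera on a 100x objective calls for a 0.6x reducing coupler.
coupler_magnification(6.0, 60)    # 1.0
coupler_magnification(6.0, 100)   # 0.6
```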

Pixel Size Requirements for Matching Microscope Optical Resolution

Objective (Numerical Aperture) | Resolution Limit (Micrometers) | Projected Size (Micrometers) | Required Pixel Size (Micrometers)
1x (0.04)    | 6.9  |  6.9 |  3.5
2x (0.06)    | 4.6  |  9.2 |  4.6
2x (0.10)    | 2.8  |  5.6 |  2.8
4x (0.10)    | 2.8  | 11.2 |  5.6
4x (0.12)    | 2.3  |  9.2 |  4.6
4x (0.20)    | 1.4  |  5.6 |  2.8
10x (0.25)   | 1.1  | 11.0 |  5.5
10x (0.30)   | 0.92 |  9.2 |  4.6
10x (0.45)   | 0.61 |  6.1 |  3.0
20x (0.40)   | 0.69 | 13.8 |  6.9
20x (0.50)   | 0.55 | 11.0 |  5.5
20x (0.75)   | 0.37 |  7.4 |  3.7
40x (0.65)   | 0.42 | 16.8 |  8.4
40x (0.75)   | 0.37 | 14.8 |  7.4
40x (0.95)   | 0.29 | 11.6 |  5.8
40x (1.00)   | 0.28 | 11.2 |  5.6
40x (1.30)   | 0.21 |  8.4 |  4.2
60x (0.80)   | 0.34 | 20.4 | 10.2
60x (0.85)   | 0.32 | 19.2 |  9.6
60x (0.95)   | 0.29 | 17.4 |  8.7
60x (1.40)   | 0.20 | 12.0 |  6.0
100x (0.90)  | 0.31 | 31.0 | 15.5
100x (1.25)  | 0.22 | 22.0 | 11.0
100x (1.30)  | 0.21 | 21.0 | 10.5
100x (1.40)  | 0.20 | 20.0 | 10.0

Table 1. The resolution limit is the illumination wavelength (550 nanometers) divided by twice the numerical aperture; the projected size is the resolution limit multiplied by the objective magnification; the required pixel size is half the projected size (2 pixels per resolved element).
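The entries in Table 1 can be reproduced from the magnification and numerical aperture alone; this sketch assumes 550 nanometer illumination and the 2-pixel practical sampling limit (rounding may differ slightly from the table in a few rows):

```python
def table_row(magnification, na, wavelength_nm=550):
    """Reproduce a row of Table 1: the resolution limit is lambda / (2 NA); the
    projected size is that limit magnified onto the detector; the required pixel
    size is half the projected size (2 pixels per resolved element)."""
    resolution_um = round((wavelength_nm / 1000.0) / (2.0 * na), 2)
    projected_um = round(resolution_um * magnification, 1)
    return (resolution_um, projected_um, round(projected_um / 2.0, 1))

table_row(100, 1.40)   # (0.2, 20.0, 10.0)
table_row(40, 1.30)    # (0.21, 8.4, 4.2)
table_row(20, 0.40)    # (0.69, 13.8, 6.9)
```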

Many of the current high-performance digital camera systems designed for fluorescence microscopy have photodiode sizes ranging from 5 to 16 micrometers and feature on-chip binning. As discussed above, the binning feature enables the charge from several neighboring photodiodes to be combined into a larger unit that effectively increases the size of the pixel. With cameras in the 6 to 8 micrometer photodiode size range, the full microscope optical resolution can usually be matched using the 60x objective without binning, while binning the photodiodes into a 2 x 2 array can be utilized with the 100x objective to increase image brightness. This approach matches the optical resolution of the microscope to that of the CCD with the higher-transmission 60x objective, and yields even brighter images when the 100x objective is combined with binning (2 x 2 binning increases sensitivity four-fold). As a result, the images recorded with a 100x objective at bin 2 are approximately twice as bright (and of equal or better resolution) as the images recorded with the 60x objective without binning. In cases where extra sensitivity is desired or necessary, the 60x objective can also be binned 2 x 2 to increase brightness by a factor of four, with a concurrent loss in resolution that is usually acceptable for most live-cell imaging investigations.

In several applications, such as those requiring the simultaneous monitoring of a large cell population, capturing a larger portion of the viewfield is more important than spatial resolution. The rectangular CCD image sensor in a digital camera with no intermediate magnification does not capture the entire field of view as visualized through the microscope eyepieces; rather, it is limited to between 30 and 80 percent, depending upon the chip dimensions. If a larger viewfield is required, a coupling adapter (described above) that reduces the magnification factor can be employed. In many cases, a 0.6x relay lens will project an image size that closely approximates the view seen in the eyepieces, but at some cost in resolution. Alternatively, in order to maintain resolution, multiple images of adjacent fields can be gathered at high magnification using an x-y motorized scanning stage and subsequently stitched together into a larger composite using post-acquisition processing software.
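The number of stage positions needed for such a tiled acquisition is easy to estimate. In the sketch below, the field size and overlap fraction are hypothetical example values, not properties of any particular camera or stage:

```python
import math

def tile_grid(region_um, fov_um, overlap_fraction=0.1):
    """Number of stage positions needed to tile a square region at full
    resolution, with neighboring fields overlapping to aid stitching."""
    if region_um <= fov_um:
        return 1
    step = fov_um * (1.0 - overlap_fraction)   # stage step between adjacent tiles
    n = math.ceil((region_um - fov_um) / step) + 1
    return n * n

# Hypothetical example: covering a 500 x 500 micrometer region with a
# 133-micrometer field (e.g., a 60x objective on an 8-millimeter-wide sensor)
# and 10 percent overlap requires a 5 x 5 grid of 25 images.
tile_grid(500, 133)   # 25
```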

The greatest resolving power in optical microscopy is realized with near-ultraviolet light, the shortest effective imaging wavelength. Near-ultraviolet light is followed by blue, then green, and finally red light in the ability to resolve specimen detail. Under most circumstances, microscopists use broad-spectrum white light generated by a tungsten-halogen bulb to illuminate the specimen, but in live-cell imaging the illumination spectrum is often limited to narrow bandwidths through the application of interference filters. The visible light spectrum is centered at about 550 nanometers, the dominant wavelength for green light (human eyes are most sensitive to green light). It is this wavelength that was used to calculate the resolution values presented in Table 1, so these values will differ when light of longer or shorter wavelength is utilized. Numerical aperture is equally important: higher numerical apertures produce higher resolution, as can be observed for objectives of similar magnification in the table.
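The wavelength dependence described above is linear under the lambda/(2 x numerical aperture) convention used for Table 1; a brief sketch:

```python
def resolution_um(na, wavelength_nm):
    """Lateral resolution limit (lambda / 2NA), showing the linear dependence
    on wavelength: shorter wavelengths resolve finer specimen detail."""
    return (wavelength_nm / 1000.0) / (2.0 * na)

# For a 1.4 numerical aperture objective, moving from red toward the
# near-ultraviolet improves the resolution limit by roughly a third:
resolution_um(1.4, 650)   # ~0.23 micrometers (red)
resolution_um(1.4, 550)   # ~0.20 micrometers (green, as in Table 1)
resolution_um(1.4, 400)   # ~0.14 micrometers (violet/near-ultraviolet)
```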


Perhaps the most critical aspect of live-cell imaging is achieving the optimal magnification while balancing signal intensity (favored by lower magnifications), resolution (favored by higher magnifications), and cell viability over the course of the experiment. The investigator must take care to match the final optical magnification to the pixel size of the detector while utilizing as few lens elements as possible. Optical couplers having a wide variety of intermediate magnification factors are commercially available to match specific objective requirements (magnification and numerical aperture) for both light-limited and high-resolution applications. The Nyquist resolution criterion should be stringently considered when choosing the objective magnification: each pixel should correspond to no more than half the required resolution limit and, due to the diffraction limit of the microscope, this distance should not be smaller than 50 nanometers for most applications.

The standard complement of microscope objectives commonly employed for live-cell imaging includes the 10x and 40x (dry) versions, as well as 40x, 60x, and 100x oil or water immersion lenses. Economical dry low-magnification lenses are used primarily for preliminary scanning of specimens to locate areas of interest and do not require a high degree of optical correction. However, the immersion lenses should feature high numerical apertures (1.3 to 1.45 for oil and 1.2 for water) and high light transmission efficiency. Because images are typically gathered near the center of the viewfield, objectives that are highly corrected to produce a flat field usually provide no detectable benefit and are significantly more costly and less light efficient than lesser-corrected objectives. All objectives targeted for fluorescence imaging should be examined for the quality of the point-spread function using fluorescent beads.

Cooled monochrome CCD cameras are the best choice for most imaging applications involving living cells in culture, regardless of whether epi-fluorescence or transmitted light with contrast enhancement (DIC and phase contrast) is the primary acquisition mode. In choosing a camera, the important parameters to consider include quantum efficiency, noise levels (dark current and readout), pixel size (and the related full-well capacity), and scanning frequency. In order to minimize the levels of excitation light impinging on the specimen, the camera should be as sensitive as possible, which generally means a high quantum efficiency, low noise, large pixel size, and slow scan rate. Sensitivity must therefore be balanced against the required resolution (favored by a large number of smaller pixels) and the target imaging rate (favored by a higher scanning frequency).

Slow-scan CCD cameras are generally limited in their frame rate and, furthermore, unless the specimen is extremely bright, the signal-to-noise ratio is poor when exposure times are short. These characteristics limit the use of such camera systems in high-speed imaging applications (ranging up to 30 frames per second) and frustrate attempts to focus the specimen. When locating a suitable specimen and focusing the camera, care should be taken to avoid photobleaching and phototoxicity, artifacts that can compromise both cell viability and image brightness. In this regard, cameras should be chosen that feature shutterless, frame-transfer, or interline CCD sensors, which are able to provide a continuous stream of images at video rate in addition to slow-scan digital signals. Extreme low-light conditions, where fluorescent probes display limited abundance or low quantum yields, coupled with the requirement for capturing high-speed motion, can necessitate the use of intensified or electron-multiplying CCD camera systems.


Contributing Authors

Michael E. Dailey - Department of Biological Sciences and Neuroscience Program, 369 Biology Building, University of Iowa, Iowa City, Iowa, 52242.

Alexey Khodjakov and Conly L. Rieder - Wadsworth Center, New York State Department of Health, Albany, New York, 12201, and Marine Biological Laboratory, Woods Hole, Massachusetts, 02543.

Melpomeni Platani - Gene Expression Programme, European Molecular Biology Laboratory, Meyerhofstrasse 1, D-69117, Heidelberg, Germany.

Jason R. Swedlow, and Paul D. Andrews - Division of Gene Regulation and Expression, MSI/WTB Complex, University of Dundee, Dundee DD1 5EH, Scotland.

Yu-li Wang - University of Massachusetts Medical School, 377 Plantation Street, Suite 327, Worcester, Massachusetts, 01605.

Jennifer C. Waters - Nikon Imaging Center, LHRRB Room 113C, Department of Cell Biology, Harvard Medical School, 240 Longwood Avenue, Boston, Massachusetts, 02115.

Nathan S. Claxton, Scott G. Olenych, John D. Griffin, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.