
Multispectral and panchromatic images

EO sensors that capture data in multiple bands produce what are called multispectral images. Generally, the more spectral bands a sensor has, the more information it gathers. Most commercial EO satellites, such as Landsat, SPOT, RapidEye and WorldView-2 and -3, generate multispectral images covering the visible and infrared portions of the EM spectrum. Imaging systems that capture data in numerous, and commonly narrow, bands spanning a wide portion of the electromagnetic spectrum produce hyperspectral images (e.g., AVIRIS, EnMAP and Hyperion). Multispectral sensors typically provide fewer than 15 bands, whereas hyperspectral sensors can provide more than 100 spectral bands, hence the prefix "hyper".

In addition to multispectral bands, panchromatic images are produced by satellites such as Landsat, DigitalGlobe’s range of satellites and SPOT 6/7. Such images have a single band that “combines” the information from the visible blue, green and red bands. In other words, the band records the total light energy across the visible spectrum (instead of partitioning it into separate bands), yielding a single intensity value per pixel that is commonly visualized as a greyscale image. The information in each pixel of a panchromatic image is therefore directly related to the total intensity of solar radiation that is reflected by the objects in the pixel and detected by the satellite sensor. Because more solar radiation is collected per pixel, panchromatic detectors can register brightness changes over smaller spatial extents (i.e., pixel sizes) than multispectral detectors. Conversely, because relatively little energy is available within each narrow multispectral band, those detectors must sample a larger area (pixel size) to collect the minimum amount of light energy required for detecting brightness differences. Multispectral images therefore tend to have larger pixel sizes than panchromatic images. For example, the panchromatic band of Landsat has a spatial resolution (pixel size) of 15 m, smaller than the 30 m pixel size of its multispectral bands.
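The following sketch illustrates both points numerically with NumPy. The band values are random stand-ins, the equal-weight mean is only a crude approximation of how a panchromatic detector integrates light across the visible range, and the 15 m/30 m grids follow the Landsat example above:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visible bands at 30 m resolution (reflectance, 2x2 pixels).
blue = rng.uniform(0.05, 0.30, (2, 2))
green = rng.uniform(0.05, 0.30, (2, 2))
red = rng.uniform(0.05, 0.30, (2, 2))

# A panchromatic detector integrates light across the whole visible range;
# an equal-weight mean of the visible bands is a crude stand-in for that.
pan_equivalent = (blue + green + red) / 3.0

# Conversely, each 30 m multispectral pixel covers a 2x2 block of 15 m
# panchromatic pixels, so block-averaging a 15 m pan image mimics the
# coarser multispectral sampling grid.
pan_15m = rng.random((4, 4))
pan_on_30m_grid = pan_15m.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(pan_equivalent.shape, pan_on_30m_grid.shape)  # (2, 2) (2, 2)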

To benefit from the advantages of both multispectral images (i.e., high spectral resolution) and panchromatic images (i.e., high spatial resolution), the two are often combined, or fused, for improved visual image interpretation and information retrieval. This image fusion procedure, known as pan-sharpening or intensity substitution, combines three bands from the multispectral image with the high spatial resolution panchromatic image to produce an output (colour composite) that has the spatial and spectral properties of both image types. The procedure is particularly useful in object-based image analysis, in which very high resolution images are required to extract the objects of interest. In agricultural applications, for example, farm boundaries are often extracted from pan-sharpened high resolution multispectral images using image segmentation approaches, as sketched below.
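As a minimal illustration of the segmentation step, the sketch below runs scikit-image's graph-based Felzenszwalb segmentation on a stand-in array; in a real workflow the input would be an actual pan-sharpened composite, and the parameter values shown are arbitrary:

import numpy as np
from skimage.segmentation import felzenszwalb, mark_boundaries

# Stand-in for a pan-sharpened RGB composite (values in [0, 1]); a real
# workflow would load a pan-sharpened product instead of random data.
rng = np.random.default_rng(1)
image = rng.random((128, 128, 3))

# Graph-based segmentation groups spectrally similar, spatially
# contiguous pixels into candidate objects such as individual fields.
segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(f"{segments.max() + 1} segments")

# Overlay segment outlines on the composite for visual inspection.
outlined = mark_boundaries(image, segments)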

A simple and commonly used approach to fusing multispectral and panchromatic images is the RGB-IHS colour space forward and inverse transformation technique. RGB (red-green-blue) and IHS (intensity-hue-saturation) are examples of three-dimensional colour spaces used to describe how humans perceive colour. In its simplest form, the RGB-IHS approach first transforms an RGB colour composite made from three multispectral bands into the IHS colour space, yielding three images: intensity, hue and saturation. The "intensity" image (derived from the multispectral bands) is then replaced by the high spatial resolution panchromatic image. This new intensity image, together with the original hue and saturation images, is then transformed back into the RGB colour space for visualization. Variants of this simple technique exist, including the pixel addition method (Chavez et al., 1991), in which the high spatial resolution panchromatic image is added in equal amounts to each of the (three) multispectral bands.
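A minimal sketch of this substitution follows. It assumes the linear IHS transform in which intensity is the band mean, I = (R + G + B) / 3; under that assumption, substituting the pan band for I and inverting reduces to adding (pan − I) to each band. The function name ihs_pansharpen, the mean/std matching step, and the random inputs are all illustrative, and the multispectral bands are assumed to be pre-resampled to the pan grid:

import numpy as np

def ihs_pansharpen(red, green, blue, pan):
    """Intensity-substitution pan-sharpening with the linear IHS transform.

    Assumes the three multispectral bands have already been resampled to
    the panchromatic pixel grid. With intensity I = (R + G + B) / 3,
    replacing I by the pan band and inverting the transform is equivalent
    to adding (pan - I) to each band.
    """
    intensity = (red + green + blue) / 3.0

    # Match the pan band's mean/std to the intensity image so that the
    # substitution injects spatial detail rather than a brightness shift.
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()

    delta = pan_matched - intensity
    return red + delta, green + delta, blue + delta

# Hypothetical inputs: 4x4 multispectral bands upsampled to the pan grid.
rng = np.random.default_rng(2)
r, g, b = (rng.uniform(0.1, 0.4, (4, 4)) for _ in range(3))
pan = rng.random((4, 4))
r_sharp, g_sharp, b_sharp = ihs_pansharpen(r, g, b, pan)

Note how close this is to the pixel addition variant mentioned above: both add a scaled version of the panchromatic detail in equal amounts to every band.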

Apart from the above techniques that rely on colour space transformations, other fusion techniques have been developed that rely on statistical transforms to extract the high frequency details from the panchromatic image and inject them into the multispectral bands (Upla et al., 2015). Examples include the ARSIS method (Ranchin and Wald, 2000), the discrete wavelet transform (Shi et al., 2005), the Laplacian pyramid (Wilson et al., 1997), curvelets (Choi et al., 2005) and contourlets (Shah et al., 2008).
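To make the detail-injection idea concrete, here is a much-simplified single-level wavelet sketch using the PyWavelets package: the multispectral band keeps its low-frequency (approximation) coefficients, while the high-frequency (detail) coefficients are taken from the pan image. The function name dwt_fuse_band, the Haar wavelet, and the random inputs are illustrative; published wavelet fusion schemes such as Shi et al. (2005) are considerably more elaborate:

import numpy as np
import pywt

def dwt_fuse_band(ms_band, pan, wavelet="haar"):
    """Single-level wavelet detail injection for one multispectral band.

    Keeps the band's approximation (low-frequency, colour) coefficients
    and injects the pan image's detail (high-frequency) coefficients.
    """
    ms_approx, _ = pywt.dwt2(ms_band, wavelet)
    _, pan_details = pywt.dwt2(pan, wavelet)
    return pywt.idwt2((ms_approx, pan_details), wavelet)

# Hypothetical 8x8 band already resampled to the pan grid.
rng = np.random.default_rng(3)
band = rng.random((8, 8))
pan = rng.random((8, 8))
fused = dwt_fuse_band(band, pan)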
