Imaging spectroscopy

Ash plumes on the Kamchatka Peninsula, eastern Russia (MODIS image).

Imaging spectroscopy (also known as hyperspectral imaging, spectral imaging, or chemical imaging) is similar to color photography, but each pixel acquires many bands of light intensity data from the spectrum, instead of just the three bands of the RGB color model. More precisely, it is the simultaneous acquisition of spatially coregistered images in many spectrally contiguous bands.
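One way to picture the difference in data volume is as a three-dimensional data cube: two spatial axes plus one spectral axis. A minimal NumPy sketch follows; the image dimensions and the 224-band count are illustrative assumptions, not values from any particular instrument.

```python
import numpy as np

# A conventional RGB photograph: two spatial axes, 3 colour bands.
rgb_image = np.zeros((1024, 1024, 3))

# A hyperspectral data cube of the same scene: same spatial axes, but
# (for example) 224 spectrally contiguous bands, so every pixel carries
# a full spectrum rather than three numbers.
cube = np.zeros((1024, 1024, 224))

# The spectrum recorded at one ground location is a slice along the
# spectral axis.
pixel_spectrum = cube[512, 512, :]   # shape: (224,)
```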

Some spectral images contain only a few image planes of a spectral data cube, while others are better thought of as full spectra at every location in the image. For example, solar physicists use the spectroheliograph to make images of the Sun built up by scanning the slit of a spectrograph, to study the behavior of surface features on the Sun; such a spectroheliogram may have a spectral resolution (λ/Δλ) of over 100,000 and be used to measure local motion (via the Doppler shift) and even the magnetic field (via Zeeman splitting or the Hanle effect) at each location in the image plane. The multispectral images collected by the Opportunity rover, in contrast, have only four wavelength bands and hence are only a little more than 3-color images.
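To put the resolving-power figure in context, λ/Δλ ≈ 100,000 means wavelength shifts of about one part in 100,000 are measurable; by the Doppler relation v ≈ c·Δλ/λ this corresponds to line-of-sight velocities of roughly 3 km/s. A short illustrative calculation:

```python
# Illustrative arithmetic: resolving power vs. measurable Doppler velocity.
c = 299_792.458                 # speed of light, km/s
resolving_power = 100_000       # lambda / delta_lambda

# Smallest resolvable fractional wavelength shift, and the corresponding
# line-of-sight velocity v = c * (delta_lambda / lambda).
v_min = c / resolving_power
print(f"smallest resolvable velocity: {v_min:.1f} km/s")   # about 3 km/s
```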

To be scientifically useful, such measurements should be made using an internationally recognized system of units.

One application is spectral geophysical imaging, which allows quantitative and qualitative characterization of the surface and of the atmosphere, using radiometric measurements. These measurements can then be used for unambiguous direct and indirect identification of surface materials and atmospheric trace gases, the measurement of their relative concentrations, subsequently the assignment of the proportional contribution of mixed pixel signals (e.g., the spectral unmixing problem), the derivation of their spatial distribution (mapping problem), and finally their study over time (multi-temporal analysis). The Moon Mineralogy Mapper on Chandrayaan-1 was a geophysical imaging spectrometer.[1]

Background

In his 1704 treatise Opticks, Isaac Newton demonstrated that white light could be split up into component colours. The subsequent history of spectroscopy led to precise measurements and provided the empirical foundations for atomic and molecular physics (Born & Wolf, 1999). Significant achievements in imaging spectroscopy came from airborne instruments, particularly those developed in the early 1980s and 1990s (Goetz et al., 1985; Vane et al., 1984). However, it was not until 1999 that the first imaging spectrometer was launched into space (the NASA Moderate Resolution Imaging Spectroradiometer, or MODIS).

Terminology and definitions evolve over time. At one time, more than ten spectral bands sufficed to justify the term "imaging spectrometer", but presently the term is seldom defined by a minimum number of spectral bands; instead it refers to instruments whose bands are spectrally contiguous (or redundant).

The term hyperspectral imaging is sometimes used interchangeably with imaging spectroscopy. Because of its heavy use in military-related applications, the civil world has established a slight preference for the term imaging spectroscopy.

Unmixing

Hyperspectral data is often used to determine what materials are present in a scene. Materials of interest could include roadways, vegetation, and specific targets (e.g., pollutants or hazardous materials). Trivially, each pixel of a hyperspectral image could be compared to a material database to determine the type of material making up the pixel. However, many hyperspectral imaging platforms have coarse spatial resolution (more than 5 m per pixel), so each pixel is a mixture of several materials. The process of unmixing one of these 'mixed' pixels is called hyperspectral image unmixing or simply hyperspectral unmixing.
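A simple way to carry out the per-pixel comparison described above is to measure the spectral angle between each pixel spectrum and a set of library spectra and assign the closest match. The sketch below assumes a small in-memory library of reference spectra; it is an illustration of the idea, not a production matching routine.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def match_pixels(cube, library):
    """Label each pixel with the library material whose spectrum is closest.

    cube:    (rows, cols, bands) hyperspectral image
    library: dict of material name -> (bands,) reference spectrum
    """
    names = list(library)
    rows, cols, _ = cube.shape
    labels = np.empty((rows, cols), dtype=object)
    for r in range(rows):
        for c in range(cols):
            angles = [spectral_angle(cube[r, c], library[n]) for n in names]
            labels[r, c] = names[int(np.argmin(angles))]
    return labels
```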

Models

A solution to hyperspectral unmixing is to reverse the mixing process. Generally, two models of mixing are assumed: linear and nonlinear. Linear mixing models the ground as flat, with incident sunlight causing each material to radiate some fraction of the incident energy back to the sensor. Each pixel is then modeled as a linear sum of the radiated energy curves of the materials making up the pixel. Therefore, each material contributes to the sensor's observation in a positive linear fashion. Additionally, a conservation-of-energy constraint is often imposed, forcing the weights of the linear mixture to sum to one in addition to being nonnegative. The model can be described mathematically as follows:

p = Mx

where p is the spectrum of a pixel observed by the sensor, M is a matrix of material reflectance signatures (each signature is a column of the matrix), and x is the vector of proportions (abundances) of the materials present in the observed pixel. This type of model is also referred to as a simplex, because the constraints below confine x to a simplex.

Here x satisfies two constraints: 1. Abundance Nonnegativity Constraint (ANC): each element of x is nonnegative. 2. Abundance Sum-to-one Constraint (ASC): the elements of x must sum to one.
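A small numerical sketch of this forward model may help fix the notation; the endmember signatures here are random placeholders, not real reflectance data.

```python
import numpy as np

bands, materials = 5, 3

# M: one reflectance signature per column (placeholder values).
M = np.random.rand(bands, materials)

# x: abundance vector obeying the ANC (nonnegative) and ASC (sums to one).
x = np.array([0.6, 0.3, 0.1])
assert np.all(x >= 0) and np.isclose(x.sum(), 1.0)

# The observed mixed pixel is the linear combination of the signatures.
p = M @ x        # shape: (bands,)
```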

Non-linear mixing results from multiple scattering, often due to non-flat surfaces such as buildings and vegetation.

Unmixing (Endmember Detection) Algorithms

There are many algorithms to unmix hyperspectral data, each with its own strengths and weaknesses. Many algorithms assume that pure pixels (pixels containing only one material) are present in the scene; widely used endmember-extraction algorithms of this kind include the Pixel Purity Index (PPI), N-FINDR, and Vertex Component Analysis (VCA). A simplified sketch of the pure-pixel approach follows.
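As an illustration of the pure-pixel idea, the following is a toy sketch in the spirit of the Pixel Purity Index: pixel spectra are projected onto many random directions ("skewers"), and pixels that repeatedly fall at the extremes of those projections are flagged as candidate endmembers. It is a simplified outline, not a faithful reimplementation of any published code.

```python
import numpy as np

def ppi_candidates(cube, n_skewers=1000, seed=0):
    """Count how often each pixel is extreme along random projections.

    cube: (rows, cols, bands) hyperspectral image.
    Returns a (rows, cols) array of counts; high counts mark likely pure
    pixels, i.e. candidate endmembers.
    """
    rng = np.random.default_rng(seed)
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)            # flatten spatial axes
    counts = np.zeros(pixels.shape[0], dtype=int)

    for _ in range(n_skewers):
        skewer = rng.standard_normal(bands)     # random projection direction
        proj = pixels @ skewer
        counts[np.argmax(proj)] += 1            # tally the extreme pixels
        counts[np.argmin(proj)] += 1

    return counts.reshape(rows, cols)
```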

Non-linear unmixing algorithms also exist, such as support vector machines (SVMs) and artificial neural networks (ANNs).

Probabilistic methods have also been applied to pixel unmixing, for example through the Monte Carlo Unmixing (MCU) algorithm.

Abundance Maps

Once the fundamental materials of a scene are determined, it is often useful to construct an abundance map for each material, displaying the fractional amount of that material present at each pixel. The abundances are usually estimated with constrained optimization (e.g., linear programming or constrained least squares) so that they observe the ANC and ASC, as in the sketch below.
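A common way to enforce both constraints in practice is a fully constrained least-squares fit per pixel. The sketch below uses the standard trick of appending a heavily weighted sum-to-one row to a nonnegative least-squares problem; M and p follow the notation of the linear model above, and the weight value is an illustrative choice.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_abundances(M, p, weight=1e3):
    """Estimate abundances x with x >= 0 (ANC) and sum(x) ~= 1 (ASC).

    M: (bands, materials) endmember matrix, p: (bands,) observed pixel.
    The heavily weighted row of ones pushes the nonnegative least-squares
    solution toward summing to one.
    """
    M_aug = np.vstack([M, weight * np.ones(M.shape[1])])
    p_aug = np.append(p, weight)
    x, _ = nnls(M_aug, p_aug)
    return x
```

Repeating this for every pixel and reshaping the results back onto the image grid gives one abundance map per material.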
