Multispectral pattern recognition

Multispectral remote sensing is the collection and analysis of reflected, emitted, or back-scattered energy from an object or area of interest in multiple regions (bands) of the electromagnetic spectrum (Jensen, 2005). Subcategories of multispectral remote sensing include hyperspectral remote sensing, in which hundreds of bands are collected and analyzed, and ultraspectral remote sensing, in which many hundreds of bands are used (Logicon, 1997). The main advantage of multispectral imaging is that it enables multispectral classification, a much faster method of image analysis than human interpretation.

Multispectral remote sensing systems

Remote sensing systems gather data via instruments typically carried on satellites in orbit around the Earth. The remote sensing scanner detects the energy that radiates from the object or area of interest. This energy is recorded as an analog electrical signal and converted into a digital value through an analog-to-digital (A-to-D) conversion. Multispectral remote sensing systems can be categorized in the following way:

Multispectral imaging using discrete detectors and scanning mirrors

Multispectral imaging using linear arrays

Imaging spectrometry using linear and area arrays

Satellite analog and digital photographic systems

Multispectral classification methods

A variety of methods can be used for the multispectral classification of images:

Supervised classification

In this classification method, the identity and location of some of the land-cover types are obtained beforehand through a combination of fieldwork, interpretation of aerial photography, map analysis, and personal experience. The analyst locates sites in the image that have characteristics similar to the known land-cover types. These areas are known as training sites, because their known characteristics are used to train the classification algorithm for eventual land-cover mapping of the remainder of the image. Multivariate statistical parameters (means, standard deviations, covariance matrices, correlation matrices, etc.) are calculated for each training site. Every pixel inside and outside the training sites is then evaluated and allocated to the class with the most similar characteristics.

Classification scheme

The first step in the supervised classification method is to identify the land-cover and land-use classes to be used. Land cover refers to the type of material present on the site (e.g. water, crops, forest, wetland, asphalt, and concrete). Land use refers to the modifications made by people to the land cover (e.g. agriculture, commerce, settlement). All classes should be selected and defined carefully to properly classify remotely sensed data into the correct land-use and/or land-cover information. To achieve this purpose, it is necessary to use a classification system that contains taxonomically correct definitions of classes. If a hard classification is desired, the classes must be mutually exclusive.

Training sites

Once the classification scheme is adopted, the image analyst may select training sites in the image that are representative of the land cover or land use of interest. If the environment where the data were collected is relatively homogeneous, the training data can be extended to the rest of the image. If conditions vary across the site, the training data cannot simply be extrapolated; to solve this problem, a geographical stratification should be performed during the preliminary stages of the project, and all differences (e.g. soil type, water turbidity, crop species) should be recorded. These differences should be noted on the imagery and the training sites selected with respect to this stratification. The final classification map is then a composite of the individual stratum classifications.

After the data are organized into the different training sites, a measurement vector is created for each pixel, containing its brightness value in each band. The mean, standard deviation, variance-covariance matrix, and correlation matrix of each training class are then calculated from these measurement vectors.
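Assuming the pixels of one training class are stored as an (n_pixels, n_bands) NumPy array, these statistics can be sketched as follows (the class name and sample values are illustrative, not from the text):

```python
import numpy as np

def training_statistics(pixels):
    """Compute multivariate statistics for one training class.

    pixels: array of shape (n_pixels, n_bands), one brightness
    measurement vector per row.
    """
    mean = pixels.mean(axis=0)                # per-band mean vector
    std = pixels.std(axis=0, ddof=1)          # per-band standard deviation
    cov = np.cov(pixels, rowvar=False)        # variance-covariance matrix
    corr = np.corrcoef(pixels, rowvar=False)  # correlation matrix
    return mean, std, cov, corr

# Illustrative 4-band brightness values for a hypothetical "water" class
water = np.array([[12.0, 10.0, 8.0, 3.0],
                  [13.0, 11.0, 9.0, 4.0],
                  [11.0, 10.0, 7.0, 3.0],
                  [12.0,  9.0, 8.0, 2.0]])
mean, std, cov, corr = training_statistics(water)
```

The covariance and correlation matrices computed here per class are exactly the inputs that band-selection and parametric classification algorithms consume in the later steps.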

Once the statistics for each training site are determined, the most effective bands for discriminating each class should be selected. The objective of this feature selection is to eliminate bands that provide redundant information; graphical and statistical methods can be used to achieve it.

Classification algorithm

The last step in supervised classification is selecting an appropriate algorithm. The choice of a specific algorithm depends on the input data and the desired output. Parametric algorithms assume that the data are normally distributed; if they are not, nonparametric algorithms should be used.
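As a minimal illustration of such a classifier, the sketch below implements minimum distance to means, a simple algorithm that makes no assumption of normally distributed data (array shapes and values are illustrative):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose mean vector is nearest.

    pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
    Returns an array of class indices, one per pixel.
    """
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two hypothetical classes in a two-band feature space
means = np.array([[20.0, 40.0],    # class 0 mean vector
                  [80.0, 60.0]])   # class 1 mean vector
pix = np.array([[22.0, 41.0], [78.0, 59.0], [55.0, 52.0]])
labels = minimum_distance_classify(pix, means)  # -> array([0, 1, 1])
```

The class mean vectors would come from the training-site statistics computed earlier; more sophisticated parametric algorithms replace the Euclidean distance with a covariance-weighted measure.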

Unsupervised classification

Unsupervised classification (also known as clustering) is a method of partitioning remote sensor image data in multispectral feature space and extracting land-cover information. Unsupervised classification requires less input from the analyst than supervised classification because clustering does not require training data. The process consists of a series of numerical operations that search for the spectral properties of pixels, producing a map with m spectral classes. Using this map, the analyst tries to assign or transform the spectral classes into thematic information of interest (e.g. forest, agriculture, urban). This may not be easy, because some spectral clusters represent mixed classes of surface materials and are not useful; the analyst must understand the spectral characteristics of the terrain to label clusters as specific information classes. There are hundreds of clustering algorithms; two of the most conceptually simple are the chain method and the ISODATA method.

Chain method

The algorithm used in this method operates in a two-pass mode, i.e. it passes through the multispectral dataset twice. In the first pass, the program reads through the dataset and sequentially builds clusters (groups of points in spectral space); once the pass is complete, a mean vector is associated with each cluster. In the second pass, a minimum-distance-to-means classification algorithm is applied to the dataset pixel by pixel, and each pixel is assigned to one of the mean vectors created in the first pass.
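The two passes can be sketched as follows, assuming a single spectral-distance threshold decides when a new cluster is started (the `radius` parameter and the running-mean update are illustrative simplifications, not part of the original description):

```python
import numpy as np

def chain_clusters(pixels, radius):
    """Pass 1: sequentially group pixels into clusters in spectral space.

    A pixel joins the nearest existing cluster if its spectral distance
    to that cluster's mean is within `radius`; otherwise it starts a
    new cluster.
    """
    means, counts = [], []
    for p in pixels:
        if means:
            d = np.linalg.norm(np.array(means) - p, axis=1)
            j = d.argmin()
            if d[j] <= radius:
                counts[j] += 1
                # incrementally update the cluster's mean vector
                means[j] = means[j] + (p - means[j]) / counts[j]
                continue
        means.append(p.astype(float))
        counts.append(1)
    means = np.array(means)
    # Pass 2: minimum-distance-to-means labeling of every pixel
    d = np.linalg.norm(pixels[:, None] - means[None, :], axis=2)
    return means, d.argmin(axis=1)

pix = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]])
means, labels = chain_clusters(pix, radius=3.0)  # two clusters emerge
```

Because clusters are built sequentially, the result can depend on pixel order, which is one reason the second, whole-dataset labeling pass is needed.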

ISODATA method

The Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm used for multispectral pattern recognition was developed by Geoffrey H. Ball and David J. Hall at Stanford Research Institute.[2]

The ISODATA algorithm is a modification of the k-means clustering algorithm, with added heuristic rules based on experimentation. In outline:[3]

INPUT. dataset, user-specified configuration values.

Initialize cluster points for k-means algorithm randomly.

DO UNTIL. termination conditions are satisfied

Run a few iterations of the k-means algorithm.

Split a cluster point into two if the standard deviation of the points in the cluster is too high.

Merge two cluster points into one if the distance between their mean is too low.

Delete a cluster point if it contains too few data points.

Delete data points that are too distant from their cluster point.

Check heuristic conditions for termination.

RETURN. clusters found

There are many possible heuristic conditions for termination, depending on the implementation.
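The outline above can be sketched in plain NumPy. This is a simplified illustration, not a faithful reimplementation: the parameter names, the split/merge thresholds, and the fixed iteration cap are all illustrative choices.

```python
import numpy as np

def isodata(X, k_init=2, max_iter=10, min_size=2,
            split_std=5.0, merge_dist=1.5, seed=0):
    """Simplified ISODATA sketch. X: (n_points, n_bands)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k_init, replace=False)].astype(float)
    for _ in range(max_iter):
        # k-means step: assign points to nearest center, recompute means
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # delete clusters with too few points
        keep = [c for c in range(len(centers)) if (labels == c).sum() >= min_size]
        centers = np.array([X[labels == c].mean(axis=0) for c in keep])
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # split: a cluster whose per-band std is too high becomes two
        new_centers = []
        for c in range(len(centers)):
            pts, s = X[labels == c], X[labels == c].std(axis=0)
            if s.max() > split_std and len(pts) > 2 * min_size:
                offset = np.zeros_like(centers[c])
                offset[s.argmax()] = s.max() / 2
                new_centers += [centers[c] - offset, centers[c] + offset]
            else:
                new_centers.append(centers[c])
        centers = np.array(new_centers)
        # merge: fuse pairs of centers closer than merge_dist
        merged, used = [], set()
        for i in range(len(centers)):
            if i in used:
                continue
            for j in range(i + 1, len(centers)):
                if j not in used and np.linalg.norm(centers[i] - centers[j]) < merge_dist:
                    merged.append((centers[i] + centers[j]) / 2)
                    used.update({i, j})
                    break
            else:
                merged.append(centers[i])
        centers = np.array(merged)
    d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
    return centers, d.argmin(axis=1)

# Two well-separated spectral groups: ISODATA should recover both,
# splitting or merging as needed regardless of the random initialization.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0],
              [20, 20], [21, 21], [20, 21], [21, 20]], dtype=float)
centers, labels = isodata(X, k_init=2)
```

Note how the split rule rescues a bad initialization: if both starting points land in the same group, the remaining cluster spans both groups, its standard deviation exceeds the threshold, and it is split apart on the next pass.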


References

  1. Ran, Lingyan; Zhang, Yanning; Wei, Wei; Zhang, Qilin (2017). "A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features". Sensors. 17 (10): 2421. Bibcode:2017Senso..17.2421R. doi:10.3390/s17102421. PMC 5677443. PMID 29065535.
  2. Ball, Geoffrey H.; Hall, David J. (1965). Isodata, a Novel Method of Data Analysis and Pattern Classification. Stanford Research Institute.
  3. Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; Le Moigne, Jacqueline (2007). "A Fast Implementation of the Isodata Clustering Algorithm". International Journal of Computational Geometry & Applications. 17 (1): 71–103. doi:10.1142/S0218195907002252. ISSN 0218-1959.