The history of synthetic-aperture radar begins in 1951 with the invention of the technology by the mathematician Carl A. Wiley, and continues with its development in the following decade. Initially developed for military use, the technology has since been applied in the field of planetary science.
Carl A. Wiley, [1] a mathematician at Goodyear Aircraft Company in Litchfield Park, Arizona, invented synthetic-aperture radar in June 1951 while working on a correlation guidance system for the Atlas ICBM program. [2] In early 1952, Wiley, together with Fred Heisley and Bill Welty, constructed a concept validation system known as DOUSER ("Doppler Unbeamed Search Radar"). During the 1950s and 1960s, Goodyear Aircraft (later Goodyear Aerospace) introduced numerous advancements in SAR technology, many of them with help from Don Beckerleg. [3]
Independently of Wiley's work, experimental trials in early 1952 by Sherwin and others at the University of Illinois' Control Systems Laboratory showed results that they pointed out "could provide the basis for radar systems with greatly improved angular resolution" and might even lead to systems capable of focusing at all ranges simultaneously. [4]
In both of those programs, processing of the radar returns was done by electrical-circuit filtering methods. In essence, signal strength in isolated discrete bands of Doppler frequency defined image intensities that were displayed at matching angular positions within proper range locations. When only the central (zero-Doppler band) portion of the return signals was used, the effect was as if only that central part of the beam existed. That led to the term Doppler Beam Sharpening. Displaying returns from several adjacent non-zero Doppler frequency bands accomplished further "beam-subdividing" (sometimes called "unfocused radar", though it could have been considered "semi-focused"). Wiley's patent, applied for in 1954, still proposed similar processing. The bulkiness of the circuitry then available limited the extent to which those schemes might further improve resolution.
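In modern digital terms, the band-filtering described above is equivalent to a discrete Fourier transform over slow time. The following Python sketch illustrates the principle only; the pulse count, PRF, and Doppler frequencies are assumed values, not parameters of the historical systems.

```python
import numpy as np

# Doppler beam sharpening, sketched digitally (the 1950s systems used
# analog band-pass filter banks rather than an FFT). All values assumed.
n_pulses = 64                         # slow-time samples as targets cross the beam
prf = 1000.0                          # pulse repetition frequency, Hz (assumed)
slow_time = np.arange(n_pulses) / prf

# Simulated returns from one range bin: two scatterers at different
# cross-track angles give two different, nearly constant Doppler shifts.
returns = (np.exp(2j * np.pi * 125.0 * slow_time)
           + 0.5 * np.exp(2j * np.pi * -93.75 * slow_time))

# Sorting into discrete Doppler bands is a DFT over slow time; each band
# corresponds to an angular position within the real antenna beam.
spectrum = np.fft.fftshift(np.fft.fft(returns))
bands = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))

# Signal strength in each band sets image intensity at the matching angle.
for freq, amp in zip(bands, np.abs(spectrum)):
    if amp > n_pulses / 4:            # report only the strong bands
        print(f"Doppler band {freq:+8.2f} Hz -> amplitude {amp:.1f}")
```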
The principle was included in a memorandum [5] authored by Walter Hausz of General Electric that was part of the then-secret report of a 1952 Dept. of Defense summer study conference called TEOTA ("The Eyes of the Army"), [6] which sought to identify new techniques useful for military reconnaissance and technical intelligence gathering. A follow-on summer program in 1953 at the University of Michigan, called Project Wolverine, identified several of the TEOTA subjects, including Doppler-assisted sub-beamwidth resolution, as research efforts to be sponsored by the Department of Defense (DoD) at various academic and industrial research laboratories. In that same year, the Illinois group produced a "strip-map" image exhibiting a considerable amount of sub-beamwidth resolution.
A more advanced focused-radar project was among several remote sensing schemes assigned in 1953 to Project Michigan, a tri-service-sponsored (Army, Navy, Air Force) program at the University of Michigan's Willow Run Research Center (WRRC), that program being administered by the Army Signal Corps. Initially called the side-looking radar project, it was carried out by a group first known as the Radar Laboratory and later as the Radar and Optics Laboratory. It proposed to take into account, not just the short-term existence of several particular Doppler shifts, but the entire history of the steadily varying shifts from each target as the latter crossed the beam. An early analysis by Dr. Louis J. Cutrona, Weston E. Vivian, and Emmett N. Leith of that group showed that such a fully focused system should yield, at all ranges, a resolution equal to the width (or, by some criteria, the half-width) of the real antenna carried on the radar aircraft and continually pointed broadside to the aircraft's path. [7]
The required data processing amounted to calculating cross-correlations of the received signals with samples of the forms of signals to be expected from unit-amplitude sources at the various ranges. At that time, even large digital computers had capabilities somewhat near the levels of today's four-function handheld calculators, hence were nowhere near able to do such a huge amount of computation. Instead, the device for doing the correlation computations was to be an optical correlator.
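In present-day digital terms, that computation is a matched-filter cross-correlation. The short Python sketch below shows the principle; the wavelength, speed, range, and PRF are assumed illustrative values, not those of the historical equipment.

```python
import numpy as np

# Azimuth compression by cross-correlation with the expected return from
# a unit-amplitude point source at a given range. All values assumed.
wavelength = 0.03          # radar wavelength, m (assumed X-band)
velocity = 100.0           # platform speed, m/s (assumed)
r0 = 10_000.0              # range to the scatterer, m (assumed)
prf = 500.0                # pulse repetition frequency, Hz (assumed)

t = np.arange(-256, 256) / prf                 # slow time across the beam
# Two-way phase history of a point target: quadratic (chirp) approximation.
phase = -4 * np.pi / wavelength * (r0 + (velocity * t) ** 2 / (2 * r0))
reference = np.exp(1j * phase)                 # expected unit-amplitude form

echo = 0.7 * reference                         # noiseless toy echo
# Correlation compresses the long chirp into a sharp peak at the target's
# along-track position (near the array centre for this zero-offset case).
compressed = np.correlate(echo, reference, mode="same")
print("peak at sample", int(np.abs(compressed).argmax()), "of", len(t))
```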
It was proposed that signals received by the traveling antenna and coherently detected be displayed as a single range-trace line across the diameter of the face of a cathode-ray tube, the line's successive forms being recorded as images projected onto a film traveling perpendicular to the length of that line. The information on the developed film was to be subsequently processed in the laboratory on equipment still to be devised as a principal task of the project. In the initial processor proposal, an arrangement of lenses was expected to multiply the recorded signals point-by-point with the known signal forms by passing light successively through both the signal film and another film containing the known signal pattern. The subsequent summation, or integration, step of the correlation was to be done by converging appropriate sets of multiplication products by the focusing action of one or more spherical and cylindrical lenses. The processor was to be, in effect, an optical analog computer performing large-scale scalar arithmetic calculations in many channels (with many light "rays") at once. Ultimately, two such devices would be needed, their outputs to be combined as quadrature components of the complete solution.
A desire to keep the equipment small had led to recording the reference pattern on 35 mm film. Trials promptly showed that the patterns on the film were so fine as to show pronounced diffraction effects that prevented sharp final focusing. [8]
That led Leith, a physicist who was devising the correlator, to recognize that those effects in themselves could, by natural processes, perform a significant part of the needed processing, since along-track strips of the recording operated like diametrical slices of a series of circular optical zone plates. Any such plate performs somewhat like a lens, each plate having a specific focal length for any given wavelength. The recording that had been treated as scalar became recognized as pairs of opposite-signed vector quantities of many spatial frequencies plus a zero-frequency "bias" quantity. The needed correlation summation changed from a pair of scalar ones to a single vector one.
Each zone plate strip has two equal but oppositely signed focal lengths, one real, where a beam through it converges to a focus, and one virtual, from which another beam appears to have diverged, beyond the other face of the zone plate. The zero-frequency (DC bias) component has no focal point, but overlays both the converging and diverging beams. The key to obtaining, from the converging wave component, focused images that are not overlaid with unwanted haze from the other two is to block the latter, allowing only the wanted beam to pass through a properly positioned frequency-band-selecting aperture.
Each radar range yields a zone plate strip with a focal length proportional to that range. This fact became a principal complication in the design of optical processors. Consequently, technical journals of the time contain a large volume of material devoted to ways for coping with the variation of focus with range.
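In modern notation, the relation can be sketched as follows (a standard back-of-the-envelope account rather than the project's own derivation; s denotes the film demagnification, λ_r the radar wavelength, and λ_ℓ the wavelength of the processing light). A point target at range R and along-track offset x has two-way path and recorded phase

\[
2\sqrt{R^{2}+x^{2}} \;\approx\; 2R+\frac{x^{2}}{R},
\qquad
\phi(x) \;\approx\; \frac{2\pi}{\lambda_{r}}\left(2R+\frac{x^{2}}{R}\right),
\]

a quadratic (Fresnel zone-plate) pattern whose optical focal length, after recording at scale s, is

\[
f \;=\; \frac{s^{2}\,\lambda_{r}}{2\,\lambda_{\ell}}\,R \;\propto\; R,
\]

so each range line of the film focuses at its own distance, which is precisely the variation the optical processors had to accommodate.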
For that major change in approach, the light used had to be both monochromatic and coherent, properties that were already a requirement on the radar radiation. With lasers then still in the future, the best then-available approximation to a coherent light source was the output of a mercury vapor lamp, passed through a color filter matched to the lamp spectrum's green band, and then concentrated as well as possible onto a very small beam-limiting aperture. While the resulting amount of light was so weak that very long exposure times had to be used, a workable optical correlator was assembled in time to be used when appropriate data became available.
Although creating that radar was a more straightforward task based on already-known techniques, that work did demand the achievement of signal linearity and frequency stability that were at the extreme state of the art. An adequate instrument was designed and built by the Radar Laboratory and was installed in a C-46 (Curtiss Commando) aircraft. Because the aircraft was bailed to WRRC by the U. S. Army and was flown and maintained by WRRC's own pilots and ground personnel, it was available for many flights at times matching the Radar Laboratory's needs, a feature important for allowing frequent re-testing and "debugging" of the continually developing complex equipment. By contrast, the Illinois group had used a C-46 belonging to the Air Force and flown by AF pilots only by pre-arrangement, resulting, in the eyes of those researchers, in limitation to a less-than-desirable frequency of flight tests of their equipment, hence a low bandwidth of feedback from tests. (Later work with newer Convair aircraft continued the Michigan group's local control of flight schedules.)
Michigan's chosen 5-foot (1.5 m)-wide World War II-surplus antenna was theoretically capable of 5-foot (1.5 m) resolution, but data from only 10% of the beamwidth was used at first, the goal at that time being to demonstrate 50-foot (15 m) resolution. It was understood that finer resolution would require the added development of means for sensing departures of the aircraft from an ideal heading and flight path, and for using that information for making needed corrections to the antenna pointing and to the received signals before processing. After numerous trials in which even small atmospheric turbulence kept the aircraft from flying straight and level enough for good 50-foot (15 m) data, one pre-dawn flight in August 1957 [9] yielded a map-like image of the Willow Run Airport area which did demonstrate 50-foot (15 m) resolution in some parts of the image, whereas the illuminated beam width there was 900 feet (270 m). Although the program had been considered for termination by DoD due to what had seemed to be a lack of results, that first success ensured further funding to continue development leading to solutions to those recognized needs.
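The figures quoted here are mutually consistent under standard SAR scaling. As a rough check (using the antenna-width resolution criterion from the analysis cited earlier, with f the fraction of the beamwidth whose data were actually processed):

\[
\delta_{\text{az}}^{\,\text{full}} \;\approx\; D \;=\; 5\ \text{ft},
\qquad
\delta_{\text{az}} \;\approx\; \frac{D}{f} \;=\; \frac{5\ \text{ft}}{0.10} \;=\; 50\ \text{ft}.
\]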
The SAR principle was first acknowledged publicly via an April 1960 press release about the U. S. Army experimental AN/UPD-1 system, which consisted of an airborne element made by Texas Instruments and installed in a Beech L-23D aircraft and a mobile ground data-processing station made by WRRC and installed in a military van. At the time, the nature of the data processor was not revealed. A technical article in the journal of the IRE (Institute of Radio Engineers) Professional Group on Military Electronics in February 1961 [10] described the SAR principle and both the C-46 and AN/UPD-1 versions, but did not tell how the data were processed, nor that the UPD-1's maximum resolution capability was about 50 feet (15 m). However, the June 1960 issue of the IRE Professional Group on Information Theory had contained a long article [11] on "Optical Data Processing and Filtering Systems" by members of the Michigan group. Although it did not refer to the use of those techniques for radar, readers of both journals could quite easily understand the existence of a connection between articles sharing some authors.
An operational system to be carried in a reconnaissance version of the F-4 "Phantom" aircraft was quickly devised and was used briefly in Vietnam, where it failed to favorably impress its users, due to the combination of its low resolution (similar to the UPD-1's), the speckled nature of its coherent-wave images (similar to the speckle of laser images), and the poorly understood dissimilarity of its range/cross-range images from the angle/angle optical ones familiar to military photo interpreters. The lessons it provided were well learned by subsequent researchers, operational system designers, image-interpreter trainers, and the DoD sponsors of further development and acquisition.
In subsequent work the technique's latent capability was eventually achieved. That work, depending on advanced radar circuit designs and precision sensing of departures from ideal straight flight, along with more sophisticated optical processors using laser light sources and specially designed very large lenses made from remarkably clear glass, allowed the Michigan group to advance system resolution, at about 5-year intervals, first to 15 feet (4.6 m), then 5 feet (1.5 m), and, by the mid-1970s, to 1 foot (0.3 m) (the latter only over very short range intervals while processing was still being done optically). The latter levels and the associated very wide dynamic range proved suitable for identifying many objects of military concern as well as soil, water, vegetation, and ice features being studied by a variety of environmental researchers having security clearances allowing them access to what was then classified imagery. Similarly improved operational systems soon followed each of those finer-resolution steps.
Even the 5-foot (1.5 m) resolution stage had overtaxed the ability of cathode-ray tubes (limited to about 2000 distinguishable items across the screen diameter) to deliver fine enough details to signal films while still covering wide range swaths, and taxed the optical processing systems in similar ways. However, at about the same time, digital computers finally became capable of doing the processing without similar limitation, and the consequent presentation of the images on cathode-ray-tube monitors instead of film allowed for better control over tonal reproduction and for more convenient image mensuration.
Achievement of the finest resolutions at long ranges was aided by adding the capability to swing a larger airborne antenna so as to more strongly illuminate a limited target area continually while collecting data over several degrees of aspect, removing the previous limitation of resolution to the antenna width. This was referred to as the spotlight mode, which no longer produced continuous-swath images but, instead, images of isolated patches of terrain.
It was understood very early in SAR development that the extremely smooth orbital path of an out-of-the-atmosphere platform made it ideally suited to SAR operation. Early experience with artificial earth satellites had also demonstrated that the Doppler frequency shifts of signals traveling through the ionosphere and atmosphere were stable enough to permit very fine resolution to be achievable even at ranges of hundreds of kilometers. [12] The first spaceborne SAR images of Earth were demonstrated by a project now referred to as Quill (declassified in 2012). [13]
Several of the capabilities needed for creating useful classified systems did not exist for another two decades after the initial work began. That seemingly slow rate of advance was often paced by the progress of other inventions, such as the laser, the digital computer, circuit miniaturization, and compact data storage. Once the laser appeared, optical data processing became a fast process because it provided many parallel analog channels, but devising optical chains suited to matching signal focal lengths to ranges proceeded by many stages and turned out to call for some novel optical components. Since the process depended on diffraction of light waves, it required anti-vibration mountings, clean rooms, and highly trained operators. Even at its best, its use of CRTs and film for data storage placed limits on the range depth of images.
At several stages, attaining the frequently over-optimistic expectations for digital computation equipment proved to take far longer than anticipated. For example, the SEASAT system was ready to orbit before its digital processor became available, so a quickly assembled optical recording and processing scheme had to be used to obtain timely confirmation of system operation. In 1978, the first digital SAR processor was developed by the Canadian aerospace company MacDonald Dettwiler (MDA). [14] When its digital processor was finally completed and used, the digital equipment of that time took many hours to create one swath of image from each run of a few seconds of data. [15] Still, while that was a step down in speed, it was a step up in image quality. Modern methods now provide both high speed and high quality.
Highly accurate data can be collected by aircraft overflying the terrain in question. In the 1980s, as a prototype for instruments to be flown on the NASA Space Shuttles, NASA operated a synthetic aperture radar on a NASA Convair 990. In 1986, this plane caught fire on takeoff. In 1988, NASA rebuilt a C, L, and P-band SAR to fly on the NASA DC-8 aircraft. Called AIRSAR, it flew missions at sites around the world until 2004. Another such aircraft, the Convair 580, was flown by the Canada Centre for Remote Sensing until about 1996, when it was handed over to Environment Canada for budgetary reasons. Most land-surveying applications are now carried out by satellite observation. Satellites such as ERS-1/2, JERS-1, Envisat ASAR, and RADARSAT-1 were launched explicitly to carry out this sort of observation. Their capabilities differ, particularly in their support for interferometry, but all have collected tremendous amounts of valuable data. The Space Shuttle also carried synthetic aperture radar equipment during the SIR-A and SIR-B missions during the 1980s, the Shuttle Radar Laboratory (SRL) missions in 1994 and the Shuttle Radar Topography Mission in 2000.
The Venera 15 and Venera 16 probes, followed later by the Magellan space probe, mapped the surface of Venus over several years using synthetic aperture radar.
Synthetic aperture radar was first used by NASA on JPL's Seasat oceanographic satellite in 1978 (this mission also carried an altimeter and a scatterometer); it was later developed more extensively on the Spaceborne Imaging Radar (SIR) missions on the space shuttle in 1981, 1984, and 1994. The Cassini mission to Saturn used SAR to map the surface of the planet's largest moon, Titan, whose surface is partly hidden from direct optical inspection by atmospheric haze. The SHARAD sounding radar on the Mars Reconnaissance Orbiter and the MARSIS instrument on Mars Express have observed bedrock beneath the surface of the Martian polar ice and have also indicated the likelihood of substantial water ice in the Martian middle latitudes. The Lunar Reconnaissance Orbiter, launched in 2009, carries a SAR instrument called Mini-RF, which was designed largely to look for water ice deposits on the poles of the Moon.
The Mineseeker Project is designing a system for determining whether regions contain landmines based on a blimp carrying ultra-wideband synthetic aperture radar. Initial trials show promise; the radar is able to detect even buried plastic mines.
The National Reconnaissance Office maintains a fleet of (now declassified) synthetic aperture radar satellites commonly designated as Lacrosse or Onyx.
In February 2009, the Sentinel R1 surveillance aircraft entered service with the RAF, equipped with the SAR-based Airborne Stand-Off Radar (ASTOR) system.
The German Armed Forces' (Bundeswehr) military SAR-Lupe reconnaissance satellite system has been fully operational since 22 July 2008.
As of January 2021, multiple commercial companies have started launching constellations of satellites for collecting SAR imagery of Earth. [16]
The Alaska Satellite Facility provides production, archiving, and distribution of SAR data products and tools from active and past missions to the scientific community, including the June 2013 release of newly processed, 35-year-old Seasat SAR imagery.
The Center for Southeastern Tropical Advanced Remote Sensing (CSTARS) downlinks and processes SAR data (as well as other data) from a variety of satellites and supports the University of Miami Rosenstiel School of Marine and Atmospheric Science. CSTARS also supports disaster relief operations, oceanographic and meteorological research, and port and maritime security research projects.
Radar is a radiolocation system that uses radio waves to determine the distance (ranging), angle (azimuth), and radial velocity of objects relative to the site. It is used to detect and track aircraft, ships, spacecraft, guided missiles, and motor vehicles, and to map weather formations and terrain. A radar system consists of a transmitter producing electromagnetic waves in the radio or microwave domain, a transmitting antenna, a receiving antenna, and a receiver and processor to determine properties of the objects. Radio waves from the transmitter reflect off the objects and return to the receiver, giving information about the objects' locations and speeds.
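As a minimal illustration of the ranging principle (a sketch with arbitrary example values, not the parameters of any particular system):

```python
# Radar ranging: target distance follows from the round-trip echo delay.
C = 299_792_458.0                   # speed of light, m/s

def range_from_delay(delay_s: float) -> float:
    """Target range in metres for a measured round-trip delay in seconds."""
    return C * delay_s / 2          # halve: the wave travels out and back

print(range_from_delay(66.7e-6))    # a 66.7 microsecond echo -> ~10 km
```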
In antenna theory, a phased array usually means an electronically scanned array, a computer-controlled array of antennas which creates a beam of radio waves that can be electronically steered to point in different directions without moving the antennas. The general theory of an electromagnetic phased array also finds applications in ultrasonic and medical imaging and, in optics, in optical phased arrays.
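A minimal sketch of the steering computation for a uniform linear array; the element count, spacing, frequency, and steering angle below are assumed example values:

```python
import numpy as np

# Electronic beam steering: a linear phase gradient across the elements
# tilts the radiated wavefront without moving the antennas. Values assumed.
n_elements = 8
freq = 10e9                              # 10 GHz carrier (example)
wavelength = 3e8 / freq
spacing = wavelength / 2                 # half-wavelength element spacing
steer_deg = 20.0                         # desired beam angle off boresight

k = 2 * np.pi / wavelength               # wavenumber
element_positions = spacing * np.arange(n_elements)
phases = -k * element_positions * np.sin(np.radians(steer_deg))
print(np.degrees(phases) % 360)          # per-element phase settings, degrees
```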
A Doppler radar is a specialized radar that uses the Doppler effect to produce velocity data about objects at a distance. It does this by bouncing a microwave signal off a desired target and analyzing how the object's motion has altered the frequency of the returned signal. This variation gives direct and highly accurate measurements of the radial component of a target's velocity relative to the radar. The term applies to radar systems in many domains, such as aviation, police speed guns, navigation, and meteorology.
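A minimal sketch of the underlying relation; the carrier frequency and target speed are assumed example values:

```python
C = 3e8                          # speed of light, m/s (approximate)

def doppler_shift(radial_velocity_mps: float, carrier_hz: float) -> float:
    """Two-way Doppler shift in Hz; positive for an approaching target."""
    return 2 * radial_velocity_mps * carrier_hz / C

# A target closing at 30 m/s seen by a 10 GHz radar shifts the return by 2 kHz.
print(doppler_shift(30.0, 10e9))
```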
Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional stationary beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side looking airborne radar (SLAR). The distance the SAR device travels over a target during the period when the target scene is illuminated creates the large synthetic antenna aperture. Typically, the larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical or synthetic – this allows SAR to create high-resolution images with comparatively small physical antennas. For a fixed antenna size and orientation, objects which are further away remain illuminated longer – therefore SAR has the property of creating larger synthetic apertures for more distant objects, which results in a consistent spatial resolution over a range of viewing distances.
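The range independence noted above follows from a standard back-of-the-envelope argument (a sketch, not a rigorous derivation): a real antenna of length D illuminates a footprint, and hence can synthesize an aperture, of along-track extent \(L_{\text{syn}} \approx \lambda R / D\), and the two-way resolution of that synthetic aperture is

\[
\delta_{\text{az}} \;\approx\; \frac{\lambda R}{2\,L_{\text{syn}}}
\;=\; \frac{\lambda R}{2\,\lambda R / D}
\;=\; \frac{D}{2},
\]

independent of both the range R and the wavelength λ.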
Imaging radar is an application of radar which is used to create two-dimensional images, typically of landscapes. Imaging radar provides its own illumination, lighting an area on the ground and taking a picture at radio wavelengths. It uses an antenna and digital computer storage to record its images. In a radar image, one can see only the energy that was reflected back towards the radar antenna. The radar moves along a flight path and the area illuminated by the radar, or footprint, is moved along the surface in a swath, building the image as it does so.
Satellite geodesy is geodesy by means of artificial satellites—the measurement of the form and dimensions of Earth, the location of objects on its surface and the figure of the Earth's gravity field by means of artificial satellite techniques. It belongs to the broader field of space geodesy. Traditional astronomical geodesy is not commonly considered a part of satellite geodesy, although there is considerable overlap between the techniques.
A pulse-Doppler radar is a radar system that determines the range to a target using pulse-timing techniques, and uses the Doppler effect of the returned signal to determine the target object's velocity. It combines the features of pulse radars and continuous-wave radars, which were formerly separate due to the complexity of the electronics.
Space-based radar or spaceborne radar is a radar operating in outer space; orbiting radar is a radar in orbit and Earth orbiting radar is a radar in geocentric orbit. A number of Earth-observing satellites, such as RADARSAT, have employed synthetic aperture radar (SAR) to obtain terrain and land-cover information about the Earth.
Inverse synthetic-aperture radar (ISAR) is a radar technique using radar imaging to generate a two-dimensional high-resolution image of a target. It is analogous to conventional SAR, except that ISAR technology uses the movement of the target rather than the emitter to create the synthetic aperture. ISAR radars have a significant role aboard maritime patrol aircraft, providing a radar image of sufficient quality for target recognition. In situations where other radars display only a single unidentifiable bright moving pixel, the ISAR image is often adequate to discriminate between various missiles, military aircraft, and civilian aircraft.
SAR-Lupe is Germany's first reconnaissance satellite system and is used for military purposes. SAR is an abbreviation for synthetic-aperture radar, and "Lupe" is German for magnifying glass. The SAR-Lupe program consists of five identical (770 kg) satellites, developed by the German aeronautics company OHB-System, which are controlled by a ground station responsible for controlling the system and analysing the retrieved data. A large data archive of images is kept in a former Cold War bunker belonging to the Kommando Strategische Aufklärung of the Bundeswehr. The total price of the satellites was over 250 million euros.
TerraSAR-X is an imaging radar Earth observation satellite, a joint venture carried out under a public-private partnership between the German Aerospace Center (DLR) and EADS Astrium. The exclusive commercial exploitation rights are held by the geo-information service provider Astrium. TerraSAR-X was launched on 15 June 2007 and has been in operational service since January 2008. With its twin satellite TanDEM-X, launched 21 June 2010, TerraSAR-X acquires the data basis for the WorldDEM, a worldwide and homogeneous DEM available from 2014.
Interferometric synthetic aperture radar, abbreviated InSAR, is a radar technique used in geodesy and remote sensing. This geodetic method uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft. The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability.
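A minimal sketch of the core relation, assuming a C-band wavelength of about 5.6 cm (an illustrative value):

```python
import math

WAVELENGTH = 0.056               # assumed C-band radar wavelength, m

def los_displacement(delta_phase_rad: float) -> float:
    """Line-of-sight displacement (m) for an unwrapped interferometric
    phase change; the factor 4*pi reflects the two-way path."""
    return WAVELENGTH * delta_phase_rad / (4 * math.pi)

# One full fringe (2*pi) corresponds to half a wavelength of motion: 2.8 cm.
print(los_displacement(2 * math.pi))
```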
Radar engineering details are technical details pertaining to the components of a radar and their ability to detect the return energy from moving scatterers, and thereby determine an object's position or obstruction in the environment. This includes field of view in terms of solid angle and maximum unambiguous range and velocity, as well as angular, range, and velocity resolution. Radar sensors are classified by application, architecture, radar mode, platform, and propagation window.
Radar MASINT is a subdiscipline of measurement and signature intelligence (MASINT) and refers to intelligence gathering activities that bring together disparate elements that do not fit within the definitions of signals intelligence (SIGINT), imagery intelligence (IMINT), or human intelligence (HUMINT).
Quill was an experimental United States National Reconnaissance Office (NRO) program of the 1960s, which provided the first images of Earth from space using a synthetic aperture radar (SAR). Radar-imaging spacecraft of this design were not intended to be deployed operationally, since it was known that this system's resolution, inferior to that of concurrent experimental airborne systems, would not serve that purpose. Instead, the program's predominant goal was to show whether the propagation of radar waves through a large volume of the atmosphere and ionosphere would dangerously degrade the performance of the synthetic aperture feature.
The NASA-ISRO Synthetic Aperture Radar (NISAR) mission is a joint project between NASA and ISRO to co-develop and launch a dual-frequency synthetic aperture radar on an Earth observation satellite. The satellite will be the first radar imaging satellite to use dual frequencies. It will be used for remote sensing, to observe and understand natural processes on Earth. For example, its left-facing instruments will study the Antarctic cryosphere. With a total cost estimated at US$1.5 billion, NISAR is likely to be the world's most expensive Earth-imaging satellite.
High Resolution Wide Swath (HRWS) imaging is an important branch in synthetic aperture radar (SAR) imaging, a remote sensing technique capable of providing high resolution images independent of weather conditions and sunlight illumination. This makes SAR very attractive for the systematic observation of dynamic processes on the Earth's surface, which is useful for environmental monitoring, earth resource mapping and military systems.
The railSAR, also known as the ultra-wideband Foliage Penetration Synthetic Aperture Radar, is a rail-guided, low-frequency impulse radar system that can detect and discern target objects hidden behind foliage. It was designed and developed by the U.S. Army Research Laboratory (ARL) in the early 1990s in order to demonstrate the capabilities of an airborne SAR for foliage and ground penetration. However, since conducting accurate, repeatable measurements on an airborne platform was both challenging and expensive, the railSAR was built on the rooftop of a four-story building within the Army Research Laboratory compound along a 104-meter laser-leveled track.
The boomSAR is a mobile ultra-wideband synthetic aperture radar system designed by the U.S. Army Research Laboratory (ARL) in the mid-1990s to detect buried landmines and IEDs. Mounted atop a 45-meter telescoping boom on a stable moving vehicle, the boomSAR transmits low-frequency short-pulse UWB signals over the side of the vehicle to scope out a 300-meter range area starting 50 meters from the base of the boom. It travels at approximately 1 km/h and requires a relatively flat road that is wide enough to accommodate its 18-foot (5.5 m)-wide base.
RISAT-2B, or Radar Imaging Satellite-2B, is an Indian radar reconnaissance satellite that is part of India's RISAT programme and the third satellite in the series. It was built by the Indian Space Research Organisation (ISRO) to replace RISAT-2.
Finland's Iceye operates five X-band SAR satellites. San Francisco-based Capella Space began releasing imagery in October from its first operational satellite, Sequoia, which also operates in X-band. Spacety plans to build, launch and operate a constellation of 56 small SAR satellites.